Microsoft Orleans #IMO


IMO, Microsoft Orleans is a framework and implementation for developing highly distributed, concurrent applications. It can also be seen as an actor framework coated in a layer of developer friendliness (the so-called developer productivity).

Microsoft Orleans is built around the notion of virtual actors and is optimized for the cloud. In production, Orleans is deployed both in Azure cloud services and on premises. IMO – a production implementation of Orleans is quite challenging compared to Akka.NET implementations.

The idea behind the developer friendliness can be confusing: it is achieved by abstracting away and hiding plenty of the underlying concepts of the actor model, and in some areas Orleans breaks with the actor model altogether. In that sense, one can argue that Orleans is not an actor framework at all. So if you're a person who is into details, you might find it a bit less involving, but if you're a developer who wants to create a quick solution for a burning business issue, this is fine.

I highly recommend reading about the actor model and how it works before getting into Orleans, as this gives a clearer picture of Orleans and how it is implemented.

If I were to describe the fundamental difference between Orleans and Akka.NET from a developer-learning perspective, it is much like the difference between Java and C#. Java is pure on object-oriented programming (recent versions are quite different; if you have used Java 1.5/1.6 you'd understand) and a good tool for learning the real concepts of OOP. C# has OOP features but is not a strict follower of them; on top of the OOP concepts it goes beyond the OOP language constructs in order to achieve developer friendliness and productivity.

Orleans is an innovation derived from the actor model concept, whereas Akka.NET is more of a direct mapping of the actor model into an implementation.

Distributed Transactions in Azure SQL Databases – Azure App Service and EF


Are you handling more than one SQL database in Azure for your application? Most of the time the answer would be YES. In dedicated-database multi-tenant systems, at the very least you have your customer information in a master database and a dedicated application database for each customer. Some CRUD operations need to touch both the master and the customer-specific databases.

We need MSDTC (Microsoft Distributed Transaction Coordinator) for distributed transactions in on-premises systems, but Azure SQL Database has the elastic distributed transaction feature, and with .NET 4.6.1 we can use it via the TransactionScope class from System.Transactions.
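As a rough sketch of the pattern (MasterContext, TenantContext, Customer and Order are hypothetical EF types standing in for the real master/tenant models, not the code from the original project):

```csharp
using System.Transactions;

// Sketch only: MasterContext, TenantContext, Customer and Order are hypothetical.
public void SaveAcrossDatabases()
{
    // On .NET 4.6.1+ TransactionScope can use an elastic transaction across
    // the two Azure SQL databases instead of escalating to MSDTC.
    using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
    using (var masterDb = new MasterContext())
    using (var tenantDb = new TenantContext())
    {
        masterDb.Customers.Add(new Customer { Name = "Contoso" });
        masterDb.SaveChanges();

        tenantDb.Orders.Add(new Order { Description = "First order" });
        tenantDb.SaveChanges();

        // Both databases commit together; if anything throws, both roll back.
        scope.Complete();
    }
}
```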

This link explains how it works, but I wanted to test it with EF and Azure App Service, because Azure App Service offers .NET 4.6 as a target platform option and not 4.6.1.

I created two logical Azure SQL servers in two different regions, and enabled the transaction communication link between them using PowerShell.


Then I created a small Web API project targeting .NET 4.6.2 (which is higher than the required version) and tested the app from my local machine; things worked well. I deployed the same code and things worked fine in Azure as well.

Even though the target platform in Azure App Service is .NET 4.6, when we deploy .NET 4.6.1 or .NET 4.6.2 projects, the required assemblies of the respective platform version are referenced.

But my Swagger endpoint behaved strangely and didn't output the results; I have no idea why and will need to launch another investigation into that.

You can find the test project on my GitHub.

Conclusion – We can use distributed transactions in Azure SQL Database with EF and deploy projects written in .NET 4.6.1/4.6.2 to the Azure App Service platform targeting .NET 4.6.

Are you awaiting at the right place?


The C# language features async and await are very easy to use, straightforward, and available right out of the box in the .NET Framework. But it seems the idea behind async and await causes some confusion in implementations, especially around where you await in the code.

The asynchrony feature is usually promoted for responsiveness, but it can help boost the performance of your application as well. Most developers seem to miss this point.

Since most projects start with a Web API, let me start the discussion there. In a Web API action like the one below, marking the action method async means the IIS thread is not blocked until the end of the call and can return to the pool immediately, thus increasing the throughput of IIS.

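A minimal sketch of such an action (the controller name and the Task.Delay are placeholders standing in for a real I/O-bound call):

```csharp
using System;
using System.Threading.Tasks;
using System.Web.Http;

public class ValuesController : ApiController
{
    // async lets the IIS thread go back to the pool while the awaited
    // work is in flight, so it can serve other requests in the meantime.
    public async Task<IHttpActionResult> Get()
    {
        await Task.Delay(TimeSpan.FromSeconds(1)); // stand-in for a real I/O-bound call
        return Ok("done");
    }
}
```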

Whenever we have an async method, developers tend to await it immediately, right there. This makes sense when the rest of the code depends on the result of the call; otherwise it is not a wise option.

Assume we have an async operation like below.

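A sketch of such an operation, assuming the DoWork name and the six-second duration described in the text:

```csharp
// Simulates an I/O-bound operation that takes roughly six seconds.
private static async Task<int> DoWork()
{
    await Task.Delay(TimeSpan.FromSeconds(6));
    return 42;
}
```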

Say that you want to invoke the method twice.

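Reusing the DoWork sketch above, the sequential version would look roughly like this:

```csharp
public async Task<IHttpActionResult> Get()
{
    // Each call is awaited before the next one starts, so the two
    // six-second operations run back to back: roughly 12 seconds in total.
    var first = await DoWork();
    var second = await DoWork();

    return Ok(second);
}
```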

In the above code snippet, the method is asynchronous – the action method is marked async, so the IIS thread returns to the pool before completion and execution continues from the point where it left off when the awaited work finishes.

But the method does not gain much in performance; it would take 12+ seconds to complete, because it runs the first DoWork(), which takes 6 seconds, then the second DoWork(), which takes another 6 seconds, and finally returns.

Since the result of the first execution is not used or needed in the rest of the method, we don't have to await each call individually. We can execute them in parallel.

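A rough sketch of the parallel version, again reusing the DoWork sketch above:

```csharp
public async Task<IHttpActionResult> Get()
{
    // Start both operations without awaiting, so they run concurrently.
    var firstTask = DoWork();
    var secondTask = DoWork();

    // Await them together; total time is roughly 6 seconds instead of 12.
    await Task.WhenAll(firstTask, secondTask);

    return Ok(await secondTask);
}
```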

The above code starts both tasks, lets them run in parallel, and awaits them at the end of the method. This model would take only 6+ seconds.

Async and await are very powerful features of .NET; they help not only with responsiveness but also with performance and parallel execution. By placing your awaits carefully, you gain additional performance advantages.

The point of polyglot


Recently I spoke about polyglot persistence at one of the SQL Saturday events. The session revolved around the idea of not getting overwhelmed by the NoSQL boom, while at the same time understanding that modern application requirements demand features which side with NoSQL.

Enterprise application development is undergoing a more massive shift than ever before. Enterprises look for more consumer-application and social features in their enterprise software. Examples – a chat feature in a banking system, tag-based image search, heavy blob-handling features like bookmarking and read-resume state, and some go beyond the traditional limits with AI features backed by cognitive services.

NoSQL technologies would certainly help us in mapping, modeling, designing, and developing these applications. But the adoption of NoSQL technologies – how it happens and the mentality of the people involved – is quite interesting to observe.

In my opinion, two major concerns prevail in the industry around the adoption of NoSQL technology. They are:

  • NoSQL for no reason – People who believe that NoSQL is the way to go in every project: NoSQL is the ultimate savior, NoSQL replaces relational stores, the world no longer needs relational databases. I often hear complaints that a database table has more than 1 million rows, or the database has grown beyond 2 TB, or it is very slow, and therefore "we need to move to NoSQL".
  • Fear among traditional relational database people – People who have relational database skills, think those skills do not match the NoSQL world, and are afraid of it: NoSQL is an alien technology that is going to replace relational databases. Their fear is made worse by the group mentioned above who believe in NoSQL for no reason.

Both parties miss the big picture. The better option is to use the right technology based on the requirement – often by opting for polyglot persistence, a hybrid of relational and NoSQL technologies.

Let's call the decision point at which to make the move to polyglot persistence the point of polyglot. Below I present two real cases of polyglot persistence, focusing mainly on the stage at which it happened.

Scenario of moving to polyglot from relational only – A product used in banking risk analysis; it handles many transactions, with an Azure SQL Database running on the premium tier. A feature request came in that users should be able to create their own forms and collect data (custom surveys), so we needed to store the HTML of the survey template and the data filled in by the users. At this point we considered NoSQL, but we sided with relational: we stored the template as HTML and the data as JSON in the SQL Database. We made this decision because no search needed to be performed over it and the new feature seemed unlikely to be used frequently. Later, a feature-rich chat module came along, with the ability to send attachments and hold group conversations. That is the point at which we decided to use DocumentDB (Azure's document-oriented NoSQL store). The user-related data stays in SQL Database and the chat messages live in DocumentDB, leading to polyglot persistence.

Things to note: We were reluctant to move to NoSQL when the survey requirement came because, though a survey is dynamic during creation, it is very much static after creation, and we didn't want to add NoSQL just for one feature that is part of a bigger module. But we readily made the decision to use DocumentDB for chat, because it replaces an internal email system and is not a good candidate to model with a relational schema.

Scenario of moving to polyglot from NoSQL only – This is the backend service and persistence layer of an emerging mobile app, with loads of unstructured data about places and reviews. It started with Azure DocumentDB. Later the app expanded, and places and restaurants needed to be able to log in through a portal and adjust their payment plans for promotions. We needed to persist metadata and payment information – that's the point at which we set up an Azure SQL Database, and everything has been smooth.

Things to note: It's not that a NoSQL database cannot handle transaction/accounting information, but it is not a natural fit for reporting and auditing purposes.

As you can see, there's no strict rule on when one should decide to move to NoSQL or to a relational schema. I refer to this balance as the natural fit.

Drawing strict demarcations between relational and NoSQL doesn't help achieve the best outcome. It's hard to define the exact crossing point, but it is easy to look at the overall business case and decide.

The figure below shows the point of polyglot (author's concept).

[Figure: the point of polyglot]

Natural fit plays a major role in deciding the point of polyglot. That doesn't mean it always sits somewhere in the middle; it can be anywhere, depending on the product's features, roadmap, and the team's skills. Some products have polyglot persistence from the very beginning of the implementation.

Though the point of polyglot can be mapped as above, the implementation of polyglot persistence is influenced by two major factors – the cost of implementation and the available skills. The figure below shows the decision matrix (author's concept).

[Figure: the implementation decision matrix]

Conclusion – There are two groups of people with opposing mindsets about adopting either NoSQL or relational stores. At some point most projects will pass through the point of polyglot, but this is not the same as the implementation point. On the whole, the implementation decision is highly influenced by the decision matrix.

Integrating Azure Power BI Embedded in your DevOps



Before starting, this post answers the question: can we change the connection settings of Power BI Embedded reports? YES. Let's continue.

Azure Power BI Embedded is a great tool for developers to integrate reports and dashboards into their applications. You can read about what Azure Power BI Embedded is and how to use it in your application in the documentation.

The moment a feature is supported, the question arises of how to fit it into the developer pipeline, or the modern automated CI/CD DevOps process.

This article focuses on how to develop and deploy Power BI Embedded applications in your automated CI/CD DevOps pipeline.

In order to make this work, we need to answer the burning question: can we change the connection settings of a published report in Power BI Embedded? The answer is YES.

The Power BI Embedded SDK supports this. In Azure Power BI Embedded we have a workspace collection, which is a container for workspaces. We publish our reports inside a workspace, which in turn is the container for our reports (.pbix files).

In the SDK, a report consists of two components: the DataSet object and the Report object. The DataSet has operations to update the connection settings – bingo, that's what we're looking for.

[Diagram: Power BI Embedded hierarchy]

 

Automation

Since we can change the DataSet configuration using the Power BI SDK, we can automate this process with a small command-line utility in our TFBuild process.

Each application is different and there's no solid rule on how to do this, but below is a simple process you can follow.

The developer creates the reports on his/her development machine, connecting to the development database. Then the developer adds the .pbix files to the Visual Studio solution, similar to what is shown in the image below.

[Screenshot: .pbix report files added to the Visual Studio solution]

The command-line utility adds the reports to the specified workspace, sets the DataSet connection properties, and updates the application database with the IDs.

In my TFBuild definition I've added this as a build step and specified the required arguments.

[Screenshot: the command-line build step and its arguments in the TFBuild definition]

Now my command-line utility does the work for me whenever I check in. In the same way, we can maintain the CI/CD pipeline across multiple environments. The image below shows the idea.

[Diagram: the CI/CD pipeline across multiple environments]

When the command-line utility runs, it takes care of my reports and updates the connection settings.


 

Code Snippet for updating connection string

You can find this code in the Power BI Embedded sample on GitHub. Check lines 212 to 236 in this file: https://github.com/Azure-Samples/power-bi-embedded-integrate-report-into-web-app/blob/master/ProvisionSample/Program.cs

Detailing ASP.NET Core in Azure App Service


ASP.NET Core is the next-generation development standard for the .NET world – maybe that's how I like to express it. Every ASP.NET Core application is a DNX (.NET Execution Environment) application.

When you create an ASP.NET Core application in Visual Studio (VS) 2015, it creates the project targeting both the .NET Framework and .NET Core. You can see this under the frameworks section of the project.json file. The ASP.NET Core team recommends leaving this setting as it is (refer to https://docs.asp.net/), letting your application target both frameworks.

But in order to understand how things are deployed in Azure App Service, I compiled an ASP.NET Core application and published it to an Azure Web App. Then I browsed the app with the Kudu service, and the Process Explorer looked like this, showing that the ASP.NET Core app is running on DNX.

[Screenshot: Kudu Process Explorer showing the DNX process]

Under the Debug Console of the Kudu service, at the path site\approot\runtimes, we can see the shipped .NET Core runtime – the feature that makes ASP.NET Core applications self-contained.

[Screenshot: the runtimes folder under site\approot in the Kudu Debug Console]

All of this is hidden from developers, letting them focus on application development. So although the Visual Studio publishing model for an ASP.NET Core application is the same as for an ASP.NET application, based on the defined configuration Azure App Service hosts your web application under a different runtime.

Managing multiple sites with Azure Traffic Manager and deployments using TFBuilds


Introduction

Azure Traffic Manager is used to manage the traffic to your Internet resources and monitor the health of those resources. These resources may reside outside Azure as well.

This post focuses on Azure Web Apps: how to manage multiple web apps using Azure Traffic Manager, and how to handle deployments using the new TFBuild in VSTS.

Consider a scenario with two deployments of your site in different regions, one in South East Asia and the other in North Europe. The site is deployed in two places in order to serve the users from each region with minimum latency. Traffic Manager is put in place to determine the right endpoint and direct clients to it.

When a client first requests the site, the request hits the DNS; the DNS records map the URL to the Traffic Manager DNS name, and a lookup request is made against the corresponding Traffic Manager DNS.

The Traffic Manager DNS then returns the IP address of the right web app based on the configured routing rules. This IP address is given to the client, and subsequent requests from the client are sent directly to that IP address until the local DNS cache expires.

Setting up Traffic Manager

Create a Traffic Manager profile and you will get a URL like domainprefix.trafficmanager.net (I generated the sample below while sipping my iced tea and named the Traffic Manager mytea). When creating the Traffic Manager you configure the load-balancing mechanism. Here I simply chose Performance as my load-balancing mechanism, since I want to reduce the latency of the site based on the geographic region it is accessed from.

[Screenshot: creating the Traffic Manager profile with the Performance load-balancing method]

Then you add the web apps you want to manage to the Traffic Manager as endpoints. (Note: only web apps running in the Standard tier or above can be added to the Traffic Manager.)

I added two web apps, one in South East Asia and the other in North Europe, as you can see in the image below.

[Screenshot: the two web app endpoints in the Traffic Manager profile]

How does this work?

After creating the Traffic Manager profile (mytea.trafficmanager.net), you add the endpoints. When adding the endpoints, the Traffic Manager registers its URL as one of the domain names of the web apps in question, and the web app URLs are registered as CNAME entries of the Traffic Manager DNS.

[Screenshot: the Traffic Manager URL listed among the web app's domain names]

How does this work when you have a custom domain?

When you have a custom domain, for example abc.com, you register that domain in the section above and configure the Azure web app URL as a CNAME record in the abc.com domain. Now when you type abc.com in the browser you are served the site.

Put more simply, the DNS zone which holds the A record of abc.com should have a CNAME record pointing to the Azure web app.

When using the Traffic Manager, you register the Traffic Manager URL as a CNAME entry in the abc.com domain instead.

Managing deployments to multiple web apps

This has been one of the well-known and highly anticipated requirements of a CI/CD pipeline. With the new TFBuild introduced in Visual Studio Team Services it is very simple: you just add multiple deployment steps to your build definition and TFBuild takes care of the deployments.

The image below shows a build definition with two Azure Web App deployment steps.

[Screenshot: build definition with two Azure Web App deployment steps]

Testing

Now you can type the Traffic Manager URL in the browser with the http/https prefix and you will be served the site.

In order to check the performance routing per region, I changed the home page of the site deployed in North Europe. Then I browsed the site from a VM deployed in North Europe, and also from my local machine, whose physical location is closer to South East Asia.

[Screenshots: the home pages served from North Europe and from South East Asia]

You can see that two different sites are served based on the location from which I'm browsing.