Using Akka.NET with ASP.NET Core – Creating a Quiz API

This is a template and quick-start guide for using Akka.NET with ASP.NET Core. It covers the concepts of combining the two and shows how the Akka.NET actor model can be used in a simple quiz or survey scenario.

At the same time, this post does not cover the fundamentals of actor model programming or Akka.NET. It assumes you already understand the actor model and the basics of reactive programming, and have some practical experience with Akka.NET.

Scenario: A quiz engine has many quizzes, and users can attempt them. Each user can attempt as many quizzes as they like at the same time, so each user session is associated with one quiz, and one user can have many quiz sessions running concurrently. The simplest session key is the combination of quiz Id and user Id; this combination is unique and is referred to as the session Id. Each session is an actor.

A template actor provides the quiz templates during session creation; each session actor gets a fresh copy of the quiz when it is created.

The diagram below shows the actor system used in this scenario.

Akka.NET actor model for quiz engine

Step-by-step explanation

  • In the ASP.NET Core Startup class the actor system (QuizActorSystem) is instantiated.
  • QuizMasterActor is created in the context of QuizActorSystem, and the QuizActorSystem is added to the ASP.NET Core services collection to be consumed by the controllers (see the sketch below).
  • QuizMasterActor creates QuizSessionCoordinatorActor and QuizTemplateActor under its context.
  • For simplicity, the QuizController in ASP.NET Core has two actions.
    1. GetQuestion – This takes a session Id and a question Id. The controller asks QuizSessionCoordinatorActor for the session actor; if it already exists it is returned, otherwise QuizSessionCoordinatorActor creates a new session actor under its context. On initial creation, QuizSessionActor loads a fresh copy of the quiz from QuizTemplateActor and then returns the requested question. Subsequent requests are served directly by the QuizSessionActor.
    2. GetAnswer – This action method takes the session Id and the answer to a question and passes it to the right QuizSessionActor for the update.

The entire QuizSessionActor tree is created on the first request for a question under a specific session, which keeps the setup safe and straightforward.
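To make the wiring concrete, here is a minimal sketch of the Startup registration and the GetQuestion action, assuming the standard Akka.NET and ASP.NET Core APIs. The actor names come from the diagram above, while the coordinator path, the GetQuestionMessage/QuestionDto types, and the route are illustrative assumptions rather than the exact code from the repo.

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

// Startup.cs – create the actor system once and add it to the services collection.
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // QuizMasterActor creates QuizSessionCoordinatorActor and QuizTemplateActor
        // in its own context, as described above.
        var quizActorSystem = ActorSystem.Create("QuizActorSystem");
        quizActorSystem.ActorOf(Props.Create<QuizMasterActor>(), "quizMaster");

        services.AddSingleton(quizActorSystem);
    }
}

// QuizController.cs – resolve the coordinator by path and Ask it for a question.
// GetQuestionMessage and QuestionDto are hypothetical message/DTO types.
[Route("api/[controller]")]
public class QuizController : Controller
{
    private readonly ActorSystem _quizActorSystem;

    public QuizController(ActorSystem quizActorSystem)
    {
        _quizActorSystem = quizActorSystem;
    }

    [HttpGet("{sessionId}/questions/{questionId}")]
    public async Task<IActionResult> GetQuestion(string sessionId, int questionId)
    {
        // The coordinator creates (or reuses) the QuizSessionActor for this session,
        // and the session actor replies with the requested question.
        var coordinator = _quizActorSystem.ActorSelection("/user/quizMaster/sessionCoordinator");
        var question = await coordinator.Ask<QuestionDto>(
            new GetQuestionMessage(sessionId, questionId), TimeSpan.FromSeconds(5));

        return Ok(question);
    }
}
```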

You can download the source code from this GitHub repo.

Microsoft Orleans #IMO

IMO, Microsoft Orleans is a framework and an implementation for developing highly distributed, concurrent applications; it can also be considered an actor framework coated with a layer of developer friendliness (or so-called developer productivity).

Microsoft Orleans is said to be built on the concept of virtual actors and optimized for the cloud. In production, Orleans is deployed both in Azure cloud services and on premises. IMO, a production implementation of Orleans is quite challenging compared to Akka.NET implementations.

The idea behind this developer friendliness is somewhat confusing; it is achieved by abstracting and hiding plenty of the underlying concepts of the actor model, and in some areas Orleans deviates from the actor model outright. In that sense one can argue that Orleans is not an actor framework. If you are a person who is into the details, you might find it a bit less involved, but if you are a developer who wants to create a quick solution for a burning business issue, this is fine.

I highly recommend reading about the actor model and how it works before getting into Orleans, as this gives a clearer picture of Orleans and how it is implemented.

If I had to describe the fundamental difference between Orleans and Akka.NET from a developer-learning perspective, it is much like the difference between Java and C#. Java is pure object-oriented programming (recent versions are quite different; if you have used Java 1.5/1.6 you will understand) and a good tool for learning the real concepts of OOP. C# has OOP features but is not a strict follower of them; on top of the OOP concepts it goes beyond OOP language constructs in order to achieve developer friendliness and productivity.

Orleans is a derived innovation on top of the actor model concept, whereas Akka.NET is more of a direct implementation of the actor model.

Distributed Transactions in Azure SQL Databases – Azure App Service and EF

Are you handling more than one SQL database in Azure for your application? Most of the time the answer is YES. In dedicated-database multi-tenant systems, at a minimum you have customer information in a master database and a dedicated application database for each customer. Some CRUD operations need to touch both the master and the customer-specific databases.

We need MSDTC (Microsoft Distributed Transaction Coordinator) for distributed transactions in on-premises systems, but Azure SQL Database has the elastic database transactions feature enabled, and with .NET 4.6.1 we can use it via the TransactionScope class from System.Transactions.

This link explains how it works, but I wanted to test it with EF and Azure App Service, since Azure App Service offers .NET 4.6 as a target platform option but not 4.6.1.

I created two logical Azure SQL servers in two different regions, and enabled the transaction communication link between them using PowerShell.


Then I created a small Web API project targeting .NET 4.6.2 (higher than the required version) and tested the app from my local machine; everything worked well. I deployed the same code and it worked fine in Azure as well.
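The test code boiled down to something like the following sketch: two EF DbContexts pointing at databases on the two servers, wrapped in a single TransactionScope. The context and entity names here are hypothetical placeholders; with .NET 4.6.1+ and the communication link in place, the scope is promoted to an elastic distributed transaction automatically.

```csharp
using System.Transactions;

public class CustomerProvisioningService
{
    // Writes to two different Azure SQL databases inside one elastic distributed transaction.
    public void CreateCustomer(string name)
    {
        using (var scope = new TransactionScope())
        {
            // Hypothetical EF context pointing at the master database (server 1).
            using (var masterDb = new MasterDbContext())
            {
                masterDb.Customers.Add(new Customer { Name = name });
                masterDb.SaveChanges();
            }

            // Hypothetical EF context pointing at the customer database (server 2).
            using (var customerDb = new CustomerDbContext())
            {
                customerDb.AuditEntries.Add(new AuditEntry { Message = "Customer created: " + name });
                customerDb.SaveChanges();
            }

            // If Complete() is never called, both databases roll back together.
            scope.Complete();
        }
    }
}
```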

Even though the target platform in Azure App Service is .NET 4.6, when we deploy .NET 4.6.1 and .NET 4.6.2 projects, the required assemblies for the respective platform version are referenced.

But my Swagger endpoint behaved strangely and didn’t output the results; I have no idea why and will need to launch another investigation into that.

You can reference the test project from my GitHub.

Conclusion – We can use distributed transactions in Azure SQL Database with EF, and we can deploy projects written in .NET 4.6.1/4.6.2 to the Azure App Service platform even though it targets .NET 4.6.

Are you awaiting at the right place?

The C# language features async and await are very easy to use, straightforward, and available right out of the box in the .NET Framework. But the idea behind async and await often gets confused in implementations, especially around where you await in the code.

The asynchrony feature is best known for responsiveness, but it can help boost the performance of your application as well. Most developers seem to miss this point.

Since most projects start with a Web API, let me start the discussion there. In a Web API action like the one below, marking the action method async means the IIS thread is not blocked until the end of the call and can return immediately, thus increasing the throughput of IIS.
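The original snippet was an image; a representative sketch of such an action looks like this (ReportRepository.GetReportAsync is a hypothetical asynchronous data-access call):

```csharp
public class ReportsController : ApiController
{
    // Marking the action async releases the IIS thread while the awaited I/O is in flight,
    // instead of blocking it until the call completes.
    public async Task<IHttpActionResult> Get(int id)
    {
        var report = await ReportRepository.GetReportAsync(id); // hypothetical async data call
        return Ok(report);
    }
}
```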


Whenever we have an async method, developers tend to await it immediately, right where it is called. This makes sense when the rest of the code depends on the result of the call; otherwise it is not a wise option.

Assume we have an async operation like below.
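The original snippet is also an image, so here is a stand-in with the behaviour described in the text: an asynchronous DoWork method that takes roughly 6 seconds and returns a result we will not actually need.

```csharp
// Stand-in for a slow asynchronous operation (about 6 seconds).
private async Task<int> DoWork()
{
    await Task.Delay(TimeSpan.FromSeconds(6)); // simulates slow I/O
    return 42;
}
```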


Say that you want to invoke the method twice.
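Awaiting each call immediately, as discussed below, looks like this:

```csharp
public async Task<IHttpActionResult> Get()
{
    // Each call is awaited before the next one starts, so the two 6-second
    // operations run one after the other (roughly 12 seconds in total).
    await DoWork();
    await DoWork();

    return Ok();
}
```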


In the above code snippet the method is asynchronous – the action method is marked as async, so the IIS thread returns to the pool before completion and execution continues from the point where it left off when the response arrives.

But the method does not gain much in performance: it would take 12+ seconds to complete, because it awaits the first DoWork(), which takes 6 seconds, then the second DoWork(), which takes another 6 seconds, and finally returns.

Since the result of the first execution is not needed by the rest of the method, we don’t need to await each call individually. We can execute them in parallel.
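A version in that spirit starts both operations first and awaits them together:

```csharp
public async Task<IHttpActionResult> Get()
{
    // Start both operations without awaiting them individually...
    var firstTask = DoWork();
    var secondTask = DoWork();

    // ...then await them together, so they run concurrently (roughly 6 seconds in total).
    await Task.WhenAll(firstTask, secondTask);

    return Ok();
}
```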


The above code starts the tasks so they run in parallel and awaits them at the end of the method. This model would take 6+ seconds.

Async and await are very powerful features of .NET; they help not only with responsiveness but also with performance and parallel execution. By placing your awaits carefully, you gain more of the performance advantages.

The point of polyglot

Recently I spoke about polyglot persistence at one of the SQL Saturday events. The session revolved around not getting overwhelmed by the NoSQL boom, while at the same time understanding that modern application requirements demand more features that align with what NoSQL offers.

Enterprise application development is undergoing a bigger shift than ever before. Enterprises look for more consumer-application and social features in their enterprise software. Examples: a chat feature in a banking system, tag-based image search, and heavy blob-handling features like bookmarking and read-resume state; some go beyond traditional limits and add AI features with cognitive services.

NoSQL technologies would certainly help us map, model, design, and develop these applications. But the adoption of NoSQL technologies – how it happens and the mentality of the people involved – is quite interesting to watch.

In my opinion there are two major concerns prevailing in the industry about the adoption of NoSQL technology. They are:

  • NoSQL for no reason – People who believe that NoSQL is the way to go in every project: NoSQL is the ultimate savior, NoSQL replaces relational stores, the world no longer needs relational databases. I often hear complaints that a database table has more than 1 million rows, or the database has grown past 2 TB, or it is very slow, and therefore “we need to move this to NoSQL”.
  • Fear among traditional relational database people – People who have relational database skills, think those skills do not translate to the NoSQL world, and are afraid of it: NoSQL is an alien technology that is going to replace relational databases. Their fear is made worse by the group mentioned above, who believe in NoSQL for no reason.

Both parties miss the big picture. The better option is to use the right technology based on the requirement – often by opting for polyglot persistence, a hybrid of relational and NoSQL technologies.

Let’s call the decision point at which to make the move to polyglot persistence the point of polyglot. Below I present two real cases of polyglot persistence, focusing mainly on the stage at which it happened.

Scenario of moving to polyglot from relational only – A product used in banking risk analysis handles many transactions, with an Azure SQL database running on the premium tier. A feature arrived where users should be able to create their own forms and collect data (custom surveys), so we needed to store the HTML of the survey template and the data filled in by the users. At this point we considered NoSQL, but we sided with relational: we stored the template as HTML and the data as JSON in the SQL database. We made this decision because no search had to be performed over it and the new feature seemed unlikely to be used frequently. Later, a feature-rich chat module arrived, with the ability to send attachments and hold group conversations. That was the point at which we decided to use Document DB (an Azure-based document-type NoSQL store). The user-related data stays in SQL databases and the chat messages live in Document DB, leading to polyglot persistence.

Things to note: We were reluctant to move to NoSQL when the survey requirement came because, though a survey is dynamic during creation, it is very much static afterwards, and we didn’t want to add NoSQL just for one feature that is part of a bigger module. But we readily decided to use Document DB for chat, because it replaces an internal email system and is not a good candidate for a relational schema.

Scenario of moving to polyglot from NoSQL only – This is the back-end service and persistence layer of an emerging mobile app, with loads of unstructured data about places and reviews. It started with Azure Document DB. Later the app expanded, and places and restaurants needed to log in through a portal and adjust their payment plans for promotions. We needed to persist metadata and payment information – that is the point at which we set up an Azure SQL database, and everything has been smooth since.

Things to note: It’s not that a NoSQL database cannot handle transactional/accounting information, but it is not a natural fit for reporting and auditing purposes.

As you can see, there is no strict rule on when to move to NoSQL or to a relational schema. I refer to this balance as the natural fit.

Having strict demarcations between relational and NoSQL does not help you achieve the best use cases. It is hard to define the exact crossing point, but it is easy to look at the overall business case and decide.

The figure below shows the point of polyglot (author’s concept).


Natural fit plays a major role in deciding the point of polyglot. That does not mean it is always somewhere in the middle; it can be anywhere, depending on product features, roadmaps, and team skills. Some products have polyglot persistence from the very beginning of the implementation.

Though the point of polyglot can be mapped as above, the implementation of polyglot persistence is influenced by two major factors – the cost of implementation and the available skills. The figure below shows the decision matrix (author’s concept).


Conclusion – There are two groups of people with opposing mindsets regarding the adoption of NoSQL versus relational stores. At some point most projects pass through the point of polyglot, but that is not necessarily the implementation point. In general, the implementation decision is heavily influenced by the decision matrix.

Integrating Azure Power BI Embedded in your DevOps


Before starting: this post answers the question, can we change the connection settings of Power BI Embedded reports? YES. Let’s continue.

Azure Power BI Embedded is a great tool for developers to integrate reports and dashboards into their applications. You can read about what Azure Power BI Embedded is and how to use it in your application in this documentation.

The moment a feature is supported, the question arises of how to fit it into the developer pipeline or a modern automated CI/CD DevOps process.

This article focuses on how to develop and deploy Power BI Embedded applications in your automated CI/CD DevOps pipeline.

In order to make this work, we need to answer the burning question: can we change the connection settings of a published report in Power BI Embedded? The answer is YES.

The Power BI Embedded SDK supports this. In Azure Power BI Embedded we have a workspace collection, which is a container for workspaces. We publish our reports inside a workspace, which in turn is a container for our reports (.pbix files).

In the SDK, a report consists of two components: the DataSet object and the Report object. The DataSet has operations to update the connection settings. Bingo, that’s what we’re looking for.

Power BI Embedded hierarchy



Since we can change the DataSet configuration using the Power BI SDK, we can automate this process with a small command-line utility in our TFBuild process.

Each application is different and there is no solid rule on how to do this, but I have explained a simple process you can follow.

The developer creates the reports on his/her development machine, connecting to the development database, and then adds the .pbix files to the Visual Studio solution, similar to what is shown in the image below.


The command-line utility adds the reports to the specified workspace, sets the DataSet connection properties, and updates the application database with the Ids.

In my TFBuild I’ve added this build step and specified the required arguments.


So now my command-line utility does the work for me whenever I check in. In the same way we can maintain the CI/CD pipeline across multiple environments. The image shows the idea.


When the command-line utility runs, it takes care of my reports and updates the connection settings.



Code Snippet for updating connection string

You can find this code in the Power BI Embedded sample on GitHub. Check lines 212 to 236 in this file.
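For convenience, here is a rough sketch of what that snippet does, written from memory of the sample. The client operations used below (GetDatasetsAsync, SetAllDatasetConnectionsAsync, GetGatewayDatasourcesAsync, PatchDatasourceAsync) belong to the workspace-collection era Power BI SDK; treat the exact names, namespaces, and signatures as assumptions and verify them against the sample file.

```csharp
// Sketch only – based on the Azure Power BI Embedded (workspace collections) sample.
// Verify method names and signatures against the sample and your SDK version.
private static async Task UpdateConnection(
    IPowerBIClient client, string workspaceCollectionName, string workspaceId,
    string connectionString, string username, string password)
{
    // Pick the dataset behind the published report.
    var datasets = await client.Datasets.GetDatasetsAsync(workspaceCollectionName, workspaceId);
    var datasetId = datasets.Value[datasets.Value.Count - 1].Id;

    // Point the dataset at the new database.
    var connectionParameters = new Dictionary<string, object>
    {
        { "connectionString", connectionString }
    };
    await client.Datasets.SetAllDatasetConnectionsAsync(
        workspaceCollectionName, workspaceId, datasetId, connectionParameters);

    // Update the credentials on the underlying datasource.
    var datasources = await client.Datasets.GetGatewayDatasourcesAsync(
        workspaceCollectionName, workspaceId, datasetId);
    var delta = new GatewayDatasource
    {
        CredentialType = "Basic",
        BasicCredentials = new BasicCredentials { Username = username, Password = password }
    };
    await client.Gateways.PatchDatasourceAsync(
        workspaceCollectionName, workspaceId,
        datasources.Value[0].GatewayId, datasources.Value[0].Id, delta);
}
```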

Detailing ASP.NET Core in Azure App Service

ASP.NET Core is the next-generation development standard for the .NET world – maybe that’s how I like to express it. Every ASP.NET Core application is a DNX (.NET Execution Environment) application.

When you create an ASP.NET Core application in Visual Studio (VS) 2015, it creates the project targeting both the .NET Framework and .NET Core. You can see this under the frameworks section of the project.json file, and the ASP.NET Core team recommends leaving these settings as they are (i.e. letting your application target both frameworks).
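As a rough illustration (the exact target framework monikers depend on the tooling version you are using), the frameworks section of a DNX-era project.json looks something like this:

```json
{
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  }
}
```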

But in order to understand how things are deployed in Azure App Service, I compiled an ASP.NET Core application and published it to an Azure Web App. Then I browsed the app with the Kudu service, and the Process Explorer looked like this, showing that the ASP.NET Core app is running on DNX.


Under the Debug Console of the Kudu service, at the path site\approot\runtimes, we can see the shipped .NET Core runtime – the feature that makes ASP.NET Core applications self-contained.


All this information is hidden from developers, letting them focus on application development. So although the Visual Studio publishing model for an ASP.NET Core application is the same as the ASP.NET publish model, Azure App Service hosts your web application under different runtimes based on the defined configuration.