Service mesh in Service Fabric

Introduction

Microservices are here to stay, and we can see their increasing popularity alongside the maturing technology stack that facilitates them. This great article, which explains the maturity of microservices and the 2.0 stack, mentions three key aspects.

  1. Service mesh
  2. Matured orchestrators
  3. RPC-based service protocols.

This post focuses on the communication infrastructure in Service Fabric. A service mesh is, at its core, the communication infrastructure of a microservices / distributed system platform.

First, let's look at what a service mesh is. In the simplest terms, a service mesh is all about service-to-service communication. Say service A wants to talk to service B; service A then needs all the network and communication functionality, and the corresponding implementations, in addition to its business logic. Implementing that network functionality makes the service complex and unnecessarily big.

A service mesh abstracts all or most of the networking and communication functionality away from a service by providing a communication infrastructure, allowing the services to stay clean with just their business logic.

With that high-level understanding, if we do some googling and summarize the results, we arrive at a definition of a service mesh with these two key attributes.

  • Service mesh is a network infrastructure layer
  • Its primary (or sole) purpose is to facilitate service-to-service communication in cloud native applications.

Cloud native?? – (wink) don't worry much about that; for the sake of this article, it is safe to read it as a distributed system's service communication.


Modern service mesh implementations are proxies which run as sidecars for the services. Generally an agent runs on each node; the services running on the node talk to the proxy, and the proxy performs the service resolution and carries out the communication.

When Service A wants to talk to Service B

  1. Service A calls its local proxy with the request.
  2. The local proxy performs service resolution and makes the request to Service B.
  3. Service B replies to the proxy running in Container 1.
  4. Service A receives the response from its local proxy.
  5. Service B's local proxy is NOT used in this communication. Only the caller needs a proxy, not the respondent.
  6. Service A is NOT aware of the service resolution, resiliency and other network functionality required to make this call.

There are notable service mesh implementations in the market; Linkerd and Istio are quite famous, Conduit is another, and there are many more. This is a good article explaining the different service mesh technologies.

The mentioned service mesh implementations are known in Kubernetes- and Docker-based microservices, but what about a service mesh in Service Fabric?


Service mesh is inherent in Service Fabric

Service Fabric has a proxy-based communication system. Whether that qualifies as a service mesh depends on the agreed definition of service mesh: typically a service mesh implementation should have a control plane and a data plane. Before diving into those details, let's look at the proxy-based communication setup available in Service Fabric.

Reverse Proxy for HTTP Communication

SF has a Reverse Proxy implementation for HTTP communication. When enabled, this proxy runs as an agent on each node. The reverse proxy handles service discovery and resiliency in HTTP-based service-to-service communication. If you want to read about the more practical aspects of the Reverse Proxy implementation, this article explains service communication and the SF reverse proxy implementation.

The Reverse Proxy runs on port 19081 by default and can be configured in the clusterManifest.json.


{
    ............
    "reverseProxyEndpointPort": "19081"
    ............
}

On the local development machine this is configured in the clusterManifest.xml.

<HttpApplicationGatewayEndpoint Port="19081" Protocol="http" />

When Service A wants to call Service B's APIs, it calls its local reverse proxy with the following URL structure.

http://localhost:{port}/{application name}/{service name}/{api action path}

There are several variations of the reverse proxy URL, depending on the kind of service being called. This is a detailed article about the Service Fabric Reverse Proxy.
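As a rough illustration (the application and service names below are made up), Service A could call Service B through its local reverse proxy like this:

using (var client = new HttpClient())
{
    // The local reverse proxy resolves the service behind /MyApp/ServiceB
    // and forwards the call; Service A only ever talks to localhost.
    var response = await client.GetAsync("http://localhost:19081/MyApp/ServiceB/api/values");
    var body = await response.Content.ReadAsStringAsync();
}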

RPC Communication in Service Fabric

RPC communication in Service Fabric is facilitated by the Service Fabric Remoting SDK. The SDK has the following ServiceProxy class.

Microsoft.ServiceFabric.Services.Remoting.Client.ServiceProxy

The ServiceProxy class creates a lightweight local proxy for RPC communication and is provided by a factory implementation in the SDK. Since we use the SDK to create the RPC proxy, in contrast to the HTTP reverse proxy, it has an application-defined lifespan and there is no agent running on each node.
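A minimal sketch of the RPC flow, assuming a hypothetical IMyService remoting contract:

// Hypothetical remoting contract; remoting interfaces derive from IService.
public interface IMyService : IService
{
    Task<string> DoSomethingAsync();
}

// The SDK factory creates a lightweight local proxy bound to the service address.
var proxy = ServiceProxy.Create<IMyService>(new Uri("fabric:/MyApp/MyService"));
var result = await proxy.DoSomethingAsync();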

Regardless of the implementation, both HTTP and RPC communication are natively well supported by Service Fabric and follow the sidecar-based proxy model.


Data Plane and Control Plane in Service Fabric

From the web-inferred definition, a service mesh has two key components (note, now we're getting into the details of a service mesh) known as the data plane and the control plane. I recommend reading this article, which explains the data plane and the control plane in a service mesh.

The inbuilt sidecar-based communication proxies in Service Fabric form the network communication infrastructure, which represents the data plane component of the service mesh.

The control plane is generally a bit confusing to understand, but in short, it is safe to assume the control plane holds the policies to manage and orchestrate the data plane of the service mesh.

In Service Fabric, a control plane as per the complete definition in the above article is not available. Most of the control plane functions are application-model specific and implemented by developers, and some are built into the communication and federation subsystems of Service Fabric. The key missing piece in the control plane component of Service Fabric is a unified UI to manage the communication infrastructure (the data plane).

The communication infrastructure cannot be managed separately from the application infrastructure; thus a complete control plane is not available in Service Fabric.

With those observations, we can conclude:

Service Fabric's service mesh is a sidecar-proxy-based network communication infrastructure which leans heavily toward the data plane attributes of a service mesh.


Are you awaiting at the right place?

The C# language features async and await are easy to use, straightforward, and available right out of the box in the .NET Framework. But it seems the idea behind async & await causes some confusion in implementations, especially around where you await in the code.

The asynchronous feature is known for responsiveness, but it can help boost the performance of your application as well. Most developers seem to miss this point.

Since most projects start with a Web API, let me start the discussion from there. In a Web API action like the one below, the async action method lets the IIS thread return immediately instead of being blocked until the end of the call, thus increasing the throughput of IIS.

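A minimal sketch of such an action (the repository dependency here is made up for illustration):

public class ReportsController : ApiController
{
    private readonly IReportRepository _repository; // hypothetical dependency

    public ReportsController(IReportRepository repository)
    {
        _repository = repository;
    }

    public async Task<IHttpActionResult> Get()
    {
        // The IIS thread returns to the pool while the awaited call is in flight.
        var reports = await _repository.GetReportsAsync();
        return Ok(reports);
    }
}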

Whenever we have an async method, developers tend to await it immediately, right there. This makes sense when the rest of the code depends on the result of the call; otherwise it is not a wise option.

Assume we have an async operation like below.

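Something along these lines (reconstructed for illustration; the original showed an operation taking 6 seconds):

// Simulates an asynchronous operation that takes 6 seconds to complete.
private static async Task<int> DoWork()
{
    await Task.Delay(TimeSpan.FromSeconds(6));
    return 1;
}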

Say that you want to invoke the method twice.

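A sketch of the naive version, awaiting each call immediately:

public async Task<IHttpActionResult> Get()
{
    // The second call starts only after the first one completes.
    var first = await DoWork();
    var second = await DoWork();
    return Ok(first + second);
}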

In the above code snippet the method is asynchronous – the action method is marked as async, the IIS thread returns to the pool before completion, and execution continues from the point where it left off when the response arrives.

But the method is not gaining much in performance; it would take 12+ seconds to complete, as it runs the first DoWork() which takes 6 seconds, then the second DoWork() which takes another 6 seconds, and finally returns.

Since the result of the first execution is not used or needed in the rest of the execution, we don't need to perform individual awaits. We can execute the calls in parallel.

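A sketch of the parallel version:

public async Task<IHttpActionResult> Get()
{
    // Start both operations first, then await them together.
    var firstTask = DoWork();
    var secondTask = DoWork();
    await Task.WhenAll(firstTask, secondTask);
    return Ok(firstTask.Result + secondTask.Result);
}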

The above code executes the tasks in parallel and awaits at the end of the method. This model would take 6+ seconds.

Async and await are very powerful features of .NET; they help not only with responsiveness but also with performance and parallel execution. By placing your awaits carefully you gain more performance advantages.

Project Oxford – Behind the scenes of how-old.net

http://www.how-old.net has been trending recently in social media. You simply upload a picture to the website, and it detects the faces in the photo and tells you the gender and age of each person.

The site uses Face API behind the scenes, which is available here http://gallery.azureml.net/MachineLearningAPI/b0b2598aa46c4f44a08af8891e415cc7

You can try this out by subscribing to the service. It's an App Service in Microsoft Azure, and you need an Azure subscription to subscribe to it. Currently it is free, and the subscription allows 20 transactions per minute.


Once you are done with the purchase, the Face API is available in the Azure Marketplace section like any other service.


In the management section you can get the key for the API; the Face API is managed by Azure API Management (read more about Azure API Management here).

 


The Face API team also provides a sample WPF application, with a portable client library as a wrapper for their REST service.

Getting Started with the Face API .NET client SDK

A simple face detection method would be very similar to this.

var faceClient = new FaceServiceClient("<your subscription key>");
var faces = await faceClient.DetectAsync(fileStream, false, true, true, false);

var collection = new List<MyFaceModel>();

foreach (var face in faces)
{
    collection.Add(new MyFaceModel()
    {
        FaceId = face.FaceId.ToString(),
        Gender = face.Attributes.Gender,
        Age = face.Attributes.Age
    });
}

A direct JSON output would be like this. (test it here – http://www.projectoxford.ai/demo/face)


Face detection is made immensely easy by this research project. 🙂 Happy face detection.

The library has loads of other features like face matching, grouping, highlighting and more.

HttpResponseMessage vs IHttpActionResult

Web API 2 introduced IHttpActionResult. Read this post, which explains the Web API 2 response types and the benefits of IHttpActionResult.

Assuming you've read the above article, it is recommended to use IHttpActionResult.

Apart from the benefits of clean code and unit testing, the main design argument for IHttpActionResult is the single responsibility principle: actions have the responsibility of serving HTTP requests and should not be involved in creating the HTTP response messages. This argument makes sense, but, keeping it aside, if we look at the implementation of IHttpActionResult, it is the ExecuteAsync method that creates the HttpResponseMessage object.

Overall, IHttpActionResult is newer, makes unit testing easy, and is the recommended practice. I personally prefer IHttpActionResult for the clean code and the ability to write neat unit tests.

Still, HttpResponseMessage provides more control over the HTTP response message sent across the wire. Do we have that control with IHttpActionResult, especially when the HTTP response message creation is hidden from us?

Yes, you can get full control, because, as the above article mentions, the ExecuteAsync method is called in the pipeline to construct the HTTP response. So the solution is simple: create a custom type which implements the IHttpActionResult interface and provide the logic for creating the HttpResponseMessage object.
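A minimal sketch of such a type (the name and caching policy here are made up for illustration; the repo mentioned below has the full version):

// Hypothetical example: an IHttpActionResult that adds a Cache-Control header.
public class CachedOkResult<T> : IHttpActionResult
{
    private readonly T _content;
    private readonly HttpRequestMessage _request;
    private readonly TimeSpan _maxAge;

    public CachedOkResult(T content, HttpRequestMessage request, TimeSpan maxAge)
    {
        _content = content;
        _request = request;
        _maxAge = maxAge;
    }

    public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        // Full control over the HttpResponseMessage happens here.
        var response = _request.CreateResponse(HttpStatusCode.OK, _content);
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = _maxAge
        };
        return Task.FromResult(response);
    }
}

From an action you would return something like new CachedOkResult<string>(value, Request, TimeSpan.FromMinutes(5)).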

This GitHub repo has the code for a generic type which implements IHttpActionResult, which you can use or extend; in the sample I have shown how to implement caching in the response header.

The main class is CacheableHttpActionResult<T>

Generating code using System.CodeDom

Get the code sample for this post from git.

There are several occasions when we need to generate code through automation, and several tools exist for it. In this article I discuss System.CodeDom, which is part of the .NET SDK.

CodeDom is designed around the provider model, which gives the flexibility to target the desired .NET language: we can create the code generation logic once and ask CodeDom to emit the code in different languages like C#, VB or C++.

Place these using statements before you begin.

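(The original screenshot is lost; this is a likely set, covering the namespaces the snippets below use.)

using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using System.Reflection;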

You can get the list of languages supported by the CodeDom provider from the code below. The output will contain different names for the same language (like csharp and c#).

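A sketch reconstructing that listing:

// Enumerate every compiler registered with CodeDom and print its language names.
foreach (var compilerInfo in CodeDomProvider.GetAllCompilerInfo())
{
    foreach (var language in compilerInfo.GetLanguages())
    {
        Console.WriteLine(language);
    }
}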

CodeNamespace is the core object which wraps the entire code generation logic.

 

Create a namespace, include some imports and add comments.

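A sketch of that step (the namespace name is made up):

// Create the namespace, import System and add a header comment.
var codeNamespace = new CodeNamespace("MyCompany.Generated");
codeNamespace.Imports.Add(new CodeNamespaceImport("System"));
codeNamespace.Comments.Add(new CodeCommentStatement("Auto-generated code. Do not edit."));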

Declare a class and add a property to the class.

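A sketch of declaring a class with one property (the Customer type is made up):

// Declare a public class.
var customerType = new CodeTypeDeclaration("Customer")
{
    IsClass = true,
    TypeAttributes = TypeAttributes.Public
};

// Backing field for the property.
var nameField = new CodeMemberField(typeof(string), "_name");
customerType.Members.Add(nameField);

// A public string property named Name wrapping the field.
var nameProperty = new CodeMemberProperty
{
    Name = "Name",
    Type = new CodeTypeReference(typeof(string)),
    Attributes = MemberAttributes.Public | MemberAttributes.Final
};
nameProperty.GetStatements.Add(new CodeMethodReturnStatement(
    new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), "_name")));
nameProperty.SetStatements.Add(new CodeAssignStatement(
    new CodeFieldReferenceExpression(new CodeThisReferenceExpression(), "_name"),
    new CodePropertySetValueReferenceExpression()));
customerType.Members.Add(nameProperty);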

Like the above, CodeDom also provides methods to create private variables, constructors, methods, attributes and even logic inside methods. The downloadable sample demonstrates all of these features.

After adding all the required elements of the code, we can add the types to the namespace.

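That step is a one-liner:

// Add the generated type to the namespace.
codeNamespace.Types.Add(customerType);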

Once the namespace is ready we have to generate the code; this is where the CodeDom magic happens. To generate the code we use the following method.

CodeDomProvider.CreateProvider("language").GenerateCodeFromNamespace(codeNamespace, textWriter, codeGeneratorOptions)

Creating the CodeGeneratorOptions

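A sketch of the options (these values are just one sensible choice):

var options = new CodeGeneratorOptions
{
    BracingStyle = "C",              // opening braces on their own line
    BlankLinesBetweenMembers = true
};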

Then you can pass any TextWriter object; here I've used a StreamWriter to generate a physical file. Compact code, in my fashion.

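A sketch of the final step (the output file name is made up):

// Generate C# code from the namespace into a physical file.
using (var writer = new StreamWriter("Customer.generated.cs"))
{
    CodeDomProvider.CreateProvider("CSharp")
                   .GenerateCodeFromNamespace(codeNamespace, writer, options);
}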

The sample contains code for

  • generating constructors
  • creating .NET 4.0 type parameters
  • private properties
  • methods
  • adding attributes
  • adding methods
  • adding code inside the methods.

Get it from github

How to create certificate authentication with Azure Management Service

In order to carry out any management task in Azure using an agent (Visual Studio or any custom code), the agent should authenticate itself with Azure. Requests to the Azure Management API should be authenticated using one of the following methods.

  • Active Directory
  • Certificate Authentication

This article covers certificate authentication. Azure Management Service (AMS) APIs require an X.509 certificate for authentication. For development purposes we can create a sample certificate on our machine using the following command line. Make sure you open the Visual Studio command line in administrator mode to execute it.

makecert -sky exchange -r -n "CN=<CertificateName>" -pe -a sha1 -len 2048 -ss My "<CertificateName>.cer"


This creates the certificate on the local machine under Personal Certificates, since I have specified "My" as the location.

Open the Certificate Manager on your local machine (enter certmgr.msc in Run) and you can check for your new certificate.


We should upload this certificate to Azure to establish trust, and each and every API request should contain the certificate. Certificates are stored in Azure under subscriptions, and are thus used to authorize subscription owner actions. Each subscription can contain up to 100 certificates as of this writing.

Export the certificate from the certificate store as a .cer file.


Once you have exported the certificate, the next step is to upload it to the Azure subscription. Log in to Azure, select the correct directory if you have more than one under your login, and select the subscription to which you need to upload the certificate. Then go to Settings and open the Management Certificates tab, where you can upload your certificate.

After uploading the certificate you can view it in the grid.


To summarize what we’ve done up to now,

  • We need to establish trust between Azure and the subscription agent via certificate authentication.
  • The subscription agent is the party / tool which programmatically carries out the tasks of a subscription owner.
  • First we generated a local certificate using makecert and verified it in certmgr.msc.
  • We exported the certificate and uploaded it to the Azure management certificate store.
  • So now any subscription agent with the certificate can perform the subscription owner tasks (using the Azure Management API), authenticating with the certificate.

The below C# code shows how to retrieve the certificate from your local store by providing the thumbprint.

public X509Certificate2 GetStoreCertificate(string thumbprint)
{
    List<StoreLocation> locations = new List<StoreLocation>
    {
        StoreLocation.CurrentUser,
        StoreLocation.LocalMachine
    };

    foreach (var location in locations)
    {
        X509Store store = new X509Store("My", location);
        try
        {
            store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
            X509Certificate2Collection certificates = store.Certificates.Find(
                X509FindType.FindByThumbprint, thumbprint, false);

            if (certificates.Count == 1)
            {
                return certificates[0];
            }
        }
        finally
        {
            store.Close();
        }
    }

    throw new ApplicationException("No Certificate found");
}

The above code tries to get the certificate from the Personal certificate location, as the parameter "My" has been passed to the X509Store constructor.

After obtaining the certificate, you should pass it with each and every Azure Management API request, whether you use the REST API or any language SDK.
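A rough sketch of attaching the certificate to a raw Service Management REST request (the URL path and API version here are illustrative):

// Hypothetical example: list hosted services using the management certificate.
var request = (HttpWebRequest)WebRequest.Create(
    "https://management.core.windows.net/<subscription-id>/services/hostedservices");
request.Headers.Add("x-ms-version", "2014-06-01");
request.ClientCertificates.Add(GetStoreCertificate("<certificate thumbprint>"));

using (var response = (HttpWebResponse)request.GetResponse())
{
    Console.WriteLine(response.StatusCode);
}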

How to enable sessions in Web API

Web API does not support native HTTP sessions, and that is by design: a service framework should be stateless. If there are times you feel you need HTTP sessions in Web API, that usually points to a design problem. So why do we need sessions in Web API? I think you should not use sessions in Web API in production; eliminate HTTP sessions completely.

So the answer to why we'd enable sessions in Web API is: just to show how you can. Silly though it is, you can use this for POCs and quick functional demos. Never use sessions in Web API production code, because Web API is designed to be stateless.

First we should implement a ControllerHandler which is capable of handling sessions. In order to make our ControllerHandler handle sessions, we should implement the IRequiresSessionState interface as well. Look at the code below.

public class SessionableControllerHandler : HttpControllerHandler, IRequiresSessionState
{
    public SessionableControllerHandler(RouteData routeData)
        : base(routeData)
    {
    }
}

The next step is to create a RouteHandler as a wrapper for the ControllerHandler we created; this is because when registering routes in the RouteTable we can pass RouteHandler types, not ControllerHandler types. Look at the code below for the custom RouteHandler.

public class SessionStateRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        return new SessionableControllerHandler(requestContext.RouteData);
    }
}

Then finally we have to register our RouteHandler in the RouteTable.

RouteTable.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
).RouteHandler = new SessionStateRouteHandler();

In order for our custom route to be used, we need to register it before the other route registrations. Once it is wired up, the session becomes reachable through HttpContext, as in the sketch below.
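A minimal sketch (the controller name and counter logic are made up):

// Hypothetical example: a session-backed counter inside an ApiController.
public class VisitsController : ApiController
{
    public IHttpActionResult Get()
    {
        var session = HttpContext.Current.Session; // available thanks to IRequiresSessionState
        var visits = ((int?)session["visits"] ?? 0) + 1;
        session["visits"] = visits;
        return Ok(visits);
    }
}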

Windows Azure Caching

Role Based Caching

Windows Azure provides 2 primary role-based caching options: Shared caching and Dedicated caching.

Shared caching carves out a portion of the memory of the web or worker role. This adds no additional charges, since you're already paying for the cloud service instance and simply using part of its memory. It can be a performance hit when the cache size is a significantly big portion of the total instance memory allocation. Shared caching is also known as In-Role caching and Co-Located caching; the name In-Role caching is self explanatory.

Dedicated caching is again a very self explanatory term; this enables us to have a dedicated Cache Worker Role.

In-Role Caching

This is the cache type we provision inside our role instances. Create a Windows Azure cloud service project with 2 roles (one Web Role and one Cache Worker Role).


Right-click on WebRole1 and go to its properties.


Tick Enable Caching, and notice that the Dedicated Role option is disabled since this is a Web Role. You can specify the cache size as a percentage. Notice it says Cache Cluster settings: Azure roles can run on more than one instance, and when they do, the instances form a cache cluster. A cache cluster is a distributed caching service that combines the memory from all the running instances.

Each cache cluster maintains its runtime state in Azure storage. You should provide valid storage account information in the text box when deploying the solution to Windows Azure.

Named Cache Settings is the last section. Each cache cluster can have more than one named cache, which is a logical partition of the cache memory with its own settings. You can see the different settings we can configure for each named cache. The eviction policy LRU means Least Recently Used.

 

Dedicated Caching

The role properties screen for dedicated caching is similar: the Dedicated Role option is enabled, and you also have the option of using the dedicated role as a co-located cache by specifying the amount of memory as a percentage. This is useful when you plan a strictly resource-framed deployment.


Windows Azure Caching Service

Other than the above role-based cache options, Windows Azure provides a Cache Service, which is in preview.


The cache offering is available in 3 different packages: Basic, Standard and Premium. The good side of these offerings is that each of them can be scaled within a range. Once you provision a cache service you get an endpoint URL and security keys.


In the Azure management portal you get other options like a dashboard, configuration options to create named cache instances, and scaling options.

 

Accessing Windows Azure Cache Service in a .NET application

First install the Windows Azure Cache assembly from NuGet.


This adds some configuration settings to your config file as well. This is where you specify your endpoint URL and the security key.

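The added section looks roughly like this (the endpoint identifier and key are placeholders; treat the exact schema as indicative):

<dataCacheClients>
  <dataCacheClient name="default">
    <autoDiscover isEnabled="true" identifier="<your cache endpoint>" />
    <securityProperties mode="Message" sslEnabled="false">
      <messageSecurity authorizationInfo="<your key>" />
    </securityProperties>
  </dataCacheClient>
</dataCacheClients>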

The programming model is simple and straightforward. We use the DataCache class in Microsoft.ApplicationServer.Caching to access the cache; this is the same class we use for accessing the role-based Windows Azure Cache as well.

A very crude code sample.

static void CacheTest()
{
    var cache = new DataCache("default");
    Console.WriteLine(cache.Name);

    cache.Add("key", "12");

    var value = cache.Get("key");
    Console.WriteLine(value);
}

MSDN link for the Windows Azure Cache (Preview) Development.