Thick API Gateways

I came across the term 'Overambitious API Gateways' in the ThoughtWorks Technology Radar. The question is whether it is good or bad to have business logic in an API gateway. Since a gateway is not a functional requirement and serves the purpose of a reverse proxy, it is quite obvious that including business logic in an API gateway is NOT a good design. But the idea behind 'overambitious API gateways' seems to point a finger at the API gateway vendors, rather than at the solution design and development and how API gateways should be used.

I prefer the term 'Thick API Gateways' over 'overambitious API gateways', because regardless of what the tool can offer, the implementation is up to the developer; it is the implementation that creates the anti-pattern.

With the advent of microservices architecture, API gateways gained an additional boost in the developer toolbox compared to other traditional integration technologies.


Microservices favor patterns like API Composer (aggregation of results from multiple services) and Saga (orchestration of services with compensation) at the API gateway. API gateways also host other business logic such as authorization and model transformation, resulting in Thick API Gateway implementations.

Having said that, though a thick API gateway is a bad design and brings some awkward feeling at night when you sleep, in a few cases it is quite inevitable: for example, when you're building a solution spanning different systems and orchestrating the business flows is easier and faster at the API gateway. In some cases it is impossible to change all the back-end services, so custom code has to be injected between the services and the API gateway, which brings other challenges.

At the same time, as developers, when we get a new tool we're excited about it, and we often fall into the 'if all you have is a hammer, everything looks like a nail' trap. It's better to avoid this.


Let's see some practical stuff: what kind of business logic can modern API gateways include? For example, the gateway service offered in Azure API Management (APIM) is enriched with a highly programmable request/response pipeline.

In the APIM policy below, I have provided an authorization template based on role claims.

The API gateway decides the authorization to the endpoints based on role claims. The sections are commented: first the policy validates the incoming JWT, then it sets the role claim in a context variable, and finally it handles authorization to the endpoints based on that role claim.


<policies>
  <inbound>
    <!-- validates RS256 JWT token -->
    <validate-jwt header-name="massrover_token" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized"
        require-expiration-time="true" require-signed-tokens="true">
      <audiences>
        <audience>audience id</audience>
      </audiences>
      <issuers>
        <issuer>issuer id</issuer>
      </issuers>
      <required-claims>
        <claim name="role" match="any">
          <value>admin</value>
          <value>moderator</value>
          <value>reader</value>
        </claim>
      </required-claims>
      <openid-config url="https://massrover.identityserver/.well-known/openid-configuration" />
    </validate-jwt>
    <!-- sets the role claim to a context variable (assumes a "scheme token" header value, e.g. Bearer) -->
    <set-variable name="massrover_role"
        value="@(context.Request.Headers["massrover_token"].First().Split(' ')[1].AsJwt()?.Claims["role"].FirstOrDefault())" />
    <!-- performs authorization based on the role claim and the allowed HTTP method -->
    <choose>
      <when condition="@(context.Variables.GetValue("massrover_role").Equals("admin"))">
        <forward-request />
      </when>
      <when condition="@(context.Variables.GetValue("massrover_role").Equals("moderator"))">
        <choose>
          <when condition="@(context.Request.Method.Equals("delete", StringComparison.OrdinalIgnoreCase))">
            <return-response>
              <set-status code="403" reason="Forbidden" />
              <set-body>Moderators cannot perform delete action</set-body>
            </return-response>
          </when>
          <otherwise>
            <forward-request />
          </otherwise>
        </choose>
      </when>
      <when condition="@(context.Variables.GetValue("massrover_role").Equals("reader"))">
        <choose>
          <when condition="@(context.Request.Method.Equals("get", StringComparison.OrdinalIgnoreCase))">
            <forward-request />
          </when>
          <otherwise>
            <return-response>
              <set-status code="403" reason="Forbidden" />
              <set-body>Readers have only read access</set-body>
            </return-response>
          </otherwise>
        </choose>
      </when>
      <otherwise>
        <return-response>
          <set-status code="403" reason="Forbidden" />
          <set-body>Invalid role claim</set-body>
        </return-response>
      </otherwise>
    </choose>
    <base />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
  <on-error>
    <base />
  </on-error>
</policies>

Note: this is a thick API gateway implementation, and its pros and cons depend on the problem at hand. The above is a practical elaboration of one such implementation.


Design an online forum application on Azure Table Storage

NoSQL technologies provide solutions for issues that relational databases cannot solve. At the same time, designing an application on top of a NoSQL technology requires technology-specific design decisions and architecture.

This post addresses that and explains how to model a real-world problem using Azure Table Storage. It is neither an introduction to Azure Table Storage nor a code sample; rather, it presents the thinking behind designing applications on Azure Table Storage.

Designing in Azure Table Storage

Azure Table Storage is a column store NoSQL data store. It supports four query patterns, illustrated in the sketch that follows the list.

  1. Point Query – query based on Partition Key and Row Key; retrieves a single entity.
  2. Range Query – query based on Partition Key and a range of Row Keys; retrieves multiple entities.
  3. Partition Scan – the Partition Key is used but the Row Key is not known / not used in the query; other non-key fields might be used.
  4. Table Scan – the Partition Key is not used in the query; other fields, including the Row Key, might be used.
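
As a rough sketch of the first three patterns using the classic WindowsAzure.Storage SDK (the Post entity and the names used here anticipate the design below and are illustrative):

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudTable posts = account.CreateCloudTableClient().GetTableReference("Posts");

// Point Query: Partition Key and Row Key known; retrieves a single entity.
TableOperation point = TableOperation.Retrieve<Post>("Azure", postId);
Post single = (Post)posts.Execute(point).Result;

// Range Query: Partition Key plus a range of Row Keys.
TableQuery<Post> range = new TableQuery<Post>().Where(TableQuery.CombineFilters(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Azure"),
    TableOperators.And,
    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, startId)));
IEnumerable<Post> somePosts = posts.ExecuteQuery(range);

// Partition Scan: Partition Key known, Row Key not used; non-key filters allowed.
TableQuery<Post> scan = new TableQuery<Post>().Where(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Azure"));
IEnumerable<Post> azurePosts = posts.ExecuteQuery(scan);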

Scenario

Think of something similar to StackOverflow or an MSDN forum. (Be mindful that developing a forum at that scale requires a lot more technologies and strategies beyond NoSQL.) But as a scenario, let's assume we're going to build a small-scale forum with the following features.

    • Forum members can post questions under categories.
    • Forum members can reply to posts.
    • Users have points based on their forum contribution.

Design

In modeling our application in Azure Table Storage, we need to identify the tables first. Users, Posts, Replies and Categories are the main tables.

The Categories table can have a single partition, or perhaps two partitions – Active and Archived.


The Row Key is used to store the category name; in the entity class, CategoryName has the IgnoreProperty attribute, which makes it virtual, so there will not be a physical column called CategoryName in the table. Since the category name is the Row Key under a partition, there won't be duplicate category names within the partition.

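A minimal sketch of such an entity class, assuming the classic WindowsAzure.Storage table SDK (the Description property is illustrative):

public class Category : TableEntity
{
    // CategoryName is just an alias over RowKey; IgnoreProperty keeps it
    // from being persisted as a separate column.
    [IgnoreProperty]
    public string CategoryName
    {
        get { return RowKey; }
        set { RowKey = value; }
    }

    public string Description { get; set; }
}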

Keep the fixed Partition Keys as enums; this avoids mistakes (mostly typing mistakes when dealing with strings) in defining Partition Keys.

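A sketch of what that might look like; the enum names double as the partition key strings:

public enum CategoryPartitions
{
    Active,
    Archived
}

// Usage: the enum name becomes the PartitionKey string.
var category = new Category
{
    PartitionKey = CategoryPartitions.Active.ToString(),
    CategoryName = "Azure"
};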

A simple query (a partition scan) retrieves all Active categories.

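A sketch of that query, assuming a CloudTable reference named categoriesTable:

// Partition scan: every category in the Active partition.
TableQuery<Category> query = new TableQuery<Category>().Where(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal,
        CategoryPartitions.Active.ToString()));

IEnumerable<Category> activeCategories = categoriesTable.ExecuteQuery(query);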

 

The Users table has a special design: email address and password are used as credentials, so the email address should be unique across the entire Users table regardless of the Partition Key – Row Key combination. So are we going to design the Users table as a single partition with the email being the Row Key?

This is possible, but dumping millions of user records under a single partition is not a good design practice.

The strategy is simple bucketing: I define 6 partitions for the Users table, with the Partition Key simply being a single number from 1 to 6, and allocate email addresses based on their first letter.

Any email address starting with 'a' to 'd' goes to partition 1, email addresses starting with 'e' to 'h' go to partition 2, and so on, as shown in the table below. This achieves the uniqueness of the email address across the table and also gives partition scalability.

Partition Key | First letter of email address
1 | a – d
2 | e – h
3 | i – l
4 | m – p
5 | q – t
6 | u – z

A simple method like the one below decides the Partition Key.

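A minimal sketch of such a method (how to bucket emails that don't start with a letter is an assumption here):

// Maps the first letter of the email address to one of the six partitions.
private static string GetUserPartitionKey(string email)
{
    char first = char.ToLowerInvariant(email.Trim()[0]);

    if (first <= 'd') return "1";   // a - d (digits and symbols also land here)
    if (first <= 'h') return "2";   // e - h
    if (first <= 'l') return "3";   // i - l
    if (first <= 'p') return "4";   // m - p
    if (first <= 't') return "5";   // q - t
    return "6";                     // u - z
}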

 

The Posts table is a straightforward design, with the Partition Key being the category name and the PostId (a GUID) being the Row Key. Posts of each category live in a separate partition.


Like the Category entity, the Post entity class links the Partition Key and Row Key using two properties, CategoryName and PostId respectively, marked with the IgnoreProperty attribute. See the sketch given below.

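A sketch of the Post entity along those lines (Title and Content are illustrative):

public class Post : TableEntity
{
    // CategoryName aliases PartitionKey; not stored as a separate column.
    [IgnoreProperty]
    public string CategoryName
    {
        get { return PartitionKey; }
        set { PartitionKey = value; }
    }

    // PostId aliases RowKey; not stored as a separate column.
    [IgnoreProperty]
    public string PostId
    {
        get { return RowKey; }
        set { RowKey = value; }
    }

    public string Title { get; set; }
    public string Content { get; set; }
}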

If you think using category names as Partition Keys would outgrow a single partition – since one category can have hundreds of thousands of rows – you can concatenate the category name with the year and create partitions like Azure-2015 and Azure-2016, or use any other suitable variable.

But the point is: making sure that you can calculate the Partition Keys from a formula gives you the ability to limit your queries to, at worst, Partition Scans.

 

In this scenario, the Replies table can take two plausible designs.

First, have no separate table for Replies: use the Posts table with an additional column called ParentId. Posts will have an empty ParentId, and replies will carry the ParentId of the post they reply to. Replies also go to the same partition as the Posts.

The second design is having a separate table for Replies – I would personally go for this design, as we can keep more detailed information specific to replies.

The Partition Key would be the category name and the Row Key would be the Reply ID; PostId would be another column. So in order to find the replies of a post, we would trigger a Partition Scan.
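
A sketch of that partition scan, assuming a Reply entity shaped like the Post entity above and a CloudTable named repliesTable:

// Replies to one post: scan the category partition, filter on PostId.
TableQuery<Reply> query = new TableQuery<Reply>().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, categoryName),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("PostId", QueryComparisons.Equal, postId)));

IEnumerable<Reply> replies = repliesTable.ExecuteQuery(query);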

Conclusion

Designing an application on top of any NoSQL technology requires specific planning and architecture based on the domain and the selected NoSQL platform. Knowledge of the underlying NoSQL technology is essential in order to make the design efficient.

For example, in the above design, if we get a requirement to show the 20 most recent posts on the home page regardless of category, this would definitely trigger a Table Scan, and we would also have to bring back all the posts and sort them based on the Timestamp property.

So a good decision would be to keep another small table holding the top 20 posts: when a new post is added, its Id is inserted into that table and the oldest entry is removed. We can use write-behind strategies in the application to do this.

So make sure that you design the application for the technology in a correct and efficient way.

The most misleading claim I always hear in the industry is that NoSQL development is easy and takes less time. Those two arguments are subjective, and you also need to compare against some other technology – commonly the comparison is made with relational database technologies. In my experience I don't see any significant time savings in using a NoSQL technology, but there are certainly other benefits.

Circuit Breaker Pattern for Cloud-based Microservice Architecture

Modern applications communicate with many external services; these external services could be from third-party providers, from the same provider, or components of the same application. Microservice architecture is a great example of disconnected, individually managed and scalable software components that work together. The communication takes place over simple HTTP endpoints.

Example: think that you're developing a modern shopping cart. The product catalog could be one microservice, the ordering component another, and the user comment and feedback system a third. All three services together provide the full shopping cart experience.

Each service is built to be consumed by the others; they might have sophisticated API Management interfaces, simple self-documented REST endpoints, or undocumented REST endpoints.

Another example is Facebook: the messenger feature is implemented by a totally different team from the one that manages the feeds page and the EdgeRank stuff. Each team pushes updates and manages and operates its service individually. The entire Facebook experience comes from the whole collection of microservices.

So the communication among these components is essential. A Circuit Breaker (CB) manages the communication by acting as a proxy. If a service is down, there is no point in trying it and wasting time. And if a service is recovering, it is better not to congest it with a flood of requests; the service should be given time to heal.

Circuit Breaker and Retry Logic

It is important to understand when to use a Circuit Breaker and when to retry. In the case of transient failures, the application should retry. Transient failures are temporary failures; a common example is a TimeoutException, where it is obviously reasonable to retry one more time.

But if an API Management gateway blocks your call for some reason (IP restriction, request limit) or you get a 500 error, then you should stop retrying, inform the caller about the issue, and let the service heal. This is where the Circuit Breaker helps.

How Does the Circuit Breaker Work?

Circuit Breaker has 3 states.

  • Closed
  • Open
  • Partial Open

Follow the explanation below through each of these states.


By default, the Circuit Breaker is in the Closed state. Requests come in and are routed to the external service. The Circuit Breaker keeps a counter of the non-transient failures that occur in the external service within a given time period. Say the time period is 15 seconds and the failure threshold is 10: if the service fails 10 times within 15 seconds, across any number of requests, then the Circuit Breaker goes to the Open state. If there are fewer than 10 failures during those 15 seconds, the failure counter is reset and the Circuit Breaker remains in the Closed state.

In the Open state, the Circuit Breaker does not forward any requests to the external service, regardless of how many requests it receives; it replies to them with the last known exception. It remains in the Open state for a specified time period. After that period has elapsed, the Circuit Breaker enters the Partial Open (semi-open) state.

In the Partial Open state, some of the requests are forwarded to the external service while the others are rejected as if the Circuit Breaker were still in the Open state. The Circuit Breaker monitors the success of the allowed calls, and if a specified number of calls are continuously successful, it resets its counters and goes to the Closed state.

The mechanism for deciding which calls reach the service during the Partial Open state is up to the implementation. You can simply write an algorithm that rejects every other call, or you can use your own business domain: for example, calls from members of the Admin role pass through while others fail.

Partial Open State and Preventing Senseless Blocking

This is a tricky state because it does not have a timeout period, so the Circuit Breaker will remain in the Partial Open state until enough requests arrive to satisfy the condition. This might not be preferable in some cases.

For example, consider that the service goes down at 10:00:00 AM and the Circuit Breaker goes to the Open state. After 3 minutes (at 10:03:00 AM) it goes to the Partial Open state. From 10:03:00 AM to 10:23:00 AM only a few requests come in, and some of them are rejected by the Circuit Breaker; it is still waiting for more calls even though the service is perfectly back to normal by 10:08:00 AM. I call this kind of prevention by the Circuit Breaker Senseless Blocking.

There are a few remedies you can implement to prevent senseless blocking: simply put a timeout period on the Partial Open state, or do a heartbeat check from the circuit breaker to the external service using a background thread. But be mindful that senseless blocking is an inherent issue in the Circuit Breaker pattern.

When not to use a Circuit Breaker?

When you do not make frequent calls to the external service, it is better to call it without going through a Circuit Breaker, because when your calls are infrequent there is a high probability that you will face senseless blocking.

Implementation

I have provided a reusable pattern template for the Circuit Breaker.

Code is available in GitHub : https://github.com/thuru/CloudPatterns/tree/master/CloudPatterns/CircuitBreakerPattern
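
Not the exact code from the repository, but a minimal sketch of the state machine described above (the threshold, timeout and trial-call policy are simplified; a fuller version would also track the failure-counting window and admit only a fraction of calls while Partial Open):

public enum CircuitState { Closed, Open, PartialOpen }

public class CircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _openTimeout;
    private readonly object _sync = new object();

    private int _failureCount;
    private int _trialSuccesses;
    private DateTime _openedAt;
    private Exception _lastException;

    public CircuitState State { get; private set; }

    public CircuitBreaker(int failureThreshold, TimeSpan openTimeout)
    {
        _failureThreshold = failureThreshold;
        _openTimeout = openTimeout;
        State = CircuitState.Closed;
    }

    public T Execute<T>(Func<T> call)
    {
        lock (_sync)
        {
            if (State == CircuitState.Open)
            {
                // Open: reply with the last known exception until the timeout elapses.
                if (DateTime.UtcNow - _openedAt < _openTimeout)
                    throw _lastException;

                State = CircuitState.PartialOpen;
                _trialSuccesses = 0;
            }
        }

        try
        {
            T result = call();
            lock (_sync)
            {
                if (State == CircuitState.PartialOpen && ++_trialSuccesses >= 3)
                    State = CircuitState.Closed;   // service looks healthy again

                if (State == CircuitState.Closed)
                    _failureCount = 0;
            }
            return result;
        }
        catch (Exception ex)
        {
            lock (_sync)
            {
                _lastException = ex;

                // Any failure while Partial Open, or too many while Closed, opens the circuit.
                if (State == CircuitState.PartialOpen || ++_failureCount >= _failureThreshold)
                {
                    State = CircuitState.Open;
                    _openedAt = DateTime.UtcNow;
                }
            }
            throw;
        }
    }
}

Wiring it up is then just breaker.Execute(() => CallExternalService()).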


Advanced Caching Techniques

These are the techniques for how objects are stored in and retrieved from a cache.

  • Read Through
  • Write Through
  • Read Ahead

Last week I wrote about the Cache-Aside pattern and provided a code sample of a minimal implementation to get started with Redis (intentionally tested using Redis on Azure). The code sample also has a provider class and a practical implementation of the pattern which can be used directly in MVC / Web API projects.

In this post, let's discuss the conceptual engineering aspects of these caching strategies. I address the scenarios and how to implement them; the steps are explained in English, so the real implementation can be done using any programming language.

Read Through

This is a very simple and straightforward approach, and very common in use. The application reads the data from the cache. If the data is available in the cache, the application gets it from there; otherwise the application reads the data from the data store and stores it in the cache for future reference.

Objects in the cache are stored for a specific time, and any request that comes to the cache within this time frame is served from the cache. If a write happens to the object within that time frame, then:

  1. If the write operation invalidates the cache, the next read after the write will hit the data store and update the cache.
  2. If the write does not invalidate the cache, the next read after the write will get stale data.

The code sample of the Cache-Aside pattern implements scenario 1 under Read Through. This ensures that the application does not get stale data, but it might bring performance issues where the write rate of an object is equal to or greater than its read rate.

  • The application reads the data from the cache.
  • If the data is in the cache the application gets it; otherwise it loads it from the data store and updates the cache.
  • The application writes data to the data store.
  • A successful write operation invalidates the corresponding object in the cache.

Write Through

Applications write the data to the cache, not to the data store; the caching service writes the data transparently to the data store. Mostly this update is synchronous, so a typical write operation returns success once the data has been written both to the cache and to the data store. Since the data is written to the cache, there is no need to invalidate the object. Modifications to the object in the cache need to be handled in a thread-safe way. Applications always get the latest data.

  • The application writes the data to the cache.
  • The caching service or the application writes the data to the data store.
  • A write is considered successful only if both the cache and the data store are updated.
  • We can use two different threads – one to update the cache and the other to update the data store – and wait for both to complete successfully.
  • Using application-generated IDs for the objects helps.
  • Updating the objects should be thread safe.

Write Through also has a variant with a delayed update to the data store, known as the Write-Behind strategy.

  • The application writes the data to the cache.
  • A write is considered successful once the write to the cache succeeds.
  • At a later stage (either periodically, or based on eviction time, or based on any specific parameter) the data store is updated.

This is a very helpful and highly responsive design; most modern applications with high throughput follow this strategy. Your cache should be reliable and should support at least one level of a master-slave model in order to be reliable, because if the cache goes down before the write reaches the data store, there is no way to get the data back.

Also, if an object requires a full audit trail, this strategy is not useful. Example: an application requires that all operations on Products be logged. A new product is added and then modified; the data store update happens only after the modification, so we completely miss the old value and the change log of that product.
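
A minimal write-behind sketch (the store delegate and key type are illustrative; note the caveat above – anything still in the queue is lost if the process dies). It uses System, System.Collections.Concurrent, System.Collections.Generic and System.Threading:

public class WriteBehindCache<TValue>
{
    private readonly ConcurrentDictionary<string, TValue> _cache =
        new ConcurrentDictionary<string, TValue>();
    private readonly ConcurrentQueue<KeyValuePair<string, TValue>> _pending =
        new ConcurrentQueue<KeyValuePair<string, TValue>>();
    private readonly Action<string, TValue> _writeToStore;
    private readonly Timer _flushTimer;

    public WriteBehindCache(Action<string, TValue> writeToStore, TimeSpan flushInterval)
    {
        _writeToStore = writeToStore;
        _flushTimer = new Timer(_ => Flush(), null, flushInterval, flushInterval);
    }

    // The write is "successful" as soon as the cache is updated.
    public void Write(string key, TValue value)
    {
        _cache[key] = value;
        _pending.Enqueue(new KeyValuePair<string, TValue>(key, value));
    }

    public bool TryRead(string key, out TValue value)
    {
        return _cache.TryGetValue(key, out value);
    }

    // Runs periodically in the background; drains queued writes to the data store.
    private void Flush()
    {
        KeyValuePair<string, TValue> item;
        while (_pending.TryDequeue(out item))
            _writeToStore(item.Key, item.Value);
    }
}

The flush interval trades durability for throughput: the longer it is, the more writes you risk losing if the cache node dies.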

Read Ahead

Read the frequently accessed data from the data store before the cached object gets evicted. For example, there is a products collection in the cache that is accessed very often, and the eviction time is 120 seconds; under a normal cache implementation this collection is removed from the cache after 120 seconds.

So the first read after the object has been cleared from the cache goes through the Read Through strategy, and that read might take a longer time. The Read Ahead strategy refreshes the collection before it gets evicted, and the refresh happens automatically; in Read Through, the refresh happens on demand.

  • There should be a mechanism to observe cache object lifetimes. (Redis, for example, can publish an event when a key expires.)
  • Based on that event, we fire up a worker to load the data into the cache even before the application requests it – see the sketch below.
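
For example, with StackExchange.Redis and keyspace notifications enabled on the server (notify-keyspace-events Ex), a minimal sketch might look like this; LoadProductsFromStore and Serialize are hypothetical helpers, and the connection string is a placeholder. Note the expired event fires after eviction, so a stricter read-ahead would schedule the refresh just before the TTL runs out:

// Subscribe to key-expiration events in database 0.
var connection = ConnectionMultiplexer.Connect("your-cache:6379");
var subscriber = connection.GetSubscriber();

subscriber.Subscribe("__keyevent@0__:expired", (channel, key) =>
{
    if (key == "products")
    {
        // Reload the hot collection before anyone asks for it again.
        var products = LoadProductsFromStore();            // hypothetical loader
        connection.GetDatabase().StringSet(
            "products", Serialize(products), TimeSpan.FromSeconds(120));
    }
});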

Caching is a strategic decision. We can use it simply to store some objects, or an entire application can be designed and scaled around it.


Cache-Aside Pattern using Redis on Azure

Cache-Aside is a common pattern in modern cloud applications, and a very simple and straightforward one. The following are the characteristics of the pattern.

  • When an application needs data, it first looks in the cache.
  • If the data is available in the cache, the application uses it from the cache; otherwise the data is retrieved from the data store and the cache entry is updated.
  • When the application writes data, it first writes to the data store and then invalidates the cache.

How to handle the lookups and the other properties and events of the cache is left open, meaning the pattern does not enforce any rules on that. The following steps summarize the idea, and a code sketch follows them.

  1. The application checks the cache for the data; if the data is in the cache, it gets it from there.
  2. If the data is not available in the cache, the application looks for it in the data store.
  3. The application then updates the cache with the retrieved data.

  1. The application writes the data to the data store.
  2. It then sends an invalidate request to the cache.
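
A minimal sketch of the two paths, assuming StackExchange.Redis and Json.NET (GetProductFromDb, SaveProductToDb and the key scheme are illustrative, not the exact code from the project below):

public Product GetProduct(string id)
{
    IDatabase cache = _connection.GetDatabase();

    // 1. Look in the cache first.
    string cached = cache.StringGet("product:" + id);
    if (cached != null)
        return JsonConvert.DeserializeObject<Product>(cached);

    // 2. Fall back to the data store.
    Product product = GetProductFromDb(id);

    // 3. Update the cache for future reads.
    cache.StringSet("product:" + id,
        JsonConvert.SerializeObject(product), TimeSpan.FromMinutes(5));

    return product;
}

public void UpdateProduct(Product product)
{
    // 1. Write to the data store first.
    SaveProductToDb(product);

    // 2. Invalidate the cached copy.
    _connection.GetDatabase().KeyDelete("product:" + product.Id);
}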

Implementation

Project : https://github.com/thuru/CloudPatterns/tree/master/CloudPatterns/CacheAsidePatternWebTemplate

The above project has an implementation of this pattern.

Data objects implement an interface, ICacheable, and an abstract class CacheProvider<ICacheable> has the abstract implementation of the cache provider. You can implement any cache provider by extending CacheProvider<ICacheable>; the GitHub sample contains code for both Azure Redis and AWS ElastiCache implementations.

Implementation of ICacheable : https://github.com/thuru/CloudPatterns/blob/master/CloudPatterns/CacheAsidePatternWebTemplate/Cache/ICacheable.cs

Implementation of CacheProvider<ICacheable>: https://github.com/thuru/CloudPatterns/blob/master/CloudPatterns/CacheAsidePatternWebTemplate/Cache/CacheProvider.cs

Implementation of AzureRedisCacheProvider : https://github.com/thuru/CloudPatterns/blob/master/CloudPatterns/CacheAsidePatternWebTemplate/Cache/AzureRedisCacheProvider.cs

The template also includes cache priming in Global.asax, which can be used to prime your cache (loading the most frequently accessed data at application start).

Let's hook up with the ASP.NET WebHooks Preview

WebHook – a simple HTTP-POST-based pub/sub mechanism between web applications or services. It is so effective that most modern web applications use it to handle event-based pub/sub.

ASP.NET WebHooks is a framework, currently in preview release, that eases the task of incorporating WebHooks in ASP.NET applications. It provides predefined receivers for subscribing to events from services like Instagram, GitHub and others. It also provides a way to set up our custom ASP.NET web applications to send WebHook notifications to subscribed clients, and to prepare the clients to receive those notifications. See the URL below for more information.

https://github.com/aspnet/WebHooks

How WebHooks work and the structure of the ASP.NET WebHook Framework

Webhooks are simple HTTP POST based pub/sub.

WebHooks have the following structure.

    • The web application/service which publishes the events should provide an interface for subscribers to register for the WebHooks.
    • Subscribers select the events they want to subscribe to and submit the callback URL to be notified at, along with other optional parameters; security keys are the most common of these.
    • The publisher persists the subscriber details.
    • When an event occurs, the publisher notifies all eligible subscribers by issuing a POST request to the callback URL along with the event data.

The above 4 steps are the vital steps of a working WebHook. Let's see how ASP.NET WebHooks implements this.

As of this writing, ASP.NET WebHooks is in preview, and the NuGet packages are also preview releases.

The support for sending WebHooks is provided by the following Nuget packages:

  • Microsoft.AspNet.WebHooks.Custom: This package provides the core functionality for adding WebHook support to your ASP.NET project. The functionality enables users to register WebHooks using a simple pub/sub model and for your code to send WebHooks to receivers with matching WebHook registrations.
  • Microsoft.AspNet.WebHooks.Custom.AzureStorage: This package provides optional support for persisting WebHook registrations in Microsoft Azure Table Storage.
  • Microsoft.AspNet.WebHooks.Custom.Mvc: This package exposes optional helpers for accessing WebHooks functionality from within ASP.NET MVC controllers. The helpers assist in providing WebHook registration through MVC controllers as well as creating event notifications to be sent to WebHook registrants.
  • Microsoft.AspNet.WebHooks.Custom.Api: This package contains an optional set of ASP.NET Web API controllers for managing filters and registrations through a REST-style interface.

ASP.NET WebHooks works well with Web API, and it has an Azure Table Storage provider for persisting the publisher metadata and event data.
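
For the sending side, here is a sketch of what publishing an event from a Web API controller looks like, assuming the Microsoft.AspNet.WebHooks.Custom packages are wired up; the event name, payload shape and SavePhotoAsync are illustrative:

public class PhotosController : ApiController
{
    public async Task<IHttpActionResult> Post(Photo photo)
    {
        await SavePhotoAsync(photo);   // hypothetical persistence

        // Notifies every matching WebHook registered by the current user.
        await this.NotifyAsync("photoadded", new { PhotoId = photo.Id });

        return Ok();
    }
}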

Please go through this article for the detailed information.

What is missing

According to the article, WebHooks are delivered to users based on authorization. ASP.NET WebHooks stores the subscriber information along with the user information; this helps it publish the event to the right subscriber.

For example, two users have subscribed to the same event.

User Id | Event | Callback URL
Thuru | PhotoAdded | https://thuru.net/api/webhooks/in/content
Bob | PhotoAdded | http://bob.net/api/hooks/incoming/photo

ASP.NET WebHooks requires the subscribers to log in to the application or the REST service in order to subscribe to events.

So when the event is triggered, the POST request is made to the right client based on the logged-in user; ASP.NET WebHooks uses user-based notifications. Is there any limitation in this?

Yes. Consider a scenario where you have an application with multiple customers, and each customer has many users. The admin of one customer wants to subscribe to the PhotoAdded event as above: her intention is to be notified whenever any of her users adds a photo. But if she registers for a WebHook by logging in with her own credentials, she will get notifications only when she herself adds a photo, because ASP.NET WebHooks by default provides user-based notifications. We also can't register this event at the global level with no authentication, because then she would be notified when users of other customers add photos.

I hope ASP.NET WebHooks will provide a way to customize the notifications. As of now, NotifyAsync is a static extension method, so overriding it is not possible.

Adapter Pattern

The Adapter pattern is often referred to as a wrapper. It is a pattern which introduces loose coupling by creating a middle interface between two unmatched types. The name describes itself.

Think we have a class which renders a DataSet on the screen, so we've got a working class like this.

class Renderer
{
    private readonly IDbDataAdapter _adapter;

    public Renderer(IDbDataAdapter dataAdapter)
    {
        _adapter = dataAdapter;
    }

    public void Render()
    {
        Console.WriteLine("Writing data....");

        DataSet dataset = new DataSet();
        _adapter.Fill(dataset);

        DataTable table = dataset.Tables.OfType<DataTable>().First();
        foreach (DataRow row in table.Rows)
        {
            foreach (DataColumn column in table.Columns)
            {
                Console.Write(row[column].ToString());
                Console.Write(" ");
            }
            Console.WriteLine();
        }

        Console.WriteLine("\nRender complete");
    }
}

A severe bug is that it renders only the first table of the DataSet, but forget about that bug for now and let's focus on the design.

Simply put, Renderer does these things.

  • It has a constructor which takes an IDbDataAdapter and assigns it to a readonly field.
  • Render() calls the Fill method of the IDbDataAdapter, passing a DataSet.
  • It takes the first table of the DataSet.
  • It displays the data.

 

So it's obvious that any class implementing IDbDataAdapter would be a perfect candidate for the Renderer class.

Now let's move on to the scenario.

Think we have a data object class Game.

public class Game
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}

And we have a class for rendering the game objects on the screen.

public class GameRenderer
{
    public string ListGames(IEnumerable<Game> games)
    {
        // do the rendering
        return games.ToString();
    }
}

We already have the Renderer class to do the rendering work for us, so we do not need to write the same code again. The only thing we have to do is connect GameRenderer to our Renderer.

But we've got problems.

  • We need an IDbDataAdapter implementation to use the Renderer.
  • Render() is a parameterless method with no return value, which has to be mapped to ListGames(IEnumerable<Game> games), which returns a string.

We need a class which works between GameRenderer and Renderer. That's the adapter class we're going to write: GameCollectionDBAdapter.

So our GameCollectionDBAdapter should be an IDbDataAdapter to work with the Renderer; at the other end it should be some other type to conform to the GameRenderer.

Create a new interface called IGameColllectionRenderer. This is the interface which conforms our adapter class to the GameRenderer.

To summarize the relationships: GameRenderer talks to IGameColllectionRenderer, and GameCollectionDBAdapter implements both IGameColllectionRenderer and IDbDataAdapter, so it can hand itself to the Renderer.



So now you’ve got the idea.

The rest of the code goes here.

Code for IGameColllectionRenderer

public interface IGameColllectionRenderer
{
    string ListGames(IEnumerable<Game> games);
}

Code for the GameCollectionDBAdapter, which is both an IDbDataAdapter and an IGameColllectionRenderer.

public class GameCollectionDBAdapter : IDbDataAdapter, IGameColllectionRenderer
{
    private IEnumerable<Game> _games;

    public string ListGames(IEnumerable<Game> games)
    {
        _games = games;

        // hand ourselves to the Renderer as its IDbDataAdapter
        Renderer renderer = new Renderer(this);
        renderer.Render();

        return _games.Count().ToString();
    }

    public int Fill(DataSet dataSet)
    {
        // project the Game collection into a DataTable the Renderer understands
        DataTable table = new DataTable();
        table.Columns.Add(new DataColumn() { ColumnName = "Id" });
        table.Columns.Add(new DataColumn() { ColumnName = "Name" });
        table.Columns.Add(new DataColumn() { ColumnName = "Description" });

        foreach (Game g in _games)
        {
            DataRow row = table.NewRow();
            row.ItemArray = new object[] { g.Id, g.Name, g.Description };
            table.Rows.Add(row);
        }

        dataSet.Tables.Add(table);
        dataSet.AcceptChanges();

        return _games.Count();
    }
}

 

Here IDbDataAdapter is not fully implemented; the Fill method alone is enough to run the code. But you have to provide blank implementations of the other members that throw NotImplementedException.

A slight change in your GameRenderer

public class GameRenderer
{
    private readonly IGameColllectionRenderer _gameCollectionRenderer;

    public GameRenderer(IGameColllectionRenderer gameCollectionRenderer)
    {
        _gameCollectionRenderer = gameCollectionRenderer;
    }

    public string ListGames(IEnumerable<Game> games)
    {
        return _gameCollectionRenderer.ListGames(games);
    }
}

Finally, the Main method.


class Program
{
    static void Main(string[] args)
    {
        List<Game> games = new List<Game>()
        {
            new Game() { Id = "2323", Name = "Need for Sleep", Description = "A game for sleepers" },
            new Game() { Id = "w4334", Name = "MK4", Description = "Ever green fighter game" }
        };

        GameRenderer gr = new GameRenderer(new GameCollectionDBAdapter());
        gr.ListGames(games);

        Console.ReadKey();
    }
}

Singleton Pattern

The Singleton pattern is a simple design pattern in software practice, sometimes considered an anti-pattern due to its tight-coupling nature.

A very simple, non-thread-safe implementation of the Singleton pattern would look like this.

Singleton non thread safe

class Singleton
{
    private static Singleton _instance;

    private Singleton()
    {
        Console.WriteLine("Singleton instantiated");
    }

    public static Singleton SingletonInstance
    {
        get
        {
            if (_instance == null)
            {
                // delay the object creation to demonstrate the thread safety
                Thread.Sleep(1500);
                _instance = new Singleton();
            }
            return _instance;
        }
    }
}

Run the above class using the Main method shown below and you will notice the Singleton constructor is called twice, once by each thread, since the implementation is not thread safe. (You can use the same Main method implementation for all 3 Singleton implementations.)

Main method implementation

class Program
{
    static void Main(string[] args)
    {
        new Thread(() => { Singleton sin1 = Singleton.SingletonInstance; }).Start();
        Singleton sin2 = Singleton.SingletonInstance;

        Console.ReadKey();
    }
}

Singleton thread safe

Making the above implementation thread safe is not a complex task; we can use our same old locking technique.

class Singleton
{
    private static Singleton _instance;
    private static object _lock = new object();

    private Singleton()
    {
        Console.WriteLine("Singleton instantiated");
    }

    public static Singleton SingletonInstance
    {
        get
        {
            lock (_lock)
            {
                if (_instance == null)
                {
                    // delay the object creation to demonstrate the thread safety
                    Thread.Sleep(1500);
                    _instance = new Singleton();
                }
                return _instance;
            }
        }
    }
}

The above is a perfectly fine Singleton implementation in C#. But is there any other way to get the Singleton behavior without paying for it in locking? Locking is a performance cost on every access.

Singleton C# way – the trendy way

This is a neat and trendy way to implement the Singleton.

We do not use locks in this implementation, and it is very fast yet fully thread safe.

class Singleton
{
    private static readonly Singleton _instance;

    static Singleton()
    {
        _instance = new Singleton();
    }

    private Singleton()
    {
        Console.WriteLine("Singleton instantiated");
    }

    public static Singleton SingletonInstance
    {
        get
        {
            return _instance;
        }
    }
}

The magic is the static constructor.

A static constructor is used to initialize any static data, or to perform a particular action that needs to be performed once only. It is called automatically before the first instance is created or any static members are referenced.

To remember it simply, you can think of the static constructor as being called when the class is loaded.

More about static constructors in this MSDN article.