Category Archives: Articles

Build your SaaS right with Azure

The cloud holds proven promise of opportunity and agility for ISVs. Modern cloud platforms have low entry barriers and a huge array of service offerings beyond traditional enterprise application requirements. Cloud platforms give SaaS applications a complete environment, with cutting-edge services, intelligence as a service, continuous integration and continuous delivery, and the compute and storage scale needed for global reach.

The current digitized environment, device proliferation and the span of intelligent cloud services provide the right mix of social, technical and business conditions for SaaS products to emerge and succeed.

The cloud gives every SaaS player an equal opportunity. Technical and business domain skills and expertise are the vital elements for success in the SaaS playground: knowing the business and knowing the technology are the two most important factors.

From a SaaS consumer's point of view, a customer has ample choices from a long list of SaaS providers. Having the right mix of features, availability, security and business model is important; choosing the right tools at the right time at the right cost is the skill to master.

Figure 1: What customers expect from SaaS providers.

Source: Frost & Sullivan, 2017

In order to deliver a successful SaaS application, ISVs should have attributes such as concrete DevOps practices to deliver features and fixes seamlessly, responsible SaaS adoption models addressing administration and shadow IT, trust in the privacy of data and encryption, promising service uptime, and more.

DevOps with Azure Tooling

Azure tooling brings agile development practices with continuous integration and continuous delivery. Code changes take immediate effect in the build pipeline through VSTS build definitions and are deployed to the respective environments in Azure.

Figure 2: The simple DevOps model with Azure tooling


Environment and resource provisioning is handled via automated ARM template deployments from the VSTS build and release pipeline. The model depicted in Figure 2 varies based on the context and complexity of the project, with multiple environments, workflows and different services.
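As a minimal sketch of such an automated deployment (the storage account name and API version here are illustrative placeholders, not taken from the article), an ARM template deployed by the release pipeline could look like this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "mysaasstorage01",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "kind": "Storage",
      "sku": { "name": "Standard_LRS" },
      "properties": {}
    }
  ]
}
```

The release definition would point its Azure resource group deployment step at a template like this, so every environment is provisioned the same, repeatable way.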

Centralized Administration and Shadow IT

Customers are concerned with how a SaaS offering enables centralized organizational access management. On the other hand, SaaS providers require a frictionless approach to service adoption, enabling as many users as possible.

Azure-based organizational SaaS implementations often utilize Azure Active Directory (AAD) integration and Single Sign-On (SSO).

Data Security and Encryption

Customers trust SaaS providers with their data. It is the most valuable asset SaaS providers take responsibility for in delivering value and helping their customers' business. Data security and encryption is a prime concern, growing rapidly with complex and fast-evolving regulatory and compliance requirements.

Azure has strong compliance support, along with tools and services for data protection. It offers many out-of-the-box data encryption and protection services such as Transparent Data Encryption (TDE), Dynamic Data Masking (DDM), Row Level Security (RLS) and built-in blob encryption.

In certain cases, the built-in security features do not provide sufficient protection and compliance. In those sensitive environments we can leverage additional Azure services which provide a higher degree of data security.

Figure 3: Advanced data security implementation in Azure


Azure Key Vault based encryption with SQL Database Always Encrypted, blob (envelope) encryption, AAD-based access control and MFA can be implemented in such cases. This also enables Bring Your Own Key (BYOK) models of encryption, where customers provide and manage their own keys.

Uptime

Service uptime should be considered not only during unexpected failures but also during updates.

Azure provides built-in geo-replication for databases, storage and certain other services. Application-tier redundancy is implemented with Traffic Manager. Configuring geo-replication and redundancy introduces concerns such as geographic data regulations, synchronization issues and performance.

Azure tools like Application Insights for application monitoring and telemetry, auto scaling, geo-replication, Traffic Manager and many others are combined with architectural practices to deliver the required uptime for a SaaS application.

Conclusion

Apart from the technologies and tools, SaaS application development on a cloud platform requires expertise in the platform of choice in order to achieve cost effectiveness, business agility and innovation.

How a SaaS application is bundled and sold is a crucial decision that drives technology strategies such as cache priming, tenant isolation, security aspects, centralized security and multi-tenancy at different service layers.

This article provides a high-level view of the considerations customers expect from SaaS providers and how Azure tools and services can help in achieving them.

 

 

Contribution of cloud computing to the Agile

I can be pretty sure that almost every time we hear the word Agile, our minds relate it to the Agile software development process rather than the English word agile. Even Google thinks so. True enough, the semantics of the English word agile are the key to naming the so-called process Agile.


The reason I gave such an introduction to Agile is to show how much popularity the process has gained over time. There are different ways to implement Agile, and I don't know any of them properly by the rules. But I understand that the core of Agile is iterative thinking in an incremental delivery mode. That's the key; the rest is how you do it.

Thinking about current software delivery, the Agile process and how it evolved from the much-blamed waterfall model, I felt a little happy about myself for knowing some old-school stuff. I think I was lucky enough to work with computers with huge keyboards that sounded like a shutting clam, with green monochrome screens. They used to run the so-called DOS 6.2. I have written programs in GW-BASIC and FoxPro and used 5¼-inch floppy disks.

Software used to be developed and delivered totally differently in those days. An ISV had to write the software and ship it on physical media (floppy disks or optical discs), mostly with a serial key for licensing purposes. We couldn't think of iterative delivery in that model. A huge, complex piece of software would have ended up as hundreds of CDs delivered to the client every two weeks, probably requiring a delivery service like DHL or FedEx.

So delivery and development practices were forced to stay locked inside the boundary of the waterfall model, because frequent deliveries were mostly impossible due to technology limitations. And in those days most software was written for desktop computers.

Over time the industry evolved and cloud computing became the heart and soul of IT. Software development practices started to change, and most development now targets the cloud.

The cloud not only facilitates different licensing models and changes how organizations manage their resources; it has also changed the entire software development process. It brought the trends of continuous delivery, online build automation, continuous integration, cloud source control and many more features which are core to iterative development and agile methodologies.

Without those tools and technical processes we cannot imagine implementing Agile in modern software development. The cloud facilitates modern Agile software development and DevOps.

Every line of change is reflected to customers in near real time through end-to-end automation. Iterative development is fueled by fast feedback loops, and continuous delivery is vital to gaining them. Cloud computing facilitates this phenomenon.


The developer and operations workflow is seamless with cloud computing. Platforms like Microsoft Azure provide an end-to-end DevOps workflow with tools like Visual Studio Online, Azure Web Apps and Application Insights, which map exactly to this workflow.

The cloud is not simply a platform; it's the trend setter.

How to create custom NuGet packages

NuGet provides an easy and very efficient solution for distributing packages. A NuGet package can contain assemblies, content files and other tools that you want to distribute. A NuGet package is described by a manifest file known as a nuspec. View the nuspec reference.

You can use three subfolders in the NuGet package structure: \lib, \content and \tools. \lib stores the assemblies; \content stores scripts, images, style sheets, etc.; and \tools contains PowerShell scripts which mostly handle package events (installation and uninstallation of the package).

The \content folder also contains the transformation files which apply changes to files such as web.config and app.config. NuGet packages can also be used to insert code and create code files.
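As a small illustration of such a transformation (the key name here is a made-up placeholder), a file named web.config.transform placed under \content merges its elements into the consuming project's web.config when the package is installed:

```xml
<configuration>
  <appSettings>
    <!-- This element is merged into the project's web.config on install -->
    <add key="MyPackageSetting" value="demo" />
  </appSettings>
</configuration>
```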

You can create NuGet packages using either the visual designer or the command line. In order to use the command line, we need NuGet.exe, which can be downloaded from this link.

Open CMD and go to the NuGet.exe path (or add the NuGet.exe path to the environment variables to access it from anywhere).

Type the following command to make sure you're running the latest version of NuGet.exe:

nuget update -self

Then we have to create the nuspec for the package. Create the three folders mentioned above and copy the assemblies into \lib. You can have subfolders inside those folders. For example, you can have \content\images to store images; when your package is referenced, a folder named 'images' will be created in the project.
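The layout described above can be sketched as a quick script (the package name, MyLib.dll and the other file names are hypothetical placeholders; forward slashes are used here, but the structure is the same on Windows):

```shell
# Create the conventional NuGet package layout.
mkdir -p mypackage/lib mypackage/content/images mypackage/tools

# \lib holds the assemblies referenced by the consuming project
touch mypackage/lib/MyLib.dll

# \content holds files copied into the project; subfolders such as
# content\images are recreated in the consuming project on install
touch mypackage/content/images/logo.png

# \tools holds PowerShell scripts run on package events
touch mypackage/tools/install.ps1

ls -R mypackage
```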

After setting up the file and folder structure, run the following command to create the nuspec file.

nuget spec

This will create the nuspec manifest file with tokens for parameters like author, owner, description, URLs, etc. Open this nuspec file and enter those details. A sample nuspec file looks like this:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>Package</id>
    <version>1.0.0</version>
    <authors>Thuru</authors>
    <owners>Thuru</owners>
    <licenseUrl>https://thuruinhttp.wordpress.com</licenseUrl>
    <projectUrl>https://thuruinhttp.wordpress.com</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Package description</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright 2014</copyright>
    <tags>Tag1 Tag2</tags>
    <dependencies>
    </dependencies>
  </metadata>
</package>

Once the nuspec file is created (say it is named mypackage.nuspec), you can create the NuGet package. Run the command below:

nuget pack mypackage.nuspec

This will create the NuGet package. You may get errors and warnings based on some values in the nuspec.

This is the structure I used. (Note: I copied NuGet.exe into my working folder because I didn't set the environment variable.)


Few good to know features of Entity Framework

I have been reading about advanced EF features for the last couple of weeks and got to know some interesting things. This post is a compilation of that information and provides links to the relevant sites.

Starting with EF 5.0, the generated context derives from DbContext instead of ObjectContext by default. So what's the difference between ObjectContext and DbContext?

This link gives the simplest answer to the question above. And using the following code you can get the ObjectContext from your DbContext. Say you have a model class named MyEntities; the best practice is to create a partial class (partial classes are the safest approach, since the auto-generated classes can be overwritten whenever the code is regenerated) and add a property that returns the ObjectContext. The code below describes the idea.

public partial class MyEntities : DbContext
{
    // Exposes the underlying ObjectContext of this DbContext
    public System.Data.Entity.Core.Objects.ObjectContext ObjectContextProperty
    {
        get
        {
            return ((IObjectContextAdapter)this).ObjectContext;
        }
    }
}

 

Compiled Queries

Compiled queries are helpful when we want to squeeze the final drop of performance from our application: we cache the T-SQL statement produced from the LINQ to Entities queries we write. Since we write the queries in LINQ, they are first translated into T-SQL statements, which are then executed against SQL Server. By caching the T-SQL for these LINQ queries we cut down the translation time. Microsoft has mentioned this provides a 7% performance improvement. Details here.

This article provides information on creating compiled queries

MSDN Compile Query Class

The reason I haven't mentioned any samples here is that we do not need to worry much about query compilation in EF 5.0 and above, because it happens automatically.

This MSDN blog post explains automatic query compilation in EF 5.0 and above. If you want more advanced, granular control over compilation, you should consider implementing your own compiled queries; it's best to keep them in your data repository classes.

Performance

Read this Data Access blog article about EF 5.0 performance; a chart from the blog summarizes the gains.


As you can see, upgrading to EF 5.0 on .NET 4.5 gives a big performance boost to LINQ to Entities; automatic query compilation is one of the main contributors to this gain.

EF Versions – http://msdn.microsoft.com/en-us/data/jj574253

This Channel 9 video is a good one about EF 5 and EF 6:

 http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Entity-Framework-5-and-6

Microsoft doesn’t embrace its own products

I have noticed a few things that make me feel MS doesn’t always embrace its own products. For example, when MS launched Windows Phone 7 and 7.5 there were massive marketing campaigns about the phone.

But on the Live/Hotmail (now Outlook) login page, the iPhone sat in the middle as the highlighted smartphone that supports Hotmail/Live. Then, thankfully, someone noticed and changed it; later a Windows Phone was in the middle.

Yesterday I came across a big, frustrating problem when dealing with Windows Azure websites. MS has been doing a really great job with Azure, and Azure Websites deployment provides plenty of options for hosting websites. We can pull the website files from various sources, and Dropbox is among them. But they don’t have an option to pull a folder from SkyDrive to Azure websites, while they still have one for Dropbox.

I can’t believe this. Really confused.


After a few minutes I saw this tweet.


I really don’t know what’s going on. But I think this is the real problem in MS now: there’s no communication; everyone does something on their own. MS is not a company that became big yesterday. They should have, and I hope they do have, processes for integration and streamlined communication between products. I wonder whether they lack the processes or someone has forgotten them along the way.

Microsoft’s confuzzled.

You might wonder what this confuzzled is. It’s a word I heard in a movie or a TV series (I don’t remember exactly where), a combination of confused and puzzled. Cool, isn’t it? But it perfectly suits the current situation of Silverlight.

Silverlight is a good platform; no one can deny that. MS released it as a RIA plugin for the web (initially to battle Adobe Flash and JavaFX). Then Microsoft realized that HTML5 is great and started promoting the open web.

They started campaigning that Silverlight is not meant for the web. At that time Mango was in hype, so Silverlight got a safe shelter under the Mango tree.

Microsoft also announced LightSwitch, which is based on Silverlight (its out-of-browser feature).

LightSwitch has drawn attention from some ISVs and has been growing, but it is not very popular in the broader market. Though it is a pleasant platform to develop applications on, deploying a LightSwitch app is a little hard.

But now Silverlight has a very serious problem: it doesn’t know where to go and where exactly to fit in, because of the new boss, WinRT. Just kidding.

We all know that Windows 8 apps run on top of WinRT, and Windows Phone 8 is also a WinRT platform. No more Silverlight in Windows Phone.

Poor Silverlight, it lost one of its major safe houses.

So as of now, LightSwitch is the only guaranteed place for Silverlight to rest in peace.

But you might wonder: we still use XAML in WinRT, so how come we lose Silverlight? XAML is a powerful UI standard created for WPF, then ported to Silverlight, and now used in WinRT.

So you can still use the XAML skills you’ve gained in layout, user interaction, data binding and so on in WinRT, but Silverlight is no longer there. That’s it.

So what about the existing WP7 (Mango) apps written in Silverlight? MS promised an automatic recompile of the apps in the marketplace to WinRT.

And since XAML is there, Silverlight devs need not panic much, because they can still use XAML and features like the command model, the converter model and other patterns in WinRT. (I think the dependency property is still there; I’m not so sure.)

Silverlight also has the promise of BI 2.0, but again in a very limited scope. In SQL Server 2012, Power View has a sort of Silverlight touch.

So, putting all the formulas together, I think Silverlight 5 will be the last version.

Everything about Windows Processes

This is a great website which has almost all the information we need about the processes running on the Windows operating system.

http://www.neuber.com/taskmanager/process/index.html

From time to time we look at our Task Manager and wonder what a particular process is for. Is it a Windows process, a third-party one, or a virus? The link above gives you all the data about Windows processes.

Here I have listed some of them (summarized from the site above) which seem suspicious but are not dangerous.

igfxsrvc.exe

Igfxsrvc.exe is installed alongside Intel graphics accelerator cards and on-board graphics chipsets. It is the Common User Interface; it starts with the operating system and occupies a slot in the system tray. It provides the graphical interface for all the display settings of the chipset, including color quality and screen resolution. http://www.neuber.com/taskmanager/process/igfxsrvc.exe.html

csrss.exe

This is the user-mode portion of the Win32 subsystem; Win32.sys is the kernel-mode portion. Csrss stands for Client/Server Run-Time Subsystem, and is an essential subsystem that must be running at all times. Csrss is responsible for console windows, creating and/or deleting threads, and implementing some portions of the 16-bit virtual MS-DOS environment.

hkcmd.exe

"hkcmd.exe" is Intel’s "extreme" graphics hot key interceptor. If you never use the Intel hotkeys. You can turn off the hot keys and close this process.

dwm.exe

One of the new features in Windows Vista/7 is the Desktop Window Manager (DWM). It is responsible for graphical effects such as live window previews and the glass-like frame around windows (Aero Glass), without draining your CPU.

rundll32.exe

This program is part of Windows, and is used to run program code in DLL files as if they were within the actual program. However, many viruses also use this name or similar ones. This file is also commonly used by spyware to launch its own malicious code. http://www.neuber.com/taskmanager/process/rundll32.exe.html

Workgroup and Domain

If you are working in a Windows environment you might have some simple questions about what these are. Here I have listed some simple questions and the answers I found by exploring on my own and referring to the Internet. Please leave a comment in case I’m wrong.

 

What is a Computer Name and the Full Computer Name in Windows OS ?

A computer name in Windows is just the name of the computer, much like you have a name. For example, Thuru-PC.

A full computer name is also known as a fully qualified domain name (FQDN); it includes the host (computer) name, the domain name, and all the higher-level domains. For example, thuru-pc.elixir.com.
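A quick way to check the computer name is the hostname command, which works in a Windows command prompt as well (on a domain-joined machine, systeminfo also shows the domain it belongs to):

```shell
# Print the computer (host) name of this machine
hostname
```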

 

What is the difference between Domain, Workgroup and a Home group ?

Domains, workgroups, and homegroups represent different methods for organizing computers in networks. The main difference among them is how the computers and other resources on the networks are managed.

Computers running Windows on a network must be part of a workgroup or a domain. Computers running Windows on home networks can also be part of a home group, but it’s not required.

Computers on home networks are usually part of a workgroup and possibly a home group, and computers on workplace networks are usually part of a domain.

 

In a Workgroup

  • All computers are peers; no computer has control over another computer.

  • Each computer has a set of user accounts. To log on to any computer in the workgroup, you must have an account on that computer.

  • There are typically no more than twenty computers.

  • A workgroup is not protected by a password.

  • All computers must be on the same local network or subnet.

In a Home group

  • Computers on a home network must belong to a workgroup, but they can also belong to a home group. A home group makes it easy to share pictures, music, videos, documents, and printers with other people on a home network.

  • A home group is protected with a password, but you only need to type the password once, when adding your computer to the home group.

 

Note: Home groups aren’t available in Windows Server 2008 R2.

 

In a domain
  • One or more computers are servers. Network administrators use servers to control the security and permissions for all computers on the domain. This makes it easy to make changes because the changes are automatically made to all computers. Domain users must provide a password or other credentials each time they access the domain.

  • If you have a user account on the domain, you can log on to any computer on the domain without needing an account on that computer.

  • You probably can make only limited changes to a computer’s settings because network administrators often want to ensure consistency among computers.

  • There can be thousands of computers in a domain.

  • The computers can be on different local networks.