
Microservices with Azure

By Rahul Rai and Namit Tanasseri
About this book
Microsoft Azure is rapidly evolving and is widely used as a platform on which you can build Microservices that can be deployed in heterogeneous environments, both on-premises and in the cloud, through Microsoft Azure Service Fabric. This book will help you understand the concepts of Microservice application architecture and build highly maintainable and scalable enterprise-grade applications using the various services in Microsoft Azure Service Fabric. We will begin by examining the intricacies of the Microservices architecture and its advantages over the monolithic architecture and Service Oriented Architecture (SOA) principles. We will present various scenarios where Microservices should be used and walk you through the architectures of Microservice-based applications. Next, you will take an in-depth look at Microsoft Azure Service Fabric, which is the best-in-class platform for building Microservices. You will explore how to develop and deploy sample applications on Microsoft Azure Service Fabric to gain a thorough understanding of it. Building Microservice-based applications is complicated, so we will take you through several design patterns that solve the various challenges associated with realizing the Microservices architecture in enterprise applications. Each pattern is clearly illustrated with examples that you can keep referring to when designing applications. Finally, you will be introduced to advanced topics such as Serverless computing and DevOps using Service Fabric, to help you undertake your next venture with confidence.
Publication date:
June 2017
Publisher
Packt
Pages
360
ISBN
9781787121140

 

Chapter 1. Microservices – Getting to Know the Buzzword

The world of information technology today is witnessing a revolution influenced by cloud computing. Agile, inexpensive, scalable infrastructure that is completely self-serviced and pay-per-use plays a critical part in optimizing the operational efficiency and time-to-market of software applications across all major industries. With the changing nature of the underlying hardware and operational strategies, many companies find it challenging to meet the competitive business requirement of delivering applications, or application features, that are highly scalable, highly available, and continuously evolving.

The agility of this change has also compelled solution architects and software developers to constantly rethink how they architect a software solution. Often, a new architecture model is inspired by learnings from the past. Microservices-driven architecture is one such example, inspired by Service-Oriented Architecture (SOA). The idea behind Microservices-based architecture is heavily based on componentization, abstraction, and object-oriented design, none of which is new to a software engineer.

In a traditional application, this factorization is achieved by using classes and interfaces defined in shared libraries accessed across multiple tiers of the application. The cloud revolution encourages developers to distribute their application logic across services to better cater to changing business demands such as faster delivery of capabilities, increased reach to customers across geographies, and improved resource utilization.

 

What are Microservices?


In simple words, a Microservice can be defined as an autonomous software service which is built to perform a single, specific, and granular task.

The word autonomous in the preceding definition stands for the ability of the Microservice to execute within isolated process boundaries. Every Microservice is a separate entity which can be developed, deployed, instantiated, scaled, and managed discretely.

The language, framework, or platform used for developing a Microservice should not impact its invocation. This is achieved by defining communication contracts that adhere to industry standards. Commonly, Microservices are invoked through network calls over standard internet protocols such as HTTP, typically exposing REST-style interfaces.
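The chapter gives no code at this point, but such a contract can be sketched with nothing more than a standard library; the following Python example exposes one invented REST-style resource (`/api/items` and its payload are illustrative, not from the book), so that any client that speaks HTTP and JSON can call it regardless of its own language or framework:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class InventoryHandler(BaseHTTPRequestHandler):
    """A toy 'inventory' Microservice exposing one REST-style resource."""

    def do_GET(self):
        if self.path == "/api/items":
            # The JSON body is the communication contract: callers depend
            # on its shape, not on how this service is implemented.
            body = json.dumps({"items": ["widget", "gadget"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging for the example

def start_service():
    # Port 0 asks the OS for any free port; the registry or caller
    # would normally learn the address through service discovery.
    server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_service()
    url = f"http://127.0.0.1:{server.server_port}/api/items"
    print(json.loads(urlopen(url).read()))
    server.shutdown()
```

Because only the HTTP contract is shared, this service could later be rewritten in C# or Java without any change to its consumers.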

On cloud platforms, Microservices are usually deployed on a Platform as a Service (PaaS) or Infrastructure as a Service (IaaS) stack. It is recommended to employ management software to regulate the lifecycle of Microservices on a cloud stack. This is especially desirable in solutions that require high-density deployment, automatic failover, predictive healing, and rolling updates. Microsoft Azure Service Fabric is a good example of distributed cluster management software that can be used for this purpose. More about this is covered in later sections of this book.

Microservices are also highly decoupled by nature and follow the principle of minimum knowledge. The details about the implementation of the service and the business logic used to achieve the task are abstracted from the consuming application. This property of the service enables it to be independently updated without impacting dependent applications or services. Decoupling also empowers distributed development as separate teams can focus on delivering separate Microservices simultaneously with minimal interdependency.

It is critical for a Microservice to focus on the task it is responsible for. This property is popularly known as the Single Responsibility Principle (SRP) in software engineering. This task ideally should be elementary by nature. Defining the term elementary is a key challenge involved in designing a Microservice. There is more than one way of doing this:

  • Restricting the cyclomatic complexity of the code module defining the Microservice is one way of achieving this. Cyclomatic complexity indicates the complexity of a code block by measuring the linear independent paths of execution within it.
  • Logical isolation of functionality based on the bounded context that the Microservice is a part of.
  • Another, simpler way is to estimate the duration required to deliver the Microservice.

Irrespective of the approach, it is important to set both minimum and maximum complexity bounds for Microservices before designing them. Services that are too small, also known as Nanoservices, can introduce serious performance and maintenance hurdles of their own.
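As a rough illustration of the first approach above, a crude cyclomatic complexity estimate can be computed from a function's syntax tree. The sketch below (in Python, purely for illustration; the set of decision nodes is a simplification of the full McCabe definition) counts branch points and could serve as a gate on how "elementary" a candidate Microservice's code module is:

```python
import ast

# Node types that create an additional independent execution path.
# This is a simplified subset of McCabe's decision points.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """Estimate McCabe complexity as 1 + the number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        return x\n"
    "    for i in range(x):\n"
    "        if i % 2:\n"
    "            x += i\n"
    "    return x\n"
)

print(cyclomatic_complexity(simple))   # straight-line code
print(cyclomatic_complexity(branchy))  # one if + one for + one nested if
```

A team could, hypothetically, reject candidate service boundaries whose modules exceed an agreed threshold, while treating very low scores as a hint that the service may be a Nanoservice.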

Microservices can be developed using any programming language or framework driven by the skills of the development team and the capability of the tools. Developers can choose a performance-driven programming language such as C or C++ or pick a modern managed programming language such as C# or Java. Cloud hosting providers such as Azure and Amazon offer native support for most of the popular tools and frameworks for developing Microservices.

A Microservice typically has three building blocks – code, state, and configuration. The ability to independently deploy, scale, and upgrade them is critical for the scalability and maintainability of the system. This can be a challenging problem to solve. The choice of technology used to host each of these blocks will play an important role in addressing this complexity. For instance, if the code is developed using .NET Web API and the state is externalized on an Azure SQL Database, the scripts used for upgrading or scaling will have to handle compute, storage, and network capabilities on both these platforms simultaneously. Modern Microservice platforms such as Azure Service Fabric offer solutions by co-locating state and code for the ease of management, which simplifies this problem to a great extent.

Co-location, or having code and state exist together, for a Microservice has many advantages. Support for versioning is one of them. In a typical enterprise environment, it's a common requirement to have side-by-side deployments of services serving in parallel. Every upgrade to a service is usually treated as a different version which can be deployed and managed separately. Co-locating code and state helps build a clear logical and physical separation across multiple versions of Microservices. This will simplify the tasks around managing and troubleshooting services.

A Microservice is always associated with a unique address. In the case of a web-hosted Microservice, this address is usually a URL. This unique address is required for discovering and invoking the Microservice. The discoverability of a Microservice must be independent of the infrastructure hosting it. This calls for a service registry that keeps track of where each service is hosted and how it can be reached. Modern registry services also capture the health information of Microservices, acting like a circuit breaker for the consuming applications.
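A minimal in-memory sketch of such a registry (the names, method signatures, and service addresses below are invented for illustration; real registries persist this data and track it per instance) shows how health tracking doubles as a simple circuit breaker:

```python
class ServiceRegistry:
    """Toy service registry: tracks the address and health of each service."""

    def __init__(self):
        # name -> {"address": str, "healthy": bool}
        self._services = {}

    def register(self, name, address):
        self._services[name] = {"address": address, "healthy": True}

    def report_health(self, name, healthy):
        """Health monitoring feeds the registry, not the consumers directly."""
        self._services[name]["healthy"] = healthy

    def resolve(self, name):
        """Return the address of a healthy instance. Acting like a circuit
        breaker, the registry refuses to hand out unhealthy services."""
        entry = self._services.get(name)
        if entry is None or not entry["healthy"]:
            raise LookupError(f"no healthy instance of {name!r}")
        return entry["address"]

registry = ServiceRegistry()
registry.register("billing", "http://10.0.0.5:8080")
print(registry.resolve("billing"))
```

Consumers always go through `resolve`, so a service can be moved to a different host, or taken out of rotation when unhealthy, without the consumer knowing anything about the underlying infrastructure.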

Microservices natively demand hyperscale deployments. In simpler words, Microservices should scale to handle increasing demands. This involves seamless provisioning of compute, storage, and network infrastructure, and it brings challenges around lifecycle management and cluster management. A Microservices hosting platform typically has the features to address these challenges.

Microservices hosting platform

The primary objective of a Microservices hosting platform is to simplify the tasks around developing, deploying, and maintaining Microservices while optimizing the infrastructure resource consumption. Together, these tasks can be called Microservice lifecycle management tasks.

The journey starts with the hosting platform supporting the development of Microservices by providing means for integrating with platform features and the application framework. This is critical to enable the hosting platform to manage the lifecycle of a service hosted on it. Integration is usually achieved by the hosting platform exposing APIs (application programming interfaces) that can be consumed by the development team. These APIs are generally compatible with popular programming languages.

Co-locating code and state is desirable for improving the efficiency of a Microservice. While this is true, storing state locally introduces challenges around maintaining the integrity of data across multiple instances of a service. Hosting platforms such as Service Fabric come with rich features for maintaining consistency of state across multiple instances of a Microservice, thereby abstracting the complexity of synchronizing state from the developer.

The hosting platform is also responsible for abstracting the complexity of the physical deployment of Microservices from the development team. One way this is achieved is by containerizing the deployment. Containers are operating system-level virtualized environments, meaning that the kernel of the operating system is shared across multiple isolated virtual environments. Container-based deployment makes possible an order-of-magnitude increase in the density of deployed Microservices. This is aligned with the recommended cloud design pattern called compute resource consolidation. A good example to discuss in this context, as mentioned by Mark Fussell from Microsoft, is the deployment model for Azure SQL Databases hosted on Azure Service Fabric. An Azure SQL Database cluster comprises hundreds of machines running tens of thousands of containers hosting a total of hundreds of thousands of databases. Each of these containers hosts code and state associated with multiple Microservices. This is an inspiring example of how a good hosting platform can handle hyperscale deployment of Microservices.

A good hosting platform will also support deployment of services across heterogeneous hardware configurations and operating systems. This is significant for meeting demands of services which have specific requirements around high-performance hardware. An example would be a service which performs GPU (graphics processing unit) intensive tasks.

Once the Microservices are deployed, management overhead should be delegated to the hosting platform. This includes reliability management, health monitoring, managing updates, and so on. The hosting platform is responsible for the placement of a Microservice on a cluster of virtual machines. The placement is driven by a highly optimized algorithm which considers multiple constraints at runtime to efficiently pick the right host virtual machine for a Microservice.

The following diagram illustrates a sample placement strategy of Microservices in a cluster:

Microservice placement strategy

As the number of Microservices grows, so does the demand for automated monitoring and diagnostics systems that take care of the health of these services. The hosting platform is responsible for capturing monitoring information from every Microservice, aggregating it, and storing it in a centralized health store. The health information is then exposed to consumers and is also ingested by the hosting platform itself to take corrective measures. Modern hosting platforms support features such as predictive healing, which uses machine learning to predict future failures of a virtual machine and takes preventive actions to avoid service outages. This information is also used by the failover manager subsystem of the hosting platform to identify the failure of a virtual machine and to automatically reconfigure service replicas to maintain availability. The failover manager also ensures that when nodes are added to or removed from the cluster, the load is automatically redistributed across the available nodes. This is a critical feature of a hosting platform, considering the tendency of cloud resources, which run on commodity hardware, to fail.
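To make the failover manager's job concrete, here is a toy sketch (not Service Fabric's actual placement algorithm, which weighs many runtime constraints) of greedy replica placement and of re-placing orphaned replicas when a node fails:

```python
def place_replicas(replicas, nodes):
    """Greedy placement: each replica goes to the node carrying the
    fewest replicas so far, giving an even spread."""
    placement = {node: [] for node in nodes}
    for replica in replicas:
        target = min(placement, key=lambda n: len(placement[n]))
        placement[target].append(replica)
    return placement

def fail_over(placement, failed_node):
    """When a node fails, re-place its replicas on the remaining
    nodes, again preferring the least-loaded node."""
    orphaned = placement.pop(failed_node)
    for replica in orphaned:
        target = min(placement, key=lambda n: len(placement[n]))
        placement[target].append(replica)
    return placement

cluster = place_replicas(["r1", "r2", "r3", "r4"], ["n1", "n2", "n3"])
cluster = fail_over(cluster, "n1")
print(cluster)  # n1's replicas now live on n2 and n3
```

A real failover manager would additionally respect placement constraints (fault domains, upgrade domains, hardware requirements) rather than just replica counts.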

Considering that migrating to a Microservices architecture can be a significant change in terms of the programming paradigm, deployment model, and operational strategy, a question that usually arises is: why adopt a Microservices architecture?

The Microservice advantage

Every application has a shelf life, after which it is either upgraded or replaced with another application that has evolved capabilities or is a better fit for changing business needs. The agility in businesses has reduced this shelf life further by a significant factor. For instance, if you are building an application for distributing news feeds among employees within a company, you would want to build quick prototypes and get feedback on the application sooner rather than executing an elaborate design and planning phase, with the cognizance that the application can be further optimized and revised iteratively. This technique also comes in handy when you are building a consumer application where you are unsure of the scale of growth in the user base. An application such as Facebook, which grew its user base from a couple of million to 1,500 million in a few years, would have been impossible to plan for if the architecture had not been designed to accommodate future needs. In short, modern-day applications demand architectural patterns that can adapt, scale, and gracefully handle changes in workload.

To understand the benefits of the Microservices architecture for such systems, we will require a brief peek at its predecessor, the monolithic architecture. The term monolith stands for a single large structure. A typical client-server application of the previous era would use a tiered architecture. Tiers would be decoupled from one another and would use contracts to communicate with each other. Within a tier, components or services would be packed with high cohesion, making them interdependent.

The following diagram illustrates a typical monolithic application architecture:

Monolithic application deployment topology

This works fine in simpler systems that aim to solve a static problem for a constant user base. The downside is that the components within a tier cannot scale independently, nor can they be upgraded or deployed separately. The tight coupling also prevents the components from being reused across tiers. These limitations introduce major roadblocks when a solution is expected to be agile by nature.

The Microservices architecture addresses these problems by decomposing tightly coupled monolithic tiers into smaller services. Every Microservice can be developed, tested, deployed, reused, scaled, and managed independently. Each of these services aligns with a single business functionality. The development team authoring a service can work independently with the customer to elicit business requirements and build the service with the technology best suited to the implementation of that particular business scenario. This means that there are no overarching constraints on the choice of technology to be used or the implementation patterns to be followed. This is perfect for an agile environment where the focus is more on delivering business value than on long-term architectural benefits. A typical set of enterprise applications may also share Microservices between them. The following diagram illustrates the architecture of such a Microservices-driven solution:

Microservice application deployment topology

The following are a few key advantages of a Microservice architecture:

Fault tolerance

As the system is decomposed into granular services, the failure of one service will not impact other parts of the system. This is important for a large, business-critical application. For instance, if a service logging the events of the system fails, it will not impact the functioning of the whole system.

The decomposed nature of the services also helps fault isolation and troubleshooting. With proper health monitoring systems in place, a failure of a Microservice can be easily identified and rectified without causing downtime to the rest of the application. This also applies to application upgrades. If a newer version of a service is not stable, it can be rolled back to an older version with minimal impact to the overall system. Advanced Microservice hosting platforms such as Service Fabric also come with features such as predictive healing, which uses machine learning to foresee failures and takes preventive measures to avoid service downtime.

Technology-agnostic

In today's world, where technology changes fast, eliminating long-term commitment to a single technology stack is a significant advantage. Every Microservice can be built on a separate technology stack and can be redesigned, replaced, or upgraded independently, as they execute in isolation. This means that every Microservice can be built using a different programming language and a different type of data store that best suits the solution. This reduces the dependency concerns seen in monolithic designs and makes replacing services much easier.

A good example of where this ability has maximum effect is a scenario in which different services use different data stores in alignment with the business scenarios they address. A logging service can use a slower, cheaper data store, whereas a real-time service can use a faster, more performant one. As the consuming services are abstracted from the implementation of the service, they are not concerned about compatibility with the technology used to access the data.

Development agility

Microservices being handled by separate logical development streams makes it easier for a new developer to understand the functionality of a service and get up to speed. This is particularly useful in an agile environment where the team can change constantly and there is minimal dependency on an individual developer. It also makes code maintenance tasks simpler, as smaller services are much more readable and easily testable.

Often, large-scale systems have specific requirements that call for specialized services. An example of this is a service that processes graphical data, which requires specialized skills to build and test. If a development team does not have the domain knowledge to deliver this service, it can easily be outsourced or offloaded to a different team that has the required skill sets. This would be very hard in a monolithic system because of the interdependency of the services.

Heterogeneous deployment

The ability of Microservices to execute as isolated processes decouples them from the constraints of a specific hosting environment. For instance, services can be deployed across multiple cloud stacks, such as IaaS and PaaS, and across different operating systems, such as Windows and Linux, hosted in private data centers or on the cloud. This decouples the technology limitations from the business requirements.

Most mid-sized and large companies are now going through a cloud transformation. These companies have already invested significant resources in their on-premises data centers. This forces cloud vendors to support hybrid computing models in which the IT infrastructure can coexist across cloud and on-premises data centers. In this case, the infrastructure configuration available on-premises may not match the one provisioned on the cloud. The magnitude of the application tiers in a monolithic architecture may prevent it from being deployed on less capable server machines, making efficient resource utilization a challenge. Microservices, on the other hand, being smaller, decoupled deployment units, can easily be deployed in heterogeneous environments.

Manageability

Each Microservice can be separately versioned, upgraded, and scaled without impacting the rest of the system. This enables running multiple development streams in parallel, with independent delivery cycles aligned with the business demands. Taking a system that distributes news to the employees of a company as an example, if the notification service needs an upgrade to support push notifications to mobile phones, it can be upgraded without any downtime in the system and without impacting the rest of the application. The team delivering the notification service can function at its own pace, without depending on a big bang release or a product release cycle.

The ability to scale each service independently is also a key advantage in distributed systems. It lets the operations team increase or decrease the number of instances of a service dynamically to handle varying loads. A good example is a system that requires batch processing. Batch jobs that run periodically, say once a day, only require the batch-processing service to be running for a few hours. This service can be turned on and scaled up for the duration of the batch processing and then turned off, to better utilize the computing resources among the other services.
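The scale-in/scale-out decision for such a batch service can be sketched in a few lines. The sizing rule and all the numbers below are hypothetical tuning parameters, not values prescribed by any platform:

```python
import math

def required_instances(queue_length, jobs_per_instance,
                       min_instances=0, max_instances=10):
    """Decide how many instances of a batch-processing service to run.

    queue_length      -- jobs currently waiting
    jobs_per_instance -- throughput one instance can drain in the window
    The bounds let the service scale all the way down to zero between runs.
    """
    if queue_length <= 0:
        return min_instances  # nothing to do: release the compute
    needed = math.ceil(queue_length / jobs_per_instance)
    return max(min_instances, min(needed, max_instances))

print(required_instances(0, 50))      # idle between batch windows
print(required_instances(120, 50))    # scale out to drain the queue
```

Scaling to zero outside the batch window is exactly the resource-utilization win the paragraph above describes.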

Reusability

Granularity is the key to reuse. Microservices, being small and focused on a specific business scenario, improve the opportunity for them to be reused across multiple subsystems within an organization. This, in turn, translates into significant cost savings.

The factor of reuse is proportional to the size of the organization and its IT applications. Bigger companies have a larger number of applications, developed by multiple development teams, each of which may run its own delivery cycles. Often, the inability to share code across these teams forces software components to be duplicated, causing a considerable impact on development and maintenance costs. Although service duplication across applications may not always be bad, with proper service cataloging and communication, Microservices can easily solve this problem by enabling service reuse across business units.

 

The SOA principle


SOA has multiple definitions that vary with the vendors that provide platforms to host SOA services. One of the commonly accepted SOA definitions was coined by Don Box of Microsoft. His definition is essentially a set of design guidelines which a service-oriented system should adhere to.

"Boundaries are explicit. Services are autonomous. Services share schema and contract, not class. Compatibility is based upon policy."

                                                   – Don Box, Microsoft

Although this definition was originally explained in relation to Microsoft Indigo (now WCF), the tenets still hold true for other SOA platforms as well. One implication of these tenets is that all the services should be available on the network; no in-process modules, routines, or procedures can be considered participants in SOA. Let's take a look at the original tenets in a little more detail. The first tenet says that a service should implement a domain functionality and should be discoverable by the other services making up the system. The discovery of a service is generally done by registering each service in a directory; the clients of the services can then discover each service at runtime. The second tenet explains that the services should be independent of the other services that make up the system. Since the services are independent of each other, they may also enjoy independence of platform and programming language. The third tenet advises that each service should expose an interface through which the rest of the services can communicate with it; knowledge of this contract should be sufficient to operate with the service. The fourth tenet dictates that the services define the boundaries in which they will work. An example of such a boundary can be a range of integers within which a service that performs arithmetic operations will operate. Such boundaries should be stated in the form of policy expressions and should be machine readable. In WCF, policies are implemented by the Web Services Policy (WS-Policy) framework.
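The fourth tenet's integer-range example can be made concrete with a small sketch. The policy shape below is an invented illustration of a machine-readable policy, not the actual WS-Policy format (which is XML-based):

```python
# A machine-readable policy: the arithmetic service declares, up front,
# the boundary within which it agrees to operate. The dictionary shape
# is made up for illustration; WS-Policy expresses this in XML.
POLICY = {
    "operation": "add",
    "operand_range": {"min": -1_000_000, "max": 1_000_000},
}

def satisfies_policy(policy, *operands):
    """Check, before invoking the service, that every operand falls
    inside the range declared by the service's policy."""
    lo = policy["operand_range"]["min"]
    hi = policy["operand_range"]["max"]
    return all(lo <= x <= hi for x in operands)

print(satisfies_policy(POLICY, 3, 400))      # within the declared boundary
print(satisfies_policy(POLICY, 2_000_000))   # outside it: incompatible call
```

Because the policy is data rather than prose, a client (or an intermediary) can evaluate compatibility mechanically before ever calling the service, which is precisely what the tenet asks for.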

Although none of the original tenets dictate the size of individual services built using the SOA architecture, an individual service in SOA needs to be coarse-grained to obtain independence from the other services in the system. To minimize interaction between services, each service should group together functionalities that work together.

In essence, both the Microservices architecture and the SOA architecture try to solve the problems of monolithic design by modularizing the components. In fact, a system already designed using the SOA architecture is a step in the right direction to realize Microservices architecture.

Issues with SOA

An inherent problem in the SOA architecture is that it tries to mimic the communication levels in an enterprise. SOA principles take a holistic look at the various communication channels in an enterprise and try to normalize them. To understand this problem in a better manner, let us take a look at a real-world SOA implementation done for an organization.

The following is the architecture of a real-life SOA-based application of a car rental company. The architecture diagram presented below has intentionally been simplified to ease understanding:

SOA architecture

This model is a classic example of an SOA-based system. The various participants in this SOA landscape are as follows:

  • The corporate office services: These services provide data pertaining to fleet management, finances, data warehousing, and so on.
  • The reservation services: These services help manage bookings and cancellations.
  • Backend services: These services interface with the systems that supply business rules and reservation data to the application. There might be additional systems involved, but we will consider only these two for the moment.
  • Integration platform: The various services of the system need to interact with each other, and the integration platform is responsible for orchestrating this communication. It understands the data it receives from each system and responds to the commands it receives from the portal.
  • The Point of Sale portal: The portal is responsible for providing an interface for the users to interact with the services. The technology to realize the frontend of the application is not important. The frontend might be a web portal, a rich client, or a mobile application.

The various systems involved in the application may be developed by different teams. In the preceding example, there can be a team responsible for the backend systems, one for the reservation center, one for the corporate office, one for the portal, and one for the integration services. Any change in the hierarchy of communication may lead to a change in the architecture of the system and thus drive up the costs. For instance, if the organization decides to externalize its finance systems and offload some of the information to another system, then the existing orchestrations would need to be modified. This would lead to increased testing efforts and also redeployment of the entire application.

Another aspect worth noting here is that the integration system forms the backbone of SOA. This concept is often misinterpreted, and an Enterprise Service Bus (ESB) ends up being used to hook up multiple monoliths that communicate over complicated, inefficient, and inflexible protocols. This not only adds the overhead of complex transformations to the system but also makes the system resistant to change: any change in a contract requires new transformations to be composed.

Typical SOA implementations also impede agility. Implementing a change in the application is slow because multiple teams need to coordinate with each other. For example, in the preceding scenario, if the application needs to accept a new means of payment, then the portal team would need to make changes in the user interface, the payment team would need to make changes in their service, the backend team would need to add new fields in the database to capture the payment details, and the orchestration team would need to make changes to tie the communication together.

The participant services in SOA also face versioning issues. If any of the services modify their contract, then the orchestrator systems would need to undergo changes as well. If the changes are too expensive to make, the new version of the service would need to maintain backward compatibility with the old contract, which may not always be possible. The deployment of modified services also requires more coordination, as a modified service needs to be deployed before the services affected by it, leading to the formation of deployment monoliths.

The orchestration and integration system runs the risk of becoming a monolith itself. As most of the business logic is concentrated in the orchestration system, the services might end up merely administering data while the orchestration system contains all the business logic of the entire application. Even in a domain-driven design setting, any change in an entity that leads to a change in the user interface would require the redeployment of many services. This makes SOA lose its flexibility.

 

The Microservices solution


Unlike SOA, which promotes cohesion of services, Microservices promote the principle of isolation of services. Each Microservice should have minimal interaction with the other Microservices that are part of the system. This gives Microservices the advantage of independent scaling and deployment.

Let's redraw the architecture of the car rental company using the Microservices architecture principle:

Microservices architecture

In the revised architecture, we have created a Microservice corresponding to each domain of the original system. This architecture does away with the integration and orchestration component. Unlike SOA, which requires all services to be connected to an ESB, Microservices can communicate with each other through simple message passing. We will soon look at how Microservices can communicate.

Also, note that we have used the principles of Domain-Driven Design (DDD), which should guide the design of a Microservices-based system. A Microservice should never span domains; however, each domain can have multiple Microservices. Microservices avoid communicating with each other and, for the most part, use the user interface for communication.

In the revised setup, each team can develop and manage a Microservice. Rather than organizing teams around technologies, which creates multiple channels of communication, this distribution of teams around services can increase agility. For instance, adding a new form of payment requires making a change in the payment Microservice and therefore requires communication with only a single team.

Isolation between services makes the adoption of Continuous Delivery much simpler. It allows you to safely deploy applications, roll out changes, and revert deployments in case of failures.

Since services can be individually versioned and deployed, significant savings are attained in the deployment and testing of Microservices.

 

Inter-Microservice communication


Microservices can rarely be designed in a manner that they do not need to communicate with each other. However, if you base your Microservices system on the DDD principle, there should be minimal communication required between the participant Microservices.

Cross-domain interactions between Microservices help reduce the complexity of individual services and the duplication of code. We will take a look at some of the communication patterns in Chapter 8, Microservices Architectural Patterns. For now, let us look at the various types of communication.

Communication through user interface

In most cases, the usability of a system is determined through the frontend. A system designed using Microservices should avoid using a monolithic user interface. There are several proponents of the idea that each Microservice should contain its own user interface, and we agree with them.

Tying a service with a user interface gives high flexibility to the system to incorporate changes and add new features. This also ensures that distribution of teams is not by the communication hierarchy of the organization but by domains that Microservices are a part of. This practice also has the benefit of ensuring that the user interface will not become a deployment monolith at any point in time.

Although there are several challenges associated with integrating the user interface of Microservices, there are several ways to enable this integration. Let's take a look at a few.

Sharing common code

To ensure a consistent look and feel of the end user portal, code that ensures consistency can be shared with the other frontends. However, care should be taken to ensure that no business logic, binding logic, or any other logic creeps into the shared code.

Your shared code should always be kept in a state in which it could be released publicly. This helps ensure that no breaking changes or business logic get added to the shared library.

Composite user interface for the web

Several high-scale websites such as Facebook and MSN combine data from multiple services on their page. Such websites compose their frontend out of multiple components. Each of these components could be the user interface provided by individual Microservices. A great example of this approach is Facebook's BigPipe technology, which composes its web page from small reusable chunks called pagelets and pipes them through several executing stages inside web servers and browsers:

Facebook BigPipe (source: https://www.facebook.com/notes/facebook-engineering/bigpipe-pipelining-web-pages-for-high-performance/389414033919/)

The composition of a user interface can take place at multiple levels, ranging from development to execution. The flexibility of such integrations varies with the level they are carried out at.

The most primitive form of composition is the sharing of code, which can be done at the time of development. However, with this form of integration you end up with a deployment monolith, as the various versions of the user interface can't be deployed in parallel.

A much more flexible integration can also take place at runtime. For instance, Asynchronous JavaScript and XML (AJAX), HTML, and other dependencies can be loaded in the browser. Several JavaScript frameworks, such as Angular.js, Ember.js, and Ext.js, can help realize composition in single-page applications.

In cases where integration through JavaScript is not feasible, middleware may be used that fetches the HTML component of each Microservice and composes them into a single HTML document to return to the client. Typical examples of such composition are the Edge Side Includes (ESI) supported by caching proxies such as Varnish and Squid. Server Side Includes (SSI), such as those available in Apache and NGINX, can also be used to carry out the composition on servers rather than on caches.
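The middleware approach above can be sketched in a few lines: each Microservice serves an HTML fragment for its slot on the page, and a composition layer stitches the fragments into one document. The function and slot names below are invented for illustration; in-process lambdas stand in for HTTP calls to the fragment services.

```python
# Hypothetical fragment providers; in a real system each entry would be
# an HTTP call to the Microservice that owns that part of the page.
FRAGMENT_SOURCES = {
    "reservations": lambda: "<div id='reservations'>3 active bookings</div>",
    "payments": lambda: "<div id='payments'>Visa ending 4242</div>",
}

def fetch_fragment(name: str) -> str:
    """Fetch the HTML fragment a Microservice serves for this slot."""
    return FRAGMENT_SOURCES[name]()

def compose_page(slots: list[str]) -> str:
    """Compose one HTML document from per-service fragments, the way an
    ESI-capable proxy or a server-side include would."""
    body = "\n".join(fetch_fragment(slot) for slot in slots)
    return f"<html><body>\n{body}\n</body></html>"

page = compose_page(["reservations", "payments"])
```

Because composition happens at request time, each fragment's owning team can deploy independently, and the page reflects the latest version of every service.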

Thin backend for rich clients

Unlike web applications, rich clients need to be deployed as monoliths, so any change in the Microservices would require a fresh deployment of the client application. In a web application, each Microservice can supply its own user interface, but this is not the case for mobile or desktop applications. Moreover, structuring the teams so that each team has a frontend developer for every rich client that the application can be deployed to is not feasible.

A way in which this dependency can be minimized is by having a backend for the rich client applications which is deployed with the application:

Microservices for rich clients

Although this approach is not perfect, it does ensure that part of the system conforms to Microservices architecture. Care should be taken to not alter any Microservice to encapsulate the business logic of the rich client. The mobile and desktop clients should optimize content delivery as per their needs.
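A minimal sketch of such a backend for a rich client follows: it calls the domain Microservices and reshapes their responses into a single device-friendly payload, so the client makes one round trip. The service functions and field names are assumptions made for this example; here they are stubbed in-process rather than called over the network.

```python
# Stubbed domain Microservices (hypothetical payload shapes).
def reservation_service(user_id: str) -> dict:
    return {"user": user_id, "bookings": [{"car": "Compact", "days": 3}]}

def payment_service(user_id: str) -> dict:
    return {"user": user_id, "card": "****4242", "currency": "AUD"}

def mobile_dashboard(user_id: str) -> dict:
    """Backend for the rich client: aggregate and trim the service
    payloads into one view optimized for the device, keeping the
    client-specific shaping out of the domain Microservices."""
    bookings = reservation_service(user_id)["bookings"]
    card = payment_service(user_id)["card"]
    return {"bookings": len(bookings), "payment": card}

view = mobile_dashboard("u-42")
```

Note that the content shaping lives in this thin backend, not in the domain services, which matches the guidance above about not pushing the rich client's logic into the Microservices.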

Synchronous communication

A simple solution for synchronous communication between services is to use REST and transfer JSON data over HTTP. REST can also help in service discovery by using Hypermedia as the Engine of Application State (HATEOAS). HATEOAS is a component of REST which models relationships between resources by using links. Once the client queries the entry point of the service, it can use the links it receives to navigate to other Microservices.
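The HATEOAS idea can be shown with canned JSON responses standing in for HTTP calls: the client queries only the entry point and then follows the link relations it receives, discovering the other resources at runtime. The URLs, resource names, and link relations below are invented for illustration.

```python
# Canned "HTTP responses" keyed by URL (hypothetical resources).
RESPONSES = {
    "/": {"links": {"reservations": "/reservations"}},
    "/reservations": {"items": ["r-1", "r-2"],
                      "links": {"self": "/reservations"}},
}

def get(url: str) -> dict:
    """Stand-in for an HTTP GET that returns parsed JSON."""
    return RESPONSES[url]

def follow(entry_point: str, relation: str) -> dict:
    """Query the entry point, then follow the named link relation.
    The client never hard-codes the target URL, so services can move
    without breaking it."""
    links = get(entry_point)["links"]
    return get(links[relation])

reservations = follow("/", "reservations")
```

The key property is that only the entry-point URL is known in advance; everything else is navigated through links, which is what makes HATEOAS useful for service discovery.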

If text-based transfers are not desired, protocol buffers (Google's data interchange format) may be used to transmit data. The protocol has implementations in several languages, which has increased its adoption, for example, Ruby protobuf.

Another protocol for transmitting structured data across a network is the Simple Object Access Protocol (SOAP), which can make calls to different Microservices over various transport mechanisms such as JMS, TCP, or UDP. SOAP is language-neutral and highly extensible.

Asynchronous communication

Asynchronous message passing has the benefit of truly decoupling Microservices from each other. Since the communication is carried out by a broker, individual services need not be aware of the location of the receiver of the request. This also gives individual services the ability to scale independently and to recover and respond to messages in case of failure. However, this communication pattern lacks immediate feedback and is slower than synchronous communication.

There are several tools available for such communication, such as MSMQ and RabbitMQ. Microsoft Azure offers Service Bus Queues and Azure Storage Queues for asynchronous messaging in the cloud. Amazon SQS provides similar functionality in Amazon Web Services.

Orchestrated communication

This process is similar to the asynchronous communication process that we discussed earlier. Orchestrated communication still uses message stores to transmit data; however, the Microservice sending the data would insert different messages in different queues in order to complete the action. For example, an Order Microservice would insert the message in the queue consumed by the Inventory Microservice and another message in the queue consumed by the Shipment Microservice:

Orchestrated communications using queues
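The Order example above can be sketched as follows, with plain lists standing in for the two queues. The queue and field names are invented for illustration.

```python
# Hypothetical queues consumed by the downstream Microservices.
inventory_queue: list[dict] = []
shipment_queue: list[dict] = []

def place_order(order_id: str, sku: str) -> None:
    """The Order Microservice completes the action by fanning out one
    message per downstream consumer: one for Inventory, one for
    Shipment."""
    inventory_queue.append({"order": order_id, "reserve": sku})
    shipment_queue.append({"order": order_id, "ship": sku})

place_order("o-7", "compact-car")
```

Each consumer reads only its own queue, so Inventory and Shipment remain unaware of each other even though a single order touches both.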

The orchestration may be carried out by a separate component known as a Saga, which we will read more about in Chapter 8, Microservices Architectural Patterns.

Shared data

Microservices should not share the same data store. Sharing a data representation can make altering the database very difficult, and even when such a change is made, it always runs the risk of breaking services that are still using the old data representation. Such challenges ultimately lead to a bloated, complex database and the accumulation of dead data over time.

Data replication is a possible solution to sharing data across Microservices. However, data should not be blindly replicated across Microservices as the same problems that are present with shared databases would still remain. A custom transformation process should convert data available from the database to the schema used by the data store of the Microservice. The replication process can be triggered in batches or on certain events in the system.
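The replication-with-transformation step above can be sketched as a small function that converts rows from the source store into the consuming Microservice's own schema, dropping fields the service does not own. All field names and the transform are illustrative assumptions.

```python
# Hypothetical rows from the source service's database.
source_rows = [
    {"cust_id": 1, "fname": "Ada", "lname": "Lovelace", "tier": "G"},
]

def transform(row: dict) -> dict:
    """Custom transformation: map the source schema into the shape this
    Microservice needs, discarding fields it does not own ('tier')."""
    return {"id": row["cust_id"],
            "display_name": f"{row['fname']} {row['lname']}"}

def replicate(rows: list[dict]) -> list[dict]:
    """Run on a batch schedule or in response to a system event."""
    return [transform(r) for r in rows]

local_store = replicate(source_rows)
```

Because the consumer stores only its own projection, the source team can evolve its schema as long as the transformation is updated, avoiding the shared-database coupling described above.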

 

Architecture of Microservices-based systems


Many of us have been curious about the representation of a Microservice by a hexagon. The reason for this is the inspiration behind the architectural pattern that drives Microservices: the hexagonal architecture. This pattern is also popularly known as ports and adapters in some parts of the globe. In the hexagonal architecture pattern, the application logic is insulated by an isolation perimeter. This insulation helps a Microservice remain unaware of the outside world. The insulation opens specific ports for establishing communication channels to and from the application code, and consuming applications can write adapters against these ports to communicate with the Microservice. The following diagram illustrates the hexagonal pattern for a Microservice:

Hexagonal architecture

The ports in the case of a Microservice architecture are usually APIs, which are exposed using popular protocols for ease of consumption. Hexagonal architecture lets the Microservice treat all of its consumers alike, whether it is a user interface, a test suite, a monitoring service, or an automation script.
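A minimal ports-and-adapters sketch makes the idea concrete: the application core defines a port (an abstract interface), and each consumer writes an adapter against it, so the core never learns who is calling. All class names and the pricing logic below are invented for illustration.

```python
from abc import ABC, abstractmethod

class QuotePort(ABC):
    """Port: the only way in or out of the application core."""
    @abstractmethod
    def quote(self, days: int) -> int: ...

class RentalCore(QuotePort):
    """Application logic, insulated behind the port; it knows nothing
    about HTTP, test suites, or monitoring scripts."""
    def quote(self, days: int) -> int:
        return days * 50  # flat daily rate, assumed for the sketch

class HttpAdapter:
    """One possible adapter; a test suite or automation script would
    simply be another adapter against the same port."""
    def __init__(self, port: QuotePort) -> None:
        self.port = port
    def handle(self, days: int) -> dict:
        return {"total": self.port.quote(days)}

response = HttpAdapter(RentalCore()).handle(3)
```

Because every consumer goes through `QuotePort`, the core treats a UI, a test, and a monitor identically, which is the property the hexagon is meant to convey.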

 

Conway's law


Melvin Edward Conway, an American computer scientist, coined a law that generally guides the design of the applications built by an organization.

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.                                                 – Melvin Conway, 1967

An important aspect of the law that should be noted is that the communication structure mentioned in the law is not the same as organizational hierarchy but rather how the various teams in the organization communicate with each other. For instance, an e-commerce company might have a product team and an invoicing team. Any application designed by this organization will have a product module and an invoicing module that will communicate with each other through a common interface.

For a large enterprise with many communication channels, the application architecture will be very complex and nearly impossible to maintain.

Using the law in conjunction with the principles of domain-driven design can actually help an organization enhance agility and design scalable, maintainable solutions. In an e-commerce company, for instance, teams may be structured around the domain components rather than the application layers that they specialize in, such as the user interface, business logic, and database:

Team structure for Microservices development

Since the domains are clearly defined, the teams across domains will not need to interact too frequently. Also, the interfaces between teams would not be too complex and rigid. Such team layouts are commonly employed by large organizations such as Amazon, where each team is responsible for creating and maintaining a part of a domain.

Note

Amazon practices the two-pizza rule to limit the size of teams: no team can be larger than what two pizzas can feed. Amazon also discourages heavy communication between teams; all teams are required to communicate with each other through APIs. For instance, if the marketing team needs statistical data from a product team, they can't simply ask them for it. They need to hit the product team's API to get the data.

Microservices work better when coupled with the principles of domain-driven design rather than communication channels. In the application architecture that we designed earlier, we could have ignored the domains of the application and classified teams by communication structure; for instance, two Microservices may be created, one handling product listing and the other product inventory. Such a distribution might lead each of the teams to develop components independently of the other and would make moving functionality between them very difficult if the communication hierarchy changes, such as when the two services need to be merged.

 

Summary


In this chapter, we learned about the concept of Microservices and its evolution. We also compared it with its predecessors, SOA and monolithic architecture. We then explored the requirement for a Microservices hosting platform and its properties.

We then discussed the various means of communications between Microservices and their advantages and disadvantages, after which we explored the architecture of a Microservices-based system.

To conclude, we also explored the philosophy behind hexagonal architecture and Conway's law.

In the next chapter, we will learn about Microsoft Azure, a cloud platform for hosting Microservices.

About the Authors
  • Rahul Rai

    Rahul Rai is a technology consultant based in Sydney, Australia with over nine years of professional experience. He has been at the forefront of cloud consulting for government organizations and businesses around the world. Rahul has been working on Microsoft Azure since the service was in its infancy, delivering an ITSM tool built for and on Azure in 2008. Since then, Rahul has played the roles of a developer, a consultant, and an architect for enterprises ranging from small start-ups to multinational corporations. He worked for over five years with Microsoft Services with diverse teams to deliver innovative solutions on Microsoft Azure. At Microsoft, Rahul was a subject matter expert in Microsoft cloud technologies. Rahul has also worked as a cloud solution architect for Microsoft, in which role he worked closely with some established Microsoft partners to drive joint customer transformations to cloud-based architectures.

  • Namit Tanasseri

    Namit Tanasseri is a certified Microsoft cloud solutions architect with an experience of more than 11 years. He started his career as a software development engineer with Microsoft Research and Development Center in 2005. During the first five years of his career, he had opportunities to work with major Microsoft product groups, such as Microsoft Office and Windows. During this time, he strengthened his knowledge of agile software development methodologies and processes. He also earned a patent during this tenure. As a technology consultant with Microsoft, Namit worked with Microsoft Azure Services for four years. Namit is a subject matter expert in Microsoft Azure and actively contributes to the Microsoft cloud community, while delivering top quality solutions for Microsoft customers. Namit also led the Windows Azure community in Microsoft Services India. Namit currently serves as a Microsoft cloud solutions architect from Sydney, Australia, and works on large and medium-sized enterprise engagements.
