
Designing Microservices Platforms with NATS

By Chanaka Fernando
About this book
Building a scalable microservices platform that caters to business demands is critical to the success of that platform. In a microservices architecture, inter-service communication becomes a bottleneck when the platform scales. This book provides a reference architecture, along with a practical example of how to implement it, for building microservices-based platforms with NATS as the messaging backbone for inter-service communication. In Designing Microservices Platforms with NATS, you'll learn how to build a scalable and manageable microservices platform with NATS. The book starts by introducing concepts relating to microservices architecture, inter-service communication, messaging backbones, and the basics of NATS messaging. You'll be introduced to a reference architecture that uses these concepts to build a scalable microservices platform and guided through its implementation. Later, the book touches on important aspects of securing and monitoring the platform with the help of the reference implementation. Finally, the book concludes with a chapter on best practices to follow when integrating with existing platforms and the future direction of microservices architecture and NATS messaging as a whole. By the end of this microservices book, you'll have developed the skills to design and implement microservices platforms with NATS.
Publication date:
November 2021
Publisher
Packt
Pages
356
ISBN
9781801072212

 

Chapter 1: Introduction to the Microservice Architecture

The microservice architecture is an evolutionary approach to building effective, manageable, and scalable distributed systems. The overwhelming popularity of the internet and of smart digital devices, which now outnumber the world's population, has made every human being a consumer of digital products and services. Business leaders had to re-evaluate their enterprise IT platforms to make sure these platforms were ready for this consumer revolution and the business growth that came with it. So-called digital-native companies such as Google, Amazon, Netflix, and Uber (to name a few) started building their enterprise platforms to support this revolution. The microservice architecture evolved as a result of the work that was done at these organizations to build scalable, manageable, and available enterprise platforms.

When microservice-based platforms grow to hundreds or thousands of microservices, having these services communicate with each other using the point-to-point model quickly becomes too complicated to manage. As a solution to this problem, centralized message broker-based architectures offered a less complex, more manageable alternative. Organizations that adopted the microservice architecture are still evaluating the best possible approach to inter-service communication. The so-called model of smart endpoints and dumb pipes also suggests using a message broker-based approach for this.

NATS is a messaging framework that acts as an always-on dial tone for distributed systems communication. It supports the traditional publish-subscribe (pub-sub) messaging model offered by most message brokers, as well as the request-reply communication model, while sustaining high message rates. This makes it a good fit as the messaging framework for the microservice architecture.
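To make the two messaging models concrete, here is a minimal sketch of pub-sub and request-reply using a tiny in-memory stand-in for a messaging backbone. This is a teaching sketch only; the `MiniBus` class and all subject names are invented, and the real NATS client APIs differ.

```python
import queue
import threading
from collections import defaultdict

# A tiny in-memory stand-in for a messaging backbone, illustrating the two
# models NATS supports: publish-subscribe and request-reply.
class MiniBus:
    def __init__(self):
        self._subs = defaultdict(list)  # subject -> list of callbacks
        self._lock = threading.Lock()

    def subscribe(self, subject, callback):
        with self._lock:
            self._subs[subject].append(callback)

    def publish(self, subject, msg, reply=None):
        with self._lock:
            subs = list(self._subs.get(subject, []))
        for cb in subs:
            cb(msg, reply)

    def request(self, subject, msg, timeout=1.0):
        # Request-reply: publish with a private reply inbox and wait.
        inbox = queue.Queue()
        self.publish(subject, msg, reply=inbox)
        return inbox.get(timeout=timeout)

bus = MiniBus()

# Pub-sub: two subscribers each receive the same event.
received = []
bus.subscribe("orders.created", lambda msg, _: received.append(("billing", msg)))
bus.subscribe("orders.created", lambda msg, _: received.append(("shipping", msg)))
bus.publish("orders.created", "order-42")

# Request-reply: a responder answers on the caller's reply inbox.
bus.subscribe("time.now", lambda msg, reply: reply.put("12:00"))
answer = bus.request("time.now", "what time is it?")
```

Note how the requester never learns which service answered; the subject is the only coupling between the two sides, which is the property that makes this style attractive for inter-service communication.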

In this book, we will discuss the concepts surrounding the microservice architecture and how we can use the NATS messaging framework to build effective, manageable, and scalable distributed systems.

Distributed computing systems have evolved from the early days of mainframes and large servers, sitting in separate buildings, to serverless computing, where users do not even need to consider the fact that there is a server running their software components. It is a journey that continues even today and into the future. From early scheduled jobs and simple programs written in assembly, to monolithic applications written in C or Java, to ESB/SOA-based systems, to microservices and serverless programs, the evolution continues.

IT professionals have been experimenting with different approaches to solve the complex problem of distributed computing so that it eventually produces the best experience for consumers. The microservice architecture brings several benefits to the design and implementation of distributed computing systems that were not feasible before. It became mainstream at a time when most of the surrounding technological advancements, such as containers, cloud computing, and messaging technologies, were also becoming popular. This cohesion of technologies made the microservice architecture even more appealing for solving complex distributed systems-related challenges.

In this chapter, we're going to cover the following main topics:

  • The evolution of distributed systems
  • What is a microservice architecture?
  • Characteristics of the microservice architecture
  • Breaking down a monolith into microservices
  • Advantages of the microservice architecture
 

The evolution of distributed systems

The human mind's tendency to ask for more has been the driving force behind many innovations. In the early days of computing, a single mainframe computer executed a set of batch jobs to solve a certain mathematical problem at an academic institution. Then, large business corporations wanted to own these mainframe computers to execute certain tasks that would take a long time to complete if done by humans. With the advancements in electrical and electronics engineering, computers became smaller, and instead of having one computer sequentially doing all the tasks, business owners wanted to execute multiple tasks in parallel by using multiple computers. The effects of improved technology on electronic circuits and their reduced size resulted in a reduction in costs, and more and more organizations started using computers.

Instead of getting things done through a single computer, people started using multiple computers to execute certain tasks, and these computers needed to connect to communicate and share the results of their executions to complete the overall task. This is where the term distributed systems came into use.

A distributed system is a collection of components (applications) located on different networked computers that communicate and coordinate their tasks by passing messages to one another via a network to achieve a common goal.

Distributing a workload (a task at hand) among several computers poses challenges that were not present before. Some of those challenges are as follows:

  • Failure handling
  • Concurrency
  • Security of data
  • Standardizing data
  • Scalability

Let's discuss these challenges in detail so that the distributed systems that we will be designing in this book can overcome these challenges well.

Failure handling

Communication between two computers flows through a network. This can be a wired or a wireless network. In either case, the possibility of a failure at any given time is inevitable, regardless of the advancements in the telecommunications industry. As designers of distributed systems, we should be wary of these failures and take the necessary measures to handle them. A properly designed distributed system must be capable of the following:

  • Detecting failures
  • Masking failures
  • Tolerating failures
  • Recovery from failures
  • Redundancy

We will discuss handling network and system failures using the preceding techniques in detail in the upcoming chapters.
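As a small taste of one of these techniques, here is a sketch of tolerating transient failures by retrying a remote call with exponential backoff and jitter. The `retry_with_backoff` helper and the flaky service are invented for illustration; the `call` argument stands in for any network operation (an RPC, a message publish, an HTTP request).

```python
import random
import time

# Retry a remote call with exponential backoff and jitter. Backoff avoids
# hammering a struggling service; jitter avoids synchronized retry storms.
def retry_with_backoff(call, max_attempts=4, base_delay=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up: let a higher layer mask or tolerate it
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# Simulate a flaky service that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("network glitch")
    return "ok"

result = retry_with_backoff(flaky)  # succeeds on the third attempt
```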

Concurrency

When multiple computers are operating to complete a task, there can be situations where multiple computers are trying to access certain resources such as databases, file servers, and printers. But these resources may be limited in that they can only be accessed by one consumer (computer) at a given time. In such situations, distributed computer systems can fail and produce unexpected results. Hence, managing the concurrency in a distributed system is a key aspect of designing robust systems. We will be discussing techniques such as messaging (with NATS) that can be used to address this concurrency challenge in upcoming chapters.
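The following sketch shows the core of the problem in miniature: several workers updating a shared counter. Without mutual exclusion, interleaved updates can lose increments; serializing access (here with a lock, and in a distributed system with a queue or a single consumer) keeps the result correct. The example is illustrative, not a distributed-systems recipe.

```python
import threading

# Many workers incrementing a shared counter. The lock serializes access,
# much like a message queue serializes work for a single consumer.
counter = {"value": 0}
lock = threading.Lock()

def worker(n):
    for _ in range(n):
        with lock:  # only one thread touches the shared resource at a time
            counter["value"] += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter["value"] is exactly 40_000 because every update was serialized.
```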

Security of data

Distributed systems move data from one computer to another via a communication channel. These communication channels are sometimes vulnerable to various types of attacks by internal and external hackers. Hence, securing data transfers across the network is a key challenge in a distributed system. Technologies such as Secure Socket Layer (SSL) and its successor, Transport Layer Security (TLS), help improve the security of wire-level communication. That alone is not sufficient when systems expose business data to external parties (for example, customers or partners). In such scenarios, applications should have security mechanisms that prevent malicious users and systems from accessing valuable business data. Several techniques have evolved in the industry to protect application data.

Some of them are as follows:

  • Firewalls and proxies to filter traffic: Security through network policies and traffic rules.
  • Basic authentication with a username and password: Protect applications with credentials provided to users in the form of a username and password.
  • Delegated authentication with 2-legged and 3-legged OAuth flow (OAuth2, OIDC): Allow applications to access services on behalf of the users using delegated authentication.
  • Two-Factor Authentication (2FA): Additional security with two security factors such as username/password and a one-time password (OTP).
  • Certificate-based authentication (system-to-system): Securing application-to-application communication without user interaction using certificates.

We will be exploring these topics in detail in the upcoming chapters.
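As one concrete fragment of the basic-authentication technique above, here is a sketch of how a service can store and verify credentials without ever storing the password itself: derive a salted hash on registration and compare in constant time on login. The helper names are invented for illustration.

```python
import hashlib
import hmac
import os

# Store only a salted hash of the password, never the password itself.
def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected_digest):
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, expected_digest)

# "Register" a user, then check login attempts against the stored hash.
salt, stored = hash_password("s3cret")
good = verify_password("s3cret", salt, stored)
bad = verify_password("wrong", salt, stored)
```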

Standardizing data

The software components that are running on different computers may use different data formats and wire-level transport mechanisms to send/receive data to/from other systems. This becomes a major challenge as more and more systems with different data formats and transport mechanisms are introduced to the platform. Hence, adhering to a common standard makes it easier to network different systems without much work. Distributed systems designers and engineers have come up with various standards in the past, such as XML, SOAP, and REST, and those standards have helped a lot in standardizing the interactions among systems. Yet there is a considerable number of essential software systems (such as ERP and CRM) that exchange data with proprietary standards and formats. On such occasions, the distributed system needs to integrate those systems by using an adapter or an enterprise service bus that can translate the communication on behalf of such systems.
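The adapter idea can be sketched in a few lines: a legacy system emits records in a proprietary delimited format, and an adapter translates them into a standard JSON document the rest of the platform understands. The pipe-delimited field layout here is invented purely for illustration.

```python
import json

# Hypothetical legacy record layout: id|name|amount
def legacy_to_json(record):
    parts = record.split("|")
    standard = {
        "customerId": parts[0],
        "customerName": parts[1],
        "orderTotal": float(parts[2]),
    }
    return json.dumps(standard)

# The rest of the platform only ever sees the standardized JSON document.
doc = legacy_to_json("C1001|Alice|249.50")
```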

Scalability

Most systems start with one or two computers running a similar number of applications, and networking them is not a difficult task. But eventually, these systems become larger and larger, sometimes growing to hundreds or thousands of computers running a similar or an even greater number of different applications.

Hence, it is essential to take the necessary action at the very early stages to address the challenge of scalability. There are various networking topologies available to design the overall communication architecture, as depicted in Figure 1.1 – Networking topologies. In most cases, architects and developers start with the simplest model of point-to-point and move into a mesh architecture or star (hub) architecture eventually.

The bus topology is another common pattern most of the distributed systems adhered to in the past, and even today, there are a significant number of systems using this architecture.

Distributed systems networking architecture

The software engineers and architects who worked on these initial distributed computing system's designs and implementations have realized that different use cases require different patterns of networking. Therefore, they came up with a set of topologies based on their experiences. These topologies helped the systems engineers to configure the networks efficiently based on the problem at hand. The following diagram depicts some of the most common topologies used in distributed systems:

Figure 1.1 – Networking topologies

These topologies helped engineers solve different types of real-world problems with distributed computing. In most cases, engineers and architects started with a couple of applications connected in a point-to-point manner. When the number of applications grows, this becomes a complicated network of point-to-point connections. These models were easy to begin with, yet they were harder to manage when the number of nodes grew beyond a certain limit. In traditional IT organizations, change is something people avoid unless it is critical or near a break-even point. This reserved mindset has made many enterprise IT systems fall into the category of either a mesh or a fully connected topology, both of which are hard to scale and manage. The following diagram shows a real-world example of how complicated an IT system can look with this sort of topology:

Figure 1.2 – Distributed system with a mesh topology

The preceding diagram depicts an architecture where multiple applications are connected in a mesh topology that eventually became an unmanageable system. There are many such examples in real IT systems where deployments become heavily complicated, with more and more applications being introduced as a part of the business's evolution.
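The scaling pressure behind this is easy to quantify: a fully connected mesh of n applications needs n(n-1)/2 point-to-point links, while a hub-and-spoke (star or bus) topology needs only n. The short computation below illustrates this; the numbers are purely arithmetic, not drawn from any particular deployment.

```python
# Number of links needed to connect n applications in a full mesh versus
# a hub-and-spoke (star) topology.
def mesh_links(n):
    return n * (n - 1) // 2  # every pair gets its own connection

def star_links(n):
    return n  # each application connects once, to the hub

# With 30 applications, the mesh already needs 435 integrations to maintain,
# while the hub needs only 30.
comparison = {n: (mesh_links(n), star_links(n)) for n in (5, 10, 30)}
```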

The era of the service-oriented architecture (SOA) and the enterprise service bus (ESB)

The IT professionals who were designing and implementing these systems realized the challenge and tried to find alternative approaches to building complex distributed systems. By doing so, they identified that a bus topology with a clear separation of responsibilities and services can solve this problem. That is where the service-oriented architecture (SOA) became popular, along with the centralized enterprise service bus (ESB).

The SOA-based approach helped IT professionals build applications (services) with well-defined interfaces that abstract the internal implementation details so that the consumers of these applications would only need to integrate through the interface. This approach reduced the tight coupling of applications, which had previously ended up in a complex mesh topology with a lot of friction for change.

The SOA-based approach allowed application developers to change their internal implementations more freely, so long as they adhered to the interface definitions. The centralized service bus (ESB) was introduced to network various applications that were present in the enterprise due to various business requirements. The following diagram depicts the enterprise architecture with the bus topology, along with an ESB in the middle acting as the bus layer:

Figure 1.3 – Distributed system with the bus topology using ESB

As depicted in the preceding diagram, this architecture worked well in most use cases, and it allowed the engineers and architects to reduce the complexity of the overall system while onboarding more and more systems that were required for business growth. One challenge with this approach was that more and more complex logic and load were handled by the centralized ESB component, which became a single point of failure unless it was deployed in a highly available manner. This was inevitable with this architecture, and IT professionals were aware of the challenge.

Scaling for demand

With the introduction of agile development methodologies, container-based deployments, and the popularity of cloud platforms, this ESB-based architecture looked obsolete, and people were looking for better approaches to reap the benefits of these new developments. This is the time where IT professionals identified major challenges with this approach. Some of them are as follows:

  • Scaling the ESB requires scaling all the services implemented in the ESB at once.
  • Managing the deployment was difficult since changing one service could impact many other services.
  • The ESB approach could not work with the agile development models and container-based platforms.

Most people realized that going forward, the ESB style of networking topology for distributed systems could not reap the benefits offered by technological advancements in the computing world. This challenge was not only related to the ESB, but also to many applications that were developed in a manner where more and more functionality was built into the same application. The term monolithic application was used to describe such applications.

Microservices and containers

This was the time when a set of companies called Digital Native companies came from nowhere to rule the world of business and IT. Some popular examples are Google, Facebook, Amazon, Netflix, Twitter, and Uber. These companies became so large that they couldn't support their scale of IT demand with any of the existing models. They started innovating around their infrastructure demands as well as their application delivery demands. As a result, two technologies evolved:

  • Container-based deployments
  • The microservice architecture

These two innovations go hand-in-hand to solve the problems of increased demand for the aforementioned companies. Those innovations later helped organizations of all sizes due to the many advantages they brought to the table. We will explore these topics in more detail in the upcoming chapters.

Container-based deployments

Any application that runs on a distributed system requires computing power to execute its assigned tasks. Initially, all the applications ran on a physical computer (or server) that had an operating system with the relevant runtime components (for example, a JDK) included. This approach worked well until people wanted to run different operating systems on the same computer (or server). That is when virtualization platforms came into the picture, and users were able to run several different operating systems on the same computer without mixing up the programs running on each operating system. These isolated environments were called virtual machines, or VMs.

VMs allowed users to run different types of programs independently of each other on the same computer, similar to programs running on separate computers. Even though this approach provided a clear separation of programs and runtimes, it also consumed additional resources for running each guest operating system.

As a solution to this overuse of resources by the guest operating system and other complexities with VMs, container technology was introduced. A container is a standard unit of a software package that bundles all the required code and dependencies to run a particular application. Instead of running on top of a guest operating system, similar to VMs, containers run on the same host operating system of the computer (or server). This concept was popularized with the introduction of Docker Engine as an open source project in 2013. It leveraged the existing concepts in the Linux operating system, such as cgroups and namespaces. The major difference between container platforms such as Docker and VMs is the usage of the host operating system instead of the guest operating system. This concept is depicted in the following diagram:

Figure 1.4 – Containers versus virtual machines

The following table provides the key points of distinction between containers and VMs:

Table 1.1 – Containers versus virtual machines

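To make the container concept concrete, here is a hypothetical minimal Dockerfile for a small Python microservice. The application file name, dependency file, and port are invented for illustration; the structure (base image, dependencies, code, entry point) is the standard pattern.

```dockerfile
# Hypothetical minimal image for a small Python microservice.
FROM python:3.12-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY service.py .

# The container bundles the code and its dependencies; the host kernel is
# shared, unlike a VM, which boots its own guest operating system.
EXPOSE 8080
CMD ["python", "service.py"]
```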
So far, we've gone through the evolution of the design of distributed systems and their implementation and how that evolution paved the way to the main topic of this chapter, which is the microservice architecture. We'll try to define and understand the microservice architecture in detail in the next section.

 

What is a microservice architecture?

When engineers decided to move away from large monolithic applications to SOA, they had several goals in mind for the new model. Some of them are as follows:

  • Loose coupling
  • Independence (deployment, scaling, updating)
  • Standard interfaces
  • Discovery and reusability

Even though most of these goals were achieved with the technology that was available at the time, most SOA-based systems ended up as collections of large monolithic applications running on heavy servers or virtual machines. When modern technological advancements such as containers, domain-driven design, automation, and virtualized cloud infrastructure became popular, these SOA-based systems could not reap the benefits they offered.

For this reason and a few others, such as scalability, manageability, and robustness, engineers explored an improved architecture that could fulfill these modern enterprise requirements. Instead of going for a brand-new solution with a lot of breaking changes, enterprise architects identified the microservice architecture as an evolution of the distributed system design. Even though there is no one particular definition that is universally accepted, the core concept of the microservice architecture can be characterized like so:

"The term microservice architecture refers to a distributed computing architecture that is built using a set of small, autonomous services (microservices) that act as a cohesive unit to solve a business problem or problems."

The preceding definition explores a software architecture that is used to build applications. Let's expand this definition into two main sections.

Microservices are small and do one thing well

Instead of doing many things, microservices focus on doing one thing and doing it well. That does not necessarily mean that a microservice should be written in fewer than 100 lines of code or anything of that sort. The number of lines of code depends on many factors, such as the programming language of choice, the usage of libraries, and the complexity of the task at hand. But one thing is clear in this definition: the scope of a microservice is limited to one particular task, such as patient registration in a healthcare system or account creation in a banking system. Instead of designing the entire system as a large monolith, such as a healthcare application or banking application, we could design these applications in a microservice architecture by dividing these separate functional tasks into independent microservices. We will explore how to break a monolithic application down into a microservice architecture later in this chapter.

Microservices are autonomous and act as cohesive units

This is the feature of the microservice architecture that addresses most of the challenges faced by the service-oriented architecture. Instead of tightly coupled services, a microservice architecture requires fully autonomous services that can each be independently:

  • Developed
  • Deployed
  • Scaled
  • Managed
  • Monitored

This independence allows microservices to adapt to modern technological advancements such as agile development, container-based deployments, and automation, and to fulfill business requirements more frequently than ever before.

The second part of this feature is the cohesiveness of the overall platform, where each microservice interacts with other microservices and with external clients with a well-defined standardized interface, such as an application programming interface (API), that hides the internal implementation detail.
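The two properties above can be sketched together in a few lines: a service with a narrow scope, a private data store, and a small public interface that hides its internals. All names here are invented for illustration; a real service would expose this operation over an API such as HTTP or a messaging subject.

```python
import re
import uuid

class PatientRegistrationService:
    """Does one thing: registers patients and returns their IDs.

    Callers see only register_patient(); the storage format and validation
    rules are internal details that can change without breaking consumers.
    """

    def __init__(self):
        self._patients = {}  # private, decentralized data store

    def register_patient(self, name: str, email: str) -> str:
        # Internal validation rule, invisible to callers.
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
            raise ValueError("invalid email")
        patient_id = str(uuid.uuid4())
        self._patients[patient_id] = {"name": name, "email": email}
        return patient_id

svc = PatientRegistrationService()
pid = svc.register_patient("Alice", "alice@example.com")
```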

 

Characteristics of the microservice architecture

In this section, we will discuss the different characteristics of a typical microservice architecture. Given that the microservice architecture is still evolving, don't be surprised if the characteristics you see here are slightly different from what you have seen elsewhere. That is how evolving architectures work. However, the underlying concepts and reasons are the same in most cases:

  • Componentization via services
  • Each service has a scope identified based on business functions
  • Decentralized governance
  • Decentralized data management
  • Smart endpoints and dumb pipes
  • Infrastructure automation
  • Container-based deployments
  • Designing for failure
  • Agile development approach
  • Evolving architecture

Let's discuss these characteristics in detail.

Componentization via services

Breaking down large monolithic applications into separate services was one of the successful features of SOA, and it allowed engineers to build modular software systems with flexibility. The same concept is carried forward by the microservice architecture with much more focus. Instead of stopping at the modularity of the application, it urges autonomy for these services by introducing concepts such as domain-driven design, decentralized governance, and decentralized data management, all of which we will discuss in the next section.

This allows the application to be more robust. Here, the failure of one component (service) won't necessarily shut down the entire application since these components are deployed and managed independently. At the same time, adding new features to one particular component is much easier since it does not require deploying the entire application and testing every bit of its functionality.

Business domain-driven scope for each service

Modular architecture is not something that was introduced with microservices; it has long been the way engineers build complex, distributed systems. The challenge lies in scoping or sizing these components. The architectures that came before microservices imposed no definitions or restrictions on component size, whereas the microservice architecture focuses specifically on the scope and size of each service.

The amount of work that is done by one microservice should be small enough so that it can be built, deployed, and managed independently. This is an area where most people struggle while adopting microservices since they think it is something that they should do right the first time. But the reality is that the more you work on the project, the better you become at defining the scope for a given microservice.

Decentralized governance

Instead of having one team govern and define the languages, tools, and libraries to use, microservices allow individual teams to select the best tool for their scope or use case. This is often called the polyglot model of programming, where different microservices teams use different programming languages, databases, and libraries for their respective services. It does not stop there, though: each team can also have its own software development life cycle and release model, so it does not have to wait for approval from someone outside the team. This does not mean that these teams never engage with the experienced architects and tech leads in the organization; those experts join the team during the relevant sprints and work as team members rather than external stakeholders.

Decentralized data management

Sometimes, people tend to think that the microservice style is only suitable for stateless applications and they avoid the question of data management. But in the real world, most applications need to store data in persistent storage, and managing this data is a key aspect of application design. In monolithic applications, everything is stored in a single database in most cases, and sharing data across components happens through in-memory function calls or by sharing the same database or tables. This approach is not suitable for the microservice architecture and it poses many challenges, such as the following:

  • A failure in one component handling data can cause the entire application to fail.
  • Identifying the root cause of the failure would be hard.

The microservice architecture suggests having a database specific to each microservice so that it can keep the microservice's state. Where microservices need to share data, the architecture suggests creating a separate microservice for common data access and using that service to reach the common database. This approach solves the two issues mentioned previously.

Smart endpoints and dumb pipes

One of the key differences between the monolithic architecture and the microservice architecture is the way each component (or service) communicates with the other. In a monolith, the communication happens through in-memory function calls and developers can implement any sort of interconnections between these components within the program, without worrying about failures and complexity. But in a microservice architecture, this communication happens over the network, and engineers do not have the same freedom as in monolithic design.

Given the nature of the microservice approach, the number of services can grow rapidly from tens to hundreds to thousands in no time. This means that going with a mesh topology for inter-service communication can make the overall architecture super complex. Hence, it suggests using the concept of smart endpoints and dumb pipes, where a centralized message broker is used to communicate across microservices. Each microservice would be smart enough to communicate with any other service related to it by only contacting the central message broker; it does not need to be aware of the existence of other services. This decouples the sender and the receiver and simplifies the architecture significantly. We will discuss this topic in greater detail later in this book.
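The decoupling described above can be illustrated with a toy in-process broker, a deliberately simplified stand-in for a real broker such as NATS. The `Broker` type, subject names, and message format here are invented for illustration; a production system would use a networked broker with its own client library:

```go
package main

import (
	"fmt"
	"sync"
)

// Broker is a minimal in-process stand-in for a message broker.
// Services publish to a subject; they never reference each other directly.
type Broker struct {
	mu   sync.RWMutex
	subs map[string][]func(msg string)
}

func NewBroker() *Broker {
	return &Broker{subs: make(map[string][]func(string))}
}

// Subscribe registers a handler for a subject.
func (b *Broker) Subscribe(subject string, handler func(msg string)) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.subs[subject] = append(b.subs[subject], handler)
}

// Publish delivers a message to every subscriber of the subject.
func (b *Broker) Publish(subject, msg string) {
	b.mu.RLock()
	defer b.mu.RUnlock()
	for _, h := range b.subs[subject] {
		h(msg)
	}
}

func main() {
	b := NewBroker()
	// A hypothetical inspection service listens for registrations without
	// knowing which service produced them.
	b.Subscribe("patient.registered", func(msg string) {
		fmt.Println("inspection service received:", msg)
	})
	// The registration service publishes; it holds no reference to inspection.
	b.Publish("patient.registered", "patient-001")
}
```

Note that the sender only knows the subject name, never the receivers; adding a second subscriber requires no change to the publisher, which is exactly the decoupling property the broker approach provides.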

Infrastructure automation

The autonomy provided by the architecture becomes a reality by automating the infrastructure that hosts the microservices. This allows teams to rapidly innovate and release products to production with minimal impact on the application. With the increased popularity of Infrastructure as a Service (IaaS) providers, deploying services has become easier than ever before. Code development, review, testing, and deployment can all be automated through continuous integration/continuous deployment (CI/CD) pipelines with the tools available today.
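As a sketch of what such a pipeline might look like, here is a hypothetical GitHub Actions workflow for a single microservice; the job names, registry URL, and deployment commands are invented for illustration and would differ per organization:

```yaml
# Hypothetical CI/CD pipeline for one microservice: every push to main is
# built, tested, packaged as a container image, and deployed automatically.
name: registration-service
on:
  push:
    branches: [main]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: go test ./...
      - name: Build container image
        run: docker build -t registry.example.com/registration:${{ github.sha }} .
      - name: Push and deploy
        run: |
          docker push registry.example.com/registration:${{ github.sha }}
          kubectl set image deployment/registration app=registry.example.com/registration:${{ github.sha }}
```

Because each microservice has its own pipeline like this, one team can release its service many times a day without coordinating a whole-application deployment.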

Container-based deployments

The adoption of containers as a mechanism to package software as independently deployable units provided the impetus that was needed for microservices. The improved resource utilization provided by the containers against the virtual machines made the concept of decomposing a monolithic application into multiple services a reality. This allowed these services to run in the same infrastructure while providing the advantages offered by the microservices.

The microservice architecture created many small services that required a mechanism to run without extra computing resources. The virtual machine approach was not efficient enough for building microservice-based platforms. Containers provided the required level of process isolation and resource utilization for microservices. The microservice architecture would not have been so successful if containers had not existed.

Design for failure

Once the all-in-one monolithic application had been decomposed into separate microservices and deployed into separate runtimes, the major new challenge was communication over the network and an inevitable property of distributed systems: components fail. With the levels of autonomy we see in microservices teams, there is an even higher chance of failure.

The microservice architecture does not try to avoid this. Instead, it accepts this inevitable fact and designs the architecture for failure. This allows the application to be more robust and ready for failure rather than crashing when something goes wrong. Each microservice should handle failures within itself and common failure handling concepts such as retry, suspension, and circuit breaking need to be implemented at each microservice level.

Agile development

The microservice architecture demands changes in not only the software architecture but also the organizational culture. The traditional software development models (such as the waterfall method) do not go well with the microservice style of development. This is because the microservice architecture demands small teams and frequent releases of software rather than spending months on software delivery with many different layers and bureaucracy. Instead, the microservice architecture works with a more product-focused approach, where each team consists of people with multiple disciplines that are required for a given phase of the product release.

Evolving architecture

The concepts or characteristics we've discussed so far are by no means set in stone for a successful microservice implementation. These concepts will evolve with time and people will identify new problems, as well as come up with better approaches, to solve some of the problems that the microservice architecture tries to solve. Hence, it is important to understand that the technology landscape is an ever-evolving domain and that the microservice architecture is no exception.

 

Breaking down a monolith into microservices

Let's try to understand the concepts of the microservice architecture with a practical example by decomposing a monolithic application into a set of microservices. We will be using a healthcare application for this purpose. The same application will be used throughout this book to demonstrate various concepts along the way.

Let's assume we are building an IT system for a hospital to increase the efficiency of the health services the hospital provides to the community. In a typical hospital, many units exist, and each unit has one or more specific functions. Let's start with one particular unit, the outpatient department (OPD). In an OPD, a few major functions are executed to provide services to patients:

  • Patient registration
  • Patient inspection
  • Temporary treatment
  • Releasing the patient from the unit

We'll start with one unit of the hospital and eventually build an entire healthcare system with microservices as we complete this book. Given that there are only four main functions, the IT team at the hospital has developed one web application that covers all these functional units. The current design is a simple web application with four web pages, each containing a form to update the details captured at that stage, behind a simple login. Anyone with an account in this system can view the details of all four pages.

Figure 1.5 – Simple web application for the OPD unit of a hospital


As depicted in the preceding diagram, the OPD web application is hosted in a web server (for example, Tomcat) and it uses a central database server to keep the data. This system works well, and users of this system are given a username and a password to access the system. Only authorized people can access the web application and it is hosted in a physical computing infrastructure inside the hospital.

Let's try to identify the challenges of this approach of building an application as a single unit or a monolith:

  • Adding new features and improving existing features is a tedious task that requires a total restart of the application, with possible downtimes.
  • The failure of one function can cause the entire application to be useless.
  • If one particular function needs more resources, the entire application needs to be scaled (vertically or horizontally).
  • Integrating with other systems is difficult since most of the functional logic is baked into the same service.
  • The application has a large code base that becomes complex and hard to manage.

As a result of these challenges, the overall efficiency of the system becomes low and the opportunity to serve more patients with new services becomes harder. Instead, let's try to break this monolithic application down into small microservices.

More often than not, the microservice architecture demands a change in the organizational IT culture, as well as the software architecture and tools. Most organizations follow the waterfall approach to building software products. It follows a sequential method where each step in the sequence depends on the previous step. These steps include design, implementation, testing, deployment, and support. This sort of model won't work well with the microservice architecture, which requires a team that consists of people from these various internal groups and can act as a single unit to release each microservice in an agile manner.

Figure 1.6 – Waterfall development culture


The preceding diagram depicts a typical organizational IT culture where different teams with different expertise (center of excellence or CoE teams) work sequentially to develop and manage the life cycle of a software application. This model poses several challenges, such as the following:

  • Longer product release cycles
  • Resistance to change causes a lack of frequent innovation
  • Friction between teams can cause delayed releases, missing features, and low-quality products
  • Higher risk of failure and uncertainty

This kind of organizational culture is not suitable for the highly demanding, innovation-driven enterprise platforms of today. Hence, it is necessary to reduce these boundaries and formulate truly agile teams before starting microservice-style development. Sometimes, it may be difficult to fully remove these boundaries at the beginning. But with time, the individuals and management will realize the advantages of the agile approach.

Figure 1.7 – Sprint-based agile development approach


The preceding diagram depicts an approach where the development of microservices is done as sprints. These focus on certain aspects of the software development life cycle within a defined time frame. One clear difference in this approach, compared to the waterfall approach, is that the team consists of people from different CoE teams, and a virtual team is formed for the sprint's duration. The responsibility of delivering the expected outcome is on their shoulders.

Let's focus on the application architecture where we identified the challenges with the monolithic approach, which was followed by the OPD web application. Instead of having several functions baked into one single application, the microservice architecture suggests building separate microservices for each function and making them communicate over the network whenever they need to interact with each other.

In the existing design, a common database is used to share data among different functions. In a microservice architecture, it makes sense for each microservice to have its own data store; if there is a need to share data across services, they can use messaging or a separate microservice that owns the common data store instead of accessing a shared data store directly. We can decompose the application into separate microservices, as depicted in the following diagram:

Figure 1.8 – OPD application with microservices


If you find the preceding diagram too complicated, I suggest that you read this book until the end. By doing so, you will realize that this complexity comes with many advantages that outweigh the complexity in most cases. As we can see, the OPD web application is divided into a set of independent microservices that act as a cohesive unit to provide the necessary functionality to the consumers. The following is a list of major changes we made to the architecture of the application:

  • Each function is implemented as a separate microservice
  • Each microservice has a datastore
  • A message broker is introduced for inter-service communication
  • The web interface is designed as a single-page application

Let's explore these changes in a bit more detail so that we understand the practicalities of the approach.

Business functions implemented as separate microservices

In his book Domain-Driven Design, Eric Evans explains, with many practical examples, the approaches to follow when deciding on the boundaries of each service. At the same time, he reiterates that this is not straightforward and that it takes some practice to achieve higher levels of efficiency. In the OPD example, this is somewhat straightforward since the existing application has a certain level of functional isolation at the presentation layer (web pages), though this is not reflected in the implementation. As depicted in the preceding diagram, we have identified six different microservices to implement for the OPD application. Each service has a clearly defined scope of functionality, as described here:

  • Registration microservice: This service is responsible for capturing the details of the patient who is visiting the OPD unit and generates a unique ID for the patient if they have not visited before. If the patient is already in the system, their visit details are updated and they are marked as a patient to inspect.
  • Inspection microservice: Once the registration is done, the patient is directed to the inspection unit, where a medical officer inspects the patient, updates the details of the inspection results, and provides their recommendation on the next step to take. The next step might involve temporary treatment, discharging the patient from the unit and moving them to the ward unit, or releasing the patient with medication.
  • Temporary treatment microservice: If the medical officer recommends a temporary treatment, the patient will be admitted to this unit and the required medication will be provided. This service will capture the medication provided against the patient ID and the frequency. After a certain time, the patient will be inspected again by the medical officer, who will decide on the next step.
  • Discharge microservice: Once the medical officer gives their final verdict on the patient, they will be discharged from the OPD unit to either a long-term treatment unit (ward) or their home. This service will capture the details of the discharge state and any treatment that needs to be provided in case the patient is sent back home.
  • Authentication microservice: This microservice is implemented to provide authentication and authorization to the application based on users and roles, as well as to protect the valuable patient details from unauthorized access. This service will store the user credentials, along with their roles, and grant access to the relevant microservice based on the user.
  • Common data microservice: Since the overall OPD acts as a single unit treating a common patient, each microservice has to update one common data store for requirements such as exposing a summary to external applications. Instead of exposing each individual service to external applications, a common microservice like this can serve such requirements by providing a separate interface that is distinct from each microservice's own interface.

Once the microservice boundaries have been defined, the implementation can follow an agile approach where one or more microservices are implemented at the same time, depending on the availability of the resources. Once the interfaces have been defined, teams do not need to wait until another service has been fully implemented. The resources can rotate among teams, depending on their availability. We will discuss additional details regarding the deployment architecture and its implementation details later in this book.

Each microservice has a datastore

One of the main differences between the microservice-based approach and the monolithic approach we discussed in the previous sections is how data is handled. In the monolithic approach, there is a central data store (database) that stores the data related to each section of the OPD unit.

Whenever there is a need for data sharing, the application directly uses the database, and different functions access the same database (even the same tables) without any control. This kind of approach can result in data being corrupted due to an error in the implementation of one function, causing all the functions to fail. At the same time, finding the root cause is hard since multiple components of the application access the same database or table. This design causes even more problems under heavy database load, where every part of the application is affected by the performance of one particular function.

Due to these reasons and many others, the microservice architecture suggests following an approach where each microservice has a data store. At the same time, if there is a need to share a common database across multiple microservices, it recommends having a separate microservice that wraps the database and provides controlled access. If there is a need to share data between microservices, that will be done through an inter-service communication mechanism via messages. We will discuss how to deploy these local data stores, along with microservices, later in this book.
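The wrapper idea above can be sketched as follows: a common data microservice exposes controlled read and write paths, and other services go through it rather than touching the shared database. All type and method names here are illustrative assumptions, with an in-memory map standing in for the shared database:

```go
package main

import (
	"fmt"
	"sync"
)

// PatientSummary is the shared view that multiple services need.
type PatientSummary struct {
	ID     string
	Status string
}

// CommonDataService wraps the shared store; other microservices call this
// service (over the network, in a real deployment) instead of accessing
// the shared database directly.
type CommonDataService struct {
	mu        sync.RWMutex
	summaries map[string]PatientSummary
}

func NewCommonDataService() *CommonDataService {
	return &CommonDataService{summaries: make(map[string]PatientSummary)}
}

// UpdateStatus is the controlled write path into the shared store.
func (c *CommonDataService) UpdateStatus(id, status string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.summaries[id] = PatientSummary{ID: id, Status: status}
}

// Summary is the controlled read path.
func (c *CommonDataService) Summary(id string) (PatientSummary, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	s, ok := c.summaries[id]
	return s, ok
}

func main() {
	common := NewCommonDataService()
	// The inspection service records an update through the wrapper...
	common.UpdateStatus("p1", "inspected")
	// ...and the discharge service reads it through the same wrapper.
	if s, ok := common.Summary("p1"); ok {
		fmt.Println(s.ID, s.Status)
	}
}
```

Because every access goes through one service, corrupting writes can be validated in a single place, and a slow consumer affects only this service's interface rather than every function sharing a table.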

Message broker for inter-service communication

In the monolithic approach, each function runs within the same application runtime (for example, JVM for Java) and whenever there is a need to communicate between functions, it uses an in-memory call such as a method call or function invocation. This is much more reliable and faster since everything happens within the same computer.

The microservice architecture follows a different approach for inter-service communication since each microservice runs in a separate runtime environment, and these runtime environments may run on different networked computers. Many approaches can be used for inter-service communication, and we will explore those options in Chapter 2, Why Is Messaging Important in a Microservice Architecture. For this initial introduction, we will use the message broker-based approach, which we will also rely on, and discuss in more detail, throughout this book.

At the beginning of this chapter, we discussed the different networking topologies and the evolution of distributed systems. There, we identified that having a mesh topology can complicate the overall system architecture and make it harder to maintain the system. Hence, we suggest using a message broker-based approach for inter-service communication throughout this book. The advantages of this approach are as follows:

  • A less complicated system
  • Loose coupling
  • Supports both synchronous and asynchronous communication
  • Easier to maintain
  • Supports the growth of the architecture with less complexity

We will discuss the advantages of using message brokers for inter-service communication throughout this book.

The web interface is designed as a single-page application (SPA)

As we discussed earlier in this chapter, the microservice style of application development involves making a lot of changes to the way engineers build applications. Traditional web applications are built in a manner where different sections of the application are developed as separate web pages and when the user needs to access a different section, the browser will load an entirely different web page, causing delays and a less optimal user experience.

Figure 1.9 – Multi-page OPD web application


As depicted in the preceding diagram, the user is accessing two pages of the web application. Each action triggers the following:

  • A request to the web server
  • The web server calling the database to retrieve data
  • The web server generating the HTML content based on the data
  • Responding to the web browser with generated HTML content

These traditional, multi-page applications can cause a significant performance impact to the user due to this multi-step process of loading a web page on the browser.

The concept of SPA addresses these issues and suggests an approach where the entire web application is designed as a single page that will be loaded to the browser at once. A page refresh won't occur when accessing different sections of the application, which will result in better performance and a better user experience.

Figure 1.10 – Single-page application with microservices


While there is only a single page for the presentation layer of the application, different microservices implemented at the backend provide the required data to be displayed in the web frontend. The advantage of having separate microservices with an SPA is that users with different privileges will only get access to the authorized details. They will also get access to those details in the shortest possible time since no additional data loading happens.

The preceding diagram only depicts a couple of microservices for the sake of simplicity. The real implementation would interact with all the different microservices we discussed in the previous section.

So far, we've discussed the approach that can be followed when decomposing a monolithic application into a microservice architecture at a very high level while providing a few details on certain aspects. We will be continuing with this topic throughout this book, which means you will get the opportunity to learn more along the way.

 

Advantages of the microservice architecture

The microservice architecture provides many advantages over its predecessors. Most of these advantages relate to the evolution of the technology landscape. Understanding the evolution of distributed systems helped the microservice architecture become a better approach for building complex distributed systems for the modern era. The following list points out the advantages of the microservice architecture:

  • Ability to build robust applications
  • Supports the scalability demands of modern businesses
  • Better utilization of computing resources
  • Helps with innovation with reduced time to market and frequent releases
  • Builds systems that are easy to support and maintain

There are several other advantages related to the microservice architecture, but we will start with the aforementioned list and explore more as we continue with this book.

Build robust, resilient applications

The challenge with distributed systems has always been their robustness and resiliency to failure. The microservice architecture helps organizations tackle this problem by designing applications that are ready for failure and have well-defined scopes. The concept of failing fast and recovering quickly allows microservice-based applications to identify failures and fix them almost instantly. At the same time, due to the componentized architecture, the failure of one component won't bring down the entire application; only a portion of it will fail, and most users might not even notice if they don't use that function. This provides a better experience for users.

Build scalable, available applications for modern businesses

The microservice architecture has evolved with the need for scalability in large-scale digital-native organizations, which are required to run thousands of instances of applications and hundreds of different applications. Hence, scalability and availability across wider geographical areas have always been advantages of the microservice architecture. Characteristics such as single responsibility, a modularized architecture, and decentralized data management allow the applications to scale across different data centers and regions without many complications.

Better utilization of computing resources

The popularity of cloud vendors has had a huge impact on the infrastructure costs that are incurred by enterprise software platforms. There were many situations where software systems were utilizing only a fraction of the overall computing infrastructure maintained by these organizations. These reasons paved the way for containers becoming popular and microservices followed the path that was opened up by containers.

Microservices allowed the application to be decomposed into independent units, each of which can decide on the resources it requires to function. Containers allowed each microservice to define its required level of resources and, collectively, this provided a mechanism to utilize computing resources much better than the previous monolithic model.

Helps an innovation-driven organization culture

Modern business organizations are driven by innovations, so having a software architecture that supports that culture helps these organizations excel. The microservice architecture allows teams to innovate and release frequently by choosing the best technology and approach that suits a given business requirement. This urges other teams to also innovate and create an innovation-driven culture within the IT organization.

Build manageable systems

One of the challenges with large monolithic systems was the reliance on Subject-Matter Experts (SMEs) and center of excellence (CoE) teams, which had control over such applications. Even the Chief Technical Officer (CTO) would have to defer to them, given their own lack of knowledge of those systems. These systems were brittle, and the failure of such an application could cause an entire organization to pause its operations. With a defined yet small scope for each microservice and individuals rotating between teams, microservice-driven applications became much more open to the entire organization, and no single team had the power to control the system.

 

Summary

In this chapter, we discussed the concepts of distributed systems and how the evolution of distributed systems paved the way for the microservice architecture, which helps organizations build robust applications with distributed systems concepts. We discussed the key characteristics of the microservice architecture and identified the advantages of it with a practical example of decomposing a monolithic healthcare application into a set of microservices. This chapter has helped you identify the challenges that exist in enterprise software platforms and how to tackle those challenges with microservice architecture principles. The concepts you learned about in this chapter can be used to build scalable, manageable software products for large- and medium-scale enterprises.

In the next chapter, we will get into the nitty-gritty details of building a microservice architecture. We will focus on the important aspects of inter-service communication with messaging technologies.

 

Further reading

Domain-Driven Design: Tackling complexity in the heart of software, by Eric Evans, available at https://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215.

About the Author
  • Chanaka Fernando

    Chanaka Fernando is a solution architect with 12+ years of experience in designing, implementing, and supporting enterprise-scale software solutions for customers across various industries including finance, education, healthcare, and telecommunications. He has contributed to the open source community with his work (design, implementation, and support) as the product lead of the WSO2 ESB, one of the founding members of the "Ballerina: cloud-native programming language" project, and his own work on GitHub. He has spoken at several WSO2 conferences and his articles are published on Medium, DZone, and InfoQ. Chanaka has a bachelor's degree in electronics and telecommunications engineering from the University of Moratuwa.
