
TypeScript Microservices

About this book
In the last few years, microservices have achieved rock-star status and are now one of the most tangible solutions in enterprises for building quick, effective, and scalable applications. The rise of TypeScript and the long evolution from ES5 to ES6 have seen many big companies move to the ES6 stack. If you want to learn how to leverage the power of microservices to build robust architectures using reactive programming and TypeScript in Node.js, then this book is for you. TypeScript Microservices is an end-to-end guide that shows you how to implement microservices from scratch, from starting the project to hardening and securing your services. We begin with a brief introduction to microservices before learning how to break monolithic applications into microservices. From there, you will learn reactive programming patterns and how to build APIs for microservices. The next set of topics takes you through microservice architecture with TypeScript and communication between services. Further, you will learn to test and deploy your TypeScript microservices using the latest tools and implement continuous integration. Finally, you will learn to secure and harden your microservices. By the end of the book, you will be able to build production-ready, scalable, and maintainable microservices using Node.js and TypeScript.
Publication date:
May 2018
Publisher
Packt
Pages
404
ISBN
9781788830751

 

Chapter 1. Debunking Microservices

"If I had asked people what they wanted, they would have said faster horses."

– Henry Ford

Whether you are a tech lead, a developer, or a tech savant eager to adapt to modern web standards, the preceding line sums up your current situation. Failing quickly, fixing and recovering soon, delivering faster, shipping frequent changes, adapting to changing technologies, and building fault-tolerant systems are everyday requirements for a successful business today. For these very reasons, the technology world has recently seen a rapid shift in architectural design that has led industry leaders (such as Netflix, Twitter, and Amazon) to move away from monolithic applications and adopt microservices. In this chapter, we will debunk microservices, study their anatomy, and learn their concepts, characteristics, and advantages. We will learn about microservice design aspects and see some microservice design patterns.

In this chapter, we will talk about the following topics:

  • Debunking microservices
  • Key considerations for microservices
  • Microservice FAQs
  • How microservices satisfy the twelve factors of an application
  • Microservices in the current world
  • Microservice design aspects
  • Microservice design patterns
 

Debunking microservices


The core idea behind microservice development is that if an application is broken down into smaller independent units, with each unit performing its function well, it becomes straightforward to build and maintain the application. The overall application then simply becomes the sum of its individual units. Let's begin by debunking microservices.

Rise of microservices

Today's world is evolving exponentially, and it demands an architecture that can address the following problems; these are what made us rethink traditional architecture patterns and gave rise to microservices.

Wide selection of languages as per demand

There is a great need for technological independence. At any point in time, languages shift and adoption rates change accordingly. Companies such as Walmart have left the Java stack and moved towards the MEAN stack. Today's modern applications are not limited to a web interface, but extend to mobile and smartwatch applications too. So, coding everything in one language is not a feasible option. We need an architecture or ecosystem where multiple languages can coexist and communicate with each other. For example, we may have REST APIs exposed in Go, Node.js, and Spring Boot, with a gateway as the single point of contact for the frontend.

Easy handling of ownership

Today's applications not only include a single web interface, but go beyond into mobiles, smart watches, and virtual reality (VR). Separating logic into individual modules helps to keep everything under control, as each team owns a single unit. Multiple units can then be worked on in parallel, achieving faster delivery. Dependencies between teams should be reduced to a minimum, and it should be easy to find the right person to fix an issue and get the system up and running again; this is what demands a microservice architecture.

Frequent deployments

Applications need to constantly evolve to keep up with an evolving world. When Gmail started, it was just a simple mailing utility; now it has evolved into much more than that. These frequent changes demand frequent deployments, done in such a way that the end user doesn't even know a new version is being released. By dividing an application into smaller units, a team can handle frequent deployments with testing and get features into customers' hands quickly. There should also be graceful degradation: fail fast and recover fast.

Self-sustaining development units

A tight dependency between modules soon cascades to the entire application and brings it down. This calls for smaller independent units, built in such a way that if one unit is not operational, the rest of the application is not affected by it.

Now let's understand in depth about microservices, their characteristics, their advantages, and all the challenges while implementing a microservice architecture.

What are microservices?

There is no universal definition of microservices. Simply stated, a microservice can be any operational block or unit that handles its single responsibility very efficiently.

Microservices are a modern style of building autonomous, self-sustaining, loosely coupled business capabilities that together sum up to an entire system. We will look into the principles and characteristics of microservices, the benefits they provide, and the potential pitfalls to keep an eye out for.

Principles and characteristics

There are a few principles and characteristics that define microservices. Any microservice pattern can be distinguished and explained by these points.

No monolithic modules

A microservice is just another new project satisfying a single operational business requirement. A microservice is linked to business unit changes, and thus it has to be loosely coupled. A microservice should be able to continuously serve changing business requirements irrespective of the other business units. For other services, it is just a matter of consumption; the mode of consumption should not change. Implementations can change in the background.

Dumb communication pipes

Microservices promote basic, time-tested, asynchronous communication mechanisms among themselves. As per this principle, the business logic should stay inside the endpoints and not be amalgamated with the communication channel. The communication channel should be dumb and should only communicate using the agreed protocol. HTTP is a favorable communication protocol, but a more reactive approach, queues, is prevalent these days. Apache Kafka and RabbitMQ are some of the prevalent providers of dumb communication pipes.
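The principle can be sketched with a toy in-memory pipe. The `DumbPipe` class below is purely illustrative (a real deployment would use a broker such as RabbitMQ or Kafka), but it shows where the smarts live: at the endpoints, never in the channel.

```typescript
// A "dumb pipe": it only moves messages; all business logic lives at the endpoints.
type Handler = (msg: string) => void;

class DumbPipe {
  private handlers: Handler[] = [];

  subscribe(handler: Handler): void {
    this.handlers.push(handler);
  }

  // The pipe never inspects or transforms the payload: it just delivers it.
  publish(msg: string): void {
    for (const h of this.handlers) h(msg);
  }
}

// Endpoint logic (the "smart" part) stays in the subscriber.
const pipe = new DumbPipe();
const received: string[] = [];
pipe.subscribe((msg) => received.push(msg.toUpperCase()));
pipe.publish("order-created");
```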

Decentralization or self-governance

While working with microservices, there is always a chance of failure, so a contingency plan is needed that stops a failure from propagating to the entire system. Furthermore, each microservice may have its own data storage needs, and decentralization caters to exactly that. For example, in our shopping module we can store customers and their transaction-related information in SQL databases, but since product data is highly unstructured, we store it in NoSQL databases. Every service should be able to decide what to do in failure scenarios.

Service contracts and statelessness

Microservices should be well defined through service contracts. A service contract gives information about how to consume the service and which parameters need to be passed to it. Swagger and AutoRest are some of the widely adopted frameworks for creating service contracts. Another salient characteristic is statelessness: nothing is stored and no state is maintained by the microservice. If there is a need to persist something, then it is persisted in a cache database or some other datastore.
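As a rough illustration, a contract can also be expressed as plain TypeScript types. The `GetProductRequest` and `GetProductResponse` names below are hypothetical, standing in for what a Swagger document would describe more formally; the handler is stateless because everything it needs arrives in the request.

```typescript
// A hypothetical contract for a product-lookup endpoint, expressed as types.
interface GetProductRequest {
  productId: string; // required parameter, per the contract
}

interface GetProductResponse {
  productId: string;
  name: string;
  priceCents: number;
}

// Stateless handler: nothing is kept between calls. Anything that must
// persist would go to an external datastore instead.
function getProduct(req: GetProductRequest): GetProductResponse {
  return { productId: req.productId, name: "sample product", priceCents: 1999 };
}
```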

Lightweight

Microservices, being lightweight, make it easy to replicate a setup in any hosting environment. Containers are preferred over hypervisors. Lightweight application containers help us maintain a lower footprint by binding a microservice to a single context. Well-designed microservices should perform only one function and perform that operation well. Containerized microservices are easily portable, which enables easy auto-scaling.

Polyglot

Everything is abstract and unknown behind the service API in microservice architecture. In the preceding example of shopping cart microservices, we can have our payment gateway entirely as a service deployed in the cloud (serverless architecture), while the rest of the services can be in Node.js. The internal implementations are completely hidden behind the microservices and the only concern to be taken care of is that the communication protocol should be the same throughout.

Now, let's see what advantages microservice architecture has to offer us.

Good parts of microservices

Adopting microservices has several advantages and benefits. We will look at the benefits and higher business values we get while using microservices.

Self-dependent teams

Microservice architecture enables us to scale any operation independently, have availability on demand, and introduce new services very quickly with zero to very little configuration. Technological dependence is also greatly reduced. For example, in our shopping microservice architecture, the inventory and shopping modules can be deployed and worked on independently. The inventory service will just assume that the product exists and work accordingly. The inventory service can be coded in any language, as long as the communication protocol between the inventory and product services is met.

Graceful degradation of services

Failure in any system is natural, and graceful degradation is a key advantage of microservices: failures do not cascade to the entire system. Microservices are designed to adhere to agreed service level agreements (SLAs); if the SLAs are not met, then the service is dropped. For example, coming back to our shopping microservice example, if our payment gateway is down, then further requests to that service are stopped until the service is up and running again.

Supports polyglot architecture and DevOps

Microservices make use of resources as per need, effectively creating a polyglot architecture. For example, in the shopping microservices, you can store product and customer data in a relational database, but store any audit- or log-related data in Elasticsearch or MongoDB. As each microservice operates in its own bounded context, this enables experimentation and innovation, and the impact of change is very low. Microservices also enable DevOps to its full extent; many DevOps tools and techniques are needed for a successful microservice architecture. Small microservices are easy to automate, easy to test, easy to scale, and make it easy to contain failures. Docker is one of the major tools for containerizing microservices.

Event-driven architecture

A well-architected microservice system will support asynchronous, event-driven architecture. Event-driven architecture helps because every action is the outcome of some event, so any event can be traced and tapped into to debug an issue. Microservices are designed with the publisher-subscriber pattern, meaning that adding another service that just subscribes to an event is a trivial task. For example, say you are using a shopping site and there's a service for add to cart. Now we want to add new functionality so that whenever a product is added to a cart, the inventory is updated. An inventory service can then be prepared that just subscribes to the add to cart event.
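The add-to-cart scenario above can be sketched with Node.js's built-in `EventEmitter` standing in for a real message bus (the event and service names are illustrative):

```typescript
import { EventEmitter } from "events";

// The bus plays the role of the message broker.
const bus = new EventEmitter();

const inventory = new Map<string, number>([["book", 10]]);

// Inventory service: merely subscribes to the event and decrements stock.
// Adding another subscriber (say, analytics) needs no change to the cart code.
bus.on("cart:item-added", (productId: string) => {
  inventory.set(productId, (inventory.get(productId) ?? 0) - 1);
});

// Cart service: its only job here is to publish the event.
function addToCart(productId: string): void {
  bus.emit("cart:item-added", productId);
}

addToCart("book");
```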

Now, we will look into the complexities that microservice architecture introduces.

Bad and challenging parts of microservices

With great power comes greater challenges. Let's look at the challenging parts of designing microservices.

Organization and orchestration

This is one of the topmost challenges when adopting microservice architecture. It is more of a non-functional challenge, wherein new organizational teams need to be formed and guided in adopting microservice, agile, and scrum methodologies. They need to be placed in an environment where they can work independently, and their output should be integrated into the system in such a way that it is loosely coupled and easily scaled.

Platform

Creating the perfect environment needs a proper team and a scalable, fail-safe infrastructure across all data centers. Choosing the right cloud provider (AWS, GCP, or Azure), adding automation, scalability, and high availability, and managing containers and instances of microservices are some of the key considerations. Furthermore, microservices demand other components, such as an enterprise service bus, document databases, cache databases, and so on. Maintaining these components becomes an added task when dealing with microservices.

Testing

Since microservices are completely independent, testing services along with their dependencies is extremely challenging. As a microservice gets introduced into the ecosystem, proper governance and testing are needed; otherwise it will be a single point of failure for the system. Several levels of testing are needed for any microservice. Testing should start with whether the service is able to access its cross-cutting concerns (cache, security, database, logs). Then the functionality of the service should be tested, followed by testing of the protocol through which it communicates. Next comes collaborative testing of the microservice with other services, then scalability testing, followed by fail-safe testing.

Service discovery

Locating services in a distributed environment can be a tedious task. Constant change and delivery are dire requirements for today's constantly evolving world. In such situations, service discovery can be challenging, as we want independent teams and minimal dependencies across teams. Service discovery should provide a dynamic location for each microservice, since the location of a service may constantly change depending on deployments, auto-scaling, or failures. Service discovery should also keep a lookout for services that are down or performing poorly.
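A minimal sketch of the registry idea, leaving out the heartbeats, TTLs, and health filtering that real registries such as Eureka or Consul provide (class and method names are illustrative):

```typescript
// A toy service registry: instances register a location, consumers look one up.
class ServiceRegistry {
  private services = new Map<string, string[]>();

  // An instance announces itself under a logical service name.
  register(name: string, address: string): void {
    const list = this.services.get(name) ?? [];
    list.push(address);
    this.services.set(name, list);
  }

  // On shutdown or failure detection, the instance is removed.
  deregister(name: string, address: string): void {
    const list = (this.services.get(name) ?? []).filter((a) => a !== address);
    this.services.set(name, list);
  }

  // Consumers resolve a dynamic address instead of hardcoding one.
  lookup(name: string): string | undefined {
    return (this.services.get(name) ?? [])[0];
  }
}
```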

Microservice example

The following is a diagram of shopping microservices, which we are going to implement throughout this book. As we can see, each service is independently maintained and there are independent modules or smaller systems—Billing Module, Customer Module, Product Module, and Vendor Module. To coordinate with every module, we have API Gateway and Service Registry. Adding any additional service becomes very easy as the service registry will maintain all the dynamic entries and will be updated accordingly:

 

Key considerations while adopting microservices


A microservice architecture introduces well-defined boundaries, which makes it possible to isolate failures within those boundaries. But like any other distributed system, there is a chance of failure at the application level. To minimize the impact, we need to design fault-tolerant microservices that react in a predefined way to certain types of failure. Adopting a microservice architecture also adds one more network layer to communicate over, rather than in-memory method calls, which introduces extra latency and one more layer to manage. The following are a few considerations that, if handled with care while designing microservices for failure, will benefit the system in the long run.

Service degradation

Microservice architecture allows you to isolate failures and get graceful degradation, as failures are contained within the boundaries of a service and do not cascade. For example, in social networking sites, the messaging service may go down, but that won't stop end users from using the social network. They can still browse posts, share statuses, check in to locations, and so on. Services should be built to adhere to certain SLAs; if a microservice stops meeting its SLA, then that service should fall back or be restored. Netflix's Hystrix is based on this very principle.

Proper change governance

Introducing change without any governance can be a huge problem. In a distributed system, services depend on each other, so when you introduce a new change, the utmost consideration should be given to any side effects or unwanted effects, and their impact should be kept minimal. Various change management strategies and automatic rollout options should be available. Also, proper governance should be in place for code management: development should be done via TDD or BDD, and a change should be rolled out only if the agreed coverage percentage is met. Releases should be done gradually. One useful strategy is blue-green (or red-black) deployment, wherein you run two production environments: you roll out the change in only one environment, and point the load balancer at the newer version only after your change is verified. This is easier when you maintain a staging environment.

Health checks, load balancing, and efficient gateway routing

Depending on business requirements, a microservice instance can start, restart, stop on some failure, run low on memory, or auto-scale, which may make it temporarily or permanently unavailable. Therefore, the architecture and framework should be designed accordingly. For example, a Node.js server, being single-threaded, stops immediately in the case of failure, but tools such as PM2 or forever keep it running gracefully. A gateway should be introduced as the only point of contact for microservice consumers. The gateway can be a load balancer that skips unhealthy microservice instances. The load balancer should collect health metrics and route traffic accordingly, smartly analyze the traffic on any particular microservice, and trigger auto-scaling if needed.
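The skip-unhealthy-instances behavior can be sketched as a tiny round-robin balancer. This is a simplification: a real balancer would poll each instance's health endpoint continuously rather than trust a static flag.

```typescript
// Instances report health; the gateway's balancer only routes to healthy ones.
interface Instance {
  address: string;
  healthy: boolean; // in reality, refreshed by periodic health checks
}

class LoadBalancer {
  private next = 0;
  constructor(private instances: Instance[]) {}

  // Round-robin over healthy instances only; undefined when none are available.
  pick(): string | undefined {
    const healthy = this.instances.filter((i) => i.healthy);
    if (healthy.length === 0) return undefined;
    const chosen = healthy[this.next % healthy.length];
    this.next += 1;
    return chosen.address;
  }
}
```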

Self-curing

A self-healing design can help a system recover from disasters. A microservice implementation should be able to recover lost services and functionality automatically. Tools such as Docker restart services whenever they fail. Netflix provides a wide set of tools as an orchestration layer to achieve self-healing; the Eureka service registry and the Hystrix circuit breaker are commonly used. Circuit breakers make your service calls more resilient by tracking the status of each microservice endpoint. Whenever a timeout is encountered, Hystrix breaks the connection, triggers the need for healing that microservice, and reverts to some fail-safe strategy. Kubernetes is another option: if a pod, or any container inside a pod, goes down, Kubernetes brings the system back up and keeps the replica set intact.
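A minimal circuit breaker in TypeScript, assuming a simplified model where the circuit opens after a fixed number of consecutive failures and is reset manually; Hystrix adds timed half-open probes and metrics on top of this idea.

```typescript
// Minimal circuit breaker: after `threshold` consecutive failures the circuit
// opens and further calls fail fast into the fallback until reset() is invoked.
class CircuitBreaker {
  private failures = 0;
  private open = false;

  constructor(private threshold: number) {}

  call<T>(fn: () => T, fallback: () => T): T {
    if (this.open) return fallback(); // fail fast: never touch the endpoint
    try {
      const result = fn();
      this.failures = 0; // a success ends the failure streak
      return result;
    } catch {
      this.failures += 1;
      if (this.failures >= this.threshold) this.open = true;
      return fallback(); // revert to the fail-safe strategy
    }
  }

  reset(): void {
    // Real breakers do this automatically after a cool-down ("half-open").
    this.failures = 0;
    this.open = false;
  }
}
```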

Cache for failover

Failover caching helps to provide the necessary data whenever there are temporary failures or glitches. The cache layer should be designed so that it can smartly decide how long a cached entry can be used, both in normal situations and during failover. Standard HTTP cache response headers can be used: the max-age header specifies the amount of time a resource will be considered fresh, and the stale-if-error header determines how long the resource may still be served from the cache when an error occurs. You can also use libraries such as Memcache, Redis, and so on.
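The stale-if-error idea in miniature: a cache that refreshes on success and serves the last good value when the upstream call throws. This is a sketch (with no expiry times), not a replacement for Redis or proper HTTP caching.

```typescript
// Failover cache: fresh data when the upstream works, stale data when it fails.
class FailoverCache<T> {
  private last: T | undefined;

  get(fetch: () => T): T {
    try {
      this.last = fetch(); // happy path: refresh the cached value
      return this.last;
    } catch (err) {
      if (this.last !== undefined) return this.last; // serve stale on error
      throw err; // nothing cached yet: propagate the failure
    }
  }
}
```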

Retry until

Due to its self-healing capabilities, a microservice usually gets up and running in no time. A microservice architecture should therefore have retry-until-condition logic, as we can expect that the service will recover or that the load balancer will redirect the request to another healthy instance. Frequent retries can also have a huge impact on the system, though; a general approach is to increase the waiting time between retries after each failure. Microservices should also be able to handle idempotency issues: say you are retrying an order purchase, the customer shouldn't be charged twice. Now, let's take time to revisit the microservice concept and address the most common questions asked about microservice architecture.
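Both of the ideas above, exponential backoff between retries and idempotent handling of retried requests, can be sketched as follows (the helper names and the order-id scheme are illustrative):

```typescript
// Exponential backoff: double the wait after every failure, give up after
// `maxRetries`. Returning the schedule makes the policy easy to inspect.
function backoffSchedule(baseMs: number, maxRetries: number): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(baseMs * 2 ** attempt); // e.g. 100, 200, 400, ...
  }
  return delays;
}

// Idempotency: the same order id processed twice must not charge twice.
const processed = new Set<string>();
function purchase(orderId: string): boolean {
  if (processed.has(orderId)) return false; // duplicate retry: no-op
  processed.add(orderId);
  return true; // charge the customer exactly once
}
```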

 

Microservice FAQs


While understanding any new terms, we often come across several questions. The following are some of the most frequently asked questions that we come across while understanding microservices:

  • Aren't microservices just like service-oriented architecture (SOA)? Don't I already have them? When should I start?

If you have been in the software industry for a long time, then seeing microservices probably reminds you of SOA. Microservices do take the concepts of modularity and message-based communication from SOA, but there are many differences between them. While SOA focuses more on code reuse, microservices follow a play-in-your-own-bounded-context rule; microservices are more of a subset of SOA, and they can be scaled on demand. Not all microservice implementations are the same: using Netflix's implementation in the medical domain is probably a bad idea, as any mistake in a medical report could cost a human life. The simple recipe for a working microservice is to have a clear goal for the operation the service is meant to perform, and a clear answer to what it should do on failure. There have been various answers to when and how to begin with microservices. Martin Fowler, one of the pioneers of microservices, advises starting with a monolith and then gradually moving to microservices. But the question here is: in this era of technological innovation, is there enough investment to go through the same phase again? The short answer is that going early into microservices has huge benefits, as it addresses all concerns from the very beginning.

  • How will we deal with all the parts? Who's in charge? 

Microservices introduce localization and self-rule. Localization means that the huge work that was done earlier by a central team will no longer be done that way. Embracing self-rule means trusting all teams and letting them make their own decisions. This way, software changes, and even migrations, become very easy and fast to manage. Having said that, it doesn't mean that there's no central body at all. With more microservices, the architecture becomes more complex. A central team should still handle all centralized concerns such as security, design patterns, frameworks, the enterprise service bus, and so on. Certain self-governance processes should be introduced, such as SLAs. Each microservice should adhere to these SLAs, and the system design should be smart enough that if the SLAs are not met, the microservice is dropped.

  • How do I introduce change or how do I begin with microservice development?

Almost all successful microservice stories have begun with a monolith that got too big to be managed and was broken up. Changing some part of the architecture all of a sudden has a huge impact, so change should be introduced gradually, in a divide-and-rule fashion. Consider asking yourself the following questions when deciding which part of the monolith to break off: How is my application built and packaged? How is my application code written? Can I have different data sources, and how will my application function when I introduce multiple data sources? Based on the answers, refactor that part, then measure and observe the performance of the application. Make sure that the application stays in its bounded context. Another place you can begin is the part whose performance is worst in the current monolith; finding the bottlenecks that hinder change is good for an organization. Decentralizing operations will eventually allow multiple things to run in parallel and benefit the greater good of the company.

  • What kind of tools and technologies are required? 

While designing a microservice architecture, proper thought should be given to the technology or framework selected for each particular stage. For example, an ideal environment for microservices features cloud infrastructure and containers. Containers give heterogeneous, easy-to-port, easy-to-migrate systems, and using Docker brings resiliency and scalability on demand to microservices. Any part of the microservice infrastructure, such as the API gateway or the service registry, should be API friendly, adaptable to dynamic changes, and not a single point of failure. Containers need to be moved on and off servers and application upgrades tracked, for which a proper orchestration framework, such as Swarm or Kubernetes, is needed. Lastly, some monitoring tool is needed to keep health checks on all microservices and take action when needed; Prometheus is one such famous tool.

  • How do I govern a microservices system? 

With lots of parallel service development going on, there is a pressing need for a centralized governing policy. Not only do we need to take care of certifications and server audits, but also centralized concerns such as security, logging, and scalability, and distributed concerns such as team ownership, sharing concerns between various services, code linters, service-specific concerns, and so on. In such a case, standard guidelines can be laid down, for example: each team provides a Docker configuration file that bundles the software, from fetching dependencies to building it, and produces a container with the specifics of the service. The Docker image can then be run in any standard way, via orchestration tools such as Kubernetes, or on cloud platforms such as Amazon EC2 or GCP.

  • Should all the microservices be coded in the same language?

The generic answer to this question is that it is not a prerequisite. Microservices interact with each other via predefined protocols such as HTTP, sockets, Thrift, RPC, and so on, which we will see in much detail later. This means different services can be written in completely different technology stacks. The internal language implementation of a microservice is not as important as its external outcome, that is, its endpoint and API. As long as the communication protocols are maintained, the implementation language does not matter. Not being restricted to one language is an advantage, but adding too many languages also adds complexity, as developers must maintain each language's environment. The ecosystem should not become a wild jungle where you grow anything.

Cloud-based systems now have a standard set of guidelines. We will look at the famous twelve-factor applications and how microservices adhere to those guidelines.

 

Twelve-factor application of microservices


"Good code fails when you don't have a good process and a platform to help you. Good team fails when you don't have a good culture that embraces DevOps and microservices."

- Tim Spann

The twelve-factor application is a methodology for Software as a Service (SaaS), web applications, and software deployed in the cloud. It describes the characteristics expected of such applications and essentially outlines the necessities for building well-structured, scalable cloud applications:

  • Codebase: We maintain a single codebase for each microservice, with configuration specific to each environment, such as development, QA, and prod. Each microservice has its own repository in a version control system such as Git, Mercurial, and so on.
  • Dependencies: All microservices have their dependencies declared as part of the application bundle. In Node.js, there is package.json, which lists all the development dependencies and overall dependencies. We can even have a private repository from which dependencies are pulled.
  • Configs: All configurations should be externalized, based on the server environment. There should be a separation of config from code. You can set environment variables in Node.js or use Docker Compose to define other variables.
  • Backing services: Any service consumed over the network, such as a database, I/O operations, messaging queues, SMTP, or a cache, is exposed as a microservice, for example using Docker Compose, and kept independent of the application.
  • Build, release, and run: We use automated tools such as Docker and Git in distributed systems. Using Docker, we can isolate the three phases with its push, pull, and run commands.
  • Processes: The microservices designed are stateless and share nothing, enabling fault tolerance and easy scaling. Volumes are used to persist data, thus avoiding data loss.
  • Port binding: Microservices should be autonomous and self-contained, embedding the service listener as part of the service itself. For example, a Node.js application uses the HTTP module so that each process exposes its service by binding to its own port.
  • Concurrency: Microservices are scaled out via replication rather than scaled up. They can be scaled out or shrunk dynamically based on the diversity of the workload, so concurrency is maintained dynamically.
  • Disposability: Maximize the robustness of the application with fast startup and graceful shutdown. Options include restart policies, orchestration using Docker Swarm, reverse proxies, and load balancing with service containers.
  • Dev/prod parity: Keep development, staging, and production environments exactly alike. Containerized microservices help via the build once, run anywhere strategy; the same image is deployed across the various DevOps stages.
  • Logs: Create a separate microservice for centralized logging, treating logs as event streams and sending them to frameworks such as the Elastic Stack (ELK).
  • Admin processes: Admin or management tasks should be packaged as processes of their own, so they can be easily executed, monitored, and managed. This includes tasks such as database migrations, one-time scripts, fixing bad data, and so on.
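The Configs factor above is easy to sketch in code. The following is a minimal TypeScript example of externalized configuration, where all names (variable names, the fallback URL, the `loadConfig` helper) are illustrative assumptions, not part of any particular framework; in a real Node.js service you would pass `process.env`:

```typescript
// Externalized configuration (the Configs factor): settings are read
// from the environment instead of being hard-coded, with safe
// development defaults. All names here are illustrative.
interface ServiceConfig {
  port: number;
  dbUrl: string;
  logLevel: string;
}

function loadConfig(env: Record<string, string | undefined>): ServiceConfig {
  return {
    port: parseInt(env.PORT ?? "3000", 10),
    dbUrl: env.DATABASE_URL ?? "mongodb://localhost:27017/dev",
    logLevel: env.LOG_LEVEL ?? "info",
  };
}

// The same build runs unchanged in dev, QA, and prod; only the
// environment it reads differs.
const config = loadConfig({ PORT: "8080", LOG_LEVEL: "debug" });
```

Because the code itself never changes between environments, the same container image can be promoted from stage to stage, which also supports the dev/prod parity factor.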
 

Microservices in the current world


Now, let's look at the pioneering implementers of microservices in the current world, the advantages they have gained, and the road ahead. The common objective of these companies in adopting microservices was getting rid of monolithic hell. Microservices have even seen adoption at the frontend: companies such as Zalando use microservice principles to have composition at the UI level too.

Netflix

Netflix is one of the front-runners in microservice adoption. Netflix processes billions of viewing events per day and needed a robust and scalable architecture to manage and process this data. Netflix used polyglot persistence to get the strength of each of the technological solutions they adopted. They used Cassandra for high-volume, lower-latency write operations and a hand-made model with tuned configurations for medium-volume write operations. They have Redis for high-volume, lower-latency reads at the cache level. Several frameworks that Netflix tailor-made are now open source and available for use:

Netflix Zuul

An edge server or the gatekeeper to the outside world. It doesn't allow unauthorized requests to pass through. It is the only point of contact for the outside world.

Netflix Ribbon

A load balancer that is used by service consumers to find services at runtime. If more than one instance of a microservice is found, Ribbon uses load balancing to evenly distribute the load.

Netflix Hystrix

A circuit breaker that is used to keep the system up and running. Hystrix breaks the connection for services that are going to fail eventually and only joins the connection when services are up again.

Netflix Eureka

Used for service discovery and registration. It allows services to register themselves at runtime.

Netflix Turbine

Monitoring tool to check the health of running microservices.

 

Just checking the stars on these repositories will give an idea of the rate of adoption of microservices using Netflix's tools.

Walmart

Walmart is one of the most popular shopping destinations on Black Friday, when it serves more than 6 million page views per minute. Walmart adopted a microservices architecture to prepare for the world of 2020, aiming for 100% availability at reasonable cost. Migrating to microservices gave the company a huge uplift: conversion rates went up by 20%, they had zero downtime on Black Friday, they saved 40% of their computational power, and they achieved 20-50% cost savings overall.

Spotify

Spotify has 75 million monthly active users with an average session length of 23 minutes. They adopted a microservice architecture and a polyglot environment. Spotify is a company of 90 teams, 600 developers, and five offices across two continents, all working on the same product, and reducing dependencies between teams as much as possible was a major factor in this decision.

Zalando

Zalando implemented microservices at the frontend. They introduced fragments that serve as separate services for the frontend. Fragments can be composed together at runtime as per the template definitions provided. Similar to Netflix, they have open sourced the libraries they use:

Tailor

It's a layout service that composes a page out of various fragments. Because it does asynchronous, stream-based fetching, it achieves an outstanding time to first byte (TTFB).

Skipper

An HTTP router for communication, more of an HTTP interceptor; it can modify requests and responses with filters.

Shaker

UI components library used for providing consistent user experience while developing fragments across multiple teams.

Quilt

Template storage and manager with REST API.

Innkeeper

Datastores for routes.

Tesselate

Server-side renderer and component tree builder.

 

It now serves more than 1500 fashion brands, generates more than $3.43 billion in revenue, and development is done by a team of more than 700 people.

In the next section, we will break down microservices from the design point of view. We will see what components are involved in microservice design and look at widely prevalent microservice design patterns.

 

Microservice design aspects


While designing microservices, various important decisions need to be taken, such as how the microservices will communicate with each other, how we will handle security, how we will do data management, and so on. Let's now look at the various aspects involved in microservice design and understand the options available for each.

Communication between microservices

Let's understand this aspect with a real-world example. In the shopping cart application, we have our product microservice, inventory microservice, checkout microservice, and user microservice. Now a user opts to buy a product: the product should be added to their cart, the amount paid, and on successful payment, the checkout completed and the inventory updated. The checkout and inventory should be updated only if the payment succeeds, hence the services need to communicate with each other. Let's now look at some of the mechanisms that microservices can use to communicate with each other or with any external clients.

Remote Procedure Invocation (RPI)

Briefly speaking, remote procedure invocation is a protocol that anyone can use to access services from any provider located remotely in the network, without needing to understand the network details. The client uses a request/reply protocol to make requests for services, and it is one of the most feasible alternatives to REST for big data search systems, with fast serialization as one of its major advantages. Some of the technologies providing RPI are Apache Thrift and Google's gRPC. gRPC is a widely adopted library, with more than 23,000 npm downloads per day, and it ships some awesome utilities such as pluggable authentication, tracing, load balancing, and health checking. It is used by Netflix, CoreOS, Cisco, and so on. This pattern of communication has the following advantages:

  • Request and reply are easy
  • Simple to maintain as there is no middle broker
  • Bidirectional streams with HTTP/2-based transportation methods
  • Efficiently connecting polyglot services in microservices styled architectural ecosystems

This pattern has the following challenges and issues for consideration:

  • The caller needs to know the locations of service instances, that is, maintain a client-side registry and server-side registry
  • It only supports the request and reply model and has no support for other patterns such as notifications, async responses, the publish/subscribe pattern, publish async responses, streams, and so on

RPI uses binary rather than text to keep the payload compact and efficient. Requests are multiplexed over a single TCP connection, which allows multiple concurrent messages to be in flight without compromising on network usage.
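The request/reply contract described above can be sketched without a real network. The following TypeScript sketch is not actual Thrift or gRPC code (those frameworks generate typed stubs from an IDL file and speak binary over TCP/HTTP2); it only shows, with entirely illustrative names, the shape of the pattern: a typed client stub that hides dispatch details from the caller:

```typescript
// In-process sketch of the RPI request/reply contract. A real RPI
// framework would replace the handler map with a binary transport.
type Handler = (payload: unknown) => Promise<unknown>;

class RpcServer {
  private handlers = new Map<string, Handler>();

  register(method: string, handler: Handler): void {
    this.handlers.set(method, handler);
  }

  async dispatch(method: string, payload: unknown): Promise<unknown> {
    const handler = this.handlers.get(method);
    if (!handler) throw new Error(`Unknown method: ${method}`);
    return handler(payload);
  }
}

// Client-side stub: the caller sees a plain async method call and
// never deals with transport details (location transparency).
class InventoryClient {
  constructor(private server: RpcServer) {}

  getStock(productId: string): Promise<unknown> {
    return this.server.dispatch("Inventory.getStock", { productId });
  }
}

const server = new RpcServer();
server.register("Inventory.getStock", async (p) => {
  const { productId } = p as { productId: string };
  return { productId, stock: 42 };
});

const client = new InventoryClient(server);
```

Note the limitation listed above: the stub only models request/reply; notifications, publish/subscribe, and streams need the messaging patterns of the next section.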

Messaging and message bus

This mode of communication is used when services have to handle requests from various client interfaces. Services need to collaborate with each other to handle specific operations, for which they need an inter-process communication protocol; asynchronous messaging over a message bus is one of them. Microservices communicate with each other by exchanging messages over various messaging channels. Apache Kafka, RabbitMQ, ActiveMQ, and Kestrel are some of the widely available message brokers that can be used for communication between microservices.

The message broker ultimately provides the following set of functionalities:

  • Routes messages coming from various clients to the appropriate microservice destinations.
  • Transforms messages as needed.
  • Aggregates messages, splits a message into multiple messages, sends them to their destinations as needed, and recomposes them.
  • Responds to errors or events.
  • Provides content-based and publish-subscribe routing.

Using a message bus as the means of communication between microservices has the following advantages:

  • The client is decoupled from the services; it doesn't need to discover any services, giving a loosely coupled architecture throughout.
  • Highly available, as the message broker persists messages until the consumer is able to process them.
  • Supports a variety of communication patterns, including the widely used request/reply, notifications, async responses, publish-subscribe, and so on.

While this mode provides several advantages, it adds the complexity of a message broker, which must itself be made highly available, as it can become a single point of failure. It also implies that the client needs to discover the location of the message broker, its single point of contact.
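The broker responsibilities above can be made concrete with a tiny in-memory sketch. A production system would use RabbitMQ, Kafka, or similar; the `MessageBus` class, topic names, and payloads below are all illustrative assumptions, showing only topic-based publish-subscribe routing and the "persist until a consumer can process it" guarantee:

```typescript
// In-memory sketch of a message bus: routes by topic and buffers
// messages published before any subscriber exists.
type Message = { topic: string; body: unknown };
type Subscriber = (msg: Message) => void;

class MessageBus {
  private subscribers = new Map<string, Subscriber[]>();
  private pending = new Map<string, Message[]>();

  subscribe(topic: string, fn: Subscriber): void {
    const list = this.subscribers.get(topic) ?? [];
    list.push(fn);
    this.subscribers.set(topic, list);
    // Deliver messages that arrived before any consumer was attached.
    (this.pending.get(topic) ?? []).forEach(fn);
    this.pending.delete(topic);
  }

  publish(topic: string, body: unknown): void {
    const msg: Message = { topic, body };
    const subs = this.subscribers.get(topic);
    if (subs && subs.length > 0) {
      subs.forEach((fn) => fn(msg));
    } else {
      // Persist until a consumer is able to process it (availability).
      const queue = this.pending.get(topic) ?? [];
      queue.push(msg);
      this.pending.set(topic, queue);
    }
  }
}

const bus = new MessageBus();
const received: unknown[] = [];
bus.publish("payment.succeeded", { orderId: "o-1" }); // buffered: no consumer yet
bus.subscribe("payment.succeeded", (m) => received.push(m.body)); // delivered now
bus.publish("payment.succeeded", { orderId: "o-2" }); // delivered immediately
```

In the shopping cart example, the checkout and inventory services would subscribe to `payment.succeeded` so that both are updated only after a successful payment, without the payment service knowing either of them.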

Protobufs

Protocol buffers, or protobufs, are a binary format created by Google. Google defines protobufs as a language-neutral and platform-neutral, extensible way of serializing structured data, which can be used as one of the communication protocols. Protobufs also define a set of language rules that describe the structure of messages. Some demonstrations have shown protobufs to be six times faster than JSON. They are very easy to implement, involving three major stages: creating message descriptors, message implementation, and parsing and serialization. Using protobufs in your microservices gives you the following advantages:

  • Protobuf formats are self-describing, formally defined formats.
  • It has RPC support; you can declare server RPC interfaces as part of protocol files.
  • It has an option for structure validation: because messages serialized with protobufs have declared datatypes, they can be validated automatically by the code responsible for exchanging them.

While the protobuf pattern offers various advantages, it has some drawbacks, which are as follows:

  • It is an emerging pattern, so you won't find many resources or detailed documentation for implementing protobufs; the protobuf tag on Stack Overflow has a mere 10,000 or so questions.
  • As it's a binary format, it is not human-readable, unlike JSON, which is simple to read and analyze. FlatBuffers, the next generation of protobufs, is already available now.
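The binary compactness mentioned above is easy to see by hand-encoding the canonical example from the protobuf wire-format documentation: a message whose field number 1 (an int32) is set to 150 serializes to just three bytes. Real projects use code generated from .proto files, never manual encoding; this sketch exists only to show where the savings come from:

```typescript
// Hand-rolled protobuf varint encoding, for illustration only.
function encodeVarint(value: number): number[] {
  const bytes: number[] = [];
  while (value > 0x7f) {
    bytes.push((value & 0x7f) | 0x80); // lower 7 bits + continuation bit
    value >>>= 7;
  }
  bytes.push(value);
  return bytes;
}

function encodeInt32Field(fieldNumber: number, value: number): number[] {
  // Field key = (field number << 3) | wire type; wire type 0 = varint.
  const key = (fieldNumber << 3) | 0;
  return [...encodeVarint(key), ...encodeVarint(value)];
}

// { a: 150 } with "a" as field 1 -> bytes 0x08 0x96 0x01
const wire = encodeInt32Field(1, 150);
```

Compare the three bytes of `wire` with the nine bytes of the JSON string `{"a":150}`: the field name never travels on the wire, only its number, which is why protobuf payloads stay so compact.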

Service discovery

The next obvious aspect to take care of is the method through which any client interface or any microservice discovers the network location of a service instance. Modern applications based on microservices run in virtualized or containerized environments where things change dynamically, including the number of service instances and their locations. The set of service instances also changes dynamically based on auto-scaling, upgrades, and so on. We need an elaborate service discovery mechanism. Discussed next are widely used patterns.

Service registry for service-service communication

Different microservices and various client interfaces need to know the locations of service instances so as to send requests. Usually, virtual machines and containers have different or dynamic IP addresses; for example, an EC2 group with auto-scaling applied adjusts the number of instances based on load. Various options are available for maintaining a registry, such as client-side or server-side registration. Clients or microservices look up that registry to find other microservices to communicate with.

Let's take the real-life example of Netflix. Netflix Eureka is a service registry provider with various options for registering and querying available service instances. Using the exposed POST API, a service instance registers its network location; it must then renew its registration every 30 seconds via the exposed PUT API. Any interface can use the GET API to fetch that instance and use it on demand. Some of the widely available options are as follows:

  • etcd: A key-value store used for shared configuration and service discovery. Projects such as Kubernetes and Cloud Foundry are based on etcd, as it is highly available, key-value based, and consistent.
  • Consul: Yet another tool for service discovery. It exposes API endpoints that allow clients to register and discover services, and it performs health checks to determine service availability.
  • ZooKeeper: A very widely used, highly available, high-performance coordination service for distributed applications. Originally a subproject of Hadoop, ZooKeeper is now a top-level project and comes preconfigured with various frameworks.

Some systems have an implicit, built-in service registry as part of their framework, for example Kubernetes, Marathon, and AWS ELB.
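The Eureka-style lifecycle described above (register, renew via heartbeat, expire) can be sketched in a few lines. The 30-second TTL mirrors the renewal interval mentioned earlier; the `ServiceRegistry` class and all field names are illustrative assumptions, and time is passed in explicitly to keep the sketch deterministic:

```typescript
// Minimal service registry sketch: instances register, renew their
// lease via heartbeats, and are filtered out once the lease expires.
interface Instance {
  serviceName: string;
  host: string;
  port: number;
  lastHeartbeat: number; // ms timestamp of last renewal
}

class ServiceRegistry {
  private instances = new Map<string, Instance>();
  constructor(private ttlMs: number = 30_000) {}

  // POST-style registration.
  register(inst: Omit<Instance, "lastHeartbeat">, now: number): void {
    const key = `${inst.serviceName}@${inst.host}:${inst.port}`;
    this.instances.set(key, { ...inst, lastHeartbeat: now });
  }

  // PUT-style lease renewal.
  heartbeat(key: string, now: number): void {
    const inst = this.instances.get(key);
    if (inst) inst.lastHeartbeat = now;
  }

  // GET-style lookup: only instances with a fresh lease are returned.
  lookup(serviceName: string, now: number): Instance[] {
    return [...this.instances.values()].filter(
      (i) => i.serviceName === serviceName && now - i.lastHeartbeat < this.ttlMs
    );
  }
}

const registry = new ServiceRegistry();
registry.register({ serviceName: "inventory", host: "10.0.0.5", port: 8080 }, 0);
const fresh = registry.lookup("inventory", 10_000); // within the lease
const stale = registry.lookup("inventory", 60_000); // lease expired, evicted from results
```

Expiry on missed heartbeats is exactly what lets the registry survive instances that crash without deregistering, a problem revisited under self-registration below.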

Server-side discovery

All requests made to any of the services are routed via a router or load balancer that runs at a location known to the client interfaces. The router then queries a maintained registry and forwards the request based on the query response. An AWS Elastic Load Balancer is a classic example: it can handle load balancing, handle internal or external traffic, and act as a service registry. EC2 instances are registered with the ELB either via exposed API calls or through auto-scaling. Other options include NGINX and NGINX Plus; Consul templates are available that generate the nginx.conf file from the Consul service registry and can configure proxying as required.

Some of the major advantages of using server-side discovery are as follows:

  • The client does not need to know the location of different microservices. They just need to know the location of the router and the service discovery logic is completely abstracted from the client so there is zero logic at the client end.
  • Some environments provide this component functionality for free.

While these options have great advantages, there are some drawbacks too that need to be handled:

  • It involves more network hops: one from the client to the router, and another from the router to the microservice.
  • If the load balancer is not provided by the environment, then it has to be set up and managed. If not properly handled, then it can be a single point of failure.
  • The selected router or load balancer must support different communication protocols for modes of communication.

Client-side discovery

Under this mode of discovery, the client is responsible for determining the network locations of available microservices and for load balancing requests across them. The client queries a service registry (a database of available service instances), selects a service instance using a load-balancing algorithm, and then makes the request. Netflix uses this pattern extensively and has open sourced its tools as Netflix OSS: Netflix Eureka, Netflix Ribbon, and Netflix Prana. Using this pattern has the following advantages:

  • High performance and availability, as there are fewer network hops: the client queries the registry once and then calls the chosen microservice directly.
  • The pattern is fairly simple and highly resilient, as besides the service registry there are no other moving parts. Since the client knows about the available microservices, it can make intelligent, application-specific decisions easily, such as using consistent hashing or deciding when to trigger auto-scaling.

One significant drawback of this mode of service discovery is that the client-side discovery logic has to be implemented in every programming language and framework used by the service clients, for example Java, JavaScript, Scala, Node.js, Ruby, and so on.
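The client-side selection step can be sketched with the simplest algorithm, round-robin, which is one of the strategies Netflix Ribbon provides. The `RoundRobinClient` class, the injected lookup function, and the instance addresses below are all illustrative assumptions:

```typescript
// Client-side discovery sketch: the client queries the registry itself
// (here an injected lookup function) and rotates across instances.
class RoundRobinClient {
  private counter = 0;

  constructor(private lookup: (service: string) => string[]) {}

  // Pick the next instance for a service, in rotation.
  choose(service: string): string {
    const instances = this.lookup(service);
    if (instances.length === 0) {
      throw new Error(`No instances available for ${service}`);
    }
    return instances[this.counter++ % instances.length];
  }
}

const rrClient = new RoundRobinClient(() => ["10.0.0.5:8080", "10.0.0.6:8080"]);
const picks = [
  rrClient.choose("inventory"),
  rrClient.choose("inventory"),
  rrClient.choose("inventory"),
];
```

The drawback noted above is visible here too: this small class would need to be rewritten in every language the service clients use, which is exactly what libraries such as Ribbon solve for the JVM.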

Registration patterns – self-registration

While using this pattern, each microservice instance is responsible for registering and deregistering itself with the maintained service registry. To maintain health checks, a service instance sends heartbeat requests to prevent its registration from expiring. Netflix uses a similar approach and has open sourced its Eureka library, which handles all aspects of service registration and deregistration; it has clients in Java as well as Node.js, and the Node.js client (eureka-js-client) has more than 12,000 downloads a month. The self-registration pattern has major benefits: any microservice instance knows its own state, so it can implement and move between states such as Starting and Available easily.

However, it also has the following drawbacks:

  • It couples the service tightly to the self-registration process, forcing us to include service registration code in every language and framework we use
  • Any microservice that is running but temporarily unable to handle requests often lacks the awareness to act on that state, and often ends up failing to deregister itself from the registry

Data management

Another important question in microservice design is the database architecture of a microservices application. We will look at various options, such as whether to maintain a private datastore, how to manage transactions, and how to make querying datastores easy in distributed systems. An initial thought might be to go with a single shared database, but on deeper thought we soon see this as an unwise and unfitting solution, because of tight coupling, differing requirements, and runtime blocking between services.

Database per service

In a distributed microservices architecture, different services have different storage requirements. A relational database is the perfect choice when it comes to maintaining relations and running complex queries, while NoSQL databases such as MongoDB are the best choice for unstructured, complex data, and some services may need graph data and thus use a graph database such as Neo4j. The solution is to keep each microservice's data private to that service, accessible only via its API. Each microservice maintains its own datastore as a private part of its implementation, and hence its data is not directly accessible by other services.

Some of the options you have while implementing this mode of data management are as follows:

  • Private tables/collections per service: Each microservice has a set of defined tables or collections that can only be accessed by that service
  • Schema per service: Each service has a schema that can only be accessed via the microservice it is bound to
  • Database per service: Each microservice maintains its own database as per its needs and requirements

On reflection, maintaining a schema per service seems the most logical solution, as it has low overhead and makes ownership clearly visible. If some services have high usage, high throughput, or very different usage patterns, maintaining a separate database is the logical option. A necessary step is to add barriers that restrict any microservice from accessing data it does not own; options for adding this barrier include assigning user IDs with restricted privileges, or access control mechanisms such as grants. This pattern has the following advantages:

  • Loosely coupled services that can stand on their own; changes to one service's datastore won't affect any other services.
  • Each service has the liberty to select the datastore as required. Each microservice has the option of whether to go for relational or non-relational databases as per need. For example, any service that needs intensive search results on text may go for Solr or Elasticsearch, whereas any service where there is structured data may go for any SQL database.
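The "private datastore behind an API" rule above can be sketched in miniature. The `InventoryService` class below is an illustrative assumption: the in-memory map stands in for the service's own database, and the point is simply that the only barrier-crossing path to the data is the service's public methods:

```typescript
// Database-per-service in miniature: the inventory service owns its
// datastore as a private implementation detail; other services can
// reach the data only through its public API.
class InventoryService {
  // Private store: no other service can touch this directly.
  private stock = new Map<string, number>();

  setStock(productId: string, quantity: number): void {
    this.stock.set(productId, quantity);
  }

  getStock(productId: string): number {
    return this.stock.get(productId) ?? 0;
  }
}

const inventory = new InventoryService();
inventory.setStock("sku-1", 5);
```

In a real deployment the same boundary is enforced operationally, for example with per-service database credentials or grants, so that a change to the inventory schema can never break the checkout service.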

This pattern has the following drawbacks and shortcomings that need to be handled with care:

  • Handling complex scenarios that involve transactions spanning multiple services is hard. The CAP theorem states that a distributed datastore cannot simultaneously guarantee all three of consistency, availability, and partition tolerance, so distributed transactions are generally avoided.
  • Queries ranging across multiple databases are challenging and resource consuming.
  • The complexity of managing multiple SQL and non-SQL datastores.

To overcome the drawbacks, the following patterns are used while maintaining a database per service:

  • Sagas: A saga is defined as a batch sequence of local transactions. Each entry in the batch updates the specified database and moves on by publishing a message or triggering an event for the next entry in the batch to happen. If any entry in the batch fails locally or any business rule is violated, then the saga executes a series of compensating transactions that compensate or undo the changes that were made by the saga batch updates.
  • API Composition: This pattern insists that the application should perform the join rather than the database. As an example, a service is dedicated to query composition. So, if we want to fetch monthly product distributions, then we first retrieve the products from the product service and then query the distribution service to return the distribution information of the retrieved products.
  • Command Query Responsibility Segregation (CQRS): The principle of this pattern is to have one or more evolving views, which usually have data coming from various services. Fundamentally, it splits the application into two parts—the command or the operating side and the query or the executor side. It is more of a publisher-subscriber pattern where the command side operates create/update/delete requests and emits events whenever the data changes. The executor side listens for those events and handles those queries by maintaining views that are kept up to date, based on the subscription of events that are emitted by the command or operating side.
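The saga pattern described above can be sketched with local steps and compensating actions. The step names follow the shopping-cart example, and the `runSaga` helper and its return shape are illustrative assumptions, showing only the core rule: a failure part-way through triggers the compensations of the already-completed steps, in reverse order:

```typescript
// Saga sketch: each local transaction has a compensating action that
// undoes it if a later step in the batch fails.
interface SagaStep {
  name: string;
  action: () => void;
  compensate: () => void;
}

function runSaga(steps: SagaStep[]): { completed: string[]; compensated: string[] } {
  const completed: string[] = [];
  const compensated: string[] = [];
  for (const step of steps) {
    try {
      step.action();
      completed.push(step.name);
    } catch {
      // Undo everything done so far, newest first.
      for (const done of steps.slice(0, completed.length).reverse()) {
        done.compensate();
        compensated.push(done.name);
      }
      break;
    }
  }
  return { completed, compensated };
}

const result = runSaga([
  { name: "reserveStock", action: () => {}, compensate: () => {} },
  { name: "chargeCard", action: () => {}, compensate: () => {} },
  { name: "checkout", action: () => { throw new Error("payment gateway down"); }, compensate: () => {} },
]);
// result.completed   -> ["reserveStock", "chargeCard"]
// result.compensated -> ["chargeCard", "reserveStock"]
```

In a real system each step would be a local transaction in one service, with the next step triggered by a published event rather than a direct call, so no cross-service transaction is ever needed.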

Sharing concerns

The next big thing to handle in a distributed microservice architecture is shared concerns: how will general things such as API routing, security, logging, and configuration work? Let's look at these points one by one.

Externalized configuration

An application usually uses one or many infrastructure or third-party services, such as a service registry, message broker, server, cloud deployment platform, and so on. Any service must be able to run in multiple environments without modification, so it should be able to pick up external configurations. This pattern is more of a guideline advising us to externalize all configuration, including database information, environment info, network locations, and so on, and to create a startup routine that reads this information and prepares the application accordingly. Various options are available: Node.js supports setting environment variables, and if you use Docker, there is the docker-compose.yml file.

Observability

Revisiting the twelve factors required for an application, we observe that any application needs some centralized features, even if it is distributed. These centralized features enable proper monitoring and debugging in case of issues. Let's look at some of the common observability parameters to watch for.

Log aggregation

Each service instance generates information about what it is doing in a standardized format, with logs at various levels such as error, warning, info, debug, trace, and fatal. The solution is a centralized logging service that collects logs from each service instance and stores them in a common place where users can search and analyze them. This enables us to configure alerts for certain kinds of logs, and a centralized service also helps with audit logging, exception tracking, and API metrics. Available and widely used frameworks are the Elastic Stack (Elasticsearch, Logstash, Kibana), AWS CloudTrail, and AWS CloudWatch.
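Log aggregation starts with every instance emitting logs in one standardized, machine-parsable shape that the central service can index. The following sketch emits one JSON object per line ("JSON lines"), a common input format for shippers such as Logstash; the field names and the `formatLog` helper are illustrative assumptions:

```typescript
// Structured log formatter sketch: one JSON object per line, so a
// central collector can parse, index, and alert on the fields.
type LogLevel = "trace" | "debug" | "info" | "warn" | "error" | "fatal";

function formatLog(
  service: string,
  level: LogLevel,
  message: string,
  timestamp: Date = new Date()
): string {
  return JSON.stringify({
    "@timestamp": timestamp.toISOString(),
    service,
    level,
    message,
  });
}

// A fixed timestamp is used here only to keep the example deterministic.
const line = formatLog("checkout", "error", "payment gateway timeout", new Date(0));
```

Because `service` and `level` are structured fields rather than free text, the central store can answer queries such as "all `error` lines from `checkout` in the last hour" and drive alerts from them.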

Distributed tracing

The next big problem is understanding the behavior of the application so as to troubleshoot problems when required. This pattern is more of a design guideline: maintain a unique external request ID, generated by a microservice, pass that ID to all services involved in handling the request, and include it in all log messages. Another guideline is to record the start time and end time of requests and of the operations a microservice performs.
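The guideline above boils down to: generate one request ID at the edge, carry it through every downstream call, and stamp it on every log line. The following sketch shows the propagation with illustrative service and field names; real systems would carry the ID in an HTTP header or message property:

```typescript
// Distributed tracing sketch: a single request ID travels through
// every service involved in handling the request and appears in every
// log line, so logs from different services can be correlated.
interface RequestContext {
  requestId: string;
}

const trace: string[] = [];

function log(ctx: RequestContext, message: string): void {
  trace.push(`[${ctx.requestId}] ${message}`);
}

function inventoryService(ctx: RequestContext): void {
  log(ctx, "inventory: start");
  log(ctx, "inventory: end");
}

function checkoutService(ctx: RequestContext): void {
  log(ctx, "checkout: start");
  inventoryService(ctx); // the same ID travels with the downstream call
  log(ctx, "checkout: end");
}

// At the edge (for example, the API gateway) a unique ID is generated
// once per incoming request; "req-42" stands in for a generated UUID.
const ctx: RequestContext = { requestId: "req-42" };
checkoutService(ctx);
```

Searching the aggregated logs for `req-42` now returns the full path of that one request across both services, which is exactly what makes cross-service troubleshooting tractable.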

Based on the preceding design aspects, we will now look at common microservice design patterns and understand each pattern in depth: when to use a particular pattern, the problems it solves, and the pitfalls to avoid while using it.

 

Microservice design patterns


As microservices evolve, so do their design principles. Here are some of the common design patterns that help in designing an efficient and scalable system. Some of these patterns are followed by Facebook, Netflix, Twitter, LinkedIn, and so on, which run some of the most scalable architectures.

Asynchronous messaging microservice design pattern

One of the most important things to consider in a distributed system is state. REST APIs, although highly powerful, have the primitive flaw of being synchronous and thus blocking. This pattern is about achieving a non-blocking, asynchronous design that reliably maintains the same state across the whole application, avoids data corruption, and allows a faster rate of change across the application:

  • Problem: Contextually speaking, if we follow the principle of single responsibility, a model or entity in the application can mean something different to different microservices. So, whenever any change occurs, we need to ensure that the different models stay in sync with that change. This pattern helps solve the issue with asynchronous messaging: to ensure data integrity throughout, the state of key business data and business events must be replicated between microservices or datastores.
  • Solution: Since the communication is asynchronous, the client or caller assumes the message won't be answered immediately, carries on, and attaches a callback to the call; the callback specifies what further operation to carry out once the response is received. A lightweight message broker (not to be confused with the orchestrators used in SOA) is preferably used. The message broker is dumb, that is, it is ignorant of the application state: it routes messages to the services that handle events, but it never handles events itself. Some widely adopted examples include RabbitMQ, Azure Service Bus, and so on; Instagram's feed is powered by RabbitMQ. Based on the complexity of the project, you can introduce either a single receiver or multiple receivers. While a single receiver is simple, it can soon become a single point of failure, so a better approach is to go reactive and introduce the publish-subscribe pattern of communication; that way, a message from the sender becomes available to all subscriber microservices in one go. Practically, in a routine scenario, an update to any model triggers an event to all its subscribers, which may in turn trigger changes in their own models; to keep this manageable, an event bus is generally introduced in this type of pattern to carry inter-microservice communication and act as the message broker. Some commonly available tools for scalable architectures are AMQP implementations, RabbitMQ, NServiceBus, MassTransit, and so on.
  • Take care of: To successfully implement this design, the following aspects should be considered:
  • When you need high scalability, or your current domain is already a message-based domain, preference should be given to message-based commands over HTTP.
  • Publishing events across microservices, as well as changing the state in the original microservices.
  • Make sure that events are communicated across the system; mimicking an event would be a very bad design practice.
  • Maintain the position of the subscriber's consumer (its offset) to scale up performance.
  • Decide when to make a REST call and when to use a messaging call. As HTTP is a synchronous call, it should be used only when needed.
  • When to use: This is one of the most commonly used patterns. Based on the following use cases, you can use this pattern or its variants as per your requirements:
  • When you want real-time streaming, use the Event Firehose pattern, which has Kafka as one of its key components.
  • When your complex system is orchestrated across various services, one of the variants of this pattern, based on RabbitMQ, is extremely helpful.
  • Often, instead of subscribing to services, directly subscribing to the datastore is advantageous. In such cases, using GemFire or Apache Geode with this pattern is helpful.
  • When not to use: In the following scenarios, this pattern is less recommended:
  • When you have heavy database operations during event transmission, as database calls are synchronous
  • When your services are coupled
  • When you don't have standard ways defined to handle data conflict situations
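The publish-subscribe flow above can be sketched in-process with Node's EventEmitter. This is only a minimal illustration; in a real system the event bus would be a broker such as RabbitMQ or Kafka, and the event names and payloads here are hypothetical:

```typescript
import { EventEmitter } from "node:events";

// Hypothetical event type for illustration.
interface OrderCreated {
  orderId: string;
  total: number;
}

// A minimal in-process event bus sketching the publish-subscribe flow.
// A real broker stays "dumb": it routes events but never handles them.
class EventBus {
  private emitter = new EventEmitter();

  publish<T>(topic: string, event: T): void {
    this.emitter.emit(topic, event);
  }

  subscribe<T>(topic: string, handler: (event: T) => void): void {
    this.emitter.on(topic, handler);
  }
}

const bus = new EventBus();
const received: string[] = [];

// Two subscriber microservices react to the same event independently.
bus.subscribe<OrderCreated>("order.created", (e) => received.push(`billing:${e.orderId}`));
bus.subscribe<OrderCreated>("order.created", (e) => received.push(`shipping:${e.orderId}`));

// One publish reaches every subscriber in one go.
bus.publish<OrderCreated>("order.created", { orderId: "o-1", total: 42 });
```

Note how the publisher never knows who consumes the event; adding a third subscriber requires no change to the sender.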

Backend for frontends

The current world demands a mobile-first approach everywhere. A service may need to respond differently to a mobile client, where it has to show little content because screen space is scarce, and to the web, where it can show much more because plenty of space is available. Scenarios may differ drastically based on the device. For example, in a mobile app we may offer a barcode scanner, but on the desktop that is not a wise option. This pattern addresses these issues and helps to effectively design microservices across multiple interfaces:

  • Problem: With services now supporting multiple interfaces, it becomes extremely painful to manage everything in one service. Each interface constantly evolves independently, and the need to keep the service working across all interfaces can soon become a bottleneck and a pain to maintain.

  • Solution: Rather than maintaining a general-purpose API, design one backend per user experience or interface, better termed a backend for frontend (BFF). The BFF is tightly bound to a single interface or specific user experience and is maintained by its own team, so it can easily adapt to new changes. While implementing this pattern, one common concern is keeping the number of BFFs in check. A more generic solution is to separate concerns and have each BFF handle its own responsibility.

  • Take care of: While implementing this design pattern, the following points should be taken care of as they are the most common pitfalls:

  • A fair consideration of the number of BFFs to be maintained. A new BFF should only be created when the concerns of a generally available service can be separated out for a specific interface.
  • A BFF should only contain client/interface-specific code to avoid code duplication.
  • Divide responsibilities across teams for maintaining BFFs.
  • This should not be confused with a shim, which is merely a converter that translates to the interface-specific format required by a given interface.
  • When to use: This pattern is extremely useful in the following scenarios:

  • There are significant differences in a general-purpose backend service across multiple interfaces, and there are frequent updates to a single interface at any point in time.
  • You want to optimize a single interface and not disturb the utility across other interfaces.
  • There are multiple teams, or a specific interface is implemented in an alternative language, and you want to maintain it separately.
  • When not to use: While this pattern does solve lots of issues, this pattern is not recommended in the following scenarios:

  • Do not use this pattern to handle generic cross-cutting concerns such as authentication, security, or authorization. This would just increase latency.
  • If the cost of deploying an extra service is too high.
  • When interfaces make the same requests and there is not much difference between them.
  • When there is only one interface and support for multiple interfaces is not there, a BFF won't make much sense.
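As a minimal sketch of the idea, assuming a hypothetical shared Product model, each BFF below shapes the same data differently for its own interface (a trimmed payload for mobile, the full payload for web):

```typescript
// Hypothetical shared product model returned by a general-purpose service.
interface Product {
  id: string;
  name: string;
  description: string;
  imageUrls: string[];
}

// The mobile BFF trims the payload for a small screen and slow network.
function mobileBff(product: Product) {
  return {
    id: product.id,
    name: product.name,
    summary: product.description.slice(0, 50), // short blurb only
    thumbnail: product.imageUrls[0],           // single image
  };
}

// The web BFF has room for the full description and the whole gallery.
function webBff(product: Product) {
  return { ...product };
}

const product: Product = {
  id: "p-1",
  name: "Camera",
  description: "A mirrorless camera with interchangeable lenses and 4K video.",
  imageUrls: ["front.jpg", "back.jpg"],
};

const mobileView = mobileBff(product);
const webView = webBff(product);
```

Each function would live in its own deployable service owned by the interface's team; only the interface-specific shaping belongs here, never shared business logic.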

Gateway aggregation and offloading

Move specialized, common services and functionalities to a gateway. This pattern can introduce simplicity by moving shared functionality into a single place. Shared functionality can include things such as the use of SSL certificates, authentication, and authorization. A gateway can further be used to join multiple requests into a single request. This pattern simplifies scenarios where a client has to make multiple calls to different microservices for a single operation:

  • Problem: Often, to perform a simple task, a client may need to make multiple HTTP calls to various microservices. Too many calls to the server require more resources, memory, and threads, which adversely affects performance and scalability. Many features are commonly used across multiple services; for example, an authentication service and a product checkout service will both use logging in the same way. Such shared services require configuration and maintenance, and because they are essential, they need an extra set of eyes, for example, token validation, HTTPS certificates, encryption, authorization, and authentication. With each deployment, this is difficult to manage, as it has to span the whole system.

  • Solution: The two major components in this design pattern are the gateway and the gateway aggregator. The gateway aggregator should always be placed behind the gateway. This way, single responsibility is achieved, with each component doing the operation it is meant to do.

  • Gateway: It offloads some common operations, such as certificate management, authentication, SSL termination, caching, protocol translation, and so on, to one single place. It simplifies development by abstracting all this logic in one place, and it speeds up development in a large organization, where only specialized teams work on the gateway and not everyone needs access to it. It maintains consistency throughout the application. The gateway can ensure a minimum amount of logging and thus help to find the faulty microservice. It is much like the facade pattern in object-oriented programming. It acts as the following:

  • Filter
  • Single entry point that exposes various microservices
  • Solution to a common operation such as authorization, authentication, central configuration, and so on, abstracting this logic into a single place
  • Router for traffic management and monitoring

Netflix uses a similar approach, handling more than 50,000 requests per hour, and they have open sourced Zuul:

  • Gateway aggregator: It receives the client request, decides to which systems it has to dispatch the request, gets the results, and then aggregates them and sends them back to the client. For the client, it is just one request. Overall round trips between client and server are reduced.
  • Take care of: The following pitfalls should be properly handled in order to successfully implement this design pattern in microservices:
  • Do not introduce service coupling; the gateway should be able to exist independently, without other service consumers or service implementers.
  • Here, every microservice will be dependent on the gateway. Hence, the network latency should be as low as possible.
  • Make sure to have multiple instances of the gateway, as a single instance would make it a single point of failure.
  • Each request goes through the gateway. Hence, it should be ensured that the gateway has sufficient memory and adequate performance, and can be easily scaled to handle the load. Have one round of load testing to make sure that it is able to handle bulk load.
  • Introduce other design patterns such as bulkheads, retry, throttle, and timeout for efficient design.
  • The gateway should handle logic such as the number of retries and how long to wait for a service before timing out.
  • A cache layer should be handled, which can improve performance.
  • The gateway aggregator should be behind the gateway, as request aggregation is a separate concern. Combining them in the gateway would likely impact the gateway and its functionalities.
  • While using the asynchronous approach, you may find yourself trapped in callback hell with too many promises and callbacks. Go with the reactive approach, a more declarative style. Reactive programming is prevalent everywhere, from Java to Node.js to Android. You can check out this link for reactive extensions across different languages: https://github.com/reactivex.
  • Business logic should not be there in the gateway.
  • When to use: This pattern should be used in the following scenarios:
  • There are multiple microservices and a client needs to communicate with several of them.
  • You want to reduce frequent network calls when the client is on a low-bandwidth or cellular network. Bundling them into one request is efficient, as the frontend or the gateway then only has to cache one request.
  • When you want to encapsulate the internal structure or introduce an abstraction layer for a large team in your organization.
  • When not to use: The following scenarios are when this pattern won't be a good fit:
  • When you just want to reduce the network calls; you cannot introduce a whole new level of complexity for just that need.
  • When the latency introduced by the gateway is too high.
  • You don't have asynchronous options in the gateway. Your system makes too many synchronous calls for operations in the gateway. That would result in a blocking system.
  • Your application can't get rid of coupled services.
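The fan-out-and-merge behavior of a gateway aggregator can be sketched as follows. The downstream calls are stubbed in-process here (fetchProfile and fetchOrders are hypothetical services); in a real gateway they would be HTTP calls, but the shape is the same:

```typescript
// Hypothetical downstream service responses.
type UserProfile = { userId: string; name: string };
type Order = { orderId: string };

// Stubs standing in for HTTP calls to two separate microservices.
async function fetchProfile(userId: string): Promise<UserProfile> {
  return { userId, name: "Jane" };
}

async function fetchOrders(userId: string): Promise<Order[]> {
  return [{ orderId: "o-1" }, { orderId: "o-2" }];
}

// The aggregator dispatches the calls in parallel and merges the results,
// so the client makes one round trip instead of two.
async function aggregateDashboard(userId: string) {
  const [profile, orders] = await Promise.all([
    fetchProfile(userId),
    fetchOrders(userId),
  ]);
  return { profile, orders };
}
```

Promise.all keeps the calls concurrent, so the aggregated response is only as slow as the slowest downstream service, not the sum of them.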

Proxy routing and throttling

This pattern applies when you have multiple microservices that you want to expose behind a single endpoint, with that single endpoint routing to the appropriate service as per need. It is also helpful when you need to handle imminent transient failures with a retry loop on failed operations, thus improving the stability of the application, and when you want to control the consumption of resources used by a microservice.

This pattern is used to meet agreed SLAs and to handle loads on resources even when an increase in demand places extra load on them:

  • Problem: When a client has to consume a multitude of microservices, challenges soon turn up, such as the client having to manage and set up a separate endpoint for each service. If you refactor any part of the code in any service, the client must also be updated, as the client is directly in contact with the endpoint. Further, as these services are in the cloud, they have to be fault tolerant. Faults include temporary loss of connectivity or unavailability of services, and they should be self-correcting. For example, a database service that is taking a large number of concurrent requests should throttle further requests until memory load and resource utilization have decreased; on retrying, the request completes. The load on any application varies drastically over time. For example, a social media chat platform will have very little load during peak office hours, while a shopping portal will have extreme load during festive season sales. For a system to perform efficiently, it has to meet the agreed SLA; once the limit is exceeded, subsequent requests need to be stopped until load consumption has decreased.

  • Solution: Place a gateway layer in front of the microservices. This layer includes the throttle component, as well as a retry component for failed requests. With the addition of this layer, the client only needs to interact with the gateway rather than with each different microservice. It lets you abstract backend calls from the client, thus keeping the client end simple, as the client only has to interact with the gateway. Any number of services can be added without changing the client at any point in time. This pattern can also be used to handle versioning effectively: a new version of the microservice can be deployed in parallel, and the gateway can route to it based on the input parameters passed. New changes can be easily maintained by just a configuration change at the gateway level. This pattern can be used as an alternative strategy to auto-scaling. The layer should allow network requests only up to a certain limit, then throttle further requests and retry them once resources have been released. This will help the system maintain its SLAs. The following points should be considered while implementing the throttle component:

  • One of the parameters to consider for throttling is user or tenant requests. If a specific tenant or user keeps triggering the throttle, it can be safely assumed that there's some issue with that caller.
  • Throttling doesn't necessarily mean stopping the requests. Lower-quality resources can be served instead if available, for example, a mobile-friendly site, a lower-quality video, and so on. Google does the same.
  • Maintain priorities across microservices. Based on priority, requests can be placed in a retry queue. As an ideal solution, three queues can be maintained: cancel, retry, and retry-after-some-time.
  • Take care of: Given here are some of the most common pitfalls that we can come across while successfully implementing this pattern:
  • The gateway can be a single point of failure. Proper steps have to be taken during development to ensure it has fault-tolerance capabilities, and it should be run in multiple instances.
  • The gateway should have proper memory and resource allocation, otherwise it will introduce a bottleneck. Proper load testing should be done to ensure that failures do not cascade.
  • Routing can be done based on IP, header, port, URL, request parameter, and so on.
  • The retry policy should be crafted very carefully based on the business requirements. In some places it's okay to show a please try again message rather than having waiting periods and retrials. The retry policy may also affect the responsiveness of the application.
  • For an effective application, this pattern should be combined with the circuit breaker pattern.
  • A request should be retried if, and only if, the service is idempotent. Retrying against other services may have unhealthy consequences. For example, if a payment service waits for responses from other payment gateways, the retry component may think the request failed, send another one, and the customer gets charged twice.
  • The retry logic should handle different exceptions differently, based on the exception.
  • Retry logic should not disturb transaction management. The retry policy should be used accordingly.
  • All failures that trigger a retry should be logged and handled properly for future scenarios.
  • An important point to consider is that this is no replacement for exception handling. Priority should always be given to exceptions first, as they do not introduce an extra layer or add latency.
  • Throttling should be added early in the system as it's difficult to add once the system is implemented; it should be carefully designed.
  • Throttling should be performed quickly. It should be smart enough to detect an increase in activity and react accordingly by taking appropriate measures.
  • Consideration between throttling and auto-scaling should be decided based on business requirements.
  • The requests that are throttled should be effectively placed in a queue based on priority.
  • When to use: This pattern is very handy in the following scenarios:
  • To ensure that agreed SLAs are maintained.
  • To avoid a single microservice consuming the majority of the pool of resources and avoid resource depletion by a single microservice.
  • To handle sudden bursts in consumption of microservices.
  • To handle transient and short-lived faults.
  • When not to use: In the following scenarios, this pattern should not be used:
  • Throttling shouldn't be used as a means to handle exceptions.
  • When faults are long-lasting. If this pattern is applied in that case, it will severely affect the performance and responsiveness of the application.
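The retry component described above can be sketched as a simple wrapper. This is a bare-bones illustration, assuming a hypothetical flaky service that self-corrects after two transient failures; a production version would add backoff delays, exception-specific policies, and the idempotency check discussed above:

```typescript
// Retries a failed operation up to `maxAttempts` times.
// Only safe for idempotent operations; a real gateway would also
// wait (with backoff) between attempts and inspect the exception type.
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts: number
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err; // log and fall through to the next attempt
    }
  }
  throw lastError; // all attempts exhausted
}

// A flaky stub service simulating a transient, self-correcting fault:
// it fails twice, then succeeds.
let calls = 0;
async function flakyService(): Promise<string> {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
}
```

Because the fault here is transient, the third attempt succeeds; for a long-lasting fault this wrapper would only waste resources, which is exactly why the pattern is combined with a circuit breaker.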

Ambassador and sidecar pattern

This pattern is used when we want to segregate common connectivity features such as monitoring, logging, routing, security, authentication, authorization, and more. It creates helper services that act as ambassadors and sidecars, sending requests on behalf of a service. An ambassador is just another proxy that is located outside of the process. Specialized teams can work on it while other teams need not worry about it, thus providing encapsulation and isolation. It also allows the application to be composed of multiple frameworks and technologies.

The sidecar component in this pattern acts just like a sidecar attached to a motorcycle: it has the same life cycle as the parent microservice, retires at the same time as the parent microservice, and does the essential peripheral tasks:

  • Solution: Find a set of operations that are common throughout different microservices and place them inside their own container or process, thus providing the same interface for these common operations to all frameworks and platforms in the whole system. Add an ambassador layer that acts as a proxy between the application and the microservices. This ambassador can monitor performance metrics such as latency, resource usage, and so on. Anything inside the ambassador can be maintained independently of the application. An ambassador can be deployed as a container, common process, daemon, or Windows service. An ambassador and sidecar are not part of the microservice, but rather are connected to it. Some of the common advantages of this pattern are as follows:
  • Language-independent development of the sidecar and ambassador, that is, you don't have to build a sidecar and ambassador for every language you have in your architecture.
  • It is part of the same host, so it can access the same resources as any other microservice on that host.
  • Because it is co-located with the microservices, there is hardly any latency. Netflix uses a similar approach and has open sourced its tool Prana (https://github.com/Netflix/Prana). Take a look at the following diagram:

  • Take care of: The following points should be taken care of as they are the most common pitfalls:
  • The ambassador can introduce some latency. Deep thought should be given to whether to use a proxy or to expose the common functionality as a library.
  • Adding generalized functionality to the ambassador and sidecar is beneficial, but is it required for all scenarios? For example, consider the number of retries to a service; it might not be common for all use cases.
  • The language or framework in which the ambassador and sidecar will be built, and the strategy for managing and deploying them. Also decide whether to create a single instance or multiple instances, based on need.
  • Flexibility to pass some parameters from the service to the ambassador and proxy, and vice versa.
  • The deployment pattern: this is well suited when the ambassador and sidecar are deployed in containers.
  • The inter-micro service communication pattern should be such that it is framework agnostic or language agnostic. This would be beneficial in the long run.
  • When to use: This pattern is extremely helpful in the following scenarios:
  • When there are multiple frameworks and languages involved and you need a common set of general features such as client connectivity, logging, and so on throughout the application. An ambassador and sidecar can be consumed by any service across the application.
  • Services are owned by different teams or different organizations.
  • You need independent services for handling this cross-cutting functionality and they can be maintained independently.
  • When your team is huge and you want specialized teams to handle, manage, and maintain core cross-cutting functionalities.
  • You need to support the latest connectivity options in a legacy application or an application that is difficult to change.
  • You want to monitor resource consumption across the application and cut off a microservice if its resource consumption is huge.
  • When not to use: While this pattern does solve many issues, this pattern is not recommended in the following scenarios:
  • When network latency is critical. Introducing a proxy layer adds overhead that will create a slight delay, which may not be acceptable for real-time scenarios.
  • When connectivity features cannot be generalized and require another level of integration and dependency with another service.
  • When you can simply create a client library and distribute it to the microservices development teams as a package.
  • For small applications where introducing an extra layer is actually an overhead.
  • When some services need to scale independently; if so, then the better alternative would be to deploy it separately and independently. 
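As a rough in-process sketch of the idea, the ambassador below wraps calls to a service and records latency, keeping that peripheral concern out of the service itself. In a real deployment the ambassador runs out of process, as its own container or daemon, and the service name here is hypothetical:

```typescript
type Call<T> = () => Promise<T>;

// The ambassador forwards calls on behalf of the service and handles
// a peripheral concern (latency monitoring) so the service doesn't have to.
class Ambassador {
  readonly latenciesMs: number[] = [];

  async forward<T>(call: Call<T>): Promise<T> {
    const start = Date.now();
    try {
      return await call();
    } finally {
      // Monitoring lives here, independent of the service's own code.
      this.latenciesMs.push(Date.now() - start);
    }
  }
}

// Hypothetical backend service call.
async function backendService(): Promise<string> {
  return "response";
}

const ambassador = new Ambassador();
```

Because the wrapper is language-agnostic at the process boundary in a real deployment, one ambassador implementation can serve services written in any language.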

Anti-corruption microservice design pattern

Often, we need interoperability or coexistence between legacy and modern applications. This design provides an easy solution for this by introducing a facade between modern and legacy applications. This design pattern ensures that the design of an application is not hampered or blocked by legacy system dependencies:

  • Problem: New systems, or systems in the process of migration, often need to communicate with a legacy system. The new system's model and technology will probably be different, and old systems are usually weaker, but legacy resources may still be needed for some operations. Often, these legacy systems suffer from poor design and poor schemas. For interoperability, we may still need to support the old system. This pattern is about isolating such corruption while still keeping a cleaner, neater, and easier-to-maintain microservice ecosystem.

  • Solution: To avoid using legacy code or a legacy system directly, design a layer that acts as the only layer communicating with the legacy code, preventing direct access to it, wherein different consumers might otherwise deal with it in different ways. The core concept is to separate out the legacy or corrupt application by placing an ACL in between, with the objective of not changing the legacy layer, thus avoiding compromising its approach or a major technological change.

  • The anti-corruption layer (ACL) should contain all the logic for translating from the old model as per the new needs. This layer can be introduced as a separate service or as a translator component anywhere it is needed. A general approach to organizing the design of the ACL is a combination of facades, adapters, translators, and communicators to talk to the systems. An ACL is used to prevent unexpected behavior of an external system from leaking into your existing context:

  • Take care of: While effectively implementing this pattern, the following points should be considered as they are some of the major pitfalls:
  • The ACL should be properly scaled and given a proper resource pool, as it will add latency to calls made between the two systems.
  • Make sure that the corruption layer you introduce is actually an improvement and you don't introduce yet another layer of corruption.
  • The ACL adds an extra service; hence it must be effectively managed and maintained and scaled.
  • Effectively decide the number of ACLs. There can be many reasons to introduce an ACL: as a means to translate undesirable object formats into required formats, as a means to communicate between different languages, and so on.
  • Effective measures to make sure that transactions and data consistency are maintained between both systems and can be monitored.
  • The duration of the ACL: will it be permanent, and how will the communication be handled?
  • While the ACL should successfully handle exceptions raised by the corruption layer, it should not swallow them completely, otherwise it would be very difficult to preserve any information about the original error.
  • When to use: The anti-corruption pattern is highly recommended and extremely useful in the following scenarios:
  • There is a huge system up for refactoring from monolithic to microservices and there is a phase-by-phase migration planned instead of the big bang migration wherein the legacy system and new system need to coexist and communicate with each other.
  • If the system that you are undertaking is dealing with any data source whose model is undesirable or not in sync with the needed model, this pattern can be introduced and it will do the task of translating from undesirable formats to needed formats.
  • Whenever there is a need to link two bounded contexts, that is, a system is developed by someone else entirely and there is very little understanding of it, this pattern can be introduced as a link between systems.
  • When not to use: This pattern is highly not recommended in the following scenarios:
  • There are no major differences between new and legacy systems. The new system can coexist without the legacy system.
  • You have lots of transactional operations, and maintaining data consistency between the ACL and the corrupt layer adds too much latency. In such cases, this pattern can be merged with other patterns.
  • Your organization doesn't have extra teams to maintain and scale the ACL as and when needed.
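A minimal sketch of an ACL translator follows. The legacy and new field names are entirely hypothetical; the point is that this function is the only place in the new system that knows about the legacy schema:

```typescript
// Hypothetical legacy record, with the cryptic column names and flag
// conventions typical of old schemas.
interface LegacyCustomer {
  CUST_ID: number;
  CUST_NM: string;
  ACTV_FLG: "Y" | "N";
}

// The clean model used by the new microservices.
interface Customer {
  id: string;
  name: string;
  active: boolean;
}

// The ACL translator: legacy quirks (flag strings, padded names,
// numeric IDs) are absorbed here and never leak into the new context.
function translateCustomer(legacy: LegacyCustomer): Customer {
  return {
    id: `cust-${legacy.CUST_ID}`,
    name: legacy.CUST_NM.trim(),
    active: legacy.ACTV_FLG === "Y",
  };
}

const migrated = translateCustomer({ CUST_ID: 7, CUST_NM: " Jane ", ACTV_FLG: "Y" });
```

In practice this translator would sit inside a facade or adapter service between the two systems, with a matching reverse translation if the new system also writes back to the legacy store.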

Bulkhead design pattern

Separate out different services in the microservices application into various pools such that if one of the services fails, the others will continue to function irrespective of failure. Create a different pool for each microservice to minimize the impact:

  • Problem: This pattern takes inspiration from the sectioned-out parts of a ship's hull. If the hull of a ship is damaged, only the damaged section fills with water, which prevents the ship from sinking. Say you are connecting to various microservices that are using a common thread pool. If one of the services starts showing delays, all pool members end up exhausted waiting for its responses. Incrementally, a large number of requests coming for one service would deplete the available resources. That's where this pattern suggests a dedicated pool for every single service.
  • Solution: Separate service instances into different groups based on load and network usage. This allows you to isolate system failures and prevent depletion of resources in the connection pool. The essential advantages of this system are the prevention of propagating failures and the ability to configure the capacity of each resource pool. For higher-priority services, you may assign larger pools.

Note

For example, here is a sample file showing the pool allocation for the shopping-management service: https://gist.github.com/parthghiya/be80246cc5792f796760a0d43af935db.

  • Take care of: Make sure to take care of the following points to make sure that a proper bulkhead design is implemented:
  • Define proper independent partitions in the application based on business and technical requirements.
  • Bulkheads can be introduced in the form of thread pools or processes. Decide which one is suitable for your application.
  • Isolation in the deployment of your microservices.
  • When to use: The bulkhead pattern adds an advantage in the following scenarios:
  • The application is huge and you want to protect it from cascading or spreading failures
  • You can isolate critical services from standard services and you can allocate separate pools for them
  • When not to use: This pattern is not advisable for the following scenarios:
  • When you don't have that much budget for maintaining separate overheads in terms of cost and management
  • The added level of complexity of maintaining separate pools is not necessary
  • Your resource usage is unpredictable and you can't isolate your tenants or keep a limit on them; in that case it is not acceptable to place several tenants in one partition
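The dedicated-pool idea can be sketched as a small concurrency limiter; each downstream service would get its own Bulkhead instance, so a slow service only exhausts its own pool. This is a simplified illustration that rejects overflow outright rather than queueing:

```typescript
// A per-service concurrency pool: calls beyond `maxConcurrent` are
// rejected immediately instead of consuming shared resources.
class Bulkhead {
  private inFlight = 0;
  rejected = 0;

  constructor(private readonly maxConcurrent: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.inFlight >= this.maxConcurrent) {
      this.rejected++;
      throw new Error("bulkhead full");
    }
    this.inFlight++;
    try {
      return await task();
    } finally {
      this.inFlight--; // free the slot whether the task succeeds or fails
    }
  }
}

// Simulate a slow downstream service whose calls never complete.
const slowTask = () => new Promise<string>(() => {});

const pool = new Bulkhead(2);
// Fill the pool with two hanging calls; only this pool is affected.
pool.run(slowTask).catch(() => {});
pool.run(slowTask).catch(() => {});
```

With one such pool per service, the hanging calls above would exhaust only their own bulkhead, while pools for other services keep serving requests.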

Circuit breaker

Services sometimes need to collaborate with each other to handle requests. In such cases, there is a very real chance that the other service is not available, is showing high latency, or is unusable. This pattern essentially solves this issue by introducing a breakage in the circuit that stops the failure from propagating through the whole architecture:

  • Problem: In the microservices architecture, when there is inter-service communication, a remote call needs to be invoked instead of an in-memory call. It may so happen that the remote call fails or reaches a timeout limit and hangs without any response. In such cases, when there are many callers, threads get locked up waiting, you can run out of resources, and the whole system becomes unresponsive.
  • Solution: A very primitive idea for solving this issue is introducing a wrapper for a protected function call that monitors for failures. This wrapper can be triggered by anything, such as a certain threshold of failures, database connection failures, and so on. Once triggered, all further calls return with an error, stopping catastrophic propagation. This trips the circuit open, and while the circuit is open, the wrapper avoids making the protected call. The implementation, just as in an electric circuit, goes through three states: Closed State, Open State, and Half-Open State, as explained in the following diagram:

Note

Here is an example implementation in Node.js of Hystrix (open sourced by Netflix): https://gist.github.com/parthghiya/777c2b423c9c8faf0d427fd7a3eeb95b

  • Take care of: The following needs to be taken care of when you want to apply the circuit breaker pattern:
  • Since you are invoking remote calls, and there may be many concurrent remote call invocations, asynchronous and reactive principles, using futures, promises, and async/await, are a must.
  • Maintain a queue of requests; when the queue is overcrowded, you can easily trip the circuit. Always monitor the circuit, as you will often need to activate it again for an efficient system, so have mechanisms ready for reset and failure handling.
  • Have persistent storage and a network cache such as Memcached or Redis to record availability.
  • Logging, exception handling, and relaying failed requests.
  • When to use: In the following use cases, you can use the circuit breaker pattern:
  • When you don't want your resources to be depleted, that is, an action that is doomed to fail shouldn't be tried until it is fixed. You can use it to check the availability of external services.
  • When you can compromise a bit on performance, but want to gain high availability of the system and not deplete resources.
  • When not to use: In the following scenarios, it is not advisable to introduce the circuit breaker pattern:
  • You don't have an efficient cache layer that monitors and maintains the state of the services for a given time window for requests across the clustered nodes.
  • For handling in-memory structures, or as a substitute for handling exceptions in business logic. This would add overhead to performance.
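A deterministic sketch of the three states follows. Real implementations such as Hystrix trip on failure rates and move to half-open after a timeout; for illustration, this version trips after a fixed number of failures and probes after a fixed number of rejected calls:

```typescript
type State = "CLOSED" | "OPEN" | "HALF_OPEN";

class CircuitBreaker {
  state: State = "CLOSED";
  private failures = 0;
  private rejections = 0;

  constructor(
    private readonly failureThreshold: number, // failures before tripping open
    private readonly probeAfter: number        // rejections before half-open
  ) {}

  async call<T>(operation: () => Promise<T>): Promise<T> {
    if (this.state === "OPEN") {
      this.rejections++;
      if (this.rejections >= this.probeAfter) {
        this.state = "HALF_OPEN"; // let the next call through as a probe
      }
      throw new Error("circuit open"); // fail fast, protect the callee
    }
    try {
      const result = await operation();
      this.failures = 0;
      this.state = "CLOSED"; // a successful probe closes the circuit
      return result;
    } catch (err) {
      this.failures++;
      if (this.state === "HALF_OPEN" || this.failures >= this.failureThreshold) {
        this.state = "OPEN"; // trip (or re-trip after a failed probe)
        this.rejections = 0;
      }
      throw err;
    }
  }
}

// Hypothetical downstream calls for exercising the breaker.
const failing = async (): Promise<string> => { throw new Error("service down"); };
const healthy = async (): Promise<string> => "ok";

const breaker = new CircuitBreaker(2, 1);
```

While the circuit is open, callers fail fast instead of tying up threads waiting on a dead service, which is exactly the resource-depletion problem described above.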

Strangler pattern

Today's world is one where technology is constantly evolving. What is written today is just tomorrow's legacy code. This pattern is very helpful when it comes to the migration process. It is about eventually migrating a legacy system by incrementally replacing particular parts of its functionality with new microservices. It introduces a proxy that redirects either to the legacy application or to the new microservices until the migration is complete, at which point you can shut off the strangler, or the proxy:

  • Problem: With aging systems, constantly evolving development tools and hosting options, and the rise of cloud and serverless platforms, maintaining the current system while adding new features and functionality becomes extremely painful. Completely replacing a system in one go is a huge task, so a gradual migration is needed in which the old system continues to handle the parts that haven't been migrated yet. This pattern solves exactly this problem.
  • Solution: The strangler solution resembles a vine that strangles a tree that it's wrapped over. Over time, the migrated application strangles the original application until you can shut off the monolithic application. Thus, the overall process is as follows:
  • Reconstruct: Construct a new application (serverless or on a cloud such as AWS, built on modern principles) and incrementally reconstruct its functionality in an agile manner.
  • Coexist: Leave the legacy application as it is. Introduce a facade that acts as a proxy and decides where to route each request based on the current migration status. This facade can be introduced at the web-server level or in application code, routing on parameters such as IP address, user agent, or cookies.
  • Terminate: Redirect everything to the migrated modern application and sever all ties with the legacy application.

Note

A sample gist of .htaccess that acts as a facade can be found at this link: https://gist.github.com/parthghiya/a6935f65a262b1d4d0c8ac24149ce61d.

The solution instructs us to create a facade or proxy that intercepts the requests going to the backend legacy system and decides whether to route each one to the legacy application or to the new microservices. This is an incremental process in which both systems coexist; end users won't even notice when the migration completes. It has the added advantage that if the adopted microservices approach doesn't work out, it is very simple to switch back.
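The routing decision inside such a facade can be sketched as follows. The `migratedPrefixes` set, the example paths, and the `route` function are hypothetical names for illustration; a real facade would sit in a reverse proxy or web-server rule set, as in the .htaccess gist above:

```typescript
// Which URL path prefixes have already been migrated to microservices.
// In a real system this would come from configuration and be updated
// as the migration progresses.
const migratedPrefixes = new Set(["/products", "/cart"]);

type Target = "legacy" | "microservice";

// Decide where to send a request based on the migration status of its
// first path segment.
function route(path: string): Target {
  const prefix = "/" + (path.split("/")[1] ?? "");
  return migratedPrefixes.has(prefix) ? "microservice" : "legacy";
}
```

As each bounded context is migrated, you add its prefix to the routing table; when every prefix routes to the new system, the legacy backend can be terminated.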

  • Take care of: The following points need to be taken care of to apply the strangler pattern effectively:
  • The facade or proxy needs to be kept up to date as the migration progresses.
  • The facade or proxy mustn't become a single point of failure or a bottleneck.
  • When the migration is complete, the facade can evolve into an adapter for any remaining legacy applications.
  • New code should be written so that it, too, can easily be intercepted and replaced in future migrations.
  • When to use: The strangler application is extremely useful when it comes to replacing a legacy and monolithic application with microservices. The pattern is used in the following cases:
  • When you want to follow test-driven or behavior-driven development, run fast and comprehensive tests with code-coverage reporting, and adopt CI/CD.
  • When your application can be divided into bounded contexts, each with its own model. For example, in a shopping cart application, the product module would be one such context.
  • When not to use: This pattern may not be applicable in the following scenarios:
  • When you are not able to intercept the user agent request, or you are not able to introduce a facade in your architecture.
  • When you plan a page-by-page migration, one page at a time, or intend to replace the whole system at once.
  • When your application is largely frontend-driven, since you would have to entirely rework the way the frontend interacts with services, and you don't want to expose the various ways user agents talk to those services.
 

Summary


In this chapter, we demystified microservices: their evolution, their characteristics, and their advantages. We went through the various design principles of microservices, the process of refactoring a monolithic application into microservices, and the main microservice design patterns.

In the next chapter, we will start our microservice journey. We will go through all the setup required for our microservice journey. We will go through concepts related to Node.js and TypeScript, which are essential throughout the book. We will also create our first microservice, Hello World.
