This book does not blindly praise microservices. Instead, it's about how we can use their benefits while being able to handle the challenges of building scalable, resilient, and manageable microservices.
As an introduction to this book, the following topics will be covered in this chapter:
- How I learned about microservices and what experience I have of their benefits and challenges
- What is a microservice-based architecture?
- Challenges with microservices
- Design patterns for handling challenges
- Software enablers that can help us handle these challenges
- Other important considerations that aren't covered in this book
No installations are required for this chapter. However, you may be interested in taking a look at the C4 model conventions, https://c4model.com, since the illustrations in this chapter are inspired by the C4 model.
This chapter does not contain any source code.
When I first learned about the concept of microservices back in 2014, I realized that I had been developing microservices (well, kind of) for a number of years without knowing it was microservices I was dealing with. I was involved in a project that started in 2009 where we developed a platform based on a set of separated features. The platform was delivered to a number of customers that deployed it on-premise. To make it easy for the customers to pick and choose what features they wanted to use from the platform, each feature was developed as an autonomous software component; that is, it had its own persistent data and only communicated with other components using well-defined APIs.
Since I can't discuss specific features in this project's platform, I have generalized the names of the components, which are labeled from Component A to Component F. The composition of the platform into a set of components is illustrated as follows:
Each component is developed using Java and the Spring Framework, and is packaged as a WAR file and deployed as a web app in a Java EE web container, for example, Apache Tomcat. Depending on the customer's specific requirements, the platform can be deployed on single or multiple servers. A two-node deployment may look as follows:
Decomposing the platform's functionality into a set of autonomous software components provides a number of benefits:
- A customer can deploy parts of the platform in its own system landscape, integrating them with its existing systems using the platform's well-defined APIs.
The following is an example where one customer decided to deploy Component A, Component B, Component D, and Component E from the platform and integrate them with two existing systems in the customer's system landscape, System A and System B:
- Another customer can choose to replace parts of the platform's functionality with implementations that already exist in the customer's system landscape, potentially requiring some adaptation of the existing functionality to the platform's APIs. The following is an example where a customer has replaced Component C and Component F in the platform with their own implementation:
- Each component in the platform can be delivered and upgraded separately. Thanks to using well-defined APIs, one component can be upgraded to a new version without being dependent on the life cycle of the other components.
The following is an example where Component A has been upgraded from version v1.1 to v1.2. Component B, which calls Component A, does not need to be upgraded since it uses a well-defined API; that is, it's still the same after the upgrade (or it's at least backward-compatible):
- Thanks to the use of well-defined APIs, each component in the platform can also be scaled out to multiple servers independently of the other components. Scaling can be done either to meet high availability requirements or to handle higher volumes of requests. Technically, this is achieved by manually setting up load balancers in front of a number of servers, each running a Java EE web container. An example where Component A has been scaled out to three instances looks as follows:
We also learned that decomposing the platform introduced a number of new challenges that we were not exposed to (at least not to the same degree) when developing more traditional, monolithic applications:
- Adding new instances to a component required manually configuring load balancers and manually setting up new nodes. This work was both time-consuming and error-prone.
- The platform was initially vulnerable to errors in the other systems it communicated with. If a system stopped responding in a timely fashion to requests sent from the platform, the platform quickly ran out of crucial resources, for example, OS threads, specifically when exposed to a large number of concurrent requests. This caused components in the platform to hang or even crash. Since most of the communication in the platform was based on synchronous communication, one component crashing could lead to cascading failures; that is, clients of the crashed component could also crash after a while. This is known as a chain of failures.
- Keeping the configuration consistent and up to date in all the instances of the components quickly became a problem, causing a lot of manual and repetitive work. This led to quality problems from time to time.
- Monitoring the state of the platform in terms of latency issues and hardware usage (for example, usage of CPU, memory, disks, and the network) was more complicated compared to monitoring a single instance of a monolithic application.
- Collecting log files from a number of distributed components and correlating related log events from the components was also difficult but feasible since the number of components was fixed and known in advance.
Over time, we addressed most of the challenges that were mentioned in the preceding list with a mix of in-house-developed tools and well-documented instructions for handling these challenges manually. The scale of the operation was, in general, at a level where manual procedures for releasing new versions of the components and handling runtime issues were acceptable, even though they were not desirable.
Learning about microservice-based architectures in 2014 made me realize that other projects had also been struggling with similar challenges (partly for other reasons than the ones I described earlier, for example, the large cloud service providers meeting web-scale requirements). Many microservice pioneers had published details of lessons they'd learned. It was very interesting to learn from these lessons.
Many of the pioneers initially developed monolithic applications that made them very successful from a business perspective. But over time, these monolithic applications became more and more difficult to maintain and evolve. They also became challenging to scale beyond the capabilities of the largest machines available (also known as vertical scaling). Eventually, the pioneers started to find ways to split monolithic applications into smaller components that could be released and scaled independently of each other. Scaling small components can be done horizontally, that is, deploying a component on a number of smaller servers and placing a load balancer in front of it. If done in the cloud, the scaling capability is potentially endless – it is just a matter of how many virtual servers you bring in (given that your component can scale out on a huge number of instances, but more on that later on).
In 2014, I also learned about a number of new open source projects that delivered tools and frameworks that simplified the development of microservices and could be used to handle the challenges that come with a microservice-based architecture. Some of these are as follows:
- Pivotal released Spring Cloud, which wraps parts of the Netflix OSS in order to provide capabilities such as dynamic service discovery, configuration management, distributed tracing, circuit breaking, and more.
- I also learned about Docker and the container revolution, which is great for minimizing the gap between development and production. Being able to package a component not only as a deployable runtime artifact (for example, a Java war or jar file) but as a complete image, ready to be launched as a container (that is, an isolated process) on a server running Docker, was a great step forward for development and testing.
- A container engine, such as Docker, is not enough to be able to use containers in a production environment. Something is needed that can, for example, ensure that all the containers are up and running and that they can scale out containers on a number of servers, thereby providing high availability and/or increased compute resources. These types of products became known as container orchestrators. A number of products have evolved over the last few years, such as Apache Mesos, Docker in Swarm mode, Amazon ECS, HashiCorp Nomad, and Kubernetes. Kubernetes was initially developed by Google. When Google released v1.0, they also donated Kubernetes to CNCF (https://www.cncf.io/). During 2018, Kubernetes became something of a de facto standard, available both pre-packaged for on-premises use and as a service from most major cloud providers.
- I have recently started to learn about the concept of a service mesh and how a service mesh can complement a container orchestrator, further offloading responsibilities from the microservices themselves to make them more manageable and resilient.
Since this book can't cover all aspects of the technologies I just mentioned, I will focus on the parts that have proven to be useful in customer projects I have been involved in since 2014. I will describe how they can be used together to create cooperating microservices that are manageable, scalable, and resilient.
Each chapter in this book will address a specific concern. To demonstrate how things fit together, I will use a small set of cooperating microservices that we will evolve throughout this book:
Now that we know the how and what of microservices, let's start to look into how a microservice can be defined.
To me, a microservice architecture is about splitting up monolithic applications into smaller components, which achieves two major goals:
- Faster development, enabling continuous deployments
- Easier to scale, manually or automatically
A microservice is essentially an autonomous software component that is independently upgradeable and scalable. To be able to act as an autonomous component, it must fulfill certain criteria as follows:
- It must conform to a shared-nothing architecture; that is, microservices don't share data in databases with each other!
- It must only communicate through well-defined interfaces, for example, using synchronous services or preferably by sending messages to each other using APIs and message formats that are stable, well-documented, and evolve by following a defined versioning strategy.
- It must be deployed as separate runtime processes. Each instance of a microservice runs in a separate runtime process, for example, a Docker container.
- Microservice instances are stateless so that incoming requests to a microservice can be handled by any of its instances.
Using a set of microservices, we can deploy to a number of smaller servers instead of being forced to deploy to a single big server, like we have to do when deploying a monolithic application.
Given that the preceding criteria have been fulfilled, it is easier to scale up a single microservice into more instances (for example, using more virtual servers) compared to scaling up a big monolithic application. Utilizing auto-scaling capabilities that are available in the cloud is also a possibility, but not typically feasible for a big monolithic application. It's also easier to upgrade or even replace a single microservice compared to upgrading a big monolithic application.
This is illustrated by the following diagram, where a monolithic application has been divided into six microservices, each of which has been deployed to a separate server. Some of the microservices have also been scaled up independently of the others:
A very frequent question I receive from customers is, How big should a microservice be?
I try to use the following rules of thumb:
- Small enough to fit in the head of a developer
- Big enough to not jeopardize performance (that is, latency) and/or data consistency (SQL foreign keys between data that's stored in different microservices are no longer something you can take for granted)
So, to summarize, a microservice architecture is, in essence, an architectural style where we decompose a monolithic application into a group of cooperating autonomous software components. The motivation is to enable faster development and to make it easier to scale the application.
Next, we will move on to understand some of the challenges that we will face when it comes to microservices.
In the Challenges with autonomous software components section, we have already seen some of the challenges that autonomous software components can bring (and they all apply to microservices as well) as follows:
- Many small components that use synchronous communication can cause a chain of failure problem, especially under high load.
- Keeping the configuration up to date for many small components can be challenging.
- It's hard to track a request that's being processed and involves many components, for example, when performing root cause analysis, where each component stores log events locally.
- Analyzing the usage of hardware resources on a component level can be challenging as well.
- Manual configuration and management of many small components can become costly and error-prone.
Another downside (but not always obvious initially) of decomposing an application into a group of autonomous components is that they form a distributed system. Distributed systems are known to be, by their nature, very hard to deal with. This has been known for many years (but in many cases neglected until proven otherwise). My favorite quote to establish this fact is from Peter Deutsch who, back in 1994, stated the following:
The 8 fallacies of distributed computing: Essentially everyone, when they first build a distributed application, makes the following eight assumptions. All prove to be false in the long run and all cause big trouble and painful learning experiences:
- The network is reliable
- Latency is zero
- Bandwidth is infinite
- The network is secure
- Topology doesn't change
- There is one administrator
- Transport cost is zero
- The network is homogeneous
Note: The eighth fallacy was actually added by James Gosling at a later date. For more details, please go to https://www.rgoarchitects.com/Files/fallacies.pdf.
In general, building microservices based on these false assumptions leads to solutions that are prone to both temporary network glitches and problems that occur in other microservice instances. When the number of microservices in a system landscape increases, the likelihood of problems also goes up. A good rule of thumb is to design your microservice architecture based on the assumption that there is always something going wrong in the system landscape. The microservice architecture needs to be designed to handle this, in terms of detecting problems and restarting failed components, but also on the client side, so that requests are not sent to failed microservice instances. When problems are corrected, requests to the previously failing microservice should be resumed; that is, microservice clients need to be resilient. All of this needs, of course, to be fully automated. With a large number of microservices, it is not feasible for operators to handle this manually!
The scope of this is large, but we will limit ourselves for now and move on to study design patterns for microservices.
This topic will cover using design patterns to mitigate challenges with microservices, as described in the preceding section. Later in this book, we will see how we can implement these design patterns using Spring Boot, Spring Cloud, and Kubernetes.
The concept of design patterns is actually quite old; it was invented by Christopher Alexander back in 1977. In essence, a design pattern is about describing a reusable solution to a problem when given a specific context.
The design patterns we will cover are as follows:
- Service discovery
- Edge server
- Reactive microservices
- Central configuration
- Centralized log analysis
- Distributed tracing
- Circuit Breaker
- Control loop
- Centralized monitoring and alarms
We will use a lightweight approach to describing design patterns, and focus on the following:
- The problem
- A solution
- Requirements for the solution
Later in this book, we will delve more deeply into how to apply these design patterns. The context for these design patterns is a system landscape of cooperating microservices where the microservices communicate with each other using either synchronous requests (for example, using HTTP) or by sending asynchronous messages (for example, using a message broker).
The service discovery pattern has the following problem, solution, and solution requirements.
How can clients find microservices and their instances?
Microservice instances are typically assigned dynamically allocated IP addresses when they start up, for example, when running in containers. This makes it difficult for a client to make a request to a microservice that, for example, exposes a REST API over HTTP. Consider the following diagram:
Add a new component – a service discovery service – to the system landscape, which keeps track of currently available microservices and the IP addresses of their instances.
Some solution requirements are as follows:
- Automatically register/unregister microservices and their instances as they come and go.
- The client must be able to make a request to a logical endpoint for the microservice. The request will be routed to one of the microservice's available instances.
- Requests to a microservice must be load-balanced over the available instances.
- We must be able to detect instances that are not currently healthy; that is, requests will not be routed to them.
Implementation notes: As we will see, this design pattern can be implemented using two different strategies:
- Client-side routing: The client uses a library that communicates with the service discovery service to find out the proper instances to send the requests to.
- Server-side routing: The infrastructure of the service discovery service also exposes a reverse proxy that all requests are sent to. The reverse proxy forwards the requests to a proper microservice instance on behalf of the client.
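To make client-side routing concrete, the following is a minimal, illustrative sketch (not how Netflix Eureka or any other real discovery service is implemented) of a registry that tracks the instances of each logical service name and load-balances lookups round-robin:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical in-memory service registry illustrating client-side routing.
class ServiceRegistry {
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();
    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    // Called by a microservice instance when it starts up.
    public void register(String serviceName, String address) {
        instances.computeIfAbsent(serviceName, k -> new CopyOnWriteArrayList<>()).add(address);
        counters.putIfAbsent(serviceName, new AtomicInteger());
    }

    // Called when an instance shuts down or fails a health check.
    public void unregister(String serviceName, String address) {
        List<String> list = instances.get(serviceName);
        if (list != null) list.remove(address);
    }

    // Resolves a logical service name to one instance, round-robin.
    public String lookup(String serviceName) {
        List<String> list = instances.get(serviceName);
        if (list == null || list.isEmpty()) {
            throw new IllegalStateException("no instances available for " + serviceName);
        }
        int next = counters.get(serviceName).getAndIncrement();
        return list.get(next % list.size());
    }
}
```

A client resolves the logical name before each call, so newly registered instances are picked up and unregistered ones are no longer used.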
The edge server pattern has the following problem, solution, and solution requirements.
In a system landscape of microservices, it is in many cases desirable to expose some of the microservices to the outside of the system landscape and hide the remaining microservices from external access. The exposed microservices must be protected against requests from malicious clients.
Add a new component, an Edge Server, to the system landscape that all incoming requests will go through:
Implementation notes: An edge server typically behaves like a reverse proxy and can be integrated with a discovery service to provide dynamic load balancing capabilities.
Some solution requirements are as follows:
- Hide internal services that should not be exposed outside their context; that is, only route requests to microservices that are configured to allow external requests.
- Expose external services and protect them from malicious requests; that is, use standard protocols and best practices such as OAuth, OIDC, JWT tokens, and API keys to ensure that the clients are trustworthy.
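As an illustration of the first requirement, an edge server can keep an explicit routing table of externally exposed path prefixes and reject everything else. The sketch below is hypothetical; the route and service names are made up for the example:

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical routing table for an edge server: only explicitly
// configured path prefixes are exposed to external clients.
class EdgeRouter {
    private final Map<String, String> externalRoutes;

    public EdgeRouter(Map<String, String> externalRoutes) {
        this.externalRoutes = externalRoutes;
    }

    // Returns the internal service to forward the request to, or empty if
    // the path does not match any exposed route (the request is rejected).
    public Optional<String> route(String path) {
        return externalRoutes.entrySet().stream()
                .filter(e -> path.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst();
    }
}
```

Internal microservices simply never appear in the routing table, so they remain unreachable from outside the system landscape.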
The reactive microservice pattern has the following problem, solution, and solution requirements.
Traditionally, as Java developers, we are used to implementing synchronous communication using blocking I/O, for example, a RESTful JSON API over HTTP. Using a blocking I/O means that a thread is allocated from the operating system for the length of the request. If the number of concurrent requests goes up (and/or the number of involved components in a request, for example, a chain of cooperating microservices, goes up), a server might run out of available threads in the operating system, causing problems ranging from longer response times to crashing servers.
Also, as we already mentioned in this chapter, overusing blocking I/O can make a system of microservices prone to errors. For example, an increased delay in one service can cause clients to run out of available threads, causing them to fail. This, in turn, can cause their clients to have the same types of problems, which is also known as a chain of failures. See the Circuit Breaker section for how to handle a chain-of-failure-related problem.
Use non-blocking I/O to ensure that no threads are allocated while waiting for processing to occur in another service, for example, a database or another microservice.
Some solution requirements are as follows:
- Whenever feasible, use an asynchronous programming model; that is, send messages without waiting for the receiver to process them.
- If a synchronous programming model is preferred, ensure that reactive frameworks are used that can execute synchronous requests using non-blocking I/O, that is, without allocating a thread while waiting for a response. This will make the microservices easier to scale in order to handle an increased workload.
- Microservices must also be designed to be resilient, that is, capable of producing a response, even if a service that it depends on fails. Once the failing service is operational again, its clients must be able to resume using it, which is known as self-healing.
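The principle can be sketched with plain Java's CompletableFuture (reactive frameworks such as Project Reactor and Spring WebFlux provide much richer APIs, but the idea is the same): the caller's thread is released while downstream work is in flight, and the results are combined when both calls complete. The service names below are invented for the example:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of non-blocking composition: neither call below ties up the
// caller's thread while waiting; the results are combined only when both
// downstream calls (simulated here) have completed.
class NonBlockingComposite {

    // Simulated asynchronous call to another microservice.
    static CompletableFuture<String> callService(String name) {
        return CompletableFuture.supplyAsync(() -> name + "-response");
    }

    static CompletableFuture<String> getComposite() {
        CompletableFuture<String> product = callService("product");
        CompletableFuture<String> review = callService("review");
        // Combine the two responses without blocking a thread in between.
        return product.thenCombine(review, (p, r) -> p + "+" + r);
    }
}
```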
The central configuration pattern has the following problem, solution, and solution requirements.
An application is, traditionally, deployed together with its configuration, for example, a set of environment variables and/or files containing configuration information. Given a system landscape based on a microservice architecture, that is, with a large number of deployed microservice instances, some questions arise:
- How do I get a complete picture of the configuration that is in place for all the running microservice instances?
- How do I update the configuration and make sure that all the affected microservice instances are updated correctly?
Add a new component, a configuration server, to the system landscape to store the configuration of all the microservices.
Make it possible to store configuration information for a group of microservices in one place, with different settings for different environments (for example, dev, test, qa, and prod).
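The following is a minimal sketch, under my own assumptions, of the lookup behavior such a configuration server could provide: settings are stored per service and environment, and environment-specific values override shared defaults. The service and property names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical central configuration store: environment-specific
// settings override shared defaults for each service.
class ConfigServer {
    private final Map<String, Map<String, String>> store = new HashMap<>();

    private static String key(String service, String environment) {
        return service + "/" + environment;
    }

    public void set(String service, String environment, String property, String value) {
        store.computeIfAbsent(key(service, environment), k -> new HashMap<>()).put(property, value);
    }

    // Resolve a property for a service in a given environment, falling
    // back to the shared "default" environment if it is not overridden.
    public String get(String service, String environment, String property) {
        String value = store.getOrDefault(key(service, environment), Map.of()).get(property);
        if (value != null) return value;
        return store.getOrDefault(key(service, "default"), Map.of()).get(property);
    }
}
```

Because all instances of a microservice read from the same place, the configuration can be inspected and updated in one operation instead of per instance.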
Centralized log analysis has the following problem, solution, and solution requirements.
Traditionally, an application writes log events to log files that are stored on the local machine that the application runs on. Given a system landscape based on a microservice architecture, that is, with a large number of deployed microservice instances on a large number of smaller servers, we can ask the following questions:
- How do I get an overview of what is going on in the system landscape when each microservice instance writes to its own local log file?
- How do I find out if any of the microservice instances get into trouble and start writing error messages to their log files?
- If end users start to report problems, how can I find related log messages; that is, how can I identify which microservice instance is the root cause of the problem? The following diagram illustrates the problem:
Add a new component that can manage centralized logging and is capable of the following:
- Detecting new microservice instances and collecting log events from them
- Interpreting and storing log events in a structured and searchable way in a central database
- Providing APIs and graphical tools for querying and analyzing log events
Distributed tracing has the following problem, solution, and solution requirements.
It must be possible to track requests and messages that flow between microservices while processing an external call to the system landscape.
Some examples of fault scenarios are as follows:
- If end users start to file support cases regarding a specific failure, how can we identify the microservice that caused the problem, that is, the root cause?
- If one support case mentions problems related to a specific entity, for example, a specific order number, how can we find log messages related to processing this specific order – for example, log messages from all microservices that were involved in processing this specific order?
The following diagram depicts this:
To track the processing between cooperating microservices, we need to ensure that all related requests and messages are marked with a common correlation ID and that the correlation ID is part of all log events. Based on a correlation ID, we can use the centralized logging service to find all related log events. If one of the log events also includes information about a business-related identifier, for example, the ID of a customer, product, order, and so on, we can find all related log events for that business identifier using the correlation ID.
The solution requirements are as follows:
- Assign unique correlation IDs to all incoming or new requests and events in a well-known place, such as a header with a recognized name.
- When a microservice makes an outgoing request or sends a message, it must add the correlation ID to the request and message.
- All log events must include the correlation ID in a predefined format so that the centralized logging service can extract the correlation ID from the log event and make it searchable.
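These three requirements can be sketched as follows. The header name X-Correlation-Id is a common convention rather than a standard, and the log format is invented for the example:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of correlation ID handling: assign an ID to incoming requests
// that lack one, propagate it on outgoing requests, and include it in
// every log event in a predictable, extractable format.
class CorrelationId {
    static final String HEADER = "X-Correlation-Id";

    // Requirement 1: assign a unique ID in a well-known place if missing.
    static Map<String, String> ensureCorrelationId(Map<String, String> headers) {
        Map<String, String> result = new HashMap<>(headers);
        result.putIfAbsent(HEADER, UUID.randomUUID().toString());
        return result;
    }

    // Requirement 2: copy the ID onto an outgoing request or message.
    static Map<String, String> propagate(Map<String, String> incoming, Map<String, String> outgoing) {
        Map<String, String> result = new HashMap<>(outgoing);
        result.put(HEADER, incoming.get(HEADER));
        return result;
    }

    // Requirement 3: put the ID in every log event, in a fixed format
    // that a centralized logging service can extract and index.
    static String logEvent(Map<String, String> headers, String message) {
        return "[corrId=" + headers.get(HEADER) + "] " + message;
    }
}
```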
The Circuit Breaker pattern has the following problem, solution, and solution requirements.
A system landscape of microservices that uses synchronous intercommunication can be exposed to a chain of failure. If one microservice stops responding, its clients might get into problems as well and stop responding to requests from their clients. The problem can propagate recursively throughout a system landscape and take out major parts of it.
Add a Circuit Breaker that prevents new outgoing requests from a caller if it detects a problem with the service it calls.
The solution requirements are as follows:
- Open the circuit and fail fast (without waiting for a timeout) if problems with the service are detected.
- Probe for failure correction (also known as a half-open circuit); that is, allow a single request to go through on a regular basis to see if the service operates normally again.
- Close the circuit if the probe detects that the service operates normally again. This capability is very important since it makes the system landscape resilient to these kinds of problems; that is, it self-heals.
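A minimal sketch of the state machine behind these requirements could look as follows. Real Circuit Breaker libraries (for example, Resilience4j) add sliding windows, call timeouts, and metrics; this sketch only shows the three states (closed, open, and half-open) and the transitions between them:

```java
// Minimal, illustrative Circuit Breaker state machine.
class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private final int failureThreshold;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Fail fast: requests pass only when closed, or as a probe when half-open.
    public boolean allowRequest() {
        return state != State.OPEN;
    }

    // Called by some scheduler after a wait period: allow a single probe.
    public void tryHalfOpen() {
        if (state == State.OPEN) state = State.HALF_OPEN;
    }

    public void onSuccess() {
        consecutiveFailures = 0;
        state = State.CLOSED;  // close the circuit again (self-healing)
    }

    public void onFailure() {
        consecutiveFailures++;
        if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
            state = State.OPEN;  // open the circuit and fail fast
        }
    }

    public State getState() { return state; }
}
```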
The following diagram illustrates a scenario where all synchronous communication within the system landscape of microservices goes through Circuit Breakers. All the Circuit Breakers are closed; that is, they allow traffic, except for one Circuit Breaker that has detected problems in the service the requests go to. Therefore, this Circuit Breaker is open and applies fail-fast logic; that is, instead of calling the failing service and waiting for a timeout to occur, it immediately returns a response, optionally applying some fallback logic before responding:
The control loop pattern has the following problem, solution, and solution requirements.
In a system landscape with a large number of microservice instances spread out over a number of servers, it is very difficult to manually detect and correct problems such as crashed or hung microservice instances.
Add a new component, a control loop, to the system landscape; it constantly observes the actual state of the system landscape, compares it with the desired state as specified by the operators, and, if the two states differ, takes action to make the actual state equal to the desired state:
Implementation notes: In the world of containers, a container orchestrator such as Kubernetes is typically used to implement this pattern. We will learn more about Kubernetes in Chapter 15, Introduction to Kubernetes.
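One iteration of such a control loop can be sketched as follows: for each microservice, compare the desired number of instances with the actual number and emit corrective actions, roughly the way a container orchestrator's controllers do. The action strings and service names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of one iteration of a control loop: compare desired state with
// actual state and compute the actions needed to make them equal.
class ControlLoop {

    static List<String> reconcile(Map<String, Integer> desired, Map<String, Integer> actual) {
        List<String> actions = new ArrayList<>();
        for (Map.Entry<String, Integer> e : desired.entrySet()) {
            int diff = e.getValue() - actual.getOrDefault(e.getKey(), 0);
            if (diff > 0) {
                actions.add("start " + diff + " instance(s) of " + e.getKey());
            } else if (diff < 0) {
                actions.add("stop " + (-diff) + " instance(s) of " + e.getKey());
            }
            // diff == 0: actual state matches desired state; nothing to do
        }
        return actions;
    }
}
```

Running this comparison continuously is what lets the system detect and replace crashed or hung instances without operator involvement.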
The centralized monitoring and alarms pattern has the following problem, solution, and solution requirements.
If observed response times and/or the usage of hardware resources become unacceptably high, it can be very hard to discover the root cause of the problem. For example, we need to be able to analyze hardware resource consumption per microservice.
To curb this, we add a new component, a monitor service, to the system landscape, which is capable of collecting metrics about hardware resource usage at the microservice instance level.
The solution requirements are as follows:
- It must be able to collect metrics from all the servers that are used by the system landscape, which includes auto-scaling servers.
- It must be able to detect new microservice instances as they are launched on the available servers and start to collect metrics from them.
- It must be able to provide APIs and graphical tools for querying and analyzing the collected metrics.
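As an illustration of these requirements, a monitor service could aggregate the metrics reported by each microservice instance, for example, to answer "what is the average CPU usage per microservice?". This sketch is my own simplification; real tools such as Prometheus store labeled time series instead:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a monitor service aggregating hardware metrics reported by
// microservice instances, so usage can be analyzed per microservice.
class MetricsCollector {
    // service name -> { running sum of CPU usage samples, sample count }
    private final Map<String, double[]> cpuSamples = new HashMap<>();

    // Called whenever an instance (including newly launched ones) reports.
    public void report(String service, double cpuUsage) {
        double[] agg = cpuSamples.computeIfAbsent(service, k -> new double[2]);
        agg[0] += cpuUsage;  // running sum
        agg[1] += 1;         // sample count
    }

    // Average CPU usage over all reported samples for one microservice.
    public double averageCpu(String service) {
        double[] agg = cpuSamples.get(service);
        return agg == null || agg[1] == 0 ? 0.0 : agg[0] / agg[1];
    }
}
```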
The following screenshot shows Grafana, which visualizes metrics from Prometheus, a monitoring tool that we will look at later in this book:
That was an extensive list! I am sure these design patterns helped you understand the challenges with microservices better. Next, we will move on to understand software enablers.
As we've already mentioned, we have a number of very good open-source tools that can help us both meet our expectations of microservices and, most importantly, handle the new challenges that come with them:
- Spring Boot
- Spring Cloud/Netflix OSS
- Istio (a service mesh)
The following table maps the design patterns we will need to handle these challenges, along with the corresponding open-source tool that implements the design pattern:
| Design pattern | Spring Cloud and Netflix OSS | Kubernetes | Istio |
| --- | --- | --- | --- |
| Service discovery | Netflix Eureka and Netflix Ribbon | Kubernetes kube-proxy and service resources | |
| Edge server | Spring Cloud and Spring Security OAuth | Kubernetes Ingress controller | Istio ingress gateway |
| Reactive microservices | Spring Reactor and Spring WebFlux | | |
| Central configuration | Spring Config Server | Kubernetes ConfigMaps and Secrets | |
| Centralized log analysis | | Elasticsearch, Fluentd, and Kibana (Note: actually not part of Kubernetes, but can easily be deployed and configured together with Kubernetes) | |
| Distributed tracing | Spring Cloud Sleuth and Zipkin | | |
| Control loop | | Kubernetes controller manager | |
| Centralized monitoring and alarms | | Grafana and Prometheus (Note: actually not part of Kubernetes, but can easily be deployed and configured together with Kubernetes) | Kiali, Grafana, and Prometheus |
Please note that Spring Cloud, Kubernetes, and Istio can be used to implement some design patterns, such as service discovery, edge server, and central configuration. We will discuss the pros and cons of using these alternatives later in this book.
Now, let's look at some other important things that we need to take into consideration.
To be successful in implementing a microservice architecture, there are a number of related areas to consider as well. I will not cover these areas in this book; instead, I'll just briefly mention them here:
- Importance of DevOps: One of the benefits of a microservice architecture is that it enables shorter delivery times and, in extreme cases, allows the continuous delivery of new versions. To be able to deliver that fast, you need to establish an organization where dev and ops work together under the mantra you build it, you run it. This means that developers are no longer allowed to simply pass new versions of the software over to the operations team. Instead, the dev and ops organizations need to work much more closely together, organized into teams that have full responsibility for the end-to-end life cycle of one microservice (or a group of related microservices). Besides the organizational part of DevOps, the teams also need to automate the delivery chain, that is, the steps for building, testing, packaging, and deploying the microservices to the various deployment environments. This is known as setting up a delivery pipeline.
- Organizational aspects and Conway's law: Another interesting aspect of how a microservice architecture might affect the organization is Conway's law, which states the following:

"Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure."

-- Melvyn Conway, 1967
This means that the traditional approach of organizing IT teams for large applications based on their technology expertise (for example, UX, business logic, and database teams) will lead to a big three-tier application – typically a big monolithic application with a separately deployable unit for the UI, one for processing the business logic, and one for the big database. To successfully deliver an application based on a microservice architecture, the organization needs to be changed into teams that work with one or a group of related microservices. Each team must have the skills that are required for those microservices, for example, languages and frameworks for the business logic and database technologies for persisting its data.
- Decomposing a monolithic application into microservices: One of the most difficult and expensive decisions is how to decompose a monolithic application into a set of cooperating microservices. If this is done in the wrong way, you will end up with problems such as the following:
- Slow delivery: Changes in the business requirements will affect too many of the microservices, resulting in extra work.
- Slow performance: To be able to perform a specific business function, a lot of requests have to be passed between various microservices, resulting in long response times.
- Inconsistent data: Since related data is separated into different microservices, inconsistencies can appear over time in data that's managed by different microservices.
A good approach to finding proper boundaries for microservices is to apply Domain-Driven Design and its Bounded Context concept. According to Eric Evans, a Bounded Context is "A description of a boundary (typically a subsystem, or the work of a particular team) within which a particular model is defined and applicable." This means that the microservice defined by a Bounded Context will have a well-defined model of its own data.
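To make the Bounded Context idea a bit more concrete, the following sketch shows two hypothetical microservices that each define their own model of a "product", tailored to their own context, instead of sharing one canonical class. All class and field names here are illustrative, not taken from any real system:

```java
// Each bounded context owns its own model of "product"; the two
// classes share an identifier but deliberately nothing else.

// In a product-catalog context, a product is something to describe and browse.
class CatalogProduct {
    final String productId;
    final String name;
    final String description;

    CatalogProduct(String productId, String name, String description) {
        this.productId = productId;
        this.name = name;
        this.description = description;
    }
}

// In an order context, a product is just a line item with a price and quantity.
class OrderLine {
    final String productId;
    final int quantity;
    final long unitPriceCents;

    OrderLine(String productId, int quantity, long unitPriceCents) {
        this.productId = productId;
        this.quantity = quantity;
        this.unitPriceCents = unitPriceCents;
    }

    long totalCents() {
        return quantity * unitPriceCents;
    }
}

public class BoundedContextDemo {
    public static void main(String[] args) {
        // The two models refer to the same product, but each service only
        // stores and evolves the attributes its own context needs.
        CatalogProduct product = new CatalogProduct("P-1", "Widget", "A basic widget");
        OrderLine line = new OrderLine("P-1", 3, 499);
        System.out.println(product.name + " x" + line.quantity
                + " = " + line.totalCents() + " cents");
    }
}
```

The point of keeping the models separate is that the catalog team can change its product description model without coordinating a release with the order team, as long as the shared identifier and the APIs between them stay stable.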
- Importance of API design: If a group of microservices expose a common, externally available API, it is important that the API is easy to understand and use. A couple of guidelines to consider are as follows:
- If the same concept is used in multiple APIs, it should have the same description in terms of the naming and data types used.
- It is of great importance that APIs are allowed to evolve in a controlled manner. This typically requires applying a proper versioning schema for the APIs, for example, https://semver.org/, and having the capability of handling multiple major versions of an API over a specific period of time, allowing clients of the API to migrate to new major versions at their own pace.
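As a minimal sketch of the versioning idea above, the following class (a hypothetical helper, not part of any library) parses a MAJOR.MINOR.PATCH version string and applies the core semver rule: breaking changes bump the major version, so a client and an API are only expected to be compatible when their major versions match:

```java
// SemVer: minimal parser for MAJOR.MINOR.PATCH version strings, used to
// decide whether a client built against one version of an API can be
// expected to work against another.
public class SemVer {
    final int major;
    final int minor;
    final int patch;

    SemVer(String version) {
        String[] parts = version.split("\\.");
        major = Integer.parseInt(parts[0]);
        minor = Integer.parseInt(parts[1]);
        patch = Integer.parseInt(parts[2]);
    }

    // Per semver, breaking changes require a new MAJOR version; releases
    // within the same major version are expected to be backward compatible.
    boolean isCompatibleWith(SemVer other) {
        return major == other.major;
    }

    public static void main(String[] args) {
        SemVer client = new SemVer("1.4.2");
        System.out.println(client.isCompatibleWith(new SemVer("1.9.0"))); // true
        System.out.println(client.isCompatibleWith(new SemVer("2.0.0"))); // false
    }
}
```

In practice, this is why an API provider that releases a new major version keeps the previous major version running in parallel for some period of time, so clients can migrate at their own pace.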
- Migration paths from on-premise to the cloud: Many companies today run their workload on-premise, but are searching for ways to move parts of their workload to the cloud. Since most cloud providers today offer Kubernetes as a Service, an appealing migration approach can be to first move the workload into Kubernetes on-premise (as microservices or not) and then redeploy it on a Kubernetes as a Service offering provided by a preferred cloud provider.
- Good design principles for microservices, the 12-factor app: The 12-factor app (https://12factor.net) is a set of design principles for building software that can be deployed in the cloud. Most of these design principles are applicable to building microservices independently of where and how they will be deployed, that is, in the cloud or on-premise. Some of these principles will be covered in this book, such as config, processes, and logs, but not all.
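One of the 12-factor principles covered later, config, says that configuration should be read from the environment rather than hardcoded or bundled into the deployable artifact. A minimal sketch of that principle in plain Java might look as follows (the class and variable names are illustrative; Spring Boot provides this kind of externalized configuration out of the box):

```java
// 12-factor "config" principle: the same artifact runs unchanged in every
// environment, picking up environment-specific settings from env variables.
public class ConfigDemo {
    // Returns the value of an environment variable, or a fallback suitable
    // for local development if the variable is not set.
    static String getenvOrDefault(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        String dbUrl = getenvOrDefault("DATABASE_URL",
                "jdbc:postgresql://localhost:5432/dev");
        int port = Integer.parseInt(getenvOrDefault("PORT", "8080"));
        System.out.println("Starting on port " + port + ", db=" + dbUrl);
    }
}
```

Deployed to Kubernetes, the same pattern applies: the environment variables would typically be populated from ConfigMaps and Secrets, as mentioned earlier in this chapter.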
That's it for the first chapter! I hope this gave you a good basic idea of microservices and an understanding of the overall topics that will be covered in this book.
In this introductory chapter, I described my own way into microservices and delved into a bit of their history. We defined what a microservice is, that is, a kind of autonomous distributed component with some specific requirements. We also went through the good and challenging aspects of a microservice-based architecture.
To handle these challenges, we defined a set of design patterns and briefly mapped the capabilities of open source products such as Spring Boot, Spring Cloud, and Kubernetes to them.
You're eager to develop your first microservice now, right? In the next chapter, we will be introduced to Spring Boot and the complementary open source tools that we will use to develop our first microservices.