The focus of this chapter is to get you acquainted with microservices. Slicing your application into a number of services is, by itself, neither service-oriented architecture (SOA) nor microservices. However, the microservice architecture combines service design with best practices from the SOA world, along with a few emerging practices, such as isolated deployment, semantic versioning, lightweight services, service discovery, and polyglot programming. We implement microservices to satisfy business features, reducing the time to market and increasing flexibility.
We will cover the following topics in this chapter:
- The origin of microservices
- Discussing microservices
- Understanding the microservice architecture
- The advantages of microservices
- SOA versus microservices
- Understanding the problems with the monolithic architectural style
- The challenges in standardizing a .NET stack
- An overview of Azure Service Fabric
In this chapter, we will become familiar with the problems that arise from having a layered monolithic architecture. We will also discuss the solutions available for these problems in the monolithic world. By the end of the chapter, we will be able to break down a monolithic application into a microservice architecture.
This chapter contains various code examples to explain the concepts. The code is kept simple and is just for demo purposes.
To run and execute the code, you will need the following prerequisites:
- Visual Studio 2019
- .NET Core 3.1
To install and run these code examples, you need to install Visual Studio 2019 (the preferred IDE). To do so, download Visual Studio 2019 (the Community edition, which is free) from the download link mentioned in the installation instructions: https://docs.microsoft.com/en-us/visualstudio/install/install-visual-studio. Multiple versions are available for the Visual Studio installation. We are using Visual Studio for Windows.
If you do not have .NET Core 3.1 installed, you can download it from the following link: https://www.microsoft.com/net/download/windows.
Before we discuss the details, we should explore the origin of microservices or any new framework, language, and so on. Microservices is a buzzword, and we should be aware of how this architectural style evolved to the point that it is now trending. There are several reasons to familiarize yourself with the origin of any language or framework. The most important things to know are as follows:
- How the specific language or framework came into being
- Who is behind the new trending architectural style of microservices
- When and where it was founded
Now let's discuss the origin of microservices. The term microservices was used for the first time in mid-2011 at a workshop for software architects. In March 2012, James Lewis presented some of his ideas about microservices. By the end of 2013, various groups from the IT industry started having discussions about microservices, and by 2014, the concept had become popular enough to be considered a serious contender for large enterprises.
There is no official definition available for microservices. The understanding of the term is purely based on use cases and discussions held in the past.
In 2014, James Lewis and Martin Fowler came together and provided a few real-world examples as well as presenting microservices (refer to http://martinfowler.com/microservices/).
The official Microsoft document page for microservices (refer to https://docs.microsoft.com/en-us/azure/architecture/guide/architecture-styles/microservices) defines the microservices architectural style as follows:
It is very important that you note all the attributes Lewis and Fowler defined here. They defined microservices as an architectural style that developers can use to develop a single application with the business logic spread across a number of small services, each with its own persistent storage. Also note the attributes of these services: each is independently deployable, runs in its own process, communicates through a lightweight mechanism, and can be written in a different programming language.
We want to emphasize this specific definition since it is the crux of the whole concept. As we move along, all the pieces will fit together by the time we finish this book. For now, we will look at microservices in detail.
We have gone through a few definitions of microservices; now let's discuss them in detail.
In short, a microservice architecture removes most of the drawbacks of SOA. It is also more code-oriented than SOA services (we will discuss this in detail in the coming sections).
Before we move on to understanding the architecture, let's discuss the two important architectures that led to its existence:
- The monolithic architecture style
- The service-oriented architecture (SOA) style
Most of us know that when we develop an enterprise application, we have to select a suitable architectural style. Then, at various stages, the initial pattern is further improved and adapted with changes that cater to various challenges, such as deployment complexity, a large code base, and scalability issues. This is exactly how the monolithic architecture style evolved into SOA, and then led to microservices.
The monolithic architectural style is a traditional architecture type that has been widely used in the IT industry. The term monolithic is not new and is borrowed from the Unix world, where a monolithic program is a standalone one whose functionality does not depend on any other program. As seen in the following diagram, we can have different components in the application, including the following:
- User interface: This handles all of the user interactions while responding with HTML, JSON, or any other preferred data interchange format (in the case of web services).
- Business logic: This includes all the business rules applied to the input being received in the form of user input, events, and the database.
- Database access: This houses the complete functionality for accessing the database for the purpose of querying and persisting objects. A widely accepted rule is that it is utilized through business modules and never directly through user-facing components.
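These three layers can be sketched as plain classes living in a single assembly. This is a minimal illustration only, and the names (ProductRepository, ProductService, ProductController) are ours, not from any particular application; note how each layer calls the next through direct, in-process method calls:

```csharp
using System;
using System.Collections.Generic;

// Database access: queries and persists objects (an in-memory list here).
public class ProductRepository
{
    private readonly List<string> _products = new List<string> { "Laptop", "Phone" };
    public IEnumerable<string> GetAll() => _products;
}

// Business logic: applies the business rules to the data.
public class ProductService
{
    private readonly ProductRepository _repository = new ProductRepository();
    public IEnumerable<string> Search(string term)
    {
        foreach (var product in _repository.GetAll())
            if (product.Contains(term)) yield return product;
    }
}

// User interface: renders the result (plain text stands in for HTML/JSON).
public class ProductController
{
    private readonly ProductService _service = new ProductService();
    public string Render(string term) => string.Join(", ", _service.Search(term));
}

Console.WriteLine(new ProductController().Render("Lap")); // Laptop
```

Because the controller instantiates the service, which in turn instantiates the repository, none of these classes can be compiled, tested, or deployed without the others.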
Software built using this architecture is self-contained. We can imagine a single .NET assembly that contains various components, as depicted in the following diagram:
As the software is self-contained here, its components are interconnected and interdependent. Even a simple code change in one of the modules may break a major functionality in other modules, resulting in a scenario in which we would need to test the whole application. With the business depending critically on its enterprise applications, the time this retesting takes could prove very costly.
Having all the components tightly coupled poses another challenge: whenever we execute or compile such software, all the components must be available or the build will fail. Refer to the previous diagram, which represents a monolithic architecture as a self-contained, single .NET assembly project. However, a monolithic architecture might also comprise multiple assemblies. This means that even though the business layer assembly, data access layer assembly, and so on are separated, all of them come together and run as one process at runtime.
The user interface depends directly on other components, such as direct sales and inventory, in the same manner that all the other components depend on each other. In this scenario, we would not be able to execute the project in the absence of any one of these components. The process of upgrading any one of them would also be more complex, as we would have to consider the other components that require code changes too. This results in more development time than the actual change requires.
Deploying such an application would become another challenge. During deployment, we would have to make sure that every component was deployed properly. If we didn't do this, we may end up facing a lot of issues in our production environments.
If we develop an application using the monolithic architecture style, as discussed previously, we might face the following challenges:
- Large code base: This is a scenario where the code base has grown so large that it becomes difficult to navigate and maintain. As the components are interconnected, we also have to deal with a repetitive code base.
- Too many business modules: This is in regard to modules within the same system.
- Code base complexity: This results in a higher chance of code breaking due to the fix required in other modules or services.
- Complex code deployment: You may come across minor changes that would require whole system deployment.
- One module failure affecting the whole system: This is with regard to modules that depend on each other.
- Scalability: This is required for the entire system and not just the modules in it.
- Intermodule dependency: This is due to tight coupling. A change required for the operation of any one module forces heavy changes in the modules that depend on it.
- Spiraling development time: This is due to code complexity and interdependency.
- Inability to easily adapt to new technology: In this case, the entire system would need to be upgraded.
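To make the intermodule dependency and cascading-failure points concrete, here is a small hypothetical sketch (the module names are ours): a checkout operation calls the inventory module directly, so a failure in inventory cascades straight into checkout, with no fallback:

```csharp
using System;

public class InventoryModule
{
    public bool Available = false; // simulate the module being down

    public int GetStock(string sku)
    {
        if (!Available) throw new InvalidOperationException("Inventory module is down");
        return 10;
    }
}

public class CheckoutModule
{
    private readonly InventoryModule _inventory;
    public CheckoutModule(InventoryModule inventory) => _inventory = inventory;

    public string PlaceOrder(string sku)
    {
        // Tight coupling: the inventory exception bubbles up to the user,
        // so one module failure takes the whole order flow down with it.
        var stock = _inventory.GetStock(sku);
        return stock > 0 ? "Order placed" : "Out of stock";
    }
}

var inventory = new InventoryModule();
var checkout = new CheckoutModule(inventory);
try
{
    checkout.PlaceOrder("SKU-1");
}
catch (InvalidOperationException ex)
{
    Console.WriteLine($"Checkout failed because of inventory: {ex.Message}");
}
```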
As discussed earlier, if we want to reduce development time, ease deployment, and improve the maintainability of software for enterprise applications, we should avoid traditional or monolithic architecture. Therefore, we will look at SOA.
In the previous section, we discussed the monolithic architecture and its limitations. We also discussed why it does not fit our enterprise application requirements. To overcome these issues, we should take a modular approach in which we separate the components so that they are no longer confined to a self-contained, single .NET assembly. A system that uses one or more services in the fashion depicted in the previous diagram is called a service-oriented architecture (SOA).
Let's discuss modular architecture, that is, SOA. This is a famous architectural style where enterprise applications are designed as a collection of services. These services may be RESTful or ASMX web services. To understand SOA in more detail, let's discuss services first.
Services, in this case, are an essential concept of SOA. They can be a piece of code, a program, or software that provides functionality to other system components. This piece of code can interact directly with the database or indirectly through another service. Furthermore, it can be consumed by clients directly, where the client may be a website, desktop app, mobile app, or any other device app. The following diagram shows that services can be consumed by various clients via the web, desktop, mobile, or any other devices. Services can be with or without database support at the backend:
A service refers to a type of functionality exposed for consumption by other systems (generally referred to as clients/client applications). As mentioned earlier, this can be represented by a piece of code, a program, or software. Such services are exposed over the HTTP transport protocol as a general practice. However, the HTTP protocol is not a limiting factor, and a protocol can be picked as deemed fit for the scenario.
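As a minimal sketch of such a service, consider the following. The HTTP hosting details are deliberately omitted to keep it runnable, and the names (DirectSellingService, StoreClient) are illustrative only; the point is that one piece of code exposes functionality that any client can consume:

```csharp
using System;
using System.Collections.Generic;

// The service: in production it would sit behind an HTTP endpoint; here it
// is consumed in-process so the sketch stays self-contained.
public class DirectSellingService
{
    private readonly Dictionary<string, decimal> _prices =
        new Dictionary<string, decimal> { ["Laptop"] = 1200m, ["Phone"] = 500m };

    public decimal GetPrice(string product) => _prices[product];
}

// Any client (web, desktop, or mobile) consumes the same functionality.
public class StoreClient
{
    private readonly DirectSellingService _service;
    public StoreClient(DirectSellingService service) => _service = service;
    public string Show(string product) => $"{product} costs {_service.GetPrice(product)}";
}

var service = new DirectSellingService();
Console.WriteLine(new StoreClient(service).Show("Laptop")); // Laptop costs 1200
```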
In the following diagram, Service - direct selling is directly interacting with the Database and three different clients, namely, Web, Desktop, and Mobile, are consuming the service. On the other hand, we have clients consuming Service - partner selling, which is interacting with Service - channel partners for database access.
A product selling service is a set of services that interact with client applications and provide database access directly or through another service, in this case, Service – Channel partners. In the case of Service – direct selling, shown in the following diagram, it is providing the functionality to a web store, a desktop application, and a mobile application. This service is further interacting with the database for various tasks, namely, fetching and persisting data.
Normally, services interact with other systems via a communication channel, generally the HTTP protocol. These services may or may not be deployed on the same or single servers:
In the previous diagram, we have projected an SOA example scenario. There are many fine points to note here, so let's get started. First, our services can be spread across different physical machines. Here, Service - direct selling is hosted on two separate machines. It is possible that instead of the entire business functionality, only a part of it will reside on Server 1 and the remaining part on Server 2. Similarly, Service - partner selling appears to have the same arrangement on Server 3 and Server 4. However, it doesn't stop Service - channel partners from being hosted as a complete set on both Server 5 and Server 6.
We will discuss SOA in detail in the following sections.
Let's recall the monolithic architecture. We moved away from it because it restricts code reusability: it is a self-contained assembly in which all the components are interconnected and interdependent, so for deployment we have to deploy the complete project. After we select SOA (refer to the previous diagram and the subsequent discussion), this architectural style gives us the benefits of code reusability and easy deployment. Let's examine these in the light of the previous diagram:
- Reusability: Multiple clients can consume the service. This can also be simultaneously consumed by other services. For example, OrderService is consumed by web and mobile clients. OrderService can now also be used by the Reporting Dashboard UI.
- Stateless: Services do not persist any state between requests from the client. This means that a service doesn't know or care whether a subsequent request has come from a client that has or hasn't made the previous request.
- Contract-based: Interfaces make any service technology-agnostic on both sides of implementation and consumption. They also serve to make it immune to the code updates in the underlying functionality.
- Scalability: The system can be scaled up, and individual services can be clustered with appropriate load balancing.
- Upgradeability: It is very easy to roll out new functionalities or introduce new versions of the existing functionality. The system doesn't stop you from keeping multiple versions of the same business functionality.
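The contract-based and upgradeability points can be sketched together as follows. The version-keyed dictionary is our own illustration (not a prescribed SOA mechanism) of keeping two versions of the same business functionality side by side behind a shared contract:

```csharp
using System;
using System.Collections.Generic;

// Contract-based: clients depend on the interface, not on any implementation.
public interface IPricingService
{
    decimal Quote(decimal basePrice);
}

public class PricingV1 : IPricingService
{
    public decimal Quote(decimal basePrice) => basePrice; // original rules
}

public class PricingV2 : IPricingService
{
    public decimal Quote(decimal basePrice) => basePrice * 0.9m; // new discount rules
}

// Both versions coexist; existing clients stay on v1 while new ones adopt v2.
var versions = new Dictionary<string, IPricingService>
{
    ["v1"] = new PricingV1(),
    ["v2"] = new PricingV2()
};

Console.WriteLine(versions["v1"].Quote(100m)); // original price
Console.WriteLine(versions["v2"].Quote(100m)); // discounted price
```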
This section covered SOA, and we have also discussed the concept of services and how they impact architecture. Next, we will move on to learn all about microservice architecture.
Microservice architecture is a way to develop a single application containing a set of smaller services. These services are independent of each other and run in their own processes. An important advantage of these services is that they can be developed and deployed independently. In other words, we can say that microservices are a way to segregate our services so that they can be handled completely independently of each other in the context of design, development, deployment, and upgrades.
In a monolithic application, we have a self-contained assembly of a user interface, direct sales, and inventory. In microservice architecture, the parts of the services of the application change to the following depiction:
Here, the business components have been segregated into individual services. These independent services are now the smaller units that previously existed within the self-contained assembly of the monolithic architecture. Both the direct sales and inventory services are independent of each other, with the dotted lines depicting their existence in the same ecosystem, though no longer bound within a single scope.
Refer to the following diagram, depicting user interaction with different APIs:
From the previous diagram, it's clear that our user interface can interact with either service, and no other service needs to intervene when the UI calls one of them. Both services are independent of each other and are unaware of when the other is called by the user interface. Each service is responsible for its own operations and not for any other part of the whole system. Although we are much closer to the layout of our intended microservice architecture, this representation is still not a complete visualization of it.
Now let's apply this final change so that each service will have its own database persisting the necessary data. Refer to the following diagram:
Here, the User interface is interacting with the services, which have their own independent storage. In this case, when a user interface calls the service for direct sales, the business flow for direct sales is executed independently of any data or logic contained within the inventory service.
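A minimal sketch of this final layout might look as follows, with in-memory collections standing in for the two independent databases (the service and member names are ours, for illustration). Nothing in the direct sales service can reach the inventory service's data except through that service itself:

```csharp
using System;
using System.Collections.Generic;

public class DirectSalesService
{
    private readonly List<string> _salesDb = new List<string>(); // its own storage

    public void RecordSale(string sku) => _salesDb.Add(sku);
    public int SalesCount => _salesDb.Count;
}

public class InventoryService
{
    private readonly Dictionary<string, int> _inventoryDb =
        new Dictionary<string, int> { ["SKU-1"] = 5 }; // its own storage

    public int Stock(string sku) => _inventoryDb[sku];
}

// The user interface calls each service independently; recording a sale
// executes without touching any data or logic inside the inventory service.
var salesSvc = new DirectSalesService();
var inventorySvc = new InventoryService();
salesSvc.RecordSale("SKU-1");
Console.WriteLine($"Sales: {salesSvc.SalesCount}, Stock: {inventorySvc.Stock("SKU-1")}");
```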
The solution provided by the use of microservices has a lot of benefits, including the following:
- A smaller code base: Each service is small and therefore easier to develop and deploy as a unit.
- The ease of an independent environment: With the separation of services, all developers work independently, deploy independently, and no one is concerned about module dependency.
With the adoption of microservice architecture, monolithic applications are now harnessing the associated benefits, as they can now be scaled easily and deployed independently using a service.
It is very important to carefully consider the choice of messaging mechanism when dealing with microservice architecture. If this aspect is ignored, it can compromise the entire purpose of designing a microservice architecture. In monolithic applications, this is not a concern, as the business functionality of the components is invoked through in-process function calls. In SOA, on the other hand, communication happens via loosely coupled, web service-level messaging, primarily based on SOAP. A microservice messaging mechanism, by contrast, should be simple and lightweight.
There are no set rules for making a choice between the various frameworks or protocols for microservice architecture. However, there are a few points worth considering here. First, it should be simple enough to implement, without adding any complexity to your system. Second, it should be very lightweight, keeping in mind the fact that the microservice architecture could heavily rely on interservice messaging. Let's move ahead and consider our choices for both synchronous and asynchronous messaging, along with the different messaging formats.
Synchronous messaging is when a timely response is expected from a service and the system waits until that response is received. REST is the most sought-after choice in the case of microservices: it is simple, supports the HTTP request-response model, and leaves little reason to look for an alternative. This is also one of the reasons why most implementations of microservices use HTTP (API-based styles).
Asynchronous messaging is when a system does not immediately expect a timely response from the service, and the system can continue processing without blocking that call.
Let's incorporate this messaging concept into our application and see how it would change the working and look of our application:
In the preceding diagram, the user would get a response while the system is interacting with the Sales DB and/or Inventory DB service(s) and fetch or push the data to their respective databases. The calls from the user (via the User interface) to respective services would not block new calls from the same or different users.
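The difference between the two styles can be sketched as follows, assuming an in-memory queue in place of a real message broker and a Task.Delay in place of a real HTTP round trip (the MessagingDemo type and its members are our illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class MessagingDemo
{
    private readonly Queue<string> _queue = new Queue<string>();

    // Synchronous style: the caller waits for the reply before continuing.
    public async Task<int> GetStockAsync(string sku)
    {
        await Task.Delay(10); // stands in for an HTTP request-response round trip
        return 5;             // assumed stock level, for illustration
    }

    // Asynchronous style: publish a message and return immediately.
    public void PublishSaleRecorded(string sku) => _queue.Enqueue(sku);

    public int PendingMessages => _queue.Count;
}

var demo = new MessagingDemo();

// The synchronous call holds this flow until the "service" replies.
int stock = demo.GetStockAsync("SKU-1").GetAwaiter().GetResult();

// The asynchronous call enqueues and moves on; a consumer drains the queue later.
demo.PublishSaleRecorded("SKU-1");

Console.WriteLine($"Stock: {stock}, queued messages: {demo.PendingMessages}");
```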
Over the past few years, working with MVC and similar frameworks has got us hooked on the JSON format, but you could also consider XML. Both formats work fine over HTTP with API-style resources, and binary message formats are also available if you need one. We are not recommending any particular format; you can go ahead with your preferred one.
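For example, with System.Text.Json (which ships with .NET Core 3.1), a hypothetical Order payload round-trips through JSON as follows; XmlSerializer could be used in much the same way if you prefer XML:

```csharp
using System;
using System.Text.Json;

public class Order
{
    public int Id { get; set; }
    public string Product { get; set; }
}

var order = new Order { Id = 1, Product = "Laptop" };

// JSON: compact and the de facto choice for HTTP APIs.
string json = JsonSerializer.Serialize(order);
Console.WriteLine(json); // {"Id":1,"Product":"Laptop"}

// Round-tripping shows the contract survives serialization intact.
var copy = JsonSerializer.Deserialize<Order>(json);
Console.WriteLine(copy.Product); // Laptop
```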
Numerous patterns and architectures have been explored by the community, with some gaining popularity. With each solution having its own advantages and disadvantages, it has become increasingly important for companies to respond quickly to fundamental demands, such as scalability, high performance, and easy deployment. Failing to fulfill any single one of these in a cost-effective manner could easily impact large businesses negatively, making the difference between a profitable and an unprofitable venture.
With the help of this architectural style, stakeholders can ensure that their designs are protected against the problems mentioned previously. It is also important to consider the fact that this objective is met in a cost-effective manner while respecting the time involved.
Let's see how microservice architecture works.
Microservice architecture is an architectural style that structures an application as a collection of loosely coupled services. These services can intercommunicate or be independent of each other. The overall working architecture of a microservice-based application depends on the various patterns that are used to develop the application. For example, microservices could be based on backend or frontend patterns. We will discuss various patterns in Chapter 10, Design Patterns and Best Practices.
Up until this point, we have discussed various aspects of microservice architecture, and we can now depict how it works; we can use any combination according to our design approach, or pick a pattern that fits. Here are some benefits of working with microservice architecture:
- In the current era of programming, everyone is expected to follow the SOLID principles, and almost all mainstream languages support object-oriented programming (OOP).
- Microservices expose functionality to other, or external, components in a way that allows any programming language to consume it without being tied to any specific user interface (that is, through services such as web services, APIs, REST services, and so on).
- The whole system works according to a type of collaboration that is not interconnected or interdependent.
- Every component is liable for its own responsibilities. In other words, components are responsible for only one functionality.
- It segregates code through separation of concerns, and the segregated code is reusable.
Now let's explore and discuss various factors as advantages of microservices over the SOA and monolithic architectures:
- Cost-effective to scale: You don't need to invest a lot to make the entire application scalable. In terms of a shopping cart, we could simply load balance the product search module and the order-processing module while leaving out less frequently used operational services, such as inventory management, order cancellation, and delivery confirmation.
- Clear code boundaries: Code boundaries can match an organization's departmental hierarchies. With different departments sponsoring product development in large enterprises, this can be a huge advantage.
- Easier code changes: The code is done in a way that is not dependent on the code of other modules and only achieves isolated functionality. If done right, then the chances of a change in a microservice affecting another microservice are minimal.
- Easy deployment: Since the entire application is more like a group of ecosystems that are isolated from each other, deployment can be done one microservice at a time, if required. Failure in any one of these would not bring the entire system down.
- Technology adaptation: You could port a single microservice or a whole bunch of them overnight to a different technology, without your users even knowing about it. Remember to maintain those service contracts.
- Distributed system: The meaning is implied here, but a word of caution is necessary. Make sure that your asynchronous calls are used well and synchronous ones are not really blocking the whole flow of information. Use data partitioning well. We will come to this a little later, in the Data partition section of this chapter, so don't worry for now.
- Quick market response: In a competitive world, this is a definite advantage, as users tend to quickly lose interest if you are slow to respond to new feature requests or the adoption of new technology within your system.
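The cost-effective scaling point above can be sketched with a trivial round-robin balancer placed in front of two instances of the hot module only, while less frequently used modules keep a single instance. All names here are illustrative, and a real deployment would use a proper load balancer rather than this in-process stand-in:

```csharp
using System;
using System.Collections.Generic;

public class SearchInstance
{
    public string Name { get; }
    public SearchInstance(string name) => Name = name;
    public string Search(string term) => $"{Name} handled '{term}'";
}

public class RoundRobinBalancer
{
    private readonly List<SearchInstance> _instances;
    private int _next;

    public RoundRobinBalancer(List<SearchInstance> instances) => _instances = instances;

    public string Route(string term)
    {
        // Alternate across instances so load is spread over the scaled module only.
        var instance = _instances[_next];
        _next = (_next + 1) % _instances.Count;
        return instance.Search(term);
    }
}

var balancer = new RoundRobinBalancer(new List<SearchInstance>
{
    new SearchInstance("search-1"),
    new SearchInstance("search-2")
});

Console.WriteLine(balancer.Route("laptop")); // search-1 handled 'laptop'
Console.WriteLine(balancer.Route("phone"));  // search-2 handled 'phone'
```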
So far, we have covered SOA and microservice architecture. We have discussed each in detail. We also saw how each is independent. In the next section, we will understand the differences between microservices and SOA.
You'll get confused between microservices and SOA if you don't have a complete understanding of both. On the surface, the microservice features and advantages sound almost like a slender version of SOA, with many experts suggesting that there is, in fact, no need for an additional term such as microservices and that SOA can fulfill all the attributes laid out by microservices. However, this is not the case. There are enough differences to isolate them technologically.
The underlying communication system of SOA inherently suffers from the following problems:
- A system developed in SOA depends upon its components interacting with each other. No matter how hard you try, communication between them will eventually face a bottleneck in the message queue.
- Another focal point of SOA is imperative programming. With this, we lose the path to making a unit of code reusable with respect to OOP.
We all know that organizations are spending more and more on infrastructure. The bigger the enterprise is, the more complex the question of the ownership of the application being developed. With an increasing number of stakeholders, it becomes impossible to accommodate all of their ever-changing business needs.
SOA uses an enterprise service bus (ESB) for communication; an ESB can be the reason for communication failures and can impact the entire application. This could happen in a scenario where one service is slowing down and communication is delayed, hampering the workings of the entire application. On the other hand, it would not be a problem in microservices; in the case of independent services, if one service is down, then only that microservice will be affected. In the case of interdependent services, if one of the services is down, then only a particular service(s) will be affected. The other microservices will continue to handle requests.
Data storage is common/sharable in the case of SOA. On the other hand, each service can have independent data storage in microservices.
This is where microservices clearly stand apart. Although cloud development is not in the current scope of our discussion, it won't harm us to say that the scalability, modularity, and adaptability of microservice architecture can be easily extended with the use of cloud platforms. It's time for a change.
Let's look at the prerequisites of microservice architecture.
It is important to understand the resulting ecosystem from a microservice architecture implementation. The impact of microservices is not just pre-operational in nature. The changes in any organization opting for microservice architecture are so profound that if they are not well prepared to handle them, it won't be long before advantages turn into disadvantages.
After the adoption of microservice architecture is agreed upon, it would be wise to have the following prerequisites in place:
- Deployment and QA: Requirements will become more demanding, with a quicker turnaround from development requirements. This will require you to deploy and test as quickly as possible. If it is just a small number of services, then this will not be a problem. However, if the number of services is increasing, it could very quickly challenge the existing infrastructure and practices. For example, your QA and staging environment may no longer suffice to test the number of builds that come back from the development team.
- A collaboration platform for the development and operations team: As the application goes to the public domain, it won't be long before the age-old script of development versus QA is played out again. The difference this time would be that the business would be at stake. So, you need to be prepared to quickly respond in an automated manner to identify the root cause when required.
- A monitoring framework: With the increasing number of microservices, you will quickly need a way to monitor the functioning and health of the entire system for any possible bottlenecks or issues. Without any means of monitoring the status of the deployed microservices and the resultant business function, it would be impossible for any team to take a proactive deployment approach.
This section explained the prerequisites of a microservice architecture-based application. With them in place, the next section will help us understand the problems with a monolithic .NET stack-based application.
In this section, we will discuss all the problems with the monolithic .NET stack-based application. In a monolithic application, the core problem is this: scaling monolithic applications is difficult. The resultant application ends up having a very large code base and poses challenges with regard to maintainability, deployment, and modifications. In the coming sections, we will learn about scaling, and then we will move on to deployment challenges by following scaling properties.
In a monolithic application, technology stack dependency stops the introduction of the latest technologies from the outside world. The present stack poses challenges that the web services themselves suffer from:
- Security: There is no way to identify the user via web services due to there being no clear consensus on a strong authentication scheme. Just imagine a banking application sending unencrypted data containing user credentials. All airports, cafes, and public places offering free Wi-Fi could easily become victims of increased identity theft and other cybercrimes.
- Response time: Though the web services themselves provide some flexibility in the overall architecture, it quickly diminishes because of the long processing time taken by the service. There is nothing wrong with the web service itself in this scenario; it is the fact that a monolithic application involves a huge amount of code and complex logic that makes the response time of a web service long, and therefore unacceptable.
- Throughput rate: This suffers under load, which in turn hampers subsequent operations. It is not a bad idea for a checkout operation to rely on a call to an inventory web service that has to search a few million records. However, when the same inventory service feeds the main product search for the entire portal, it could result in a loss of business. One service call failure out of 10 calls would mean a 10% lower conversion rate for the business.
- Frequent downtime: As the web services are part of the whole monolithic ecosystem, they are bound to be down and unavailable each time there is an upgrade or an application failure. This means that the presence of any B2B dependency from the outside world on the application's web services would further complicate decision-making, thereby causing downtime. This makes the smaller upgrades of the system look expensive; thus, it further increases the backlog of the pending system upgrades.
- Technology adoption: In order to adopt or upgrade a technology stack, it would require the whole application to be upgraded, tested, and deployed, since modules are interdependent and the entire code base of the project would be affected. Consider the payment gateway module using a component that requires a compliance-related framework upgrade. The development team has no option but to upgrade the framework itself and carefully go through the entire code base to identify any code breaks preemptively. Of course, this would still not rule out a production crash, but this can easily make even the best of architects and managers lose sleep.
The discussion above uses a few service-level terms, defined as follows:
- Availability: The percentage of time during which a service is operating.
- Response time: The time a service takes to respond.
- Throughput: The rate of processing requests.
Monolithic applications have high module interdependency, as they are tightly coupled. The different modules utilize each other's functionality in such an intertwined manner that even a single module failure brings the system down through a cascading effect. We all know that a user not getting results for a product search is far less severe than the entire system being brought to its knees.
Decoupling using web services has been traditionally attempted at the architecture level. For database-level strategies, ACID has been relied upon for a long time. Let's examine both of these points further:
- Web services: In the current monolithic application, the customer experience is degraded by the web services. When a customer tries to place an order, reasons such as the long response time of a web service, or even a complete failure of the service itself, result in a failure to place the order successfully. Not even a single failure is acceptable, as users tend to remember their last experience and assume a possible repeat. Not only does this result in the loss of possible sales, but also the loss of future business prospects. Web service failures can cause cascading failures in the systems that rely on them.
- ACID: ACID is the acronym for atomicity, consistency, isolation, and durability; it's an important concept in databases. It is in place, but whether it is a boon or a bane has to be judged against overall performance. It takes care of failures at the database level, and there is no doubt that it provides some insurance against database errors that creep in. At the same time, every ACID operation hampers or delays operations by other components/modules. The point at which it causes more harm than good needs to be judged very carefully.
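To make the trade-off concrete, here is a minimal sketch of how an ACID unit of work is typically expressed in .NET using System.Transactions. The repositories here are hypothetical in-memory stand-ins that only illustrate the call pattern; unlike real database tables, they do not enlist in the transaction:

```csharp
using System;
using System.Collections.Generic;
using System.Transactions;

// Hypothetical stand-ins for database-backed repositories; they illustrate
// the call pattern only and do not actually enlist in the transaction.
public static class StockStore
{
    public static readonly Dictionary<int, int> Levels = new Dictionary<int, int> { [1] = 10 };

    public static void Decrease(int productId, int quantity)
    {
        if (Levels[productId] < quantity)
            throw new InvalidOperationException("Out of stock");
        Levels[productId] -= quantity;
    }
}

public static class OrderLog
{
    public static readonly List<Guid> Orders = new List<Guid>();
    public static void Record(Guid orderId) => Orders.Add(orderId);
}

public static class CheckoutService
{
    // Debit stock and record the order as one atomic unit. With real,
    // database-backed repositories, the rows touched inside the scope stay
    // locked until commit; this locking is precisely what delays other
    // components/modules under load.
    public static void PlaceOrder(Guid orderId, int productId, int quantity)
    {
        using (var scope = new TransactionScope())
        {
            StockStore.Decrease(productId, quantity);
            OrderLog.Record(orderId);
            scope.Complete(); // commit; an exception before this line rolls everything back
        }
    }
}
```

The longer the work inside the scope, the longer the locks are held, which is exactly the cost being weighed in the point above.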
The monolithic application that will be transitioned to microservices has various challenges related to security, response time, and scalability; moreover, its modules are highly dependent on each other. These are all big challenges for any standard application, but especially for a monolithic application that is supposed to serve a high volume of users. The most important point here for our monolithic application is scalability, which will be discussed in the next section.
Factors such as the availability of different means of communication, easy access to information, and open world markets are resulting in businesses growing rapidly and diversifying at the same time. With this rapid growth comes an ever-increasing need to accommodate a growing client base. Scaling is one of the biggest challenges that any business faces while trying to cater to an increased user base.
Scalability describes the capability of a system/program to handle an increasing workload; in other words, it is the ability of a system to grow (or shrink) its capacity to match demand.
Before starting the next section, let's discuss scaling in detail, as this will be an integral part of our exercise, as we work on transitioning from monolithic architecture to microservices.
There are two main strategies or types of scalability:
- Vertical scaling or scale-up
- Horizontal scaling or scale-out
We can scale our application by adopting one of these types of strategies. Let's discuss more about these two types of scaling and see how we can scale our application.
In vertical scaling, we analyze our existing application to find the parts of the modules that cause the application to slow down, due to a longer execution time. Making the code more efficient could be one strategy, so that less memory is consumed. This exercise of reducing memory consumption could be for a specific module or the whole application. On the other hand, due to obvious challenges involved with this strategy, instead of changing the application, we could add more resources to our existing IT infrastructure, such as upgrading the RAM or adding more disk drives. Both these paths in vertical scaling have a limit to the extent to which they can be beneficial. After a specific point in time, the resulting benefit will plateau. It is important to keep in mind that this kind of scaling requires downtime.
In horizontal scaling, we dig deep into modules that show a higher impact on the overall performance for factors such as high concurrency; this will enable our application to serve our increased user base, which is now reaching the million mark. We also implement load balancing to process a greater amount of work. The option of adding more servers to the cluster does not require downtime, which is a definite advantage. Each case is different, so whether the additional costs of power, licenses, and cooling are worthwhile, and up to what point, will be evaluated on a case-by-case basis.
Scaling will be covered in detail in Chapter 8, Scaling Microservices with Azure.
The current application also has deployment challenges. It was designed as a monolithic application, and any change in the order module would require the entire application to be deployed again. This is time-consuming, and the whole cycle would have to be repeated with every change, meaning that this could be a frequent cycle. Scaling could only be a distant dream in such a scenario.
As discussed, the current application poses deployment challenges: since the modules are interdependent and the whole system is a single .NET assembly, every change requires us to deploy the entire assembly. Deploying the entire application in one go also makes it mandatory to test the entire functionality of our application. The impact of such an exercise is huge:
- High-risk deployment: Deploying an entire solution or application in one go poses a high risk, as all modules would be deployed even for a single change in one of the modules.
- Longer testing time: As we have to deploy the complete application, we will have to test the functionality of the entire application. We can't go live without testing. Due to higher interdependency, the change might cause a problem in some other module.
- Unplanned downtime: Complete production deployment needs code to be fully tested, and hence we need to schedule our production deployment. This is a time-consuming task that results in long downtime. While it is planned downtime, during this time, both business and customers will be affected due to the unavailability of the system; this could cause revenue loss to the business.
- Production bugs: A bug-free deployment is the dream of every project manager. However, this is far from reality, and every team dreads the possibility of a buggy deployment. Monolithic applications are no exception, and the resolution of production bugs is easier said than done. The situation only becomes more complex when a previous bug remains unresolved.
In a monolithic application, having a large code base is not the only challenge that you'll face. Having a large team to handle such a code base is one more problem that will affect the growth of the business and application.
When aligning an organization, the most important factor is the goal of the team. That goal should be the same for all team members:
- The same goal: In a team, all the team members have the same goal, which is timely and bug-free delivery at the end of each day. However, having a large code base in the current application means that the monolithic architectural style will not be comfortable territory for the team members. With team members being interdependent due to the interdependent code and associated deliverables, the same effect that is experienced in the code is present in the development team as well. Here, everyone is just scrambling and struggling to get the job done. The question of helping each other out or trying something new does not arise. In short, the team is not a self-organizing one.
- A different perspective: The development team takes too much time for deliverables for reasons such as feature enhancement, bug fixes, or module interdependency, preventing easy development. The QA team is dependent upon the development team, and the development team has its own problems. The QA team is stuck once developers start working on bugs, fixes, or feature enhancements. There is no separate environment or build available for a QA team to proceed with their testing. This delay hampers the overall delivery, and customers or end users will not get the new features or fixes on time.
In our monolithic application, a change in the Order module affects the Stock module, and so on. It is this absence of modularity that results in such a condition.
This also means that we can't reuse the functionality of a module within another module. The code is not decomposed into structured pieces that could be reused to save time and effort. There is no segregation within the code modules, and hence no common code is available.
The business is growing, and its customer base is growing in leaps and bounds. New and existing customers from different regions have different preferences when it comes to using the application. Some like to visit the website, while others prefer to use mobile apps. The system is not structured in a way that allows components to be shared between the website and a mobile app, which makes introducing a mobile/device app for the business a challenging task. The business is therefore affected, losing out on the customers who prefer mobile apps.
The difficulty is in replacing the application components that are using third-party libraries, an external system such as payment gateways, and an external order-tracking system. It is a tedious job to replace the old components in the currently styled monolithic architectural application. For example, if we consider upgrading the library of our module that is consuming an external order-tracking system, then the whole change would prove to be very difficult. Furthermore, it would be an intricate task to replace our payment gateway with another one.
In any of the previous scenarios, whenever we upgraded the components, we upgraded everything within the application, which called for the complete testing of the system and required a lot of downtime. Apart from this, the upgrade would possibly result in production bugs, which would require repeating the whole cycle of development, testing, and deployment.
Our current application has a mammoth database, containing a single schema with plenty of indexes. This structure poses a challenging job when it comes to fine-tuning performance:
- Single schema: All the entities in the database are clubbed under a single schema named dbo. This again hampers the business, owing to the confusion with the single schema regarding various tables that belong to different modules. For example, customer and supplier tables belong to the same schema, that is, dbo.
- Numerous stored procedures: Currently, the database has a large number of stored procedures, which also contain a sizeable chunk of the business logic. Some of the calculations are performed within the stored procedures. As a result, these stored procedures are difficult to optimize or break down into smaller units.
Whenever deployment is planned, the team will have to look closely at every database change. This, again, is a time-consuming exercise that will often turn out to be more complex than the build and deployment exercise itself.
A big database has its own limitations. In our monolithic application, we have a single-schema database containing a lot of stored procedures and functions, all of which impact the performance of the database.
In the coming section, we will discuss various solutions and other approaches to overcome these problems. But before that, we need to know the prerequisites of microservices before digging into this architectural style.
To gain a better understanding of microservices, let's look at an imaginary example of FlixOne Inc. With this example as our base, we can discuss all the concepts in detail and see what it looks like to be ready for microservices.
FlixOne is an e-commerce player that is spread all over India. They are growing at a very fast pace and diversifying their business at the same time. They have built their existing system on .NET Framework, and this is a traditional three-tier architecture. They have a massive database that is central to this system, and there are peripheral applications in their ecosystem. One such application is for their sales and logistics team, and it happens to be an Android app. These applications connect to their centralized data center and face performance issues. FlixOne has an in-house development team supported by external consultants. Refer to the following diagram:
The previous diagram depicts a broader sense of our current application, which is a single .NET assembly application. Here, we have the user interfaces we use to search and order products, track the order, and check out. Now look at the following diagram:
The previous diagram depicts our Shopping cart module only. The application is built with C#, MVC5, and Entity Framework, and it has a single project application. This diagram is just a pictorial overview of the architecture of our application. This application is web-based and can be accessed from any browser. Initially, any request that uses the HTTP protocol will land on the user interface that is developed using MVC5 and jQuery. For cart activities, the UI interacts with the Shopping cart module, which is a business logic layer that interacts with the database layer (written in C#). We are storing data within the database (SQL Server 2008R2).
Here, we are going to understand the functional overview of the FlixOne bookstore application. This is only for the purpose of visualizing our application. The following is a simplified functional overview of the application that shows the process from Home page to Checkout:
In the current application, the customer lands on the home page, where they see featured/highlighted books. They also have the option to search for a book item. After getting the desired result, the customer can choose book items and add them to their shopping cart. Customers can verify the book items before the final checkout. As soon as the customer decides to check out, the existing cart system redirects them to an external payment gateway for the amount payable for the book items in the shopping cart.
As discussed previously, our application is a monolithic application; it is structured to be developed and deployed as a single unit. This application has a large code base that is still growing. Even small updates require the whole application to be deployed at once.
In this section, we have discussed the functional overview of the application. We still need to analyze and address the challenges and find the best solution for the current challenges. So, let's discuss those things next.
The business is growing rapidly, so we decide to open our e-commerce website in 20 more cities. However, we are still facing challenges with the existing application and struggling to serve the existing user base properly. In this case, before we start the transition, we should make our monolithic application ready for its transition to microservices.
In the very first approach, the Shopping cart module will be segregated into smaller modules, then you'll be able to make these modules interact with each other, as well as external or third-party software:
Though this proposed solution would let developers divide the code and reuse it, it is not sufficient for our existing application. The internal processing of the business logic would remain the same in the way it interacts with the UI and the database. The new code would interact with the UI and the database layer, with the database still remaining as the same old single, undivided database. With the database undivided and the layers tightly coupled, the problem of having to update and deploy the whole code base would still remain. So, this solution is not suitable for resolving our problem.
In the previous section, we discussed the deployment challenges we will face with the current .NET monolithic application. In this section, let's take a look at how we can overcome these challenges by making or adapting a few practices within the same .NET stack.
With our .NET monolithic application, our deployments are XCOPY deployments, a process where all the files are simply copied to the server (mostly used for web projects). After dividing our modules into different submodules, we can adapt our deployment strategies accordingly: we can deploy the business logic layer or some common functionality on its own, and we can adopt continuous integration and deployment.
Now that we understand all the challenges with our existing monolithic application, our new application should serve us better with new changes. As we are expanding, we can't miss the opportunity to get new customers. If we do not overcome a challenge, then we will lose business opportunities as well. Let's discuss a few points to solve these problems.
Our modules are interdependent, so we are facing issues such as poor reusability of code and unresolved bugs caused by changes in one module; these are also deployment challenges. To tackle these issues, let's segregate our application in such a way that we can divide modules into submodules. We can divide our Order module so that it implements an interface, with the implementation supplied through the constructor.
Here is a short code snippet that shows how we can apply this to our existing monolithic application. The following code example shows our Order class, where we use constructor injection:
public class Order : IOrder
{
    private readonly IOrderRepository _orderRepository;

    public Order() => _orderRepository = new OrderRepository();

    public Order(IOrderRepository orderRepository) => _orderRepository = orderRepository;

    public IEnumerable<OrderModel> Get() => _orderRepository.GetList();

    public OrderModel GetBy(Guid orderId) => _orderRepository.Get(orderId);
}
In the previous code snippet, we abstracted our Order module behind the IOrder interface. The Order class implements IOrder, and with the use of inversion of control, its dependency is resolved automatically when the object is created.
Furthermore, the code snippet for IOrderRepository is as follows:
public interface IOrderRepository
{
    IEnumerable<OrderModel> GetList();
    OrderModel Get(Guid orderId);
}
We have the following code snippet for OrderRepository, which implements the IOrderRepository interface:
public class OrderRepository : IOrderRepository
{
    public IEnumerable<OrderModel> GetList() => DummyData();

    public OrderModel Get(Guid orderId) => DummyData().FirstOrDefault(x => x.OrderId == orderId);
}
In the preceding code snippet, we have a method called DummyData(), which is used to create Order data for our sample code.
The following is a code snippet showing the DummyData() method:
private IEnumerable<OrderModel> DummyData()
{
    return new List<OrderModel>
    {
        new OrderModel
        {
            OrderId = new Guid("61d529f5-a9fd-420f-84a9-ab86f3eec21f"), // sample order ID for demo data
            OrderDate = DateTime.Now,
            OrderStatus = "In Transit"
        }
    };
}
Here, we are trying to showcase how our Order module gets abstracted. In the previous code snippet, we returned default values (using sample data) for our order just to demonstrate the solution to the actual problem.
Finally, our presentation layer (the MVC controller) will use the available methods, as shown in the following code snippet:
public class OrderController : Controller
{
    private readonly IOrder _order;

    public OrderController() => _order = new Order();

    public OrderController(IOrder order) => _order = order;

    // GET: Order
    public ActionResult Index() => View(_order.Get());

    // GET: Order/Details/5
    public ActionResult Details(string id)
    {
        var orderId = Guid.Parse(id);
        var orderModel = _order.GetBy(orderId);
        return View(orderModel);
    }
}
The following diagram is a class diagram that depicts how our interfaces and classes are associated with each other and how they expose their methods, properties, and so on:
Here, we again used constructor injection: an IOrder instance is passed in and used to initialize the controller. Consequently, all the Order methods are available within our controller.
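The wiring that makes this resolution automatic is typically a one-time container registration. Here is a minimal sketch assuming Microsoft.Extensions.DependencyInjection; any IoC container (Unity, Autofac, and so on) would follow the same pattern, and the class name CompositionRoot is purely illustrative:

```csharp
using Microsoft.Extensions.DependencyInjection;

public static class CompositionRoot
{
    public static IOrder BuildOrder()
    {
        var services = new ServiceCollection();

        // Map each abstraction to its concrete implementation once, centrally.
        services.AddScoped<IOrderRepository, OrderRepository>();
        services.AddScoped<IOrder, Order>();

        // The container now resolves Order, and its IOrderRepository
        // dependency, automatically via constructor injection.
        return services.BuildServiceProvider().GetRequiredService<IOrder>();
    }
}
```

Swapping OrderRepository for another implementation now requires changing only this registration, not the consumers.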
Getting this far means we have overcome a few problems, including the following:
- Reduced module dependency: With the introduction of IOrder in our application, we have reduced the interdependency of the Order module. This way, if we are required to add or remove anything to or from this module, then other modules will not be affected, as IOrder is only implemented by the Order module. Let's say we want to make an enhancement to our Order module; this would not affect our Stock module. This way, we reduce module interdependency.
- Introducing code reusability: If you are required to get the order details of any application modules, you can easily do so using the IOrder type.
- Improvements in code maintainability: We have now divided our modules into submodules or classes and interfaces. We can now structure our code in such a manner that all the types (that is, all the interfaces) are placed under one folder and follow the structure for the repositories. With this structure, it will be easier for us to arrange and maintain code.
- Unit testing: Our current monolithic application does not have any kind of unit testing. With the introduction of interfaces, we can now easily perform unit testing and adopt the system of test-driven development with ease.
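For example, with a hand-rolled fake repository (a sketch; a mocking library such as Moq would work equally well, and MSTest is assumed as the test framework), the Order class can now be tested in complete isolation from the database:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// A fake repository lets us exercise Order without touching a real database.
public class FakeOrderRepository : IOrderRepository
{
    private readonly List<OrderModel> _orders;

    public FakeOrderRepository(List<OrderModel> orders) => _orders = orders;

    public IEnumerable<OrderModel> GetList() => _orders;

    public OrderModel Get(Guid orderId) => _orders.FirstOrDefault(x => x.OrderId == orderId);
}

[TestClass]
public class OrderTests
{
    [TestMethod]
    public void GetBy_ReturnsTheMatchingOrder()
    {
        var id = Guid.NewGuid();
        var fake = new FakeOrderRepository(new List<OrderModel>
        {
            new OrderModel { OrderId = id, OrderStatus = "In Transit" }
        });

        // Constructor injection is what makes this substitution possible.
        var order = new Order(fake);

        Assert.AreEqual("In Transit", order.GetBy(id).OrderStatus);
    }
}
```

Without the IOrder/IOrderRepository abstractions, this test would have required a live database.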
As discussed in the previous section, our application database is huge and depends on a single schema. This huge database should be considered while refactoring. To refactor our application database, we follow these points:
- Schema correction: In general practice (though it isn't required), a schema depicts its module. As discussed in previous sections, our huge database has a single schema (currently dbo), yet not every table or piece of code should be related to dbo. Several modules interact with specific tables; for example, the tables of our Order module should sit under a related schema name, such as Order. Whenever we need to use those tables, we can then refer to them through their own schema instead of the general dbo schema. This will not impact how data is retrieved from the database, but it will structure our tables in such a way that we can identify and correlate each and every table with its specific module. This exercise will be very helpful when we reach the stage of transitioning the monolithic application to microservices. Refer to the following diagram depicting the Order schema and Stock schema of the database:
In the previous diagram, we can see how the database schema is separated logically. It is not separated physically: the Order schema and Stock schema belong to the same database.
We can also take the example of our users: not all users are admins, and not all belong to a specific zone, area, or region. However, our user tables should be structured in such a way that we can identify users by the table name or by the way the tables are structured. Here, we can structure our user tables on the basis of regions and map them to a region table in a way that does not impact, or require any changes to, the existing code base.
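With Entity Framework, as used by our application, moving a table under a module-specific schema is a small mapping change. The following is a hedged sketch (the context and entity names are illustrative, not the application's actual types):

```csharp
using System.Data.Entity; // Entity Framework 6

public class FlixOneContext : DbContext
{
    public DbSet<OrderModel> Orders { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Move the table out of the default dbo schema and under its module's schema.
        modelBuilder.Entity<OrderModel>().ToTable("Orders", "Order");
        // A Stock entity would similarly map to the Stock schema, for example:
        // modelBuilder.Entity<StockItem>().ToTable("StockItems", "Stock");
    }
}
```

Queries through the context are unaffected; only the physical table mapping changes.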
- Moving the business logic to code from stored procedures: In the current database, we have thousands of lines of stored procedures containing a lot of business logic. We should move this business logic into our code base. In our monolithic application, we are using Entity Framework; here, we can avoid the creation of stored procedures and write all of our business logic as code.
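As a sketch of what this migration looks like (the discount rule and type names are invented purely for illustration), a calculation that previously lived in a stored procedure becomes a plain, version-controlled, unit-testable C# method:

```csharp
using System.Collections.Generic;
using System.Linq;

public class CartLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
}

public static class PricingService
{
    // Previously computed inside a stored procedure; now it is ordinary
    // application code that can be reviewed, refactored, and unit tested.
    public static decimal CalculateTotal(IEnumerable<CartLine> lines)
    {
        var subtotal = lines.Sum(l => l.UnitPrice * l.Quantity);
        var discount = subtotal > 100m ? subtotal * 0.05m : 0m; // illustrative rule
        return subtotal - discount;
    }
}
```

The database is then left to do what it does best, storing and retrieving data, while the rules live with the rest of the code.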
When it comes to database sharding and partitioning, we choose database sharding. Here, we will break it into smaller databases. These smaller databases will be deployed on a separate server:
In general, database sharding is simply defined as a shared-nothing partitioning scheme for large databases. This way, we can achieve new levels of performance and scalability. The word sharding comes from shard, a small piece: the database is divided into chunks (shards), which are then spread across different servers.
The previous diagram is a pictorial overview of how our database is divided into smaller databases. Take a look at the following diagram:
The preceding diagram illustrates that our application now has a smaller database or each service has its own database.
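A common way to route a request to the right shard is to hash a shard key onto one of the smaller databases. The following is a minimal sketch; the connection strings are placeholders, and a production system would use a stable hash and a configurable shard map rather than this hardcoded array:

```csharp
using System;

public static class ShardRouter
{
    // Each smaller database lives on its own server (placeholder connection strings).
    private static readonly string[] Shards =
    {
        "Server=shard0;Database=FlixOne0;Trusted_Connection=True;",
        "Server=shard1;Database=FlixOne1;Trusted_Connection=True;",
        "Server=shard2;Database=FlixOne2;Trusted_Connection=True;"
    };

    // Deterministically map a customer to the shard that holds their data.
    // Note: Guid.GetHashCode is fine for a demo, but it is not guaranteed
    // to be stable across runtimes; real systems use a stable hash function.
    public static string GetConnectionString(Guid customerId)
    {
        var index = (uint)customerId.GetHashCode() % (uint)Shards.Length;
        return Shards[index];
    }
}
```

Because each shard shares nothing with the others, load and storage are spread across servers instead of piling onto one.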
In the previous sections, we discussed the challenges and problems faced by the team. Here, we'll propose DevOps as a solution: the collaboration of the development team with the operations team should be emphasized. We should also set up a system where the development, QA, and infrastructure teams work in collaboration.
Infrastructure setup can be a very time-consuming job, and developers remain idle while the infrastructure is being readied for them; new joiners take time before they can contribute to the team. The process of infrastructure setup should not stop a developer from becoming productive, as that reduces overall productivity. It should be an automated process. With the use of Chef or PowerShell, we can easily create our virtual machines and quickly ramp up the developer count as and when required. This way, our developers can be ready to start work on day one of joining the team.
Chef is a DevOps tool that provides a framework to automate and manage your infrastructure. PowerShell can be used to create our Azure machines and to set up Azure DevOps (formerly TFS).
We are going to introduce automated testing as a solution to the problems that we faced while testing during deployment. In this part of the solution, we have to divide our testing approach as follows:
- Adopt test-driven development (TDD). With TDD, a developer writes the test before the actual code; the test is another piece of code that validates whether the functionality is working as intended. If any functionality is found not to satisfy the test code, the corresponding unit test fails, and the functionality can be easily fixed, as you know where the problem is. In order to achieve this, we can utilize frameworks such as MSTest, NUnit, or xUnit.
- The QA team can use scripts to automate their tasks. They can create scripts by utilizing QTP or the Selenium framework.
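To illustrate the TDD flow described above with MSTest (the Cart class is hypothetical): the test is written first, fails while Cart does not yet exist or behaves incorrectly, and passes once the minimal implementation is in place:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CartTests
{
    // Step 1: write the failing test first, describing the intended behavior.
    [TestMethod]
    public void Total_SumsPriceTimesQuantity()
    {
        var cart = new Cart();
        cart.Add(price: 10m, quantity: 3);

        Assert.AreEqual(30m, cart.Total);
    }
}

// Step 2: write just enough code to make the test pass, then refactor.
public class Cart
{
    public decimal Total { get; private set; }

    public void Add(decimal price, int quantity) => Total += price * quantity;
}
```

The red-green-refactor loop repeats for each new piece of functionality.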
The current system does not have any kind of versioning system, so there is no way to revert if something happens during a change. To resolve this issue, we need to introduce a version control mechanism. In our case, this should be either Azure DevOps or Git. With the use of version control, we can now revert our change if it is found to break some functionality or introduce any unexpected behavior in the application. We now have the capability of tracking the changes being made by the team members working on this application, at an individual level. However, in the case of our monolithic application, we did not have the capability to do this.
In our application, deployment is a huge challenge. To resolve this, we'll introduce continuous integration (CI). In this process, we need to set up a CI server. With the introduction of CI, the entire process is automated: as soon as the code is checked in by any team member to version control (Azure DevOps or Git, in our case), the CI process kicks into action. This ensures that the new code is built and that unit tests are run along with the integration tests. Whether the build succeeds or fails, the team is alerted to the outcome, enabling it to respond to any issue quickly.
Next, we move onto continuous deployment. Here, we introduce various environments, namely, a development environment, a staging environment, a QA environment, and so on. Now, as soon as the code is checked in by any team member, CI kicks into action. This invokes the unit/integration test suites, builds the system, and pushes it out to the various environments we have set up. This way, the turnaround time for the development team to provide a suitable build for QA is reduced to a minimum.
As a monolithic application, ours has various challenges related to deployment that affect the development team as well. We have discussed CI/CD and seen how deployment works.
The next section covers identifying decomposition candidates within a monolith architecture, which can cause problems.
We have now clearly identified the various problems that the current FlixOne application architecture and its resultant code are posing for the development team. We also understand which business challenges the development team is not able to take up and why.
It is not that the team is not capable enough—it is just the code. Let's move ahead and check out the best strategy to zero in on the various parts of the FlixOne application that we need to move to the microservice-styled architecture. We need to know that we have a candidate with a monolith architecture, which poses problems in one of the following areas:
- Focused deployment: Although this comes at the final stage of the whole process, it demands more respect, and rightly so. It is important to understand that this factor shapes and defines the whole development strategy from the initial stages of identification and design. Here's an example: the business asks you to resolve two problems of equal importance. Resolving one of them might require you to test many more associated modules, while resolving the other might allow you to get away with limited testing. Being forced to make such a choice is wrong, and a business shouldn't have to make it.
- Code complexity: Having smaller teams is the key here. You should be able to assign a small development team to a change that is associated with a single functionality. Small teams comprise one or two members; any more than this and a project manager will be needed, which means that something is more interdependent across modules than it should be.
- Technology adoption: You should be able to upgrade components to a newer version or a different technology without breaking anything. If you have to think about the components that depend on technology, you have more than one candidate. Even if you have to worry about the modules that this component depends on, you'll still have more than one candidate. I remember one of my clients who had a dedicated team to test out whether the technology being released was a suitable candidate for their needs. I learned later that they would actually port one of the modules and measure the performance impact, effort requirement, and turnaround time of the whole system. I don't agree with this, though.
- High resources: In my opinion, everything in a system, from memory and CPU time to I/O requirements, should be considered per module. If any one module consumes more resources, and/or does so more frequently, it should be singled out. Any operation that involves higher-than-normal memory, blocks processing for long periods, or keeps the system waiting on I/O is a good candidate in our case.
- Human dependency: If moving team members across modules seems like too much work, you have more candidates. Developers are smart, but if they struggle with large systems, it is not their fault. Break the system down into smaller units, and the developers will be both more comfortable and more productive.
This section helped us understand the problems that a monolithic architecture faces. Next, we will move on to the advantages of the microservice architecture.
We have performed the first step of identifying our candidates for moving to microservices. It will be worthwhile to go through the corresponding advantages that microservices provide. Let's understand them in the following sections.
With each microservice being independent of the others, we now have the power to use a different technology for each one. The payment gateway could be using the latest .NET Framework, whereas the product search could be shifted to any other programming language.
Similarly, the application as a whole could use SQL Server for data storage, whereas the inventory could be based on NoSQL. The flexibility is limitless.
Since we try to achieve isolated functionality within each microservice, it is easy to add new features, fix bugs, or upgrade the technology within any one of them, with no impact on the other microservices. You now have vertical code isolation, which enables you to do all of this while keeping deployments just as fast.
This doesn't end here. The FlixOne team now has the ability to release a new option for the payment gateway, alongside the existing one. Both payment gateways could coexist until both the team and the business owners are satisfied with the reports. This is where the immense power of this architecture comes into play.
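The coexistence of two payment gateway implementations described above can be sketched as a simple routing toggle. This is only an illustration of the idea; the interface, gateway classes, and percentage-based router below are hypothetical names, not FlixOne's actual code.

```csharp
using System;

// A sketch of running two payment gateway implementations side by side.
// All type names here are illustrative, not part of any real system.
public interface IPaymentGateway
{
    string ProcessPayment(decimal amount);
}

public class LegacyPaymentGateway : IPaymentGateway
{
    public string ProcessPayment(decimal amount) => $"legacy:{amount}";
}

public class NewPaymentGateway : IPaymentGateway
{
    public string ProcessPayment(decimal amount) => $"new:{amount}";
}

// Routes a configurable percentage of traffic to the candidate gateway, so
// both implementations coexist while the business evaluates the reports.
public class PaymentGatewayRouter
{
    private readonly IPaymentGateway _legacy;
    private readonly IPaymentGateway _candidate;
    private readonly int _candidatePercentage;
    private readonly Random _random;

    public PaymentGatewayRouter(IPaymentGateway legacy, IPaymentGateway candidate,
        int candidatePercentage, int? seed = null)
    {
        _legacy = legacy;
        _candidate = candidate;
        _candidatePercentage = candidatePercentage;
        _random = seed.HasValue ? new Random(seed.Value) : new Random();
    }

    // Pick a gateway for the current request based on the rollout percentage.
    public IPaymentGateway Select() =>
        _random.Next(100) < _candidatePercentage ? _candidate : _legacy;
}
```

Once the reports look good, the percentage is raised to 100 and the legacy implementation is retired, with no redeployment of any other service.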
It is not necessarily the business owner's concern to understand that certain features are harder or more time-consuming to address. Their responsibility is to keep driving and growing the business. The development team should become a support network for achieving the business goals, not a roadblock.
It is extremely important to understand that quickly responding to business needs and adapting to marketing trends are not by-products of microservices, but goals.
The capability to achieve these goals with smaller teams only makes microservices more suitable for business owners.
Each microservice becomes an investment for the business, since it can easily be consumed by other microservices, without having to redo the same code again and again. Every time a microservice is reused, time is saved by avoiding the testing and deployment of that part.
The user experience is enhanced, since the downtime is either eliminated or reduced to a minimum.
With vertical isolation in place and each microservice rendering a specific service to the whole system, it is easy to scale. Not only is it easier to identify candidates for scaling, but the cost is also lower, because we only scale a part of the whole microservice ecosystem.
Scaling can still be cost-intensive for the business. Consequently, prioritizing which microservice to scale first can now be a choice for the business team; this decision no longer has to rest with the development team.
Microservices can be secured as easily as an application built on the traditional layered architecture. Different configurations can be used to secure different microservices: you can have one part of the microservice ecosystem behind firewalls and another part handling user encryption. Web-facing microservices can be secured differently from the rest. You can tailor security to your needs, based on choice, technology, or budget.
It is common for the majority of monolithic applications to have a single database. And almost always, there is a database architect or a designated owner responsible for its integrity and maintenance. Any application enhancement that requires a database change has to go through this person; in my experience, that has never been an easy task. This gatekeeping further slows down application enhancement, scalability, and technology adoption.
Because each microservice has its own independent database, the decision-making related to changes required in the database can be easily delegated to the respective team. We don't have to worry about the impact on the rest of the system, as there will not be any.
At the same time, this separation of the databases opens up the possibility for teams to become self-organizing. They can now start experimenting.
For example, a team can now consider using Azure Table storage or Azure Cache for Redis to store its massive product catalog, instead of the database being used currently. Not only can the team experiment, but their experience can also easily be replicated across the whole system by other teams, on a schedule convenient to them.
In fact, nothing is stopping the FlixOne team now from being innovative and using a multitude of technologies available at the same time, then comparing performance in the real world and making a final decision. Once each microservice has its own database, this is how FlixOne will look:
In the preceding image, each service has its own database and can be scaled independently; the inventory service additionally uses caching (a Redis server).
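The caching shown for the inventory service typically follows the cache-aside pattern: read from the cache first, and fall back to the database only on a miss. The sketch below illustrates the pattern only; the `ProductCatalogService` and the in-memory cache standing in for a real Redis client (such as StackExchange.Redis) are assumptions for the example, not FlixOne code.

```csharp
using System;
using System.Collections.Generic;

// Illustrative product entity.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Stand-in for a Redis client; a dictionary keeps the example self-contained.
public class InMemoryCache
{
    private readonly Dictionary<string, Product> _store = new Dictionary<string, Product>();
    public bool TryGet(string key, out Product value) => _store.TryGetValue(key, out value);
    public void Set(string key, Product value) => _store[key] = value;
}

public class ProductCatalogService
{
    private readonly InMemoryCache _cache;
    private readonly Func<int, Product> _loadFromDatabase; // stands in for the product database

    public ProductCatalogService(InMemoryCache cache, Func<int, Product> loadFromDatabase)
    {
        _cache = cache;
        _loadFromDatabase = loadFromDatabase;
    }

    // Cache-aside: return the cached copy when present; otherwise load from
    // the database and populate the cache for subsequent reads.
    public Product GetProduct(int id)
    {
        var key = $"product:{id}";
        if (_cache.TryGet(key, out var cached))
            return cached;

        var product = _loadFromDatabase(id);
        _cache.Set(key, product);
        return product;
    }
}
```

Because the cache belongs to the inventory service alone, the team can swap the dictionary for Redis or Azure Table storage without any other service noticing.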
Whenever a choice is made to move away from the monolithic architecture in favor of the microservice-styled architecture, the time and cost involved in the initiative will pose some resistance. A business evaluation might rule against moving those parts of the monolithic application that do not make a business case for the transition.
It would have been a different scenario if we were developing the application from the beginning. However, this is also the power of microservices, in my opinion. The correct evaluation of the entire monolithic architecture can safely identify the monolithic parts to be ported later.
We must safeguard against the risk of integration to ensure that these isolated parts do not cause a problem to other microservices in the future.
While we discussed the various parts of the monolithic application, our goal was to make them work collaboratively, so that they can communicate with each other following the patterns used by applications based on the microservice architectural style. Achieving this requires various patterns, applied on top of the technology stack in which the original monolithic application was developed. For example, if we adopt the event-driven pattern, our monolithic application should adhere to it by both consuming and publishing events. Implementing this pattern means changing the code of our monolithic application, which involves development effort to modify the existing code.
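To make the publish/consume contract of the event-driven pattern concrete, here is a minimal in-process event bus sketch. The `EventBus` and `OrderPlaced` types are illustrative assumptions for this example, not part of any specific library; in production, a broker such as Azure Event Hubs or RabbitMQ would typically take this role.

```csharp
using System;
using System.Collections.Generic;

// Illustrative event type a monolith might publish.
public class OrderPlaced
{
    public int OrderId { get; set; }
}

// A minimal in-process event bus: subscribers register handlers per event
// type, and publishers raise events without knowing who consumes them.
public class EventBus
{
    private readonly Dictionary<Type, List<Action<object>>> _handlers =
        new Dictionary<Type, List<Action<object>>>();

    public void Subscribe<TEvent>(Action<TEvent> handler)
    {
        if (!_handlers.TryGetValue(typeof(TEvent), out var list))
        {
            list = new List<Action<object>>();
            _handlers[typeof(TEvent)] = list;
        }
        list.Add(e => handler((TEvent)e));
    }

    public void Publish<TEvent>(TEvent @event)
    {
        if (_handlers.TryGetValue(typeof(TEvent), out var list))
            foreach (var handler in list)
                handler(@event);
    }
}
```

The point of the contract is the decoupling: the publishing module never references the consuming module, which is exactly the property the monolith's code must be reworked to honor.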
Similarly, if there is a need to use the API Gateway pattern, then we should make sure that our gateway can communicate with the monolithic application. This can be complex or tricky when the existing monolithic application does not expose web services (RESTful APIs), and it puts pressure on the development team to change the existing code so that the application fits the standards of a gateway. There is a good chance that RESTful services will need to be added or updated, because such services can be easily consumed by the gateway. To overcome this burden, we can instead create a separate microservice, thereby avoiding major changes in the source code.
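To give a feel for what an API Gateway configuration looks like, here is a sketch in the style of Ocelot, a popular open source .NET API gateway. The paths, hosts, and ports are illustrative assumptions, not FlixOne's actual topology, and depending on the Ocelot version the top-level key is `ReRoutes` or `Routes`.

```json
{
  "ReRoutes": [
    {
      "UpstreamPathTemplate": "/api/product/{everything}",
      "UpstreamHttpMethod": [ "GET" ],
      "DownstreamPathTemplate": "/api/product/{everything}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        { "Host": "localhost", "Port": 5001 }
      ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "http://localhost:5000"
  }
}
```

The gateway maps incoming (upstream) requests to the service (downstream) that handles them, which is why the monolith, or a microservice carved out of it, must expose RESTful endpoints the gateway can route to.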
We discussed the integration of monolithic applications in this section, with the help of various approaches, such as the event-driven pattern, the API Gateway pattern, and so on. The next section discusses Azure Service Fabric.
When we talk about microservices in the .NET Core world, Azure Service Fabric is a name that comes up frequently. In this section, we will discuss Service Fabric.
Azure Service Fabric is a platform that helps us easily package, deploy, and manage scalable and reliable microservices (it also supports containers, such as Docker containers). As a developer, it is sometimes difficult to focus on your main responsibilities because of complex infrastructural problems; with the help of Azure Service Fabric, developers need not worry about infrastructure issues. Service Fabric powers many Microsoft services, including Azure SQL Database, Cosmos DB, Microsoft Power BI, Azure Event Hubs, Azure IoT Hub, and many more core services.
According to the official documentation (https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-overview), we can define Azure Service Fabric as follows:
- Any OS, any cloud: You just need to create a Service Fabric cluster, and it will run on Azure (in the cloud) or on-premises, on Linux or on Windows Server. Moreover, you can also create clusters on other public clouds.
- Stateless and stateful microservices: With the help of Service Fabric, you can build applications composed of stateless and/or stateful microservices.
According to its official documentation (https://docs.microsoft.com/en-us/azure/service-fabric/), stateless microservices (such as protocol gateways and web proxies) do not maintain a mutable state outside of a request and its response, whereas stateful microservices (such as user accounts, databases, and shopping carts) maintain a mutable, authoritative state beyond the request and its response.
- Full support for application lifecycle management: With the help of Service Fabric, you get support for the full application lifecycle, including development, deployment, and so on.
You can develop a scalable application. For more information on this, refer to: https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-application-lifecycle.
You can develop highly reliable, stateless, and stateful microservices.
There are different Service Fabric programming models available that are beyond the scope of this chapter. For more information, refer to: https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-choose-framework.
The aim of this section was to give an overview of Azure Service Fabric, including stateless and stateful microservices. We have seen that Azure Service Fabric supports the development of scalable applications.
In this chapter, we discussed the microservice architectural style in detail: its history and how it differs from its predecessors, monolithic architecture and SOA. We also examined the various challenges that monolithic architecture faces when dealing with large systems. Scalability and reusability are some definite advantages that SOA provides over monolithic architecture.
We also discussed the limitations of monolithic architecture, including scaling problems, by examining a real-life monolithic application. The microservice architectural style resolves all of these issues by reducing code interdependency and isolating the dataset that any one microservice works upon. We utilized dependency injection and database refactoring for this. We also further explored automation, CI, and deployment. These practices allow the development team to let the business sponsor choose which industry trends to respond to first. This results in cost benefits, better business response, timely technology adoption, effective scaling, and the removal of human dependency. Finally, we discussed Azure Service Fabric and got an idea of Service Fabric and its different programming models.
In the next chapter, we will go ahead and transition our existing application to microservice-style architecture and put our knowledge to the test. We will transition our monolithic application to microservices by discussing the new technology stack (C#, EF, and so on). We will also cover the concept of seam and discuss microservice communication.
- What are microservices?
- Can you define Azure Service Fabric?
- What is database sharding?
- What is TDD and why should developers adopt this?
- Can you elaborate on dependency injection (DI)?