
Implementing Event-Driven Microservices Architecture in .NET 7

By Joshua Garverick, Omar Dean McIver
About this book
This book will guide you through various hands-on practical examples for implementing event-driven microservices architecture using C# 11 and .NET 7. It has been divided into three distinct sections, each focusing on different aspects of this implementation. The first section will cover the new features of .NET 7 that will make developing applications using EDA patterns easier, the sample application that will be used throughout the book, and how the core tenets of domain-driven design (DDD) are implemented in .NET 7. The second section will review the various components of a local environment setup, the containerization of code, testing, deployment, and the observability of microservices using an EDA approach. The third section will guide you through the need for scalability and service resilience within the application, along with implementation details related to elastic and autoscale components. You’ll also cover how proper telemetry helps to automatically drive scaling events. In addition, the topic of observability is revisited using examples of service discovery and microservice inventories. By the end of this book, you’ll be able to identify and catalog domains, events, and bounded contexts to be used for the design and development of a resilient microservices architecture.
Publication date:
March 2023
Publisher
Packt
Pages
326
ISBN
9781803232782

 

The Sample Application

Over the past several years, the emergence of high-volume, scalable, event-driven applications has caused an interesting shift in application development. Complementary design patterns have made writing and implementing event-driven architectures more appealing and have helped to reduce the learning curve when it comes to fully leveraging the elasticity and resiliency of cloud platform components. We will be taking a look at an application that utilizes event-driven architectures, implemented using .NET 7 and leveraging cloud-native applications and data constructs.

The purpose of this chapter is to outline the sample application we will be using throughout this book, along with the business drivers and goals it intends to satisfy. This will provide you with the opportunity to get a baseline understanding of the application's structure, source code, mechanics, and domains.

In this chapter, we'll cover the following main topics:

  • Exploring business drivers and the application
  • Architectural structures and paradigms
  • Implementation details
 

Technical requirements

There are several prerequisites you will need to have an understanding of or have installed on your machine to use the code base and follow along with the examples. These include the following:

  • Git
  • Visual Studio or Visual Studio Code
  • Docker
  • Kubernetes
  • Service-oriented architectures
  • Domain-Driven Design (DDD)

We will be using an application that has been custom-developed and is included with the source code for this book. The primary platform we will be developing on is .NET 7. All examples will use Visual Studio 2022 as the primary integrated development environment (IDE). Either Visual Studio 2022 or Visual Studio Code will be required to develop .NET 7 solutions.

Important note

The links to all the white papers and other sources mentioned in this chapter are provided in the Further reading section toward the end of the chapter.

 

Exploring business drivers and the application

It's always a good idea to have a solid understanding of why an application exists, how it came to be, and what problems or opportunities it looks to solve. This application is a concept application that involves Internet of Things (IoT) devices, distributed event ingestion at scale, and facial recognition features. The primary market for this application is turnstile systems used at mass transit locations:

Figure 1.1 – Turnstiles in use at a transit station


In this scenario, the baseline events capture a simple count of customers who pass through the turnstiles at both the entrance and exit points of the mass transit system. Some drivers that contribute to the concept of the application, along with its need, include the following:

  • To increase the visibility of equipment health and the need for proactive maintenance
  • To allow integration for facial recognition sensors that can scan law enforcement databases for potential fugitives or persons of interest
  • To manage costs associated with turnstile equipment, with options for expanding to fare payment interfaces
  • To analyze transit usage, turnstile placement, and the need for additional units in high-volume areas

Having the ability to capture foot traffic related to the entrance and exit points of a transit station has several benefits. First, it can be used to understand how busy any one station is. Second, with extended use, the equipment can wear down and eventually break. With a line of sight into how many people are using the equipment, technicians can make educated decisions regarding when units might need to be serviced or ultimately replaced. This could also lead to the deeper monitoring of other components besides the turnstile unit, such as the payment interfaces. Some units might only have a ticket scanner, while others might have a ticket scanner and an electronic payment interface, where contactless payments using mobile devices can be used. The monitoring of normal usage, malfunctions, and scheduling the proactive servicing of those components could also be beneficial.

An additional use case could be that of transit scheduling and vehicle availability. Generally, the number of vehicles (such as trains, trams, buses, and more) any transit authority might have in its fleet is a direct result of them already monitoring customer traffic demands. Using data that has been captured in real time can help accelerate the analysis of needed schedule adjustments, fleet adjustments, or reductions in services for less-traveled stations.

The addition of facial recognition software to the equipment is not a hard requirement but does offer a value-add in the ability to potentially identify criminals at large or suspects who are wanted for questioning. With any artificial intelligence, it is essential to both program and operate with ethics and security in mind. While closed-circuit cameras and more advanced video surveillance equipment can be found in many transit stations, those cameras do not immediately notify anyone if a person has been recognized based on an alert or a bulletin issued by a law enforcement agency. Data collected during facial scans must be treated as personally identifiable information and must be purged if no match has been found.

Unpacking this a bit more, other potential drivers could come into play. For example, examining the business requirements for the application would add clarity. Looking at the domain model and any domain-specific language (DSL) associated with the requirements would help remove any ambiguity around what is meant by a customer, an order, an item, or even a payment method. Let's take a look at the domain model to get a better understanding of the layout of the different services, contexts, and aggregates.

Reviewing the domain model

The application's domain model describes the functional areas (domains) that live within the confines of the application. Each is developed using a ubiquitous language that everyone—from business analysts to senior leadership, to junior developers—can easily understand and relate to. Figure 1.2 represents a simple domain model diagram that aligns to the structure of the application:

Figure 1.2 – A high-level domain model


The domains we will reference for this application relate to the primary pieces of functionality the application looks to offer. The following table offers a description of each domain:

Table 1.1 – Application functions


With these baseline domains defined, some simple rules of engagement can be derived. For example, a passenger could use a piece of equipment to enter a transit station while being run through facial recognition by the Identification domain. Equipment could raise an error noting a malfunction, which could then schedule a maintenance event. Turnstiles could fire events per turn, allowing the aggregation of passenger throughput per turnstile and per station. These interactions can then be broken into areas of overlapping concern and, ultimately, help derive the aggregate roots that are important to the model and the application. They include the following:

  • Passenger
  • Station
  • Turnstile
  • Camera
  • NotificationConfiguration
  • TurnstileMaintenanceSchedule
  • CameraMaintenanceSchedule

Each of the aggregates will contain common properties such as the name and the ID. Some differences between entities and value objects related to the aggregates will be required, as each one will have its own requirements for data, as prescribed by the domain. Figure 1.3 represents a high-level diagram of each aggregate, including properties (the list items), entities (the white rounded rectangles), and value objects (the green rounded rectangles):

Figure 1.3 – A high-level aggregate view


Chapter 4, Domain Model and Asynchronous Design, dives deeper into the domain model, including a review of events and event handlers and asynchronous design.
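
To make the distinction between these building blocks concrete, here is a minimal sketch of how an aggregate root and a value object might be modeled in C#. The marker interfaces and the Turnstile shape shown here are hypothetical illustrations, not the book's actual types:

```csharp
// A minimal sketch of an aggregate root with a value object. The marker
// interfaces and type names here are hypothetical illustrations, not the
// actual MTAEDA types.
using System;

public interface IAggregateRoot { }
public interface IValueObject { }

// Value object: immutable and compared by value (records give this for free).
public record StationLocation(string StationName, string Platform) : IValueObject;

// Aggregate root: identity plus behavior that changes domain state.
public class Turnstile : IAggregateRoot
{
    public Guid Id { get; } = Guid.NewGuid();
    public string Name { get; }
    public StationLocation Location { get; }
    public int RotationCount { get; private set; }

    public Turnstile(string name, StationLocation location)
    {
        Name = name;
        Location = location;
    }

    // Each completed turn adds to the passenger throughput for this unit.
    public void RecordRotation() => RotationCount++;
}
```

Two turnstiles at the same station would share an equal StationLocation value but remain distinct aggregates because each has its own identity.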

With an understanding of the business relevance and the domain model that supports the business case, we can go one level deeper and examine some of the architectural structures and paradigms that help to define the event-driven nature of this application.

 

Assessing architectural structures and paradigms

Establishing an architectural baseline helps to drive decisions regarding how the application and its components will ultimately be implemented. It also provides an opportunity to evaluate different patterns and practices with the ultimate goal of selecting a path forward. This section covers the overall architectural design of the sample application and some core tenets that enable the creation and consumption of events.

A high-level logical architecture

The solution is predicated on the use of hardware interfaces (such as equipment) that can communicate with hosted services in the cloud via a standard network connection. There is a hardware gateway (such as a Raspberry Pi) that hosts simple write-only services, which will integrate using relevant domain services to record turnstile usage, facial recognition hits, and possible malfunctions with the turnstile or camera. Any user interface can interact with a common API gateway layer, which allows for data exchange without needing to know all the particulars of the available APIs. The backend runtime is managed by Kubernetes (in this particular case, AKS), with containers for each of the available domain microservices. Each of these microservices interacts with the event bus to send events. Then, the events are handled according to the domain's applicable event handlers. A reporting layer is used to access information captured via the event stream. SQL databases will be used to maintain the append-only activity log of events that come in via Kafka, and read models will be consumed from domain databases using read-oriented services.

The following reference diagram shows the logical construction of the application:

Figure 1.4 – A logical high-level reference architecture


The application uses the Producer-Consumer pattern to produce events, which are later consumed by components that need to know about them. You might also see this pattern referred to as Publish-Subscribe or pub-sub. The key point to take away from the use of this pattern is that any number of components could produce events containing relevant domain information, and any number of possible components could consume those events and act accordingly. We will dive into the producer-consumer pattern in much more detail in Chapter 2, The Producer-Consumer Pattern.
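
The decoupling described above can be sketched with a tiny in-memory bus. The sample application uses Kafka for this role, so everything here, from the event name to the bus itself, is purely illustrative:

```csharp
// A tiny in-memory bus illustrating the producer-consumer decoupling.
// The real application uses Kafka; these names are illustrative only.
using System;
using System.Collections.Generic;

public record TurnstileRotated(Guid TurnstileId, DateTimeOffset OccurredAt);

public class InMemoryEventBus
{
    private readonly List<Action<TurnstileRotated>> _consumers = new();

    // Any number of consumers can subscribe without producers knowing about them.
    public void Subscribe(Action<TurnstileRotated> handler) => _consumers.Add(handler);

    // Any producer can publish; every subscribed consumer reacts independently.
    public void Publish(TurnstileRotated evt)
    {
        foreach (var handler in _consumers)
        {
            handler(evt);
        }
    }
}
```

For example, a throughput counter and a maintenance monitor could both subscribe to the same rotation events without either knowing about the other.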

Digging down a layer, there are two technology architecture specifications that we will be using. One is for the device board inside the turnstile unit, which hosts the Equipment domain service. The other is the layout of the cloud components, as mentioned in the reference architecture in Figure 1.4. The high-level flow between the turnstile device and the cloud components is as follows:

  • On the turnstile, after completing one turn, a message is sent to the equipment service indicating a completed rotation.
  • The equipment service will send an event to the IoT hub with the results of the turnstile action.
  • Using Kafka Connect, the message will be forwarded to Kafka, implemented within the Kubernetes cluster using the Confluent Platform.
  • The event will be written to the appropriate stream.
  • Any relevant event handlers will process the event.
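
The equipment-to-Kafka step of the flow above might look roughly like the following, using the Confluent.Kafka client. The topic name and payload shape are assumptions made for this sketch:

```csharp
// A sketch of publishing a turnstile event to Kafka with the Confluent.Kafka
// client. The topic name and payload shape are assumptions.
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Confluent.Kafka;

public static class TurnstileEventPublisher
{
    // Pure helper that serializes the event payload (testable without a broker).
    public static string BuildPayload(Guid turnstileId, DateTimeOffset occurredAt) =>
        JsonSerializer.Serialize(new { turnstileId, occurredAt });

    public static async Task PublishAsync(string bootstrapServers, Guid turnstileId)
    {
        var config = new ProducerConfig { BootstrapServers = bootstrapServers };
        using var producer = new ProducerBuilder<Null, string>(config).Build();

        // The event lands on a topic the domain's event handlers subscribe to.
        await producer.ProduceAsync("equipment.turnstile.rotated",
            new Message<Null, string>
            {
                Value = BuildPayload(turnstileId, DateTimeOffset.UtcNow)
            });
    }
}
```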

A more detailed diagram of the technology architecture can be seen in Figure 1.5, where both the turnstile unit and the cloud components are represented:

Figure 1.5 – The technology architecture for turnstile-to-cloud communication


Next, we will move on to the design of the event sourcing technique.

Event sourcing

Event sourcing is a technique that allows an application to append data to a log or stream in order to capture a definitive list of changes related to an object. One of the benefits of using event sourcing versus traditional create, retrieve, update, and delete (CRUD) methods with relational databases is that the performance can be tuned and increased at the service level, as the overhead of using CRUD methods is not needed. Also, it facilitates implementing a separation of concerns and the single responsibility principle, as outlined by the SOLID development practices (https://en.wikipedia.org/wiki/SOLID).

Another benefit of using event sourcing is its ability to achieve high message throughput while maintaining a high degree of resiliency. Technologies such as Kafka inherently allow for multiple message brokers and multiple partitions within topics. This design ensures that at least one broker is available to communicate with, and multiple partitions within a topic allow multiple consumers and producers to access or write data in parallel. Kafka also replicates each partition across brokers according to the topic's replication factor, which provides data redundancy.

Event stores with streaming capabilities enable you to inspect point-in-time data and replay events to aid in debugging. For example, if an event has data that causes an error in the service code, you can go back to the point in time before that error was thrown and replay events to help identify potential bugs. Additionally, they can be used to perform "what if" testing. In some cases, normal use cases might have related edge cases that could either cause issues or introduce complexities that they were not originally designed for. Using "what if" testing allows you to go to a certain point in time and begin issuing new events that correlate to the edge case while also monitoring application performance and potential failures.
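
The append-and-replay idea can be sketched as follows; the event and store types are illustrative, and a production store such as Kafka would replace the in-memory list:

```csharp
// A minimal append-only event store sketch with point-in-time replay,
// illustrating the event sourcing technique; type names are illustrative.
using System;
using System.Collections.Generic;
using System.Linq;

public record RotationEvent(Guid TurnstileId, DateTimeOffset OccurredAt);

public class EventStore
{
    private readonly List<RotationEvent> _log = new();

    // Events are only ever appended, never updated or deleted.
    public void Append(RotationEvent evt) => _log.Add(evt);

    // Replay the log up to a point in time to rebuild state, which is what
    // enables point-in-time debugging and "what if" analysis.
    public int CountRotationsAsOf(Guid turnstileId, DateTimeOffset asOf) =>
        _log.Count(e => e.TurnstileId == turnstileId && e.OccurredAt <= asOf);
}
```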

Command-Query Responsibility Segregation

Command-Query Responsibility Segregation (CQRS) is a design pattern introduced by Greg Young that describes the logical and physical separation of concerns for reading and writing data. Normally, you will see specific functionality implemented to only allow writing to an event store (commands) or only allow reading from an event store (queries). This allows for the independent scaling of read and write operations depending on the needs of the application or the needs of a presentation layer, either in the form of business intelligence software, such as Power BI, or web applications accessible from desktop and mobile clients.

Details around how CQRS impacts the design of the application's domain services are covered in the next section. It's important to note that having that distinct separation of concerns is vital to leveraging the pattern effectively.
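
A bare-bones sketch of the command/query split might look like this; the handler names, and the shared dictionary standing in for the event store and read model, are assumptions made to keep the example self-contained:

```csharp
// A sketch of CQRS: writes go through a command handler, reads through a
// separate query handler over a read model. All names here are illustrative.
using System;
using System.Collections.Generic;

public record RegisterRotationCommand(Guid TurnstileId);

public class RotationCommandHandler
{
    private readonly Dictionary<Guid, int> _writeStore;

    public RotationCommandHandler(Dictionary<Guid, int> writeStore) =>
        _writeStore = writeStore;

    // Command side: mutates state and returns nothing to the caller.
    public void Handle(RegisterRotationCommand command) =>
        _writeStore[command.TurnstileId] =
            _writeStore.GetValueOrDefault(command.TurnstileId) + 1;
}

public class RotationQueryHandler
{
    private readonly IReadOnlyDictionary<Guid, int> _readModel;

    public RotationQueryHandler(IReadOnlyDictionary<Guid, int> readModel) =>
        _readModel = readModel;

    // Query side: reads state and never mutates it.
    public int GetRotationCount(Guid turnstileId) =>
        _readModel.GetValueOrDefault(turnstileId);
}
```

In the sample application, the read side would be backed by its own read-oriented service and database rather than sharing in-process state; sharing a dictionary here simply keeps the sketch self-contained.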

 

Reviewing the implementation details

After looking at the patterns that will support the business use cases for the application, we can now move on to the more specific implementation details. While some of the implementation constructs used in this solution will seem familiar, some technical details might be new to you. We will be exploring several topics in this section, which are intended to prepare you for the journey ahead.

The Visual Studio solution topology

The solutions within the source folder are broken up by domain, with a separate solution for each. Additionally, there is a solution for core platform needs, such as marker interfaces to identify value objects, entities, aggregates, and other objects. The intent is to allow each of the services to be run as an independent solution, which can eventually be moved into its own repository if so desired.

Each of the domains will have API services that can be communicated with. These projects in Visual Studio are not overly complex or even far from the general project template that is created when you create a new .NET Core API app. There are separate project types for queries, which read data, and commands, which affect data. Each domain will have a domain library, an infrastructure library, and test projects where applicable. Also, each domain will have a persistent consumer, in the form of an executable, that will run to enable listening for domain messages and handle those messages accordingly.

Solution folders will also be present to house Docker files, Docker Compose files, and any relevant Infrastructure-as-Code (IaC) or Configuration-as-Code (CaC) required to deploy the necessary components. Eventually, this will also be the location of the YAML file that defines the build and release pipeline.

Important note

The namespaces in each solution all start with a common acronym: MTAEDA. This stands for Mass Transit Authority Event-Driven Application.

Identity and Access Management considerations

Managing access to an application can be a daunting task. Many different options are available, from standalone implementations to platform-native solutions such as Azure Active Directory. Sometimes, the choice to go with an identity provider can be left with the application team; other times, it is driven by an enterprise strategy for authentication and authorization.

In this case, authentication will be handled at two layers. One layer is for transmitting events to applicable services, and the other layer is for users to log in and access management tools, such as dashboards and reports. As the dashboards and reports will be hosted in Power BI, Azure Active Directory will be used to manage the authentication and authorization of those assets. For communication to the gateway and subsequent domain services for read and write operations, certificates will be used to govern traffic from the equipment to the gateway.

Event structure and schema

To help simplify and streamline event constructs, we have selected the CloudEvents open specification as the baseline for all events being transmitted. This allows you to capture relevant metadata about the operation while still sending over the event data itself. Additionally, using the CloudEvents schema enables you to potentially leverage platform tooling such as Azure Log Analytics and Azure Monitor. Of course, if your cloud target is different, there might be other ways the event schema could be useful. However, in this book, we will focus on the Azure cloud platform.

The CloudEvents schema is rather simple. There are fields for Data, Subject, Type, Source, Time, and DataContentType. They do not all require values; however, we will be using them to help better define the intent and content of each event we raise. It is entirely possible to not use this construct and still use the domains and domain services. The primary reason this design decision was made was to ensure consistency in the message format, along with a capacity to understand metadata associated with the event itself. Table 1.2 illustrates the CloudEvent fields and how they will be used to contain pertinent information when an event is raised:

Table 1.2 – The CloudEvent schema and field mappings

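
To illustrate, a CloudEvents-shaped envelope for a turnstile rotation might be modeled as follows. The specific Type and Source values are assumptions for this sketch, not the application's actual naming scheme:

```csharp
// A sketch of a CloudEvents-shaped envelope for a turnstile rotation event.
// The Type and Source values are illustrative, not the book's actual scheme.
using System;

public record CloudEvent(
    string Type,            // event classification, e.g. a rotation occurred
    string Source,          // the producing service or device
    string Subject,         // the specific turnstile the event concerns
    DateTimeOffset Time,    // when the event occurred
    string DataContentType, // e.g. "application/json"
    string Data);           // the serialized event payload itself

public static class TurnstileEvents
{
    public static CloudEvent Rotated(Guid turnstileId) => new(
        Type: "mtaeda.equipment.turnstile.rotated",
        Source: "/equipment/turnstile",
        Subject: turnstileId.ToString(),
        Time: DateTimeOffset.UtcNow,
        DataContentType: "application/json",
        Data: $"{{\"turnstileId\":\"{turnstileId}\"}}");
}
```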

Local development and debugging

For local development, using Visual Studio is the easiest option to ensure any prerequisites for the solution can be installed and managed. Additionally, you can use Visual Studio Code, or even use GitHub CodeSpaces, to leverage a fully encapsulated development environment in the cloud.

If you are using Windows as your primary operating system, you will likely also leverage the Windows Subsystem for Linux (WSL), which allows Linux-native builds and tooling to be run directly from Windows. In the event that any SDKs are missing, Visual Studio will alert you and allow you to install them by clicking on a link next to the message.

There are a couple of different options that you can use to debug the application locally:

  • Start debugging directly from Visual Studio (F5).
  • Run the application using docker compose and attach to the Docker processes via Visual Studio.
  • Deploy the application to Kubernetes and attach it to the application using the Kubernetes extension in Visual Studio.

New .NET 7 features

With the rollout of .NET 7, many improvements have been made to the underlying framework, along with language-specific updates. In this application, we will be taking advantage of some of the latest updates from a framework and language perspective. Language-wise, minimal APIs and the asynchronous streaming of JSON data will come in handy for simplifying service implementations, and the ability to leverage Hot Reload will allow for faster and more meaningful debugging during the development life cycle.

Minimal APIs

One of the more exciting features in .NET 7 is minimal APIs. These allow you to develop an ASP.NET Core Web API app with very little code. The .NET team has made common using directives global to the project, meaning that directives such as using System are assumed to be required by all files within a Web API project and are not required in each file as a result. Additionally, the Startup.cs file is no longer required, as you can configure the app directly from the main Program.cs file. The following example code illustrates a code block that is valid and will create an ASP.NET Core Web API app when it is compiled:

var app = WebApplication.Create(args);
app.MapGet("/api/testing", () => Results.Text("Testing"));
app.Run();

For a very simple API, you can map Get, Post, Put, Patch, and Delete operations directly in the Program.cs file, and they will be added to the routes for the Web API app. Additionally, you can call app.MapControllers() if you wish to keep controller code in separate files, as found in traditional Web API project layouts. On startup, the application will look for items derived from the Controller base class. If you choose this option, you will need to create the app via the WebApplication.CreateBuilder() method and register controllers with the builder's services, as demonstrated in the following code block:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
var app = builder.Build();
app.MapControllers();
app.Run();

JSON transcoding for gRPC

While support for gRPC services was originally added in .NET 6, further improvements have been introduced to enhance the experience. Previously, in order to connect to a gRPC service for testing purposes, you had to build a client for that service and interact with it via the client. With the addition of JSON transcoding support, you can now launch a Swagger page that contains all of the available methods you are exposing via ProtoBuf, and perform tests against them. This doesn't replace the need to have a client built for communication purposes when deployed, but it does help the experience of testing locally.
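
Assuming a protobuf-generated service class (GreeterService here is a stand-in for one of the domain services), enabling JSON transcoding plus Swagger in Program.cs might look roughly like this fragment:

```csharp
// Sketch of a Program.cs fragment enabling gRPC JSON transcoding and Swagger.
// GreeterService stands in for a protobuf-generated service implementation,
// so this fragment assumes the generated types exist in the project.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddGrpc().AddJsonTranscoding(); // Microsoft.AspNetCore.Grpc.JsonTranscoding
builder.Services.AddGrpcSwagger();               // Microsoft.AspNetCore.Grpc.Swagger
builder.Services.AddSwaggerGen();

var app = builder.Build();
app.UseSwagger();
app.UseSwaggerUI();
app.MapGrpcService<GreeterService>();
app.Run();
```

The transcoding package also requires HTTP annotations in the .proto file to map gRPC methods onto REST routes.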

Observability

With .NET 7, the integration with OpenTelemetry allows developers to leverage out-of-the-box instrumentation as well as telemetry exporters for popular site reliability platforms such as Prometheus and Jaeger. OpenTelemetry is a platform-agnostic framework that enables developers to expose both stack metrics (such as ASP.NET Core instrumentation) and custom metrics based on counters, histograms, and meters. While there is active work being done on these libraries, versions are available that can be installed via NuGet and make adding baseline telemetry capture straightforward.
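
A custom counter for turnstile rotations might be exposed as follows using the built-in System.Diagnostics.Metrics API, which OpenTelemetry exporters can consume; the meter and counter names here are assumptions:

```csharp
// A sketch of exposing a custom metric through System.Diagnostics.Metrics,
// the built-in API that OpenTelemetry exporters consume. The meter and
// counter names here are assumptions.
using System;
using System.Collections.Generic;
using System.Diagnostics.Metrics;

public static class TurnstileMetrics
{
    private static readonly Meter Meter = new("MTAEDA.Equipment", "1.0");

    // Counter incremented once per completed turnstile rotation, tagged with
    // the station so throughput can be aggregated per station.
    private static readonly Counter<long> Rotations =
        Meter.CreateCounter<long>("turnstile.rotations");

    public static void RecordRotation(string stationName) =>
        Rotations.Add(1, new KeyValuePair<string, object?>("station", stationName));
}
```

An OpenTelemetry meter provider configured to listen to the "MTAEDA.Equipment" meter could then export this counter to Prometheus or another backend.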

Hot reload

One piece of functionality that has been present in other web development stacks for years, but not in Visual Studio itself, is the option to hot reload while debugging. For example, if you changed a line of code in a controller, you previously had to stop debugging, change the code, and then restart debugging. With Hot Reload support in .NET 7, this is no longer an obstacle. In Visual Studio 2022, there is now a dedicated icon that invokes Hot Reload once a change has been detected in the underlying source code.

 

Summary

This chapter provided an overview of the sample transit application, including the underlying business drivers, architectures, and implementation patterns. We have taken a quick lap around the domain model along with aggregates, entities, and value objects. Additionally, we have covered some key areas within the application's architecture, along with some specific implementation details, including new features in .NET 7 that will make development and debugging easier for us. All of these core topics will be covered in more detail in the coming chapters.

The next chapter takes a look at the producer-consumer pattern, an essential underpinning of the application and a key reason event-driven systems work at scale. We will look at the underlying usage of this design pattern, how it benefits applications operating at scale, how it is implemented, and how to validate that communications are routed and sent correctly.

 

Questions

Answer the following questions to test your knowledge of this chapter:

  1. What potential insights can be gained when examining the business perspective behind an application?
  2. Are there other domains that you can identify for the application that are not already listed in the primary domain model?
  3. Are any of the aggregates misrepresented? Or do they contain information that might be irrelevant within the scope of the domain?
  4. How is event sourcing different from using a relational database or NoSQL database to store and retrieve application data?
  5. Is there an advantage to separating read operations from write operations?
  6. What benefits can be gained by separating domain solutions from the overall application solution? Are there potential drawbacks to separating the domain solutions?
  7. What other authentication and authorization mechanisms are available to secure access to reporting data and/or the write services that send data to Kafka?
  8. Is using a standard schema for events, such as CloudEvents, unnecessarily complicating the overall design of the application? Why or why not?
  9. What are some alternative implementations for these services aside from Docker or Kubernetes?
 

Further reading

About the Authors
  • Joshua Garverick

Joshua Garverick is a Microsoft MVP (Most Valuable Professional) and a seasoned IT professional with nearly two decades of enterprise experience in several large industries (finance, healthcare, transportation, and logistics). He specializes in Azure application and platform architecture and is currently involved with application modernization and digital transformation projects. Josh is a Microsoft Certified Solutions Expert (MCSE) in Cloud Platform and Infrastructure, a certified Microsoft Azure Solutions Architect Expert, and a Microsoft DevOps Engineer Expert.

  • Omar Dean McIver

Omar Dean McIver is an MCT (Microsoft Certified Trainer) with more than 12 years of experience developing enterprise-grade applications in Oil & Gas and other regulated industries. He specializes in cloud-native development and application modernization. He is a certified Azure Solutions Architect and FinOps Practitioner. His Udemy course on Practical OAuth, OpenID, and JWT in C# .NET Core has a 4.5-star rating. Omar continues to stay at the forefront of cloud-native development with a keen focus on cost optimization, performance tuning, and highly scalable microservice architectures.
