Hands-On Microservices with C#: Designing a real-world, enterprise-grade microservice ecosystem with the efficiency of C# 7

By Matt Cole

eBook | Jun 2018 | 254 pages | 1st Edition
eBook zł59.99 | Paperback zł197.99

What do you get with eBook?

  • Instant access to your Digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want

Hands-On Microservices with C#

Let's Talk Microservices, Messages, and Tools

Microservices are all the rage. They are talked about everywhere, and it seems like everyone wants them nowadays. There are probably as many implementations of them as there are words in this paragraph, and we'll add yet another one into the mix. But this one comes from several implementations and years of experience developing enterprise-grade microservice ecosystems for big clients. Now, I'm letting you in on the same techniques and best practices I've been using in the real world, and that is the logic behind this book. I'm going to show you how to develop a powerful, flexible, and scalable microservice ecosystem, and hopefully, along the way, spark ideas for you to go off on your own endeavors and create even more. And we're not talking about some skimpy little web page or a single service; I've packed this book with more microservices than you can shake a stick at, and I am sure your ideas will take shape and you will enhance this ecosystem to meet your needs.

In this chapter, we will cover:

  • What a microservice is
  • What a microservice architecture is
  • Pros and cons of a microservice
  • Installation and an overview of Topshelf
  • Installation and an overview of RabbitMQ
  • Installation and an overview of EasyNetQ
  • Installation and an overview of Autofac
  • Installation and an overview of Quartz
  • Installation and an overview of Noda Time

What is a microservice?

OK, let's go ahead and get this one out of the way. Let's start this book off by talking a bit about exactly what a microservice is, to us at least. Here is a simplistic visual diagram of what we're going to accomplish in this book. This diagram says it all, and if it looks too confusing, this might be a good place to stop reading!

Let's next agree to define a microservice as an independently deployable and developable, small, modular service that addresses a specific and unique business process or problem, and communicates via a lightweight, event-based, asynchronous, message-based architecture. That's a lot of words, I know, but I promise that by the end of this book the approach will make perfect sense to you. Basically, what we are talking about here is the central Messages component in the previous diagram.

I know that some of you might be asking yourselves: what's the difference between a service and a microservice? That is a very good question. Lord knows I've had some very heated discussions with non-believers over the years, and no doubt you will too. So, let's talk a bit about what a Service-Oriented Architecture (SOA) is.

Service-Oriented Architecture

The SOA is a software design paradigm where services are the central focus. For the purposes of discussion and clarity, let's define a service as a discrete unit of functionality that can be accessed remotely and acted upon independently. The characteristics of a service in terms of a SOA are:

  • It represents a specific business function or purpose (hopefully)
  • It is self-contained
  • It can and should function as a black box
  • It may also be comprised of other associated services
  • There is a hard and dedicated contract for each service (usually)

Some folks like to consider a microservice to be nothing more than a more formalized and refined version of SOA, and perhaps in some ways that is the case. Many people believe that SOA just never really formalized, and that microservices provide the missing formality. Although I am sure an argument could be made for that being true, microservices are usually designed differently: they follow a response-actor paradigm, they usually use smaller or siloed databases (when permissible), and they favor smaller and faster messaging protocols over things like a giant Enterprise Service Bus (ESB).

Let's take a moment and talk about the microservice architecture itself.

Microservice architecture

Just as there is no one set definition for a microservice, there is also no one set architecture. What we will do instead is list some of the characteristics that we consider a microservice architecture to have. That list looks something like this:

  • Each microservice can be deployed, developed, maintained, and then redeployed independently.
  • Each microservice focuses on a specific business purpose and goal and is non-monolithic.
  • Each microservice receives requests, processes them, and then may or may not send a response.
  • Microservices practice decentralized governance and in some cases, when permissible, decentralized data management.
  • Perhaps most importantly, at least in my mind anyways, I always design a microservice around failure. In fact, they are designed to fail. By following this paradigm, you will always be able to handle failures gracefully and not allow one failing microservice to negatively impact the entire ecosystem. By negatively impact, I mean a state where all other microservices are throwing exceptions due to the one errant microservice. Every microservice needs to be able to gracefully handle not being able to complete its task.
  • Finally, let's stay flexible and state that our microservice architecture is free to remain fluid and evolutionary.
  • No microservice talks directly to another microservice. Communication is always done in the form of messages.

With all that in mind, we've now created our definition of a microservice, its architecture, and its characteristics. Feel free to adjust these as you or your situation sees fit. Remember, as C# developers we don't always have the luxury, save for truly greenfield projects, of dictating all the terms. Do the best you can with the room you have to operate in. For example, chances are you will have to work with the corporate database and its rules rather than the small, siloed database described earlier. It's still a microservice, so go for it!

Pros and cons

Let's run down some pros and cons of a microservice architecture.

Pros

Here are a few of the positive points of a microservice architecture:

  • They give developers the freedom to independently architect, develop, and deploy services
  • Microservices can be developed in different languages if permitted
  • Easier integration and deployment than traditional monolithic applications and services
  • Microservices are organized around specific business capabilities
  • When change is required, only the specific microservice needs to be changed and redeployed
  • Enhanced fault isolation
  • They are easier to scale
  • Integration to external services is made easier

Cons

Here are a few negatives to consider with a microservice architecture. Please keep in mind that negative does not mean bad; these are simply points that may affect your decision:

  • Testing can be more involved
  • Duplication of effort and code can occur more often
  • Product management could become more complicated
  • Developers may have more work when it comes to communications infrastructure
  • Memory consumption may increase

Case study

Let's take a look at someone who took a monolithic application and broke it down into components and created a microservice-based system. The following is the story of Parkster; I think you will enjoy it!

A growing digital parking service from Sweden is currently breaking down its monolithic application into microservices. Follow their story!

Portable parking meter in your pocket

Founded in 2010, Parkster has quickly become one of the fastest growing digital parking services in Sweden. Their vision is to make it quick and easy for you to pay your parking fees with your smartphone, via the Parkster app, SMS, or voicemail. They want to see a world where you don't need to guesstimate the required parking time or stand in line waiting at a busy parking meter. It should be easy to pay for parking, for everyone, everywhere. Moreover, Parkster doesn't want the customer to pay more for using tools of the future, which is why there are no extra fees when you use Parkster's app to park:

Breaking up a tightly coupled monolithic application

Like many other companies, Parkster started out with a monolithic architecture. They wanted to have their business model proven before they went further. A monolithic application is where the whole application is built as a single unit. All code for a system is in a single codebase that is compiled together and produces a single system.

Having one codebase seemed the easiest and quickest solution at the time, and solved their core business problems, which included connecting devices with people, parking zones, billing, and payments. A few years later, they decided to break up the monolith into multiple small codebases, which they did through multiple microservices communicating via message queues.

Parkster tried out their parking service for the first time in Lund, Sweden. After that, they rapidly expanded into more cities and introduced new features. The core model grew, and components became tightly coupled:

Deploying the codebase meant deploying everything at once. One big codebase made it hard to fix bugs and to add new features. Deep knowledge of the system was also required before attempting even a single small code change; no one wants to add new code that could disrupt operations in some unforeseen way. One day they had enough: the application had to be decoupled. "The biggest reason we moved from the monolith to microservices was decoupling."

Application decoupling

Parkster's move from a monolithic architecture to a microservice architecture is a work in progress. They are breaking up their software into a collection of small, isolated services, where each service can be deployed and scaled as needed, independently of the other services. Their system today has about 15-20 microservices, and the core app is written in Java.

They are already enjoying the change: "It's very nice to focus on a specific, limited part of the system instead of having to think about the entire system every time you do something new or make changes. As we grow, I think we will benefit even more from this change," said Anders Davoust, a developer at Parkster.

Breaking down their codebase has also given the software developers the freedom to use whatever technologies make sense for a particular service. Different parts of the application can evolve independently, whether they are written in different languages and/or maintained by separate developer teams. For example, one part of the system uses MongoDB and another part uses MySQL. Most code is written in Java, but parts of the system are written in Clojure. Parkster uses the open source system Kubernetes as its container orchestration platform.

Resiliency—the capacity to recover quickly from difficulties

Applications might be delayed or crash sometimes; it happens. It could be due to timeouts, or simply errors in your code that affect the whole application.

Another thing Parkster really likes about their system today is that it can remain operational even if part of the backend processing is delayed or broken. The whole system will not break just because one small part of it is not operating as it should. By breaking up the system into autonomous components, Parkster inherently created more resiliency.

Message queues, RabbitMQ, and CloudAMQP

Parkster is separating different components via message queues. A message queue may force the receiving application to confirm that it has completed a task and that it is safe to remove the task from the queue. The message will just stay in the queue if anything fails in the receiving application. A message queue provides temporary message storage when the destination program is busy or not connected.

The message broker used between all microservices in Parkster is RabbitMQ. "It was a simple choice: we had used RabbitMQ in other projects before we built Parkster, and we had good experience with it." The reason they went for CloudAMQP, a hosted RabbitMQ solution, was that they felt CloudAMQP had far more knowledge about managing RabbitMQ than they had. They simply wanted to put their focus on the product instead of spending days configuring and handling server setups. CloudAMQP has been at the forefront of RabbitMQ server configuration and optimization since 2012.

I asked what they like about CloudAMQP, and I received a quick answer: "I love the support that CloudAMQP gives us; always quick feedback and good help."

Now, Parkster's goal is to get rid of the old monolithic repo entirely, and focus on a new era where the whole system is built upon microservices.

Messaging concepts

The following is a list of concepts that relate to messaging:

  • Producer: Application that sends the messages.
  • Consumer: Application that receives the messages.
  • Queue: Buffer that stores messages.
  • Message: Information that is sent from the producer to a consumer through RabbitMQ.
  • Connection: A connection is a TCP connection between your application and the RabbitMQ broker.
  • Channel: A channel is a virtual connection inside a connection. When you are publishing or consuming messages from a queue - it's all done over a channel.
  • Exchange: Receives messages from producers and pushes them to queues depending on rules defined by the exchange type. In order to receive messages, a queue needs to be bound to at least one exchange.
  • Binding: A binding is a link between a queue and an exchange.
  • Routing key: The routing key is a key that the exchange looks at to decide how to route the message to queues. The routing key is like an address for the message.
  • Advanced Message Queuing Protocol (AMQP): AMQP is the protocol used by RabbitMQ for messaging.
  • Users: It is possible to connect to RabbitMQ with a given username and password. Every user can be assigned permissions such as rights to read, write, and configure privileges within the instance. Users can also be assigned permissions to specific virtual hosts.
  • Vhost: A virtual host provides a way to segregate applications using the same RabbitMQ instance. Different users can have different access privileges to different vhosts and queues, and exchanges can be created so they only exist in one vhost.

Message queues

Throughout this book, we will be dealing a lot with message queues, and you will see them everywhere in the software we are developing. Message queues are how our ecosystem communicates, maintains separation of concerns, and allows for fluid and fast development. With that being said, before we get too far into anything else, let's spend some time discussing exactly what message queues are and what they do.

Let's think about the functionality of a message queue. Queues are two-sided components: messages enter from one side and exit from the other. Thus, each message queue can establish connections on both sides; on the input side, a queue fetches messages from one or more exchanges, while on the output side, the queue can be connected to one or more consumers. From the queue's point of view, being connected to more than one exchange with the same routing key is transparent, since the only thing that concerns the queue itself is the incoming messages:

Put another way...

The basic architecture of a message queue is simple. Client applications called producers create messages and deliver them to the broker (the message queue). Other applications, called consumers, connect to the queue and subscribe to the messages to be processed. An application can be a producer, a consumer, or both. Messages placed onto the queue are stored until a consumer retrieves them:

And, breaking that down even further:

The preceding diagram illustrates the following process:

  1. The user sends a PDF creation request to the web application
  2. The web application (the producer) sends a message to RabbitMQ, including data from the request, such as name and email
  3. An exchange accepts the messages from a producer application and routes them to correct message queues for PDF creation
  4. The PDF processing worker (the consumer) receives the task and starts the processing of the PDF
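
To make that flow concrete, here is a minimal sketch of steps 1 and 2 using EasyNetQ, the messaging library we will use throughout this book. The PdfCreateRequestMessage type and its properties are hypothetical, invented purely for this illustration; the synchronous Publish call matches the EasyNetQ versions current at the time of writing (newer versions expose the same operation asynchronously):

using System;
using EasyNetQ;

// Hypothetical message carrying the data from the user's request
public class PdfCreateRequestMessage
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public class WebApplication
{
    public static void Main()
    {
        // The web application is the producer
        using (var bus = RabbitHutch.CreateBus("host=localhost"))
        {
            // Step 2: hand the request data to RabbitMQ; an exchange then
            // routes it to the PDF creation queue for the consumer (steps 3 and 4)
            bus.Publish(new PdfCreateRequestMessage
            {
                Name = "Jane Doe",
                Email = "jane@example.com"
            });
        }
    }
}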

Let's now look at some of the different message queue configurations that we can use. For now, let's think of a queue as an ordered collection or list of messages. In the diagrams that follow, we're going to use P to represent a producer, C to represent a consumer, and the red rectangles to represent a queue.

Here's our legend:

Producer consumer queue

Let's start by taking the simplest of all possible scenarios. We have a single producer, which sends one or more messages (each message is one red block) to a single consumer, such as in the following:

Our next step up the difficulty ladder would be to have a single producer publish one or more messages to multiple consumers, such as in the following diagram. This is distributing tasks (work) among different workers, also sometimes referred to as the competing consumers pattern. This means that each consumer will take one or more messages. Depending upon how the message queues are set up, the consumers may each receive a copy of every message, or alternate in their reception based upon availability. So, in one scenario, consumer one may take ten messages, consumer two may take five, then consumer one takes another ten. Alternatively, the messages that consumer one takes, consumer two does not get and vice versa:

Next, we have the ever so famous publish/subscribe paradigm, where messages are sent to various consumers at once. Each consumer will get a copy of the message, unlike the scenario shown previously where consumers may have to compete for each message:

Our next scenario provides us with the ability for a client to selectively decide which message(s) they are interested in, and only receive those. Using a direct exchange, the consumers are able to ask for the type of message that they wish to receive:

If we were to expand this direct exchange map out a little bit, here's what our system might look like:

A direct exchange delivers messages to queues based on a message routing key. The routing key is a message attribute added into the message header by the producer. The routing key can be seen as an address that the exchange is using to decide how to route the message. A message goes to the queue(s) whose binding key exactly matches the routing key of the message.

The direct exchange type is useful when you would like to distinguish between messages published to the same exchange using a simple string identifier.

Next, as you will see me use quite heavily in this book, our consumers can receive selected messages based upon patterns (topics) with what is known as a topic queue. Users subscribe to the topic(s) that they wish to receive, and those messages will be sent to them. Note that this is not a competing consumers pattern where only one microservice will receive the message. Any microservice that is subscribed will receive the selected messages:

If we expand this one out a little bit, we can see what our system might look like:

The topic exchanges route messages to queues based on wildcard matches between the routing key and routing pattern specified by the queue binding. Messages are routed to one or many queues based on a matching between a message routing key and this pattern. The routing key must be a list of words, delimited by a period (.). The routing patterns may contain an asterisk (*) to match a word in a specific position of the routing key (for example, a routing pattern of agreements.*.*.b.* will only match routing keys where the first word is agreements and the fourth word is b). A pound symbol (#) indicates a match on zero or more words (for example, a routing pattern of agreements.eu.berlin.# matches any routing keys beginning with agreements.eu.berlin).

The consumers indicate which topics they are interested in (such as subscribing to a feed for an individual tag). The consumer creates a queue and sets up a binding with a given routing pattern to the exchange. All messages with a routing key that match the routing pattern will be routed to the queue and stay there until the consumer consumes the message.

Finally, we have the request/reply pattern. This scenario will have a client subscribing to a message, but rather than consume the message and end there, a reply message is required, usually containing the status result of the operation that took place. The loop and chain of custody is not complete until the final response is received and acknowledged:
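
EasyNetQ models this pattern with its Request/Respond API. The following is a minimal sketch; the MyRequest and MyResponse types are hypothetical, and the synchronous Request call reflects the EasyNetQ versions current at the time of writing:

public class MyRequest
{
    public string OrderId { get; set; }
}

public class MyResponse
{
    public string Status { get; set; }
}

// The responder subscribes to MyRequest and returns a MyResponse for each one
bus.Respond<MyRequest, MyResponse>(request =>
    new MyResponse { Status = "Processed " + request.OrderId });

// The requester publishes MyRequest and blocks until the reply arrives,
// completing the loop and chain of custody described above
var response = bus.Request<MyRequest, MyResponse>(new MyRequest { OrderId = "42" });
Console.WriteLine(response.Status);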

Now that you know all you need to know about message queues and how they work, let's fill in our initial visual diagram a bit more so it's a bit more reflective of what we are doing, what we hope to accomplish, and how we expect our ecosystem to function. Although we will primarily be focusing on topic exchanges, we may occasionally switch to fanouts, direct, and others. In the end, the visual that we are after for our ecosystem is this:

Creating common messages

Let's start with a very simple message, the deployment messages:

public class DeploymentStartMessage
{
    public DateTime Date { get; set; }
}

public class DeploymentStopMessage
{
    public DateTime Date { get; set; }
}

As you can see, they are not overly complicated. What will happen is that we will have a DeploymentMonitor microservice. As soon as our deployment kicks off, we will send a DeploymentStartMessage to the message queue. Our microservice manager will receive the message and immediately suspend tracking each microservice's health until the DeploymentStopMessage is received.
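
A rough sketch of that flow is shown below. The DeploymentMonitor class and its method names are illustrative only, not the book's actual implementation; the point is simply that the start and stop messages bracket the deployment:

// Assumes using System; and using EasyNetQ;
public class DeploymentMonitor
{
    private readonly IBus _bus;

    public DeploymentMonitor(IBus bus)
    {
        _bus = bus;
    }

    // Called when the deployment kicks off; the microservice manager reacts
    // by pausing health tracking
    public void DeploymentStarted()
    {
        _bus.Publish(new DeploymentStartMessage { Date = DateTime.Now });
    }

    // Called when the deployment finishes; health tracking resumes
    public void DeploymentFinished()
    {
        _bus.Publish(new DeploymentStopMessage { Date = DateTime.Now });
    }
}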

Always keep all your messages in the same namespace. This makes it much easier for EasyNetQ and its type name resolver to know where the messages are coming from. It also gives you a centralized location for all your messages and, lastly, prevents a lot of weird-looking exchange and queue names!

Message subscriptions

Now that we have shown you what a deployment message looks like, let's discuss what happens when you subscribe to a message.

An EasyNetQ subscriber subscribes to a message type (the .NET type of the message class). Once the subscription to a type has been set up by calling the Subscribe method, a persistent queue will be created on the RabbitMQ broker and any messages of that type will be placed on the queue. RabbitMQ will send any messages from the queue to the subscriber whenever it is connected.

To subscribe to a message, we need to give EasyNetQ an action to perform whenever a message arrives. We do this by passing the Subscribe method a delegate such as this:

bus.Subscribe<MyMessage>("my_subscription_id", msg => Console.WriteLine(msg.Text));

Now, every time an instance of MyMessage is published, EasyNetQ will call our delegate and print the message's Text property to the console.

The subscription ID that you pass to Subscribe is important

EasyNetQ will create a unique queue on the RabbitMQ broker for each unique combination of message type and subscription ID. Each call to Subscribe creates a new queue consumer. If you call the Subscribe method two times with the same message type and subscription ID, you will create two consumers consuming from the same queue. RabbitMQ will then round-robin successive messages to each consumer in turn. This is great for scaling and work-sharing. Say you've created a service that processes a particular message, but it's getting overloaded with work. Simply start a new instance of that service (on the same machine, or a different one) and without having to configure anything, you get automatic scaling.

If you call the Subscribe method two times with different subscription IDs but the same message type, you will create two queues, each with its own consumer. A copy of each message of the given type will be routed to each queue, so each consumer will get all the messages (of that type). This is great if you've got several different services that all care about the same message type.
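
The two behaviors look like this in code. The subscription IDs used here (invoice_processor and audit_log) are made up for the example:

// Same message type, same subscription ID: both consumers share one queue
// and RabbitMQ round-robins messages between them (work-sharing)
bus.Subscribe<MyMessage>("invoice_processor", msg => Console.WriteLine("Worker A: " + msg.Text));
bus.Subscribe<MyMessage>("invoice_processor", msg => Console.WriteLine("Worker B: " + msg.Text));

// Same message type, different subscription ID: a second queue is created
// and it receives its own copy of every MyMessage
bus.Subscribe<MyMessage>("audit_log", msg => Console.WriteLine("Audit: " + msg.Text));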

Considerations when writing the subscribe callback delegate

As messages are received from queues subscribed to via EasyNetQ, they are placed on an in-memory queue. A single thread sits in a loop taking messages from the queue and calling their action delegates. Since the delegates are processed one at a time on a single thread, you should avoid long-running synchronous IO operations. Return control from the delegate as soon as possible.

Using SubscribeAsync

SubscribeAsync allows your subscriber delegate to return a Task immediately and then asynchronously execute long-running IO operations. Once the long-running operation is complete, simply complete the Task.
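
For example, a handler that performs a long-running HTTP call might look like the following sketch. The URL and the HttpClient call are just stand-ins for any long-running IO, and exact SubscribeAsync overloads vary slightly between EasyNetQ versions:

bus.SubscribeAsync<MyMessage>("my_subscription_id", async msg =>
{
    using (var client = new System.Net.Http.HttpClient())
    {
        // The delegate returns a Task immediately, so EasyNetQ's dispatch
        // thread is not blocked while the IO completes
        await client.PostAsync("http://example.com/messages",
            new System.Net.Http.StringContent(msg.Text));
    }
});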

Canceling subscriptions

All the subscribe methods return an ISubscriptionResult. It contains properties that describe the IExchange and IQueue used by the underlying IConsumer; these can be further manipulated using the advanced API IAdvancedBus if required.

You can cancel a subscriber at any time by calling Dispose on the ISubscriptionResult instance or on its ConsumerCancellation property:

var subscriptionResult = bus.Subscribe<MyMessage>("sub_id", MyHandler);
subscriptionResult.Dispose();

This will stop EasyNetQ consuming from the queue and close the consumer's channel. It is equivalent to calling subscriptionResult.ConsumerCancellation.Dispose();

Note that disposing of the IBus or IAdvancedBus instance will also cancel all consumers and close the connection to RabbitMQ.

Versioning messages

Even though I can honestly say that I have developed interfaces that could accommodate any change made to both sides without ever modifying the interface, most people don't design to that extreme. There will, more likely than not, come a time where you will have to change a message to accommodate a new feature or request, and so on. Now, we get into the issue of message versioning.

To enable support for versioned messages, we need to ensure the required components are configured. The simplest way to achieve this is as follows:

var bus = RabbitHutch.CreateBus("host=localhost",
    services => services.EnableMessageVersioning());

Once support for versioned messages is enabled, we must explicitly opt in any messages we want to be treated as versioned. So, as an example, let's say we have a message defined called MyMessage. As you can see in the following code, it is not versioned, and it will be treated the same way as any other message when it is published:

public class MyMessage
{
    public string Text { get; set; }
}

The next message that you see will be versioned, and ultimately it will find its way to both the V2 and previous subscribers by using the ISupersede interface:

public class MyMessageV2 : MyMessage, ISupersede<MyMessage>
{
    public int Number { get; set; }
}

How does message versioning work?

Let's stop for a second and think about what's happening here. When we publish a message, EasyNetQ usually creates an exchange for the message type and publishes the message to that exchange:

Subscribers create queues that are bound to the exchange and therefore receive any messages published to it:

With message versioning enabled, EasyNetQ will create an exchange for each message type in the version hierarchy and bind those exchanges together. When you publish the MyMessageV2 message, it will be sent to the MyMessageV2 exchange, which will automatically forward it to the MyMessage exchange.

When messages are serialized, EasyNetQ stores the message type name in the type property of the message properties. This metadata is sent along with your message to any subscribers, who can then use it to deserialize the message.

With message versioning enabled, EasyNetQ will also store all the superseded message types in a header in the message properties. Subscribers will use this to find the first available type that the message can be deserialized into, meaning that even if an endpoint does not have the latest version of a message, so long as it has a version, it can be deserialized and handled.
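
Putting that together, and assuming the bus was created with EnableMessageVersioning() as shown earlier, a subscriber that only knows about the original MyMessage type will still receive messages published as MyMessageV2. This is a sketch of the idea rather than a complete program:

// Subscriber compiled against the original message type only
bus.Subscribe<MyMessage>("v1_subscriber", msg => Console.WriteLine(msg.Text));

// Publishing the newer version: the message goes to the MyMessageV2 exchange,
// which forwards it to the MyMessage exchange, so the V1 subscriber gets it too
bus.Publish(new MyMessageV2 { Text = "Hello", Number = 42 });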

Message versioning guidance

Here are a few tips for message versioning:

  • If the change cannot be implemented by extending the original message type, then it is not a new version of the message; it is a new message type
  • If you are unsure, prefer to create a new message type rather than version an existing message
  • Versioned messages should not be used with request/response as the message types are part of the request/response contract and Request<V1,Response> is not the same as Request<V2,Response>, even if V2 extends V1 (that is, public class V2 : V1 {})
  • Versioned messages should not be used with send/receive as this is targeted sending and therefore there is a declared dependency between the sender and the receiver

Message publishing

Messages are not published directly to any specific message queue. Instead, the producer sends messages to an exchange. Exchanges are message routing agents, defined per virtual host within RabbitMQ. An exchange is responsible for the routing of the messages to the different queues. An exchange accepts messages from the producer application and routes them to message queues with the help of header attributes, bindings, and routing keys.

A binding is a link that you set up to bind a queue to an exchange.

The routing key is a message attribute. The exchange might look at this key when deciding how to route the message to queues (depending on exchange type).

Exchanges, connections, and queues can be configured with parameters such as durable, temporary, and auto delete upon creation. Durable exchanges will survive server restarts and will last until they are explicitly deleted. Temporary exchanges exist until RabbitMQ is shut down. Auto-deleted exchanges are removed once the last bound object is unbound from the exchange.
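
With EasyNetQ, these parameters are exposed through the advanced API. The following is a minimal sketch; parameter names and defaults may differ slightly between EasyNetQ versions, and the exchange name is only an example:

using EasyNetQ;
using EasyNetQ.Topology;

var bus = RabbitHutch.CreateBus("host=localhost");

// Declare a durable topic exchange: it survives broker restarts and is not
// removed when the last bound object is unbound
var exchange = bus.Advanced.ExchangeDeclare(
    "EvolvedAI",
    ExchangeType.Topic,
    durable: true,
    autoDelete: false);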

As we begin to explore more about messages, I want to give a big shoutout to Lovisa Johansson at CloudAMQP for permission to reprint information she and others have done an excellent job at obtaining. Everyone should visit CloudAMQP; it is an infinite source of wisdom when it comes to RabbitMQ.

Message flow

The following is the message flow in a standard RabbitMQ configuration:

  1. The producer publishes a message to the exchange.
  2. The exchange receives the message and is now responsible for the routing of the message.
  3. A binding has to be set up between the queue and the exchange. In this case, we have bindings to two different queues from the exchange. The exchange routes the message into the queues.
  4. The messages stay in the queue until they are handled by a consumer.
  5. The consumer handles the message:

Exchanges

Messages are not published directly to a queue; instead, the producer sends messages to an exchange. An exchange is responsible for the routing of the messages to the different queues. An exchange accepts messages from the producer application and routes them to message queues with the help of bindings and routing keys. A binding is a link between a queue and an exchange:

Message flow in RabbitMQ

  1. The producer publishes a message to an exchange. When you create the exchange, you have to specify its type. The different types of exchanges are explained in detail later on.
  2. The exchange receives the message and is now responsible for routing it. The exchange takes different message attributes into account, such as the routing key, depending on the exchange type.
  3. Bindings have to be created from the exchange to queues. In this case, we see two bindings to two different queues from the exchange. The exchange routes the message into the queues depending on message attributes.
  4. The messages stay in the queue until they are handled by a consumer.
  5. The consumer handles the message.

Direct exchange

A direct exchange delivers messages to queues based on a message routing key. The routing key can be seen as an address that the exchange is using to decide how to route the message. A message goes to the queues whose binding key exactly matches the routing key of the message.

The direct exchange type is useful when you would like to distinguish messages published to the same exchange using a simple string identifier.

Imagine that Queue A (create_pdf_queue) in the following diagram is bound to a direct exchange (pdf_events) with the binding key pdf_create. When a new message with the routing key pdf_create arrives at the direct exchange, the exchange routes it to the queue where the binding_key = routing_key is, in this case, to Queue A (create_pdf_queue).

SCENARIO 1:

  • Exchange: pdf_events
  • Queue A: create_pdf_queue
  • Binding a key between exchange (pdf_events) and Queue A (create_pdf_queue): pdf_create

SCENARIO 2:

  • Exchange: pdf_events
  • Queue B: pdf_log_queue
  • Binding a key between exchange (pdf_events) and Queue B (pdf_log_queue): pdf_log

EXAMPLE:

A message with the routing key pdf_log is sent to the exchange pdf_events. The message is routed to pdf_log_queue because the routing key (pdf_log) matches the binding key (pdf_log). If the message routing key does not match any binding key, the message will be discarded, as seen in the direct exchange diagram:

A message goes to the queues whose binding key exactly matches the routing key of the message.
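
The same two scenarios can be wired up with the raw RabbitMQ .NET client (rather than EasyNetQ), which makes the exchange, queue, and binding declarations explicit. This is a sketch that assumes a local broker with default credentials:

using System.Text;
using RabbitMQ.Client;

public class DirectExchangeExample
{
    public static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // The direct exchange and the two queues from the scenarios
            channel.ExchangeDeclare("pdf_events", ExchangeType.Direct);
            channel.QueueDeclare("create_pdf_queue", durable: true, exclusive: false,
                autoDelete: false, arguments: null);
            channel.QueueDeclare("pdf_log_queue", durable: true, exclusive: false,
                autoDelete: false, arguments: null);

            // Scenario 1 and Scenario 2 bindings
            channel.QueueBind("create_pdf_queue", "pdf_events", "pdf_create");
            channel.QueueBind("pdf_log_queue", "pdf_events", "pdf_log");

            // Routing key pdf_log matches Queue B's binding key, so the
            // message lands in pdf_log_queue only
            var body = Encoding.UTF8.GetBytes("log entry");
            channel.BasicPublish("pdf_events", "pdf_log", basicProperties: null, body: body);
        }
    }
}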

Default exchange

The default exchange is a pre-declared direct exchange with no name, usually referred to by the empty string "". When you use the default exchange, your message will be delivered to the queue with a name equal to the routing key of the message. Every queue is automatically bound to the default exchange with a routing key that is the same as the queue name.
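
In code, this means you can publish with an empty exchange name and use the queue name as the routing key. A one-line sketch, reusing the channel and queue from the direct exchange example above:

// The empty string selects the default exchange; the routing key is the queue name
channel.BasicPublish("", "create_pdf_queue", basicProperties: null,
    body: Encoding.UTF8.GetBytes("straight to the queue"));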

Topic exchange

Topic exchanges route messages to queues based on wildcard matches between the routing key and something called the routing pattern specified by the queue binding. Messages are routed to one or many queues based on a matching between a message routing key and this pattern.

The routing key must be a list of words, delimited by a period (.); examples are agreements.us and agreements.eu.stockholm, which, in this case, identifies agreements that are set up for a company with offices in lots of different locations. The routing patterns may contain an asterisk (*) to match a word in a specific position of the routing key (for example, a routing pattern of agreements.*.*.b.* will only match routing keys where the first word is agreements and the fourth word is b). A pound symbol (#) indicates a match on zero or more words (for example, a routing pattern of agreements.eu.berlin.# matches any routing keys beginning with agreements.eu.berlin).

The consumers indicate which topics they are interested in (such as subscribing to a feed for an individual tag). The consumer creates a queue and sets up a binding with a given routing pattern to the exchange. All messages with a routing key that match the routing pattern will be routed to the queue and stay there until the consumer consumes the message.

The following diagram shows three example scenarios:

SCENARIO 1:

Consumer A is interested in all the agreements in Berlin:

  • Exchange: agreements
  • Queue A: berlin_agreements
  • Routing pattern between exchange (agreements) and Queue A (berlin_agreements): agreements.eu.berlin.#
  • Example of message routing key that will match: agreements.eu.berlin and agreements.eu.berlin.headstore

SCENARIO 2:

Consumer B is interested in all the agreements:

  • Exchange: agreements
  • Queue B: all_agreements
  • Routing pattern between exchange (agreements) and Queue B (all_agreements): agreements.#
  • Example of message routing key that will match: agreements.eu.berlin and agreements.us

Topic Exchange: Messages are routed to one or many queues based on a matching between a message routing key and the routing pattern.

SCENARIO 3:

Consumer C is interested in all agreements for European head stores:

  • Exchange: agreements
  • Queue C: headstore_agreements
  • Routing pattern between exchange (agreements) and Queue C (headstore_agreements): agreements.eu.*.headstore
  • Example of message routing keys that will match: agreements.eu.berlin.headstore and agreements.eu.stockholm.headstore
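
Reusing the channel setup from the direct exchange sketch earlier, the three scenarios translate into three bindings on a topic exchange. This is only a sketch; the queue declarations are omitted for brevity:

channel.ExchangeDeclare("agreements", ExchangeType.Topic);

// Scenario 1: everything about Berlin
channel.QueueBind("berlin_agreements", "agreements", "agreements.eu.berlin.#");
// Scenario 2: every agreement
channel.QueueBind("all_agreements", "agreements", "agreements.#");
// Scenario 3: European head stores only
channel.QueueBind("headstore_agreements", "agreements", "agreements.eu.*.headstore");

// This routing key matches all three patterns, so each queue gets a copy
channel.BasicPublish("agreements", "agreements.eu.berlin.headstore",
    basicProperties: null, body: Encoding.UTF8.GetBytes("new agreement"));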

Fanout exchange

The fanout exchange copies and routes a received message to all queues that are bound to it, regardless of the routing keys or pattern matching used by direct and topic exchanges. Any routing keys provided are simply ignored.

Fanout exchanges can be useful when the same message needs to be sent to one or more queues with consumers who may process the same message in different ways.

The following fanout exchange figure shows an example where a message received by the exchange is copied and routed to all three queues that are bound to the exchange. It could be sport or weather news updates that should be sent out to each connected mobile device when something happens.

The default exchange that AMQP brokers must provide for the fanout exchange type is amq.fanout:

Received messages are routed to all queues that are bound to the exchange.

SCENARIO 1:

  • Exchange: sport news
  • Queue A: Mobile client Queue A
  • Binding: Binding between the exchange (sport news) and Queue A (mobile client Queue A)

EXAMPLE:

A message is sent to the exchange sport news. The message is routed to all queues (Queue A, Queue B, and Queue C) because all queues are bound to the exchange. Provided routing keys are ignored.
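
In code, a fanout version of the scenario looks like the following sketch (the exchange is named sport_news here to keep the name a single word, and the mobile client queues are assumed to already exist):

channel.ExchangeDeclare("sport_news", ExchangeType.Fanout);

// Every bound queue gets a copy; the routing key on the binding is ignored
channel.QueueBind("mobile_client_queue_a", "sport_news", routingKey: "");
channel.QueueBind("mobile_client_queue_b", "sport_news", routingKey: "");
channel.QueueBind("mobile_client_queue_c", "sport_news", routingKey: "");

// The routing key on publish is ignored as well
channel.BasicPublish("sport_news", "", basicProperties: null,
    body: Encoding.UTF8.GetBytes("Match update"));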

Headers exchange

A headers exchange routes messages based on arguments containing headers and optional values. Headers exchanges are very similar to topic exchanges, but they route based on header values instead of routing keys. A message is considered to match if the value of the header equals the value specified upon binding.

A special argument named x-match, added in the binding between your exchange and your queue, tells RabbitMQ whether all headers must match or just one: either any common header between the message and the binding counts as a match, or all the headers referenced in the binding need to be present in the message for it to match. The x-match property can have two different values, any or all, where all is the default. A value of all means all header pairs (key, value) must match, and a value of any means at least one of the header pairs must match. Headers can be constructed using a wider range of data types, such as integers or hashes, instead of just strings. The headers exchange type (used with the binding argument any) is useful for directing messages that may contain a subset of known (unordered) criteria:

  • Exchange: Binding to Queue A with arguments (key = value): format = pdf, type = report, x-match = all
  • Exchange: Binding to Queue B with arguments (key = value): format = pdf, type = log, x-match = any
  • Exchange: Binding to Queue C with arguments (key = value): format = zip, type = report, x-match = all

SCENARIO 1:

Message 1 is published to the exchange with the header arguments (key = value): format = pdf, type = report and with the binding argument x-match = all.

Message 1 is delivered to Queue A; all of its key/value pairs match.

SCENARIO 2:

Message 2 is published to the exchange with the header argument (key = value): format = pdf.

Message 2 is delivered to Queue B only; Queue B's binding (x-match = any) needs just one matching header, whereas Queue A's binding (x-match = all) requires both of its headers to match:

Headers exchanges route messages to queues that are bound using arguments (key and value) that contain headers and optional values.

SCENARIO 3:

Message 3 is published to the exchange with the header arguments (key = value): format = zip, type = log.

Message 3 is delivered to Queue B only; its binding (x-match = any) matches on type = log, while Queue A and Queue C (both x-match = all) require every one of their bound headers to match.
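
The scenarios above can be expressed with the RabbitMQ .NET client by passing the header arguments on the binding and the header values on the message. The exchange and queue names here are illustrative, and the channel setup from the direct exchange sketch is assumed:

using System.Collections.Generic;

channel.ExchangeDeclare("document_events", ExchangeType.Headers);

// Queue A: both headers must match (x-match = all)
channel.QueueBind("queue_a", "document_events", routingKey: "",
    arguments: new Dictionary<string, object>
    {
        { "x-match", "all" }, { "format", "pdf" }, { "type", "report" }
    });

// Queue B: one matching header is enough (x-match = any)
channel.QueueBind("queue_b", "document_events", routingKey: "",
    arguments: new Dictionary<string, object>
    {
        { "x-match", "any" }, { "format", "pdf" }, { "type", "log" }
    });

// Message 2 from the scenarios: only the format header is present,
// so it matches Queue B's binding but not Queue A's
var props = channel.CreateBasicProperties();
props.Headers = new Dictionary<string, object> { { "format", "pdf" } };
channel.BasicPublish("document_events", "", basicProperties: props,
    body: Encoding.UTF8.GetBytes("report data"));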

Common messages

The following are all the common messages we have defined for this book. Feel free to change any of them as needed; they are merely guides to get you thinking in the microservice mindset:

[Queue("Bitcoin", ExchangeName = "EvolvedAI")]
[Serializable]
public class BitcoinSpendMessage
{
public decimal amount { get; set; }
}
[Queue("Bitcoin", ExchangeName = "EvolvedAI")]
[Serializable]
public class BitcoinSpendReceipt
{
public long ID { get; set; }
public decimal amount { get; set; }
public bool success { get; set; }
public DateTime time { get; set; }
}
[Queue("Financial", ExchangeName = "EvolvedAI")]
[Serializable]
public class BondsRequestMessage
{
public DateTime issue { get; set; }
public DateTime maturity { get; set; }
public double coupon { get; set; }
public int frequency { get; set; }
public double yield { get; set; }
public string compounding { get; set; }
public double price { get; set; }
public double calcYield { get; set; }
public double price2 { get; set; }
public string message { get; set; }
}
[Queue("Financial", ExchangeName = "EvolvedAI")]
[Serializable]
public class BondsResponseMessage
{
public long ID { get; set; }
public DateTime issue { get; set; }
public DateTime maturity { get; set; }
public double coupon {get; set; }
public int frequency { get; set; }
public double yield { get; set; }
public string compounding { get; set; }
public double price { get; set; }
public double calcYield { get; set; }
public double price2 { get; set; }
public string message { get; set; }
}
[Queue("Financial", ExchangeName = "EvolvedAI")]
[Serializable]
public class CreditDefaultSwapRequestMessage
{
public double fixedRate { get; set; }
public double notional { get; set; }
public double recoveryRate {get; set;}
public double fairRate { get; set; }
public double fairNPV { get; set; }
}
[Queue("Financial", ExchangeName = "EvolvedAI")]
[Serializable]
public class CreditDefaultSwapResponseMessage
{
public long ID { get; set; }
public double fixedRate { get; set; }
public double notional { get; set; }
public double recoveryRate { get; set; }
public double fairRate { get; set; }
public double fairNPV { get; set; }
}
[Serializable]
[Queue("Deployments", ExchangeName = "EvolvedAI")]
public class DeploymentStartMessage
{
public long ID { get; set; }
public DateTime Date { get; set; }
}
[Serializable]
[Queue("Deployments", ExchangeName = "EvolvedAI")]
public class DeploymentStopMessage
{
public long ID { get; set; }
public DateTime Date { get; set; }
}
[Queue("Email", ExchangeName = "EvolvedAI")]
[Serializable]
public class EmailSendRequest
{
public string From;
public string To;
public string Subject;
public string Body;
}
[Serializable]
[Queue("FileSystem", ExchangeName = "EvolvedAI")]
public class FileSystemChangeMessage
{
public long ID { get; set; }
public int ChangeType { get; set; }
public int EventType { get; set; }
public DateTime ChangeDate { get; set; }
public string FullPath { get; set; }
public string OldPath { get; set; }
public string Name { get; set; }
public string OldName { get; set; }
}
[Serializable]
Queue("Health", ExchangeName = "EvolvedAI")]
public class HealthStatusMessage
{
public string ID { get; set; }
public DateTime date { get; set; }
public string serviceName { get; set; }
public int status { get; set; }
public string message { get; set; }
public double memoryUsed { get; set; }
public double CPU { get; set; }
}
[Serializable]
[Queue("Memory", ExchangeName = "EvolvedAI")]
public class MemoryUpdateMessage
{
public long ID { get; set; }
public string Text { get; set; }
public int Gen1CollectionCount { get; set; }
public int Gen2CollectionCount { get; set; }
public float TimeSpentPercent { get; set; }
public string MemoryBeforeCollection { get; set; }
public string MemoryAfterCollection { get; set; }
public DateTime Date { get; set; }
}
[Serializable]
[Queue("MachineLearning", ExchangeName = "EvolvedAI")]
public class MLMessage
{
public long ID { get; set; }
public int MessageType { get; set; }
public int LayerType { get; set; }
public double param1 { get; set; }
public double param2 { get; set; }
public double param3 { get; set; }
public double param4 { get; set; }
public double replyVal1 { get; set; }
public double replyVal2 { get; set; }
public string replyMsg1 { get; set; }
public string replyMsg2 { get; set; }
}
[Serializable]
[Queue("Trello", ExchangeName = "EvolvedAI")]
public class TrelloResponseMessage
{
public bool Success { get; set; }
public string Message { get; set; }
}

Summary

In this chapter, we defined what a microservice and its architecture mean to us. We also had an in-depth discussion of message queues and their different configurations. Without any further ado, let's move on and start talking about some of the pieces of our puzzle. We're going to discuss the fantastic world of open source software and take a look at some of the many tools and frameworks we are highlighting in this book in order to create our ecosystem. This entire book is written, and its software developed, with the sole purpose of letting you quickly develop a microservice ecosystem, and there is no better way to do that than to leverage the many great open source contributions available.

Key benefits

  • Learn to build message-based microservices
  • Packed with case studies to explain the intricacies of large-scale microservices
  • Build scalable, modular, and robust architectures with C#

Description

C# is a powerful language when it comes to building applications and software architecture using rich libraries and tools such as .NET. This book will harness the strength of C# in developing microservice architectures and applications. It shows developers how to develop an enterprise-grade, event-driven, asynchronous, message-based microservice framework using C#, .NET, and various open source tools. We will discuss how to send and receive messages and how to design many types of microservices that are truly usable in a corporate environment. We will also dissect each case and explain the code, best practices, pros and cons, and more. Throughout our journey, we will use many open source tools and create file monitors, a machine learning microservice, a quantitative financial microservice that can handle bonds and credit default swaps, a deployment microservice to show you how to better manage your deployments, and memory, health status, and other microservices. By the end of this book, you will have a complete microservice ecosystem that you can place into production or customize in no time.

Who is this book for?

C# developers, software architects, and professionals who want to master the art of designing scalable microservice architectures. Developers should have a basic understanding of .NET application development using C# and Visual Studio.

What you will learn

  • Explore different open source tools within the context of designing microservices
  • Learn to provide insulation to exception-prone function calls
  • Build common messages used between microservices for communication
  • Learn to create a microservice using our base class and interface
  • Design a quantitative financial machine microservice
  • Learn to design a microservice that is capable of using Blockchain technology

Product Details

Publication date : Jun 29, 2018
Length : 254 pages
Edition : 1st
Language : English
ISBN-13 : 9781789533767
Vendor : Microsoft

Table of Contents

15 Chapters
  1. Let's Talk Microservices, Messages, and Tools
  2. ReflectInsight – Microservice Logging Redefined
  3. Creating a Base Microservice and Interface
  4. Designing a Memory Management Microservice
  5. Designing a Deployment Monitor Microservice
  6. Designing a Scheduling Microservice
  7. Designing an Email Microservice
  8. Designing a File Monitoring Microservice
  9. Creating a Machine Learning Microservice
  10. Creating a Quantitative Financial Microservice
  11. Trello Microservice – Board Status Updating
  12. Microservice Manager – The Nexus
  13. Creating a Blockchain Bitcoin Microservice
  14. Adding Speech and Search to Your Microservice
  15. Best Practices

Customer reviews

Rating distribution
1 out of 5 (1 rating)
5 star: 0%
4 star: 0%
3 star: 0%
2 star: 0%
1 star: 100%

Chicco, Oct 25, 2018 (1 star, Amazon Verified review):
The book is too expensive for only 240 pages. The topics would actually be interesting, but as soon as you start reading the code, which has no indentation at all, you lose any desire to continue (see the attached image). A shame, because lately Packt's books have been interesting; unfortunately, this one turned out really badly.