
Building Serverless Microservices in Python

By Richard Takashi Freeman
About this book
Over the last few years, there has been a massive shift from monolithic architecture to microservices, thanks to their small and independent deployments that allow increased flexibility and agile delivery. Traditionally, virtual machines and containers were the principal mediums for deploying microservices, but they involved a lot of operational effort, configuration, and maintenance. More recently, serverless computing has gained popularity due to its built-in autoscaling abilities, reduced operational costs, and increased productivity. Building Serverless Microservices in Python begins by introducing you to serverless microservice structures. You will then learn how to create your first serverless data API and test your microservice. Moving on, you'll delve into data management and work with serverless patterns. Finally, the book introduces you to the importance of securing microservices. By the end of the book, you will have gained the skills you need to combine microservices with serverless computing, making their deployment much easier thanks to the cloud provider managing the servers and capacity planning.
Publication date:
March 2019
Publisher
Packt
Pages
168
ISBN
9781789535297

 

Serverless Microservices Architectures and Patterns

Microservice architectures are based on services. You can think of microservices as a lightweight version of SOA, enriched with more recent styles such as the event-driven architecture, where an event is defined as a change in state that is of interest. In this chapter, you will learn about the monolithic multi-tier architecture and the monolithic service-oriented architecture (SOA). We will discuss the benefits and drawbacks of both architectures. We will also look at the background of microservices to understand the rationale behind their growth, and compare the different architectures.

We will cover design patterns and principles, and introduce the serverless microservice integration patterns. We will then cover the communication styles and decomposition microservice patterns, including synchronous and asynchronous communication.

You will then learn how serverless computing in AWS can be used to quickly deploy event-driven computing and microservices in the cloud. We conclude the chapter by setting up your serverless AWS and development environment.

In this chapter we will cover the following topics:

  • Understanding different architecture types and patterns
  • Virtual machines, containers, and serverless computing
  • Overview of microservice integration patterns
  • Communication styles and decomposition microservice patterns
  • Serverless computing in AWS
  • Setting up your serverless environment
 

Understanding different architecture types and patterns

In this section, we will discuss different architectures, such as monolithic and microservices, along with their benefits and drawbacks.

The monolithic multi-tier architecture and the monolithic service-oriented architecture

At the start of my career, while I was working for global Fortune 500 clients at Capgemini, we tended to use a multi-tier architecture, where you create physically separate layers that you can update and deploy independently. For example, as shown in the following three-tier architecture diagram, you can use Presentation, Domain logic, and Data Storage layers:

In the presentation layer, you have the user interface elements and any presentation-related applications. In the domain logic layer, you have all the business logic and anything to do with passing data from the presentation layer; elements in the domain logic also deal with passing data to the storage or data layer, which has the data-access components and any database or filesystem elements. For example, if you want to change the database technology from SQL Server to MySQL, you only have to change the data-access components rather than modifying elements in the presentation or domain-logic layers. This decouples the type of storage from the presentation and business logic, enabling you to change the database technology by swapping out the data-storage layer.
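
To make that decoupling concrete, here is a minimal, hypothetical Python sketch (the class and function names are invented for illustration): the domain and presentation code depend only on an abstract data-access interface, so changing the database means supplying a different implementation of that interface.

    from abc import ABC, abstractmethod

    class CustomerStore(ABC):
        """Abstract data-access component that the domain logic depends on."""
        @abstractmethod
        def get_customer(self, customer_id):
            ...

    class MySqlCustomerStore(CustomerStore):
        """Concrete data-access component; swapping databases replaces only this."""
        def get_customer(self, customer_id):
            # A real implementation would run a MySQL query here.
            return {"id": customer_id, "name": "example"}

    def present_customer(store, customer_id):
        """Presentation/domain code stays unaware of the storage technology."""
        return store.get_customer(customer_id)["name"]

    print(present_customer(MySqlCustomerStore(), "42"))  # prints: example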

A few years later at Capgemini, we implemented our clients' projects using SOA, which is much more granular than the multi-tier architecture. It is basically the idea of having standardized service contracts and a service registry that allow for automation and abstraction:

There are four important service properties related to SOA:

  • Each service needs to have a clear business activity that is linked to an activity in the enterprise.
  • Anybody consuming the service does not need to understand the inner workings.
  • All the information and systems are self-contained and abstracted.
  • To support its composability, the service may consist of other underlying services.

Here are some important SOA principles:

  • Standardized
  • Loosely coupled
  • Abstract
  • Stateless
  • Granular
  • Composable
  • Discoverable
  • Reusable

The first principle is that there is a standardized service contract. This is basically a communication agreement that's defined at the enterprise level, so that when you consume a service, you know exactly which service it is, the contract for passing in messages, and what you are going to get back. These services are loosely coupled, meaning they can work autonomously, but also that you can access them from any location within the enterprise network. The services are also abstract, meaning each service is a black box whose inner logic is hidden away, and they can work independently of other services.

Some services will also be stateless. That means that, if you call a service, passing in a request, you will get a response; you will also get an exception if there is a problem with the service or the payload. Granularity is also very important within SOA. A service needs to be granular enough that it's not called inefficiently or too many times, so we want to normalize the level and the granularity of each service. Some services can be decomposed if they're being reused by other services, or services can be joined together and normalized to minimize redundancy. Services also need to be composable, so you can merge them into larger services or split them up.

There's a standardized set of contracts, but a service also needs to be discoverable: there is a way to automatically discover what services are available, what endpoints they expose, and a way to interpret them. Finally, the reusable element: reuse is really important for SOA, where logic can be reused in other parts of the code base.

Benefits of monolithic architectures

In SOA, the architecture is loosely coupled. All the services for the enterprise are defined in one repository. This allows us to have good visibility of the services available. In addition, there is a global data model. Usually, there is one data store where we store all the data sources and each individual service actually writes or reads to it. This allows it to be centralized at a global level.

Another benefit is that there is usually a small number of large services, which are driven by a clear business goal. This makes them easy to understand and consistent for our organization. In general, the communication between the services is decoupled via either smart pipelines or some kind of middleware.

Drawbacks of monolithic architectures

The drawback of the monolithic architecture is that there is usually a single technology stack. This means the application server, the web server, and the database frameworks are consistent throughout the enterprise. Obsolete libraries and code can be difficult to upgrade, as everything depends on a single stack and all the services need to be aligned on the same library versions.

Another drawback is that the code base is usually very large on a single stack, which means long build and test times for building and deploying the code. The services are deployed on a single or a large cluster of application servers and web servers. This means that, in order to scale, you need to scale the whole server; there's no ability to deploy and scale applications independently. To scale out an application, you need to scale out the web server or application server that hosts it.

Another drawback is that there's generally a centralized middleware orchestration layer or integration logic. For example, services might use a Business Process Management (BPM) framework to control the workflow, an Enterprise Service Bus (ESB) to route messages centrally, or some other middleware that deals with the integration between the services themselves. A lot of this logic is tied up centrally, and you have to be very careful not to break any inter-service communication when you change the configuration of that centralized logic.

Overview of microservices

The term microservice arose from a workshop in 2011, when different teams described an architecture style that they used. In 2012, Adrian Cockcroft from Netflix, who pioneered this style at web scale, described microservices as fine-grained SOA.

For example, if we have sensors on an Internet of Things (IoT) device, if there's a change of temperature, we would emit an event as a possible warning further downstream. This is what's called event-stream processing or complex-event processing. Essentially, everything is driven by events throughout the whole architecture.

The other type of design used in microservices is called domain-driven design (DDD). This is essentially where there is a common language between the domain experts and the developers. The other important component in DDD is the bounded context, which is where there is a strict model of consistency that applies within its bounds for each service. For example, if a service deals with customer invoicing, that service will be the central point and the only place where customer invoicing can be processed, written, or updated. The benefit is that there won't be any confusion around the responsibilities for data access with systems outside of the bounded context.

You can think of a microservice as being centered around a REST endpoint or application programming interface using JSON standards. A lot of the logic is built into the service. This is what is called a dumb pipeline but a smart endpoint, and you can see why in the diagram. We have a service that deals with customer support, as follows:

For example, the endpoint would update customer support details, add a new ticket, or get customer support details with a specific identifier. We have a local customer-support data store, so all the information around customer support is stored in that data store, and you can see that the microservice emits customer-support events. These are sent out on a publish/subscribe mechanism or using other event-publishing approaches, such as Command Query Responsibility Segregation (CQRS). You can see that this fits within the bounded context: there's a single responsibility around this bounded context, so this microservice controls all information around customer support.
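
To make the smart endpoint idea concrete, here is a rough, hypothetical Python sketch (the function and store names are invented for illustration): all reads and writes of customer-support data go through this one service, which also emits events for downstream subscribers.

    import json

    # In-memory stand-ins for the local data store and the event channel;
    # a real service would use a database and a publish/subscribe mechanism.
    TICKETS = {}
    EVENTS = []

    def handle_request(method, ticket_id, body=None):
        """Smart endpoint: all customer-support reads and writes happen here."""
        if method == "GET":
            return TICKETS.get(ticket_id, {})
        if method == "PUT":
            TICKETS[ticket_id] = body or {}
            # Emit a customer-support event for downstream subscribers.
            EVENTS.append(json.dumps({"type": "TicketUpdated", "id": ticket_id}))
            return {"status": "updated"}
        raise ValueError("Unsupported method: " + method)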

Benefits and drawbacks of microservice architectures

The bounded context, and the fact that this is a very small code base, allow you to build and deploy very frequently. In addition, you can scale these services independently. There's usually one application server or web server per microservice, and you can scale it out very quickly, just for the specific service that you want. In addition, you can have frequent builds that you test more frequently, and you can use any type of language, database, or web application server, which allows it to be a polyglot system. The bounded context is very important, as you model one domain. Features can be released very quickly because, for example, the customer-services microservice controls all changes to the data, so you can deploy these components a lot faster.

However, there are some drawbacks to using a microservices architecture. First, there's a lot of complexity in terms of distributed development and testing. In addition, the services talk to each other a lot more, so there's more network traffic; latency and the network become very important in microservices, and the DevOps team has to maintain and monitor the time it takes to get a response from another service. In addition, changing responsibilities is another complication: for example, if you split one bounded context into several sub-bounded contexts, you need to think about how that works across teams. A dedicated DevOps team is also generally needed, essentially to support and maintain a much larger number of services and machines throughout the organization.

SOA versus microservices

Now that we have a good understanding of both, let's compare the SOA and microservices architectures. In terms of communication, both SOA and microservices can use synchronous and asynchronous communication. However, SOA typically relied on Simple Object Access Protocol (SOAP) or web services, while microservices tend to be more modern and make wide use of REpresentational State Transfer (REST) Application Programming Interfaces (APIs).

We will start with the following diagram, which compares SOA and microservices:

The orchestration is where there's a big differentiation. In SOA, everything is centralized around a BPM, ESB, or some kind of middleware. All the integration between services and data flowing is controlled centrally. This allows you to configure any changes in one place, which has some advantages.

The microservices approach has been to use a more choreography-based approach. This is where an individual service is smarter, that is, a smart endpoint but a dumb pipeline. That means the services know exactly who to call and what data they will get back, and they manage that process within the microservice. This gives us more flexibility in terms of the integration for microservices. In the SOA world or the three-tier architecture, there's less flexibility, as it's usually a single code base and the integration is a large set of monolithic releases and deployments of the user interface or backend services. This can limit the flexibility of your enterprise. Microservices, however, are much smaller and can be deployed in isolation and in a much more fine-grained way.

Finally, on the architecture side, SOA works at the enterprise level, where an enterprise architect or solutions architect would model and control the release of all the services in a central repository. Microservices are much more flexible and work at the project level, where the team is composed of a small number of developers; small enough, as the saying goes, that they could sit around a table and share a pizza. This gives you much more flexibility to make decisions rapidly at the project level, rather than having to get everything agreed at the enterprise level.

 

Virtual machines, containers, and serverless computing

Now that we have a better understanding of the monolithic and microservice architectures, let's look at the Amazon Web Services (AWS) building blocks for creating serverless microservices.

But first we'll cover virtual machines, containers, and serverless computing, which are the basic building blocks behind any application or service hosted in the public cloud.

Virtual machines are the original offering in the public cloud and web hosting sites, containers are lightweight standalone images, and serverless computing is when the cloud provider fully manages the resources. You will understand the benefits and drawbacks of each approach and we will end on a detailed comparison of all three.

Virtual machines

In traditional data centers, you would have to buy or lease physical machines and have spare capacity to deal with additional web or user traffic. In the new world, virtual machines were one of the first public cloud offerings. You can think of them as similar to physical boxes, where you can install an operating system, remotely connect via SSH or RDP, and install applications and services. I would say that virtual machines have been one of the key building blocks for start-up companies to be successful. They gave start-ups the ability to go to market with only small capital investments and to scale out with an increase in their web traffic and user volumes. This was something that previously only large organizations could afford, given the big upfront costs of physical hardware.

The advantages of virtual machines are pay-per-usage, choice of instance type, and dynamic allocation of storage, giving your organization full flexibility to rent hardware within minutes rather than waiting for physical hardware to be purchased. Virtual machines also provide security, which is managed by the cloud provider. In addition, they provide multi-region auto-scaling and load balancing, again managed by the cloud provider and available almost at the click of a button. There are many virtual machine offerings, for example, Amazon EC2, Azure VMs, and Google Compute Engine.

However, they do have some drawbacks. The main one is that it takes a few minutes to scale: any new machine takes a few minutes to spin up, making it impossible to scale quickly on demand. There is also a configuration effort, where the likes of Chef or Puppet are required for configuration management; for example, the operating system needs to be kept up to date.

Another drawback is that you still need to write the logic to poll or subscribe to other managed services, such as streaming analytics services. In addition, you still pay for idle machine time. For example, when your services are not running, the virtual machines are still up and you're still paying for that time even if they're not being actively used.

Containers

The old way with virtual machines was to deploy applications on a host operating system with configuration-management tools such as Chef or Puppet. This meant managing the application artifacts, their libraries, and their life cycles together on a specific operating system, whether Linux or Windows. Containers came out of this limitation, with the idea of shipping your code and dependencies in a portable container using full operating-system-level virtualization. You essentially make better use of the available resources on the machine.

These containers can be spun up very fast, and they are essentially immutable; that is, the OS, library versions, and configuration cannot be changed. The basic idea is that you ship the code and dependencies in this portable container, and the environment can be recreated locally or on a server from a configuration. Another important aspect is the orchestration engine, which is the key to managing containers. You'd have Docker images that are managed, deployed, and scaled by Kubernetes or Amazon EC2 Container Service (ECS).

The drawbacks are that these containers generally scale within seconds, which is still too slow to invoke a new container per request. So, you need them pre-warmed and already available, which has a cost. In addition, the cluster and image configuration involves some DevOps effort.

Recently AWS introduced AWS Fargate and Elastic Kubernetes Service (EKS), which have helped to relieve some of this configuration-management and support effort, but you would still need a DevOps team to support them.

The other drawback is that there's an integration effort with the managed services. For example, if you're dealing with a streaming analytics service, you still need to write the polling and subscription code to pull the data into your application or service.

Finally, as with virtual machines, you still pay for any containers that are running, even if Kubernetes assists with this. Containers run on EC2 instances, so you still need to pay for the actual machine's running time even if it's not being actively used.

Serverless computing

You can think of serverless computing as focusing on business logic rather than on all the infrastructure configuration, management, and integration around the service. In serverless computing, there are still servers; it's just that you don't manage the servers themselves, the operating system, or the hardware, and all the scalability is managed by the cloud provider. You don't have access to the raw machine, that is, you can't SSH onto the box.

The benefit is that you can really focus on the business logic code rather than any of the infrastructure or inbound-integration code; the business logic is the value you are adding as an organization for your customers and clients.

In addition, security is again managed by the cloud provider, and auto-scaling and high-availability options are also managed by the cloud provider. You can spin up more instances dynamically based on the number of requests, for example. The cost is per execution time, not per idle time.

There are different public cloud serverless offerings. Google, Azure, AWS, and Alibaba cloud have the concept of Functions as a Service (FaaS). This is where you deploy your business logic code within a function and everything around it, such as the security and the scalability, is managed by the cloud provider.

The drawback is that these functions are stateless and have a very short lifetime: once the invocation is over, any state maintained within the function is lost, so state has to be persisted outside. They're not suitable for long-running processes, and they have limited instance types and durations too; for example, AWS Lambda functions terminate after a maximum duration of 15 minutes. There are also constraints on the size of the external or custom libraries that you package together, since these Lambdas need to be spun up very quickly.
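
For example, a Lambda that needs to keep any state beyond a single invocation must write it to an external store. Here is a minimal boto3 sketch, assuming a hypothetical DynamoDB table named function-state with an id partition key:

    import boto3

    # Hypothetical table; it must already exist and the function's IAM role
    # needs permission to write to it.
    table = boto3.resource("dynamodb").Table("function-state")

    def lambda_handler(event, context):
        # Anything kept in local variables is lost when the execution
        # environment is recycled, so persist the result outside the function.
        table.put_item(Item={"id": event["id"], "status": "processed"})
        return {"status": "ok"}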

Comparing virtual machines, containers, and serverless

Let's compare Infrastructure as a Service (IaaS), Containers as a Service (CaaS), and Functions as a Service (FaaS). Think of IaaS as virtual machines, CaaS as a pool of Docker containers, and, as an example of FaaS, Lambda functions. This is a comparison between IaaS, CaaS, and FaaS:

The green elements are managed by the user, and the blue elements are managed by the cloud service provider. On the left, you can see that IaaS, as used with virtual machines, places a lot of the responsibility on the user. In CaaS, the operating-system level is managed by the provider, but you can see that the container and the runtime are managed by the user. Finally, on the right with FaaS, you can see that only the core business logic code and application configuration are managed by the user.

So, how do you choose between AWS Lambda, containers, and EC2 instances in the AWS world? Check out the following chart:

If we compare virtual machines against containers and Lambda functions on the top row, you can see that there is some configuration effort required in terms of maintenance, building for high availability, and management. For Lambda functions, this is actually done on a per-request basis; that is, it's request-driven. AWS will spin up more Lambdas if more traffic hits your site to keep it highly available (HA), for example.

In terms of flexibility, you have full access in virtual machines and containers, but with AWS Lambda, you have default hardware, default operating system, and no graphics processing units (GPU) available. The upside is that there is no upgrade or maintenance required on your side for Lambdas.

In terms of scalability, you need to plan ahead for virtual machines and containers: you need to provision the containers or instances and decide how you are going to scale. With AWS Lambda functions, scaling is implicit based on the number of requests or data volumes; you natively get more or fewer Lambdas executing in parallel.

The launch of virtual machines is usually in minutes and they can stay on perhaps for weeks. Containers can spin up within seconds and can stay on for minutes or hours before they can be disposed of. Lambda functions, however, can spin up in around 100 milliseconds and generally live for seconds or maybe a few minutes.

In terms of state, virtual machines and containers can maintain state, even though it's generally not best practice for scaling. Lambda functions are always stateless: when they terminate their execution, anything in memory is disposed of, unless it's persisted outside, in a DynamoDB table or an S3 bucket, for example.

Custom integration with AWS services is required for virtual machines and Docker containers. In Lambda functions, however, event sources can push data to a Lambda function using built-in integration with the other AWS services, such as Kinesis, S3, and API Gateway. All you have to do is subscribe the Lambda event source to a Kinesis Stream and the data will get pushed to your Lambda with its business logic code, which allows you to decide how you process and analyze that data. However, for EC2 virtual machines and ECS containers, you need to build that custom inbound integration logic using the AWS SDK, or by some other means.

Finally, in terms of pricing, EC2 instances are priced per second. There are also spot instances, which use market rates and are a lot cheaper than on-demand instances. The same goes for containers, except that you can have many containers on one EC2 instance, which makes better use of resources and is a lot cheaper, as you have the flexibility to spread different containers among the EC2 instances. For AWS Lambda functions, the pricing is based on execution time in 100-millisecond increments, the number of invocations, and the amount of random-access memory (RAM) allocated.

 

Overview of microservice integration patterns

In this section, we'll discuss design patterns, design principles, and how microservice architectural patterns relate to traditional microservice patterns and can be applied to serverless microservices. These topics will help you gain an overview of different integration patterns.

Design patterns

Patterns are reusable blueprints that are a solution to a similar problem others have faced, and that have widely been reviewed, tested, and deployed in various production environments.

Following them means that you will benefit from best practices and the wisdom of the technical crowd. You will also speak the same language as other developers or architects, which allows you to exchange your ideas much faster, integrate with other systems more easily, and run staff handovers more effectively.

Software design patterns and principles

You will probably be using object-oriented (OO) or functional programming in your microservices or Lambda code, so let's briefly talk about the patterns linked to them.

In OO programming, there are many best practice patterns or principles you can use when coding, such as GRASP or SOLID. I will not go into too much depth as it would take a whole book, but I would like to highlight some principles that are important for microservices:

  • SOLID: This has five principles. One example is the Single Responsibility Principle (SRP), where you define classes that each have a single responsibility, and hence a single reason for change, reducing the size of the services and increasing their stability (see the short sketch after this list).
  • Package cohesion: For example, common closure-principle classes that change together belong together. So when a business rule changes, developers only need to change code in a small number of packages.
  • Package coupling: For example, the acyclic dependencies principle, which states that dependency graphs of packages or components should have no cycles.
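
As a small illustration of the SRP (the class names here are invented), splitting calculation from persistence gives each class a single reason to change:

    class InvoiceCalculator:
        """Business rules only: changes when the pricing rules change."""
        def total(self, line_items):
            return sum(item["price"] * item["quantity"] for item in line_items)

    class InvoiceRepository:
        """Persistence only: changes when the storage mechanism changes."""
        def save(self, invoice):
            print("saving invoice", invoice)  # stand-in for a real data store

    calculator = InvoiceCalculator()
    print(calculator.total([{"price": 10.0, "quantity": 3}]))  # 30.0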

Let's briefly go into some of the useful design patterns for microservices:

  • Creational patterns: For example, the factory method creates an instance of several derived classes.
  • Structural patterns: For example, the decorator adds additional responsibilities to an object dynamically.
  • Behavioral patterns: For example, the command pattern encapsulates a request as an object, making it easier to extract parameters and to queue and log requests. Basically, you decouple the parameter that creates the command from the one that executes it (a minimal Python sketch of this pattern follows this list).
  • Concurrency patterns: For example, the reactor object provides an asynchronous interface to resources that must be handled synchronously.
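
Here is a minimal sketch of the command pattern in Python; the queue decouples the code that creates a command from the code that executes it, which also makes queuing and logging straightforward:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Command:
        """Encapsulates a request as an object that can be queued and logged."""
        name: str
        action: Callable[[], None]

    class CommandQueue:
        """Decouples submitting a command from executing it."""
        def __init__(self):
            self._pending: List[Command] = []

        def submit(self, command):
            self._pending.append(command)  # nothing executes yet

        def run_all(self):
            for command in self._pending:
                print("executing", command.name)  # logging comes for free
                command.action()

    queue = CommandQueue()
    queue.submit(Command("greet", lambda: print("hello")))
    queue.run_all()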

Depending on your coding experience, you may be familiar with these. If not, it's worth reading about them to improve your code readability, management, and stability, as well as your productivity. Here are some references where you can find out more:

  • SOLID Object-Oriented Design, Sandi Metz (2009)
  • Design Patterns: Elements of Reusable Object-Oriented Software, Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides (1995)
  • Head First Design Patterns, Eric T Freeman, Elisabeth Robson, Bert Bates, Kathy Sierra (2004)
  • Agile Software Development, Principles, Patterns, and Practices, Robert C. Martin (2002)

Serverless microservices pattern categories

On top of the software design patterns and principles we just discussed sit the microservices patterns. From my experience, there are many microservice patterns that I recommend as relevant for serverless microservices, as shown in the following diagram:

I created this diagram to summarize and illustrate the serverless microservices patterns we will be discussing in this book:

  • Communication styles: How services communicate together and externally.
  • Decomposition pattern: Creating a service that is loosely coupled by business capability or bounded context.
  • Data management: Deals with local and shared data stores.
  • Queries and messaging: Looks at events and messages that are sent between microservices, and how services are queried efficiently.
  • Deployment: Ideally, we would like uniform and independent deployments; you also don't want developers to have to re-create a new pipeline for each bounded context or microservice.
  • Observability and discovery: Being able to understand whether a service is functioning correctly, and to monitor and log activity, allows you to drill down if there are issues. You also want to know and monitor what is currently running, for cost and maintenance reasons, for example.
  • Security: This is critical for compliance, data integrity, data availability, and potential financial damage. It's important to have different encryption, authentication, and authorization processes in place.

Next, we will have a look at the communication styles and the decomposition patterns.

 

Communication styles and decomposition microservice patterns

In this section, we will discuss two microservice patterns, called communication styles and decomposition, with a sufficient level of detail that you will be able to discuss them with other developers, architects, and DevOps.

Communication styles

Microservice applications are distributed by nature, so they rely heavily on the network. This makes it important to understand the different communication styles available, which can be used to communicate with each other but also with the outside world. Here are some examples:

  • Remote procedure calls: It used to be popular for Java to use Remote Method Invocation (RMI), which tightly couples client and server with a non-standard protocol, which is one limitation. In addition, the network is not reliable, so traditional RMI should be avoided. Others, such as the SOAP interface with a client generated from the Web Service Definition Language (WSDL), are better, but are seen as heavyweight compared to the REpresentational State Transfer (REST) APIs that have been widely adopted in microservices.
  • Synchronous communication: It is simpler to understand and implement; you make a request and get a response. However, while waiting for the response, you may also be blocking a connection slot and resources, limiting calls from other services:

  • Asynchronous communication: With asynchronous communication, you make the request and then get the response later, sometimes out of order. These can be implemented using callbacks, or using async/await or promises in Node.js or Python, as in the small sketch after this list. However, there are many design considerations in using async, especially if there are failures that need monitoring. Unlike most synchronous calls, these are non-blocking:
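
For instance, here is a minimal asyncio sketch of non-blocking calls in Python; the two simulated service calls run concurrently, and responses can complete out of order:

    import asyncio

    async def call_service(name, delay):
        # Simulate a non-blocking remote call; real code would use an
        # asynchronous HTTP client.
        await asyncio.sleep(delay)
        return name + " responded"

    async def main():
        # Fire both requests concurrently instead of waiting on each in turn.
        results = await asyncio.gather(
            call_service("orders", 0.2),
            call_service("customers", 0.1),
        )
        print(results)

    asyncio.run(main())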

When dealing with communications, you also need to think about whether your call is blocking or non-blocking. For example, writing metrics from web clients to a NoSQL database using blocking calls could slow down your website.

You need to think about dealing with receiving too many requests, throttling them so they don't overwhelm your service, and handling failures with retries, delays, and errors.
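
One common way to handle transient failures is retries with exponential backoff and jitter; here is a minimal, generic sketch (the function names are invented for illustration):

    import random
    import time

    def call_with_retries(operation, max_attempts=4, base_delay=0.1):
        """Retry a flaky call with exponential backoff and jitter."""
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except Exception:
                if attempt == max_attempts:
                    raise  # give up and surface the error
                # Back off longer on each attempt so we don't overwhelm
                # the downstream service.
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))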

When using Lambda functions, you benefit from the AWS-built event source integrations, which spin up a Lambda per request or per micro-batch of data. In most cases, synchronous code is sufficient even at scale, but it's important to understand the architecture and communication between services when designing a system, as it is limited by bandwidth, and network connections can fail.

One-to-one communication microservice patterns

At an individual microservice level, the data management pattern is composed of a suite of small services, each with its own local data store, communicating with a REST API or via publish/subscribe:

API Gateway is a single entry point for all clients, and can be tailored for them, allowing changes to be decoupled from the main microservice API, which is especially useful for external-facing services.

One-to-one request/response can be sync or async. If it's sync, there can be a response for each request. If the communication is async, there can be an async response or an async notification. Async is generally preferred and much more scalable, as it does not hold an open connection (non-blocking), and makes better use of the central processing unit (CPU) and input/output (I/O) operations.
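
For instance, with API Gateway's Lambda proxy integration, a synchronous one-to-one request/response boils down to the handler returning a status code and a JSON body. Here is a minimal sketch (the path parameter name is invented):

    import json

    def lambda_handler(event, context):
        # API Gateway passes the HTTP request in `event`; we return the response.
        customer_id = (event.get("pathParameters") or {}).get("customer_id")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"customer_id": customer_id}),
        }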

We will go into further detail on the data-management patterns later in the book, where we will be looking at how microservices integrate in a wider ecosystem.

Many-to-many communication microservice patterns

For many-to-many communication, we use publish/subscribe, which is a messaging pattern. This is where the senders of messages, called publishers, do not send the messages directly to specific receivers; rather, the receivers subscribe to the messages they are interested in. It's a highly scalable pattern, as the two sides are decoupled:

Asynchronous messaging allows a service to consume and act upon events, and is a very scalable pattern, as you have decoupled the two parties: the publisher and the subscriber.
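
On AWS, for example, a publisher might push an event to an SNS topic with boto3 without knowing anything about the subscribers. A minimal sketch, with an invented topic ARN:

    import json

    import boto3

    sns = boto3.client("sns")

    # Hypothetical topic ARN; subscribers (SQS queues, Lambdas, emails)
    # each receive a copy without the publisher knowing who they are.
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:customer-support-events"

    def publish_event(event_type, payload):
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=json.dumps({"type": event_type, "payload": payload}),
        )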

Decomposition pattern by business capability

How do you create and design microservices? If you are migrating existing systems, you might look at decomposing a monolith or application into microservices. Even for a new green-field project, you will want to think about the microservices that are required:

First, you identify the business capability, that is, what an organization does in order to generate value, rather than how. That is, you need to analyze purpose, structure, and business processes. Once you identify the business capabilities, you define a service for each capability or capability group. You then need to add more details to understand what the service does by defining the available methods or operations. Finally, you need to architect how the services will communicate.

The benefit of this approach is that it is relatively stable, as it is linked to what your business offers. In addition, it is linked to processes and structure.

The drawbacks are that the data can span multiple services, the communication or shared code might not be optimal, and it needs a centralized enterprise-language model.

Decomposition pattern by bounded context

There are three steps to applying the decomposition pattern by bounded context: first, identify the domain, which is what an organization does. Then identify the subdomains, splitting intertwined models into logically separated subdomains according to their actual functionality. Finally, find the bounded context to mark off where the meaning of every term used by the domain model is well understood. A bounded context does not necessarily fall within only a single subdomain. The three steps are as follows:

The benefits of this pattern are as follows:

  • Use of Ubiquitous Language where you work with domain experts, which helps with wider communication.
  • Teams own, deploy, and maintain services, giving them flexibility and a deeper understanding within their bounded context. This is good because services within it are most likely to talk to each other.
  • The domain is understood by the team, with a representative domain expert. There is an interface that abstracts away a lot of the implementation details for other teams.

There are a few drawbacks as well:

  • It needs domain expertise.
  • It is iterative and needs continuous integration (CI) to be in place.
  • It can be overly complex for a simple domain, and it depends on the Ubiquitous Language and a domain expert.
  • If a polyglot approach was used, it's possible that no one knows the tech stack any more. Luckily, microservices should be smaller and simpler, so they can be rewritten.

More details can be found in the following books:

  • Building Microservices, Sam Newman (2015)
  • Domain-Driven Design: Tackling Complexity in the Heart of Software, Eric Evans (2003)
  • Implementing Domain-Driven Design, Vaughn Vernon (2013)
 

Serverless computing in AWS

Serverless computing in AWS allows you to quickly deploy event-driven computing in the cloud. With serverless computing, there are still servers, but you don't have to manage them. AWS automatically manages all the computing resources for you, as well as any trigger mechanisms. For example, when an object gets written to a bucket, that could trigger an event. If another service writes a new record to an Amazon DynamoDB table, that could trigger an event or an endpoint to be called.

The main idea of event-driven computing is that it easily allows you to transform data as it arrives in the cloud, or to perform data-driven auditing, analysis, notifications, and transformations, or to parse Internet of Things (IoT) device events. Serverless also means that you don't need to have an always-on running service in order to do that; you can trigger the processing based on the event.

Overview of some of the key serverless services in AWS

Some key serverless services in AWS are explained in the following list:

  • Amazon Simple Storage Service (S3): A distributed web-scale object store that is highly scalable, highly secure, and reliable. You only pay for the storage that you actually consume, which makes it beneficial in terms of pricing. It also supports encryption, where you can provide your own key or you can use a server-side encryption key provided by AWS.
  • Amazon DynamoDB: A fully-managed NoSQL database service that allows you to focus on writing data to the store. It's highly durable and available, and has been used in gaming and other high-performance applications that require low latency. It uses SSD storage under the hood and provides partitioning for high availability (see the short boto3 sketch after this list).
  • Amazon Simple Notification Service (SNS): A push-notification service that allows you to send notifications to subscribers. These subscribers could be email addresses, SMS recipients, or queues, and messages get pushed to every subscriber of an SNS topic.
  • Amazon Simple Queue Service (SQS): A fully-managed and scalable distributed message queue that is highly available and durable. SQS queues are often subscribed to SNS topics to implement the distributed publish-subscribe pattern. You pay for what you use based on the number of requests.
  • AWS Lambda: The main idea is that you write your business logic code and it gets triggered based on the event sources you configure. The beauty is that you only pay for when the code is actually executed, billed in 100-millisecond increments. It automatically scales and is highly available. It is one of the key components of the AWS serverless ecosystem.
  • Amazon API Gateway: A managed API service that allows you to build, publish, and manage APIs. It performs at scale and supports traffic throttling and caching in edge locations, which means responses are localized to where the user is, minimizing overall latency. In addition, it integrates natively with AWS Lambda functions, allowing you to focus on the core business logic code to parse the request or data.
  • AWS Identity and Access Management (IAM): The central component of all security is IAM roles and policies, the mechanism managed by AWS for centralizing security and federating it to other services. For example, you can restrict a Lambda to only read from a specific DynamoDB table, without the ability to write to the same table, and deny read/write access to any other tables.
  • Amazon CloudWatch: A central system for monitoring services. You can, for example, monitor the utilization of various resources, record custom metrics, and host application logs. It is also very useful for creating rules that trigger a notification when specific events or exceptions occur.
  • AWS X-Ray: A service that allows you to trace service requests and analyze latency and traces from various sources. It also generates service maps, so you can see the dependency and where the most time is spent in a request, and do root cause analysis of performance issues and errors.
  • Amazon Kinesis Streams: A streaming service that allows you to capture millions of events per second, which you can analyze further downstream. The main idea is that you could have, for example, thousands of IoT devices writing directly to Kinesis Streams, capturing the data in one pipe and analyzing it with different consumers. If the number of events goes up and you need more capacity, you can simply add more shards, each with a capacity of 1,000 writes per second. It's simple to add more shards, as there is no downtime and they don't interrupt the event capture.
  • Amazon Kinesis Firehose: A system that allows you to persist and load streaming data. It allows you to write to an endpoint that would buffer up the events in memory for up to 15 minutes, and then write it into S3. It supports massive volumes of data and also integrates with Amazon Redshift, which is a data warehouse in the cloud. It also integrates with the Elasticsearch service, which allows you to query free text, web logs, and other unstructured data.
  • Amazon Kinesis Analytics: Allows you to analyze data that is in Kinesis Streams using structured query language (SQL). It also has the ability to discover the data schema so that you can use SQL statements on the stream. For example, if you're capturing web analytics data, you could count the daily page view data and aggregate them up by specific pageId.
  • Amazon Athena: A service that allows you to directly query S3 using a schema on read. It relies on the AWS Glue Data Catalog to store the table schemas. You can create a table and then query the data straight off S3, there's no spin-up time, it's serverless, and allows you to explore massive datasets in a very flexible and cost-effective manner.
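
As a taste of how little code these managed services need, here is a minimal boto3 sketch that writes and reads an item from a hypothetical DynamoDB table (the table and attribute names are invented, and the table must already exist):

    import boto3

    table = boto3.resource("dynamodb").Table("user-visits")  # hypothetical table

    # Write an item; DynamoDB partitions the data based on the table's key schema.
    table.put_item(Item={"user_id": "324", "visit_date": 20190301, "visits": 3})

    # Read it back by key.
    response = table.get_item(Key={"user_id": "324", "visit_date": 20190301})
    print(response.get("Item"))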

Among all these services, AWS Lambda is the most widely used serverless service in AWS. We will discuss more about that in the next section.

AWS Lambda

The key serverless component in AWS is called AWS Lambda. A Lambda is basically some business logic code that can be triggered by an event source:

A data event source could be the put or get of an object in an S3 bucket. Streaming event sources could be new records that have been added to a DynamoDB table, triggering a Lambda function. Other streaming event sources include Kinesis Streams and SQS.

One example of requests to endpoints is Alexa skills, from Amazon Echo devices. Another popular one is Amazon API Gateway: when you call an endpoint, it invokes a Lambda function. In addition, you can use changes in AWS CodeCommit or Amazon CloudWatch.

Finally, you can trigger different events and messages based on SNS or different cron events. These would be regular events or they could be notification events.

The main idea is that the integration between the event source and the Lambda is managed fully by AWS, so all you need to do is write the business logic code, which is the function itself. Once you have the function running, it can run a transformation or some business logic code to write to other services on the right of the diagram, such as data stores, or to invoke other endpoints.

In the serverless world, you can implement sync/async requests, messaging, or event-stream processing much more easily using AWS Lambdas. This includes the microservice communication styles and data-management patterns we just talked about.

Lambda has two types of event sources: non-stream event sources and stream event sources:

  • Non-stream event sources: Lambdas can be invoked asynchronously or synchronously. For example, SNS/S3 invocations are asynchronous, but API Gateway invocations are synchronous. For sync invocations, the client is responsible for retries, but for async invocations, AWS will retry many times before sending the event to a Dead Letter Queue (DLQ), if one is configured. It's great to have this retry logic and integration built in and supported by AWS event sources, as it means less code and a simpler architecture:

  • Stream event sources: The Lambda is invoked with micro-batches of data. In terms of concurrency, there is one Lambda invoked in parallel per shard for Kinesis Streams, or one Lambda per partition for DynamoDB Streams. Within the Lambda, you just need to iterate over the Kinesis Streams, DynamoDB, or SQS data passed in as JSON records, as in the handler sketch after this list. In addition, you benefit from the built-in AWS streams integration, where the Lambda will poll the stream, retrieve the data in order, and retry upon failure until the data expires, which can be up to seven days for Kinesis Streams. It's also great to have that retry logic built in without having to write a line of code; it would be much more effort to build this yourself on a fleet of EC2 instances or containers using the Kinesis Client Library or the AWS SDK:
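
For example, a handler for a Kinesis Streams event source receives a micro-batch under event['Records'], with each payload base64-encoded; a minimal sketch:

    import base64
    import json

    def lambda_handler(event, context):
        # AWS invokes the Lambda with a micro-batch of records from the shard.
        for record in event["Records"]:
            # Kinesis record payloads arrive base64-encoded.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            print("processing", payload)
        # Returning normally marks the batch as succeeded; raising an
        # exception triggers the built-in retry behavior.
        return {"records_processed": len(event["Records"])}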

In essence, AWS is responsible for the invocation and for passing the event data into the Lambda; you are responsible for the processing and the response of the Lambda.

Serverless computing to implement microservice patterns

Here is an overview diagram of some of the serverless and managed services available on AWS:

Leveraging AWS-managed services does mean additional vendor lock-in, but it helps you reduce non-business-differentiating support and maintenance costs, and it lets you deploy your applications faster, as the infrastructure can be provisioned or destroyed in minutes. In some cases, when using AWS-managed services to implement microservice patterns, there is no need for much code, only configuration.

We have services for the following:

  • Events, messaging, and notifications: For async publish/subscribe and coordinating components
  • API and web: To create APIs for your serverless microservices and expose them to the web
  • Data and analytics: To store, share, and analyze your data
  • Monitoring: Making sure your microservices and stack are operating correctly
  • Authorization and security: To ensure that your services and data are secure, and only accessed by those authorized

At the center is AWS Lambda, the glue for connecting services, but also one of the key places for you to deploy your business logic source code.

Example use case – serverless file transformer

Here is an example use case, to give you an idea of how different managed AWS services can fit together as a solution. The requirements are that a third-party vendor is sending us a small 10 MB file daily at random times, and we need to transform the data and write it to a NoSQL database so it can be queried in real time. If there are any issues with the third-party data, we want to send an alert within a few minutes. Your boss tells you that they don't want to have an always-on machine just for this task, the third party has no API development experience, and there is a limited budget. The head of security also finds out about this project and adds another constraint: they don't want to give the third party access to your AWS account beyond one locked-down S3 bucket:

This can be implemented as an event-driven serverless stack. On the left, we have an S3 bucket where the third party has access to drop their file. When a new object is created, that triggers a Lambda invocation via the built-in event source mapping. The Lambda executes code to transform the data, for example, extracting key records such as user_id, date, and event_type from the object, and writes them to a DynamoDB table. The Lambda sends summary custom metrics of the transformation, such as the number of records transformed and written, to CloudWatch metrics. In addition, if there are transformation errors, the Lambda sends an SNS notification with a summary of the transformation issues, which could generate an email to the administrator and the third-party provider for them to investigate the issue.
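
Here is a rough Python sketch of the transforming Lambda. The bucket and key come from the S3 event, but the table, metric namespace, and topic names are invented for illustration, and a production handler would need more robust error handling:

    import json
    import urllib.parse

    import boto3

    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("third-party-events")  # hypothetical
    cloudwatch = boto3.client("cloudwatch")
    sns = boto3.client("sns")
    # Hypothetical alert topic ARN.
    ALERT_TOPIC = "arn:aws:sns:us-east-1:123456789012:transform-alerts"

    def lambda_handler(event, context):
        # The built-in S3 event source mapping invokes us when an object is created.
        record = event["Records"][0]
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        written = errors = 0
        for line in body.splitlines():
            try:
                row = json.loads(line)  # assume one JSON record per line
                table.put_item(Item={
                    "user_id": row["user_id"],
                    "date": row["date"],
                    "event_type": row["event_type"],
                })
                written += 1
            except Exception:
                errors += 1

        # Publish summary custom metrics for this transformation run.
        cloudwatch.put_metric_data(
            Namespace="FileTransformer",
            MetricData=[{"MetricName": "RecordsWritten", "Value": written}],
        )
        if errors:
            # Alert the administrator and the third party via SNS.
            sns.publish(
                TopicArn=ALERT_TOPIC,
                Message="%d records failed to transform in %s" % (errors, key),
            )
        return {"written": written, "errors": errors}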

 

Setting up your serverless environment

If you already have an AWS account and have configured it locally, you can skip this section, but for security reasons, I recommend you enable Multi-Factor Authentication (MFA) for console access and do not use the root user account keys for this book.

There are three ways to access resources in AWS:

  • The AWS Management Console is a web-based interface for managing your services and billing.
  • The AWS Command Line Interface is a unified tool for managing and automating all your AWS services.
  • The software development kits (SDKs), available in Python, JavaScript, Java, .NET, and Go, allow you to interact with AWS programmatically (see the sketch after this list).
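
For example, once your credentials are configured (covered later in this chapter), Boto3, the Python SDK, lets you call AWS services in a few lines. Here is a minimal sketch that lists your S3 buckets:

import boto3

# Assumes AWS credentials have already been configured locally
s3 = boto3.client('s3')
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])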

Setting up your AWS account

It's very simple to set up an account; all you need is about five minutes, a smartphone, and a credit card:

  1. Create an account. AWS accounts include 12 months of Free Tier access: https://aws.amazon.com/free/.
  2. Enter your name and address.
  3. Provide a payment method.
  4. Verify your phone number.

This will create a root account; I recommend you use it only for billing and not for development.

Setting up MFA

I recommend you use MFA, as it adds an extra layer of protection on top of your username and password. It's free when you use your mobile phone as a Virtual MFA Device (https://aws.amazon.com/iam/details/mfa/). Perform the following steps to set it up:

  1. Sign into the AWS Management Console: https://console.aws.amazon.com.
  2. Choose Dashboard on the left menu.
  3. Under Security Status, expand Activate MFA on your root account.
  4. Choose Activate MFA or Manage MFA.
  1. In the wizard, choose Virtual MFA device, and then choose Continue.
  2. Install an MFA app such as Authy (https://authy.com/).
  3. Choose Show QR code, then scan the QR code with your smartphone. Click on the account to generate a six-digit token.
  4. Type the six-digit token in the MFA code 1 box.
  5. Wait for your phone to generate a new token, which is generated every 30 seconds.
  6. Type the six-digit token into the MFA code 2 box.
  7. Choose Assign MFA:

Setting up a new user with keys

For security reasons, I recommend you use the root account only for billing! So, the first thing is to create another user with fewer privileges:

Create a user with the following steps:

  1. Sign into the AWS Management console (https://console.aws.amazon.com/).
  2. Choose Security, Identity, & Compliance > IAM or search for IAM under Find services.
  3. On the IAM page, choose Add user.
  4. For User name, type newuser on the Set user details pane.
  1. For Select AWS access type, select the check boxes next to Programmatic access and AWS Management Console access. Optionally, select Autogenerated password and Require password reset.
  2. Choose Next: Permissions:

Follow these steps to set the permission for the new user:

  1. Choose Create group.
  2. In the Create group dialog box, type Administrator for the new group name.
  3. In the policy list, select the checkbox next to AdministratorAccess (note that, for non-proof-of-concept or non-development AWS environments, I recommend using more restricted access policies).
  4. Select Create group.
  1. Choose Refresh and ensure the checkbox next to Administrator is selected.
  2. Choose Next: Tags.
  3. Choose Next: Review.
  4. Choose Create user.
  5. Choose Download .csv and take a note of the keys and password. You will need these to access the account programmatically and log on as this user.
  6. Choose Close.

As with the root account, I recommend you enable MFA:

  1. In the Management Console, choose IAM | Users and choose newuser.
  2. Choose the Security credentials tab, then choose Manage next to Assigned MFA device (shown as Not assigned).
  3. Choose Virtual MFA device and choose Continue.
  4. Install an MFA application such as Authy (https://authy.com/).
  5. Choose Show QR code, then scan the QR code with your smartphone. Click on the account to generate a six-digit token.
  6. Type the six-digit token in the MFA code 1 box.
  7. Wait for your phone to generate a new token, which is generated every 30 seconds.
  8. Type the six-digit token into the MFA code 2 box.
  9. Choose Assign MFA.

Managing your infrastructure with code

A lot can be done with the web interface in the AWS Management Console. It's a good place to start, and it helps you understand what you are building, but it is usually not recommended for production deployments, as it is time-consuming and prone to human error. Best practice is to deploy and manage your infrastructure using code and configuration only. We will be using the AWS Command Line Interface (CLI), bash shell scripts, and Python 3 throughout this book, so let's set these up now.

Installing bash on Windows 10

Please skip this step if you are not using Windows.

Using bash (Unix shell) makes your life much easier when deploying and managing your serverless stack. I think all analysts, data scientists, architects, administrators, database administrators, developers, DevOps, and technical people should know some basic bash and be able to run shell scripts, which are typically used on Linux and Unix (including the macOS Terminal).

Alternatively, you can adapt the scripts to use MS-DOS or PowerShell, but it's not something I recommend, given that bash can now run natively on Windows 10 as an application, and there are many more examples online in bash.

Note that I have stripped off the \r or carriage returns, as they are illegal in shell scripts. You can use something such as Notepad++ (https://notepad-plus-plus.org/) on Windows if you want to view the carriage returns in your files properly. If you use traditional Windows Notepad, the new lines may not be rendered at all, so use Notepad++, Sublime (https://www.sublimetext.com/), Atom (https://atom.io/), or another editor.

A detailed guide on how to install Linux Bash shell on Windows 10 can be found at https://www.howtogeek.com/249966/how-to-install-and-use-the-linux-bash-shell-on-windows-10/. The main steps are as follows:

  1. Navigate to Control Panel | Programs | Turn Windows Features On Or Off.
  2. Choose the check box next to the Windows Subsystem for Linux option in the list, and then choose OK.
  3. Navigate to Microsoft Store | Run Linux on Windows and select Ubuntu.
  4. Launch Ubuntu and set up a root account with a username and password. The Windows C:\ and other drives are already mounted, and you can access them with the following command in the Terminal:
$ cd /mnt/c/

Well done, you now have full access to Linux on Windows!

Updating Ubuntu, installing Git and Python 3

Git will be used later on in this book:

$ sudo apt-get update
$ sudo apt-get -y upgrade
$ sudo apt-get -y install git-core

The Lambda code is written in Python 3.6. pip is a tool for installing and managing Python packages. Other popular Python package and dependency managers are available, such as Conda (https://conda.io/docs/index.html) or Pipenv (https://pipenv.readthedocs.io/en/latest/), but we will be using pip, as it is the recommended tool for installing packages from the Python Package Index, PyPI (https://pypi.org/), and is the most widely supported:

$ sudo apt -y install python3.6
$ sudo apt -y install python3-pip

Check the Python version:

$ python3 --version

You should get Python version 3.6+.

The dependent packages required for running, testing, and deploying the serverless microservices are listed in requirements.txt under each project folder, and can be installed using pip:

$ sudo pip install -r /path/to/requirements.txt

This will install the dependent libraries for local development, such as Boto3, which is the Python AWS Software Development Kit (SDK).

In some projects, there is a file called lambda-requirements.txt, which contains the third-party packages that are required by the Lambda when it is deployed. We created this separate requirements file because the Boto3 package is already included in the AWS Lambda runtime, and because the deployed Lambda does not need testing-related libraries, such as nose or locust, which increase the package size.
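
As a hypothetical illustration of the difference, requirements.txt pulls in everything needed for local development and testing, while lambda-requirements.txt lists only what must be bundled into the deployment package:

# requirements.txt - local development and testing (example contents)
boto3
nose
locust

# lambda-requirements.txt - only what the deployed Lambda must bundle
# (boto3 is omitted, as the AWS Lambda runtime already provides it)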

Installing and setting up the AWS CLI

The AWS CLI is used to package and deploy your Lambda functions, as well as to set up the infrastructure and security in a repeatable way:

$ sudo pip install awscli --upgrade

You created a user called newuser earlier and have a credentials.csv file with the AWS keys. Enter them by running aws configure:

$ aws configure
AWS Access Key ID: <the Access key ID from the csv>
AWS Secret Access Key: <the Secret access key from the csv>
Default region name: <your AWS region such as eu-west-1>
Default output format: <optional>

More details on setting up the AWS CLI are available in the AWS docs (https://docs.aws.amazon.com/lambda/latest/dg/welcome.html).

To choose your AWS Region, refer to AWS Regions and Endpoints (https://docs.aws.amazon.com/general/latest/gr/rande.html). Generally, those in the USA use us-east-1 and those in Europe use eu-west-1.
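
A quick, optional way to confirm that your keys and region are configured correctly is to ask AWS who you are using Boto3:

import boto3

# Prints the ARN of the IAM user that your configured credentials resolve to
print(boto3.client('sts').get_caller_identity()['Arn'])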

 

Summary

In this chapter, we got an overview of monolithic and microservices architectures. We then talked about the design patterns and principles and how they relate to serverless microservices. We also saw how to set up the AWS and development environment that will be used in this book.

In the next chapter, we will create a serverless microservice that exposes a REST API and is capable of querying a NoSQL store built using API Gateway, Lambda, and DynamoDB.

About the Author
  • Richard Takashi Freeman

    Richard Takashi Freeman has an M.Eng. in computer systems engineering and a Ph.D. in machine learning and natural language processing from the University of Manchester, UK. He is currently a lead big data and machine learning engineer at JustGiving, and a freelance SME and consultant in cloud architecture, serverless computing, and machine learning at Starwolf. He previously worked at PageGroup and Capgemini, and has been delivering cloud-based, big data, machine learning, serverless, and scalable solutions for over 14 years across different sectors. He is a blogger, a speaker at various events, and the author of two video courses. You can visit his website, Richard Freeman, PhD, for his blog posts, presentations, and courses.
