If you are new to serverless computing, then you may be wondering how serverless architecture differs from more traditional architectures. Of course, there are differences when we get down to the details. But, by and large, serverless architecture builds on the same proven concepts that we should employ no matter what software we are writing.
We will use Domain-Driven Design (DDD), apply the SOLID principles, and strive to implement a clean Hexagonal Architecture. One of the more interesting differences, though, is that we will apply these concepts at multiple levels. We will apply them differently at the subsystem level (that is, macro architecture), the service level (that is, micro architecture), and the function level (that is, nano architecture).
Let’s review these concepts and see where we will apply them in our serverless architecture and how we will do so differently than we have in the past.
Domain-driven design
When I started my career, our industry was struggling with the paradigm shift to object-oriented programming. The Unified Modeling Language (UML) eventually emerged, but we all used it in different ways. The Gang of Four’s book Design Patterns, Gregor Hohpe and Bobby Woolf’s book Enterprise Integration Patterns, and Martin Fowler’s book Patterns of Enterprise Application Architecture helped us create better designs. But it wasn’t until Eric Evans’ book Domain-Driven Design took root that we all had common semantics that we could use to produce more familiar designs. Let’s review the main concepts of DDD that we will employ throughout this book: bounded context, domain aggregate, and domain event.
Bounded context
A bounded context defines a clear boundary between different areas of a domain. Within a bounded context, the domain model should be consistent and clear. Between bounded contexts, we use context maps to define the relationships between the models. Defining these boundaries is the main thrust of this chapter. We will be using this concept to define our autonomous subsystems and therefore our cloud accounts as well. In Chapter 7, Bridging Intersystem Gaps, we will learn how to create an anti-corruption layer to implement the context mapping between subsystems (that is, bounded contexts).
Domain aggregate
A domain aggregate is a top-level domain entity that groups related entities together in a cohesive unit. We will exchange data between services and subsystems at this aggregate level. We will cover data modeling and introduce the idea of data life cycle architecture in Chapter 5, Turning the Cloud into the Database. We will introduce the Backend for Frontend (BFF) pattern later in this chapter, and in Chapter 6, A Best Friend for the Frontend, we will see how each BFF service owns a domain aggregate at a specific phase in the data life cycle.
Domain event
A domain event represents a change in state of a domain aggregate. These events are fundamental in our event-driven architecture and the heart of our autonomous services. We will cover the idea of event-first thinking in this chapter and event processing in Chapter 4, Trusting Facts and Eventual Consistency.
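To make the aggregate and event concepts concrete, here is a minimal sketch in TypeScript. The Order aggregate, the order-submitted event type, and the envelope fields are hypothetical names chosen only for illustration; the exact shapes you use will vary.

```typescript
// A hypothetical top-level domain aggregate that groups related entities.
interface Order {
  id: string;
  customerId: string;
  items: { sku: string; quantity: number }[];
  status: 'draft' | 'submitted' | 'fulfilled';
}

// A domain event records a change in the state of an aggregate and carries a
// snapshot of it, so downstream consumers need not call back upstream.
interface DomainEvent<T> {
  id: string;           // unique event identifier
  type: string;         // for example, 'order-submitted'
  timestamp: number;    // when the state change occurred
  partitionKey: string; // typically the aggregate's id
  payload: T;           // the aggregate at this point in its life cycle
}

const orderSubmitted: DomainEvent<Order> = {
  id: 'evt-1',
  type: 'order-submitted',
  timestamp: Date.now(),
  partitionKey: 'order-1',
  payload: {
    id: 'order-1',
    customerId: 'cust-1',
    items: [{ sku: 'abc-123', quantity: 2 }],
    status: 'submitted',
  },
};
```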
And of course, we will strive to use a ubiquitous language within each bounded context.
SOLID principles
SOLID is an acronym for a set of guiding design principles that help us implement clean, flexible, and maintainable software. The principles include the following:
Single Responsibility Principle
Open-Closed Principle
Liskov Substitution Principle
Interface Segregation Principle
Dependency Inversion Principle
Robert Martin first promoted these principles for improving object-oriented designs. Since then, these principles have proven equally valuable at the architectural level. Although the implementation of the principles varies at different levels, it is convenient to apply a consistent mental framework throughout. Let’s see how we can use each of these principles to guide the creation of evolutionary architectures that enable change.
Single Responsibility Principle
The Single Responsibility Principle (SRP) states that a module should be responsible to one, and only one, actor.
The SRP is simultaneously the most referenced and arguably the most misunderstood of all the SOLID principles. At face value, the name of this principle suggests a module should do one thing and one thing only. This misunderstanding leads to the creation of fleets of microservices that are very small, highly interdependent, and tightly coupled. Ultimately, this results in the creation of the microliths and microservice death stars that I mentioned in the first chapter. The author of this principle, Robert (Uncle Bob) Martin, has made an effort to correct this misunderstanding in his book Clean Architecture.
The original and common definition of this principle states a module should have one, and only one, reason to change. The operative word here is change. This is crucial in the context of architecture because the purpose of architecture is to enable change. However, when we apply this principle incorrectly, it tends to have the opposite effect, because highly interconnected modules impede change. The coupling increases the need for inter-team communication and coordination and thus slows down innovation.
In his latest definition of the SRP, Uncle Bob focuses on the source of change. Actors (that is, people) drive changing requirements. These are the people that use the software, the stakeholders, the owners of external systems that interact with the system, and even the governing bodies that impose regulations. After all, people are the reason that software systems exist in the first place.
The goal of this principle is to avoid creating a situation where we have competing demands on a software module that will eventually tie it in knots and ultimately impede change. We achieve this goal by defining architectural boundaries for the different actors. This helps ensure that the individual modules can change independently with the whims of their actors. Uncle Bob also refers to this as the Axis of Change, where the modules on either side of the boundary change at different rates.
Later in this chapter, we will use the SRP to divide our software system into autonomous subsystems and to decompose subsystems into autonomous services. The architectural boundaries will also extend vertically across the tiers, so that the presentation, business logic, and data tiers can all change together.
Open-Closed Principle
The Open-Closed Principle (OCP) states that a module should be open for extension but closed for modification. Bertrand Meyer is the original author of this principle.
This principle highlights how human nature impacts a team’s pace of innovation. We naturally slow down when we modify an existing piece of software, because we have to account for all the impacts of that change. If we are worried about unintended side effects, then we may even resist change altogether. Conversely, adding new capabilities to a system is far less daunting. We are naturally more confident and willing to move forward when we know that the existing usage scenarios remain completely untouched.
An architecture that enables change is open to adding and experimenting with new features while leaving existing functionality completely intact to serve the current user base. With the SRP, we define the architectural boundaries along the axis of change to limit the scope of a given change to a single service. We must also close off other services to any inadvertent side effects by fortifying the boundaries with bulkheads on both sides.
Events will be our main mechanism for achieving this freedom. We can extend existing services to produce new event types without side effects. We can add new services that produce existing event types without impacting existing consumers. And we can add new consumers without changes to existing producers. At the presentation tier, micro frontends will be our primary mechanism for extension, which we will cover in Chapter 3, Taming the Presentation Tier.
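As a hedged sketch of this openness, consider adding a brand-new consumer for an existing event type. The listener and event names below are hypothetical; the point is that nothing in the producer, or in any existing consumer, has to change.

```typescript
// Minimal local event shape for this sketch (see the earlier domain event example).
type OrderEvent = { type: string; payload: { id: string } };

// A brand-new consumer reacting to the existing 'order-submitted' event type.
// We extend the system by adding this service; we do not modify the producer
// or any existing consumer, so their behavior remains untouched.
export const analyticsListener = async (batch: OrderEvent[]): Promise<void> => {
  for (const event of batch) {
    if (event.type !== 'order-submitted') continue;            // only react to the types we care about
    console.log('recording order metric for', event.payload.id); // the new capability lives here
  }
};
```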
Sooner or later, modification and refactoring of existing code is inevitable. When this is necessary, we will employ the Robustness principle to mitigate the risks up and down the dependency chains. We will cover common scenarios for extending and modifying systems with zero downtime in Chapter 11, Choreographing Deployment and Delivery.
Liskov Substitution Principle
The Liskov Substitution Principle (LSP) states that objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program. Barbara Liskov is the original author of this principle, hence the L in SOLID.
The substitution principle is essential to creating evolutionary architecture. Most innovations will consist of incremental changes. Yet, some will require significant changes and necessitate running multiple versions of a capability simultaneously. Following the LSP, we can substitute in new versions, so long as they fulfill the contracts with upstream and downstream dependencies.
The LSP will play a major role in the hexagonal architecture we will cover shortly. We will use events to define the contracts for inter-service communication. This design-by-contract approach enables the substitution that powers the branch-by-abstraction approach. We can substitute event producers and swap event consumers. We will leverage the LSP to strangle legacy applications as we modernize our systems and continue this evolutionary process indefinitely to support continuous innovation with zero downtime. But zero downtime requires an understanding of all the interdependencies, and this leads us to the Interface Segregation Principle.
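To illustrate substitution with a hedged, hypothetical sketch: as long as both producers below honor the same event contract, we can swap one for the other, for example while strangling a legacy application, without downstream consumers noticing. The names are invented for this example.

```typescript
// The event contract that consumers depend on.
interface ItemAddedEvent {
  type: 'item-added';
  itemId: string;
  timestamp: number;
}

// Any producer that fulfills this contract can be substituted for another.
interface ItemEventProducer {
  publish(event: ItemAddedEvent): Promise<void>;
}

// Version 1: a producer fronting a legacy application (hypothetical).
class LegacyItemProducer implements ItemEventProducer {
  async publish(event: ItemAddedEvent): Promise<void> {
    console.log('legacy adapter publishing', event.itemId);
  }
}

// Version 2: a modernized producer. Consumers cannot tell the difference,
// because both honor the same contract - this is what lets us strangle
// the legacy system incrementally.
class ModernItemProducer implements ItemEventProducer {
  async publish(event: ItemAddedEvent): Promise<void> {
    console.log('modern service publishing', event.itemId);
  }
}
```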
Interface Segregation Principle
The Interface Segregation Principle (ISP) states that no client should be forced to depend on interfaces it does not use.
I provided an anecdote at the beginning of this chapter that highlighted the build-time issues that can arise when we violate the ISP. These build-time issues can have a big impact on a monolith. However, they are of less concern for our autonomous services because they are independently deployable units with their own CI/CD pipelines. We are still concerned about including unnecessary libraries because they can have an impact on cold start times and increase the risk of security vulnerabilities. But our real concern is with the deployment and runtime implications of violating the ISP and how they impact downtime.
Our goal is to create an architecture that enables change so that we can reduce our lead times, deploy more often, and tighten the feedback loop. This requires confidence that our deployments will not break the system. Our confidence comes from an understanding of the scope and impact of any given change and our certainty that we have accounted for all the side effects and minimized the potential for unintended consequences. We facilitate this by creating clean and lean interfaces that we segregate from other interfaces to avoid polluting them with unnecessary concerns. This minimizes coupling and limits scope so that we can easily identify the impacts of a change and coordinate a set of zero-downtime deployments.
A common mistake that violates the ISP is the creation of general-purpose interfaces. The misperception is that reusing a single interface will accelerate development time. This may be true in the short term, but not in the long term. The increased coupling ultimately impedes innovation because of competing demands and the risk of unintended consequences.
This is a primary driver behind creating client-specific interfaces using the Backend for Frontend pattern. We will cover this pattern in detail in Chapter 6, A Best Friend for the Frontend.
For all our inter-service communication, our individual domain events are already nicely segregated, because we can change each of them independently. We do have to account for the fact that many downstream services will depend on these events. We will manage this by dividing the system into subsystems of related services and using internal domain events for intra-subsystem communication and external domain events for inter-subsystem communication. From here, we will leverage the Robustness principle to incrementally evolve our domain events. We will see this in play in Chapter 11, Choreographing Deployment and Delivery.
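A hedged sketch of this segregation: internal event types are free to evolve quickly with the subsystem, while the external event type that crosses the subsystem boundary exposes a deliberately smaller, more stable surface. The field names shown are hypothetical.

```typescript
// Internal domain events circulate only within a subsystem and can evolve
// rapidly alongside its services.
interface InternalOrderEvent {
  type: 'order-item-added' | 'order-discount-applied' | 'order-submitted';
  orderId: string;
  detail: Record<string, unknown>; // free to change as the subsystem evolves
}

// External domain events cross subsystem boundaries, so their surface is
// deliberately smaller and more stable - other subsystems depend on them.
interface ExternalOrderEvent {
  type: 'order-submitted';
  orderId: string;
  submittedAt: string; // ISO timestamp
}
```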
Even with well-segregated interfaces, we still need to avoid leaky abstractions. This occurs when details of specific upstream services are visible in the event payload, and we inadvertently use those details in services downstream. This leads us to the Dependency Inversion Principle.
Dependency Inversion Principle
The Dependency Inversion Principle (DIP) states that a module should depend on abstractions, not concretions.
At the code level, we also refer to the DIP as programming to an interface, not to an implementation. We also refer to it as Inversion of Control (IoC). It manifests itself in the concept of Dependency Injection (DI), which became an absolute necessity in monolithic systems. It eliminates cyclic dependencies, enables testing with mocks, and permits code to change and evolve by substituting implementations while holding interfaces closed to modification. We will use simple constructor-based DI in our serverless functions without the need for a heavyweight framework.
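Here is a minimal sketch, assuming a hypothetical order domain, of what simple constructor-based DI can look like inside a serverless function: the model depends only on an abstraction, and the handler performs the wiring, so a test can substitute an in-memory implementation for the real connector.

```typescript
// The abstraction the model depends on.
interface OrderStore {
  save(order: { id: string; status: string }): Promise<void>;
}

// The domain model receives its dependency through the constructor and knows
// nothing about which datastore or cloud service sits behind it.
class OrderModel {
  constructor(private readonly store: OrderStore) {}

  async submit(orderId: string): Promise<void> {
    await this.store.save({ id: orderId, status: 'submitted' });
  }
}

// A concrete implementation (hypothetical); in production this would be a
// connector wrapping the real datastore, and in tests a mock.
class InMemoryOrderStore implements OrderStore {
  readonly orders: { id: string; status: string }[] = [];
  async save(order: { id: string; status: string }): Promise<void> {
    this.orders.push(order);
  }
}

// The handler performs the wiring - no DI framework required.
const model = new OrderModel(new InMemoryOrderStore());

export const handler = async (event: { orderId: string }): Promise<void> => {
  await model.submit(event.orderId);
};
```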
At the architecture level, I think we can best understand the value of the DIP by using the scientific method as an analogy. Holding variables constant is a crucial component of the scientific method. When we perform a scientific experiment, we cannot derive anything from the results if nothing was held constant because it is impossible to determine cause and effect. In other words, some level of stability is a prerequisite for the advancement of knowledge. Stability in the face of change is the whole motivation behind autonomous services. We need the ability to continuously change and evolve a running system while maintaining stability.
The DIP is a fundamental principle for creating architecture that provides for flexibility and evolution while maintaining stability. For any given change to a service, we have to hold something constant to maintain the stability of the system. That constant in an event-first system is the domain events. When we modify any service, all others will rely on the stability of the event types they all share. This will control the scope and impact of the change so that teams have the confidence to move forward.
Translating the DIP to the architectural level, we get the following:
Domain events are the abstractions and autonomous services are the concretions.
Upstream services should not depend on downstream services and vice versa. Both should depend on domain events.
Domain events should not depend on services. Services should only depend on domain events.
In fact, many upstream services won’t know or care that a downstream service exists at all. Downstream services will depend on the presence of an upstream service because something has to produce a needed event, but they should not know or care which specific upstream service produced an event.
This ultimately means that upstream services are responsible for the flow of control and downstream services are responsible for the flow of dependencies. In other words, upstream services control when an event is produced and downstream services control who consumes those events. Hence, the name of the principle still applies at the architectural level. We are inverting the flow of dependency (that is, consumption) from the flow of control (that is, production).
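The following hypothetical fragment makes the inversion visible: the upstream producer controls when an event is published but knows nothing about its consumers, while each downstream consumer owns the declaration of what it listens to.

```typescript
// Upstream (flow of control): the producer decides when to publish,
// with no knowledge of who, if anyone, will consume the event.
export const publishOrderSubmitted = async (orderId: string): Promise<void> => {
  const event = { type: 'order-submitted', orderId, timestamp: Date.now() };
  console.log('publishing', event); // stand-in for putting the event on the bus
};

// Downstream (flow of dependency): the consumer owns its subscription,
// expressed here as a hypothetical filter pattern it registers with the bus.
export const subscription = {
  detailType: ['order-submitted'], // the consumer decides which events it reacts to
};
```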
This event-based collaboration will be most evident when we cover the Control Service Pattern in Chapter 8, Reacting to Events with More Events. These high-level services embody control flow policies, and other services simply react to the events they produce.
Taking the DIP to the next level is the notion that downstream services react to domain events, and upstream services do not invoke downstream services. This is an inversion of responsibility that leads to more evolutionary systems. It allows us to build systems in virtually any order and gives us the ability to create end-to-end test suites that don’t require the entire system to be running or even completely implemented. This stems from the power of Event-First Thinking, which we will cover shortly. But first, let’s review the concepts of Hexagonal Architecture.
Hexagonal Architecture
Building on the SOLID principles, we need an architecture that allows us to assemble our software in flexible ways. Our software needs the ability to execute in different runtime environments, such as a serverless function, a serverless container, or a testing tool. We may decide to change a dependency, such as switching from one type of datastore to another, or even switching cloud providers. Most importantly, we need the ability to test our domain logic in isolation from any remote resources. To support this flexibility, we need a loosely coupled software architecture that hides these technical design decisions from the domain logic.
Alistair Cockburn’s Hexagonal Architecture is a popular approach for building loosely coupled modules. Robert Martin’s Clean Architecture and Jeffrey Palermo’s Onion Architecture are variations on this theme. An alternate name for this architecture is Ports and Adapters, because we connect modules through well-defined ports (that is, interfaces) and glue them together with adapter code. When we need to make a change, we leverage the SOLID principles, such as the DIP and LSP, and substitute different adapters.
Hexagonal architecture gets its name from the hexagonal shapes we use in the diagrams. We use hexagons simply because they allow us to represent systems as a honeycomb structure of loosely coupled modules. We read these diagrams from left to right. We refer to the left side as the primary or driving side because it defines the module’s input flow, while we call the right side the secondary or driven side since it defines the output flow. The inner hexagon represents the clean domain model with its input and output ports, and the outer hexagon holds the inbound and outbound adapters.
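Read left to right, a hedged sketch of this structure might look like the following: a driving-side adapter invokes the model through its input port, and the model drives the outside world only through its output port, behind which a driven-side adapter does the actual work. All names are hypothetical.

```typescript
// Input port: how the outside world drives the domain model.
interface SubmitOrder {
  submit(orderId: string): Promise<void>;
}

// Output port: how the domain model drives the outside world.
interface PublishEvent {
  publish(type: string, detail: object): Promise<void>;
}

// The inner hexagon: pure domain logic, aware only of its ports.
class OrderService implements SubmitOrder {
  constructor(private readonly events: PublishEvent) {}
  async submit(orderId: string): Promise<void> {
    await this.events.publish('order-submitted', { orderId });
  }
}

// Driven-side (right) adapter: glues the output port to some bus (stubbed here).
class ConsoleEventAdapter implements PublishEvent {
  async publish(type: string, detail: object): Promise<void> {
    console.log('publishing', type, detail);
  }
}

// Driving-side (left) adapter: for example, a function handler that translates
// an external request into a call on the input port.
const service: SubmitOrder = new OrderService(new ConsoleEventAdapter());
export const handler = async (request: { orderId: string }): Promise<void> =>
  service.submit(request.orderId);
```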
Hexagonal architecture emerged well before serverless technology and the popularity of the cloud. Its origins align with the development of more monolithic software systems. Today, our serverless solutions consist of fine-grained resources that we compose into distributed, event-driven systems. As a result, the shape of our systems has changed, but their nature remains the same, and therefore, hexagonal architecture is more important than ever.
We need to scale hexagonal architecture up to a macro perspective and down to a nano perspective. In our serverless architecture, we will apply the hexagonal concepts at three different levels, the function, service, and subsystem levels, as depicted in Figure 2.1:
Figure 2.1: Hexagonal architecture
This summary diagram features detailed diagrams for each of the three levels, Nano, Micro, and Macro, all displayed together so we can get a bird’s-eye view of how our serverless systems fit together. We need a mental picture of what is happening in a serverless function, how we combine these functions into autonomous services, and how we compose autonomous subsystems out of these services. We will see these diagrams again, individually, when we dig into the details. For now, let’s review summary descriptions of each level.
Note that you can find a legend for all the icons in the preface of the book.
Function-level (nano)
The function-level or nano hexagonal architecture describes the structure and purpose of the code within a serverless function. This level is most like traditional hexagonal architecture, but we scale it down to the scope of an individual serverless function. The handler and connector adapt the Model to the cloud services. We will dig deeper into this level in the Dissecting autonomous services section of this chapter.
Service-level (micro)
The service-level or micro hexagonal architecture describes the structure and purpose of the resources within an autonomous service. This level is less traditional because we are spreading the code across multiple serverless functions. The entities of the internal domain model live in a dedicated datastore so that we can share them across all the functions. The listener and trigger functions adapt the internal domain model to the domain events exchanged between services. The command and query functions adapt the model for frontend communication. We will dig deeper into the details of this level in the Dissecting autonomous services section later in this chapter.
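To summarize these roles in code, here is a hypothetical outline of the four kinds of functions such a service might expose; the bodies are stubs, and the names mirror the roles described above rather than any specific implementation.

```typescript
// listener: consumes upstream domain events and materializes entities
// into the service's own datastore.
export const listener = async (events: { type: string; detail: object }[]): Promise<void> => {
  console.log('materializing', events.length, 'events');
};

// trigger: reacts to changes in the datastore and publishes domain events
// for downstream services.
export const trigger = async (changes: { newImage: object }[]): Promise<void> => {
  console.log('publishing', changes.length, 'domain events');
};

// command and query: adapt the internal model for frontend communication.
export const command = async (request: { action: string; payload: object }): Promise<void> => {
  console.log('handling command', request.action);
};

export const query = async (request: { id: string }): Promise<object> => {
  return { id: request.id }; // return the requested view of the model
};
```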
Subsystem-level (macro)
The subsystem-level or macro hexagonal architecture describes the structure and purpose of the autonomous services within an autonomous subsystem. This level is different from traditional hexagonal architecture because the adapters are full services instead of just code artifacts. The Core of the subsystem is composed of Backend for Frontend (BFF) services and Control services that work together to implement the domain model, and the internal domain events exchanged between these services define the ports of the model. The Ingress and Egress External Service Gateway (ESG) services adapt external domain events to internal domain events. We will cover external domain events in the Creating subsystem bulkheads section of this chapter. We will dig into the details of this level when we cover the ESG pattern in Chapter 7, Bridging Intersystem Gaps.
Now that we have covered the foundational concepts, let’s learn how event-first thinking helps us create evolutionary systems.