Software Architecture with C++

Join our Community on Discord!

https://packt.link/3Wrwh

The purpose of this introductory chapter is to show what role software architecture plays in software development. It will focus on the key aspects to keep in mind when designing the architecture of a C++ solution. We'll discuss how to design efficient code with convenient and functional interfaces. We'll also show how a domain-driven approach complements Agile principles and guides both code and architecture.

In this chapter, we'll cover the following topics:

  • Understanding software architecture
  • Learning the importance of proper architecture
  • Exploring the fundamentals of good architecture
  • Developing architecture using Agile principles
  • The philosophy of C++
  • Following the SOLID and DRY principles
  • Coupling and cohesion

Technical requirements

To play with the code from this chapter, you'll need the following:

Understanding software architecture

Let's begin by defining what software architecture is. When you create an application, library, or any software component, you need to think about how the elements you write will look and how they will interact with each other. This arrangement of elements and their interactions defines software architecture. In other words, you're designing different elements and their relations with their surroundings. Just like with urban architecture, it's important to think about the bigger picture so as not to end up in a haphazard state. Thus, the architecture of a software system is a metaphor similar to the architecture of a building; it is a set of important decisions about the organization of a software system. On a small scale, every single building may look okay, but the buildings may not fit together well. Similarly, while software architecture aims to create a well-structured system, software development might progress in unexpected ways. Keep in mind that whether you put thought into it or not, when writing software you are creating an architecture. Therefore, avoid accidental architectures—those that arise without a clear strategy—as they can disrupt your IT systems.

On the other hand, emerging architectures—those that emerge gradually from the multitude of design decisions—are inevitable as systems grow. Over time, they become explicitly identified and are implemented intentionally after proving themselves.

So, what exactly should you be creating if you want to mindfully define the architecture of your solution? The Software Engineering Institute has this to say:

The software architecture of a system is the set of structures needed to reason about the system, which comprise software elements, relations among them, and properties of both.

In the following section, we will discuss different types of architectures and explore how software architecture fits within a broader context. A clear understanding of the architecture type helps in correctly identifying the elements involved and defining the scope of work instead of jumping straight into writing code.

Different ways to look at architecture

There are several ways to look at architecture, each with a different scope:

  • Enterprise architecture deals with the whole company or even a group of companies. It takes a holistic approach and is concerned about the strategy of whole enterprises. When thinking about enterprise architecture, you should be looking at how all the systems in a company behave and cooperate with each other. It's concerned about the alignment between business and IT.
  • Solution architecture is less abstract than its enterprise counterpart. It stands somewhere in the middle between enterprise and software architecture. Usually, solution architecture is concerned with one specific system and the way it interacts with its surroundings. A solution architect needs to come up with a way to fulfill a specific business need, usually by designing a whole software system or modifying existing ones.
  • Software architecture is even more concrete than solution architecture. It concentrates on a specific project, the technologies it uses, and how it interacts with other projects. A software architect is interested in the internals of the project's components.
  • Infrastructure architecture is, as the name suggests, concerned about the infrastructure that the software will use. It defines the deployment environment and strategy, how the application will scale, failover handling, site reliability, and other infrastructure-oriented aspects.

Solution architecture is based on both software and infrastructure architectures to satisfy the business requirements. Later in this chapter, we will talk about both of those aspects to prepare you for both small- and large-scale architecture design. Let's now discuss another critical aspect of understanding software architecture.

Communication and culture

The focus of this book is software architecture. Why would we want to mention communication and culture in a book about software, then? If you think about it, all software is written by people for people. The human aspect is prevalent, and yet we often fail to admit it.

As an architect, your role won't be just about figuring out the best approach to solving a given problem. You'll also have to communicate your proposed solution to your team members. Often, the choices you make will result from previous discussions. These are the reasons communication and team culture also play a role in software architecture.

Conway's Law states that the architecture of a software system reflects the organization that's working on it. This means that building great products requires building great teams and understanding that social interaction impacts the success or failure of projects.

Development culture can be compared to an ecosystem. It is built through daily work and cannot be introduced by decree. The culture can become destructive if you don't take care of it, and poor management can degrade even a well-established team culture within an organization.

Thus, if you want to be a great architect, learning people skills may be as important as learning technical ones. Now that we have looked at both the technical and social aspects of software architecture, let's answer one fundamental question: why is architecture important?

Learning the importance of proper architecture

A better question would be: why is caring about your architecture important? As we mentioned earlier, regardless of whether you put conscious effort into building it or not, your software will end up with some kind of architecture. If, after several months or even years of development, you still want your software to retain its qualities, you need to take some steps earlier in the process. If you don't think about your architecture, chances are it will never exhibit the required qualities.

So, in order for your product to meet the business requirements (formal descriptions of business-related objectives and expectations) and quality attributes such as performance, maintainability, and scalability, you need to take care of its architecture, and it is best if you do so as early as you can in the process. Failing to do so could result in the issues discussed in the following two subsections.

Technical debt

Even after you've done the initial work and settled on a specific architecture, you need to continuously monitor how the system evolves and whether it still aligns with its users' needs, as those may also change during the development and lifetime of your software. Technical debt, sometimes also called software decay, software erosion, or software rot, occurs when the implementation decisions don't correspond to the intentional architecture. It is a metaphor describing the trade-off between short-term gains and the long-term stability of software development.

Technical debt can result from a variety of factors, such as unclear project requirements, poorly or hastily written code, hard-coded values, missing or outdated documentation, insufficient or missing tests, lack of code reviews, deprecated libraries or frameworks, deferred upgrades, or accumulated bug debt.

Accidental architecture

Failing to track whether the development adheres to the chosen architecture, or failing to intentionally plan how the architecture should look, will often result in a so-called accidental architecture. This can happen regardless of applying best practices in other areas, such as testing or having a specific development culture.

There are architectural anti-patterns and smells that suggest your architecture is accidental. Code resembling the Big Ball of Mud or spaghetti code, an architectural anti-pattern that suggests a lack of structure, is the most obvious example. Having a god object, where one entity is responsible for everything at once, is another important sign. Altogether, if your software is becoming tightly coupled, with strong dependencies between components—perhaps even circular dependencies, which occur when two or more components depend on each other—it's an important signal to put more conscious effort into how the architecture looks.

Let's now describe what an architect must understand to deliver a viable solution.

Exploring the fundamentals of good architecture

It's important to know how to recognize a good architecture from a bad one, but it's not an easy task. Recognizing anti-patterns is an important aspect of it, but for an architecture to be good, primarily it must support delivering what's expected from the software, which involves meeting functional requirements, addressing attributes of the solution, or dealing with the constraints coming from various places. Many of these aspects can be easily derived from the architecture context.

Architecture context

The context is what an architect takes into account when designing a solid solution. It comprises requirements, assumptions, and constraints, which can come from the stakeholders, as well as the business and technical environments. It also influences the stakeholders and the environments, for example, by allowing the company to enter a new market segment.

Stakeholders

Stakeholders are all the people that are somehow involved with the product. They can be your customers, the users of your system, or the management. Communication is a key skill for every architect, and properly managing your stakeholders' needs is key to delivering what they expect, in the way they want it. Different things are important to different groups of stakeholders, so try to gather input from all of those groups.

Your customers will probably care about the cost of writing and running the software, the functionality it delivers, its lifetime, time to market, and the quality of your solution.

The users of your system can be divided into two groups: end users and administrators. The former usually care about things such as the usability, user experience, and performance of the software. For the latter, the more important aspects are user management, system configuration, security, backups, and recovery.

Finally, the things that matter to stakeholders working in management are keeping development costs low, achieving business goals, staying on track with the development schedule, and maintaining product quality.

Business and technical environments

Architecture can be influenced by the business side of the company. Important related aspects are the time to market, the rollout schedule, the organizational structure, utilization of the workforce, and investment in existing assets.

By technical environment, we mean the technologies already used in a company or those that are, for any reason, required to be part of the solution. Other systems that we need to integrate with are also a vital part of the technical environment. The technical expertise of the available software engineers matters here, too: the technological decisions an architect makes can impact staffing the project, and the ratio of junior to senior developers can influence how a project should be governed. Good architecture should take all of that into account.

Equipped with all this knowledge, let's now discuss a somewhat controversial topic that you'll most probably encounter in your daily work as an architect, as it relates to a very common development methodology.

Developing architecture using Agile principles

Agile principles are concepts that encourage adaptability and efficiency in software development. The Agile Manifesto lists 12 guiding principles; you can refer to the link provided in the Further reading section to read about them. Seemingly, architecture and Agile development methodologies are in an adversarial relationship, as there are many myths around the Agile methodology; these are also mentioned in a resource linked at the end of the chapter. There are a few simple principles that you should follow in order to develop your product in an Agile way while still caring about its architecture.

Agile, by nature, is iterative and incremental. This means preparing a big, upfront design is not an option in an Agile approach to architecture. Instead, a small, but still reasonable, upfront design should be proposed. It's best if it comes with a log of decisions and the rationale for each of them. This way, if the product vision changes, the architecture can evolve with it. To support frequent release delivery, the upfront design should then be updated incrementally. Architecture developed this way is called evolutionary architecture.

Thus, managing architecture doesn't require extensive documentation. In fact, documentation should cover only what's essential, as this way it's easier to keep it up to date. It should be simple and cover only the relevant views of the system.

Also, the architect should not be considered the single source of truth and the ultimate decision-maker. In Agile environments, it's the teams that make the decisions. Having said that, it's crucial that the stakeholders contribute to the decision-making process – after all, their points of view shape how the solution should look. Nevertheless, an architect should remain part of the development team, as they often bring strong technical expertise and years of experience to the table. They should also take part in making estimations, resolving conflicts over the software architecture, and planning the architecture changes needed before each iteration.

In order for your team to remain Agile, you should think of ways to work efficiently and only on what's important. A good idea to embrace to achieve those goals is domain-driven design.

Domain-driven design

Domain-driven design (DDD) is a term introduced by Eric Evans in his book of the same title. In essence, it's about improving communication between business and engineering by bringing the developers' attention to the domain model, which primarily consists of entities and their relationships. Aligning the implementation of the software with this model often leads to designs that are easier to understand and that evolve together with the model.

What has DDD got to do with Agile? Let's recall a part of the Agile Manifesto:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
— The Agile Manifesto

So, how do DDD and Agile intersect? They share similar principles, which creates the basis for their integration:

  • Active stakeholder engagement: A ubiquitous language in DDD facilitates effective communication. Similarly, Agile focuses on collaboration.
  • Flexibility and adaptability: Agile embraces and implements changes, while DDD evolves the models to understand and represent domain specifics. Thus, both support dynamic environments.
  • Iterative development: Agile focuses on small incremental steps of software development, and DDD refines the models as they evolve. Thus, DDD aligns with the iterative nature of Agile as well as its tendency to avoid excessive documentation.

DDD and Agile complement each other and can provide better alignment with business requirements.

In order to make proper design decisions, you must understand the domain first. To do so, you'll need to talk to people a lot and encourage your development teams to narrow the gap between them and businesspeople. The concepts in the code should be named after entities that are part of the ubiquitous language, which is basically the common part of the business experts' and technical experts' jargon. Countless misunderstandings can be caused by each of these groups using terms that the other understands differently, leading to flaws in business logic implementations and often subtle bugs. Naming things with care and using terms agreed on by both groups can improve the clarity of the project. Having a business analyst or other business domain experts as part of the team can help a lot here.

If you're modeling a bigger system, it might be hard to make all the terms mean the same thing to different teams. This is because each of those teams really operates in a different context. DDD proposes the use of bounded contexts to deal with this. If you're modeling, say, an e-commerce system, you might want to think of the terms just in a shopping context, but upon closer look, you may discover that the inventory, delivery, and accounting teams actually all have their own models and terms.

Each of those bounded contexts is a different subdomain of your e-commerce domain. Ideally, each subdomain can be mapped to its own bounded context managed by a dedicated team – a part of your system with its own vocabulary. Structuring teams around specific domains is consistent with Conway's Law, described in the next chapter. It's important to set clear boundaries for such contexts when splitting your solution into smaller modules. When the boundaries are crossed, the domain models and terms may no longer be relevant or may acquire new meanings. Just like its context, each module has clear responsibilities, its own database schema, and its own code base. To help the teams in larger systems communicate, you might want to introduce a context map, such as the one shown in Figure 1.1, which shows how the terms from different contexts relate to each other:

Figure 1.1: Two bounded contexts with the matching terms mapped between them (image from one of Martin Fowler's articles on DDD: https://martinfowler.com/bliki/BoundedContext.html)
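To make the idea of bounded contexts a bit more tangible in code, here is a minimal, hypothetical sketch; the namespaces, types, and fields are invented purely for illustration. The same term, Product, carries a different meaning in the shopping and inventory contexts, and the translation between the two models happens explicitly at the context boundary:

#include <string>

// Hypothetical sketch: each bounded context owns its own model of a "Product".
// Translation between the two models happens explicitly at the context boundary.
namespace shopping {
struct Product {
  std::string display_name;  // what the customer sees
  double price_in_euro{};    // pricing matters in the shopping context
};
}  // namespace shopping

namespace inventory {
struct Product {
  std::string sku;       // warehouse identifier
  int units_in_stock{};  // stock levels matter in the inventory context
};
}  // namespace inventory

Keeping each model inside its own namespace (or module, or service) makes it harder for one context's assumptions to silently leak into another.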

As you now understand some of the important project-management topics, we can switch to a few more technical ones.

The philosophy of C++

Let's now move closer to the programming language we'll be using throughout this book. C++ is a multi-paradigm language that has been around for a few decades now. It has changed a lot since its inception. When C++11 came out, Bjarne Stroustrup, the creator of the language, said that it felt like a completely new language. The release of C++20 marked another milestone in the evolution of this beast, bringing a similar revolution to how we write code. C++23 is another great addition that expands on C++20 and brings new features. One thing, however, has stayed the same during all those years: the language's philosophy. It can be summarized by the following key principles:

  • You don't pay for what you don't use.
  • The language ensures backward compatibility across C++ standards.
  • It supports portability, interoperability, and cross-platform development.
  • What you do use is just as efficient as what you could reasonably write by hand.

Not paying for what you don't use means that, for example, if you want your data to be created on the stack, you can do that. Many languages allocate their objects on the heap, but that's not necessary in C++. Allocating on the heap has some cost to it – your allocator will probably have to lock a mutex, which can be a big burden in some types of applications. The good part is that you can easily allocate variables without dynamically allocating memory each time.

Backward compatibility is so important in C++ because there are systems that have been working for decades. If you take an old C++ project, it will most likely compile with a modern compiler, perhaps with some modifications.

Moreover, C++ was originally designed to be portable across different operating systems and processor architectures. Beyond that, C++ can interoperate with other programming languages: directly with C, and through language bindings with Python, JavaScript (Node.js), Rust, Swift, Go, Java (JNI), and C#. Cross-platform development with C++ is typically based on conditional compilation using preprocessor directives.

Further, high-level abstractions are what differentiate C++ from lower-level languages such as C or assembly. They allow for expressing ideas and intent directly in the source code, which works well with the language's type safety. Consider the following code snippet, which uses fundamental number types for storing units of measurement:

struct Duration {
  int millis_;
};

void example() {
  auto d = Duration{};
  d.millis_ = 100;

  auto timeout = 1; // one second
  d.millis_ = timeout; // error: we meant 1000 milliseconds but assigned just 1
}

A much better idea would be to utilize the type-safety features such as user-defined literals offered by the language to avoid such conversion errors:

#include <chrono>

struct Duration {
  std::chrono::milliseconds millis_;
};

void example() {
  using namespace std::literals::chrono_literals;
  auto d = Duration{};
  // d.millis_ = 100; // compilation error, as 100 could mean anything
  d.millis_ = 100ms;   // okay
  auto timeout = 1s;   // or std::chrono::seconds(1);
  d.millis_ = timeout; // okay, converted automatically to milliseconds
}

The preceding abstraction can save us from mistakes and doesn't cost us anything while doing so. That's why it's called a zero-cost abstraction. Sometimes, C++ even allows us to use abstractions that result in better code than if they were not used. One example of a language feature that often brings such a benefit is the concepts feature from C++20, covered in Chapter 5, Leveraging C++ Language Features.

Another great set of abstractions is provided by the Standard Template Library (STL) and the Boost libraries, which consist of various data structures and algorithms. Which of the following code snippets do you think is easier to read and easier to prove bug-free? Which expresses the intent better? std::string_view, unlike const char*, is not a trivial type, but a class with a set of fields and methods. A const char* points to a null-terminated C string, which requires either calculating the length of the string or storing the length separately. In addition, when using modern C++ abstractions like std::string_view, you do not need to reimplement algorithms such as std::count; you can simply pull them in from the standard library.

// Approach #1
int count_dots(const char *str, std::size_t len) {
  int count = 0;
  for (std::size_t i = 0; i < len; ++i) {
    if (str[i] == '.') count++;
  }
  return count;
}

// Approach #2
#include <algorithm>
#include <string_view>

int count_dots(std::string_view str) {
  return std::count(str.begin(), str.end(), '.');
}

Okay, the second function has a different interface, but even if it were to stay the same, we could just create a std::string_view object from the pointer and the length. Since std::string_view is such a lightweight type, your compiler should be able to keep it in processor registers instead of memory.

Thus, using higher-level abstractions leads to simpler, more maintainable code. The C++ language has strived to provide zero-cost abstractions since its inception, so it's advisable to build upon that instead of reinventing the wheel using lower levels of abstraction. To use these abstractions efficiently, it's important to understand how copying affects performance in C++.

Lightweight, cheap-to-copy types such as std::string_view (C++17), std::span (C++20), and std::function_ref (C++26) are designed to be passed by value. In contrast, heavier, expensive-to-copy types, in particular std::string and std::vector, are preferably passed by reference, because copying such objects to pass them by value can lead to a large memory footprint and a performance penalty. Passing by constant reference is therefore an optimization that avoids the copy while preventing modification of the original object; for lightweight types, however, passing by value remains just as efficient because they're literally cheap to copy.

By default, in C++, function arguments are passed by value, which means that an object passed to the function is copied and destroyed on return, while the original object is never modified. It's easy to overlook that copy constructors are implicitly called when copying class objects, which can also lead to object slicing when a subclass object is copied into a superclass object that has neither the subclass's member variables nor its functions, causing hard-to-detect bugs. If your class does not need copy or move constructors, delete them explicitly.

Building on the idea of writing simple and maintainable code, the next section introduces some rules and heuristics that are invaluable on the path to writing such code.
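Before we get to those rules, here is a minimal sketch of the passing conventions we just discussed; the function and type names are ours, invented purely for illustration:

#include <algorithm>
#include <cstddef>
#include <string_view>
#include <vector>

// Cheap-to-copy view type: pass by value.
std::size_t count_chars(std::string_view text, char c) {
  return static_cast<std::size_t>(std::count(text.begin(), text.end(), c));
}

// Expensive-to-copy container: pass by const reference to avoid the copy.
double average(const std::vector<double> &samples) {
  double sum = 0.0;
  for (double sample : samples) sum += sample;
  return samples.empty() ? 0.0 : sum / static_cast<double>(samples.size());
}

// A resource-owning type that should never be copied: deleting the copy
// operations turns accidental copies into compile-time errors.
class ConnectionHandle {
 public:
  ConnectionHandle() = default;
  ConnectionHandle(const ConnectionHandle &) = delete;
  ConnectionHandle &operator=(const ConnectionHandle &) = delete;
};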

Following the SOLID and DRY principles

There are many principles to keep in mind when writing code. Regardless of whether you are writing C++ in a mostly object-oriented manner or not, you should be familiar with the SOLID and DRY principles.

SOLID principles

SOLID is a set of practices that can help you write cleaner and less bug-prone software. It's an acronym made from the first letters of the five concepts behind it:

  • Single responsibility principle
  • Open-closed principle
  • Liskov substitution principle
  • Interface segregation principle
  • Dependency inversion principle

Single responsibility principle

In short, the Single Responsibility Principle (SRP) means each code unit should have exactly one responsibility. This means writing functions that do one thing only, creating types that are responsible for a single thing, and creating higher-level components that are focused on a single aspect.

For instance, suppose our class both manages some kind of resource, such as file handles, and parses strings to find numbers:

#include <ios>  // std::streamsize
#include <string>
#include <vector>

class FileManagerAndParser {
public:
  int read(char* s, std::streamsize n) { return 0; }

  void write(const char* s, std::streamsize n) {}

  std::vector<int> parse(const std::string &s);
};

When maintaining this class or inheriting from it, you will need to track changes to both functionalities instead of dealing with them separately. Moreover, some derived classes may simply not need one of them. Therefore, it's better to split the class into two classes, each with a single responsibility:

class FileManager {
public:
  int read(char* s, std::streamsize n) { return 0; }

  void write(const char* s, std::streamsize n) {}
};

class Parser {
public:
  std::vector<int> parse(const std::string &s);
};

Often, if you see a function with And in its name, it's violating the SRP and should be refactored. Another sign is when a function has comments indicating what each block of the function does; each such block would probably be better off as a distinct function.

The best-known anti-pattern violating the single responsibility principle is the god object, which knows too much or does too much. Following the principle means decomposing complex classes that do many things at once into simple, specialized ones. It is intended to simplify further modifications and maintenance by reducing complexity, but beware that excessive decomposition can be harmful, as it reintroduces complexity and makes maintenance more difficult.

A related topic is the principle of least knowledge, also known as the Law of Demeter. In essence, it says that no object should know more than necessary about other objects, so it doesn't depend on any of their internals, and an object should only communicate with its immediate neighbors. Applying it leads to more maintainable code with fewer interdependencies between components. The recommendations are easy to remember:

  • Each unit should only know about the units that are closely related to it.
  • Each unit should only talk to its immediate friends.
  • It shouldn't talk to strangers.
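To see what this looks like in practice, here is a minimal, hypothetical sketch (the Customer and Wallet types are invented for illustration): instead of reaching through an object's internals, the caller asks its immediate neighbor to do the work:

// A violation of the Law of Demeter would look like:
//   customer.getWallet().debit(amount);  // the caller reaches into Customer's internals

class Wallet {
 public:
  void debit(double amount) { balance_ -= amount; }

 private:
  double balance_ = 0.0;
};

class Customer {
 public:
  // Callers talk only to their immediate neighbor: they ask the Customer to pay
  // and never learn that a Wallet exists at all.
  void pay(double amount) { wallet_.debit(amount); }

 private:
  Wallet wallet_;
};

void checkout(Customer &customer, double amount) {
  customer.pay(amount);  // no knowledge of Customer's internals required
}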

The principle was proposed by Ian Holland at Northeastern University in 1987. It was named after the Demeter project, which was itself inspired by Demeter, the Greek Goddess of Agriculture.

Open-closed principle

The Open-Closed Principle (OCP) states that software entities (such as functions, classes, and modules) should be open for extension but closed for modification. Open for extension means that new functionality can be added without changing the existing code. Closed for modification means that existing software entities shouldn't need to be changed, as changes can often cause bugs elsewhere in the system. A great example of this principle in C++ is operator<< for std::ostream: the stream class itself is closed for modification, but you can extend it to support your custom class. All you need to do is overload the operator:

std::ostream &operator<<(std::ostream &stream, const MyPair<int, int> &mp) {
  stream << mp.firstMember() << ", ";
  stream << mp.secondMember();
  return stream;
}

Note that our implementation of operator<< is a free (non-member) function. You should prefer those to member functions if possible as it actually helps encapsulation. For more details on this, consult the article by Scott Meyers in the Further reading section at the end of this chapter. If you don't want to provide public access to some field that you wish to print to ostream, you can make operator<< a friend function, as shown:

class MyPair {
// ...
  friend std::ostream &operator<<(std::ostream &stream, 
    const MyPair &mp);
};
std::ostream &operator<<(std::ostream &stream, const MyPair &mp) {
  stream << mp.first_ << ", ";
  stream << mp.second_ << ", ";
  stream << mp.secretThirdMember_;
  return stream;
}

Friend classes, methods, and functions in C++ are useful when a class and its friends have a special relationship and the protected and private members of the class must stay hidden from other classes. In such cases, the strong coupling is intentional, as with the << operator or when testing private class members.

Note that the definition of the OCP we discussed is slightly different from the more common one related to polymorphism. The latter is about creating base classes that can't be modified themselves but are open for others to inherit from. Speaking of polymorphism, let's move on to the next principle, which uses it correctly.

Liskov substitution principle

In essence, the Liskov Substitution Principle (LSP) states that if a function takes a pointer or reference to a base class, it must be able to work with objects of derived classes passed through that pointer or reference without knowing it. This rule is sometimes broken because the techniques we apply in source code do not always map cleanly onto real-world abstractions.

A classic example is the relationship between squares and rectangles. Mathematically speaking, the former is a specialization of the latter, so there's an is a relationship between them. This tempts us to create a Square class that inherits from the Rectangle class, so we could end up with code like the following:

class Rectangle {
 public:
  Rectangle(double width, double height) : width_(width), height_(height) {}
  virtual ~Rectangle() = default;
  virtual double area() { return width_ * height_; }
  virtual void setWidth(double width) { width_ = width; }
  virtual void setHeight(double height) { height_ = height; }

 private:
  double width_;
  double height_;
};

class Square : public Rectangle {
 public:
  Square(double side) : Rectangle(side, side) {}
  ~Square() override = default;
  double area() override { return Rectangle::area(); }
  void setWidth(double width) override {
    Rectangle::setWidth(width);
    Rectangle::setHeight(width);
  }
  void setHeight(double height) override { setWidth(height); }
};

Using the derived class Square through a pointer to its base class Rectangle leads to a conceptual error:

Rectangle* s1 = new Rectangle(2, 3);
Rectangle* s2 = new Square(4);
s2->setWidth(5);
s2->setHeight(6);
std::cout << s1->area() << std::endl;  // 2*3=6 (expected)
std::cout << s2->area() << std::endl;  // 6*6=36, although we set width=5 and height=6 expecting 30

How should we implement the members of the Square class? If we want to follow the LSP and save the users of such classes from unexpected behavior, we can't: our square stops being a square if we set different values with setWidth and setHeight, because the dimensions of a square are always equal. We can either stop having a square (which is not expressible using the preceding code) or modify the height as well, thus making the square behave differently than a rectangle. That is why the Square class above implements setWidth and setHeight as a workaround: if the width changes, the height changes too, and vice versa, while users of the base class Rectangle expect the sides to change independently.

If your code violates the LSP, it's likely that you're using an incorrect abstraction. In our case, Square shouldn't inherit from Rectangle after all. A better approach could be making the two implement a Shape interface:

class Shape {
 public:
  virtual double area() = 0;
  virtual ~Shape() = default;
};

class Rectangle : public Shape {
 public:
  Rectangle(double width, double height) : width_(width), height_(height) {}
  ~Rectangle() override = default;
  double area() override { return width_ * height_; }
  virtual void setWidth(double width) { width_ = width; }
  virtual void setHeight(double height) { height_ = height; }

 private:
  double width_;
  double height_;
};

class Square : public Shape {
 public:
  Square(double side) : side_(side) {}
  ~Square() override = default;
  double area() override { return side_ * side_; }
  void setSide(double side) { side_ = side; }

 private:
  double side_;
};

The conceptual error is resolved without loss of functionality, since Shape is now the base class of both Rectangle and Square:

Shape* s1 = new Rectangle(2, 3);
Square* s = new Square(4);
s->setSide(5);
Shape* s2 = s;
std::cout << s1->area() << std::endl;  // 2*3=6 (expected)
std::cout << s2->area() << std::endl;  // 5*5=25 (expected)

Since we are on the topic of interfaces, let's move on to the next item, which is also related to them.

Interface segregation principle

The interface segregation principle is just about what its name suggests. It is formulated as follows:

No client should be forced to depend on methods that it does not use.

That sounds pretty obvious, but it has some implications that aren't so obvious. Firstly, you should prefer multiple smaller interfaces to a single big one. Secondly, when you're adding a derived class or extending the functionality of an existing one, you should think before you extend the interface the class implements.

Let's show this with an example that violates the principle, starting with the following interface:

class IFoodProcessor {
 public:
  virtual ~IFoodProcessor() = default;
  virtual void blend() = 0;
};

We could have a simple class that implements it:

class Blender : public IFoodProcessor {
 public:
  void blend() override;
};

So far so good. Now, say we want to model another, more advanced food processor, and we recklessly try to add more methods to our interface:

class IFoodProcessor {
 public:
  virtual ~IFoodProcessor() = default;
  virtual void blend() = 0;
  virtual void slice() = 0;
  virtual void dice() = 0;
};

class AnotherFoodProcessor : public IFoodProcessor {
 public:
  void blend() override;
  void slice() override;
  void dice() override;
};

Now we have an issue with the Blender class as it doesn't support this new interface – there's no proper way to implement it. We could try to hack a workaround or throw std::logic_error, but a much better solution would be to just split the interface into two, each with a separate responsibility:

class IBlender {
 public:
  virtual ~IBlender() = default;
  virtual void blend() = 0;
};

class ICutter {
 public:
  virtual ~ICutter() = default;
  virtual void slice() = 0;
  virtual void dice() = 0;
};

Now our AnotherFoodProcessor can simply implement both interfaces, and we don't need to change the implementation of our existing food processor.

We have one last SOLID principle left, so let's learn about it now.

Dependency inversion principle

Dependency inversion is a principle useful for decoupling; it works by inverting the dependency relationship. In essence, it means that high-level modules should not depend on lower-level ones. Instead, both should depend on the same abstraction, because classes should not rely on the implementation details of their dependencies.

C++ offers two ways to invert the dependencies between your classes. The first is the regular, polymorphic approach, and the second uses templates. Let's see how to apply both of them in practice.

Assume you're modeling a notification system that is supposed to have SMS and email channels. A simple approach would be to write it like so:

#include <iostream>
#include <string>

class SMSNotifier {
public:
  void sendSMS(const std::string &message) {
    std::cout << "SMS channel: " << message << std::endl;
  }
};
 
class EMailNotifier {
public:
  void sendEmail(const std::string &message) {
    std::cout << "Email channel: " << message << std::endl;
  }
};
 
class NotificationSystem {
public:
  void notify(const std::string &message) {
    sms_.sendSMS(message);
    email_.sendEmail(message);
  }
 
private:
  SMSNotifier sms_;
  EMailNotifier email_;
};

Each notifier is constructed by the NotificationSystem class. This approach is not ideal, though, since now the higher-level concept, NotificationSystem, depends on lower-level ones – the individual notifiers. Let's see how applying dependency inversion using polymorphism changes this. We can define a Notifier interface and make our notifiers implement it as follows:

class Notifier {
public:
  virtual ~Notifier() = default;
  virtual void notify(const std::string &message) = 0;
};
 
class SMSNotifier : public Notifier {
public:
  void notify(const std::string &message) override { sendSMS(message); }
 
private:
  void sendSMS(const std::string &message) {
    std::cout << "SMS channel: " << message << std::endl;
  }
};
 
class EMailNotifier : public Notifier {
public:
  void notify(const std::string &message) override { sendEmail(message); }
 
private:
  void sendEmail(const std::string &message) {
    std::cout << "Email channel: " << message << std::endl;
  }
};

Now, the NotificationSystem class no longer has to know the implementations of the notifiers. Because of this, it has to accept them as constructor arguments:

#include <memory>
#include <string>
#include <utility>
#include <vector>

class NotificationSystem {
public:
  using Notifiers = std::vector<std::unique_ptr<Notifier>>;

  explicit NotificationSystem(Notifiers notifiers)
      : notifiers_{std::move(notifiers)} {}

  void notify(const std::string &message) {
    for (const auto &notifier : notifiers_) {
      notifier->notify(message);
    }
  }

private:
  Notifiers notifiers_;
};

In this approach, NotificationSystem is decoupled from the concrete implementations and instead depends only on the polymorphic interface named Notifier. The lower-level concrete classes also depend on this interface. This can help you shorten your build times and allows for much easier unit testing – you can now easily pass mocks as arguments in your test code.

Using dependency inversion with virtual dispatch comes at a cost, however, as now we're dealing with memory allocations, and dynamic dispatch has overhead of its own. Sometimes, C++ compilers can detect that only one implementation is being used for a given interface and will remove the overhead by performing devirtualization (often you need to mark the function as final for this to work). Here, however, two implementations are used, so the performance cost of dynamic dispatch (commonly implemented as jumping through virtual method tables, or vtables for short) must be paid.

There is another way of inverting dependencies that doesn't have those drawbacks. Let's see how this can be done using a variadic template from C++11, a generic lambda from C++14, and variant, either from C++17 or from a third-party library such as Abseil or Boost. If you're not familiar with variant, it's just a class that can hold any of the types passed as its template parameters. Because we're using a variadic template that can take any number of parameters, we can pass however many types we like. To call a function on the object stored in the variant, we can either extract it using std::get or use std::visit and a callable object – in our case, a generic lambda. This shows how duck typing looks in practice. First, here are the notifier classes:

class SMSNotifier {
public:
  void notify(const std::string &message) { sendSMS(message); }
 
private:
  void sendSMS(const std::string &message) {
    std::cout << "SMS channel: " << message << std::endl;
  }
};
 
class EMailNotifier {
public:
  void notify(const std::string &message) { sendEmail(message); }
 
private:
  void sendEmail(const std::string &message) {
    std::cout << "Email channel: " << message << std::endl;
  }
};

Now we don't rely on an interface anymore, so no virtual dispatch will be done. The NotificationSystem class will still accept a vector of Notifiers:

#include <string>
#include <utility>
#include <variant>
#include <vector>

template <typename... T>
class NotificationSystem {
 public:
  using Notifiers = std::vector<std::variant<T...>>;

  explicit NotificationSystem(Notifiers notifiers)
      : notifiers_{std::move(notifiers)} {}

  void notify(const std::string &message) {
    for (auto &notifier : notifiers_) {
      std::visit([&](auto &n) { n.notify(message); }, notifier);
    }
  }

 private:
  Notifiers notifiers_;
};

Since all our notifier classes implement the notify function, the code will compile and run. If your notifier classes had different methods, you could, for instance, create a function object with overloads of operator() for the different types.

Because NotificationSystem is now a template, we have to either specify the list of types each time we create it or provide a type alias. You can use the final class like so:

using MyNotificationSystem = NotificationSystem<SMSNotifier, EMailNotifier>;
auto sn = SMSNotifier{};
auto en = EMailNotifier{};
auto ns = MyNotificationSystem{{sn, en}};
ns.notify("Quinn, Wade, Arturo, Rembrandt");

This approach is guaranteed not to make a separate heap allocation for each notifier or to use virtual tables. However, in some cases, it offers less extensibility, since once the variant is declared, you cannot add another type to it.

It's noteworthy that we used dependency injection in our examples. It is a software engineering technique used to implement the dependency inversion principle: the dependencies are injected from the outside, through constructors or setters, rather than created internally, which is beneficial to code testability (think about injecting mock objects, for example). There are also frameworks for injecting dependencies across entire applications, such as Boost.DI, Google Fruit, Hypodermic, Kangaru, and Wallaroo.

The DRY rule

DRY is short for don't repeat yourself. It means you should avoid code duplication and reuse code when possible. You should create a function or a function template if your code repeats similar operations a few times. Also, instead of creating several similar types, you should consider writing a class template.

Let's look at an example where two functions implement the same functionality and see how we can eliminate the duplication using a template:

#include <iostream>

// Two overloads implement the same functionality: returning the minimum of two ints or doubles.
int minimum(const int& x, const int& y) { return x < y ? x : y; }

double minimum(const double& x, const double& y) { return x < y ? x : y; }

// One function template can replace them both, removing the duplicated functionality.
template <typename T>
T minimum(const T& x, const T& y) {
  return x < y ? x : y;
}

// The calls look the same before and after applying the rule.
int main() {
  std::cout << minimum(3, 5) << std::endl;
  std::cout << minimum(3.0, 5.0) << std::endl;
}

It's also important not to reinvent the wheel when it's not necessary, that is, not to repeat others' work. Nowadays, there are dozens of well-written and mature libraries that can help you write high-quality software faster. We'd like to specifically mention a few of them: Boost, Folly, Abseil, Qt, EASTL, and BDE.

Sometimes duplicating code can have its benefits, however. One such scenario is developing microservices. Of course, it's always a good idea to follow DRY inside a single microservice, but violating the DRY rule for code used in multiple services can actually be worth it. Whether we're talking about model entities or logic, it's easier to maintain multiple services when code duplication is allowed.

Imagine having multiple microservices reusing the same code for an entity. Suddenly, one of them needs to modify one field. All the other services now have to be modified as well. The same goes for the dependencies of any common code. With dozens or more microservices that have to be modified because of changes unrelated to them, it's often easier for maintenance to just duplicate the code.

Since we're talking about dependencies and maintenance, let's proceed to the next section, which discusses a closely related topic.

Coupling and cohesion

Low cohesion and high coupling are usually associated with software that's difficult to test, reuse, maintain, or even understand, so it lacks many of the quality attributes usually desired in software.

Figure 1.2: Coupling versus cohesion

These terms often go together, because one trait frequently influences the other, regardless of whether the unit we're talking about is a function, class, module, library, service, or even a whole system. To give an example, monoliths are usually highly coupled and low in cohesion, while distributed services tend to be at the other end of the spectrum.

Coupling

Coupling is a measure of how strongly one software unit depends on other units. A unit with high coupling relies on many other units. The lower the coupling, the better.

An example of tightly coupled classes is the first implementation of the NotificationSystem and notifier classes from the section on the dependency inversion principle. That principle reduces the degree of direct knowledge modules have about each other, thereby reducing their coupling. Let's see what would happen if we were to add yet another notifier type to that first implementation:

class ChatNotifier {
 public:
  void sendMessage(const std::string &message) {
    std::cout << "Chat channel: " << message << std::endl;
  }
};


class NotificationSystem {
 public:
  void notify(const std::string &message) {
    sms_.sendSMS(message);
    email_.sendEmail(message);
    chat_.sendMessage(message);
  }
 private:
  SMSNotifier sms_;
  EMailNotifier email_;
  ChatNotifier chat_;
};

It looks like instead of just adding the ChatNotifier class, we also had to modify the NotificationSystem class itself. This means they're tightly coupled and that this implementation of NotificationSystem actually breaks the OCP. For comparison, let's now see how the same modification would be applied to the implementation using dependency inversion:

class ChatNotifier {
 public:
  void notify(const std::string &message) { sendMessage(message); }

 private:
  void sendMessage(const std::string &message) {
    std::cout << "Chat channel: " << message << std::endl;
  }
};

No changes to the NotificationSystem class were required, so now the classes are loosely coupled. All we needed to do was to add the ChatNotifier class. Structuring our code this way allows for smaller rebuilds, faster development, and easier testing, all with less code that's easier to maintain. To use our new class, we only need to modify the calling code:

using MyNotificationSystem =
    NotificationSystem<SMSNotifier, EMailNotifier, ChatNotifier>;
 
auto sn = SMSNotifier{};
auto en = EMailNotifier{};
auto cn = ChatNotifier{};
auto ns = MyNotificationSystem{{sn, en, cn}};
ns.notify("Azabeth Burns");

This shows coupling at the class level. On a larger scale, for instance in a microservice architecture, a common pattern is to have multiple services use a shared database and communicate through it. This causes high coupling between those services, as you cannot freely modify the database schema without affecting all the microservices that use it. A better option is to have a database per service, where low coupling can be achieved by introducing techniques such as message queueing: services communicate by sending messages to a queue instead of calling each other. The services then don't depend on each other directly, but only on the message format. However, having one database per service can be extremely expensive. A shared instance is a compromise pattern that helps solve this issue: services access only their own parts of the data and must request the rest from other services via an API or other techniques, which loosens the coupling.

Figure 1.3: Microservices database design patterns

Let's now move on to cohesion.
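Before we do, here is a rough sketch of what depending only on a message format can look like in code; the event fields and the queue interface are invented for illustration and stand in for whatever message broker client you would use in practice:

#include <string>
#include <utility>

// The only thing the producing and consuming services share is the message format.
struct OrderPlaced {
  std::string order_id;
  double total_in_euro{};
};

// A minimal queue abstraction; a real system would wrap a message broker client.
class MessageQueue {
 public:
  virtual ~MessageQueue() = default;
  virtual void publish(const OrderPlaced &event) = 0;
};

// The ordering service publishes events without knowing which services consume them.
class OrderingService {
 public:
  explicit OrderingService(MessageQueue &queue) : queue_(queue) {}

  void placeOrder(std::string id, double total) {
    queue_.publish(OrderPlaced{std::move(id), total});
  }

 private:
  MessageQueue &queue_;
};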

Cohesion

Cohesion is a measure of how strongly software elements belong together. The functionality offered by the components in a module should be strongly related to make the module highly cohesive.

Consider the following example. It may seem trivial, but showing a real-life scenario, often hundreds if not thousands of lines long, would be impractical:

class CachingProcessor {
 public:
  Result process(WorkItem work);
  Results processBatch(WorkBatch batch);
  void addListener(const Listener &listener);
  void removeListener(const Listener &listener);

 private:
  void addToCache(const WorkItem &work, const Result &result);
  void findInCache(const WorkItem &work);
  void limitCacheSize(std::size_t size);
  void notifyListeners(const Result &result);
  // ...
};

We can see that our processor actually does three kinds of work and therefore violates the SRP: the actual processing, caching the results, and managing listeners. A common way to increase cohesion in such scenarios is to extract a class, or even multiple ones:

class WorkResultsCache {
 public:
  void addToCache(const WorkItem &work, const Result &result);
  void findInCache(const WorkItem &work);
  void limitCacheSize(std::size_t size);
 private:
  // ...
};

class ResultNotifier {
 public:
  void addListener(const Listener &listener);
  void removeListener(const Listener &listener);
  void notify(const Result &result);
 private:
  // ...
};
class CachingProcessor {
 public:
  explicit CachingProcessor(ResultNotifier &notifier);
  Result process(WorkItem work);
  Results processBatch(WorkBatch batch);
 private:
  WorkResultsCache cache_;
  ResultNotifier notifier_;
  // ...
};

Now each part of the work is done by a separate, highly cohesive entity. Reusing those entities is possible without much hassle, and even turning them into class templates should require little work. Last but not least, testing such classes should be easier as well.

Putting this on a component or system level is straightforward – each component, service, and system you design should be concise and focus on doing one thing and doing it right. This concludes our introductory chapter. Let's now summarize what we've learned.

Summary

In this chapter, we discussed what software architecture is and why it's worth caring about. We've shown what happens when the architecture is not updated along with changing requirements, and we explored the fundamentals of good architecture. Then we saw how to treat architecture in an Agile environment and moved on to domain-driven design. We also learned how C++ language features such as zero-cost abstractions, concepts, and the STL help express architectural decisions. Finally, we discussed the SOLID and DRY principles, as well as coupling and cohesion.

You should now be able to point out many design flaws in code reviews and refactor your solutions for greater maintainability, as well as be less error-prone as a developer.

In the next chapter, we will learn about different architectural approaches and styles, as well as how and when to use them to get better results.

Questions

  • Why care about software architecture?
  • Should the architect be the ultimate decision-maker in an Agile team?
  • How is the SRP related to cohesion?
  • In what phases of a project's lifetime can it benefit from having an architect?
  • What's the benefit of following the SRP?

Further reading

Importance of Software Architecture and Principles of Great Design Chevron down icon Chevron up icon
Architectural Styles Chevron down icon Chevron up icon
Functional and Nonfunctional Requirements Chevron down icon Chevron up icon
4Architectural and System Design Patterns Chevron down icon Chevron up icon
Get free access to Packt library with over 7500+ books and video courses for 7 days!
Start Free Trial

FAQs

What is included in a Packt subscription? Chevron down icon Chevron up icon

A subscription provides you with full access to view all Packt and licnesed content online, this includes exclusive access to Early Access titles. Depending on the tier chosen you can also earn credits and discounts to use for owning content

How can I cancel my subscription? Chevron down icon Chevron up icon

To cancel your subscription with us simply go to the account page - found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription - From here you will see the ‘cancel subscription’ button in the grey box with your subscription information in.

What are credits? Chevron down icon Chevron up icon

Credits can be earned from reading 40 section of any title within the payment cycle - a month starting from the day of subscription payment. You also earn a Credit every month if you subscribe to our annual or 18 month plans. Credits can be used to buy books DRM free, the same way that you would pay for a book. Your credits can be found in the subscription homepage - subscription.packtpub.com - clicking on ‘the my’ library dropdown and selecting ‘credits’.

What happens if an Early Access Course is cancelled? Chevron down icon Chevron up icon

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title? Chevron down icon Chevron up icon

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles? Chevron down icon Chevron up icon

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date? Chevron down icon Chevron up icon

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready? Chevron down icon Chevron up icon

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access? Chevron down icon Chevron up icon

Yes, all Early Access content is fully available through your subscription. You will need to have a paid for or active trial subscription in order to access all titles.

How is Early Access delivered? Chevron down icon Chevron up icon

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content? Chevron down icon Chevron up icon

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access? Chevron down icon Chevron up icon

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head-start to our content, as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls in to place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.

Modal Close icon
Modal Close icon