Dependency Injection in .NET Core 2.0

By Marino Posadas and Tadit Dash

About this book

.NET Core provides more control than ever over web application architectures. A key point of this software architecture is that it's based on the use of Dependency Injection as a way to properly implement the Dependency Inversion principle proposed in the SOLID principles established by Robert C. Martin.

With the advent of .NET Core, things have become much simpler with Dependency Injection built into the system. This book aims to give you a profound insight into writing loosely-coupled code using the latest features available in .NET Core. It talks about constructor, parameter, setter, and interface injection, explaining in detail, with the help of examples, which type of injection to use in which situation. It will show you how to implement a class that creates other classes with associated dependencies, also called IoC containers, and then create dependencies for each MVC component of ASP.NET Core. You'll learn to distinguish between IoC containers, the use of Inversion of Control, and DI itself, since DI is just a way of implementing IoC via these containers. You'll also learn how to build dependencies for other frontend tools such as Angular. You will get to use the built-in services offered by .NET Core to create your own custom dependencies.

Towards the end, we'll talk about some patterns and anti-patterns for Dependency Injection along with some techniques to refactor legacy applications and inject dependencies.

Publication date:
November 2017
Publisher
Packt
Pages
436
ISBN
9781787121300

 

Chapter 1. The SOLID Principles of Software Design

This book focuses on techniques related to Dependency Injection and the way those techniques are implemented by default and can be extended by the programmer in .NET Core--the first version of .NET that executes on every platform.

It works on Windows, macOS, and Linux distributions on the desktop, and the idea can even be extended to the mobile world, covering the Apple, Android, and Tizen (Samsung) operating systems.

This is, without doubt, the most ambitious project from Microsoft in its search for universal coverage of programming technologies and tools, and it can be considered a natural step after the initial UWP (Universal Windows Platform) project, which allows building applications for any device supporting Windows, from IoT devices to the desktop, Xbox, or HoloLens.

So, in this chapter we'll start with a quick review of the main architectural components of .NET Core and its derivative frameworks (such as ASP.NET Core), followed by the foundations on which Dependency Injection techniques are based, as part of the SOLID principles stated by Robert C. Martin (Uncle Bob) in 2000. (See Wikipedia: https://en.wikipedia.org/wiki/SOLID_(object-oriented_design).)

Therefore, we'll review the five SOLID principles, explaining their purpose and advantages, together with some basic implementations of each of them in the C# language using Console applications coded in .NET Core. In summary, we'll see an explanation of each principle and its coverage:

  • Separation of concerns (clearly implemented in the core infrastructure of .NET Core and also from the initial configuration of pipelines and middleware in ASP.NET Core)
  • Open/Closed (already implemented in classic .NET Framework since version 3.0 and also present here)
  • Liskov Substitution Principle (available in two ways--in a classic manner through the support of typecasting, and through the use of generics)
  • Interface segregation: Explanation of Interface segregation and its advantages
  • Dependency Inversion: Explanation of the principle, its derivatives, and the concept of IoC containers
 

In the beginning


The evolution of programming techniques is, somehow, related to language evolution. Once the initial (and, in some ways, chaotic) times had passed, the universality of computing became clear, and the need for good patterns and for languages capable of supporting large projects became evident.

The 70s marked the start of the adoption of other paradigms, such as procedural programming, and later on, object-oriented programming (OOP), proposed by Ole-Johan Dahl and Kristen Nygaard with the Simula language, when they both worked at the Norwegian Computing Center. They were given the Turing Award for these achievements, among other recognitions.

A few years later (around 1979), Bjarne Stroustrup created C with Classes, the prototype of what C++ is today, because he found valuable aspects in the Simula language but thought it was too slow for practical purposes. C++ went on to become the first OOP language to be universally adopted.

C++ originally combined imperative, object-oriented, and generic features, while also providing the ability to program low-level memory manipulation. While it's true that it has become a de facto standard for building critical systems and applications, for many people it was not adequate for LOB (Line of Business) applications.

Years later, Java and the .NET platforms proposed a much easier and affordable solution for many programmers while still moving within the ordered space that object-oriented programming languages promote.

So, OOP was adopted, and so far no other important programming paradigm has replaced these ideas. Certainly, there are other approaches, such as functional programming, but even the most significant representative of this tendency, JavaScript, is becoming more object-oriented in the latest versions (ECMAScript 2015).

.NET and .NET Core

.NET has been revamped lately in order to achieve the goal that Microsoft has pursued since Satya Nadella arrived at the company--"Any Developer, Any App, Any Platform".

According to Principal Manager Scott Hunter, the company now presents a set of unified application models that can be summarized in the following screenshot:

Source: http://www.hanselman.com/blog/AnUpdateOnASPNETCore10RC2.aspx

As you see, the situation now is quite promising for a .NET developer. The screenshot shows a Common Infrastructure (compilers, languages, and runtime components), powered by Roslyn services and other features. All of those integrate with the IDEs that support these projects, now including Visual Studio for Mac.

On top of that lies the .NET Standard Library, which has points in common that allow us to share code across the three different frameworks--the classic .NET Framework (in version 4.6.2 at the time of writing), .NET Core (now in version 2.0), and Xamarin, which allows building applications for any type of mobile target--Android, iOS, Windows Phone, and Tizen (Samsung).

About .NET Core

.NET Core is the new version of .NET, presented officially in the summer of 2016 and updated to version 1.1 at the Connect() event in November of the same year. It's defined as a cross-platform, open source, cloud-ready and modular .NET platform for creating modern web apps, microservices, libraries, and console applications that run everywhere (Windows, Linux, and macOS).

It can be deployed along with the application itself, minimizing installation issues.

Prior to its publication, Microsoft decided to restart the version numbering, reinforcing the idea that this is a totally new concept with respect to the classic versions, as a better way to avoid ambiguity.

MSDN architect Cesar de la Torre defines the goals and structure of .NET Core very precisely in his blog--unlike the traditional .NET Framework, which is a single-package, system-wide, and Windows-only runtime environment, .NET Core is about decoupling .NET from Windows, allowing it to run in non-Windows environments without having to install a giant 400 MB set of binaries (versus just the footprint of the components you need from .NET Core), plus the ability to deploy applications carrying the framework with them, supporting side-by-side execution of different versions of the framework.

A very interesting part of its architecture and deployment infrastructure, as mentioned in the same source, is that instead of being part of the operating system, .NET Core is composed of NuGet packages and is either compiled directly into an application or put into a folder inside the application. This means applications can carry .NET Core within and thus are completely side by side on the machine.

I, personally, think this is absolutely crucial for the project to be successful. No side effects, no component installation on the target machine, and no dependencies. (As you'll see throughout this book, avoiding dependencies is foundational when building software that follows good practices.)

.NET Core 2.0 - Supported OS Versions Proposal:

OS                         Version         Architectures     Notes
Windows Client             7 SP1+          x64, x86
Windows Server             2008 R2 SP1+    x64, x86          Configurations: Full, Server Core, Nano
Windows IoT                10+             [C] arm32         IoT Core - see Raspberry Pi instructions
Red Hat Enterprise Linux   7.3+            x64               This includes CentOS and Oracle Linux
Fedora                     25+             x64
Debian                     8.7+            x64               Debian 9 (Stretch) workaround
Ubuntu                     14.04+          x64, [C] arm32    This includes Linux Mint 17 for x64; for arm32, see Raspberry Pi instructions
openSUSE                   42.2+           x64
Tizen                      4+              [S] arm32         Tizen .NET Developer Preview
Mac OS X                   10.12+          x64

In-progress OSes:

Arch Linux                 [C] TBD         TBD               Blocked on missing OpenSSL 1.0 package in the distro; Arch Linux community efforts tracked here
FreeBSD & NetBSD           [C] TBD         TBD               Tracking main issue and label; NetBSD packages for .NET Core 1.0.0

As for the types of project available from any of the above-mentioned IDEs, .NET Core supports its own application model, and also the Universal Windows Platform model, optionally compiled to .NET Native (see the following screenshot):

Source: http://www.hanselman.com/blog/AnUpdateOnASPNETCore10RC2.aspx

We end this introduction to .NET Core with the summary from the same page mentioned previously in relation to this framework:

  • Cross-platform: .NET Core currently supports three main operating systems--Linux, Windows and OS X. There are other OS ports in progress such as FreeBSD, NetBSD, and Arch Linux. .NET Core libraries can run unmodified across supported OSes. The apps must be re-compiled per environment, given that apps use a native host. Users select the .NET Core supported environment that works best for their situation.
  • Open Source: .NET Core is available on GitHub at https://github.com/dotnet/core/blob/master/release-notes/2.0/2.0.0-preview1.md, licensed with the MIT and Apache 2 licenses (licensing is per component). It also makes use of a significant set of open source industry dependencies (see release notes). Being OSS is critical for having a thriving community plus a must for many organizations where OSS is part of their development strategy.
  • Natural acquisition: .NET Core is distributed as a set of NuGet packages that developers can pick and choose from. The runtime and base framework can be acquired from NuGet and OS-specific package managers, such as APT, Homebrew, and Yum. Docker images are available on docker hub. The higher-level framework libraries and the larger .NET library ecosystem are available on NuGet.
  • Modular framework: .NET Core is built with a modular design, enabling applications to include only the .NET Core libraries and dependencies that are needed. Each application makes its own .NET Core versioning choices, avoiding conflicts with shared components. This approach aligns with the trend of developing software using container technologies such as Docker.
  • Smaller deployment footprint: Although in v1.0/1.1 the size of .NET Core is a lot smaller than that of the .NET Framework, note that the overall size of .NET Core doesn't set out to be smaller than the .NET Framework over time. However, since it is pay-for-play, most applications that utilize only parts of CoreFX will have a smaller deployment footprint.
  • Fast release cycles of .NET Core: .NET Core's modular architecture plus its OSS nature provide more modern and much faster release cycles (even per NuGet package) compared to slow release cycles from larger monolithic frameworks. This approach allows a much faster innovation pace from Microsoft and the OSS .NET community than what was traditionally possible with the .NET Framework.

Thus, there are multiple application model stacks built on top of .NET Core that allow developers to build applications ranging from console applications, through UWP Windows 10 apps (PC, tablet, and phone), to scalable web applications and microservices with ASP.NET Core.

ASP.NET Core

ASP.NET applications that use .NET Core follow a model based on the previous MVC model, although rebuilt from scratch, targeted at cross-platform execution, the elimination of some unnecessary features, and the unification of the previous MVC framework with the Web API variant, so that both work with the same controller type.

Besides this, the code doesn't need to be compiled prior to execution while you're developing. The BrowserSync technology allows you to change the code on the fly, and the Roslyn services take care of updating it; so, you just have to refresh your page to see the changes.

ASP.NET Core also uses a new hosting model, completely decoupled from the web server environment that hosts the application. It supports IIS and also self-hosting contexts via the Kestrel (cross-platform, extremely optimized, built on top of libuv, the same component that Node.js uses) and WebListener (Windows-only) HTTP servers.

As part of its architecture, it proposes a new generation of middleware that is asynchronous, very modular, lightweight, and totally configurable, where we define things such as routing, authentication, static files, diagnostics, error handling, session, CORS, localization; and they can even be user-defined.

Notice also that ASP.NET Core can also run on the classic .NET Framework, with access to the functionality exposed by those libraries. The following screenshot shows the schema:

ASP.NET Core joins many things that were separate in previous versions. Thus, there is no distinction between MVC and Web API and, whether you target .NET Core or prefer to target any of the other versions of .NET, the architectural model can be MVC using this rebuilt architecture.

In addition, a new built-in IoC container for dependency injection is responsible for bootstrapping the system, together with a new configuration protocol, which we'll see in practice in the following chapters.

About the IDE used in this book

Since this book deals with .NET Core and ASP.NET Core and their built-in capabilities covering SOLID principles in general and DI in particular, we're using the latest available version of Visual Studio (Visual Studio 2017 Enterprise), which includes full support for these platforms, together with a bunch of convenient extensions and templates.

You can also use Visual Studio 2017 Community Edition, which is free, or any higher version, with practically no changes as far as the code samples are concerned.

If you're a Mac user, you can also use Visual Studio for Mac (https://www.visualstudio.com/vs/visual-studio-mac/), available since November 2016, and, if you prefer a lightweight, full-featured, and free IDE for any platform (Linux, Mac, or Windows), you can opt for Visual Studio Code (https://code.visualstudio.com/download), which also has excellent editing and debugging capabilities. All of them have full support for .NET Core/ASP.NET Core as well (see the following screenshot):

Throughout this and other chapters, I'll use either .NET Core or ASP.NET Core for the demos, depending on whether we need a more complex user interface or not. Notice also that .NET Core (for the time being) does not offer any visual UI beyond Console applications.

Actually, the currently available templates shown by default when we select New Project and click on .NET Core are the ones you can see in the following screenshot:

As you see, the choices are basically threefold (besides testing): Console apps, Class libraries, and ASP.NET Core web apps, all based on .NET Core. In all three cases, the resulting apps run on any platform.

Other foundational changes in .NET Core

It's important to keep in mind that, with .NET Core, you no longer depend on the .NET Framework libraries (the BCL libraries), either installed by the OS or manually, and located in the GAC (Global Assembly Cache).

All libraries are available via NuGet and downloaded accordingly. But, if you have tried .NET Core prior to Visual Studio 2017, you might miss the project.json file in which all dependencies were referenced.

The official documentation states that when using Visual Studio 2017:

  • MSBuild supports .NET Core projects, using a simplified csproj project format that makes it easier to be edited by hand, without the need for unloading the project
  • There is support for file wildcards in the project file, enabling folder-based projects that don't require individual files to be included
  • NuGet package references are now part of the csproj format, consolidating all project references in one file

So, if you try a new .NET Core project with this tool, the project's dependencies are now referenced in the csproj file (in XML format), as you can see when opening it in any text editor:
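
A minimal example of this format is sketched below (the exact TargetFramework and package versions depend on your installation; the Newtonsoft.Json reference is shown only as an illustration of a PackageReference entry):

    <Project Sdk="Microsoft.NET.Sdk">

      <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>netcoreapp2.0</TargetFramework>
      </PropertyGroup>

      <ItemGroup>
        <!-- NuGet dependencies are now plain PackageReference entries -->
        <PackageReference Include="Newtonsoft.Json" Version="10.0.3" />
      </ItemGroup>

    </Project>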

In parallel, Visual Studio reads that file, creates a Dependencies entry in the Solution Explorer, and starts looking for that information (either in the PC's cache or in NuGet).

Note also that they're not real, classic DLLs, but fragments of code that are assembled together at compile time to minimize size and launch time. If you take a look at that entry, you can see the dependencies' own dependencies, and so on:

Another critical point to highlight relates to the deliverables produced after the compiling process. If you open the demo included as ConsoleApp1 (or create a basic one of your own), and just compile it, you'll see that the bin directory does not contain any executable file. You'll see a DLL with that name instead (ConsoleApp1.dll).

When you launch the application (after adding a Console.Read() statement to stop execution), you'll see that the executable is, indeed, dotnet.exe. The same is true when you open the Diagnostics Tools window and take a snapshot of the process to see what is in place at that moment. The following screenshot shows the situation:

The reason for this is directly related to the complexity of this model. The application is designed to execute on distinct platforms, and the default option allows the deployment architecture to determine the best way to configure the JIT compilers depending on the target. This is why execution is undertaken by the dotnet runtime (named dotnet.exe).

From the point of view of deployment, in .NET Core, two types of application are defined: portable and self-contained.

In .NET Core, portable applications are the default. That means that (as developers) we can rely on their portability across distinct .NET Core installations. A self-contained app, on the other hand, does not depend on any previous installation to run. That is to say, it holds within itself all the necessary components and dependencies, including the runtime, packaged with the application. Certainly, that builds a larger app, but it also makes the application capable of executing on any supported platform, whether .NET Core is installed on the target or not.
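
A rough sketch of the difference with the 2.0 tooling follows (commands and runtime identifiers are indicative only; check your SDK version for the exact options):

    # Framework-dependent (portable) deployment: requires .NET Core on the target
    dotnet publish -c Release

    # Self-contained deployment: bundles the runtime for one specific platform.
    # The target RID must also be declared in the csproj, for example:
    #   <RuntimeIdentifiers>win10-x64;osx.10.12-x64</RuntimeIdentifiers>
    dotnet publish -c Release -r win10-x64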

For the main purposes of this book, it doesn't matter which runtime mode we choose. Anyhow, this brief introduction can give you an idea of how the new framework behaves and is managed inside Visual Studio 2017.

And, remember, anything I do using Visual Studio 2017, you can also do with Visual Studio Code.

 

The SOLID principles


Some programming guidelines have a comprehensive, general-purpose intention, while others are mainly designed to fix certain specific problems. Therefore, before we focus on specific problems, it's important to review those features that can be applied in different scenarios and solutions. I mean those principles that you should consider beyond the type of solution or specific platform to program for.

This is where the SOLID principles (and other related problems) come into play. In 2001, Robert Martin published a foundational article on the subject (http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod), in which he enumerated a set of principles and guidelines that, in his own words, focus very tightly on dependency management, its potential problems, and how to avoid them.

To explain this further, in his words, poor dependency management leads to code that is hard to change, fragile, and non-reusable. So, dependency management is directly related to two of the OOP mantras--reusability and maintainability (the capacity to change as the project grows, one of the main goals of inheritance).

Overall, Martin stated 11 commandments to consider, which can be grouped as follows:

  • The five SOLID principles, which deal with class design
  • The other six principles, mainly focused on packages--three of them are about package cohesion, and the other three explain the dangers of package coupling and how to evaluate a package structure

We're going to start with the SOLID principles, which by extension not only affect the class design, but also other aspects of software architecture.

Note

The application of these ideas has, for example, been decisive in important modifications made to the HTML5 standard. Concretely, applying the SRP (Single Responsibility Principle) highlighted the need to totally separate presentation (CSS) from content (HTML) and led to the subsequent deprecation of some tags (<cite>, <small>, <font>).

This applies to other popular frameworks as well, such as AngularJS (and even more so Angular 2), both designed not only with the Single Responsibility principle in mind but also based on the Dependency Inversion principle (the D in SOLID).

The following diagram schematizes the five principles' initials and correspondences:

The explanation of every letter in the acronym as expressed in Wikipedia is as follows:

  • S - Single Responsibility Principle: A class should have only a single responsibility (that is, only one potential change in the software's specification should be able to affect the specification of the class). Martin states that this principle is based on the principle of cohesion, previously defined by Tom DeMarco in a book named Structured Analysis and System Specification and by Meilir Page-Jones in his work The Practical Guide to Structured Systems Design.
  • O - Open/Closed Principle: Software entities should be open for extension, but closed for modification. Bertrand Meyer was the first to propose this principle. Martin puts this in another way at http://www.butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod, saying that You should be able to extend a class's behavior, without modifying it.
  • L - Liskov Substitution principle: Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program. Barbara Liskov first stated this, and Martin rephrases the principle in this manner--Derived classes must be substitutable for their base classes.
  • I - Interface Segregation principle: Many client-specific interfaces are better than one general-purpose interface. Robert C. Martin was the first to use and formulate this principle, which he rewords in the aforementioned article as--Make fine grained interfaces that are client specific.
  • D - Dependency inversion principle: We should 'Depend on Abstractions'. Do not depend upon concretions. This too is an idea developed by Robert C. Martin.

The Single Responsibility Principle (SRP)

The Single Responsibility Principle (SRP) focuses on the fact that there should never be more than one reason for a class to change. In this context, responsibility is defined as a reason for change. If, under any circumstances, more than one reason comes up to change the class, then the class's responsibilities are multiple and should be redefined.

This is, indeed, one of the most difficult principles to apply properly because as Martin says, conjoining responsibilities is something that we do naturally. In his book, Agile Principles, Patterns, and Practices in C#, Martin proposes a canonical example to show the differences, as follows:

    interface Modem 
    { 
      void Dial(string phoneNumber); 
      void Hangup(); 
      void Send(char c); 
      char Recv(); 
    } 

Given the previous interface, any class implementing it has two responsibilities: connection management and the communication itself. Such responsibilities can be used from the different parts of an application, which, in turn, might change as well.

We're going to use the Visual Studio 2017 Class Designer to express the way Martin proposes we express this class design instead:

As we can see, in Martin's solution, the class depends on two interfaces, each one in charge of a responsibility--connection and channel transmission (two abstractions, really: remember that an interface contains no implementation and only serves as a contract for the compiler to check).
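
A rough C# sketch of that separation could look like the following (the interface and member names are illustrative, not necessarily the exact ones Martin uses in his diagram):

    // Connection management responsibility
    interface IConnection
    {
      void Dial(string phoneNumber);
      void Hangup();
    }

    // Data transmission responsibility
    interface IDataChannel
    {
      void Send(char c);
      char Recv();
    }

    // The concrete modem still does both things, but each client
    // depends only on the abstraction it actually needs
    class Modem : IConnection, IDataChannel
    {
      public void Dial(string phoneNumber) { /* ... */ }
      public void Hangup() { /* ... */ }
      public void Send(char c) { /* ... */ }
      public char Recv() { return '\0'; /* ... */ }
    }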

However, one wonders, should these two responsibilities be separated? That depends solely on how the application changes. To be precise, the key here is to know whether changes in the application affect the signature of the connection functions. If they do, we should separate the two; otherwise, there's no need for separation, because we would then be creating needless complexity.

So, overall, a reason to change is the key, but keep in mind that a reason to change is applicable only if changes occur.

In other situations, there might be reasons to keep distinct responsibilities together as long as they are closely related to the business definitions or have to do with the hardware requirements of the operating system.

The background of the Separation of Concerns (SoC)

As always happens, there were previous approaches to the problem of software separation. Dijkstra, in On the role of scientific thought (http://www.cs.utexas.edu/users/EWD/transcriptions/EWD04xx/EWD447.html), mentioned that It is what I sometimes have called "the separation of concerns", which, even if not perfectly possible, is yet the only available technique for effective ordering of one's thoughts, that I know of.

Another advance was Information Hiding, defined by Wikipedia (https://en.wikipedia.org/wiki/Information_hiding) as the principle of segregation of the design decisions in a computer program that are most likely to change, thus protecting other parts of the program from extensive modification if the design decision is changed. This was the seed that later became a basic pillar of OOP--Data Encapsulation.

Even Barbara Liskov, whom we mentioned in connection with the substitution principle, published around the same time Programming with Abstract Data Types (http://dl.acm.org/citation.cfm?id=807045), which she describes as an approach to the computer representation of abstraction. The definition of ADTs as a class of objects whose logical behavior is defined by a set of values and a set of operations links data and functionality together.

Later approaches have improved on these ideas. Code contracts, originally introduced by Bertrand Meyer in his Eiffel language and implemented in C# via Code Contracts (https://msdn.microsoft.com/es-es/library/dd264808(v=vs.110).aspx), foster the use of pre- and post-conditions that our software has to fulfill.

Finally, we can think of the separation of what Hayim Makabee (https://effectivesoftwaredesign.com/2012/02/05/separation-of-concerns/) calls cross-cutting concerns--aspects that might affect distinct pieces of software, even in distinct layers of the application, and that should be managed in a similar fashion (authorization or instrumentation issues, and so on). In .NET, we count on attributes, applicable equally to classes and class members, to modify and tune such behavior.

A bit later in the same article, Makabee clearly establishes the main purposes of these techniques. If we understand coupling as the degree of dependency between two modules, the goal is to obtain low coupling. Another term is cohesion, the measure of how strongly related the set of functions performed by a module is. Obviously, high cohesion is better.

He ends by summarizing the benefits obtained with these techniques:

Patterns and methodologies are always intended to reduce coupling and at the same time increase cohesion. By hiding information, we reduce coupling, since we isolate implementation details. Thus, ADTs reduce coupling by using clear and abstract interfaces. An ADT specifying the set of functions that can be executed on a type is more cohesive than a global data structure modified by external functions. The way OOP reaches that cohesion is through the implementation of two of its basic principles--encapsulation and polymorphism, together with dynamic binding. Furthermore, inheritance reinforces cohesion by means of hierarchies based on generalization and specialization, which permit a suitable separation of the functionality belonging to a superclass from that of its subclasses. AOP, in turn, supplies solutions for cross-cutting concerns in a way that both aspects and functionality may become more cohesive.

Maintainability, reusability, and extensibility are only three of the main advantages gained by applying these techniques.

Well-known examples of Separation of Concerns

All of us have gone through cases and scenarios where the separation of concerns lies at the heart of a system or of the technology that implements it. One such case is HTML (and, especially, HTML5).

Since its inception, the HTML5 standard was designed to clearly separate content from presentation, and the popularity of mobile devices only made that requirement more evident. The huge variety of form factors available today demanded a technology capable of adapting to these sizes, in such a way that content could be held by HTML tags and the final presentation on a given device decided at runtime, depending on the device.

Therefore, some tags were declared deprecated, such as <font>, <big>, and <center>, among others, and the same happened to some attributes, such as background, align, bgcolor, and border, since they didn't make sense in this new system. Even some of those that remain unchanged and have a visual effect on the output (such as <b>, <i>, or <small>) are kept for their semantic meaning and not for their presentational effects, a role that now depends entirely on CSS3.

So, one of the main goals is to avoid functionality overlapping, although this is not the only benefit. If we understand concerns as the different aspects of software functionality, the business logic of software is a concern, and the interface through which a person uses this logic is another.

In practice, this translates into keeping the code for each of these concerns separate. That means that changing the interface should not require changing the business logic code, and vice versa. The underlying principle of encapsulation reinforces these ideas in the OOP paradigm, and the Model-View-Controller (MVC) design pattern is a great example of separating these concerns for better software maintainability.

A basic sample of Separation of Concerns

Let's put this into code with a very basic sample and check for the differences between coupled and decoupled implementations. Imagine that a Console application in .NET Core has to show the user the initial configuration of Console colors, change a value, and present those changes.

If you make a basic project ConsoleApp1, the following code could be the first approach:

    using System; 
 
    class Program 
    { 
      static void Main(string[] args) 
      { 
        Console.ResetColor(); 
        Console.WriteLine("This is the default configuration for Console"); 
        Console.ForegroundColor = ConsoleColor.Cyan; 
        Console.WriteLine("Color changed..."); 
        Console.Read(); 
      } 
    } 

This produces the expected results (see the following screenshot showing the output):

What problems can we find in this code? First, the main entry point is in charge of everything: it resets the previous configuration of the console, changes the foreground color, and prints the results.

The first step towards separation is to realize that other fragments of code might later require the same functionality. Moreover, that functionality would be better located in a different piece of software--a library, for example. So, we should enhance our solution with a new library project that can be referenced by any other project in the solution.

Besides, the manual change to the Cyan color implicitly reminds us of the need for a function that allows changing to any valid color.

So, we might end up with another piece of code like this:

    using System; 
 
    namespace Utilities 
    { 
      public class ConsoleService 
      { 
        public void ChangeForegroundColor(ConsoleColor newColor) 
        { 
            Console.ForegroundColor = newColor; 
        } 
        public void ResetConsoleValues() 
        { 
            Console.ResetColor(); 
        } 
      } 
    } 

Now, in the main entry point, we could write:

    /* This is version 2 (with utilities) */ 
    ConsoleService cs = new ConsoleService(); 
    cs.ResetConsoleValues(); 
    Console.WriteLine("This is the default configuration for Console"); 
    cs.ChangeForegroundColor(ConsoleColor.Cyan); 
    Console.WriteLine("Color changed..."); 
    Console.Read(); 

With exactly the same results (I omit the output, since there are no changes). So, we have made a physical separation together with a logical one, given that any change to the Console should now be managed through the Utilities library, which increases reusability and, therefore, maintainability and testability.

Notice also that we could have opted to make the class static, to avoid instantiation.
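
A static variant would be a minimal change--something like the following sketch (an alternative only; the rest of the chapter keeps the instance-based version):

    using System; 
 
    namespace Utilities 
    { 
      public static class ConsoleService 
      { 
        public static void ChangeForegroundColor(ConsoleColor newColor) 
        { 
            Console.ForegroundColor = newColor; 
        } 
        public static void ResetConsoleValues() 
        { 
            Console.ResetColor(); 
        } 
      } 
    } 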

The only change from previous versions of .NET is that, as we showed in a previous screenshot, the reference to the library is now made slightly differently, as it appears in the Dependencies section of the Solution Explorer. Once the project is compiled, we can also see that reference in the bin directory resulting from compilation:

 

Another sample


Let's take a more everyday approach with another sample: something simple, such as reading from a JSON file on a disk and presenting the results in the output. So, I've created a .NET Core Console app that includes a JSON file with five books from PACKT.

A first approach could be the following code:

    using System; 
    using System.IO; 
    using Newtonsoft.Json; 
 
    class Program 
    { 
      static void Main(string[] args) 
      { 
        Console.WriteLine(" List of Books by PACKT"); 
        Console.WriteLine(" ----------------------"); 
        var cadJSON = ReadFile("Data/BookStore.json"); 
        var bookList = JsonConvert.DeserializeObject<Book[]>(cadJSON); 
        foreach (var item in bookList) 
        { 
            Console.WriteLine($" {item.Title.PadRight(39,' ')} " +  
                $"{item.Author.PadRight(15,' ')} {item.Price}");  
        } 
        Console.Read(); 
      } 
 
      static string ReadFile(string filename) 
      { 
        return File.ReadAllText(filename); 
      } 
    } 

As we can see, the code uses a Book class that implements the IBook interface, defined in a very simple manner:

    interface IBook 
    { 
      string Title { get; set; } 
      string Author { get; set; } 
      double Price { get; set; } 
    } 
    class Book : IBook 
    { 
      public string Author { get; set; } 
      public double Price { get; set; } 
      public string Title { get; set; } 
    } 

This works fine, and generates the following output:

Notice that we're using the popular Newtonsoft.Json library to easily convert the string into an array of Book objects.

If we analyze the code, we can identify several places where that SoC principle is present:

  • First, since the entity to manage is a Book (which has three properties), I created a Model folder to hold the definition of a Book interface (IBook), and also a Book class that implements that interface
  • Secondly, the use of the Newtonsoft library is another separation since it's the library that takes care of the conversion of the string into an array of Books
  • Finally, file reading takes place in the method ReadFile(), which receives the name of the file

Is there any other separation required? As we mentioned, the reason to change would be the key when deciding. For example, does the app read another type of information (apart from books)? Or does our UI really need to include the ReadFile() method? And what about having to reference the Newtonsoft library directly in the user interface?

If this isn't the case, perhaps a better approach would be to separate that method into a Utilities class, just like in the first sample, thus ensuring the architecture has three separate folders to hold the different aspects of the application: the data model, the utilities area, and the main user interface.

In this manner, we would end up with a Utilities class like this:

    using Newtonsoft.Json; 
    using System.IO; 
 
    internal class Utilities 
    { 
      internal static Book[] ReadData() 
      { 
        var cadJSON = ReadFile("Data/BookStore.json"); 
        return JsonConvert.DeserializeObject<Book[]>(cadJSON); 
      } 
 
      static string ReadFile(string filename) 
      { 
        return File.ReadAllText(filename); 
      } 
    } 

And the resulting Program class gets reduced to the following:

    using System; 
 
    class Program 
    { 
      static void Main(string[] args) 
      { 
        var bookList = Utilities.ReadData(); 
        PrintBooks(bookList); 
      } 
 
      static void PrintBooks(Book[] books) 
      { 
        Console.WriteLine(" List of Books by PACKT"); 
        Console.WriteLine(" ----------------------"); 
        foreach (var item in books) 
        { 
          Console.WriteLine($" {item.Title.PadRight(39, ' ')} "  + 
             $"{item.Author.PadRight(15, ' ')} {item.Price}"); 
        } 
        Console.Read(); 
      } 
    }  

Of course, we get the same output, but now we have an initial separation of concerns. There's no need to reference external libraries in the UI, which facilitates maintainability and extensibility.

Let's now explore the second principle: Open/Closed.

 

The Open/Closed principle


We can detect the need to use this principle when a change in one module results in a cascade of changes that affect dependent modules. In that case, the design is said to be too inflexible.

The Open/Closed principle (OCP) advises us to refactor the application in such a manner that future changes don't provoke further modifications.

The way to apply this principle correctly is to extend the functionality with new code (for instance, using polymorphism) and never change old code that is already working. We can find several strategies to achieve this goal.

Observe that closed for modification is especially meaningful when you have distinct, separate modules (DLLs, EXEs, and so on) that depend on the module that has to be changed.

On the other hand, using extension methods or polymorphic techniques allows us to perform changes in code without affecting the rest. Think, for example, about the extension methods available in the C# language since version 3.0.

You can think of extension methods as a special type of static method, with the difference that they are called as if they were instance methods of the extended type. You find a typical example in the LINQ standard query operators, because they add query functionality to existing types such as System.Collections.IEnumerable or System.Collections.Generic.IEnumerable<T>.
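
As a quick illustration (a hypothetical helper, not part of the chapter's demo projects), an extension method is just a static method whose first parameter is marked with the this modifier, and it is invoked as if it belonged to the extended type:

    using System; 
 
    public static class StringExtensions 
    { 
      // Adds a word-count helper to every string 
      public static int WordCount(this string text) 
      { 
        return text.Split(new[] { ' ', '\t', '\n' }, 
                          StringSplitOptions.RemoveEmptyEntries).Length; 
      } 
    } 
 
    // Usage: int words = "The quick brown fox".WordCount();  // 4 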

The classical and simplest example of this pattern is the client/server relationship that has been present in development for many years. It is preferable that clients depend on server abstractions, not on their concretions.

This can be achieved with interfaces. Servers can implement a client interface that clients will use to connect to them. In this manner, servers can change without affecting the way clients use them (refer to the following diagram):

Any subtype of the client interface will be free to implement the interface in the way it deems more appropriate, and as long as it doesn't break other clients' access.
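
In C# terms, the idea can be sketched like this (the names are purely illustrative):

    // The abstraction (client interface) that clients depend on 
    public interface IDataService 
    { 
      string GetData(); 
    } 
 
    // One concrete server; new implementations can be added later 
    // without touching the client code (open for extension) 
    public class SqlDataService : IDataService 
    { 
      public string GetData() { return "data from SQL"; } 
    } 
 
    public class Client 
    { 
      private readonly IDataService _service; 
      public Client(IDataService service) 
      { 
        _service = service; 
      } 
      public void Run() 
      { 
        System.Console.WriteLine(_service.GetData()); 
      } 
    } 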

 

Back to our sample


Let's imagine a simple case in which the app has to cover a new aspect. For example, the app now has to allow the user to list an extra file of books to be added to the previous list.

For this new requirement, we can create a new, overloaded ReadData() method that receives an extra argument. Notice that the argument doesn't even have to be used; it's enough that it declares another signature to be invoked in this extra situation.

If we have the extra data in another file (BookStore2.json, in our demo), we could create this extra version of the method:

     internal static List<Book> ReadData(string extra) 
     { 
       List<Book> books = ReadData(); 
       var cadJSON = ReadFile("Data/BookStore2.json"); 
       books.AddRange(JsonConvert.DeserializeObject<List<Book>>(cadJSON)); 
       return books; 
     } 

Notice that we don't even use the method's argument in this implementation (of course, there are other ways to do this, but let's put it this way for the purpose of the demo).

We now have two versions of ReadData() that should be called in the user interface depending on the user's choice (I also changed the Book[] definition into a List<Book> for simplicity, but you can see the older version as well in the source code):

    static List<Book> bookList; 
    static void Main(string[] args) 
    { 
      Console.WriteLine("Please, press 'yes' to read an extra file, "); 
      Console.WriteLine("or any other key for a single file"); 
      var ans = Console.ReadLine(); 
      bookList = (ans != "yes") ? Utilities.ReadData() : Utilities.ReadData(ans); 
      PrintBooks(bookList); 
    } 

Now, if the user's answer is yes, you have an extra set of books added to the list, as you can see in the output:

Besides all these reasons, you can think of situations such as having the Utilities code separated into a distinct library that could also be used by other parts of the application. The implementation of the Open/Closed principle here allows a more stable and extensible approach.

 

The Liskov Substitution principle


Let's remember the definition--subtypes must be substitutable for their base types. This means that the substitution should happen without breaking the execution or losing any other kind of functionality.

You'll notice that this idea lies behind the basic principles of inheritance in the OOP paradigm.

If you have a method that requires an argument of the Person type (let's put it that way), you can pass an instance of another class (Employee, Provider, and so on) as long as these instances inherit from Person.

This is one of the main advantages of well-designed OOP languages, and most popular and accepted languages support this characteristic.

Back to the code again

Let's take a look at the support for this inside our sample, where a new requirement arises. Currently, our demo simply calls the PrintBooks method and expects to receive a List<Book> object as the argument.

However, another reason for change might come up when new lists of books appear, and those lists include a new field, such as the topic each book belongs to (.NET, Node, Angular, and so on).

For example, a new list appears containing a fourth field, Topic, in this way:

    { 
      "Title": "AngularJS Services", 
      "Author": "Jim Lavin", 
      "Price": 30.99, 
      "Topic": "Angular" 
    } 

The Book class should not be changed, since it's already being used. Thus, we can inherit from Book and create a TopicBook class that just adds the new field (I'm trying to keep things as simple as possible to focus on the architecture we're dealing with):

    public class TopicBook : Book 
    { 
      public string Topic { get; set; } 
    }

To cover this new aspect, we can change the user interface to allow the user to select a new option (topic) that includes the new type of book:

    static void Main(string[] args) 
    { 
      Console.WriteLine("Please, press 'yes' to read an extra file, "); 
      Console.WriteLine("'topic' to include topic books or any
          other key for a single file"); 
      var ans = Console.ReadLine(); 
      bookList = ((ans != "yes") && (ans != "topic")) ?  
        Utilities.ReadData() : Utilities.ReadData(ans); 
      PrintBooks(bookList); 
    }   

Notice that we're just including a new condition and calling the overloaded method in case the new condition is selected.

As for the overloaded ReadData() method, we can make some minimal changes (basically, adding an if condition to include the extra data), as you can see in the following code:

    internal static List<Book> ReadData(string extra) 
    { 
      List<Book> books = ReadData(); 
      var filename = "Data/BookStore2.json"; 
      var cadJSON = ReadFile(filename); 
      books.AddRange(JsonConvert.DeserializeObject<List<Book>>(cadJSON)); 
      if (extra == "topic") 
      { 
        filename = "Data/BookStore3.json"; 
        cadJSON = ReadFile(filename); 
        books.AddRange(JsonConvert.DeserializeObject<List<TopicBook>>(cadJSON)); 
      } 
      return books; 
    } 
 

Observe that the method's changes are minimal and, especially, that we're adding to the list of books the result of deserializing a different class (TopicBook), without any compilation or execution problems.

Therefore, the implementation of generics in .NET (and .NET Core, in this case) correctly supports the Liskov Substitution Principle, and we don't have to make modifications to our logic.

We can check the results in the Autos window by placing a breakpoint before the return statement of ReadData and seeing how the List<Book> now includes five elements of type TopicBook, with no complaints:

What about the other side (the user interface logic) and, especially, our PrintBooks method, which expects a List<Book>? Well, there's no difference, as long as we don't try to print a field that doesn't exist.

You can check the output in the following screenshot:

Thanks to the Liskov Substitution principle, we were able to add behavior and information with minimum effort and, consequently, reinforce the OOP principle of code reuse.

Other implementations of LSP in .NET

What we've seen up to this point is not the only implementation of the LSP that we find inside .NET, since different areas of the framework have grown using this conception.

Events, for example, are flexible enough to be defined in a way that allows us to pass our own information via classic delegate definitions; alternatively, with the participation of generics, we can simply define a generic event handler that holds information of any kind. All these techniques foster the implementation of good practices, not just the SOLID principles.
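
For instance (the type names here are hypothetical, just a sketch), a single generic event handler can carry any payload we define:

    using System; 
 
    public class BookAddedEventArgs : EventArgs 
    { 
      public string Title { get; set; } 
    } 
 
    public class BookCatalog 
    { 
      // Generic event handler: subscribers receive our custom payload 
      public event EventHandler<BookAddedEventArgs> BookAdded; 
 
      public void Add(string title) 
      { 
        // ... add the book, then notify any subscribers 
        BookAdded?.Invoke(this, new BookAddedEventArgs { Title = title }); 
      } 
    } 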

 

The Interface Segregation principle


As Martin states, this principle deals with the inconveniences of fat interfaces. The problem arises when the interface of a class can be logically fragmented into distinct groups of methods.

In this case, if there is more than one client of our application, chances are that some clients are connected to functionality they never use. As Martin states in his Agile Principles, Patterns, and Practices in C# book,

When the clients are separate, the interfaces should remain separate, too. Why? Because clients exert forces on their server interfaces. When we think of forces that cause changes in software, we normally think about how changes to interfaces will affect their users.

And, as a conclusion, he remarks that

When clients are forced to depend on methods they don't use, those clients are subject to changes to those methods. This results in an inadvertent coupling between all the clients. Said another way, when a client depends on a class that contains methods that the client does not use but that other clients do use, that client will be affected by the changes that those other clients force on the class. We would like to avoid such couplings where possible, and so we want to separate the interfaces.

 

Another sample


Let's see this situation with another example that starts from a new scenario. Imagine another app in which we have to cover not only the two types of books available at the moment, but also a new publication in video format that holds another field, named Duration.

A single record of this file would look like this:

     { 
       "Title": "HTML 5 Game Development", 
       "Author": "Daniel Albu", 
       "Price": 5.68, 
       "Topic": "HTML5 Games", 
       "Duration": "2h20m" 
     }  

But the application maintains the other two previous formats, so we have the possibility of listing files with three, four, or five fields, depending on the initial selection the user makes.

A first approach could lead us to an interface like this:

    interface IProduct 
    { 
      string Title {get; set;} 
      string Author {get; set;} 
      double Price {get; set;} 
      string Topic { get; set; } 
      string Duration { get; set; } 
    } 

Based on this interface, we could create the Product class (the new name is meant to sit a step above the books or videos, since both have four fields in common):

    public class Product : IProduct 
    { 
      public string Title { get; set; } 
      public string Author { get; set; } 
      public double Price { get; set; } 
      public string Topic { get; set; } 
      public string Duration { get; set; } 
    } 

Now, the equivalent Utilities class could select a file depending on the user's entry, read it, deserialize it, and send the information back to a PrintProducts method in charge of the console output.

Our new user interface would look like this:

    using System;
    using System.Collections.Generic;

    class Program
    {
      static List<Product> productList;
      static void Main(string[] args)
      {
        string id = string.Empty;
        do
        {
            Console.WriteLine("File no. to read: 1/2/3-Enter(exit): ");
            id = Console.ReadLine();
            if ("123".Contains(id) && !String.IsNullOrEmpty(id))
            {
                productList = Utilities.ReadData(id);
                PrintBooks(productList);
            }
        } while (!String.IsNullOrWhiteSpace(id));
      }

      static void PrintBooks(List<Product> products)
      {
        Console.WriteLine(" List of Products by PACKT");
        Console.WriteLine(" ----------------------");
        foreach (var item in products)
        {
            Console.WriteLine($" {item.Title.PadRight(36, ' ')} " +
            $"{item.Author.PadRight(20, ' ')} {item.Price}" + " " +
            $"{item.Topic?.PadRight(12, ' ') } " +]
            $"{item.Duration ?? ""}");
        }
        Console.WriteLine();
      }
    }

Observe that we had to deal with the two cases in which a field could be null, so we use string interpolation together with the null-coalescing operator (??) and the null-conditional operator (?.) to prevent failures in these cases.

The Utilities class gets reduced to a much simpler code:

    using Newtonsoft.Json;
    using System.Collections.Generic;
    using System.IO;

    internal class Utilities
    {
      internal static List<Product> ReadData(string fileId)
      {
        var filename = "Data/ProductStore" + fileId + ".json";
        var cadJSON = ReadFile(filename);
        return JsonConvert.DeserializeObject<List<Product>>(cadJSON);
      }
      static string ReadFile(string filename)
      {
        return File.ReadAllText(filename);
      }
    }

The output lets the user select a number and print the file's content in a similar way to what we did in previous demos, only this time selecting each file individually:

If our application now requires more changes, such as the addition of statistics, for example, the use of a single class to hold everything (the Product class, here) denotes a violation of the Interface Segregation principle.

This is because we should separate the interfaces and use a compound approach to prevent a class from dealing with unwanted or unneeded functionality.

The alternative and proper separation could be to create the following (distinct) interfaces:

    interface IProduct
    {
      string Title { get; set; }
      string Author { get; set; }
      double Price { get; set; }
    }

    interface ITopic
    {
      string Topic { get; set; }
    }

    interface IDuration
    {
      string Duration { get; set; }
    }

Now we should have three classes, since three entities can be distinguished, although they keep three fields in common. The definitions of the three classes could be expressed in this way:

    class Book : IProduct
    {
      public string Author { get; set; }
      public double Price { get; set; }
      public string Title { get; set; }
    }
    class TopicBook: IProduct, ITopic
    {
      public string Author { get; set; }
      public double Price { get; set; }
      public string Title { get; set; }
      public string Topic { get; set; }
    }
    class Video: IProduct, ITopic, IDuration
    {
      public string Author { get; set; }
      public double Price { get; set; }
      public string Title { get; set; }
      public string Topic { get; set; }
      public string Duration { get; set; }
    }

Thanks to this division, every entity keeps its own identity, and we can later create methods that use generics, or apply the Liskov Substitution principle, to deal with the distinct requirements that might arise during the application's lifecycle.
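
For example (just a sketch based on the interfaces above), a printing routine could now be written generically against the smallest abstraction it needs, so it accepts lists of Book, TopicBook, or Video alike:

    using System; 
    using System.Collections.Generic; 
 
    internal static class Printer 
    { 
      internal static void PrintProducts<T>(List<T> products) where T : IProduct 
      { 
        foreach (var item in products) 
        { 
          // Only IProduct members are used, so any product type works 
          Console.WriteLine($" {item.Title.PadRight(36, ' ')} " + 
            $"{item.Author.PadRight(20, ' ')} {item.Price}"); 
        } 
      } 
    } 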

 

The Dependency Inversion principle


The last of the SOLID principles is based on two statements, which Wikipedia (https://en.wikipedia.org/wiki/Dependency_inversion_principle) defines in this form:

  • High-level modules should not depend on low-level modules. Both should depend on abstractions
  • Abstractions should not depend upon details. Details should depend upon abstractions

As for the first statement, we should clarify what we understand by high-level and low-level modules. The terminology relates to the importance of the actions performed by a module with respect to the application as a whole.

Let's put it simply: if a module holds the business logic of a Customers class, and another module, PrinterService, includes the format that a list of Customers uses in a report, the first one would be high-level and the second low-level (the reason for the existence of the second is to provide some functionality to the first).

The second statement speaks for itself. If an abstraction depends on details, its usage as a definition contract is compromised (a change in the details could force a redefinition).

 

The (more or less) canonical example


Dependency Injection techniques are just one way of implementing this principle, and we will see them exemplified in many forms and scenarios throughout this book.

So, I'll use here the (almost) canonical code that you can find on the internet about this subject. I'm showing you an adaptation made by Munir Hassan (https://www.codeproject.com/Articles/495019/Dependency-Inversion-Principle-and-the-Dependency) on CodeProject, which uses a notification scenario to illustrate the situation, and I think it's particularly interesting. He starts with initial code such as this:

    public class Email
    {
      public void SendEmail()
      {
        // code
      }
    }
    public class Notification
    {
      private Email _email;
      public Notification()
      {
        _email = new Email();
      }
      public void PromotionalNotification()
      {
        _email.SendEmail();
      }
    }

Notification depends on Email, creating an instance in its constructor. This kind of interaction is said to be tightly coupled. If we want to send other types of notification as well, we have to modify the way the Notification class is implemented.

A way to achieve this would be to introduce an interface (a new level of abstraction) to define the concept of sending messages, and force the Email class to implement that interface:

    public interface IMessageService
    {
      void SendMessage();
    }
    public class Email : IMessageService
    {
      public void SendMessage()
      {
        // code
      }
    }
    public class Notification
    {
      private IMessageService _iMessageService;
      public Notification()
      {
        _iMessageService = new Email();
      }
      public void PromotionalNotification()
      {
        _iMessageService.SendMessage();
      }
    }

Now, the class calls something named _iMessageService, whose implementation could vary. As Hassan mentions, there are three ways to implement this pattern:

DI is the act of supplying all classes that a service needs rather than leaving the responsibility to the service to obtain dependent classes. DI typically comes in three flavors: Constructor Injection, Property Injection, and Method Injection.

In the first form (constructor injection), Hassan proposes the following:

    public class Notification
    {
      private IMessageService _iMessageService;
      public Notification(IMessageService _messageService)
      {
        this._iMessageService = _messageService;
      }
      public void PromotionalNotification()
      {
        _iMessageService.SendMessage();
      }
    }

This reminds us of what we will see in the implementation of Dependency Injection in ASP.NET Core in the following chapters. No mention of Emails here: only an IMessageService is implied.
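
A minimal usage sketch shows that choosing the concrete implementation is now the caller's job (the composition root), not Notification's:

    // Somewhere at the application's entry point 
    IMessageService service = new Email(); 
    var notification = new Notification(service); 
    notification.PromotionalNotification(); 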

You can visit the aforementioned page for more details about the other ways to implement injection, but, as I mentioned, we'll cover all those in detail in the coming chapters.

 

Other ways to implement Dependency Inversion


Generally speaking, there are many ways in which the DIP can lead to a solution. Another way to implement this principle is by using Dependency Injection techniques, derived from another way of looking at Dependency Inversion: the so-called Inversion of Control (IoC).

According to the article written by Martin Fowler (https://martinfowler.com/articles/injection.html), Inversion of Control is the principle whereby the control flow of a program is inverted: instead of the programmer controlling the flow of the program, external sources (a framework, services, and other components) take control of it.

One of those sources is a dependency container, a component that serves or provides you with some code, injecting it when required.

Some popular dependency containers for C# are Unity and Ninject, to name just a couple. In .NET Core, there's an embedded container, so there's no need to use an external one, except in cases where we require some special functionality that they provide.

In the code, you instruct this component to register certain classes of your application; so, later on, when you need an instance of one of them, you just have to declare it (typically in the constructor), and it is served to your code automatically.
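
In ASP.NET Core, for example, that registration and consumption look roughly like the following fragments (a sketch reusing the IMessageService/Email types from the previous section, with usings omitted; we'll see the real mechanics in the following chapters):

    // Startup.cs -- registration 
    public void ConfigureServices(IServiceCollection services) 
    { 
      services.AddMvc(); 
      // Serve an Email instance wherever an IMessageService is requested 
      services.AddTransient<IMessageService, Email>(); 
    } 
 
    // Any controller -- the dependency is simply declared in the constructor 
    public class NotificationController : Controller 
    { 
      private readonly IMessageService _messageService; 
      public NotificationController(IMessageService messageService) 
      { 
        _messageService = messageService; 
      } 
    } 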

Other frameworks implement this principle as well, even if they're not purely object-oriented. This is the case with AngularJS or Angular 2 in which, when you create a controller that requires access to a service, you ask for the service in the controller's function declaration, and Angular's internal DI system serves a singleton instance of the service without the intervention of the client's code.

 

Summary


In this chapter, we've reviewed the five SOLID Principles in the way they were formulated by Robert C. Martin in 2000.

We've explored each of these principles, discussing their advantages and checking their implementation with some simple code using .NET Core Console applications, to see how they can be coded.

In the next chapter, we will talk about Dependency Injection and the most popular IoC containers, reviewing how they can be used and analyzing their pros and cons in everyday applications.

About the Authors

  • Marino Posadas

    Marino Posadas is an independent senior trainer, writer, and consultant in Microsoft technologies. He is a Microsoft MVP in C#, Visual Studio, and Development Technologies; an MCT, MCPD, MCTS, MCAD, and MCSD; and was formerly the Director for Development in Spain and Portugal for Solid Quality Mentors.

    Marino has published 15 books and more than 500 articles on development technologies in several magazines and online publications. Topics covered in his books range from Clipper and Visual Basic 5.0/6.0 to C# and .NET, safe programming, and programming with Silverlight 2.0 and 4.0 and web standards. His latest books are Mastering C# and .NET Framework (Packt Publishing) and The Guide to Programming in HTML5, CSS3, and JavaScript with Visual Studio. He is also a speaker at Microsoft events, having lectured in Spain, Portugal, England, the USA, Costa Rica, and Mexico.

    You can follow him on Twitter as @MarinoPosadas (https://twitter.com/marinoposadas)

  • Tadit Dash

    Tadit Dash is a senior software engineer and a hardcore tech community contributor. Due to his exceptional contribution to the technical community, Microsoft has awarded him with the Microsoft Most Valuable Professional accolade since 2014. CodeProject has awarded him the CodeProject MVP accolade (the first from Odisha). For his constant mentorship, IndiaMentor featured him as a young mentor on their site. He was a featured speaker at DevTechDay Nepal and C# Corner Annual Conference, India. You can follow him on Twitter: @taditdash.

