Entity Framework Core Cookbook - Second Edition

By Ricardo Peres

About this book

Entity Framework is a highly recommended Object-Relational Mapping (ORM) tool used to build complex systems. To keep up in a growing market, .NET developers need a framework that provides easy access to databases, and Entity Framework has become that necessity. This book will provide .NET developers with this knowledge and guide them through working efficiently with data using Entity Framework Core.

You will start off by learning how to efficiently use Entity Framework in practical situations. You will gain a deep understanding of mapping properties and find out how to handle validation in Entity Framework. The book will then explain how to work with transactions and stored procedures along with improving Entity Framework using query libraries. Moving on, you will learn to improve complex query scenarios and implement transaction and concurrency control. You will then be taught to improve and develop Entity Framework in complex business scenarios. With the concluding chapter on performance and scalability, this book will get you ready to use Entity Framework proficiently.

Publication date:
November 2016
Publisher
Packt
Pages
324
ISBN
9781785883309

 

Chapter 1. Improving Entity Framework in the Real World

In this chapter, we will cover the following topics:

  • Improving Entity Framework by using a code-first approach

  • Unit testing and mocking

  • Creating databases from code

  • Creating mock database connections

  • Implementing the repository pattern

  • Implementing the unit of work pattern

 

Introduction


If we were to buy the materials to build a house, would we buy the bare minimum to get four walls up and a roof, without a kitchen or a bathroom? Or would we buy enough material to build the house with multiple bedrooms, a kitchen, and multiple bathrooms?

The problem lies in how we define the bare minimum. The progression of software development has made us realize that there are ways of building software that do not require additional effort, but reap serious rewards. This is the same choice we are faced with when we decide on the approach to take with Entity Framework. We could just get it running and it would work most of the time.

Customizing and adding to it later would be difficult, but doable. There are a few things that we would need to give up for this approach. The most important among those is control over how the code is written. We have already seen that applications grow, mature, and have features added. The only thing that stays constant is the fact that at some point in time, in some way, we will come to push the envelope of almost every tool that we leverage to help us. The other side is that we could go into development, being aware of the value-added benefits that cost nothing, and with that knowledge, avoid dealing with unnecessary constraints.

When working with Entity Framework, there are some paths and options available to us. There are two main workflows for working with Object-Relational Mapper (ORM) tools such as Entity Framework:

  • Database first: We start by defining our database objects and their relations, then write our classes to match them, and we bind them together

  • Code first: We start by designing our classes as Plain Old CLR Objects (POCOs) to model the concepts that we wish to represent, without caring (too much!) how they will be persisted in the database

    Note

    The model-first approach was dropped in Entity Framework Core 1.0.

While following the database-first approach, we are not concerned with the actual implementation of our classes, but merely the structures—tables, columns, keys—on which they will be persisted. In contrast, with POCOs or code first, we start by designing the classes that will be used in our programs to represent the business and domain concepts that we wish to model. This is known as Domain-Driven Design (DDD). DDD certainly includes code first, but it is much more than that.

Both of these approaches will solve the problem, with varying degrees of flexibility.

Starting with a database-first approach in Entity Framework means we have an existing database schema and are going to let the schema, along with the metadata in the database, determine the structure of our business objects and domain model. The database-first approach is normally how most of us start out with Entity Framework and other ORMs, but the tendency is to move toward more flexible solutions as we gain proficiency with the framework. This will drastically reduce the amount of code that we need to write, but will also limit us to working within the structure of the generated code. Entities, which are generated by default here, are not 100% usable with WCF services, ASP.NET Web APIs, and similar technologies – just think about lazy loading and disconnected entities, for example. This is not necessarily a bad thing if we have a well-built database schema and a domain model that translates well into Data Transfer Objects (DTOs). Such a domain and database combination is a rare exception in the world of code production. Due to the lack of flexibility and the restrictions on the way these objects are used, this solution is viewed as a short-term or small-project solution.

Modeling the domain first allows us to fully visualize the structure of the data in the application, and work in a more object-oriented manner while developing our application. Just think of this: a relational database does not understand OOP concepts such as inheritance, static members, and virtual methods, although, for sure, there are ways to simulate them in the relational world. The main reasons for the lack of adoption of this approach include the poor support for round-trip updates, and the lack of documentation on manipulating the POCO model so as to produce the proper database structure. It can be a bit daunting for developers with less experience, because they probably won't know how to get started. Historically, the database had to be created each time the POCO model changed, causing data loss when structural changes were made.

Coding the classes first allows us to work entirely in an object-oriented direction, and not worry about the structuring of the database, without the restrictions that the model-first designer imposes. This abstraction gives us the ability to craft a more logically sound application that focuses on the behavior of the application rather than the data generated by it. The objects that we produce are capable of being serialized over any service, have true persistence ignorance, and can be shared as contract objects, as they are not specific to the database implementation. This approach is also much more flexible, as it is entirely dependent on the code that we write. It allows us to translate our objects into database records without modifying the structure of our application. All of this, however, is somewhat theoretical, in the sense that we still need to worry about having primary key properties, generation strategies, and so on.

In each of the recipes presented in this book, we will follow an incremental approach, where we will start by adding the stuff we need for the most basic cases, and later on, as we make progress, we will refactor it to add more complex stuff.

 

Improving Entity Framework by using a code-first approach


In this recipe, we start by separating the application into a user interface (UI) layer, a data access layer, and a business logic layer. This will allow us to keep our objects separated from database-specific implementations. The objects and the implementation of the database context will use a layered approach so we can add testing to the application. The following table shows the various projects used in the code-first approach, and their purpose:

Project          Purpose

BusinessLogic    Stores the entities that represent business entities.

DataAccess       Classes that access data and manipulate business entities. Depends on the BusinessLogic project.

UI               The user interface (the MVC application). Makes use of the BusinessLogic and DataAccess projects.

UnitTests        Unit tests. Uses both the BusinessLogic and DataAccess projects.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will also be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

Finally, xunit is the package we will be using for the unit tests, and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and that Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open Using EF Core Solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

Let's get connected to the database using the following steps:

  1. Add a new C# class named Blog with the following code to the BusinessLogic project:

    namespace BusinessLogic
    {
        public class Blog
        {
            public int Id { get; set; }
            public string Title { get; set; }
        }
    }
  2. Create a new C# class named BlogContext with the following code in the DataAccess project:

    using Microsoft.EntityFrameworkCore;
    using BusinessLogic;
    namespace DataAccess
    {
        public class BlogContext : DbContext
        {
            private readonly string _connectionString;
            public BlogContext(string connectionString)
            {
                _connectionString = connectionString;
            }
            public DbSet<Blog> Blogs { get; set; }
            protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
            {
                optionsBuilder.UseSqlServer(_connectionString);
                base.OnConfiguring(optionsBuilder);
            }
        }
    }

    Note

    For Entity Framework 6, replace the Microsoft.EntityFrameworkCore namespace with System.Data.Entity and call the base constructor of DbContext passing it the connection string.

  3. Add the following connection string to the appsettings.json file:

    {
      "Data": {
        "Blog": {
          "ConnectionString":"Server=(local)\\SQLEXPRESS; Database=Blog; Integrated Security=SSPI;MultipleActiveResultSets=true"
        }
      }
    }

    Note

    With Entity Framework 6, we would add this connection string to the Web.config file, under the connectionStrings section, with the name Blog. Of course, change the connection string to match your system settings, for example, the name of the SQL Server instance (SQLEXPRESS, in this example).

  4. In the Controllers\BlogController.cs file, modify the Index method with the following code:

    using BusinessLogic;
    using DataAccess;
    using System.Linq;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Configuration;
    namespace UI.Controllers
    {
        public class BlogController : Controller
        {
            private readonly BlogContext _blogContext;
            public BlogController(IConfiguration config)
            {
                _blogContext = new BlogContext(config["Data:Blog:ConnectionString"]);
            }
            public IActionResult Index()
            {
                var blog = _blogContext.Blogs.First();
                return View(blog);
            }
        }
    }

    Note

    For Entity Framework 6, remove the config parameter from the BlogController constructor, and initialize BlogContext with the ConfigurationManager.ConnectionStrings["Blog"].ConnectionString value.

  5. Finally, in Startup.cs, we need to register the IConfiguration service so that it can be injected into the BlogController constructor. Please add the following lines to the ConfigureServices method:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSingleton<IConfiguration>(_ => Configuration);
    }

    Note

    Unlike ASP.NET Core, previous versions of ASP.NET MVC do not include a built-in Inversion of Control container. You will need to bring your own and register it with the DependencyResolver.SetResolver method, or rely on a third-party implementation.

How it works…

The blog entity is created but not mapped explicitly to a database structure. This takes advantage of convention over configuration, found in the code-first approach, wherein the properties are examined and then the table mappings are determined. This is obviously a time saver, but it is fairly limited if you have a non-standard database schema. The other big advantage of this approach is that the entity is persistence-ignorant. In other words, it has no knowledge of how it is to be stored in the database.

The BlogContext class has a few key elements to understand. The first is to understand the inheritance from DbContext. DbContext is the code-first context class, which encapsulates all connection pooling, entity change tracking, and database interactions. We added a constructor to take in the connection string, so that it knows where to connect to.

We used the standard built-in functionality for the connection string, storing it in a text (JSON) file, but this could easily be any application setting store; one such location would be the .NET Core secrets file. We pass the connection string on the construction of the BlogContext. This enables us to pass that connection string from anywhere, so that we are not coupled to a particular source. Because Entity Framework is agnostic when it comes to data sources (it can use virtually any database server), we need to tell it to use the SQL Server provider, and to connect to it using the supplied connection string. That is what the UseSqlServer method does.
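As a variation on overriding OnConfiguring, EF Core also allows the configuration to be supplied from outside the context via DbContextOptions. The following is a sketch, not taken from the book's sample code (the OptionsBlogContext name is invented for illustration), of what that looks like; it makes swapping providers, for instance in tests, easier, because the caller decides which provider to use:

```csharp
using BusinessLogic;
using Microsoft.EntityFrameworkCore;

namespace DataAccess
{
    // Sketch: the provider and connection string are chosen by whoever
    // constructs the context, instead of being hardcoded in OnConfiguring.
    public class OptionsBlogContext : DbContext
    {
        public OptionsBlogContext(DbContextOptions<OptionsBlogContext> options)
            : base(options)
        {
        }

        public DbSet<Blog> Blogs { get; set; }
    }
}
```

The caller then builds the options, for example with new DbContextOptionsBuilder&lt;OptionsBlogContext&gt;().UseSqlServer(connectionString).Options.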

There's more…

Approaching the use of code-first development, we have several overarching themes and industry standards that we need to be aware of. Knowing about them will help us leverage the power of this tool without falling into the pit of using it without understanding.

Convention over configuration

This is a design paradigm that says that default rules dictate how an application will behave, but allows the developer to override any of the default rules with specific behavior, in case it is needed. This allows us, as programmers, to avoid using a lot of configuration files or code to specify how we intended something to be used or configured. In our case, Entity Framework allows the most common behaviors to use default conventions that remove the need for the majority of the configuration. When the behavior we wish to create is not supported by the convention, we can easily override the convention and add the required behavior without needing to abandon it everywhere else. This leaves us with a flexible and extendable system for configuring the database interaction.
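As a sketch of overriding a single convention (assuming the Blog entity from this recipe; the column name and length are invented for illustration), the fluent API in OnModelCreating lets us replace only the default we disagree with, while every other convention still applies:

```csharp
using BusinessLogic;
using Microsoft.EntityFrameworkCore;

namespace DataAccess
{
    public class ConventionsBlogContext : DbContext
    {
        // By convention, Blog maps to a "Blogs" table and Id becomes the key.
        public DbSet<Blog> Blogs { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // Override just one default: map Title to a differently named
            // column and cap its length. Everything else stays conventional.
            modelBuilder.Entity<Blog>()
                .Property(b => b.Title)
                .HasColumnName("BlogTitle")
                .HasMaxLength(256);
        }
    }
}
```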

Model-View-Controller

In our example, we use Microsoft ASP.NET MVC. We would use MVC 5 for Entity Framework 6 and .NET 4.x, and MVC Core 1 for Entity Framework Core 1 and .NET Core, and, in both cases, the Razor view engine for rendering the UI. We have provided some simple views that will allow us to focus on the solutions and the code without needing to deal with UI design and markup.

Single Responsibility Principle

One of the SOLID principles of development, the Single Responsibility Principle (SRP), states that every class should have only one reason to change. In this chapter, there are several examples of that in use, for example, the separation of model, view and controller, as prescribed by MVC.

Entities in code-first have the structure of data as their singular responsibility in memory. This means that we will only need to modify the entities if the structure needs to change. By contrast, the code automatically generated by the database-first tools of Entity Framework derives your entities from base classes within the Entity Framework Application Programming Interface (API). Microsoft's occasional updates to those base classes introduce a second reason for the entities to change, thus violating our principle.

Provider Model

Entity Framework relies on providers for different parts of its functionality; the most important, for sure, is the one that supplies the connection to the underlying data store. Different providers exist for different data sources, from traditional relational databases such as SQL Server, to non-relational ones, such as Redis and Azure Table Storage. There's even one for abstracting a database purely in memory!

Testing

While we did not actively test this recipe, we layered in the abstractions to do so. All of the other recipes will be executed and presented using test-driven development, as we believe it leads to better software design and a much clearer representation of intent.

See also

In this chapter:

  • Unit testing and mocking

  • Implementing the unit of work pattern

  • Implementing the repository pattern

 

Unit testing and mocking


Software development is not just writing code. We also need to test it, to confirm that it does what is expected. There are several kinds of tests, and unit tests are one of the most popular. In this chapter, we will set up the unit test framework that will accompany us throughout the book. Another important concept is that of mocking; by mocking a class (or interface), we can provide a dummy implementation of it that we can use instead of the real thing. This comes in handy in unit tests, because we do not always have access to real-life data and environments, and this way, we can pretend we do.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

To mock interfaces and base classes, we will use Moq.

Finally, xunit is the package we will be using for the unit tests, and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and that Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open Using EF Core Solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

  1. Start by adding the required NuGet packages to the UnitTests project. We'll edit and add two dependencies, the main xUnit library and its runner for .NET Core, and then set the runner command.

  2. Now, let's add a base class to the project; create a new C# class file and call it BaseTest.cs:

    using Microsoft.Extensions.Configuration;
    namespace UnitTests
    {
        public abstract class BaseTest
        {
            protected BaseTest()
            {
                var builder = new ConfigurationBuilder()
                    .AddJsonFile("appsettings.json");
                Configuration = builder.Build();
            }
            protected IConfiguration Configuration { get; private set; }
        }
    }
  3. Now, for a quick test, add a new C# file, called SimpleTest.cs, to the project, with this content:

    using Moq;
    using Xunit;
    namespace UnitTests
    {
        public class SimpleTest : BaseTest
        {
            [Fact]
            public void CanReadFromConfiguration()
            {
                var connectionString = Configuration["Data:Blog:ConnectionString"];
                Assert.NotNull(connectionString);
                Assert.NotEmpty(connectionString);
            }
            [Fact]
            public void CanMock()
            {
                var mock = new Mock<IConfiguration>();
                mock.Setup(x => x[It.IsNotNull<string>()]).Returns("Dummy Value");
                var configuration = mock.Object;
                var value = configuration["Dummy Key"];
                Assert.NotNull(value);
                Assert.NotEmpty(value);
            }
        }
    }
  4. If you want to have the xUnit runner running your unit tests automatically, you will need to set the test command as the profile to run in the project properties:

    Project properties

How it works…

We have a unit tests base class that loads configuration from an external file, in pretty much the same way as the ASP.NET Core template does. Any unit tests that we will define later on should inherit from this one.

When the runner executes, it will discover all unit tests in the project—those public concrete methods marked with the [Fact] attribute. It will then try to execute them and evaluate any Assert calls within.

The Moq framework lets you define your own implementations for any abstract or interface methods that you wish to make testable. In this example, we are mocking the IConfiguration interface, and saying that any attempt to retrieve a configuration value should return a dummy value.

If you run this project, you will get the following output:

Running unit tests

There's more…

Testing to the edges of an application requires that we adhere to certain practices that allow us to shrink the untestable sections of the code. This will allow us to unit test more code, and make our integration tests far more specific.

One class under test

An important point to remember while performing unit testing is that we should only be testing a single class. The point of a unit test is to ensure that a single operation of this class performs the way we expect it to.

This is why simulating classes that are not under test is so important. We do not want the behavior of these supporting classes to affect the outcomes of unit tests for the class that is being tested.

Integration tests

Often, it is equally important to test the actual combination of your various classes to ensure they work properly together. These integration tests are valuable, but are almost always more brittle, require more setup, and are run slower than the unit tests. We certainly need integration tests on any project of a reasonable size, but we want unit tests first.

Arrange, Act, Assert

Most unit tests can be viewed as having three parts: Arrange, Act, and Assert. Arrange is where we prepare the environment to perform the test, for instance, mocking the IDbContext with dummy data, with the expectation that Set will be called. Act is where we perform the action under test, and is most often a single line of code. Assert is where we ensure that the proper result was reached. Note the comments in the examples in this chapter that call out these sections. We will use them throughout the book to make it clear what a test is trying to do.
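The three sections are usually called out with comments, as in this sketch of an xUnit test. The Slugifier class is made up purely to illustrate the shape of an Arrange/Act/Assert test, and is defined inline to keep the sketch self-contained:

```csharp
using Xunit;

namespace UnitTests
{
    // Hypothetical class under test, invented for this illustration.
    public static class Slugifier
    {
        public static string Slugify(string title)
        {
            return title.Trim().ToLowerInvariant().Replace(' ', '-');
        }
    }

    public class SlugifierTest
    {
        [Fact]
        public void SlugifyReplacesSpacesWithDashes()
        {
            // Arrange: prepare the input for the scenario under test
            var title = " Entity Framework Core ";

            // Act: a single line exercising the class under test
            var slug = Slugifier.Slugify(title);

            // Assert: verify the expected outcome
            Assert.Equal("entity-framework-core", slug);
        }
    }
}
```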

Mocking

Mocking and stubbing (providing a pre-built implementation for methods to intercept) is a very interesting topic. There are numerous frameworks that can provide mocking capabilities for even the most challenging scenarios, such as static methods and properties. Mocking fits nicely with unit tests, because we seldom have an environment that is identical to the one where we will be deploying, nor do we usually have "real" data. Also, data changes, and we need a way to reproduce things consistently.

 

Creating databases from code


As we start down the code-first path, there are a couple of things that could be true. If we already have a database, then we will need to configure our objects to that schema, but what if we do not have one? That is the subject of this recipe: creating a database from the objects we declare.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will also be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

To mock interfaces and base classes, we will use Moq.

Finally, xunit is the package we will be using for the unit tests, and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and that Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open Using EF Core Solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

  1. First, we write a unit test with the following code in a new C# file called DatabaseTest.cs, in the UnitTests project:

    using BusinessLogic;
    using Xunit;
    using DataAccess;
    namespace UnitTests
    {
        public class DatabaseTest : BaseTest
        {
            [Fact]
            public void CanCreateDatabase()
            {
                //Arrange
                var connectionString = Configuration["Data:Blog:ConnectionString"];
                var context = new BlogContext(connectionString);
                //Act
                var created = context.Database.EnsureCreated();
                //Assert
                Assert.True(created);
            }
        }
    }
  2. We will need to add a connection string to the UnitTests project to our database; we do so by providing an appsettings.json file identical to the one introduced in the previous recipe:

    {
        "Data": {
            "Blog": {
                "ConnectionString": "Server=(local)\\SQLEXPRESS;Database=Blog;Integrated Security=SSPI;MultipleActiveResultSets=true"
            }
        }
    }

    Note

    Change the connection string to match your specific settings.

  3. In the DataAccess project, we will use the C# BlogContext class that was introduced in the first recipe of this chapter:

    using Microsoft.EntityFrameworkCore;
    using BusinessLogic;
    namespace DataAccess
    {
        public class BlogContext : DbContext
        {
            private readonly string _connectionString;
            public BlogContext(string connectionString)
            {
                _connectionString = connectionString;
            }
            public DbSet<Blog> Blogs { get; set; }
            protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
            {
                optionsBuilder.UseSqlServer(_connectionString);
                base.OnConfiguring(optionsBuilder);
            }
        }
    }

How it works…

Entity Framework will initialize itself by calling the OnConfiguring method whenever it first needs to access data; after that, it knows which database to use. The EnsureCreated method will make sure that the database either already exists or is created on the spot.
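In test setups it is common to pair EnsureCreated with its counterpart, EnsureDeleted, so that every run starts from a clean schema. A minimal sketch, assuming the BlogContext from this recipe (the TestDatabase helper is invented for illustration):

```csharp
using DataAccess;

public static class TestDatabase
{
    // Sketch: rebuild the database before a test run. EnsureDeleted drops it
    // if it exists; EnsureCreated then recreates it from the current model.
    // Note that neither method uses Migrations.
    public static void Recreate(string connectionString)
    {
        using (var context = new BlogContext(connectionString))
        {
            context.Database.EnsureDeleted();
            context.Database.EnsureCreated();
        }
    }
}
```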

There's more…

When we start a greenfield project, we have that rush of happiness to be working in a problem domain that no one has touched before. This can be exhilarating and daunting at the same time. The objects we define and the structure of our program come naturally to a programmer, but most of us need to think differently to design the database schema. This is where the tools can help translate our objects and intended structure into the database schema, if we leverage some patterns. We can then take full advantage of being object-oriented programmers.

A word of caution: previous versions of Entity Framework offered mechanisms such as database initializers. These not only would create the database, but also rebuild it, in case the code-first model had changed, and even add some initial data. For better or worse, these mechanisms are now gone, and we will need to leverage Entity Framework Core Migrations for similar features. We will discuss Migrations in another recipe.
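For reference, a hedged sketch of what the Migrations-based replacement looks like from code (the PrepareDatabase helper is invented; the migration itself would be generated beforehand with the dotnet ef command-line tools):

```csharp
using DataAccess;
using Microsoft.EntityFrameworkCore;

public static class DatabaseSetup
{
    // Sketch: Database.Migrate() creates the database if needed and applies
    // any pending migrations, replacing the old database-initializer behavior.
    // Migrations are generated beforehand, for example with:
    //   dotnet ef migrations add InitialCreate
    public static void PrepareDatabase(string connectionString)
    {
        using (var context = new BlogContext(connectionString))
        {
            context.Database.Migrate();
        }
    }
}
```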

See also

In this chapter:

  • Unit testing and mocking

 

Creating mock database connections


When working with Entity Framework in a test-driven manner, we need to be able to slip a layer between our last line of code and the framework. This allows us to simulate the database connection without actually hitting the database.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will also be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

To mock interfaces and base classes, we will use Moq.

Finally, xunit is the package we will be using for the unit tests, and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and that Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open Using EF Core Solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

  1. In the DataAccess project, add a new C# interface named IDbContext using the following code:

    using System.Linq;
    namespace DataAccess
    {
        public interface IDbContext
        {
            IQueryable<T> Set<T>() where T : class;
        }
    }
  2. Add a new unit test in the UnitTests project to test so we can supply dummy results for fake database calls with the following code:

    using System.Linq;
    using DataAccess;
    using BusinessLogic;
    using Moq;
    using Xunit;
    namespace UnitTests
    {
        public class MockTest : BaseTest
        {      
            [Fact]
            public void CanMock()
            {
               //Arrange
                var data = new[] { new Blog { Id = 1, Title = "Title" }, new Blog { Id = 2, Title = "No Title" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                //Act
                var context = mock.Object;
                var blogs = context.Set<Blog>();
                //Assert
                Assert.Equal(data, blogs);
            }
        }
    }
  3. In the DataAccess project, update the C# class named BlogContext with the following code:

    using BusinessLogic;
    using System.Linq;
    using Microsoft.EntityFrameworkCore;
    namespace DataAccess
    {
        public class BlogContext : DbContext, IDbContext
        {
            private readonly string _connectionString;
            public BlogContext(string connectionString)
            {
                _connectionString = connectionString;
            }
            public DbSet<Blog> Blogs { get; set; }
            IQueryable<T> IDbContext.Set<T>()
            {
                return base.Set<T>();  
            }
            protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
            {
                optionsBuilder.UseSqlServer(_connectionString);
                base.OnConfiguring(optionsBuilder);
            }
            public void Rollback()
            {
                // Detach all tracked entities, discarding any pending changes.
                ChangeTracker.Entries().ToList().ForEach(x => x.State = EntityState.Detached);
            }
        }
    }

How it works…

We implemented a fake class (a mock) that mimics some of the functionality of our IDbContext interface that we wish to expose and make testable; in this case, it is just the retrieval of data. This allows us to keep our tests independent of the actual data in the database. Now that we have data available from our mock, we can test whether it acts exactly as we coded it to. Knowing the inputs of the data access code, we can test the outputs for validity. We made our existing BlogContext class implement the interface that defines the contract we wish to mock, IDbContext, and we configured a mock class to return dummy data whenever its Set method is called.

This layering is accomplished by having a Set method as an abstraction between the public framework method of Set<T> and our code, so we can change the type to something constructible. By layering this method, we can now control every return from the database in the test scenarios.

This layering also provides a better separation of concerns, as the DbSet<T> in Entity Framework mingles multiple independent concerns, such as connection management and querying, into a single object, whereas IQueryable<T> is the standard .NET interface for performing queries against a data source (DbSet<T> implements IQueryable<T>). We will continue to separate these concerns in future recipes.
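This layering can be sketched without Entity Framework at all. The following is a minimal, hypothetical illustration (none of these types come from the recipe): a context interface exposes only IQueryable<T>, so a test double can serve entities from memory while the consuming code stays unchanged.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical minimal version of the abstraction: the context is hidden
// behind an interface that only exposes an IQueryable<T>.
public interface IReadOnlyContext
{
    IQueryable<T> Set<T>() where T : class;
}

// Test double: serves entities from an in-memory list instead of opening
// a database connection.
public class InMemoryContext : IReadOnlyContext
{
    private readonly Dictionary<System.Type, object> _sets =
        new Dictionary<System.Type, object>();

    public void Add<T>(params T[] items) where T : class
        => _sets[typeof(T)] = items.AsQueryable();

    public IQueryable<T> Set<T>() where T : class
        => (IQueryable<T>)_sets[typeof(T)];
}

public class Post { public int Id; public string Title; }

public static class LayeringDemo
{
    // Code under test only sees IReadOnlyContext, so it cannot tell whether
    // the data came from SQL Server or from the in-memory double.
    public static int CountPosts(IReadOnlyContext context)
        => context.Set<Post>().Count();

    public static int Run()
    {
        var context = new InMemoryContext();
        context.Add(new Post { Id = 1 }, new Post { Id = 2 });
        return CountPosts(context);
    }
}
```

Because the consumer depends only on the interface, swapping the real context for the double requires no change to the code under test.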

See also

In this chapter:

  • Unit testing and mocking

 

Implementing the repository pattern


This recipe is an implementation of the Repository Pattern, which allows us to abstract the underlying data source and the queries used to obtain the data.

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will also be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

To mock interfaces and base classes, we will use Moq.

Finally, xunit is the package we will be using for the unit tests and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open Using EF Core Solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

  1. Create a new file in the DataAccess project, with this content:

    using System.Linq;
    namespace DataAccess
    {
        public interface IRepository<out T> where T : class
        {
            IQueryable<T> Set();
            void RollbackChanges();
            void SaveChanges();
        }
    }
  2. In the DataAccess project, add a new C# interface named IBlogRepository with the following code:

    using BusinessLogic;
    namespace DataAccess
    {
        public interface IBlogRepository : IRepository<Blog>
        {
        }
    }
  3. In the DataAccess project, create a new C# class named BlogRepository containing the following code:

    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
        public class BlogRepository : IBlogRepository
        {
            private readonly IDbContext _context;
            public BlogRepository(IDbContext context)
            {
                _context = context;
            }
            public IQueryable<Blog> Set()
            {
                return _context.Set<Blog>();
            }
            public void RollbackChanges()
            {
                _context.Rollback();
            }
            public void SaveChanges()
            {
                _context.SaveChanges();
            }
        }
    }
  4. We'll add a new unit test in the UnitTests project that defines a test for using the repository with the following code:

    using System.Linq;
    using BusinessLogic;
    using DataAccess;
    using Moq;
    using Xunit;
    namespace UnitTests
    {
        public class RepositoryTest : BaseTest
        {
            [Fact]
            public void ShouldAllowGettingASetOfObjectsGenerically()
            {
                //Arrange
                var data = new[] { new Blog { Id = 1, Title = "Title" }, new Blog { Id = 2, Title = "No Title" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                var context = mock.Object;
                var repository = new BlogRepository(context);
                //Act
                var blogs = repository.Set();
                //Assert
                Assert.Equal(data, blogs);
            }
        }
    }
  5. In the BlogController class of the UI project, update the usage of BlogContext so it uses IBlogRepository with the following code:

    using BusinessLogic;
    using DataAccess;
    using System.Linq;
    using Microsoft.AspNetCore.Mvc;
    namespace UI.Controllers
    {
        public class BlogController : Controller
        {
            private readonly IBlogRepository _repository;
            public BlogController(IBlogRepository repository)
            {
                _repository = repository;
            }
            public IActionResult Index()
            {
                var blog = _repository.Set().First();
                return View(blog);
            }
        }
    }
  6. Finally, we need to register the IBlogRepository service for dependency injection so that it can be passed automatically to the BlogController's constructor. We do that in the Startup.cs file in the UI project, in the ConfigureServices method:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSingleton<IConfiguration>(_ => Configuration);
        services.AddScoped<IDbContext>(_ => new BlogContext(Configuration["Data:Blog:ConnectionString"]));
        services.AddScoped<IBlogRepository>(_ => new BlogRepository(_.GetService<IDbContext>()));
    }

How it works…

We start off with a test that defines what we hope to accomplish. We use mocking (verifiable fake objects) to ensure that we get the behavior that we expect. The test states that any BlogRepository function will communicate with the context to retrieve the data. This is what we are hoping to accomplish, as doing so allows us to layer tests and extension points into the domain.

The usage of the repository interface is a key part of this flexible implementation as it will allow us to leverage mocks, and test the business layer, while still maintaining an extensible solution. The interface to the context is a straightforward API for all database communication. In this example, we only need to read data from the database, so the interface is very simple.

Even in this simple implementation of the interface, we see that there are opportunities to increase reusability. We could have created a method or property that returned the list of blogs, but then we would have had to modify the context and interface for every new entity. Instead, we set up the Set method to take a generic type, which allows us to add entities to the usage of the interface without modifying the interface. We will only need to modify the implementation.

Notice that we constrained the IRepository interface to accept only the reference types for T, using the where T : class constraint. We did this because value types cannot be stored using Entity Framework; if you had a base class, you could use it here to constrain the usage of the generic even further. Importantly, not all reference types are valid for T, but the constraint is as close as we can get using C#. Interfaces are not valid because Entity Framework cannot construct them when it needs to create an entity. Instead, it will produce a runtime exception, as they are valid reference types and therefore the compiler won't complain.
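The effect of the constraint can be seen in a small, self-contained sketch (the types here are illustrative, not from the recipe): one generic implementation serves any reference type, while a value type is rejected by the compiler.

```csharp
using System.Collections.Generic;
using System.Linq;

// The constraint limits T to reference types at compile time.
public interface IStore<T> where T : class
{
    IQueryable<T> Set();
}

public class ListStore<T> : IStore<T> where T : class
{
    private readonly List<T> _items = new List<T>();
    public void Add(T item) => _items.Add(item);
    public IQueryable<T> Set() => _items.AsQueryable();
}

public class Author { public string Name; }
public class Article { public string Title; }

public static class ConstraintDemo
{
    public static int Run()
    {
        // The same generic implementation serves any entity type...
        var authors = new ListStore<Author>();
        authors.Add(new Author { Name = "A" });
        var articles = new ListStore<Article>();
        articles.Add(new Article { Title = "T" });

        // ...but a value type is rejected at compile time:
        // var numbers = new ListStore<int>();   // error CS0452

        // An interface, by contrast, satisfies 'where T : class' and the
        // compiler accepts it; the failure only surfaces at runtime when
        // an instance needs to be constructed.
        return authors.Set().Count() + articles.Set().Count();
    }
}
```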

Once we have the context, we need to wrap it with an abstraction. IBlogRepository will allow us to query the data without allowing direct control over the database connection. We can hide the details of the specific implementation, the actual context object, while surfacing a simplified API for gathering data. We can also introduce specific operations for the Blog entity here.

The other interface that we abstracted is the IDbContext interface. This abstraction allows us to intercept operations just before they are sent to the database. This makes the untestable part of the application as thin as possible. We can, and will, test right up to the point of database connection.

We had to register the two interfaces, IDbContext and IBlogRepository, in the ASP.NET dependency resolver. This is achieved at startup time, so that any code that requires these services can use them. You will notice that the registration for IBlogRepository makes use of the IDbContext registration. This is OK, because it is a requirement for the actual implementation of BlogRepository to rely on IDbContext to actually retrieve the data.

There's more…

Keeping the repository implementation clean requires us to leverage some principles and patterns that are at the core of object-oriented programming, but not specific to using Entity Framework. These principles will not only help us to write clean implementations of Entity Framework, but can also be leveraged by other areas of our code.

Dependency Inversion Principle

Dependency inversion is another SOLID principle. This states that all of the dependencies of an object should be clearly visible and passed in, or injected, to create the object. The benefit of this is twofold: the first is exposing all of the dependencies so the effects of using a piece of code are clear to those who will use the class. The second benefit is that by injecting these dependencies at construction, they allow us to unit test by passing in mocks of the dependent objects. Granular unit tests require the ability to abstract dependencies, so we can ensure only one object is under test.
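As a hypothetical sketch of the principle (the names below are invented for illustration), the dependency is declared in the constructor, so a test can hand in a fake:

```csharp
// Dependency inversion: the service does not create its own data source;
// whoever constructs it must pass one in, which is what lets a test
// supply a fake implementation.
public interface IBlogTitleSource
{
    string FirstTitle();
}

// Test double injected in place of a real, database-backed implementation.
public class FakeTitleSource : IBlogTitleSource
{
    public string FirstTitle() => "fake title";
}

public class BlogService
{
    private readonly IBlogTitleSource _source;

    // The dependency is visible and injected, never new-ed up inside.
    public BlogService(IBlogTitleSource source)
    {
        _source = source;
    }

    public string Describe() => "First blog: " + _source.FirstTitle();
}

public static class DipDemo
{
    public static string Run()
        => new BlogService(new FakeTitleSource()).Describe();
}
```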

Repository and caching

This repository pattern gives us the perfect area for implementing a complex or global caching mechanism. If we want to persist a value into the cache at the point of retrieval, and not retrieve it again, the repository class is the perfect location for such logic. This layer of abstraction allows us to move beyond simple implementations and start thinking about solving business problems quickly, and later extend to handle more complex scenarios as they are warranted by the requirements of the specific project. You can think of repository as a well-tested 80%+ solution. Put off anything more until the last responsible moment.
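A caching layer at this level might look like the following sketch, which wraps an arbitrary loader function and counts how often the underlying store is actually hit; all names are illustrative:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical caching decorator: wraps any repository-like loader and
// remembers values after the first retrieval, so repeated reads never
// go back to the underlying store.
public class CachingRepository<TKey, TValue>
{
    private readonly Func<TKey, TValue> _load;   // the "real" repository call
    private readonly Dictionary<TKey, TValue> _cache =
        new Dictionary<TKey, TValue>();

    public int LoadCount { get; private set; }   // how often the store was hit

    public CachingRepository(Func<TKey, TValue> load)
    {
        _load = load;
    }

    public TValue Get(TKey key)
    {
        TValue value;
        if (!_cache.TryGetValue(key, out value))
        {
            value = _load(key);   // only reached on a cache miss
            LoadCount++;
            _cache[key] = value;
        }
        return value;
    }
}

public static class CachingDemo
{
    public static int Run()
    {
        var repo = new CachingRepository<int, string>(id => "blog " + id);
        repo.Get(1);
        repo.Get(1);   // served from the cache, no second load
        return repo.LoadCount;
    }
}
```

Because the decorator depends only on a loader delegate, it can be introduced later without touching the callers, which is exactly the "last responsible moment" flexibility described above.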

Mocking

The usage of mocks is commonplace in tests because mocks allow us to verify underlying behavior without having more than one object under test. This is a fundamental piece of the puzzle for test-driven development. When you test at a unit level, you want to make sure that the level directly following the one you are testing was called correctly while not actually executing the specific implementation. This is what mocking buys us.
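A mock can be as simple as a hand-rolled class that records the interaction; this illustrative sketch (invented names) verifies that the layer below was called without executing any real implementation:

```csharp
// A hand-rolled mock: records that the layer below was called,
// without running any real persistence code.
public interface ISaver
{
    void Save();
}

public class MockSaver : ISaver
{
    public bool SaveCalled { get; private set; }
    public void Save() => SaveCalled = true;
}

public class Workflow
{
    private readonly ISaver _saver;
    public Workflow(ISaver saver) { _saver = saver; }
    public void Finish() => _saver.Save();
}

public static class MockDemo
{
    public static bool Run()
    {
        var mock = new MockSaver();
        new Workflow(mock).Finish();
        return mock.SaveCalled;   // the interaction, not the implementation, is verified
    }
}
```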

Where generic constraint

There are times when we need to create complex sets of queries that will be used frequently, but only by one or two objects. When this situation occurs, we want to reuse that code without needing to duplicate it for each object. This is where the where constraint helps us. It allows us to limit generically defined behavior to an object or set of objects that share a common interface or base class. The extension possibilities are nearly limitless.
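For example, a constraint against a shared base class lets one piece of query logic serve every entity deriving from it; the base class and entities below are hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical shared base class; the constraint lets a single piece of
// query logic serve every entity that derives from it.
public abstract class EntityBase
{
    public int Id;
}

public class Post : EntityBase { public string Title; }
public class Comment : EntityBase { public string Text; }

public static class EntityQueries
{
    // Written once, reusable for any EntityBase descendant.
    public static T FindById<T>(this IEnumerable<T> source, int id)
        where T : EntityBase
        => source.FirstOrDefault(e => e.Id == id);
}

public static class ConstraintReuseDemo
{
    public static string Run()
    {
        var posts = new List<Post> { new Post { Id = 1, Title = "first" } };
        var comments = new List<Comment> { new Comment { Id = 7, Text = "hi" } };
        return posts.FindById(1).Title + "/" + comments.FindById(7).Text;
    }
}
```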

See also

In this chapter:

  • Implementing the unit of work pattern

  • Creating mock database connections

 

Implementing the unit of work pattern


In the next example, we present an implementation of the Unit of Work pattern. This pattern was described by Martin Fowler, and you can read about it at http://martinfowler.com/eaaCatalog/unitOfWork.html. Basically, it states that we keep track of all entities affected by a business transaction and send their changes to the database all at once, with the changes applied in the proper order (inserts before updates, and so on).

Getting ready

We will be using the NuGet Package Manager to install the Entity Framework Core 1 package, Microsoft.EntityFrameworkCore. We will also be using a SQL Server database for storing the data, so we will also need Microsoft.EntityFrameworkCore.SqlServer.

To mock interfaces and base classes, we will use Moq.

Finally, xunit is the package we will be using for the unit tests and dotnet-test-xunit adds tooling support for Visual Studio. Note that the UnitTests project is a .NET Core App 1.0 (netcoreapp1.0), that Microsoft.EntityFrameworkCore.Design is configured as a build dependency, and Microsoft.EntityFrameworkCore.Tools is set as a tool.

Open Using EF Core Solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it…

  1. First, we start by adding a new unit test in the UnitTests project to define the tests for using a unit of work pattern with the following code:

    using System;
    using System.Linq;
    using BusinessLogic;
    using DataAccess;
    using Moq;
    using Xunit;
    namespace UnitTests
    {
        public class UnitOfWorkTest : BaseTest
        {
            [Fact]
            public void ShouldReadToDatabaseOnRead()
            {
                //Arrange
                var findCalled = false;
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Callback(() => findCalled = true);
                var context = mock.Object;
                var unitOfWork = new UnitOfWork(context);
                var repository = new BlogRepository(unitOfWork);
                //Act
                var blogs = repository.Set();
                //Assert
                Assert.True(findCalled);
            }
            [Fact]
            public void ShouldNotCommitToDatabaseOnDataChange()
            {
                //Arrange
                var saveChangesCalled = false;
                var data = new[] { new Blog() { Id = 1, Title = "Test" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                mock.Setup(x => x.SaveChanges()).Callback(() => saveChangesCalled = true);
                var context = mock.Object;
                var unitOfWork = new UnitOfWork(context);
                var repository = new BlogRepository(unitOfWork);
                //Act
                var blogs = repository.Set();
                blogs.First().Title = "Not Going to be Written";
                //Assert
                Assert.False(saveChangesCalled);
            }
            [Fact]
            public void ShouldPullDatabaseValuesOnARollBack()
            {
                //Arrange
                var saveChangesCalled = false;
                var rollbackCalled = false;
                var data = new[] { new Blog() { Id = 1, Title = "Test" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                mock.Setup(x => x.SaveChanges()).Callback(() => saveChangesCalled = true);
                mock.Setup(x => x.Rollback()).Callback(() => rollbackCalled = true);
                var context = mock.Object;
                var unitOfWork = new UnitOfWork(context);
                var repository = new BlogRepository(unitOfWork);
                //Act
                var blogs = repository.Set();
                blogs.First().Title = "Not Going to be Written";
                repository.RollbackChanges();
                //Assert
                Assert.False(saveChangesCalled);
                Assert.True(rollbackCalled);
            }
            [Fact]
            public void ShouldCommitToDatabaseOnSaveCall()
            {
                //Arrange
                var saveChangesCalled = false;
                var data = new[] { new Blog() { Id = 1, Title = "Test" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                mock.Setup(x => x.SaveChanges()).Callback(() => saveChangesCalled = true);
                var context = mock.Object;
                var unitOfWork = new UnitOfWork(context);
                var repository = new BlogRepository(unitOfWork);
                //Act
                var blogs = repository.Set();
                blogs.First().Title = "Going to be Written";
                repository.SaveChanges();
                //Assert
                Assert.True(saveChangesCalled);
            }
            [Fact]
            public void ShouldNotCommitOnError()
            {
                //Arrange
                var rollbackCalled = false;
                var data = new[] { new Blog() { Id = 1, Title = "Test" } }.AsQueryable();
                var mock = new Mock<IDbContext>();
                mock.Setup(x => x.Set<Blog>()).Returns(data);
                mock.Setup(x => x.SaveChanges()).Throws(new Exception());
                mock.Setup(x => x.Rollback()).Callback(() => rollbackCalled = true);
                var context = mock.Object;
                var unitOfWork = new UnitOfWork(context);
                var repository = new BlogRepository(unitOfWork);
                //Act
                var blogs = repository.Set();
                blogs.First().Title = "Not Going to be Written";
                try
                {
                    repository.SaveChanges();
                }
                catch
                {
                }
                //Assert
                Assert.True(rollbackCalled);
            }
        }
    }
  2. In the DataAccess project, create a new C# class named BlogContext with the following code:

    using BusinessLogic;
    using System.Linq;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.Extensions.Configuration;
    using Microsoft.EntityFrameworkCore.Metadata.Internal;
    namespace DataAccess
    {
        public class BlogContext : DbContext, IDbContext
        {
            private readonly string _connectionString;
          
            public BlogContext(string connectionString)
            {
                _connectionString = connectionString;
            }
            public DbSet<Blog> Blogs { get; set; }
            protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
            {
                optionsBuilder.UseSqlServer(_connectionString);
                base.OnConfiguring(optionsBuilder);
            }
            public void Rollback()
            {
                ChangeTracker.Entries().ToList().ForEach(x =>
                {
                    x.State = EntityState.Detached;
                    var keys = GetEntityKey(x.Entity);
                    Find(x.Entity.GetType(), keys);
                });
            }
            
            IQueryable<T> IDbContext.Set<T>()
            {
                return base.Set<T>();
            }
            
            public object[] GetEntityKey<T>(T entity) where T : class
            {
                var state = Entry(entity);
                var metadata = state.Metadata;
                var key = metadata.FindPrimaryKey();
                var props = key.Properties.ToArray();
                return props.Select(x => x.GetGetter().GetClrValue(entity)).ToArray();
            }
        }
    }
  3. In the DataAccess project, create a new C# interface called IDbContext with the following code:

    using System.Linq;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.EntityFrameworkCore.ChangeTracking;
    namespace DataAccess
    {
        public interface IDbContext
        {
            ChangeTracker ChangeTracker { get; }
            IQueryable<T> Set<T>() where T : class;
            EntityEntry<T> Entry<T>(T entity) where T : class;
            int SaveChanges();
            void Rollback();
        }
    }
  4. In the DataAccess project, create a new C# interface called IUnitOfWork with the following code:

    namespace DataAccess
    {
      public interface IUnitOfWork
      {
        void RegisterNew<T>(T entity) where T : class;
        void RegisterUnchanged<T>(T entity) where T : class;
        void RegisterChanged<T>(T entity) where T : class;
        void RegisterDeleted<T>(T entity) where T : class;
        void Refresh();
        void Commit();
        IDbContext Context { get; }
      }
    }
  5. In the DataAccess project, add a new C# class named UnitOfWork with the following code:

    using Microsoft.EntityFrameworkCore;
    namespace DataAccess
    {
      public class UnitOfWork : IUnitOfWork
      {
        public IDbContext Context { get; private set; }
        public UnitOfWork(IDbContext context)
        {
          Context = context;
        }
        public void RegisterNew<T>(T entity) where T : class
        {
          Context.Entry(entity).State = EntityState.Added;
        }
        public void RegisterUnchanged<T>(T entity) where T : class
        {
          Context.Entry(entity).State = EntityState.Unchanged;
        }
        public void RegisterChanged<T>(T entity) where T : class
        {
          Context.Entry(entity).State = EntityState.Modified;
        }
        public void RegisterDeleted<T>(T entity) where T : class
        {
          Context.Entry(entity).State = EntityState.Deleted;
        }
        public void Refresh()
        {
          Context.Rollback();
        }
        public void Commit()
        {
          Context.SaveChanges();
        }
      }
    }
  6. Create a new C# file in the DataAccess project with this content:

    using System.Linq;
    namespace DataAccess
    {
        public interface IRepository<out T> where T : class
        {
            IQueryable<T> Set();
            void RollbackChanges();
            void SaveChanges();
        }
    }
  7. Also in the DataAccess project, add a new C# interface named IBlogRepository with the following code:

    using BusinessLogic;
    namespace DataAccess
    {
      public interface IBlogRepository : IRepository<Blog>
      {
      }
    }
  8. In the DataAccess project, create a new C# class named BlogRepository containing the following code:

    using System;
    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
      public class BlogRepository : IBlogRepository
      {
        private readonly IUnitOfWork _unitOfWork;
        public BlogRepository(IUnitOfWork unitOfWork)
        {
          _unitOfWork = unitOfWork;
        }
        public IQueryable<Blog> Set()
        {
          return _unitOfWork.Context.Set<Blog>();
        }
        public void RollbackChanges()
        {
          _unitOfWork.Refresh();
        }
        public void SaveChanges()
        {
          try
          {
            _unitOfWork.Commit();
          }
          catch (Exception)
          {
            _unitOfWork.Refresh();
            throw;
          }
        }
      }
    }
  9. In the BlogController class of the UI project, update the usage of BlogContext so that it uses IBlogRepository with the following code:

    using BusinessLogic;
    using System.Linq;
    using DataAccess;
    using Microsoft.AspNetCore.Mvc;
    namespace UI.Controllers
    {
      public class BlogController : Controller
      {
        private readonly IBlogRepository _repository;
        public BlogController(IBlogRepository repository)
        {
          _repository = repository;
        }
        //
        // GET: /Blog/
        public IActionResult Index()
        {
          var blog = _repository.Set().First();
          return View(blog);
        }
      }
    }
  10. Finally, register the IUnitOfWork interface in the Startup.cs file, in the ConfigureServices method:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSingleton<IConfiguration>(_ => Configuration);
        services.AddScoped<IDbContext>(_ => new BlogContext(Configuration["Data:Blog:ConnectionString"]));
        services.AddScoped<IUnitOfWork>(_ => new UnitOfWork(_.GetService<IDbContext>()));
        services.AddScoped<IBlogRepository>(_ => new BlogRepository(_.GetService<IUnitOfWork>()));
    }

How it works…

The tests set up the scenarios in which we would want to use a unit of work pattern: reading, updating, rolling back, and committing. The key to this is that these are all separate actions, not dependent on anything before or after them. If the application is web-based, this gives you a powerful tool to tie to the HTTP request so any unfinished work is cleaned up, or to ensure that you do not need to call SaveChanges, since it can happen automatically.

The unit of work was originally created to track the changes made so that they could be persisted, and it functions the same way now. We are also using a more powerful, but less recognized, feature: defining the scope of the unit of work. In this scenario, we control both that scope and the changes that are committed to the database. We have also put in some clean-up code, which ensures that even in the event of a failure, our unit of work will try to clean up after itself before throwing the error to be handled at a higher level. We do not want to ignore these errors, but we do want to make sure they do not destroy the integrity of our database.

In addition to this tight encapsulation of work against the database, we pass our unit of work to each repository. This enables us to couple multiple object interactions to a single unit of work, and allows us to write code specific to an object without giving up the shared feature set of the database context. This is an explicit unit of work; Entity Framework's context already gives you an implicit one. If you want to tie the unit of work to the HTTP request, roll back on error, or tie multiple data connections together in new and interesting ways, then you will need an explicit implementation such as this one.

This basic pattern will help to streamline data access, and resolve the concurrency issues caused by conflicts in the objects that are affected by a transaction.
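Stripped of Entity Framework, the mechanics of an explicit unit of work can be sketched in a few lines (all names here are illustrative): changes accumulate in the unit, Commit persists them as a batch, and a rollback simply discards them.

```csharp
using System.Collections.Generic;

// Minimal in-memory sketch of an explicit unit of work: registered changes
// accumulate in the unit, nothing reaches the "store" until Commit, and
// Refresh throws the pending work away.
public class InMemoryUnitOfWork
{
    private readonly List<string> _pending = new List<string>();
    public List<string> Store { get; } = new List<string>();

    public void RegisterNew(string entity) => _pending.Add(entity);

    // Roll back: discard everything registered since the last Commit.
    public void Refresh() => _pending.Clear();

    // Persist the whole batch at once, then start a fresh unit.
    public void Commit()
    {
        Store.AddRange(_pending);
        _pending.Clear();
    }
}

public static class UnitOfWorkDemo
{
    public static int Run()
    {
        var unit = new InMemoryUnitOfWork();
        unit.RegisterNew("draft 1");
        unit.RegisterNew("draft 2");
        unit.Refresh();            // both drafts are dropped, store untouched
        unit.RegisterNew("final");
        unit.Commit();             // only "final" is persisted
        return unit.Store.Count;
    }
}
```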

There's more…

The unit of work is a concept that sits deep at the heart of Entity Framework, which adheres to its principles out of the box. Knowing these principles, and why they are leveraged, will help us use Entity Framework to its fullest without running into the walls that were deliberately built into the system.

Call per change

There is a cost for every connection to the database. If we were to make a call to keep the state in the database in sync with the state in the application, we would have thousands of calls, each with connection, security, and network overhead. Limiting the number of times that we hit the database not only allows us to control this overhead, but also allows the database software to handle the larger transactions for which it was built.
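The difference can be illustrated by counting round trips against a stand-in store (hypothetical types): syncing after every change costs one call per entity, while batching costs one call in total.

```csharp
// Illustrative round-trip counter: each Write call stands in for one
// database round trip, regardless of how many changes it carries.
public class CountingStore
{
    public int RoundTrips { get; private set; }

    public void Write(int changeCount) => RoundTrips++;
}

public static class BatchingDemo
{
    public static int[] Run()
    {
        var perChange = new CountingStore();
        for (var i = 0; i < 100; i++)
        {
            perChange.Write(1);      // sync after every single change
        }

        var batched = new CountingStore();
        batched.Write(100);          // one round trip for the whole batch

        // [calls made per-change, calls made batched]
        return new[] { perChange.RoundTrips, batched.RoundTrips };
    }
}
```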

Interface Segregation Principle

Some might be inclined to ask why we should separate unit of work from the repository pattern. Unit of work is definitely a separate responsibility from repository, and as such it is important to not only define separate classes, but also to ensure that we keep small, clear interfaces. The IDbContext interface is specific in the area of dealing with database connections through an Entity Framework object context. This allows the mocking of a context to give us testability to the lowest possible level.

The IUnitOfWork interface deals with the segregation of work, and ensures that the database persistence happens only when we intend it to, ignorant of the layer under it that does the actual commands. The IRepository interface deals with selecting objects back from any type of storage, and allows us to remove all thoughts of how the database interaction happens from our dependent code base. These three objects, while related in layers, are separate concerns, and therefore need to be separate interfaces.

Refactoring

We have added IUnitOfWork to our layered approach to database communication, and if we have seen anything over the hours of coding, it is code changes. We change it for many reasons, but the bottom line is that code changes often, and we need to make it easy to change. The layers of abstraction that we have added to this solution with IRepository, IUnitOfWork, and IDbContext, have all given us a point at which the change would be minimally painful, and we can leverage the interfaces in the same way. This refactoring to add abstraction levels is a core tenet of clean, extensible code. Removing the concrete implementation details from related objects, and coding to an interface, forces us to encapsulate behavior and abstract our sections of code.

See also

In this chapter:

  • Implementing the repository pattern

About the Author

  • Ricardo Peres

    Ricardo Peres is a Portuguese developer, blogger, and occasionally, an e-book author. He has over 17 years of experience in software development, using technologies such as C/C++, Java, JavaScript, and .NET. His interests include distributed systems, architectures, design patterns, and general .NET development.

    He currently works for the London-based Simplifydigital as a technical evangelist, and was first awarded as an MVP in 2015.

    Ricardo maintains a blog, Development With A Dot, where he regularly writes about technical issues.

    He wrote Entity Framework Core Cookbook - Second Edition and was the technical reviewer for Learning NHibernate 4 by Packt.

    Ricardo also contributed to Syncfusion's Succinctly collection of e-books with titles on NHibernate, Entity Framework Code First, Entity Framework Core, multitenant ASP.NET applications, and Microsoft Unity.

