Entity Framework 4.1: Expert's Cookbook

By Devlin Liles and Tim Rayburn

About this book

Entity Framework 4.1 allows us to dive into the world of data access without having to write SQL statements. The power to shape data access around your object model raises questions, and this book holds the answers.

Entity Framework 4.1: Expert's Cookbook holds many examples to help guide you through tough technical decisions and around technical landmines. The book will take you from merely using Entity Framework to becoming a data access wizard.

This book starts with examples that require some familiarity with object-relational mappers, and then moves on to more advanced tasks. You will be guided through complex mapping scenarios, query definition, reusability, integration with other technologies, and architectural management. The approach is step-by-step and test-driven, so that it is focused as much as possible on solving problems and getting the most out of the time spent working through the book.

Entity Framework 4.1: Expert's Cookbook is a must-have for any .NET developer who uses Entity Framework and wants better, cleaner, and more maintainable code.

Publication date:
March 2012
Publisher
Packt
Pages
352
ISBN
9781849684460

 

Chapter 1. Improving Entity Framework in the Real World

In this chapter, we will cover the following topics:

  • Improving the Entity Framework by using code first

  • Creating mock database connections

  • Implementing the repository pattern

  • Implementing the unit of work pattern

  • Testing queries

  • Creating databases from code

  • Testing queries for performance

  • Performing load testing against a database

 

Introduction


If we were to buy the materials to build a house, would we buy the bare minimum to get four walls up and a roof, without a kitchen or a bathroom? Or would we buy enough material to build the house with multiple bedrooms, a kitchen, and multiple bathrooms? The problem lies in how we define "bare minimum". The progression of software development has made us realize that there are ways of building software that do not require additional effort, but reap serious rewards. This is the same choice we are faced with when we decide the approach to take with Entity Framework. We could just get it running, and it would work most of the time.

Customizing and adding to it later would be difficult, but doable. There are a few things that we would need to give up for this approach, the most important being control over how the code is written. We have already seen that applications grow, mature, and have features added. The only constant is that, at some point, we will push the envelope of almost any tool that we leverage to help us. The alternative is to go into development aware of the value-added benefits that cost nothing, and with that knowledge, avoid unnecessary constraints.

When working with Entity Framework, there are many paths and options presented to us. We can approach the business problem database first, model our domain first, or write our POCOs (Plain Old CLR Objects) first. When modelling the domain first, we are not concerned with the implementation of classes, merely the structure of interactions. In contrast, with POCO, or code first, we write the implementation as a way to communicate that design. All of these approaches will solve the problem, with varying amounts of code and degrees of flexibility. No matter which we choose, there are a couple of areas where the code for connecting to a database and working with data is almost the same; the choice, however, does affect how flexible those pieces are.

Starting with a database-first approach in Entity Framework means we have an existing database schema and are going to let that schema, along with metadata in the database, determine the structure of our business objects and domain model. The database-first approach is normally how most of us start out with Entity Framework, but the tendency is to move towards more flexible solutions as we gain proficiency with the framework. It drastically reduces the amount of code that we need to write, but also limits us to working within the structure of the generated code. The business objects generated by default are not usable with WCF services, and carry database logic that makes them poor candidates for use throughout the application. This is not necessarily a bad thing if we have a well-built database schema and a domain model of simple structures that translate well into objects, but such a domain and database combination is a rare exception in the world of code production. Due to the lack of flexibility, and the restrictions on the way these objects can be used, this solution is best viewed as a short-term or small-project solution.

Modelling the domain first allows us to fully visualize the structure of the data in the application, and to work in a more object-oriented manner while developing our application. It appeals to architects and solution lead developers as a way to define and control the schema that will be used. However, this approach is rarely used, both because of a lack of adoption and because it shares the constraints of the generated database-first approach. The main reasons for the lack of adoption have been the missing support for round-trip updates, and the sparse documentation on manipulating the model so as to produce the proper database structure. The database is recreated from the data model each time, causing data loss whenever structural changes are made.

Coding the objects first allows us to work in an entirely object-oriented direction, without worrying about the structure of the database, and without the restrictions that the model-first designer imposes. This abstraction gives us the ability to craft a more logically sound application that focuses on the behavior of the application rather than the data generated by it. The objects that we produce are capable of being serialized over any service, have true persistence ignorance, and can be shared as contract objects, since they are not specific to the database implementation. This approach is also much more flexible, as it is entirely dependent on the code that we write, which allows us to translate our objects into database records without modifying the structure of our application.

 

Improving Entity Framework by using code first


In this recipe, we start by separating the application into a user interface layer, a data access layer, and a business logic layer. This keeps our objects segregated from database-specific implementations. The objects and the implementation of the database context are layered so that we can slot testing and abstraction into the application.

Getting ready

We will be using the NuGet Package Manager to install Entity Framework 4.1 assemblies.

The package installer can be found at http://www.nuget.org/. We will also be using a database in order to connect to the data and update it.

Open the Using Code First solution from the included source code examples.

Execute the database setup script from the code samples included for this recipe. This can be found in the DataAccess project within the Database folder.

How to do it...

Let us get connected to the database using the following steps:

  1. In the BusinessLogic project, add a new C# class named Blog with the following code:

    using System;
    namespace BusinessLogic
    {
      public class Blog
      {
        public int Id { get; set; }
        public string Title { get; set; }
      }
    }
  2. In the DataAccess project, create a new C# class named BlogContext with the following code:

    using System.Data.Entity;
    using BusinessLogic;
    namespace DataAccess
    {
      public class BlogContext : DbContext
      {
        public BlogContext(string connectionString) : base(connectionString)
        {
        }
        public DbSet<Blog> Blogs { get; set; }
      }
    }
  3. In the Recipe1UI project, add a setting named BlogConnection that holds the connection string for the database; the controller code in the next step reads it as Settings.Default.BlogConnection.
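
A typical value is a standard SQL Server provider connection string; the server and database names below are placeholders for your own environment:

    Data Source=(local);Initial Catalog=Blog;Integrated Security=True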

  4. In the UI project, in BlogController.cs, modify the Display method with the following code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using BusinessLogic;
    using DataAccess;
    using UI.Properties;
    namespace UI.Controllers
    {
      public class BlogController : Controller
      {
        private BlogContext _blogContext;
        public BlogController() :this(new BlogContext(Settings.Default.BlogConnection)) { }
        public BlogController(BlogContext blogContext)
        {
          _blogContext = blogContext;
        }
        // GET: /Blog/
        public ActionResult Display()
        {
          Blog blog = _blogContext.Blogs.First();
          return View(blog);
        }
      }
    }

How it works...

The blog entity is created but not mapped explicitly to a database structure. This takes advantage of convention over configuration, found in the code first approach, wherein the properties are examined and then table mappings are determined. This is obviously a time saver, but it is fairly limited if you have a non-standard database schema. The other big advantage of this approach is that the entity is "persistence-ignorant". In other words, it has no code or knowledge of how it is to be stored in the database.

The blog context in the DataAccess project has a few key elements to understand. The first is the inheritance from DbContext. DbContext is the code first version of ObjectContext, and exposes all connection pooling, entity change tracking, and database interaction. We added a constructor that takes in a connection string, but note that this is not the embedded Entity Framework metadata connection string used by the database-first and model-first approaches; it is a standard provider connection string. You can also pass the name of a connection string setting here, but the recommended approach is to provide the full connection string after it has been retrieved from the application settings store.

We used the standard built-in functionality for the connection string, but this could easily be any application settings store. Larger applications require more flexibility in where the settings are stored, so we pass the connection string in when constructing the BlogContext, which enables us to source that connection string from anywhere.
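
As a quick sketch of the options, both of the following constructions work with the constructor we defined; the connection string name in the second form is an assumption and must exist in the configuration file:

    // Recommended: retrieve the full provider connection string from settings
    var context = new BlogContext(Settings.Default.BlogConnection);

    // Also valid: pass the name of a connection string from App.config/Web.config,
    // which DbContext resolves through its base(nameOrConnectionString) constructor
    var namedContext = new BlogContext("name=BlogConnection");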

We also passed the BlogContext in as a parameter of the controller, which allows us to interact with the database. The context gives us access to collections of objects that translate directly to database objects at request time.

There's more...

As we approach code first development, there are several overarching themes and industry standards that we need to be aware of. Knowing about them will help us leverage the power of this tool without falling into the pit of using it without understanding.

Convention over configuration

This is a design paradigm that specifies default rules about how each aspect of an application will behave, but allows the developer to override any of the default rules with specific behavior required by the application. This allows us, as programmers, to avoid using a lot of configuration files to specify how we intended something to be used or configured. In our case, Entity Framework allows the most common behaviors to use default conventions that remove the need for a majority of the configurations. When the behavior we wish to create is not supported by the convention, we can easily override the convention and add the required behavior to it without the need to get rid of it everywhere else. This leaves us with a flexible and extendable system to configure the database interaction.
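
For example, by convention the Blog class maps to a table named Blogs with Id as its primary key. A minimal sketch of overriding just the table-name convention (the table name here is an assumption), while leaving every other default in place, might look like this:

    public class BlogContext : DbContext
    {
      public DbSet<Blog> Blogs { get; set; }

      protected override void OnModelCreating(DbModelBuilder modelBuilder)
      {
        // Override one convention for one entity; all other defaults still apply
        modelBuilder.Entity<Blog>().ToTable("tbl_Blog");
        base.OnModelCreating(modelBuilder);
      }
    }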

Model-View-Controller

In our example, we use Microsoft ASP.NET MVC (Model-View-Controller) to deliver the user interface (UI), because it builds a naturally testable model in which to house our code. You will see that we use the MVC approach predominantly in our examples, but the implementations can be ported to any UI, or to no UI at all, if that is your preference. All of our samples use the MVC 3 framework and the Razor view engine for rendering the UI. We have provided some simple views which allow us to focus on the solutions and the code without needing to deal with UI design and markup.

Single responsibility principle

One of the SOLID principles of development, the single responsibility principle, states that every class should have only one reason to change. There are several examples of it in use in this chapter, for example, the separation of model, view, and controller in MVC. This important tenet is also why we favor the code first approach to begin with.

Entities in code first have the structure of data as their single responsibility in memory. This means that only if the structure needs to change will we need to modify the entities. By contrast, the code generated by the database-first tools of Entity Framework inherits your entities from base classes within the Entity Framework Application Programming Interface (API). Microsoft's occasional updates to those base classes introduce a second reason to change, violating our principle.

Testing

While we did not actively test this recipe, we did layer in the abstractions to do so. All of the other recipes will be executed and presented using test-driven development, as we believe it leads to better software design and a much clearer representation of intent.

See also

In this chapter:

  • Implementing the unit of work pattern

  • Implementing the repository pattern

 

Creating mock database connections


When working with Entity Framework in a test-driven manner, we need to be able to slip a layer between our last line of code and the framework. This allows us to simulate the database connection without actually hitting the database.

Getting ready

We will be using NuGet Package Manager to install the Entity Framework 4.1 assemblies.

The package installer can be found at http://www.nuget.org/.

We will also be using a database for connecting to the data, and updating it.

Open the Mocking the Database solution in the included source code examples.

Execute the database setup script from the code samples included with this recipe. This can be found in the DataAccess project within the Database folder.

How to do it...

  1. In the BusinessLogic project, add a new C# interface named IDbContext using the following code:

    using System.Linq;
    namespace BusinessLogic
    {
      public interface IDbContext
      {
        IQueryable<T> Find<T>() where T : class;
      }
    }
  2. Add a new unit test in the Test project so we can supply fake results from a simulated database, with the following code:

    using System.Collections.Generic;
    using System.Linq;
    using BusinessLogic;
    using DataAccess;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Rhino.Mocks;
    namespace Test
    {
      [TestClass]
      public class QueryTest
      {
        [TestMethod]
        public void ShouldFilterDataProperly()
        {
          //Arrange
          IDbContext mockContext = MockRepository.GenerateMock<IDbContext>();
          mockContext.Expect(x => x.Find<Blog>()).Return(new List<Blog>()
          {
            new Blog(){Id = 1,Title = "Title"},
            new Blog(){Id=2,Title = "no"}
          }.AsQueryable());
          //Act
          var items = mockContext.Find<Blog>().ToList();
          //Assert
          Assert.AreEqual(2,items.Count());
          Assert.AreEqual("Title",items[0].Title);
          Assert.AreEqual("no",items[1].Title);
        }
      }
    }
  3. In the DataAccess project, create a new C# class named BlogContext with the following code:

    using System.Data.Entity;
    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
      public class BlogContext : DbContext, IDbContext
      {
        public BlogContext(string connectionString) : base(connectionString)
        {
        }
        public DbSet<Blog> Blogs { get; set; }
        public IQueryable<T> Find<T>() where T : class
        {
          return this.Set<T>();
        }
      }
    }

How it works...

The mocking framework that we are using (called RhinoMocks) allows us to pass a fake object which can simulate the responses that a database would provide for us without having that connection. This allows us to keep our tests from being dependent on SQL data, and therefore brittle. Now that we have data available from our mock, we can test whether it acts exactly like we coded it to. Knowing the inputs of the data access code, we can test the outputs for validity.

This layering is accomplished by placing our Find method as an abstraction between the public framework method Set<T> and our code, so we can change the type to something constructible. This is required because the constructors of DbSet<T> are internal (not callable from any other assembly). By layering this method, we can now control every return from the database in the test scenarios.

This layering also provides for the better separation of concerns, as the DbSet<T> in Entity Framework mingles multiple independent concerns, such as connection management and querying, into a single object. We will continue to separate these concerns in future recipes.

There's more...

Testing to the edges of an application requires that we adhere to certain practices which allow us to shrink the untestable sections of the code. This will allow us to unit test more code, and make our integration tests far more specific.

One object under test

An important point to remember while performing unit testing is that we should only be testing a single class. The point of a unit test is to ensure that a single unit, a single class, performs the way we expect it to.

This is why simulating classes that are not under test is so important. We do not want the behavior of these supporting classes to affect the outcomes of unit tests for our class under test.

Integration tests

Often, it is equally important to test the actual combination of your various classes, to ensure they work properly together. These integration tests are valuable, but are almost always more brittle, require more setup, and run slower than unit tests. We certainly need integration tests on any project of a reasonable size, but we want unit tests first.

Arrange, act, assert

Most unit tests can be viewed as having three parts: arrange, act, and assert. Arrange is where we prepare the environment to perform the test, for instance, mocking the IDbContext and setting up an expectation that Find<T> will be called. Act is where we perform the action under test, and is most often a single line of code. Assert is where we ensure that the proper result was reached. Note the comments in the examples above that call out these sections. We will use them throughout the book to make it clear what each test is trying to do.

 

Implementing the repository pattern


This recipe is an implementation of the repository pattern which allows us to separate the usage of a database and its data from the act of reading that data.

Getting ready

We will be using NuGet Package Manager to install the Entity Framework 4.1 assemblies.

The package installer can be found at http://www.nuget.org/.

We will also be using a database for connecting to and updating data.

Open the Repository Pattern solution in the included source code examples.

Execute the database setup script from the code samples included with this recipe. This can be found in the DataAccess project within the Database folder.

How to do it...

  1. In the DataAccess project, add a new C# interface named IBlogRepository with the following code:

    using System.Linq;
    namespace DataAccess
    {
      public interface IBlogRepository
      {
        IQueryable<T> Set<T>() where T : class;
      }
    }
  2. In the DataAccess project, create a new C# class named BlogRepository with the following code:

    using System.Data.Entity;
    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
      public class BlogRepository : IBlogRepository
      {
        private readonly IDbContext _context;
        public BlogRepository(IDbContext context)
        {
          _context = context;
        }
        public IQueryable<T> Set<T>() where T : class
        {
          return _context.Find<T>();
        }
      }
    }
  3. Next, add a new unit test in the Test project that defines how the repository will be used, with the following code:

    using BusinessLogic;
    using DataAccess;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Rhino.Mocks;
    namespace Test
    {
      [TestClass]
      public class RepositoryTest
      {
        [TestMethod]
        public void ShouldAllowGettingASetOfObjectsGenerically()
        {
          //Arrange
          IDbContext mockContext = MockRepository.GenerateMock<IDbContext>();
          IBlogRepository repository = new BlogRepository(mockContext);
          //Act
          var items = repository.Set<Blog>();
          //Assert
          mockContext.AssertWasCalled(x => x.Find<Blog>());
        }
      }
    }
  4. In the BlogController, update the usage of BlogContext to use IBlogRepository with the following code:

    using System.Linq;
    using System.Web.Mvc;
    using BusinessLogic;
    using DataAccess;
    using UI.Properties;
    namespace UI.Controllers
    {
      public class BlogController : Controller
      {
        private IBlogRepository _blogRepository;
        public BlogController() : this(new BlogRepository(new BlogContext(Settings.Default.BlogConnection))) { }
        public BlogController(IBlogRepository blogRepository)
        {
          _blogRepository = blogRepository;
        }
        //
        // GET: /Blog/
        public ActionResult Display()
        {
          Blog blog = _blogRepository.Set<Blog>().First();
          return View(blog);
        }
      }
    }

How it works...

We start off with a test that defines what we hope to accomplish. We use mocking (verifiable fake objects) to ensure that we get the behavior we expect. The test states that any BlogRepository will communicate with the context to get its data. This is what we hope to accomplish, as it allows us to layer tests and extension points into the domain.

The usage of the repository interface is a key part of this flexible implementation as it will allow us to leverage mocks, and test the business layer, while still maintaining an extensible solution. The interface to the context is a straightforward API for all database communication. In this example, we only need to read data from the database, so the interface is very simple.

Even in this simple implementation of the interface, we see that there are opportunities to increase reusability. We could have created a method or property that returned the list of blogs, but then we would have had to modify the context and interface for every new entity. Instead, we set up the Find method to take a generic type, which allows us to add entities to the usage of the interface without modifying the interface. We will only need to modify the implementation.

Notice that we constrained the interface to accept only reference types for T, using the where T : class constraint. We did this because value types cannot be stored using Entity Framework. If you had a base class for your entities, you could use it here to constrain the usage of the generic even further. Importantly, not all reference types are valid for T; the constraint is simply as close as we can get in C#. Interfaces, for example, are not valid because Entity Framework cannot construct them when it needs to create an entity, but since they are valid reference types, the compiler will accept them and the failure surfaces as a runtime exception.

Once we have the context, we need to wrap it in an abstraction. The BlogRepository allows us to query the data without allowing direct control over the database connection: we hide the details of the specific implementation, the actual context object, while surfacing a simplified API for gathering data.

The other interface that we abstracted is the IDbContext interface. This abstraction allows us to intercept tests just before they would be sent to the database. This makes the untestable part of the application as thin as possible. We can, and will, test right up to the point of database connection.

There's more...

Keeping the repository implementation clean requires us to leverage some principles and patterns that are at the core of object-oriented programming, but not specific to using Entity Framework. These principles will not only help us to write clean implementations of Entity Framework, but can also be leveraged by other areas of our code.

Dependency inversion principle

Dependency inversion is another SOLID principle. It states that all of the dependencies of an object should be clearly visible and passed in, or injected, to create the object. The benefit of this is twofold: the first is exposing all of the dependencies, so the effects of using a piece of code are clear to those who will use the class. The second is that, by injecting these dependencies at construction, we can unit test by passing in mocks of the dependent objects. Granular unit tests require the ability to abstract dependencies, so we can ensure only one object is under test.

Repository and caching

This repository pattern gives us the perfect area for implementing a complex or global caching mechanism. If we want to persist some value into the cache at the point of retrieval, and not retrieve it again, the repository class is the perfect location for such logic. This layer of abstraction allows us to move beyond simple implementations and start thinking about solving business problems quickly, and later extend to handle more complex scenarios as they are warranted by the requirements of the specific project. You can think of repository as a well-tested 80+% solution. Put off anything more until the last responsible moment.
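
As a minimal sketch of that idea, the following hypothetical decorator caches the materialized results per entity type on first read; cache invalidation and thread safety are deliberately ignored here:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    namespace DataAccess
    {
      public class CachingBlogRepository : IBlogRepository
      {
        private readonly IBlogRepository _inner;
        private readonly Dictionary<Type, object> _cache = new Dictionary<Type, object>();
        public CachingBlogRepository(IBlogRepository inner)
        {
          _inner = inner;
        }
        public IQueryable<T> Set<T>() where T : class
        {
          // First read per type hits the database; later reads come from memory
          if (!_cache.ContainsKey(typeof(T)))
            _cache[typeof(T)] = _inner.Set<T>().ToList();
          return ((List<T>)_cache[typeof(T)]).AsQueryable();
        }
      }
    }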

Mocking

The usage of mocks is commonplace in tests because mocks allow us to verify underlying behavior without having more than one object under test. This is a fundamental piece of the puzzle for test-driven development. When you test at a unit level, you want to make sure that the level directly following the one you are testing was called correctly while not actually executing the specific implementation. This is what mocking buys us.

Where constraint

There are times when we need to create complex sets of queries which will be used frequently, but only by one or two objects. When this situation occurs, we want to reuse that code without needing to duplicate it for each object. This is where the "where" constraint helps us. It allows us to limit generically defined behavior to an object or set of objects that share a common interface or base class. The extension possibilities are near limitless.
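
For instance, a sketch assuming a hypothetical EntityBase class shared by our entities shows how the constraint lets one extension method serve every derived type:

    using System.Linq;
    namespace BusinessLogic
    {
      public abstract class EntityBase
      {
        public int Id { get; set; }
      }
      public static class QueryExtensions
      {
        // Available to any IQueryable<T> whose T derives from EntityBase
        public static IQueryable<T> WithIdAbove<T>(this IQueryable<T> source, int id)
          where T : EntityBase
        {
          return source.Where(e => e.Id > id);
        }
      }
    }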

See also

In this chapter:

  • Implementing the unit of work pattern

  • Creating mock database connections

 

Implementing the unit of work pattern


In this recipe, we present an implementation of the unit of work pattern, which will allow us to limit our connections to the database and keep the application in a stable state.

Getting ready

We will be using NuGet Package Manager to install the Entity Framework 4.1 assemblies.

The package installer can be found at http://www.nuget.org/.

We will also be using a database for connecting to the data and updating it.

Open the Unit of Work Pattern solution in the included source code examples.

Execute the database setup script from the code samples included with this recipe. This can be found in the DataAccess Project within the Database folder.

How to do it...

  1. First, we start by adding a new unit test in the Test project to define the tests for using a unit of work pattern with the following code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using BusinessLogic;
    using DataAccess;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Rhino.Mocks;
    namespace Test
    {
      [TestClass]
      public class UnitOfWorkTest
      {
        [TestMethod]
        public void ShouldReadToDatabaseOnRead()
        {
          //Arrange
          IDbContext mockContext = MockRepository.GenerateMock<IDbContext>();
          IUnitOfWork unitOfWork = new UnitOfWork(mockContext);
          IBlogRepository repository = new BlogRepository(unitOfWork);
          //Act
          var items = repository.Set<Blog>();
          //Assert
          mockContext.AssertWasCalled(x => x.Find<Blog>());
        }
        [TestMethod]
        public void ShouldNotCommitToDatabaseOnDataChange()
        {
          //Arrange
          IDbContext mockContext = MockRepository.GenerateMock<IDbContext>();
          IUnitOfWork unitOfWork = new UnitOfWork(mockContext);
          mockContext.Stub(x => x.Find<Blog>()).Return(new List<Blog> { new Blog { Id = 1, Title = "Test" } }.AsQueryable());
          IBlogRepository repository = new BlogRepository(unitOfWork);
          var items = repository.Set<Blog>();
          //Act
          items.First().Title = "Not Going to be Written";
          //Assert
          mockContext.AssertWasNotCalled(x => x.SaveChanges());
        }
        [TestMethod]
        public void ShouldPullDatabaseValuesOnARollBack()
        {
          //Arrange
          IDbContext mockContext = MockRepository.GenerateMock<IDbContext>();
          IUnitOfWork unitOfWork = new UnitOfWork(mockContext);
          mockContext.Stub(x => x.Find<Blog>()).Return(new List<Blog> { new Blog { Id = 1, Title = "Test" } }.AsQueryable());
          IBlogRepository repository = new BlogRepository(unitOfWork);
          var items = repository.Set<Blog>();
          items.First().Title = "Not Going to be Written";
          //Act
          repository.RollbackChanges();
          //Assert
          mockContext.AssertWasNotCalled(x => x.SaveChanges());
          mockContext.AssertWasCalled(x => x.Rollback());
        }
        [TestMethod]
        public void ShouldCommitToDatabaseOnSaveCall()
        {
          //Arrange
          IDbContext mockContext = MockRepository.GenerateMock<IDbContext>();
          IUnitOfWork unitOfWork = new UnitOfWork(mockContext);
          mockContext.Stub(x => x.Find<Blog>()).Return(new List<Blog> { new Blog { Id = 1, Title = "Test" } }.AsQueryable());
          IBlogRepository repository = new BlogRepository(unitOfWork);
          var items = repository.Set<Blog>();
          items.First().Title = "Going to be Written";
          //Act
          repository.SaveChanges();
          //Assert
          mockContext.AssertWasCalled(x => x.SaveChanges());
        }
        [TestMethod]
        public void ShouldNotCommitOnError()
        {
          //Arrange
          IDbContext mockContext = MockRepository.GenerateMock<IDbContext>();
          IUnitOfWork unitOfWork = new UnitOfWork(mockContext);
          mockContext.Stub(x => x.Find<Blog>()).Return(new List<Blog> { new Blog { Id = 1, Title = "Test" } }.AsQueryable());
          mockContext.Stub(x => x.SaveChanges()).Throw(new ApplicationException());
          IBlogRepository repository = new BlogRepository(unitOfWork);
          var items = repository.Set<Blog>();
          items.First().Title = "Not Going to be Written";
          //Act
          try
          {
            repository.SaveChanges();
          }
          catch (Exception)
          {
          }
          //Assert
          mockContext.AssertWasCalled(x => x.Rollback());
        }
      }
    }
  2. In the DataAccess project, create a new C# class named BlogContext with the following code:

    using System;
    using System.Data.Entity;
    using System.Data.Entity.Infrastructure;
    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
      public class BlogContext : DbContext, IDbContext
      {
        public BlogContext(string connectionString)
        : base(connectionString)
        {
        }
        public DbSet<Blog> Blogs { get; set; }
        public IQueryable<T> Find<T>() where T : class
        {
          return this.Set<T>();
        }
        public void Rollback()
        {
          this.ChangeTracker.Entries().ToList().ForEach(x => x.Reload());
        }
      }
    }
  3. In the DataAccess project, create a new C# interface called IDbContext with the following code:

    using System.Data.Entity;
    using System.Data.Entity.Infrastructure;
    using System.Linq;
    namespace DataAccess
    {
      public interface IDbContext
      {
        DbChangeTracker ChangeTracker { get; }
        DbSet<T> Set<T>() where T : class;
        IQueryable<T> Find<T>() where T : class;
        DbEntityEntry<T> Entry<T>(T entity) where T : class;
        int SaveChanges();
        void Rollback();
      }
    }
  4. In the DataAccess project, create a new C# interface called IUnitOfWork with the following code:

    using System;
    namespace DataAccess
    {
      public interface IUnitOfWork
      {
        void RegisterNew<T>(T entity) where T : class;
        void RegisterUnchanged<T>(T entity) where T : class;
        void RegisterChanged<T>(T entity) where T : class;
        void RegisterDeleted<T>(T entity) where T : class;
        void Refresh();
        void Commit();
        IDbContext Context { get; set; }
      }
    }
  5. In the DataAccess project, add a new C# class named UnitOfWork with the following code:

    using System.Data;
    using System.Linq;
    namespace DataAccess
    {
      public class UnitOfWork : IUnitOfWork
      {
        public IDbContext Context { get; set; }
        public UnitOfWork(IDbContext context)
        {
          Context = context;
        }
        public void RegisterNew<T>(T entity) where T : class
        {
          Context.Set<T>().Add(entity);
        }
        public void RegisterUnchanged<T>(T entity) where T : class
        {
          Context.Entry(entity).State = EntityState.Unchanged;
        }
        public void RegisterChanged<T>(T entity) where T : class
        {
          Context.Entry(entity).State = EntityState.Modified;
        }
        public void RegisterDeleted<T>(T entity) where T : class
        {
          Context.Set<T>().Remove(entity);
        }
        public void Refresh()
        {
          Context.Rollback();
        }
        public void Commit()
        {
          Context.SaveChanges();
        }
      }
    }
  6. In the BusinessLogic project, add a new C# interface named IBlogRepository with the following code:

    using System.Linq;
    namespace DataAccess
    {
      public interface IBlogRepository
      {
        IQueryable<T> Set<T>() where T : class;
        void RollbackChanges();
        void SaveChanges();
      }
    }
  7. In the DataAccess project, create a new C# class named BlogRepository with the following code:

    using System;
    using System.Data.Entity;
    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
      public class BlogRepository : IBlogRepository
      {
        private readonly IUnitOfWork _unitOfWork;
        public BlogRepository(IUnitOfWork unitOfWork)
        {
          _unitOfWork = unitOfWork;
        }
        public IQueryable<T> Set<T>() where T : class
        {
          return _unitOfWork.Context.Find<T>();
        }
        public void RollbackChanges()
        {
          _unitOfWork.Refresh();
        }
        public void SaveChanges()
        {
          try
          {
            _unitOfWork.Commit();
          }
          catch (Exception)
          {
            _unitOfWork.Refresh();
            throw;
          }
        }
      }
    }
  8. In the BlogController, update the usage of BlogContext to use IBlogRepository with the following code:

    using System.Linq;
    using System.Web.Mvc;
    using BusinessLogic;
    using DataAccess;
    using UI.Properties;
    namespace UI.Controllers
    {
      public class BlogController : Controller
      {
        private IBlogRepository _blogRepository;
        public BlogController() : this(new BlogRepository(new UnitOfWork(new BlogContext(Settings.Default.BlogConnection)))) { }
        public BlogController(IBlogRepository blogRepository)
        {
          _blogRepository = blogRepository;
        }
        //
        // GET: /Blog/
        public ActionResult Display()
        {
          Blog blog = _blogRepository.Set<Blog>().First();
          return View(blog);
        }
      }
    }

How it works...

The tests set up the scenarios in which we would want to use a unit of work pattern: reading, updating, rolling back, and committing. The key is that these are all separate actions, not dependent on anything that comes before or after them. If the application is web-based, this gives you a powerful tool to tie to the HTTP request, so that any unfinished work is cleaned up, or so that you do not need to call SaveChanges explicitly, since it can happen automatically.

The unit of work was originally created to track the changes made so they could be persisted, and it still functions that way. We are also using a more powerful, but less recognized, feature: defining the scope of the unit of work. In this scenario, we gain the ability to control both that scope and the changes that are committed to the database. We have also put in some clean-up code which ensures that, even in the event of a failure, our unit of work tries to clean up after itself before throwing the error to be handled at a higher level. We do not want to swallow these errors, but we do want to make sure they do not destroy the integrity of our database.

In addition to this tight encapsulation of work against the database, we pass our unit of work into each repository. This lets us couple multiple object interactions to a single unit of work, and allows us to write code specific to each object without giving up the shared feature set of the database context. This is an explicit unit of work; the Entity Framework context itself provides an implicit one. If you want to tie the unit of work to the HTTP request, roll back on error, or tie multiple data connections together in new and interesting ways, you will need to code an explicit implementation like this one.
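
As a sketch of what that buys us (the PostRepository here is hypothetical), sharing one unit of work across repositories lets a single Commit persist everything in one scope:

    var unitOfWork = new UnitOfWork(new BlogContext(Settings.Default.BlogConnection));
    var blogs = new BlogRepository(unitOfWork);
    //var posts = new PostRepository(unitOfWork); // hypothetical second repository

    unitOfWork.RegisterNew(new Blog { Title = "Shared scope" });
    // ...changes registered through either repository accumulate in one context...
    unitOfWork.Commit(); // a single SaveChanges persists all registered work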

This basic pattern will help to streamline data access, and resolve the concurrency issues caused by conflicts in the objects that are affected by a transaction.

There's more...

The unit of work is a concept that sits deep at the heart of Entity Framework and adheres, out of the box, to the principles that follow. Knowing these principles, and why they are leveraged, will help us use Entity Framework to its fullest without running into the walls purposely built into the system.

Call per change

There is a cost for every connection to the database. If we were to make a call to keep the state in the database in sync with the state in the application, we would have thousands of calls each with connection, security, and network overhead. Limiting the number of times that we hit the database not only allows us to control this overhead, but also allows the database software to handle the larger transactions for which it was built.
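
A small sketch of the difference, assuming a collection of new entities called newBlogs and a context like the ones constructed earlier:

    // Wasteful: one database round trip per entity
    //foreach (var blog in newBlogs) { context.Blogs.Add(blog); context.SaveChanges(); }

    // Better: let the context accumulate the changes, then commit once
    foreach (var blog in newBlogs)
    {
      context.Blogs.Add(blog);
    }
    context.SaveChanges(); // one call flushes every pending insert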

Interface segregation principle

Some might be inclined to ask why we should separate unit of work from the repository pattern. Unit of work is definitely a separate responsibility from repository, and as such it is important to not only define separate classes, but also to ensure that we keep small, clear interfaces. The IDbContext interface is specific in the area of dealing with database connections through an Entity Framework object context. This allows the mocking of a context to give us testability to the lowest possible level. The IUnitOfWork interface deals with the segregation of work, and ensures that the database persistence happens only when we intend it to, ignorant of the layer under it that does the actual commands. The IRepository interface deals with selecting objects back from any type of storage, and allows us to remove all thoughts of how the database interaction happens from our dependent code base. These three objects, while related in layers, are separate concerns, and therefore need to be separate interfaces.

Refactor

We have added IUnitOfWork to our layered approach to database communication, and if we have seen anything over our hours of coding, it is that code changes. Code changes for many reasons, but the bottom line is that it changes often, and we need to make it easy to change. The layers of abstraction that we have added to this solution with IRepository, IUnitOfWork, and IDbContext have all given us points at which change will be minimally painful, and we can leverage the interfaces in the same way. This refactoring to add abstraction levels is a core tenet of clean, extensible code. Removing the concrete implementation details from related objects, and coding to an interface, forces us to encapsulate behavior and abstract our sections of code.

See also

In this chapter:

  • Testing queries

  • Implementing the repository pattern

  • Performing load testing against a database

 

Testing queries


One of the questions that you will undoubtedly come across when using Entity Framework is how the LINQ statements scattered throughout an application get transformed into SQL statements, and how to output that SQL for testing. These tests are not meant to truly unit test the generated SQL, but rather to provide a simple way to show the development staff, and possibly the database administrators (DBAs), what SQL is actually being executed for a given LINQ statement.

Getting ready

We will be using NuGet Package Manager to install the Entity Framework 4.1 assemblies.

The package installer can be found at http://www.nuget.org/.

We will also be using a database for connecting to the data and updating it.

Open the Testing SQL Output solution in the included source code examples.

Execute the database setup script from the code samples included with this recipe. This can be found in the DataAccess project within the Database folder.

How to do it...

  1. First, we start by adding a new unit test in the Test project to extract the SQL statements for us and a test to verify the filters on a given set of data:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using BusinessLogic;
    using DataAccess;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Rhino.Mocks;
    using Test.Properties;
    namespace Test
    {
      [TestClass]
      public class QueryTest
      {
        [TestMethod]
        public void ShouldFilterDataProperly()
        {
          //Arrange
          IDbContext mockContext = MockRepository.GenerateMock<IDbContext>();
          mockContext.Expect(x => x.Find<Blog>()).Return(new List<Blog>
          {
            new Blog {Id = 1, Title = "Title"},
            new Blog {Id = 2, Title = "no"}
          }.AsQueryable());
          IBlogRepository repository = new BlogRepository(new UnitOfWork(mockContext));
          //Act
          var items = repository.Set<Blog>().Where(x => x.Title.Contains("t"));
          //Assert
          mockContext.AssertWasCalled(x => x.Find<Blog>());
          Assert.AreEqual(1, items.Count());
          Assert.AreEqual("Title", items.First().Title);
        }
        [TestMethod]
        public void ShouldAllowSqlStringOutput()
        {
          //Arrange
          IBlogRepository repository = new BlogRepository(new UnitOfWork(new BlogContext(Settings.Default.BlogConnection)));
          //Act
          var items = repository.Set<Blog>();
          var sql = items.ToString();
          //Assert
          Console.WriteLine(sql);
          Assert.IsTrue(sql.Contains("SELECT"));
        }
      }
    }
  2. In the Test project, add a setting for the connection to the database, as shown in the following screenshot:

  3. In the test results window, we want to right-click and open the View Test Result Details for our SQL string test, as shown in the following screenshot:

  4. Notice the output for SQL console in the following screenshot:

How it works...

The first test makes sure that our LINQ statements apply the filters that we believe they do. This allows us to encapsulate the filters and sorts that we use throughout our application and keep the query footprint small. Entity Framework writes parameterized SQL, so the fewer distinct query shapes we use, the better the performance will be. The query plans for our set of queries will be cached by SQL Server, just like the query plans of stored procedures, which gives us huge performance gains without sacrificing the code base of our application.

With this recipe, we start leveraging the abstraction layers built in the repository and unit of work patterns that we implemented earlier. In the first test, we use a mocked context, wrapped in a unit of work, to supply a fake set of data. This is the set that allows us to verify filters; if you have a complex data structure, it can be abstracted into a factory, so we only need to define the dummy list once but can test multiple filters and sorts against it.

The second test requires a fully formed context, which is why we added a connection string to the Test project. It does not hit the database for data, but it does connect during construction of the context to check metadata and schema definition. This metadata, along with the standard conventions and any exceptions that you have configured, is what the context uses to translate LINQ statements into SQL statements.

There's more...

Some of the Entity Framework presentations that we have seen over the last couple of years have implied that we can ignore the database with an object relational mapper such as Entity Framework. This is not entirely true. We can ignore the structure of the database while defining our objects, but we still must be aware of it while querying and mapping our objects.

Query execution plan

As SQL is declarative, there are often many ways to get the same set of results, each of these varying widely in performance. When a query is submitted to the database, it is run through the query optimizer that evaluates some of the possible plans for executing the query, and returns what it considers the best of them. The query optimizer is not perfect, but it is good. The cost of this optimizer is that it takes some overhead on the query. When a query is sent with the parameters, the optimizer evaluates it and returns it, but caches the resulting plan. If the same query is called with different parameters, the optimizer knows the resulting plan will be the same, and uses the cached version. This storage of query plans is what gives Entity Framework an advantage because it uses parameterized SQL statements. If we are able to keep the query footprint (the number of different queries) in our application small, then we will reap the most benefit from this optimization and storage.
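
As a rough illustration (the exact SQL text varies by provider and Entity Framework version), a filtered LINQ query produces a parameterized statement shaped something like the comment below, so calls that differ only in the parameter value reuse the same cached plan:

    var query = context.Set<Blog>().Where(b => b.Id > 5);
    // query.ToString() yields SQL similar to:
    //   SELECT [Extent1].[Id], [Extent1].[Title]
    //   FROM [dbo].[Blogs] AS [Extent1]
    //   WHERE [Extent1].[Id] > @p__linq__0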

Query performance

When looking at using Entity Framework, we all need to consider performance, as we do not control the query directly. Some developers will write LINQ statements without a thought to how they translate to SQL at the backend. This can lead to performance problems that are blamed on Entity Framework, when the problem actually rests with the LINQ code that was written. There are several tools on the market which will allow you to analyze the generated SQL, and some even give you a real-time look at the query execution plan.


See also

In this chapter:

  • Implementing the repository pattern

  • Implementing the unit of work pattern

In Chapter 5, Improving Entity Framework with Query Libraries:

  • Creating reusable queries

 

Creating databases from code


As we start down the code first path, there are a couple of things that could be true. If we already have a database, then we will need to configure our objects to that schema, but what if we do not have one? That is the subject of this recipe, creating a database from the objects we declare.

Getting ready

We will be using NuGet Package Manager to install the Entity Framework 4.1 assemblies.

The package installer can be found at http://www.nuget.org/.

Open the Creating a Database from Code solution in the included source code examples.

How to do it...

  1. First, we write a test which will set up the context for us to use as a starting point for creating the database with the following code:

    using System.Data.Entity;
    using System.Linq;
    using BusinessLogic;
    using DataAccess;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Test.Properties;
    namespace Test
    {
      [TestClass]
      public class DatabaseCreationTests
      {
        [TestMethod]
        public void ShouldCreateDatabaseOnCreation()
        {
          BlogContext context = new BlogContext(Settings.Default.BlogConnection);
          Assert.IsTrue(context.Database.Exists());
          context.Database.Delete();
          Assert.IsFalse(context.Database.Exists());
          context = new BlogContext(Settings.Default.BlogConnection);
          Assert.IsTrue(context.Database.Exists());
        }
        [TestMethod]
        public void ShouldSeedDataToDatabaseOnCreation()
        {
          System.Data.Entity.Database.SetInitializer<BlogContext>(new BlogContextInitializer());
          BlogContext context = new BlogContext(Settings.Default.BlogConnection);
          Assert.IsTrue(context.Database.Exists());
          context.Database.Delete();
          Assert.IsFalse(context.Database.Exists());
          context = new BlogContext(Settings.Default.BlogConnection);
          context.Database.Initialize(true);
          Assert.IsTrue(context.Database.Exists());
          DbSet<Blog> blogs = context.Set<Blog>();
          Assert.AreEqual(3,blogs.Count());
        }
      }
    }
  2. We will need to add a connection setting to the Test project to our database, and make sure that the database name is populated (the database name needs to be typed as it does not exist yet):

  3. In the DataAccess project, create a new C# class named BlogContext with the following code:

    using System.Data.Entity;
    using System.Linq;
    using BusinessLogic;
    namespace DataAccess
    {
      public class BlogContext : DbContext
      {
        public BlogContext(string connectionString)
        : base(connectionString)
        {
          if (this.Database.Exists() && !this.Database.CompatibleWithModel(false)) this.Database.Delete();
          if (!this.Database.Exists()) this.Database.Create();
        }
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
          base.OnModelCreating(modelBuilder);
        }
        public DbSet<Blog> Blogs { get; set; }
      }
    }
  4. In the DataAccess project, create a new C# class named BlogContextInitializer with the following code:

    using System;
    using System.Collections.Generic;
    using System.Data.Entity;
    using BusinessLogic;
    namespace DataAccess
    {
      public class BlogContextInitializer : IDatabaseInitializer<BlogContext>
      {
        public void InitializeDatabase(BlogContext context)
        {
          new List<Blog>
          {
            new Blog {Id = 1, Title = "One"},
            new Blog {Id = 2, Title = "Two"},
            new Blog {Id = 3, Title = "Three"}
          }.ForEach(b => context.Blogs.Add(b));
          context.SaveChanges();
        }
      }
    }

How it works...

On the construction of the context, Entity Framework creates an in-memory version of the expected database model and then tries to connect to that database. If the database is not there, and sufficient rights have been granted to the user, then Entity Framework will create the database. This is done by using the same conventions that the context uses for connecting and retrieving the data. The context defines the metadata schema and then creates the database. There is an additional table that stores the model hash for future comparisons against the model in use.

We are checking for an existing database that is incompatible, and deleting it if found, and then creating one from the objects that we have registered onto the data context with the DbSet properties. You can use the model check to keep the application from starting against a malformed database as well.

Notice that we also call the Initialize method, passing true to force the script to run even if the model has not changed. This is for testing purposes; in a real scenario, you would want this code at the start of the application. It will load whatever data we have defined in the initializer. We have seeded the database with three blog entries for the test data, but you can use this approach to load many other table records. It also ensures that the database gets created correctly every time.
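
In a web application, that one-time setup typically belongs in Global.asax; a sketch, assuming the default MVC 3 template's registration methods:

    protected void Application_Start()
    {
      // Register the seeding initializer once, before any context is constructed
      System.Data.Entity.Database.SetInitializer(new BlogContextInitializer());

      AreaRegistration.RegisterAllAreas();
      RegisterGlobalFilters(GlobalFilters.Filters);
      RegisterRoutes(RouteTable.Routes);
    }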

Some objects are static but configured in the database; reference tables and lookup tables come to mind. These are normally populated by a manual script that has to be updated every time data is added to the reference tables, or a new lookup is created. We can code these items to be populated when the database is created, so the manual update does not need to be run.

There's more...

When we start a green field project, we have that rush of happiness to be working in a problem domain that no one has touched before. This can be exhilarating and daunting at the same time. The objects we define and the structure of our program come naturally to a programmer, but most of us need to think in a different way to design the database schema. This is where the tools can help translate our objects and intended structure into the database schema, if we leverage some patterns. We can then take full advantage of being object-oriented programmers.

Configuration and creation

If you have added configuration for the schema layout of your database, it will be reflected in the database that gets created. This allows you to set up configurations to match any requirements on the schema without sacrificing the object model internal to your application.
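
A brief sketch, assuming we want the generated Title column to be required and capped at 250 characters; these settings flow into the CREATE TABLE statement without touching the Blog class itself:

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
      // Shape the generated schema while the object model stays clean
      modelBuilder.Entity<Blog>()
        .Property(b => b.Title)
        .IsRequired()
        .HasMaxLength(250);
      base.OnModelCreating(modelBuilder);
    }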

Sample data

Testing the database layer has always been complex, but there are tools and strategies that make it simpler. First, we layer abstractions to allow for unit testing at each level of the application. This will cover most of the application, but there are still integration tests which need to verify the whole story. This is where database initializers can help us set up tests so they are less brittle and more repeatable.

 

Testing queries for performance


While working within the constraints of application development, one of the things that you need to be aware of and work to avoid is performance problems with queries that hit the database. While working with Entity Framework, you have several tools which will help with this.

Getting ready

We will be using NuGet Package Manager to install the Entity Framework 4.1 assemblies.

The package installer can be found at http://www.nuget.org/.

We will also be using a database for connecting to and updating data.

Open the Performance Testing Queries solution in the included source code examples.

Execute the database setup script from the code samples included with this recipe. This can be found in the DataAccess project within the Database folder.

How to do it...

  1. First, we start by adding a test class named PerformanceTests using the following code:

    using System;
    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Diagnostics;
    using System.Linq;
    using BusinessLogic;
    using DataAccess;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Test.Properties;
    namespace Test
    {
      [TestClass]
      public class PerformanceTests
      {
        private static BlogContext _context;
        [ClassInitialize]
        public static void ClassSetup(TestContext a)
        {
          // Rebuild the database and run the initializer once for the whole
          // test class, so every test runs against the same seeded data.
          Database.SetInitializer(new PerformanceTestInitializer());
          _context = new BlogContext(Settings.Default.BlogConnection);
          _context.Database.Delete();
          _context.Database.Create();
          _context.Database.Initialize(true);
        }
        [TestMethod]
        public void ShouldReturnInLessThanASecondForTenThousandRecords()
        {
          var watch = Stopwatch.StartNew();
          // ToList() forces the query to execute, so we time the round trip
          // to the database rather than just composing the query.
          var items = _context.Set<Blog>().ToList();
          watch.Stop();
          Assert.IsTrue(watch.Elapsed < TimeSpan.FromSeconds(1));
        }
        [TestMethod]
        public void ShouldReturnAFilteredSetInLessThan500Milliseconds()
        {
          var watch = Stopwatch.StartNew();
          var items = _context.Set<Blog>()
            .Where(x => x.Id > 500 && x.Id < 510)
            .ToList();
          watch.Stop();
          Assert.IsTrue(watch.Elapsed < TimeSpan.FromMilliseconds(500));
        }
      }
    }
  2. Add a new C# class named PerformanceTestInitializer to the Test project using the following code:

    using System;
    using System.Data.Entity;
    using System.Diagnostics;
    using BusinessLogic;
    using DataAccess;

    public class PerformanceTestInitializer : IDatabaseInitializer<BlogContext>
    {
      public void InitializeDatabase(BlogContext context)
      {
        long totalElapsed = 0;
        for (int i = 0; i < 10000; i++)
        {
          // Time each insert individually so we can report an average cost.
          Stopwatch stopwatch = Stopwatch.StartNew();
          Blog b = new Blog { Id = i, Title = string.Format("Test {0}", i) };
          context.Blogs.Add(b);
          context.SaveChanges();
          stopwatch.Stop();
          totalElapsed += stopwatch.ElapsedTicks;
        }
        // Average number of ticks per insert, written to the test output.
        Console.WriteLine(totalElapsed / 10000);
      }
    }
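Calling SaveChanges once per row keeps the per-insert timing honest, but it is also the slowest way to bulk-load data. If you only need the rows in place and do not care about per-insert timings, a variant like the following sketch flushes once at the end, so all of the inserts share a single connection and transaction:

    // A sketch of a faster seeding loop: add every row first, then flush
    // once so all inserts go out in a single SaveChanges call.
    for (int i = 0; i < 10000; i++)
    {
      context.Blogs.Add(new Blog { Id = i, Title = string.Format("Test {0}", i) });
    }
    context.SaveChanges();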

How it works...

The test initialization constructs a new context and then inserts 10,000 rows of data. This is done once for the test class, and all of the tests then share that data. It is time-intensive, but it can be automated as part of a performance suite. This is not a unit test but an integration test. Tests like these should not run with every build the way unit tests do, but should run at key points in the continuous build and deployment process. Nightly builds are a great place for them, as they head off costly performance-driven changes late in the development cycle.
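Note that the ToList calls in the tests force each query to execute; without them, LINQ to Entities defers execution, and the stopwatch would measure only the time spent composing the query expression, not the round trip to the database. Also bear in mind that the first query against a new context pays a one-time model compilation cost, so the thresholds asserted here are assumptions that you should tune to your own environment and hardware.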

This kind of performance testing ensures that the critical pieces of the application meet the level of responsiveness our customers expect. The only way to elicit these kinds of requirements is to sit down with the customer and ask: How slow is unacceptable? How long does it currently take? How fast does it need to be to fit into the rhythm of your work?

There's more...

When we shop for a car, we look at the miles per gallon, the engine size, the maintenance, and all of the other performance markers. Writing software should be no different. If we run into resistance to this idea, we can use the following information for help.

Why do performance testing?

Performance testing is a tool for avoiding unmet expectations. When we release software to a customer, certain expectations come along with that process. Some are communicated in requirements and some are not. Most of the time the performance expectations are not communicated, but they can sink the success of a project anyway. Performance testing is not about trying to squeeze every millisecond out of an application; it is about making sure the application is good enough to meet the needs of the customer.

See also

In this chapter:

  • Performing load testing against a database

  • Testing queries

 

Performing load testing against a database


When we deploy an application to a production environment, we want to make sure that the environment can handle the load the application will put on it. We are going to build a load testing suite that allows us to test our data access for this purpose.

Getting ready

For this recipe, you will need the load testing tools from Visual Studio. These are part of the load testing feature pack, which is available to Visual Studio Ultimate users with an active MSDN subscription.

We will be using NuGet Package Manager to install the Entity Framework 4.1 assemblies.

The package installer can be found at http://www.nuget.org/.

We will also be using a database for connecting to and updating data.

Open the Load Testing against a Database solution in the included source code examples.

Execute the database setup script from the code samples included with this recipe. This can be found in the DataAccess project within the Database folder.

How to do it...

  1. We are going to take our already prepared performance tests and use them in a new load test by adding a load test to the Test project and naming the scenario BasicDataAccess in the wizard. Select the Use normal distribution centered on recorded think times option, and change the Think time between iterations to 1 second, as shown in the following screenshot:

  2. We are going to simulate 25 users, for a constant load, as shown in the following screenshot:

  3. Set the Test Mix Model to Based on the total number of tests, as shown in the following screenshot:

  4. Add both existing tests to the Test Mix, at 50% of the load each, as shown in the following screenshot:

  5. We skip Network Mix and Counter Sets for this test, and on Run Settings, we set a 10 minute duration for the test run, as shown in the following screenshot:

  6. Finish the setup, and you should see the screen shown in the following screenshot. Click on the Run Test button in the upper-left corner:

  7. When you run the test, you will see a very complex screen. The left panel holds the counters that you can drag onto the graphs on the right to monitor system and test performance. The graphs plot these values. The table on the bottom right holds the numeric values that drive the graphs, and the list on the bottom left shows the number of running and completed tests. The first test takes a while to run because of the setup scripts, but after that the tests run quite quickly:

How it works...

This load test spins up threads across as many processors as it can, to simulate multiple users hitting the application at the same time. This in turn creates many connections to the database. Producing this flood of connectivity and processing used to be impossible without a full production load, but it is now simple to set up. We can test not only our database connections this way, but also the other internal pieces of our application that come under load. This lets us identify bottlenecks and resolve them before they cause a production issue.

There's more...

Load and performance testing are tightly coupled and should be used in conjunction. The two main styles of load testing, described next, should both be applied to our application.

Stress testing

Stress testing is one of the two main ways to load test an application. In this test, we slowly increase the load on a system until it fails. This performs two very important functions for us. First, it identifies the initial point of failure, the bottleneck. Second, it allows us to evaluate the pieces of the software on their scalability, and to work on increasing the throughput of the pieces that do not handle load well. This helps us ward off problems before they bring our boss to our desk at 4:45 p.m.

Real-world simulation

Real-world simulation is the second of the two main ways to load test an application. It puts a slightly higher than expected load on the system to see how the performance and functionality hold up. This assures us that the system will function day-to-day at an acceptable level. We are not hunting for bottlenecks with this type of testing; we are making sure we have met expectations and are fully prepared to hand this software over to our customers without fear of a major performance issue.

See also

In this chapter:

  • Testing queries for performance

  • Creating mock database connections

About the Authors

  • Devlin Liles

    Devlin Liles is a Principal Consultant at Improving Enterprises and a Data Platform Development MVP. Devlin has been writing software since he first crashed a DOS box back in 1989, and still loves pushing the envelope. Devlin has worked on all sizes of projects from enterprise wide inventory systems, to single install scheduling applications. He is a regular national presenter at user groups, conferences, and corporate events, speaking on data access practices, techniques, and architecture. He is an active community leader, and helps put on several conferences in Dallas and Houston. Devlin works for Improving Enterprises, a Dallas-based company that has been awesome enough to support him in chasing his dreams, and writing awesome code.

  • Tim Rayburn

    Tim Rayburn is a Principal Consultant with Improving Enterprises, and a Microsoft MVP for Connected Systems Development. He has worked with Microsoft technologies for over 13 years, and is the Founder of the Dallas/Fort Worth Connected Systems User Group, the organizer of the Dallas TechFest, and a blogger at TimRayburn.net. When he's not pursuing the ever-moving technology curve, he is an avid gamer, from consoles to table-top RPGs.
