Ensuring Quality for Unit Testing with Microsoft Visual Studio 2010

by Peter Ritchie | July 2010 | Enterprise Articles Microsoft

Some level of automated unit testing is required to maintain the quality of the software and to support the refactoring effort. In this article by Peter Ritchie, author of Refactoring with Microsoft Visual Studio 2010, we'll discuss various aspects of unit testing and how to approach unit testing with Visual Studio. We'll cover testing frameworks that facilitate writing and executing tests, including mocking frameworks. We'll cover the following topics:

  • Automated testing
  • Unit tests
  • Mocking
  • Mocking frameworks
  • Unit test frameworks
  • Unit testing legacy code
  • Test-driven development
  • Third party refactoring tools


Change is not always good

Any change to existing code has the potential to change the external behavior of the system. When we refactor code, we explicitly intend not to change the external behavior of the system. But how do we perform our refactorings while remaining reasonably comfortable that we haven't changed external behavior?

The first step to validating that external behavior hasn't been affected is to define the criteria by which we can validate that the external behavior hasn't changed.

Automated testing

Every developer does unit testing. Some developers write a bit of test code, maybe an independent project that uses the code to verify it in some way, and then promptly forget about the project. Or, even worse, they throw that project away. For the purposes of this text, when I use the term "testing", I mean "automated testing".

Test automation is the practice of using a testing framework to facilitate and execute tests. A test automation framework promotes the automatic execution of multiple tests. Generally these frameworks include some sort of Graphical User Interface that helps manage tests and their execution. Passing tests are "Green" and failing tests are "Red", which is where the "Red, Green, Refactor" mantra comes from.
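To make "Red, Green" concrete, here's a minimal sketch of what a test automation framework does at its core: discover test methods and execute them, reporting each pass or failure. The attribute and runner names here are invented for illustration; this is not any real framework.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Illustrative marker attribute, analogous in spirit to MSTest's [TestMethod].
[AttributeUsage(AttributeTargets.Method)]
class SimpleTestAttribute : Attribute { }

class CalculatorTests
{
    [SimpleTest]
    public void AdditionWorks()
    {
        if (2 + 2 != 4) throw new Exception("expected 4");
    }

    [SimpleTest]
    public void SubtractionWorks()
    {
        if (5 - 3 != 2) throw new Exception("expected 2");
    }
}

class Program
{
    static void Main()
    {
        // Discover every [SimpleTest] method, run it, and report Green/Red.
        var fixture = new CalculatorTests();
        var tests = typeof(CalculatorTests)
            .GetMethods()
            .Where(m => m.GetCustomAttributes(typeof(SimpleTestAttribute), false).Any());
        foreach (var test in tests)
        {
            try
            {
                test.Invoke(fixture, null);
                Console.WriteLine("Green: " + test.Name);
            }
            catch (TargetInvocationException ex)
            {
                Console.WriteLine("Red: " + test.Name + " - " + ex.InnerException.Message);
            }
        }
    }
}
```

Real frameworks add filtering, result aggregation, and a GUI on top, but attribute-based discovery plus automatic execution is the essence.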

Unit tests

If we're refactoring, there's a chance that what we want to refactor isn't currently under test. This means that if we perform refactoring on the code, we'll have to manually test the system through the established user interfaces to verify that the code works. Realistically, this doesn't verify the code; it verifies that the external behavior hasn't changed. There could very well be a hidden problem in the code that won't manifest itself until the external behavior has been modified, distancing detection of the defect from when it was created. Our goal is to not affect external behavior when refactoring, so verification through the graphical user interface doesn't fully verify our changes; it is also time-consuming and more prone to human error.

What we really want to do is unit test the code. The term "unit test" has become overloaded over the years. MSDN describes unit testing as taking:

... the smallest piece of testable software in the application, [isolating] from the remainder of the code, and [determining] whether it behaves exactly as [expected].

This smallest piece of software is generally at the method level: unit testing is effectively about ensuring each method behaves as expected. Originally, "unit test" meant a test of an individual unit of code; it has since evolved to mean any sort of code-based automated test that developers write and execute within the development process. With various available frameworks, testing the graphical user interface can also be automated in a code-based test, but we won't focus on that.

It's not unusual for some software projects to have hundreds or thousands of individual unit tests. Given the granularity of some of the tests, it's also not unusual for the lines of code in the unit tests to outnumber the actual production code. This is expected.

At the lowest level, we want to perform true "unit testing": we want to test individual units of code, independently, to verify that each unit functions as expected—especially in the presence of refactoring. To independently test these units of code, we often have to separate them from their dependent code. For example, if I want to verify the business logic to uniquely generate an entity ID, there's no real need for me to access the database to verify that code. That code to generate a unique ID may depend on a collection of IDs to fully verify the algorithm, but that collection of IDs, for the purposes of verification, doesn't need to come from a database. So, we want to separate out the use of some dependencies, like the database, from some of our tests.
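As a sketch of that idea (the type and member names here are hypothetical, not from the book's code base), the uniqueness logic can be written against a plain collection of IDs, so a test can supply canned IDs instead of querying a database:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: the uniqueness algorithm depends only on a collection
// of existing IDs, not on where that collection came from.
static class UniqueIdGenerator
{
    public static Guid Next(IEnumerable<Guid> existingIds)
    {
        var known = new HashSet<Guid>(existingIds);
        Guid candidate;
        do
        {
            candidate = Guid.NewGuid();
        } while (known.Contains(candidate)); // retry until unique among known IDs
        return candidate;
    }
}

class Program
{
    static void Main()
    {
        // In a unit test, the "existing" IDs are canned data; no database required.
        var existing = new List<Guid> { Guid.NewGuid(), Guid.NewGuid() };
        Guid id = UniqueIdGenerator.Next(existing);
        Console.WriteLine(!existing.Contains(id)); // True
    }
}
```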

Techniques for loosely-coupled design like Dependency Inversion and Dependency Injection allow for a composable design. This composable design aids in the flexibility and agility of our software system, but it also aids in unit testing.
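A minimal constructor-injection sketch (the names are illustrative, not from the book's code) shows how a composable design lets a test substitute an in-memory collaborator for a production one:

```csharp
using System;

// Dependency Inversion: the service depends on an abstraction, not on a
// concrete database sequence.
interface ISequence
{
    int Next();
}

// A production implementation might wrap a database sequence; this one is
// entirely in memory and suitable for tests.
class InMemorySequence : ISequence
{
    private int current;
    public int Next() { return ++current; }
}

class InvoiceNumberService
{
    private readonly ISequence sequence;

    // Dependency Injection: the collaborator is supplied from outside, so a
    // test can pass a test double instead of the real thing.
    public InvoiceNumberService(ISequence sequence)
    {
        this.sequence = sequence;
    }

    public string NextInvoiceNumber()
    {
        return "INV-" + sequence.Next().ToString("D5");
    }
}

class Program
{
    static void Main()
    {
        var service = new InvoiceNumberService(new InMemorySequence());
        Console.WriteLine(service.NextInvoiceNumber()); // INV-00001
    }
}
```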

Other testing

Useful and thorough information about all types of testing could easily fill several tomes. We're focusing on the developer task of refactoring, so we're limiting our coverage of testing to the absolutely essential developer testing: unit testing.

The fact that we're focusing on unit tests with regard to refactoring doesn't mean that other types of testing aren't useful or needed. The fact that developers are performing unit tests doesn't preclude the need for them to also perform a certain level of integration testing, or for QA personnel to perform other levels of integration testing, user interface testing, user acceptance testing, system testing, and so on.

Integration testing is combining distinct modules in the system to verify that they interoperate exactly as expected. User interface testing is testing that the user interface is behaving exactly as expected. User acceptance testing is verifying that specific user requirements are being met—which could involve unit testing, integration testing, user interface testing, verifying non-functional requirements, and so on.

Mocking

Mocking is a general term that usually refers to the substitution of Test Doubles for dependencies within a system under test that aren't the focus of the test. "Mocking" generally encompasses all types of test doubles, not just Mock test doubles.

Test Double is any object that takes the place of a production object for testing purposes.
Mock is a type of Test Double that stands in for a production object whose behavior or attributes are directly used within the code under test and within the verification.

Test Doubles allow an automated test to gather the criteria by which the code is verified. Test Doubles allow isolation of the code under test. There are several different types of Test Doubles: Mock, Dummy, Stub, Fake, and Spy.

  • Dummy is a type of Test Double that is only passed around within the test but not directly used by the test. "null" is an example of a dummy—use of "null" satisfies the code, but may not be necessary for verification.
  • Stub is a type of Test Double that provides inputs to the test and may accept inputs from the test but does not use them. The inputs a Stub provides to the test are generally "canned".
  • Fake is a type of Test Double that is used to substitute a production component for a test component. A Fake generally provides an alternate implementation of that production component that isn't suitable for production but useful for verification. Fakes are generally used for components with heavy integration dependencies that would otherwise make the test slow or heavily reliant on configuration.
  • Spy is a type of Test Double that effectively records the actions performed on it. The recorded actions can then be used for verification. This is often used in behavioral-based—rather than state-based—testing.
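To make a couple of these concrete, here's a hand-written, do-nothing stand-in and a Spy for a small, hypothetical IMailSender dependency (these names are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

// A minimal dependency to double out (illustrative only).
interface IMailSender
{
    void Send(string address, string body);
}

// Satisfies the dependency but does nothing; useful when mail-sending is
// irrelevant to the verification.
class NullMailSender : IMailSender
{
    public void Send(string address, string body) { }
}

// Spy: records every call so the test can verify behavior afterwards.
class SpyMailSender : IMailSender
{
    public readonly List<string> SentTo = new List<string>();
    public void Send(string address, string body) { SentTo.Add(address); }
}

class ReminderService
{
    private readonly IMailSender sender;
    public ReminderService(IMailSender sender) { this.sender = sender; }
    public void Remind(string address) { sender.Send(address, "Your invoice is overdue."); }
}

class Program
{
    static void Main()
    {
        var spy = new SpyMailSender();
        new ReminderService(spy).Remind("a@example.com");
        // Behavioral verification against the spy's recording:
        Console.WriteLine(spy.SentTo.Count == 1 && spy.SentTo[0] == "a@example.com"); // True
    }
}
```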

Test doubles can be created manually, or they can be created automatically through the use of mocking frameworks. Frameworks like Rhino Mocks provide the ability to automatically create test doubles. Mocking frameworks generally rely on a loosely-coupled design so that the generated test doubles can be substituted for other objects based upon an interface.

Let's look at an example of writing a unit test involving mocking. If we return to one of our decoupling examples—InvoiceRepository—we can now test the internals of InvoiceRepository without testing our Data Access Layer (DAL). We would start by creating a test for the InvoiceRepository.Load method:

[TestClass()]
public class InvoiceRepositoryTest
{
    [TestMethod()]
    public void LoadTest()
    {
        DateTime expectedDate = DateTime.Now;
        IDataAccess dataAccess =
            new InvoiceRepositoryDataAccessStub(expectedDate);
        InvoiceRepository target =
            new InvoiceRepository(dataAccess);
        Guid invoiceId = Guid.NewGuid();
        Invoice actualInvoice = target.Load(invoiceId);
        Assert.AreEqual(expectedDate, actualInvoice.Date);
        Assert.AreEqual(invoiceId, actualInvoice.Id);
        Assert.AreEqual("Test", actualInvoice.Title);
        Assert.AreEqual(InvoiceStatus.Posted,
            actualInvoice.Status);
        Assert.AreEqual(1, actualInvoice.LineItems.Count());
        InvoiceLineItem actualLineItem =
            actualInvoice.LineItems.First();
        Assert.AreEqual("Description",
            actualLineItem.Description);
        Assert.AreEqual(1F, actualLineItem.Discount);
        Assert.AreEqual(2F, actualLineItem.Price);
        Assert.AreEqual(3F, actualLineItem.Quantity);
    }
}

Here, we're creating an instance of our repository, passing it a Stub IDataAccess class. We then invoke the Load method and verify the various attributes of the resulting Invoice object. We, of course, don't have a class named InvoiceRepositoryDataAccessStub yet, so we'll have to create one. This class, for the purposes of this test, will look like this:

class InvoiceRepositoryDataAccessStub : IDataAccess
{
    private DateTime constantDate;

    public InvoiceRepositoryDataAccessStub(DateTime date)
    {
        constantDate = date;
    }

    public System.Data.DataSet LoadInvoice(Guid invoiceId)
    {
        DataSet invoiceDataSet = new DataSet("Invoice");
        DataTable invoiceTable =
            invoiceDataSet.Tables.Add("Invoices");
        DataColumn column = new DataColumn("Id", typeof(Guid));
        invoiceTable.Columns.Add(column);
        column = new DataColumn("Date", typeof(DateTime));
        invoiceTable.Columns.Add(column);
        column = new DataColumn("Title", typeof(String));
        invoiceTable.Columns.Add(column);
        column = new DataColumn("Status", typeof(int));
        invoiceTable.Columns.Add(column);
        DataRow invoiceRow = invoiceTable.NewRow();
        invoiceRow["Id"] = invoiceId;
        invoiceRow["Date"] = constantDate;
        invoiceRow["Status"] = InvoiceStatus.Posted;
        invoiceRow["Title"] = "Test";
        invoiceTable.Rows.Add(invoiceRow);
        return invoiceDataSet;
    }

    public System.Data.DataSet LoadInvoiceLineItems(
        Guid invoiceId)
    {
        DataSet lineItemDataSet = new DataSet("LineItem");
        DataTable lineItemTable =
            lineItemDataSet.Tables.Add("LineItems");
        DataColumn column = new DataColumn("InvoiceId", typeof(Guid));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Price", typeof(Decimal));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Quantity", typeof(int));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Discount", typeof(double));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Description", typeof(String));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("TaxRate1", typeof(String));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("TaxRate2", typeof(String));
        lineItemTable.Columns.Add(column);
        DataRow lineItemRow =
            lineItemDataSet.Tables["LineItems"].NewRow();
        lineItemRow["InvoiceId"] = invoiceId;
        lineItemRow["Discount"] = 1F;
        lineItemRow["Price"] = 2F;
        lineItemRow["Quantity"] = 3;
        lineItemRow["Description"] = "Description";
        lineItemTable.Rows.Add(lineItemRow);
        return lineItemDataSet;
    }

    public void SaveInvoice(System.Data.DataSet dataSet)
    {
        throw new NotImplementedException();
    }
}

Here, we're manually creating a DataSet object and populating rows with the canned data that we specifically check for in the validation code within the test. It's worth noting that we haven't implemented SaveInvoice in this class. This is mostly because we haven't implemented it in the production code yet; but, in the case of testing Load, an exception would be thrown should it call SaveInvoice, adding more depth to the validation of the Load method, since it shouldn't be using SaveInvoice to load data.

In the InvoiceRepositoryTest.LoadTest method, we're specifically using the InvoiceRepositoryDataAccessStub. InvoiceRepositoryDataAccessStub is a Stub of an IDataAccess specifically for use with InvoiceRepository. If you recall, a Stub is a Test Double that substitutes for a production component but inputs canned data into the system under test. In our test, we're just checking for that canned data to verify that the InvoiceRepository called our InvoiceRepositoryDataAccessStub instance in the correct way.

Priorities

In a project with little or no unit tests, it can be overwhelming to begin refactoring the code. There can be a tendency to want to first establish unit tests for the entire code base before refactoring starts. This, of course, is linear thinking. An established code base has been verified to a certain extent; if it's been deployed, the code effectively "works". Attempting to unit test every line of code isn't going to change that fact.

It's when we start to change code that we want to verify that our change doesn't have an unwanted side-effect. To this effect, we want to prioritize unit-testing to avoid having unit-testing become the sole focus of the team. I find that the unit-testing priorities when starting out with unit-testing are the same as when a system has had unit tests for some time. The focus should be that any new code should have as much unit-testing code coverage as realistically possible and any code that needs to be refactored should have code coverage as high as realistically possible.

The priority here is to ensure that any new code is tested and verified, and accept the fact that existing code has been verified in its own way. If we're not planning on immediately changing certain parts of code, they don't need unit-tests and should be of lower priority.

Code coverage

Something that often goes hand-in-hand with unit testing is Code Coverage. The goal of code coverage is to get as close to 100% coverage as reasonably possible.

Code Coverage is the measure of the percentage of code that is executed (covered) by automated tests.

Code coverage is a metric generally used by teams that are performing unit tests on a good portion of their code. When you're just starting out with unit testing, code coverage is effectively anecdotal: it doesn't tell you much more than that you are doing some unit tests.

One trap teams fall into as they approach majority code coverage is striving for 100% code coverage. This is both problematic and counterproductive. Some code is difficult to test and even harder to verify, and the work involved in testing that code serves only to increase the code coverage percentage.

I prefer to view the code coverage delta over time. In other words, I concentrate on how the code coverage percentage changes (or doesn't). I want to ensure that it's not going down. If the code coverage percentage is really low (say 25%) then I may want to see it increasing, but not at the risk of supplanting other work.
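The arithmetic behind the delta view is simple; a toy calculation (with made-up numbers) illustrates what "not going down" looks like between two measurements:

```csharp
using System;

static class CoverageMath
{
    // Code coverage is covered units divided by total units, as a percentage.
    public static double Percent(int covered, int total)
    {
        return 100.0 * covered / total;
    }
}

class Program
{
    static void Main()
    {
        // Two imaginary measurements a month apart; the delta is what matters.
        double lastMonth = CoverageMath.Percent(250, 1000);
        double thisMonth = CoverageMath.Percent(300, 1100); // new, tested code added
        Console.WriteLine(lastMonth);                 // 25
        Console.WriteLine(Math.Round(thisMonth, 1));  // 27.3
        Console.WriteLine(thisMonth > lastMonth);     // True: coverage is trending up
    }
}
```

Note that adding untested code drives the percentage down even when no tests were deleted, which is exactly what watching the delta is meant to catch.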


Mocking frameworks

So far, we've manually created test doubles in order to isolate the system under test. There are frameworks, collectively called "mocking frameworks", that facilitate the automatic creation of Stubs, Fakes, Mocks, and Dummies. We'll take a cursory look at a common subset of those frameworks now.

Rhino Mocks

Rhino Mocks is an open source mocking framework written by Ayende Rahien. To use Rhino Mocks with your test project, simply add a reference to Rhino.Mocks.dll.

[TestClass]
public class InvoiceRepositoryTest
{
    /// <summary>
    /// Creates canned invoice data
    /// </summary>
    /// <param name="invoiceId"></param>
    /// <param name="constantDate"></param>
    /// <returns>DataSet of invoice data</returns>
    private DataSet CreateInvoiceDataSet(Guid invoiceId, DateTime constantDate)
    {
        DataSet invoiceDataSet = new DataSet("Invoice");
        DataTable invoiceTable =
            invoiceDataSet.Tables.Add("Invoices");
        DataColumn column = new DataColumn("Id", typeof(Guid));
        invoiceTable.Columns.Add(column);
        column = new DataColumn("Date", typeof(DateTime));
        invoiceTable.Columns.Add(column);
        column = new DataColumn("Title", typeof(String));
        invoiceTable.Columns.Add(column);
        column = new DataColumn("Status", typeof(int));
        invoiceTable.Columns.Add(column);
        DataRow invoiceRow = invoiceTable.NewRow();
        invoiceRow["Id"] = invoiceId;
        invoiceRow["Date"] = constantDate;
        invoiceRow["Status"] = InvoiceStatus.Posted;
        invoiceRow["Title"] = "Test";
        invoiceTable.Rows.Add(invoiceRow);
        return invoiceDataSet;
    }

    /// <summary>
    /// Creates canned line item data
    /// </summary>
    /// <param name="invoiceId"></param>
    /// <returns>DataSet of line item data</returns>
    private DataSet CreateLineItemDataSet(Guid invoiceId)
    {
        DataSet lineItemDataSet = new DataSet("LineItem");
        DataTable lineItemTable =
            lineItemDataSet.Tables.Add("LineItems");
        DataColumn column =
            new DataColumn("InvoiceId", typeof(Guid));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Price", typeof(Decimal));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Quantity", typeof(int));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Discount", typeof(double));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Description", typeof(String));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("TaxRate1", typeof(String));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("TaxRate2", typeof(String));
        lineItemTable.Columns.Add(column);
        DataRow lineItemRow =
            lineItemDataSet.Tables["LineItems"].NewRow();
        lineItemRow["InvoiceId"] = invoiceId;
        lineItemRow["Discount"] = 1F;
        lineItemRow["Price"] = 2F;
        lineItemRow["Quantity"] = 3;
        lineItemRow["Description"] = "Description";
        lineItemTable.Rows.Add(lineItemRow);
        return lineItemDataSet;
    }

    /// <summary>
    /// Test Load Method
    /// </summary>
    [TestMethod]
    public void LoadTest()
    {
        DateTime expectedDate = DateTime.Now;
        Guid invoiceId = Guid.NewGuid();
        var mocker = new Rhino.Mocks.MockRepository();
        IDataAccess dataAccess = mocker.Stub<IDataAccess>();
        using (mocker.Record())
        {
            Rhino.Mocks.SetupResult
                .For(dataAccess.LoadInvoice(invoiceId))
                .Return(CreateInvoiceDataSet(invoiceId, expectedDate));
            Rhino.Mocks.SetupResult
                .For(dataAccess.LoadInvoiceLineItems(invoiceId))
                .Return(CreateLineItemDataSet(invoiceId));
        }
        InvoiceRepository target = new InvoiceRepository(dataAccess);
        Invoice actualInvoice = target.Load(invoiceId);
        Assert.AreEqual(expectedDate, actualInvoice.Date);
        Assert.AreEqual(invoiceId, actualInvoice.Id);
        Assert.AreEqual("Test", actualInvoice.Title);
        Assert.AreEqual(InvoiceStatus.Posted,
            actualInvoice.Status);
        Assert.AreEqual(1, actualInvoice.LineItems.Count());
        InvoiceLineItem actualLineItem = actualInvoice.LineItems.First();
        Assert.AreEqual("Description", actualLineItem.Description);
        Assert.AreEqual(1F, actualLineItem.Discount);
        Assert.AreEqual(2F, actualLineItem.Price);
        Assert.AreEqual(3F, actualLineItem.Quantity);
    }
}

Rhino Mocks works with the record/playback paradigm. In our case, we first create a MockRepository that will be used to create mocks (and stubs, in our case). We then ask Rhino Mocks to create a stub of our IDataAccess. Next, we tell Rhino Mocks that we want the result of CreateInvoiceDataSet to be returned from IDataAccess.LoadInvoice when it's given the value stored in invoiceId, and the same for LoadInvoiceLineItems and CreateLineItemDataSet. Rhino Mocks "records" these actions. If we also want to validate behavior (that is, that certain methods are called, and called in a certain order), we can use the "playback" feature. We're concentrating on state-based tests in our examples, so we don't have a corresponding playback call. We then proceed with the test using our stubbed IDataAccess, verifying that Load returns the data we expect for the given inputs.

Moq

Moq (pronounced "Mock") is an open source mocking framework. To use Moq with your test project, simply add a reference to Moq.dll after installation.

Moq uses lambda expressions to configure each method in the test double. Our test re-written to support Moq would look like the following:

/// <summary>
/// Test Load Method
/// </summary>
[TestMethod]
public void LoadTest()
{
    DateTime expectedDate = DateTime.Now;
    Guid invoiceId = Guid.NewGuid();
    var mock = new Moq.Mock<IDataAccess>();
    mock.Setup(da => da.LoadInvoice(invoiceId))
        .Returns(CreateInvoiceDataSet(invoiceId, expectedDate));
    mock.Setup(da => da.LoadInvoiceLineItems(invoiceId))
        .Returns(CreateLineItemDataSet(invoiceId));
    IDataAccess dataAccess = mock.Object;
    InvoiceRepository target = new InvoiceRepository(dataAccess);
    Invoice actualInvoice = target.Load(invoiceId);
    Assert.AreEqual(expectedDate, actualInvoice.Date);
    Assert.AreEqual(invoiceId, actualInvoice.Id);
    Assert.AreEqual("Test", actualInvoice.Title);
    Assert.AreEqual(InvoiceStatus.Posted, actualInvoice.Status);
    Assert.AreEqual(1, actualInvoice.LineItems.Count());
    InvoiceLineItem actualLineItem = actualInvoice.LineItems.First();
    Assert.AreEqual("Description", actualLineItem.Description);
    Assert.AreEqual(1F, actualLineItem.Discount);
    Assert.AreEqual(2F, actualLineItem.Price);
    Assert.AreEqual(3F, actualLineItem.Quantity);
}

With this method, we start by creating a mock of our interface. Once we have a Mock object, we configure the return values we would like our methods to have. With Moq, a lambda expression tells Moq which method we want to configure: Mock.Setup accepts the lambda expression and parses it to find the method we're referring to. "da => da.LoadInvoice(invoiceId)" is code that executes the IDataAccess.LoadInvoice method, passing in a specific Guid value. Moq parses that and allows you to configure the return value for that specific Guid value. In our case, when IDataAccess.LoadInvoice is invoked with the value in invoiceId, we return the result of the CreateInvoiceDataSet() method. We configure IDataAccess.LoadInvoiceLineItems similarly. We then access the Mock.Object property to get the actual test double object that is used in the rest of the test.

Unit testing existing legacy code

If we already have unit tests for a particular portion of code, then we can be reasonably sure that we can detect whether changes to this code have caused any changes in external behavior.

Legacy code is code that has no automated tests associated with it. There are two realistic ways to deal with legacy code: go ahead and evolve it, or decouple it from its dependents and treat it like a third-party library that we don't have control over.

When you're refactoring code, that refactoring is often to redesign the code to follow principles like loose coupling. True unit testing is possible only if the code is loosely coupled. This is a Catch-22: we can't unit test until we have a loosely-coupled design where we want to unit test, but we can't verify refactoring toward loose coupling without unit testing.

So, how do we deal with this paradox? Clearly, we can't write succinct and efficient unit tests to validate our existing legacy code before refactoring. But we can begin refactoring our legacy code and the code that uses it. Some fairly benign refactorings can be applied to decouple the legacy code from the code that uses it. The Extract Interface refactoring is excellent for this: performed on a legacy class, it creates a new interface that matches the public interface of that class. This refactoring doesn't replace uses of the original type with the new interface, so you'll have to do that manually for each case where you need it. This works out well, because you're going to focus on individual uses of the legacy code; you're testing the code that uses the legacy code, not the legacy code itself.

If we look at a previous implementation of DataAccess, it originally did not implement an IDataAccess interface. This class—which we'll use as an example of a legacy component—may have been used like the following:

DataAccess dataAccess = new DataAccess(connectionString);
InvoiceRepository repository = new InvoiceRepository(dataAccess);

Where InvoiceRepository is implemented, partially, as follows:

public class InvoiceRepository : IInvoiceRepository
{
    DataAccess dataAccess;

    public InvoiceRepository(DataAccess dataAccess)
    {
        this.dataAccess = dataAccess;
    }
    //...
}

If we performed the Extract Interface on DataAccess to generate an IDataAccess interface, we'd refactor our InvoiceRepository class to the following:

public class InvoiceRepository : IInvoiceRepository
{
    IDataAccess dataAccess;

    public InvoiceRepository(IDataAccess dataAccess)
    {
        this.dataAccess = dataAccess;
    }
    //...
}

Both the Extract Interface refactoring and the refactoring of InvoiceRepository are fairly benign. We've now decoupled InvoiceRepository from directly using DataAccess, but haven't changed any logic (just variable types). With little change to DataAccess (what we've deemed as the "legacy" component), the InvoiceRepository (the system under test) is now free to use a Test Double as a substitute for DataAccess in order to isolate the code in InvoiceRepository to fully validate. This affords us the ability to test our repository without having to deploy, install, populate, and configure a database.

TypeMock isolator

We've seen how facilitating unit tests sometimes requires changing existing code in order to realistically test certain code. Mocking usually requires a certain level of decoupling in the design in order to substitute production components with test doubles. In addition to being a mocking framework, TypeMock isolator offers the ability to stub out instance methods of class instances, replacing their return values with canned values.

For example, if we return to our DataAccess class example before it was refactored:

[TestClass]
public class InvoiceRepositoryTest
{
    private DataSet CreateInvoiceDataSet(Guid invoiceId, DateTime constantDate)
    {
        DataSet invoiceDataSet = new DataSet("Invoice");
        DataTable invoiceTable =
            invoiceDataSet.Tables.Add("Invoices");
        DataColumn column = new DataColumn("Id", typeof(Guid));
        invoiceTable.Columns.Add(column);
        column = new DataColumn("Date", typeof(DateTime));
        invoiceTable.Columns.Add(column);
        column = new DataColumn("Title", typeof(String));
        invoiceTable.Columns.Add(column);
        column = new DataColumn("Status", typeof(int));
        invoiceTable.Columns.Add(column);
        DataRow invoiceRow = invoiceTable.NewRow();
        invoiceRow["Id"] = invoiceId;
        invoiceRow["Date"] = constantDate;
        invoiceRow["Status"] = InvoiceStatus.Posted;
        invoiceRow["Title"] = "Test";
        invoiceTable.Rows.Add(invoiceRow);
        return invoiceDataSet;
    }

    private DataSet CreateLineItemDataSet(Guid invoiceId)
    {
        DataSet lineItemDataSet = new DataSet("LineItem");
        DataTable lineItemTable =
            lineItemDataSet.Tables.Add("LineItems");
        DataColumn column =
            new DataColumn("InvoiceId", typeof(Guid));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Price", typeof(Decimal));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Quantity", typeof(int));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Discount", typeof(double));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("Description", typeof(String));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("TaxRate1", typeof(String));
        lineItemTable.Columns.Add(column);
        column = new DataColumn("TaxRate2", typeof(String));
        lineItemTable.Columns.Add(column);
        DataRow lineItemRow =
            lineItemDataSet.Tables["LineItems"].NewRow();
        lineItemRow["InvoiceId"] = invoiceId;
        lineItemRow["Discount"] = 1F;
        lineItemRow["Price"] = 2F;
        lineItemRow["Quantity"] = 3;
        lineItemRow["Description"] = "Description";
        lineItemTable.Rows.Add(lineItemRow);
        return lineItemDataSet;
    }

    [TestMethod]
    public void LoadTest()
    {
        DateTime expectedDate = DateTime.Now;
        Guid invoiceId = Guid.NewGuid();
        String connectionString = "";
        DataAccess dataAccess = new DataAccess(connectionString);
        Isolate.NonPublic.WhenCalled(dataAccess, "LoadInvoice")
            .WillReturn(CreateInvoiceDataSet(invoiceId, expectedDate));
        Isolate.NonPublic.WhenCalled(dataAccess, "LoadInvoiceLineItems")
            .WillReturn(CreateLineItemDataSet(invoiceId));
        InvoiceRepository target =
            new InvoiceRepository(dataAccess);
        Invoice actualInvoice = target.Load(invoiceId);
        Assert.AreEqual(expectedDate, actualInvoice.Date);
        Assert.AreEqual(invoiceId, actualInvoice.Id);
        Assert.AreEqual("Test", actualInvoice.Title);
        Assert.AreEqual(InvoiceStatus.Posted, actualInvoice.Status);
        Assert.AreEqual(1, actualInvoice.LineItems.Count());
        InvoiceLineItem actualLineItem = actualInvoice.LineItems.First();
        Assert.AreEqual("Description", actualLineItem.Description);
        Assert.AreEqual(1F, actualLineItem.Discount);
        Assert.AreEqual(2F, actualLineItem.Price);
        Assert.AreEqual(3F, actualLineItem.Quantity);
    }
}

In the LoadTest method, we create a DataAccess object, passing the constructor a dummy connection string. We then ask TypeMock isolator to return the result of CreateInvoiceDataSet() when DataAccess.LoadInvoice() is called, and to return the result of CreateLineItemDataSet() when DataAccess.LoadInvoiceLineItems() is called. CreateInvoiceDataSet() mimics the result of accessing the database by returning a canned DataSet containing invoice data.

CreateLineItemDataSet() does something similar by returning a canned DataSet containing related invoice line item data. We complete the unit test by performing the same asserts we did in a previous example.

TypeMock isolator is useful for situations where you don't have control of the source code for the components you want to mock.


Unit testing in Visual Studio®

Unit testing is supported directly in Visual Studio® 2010 editions Professional and above. Visual Studio® supports automatically creating unit tests based on your source code: you can right-click a method or a class and select Create Unit Tests…. For example, if we right-clicked on our CreateInvoicePresenter class and chose Create Unit Tests…, we'd be presented with the following form:

(Screenshot: the Create Unit Tests dialog in Visual Studio 2010)

The default action for a class is to select all methods within the class to be tested. If we accept the defaults for this class and press OK, code similar to the following will be generated:

/// <summary>
/// This is a test class for CreateInvoicePresenterTest and is
/// intended to contain all CreateInvoicePresenterTest Unit
/// Tests
/// </summary>
[TestClass()]
public class CreateInvoicePresenterTest
{
    /// <summary>
    /// A test for CreateInvoicePresenter Constructor
    /// </summary>
    [TestMethod()]
    public void CreateInvoicePresenterConstructorTest()
    {
        ICreateInvoiceView view = null; // TODO: Initialize to
                                        // an appropriate value
        CreateInvoicePresenter target = new CreateInvoicePresenter(view);
        Assert.Inconclusive("TODO: Implement code to verify target");
    }

    /// <summary>
    /// A test for GetInvoice
    /// </summary>
    [TestMethod()]
    public void GetInvoiceTest()
    {
        ICreateInvoiceView view = null; // TODO: Initialize to
                                        // an appropriate value
        CreateInvoicePresenter target =
            new CreateInvoicePresenter(view); // TODO: Initialize to
                                              // an appropriate value
        Invoice expected = null; // TODO: Initialize to an
                                 // appropriate value
        Invoice actual;
        actual = target.GetInvoice();
        Assert.AreEqual(expected, actual);
        Assert.Inconclusive(
            "Verify the correctness of this test method.");
    }

    /// <summary>
    /// A test for Start
    /// </summary>
    [TestMethod()]
    public void StartTest()
    {
        ICreateInvoiceView view = null; // TODO: Initialize to
                                        // an appropriate value
        CreateInvoicePresenter target =
            new CreateInvoicePresenter(view); // TODO: Initialize to
                                              // an appropriate value
        target.Start();
        Assert.Inconclusive("A method that does not return a value cannot be verified.");
    }
}

The unit test creator in Visual Studio® analyzes the signature of each method and automatically generates unit test code that performs a test with dummy data. You simply fill in the blanks with the data you actually want to test with and remove the Assert.Inconclusive call. Our first unit test is a modified version of the generated code above.

Test-driven development

As you start writing new code, you don't want to fall into the same trap as the legacy code you have to deal with. You want this new code to have tests in order to verify changes to it—either as the next step in your refactoring or in some future refactoring. With Test-Driven Development (TDD) you actually make sure you write the tests before you write the code. Let's have another look at our Model View Presenter (MVP) refactoring.

When we separate business logic from the GUI (Graphical User Interface), we can write tests that verify that business logic independently of the user interface, tests that are much easier to automate. Before we refactored CreateInvoiceForm, we knew we needed the presenter to create an invoice based on information in the view, and to be constructed with a reference to a specific view. In this case, we want two tests: one to verify that the presenter is created correctly with the correct view reference, and one to verify that the presenter uses the correct data from the view to create an Invoice object. In TDD, we create these tests before we create the classes to support them. For example:

/// <summary>
///A test for CreateInvoicePresenter Constructor
///</summary>
[TestMethod()]
public void CreateInvoicePresenterConstructorTest()
{
    ICreateInvoiceView view = new DummyCreateInvoiceView();
    CreateInvoicePresenter target = new CreateInvoicePresenter(view);
    Assert.AreSame(view, target.View,
        "CreateInvoicePresenter.View had unexpected value after construction.");
}

/// <summary>
///A test for GetInvoice
///</summary>
[TestMethod()]
public void GetInvoiceTest()
{
    string expectedTitle = "Test";
    DateTime expectedDate = DateTime.Now;
    List<InvoiceLineItemDTO> invoiceLineItemDTOs = new List<InvoiceLineItemDTO>()
    {
        new InvoiceLineItemDTO()
        {
            Quantity = 1,
            Description = "description",
            Discount = 0F,
            Price = 42F
        }
    };
    ICreateInvoiceView view = new MockCreateInvoiceView()
    {
        Title = expectedTitle,
        Date = expectedDate,
        InvoiceLineItemDTOs = invoiceLineItemDTOs
    };
    CreateInvoicePresenter target = new CreateInvoicePresenter(view);
    Invoice actual = target.GetInvoice();
    Assert.AreEqual(expectedTitle, actual.Title);
    Assert.AreEqual(expectedDate, actual.Date);
    InvoiceLineItem actualInvoiceLineItem = actual.LineItems.First();
    Assert.AreEqual(invoiceLineItemDTOs[0].Description, actualInvoiceLineItem.Description);
    Assert.AreEqual(invoiceLineItemDTOs[0].Discount, actualInvoiceLineItem.Discount);
    Assert.AreEqual(invoiceLineItemDTOs[0].Price, actualInvoiceLineItem.Price);
    Assert.AreEqual(invoiceLineItemDTOs[0].Quantity, actualInvoiceLineItem.Quantity);
}

In CreateInvoicePresenterConstructorTest, we create a Dummy Test Double to pass to the CreateInvoicePresenter constructor; the dummy is only used to populate the CreateInvoicePresenter.View property. The test verifies that the constructor works correctly by using Assert.AreSame() to verify that the View property contains the same reference that was passed into the CreateInvoicePresenter constructor.

In the GetInvoiceTest method, we create all of the inputs to the test (expectedTitle, expectedDate, and invoiceLineItemDTOs). We then use those inputs to create a Mock Test Double of the view when creating the presenter object. The call to CreateInvoicePresenter.GetInvoice() then uses the attributes of our Mock object to create an Invoice object. We then verify that what GetInvoice returns is correct (and thus that GetInvoice functions properly) through multiple calls to Assert.AreEqual, comparing what we expect to the attributes of the Invoice object.

Had our Invoice object overridden the Equals method, we could have possibly improved this test as follows:

public void GetInvoiceTest()
{
    List<InvoiceLineItemDTO> invoiceLineItemDTOs = new List<InvoiceLineItemDTO>()
    {
        new InvoiceLineItemDTO()
        {
            Quantity = 1,
            Description = "description",
            Discount = 0F,
            Price = 42F
        }
    };
    Invoice expectedInvoice = new Invoice("Title",
        new List<InvoiceLineItem>()
        {
            invoiceLineItemDTOs.First().ToInvoiceLineItem()
        },
        DateTime.Now);
    ICreateInvoiceView view = new MockCreateInvoiceView()
    {
        Title = expectedInvoice.Title,
        Date = expectedInvoice.Date,
        InvoiceLineItemDTOs = invoiceLineItemDTOs
    };
    CreateInvoicePresenter target = new CreateInvoicePresenter(view);
    Invoice actual = target.GetInvoice();
    Assert.AreEqual(expectedInvoice, actual,
        "Actual invoice returned from GetInvoice was not as expected.");
}

This simplifies our verification process to a single call to Assert.AreEqual, but we lose granularity in the detail of which part of the Invoice doesn't match. We also employ InvoiceLineItemDTO.ToInvoiceLineItem, making the test dependent on that method working, and effectively dependent on the tests that verify ToInvoiceLineItem being run and passing before this test. I'll leave it as an exercise for the reader to decide which benefits and drawbacks are important.
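Had we gone that route, a minimal sketch of such an override (assuming Invoice exposes the Title, Date, and LineItems members used in the tests above, and that InvoiceLineItem itself implements value equality) might look like this:

```csharp
// Sketch: members added to the Invoice class (not the book's listing).
public override bool Equals(object obj)
{
    Invoice other = obj as Invoice;
    if (other == null) return false;
    // Value equality: title, date, and every line item must match.
    // SequenceEqual (System.Linq) relies on InvoiceLineItem
    // implementing Equals as well.
    return Title == other.Title
        && Date == other.Date
        && LineItems.SequenceEqual(other.LineItems);
}

// An Equals override should always be paired with a consistent GetHashCode.
public override int GetHashCode()
{
    return Title.GetHashCode() ^ Date.GetHashCode();
}
```

Without such an override, Assert.AreEqual falls back to reference equality for Invoice, which is why the original test compared individual attributes instead.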

Writing these tests before the classes exist, of course, results in compile errors if we try to build our test project. We then write just enough of the classes and code to get a clean compile. But when we run the two tests, they both fail because the code hasn't been written yet. The following is just enough code to compile and fail the tests:

class DummyCreateInvoiceView : ICreateInvoiceView
{
    public ICollection<InvoiceLineItemDTO> InvoiceLineItemDTOs
    {
        get { throw new NotImplementedException(); }
    }
    public DateTime Date
    {
        get { throw new NotImplementedException(); }
        set { throw new NotImplementedException(); }
    }
    public string Title
    {
        get { throw new NotImplementedException(); }
        set { throw new NotImplementedException(); }
    }
}

class CreateInvoicePresenter
{
    public CreateInvoicePresenter(ICreateInvoiceView view)
    {
    }
    public ICreateInvoiceView View { get; private set; }
    public void Start()
    {
        // TODO: wire-up other data, subscribe to events, etc.
    }
    public Invoice GetInvoice()
    {
        return null;
    }
}

class MockCreateInvoiceView : ICreateInvoiceView
{
    public ICollection<InvoiceLineItemDTO> InvoiceLineItemDTOs { get; set; }
    public DateTime Date { get; set; }
    public string Title { get; set; }
}

Running the tests in the Visual Studio® 2010 test runner shows that the tests fail:

[Test Results window showing the failing tests]

Failing tests first? This may seem counterproductive, but it has one striking benefit: it verifies that the tests don't pass unexpectedly, that is, that defaults alone don't cause a test to pass and that actual written code is required for it to pass.

Unit testing frameworks

Visual Studio® 2010 Professional Edition and above include unit testing. Visual Studio® 2010 Premium includes additional testing tools, like Code Coverage, and Visual Studio® 2010 Ultimate Edition includes extra features for managing the testing effort, like Test Case Management. Third-party unit testing frameworks are available if you're using Visual Studio® 2010 Express or Standard. We'll look at a common subset of these unit testing frameworks now.

NUnit

NUnit was one of the first unit testing frameworks available for .NET, and it's the de facto standard for automated testing on .NET. The original author of NUnit now works for Microsoft. Even if you end up using the built-in Visual Studio® unit testing, you may encounter unit tests written for NUnit, so it's worthwhile understanding how NUnit works.

To write unit tests for NUnit, simply add a reference to nunit.framework.dll after installation.

NUnit differs from Visual Studio® unit testing firstly in the names of the attributes that declare a test class and a test method: NUnit uses TestFixtureAttribute instead of TestClassAttribute, and TestAttribute instead of TestMethodAttribute. Many of the Assert methods are the same. The following is our test modified to work with NUnit:

[TestFixture]
public class InvoiceRepositoryTest
{
    [Test]
    public void LoadTest()
    {
        DateTime expectedDate = DateTime.Now;
        IDataAccess dataAccess =
            new InvoiceRepositoryDataAccessStub(expectedDate);
        InvoiceRepository target = new InvoiceRepository(dataAccess);
        Guid invoiceId = Guid.NewGuid();
        Invoice actualInvoice = target.Load(invoiceId);
        Assert.AreEqual(expectedDate, actualInvoice.Date);
        Assert.AreEqual(invoiceId, actualInvoice.Id);
        Assert.AreEqual("Test", actualInvoice.Title);
        Assert.AreEqual(InvoiceStatus.New, actualInvoice.Status);
        Assert.AreEqual(1, actualInvoice.LineItems.Count());
        InvoiceLineItem actualLineItem = actualInvoice.LineItems.First();
        Assert.AreEqual("Description", actualLineItem.Description);
        Assert.AreEqual(1F, actualLineItem.Discount);
        Assert.AreEqual(2F, actualLineItem.Price);
        Assert.AreEqual(3F, actualLineItem.Quantity);
    }
}

xUnit.net

xUnit.net is an open source unit-testing framework originating out of Microsoft. Unfortunately, at the time of this writing, xUnit.net didn't support .NET 4.0. So, to use xUnit.net you have to change the target framework to something other than .NET 4.0 for your test project and any project that your test project references (usually every other project).

To write tests for xUnit.net, simply add a reference to xunit.dll after installation.

xUnit.net's semantics are a little different from those of other testing frameworks. The first thing to notice is that you don't attribute the test class at all; xUnit.net finds and executes tests based solely on the method attribute, which in xUnit.net is FactAttribute. xUnit.net also doesn't use the same method names on its Assert class. Our test modified to work with xUnit.net looks like the following:

public class InvoiceRepositoryTest
{
    [Fact]
    public void LoadTest()
    {
        DateTime expectedDate = DateTime.Now;
        IDataAccess dataAccess =
            new InvoiceRepositoryDataAccessStub(expectedDate);
        InvoiceRepository target = new InvoiceRepository(dataAccess);
        Guid invoiceId = Guid.NewGuid();
        Invoice actualInvoice = target.Load(invoiceId);
        Assert.Equal(expectedDate, actualInvoice.Date);
        Assert.Equal(invoiceId, actualInvoice.Id);
        Assert.Equal("Test", actualInvoice.Title);
        Assert.Equal(InvoiceStatus.New, actualInvoice.Status);
        Assert.Equal(1, actualInvoice.LineItems.Count());
        InvoiceLineItem actualLineItem = actualInvoice.LineItems.First();
        Assert.Equal("Description", actualLineItem.Description);
        Assert.Equal(1F, actualLineItem.Discount);
        Assert.Equal(2F, actualLineItem.Price);
        Assert.Equal(3F, actualLineItem.Quantity);
    }
}

Automating unit-testing

So far, we've detailed writing unit tests and executing them through some sort of unit testing framework. But executing the tests has been a manual process: pressing Run to execute the tests. This assumes a developer is modifying code and running the tests to verify their changes. In an ideal world this would be enough to verify our software system, but various aspects of software development mean it isn't. With team software development, for example, someone else on the team could make a modification that causes a test to fail, and that failure may not be visible to them while they're developing. Even if both team members execute all the tests and they all pass, the act of both of them committing their changes to source control could introduce a failure that neither would notice. This failure could affect other team members, decreasing their productivity or reducing quality.

Automated unit-testing is when the unit tests are run automatically. They may simply be run periodically, they may be run in response to a particular action (like committing changes), or it may be a combination of both.

Some source control systems recognize that code being committed may pass unit tests locally but not once committed. These types of source control systems have check-in policies that perform certain checks before fully committing the revision. In terms of unit testing, there's a policy that can be enabled and configured to automatically execute the unit tests (or a subset thereof) and disallow committing the change set unless all the unit tests pass. This is the most proactive way of ensuring unit tests are periodically executed, so that changes are least likely to affect other members of the team.

As an alternative to using the revision control system to proactively deny commits that don't pass all unit tests, unit tests can be periodically executed automatically, informing team members of the status. This can simply be scheduled as part of the build environment, like unit tests being executed automatically every day at 2 a.m. It can also be done in response to check-ins: unit tests are executed on a specific server whenever a check-in is performed, and any errors are sent to team members so they know as soon as possible that there's a problem.
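As a sketch, the nightly 2 a.m. run described above could be registered on a build server with the Windows Task Scheduler and the NUnit console runner; the installation path and the MyApp.Tests.dll assembly name are assumptions for illustration:

```shell
rem Register a daily 2 a.m. task that runs the unit tests on the build server.
schtasks /create /tn "Nightly Unit Tests" /sc daily /st 02:00 ^
  /tr "\"C:\Program Files\NUnit\bin\nunit-console.exe\" C:\builds\MyApp.Tests.dll"
```

The console runner's exit code and output can then be picked up by whatever mechanism notifies the team of failures.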

Continuous Integration

There's a category of tools that implement a practice called Continuous Integration. Continuous Integration promotes continuously integrating the work of development team members into the software system under development. Continuous Integration tools essentially automate the build process. This sounds very simple, but a full build process has a lot of details: integration with a source code control system, extraction of the latest source code from that system, compilation of the source code, execution of unit tests, compilation of installs, and communication of build status to the team. The process can also include many other details, like automatic deployment to a test environment. Continuous Integration tools facilitate these tasks. Several Continuous Integration tools are available, from open source to commercial. Visual Studio® Team System (VSTS) includes Team Foundation Server (TFS), which facilitates Continuous Integration. CruiseControl.NET is an open source Continuous Integration tool, and one of the more popular commercial tools (other than TFS) is TeamCity.

Continuous Integration frees the development team from a manual build process that someone has to be trained on and perform. Continuous Integration usually includes the process of running the unit tests, so there's an up-to-date status on the quality of the source code. The Continuous Integration tool can be configured to periodically start the build process, and can often be configured to automatically detect change set commits to the source code control system and automatically (and thus continuously) execute the build process.
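As a sketch, a minimal CruiseControl.NET project configuration covering the steps described above (source control polling, compilation, and unit test execution) might look like the following; the project name, repository URL, and paths are assumptions for illustration:

```xml
<cruisecontrol>
  <project name="InvoiceSystem">
    <!-- Poll Subversion for new change sets -->
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/invoice/trunk</trunkUrl>
      <workingDirectory>C:\builds\InvoiceSystem</workingDirectory>
    </sourcecontrol>
    <triggers>
      <!-- Check for commits every 60 seconds -->
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <!-- Compile the solution -->
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
        <projectFile>InvoiceSystem.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
      <!-- Run the unit tests; the build fails if any test fails -->
      <nunit>
        <path>C:\Program Files\NUnit\bin\nunit-console.exe</path>
        <assemblies>
          <assembly>C:\builds\InvoiceSystem\InvoiceSystem.Tests.dll</assembly>
        </assemblies>
      </nunit>
    </tasks>
  </project>
</cruisecontrol>
```

With a configuration along these lines, every detected commit triggers a build and test run, and the team is notified of the status.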

Continuous Integration means that the work of development team members is integrated as quickly as possible. It's important that problems with someone's work are found as close as possible to when they did that work: the longer it takes to detect a problem, the more work is required to fix it, and the more work involved in fixing a problem, the more likely quality will be affected. These tools mean problems are detected as close as possible to when they're created.

Continuous Integration leads to more rapid development of software, finding failures and integration problems as quickly as possible, and reducing the work involved in resolving integration and quality issues.

Continuous Integration relies heavily on a source code control system.

Third party refactoring tools

There are third-party refactoring add-ins for Visual Studio® that expand the six built-in refactorings into dozens of refactorings. A couple of those add-ins are detailed below.

ReSharper

JetBrains has a product called ReSharper. ReSharper (version 5) includes 40 refactorings. ReSharper includes many productivity utilities to aid in the development of software, especially with regard to Test-Driven Development. ReSharper also includes a unit test runner that works with Visual Studio® 2010 Standard, which doesn't include a built-in unit test runner.

More information about ReSharper can be found at http://www.jetbrains.com/resharper/index.html

Refactor! Pro

DevExpress has a product called Refactor! Pro. Refactor! Pro has over 150 refactorings. As with ReSharper, Refactor! Pro includes many productivity features; when coupled with DevExpress's CodeRush, the productivity gains are even greater.

I highly recommend using one of these two productivity tools. They offer features and extensions to Visual Studio® that better match the way people code, making coding that much more productive. After getting to know either one of these tools, you'll find developing software in just Visual Studio® much less productive.

Summary

In this article, we discussed that some level of automated unit testing is required to maintain the quality of the software and support the refactoring effort. We detailed some testing frameworks that facilitate writing and executing those tests, as well as frameworks that help write unit tests faster, like mocking frameworks. We also detailed how we might promote unit testing for new code through practices like test-driven development.


About the Author


Peter Ritchie

Peter Ritchie is a software development consultant. Peter is president of Peter Ritchie Inc. Software Consulting Co., a software consulting company in Canada's National Capital Region specializing in Windows-based software development management, process, and implementation consulting. Peter has worked with such clients as Mitel, Nortel, Passport Canada, and Innvapost from mentoring to architecture to implementation. Peter has considerable experience building software development teams and working with startups towards agile software development.

Peter's experience ranges from designing and implementing simple standalone applications to architecting distributed n-tier applications spanning dozens of computers, from C++ to C#. Peter is active in the software development community, attending and speaking at various events as well as authoring various works including Refactoring with Microsoft Visual Studio 2010.
