Practical Test-Driven Development using C# 7

By John Callaway , Clayton Hunt

About this book

Test-Driven Development (TDD) is a methodology that helps you to write as little code as possible to satisfy software requirements, and ensures that what you've written does what it's supposed to do. If you're looking for a practical resource on Test-Driven Development, you've found it: a practical, end-to-end guide that will help you implement test-driven techniques in your software development projects.

You will learn from industry standard patterns and practices, and shift from a conventional approach to a modern and efficient software testing approach in C# and JavaScript. This book starts with the basics of TDD and the components of a simple unit test. Then we look at setting up the testing framework so that you can easily run your tests in your development environment. You will then see the importance of defining and testing boundaries, abstracting away third-party code (including the .NET Framework), and working with different types of test double such as spies, mocks, and fakes.

Moving on, you will learn how to think like a TDD developer when it comes to application development. Next, you'll focus on writing tests for new/changing requirements and covering newly discovered bugs, along with how to test JavaScript applications and perform integration testing. You’ll also learn how to identify code that is inherently un-testable, and identify some of the major problems with legacy applications that weren’t written with testability in mind.

By the end of the book, you’ll have all the TDD skills you'll need and you’ll be able to re-enter the world as a TDD expert!

 

Publication date:
February 2018
Publisher
Packt
Pages
403
ISBN
9781788398787

 

Chapter 1. Why TDD is Important

You've picked up this book because you want to learn more about Test-Driven Development (TDD). Maybe you've heard the term before. Perhaps you've known software developers who write unit tests and want to learn more. We'll introduce you to the terms, the structure, and the ideology around TDD. By the end of this book, you'll have sufficient knowledge to re-enter the world as a Test-Driven Developer and feel confident about using your skills throughout your long and prosperous career.

Why this book? Certainly, there are many other books on the topic of TDD. We have written this book with the hope that it provides you, the reader, with low-level insight into the mindset we use when doing TDD. We also hope that this book provides an updated view of some of the concepts and lessons we have learned while doing TDD over the last 10 years. 

So, why is TDD so important? As more businesses and industries rely on software solutions, it's increasingly important that those solutions be robust and error-free. The cheaper and more consistent they are, the better. Applications developed with TDD in mind are inherently more testable, easier to maintain, and demonstrate a certain level of correctness not easily achieved otherwise.

In this chapter, we will gain an understanding of:

  • Defining TDD and exploring the basics
  • Creating our first tests in C# and JavaScript
  • Exploring the basic steps of Red, Green, Refactor
  • Growing complexity through tests
 

First, a little background


It's possible that you've had some exposure to unit tests in your career. It's highly likely that you've written a test or two. Many developers, unfortunately, haven't had the opportunity to experience the joys of Test-Driven Development.

John's story on TDD

I was first introduced to TDD about five years ago. I was interviewing for a lead developer position for a small startup. During the interview process, the CTO mentioned that the development team was practicing TDD. I informed him that I didn't have any practical TDD experience, but that I was sure I could adapt.

In all honesty, I was a bit nervous. Up to that point, I had never even written a single unit test! What had I gotten myself into? An offer was extended and I accepted. Once I joined the small company I was told that, while TDD was the goal, they weren't quite there yet. Phew; crisis averted. However, I was still intrigued. It wasn't until a few months later that the team delved into the world of TDD, and the rest, as they say, is history.

Clayton's story on TDD

My introduction to TDD is a little different from John's. I have been writing code since I was in middle school in the early 1990s. From then until 2010, I always struggled with writing applications that didn't require serious architectural changes when new requirements were introduced. In 2010, I finally got fed up with the constant rewrites and began researching tools and techniques to help me with my problem. I quickly found TekPub, an e-learning site that was, at the time, owned and operated by Rob Conery. Through TekPub I began learning the SOLID principles and TDD. After banging my head against the wall for close to six months, I started to grasp what TDD was and how I could use those principles. Coupled with the SOLID principles, TDD helped me to write easy-to-understand code that was flexible enough to stand up to any requirements the business could throw at me. I eventually ended up at the same company where John was employed and worked with him and, as he said, the rest is history.

Note

The SOLID principles, which will be explained in detail later, are guiding principles that help produce clean, maintainable, and flexible code. They help reduce rigidity, fragility, and complexity. Generally thought of as object-oriented principles, I have found them to be applicable in all coding paradigms.

 

So, what is TDD?


Searching online, you will certainly find that TDD is an acronym for Test-Driven Development. In fact, the title of this book will tell you that. We, however, use a slightly more meaningful definition. So, what is TDD? In the simplest terms, TDD is an approach to software development that is intended to reduce errors and enable flexibility within the application. If done correctly, TDD is a building block for rapid, accurate, and fearless application development. 

Test-Driven Development is a means of letting your tests drive the design of the system. What does that mean, exactly? It means that you mustn't start with a solution in mind; you must let your tests drive the code being written. This helps minimize needless complexity and avoid over-architected solutions.

The rules of Test-Driven Development

Staunch proponents of TDD dictate that you may not write a single line of production code without writing a failing unit test, and failing to compile is a failure. This means that you write a simple test, watch it fail, then write some code to make it pass. The system slowly evolves as the tests and the production application grow in functionality.

Note

TDD is not about testing, it's about design.

Many would argue that TDD is about testing, and by extension, about test coverage of an application. While these are great side-effects of TDD, they are not the driving force behind the practice.

Additionally, if code coverage and metrics become the goal, then there is a risk that developers will introduce meaningless tests just to inflate the numbers. Perhaps it is less a risk and more a guarantee that this will happen. Let delivered functionality and happy customers be the metrics with which you measure success.

TDD is about design. Through TDD, an application will grow in functionality without introducing needless complexity. It's incredibly difficult to introduce complexity if you write small tests and only enough production code to make the test pass. Refactoring (modifying the structure of the code without adding or changing behavior) should not introduce complexity, either.

 

An approach to TDD 


TDD is also referred to as Test First Development. In both names, the key aspect is that the test must be written before the application code. Robert C. Martin, affectionately called "Uncle Bob" by the developer community, has created The Three Laws of TDD. They are as follows: 

  1. You are not allowed to write any production code unless it is to make a failing unit test pass
  2. You are not allowed to write any more of a unit test than is sufficient to fail, and compilation failures are failures
  3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test

You can learn more about these laws at http://butunclebob.com/ArticleS.UncleBob.TheThreeRulesOfTdd 

By following these rules, you will ensure that you have a very tight feedback loop between your test code and your production code. One of the main components of Agile software development is working to reduce the feedback cycle. A small feedback cycle allows the project to make a course correction at the first sign of trouble. The same applies to the testing feedback cycle. The smaller you can make your tests, the better the end result will be. 

Note

For a video on Agile, check out Getting Started with Agile by Martin Esposito and Massimo Fascinari (https://www.packtpub.com/application-development/getting-started-agile-video).

An alternative approach 

The original approach to TDD has caused some confusion over the years. The problem is that the principles and approaches just weren't structured enough. In 2006, Dan North wrote an article in Better Software magazine (https://www.stickyminds.com/better-software-magazine/behavior-modification). The purpose of the article was to clear up some of this confusion and help to reduce the pitfalls that developers fell into while learning the TDD process. This new approach to TDD is called Behavior Driven Development (BDD). BDD provides a structure for testing, and a means of communicating between business requirements and unit tests, that is almost seamless. 

The process

It's difficult to start any journey without a goal in mind. There are a few tips and tricks that can be used to help get you started in TDD. The first is red, green, refactor.

Red, green, and refactor

We already discussed writing a failing test before writing production code. The goal is to build the system slowly through a series of tiny improvements. This is often referred to as red, green, refactor. We write a small test (red), then we make it pass by writing some production code (green), then we refactor our code (refactor) before we start the process again.

Many TDD practitioners advocate an It Exists test first. This will help determine that your environment is set up properly and that you won't receive false positives. If you write an It Exists test and don't receive a failure right off the bat, you know something is wrong. Once you receive your first failure, you're safe to create the class, method, or function under test. This also ensures that you don't dive in too deeply, with lines and lines of code, before you're sure your system is working properly.

Once you have your first failure and the first working example, it's time to grow the application, slowly. Choose the next most interesting step and write a failing test to cover this step.

At each iteration, you should pause and evaluate whether there is any cleanup that can happen. Can you simplify a code block? Perhaps a more descriptive variable name is in order? Can any sins committed in the code be corrected, safely, at this time? It's important that you evaluate both the production code and the test suite. Both should be clean, accurate, and maintainable. After all, if it's such a mess that no one would be able to make head or tail of it, what good is the code?

Coder's block

TDD will also help you avoid what writers often call writer's block and what we're calling coder's block. Coder's block happens when you sit down at the keyboard in an attempt to solve a problem but don't know where to begin. We begin at the beginning. Write the easiest, simplest test you can imagine. Write It Exists.   

Why should we care?

We're professionals. We want to do a good job. We feel bad if someone finds fault with our code. If QA finds a bug, it makes us sad. If a user of our system encounters an error, we may cry. We should strive to deliver quality, error-free code and a fully functional, feature-rich application.

We're also lazy, but it's the good kind of lazy. We don't want to have to run the entire application just to validate that a simple function returns the proper value.

 

Arguments against TDD


There are arguments against TDD, some valid and some not. It's quite possible that you've heard some of them before, and likely that you've repeated some of these yourself. 

Testing takes time

Of course, testing takes time. Writing unit tests takes time. Adhering to the red, green, refactor cycle of TDD does take time. But, how else do you check your work if not through tests?

Do you validate that the code you wrote works? How do you do this without tests? Do you manually run the application? How long does that take? Are there conditional scenarios that you need to account for within the application? Do you have to set up those scenarios while manually testing the application? Do you skip some and just trust that they work?

What about regression testing? What if you make a change a day, a week, or a month later? Do you have to manually regression-test the entire application? What if someone else makes a change? Do you trust that they were as thorough in their testing as you are?

How much time would you save if your code were covered by a test suite that you could run at the click of a button?

Testing is expensive

By writing tests, you're effectively doubling the amount of code you're writing, right? Well, yes and no. Okay, in an extreme case, you might approach double the code. Again, in an extreme case.

Note

Don't make tests a line item.

In some instances, consulting companies have written unit tests into a contract with a line item and dollar amount attached. Inevitably, this allows the customer the chance to argue to have this line item removed, thus saving them money. This is absolutely the wrong approach. Testing will be done, period, whether manually by the developer running the application to validate her work, by a QA tester, or by an automated suite of tests. Testing is not a line item that can be negotiated or removed (yikes!).

You would never buy an automobile that didn't pass quality control. Light bulbs must pass inspection. A client, customer, or company will never, ever, save money by foregoing testing. The question becomes: do you write the tests early, while the code is being authored, or test manually at a later date?

Testing is difficult

Testing can be difficult. This is especially true with an application that was not written with testability in mind. If you have static methods and implementations using concrete references scattered throughout your code, you will have difficulty adding tests at a later date.

We don't know how

"I don't know how to test" is really the only acceptable answer, assuming it is quickly followed by, "but I'm willing to learn." We're developers. We're the experts in the room. We're paid to know the answers. It's scary to admit that we don't know something. It's even scarier to start something new. Rest assured, it will be OK. Once you get the hang of TDD, you'll wonder how you managed before. You'll refer to those times as the dark ages, before the discovery of the wheel.

 

Arguments in favor of TDD


What we would like to focus on here are the positives, the arguments in favor of TDD.

Reduces the effort of manual testing

We already mentioned that we, as professionals, will not ship anything without first determining that it works. Throwing something over the wall to QA, to our users, or to the general public and hoping that it all works as expected just isn't how we do business. We will verify that our code and our applications work as expected. In the beginning, while the application is small and has little functionality, we can manually test everything we can think of. But, as the application grows in size and complexity, it just isn't feasible for developers or anyone else to manually test an entire application. It’s too time-consuming and costly to do this manually. We can save ourselves time and our clients and companies money by automating our testing. We can do so quite easily, from the beginning, through TDD.

Reduces bug count

As our application grows, so do our tests. Or shall we say, our test suite has grown, and by making our tests pass, our application has grown. As both have grown, we've covered the happy path (for example: 2 + 2 = 4) as well as potential failures (for example: 2 + banana = exception). If the method or function under test can accept an input parameter, there is a potential for failure. You can reduce the potential for unexpected behavior, bugs, and exceptions by writing code to guard against these scenarios. As you write tests to express potential failures, your production code will inherently become more robust and less prone to errors. If a bug does slip by and make it to QA, or even to a production environment, then it's easy enough to add a new test to cover the newly discovered defect.

The added benefit of approaching bugs in this fashion is that the same bug rarely crops up again at some later date, as the new tests guard against this. If the same bug does appear, you know that, while the same result has happened, the bug occurred in a new and different way. With the addition of another test to cover this new scenario, this will likely be the last time you see the same old bug.

Ensures some level of correctness

With a comprehensive suite of tests, you can demonstrate some level of correctness. At some point, someone somewhere will ask you whether you are done. How will you show that you have added the desired functionality to an application?

Removes the fear of refactoring

Let's face it, we've all worked on legacy applications that we were scared to touch. Imagine if the class you were tasked with modifying were covered by a comprehensive set of unit tests. Picture how easy it would be to make a change and know that all was right with the world because all of the unit tests still passed.

A better architecture 

Writing unit tests tends to push your code towards a decoupled design. Tightly coupled code quickly becomes burdensome to test, and so, to make one's life easier, a Test-Driven Developer will begin to decouple the code. Decoupled code is easier to swap in and out, which means that, instead of modifying a tangled knot of production code, often all that a developer needs to do to make the necessary changes is swap out a subcomponent with a new module of code. 

Faster development 

It may not feel like it at first (in fact, it definitely will not feel like it at first), but writing unit tests is an excellent way to speed up development. Traditionally, a developer receives requirements from the business, sits down, and begins shooting lightning from her fingertips, allowing the code to pour out until an executable application has been written. Before TDD, a developer would write code for a few minutes and then launch the application so that she could see if the code worked or not. When a mistake was found, the developer would fix it and launch the application once again to check whether the fix worked. Often, a developer would find that her fix had broken something else and would then have to chase down what she had broken and write another fix. The process described is likely one that you and every other developer in the world are familiar with. Imagine how much time you have lost fixing bugs that you found while doing developer testing. This does not even include the bugs found by QA or in production by the customer. 

Now, let's picture another scenario. After learning TDD, when we receive requirements from the business, we quickly convert those requirements directly into tests. As each test passes we know that, as per the requirements, our code does exactly what has been asked of it. We might discover some edge cases along the way and create tests to ensure the code has the correct behavior for each one. It would be rare to discover that a test is failing after having made it pass. But, when we do cause a test to fail, we can quickly fix it by using the undo command in our editor. This means we hardly need to run the application until we are ready to submit our changes to QA and the business. Still, we try to verify that the application behaves as required before submitting, but we no longer do this manually every few minutes. Instead, our unit tests verify our code each time we save a file.

 

Different types of test


Over the course of this book, we will be leaning towards a particular style of testing, but it is important to understand the terminology that others will use so that you can relate when they speak about a certain type of test. 

Unit tests 

Let's jump right in with the most misused and least understood test type. In Kent Beck's book, Test-Driven Development by Example, he defines a unit test simply as a test that runs in isolation from the other tests. All that means is that, for a test to be a unit test, it must not be affected by the side-effects of the other tests. Some common misconceptions are that a unit test must not hit the database, or that it must not use code outside the method or function being tested. These simply aren't true. We tend to draw the line in our testing at third-party interactions. Any time that your tests will be accessing code that is outside the application you are writing, you should abstract that interaction. We do this for maximum flexibility in the design of the test, not because it wouldn't be a unit test. It is the opinion of some that unit tests are the only tests that should ever be written. This is based on the original definition, and not on the common usage of the term.

Acceptance tests 

Tests that are directly affected by business requirements, such as those suggested in BDD, are generally referred to as acceptance tests. These tests are at the outermost limit of the application and exercise a large swathe of your code. To reduce the coupling of tests and production code, you could write this style of test almost exclusively. Our opinion is, if a result cannot be observed outside the application, then it is not valuable as a test. 

Integration tests 

Integration tests are those that integrate with an external system. For instance, a test that interacts with a database would be considered an integration test. The external system doesn't have to be a third-party product; however, sometimes, the external system is just an imported library that was developed independently from the application you are working on but is still considered in-house software. Another example that most don't consider is interactions with the system or language framework. You could consider any test that uses the functions of C#'s DateTime object to be an integration test. 

End to end tests 

These tests validate the entire configuration and usage of your application. Starting from the user interface, an end to end test will programmatically click a button or fill out a form. The UI will call into the business logic of the application, executing all the way down to the data source for the application. These tests serve the purpose of ensuring that all external systems are configured and operating correctly. 

Quantity of each test type 

Many developers ask the question: How many of each type of test should be used? Every test should be a unit test, as per Kent Beck's definition. We will cover variations on testing later that will have some impact on specific quantities of each type; but, generally, you might expect an application to have very few end to end tests, slightly more integration tests, and to consist mostly of acceptance tests. 

Parts of a unit test

The simplest way to get started and ensure that you have human-readable code is to structure your tests using Arrange, Act, and Assert.

Arrange

Also known as the context of a unit test, Arrange includes anything that exists as a prerequisite of the test. This includes everything from parameter values, stored in variables to improve readability, all the way to configuring values in a mock database to be injected into your application when the test is run.

Note

For more information on Mocking, see Chapter 3, Setting Up the JavaScript Environment, the Abstract Third Party Software and Test Double Types sections.

Act

An action, as part of a unit test, is simply the piece of production code that is being tested. Usually, this is a single method or function in your code. Each test should have only a single action. Having more than one action will lead to messier tests and less certainty about where the code should change to make the test pass.

Assert

The result, or assertion (the expected result), is exactly what it sounds like. If you expect that the method being tested will return a 3, then you write an assertion that validates that expectation. The Single Assert Rule states that there should be only one assertion made per test. This does not mean that you can only assert once; instead, it means that your assertions should only confirm one logical expectation. As a quick example, you might have a method that returns a list of items after applying a filter. After setting up the test context, calling the method will result in a list of only one item, and that item will match the filter that we have defined. In this case, you will have a programmatic assert for the count of items in the list and one programmatic assert for the filter criterion we are testing. 

Requirements 

While this book is not about business analysis or requirement generation, requirements will have a huge impact on your ability to effectively test-drive an application. We will be providing requirements for this book in a format that lends itself very well to high-quality tests. We will also cover some scenarios where the requirements are less than optimal, but for most of this book the requirements have been labored over to ensure a high-quality definition of the systems we are testing. 

Why are they important? 

We firmly believe that quality requirements are essential to a well-developed solution. The requirements inform the tests and the tests shape the code. This axiom means that poor requirements lead to a lower-quality architecture and overall design. With haphazard requirements, the resulting tests and application will be chaotic and poorly factored. On the bright side, even poorly thought out or written requirements aren't the death knell for your code. It is our responsibility, as professional software developers, to correct bad requirements. It is our task to ask questions that will lead to better requirements.

User stories 

User stories are commonly used in Agile software development for requirement definitions. The format for a user story is fairly simple and consists of three parts: Role, Request, and Reason.  

As a <Role> 
I want <Request> 
So that <Reason> 
Role 

The role of the user story can provide a lot of information. When specifying the role, we have the ability to imply the capabilities of the user. Can the user access certain functionalities, or are they physically impaired in such a way that requires an alternate form of interaction with the system? We can also communicate the user's mindset. Having a new user could have an impact on the design of the user interface, in contrast to what an experienced user might expect. The role can be a generic user, a specific role, a persona, or a specific user. 

Generic users are probably the most used and, at the same time, the least useful. A story that provides no insight into the user gives us no context to guide our decisions for that story. If possible, ask your business analyst or product owner for a more specific definition of who the requirement is for.

Defining a specific role, such as Admin, User, or Guest, can be very helpful. Specific roles provide user capability information. With a specific role, we can determine if a user should even be allowed into the section of the application we are defining functionality for. It is possible that a user story will cause the modification of a user's rights within the system, simply because we specified a role instead of a generic user. 

Using a persona is the most telling of the wide-reaching role types. A persona is a full definition of an imaginary user. It includes a name, any important physical attributes, preferences, familiarity with the subject of the application, familiarity with computers, and anything else that might have an impact on the imaginary user's interactions with the software. By having all this information, we can start to roleplay the user's actions within the system. We can start to make assumptions or decisions about how that user would approach or feel about a suggested feature and we can design the user interface with that user in mind. 

Request 

The request portion of the user story is fairly simple. We should have a single feature or a small addition to functionality that is being requested. Generally, the request is too large if it includes any joining words, such as and or or. 

Reason 

The reason is where the business need is stated. This is the opportunity to explain how the feature will add value to the company. By connecting the reason to the role, we can enhance the impact of the feature's usefulness. 

A complete user story might look like the following:

As a Conference Speaker 
I want to search for nearby conferences by open submission date 
So that I may plan the submission of my talks 

Gherkin 

Gherkin is a style of requirements definitions that is often used for acceptance criteria. We can turn these requirements directly into code, and QA can turn them directly into test cases. The Gherkin format is generally associated with BDD, and it is used in Dan North's original article on the subject.  

The Gherkin format is just as simple as the user story format. It consists of three parts: Given, When, and Then. 

Given <Context> 
And Given <More Context> 
When <Action> 
Then <Result> 
And Then <More Results> 
Givens 

Because the Gherkin format is fairly simple, givens are broken out to one per contextual criterion. As part of specifying the context, we want to see any and all preconditions of this scenario. Is the user logged in? Does the user have any special rights? Does this scenario require any settings to be put into force before execution? Has the user provided any input on this scenario? One more thing to consider is that there should only be a small number of givens.

The more givens that are present in a scenario, the more likely it is that the scenario is too big or that the givens can somehow be logically grouped to reduce the count. 

Note

When we start writing our tests, a Given is analogous to the Arrange section of a test.  

When 

The when is the action taken by the user. There should be one action and only one action. This action will depend on the context defined by the Given and output the result expected by the Then. In our applications, this is equivalent to a function or method call. 

Note

When we start writing our tests, a When is analogous to the Act section of a test.

Then 

Thens equate to the output of the action. Thens describe what can be verified and tested from the output of a method or function, not only by developers but also by QA.  Just like with the Givens, we want our Thens to be singular in their expectation. Also like Givens, if we find too many Thens, it is either a sign that this scenario is getting too big, or that we are over-specifying our expectations. 

Note

When we start writing our tests, a Then is analogous to the Assert section of a test.

Complete acceptance criteria based on the user story presented earlier might look like the following:

Given I am a conference speaker 
And Given a search radius of 25 miles 
And Given an open submission start date 
And Given an open submission end date 
When I search for conferences 
Then I receive only conferences within 25 miles of my location 
And Then I receive only conferences that are open for submission within the specified date range 

Just like in life, not everything in this book is going to be perfect. Do you see anything wrong with the preceding acceptance criteria? Go on and take a few minutes to examine them; we'll wait.

If you've given up, we'll tell you: the preceding acceptance criteria are just too long. There are too many Givens and too many Thens. How did this happen? How could we have made such a mistake? When we wrote the user story, we accidentally included too much information in the request. If you go back and look at the user story, you will see that we threw nearby into the request. Adding nearby seemed harmless; it even seemed more correct. I, as the user, wasn't interested in traveling too far for my speaking engagements.

When you start to see user stories or acceptance criteria getting out of hand like this, it is your responsibility to speak with the business analyst or product owner and work with them to reduce the scope of the requirements. In this case, we can extract two user stories and several acceptance criteria. 

Here is a full example of the requirements we have been examining:

As a conference speaker 
I want to search for nearby conferences 
So that I may plan the submission of my talks 
Given I am a conference speaker 
And Given a search radius of five miles 
When I search for conferences 
Then I receive only conferences within five miles of my location 
Given I am a conference speaker 
And Given a search radius of 10 miles 
When I search for conferences 
Then I receive only conferences within 10 miles of my location 
Given I am a conference speaker 
And Given a search radius of 25 miles 
When I search for conferences 
Then I receive only conferences within 25 miles of my location 

As a conference speaker 
I want to search for conferences by open submission date 
So that I may plan the submission of my talks 
Given I am a conference speaker 
And Given open submission start and end dates 
When I search for conferences 
Then I receive only conferences that are open for submission within the specified date range 
Given I am a conference speaker 
And Given an open submission start date 
And Given an empty open submission end date 
When I search for conferences 
Then an INVALID_DATE_RANGE error occurs for open submission date 
Given I am a conference speaker 
And Given an empty open submission start date 
And Given an open submission end date 
When I search for conferences 
Then an INVALID_DATE_RANGE error occurs for open submission date 

One thing that we have not discussed is the approach to the content of the user stories and acceptance criteria. It is our belief that requirements should be as agnostic about the user interface and the data storage mechanism as possible. For that reason, you'll notice that the requirement examples contain no reference to any kind of buttons, tables, modals/popups, clicking, or typing. For all we know, this application is running in a virtual reality headset with a natural user interface. Then again, it could be running as a RESTful web API, or maybe a phone application. The requirements should specify the system interactions, not the deployment environment. 

In software development, it is everyone's responsibility to ensure high-quality requirements. If you find the requirements you have received to be too large, vague, user interface-dependent, or just unhelpful, it is your responsibility to work with your business analyst or product owner to make the requirements better and ready for development and QA. 

Our first tests in C#

Have you ever created a new MVC project in Visual Studio? Have you noticed the checkbox towards the bottom of the dialog box? Have you ever selected Create Unit Test Project? The tests created with this unit test project are largely of little use. They do little more than validate that the default MVC controllers return the proper type. This is perhaps one step beyond ItExists. Let's look at the first set of tests created for us:

using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SampleApplication.Controllers;

namespace SampleApplication.Tests.Controllers
{
  [TestClass]
  public class HomeControllerTest
  {
    [TestMethod]
    public void Index()
    {
      // Arrange
      HomeController controller = new HomeController();

      // Act
      ViewResult result = controller.Index() as ViewResult;

      // Assert
      Assert.IsNotNull(result);
    }

    [TestMethod]
    public void About()
    {
      // Arrange
      HomeController controller = new HomeController();

      // Act
      ViewResult result = controller.About() as ViewResult;

      // Assert
      Assert.AreEqual("Your application…", result.ViewBag.Message);
    }

    [TestMethod]
    public void Contact()
    {
      // Arrange
      HomeController controller = new HomeController();

      // Act
      ViewResult result = controller.Contact() as ViewResult;

      // Assert
      Assert.IsNotNull(result);
    }
  }
}

Here, we can see the basics of a test class and the test cases contained within. Out of the box, Visual Studio ships with MSTest, which is what we see here. The test class must be decorated with the [TestClass] attribute. Individual tests must likewise be decorated with the [TestMethod] attribute. This allows the test runner to determine which tests to execute. We'll cover these attributes and more in future chapters. Other testing frameworks, which we'll also discuss later, use similar approaches.

For now, we can see that the HomeController is being tested. Each of the public methods has a single test; in the future, you may want to create additional tests and extract tests into separate files. Later, we'll cover options and best practices to help you arrange your files in a much more manageable fashion. All of this should be part of the refactor step in your red, green, refactor cycle.

Growing the application with tests

Perhaps you want to accept a parameter for one of your endpoints. Maybe you will take a visitor's name to display a friendly greeting. Let's take a look at how we might make that happen:

[TestMethod]
public void ItTakesOptionalName()
{
  // Arrange
  HomeController controller = new HomeController();

  // Act
  ViewResult result = controller.About("") as ViewResult;

  // Assert
  Assert.AreEqual("Your application description page.", result.ViewBag.Message);
}

We start by creating a test to allow for the About method to accept an optional string parameter. We're starting with the idea that the parameter is optional since we don't want to break any existing tests. Let's see the modified method:

public ActionResult About(string name = default(string))
{
  ViewBag.Message = "Your application description page.";
  return View();
}    

Now, let's use the name parameter and simply append it to our ViewBag.Message. Wait, not in the controller yet. We need a new test first:

[TestMethod]
public void ItReturnsNameInMessage()
{
  // Arrange
  HomeController controller = new HomeController();

  // Act
  ViewResult result = controller.About("Fred") as ViewResult;

  // Assert
  Assert.AreEqual("Your application description page.Fred", result.ViewBag.Message); 
}

And now we'll make this test pass:

public ActionResult About(string name = default(string))
{
  ViewBag.Message = $"Your application description page.{name}";
  return View();
}

Our first tests in JavaScript

To get the ball rolling in JavaScript, we are going to write a Simple Calculator class. Our calculator has only one requirement: to add or subtract a single pair of numbers. Much of the code you write in TDD will start very simply, just like this example:

import { expect } from 'chai'

class SimpleCalc {
  add(a, b) {
    return a + b;
  }

  subtract(a, b) {
    return a - b;
  }
}

describe('Simple Calculator', () => {
  "use strict";

  it('exists', () => {
    // arrange
    // act
    // assert
    expect(SimpleCalc).to.exist;
  });

  describe('add function', () => {
    it('exists', () => {
      // arrange
      let calc;

      // act
      calc = new SimpleCalc();

      // assert
      expect(calc.add).to.exist;
    });

    it('adds two numbers', () => {
      // arrange
      let calc = new SimpleCalc();

      // act
      let result = calc.add(1, 2);

      // assert
      expect(result).to.equal(3);
    });
  });

  describe('subtract function', () => {
    it('exists', () => {
      // arrange
      let calc;

      // act
      calc = new SimpleCalc();

      // assert
      expect(calc.subtract).to.exist;
    });

    it('subtracts two numbers', () => {
      // arrange
      let calc = new SimpleCalc();

      // act
      let result = calc.subtract(3, 2);

      // assert
      expect(result).to.equal(1);
    });
  });
});

If the preceding code doesn't make sense right now, don't worry; this is only intended to be a quick example of some working test code. The testing framework used here is Mocha, and the assertion library is Chai. In the JavaScript community, most testing frameworks are built with BDD in mind. Each describe in the preceding code sample represents a scenario or a higher-level requirements abstraction, whereas each it represents a specific test. Within the tests, the only required element is the expect, without which the test will not deliver a valuable result.

Continuing this example, say that we receive a requirement that the add and subtract methods must be allowed to chain. How would we tackle that requirement? There are many ways, but in this case, I think I would like to do a quick redesign and then add some new tests. First, we will do the redesign, again driven by tests.

By placing .only on a describe or an it, we can isolate that describe/test. In this case, we want to isolate our add tests and begin making our change there:

it.only('adds two numbers', () => {
  // arrange
  let calc = new SimpleCalc(1);

  // act
  let result = calc.add(2).result;

  // assert
  expect(result).to.equal(3);
});

Here, we have changed the test to use a constructor that takes a number. We have also reduced the add function to a single parameter. Lastly, we have added a result property that must be used to retrieve the outcome of the addition.

The test will fail because it does not use the same interface as the class, so now we must make a change to the class:

class SimpleCalc {
  constructor(value) {
    this._startingPoint = value || 0;
  }

  add(value) {
    return new SimpleCalc(this._startingPoint + value);
  }
  ... 
  get result() {
    return this._startingPoint;
  }
}

This change should cause our test to pass. Now, it's time to make a similar change for the subtract method. First, remove the only that was placed in the previous example:

it('subtracts two numbers', () => {
  // arrange
  let calc = new SimpleCalc(3);

  // act
  let result = calc.subtract(2).result;

  // assert
  expect(result).to.equal(1);
});

Now for the appropriate change in the class:

subtract(value) {
  return new SimpleCalc(this._startingPoint - value); 
}

Our tests now pass again. The next thing we should do is create a test that verifies everything works together. We will leave this test up to you as an exercise, should you want to attempt it.

 

Why does it matter?


So, why does all this matter? Why write more code than we have to? Because it's worth it. And to be honest, most of the time it isn't more code. As you take the time to grow your application with tests, simple solutions emerge, and simple solutions almost always require less code than the slick solution you might have come up with otherwise. Inevitably, slick solutions are error-prone, difficult to maintain, and often just plain wrong.

 

Summary


If you didn't before, you should now have a good idea of what TDD is and why it is important. You have been exposed to unit tests in C# and JavaScript, and you have seen how writing tests first can help grow an application.

As we continue, we'll learn more about TDD. We'll explore what it means to write testable code.

In Chapter 2, Setting Up the .NET Test Environment, we'll set up your development environment and explore additional aspects of a unit test.

 

 

About the Authors

  • John Callaway

    John Callaway, a Microsoft MVP, has been a professional developer since 1999. He has focused primarily on web technologies and has experience with everything from PHP to C# to ReactJS to SignalR. Clean code and professionalism are particularly important to him, along with mentoring and teaching others what he has learned along the way.

  • Clayton Hunt

    Clayton Hunt has been programming professionally since 2005, doing mostly web development with an emphasis on JavaScript and C#. He focuses on Software Craftsmanship and is a signatory of both the Agile Manifesto and the Software Craftsmanship manifesto. He believes that through short iterations and the careful gathering of requirements, we can deliver the highest quality and most value in the shortest time. He enjoys learning and encouraging others to continuously improve themselves.

