Testing with JUnit

By Frank Appel

About this book

JUnit has matured to become the most important tool when it comes to automated developer tests in Java. Supported by all IDEs and build systems, it empowers programmers to deliver software features reliably and efficiently. However, writing good unit tests is a skill that needs to be learned; otherwise it's all too easy to end up in gridlocked development due to messed up production and testing code. Acquiring the best practices for unit testing will help you to prevent such problems and lead your projects to success with respect to quality and costs.

This book explains JUnit concepts and best practices applied to the test-first approach, a foundation for high-quality Java components delivered on time and within budget.

From the beginning you'll be guided continuously through a practically relevant example and pick up background knowledge and development techniques step by step. Starting with the basics of test organization you'll soon comprehend the necessity of well-structured tests and delve into the relationship of requirement decomposition and the many-faceted world of test double usage. In conjunction with third-party tools you'll be trained in writing your tests efficiently, adapting your test case environment to particular demands, and increasing the expressiveness of your verification statements. Finally, you'll experience continuous integration as the perfect complement to support short feedback cycles and quality-related reports for your whole team.

The tutorial provides a thorough introduction to the essentials of unit testing with JUnit and prepares you for the test-related challenges of your daily work.

Publication date:
August 2015
Publisher
Packt
Pages
200
ISBN
9781782166603

 

Chapter 1. Getting Started

Accomplishing the evolving objectives of a software project on time and within budget on a long-term basis is a difficult undertaking. In this opening chapter, we're going to explain why unit testing can play a vital role in meeting these demands. We'll illustrate the positive influence on the defect rate, code quality, development pace, specification density, and team morale. All that makes it worthwhile to acquire a broad understanding of the various testing techniques. To get started, you'll learn to arrange your tool set around JUnit and organize your project infrastructure properly. You'll be familiarized with the definition of unit tests and the basics of test-driven development. This will prepare us for the following chapters, where you'll come to know about more advanced testing practices.

  • Why you should busy yourself with unit tests

  • Setting the table

  • Serving the starter

 

Why you should busy yourself with unit tests


Since you are reading this, you likely have a reason to consider unit testing as an additional development skill to learn. Whether you are motivated by personal interest or driven by external stimulus, you probably wonder if it will be worth the effort. But properly applied unit testing is perhaps the most important technique the agile world has to offer. A well-written test suite is usually half the battle for a successful development process, and the following section will explain why.

Reducing the defect rate

The most obvious reason to write unit tests is to build up a safety net to guard your software from regression. There are various grounds for changing the existing code, whether it be to fix a bug or to add supplemental functionality. But understanding every aspect of the code you are about to change is difficult to achieve. So, a new bug sneaks in easily. And it might take a while before it gets noticed.

Think of a method returning some kind of sorted list that works as expected. Due to additional requirements, such as filtering the result, a developer changes the existing code. Inadvertently, these changes introduce a bug that only surfaces under rare circumstances. Hence, simple sanity tests may not reveal any problems and the developer feels confident to check in the new version. If the company is lucky, the problem will be detected by the quality assurance team, but chances are that it slips through to the customer. Boom!

This is because it's hardly possible to check all corner cases of nontrivial software from a user's point of view, let alone if done manually. Besides an annoyed customer, this leads to a costly turnaround consisting of, for example, filing a bug report, reproducing and debugging the problem, scheduling it for repair, implementing the fix, testing, delivering, and, finally, deploying the corrected version. But who will guarantee that the new version won't introduce another regression?

Sounds scary? It is! I have seen teams that were barely able to deliver new functionality as they were about to drown in a flood of bugs. And hot fixes produced to resolve blocking situations on the customer side introduced additional regression all the time. Sounds familiar? Then, it might be time for a change.

Good unit tests can be written with a small development overhead and verify, in particular, all the corner case behavior of a component. Thus, the developer's aforementioned mistake would have been captured by a test at the earliest possible point in time and at the lowest possible cost. But humans make mistakes: what if a corner case is overlooked and a bug turns up? Even then, you are better off because fixing the issue sustainably means simply writing an additional test that reproduces the problem by a failing verification. Change the code until all tests pass and you get rid of the fault forever.
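This reproduce-then-fix workflow can be illustrated with a plain-Java sketch. All names here are invented for the illustration and do not appear in the book's code base: a method that should return the positive input values in sorted order contains a corner-case bug, and a failing verification pins it down before any fix is attempted.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class RegressionSketch {

  // Hypothetical production method: should return the positive input
  // values in sorted order. The filter bound is deliberately wrong
  // (> 1 instead of > 0) to mimic a bug that only rare inputs reveal.
  static List<Integer> sortedPositives(List<Integer> input) {
    return input.stream()
        .filter(value -> value > 1) // bug: drops the corner-case value 1
        .sorted()
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // A failing verification that reproduces the defect: the value 1
    // should survive the filter but does not.
    List<Integer> actual = sortedPositives(Arrays.asList(3, 1, -2));
    if (!actual.equals(Arrays.asList(1, 3))) {
      System.out.println("bug reproduced: " + actual);
    }
  }
}
```

Once the filter bound is corrected to `value > 0`, the verification passes and the corner case is guarded against regression from then on.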

Improving the code quality

The influence a consistent testing approach will have on the code quality is less apparent. Once you have a safety net in place, changing the existing code to make it more readable, and hence easier to enhance, isn't risky anymore. If you introduce a regression, your tests will tell you immediately. So, the code morphs from a never touch a running system shrine to a lively change embracing place.

Matured test-first practices will implicitly improve your code with respect to most of the common quality metrics. Testing first is geared to produce small, coherent, and loosely coupled components combined with a high coverage and verification of the component's behavior. The production of clean code is an inherent step of the test-driven development mantra explained further ahead.

The following image shows two screenshots of measurements taken from a small, real-world project of the Xiliary GitHub repository (https://github.com/fappel/xiliary). The project was developed completely test-driven, and we paid no attention to its metrics before writing this chapter. Not very surprisingly, though, the numbers look quite good.

Note

Don't worry if you're not familiar with the meaning of the metrics. All you need to know at the moment is that they would appear in red if exceeding the tool's default thresholds.

So, in case you wonder about the three red spots with low coverage numbers, note that two of those classes are covered by particular integration tests as they are adapters to third-party functionality (a more detailed explanation of integration tests follows in the upcoming Understanding the nature of a unit test section). The remaining class is at an experimental or prototypical stage and will be replaced in the future.

Note

Note that we'll deepen our knowledge of code coverage in Chapter 2, Writing Well-structured Tests, and in Chapter 8, Running Tests Automatically within a CI Build.

Metrics of a TDD project

Programs built on good code quality stand out from systems that merely run, because they are easier to maintain and usually impress with a higher feature evolution rate.

Increasing the development pace

At first glance, the math seems to be simple. Writing additional testing code means more work, which consumes more time, which leads to lower development speed. Right? But would you like to drive a car whose individual parts did not undergo thorough quality assurance? And what would be gained if the car had to spend most of its lifetime in the service shop rather than on the road, let alone the possibility of a life-threatening accident?

The initial production speed might be high, but the overall outcome would be poor and might ruin the car manufacturer in the end. It is not that much different with the development of nontrivial software systems. We elaborated already on the costs of bugs that manage to sneak through to the customer. So, calculating development speed like that is a naïve assessment.

As a developer, you stand between two contradictory goals: on the one hand, you have to be quick on the draw to meet your deadlines. On the other hand, you must not commit too many sins to be able to also meet subsequent deadlines. The term sin refers to work that should be done before a particular job can be considered complete or proper. This is also denoted as technical debt, [TECDEP]. And here comes the catch. Keeping the balance often does not work out, and once the technical debt gets too high, the system collapses. From that point in time, you won't meet any deadlines again.

So, yes, writing tests causes an overhead. But if done well, it ensures that subsequent deadlines are not endangered by technical debt. The development pace might be initially at a slightly lower rate with testing, but it won't decrease and is, therefore, higher when watching the overall picture.

By the way, if you know your tools and techniques, the overhead isn't that much at all. At least, I am usually not hired for being particularly slow. When you think of it, running a component's unit tests is done in the time of a wink. On the flip side, checking its behavior manually involves launching the application, clicking to the point where your code actually gets involved, and after that, you click and type yourself again through certain scenarios you consider important. Does the latter sound like an efficient modus operandi?

Enhancing the specification density

A good test suite at hand can be an additional source of information about what your system components are really capable of, and one that, unlike design docs, doesn't become outdated. Of course, this is a kind of low-level specification that only a developer is apt to write. But if done well, a test's name tells you about the functionality under test with respect to specific initial conditions, and the test's verifications about the expected outcome produced by the execution of this functionality.

This way, a developer who is about to change an existing component will always have a chance to check against the accompanying tests to understand what a component is really all about. So, the truth is in the tests! But this underscores that tests have to be treated as first-class citizens and have to be written and adjusted with care. A poorly written test might confuse a programmer and hinder progress significantly.

Boosting confidence and courage

Everybody likes to be in a winning team. But once you are stuck in a bug trail longer than the Great Wall of China and a technical debt higher than Mount Everest, fear creeps in. At that time, the implementation of new features can cause avalanches of lateral damage and developers get reluctant to changes. What follows are debates about consolidation phases or even rewriting large parts of the system from scratch before anyone dares to think about new functionality. Of course, this is an economic horror scenario from the management's point of view, and that's how the development team members' confidence and courage say goodbye.

Again, this does not happen as easily with a team that has built its software upon components backed up with well-written unit tests. We learned earlier why unit tested systems neither have many bugs nor too much technical debt. Introducing additional functionality is possible without expecting too much lateral damage since the existing tests guard against regressions. Combined with module-spanning integration tests, you get a rock-solid foundation in which developers learn to trust.

I have seen more than once how far-reaching restructurings of nontrivial systems were achieved without doing any harm to dependent components. All that was necessary was to take care not to break existing tests and cover changed code passages with new or adjusted tests. So, if you are, unluckily, more or less familiar with some of the scenarios described in this section, you should read on and learn how to get confidence and courage back in your team.

 

Setting the table


This book is based on a hands-on example that will guide us through the essential concepts and programming techniques of unit testing. For a sustainable learning experience, feel encouraged to elaborate and complete the various code snippets in your own working environment. Hence, here comes a short introduction of the most important tools and the workspace organization used while programming the sample.

Choosing the ingredients

As the book's title implies, the main tool this is all about is JUnit (http://www.junit.org). It is probably the most popular testing framework for developers within the Java world. Its first version was written by Kent Beck and Erich Gamma on a flight from Zurich to OOPSLA 1997 in Atlanta, [FOWL06]. Since then, it has evolved by adapting to changing language constructs, and quite a few supplemental libraries have emerged.

Java IDEs provide a UI and build path entries to compile, launch, and evaluate JUnit tests. Build tools, such as Ant, Maven, and Gradle, support test integration out of the box. When it comes to IDEs, the example screenshots in this book are captured using Eclipse (http://www.eclipse.org/). However, we do not rely on any Eclipse-specific features, which should make it easy to reproduce the results in your favorite IDE too.

In general, we use Maven (https://maven.apache.org/) for dependency management of the libraries mentioned next, which means that they can be retrieved from the Maven Central Repository (http://search.maven.org/). But if you clone the book's GitHub repository (https://github.com/fappel/Testing-with-JUnit), you will find a separate folder for each chapter, providing a complete project configuration with all dependencies and sources. This means navigating to this directory and using the 'mvn test' Maven command should enable you to compile and run the given examples easily. Let's finish this section with an introduction of the more important utilities we'll be using in the course of the book.

Chapter 3, Developing Independently Testable Units, covers the sense and purpose of the various test double patterns. It is no wonder that there are tools that simplify test double creation significantly. Usually, they are summarized under the term mock frameworks. The examples are based on Mockito (http://mockito.org), which is well suited to building up clean and readable test structures.

There are several libraries that claim to improve your daily testing work. Chapter 5, Using Runners for Particular Testing Purposes, will introduce JUnitParams (http://pragmatists.github.io/JUnitParams/) and Burst (https://github.com/square/burst) as alternatives to writing parameterized tests. Chapter 7, Improving Readability with Custom Assertions, will compare the two verification tools Hamcrest (http://hamcrest.org/) and AssertJ (http://assertj.org).

Automated tests are only valuable if they are executed often. Because of this, they are usually an inherent part of each project's continuous integration build. Hence, Chapter 8, Running Tests Automatically within a CI Build, will show how to create a basic build with Maven and introduce the value of code coverage reports with JaCoCo (http://www.eclemma.org/jacoco/).

Organizing your code

In the beginning, one of the more profane-looking questions you have to agree upon within your team is where to put the test code. The usual convention is to keep unit tests in classes with the same name as the class under test, but suffixed or prefixed with Test or the like. Thus, a test case for the Foo class might be named FooTest.

Based on the description of Hunt/Thomas, [HUTH03], of different project structuring types, the simplest approach would be to put our test into the same directory where the production code resides, as shown in the following diagram:

A single-source tree with the same package

We usually don't want to break the encapsulation of our classes for testing purposes, which shouldn't be necessary in most cases anyway. But as always, there are exceptions to the rule, and before leaving a functionality untested, it's probably better to open up the visibility a bit. The preceding code organization provides the advantage that, in such rare cases, one can make use of the package member access the Java language offers.

Members or methods without visibility modifiers, such as public, protected, and private, are only accessible from classes within the same package. A test case that resides in the same package can use such members, while encapsulation still shields them from classes outside the package, even if such classes would extend the type under test.
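As a minimal sketch of this visibility rule (the class and member names are invented for illustration), consider a production class with a package-private method and a test class residing in the same package:

```java
// Production class: computeChecksum has no visibility modifier, so it
// is package-private and accessible only from within its own package.
class Document {
  int computeChecksum(String content) {
    return content.chars().sum();
  }
}

// A test in the same package may call the package-private member,
// while classes outside the package could not, even if they
// extended Document.
public class DocumentTest {
  public static void main(String[] args) {
    int checksum = new Document().computeChecksum("ab");
    System.out.println(checksum); // 'a' (97) + 'b' (98) = 195
  }
}
```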

Unfortunately, putting tests into the same directory as the production code has a great disadvantage too. When packages grow, the test cases are perceived soon as clutter and lead to confusion when looking at the package's content. Because of this, another possibility is to have particular test subpackages, as shown here:

Single-source tree with a separate test package

However, using this structure, we give up the package member access. But how can we achieve a better separation of production and testing code without losing this capability? The answer is to introduce a parallel source tree for test classes, as shown here:

A parallel-source tree

To make this work, it is important that the roots of both trees are part of the compiler's CLASSPATH settings. Luckily, you usually do not have to put much effort into this organization as it is the most common one and gets set up automatically, for example, if you use Maven archetypes to create your projects. Examples in this book assume this structure.
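For reference, this parallel-source tree corresponds to the standard directory layout that Maven archetypes generate (the package name below is purely illustrative):

```
src/main/java/com/example/timeline/Timeline.java       (production code)
src/test/java/com/example/timeline/TimelineTest.java   (test code, same package)
```

Since both roots are on the compile classpath and the test class declares the same package as the production class, package member access is retained despite the physical separation.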

Last but not least, it is possible to enhance the parallel tree concept even further. A far-reaching separation can be achieved by putting tests in their own source code project. The advantage of this strategy is the ability to use different compiler error/warning settings for test and production code. This is useful, for example, if you decide to avoid auto-boxing in your components but feel it would make test code overly verbose when working with primitives. With project-specific settings, you can have hard compiler errors in production code without having the same restriction in tests.

Parallel-source tree with separate test project

Whatever organization style you may choose, make sure that all team members use the same one. It will be very confusing and hardly maintainable if the different concepts get mixed up. Now that the preliminaries are done, we are ready for action.

 

Serving the starter


To reach as much practical relevance as possible, this book shows how to implement a real-world scenario driven by unit tests. This way of proceeding allows us to explain the various concepts and techniques under the light of a coherent requirement. Thus, we kick off with a modest specification of what our example application will be all about. However, before finally descending into the depths of development practices, we will go ahead and clarify the basic characteristics of unit testing and test-first practices in dedicated sections.

Introducing the example app

Let's assume that we have to write a simple timeline component as it is known from the various social networks, such as Twitter, Google+, Facebook, and the like. To make things a bit more interesting, the application has to run on different platforms (desktop, browser, and mobile) and allow the display of content from arbitrary sources. The wireframe in the following image gives an impression of the individual functional requirements of our timeline:

Timeline wireframe

The header contains a label indicating the source of the items displayed in the list under it. It also notifies the user if newer entries are available and allows them to fetch and insert the new entries at the top.

The list section is a sequence of chronologically ordered items, which can be browsed by a scrollbar. The component should allow us to load its entries page-wise. This means that it shows a maximum of, let's say, ten entries. If scrolling reaches the last one, the next ten items can be fetched from the provider. The newly loaded entries are added and the scrollbar is adjusted accordingly. To keep things in scope, a push button for manual fetching will be sufficient here.
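The page-wise loading rule described above can be sketched as a small helper (the names are invented for this illustration; the actual component is developed test-first in the following chapters):

```java
import java.util.ArrayList;
import java.util.List;

public class PagingSketch {

  // Fetches the next page of at most fetchCount items from a source,
  // starting at the index of the first not-yet-loaded entry.
  static List<String> nextPage(List<String> source, int alreadyLoaded, int fetchCount) {
    int end = Math.min(alreadyLoaded + fetchCount, source.size());
    return new ArrayList<>(source.subList(alreadyLoaded, end));
  }

  public static void main(String[] args) {
    List<String> items = new ArrayList<>();
    for (int i = 1; i <= 25; i++) {
      items.add("item-" + i);
    }
    // The first page shows ten entries; scrolling to the last one
    // triggers fetching the next ten from the provider.
    System.out.println(nextPage(items, 0, 10).size());  // 10
    System.out.println(nextPage(items, 20, 10).size()); // 5: only five remain
  }
}
```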

An item type, in turn, comprises several text or image attributes that compose an entry's content. Note that the timestamp is considered mandatory as it is needed for chronological ordering. Apart from that, the depiction should be undetermined by the component itself and depend on the type of the underlying information source.

This means that a Twitter feed probably provides a different information structure than the commits of a branch in a Git repository. The following image shows what the running applications will look like. The JUnit items shown are commits taken from the master branch of the tool's project repository at GitHub.

Given the application description, it is important to note that the following chapters will focus on the unit testing aspects of the development process to keep the book on target. But this immediately raises the question: what exactly is a unit test?

 

Understanding the nature of a unit test


A unit test is basically a piece of code written by a developer to verify that another piece of code—usually the implementation of a feature—works correctly. In this context, a unit identifies a very small, specific area of behavior and not the implementing code itself. If, for example, we regard adding an item to our timeline as a functional feature, appropriate tests would ensure that the item list grows by one and that the new item gets inserted at the right chronological position.

Yet, there is more to it than meets the eye. Unit tests are restricted to that code for which the developer is responsible. Consider using a third-party library that relies on external resources. Tests would implicitly run against that third-party code. In case one of the external resources is not available, a test could fail although there might be nothing wrong with the developer's code. Furthermore, setup could get painstaking, and due to the invocation time of external resources, execution would get slow.

But we want our unit tests to be very fast because we intend to run them all as often as possible without impeding the pace of development. By doing so, we receive immediate feedback about busting a low-level functionality. This puts us in the position to detect and correct a problem as it evolves and avoid expensive quality assurance cycles.

Different timeline UIs

As the book progresses, we will see how to deal with the integration of third-party code properly. The usual strategy is to create an abstraction of the problematic component. This way, it can be replaced by a stand-in that is under the control of the developer. Nevertheless, it is important to verify that the real implementation works as expected. Tests that cope with this task are called integration tests. Integration tests check the functionality on a more coarse-grained level and focus on the correct transition of component boundaries.

Having said all this, it is clear that testing a software system from the client's point of view to verify formal specifications does not belong to unit testing either. Such tests simulate user behavior and verify the system as a whole. They usually require a significant amount of time for execution. These kinds of tests are called acceptance or end-to-end tests.

Another way to look at unit tests is as an accompanying specification of the code under test, comparable to the dispatch note of a cogwheel, which tells Quality Assurance (QA) what key figures this piece of work should meet. But due to the nature of the software, no one but the developer is apt to write such low-level specifications. Thus, automated tests become an important source of information about the intended behavior of a unit and one that does not become outdated as easily as documentation.

Note

We'll elaborate on this thought in Chapter 2, Writing Well-structured Tests.

Now that we've heard so much about the nature of unit tests, it's about time to write the first one by ourselves!

Writing the first test

 

"A journey of a thousand miles begins with a single step."

 
 --Lao Tzu

Unit tests written with JUnit are grouped by plain Java classes, each of which is called a test case. A single test case specifies the behavior of a low-level component normally represented by a class. Following the metaphor of the accompanying specification, we can begin the development of our timeline example as follows:

public class TimelineTest {
}

The test class expresses the intent to develop a component called Timeline, which Meszaros, [MESZ07], would denote as system under test (SUT). And applying a common naming pattern, the component's name is complemented by the suffix Test. But what is the next logical step? What should be tested first? And how do we create an executable test anyway?

Usually, it is a good idea to start with the happy path, which is the normal path of execution and, ideally, the general business use case. Consider that we expect fetch-count to be an attribute of our timeline component. The value configures how many items will be fetched at once from an item source. To keep the first example simple, we will ignore the actual item loading for now and regard only the component's state change that is involved.

An executable JUnit test is a public, nonstatic method that gets annotated with @Test and takes no parameters. Summarizing all this information, the next step could be a method stub that names a functionality of our component we want to test. In our case, this functionality could be the ability to set the fetch-count to a certain amount:

public class TimelineTest {
  @Test
  public void setFetchCount() {
  }
}

Tip

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Additionally, the author has hosted the code sources for this book on his GitHub repository at https://github.com/fappel/Testing-with-JUnit. So, you can download it from this URL and work with the code.

This is still not much, but it is actually sufficient to run the test for the first time. JUnit test executions can be launched from the command line or a particular UI. But for the scope of this book, let's assume we have IDE integration available. Within Eclipse, the result would look like the next image.

The green progress bar signals that the test run did not recognize any problems, which is not a big surprise as we have not verified anything yet. But remember that we have already done some useful considerations that help us to populate our first test easily:

  • We intend to write the Timeline component. To test it, we can create a local variable that takes a new instance of this component.

  • As the first test should verify the state-changing effect of setting the fetch-count attribute, it seems natural to introduce appropriate setters and getters to do so:

    @Test
    public void setFetchCount() {
      Timeline timeline = new Timeline();
    
      timeline.setFetchCount( 5 );
      int actual = timeline.getFetchCount();
    }

It looks reasonable so far, but how can we assure that a test run is denoted as a failure if the actual value returned by getFetchCount does not match the input used with setFetchCount? For this purpose, JUnit offers the org.junit.Assert class, which provides a set of static methods to help developers to write so-called self-checking tests.

The green bar after a successful launch

The methods prefixed with assert are meant to check a certain condition, throwing a java.lang.AssertionError on a negative evaluation. Such errors are picked up by the tool's runtime and mark the test as failed in the resulting report. To assert that two values or objects are equal, we can use Assert.assertEquals. As it is very common to use static imports for assertion method calls, the setFetchCount test can be completed like this:

@Test
public void setFetchCount() {
  Timeline timeline = new Timeline();
  int expected = 5;

  timeline.setFetchCount( expected );
  int actual = timeline.getFetchCount();

  assertEquals( expected, actual );
}

The built-in mechanism of JUnit, which is often considered somewhat dated, isn't the only possibility to express test verifications. But to avoid information flooding, we will stick to it for now and postpone a thorough discussion of the pros and cons of alternatives to Chapter 7, Improving Readability with Custom Assertions.

Looking at our first test, you can recognize that it specifies a behavior of the SUT, which does not even exist yet. And by the way, this also means that the test class does not compile anymore. So, the next step is to create a skeleton of our component to solve this problem:

public class Timeline {

  public void setFetchCount( int fetchCount ) {
  }

  public int getFetchCount() {
    return 0;
  }
}

Well, the excitement gets nearly unbearable. What will happen if we run our test against the newly created component?

Evaluating test results

Now the test run leads to a failure with a red progress bar due to the insufficient implementation of the timeline component, as shown in the next image. The execution report shows how many tests were run in total, how many of those terminated with errors, and how many failed due to unmet assertions.

A stack trace for each error/failure helps to identify and understand the problem's cause. AssertionError raised by a verification call of a test provides an explaining message, which is shown in the first line of the trace. In our example, this message tells us that the expected value did not meet the actual value returned by getFetchCount.

A test terminated by an Exception indicates an arbitrary programming mistake beyond the test's assertion statements. A simple example of this can be access to an uninitialized variable, which subsequently terminates test execution with a NullPointerException. JUnit follows the all or nothing principle. This means that if an execution involves more than one test, which is usually the case, a single problem marks the whole suite as failed.

The red bar after test failure

The UI reflects this by painting the progress bar red. You may now wonder whether we shouldn't have completed our component's functionality first. The implementation seems easy enough, and at least, we wouldn't have ended up with the red bar. But the next section explains why starting with a failing test is crucial for a clean test-first approach.
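The distinction between a failure (unmet assertion) and an error (unexpected exception) can be sketched in plain Java; the classification logic below only mimics how a test runner sorts the two throwable kinds, and the class name is invented for this illustration:

```java
public class FailureVersusError {

  // Mimics how a test runner classifies an executed test: unmet
  // assertions count as failures, other exceptions as errors.
  static String classify(Runnable testCode) {
    try {
      testCode.run();
      return "passed";
    } catch (AssertionError failure) {
      return "failure: " + failure.getMessage();
    } catch (RuntimeException error) {
      return "error: " + error.getClass().getSimpleName();
    }
  }

  public static void main(String[] args) {
    // A failure: a verification throws AssertionError with an
    // explaining message, shown in the first line of the stack trace.
    System.out.println(classify(() -> {
      throw new AssertionError("expected:<5> but was:<0>");
    }));

    // An error: a programming mistake beyond the assertion statements,
    // such as accessing an uninitialized variable.
    System.out.println(classify(() -> {
      String uninitialized = null;
      uninitialized.length();
    }));
  }
}
```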

Writing tests first

Writing tests before the production code even exists might look strange to a newbie, but there are actually good reasons to do so. First of all, writing tests after the fact (meaning first code, then test) is no fun at all. Well, that sounds like a hell of a reason because, if you gotta do what you gotta do, [FUTU99], what's the difference whether you do it first or last?

The difference lies in the motivation to do it right! Once you are done with the fun part, it is all too human to get through the annoying duties as quickly, and as sloppily, as you can. You are probably reading this because you are interested in improving things. So, ask yourself how effective tests will be if they are written just for justification or to silence the conscience.

Even if you are disciplined and motivated enough to do your after-the-fact tests right, there will be more holes in the test coverage than with the test-first approach. This is because the class under test was not designed for testing. Most of the time, it takes costly restructuring to decompose a component written from scratch into separate concerns that can be tested easily. And if these steps are considered too expensive, testing will be omitted. But isn't it a bad thing to change a design for testing purposes?

 

"'Separation of Concerns' is probably the single most important concept in software design and implementation."

 
 --[HUTH03]

The point is that writing your tests first supports proper separation implicitly. Every time your test setup feels overly complicated, you are about to put too much functionality into your component. In this case, you should reconsider your class-level design and split it up into smaller pieces. Consistently following this practice leads to a healthy class-level design out of the box.

Although this book is not about how to write tests first, or test-driven development (TDD) as it is usually called, it follows this principle while developing the example application. But as the focus will be on getting unit tests right and not on the implementation aspects of the components, here are a few words about the work paradigm of TDD for better understanding.

The procedure is simple. Once you have picked your first work unit, write a test, make it run, and, last, make it right [BECK03]. After you're done, start all over again with the next piece of functionality. This is exactly what we have done until now with our first test. We decided on a small feature to implement. Then we wrote a test that specifies the intended behavior and invented a kind of programming interface that would match the use case.

When we feel confident with the outcome, it is time to fix the compile errors and create a basic implementation stub so that we are able to execute the test. This way, the test is the first client of the freshly created component, and we get the earliest possible feedback on what using it in programs will look like. However, it is important that the first test run fails, to ensure that the verification conditions are not met by accident.
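As a minimal sketch of such an intentionally failing first run, consider the stub below. The Timeline class name and getFetchCount come from the book's running example, but DEFAULT_FETCH_COUNT, the expected value of 10, and the main method emulating the verification step are assumptions made for illustration.

```java
// Deliberately insufficient stub: it compiles, so the test can be executed,
// but it must fail on the first run to prove the assertion is meaningful.
public class Timeline {

  // Hypothetical specification value, assumed for this sketch.
  public static final int DEFAULT_FETCH_COUNT = 10;

  public int getFetchCount() {
    return 0;  // stub: does not meet the specification yet
  }

  // Emulates the test's verification step without the JUnit runner.
  public static void main(String[] args) {
    int actual = new Timeline().getFetchCount();
    if (actual != DEFAULT_FETCH_COUNT) {
      System.out.println("FAILED: expected " + DEFAULT_FETCH_COUNT
          + " but was " + actual);  // the red bar
    } else {
      System.out.println("PASSED");  // the green bar
    }
  }
}
```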

The make it run step is about fixing the failing test as quickly as possible. This goal outweighs everything else now. We are even allowed to commit programming sins we usually try to avoid. If this feels a bit outlandish, think of it like this: if you want to write clean code that works (Ron Jeffries, [BECK03]), ensure that it works first, and then take your time to clean it up. This has the advantage that you know the specification can be met without wasting time writing pretty code that will never work.

Last but not least, make it right. Once your component behaves as specified, ascertain that your production and test code follow the best programming standards you can think of. While overhauling your code, repeatedly executing the tests ensures that the behavior is kept intact. Changing code without changing its behavior is called refactoring.
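A behavior-preserving change can be checked by running the same verifications against the old and the new form of the code. The sketch below is a hypothetical illustration: the class name and the clamping rule for a fetch count are invented for this example, not taken from the book.

```java
// Hedged sketch of a refactoring: both versions must yield identical
// results for every input, which repeated test runs would confirm.
public class FetchCountRefactoring {

  // Before: the validation logic spelled out with explicit branches.
  public static int clampBefore(int count) {
    if (count < 1) { return 1; }
    if (count > 20) { return 20; }
    return count;
  }

  // After: the same behavior expressed through the standard library.
  public static int clampAfter(int count) {
    return Math.max(1, Math.min(20, count));
  }

  public static void main(String[] args) {
    // Re-running the checks over the whole input range guards the behavior.
    for (int count = -5; count <= 25; count++) {
      if (clampBefore(count) != clampAfter(count)) {
        throw new AssertionError("behavior changed at " + count);
      }
    }
    System.out.println("behavior preserved");  // prints "behavior preserved"
  }
}
```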

In the big picture, we started with a failing test and a red bar, fixed the test, making the bar green again, and, finally, cleaned up the implementation in a last refactoring step. As this pattern gets repeated over and over again in TDD, it is known as the red/green/refactor mantra.

So, always remember, folks: keep the bar green to keep the code clean.

 

Summary


In this chapter, you learned why unit tests are such a valuable asset for Java developers. We've seen that well-written tests go beyond pure regression avoidance and experienced how they improve your code quality, increase your overall development pace, enhance your component specifications, and, last but not least, convey confidence and courage to your team members.

We've addressed the tool set that accompanies JUnit and prepared our workspace so that we can take an active part in the following chapters. After the introduction of the ongoing example, which will serve as the motivation and source of code snippets for the various subjects, we elaborated a definition of what unit testing is all about. Then came the time to learn the very basics of writing and executing our first self-checking test. We concluded with an overview of the essentials of TDD, which prepares you for the following chapters, where you'll come to know more advanced unit testing techniques.

By continuously evolving our example, the next chapter will reveal the common structure of well-written unit tests. You'll learn some heuristics to pick the next behavior to implement and, finally, gain some insights into unit test naming conventions.

About the Author

  • Frank Appel

    Frank Appel is a stalwart of agile methods and test-driven development in particular. He has over 2 decades of experience as a freelancer and understands software development as a type of craftsmanship. Having adopted the test first approach over a decade ago, he applies unit testing to all kinds of Java-based systems and arbitrary team constellations. He serves as a mentor, provides training, and blogs about these topics at codeaffine.com.

