Setting Up a Test Infrastructure

by Ryan Roemer | September 2013 | Open Source Web Development

In this article, Ryan Roemer, the author of Backbone.js Testing, gives a brief description of how to set up your test application code and obtain the test libraries that we will use throughout this article. We create a basic test infrastructure, write the first tests, and review the test report results.

Setting up and writing our first tests

Now that we have the base test libraries, we can create a test driver web page that includes the application and test libraries, sets up and executes the tests, and displays a test report.

The test driver page

A single web page is typically used to include the test and application code and drive all frontend tests. Accordingly, we can create a web page named test.html in the chapters/01/test directory of our repository starting with just a bit of HTML boilerplate—a title and meta attributes:

<html>
  <head>
    <title>Backbone.js Tests</title>
    <meta http-equiv="Content-Type"
      content="text/html; charset=UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">

Then, we include the Mocha stylesheet for test reports and the Mocha, Chai, and Sinon.JS JavaScript libraries:

    <link rel="stylesheet" href="js/lib/mocha.css" />
    <script src="js/lib/mocha.js"></script>
    <script src="js/lib/chai.js"></script>
    <script src="js/lib/sinon.js"></script>

Next, we prepare Mocha and Chai. Chai is configured to globally export the expect assertion function. Mocha is set up to use the bdd test interface and start tests on the window.onload event:

    <script>
      // Setup.
      var expect = chai.expect;
      mocha.setup("bdd");

      // Run tests on window load event.
      window.onload = function () {
        mocha.run();
      };
    </script>

After the library configurations, we add in the test specs. Here we include a single test file (that we will create later) for the initial test run:

    <script src="js/spec/hello.spec.js"></script>
  </head>

Finally, we include a div element that Mocha uses to generate the full HTML test report. Note that a common alternative practice is to place all the script include statements before the close body tag instead of within the head tag:

  <body>
    <div id="mocha"></div>
  </body>
</html>
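
A minimal sketch of that alternative layout (not part of the book's example code) moves all of the script includes before the closing body tag; because the report div is then already in the DOM and every script has finished loading, mocha.run() can be called directly instead of waiting for window.onload:

<html>
  <head>
    <title>Backbone.js Tests</title>
    <link rel="stylesheet" href="js/lib/mocha.css" />
  </head>
  <body>
    <!-- The report div exists before any script runs. -->
    <div id="mocha"></div>
    <script src="js/lib/mocha.js"></script>
    <script src="js/lib/chai.js"></script>
    <script src="js/lib/sinon.js"></script>
    <script>
      var expect = chai.expect;
      mocha.setup("bdd");
    </script>
    <script src="js/spec/hello.spec.js"></script>
    <script>mocha.run();</script>
  </body>
</html>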

With that, we are ready to create some tests. You could even open chapters/01/test/test.html in a browser now to see what the test report looks like with an empty test suite.

Adding some tests

Suffice it to say that test development generally entails writing JavaScript test files, each containing an organized collection of test functions. Let's start with a single test file to preview the testing technology stack and give us some tests to run.

The test file chapters/01/test/js/spec/hello.spec.js creates a simple function (hello()) to test and implements a nested set of suites introducing a few Chai and Sinon.JS features. The function under test is about as simple as you can get:

window.hello = function () {
  return "Hello World";
};

The hello function should be contained in its own library file (perhaps hello.js) for inclusion in applications and tests. The code samples simply include it in the spec file for convenience.
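If you do split it out, the driver page would simply include the hypothetical library file before the spec that exercises it, for example:

<!-- Hypothetical application library file containing window.hello. -->
<script src="js/lib/hello.js"></script>
<!-- The spec file can now call hello() without defining it. -->
<script src="js/spec/hello.spec.js"></script>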

The test code uses nested Mocha describe statements to create a test suite hierarchy. The test in the Chai suite uses expect to illustrate a simple assertion. The Sinon.JS suite's single test shows a test spy in action:

describe("Trying out the test libraries", function () {
describe("Chai", function () {
it("should be equal using 'expect'", function () {
expect(hello()).to.equal("Hello World");
});
});
describe("Sinon.JS", function () {
it("should report spy called", function () {
var helloSpy = sinon.spy(window, 'hello');
expect(helloSpy.called).to.be.false;
hello();
expect(helloSpy.called).to.be.true;
hello.restore();
});
});
});

Not to worry if you do not fully understand the specifics of these tests and assertions at this point, as we will shortly cover everything in detail. The takeaway is that we now have a small collection of test suites with a set of specifications ready to be run.
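Spies record much more than a called flag. As a taste of what else is tracked (an extension of the book's example, exercising the same hello() function), assertions like the following would also pass:

describe("Sinon.JS extras", function () {
  it("should track call counts and return values", function () {
    var helloSpy = sinon.spy(window, 'hello');
    hello();
    // A spy counts how many times the wrapped function ran...
    expect(helloSpy.callCount).to.equal(1);
    // ...and remembers what each call returned.
    expect(helloSpy.returned("Hello World")).to.be.true;
    hello.restore();
  });
});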

Running and assessing test results

Now that all the necessary pieces are in place, it is time to run the tests and review the test report.

The first test report

Opening up the chapters/01/test/test.html file in any web browser will cause Mocha to run all of the included tests and produce a test report:

Figure: Test report

This report provides a useful summary of the test run. The top-right column shows that two tests passed, none failed, and the tests collectively took 0.01 seconds to run. The test suites declared in our describe statements are present as nested headings. Each test specification has a green checkmark next to the specification text, indicating that the test has passed.

Test report actions

The report page also provides tools for analyzing subsets of the entire test collection. Clicking on a suite heading such as Trying out the test libraries or Chai will re-run only the specifications under that heading.

Clicking on a specification text (for example, should be equal using 'expect') will show the JavaScript code of the test. A filter button designated by a right triangle is located to the right of the specification text (it is somewhat difficult to see). Clicking the button re-runs the single test specification.

Figure: The test specification code and filter

The previous figure illustrates a report in which the filter button has been clicked. The test specification text in the figure has also been clicked, showing the JavaScript specification code.

Advanced test suite and specification filtering

The report suite and specification filters rely on Mocha's grep feature, which is exposed as a URL parameter in the test web page. Assuming that the report web page URL ends with something such as chapters/01/test/test.html, we can manually add a grep filter parameter accompanied with the text to match suite or specification names.

For example, if we want to filter on the term spy, we would navigate a browser to a comparable URL containing chapters/01/test/test.html?grep=spy, causing Mocha to run only the should report spy called specification from the Sinon.JS suite. It is worth playing around with various grep values to get the hang of matching just the suites or specifications that you want.
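The same filtering can also be done programmatically; a minimal sketch (an alternative to the URL parameter) calls mocha.grep() in the driver page before the tests run:

// In test.html, after mocha.setup("bdd") and before mocha.run():
// only run suites/specifications whose names match the pattern.
mocha.grep(/spy/);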

Test timing and slow tests

All of our tests so far have succeeded and run quickly, but real-world development necessarily involves a certain amount of failures and inefficiencies on the road to creating robust web applications. To this end, the Mocha reporter helps identify slow tests and analyze failures.

Why is test speed important?

Slow tests can indicate inefficient or even incorrect application code, which should be fixed to speed up the overall web application. Further, if a large collection of tests runs too slowly, developers will have an implicit incentive to skip tests in development, leading to costly defect discovery later down the deployment pipeline.

Accordingly, it is a good testing practice to routinely diagnose and speed up the execution time of the entire test collection. Slow application code may be left up to the developer to fix, but most slow tests can be readily fixed with a combination of tools such as stubs and mocks as well as better test planning and isolation.
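For instance, a test exercising a hypothetical window.fetchGreeting() function that performs a slow network call could be made instant by swapping in a Sinon.JS stub that returns a canned value (fetchGreeting is an assumed name used purely for illustration):

// Hypothetical slow application function (for illustration only).
window.fetchGreeting = function () { /* ...slow network call... */ };

describe("A slow dependency", function () {
  it("should be fast once stubbed", function () {
    // Replace the slow function with a stub returning a canned value.
    var stub = sinon.stub(window, "fetchGreeting").returns("Hello World");
    expect(fetchGreeting()).to.equal("Hello World");
    expect(stub.calledOnce).to.be.true;
    fetchGreeting.restore();
  });
});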

Let's explore some timing variations in action by creating chapters/01/test/js/spec/timing.spec.js with the following code:

describe("Test timing", function () {
it("should be a fast test", function (done) {
expect("hi").to.equal("hi");
done();
});
it("should be a medium test", function (done) {
setTimeout(function () {
expect("hi").to.equal("hi");
done();
}, 40);
});
it("should be a slow test", function (done) {
setTimeout(function () {
expect("hi").to.equal("hi");
done();
}, 100);
});
it("should be a timeout failure", function (done) {
setTimeout(function () {
expect("hi").to.equal("hi");
done();
}, 2001);
});
});

We use the native JavaScript setTimeout() function to simulate slow tests. To make the tests run asynchronously, we use the done test function parameter, which delays test completion until done() is called.

The first test has no delay before the test assertion and done() callback, the second adds 40 milliseconds of latency, the third adds 100 milliseconds, and the final test adds 2001 milliseconds. These delays will expose different timing results under the Mocha default configuration that reports a slow test at 75 milliseconds, a medium test at one half the slow threshold, and a failure for tests taking longer than 2 seconds.
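Both thresholds can be changed by passing an options object to mocha.setup() in the driver page; a minimal sketch (the values shown are Mocha's defaults):

// Instead of mocha.setup("bdd"):
mocha.setup({
  ui: "bdd",
  slow: 75,      // Milliseconds before a test is flagged as slow.
  timeout: 2000  // Milliseconds before a pending test fails.
});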

Next, include the file in your test driver page (chapters/01/test/test-timing.html in the example code):

<script src="js/spec/timing.spec.js"></script>

Now, on running the driver page, we get the following report:

Figure: Test report timings and failures

This figure illustrates timing annotation boxes for our medium (orange) and slow (red) tests and a test failure/stack trace for the 2001-millisecond test. With these report features, we can easily identify the slow parts of our test infrastructure and use more advanced test techniques and application refactoring to execute the test collection efficiently and correctly.

Test failures

A test timeout is one type of test failure we can encounter in Mocha. Two other failures that merit a quick demonstration are assertion and exception failures. Let's try out both in a new file named chapters/01/test/js/spec/failure.spec.js:

// Configure Mocha to continue after first error to show
// both failure examples.
mocha.bail(false);

describe("Test failures", function () {
  it("should fail on assertion", function () {
    expect("hi").to.equal("goodbye");
  });
  it("should fail on unexpected exception", function () {
    throw new Error();
  });
});

The first test, should fail on assertion, is a Chai assertion failure, which Mocha neatly wraps up with the message expected 'hi' to equal 'goodbye'. The second test, should fail on unexpected exception, throws an unchecked exception that Mocha displays with a full stack trace.

Stack traces on Chai assertion failures vary based on the browser. For example, in Chrome, no stack trace is displayed for the first assertion while one is shown in Safari. See the Chai documentation for configuration options that offer more control over stack traces.
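For example, recent Chai versions expose an includeStack configuration flag (older versions used chai.Assertion.includeStack instead) that forces a stack trace to accompany every assertion failure; a minimal sketch for the driver page setup script:

// After chai.js is loaded, alongside the expect export.
var expect = chai.expect;
// Always show stack traces on assertion failures.
chai.config.includeStack = true;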

Figure: Test failures

Mocha's failure reporting neatly illustrates what went wrong and where. Most importantly, Chai and Mocha report the most common case—a test assertion failure—in a very readable natural language format.

Summary

In this article, we introduced an application and test structure suitable for development, gathered the Mocha, Chai, and Sinon.JS libraries, and created some basic tests to get things started. Then, we reviewed some facets of the Mocha test reporter and watched various tests in action—passing, slow, timeouts, and failures.
