Testing Tools and Techniques in Python

by Daniel Arbuckle | December 2010
This article by Daniel Arbuckle, author of Python Testing, introduces code coverage and continuous integration, and teaches how to tie automated testing into version control systems. In this article, we will

  • Discuss code coverage, and learn about coverage.py
  • Discuss continuous integration, and learn about buildbot
  • Learn how to integrate automated testing into popular version control systems

 


So let's get on with it!

Code coverage

Tests tell you when the code you're testing doesn't work the way you thought it would, but they don't tell you a thing about the code that you're not testing. They don't even tell you which code that is.

Code coverage is a technique that can be used to address that shortcoming. A code coverage tool watches while your tests are running, and keeps track of which lines of code are (and aren't) executed. After the tests have run, the tool gives you a report describing how well your tests cover the whole body of code.

It's desirable to have the coverage approach 100%, as you probably figured out already. Be careful not to focus on the coverage number too intensely, though; it can be a bit misleading. Even if your tests execute every line of code in the program, they can easily fail to test everything that needs to be tested. That means you can't take 100% coverage as certain proof that your tests are complete. On the other hand, there are times when some code really, truly doesn't need to be covered by the tests (some debugging support code, for example), and so less than 100% coverage can be completely acceptable.

Code coverage is a tool to give you insight into what your tests are doing, and what they may be overlooking. It's not the definition of a good test suite.
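To make the mechanism concrete, here is a minimal sketch of what a coverage tool does under the hood: Python's sys.settrace hook reports each line as it executes, so recording the reported line numbers shows which lines a test run actually exercised. (coverage.py is far more sophisticated than this; trace_lines and branchy are names invented for the illustration.)

```python
import sys

def trace_lines(func, *args):
    """Run func(*args) and return the set of line numbers executed."""
    executed = set()

    def tracer(frame, event, arg):
        # 'line' events fire once for each source line as it runs.
        if event == 'line':
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def branchy(x):
    if x > 0:
        return 'positive'
    return 'non-positive'

# branchy(1) never reaches the 'non-positive' line, so that line's
# number is missing from the executed set -- exactly the kind of gap
# a coverage report points out.
```

Calling trace_lines(branchy, 1) and trace_lines(branchy, -1) produces different line sets, because each call exercises only one branch of the if statement.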

coverage.py

We're going to be working with a module called coverage.py, which is—unsurprisingly—a code coverage tool for Python.

Since coverage.py isn't built into Python, we'll need to download and install it. You can download the latest version from the Python Package Index at http://pypi.python.org/pypi/coverage. As before, users of Python 2.6 or later can install the package by unpacking the archive, changing to the directory, and typing:

$ python setup.py install --user

Users of older versions of Python need write permission to the system-wide site-packages directory, which is part of the Python installation. Anybody who has such permission can install coverage by typing:
$ python setup.py install

We'll walk through the steps of using coverage.py here, but if you want more information you can find it on the coverage.py home page at http://nedbatchelder.com/code/coverage/.

Time for action – using coverage.py

We'll create a little toy code module with tests, and then apply coverage.py to find out how much of the code the tests actually use.

  1. Place the following test code into test_toy.py. There are several problems with these tests, which we'll discuss later, but they ought to run.

    from unittest import TestCase

    import toy

    class test_global_function(TestCase):
        def test_positive(self):
            self.assertEqual(toy.global_function(3), 4)

        def test_negative(self):
            self.assertEqual(toy.global_function(-3), -2)

        def test_large(self):
            self.assertEqual(toy.global_function(2**13), 2**13 + 1)

    class test_example_class(TestCase):
        def test_timestwo(self):
            example = toy.example_class(5)
            self.assertEqual(example.timestwo(), 10)

        def test_repr(self):
            example = toy.example_class(7)
            self.assertEqual(repr(example), '<example param="7">')

  2. Put the following code into toy.py. Notice the if __name__ == '__main__' clause at the bottom. We haven't dealt with one of those in a while, so I'll remind you that the code inside that block runs doctest when we run the module directly with python toy.py.

    def global_function(x):
        r"""
        >>> global_function(5)
        6
        """
        return x + 1

    class example_class:
        def __init__(self, param):
            self.param = param

        def timestwo(self):
            return self.param * 2

        def __repr__(self):
            return '<example param="%s">' % self.param

    if __name__ == '__main__':
        import doctest
        doctest.testmod()

  3. Go ahead and run Nose. It should find the tests, run them, and report that all is well. The problem is, some of the code isn't ever tested.
  4. Let's run it again, only this time we'll tell Nose to use coverage.py to measure coverage while it's running the tests.

    $ nosetests --with-coverage --cover-erase


What just happened?

In step 1, we have a couple of TestCase classes with some very basic tests in them. These tests wouldn't be much use in a real world situation, but all we need them for is to illustrate how the code coverage tool works.

In step 2, we have the code that satisfies the tests from step 1. Like the tests themselves, this code wouldn't be much use, but it serves as an illustration.

In step 4, we passed --with-coverage and --cover-erase as command line parameters when we ran Nose. What did they do? Well, --with-coverage is pretty straightforward: it told Nose to look for coverage.py and to use it while the tests execute. That's just what we wanted. The second parameter, --cover-erase, tells Nose to forget about any coverage information that was acquired during previous runs. By default, coverage information is aggregated across all of the uses of coverage.py. This allows you to run a set of tests using different testing frameworks or mechanisms, and then check the cumulative coverage. You still want to erase the data from previous test runs at the beginning of that process, though, and the --cover-erase command line option is how you tell Nose to tell coverage.py that you're starting anew.

What the coverage report tells us is that 9/12 (in other words, 75%) of the executable statements in the toy module were executed during our tests, and that the missing lines were line 16 and lines 19 through 20. Looking back at our code, we see that line 16 is the __repr__ method. We really should have tested that, so the coverage check has revealed a hole in our tests that we should fix. Lines 19 and 20 are just code to run doctest, though. They're not something that we ought to be using under normal circumstances, so we can just ignore that coverage hole.
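For holes like that, coverage.py can also be told to leave the lines out of the report entirely: by default it excludes any line carrying a # pragma: no cover comment, along with the block that line introduces. (That is coverage.py's documented default exclusion marker; verify the exact behavior against the version you have installed.) Marking the doctest runner in toy.py would make the report reflect only the code we care about:

```python
def global_function(x):
    r"""
    >>> global_function(5)
    6
    """
    return x + 1

# coverage.py skips the marked line and its body when computing
# the coverage percentage, so this block no longer counts against us.
if __name__ == '__main__':  # pragma: no cover
    import doctest
    doctest.testmod()
```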

Code coverage can't detect problems with the tests themselves, in most cases. In the above test code, the test for the timestwo method violates the isolation of units and invokes two different methods of example_class. Since one of the methods is the constructor, this may be acceptable, but the coverage checker isn't in a position to even see that there might be a problem. All it saw was more lines of code being covered. That's not a problem (it's how a coverage checker ought to work), but it's something to keep in mind: coverage is useful, but high coverage doesn't equal good tests.

Version control hooks

Most version control systems have the ability to run a program that you've written in response to various events, as a way of customizing the version control system's behavior. These programs are commonly called hooks.

Version control systems are programs for keeping track of changes to a source code tree, even when those changes are made by different people. In a sense, they provide a universal undo history and change log for the whole project, going all the way back to the moment you started using the version control system. They also make it much easier to combine work done by different people into a single, unified entity, and to keep track of different editions of the same project.

You can do all kinds of things by installing the right hook programs, but we'll only focus on one use. We can make the version control program automatically run our tests, when we commit a new version of the code to the version control repository.

This is a fairly nifty trick, because it makes it difficult for test-breaking bugs to get into the repository unnoticed. As with code coverage, though, there's potential for trouble if it becomes a matter of policy rather than simply a tool to make your life easier.

In most systems, you can write the hooks such that it's impossible to commit code that breaks tests. That may sound like a good idea at first, but it's really not. One reason for this is that one of the major purposes of a version control system is communication between developers, and interfering with that tends to be unproductive in the long run. Another reason is that it prevents anybody from committing partial solutions to problems, which means that things tend to get dumped into the repository in big chunks. Big commits are a problem because they make it hard to keep track of what changed, which adds to the confusion. There are better ways to make sure you always have a working codebase socked away somewhere, such as version control branches.

Bazaar

Bazaar is a distributed version control system, which means that it is capable of operating without a central server or master copy of the source code. One consequence of the distributed nature of Bazaar is that each user has their own set of hooks, which can be added, modified, or removed without involving anyone else. Bazaar is available on the Internet at http://bazaar-vcs.org/.

If you don't have Bazaar already installed, and don't plan on using it, you can skip this section.

Time for action – installing Nose as a Bazaar post-commit hook

  1. Bazaar hooks go in your plugins directory. On Unix-like systems, that's ~/.bazaar/plugins/, while on Windows it's C:\Documents and Settings\<username>\Application Data\Bazaar\<version>\plugins\. In either case, you may have to create the plugins subdirectory, if it doesn't already exist.
  2. Place the following code into a file called run_nose.py in the plugins directory. Bazaar hooks are written in Python:

    from bzrlib import branch
    from os.path import join, sep
    from os import chdir
    from subprocess import call

    def run_nose(local, master, old_num, old_id, new_num, new_id):
        try:
            base = local.base
        except AttributeError:
            base = master.base
        if not base.startswith('file://'):
            return
        try:
            chdir(join(sep, *base[7:].split('/')))
        except OSError:
            return
        call(['nosetests'])

    branch.Branch.hooks.install_named_hook('post_commit', run_nose,
                                           'Runs Nose after each commit')

  3. Make a new directory in your working files, and put the following code into it in a file called test_simple.py. These simple (and silly) tests are just to give Nose something to do, so that we can see that the hook is working.

    from unittest import TestCase

    class test_simple(TestCase):
        def test_one(self):
            self.assertNotEqual("Testing", "Hooks")

        def test_two(self):
            self.assertEqual("Same", "Same")

  4. Still in the same directory as test_simple.py, run the following commands to create a new repository and commit the tests to it. The output you see might differ in details, but it should be quite similar overall.

    $ bzr init
    $ bzr add
    $ bzr commit


  5. Notice that there's a Nose test report after the commit notification. From now on, any time you commit to a Bazaar repository, Nose will search for and run whatever tests it can find within that repository.

What just happened?

Bazaar hooks are written in Python, so we've written our hook as a function called run_nose. Our run_nose function checks to make sure that the repository we're working with is local, then changes directories into the repository and runs Nose. We registered run_nose as a hook by calling branch.Branch.hooks.install_named_hook.
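The chdir line deserves a closer look: a local Bazaar branch reports its base as a file:// URL, and the hook converts that into a filesystem path by dropping the seven-character file:// prefix and reassembling the pieces with the local path separator. Here is that conversion as a standalone sketch (url_to_path is a name made up for the illustration; the hook inlines the same expression):

```python
from os.path import join, sep

def url_to_path(base):
    """Convert a file:// URL, as reported by a local Bazaar branch,
    into a filesystem path (Unix-style URL assumed)."""
    if not base.startswith('file://'):
        raise ValueError('not a local file:// URL: %r' % base)
    # base[7:] strips 'file://', leaving e.g. '/home/user/repo';
    # join(sep, ...) rebuilds the path with the platform's separator.
    return join(sep, *base[7:].split('/'))
```

For example, on a Unix-like system url_to_path('file:///home/user/repo') gives /home/user/repo, which is what the hook passes to chdir before invoking nosetests.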


Mercurial

Like Bazaar, Mercurial is a distributed version control system, with hooks that are managed by each user individually. Mercurial's hooks themselves, though, take a rather different form. You can find Mercurial on the web at http://www.selenic.com/mercurial/.

If you don't have Mercurial installed and don't plan to use it, you can skip this section.

Mercurial hooks can go in several different places. The two most useful are in your personal configuration file and in your repository configuration file.

Your personal configuration file is ~/.hgrc on Unix-like systems, and %USERPROFILE%\Mercurial.ini (which usually means c:\Documents and Settings\<username>\Mercurial.ini) on Windows-based systems.

Your repository configuration file is stored in a subdirectory of the repository, specifically .hg/hgrc, on all systems.

Time for action – installing Nose as a Mercurial post-commit hook

  1. We'll use the repository configuration file to store the hook, which means that the first thing we have to do is create a repository to work with. Make a new directory at a convenient place and execute the following command in it:

    $ hg init

  2. One side-effect of that command is that a .hg subdirectory got created. Change to that directory, and then create a text file called hgrc containing the following text:

    [hooks]
    commit = nosetests

  3. Back in the repository directory (i.e. the parent of the .hg directory), we need some tests for Nose to run. Create a file called test_simple.py containing the following (admittedly silly) tests:

    from unittest import TestCase

    class test_simple(TestCase):
        def test_one(self):
            self.assertNotEqual("Testing", "Hooks")

        def test_two(self):
            self.assertEqual("Same", "Same")

  4. Run the following commands to add the test file and commit it to the repository:

    $ hg add
    $ hg commit


  5. Notice that the commit triggered a run-through of the tests. Since we put the hook in the repository configuration file, it will only take effect on commits to this repository. If we'd instead put it into our personal configuration file, it would be called whenever we committed to any repository.

What just happened?

Mercurial's hooks are commands, just like you would enter into your operating system's command shell (also known as a DOS prompt on Windows). We just had to edit Mercurial's configuration file and tell it which command to run. Since we wanted it to run our Nose test suite when we commit, we set the commit hook to nosetests.
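External commands aren't the only option: Mercurial also supports in-process Python hooks, where the hook value takes the form python:module.function and Mercurial imports the module and calls the function directly. A sketch of what our commit hook could look like in that style (run_nose and the command parameter are names chosen for the illustration; check Mercurial's hook documentation for the exact calling convention of your version):

```python
# myhooks.py -- point the hook at this function in .hg/hgrc:
#   [hooks]
#   commit = python:myhooks.run_nose
from subprocess import call

def run_nose(ui, repo, command=('nosetests',), **kwargs):
    """Run the test suite from the repository root after a commit.

    Mercurial passes in a ui object and the repository; a true
    return value signals failure to Mercurial.
    """
    return call(list(command), cwd=repo.root) != 0
```

The advantage over a shell command is that the hook function gets the repository object itself, so it always runs the tests from the repository root no matter where the commit was issued from.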

Git

Git is a distributed version control system. Similar to Bazaar and Mercurial, it allows every user to control their own hooks, without involving other developers or server administrators.

Git hooks are stored in the .git/hooks/ subdirectory of the repository, each in its own file.

If you don't have Git installed, and don't plan to use it, you can skip this section.

Time for action – installing Nose as a Git post-commit hook

  1. The hooks are stored in a subdirectory of a Git repository, so the first thing that we need to do is initialize a repository. Make a new directory for the Git repository and execute the following command inside of it:
    $ git init
  2. Git hooks are executable programs, so they can be written in any language. To run Nose, it makes sense to use a shell script (on Unix-like systems) or batch file (on Windows) for the hook. If you're using a Unix-like system, place the following two lines into a file called post-commit in the .git/hooks/ subdirectory, and then use the chmod +x post-commit command to make it executable.
    #!/bin/sh
    nosetests

    If you're using a Windows system, place the following lines inside a file called post-commit.bat in the .git\hooks\ subdirectory.

    @echo off
    nosetests

  3. We need to put some test code in the repository directory (that is, the parent of the .git directory), so that Nose has something to do. Place the following (useless) code into a file called test_simple.py:

    from unittest import TestCase

    class test_simple(TestCase):
        def test_one(self):
            self.assertNotEqual("Testing", "Hooks")

        def test_two(self):
            self.assertEqual("Same", "Same")

  4. Run the following commands to add the test file and commit it to the repository:

    $ git add test_simple.py
    $ git commit -a


  5. Notice that the commit triggered an execution of Nose and printed out the test results.
    Because each repository has its own hooks, only the repositories that were specifically configured to run Nose will do so.

What just happened?

Git finds its hooks by looking for programs with specific names, so we could have used any programming language to write our hook, as long as we gave the program the right name. However, all we want is to run the nosetests command, so a simple shell script or batch file suffices: it invokes the nosetests program, and then terminates.
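Since Git only cares about the file's name and its executable bit, the hook could just as well be written in Python. A sketch of an equivalent .git/hooks/post-commit (remember chmod +x; the command parameter exists only so the sketch can be exercised with something other than nosetests):

```python
#!/usr/bin/env python
# .git/hooks/post-commit written in Python -- any executable file
# with the right name works as a Git hook.
import subprocess

def main(command=('nosetests',)):
    """Run the test command and return its exit status.

    Git ignores a post-commit hook's status, but returning it keeps
    the same script usable as a pre-commit hook, where it matters.
    """
    return subprocess.call(list(command))

# As an installed hook, the file would end with:
#     import sys
#     sys.exit(main())
```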

Darcs

Darcs is a distributed version control system. Each user has control over their own set of hooks.

If you don't have Darcs installed, and you don't plan to use it, you can skip this section.

Time for action – installing Nose as a Darcs post-record hook

  1. Each local repository has its own set of hooks, so the first thing we need to do is create a repository. Make a directory to work in, and execute the following command in it:
    $ darcs initialize
  2. We need to put some test code in the repository directory so that Nose has something to do. Place the following (useless) code into a file called test_simple.py.

    from unittest import TestCase

    class test_simple(TestCase):
        def test_one(self):
            self.assertNotEqual("Testing", "Hooks")

        def test_two(self):
            self.assertEqual("Same", "Same")

  3. Run the following command to add the test file to the repository:
    $ darcs add test_simple.py
  4. Darcs hooks are identified using command line options. In this case, we want to run nosetests after we tell Darcs to record changes, so we use the following command:

    $ darcs record --posthook=nosetests


  5. Notice that Darcs ran our test suite once it was done recording the changes, and reported the results to us.
  6. That's well and good, but Darcs doesn't remember that we want nosetests to be a post-record hook. As far as it's concerned, that was a one-time deal. Fortunately, we can tell it otherwise. Create a file called defaults in the _darcs/prefs/ subdirectory, and place the following text into it:
    record posthook nosetests
  7. Now if we change the code and record again, nosetests should run without us specifically asking for it. Make the following change to test_simple.py:

    from unittest import TestCase

    class test_simple(TestCase):
        def test_one(self):
            self.assertNotEqual("Testing", "Hooks")

        def test_two(self):
            self.assertEqual("Same", "Same!")

  8. Run the following command to record the change and run the tests:
    $ darcs record


  9. If you want to skip the tests for a commit, you can pass the --no-posthook command line option when you record your changes.

What just happened?

Darcs hooks are specified as command line options, so when we issue the record command we need to specify a program to run as a hook. Since we don't want to do that manually every time we record changes, we make use of Darcs' ability to read additional command line options from its defaults file. This lets us make running nosetests as a post-record hook the default behavior.

Subversion

Unlike the other version control systems that we've discussed, Subversion is a centralized one. There is a single server tasked with keeping track of everybody's changes, which also handles running hooks. This means that there is a single set of hooks that applies to everybody, probably under control of a system administrator.

Subversion hooks are stored in files in the hooks/ subdirectory of the server's repository.

If you don't have Subversion and don't plan on using it, you can skip this section.

Time for action – installing Nose as a Subversion post-commit hook

Because Subversion operates on a centralized, client-server architecture, we'll need both the client and server set up for this example. They can both be on the same computer, but they'll need to be in different directories.

  1. First we need a server. You can create one by making a new directory called svnrepo and executing the following command:
    $ svnadmin create svnrepo/
  2. Now we need to configure the server to accept commits from us. To do this, we open up the file called conf/passwd and add the following line at the bottom:
    testuser = testpass
  3. Then we need to edit conf/svnserve.conf, and change the line reading # password-db = passwd to password-db = passwd.
  4. The Subversion server needs to be running before we can interact with it. Make sure that you're in the svnrepo directory and then run the following command:
    $ svnserve -d -r ..
  5. Next we need to import some test code into the Subversion repository. Make a directory and place the following (simple and silly) code into it in a file called test_simple.py:
    from unittest import TestCase

    class test_simple(TestCase):
        def test_one(self):
            self.assertNotEqual("Testing", "Hooks")

        def test_two(self):
            self.assertEqual("Same", "Same")

    You can perform the import by executing:

    $ svn import --username=testuser --password=testpass svn://localhost/svnrepo/

    That command is likely to print out a gigantic, scary message about remembering passwords. In spite of the warnings, just say yes.

  6. Now that we've got the code imported, we need to check out a copy of it to work on. We can do this with the following command:
    $ svn checkout --username=testuser --password=testpass svn://localhost/svnrepo/ svn

    From here on in this example, we'll assume that the Subversion server is running in a Unix-like environment (the clients might be running on Windows; we don't care). The reason for this is that the details of the post-commit hook are significantly different on systems that don't have a Unix-style shell scripting language, although the concepts remain the same.

  7. The following code goes into a file called hooks/post-commit inside the Subversion server's repository. (Each svn command belongs on a single line, however it may appear on the page.)

    #!/bin/sh
    REPO="$1"

    if /usr/bin/test -e "$REPO/working"; then
        /usr/bin/svn update --username=testuser --password=testpass "$REPO/working/";
    else
        /usr/bin/svn checkout --username=testuser --password=testpass svn://localhost/svnrepo/ "$REPO/working/";
    fi

    cd "$REPO/working/"
    exec /usr/bin/nosetests

  8. Use the chmod +x post-commit command to make the hook executable.
  9. Change to the svn directory created by the checkout in step 6, and edit test_simple.py to make one of the tests fail. We do this because if the tests all pass, Subversion won't show us anything to indicate that they were run at all. We only get feedback if they fail.

    from unittest import TestCase

    class test_simple(TestCase):
        def test_one(self):
            self.assertNotEqual("Testing", "Hooks")

        def test_two(self):
            self.assertEqual("Same", "Same!")

  10. Now commit the changes using the following command:

    $ svn commit --username=testuser --password=testpass


  11. Notice that the commit triggered the execution of Nose, and that if any of the tests fail, Subversion shows us the errors.

Because Subversion has one central set of hooks, they apply automatically to anybody who uses the repository.

What just happened?

Subversion hooks are run on the server. Subversion locates its hooks by looking for programs with specific names, so we needed to create a program called post-commit to serve as the post-commit hook. We could have used any programming language to write the hook, as long as the program had the right name, but we chose a shell script for simplicity's sake.

Automated continuous integration

Automated continuous integration tools are a step beyond using a version control hook to run your tests when you commit code to the repository. Instead of running your test suite once, an automated continuous integration system compiles your code (if need be) and runs your tests many times, in many different environments.

An automated continuous integration system might, for example, run your tests under Python versions 2.4, 2.5, and 2.6 on each of Windows, Linux, and Mac OS X. This not only lets you know about errors in your code, but also about unexpected problems caused by the external environment. It's nice to know when that last patch broke the program on Windows, even though it worked like a charm on your Linux box.
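The core of that idea fits in a few lines: run the same test command under each of several interpreters and collect the pass/fail results. The sketch below is not how Buildbot works internally (run_everywhere is a name invented here, and the interpreter paths depend entirely on what's installed on your machines); it just shows the essential loop that a continuous integration system automates across many build environments.

```python
import subprocess

def run_everywhere(interpreters, test_args=('-m', 'unittest', 'discover')):
    """Run one test command under each interpreter and report which
    environments passed."""
    results = {}
    for python in interpreters:
        # A zero exit status means the test run succeeded.
        status = subprocess.call([python] + list(test_args))
        results[python] = (status == 0)
    return results

# e.g. run_everywhere(['python2.4', 'python2.5', 'python2.6'])
```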

Buildbot

Buildbot is a popular automated continuous integration tool. Using Buildbot, you can create a network of 'build slaves' that will check your code each time you commit it to your repository. This network can be quite large, and it can be distributed around the Internet, so Buildbot works even for projects with lots of developers spread around the world.

Buildbot's home page is at http://buildbot.net/. Following links from that site, you can find the manual and download the latest version of the tool. Glossing over details that we've discussed several times before, installation requires you to unpack the archive, and then run the commands python setup.py build, and python setup.py install --user.

Buildbot operates in one of two modes, termed buildmaster and buildslave. A buildmaster manages a network of buildslaves, while the buildslaves run the tests in their assorted environments.

Time for action – using Buildbot with Bazaar

  1. To set up a buildmaster, create a directory for it to operate in and then run the command:
    $ buildbot create-master <directory>

    where <directory> is the directory you just created for buildbot to work in.

  2. Similarly, to set up a buildslave, create a directory for it to operate in and then run the command:

    $ buildbot create-slave <directory> <host:port> <name> <password>

    where <directory> is the directory you just created for the buildbot to work in, <host:port> are the internet host and port where the buildmaster can be found, and <name> and <password> are the login information that identifies this buildslave to the buildmaster. All of this information (except the directory) is determined by the operator of the buildmaster.

  3. You should edit <directory>/info/admin and <directory>/info/host to contain the email address you want associated with this buildslave and a description of the buildslave's operating environment, respectively.
  4. On both the buildmaster and the buildslave, you'll need to start up the buildbot background process. To do this, use the command:
    $ buildbot start <directory>
  5. Configuring a buildmaster is a significant topic in itself (and one that we won't be addressing in detail); it's fully described in Buildbot's own documentation. We will provide a simple configuration file, though, for reference and quick setup. This particular configuration file assumes that you're using Bazaar, but it is not significantly different for other version control systems. The following goes in the buildmaster's <directory>/master.cfg file:

    # -*- python -*-
    # ex: set syntax=python:

    c = BuildmasterConfig = {}

    c['projectName'] = "<replace with project name>"
    c['projectURL'] = "<replace with project url>"
    c['buildbotURL'] = "http://<replace with master url>:8010/"

    c['status'] = []
    from buildbot.status import html
    c['status'].append(html.WebStatus(http_port=8010, allowForce=True))

    c['slavePortnum'] = 9989
    from buildbot.buildslave import BuildSlave
    c['slaves'] = [
        BuildSlave("bot1name", "bot1passwd"),
    ]

    from buildbot.changes.pb import PBChangeSource
    c['change_source'] = PBChangeSource()

    from buildbot.scheduler import Scheduler
    c['schedulers'] = []
    c['schedulers'].append(Scheduler(name="all", branch=None,
                                     treeStableTimer=2 * 60,
                                     builderNames=["buildbot-full"]))

    from buildbot.process import factory
    from buildbot.steps.source import Bzr
    from buildbot.steps.shell import Test

    f1 = factory.BuildFactory()
    f1.addStep(Bzr(repourl="<replace with repository url>"))
    f1.addStep(Test(command='nosetests'))

    b1 = {'name': "buildbot-full",
          'slavename': "bot1name",
          'builddir': "full",
          'factory': f1,
          }
    c['builders'] = [b1]

  6. To make effective use of that Buildbot config, you also need to install a version control hook that notifies Buildbot of changes. Generically, this can be done by calling the buildbot sendchange command from a hook, but there's a nicer way to tie in with Bazaar: copy the contrib/bzr_buildbot.py file from the Buildbot distribution archive into your Bazaar plugins directory, and then edit the locations.conf file, which you should find right next to the plugins directory. Add the following entry to locations.conf:

    [<your repository path>]
    buildbot_on = change
    buildbot_server = <internet address of your buildmaster>
    buildbot_port = 9989

    You'll need to add similar entries for each repository that you want to be connected to buildbot.

  7. Once the buildmaster and buildslaves are configured and started, and buildbot is hooked into your version control system, you're in business.

What just happened?

We just set up Buildbot to run our tests whenever it notices that our source code has changed, waiting until the tree has been stable for two minutes (the treeStableTimer of 2 * 60 seconds in the configuration) before starting the build.

We told it to run the tests by adding a build step that runs nosetests:

f1.addStep(Test(command = 'nosetests'))

You'll be able to see a report of the Buildbot status in your web browser, by navigating to the buildbotURL that you configured in the master.cfg file. One of the most useful reports is the so-called 'waterfall' view, which shows at a glance whether the most recent commit passed the tests or failed them. Either way, you'll also see a history of earlier versions, and whether or not they passed the tests, as well as who made the changes, when, and what the output of the test command looked like.

Summary

We learned a lot in this chapter about code coverage and plugging our tests into the other automation systems that we use while writing software.

Specifically, we covered:

  • What code coverage is, and what it can tell us about our tests
  • How to run Nose automatically when our version control software detects changes in the source code
  • How to set up the Buildbot automated continuous integration system


About the Author


Daniel Arbuckle

Daniel Arbuckle holds a Ph.D. in Computer Science from the University of Southern California. While at USC, he performed original research in the Interaction Lab (part of the Center for Robotics and Embedded Systems) and the Laboratory for Molecular Robotics (now part of the Nanotechnology Research Laboratory). His work has been published in peer-reviewed journals and in the proceedings of international conferences.
