How do you know when code you have written is working as intended? Well, you test it. But how? For a web application, you can test the code by manually bringing up the pages of your application in a web browser and verifying that they are correct. This involves more than a quick glance to see whether they have the correct content, as you must also ensure, for example, that all the links work and that any forms work properly. As you can imagine, this sort of manual testing quickly becomes impossible to rely on as an application grows beyond a few simple pages. For any non-trivial application, automated testing is essential.
Automated testing of Django applications makes use of the fundamental test support built into the Python language: doctests and unit tests. When you create a new Django application with manage.py startapp
, one of the generated files contains a sample doctest and unit test, intended to jump-start your own test writing. In this chapter, we will begin our study of testing Django applications. Specifically, we will:
Examine in detail the contents of the sample tests.py file, reviewing the fundamentals of Python's test support as we do so
See how to use Django utilities to run the tests contained in tests.py
Learn how to interpret the output of the tests, both when the tests succeed and when they fail
Review the effects of the various command-line options that can be used when testing
Let's get started by creating a new Django project and application. Just so we have something consistent to work with throughout this book, let's assume we are setting out to create a new market-research type website. At this point, we don't need to decide much about this site except some names for the Django project and at least one application that it will include. As market_research
is a bit long, let's shorten that to marketr
for the project name. We can use django-admin.py
to create a new Django project:
kmt@lbox:/dj_projects$ django-admin.py startproject marketr
Then, from within the new marketr
directory, we can create a new Django application using the manage.py
utility. One of the core applications for our market research project will be a survey application, so we will start by creating it:
kmt@lbox:/dj_projects/marketr$ python manage.py startapp survey
Now we have the basic skeleton of a Django project and application: a settings.py
file, a urls.py
file, the manage.py
utility, and a survey
directory containing .py
files for models, views, and tests. There is nothing of substance placed in the auto-generated models and views files, but in the tests.py
file there are two sample tests: one unit test and one doctest. We will examine each in detail next.
The unit test is the first test contained in tests.py
, which begins:
"""
This file demonstrates two different styles of tests (one doctest and one unittest). These will both pass when you run "manage.py test".
Replace these with more appropriate tests for your application.
"""
from django.test import TestCase
class SimpleTest(TestCase):
    def test_basic_addition(self):
        """
        Tests that 1 + 1 always equals 2.
        """
        self.failUnlessEqual(1 + 1, 2)
The unit test starts by importing TestCase
from django.test
. The django.test.TestCase
class is based on Python's unittest.TestCase
, so it provides everything from the underlying Python unittest.TestCase
plus features useful for testing Django applications. These Django extensions to unittest.TestCase
will be covered in detail in Chapter 3, Testing 1, 2, 3: Basic Unit Testing and Chapter 4, Getting Fancier: Django Unit Test Extensions. The sample unit test here doesn't actually need any of that support, but it does not hurt to base the sample test case on the Django class anyway.
The sample unit test then declares a SimpleTest
class based on Django's TestCase
, and defines a test method named test_basic_addition
within that class. That method contains a single statement:
self.failUnlessEqual(1 + 1, 2)
As you might expect, that statement will cause the test case to report a failure unless the two provided arguments are equal. As coded, we'd expect that test to succeed, and we will verify that later in this chapter, when we get to actually running the tests.
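Django's TestCase inherits the full set of unittest assertion methods, not just failUnlessEqual. As a minimal sketch (this extra method is purely illustrative and not part of the generated file), a second test method added inside the SimpleTest class could exercise a few of them:

    def test_more_arithmetic(self):
        """
        Illustrative only: other unittest-style assertions available on TestCase.
        """
        self.failIfEqual(1 + 1, 3)            # passes because the two values differ
        self.failUnless(2 > 1)                # passes because the expression is true
        self.failUnlessRaises(ZeroDivisionError, lambda: 1 / 0)  # passes because the exception is raised

For now, though, let's take a closer look at the sample doctest.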
The doctest portion of the sample tests.py
is:
__test__ = {"doctest": """
Another way to test that 1 + 1 is equal to 2.
>>> 1 + 1 == 2
True
"""}
That looks a bit more mysterious than the unit test half. For the sample doctest, a special variable, __test__
, is declared. This variable is set to be a dictionary containing one key, doctest
. This key is set to a string value that resembles a docstring containing a comment followed by what looks like a snippet from an interactive Python shell session.
The part that looks like an interactive Python shell session is what makes up the doctest. That is, lines that start with >>>
will be executed (minus the >>>
prefix) during the test, and the actual output produced will be compared to the expected output found in the doctest below the line that starts with >>>
. If any actual output fails to match the expected output, the test fails. For this sample test, we would expect entering 1 + 1 == 2
in an interactive Python shell session to result in the interpreter producing the output True
, so again it looks like this sample test should pass.
Note that doctests do not have to be defined by using this special __test__
dictionary. In fact, Python's doctest test runner looks for doctests within all the docstrings found in the file. In Python, a docstring is a string literal that is the first statement in a module, function, class, or method definition. Given that, you'd expect snippets from an interactive Python shell session found in the comment at the very top of this tests.py
file to also be run as a doctest. This is another thing we can experiment with once we start running these tests, which we'll do next.
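As a concrete illustration (this function is hypothetical, not part of the generated file), a doctest embedded in an ordinary function's docstring in tests.py would be picked up and run in the same way:

def add(a, b):
    """
    Return the sum of a and b.

    >>> add(1, 1)
    2
    >>> add(2, 2) == 5
    False
    """
    return a + b

Here the test runner would execute add(1, 1) and add(2, 2) == 5, comparing the actual results against the expected 2 and False.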
The comment at the top of the sample tests.py
file states that the two tests will both pass when you run "manage.py test"
. So let's see what happens if we try that:
kmt@lbox:/dj_projects/marketr$ python manage.py test
Creating test database...
Traceback (most recent call last):
File "manage.py", line 11, in <module>
execute_manager(settings)
File "/usr/lib/python2.5/site-packages/django/core/management/__init__.py", line 362, in execute_manager
utility.execute()
File "/usr/lib/python2.5/site-packages/django/core/management/__init__.py", line 303, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python2.5/site-packages/django/core/management/base.py", line 195, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/lib/python2.5/site-packages/django/core/management/base.py", line 222, in execute
output = self.handle(*args, **options)
File "/usr/lib/python2.5/site-packages/django/core/management/commands/test.py", line 23, in handle
failures = test_runner(test_labels, verbosity=verbosity, interactive=interactive)
File "/usr/lib/python2.5/site-packages/django/test/simple.py", line 191, in run_tests
connection.creation.create_test_db(verbosity, autoclobber=not interactive)
File "/usr/lib/python2.5/site-packages/django/db/backends/creation.py", line 327, in create_test_db
test_database_name = self._create_test_db(verbosity, autoclobber)
File "/usr/lib/python2.5/site-packages/django/db/backends/creation.py", line 363, in _create_test_db
cursor = self.connection.cursor()
File "/usr/lib/python2.5/site-packages/django/db/backends/dummy/base.py", line 15, in complain
raise ImproperlyConfigured, "You haven't set the DATABASE_ENGINE setting yet."
django.core.exceptions.ImproperlyConfigured: You haven't set the DATABASE_ENGINE setting yet.
Oops, we seem to have gotten ahead of ourselves here. We created our new Django project and application, but never edited the settings file to specify any database information. Clearly we need to do that in order to run the tests.
But will the tests use the production database we specify in settings.py
? That could be worrisome, since we might at some point code something in our tests that we wouldn't necessarily want to do to our production data. Fortunately, it's not a problem. The Django test runner creates an entirely new database for running the tests, uses it for the duration of the tests, and deletes it at the end of the test run. The name of this database is test_
followed by DATABASE_NAME
specified in settings.py
. So running tests will not interfere with production data.
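For instance, with DATABASE_NAME set to marketr, the tests would run against an automatically created test_marketr database. A sketch of the Django 1.1-era database settings involved (the engine and credentials here are assumptions for illustration; use whatever matches your own environment):

DATABASE_ENGINE = 'mysql'        # or 'postgresql_psycopg2', 'sqlite3', etc.
DATABASE_NAME = 'marketr'        # the test runner creates and uses 'test_marketr'
DATABASE_USER = 'kmt'            # credentials are illustrative assumptions
DATABASE_PASSWORD = 'secret'
DATABASE_HOST = ''
DATABASE_PORT = ''
# Note: with the sqlite3 engine, the test database is created in memory instead.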
In order to run the sample tests.py
file, we need to first set appropriate values for DATABASE_ENGINE
, DATABASE_NAME
, and whatever else may be required for the database we are using in settings.py
. Now would also be a good time to add our survey
application and django.contrib.admin
to INSTALLED_APPS
, as we will need both of those as we proceed. Once those changes have been made to settings.py
, manage.py test
works better:
kmt@lbox:/dj_projects/marketr$ python manage.py test
Creating test database...
Creating table auth_permission
Creating table auth_group
Creating table auth_user
Creating table auth_message
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Installing index for auth.Permission model
Installing index for auth.Message model
Installing index for admin.LogEntry model
...................................
----------------------------------------------------------------------
Ran 35 tests in 2.012s
OK
Destroying test database...
That looks good. But what exactly got tested? Towards the end it says Ran 35 tests
, so there were certainly more tests run than the two tests in our simple tests.py
file. The other 33 tests are from the other applications listed by default in settings.py
: auth, content types, sessions, and sites. These Django "contrib" applications ship with their own tests, and by default, manage.py test
runs the tests for all applications listed in INSTALLED_APPS
.
Note
Note that if you do not add django.contrib.admin
to the INSTALLED_APPS
list in settings.py
, then manage.py test
may report some test failures. With Django 1.1, some of the tests for django.contrib.auth
rely on django.contrib.admin
also being included in INSTALLED_APPS
in order for the tests to pass. That inter-dependence may be fixed in the future, but for now it is easiest to avoid the possible errors by including django.contrib.admin
in INSTALLED_APPS
from the start. We will want to use it soon enough anyway.
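For reference, a sketch of what the INSTALLED_APPS setting then looks like, assuming the default list generated by startproject plus our two additions:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.admin',    # added so the auth tests pass; we will want it soon anyway
    'survey',                  # our new application
)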
It is possible to run just the tests for certain applications. To do this, specify the application names on the command line. For example, to run only the survey
application tests:
kmt@lbox:/dj_projects/marketr$ python manage.py test survey
Creating test database...
Creating table auth_permission
Creating table auth_group
Creating table auth_user
Creating table auth_message
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Installing index for auth.Permission model
Installing index for auth.Message model
Installing index for admin.LogEntry model
..
----------------------------------------------------------------------
Ran 2 tests in 0.039s
OK
Destroying test database...
There—Ran 2 tests
looks right for our sample tests.py
file. But what about all those messages about tables being created and indexes being installed? Why were the tables for these applications created when their tests were not going to be run? The reason for this is that the test runner does not know what dependencies may exist between the application(s) that are going to be tested and others listed in INSTALLED_APPS
that are not going to be tested.
For example, our survey application could have a model with a ForeignKey
to the django.contrib.auth User
model, and tests for the survey application may rely on being able to add and query User
entries. This would not work if the test runner neglected to create tables for the applications excluded from testing. Therefore, the test runner creates the tables for all applications listed in INSTALLED_APPS
, even those for which tests are not going to be run.
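As a hypothetical sketch of such a dependency (the generated survey application does not yet define any models), a survey model tied to the auth application's User model might look like:

from django.db import models
from django.contrib.auth.models import User

class Survey(models.Model):
    title = models.CharField(max_length=60)
    opened_by = models.ForeignKey(User)    # requires the auth_user table to exist

Tests that create or query Survey instances would then need the auth tables to be present in the test database, even when only the survey application's tests are being run.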
We now know how to run tests, how to limit the testing to just the application(s) we are interested in, and what a successful test run looks like. But, what about test failures? We're likely to encounter a fair number of those in real work, so it would be good to make sure we understand the test output when they occur. In the next section, then, we will introduce some deliberate breakage so that we can explore what failures look like and ensure that when we encounter real ones, we will know how to properly interpret what the test run is reporting.
Let's start by introducing a single, simple failure. Change the unit test to expect that adding 1 + 1
will result in 3
instead of 2
. That is, change the single statement in the unit test to be: self.failUnlessEqual(1 + 1, 3)
.
Now when we run the tests, we will get a failure:
kmt@lbox:/dj_projects/marketr$ python manage.py test
Creating test database...
Creating table auth_permission
Creating table auth_group
Creating table auth_user
Creating table auth_message
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Installing index for auth.Permission model
Installing index for auth.Message model
Installing index for admin.LogEntry model
...........................F.......
======================================================================
FAIL: test_basic_addition (survey.tests.SimpleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/dj_projects/marketr/survey/tests.py", line 15, in test_basic_addition
self.failUnlessEqual(1 + 1, 3)
AssertionError: 2 != 3
----------------------------------------------------------------------
Ran 35 tests in 2.759s
FAILED (failures=1)
Destroying test database...
That looks pretty straightforward. The failure has produced a block of output starting with a line of equal signs and then the specifics of the test that has failed. The failing method is identified, as well as the class containing it. There is a Traceback
that shows the exact line of code that has generated the failure, and the AssertionError
shows details of the cause of the failure.
Notice the line above the equal signs—it contains a bunch of dots and one F
. What does that mean? This is a line we overlooked in the earlier test output listings. If you go back and look at them now, you'll see there has always been a line with some number of dots after the last Installing index
message. This line is generated as the tests are run, and what is printed depends on the test results: an F means a test failed, while a dot means a test passed. When there are enough tests that they take a while to run, this real-time progress update can be useful for getting a sense of how the run is going while it is in progress.
Finally at the end of the test output, we see FAILED (failures=1)
instead of the OK
we had seen previously. Any test failures make the overall test run outcome a failure instead of a success.
Next, let's see what a failing doctest looks like. If we restore the unit test back to its original form and change the doctest to expect the Python interpreter to respond True
to 1 + 1 == 3
, running the tests (restricting the tests to only the survey
application this time) will then produce this output:
kmt@lbox:/dj_projects/marketr$ python manage.py test survey
Creating test database...
Creating table auth_permission
Creating table auth_group
Creating table auth_user
Creating table auth_message
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Installing index for auth.Permission model
Installing index for auth.Message model
Installing index for admin.LogEntry model
.F
======================================================================
FAIL: Doctest: survey.tests.__test__.doctest
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/django/test/_doctest.py", line 2180, in runTest
raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for survey.tests.__test__.doctest
File "/dj_projects/marketr/survey/tests.py", line unknown line number, in doctest
----------------------------------------------------------------------
File "/dj_projects/marketr/survey/tests.py", line ?, in survey.tests.__test__.doctest
Failed example:
1 + 1 == 3
Expected:
True
Got:
False
----------------------------------------------------------------------
Ran 2 tests in 0.054s
FAILED (failures=1)
Destroying test database...
The output from the failing doctest is a little more verbose and a bit less straightforward to interpret than the unit test failure. The failing doctest is identified as survey.tests.__test__.doctest
—this means the key doctest
in the __test__
dictionary defined within the survey/tests.py
file. The Traceback
portion of the output is not as useful as it was in the unit test case as the AssertionError
simply notes that the doctest failed. Fortunately, details of what caused the failure are then provided, and you can see the content of the line that caused the failure, what output was expected, and what output was actually produced by executing the failing line.
Note, though, that the test runner does not pinpoint the line number within tests.py
where the failure occurred. It reports unknown line number
and line ?
in different portions of the output. Is this a general problem with doctests or perhaps a result of the way in which this particular doctest is defined, as part of the __test__
dictionary? We can answer that question by putting a test in the docstring at the top of tests.py
. Let's restore the sample doctest to its original state and change the top of the file to look like this:
"""
This file demonstrates two different styles of tests (one doctest and one unittest). These will both pass when you run "manage.py test".
Replace these with more appropriate tests for your application.
>>> 1 + 1 == 3
True
"""
Then when we run the tests we get:
kmt@lbox:/dj_projects/marketr$ python manage.py test survey
Creating test database...
Creating table auth_permission
Creating table auth_group
Creating table auth_user
Creating table auth_message
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Installing index for auth.Permission model
Installing index for auth.Message model
Installing index for admin.LogEntry model
.F.
======================================================================
FAIL: Doctest: survey.tests
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/django/test/_doctest.py", line 2180, in runTest
raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for survey.tests
File "/dj_projects/marketr/survey/tests.py", line 0, in tests
----------------------------------------------------------------------
File "/dj_projects/marketr/survey/tests.py", line 7, in survey.tests
Failed example:
1 + 1 == 3
Expected:
True
Got:
False
----------------------------------------------------------------------
Ran 3 tests in 0.052s
FAILED (failures=1)
Destroying test database...
Here line numbers are provided. The Traceback
portion apparently identifies the line above the line where the docstring containing the failing test line begins (the docstring starts on line 1
while the traceback reports line 0
). The detailed failure output identifies the actual line in the file that causes the failure, in this case line 7
.
The inability to pinpoint line numbers is thus a side-effect of defining the doctest within the __test__
dictionary. While it doesn't cause much of a problem here, as it is trivial to see what line is causing the problem in our simple test, it's something to keep in mind when writing more substantial doctests to be placed in the __test__
dictionary. If multiple lines in the test are identical and one of them causes a failure, it may be difficult to identify which exact line is causing the problem, as the failure output won't identify the specific line number where the failure occurred.
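One way to reduce the ambiguity (a sketch of a convention, not something the generated file does) is to split doctests across several named keys in the __test__ dictionary, since each key is run and reported as a separate test:

__test__ = {
    "addition": """
>>> 1 + 1 == 2
True
""",
    "subtraction": """
>>> 2 - 1 == 1
True
""",
}

A failure would then at least identify which named doctest, such as survey.tests.__test__.subtraction, contained the offending line.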
So far all of the mistakes we have introduced into the sample tests have involved expected output not matching actual results. These are reported as test failures. In addition to test failures, we may sometimes encounter test errors. These are described next.
To see what a test error looks like, let's remove the failing doctest introduced in the previous section and introduce a different kind of mistake into our sample unit test. Let's assume that instead of wanting to test that 1 + 1
equals the literal 2
, we want to test that it equals the result of a function, sum_args
, that is supposed to return the sum of its arguments. But we're going to make a mistake and forget to import that function. So change self.failUnlessEqual
to:
self.failUnlessEqual(1 + 1, sum_args(1, 1))
Now when the tests are run we see:
kmt@lbox:/dj_projects/marketr$ python manage.py test survey
Creating test database...
Creating table auth_permission
Creating table auth_group
Creating table auth_user
Creating table auth_message
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Installing index for auth.Permission model
Installing index for auth.Message model
Installing index for admin.LogEntry model
E.
======================================================================
ERROR: test_basic_addition (survey.tests.SimpleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/dj_projects/marketr/survey/tests.py", line 15, in test_basic_addition
self.failUnlessEqual(1 + 1, sum_args(1, 1))
NameError: global name 'sum_args' is not defined
----------------------------------------------------------------------
Ran 2 tests in 0.041s
FAILED (errors=1)
Destroying test database...
The test runner encountered an exception before it even got to the point where it could compare 1 + 1
to the return value of sum_args
, as sum_args
was not imported. In this case, the error is in the test itself, but it would still have been reported as an error, not a failure, if the code in sum_args
was what caused a problem. Failures mean actual results didn't match what was expected, whereas errors mean some other problem (an exception) was encountered during the test run. An error may indicate a mistake in the test itself, but it does not have to.
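For illustration, one way such a test could eventually be made to pass would be to actually define and import the helper. A sketch (the module name and implementation are assumptions; the chapter intentionally leaves sum_args undefined in order to provoke the error):

# survey/utils.py (hypothetical module)
def sum_args(*args):
    """Return the sum of all positional arguments."""
    return sum(args)

with a corresponding from survey.utils import sum_args added at the top of tests.py.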
Note that a similar error made in a doctest is reported as a failure, not an error. For example, we can change the doctest 1 + 1
line to:
>>> 1 + 1 == sum_args(1, 1)
If we then run the tests, the output will be:
kmt@lbox:/dj_projects/marketr$ python manage.py test survey
Creating test database...
Creating table auth_permission
Creating table auth_group
Creating table auth_user
Creating table auth_message
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Installing index for auth.Permission model
Installing index for auth.Message model
Installing index for admin.LogEntry model
EF
======================================================================
ERROR: test_basic_addition (survey.tests.SimpleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/dj_projects/marketr/survey/tests.py", line 15, in test_basic_addition
self.failUnlessEqual(1 + 1, sum_args(1, 1))
NameError: global name 'sum_args' is not defined
======================================================================
FAIL: Doctest: survey.tests.__test__.doctest
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/django/test/_doctest.py", line 2180, in runTest
raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for survey.tests.__test__.doctest
File "/dj_projects/marketr/survey/tests.py", line unknown line number, in doctest
----------------------------------------------------------------------
File "/dj_projects/marketr/survey/tests.py", line ?, in survey.tests.__test__.doctest
Failed example:
1 + 1 == sum_args(1, 1)
Exception raised:
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/django/test/_doctest.py", line 1267, in __run
compileflags, 1) in test.globs
File "<doctest survey.tests.__test__.doctest[0]>", line 1, in <module>
1 + 1 == sum_args(1, 1)
NameError: name 'sum_args' is not defined
----------------------------------------------------------------------
Ran 2 tests in 0.044s
FAILED (failures=1, errors=1)
Destroying test database...
Thus, the error versus failure distinction made for unit tests does not necessarily apply to doctests. So, if your tests include doctests, the summary of failure and error counts printed at the end doesn't necessarily reflect how many tests produced unexpected results (unit test failure count) or had some other error (unit test error count). However, in any case, neither failures nor errors are desired. The ultimate goal is to have zero for both, so if the difference between them is a bit fuzzy at times that's not such a big deal. It can be useful though, to understand under what circumstances one is reported instead of the other.
We have now seen how to run tests, and what the results look like for both overall success and a few failures and errors. Next we will examine the various command line options supported by the manage.py test
command.
Beyond specifying the exact applications to test on the command line, what other options are there for controlling the behavior of manage.py
test? The easiest way to find out is to try running the command with the option --help
:
kmt@lbox:/dj_projects/marketr$ python manage.py test --help
Usage: manage.py test [options] [appname ...]
Runs the test suite for the specified applications, or the entire site if no apps are specified.
Options:
-v VERBOSITY, --verbosity=VERBOSITY
Verbosity level; 0=minimal output, 1=normal output,
2=all output
--settings=SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will
be used.
--pythonpath=PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Print traceback on exception
--noinput Tells Django to NOT prompt the user for input of
any kind.
--version show program's version number and exit
-h, --help show this help message and exit
Let's consider each of these in turn (excepting help
, as we've already seen what it does):
Verbosity is a numeric value between 0
and 2
. It controls how much output the tests produce. The default value is 1
, so the output we have seen so far corresponds to specifying -v 1
or --verbosity=1
. Setting verbosity to 0
suppresses all of the messages about creating the test database and tables, but not summary, failure, or error information. If we correct the last doctest failure introduced in the previous section and re-run the tests specifying -v0
, we will see:
kmt@lbox:/dj_projects/marketr$ python manage.py test survey -v0
======================================================================
ERROR: test_basic_addition (survey.tests.SimpleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/dj_projects/marketr/survey/tests.py", line 15, in test_basic_addition
self.failUnlessEqual(1 + 1, sum_args(1, 1))
NameError: global name 'sum_args' is not defined
----------------------------------------------------------------------
Ran 2 tests in 0.008s
FAILED (errors=1)
Setting verbosity to 2
produces a great deal more output. If we fix this remaining error and run the tests with verbosity set to its highest level, we will see:
kmt@lbox:/dj_projects/marketr$ python manage.py test survey --verbosity=2
Creating test database...
Processing auth.Permission model
Creating table auth_permission
Processing auth.Group model
Creating table auth_group
[...more snipped...]
Creating many-to-many tables for auth.Group model
Creating many-to-many tables for auth.User model
Running post-sync handlers for application auth
Adding permission 'auth | permission | Can add permission'
Adding permission 'auth | permission | Can change permission'
[...more snipped...]
No custom SQL for auth.Permission model
No custom SQL for auth.Group model
[...more snipped...]
Installing index for auth.Permission model
Installing index for auth.Message model
Installing index for admin.LogEntry model
Loading 'initial_data' fixtures...
Checking '/usr/lib/python2.5/site-packages/django/contrib/auth/fixtures' for fixtures...
Trying '/usr/lib/python2.5/site-packages/django/contrib/auth/fixtures' for initial_data.xml fixture 'initial_data'...
No xml fixture 'initial_data' in '/usr/lib/python2.5/site-packages/django/contrib/auth/fixtures'.
[....much more snipped...]
No fixtures found.
test_basic_addition (survey.tests.SimpleTest) ... ok
Doctest: survey.tests.__test__.doctest ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.004s
OK
Destroying test database...
As you can see, at this level of verbosity the command reports in excruciating detail all of what it is doing to set up the test database. In addition to the creation of database tables and indexes that we saw earlier, we now see that the database setup phase includes:
Running post-syncdb signal handlers. The django.contrib.auth application, for example, uses this signal to automatically add permissions for models as each application is installed. Thus you see messages about permissions being created as the post-syncdb signal is sent for each application listed in INSTALLED_APPS.
Running custom SQL for each model that has been created in the database. Based on the output, it does not look like any of the applications in INSTALLED_APPS use custom SQL.
Loading initial_data fixtures. Initial data fixtures are a way to automatically pre-populate the database with some constant data. None of the applications we have listed in INSTALLED_APPS make use of this feature, but a great deal of output is produced as the test runner looks for initial data fixtures, which may be found under any of several different names. There are messages for each possible file that is checked and for whether anything was found. This output might come in handy at some point if we run into trouble with the test runner finding an initial data fixture (we'll cover fixtures in detail in Chapter 3), but for now this output is not very interesting.
Once the test runner finishes initializing the database, it settles down to running the tests. At verbosity level 2
, the line of dots, Fs, and Es we saw previously is replaced by a more detailed report of each test as it is run. The name of the test is printed, followed by three dots, then the test result, which will either be ok
, ERROR
, or FAIL
. If there are any errors or failures, the detailed information about why they occurred will be printed at the end of the test run. So as you watch a long test run proceeding with verbosity set to 2
, you will be able to see what tests are running into problems, but you will not get the details of the reasons why they occurred until the run completes.
You can pass the --settings option to the test command to specify a settings module to use instead of the project's default. This can come in handy, for example, if you want to run tests using a database different from the one you normally use, either for speed of testing or to verify that your code runs correctly on different databases.
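For example, test-specific settings could live in their own module that is named only when running tests (the marketr.test_settings module here is hypothetical):

python manage.py test survey --settings=marketr.test_settings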
Note the help text for this option states that the DJANGO_SETTINGS_MODULE
environment variable will be used to locate the settings file if the settings option is not specified on the command line. This is only accurate when the test
command is being run via the django-admin.py
utility. When using manage.py test
, the manage.py
utility takes care of setting this environment variable to specify the settings.py
file in the current directory.
The --pythonpath option allows you to append an additional directory to the Python path used during the test run. It is primarily of use with django-admin.py
, where it is often necessary to add the project path to the standard Python path. The manage.py
utility takes care of adding the project path to the Python path, so this option is not generally needed when using manage.py test
.
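For instance, an invocation through django-admin.py that supplies both of these options might look something like this (the path shown is an assumption based on the project location used in this chapter):

django-admin.py test survey --settings=marketr.settings --pythonpath=/dj_projects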
The --traceback option is not actually used by the test
command. It is inherited as one of the default options supported by all django-admin.py
(and manage.py
) commands, but the test
command never checks for it. Thus you can specify it, but it will have no effect.
The --noinput option causes the test runner not to prompt for user input, which raises the question: when would the test runner require user input? We haven't encountered that so far. The test runner prompts for input during test database creation if a database with the test database name already exists. For example, if you hit Ctrl + C during a test run, the test database may not be destroyed, and you may encounter a message like this the next time you attempt to run tests:
kmt@lbox:/dj_projects/marketr$ python manage.py test
Creating test database...
Got an error creating the test database: (1007, "Can't create database 'test_marketr'; database exists")
Type 'yes' if you would like to try deleting the test database 'test_marketr', or 'no' to cancel:
If --noinput
is passed on the command line, the prompt is not printed and the test runner proceeds as if the user had entered 'yes' in response. This is useful if you want to run the tests from an unattended script and ensure that the script does not hang while waiting for user input that will never be entered.
The --version option reports the version of Django in use and then exits. Thus when using --version
with manage.py
or django-admin.py
, you do not actually need to specify a subcommand such as test
. In fact, due to a bug in the way Django processes command options, at the time of writing this book, if you do specify both --version
and a subcommand, the version will get printed twice. That will likely get fixed at some point.
The overview of Django testing is now complete. In this chapter, we:
Looked in detail at the sample tests.py file generated when a new Django application is created
Learned how to run the provided sample tests
Experimented with introducing deliberate mistakes into the tests in order to see and understand what information is provided when tests fail or encounter errors
Finally, we examined all of the command line options that may be used with manage.py test
We will continue to build on this knowledge in the next chapter, as we focus on doctests in depth.