
Errors and Failures

Let's look at tests that have errors and failures. First, we need to make a temporary copy of the entire testing directory (excluding .svn files, which may be read-only):

>>> import os.path, sys, tempfile, shutil
>>> tmpdir = tempfile.mkdtemp()
>>> directory_with_tests = os.path.join(tmpdir, 'testrunner-ex')
>>> source = os.path.join(this_directory, 'testrunner-ex')
>>> n = len(source) + 1
>>> for root, dirs, files in os.walk(source):
...     dirs[:] = [d for d in dirs if d != ".svn"] # prune cruft
...     os.mkdir(os.path.join(directory_with_tests, root[n:]))
...     for f in files:
...         shutil.copy(os.path.join(root, f),
...                     os.path.join(directory_with_tests, root[n:], f))
>>> from zope.testing import testrunner
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ '.split()
>>> testrunner.run(defaults)
... # doctest: +NORMALIZE_WHITESPACE
Running unit tests:
<BLANKLINE>
<BLANKLINE>
Failure in test eek (sample2.sampletests_e)
Failed doctest test for sample2.sampletests_e.eek
  File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_e.py", line 30, in sample2.sampletests_e.eek
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
        f()
      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
        g()
      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
        x = y + 1
    NameError: global name 'y' is not defined
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Error in test test3 (sample2.sampletests_e.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
    f()
  File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
    g()
  File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
    x = y + 1
NameError: global name 'y' is not defined
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Failure in test testrunner-ex/sample2/e.txt
Failed doctest test for e.txt
  File "testrunner-ex/sample2/e.txt", line 0
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/e.txt", line 4, in e.txt
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest e.txt[1]>", line 1, in ?
        f()
      File "<doctest e.txt[0]>", line 2, in f
        return x
    NameError: global name 'x' is not defined
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Failure in test test (sample2.sampletests_f.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
    self.assertEqual(1,0)
  File "/usr/local/python/2.3/lib/python2.3/unittest.py", line 302, in failUnlessEqual
    raise self.failureException, \
AssertionError: 1 != 0
<BLANKLINE>
  Ran 200 tests with 3 failures and 1 errors in 0.038 seconds.
Running samplelayers.Layer1 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
Running samplelayers.Layer11 tests:
  Set up samplelayers.Layer11 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in 0.000 seconds.
  Set up samplelayers.Layer111 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer112 tests:
  Tear down samplelayers.Layer111 in 0.000 seconds.
  Set up samplelayers.Layer112 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
Running samplelayers.Layer12 tests:
  Tear down samplelayers.Layer112 in 0.000 seconds.
  Tear down samplelayers.Layerx in 0.000 seconds.
  Tear down samplelayers.Layer11 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer121 tests:
  Set up samplelayers.Layer121 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
Running samplelayers.Layer122 tests:
  Tear down samplelayers.Layer121 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
Total: 413 tests, 3 failures, 1 errors
True

We see that we get an error report and a traceback for each failing test. In addition, the test runner returned True, indicating that there were test failures or errors.
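
Because the return value signals failure, a wrapper script can react to a bad
run without parsing the output. A minimal sketch reusing the defaults list
built above (the wrapper itself is not part of this document's doctests):

    import sys
    from zope.testing import testrunner

    # 'defaults' is the list of default options set up earlier in this file
    sys.argv = ['test', '--tests-pattern', '^sampletests(f|_e|_f)?$']
    if testrunner.run(defaults):
        print('there were test failures or errors')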

If we ask for a single level of verbosity (-v), the dotted output will be interrupted by the failure and error reports:

>>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ -uv'.split()
>>> testrunner.run(defaults)
... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
Running tests at level 1
Running unit tests:
  Running:
 .................................................................................................
<BLANKLINE>
Failure in test eek (sample2.sampletests_e)
Failed doctest test for sample2.sampletests_e.eek
  File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_e.py", line 30,
    in sample2.sampletests_e.eek
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
        f()
      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
        g()
      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
        x = y + 1
    NameError: global name 'y' is not defined
<BLANKLINE>
...
<BLANKLINE>
<BLANKLINE>
Error in test test3 (sample2.sampletests_e.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
    f()
  File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
    g()
  File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
    x = y + 1
NameError: global name 'y' is not defined
<BLANKLINE>
...
<BLANKLINE>
Failure in test testrunner-ex/sample2/e.txt
Failed doctest test for e.txt
  File "testrunner-ex/sample2/e.txt", line 0
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/e.txt", line 4, in e.txt
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest e.txt[1]>", line 1, in ?
        f()
      File "<doctest e.txt[0]>", line 2, in f
        return x
    NameError: global name 'x' is not defined
<BLANKLINE>
.
<BLANKLINE>
Failure in test test (sample2.sampletests_f.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
    self.assertEqual(1,0)
  File ".../unittest.py", line 302, in failUnlessEqual
    raise self.failureException, \
AssertionError: 1 != 0
<BLANKLINE>
................................................................................................
<BLANKLINE>
  Ran 200 tests with 3 failures and 1 errors in 0.040 seconds.
True

Similarly for progress output:

>>> sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$ -u -ssample2'
...             ' -p').split()
>>> testrunner.run(defaults)
... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
Running unit tests:
  Running:
    1/56 (1.8%)
<BLANKLINE>
Failure in test eek (sample2.sampletests_e)
Failed doctest test for sample2.sampletests_e.eek
  File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_e.py", line 30,
       in sample2.sampletests_e.eek
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
        f()
      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
        g()
      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
        x = y + 1
    NameError: global name 'y' is not defined
<BLANKLINE>
    2/56 (3.6%)\r
               \r
    3/56 (5.4%)\r
               \r
    4/56 (7.1%)
<BLANKLINE>
Error in test test3 (sample2.sampletests_e.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
    f()
  File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
    g()
  File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
    x = y + 1
NameError: global name 'y' is not defined
<BLANKLINE>
    5/56 (8.9%)\r
               \r
    6/56 (10.7%)\r
                \r
    7/56 (12.5%)
<BLANKLINE>
Failure in test testrunner-ex/sample2/e.txt
Failed doctest test for e.txt
  File "testrunner-ex/sample2/e.txt", line 0
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/e.txt", line 4, in e.txt
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest e.txt[1]>", line 1, in ?
        f()
      File "<doctest e.txt[0]>", line 2, in f
        return x
    NameError: global name 'x' is not defined
<BLANKLINE>
    8/56 (14.3%)
<BLANKLINE>
Failure in test test (sample2.sampletests_f.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
    self.assertEqual(1,0)
  File ".../unittest.py", line 302, in failUnlessEqual
    raise self.failureException, \
AssertionError: 1 != 0
<BLANKLINE>
    9/56 (16.1%)\r
                \r
    10/56 (17.9%)\r
                 \r
    11/56 (19.6%)\r
                 \r
    12/56 (21.4%)\r
                 \r
    13/56 (23.2%)\r
                 \r
    14/56 (25.0%)\r
                 \r
    15/56 (26.8%)\r
                 \r
    16/56 (28.6%)\r
                 \r
    17/56 (30.4%)\r
                 \r
    18/56 (32.1%)\r
                 \r
    19/56 (33.9%)\r
                 \r
    20/56 (35.7%)\r
                 \r
    24/56 (42.9%)\r
                 \r
    25/56 (44.6%)\r
                 \r
    26/56 (46.4%)\r
                 \r
    27/56 (48.2%)\r
                 \r
    28/56 (50.0%)\r
                 \r
    29/56 (51.8%)\r
                 \r
    30/56 (53.6%)\r
                 \r
    31/56 (55.4%)\r
                 \r
    32/56 (57.1%)\r
                 \r
    33/56 (58.9%)\r
                 \r
    34/56 (60.7%)\r
                 \r
    35/56 (62.5%)\r
                 \r
    36/56 (64.3%)\r
                 \r
    40/56 (71.4%)\r
                 \r
    41/56 (73.2%)\r
                 \r
    42/56 (75.0%)\r
                 \r
    43/56 (76.8%)\r
                 \r
    44/56 (78.6%)\r
                 \r
    45/56 (80.4%)\r
                 \r
    46/56 (82.1%)\r
                 \r
    47/56 (83.9%)\r
                 \r
    48/56 (85.7%)\r
                 \r
    49/56 (87.5%)\r
                 \r
    50/56 (89.3%)\r
                 \r
    51/56 (91.1%)\r
                 \r
    52/56 (92.9%)\r
                 \r
    56/56 (100.0%)\r
                  \r
<BLANKLINE>
  Ran 56 tests with 3 failures and 1 errors in 0.054 seconds.
True

At greater levels of verbosity, the errors are also summarized at the end of the test run:

>>> sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$ -u -ssample2'
...             ' -vv').split()
>>> testrunner.run(defaults)
... # doctest: +NORMALIZE_WHITESPACE
Running tests at level 1
Running unit tests:
  Running:
    eek (sample2.sampletests_e)
<BLANKLINE>
Failure in test eek (sample2.sampletests_e)
Failed doctest test for sample2.sampletests_e.eek
  File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_e.py", line 30,
       in sample2.sampletests_e.eek
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
        f()
      File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
        g()
      File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
        x = y + 1
    NameError: global name 'y' is not defined
<BLANKLINE>
<BLANKLINE>
    test1 (sample2.sampletests_e.Test)
    test2 (sample2.sampletests_e.Test)
    test3 (sample2.sampletests_e.Test)
<BLANKLINE>
Error in test test3 (sample2.sampletests_e.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
    f()
  File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
    g()
  File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
    x = y + 1
NameError: global name 'y' is not defined
<BLANKLINE>
<BLANKLINE>
    test4 (sample2.sampletests_e.Test)
    test5 (sample2.sampletests_e.Test)
    testrunner-ex/sample2/e.txt
<BLANKLINE>
Failure in test testrunner-ex/sample2/e.txt
Failed doctest test for e.txt
  File "testrunner-ex/sample2/e.txt", line 0
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/e.txt", line 4, in e.txt
Failed example:
    f()
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest e.txt[1]>", line 1, in ?
        f()
      File "<doctest e.txt[0]>", line 2, in f
        return x
    NameError: global name 'x' is not defined
<BLANKLINE>
<BLANKLINE>
    test (sample2.sampletests_f.Test)
<BLANKLINE>
Failure in test test (sample2.sampletests_f.Test)
Traceback (most recent call last):
  File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
    self.assertEqual(1,0)
  File ".../unittest.py", line 302, in failUnlessEqual
    raise self.failureException, \
AssertionError: 1 != 0
<BLANKLINE>
<BLANKLINE>
    test_x1 (sample2.sample21.sampletests.TestA)
    test_y0 (sample2.sample21.sampletests.TestA)
    test_z0 (sample2.sample21.sampletests.TestA)
    test_x0 (sample2.sample21.sampletests.TestB)
    test_y1 (sample2.sample21.sampletests.TestB)
    test_z0 (sample2.sample21.sampletests.TestB)
    test_1 (sample2.sample21.sampletests.TestNotMuch)
    test_2 (sample2.sample21.sampletests.TestNotMuch)
    test_3 (sample2.sample21.sampletests.TestNotMuch)
    test_x0 (sample2.sample21.sampletests)
    test_y0 (sample2.sample21.sampletests)
    test_z1 (sample2.sample21.sampletests)
    testrunner-ex/sample2/sample21/../../sampletests.txt
    test_x1 (sample2.sampletests.test_1.TestA)
    test_y0 (sample2.sampletests.test_1.TestA)
    test_z0 (sample2.sampletests.test_1.TestA)
    test_x0 (sample2.sampletests.test_1.TestB)
    test_y1 (sample2.sampletests.test_1.TestB)
    test_z0 (sample2.sampletests.test_1.TestB)
    test_1 (sample2.sampletests.test_1.TestNotMuch)
    test_2 (sample2.sampletests.test_1.TestNotMuch)
    test_3 (sample2.sampletests.test_1.TestNotMuch)
    test_x0 (sample2.sampletests.test_1)
    test_y0 (sample2.sampletests.test_1)
    test_z1 (sample2.sampletests.test_1)
    testrunner-ex/sample2/sampletests/../../sampletests.txt
    test_x1 (sample2.sampletests.testone.TestA)
    test_y0 (sample2.sampletests.testone.TestA)
    test_z0 (sample2.sampletests.testone.TestA)
    test_x0 (sample2.sampletests.testone.TestB)
    test_y1 (sample2.sampletests.testone.TestB)
    test_z0 (sample2.sampletests.testone.TestB)
    test_1 (sample2.sampletests.testone.TestNotMuch)
    test_2 (sample2.sampletests.testone.TestNotMuch)
    test_3 (sample2.sampletests.testone.TestNotMuch)
    test_x0 (sample2.sampletests.testone)
    test_y0 (sample2.sampletests.testone)
    test_z1 (sample2.sampletests.testone)
    testrunner-ex/sample2/sampletests/../../sampletests.txt
  Ran 56 tests with 3 failures and 1 errors in 0.060 seconds.
<BLANKLINE>
Tests with errors:
   test3 (sample2.sampletests_e.Test)
<BLANKLINE>
Tests with failures:
   eek (sample2.sampletests_e)
   testrunner-ex/sample2/e.txt
   test (sample2.sampletests_f.Test)
True

Suppressing multiple doctest errors

Often, when a doctest example fails, the failure will cause later examples in the same test to fail. Each failure is reported:

>>> sys.argv = 'test --tests-pattern ^sampletests_1$'.split()
>>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
Running unit tests:
<BLANKLINE>
<BLANKLINE>
Failure in test eek (sample2.sampletests_1)
Failed doctest test for sample2.sampletests_1.eek
  File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 19,
     in sample2.sampletests_1.eek
Failed example:
    x = y
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
        x = y
    NameError: name 'y' is not defined
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 21,
     in sample2.sampletests_1.eek
Failed example:
    x
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
        x
    NameError: name 'x' is not defined
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 24,
     in sample2.sampletests_1.eek
Failed example:
    z = x + 1
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
        z = x + 1
    NameError: name 'x' is not defined
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
True

This can be a bit confusing, especially when there are enough failures that they scroll off the screen. Often you just want to see the first failure. This can be accomplished with the -1 option (for "just show me the first failed example in a doctest" :)

>>> sys.argv = 'test --tests-pattern ^sampletests_1$ -1'.split()
>>> testrunner.run(defaults) # doctest: +NORMALIZE_WHITESPACE
Running unit tests:
<BLANKLINE>
<BLANKLINE>
Failure in test eek (sample2.sampletests_1)
Failed doctest test for sample2.sampletests_1.eek
  File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/sample2/sampletests_1.py", line 19,
     in sample2.sampletests_1.eek
Failed example:
    x = y
Exception raised:
    Traceback (most recent call last):
      File ".../doctest.py", line 1256, in __run
        compileflags, 1) in test.globs
      File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
        x = y
    NameError: name 'y' is not defined
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.001 seconds.
True
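
Python's doctest module has a matching option flag, REPORT_ONLY_FIRST_FAILURE,
which gives the same "first failure only" reporting when you build a suite
yourself. A minimal sketch, assuming a doctest file of your own (the file name
is made up):

    import doctest

    def test_suite():
        # suppress everything after the first failed example in each doctest
        return doctest.DocFileSuite(
            'my-doctests.txt',
            optionflags=doctest.REPORT_ONLY_FIRST_FAILURE)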

Getting diff output for doctest failures

If a doctest has large expected and actual output, it can be hard to see where they differ. The --ndiff, --udiff, and --cdiff options can be used to get diff output of various kinds.

>>> sys.argv = 'test --tests-pattern ^pledge$'.split()
>>> _ = testrunner.run(defaults)
Running unit tests:
<BLANKLINE>
<BLANKLINE>
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
  File "testrunner-ex/pledge.py", line 24, in pledge
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
    print pledge_template % ('and earthling', 'planet'),
Expected:
    I give my pledge, as an earthling,
    to save, and faithfully, to defend from waste,
    the natural resources of my planet.
    It's soils, minerals, forests, waters, and wildlife.
Got:
    I give my pledge, as and earthling,
    to save, and faithfully, to defend from waste,
    the natural resources of my planet.
    It's soils, minerals, forests, waters, and wildlife.
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.

Here, the actual output uses the word "and" rather than the word "an", but it's a bit hard to pick this out. We can use the various diff outputs to see this better. We could modify the test to ask for diff output, but it's easier to use one of the diff options.

The --ndiff option requests a diff using Python's ndiff utility. This is the only method that marks differences within lines as well as across lines. For example, if a line of expected output contains digit 1 where actual output contains letter l, a line is inserted with a caret marking the mismatching column positions.

>>> sys.argv = 'test --tests-pattern ^pledge$ --ndiff'.split()
>>> _ = testrunner.run(defaults)
Running unit tests:
<BLANKLINE>
<BLANKLINE>
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
  File "testrunner-ex/pledge.py", line 24, in pledge
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
    print pledge_template % ('and earthling', 'planet'),
Differences (ndiff with -expected +actual):
    - I give my pledge, as an earthling,
    + I give my pledge, as and earthling,
    ?                        +
      to save, and faithfully, to defend from waste,
      the natural resources of my planet.
      It's soils, minerals, forests, waters, and wildlife.
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.003 seconds.
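
The same kind of intraline marking can be produced directly with Python's
difflib.ndiff; a minimal standard-library sketch (the two strings are just the
lines from the example above):

    import difflib

    expected = ["I give my pledge, as an earthling,"]
    actual = ["I give my pledge, as and earthling,"]
    for line in difflib.ndiff(expected, actual):
        # '-' and '+' lines show the two versions; '?' guide lines mark
        # the changed columns with '+', '-' and '^'
        print(line.rstrip())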

The --udiff option requests a standard "unified" diff:

>>> sys.argv = 'test --tests-pattern ^pledge$ --udiff'.split()
>>> _ = testrunner.run(defaults)
Running unit tests:
<BLANKLINE>
<BLANKLINE>
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
  File "testrunner-ex/pledge.py", line 24, in pledge
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
    print pledge_template % ('and earthling', 'planet'),
Differences (unified diff with -expected +actual):
    @@ -1,3 +1,3 @@
    -I give my pledge, as an earthling,
    +I give my pledge, as and earthling,
     to save, and faithfully, to defend from waste,
     the natural resources of my planet.
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.

The --cdiff option requests a standard "context" diff:

>>> sys.argv = 'test --tests-pattern ^pledge$ --cdiff'.split()
>>> _ = testrunner.run(defaults)
Running unit tests:
<BLANKLINE>
<BLANKLINE>
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
  File "testrunner-ex/pledge.py", line 24, in pledge
<BLANKLINE>
----------------------------------------------------------------------
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
    print pledge_template % ('and earthling', 'planet'),
Differences (context diff with expected followed by actual):
    ***************
    *** 1,3 ****
    ! I give my pledge, as an earthling,
      to save, and faithfully, to defend from waste,
      the natural resources of my planet.
    --- 1,3 ----
    ! I give my pledge, as and earthling,
      to save, and faithfully, to defend from waste,
      the natural resources of my planet.
<BLANKLINE>
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.

Testing-Module Import Errors

If there are errors when importing a test module, these errors are reported. To illustrate a module with a syntax error, we create one on the fly: such a module used to be checked in to the project, but it was then included in distributions of projects using zope.testing, and distutils complained about the syntax error when it compiled Python files during installation of those projects. So first we create a module with bad syntax:

>>> badsyntax_path = os.path.join(directory_with_tests,
...                               "sample2", "sampletests_i.py")
>>> f = open(badsyntax_path, "w")
>>> print >> f, "importx unittest"  # syntax error
>>> f.close()

Then run the tests:

>>> sys.argv = ('test --tests-pattern ^sampletests(f|_i)?$ --layer 1 '
...            ).split()
>>> testrunner.run(defaults)
... # doctest: +NORMALIZE_WHITESPACE
Test-module import failures:
<BLANKLINE>
Module: sample2.sampletests_i
<BLANKLINE>
  File "testrunner-ex/sample2/sampletests_i.py", line 1
    importx unittest
                   ^
SyntaxError: invalid syntax
<BLANKLINE>
<BLANKLINE>
Module: sample2.sample21.sampletests_i
<BLANKLINE>
Traceback (most recent call last):
  File "testrunner-ex/sample2/sample21/sampletests_i.py", line 15, in ?
    import zope.testing.huh
ImportError: No module named huh
<BLANKLINE>
<BLANKLINE>
Module: sample2.sample22.sampletests_i
<BLANKLINE>
AttributeError: 'module' object has no attribute 'test_suite'
<BLANKLINE>
<BLANKLINE>
Module: sample2.sample23.sampletests_i
<BLANKLINE>
Traceback (most recent call last):
  File "testrunner-ex/sample2/sample23/sampletests_i.py", line 18, in ?
    class Test(unittest.TestCase):
  File "testrunner-ex/sample2/sample23/sampletests_i.py", line 23, in Test
    raise TypeError('eek')
TypeError: eek
<BLANKLINE>
<BLANKLINE>
Running samplelayers.Layer1 tests:
  Set up samplelayers.Layer1 in 0.000 seconds.
  Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
Running samplelayers.Layer11 tests:
  Set up samplelayers.Layer11 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer111 tests:
  Set up samplelayers.Layerx in 0.000 seconds.
  Set up samplelayers.Layer111 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer112 tests:
  Tear down samplelayers.Layer111 in 0.000 seconds.
  Set up samplelayers.Layer112 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer12 tests:
  Tear down samplelayers.Layer112 in 0.000 seconds.
  Tear down samplelayers.Layerx in 0.000 seconds.
  Tear down samplelayers.Layer11 in 0.000 seconds.
  Set up samplelayers.Layer12 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer121 tests:
  Set up samplelayers.Layer121 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
Running samplelayers.Layer122 tests:
  Tear down samplelayers.Layer121 in 0.000 seconds.
  Set up samplelayers.Layer122 in 0.000 seconds.
  Ran 34 tests with 0 failures and 0 errors in 0.006 seconds.
Tearing down left over layers:
  Tear down samplelayers.Layer122 in 0.000 seconds.
  Tear down samplelayers.Layer12 in 0.000 seconds.
  Tear down samplelayers.Layer1 in 0.000 seconds.
Total: 213 tests, 0 failures, 0 errors
<BLANKLINE>
Test-modules with import problems:
  sample2.sampletests_i
  sample2.sample21.sampletests_i
  sample2.sample22.sampletests_i
  sample2.sample23.sampletests_i
True
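
The sample2.sample22 failure above is what you get when a module matches the
test pattern but does not define the test_suite callable that the test runner
looks for. A minimal well-formed test module looks roughly like this (the
names are illustrative):

    import unittest

    class Test(unittest.TestCase):

        def test_something(self):
            self.assertEqual(1, 1)

    def test_suite():
        # the test runner imports the module and calls this to collect tests
        return unittest.makeSuite(Test)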

Reporting Errors to Calling Processes

The testrunner can return an error status, indicating that the tests failed. This can be useful for an invoking process that wants to monitor the result of a test run.

To use it, specify the "--exit-with-status" option:

>>> sys.argv = (
...     'test --exit-with-status --tests-pattern ^sampletests_1$'.split())
>>> try:
...     testrunner.run(defaults)
... except SystemExit, e:
...     print 'exited with code', e.code
... else:
...     print 'sys.exit was not called'
... # doctest: +ELLIPSIS
Running unit tests:
...
  Ran 1 tests with 1 failures and 0 errors in 0.002 seconds.
exited with code 1

A passing test run does not call sys.exit:

>>> sys.argv = (
...     'test --exit-with-status --tests-pattern ^sampletests$'.split())
>>> try:
...     testrunner.run(defaults)
... except SystemExit, e2:
...     print 'oops'
... else:
...     print 'sys.exit was not called'
... # doctest: +ELLIPSIS
Running unit tests:
...
Total: 364 tests, 0 failures, 0 errors
...
sys.exit was not called
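
An invoking process, such as a build script or commit hook, can then key off
the exit status of the child process. A rough sketch, assuming a "test" script
in the current directory that passes --exit-with-status through to the test
runner (the script name is illustrative):

    import os

    # os.system returns a nonzero status when the child process exits with a
    # nonzero code, which --exit-with-status arranges for failing runs
    status = os.system('./test --exit-with-status')
    if status != 0:
        print('tests failed; stopping here')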

Finally, we remove the temporary directory:

>>> shutil.rmtree(tmpdir)