DejaGnu always writes two kinds of output files: summary logs and detailed logs. The contents of both are determined by your tests.
For troubleshooting, a third kind of output file is useful: use --debug to request an output file showing details of what Expect is doing internally.
DejaGnu always produces a summary output file tool.sum. This summary shows the names of all test files run; for each test file, one line of output from each pass command (showing status PASS or XPASS) or fail command (status FAIL or XFAIL); trailing summary statistics that count passing and failing tests (expected and unexpected); and the full pathname and version number of the tool tested. (All possible outcomes, and all errors, are always reflected in the summary output file, regardless of whether or not you specify --all.)
If any of your tests use the procedures unresolved, unsupported, or untested, the summary output also tabulates the corresponding outcomes.
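These summary lines come straight from calls in your test scripts. The fragment below is a hypothetical sketch, not taken from any real test suite; the variables $observed and $expected, the messages, and the target patterns are all illustrative:

# Sketch only: each procedure call below records one outcome line in the
# summary file and is counted in the trailing statistics.
setup_xfail "mips-*-*"           ;# a failure is expected on matching targets
if { $observed == $expected } {
    pass "ptype v_signed_char"   ;# logged as PASS (XPASS if setup_xfail applied)
} else {
    fail "ptype v_signed_char"   ;# logged as FAIL (XFAIL if setup_xfail applied)
}
if { ![istarget "*-*-elf*"] } {
    untested "ELF-only check"    ;# tabulated separately, like unsupported and unresolved
}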
For example, after runtest --tool binutils, look for a summary log in binutils.sum. Normally, DejaGnu writes this file in your current working directory; use the --outdir option to select a different directory.
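For instance, a command along the lines of runtest --tool binutils --outdir testrun (where testrun stands in for any directory name you choose) should leave the summary in testrun/binutils.sum rather than in the current directory.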
Example 2-1. Here is a short sample summary log:
Test Run By rob on Mon May 25 21:40:57 PDT 1992

=== gdb tests ===

Running ./gdb.t00/echo.exp ...
PASS:   Echo test
Running ./gdb.all/help.exp ...
PASS:   help add-symbol-file
PASS:   help aliases
PASS:   help breakpoint "bre" abbreviation
FAIL:   help run "r" abbreviation
Running ./gdb.t10/crossload.exp ...
PASS:   m68k-elf (elf-big) explicit format; loaded
XFAIL:  mips-ecoff (ecoff-bigmips) "ptype v_signed_char" signed C types

=== gdb Summary ===

# of expected passes 5
# of expected failures 1
# of unexpected failures 1
/usr/latest/bin/gdb version 4.6.5 -q
DejaGnu also saves a detailed log file tool.log, showing any output generated by tests as well as the summary output. For example, after runtest --tool binutils, look for a detailed log in binutils.log. Normally, DejaGnu writes this file in your current working directory; use the --outdir option to select a different directory.
Example 2-2. Here is a brief example showing a detailed log for G++ tests:
Test Run By rob on Mon May 25 21:40:43 PDT 1992

=== g++ tests ===

--- Running ./g++.other/t01-1.exp ---
PASS:   operate delete

--- Running ./g++.other/t01-2.exp ---
FAIL:   i960 bug EOF
p0000646.C: In function `int warn_return_1 ()':
p0000646.C:109: warning: control reaches end of non-void function
p0000646.C: In function `int warn_return_arg (int)':
p0000646.C:117: warning: control reaches end of non-void function
p0000646.C: In function `int warn_return_sum (int, int)':
p0000646.C:125: warning: control reaches end of non-void function
p0000646.C: In function `struct foo warn_return_foo ()':
p0000646.C:132: warning: control reaches end of non-void function

--- Running ./g++.other/t01-4.exp ---
FAIL:   abort
900403_04.C:8: zero width for bit-field `foo'

--- Running ./g++.other/t01-3.exp ---
FAIL:   segment violation
900519_12.C:9: parse error before `;'
900519_12.C:12: Segmentation violation
/usr/latest/bin/gcc: Internal compiler error: program cc1plus got fatal signal

=== g++ Summary ===

# of expected passes 1
# of expected failures 3
/usr/latest/bin/g++ version cygnus-2.0.1
With the --debug option, you can request a log file showing the output from Expect itself, running in debugging mode. This file (dbg.log, in the directory where you start runtest) shows each pattern Expect considers in analyzing test output.
This file reflects each send command, showing the string sent as input to the tool under test; and each Expect command, showing each pattern it compares with the tool output.
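For orientation, here is a minimal sketch of the kind of send/expect pair in a test script that gives rise to such log entries; the command, pattern, and messages are illustrative, not copied from a real GDB test:

# Sketch only: send a command to the tool under test, then match its output.
send "break gdbme.c:34\n"
expect {
    -re {Breakpoint.*at.* file gdbme.c, line 34.*\(gdb\) $} {
        pass "breakpoint line number in file"
    }
    timeout {
        fail "breakpoint line number in file (timeout)"
    }
}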
Example 2-3. Each log entry begins with a message of the form:
expect: does {tool output} (spawn_id n) match pattern {expected pattern}?
For every unsuccessful match, Expect issues a no after this message; if other patterns are specified for the same Expect command, they are reflected also, but without the first part of the message (expect... match pattern).
When Expect finds a match, the log for the successful match ends with yes, followed by a record of the Expect variables set to describe a successful match.
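A test script can read those same variables in the action for a successful match. The following fragment is a sketch; the pattern, messages, and verbosity level are illustrative:

expect {
    -re {Breakpoint [0-9]+ at} {
        # expect_out(0,string) holds the text matched by the pattern;
        # expect_out(buffer) holds everything read up to and including the match.
        verbose "matched: $expect_out(0,string)" 2
        verbose "buffer:  $expect_out(buffer)" 2
        pass "breakpoint acknowledged"
    }
}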
Example 2-4. Here is an excerpt from the debugging log for a GDB test:
send: sent {break gdbme.c:34\n} to spawn id 6
expect: does {} (spawn_id 6) match pattern {Breakpoint.*at.* file gdbme.c, line 34.*\(gdb\) $}? no
{.*\(gdb\) $}? no
expect: does {} (spawn_id 0) match pattern {return} ? no
{\(y or n\) }? no
{buffer_full}? no
{virtual}? no
{memory}? no
{exhausted}? no
{Undefined}? no
{command}? no
break gdbme.c:34
Breakpoint 8 at 0x23d8: file gdbme.c, line 34.
(gdb) expect: does {break gdbme.c:34\r\nBreakpoint 8 at 0x23d8: file gdbme.c, line 34.\r\n(gdb) } (spawn_id 6) match pattern {Breakpoint.*at.* file gdbme.c, line 34.*\(gdb\) $}? yes
expect: set expect_out(0,start) {18}
expect: set expect_out(0,end) {71}
expect: set expect_out(0,string) {Breakpoint 8 at 0x23d8: file gdbme.c, line 34.\r\n(gdb) }
expect: set expect_out(spawn_id) {6}
expect: set expect_out(buffer) {break gdbme.c:34\r\nBreakpoint 8 at 0x23d8: file gdbme.c, line 34.\r\n(gdb) }
PASS:   70      0       breakpoint line number in file
This example exhibits three properties of Expect and DejaGnu that might be surprising at first glance:
Empty output for the first attempted match. The first set of attempted matches shown ran against the output {}, that is, against no output at all. Expect begins trying to match the supplied patterns immediately; the first pass often runs against incomplete output (or, as in this case, before any output has arrived).
Interspersed tool output. The beginning of the log entry for the second attempted match may be hard to spot: this is because the prompt {(gdb) } appears on the same line, just before the expect: that marks the beginning of the log entry.
Fail-safe patterns. Many of the patterns tested are fail-safe patterns provided by GDB testing utilities, to reduce possible indeterminacy. It is useful to anticipate potential variations caused by extreme system conditions (GDB might issue the message virtual memory exhausted in rare circumstances), or by changes in the tested program (Undefined command is the likeliest outcome if the name of a tested command changes).
The pattern {return} is a particularly interesting fail-safe to notice; it checks for an unexpected RET prompt. This may happen, for example, if the tested tool can filter output through a pager.
These fail-safe patterns (like the debugging log itself) are primarily useful while developing test scripts. Use the error procedure to make the actions for fail-safe patterns produce messages starting with ERROR on standard output, and in the detailed log file.
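As an illustration, a test script might combine its real pattern with a few fail-safe patterns along the following lines. This is a sketch only: the messages, patterns, and prompt are made up for the example, and the error calls follow the convention described above.

expect {
    -re {Breakpoint.*\(gdb\) $} {
        pass "breakpoint set"
    }
    "virtual memory exhausted" {
        error "the tool under test ran out of memory"
    }
    "Undefined command" {
        error "has the tested command been renamed?"
    }
    "return" {
        error "unexpected RET prompt; output may be going through a pager"
    }
    timeout {
        fail "breakpoint set (timeout)"
    }
}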