Tutorial automation

If you have run through the individual exercises, you will have determined manually whether or not a test has passed, by comparing the simulator output in the terminal against the known expected output. The terminal output is generally fairly verbose: it may contain various simulator compilation messages, for example, as well as module information, copyright and version messages, time-zero warnings, and so on. None of this is relevant to deciding whether or not a test has passed, and you have to scan through the output to find the Maia-specific messages (such as '(Log) (180ns) 18 vectors executed (18 passes, 0 fails)').

Manually running individual batch-mode simulations, and manually determining whether or not the test has passed, is not normally practical. For a real chip, you may have to repeat this procedure hundreds, or thousands, of times. These tests (the 'unit tests') therefore have to be automated. The automation is carried out by a number of Tcl files in the tutorial directory. This procedure is explained in more detail in the Unit test framework section, and is covered briefly here.

Of course, automating the execution of a small set of tutorial exercises is not actually 'unit testing'. Unit testing refers to the process of (repeated and automated) testing of small parts of your HDL code during development, while the tutorial exercises simply test a small number of fairly trivial HDL modules. However, the procedure is identical: all that changes is how exactly you set up your test list.
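The automation loop described above can be sketched as follows. This is an illustrative Python fragment only: the real framework is written in Tcl, and all of the function and variable names here are hypothetical.

```python
# Illustrative sketch of the unit-test automation loop. The real framework
# is written in Tcl; the names below are invented for this example.

def compare_logs(actual, golden):
    """A test passes when the actual output matches the golden output."""
    return actual.strip() == golden.strip()

def run_tests(tests, simulate):
    """Run every test and tally the results.

    tests:    list of (test_name, golden_output) pairs
    simulate: callable mapping a test name to its simulator output
    """
    passed = failed = 0
    for number, (name, golden) in enumerate(tests, start=1):
        actual = simulate(name)
        if compare_logs(actual, golden):
            print(f"{number}  {name}...  Ok")
            passed += 1
        else:
            print(f"{number}  {name}...  ** Comparison failed **")
            failed += 1
    print(f"{passed} test(s) passed; {failed} test(s) failed.")
    return passed, failed

# A stand-in 'simulator' for demonstration purposes only:
fake_outputs = {
    "tut1": "(Log) (180ns) 18 vectors executed (18 passes, 0 fails)",
    "tut2": "(Log) (180ns) 18 vectors executed (17 passes, 1 fails)",
}
golden = [("tut1", "(Log) (180ns) 18 vectors executed (18 passes, 0 fails)"),
          ("tut2", "(Log) (180ns) 18 vectors executed (18 passes, 0 fails)")]
print(run_tests(golden, fake_outputs.get))  # (1, 1)
```

The essential point is the loop structure: each entry names a test and its golden reference, and the runner reduces the verbose simulator output to a single pass/fail verdict.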

'runtests' in the distribution's tutorial directory automates the execution of the tutorial exercises, and checks the actual result against the expected result for each exercise. 'runtests' reads a list of tests to perform from 'testlist-verilog' (for Verilog simulation), or 'testlist-vhdl' (for VHDL simulation). All three files ('runtests' and the two test lists) are Tcl scripts. The VHDL test list, for example, looks like this:

# ------------------------------------------
# ------------------------------------------
set default_simulator modelsim_mixed

# the list items are:
#    the sub-directory that the test file appears in
#     |  the name of the test file (without the '.tv')
#     |   |        the golden logfile (without the '.log'), in the 'golden' dir
#     |   |         |        VHDL source files
#     |   |         |         |                       -D defines to pass to mtv
#     |   |         |         |                        |
#     v   v         v         v                        v
#   { dir test      logfile   { source files}          { optional defines }}

set unit_tests {
    { tvl tut1      tut1A     { vhdl/counter1.vhd  }   {}}
    { tvl tut1      tut1B     { vhdl/counter2.vhd  }   {}}
    { tvl tut2      tut2A     { vhdl/counter1.vhd  }   {}}
    { tvl tut2      tut2B     { vhdl/counter2.vhd  }   {}}
    { tvl tut3      tut3A     { vhdl/counter1.vhd  }   {}}
    { tvl tut3      tut3B     { vhdl/counter2.vhd  }   {}}
    { tvl tut4      tut4      { vhdl/ram1.vhd      }   {}}
    { tvl tut5      tut5      { vhdl/ram1.vhd      }   {}}
    { tvl tut6      tut6      { vhdl/counter1.vhd  }   {}}
    { tvl tut7      tut7      { vhdl/counter1.vhd  }   {}}
    { tvl tut8      tut8      { -2008 vhdl/comb3.vhd } {}}
    { tvl tut9      tut9      { -2008 vhdl/comb3.vhd } {}}
    { tvl tut10     tut10     { vhdl/mac1.vhd      }   {}}
    { tvl tut11     tut11     { vhdl/mac1.vhd      }   {}}
    { tvl tut12     tut12     { vhdl/counter1.vhd  }   {}}
    { tvl c_mux8to1 c_mux8to1 { vhdl/c_mux8to1.vhd }   {}}
}

Note that the test list sets default_simulator, which overrides the value of RTV_SIMULATOR in your environment. You should either change this line, or identify the required simulator on the runtests command line, as shown below.

If you run runtests without parameters it will display a help screen. However, in summary, if you want to run all the Verilog tests, and you want to use ModelSim, execute this command (assuming that mtv is installed at /eda/mtv):

  $ cd /eda/mtv/tutorial
  $ ./runtests testlist-verilog modelsim

This will produce the following output:

 ModelSim simulation...
1   tvl/tut1.tv...                         Ok
2   tvl/tut1.tv...                         ** Comparison failed (1): tut1B.log does not match golden/tut1B.log **
3   tvl/tut2.tv...                         Ok
4   tvl/tut2.tv...                         ** Comparison failed (1): tut2B.log does not match golden/tut2B.log **
5   tvl/tut3.tv...                         Ok
6   tvl/tut3.tv...                         ** Comparison failed (1): tut3B.log does not match golden/tut3B.log **
7   tvl/tut4.tv...                         Ok
8   tvl/tut5.tv...                         Ok
9   tvl/tut6.tv...                         Ok
10  tvl/tut7.tv...                         Ok
11  tvl/tut8.tv...                         Ok
12  tvl/tut9.tv...                         Ok
13  tvl/tut10.tv...                        Ok
14  tvl/tut11.tv...                        Ok
15  tvl/tut12.tv...                        Ok
12 test(s) passed; 3 test(s) failed.

The failures for tests 2, 4, and 6 are expected, and are due to a bug in the relevant Verilog DUT. The tut1B.log, tut2B.log, and tut3B.log files are left in the current directory, and contain the error output; these can be compared against the same files in the 'golden' directory to identify the errors. Note that the files in the 'golden' directory contain only the expected Maia output, without any additional (and unnecessary) simulator-specific output, while the error files contain the entire simulator output. The Tcl test framework carries out an intelligent comparison of the two, ignoring simulator-specific messages, to decide whether or not the test has passed.
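The idea behind this comparison can be sketched as follows. This is an illustrative Python fragment, not the actual Tcl implementation, and the simulator banner lines in the example data are invented.

```python
# Sketch of the 'intelligent comparison': keep only the Maia-specific
# '(Log)' lines and ignore everything else (banners, warnings, and so on).
# This illustrates the concept; it is not the actual Tcl implementation.

def maia_lines(text):
    """Extract only the Maia log messages from simulator output."""
    return [line.strip() for line in text.splitlines() if "(Log)" in line]

def logs_match(simulator_output, golden_log):
    """Compare two logs, ignoring simulator-specific messages."""
    return maia_lines(simulator_output) == maia_lines(golden_log)

# The banner and warning lines below are invented for illustration:
actual = """\
# Loading work.tut1
# ** Warning: time-zero race condition
(Log) (180ns) 18 vectors executed (18 passes, 0 fails)
"""
golden = "(Log) (180ns) 18 vectors executed (18 passes, 0 fails)\n"
print(logs_match(actual, golden))  # True
```

Filtering both sides down to the Maia messages is what allows a stripped-down golden file to be compared against the full, verbose simulator output.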

The output from the VHDL test run is the same, except that Exercises 9 and 10 also fail, and there is an additional test (#16). The two failures are expected (see Exercise #9). The additional test is an instantiation of a VHDL configuration; this is the code from Testing VHDL DUTs.