Unit test framework

The compiler download includes a test framework (the 'MTF'). The framework is not part of the language, and is simply a Tcl application. The purpose of the MTF is to automate the running of large numbers of tests, and to record the results of those tests. It doesn't matter what sort of tests they are: they might be unit tests, regression tests, or acceptance tests, for example.

The framework is not just a testing tool: it is a development tool, and should be used throughout your development cycle, to formalise requirements, to clarify architecture, and to write, debug, integrate, test, and release your code. If you have any familiarity with software unit test environments, you'll notice that the MTF has a number of differences from xUnit-style tools such as JUnit (for Java testing), or GoogleTest (for C++ testing). This is because there are a number of fundamental differences between software and hardware testing. If you're interested, you can read more about this, and the philosophy behind the MTF, here. If you just want to find out how to use the MTF, read on.

To use the MTF, you have to do three things:

  1. Copy the Tcl files listed below ('runtests', 'run_regression', and a filter for your simulator) to your own environment, with possible modifications as noted
  2. Create text files (the 'golden logfiles') for all the tests to be run, with the expected results for those tests
  3. Create a list of all the tests to be run (the equivalent of 'testlist-vhdl' or 'testlist-verilog')

The tutorial code in the distribution contains an example which automates the running of the tutorial examples (you can find instructions for running the tutorial tests here). The 'golden' directory contains the golden logfiles, while the test list is testlist-vhdl (for the VHDL tests), or testlist-verilog (for the Verilog tests).
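
Based on the tutorial layout, a minimal test environment therefore looks something like this (the top-level directory name is arbitrary, and only one golden logfile, test, and HDL source are shown):

  mytests/
    runtests               the test runner
    testlist-vhdl          the test list
    bin/run_regression     runs a single test and compares its output against the golden logfile
    bin/filt_modelsim      the simulator output filter(s)
    golden/tut1A.log       the golden logfiles
    tvl/tut1.tv            the test sources
    vhdl/counter1.vhd      the HDL sources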

These are the Tcl files that you need to know about, and which you need to copy into your own environment, and possibly modify as noted:

testlist-vhdl        An example test list; modify as required. Note the 'default_simulator' variable, which sets the default simulator
testlist-verilog     See above
runtests             The 'test runner'. You will need to modify this if you move 'run_regression' to a different location; the default is 'bin/run_regression' in the current directory. Search for the 'run_regression' variable
bin/run_regression   Runs the test, and carries out the comparison against the golden logfile to determine whether or not the test passed. Should not require modification, unless you add a new simulator
bin/filt_modelsim    An example output filter for a given simulator; see 'run_regression'. The filter removes unnecessary output from the batch-mode simulation, and allows the golden logfile to be compared against the simulator output. You may need to modify this file if you move to a new simulator version, or if the simulator produces new output which should be ignored. See the 'bin' directory for filters for other simulators

runtests is the 'test runner'; run it without arguments to get usage instructions. It takes three command-line arguments:

  • The first (mandatory) argument is the test list to run. For the tutorial examples, this will be testlist-vhdl or testlist-verilog. The remaining two arguments are optional.
  • The second argument, if present, is the simulator which should be used. This should be the name of a simulator in your simulators.conf file; see here for further details. If this argument is not supplied, the default simulator used will be the one specified in your test list file (see the set default_simulator line, and the example after this list). Note that this simulator-selection mechanism does not use the RTV_SIMULATOR environment variable.
  • The third argument, if present, is an inclusive integer range which gives the tests to run. The default is to run all the tests in the test list.
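
In the test list itself, the default simulator is set with a single Tcl 'set' command; the name must be one of the simulators defined in your simulators.conf file. The line might, for example, read:

  set default_simulator modelsim_mixed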

This command:

  $ ./runtests testlist-verilog icarus 1 2

will therefore run both the first and second Verilog tests (and no others) on Icarus, while this command:

  $ ./runtests testlist-vhdl aldec_mixed

will run all the VHDL tests on Riviera-PRO. The selected simulators must, of course, be installed and on the search path before running these commands.


1: Creating the golden logfiles

A Maia program produces three types of console output:

  1. a test summary, which is automatically generated when the program terminates, listing the pass and fail counts;
  2. a failure message, which is automatically generated if a drive statement or an assertion fails;
  3. any programmer output produced by a report statement.

In the simplest case, the golden logfile should simply be the expected test summary (item 1 above). tut4.log, for example, is the expected output from one of the tutorial examples. To generate this file, run the relevant test until you are satisfied that it works, and record the output in a file; the file name will be required when creating the test list (see below). In general, you should remove any simulator-generated messages from the logfile until it shows only the output that you expect. When the test is later run by 'run_regression', simulator-specific messages will be filtered out of the console output, and the resulting output will then be compared against your golden logfile to determine whether or not the test passed.
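
For such a test, then, the golden logfile can contain the test summary alone. Based on the filtered output shown in section 2 below, a minimal golden logfile might contain a single line of this form (the time and the vector, pass, and fail counts will obviously depend on your test):

  (Log)        (180 ns) 18 vectors executed (18 passes, 0 fails)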

In more complex cases, you will probably need to add more information to the logfile using report statements. This could be anything: commentary that a given test has passed, progress indicators for a long test, the contents of a comms packet, and so on. In some circumstances, you may also want to include a given drive statement failure report in the golden logfile; the test will then pass when that failure report is produced, so an expected DUT failure is recorded as a test pass.
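
A golden logfile which includes report output simply grows to contain the expected messages, in order, together with the final summary. Assuming that the report output appears verbatim in the console log, such a logfile might look like the following sketch (the messages, time, and counts are hypothetical):

  packet 1 received: header OK, payload OK
  packet 2 received: header OK, payload OK
  (Log)        (2560 ns) 256 vectors executed (256 passes, 0 fails)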


2: Simulator output filtering

You can check the operation of the simulator message filter by running it as a stand-alone program. The ModelSim filter, for example, is 'filt_modelsim' (see the table above). Assume that you are running on Windows, with Altera's 'Starter Edition' ModelSim, and that you want to run the VHDL version of the first tutorial example. This command sequence captures the rtv output in test.log, and then displays the output when this file is supplied as the standard input to 'filt_modelsim':

$ export RTV_SIMULATOR=modelsim_mixed
$ rtv tvl/tut1.tv vhdl/counter1.vhd > test.log
$ bin/filt_modelsim < test.log
(rtv: log)   compiling test vectors...
(rtv: log)   compiling...
(rtv: log)   running vectors...
(Log)        (180 ns) 18 vectors executed (18 passes, 0 fails)

If the filter is operating correctly, the output shows just the Maia-generated messages, and any messages produced by rtv (the rtv messages are generated in simulators.conf). 'run_regression' ignores any rtv-generated messages (any line that starts with '(rtv:') before carrying out the comparison against the golden logfile, so you can remove these lines from the logfile if you'd prefer. Note that you can therefore add comment lines to the logfile by starting the line with (rtv: comment).
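
If you add a new simulator, you will probably also need a new filter; the existing filters in the 'bin' directory are the best starting point. Structurally, a filter is just a short Tcl script which copies its standard input to its standard output, discarding simulator-specific lines. A minimal sketch might look like this (the patterns shown are illustrative only, and are not those used by the real filters):

  #!/usr/bin/env tclsh
  # Sketch of a simulator output filter: copy stdin to stdout, dropping any
  # line which matches a known simulator banner or progress message. The
  # patterns below are examples only; see bin/filt_modelsim for a real filter.
  while {[gets stdin line] >= 0} {
      if {[regexp {^# (Loading|Compiling|vsim|Model Technology)} $line]} {
          continue                  ;# discard simulator-specific output
      }
      puts $line                    ;# keep everything else for the comparison
  }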


3: Creating the test list

The 'unit_tests' variable in your test list is a list of tests to run. The first tutorial example in testlist-vhdl is listed as follows:

  # 1   2    3     4                     5
  { tvl tut1 tut1A { vhdl/counter1.vhd } {}}

There are 5 fields, as follows:

  1. The directory the test file is in. In a large project, you are likely to have multiple directories containing tests for different parts of the chip
  2. The name of the test file, without the .tv suffix
  3. The name of the golden logfile for this test. 'run_regression' searches for this file in the 'golden' directory, with a '.log' suffix, so this file is 'golden/tut1A.log'. However, this behaviour is easily changed if necessary (see the set golden golden/$logf_name.log line in 'run_regression')
  4. The list of HDL source files for this test, as a Tcl list (enclosed in {} braces). The tutorial examples are simple, and only a single HDL source file is required for each test, but multiple files can be listed here if necessary. These files are passed to the simulator as a single compilation command, so must all be in the same language (VHDL or Verilog). See below for strategies for combining VHDL and Verilog files within a single test.
  5. Any further arguments for mtv, as a Tcl list. These will generally be -D defines.
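
As an illustration, a test which compiles two VHDL source files and passes a define to the compiler might appear in the test list as follows (the directory is the tutorial 'tvl' directory, but the test name, file names, and define are hypothetical):

  # 1   2       3        4                                    5
  { tvl fifo_rw fifo_rwA { vhdl/fifo.vhd vhdl/fifo_ctrl.vhd } { -DDEPTH=16 }}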

When running unit tests, you are likely to be testing only one module, so each line of the test list will probably contain a single HDL source file. As integration testing progresses, you may need to list multiple HDL files within an entry. At some point, however, this will become impractical, and it will make more sense to carry out a single compilation of all your HDL sources (VHDL, Verilog, or both) into one or more libraries. This build should be carried out outside the MTF, prior to running any regressions. You may then need to modify simulators.conf to add a library search path, or to point your simulator at a specific library.
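
For ModelSim or Questa, for example, such a build might be carried out with the standard vlib, vcom, and vlog commands before invoking 'runtests' (a sketch only; the library and file names are hypothetical, and other simulators have their own equivalents):

  $ vlib libs/chip_lib                       # create the library
  $ vmap chip_lib libs/chip_lib              # map the logical library name
  $ vcom -work chip_lib vhdl/*.vhd           # compile the VHDL sources
  $ vlog -work chip_lib verilog/*.v          # compile the Verilog sources
  $ ./runtests testlist-vhdl                 # run the regression as usual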