Other ways to do 'Verification'

Surely Verification is a done deal, right? Well, not quite. The problem is that 'Verification' can mean a lot of different things. If you ignore formal methods, verification is carried out by simulating an HDL design, and trying to determine whether or not the simulation behaves as expected. Within this 'runtime' verification subset, there are a number of possible approaches:

  1. at a higher level of abstraction, using a dedicated system-level verification tool (normally SystemVerilog or e)
  2. at a lower level of abstraction, using Verilog or VHDL directly
  3. a custom solution (normally C++, possibly within a 'Continuous Integration' environment such as Jenkins), which is specifically coded as required

Which one you use will depend entirely on what you're trying to do, and what your resources are. If you're designing an ASIC processor core, the answer is likely to be (1). If you're designing an ASIC MPEG encoder then the answer is likely to be (3). If you're working on FPGAs, and you don't have a verification team, then the answer is likely to be (2).


System-level verification

At one end of the scale, 'system-level' verification tools attempt to verify the behaviour of complex devices, or of significant parts of those devices. These devices have essentially infinite state spaces, which makes them impossible to test exhaustively. These tools therefore face two fundamental problems: first, they need to increase the abstraction level to make testing manageable; and, second, they need to demonstrate that the subset of tests which can be carried out in a finite time does actually provide some confidence that the device 'works'.

These tools have a long history. The basic principles were set down in the mid-90s by (at least) Vera and Specman/e, and have changed little (or not at all) since then. Various commonly-used verification methodologies are actually little more than formalisations of design patterns that were introduced with these languages. The e Reuse Methodology (eRM), in particular, went on to form the basis of the URM, the OVM, and the UVM.

I spent most of 2001 debugging and verifying an ARM CPU using e. Many people, including myself, still consider it to be the best system-level verification solution available. Vera ended up with Synopsys in '98, but never made it as an independent language. Synopsys eventually pushed both Superlog and Vera onto Accellera as the foundation for SystemVerilog, which was defined, by committee, in 2005. SystemVerilog and e are generally referred to (or at least marketed as) 'Hardware Verification Languages', or HVLs.

In the C++ world, Cadence open-sourced the TestBuilder verification library in 2000. TestBuilder became the basis of the 2003 SystemC Verification Library (SCV), and the SCV, or an equivalent, is still frequently used in system-level verification. SystemC is primarily used for modelling, and the use of C++ for verification can make it a little clunky, particularly when compared to a purpose-designed language such as e. However, it does a good job, and Maia itself was initially derived from a SystemC co-verification environment.

These languages and libraries have a number of features in common (a short SystemVerilog sketch follows the list), which include:

  • increasing the abstraction level (over plain HDLs) through Object- or Aspect-orientation, and techniques such as transaction-based modelling
  • mechanisms to selectively target corner cases which might cause problems, and to record the specific cases which have been tested (the coverage metrics)
  • 'temporal assertions' to verify the behaviour of signals over multiple clock cycles
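
As a concrete (if simplified) illustration of these features, the SystemVerilog sketch below shows a transaction class with constrained-random fields, together with a temporal assertion checking a request/grant handshake. The names and the protocol are invented for illustration:

    // A bus transaction, modelled as an object rather than as pin-level
    // activity: one level of abstraction up from the HDL.
    class bus_txn;
      rand bit [31:0] addr;
      rand bit [31:0] data;
      rand bit        write;

      // Constrain the generator towards interesting cases: word-aligned
      // addresses, with a quarter of all transactions at the top of memory.
      constraint c_addr {
        addr[1:0] == 2'b00;
        addr dist { [32'h0000_0000 : 32'hFFFF_FF00] :/ 3,
                    [32'hFFFF_FF04 : 32'hFFFF_FFFC] :/ 1 };
      }
    endclass

    // A temporal assertion: every request must be granted within 1 to 4
    // clock cycles.
    module handshake_checker(input logic clk, req, gnt);
      assert property (@(posedge clk) req |-> ##[1:4] gnt)
        else $error("request not granted within 4 cycles");
    endmodule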

'Constrained random' stimulus generation is used to generate 'scenarios' for testing difficult parts of the design. Random stimulus is useless if you don't know which cases you have covered and which you haven't, so measuring coverage is key to this process. Since you can never test the entire state space of a large design, you have to define coverage metrics, and trade off the coverage that you have currently achieved against the time taken to achieve that level of coverage. In other words, you carry on simulating as long as possible, and hope for the best. Your 'Verification Plan' is fundamental to this process: it defines what you need to test, and when you can stop testing.
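
The sketch below shows how coverage is recorded in SystemVerilog (the coverage model itself is invented for illustration): a covergroup samples the stimulus, and the simulator reports which bins have been hit:

    module cov_demo;
      bit [31:0] addr;
      bit        write;

      // The coverage model: which addresses, and which directions, have
      // actually been exercised?
      covergroup txn_cov;
        cp_rw   : coverpoint write;
        cp_addr : coverpoint addr {
          bins low  = { [32'h0000_0000 : 32'h0000_FFFF] };
          bins high = { [32'hFFFF_0000 : 32'hFFFF_FFFF] };
          bins rest = default;
        }
        rw_x_addr : cross cp_rw, cp_addr;  // e.g. 'writes to high memory'
      endgroup

      txn_cov cov = new;

      initial begin
        repeat (1000) begin
          // Constrained-random stimulus: keep the address word-aligned.
          void'(std::randomize(addr, write) with { addr[1:0] == 2'b00; });
          cov.sample();
        end
        $display("coverage achieved: %0.2f%%", cov.get_coverage());
      end
    endmodule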

If you're thinking that this all sounds a bit hit-and-miss, then you're right. However, it can be remarkably effective in some cases. When designing a processor, for example, you could never hope to write enough test programs to effectively test all interesting combinations of instructions. In this case, it's relatively straightforward to create constrained-random instruction scenarios which can, for example, test interlocks and stall conditions. It's no coincidence that Vera started life at Sun Microsystems, and these languages are often marketed as 'SoC' verification solutions, which means that there's a processor involved. Having said that, few of us design or verify processors, and it is difficult to find other examples for which these languages are so ideally suited.
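
A minimal sketch of such a scenario, assuming a hypothetical five-instruction RISC-style pipeline: the constraint forces the second instruction to read the register written by the first, creating exactly the read-after-write hazard that the interlock logic must handle:

    typedef enum bit [2:0] { ADD, SUB, LOAD, STORE, BRANCH } opcode_t;

    class instr;
      rand opcode_t  op;
      rand bit [4:0] rd, rs1, rs2;   // destination and source registers
    endclass

    // A 'scenario': two back-to-back instructions constrained to create
    // a read-after-write hazard, exercising the pipeline's stall logic.
    class raw_hazard_scenario;
      rand instr first, second;

      function new;
        first  = new;
        second = new;
      endfunction

      constraint c_raw {
        first.op inside { ADD, SUB, LOAD };  // first writes a register...
        second.rs1 == first.rd;              // ...which second reads at once
      }
    endclass

A test then simply randomizes some number of these scenarios and drives the resulting instruction pairs into the DUT, while coverage records which opcode and register combinations have actually been hit.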


HDL-level verification

At the other end of the scale, you can verify your design by writing testbenches directly in your favourite HDL. It is often forgotten that both Verilog and VHDL are simulation languages, and their LRMs define the simulation semantics of the languages. Descriptions written in these languages can only be synthesised because various vendors have gone to a great deal of trouble to try to divine the designer's intent from the source code, without actually carrying out a simulation. This has led to an ad-hoc definition of a 'synthesisable subset' of the languages (the IEEE did try to standardise this subset - a process I was myself involved in - but this eventually came to nothing). If a description is written in this subset, and the appropriate design patterns are used, and the relevant guidelines are followed, then the netlist generated by a competent synthesis tool will probably simulate in the same way as the original source code. This is, for all intents and purposes, the definition of 'correctness' in synthesis tools.
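
The two fragments below illustrate the point. Both are legal SystemVerilog with well-defined simulation semantics, but only the first follows a pattern that a synthesis tool can turn into hardware:

    // Inside the synthesisable subset: the standard clocked-register
    // pattern, which a synthesis tool recognises as a flip-flop with a
    // synchronous reset.
    module in_subset(input logic clk, reset, d, output logic q);
      always_ff @(posedge clk)
        if (reset) q <= '0;
        else       q <= d;
    endmodule

    // Perfectly legal, simulatable code - but outside the subset: there
    // is no hardware interpretation of a #10 delay, or of $display.
    module outside_subset(output logic q);
      initial begin
        q = 1'b0;
        #10 q = 1'b1;
        $display("q set at time %0t", $time);
      end
    endmodule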

As a designer, your first option to verify a DUT written in Verilog or VHDL is therefore to write your own testbench in the same language. On the face of it, this is a great idea, but it seldom works out in practice. In 25 years of ASIC and FPGA design and development (primarily in RTL, and the rest in modelling and verification), I have met very few FPGA developers who wrote a testbench which was any more complex than the minimum required to generate a waveform display. I've worked with a handful of (very smart) ASIC developers who coded in Verilog and verified in e, and a small number who could write usable VHDL testbenches, but everyone else just handed their code over to a verification department.
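
The archetype looks something like the sketch below (the DUT is invented for illustration): a clock, a reset, a handful of stimulus vectors, and a waveform dump, with nothing actually checked. A self-checking testbench adds stimulus generation, a reference model, and response checking to this skeleton, and that is where the real effort lies:

    // A stand-in DUT: an 8-bit register with synchronous reset.
    module dut(input logic clk, reset, input logic [7:0] din,
               output logic [7:0] dout);
      always_ff @(posedge clk) dout <= reset ? '0 : din;
    endmodule

    // The minimal 'waveform display' testbench: it drives stimulus and
    // dumps waves, but checks nothing - correctness is judged by eye.
    module tb;
      logic clk = 0, reset = 1;
      logic [7:0] din, dout;

      dut u_dut(.clk, .reset, .din, .dout);

      always #5 clk = ~clk;

      initial begin
        $dumpfile("tb.vcd");
        $dumpvars(0, tb);
        din = '0;
        repeat (2) @(posedge clk);
        reset = 0;
        repeat (10) @(posedge clk) din <= din + 8'h11;
        $finish;
      end
    endmodule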

In short, it's actually very difficult, as an RTL designer, to write usable testbenches in the language you use for hardware development: these languages are at once too abstract and too low-level for the job. This is precisely the problem that Maia addresses.