Lecture 8 VHDL Test Benches
Contents
• Purpose of test benches
• Structure of simple test bench
– Side note about delay modeling in VHDL
• More elegant test benches
– Separate, more reusable stimulus generation
– Separate sink from the response
– File handling for stimulus and response
• Example and conclusions
16.11.2017 8
What Is a VHDL Test Bench (TB)?
• A VHDL test bench (TB) is a piece of code meant to verify the functional correctness of an
HDL model
• The main objectives of a TB are to:
1. Instantiate the design under test (DUT)
2. Generate stimulus waveforms for DUT
3. Generate reference outputs and compare them with the outputs of DUT
4. Automatically provide a pass or fail indication
• The test bench is part of the circuit’s specification
• Sometimes it’s a good idea to design the test bench before the DUT
– Ambiguities in the functional specification are found
– Forces the designer to dig out the relevant information about the environment
– Use different designers for the DUT and its TB!
-- adder entity ports match the component declaration shown later
ARCHITECTURE RTL OF adder IS
BEGIN
  PROCESS (clk, rst_n)
  BEGIN -- process
    IF rst_n = '0' THEN
      y <= (OTHERS => '0');
    ELSIF clk'EVENT AND clk = '1' THEN
      y <= a + b;
    END IF;
  END PROCESS;
END RTL;
Be careful! Behavior will be strange if the edges of clk and a signal generated this way are aligned.
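One way to avoid that hazard is to update the stimulus on the opposite clock edge, so it always settles well before the DUT samples it. A minimal sketch (signal names a, b and the arithmetic package are assumptions):

```vhdl
-- Sketch: update stimulus on the FALLING edge so its transitions can
-- never coincide with the rising edge that clocks the DUT.
-- Assumes: a, b : UNSIGNED(2 DOWNTO 0) and an arithmetic package.
stimulus_gen : PROCESS (clk, rst_n)
BEGIN
  IF rst_n = '0' THEN
    a <= (OTHERS => '0');
    b <= (OTHERS => '0');
  ELSIF clk'EVENT AND clk = '0' THEN      -- falling edge
    a <= a + 1;
    b <= b + 2;
  END IF;
END PROCESS stimulus_gen;
```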
Arto Perttula 16.11.2017 23
MORE ELEGANT TEST BENCHES
Test Bench with a Separate Source
• Source and DUT instantiated into TB
• For designs with complex inputs and simple outputs
• Source can be, e.g., another entity or a process
Test Bench with a Separate Source (2): Structure
ENTITY counter IS
PORT (
clk : IN STD_LOGIC;
rst_n : IN STD_LOGIC;
y : OUT UNSIGNED(2 DOWNTO 0)
);
END counter;
Test Bench with a Separate Source (4): Create Counter
• Test bench:
– Architecture
ARCHITECTURE separate_source OF source_tb IS
– Declare the components: DUT and source
COMPONENT adder
PORT (
clk : IN STD_LOGIC;
rst_n : IN STD_LOGIC;
a, b : IN UNSIGNED(2 DOWNTO 0);
y : OUT UNSIGNED(2 DOWNTO 0)
);
END COMPONENT;
COMPONENT counter
PORT (
clk : IN STD_LOGIC;
rst_n : IN STD_LOGIC;
y : OUT UNSIGNED(2 DOWNTO 0)
);
END COMPONENT;
Test Bench with a Separate Source (6): Instantiate
• Simulation:
– Waveform shows the stimulus from the counter and the response of the DUT
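Pieced together, the architecture body might look roughly like this sketch (signal and instance names are assumed; here the counter output drives both adder inputs):

```vhdl
ARCHITECTURE separate_source OF source_tb IS
  -- component declarations for adder and counter as shown earlier
  SIGNAL clk      : STD_LOGIC := '0';
  SIGNAL rst_n    : STD_LOGIC := '0';
  SIGNAL cntr_out : UNSIGNED(2 DOWNTO 0);  -- stimulus
  SIGNAL duv_out  : UNSIGNED(2 DOWNTO 0);  -- response
BEGIN
  i_source : counter
    PORT MAP (clk => clk, rst_n => rst_n, y => cntr_out);

  i_duv : adder
    PORT MAP (clk   => clk,
              rst_n => rst_n,
              a     => cntr_out,
              b     => cntr_out,   -- same source feeds both inputs
              y     => duv_out);

  clk   <= NOT clk AFTER 5 ns;     -- clock generation
  rst_n <= '0', '1' AFTER 12 ns;   -- reset generation
END separate_source;
```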
Mutation Testing
• Q: How do you know if your TB really catches any bugs?
• A: Create intentional bugs and see what happens
• Examples
– Some output is stuck to ’0’ or ’1’
– Condition if (a and b) then becomes if (a or b) then
– Comparison > becomes >=
– Assignment b <= a becomes b <= a+1
– Loop iterates 1 round less or more
– State machine starts from different state or omits some state change
• Automated mutation tools replace manual work
• If a mutation is not detected, the reason could be:
a) Poor checking
b) Too few test cases
c) The codes were actually functionally equivalent
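As a concrete (hypothetical) illustration, the “> becomes >=” mutation could look like this; a TB without a test case at the boundary value will not notice the difference:

```vhdl
-- Original: 'full' asserted only when count exceeds the limit
full <= '1' WHEN count > limit ELSE '0';

-- Mutant (commented out): off-by-one at the boundary. Only a test
-- case with count = limit can distinguish the two versions.
-- full <= '1' WHEN count >= limit ELSE '0';
```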
• Test bench:
– Libraries; remember to declare the textio library!
library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.std_logic_arith.all; -- old skool
use std.textio.all;
use IEEE.std_logic_textio.all;
• Simulation:
Now, the designer can prepare multiple test sets for certain corner
cases (positive/negative values, almost max/min values, otherwise
interesting cases). However, the VHDL is not modified.
This version does not check the response yet.
Test Bench with Text-IO (10): Files
• Input file provided to the test bench
• Output file produced by the test bench
...
# Comments:
# For each stimulus file, the designer also prepares the expected output trace. It can be automatically
# compared to the response of the DUV, either in VHDL or using the command line tool diff in Linux/Unix.
# It is good to allow comments in stimulus file. They can describe the structure:
# e.g. There is 1 line per cycle, 1st value is… in hexadecimal format, 2nd value is…
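A stimulus-reading process along these lines could be sketched as follows (the file name, signal names, and the one-integer-per-line format are assumptions; CONV_UNSIGNED comes from the std_logic_arith package declared above):

```vhdl
read_stimulus : PROCESS
  FILE stim_f      : TEXT OPEN READ_MODE IS "ov1_in.txt";  -- assumed name
  VARIABLE line_v  : LINE;
  VARIABLE value_v : INTEGER;
BEGIN
  WHILE NOT ENDFILE(stim_f) LOOP
    READLINE(stim_f, line_v);
    -- skip empty lines and '#' comment lines
    IF line_v'LENGTH > 0 THEN
      IF line_v(1) /= '#' THEN
        READ(line_v, value_v);
        a <= CONV_UNSIGNED(value_v, a'LENGTH);
        WAIT UNTIL clk'EVENT AND clk = '1';
      END IF;
    END IF;
  END LOOP;
  WAIT;  -- stimulus exhausted, suspend forever
END PROCESS read_stimulus;
```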
Use Headers in Input And Output Files
• Fundamental idea is to have many input, reference output, and result files
– Provide clear links between these
– Use headers!
• E.g., input file:
# File: ov1_in.txt
# Purpose: Test overflow…
# Designer: Xavier Öllqvist
# Date: 2013-12-24 16:55:02
# Version: 1.1
# Num of cases: 430
# Line format: hexadecimal…
. . .
• E.g., DUV output log could provide a result summary:
# File: ov1_out.txt
# Input File: ov1_in.txt
# Time: 2014-02-03 14:24:33
# User: xavi
# Format:…
0
0
4…
# Sim ends at: 100 ms
# Num of cases: 430
# Errors: 0
# Throughput: 25.4 MB/s
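The matching write side can stamp the header fields before logging data. A sketch (file and signal names assumed; CONV_INTEGER from std_logic_arith):

```vhdl
write_response : PROCESS (clk)
  FILE resp_f      : TEXT OPEN WRITE_MODE IS "ov1_out.txt";  -- assumed
  VARIABLE line_v  : LINE;
  VARIABLE first_v : BOOLEAN := TRUE;
BEGIN
  IF clk'EVENT AND clk = '1' THEN
    IF first_v THEN                       -- write header once
      WRITE(line_v, STRING'("# File: ov1_out.txt"));
      WRITELINE(resp_f, line_v);
      WRITE(line_v, STRING'("# Input File: ov1_in.txt"));
      WRITELINE(resp_f, line_v);
      first_v := FALSE;
    END IF;
    WRITE(line_v, CONV_INTEGER(y));       -- one response value per line
    WRITELINE(resp_f, line_v);
  END IF;
END PROCESS write_response;
```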
For example, ensure that two models are equivalent, e.g., a behavioral
model for fast simulation and an RTL model for efficient synthesis.
Example of Golden Design Test Bench
• Often, a system is first modeled with software and then parts of it are hardware accelerated
– The software implementation serves as a golden reference
– Tends to be quite slow
• E.g., a video encoder implemented in C, with motion estimation hardware accelerated
• A foreign language interface (FLI) can connect the software model to the VHDL test bench
Design Flow
What Is the Interconnection?
• The interconnection topology does not matter, since we make no assumptions about it, only about the
functionality of the interconnection:
a) Data arrives at correct destination
b) Data does not arrive at a wrong destination
c) Data does not change (or duplicate)
d) Data B does not arrive before A, if A was sent before it
e) No IP starves (is blocked for long periods of time)
Example: Verifying an Interconnection
• The tested interconnection delivers data from a single source to a destination
– Same principle, same IP interface, slightly different addressing
• Note that only the integrity of the transferred data is important, not what it represents
– Running numbers are great!
• The test bench should provide a few assertions (features a–d on the previous slide)
• When checking these assertions, you also implicitly verify the correctness of the interface!
– I.e., read-during-write, write to full buffer, write with an erroneous command etc.
• All of these can be accomplished fairly simply with an automatic test bench requiring no external
files
• TB uses pseudo-random numbers (*)
– Longer simulation provides more thoroughness
– The same run can be repeated because it is not really random
– (*) Note that even if pseudo-random sequence is exactly the same, any change in DUV timing might mask
out the bug in later runs
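With running numbers, the sink needs only one expected-value counter to detect lost, duplicated, and reordered data (checks a–d above). A single-source sketch (signal names expected_r, data_valid, and data_in are assumptions):

```vhdl
-- Sink keeps a counter of the next expected running number.
-- Any loss, duplication, or reordering breaks the equality check.
check_sink : PROCESS (clk, rst_n)
BEGIN
  IF rst_n = '0' THEN
    expected_r <= (OTHERS => '0');
  ELSIF clk'EVENT AND clk = '1' THEN
    IF data_valid = '1' THEN
      ASSERT data_in = expected_r
        REPORT "Data lost, duplicated, or reordered"
        SEVERITY error;
      expected_r <= expected_r + 1;
    END IF;
  END IF;
END PROCESS check_sink;
```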
begin -- tb
  -- component instantiation
  DUV : traffic_light ...
  -- clock generation
  -- reset generation
end tb;
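The clock and reset generation hinted at in the skeleton are typically one-liners (the period and release time here are arbitrary assumptions; clk must be initialised to '0'):

```vhdl
CONSTANT clk_period_c : TIME := 10 ns;  -- assumed period

clk   <= NOT clk AFTER clk_period_c / 2;   -- free-running clock
rst_n <= '0', '1' AFTER 4 * clk_period_c;  -- active-low reset released
```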
A. Manual
– Generate test with the ModelSim force command
– Check the waveforms
B. Automated test generation and response checking
• B is the only viable option
• This real-life figure shows only
– 1/3 of the signals
– 1/50 000 of the time line
• This check should be repeated a few times a day during development…
Summary And General Guidelines
• Every entity you design has its own test bench
• Automatic verification and result checking
– Input generated internally or from a file
– Output checked automatically
– The fewer external files we rely on, the easier the usage is
• Somebody else will also be using your code!
– ”vsim my_tb; run 10ms;” ”tests ok”
– or just type ”make verify”
• Timeout counters detect if the design does not respond at all!
– Do not rely on the designer checking the waves
• Test infrastructure is often similar in size to, and sometimes larger than, the actual design
– TB must be easy to run and analyze the results
– TB may have bugs
– TB may be reused and modified. That must be easy.
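The timeout counter mentioned above can be a simple watchdog process. A sketch (signal names and the limit are assumptions):

```vhdl
-- Fail the simulation if the DUV produces nothing for too long.
-- Assumes: duv_valid : STD_LOGIC and silent_r : INTEGER signal.
watchdog : PROCESS (clk, rst_n)
BEGIN
  IF rst_n = '0' THEN
    silent_r <= 0;
  ELSIF clk'EVENT AND clk = '1' THEN
    IF duv_valid = '1' THEN
      silent_r <= 0;                 -- DUV responded, restart count
    ELSE
      silent_r <= silent_r + 1;
      ASSERT silent_r < 1000         -- assumed timeout limit
        REPORT "Timeout: DUV did not respond"
        SEVERITY failure;
    END IF;
  END IF;
END PROCESS watchdog;
```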
[P. Woo, Structured ASICs – A Risk Management Tool, Design&Reuse, Sep. 2005, [online] Available: http://www.design-reuse.com/articles/11367/structured-asics-a-risk-management-tool.html]
Time-To-Market (TTM)
Table 1. Time-to-market matters

Time-To-Market Achieved   Potential Sales
First-To-Market           100%
3 Months Late             73%
6 Months Late             53%
9 Months Late             32%
12 Months Late            9%

[P. Magarshack, SoC at the heart of conflicting trends, Réunion du comité de pilotage (20/02/2002), http://www.comelec.enst.fr/rtp_soc/reunion_20020220/ST_nb.pdf]
[http://blogs.mentor.com/verificationhorizons/blog/2011/04/01/part-3-the-2010-wilson-research-group-functional-verification-study/slide21-2/]
Verification Methods (1)
1. Reviews come in two flavors
1. Specification reviews are especially useful
• Remove ambiguities
• Define how certain aspects of the specification are recognized and analyzed in the final product
• Be sure to capture the customer’s wishes correctly
2. Code review
• Designer explains the code to others
• Surprisingly good results even if others do not fully understand
• Good for creating clear and easy-to-understand code
• Limited to small code sizes
• Define and adopt coding guidelines
– Automated checkers (lint) tools available
– ”I’ve made many many product mistakes over the years. I should at least help make sure we make new mistakes this time around”
• Eric Hahn on code reviews
• Black box
– Contents invisible
– Access only through primary inputs and outputs
– E.g., proto-chip, SW binary
• Gray box
– Some parts visible, perhaps touchable
– E.g., proto-chip with test IO, some FPGAs
• White box, glass box
– All parts fully visible and touchable
– E.g., RTL
Repeating Tests
• The same error must be repeated to see if the fix works
– Same test data, same timing
⇒ Automated test generation
• Must ensure that the ”fix” does not break any other part of the system
⇒ Automated checking
• Manual checking is suitable only for TB creation
• Preferably the same TB is used throughout the design refinement
– E.g., RTL and gate level use the same TB
• Keep all the test cases that have failed
– Already fixed errors sometimes reappear later
• Partition test cases into smaller sets
Bug-Free Behaviour Not Guaranteed
• [M. Keating, Toward Zero-Defect Design: Managing Functional Complexity in the Era of Multi-Million-Gate Design Architecture, DCAS '05]
Should not be used without test cases with ’known values’
• Only short traces (e.g., < 2048 cycles) are collected if only on-chip memory is utilized
– Hard to define the correct trigger condition
Tracking Error Source (1)
a) Pass / no pass
– When errors cannot be corrected, the source is irrelevant
– E.g., manufacturing test on chips (black box)
• Faulty chips are thrown away
b) Usually the errors should be corrected
– Locating the error is necessary
• SW or HW
• Which component
• Which internal state
• Line of code
• Which test case (input sequence)
• Locating errors is easier
– the smaller the system being verified is
– the fewer changes have been made to a functioning system
[B. Bailey, Property Based Verification for SoC, Tampere SoC, 2003]
Tracking Error Source (3)
• Sad example of bus testing:
– Transmit the value 0x7 fifty times and check that 50 values are received. What if:
a) <50 data received: which were missing?
b) 50 received: is some data duplicated and some missing?
c) >50 received: which one is duplicated?
• Selection of test data may simplify finding error
– Consider transmitting sequential numbers (1,2,3...) over the bus instead of the constant value 7
– Locating duplicated/missing data is trivial
– Of course, sequential numbers are not that useful with, e.g., arithmetic components
– Using unique values in the test input helps to track the error from the waveform or trace
• Differentiate sources: (1,2,3...); (101,102,103...); ... (901,902,903...)
Assertions (1)
• Express the design intent
– Best captured by the designer
– ”Built-in implementation specification”
• Check that certain properties always hold
– FSM won’t enter illegal state, one-hot encoding is always legal etc.
• Checked during simulation, not synthesizable
– Synthesizable HDL assertion would be über-cool
• E.g., signals A and B must NEVER be ’1’ at the same time
– VHDL:
assert NOT (A = '1' AND B = '1')
  report "A and B simultaneously asserted"
  severity warning;
VHDL simulator: Warning: A and B simultaneously asserted, Time: 500 ns,
component : /tb_system/tx_ctrl/
• USE ASSERTIONS
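For instance, the one-hot property mentioned above can be checked with a small clocked process (the state signal name is an assumption):

```vhdl
-- Assert that state_r always has exactly one bit set.
one_hot_check : PROCESS (clk)
  VARIABLE ones_v : NATURAL;
BEGIN
  IF clk'EVENT AND clk = '1' THEN
    ones_v := 0;
    FOR i IN state_r'RANGE LOOP
      IF state_r(i) = '1' THEN
        ones_v := ones_v + 1;
      END IF;
    END LOOP;
    ASSERT ones_v = 1
      REPORT "State register is not one-hot"
      SEVERITY error;
  END IF;
END PROCESS one_hot_check;
```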