
MODULE 3: UNIT 1:

VERIFICATION GUIDELINES

-Dr. Renuka V Tali


Associate Professor,
Dept. of ECE, KSSEM
3.1.1: VERIFICATION PROCESS
What is the goal of verification? Is it simply to clear bugs?

Purpose of Verification:
The primary objective of verification is to:
 ensure that the design functions as intended
 ensure that it performs the specific tasks described in the design specification
 Unlike debugging, the focus is not just on finding errors but on validating the correctness of the design.
ROLE OF A VERIFICATION ENGINEER:

The verification engineer must:
 Understand the design: This involves thoroughly comprehending the design specification and how the hardware should operate.
 Interpret the human-language description: The design's written, human-readable description is translated into a machine-readable form, typically RTL (Register Transfer Level) code.
 Create the verification plan: This is the blueprint for how the verification process will be conducted, ensuring that all features of the design are tested.
BUILDING TESTS:

 Verification engineers are responsible for building tests that ensure the RTL code correctly implements the intended functionality.
 This is a crucial step to ensure the hardware design aligns with the specification.
CHALLENGES OF INTERPRETATION:

 The verification process is often complicated by ambiguities or missing details in the original document, which can lead to conflicting interpretations.
 A large part of the engineer's role involves resolving these ambiguities so that the design is verified properly.
TESTING AT DIFFERENT LEVELS
Design Bug Detection and Analysis

• Detecting bugs at the block level: Ensure correct addition of numbers, successful bus transactions, and successful packet transmission.
• Integration phase: Look for discrepancies at the boundaries between blocks.
• Troubleshooting: Identify disputed areas of logic and help reconcile different views of the same protocol.
• Challenges: Different designers may have different interpretations of the same protocol, leading to potential issues.
SIMULATING DESIGN BLOCKS DURING DEVELOPMENT
• Simulating a single design block requires tests that generate stimuli from all the surrounding blocks.
• These low-level simulations run fast but may uncover bugs in both the design and the testbench.
• Integrating design blocks reduces the testbench workload, since the blocks stimulate each other, and may uncover additional bugs at their boundaries.
• Higher levels of the Design Under Test (DUT) exercise the entire system, but reduce simulation performance.
• Tests should aim for all blocks performing interesting activities concurrently: I/O ports active, data being crunched, and caches being refilled.
• Data alignment and timing bugs are likely to be exposed by this concurrent activity.
SOPHISTICATED DUT TESTING AND ERROR HANDLING
• Run tests with multiple operations in flight concurrently so that all blocks stay active.
• Test real-world corner cases, such as a user pressing buttons while music is downloading.
• The goal is a product that is easy to use and does not repeatedly lock up.
• Verify how well the DUT handles errors.
• Assess its ability to handle partial transactions and corrupted data or control fields.
• Error injection and handling can be the most challenging part of verification.
SYSTEM-LEVEL VERIFICATION CHALLENGES
• Block-level verification: Ensures that individual cells flow correctly through the ATM router blocks.
• System-level verification: Considers streams of cells with different priorities.
• Statistical analysis of thousands of cells: Verifies aggregate behavior.
• Constantly evolving verification tactics: You can never prove there are no bugs left, so new strategies are always needed.
THE VERIFICATION PLAN
 The verification plan is derived from the hardware specification and contains a description of which features need to be exercised and the techniques to be used.
 These steps may include directed or random testing, assertions, HW/SW co-verification, emulation, formal proofs, and use of verification IP.
3.1.2: BASIC TESTBENCH FUNCTIONALITY
 The purpose of a testbench is to determine the correctness of the DUT.
 This is accomplished by the following steps:

• Generate stimulus
• Apply stimulus to the DUT
• Capture the response
• Check for correctness
• Measure progress against the overall verification goals

Some steps are accomplished automatically by the testbench, while others are manually determined by you.
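As a rough illustration of these steps, the sketch below exercises a trivial adder used as a stand-in DUT (the adder module, its ports, and the test values are illustrative assumptions, not taken from this document):

// Minimal sketch of the basic testbench steps, using a trivial adder
// as a stand-in DUT (module name and ports are illustrative assumptions).
module adder (input logic [7:0] a, b, output logic [8:0] sum);
  assign sum = a + b;
endmodule

module adder_tb;
  logic [7:0] a, b;
  logic [8:0] sum;

  adder dut (.a(a), .b(b), .sum(sum)); // the DUT under verification

  initial begin
    repeat (10) begin
      // 1. Generate stimulus
      a = $urandom_range(0, 255);
      b = $urandom_range(0, 255);
      #1; // 2. Apply stimulus and let the combinational logic settle
      // 3. Capture the response and 4. check for correctness
      if (sum !== a + b)
        $error("Mismatch: %0d + %0d gave %0d", a, b, sum);
      else
        $display("PASS: %0d + %0d = %0d", a, b, sum);
      // 5. Progress is normally measured with functional coverage (Section 3.1.7)
    end
  end
endmodule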
3.1.3: DIRECTED TESTING

 Directed tests focus on specific features based on the hardware specification.
 Engineers create stimulus vectors to test the Design Under Test (DUT).
 Results (logs and waveforms) are manually reviewed to ensure correctness.
 Incremental progress is made, appealing to managers as it shows steady advancement.
 Quick results are produced with minimal infrastructure.
 Suitable for simpler designs if enough time and resources are available.
 May be insufficient for complex designs due to the increasing number of test cases required.
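For instance, a directed test might drive a handful of hand-picked vectors and rely on manual inspection of the log or waveforms. The sketch below targets the packet_processor DUT shown later in Section 3.1.5; the specific vectors are illustrative assumptions:

// Directed-test sketch: hand-written stimulus for the packet_processor DUT
// from Section 3.1.5 (assumed to be compiled along with this testbench).
module packet_processor_directed_tb;
  logic [7:0] address, data;
  logic       valid;

  packet_processor dut (.address(address), .data(data), .valid(valid));

  initial begin
    // Each vector targets one specific case from the specification
    address = 8'h10; data = 8'h02; #1;  // lowest legal address, even data
    $display("addr=%h data=%h valid=%b", address, data, valid);
    address = 8'hF0; data = 8'hFF; #1;  // highest legal address, odd data
    $display("addr=%h data=%h valid=%b", address, data, valid);
    address = 8'h05; data = 8'h04; #1;  // address below the legal range
    $display("addr=%h data=%h valid=%b", address, data, valid);
    // Results are reviewed manually in the log or waveform viewer
  end
endmodule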
DIRECTED TEST PROGRESS OVER TIME
 Figure 3.1.1 shows how directed tests incrementally
cover the features in the verification plan. Each test
is targeted at a very specific set of design elements. If
you had enough time, you could write all the tests
needed for 100% coverage of the entire verification
plan.
DIRECTED TEST COVERAGE

 Figure 3.1.2 shows the total design space and the features that are covered by directed test cases. In this space there are many features, some of which have bugs. You need to write tests that cover all the features and find the bugs.
3.1.4: METHODOLOGY BASICS

The principles used are:

 Constrained-random stimulus
 Functional coverage
 Layered testbench using transactors
 Common testbench for all tests
 Test case-specific code kept separate from the testbench
METHODOLOGY BASICS
 Random Stimuli Importance: Random testing is
essential for identifying unexpected bugs in complex
designs, unlike directed tests which focus on known
issues.
 Functional Coverage: To measure verification progress
with random stimuli, functional coverage metrics are
necessary.
 Automated Result Prediction: With automatically
generated stimuli, an automated method (like a
scoreboard or reference model) is needed to predict
results.
 Testbench Infrastructure: Building a robust testbench
infrastructure, including self-prediction capabilities,
requires significant effort.
 Layered Testbench: A layered approach helps manage
complexity by breaking down the problem into smaller,
manageable pieces.
METHODOLOGY BASICS
 Transactors: Useful patterns like transactors can
aid in constructing the various components of the
Testbench.
 Reusable Infrastructure: Plan the Testbench so
it can be shared among all tests, with "hooks" for
specific actions (e.g., stimulus shaping).
 Separation of Concerns: Keep code specific to a
single test separate from the main Testbench to
avoid complicating the infrastructure.
DRAWBACKS OF THE CONSTRAINED-RANDOM APPROACH

 Longer Setup Time: Building this style of testbench takes more time compared to traditional directed testbenches, potentially delaying the first test run.
 Management Considerations: The initial delay in running the first random test may cause concern for managers; this should be factored into project schedules.
CONSTRAINED RANDOM TEST PROGRESS
RANDOM TESTING
 High Payoff: Despite the initial daunting setup, creating a shared
testbench for random tests offers significant benefits.
 Common Testbench: Each random test utilizes a common
testbench, unlike directed tests, which require separate code for each
test.
 Concise Code for Constraints: Random tests consist of a few
dozen lines of code to constrain stimuli and create exceptions (e.g.,
protocol violations).
 Faster Bug Discovery: The single constrained-random testbench
can identify bugs more quickly than multiple directed tests.
 Exploration of New Areas: As the rate of bug discovery decreases,
new random constraints can be introduced to explore different areas
of the design.
 Directed Tests for Final Bugs: While most bugs are found
through random testing, the last few may still require directed tests.
 Flexibility of Random Testbench: A random testbench can be
adapted to create directed tests, but a directed testbench cannot be
converted into a true random testbench.
3.1.5: CONSTRAINED RANDOM STIMULUS

 Constrained-random stimulus is a testing method where random inputs are generated within specific constraints to explore a wide range of scenarios while ensuring valid test conditions.
 The simple design below is used as the running example: it flags a packet as valid when the address lies in a legal range and the data is even.
// Code your design here
module packet_processor (
  input  logic [7:0] address,
  input  logic [7:0] data,
  output logic       valid
);
  // Simple design logic:
  // Example: if the address is within the valid range and the data is even,
  // assert the 'valid' output.
  always_comb begin
    if ((address >= 8'h10 && address <= 8'hF0) && (data % 2 == 0)) begin
      valid = 1; // Valid packet
    end else begin
      valid = 0; // Invalid packet
    end
  end
endmodule
CONSTRAINED RANDOM STIMULUS: EX. CODE

class Packet;
  rand bit [7:0] address;
  rand bit [7:0] data;

  // Constraints
  constraint address_range { address >= 8'h10 && address <= 8'hF0; }
  constraint data_even     { data % 2 == 0; }

  // Constructor
  function new();
  endfunction
endclass

module packet_testbench;
  // Signals to connect to the packet_processor design
  logic [7:0] address;
  logic [7:0] data;
  logic       valid;

  // Instantiate the packet_processor (Design Under Test - DUT)
  packet_processor uut (
    .address(address), // Connects the address to the DUT
    .data(data),       // Connects the data to the DUT
    .valid(valid)      // Receives the valid output from the DUT
  );

  // Create an instance of the Packet class
  Packet pkt = new(); // Proper initialization after the Packet class is defined

  // Testbench logic for randomizing and applying stimulus
  initial begin
    // Loop to apply multiple test cases
    repeat (5) begin
      // Randomize the packet with the applied constraints
      if (pkt.randomize()) begin
        // Apply the randomized values to the design inputs
        address = pkt.address;
        data    = pkt.data;
        // Wait for a delta cycle for the combinational logic to settle
        #1;
        // Display the randomized packet data and the design's valid output
        $display("Randomized Packet - Address: %h, Data: %h, Valid: %b",
                 address, data, valid);
      end else begin
        $display("Failed to randomize packet");
      end
      // Wait for a short time before generating the next packet
      #5;
    end
  end
endmodule
CONSTRAINED RANDOM TEST COVERAGE
 First, notice that a random test often covers a wider space than
a directed one.
 This extra coverage may overlap other tests, or may explore
new areas that you did not anticipate.
 If these new areas find a bug, you are in luck! If the new area is
not legal, you need to write more constraints to keep random
generation from creating illegal design functionality.
 Lastly, you may still have to write a few directed tests to find
cases not covered by any other constrained-random tests.
CONSTRAINED RANDOM TEST COVERAGE
 Figure shows the paths to achieve complete coverage. Start at
the upper left with basic constrained-random tests. Run them
with many different seeds.
 When you look at the functional coverage reports, find the
holes where there are gaps in the coverage.
 Then you make minimal code changes, perhaps by using new
constraints, or by injecting errors or delays into the DUT.
 Spend most of your time in this outer loop, writing directed
tests for only the few features that are very unlikely to be
reached by random tests.
3.1.6: WHAT SHOULD YOU RANDOMIZE?
 When you think of randomizing the stimulus to a design, you might first pick the data fields. These values are the easiest to create — just call $random().
 The problem is that this choice gives a very low payback in terms of bugs found.
 The primary types of bugs found with random data are data path errors, perhaps with bit-level mistakes.
 You need to find bugs in the control logic, the source of the most devious problems.
Think broadly about all design inputs, such as the following:
 Device configuration
 Environment configuration
 Input data
 Protocol exceptions
 Errors and violations
 Delays
1. DEVICE AND ENVIRONMENT CONFIGURATION

 In the real world, your device operates in an environment containing other components.
 When you are verifying the DUT, it is connected to a testbench that mimics this environment.
 You should randomize the entire environment configuration, including the length of the simulation, the number of devices, and how they are configured.
 Of course, you need to create constraints to make sure the configuration is legal.
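As a small illustration, an environment configuration object could be randomized once at the start of the test; the class name, fields, and limits below are assumptions, not taken from this document:

// Hypothetical sketch: randomizing an environment configuration
// (class name, fields, and limits are illustrative assumptions).
class EnvConfig;
  rand int unsigned num_devices;   // how many devices to instantiate
  rand int unsigned sim_length;    // how many transactions to run
  rand bit          error_inject;  // whether to enable error injection

  // Constraints keep the randomized configuration legal
  constraint legal_cfg {
    num_devices inside {[1:8]};
    sim_length  inside {[10:1000]};
  }
endclass

// Usage sketch: randomize the configuration once, then build the
// environment and run the test according to it.
// EnvConfig cfg = new();
// if (!cfg.randomize()) $fatal(1, "Configuration randomization failed");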
2. INPUT DATA
 You need to account for:
 Layered protocols: Ensuring that your design works with multiple layers of communication protocols.
 Error injection: Testing how the design handles errors or exceptions.
 Scoreboarding: A mechanism to track whether the system is performing as expected.
 Functional coverage: Ensuring that all functional areas of the design have been adequately tested.
 Proper preparation helps achieve effective random testing while ensuring coverage and robustness in design validation.
3. PROTOCOL EXCEPTIONS, ERRORS, AND VIOLATIONS

If something can go wrong in the real hardware, you should try to simulate it. Look at all the errors that can occur.
 What happens if a bus transaction does not complete?
 What if an invalid operation is encountered?
 Does the design specification state that two signals are mutually exclusive? Drive them both and make sure the device continues to operate properly.
 Just as you are trying to provoke the hardware with ill-formed commands, you should also try to catch these occurrences.
For example, recall those mutually exclusive signals.

 You should add checker code to look for these violations.
 Your code should at least print a warning message when this occurs, and preferably generate an error and wind down the test.
 It is frustrating to spend hours tracking back through code trying to find the root of a malfunction, especially when you could have caught it close to the source with a simple assertion.
 Just make sure that you can disable the code that stops simulation on error so that you can easily test error handling.
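A minimal sketch of such a checker is shown below, assuming two hypothetical request signals req_a and req_b that the specification says are mutually exclusive (all names are illustrative assumptions):

// Hypothetical checker sketch: req_a and req_b are assumed to be
// mutually exclusive per the specification (names are illustrative).
module mutex_checker (input logic clk, input logic req_a, input logic req_b);
  // Concurrent assertion: both requests must never be high in the same cycle.
  property p_mutex;
    @(posedge clk) !(req_a && req_b);
  endproperty

  assert property (p_mutex)
    else $error("Protocol violation: req_a and req_b asserted together");
  // Use $assertoff/$asserton to disable this check when deliberately
  // injecting errors to test the DUT's error handling.
endmodule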
4. DELAYS AND SYNCHRONIZATION
Strategies for timing and delays in a testbench to catch subtle protocol bugs:
Random delays should be introduced in the testbench to catch timing-related bugs (a sketch follows this list).
Multiple interfaces:
 Even if a block works with a single interface, bugs may emerge when transactions occur on multiple interfaces simultaneously.
 Test scenarios where multiple drivers communicate at different rates.
Test scenarios like:
 Inputs arriving at the fastest rate, but outputs being processed more slowly (throttled).
 Stimulus arriving concurrently or staggered across multiple inputs.
 Functional coverage should be used to track which timing and delay combinations have been generated and tested, ensuring thorough exploration of the design.
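One way to introduce such randomized delays is to make the delay itself a random field of the transaction. The field names, ranges, and distribution weights below are illustrative assumptions:

// Hypothetical sketch: a transaction class with a randomized
// inter-transaction delay (field names and ranges are assumptions).
class BusTransaction;
  rand bit [31:0]    addr;
  rand bit [31:0]    wdata;
  rand int unsigned  idle_cycles;  // delay before driving this transaction

  constraint delay_dist {
    // Mostly back-to-back traffic, occasionally a long gap
    idle_cycles dist { 0 := 60, [1:5] := 30, [20:50] := 10 };
  }
endclass

// In the driver, the delay is consumed before driving the transaction:
// repeat (tr.idle_cycles) @(posedge clk);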
5. PARALLEL RANDOM TESTING

How should you run the tests?
 A directed test has a testbench that produces a unique set of stimulus and response vectors. To change the stimulus, you need to change the test.
 A random test consists of the testbench code plus a random seed. If you run the same test 50 times, each time with a unique seed, you will get 50 different sets of stimuli.
 Running with multiple seeds broadens the coverage of your test and leverages your work.
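How the seed reaches the testbench is simulator-specific; most simulators provide a command-line seed option. As one hedged illustration, a seed could also be passed as a plusarg and used to re-seed the random number generator (the +SEED plusarg name is an assumption):

// Hypothetical sketch: picking up a per-run seed from a plusarg.
module seed_example;
  int unsigned seed;

  initial begin
    if (!$value$plusargs("SEED=%d", seed))
      seed = 1; // default seed when none is supplied
    // Re-seed this process's random number generator
    process::self().srandom(seed);
    $display("Running with seed %0d", seed);
    // ... generate constrained-random stimulus as usual ...
  end
endmodule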
3.1.7: FUNCTIONAL COVERAGE
The process of measuring and using functional coverage consists of several steps.

 First, you add code to the testbench to monitor the stimulus going into the device, and its reaction and response, to determine what functionality has been exercised.
 Run several simulations, each with a different seed. Next, merge the results from these simulations into a report.
 Lastly, you analyze the results and determine how to create new stimulus to reach untested conditions and logic.
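As a rough sketch of such monitoring code for the packet_processor example from Section 3.1.5, a covergroup could record which address ranges and data parities have been exercised (the covergroup name and bin boundaries are illustrative assumptions):

// Hypothetical covergroup sketch for packet_processor stimulus
// (covergroup name and bin boundaries are illustrative assumptions).
class PacketCoverage;
  bit [7:0] address;
  bit [7:0] data;

  covergroup packet_cg;
    cp_addr : coverpoint address {
      bins low     = {[8'h10:8'h3F]};
      bins mid     = {[8'h40:8'hBF]};
      bins high    = {[8'hC0:8'hF0]};
      bins illegal = default;        // everything outside the legal range
    }
    cp_parity : coverpoint data[0] {
      bins even = {1'b0};
      bins odd  = {1'b1};
    }
    addr_x_parity : cross cp_addr, cp_parity;
  endgroup

  function new();
    packet_cg = new();
  endfunction

  // Call this from the monitor every time a packet is observed
  function void sample_packet(bit [7:0] a, bit [7:0] d);
    address = a;
    data    = d;
    packet_cg.sample();
  endfunction
endclass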
FEEDBACK FROM FUNCTIONAL COVERAGE TO
STIMULUS
 Coverage-driven verification is a verification
methodology where random tests are continuously
refined using feedback from functional coverage to
target and explore untested areas of the design
space.
3.1.8: TEST BENCH COMPONENTS
 In simulation, the testbench wraps around the DUT, just as a
hardware tester connects to a physical chip, as shown in
Fig. 1.7.
 Both the testbench and tester provide stimulus and capture
responses.
 The difference between them is that your testbench needs to
work over a wide range of levels of abstraction, creating
transactions and sequences, which are eventually
transformed into bit vectors.
 A tester just works at the bit level.
3.1.8: TEST BENCH COMPONENTS
 The testbench block consists of Bus Functional Models (BFMs): testbench components that generate stimulus and check responses, mimicking the real devices connected to the DUT over buses such as AMBA, USB, PCI, and SPI.
 These high-level transactors obey the protocol and execute quickly. However, for prototyping using FPGAs or emulation, BFMs need to be synthesizable.
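A heavily simplified, hedged sketch of what a BFM-style write task might look like for a generic bus is shown below; the interface, signal names, and handshake are illustrative assumptions, not any real AMBA/USB/PCI/SPI protocol:

// Hypothetical BFM sketch for a simple generic bus write.
// In practice this task would live inside a driver/transactor class.
interface simple_bus_if (input logic clk);
  logic        req, gnt;
  logic [31:0] addr, wdata;
endinterface

task automatic bus_write (virtual simple_bus_if bus,
                          input logic [31:0] addr,
                          input logic [31:0] data);
  // Drive a request and wait for the grant handshake
  @(posedge bus.clk);
  bus.req   <= 1'b1;
  bus.addr  <= addr;
  bus.wdata <= data;
  wait (bus.gnt == 1'b1);
  @(posedge bus.clk);
  bus.req   <= 1'b0;  // release the bus after the transfer
endtask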
3.1.9: LAYERED TEST BENCH: 1. A FLAT TEST BENCH
2: THE SIGNAL AND COMMAND LAYERS
 At the bottom is the signal layer that contains the design under test
and the signals that connect it to the testbench.
 The next higher level is the command layer. The DUT’s inputs are
driven by the driver that runs single commands, such as bus read or
write. The DUT’s output drives the monitor that takes signal
transitions and groups them together into commands.
 Assertions also cross the command/signal layer boundary, as they look at individual signals and also at changes across an entire command.
3: THE FUNCTIONAL LAYER

 Figure 1.10 shows the testbench with the functional layer added, which feeds down into the command layer.
 The agent block receives higher-level transactions such as DMA read or write and breaks them into individual commands or transactions.
 These commands are also sent to the scoreboard that predicts the results of the transaction. The checker compares the commands from the monitor with those in the scoreboard.
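A bare-bones, hedged sketch of this scoreboard/checker idea follows (class and field names are illustrative assumptions): expected commands predicted from the stimulus are queued, and each command observed by the monitor is compared against the next expected one.

// Hypothetical scoreboard sketch (class and field names are assumptions).
class Scoreboard;
  // Expected commands predicted from the stimulus, oldest first
  bit [31:0] expected_q[$];

  // Called by the agent when it predicts a result
  function void add_expected(bit [31:0] cmd);
    expected_q.push_back(cmd);
  endfunction

  // Called by the checker with each command seen by the monitor
  function void check_actual(bit [31:0] cmd);
    bit [31:0] exp;
    if (expected_q.size() == 0) begin
      $error("Unexpected command %h with empty scoreboard", cmd);
      return;
    end
    exp = expected_q.pop_front();
    if (exp !== cmd)
      $error("Mismatch: expected %h, got %h", exp, cmd);
  endfunction
endclass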
4: THE SCENARIO LAYER
5: THE TEST LAYER AND FUNCTIONAL COVERAGE
 Signal Layer: This layer contains all the signals and variables needed for the simulation, including inputs and outputs.
 Command Layer: This layer includes procedures or tasks for controlling the simulation, such as resetting the counter or starting the clock.
 Functional Layer: This layer comprises reusable functions to check or evaluate conditions.
 Scenario Layer: This layer describes specific test cases or scenarios used to validate the functionality.
