Module 5: VLSI
• When randomizing the stimulus to a design, the first consideration might be the
data fields.
• These are the easiest to create using $random.
• The problem is that this approach only identifies data-path bugs, such as bit-level
mistakes, and does not address control logic bugs.
• Control logic bugs are more challenging and require randomizing all decision
points in the DUT.
• Randomization increases the probability of exploring different paths in each test
case.
• Broader inputs to consider for randomization include:
• Device configuration
• Environment configuration
• Primary input data
• Encapsulated input data
• Protocol exception
• Delays
• Transaction status
• Errors and violations
• 6.2.6 Delays
• Randomize delays within the legal range specified by communication protocols.
• Example:
• Introduce clock jitter to test the design’s sensitivity to timing variations.
• Use a clock generator outside the testbench with configurable parameters (e.g.,
frequency, offset).
• Focus on functional errors, not timing violations like setup/hold errors, which are
better validated using timing analysis tools.
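The advice about a configurable external clock generator can be sketched as follows; the module name, parameters, and jitter mechanism are illustrative assumptions, not code from the book's samples:

```systemverilog
`timescale 1ns/1ps

// Hypothetical configurable clock generator kept outside the testbench.
module clock_gen #(parameter real HALF_PERIOD_NS = 5.0) (output bit clk);
  int unsigned max_jitter_ps = 0;   // set nonzero to enable clock jitter

  initial begin
    clk = 0;
    forever begin
      int jitter_ps;
      // Pick a random offset in +/- max_jitter_ps for this edge
      jitter_ps = $urandom_range(2 * max_jitter_ps) - max_jitter_ps;
      #(HALF_PERIOD_NS + jitter_ps / 1000.0) clk = ~clk;
    end
  end
endmodule
```

A test can then vary max_jitter_ps to probe the design's sensitivity to timing variations without touching the testbench itself.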
• The random stimulus generation in SystemVerilog is most useful when used with
OOP.
• You first create a class to hold a group of related random variables, and then have
the random-solver fill them with random values.
• You can create constraints to limit the random values to legal values, or to test-
specific features.
• Note that you can randomize individual variables, but this case is the least
interesting.
• True constrained-random stimulus is created at the transaction level, not one value at a time.
Simple Class with Random Variables
Sample 6.1 shows a packet class with random variables and constraints, plus testbench code that constructs and randomizes a packet.
• This class has four random variables.
• The first three use the rand modifier, so that every time you randomize the class,
the variables are assigned a value.
• Think of rolling dice: each roll could be a new value or repeat the current one.
• The kind variable is randc (random cyclic), which ensures that the random solver
does not repeat a random value until every possible value has been assigned.
• Think of dealing cards from a deck: you deal out every card in the deck in random
order, then shuffle the deck, and deal out the cards in a different order.
• Note: The cyclic pattern applies to a single variable. A randc array with eight
elements has eight different patterns.
A constraint is a set of relational expressions that must be true for the chosen value
of the variables.
• Example: The src variable must be greater than 10 and less than 15.
• The constructor is for initializing the object’s variables; calling randomize() in the constructor is too early, since the test may first need to modify constraints or variable values, wasting the results.
• Variables in classes:
• All variables should be random and public to give maximum control over the DUT’s
stimulus and control.
• Forgetting to make a variable random may require editing the environment, which
should be avoided.
• Randomization can fail if there are conflicting constraints, so always check the
status.
• Example: if randomize() fails, it returns 0.
• Use procedural assertions to check the result, print an error for failures, and set
the simulator to terminate upon errors.
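The points above can be sketched as a minimal packet class; the variable names and widths are assumptions in the spirit of Sample 6.1, not the book's exact code:

```systemverilog
class Packet;
  // rand: a fresh value on every randomize() call; repeats are allowed
  rand bit [31:0] src, dst, data;
  // randc: cycles through all 256 values before repeating any
  randc bit [7:0] kind;

  // Constraint from the text: src must be greater than 10 and less than 15
  constraint c_src { src > 10; src < 15; }
endclass

Packet p;

initial begin
  p = new();              // the constructor only initializes; no randomize here
  assert (p.randomize())  // always check the randomization status
    else $fatal(0, "Packet randomization failed");
  // drive p into the DUT ...
end
```

If the constraints conflict, randomize() returns 0 and the assertion fires, so a bad test stops instead of silently driving stale values.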
• 6.3.3 The Constraint Solver
• The same seed and testbench produce identical results on the same simulator.
• Different simulators or versions may give different outcomes for the same
constrained-random test.
• SystemVerilog defines the meaning of expressions and legal values but does not
specify the precise order.
• You can use all the Verilog-1995 distribution functions, plus several that are
new for SystemVerilog. Consult a statistics book for more details on the “dist”
functions.
• Some of the useful functions include the following.
• $random() - Flat distribution, returning a signed 32-bit random value
• $urandom() - Flat distribution, returning an unsigned 32-bit random value
• $urandom_range() - Flat distribution over a range
• $dist_exponential() - Exponential decay, as shown in Figure 6-1
• $dist_normal() - Bell-shaped distribution
• $dist_poisson() - Bell-shaped distribution
• $dist_uniform() - Flat distribution
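A short sketch of calling these functions (the $dist_* functions take a seed argument that they update by reference; the variable names are assumptions):

```systemverilog
initial begin
  int seed = 1;
  int a, b, c;

  a = $urandom_range(7);           // flat: 0 to 7
  a = $urandom_range(5, 3);        // flat: 3 to 5
  b = $dist_uniform(seed, 0, 99);  // flat over 0..99
  c = $dist_normal(seed, 50, 10);  // bell curve: mean 50, std. deviation 10
end
```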
Procedural coding knowledge may not suffice; understanding constraints and random
distributions requires a new mindset.
Example in Sample 6.41 demonstrates a class with two random variables intended to
sum up to 64. The produced values depend on the constraints.
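A hedged sketch of such a class (the widths are assumptions; the point is that the solver may pick any pair satisfying the constraint):

```systemverilog
class Sum64;
  rand bit [7:0] a, b;
  constraint c_sum { a + b == 64; }
endclass

Sum64 s = new();

initial begin
  repeat (3) begin
    assert (s.randomize());
    $display("a=%0d b=%0d", s.a, s.b);
  end
end
```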
• Avoid using expensive operators like division, multiplication, and modulus (%).
• For division or multiplication by a power of 2, use right and left shift operators
instead.
• Using variables narrower than 32 bits may improve performance when such operators are necessary.
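These performance tips can be illustrated with a small constraint sketch; the class and variable names are hypothetical:

```systemverilog
class Burst;
  bit [15:0] total;          // set by the test before randomizing
  rand bit [15:0] addr, len;

  // Instead of the expensive modulus  addr % 4 == 0,
  // constrain the low-order bits directly:
  constraint c_align { addr[1:0] == 2'b00; }

  // Instead of the expensive division  len == total / 8,
  // use a right shift by 3 (division by a power of 2):
  constraint c_len { len == (total >> 3); }
endclass
```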
• The PRNG has an internal state set using a seed provided to $random.
Verilog PRNG limitation:
If two streams share the same PRNG, they each get only a subset of the random values.
Gen2 has two random objects and uses twice as many random values as Gen1 during
every cycle.
Random stability in SystemVerilog:
• In SystemVerilog, every object and thread has its own PRNG and unique seed.
• When a new object or thread is started, its PRNG is seeded from its parent’s
PRNG.
• A single seed specified at the start of the simulation can create many streams
of random stimulus, each distinct.
• This can be frustrating if the testbench no longer generates the same stimulus
during a DUT failure.
• To minimize the effect of code modifications, add new objects or threads after
the existing ones.
A sample testbench routine constructs objects and runs them in parallel threads.
1. Coverage Types:
• FSM coverage: Which states and transitions in a state machine have been
visited.
• No extra HDL code is needed; the tool automatically instruments the design by
analyzing the source code.
• After running tests, the tool creates a database that tracks coverage.
• Code coverage measures how much the tests exercise the design code, not
the testbench.
• Untested design code could conceal a hardware bug or be redundant.
2. Functional Coverage:
• The goal of verification is to ensure the design behaves correctly in its real
environment (e.g., MP3 player, network router, or cell phone).
• The design specification details how the device should operate, while the
verification plan lists how functionality is to be stimulated, verified, and
measured.
• Example: A D flip-flop verification plan includes checking its data storage and how it resets to a known state.
• 100% Functional Coverage: Achieved when both features of the design are
tested.
• Functional coverage is tied to the design intent, sometimes called
"specification coverage," while code coverage measures the design
implementation.
If a block of code is missing from the design, code coverage won't catch it, but
functional coverage can.
• Bug rate is an indirect way to measure coverage by tracking how many fresh
bugs are found over time.
• At the start of a project, many bugs are found during inspection while creating
the testbench.
• Inconsistencies in the design specification are identified and fixed before RTL
writing.
• Once the testbench is running, bugs are found as each module in the system
is checked.
• As the design nears tape-out, the bug rate drops, ideally to zero.
• When the bug rate drops, it's time to find different ways to create corner cases
to continue improving coverage.
Bug Rate Variability:
The bug rate can vary per week due to factors such as project phases, recent design
changes, blocks being integrated, personnel changes, and vacation schedules.
It is not uncommon to keep finding bugs even after tape-out and even after the design
ships to customers (as shown in Figure 9-3).
• Assertions can have local variables and perform simple data checking.
3. Assertion Types:
• Error Assertions: Look for errors (e.g., mutually exclusive signals or requests
without grants), stopping the simulation when an issue is detected.
• Cover Assertions: Look for interesting signal values or design states, such as
a successful bus transaction, using the cover property statement.
4. Measuring Assertion Coverage:
• Before writing test code, anticipate key design features, corner cases, and
possible failure modes to create your verification plan.
• Think about the information encoded in the design, not just data values.
• Example: For a FIFO, comparing read and write indices can measure how full
or empty the FIFO is.
• The corner cases for a FIFO are Full and Empty; if the FIFO goes from Empty
to Full and back to Empty, you’ve covered all levels in between.
• Interesting states in a FIFO involve indices passing between all 1’s and all 0’s.
• Look at information, not data values, and break down design signals with large
ranges into smaller ranges, focusing on corner cases.
• Example: For a 32-bit address bus, don't collect 4 billion samples; instead,
check for natural divisions (e.g., memory and IO space) and interesting counter
values.
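The FIFO discussion above can be captured with a cover point that groups the large range of levels into corner-case bins; DEPTH, level, and clk are assumptions, not the book's code:

```systemverilog
parameter int DEPTH = 16;
bit clk;
bit [$clog2(DEPTH):0] level;   // FIFO fill level: 0 .. DEPTH

covergroup cg_fifo_level @(posedge clk);
  coverpoint level {
    bins empty   = {0};            // corner case: Empty
    bins full    = {DEPTH};        // corner case: Full
    bins between = {[1:DEPTH-1]};  // all intermediate levels in one bin
  }
endgroup
```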
• Simulations may run slower with functional coverage monitoring, but it has
lower overhead compared to waveform traces and code coverage.
• Save functional coverage data to disk, especially when using multiple test
cases and seeds.
• Avoid gathering unnecessary coverage data if you won't use the coverage
reports.
• To fully test a design, review all coverage measurements and consider the bug
rate.
• At the start of a project, both code and functional coverage are low.
• As you run tests with different random seeds, functional coverage increases
until it no longer shows improvement.
• Create additional constraints and tests to explore new areas and save high-
coverage test/seed combinations for regression testing.
• High Functional Coverage, Low Code Coverage: Tests may not exercise the full
design due to an inadequate verification plan. Update the verification plan and
add more functional coverage points.
• High Code Coverage, Low Functional Coverage: The testbench is running tests
but is unable to reach all interesting states to verify the full functionality.
Consider using formal verification tools to extract design states and create
appropriate stimulus.
• To measure functional coverage, start with the verification plan and write an
executable version of it for simulation.
• Multiple cover points sampled at the same time (e.g., when a transaction
completes) are grouped together in a cover group.
A transaction has eight flavors, and the testbench generates the port variable
randomly.
1. The testbench generated the values 1, 2, 3, 4, 5, 6, and 7, but never generated a port
of 0.
2. The at_least column specifies how many hits are needed before a bin is considered covered (refer to Section 9.9.3 for the at_least option).
3. The easiest strategy is to run more simulation cycles or try new random seeds. Look at the coverage report for items with two or more hits; this usually means you just need to run the simulation longer or use new seed values.
4. If a cover point had zero or one hit, a new strategy may be needed, as the testbench is not generating the proper stimulus.
5. In this example, the very next random transaction (#33) had a port value of 0,
resulting in 100% coverage.
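A minimal cover group for the 3-bit port field, in the spirit of the chapter's first sample (the class and loop details are assumptions):

```systemverilog
class Transaction;
  rand bit [2:0] port;   // eight flavors: 0..7
endclass

Transaction tr = new();

covergroup CovPort;
  coverpoint tr.port;    // automatically creates one bin per value
endgroup

CovPort cp = new();      // instantiate, or no data is collected

initial begin
  repeat (32) begin
    assert (tr.randomize());
    cp.sample();         // trigger the cover group after each transaction
  end
end
```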
Anatomy of a cover group:
1. A cover group is similar to a class: define it once and instantiate it one or more times.
2. It contains:
• Cover points
• Options
• Formal arguments
• An optional trigger
3. A cover group encompasses one or more data points, all sampled at the same time.
4. Create clear cover group names that explicitly indicate what you are measuring
and reference the verification plan when possible.
5. Example: Parity_Errors_In_Hexaword_Cache_Fills; the verbose name helps when reading a coverage report with many cover groups.
6. Use the comment option for additional descriptive information (refer to Section
9.9.2).
7. A cover group can be defined:
• In a class
• At the program or module level
8. A cover group can sample any visible variable, such as:
• Program/module variables
• Signals from an interface
• Any signal in the design (using a hierarchical reference)
9. A cover group inside a class can sample:
• Variables in that class
• Data values from embedded classes
10. Avoid defining the cover group in a data class (e.g., a transaction) to prevent
additional overhead when gathering coverage data.
11. Example: Track beer consumption by patrons (simplified vs. tracking every bottle).
12. Define cover groups at the appropriate level of abstraction:
• At the boundary between the testbench and the design
• In transactors that read and write data
• In the environment configuration class
13. Sampling of any transaction must wait until it is received by the DUT.
14. If an error is injected during a transaction, treat it for functional coverage using a
different cover point created for error handling.
15. A class can contain multiple cover groups, allowing you to:
• Enable or disable groups as needed
• Use separate triggers to gather data from many sources
16. Cover group instantiation is required to collect data: if forgotten, no error message is printed at runtime, but the coverage report will not show the cover group.
17. When a cover group is defined in a class, no separate instance name is needed when you instantiate it; use the original cover group name.
18. Sample 9.5: Similar to the first example, but embeds a cover group in a transactor
class, requiring no separate instance name.
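A sketch of a cover group embedded in a transactor class, along the lines of Sample 9.5 (the class and task names are assumptions):

```systemverilog
class Monitor;
  bit [2:0] port;

  covergroup CovPort;    // embedded: no separate instance name needed
    coverpoint port;
  endgroup

  function new();
    CovPort = new();     // must be constructed in new()
  endfunction

  task got_transaction(bit [2:0] p);
    port = p;
    CovPort.sample();    // trigger when new values are ready
  endtask
endclass
```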
Triggering a cover group
1. When new values are ready (e.g., when a transaction completes), the testbench triggers the cover group.
2. This technique allows you to build a flexible testbench without restricting when coverage is collected.
3. You can decide when and where values are sampled for every point in the verification plan.
4. You can add extra "hooks" in the environment for a callback without affecting the testbench.
5. You can create many separate callbacks for each cover group, with little overhead.
6. Sample 8.30 shows a driver class with two callback points: before and after the transaction is transmitted.
7. Sample 8.31 shows a test with an extended callback class that sends data to a scoreboard.
8. Create your own extension, Driver_cbs_coverage, of the base callback class Driver_cbs to call the sample task for your cover group in post_tx.
9. Push an instance of the coverage callback class into the driver's callback queue, and your coverage code triggers the cover group at the right time.
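The callback pattern above might be sketched as follows; Driver_cbs, Driver_cbs_coverage, and post_tx follow the text, while the Transaction contents and the callback queue name cbsq are assumptions:

```systemverilog
class Transaction;
  rand bit [2:0] port;
endclass

class Driver_cbs;                        // base callback class with empty hooks
  virtual task pre_tx(Transaction tr);  endtask
  virtual task post_tx(Transaction tr); endtask
endclass

class Driver_cbs_coverage extends Driver_cbs;
  Transaction tr;                        // last transaction seen

  covergroup CovPort;
    coverpoint tr.port;
  endgroup

  function new();
    CovPort = new();
  endfunction

  // Sample coverage after the transaction has been transmitted
  virtual task post_tx(Transaction tr);
    this.tr = tr;
    CovPort.sample();
  endtask
endclass

// In the test: push the coverage callback into the driver's queue
// Driver_cbs_coverage dcc = new();
// drv.cbsq.push_back(dcc);
```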
1. SystemVerilog creates a number of "bins" to record how many times each value has been seen.
2. These bins are the basic units of measurement for functional coverage.
3. When you sample a one-bit variable, at most two bins are created.
4. SystemVerilog drops a token in one of the bins every time the cover group is triggered.
5. At the end of each simulation, a database is created with all bins that have a token in them.
6. An analysis tool reads the databases and generates a report with the coverage for each part of the design and for the total coverage.
7. To calculate the coverage for a point, you must determine the total number of possible values, also known as the domain.
8. Coverage is the number of sampled values divided by the number of bins in the domain.
9. A cover point that is a 3-bit variable has the domain 0:7 and is normally divided into eight bins.
10. If, during simulation, values belonging to seven bins are sampled, the report will show 7/8 or 87.5% coverage for this point.
11. All these points are combined to show the coverage for the entire group, and then all the groups are combined to give a coverage percentage for all the simulation databases.
12. This status reflects a single simulation; coverage should be tracked over time to identify trends.
13. Identifying trends helps predict when verification of the design will be completed.
14. To create bins automatically, SystemVerilog looks at the domain of the sampled expression to determine the range of possible values.
15. For an expression that is N bits wide, there are 2^N possible values.
16. For a 3-bit variable (like port), there are 8 possible values.
17. The domain for enumerated data types is the number of named values, as shown in Section 9.6.8.
18. You can also explicitly define bins as shown in Section 9.6.5.
19. The cover group option auto_bin_max specifies the maximum number of bins to automatically create, with a default of 64 bins.
20. If the domain of values in the cover point variable or expression is greater than auto_bin_max, SystemVerilog divides the range into auto_bin_max bins.
21. For example, a 16-bit variable has 65,536 possible values, and each of the 64 bins covers 1024 values.
22. It is suggested to lower the limit to 8 or 16, or better yet, to explicitly define the bins, as shown in Section 9.6.5.
For example, with auto_bin_max set to 2 for a 3-bit variable, the first bin holds the lower half of the range (0-3) and the second bin holds the upper half (4-7).
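A sketch matching this example: a 3-bit cover point with auto_bin_max limited to 2 (the surrounding names are assumptions):

```systemverilog
bit [2:0] port;

covergroup cg_port;
  coverpoint port
    { option.auto_bin_max = 2; }  // bin 1 holds 0-3, bin 2 holds 4-7
endgroup
```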