
• What to Randomize

• When randomizing the stimulus to a design, the first consideration might be the
data fields.
• These are the easiest to create using $random.
• The problem is that this approach only identifies data-path bugs, such as bit-level
mistakes, and does not address control logic bugs.
• Control logic bugs are more challenging and require randomizing all decision
points in the DUT.
• Randomization increases the probability of exploring different paths in each test
case.
• Broader inputs to consider for randomization include:
• Device configuration
• Environment configuration
• Primary input data
• Encapsulated input data
• Protocol exceptions
• Delays
• Transaction status
• Errors and violations

• 6.2.1 Device Configuration



• Bugs are often missed due to insufficient configuration diversity during testing.
• Most tests use the design as it comes out of reset or with a fixed initialization
vector, which is not representative of real-world scenarios.
• Example:
• A verification engineer tested a time-division multiplexor switch with 600 input
and 12 output channels.
• Randomizing the parameters for a single channel and putting it in a loop helped
uncover bugs that were previously missed.
• 6.2.2 Environment Configuration

• The DUT operates in an environment with other devices.
• Randomize the entire environment, including the number and configurations of
objects.
• Example:
• For an I/O switch chip, randomize:
• The number of PCI buses (1–4).
• The number of devices on each bus (1–8).
• Parameters for each device (e.g., master/slave, CSR addresses).
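
A minimal sketch of such an environment-configuration class, with illustrative class and field names (the notes do not show code for this example):

    class IoSwitchCfg;
      rand int unsigned num_buses;
      rand int unsigned num_devices[];   // devices per bus

      constraint c_legal {
        num_buses inside {[1:4]};                 // 1-4 PCI buses
        num_devices.size() == num_buses;
        foreach (num_devices[i])
          num_devices[i] inside {[1:8]};          // 1-8 devices per bus
      }
    endclass

Per-device parameters (master/slave, CSR addresses) would be further rand fields in a device configuration class.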

• 6.2.3 Primary Input Data



• Randomize transactions such as bus writes or ATM cells.
• Preparing transaction classes carefully helps include layered protocols and error
injection.

• 6.2.4 Encapsulated Input Data

• Devices often process multiple layers of stimulus.
• Example:
• TCP traffic encoded in IP and sent as Ethernet packets.
• Randomize the data and the control fields at each layer.
• Write constraints for valid control fields and enable error injection.
• 6.2.5 Protocol Exceptions, Errors, and Violations

• Anticipate and simulate all possible errors in the system.
• Test the design’s ability to handle errors gracefully without entering illegal states.
• Example:
• Simulate communication interruptions.
• Randomly inject errors and validate combinations of error detection and
correction fields.

• 6.2.6 Delays

• Randomize delays within the legal range specified by communication protocols.
• Example:
• Introduce clock jitter to test the design’s sensitivity to timing variations.
• Use a clock generator outside the testbench with configurable parameters (e.g.,
frequency, offset).
• Focus on functional errors, not timing violations like setup/hold errors, which are
better validated using timing analysis tools.
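
A minimal sketch of such a clock generator, assuming illustrative module and parameter names:

    `timescale 1ns/1ps
    module clock_gen #(parameter real FREQ_MHZ  = 100.0,
                       parameter int  JITTER_PS = 0)   // peak-to-peak jitter
                      (output bit clk);
      localparam real HALF_NS = 500.0 / FREQ_MHZ;      // half period in ns
      real jitter_ns;
      initial forever begin
        // random jitter per half period, centered on zero
        jitter_ns = ($urandom_range(JITTER_PS) - JITTER_PS / 2) / 1000.0;
        #(HALF_NS + jitter_ns) clk = ~clk;
      end
    endmodule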

• The random stimulus generation in SystemVerilog is most useful when used with
OOP.

• You first create a class to hold a group of related random variables, and then have the random solver fill them with random values.

• You can create constraints to limit the random values to legal values, or to test-
specific features.

• Note that you can randomize individual variables, but this case is the least
interesting.

• True constrained-random stimulus is created at the transaction level, not one value at a time.

Simple Class with Random Variables

Sample 6.1 shows a packet class with random variables and constraints, plus testbench code that constructs and randomizes a packet.
• This class has four random variables.

• The first three use the rand modifier, so that every time you randomize the class,
the variables are assigned a value.

• Think of rolling dice: each roll could be a new value or repeat the current one.

• The kind variable is randc (random cyclic), which ensures that the random solver
does not repeat a random value until every possible value has been assigned.

• Think of dealing cards from a deck: you deal out every card in the deck in random
order, then shuffle the deck, and deal out the cards in a different order.

• Note: The cyclic pattern applies to a single variable. A randc array with eight
elements has eight different patterns.
A constraint is a set of relational expressions that must be true for the chosen value
of the variables.

• Example: The src variable must be greater than 10 and less than 15.

• Constraint expressions are grouped using curly braces {} because this code is declarative, not procedural (which uses begin...end).
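
A minimal sketch of such a class, in the spirit of Sample 6.1 (field names and bounds are illustrative):

    class Packet;
      // These get new values on every randomize() call
      rand  bit [31:0] src, dst, data[8];
      // Cyclic: no value repeats until the whole range has been used
      randc bit [7:0]  kind;

      // Declarative constraint block: expressions grouped in curly braces
      constraint c_src {src > 10;
                        src < 15;}
    endclass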

• The randomize() function:

• Returns 0 if a problem is found with the constraints.

• A procedural assertion is used to check the result.


• Example: $fatal can stop simulation, but other code might print useful information
and gracefully shut down the simulation.

• You should not randomize an object in the class constructor:

• Constraints may need to be turned on/off, weights changed, or new constraints added before randomization.

• The constructor is for initializing the object’s variables. Calling randomize() there generates values before the test can adjust the constraints, so the first result may be wasted.

• Variables in classes:

• All variables should be random and public to give maximum control over the DUT’s
stimulus and control.

• A random variable can always be turned off.

• Forgetting to make a variable random may require editing the environment, which
should be avoided.

• 6.3.2 Checking the Result from Randomization

• The randomize() function assigns random values to variables labeled as rand or randc while obeying all active constraints.

• Randomization can fail if there are conflicting constraints, so always check the status.

• Failing to check may cause variables to get unexpected values, leading to simulation failures.

• Example:

• If randomize() succeeds, it returns 1.

• If it fails, it returns 0.

• Use procedural assertions to check the result, print an error for failures, and set
the simulator to terminate upon errors.
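
A sketch of the check, assuming a Packet class like the one above:

    Packet pkt;

    initial begin
      pkt = new();
      // randomize() returns 1 on success, 0 on failure;
      // a procedural assertion checks the status
      assert (pkt.randomize())
        else $fatal(0, "Packet::randomize failed");
      // ... drive the randomized pkt into the DUT ...
    end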
• 6.3.3 The Constraint Solver

• Solves constraint expressions and chooses values that satisfy constraints.

• Values come from SystemVerilog’s PRNG, initialized with a seed.

• The same seed and testbench produce identical results on the same simulator.

• Results may vary:

• Different simulators or versions may give different outcomes for the same
constrained-random test.

• SystemVerilog defines the meaning of expressions and legal values but does not
specify the precise order.

• Random Number Functions

• You can use all the Verilog-1995 distribution functions, plus several that are
new for SystemVerilog. Consult a statistics book for more details on the “dist”
functions.
• Some of the useful functions include the following.
• $random() - Flat distribution, returning signed 32-bit random
• $urandom() - Flat distribution, returning unsigned 32-bit random
• $urandom_range() — Flat distribution over a range
• $dist_exponential() — Exponential decay, as shown in Figure 6-1
• $dist_normal() — Bell-shaped distribution
• $dist_poisson() — Bell-shaped distribution
• $dist_uniform() — Flat distribution

• The $urandom_range() function takes two arguments, an optional low value and a high value.
• Sample 6.32 $urandom_range usage
• a = $urandom_range(3, 10);  // Pick a value from 3 to 10
• a = $urandom_range(10, 3);  // Pick a value from 3 to 10
• b = $urandom_range(5);      // Pick a value from 0 to 5

1. Common Randomization Problems:

Procedural coding knowledge may not suffice; understanding constraints and random
distributions requires a new mindset.

Issues when creating random stimulus:

Use Signed Variables with Care:


Avoid using int, byte, or other signed types in random constraints unless signed
values are specifically desired.

Sample 6.41 demonstrates a class with two random variables intended to sum to 64; because the variables are signed, the solver can satisfy the constraint with unexpected negative values.
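
A sketch of the pitfall, assuming the shape of Sample 6.41 (names are illustrative):

    class SumPair;
      rand byte a, b;                 // byte is SIGNED: -128 to 127
      constraint c_sum {a + b == 64;}
    endclass

The solver can legally pick pairs such as a = -63, b = 127. If only non-negative values are wanted, use unsigned types (e.g., rand bit [7:0] a, b;) or constrain the variables to be positive.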

1. Solver Performance Tips (6.12.2):

• Each constraint solver has unique strengths and weaknesses.

• Guidelines to improve simulation speed with constrained random variables:

• Avoid using expensive operators like division, multiplication, and modulus (%).

• For division or multiplication by a power of 2, use right and left shift operators
instead.

• Replace a modulus by a power of 2 with a bitwise AND of a mask.

• Using variables less than 32-bits wide may enhance performance when such
operators are necessary.
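
A sketch of these substitutions (the constraints are illustrative, not from the original samples):

    class FastAddr;
      rand bit [15:0] addr, len;      // narrower than 32 bits

      // Expensive:  addr % 4096 == 0;  len == addr / 16;
      // Cheaper power-of-2 equivalents using a mask and a shift:
      constraint c_fast {
        (addr & 16'h0FFF) == 0;       // same as addr % 4096 == 0
        len == (addr >> 4);           // same as addr / 16
      }
    endclass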


1. Random Number Generators:

• SystemVerilog provides random values for testbench stimulus patterns beyond directed tests.

• Patterns need to be repeatable for debugging, even with minor changes in the design or testbench.

2. 6.16.1 Pseudorandom Number Generators (PRNG):

• Verilog uses a simple PRNG accessible through the $random function.

• The PRNG has an internal state set using a seed provided to $random.

• All IEEE-1364-compliant Verilog simulators use the same algorithm for generating values.

• Sample 6.68 PRNG (not SystemVerilog's):

• Has a 32-bit state.

• Next random value calculation:

• Square the state to get a 64-bit value.


• Take the middle 32 bits.

• Add the original value.
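
A sketch of that algorithm (the function name is illustrative; as noted, this is not SystemVerilog's own PRNG):

    bit [31:0] state;                      // internal state, set from the seed

    function bit [31:0] next_random();
      bit [63:0] squared;
      squared = 64'(state) * 64'(state);   // square the state: 64-bit value
      state   = squared[47:16] + state;    // middle 32 bits + original value
      return state;
    endfunction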


1. Random Stability — Multiple Generators:

Verilog uses a single PRNG for the entire simulation.

SystemVerilog Concerns:

Testbenches often have multiple stimulus generators running in parallel to create data for the design under test.

If two streams share the same PRNG, they each get a subset of the random values.

2. Figure 6-3 Example:

Two stimulus generators share a single PRNG, producing values a, b, c, etc.


Generator Behavior:

Gen2 has two random objects and uses twice as many random values as Gen1 during
every cycle.

Problem Scenario:

When one of the classes changes (e.g., Gen1 gets an additional random variable), it consumes more values from the shared PRNG. This changes the values used not only by Gen1 but also by Gen2 (Figure 6-4). In SystemVerilog, there is a separate PRNG for every object and thread, so changes to one object don’t affect the random values seen by others (Figure 6-5).


1. Random Stability and Hierarchical Seeding:

• In SystemVerilog, every object and thread has its own PRNG and unique seed.

• When a new object or thread is started, its PRNG is seeded from its parent’s
PRNG.

• A single seed specified at the start of the simulation can create many streams
of random stimulus, each distinct.

2. Debugging Testbench Issues:

• During debugging, changes to the testbench (adding, deleting, or moving code) may cause different random values to be generated, even with random stability.

• This can be frustrating if the testbench no longer generates the same stimulus
during a DUT failure.
• To minimize the effect of code modifications, add new objects or threads after
the existing ones.

3. Sample 6.69 Example:

Shows a routine from the testbench that constructs objects and runs them in parallel
threads.
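
A sketch of such a routine (hypothetical Generator class, in the spirit of Sample 6.69), with new code positioned last to preserve random stability:

    task run_test();
      Generator gen0, gen1;
      gen0 = new();        // existing objects are constructed first...
      gen1 = new();
      // gen2 = new();     // ...any NEW object is constructed after them
      fork
        gen0.run();        // existing threads are started first...
        gen1.run();
        // gen2.run();     // ...any NEW thread is started after them
      join
    endtask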

1. Coverage Types:

• Coverage measures progress in design verification.

• Simulations help cover all legal combinations of the design.

• Coverage tools gather information during simulation and postprocess it into a coverage report.

• The report is used to identify coverage gaps, leading to modifications or creation of new tests.

• This iterative process continues until the coverage level is satisfactory.

2. 9.1.1 Code Coverage:

• Code coverage measures verification progress by tracking:

• Line coverage: How many lines of code have been executed.

• Path coverage: Which paths and expressions have been executed.

• Toggle coverage: Which single-bit variables have toggled between 0 and 1.

• FSM coverage: Which states and transitions in a state machine have been
visited.

• No extra HDL code is needed; the tool automatically instruments the design by
analyzing the source code.

• After running tests, the tool creates a database that tracks coverage.

3. Code Coverage Tool:

• Many simulators include a code coverage tool.

• A postprocessing tool converts the coverage database into a readable form.

• Code coverage measures how much the tests exercise the design code, not
the testbench.
• Untested design code could conceal a hardware bug or be redundant.

4. Limitations of Code Coverage:

• Code coverage measures how thoroughly tests exercise the design's implementation of the specification, not the verification plan.

• Achieving 100% code coverage does not guarantee complete verification.

• Potential issues include missed mistakes or missing features in the implementation.


1. Functional Coverage:

• The goal of verification is to ensure the design behaves correctly in its real
environment (e.g., MP3 player, network router, or cell phone).

• The design specification details how the device should operate, while the
verification plan lists how functionality is to be stimulated, verified, and
measured.

• Design Coverage: Gathering measurements on what functions were covered.

• Example: A D flip-flop verification plan includes checking its data storage and how it resets to a known state.

• 100% Functional Coverage: Achieved when both features of the design are
tested.
• Functional coverage is tied to the design intent, sometimes called
"specification coverage," while code coverage measures the design
implementation.

If a block of code is missing from the design, code coverage won't catch it, but
functional coverage can.

2. 9.1.3 Bug Rate:

• Bug rate is an indirect way to measure coverage by tracking how many fresh
bugs are found over time.

• At the start of a project, many bugs are found during inspection while creating
the testbench.

• Inconsistencies in the design specification are identified and fixed before RTL
writing.

• Once the testbench is running, bugs are found as each module in the system
is checked.

• As the design nears tape-out, the bug rate drops, ideally to zero.

• When the bug rate drops, it's time to find different ways to create corner cases
to continue improving coverage.
Bug Rate Variability:

The bug rate can vary per week due to factors such as project phases, recent design
changes, blocks being integrated, personnel changes, and vacation schedules.

Unexpected changes in the bug rate could signal a potential problem.

It is not uncommon to keep finding bugs even after tape-out and even after the design
ships to customers (as shown in Figure 9-3).

2. 9.1.4 Assertion Coverage:

• Assertions are declarative code that checks relationships between design signals, either once or over a period of time.

• Assertions can be simulated with the design and testbench or proven by formal tools.

• While you can sometimes write equivalent checks using SystemVerilog procedural code, many assertions are more easily expressed using SystemVerilog Assertions (SVA).

• Assertions can have local variables and perform simple data checking.

• For more complex protocols (e.g., checking whether a packet successfully passed through a router), procedural code is often better.

• There is a large overlap between sequences coded procedurally and those using SVA. (References: Vijayaraghavan and Ramanadhan, Cohen et al., the VMM book, Bergeron et al.)

3. Assertion Types:

• Error Assertions: Look for errors (e.g., mutually exclusive signals or requests
without grants), stopping the simulation when an issue is detected.

• Functional Assertions: Check things like arbitration algorithms, FIFOs, and hardware using the assert property statement.

• Cover Assertions: Look for interesting signal values or design states, such as
a successful bus transaction, using the cover property statement.
4. Measuring Assertion Coverage:

• Assertion coverage measures how often assertions are triggered during a test.

• Cover Property: Observes sequences of signals.

• Cover Group: Samples data values and transactions during simulation.

• These constructs overlap: a cover group can trigger when a sequence completes, and a sequence can collect information used by a cover group.


1. 9.2 Functional Coverage Strategies:

• Before writing test code, anticipate key design features, corner cases, and
possible failure modes to create your verification plan.

• Think about the information encoded in the design, not just data values.

• The plan should focus on significant design states.

2. 9.2.1 Gather Information, Not Data:

• Example: For a FIFO, comparing read and write indices can measure how full
or empty the FIFO is.

• The corner cases for a FIFO are Full and Empty; if the FIFO goes from Empty
to Full and back to Empty, you’ve covered all levels in between.

• Interesting states in a FIFO involve indices passing between all 1’s and all 0’s.

• Look at information, not data values, and break down design signals with large
ranges into smaller ranges, focusing on corner cases.

• Example: For a 32-bit address bus, don't collect 4 billion samples; instead,
check for natural divisions (e.g., memory and IO space) and interesting counter
values.
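
A sketch of how that breakdown might look as cover point bins (the address map is invented for illustration):

    covergroup CovAddr @(posedge clk);
      coverpoint addr iff (valid) {
        bins zero = {0};                                 // corner case
        bins mem  = {[1             : 32'h7FFF_FFFF]};   // memory space
        bins io   = {[32'h8000_0000 : 32'hFFFF_FFFE]};   // I/O space
        bins max  = {32'hFFFF_FFFF};                     // corner case
      }
    endgroup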

3. 9.2.2 Only Measure What You Are Going to Use:

• Functional coverage data gathering can be expensive, so only measure what will be analyzed and used to improve tests.

• Simulations may run slower with functional coverage monitoring, but it has
lower overhead compared to waveform traces and code coverage.

• Save functional coverage data to disk, especially when using multiple test
cases and seeds.

• Avoid gathering unnecessary coverage data if you won't use the coverage
reports.

• Control coverage data gathering at compilation, instantiation, or triggering using switches, conditional compilation, or suppression of coverage data (the last option is less desirable, as it leads to 0% coverage sections in the report).

4. 9.2.3 Measuring Completeness:

• To fully test a design, review all coverage measurements and consider the bug
rate.

• At the start of a project, both code and functional coverage are low.

• As you run tests with different random seeds, functional coverage increases
until it no longer shows improvement.

• Create additional constraints and tests to explore new areas and save high-
coverage test/seed combinations for regression testing.

5. Addressing Low Code or Functional Coverage:

• High Functional Coverage, Low Code Coverage: Tests may not exercise the full
design due to an inadequate verification plan. Update the verification plan and
add more functional coverage points.

• High Code Coverage, Low Functional Coverage: The testbench is running tests
but is unable to reach all interesting states to verify the full functionality.
Consider using formal verification tools to extract design states and create
appropriate stimulus.

1. Simple Functional Coverage Example:

• To measure functional coverage, start with the verification plan and write an
executable version of it for simulation.

• In the SystemVerilog testbench, sample the values of variables and expressions. These sampling locations are known as cover points.

• Multiple cover points sampled at the same time (e.g., when a transaction
completes) are grouped together in a cover group.

2. Sample 9.2 Example:

A transaction has eight flavors, and the testbench generates the port variable
randomly.

The verification plan requires that every value be tried.
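
A sketch along these lines, in the spirit of Sample 9.2 (abridged):

    class Transaction;
      rand bit [2:0]  port;        // eight flavors: 0-7
      rand bit [31:0] data;
    endclass

    Transaction tr;

    covergroup CovPort;
      coverpoint tr.port;          // measure which port values were generated
    endgroup

    CovPort ck;

    initial begin
      ck = new();                  // instantiate the cover group
      tr = new();
      repeat (32) begin
        assert (tr.randomize());   // create a random transaction
        // ... drive tr into the DUT ...
        ck.sample();               // gather coverage
      end
    end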



1. The testbench generated the values 1, 2, 3, 4, 5, 6, and 7, but never generated a port
of 0.

2. The at_least column specifies how many hits are needed before a bin is considered covered. (Refer to Section 9.9.3 for the at_least option.)

3. To improve functional coverage:

The easiest strategy is to run more simulation cycles or try new random seeds.

Look at the coverage report for items with two or more hits. This usually means you
just need to run the simulation longer or use new seed values.

4. If a cover point had zero or one hit, a new strategy may be needed, as the testbench
is not generating the proper stimulus.

5. In this example, the very next random transaction (#33) had a port value of 0,
resulting in 100% coverage.
Anatomy of a cover group:

1. A cover group is similar to a class — define it once and instantiate it one or more times.

2. It contains:

Cover points

Options

Formal arguments

An optional trigger

3. A cover group encompasses one or more data points, all sampled at the same time.

4. Create clear cover group names that explicitly indicate what you are measuring, and reference the verification plan when possible.

5. Example: Parity_Errors_In_Hexaword_Cache_Fills — the verbose name helps when reading a coverage report with many cover groups.

6. Use the comment option for additional descriptive information (refer to Section 9.9.2).

7. A cover group can be defined in a class, or at the program or module level.

8. A cover group can sample any visible variable, such as program/module variables, signals from an interface, or any signal in the design (using a hierarchical reference).

9. A cover group inside a class can sample variables in that class and data values from embedded classes.

10. Avoid defining the cover group in a data class (e.g., a transaction) to prevent additional overhead when gathering coverage data.

11. Example: Track beer consumption by patrons (a simplified measure, versus tracking every bottle).

12. Define cover groups at the appropriate level of abstraction: at the boundary between the testbench and the design, in transactors that read and write data, or in the environment configuration class.

13. Sampling of any transaction must wait until it is received by the DUT.

14. If an error is injected during a transaction, treat it for functional coverage using a different cover point created for error handling.

15. A class can contain multiple cover groups, allowing you to enable or disable groups as needed and to use separate triggers to gather data from many sources.

16. Cover group instantiation is required to collect data: if it is forgotten, no error message is printed at runtime, but the coverage report will not show the cover group.

17. When a cover group is defined in a class, no separate name is needed when instantiating it; use the original cover group name.

18. Sample 9.5: Similar to the first example, but embeds a cover group in a transactor class, requiring no separate instance name.
Triggering a cover group

1. The two major parts of functional coverage are:

The sampled data values

The time when they are sampled

2. When new values are ready (e.g., when a transaction completes), the testbench
triggers the cover group.

3. This can be done:

Directly with the sample function (as shown in Sample 9.5)

By using a blocking expression in the covergroup definition (use wait or @ to block on signals or events)

4. Use sample when:

You want to explicitly trigger coverage from procedural code

There is no existing signal or event that tells when to sample

There are multiple instances of a cover group that trigger separately

5. Use the blocking statement in the covergroup declaration when:

You want to tap into existing events or signals to trigger coverage
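
Sketches of the two triggering styles just described (signal and event names are illustrative):

    // Style 1: blocking expression in the covergroup definition,
    // tapping an existing event
    event trans_ready;
    bit [2:0] port;

    covergroup CovPort @(trans_ready);   // could also block on @(posedge sig)
      coverpoint port;
    endgroup

    // Style 2: explicit procedural trigger with sample()
    CovPort ck = new();

    task drive_one(input bit [2:0] p);
      port = p;
      // ... wait until the DUT has received the transaction ...
      ck.sample();                       // trigger the cover group directly
    endtask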

9.5.1 Sampling Using a Callback:

6. Callbacks allow you to integrate functional coverage into your testbench in a flexible manner.

This technique allows you to build a flexible testbench without restricting when
coverage is collected.
You can decide when and where values are sampled for every point in the verification
plan.

You can add extra “hooks” in the environment for a callback without affecting the
testbench.

7. Callbacks only “fire” when the test registers a callback object.

8. You can create many separate callbacks for each cover group, with little overhead.

9. Callbacks are superior to using a mailbox to connect the testbench to coverage objects, as explained in Section 8.7.4.

A mailbox requires a transactor to receive transactions, and multiple mailboxes cause you to juggle multiple threads.

Instead of using an active transactor, use a passive callback.

10. Sample 8.30 shows a driver class with two callback points: before and after the
transaction is transmitted.

11. Sample 8.29 shows the base callback class.

12. Sample 8.31 shows a test with an extended callback class that sends data to a
scoreboard.

13. Create your own extension, Driver_cbs_coverage, of the base callback class
Driver_cbs to call the sample task for your cover group in post_tx.

14. Push an instance of the coverage callback class into the driver’s callback queue,
and your coverage code triggers the cover group at the right time.

15. The examples define and use the callback Driver_cbs_coverage.
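
A sketch of steps 13-14, assuming the Driver_cbs base class and post_tx hook described above (the cover group details and the post_tx signature are assumptions):

    class Driver_cbs_coverage extends Driver_cbs;
      Transaction tr;             // transaction seen by the callback

      covergroup CovTrans;
        coverpoint tr.port;       // illustrative cover point
      endgroup

      function new();
        CovTrans = new();         // embedded cover group: original name, no instance name
      endfunction

      // Called by the driver after each transaction is transmitted
      virtual task post_tx(Driver drv, Transaction t);
        tr = t;
        CovTrans.sample();        // trigger coverage at the right time
      endtask
    endclass

    // In the test: register the callback so the cover group fires automatically
    Driver_cbs_coverage dcc = new();
    drv.cbs.push_back(dcc);       // assumed callback queue in the driver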


9.6 Data Sampling

1. Coverage information is gathered by specifying a variable or expression in a cover point.

2. SystemVerilog creates a number of “bins” to record how many times each value
has been seen.

3. These bins are the basic units of measurement for functional coverage.

4. When you sample a one-bit variable, two bins are created at maximum.

5. SystemVerilog drops a token in one of the bins every time the cover group is
triggered.

6. At the end of each simulation, a database is created with all bins that have a token
in them.

7. An analysis tool reads the databases and generates a report with the coverage for
each part of the design and for the total coverage.

9.6.1 Individual Bins and Total Coverage

8. To calculate the coverage for a point, you must determine the total number of
possible values, also known as the domain.

9. There may be one value per bin or multiple values.

10. Coverage is the number of sampled values divided by the number of bins in the
domain.

11. A cover point that is a 3-bit variable has the domain 0:7 and is normally divided
into eight bins.
12. If, during simulation, values belonging to seven bins are sampled, the report will
show 7/8 or 87.5% coverage for this point.

13. All these points are combined to show the coverage for the entire group, and then
all the groups are combined to give a coverage percentage for all the simulation
databases.

14. This status reflects a single simulation, and coverage should be tracked over time
to identify trends.

15. Identifying trends helps predict when verification of the design will be completed.

9.6.2 Creating Bins Automatically

16. SystemVerilog automatically creates bins for cover points.

17. It looks at the domain of the sampled expression to determine the range of
possible values.

18. For an expression that is N bits wide, there are 2^N possible values.

19. For a 3-bit variable (like port), there are 8 possible values.

20. The domain for enumerated data types is the number of named values, as shown
in Section 9.6.8.

21. You can also explicitly define bins as shown in Section 9.6.5.

9.6.3 Limiting the Number of Automatic Bins Created

22. The cover group option auto_bin_max specifies the maximum number of bins to
automatically create, with a default of 64 bins.

23. If the domain of values in the cover point variable or expression is greater than
auto_bin_max, SystemVerilog divides the range into auto_bin_max bins.
24. For example, a 16-bit variable has 65,536 possible values, and each of the 64 bins covers 1,024 values.

25. This approach may be impractical, as it is difficult to find missing coverage in a large number of auto-generated bins.

26. It is suggested to lower the limit to 8 or 16, or better yet, explicitly define the bins,
as shown in Section 9.6.5.

27. An example sets auto_bin_max to two bins, where:

The first bin holds the lower half of the range (0-3)

The second bin holds the upper values (4-7)
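
A sketch of that example (names are illustrative):

    bit [2:0] port;                      // domain 0:7

    covergroup CovPort;
      coverpoint port
        { option.auto_bin_max = 2; }     // bin[0] covers 0-3, bin[1] covers 4-7
    endgroup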

