
Using Coverage With VCS

March 2023
Copyright Notice and Proprietary Information
© 2023 Synopsys, Inc. All rights reserved. This Synopsys software and all associated documentation are proprietary to
Synopsys, Inc. and may only be used pursuant to the terms and conditions of a written license agreement with
Synopsys, Inc. All other use, reproduction, modification, or distribution of the Synopsys software or the associated
documentation is strictly prohibited.

Destination Control Statement


All technical data contained in this publication is subject to the export control laws of the United States of America.
Disclosure to nationals of other countries contrary to United States law is prohibited. It is the reader's responsibility to
determine the applicable regulations and to comply with them.

Disclaimer
SYNOPSYS, INC., AND ITS LICENSORS MAKE NO WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, WITH
REGARD TO THIS MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

Trademarks
Synopsys company and certain product names are trademarks of Synopsys, as set forth at
http://www.synopsys.com/company/legal/trademarks-brands.html.
All other product or company names may be trademarks of their respective owners.

Third-Party Links
Any links to third-party websites included in this document are for your convenience only. Synopsys does not endorse
and is not responsible for such websites and their practices, including privacy practices, availability, and content.

Free and Open Source Software Licensing Notices


If applicable, Free and Open-Source Software (FOSS) licensing notices are available in the product installation.

Synopsys, Inc.
690 E. Middlefield Road
Mountain View, CA 94043
www.synopsys.com

Contents

1. Overview

2. Verification Planning

3. Code Coverage
   Control-Flow Coverage
   Value Coverage

4. Functional Coverage
   Basic Rules for Covergroup Merging
   Covergroup Shapes in a Design
   Covergroup Shape Changes Due to Edits

5. Managing Coverage Data Files
   Test Names

6. Targeting Parts of Your Design

7. Merging Coverage Data
   Mapping
   Test Data

8. Generating Reports
   Metrics
   Ratios
   Trend Charts
   Navigating Coverage Reports

9. Convergence
   Constant Analysis
   Adaptive Exclusion

10. Test Grading

1. Overview
VCS® provides industry-leading coverage and planning capabilities
for SystemVerilog, VHDL, and mixed-language designs. A wide
range of coverage metrics is available, with detailed control over
how they are applied. The power and flexibility of VCS' coverage
tools support many different coverage methodologies, but the
range of available options may be a bit overwhelming for new users.

This article provides a basic introduction to using VCS coverage for
design verification. It discusses the planning and coverage tools
supported by VCS and suggests how the different types should be
used together as part of an effective and comprehensive coverage
methodology. These guidelines should provide a starting point for
new users, while also serving as a handy reference for those
experienced with planning and coverage.

The details of these features can be found in the VCS user
documentation.


This document consists of the following topics:

• “Verification Planning”
• “Code Coverage”
• “Functional Coverage”
• “Managing Coverage Data Files”
• “Targeting Parts of Your Design”
• “Merging Coverage Data”
• “Generating Reports”
• “Convergence”
• “Test Grading”


2. Verification Planning
The purpose of planning is to create and maintain consistent links
between the verification specification and the tests, coverage, and
other data collected, providing actionable information, organized by
the plan specification, toward functional verification sign-off. A
linked verification plan provides traceability from the specification
details to the individual measured metric values.

The figure below shows how the specifications, plans, and data
come together to track progress and status. Links to the
specifications ensure that every feature in the specification has a
corresponding feature in the plan. Links to verification data ensure
that anything below target (low coverage, tests failing, etc.) is
presented and linked to the feature to highlight needed action and
owners.


The best way to create a verification plan is to start with the written
specification documents. Create a top-level plan feature for each
major feature in the specification, and then fill in sub-features to
match. At its most basic, the goal of a plan is to see which parts of
the project are on track and which need help, so dividing features
into areas of responsibility is a good way to start.

To create a plan from a specification document, run Verdi coverage,
and open your PDF file. Highlight each major feature's introductory
text and click one of the "create feature" buttons in Verdi, to either
create a sub-feature or a sibling feature. You can do the same thing
with the subsections of the major features. Identify sub-features that
can be verified separately, and then group them into parent features.

The easiest way to then link coverage sources into features is to
drag and drop the coverage regions from the Verdi coverage window
to each feature. Once you have linked coverage sources, whenever
you run Verdi or URG with a plan, the coverage scores for linked
features are automatically updated. You can export the per-feature
scores to multiple formats from either tool.


If you want to link objects to the plan that do not yet exist in the
database, you can enter their names as sources in the plan. When
they are added to the design in later iterations, Verdi automatically
finds them and annotates their scores in the design. For example,
you can use the name of a covergroup that will be written later as a
source. You can also use wildcard expressions, so if you have a
naming convention for sources, you can have Verdi automatically
find any matching sources.


3. Code Coverage
When deciding what metrics and options to use, it is important to
remember that each metric has a cost - in simulation overhead, to
some degree, but more importantly in coverage closure overhead.
Once you commit to a certain set of metrics, you are committing to
the time it takes to develop tests, analyze results, apply waivers and
exclusions, and, ultimately, reach your coverage goals.

There are two types of information that code coverage monitors. The
first is control-flow coverage that tracks which lines, paths, and
branches are taken - the flow of execution through each executable
statement in the SystemVerilog or VHDL code. Control-flow metrics
include line coverage and branch coverage.

The second type is value coverage that monitors what values signals
and expressions take on during simulation. Value-coverage metrics
include condition coverage, toggle coverage, and FSM coverage,
which track the values (or value transitions) for signals and variables.


It is important to use code coverage as early as possible in your
verification process. It does not have to be run often at this stage, but
you can get early confirmation that the metric types you have chosen
are the right ones for you and that your merging, reporting, and
tracking setup works as desired. It is better to have time to adjust
your coverage flow early on when you are not in crunch time.

If you are not sure where to begin, it is recommended to use the
following VCS compile options for designs with significant procedural
code:

Compile Option        Description
-cm line+cond+fsm     Specifies line, condition, and FSM coverage.
-cm_cond obs+event    Specifies observability-based condition coverage, and
                      coverage of conditions in always block sensitivity lists.

If your design has less procedural code, use the following compile
options:

Compile Option        Description
-cm line+branch+tgl   Specifies line, branch, and toggle coverage.
-cm_cond contassign   Monitors whether continuous assignments ever wake up
                      during simulation.

In either case, it is recommended to use the following options:

Compile Option        Description
-cm_report noinitial  Ignores initial blocks.
-cm_seqnoconst        Automatically ignores constant expressions for toggle
                      coverage and flags unreachable statements for line and
                      condition coverage. For more details, see Chapter 9,
                      "Convergence".
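For example, a single compile line for a design with significant
procedural code might combine the recommended options as follows (a
sketch; the file and executable names are placeholders):

% vcs -cm line+cond+fsm -cm_cond obs+event -cm_report noinitial \
      -cm_seqnoconst -o mysimv file1.v file2.v …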


This chapter consists of the following sections:

• “Control-Flow Coverage”
• “Value Coverage”

Control-Flow Coverage

This section introduces each control-flow coverage metric briefly.

Line coverage is the most basic metric. It is a type of control-flow
coverage, and monitors which basic blocks in the design are
executed during simulation. It is the best place to start for most
designs, to ensure that each block of code is at least being executed
by testing. Line coverage is invoked with the -cm line option.

It is recommended to use the -cm_report noinitial option to
disable monitoring of line coverage in SystemVerilog initial blocks. In
addition, the -cm_line contassign option enables line coverage
on continuous assignments that have non-constant right-hand sides.
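For illustration (hypothetical signal names), the first continuous
assignment below becomes a line coverage target under
-cm_line contassign because its right-hand side is not constant,
while the second does not:

assign valid  = sel & ready;   // non-constant RHS: monitored with -cm_line contassign
assign tie_lo = 1'b0;          // constant RHS: not a line coverage target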

Branch coverage tracks which branches are taken during
simulation (if-then-else branches, case statement items, and
ternary operator choices). Branch coverage is only concerned with
whether each leaf in the control flow graph (the different sections of
code that can be executed depending on the value of signals and
variables) is covered. Branch coverage also does not monitor how
each control-flow condition became true or false, as condition
coverage does.

For example, in a ternary operator assignment such as the following,
there are three branches to be covered, one for each of the three
values that may be assigned to x (y, z, and w):


x = a ? y : ( b ? z : w );
// branch 1: a is true, assign y
// branch 2: a is false and b is true, assign z
// branch 3: a and b are both false, assign w

In the next example, there are four branches (one for each of the
true/false arms of the two if statements).

if (a)
x <= 1'b0; // branch 1
else
x <= 1'b1; // branch 2
if (b)
y <= 1'b0; // branch 3
else
y <= 1'b1; // branch 4

Branch coverage is invoked with the -cm branch flag.

Value Coverage

Condition coverage measures which possible values each
conditional expression takes on during simulation. It is a hybrid of
flow-control and value coverage. While you can tell from line
coverage that a given if statement is true (since the then part
executed), condition coverage tells you how the expression became
true. For example, if the expression is if (a || b), condition
coverage tells you whether a is ever true and whether b is ever true,
not just that (a || b) is true. Condition coverage is invoked with the 
-cm cond option.

It is recommended to use condition coverage with the -cm_cond obs
option to enable "observability-based" condition coverage. For nested
expressions, such as if (a || (b && d)), VCS tracks two expressions -
the 'or' expression and the 'and' expression. When -cm_cond obs is
given, the 'and' expression is only monitored when the value of a is
zero, as the values of b and d have no effect on the result if a is true.
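As a small sketch of the nested example from the text (the assignment
target y is a placeholder), the sub-expressions VCS monitors are
annotated below:

if (a || (b && d))        // 'or' expression: monitors a and (b && d)
  y <= 1'b1;              // 'and' expression: monitors b and d; with
else                      // -cm_cond obs it is sampled only when a == 0
  y <= 1'b0;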

Finite-State Machine (FSM) coverage monitors which states, state
transitions, and sequences of states each finite-state machine in your
design actually executed. VCS code coverage automatically detects
most FSMs, and you can modify how they are monitored, such as
specifying illegal states or transitions, using the FSM configuration
file. For more details about controlling and customizing how FSMs
are monitored in your design, see the chapter "More Options for
FSM Coverage" in the Coverage Technology Reference Guide. FSM
coverage is invoked with the -cm fsm option.
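As a point of reference, the kind of state machine VCS detects
automatically looks like the sketch below (signal and state names are
hypothetical); FSM coverage then reports which states and transitions
were exercised:

typedef enum logic [1:0] {IDLE, LOAD, RUN} state_t;
state_t state;

always_ff @(posedge clk or posedge rst)
  if (rst)
    state <= IDLE;                      // reset transition
  else
    case (state)
      IDLE: if (start) state <= LOAD;   // transition IDLE -> LOAD
      LOAD: state <= RUN;               // transition LOAD -> RUN
      RUN:  state <= IDLE;              // transition RUN  -> IDLE
      default: state <= IDLE;
    endcase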

It is recommended that you identify your reset signal(s) so that FSM
coverage can reduce the number of transitions it monitors. The
-cm_fsmresetfilter flag allows you to specify global or module-
specific reset signals; when given, transitions that require reset to be
asserted are automatically ignored for coverage.

Toggle coverage monitors value changes on signal bits in the
design. When toggle coverage reaches 100%, it means that every
bit of every monitored signal has changed its value from 0 to 1 and
from 1 to 0.

In SystemVerilog, to enable toggle coverage on elements in Multi-
Dimensional arrays (MDAs), use the -cm_tgl mda option to VCS.
MDA coverage is not supported in VHDL at this time. MDA toggle
coverage monitoring can greatly increase the coverage space;
whether or not you use it depends on your design.
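For example (a hypothetical declaration), per-element toggle coverage
of the following two-dimensional array is collected only when
-cm_tgl mda is used:

logic [7:0] buffer [0:3][0:1];  // MDA: element bits become toggle targets with -cm_tgl mda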


Depending on your design, toggle coverage can be one of the most
expensive metrics in simulation runtime. Some users prefer to only
monitor the activity on port signals to reduce the runtime overhead;
you can do this with the compile-time option, -cm_tgl portsonly.

To invoke toggle coverage, use the -cm tgl option. For example:

% vcs -cm tgl -cm_tgl portsonly mydesign.v


4. Functional Coverage
Functional coverage consists of two metric types: covergroups and
cover properties/assertions. Functional coverage is not
automatically discovered from the HDL code. It is explicitly created
by you or IP vendors and specifically targeted to the intended
functionality or implementation of the design.

Many books have been written on writing effective functional
coverage, and we would not presume to address that problem here.
However, there are some VCS options and flows that are useful to
know about when using functional coverage.

One key distinction of functional coverage is that it is enabled by
default. Because it only monitors your design, however, you can turn
it off without affecting simulation behavior. This is sometimes
desirable for performance. Disabling functional coverage can be
done separately for cover properties and covergroups, as shown
below:


simv -assert nocovdb       // Disables cover property coverage
simv -covg_disable_cg      // Disables covergroup coverage

This chapter consists of the following sections:

• “Basic Rules for Covergroup Merging”
• “Covergroup Shapes in a Design”
• “Covergroup Shape Changes Due to Edits”

Basic Rules for Covergroup Merging

If a module contains a covergroup instance, then each instance of
the module has a separate covergroup reported for it. For example,
consider the following mymod module:

module mymod;
  addum g1 = new(x, y, z);
endmodule
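The definition of the addum covergroup is not shown in this example;
a minimal sketch of what it might look like (the argument widths and
coverpoints are assumptions) is:

covergroup addum (ref logic [3:0] a, ref logic [3:0] b, ref logic [3:0] c);
  coverpoint a;
  coverpoint b;
  coverpoint c;
endgroup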

If there are three instances of mymod, they would each be reported
separately:

top.a4b::addum
top.a4a::addum
top.a4c::addum

This separation ensures that just because a bin is covered in one
instance, it does not appear that the same testing has been done in
the other instances.


To override this creation of multiple groups, use the URG option,
-group merge_across_scopes. This is useful when the goal of
the group is explicitly to span the behavior across all instances
instead of tracking coverage separately for each instance of the
module. If this option is used, only a single covergroup is reported for
all instances, and, instead of using the instance name as a prefix,
URG reports it as $unit(file), where the file is the source file in
which the group is defined:

$unit(test.v)::addum

Covergroup Shapes in a Design

Covergroup instances are also reported separately when their
shapes are different. The shape of a covergroup is determined by the
number and size of each coverpoint and cross in the group.
Therefore, if a covergroup is passed a 4-bit argument in one case,
and a 32-bit argument in another, VCS splits it into two different
shapes that are reported separately, even if they are instantiated in
the same scope.

When a covergroup is split into shapes this way, VCS uses the base
name of the group and adds the parameter values that differentiate
the shapes. For example, if a parameter p is used to determine the
number of bins in a coverpoint, we might have the following in the
report:

$unit(test.v)::addum::SHAPE{p=16}
$unit(test.v)::addum::SHAPE{p=32}
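One way such a shape split can arise is sketched below (names and
widths are assumptions): the constructor argument p sets the number
of automatic bins, so instances built with p=16 and p=32 have
different shapes:

covergroup addum (ref logic [31:0] sig, input int p);
  coverpoint sig { option.auto_bin_max = p; }
endgroup

addum g16 = new(data16, 16);  // reported as ...::addum::SHAPE{p=16}
addum g32 = new(data32, 32);  // reported as ...::addum::SHAPE{p=32}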

There is no way to override this, as the different shapes cannot be
merged in a meaningful way.


Covergroup Shape Changes Due to Edits

Another way that shape differences can arise is through changes to
the covergroup definition itself. For example, the Monday version of a
group may have 16 bins for one of its coverpoints, but the Tuesday
version could have 32 bins for that coverpoint.

When this happens, by default URG keeps a separate copy of each
different version of each group that it finds in the coverage
databases. For example, if $unit(test.v)::addum is the group
in question, then a merge of Monday data and Tuesday data has the
following list of groups:

$unit(test.v)::addum
$unit(test.v)::addum

These have different detailed reports, with different numbers of bins
and coverage data. If you use the URG -show tests option (see
the “Test Data” section), you can see which tests covered any
coverable bins in each version.

While it is not the default, the best option when the covergroup
models are changing frequently is to use the flexible merging options
supported by VCS.

Flexible merging allows you to merge the data from the latest
versions of your covergroup models with the data collected from
previous versions. In the flexible merging flow, you can change the
number of bins in a coverpoint, add or delete coverpoints, modify
crosses, and so on, and still merge the data and retain a single
covergroup that accumulates all the data for the different versions.


The recommended flow is to use the -flex_merge union switch
when accumulating your coverage data (this switch has no effect on
code coverage, so it is safe to use in a mixed coverage environment,
too). For example:

urg -dir simv1.vdb simv2.vdb … simvN.vdb -flex_merge union -dbname merged.vdb -noreport

Union-merged databases contain multiple versions of coverpoints
inside a given group. The example below shows how union merge
works when the size of a monitored register changes from one
version to the next. As coverpoint P does not change, its data is
merged normally. But the shape of coverpoint Q has changed, so we
retain both versions in the union merged result:

Version A:                Version B:
reg [3:0] Q;              reg [7:0] Q;
reg [3:0] P;              reg [3:0] P;
covergroup G;             covergroup G;
  coverpoint P;             coverpoint P;
  coverpoint Q;             coverpoint Q;
endgroup                  endgroup

Merge Result:
covergroup G;
  coverpoint P;
  coverpoint Q;  (4 bits)
  coverpoint Q;  (8 bits)
endgroup

When you are ready to generate a report from a single version of your
design - say, at the end of the week - choose a reference design and
use the -flex_merge reference switch:

urg -dir Friday.vdb merged.vdb -flex_merge reference

This switch takes only the coverage data that matches the reference
version. If your Friday version has an 8-bit Q, then that is what you
see in the report. As you used union merge, you get the coverage
data for coverpoint P from both versions of the group.


You should decide early on which merging strategy to use and run
test merges to make sure it satisfies your needs.


5. Managing Coverage Data Files
Using coverage produces some additional files and directories at
compile and simulation time. This section discusses how to manage
the creation of those files, and how to merge coverage data from
multiple simulation runs.

When you compile with VCS, the executable name you specify
determines the name of the daidir created. For example:

vcs -o mysimv file1.v file2.v …

The above command creates a binary executable mysimv, and a
corresponding mysimv.daidir directory.

When coverage is enabled, the same name is used for creating the
VDB directory, which contains compile-time coverage-related
information. For example, the following command creates the
mysimv.vdb directory, containing all compile-time coverage data:


vcs -cm line+cond+fsm -o mysimv file1.v file2.v …

There are other ways to name the mysimv.vdb directory, but it is
recommended to let the -o option determine the name to keep
things consistent later.

Test Names

When you run simulations with coverage, you specify names for your
coverage results. These names are used to uniquely identify the
sources of your coverage data at report time.

The -cm_dir option, when given on the simv command line,
specifies the directory in which the runtime coverage results are
saved. If you use this option, the runtime data is written into a
separate location from your compile-time data. Typically, users write
the results of every simv run to its own separate directory when
each simv is submitted to a grid/compute farm to be run in parallel.

The -cm_name flag is used to name the subdirectory in which the
test data is stored. In previous releases, this was the only name for
the test data, and if you later moved the parent VDB directory, it
changed the name of the test. The argument you give to -cm_name
needs to be unique within the targeted VDB directory or the data
from the latest run overwrites the data from the previous run with the
same -cm_name.

For example, to run two sets of tests, run1 and run2, and save the
data for each set into its own VDB directory, use the following:

mysimv -cm_dir t1dir.vdb -cm_name t1 …
mysimv -cm_dir t1dir.vdb -cm_name t2 …
mysimv -cm_dir t2dir.vdb -cm_name t1 …
mysimv -cm_dir t2dir.vdb -cm_name t2 …

These commands result in two VDB directories, each containing two
tests. For example, if these directories are under /proj/march/
run7, the test names would be:

/proj/march/run7/t1dir/t1
/proj/march/run7/t1dir/t2
/proj/march/run7/t2dir/t1
/proj/march/run7/t2dir/t2


6. Targeting Parts of Your Design
At compile time, you can tell VCS to monitor code coverage for only
parts of your design, for example, to ignore external IP when
monitoring coverage. You can also choose to apply different metrics
on different blocks.

Even if you collect coverage for the entire design, you can do the
same kind of filtering at report time. Both of these features are based
on a 'hier' file.

A hier file consists of a sequence of + and - directives. A + directive
includes the specified target or region, and a - directive excludes it.
If the first directive is for inclusion, it implies that everything else is
excluded. Similarly, if the first directive is exclusion, it means that
everything else in the design is included.


Each directive has a keyword following the + or - symbol. The
supported keywords include the following:

Keyword      Meaning
tree         Specifies a subtree in the design hierarchy.
module       Specifies a specific module.
moduletree   Specifies a subtree of a specific module, everywhere it is
             instantiated.
node         Specifies a specific signal.
assert       Specifies an assertion or cover property.
library      Specifies a library.
file         Specifies a specific source file.

For example, a hier file consisting of the following line means that
only this tree is monitored for any type of coverage:

+tree top.HNTO_0.HNWOV_0

Whereas, the following in the hier file means that everything should
be monitored except for the specified module:

-module Wovmod

These hier files can be used at either compile time or report time.
When used at compile time, in some cases they can improve
simulation performance, as coverage instrumentation can be
removed. You can also specify that only certain metrics be collected
or excluded from regions of the design. For more details, see the
Coverage Technology Reference Guide.
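As a sketch of how the same hier file plugs into both flows (cov.hier
is a placeholder name), the VCS -cm_hier option applies it at compile
time and the URG -hier option applies it at report time:

% vcs -cm line+cond -cm_hier cov.hier -o mysimv file1.v file2.v …
% urg -dir mysimv.vdb t*dir.vdb -hier cov.hier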


7. Merging Coverage Data
After running multiple simulation runs, the next step is to merge all
the coverage data from those runs. Once the data is merged, you
can generate reports or load the merged data into Verdi to analyze
the results.

Merging is done with the urg command. To merge data, you need to
provide the compile-time VDB directory and then any simulation-
produced VDB directories. The basic merge command is as follows:

% urg -dir mysimv.vdb t1dir.vdb t2dir.vdb … -dbname merged -noreport

This command reads all of the specified VDB directories and merges
them to create a new VDB named merged.vdb. The -noreport
switch indicates that we do not want to generate a report at the same
time (it is better to generate reports separately).


You probably do not want to do a serial merge of all your coverage
data, as merging is an inherently parallelizable process. You can
direct URG to do the merging in parallel, either on a single machine,
on multiple machines, or on a compute farm/grid. The basic option
to add is -parallel. To see the flags for different supported grid
types, see the Coverage Technology Reference Guide.
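For example, a minimal parallel merge uses the same arguments as the
basic merge plus -parallel (grid-specific flags omitted here):

% urg -dir mysimv.vdb t1dir.vdb t2dir.vdb … -dbname merged -noreport -parallel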

There are many different options to direct how merging works. We
discussed some functional coverage merging options in the section
“Basic Rules for Covergroup Merging”. There are a couple of other
major merging options worth considering for your flow.

This chapter consists of the following sections:

• “Mapping”
• “Test Data”

Mapping

Mapping is what URG calls merging the coverage results from
different subsets of the design. A common use of mapping is to
merge the coverage results collected for a sub-block onto the SOC
coverage model of the full design. For example, the coverage for the
md3 block in the figure below is merged onto all instances of its top
module in the base design using the URG -map option:


When mapping, the coverage of any unmapped modules, such as
tbblk in this example, is ignored. This allows you to peel off the
testbench from a block and just copy the design coverage.

The default -map option is simple and module-based. To perform
more sophisticated mapping on an instance-by-instance basis, use
the -mapfile option.

Test Data

By default, URG retains the list of all test records in the final merged
VDB. Using the example from the “Test Names” section, there are two
test records in each of t1dir.vdb and t2dir.vdb. To merge these
together, use the following command:

% urg -dir mysimv.vdb t*dir.vdb -dbname merged -noreport

The newly-created merged.vdb directory contains test records for
the following:


/t1dir/t1
/t1dir/t2
/t2dir/t1
/t2dir/t2

These test records retain data such as simulation cycles, peak
memory use, and CPU time. If you specify pass/fail status or
other annotations to the test records, they are also retained.

By default, URG does not keep track of which test covered which
object in the merged VDB. Once the merged VDB is created, you
can still see which objects are covered, but by default you cannot see
which tests are responsible. To retain this test correlation
information, you must also provide the -show tests switch when
creating the merged VDB:

% urg -dir mysimv.vdb t*dir.vdb -dbname merged -noreport -show tests

This command increases the size of merged.vdb - as more
information is saved now - but it also allows you to generate reports
with -show tests that show the tests that cover each object:

% urg -dir merged.vdb -show tests …

We do not retain every test name that covered every object. It does
not make sense to keep track of every test that caused your clock to
toggle, for example. You can change the maximum number of tests
saved per covered object using the -db_max_tests option:

% urg -dir mysimv.vdb t*dir.vdb -dbname merged -noreport \
  -show tests -db_max_tests 20


Retaining the test correlation data has another benefit - you can do
test grading on the merged VDB. In previous releases, you had to
keep all the simulation-time VDBs to perform grading.


8. Generating Reports
You generate a report by passing the names of all directories
containing coverage data to URG's -dir option. The first directory
passed should be the compile-time directory. For example:

% urg -dir mysimv.vdb t*dir.vdb

By default, URG generates its reports in HTML in the urgReport
directory. To get started, open urgReport/dashboard.html with
your web browser. URG's HTML reports support many interactive
options including expanding and collapsing regions and navigation
panes. You can also generate text-format reports using the
-format text command line option.

This chapter consists of the following sections:

• “Metrics”
• “Ratios”
• “Trend Charts”
• “Navigating Coverage Reports”

Metrics

If you want to report only certain metrics, use the -metric flag:

% urg -dir mysimv.vdb t*dir.vdb -metric line+group

Ratios

There are many options that affect how the report is generated and
how scores are computed. The -hier option to URG is already
discussed in Chapter 6, "Targeting Parts of Your Design". It
is also recommended to use the -show ratios option. This option
has two effects:

• Reports show the number of covered and coverable objects for
each metric, instead of just the percentage score.
• URG computes the covergroup score as the ratio of (number of
covered bins) / (number of bins). This is not the LRM-defined
scoring method, but the ratio scoring is preferred because the LRM
scoring algorithm can produce artificially high scores even when
actual coverage is still very low.
Verdi has corresponding preferences that you can set to achieve the
same results.
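For example, a report run with ratio scoring and covered/coverable
counts enabled:

% urg -dir merged.vdb -show ratios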


Trend Charts

If you save the urgReport directories produced over a period of
time, you can use them to generate graphical trend reports of how
coverage is changing over time. The URG -trend option generates
these hyperlinked reports. If you are using a verification plan, the
charts include the scores of plan features. If you have stored your
saved urgReports in the /u/fred/myreports directory, the
basic usage is as follows:

% urg -trend root /u/fred/myreports

URG then generates a trend chart that you can load in your browser.
For more on trend charts, see the Coverage Technology User Guide.

Navigating Coverage Reports

URG generates a dashboard report that is a good place to start. This
includes the rolled-up coverage scores for all top-level instances,
and the overall score across the whole design. The hierarchy report
is an interactive HTML report (or static text report) that lets you see
the rolled-up scores at any level of the hierarchy. Covergroups are
listed in the groups report, and assertions and cover properties have
their own assertions report. Every URG HTML report page has a
common link menu at the top that you can use to navigate:

From any page that displays module or instance names, you can
click on those names and be taken to the detailed coverage report
for that region.


9. Convergence
Any complex design may have code that is structurally unreachable,
or value combinations that can never happen. As code coverage is
automatically extracted from the HDL description, there can be many
of these uninteresting or unreachable objects.

When you are trying to verify the design, you have to exclude these
impossible or don't-care coverage targets so you can focus testing
on the parts of the design that can actually be exercised. This section
explains some ways to remove impossible or uninteresting parts of the
design from coverage monitoring.

Exclusion is complementary to the 'hier' file flow discussed in
Chapter 6, "Targeting Parts of Your Design". Exclusion is typically
targeted more at individual objects, while the hier file flow is used to
address regions of the design.

This chapter consists of the following sections:


• “Constant Analysis”
• “Adaptive Exclusion”

Constant Analysis

Start with VCS' built-in constant analysis feature. You should always
use the -cm_seqnoconst switch if you are using code coverage:

% vcs -cm_seqnoconst -cm line+cond+tgl file1.v file2.v …

The -cm_seqnoconst switch automatically searches for
structurally unreachable coverage targets and removes them from
the coverage target list. These objects appear in the coverage
reports grayed-out, with the word 'unreachable'.

Sometimes these unreachable objects are not intended to be
unreachable by designers. To help understand the unreachability
analysis, VCS supports a diagnostic switch, -diag:

% vcs -cm_seqnoconst -diag noconst -cm line+cond+tgl file1.v


When this switch is given, a constfile.txt file is created at compile
time listing all signals and parts of signals that VCS found to be
constant, with an explanation for each one. The explanation gives the
expression that was found to be constant, where it is declared, where
it is assigned, and the list of other constant signal parts that are
assigned to it. For example, in the snippet of the constfile.txt file
below, the expression attention[9:5] in the test_jukebox.jb1
instance is found to always be 0, and it is assigned hold[4:0] at
line 899:


- instance: test_jukebox.jb1
expression: attention[9:5]
declaration: jukebox.v:436
value: 0 always
locations:
- location: jukebox.v:899
inputs:
- input: hold[4:0]

Synopsys also provides formal analysis tools that can further
improve on the constant analysis done by VCS under the
-cm_seqnoconst switch. VCS's built-in "noconst" analysis
complements the formal tools by finding the easier-to-find
unreachable objects, letting the formal tools focus on the more
difficult cases.

Adaptive Exclusion

When performing coverage analysis, it is recommended to perform
the analysis in Verdi. When you identify a coverage target that can
be ignored, or is impossible by design, you can mark it for exclusion
directly in the GUI and add a comment explaining why it is excluded.
You can then save those exclusions and comments to an exclude file
that can be reused in the next and future sessions. Excluded objects
are removed from the coverage score calculation and appear grayed
out in reports.


To exclude lines from coverage, for example, you click the green
circle next to them in the Coverage Details pane. After excluding
objects, you then click the Recalculate button in Verdi, which
updates the coverage scores. The snapshot below shows that 
line 247 is excluded and the Recalculate button is clicked:

A common flow for exclusions is to load exclude files created in
previous sessions and then to do further analysis and exclusions.
When you are ready to save your progress, you can save only the
newly-created exclusions, to a separate file. This lets you keep the
new exclusions separate until they can be reviewed by the team, for
example. You can also choose to save excludes for different metrics
separately, save the exclusions only for a specific block, module or
covergroup, and so on. You should establish your team's exclude file
management policy before you begin coverage convergence.

The example below shows Verdi's Exclusion Manager pane. In this
pane, you can see a flat list of all exclusions applied to the current
design, along with descriptions and any comments (annotations) that
are added. Clicking on any exclusion brings up the coverage details
and source window so you can examine the object in context.

It is recommended to use the Exclusion Manager flat view (a
hierarchical view is also available) to conduct exclusion reviews with
your team.


The most important feature of the exclusion flow is that it is able to
remap most exclusions from one version of your design to the next.
Fundamentally, your team spends a large amount of effort deciding
what objects can safely be excluded, and a design change that does
not affect an exclusion should not make you re-analyze it.

This is why Verdi preserves your exclusions as-is when the design
changes, without intervention, unless the object being monitored or
the control code around it has changed. For exclusions whose
objects have changed, the Verdi Exclusion Manager walks you through
each one so you can decide if the exclusions still apply, or tell Verdi
what, if anything, needs to change.

Once an exclude file is generated, you can load it into Verdi
interactively, or you can apply it to a URG report using the -elfile
option as shown below:

% urg -dir simv.vdb -elfile blk1.el …


10. Test Grading
Grading, or test ranking, means comparing the effectiveness of tests
for coverage. Grading is useful when you want to eliminate
redundant tests from a test suite or put tests in best-first order.

Grading can select tests based purely on coverage score,
minimizing the total number of tests required to reach the coverage
score achieved by the whole database, or it can be directed to select
tests to minimize total CPU time, simulation cycles, or wall clock
time. The basic grading command - assuming you used -show tests
when creating your merged VDB - is shown below:

% urg -dir merged.vdb -grade listonly

To grade to minimize total CPU time instead, use the following
command:

% urg -dir merged.vdb -grade cost cputime listonly


If you did not use -show tests when creating your merged VDB,
or have not merged at all, you can use the -parallel switch to
do a parallel merge and then grade the tests:

% urg -dir mysimv.vdb t*dir.vdb -grade listonly -parallel

Some useful options for grading are given in the table below. Provide
these options anywhere after the -grade option on the URG
command line:

Option                             Description
indexlimit N                       Retains at most N tests per covered object.
                                   Higher N uses more memory and time but produces
                                   more precise grading results.
cost [cputime|simtime|clocktime]   Grades tests to minimize the total cost (CPU
                                   time, simulation cycles, or wall clock time)
                                   instead of minimizing the number of tests.
testfile                           Generates a file, gradedtests.txt, in the
                                   urgReport directory that is a simple list of the
                                   graded tests, in order.
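For example, to grade by CPU time and also write the ordered list to
urgReport/gradedtests.txt, the options can be combined as follows (a
sketch based on the table above):

% urg -dir merged.vdb -grade cost cputime testfile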
