PROFESSIONAL VERIFICATION
A Guide to Advanced Functional Verification
PAUL WILCOX
eBook ISBN: 1-4020-7876-5
Print ISBN: 1-4020-7875-7
http://kluweronline.com
http://ebooks.kluweronline.com
This book is dedicated to my wife, Elsa, for her unwavering love and support, and to my children, Jonathan and Elizabeth, for the inspiration and
joy they have brought to my life.
Contents

Authors
Acknowledgements

SECTION 1: THE PROFESSION OF VERIFICATION

1. INTRODUCTION
   LEARNING FROM OTHERS' BEST PRACTICES

2. VERIFICATION CHALLENGES
   MISSED BUGS: ARE WE JUST UNLUCKY?
   THE NEED FOR SPEED
   DOING MORE WITH LESS
   FRAGMENTED DEVELOPMENT, FRAGMENTED VERIFICATION

3. ADVANCED FUNCTIONAL VERIFICATION
   VERIFICATION AS A SEPARATE TASK
   COORDINATING VERIFICATION WITH OTHER DEVELOPMENT TASKS
   VERIFICATION AS A MULTITHREADED PROCESS
   VERIFICATION IS NOT 100 PERCENT

4. SUCCESSFUL VERIFICATION
   TIME MANAGEMENT
      Start Early
      Remove Dependencies
      Focus on Total Verification Time
   RESOURCE USAGE
      Plan and Document
      Build a Team
      Use Someone Skilled in Management
   VERIFICATION PROCESSES
      Choose the Right Tool for the Job
      Choose the Right Information for the Job
      Automate
   APPROACHES
      Keep Verification Real
      Stress the Design

5. PROFESSIONAL VERIFICATION

SECTION 2: THE UNIFIED VERIFICATION METHODOLOGY

6. THE UNIFIED VERIFICATION METHODOLOGY

7. UVM SYSTEM-LEVEL DESIGN
   CREATING AN FVP
      Creating a Transaction-Level Model for an FVP
      Creating a TLM from a Behavioral Model
      Creating an Analog FVP TLM
      Creating an Algorithmic Digital TLM
      Creating Stimulus and Response Generators
      Creating Interface Monitors
      Creating Architectural Checkers

STEP 3: EXECUTION
   Testbench Development
   Advanced Verification Techniques
   Assertions in Simulation
   Coverage
   Acceleration on Demand
TOP-DOWN FVP-BASED FLOW
BOTTOM-UP SPECIFICATION-BASED FLOW

11. ANALOG/RF SUBSYSTEMS
   CADENCE ACD METHODOLOGY
   SYSTEM INTEGRATION

12. SYSTEM-LEVEL DESIGN
   ISSUES ADDRESSED WITH AN FVP
   VERIFICATION AND SOFTWARE DEVELOPMENT
      Using the System Software Environment for Verification
      Microcode Engines
      Hardware Platforms
      Software Algorithms
   ABSTRACTION
      Design Abstraction
      Verification Abstraction
      Transaction-Level Modeling

14. TESTBENCH DEVELOPMENT
   TRADE-OFFS
      Reuse: Isolating Design-Specific Information
      Efficiency: Abstracting Design Information
      Flexibility: Using Standard Interfaces
      Balancing Practical Concerns
      Top-Down vs. Bottom-Up Testbench Development
   UNIFIED TESTBENCHES
      Testbench Components
      Stimulus Generators
      Transactors
      Interface Monitors
      Response Checkers
      Testbench API
      Top-Down Testbench Development
      Bottom-Up Testbench Development
   VERIFICATION TESTS
      Directed and Random Tests
      Types of Directed Tests
      Combining Random and Directed Approaches
      Constraining Random Tests
      Testbench Requirements

15. ADVANCED TESTBENCHES
   ASSERTIONS
      Using Assertions in the Test Process
      Using Assertions
      Assertions and the FVP
      Assertions at the Block Level
      Assertions and Chip-Level Verification
      Assertions and System Verification
      Flexibility and Reuse
   COVERAGE
      Using Coverage
      Filling Coverage Holes
   REACTIVE TESTBENCHES

16. HARDWARE-BASED VERIFICATION
   ACCELERATED CO-VERIFICATION
      Using an ISS, Software Simulator, and Accelerator/Emulator
      Using an RTL Processor Model and Emulator
      Using a Physical Model of the Processor and an Emulator
      Comparing Approaches
Authors
Acknowledgements
Sir Isaac Newton once remarked, "If I have seen further [than certain other
men], it is by standing upon the shoulders of giants." This book is based on
the experiences and hard work of many giants in the design and verification of
modern ICs. It would be impossible to list all the individuals who have contributed to the collected knowledge contained in this book, but it would be
foolish not to acknowledge their contribution.
I have encountered many giants in my career who have taken the time
and had the patience to teach me much of what is contained in this text. For
that I would like to acknowledge the friends and co-workers I have worked
with at Sun Microsystems, 0-In Design Automation, StratumOne Communications and Cisco Systems. Special thanks to Willis Hendly, David Kaffine,
James Antonellis, Curtis Widdoes, and Richard Ho.
This book is the product of the efforts of many people at Cadence Design
Systems, and I would like to acknowledge the following for their contributions and efforts in reviewing the text: Andreas Meyer, Grant Martin, Leonard
Drucker, Neyaz Khan, Phu Huynh, Lisa Piper, and the entire Methodology
Engineering team.
I want to acknowledge Linda Fogel for her tireless and professional editing, along with Kristen Willett, Kristin Lietzke, and Gloria Kreitman in
Cadence's marketing communications group.
A special acknowledgement to Paul Estrada for providing me the opportunity and time to write this book and for showing faith in me when even I was
ready to give up. One could not ask for a better mentor or friend.
Finally, I would like to acknowledge the true giants of my life, my parents,
Eleanor and Gary Wilcox, for their love and support, and for teaching me the
nobility of education.
SECTION 1
THE PROFESSION OF VERIFICATION
Chapter 1
Introduction
Thinking about how it might not work
After years of doing what I considered grunt work in test, tool development, and verification, I finally got my chance to design a major portion of an
important chip. I had created a detailed specification and beat all the scheduled milestones. My design was meeting its performance goals with time to
spare, and the initial layout looked great. And then, two weeks before tapeout
of the entire chip, the bug reports began to come in. The random verification
regressions had been running fine for weeks until some of the parameters
were loosened. Suddenly, my block was losing or misordering transactions,
and all the simulations were failing. I found what I thought was a one-in-a-million corner-case bug, but the next day the simulations were failing again.
Another fix and another fix and still the bugs kept popping up. I was called in
by the project managers. The tapeout deadline was at risk of slipping and it
was because of me.
As I drove home that night, I tried to figure out what was going wrong. I
had followed all the design rules, creating a very complex design in smaller
size and greater performance than had been required. The data structure I had
come up with could support many advanced features, and we were even patenting it. As I thought about it, I saw that the complexity I had added also
created many new possible side effects, and the simple testbench I had written
could not test these side effects. I realized that the only way I could get this
design back on track was to stop thinking about how it should work and start
thinking about how it might not work. It was at this moment that I began to
understand functional verification.
But I'm not alone in going through this. Most engineers throughout the
integrated circuit (IC) industry have had similar experiences. Fortunately,
functional verification is evolving from an afterthought to an integral part of
the development process. The evolution has occurred not because of forethought and careful planning, but out of necessity. Functional verification
teams must keep up with growing complexities, growing device sizes, rapidly
changing standards, increased performance demands, and the rapid integration of separate functions into single systems. Functional verification of
today's nanometer-scale, complex ICs requires professional verification.
This book explores professional verification in a practical manner by
detailing the best practices used by advanced functional verification teams
throughout the industry. The goal of this book is not to present research into new or unproven techniques, but to describe proven approaches that verification teams can put to use today.
Chapter 2
Verification Challenges
Missed bugs, lack of time, and limited resources
Every development team faces verification issues of some kind. Some
might be due to the size or number of designs, some to the complexity of the
design, some to the verification process being used. Verification teams continually attempt to address these issues only to find that new problems arise
that are more complex than the original ones. Almost every issue in verification today can be placed in one of three buckets: missed bugs, lack of time, or
lack of resources. The most pressing issue is the inability to find all the bugs
during the verification process. Given enough time and resources, teams can
verify with a high degree of confidence that a design will work. But having
enough time to complete verification is a challenge. So more resources, more
compute power, and more processes are thrown at the problem in hopes of
decreasing the time to complete successful verification. Yet resources are
expensive, which leads to the perception that verification takes too many
resources to be successful on time. Teams need to find all the bugs in the
shortest amount of time, with the fewest resources, in the most efficient and effective manner. Let's take a closer look at each of these areas.
When two or three of these "one-in-a-million" bugs are found in every project, a solution
needs to be found.
The difficulty in finding obscure bugs in large designs is that there are
more possible scenarios than there is time to test. Verification teams attempt
to address these bugs using advanced automated techniques, such as formal
verification or constrained random testbenches. Automating the processes
increases the likelihood that the correct sequence of events to stimulate and
catch the bug occurs. As with any bug, finding obscure bugs is best handled by
using new or better tools and processes.
Finding bugs sooner is only a benefit if it decreases the total verification time. Some
techniques take so long to debug or to reverify that the total verification time
is the same or worse.
Most schools do not offer formal training in verification, so teams are left with trying to hire
experienced engineers or training inexperienced engineers. This often leads to
specialized verification engineers who know how to perform some verification tasks, but not all. Specialized resources can only be utilized on specific
tasks, so unless the process is managed very carefully, resources are not utilized to their fullest extent. The net effect is inefficiencies in the verification
process, limiting the amount of work that can be done.
Verification resources also include compute power and verification tools.
In the past, many teams have attempted to throw compute power at verification to get it done faster. Today, computer and networking hardware is
relatively cheap, but outfitting a large server farm with the necessary software
licenses and verification tools can be costly. Quite often verification tools
focus on one specialized task in the process and can only be used for a small
percentage of the overall project time. Outfitting a server farm with software
to meet the demands for this limited amount of time can be very costly.
A mantra of many good code developers is "write once and use often."
Unfortunately, the mantra in verification seems to be "write often and use
once." Time is often wasted capturing and replicating the same information
multiple times for different processes and tools. Different environments
require the same information but in different representations. The goal of verification teams should be to reuse the information from task to task and from
project to project. Verification reuse can provide a huge productivity gain if
done correctly. Making models and components reusable requires more time
and effort, but if this is amortized across multiple projects, it is often worth
the effort and costs. Verification teams need to identify when reuse is applicable and have the processes and methodology in place for utilizing components
or models multiple times. If teams do not plan for reuse, the work can be
wasted.
Reusing design blocks is prevalent today in large complex designs. It is
often easier to reuse or modify a block from a previous design than to design a
block from scratch. Unfortunately, the verification for that block may have
been done by a different team using different methods and environments than
your team plans to use. Do you trust that the design was verified correctly the
first time? Does the design need to work differently in your system? These
questions often lead development teams to reverify existing blocks to reduce
the risk of failure.
Development teams want to be able to reuse old design blocks without
having to reverify the old design. This requires design teams to design blocks
that work for the intended design as well as newer designs. Verification teams
need to develop environments that can quickly reverify existing designs when
they change and that can also expand to verify designs within different system
and models. Reuse from task to task, often known as vertical reuse, is limited
or impossible. The same information is recreated at each stage, only to be left
unmaintained once the task is completed. This "information rot" makes it
nearly impossible to quickly make late changes in the design and rerun the
task.
Fragmentation also exists from project to project. Few companies have a
common verification flow for all projects. Even derivative projects often
require new verification flows to be developed. Because each project is different, reusing models or information is impossible. Even though design IP can
be reused from project to project, verification IP used to reverify the design
often cannot be reused. Fragmentation also exists between design chain partners. Designs today are linked in a chain with IP developers providing blocks
to IC developers, who provide devices to system developers. Fragmentation
between design chain partners results in recreating verification environments
at each stage in the design chain.
The verification process has also become fragmented due to the ad hoc
approach most teams use to develop their process. Instead of addressing the
verification process as a whole, teams address individual issues on an as-needed basis. Teams add new techniques or tools without regard for the overall process, resulting in a flow that resembles islands of automation. With the
advent of hardware description languages (HDL) and common implementation flows, the productivity of design teams has surpassed the ability of
verification teams to keep up. Verification teams, who are under great time
and resource pressure, end up just fighting fires and not addressing long-term
issues, which results in fragmented approaches to verification.
No single tool or method can address the fragmentation in your development process. In fact, they might make it worse. What is needed is a
methodology to unify all the stages, from system design to system design-in,
across different design domains and projects. Only by unifying the entire verification process will fragmentation be removed, making dramatic gains in
speed and efficiency possible.
The second section of this book looks at a unified verification methodology that is based on best practices used by advanced verification teams today.
It gives you a view into how the different tools and techniques can be used
together to address today's major verification challenges. But first, let's look
at the role advanced functional verification plays in developing a unified verification methodology.
Chapter 3
Advanced Functional Verification
Viewing verification differently
Advanced functional verification is not simply doing more of what you are
already doing or using more powerful tools to make your job easier.
Advanced functional verification is a fundamentally different way of thinking
about and performing verification of large complex designs. Let's look at
some of the basic principles of advanced functional verification and the implications these principles have in verification today.
In addition to decreasing the total project time, a separate verification process improves the quality of the results. Verifying a design often requires a
different mindset than implementation. When you implement a design, you
concentrate on how the design should work. When you verify the design, you
also concentrate on how the design might not work. It is common practice in
many software development processes that the designer should never be
responsible for testing one's own code, because if the designer made an incorrect assumption or interpreted a specification incorrectly when implementing
the design, the same assumption or incorrect interpretation will be made in
testing the design. Having an independent person verify the design decreases
the likelihood that the same incorrect assumptions or interpretations are made
ify the implementation. When each of these groups works independently, the
work required to create and maintain these representations is duplicated. If
teams coordinate or reuse the functional representations, they can greatly
reduce development time and resources.
In the past, teams could wait for a complete architectural analysis to be
completed before beginning implementation, and software development
could wait until the hardware had been designed. But if teams want to meet
today's reduced development schedules, they must perform these development processes in parallel. This means that design and functional verification
begins before the architectural analysis is completed. Software development
is done in parallel with hardware development, and the final system design-in
begins before all the parts are fully tested and working in the lab. Starting
these development processes earlier in the project creates new demands for
functional verification teams. The verification team begins with less well-defined architectural specifications and needs to combine performance testing
with functional testing. The other development teams need a verified representation of the design earlier in the project, so the verification team must
prioritize their efforts and synchronize with other development teams.
Closer coordination between the verification team and the other development teams affects the communication and scheduling of the verification
process. Functional verification is often not considered until after the project
has begun and progressed for a period of time. The thinking of many design
teams was that the verification team should not engage on a project until the
architecture and specification were complete and the design was well underway. If the verification team is to coordinate with the other development
teams, it needs to engage in the project sooner. Verification needs to understand the development of the architecture if it is to help with performance
analysis and if specifications are limited. Verification managers or leads also
need to understand the requirements for deliverables between each group and
to schedule resources accordingly. Organizationally, the verification team
plays more of a central role in the development process.
Many different tools and techniques need to be used to verify the design in different ways. Verifying a large complex design includes several parallel processes, multiple stages, and many
dependencies to manage.
Verification leads must have project management skills and understand the entire development process to be successful.
Chapter 4
Successful Verification
Managing time and resources using advanced functional verification
By studying a number of successful advanced verification teams, a set of
common guiding principles emerge. These principles guide how teams perform the process of verification as well as manage their time and use their
resources. Throughout this book we present best practices used by advanced
verification teams that can all be traced back to these common guiding
principles.
TIME MANAGEMENT
Time management is an important part of any complex process. With
proper time management, verification teams can complete verification sooner
or perform more verification in the allotted time.
Start Early
Every development project is unique and often requires new approaches
for functional verification. Starting the process of verification early in the
project enables the team to plan for new approaches and to adapt to changing
environments. Starting early also allows verification to guide important early
decisions, such as IP selection and feature support. As verification becomes a
larger and larger portion of the development process, more decisions will
need to be made to weigh the trade-offs and effects.
Functional verification requires preparation. If the verification team waits
until the design has been implemented to begin, time is wasted developing
and debugging the verification environment and tests. Verification teams
need to be ready to test the implementation before it is received so that no
time is lost.
Successful verification teams start by demonstrating the value of having
verification knowledge early in the process. These teams become involved in
the development and testing of system models used by architects and system
designers. They also try to decouple the development of verification environments and tests from the implementation process so that they can be done in
parallel. Of course, it is impossible for teams to engage in new projects early
Remove Dependencies
The time it takes to complete a complex process like functional verification can be reduced by speeding the individual subtasks of the process or by
removing the dependencies associated with the subtasks. Successful verification teams understand that time spent waiting for a deliverable from one task
to start another task is wasted time. Removing dependencies not only
decreases the amount of time to complete the overall project, it also uses the
resources more efficiently throughout the project. Waiting for deliverables
like HDL code or specifications leads to large spikes in resource utilization,
followed by lulls as the resources wait for the next key deliverable.
Removing dependencies from external teams that are waiting for deliverables from the verification team reduces project time and improves the
perception of the verification team. Implementation teams must wait for a
bug-free design, and software teams often wait for functional models before
beginning implementation. Staying off the critical path for the project should
be an important goal of any verification team.
Successful verification teams remove dependencies in a number of ways.
Many teams develop their own high-level system model to use in place of the
HDL for developing tests and environments. The same system models can
also be used as an executable specification, alleviating the need to wait for a
functional specification. External dependencies can be met by providing the
high-level model or an early prototype or emulation system to software and
system design teams.
RESOURCE USAGE
Managing resources within a verification process is not simply trying to
optimize and do more verification with fewer resources. Resource management also includes building teams and environments that facilitate efficient
high-quality verification.
Build a Team
The goal in building a successful verification team is to have a team whose
total abilities are greater than the sum of its parts. Successful verification
teams most often consist of individuals who have a common baseline knowledge of verification as well as specialized knowledge in specific areas. Each
team member is familiar with basic test writing and debugging skills in addition to an in-depth knowledge in an area, such as software development,
testbench development, scripting, or emulation.
Just as engineers learn the profession of verification through experience
and mentoring, verification teams also are built through experience and
benchmarking. As a team works together, it learns to utilize the individual
skills to meet the goals in the most efficient manner. Successful teams also
benchmark themselves against other teams and learn from the best practices
of these teams.
Successful verification teams build a cohesive team by selecting the right
members and keeping them together. These teams select members with a
wide range of abilities and experience. The verification process has many
complex tasks that require experienced individuals, and it has many basic
tasks that can be performed by less-experienced members. Having a well-balanced team keeps everyone engaged in the process and provides a path for
development. A trademark of many of the most successful teams is that they
have worked together for many projects. Keeping a verification team together
is often difficult, but the benefits are enormous.
VERIFICATION PROCESSES
The verification process is made up of many smaller separate processes
and techniques. Each of these smaller processes has its own unique tools and
methods. Selecting these tools and techniques carefully results in the most
efficient overall process.
Some types of bugs can be detected faster and with less effort using dedicated static tools. These are just a few
of the possible trade-offs teams should make.
Automate
Automating the various tasks of the verification process reduces the time
and effort the team spends on repetitive tasks. The verification process contains many individual tasks that might be repeated hundreds or thousands of
times during a project. Automating these tasks might not speed up the individual task, but it does free up valuable resources to concentrate on other tasks.
Resource loads can be balanced to utilize the available resources, such as
computers and software licenses. Automation can also facilitate higher quality results. Human error can easily enter long repetitive tasks, such as running
large test suites and collecting results. Removing the chance of human error
assures more consistent, reliable results.
APPROACHES
Proper management of time, resources, and tools results in a highly efficient advanced verification process. The last group of principles encompasses
the overall verification approaches that successful verification teams use.
Successful teams use realistic stimulus, such as program streams or traffic flows taken directly from real-world applications. The
teams also utilize tools, such as emulators that allow the design to be tested in
a real-world environment. This extra verification not only verifies that the
design works correctly, but also verifies that the testbench and models that
were used are correct for future use. Finally, these teams write their environments so that tests can be written at a higher application level and results can
be debugged at that same high level. Transactors or adapters are written to
translate between these higher level data abstractions, such as an image or a
packet, down to the signal level necessary for detailed verification.
Chapter 5
Professional Verification
From second-class citizen to respected profession
Some might argue that functional verification is not a true profession
because of its low visibility and secondary role in the development process. A
closer look shows that today functional verification is an integral part of the
development process, requiring professional skills and organization. The term
"professional" has many meanings. Many people associate professional with
being paid to do a job, such as a professional athlete or musician. One might
also associate professional with the expectation of a high level of quality. The
Cambridge Dictionary defines professionalism as "the qualities connected
with trained and skilled people," and a professional as "a person who has a
job that requires skill, education or training." Professional verification encompasses all these meanings.
A bug found at a customer site can cause months or even years of delay in getting the product to market.
Deciding which features to include in a product is not determined by the
time or difficulty in implementing them, but in the time and cost it takes to
verify and test them. In many cases, adding a new feature to an IC is as easy
as adding a mode register and some random logic. But if you are doing basic
brute force verification, this simple change can cause the verification effort to
grow exponentially. Advanced verification based on adaptability and reusability can more easily handle and limit the effects of large feature sets.
Efficient advanced verification allows for the verification of more features in
the same amount of time, thereby increasing the value of the product.
When economic times are hard, adapting to rapidly changing customer
needs becomes a must. If you have a rigid development environment, including an inflexible verification environment, it is difficult, if not impossible, to
meet your customers' requests. Having an adaptable, reusable, and efficient
verification environment enables you to respond to customers' needs and provide greater value.
What is the value of getting your products to market sooner, with more
features, higher quality, and meeting your customers' needs? Professional
verification can help you reach these goals. But what is the cost of achieving
this value?
There is an old saying that you cannot put a price on quality. Professional
verification assumes a high standard of quality, and some may wonder if the
cost of that quality is worth it. The most measurable form of quality in functional verification is missed bugs. A bug that makes it through the verification
process comes at great cost to the development team. A bug found after tapeout may require a respin or metal fix for the device, costing engineering
dollars as well as lost time. A bug that makes it into the lab or validation process risks not only a respin of the device but the costs of time and dollars in
reverifying and revalidating the IC or system. A bug that makes it to the customer can lead to additional engineering costs, but can also cause a loss of
business, which is far worse.
Respins, metal fixes, reverification, revalidation, and customer bugs are
common experiences for all IC development teams today. While the additional cost in time and resources for advanced verification might seem like an
unnecessary investment, the potential savings in time and money provides a
far greater return than a low-quality job.
Verification Training
Formal verification training is hardly available. Few, if any,
schools offer functional verification as part of their curriculum. The reason
most often cited is the lack of pure research areas in functional verification
today. Only recently have periodicals begun to address verification topics or
have organized verification conferences. Advanced verification is most often
learned through experience and working with dedicated professionals. This
has led to small pockets of functional verification expertise throughout the
industry, but very little organization. Areas such as silicon process technology
and design automation have thrived because of organized groups and consortiums, which do not exist for functional verification.
SECTION 2
THE UNIFIED VERIFICATION METHODOLOGY
Chapter 6
The Unified Verification Methodology
A new approach to verification
The previous section described the issues in advanced functional verification today and detailed the need for a verification methodology that removes
fragmentation and improves the speed and efficiency of the process. This section presents the Unified Verification Methodology (UVM) developed by
Cadence Design Systems, which addresses this need. In fall 2002, Cadence
Design Systems assembled a group of experienced verification engineers with
varying industry backgrounds to create a unified verification methodology.
The team was not limited to using a specific set of vendor tools or practices.
The only constraints were that all the methods explored were in use today and
that the resulting methodology dramatically increased the speed and efficiency of the verification process.
The first version of the Cadence UVM was released in February 2003, and
since then advanced verification teams have used the UVM as a blueprint for
developing their own unified verification methodology. The UVM program
continues at Cadence as the team identifies new practices and refines the methodology.
KEY CONCEPTS
Before we dive into the UVM, it is important to first understand several
key concepts that run throughout the methodology.
At any point in the development process, the FVP test suite can be run to verify that
the design still meets the original system goals.
A question commonly asked about an FVP is how do you verify that it is
correct? If the golden model is to be used to verify the implementation, an
error in the golden model will result in an error in the implementation. The
FVP should be verified in a similar manner as the final silicon will be verified
in the lab. The test suite should verify that the design meets the specified level
of functions, features, and performance. The test suite and test components of
the FVP are very application-specific. Providing the testbench and application-level test suite ensures that the FVP continues to meet the original
product goals.
The FVP is used throughout the UVM. As the design is being architected,
the functional models are developed at a high level and tested within the FVP.
Once the architecture is complete, the FVP is provided to each team developing a sub-block. These teams can reuse the model and testbench from the FVP
to verify the implementation created by the sub-block design team. After the
sub-block is verified by the individual development teams, the functional
model for the sub-block is replaced in the FVP by the implementation of the
sub-block. The test suite is run on this mixed-level FVP to verify that the
implementation meets the application-level goals and to verify the integration
of the sub-block. Each sub-block is replaced in the FVP until all have passed.
Finally, all the models are replaced within the FVP, and final application-level testing is performed on the implementation-level FVP.
Transaction-Level Verification
A dramatic improvement in design productivity occurred in the 1990s as
designers moved from working at the Boolean gate level to RTL. Moving to
RTL enabled designers to operate at a functional level that was more intuitive
than simple gates, since designers think in terms of finite state machines, arbiters, and memory elements. However, as design sizes have increased and
more functionality is placed in a single design, the verification process also
needs to move up a level in abstraction from RTL to the transaction level.
Verification engineers operate at an application level, where the concerns are
complex data formats, algorithms, and protocols. To be productive, verification engineers need to think and work at the more intuitive level of packets,
instructions, or images, and not at bus-cycle levels.
A transaction is the representation of a collection of signals and transitions
between two blocks in a form that is easily understood by the test writer and
debugger. A transaction can be as simple as a single data write operation, or a sequence of simple transactions can be linked together to represent a complex transaction, such
as transferring an IP packet. Figure 11 shows a simple data transfer (represented as B) linked together to form a frame, further linked together to form a
packet and finally a session.
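To make the hierarchy concrete, the following plain C++ sketch shows one way a testbench might represent it, with a byte-level transfer as the primitive and frames, packets, and sessions built up as collections of lower-level transactions. The type and field names are illustrative only and are not taken from any particular tool or library.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Primitive transaction: one byte-level transfer on an interface.
struct ByteTransfer {
    enum class Op { Send, Receive, Gap } op;
    uint8_t data;
};

// A frame is an ordered collection of byte transfers.
struct Frame {
    std::vector<ByteTransfer> transfers;
};

// A packet is built from frames and carries feature-level fields.
struct Packet {
    std::vector<Frame> frames;
    std::string destination;   // a feature-level field the test writer cares about
};

// A session links packets into an application-level transaction.
struct Session {
    std::vector<Packet> packets;
};

// A test writer works at the packet or session level; a transactor
// (not shown) would expand these objects down to pin-level activity.
int main() {
    ByteTransfer b{ByteTransfer::Op::Send, 0x5A};
    Frame f;
    f.transfers.push_back(b);
    Packet p;
    p.destination = "port0";
    p.frames.push_back(f);
    Session s;
    s.packets.push_back(p);
    return 0;
}
```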
Transaction Taxonomy

Level         Data Unit   Operations                            Fields
Interface     Byte        Send, Receive, Gap                    Bits
Unit          Frame       Assemble, Segment, Address, Switch
Feature       Packet      Encapsulate, Retry, Ack, Route
Application   Session     Initiate, Transmit, Complete
Using transactions in the UVM improves the speed and efficiency of the
verification process in several ways:
Provides increased simulation performance for the transaction-based
FVP
Allows the test writer to create tests in a more efficient manner by
removing the details of low-level signaling
Simplifies the debug process by presenting information to the engineer in a manner that is easy to interpret
Provides increased simulation performance for hardware-based acceleration
Allows easy collection and analysis of interface-level coverage information
Assertions
Assertions are created in the UVM whenever design or architecture information is captured. You can use verification tools to verify the assertions
either in a dynamic manner using simulation or in a static manner with formal
mathematical techniques. These assertions are then used throughout the verification process to verify the design efficiently. The UVM uses three types of
assertions:
Architectural assertions prove architectural properties, such as fairness and the absence of deadlock.
Interface assertions check the protocol of interfaces between blocks.
Structural assertions verify low-level internal structures within an
implementation, such as FIFO overflow or incorrect FSM transitions (a simple runtime sketch of such a check follows this list).
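In practice these checks are written in an assertion language or captured by assertion-aware tools; the plain C++ sketch below only illustrates the intent of a structural assertion as a runtime check built into a model. The FIFO class is hypothetical and simply flags overflow and underflow the moment they occur.

```cpp
#include <cassert>
#include <cstddef>
#include <queue>

// Transaction-level FIFO model with built-in structural assertions.
template <typename T>
class CheckedFifo {
public:
    explicit CheckedFifo(std::size_t depth) : depth_(depth) {}

    void push(const T& item) {
        // Structural assertion: writing into a full FIFO is a design bug.
        assert(q_.size() < depth_ && "FIFO overflow");
        q_.push(item);
    }

    T pop() {
        // Structural assertion: reading an empty FIFO is a design bug.
        assert(!q_.empty() && "FIFO underflow");
        T item = q_.front();
        q_.pop();
        return item;
    }

private:
    std::size_t depth_;
    std::queue<T> q_;
};

int main() {
    CheckedFifo<int> fifo(2);
    fifo.push(1);
    fifo.push(2);
    // fifo.push(3);  // would fire the overflow assertion immediately
    fifo.pop();
    return 0;
}
```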
Coverage Type   When Defined                    When Measured                             Examples
Application     Architecture Definition         System Modeling and System Verification   Auto Retry, Cache Hit
Interface       Implementation Definition       After Block Tests and Subsystem Tests     Packet Types, Instruction Sequences
Structural      Micro-Architecture Definition   After Block Tests                         FSM States and Arcs, FIFO Thresholds
Code            Coding                          After Block Tests                         Statement, Expression
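As a rough illustration of the interface coverage row above, the hypothetical C++ fragment below counts which packet types a monitor has actually observed so that holes can be reported at the end of a run. A real flow would rely on a coverage tool or language feature; this sketch only shows the bookkeeping idea.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Minimal interface-coverage collector: counts packet types observed
// on a monitored interface and reports any that were never seen.
class PacketTypeCoverage {
public:
    explicit PacketTypeCoverage(const std::vector<std::string>& bins) {
        for (const auto& b : bins) hits_[b] = 0;   // declare the coverage bins
    }

    void sample(const std::string& packet_type) { ++hits_[packet_type]; }

    void report() const {
        for (const auto& bin : hits_) {
            if (bin.second == 0)
                std::cout << "coverage hole: " << bin.first << "\n";
            else
                std::cout << bin.first << " hit " << bin.second << " times\n";
        }
    }

private:
    std::map<std::string, int> hits_;
};

int main() {
    PacketTypeCoverage cov({"unicast", "multicast", "broadcast"});
    cov.sample("unicast");     // a monitor would call this for each packet seen
    cov.sample("unicast");
    cov.report();              // reports multicast and broadcast as holes
    return 0;
}
```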
Hardware Acceleration
This example is for a small block design with a non-synthesizable testbench. The standard run time for tests of an average-size design block is
shown on the X axis. The Y axis shows the total test time, including compile
time, run time, and debug time. Figure 13 shows that for short tests simulation
is the fastest method. Once the test length reaches around 70 minutes, acceleration is the fastest. This is the point where acceleration-on-demand is
effective. Each team should do a similar analysis, taking design size, testbench performance, and debug times into account to determine when to
accelerate a simulation.
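The trade-off can be framed as simple arithmetic. The sketch below uses invented numbers (compile and mapping overheads, a 50x speed-up) chosen only so that the crossover lands near the 70-minute point mentioned above; each team should substitute its own measured values.

```cpp
#include <cstdio>

// Total wall-clock cost of one test under each approach, in minutes.
// All constants are placeholders; substitute measured values.
double simulation_total(double run_minutes) {
    const double compile = 5.0;                  // HDL compile
    const double debug   = 15.0;                 // typical debug effort
    return compile + run_minutes + debug;
}

double acceleration_total(double run_minutes) {
    const double compile_and_map = 60.0;         // synthesis/mapping for the accelerator
    const double speedup         = 50.0;         // assumed acceleration factor
    const double debug           = 30.0;         // debug is often slower on hardware
    return compile_and_map + run_minutes / speedup + debug;
}

int main() {
    // Sweep simulated run time and report where acceleration starts to win
    // (with these placeholder numbers the crossover is near 70 minutes).
    for (double run = 10.0; run <= 120.0; run += 10.0) {
        double sim = simulation_total(run);
        double acc = acceleration_total(run);
        std::printf("run=%5.1f min  sim=%6.1f  acc=%6.1f  %s\n",
                    run, sim, acc, acc < sim ? "accelerate" : "simulate");
    }
    return 0;
}
```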
Acceleration also speeds regression testing. Often small changes to the
design or test environment can have unwanted and unknown effects on other
parts of the system. Development teams set up a regression environment to
verify that changes in the design or environment have not caused unwanted
effects. It is important for development teams to gain confidence in their
changes in the shortest possible turnaround time. Regression testing is often
performed on large server farms running jobs in parallel. The time for completing the regression is dependent on the number of tests, the length of the
individual tests, the number of servers in the farm, and the simulation speed.
When the number of tests and length of tests causes the total regression time
to exceed the acceptable turnaround time, acceleration should be used. Acceleration reduces the run time of longer tests and allows large groups of shorter
tests to be run in less total time.
Finally, acceleration provides the necessary speed improvements to facilitate system verification. Acceleration can be used in a simulation-based
system verification environment to simulate large system integrations with
embedded software. Acceleration is also at the heart of emulation-based system verification. Accelerating the design in an emulation environment
connects the design to real-world stimulus and instrumentation while running
and developing the system software.
METHODOLOGY OVERVIEW
Before diving into the details of the methodology, it is important to understand the overall flow from a high level. Perhaps the best way to describe the
methodology is to compare it to the basic methodology used by teams today.
Most development today begins with a group of architects or system designers working together to define the product at a high level. Simple models or
spreadsheets might be used for reviewing design trade-offs and partitioning.
This effort often results in a written architecture specification, which is given
to an implementation team or group of teams to develop. In many cases, the
implementation teams create an implementation or micro-architecture specification detailing what they intend to implement. The designers begin
implementing individual blocks of the design in parallel. At some point, verification engineers may help each block team verify their block.
As each block is completed, it is integrated with other blocks, until the
complete device is assembled. At this point, integration testing is done at the
device level to verify that the blocks work together correctly. When the
device is functionally correct, feature testing and performance testing verify
that what was created meets the original product-level goals. If errors are
found in integration testing or during performance or feature testing, the
design is sent back to the development teams for fixing or redesign. After the
design has passed functional, feature, and performance testing, it can be given
to the back-end teams, software developers, and system design and verification teams.
With this methodology, testing is uncoordinated and occurs late in the
development process. Once the architecture specification is agreed upon, each
team works independently on their portion of the design. There is no mechanism
to verify design assumptions, that changes still meet the original goals,
or that these assumptions and changes are in sync with what other groups are
expecting. The individual teams must wait until integration testing to discover
problems due to incorrect, out-of-date, or incomplete specifications. The
teams must also wait until final feature and performance testing to know
whether the individual parts they created meet the goals of the device as a
whole. These delays often result in bugs found late in the development process, causing individual blocks to be redesigned while the project is at a
standstill.
Once the FVP is completed, it is distributed to the various groups participating in the project. The system designers use the FVP to model the device in
the context of the larger system. The software developers use the FVP to
begin developing low-level hardware-dependent software, such as device
drivers. The implementation teams use the FVP as an executable specification
to develop the individual blocks that make up the final system. The block-level implementation teams use the FVP model as a reference model to verify
that the responses of the implementation match the responses of the model.
The design is also verified by substituting the implementation for the
model in the FVP and rerunning the FVP test suite. If changes are required to
the block as implementation occurs, the changes can be verified within the
context of the FVP and then propagated out to the other development teams.
Verifying the implementation first against the model from the FVP and then
in place of the model in the FVP verifies early in the development process that
the design meets the specified requirements and works correctly with other
blocks.
This high-level description of the UVM has left out many of the important
details on how to accomplish this methodology. The following chapters of
this section describe more thoroughly how the UVM can be used for your
design.
Chapter 7
UVM System-Level Design
Creating an FVP
The first stage of the Unified Verification Methodology is system-level
design. During this stage, the product is defined, architectural measurements
and trade-offs are made, and detailed specifications are created. In many
methodologies, verification plays only a limited role. In the UVM, the verification effort begins early with the development and verification of the
functional virtual prototype. This first stage is vital because it
sets the groundwork for unification throughout the rest of the methodology.
Most projects go through three basic stages: idea generation, product definition, and implementation. The idea generation stage is the most dynamic
one, and the steps taken differ from project to project and team to team. The
result of the idea stage is often a document that specifies what the problem or
opportunity is and presents a concept for a product as a solution.
The output of the product definition stage is a detailed specification of the
design and the environment it will operate in. The process specifies the algorithms and functions, the external and internal interfaces, the block
partitioning, and the data flows. The product definition consists of waves of
refinement, where the first wave begins by defining the specific implementation characteristics, some in great detail and others in less. Each successive
wave further defines the details of the product, working toward the final
project.
If the product definition is detailed enough, the implementation stage
focuses on coding the design and meeting the physical and timing characteristics. Quite often system-level design and architecture changes are made
during the implementation stage. Development teams use many different
mechanisms for developing the final definition of the design that the implementation teams will use, but the most common mechanism is the system
model. A system model is a high-level representation of the design that can be
simulated to explore possible architectures or functional trade-offs.
In the early stages of system-level design, high-level models of processors, buses, or interfaces help identify performance bottlenecks. The focus of
the system model at this point is interoperability and a fast turnaround in modifying and simulating various scenarios. System models are often thrown
away after architectural exploration or functional experiments are completed.
This is the first occurrence of fragmentation in the development process.
While it is usually not feasible to develop a complete FVP-like model for use
in early architectural analysis and system design, developing a system model
in a way that enables an FVP to evolve is highly beneficial.
This chapter focuses on the idea generation and product definition stages
of the development process and the role verification plays during these stages.
The system models of the past have been ineffective for verification for a
number of reasons, partitioning being one of them. Architects and system
designers are not concerned with following a design partitioning that matches
the intended implementation. Algorithms and architectural elements are easier to model as they exist functionally, but these functions may be partitioned
in a different manner than the final implementation. If the partitioning of the
model does not match the partitioning of the implementation, it is difficult to
reuse the model for verification uses, such as reference models or integration
vehicles.
The format and level of abstraction at which system models are created also
make them ineffective. System models are written at very high behavioral
levels or at very low cycle-accurate levels. Behavioral models are great for
speeding verification, but if they are at too high a level, they lack the implementation details needed for verification. If the models are written at too low
a level of abstraction, the verification speed is not improved and it is difficult
to keep these models in sync with design changes. The preferred level for verification is the transaction level, which provides the right mix of speed and
detail.
Another reason system models have not been used in verification is completeness. Architects and system designers are usually concerned with only a
portion of the total design. Design parts, such as standard interfaces or service
blocks, are not a major consideration for architects, so they are either not modeled or
modeled inaccurately. Verification development requires a nearly complete
model as an executable specification. A system model is also not kept up-to-date. Architects and system designers only focus on the very early stages of
the project, and who maintains the model in the later stages often becomes an
issue.
An FVP differs in that it is modeled from the beginning with a focus on
verification. This does not mean that architectural analysis and system design
are not considered or supported, but rather an FVP supports architectural
analysis, system design, and software development in a framework that unifies these tasks with the verification process. The FVP is partitioned along the
same borders as the main implementation blocks and is developed at the
transaction level of abstraction. The FVP is owned by the verification team,
which ensures that it is kept up-to-date with design and verification changes
throughout the verification process. The FVP is a complete model of the system and testbench.
The decision comes down to weighing the costs incurred against the benefits received. Teams must have accurate information to determine the costs
and benefits before beginning development.
The costs of developing an FVP are measured by the time spent and
resources required. The time needed depends on the experience of the developers and the complexity of the design. The larger and more complex the
design, the longer it takes to develop a model. Less experienced engineers
need more time to learn a new modeling language and the concepts of high-level modeling. Once an engineer has learned the basics, the modeling process can take between 10 and 20 percent of the time required to implement the
design. This time can be lessened by reusing models from previous designs or
purchasing models already completed.
How long it takes to develop an FVP depends on when it is begun and how
resources are used. If resources are taken from other tasks or tasks are put on
hold until the FVP is completed, the project will be delayed. If people are
engaged earlier in the process when they are usually idle, and the model is
completed in parallel with other tasks, the time cost is much lower. In addition, if
parts or all of the FVP can be combined with other efforts, such as architectural modeling or software development, the cost is reduced. Teams need to
decide who will be developing the FVP, when development will begin, and
how much can be reused or leveraged from other efforts to calculate the true
cost.
The benefits of using an FVP are measured by the time saved and the quality of verification. The up-front investment in time and resources for
developing an FVP is offset by the savings in time and resources throughout
the development process. The most direct time and resource savings occur
when the FVP is used within the verification testbench. Having an early
model of the system lets the verification team to do more work in parallel with
the implementation team, thereby shortening the overall development time.
One of the most time-consuming tasks in developing a testbench is determining the expected results to compare against the results observed from the
design. Many verification teams end up embedding large parts of behavioral
representations of the design within the testbench or the tests to help determine what the correct expected result should be. Using the FVP in place of
this embedded behavioral information saves time and reduces the chance of
errors within the testbench.
Time savings can be realized in other ways. Finding architectural bugs
earlier in the process reduces the time wasted in redesign and reverification.
This time savings is difficult to quantify, since teams do not plan on having
architectural bugs. Past experience is the best way to determine what this savings might be. Software teams can also begin development earlier. This may
not be a verification time savings, but it might save time on the total project.
USING AN FVP
The FVP consists of testbench components, design modules, and interface
monitors. The inclusion of the testbench is important, since the FVP provides
a design representation as well as the infrastructure and test environment to
use the FVP. Testbench components include stimulus generators for driving
data into the design, response generators for responding to requests from the
design, and application checkers to verify that the FVP is operating correctly.
Design modules are the functional models that make up the design. They are
developed at the transaction level and follow the refinement and partitioning
of the implementation as it develops. Interface monitors verify correct operation by passively monitoring the transfer of information between design
blocks.
The FVP is created early in the development process at the same time as
the implementation architecture is being defined. The system-level verification team develops and maintains the FVP, which is refined as the design
progresses. The first step in creating an FVP is determining its intended use.
The FVP is not intended to be a one-size-fits-all solution: Each design and
each development process are unique and have their own set of challenges.
While the FVP must always be accurate, the fidelity of the
design may vary. An FVP's detail and fidelity are determined by its intended
use. There are three main uses of an FVP:
As an executable model for software development
As a reference model for subsystem development and integration
As a developed model for a design-in team
Each of these uses has different requirements for how the FVP should be
developed. A development team must determine which of these uses are relevant to their design and what priority should be placed on each.
Software Development
Developing and debugging software using an executable model can save a
project time. Unfortunately, many software teams today have limited
resources and are not available to begin work on a new project until the hardware has been developed. In this situation, the benefits of the FVP as an
executable model are limited. However, if resources are available, knowing
the amount of software to be developed as well as the intended application
gives the developer a good understanding of the fidelity required of the FVP
and the benefits it provides.
There are three basic types of software applications:
Service processor
User interface
Real-time
Service processor applications are designs where software controls a service processor to handle basic startup and maintenance functions. They
require the least amount of fidelity in the FVP. In these designs, the software
needs read and write access to registers and memory space to perform such
operations as configuring mode registers, error handling, and collecting runtime statistics. The FVP must closely model the software-accessible registers
within the system and provide basic register and memory functions. The algorithms, data paths, and processes within the blocks can be a very high-level
implementation and non-specific.
With user interface applications, software controls the processor to let a
user control and monitor the operation of the system. These designs require
greater fidelity than a service processor application. The FVP needs to closely
model the software-accessible registers within the system and be able to monitor run-time events as they occur. The algorithms, data paths, and processes
within the blocks have to provide the visibility and control required by the
software UI.
Real-time software applications are designs where software is directly
involved in the functional processes and algorithms of the system. These
designs require the highest fidelity. In real-time software applications, the
software is tightly coupled with the hardware. The FVP must closely model
the software-accessible registers and memory as well as the algorithms, data
paths, and processes within the blocks. This modeling has often already been
done in the architecture stage of the design.
Subsystem Development
When creating the FVP, it is important to understand the specific needs of
individual blocks. Individual blocks either already exist in an implementation
form, such as third-party IP or reused cores, or they entail new development.
An existing design might not have a transaction-level model (TLM) built for
it, so the FVP team must create a TLM, which is used for integration purposes
only. A TLM for an existing design should concentrate on fidelity at the interface level and only abstractly model the data path and control processes.
FVP blocks that need to be developed might require more detailed modeling. If the block will use the TLM in a top-down manner to verify the
individual sub-blocks as they are designed, the development team should
make sure that the TLM partitions the functions as they will be partitioned in
the design. If the development team is only using the TLM from the FVP for
full subsystem verification, internal partitioning is not necessary.
CREATING AN FVP
The FVP is created in a top-down manner, following the partitioning of
the system architecture. The first step is to identify the modules and the correct hierarchical partitioning. Next, a hollow shell of each module should be
created, with a name that matches the implementation module name. External
ports for each module are then defined and added to the individual modules.
Consistent naming between the model and the implementation is important to
facilitate future integration.
After the modules have been defined, the developer defines the channels
to which the modules are interfaced. Again, careful consideration and planning are necessary to provide common interfaces that can be used throughout
the process. Once the modules and channels have been defined, each module
has its individual processes defined. The modules are defined to be functionally correct representations of the corresponding implementation. Separate threads are used for parallel processes, with state stored in member variables.
also receives transactions across channels from other subsystems and converts
the transaction information into a form the behavioral model can utilize.
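As a rough illustration of this modeling style, the following sketch (assuming SystemC, with hypothetical module and transaction names) shows a hollow module shell whose name and ports are meant to match the implementation, connected through transaction-level channels, with its process in a separate thread and its state in member variables.

```cpp
#include <systemc.h>
#include <iostream>

// Hypothetical transaction passed over the FVP channels.
struct Packet {
    unsigned int addr;
    unsigned int data;
};

// sc_fifo requires a stream operator for the payload type.
inline std::ostream& operator<<(std::ostream& os, const Packet& p) {
    return os << "{addr=" << p.addr << " data=" << p.data << "}";
}

// Hollow shell whose name matches the intended implementation module.
SC_MODULE(ingress_unit) {
    sc_fifo_in<Packet>  in;        // external ports, bound to channels at the top level
    sc_fifo_out<Packet> out;

    unsigned int packets_seen;     // state held in member variables

    void main_process() {          // separate thread for this module's parallel process
        while (true) {
            Packet p = in.read();  // blocking transaction-level read
            ++packets_seen;        // functionally correct behavior only, no timing detail
            out.write(p);
        }
    }

    SC_CTOR(ingress_unit) : packets_seen(0) {
        SC_THREAD(main_process);
    }
};

int sc_main(int, char*[]) {
    sc_fifo<Packet> in_ch(4), out_ch(4);   // transaction-level channels
    ingress_unit u("ingress_unit");        // instance name kept consistent for integration
    u.in(in_ch);
    u.out(out_ch);
    sc_start(10, SC_NS);                   // no stimulus connected; the thread simply blocks
    return 0;
}
```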
Stimulus Generation
Stimulus generation is a mix of random and directed methods targeted at
early architectural and performance verification, along with interface tests to
smooth the integration process. The three layers of FVP verification are
encompassed in the methods used for stimulus generation.
It is important to verify the performance and functional requirements
early in the development process, so architectural errors do not cause redesign
later. This testing is done with directed tests to verify architectural behavior in
a basic, isolated manner. Early random testing may introduce too many variables, clouding the verification and making debug difficult. Once the directed
tests have verified correct basic operation, directed random testing is used to
test special cases and stress conditions for the architecture.
The FVP could be meeting the performance and architectural goals while its behavior still does not match the architect's intention. Directed and pure random
stimulus, along with behavioral monitors placed on individual modules of the
FVP, determine whether a unit is behaving in an improper manner, such as
dropping more packets than it should or bypassing stages in a pipeline.
Often the stimulus that was used to verify the algorithms in isolation can
also be used in a directed manner to prove the correctness of the algorithms in
the FVP. Otherwise, directed tests are used to stimulate the basic operation
and known corner cases of the FVP. Directed random tests are then used to
cover unknown cases.
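The following C++ sketch illustrates one way such a mix of directed and directed random stimulus might be organized; the transaction fields, the directed cases, and the constraint ranges are hypothetical.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Hypothetical transaction driven into the FVP.
struct Transaction {
    uint32_t addr;
    uint32_t length;
};

// Directed stimulus: known basic operations and corner cases first.
std::vector<Transaction> directed_cases() {
    return { {0x0000, 1},     // smallest legal transfer
             {0xFFFC, 4},     // end-of-range access
             {0x1000, 64} };  // maximum burst (assumed limit)
}

// Directed random stimulus: constrained to legal ranges to stress unknown cases.
Transaction random_case(std::mt19937& gen) {
    std::uniform_int_distribution<uint32_t> addr_d(0x0, 0xFFFC);
    std::uniform_int_distribution<uint32_t> len_d(1, 64);
    return { addr_d(gen) & ~0x3u, len_d(gen) };   // keep addresses word-aligned
}

int main() {
    std::mt19937 gen(42);                      // fixed seed for reproducible runs
    std::vector<Transaction> stimulus = directed_cases();
    for (int i = 0; i < 100; ++i)
        stimulus.push_back(random_case(gen));
    return stimulus.size() == 103 ? 0 : 1;     // stimulus would be driven into the FVP here
}
```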
Architectural Checks
Architectural checks monitor and capture the response of the FVP to the
stimulus. The three layers of FVP verification are encompassed in the methods used for architectural checks.
The verification team works with the architects to define the functional
and performance requirements for the system. They then develop checkers to
verify this operation. Functional requirements can include calculation accuracy, event ordering, and correct adherence to protocols. Performance
requirements can include bandwidth, latency of operations, and computation
speeds. These checkers can be self-checking or require post-processing. In
any case, they are the basis for the architectural assertions to be defined later.
Architects often model some of the more complex operations of a design
at the behavioral level to create the best solution and to measure trade-offs.
These models are used as reference models for the FVP to verify that the
intended behavior of the design is implemented correctly.
Many functions of a design can be described as algorithms or simple process descriptions. These algorithms and processes can be represented in many
different forms or languages. These functions within the FVP can be verified
by either embedding the representations into the models with a transaction-level wrapper, or by running the representations in parallel to the model and
verifying that the outputs are equivalent.
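A minimal C++ sketch of the parallel-comparison approach is shown below; reference_algorithm and fvp_model_output stand in for whatever representations the architects and modelers actually provide.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Placeholder for the architects' algorithmic reference (here, a trivial scaling function).
static uint32_t reference_algorithm(uint32_t sample) { return sample * 2u; }

// Placeholder for the response observed at the FVP model's output.
static uint32_t fvp_model_output(uint32_t sample) { return sample * 2u; }

// Architectural checker: drive the same stimulus through both representations
// and flag any divergence.
int main() {
    std::vector<uint32_t> stimulus = {1, 2, 3, 5, 8, 13};
    int mismatches = 0;
    for (uint32_t s : stimulus) {
        uint32_t expected = reference_algorithm(s);
        uint32_t observed = fvp_model_output(s);
        if (expected != observed) {
            std::cerr << "Mismatch for input " << s << ": expected "
                      << expected << ", observed " << observed << "\n";
            ++mismatches;
        }
    }
    return mismatches == 0 ? 0 : 1;
}
```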
It is important to consider the scope of the testing when developing architectural checks. The architectural checks are meant to test the operation of the
entire design to the intended specification. Implementation details of the subsystems are tested by the individual subsystem teams, and system verification
tests the correct interface with real-world stimulus.
Chapter 8
Control Digital Subsystems
Verifying large digital designs
The second phase of the UVM is subsystem development. In this stage,
separate teams design and verify each of the individual subsystems that make
up the final system or device. The subsystems can be separated by function or
by design domain, allowing parallel teams to focus on development of smaller
individual pieces. Today's modern SoC might consist of subsystems from one
of three design domains: control digital, algorithmic digital, and analog/radio
frequency (RF). Although each of these subsystems is developed and verified in a different manner, they are unified in the UVM through the FVP. Each
block can use the models from an FVP as a reference model to compare to
during verification, and each subsystem can use the FVP as an early integration vehicle. The following three chapters discuss each of these subsystems in
more detail.
Control digital subsystems are the most predominant design domain in ICs
today. Many of today's devices consist entirely of one or more control digital subsystems. Control-based digital subsystems are developed from the specification of a control process. The designs may contain data paths but are not
based on algorithmic processing. The complexity associated with massive
digital subsystems has been the focus of much of the functional verification
efforts to date. Specialized tools for adding assertions and measuring coverage, along with specialized verification languages and methodologies, have
been developed to address this. Unfortunately, this focus has been the source
of much of the fragmentation found in functional verification today. These
focused methodologies and tools force the verification team to start development from scratch, isolate the verification of the subsystem from other design
domains, and provide for little, if any, reuse during integration and system
verification. The UVM removes the fragmentation associated with methodologies focused solely on the digital subsystem level by using the FVP to unify
the different stages and design domains. The verification of a control digital
subsystem is broken into four phases in the UVM, as described in this chapter.
tem to make sure that the individual goals obtained, from the lowest feature or
test level up to the ultimate product goal, tie together.
Strategy
Once a clear set of goals has been articulated, the next step is to develop
a verification strategy. Strategy is an overloaded term these days. For the purposes of this discussion, we separate strategy from tactics. Strategy is the
general approach to be taken to meet the specified goals. Tactics are the methods and tools used to implement the strategy.
To develop a clear strategy, the team needs to first review the goals, the
design, and the environment the project must operate in. At this time, the team
should identify design issues and obstacles and prioritize them. Possible
issues or obstacles include the size of the design, the schedule, the number of
features to test, and how to determine when you are done. The most creative
part of the planning process is developing strategies that address the identified
issues and attain the desired goals. Groups generate ideas and strategies in different ways, varying from brainstorming sessions to mechanical problemsolving practices. Regardless of the process, it is important to be open to new
ideas and keep the big picture in mind. Examples of strategies for addressing
common issues include:
Partitioning a larger design into smaller segments for isolated testing
Cutting the schedule by staging deliverables, or purchasing or reusing
older environments
Using random testing or automation to verify as much of the state
space as possible
Using specific coverage metrics to verify that the design is ready to
tape out
Tactics
After the verification strategies have been determined, the team can focus
on the tactics for addressing the strategy. The development of tactics is where
the real go-to-battle planning is done. One can think of it as similar to developing a playbook for a sports team or military unit, detailing the intended use
of the people and technology, the environment and infrastructure, and the
coordination and communication methods. Having a comprehensive playbook keeps developers in sync and provides guidance to those who join later
or work in associated groups.
Listing the specific responsibilities and activities associated with each tactic lets the team know the training and development needs as well as provides
clear directions for resources. In a similar fashion, detailing which tools and
technologies will be used with each tactic leads to smarter purchasing and utilization decisions.
Perhaps the most important part of developing a tactical plan is detailing
the environments and infrastructure to be used, such as a description of the
testbenches to be developed as well as the APIs and interfaces to use. A
project might have many people moving on and off it. Detailing the
basic testbenches and infrastructure enables them to come up to speed quickly
and operate efficiently. Finally, the tactics should describe the processes and
methods for maintaining communication throughout the project. Having all
this information together helps the entire team understand what needs to be
done and how it will be accomplished.
Linting
Linting is the most basic form of static verification analysis. There are
several different types of linting technologies available today, which are discussed in detail in Chapter 13. Designers should do linting up front in their
development process. The most basic forms of linting can identify simple
typographic errors or basic implementation bugs, such as unconnected buses.
Finding these errors quickly at this stage is more efficient than finding them in the front end of an advanced tool or while debugging a simulation failure. More advanced forms of linting can be used to identify code issues, such as synthesizability or dead code, that may cause tool issues later in the verification
process.
Verification engineers or the HDL designer can run linting tools, but usually a designer needs to act on the results. The more basic linting tools should
be available for designers to run at any time while they develop their code.
The more often the designer runs these tools, the better the quality of code
delivered to the verification team will be. Motivating the designer to run these
tools requires that the output be of good quality, without many false or incorrect violations. The tool should be fast and allow the designer to identify the
issue within their code quickly. Advanced verification teams have found that
the investment in time and effort to use linting pays off over the entire project
cycle.
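As a toy illustration of the most basic class of lint check, the following C++ sketch flags wires that are declared but never referenced in a Verilog fragment; production linting tools perform far deeper structural and semantic analysis than this.

```cpp
#include <iostream>
#include <regex>
#include <string>
#include <vector>

// Toy lint-style pass: report declared wires that are never used afterwards.
int main() {
    std::string src =
        "wire ready;\n"
        "wire unused_net;\n"                 // this declaration should be flagged
        "assign ready = enable & valid;\n";

    std::regex decl(R"(wire\s+(\w+)\s*;)");
    std::vector<std::string> names;
    for (std::sregex_iterator it(src.begin(), src.end(), decl), end; it != end; ++it)
        names.push_back((*it)[1]);

    int findings = 0;
    for (const std::string& n : names) {
        // Count occurrences of the identifier; a single hit means only the declaration.
        std::regex use("\\b" + n + "\\b");
        auto hits = std::distance(std::sregex_iterator(src.begin(), src.end(), use),
                                  std::sregex_iterator());
        if (hits <= 1) {
            std::cout << "lint: wire '" << n << "' is declared but never used\n";
            ++findings;
        }
    }
    return findings;   // nonzero exit signals that issues were found
}
```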
STEP 3: EXECUTION
The third and most time-consuming step in the verification of control digital subsystems is executing the verification plan. During this stage
testbenches are developed, tests are written, and the design is simulated in a
manner defined in the test plan. A verification team can take two possible
paths of execution. The traditional path is to break the design into parts, start
verifying the individual parts in isolation, and then integrate them together in
a flow that moves from the lowest level, or bottom up, to the top subsystem
level. The majority of verification teams today use bottom-up flows. A newer path is a top-down flow based on the FVP.
Testbench Development
The method used for developing a testbench has a dramatic impact on the
overall performance and efficiency of the entire verification process. Reuse
speeds the development of testbench components. Limiting the amount of
application-specific information encapsulated in testbench components facilitates reuse. Raising the level of abstraction makes test writing and debugging
much more efficient. Using standard, defined interfaces facilitates the combination and reuse of testbench components.
defined in the system test plan as part of the definition of the transaction taxonomy. The verification team defines the transaction types used at the
subsystem level and also defines goals for the types and sequences of transactions driven into the DUV. Specific correlation goals between stimulus and
response are also defined at this time. Interface coverage is first measured
when block-level testing is completed. The team verifies that all specified
types of transactions have been stimulated, along with a large percentage, if
not all, of the combinations of transaction types. Specified correlation goals
should also all be measured before block verification is considered complete.
Interface coverage is also measured at the subsystem level to verify that all
transaction types have been simulated, along with a subset of the possible
transaction sequences.
The designer first adds the structural coverage monitors along with the
structural assertions. In many cases, assertions can be used to monitor correct
behavior as well as incorrect behavior. The verification team adds to these
coverage monitors in the form of structural assertions when they receive the
implementation from the design team. Structural coverage is measured when
block-level testing is completed. Block-level testing focuses on the specific
implementation features of the design; this is where structural coverage provides the most information. Structural coverage information is collected after
all tests have been run and passed, since it slows down the run times and does
not provide accurate information until the tests pass. Code coverage is also
run after block verification is complete. The verification team identifies holes
in the structural and code coverage and investigates to determine whether the
stimulus is lacking, the design is in error, or there is dead code. Coverage is an
iterative process in which the results are analyzed and modifications are made
to the tests or implementation until the team has addressed all coverage holes.
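A simplified C++ sketch of an interface coverage collector in this spirit is shown below; the transaction kinds and the pair-coverage measure are illustrative assumptions, not a prescribed implementation.

```cpp
#include <cstddef>
#include <iostream>
#include <map>
#include <string>

// Hypothetical transaction kinds from the transaction taxonomy.
enum class Kind { Read, Write, Burst };

static std::string name(Kind k) {
    switch (k) {
        case Kind::Read:  return "Read";
        case Kind::Write: return "Write";
        default:          return "Burst";
    }
}

// Interface coverage collector: counts transaction types and back-to-back pairs.
class InterfaceCoverage {
public:
    void sample(Kind k) {
        ++type_count_[k];
        if (has_prev_) ++pair_count_[{prev_, k}];
        prev_ = k;
        has_prev_ = true;
    }
    void report() const {
        for (const auto& t : type_count_)
            std::cout << name(t.first) << ": " << t.second << "\n";
        std::cout << "distinct type pairs seen: " << pair_count_.size() << "\n";
    }
private:
    std::map<Kind, std::size_t> type_count_;
    std::map<std::pair<Kind, Kind>, std::size_t> pair_count_;
    Kind prev_ = Kind::Read;
    bool has_prev_ = false;
};

int main() {
    InterfaceCoverage cov;
    for (Kind k : {Kind::Write, Kind::Read, Kind::Read, Kind::Burst})
        cov.sample(k);        // in a real flow, sampled by the interface monitor
    cov.report();
    return 0;
}
```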
Acceleration on Demand
the subsystem and for the subsystem as a whole. The team then creates the
block-level testbenches in a manner that allows for reuse at the subsystem
level. Once the block-level testbenches are complete, the verification team
writes block-level tests and waits for the implementation blocks to be delivered by the design team. When the blocks are delivered, they are verified
individually. The team measures coverage and adds tests as required to meet
the specified goals.
After the blocks have been verified individually, the verification team creates testbenches to integrate and test the blocks together. Depending on the
size and number of blocks, there might be many integration steps or just one
integration into a complete subsystem. The testbenches are created with parts
from the block-level testbenches where possible. The verification team develops integration-level tests and verifies the integrated blocks together in the
testbench. Once the integration testing is complete, the subsystem is tested
inside the FVP provided by the SoC team to verify that it works with other
subsystems.
Figure 31 shows a bottom-up UVM-based flow for two blocks developed
independently and integrated into a subsystem.
Chapter 9
Algorithmic Digital Subsystems
Verifying algorithms
Algorithmic digital subsystems are found in designs such as digital signal
processors (DSP), wireless communications devices, and general data path
subsystems. The development and verification of these subsystems has traditionally been left to the few specialists who understand the complex
algorithms and know how they should be implemented. Today's SoCs combine these algorithmic subsystems with standard processors and interface
blocks to provide a complete solution for the customer. The combination of
these subsystems requires more than just a few specialists to understand and
perform design and verification.
The partitioning between the two domains is often blurry at the early stages of
system design and architecture. An algorithmic subsystem most often exists
in this blurry partitioning range.
The FVP allows the architect to refine the system architecture to make the
correct partitioning between the continuous and discrete domains. An FVP
supports the development and verification of both domains together in one
system representation. The algorithmic subsystem can be modeled to operate
in either domain until final decisions are made. Converters between the
domains can be used to isolate the analog and digital circuitry as the architect
sees fit. Once the architecture is completed, the FVP should contain a mix of
models that interface at the transaction-level, along with algorithms and converters that interface to the continuous time models.
Algorithmic Models
Algorithmic-based subsystems are most commonly used today in communications and multimedia systems. The system and environmental effects on
these types of designs are not as easily predicted as they are in control-based
designs. This unpredictability increases the likelihood of an error in the algorithm not being detected until system integration testing. Algorithmic
development teams need to verify the intent of the design before implementation is performed. There are too many variables to wait for an accurate
implementation before beginning verification.
Algorithmic-based subsystems are developed by either modifying some
blocks in an existing system to provide a new function or reconfiguring the
blocks of existing systems to provide a derivative function. Both these processes require accurate modeling of the individual blocks at different levels of
abstraction. A broad range of building blocks is necessary for developing and
verifying the subsystem. Communications and multimedia applications are
usually based on standards and protocol layers. A broad up-to-date library of
standard algorithmic system components is required for accurate development
and verification.
The algorithmic subsystem is modeled in the FVP based on the application. The FVP is a transaction-level model of the system, but algorithmic
subsystems often operate and interface in a more continuous-time domain. To
correctly model an algorithmic subsystem for the FVP, each interface should
be modeled in the most efficient manner for passing information. Algorithmic
subsystems interface to other mixed-signal subsystems where a continuous
time-based interface is most efficient. The simulation of these mixed-signal
interfaces is discussed in Chapter 8. Algorithmic subsystems also often interface to control-based subsystems. These interfaces are defined at the
transaction level similar to control-based digital subsystems.
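The idea of wrapping an algorithm with a transaction-level interface can be sketched in a few lines of C++; the FIR filter, its coefficients, and the Frame transaction are illustrative only.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical transaction carrying a frame of samples between FVP models.
struct Frame {
    std::vector<double> samples;
};

// Algorithmic core: a small FIR filter, independent of any interface style.
static std::vector<double> fir(const std::vector<double>& in,
                               const std::vector<double>& coeff) {
    std::vector<double> out(in.size(), 0.0);
    for (std::size_t n = 0; n < in.size(); ++n)
        for (std::size_t k = 0; k < coeff.size() && k <= n; ++k)
            out[n] += coeff[k] * in[n - k];
    return out;
}

// Transaction-level wrapper: the rest of the FVP sees only Frame in, Frame out.
class FirSubsystemTlm {
public:
    explicit FirSubsystemTlm(std::vector<double> coeff) : coeff_(std::move(coeff)) {}
    Frame process(const Frame& in) const { return Frame{fir(in.samples, coeff_)}; }
private:
    std::vector<double> coeff_;
};

int main() {
    FirSubsystemTlm dsp({0.25, 0.5, 0.25});           // illustrative coefficients
    Frame out = dsp.process(Frame{{1.0, 0.0, 0.0, 0.0}});
    return out.samples.size() == 4 ? 0 : 1;           // impulse response has the same length
}
```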
in this process is placing the hardware implementation back into the algorithm
workbench used to verify the original algorithm. Care must be taken when
implementing the RTL so that it interfaces correctly in the workbench environment. Unfortunately, the RTL model simulates at much slower speeds than
the algorithmic equivalent. This may result in reducing the amount of testing
performed in the environment to just the most critical.
The hardware implementation of the algorithm should also be verified in a
standard HDL-simulation environment. The algorithmic subsystem should be
simulated with the connecting digital subsystems to verify connectivity and
interoperability. The FVP or a modified existing control-digital subsystem
testbench can be used for this testing.
Chapter 10
Analog/RF Subsystems
Verifying analog subsystems
Analog subsystems include the classic analog designs as well as RF
designs and high-speed digital designs developed in a full custom manner.
The common thread in each design type is the need for functional verification
at the logical level as well as the transient or AC level. Standard digital design
verification separates the functional verification of the design down to the
Boolean or gate level from the implementation verification of the gates and
transistors. Verifying analog subsystems is more complex because functionality is equally impacted by the logical and physical design. Small changes in
placement, component sizing, or silicon process can dramatically impact
functionality.
The close relationship of physical design and verification with the functional verification of analog subsystems has led to an integrated approach to
developing these subsystems. Standard digital devices might be split into separate design and verification tasks, with specialists in each area. The design
and verification of analog subsystems are considered integrated tasks often
performed by the same individual. Many analog developers consider functional verification as part of the larger design task. The UVM focuses on
functional verification, but we cannot simply ignore analog subsystems as a
design-only concentration. Most SoC designs being developed today contain
both analog and digital subsystems. Verifying the individual subsystems as
well as the integration of the subsystems is part of an overall UVM. So
instead, we will focus on how the UVM connects to an advanced custom
design process to provide successful SoC verification. This chapter introduces
the Cadence Advanced Custom Design methodology and describes how it
integrates with the UVM.1
1. This chapter is taken from the Cadence Design Systems white paper "The Advanced Custom Design Methodology," written by Kurt Thompson in September 2003.
ology, which is done using a bottom-up approach. In most cases, the block
only has the transistor level and layout abstraction levels supported. As a
result, the abstraction levels are derived bottom-up and then fed to the top-down process.
Abstraction levels serve as the foundation of the meet-in-the-middle
approach. Both simulation and physical design have predefined abstraction
levels, which are updated through the design process and support the mixed-level capability. The abstraction levels are:
team predict the time needed for each stage in the design process more
confidently.
System Requirements
System requirements is the first task in the flow. This is where the UVM
and ACD methodologies come together. The development of an FVP at the
architectural development stage of the UVM provides the system requirements that are fed into the analog subsystem stages. The FVP can model
analog, RF, or custom digital blocks at a high system level using C-like algorithms wrapped in a transaction-level wrapper. The FVP is given to the analog
subsystem teams. The team provides models for the individual subsystem
functionality and a system-level test environment. They can begin the fast
top-down simulation process using these models and the FVP environment.
Through the ACD flow, the functionality can be modified or refined.
These changes are updated in the FVP model and provided to the system-level
verification team for analysis and distribution to other subsystem teams. The
meet-in-the-middle process develops more accurate models. Where it makes
sense, these models can be integrated into the FVP models and tested within
the system-level test suite. Each of these steps ensures that the verification
and integration of digital and analog subsystems are performed in a unified
manner.
Process Feasibility
With IC requirements generated from system specifications and the FVP,
process technology selection must occur. Evaluations of silicon accuracy
capabilities and various integration strategies must be performed to verify the
feasibility of the proposed integration approach. Issues such as performance,
noise characteristics, cost, circuit type, and risk are all considered.
IC Requirements Translation
The system design process produces specifications that the IC must meet.
The system design process leverages the UVM in using these requirements
through system-level models, testbenches, and measurements. The testbenches may be further enhanced to match specific IC specifications where
the specification-driven environment can be set up. The specification-driven
environment then drives the chip level; subsequently, the block level is tested in a manner consistent with the original requirements given to the design team.
Simulation Strategy
With a process selected and its feasibility and silicon accuracy ensured,
the strategy by which the design will be built can be defined. At this point, the
design team has made primary decisions as to the integration strategy of the
design and identified the constraints to insert through the design process
based on silicon accuracy data.
Successfully executing a complex design is contingent on the thoroughness of the planning up front. No design can come together smoothly by
accident. With a strong plan in the beginning that specifies the top-level and
block-level requirements and the mixed-level strategies to use, a meet-in-the-middle approach can drive each block design to ensure full coverage of
important design specifications and smoothly allow for blocks to have different schedule constraints. By using the most up-to-date information available
at any given time, blocks that are done earlier can be verified in the top-level
context and be ready to go. This enables time and resources to focus on the
more complex blocks, which can also be using the most up-to-date
information.
At this point, the high risk points flagged for targeted verification are
examined. These could be areas such as analog/digital interfaces, timing constraints, or signal/data paths. What is extremely important at this stage is to
look at a simulation and physical design approach that can support verifying
these risk points. The mixed-level approach needs to be examined to determine the abstraction level these points are described at. For example, a key
analog/digital interface might need both the digital interface and analog interface sections described at the transistor level, with detailed parasitic
information in between to ensure bit errors do not occur. If this is the case, it
should be determined how the design will be partitioned to allow this simulation to occur in an efficient and repeatable manner. Often, this interface can
only be meaningfully tested at chip level over a variety of simulation setups.
Predictability is predicated on the assumption that all critical items are part of
a simulation and verification strategy and are repeatable and reliably execute
throughout the design process.
With critical circuit issues identified, the next step is to tackle design partitioning as part of the simulation and physical design plans. It is important to
consider design partitioning from a functional perspective as well as an
enabler to use the design tools effectively to verify the identified critical circuit issues. The designer must consider the ability of the tools to handle
certain types of analysis, and design the circuit hierarchy to isolate each issue
and efficiently tackle the problems associated with it.
Design partitioning is nearly always looked at from a functional perspective. It is natural to partition in this way because it leads to block
If the design partitioning does not take this situation into account, the next
option is to bring the analog and digital blocks into transistor level (assuming
this interface is critical and needs to be simulated at the transistor level).
While this achieves the objectives, it is quite possible this simulation would
be quite slow regardless of which simulator was used. Waiting for transistor
level also requires that the transistor level is complete, while the partitioning
approach allows for the analog and digital sections to be completed at their
own pace. If the interface sections get done first, the interface itself can be
tested before the analog and digital core pieces are complete, aiding a fast
design process. The ability to simulate the interface of concern at transistor
level satisfies the silicon accuracy requirement. As the design evolution
occurs, it might be desirable to bring more pieces into transistor level or to
simulate the analog blocks with the analog interface in transistor. This adds to
the predictability of the design process by enabling evolution and resolving
critical design issues early on.
Thus, the simulation strategy must be comprehensive to account for all
tests that must be performed and ensure that the design database is partitioned in a manner conducive to that strategy. The simulation strategy should also take into
account the completion estimates of each individual block and specify the
mixed level for that simulation. For example, the following table lists some
example sections of a simulation strategy.
Table 3. Example sections of a simulation strategy, mapping each top-level test (Codec Verification, BER Function, DSP Verification) to the abstraction level used for each block (ADC, DAC, CODEC, DSP) and for the testbench, with levels such as SPICE, Verilog-AMS, functional models, and a system BER testbench.
For large SoCs, separate tables may be necessary for the major blocks.
Often, the first level of hierarchy for each block is much like a large chip and
can have all the issues associated with a chip. In these cases, separate add-on
tables, such as the one above, might exist for each block at the top level and
subsequently through the hierarchy, where applicable.
As the design evolves, analog HDL descriptions can get more accurate as
transistor-level simulation results are back-annotated into the models. There
is some simulation speed price for this. The simulation strategy is amended
where accurate models are needed. In block cases, it is likely that accurate
HDL is used across the board in conjunction with FastSPICE capability, and
SPICE-level capability for the most sensitive circuits.
For complex blocks that require some silicon accuracy at the top level, the
block designer might specify a particular mixed-level configuration when
simulating at the top level. At the top level, this block-specific configuration
exercises the simulation strategy. One view might be a non-hierarchical
behavioral view for the block, another might contain the internal sub-blocks
at accurate-HDL or transistor levels. This hierarchy and configuration must
be managed to match the simulation strategy.
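One way to keep such a strategy explicit and machine-checkable is to capture it as data, as in the following C++ sketch; the test names, block names, and block-to-level assignments are made up for illustration.

```cpp
#include <iostream>
#include <map>
#include <string>

// Abstraction level chosen for a block within a particular top-level test.
enum class Level { Spice, VerilogAms, FastSpice, Functional };

static const char* to_string(Level l) {
    switch (l) {
        case Level::Spice:      return "SPICE";
        case Level::VerilogAms: return "Verilog-AMS";
        case Level::FastSpice:  return "FastSPICE";
        default:                return "Functional";
    }
}

int main() {
    // strategy[test][block] = abstraction level to use in that mixed-level simulation.
    std::map<std::string, std::map<std::string, Level>> strategy = {
        {"Codec Verification", {{"ADC", Level::Spice},
                                {"CODEC", Level::VerilogAms},
                                {"DSP", Level::Functional}}},
        {"DSP Verification",   {{"ADC", Level::VerilogAms},
                                {"DSP", Level::Functional}}},
    };

    for (const auto& test : strategy) {
        std::cout << test.first << ":\n";
        for (const auto& block : test.second)
            std::cout << "  " << block.first << " -> " << to_string(block.second) << "\n";
    }
    return 0;
}
```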
Block-Level Design
Block-level design is based initially on the top-level simulations that verify the block specifications. Block-level design then encompasses the
detailed, silicon-accurate, transistor-level design. This also includes incorporating parasitic data and performing silicon analysis.
Model Calibration
The silicon accuracy process, enabled through the bottom-up flow,
requires higher level abstractions to maintain as much of the silicon-accurate
information of the individual blocks as possible. This requirement is met
through calibrating functionally correct behavioral models with silicon-accurate design data derived through the post-layout transistor-level simulations.
Updated Routes
As the design evolves, the initial setups are used to route updated physical
abstracts that represent more accurate size estimations, ultimately through
accurate block abstracts generated from the completed block layout process.
RC Extraction
Whenever possible, post-layout analysis on the first cut database should be
set up, even if the results are not totally meaningful at this point.
Silicon Analysis
At the top level, silicon-accurate analysis that functional-based simulation
does not catch is performed. This includes IR drop, electromigration (EM),
and substrate noise analysis. Some silicon analysis can be performed at the
block level, and some can be performed during the updated routing tasks.
Chip Finishing
Chip finishing includes tapeout preparation tasks, such as adding a PG
test, layer editing, adding copyright and logos, and metal fill. At this point, it
might also be necessary to make final edits based on last minute design needs.
Chapter 11
Integration and System Verification
Verifying system operation
The final stages of the UVM are system integration and verification. Once
each of the individual subsystems has been verified using the UVM, it is time
to bring them together and verify the operation of the system as a whole. System integration and verification is where a verification methodology is put to
the test. Fragmented verification approaches fall apart when you try to integrate incompatible test environments that have been developed in complete
isolation. Using a unified methodology facilitates efficient integration utilizing testbench reuse and common models. After integration, final system
verification is performed to ensure correct operation under real-world environments. System verification techniques can vary depending on the specific
application, but the goals should remain the same. This chapter focuses on
best practices and techniques used in system integration and verification.1
SYSTEM INTEGRATION
In the UVM, each subsystem is continuously verified using the FVP as a
common reference, so the integration and test of the system should be
straightforward. The SoC team integrates each implementation block into the
FVP one at a time and runs the system test suite to verify the integration. The
lower level assertions and monitors should also be included in the integration
testing to aid debugging. The test plan is run with the FVP for comparison
checking. Once the system has been verified as equivalent to the FVP, the
implementation is considered the implementation-level FVP, and the original
FVP is the transaction-level FVP. The design is then ready for system
verification.
1. Parts of this chapter are taken from "Hardware-Based Verification Is Necessary for Today's Million Gate+ Designs," by Ray Turner, Jr., published in the Cadence Design Systems Verification Talk newsletter, March 2002.
the transfer. The signal-level top level is also necessary, since it is common
for many side-band signals to appear between implementation blocks that are
not necessary in the transaction level.
The SoC team makes the integration process smoother by verifying the
physical signal-level top level with the transaction models as shown in Figure
41. In this example, transactors are placed at the interfaces of each model, and
interface monitors are converted to signal level and placed between pairs of
transactors. The test suite is then rerun to verify the signal-level top level. The
transactors and interface monitors, which will be reused by the subsystem
teams, are also verified in this configuration.
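A condensed sketch of such a transactor, assuming SystemC and a hypothetical valid/address/data handshake, is shown below; it converts transaction-level traffic into signal-level activity at a block boundary.

```cpp
#include <systemc.h>
#include <iostream>

// Hypothetical transaction used on the FVP side of the boundary.
struct BusTxn {
    unsigned int addr;
    unsigned int data;
};

// sc_fifo requires a stream operator for the payload type.
inline std::ostream& operator<<(std::ostream& os, const BusTxn& t) {
    return os << "{addr=" << t.addr << " data=" << t.data << "}";
}

// Transactor: converts transaction-level traffic into a simple valid/addr/data
// signal-level protocol at the boundary of an implementation block.
SC_MODULE(bus_driver) {
    sc_in<bool>          clk;
    sc_fifo_in<BusTxn>   txn_in;     // transaction-level side
    sc_out<bool>         valid;      // signal-level side
    sc_out<sc_uint<32> > addr;
    sc_out<sc_uint<32> > data;

    void drive() {
        valid.write(false);
        while (true) {
            BusTxn t = txn_in.read();        // block until a transaction arrives
            wait(clk.posedge_event());
            addr.write(t.addr);
            data.write(t.data);
            valid.write(true);               // one-cycle valid pulse
            wait(clk.posedge_event());
            valid.write(false);
        }
    }
    SC_CTOR(bus_driver) { SC_THREAD(drive); }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);
    sc_fifo<BusTxn> txns(4);
    sc_signal<bool> valid_s;
    sc_signal<sc_uint<32> > addr_s, data_s;

    bus_driver drv("bus_driver");
    drv.clk(clk);
    drv.txn_in(txns);
    drv.valid(valid_s);
    drv.addr(addr_s);
    drv.data(data_s);

    sc_start(100, SC_NS);    // an interface monitor would observe valid/addr/data here
    return 0;
}
```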
Simulation Acceleration
Hardware-based verification provides several modes of simulation acceleration, with varying levels of performance improvement over simulation
alone. The first mode is accelerated co-simulation, which is the easiest mode
to implement. With accelerated co-simulation, also known as lock-step co-simulation, engineers compile and download their design on a dedicated hardware verification engine while leaving their behavioral or C++ testbench in
the simulation environment. With the simulator or C-testbench within the
workstation communicating in lock-step fashion with the design in the accelerator, simulation performance is increased from two to ten times faster than
traditional software simulation. This is primarily because the bottleneck (the workstation) is now only responsible for a fraction of the total simulation load (the testbench).
Lock-step co-simulation is easiest to implement because it involves little
change to the simulation environment. In most cases, the design is simply
split into two pieces: the behavioral portion, which is generally the testbench,
remains on the workstation, while the synthesizable portion, along with memory constructs, is loaded onto the accelerator. Beyond this, little or no
change is made to the environment, which allows for fast implementation.
This approach has some drawbacks, which are primarily caused by the fact
Order of Integration
The order of integration depends on the design and the delivery time of
each piece. If the design contains analog connected to a digital signal processor (DSP) block, with the DSP block connected to a control-based digital
block, the analog and DSP blocks are verified first, and then verified with the
control-based digital. If the design contains analog blocks directly connected
to control-based digital along with algorithmic-based digital, the control-based digital is integrated first with the analog independently, and then with
the algorithmic-based digital independently. When these two integrations
have been verified, the system as a whole is verified.
SYSTEM VERIFICATION
The goal of the system verification phase is to verify the system under
real-world operating conditions. The UVM utilizes system verification for
several roles. The first is to verify that the testbench environment used to stimulate and check the implementation has accurately reflected the system. It also provides a mechanism for hardware-software co-verification in a realistic environment. Up to this point, software development has been done on a model
or Instruction Set Simulator (ISS) attached to the FVP or an implementation
model using only basic software debug tools. In system verification, software
is run either on the actual CPU or a mapped version of the CPU utilizing all
the software debug tools available in a real-world environment. System verification is also used as a design chain handoff mechanism, allowing early
access of the implementation to design chain partners.
Three basic types of system verification methods are used in the UVM:
software-based simulation, hardware prototype, and emulation. Which
method to use depends on the application and the skill set of the team. Software simulation works well for smaller designs that do not need to run at fast
speed for long periods of time. Setup and conversion to software simulation
methods is straightforward. FPGA development platforms work well for
modular designs such as SoCs. However, setup and conversion can be cumbersome for someone not experienced with FPGA development and
partitioning. The FPGA solution can run much closer to the speeds of the system, but it provides only limited visibility to help debug any problems
encountered. Emulation systems work well for large designs that do not need
to run at full speeds, but do need to be accelerated much faster than simulation. Emulation systems interface well to external devices and provide
excellent visibility and support for debugging design issues.
Software-based Simulation
Software-based system verification provides the greatest amount of
observability and the smallest modification time of the three methods. However, performance can limit the amount of verification performed. The first
step is modifying the testbench. The implementation is now controlled and
monitored by the actual external environment. The testbench is modified to
remove stimulus generators. Response checkers are modified to become passive monitors for debugging. Hardware-dependent software must be loaded
into the processor model in the simulation or controlled through an instruction
set simulator. If the system contains mixed-signal subsystems, such as algorithmic digital or analog subsystems, they are either modeled in a higher level
of abstraction for simulation, such as a TLM, or black-boxed and ignored.
Stimulus can be provided in several different ways:
Interfacing to test equipment and capturing the stimulus for playback
on the implementation
Using API interfaces to the simulator to receive and drive data to and
from a workstation or network
A model or TLM that mimics the real-world stimulus
The output of the system is verified through comparison and analysis. It
can be compared for accuracy to an existing system or the FVP. Analysis can
verify user interfaces and performance requirements.
Advanced verification techniques continue to be used throughout system
verification. All the assertions defined at the architectural, interface, and
structural levels are reused to aid debugging and detecting errors found with
the new stimulus. Architectural-level coverage goals can be reverified to
ensure that stimulus is working correctly, but lower level coverage monitoring is not used. Transaction-level interfaces provide the same debugging
environments used in subsystem development and integration. Acceleration
increases the simulation speed of the design in a similar manner as with system integration.
Hardware Prototypes
Hardware prototypes are hardware systems built to replicate the real system environment using programmable hardware, such as FPGAs, to represent
the implementation. Hardware prototypes might provide the highest-performance solution for system verification, but they also require the most work. The
process begins with developing the prototype system. In most cases, the prototype system must be built or modified for system verification use. The
prototype system requires a tested board with standard interface components,
along with observation interfaces. The digital implementation is placed in
programmable hardware. The analog and mixed-signal subsystems are implemented on the board.
Once the system is built and debugged, the design is compiled into programmable hardware devices, and system clock speeds are chosen to meet the
timing of the compiled implementation. The subsystem teams might have
already verified that the blocks map correctly into programmable hardware as
part of the block-hardening process. Stimulus is provided through the prototype system board and can be driven through test equipment, external
workstations, or networks. Software loading and debugging are done in the
same manner as the real system. Service processors can be used to boot the
system, and software debuggers connected through JTAG interfaces can be
used to debug the software. The response checking is done in a manner similar to the real system. Response to the stimulus can be captured for analysis
by test equipment, or the application can be tested to the user requirements.
Advanced verification techniques are not commonly used with hardware
prototypes. Assertions and coverage monitors do not map easily or have a
common interface to standard programmable hardware devices. Hardware
prototypes can be replicated for design chain partners, providing early access
to the implementation-level FVP. This requires a great deal of up-front planning and back-end support.
Emulation
In-circuit emulation provides the highest run-time performance for regression testing, hardware-software co-verification, and system-level verification.
In-circuit emulation replaces the testbench with physical hardware, which is
typically the system for which the IC is being designed. Working at the system level, in-circuit emulation verifies the IC as it interacts with the system,
which includes system firmware and software. Rather than using testbench-generated stimulus, which is often limited in scope, in-circuit emulation is
able to use live data generated in a real-world environment. Data generated at
high speed, using industry-standard test equipment, is also available with in-
circuit emulation. In many cases, the last handful of corner-case bugs, which
if undetected would result in costly chip respins, can only be discovered
through the interaction of the IC in the context of the system, with software,
firmware, and live data.
In-circuit emulation also bridges the debug environment between simulation and physical hardware. Even when running in-circuit with a live target
system, emulation provides a comprehensive debug environment with full
visibility into the design being emulated. Combined with very fast compile
times (typically 4 million IC gates per hour on a single workstation), in-circuit
emulation becomes similar to simulation, where bugs can be found quickly,
fixed, and recompiled, often in less than an hour.
Several different applications of in-circuit emulation are available. Two of
the most commonly used include vector regression and hardware/software co-verification. Using the emulation system in vector regression mode enables
users to run their sign-off suite of vectors at high speeds, which is valuable for
final certification of any design before tapeout. With vector regression mode,
test vectors are loaded onto the emulator along with the design. These vectors
are then used to stimulate the design, with the output vectors being captured.
For long regression tests or suites of tests, additional vectors can be loaded
onto the emulator as the previously loaded set of vectors is executed. Likewise, results from the previous set of test vectors are off-loaded and stored on
disk as the current set of results is captured. This ability to load and unload
one test while another test is executing maximizes throughput of the emulator.
To create a chip tester-like environment, an emulator can optionally compare the vector test results with golden result vectors on-the-fly and report
pass/fail results. For failed tests, which can be debugged later, vector mismatches are highlighted in the waveform display browser. With vector
regression mode, the emulator can be kept constantly testing designs, which
dramatically reduces the time required to complete a large test or entire
regression suite, often from weeks to hours.
Hardware-software co-verification is a powerful option to in-circuit emulation that can dramatically reduce the verification time and development time
of today's designs. By providing a functional system environment, an in-circuit emulation system can be used to develop system-level software, even as
the IC design is being verified. By developing software in parallel with hardware, not only is the development schedule effectively compressed, but the
system-level software becomes available as an additional verification tool for
the hardware, which provides another means to uncover deeply hidden bugs.
By testing software while the hardware is still being developed, changes can
be made in the hardware design before tapeout to yield optimal solutions.
SECTION 3
TOOLS OF THE TRADE
Chapter 12
System-Level Design
System modeling, software, and abstraction
Early system modeling and verification are the cornerstone of a modern
unified verification methodology. Earlier we introduced the FVP and showed
how it can unify a verification methodology to improve speed and efficiency.
An FVP is one example of a system model that has been used for many years
to assist in system design, system verification, and software development
tasks. In this chapter, we will look at the issues that can be addressed with an
FVP and how software development and functional verification can be unified in a methodology. We will also address the important topics of design
and verification abstraction.
and often require large portions of the design to be redesigned. Finding these
architectural bugs as soon as possible limits the amount of rework by designers and verification. The system model can help find bugs that have slipped
through the verification process, and it can help find bugs sooner. Developing
a system model early in the development process allows the verification team
to verify the architecture and system design before implementation begins.
Verifying the system model early identifies bugs sooner than waiting for the
implementation to be completed. Correcting these bugs early saves implementation and verification time and resources.
In addition to finding bugs, verification teams face efficiency and productivity issues. Many development teams view design and verification as serial
processes. They believe that verification does not begin until the implementation has been completed, because you need to have something to test before
you can begin testing. Many verification tasks, such as testbench development
and test writing, can be done in parallel with the development of the implementation, but the verification engineers cannot test or debug their testbench
or tests until the design is made available. When a system model is available,
it can be used to test the verification environment and to develop and debug
the verification tests and infrastructure before the implementation is completed. This means that once the design is made available time is not lost
integrating, debugging, and bringing up the verification environment. This
can dramatically improve the verification schedule and spread the number of
resources more evenly.
The system model can also be reused as part of the verification environment, thereby saving testbench development time. A large portion of the
development in most testbenches is checking the response of the design to
stimulus and determining whether it is correct. If the system model is developed as an executable specification, the system model can be used to predict
the correct response for the design. This saves time in developing testbench
components. The system model can also be used as a verification environment. If the system model is partitioned correctly, it might be possible to
replace parts of the model with the actual implementation and reverify the
model. Verifying parts in this manner can facilitate reusing the system-level
tests to verify the operation of the part within the context of the entire system.
This reduces the number of tests that need to be written.
If the design is part of a design chain, the design customer might need
access to the design early in the development process. The verification team is
often required to provide this prototype. Many teams deliver an implementation-level model or a prototype in the form of an FPGA. Instead, a system
model of the implementation can be delivered earlier in the design process
and in a more useable format for the customer. The customer might still
require a more detailed prototype later in the process, but the system model
can satisfy them until it is available.
Verification teams can use random code generators to verify these types of
systems. These generators create code snippets that can be loaded into the
engine and run. Usually, the generation must be tightly constrained to allow
only valid code snippets. Software environments allow microcode engines to
verify both the hardware and the integration of software.
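A toy C++ sketch of a tightly constrained code-snippet generator for a made-up three-instruction engine follows; the instruction set and the constraints are entirely hypothetical.

```cpp
#include <iostream>
#include <random>
#include <vector>

// Made-up microcode instructions for illustration.
enum class Op { Load, Add, Store };

struct Insn {
    Op  op;
    int reg;   // constrained to the engine's register count
};

// Generate a snippet that is always architecturally valid: it must start with a
// Load, end with a Store, and only touch registers 0..3.
std::vector<Insn> gen_snippet(std::mt19937& gen, int body_len) {
    std::uniform_int_distribution<int> reg_d(0, 3);
    std::vector<Insn> code;
    code.push_back({Op::Load, reg_d(gen)});
    for (int i = 0; i < body_len; ++i)
        code.push_back({Op::Add, reg_d(gen)});
    code.push_back({Op::Store, reg_d(gen)});
    return code;
}

int main() {
    std::mt19937 gen(7);
    for (const Insn& i : gen_snippet(gen, 4)) {
        const char* names[] = {"LOAD", "ADD", "STORE"};
        std::cout << names[static_cast<int>(i.op)] << " r" << i.reg << "\n";
    }
    return 0;
}
```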
Hardware Platforms
ware is usually readily available, and verification only needs to focus at the
device driver layer.
An algorithm can be implemented in hardware by converting it to a custom implementation or by running it on a standard processor. The algorithm is
first developed and verified in its software form in an environment such as an
FVP. Functional verification is left to verify the final implementation of the
algorithm and the interface to the system. If the algorithm is implemented
with custom hardware for speed or size reasons, verification focuses on verifying the conversion from algorithm to RTL or gates and the correct interface
to the rest of the system. The verification team can utilize the original software algorithm as a reference model. Random or directed stimulus can be
applied to the implementation and the software algorithm in parallel, and
responses can be compared. The quality of checking is determined by the
fidelity of the original software algorithm to the hardware implementation.
If the model is cycle-accurate to the implementation, detailed checking
can be done. If the model is untimed or behavioral, the checking can be more
difficult and less accurate. If the algorithm is implemented by running the
ABSTRACTION
When you abstract information from an object, you take only the information that is relevant to your purpose and remove the rest of the details.
Removing the details that are not important allows you to represent and analyze larger and more complex amounts of information. Abstracting
information does not make that information less accurate. Many people
believe that the more abstract a piece of information is, the less accurate it is.
Accuracy measures the correctness of the information. Information as it is
abstracted must remain accurate. Fidelity measures how closely the information represents the original details. The fidelity of the information decreases
as information is abstracted, but the accuracy must not.
Design Abstraction
Abstraction has made the design of complex electronic systems possible
and can be beneficial in the verification of these same systems. At the most
basic level, an electronic design is simply the flow of electrons through different physical materials. The most basic representation of an electronic system is
the description of the physical materials, along with the charges applied to the
materials. This level of detail was sufficient for describing very small primitive electronic behaviors. As designers wanted to represent more complex
circuits, they needed to abstract the information from the circuit that was most
relevant to the design. In this case, designers abstracted from the physical layout details the functional components that these details represented. The
designer could then design using components such as transistors, resistors,
and diodes. The facilitator for moving to this higher level of abstraction was
the development of circuit simulators, such as SPICE, along with the associated libraries and netlisting facilities. These automation tools allowed
designers to represent and analyze their designs at the component level, often
referred to as the transistor level.
As designs became larger and more complex, designers again needed to
move to a higher level of abstraction to more efficiently represent and analyze
their designs. They abstracted from the component details of transistors and
diodes to the Boolean gate level. This allowed the designers to simply specify
the Boolean gate types that are created from various components. The facilitators for this were schematic capture, Boolean optimization, and analysis tools.
Once again this level of abstraction worked well until the size and complexity
of the designs outgrew the effectiveness of this level.
The next level of abstraction was not as easy to define as the standard
component or Boolean gate level. The next level needed to represent the
behavioral characteristics of the design in a more efficient manner, but there
was not an established standard for this representation. For several years different design teams used different behavioral descriptions for this next
abstraction level. These different and incompatible levels of reference made it
difficult for tool development to facilitate a large move to one level. The facilitator became logic synthesis, based on an industry-standard description language called Verilog. The resulting level abstracted the functional representation of synchronous designs from the logic gates by specifying the
functional operation between clock cycles or registers. This level became
known as RTL. Most designs today are written at RTL, which is then translated to the logic gate level by logic synthesis, which maps to gate libraries
specifying component-level detail, and finally to the layout level of detail.
Many attempts have been made to again raise the level of abstraction.
Some success has been achieved in representing designs at an algorithmic or
behavioral level, but the loss of detail in moving to this level has resulted in
less optimized designs. At some point, the demand for larger and more complex systems, along with design automation tool breakthroughs, will move the
design representation once again to a higher level of abstraction. Most experts
today feel that the physical design and functional verification of today's
designs must improve before design size and complexity grow to the breaking point.
Verification Abstraction
Verifying electronic systems has always followed the lead of design in
representing and abstracting information. At the physical or transistor level,
stimulus generation is quite simple: it need only span the operating region. The fabrication process, temperature, and frequency are the important simulation parameters to test across that region. Analysis at this level is
focused more on AC and transient effects than on functional behavior. At the
gate level, Boolean verification could often be done by brute force, applying
test vectors to cover all possible test cases. Once designers moved to RTL,
there was a need for more complex testbenches with stimulus generation and
response checking. These testbenches were written in the same register transfer language used for system design. As RTL designs became more complex,
RTL became more cumbersome for developing complex testbenches. The
need for improved verification efficiency was first addressed with specific
verification languages, which added optimizations for coding and data representation to RTL. These languages suffered in performance, because they
were still tied to RTL and were fragmented due to the inability to standardize
on one language.
Advanced verification teams have come to realize that verifying today's
large complex systems requires moving to a higher level of abstraction. They
cannot wait for designs to lead the way to the next abstraction level. RTL is
sufficient for verifying low-level details, such as signaling of protocols and
handshakes at interfaces, but it is not sufficient for representing the algorithms, data structures, and data flows within a complex system. Advanced
verification teams have begun to move to the transaction level of abstraction,
which removes the low-level signaling details and represents the algorithms
and data flows in a more efficient manner. Removing this level of detail
makes definition easier and simulations faster, and allows teams to begin before
all the low-level details have been defined. The transaction level still retains
the implementation-specific information necessary for verification and facilitates the refinement of this information as the design develops. Moving to an
even higher level, such as a pure behavioral level of abstraction, would
remove necessary information for verification.
Currently, the definition of the transaction level suffers from the same lack
of clarity that RTL did before logic synthesis was introduced. Many teams are
working at the transaction level, but they each have slightly different definitions of what level of detail is needed. Some teams specify the transaction
level to the cycle boundary, where the information at each clock cycle is accurate to RTL. This cycle-accurate definition is only slightly higher than RTL
and might be the next step for design, but in most cases, it is still too low a
level for verification. Other teams only specify the behavior of the design and
include only the most basic forms of timing and synchronization information.
The driver for RTL was the move to a common logic synthesis tool: teams understood that it was in their best interest to converge on a single accepted representation. The definition of the transaction level will instead be driven by verification and, more specifically, the testbench. The industry will
converge on a single representation of the transaction level to facilitate the
development, transfer, and reuse of models and testbench components,
because it is for the common good of all. Individual teams today have settled on their own definitions of the transaction level. These definitions have allowed the teams to define interfaces between testbenches, models, and tools. The infrastructure
for verification is built around this definition. As the different definitions converge into one, the industry will be able to leverage this infrastructure and
optimize efficiency.
Transaction-Level Modeling
The reason we have included this discussion of abstraction is that the system model should be written at the transaction level of abstraction. The
system model should be started early in the development process before all
the implementation details are available. The transaction level allows the definition of the model with basic implementation information and the
refinement of the model as information becomes defined. Users of the system
model require faster simulation speeds than are available in RT models and cannot wait for the RT model to be completed. The transaction-level system
model can be developed before the implementation is started, and can be simulated much faster as the low-level details are removed. Finally, developing
the system model at the transaction level facilitates its use in the verification
environment. Using a standard transaction-level definition, the testbench can
interface directly to the system model to do early architectural testing and
facilitate developing and testing of the verification environment.
A TLM abstracts the functional and data flow information from the
design, removing the low-level signaling information. The functional or algorithmic information in the design is represented in the simplest manner. The
model does not care about the implementation specifics within the function.
The model focuses on the correct functional operation and the interface
between different functions. A TLM uses a common representation of transfer
of information between functions. This representation of data, and the transfer
between functions or blocks, is often defined as a transaction.
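To make the idea concrete, the following is a minimal sketch, in SystemVerilog, of what such a transaction object might look like. The class name and fields (kind, addr, payload) are illustrative assumptions chosen for this sketch, not a definition taken from the methodology.

    // Minimal sketch of a transaction object for a hypothetical packet interface.
    // Field names (kind, addr, payload) are illustrative assumptions.
    class packet_txn;
      typedef enum {READ, WRITE} kind_e;

      rand kind_e        kind;      // what kind of transfer this is
      rand bit [31:0]    addr;      // destination address of the transfer
      rand byte unsigned payload[]; // data carried by the transaction

      constraint c_len { payload.size() inside {[1:64]}; }

      // Convenience for debug printing in a testbench or TLM environment
      function string convert2string();
        return $sformatf("%s addr=0x%08h len=%0d", kind.name(), addr, payload.size());
      endfunction
    endclass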
Chapter 13
Formal Verification Tools
Understanding their strengths and limitations
There has always been a great deal of confusion when discussing the topic
of formal functional verification. Formal verification uses mathematical techniques to prove that a design is functionally correct. These mathematical
techniques are steeped in theory and mathematical sciences that go beyond
the comprehension of most verification engineers. The complexity of the
underlying technology leads to much of the confusion when discussing formal
verification. Rather than focus on the underlying technology, in this chapter
we will look at using formal verification tools to address functional verification issues that advanced teams face today. We will also examine the
limitations of these tools and techniques. As with all techniques, it is important for each verification team to measure the time and effort involved against
the return received as well as whether the strengths outweigh the limitations.
To address this, formal verification uses mathematical techniques to compare the original representation to the new representation. These tools, known
as equivalency checkers, break the design down into a mathematical representation and then formally prove that the two are equivalent. Equivalency
checkers can verify that two representations of a design, such as RTL and gate
level or gate level and transistor level, are functionally equivalent. So once the
verification is done at one level, it does not have to be repeated at the other
level. You can also use equivalency checking to verify that the change made
to a design only affects the intended functionality and does not have other
functional implications.
Difficult bugs are often not found with traditional simulation approaches.
Large complex designs have a huge state space, which can be impossible to
cover with simulation. Teams might employ random stimulus simulation
techniques to cover as much of the state space as possible, but these techniques are random and tend to take the path of least resistance. If a bug exists
outside of the covered state space, it will most likely not be found until the
design is in silicon.
While formal verification tools might lack the capacity to cover the state
space of a large design, they can thoroughly cover selected important areas that
are difficult to test with simulation, such as an arbiter or a complex queuing
scheme. Formal model checking tools cover these areas of the state space by
focusing on only certain parts of the design. Semi-formal or hybrid formal
verification tools use simulation to lead the tool to interesting places in the
state space and then thoroughly verify around that area. These approaches
allow verification teams to focus on the hard-to-simulate areas where difficult
bugs are found.
The major deficiency in most RTL analysis tools is the trade-off that must
be made between the accuracy of a check and the chance of false errors
being reported. These tools cannot infer all the information necessary to absolutely guarantee that the issues found are real bugs. If the tools follow the
strict coding of the check, they might report many issues to the designer that
turn out to be correct design characteristics. If the tools try to limit the checking or infer more information than is really available, they risk missing a bug
and giving the user a false sense of security. It is this trade-off that has moved
RTL analysis tools to more complex methods to find real bugs with fewer
false errors.
The simplest form of RTL analysis tools parse the code and identify issues
from the textual representation of the design. These tools can find typographic
and syntax errors that might result in incorrect connections or missing logic.
Unfortunately, simply parsing the text does not provide enough information
to detect more complex bugs without the risk of reporting many false violations. Early tools that focused only on the text were infamous for reporting
thousands of violations that were not real errors. To overcome these false violations, RTL analysis tools began to translate or elaborate the design into a
Boolean or mathematical format so that more complex information could be
inferred. This information can help identify cases that can be logically proven
to be correct or incorrect. An example of this is a net driven by two tri-state devices. By breaking that circuit down into its Boolean-level representation, the tool might be able to identify that it is logically impossible for the two drivers to be on at the same time, and thus avoid reporting a false violation.
Elaborating the design can identify some information to provide more
checks and limit the amount of false violations, but it still does not provide a
complete solution. Many checks require a more complex mathematical analysis to prove the design meets the expected behavior. The latest generation of
tools uses formal techniques to prove that certain complex bugs are real.
These checks verify complex relations, such as deadlocks, clock domain
crossings, and reachability.
Figure 47 shows a simple circuit with two different clock domains. A possible race condition occurs between register A and register B because that path lacks the synchronization registers that exist in the path between register A and register E. Detecting this race condition is almost impossible using simulation,
but an RTL analysis tool can easily identify this violation.
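As a hedged illustration of the kind of check involved, the sketch below shows a simulation-time assertion that flags a control signal changing again before the destination clock domain can capture it. It complements, rather than replaces, the structural clock-domain analysis described above, and the signal and clock names are assumptions made for this sketch.

    // Sketch: a simulation-time complement to structural CDC analysis.
    // Checks that a control signal crossing into the clk_b domain is held
    // stable for at least two destination-clock edges, so the receiving
    // flop cannot race. Signal and clock names are illustrative assumptions.
    module cdc_stable_check (
      input logic clk_b,
      input logic rst_n,
      input logic ctrl_a   // control signal sourced in another clock domain
    );
      property p_hold_two_dest_cycles;
        @(posedge clk_b) disable iff (!rst_n)
          !$stable(ctrl_a) |=> $stable(ctrl_a) [*2];
      endproperty

      assert property (p_hold_two_dest_cycles)
        else $error("CDC: ctrl_a changed again before the destination domain could capture it");
    endmodule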
- This removes easy bugs and assures quality code before it enters the verification process.
- Teams run the tools every time a design change is made to verify that the changes have not caused a new violation.
- Teams run the tools as a final verification before code is signed off. This may be part of a formal code review process.
Advanced verification teams have learned to use RTL analysis tools
in the context of their entire methodology. The following lessons
learned from these groups can be valuable to any team using RTL
analysis tools:
- Understand what you want to achieve. Know the types of bugs that you want to find and that are important to you. If you do not do this analysis first, you might waste a lot of time investigating violations that are of little value to you.
- Understand what the checks are really verifying. Quite often two tools use the same name for a check, such as clock domain crossing checks, but what they check is very different. You can fool yourself into thinking that you have verified an aspect of your design that is not fully checked.
- Set up defined rules and processes for using the tools, such as when the tool is run and what results are considered acceptable. Many teams add their own checks to the tools to verify their own design style requirements.
Equivalency Checking
Equivalency checking is not always thought of as a functional verification
technique, because it is mostly used during the implementation or back-end
stage of the development process. Equivalency checking compares two representations of a design to verify that they are functionally equivalent. Functional
equivalency means that the two representations are logically equivalent. If
you apply the same stimulus to the two designs, you will get equivalent functional responses. Equivalency checkers break the design representations down
to a basic mathematical representation and then use formal techniques to
prove that the representations are equivalent.
The original promise of equivalency checkers was that they could provide
the verification link between high-level behavioral representations developed
by architects and the low-level implementation representations developed by
designers and automated tools. Unfortunately, the higher in abstraction a
design is modeled, the more ambiguous the representation can be.
The users of functional equivalency tools are most often the engineers
responsible for logic synthesis, model creation, and final implementation.
Most advanced verification teams are not involved in these functions, so their
use of equivalency checkers is limited. However, verification engineers need
to understand their use to be able to identify when they should be used and
when reverification is necessary.
Model Checkers
Model checkers are perhaps the tools most commonly associated with formal verification techniques today. They prove that a property or
assertion about the design is true or false. Advanced verification teams use
these tools in two different ways: to verify specific parts of the design that are
most amenable to formal proofs or to find difficult bugs that the user had not
thought of or was not able to simulate. Each of these uses can enhance your
verification methodology.
Because of capacity and performance issues, formal model checkers have
never been able to keep pace with the growing size and complexity of digital
designs. Thus, the use of model checkers has been focused on parts of the
design that are small enough to fit in the tools and are amenable to formal
techniques. The strategy many advanced teams use is to identify areas that
will be very difficult to verify with simulation because of the number of combinations of interactions that would need to be tested. Examples of these areas
include complex control logic found around memory controllers and bus
interfaces, arbiters, and complex queuing and flow control logic, such as
leaky bucket or token-based algorithms.
Once the areas are identified, these blocks are given to a verification specialist who is familiar with model checking. Model checkers are very
complex to run and debug, so a specialist is usually required. The specialist
meets with the block designer to identify properties or assertions that need to
be proven. The designer might be able to provide some assertions in the form
that was embedded into the code, but often they need to be modified for the
tool. The specialist also needs to understand how stimulus is applied to the
design, and what legal or illegal stimulus is. Constraining the tool is perhaps
the most difficult part. If the inputs or the assertions are underconstrained, the
tool reports many invalid violations. If they are overconstrained, bugs could
be missed.
The specialist runs the tool and identifies which violations are caused by
incorrect constraints and which may be real bugs. The specialist identifies the
bugs to the designer, who then determines whether they are real bugs or cases
where more constraining is required. The specialist modifies the constraints
or obtains a bug fix and reruns the tool to repeat the process.
Semi-Formal Verification
Model checkers are also used in a newer form of formal verification called
semi-formal verification. Semi-formal verification tools overcome many of
the deficiencies of formal tools by combining formal techniques with standard
simulation techniques. The premise is to find difficult bugs within your
design that cannot be found with traditional approaches. These bugs are difficult to find with directed tests because they are so complex and obscure that
you cannot possibly think of every possible scenario. They also cannot be
found with random techniques, because the stimulus to trigger them is very
complex or the number of possible scenarios is so great that the odds of randomly finding them are slim.
The original promise of semi-formal tools was that you could use simulation techniques to lead you to interesting areas in the design where bugs might
be hidden and then use formal techniques to expand from that location to formally prove all the possible combinations of events around that area as shown
in Figure 50.
The promise of semi-formal tools has not yet been realized because of the
difficulty of automatically finding interesting states to start from and detecting bugs once you do find the area. Automatically identifying interesting
areas requires a target for the tool to search for. The most common method is
to search for coverage metrics like state machine arcs or process interactions.
Unfortunately, these do not always lead to the most interesting places and can
often send the tool off into unimportant areas, such as test logic. A
workaround is for the user to supply information to the tool. Some tools use an already created directed test as a jumping-off point, engaging formal verification techniques to formally verify the design starting from the existing
states of the test. Another group of tools uses assertions placed in the design to
target the formal verification engines. These tools simulate the design until
the assertion is stimulated, and then they engage the formal engine to verify
possible scenarios starting from that point. Directed tests or assertions help
direct the semi-formal tool to interesting areas, but these areas must first be
identified by the user. If a difficult bug is not found in an identified area, the
odds are slim that it will be found with these tools.
The other issue is whether semi-formal tools have the ability to check for
bugs once an interesting area has been identified. The most common form of
checking used with testbenches is data checking. Data is applied to the design,
and the results are compared with expected results at the output of the design.
This is true for a design as complex as a router or as simple as a FIFO. Data
checks require the ability to track or store data as it moves through a design
and the ability to predict the correct response. Formal verification techniques
require that the design as well as the verification logic be understood by the
tool. Thus, formal techniques are targeted more at sequence or protocol
checks that verify the relationship between signals or events. Formal verification cannot handle large arrays of data storage required for data tracking and
cannot understand data prediction and checking operations required for data
checks. These limitations result in data checkers being removed from the formal proof. The only checks made while the semi-formal tool is looking for
difficult bugs are the assertions placed in the design. If the tool does find an
interesting state and covers a scenario that can cause a bug, that bug is only
detected if one of the assertions catches it.
Even with these limitations, today's semi-formal tools still provide
another mechanism for increasing the likelihood of catching a difficult bug.
These tools can handle larger and more complex designs and are easier to use
than traditional model checkers. Advanced verification teams use these tools
as an additional mechanism to find bugs that they might not have been able to
find before. Unfortunately, the limitations and the inability to provide coverage information mean that these tools do not replace existing tasks; rather, they add to your already complex verification task.
Chapter 14
Testbench Development
Measuring the trade-offs
Simulation is by far the most prevalent technique used in functional verification today. The ability to verify the ideas as well as the implementation
before a device is manufactured saves a development team time and effort.
Developing a testbench environment is often the single most important and
time-consuming task for an advanced verification team. Many excellent
classes and texts on how to build a testbench for various types of designs
using various languages and tools are available. This chapter presents some
important issues and trade-offs that verification teams need to consider before
building a testbench. It also describes how to develop a reusable unified
advanced testbench.
Teams should focus on three basic goals when developing a testbench: efficiency, reusability, and flexibility. The testbench should make verification more efficient by removing the low-level details and redundant processes so that the verification engineer can focus on testing and debug. The testbench should be designed to facilitate reuse of its components within other similar testbenches. The testbench needs to be flexible so that it can be easily leveraged
and integrated with other environments. It should facilitate the integration of
different designs and support the integration of the design being verified.
These three goals often conflict, forcing the testbench developer to make
trade-offs to create a testbench most suitable for the intended use.
TRADE-OFFS
Testbench developers have been striving to meet the goals of efficiency,
reuse, and flexibility for many years. Unfortunately, attaining these goals
often makes testbenches more complex to create and more difficult to use.
Every testbench developer must make a trade-off between the time and effort
to create and use the testbench versus the potential gain from making the testbench efficient, reusable, and flexible.
Abstracting design information in a testbench requires extra work and produces extra testbench code. Developing converters is not a trivial task and
always has the potential for introducing new bugs. The gain in efficiency
obtained from abstraction often comes at the expense of controllability and
observability. Removing the implementation details from the test writing process limits the control the test writer has to affect the stimulus. Abstracting
implementation details from design data for checking and debug might also
overlook important implementation characteristics, and possibly bugs.
Most advanced verification teams find abstracting design information to
be valuable for newer designs that require a large number of tests and long
debug cycles. Existing designs with existing stimulus sets do not benefit from
large efficiency gains from abstraction.
complex it will be. The number of test writers and their skill level should also
be factored into the interface to the testbench.
UNIFIED TESTBENCHES
Chapter 8 discussed verifying digital subsystems within the context of the
UVM. The development of a unified testbench is vital to attaining the UVM
goals of increased speed and efficiency. Here, we describe a high-speed, reusable testbench that meets the requirements of the UVM.
Testbench Components
High-performance, reusable testbenches are based on standard components with a common interface for communications at different levels of
abstraction. Figure 51 shows the basic components.
Stimulus Generators
Stimulus generators create the data the testbench uses to stimulate the
design. Stimulus generators can create the data in a preprocessing mode with
custom scripts or capture programs, or they can create the data on-the-fly as
the simulation occurs. Stimulus generators are usually classified by the degree of control the test writer exerts over the generation of the stimulus.
Transactors
Transactors change the levels of abstraction in a testbench. The most common use is to translate from implementation-level signaling to a higher-level transaction representation or the reverse. Transactors are placed in a testbench
at the interfaces of the design, providing a transaction-level interface to the
stimulus generators and the response checkers. Transactors can behave as
masters initiating activity with the design, as slaves responding to requests
generated by the design, or as both a master and a slave. The design of a transactor should be application-independent to facilitate maximum reuse.
Application-specific information can be contained in the stimulus generators
or TLMs attached to the transactors. Also, when developing a transactor, the
designer should consider its use in a hardware accelerator. Developing the
signal-level interface in a synthesizable manner allows it to be accelerated
along with the design, improving the performance gain obtained from a hardware accelerator.
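The following is a minimal SystemVerilog sketch of a master transactor for a hypothetical valid/ready interface. It reuses the packet_txn class sketched earlier and receives transactions through a mailbox; the interface signals and handshake are illustrative assumptions, not a prescribed implementation.

    // Sketch of a master transactor: accepts transactions from a stimulus
    // generator through a mailbox and drives them onto a simple valid/ready
    // signal interface, one payload byte per accepted beat.
    module master_transactor (
      input  logic        clk,
      output logic        valid,
      input  logic        ready,
      output logic [31:0] addr,
      output logic [7:0]  data
    );
      // A stimulus generator can post transactions here, for example through
      // a hierarchical reference to txn_in in the testbench.
      mailbox #(packet_txn) txn_in = new();

      task automatic drive_one(packet_txn t);
        foreach (t.payload[i]) begin
          valid <= 1'b1;
          addr  <= t.addr;
          data  <= t.payload[i];
          do @(posedge clk); while (!ready);  // hold each beat until accepted
        end
        valid <= 1'b0;
      endtask

      initial begin
        packet_txn t;
        valid = 1'b0;
        forever begin
          txn_in.get(t);   // blocking: wait for the next transaction
          drive_one(t);
        end
      end
    endmodule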
Interface Monitors
Interface monitors check the correct signaling and protocol of data transfers across design interfaces. In some testbenches, interface monitors are
combined either with the transactors or with the response checkers. Keeping
interface monitors separate from these components allows for maximum
reuse of the monitors. In addition to passively monitoring the data transfers
across interfaces, these monitors can encapsulate the data to be communicated
to response checkers. This allows the response checkers to concentrate solely
on verifying correct operation. Interface monitors contain interface assertions
and can be written in an assertion language. The interface monitors should be
application-independent and written in a manner that allows their easy reuse
in hardware acceleration.
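Below is a small sketch of a passive interface monitor for the same hypothetical valid/ready interface: it checks one protocol rule with an assertion and hands accepted data to a response checker. All names are assumptions made for illustration.

    // Sketch of a passive interface monitor. It never drives the interface;
    // it only checks protocol and repackages accepted beats for a checker.
    module if_monitor (
      input logic       clk,
      input logic       rst_n,
      input logic       valid,
      input logic       ready,
      input logic [7:0] data
    );
      mailbox #(byte) observed = new();   // observed data for the response checker

      // Protocol rule: once asserted, valid must stay high until ready accepts it.
      assert property (@(posedge clk) disable iff (!rst_n)
                       valid && !ready |=> valid)
        else $error("Protocol violation: valid dropped before ready");

      // Capture each accepted beat.
      always @(posedge clk)
        if (rst_n && valid && ready)
          observed.put(data);
    endmodule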
Response Checkers
Response checkers verify that the data responses received from the design
are correct. Response checkers contain the most application-specific information in the testbench and usually can only be reused when the block they are
monitoring is being reused. There are three basic types of response checkers:
- Reference model response checkers apply the same stimulus the design receives to a model of the design and verify that the response is identical to the design. The most efficient method is to reuse the TLMs of the FVP as the reference models in the response checker.
- Scoreboard response checkers save the data as it is received by the design and monitor the translations made to the data as it passes through the design. The translations are tracked with a scoreboard, and responses are verified as they are generated by the design.
- Performance response checkers monitor the data flowing into and out of the design and verify that the correct functional responses are being maintained. These checkers verify characteristics of the responses rather than the details of the response.
Scoreboards are used when the design responds in a predictable manner
and the data is easily correlated, such as a bridge or a switch design. Reference models are used when the design can be easily modeled independently
with enough accuracy, such as a computation unit or a pipeline. Performance
checkers are used when the functions in the design are unpredictable due to
implementation specifics, and the correct operation can be specified by the
characteristics of the design, such as a rate limiter or a routing algorithm.
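As an illustration of the scoreboard style described above, the following sketch records expected data on the way into the design and compares it, in order, with what appears at the output. It assumes an order-preserving design and a byte-stream view of the data; both are assumptions made only for this sketch.

    // Sketch of a scoreboard-style response checker.
    class scoreboard;
      byte expected[$];   // queue of data still expected at the output

      // Called by the input-side monitor for each accepted beat.
      function void note_input(byte b);
        expected.push_back(b);
      endfunction

      // Called by the output-side monitor for each observed response.
      function void check_output(byte b);
        byte exp;
        if (expected.size() == 0) begin
          $error("Scoreboard: unexpected output byte 0x%02h", b);
          return;
        end
        exp = expected.pop_front();
        if (exp !== b)
          $error("Scoreboard: expected 0x%02h, got 0x%02h", exp, b);
      endfunction

      // Called at end of test to ensure nothing was dropped inside the design.
      function void final_check();
        if (expected.size() != 0)
          $error("Scoreboard: %0d expected bytes never appeared at the output",
                 expected.size());
      endfunction
    endclass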
Testbench API
As the subsystem is micro-architected, it is partitioned into smaller hierarchical blocks and units. Large complex subsystems require the verification
team to do much of the verification at the block level. The verification team
chooses the levels of hierarchy to test at by selecting blocks with common
standard interfaces that provide enough observability and controllability and
are not too large for simulation tools to be efficient. Often designers want to
verify at the lowest unit level as they develop individual modules. This is
done by the designer using simple HDLs or HVLs, along with added structural assertions. The verification uses simple methods, applying stimulus
vectors and waveform inspection or application-specific environments. These
environments are not reused, since their purpose is simply to prove basic sanity of the unit-level modules.
Block-level testbenches are developed in a similar manner as the subsystem testbench. The FVP is partitioned to match the verification needs and
reused in the individual response checkers. Transactors and interface monitors are reused from the subsystem testbench. Common transactors are used in
slave or master modes for the different connecting blocks. The resulting testbenches are shown in Figure 54.
Tests can be developed in advance of the implementation in the block-level testbenches by substituting the models as described in the subsystem
testbench. When developed in a correct and efficient manner, the top-down
testbench method provides for the tests and testbench to be completed and
debugged before the implementation is delivered by the design team. This
creates the fastest and most efficient debugging and integration flow.
VERIFICATION TESTS
So far we have only briefly talked about the actual tests that will run on the
testbench being developed. Many verification teams separate the creation of
the testbench from the creation of the test stimulus, because the two tasks are
very different and require different skills. Usually a testbench is developed by
a few engineers who are highly skilled at developing complex code and systems. Complex systems can require large teams of test writers who need to
know the intricacies of the design and understand how to test that it is working correctly. Development teams often use design engineers and software
engineers to write tests for a period of the development time.
The goal of the testbench developer is to create a testbench that allows the
test writer to write and debug a test in the most natural form possible. The
testbench should shield the test writer from the complexities of managing data
and interfacing to the design. In this section, we will take a closer look at
developing tests for an advanced verification testbench.
There are several types of directed tests. Interface tests verify the correct
operation of each of the major interfaces found in the subsystem. The tests
verify correct handshaking and error handling. Stress tests are included to verify the constraints of the interface, such as time-outs, aborts, and deadlocks.
These tests should be run first to verify the correct communication between
the testbench and the design. Feature tests verify the processes contained
within the subsystem. These tests verify the correct operation of the features
under normal and stress conditions. Stress conditions include system interactions between features, such as interrupts, retries, and pipeline flushes. These
tests are run second with the goal of verifying each feature in isolation under
non-stress conditions before turning on randomness.
Error tests verify the correct operation of the subsystem under error conditions. Error conditions consist of recoverable and non-recoverable errors.
Recoverable error tests verify the observation of the error and the recovery
from the error. Non-recoverable error tests verify the observation and the correct system response, such as a halt, freeze, or interrupt signal. These tests
should be run after feature testing is complete and random tests have run for
multiple hours without failing. Performance tests verify that the subsystem
meets the performance requirements of the system. Performance requirements
can include latency, bandwidth, and throughput. The tests stimulate the system with normal rate stimulus as well as corner case stimulus that is known to
be performance limiting. Performance tests are run periodically throughout
the testing process to verify that the design is still meeting its performance goals and
should be run again at the end of testing to verify that design changes and bug
fixes have not violated the performance goals.
Combining Random and Directed Approaches
When random testing first became popular, many teams believed that the
best approach was to first run random tests, measure what had been tested,
and then write directed tests for the areas that were not stimulated by the random tests. In theory, this made sense; but in practice, it had many flaws.
When a design is first verified, it can contain many basic bugs. The most efficient way to bring up a design is to test it in stages, applying very basic
stimulus first to verify that it works before applying more complex stimulus.
When using random tests, you often have limited ability to select in which
order stimulus is applied, so a random test does not always follow the most
efficient bring-up process.
It is also difficult to measure what the random test verified. Coverage tools
can provide some information about what areas of the design have been stimulated, but they cannot tell whether a function has been verified completely.
Some teams place monitors in the design that signal when a function has been
tested. Placing these monitors throughout a design can be time-consuming
and requires intimate knowledge of the design. Knowing at what point to stop
running random tests and begin writing directed tests is also a challenge. The
team needs to leave enough time for the directed tests to be written to cover
the areas not stimulated by the random tests, but you do not know how many
directed tests are needed until the random tests are run. Also, as noted earlier,
the behavior of random tests can change with small changes in the design or
testbench. This means that what is tested by random may change from run to
run. Thus, it is often difficult to come to closure on exactly which directed
tests need to be written. Often, a verification team thought they had completed verification, only to discover that a change in the testbench required another directed test to be written. This open-loop process makes it very difficult for verification teams to know when the design is verified to a sufficient
level to be taped out.
Today, advanced verification teams use a subset of directed tests first, then
use random tests followed by more specialized directed tests. Advanced verification teams first develop a group of must-have directed tests that are used
in bring-up and verify the most important functionality. These tests are also
used as a consistent regression suite. After the first directed tests are run, the
team can feel confident that the design is at a level of sanity where random
testing will be of most use. The team can use monitors or assertions in conjunction with coverage tools to get a good idea of which functions have or
have not been stimulated. At this point, a final set of directed tests is run to verify the special-case conditions that were not hit by random testing. Having a suite
of directed tests that verifies the most important functions eliminates the fear
that random testing missed a basic function that pops up late in the development stage.
More complex constrained random tests might apply weightings to the values in the defined set, making it more likely for the random generator to select
one value or a range of values in the set over other values. For example, a test
writer could specify that a CPU request be a read 10 percent of the time and a
write 90 percent of the time, or that the operation accesses certain values
within a defined range more often than other values.
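The weighting described above might be written as in the following SystemVerilog sketch; the class and field names are illustrative assumptions, and only the distribution mirrors the read/write example in the text.

    // Sketch of a weighted constraint: a hypothetical CPU request that
    // randomizes to a read 10 percent of the time and a write 90 percent.
    class cpu_request;
      typedef enum {READ, WRITE} op_e;
      rand op_e       op;
      rand bit [31:0] addr;

      constraint c_op_weight { op dist { READ := 10, WRITE := 90 }; }
    endclass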
As more constraints are added, it is more likely that the constraints overlap
and form complex relationships. For example, a test might specify that a
request be either a read or a write, but if the request is a read, the address
range contains one set of values and if it is a write, the address range contains
a different set of values. There is now a relationship between the request type
and the address of the request generated. The random system must decide
which random value to pick first. If the request type is generated first, the system must generate the address within the range for that request type. If the
address is generated first, the system must make sure that the request type
generated is the correct one for the address. Complex verification tests can
contain hundreds of different constraints that all have relationships to each
other. Verification systems require constraint solvers that use mathematical
techniques to manage the correct selection of values to meet all the necessary
constraints.
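The related constraints described above might look like the following sketch, in which the legal address range depends on the request type. The ranges and the solve ordering are illustrative assumptions; the point is only that the constraint solver must honor the relationship between the two random fields.

    // Sketch of related constraints: the address range depends on the
    // operation, so the solver must resolve the relationship consistently.
    class cpu_request_related;
      typedef enum {READ, WRITE} op_e;
      rand op_e       op;
      rand bit [31:0] addr;

      constraint c_addr_by_op {
        solve op before addr;  // bias the solver to pick the operation first
        (op == READ)  -> addr inside {[32'h0000_0000 : 32'h0000_FFFF]};
        (op == WRITE) -> addr inside {[32'h0001_0000 : 32'h0001_FFFF]};
      }
    endclass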
The more a random test is constrained, the more control the test writer has
over the generation of stimulus and the more likely the test stimulates the
intended functions. Also, the more a random test is constrained, the more similar it becomes to a directed test and the more design-specific information is
required of the test writer. Test developers need to balance the need for controlling the randomness to target specific areas with the power of randomness
to stimulate and discover areas that are unknown to the test writer.
Advanced verification teams utilize constrained random tests as a substitute for many of the directed tests they would need to write. One random test
running for many cycles might stimulate the same functions as five or ten
directed tests. Constrained random tests are also used to duplicate the random
nature of traffic flowing into the system. Most electronic systems today operate in a very non-deterministic environment. Verification teams want to be
able to verify that the system operates correctly in a similar non-deterministic
manner. Constrained random systems can mimic the traffic patterns that
might be seen by the device.
Testbench Requirements
Constrained random tests require a testbench that has been designed to support a constrained random approach. The testbench must be self-checking. Random tests can run for long periods of time and generate large amounts
Chapter 15
Advanced Testbenches
Using assertions and coverage
As the size and complexity of designs has increased, testbenches have
become more focused on testing the design from its periphery. Testbenches have evolved to treat the design as a black box, testing it by applying stimulus at its inputs and observing its outputs with little regard to the specific
stimulus at its inputs and observing its outputs with little regard to the specific
implementation. This approach allows verification teams to verify large
designs because it requires less intimate knowledge of the design. A black-box approach also facilitates reuse since less design-specific information is
contained in the testbench. This focus on simplicity and reuse has resulted in
testbenches that provide stimulus and checking capabilities but limited visibility of the implementation. Using a more visible or white-box approach to
verification, however, eases debugging and provides knowledge on what has
been tested. Assertions and coverage techniques, which have been touched
upon throughout this book, provide a more white-box approach to verification. Today, black-box testbenches are combined with these white-box
techniques to provide the efficiency, reuse, and flexibility that advanced verification requires.
ASSERTIONS
The use of assertions has become a hot topic in functional verification.
What is an assertion and how does it apply to functional verification? Theoretical sources describe assertions as capturing the designer's intent and specifying intended behaviors. The practical view is that assertions
are simply monitors placed in the design that identify actions within the
design. These monitors can identify illegal behaviors and act as a supplement
to the testbench checkers, or they can identify legal behaviors to help guide
the verification process. Assertions are also often tied to formal or static verification techniques, which are discussed in another chapter. This chapter
focuses on the practical application of assertions in a dynamic or simulation-based verification process.
Assertions address three basic verification issues. First, they address the
issue of bugs being missed by the testbench and slipping through into the late
stages of the verification process or even into the manufactured device. Catching a functional bug is a combination of stimulating the design to cause the
bug to occur and checking the response to identify the consequences of the
bug. The testbench is responsible for generating the stimulus to cause the bug
to occur in a dynamic verification environment. Assertions increase the
amount of checks within a design, making it more likely that a bug is found if
it has been stimulated.
The second issue addressed is the time it takes to debug a failure in simulation. The process of debugging simulation failures is often described as
peeling onions. The failure usually is first identified at the external interface
of the design. The engineer debugging the failure must step back through each layer of the design to try to locate the cause. This can be very time-consuming and complex,
depending on the size of the design and the location of the bug. Assertions
provide checkers inside the design closer to the source of the bug, making
debugging easier and faster.
Assertions also address the difficulty of directing the verification process efficiently. An efficient dynamic verification process runs the test that has the
highest likelihood of finding the next bug or verifying the most important feature in the design. This is difficult to do unless you know what has been tested
in the past and what remains to be tested. Assertions can make the dynamic
verification process more efficient by providing accurate and timely information about what has been tested.
Perhaps the easiest way to understand how assertions address many of
today's verification issues is to look at the process of writing, running, and
debugging a test and how assertions are used during this process.
Placing assertions around an area where a bug was found helps to
reverify the bug fix and find any other bugs that might be hidden in the area.
The process of creating a test, running the test, and debugging failures is a
repetitive task. Once a test has been debugged, the test developer returns to
the next test and repeats the process. Selecting which test to create and run
next is very important for determining the most efficient verification process.
Randomly picking the next test or proceeding in some arbitrary order leads to inefficient verification that takes longer to complete and results in important
bugs being found late in the development process. Advanced verification
teams prioritize the order of tests to be run so that the debug process is efficient and the most important functions get tested early.
Verifying the functions in the order that they are used in the system, from
input to output, allows the developer to more easily isolate a bug when a failure occurs. It is also important to prioritize the testing of the most important
and most used functions early in the process. The goal is to identify bugs in
important functionality first so that they can be fixed early in the development
process. Often designs are required to tape out before the verification is complete. Prioritizing the most important functionality allows you to have the
most confidence that the design will work if it has to tape out early. Prioritizing tests for debug and prioritizing by functionality might seem to conflict
with each other. Advanced verification teams try to mesh these two goals.
First, they prioritize for debug for early bring-up of the design to a level of
sanity, and then they prioritize by functionality.
Assertions can help identify which areas of the design have been tested
and which areas have been missed. Assertions provide coverage information
at the implementation level, which helps identify the degree that structures
and functions have been tested. We will talk more about using assertions as
coverage monitors in the coverage section of this chapter.
Using Assertions
Assertions have been used in the past mostly by large development teams.
These projects were usually very well staffed and had long development
cycles. The basic use model was for assertions to be placed throughout the
design either by the designer as the code was being written or by another engineer after the code was written. Assertions were placed using a specification
language that best encapsulated the intended behavior of the implementation.
The full suite of assertions would be simulated with the design each time a
test was run. The benefit of this approach was that the code would be fully
instrumented with a wide net of checkers to catch as many bugs as possible.
Most development teams today lack the time and resources to create such
an extensive net of assertions, nor do designers have the interest. Most designers do not see the value in using assertions because they believe that their
code is correct and that assertions are the verification team's task. Recoding a
design in a different language to capture its intent seems like a redundant task.
The verification team usually lacks the implementation-specific knowledge
and the time to put assertions in for the designer. So, if the designer does not
create the assertions throughout the code, the older use model breaks down.
At the block level, structural assertions can be used inside the design, and
interface assertions can be used at the boundaries of the design. Using structural assertions at the block level is highly dependent on the designer. If the
designer chooses to insert assertions, the most efficient path is for the designer
to place assertions in the form of library elements instead of using a language
to define the check. Using assertion pragmas or comments that allow the user to place the library elements in a shorthand manner facilitates this approach. Automated
tools interpret these comments or pragmas and synthesize the assertion in the
form of a library or language.
Structural assertions should be placed around design hot spots and in common bug-trap locations. Design hot spots are places in the design that, based on past experience, are most likely to be incorrect. Areas such as
arbiters, state machines, or clock domain crossings are common areas for
bugs. Designers should be encouraged to place assertions in the areas they
believe to be suspect or where they have had problems in the past. Bug traps
are places where the manifestation of bugs is most commonly seen. Placing assertions at places like a MUX, a FIFO, or a handshake is often an easy way to catch the results of a bug in complex logic. A common rule of thumb many
teams use is that if an assertion for complex logic cannot be written with a
library or a few lines of language, you should use bug traps to catch the effects
instead.
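As a sketch of the bug-trap style, the following checks trap the symptom of a bug at a FIFO rather than describing the complex logic that feeds it. The signal names are assumptions made for this sketch.

    // Sketch of bug-trap structural assertions: whatever complex logic drives
    // the FIFO, pushing while full or popping while empty is a symptom of a bug.
    module fifo_bug_trap (
      input logic clk,
      input logic rst_n,
      input logic push,
      input logic pop,
      input logic full,
      input logic empty
    );
      assert property (@(posedge clk) disable iff (!rst_n) !(push && full))
        else $error("Bug trap: push while FIFO full");

      assert property (@(posedge clk) disable iff (!rst_n) !(pop && empty))
        else $error("Bug trap: pop while FIFO empty");
    endmodule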
Once the code for a block is complete, the designer or a verification engineer can add interface assertions to the block. Assertions for standard
interfaces can easily be added using a library approach. If the interface is not
standard and requires some unique checking, the assertions should be written
in a standard language. Interface assertions should be added at the interface
between major functional blocks and between blocks created by different
designers.
After individual blocks have been verified in isolation, they are integrated
together and verified as a full chip. Structural and interface assertions of the
individual blocks are integrated along with the design. Additional interface
assertions are placed at the primary inputs and outputs of the design and at
any missing internal interfaces. At this time, architectural assertions developed at the FVP level are also integrated if they are not already part of the chip test
environment. If an FVP was not used or if additional architectural assertions
are required, they can be added using a standard language.
Simulation with assertions at the chip level is similar to the block level.
Assertions should be turned off during initialization, and structural assertions
might be ignored during basic bring-up. Fewer assertions fire during chip-level simulation since they have already been thoroughly exercised during
block-level simulation, so each violation should be examined carefully. If
there are a large number of assertions in the design, performance could be
impacted. The team can turn off structural assertions if performance is unacceptable. The team might also choose to use a hardware accelerator, in which
case all the assertions might need to be synthesized so that they can be accelerated with the design.
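One common way to handle the initialization issue is sketched below: assertions are switched off while reset and initialization are in progress and re-armed afterward using the standard SystemVerilog assertion control tasks. The testbench and instance names are assumptions made for this sketch.

    // Sketch: suppress assertions during reset/initialization at chip level,
    // then re-arm them for the rest of the test.
    module chip_tb;
      logic clk = 0, rst_n = 0;
      always #5 clk = ~clk;

      // DUT instance assumed here, e.g.: my_chip dut (.clk(clk), .rst_n(rst_n) /* ... */);

      initial begin
        $assertoff(0, chip_tb);          // keep assertions quiet during bring-up
        rst_n = 0;
        repeat (20) @(posedge clk);
        rst_n = 1;
        repeat (10) @(posedge clk);      // let initialization settle
        $asserton(0, chip_tb);           // re-arm assertions for the real test
      end
    endmodule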
Assertions and System Verification
Many teams stop using assertions once they move from a testbench-based
verification environment to a real-world system verification environment.
Teams often remove assertions when using an emulation system or an FPGA
because of speed and capacity requirements. While it is important to focus on
speed and capacity during system verification, assertions can provide important visibility that is often lost in emulation or FPGA-based systems. These
systems provide little or no internal visibility to help debug failures. Assertions should be included with the design in the emulation or FPGA system to
provide this needed visibility.
COVERAGE
One of the most common issues verification teams face is determining
when they have done enough verification to be confident that the design is
ready for production. Verification teams have attempted to use coverage as a
metric for determining completeness, but often come to the realization that
these techniques fall short of the original goal. Advanced verification teams
have learned that the coverage techniques and tools used today simply provide raw information that the user must correlate and comprehend before any
actions can be taken or conclusions made. The information is also incomplete
or inconclusive. Still, verification teams have found that the information that
coverage techniques provide is valuable in guiding the verification process in
the most efficient manner. In this section, we will explore coverage techniques used by advanced verification teams.
Coverage cannot provide an answer to the question of completeness, but it
can provide some of the data to help make that determination. Advanced verification teams understand that the verification process is often very similar to
risk management. Before making an investment or placing a bet, one should
assess the risk of losing money and compare that to the possibility of being
rewarded. In a similar manner, when a development team decides they are
ready to tape out their design, they should assess the risk that there is a bug in
the design that will make the device unusable. This risk must be weighed
against the possible rewards of getting to market sooner. Coverage tools provide the type of information that, along with experience and proper processes,
enables the team to make a fair and accurate risk assessment.
Coverage can help keep the verification process on the most effective and
efficient path. The verification process of a complex device can last many
months or years. Verification plans and strategies are usually developed early
in the process and might not be modified unless there is a major change in the
project's direction. Advanced verification teams have found that it is good
practice to periodically check where they are in the process and reevaluate whether
the original strategies are still the correct ones. High-level inexact data provided by code coverage or stimulus coverage tools can be enough to identify
which areas of the design are receiving the most verification and which they
might want to concentrate on more.
Perhaps the most important verification issue addressed by coverage is the
one that is most often overlooked. Coverage can be used to identify areas
within the design that have not been stimulated and, therefore, may be hiding
potential bugs. The previous section on assertions discussed the process for
finding bugs within a design. The first step in that process was generating
stimulus that stimulates the design in a way that manifests the bug. Verification teams often find that the reason a bug has slipped through the verification
process is that they never verified that operation. Coverage information can be used to expose the untested areas where bugs would otherwise be missed in the verification process.
Using Coverage
Using coverage within a verification environment consists of three stages:
identification of goals, simulation, and analysis. The first stage in any measurement process is determining what it is you are attempting to measure.
Advanced verification teams set coverage goals as part of their verification
strategy. The verification team develops a test plan that identifies the functionality to be tested and a strategy for testing each function thoroughly.
Coverage goals are set for each function. These goals list the metrics that must be observed to verify that the function has been tested. The metric
could be as simple as a certain signal being asserted a number of times or as
complex as a series of protocol sequences.
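Such goals might be captured as in the following sketch, which pairs a covergroup for a simple signal-count metric with a cover property for a short protocol sequence. The names, thresholds, and timing window are illustrative assumptions.

    // Sketch of coverage goals: a count-based goal on request type and a
    // sequence-based goal that a retried request eventually gets a grant.
    module coverage_goals (
      input logic clk,
      input logic req_valid,
      input logic req_is_read,
      input logic retry,
      input logic grant
    );
      covergroup cg_requests @(posedge clk);
        coverpoint req_is_read iff (req_valid) {
          bins reads  = {1};
          bins writes = {0};
          option.at_least = 100;   // goal: see each request type at least 100 times
        }
      endgroup
      cg_requests cg = new();

      // Goal expressed as a sequence, assuming a grant within 20 cycles of a retry.
      cover property (@(posedge clk) (req_valid && retry) ##[1:20] grant);
    endmodule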
Once the test plan is completed, a coverage model that details the functional coverage goals, along with stimulus coverage goals and goals based on
the team's experience, is developed. The coverage model identifies how each
of these goals is measured. The team uses the test plan and coverage model to
develop the testbench for stimulating and testing the design. The team might
track stimulus generation, increase interface monitors, or add internal monitors to identify certain coverage goals. Some of the coverage goals are
should create new tests to cover these areas and resimulate the design until the
goals are met.
The coverage process is an iterative process of identifying missed goals or
stimulus holes and addressing them in a prioritized fashion. Metrics can be
collected along the way to mark the progress and to give management an indication of confidence in the design. In most cases, the collection and analysis
of coverage information is not started until the test suite is near completion
and the design is stable. Measuring coverage too early in the process can
result in incomplete information leading to incorrect assumptions.
REACTIVE TESTBENCHES
One way to create stimulus that addresses coverage holes is to use run-time coverage information. The coverage process detailed earlier relies solely on post-processed coverage information that is not analyzed until all the simulations complete. Run-time coverage provides dynamic information that can be used while the tests are running, both for event notification and for information collection. Run-time coverage information can be used with each of the stimulus generation techniques for addressing coverage holes. Directed tests can use the notification of internal events to direct stimulus to the targeted area. Random tests can use the collected information to change constraints on the fly. Coverage-directed stimulus generators can use the information to identify coverage holes and direct stimulus to the targeted areas.
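A minimal sketch of the constrained-random case follows. It assumes a hypothetical run-time coverage view, a simple map of hit counts kept up to date by monitors while the test runs, and reweights the random stimulus mix toward transaction types that have not yet reached their coverage target. It is not based on any particular tool's coverage API; the class, type names, and targets are invented for illustration.

    #include <iostream>
    #include <map>
    #include <random>
    #include <string>
    #include <vector>

    // Hypothetical run-time coverage view: hit counts per transaction type,
    // updated by monitors while the test runs.
    using RuntimeCoverage = std::map<std::string, unsigned>;

    // A reactive stimulus generator: transaction types that are still short of
    // their coverage target are given proportionally more weight, so the random
    // stimulus is steered toward the remaining holes as the test runs.
    class ReactiveGenerator {
    public:
        ReactiveGenerator(std::vector<std::string> types, unsigned target)
            : types_(std::move(types)), target_(target), rng_(std::random_device{}()) {}

        std::string next(const RuntimeCoverage& cov) {
            std::vector<double> weights;
            for (const auto& t : types_) {
                unsigned hits = cov.count(t) ? cov.at(t) : 0;
                // Weight 1 for covered types, larger for types still under target.
                weights.push_back(hits >= target_ ? 1.0 : 1.0 + (target_ - hits));
            }
            std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
            return types_[pick(rng_)];
        }

    private:
        std::vector<std::string> types_;
        unsigned target_;
        std::mt19937 rng_;
    };

    int main() {
        ReactiveGenerator gen({"read", "write", "retry"}, 100);
        RuntimeCoverage cov = {{"read", 100}, {"write", 40}, {"retry", 0}};
        // "retry" and "write" are under target, so they are chosen more often.
        for (int i = 0; i < 5; ++i) std::cout << gen.next(cov) << "\n";
        return 0;
    }

A coverage-directed generator would extend the same idea by querying richer coverage data rather than simple per-type hit counts.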
these internal functions is to step back from the functionality, layer by layer, to determine which sequence of operations needs to occur for the target to be stimulated. The ability to traverse the design in this way is limited to formal verification and symbolic simulation techniques, so the solution to this problem will probably be a combination of techniques drawn from simulation, constrained randomization, formal model checking, and symbolic simulation.
Advanced verification teams have begun to use run-time coverage information and reactive testbenches. One issue these teams have run into is the difficulty of keeping the tools up to date with coverage information. Most complex design projects have hundreds or even thousands of different tests to run. If an environment or test is going to use coverage information from these past tests, there is often a chicken-and-egg problem: the user has to run all the other tests to collect the data before the next test can use it. If a test fails or a change is made to the design or testbench, the tests have to be repeated before the information can be used again. Most teams run their tests simultaneously on server farms, which requires close coordination and synchronization of the different active processes. The lesson learned from these advanced teams is to fully understand the intended use model before adopting an advanced run-time coverage environment.
Chapter 16
Hardware-Based Verification
Advantages of hardware-software co-verification
As more and more electronic products include software content, designers face serious project delays if they wait for first silicon to begin software debugging. Waiting also means that a serious system problem might not be found until after first silicon, requiring a costly respin and delaying the project for two to three months. Increasingly, designers are turning to hardware-software co-verification, concurrently verifying the hardware and software components of the system design, to meet demanding time-to-market requirements. At a minimum, this means starting software debugging as soon as the IC is taped out rather than waiting for good silicon. But even greater concurrency is possible. In many cases, software debugging can begin as soon as the hardware design achieves some level of correct functionality. Starting software debugging early can save two to six months of product development time. There are a variety of approaches to hardware-software co-verification. This chapter addresses accelerated co-verification, since the complexity of software in today's electronic products precludes adequate testing with the performance of a software simulator.
ACCELERATED CO-VERIFICATION
There are additional benefits to starting software verification before the hardware design is frozen. If problems are found in the interface between hardware and software components, designers can make intelligent trade-offs when deciding whether to change the hardware or the software, possibly avoiding degraded product functionality, reduced performance, or increased product cost.
The steps in using co-verification with an RTL model and an Incisive emulator are:
1. Compile the software into a ROM code file.
2. Compile the hardware design for the emulator and download it.
3. Plug the emulator and software debugger into the target system, if one is used.
Comparing Approaches
The table below summarizes the trade-offs of the three approaches
explained above.
Table 4. Comparison of Approaches

Approach                          Type of Model   Level of Software That Can Be Verified
ISS, simulator, and accelerator   ISS             Drivers and diagnostics; small OS
RTL model and emulator            RTL             UNIX, Windows, RTOS, and applications
Physical model and emulator       Physical        UNIX, Windows, RTOS, and applications
There are several factors to take into account when determining which approach is best for your project: the performance required to meet your objectives, whether you will begin software debug before or after tapeout, and the amount of software you want to verify. If you need to verify only a small amount of software before working silicon is available, use logic simulation and an ISS. For a moderate amount of software, use an ISS with an Incisive simulator and accelerator. If you want to verify a large amount of software, use an RTL or physical processor model and emulation.
average time that can be saved if you start software debugging before tapeout). For rapidly changing consumer markets, the lost opportunity cost can easily reach tens of millions of dollars. There are additional benefits to using co-verification. For example, it is very helpful to have diagnostics running when the IC comes back from fabrication, because they can be used for focused testing of specific parts of the design. Without working diagnostics, you end up doing ad hoc testing of the whole IC at once, a hit-and-miss proposition.
With today's complex ICs, acceleration and emulation are practical necessities for verifying designs and software in a complete system environment with real data. The software content of electronic products is increasing exponentially and is most often the pacing item for product completion. Using acceleration or emulation for hardware-software co-verification takes advantage of the investment made in the emulator and shortens product cycles by several months. Emulation as a vehicle for hardware-software co-verification provides by far the highest performance available for this critical task, along with real-world data for comprehensive system testing.
Appendix 1
Resources
Bergeron, Janick. Writing Testbenches: Functional Verification of HDL Models, 2nd ed. Boston: Kluwer Academic Publishers, 2003. ISBN: 1402074018
Foster, Harry, Adam Krolnik, and David Lacey. Assertion-Based Design, 2nd ed. Boston: Kluwer Academic Publishers, 2003. ISBN: 1402074980
Grotker, Thorsten, Stan Liao, Grant Martin, and Stuart Swan. System Design with SystemC. Boston: Kluwer Academic Publishers, 2002. ISBN: 1402070721
Haque, Faisal, Khizar Khan, and Jonathan Michelson. The Art of Verification with Vera. Verification Central, 2001. ISBN: 0-9711994-0-X
Meyer, Andreas. Principles of Functional Verification. Newnes, 2003. ISBN: 0750676175
Muller, Wolfgang, Wolfgang Rosenstiel, and Jurgen Ruf, eds. SystemC: Methodologies and Applications. Boston: Kluwer Academic Publishers, 2003. ISBN: 1402074794
Palnitkar, Samir. Design Verification with e, 2nd ed. Boston: Kluwer Academic Publishers, 2003. ISBN: 0131413090
Sutherland, Stuart. Verilog-2001: A Guide to the New Features of the Verilog Hardware Description Language, 1st ed. Boston: Kluwer Academic Publishers, 2002. ISBN: 0792375688
Glossary
Acceleration-on-Demand The ability to move from a software simulation-based test environment to a hardware-accelerated, simulation-based test environment.
Algorithmic-based Digital Design A digital logic design that is directly developed from an algorithm or protocol and does not contain control-based operations.
Analog Behavioral Model A model of an analog circuit that represents the behavior of the implementation but does not include implementation-specific information.
Application Assertion An assertion used to specify an application-specific architectural property, such as fairness of an arbiter.
Application Coverage A measurement of the percentage of application coverage monitors that have measured an event.
Application Coverage Monitor A device to monitor the number of times an application-specific event has occurred.
Architectural Checks Checkers that verify the correct functional and performance operation of the FVP.
Assertion A codified representation of a designer's or architect's intent when creating a design. Assertions specify a property or behavior in a structured manner that can be verified to be correct.
Bottom-Up Development An approach to development that starts at low-level implementation blocks and integrates the blocks to form a system-level representation.
Control-based Digital Design A digital logic design that is developed from a specification and is not strictly based on an algorithm or protocol.
Design Hierarchy The naming of the design's hierarchical levels in a system. A system is made up of subsystems, which are made up of design blocks, which are made up of design units.
Transaction A unit of information abstracted from a lower signal-level representation that is used to represent an information transfer separate from
the mechanism of transfer.
Transaction-Level Model (TLM) A functional model of the design in which the communications interface is in the form of transactions.
Transaction Taxonomy A classification of the types of transactions used
throughout a design.
Transactor A testbench component that converts between different levels of interface abstraction, such as a signal-level interface and a transaction-level interface.
Index
A
abstraction levels 98
acceleration 50, 82
Advanced Custom Design 95, 100
advanced functional verification
characteristics of 15
managing time and resources 23
advanced verification techniques 81, 92,
112, 113
algorithmic digital subsystems 66, 87
algorithmic models 66
algorithms
developing 88
analog 95
analog designs 65
analog models 65
analog subsystems 95
APIs 149
architectural assertions 164
architectural checkers 67
architectural checks 68
assertions 48, 77, 79, 81, 113, 134, 159,
172
reusing 166
automated tools 173
automation 28
B
behavioral models 64
block development 63
block-level verification 54
blocks
hardening 86
reusing 11
bottom-up verification 79, 84
bugs, causes of 7
C
Cadence Design Systems 41, 95
complex designs
verifying 15
constraints 155
continuous-time domain 65
control digital subsystems 71
coverage 49, 81, 113, 166
coverage holes 170
D
data checks 139
data path subsystems 87
dependencies 19, 24, 42
design changes 131
digital signal processors 87
directed tests 153, 170
E
efficiency 10, 12, 142
emulation 113
equivalency checking 135
error tests 154
F
feature tests 154
flexibility 143, 166
formal verification 131
fragmentation 12, 58, 72
functional representations
reusing 17
functional verification 17
functional virtual prototypes 43
FVP 43, 53, 57, 58
costs and benefits 59
creating 53, 64
implementation level 107
integrating a subsystem 108
transaction level 107
using 61
verifying 67
H
hardware acceleration 50, 54
hardware prototypes 113
hardware-software co-verification 114, 175
HDL 13
I
in-circuit emulation 113
interface assertions 164
interface monitors 66, 148
interface tests 153
L
legacy IP 97
limited resources 10
linting 78, 132
M
mathematical techniques 132
methodology 21, 51
reusing 22
mixed-level approach 102
model checking tools 137
multiple designs 22
O
open standards 22
P
parallel processes 18
R
random tests 153, 170
constraining 155
reactive testbenches 171
real-time software applications 63
regression testing 50
resource management 25
resources 10
response checkers 148
reuse 11, 33, 46, 59, 141, 145, 159, 166
of functional representations 17
reused blocks 11
RF designs 95
risk management 20
RTL 46, 176
RTL analysis tools 132
S
scenarios 20
semi-formal verification 138
service processor applications 62
simulation 141
simulation acceleration 109
software development 62
speed 9, 12
static verification 78, 79
stimulus generation 67
stimulus generators 147
strategy 74
stress tests 154
structural assertions 164
sub-blocks, verifying 44
subsystem development 63, 71
synthesis tools 136
system integration 107
system models 44, 58
system verification 111
software-based 112
system-level design 57
T
testbenches 90, 112, 141
advanced 159
bottom-up development 145
components 147
developing 80
efficiency 142
flexibility 143
reactive 171
reuse 141
tests 152
top-down development 145
testing, completing 9
tests 152
time constraints 9
time management 23, 24
tools 21, 27, 76, 135, 137, 138
top-down verification 80, 83
transaction taxonomies 46
transaction-level models 63, 64
transaction-level verification 46
transactions 46
transactors 147
U
Unified Verification Methodology 41
unifying verification process 13
user interface applications 62
UVM 53
V
verification
completing 9
coordinating with other tasks 17
fragmentation in 12
multiple designs 22
parallelizing 18
reuse 11
separating from design 17
tests 152
unifying process 13
verification engineers 10
verification flows 12
verification planning 72
verification strategy 74
verification tools 21