Writing Verification Testplan for Module Level Design


In verification, our first task is often to make a testplan. This feels like facing an
empty canvas, not knowing where to start or what to do with the colors.

I don't know how to handle a canvas, but we can make some progress with a testplan.
We will focus on developing a testplan for module level verification, with some
guidelines and checkpoints to help us stay on track.

Writing a testplan for module level and writing one for full chip level are two
different things. There may be overlap, but each needs a different approach. The
full chip testplan is not the topic of the text that follows.

Coming to the point, we need to draw an outline for our testplan. Let's create
separate sections to cover different aspects. I would like to make the following
sections, and each section may have further sub-sections and sub-sub-sections.

1. Sanity / Go-No-Go
2. Register & Memory Access
3. Features / Functionality

The segregation is not mutually exclusive; many areas overlap with each other. As
verification engineers, we should worry about not leaving a hole. Overlaps are
fine.
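As a rough illustration (the entries below are hypothetical placeholders, not a complete plan), the outline can be captured as a nested structure that grows sub-sections over time:

```python
# A hypothetical testplan skeleton: section -> list of sub-sections.
# Sub-sections are later expanded into individual testcase entries.
testplan = {
    "Sanity / Go-No-Go": [
        "single read/write transaction",
        "single packet receive/transmit",
    ],
    "Register & Memory Access": [
        "default value checks",
        "front door / back door combinations",
        "memory depth checks",
    ],
    "Features / Functionality": [
        "simple features",
        "composite features",
    ],
}

for section, items in testplan.items():
    print(f"{section}: {len(items)} entries")
```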
Sanity / Go-No-Go
The testcases listed here aim at the basic functioning of the module. What
constitutes 'basic' is contextual. These are the baby steps that, when perfected,
form the base for running the other tests. It is easiest to illustrate with examples:

 A single correct read and write transaction
 A packet being correctly received or transmitted

The sanity testcases serve as a basic indicator of the health of the module. These
testcases are generally used throughout the development process. Typically, we
run sanity tests:

a) After bug fixes, to ensure that the fix has not broken the basics

b) Periodically, to check that the database (RTL + TB) is compiling and running

Sanity tests are the most directed, quick-running tests. We don't aim to cover too
many features here.
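As a sketch of the first example above, a single write-then-read sanity check might look like this (the 'DUT' here is a stand-in dictionary; in a real testbench these would be driver calls into a bus-functional model):

```python
# Minimal sanity check sketch: one correct write followed by one read.
# dut_memory stands in for the design under test.
dut_memory = {}

def write(addr, data):
    dut_memory[addr] = data

def read(addr):
    return dut_memory.get(addr)

def sanity_single_rw():
    write(0x100, 0xDEADBEEF)
    data = read(0x100)
    assert data == 0xDEADBEEF, f"sanity failed: got {data}"
    return "PASS"

print(sanity_single_rw())
```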

Register & Memory Access


Most modules have some configuration registers, status registers and memories.
This section deals with them. The testcases include default value checks, read and
write accesses, front door and back door read/write combinations, memory depth
checks, etc. With some experience, you will realize that this section can be
copy-pasted from other modules along with its tests.

It is always beneficial to list the register and memory access testcases in the
testplan. These tests should be run before the sanity tests: they help uncover
bugs in configuration and status registers, which are tedious to debug through
sanity or other feature tests.
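The default value check mentioned above can be sketched as a loop over the register map, comparing each post-reset read against the documented reset value. The register map and the read stub below are hypothetical; a real map comes from the module's register specification:

```python
# Sketch of a default-value check over a hypothetical register map.
REG_MAP = {
    # name: (address, reset_value)
    "CTRL":   (0x00, 0x0000_0000),
    "STATUS": (0x04, 0x0000_0001),
    "IER":    (0x08, 0x0000_0000),
}

def read_reg(addr):
    # Stand-in for a front-door read; here it simply returns the reset value.
    return {a: r for (a, r) in REG_MAP.values()}[addr]

failures = []
for name, (addr, reset_val) in REG_MAP.items():
    got = read_reg(addr)
    if got != reset_val:
        failures.append((name, got, reset_val))

assert not failures, f"default value mismatches: {failures}"
print("all default values match")
```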

Features / Functionality
These are the very reason for the module to exist: the properties or functions that
the module needs to fulfil. We need to list all the features and functionalities
mentioned in the module architecture/micro-architecture document and the standard
specifications. In this section, we cover most of the configurations that the design
supports, along with status and interrupts. We need to include the register
configurations, and the interrupts and status registers getting triggered
functionally. We will carefully list everything that needs to be put to test. While
listing the testcases, do not worry about the implementation of the tests. Think of
it as somebody else's job, for two reasons:

1) We should not restrict our thinking with implementation constraints

2) It forces us to add more detail to the testplan about the procedure,
configuration sequence, status register checks, etc.

It should be easy for somebody else to implement the testplan, i.e. write the
testcases.

Remember: Don’t worry about how to write the tests while writing the testplan!

We will divide this section into two categories:

 Simple features
 Composite features

Simple features: These are straightforward functionalities broken down to the base
level. We find these directly mentioned in the relevant documents. Typical
examples of such statements are:

 Supports burst access up to 256 bytes
 Supports video resolution of 1024 x 768
 Supports clock recovery from input stream

The above are just examples; the actual list is going to be bigger, probably the
biggest amongst all the sections.

Composite features: A functionality that is made up of a number of simple features
can be termed a composite feature. It has multiple features involved, yet can be
visualized as a single functionality. I like to call these alloy features. Let's go
over some examples:

 Exclusive access – This involves an exclusive read from address A, followed by
an exclusive write to the same address A. Other accesses can happen in between
the ex-read and the ex-write.
 Packet forwarding – This one needs a proper forwarding table, established by
learning or by configuration. We need a packet hitting the forwarding table to
get forwarded to the destined port.
If the above two examples do not strike a chord, here is a vaguer one. Imagine a
robot with the functionality to travel by metro. It involves multiple tasks, such
as buying a ticket, climbing stairs, waiting for the right train, boarding and
disembarking. Having included the individual scenarios in simple features, we
should verify the features in conjunction with one another. We can club two or
more features at a time. The composite features form the base for our application
tests.
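The exclusive access example can be modeled with a simple reservation scheme: the exclusive read sets a reservation, ordinary accesses may occur in between, and the exclusive write succeeds only if the reservation still holds. The semantics below are a deliberately simplified illustration, not any standard's definition:

```python
# Simplified model of the exclusive-access composite feature.
memory = {0xA0: 5}
reservation = None

def ex_read(addr):
    global reservation
    reservation = addr          # exclusive read sets a reservation
    return memory[addr]

def normal_write(addr, data):
    global reservation
    memory[addr] = data
    if reservation == addr:
        reservation = None      # touching the reserved location clears it

def ex_write(addr, data):
    global reservation
    if reservation != addr:
        return False            # exclusive fail
    memory[addr] = data
    reservation = None
    return True

val = ex_read(0xA0)
normal_write(0xB0, 7)           # unrelated access in between: reservation survives
ok = ex_write(0xA0, val + 1)
print("exclusive write ok:", ok, "value:", memory[0xA0])
```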

Additional Sections
Performance tests
These are not a special category of tests; they are a kind of feature testcase,
with more emphasis on the advertised numbers. A separate section is created to
mark their importance. The performance parameters can be found in the datasheet
or some marketing material for the module.

The examples are:

 Handling of incoming traffic of n Gbps
 Average read/write latency of n clks
 Video processing capacity at 50 fps

These tests can stress the module. They are performed towards the end, when all
the features have tested OK.
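The arithmetic behind a throughput check is simple: count the payload bits moved during a measurement window and compare against the advertised rate. The numbers below are made up for illustration:

```python
# Back-of-the-envelope throughput check of the kind a performance test
# automates. A monitor would supply bytes_moved and the window length.
ADVERTISED_GBPS = 10.0

bytes_moved = 1_250_000          # payload observed by the monitor
window_ns = 1_000_000            # measurement window in nanoseconds

achieved_gbps = (bytes_moved * 8) / window_ns   # bits per ns == Gbps
print(f"achieved {achieved_gbps:.2f} Gbps")
assert achieved_gbps >= ADVERTISED_GBPS, "throughput below advertised rate"
```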

Error cases
This section lists the unhandled but possible error cases. The handled error cases
fall under the features section. We list the unhandled error cases separately
because the expectation is different from that of the feature tests. Since a
graceful handling mechanism is not specified in the design, the minimum
expectation is that the module does not hang on seeing these errors and can
return to normal working afterwards.

The cases could be:

 Packet arrival rate greater than supported
 Internal memory soft corruption of single and multiple bits
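The soft corruption case above amounts to flipping bits in a stored word, which is a one-line XOR. In a real bench this would be done through the RAM model's backdoor; the sketch below just shows the bit manipulation:

```python
# Inject single-bit or multi-bit corruption by XOR-ing selected bits.
def flip_bits(word, bit_positions):
    for bit in bit_positions:
        word ^= 1 << bit
    return word

stored = 0b1010_1010
single = flip_bits(stored, [0])      # single-bit corruption
multi = flip_bits(stored, [1, 6])    # multi-bit corruption

print(f"{stored:08b} -> single {single:08b}, multi {multi:08b}")
```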
Do we have the required documents?
All our efforts are in vain if we do not have complete information. Architecture
and specification documents are the primary sources of information for
verification. Often one document is not sufficient: these documents may
implicitly or explicitly refer to some standard documents, previous module
implementations, etc. We may also refer to the design document to find out about
implementation issues. It is a good idea to read all the relevant documents and
think through the implementation of the module. If we can visualize the complete
implementation, we probably have a sufficient set.

Self-Review
We will definitely go for a review with designers, architects and peers, but it is
important that we first do one ourselves. Having written the testplan first hand,
I find myself biased and running out of patience to go over the same testplan
again. Here a checklist comes in handy.

 Numbers – Check that all the numbers in the documents are put to test. Search
for just the numbers, e.g. a clk frequency of 125 MHz, 24 ports, a latency of
16 clocks, 1000 Mbps, etc. Identify the testcase in the testplan that covers
each one. The testcase can be in any section.
 Statements – All statements in the documents should be looked at suspiciously.
Put a question mark after each statement and look for the testcase in the
testplan that tests it. E.g. the document says, "Small packets are dropped."
Let me test it.
 Configuration registers – Try to map each configuration register field to a
testcase that verifies it.
 Status registers – Some testcase should be reading and checking the status
during or at the end of simulation. These functional checks of configuration
and status registers are distinct from the access tests of Section 2,
Register & Memory Access.
 Pin functionality – Ensure that we have tests for the module's interface pin
functions.
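The first checklist item lends itself to a small helper: pull number-plus-unit phrases out of a spec excerpt so each can be ticked off against a testcase. The regex, the unit list and the sample text below are illustrative, not from any real document:

```python
import re

# Extract number+unit phrases from a spec excerpt for checklist review.
spec_text = """The core runs at a clk frequency of 125 MHz and exposes
24 ports. Read latency is 16 clocks at 1000 Mbps line rate."""

numbers = re.findall(r"\d+(?:\.\d+)?\s*(?:MHz|Mbps|Gbps|clocks|ports|fps)?",
                     spec_text)
numbers = [n.strip() for n in numbers if n.strip()]
print(numbers)
```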

With all the necessary goals set properly, we are ready to fire tests at the
design, to break it and thereby strengthen it.
