VLSI DESIGN AUTOMATION

MODULE 1

Review of VLSI Design Automation Tools

1. Algorithmic and System Design

The designer is mainly concerned with the initial algorithm to be implemented in hardware and works with a purely behavioral description of it. Some designers use general-purpose programming languages like C or Pascal at this stage. However, it is becoming increasingly popular to use so-called hardware description languages (HDLs). Being specially created for this purpose, they allow a more natural description of hardware. Currently, the languages VHDL and Verilog are the most widely used.
Simulation helps in the detection of errors in the specification and allows the
comparison of the highest-level description with more detailed versions of the
designs that are created during the design process. A second application of formal
description is the possibility of automatic synthesis: a "synthesizer" reads the
description and generates an equivalent description of the design at a much lower
level. Such a low-level description may e.g. consist of a set of interconnected
standard cells. The degree of abstraction at which the input to the synthesis tool is
given determines the power of the tool: the higher this level, the fewer the
design steps to be performed by the human designer. The synthesis from the
algorithmic behavioral level to structural descriptions consisting of arithmetic
hardware elements, memories and wiring is called high-level synthesis.
Tools exist to capture part of the specification in a graphical way, e.g. in the case
that structural information is available. Another situation in which graphical entry may be preferable to text is the specification of finite state machines (FSMs). Hierarchical FSMs, in which some states may themselves be FSMs, are especially useful for the specification of so-called control-dominated applications. The tools generally have the ability to convert the graphical
information into a textual equivalent expressed in a language like VHDL that can
be accepted as input by a synthesis tool.
Design starting from a system specification will normally not result in a single
ASIC. It is much more realistic that the final design for a complex system will
consist of several chips, some of which are programmable. In such a case, the
design process involves decisions on which part of the specification will be realized
in hardware and which in software. Such a design process is called hardware-
software co-design.
Mapping the high level descriptions of the software to the low-level instructions of
the programmable hardware is a CAD problem of its own and is called code
generation. One possibility for the verification of the correctness of the result of co-
design is simulation. Because the simulator should be able to cope simultaneously
with descriptions of hardware and software, this process is called hardware-
software co-simulation.
2. Structural and Logic Design

In many situations, it is not possible to provide a high-level description of a circuit and leave the rest of the design to synthesis tools: the tools might not be able to cope with the desired behavior or may produce results whose quality is unacceptable. In such a case, the designer can use a schematic editor program. This
CAD tool allows the interactive specification of the blocks composing a circuit and
their interconnections by means of a graphics computer screen, mouse, menus etc.
Often, the schematics constructed in this way are hierarchical: a block at one level
is an interconnection of blocks one level lower. The blocks at the lowest level are
normally elementary logic gates (e.g. a 3-input NAND or a D-flip flop), although
more abstract (e.g. an adder) or more detailed (e.g. a transistor) blocks could form
the lowest level as well. Once the circuit schematics have been captured by an
editor, it is a common practice to verify the circuit by means of simulation.
A topic closely related to simulation is fault simulation: one checks whether a set of
test vectors or test patterns (input signals used for testing) will be able to detect
faults caused by imperfections of the fabrication process. Going one step further,
one could let the computer search for the best set of test vectors by using a tool for
automatic test-pattern generation (ATPG).
Logic synthesis is concerned with the generation and optimization of a circuit at the
level of Boolean gates. In this field, three different types of problems can roughly
be distinguished: synthesis of two-level combinational logic, synthesis of multilevel combinational logic, and synthesis of sequential logic. Once the circuit
is estimated to satisfy the optimization constraints, it is converted into a circuit
composed of actually available library cells by a technology mapping tool.

3. Transistor-level Design

Logic gates are composed of transistors. Designing at the transistor level requires
its own design tools, most of which are simulation tools. Depending on the
accuracy required, transistors can be simulated at different levels. At the switch
level, transistors are modelled as ideal bidirectional switches and the signals are
essentially digital, although the model is often augmented to deal with different
signal strengths, capacitances of nodes, etc. At the timing level, analog signals are
considered, but the transistors have simple models (e.g. piecewise linear functions).
At the circuit level, more accurate models of the transistors are used which often
involve nonlinear differential equations for the currents and voltages. The equations
are then solved by numerical integration. The more accurate the model, the more
computer time is necessary for simulation and, therefore, the lower the maximum
size of the circuit that can be simulated in reasonable time. The fact that an
integrated circuit will be realized in a mainly two-dimensional physical medium has
implications for design decisions at many levels. Circuit extraction is especially
important when performing full-custom design. In the case of standard cells
(semicustom design), the so-called characterization of the cells, i.e. the
determination of their timing behavior is done once by the library developer rather
than by the designer who makes use of the library.

4. Layout Design

The problem is to compose the layout of the entire integrated circuit. It is often
solved in two stages. First, a position in the plane is assigned to each sub-block,
trying to minimize the area to be occupied by interconnections. This is called the
placement problem. The next step is to generate the wiring patterns that realize the
correct interconnections between these blocks. This is called the routing problem.
The goal of placement and routing is to achieve minimal chip area while satisfying any given constraints. Constraints may e.g. be derived from timing
requirements.
The partitioning problem concerns the grouping of the sub-blocks in a structural
description such that those sub-blocks that are tightly connected are put in the same
group while the number of connections from one group to the other is kept low.
This problem is not strictly a layout problem. Partitioning can also help to solve the
placement problem.
The simultaneous development of structure and layout is called floor-planning. In a
top-down design methodology, when making a transition of a behavioral
description to a structure, one also fixes the relative positions of the sub-blocks.
Through floor-planning, layout information becomes available at early stages of the
design. It gives early feedback on e.g. long wires in the layout and may lead to a
reconsideration of the decisions on structural decomposition. The floor-planning
problem is closely related to the placement problem with the difference that
detailed layout information is available in placement whereas floor-planning has
mainly to deal with estimations.
A cell compiler generates the layout for a network of transistors. A problem
somewhat related to cell compilation is module generation. A module is normally
understood to be a hardware block, the layout of which can be composed by an
arrangement of cells from a small subset. These elementary cells are sometimes
called microcells. They have a complexity of around 10 transistors.
Working at the mask level gives the freedom of manipulating the layout at the
lowest level, but the increased freedom is also a source of errors. In a correct
design, the mask patterns should obey some rules, e.g. on minimal distances and the
minimal widths, called design rules. Tools that analyse a layout to detect violations
of these rules are called design-rule checkers. A somewhat related tool that also
takes the mask patterns as its input is the circuit extractor. It constructs a circuit of
transistors, resistors and capacitances that can be simulated. Both design-rule
checking and circuit extraction lean on knowledge from the field called
"computational geometry".
One serious disadvantage of full-custom design is that the layout has to be
redesigned when the technology changes. As a remedy to this problem and to speed
up the design time in general, symbolic layout has been proposed. In symbolic
layout widths and distances of mask patterns are irrelevant. What matters is the
positions of the patterns relative to each other, the so called topology of the design.
Symbolic layout can only be used in combination with a compactor. This is a tool
that takes the symbolic description, assigns widths to all patterns and spaces the
patterns such that all design rules are satisfied.

5. Verification Methods

There are three ways of checking the correctness of an integrated circuit without
actually fabricating it:

Prototyping, i.e. building the system to be designed from discrete components rather than one or a few integrated circuits. A form of prototyping called bread-boarding is out of use nowadays, both because of the huge number of components
that would be needed and the fact that the behavior of devices on a chip is totally
different from that of discrete components when it comes to delays, parasitics, etc.
However, prototyping using programmable devices such as field-programmable
gate arrays is quite popular as a means to investigate the algorithms that a system
should realize. This type of prototyping is called rapid system prototyping and is
especially used in audio and video processing. The prototype is supposed to show
the effects of algorithms in real time, meaning that the computations should be as
fast as in the final design. The advantage of prototyping over simulation is that
simulation will in general not operate in real time. A prerequisite for rapid system
prototyping is the availability of a compiler that can "rapidly" map some algorithm
on the programmable prototype.

Simulation, i.e. making a computer model of all relevant aspects of the circuit,
executing the model for a set of input signals, and observing the output signals.
Simulation has the disadvantage that it is impossible to have an exhaustive test of a
circuit of reasonable size, as the set of all possible input signals and internal states
grows too large. One has to be satisfied with a subset that gives sufficient
confidence in the correctness of the circuit. So, simulation that does not check all
possible input patterns and internal states always includes the risk of overlooking
some errors.

Formal verification, i.e. the use of mathematical methods to prove that a circuit is
correct. A mathematical proof, as opposed to simulation, gives certainty on the
correctness of the circuit. The problem is that performing these proofs by hand is
too time consuming. Therefore, the attention is focused on those techniques that can
be performed by computers. Formal verification methods consider different aspects
of VLSI design. The most common problem is to check the equivalence of two
descriptions of a circuit, especially a behavioral description and its structural
decomposition. In this context the behavioral description is called the specification
and the structural one its implementation.

6. Design Management Tools

There are tools that are not directly related to the progress of the design itself, but
are indispensable in a CAD system. First of all, CAD tools consume and produce
design data in different design domains and at different levels of abstraction. These
data have to be stored in databases. The quantity of data for a VLSI chip can be
enormous and appropriate data management techniques have to be used to store and
retrieve them efficiently. Besides, design is an iterative activity: a designer might
modify a design description in several steps and sometimes discard some
modifications if they prove unsatisfactory. Version management allows for the possibility
of undoing some design decisions and proceeding with the design.
A famous standard format is EDIF (Electronic Design Interchange Format).

Algorithmic Graph Theory and Computational Complexity

Algorithmic graph theory, as opposed to pure graph theory, emphasizes the design
of algorithms that operate on graphs, instead of concentrating on mathematical
properties of graphs and theorems expressing those properties. The distinction
between the two is not very sharp, however, and algorithmic graph theory certainly
benefits from results in pure graph theory.

Computational complexity refers to the time and memory required by a certain algorithm as a function of the size of the algorithm's input. The concept applies to algorithms in general and is not restricted to graph algorithms.
Two types of computational complexity are distinguished: time complexity, which
is a measure for the time necessary to accomplish a computation, and space
complexity which is a measure for the amount of memory required for a
computation. Space complexity is often given less importance than time
complexity.

The big-O notation describes an upper bound on the growth of a function: f(n) = O(g(n)). If one wants to describe a lower bound, the big-omega notation is used: f(n) = Ω(g(n)).
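For completeness, the standard formal definitions behind these notations (stated here as a sketch; they are not spelled out in the text above) are:

f(n) = O(g(n)) if there exist constants c > 0 and n0 such that f(n) <= c * g(n) for all n >= n0.
f(n) = Ω(g(n)) if there exist constants c > 0 and n0 such that f(n) >= c * g(n) for all n >= n0.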

The time complexity of a computation is a function that gives the number of elementary computational steps executed for inputs of a specific size. Normally, it
is not only the size of the input that determines the number of computational steps:
conditional constructs in the algorithm are the reason that the time required by the
algorithm is different for different inputs of the same size. Therefore one works
with the worst-case time complexity, assuming that the condition that requires the
largest number of computational steps will be true for an input of a given size.
Apart from the worst-case time complexity, there are other time complexity
measures. For example, the average-case time complexity is the expected value of
the computation time for a given distribution of the distinct inputs of the algorithm.
Actually, average-case time complexity has a higher practical value than worst-case
time complexity. Its analysis is, however, more complex.

Depending on the magnitude of the input size, a number of different criteria can be
used for qualifying an algorithm:

Polynomial vs. exponential order. As an exponential function grows faster than any
polynomial and the exponents of a polynomial tend to be small, an algorithm with a
polynomial time complexity is to be preferred over an exponential algorithm.

Linear vs. quadratic order. Suppose that the input size of an algorithm is determined by the number of transistors in a circuit and that the algorithm has to be applied to a VLSI circuit containing some 10^6 transistors. Then, running an algorithm with a linear time complexity is feasible on a computer with a realistic speed, but an algorithm with quadratic time complexity is not: it would require on the order of 10^12 elementary steps instead of 10^6.

Sublinear order. When the input of an algorithm is structured in some way, an algorithm might find the solution to some problem without processing all input
elements separately. An extreme case is when an algorithm's computation is
completely independent of the input size: one says that the algorithm operates in
constant time and its complexity is written as O(1).

Graph Algorithms:
a. Depth First Search
b. Breadth First Search
c. Dijkstra's Shortest-path Algorithm
d. Prim's Algorithm for Minimum Spanning Trees
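To give the flavour of these algorithms, here is a minimal Python sketch of Dijkstra's shortest-path algorithm; the adjacency-dictionary representation and the function name are assumptions made for this example, not taken from the text.

import heapq

def dijkstra(graph, source):
    # graph: dict mapping vertex -> list of (neighbour, weight) pairs
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]          # priority queue of (distance, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:           # stale queue entry, skip it
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:   # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Example usage:
# g = {'a': [('b', 2), ('c', 5)], 'b': [('c', 1)], 'c': []}
# dijkstra(g, 'a')  ->  {'a': 0, 'b': 2, 'c': 3}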

Tractable and Intractable problems:

Tractable Problem: A problem that is solvable by a polynomial-time algorithm. The upper bound is polynomial.
Here are examples of tractable problems (ones with known polynomial-time
algorithms):
– Searching an unordered list
– Searching an ordered list
– Sorting a list
– Multiplication of integers (even though there’s a gap)
– Finding a minimum spanning tree in a graph (even though there’s a gap)
Intractable Problem: a problem that cannot be solved by a polynomial-time algorithm.
The lower bound is exponential.
From a computational complexity stance, intractable problems are problems for which
there exist no efficient algorithms to solve them.
Most intractable problems have an algorithm that provides a solution, and that
algorithm is the brute-force search.
This algorithm, however, does not provide an efficient solution and is, therefore, not
feasible for computation with anything more than the smallest input.
Examples
Towers of Hanoi: we can prove that any algorithm that solves this problem must have a worst-case running time that is at least 2^n − 1.
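A minimal Python sketch illustrating this bound, based on the standard recurrence moves(n) = 2*moves(n-1) + 1 (the recurrence itself is not stated in the text above):

def hanoi_moves(n):
    # Move n disks: move n-1 disks aside, move the largest disk, move the n-1 disks back on top.
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1

# hanoi_moves(3) == 7, hanoi_moves(10) == 1023, i.e. 2**n - 1 in general.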

Eulerian Cycle:
 All vertices with non-zero degree are connected.
 All vertices have even degree.

Eulerian Path:
 All vertices with non-zero degree are connected.
 Exactly zero or two vertices have odd degree (all other vertices have even degree).
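A small Python sketch that applies these conditions to an undirected graph stored as a dictionary of neighbour sets (the representation and function name are illustrative assumptions):

def eulerian_type(graph):
    # graph: dict mapping vertex -> set of neighbouring vertices (undirected)
    nonzero = [v for v in graph if graph[v]]
    if nonzero:
        # check that all vertices of non-zero degree are connected (simple DFS)
        seen, stack = set(), [nonzero[0]]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(graph[u] - seen)
        if set(nonzero) - seen:
            return "neither"
    odd = sum(1 for v in graph if len(graph[v]) % 2 == 1)
    if odd == 0:
        return "Eulerian cycle"
    if odd == 2:
        return "Eulerian path"
    return "neither"

# Example: a triangle has an Eulerian cycle.
# g = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
# eulerian_type(g)  ->  "Eulerian cycle"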
MODULE 2

Layout Compaction

1. Design Rules:

The mask patterns that are used for the fabrication of an integrated circuit have to
obey certain restrictions on their shapes and sizes. These restrictions are called the
design rules. Sticking to the design rules decreases the probability that the
fabricated circuit will not work due to short circuits, disconnections in wires,
parasitics, etc. The shape of the patterns is often restricted to rectilinear polygons,
i.e. polygons that are made of horizontal and vertical segments only. Some
technologies also allow 45-degree segments in polygons, segments that are parallel
to the lines y = x or y = -x on an x-y plane. There are design rules for layout
elements located in the same fabrication layer and rules for elements in different
layers. If patterns in two specific layers are constrained by one or more design
rules, the layers are said to interact. For example, polysilicon and diffusion are
interacting layers as their overlapping creates a transistor, whereas polysilicon and
metal form non-interacting layers (if one ignores parasitic capacitances). Design
rules can be quite complex. However, most of them can be expressed as minimum-
distance rules. As the minimum feature size that can be realized on a chip is subject
to continual change, distances are often expressed in integer multiples (or small
fractions) of a relative length unit, the λ, rather than absolute length units. In this
way, designers can deal with simple expressions independent of actual length
values. This means that all mask patterns are drawn along the lines of a so-called
lambda grid.

The most common types of minimum-distance rules are:


 Minimum width: a pattern in a certain layer cannot be narrower than a certain
distance.
 Minimum separation: two patterns belonging to the same layer or to different but
interacting layers cannot be positioned closer to each other than a certain
distance; this is also true when the rectangles are diagonally separated.
 Minimum overlap: a pattern in one layer located on top of a pattern in another
interacting layer should have a minimal overlap.
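As an illustration, a minimal Python sketch of a minimum-separation check between two rectangular patterns, using Euclidean distance so that the diagonal case is covered (the rectangle representation and function name are assumptions for this example):

def min_separation_violation(r1, r2, min_sep):
    # Rectangles given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    # Returns True if the two patterns are closer than min_sep,
    # also covering the diagonally separated case mentioned above.
    dx = max(r1[0] - r2[2], r2[0] - r1[2], 0)   # horizontal gap (0 if overlapping)
    dy = max(r1[1] - r2[3], r2[1] - r1[3], 0)   # vertical gap (0 if overlapping)
    return (dx ** 2 + dy ** 2) < min_sep ** 2   # Euclidean separation check

# Example: two 1x1 squares whose corners are about 1.4 units apart diagonally
# violate a 2-lambda separation rule:
# min_separation_violation((0, 0, 1, 1), (2, 2, 3, 3), 2)  ->  True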

2. Problem Formulation:

Application of Compaction –
Layout compaction can be applied in four situations:
 Converting symbolic layout to geometric layout.
 Removing redundant area from geometric layout.
 Adapting geometric layout to a new technology. A new technology means that
the design rules have changed; as long as the new and old technologies are
compatible, this adaptation can be done automatically, by means of so-called
mask-to-symbolic extraction. In such a case geometric layout in the old
technology is converted to a symbolic layout and then the design rules of the
new technology are used for the generation of the new geometric layout.
 Correcting small design rule errors. If there are methods to put layout elements
closer to each other to remove redundant space, it is reasonable to assume that
pulling layout elements apart when they are too close to each other can be done
similarly. This is true as long as the layout with design-rule errors is topologically correct, that is, the relative ordering of the rectangle edges in interacting layers is the same as in the correct design.

Informal Problem Formulation –
A layout is considered to consist of rectangles. However, not all rectangles are the
same. Basically, the rectangles can be classified into two groups: rigid rectangles and
stretchable rectangles. Rigid rectangles correspond to transistors and contact cuts
whose length and width are fixed. When they are moved during a compaction process,
their lengths and widths do not change. Stretchable rectangles correspond to wires. In
principle the width of a wire cannot be modified. The length of a wire, however, can
be changed by compaction.
Layout is essentially two-dimensional and layout elements can in principle be moved
both horizontally and vertically for the purpose of compaction. When one dimensional
compaction tools are used, the layout elements are only moved along one direction
either vertically or horizontally. Two dimensional compaction tools move layout
elements in both directions simultaneously. Theoretically, only two-dimensional
compaction can achieve an optimal result. However, this type of compaction is NP-
complete. On the other hand, one-dimensional compaction can be solved optimally in polynomial time. As two-dimensional compaction is NP-complete and exact as well as heuristic algorithms to solve it are quite complex, most practical compaction tools are based on repeated one-dimensional compaction.

Graph-Theoretical Formulation –
In one-dimensional, say horizontal, compaction a rigid rectangle can be represented by
one x-coordinate and a stretchable one by two. For the purpose of the algorithms to be
explained, it is assumed that there are n distinct x-coordinates. They will be indicated
as x1, x2, . . . , xn. A minimum-distance design rule between two rectangle edges can
now be expressed as an inequality:
xj – xi >= dij        (1)

A so-called constraint graph G(V, E) can be constructed in the following way (a sketch of this construction in code follows the list):

 The vertex set V is composed by associating a vertex vi with each variable xi that occurs in an inequality.
 The edge set E is composed of edges (vi, vj) with weight w((vi, vj)) = dij for each inequality xj - xi >= dij.
 There is a source vertex v0, located at x = 0. So, there are n + 1 vertices in total: v0, v1, . . . , vn. All layout elements are assumed to have a positive x-coordinate. This is incorporated in the graph by edges from the source vertex to those vertices that do not have any other vertices constraining them at the left.
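A minimal Python sketch of this construction, assuming the inequalities are given as (i, j, d) triples and the graph is stored as an adjacency list (data layout and function name are illustrative, not prescribed by the text):

def build_constraint_graph(n, inequalities):
    # n: number of x-coordinates x1..xn (vertex 0 is the source v0 at x = 0).
    # inequalities: list of (i, j, d) triples, each meaning  xj - xi >= d.
    # Returns an adjacency list: graph[i] is a list of (j, weight) edges.
    graph = {v: [] for v in range(n + 1)}
    constrained_from_left = set()
    for i, j, d in inequalities:
        graph[i].append((j, d))          # edge (vi, vj) with weight d
        constrained_from_left.add(j)
    # Vertices with no constraint on their left get an edge from the source v0
    # with weight 0, anchoring them with respect to the origin.
    for v in range(1, n + 1):
        if v not in constrained_from_left:
            graph[0].append((v, 0))
    return graph

# Example: two rectangle edges, x2 must be at least 3 to the right of x1:
# build_constraint_graph(2, [(1, 2, 3)])
#   ->  {0: [(1, 0)], 1: [(2, 3)], 2: []}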

A constraint graph derived from only minimum-distance constraints has no cycles. It is called a directed acyclic graph, often denoted by the abbreviation DAG.

Computing the lengths of the longest paths to all vertices in the constraint graph results
in a solution for the one-dimensional compaction problem.

Maximum-Distance Constraints –
Maximum-distance constraints can in general be written as:

xj – xi <= cij

where cij >= 0. This can also be written as:

xi – xj >= -cij

The last inequality has the same form as Inequality (1) and can be represented in the constraint graph by an edge (vj, vi) with weight dji = -cij. The addition of this type of edges can create cycles in the constraint graph. In the presence of cycles, the solution of the compaction problem still amounts to computing the lengths of the longest paths.

3. Algorithms:

Longest Path Algorithm for DAGs –
The longest-path problem for DAGs can be solved efficiently by an algorithm that
is quite similar to breadth-first search.

https://youtu.be/jdTnoCBSOVM

The longest-path algorithm presented has a time complexity of O(|E|).
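As an illustration, a minimal Python sketch of such a longest-path computation on the constraint graph, processing a vertex once all its predecessors are done, much as in breadth-first search (representation and names are assumptions for this example):

from collections import deque

def dag_longest_paths(graph, source=0):
    # graph: dict vertex -> list of (successor, weight) edges of a DAG.
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w, _ in graph[v]:
            indegree[w] += 1
    dist = {v: float('-inf') for v in graph}
    dist[source] = 0
    ready = deque([v for v in graph if indegree[v] == 0])
    while ready:
        u = ready.popleft()
        for w, d in graph[u]:
            if dist[u] + d > dist[w]:      # keep the longest path to w
                dist[w] = dist[u] + d
            indegree[w] -= 1
            if indegree[w] == 0:           # all predecessors of w processed
                ready.append(w)
    return dist

# For the constraint graph {0: [(1, 0)], 1: [(2, 3)], 2: []} built above,
# dag_longest_paths(graph)  ->  {0: 0, 1: 0, 2: 3},
# i.e. the minimal legal x-coordinates of the layout elements.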

Longest path in graphs with cycles –
Two cases can be distinguished:
 The graph only contains negative cycles, i.e. the sum of the edge weights
along any cycle is negative.
 The graphs contain positive cycles. The problem for graphs with positive
cycles is NP-hard. However, a constraint graph with positive cycles
corresponds to a layout with conflicting constraints. Such a layout is called
overconstrained and is impossible to realize. So, the best to be done in such a
case is to detect the existence of positive cycles. They can be detected in
polynomial time.
LIAO-WONG Algorithm –
Liao and Wong have proposed an algorithm that partitions the edge set E of the
constraint graph G(V, E) into two sets Ef and Eb. The edges in Ef have been obtained from the minimum-distance inequalities and are called forward edges. The edges in Eb
correspond to maximum-distance inequalities and are called backward edges.
As the DAG longest-path algorithm has a time complexity of O(|Ef|) and is called at most |Eb| + 1 times, the Liao-Wong algorithm has a time complexity of O(|Eb| × |Ef|). This
makes the algorithm interesting in cases when the number of backward edges is
relatively small.

Bellman-Ford Algorithm –

https://youtu.be/FtN3BYH2Zes

The algorithm does not discriminate between forward and backward edges. It is
comparable to the longest path algorithm for DAGs with the difference that several
iterations through the graph are necessary before the lengths of the longest paths have
been computed.
The time complexity of the Bellman-Ford algorithm is O(n × |E|) as each iteration visits all edges at most once and there are at most n iterations. If the graph is dense, i.e. the number of edges is O(n^2), this would mean a worst-case time complexity of O(n^3). However, under assumptions that are realistic for compaction, the average time complexity turns out to be O(n^1.5).
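A minimal Python sketch of the Bellman-Ford idea adapted to the longest-path formulation used here, including detection of positive cycles (overconstrained layouts); the representation and names are assumptions for this example, and all vertices are assumed reachable from the source, as in a compaction constraint graph:

def bellman_ford_longest(graph, n, source=0):
    # graph: dict vertex -> list of (successor, weight) edges; vertices 0..n.
    # Longest-path variant of Bellman-Ford: maximize instead of minimize.
    dist = {v: float('-inf') for v in graph}
    dist[source] = 0
    for _ in range(n + 1):                 # at most n+1 passes over all edges
        changed = False
        for u in graph:
            if dist[u] == float('-inf'):
                continue
            for w, d in graph[u]:
                if dist[u] + d > dist[w]:  # relaxation step for longest paths
                    dist[w] = dist[u] + d
                    changed = True
        if not changed:
            return dist                    # converged: longest paths found
    # Still changing after n+1 passes: a positive cycle exists,
    # i.e. the layout is overconstrained.
    raise ValueError("positive cycle: overconstrained layout")

# On a constraint graph without backward edges this returns the same result as
# the DAG algorithm; with backward (maximum-distance) edges it still works as
# long as all cycles are negative.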

Placement & Partitioning

1. Circuit representation:

The structural description of an electric circuit is a central issue in design automation. It is the input of tools like placement, simulation, etc., while it is the
output of e.g. tools for logic and high-level synthesis. Consider, as an example, the schematics of an RS-latch constructed from two NAND gates. In the schematics, one can distinguish the two NAND gates g1 and g2, two input terminals S and R, two output terminals Q and Q' and the wires connecting the
gates and the terminals. Besides, a complete description of the circuit should
indicate to which specific input or output a wire is connected. Obviously, a data
model of an electric circuit should correctly deal with all issues mentioned for the
example.
The data model proposed here consists of the three structures cell, port and net. A
cell is the basic building block of a circuit. A NAND gate is an example of a cell.
The point at which a connection between a wire and a cell is established is called a
port. So, a cell has one or more ports. The wire that electrically connects two or
more ports is a net. So, a set of ports is associated with each net and a port can only
be part of a single net.
A cell in a circuit is an instance of a master cell. The master contains all
information that all cells of a specific type, e.g. all NAND gates, have in common.
The term instance refers to each occurrence of the cell in the circuit. one property
stored in the master is, of course, the name of the cell: "NAND". Another property
is a list of its inputs and outputs. Still other properties could be related to electrical
properties, such as the switching delay or layout properties, such as width and
height.
Any electric circuit communicates with the external world in one way or the other
through its terminals. These terminals cannot be directly incorporated in the data
model just presented. For a consistent modeling, pseudo cells called input cells and
output cells are introduced. An input cell has a single port through which it sends a
signal to the circuit and an output cell has a single port through which it receives a
signal from the circuit.
It is quite straightforward to derive a graph model in the form of a tripartite graph, a
bipartite graph or a clique model of an electric circuit from the data model. A
complete circuit can e.g. be converted into a single cell by assigning the circuit's
contents to a new master.
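A minimal Python sketch of the cell/port/net data model described above; the class and attribute names are chosen for illustration, and the master is reduced to a name, whereas a real master would also store inputs, outputs, delays, layout size, etc.:

class Net:
    # A net electrically connects two or more ports.
    def __init__(self, name):
        self.name = name
        self.ports = []

class Port:
    # The point where a wire (net) connects to a cell; belongs to one net only.
    def __init__(self, name, cell):
        self.name = name
        self.cell = cell
        self.net = None

class Cell:
    # An instance of a master cell; the master holds the shared properties.
    def __init__(self, master, instance_name):
        self.master = master            # e.g. "NAND" (plus delay, size, ... in a full model)
        self.name = instance_name
        self.ports = {}

    def add_port(self, port_name):
        port = Port(port_name, self)
        self.ports[port_name] = port
        return port

def connect(net, *ports):
    # Attach ports to a net, enforcing that a port belongs to a single net.
    for p in ports:
        assert p.net is None, "a port can only be part of a single net"
        p.net = net
        net.ports.append(p)

# Sketch of the RS-latch example: g1 and g2 would be instances of the NAND
# master; the input terminals S, R and output terminals Q, Q' would be modelled
# as pseudo (input/output) cells with a single port each.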

2. Placement Algorithms
Placement algorithms can be grouped into two categories:
 Constructive placement: the algorithm is such that once the coordinates of a
cell have been fixed they are not modified anymore.
 Iterative placement: all cells have already some coordinates and cells are
moved around, their positions are interchanged, etc. in order to get a new
configuration.
Most placement algorithms contain both approaches: an initial placement is obtained
in a constructive way and attempts are made to increase the quality of the placement
by iterative improvement.

The min-cut placement method uses successive application of partitioning. The steps are as follows (a rough sketch in code follows the list):

1. Cut the placement area into two pieces.
2. Swap the logic cells to minimize the cut cost.
3. Repeat the process from step 1, cutting smaller pieces until all the logic cells are placed.
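A rough Python sketch of this recursive scheme; it always cuts vertically and uses a naive greedy swap for step 2, whereas practical tools alternate cut directions and use stronger partitioning heuristics such as Kernighan-Lin (all names and data structures here are illustrative assumptions):

def min_cut_place(cells, nets, region, positions=None):
    # cells: list of cell names; nets: list of sets of cell names (one set per net);
    # region: (x1, y1, x2, y2) placement area. Returns dict: cell -> (x, y).
    if positions is None:
        positions = {}
    x1, y1, x2, y2 = region
    if len(cells) <= 1:
        for c in cells:                  # place a single remaining cell in the centre
            positions[c] = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        return positions

    def cut_cost(a, b):
        # number of nets having cells on both sides of the cut
        return sum(1 for net in nets if net & a and net & b)

    # Step 1: cut the cell set (and the placement area) into two halves.
    left = set(cells[: len(cells) // 2])
    right = set(cells[len(cells) // 2:])
    # Step 2: accept single swaps across the cut as long as the cut cost drops.
    improved = True
    while improved:
        improved = False
        for u, v in [(u, v) for u in left for v in right]:
            a, b = (left - {u}) | {v}, (right - {v}) | {u}
            if cut_cost(a, b) < cut_cost(left, right):
                left, right, improved = a, b, True
                break
    # Step 3: recurse into the two halves of the area (vertical cut in this sketch).
    xm = (x1 + x2) / 2.0
    min_cut_place(sorted(left), nets, (x1, y1, xm, y2), positions)
    min_cut_place(sorted(right), nets, (xm, y1, x2, y2), positions)
    return positions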
