
On Projection-Based Algorithms for Model-Order Reduction of Interconnects
Janet Meiling Wang, Chia-Chi Chu, Member, IEEE, Qingjian Yu, and Ernest S. Kuh, Life Fellow, IEEE

Abstract—Model-order reduction is a key technique to do fast simulation of interconnect networks. Among many model-order reduction algorithms, those based on projection methods work quite well. In this paper, we review the projection-based algorithms in two categories. The first one is the coefficient-matching algorithms. We generalize the Krylov subspace method on moment matching at a single point to multipoint moment-matching methods with matching points located anywhere in the closed right-hand side (RHS) of the complex plane, and we provide algorithms matching the coefficients of series expansions based on orthonormal polynomials and generalized orthonormal basis functions in Hilbert and Hardy space. The second category belongs to the grammian-based algorithms, where we provide an efficient algorithm for the computation of grammians and new approximate grammian-based approaches. We summarize some important properties of projection-based algorithms so that they may be used more flexibly.

Index Terms—Coefficient matching, congruence transform, generalized orthonormal basis function, grammian, interconnect, model order reduction, projection-based algorithms, multipoint moment matching, orthonormal polynomials.

Manuscript received January 17, 2001; revised November 11, 2001. This work was supported by the Semiconductor Research Corporation (SRC) under Contract 866.001. The work of C.-C. Chu was supported in part by the National Science Council, Taiwan, R.O.C. under Grant NSC37032F. This paper was recommended by Associate Editor K. Thulasiraman.
J. M. Wang is with the Electrical and Computer Engineering Department, University of Arizona, Tucson, AZ 85721 USA.
C.-C. Chu is with the Department of Electrical Engineering, Chang Gung University, Tao-Yuan 333, Taiwan, R.O.C.
Q. Yu is with Celestry Design Technologies, Inc., San Jose, CA 95134 USA ([email protected]).
E. S. Kuh is with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720 USA.
Publisher Item Identifier 10.1109/TCSI.2002.804542.

I. INTRODUCTION

WITH the rapid increase of the signal frequency and decrease of the feature sizes of high-speed electronic circuits, interconnect has become a dominant factor in determining circuit performance and reliability in deep submicron designs. As interconnects are typically of very large size and high order, model-order reduction is a necessity for efficient interconnect modeling, simulation, design, and optimization.

There are two kinds of model-order reduction algorithms. The first one only cares about the preservation of some characteristics of the model in the frequency or time domain, without consideration of mapping the state variables of the original model to those of the reduced model. Asymptotic waveform evaluation (AWE) [1] is a typical example of such algorithms. Though working quite efficiently in some cases, such algorithms often meet problems with the stability and passivity of the reduced model, numerical stability in the formation of the model, and less flexibility in meeting different requirements of the reduced model. The second one is based on mapping the state space of the original model with a larger size to a state space of the reduced model with a smaller size. This kind of algorithm is the projection-based one. Compared with the algorithms of the first kind, its implementation is generally a little more complicated, but it is easier to deal with the stability and passivity problem, with much better numerical stability in the formation of the model, and more flexible when multiple requirements are needed.

There have been many papers published on projection-based model-order reduction algorithms. A typical one is the Krylov subspace methods, which provide moment matching to a given order between the transfer functions of the reduced-order model and the original one. The Padé via Lanczos algorithm [2] uses an oblique projection with two bi-orthonormal projection matrices. While the oblique-projection algorithms often result in better matching of coefficients in an expansion on some basis, the stability of the reduced model is not guaranteed. The Krylov subspace-based congruence transform [3], [4] uses one projection matrix and has been successfully applied in passive model-order reduction. The Krylov subspace method is restricted to the finite-order system and is fixed at one matching point, and the accuracy of the reduced model is obtained by a suitable order of moment matching. It has been found that multipoint moment matching is more efficient than a single-point one, and we have provided a multipoint moment-matching algorithm for multiport distributed interconnect networks, where an integrated congruence transform was developed to map a system of infinite order to a system of finite order [5]. So far, the matching points are limited to the positive real axis (including the origin) and the imaginary axis of the complex plane.

Another projection-based algorithm is the balanced truncation (BT) method [10], [11]. Such algorithms also belong to the oblique-projection methods. By using BT, a major submatrix of the grammian of the balanced system is preserved in the reduced model. Hankel norm error bounds are given for the choice of the order of the reduced model [12], [13]. Experiments on the use of this method show that it results in better approximation of the performance of the reduced model in a wide frequency range than the moment-matching method with a single matching point at , but comparisons with the multipoint moment-matching method have not been seen. The greatest disadvantage of this method is its high computational cost. Though many efforts have been made to reduce the cost [14], [15], it is still much less efficient than the moment-matching method.

All the above methods focus on the approximation of the characteristics of the reduced model in the frequency domain,
and can be categorized as frequency-domain methods. When the inductive effect becomes more and more serious in today's technology, the waveform of the impulse response of interconnects may be very complicated [17]. It is quite hard to predict the accuracy of the time-domain response of the reduced model based on the accuracy of the frequency-domain response, and it is necessary to do model-order reduction directly in the time domain. This work has now begun. In [16], the Krylov subspace approach was used to do time-domain model-order reduction, and the derivatives of the circuit response are kept to a given order. This approach is not efficient in dealing with model-order reduction of linear interconnects: as the derivatives are functions of time, the circuit needs to be remodeled repeatedly as the simulation advances.

In this paper, we review some of the key concepts and methods on the projection-based model-order reduction algorithms. We extend the coefficient-matching method from matching the coefficients of the power series expansion in complex frequency to matching the coefficients of series expansions based on other basis functions, which include orthonormal polynomials in the frequency/time interval of interest and the generalized orthonormal basis functions in Hilbert space (time domain) and Hardy space (frequency domain). To reduce the computational cost of the BT method, we give an efficient algorithm for the computation of the controllability and observability grammians, and model-order reduction algorithms using approximate grammians. All these algorithms improve the performance of existing algorithms either in the order of reduction or in the computational cost. Also, we summarize some important properties of the projection-based model-order reduction algorithm, which will provide some guidance to the use and development of such algorithms.

The rest of the paper is organized as follows. In Section II, the block form of the state and output equations of an RLC network is given, which will be used throughout the paper. In Section III, we provide multipoint moment-matching algorithms with matching points anywhere in the closed RHS complex plane. In Section IV, we provide algorithms for matching the coefficients of expansion series based on orthonormal polynomials in both frequency and time domains. In Section V, we provide algorithms for matching the coefficients of expansion series based on generalized orthonormal functions in Hilbert and Hardy space. In Section VI, we provide our new result on the improvement of the BT methods. In Section VII, we summarize some general properties of the projection-based model-order reduction algorithms, which are useful in the implementation and development of algorithms. We give experimental results in Section VIII and conclusions in Section IX.

II. STATE AND OUTPUT EQUATIONS

In the general case, a circuit model of an RLC interconnect may consist of purely capacitive branches, purely resistive branches, and serially connected R–L branches. Except for the internal nodes of the R–L branches, it can be assumed that each floating node is connected to a grounded capacitor; let be the node voltage vector of these nodes. Let be the vector of inductance currents, and it is assumed that there are no inductance cutsets in the network. Then

(2.1)

is the state vector of the network.

Let be the nodal capacitance matrix, the nodal conductance matrix, , , and the branch inductance, resistance, and the incidence matrix of the R–L branches, respectively. Let be the input current vector and the incidence matrix of the current source branches with the currents flowing out of these branches. Then, the state equations of the network can be written in the following form:

(2.2)

where

(2.3)

(2.4)

and

(2.5)

Let be the output vector of interest. In most practical cases of interconnect analysis, the node or branch voltages or currents at purely resistive or R–L branches are of interest, and the output equations may be written in the following form:

(2.6)

In the typical case, when an interconnect is among several subcircuits and its reduced model is needed, we regard it as a multiport, and the port impedance (or admittance) matrix is of interest. In this case

(2.7)

where is the vector of the source voltages of the current sources, and the matrices and in the output and state equations are the same.

We assume that , and throughout the paper.

The Laplace transforms of (2.2) and (2.6) are the state and output equations in the frequency domain, and can be written as follows:

(2.8)

and

(2.9)

When model-order reduction is concerned, the impulse response of the circuit is of the most interest, as the response due to any other input waveform is uniquely determined by its impulse response. Let , where
, is the th unit vector, and is the unit impulse function. Let the state and output vectors corresponding to be and , respectively. Let

(2.10)

and

(2.11)

Then, from (2.2), (2.6), (2.8) and (2.9), we have

(2.12)

(2.13)

(2.14)

(2.15)

These are the block forms of the state and output equations in the time and frequency domains, respectively, where and . When , from (2.14) and (2.15)

(2.16)

and is the input impedance matrix of the network.

Now, we consider the projection-based model-order reduction. Let such that

(2.17)

where . Then, is the transformation matrix mapping the -dimensional state space to a -dimensional space. Substituting (2.17) into (2.12) and premultiplying on both sides of the equations, we have

(2.18)

where , and . This is the block form of the state equations of the reduced model. The output equations of the reduced model are of the form

(2.19)

where . The corresponding equations in the frequency domain are easily written and omitted.

III. MOMENT-MATCHING METHOD

Expand into a power series of variable , where may be 0, positive real, or a complex number with nonnegative real part

(3.1)

where

then

(3.2)

is called the th moment of at . As for an RLC interconnect, is analytic in the closed RHS of the -plane, so that is well defined for any in the region. On the other hand, let

(3.3)

Then, is called the th-order moment of at . The moments of are similarly defined, and

(3.4)

In the time domain, from

(3.5)

we have

(3.6)

From

(3.7)

(3.8)

we have

(3.9)

For , we have

(3.10)

and

(3.11)

For

(3.12)

and for

(3.13)

The moments can be computed very efficiently by using the above recursive formulas.

Now we are concerned with moment matching of the output vector of the reduced system with that of the original system.

Lemma 1: Let be the transformation matrix for the generation of the reduced system. Suppose that

colspan (3.14)

for a finite and with . Then

(3.15)

Proof: We give a proof for a finite . The case where is similar, and is omitted. We do similar treatments in the proofs of the remaining lemmas and theorems without declaration.
From the condition of the lemma, as colspan Proof: Theorem 2 is a special case of Theorem 11 in
( ), there exist s such that Section V-B, and the detailed proof is omitted.
Let be the set of matching points with
(3.16) when , and be the set
of matching orders (when is finite, moment matching is from
Now, we prove that for . order 0 to ; and when , moment matching is from
Rewrite (3.10) and (3.11) in the following form: order 1 to ). Suppose that the first s are real or , and
the rest ones are complex. We give the following algorithm for
generation of a real transformation matrix for the model-order
reduction with moment matching at the matching point set
with the matching order set .

Algorithm 1: Moment-Matching Method


Input: , , , , , ;
Output: Transformation matrix ;
(3.17) ; ;
for to do
if ( )
; ;
Substitute (3.16) with (3.17) and premultiply matrix on else
both sides of the equations with ; ;
solve for ; has
(3.18) columns
;
then, we have the following equations: for to do
solve 1 for ;
;

for to do
;
if ( )
;
(3.19) for to do
real ; imag ;
;

Note that the matrices on both sides of the above equations are
just the same for the equations with the moments for
. From the uniqueness of the solution of the equations, The function uses the modified
it is known that and the lemma exists. . Gram–Shmidt algorithm and can be described as follows,
Theorem 1: Under the condition of Lemma 1 where stands for the inner product of two
vectors and .
(3.20)

Proof:
for to do
for to do
;
;
Theorem 2: In the case that matrices and in the state
;
equations are symmetric, which happens in an RC interconnect,
;
then under the condition of Lemma 1
;
(3.21)
return( );
When is finite, (3.21) exists for , and when W (; cola: colb) consists of the column vectors of W
cola to column colb.
1
, it exists for . from column
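To make the mechanics of Algorithm 1 more concrete, the following minimal sketch (not the authors' implementation) carries out single-point moment matching for a descriptor model in Python/NumPy: the block moments are generated by the recursion of (3.10)–(3.13), orthonormalized as in the orth() step of Algorithm 1, and the reduced model is formed by the congruence transform of Section II. The toy matrices, the expansion point s0, and the order q are illustrative assumptions, not the paper's test circuits.

import numpy as np

def krylov_projection(C, G, B, s0=0.0, q=4):
    # Orthonormal basis of span{R, A R, ..., A^{q-1} R} with
    # A = (G + s0*C)^{-1} C and R = (G + s0*C)^{-1} B, i.e. the subspace
    # spanned by the block moments of (G + s*C)^{-1} B about s = s0.
    K = np.linalg.solve(G + s0 * C, B)
    blocks = [K]
    for _ in range(q - 1):
        K = np.linalg.solve(G + s0 * C, C @ K)   # moment recursion, cf. (3.10)-(3.13)
        blocks.append(K)
    Q, _ = np.linalg.qr(np.hstack(blocks))       # orthonormalization (orth() in Algorithm 1)
    return Q

def congruence_reduce(C, G, B, L, V):
    # Congruence transform of Section II: Cr = V^T C V, Gr = V^T G V, Br = V^T B, Lr = V^T L.
    return V.T @ C @ V, V.T @ G @ V, V.T @ B, V.T @ L

def transfer(C, G, B, L, s):
    # Port transfer matrix H(s) = L^T (G + s*C)^{-1} B (sign convention assumed).
    return L.T @ np.linalg.solve(G + s * C, B)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 60, 2
    M = rng.standard_normal((n, n)); G = M @ M.T + n * np.eye(n)   # toy SPD "conductance"
    M = rng.standard_normal((n, n)); C = M @ M.T + n * np.eye(n)   # toy SPD "capacitance"
    B = np.zeros((n, p)); B[:p, :p] = np.eye(p); L = B             # multiport case: B = L
    V = krylov_projection(C, G, B, s0=0.0, q=4)
    Cr, Gr, Br, Lr = congruence_reduce(C, G, B, L, V)
    for s in (0.0, 0.5, 2.0):
        err = np.linalg.norm(transfer(C, G, B, L, s) - transfer(Cr, Gr, Br, Lr, s))
        print(f"s = {s:4.1f}   ||H(s) - Hr(s)|| = {err:.2e}")

The error is essentially zero at the expansion point and grows slowly away from it, which is the behavior the multipoint variant of Algorithm 1 is designed to improve.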
There is a typical case that for each complex , . In Proof: From Lemma 3, colspan . It is ob-
such a case, Algorithm 1 may be simplified to Algorithm 1.1, vious from Algorithm 1 that colspan colspan , so
where the operation with complex numbers in Theorem 3 exists.
is avoided and the algorithm becomes more efficient. Note that for a complex , when colspan ,
then its real part and imaginary part are in colspan ,
Algorithm 1.1: Simplified Moment-Matching respectively. Therefore, its complex conjugate
Method colspan . This means when is a matching
Input: , , , , , ; point with an order up to , is also a matching point with
Output: Transformation matrix ; the same order.
; ; The moment-matching model-order reduction preserves the
for to do moments at given points with given order from the original
if ( ) network. As the power-series expansion at a point will gen-
; ; erate a good approximation of the frequency response near this
else point, so a single-point moment-matching method often needs a
; ; very high order to make the frequency response of the reduced
solve for ; model approximate that of the original system well in a wide
if is real or range of frequencies. On the other hand, when multipoint mo-
; ment-matching method is used, some lower order model may be
for to do generated to obtain the same degree of approximation over the
solve for ; same frequency range.
;
IV. COEFFICIENT MATCHING METHOD WITH SERIES
EXPANSION BASED ON ORTHONORMAL POLYNOMIALS
else is complex with The moment-matching method is based on matching the coef-
real ; ficients of the power series expansion of complex frequency at
imag ; some specified points. In approximation theory, there are better
series expansions to approximate a function in certain interval,
among which the orthonormal polynomials are widely used. In
this section, we introduce the use of Chebyshev polynomials to
do model-order reduction in both frequency and time domains
We now show that s[ , when [6], [7]. The series expansion of Chebyshev polynomials has the
is finite, and when ] are covered by property of exponential convergence rate, so that lower order ap-
colspan so that Algorithm 1 guarantees moment matching proximation may be obtained with high accuracy. Also, among
with matching point set and order set . To prove it, we need the same degree of polynomials approximating a given function
the following two lemmas. in a finite interval, the truncated Chebyshev series is very close
Lemma 2: In the case that is finite, let to the minimax approximate polynomial, so that the truncated
and ; and in the case that , Chebyshev series expansion is nearly optimal in the minimax
let and . Denote sense.

(3.22) A. Frequency-Domain Approximation


In the case that the frequency response of a system is of in-
Then, for and , terest, let in (2.14), then, the state equations in the fre-
quency domain are expressed as
(3.23)
(4.1)

Lemma 3: For and and the output equations are expressed as

colspan (3.24) (4.2)

where and Suppose that the maximum frequency of interest is . For


. , the whole range of interest is . We
The proof of Lemmas 2 and 3 follows a way similar to the normalize the frequency by letting
proof of Lemma 1, and [9, Theorem 2] and is omitted.
Theorem 3: For and for finite and (4.3)
when ,
so that , which is the range where Chebyshev poly-
colspan (3.25) nomials are defined.
Let , , , columns and rows are added, with matrix inserted into
, then (4.1) and (4.2) become the positions of
and ,2 and
(4.4) matrix inserted into the positions of
and . The LU decomposition for the old
blocks remains unchanged, only the operation for the additional
(4.5) three blocks is needed. Also, the first solutions to
the equations remain unchanged. These
Let be expanded into a series expansion of Chebyshev properties make the computation very efficient when the order
polynomials up to an order of of approximation is increased.
Another way for the computation of the Chebyshev coeffi-
(4.6) cient matrices is by pseudospectral Chebyshev method [18].
When degree is determined, let
We call the coefficient matrix the th Chebyshev coefficient
matrix. Note that is a polynomial of variable with real (4.12)
coefficients, and the Chebyshev coefficient matrices are com-
which are the extrema points of . Let and
plex matrices in the general case.
for . Then
To compute the Chebyshev coefficient matrices, note that

(4.7) (4.13)
and for
As , the computation of the Cheby-
(4.8) shev coefficient matrices amounts to the moment computation
Then together with a fast Fourier transform (FFT), which can be done
more efficiently than the previous method. Another advantage
of the pseudospectral method is that

(4.9) (4.14)

Substitute (4.6) and (4.9) into (4.4), and let the coefficient ma- i.e., the truncated Chebyshev expansion is exact at the colloca-
trices of the same be equal, we have tion points specified by (4.12). The disadvantage of this method
is that when degree is changed, the computation cannot be
done incrementally but should be started from the very begin-
ning.
The model-order reduction based on matching the Chebyshev
coefficient matrices is provided by the following lemma and
theorem.
Lemma 4: Suppose that and are the matrices
of the state variables of the original system and reduced system,
respectively, and is the transformation matrix. Let and
(4.10)
be the th Chebyshev coefficient matrices of the Chebyshev
series expansions of and , respectively. If

This set of equations is of the block tridiagonal form, and can colspan (4.15)
be solved efficiently by using LU decomposition. The degree Then,
may be chosen such that
(4.16)
(4.11)
Theorem 4: Let and be the output functions
of the original and reduced system, respectively. Let and
with a given . As , such a condition can guarantee be the th Chebyshev coefficient matrices of the Cheby-
a good approximation of the truncated Chebyshev series expan- shev series expansions of these two functions, respectively.
sion. Then, under the condition of Lemma 4
If it is found that is not big enough to meet the condition of
(4.11), we may increase one by one until the condition is met. (4.17)
When is increased by 1, in the RHS matrix of (4.10), rows 2(row 1 : row 2; col1 : col2) refers to a block in the rows between row 1
with zero elements are added. In the l.h.s. coefficient matrix, and row 2 and in the columns between col1 and col2.
i.e., the reduced order model preserves the Chebyshev coeffi- We integrate both sides of (4.18) from to some , and have
cient matrices for the output functions from order 0 up to the
order of .
(4.20)
The proof of Lemma 4 can be done by using (4.10) and fol-
lowing the same way as used in the proof of Lemma 1, and the
proof of Theorem 4 follows the same way as in the proof of Let
Theorem 1.
The model-order reduction algorithm with the preservation of (4.21)
Chebyshev coefficient matrices can be stated as follows, where
is the initial guess of the approximation order.
From the formulas of the integrals of the Chebyshev
Algorithm 2: Frequency-domain model-order polynomials
reduction with preservation of Chebyshev
coefficient matrices (4.22)
Input: , , , , , , ;
Output: Transformation matrix ;
Let ; (4.23)
for [ ; if inequality (4.11) is not
met; ] and for
Compute , ;
; ;
for to do
real ; (4.24)
imag ; we have

B. Time-Domain Approximation
When Chebyshev polynomials are used to approximate ,
we should first normalize the time variable to the interval
. Let be the time interval of interest.
may be chosen as the simulation time; and in most practical
cases, when the input signal is a pulse, may be chosen
as the sum of pulse width, rising, and falling time. Let
, , , and
. Then, the state equations of the system equation
(4.25)
(2.12) becomes
Substitute (4.21) and (4.25) into (4.20), and let the coefficient
(4.18)
matrices with the same on the both sides of
and the output equations become the equations be equal, we have the equations of the Chebyshev
coefficient matrices s as shown in (4.26) at the bottom of the
(4.19) page.

(4.26)
The arrangement of the equations in reverse order (from to By combining (4.33) and (4.35) for , we have the
0) aims at avoiding the “block” fill-ins (an original empty block following set of equations:
is filled by a nonempty block) during the LU decomposition
of the coefficient matrix of the equations. By using such an ar-
rangement, the block structure of the coefficient matrix remains
unchanged after LU decomposition; otherwise, if the equations
are arranged in a forward order from order 0 to order , then
after LU decomposition, the block structure will be upper Hessenberg. Note that in the coefficient matrix, each submatrix
has a coefficient depending on the value of . When the order
of approximation is changed, the formation of and solution (4.36)
to (4.26) should be restarted. Therefore, the implementation of
the time-domain model-order reduction based on the Chebyshev
approximation is not as efficient as that of the frequency-domain
method. The coefficient matrix of the above equations is full in the block
The pseudospectral Chebyshev method can also be used here form, which makes it inefficient to find the solution.
to compute the Chebyshev coefficient matrices. When degree The preservation of the Chebyshev coefficient matrices
is determined, let during the model-order reduction is based on Lemma 5 and
Theorem 5. The proof of Lemma 5 and Theorem 5 is similar to
that of Lemma 1 and Theorem 1, and is omitted.
(4.27) Lemma 5: Suppose that and are the matrices of
the state variables of the original system and reduced system,
which are the extrema points of . Let and respectively, and is the transformation matrix. Let and
for . Then be the th Chebyshev coefficient matrices of the Chebyshev
series expansions of and , respectively. If
(4.28) colspan (4.37)

Then,
When , the derivatives of at the col-
location points can be expressed as follows [18]: (4.38)

Theorem 5: Let and be the output functions of the


(4.29)
original and reduced system, respectively. Let and be
the th Chebyshev coefficient matrices of the Chebyshev series
where expansions of these two functions, respectively. Then, under the
condition of Lemma 5
(4.30) (4.39)

(4.31) i.e., the reduced order model preserves the Chebyshev coeffi-
cient matrices for the output functions from order 0 up to the
order of .
(4.32) The selection of the order can also be based on the in-
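The collocation-derivative relation (4.29) with entries (4.30)–(4.32) is the standard Chebyshev differentiation matrix of [18]. A minimal numerical sketch of its construction, with a check on an assumed smooth test function (not one of the paper's circuits), follows.

import numpy as np

def chebyshev_diff_matrix(N):
    # Differentiation matrix D on the extrema grid x_j = cos(pi*j/N), so that
    # D @ f(x) approximates f'(x) at the collocation points; this is the matrix
    # behind (4.29)-(4.32) (see [18]).  The diagonal uses the negative-sum trick.
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** j
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

if __name__ == "__main__":
    D, x = chebyshev_diff_matrix(20)
    f = np.exp(np.sin(np.pi * x))                        # assumed smooth test function
    dfdx_exact = np.pi * np.cos(np.pi * x) * f
    print("max derivative error:", np.max(np.abs(D @ f - dfdx_exact)))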
equality of (4.11) with and replaced by and ,
From (4.18), we have respectively. As mentioned above, to increase one by one
and compute the s each time until (4.11) is satisfied is not
(4.33) efficient. Therefore, we use an approximate method. After
( ) is obtained for an order , when the order
and is increased to , let ( ) be the
coefficients under the new order. Then, we simply let
(4.34) for , and solve the equation

(4.40)
We list (4.34) at the first collocation points ( ).
From (4.29), we have
for . After is determined, then we solve (4.26) for the
s.
(4.35) The time-domain model-order reduction algorithm with the
preservation of Chebyshev coefficient matrices can be stated
as follows, where is the initial guess of the approximation B. Convergence Property of Series Expansion of Exponential
order. Function Via Orthonormal Basis Functions
An impulse response of a finite-order linear RC (RL) network
Algorithm 3: Time-domain model-order consists of exponential functions with negative real exponents.
reduction with preservation of Chebyshev Now, we consider one of its components with
coefficient matrices . Let
Input: , , , , , , ;
Output: Transformation matrix ; (5.3)
Let ;
Start from , use approximate method to
Then
determine ;
Compute , ; (5.4)
; ;
for to do
From this equation, we have the following results.
;
Theorem 6: If . Then, for

(5.5)
V. COEFFICIENT MATCHING METHOD WITH SERIES EXPANSION
This theorem shows that when , the expansion converges
BASED ON GENERALIZED ORTHONORMAL BASIS
in finite terms.
FUNCTIONS IN HILBERT AND HARDY SPACE
Now, suppose that and .
In this section, we will use the generalized orthonormal basis For
functions in Hilbert and Hardy space for model reduction. In
general, the work corresponds to the classical Caratheodory in-
terpolation, and has been found useful in recent years in the
fields of signal processing and system and control [23]–[26]. and
The main advantage of the basis function over Chebyshev poly-
nomials is that the function parameters can be selected to fit the
given system well and the series expansion of the state vector
of the system can converge very quickly. In [27], the Laguerre
function, which is a one-parameter basis function, is used for Then
model reduction. We will use the multi-parameter function here,
which works better than the one-parameter function.

A. Orthonormal Rational and Exponential Functions (5.6)


The orthonormal rational functions in the frequency domain
and the orthonormal exponential functions in the time domain From (5.6), we have the following theorem.
are related to each other. They were provided by the pioneer Theorem 7: If and for ,
work of W. H. Kautz [19] and extended later [20]. We review the , then
case of real parameters functions here. For complex parameter
functions, see the details in [19]. (5.7)
Let , be positive real numbers. Then,
the set of orthonormal basis functions are defined by In this case, the series expansion of in terms of the or-
thonormal exponential basis functions converges exponentially.
(5.1) In the case of Laguerre functions, If , then for
, and the expansion converges at its first term. In the case
that , we have
The s in (5.1) may be different or identical.
The orthonormal set in the frequency domain corre-
sponds to a set in the time domain. In the case that all
s are different
In this case, the expansion converges exponentially, and the con-
vergence rate
(5.2)
(5.8)
where is the residue of at . In the case that
all s are the same, is the th-order Laguerre function where . It can be seen that the closer to 1, the more
with parameter . rapidly the expansion converges.
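For distinct real parameters, the orthonormal exponential functions of (5.2) can also be generated numerically: the raw exponentials have the closed-form Gram matrix with entries 1/(a_i + a_j), so a Cholesky factorization performs the Gram–Schmidt step exactly. The sketch below only illustrates this construction with assumed parameter values; it does not reproduce the residue formula of (5.2).

import numpy as np

def orthonormal_exponentials(a):
    # Returns a lower-triangular matrix W such that
    #   phi_k(t) = sum_i W[k, i] * exp(-a[i] * t)
    # are orthonormal in L2(0, inf).  Uses the analytic inner products
    # <e^{-a_i t}, e^{-a_j t}> = 1/(a_i + a_j) and a Cholesky factorization,
    # i.e. Gram-Schmidt applied to the raw exponentials.
    a = np.asarray(a, dtype=float)
    gram = 1.0 / (a[:, None] + a[None, :])
    R = np.linalg.cholesky(gram)                 # gram = R R^T
    return np.linalg.inv(R)

if __name__ == "__main__":
    a = np.array([0.5, 1.0, 3.0, 10.0])          # assumed distinct positive parameters
    W = orthonormal_exponentials(a)
    gram = 1.0 / (a[:, None] + a[None, :])
    print(np.round(W @ gram @ W.T, 12))          # identity up to round-off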
Now, we consider the general case that the whole impulse re- Suppose that the exponential basis functions are with parame-
sponse is approximated by the Laguerre functions with param- ters s and suppose that the parameters are different. Then
eter . Suppose that the eigenvalues of the state matrix are
( ) with . The convergence rate
(5.18)
corresponding to the component is denoted by

(5.9) where is the Laplace transform of , and is the


moment of at . Therefore, is a linear combina-
Given the ’s, it is needed to find an optimal value of tion of the moments of at the points equal to the parameters
such that the maximum value of w.r.t. all the s is of the basis functions. Note that the coefficients s are deter-
minimized. Let mined by the basis functions only, and have nothing to do with
the function itself. Therefore, it can be understood that the
(5.10) problem of forming a reduced order model with the preserva-
tion of the expansion coefficients of its impulse response can
and be transformed into an equivalent problem of model-order re-
(5.11) duction with the corresponding moment matching. When the
set of the parameters is given, Algorithm 1 or Algo-
We have the following theorem regarding the choice of . rithm 1.1 may be called to generate the transformation matrix
Theorem 8: When , and the new system can be formed by the congruence trans-
form w.r.t. matrix .
(5.12)
VI. IMPROVEMENTS IN GRAMMIAN-BASED METHODS
(5.13) A. Introduction
Another approach for model-order reduction is the gram-
where . We omit the proof for brevity. mian-based method.
From the above theorems, we suggest to use the following Rewrite the state equations of (2.2) into the following form:
method to select the parameters of the orthonormal basis func-
tions for model reduction of RC and RL circuits. We first find the (6.1)
approximations of its largest and smallest eigenvalues
and by using the Lanczos algorithm [28], [29]. Then, let
where , and , and the
, ,
output equations

(5.14) (6.2)

and The controllability grammian and the observability gram-


(5.15) mian are defined, respectively, as

By using this selection, as and , the (6.3)


fastest and slowest components of the impulse response will be
kept well in the impulse response of the reduced model and the and
expansion of other components converges exponentially, which (6.4)
will be beneficial for a good approximation of the impulse response
of the reduced model. In (6.3), is the impulse response of the state vector
, and
C. Preservation of Coefficients in Series Expansion of
Orthonormal Basis Functions and Moment Matching
(6.5)
Now we consider an arbitrary impulse response , and ex-
pand it via the orthonormal exponential basis functions
such that Similarly, in (6.4), is the impulse response of the
state vector of the dual system,3 and

(5.16)
(6.6)

where It can be seen from the above two equations that and are
positive definite.
(5.17) 3See Section VII-B for the definition of the dual system.
The most commonly used approach to compute the gram- The most numerically stable method for implementing the BT
mians is to solve the following two Lyapunov equations: is the so-called square root method. Let and
be the Cholesky decomposition of and , and the
(6.7) singular value decomposition of with
and , and and are orthogonal matrices.
(6.8) Let ( ) be the submatrix of ( ) with its first columns,
and . Let
which is the most expensive part in the computation cost for the
grammian-based model-order reduction algorithms. (6.19)
When a transformation is applied to the original (6.20)
system such that , we will have
Then, the BT system with order is given by [32]
(6.9)
(6.21)
(6.10)
The square root method indicates that the BT method is gener-
where , and , and the ally a two sided projection method with left projection matrix
grammians in the new system become and right projection matrix such that . In the
symmetric case that and , and
(6.11) , then it becomes a one sided projection method.
and
(6.12) B. Computation of Grammians for RLC Interconnect
When an RLC interconnect is regarded as an port network,
The positive definiteness of and remains unchanged if the matrix in the state equations (2.12) and matrix in the
is of full rank. output equation (2.13) are equal. We can take advantage of this
With a proper choice of , it can be made that equality to reduce the computation cost for the grammians.
Substitute and into (6.7) and
(6.13) (6.8), we have
In such a case, the transformed system is called balanced. For (6.22)
a balanced system, if we partition the system matrices together
with the grammians conformally such that (6.23)

Equation (6.22) is equivalent to


(6.14)
(6.24)

(6.15) Let

(6.25)
(6.16)
Equation (6.23) becomes
(6.17)
(6.26)

where , , , and Suppose that the submatrix is of dimension


, then, the reduced system specified by the matrices and is of dimension . Let .
is stable, balanced, and has its controllability Then, , , and .
and observability grammians equal to . Such a method Also, from (2.5), and . Then, from (6.24),
is called the BT method, as the reduced model is formed we have
by truncating the “unimportant” part of the state vector of a
(6.27)
balanced system.
Let and or
. Then, it has been shown that (6.28)

Let
(6.18)
(6.29)
where is the norm defined by we have
, , and is the
maximum singular value of [12]. (6.30)
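The square-root implementation of BT described above is easy to state in a few lines for an ordinary state-space model dx/dt = Ax + Bu, y = Cx. The sketch below is a generic illustration (not the paper's RLC formulation): it solves the two Lyapunov equations with SciPy, factors the grammians, and builds the left and right projection matrices from the SVD of the factor product. The random stable test system is an assumption.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def psd_factor(W):
    # Symmetric factor Z with Z Z^T = W (a Cholesky factor also works when W
    # is strictly positive definite).
    w, V = np.linalg.eigh(W)
    return V * np.sqrt(np.clip(w, 0.0, None))

def square_root_bt(A, B, C, r):
    # Square-root balanced truncation of a stable system (A, B, C) to order r:
    # solve A Wc + Wc A^T + B B^T = 0 and A^T Wo + Wo A + C^T C = 0, factor the
    # grammians, take the SVD of Lo^T Lc, and build the two projection matrices.
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc, Lo = psd_factor(Wc), psd_factor(Wo)
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)                  # s: Hankel singular values
    S = np.diag(1.0 / np.sqrt(s[:r]))
    Tl = Lo @ U[:, :r] @ S                               # left projection matrix
    Tr = Lc @ Vt[:r, :].T @ S                            # right projection, Tl^T Tr = I
    return Tl.T @ A @ Tr, Tl.T @ B, C @ Tr, s

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, r = 30, 6
    M = rng.standard_normal((n, n)); A = -(M @ M.T) - np.eye(n)   # assumed stable test system
    B = rng.standard_normal((n, 2)); C = rng.standard_normal((2, n))
    Ar, Br, Cr, hsv = square_root_bt(A, B, C, r)
    for w in (0.1, 1.0, 10.0, 100.0):
        H  = C  @ np.linalg.solve(1j * w * np.eye(n) - A, B)
        Hr = Cr @ np.linalg.solve(1j * w * np.eye(r) - Ar, Br)
        print(f"w = {w:7.1f}   ||H - Hr|| = {np.linalg.norm(H - Hr):.3e}")
    print("largest discarded Hankel singular value:", hsv[r])

The discarded Hankel singular values bound the approximation error, which is the error-bound property quoted around (6.18).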
Comparing (6.30) with (6.26), it can be seen that , and norm error bounds have been derived for these algorithms,
we have the following theorem. the high computation cost and numerical problem with the left
Theorem 9: Let be the solution to the generalized and right factorization of make them imprac-
Lyapunov equation (6.30), then tical for model-order reduction with large-scale interconnect
networks.
(6.31) Another way to guarantee the passivity of the reduced
and model via the grammian-based approach is to use approximate
(6.32) grammians. In [37], a dominant grammian eigenspace approach
based on vector alternate-direction-implicit (ADI) algorithm
From (6.31) and (6.32), we have is provided. Here, we provide two other methods based on the
approximate grammians.
(6.33) Let the Krylov subspace
and the Cholesky factors and are related by , and
. Then, it is known that [36]
(6.34)
(6.41)
By using Theorem 9, we only need to solve one generalized
Lyapunov equation and do one Cholesky factorization. This will where
save the computation cost by about 50%. The solution to the gen-
eralized Lyapunov equation can be found by using the matrix (6.42)
sign function [33].
with
C. Passivity of the Reduced Model Via Grammian-Based
Approach (6.43)
We first consider the case of an RC interconnect. In such a Let
case, the state equations and output equations are
(6.44)
(6.35)
(6.36) be an approximation of . It can be seen that when ,
. Let , then
Let , then we have is called the th-order dominant control-
lability grammian eigenspace [37]. Similarly, we can define the
(6.37) th-order dominant observability grammian eigenspace ,
and we try to find an orthonormal basis of as
(6.38)
the transformation matrix to form the reduced model.
where is symmetric and nonpositive To find an orthonormal basis for directly meets with
definite, and . In this case, , and the the difficulty in finding . However, there is
BT will result in a passive model. an indirect way to do it efficiently. Let be an orthonormal
For an RLC interconnect, let in (2.12) be basis of so that for any , there exists
, then the state and output equations of the system such that
become
(6.45)
(6.39)
Then
(6.40)
(6.46)
where and . Note
that in this case, is not symmetric, and in the general case, where , and is an orthonormal basis
. When , the balancing transformation of , too. Therefore, to find an orthonormal basis of the
matrix cannot be orthonormal. Otherwise, from (6.11) and dominant grammian subspace can be solved by
(6.12), we have and finding an orthonormal basis of , .
, and , which violates the assumption. Note that is the subspace formed by the 0th to the
As for an RLC interconnect, the balancing transformation ma- th-order moments at frequency of the original system and
trix is not orthonormal in the general case, there is no guarantee is the subspace formed by the 0th to th-order
for the passivity of the reduced model by the BT method. We moments at frequency of the dual system. Therefore, the ap-
have indeed found an example that the reduced model via BT is proximate grammian approach can be transformed to an equiv-
stable but not passive. alent moment-matching problem at frequency. This method
There are some other grammian-based algorithms which can is called the modified dominant grammian eigenspace method
guarantee the passivity of the reduced model [34], [35]. Though (scheme 1).
Fig. 1. Time-domain response of Example 1.

The approximate dominant grammian eigenspace method can interconnects. In this section, we summarize some general
also be implemented by using a system companion to the system properties of the projection-based algorithms, which are useful
described by (6.39) and (6.40). The companion system is de- for the development and application of the algorithms.
fined by the following equations:
A. Preservation of Passivity
(6.47) Theorem 10: If the transformation matrix is of full rank,
the reduced order model formed by the congruence transform
(6.48) w.r.t. is passive.
Let and be the controllability and observability The proof can be found in [5], and is omitted here.
grammians of the companion system, then it can be proved that
and [38]. By using the same argument B. Dual System in Model-Order Reduction
as in the previous paragraph, to find an orthonormal basis In the algorithms stated in the previous sections, the forma-
of can be transformed to find an orthonormal tion of the matrix for the generation of the reduced model is
basis of . Note based on the information of the state equations only, and the
that these two bases are the bases of the Krylov subspace output equations play no rule in the process. In this subsec-
spanned by the moments of the original system and its dual at tion, we will introduce a method to utilize the information of
zero frequency and the approximate grammian approach can the output equations, which gives new options to the algorithms
also be implemented by using moment-matching method at for model-order reduction.
zero frequency. This method is called the modified dominant Definition 1—Dual System: For a system described by its
grammian eigenspace method (scheme 2). state equations
Intuitively speaking, the approximate grammian method via
moment matching at (0) frequency will result in better ap- (7.1)
proximation at high (low) frequencies. As most interconnects
behave as low pass, we are in favor of using scheme 2. and output equations
(7.2)
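The exact constructions of schemes 1 and 2 rely on details of (6.41)–(6.48) that are not fully legible here, so the sketch below only illustrates their general flavor under stated assumptions: orthonormal bases of the zero-frequency moment subspaces of a (standard state-space) system and of its dual are combined into a single projection basis, and the reduced model is obtained by projection. It should not be read as the authors' algorithm.

import numpy as np

def zero_freq_krylov(A, B, q):
    # Orthonormal basis of span{A^{-1} B, ..., A^{-q} B}: the subspace spanned by
    # the zeroth- to (q-1)th-order moments of (s I - A)^{-1} B at s = 0.
    K = np.linalg.solve(A, B)
    blocks = [K]
    for _ in range(q - 1):
        K = np.linalg.solve(A, K)
        blocks.append(K)
    Q, _ = np.linalg.qr(np.hstack(blocks))
    return Q

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, q = 50, 4
    M = rng.standard_normal((n, n)); A = -(M @ M.T) - np.eye(n)    # assumed stable system
    B = rng.standard_normal((n, 1)); C = rng.standard_normal((1, n))
    Vc = zero_freq_krylov(A, B, q)         # moment subspace of the system
    Vo = zero_freq_krylov(A.T, C.T, q)     # moment subspace of its dual
    V, _ = np.linalg.qr(np.hstack([Vc, Vo]))
    Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V
    for w in (0.0, 1.0, 10.0):
        H  = C  @ np.linalg.solve(1j * w * np.eye(n) - A, B)
        Hr = Cr @ np.linalg.solve(1j * w * np.eye(V.shape[1]) - Ar, Br)
        print(f"w = {w:5.1f}   |H - Hr| = {abs((H - Hr).item()):.3e}")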
VII. GENERAL PROPERTIES OF PROJECTION-BASED
ALGORITHMS its dual system is defined by the state equations
In the previous sections, we introduced some projec-
tion-based algorithms for model-order reduction of linear (7.3)
Fig. 2. Circuit of Example 2.

Fig. 3. Time-domain response of Example 2.

and the output equations and we denote dual .


As mentioned in the previous sections, when model-order re-
(7.4) duction is concerned, we often work with the impulse response
of a system, and we start from the block form of the system. In


this case, the block form of the state and output equations of the
dual system are

(7.5)

and

(7.6)

Note that when , and ,


the block matrices of the state and output vectors of the original
system are and , but those for
the dual system are and . In the
case that , the sizes of and are different, and
when [e.g., in an multiple input single output (MISO)
system], the size of may be much smaller than that of .
As the number of columns of the transformation matrix is
proportional to the number of input variables, in the case that
Fig. 4. Circuit of Example 3.
, the use of a dual system may be much more economical than
the use of the original system. This is one of the main reasons
that we are interested in the dual system.
Next, we will give some theorems related to the use of a dual Theorem 11: Let ,
system in the model-order reduction. be the set of moments of at of the original
Lemma 6: Let and be the transfer functions of system, and
the original and dual system, respectively, then be the set of moments of at the same point of
the dual system. Let and be the th-order
(7.7) moments of the output vectors of the original system and the
reduced order system, respectively. If for the transformation
Proof: matrix

colspan (7.10)

Then
Let and , then from (7.7)
we have (7.11)

(7.8) The theorem can be proved by using Lemma 1, Lemma 7,


and (3.11), and the detailed proof is omitted.
Lemma 7: Let be the reduced This theorem means that if the transformation matrix
system w.r.t. the system and transforma- is formed by moment matching at the original system
tion matrix . Then, and the dual system, then the moment-matching order
of the reduced system is accumulated. This provides a
dual dual (7.9) new way to overcome the numerical instability problem
during the orthonormalization process in the formation
i.e., the operators “reduction” and “duality” are interchangeable. of the matrix . If we start from the Krylov subspace
Proof: and find that for some
, the norm of matrix (see Algorithm 1) after orthogo-
dual nalization is much smaller than its original value, we may
restart from the Krylov subspace of the dual system, then a
dual higher order model may be obtained.
Now, we consider the model-order reduction algorithm based
We first consider the moment-matching method and have the on the expansion on Chebyshev polynomials in the frequency
following theorem. domain.
Fig. 5. Time-domain response of Example 3.

Lemma 8: Let and be the th Chebyshev coef- matrices of the output vectors of system , , and
ficient matrices of the output vectors of the original and dual , respectively. For , from Lemma 8, we have
system up to the order of , respectively. Then, we have

(7.12) From Theorem 4 and the condition of Theorem 12, we have


Proof:

From Lemmas 6 and 8, we have

and from Lemma 7 This theorem means that we can use the dual system to form the
transformation matrix with the preservation of the Chebyshev
coefficient matrices of the output vectors of the dual system, and
when the transformation w.r.t. matrix is applied on the orig-
inal system, the reduced model preserves the Chebyshev coeffi-
So, the lemma exists. cient matrices of the output vectors of the original system. Note
Theorem 12: Let . Suppose that that the number of columns of the transformation matrix is
proportional to ( ) when the original (dual) system is used,
colspan (7.13) and when , it is more efficient to use the dual system.
Note that Theorem 12 is valid not only for the Chebyshev
and a reduced order system is generated by the congruence
expansion-based frequency domain model-order reduction, but
transform w.r.t. matrix on the original system. Let and
also for the Chebyshev expansion-based time domain model-
be the th Chebyshev coefficient matrices of the output
order reduction. The proof in the case of time domain is similar
vectors of the original and reduced system, respectively. Then
to that in the frequency domain, and is not repeated.
(7.14) For the model-order reduction based on the orthonormal basis
functions in Hilbert and Hardy space, we have shown that such
Proof: Let , dual reduction with preservation of the expansion coefficient ma-
, be the reduced trices can be transformed to an equivalent moment-matching
system of , and be the reduced system of . Let problem. By using Theorem 11, it can be understood that the
, , and be the th Chebyshev coefficient dual system can also be applied in this case.
Fig. 6. Circuit of Example 4.

Fig. 7. Example 4-Chebyshev approximation-based model.

For the BT method, it is easy to show that the controllability servability and controllability grammians of the dual system, re-
and observability grammians of the original system are the ob- spectively, and nothing is gained by using the dual system.
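Lemma 6 and the size argument of this subsection are easy to check numerically. The sketch below verifies, on an assumed random state-space model (not the paper's descriptor form), that the transfer function of the dual system is the transpose of the original one, and shows why a MISO system (fewer outputs than inputs) yields narrower moment blocks when the dual system is used.

import numpy as np

def transfer(A, B, C, s):
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, m, p = 30, 3, 1                         # assumed MISO example: 3 inputs, 1 output
    M = rng.standard_normal((n, n)); A = -(M @ M.T) - np.eye(n)
    B = rng.standard_normal((n, m)); C = rng.standard_normal((p, n))
    Ad, Bd, Cd = A.T, C.T, B.T                 # dual system, cf. Definition 1
    s = 1.0 + 2.0j
    G, Gd = transfer(A, B, C, s), transfer(Ad, Bd, Cd, s)
    print("||G_dual(s) - G(s)^T|| =", np.linalg.norm(Gd - G.T))   # Lemma 6, ~1e-15
    # A moment block of the dual system has p columns instead of m, so for p < m
    # the projection basis grows more slowly when the dual system is used.
    print("moment block columns: original", np.linalg.solve(A, B).shape[1],
          "   dual", np.linalg.solve(Ad, Bd).shape[1])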
Fig. 8. Example 4—Moment-matching model.

and

(7.18)

Fig. 9. An RC ladder.
Now, suppose that it is needed that for the reduced order system,
for and
for . Then, we can satisfy the re-
C. Separability in Model-Order Reduction
quirement by letting the transformation matrix be such that
In the previous sections, for coefficient matching algorithms
(Sections III–V), we use the block form of the state equations to
derive a number of theorems. By using the block form, all input colspan (7.19)
variables are treated equally. For example, for a two input one
output system, when moment-matching method is used, the two Note that if and , then in each , there are
transfer functions keep the same order of moment matching at components. If (7.19) is satisfied, for the components, their
the same point. matching points and matching orders are the same. If it is needed
To make the model-order reduction method more flexible to that for different components there are different matching points
fit the different need for different transfer functions, we may and matching orders, we may form the dual system of the system
treat the input variables separately. Now we use a two input described by (7.17) and (7.18) and separate the input variables
system as an example to explain the idea. of the dual system again. In the general case, when
Let , and and , there are transfer functions, and we can treat
such that for them individually with different need for model-order reduction
by using the separated systems and their dual systems. Note that
(7.15) the above technique can be used for any coefficient matching
method.
and D. Additivity
(7.16) One important property of projection-based coefficient
matching model-order reduction algorithm is the “additivity,”
and in the frequency domain which means that if there are several sets of coefficient
matching required, then the formation of the transformation
(7.17) matrix can be implemented to meet the coefficient matching
Fig. 10. Impulse response of Example 5.

one set after another, and finally the matching of all sets can be realized by using one transformation matrix.

For example, when coefficient matching based on the orthonormal basis functions in Hilbert space is used, because of the error due to truncation, the initial point of the impulse response of the reduced model may not be the same as that of the original model. As the matching of the initial point of the impulse response is important, after formation of the first part of by coefficient matching of the expansion on the orthonormal functions, a moment matching at frequency can be added to form the second part of . Then, the reduced-order model will keep the coefficient matching based on the orthonormal basis functions and the series expansion on simultaneously.

The reason that the property of additivity for coefficient-matching algorithms exists is that in all the theorems related to the coefficient matching, the condition is that some coefficient set is covered by the column span of matrix . If one coefficient set has been covered by the column span of , no matter how many new columns are added to , such condition is always satisfied. Also, if this coefficient set is covered by column span , no matter how many columns in before is added, it is covered by colspan . Therefore, a different coefficient set may be covered by the column span of a different part of , and in the model-order reduction the matching of all coefficient sets is obtained.

VIII. EXAMPLES

We provide six examples here to show the results of various model-order reduction algorithms. The first four examples compare the time-domain response of the reduced model with those of the SPICE simulation, where the source voltage is a pulse. Among them, Examples 1 and 2 use model-order reduction based on the Chebyshev approximation in the frequency domain, and Examples 3 and 4 use model-order reduction based on the Chebyshev approximation in the time domain. In Example 5, the impulse responses of the reduced model based on the generalized orthonormal basis functions are compared with the exact solution, and in Example 6, the frequency-domain response of the reduced model based on the grammian approach is compared with the exact solution.

Example 1: The circuit consists of a single transmission line with parameters /cm, nH/cm, pF/cm, length cm, a load resistor and a source resistor of 50 , which match the characteristic impedance of the line at high frequencies. A three-point moment-matching method was used to obtain a nearly exact time-domain response, and the order of the reduced model is 10 [5]. When Chebyshev approximation in the frequency domain is used, only the zeroth-order coefficient matrix is used to form the transformation matrix, as the norm of with is much smaller than the norm of , which results in a model with order of 2. The time-domain response of the reduced model by Chebyshev expansion is shown in Fig. 1. Compared with SPICE simulation, the waveform is nearly exact.

Example 2: This is an example borrowed from [30]. The circuit is shown in Fig. 2, which has two coupled-line systems, each of which consists of three coupled lines. The time-domain responses of are shown in Fig. 3. The solid line represents the exact solution where the coupled lines are represented by their exact characteristic model. The dashed line corresponds to the result from the model by Chebyshev expansion in the frequency domain. These two curves are indistinguishable. The reduced
Fig. 11. (a) Frequency response of Example 6.

model order for each transmission-line system is 5. In [30], a single-point moment-matching method is used with an order up to 40 to model the line system, and in [5], a multipoint moment-matching method is used with an order of 13. It can be seen that the multipoint moment-matching method works better than the single-point moment-matching method, and the Chebyshev expansion method works the best among the three methods.

Example 3: The circuit is shown in Fig. 4, which consists of three RLC lines with capacitive and inductive coupling between adjacent lines. Each line is modeled by 200 RLC sections. The circuit parameters are: , nH; the ground capacitance and coupling capacitance are 1 pF, and the inductance coupling coefficient . The voltage supply is connected to the middle line, and we test the output of the upper victim line. The time-domain responses are shown in Fig. 5, where the solid, dashed, and dotted lines correspond to the result of SPICE simulation, the model formed by Chebyshev expansion in the time domain, and the moment-matching model at , respectively. The order of the two reduced models is 5. It can be seen that the result from the Chebyshev model is indistinguishable from that from the SPICE simulation, but the result from the moment-matching model is far from them.

Example 4: The circuit is an RLC mesh shown in Fig. 6. The circuit parameters are: , nH, pF and pF. We test the voltage at node 20, and compare the results from Chebyshev expansion in the time domain with order 7 (Fig. 7, dashed line) and the moment-matching model at with the same order (Fig. 8, dashed line) with the SPICE simulation result (Figs. 7 and 8, solid line). It can be seen that the Chebyshev model works better than the moment-matching model.

Example 5: This is an RC ladder with 100 sections. The circuit is shown in Fig. 9, where pF and . The impulse responses at node 100 are shown in Fig. 10, where the solid line corresponds to the exact solution, the dotted line to the solution of the reduced model based on the series expansion of orthonormal basis functions with multiple parameters, and the crossing line to that of the reduced model based on the
(b)
Fig. 11. (Continued) (b) Frequency response of Example 6.

series expansion of Laguerre functions. The order of the two re- sponse in the general case. The DGE method works better than
duced models is 12. It can be seen that the dotted line matches BT method at the frequencies lower than about 3.5 Ghz but
the solid line perfectly, and there is a big discrepancy between worse than BT at higher frequencies. The behaves sim-
the crossing line and the solid line. This shows that for the same ilar to DGE, but does not work well.
order, the reduced model based on the orthonormal functions
with multiple parameters works better than that with only one IX. CONCLUSION
parameter.
Example 6: We use the same circuit shown in Example 4 In this paper, we summarized our experience in model-order
to test the BT method and its modifications in the frequency reduction, and provided new algorithms and improvements on
domain. The reduced order is 16. The frequency response of existing algorithms for model-order reduction.
( ) is shown in Fig. 11(a) and Among the model-order reduction algorithms mentioned in
(b), where the exact solution, the results from BT method, from this paper, the moment-matching method is the basic one, as
the dominant grammian eigenspace (DGE) method proposed some of other algorithms, e.g., the frequency-domain Cheby-
by [37], and from the approximate grammian approach with shev approximation-based method, the orthonormal basis
scheme 1 and 2 are denoted by “ORIG,” “BT,” “DGE,” “ ” function-based method, and the approximate grammian-based
and “ ,” respectively. It can be seen from the simulation re- method can be implemented via the moment-matching
sult that while the BT method can essentially capture the peak method or moment computation. The moment-matching
values of the Bode plot over a wide frequency range with min- method with only one matching point generally needs high
imal average errors, it cannot capture the zero frequency re- order to well approximate a frequency response in a wide
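As a reference point for the comparison in Example 6, a minimal square-root implementation of standard balanced truncation is sketched below. It assumes a stable, minimal state-space model (A, B, C) in dense form; it is not the DGE method of [37] nor the approximate grammian schemes, and the function and variable names are ours.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, k):
    # Grammians from A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Cholesky factors P = Lp Lp^T, Q = Lq Lq^T (system assumed stable and minimal).
    Lp = cholesky((P + P.T) / 2, lower=True)
    Lq = cholesky((Q + Q.T) / 2, lower=True)
    # Hankel singular values are the singular values of Lq^T Lp.
    U, hsv, Vt = svd(Lq.T @ Lp)
    S = np.diag(1.0 / np.sqrt(hsv[:k]))
    W = Lq @ U[:, :k] @ S          # left projection matrix, W^T V = I_k
    V = Lp @ Vt[:k, :].T @ S       # right projection matrix
    return W.T @ A @ V, W.T @ B, C @ V, hsv

Truncating after the k largest Hankel singular values carries the usual a priori bound of twice the sum of the discarded singular values, which is the error estimate the BT method is credited with in the conclusion below.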
IX. CONCLUSION

In this paper, we summarized our experience in model-order reduction, and provided new algorithms and improvements on existing algorithms for model-order reduction.

Among the model-order reduction algorithms mentioned in this paper, the moment-matching method is the basic one, as some of the other algorithms, e.g., the frequency-domain Chebyshev approximation-based method, the orthonormal basis function-based method, and the approximate grammian-based method, can be implemented via moment matching or moment computation. The moment-matching method with only one matching point generally needs a high order to approximate well a frequency response over a wide frequency range or a time-domain response with widely spread time constants, and the problem for its efficient use is how to select matching points scattered over a wide range. The frequency-domain Chebyshev expansion-based method provides one way to determine the matching points. The advantage of this method is that it has a very efficient way to determine the order of approximation, and it may result in a very low-order approximation (e.g., Examples 1 and 2) because of the exponential convergence property of the Chebyshev expansion. However, the collocation points of the Chebyshev expansion are located more densely at high frequencies than at low frequencies, which is not good for interconnects with low-pass characteristics.

The time-domain Chebyshev expansion-based method works better than the moment-matching method with a single matching point. Its disadvantage is its computational complexity, which is higher than that of the frequency-domain Chebyshev expansion-based method. The orthonormal basis function-based model-order reduction has the advantage that the parameters of the basis functions can be selected to fit the characteristics of the system of interest, and it can easily be transformed to an equivalent moment-matching method and implemented very efficiently. We have successfully used it to model frequency-dependent RL parameters of transmission lines at high frequencies. We have also done some work on using orthonormal basis functions with complex parameters for model reduction of RLC interconnects [8], and we are continuing the study in this respect.

The BT method has a good estimate of the approximation error. However, it can hardly compete with the above-mentioned algorithms because of its high computational complexity and the lack of a passivity guarantee for RLC interconnects. We provide some improvements on both of these fronts, and we think more work is needed to make it widely accepted by circuit simulators.

There are some problems commonly met in practice. One is the numerical instability problem with high-order approximation. We provide the dual-system theory, which is helpful in dealing with this problem and may also lead to a lower-order approximation with the same order of accuracy. Another is that, by using a single algorithm, some requirements may not be met exactly, e.g., exact matching of the starting point of an impulse response and/or exact matching of the frequency response at the dc point. By using the additivity property of the projection-based parameter matching methods, these requirements can be met very easily. Our experience shows that using projection methods of one kind over a wide frequency (time) range, together with parameter matching at special points of interest, may be the best practical way to achieve a low-order model with high accuracy.
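The additivity argument can be made concrete with a small sketch: build a one-sided (congruence-type) projection whose subspace is the union of a Krylov space at one expansion point and the single extra vector that forces interpolation at dc. The routine below is a generic rational-Krylov illustration under the assumptions of a dense single-input model dx/dt = A x + B u, y = C x, and a real expansion point s0; it is not taken verbatim from the algorithms of this paper, and the names are ours.

import numpy as np

def project_with_dc_match(A, B, C, s0, q):
    # One-sided projection whose subspace is the union of the order-q Krylov
    # space at the (real) expansion point s0 and the single vector -A^{-1} B.
    # Because (0*I - A)^{-1} B lies in range(V), the reduced model matches the
    # exact transfer function at s = 0 in addition to q moments at s0.
    n = A.shape[0]
    M = np.linalg.inv(s0 * np.eye(n) - A)   # dense inverse, for illustration only
    cols, v = [], M @ B
    for _ in range(q):
        cols.append(v)
        v = M @ v
    cols.append(np.linalg.solve(-A, B))     # dc-matching vector
    V, _ = np.linalg.qr(np.column_stack(cols))
    return V.T @ A @ V, V.T @ B, C @ V

Because the dc vector is simply appended to the basis before orthonormalization, exact matching at s = 0 is obtained without giving up the moments already matched at s0, which is the sense in which the matching conditions add.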
ACKNOWLEDGMENT

The authors wish to thank the reviewers for their helpful comments on this paper.

REFERENCES

[1] L. T. Pillage and R. A. Rohrer, "Asymptotic waveform evaluation for timing analysis," IEEE Trans. Computer-Aided Design, vol. 9, pp. 352–366, Apr. 1990.
[2] P. Feldmann and R. W. Freund, "Efficient linear circuit analysis by Padé approximation via the Lanczos process," IEEE Trans. Computer-Aided Design, vol. 14, pp. 639–649, May 1995.
[3] K. J. Kerns, I. L. Wemple, and A. T. Yang, "Stable and efficient reduction of substrate model networks using congruence transforms," in Proc. ICCAD'95, Nov. 1995, pp. 207–214.
[4] A. Odabasioglu, M. Celik, and L. T. Pileggi, "PRIMA: Passive reduced-order interconnect macromodeling algorithm," in Proc. ICCAD'97, Nov. 1997, pp. 58–65.
[5] Q. Yu, J. M. Wang, and E. S. Kuh, "Passive multipoint moment-matching model order reduction on multiport distributed interconnect networks," IEEE Trans. Circuits Syst. I, vol. 46, pp. 140–160, Jan. 1999.
[6] J. M. Wang, E. S. Kuh, and Q. Yu, "The Chebyshev expansion based passive model for distributed interconnect networks," in Proc. ICCAD'99, Nov. 1999, pp. 370–375.
[7] Q. Yu, J. M. Wang, and E. S. Kuh, "Passive model order reduction based on Chebyshev expansion of impulse response of interconnect networks," in Proc. DAC'00, June 2000, pp. 520–525.
[8] Q. Yu and E. S. Kuh, "Passive time-domain model order reduction via orthonormal basis functions," in Proc. ECCTD'01, Sept. 2001.
[9] K. Gallivan, E. Grimme, and P. Van Dooren, "Multi-point Padé approximants of large-scale systems via a two-sided rational Krylov algorithm," Univ. of Illinois, Urbana, IL 61801, Tech. Rep., 1994.
[10] B. C. Moore, "Principal component analysis in linear systems: Controllability, observability, and model reduction," IEEE Trans. Automat. Contr., vol. 26, pp. 17–32, Feb. 1981.
[11] M. Green, "Balanced stochastic realizations," Linear Algebra Appl., vol. 98, pp. 211–247, 1988.
[12] K. Glover, "All optimal Hankel-norm approximations of linear multivariable systems and their L∞ error bounds," Int. J. Control, vol. 39, no. 6, pp. 1115–1193, 1984.
[13] X. Chen and J. T. Wen, "Positive realness preserving model reduction with H∞ norm bounds," IEEE Trans. Circuits Syst. I, vol. 42, pp. 23–29, Jan. 1995.
[14] M. G. Safonov and R. Y. Chiang, "A Schur method for balanced truncation model reduction," IEEE Trans. Automat. Contr., vol. 34, pp. 729–733, July 1989.
[15] Y. Saad, "Numerical solution of large Lyapunov equations," in Proc. Int. Symp. Signal Processing, Scattering and Operator Theory and Numerical Methods (MTNS'89), vol. 3, 1990, pp. 856–869.
[16] P. K. Gunupudi and M. S. Nakhla, "Model-reduction of nonlinear circuits using Krylov-space techniques," in Proc. 36th DAC, June 1999, pp. 13–16.
[17] C.-K. Cheng, J. Lillis, S. Lin, and N. Chang, Interconnect Analysis and Synthesis. New York: Wiley, 1999, ch. 5.
[18] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral Methods in Fluid Dynamics. Berlin, Germany: Springer-Verlag, 1987.
[19] W. H. Kautz, "Transient synthesis in the time domain," IEEE Trans. Circuit Theory, vol. 1, pp. 29–39, Sept. 1954.
[20] D. C. Ross, "Orthonormal exponentials," IEEE Trans. Commun. Electron., pp. 173–176, 1964.
[21] W. H. Huggins, "Signal theory," IEEE Trans. Circuit Theory, vol. 3, pp. 210–216, Dec. 1956.
[22] H. Akcay and B. Ninness, "Orthonormal basis functions for modeling continuous-time systems," Signal Processing, vol. 77, pp. 261–274, 1999.
[23] P. S. C. Heuberger, P. M. J. Van Den Hof, and O. H. Bosgra, "A generalized orthonormal basis for linear dynamical systems," IEEE Trans. Automat. Contr., vol. 40, pp. 451–465, Mar. 1995.
[24] P. M. J. Van Den Hof, P. S. C. Heuberger, and J. Bokor, "System identification with generalized orthonormal basis functions," Automatica, vol. 31, no. 12, pp. 1821–1834, 1995.
[25] B. Wahlberg, "System identification using Laguerre models," IEEE Trans. Automat. Contr., vol. 36, pp. 551–562, May 1991.
[26] B. Wahlberg and P. M. Mäkilä, "On approximation of stable linear dynamical systems using Laguerre and Kautz functions," Automatica, vol. 32, no. 5, pp. 693–708, 1996.
[27] L. Knockaert and D. De Zutter, "Passive reduced order multiport modeling: The Padé–Laguerre, Krylov–Arnoldi–SVD connection," AEÜ Int. J. Electron. Commun., vol. 53, no. 5, pp. 254–260, 1999.
[28] A. Ruhe, "Rational Krylov sequence methods for eigenvalue computation," Linear Algebra Appl., vol. 58, pp. 391–405, 1984.
[29] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed. Baltimore, MD: Johns Hopkins Univ. Press, 1996, ch. 9.
[30] M. Celik and A. C. Cangellaris, "Simulation of dispersive multiconductor transmission lines by Padé approximation via the Lanczos process," IEEE Trans. Microwave Theory Tech., vol. 44, pp. 2525–2535, Dec. 1996.
[31] K. Glover, "All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds," Int. J. Control, vol. 39, no. 6, pp. 1115–1193, 1984.
[32] M. G. Safonov and R. Y. Chiang, "A Schur method for balanced-truncation model reduction," IEEE Trans. Automat. Contr., vol. 34, pp. 729–733, July 1989.
[33] C. S. Kenney and A. J. Laub, "The matrix sign function," IEEE Trans. Automat. Contr., vol. 40, pp. 1330–1348, Aug. 1995.
[34] X. Chen and J. T. Wen, "Positive realness preserving model reductions with H∞ error bounds," IEEE Trans. Circuits Syst. I, vol. 42, pp. 23–29, Jan. 1995.
[35] P. C. Opdenacker and E. A. Jonckheere, "A contraction mapping preserving balanced reduction scheme and its infinity norm bounds," IEEE Trans. Circuits Syst., vol. 35, pp. 184–189, Feb. 1988.
[36] R. E. Skelton, Dynamic Systems Control: Linear Systems Analysis and Synthesis. New York: Wiley, 1988.
[37] J. Li and J. White, "Efficient model reduction of interconnect via approximate system grammians," in Proc. ICCAD'99, Nov. 1999, pp. 380–383.
[38] V. Sreeram and P. Agathoklis, "On the computation of the gram matrix in time domain and its application," IEEE Trans. Automat. Contr., vol. 38, pp. 1516–1520, Oct. 1993.

Janet Meiling Wang received the B.S. degree from Nanjing University of Science and Technology (formerly East China Institute of Technology), Nanjing, China, in 1991, the M.E. degree from the Chinese Academy of Sciences, Beijing, China, in 1994, and the Master's and Ph.D. degrees in electrical engineering and computer sciences from the University of California at Berkeley, in 1997 and 2000, respectively.
She was with Intel, Santa Clara, CA, and Cadence, Santa Clara, CA, from 2000 to 2002. She has recently joined the Electrical and Computer Engineering Department at the University of Arizona, Tucson, as a Faculty Member.

Chia-Chi Chu (S'92–M'95) was born in Taipei, Taiwan, R.O.C., on September 4, 1965. He received the B.S. and M.S. degrees in electrical engineering from National Taiwan University, Taipei, Taiwan, R.O.C., and the Ph.D. degree in electrical engineering from Cornell University, Ithaca, NY, in 1996.
From 1995 to 1996, he was a member of the technical staff at Avant! Corp., Fremont, CA. Since 1996, he has been a Faculty Member of Electrical Engineering, Chang Gung University, Tao-Yuan, Taiwan, R.O.C., where he is currently an Associate Professor. He was a Visiting Scholar at the University of California at Berkeley in 1999. His current research interests include interconnect analysis techniques for deep submicron ICs, and applications of nonlinear circuit theory.
Dr. Chu was the recipient of the Young Author Award of the 1997 IEEE Control of Oscillations and Chaos Conference (COC'97).

Qingjian Yu was born in Shanghai, China, on March 27, 1939. He received the graduate degree from East China Institute of Technology (ECIT), Nanjing, China, in 1961.
He was a Teaching Assistant at ECIT from September 1961 to July 1977, a Lecturer from August 1977 to March 1981, and an Associate Professor from April 1981 to October 1986. He was a Full Professor at ECIT, now called Nanjing University of Science and Technology, from November 1986 to March 1999, when he retired. Between 1982 and 1984, he was a Visiting Scholar at Columbia University, New York, NY. In 1984, he was a Visiting Scholar at MIT, Cambridge, MA. From November 1992 to October 1993, he was invited as a Visiting Fellow to the Chinese University of Hong Kong, Hong Kong. He was employed as a Visiting Scholar at the University of California at Berkeley in 1994, in 1997, and from June 1999 to June 2002. He is currently working at Celestry Design Technologies, San Jose, CA.
Prof. Yu is a member of the CAD Committee of the Electronics Society of China, and a member of the CAD Committee of the Electrical Engineering Society of China.

Ernest S. Kuh (S'49–M'57–F'65–LF'94) received the B.S. degree from the University of Michigan, Ann Arbor, in 1949, the M.S. degree from the Massachusetts Institute of Technology, Cambridge, in 1950, the Ph.D. degree from Stanford University, Stanford, CA, in 1952, the Doctor of Engineering degree, Honoris Causa, from Hong Kong University of Science and Technology, Hong Kong, in 1997, and the Doctor of Engineering degree from the National Chiao Tung University, Hsin Chu, Taiwan, R.O.C., in 1999.
From 1952 to 1956, he was a member of the Technical Staff at Bell Telephone Laboratories, Murray Hill, NJ. He is the William S. Floyd, Jr. Professor Emeritus in Engineering and a Professor in the Graduate School, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, which he joined in 1956. From 1968 to 1972, he served as Chair of the Department, and from 1973 to 1980, as Dean of the College of Engineering.
Prof. Kuh is a member of the National Academy of Engineering, the Academia Sinica, and a foreign member of the Chinese Academy of Sciences. He is a Fellow of AAAS. He has received numerous awards and honors, including the ASEE Lamme Medal, the IEEE Centennial Medal, the IEEE Education Medal, the IEEE Circuits and Systems Society Award, the IEEE Millennium Medal, the 1996 C&C Prize, and the 1998 EDAC Phil Kaufman Award.