Dynamic Optimization with Path Constraints
by
William Francis Feehery
Submitted to the Department of Chemical Engineering
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy in Chemical Engineering
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
March 1998
© Massachusetts Institute of Technology 1998. All rights reserved.
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Department of Chemical Engineering
March 5, 1998
Certified by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Paul I. Barton
Assistant Professor of Chemical Engineering
Thesis Supervisor
Accepted by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Robert Cohen
St. Laurent Professor of Chemical Engineering
Chairman, Committee on Graduate Students
Dynamic Optimization with Path Constraints
by
William Francis Feehery
Abstract
Dynamic optimization problems, also called constrained optimal control problems, are of
interest in many areas of engineering. However, numerical solution of such problems is
difficult and thus the application of dynamic optimization in process engineering has been
limited. The dynamic optimization problems of interest in process engineering typically
consist of large systems of differential and algebraic equations (DAEs), and often contain
path equality or inequality constraints on the state variables. The objective of this thesis
was to improve the efficiency with which large-scale dynamic optimization problems may
be solved and to develop improved methods for including path constraints.
The most efficient method for numerical solution of large dynamic optimization problems
is the control parameterization method. The cost of solving the dynamic optimization
problem is typically dominated by the cost of solving the sensitivity system. The efficiency
with which the sensitivity system can be solved is significantly improved with the staggered
corrector sensitivity algorithm which was developed and implemented during the course of
this thesis.
State variable path constraints are difficult to handle because they can cause the initial
value problem (IVP) to be a high-index DAE. An efficient method for numerical solution of
a broad class of high-index DAEs, the dummy derivative method, is described and demon-
strated. Also described is a method for transforming an equality path-constrained dynamic
optimization problem into a dynamic optimization problem with fewer degrees of freedom
that contains a high-index DAE, which may be solved using the dummy derivative method.
Inequality path-constrained dynamic optimization problems present special challenges
because they contain the additional decisions concerning the order and number of inequality
activations and deactivations along the solution trajectory. Such problems are shown to be
equivalent to a class of hybrid discrete/continuous dynamic optimization problems. Exis-
tence and uniqueness theorems of the sensitivities for hybrid systems are derived. Based on
these results, several algorithms are presented for the solution of inequality path-constrained
dynamic optimization problems. One of them, the fluctuating index infeasible path algo-
rithm, works particularly well, and its use is demonstrated on several examples.
To Lisa
Acknowledgments
Although this thesis may seem like an individual effort, it reflects an extensive network
of support, advice, and friendship given to me by many people.
I would like to thank Professor Paul Barton for his intellectual guidance and
friendship. As I look back over the last few years, the chance to work with Paul has
been the biggest advantage of my choice to come to MIT. His encouragement, advice,
and (sometimes) criticism have kept my research focused and interesting.
It has been fun to work with the other students in the Barton group. In particular,
I would like to mention Berit Ahmad, Russell Allgor, Wade Martinson, Taeshin Park,
and John Tolsma. In addition, the presence of many visitors to the group during
my time here have made it a stimulating place, especially Julio Banga, Christophe
Bruneton, Santos Galán, Lars Kreul, and Denis Sédès.
My family has been a great source of love, encouragement, and support, probably
even more than they know. My parents have always been there when I needed them,
and my having gotten this far on the educational ladder is an excellent reflection on
the values that they taught me.
Most of all, thanks to my wife Lisa. I will always think of the period I spent at
MIT as the time where I met the most amazing person and we fell in love. Lisa’s
support and understanding during this project have kept me going– especially at the
end as I was writing this thesis.
Finally, I would like to acknowledge financial support in the form of a fellowship
from the National Science Foundation, and additional support from the United States
Department of Energy.
Contents
1 Introduction 19
3.4.1 Existence and uniqueness of the sensitivity functions for hybrid
systems with ODEs . . . . . . . . . . . . . . . . . . . . . . . . 82
3.4.2 Existence and uniqueness of the sensitivity functions for hybrid
systems with linear time-invariant DAEs . . . . . . . . . . . . 87
3.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.5.1 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.5.2 Critical values for the parameters . . . . . . . . . . . . . . . . 93
3.5.3 Functions discretized by finite elements . . . . . . . . . . . . . 99
3.5.4 Singular Van der Pol’s equation . . . . . . . . . . . . . . . . . 101
3.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.4.4 Switching index-1 models during integration . . . . . . . . . . 172
5.5 Pendulum Demonstration and Numerical Results . . . . . . . . . . . 174
5.6 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5.6.1 Fixed-volume condenser with no liquid holdup . . . . . . . . . 179
5.6.2 Standard high-index model . . . . . . . . . . . . . . . . . . . . 184
5.6.3 Continuous stirred-tank reactor . . . . . . . . . . . . . . . . . 185
5.6.4 High-Index dynamic distillation column . . . . . . . . . . . . . 187
5.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
7.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
9 Conclusions 315
9.1 Directions for Future Work . . . . . . . . . . . . . . . . . . . . . . . . 319
References 380
List of Figures
5-6 Graph of index-3 system after one step of Pantelides’ algorithm . . . 156
5-7 Graph of index-3 system after two steps of Pantelides’ algorithm . . . 156
5-8 Condition number of the corrector matrix at different points on the
solution trajectory of the equivalent index-1 pendulum as a function of
the step size h . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5-9 Length constraint in dummy derivative solution of high-index pendulum 174
5-10 Solution of index-3 pendulum model using ABACUSS . . . . . . . . . 176
5-11 LINPACK estimate of corrector matrix condition number . . . . . . . 177
5-12 ABACUSS index-reduction output for high-index condenser model . . 182
5-13 Dummy derivative Temperature profile for high-index condenser . . . 183
5-14 Dummy derivative mole holdup profile for high-index condenser . . . 183
5-15 State trajectories for index-20 DAE . . . . . . . . . . . . . . . . . . . 184
5-16 Concentration profile for index-3 CSTR example . . . . . . . . . . . . 186
5-17 Temperature profiles for index-3 CSTR example . . . . . . . . . . . . 186
5-18 Reboiler temperature profile for BatchFrac Column . . . . . . . . . . . 187
6-1 State space plot showing the optimal trajectory of the two-dimensional
car problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6-2 Optimal acceleration trajectory for two-dimensional car problem . . . 216
6-3 The optimal velocity in the x direction for the two-dimensional car
problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
6-4 The optimal velocity in the y direction for the two-dimensional car
problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
6-5 State-space plot for the brachistochrone problem . . . . . . . . . . . . 219
6-6 Optimal force trajectory for the brachistochrone problem . . . . . . . . 220
6-7 Optimal θ trajectory for the brachistochrone problem . . . . . . . . . . 220
8-4 Curve-constrained brachistochrone state variable trajectory . . . . . . 277
8-5 Constrained Van Der Pol control variable trajectory . . . . . . . . . . 280
8-6 Constrained Van Der Pol state variable trajectories . . . . . . . . . . 280
8-7 Constrained car problem acceleration profile . . . . . . . . . . . . . . 283
8-8 Constrained car problem velocity profile . . . . . . . . . . . . . . . . . 283
8-9 Original Index-2 Jacobson and Lele control trajectory . . . . . . . . . 286
8-10 Original Index-2 Jacobson and Lele y2 trajectory . . . . . . . . . . . . 286
8-11 Modified Index-2 Jacobson and Lele control trajectory . . . . . . . . . 287
8-12 Modified Index-2 Jacobson and Lele y2 trajectory . . . . . . . . . . . . 287
8-13 Index-3 Jacobson and Lele control trajectory . . . . . . . . . . . . . . 289
8-14 Index-3 Jacobson and Lele y1 trajectory . . . . . . . . . . . . . . . . . 289
8-15 Pressure-constrained reactor control variable trajectory . . . . . . . . 292
8-16 Pressure-constrained reactor pressure trajectory . . . . . . . . . . . . 292
8-17 Pressure-constrained reactor concentration trajectories . . . . . . . . . 293
8-18 Fed-batch penicillin fermentation control variable trajectory . . . . . . 296
8-19 Fed-batch penicillin fermentation state variable trajectories . . . . . . 296
8-20 CSTR and column flowsheet . . . . . . . . . . . . . . . . . . . . . . . 297
8-21 ABACUSS index-reduction output for reactor and column startup model
when the constraint is inactive . . . . . . . . . . . . . . . . . . . . . . 308
8-22 ABACUSS index-reduction output for reactor and column startup model
when the constraint is active . . . . . . . . . . . . . . . . . . . . . . . 309
8-23 The optimal trajectory for the cooling water flowrate . . . . . . . . . . 311
8-24 The temperature profile in the reactor . . . . . . . . . . . . . . . . . . 311
8-25 The molar flowrates of the species leaving the system . . . . . . . . . 312
8-26 The temperature profile in the column . . . . . . . . . . . . . . . . . . 312
8-27 The reboiler duty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
8-28 The condenser duty . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
List of Tables
Chapter 1
Introduction
the most simple problems. Furthermore, these techniques often require a significant
investment of time and effort to set the problem up to be solved.
The focus of much recent process engineering research has been the development of
techniques for building and numerically solving large simulations of dynamic systems.
A typical oil refinery can be modeled with over 100,000 equations, and even
models of complex unit operations can require tens of thousands of equations. En-
gineers often use these simulations to find or improve feasible operating policies for
a given system design. This process is often performed by enumerating various op-
erating policies and comparing them against each other. Such a procedure is time
consuming and provides no guarantee of finding the ‘best’ answer. Finding and im-
proving feasible operating policies can also be done by solving a dynamic optimization
problem. Dynamic optimization problems have the advantage that the solution tech-
niques can often provide an optimal answer, at least locally, but the problems are at
present difficult to set up and solve.
mization problem can serve as set point programs for the control systems. Although
the literature abounds with analytical and numerical treatments of such problems (see
for example [118] for a review), the extension of dynamic optimization to the design
of integrated plant-wide operating policy for an entire batch process has only been
contemplated in recent years. The benefits of plant-wide dynamic optimization of
batch processes were first demonstrated in [10], and more recent work [19, 33] shows
that dynamic optimization of relatively sophisticated plant-wide models involving
thousands of states is possible.
The dynamic optimization problems that arise in process engineering have two
distinct characteristics. First, the dynamic models are composed of sets of differ-
ential and algebraic equations (DAEs). The differential equations typically arise
from dynamic material and energy balances, while the algebraic equations arise from
thermodynamic and kinetic relationships and the physical connectivity of a process
flowsheet. Although numerical solution of the types of DAEs that typically arise in
dynamic simulation of chemical processes has become reliable in recent years, dynamic
optimization problems typically involve (either explicitly or implicitly) more complex
forms of DAEs, called high-index DAEs, for which numerical solution is still an active
research area. Second, dynamic optimization problems that arise in process systems
engineering often contain path constraints on state variables, which result from phys-
ical, safety, and economic limitations imposed on the system. State variable path
constraints are not easily handled by dynamic optimization solution techniques that
are currently available. In fact, it is shown later in this thesis that state variable path
constraints lead to high-index DAEs.
FB FA
Reactor
Cooling Jacket
in
Fwater
out
Fwater
The objective of this thesis is to improve the ease with which dynamic optimiza-
tion can be used in process engineering on a regular basis. The three specific areas
of research were improving the computational efficiency of large-scale dynamic op-
timization solution methods, developing numerical solution techniques for the types
of high-index DAEs that arise in dynamic optimization problems, and developing
the ability to include state variable path constraints on the dynamic optimization
problem.
The dynamic optimization problem and a review of numerical solution techniques
are presented in Chapter 2. A description of the control parameterization method
is also given in Chapter 2. Control parameterization is dependent on the ability to
calculate the sensitivity of a DAE model to system parameters, which is described
in Chapter 3. A method for efficient numerical sensitivity calculation is given in
Chapter 4. Efficient numerical solution of high-index DAEs is described in Chapter 5.
Chapters 6 and 7 describe a new method for including state variable equality and
inequality path constraints in dynamic optimization problems, and Chapter 8 gives
numerical examples of this method. Chapter 9 contains conclusions and suggestions
for future research.
Chapter 2
The solution of dynamic optimization (also known as optimal control) problems re-
quires the determination of control trajectories that optimize some performance mea-
sure for a dynamic system. The objective of this chapter is to derive the first-order
necessary conditions for the solution to a dynamic optimization problem, and discuss
various numerical methods for solving these problems. The control parameterization
method, which was the method chosen in this thesis to solve dynamic optimization
problems, is described and justified. The dynamic optimization problems discussed in
this chapter do not include so-called additional state variable path constraints, which
are introduced in later chapters. These problems are constrained by the dynamic
system, which is assumed to be a differential-algebraic equation (DAE). The control
parameterization framework developed here is one that may be extended to handle
state variable path constraints later in this thesis.
The following glossary may be useful when reading this chapter. These are brief
definitions intended to clarify the discussion in this chapter, and they are explored in
greater detail later in this thesis.
can be written as
f (ż, z, t) = 0 (2.1)
Ordinary differential equations (ODEs) are one class of DAEs. With the excep-
tion of ODEs, one characteristic of DAEs is that there are algebraic constraints
on the state variables z. These constraints may appear explicitly as in
g(ẋ, x, y, t) = 0 (2.2)
h(x, y, t) = 0 (2.3)
where z = (x, y), or they may appear implicitly due to singularity of \partial f/\partial\dot{z} when
it has no zero rows.
Differential Index of a DAE “The minimum number of times that all or part of
(2.1) must be differentiated with respect to t in order to determine ż as a
continuous function of z, t is the index of the DAE” [21]. Reliable numerical
techniques exist for the direct solution of DAEs that have index ≤ 1. The index
is a local quantity; i.e., it is defined at a particular point on the state trajectory
and may change at discrete points along the state trajectory.
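As a simple illustration (not one of the examples treated in this thesis), consider the DAE \dot{x}_1 = x_2, x_1 - u(t) = 0 with a known forcing function u(t). The algebraic equation must be differentiated once to give \dot{x}_1 = \dot{u} (and hence x_2 = \dot{u}) and a second time to give \dot{x}_2 = \ddot{u} before all of \dot{z} is determined as a continuous function of z and t, so this DAE has index 2.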
High-Index DAE According to common convention a DAE that has index > 1.
The solution of a high-index DAE is constrained by time derivatives of a non-
empty subset of the equations in the DAE. In general, only limited classes of
high-index DAEs may be solved directly using standard numerical techniques.
a sufficient condition, and DAEs for which this condition does not hold are not
necessarily high-index. However, this is a very useful criterion for determining
the index of many problems.
Consistent Initial Conditions The vectors z(t_0), \dot{z}(t_0) are called consistent initial
conditions of (2.1) if they satisfy the corresponding extended system at t0 [139].
Design Degrees of Freedom The number of unknowns which can be specified ar-
bitrarily as design or input quantities is called the number of design degrees
of freedom [139]. This quantity is different from the dynamic degrees of free-
dom because it involves the input variables, the choice of which can affect the
number of dynamic degrees of freedom [56, 87, 139].
2.1 DAE Dynamic Optimization Necessary Conditions
There are many texts (e.g., [24, 82]) that describe in detail the theory of dynamic
optimization. Most of them derive the necessary conditions for optimality of optimal
control problems under the assumption that the dynamic system is described by
ordinary differential equations (ODEs). In this section, the more general first-order
necessary optimality conditions are presented for dynamic systems that are described
by DAEs. The subject of necessary conditions for optimal control for DAEs was also
addressed in [36, 145]; however, both papers focus on the derivation of a Maximum
Principle. A Maximum Principle is useful when there are constraints on the control
variable, but in this section the dynamic optimization problem is assumed to have
no constraints on the control variable in order to demonstrate that the necessary
conditions for optimality for a DAE are also DAEs with some interesting properties.
The dynamic optimization problem considered in this section is one where the ini-
tial state for the state variables is given (that is, it is not free to be determined by the
optimization), the control trajectories are unconstrained, and the state trajectories
are constrained only by the DAE.
f (ẋ, x, u, t) = 0 (2.5)
where J(·), L(·), ψ(·) → R, f (·), φ(·) → Rmx , x ∈ Rmx , and u ∈ Rmu . The state
variables x in this formulation include both differential and algebraic state variables.
The DAE (2.5) may have arbitrary differential index v, and (2.6) defines a consistent
set of initial conditions. Since the index of (2.5) is not restricted, this formulation
may include equality path constraints on state variables.
Since it is assumed that the initial time t0 and state condition x(t0 ) are fixed, the
objective function may be expressed as:
J = \int_{t_0}^{t_f} \bar{L}(\dot{x}, x, u, t)\, dt \qquad (2.8)
where:
\bar{L}(\dot{x}, x, u, t) = \frac{d\psi}{dt} + L = \frac{\partial\psi}{\partial t} + \frac{\partial\psi}{\partial x}^{T}\dot{x} + L \qquad (2.9)
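In the development that follows, H denotes the Hamiltonian obtained by adjoining the DAE (2.5) to \bar{L} with time-varying multipliers \lambda(t); in the standard form assumed here,
H(\dot{x}, x, u, \lambda, t) = \bar{L}(\dot{x}, x, u, t) + \lambda^{T} f(\dot{x}, x, u, t)
and \bar{J} denotes the correspondingly augmented functional.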
The increment of the functional is:
\Delta\bar{J} = \int_{t_0}^{t_f}\left[H(\dot{x}+\delta\dot{x},\, x+\delta x,\, u+\delta u,\, \lambda+\delta\lambda,\, t) - H(\dot{x}, x, u, \lambda, t)\right] dt + \int_{t_f}^{t_f+\delta t_f} H(\dot{x}, x, u, \lambda, t)\, dt \qquad (2.13)
Expanding the increment in a Taylor series around the point (\dot{x}(t), x(t), u(t), \lambda(t)) and
extracting the terms that are linear in \delta\dot{x}, \delta x, \delta u, \delta\lambda, and \delta t_f gives the variation of \bar{J}:
\delta\bar{J} = \int_{t_0}^{t_f}\left[\frac{\partial H}{\partial\dot{x}}\delta\dot{x} + \frac{\partial H}{\partial x}\delta x + \frac{\partial H}{\partial u}\delta u + \frac{\partial H}{\partial\lambda}\delta\lambda\right] dt + H(\dot{x}, x, u, \lambda, t)\,\delta t_f \qquad (2.14)
First-order necessary conditions for an optimum can be found by setting the variation
of \bar{J} equal to zero. The conditions are:
\frac{\partial H}{\partial x} - \frac{d}{dt}\frac{\partial H}{\partial\dot{x}} = 0 \qquad (2.18)
\frac{\partial H}{\partial u} = 0 \qquad (2.19)
\frac{\partial H}{\partial\lambda} = 0 \qquad (2.20)
\left.\frac{\partial H}{\partial\dot{x}}\right|_{t=t_f}\delta x + \left.\left(H - \frac{\partial H}{\partial\dot{x}}\dot{x}\right)\right|_{t=t_f}\delta t_f = 0 \qquad (2.21)
These conditions are a generalization of the conditions that have been reported for
dynamic optimization of ODE systems.
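Once H has been formed, conditions (2.18)–(2.20) can be written out mechanically. The following sketch does so symbolically with SymPy for a small illustrative system (\dot{x} + x - u = 0 with L = x^2 + u^2 and \psi = 0, so that \bar{L} = L); the system and all names are assumptions made purely for illustration and are not taken from this chapter.

import sympy as sp

t = sp.symbols('t')
x, u, lam = (sp.Function(n)(t) for n in ('x', 'u', 'lam'))

# DAE residual f(xdot, x, u, t) = 0 and running cost L (illustrative system)
f = x.diff(t) + x - u            # i.e., xdot = u - x
L = x**2 + u**2
H = L + lam*f                    # Hamiltonian, assuming H = L_bar + lam^T f with psi = 0

cond_218 = sp.Eq(sp.diff(H, x) - sp.diff(H, x.diff(t)).diff(t), 0)   # costate equation (2.18)
cond_219 = sp.Eq(sp.diff(H, u), 0)                                   # stationarity in u (2.19)
cond_220 = sp.Eq(sp.diff(H, lam), 0)                                 # recovers the DAE (2.20)

for c in (cond_218, cond_219, cond_220):
    print(sp.simplify(c))

For this system the three printed conditions are the costate equation 2x + \lambda - \dot{\lambda} = 0, the stationarity condition 2u - \lambda = 0, and the original DAE \dot{x} + x - u = 0.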
2.1.1 ODE Example
The following dynamic optimization problem was taken from [24] and is attributed to
Isaac Newton. Essentially, the problem is to find the minimum-drag nose cone shape
in hypersonic flow. The problem is:
\min_{u(x)}\ \frac{1}{2}[r(l)]^2 + \int_0^l \frac{r u^3}{1+u^2}\, dx \qquad (2.27)
subject to:
\frac{dr}{dx} + u = 0 \qquad (2.28)
r(0) = r_0 \qquad (2.29)
where x is the axial distance from the point of maximum radius, r is the radius of
the body, r(0) is the maximum radius, and l is the length of the body.
The optimality conditions (2.23–2.26) reduce in this problem to:
\frac{d\lambda}{dx} = \frac{u^3}{1+u^2} \qquad (2.30)
\lambda = \frac{r u^2 (3+u^2)}{(1+u^2)^2} \qquad (2.31)
\frac{dr}{dx} + u = 0 \qquad (2.32)
r(0) = r_0 \qquad (2.33)
These are the same optimality conditions as those derived in [24] except that the
sign of λ in (2.31) and (2.34) is different. This difference occurs because the usual
derivation of conditions (2.23–2.26) for ODEs assumes that the ODEs have the form
ẋ = f (x, u, t), while the derivation of (2.23–2.26) made the more general assumption
that the DAE has the form f (ẋ, x, u, t) = 0. However, it can be shown that the
solution for the state variable given in [24] also satisfies (2.30–2.34), although the
adjoint variable trajectories are different.
Note that even though the original problem is described by an ODE, the optimality
conditions are an index-1 DAE (when u ≠ 0), since (2.31) contains no time derivative
variables. In general, all dynamic optimization problems described by ODEs have
optimality conditions that are DAEs.
It is possible to turn the ODE example of the previous section into an index-1 DAE,
and thus check the validity of the optimality conditions, by writing:
\min_{u(x)}\ a(l) + \int_0^l b\, dx \qquad (2.35)
subject to:
\frac{dr}{dx} + u = 0 \qquad (2.36)
a = \frac{1}{2} r^2 \qquad (2.37)
b = \frac{r u^3}{1+u^2} \qquad (2.38)
r(0) = r_0 \qquad (2.39)
The optimality conditions for this problem (neglecting the condition (2.26) for the
moment) reduce to (2.36–2.39) and:
\frac{d\lambda_1}{dx} = -\frac{u^3\lambda_2}{1+u^2} - \frac{1}{2}\lambda_3 \qquad (2.40)
\lambda_2 = -1 \qquad (2.41)
\lambda_3 = 0 \qquad (2.42)
\lambda_1 = -\frac{r u^2(3+u^2)}{(1+u^2)^2}\lambda_2 \qquad (2.43)
If (2.41–2.42) are substituted into (2.40) and (2.43), the result is the same as (2.30–
2.31).
The boundary condition (2.26) reduces to:
λ1 (l) = 0 (2.44)
1=0 (2.45)
which, apart from being inconsistent, does not agree with the results of the ODE
example. The problem is that the variable a appears in ψ but \dot{a} does not appear in
the DAE (this is further discussed in Section 2.2 below). If (2.37) is replaced with its
time differential, the boundary condition reduces to:
λ3 (l) = −1 (2.47)
\min_{u(t)}\ \left[x(t_f) - \frac{1}{2}\right]^2 \qquad (2.48)
subject to:
ẋ = v (2.49)
ẏ = w (2.50)
\dot{v} + Tx = u^2 y \qquad (2.51)
\dot{w} + Ty = -1 + u^2 x \qquad (2.52)
x^2 + y^2 = 1 \qquad (2.53)
x(t0 ) = 0 (2.54)
ẏ(t0 ) = 0 (2.55)
The costate equations are:
λ2 = 0 (2.63)
λ3 = 0 (2.64)
λ4 = 0 (2.65)
which cannot be correct. As discussed in Section 2.2 the boundary conditions must
be derived using an equivalent non-high index formulation of the DAE.
Another way to find optimality conditions for this problem is to use the methods
of Chapter 5 to derive an equivalent index-1 DAE for (2.49–2.53):
ẋ = v (2.66)
ẏ = w (2.67)
m\dot{v} + Tx = u^2 y \qquad (2.68)
m\dot{w} + Ty = -1 + u^2 x \qquad (2.69)
x^2 + y^2 = 1 \qquad (2.70)
x\dot{x} + y\bar{y} = 0 \qquad (2.71)
where ȳ and w̄ are dummy algebraic variables that have been introduced in place of
ẏ and ẇ. For this system, the costate equations are:
λ2 = 0 (2.76)
λ4 + yλ7 = 0 (2.79)
Interestingly, in this case the combined system (2.66–2.80) is index-2, even though
the DAE (2.66–2.72) is index-1. A degree of freedom analysis indicates that there are
two initial and two final time boundary conditions, as expected.
Applying the boundary condition (2.26) gives:
which is the correct number of final time conditions. This example leads to a conjec-
ture that it is possible to derive optimality conditions for a high-index DAE directly,
but that the final-time boundary conditions must be derived using an equivalent
non-high index formulation of the high-index DAE.
\min_{\theta(t),\,t_f}\ t_f \qquad (2.83)
subject to:
ẋ = u (2.84)
ẏ = v (2.85)
u̇ = F sin(θ) (2.86)
v̇ = g − F cos(θ) (2.87)
\tan(\theta) = \frac{v}{u} \qquad (2.88)
[x(0), y(0), ẋ(0)] = [0, 1, 0]
This problem is a version of the brachistochrone problem due to [14] which is further
discussed in Chapter 6. This problem is used in the next section to demonstrate
the different types of boundary conditions that are possible in dynamic optimization
problems. The index of the DAE in this problem is two if θ is selected as the control.
The optimality condition (2.23) reduces to:
λ̇1 = 0 (2.89)
λ̇2 = 0 (2.90)
\dot{\lambda}_3 = -\lambda_1 + \frac{v}{u^2}\lambda_5 \qquad (2.91)
\dot{\lambda}_4 = -\lambda_2 - \frac{1}{u}\lambda_5 \qquad (2.92)
0 = −λ3 sin(θ) + λ4 cos(θ) (2.93)
2.2 Boundary conditions
The exact form of the boundary condition (2.21) depends on the classes of constraints
imposed on the dynamic optimization problem. This section presents the boundary
conditions for several different classes of constraints. The approach followed is gen-
erally that of [82], but here the results are valid for arbitrary-index DAEs.
The number of boundary conditions depends on the form of (2.5), and is related
to rd , the dynamic degrees of freedom of the DAE. Several observations may be made
about the number of boundary conditions that may be imposed:
• The number of boundary conditions can be dependent on the time at which the
conditions are defined, since roverall can change over the time domain.
• Typically, the boundary conditions consist of initial conditions on the DAE and
end-point conditions on the costate equations. In this case, the number of initial
conditions for the DAE is rd at t = t0 . The number of end-point conditions on
the costate equations is roverall − rd at t = tf .
The final-time boundary conditions are obtained by customizing (2.26) to the par-
ticular type of side conditions in the dynamic optimization problem. It is important
to note that although (2.23–2.25) can be applied to DAEs of arbitrary index, (2.26)
cannot be correctly applied directly to high-index DAEs. Consistent initial conditions
for a high-index DAE can only be obtained with the corresponding extended system
for the high-index DAE, and likewise (2.26) can only be used with the corresponding
extended system for the DAE (2.23–2.25). Therefore, for the purposes of this section,
the DAE f is assumed to have index at most one. This restriction is allowable because
the methods of Chapter 5 can be used to derive an equivalent index-1 DAE for the
high-index DAEs of interest in this thesis.
Fixing tf provides the boundary condition on the final time, and allows δtf to be set
to zero in (2.26). There are several possibilities for the other boundary conditions,
depending on the form of the dynamic optimization problem.
• Final state free: Since the final state is unspecified, δx(tf ) is arbitrary, and
therefore the end-point boundary conditions must satisfy:
\left[\lambda^{T}\frac{\partial f}{\partial\dot{x}} + \frac{\partial\psi}{\partial x}\right]_{t=t_f}\delta x_f = 0 \qquad (2.95)
\frac{\partial m_i}{\partial x}, \qquad i = 1 \ldots n_m \qquad (2.97)
and therefore it can be shown [82] that the boundary conditions are:
\left[\lambda^{T}\frac{\partial f}{\partial\dot{x}} + \frac{\partial\psi}{\partial x}\right]_{t=t_f} = \sum_{i=1}^{n_m}\nu_i\frac{\partial m_i}{\partial x} \qquad (2.98)
m(x(t_f)) = 0 \qquad (2.99)
• Final state specified : This is a special instance of the preceding case. However,
here the dimensionality of the hypersurface is equal to rd , and therefore the
boundary conditions:
x(tf ) = xf (2.100)
apply. The boundary condition (2.26) applies also, but for every equation in
(2.100), the corresponding δx = 0 in (2.26). Therefore, if roverall > 2rd some of
the boundary conditions will come from (2.26).
• Final state fixed or final state free: These cases have the same boundary con-
ditions as their counterpart discussed above except that there is an additional
boundary condition:
\left[\frac{\partial\psi}{\partial t} + L + \lambda^{T} f - \lambda^{T}\frac{\partial f}{\partial\dot{x}}\dot{x}\right]_{t=t_f} = 0 \qquad (2.101)
• Final state constrained to lie on a moving point: In this case, x(tf ) must lie on
the final point defined by θ(tf ). This case is complicated by the fact that no
more than rd state conditions may be simultaneously imposed, and rd will be
less than mx if the DAE is index-1. The dimension of the vector function θ is
equal to rd , and the variations of δxf and δtf are related by:
P\,\delta x_f = \frac{d\theta}{dt}\,\delta t_f \qquad (2.102)
Note that (2.102) will not define all δx if rd is less than mx . Equation (2.95)
holds for all undefined elements of δx, and may give additional boundary con-
ditions.
• Final state constrained by a moving surface: This is the most complicated case,
where the final state is constrained by the moving surface m(x, t) = 0, m :
R^{m_x+1} → R^{n_m}, 1 ≤ n_m ≤ r_{overall} − r_d − 1 at t_f. This situation is similar to
the preceding case, except that the vector [δx(t_f) | δt_f] is normal to each of the
gradient vectors [∂m_i/∂x | ∂m_i/∂t]. Therefore, the boundary conditions are:
\left[\lambda^{T}\frac{\partial f}{\partial\dot{x}} + \frac{\partial\psi}{\partial x}\right]_{t=t_f} = \sum_{i=1}^{n_m}\nu_i\frac{\partial m_i}{\partial x} \qquad (2.105)
m(x(t_f)) = 0 \qquad (2.106)
\left[\frac{\partial\psi}{\partial t} + L + \lambda^{T} f - \lambda^{T}\frac{\partial f}{\partial\dot{x}}\dot{x}\right]_{t=t_f} = \sum_{i=1}^{n_m}\nu_i\frac{\partial m_i}{\partial t} \qquad (2.107)
2.2.3 Problems with algebraic variables in ψ
As shown in Section 2.1.2, if the function ψ in (2.4) contains algebraic state variables,
special treatment is required to derive the boundary conditions with (2.26). The
difficulty is that the time derivatives of algebraic variables do not appear explicitly
in the DAE, but ∂H/∂ ẋ will be nonzero with respect to the time derivatives of any
algebraic variables for which the corresponding elements of ∂ψ/∂x are nonzero.
The boundary condition (2.26) can be correctly applied if a subset of the equations in the
DAE f(\dot{x}, x, u, t) is replaced with its time derivatives for the purposes of (2.26). It is
valid to do this because all time derivatives of f (ẋ, x, u, t) must hold at t0 .
The dynamic degrees of freedom of the DAE (2.84–2.88) and the extended DAE
(2.84–2.93) are three and seven, respectively. Three initial conditions were defined,
and therefore, four final time conditions remain to be specified. The implications of
some of the various boundary condition types can be demonstrated by considering
the following cases:
This is the case where the final state is constrained by a hypersurface. Since the
final time is free, there is an additional degree of freedom that is determined by
the dynamic optimization problem (2.83–2.88). The final time condition (2.101)
reduces to:
\left[1 - \lambda_1 u - \lambda_2 v - \lambda_3 F\sin(\theta) - \lambda_4 F\cos(\theta) + \lambda_5\left(\tan(\theta) - \frac{v}{u}\right)\right]_{t=t_f} = 0 \qquad (2.108)
The optimality conditions (2.98–2.99) reduce to:
λ1 (tf ) = ν1 (2.109)
λ2 (tf ) = 0 (2.110)
λ3 (tf ) = 0 (2.111)
λ4 (tf ) = 0 (2.112)
x(tf ) = xf (2.113)
Note that the undetermined multiplier ν1 has been added to the dynamic opti-
mization problem, and there are now four remaining dynamic degrees of free-
dom.
This is the case where the final state must lie on a moving point. Assuming the
state variable vector for this problem is [x, y, u, v, F ], the permutation matrix
P for this problem is:
P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \qquad (2.114)
δv(tf ) = 0 (2.117)
Note that δu and δF can take arbitrary values and still satisfy the boundary
condition. Equations (2.103–2.104) reduce to:
\left[\lambda_1 a + \lambda_2 b + 1 - \lambda_1 u - \lambda_2 v - \lambda_3 F\sin(\theta) - \lambda_4 F\cos(\theta) - \lambda_5\left(\tan(\theta) - \frac{v}{u}\right)\right]_{t=t_f} = 0 \qquad (2.118)
x(tf ) = a(tf ) (2.119)
ẏ(tf ) = 0 (2.121)
There is still one more final time condition that must be defined. Equation
(2.95) is applied to the undefined elements of δx to obtain the final boundary
condition:
λ3 (tf ) = 0 (2.122)
In this case, the final state is constrained to lie on a circle that is expanding with
time (i.e., R(t) is a scalar valued function of time). Equations (2.105–2.107)
reduce to:
λ3 (tf ) = 0 (2.125)
λ4 (tf ) = 0 (2.126)
\left[1 - \lambda_1 u - \lambda_2 v - \lambda_3 F\sin(\theta) - \lambda_4 F\cos(\theta) + \lambda_5\left(\tan(\theta) - \frac{v}{u}\right)\right]_{t=t_f} = -\nu_1\frac{\partial R}{\partial t} \qquad (2.128)
In this case, the objective function (2.83) is changed to:
and the final time is not a decision variable. Equations (2.89–2.94) apply un-
changed to this problem, and (2.95) gives the boundary conditions:
λ1 + 2x = 0 (2.130)
λ2 = 0 (2.131)
λ3 = 0 (2.132)
λ4 = 0 (2.133)
2.3 DAEs and Dynamic Optimization Problems
subject to:
This problem is a very simple ODE system which is linear in the control variable u,
and possibly nonlinear in the state variable x. The necessary conditions (2.23–2.26)
simplify to generate the extended system:
\dot{\lambda} = -\left(\frac{\partial f}{\partial x} + u\frac{\partial g}{\partial x}\right)\lambda \qquad (2.136)
\lambda g(x) = 0 \qquad (2.137)
The Jacobian of (2.136–2.138) with respect to the highest order time derivatives
(λ̇, ẋ, u) is:
\begin{bmatrix} 1 & 0 & \frac{\partial g}{\partial x}\lambda \\ 0 & 0 & 0 \\ 0 & 1 & -g \end{bmatrix} \qquad (2.140)
\lambda\frac{\partial g}{\partial x}\dot{x} + \dot{\lambda}\, g(x) = 0 \qquad (2.141)
the Jacobian of the underlying DAE consisting of (2.136), (2.141) and (2.138) is:
\begin{bmatrix} 1 & 0 & \frac{\partial g}{\partial x}\lambda \\ g & \lambda\frac{\partial g}{\partial x} & 0 \\ 0 & 1 & -g \end{bmatrix} \qquad (2.142)
\lambda\frac{\partial g}{\partial x}f(x) - \lambda\frac{\partial f}{\partial x}g(x) \qquad (2.143)
since the variable u does not appear, a further differentiation of (2.136), (2.141) and
(2.138) is required to derive the corresponding extended system.
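The structural argument above can be checked symbolically. The sketch below (SymPy; illustrative only) forms the residuals of (2.136)–(2.138) and the Jacobian with respect to (\dot{\lambda}, \dot{x}, u); note that (2.138) is not displayed above and is assumed here to be \dot{x} = f(x) + g(x)u, which is the form implied by the third row of (2.140).

import sympy as sp

t = sp.symbols('t')
x, lam, u = (sp.Function(n)(t) for n in ('x', 'lam', 'u'))
f, g = sp.Function('f'), sp.Function('g')

# Residuals of (2.136)-(2.138); (2.138) is assumed to be xdot = f(x) + g(x)*u
r1 = lam.diff(t) + (sp.diff(f(x), x) + u*sp.diff(g(x), x))*lam   # (2.136)
r2 = lam*g(x)                                                    # (2.137)
r3 = x.diff(t) - f(x) - g(x)*u                                   # (2.138), assumed form

vars_ = [lam.diff(t), x.diff(t), u]   # highest-order time derivatives and the control
J = sp.Matrix(3, 3, lambda i, j: sp.diff([r1, r2, r3][i], vars_[j]))
print(J)          # cf. (2.140): the row corresponding to (2.137) is identically zero
print(J.det())    # 0, so (lam', x', u) cannot be solved for and the system is not index-1

The determinant is identically zero because the row corresponding to (2.137) contains none of (\dot{\lambda}, \dot{x}, u), which is exactly the singularity discussed above.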
It is interesting to see whether (2.26) has given the correct number of final time
boundary conditions for this problem. The corresponding extended system for this
DAE consists of seven equations, (2.136–2.138), (2.141), and the time derivatives of
(2.136), (2.141) and (2.138). The variables incident in the corresponding extended
system are ẍ, ẋ, x, λ̈, λ̇, λ, u̇, and u. Since there are eight incident variables and seven
equations in the corresponding extended system, there should be one boundary con-
dition. That boundary condition is the final-time condition given by (2.139), which
implies no initial conditions may be specified for this problem.
The bottom line is that dynamic optimization problems are inherently boundary
value problems in DAEs, not ODEs (even if the dynamic system is an ODE). There-
fore, solving a dynamic optimization problem requires knowledge about the dynamic
degrees of freedom of both the original dynamic system and the DAE defined by
the optimality conditions in order to formulate valid boundary conditions, and the
ability to solve the DAE that is embedded in the two-point boundary value problem
either explicitly or implicitly. This fact has not been formally recognized in any of the
work that has been done on development of methods to solve dynamic optimization
problems. In fact, all of the methods that have been developed handle the DAEs
implicitly, both because these problems were not recognized as DAE problems, and
because advances that allowed direct numerical solution of DAEs have been made
only recently.
2.4 Solution algorithms for Dynamic Optimization Problems
The optimality conditions presented in the previous section are of little practical use
by themselves. They give conditions that must be satisfied at the optimum, but
they are not a method for finding the optimum. In fact, they define a two-point
boundary value problem, which is well known to be difficult to solve numerically.
This section gives a brief overview of the various methods that have been developed
to solve dynamic optimization problems.
There is an extensive literature on numerical methods for the solution of dy-
namic optimization problems, which fall into three classes. The dynamic programming
method was described in [18] and the approach was extended to include constraints
on the state and control variables in [91]. Indirect methods focus on obtaining a
solution to the classical necessary conditions for optimality (2.18–2.21) which take
the form of a two-point boundary value problem. There are many examples of such
methods, for example [24, 37, 41, 42, 80, 100, 111]. Finally, direct methods trans-
form the infinite-dimensional dynamic optimization problem into a finite dimensional
mathematical programming problem, typically a nonlinear program (NLP).
The dynamic programming method is based on Bellman's principle of optimality,
which can be used to derive the Hamilton-Jacobi-Bellman equation:
0 = \frac{\partial J}{\partial t} + \min_{u(t)}\left[L + \frac{\partial J}{\partial x} f(x, u, t)\right] \qquad (2.144)
which must hold at the optimum, assuming that the DAE is index-0 and may be
written as ẋ = f (x, u, t). This is a partial differential equation and in practice it is
very difficult to solve except in certain fortuitous cases [133].
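One of these fortuitous cases, included here only as a standard illustration (it is not an example from this thesis), is the scalar linear-quadratic problem \dot{x} = ax + bu with L = \frac{1}{2}(qx^2 + ru^2). Substituting the quadratic guess J(x, t) = \frac{1}{2}P(t)x^2 into (2.144) gives the minimizing control u = -(b/r)Px and collapses the partial differential equation to the scalar Riccati ordinary differential equation
\dot{P} = -2aP - q + \frac{b^2}{r}P^2
which is integrated backwards in time from the terminal condition implied by (2.145).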
The indirect methods are so termed because they attempt to find solutions to the
two-point boundary value problem (2.18–2.21), thus indirectly solving the dynamic
optimization problem (2.4–2.6). There are many variations on indirect methods, but
they are generally iterative methods that use an initial guess to find a solution to a
problem in which one or more of (2.18–2.21) is not satisfied. This solution is then
used to adjust the initial guess to attempt to make the solution of the next iteration
come closer to satisfying all of the necessary conditions.
One of the common variations on the indirect method is the steepest descent
algorithm (for example, [82]). In this method, the state equation (2.20) is integrated
forward using a guess for the control profile, and then the costate equation (2.18) is
integrated backward. Equation (2.19) is then used locally to find a steepest descent
direction for u at a discrete number of points, and globally as a termination criterion.
The main disadvantage of this method is the need to backward-integrate the costate
equations, which can be inefficient and/or unstable for both DAEs [142] and ODEs.
Also, it is not easy to set up the problem, since the costate equations need to be
generated, and convergence is slow for many problems.
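A minimal sketch of this forward-backward iteration is given below for an illustrative ODE problem, min \int_0^1 (x^2 + u^2)\,dt subject to \dot{x} = u, x(0) = 1, for which H = x^2 + u^2 + \lambda u, the costate equation is \dot{\lambda} = -2x with \lambda(1) = 0, and \partial H/\partial u = 2u + \lambda. Explicit Euler integration and a fixed step length are used purely for brevity; none of the numerical choices below are taken from this thesis.

import numpy as np

N, alpha = 200, 0.2
t = np.linspace(0.0, 1.0, N + 1)
dt = t[1] - t[0]
u = np.zeros(N + 1)                  # initial guess for the control profile

for it in range(200):
    # forward integration of the state equation xdot = u, x(0) = 1
    x = np.empty(N + 1); x[0] = 1.0
    for k in range(N):
        x[k + 1] = x[k] + dt*u[k]
    # backward integration of the costate equation lamdot = -2x, lam(1) = 0
    lam = np.empty(N + 1); lam[-1] = 0.0
    for k in range(N, 0, -1):
        lam[k - 1] = lam[k] + dt*2.0*x[k]
    grad = 2.0*u + lam               # dH/du along the current trajectory
    if np.max(np.abs(grad)) < 1e-4:  # termination criterion
        break
    u -= alpha*grad                  # steepest-descent update of the control

print(it, x[-1])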
There are two general strategies within the framework of the direct method. In
the sequential method, often called control parameterization, the control variables are
discretized over finite elements using polynomials [23, 105, 104, 115, 120, 128, 142, 143]
or in fact any suitable basis functions. The coefficients of the polynomials and the size
of the finite elements then become decision variables in a master nonlinear program
(NLP). Function evaluation is carried out by solution of an initial value problem
(IVP) of the original dynamic system, and gradients for a gradient-based search may
be evaluated by solving either the adjoint equations (e.g., [128]) or the sensitivity
equations (e.g., [142]). In the simultaneous strategy both the controls and the state
variables are discretized using polynomials on finite elements, and the coefficients and
element sizes become decision variables in a much larger NLP (e.g., see [12, 61, 85,
92, 106, 132, 137, 147]). Unlike control parameterization, the simultaneous method
does not require the solution of IVPs at every iteration of the NLP.
Direct methods have been demonstrated to work automatically and reliably for
very large problems. Within direct methods, there has been significant discussion in
the literature about the relative advantages of simultaneous and sequential strategies.
The main areas of contention are the ease of use, the computational cost in obtaining
a solution, and the ability to handle state variable path constraints.
The sequential strategy is easier to use on large problems than the simultaneous
strategy because it uses a numerical IVP solver to enforce (2.5), rather than including
it as a set of constraints in the NLP. Including these constraints in the NLP in the
simultaneous strategy causes the NLP to grow explosively with the size of the DAE,
the time horizon of interest, and the excitation of high-frequency responses. Solution
of such large NLPs requires special techniques and careful attention to determining
an initial guess for the optimization parameters that leads to convergence of the
NLP algorithm. On the other hand, solution of the IVP in the sequential strategy
can take advantage of the highly refined heuristics available in state-of-the-art DAE
integrators. In fact, it can be shown that if collocation is used for the discretization
in the simultaneous method, the problem is equivalent to performing a fully implicit
Runge-Kutta integration [142], which is not as efficient as the BDF method for DAEs
[21].
It is not clear from the literature discussions which strategy requires less compu-
tational cost. On the one hand, it is expensive to solve the extremely large NLPs that
result from the simultaneous method. On the other hand, an efficient search using
the sequential strategy requires gradients of the objective function, which can also
be very expensive to obtain. This issue is addressed in Chapter 4, where an efficient
method for calculating the sensitivities of the state variables is described, which can
then be used to calculate the gradients.
Until now, the simultaneous strategy has had the clear advantage in handling
constraints on the state variables. Such constraints may be handled by including them
directly as additional constraints in the NLP. However, the theoretical properties of
the discretization employed in the simultaneous strategy break down for nonlinear
DAEs which have index > 2 (at least locally; for example, during an activation
of a path inequality constraint) [90]. Attempts to handle state path constraints
in the sequential strategy have relied on including some measure of their violation
indirectly as point constraints in the NLP, rather than directly in the IVP subproblem
[84, 133, 142]. This method was used because it was previously not possible to solve
numerically the high-index DAEs which resulted from appending the path constraints
to the IVP problem. In fact, the use of a dynamic optimization formulation with such
point constraints has been proposed as a method for solving high-index DAEs [76],
although this method is extraordinarily costly.
This work focuses on the control parameterization method. This choice was based
on the fact that many problems in chemical engineering are modeled with large sys-
tems of equations, and the control parameterization method appears to be the most
easily applied and reliable method for these problems. The reasons for this include:
• Direct methods are more easily implemented than indirect methods because
there is no need to generate the additional costate equations. Generation of
the costate equations is a significant problem for large dynamic optimization
problems, and more than doubles the size of the DAE system that must be
solved.
we were unable to find any examples in the literature of the use of indirect meth-
ods to solve a problem with mx > 50. On the other hand, there are reported
examples of the direct method being used to solve problems with thousands of
state variables [33].
• The sequential direct method results in fairly small dense NLPs, but potentially
large sparse IVPs. Simultaneous direct methods do not require the solution of
an IVP, but require large sparse NLPs. Research into large-scale NLP solution
methods is currently a very active research area but it is still very difficult to
solve large NLPs from poor initial guesses. On the other hand, solution of large
sparse IVPs has become a standard numerical mathematical tool, and it is much
easier to provide good initial guesses for the small number of parameters in the
master NLP.
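The following sketch illustrates the sequential strategy in its simplest single-shooting form: a piecewise-constant control on ten finite elements, an IVP solve with SciPy for every function evaluation, and a small dense NLP over the control parameters. The scalar dynamics, cost, bounds, and all names are assumptions made for illustration, and gradients come from the optimizer's internal finite differences rather than the sensitivity-based gradients of Chapter 4.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t0, tf, nfe = 0.0, 1.0, 10                  # fixed horizon, 10 finite elements
t_grid = np.linspace(t0, tf, nfe + 1)

def simulate(p):
    # integrate the IVP element by element with piecewise-constant control u = p[k]
    x = np.array([1.0])
    for k in range(nfe):
        sol = solve_ivp(lambda t, x, uk=p[k]: [-x[0] + uk],
                        (t_grid[k], t_grid[k + 1]), x, rtol=1e-8, atol=1e-10)
        x = sol.y[:, -1]
    return x[0]

def objective(p):
    # J = x(tf)^2 + integral of u^2 dt (the quadrature is exact for piecewise-constant u)
    return simulate(p)**2 + np.sum(np.asarray(p)**2)*(tf - t0)/nfe

res = minimize(objective, np.zeros(nfe), method='SLSQP',
               bounds=[(-2.0, 2.0)]*nfe)    # bounds on the control parameters
print(res.fun, res.x)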
2.5 Optimality conditions for direct methods
u = u(p, t) (2.146)
where p ∈ R^{n_p} is a set of parameters. Hence, as shown in [63], the solution of (2.4–2.6)
found by direct methods will be suboptimal to that found by an analytic solution of the
necessary conditions (2.23–2.26).
The use of control parameterization turns the dynamic optimization problem into
a parameter optimization problem:
\min_{p,\,t_f}\ J = \psi\left(x(p, t_f), t_f\right) + \int_{t_0}^{t_f} L\left(x(p,t), u(p,t), t\right) dt \qquad (2.147)
The necessary optimality conditions for (2.147–2.149) may be derived using La-
grange multipliers [16]:
\left[\frac{\partial J}{\partial\dot{x}}\frac{\partial\dot{x}}{\partial p} + \frac{\partial J}{\partial x}\frac{\partial x}{\partial p} + \frac{\partial J}{\partial u}\frac{\partial u}{\partial p} + \lambda^{T}\left(\frac{\partial f}{\partial\dot{x}}\frac{\partial\dot{x}}{\partial p} + \frac{\partial f}{\partial x}\frac{\partial x}{\partial p} + \frac{\partial f}{\partial u}\frac{\partial u}{\partial p}\right)\right]_{\forall t\in[t_0,t_f]} = 0 \qquad (2.150)
Note that this problem is still infinite dimensional because (2.150–2.151) are enforced
for t ∈ [t0 , tf ].
The simultaneous direct method approximates (2.150–2.151) at discrete points in
the interval [t0 , tf ]. Doing this requires that the state variables as well as the control
variables are parameterized using:
x = x(\bar{p}, t) \qquad (2.153)
\dot{x} = \dot{x}(\bar{p}, t) \qquad (2.154)
These state variable approximations are then substituted into (2.150–2.151), and the
result evaluated at a set of discrete points along the solution trajectory.
The sequential direct method creates a finite dimensional problem from (2.150–
2.152) by transforming it into the following problem:
\frac{\partial J}{\partial\dot{x}}\frac{\partial\dot{x}}{\partial p} + \frac{\partial J}{\partial x}\frac{\partial x}{\partial p} + \frac{\partial J}{\partial u}\frac{\partial u}{\partial p} = 0 \qquad (2.155)
\frac{\partial f}{\partial\dot{x}}\frac{\partial\dot{x}}{\partial p} + \frac{\partial f}{\partial x}\frac{\partial x}{\partial p} + \frac{\partial f}{\partial u}\frac{\partial u}{\partial p} = 0 \quad \forall t\in[t_0, t_f] \qquad (2.156)
\left[\frac{\partial\phi}{\partial\dot{x}}\frac{\partial\dot{x}}{\partial p} + \frac{\partial\phi}{\partial x}\frac{\partial x}{\partial p}\right]_{t=t_0} = 0 \qquad (2.157)
[Figure (block diagram): the control variables u(p, t) are passed to an integration of the dynamic system f(\dot{x}, x, u, t) = 0, and the resulting state trajectories x(t) are used to evaluate the objective function and gradients.]
2.6 A general control parameterization problem
The control parameterization method approximates the control profiles using poly-
nomials over a set of NF E finite elements of length hi = ti − ti−1 , i = 1 . . . NF E .
For convenience in bounding the control profiles, the controls are parameterized us-
ing Lagrange polynomials. For control variable uj in finite element i the Lagrange
approximation is:
M +1
(M )
uij (τ (i) ) = uijk φk τ (i) ∀τ (i) ∈ [0, 1] (2.160)
k=1
i = 1 . . . NF E j = 1 . . . mu (2.161)
where:
⎧
⎪
⎪
⎪
⎪ 0 if M = 0
⎨
=
(M ) (i) M +1
φk τ τ − τl (2.162)
⎪
⎪ if M ≥ 0
⎪
⎩ l=1 τk − τl
⎪
l=k
t − ti−1
τ (i) (t) = (2.163)
ti − ti−1
Therefore, bounding the control parameters uijk is sufficient to bound the control
functions at the points τl . Bounding the profile at these points is sufficient to guar-
antee bounding of the entire control profile for M ≤ 1. It is possible to bound control
profiles with M > 1, but this requires the use of more complicated constraints that are
functions of the parameters. In practice, however, bounding higher order polynomials
can be done by ensuring that NF E is large.
Theoretically, the choice of the set of points τl does not affect the solution because
all polynomial approximations of the same order are equivalent. However, a poor
choice for τl can result in an ill-conditioned approximation of the control function.
Ideally, τ_1 = 0 and, if M ≥ 1, τ_{M+1} = 1 so that the control may be more easily
bounded. The other τl may be set at collocation points or at equidistant intervals.
The ‘best-conditioned’ approximation is given by selecting collocation points, but in
practice equidistant points are easier to use and do not seem to give a noticeable
decrease in the performance of the algorithm.
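The sketch below evaluates the parameterization (2.160)–(2.163) over a single finite element with M = 2 and equidistant points, and illustrates the property discussed above: the profile passes through the parameter values u_{ijk} at the points \tau_l. The element times and parameter values are arbitrary illustrative numbers.

import numpy as np

def lagrange_basis(tau_pts, k, tau):
    # phi_k^{(M)}(tau) as in (2.162): product over l != k of (tau - tau_l)/(tau_k - tau_l)
    phi = np.ones_like(tau)
    for l, tl in enumerate(tau_pts):
        if l != k:
            phi *= (tau - tl)/(tau_pts[k] - tl)
    return phi

t_im1, t_i, M = 0.0, 0.5, 2                        # one finite element [t_{i-1}, t_i]
tau_pts = np.linspace(0.0, 1.0, M + 1)             # tau_1 = 0, ..., tau_{M+1} = 1
u_ijk = np.array([1.0, 0.2, -0.5])                 # control parameters in this element

t = np.linspace(t_im1, t_i, 101)
tau = (t - t_im1)/(t_i - t_im1)                    # (2.163)
u = sum(u_ijk[k]*lagrange_basis(tau_pts, k, tau) for k in range(M + 1))   # (2.160)
print(u[0], u[50], u[-1])   # u_ijk[0] at tau=0, u_ijk[1] at tau=0.5, u_ijk[2] at tau=1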
The goal here is not to present the most general formulation of the control param-
eterized problem, but rather to show a formulation that is general enough for the
purposes of this thesis. Note that this formulation does not include path constraints
on state variables, which are discussed in Chapter 6.
subject to:
Initial time point constraints:
vL ≤ v ≤ vU v ∈ Rmv (2.170)
tNF E = tf (2.172)
Transferring the values of the state variables from one finite element to the next re-
quires junction conditions. On the boundaries between finite elements, these junction
conditions have the form:
\Phi_i\big(\dot{x}(u_{ijk}, t_{j\neq i}, t_i^-),\, x(u_{ijk}, t_{j\neq i}, t_i^-),\, u(u_{ijk}, t_{j\neq i}, t_i^-),\, t_i^-,\ \dot{x}(u_{ijk}, t_{j\neq i}, t_i^+),\, x(u_{ijk}, t_{j\neq i}, t_i^+),\, u(u_{ijk}, t_{j\neq i}, t_i^+),\, t_i^+\big) = 0 \qquad (2.174)
x_n^+ = x_n^- \quad \forall n\in\Gamma \qquad (2.175)
where Γ is an index set of the state variables which have time derivatives that explicitly
appear in the DAE. However, for high-index DAEs or DAEs where the index fluctuates
between boundaries equation (2.175) is not valid, and methods such as those described
in [141] must be used to determine appropriate forms of the transfer conditions.
The control parameterization formulation presented above is simpler than the
multi-stage formulation given in [142]. The multi-stage formulation allows the time
domain to be divided into subdomains which have end times that may or may not
coincide with finite element boundaries. The functional form of the DAE is permitted
to change from one stage to the next. However, it is worth pointing out that the multi-
stage formulation requires the stage sequence to be fixed, rather than be defined by
implicit state events as in [15, 110]. The significance of this limitation is further
discussed in Chapter 7.
racy and the high computational efficiency that may now be achieved when solving
the combined DAE and sensitivity system [49, 94, 123, Chapter 4].
The sensitivity equations associated with (2.173) are obtained by differentiating
(2.173) with respect to the optimization parameters p:
\frac{\partial f}{\partial\dot{x}}\frac{\partial\dot{x}}{\partial p} + \frac{\partial f}{\partial x}\frac{\partial x}{\partial p} = -\frac{\partial f}{\partial u}\frac{\partial u(p,t)}{\partial p} - \frac{\partial f}{\partial p}, \quad \forall t\in[t_0, t_f] \qquad (2.176)
\frac{\partial\phi}{\partial\dot{x}}\frac{\partial\dot{x}}{\partial p} + \frac{\partial\phi}{\partial x}\frac{\partial x}{\partial p} = -\frac{\partial\phi}{\partial u}\frac{\partial u(p,t_0)}{\partial p} - \frac{\partial\phi}{\partial p}, \quad t = t_0 \qquad (2.177)
There are also junction conditions for the sensitivity equation that are valid at the
same times ti as the junction conditions for the DAE, which are discussed in Chapter 3.
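For the special case of an explicit ODE (\partial f/\partial\dot{x} = I), (2.176) reduces to \dot{s} = (\partial f/\partial x)s + \partial f/\partial p for the sensitivity s = \partial x/\partial p, and the combined state and sensitivity system can simply be appended and integrated together. The sketch below does this for the scalar example \dot{x} = -px, x(0) = 1 (illustrative only; the DAE and staggered-corrector machinery of Chapter 4 is not represented) and compares the result with the analytical sensitivity \partial x/\partial p = -t e^{-pt}.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, p):
    # state equation xdot = -p*x and sensitivity equation sdot = -p*s - x
    x, s = z
    return [-p*x, -p*s - x]

p = 0.7
sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], args=(p,), rtol=1e-8, atol=1e-10)
t_end = sol.t[-1]
x_end, s_end = sol.y[:, -1]
print(s_end, -t_end*np.exp(-p*t_end))   # computed vs. analytical sensitivity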
Solving (2.176–2.177) requires the partial derivatives of the control functions with
respect to the parameters, which are [142]:
\frac{\partial u_{ij}}{\partial u_{i'j'k}} = \phi_k^{(M)}\left(\tau^{(i)}\right)\delta_{ii'}\,\delta_{jj'} \qquad k = 1\ldots M+1 \qquad (2.178)
\frac{\partial u_{ij}}{\partial t_{i'}} = \begin{cases} \displaystyle -\sum_{k=1}^{M+1} u_{ijk}\frac{d\phi_k^{(M)}}{d\tau}\,\frac{t_i - t}{(t_i - t_{i-1})^2} & \text{for } i' = i-1,\ i > 1 \\[2ex] \displaystyle -\sum_{k=1}^{M+1} u_{ijk}\frac{d\phi_k^{(M)}}{d\tau}\,\frac{t - t_{i-1}}{(t_i - t_{i-1})^2} & \text{for } i' = i \\[2ex] 0 & \text{otherwise} \end{cases} \qquad (2.179)
2.7 ABACUSS Dynamic Optimization Input Language
State variable path-constrained optimization has been implemented within the ABA-
CUSS1 large-scale equation-oriented modeling system. The input language in ABA-
CUSS has been extended to permit the representation of large-scale path-constrained
dynamic optimization systems in a compact manner.
Examples of the input language are given in Figure 2-2 and Appendix A. Variable
types are declared in the DECLARE block. The equations describing the DAE are
given in the MODEL block. The OPTIMIZATION block is used to define a dynamic
optimization using the MODEL. Note that it is possible to move from a simulation
to an optimization in a seamless manner.
The sections in the optimization block are:
PARAMETER Defines the parameters that are used in the dynamic optimization
that are not defined in the model.
VARIABLE Defines variables that are used in the dynamic optimization that are
not defined in the models.
SET Sets values for any parameters defined in the UNIT or PARAMETER section
CONTROL Indicates that the listed variables are control variables. The first func-
tion in the triple notation defines the initial guess, the second and third define
the lower and upper bounds for the control, respectively.
TIME INVARIANT Indicates that the listed variables are time-invariant optimization
parameters. The first number in the triple notation defines the initial guess, and the
second and third define the lower and upper bounds for the variable, respectively.
¹ ABACUSS (Advanced Batch And Continuous Unsteady-State Simulator) process modeling software, a derivative work of gPROMS software, © 1992 by Imperial College of Science, Technology, and Medicine.
DECLARE
  TYPE
    NoType = 0.0 : -1E9 : 1E9
END

MODEL brachistochrone

PARAMETER
  G AS REAL

VARIABLE
  X AS NoType
  Y AS NoType
  V AS NoType
  Theta AS NoType
  Constr AS NoType

EQUATION
  $X = V*COS(Theta)
  $Y = V*SIN(Theta)
  $V = G*SIN(Theta)
  Constr = -0.40*X-0.30

END # brachistochrone

OPTIMIZATION Brach

PARAMETER
  B AS REAL
  A AS REAL

UNIT
  kraft AS brachistochrone

VARIABLE
  Final_Time AS NoType

OBJECTIVE
  MINIMIZE Final_Time

SET
  WITHIN kraft DO
    G := -1.0
  END
  A := -0.40
  B := 0.30

INEQUALITY
  WITHIN kraft DO
    Y>=A*X-B
  END

CONTROL
  WITHIN kraft DO
    Theta := -1.6+1.6/0.7*TIME: -1.6: 0.0
  END

TIME_INVARIANT
  Final_Time := 0.68 : 0.0 : 2.0

INITIAL
  WITHIN kraft DO
    X=0.0
    Y=0.0
    V=0.0
  END

FINAL
  WITHIN kraft DO
    X=1.1
  END

SCHEDULE
  CONTINUE FOR Final_Time

END # Brach
Figure 2-2: Example of ABACUSS input file for constrained dynamic optimization
INITIAL Gives additional equations that define the initial condition for the prob-
lem, possibly as a function of the optimization parameters.
FINAL Defines point constraints that must be obeyed at the final time.
Chapter 3
This chapter is a summary of a paper [55] of the same title that was coauthored
with Santos Galán. The results of this paper are included here because the ability
to handle sensitivity functions for hybrid discrete/continuous systems is necessary
for the development of the methods to handle inequality path-constrained dynamic
optimization problems discussed in Chapter 7.
Parametric sensitivity analysis is the study of the influence of variations in the
parameters of a model on its solution. It plays an important role in design and mod-
eling, is used for parameter estimation and optimization, and is extensively applied
in the synthesis and analysis of control systems.
The variations of the parameters can be differential or finite. Here only the former
case is considered, which approximates linearly the variation of the solution. Two
kinds of sensitivity analysis can be distinguished, depending on whether the variation
is finite-dimensional (lumped) or it is a function (distributed). In the former case,
which is the only one considered in this chapter, the sensitivities are ordinary partial
derivatives. In the second, as in structural systems, functional derivatives are needed.
In process design, sensitivity information provides an elucidation of the influence
of design changes, without requiring trial and error. In modeling, sensitivity analysis
reveals the relative importance of every parameter, giving arguments for simplify-
ing the model or guiding new experiments. Sensitivities are also used in parameter
estimation for error analysis, and in gradient computation for dynamic optimization.
There has been great interest in the application of sensitivity information in con-
trol system design. Sensitivity analysis is necessary because the parameters are sub-
ject to inaccuracy in data, models and implementation, and because they can deviate
with time. Furthermore, sensitivity analysis is essential for adaptive systems. Early
reviews in this area are given in [83, 136].
Here, the term “hybrid” will refer to the combined existence and interaction of
continuous and discrete phenomena. The continuous part is usually modeled by
differential-algebraic equations (DAEs) and the discrete behavior by finite automata.
A precise definition of the systems considered in this chapter follows in Section 3.1.
Hybrid systems pose a problem for the calculation of sensitivities because the sensi-
tivities are not defined in general when changing from one continuous subsystem to
another.
evidenced by the imposition of an obviously incorrect zeroing of components of
the gradient of the discontinuity function at the switching times.
In a concise paper, Rozenvasser [124] derived the sensitivity functions of discontin-
uous systems modeled by a given sequence of explicit ODE vector fields with explicit
or implicit switching times. Compared to the above works, it seems that the deriva-
tion is a relatively straightforward calculus exercise. This is the most general result
we know for sensitivities of hybrid systems and ironically, until now, has been en-
tirely neglected in the subsequent literature. In fact, we derived Rozenvasser’s results
independently, only subsequently stumbling upon [124] by chance.
This chapter extends Rozenvasser’s results in several ways. First, the discrete
aspects of the system model are significantly generalized in line with modern notions
of hybrid systems. Second, the results are extended to include DAE embedded hybrid
systems. Existence and uniqueness theorems are also presented for the sensitivity
functions of hybrid systems. These theorems shed light on the issue of sequencing
of state events in hybrid systems. Numerical results are given for the calculation of
sensitivity functions for hybrid systems.
3.1 Mathematical Representation of Hybrid Systems
A formalism derived from [6] and [15] is used to model a broad class of hybrid discrete/continuous phenomena. Consider a system described by a state space $S = \bigcup_{k=1}^{n_k} S_k$, where each mode S_k is characterized by:

1. A set of variables {ẋ^{(k)}, x^{(k)}, y^{(k)}, u^{(k)}, p, t}, where x^{(k)}(p, t) ∈ R^{n_x^{(k)}} are the differential state variables, y^{(k)}(p, t) ∈ R^{n_y^{(k)}} the algebraic state variables, and u^{(k)}(p, t) ∈ R^{n_u^{(k)}} the controls. The time invariant parameters p ∈ R^{n_p} and time t are the independent variables, and the controls u are explicit functions of both the parameters and time.

2. A set of equations f^{(k)}(ẋ^{(k)}, x^{(k)}, y^{(k)}, u^{(k)}, p, t) = 0, usually a coupled system of differential and algebraic equations, f^{(k)}: R^{n_x^{(k)}} × R^{n_x^{(k)}} × R^{n_y^{(k)}} × R^{n_u^{(k)}} × R^{n_p} × R → R^{n_x^{(k)}+n_y^{(k)}}. In the mode S_k the specification of the parameters p (and, consequently, the controls) coupled with a consistent initial condition T_k^{(k)}(ẋ^{(k)}, x^{(k)}, y^{(k)}, u^{(k)}, p, t) = 0 at t = t_0^{(k)} determines the evolution of the system in [t_0^{(k)}, t_f^{(k)}).

3. A (possibly empty) set of transitions to other modes. The set of modes S_j to which a transition from mode S_k is possible is J^{(k)}. These transitions are described by:

   (a) Transition conditions L_j^{(k)}(ẋ^{(k)}, x^{(k)}, y^{(k)}, u^{(k)}, p, t), j ∈ J^{(k)}, determining the transition times at which switching from mode k to mode j occurs. The transition conditions are represented by logical propositions that trigger the switching when they become true. They are described in Section 3.3. Note that discontinuities in the controls are included here.

   (b) Transition functions T_j^{(k)}(ẋ^{(k)}, x^{(k)}, y^{(k)}, u^{(k)}, ẋ^{(k+1)}, x^{(k+1)}, y^{(k+1)}, u^{(k+1)}, p, t), which are associated with the transition conditions and relate the variables in the current mode S_k and the variables in the new mode S_j at the transition time t_f^{(k)}. A special case of the transition functions is the set of initial conditions for the initial mode S_1. These initial conditions will be designated by T_0^{(1)}.
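To make the formalism concrete, the following minimal Python sketch shows one possible data-structure view of a mode and its transitions. It is purely illustrative: the names Mode, Transition, and HybridSystem are hypothetical and do not correspond to any ABACUSS construct; each residual or event is represented simply as a callable of (ẋ, x, y, u, p, t).

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A residual or event function takes (xdot, x, y, u, p, t) and returns a vector
# (or, for a condition, a boolean).  This mirrors f^(k), L_j^(k), and T_j^(k).
Residual = Callable[..., list]

@dataclass
class Transition:
    condition: Callable[..., bool]   # L_j^(k): logical proposition that triggers the switch
    transfer: Residual               # T_j^(k): relates old-mode and new-mode variables
    target: int                      # index of the mode switched to

@dataclass
class Mode:
    f: Residual                      # f^(k)(xdot, x, y, u, p, t) = 0
    transitions: List[Transition] = field(default_factory=list)

@dataclass
class HybridSystem:
    modes: List[Mode]
    initial_mode: int
    T0: Residual                     # T_0^(1): consistent initial conditions for mode S_1
```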
3.2 Sensitivities
The partial derivatives of the variables with respect to the parameters are known as
sensitivity functions. Before discussing the sensitivities, the solution of the hybrid
system in isolation must be examined in more detail.
everywhere. This condition holds for all index-0 systems (ODEs) and most index-1
systems, and implies that n_x^{(1)} side conditions T_0^{(1)} are required to define uniquely a
consistent initial condition [15]. Chapter 5 details methods for deriving an equivalent
index-1 DAE from a high-index DAE that work for a broad class of high-index DAEs.

Equation (3.3) is usual at initialization, but in general the initial time can be
an implicit or explicit function of other parameters. Actually, t_0^{(1)} is a parameter.
The more general case is considered below when dealing with the sensitivities at
transitions.

The number of equations, n_x^{(1)} + n_x^{(1)} + n_y^{(1)} + 1, and the number of variables, n_x^{(1)} + n_x^{(1)} + n_y^{(1)} + n_p + 2 (u are explicit functions of p and t), admit in general n_p + 1 degrees of freedom for specification of the independent variables p and t_0^{(1)}.
A sufficient (implicit function theorem) and practical (numerical solution by a Newton method) condition for the solvability of the system (3.1–3.3) is that the matrix

$$\begin{bmatrix} \frac{\partial f^{(1)}}{\partial \dot{x}^{(1)}} & \frac{\partial f^{(1)}}{\partial x^{(1)}} & \frac{\partial f^{(1)}}{\partial y^{(1)}} & \frac{\partial f^{(1)}}{\partial t} \\ \frac{\partial T_0^{(1)}}{\partial \dot{x}^{(1)}} & \frac{\partial T_0^{(1)}}{\partial x^{(1)}} & \frac{\partial T_0^{(1)}}{\partial y^{(1)}} & \frac{\partial T_0^{(1)}}{\partial t} \\ 0 & 0 & 0 & 1 \end{bmatrix} \iff \begin{bmatrix} \frac{\partial f^{(1)}}{\partial \dot{x}^{(1)}} & \frac{\partial f^{(1)}}{\partial x^{(1)}} & \frac{\partial f^{(1)}}{\partial y^{(1)}} \\ \frac{\partial T_0^{(1)}}{\partial \dot{x}^{(1)}} & \frac{\partial T_0^{(1)}}{\partial x^{(1)}} & \frac{\partial T_0^{(1)}}{\partial y^{(1)}} \end{bmatrix}$$

is nonsingular.
and let
$$x^{(1)}(p, t_0^{(1)}, t), \qquad y^{(1)}(p, t_0^{(1)}, t) \tag{3.6}$$
be the solutions of f^{(1)} that satisfy (3.1), (3.2) and (3.3). These solutions are functions
of t that pass through the point (3.5). The following relations can be derived:
$$\dot{x}^{(1)}(p, t_0^{(1)}, t) = \frac{\partial x^{(1)}(p, t_0^{(1)}, t)}{\partial t} \tag{3.7}$$
$$x^{(1)}(p, t_0^{(1)}, t_0^{(1)}) = x_0^{(1)}(p, t_0^{(1)}) \tag{3.8}$$
$$y^{(1)}(p, t_0^{(1)}, t_0^{(1)}) = y_0^{(1)}(p, t_0^{(1)}) \tag{3.9}$$
$$\dot{x}^{(1)}(p, t_0^{(1)}, t_0^{(1)}) = \left.\frac{\partial x^{(1)}(p, t_0^{(1)}, t)}{\partial t}\right|_{t=t_0^{(1)}} = \dot{x}_0^{(1)}(p, t_0^{(1)}) \tag{3.10}$$
3.2.2 Initial sensitivities
Now consider sensitivity functions of the above system. Differentiating the system
used for consistent initialization with respect to the independent variables p and t_0^{(1)}
and applying the chain rule yields the linear system (3.11) for the sensitivities of the
consistent initial values; differentiation of (3.3) gives
$$\frac{\partial t}{\partial p} = 0, \qquad \frac{\partial t}{\partial t_0^{(1)}} = 1 \tag{3.12}$$
Since ∂f^{(1)}/∂t_0^{(1)} = 0 and ∂T_0^{(1)}/∂t_0^{(1)} = 0, the system is:
$$\begin{bmatrix} \frac{\partial f^{(1)}}{\partial \dot{x}^{(1)}} & \frac{\partial f^{(1)}}{\partial x^{(1)}} & \frac{\partial f^{(1)}}{\partial y^{(1)}} \\ \frac{\partial T_0^{(1)}}{\partial \dot{x}^{(1)}} & \frac{\partial T_0^{(1)}}{\partial x^{(1)}} & \frac{\partial T_0^{(1)}}{\partial y^{(1)}} \end{bmatrix} \begin{bmatrix} \frac{\partial \dot{x}_0^{(1)}}{\partial p} & \frac{\partial \dot{x}_0^{(1)}}{\partial t_0^{(1)}} \\ \frac{\partial x_0^{(1)}}{\partial p} & \frac{\partial x_0^{(1)}}{\partial t_0^{(1)}} \\ \frac{\partial y_0^{(1)}}{\partial p} & \frac{\partial y_0^{(1)}}{\partial t_0^{(1)}} \end{bmatrix} = - \begin{bmatrix} \frac{\partial f^{(1)}}{\partial u^{(1)}}\frac{\partial u^{(1)}}{\partial p} + \frac{\partial f^{(1)}}{\partial p} & \frac{\partial f^{(1)}}{\partial u^{(1)}}\frac{\partial u^{(1)}}{\partial t} + \frac{\partial f^{(1)}}{\partial t} \\ \frac{\partial T_0^{(1)}}{\partial u^{(1)}}\frac{\partial u^{(1)}}{\partial p} + \frac{\partial T_0^{(1)}}{\partial p} & \frac{\partial T_0^{(1)}}{\partial u^{(1)}}\frac{\partial u^{(1)}}{\partial t} + \frac{\partial T_0^{(1)}}{\partial t} \end{bmatrix} \tag{3.13}$$
Note that the sensitivities of the solution of the consistent initialization problem
(3.5) have been defined. The initial sensitivities of the dynamic problem must also
be determined, which may or may not be the same. For the parameters p the initial
sensitivities are:
$$\frac{\partial x_0^{(1)}(p, t_0^{(1)})}{\partial p} = \frac{\partial x^{(1)}(p, t_0^{(1)}, t_0^{(1)})}{\partial p} = \left.\frac{\partial x^{(1)}(p, t_0^{(1)}, t)}{\partial p}\right|_{t=t_0^{(1)}} \tag{3.14}$$
$$\frac{\partial y_0^{(1)}(p, t_0^{(1)})}{\partial p} = \frac{\partial y^{(1)}(p, t_0^{(1)}, t_0^{(1)})}{\partial p} = \left.\frac{\partial y^{(1)}(p, t_0^{(1)}, t)}{\partial p}\right|_{t=t_0^{(1)}} \tag{3.15}$$
$$\frac{\partial \dot{x}_0^{(1)}(p, t_0^{(1)})}{\partial p} = \frac{\partial \dot{x}^{(1)}(p, t_0^{(1)}, t_0^{(1)})}{\partial p} = \left.\frac{\partial \dot{x}^{(1)}(p, t_0^{(1)}, t)}{\partial p}\right|_{t=t_0^{(1)}} \tag{3.16}$$
Therefore:
$$\left.\frac{\partial x^{(1)}(p, t_0^{(1)}, t)}{\partial t_0^{(1)}}\right|_{t=t_0^{(1)}} = \frac{\partial x_0^{(1)}(p, t_0^{(1)})}{\partial t_0^{(1)}} - \dot{x}^{(1)}(p, t_0^{(1)}, t_0^{(1)}) \tag{3.18}$$
Similarly,
$$\left.\frac{\partial y^{(1)}(p, t_0^{(1)}, t)}{\partial t_0^{(1)}}\right|_{t=t_0^{(1)}} = \frac{\partial y_0^{(1)}(p, t_0^{(1)})}{\partial t_0^{(1)}} - \dot{y}^{(1)}(p, t_0^{(1)}, t_0^{(1)}) \tag{3.19}$$
$$\left.\frac{\partial \dot{x}^{(1)}(p, t_0^{(1)}, t)}{\partial t_0^{(1)}}\right|_{t=t_0^{(1)}} = \frac{\partial \dot{x}_0^{(1)}(p, t_0^{(1)})}{\partial t_0^{(1)}} - \ddot{x}^{(1)}(p, t_0^{(1)}, t_0^{(1)}) \tag{3.20}$$
Hence, ẏ (1) and ẍ(1) must be known in order to determine the initial sensitivities.
The next section shows how ẏ and ẍ can be calculated at any time.
3.2.3 Sensitivity trajectories
From a practical point of view, the trajectory of the variables for t ∈ (t_0^{(k)}, t_f^{(k)}) can
be computed by numerical integration of the system f^{(k)} = 0 from the initial values.
Hereafter p will include all the parameters (including t_0^{(1)}).

Differentiating the system with respect to the independent variables p and t gives:
$$\begin{bmatrix} \frac{\partial f^{(k)}}{\partial \dot{x}^{(k)}} & \frac{\partial f^{(k)}}{\partial x^{(k)}} & \frac{\partial f^{(k)}}{\partial y^{(k)}} \end{bmatrix} \begin{bmatrix} \frac{\partial \dot{x}^{(k)}}{\partial p} & \frac{\partial \dot{x}^{(k)}}{\partial t} \\ \frac{\partial x^{(k)}}{\partial p} & \frac{\partial x^{(k)}}{\partial t} \\ \frac{\partial y^{(k)}}{\partial p} & \frac{\partial y^{(k)}}{\partial t} \end{bmatrix} = -\begin{bmatrix} \frac{\partial f^{(k)}}{\partial u^{(k)}}\frac{\partial u^{(k)}}{\partial p} + \frac{\partial f^{(k)}}{\partial p} & \frac{\partial f^{(k)}}{\partial u^{(k)}}\frac{\partial u^{(k)}}{\partial t} + \frac{\partial f^{(k)}}{\partial t} \end{bmatrix} \tag{3.21}$$
Define the sensitivity functions:
$$s_x^{(k)} = \frac{\partial x^{(k)}}{\partial p} \tag{3.22}$$
$$\dot{s}_x^{(k)} = \frac{\partial s_x^{(k)}}{\partial t} = \frac{\partial}{\partial t}\left(\frac{\partial x^{(k)}}{\partial p}\right) = \frac{\partial}{\partial p}\left(\frac{\partial x^{(k)}}{\partial t}\right) = \frac{\partial \dot{x}^{(k)}}{\partial p} \tag{3.23}$$
$$s_y^{(k)} = \frac{\partial y^{(k)}}{\partial p} \tag{3.24}$$
The derivative with respect to time is a linear but not differential system (i.e., ẋ^{(k)} is
known) that allows us to obtain the values of ẏ^{(k)} and ẍ^{(k)} required in the previous
section and below:
$$\begin{bmatrix} \frac{\partial f^{(k)}}{\partial \dot{x}^{(k)}} & \frac{\partial f^{(k)}}{\partial y^{(k)}} \end{bmatrix} \begin{bmatrix} \ddot{x}^{(k)} \\ \dot{y}^{(k)} \end{bmatrix} = -\left(\frac{\partial f^{(k)}}{\partial x^{(k)}}\dot{x}^{(k)} + \frac{\partial f^{(k)}}{\partial u^{(k)}}\frac{\partial u^{(k)}}{\partial t} + \frac{\partial f^{(k)}}{\partial t}\right) \tag{3.26}$$
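As a small illustration of how (3.26) can be used in practice, the sketch below (a generic NumPy illustration assuming dense Jacobian blocks supplied by the caller, not code from any particular integrator) assembles and solves the linear system for ẍ^{(k)} and ẏ^{(k)}.

```python
import numpy as np

def second_derivatives(df_dxdot, df_dy, df_dx, df_du, du_dt, df_dt, xdot):
    """Solve (3.26): [df/dxdot  df/dy] [xddot; ydot] = -(df/dx*xdot + df/du*du/dt + df/dt).

    All arguments are dense NumPy arrays evaluated at the current point;
    df_dxdot has shape (nx+ny, nx) and df_dy has shape (nx+ny, ny).
    """
    nx = df_dxdot.shape[1]
    A = np.hstack([df_dxdot, df_dy])                 # square (nx+ny) x (nx+ny) matrix
    rhs = -(df_dx @ xdot + df_du @ du_dt + df_dt)
    sol = np.linalg.solve(A, rhs)
    return sol[:nx], sol[nx:]                        # xddot, ydot
```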
[Figure 3-1: Rectifier circuit with diodes D1 and D2, voltages V1–V3, and currents I1–I3.]
3.3 Transitions
The transition conditions L_j^{(k)}(ẋ^{(k)}, x^{(k)}, y^{(k)}, u^{(k)}, p, t), j ∈ J^{(k)}, are formed by logical
propositions that contain logical operators (e.g., NOT, AND, OR) connecting atomic
propositions (i.e., relational expressions) composed of valid real expressions and the
relational operators {>, <, ≤, ≥}. For example, in the rectifier circuit of Figure 3-1
[31], the condition for both diodes D1 and D2 to be conductive requires a logical
proposition combining several such relational expressions to be true. Each relational
expression has an associated discontinuity function
$$g_{ji}^{(k)}(\dot{x}^{(k)}, x^{(k)}, y^{(k)}, u^{(k)}, p, t), \qquad i = 1, \ldots, n_j^{(k)},$$
formed by the difference between the two real expressions. For example, in the rectifier
system:
$$g_{2,4}^{(1)} = v_2 - v_3 \tag{3.28}$$
Each relational expression changes its value whenever its corresponding discontinuity
function crosses zero. The transition conditions neutralize the only degree of freedom
(time), determining t_f^{(k)}. The set L^{(k)} of all transition conditions for the mode S_k
defines the boundary of S_k that, when intercepted by the trajectory, triggers the
switching to a new mode. Notice that the system can be in the same mode after the
application of a transition function.
Several issues arise in connection with the boundary created by the transition
conditions that are beyond the scope of this chapter. Some of them are discussed
in [69]. Here it is assumed that the system is smooth in a neighborhood of the
transition time and that only one relational expression activates at that moment.
Hereafter, g_j^{(k)} will be used to designate the discontinuity function that actually
determines the transition from mode S_k to S_j.
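The following sketch illustrates the basic idea of locating such a transition numerically (an illustrative bisection refinement over a dense-output interpolant; production integrators such as ABACUSS use their own event location machinery): monitor g_j^{(k)} at mesh points and, once a sign change is detected, refine the crossing time.

```python
def locate_event(g, x_of_t, t_lo, t_hi, tol=1e-10):
    """Find the time in [t_lo, t_hi] at which the discontinuity function g(x(t), t)
    crosses zero, given that g changes sign over the interval.
    x_of_t is a dense-output interpolant of the state trajectory."""
    g_lo = g(x_of_t(t_lo), t_lo)
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        g_mid = g(x_of_t(t_mid), t_mid)
        if g_lo * g_mid <= 0.0:
            t_hi = t_mid                 # the sign change lies in the left half
        else:
            t_lo, g_lo = t_mid, g_mid    # keep searching in the right half
    return 0.5 * (t_lo + t_hi)
```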
The transfer to the new mode k + 1 is described by the transition function T_{k+1}^{(k)}.
At the event, the system formed by (3.29)–(3.32) must be satisfied.
This system may have multiple solutions, but for simplicity it is assumed that criteria
are given that define a unique solution (for example, limiting the domain of the
transition function).
The equations (3.29) represent the state trajectory resulting from the integration
of f (k) in mode k. In practice they are calculated numerically. The system naturally
decomposes into two structural blocks that can be solved sequentially. The first
two equations determine t = t_f^{(k)} = t_0^{(k+1)} (i.e., time is a dependent variable) and
the corresponding values of ẋ^{(k)}, x^{(k)}, y^{(k)}, and u^{(k)}. The last two equations allow
calculation of the initial values of ẋ^{(k+1)}, x^{(k+1)}, y^{(k+1)}, and u^{(k+1)} in the new mode.
Again, a sufficient and practical condition for solving the system formed by (3.29)
and (3.30) is the nonsingularity of the Jacobian:
$$\begin{bmatrix} 1 & 0 & 0 & -\frac{\partial \dot{x}^{(k)}}{\partial t} \\ 0 & 1 & 0 & -\frac{\partial x^{(k)}}{\partial t} \\ 0 & 0 & 1 & -\frac{\partial y^{(k)}}{\partial t} \\ \frac{\partial g_{k+1}^{(k)}}{\partial \dot{x}^{(k)}} & \frac{\partial g_{k+1}^{(k)}}{\partial x^{(k)}} & \frac{\partial g_{k+1}^{(k)}}{\partial y^{(k)}} & \frac{\partial g_{k+1}^{(k)}}{\partial u^{(k)}}\frac{\partial u^{(k)}}{\partial t} + \frac{\partial g_{k+1}^{(k)}}{\partial t} \end{bmatrix} \tag{3.33}$$
The derivatives must exist and the trajectory should not be tangent to g_{k+1}^{(k)} in the
neighborhood of t_f^{(k)}. If the trajectory is tangent to g_{k+1}^{(k)} in the neighborhood of t_f^{(k)},
small variations of the parameters may not lead to a unique solution for the time of
the transition. As shown later in this chapter, the points where (3.33) is not satisfied
play an important role for hybrid systems.
The initialization of the new mode is a problem similar to the one described in Section 3.2.1. The same assumptions are made for the transition functions T, providing
a condition for well-posed transitions. Again, a nonsingularity condition (3.34) applies, this time on the Jacobian of the reinitialization system with respect to the new-mode variables. Using the same notation applied above, the solution of the reinitialization is expressed as (3.35).
3.3.3 Sensitivity transfer at transitions
The discontinuity system is differentiated for evaluating the sensitivities. The only
degrees of freedom are the time-invariant parameters. The structural decomposition
is used again to solve the system in two steps.
First, (3.29) and (3.30) are differentiated. Differentiation of (3.29) indicates that
the sensitivities are the ones calculated in the mode k. Differentiation of (3.30) yields
a system, (3.36), that can be solved for the switching time sensitivities. Note that in
the case of a controlled transition with the logical condition t ≥ t_0^{(k+1)}, the solution
is similar to that obtained in the initialization (3.12).
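Although the general system (3.36) is not reproduced here, its content is easy to see in a simplified special case (a sketch under the assumptions of an explicit ODE mode and a single active discontinuity function g_{k+1}^{(k)}(x^{(k)}, p, t); this is not the general DAE form). Differentiating g = 0 at the switching time gives

$$\frac{\partial g_{k+1}^{(k)}}{\partial x^{(k)}}\left(s_x^{(k)} + \dot{x}^{(k)}\frac{dt_f^{(k)}}{dp}\right) + \frac{\partial g_{k+1}^{(k)}}{\partial t}\frac{dt_f^{(k)}}{dp} + \frac{\partial g_{k+1}^{(k)}}{\partial p} = 0 \;\;\Longrightarrow\;\; \frac{dt_f^{(k)}}{dp} = -\frac{\frac{\partial g_{k+1}^{(k)}}{\partial x^{(k)}}s_x^{(k)} + \frac{\partial g_{k+1}^{(k)}}{\partial p}}{\frac{\partial g_{k+1}^{(k)}}{\partial x^{(k)}}\dot{x}^{(k)} + \frac{\partial g_{k+1}^{(k)}}{\partial t}}.$$

The denominator is precisely the transversality quantity whose vanishing makes (3.33) singular, which is why tangential crossings destroy the switching time sensitivities.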
Second, the system formed by (3.31) and (3.32) is differentiated. Applying the chain rule:
$$\begin{bmatrix} \frac{\partial T_{k+1}^{(k)}}{\partial \dot{x}^{(k)}} & 0 \\ \frac{\partial T_{k+1}^{(k)}}{\partial x^{(k)}} & 0 \\ \frac{\partial T_{k+1}^{(k)}}{\partial y^{(k)}} & 0 \\ \frac{\partial T_{k+1}^{(k)}}{\partial u^{(k)}} & 0 \\ \frac{\partial T_{k+1}^{(k)}}{\partial \dot{x}^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial \dot{x}^{(k+1)}} \\ \frac{\partial T_{k+1}^{(k)}}{\partial x^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial x^{(k+1)}} \\ \frac{\partial T_{k+1}^{(k)}}{\partial y^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial y^{(k+1)}} \\ \frac{\partial T_{k+1}^{(k)}}{\partial u^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial u^{(k+1)}} \\ \frac{\partial T_{k+1}^{(k)}}{\partial p} & \frac{\partial f^{(k+1)}}{\partial p} \\ \frac{\partial T_{k+1}^{(k)}}{\partial t} & \frac{\partial f^{(k+1)}}{\partial t} \end{bmatrix}^{T} \begin{bmatrix} \frac{\partial \dot{x}^{(k)}}{\partial p} & \frac{\partial \dot{x}^{(k)}}{\partial t} \\ \frac{\partial x^{(k)}}{\partial p} & \frac{\partial x^{(k)}}{\partial t} \\ \frac{\partial y^{(k)}}{\partial p} & \frac{\partial y^{(k)}}{\partial t} \\ \frac{\partial u^{(k)}}{\partial p} & \frac{\partial u^{(k)}}{\partial t} \\ \frac{\partial \dot{x}^{(k+1)}}{\partial p} & \frac{\partial \dot{x}^{(k+1)}}{\partial t} \\ \frac{\partial x^{(k+1)}}{\partial p} & \frac{\partial x^{(k+1)}}{\partial t} \\ \frac{\partial y^{(k+1)}}{\partial p} & \frac{\partial y^{(k+1)}}{\partial t} \\ \frac{\partial u^{(k+1)}}{\partial p} & \frac{\partial u^{(k+1)}}{\partial t} \\ I & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} I \\ \frac{\partial t}{\partial p} \end{bmatrix} = 0 \tag{3.37}$$
Reordering to separate known and unknown variables yields:
$$\begin{bmatrix} \frac{\partial T_{k+1}^{(k)}}{\partial \dot{x}^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial \dot{x}^{(k+1)}} \\ \frac{\partial T_{k+1}^{(k)}}{\partial x^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial x^{(k+1)}} \\ \frac{\partial T_{k+1}^{(k)}}{\partial y^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial y^{(k+1)}} \end{bmatrix}^{T} \begin{bmatrix} \frac{\partial \dot{x}^{(k+1)}}{\partial p} \\ \frac{\partial x^{(k+1)}}{\partial p} \\ \frac{\partial y^{(k+1)}}{\partial p} \end{bmatrix} = - \begin{bmatrix} \frac{\partial T_{k+1}^{(k)}}{\partial \dot{x}^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial \dot{x}^{(k+1)}} \\ \frac{\partial T_{k+1}^{(k)}}{\partial x^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial x^{(k+1)}} \\ \frac{\partial T_{k+1}^{(k)}}{\partial y^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial y^{(k+1)}} \end{bmatrix}^{T} \begin{bmatrix} \frac{\partial \dot{x}^{(k+1)}}{\partial t} \\ \frac{\partial x^{(k+1)}}{\partial t} \\ \frac{\partial y^{(k+1)}}{\partial t} \end{bmatrix} \frac{\partial t}{\partial p} - \begin{bmatrix} \frac{\partial T_{k+1}^{(k)}}{\partial \dot{x}^{(k)}} & 0 \\ \frac{\partial T_{k+1}^{(k)}}{\partial x^{(k)}} & 0 \\ \frac{\partial T_{k+1}^{(k)}}{\partial y^{(k)}} & 0 \\ \frac{\partial T_{k+1}^{(k)}}{\partial u^{(k)}} & 0 \\ \frac{\partial T_{k+1}^{(k)}}{\partial u^{(k+1)}} & \frac{\partial f^{(k+1)}}{\partial u^{(k+1)}} \\ \frac{\partial T_{k+1}^{(k)}}{\partial p} & \frac{\partial f^{(k+1)}}{\partial p} \\ \frac{\partial T_{k+1}^{(k)}}{\partial t} & \frac{\partial f^{(k+1)}}{\partial t} \end{bmatrix}^{T} \begin{bmatrix} \frac{\partial \dot{x}^{(k)}}{\partial p} + \frac{\partial \dot{x}^{(k)}}{\partial t}\frac{\partial t}{\partial p} \\ \frac{\partial x^{(k)}}{\partial p} + \frac{\partial x^{(k)}}{\partial t}\frac{\partial t}{\partial p} \\ \frac{\partial y^{(k)}}{\partial p} + \frac{\partial y^{(k)}}{\partial t}\frac{\partial t}{\partial p} \\ \frac{\partial u^{(k)}}{\partial p} + \frac{\partial u^{(k)}}{\partial t}\frac{\partial t}{\partial p} \\ \frac{\partial u^{(k+1)}}{\partial p} + \frac{\partial u^{(k+1)}}{\partial t}\frac{\partial t}{\partial p} \\ I \\ \frac{\partial t}{\partial p} \end{bmatrix} \tag{3.38}$$
Again, two conditions are required for the existence of the sensitivity function in a
hybrid system: differentiability of the transfer functions T_{k+1}^{(k)} and (3.34).
The initial sensitivities for the new mode are obtained directly from equation (3.38)
(recall that ẍ and ẏ can be calculated at any time with (3.26)). For clarity, the derivation for the initialization was performed in two steps (i.e., sensitivity of the initialization
solution and initial sensitivity of the trajectory), but here the differentiation has been
applied taking into account that the time of the event is a function of the parameters.
Note that the solution obtained in both cases is the same.

A procedure similar to the one in Section 3.2.1 may be followed to find the sensitivities of
the variables at the discontinuities (3.35). The difference is that in this case, when the
parameters change, the time of the event also changes, and the sensitivity includes
the variation due to this factor.
3.4 Existence and Uniqueness of the Sensitivity Functions for Hybrid Systems
Existence and uniqueness theorems for sensitivity functions are intimately related to
theorems governing existence and uniqueness of the embedded differential system [68].
Hence, since no existence and uniqueness theorem exists for general nonlinear DAEs
[21], it is not possible to state an existence and uniqueness theorem for sensitivity
functions of general nonlinear DAEs. However, existence and uniqueness theorems
exist for both nonlinear explicit ODEs and linear time invariant DAEs. Thus, this
section establishes sufficient conditions for the existence and uniqueness of sensitivity
functions for both nonlinear explicit ODE embedded hybrid systems and linear time-
invariant DAE embedded hybrid systems.
$$T_0^{(1)}(\dot{x}^{(1)}, x^{(1)}, p, t_0^{(1)}) = 0 \tag{3.40}$$
The transition functions:
Suppose that:

1. For t ∈ [t_0^{(j)}, t_f^{(j)}], j = 1, ..., n_j, the partial derivatives
$$\frac{\partial f^{(j)}}{\partial x^{(j)}} \quad\text{and}\quad \frac{\partial f^{(j)}}{\partial p}$$
exist and are continuous in a neighborhood of the solution x^{(j)}(p, t).

2. ∀ t_f^{(j)}, j = 0, ..., n_j − 1, the system h^{(j)}(ẋ^{(j)}, x^{(j)}, ẋ^{(j+1)}, x^{(j+1)}, t_f^{(j)}; p) = 0:
$$x^{(j)}(t_f^{(j)}) - x^{(j)}(t_0^{(j)}) - \int_{t_0^{(j)}}^{t_f^{(j)}} f^{(j)}(x^{(j)}, p, t)\,dt = 0 \tag{3.43}$$
$$g_{j+1}^{(j)}(\dot{x}^{(j)}(t_f^{(j)}), x^{(j)}(t_f^{(j)}), p, t_f^{(j)}) = 0 \tag{3.44}$$
$$T_{j+1}^{(j)}(\dot{x}^{(j)}(t_f^{(j)}), x^{(j)}(t_f^{(j)}), \dot{x}^{(j+1)}(t_f^{(j)}), x^{(j+1)}(t_f^{(j)}), p, t_f^{(j)}) = 0 \tag{3.45}$$
$$\dot{x}^{(j+1)}(t_f^{(j)}) - f^{(j+1)}(x^{(j+1)}(t_f^{(j)}), p, t_f^{(j)}) = 0 \tag{3.46}$$
has a solution (ẋ^{(j)}, x^{(j)}, ẋ^{(j+1)}, x^{(j+1)}, t_f^{(j)}; p) in an open set E ⊂ R^{(2n_x^{(j)} + 2n_x^{(j+1)} + 1) + (n_p)}.
Then,

1. the sensitivities ∂x^{(j)}/∂p exist, are continuous, and satisfy the sensitivity equations (3.47) in (t_0^{(j)}, t_f^{(j)});

2. at t_f^{(j)} the relationship between the right-hand sensitivities of the variables in mode S_j and the left-hand sensitivities of the variables in mode S_{j+1} is determined by:
$$\frac{\partial x^{(j+1)}}{\partial p} = -\left[ f^{(j+1)} + \left(\frac{\partial T_{j+1}^{(j)}}{\partial x^{(j+1)}}\right)^{-1} \left( \frac{\partial T_{j+1}^{(j)}}{\partial \dot{x}^{(j)}}\frac{\partial f^{(j)}}{\partial t} + \frac{\partial T_{j+1}^{(j)}}{\partial x^{(j)}} f^{(j)} + \frac{\partial T_{j+1}^{(j)}}{\partial \dot{x}^{(j+1)}}\frac{\partial f^{(j+1)}}{\partial t} + \frac{\partial T_{j+1}^{(j)}}{\partial t} \right) \right]\frac{dt}{dp} - \left(\frac{\partial T_{j+1}^{(j)}}{\partial x^{(j+1)}}\right)^{-1} \left( \frac{\partial T_{j+1}^{(j)}}{\partial \dot{x}^{(j)}}\frac{\partial f^{(j)}}{\partial p} + \frac{\partial T_{j+1}^{(j)}}{\partial x^{(j)}}\frac{\partial x^{(j)}}{\partial p} + \frac{\partial T_{j+1}^{(j)}}{\partial \dot{x}^{(j+1)}}\frac{\partial f^{(j+1)}}{\partial p} + \frac{\partial T_{j+1}^{(j)}}{\partial p} \right) \tag{3.48}$$
where dt/dp is given by (3.49).
For the initialization problem, the implicit function theorem guarantees the existence of open sets U^{(1)} and W^{(1)} such that to every p ∈ W^{(1)} ⊂ R^{n_p} (n_p = 1)
there corresponds a unique (x^{(1)}, ẋ^{(1)}, t_0^{(1)}; p) ∈ U^{(1)} ⊂ R^{(n_x^{(1)} + n_x^{(1)} + 1) + (n_p)} such that
h^{(0)}(ẋ^{(1)}, x^{(1)}, t_0^{(1)}; p) = 0, and there is a continuously differentiable mapping of W^{(1)}
into R^{(n_x^{(1)} + n_x^{(1)} + 1)} defining (x^{(1)}, ẋ^{(1)}, t_0^{(1)}). Therefore, it is possible to find in a neighborhood of p a point that produces (x^{(1)}, ẋ^{(1)}, t_0^{(1)}) within that set where the conditions
of Gronwall's theorem apply for system f^{(1)}.

Similarly, Gronwall's theorems assure that the difference between the perturbed and
unperturbed solutions can be made as small as desired for t ∈ [t_0^{(j)}, t_f^{(j)}] by selecting
a sufficiently small perturbation of p. This is true for j = 1.
In the transition to the mode S_2, or in general from S_j to S_{j+1}, the implicit function theorem is applied again. Therefore, sets U^{(j)} and W^{(j)} can be found such
that to every p ∈ W^{(j)} ⊂ W^{(j−1)} ⊂ R^{n_p} (n_p = 1) there corresponds a unique
(ẋ^{(j)}, x^{(j)}, ẋ^{(j+1)}, x^{(j+1)}, t_f^{(j)}; p) ∈ U^{(j)} ⊂ R^{(2n_x^{(j)} + 2n_x^{(j+1)} + 1) + (n_p)} such that
h^{(j)}(ẋ^{(j)}, x^{(j)}, ẋ^{(j+1)}, x^{(j+1)}, t_f^{(j)}; p) = 0,
and there is a continuously differentiable mapping of W^{(j)} into R^{(2n_x^{(j)} + 2n_x^{(j+1)} + 1)} defining (ẋ^{(j)}, x^{(j)}, ẋ^{(j+1)}, x^{(j+1)}, t_f^{(j)}).

Consequently, by selecting the smallest neighborhood of p from those that apply
for the Gronwall theorems and the implicit function theorem for all the modes, and
applying them in a chain, the existence and uniqueness of the sensitivity functions
for the ODE-embedded hybrid system is demonstrated. Expression (3.47) is from
Gronwall's theorem. Expressions (3.48) and (3.49) are derived by differentiation of
the system h^{(j)} (see Section 3.3.3).
Remark 3.1. Since it is desired to obtain partial derivatives, the theorem is applied
individually to every parameter while the rest are fixed, so the statement of the theorem
with only one parameter is applicable to the general multiple parameter case.
Remark 3.2. The controls u(p, t) have been dropped from the formulation of the
theorem, but are implicit since p and t appear as variables in all the functions. If the
transition function is explicit in the state variables x^{(j+1)} and does not depend on the
time derivatives, say T_{j+1}^{(j)} = x^{(j+1)} − Φ_{j+1}^{(j)}(x^{(j)}, p, t) = 0, then the sensitivity transfer reduces to:
$$\frac{\partial x^{(j+1)}}{\partial p} - \frac{\partial x^{(j)}}{\partial p} = -\left\{ f^{(j+1)} - f^{(j)} - \left[ \left(\frac{\partial \Phi_{j+1}^{(j)}}{\partial x^{(j)}} - I\right) f^{(j)} + \frac{\partial \Phi_{j+1}^{(j)}}{\partial t} \right] \right\}\frac{dt}{dp} + \left(\frac{\partial \Phi_{j+1}^{(j)}}{\partial x^{(j)}} - I\right)\frac{\partial x^{(j)}}{\partial p} + \frac{\partial \Phi_{j+1}^{(j)}}{\partial p} \tag{3.54}$$
Remark 3.3. Hybrid systems formulated with this model are a sequence of alternat-
ing differential and algebraic systems of equations, and the conditions for solvability,
existence and uniqueness are the union of these conditions for all of the systems.
Remark 3.4. The sensitivity functions are not defined for solutions passing through
points where g or T are not differentiable. In particular, this is true at the points
where g_{jα}^{(k)} = g_{iβ}^{(k)} = 0 or g_{jα}^{(k)} = g_{jβ}^{(k)} = 0, which are transitions to a different mode or
transitions to the same mode with different discontinuity functions.
Remark 3.5. In general the sensitivities jump even if the states are continuous. If the
states are continuous across the transition, the transfer reduces to:
$$\frac{\partial x^{(j+1)}}{\partial p} - \frac{\partial x^{(j)}}{\partial p} = -\left(f^{(j+1)} - f^{(j)}\right)\frac{dt}{dp} \tag{3.56}$$
As Rozenvasser points out, sensitivities are continuous either if the time derivative is
continuous or if the event time does not depend on the parameter.
Lemma 3.1. Suppose that the linear constant-coefficient DAE system with differential index ν,
$$A\dot{z} + Bz = f(p, t), \tag{3.57}$$
is solvable (i.e., λA + B is a regular pencil) for t ∈ [t_0, t_f], and that the partial derivatives
$$\frac{\partial}{\partial p}\frac{\partial^i f}{\partial t^i}, \qquad i = 0, \ldots, \nu - 1 \tag{3.58}$$
exist and are continuous. Then the partial derivatives
$$s = \frac{\partial z}{\partial p} \tag{3.59}$$
exist, are continuous, and satisfy
$$A\dot{s} + Bs = \frac{\partial f}{\partial p}. \tag{3.60}$$
Proof. Since the system is solvable, there exist nonsingular matrices P and Q that
transform (3.57) to canonical form under the change of coordinates
$$z = Qw, \tag{3.61}$$
where N below is a nilpotent matrix with index of nilpotency ν. The resulting system after the change of coordinates is:
$$\dot{w}_1 + Cw_1 = f_1 \tag{3.64}$$
$$N\dot{w}_2 + w_2 = f_2 \tag{3.65}$$
The first equation is an ODE and Gronwall's theorems can be applied. The second
equation has only one solution:
$$w_2 = \sum_{i=0}^{\nu-1} (-1)^i N^i \frac{\partial^i f_2}{\partial t^i} \tag{3.66}$$
and the existence and uniqueness of the remaining parametric sensitivities can be
deduced directly from (3.58). The system (3.60), obtained by differentiating (3.57),
has the same matrix pencil as (3.57), assuming the forcing functions are sufficiently
differentiable (3.58). Hence (3.60) is solvable.
Remark 3.6. The sufficient differentiability (3.58) is not required for all the components of f, but only for the ones appearing in the linear combination f_2. If the exact
differentiability required for each component of f is specified, which can be deduced from
(3.66), then the conditions of the lemma are necessary too.
Theorem 3.2. Suppose the linear constant-coefficient DAE system with differential index ν,
$$A(p)\dot{z} + B(p)z = f(p, t), \tag{3.67}$$
is solvable for t ∈ [t_0, t_f], that f is (2ν − 1)-times differentiable with respect to time,
and that the partial derivatives
$$\frac{\partial}{\partial p}\frac{\partial^i f}{\partial t^i}, \qquad i = 0, \ldots, \nu - 1 \tag{3.68}$$
$$\frac{\partial A(p)}{\partial p} \tag{3.69}$$
$$\frac{\partial B(p)}{\partial p} \tag{3.70}$$
exist and are continuous. Then the partial derivatives
$$s = \frac{\partial z}{\partial p} \tag{3.71}$$
exist, are continuous, and satisfy
$$A\dot{s} + Bs = \frac{\partial f}{\partial p} - \frac{\partial A(p)}{\partial p}\dot{z} - \frac{\partial B(p)}{\partial p}z. \tag{3.72}$$
Proof. Partial differentiation of (3.67) leads to (3.72). This system is solvable if
the right-hand side is (ν − 1)-times differentiable with respect to time, which requires
z to be ν-times differentiable with respect to time together with (3.68), (3.69) and (3.70). Since
the solution z requires f to be (ν − 1)-times differentiable with respect to time, f must
be (2ν − 1)-times differentiable.
Remark 3.7. As in the lemma, not all the components of f need to be (2ν −1)-times
differentiable, since in general not all of them will end up in the rows of the nilpotent
matrix that requires this property.
F (j) (ẋ(j) , x(j) , y (j), u(j) , t; p) = A(j) (p)ż (j) + B (j) (p)z (j) − f (j) (p, t) = 0 (3.73)
and transitions to the following mode represented by the discontinuity function g_{j+1}^{(j)}
and the transition functions T_{j+1}^{(j)}.
Suppose that:

1. For t ∈ [t_0^{(j)}, t_f^{(j)}], j = 1, ..., n_j, every f^{(j)} satisfies the conditions of Theorem 3.2.

2. ∀ t_f^{(j)}, j = 0, ..., n_j − 1, the system h^{(j)}(ẋ^{(j)}, x^{(j)}, y^{(j)}, ẋ^{(j+1)}, x^{(j+1)}, y^{(j+1)}, t_f^{(j)}; p) = 0 has a solution
(ẋ^{(j)}, x^{(j)}, y^{(j)}, ẋ^{(j+1)}, x^{(j+1)}, y^{(j+1)}, t_f^{(j)}; p) ∈ E.
Assume that the submatrix formed by the columns of the Jacobian matrix corresponding to the variables ẋ^{(j)}, x^{(j)}, y^{(j)}, ẋ^{(j+1)}, x^{(j+1)}, y^{(j+1)}, and t_f^{(j)} is invertible.
The equations (3.74) represent the state trajectory resulting from the integration
of f^{(j)} in mode j.
Then:

1. The sensitivities
$$s^{(j)} = \frac{\partial z^{(j)}}{\partial p}$$
exist, are continuous, and satisfy the differential equations (3.72) in (t_0^{(j)}, t_f^{(j)}).

2. At t_f^{(j)}, the relationship between the right-hand sensitivities of the variables in
mode S_j and the left-hand sensitivities of the variables in mode S_{j+1} is determined by:
$$\begin{bmatrix} \frac{\partial T_{j+1}^{(j)}}{\partial \dot{x}^{(j+1)}} & \frac{\partial T_{j+1}^{(j)}}{\partial x^{(j+1)}} & \frac{\partial T_{j+1}^{(j)}}{\partial y^{(j+1)}} \\ \frac{\partial f^{(j+1)}}{\partial \dot{x}^{(j+1)}} & \frac{\partial f^{(j+1)}}{\partial x^{(j+1)}} & \frac{\partial f^{(j+1)}}{\partial y^{(j+1)}} \end{bmatrix} \begin{bmatrix} \dot{s}_x^{(j+1)} + \ddot{x}^{(j+1)}\frac{dt}{dp} \\ s_x^{(j+1)} + \dot{x}^{(j+1)}\frac{dt}{dp} \\ s_y^{(j+1)} + \dot{y}^{(j+1)}\frac{dt}{dp} \end{bmatrix} = -\begin{bmatrix} \frac{\partial T_{j+1}^{(j)}}{\partial \dot{x}^{(j)}} & \frac{\partial T_{j+1}^{(j)}}{\partial x^{(j)}} & \frac{\partial T_{j+1}^{(j)}}{\partial y^{(j)}} & \frac{\partial T_{j+1}^{(j)}}{\partial t} & \frac{\partial T_{j+1}^{(j)}}{\partial p} \\ 0 & 0 & 0 & \frac{\partial f^{(j+1)}}{\partial t} & \frac{\partial f^{(j+1)}}{\partial p} \end{bmatrix} \begin{bmatrix} \dot{s}_x^{(j)} + \ddot{x}^{(j)}\frac{dt}{dp} \\ s_x^{(j)} + \dot{x}^{(j)}\frac{dt}{dp} \\ s_y^{(j)} + \dot{y}^{(j)}\frac{dt}{dp} \\ \frac{dt}{dp} \\ 1 \end{bmatrix} \tag{3.79}$$
Proof. With Theorem 3.2 replacing Gronwall's theorems for the existence and
uniqueness of the sensitivities of a linear time invariant DAE whose coefficients
are functions of the parameters, the same arguments applied in the proof of Theorem
3.1 can be repeated.
Remark 3.8. This theorem can be extended to systems not satisfying (3.4), but then
the number of equations in the transition functions is smaller, as not all the variables
can be specified independently in the new mode.
3.5 Examples
3.5.1 Implementation
The equations derived in Sections 3.2 and 3.3 lead to a straightforward numerical
implementation for the combined state and sensitivity integration in the ABACUSS
mathematical modeling environment. The evolution of a hybrid system can be viewed
as a sequence of subproblems, each characterized by a continuous evolution in a mode
terminated by an event and then a solution of the transition functions to initialize
the new mode.
This example illustrates the practical significance of not satisfying the conditions of
the existence and uniqueness theorem.

[Figure 3-3: Discontinuity function L versus x for p = 2.9, 3.0, and 3.1.]

Consider the following hybrid system with two modes and a reversible transition
condition:
$$S_1: \begin{cases} \dfrac{dx^{(1)}}{dt} = 4 - x^{(1)} \\ L_2^{(1)}: \; -(x^{(1)})^3 + 5(x^{(1)})^2 - 7x^{(1)} + p \le 0 \\ T_2^{(1)}: \; x^{(2)} - x^{(1)} = 0 \end{cases} \tag{3.80}$$

$$S_2: \begin{cases} \dfrac{dx^{(2)}}{dt} = 10 - 2x^{(2)} \\ L_1^{(2)}: \; -(x^{(2)})^3 + 5(x^{(2)})^2 - 7x^{(2)} + p > 0 \\ T_1^{(2)}: \; x^{(1)} - x^{(2)} = 0 \end{cases} \tag{3.81}$$
The initial mode is S_1 with initial condition x^{(1)}(0) = 0. There is a single
parameter p, which appears only in the transition condition. Figure 3-3 shows the
discontinuity function as a function of the state x^{(k)} for three different values of p.
When p = 3.1 the discontinuity function activates at a value of x^{(1)} close to 3. If
p = 2.9 then there are:

1. a transition to mode 2 around x^{(1)} = 0.8;

2. a transition back to mode 1 around x^{(1)} = 1.2;

3. a transition to mode 2 around x^{(1)} = 3.
For p = 3 the function is tangent at x(1) = 1 and crosses zero at x(1) = 3. The
first point is singular. It touches 0, switches to the second mode, and immediately
changes to mode 1 again. At this point the gradient of the discontinuity function
equals zero and (3.33) is not satisfied.
If the evolution of the transition point as a function of the parameter p is examined,
it is observed that for values less than 3 there are three transition times that vary
continuously. But at p = 3 there is a nonsmoothness in the event time: now there is
only one transition time and the transition time for the first event has jumped from
the previous case. Actually, at this point the second condition for the existence and
uniqueness of the parametric sensitivity functions is not satisfied. The changes in the
sequence of events can be seen to be related to these ‘critical’ points.
The sensitivity functions for p = 2.9 and p = 3.1 are plotted in Figures 3-4 and 3-
5. In this system, the sensitivity functions are discontinuous at the time of switching.
The expression for transfer of the sensitivities is:
$$s^{+} = s^{-} - \frac{-\dfrac{1}{-3x^2 + 10x - 7} - s^{-}}{\dot{x}^{-}}\left(\dot{x}^{+} - \dot{x}^{-}\right) \tag{3.82}$$
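As a numerical check on (3.82), the following self-contained sketch (using SciPy's event location; purely illustrative and not the ABACUSS implementation) integrates mode S1 for p = 3.1, locates the transition, and evaluates the sensitivity jump. In mode S1 the sensitivity obeys ds/dt = −s with s(0) = 0, so s⁻ = 0 at the event.

```python
import numpy as np
from scipy.integrate import solve_ivp

p = 3.1

def f1(t, x):                 # mode S1: dx/dt = 4 - x
    return [4.0 - x[0]]

def g(t, x):                  # discontinuity function: -(x^3) + 5x^2 - 7x + p
    return -x[0]**3 + 5.0*x[0]**2 - 7.0*x[0] + p
g.terminal = True             # stop the integration at the event
g.direction = -1              # trigger when g crosses zero from above

sol = solve_ivp(f1, (0.0, 10.0), [0.0], events=g, rtol=1e-10, atol=1e-12)
t_star = sol.t_events[0][0]
x_star = sol.y_events[0][0, 0]

s_minus = 0.0                                    # ds/dt = -s with s(0) = 0 in mode S1
xdot_minus = 4.0 - x_star                        # f^(1) at the event
xdot_plus = 10.0 - 2.0*x_star                    # f^(2) just after the event
dg_dx = -3.0*x_star**2 + 10.0*x_star - 7.0
dt_dp = -(dg_dx*s_minus + 1.0) / (dg_dx*xdot_minus)   # from g(x(t*, p), p) = 0
s_plus = s_minus - (xdot_plus - xdot_minus)*dt_dp     # jump relation (3.56)

print(f"t* = {t_star:.4f}, x(t*) = {x_star:.4f}, s+ = {s_plus:.4f}")
```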
In Figures 3-4 and 3-5 the discontinuous effect of the different sequences is per-
ceptible in the trajectory, but almost negligible in the long term. Consider now a
system with a nonreversible transition condition, that is, a system where there is a
switch from S1 to S2 but no switch back to S1 :
$$S_1: \begin{cases} \dfrac{dx^{(1)}}{dt} = 4 - x^{(1)} \\ L_2^{(1)}: \; -(x^{(1)})^3 + 5(x^{(1)})^2 - 7x^{(1)} + p \le 0 \\ T_2^{(1)}: \; x^{(2)} - x^{(1)} = 0 \end{cases} \tag{3.83}$$

$$S_2: \quad \dfrac{dx^{(2)}}{dt} = 0 \tag{3.84}$$
Figures 3-6 and 3-7 show the trajectories and sensitivities for this system. Now
the jump in the final values of the state and of the sensitivity with respect to the
parameter p is evident.
[Figure 3-4: Sensitivity ∂x/∂p and state trajectory x versus t for the reversible transition condition when p = 2.9.]

[Figure 3-5: Sensitivity and state trajectory for the reversible transition condition when p = 3.1.]
[Figure 3-6: Sensitivity ∂x/∂p and state trajectory x versus t for the nonreversible transition condition when p = 2.9.]

[Figure 3-7: Sensitivity and state trajectory for the nonreversible transition condition when p = 3.1.]
3.5.3 Functions discretized by finite elements
In many problems the control variables are approximated by a finite set of parameters
that define generic functions over finite elements, different for every element. When
the discretization is over time, the problem can be treated as a hybrid problem in
which the modes are the elements.
For example, let the control variables be discretized using Lagrange polynomials
ψ_i^{(k)}(t) of order I over K finite elements, as described in Chapter 2. The transition
conditions and functions, assuming the usual continuity of the state variable, are:
$$g_{k+1}^{(k)} = t - t_I^{(k)} \tag{3.85}$$
$$T_{k+1}^{(k)} = x^{(k+1)} - x^{(k)} \tag{3.86}$$
Now consider the transfer of the sensitivities with respect to the junction time
t* = t_I^{(k)} at the end of element k. Equation (3.36) gives:
$$\frac{\partial t}{\partial p} = \begin{cases} 1 & \text{if } p = t^{*}, \\ 0 & \text{if } p \ne t^{*}. \end{cases} \tag{3.87}$$
Noting that the derivative with respect to time of the DAE is identically zero gives the
transfer relations (3.88)–(3.89). For another junction time p ≠ t*, always supposing that p does not appear explicitly
in f, the first equation simplifies, giving (3.90)–(3.91).

Therefore, for stage k = 1 the initial sensitivities are all zero by (3.13), (3.14), (3.15)
and (3.16). The right-hand side of (3.25) for element 1 is zero and the sensitivity
functions are all zero in that stage. The system (3.90) and (3.91) transfers zero initial
sensitivities to the new stage, and this continues until stage k is entered. Here
the initial values for s_x^{(k)}, s_y^{(k)}, and ṡ_x^{(k)} are zero. However, the RHS of (3.25) is not
zero and therefore integration provides the values for the sensitivity functions.
At t = t_I^{(k)}, the transfer is governed by (3.89) and (3.90). Therefore, the sen-
sitivities will in general jump even if the states are continuous. If the controls are
continuous over the junction of the finite elements, then ẋ(k+1) = ẋ(k) , and the sensi-
tivities of the differential variables will be continuous too.
In the following elements, the integration continues from the initial values at
every junction time. The sensitivities are continuous across element boundaries but
in general not differentiable at the boundary if the controls are discontinuous. The
RHS of (3.25) is zero, which indicates a system without excitation.
Figure 3-8 plots the sensitivities of three variables with respect to the junction
time t_I^{(2)}, using five finite elements. These sensitivities exhibit the typical behavior
caused by discontinuous controls: zero sensitivity, excitation, jump, and decay, with
nondifferentiable junctions.
[Figure 3-8: ABACUSS sensitivity analysis — sensitivities dy1/dt2, dy2/dt2, and dy3/dt2 (values ×10⁻³) versus time over five finite elements.]
$$\epsilon\ddot{y} + (y^2 - 1)\dot{y} + y = 0 \tag{3.92}$$
$$\dot{x} = -y \tag{3.93}$$
$$x = \frac{y^3}{3} - y \tag{3.94}$$
At y = ±1, x = ∓2/3 the solutions cease to exist. For small values of ε, the solutions
jump with almost 'infinite' speed.
Three modes are defined in relation to values of y:
$$S_1: \begin{cases} y < -1.001 \\ L_2^{(1)}: \; y \ge -1.001, \qquad T_2^{(1)}: \; y = 2 \end{cases} \tag{3.95}$$

$$S_2: \begin{cases} y > 1.001 \\ L_1^{(2)}: \; y \le 1.001, \qquad T_1^{(2)}: \; y = -2 \end{cases} \tag{3.96}$$

$$S_3: \begin{cases} -0.999 < y < 0.999 \\ L_1^{(3)}: \; y \ge 0.999, \qquad T_1^{(3)}: \; y = -2 \\ L_2^{(3)}: \; y \le -0.999, \qquad T_2^{(3)}: \; y = 2 \end{cases} \tag{3.97}$$

The parameter p is the initial value of y. At every event the sensitivity transfer is:
$$s_y^{+} = \frac{\dot{y}^{+}}{\dot{y}^{-}}\, s_y^{-} \tag{3.98}$$
Figure 3-9 shows the trajectories of x and y in the time domain and Figure 3-10
shows y versus x for y(0) = p = −3. After some initial time the system begins cycling.
The sensitivities are plotted in Figures 3-11 and 3-12.
When p = 0, the system starts at a fixed point and y(t) = x(t) = 0. The
sensitivities are defined for t in a bounded interval:
$$s_x = -e^{t} \tag{3.99}$$
$$s_y = e^{t} \tag{3.100}$$
For a given bounded time interval, there will exist a neighborhood of parameter
values around zero for which the sensitivity functions are qualitatively similar in the
sense that they do not experience discontinuities. Outside of this neighborhood, the
evolution of the hybrid system is not smooth. For a parameter value on the positive
side, the oscillating solution is 'phase shifted' with respect to one starting from the
negative side, as shown in Figures 3-13 and 3-14. The sensitivities are the same in
this example for a positive or a negative value of the parameter of the same magnitude,
but this is a special case
where there is symmetry in the system. In general, the sensitivity functions could be
different.
[Figure 3-9: Trajectories of x and y versus t for y(0) = p = −3.]

[Figure 3-10: Phase portrait, y versus x, for y(0) = p = −3.]
[Figure 3-11: Sensitivity dx/dp versus t.]

[Figure 3-12: Sensitivity dy/dp versus t.]
[Figure 3-13: Trajectories x(0−), y(0−), x(0+), and y(0+) versus t for initial values of y approaching zero from below and from above.]

[Figure 3-14: Corresponding dx/dt and dy/dt trajectories for the 0− and 0+ cases.]
3.6 Conclusions
The parametric sensitivity functions for a broad class of index ≤ 1 DAE-embedded
hybrid systems have been derived and illustrated with examples. Computationally,
calculation of the sensitivity functions is inexpensive compared with solution of the
original system.
In the cases of index-0 DAEs (ODEs) or linear time invariant DAEs, theorems
giving sufficient conditions for existence and uniqueness of sensitivities have been
proven. These theorems imply the existence of ‘critical’ values for the parameters
related to qualitative changes in the evolution of the system, specifically bifurcations
in the sequence of events.
These results have important implications concerning the application of sensitivity
functions for the optimization of hybrid discrete/continuous dynamic systems. Suffi-
cient conditions for smoothness of the master optimization problem require existence
and local continuity of the sensitivity functions. Thus, changes in the sequence of
events at these critical values introduce nonsmoothness, confounding gradient-based
algorithms. In practice, the finite nature of numerical methods may aggravate the
nonsmoothness.
Chapter 4
method has lower computational cost than the staggered scheme because it minimizes
the number of matrix factorizations that need to be performed along the solution
trajectory.
In this chapter, a novel staggered corrector method for solving DAEs and sensi-
tivities is developed and demonstrated. In particular, the approach exhibits a com-
putational cost that is proven to be a strict lower bound for that of the simultaneous
corrector algorithm described in [94]. The staggered corrector method uses two cor-
rector iteration loops at each step, one for the DAE and a second for the sensitivities.
The computational savings result from fewer Jacobian updates in order to evaluate
the residuals of the sensitivity equations.
Many large DAEs found in engineering applications have Jacobians that are large,
sparse, and unstructured. Although the DAE may contain many equations, the av-
erage number of variables appearing in each equation is much lower than the number
of equations. Standard linear algebra codes exist that exploit this sparsity to reduce
dramatically the computational cost and memory resources required for the solution
of such systems [44]. The method for solving sensitivities described in this chapter
is particularly useful for, but not confined to, large sparse unstructured DAEs, be-
cause the computational time spent updating the Jacobian matrix is a significant
portion of the total solution time. The reason for this counterintuitive situation is
that state-of-the-art BDF integrators factor the corrector matrix infrequently, and
the factorization of large sparse matrices is particularly efficient, while on the other
hand the simultaneous corrector method [94] requires the Jacobian to be updated
frequently in order to calculate sensitivity residuals.
Also present in this chapter is a code for solving large, sparse DAEs and sensi-
tivities. The DSL48S code is based on the DASSL code but contains several novel
features, including the use of the highly efficient staggered corrector method. Exam-
ples are presented using this code demonstrating that the staggered corrector iteration
is a significant improvement over existing methods.
It is assumed that the reader is familiar with the DASSL code and the algorithms
used in that code [21, 94].
4.1 The Staggered Corrector Sensitivity Method
Consider the DAE system
$$f(t, y, y', p) = 0 \tag{4.1}$$
where y ∈ R^{n_y} are the state variables and p ∈ R^{n_p} are the parameters. Consistent
initial conditions are given by (4.2). If (4.1) is a sparse unstructured DAE, each
equation in (4.1) involves on average c variables, where c ≪ n_y.
Sensitivity analysis on (4.1) yields the derivative of the state variables y with
respect to each parameter. This analysis adds the following sensitivity system of
ns = np · ny equations to (4.1):
$$\frac{\partial f}{\partial y'} s_i' + \frac{\partial f}{\partial y} s_i + \frac{\partial f}{\partial p_i} = 0 \qquad i = 1, \ldots, n_p \tag{4.3}$$
$$\frac{\partial \phi}{\partial y'} s_i'(t_0) + \frac{\partial \phi}{\partial y} s_i(t_0) + \frac{\partial \phi}{\partial p_i} = 0 \qquad i = 1, \ldots, n_p \tag{4.4}$$
where s_i = ∂y/∂p_i. Note that consistent initial conditions (4.4) for the sensitivity
system (4.3) are derived from (4.2).
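For concreteness, the sketch below (a generic NumPy illustration, not the DSL48S source) evaluates the residual of (4.3) for one parameter, given the Jacobians ∂f/∂y′ and ∂f/∂y and the vector ∂f/∂p_i; the helper bdf_sdot forms s_i′ from a BDF history in the way the corrector formulas below use it.

```python
import numpy as np

def sensitivity_residual(df_dyprime, df_dy, df_dpi, s_i, sdot_i):
    """Residual of (4.3): df/dy' * s_i' + df/dy * s_i + df/dp_i."""
    return df_dyprime @ sdot_i + df_dy @ s_i + df_dpi

def bdf_sdot(alphas, s_values):
    """Approximate s_i' from current and past values, sum_j alpha_j * s_{i,(n+1-j)}."""
    return sum(a * s for a, s in zip(alphas, s_values))
```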
A numerical solution of the nonlinear DAE system (4.1) is obtained at each time
step of the integration by approximating the solution with a suitable discretization,
typically the k-step general backward difference formula (BDF), to yield the nonlinear
system:
$$g(y_{(n+1)}) = f\!\left(t_{(n+1)},\; y_{(n+1)},\; \sum_{j=0}^{k} \alpha_j y_{(n+1-j)},\; p\right) = 0 \tag{4.5}$$
where α_j are the coefficients of the BDF formula. The solution of (4.5) is performed
iteratively by solving the following linear system at each iteration:
where
$$J = \alpha_0 \frac{\partial f}{\partial y'_{(n+1)}} + \frac{\partial f}{\partial y_{(n+1)}} \tag{4.7}$$
evaluated at the current corrector iterate. Note that the corrector matrix J is calculated using information from the Jacobian of (4.1). DAE codes such as DASSL use an approximation Ĵ to J in order to
minimize the frequency of updates and factorizations, which are typically the most
computationally expensive steps of the integration algorithm. Thus, the corrector
solution is obtained by solving (4.8), in which the true corrector matrix is replaced by Ĵ. The sensitivity system leads to the linear system (4.9), where
$$A = \begin{bmatrix} \frac{\partial f}{\partial y'_{(n+1)}} & \frac{\partial f}{\partial y_{(n+1)}} \end{bmatrix} \tag{4.10}$$
In order to avoid factoring A, (4.9) may be solved as in (4.8) using the iteration:
$$\hat{J}\left(s_{i(n+1)}^{(m)} - s_{i(n+1)}^{(m+1)}\right) = A\begin{bmatrix} \alpha_0 s_{i(n+1)}^{(m)} + \sum_{j=1}^{k} \alpha_j s_{i(n+1-j)} \\ s_{i(n+1)}^{(m)} \end{bmatrix} + \frac{\partial f}{\partial p_i} \tag{4.11}$$
Table 4-1: Definitions of the cost symbols.

    Niter    Number of Newton iterations
    Np       Number of parameters
    CSRES    Cost of sensitivity residual evaluation (overall)
    CRES     Cost of DAE residual evaluation
    CBS      Cost of a backsubstitution
    CMF      Cost of LU matrix factorization
    CMV      Cost of a matrix-vector product
    CVA      Cost of a vector-vector addition
    CVS      Cost of a scalar operation on a vector
    CJU      Cost of a Jacobian update
    CRHS     Cost of an evaluation of ∂f/∂pi
Note that A and ∂f/∂p_i need to be computed only once per step of the integrator,
after the DAE corrector iteration and before the sensitivity corrector iteration,
because they depend only on information from the DAE solution. This feature is the
main reason that the staggered corrector method is attractive compared with other
methods, because the cost of updating A and ∂f/∂p_i can be high. Further, note that
solution of the sensitivity corrector iteration does not require additional factorization
of a matrix, since Ĵ is already available as LU factors from the DAE corrector
iteration (4.8).
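The sensitivity corrector loop just described can be sketched as follows (illustrative Python, not the DSL48S FORTRAN; J_hat_lu is the (lu, piv) pair produced by scipy.linalg.lu_factor for the already-factored corrector matrix Ĵ). A and ∂f/∂p_i are formed once per step, after which each Newton iteration costs only matrix–vector products and a backsubstitution.

```python
import numpy as np
from scipy.linalg import lu_solve

def staggered_sensitivity_corrector(J_hat_lu, A, df_dp, s_pred, s_hist, alphas,
                                    tol=1e-8, max_iter=5):
    """One step of the staggered corrector iteration (4.11) for a single parameter.

    J_hat_lu : LU factors of the (possibly out-of-date) corrector matrix J-hat,
               reused from the converged DAE corrector iteration.
    A        : [df/dy', df/dy] evaluated once after the DAE corrector converged.
    df_dp    : df/dp_i, also evaluated once per step.
    s_pred   : predictor value of s_i at t_(n+1).
    s_hist   : list of s_i values at the k previous steps (BDF history).
    alphas   : BDF coefficients alpha_0, ..., alpha_k.
    """
    s = s_pred.copy()
    for _ in range(max_iter):
        sdot = alphas[0]*s + sum(a*sh for a, sh in zip(alphas[1:], s_hist))
        residual = A @ np.concatenate([sdot, s]) + df_dp   # sensitivity residual
        delta = lu_solve(J_hat_lu, -residual)               # reuse LU of J-hat, no refactor
        s = s + delta
        if np.linalg.norm(delta) < tol:
            return s
    raise RuntimeError("sensitivity corrector did not converge; refactor J-hat")
```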
At each time step of the integrator, the additional cost involved in solving the
combined DAE and sensitivity system using the staggered corrector method versus
solving the DAE without sensitivities is given by (4.12); the definitions of the symbols
in that equation are given in Table 4-1. The
cost of sensitivity residual evaluation is discussed in later sections.
The incremental cost presented in (4.12) is for the corrector iteration alone and
assumes that Jˆ is a sufficiently good approximation to J for this sensitivity corrector
iteration to converge without additional corrector matrix factorizations. The addi-
tional cost differentials that arise outside the corrector iteration are ignored here but
are discussed in a later section.
4.2 Other Sensitivity Methods
A number of codes solve the sensitivity system directly using a staggered scheme
[30, 77, 86]. This method involves solving the nonlinear DAE system (4.1) by approx-
imating the solution with a BDF method, as in (4.5–4.8).
Once an acceptable numerical solution has been obtained from the corrector it-
eration, the solution to the sensitivity system (4.9) is obtained through the direct
solution of:
$$A\begin{bmatrix} s_i' \\ s_i \end{bmatrix} = -\frac{\partial f}{\partial p_i} \tag{4.13}$$
in reality a solution to the unknown perturbed sensitivity system:
$$\hat{J}\begin{bmatrix} s_i' \\ s_i \end{bmatrix} = -\frac{\partial f}{\partial p_i} \tag{4.14}$$
Therefore, no guarantees may be made concerning the distance between the solution
to this system and the true sensitivity system. For example, in [77] it is noted that
sensitivities obtained using this method are not sufficiently accurate for the master
iteration of control parameterization.
At each step of the integrator, the additional cost involved in solving the combined
DAE and sensitivity system using the staggered direct method versus solving the DAE
without sensitivities is given by (4.15).
The method described in [94] combines the DAE and sensitivities to form a combined
system which is then solved using a BDF method on each step. The combined system
is:
$$F = \left[\, f(t, y, y', p),\;\; \frac{\partial f}{\partial y'} s_1' + \frac{\partial f}{\partial y} s_1 + \frac{\partial f}{\partial p_1},\;\; \ldots,\;\; \frac{\partial f}{\partial y'} s_{n_p}' + \frac{\partial f}{\partial y} s_{n_p} + \frac{\partial f}{\partial p_{n_p}} \,\right] \tag{4.16}$$
which can be solved using Newton's method by solving the corrector equation, where
$$J_i = \frac{\partial J}{\partial y} s_i + \frac{\partial J}{\partial p_i} \tag{4.20}$$
This method is a significant improvement over the staggered direct method be-
cause the need for additional matrix factorizations in order to solve the sensitivity
system has been eliminated. However, the disadvantage of the simultaneous corrector
method compared to the staggered corrector method is that the former requires the
system Jacobian A be updated at every corrector iteration in order to calculate the
portion of G due to sensitivity residuals. Although this cost is minor compared with
matrix factorization, it is shown below to be a significant cost for large problems.
At each step of the integrator, the additional cost involved in solving the combined
DAE and sensitivity system using the simultaneous corrector method versus solving
the DAE without sensitivities is:
4.3 Methods for Evaluating Sensitivity Residuals
There are three practical methods for evaluating the sensitivity residuals that may
be used with either the staggered corrector or the simultaneous corrector methods.
As shown in this section, the choice of method can significantly affect the cost of the
staggered corrector method.
4.3.1 Analytic evaluation of ∂f/∂p_i

This is the most desirable option because it ensures the accuracy of the sensitivity
residuals, and experience shows that it is typically less expensive than the other
residual methods described below. In practice, if automatic differentiation is used to
obtain the DAE Jacobian (for example, see [65]), it is not much trouble to extend the
automatic differentiation to produce ∂f/∂p_i.
4.3.2 Finite differencing for ∂f /∂pi
The cost of the directional derivative sensitivity residual evaluation given in (4.26)
is:
Note that directional derivative residuals are less costly than finite difference residuals,
and therefore directional derivatives are the preferred method if analytical derivatives
are unavailable. However, it is demonstrated in Section 4.6 that analytic residuals
are preferred over directional derivative residuals from a cost standpoint.
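The idea behind directional derivative residuals can be sketched as follows (a generic one-sided finite-difference illustration, not the exact formula (4.26) of the text): a single extra residual evaluation along the direction (s_i, s_i′, e_i) approximates the whole sensitivity residual at once.

```python
import numpy as np

def directional_sensitivity_residual(f, t, y, yprime, p, i, s_i, sdot_i, delta=1e-7):
    """Approximate df/dy'*s_i' + df/dy*s_i + df/dp_i with one perturbed residual call.

    f(t, y, yprime, p) returns the DAE residual vector; only parameter i is perturbed.
    """
    p_pert = p.copy()
    p_pert[i] += delta
    base = np.asarray(f(t, y, yprime, p))
    pert = np.asarray(f(t, y + delta*s_i, yprime + delta*sdot_i, p_pert))
    return (pert - base) / delta
```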
4.3.4 Comparison of methods for evaluating sensitivity residuals

The above expressions for the cost of evaluating sensitivity residuals are not the same
as the CSRES term in equations (4.12), (4.15), and (4.21).

For analytic sensitivities, these costs are:
$$C_{SRES}^{StCorr} = C_{JU} + N_p\left(C_{RHS} + N_{iter}(2C_{MV} + 2C_{VA})\right) \tag{4.28}$$
$$C_{SRES}^{SimCorr} = N_{iter}\left(C_{JU} + N_p(C_{RHS} + 2C_{MV} + 2C_{VA})\right) \tag{4.29}$$
The above equations show that the staggered corrector method has an advantage
over the simultaneous corrector method in the calculation of sensitivity residuals
because it is not necessary to update the Jacobian and the vector ∂f /∂pi at each
corrector iteration.
For finite difference sensitivities, the sensitivity residual costs are:
$$C_{SRES}^{StCorr} = C_{JU} + N_p \cdot N_{iter}\left(C_{RES} + 3C_{VA} + C_{VS} + 2C_{MV}\right) \tag{4.30}$$
$$C_{SRES}^{SimCorr} = N_{iter}\left(C_{JU} + N_p(C_{RES} + 3C_{VA} + C_{VS} + 2C_{MV})\right) \tag{4.31}$$
Therefore the staggered corrector method has an advantage over the simultaneous
corrector method when the sensitivity residuals are calculated with finite differencing,
because it needs to update the Jacobian only once.
For directional derivative sensitivities, the sensitivity residual costs are:
$$C_{SRES}^{StCorr} = N_p \cdot N_{iter}\left(C_{RES} + C_{VA} + C_{VS}\right) \tag{4.32}$$
$$C_{SRES}^{SimCorr} = N_p \cdot N_{iter}\left(C_{RES} + C_{VA} + C_{VS}\right) \tag{4.33}$$
There is no cost advantage for the staggered corrector method when directional
derivative sensitivity residuals are used. Also, since this method requires fewer oper-
ations and no Jacobian update, it is preferred over the finite differencing of ∂f /∂pi
method for calculating sensitivity residuals for either the staggered or simultaneous
corrector method.
In practice, for large problems the Jacobian and residual evaluations are expensive,
and therefore the ability to reuse A and ∂f /∂pi during the corrector iteration results
in significant cost savings.
4.4 Cost Comparison of Sensitivity Methods
This section compares the computational costs of the staggered direct method and
the simultaneous corrector method with the staggered corrector method. The cost
differences presented are for one step of the integration, assuming that the step passes
the corrector and truncation error tests, and that (4.22) is used to calculate the
sensitivity residuals.
The cost comparison measures presented in this section have been derived so that
they may be tested in numerical experiments. In the numerical experiments, the
important statistic to compare is the ratio of the additional cost of integrating state
and sensitivity systems simultaneously to the cost of integrating a state system alone.
This ratio is:
$$\psi_{n_p} = \frac{\text{Additional time for sensitivity integration}}{\text{Time for state integration}} = \frac{\tau_{n_p} - \tau_0}{\tau_0} \tag{4.34}$$
where τnp is the time for the integration of the sensitivity and state system with np
parameters, and τ0 is the time for the integration of the DAE without sensitivities.
The state integration is dominated by the cost of matrix factorizations, which occur
for many DAEs on average every 5-10 steps. For large sparse systems, the cost
of matrix factorization has been frequently reported to be approximately 90% of the
total solution time, and the numerical results (Section 4.6) indicate that the balance is
dominated by Jacobian evaluations. The staggered direct method factors the matrix
at every step, so that between 5-10 times more evaluations and factorizations are
performed during the integration than would be performed to solve the DAE alone.
Therefore a lower bound on ψ_{n_p} for the staggered direct method is:
$$\psi_{n_p} = \frac{N_{\text{steps}} - N_{\text{factorizations}}}{N_{\text{factorizations}}} \tag{4.35}$$
where Nsteps is the number of BDF steps and Nfactorizations is the number of corrector
matrix factorizations in the state integration. In the limit as the number of equations
becomes large the staggered direct method can do no better than this ratio (typically
about 4-9).
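To put (4.35) in perspective, take the 3009-equation state integration reported in Section 4.6, which required 212 steps but only 33 corrector matrix factorizations: the bound gives ψ_{n_p} ≥ (212 − 33)/33 ≈ 5.4 for the staggered direct method, squarely within the 4–9 range quoted above.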
The same cost difference estimate can be performed for the simultaneous corrector
method with analytic derivatives. The additional cost of sensitivities in this method
is dominated by the need to update the system Jacobian at each iteration of the
corrector. However, the cost of these extra Jacobian updates is dominated by the
cost of matrix factorizations in an asymptotic sense, and a lower bound on ψnp for
the simultaneous corrector method is therefore zero.
As in the simultaneous corrector method, the asymptotic lower bound on ψnp for the
staggered corrector method is also zero. However, if the cost of matrix factorizations
(which is the same in both methods) is ignored, ψnp for both methods is dominated
by the cost of Jacobian updates and backsubstitutions. Therefore, the ratio of ψnp
for the two methods is approximately:
$$\frac{\psi_{n_p}^{SimCorr}}{\psi_{n_p}^{StCorr}} = \frac{N_{iter}\left(C_{JU} + n_p(C_{BS} + C_{RHS})\right)}{C_{JU} + n_p(C_{RHS} + N_{iter}C_{BS})} \tag{4.36}$$
The two methods have the same cost only if Niter = 1, and the cost differential should
decrease as np increases.
The staggered corrector method is also less expensive than the simultaneous cor-
rector method if finite differencing (4.24) is used to calculate the sensitivity residuals.
However, there is little difference in cost if directional derivatives (4.26) are used to
calculate the sensitivity residuals.
4.4.3 Other cost considerations
The above analysis considers only the cost differences within the corrector iteration.
However, there are several other considerations that affect the overall cost of the
integration.
The sequence of step sizes taken by the integrator will vary according to which
sensitivity method is used. This observation is due to the fact that each method
is using the corrector iteration to solve a different nonlinear system. The step size
and the choice of when to factor the corrector are dependent upon the errors in the
Newton iteration, which are different in each method. The solution obtained to the
DAE is still correct to within the tolerances specified for all the methods, but the
number of steps and corrector matrix factorizations may vary. In practice, both the
simultaneous corrector and the staggered corrector often factor the corrector matrix
more often than would be done if just solving the DAE, but the difference in number
of factorizations is not typically large.
When the error control algorithm in the integrator is considered, the differences
in cost between the staggered and simultaneous corrector methods are even greater
than the above analysis would indicate. The staggered corrector method is able to
avoid solving the sensitivity system for steps where the corrector iteration or the
truncation error check fail for the DAE, while the simultaneous corrector iteration is
not capable of such a discrimination. In practice, the overall convergence of the two
corrector iterations in the staggered corrector method appears to be more robust than
convergence of the single corrector iteration in the simultaneous corrector method.
4.5 Description of DSL48S
The staggered corrector method has been implemented in a FORTRAN code called
DSL48S. This code is based on the original DASSL code [21], with modifications to
handle large unstructured sparse DAEs and to add the staggered corrector method
for sensitivities.
The DSL48S code has the following features:
1. The large-scale unstructured sparse linear algebra package MA48 [45] is embed-
ded in DSL48S for solution of the corrector equation. The MA48 package is
especially suitable for the types of problems that arise in chemical engineering,
as well as many other applications.
2. The staggered corrector method described above has been implemented, and
DSL48S offers options for solving the DAE either alone or with sensitivities.
3. The code has been adapted for use within a larger framework for solving a broad
class of high-index nonlinear DAEs [47] and dynamic optimization problems.
The result is a code that conforms closely with DASSL’s interface and uses its ex-
cellent heuristics for both the DAE and the sensitivity equations. A diagram detailing
the algorithm is shown in Figure 4-1.
[Figure 4-1: Flow diagram of one DSL48S step — solve the DAE corrector iteration (refactoring the matrix and/or cutting the step on failure), perform the error test on the DAE variables, update the Jacobian and ∂F/∂p_i, solve the sensitivity corrector equation (refactoring the matrix if needed), and perform the error test on the combined DAE and sensitivity system.]
At each corrector iteration on the DAE, if the norm of the update is sufficiently
small, the corrector iteration is considered to have converged. If not, another itera-
tion is performed, possibly after refactoring the matrix or reducing the step size, as
in DASSL. When the corrector has converged, a truncation error test is performed on
the state variables. If the truncation error test fails, the corrector matrix is refactored
and/or the step size is reduced, and the DAE corrector equation is solved again. If
the state variables pass the truncation error test, the Jacobian is updated, and the
sensitivity corrector equation is solved in the same manner as the DAE corrector
equation. Provision is made to refactor the corrector matrix if it is determined that
the corrector iteration is not converging to the predicted values. After the sensitivity
variables have passed the corrector convergence test, a truncation error test is per-
formed on the combined state and sensitivity system. If this test fails, the step size
is reduced and/or the corrector matrix refactored and the step is attempted again.
The algorithm contains several features designed to minimize wasted computations
due to corrector convergence failure or error test failure. An error test is performed on
the DAE before the sensitivity corrector iteration is started because an error failure
on the DAE will usually cause an error failure on the combined system. The error
check on the DAE is inexpensive compared with the wasted work that would be done
in solving the sensitivity system if the error failure was not caught at this stage. It
was found empirically that an approximation to the corrector matrix that is sufficient
to converge the corrector iteration on the DAE on rare occasions may not be sufficient
to converge the corrector iteration on the sensitivities. Provision is therefore made
to update and re-factor the corrector matrix without reducing the step size if the
sensitivity corrector iteration does not converge.
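The step logic just described can be summarized in the following schematic (Python-flavored sketch of the control flow mirroring Figure 4-1, not the DSL48S source; the callables passed in are hypothetical stand-ins for the corresponding stages, each returning True on success, and retry limits are omitted for brevity).

```python
def dsl48s_step(solve_dae_corrector, error_test_dae, update_jacobian_and_dfdp,
                solve_sensitivity_corrector, refactor_matrix, cut_step,
                error_test_combined):
    """Schematic control flow of one integration step (cf. Figure 4-1)."""
    while True:
        if not solve_dae_corrector():          # Newton iteration on the DAE
            refactor_matrix(); cut_step()      # as in DASSL: refactor and/or cut step
            continue
        if not error_test_dae():               # cheap truncation error test, states only
            refactor_matrix(); cut_step()
            continue
        update_jacobian_and_dfdp()             # A and df/dp_i, once per accepted corrector
        while not solve_sensitivity_corrector():
            refactor_matrix()                  # refactor without reducing the step size
        if error_test_combined():              # states plus sensitivities
            return                             # step accepted
        refactor_matrix(); cut_step()          # combined error test failed; retry the step
```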
4.6 Numerical Experiments
The DSL48S code was tested on all of the example problems in [94], and produced the
same answers as the DASSLSO code presented in that paper. To evaluate the code
and the staggered corrector iteration further, it was tested on a large-scale pressure-swing adsorption problem [13, 81]. Included in the problem are a number of partial
differential equations which are discretized spatially using a backward difference grid.
The expressions in the DAEs of this problem are complicated and lead to a sparse
unstructured Jacobian. The problem is scalable by the number of spatial mesh points
in the adsorbers, and several different problem sizes were chosen. The number of
equations in this system is 30 · N + 9, where N is the number of mesh points in the
backward difference grid. The number of nonzero elements in the Jacobian of this
system is 180 · N + 11, and therefore the average number of variables in each equation
c is approximately 6.
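As a quick check on these counts: for N = 100 mesh points the model has 30·100 + 9 = 3009 equations and 180·100 + 11 = 18011 Jacobian nonzeros, so c ≈ 18011/3009 ≈ 6, consistent with the problem sizes reported below.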
In order to compare the staggered corrector method with the simultaneous cor-
rector method, an option exists in DSL48S to use the simultaneous corrector method
as described in [94]. The code was extensively tested with both the simultaneous and
staggered corrector options.
The results of two different numerical experiments are reported. In the first ex-
periment, the solution times were recorded as the size of the DAE was increased. In
the second experiment, the size of the DAE was fixed, and the solution times were
recorded as the number of parameters was increased.
The performance measures that are reported for the first experiment are ψ1 and
ψ2 /ψ1 for both the staggered and the simultaneous corrector method, as well as the
number of steps (Nsteps ), the number of corrector matrix factorizations (NMF ), and
the number of Jacobian updates (NJU ). In both the simultaneous and the staggered
corrector methods, the incremental cost for solving one sensitivity should be much
higher than the incremental cost for each additional sensitivity because the same
number of Jacobian updates are performed provided that np ≥ 1.
Table 4-2: Statistics for integration of the DAE without sensitivities.

    # Equations    τ0         Nsteps    NMF
    309            2.54 s     129       36
    3009           44.70 s    212       33
    6009           107.77 s   257       34
    9009           179.10 s   288       35
    12009          265.95 s   345       30
    15009          351.86 s   355       33
    18009          456.10 s   393       30
    21009          574.59 s   422       32
    30009          921.17 s   477       31
The timing was performed by embedding the DSL48S code within the ABACUSS¹
large-scale equation-oriented modeling system. ABACUSS is an example of a high-level
modeling environment with an interpretative software architecture. Rather than auto-
matically generating FORTRAN or C code which is then compiled and linked to form
a simulation executable, ABACUSS creates data structures representing the model
equations in machine memory, and during numerical solution these data structures
are “interpreted” by numerical algorithms to evaluate residuals, partial derivatives,
etc. Details of the ABACUSS implementation are given in [13].
Solving the combined sensitivity and DAE system requires more Jacobian eval-
uations than solving the DAE alone, and hence the use of automatic differentiation
to evaluate the Jacobian has much more impact on the sensitivity and DAE solution
time than on the DAE solution time. With automatic differentiation techniques, the
cost of a Jacobian evaluation is typically 2-3 times a function evaluation, although
rigorous upper bounds on this ratio are dependent upon the particular algorithm
employed. For the results reported in this chapter, the modified reverse-mode algo-
rithm described in [134] was used, which is particularly well-suited for large sparse
Jacobians.
Timing data comparing the staggered and simultaneous corrector methods were
collected for the case of a single parameter; the results are reported in Table 4-3.

¹ ABACUSS (Advanced Batch And Continuous Unsteady-State Simulator) process modeling software, a derivative work of gPROMS software, © 1992 by Imperial College of Science, Technology, and Medicine.
                      Staggered Corrector               Simultaneous Corrector
    # Equations   ψ1     Nsteps   NMF   NJU        ψ1     Nsteps   NMF   NJU
    309           0.79   126      38    126        1.52   130      38    326
    3009          0.80   204      37    204        1.45   195      35    453
    6009          0.83   246      32    246        1.71   253      31    562
    9009          0.86   277      33    277        1.53   286      30    617
    12009         0.85   309      31    309        1.77   308      30    682
    15009         0.89   337      32    337        1.65   338      29    729
    18009         0.89   362      30    362        1.74   363      30    776
    21009         0.94   398      29    398        1.79   408      30    873
    30009         0.89   455      30    455        1.62   457      28    979

Table 4-3: Results for one parameter sensitivity and DAE system (analytic sensitivity residuals).
Table 4-3 shows a dramatic difference in the computational cost for the staggered
and simultaneous corrector method with one parameter. Over a wide range of prob-
lem sizes, an average of 30% savings in the integration time was achieved with the
staggered corrector method with one parameter. This is largely due to the empirical
observation that matrix factorizations of the corrector matrix for this problem scale
less than cubically. The cost of a Jacobian update is a significant portion of the cost
of a matrix factorization, and thus the ability of the staggered corrector method to
reduce the number of Jacobian updates results in significant cost savings.
The second numerical experiment that was performed compared the solution cost as the number of parameters was increased.
              Staggered Corrector                 Simultaneous Corrector
# Equations   ψ2/ψ1   Nsteps   NMF   NJU          ψ2/ψ1   Nsteps   NMF   NJU
309           1.13    124      39    124          1.07    131      37    319
3009          1.06    188      33    188          1.10    190      32    441
6009          1.06    237      35    237          0.99    231      31    522
9009          1.08    274      31    274          1.12    274      29    590
12009         1.07    305      31    305          1.00    313      29    677
15009         1.06    325      32    325          1.06    327      29    708
18009         1.09    362      31    362          1.02    366      28    777
21009         1.07    381      31    381          1.02    374      34    815
30009         1.07    444      30    444          1.07    447      29    946

Table 4-4: Results for two parameter sensitivity and DAE system (analytic sensitivity residuals)
The same 3009 equation model as was used in the previous experiment was solved, using np = 1, . . . , 20. Figure 4-2 shows the incremental cost of adding additional parameters for both methods. As (4.36) indicates, the cost differential is more significant for fewer parameters. For ease of comparison, these results were obtained with the truncation error control on the sensitivities turned off.
A comparison was also made of the use of analytical residuals and directional
derivative residuals within the staggered corrector method. Table 4-5 shows the re-
sults of integrating the same 6009 equation pressure swing adsorption system with
an increasing number of parameters. The ψnp statistic is reported for both the ana-
lytical and the directional derivative residual methods. These results show that the
analytical method was favored over directional derivatives for all the tested problems,
and that the relative difference increases as np increases. Note that the results in this
table are not consistent with the results in Tables 4-3 and 4-4 because the parameters
used in the problem were different.
[Figure 4-2: Incremental cost ψnp /np as a function of the number of parameters np for the staggered and simultaneous corrector methods.]
Table 4-5: Comparison of analytic residuals and directional derivative residuals for
staggered corrector
4.7 Truncation Error Control on Sensitivities
There has been some uncertainty [77, 94] about whether truncation error control must
be maintained on the sensitivity variables as well as the state variables. The reason
often given for not including the sensitivity variables in the truncation error test is
that as long as the state variables are accurate, the sensitivity variables should be
fairly accurate. Furthermore, many applications that use sensitivity information could
arguably withstand a small amount of inaccuracy in the values of the sensitivities.
For example, in dynamic optimization it may not be necessary to have extremely
accurate sensitivity information when the optimizer is far from the solution.
There are significant computational advantages for skipping the truncation error
test on the sensitivities. The cost of the test itself is a linear function of problem size,
but even more significant cost savings may come from the integrator being able to
take larger steps on many problems. The larger step size will result even for problems
where the sensitivities never fail the truncation error test, since the test is also used
to choose the step size.
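The choice enters the integrator through the norm used in the step acceptance test. The sketch below shows a DASSL-style weighted RMS test in which the sensitivity components can be included in or excluded from that norm; the function names and tolerances are hypothetical, and this is not the DSL48S implementation.

import numpy as np

def weighted_rms_norm(err, y, rtol, atol):
    """DASSL-style weighted RMS norm: ||err_i / (rtol*|y_i| + atol)||."""
    w = rtol * np.abs(y) + atol
    return np.sqrt(np.mean((err / w) ** 2))

def step_accepted(err_state, y_state, err_sens, y_sens,
                  rtol=1e-6, atol=1e-8, include_sensitivities=True):
    """Accept the step only if the norm over the chosen variable set is <= 1.
    The same norm is used to propose the next step size, so excluding the
    sensitivities also changes the step-size sequence."""
    if include_sensitivities:
        err = np.concatenate([err_state, err_sens])
        y = np.concatenate([y_state, y_sens])
    else:
        err, y = err_state, y_state
    return weighted_rms_norm(err, y, rtol, atol) <= 1.0

With the sensitivities excluded, a step whose state error estimate is small is accepted even if the sensitivity error estimate is large, which is exactly the failure mode illustrated by the example below.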
It was found during the course of this work that when the sensitivities are solved
using the staggered corrector scheme, the truncation error test should be performed
on the sensitivity variables as well as the state variables. When the sensitivities are
not included in the truncation error test, the integrator is sometimes able to take
large enough steps to miss features of the dynamics of the sensitivity system. This
effect is not limited to the staggered corrector method, and should be observed in all
of the sensitivity methods detailed in this chapter, since they all use the BDF formula
to approximate the time derivatives of the sensitivity variables, an approximation that
becomes inaccurate if the step size is too large.
The problem can be seen by looking at the sensitivities of the following problem:
ẏ = −√(2gy) sin(γ)   (4.37)

ẋ = −√(2gy) cos(γ)   (4.38)

γ = p1 (t − tf)/(t0 − tf) + p2 (t − t0)/(tf − t0)   (4.39)
where p1 = 0.5, p2 = 0.5, to = 0, and tf = 0.589.
Figures 4-3 and 4-4 show that the sensitivities can contain significant error when
they are excluded from the truncation error test, even for problems such as this one
that are not considered particularly stiff or otherwise hard to solve. The ‘kinks’ in the
trajectories without error control are due to the integrator taking very large steps.
It is very tempting to exclude the sensitivities from the truncation error test
because this results in substantial computational savings for some problems. However,
doing so in this example resulted in the numerical code giving no warning when
the sensitivity trajectories became incorrect, even by large amounts. Therefore, it
is recommended that the truncation error test should always be performed on the
sensitivities in order to avoid this problem.
[Figures 4-3 and 4-4: Sensitivity trajectories ∂x/∂p1 and ∂y/∂p1 for (4.37–4.39) computed with and without the sensitivity truncation error test.]
4.8 Conclusions
The staggered corrector method for numerical sensitivity analysis of DAEs has been
shown to provide a strict lower bound on the computational cost of the two other
methods typically used, the staggered direct and the simultaneous corrector methods.
Experience with large sparse problems has shown that the ability of the staggered
corrector method to reduce the number of Jacobian updates leads to significant cost
savings that are especially apparent for large, sparse DAEs, but are also observed for
other types of DAEs.
It is possible to adapt the staggered corrector method for parallel execution. Par-
allelization may be accomplished by using different processors for each sensitivity
calculation and staggering the sensitivity calculation to follow the DAE corrector,
similar to the method described in [77].
The code DSL48S is reliable, easy to use, and efficient for sensitivities of large
sparse DAEs.
Chapter 5
5.1 Background on DAEs
The focus here is on the numerical solution of general nonlinear high-index differential-
algebraic equations (DAEs) of the form:
f (ẋ, x, y, t) = 0 (5.1)
where x ∈ R^mx are the differential state variables, y ∈ R^my are the algebraic state
variables, t ∈ T ≡ (t0, tf], and f : R^mx × R^mx × R^my × R → R^(mx+my). The partitioning
of the state variables into algebraic and differential variables does not imply a loss of
generality.
There are several basic types of DAEs. The following definitions are from [21].
Linear constant coefficient DAEs have the form:

Aẋ + Bx = f(t)   (5.2)

where A and B are (possibly singular) square matrices. Linear time varying DAEs
have the form:

A(t)ẋ + B(t)x = f(t)   (5.3)
Although very few systems in practical applications fall into this class, linear time
varying DAEs are important because there are proofs that exist only for this class
that lead to results and techniques that seem appropriate for more general nonlinear
DAEs.
This thesis is concerned mostly with nonlinear DAEs, which typically occur in
chemical process modeling. These may be either fully-implicit, which have the form:
f (ẋ, x, t) = 0 (5.4)
or semi-explicit, which have the form:
f1 (ẋ, x, t) = 0 (5.5)
f2 (x, t) = 0 (5.6)
The definition of differential index was given in the glossary of DAE terms at the
beginning of Chapter 2. Although there are other definitions of indices of a DAE, the
differential index is of most relevance in this thesis, and it shall be referred to simply as
the index. By this definition, the index of an ODE system is zero. Numerical solution
of DAEs is significantly different from the solution of ODEs [112, 130], but the solution
of DAEs with index ≤ 1 is relatively straightforward using one of several methods.
These methods include backward difference formula (BDF) methods such as those
implemented in DASSL [21, 113] and DASOLV [76], the extrapolation code LIMEX
[40], and implicit Runge-Kutta methods such as RADAU [70] and those reviewed by
[21] and [70]. Of these, it has generally become accepted that the BDF methods are
the most efficient for a broad spectrum of problems [51, 95], and a variant of DASSL
called DSL48S (see Chapter 4) was chosen for the numerical integrations performed in
this thesis. However, the numerical solution of index > 1 DAEs presents complications
[21]. Convergence proofs for the BDF method are given in [21, 78, 96], and generally
apply only to problems with index less than two and very restricted forms of higher-
index problems. This chapter describes the dummy derivative method for the solution
of what are loosely called high-index DAEs, i.e., those with index > 1.
this to be a necessary and sufficient condition.) It is interesting to note that while the
two sets of index-1 DAEs that satisfy each of these necessary conditions respectively
intersect, they do not coincide. In other words, there exist index-1 DAEs that satisfy
both, one, or none of these two conditions. So, in principle, both criteria should be
applied to cover a broader class of index-1 DAEs.
This criterion is still not of much practical use from a numerical point of view
because, due to rounding error, it is very difficult to tell the difference numerically be-
tween a singular matrix and one that simply has an extremely high condition number
[131]. Also, rank detection computations are very expensive for large-scale systems
[9]. Further, singularity of the Jacobian is a local property, but the nonsingularity
condition must hold globally. For these reasons, the concept of structural singularity
is used. The following definition is from [121]:
Definition 5.1 (Structural matrix). The elements of a structural matrix [Q] are
either fixed at zero or indeterminate values which are assumed to be independent of
one another.
Therefore, the entries in structural matrices differentiate only between hard zeros (0)
and nonzeros (∗) [139]. A given matrix Q is called an admissible numerical realization
with respect to [Q] if it can be obtained by fixing all indeterminate entries (∗) of [Q] at
some particular values. Two matrices A and B are said to be structurally equivalent
if both A and B are numerical realizations of the same structural matrix [Q].
In other words, a property of a matrix is a generic or structural property if it holds
in a neighborhood of the nonzero entries of the matrix. A matrix A is structurally
nonsingular if it is an admissible numerical realization of the structural matrix [A]
and there exists a permutation P on the structural matrix such that P [A] has a
nonzero diagonal, which is referred to equivalently as a maximum transversal, an
output set assignment, or an augmenting path. It can be easily demonstrated that a
structurally singular matrix is also singular, but the converse is not necessarily true.
The advantage of using structural properties is that evaluating structural properties
is much less computationally intensive than numerical evaluation and exact in the
sense that it is not subject to rounding error.
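Because structural nonsingularity is equivalent to the existence of a maximum transversal, it can be tested with a bipartite augmenting-path matching. The sketch below is a generic version of that test; it is not the AUGMENTPATH routine of [107], and the function name and data layout are hypothetical.

def structurally_nonsingular(pattern):
    """pattern[i] = set of column indices holding nonzero (*) entries in row i.
    The structural matrix is nonsingular iff a perfect matching of rows to
    columns (a maximum transversal / output set assignment) exists."""
    match_col = {}                      # column -> row currently assigned to it

    def augment(row, visited):
        for col in pattern[row]:
            if col in visited:
                continue
            visited.add(col)
            # Use the column if it is free, or if its current row can move on.
            if col not in match_col or augment(match_col[col], visited):
                match_col[col] = row
                return True
        return False

    return all(augment(row, set()) for row in range(len(pattern)))

# Row 0 touches columns {0, 1} and row 1 touches {1}: a transversal exists.
print(structurally_nonsingular([{0, 1}, {1}]))   # True
# Both rows compete for column 0: structurally singular.
print(structurally_nonsingular([{0}, {0}]))      # False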
When the sufficient conditions for a DAE to be at most index 1 are modified
so that structural singularity of the relevant Jacobian is examined, the criteria are
no longer sufficient conditions. That is, there are DAEs whose Jacobians are structurally
nonsingular but numerically singular. On the other hand, if the Jacobian is
structurally singular, the DAE definitely has an index ≥ 1. It is interesting to note
that there are index-1 DAEs for which the Jacobian with respect to the highest order
time derivatives is structurally singular, for example:
x1 + x2 = 1 (5.8)
Differentiating this class of “special” index-1 DAEs once produces an index-0 DAE
(at least in the structural sense), which provides a means to detect them.
and also additional hidden constraints that are obtained by differentiating some of the
equations of the DAE. The initial condition (5.9) must be consistent, which practically
means that it must allow us to find consistent initial values for the state variables and
their time derivatives at t0 so that numerical integrations may be started smoothly.
Consistent initial values are defined as [26]:
Definition 5.2 (Consistent Initial Values). Initial values x0 are consistent if they
admit a smooth solution in [0, t] for t → 0 and x(0) = x0 .
Unlike the solution of ODEs, not all initial conditions (5.9) of DAEs admit a smooth
solution. So, from a practical standpoint solving DAEs requires both specifying a
valid (5.9) and then being able to use it to obtain consistent initial values.
In [139], the r components of x(t0 ), ẋ(t0 ) that can be assigned an arbitrary value
and still allow a consistent initialization are defined as dynamic degrees of freedom.
The number of dynamic degrees of freedom is dependent on the index of the DAE,
but not related by any explicit formula. The issue of finding a valid set of initial
conditions is addressed later in this chapter.
Once a consistent initial condition has been obtained, finding consistent initial
values is nontrivial from a practical point of view. The consistent initial values must
satisfy the nonlinear algebraic system formed by (5.1), (5.9) and the nonredundant
additional constraints formed by differentiating (5.1) one or more times with respect
to time, the reliable solution of which is difficult with current large-scale root-finding
technology. Moreover, consistent initial values are necessary because weak instabilities
have been shown to occur when the DAE is solved with a BDF or implicit Runge-
Kutta method from inconsistent initial values [96]. There are several methods for
finding consistent initial values (see [95] for a review, and also [21, 93]) but this
problem is not in the scope of this thesis, and it is assumed to be possible to find a
set of consistent initial values for all of the problems discussed.
High-index DAEs arise when the number of truly independent state variables is less
than the number of variables that have time derivatives appearing in the DAE, or
more simply stated, when there are explicit and/or implicit algebraic relationships
among differential state variables. In general, high-index DAEs may result from sev-
eral sources. They may come from modeling assumptions that were made about the
physical system, which in turn may arise from a lack of information about the parameters or the physical phenomena involved. As an example, consider the following model of a tank whose outlet flow is determined by a valve:
dh/dt = Fout − Fin   (5.10)

Fout = ah   (5.11)
where Fin is the flow into the valve, Fout is the flow out of the tank, and a is a
constant determined by the characteristics of the output orifice. This problem is not
high-index if the design degree of freedom is satisfied with a constraint on the inlet flow, for example:

Fin = F̄(t)   (5.12)
On the other hand, a high-index prescribed path control problem can be created by
satisfying the design degree of freedom with the constraint:
h = f̄(t)   (5.13)
Note that the DAE given by (5.10–5.11) and (5.13) is high-index because the differ-
ential state variable h is explicitly constrained by (5.13).
In this thesis the interest in high-index DAEs is primarily to create an algorithm
that can handle path constrained dynamic optimization problems. However, the
above discussion has shown that it is sometimes convenient to deliberately formulate
a high-index model for simulation purposes. Therefore, the interest here is on methods
that can solve general nonlinear DAEs of arbitrary index.
error control [1, 25]. To get around some of these problems, several methods have been
proposed that require some form of manipulation of the high-index DAE. Since the
objective is to find a method that works for general arbitrary-index DAEs, attention
is restricted here for the most part to methods that are not limited to a particular
form or index of DAE.
As noted in [21, 60], one method for obtaining numerical solutions to high-index
DAEs is to differentiate a subset of the equations in the DAE until an index-1 or
0 DAE is obtained that may be solved using standard numerical techniques. This
introduces the concept of underlying DAEs (UDAEs):
Definition 5.3 (UDAE). A DAE Ā is said to be an underlying DAE for the DAE
A if for every equation e ∈ A there is a corresponding ē ∈ Ā where ē = dn e/dtn ,
n ≥ 0.
For any DAE that is sufficiently differentiable, there are associated UDAEs that are
ODEs. These ODEs are termed underlying ODEs (UODE). Any solution to the
original DAE must also satisfy all associated UDAEs, including UODEs. However, it
is possible that some equations in the set of UDAEs may be redundant. For example,
consider the index-2 DAE:
ẋ1 = x2 + x1 (5.14)
x1 = f (t) (5.15)
One UDAE for this system may be obtained by differentiating (5.15) once:
ẋ1 = x2 + x1   (5.16)

ẋ1 = df(t)/dt   (5.17)
and another UDAE may be obtained by differentiating both (5.14) and (5.15) once:
and a third UDAE may be obtained by differentiating (5.14) once and (5.15) twice:
The UDAEs (5.18–5.19) and (5.20–5.21) are also UODEs. Equations (5.16), (5.17),
(5.18), (5.19), and (5.21) are nonredundant equations that must be satisfied by any
valid numerical solution to (5.14–5.15). In general, the numerical difficulties involved
with solving high-index DAEs are associated with finding a solution that satisfies all
nonredundant equations in the set of all possible UDAEs.
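The differentiations needed to form a UDAE are mechanical and can be scripted. The sketch below applies SymPy to the index-2 example (5.14)–(5.15), with sin(t) standing in for the unspecified f(t), to form the UDAE (5.16)–(5.17) and expose the hidden constraint on x2.

import sympy as sp

t = sp.symbols("t")
x1, x2 = sp.Function("x1")(t), sp.Function("x2")(t)
f = sp.sin(t)                          # stand-in for the unspecified f(t)

eq_514 = sp.Eq(x1.diff(t), x2 + x1)    # (5.14)
eq_515 = sp.Eq(x1, f)                  # (5.15)

# UDAE (5.16)-(5.17): keep (5.14), differentiate (5.15) once.
eq_517 = sp.Eq(eq_515.lhs.diff(t), eq_515.rhs.diff(t))
print(eq_517)                          # Eq(Derivative(x1(t), t), cos(t))

# Hidden constraint implied by the UDAE: df/dt = x2 + x1, so x2 = df/dt - x1.
print(sp.solve(sp.Eq(eq_517.rhs, eq_514.rhs), x2))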
An example of a high-index DAE that will be used throughout this chapter for
illustrative purposes is the following model of a pendulum with a rigid rod derived in
Cartesian coordinates (see Figure 5-2):
mẍ + (λ/L) x = 0   (5.22)

mÿ + (λ/L) y = −mg   (5.23)

x² + y² = L²   (5.24)
where x and y are the Cartesian coordinates, λ is the tension in the rod, L is the
length of the rod, m is the mass of the pendulum bob, and g is the gravitational
constant.
The DAE (5.22–5.24) is index-3 because the length constraint (5.24) constrains
the x and y differential state variables. An index-1 underlying DAE may be obtained
by differentiating the length constraint (5.24) twice. When this is done, the following UDAE is obtained:
mẍ + (λ/L) x = 0   (5.25)

mÿ + (λ/L) y = −mg   (5.26)

2xẍ + 2ẋ² + 2yÿ + 2ẏ² = 0   (5.27)
Note that this UDAE requires the specification of four initial conditions, rather than
the two required for the original DAE.
Figure 5-3 shows the numerical results obtained by solving the reduced-index
model (5.25–5.27) over a large number of oscillations with the initial conditions x0 = 1
and ẏ0 = 0 and the parameters m = 1 and L = 1 using a standard BDF method
integrator. The numerical code produced no error messages or any other sign that
numerical difficulties had been encountered, but it is obvious that there is some
problem because this is a frictionless system but the bob does not reach the same x
point at each oscillation. This can be seen more clearly in Figure 5-4, which plots the length of the pendulum throughout the simulation and shows that the length invariant is not satisfied. Thus, although the analytic solution of (5.25–5.27) must satisfy the length constraint (5.24), the numerical solution drifts away from it.
[Figure 5-3: x position of the pendulum computed from the reduced-index model (5.25–5.27) with a BDF (Gear) method.]
This phenomenon is called constraint drift and was noted in [57, 58, 59]. Several
methods have been proposed to handle this problem. In general, nonlinear implicit
constraints are not enforced when the problem is discretized. The simplest solution
is to integrate a UDAE using step sizes that are hopefully small enough to keep the
constraint drift to an acceptable minimum. However, it was shown in [54] that this
strategy may not work because differentiating a nonlinear constraint can affect the
stability properties of the DAE.
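Constraint drift is easy to reproduce with an off-the-shelf integrator. The sketch below eliminates λ from (5.25)–(5.27) analytically, integrates the resulting ODE, and monitors the drift of the length invariant; the solver, tolerances, and the value g = 9.8 are assumptions for the illustration, not the setup used to produce Figures 5-3 and 5-4.

import numpy as np
from scipy.integrate import solve_ivp

m, g, L = 1.0, 9.8, 1.0

def rhs(t, z):
    """Reduced-index pendulum (5.25)-(5.27) with lambda eliminated:
    lambda = m*L*(vx**2 + vy**2 - g*y) / (x**2 + y**2)."""
    x, y, vx, vy = z
    lam = m * L * (vx**2 + vy**2 - g * y) / (x**2 + y**2)
    return [vx, vy, -lam * x / (m * L), -g - lam * y / (m * L)]

# Release from x0 = 1, y0 = 0 with zero initial velocity.
sol = solve_ivp(rhs, (0.0, 300.0), [1.0, 0.0, 0.0, 0.0],
                method="BDF", rtol=1e-6, atol=1e-8, dense_output=True)

t = np.linspace(0.0, 300.0, 2001)
x, y = sol.sol(t)[:2]
print("max |x^2 + y^2 - L^2| =", np.max(np.abs(x**2 + y**2 - L**2)))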
One constraint stabilization approach considers the UODE:

ẋ = f(x, t)   (5.28)

and the constraints obtained while deriving the UODE from the DAE:

g(x, t) = 0   (5.29)

where g : R^mx × R → R^ng. By introducing new variables μ ∈ R^ng the following system
[Figure 5-4: length of the pendulum computed from the reduced-index model, drifting away from L = 1.]
may be obtained:

ẋ = f(x, t) + (∂g/∂x)ᵀ μ   (5.30)

g(x, t) = 0   (5.31)
which is a semi-explicit index-2 DAE. The solution of all the algebraic variables in
this problem can be shown to be zero, and the constraints and all of their derivatives
are explicitly enforced. Therefore, the numerical solution does not exhibit constraint
drift and general multistep methods can be used to solve the problem. In [21] this
constraint stabilization technique was used to solve the high-index pendulum (5.22–
5.24), and it was shown to eliminate the problem of constraint drift. However, a
numerical solution to the resulting semi-explicit index-2 DAE was obtained only by
excluding the algebraic variables from the integration error control estimates, and
solutions at very tight tolerances could not be obtained because the algebraic variables
kept the corrector iteration from converging. This could be due to the fact that the
condition number of the corrector matrix of a high-index DAE increases very rapidly
as the stepsize tends toward zero [1, 25]. Constraint stabilization methods were also
discussed and applied to constrained mechanical systems in [54].
Another method for obtaining numerical solutions to high-index DAEs is through
the use of regularizations [17, 114]. A regularization of a DAE is the introduction
of a small parameter into the DAE in such a way that solution of the regularized
DAE approaches the solution of the high-index DAE as the parameter approaches
zero. Regularization is a standard method that has been employed by engineers for
decades, such as the use of a weir equation for a vessel filled with an incompressible
fluid and any “controller” type equation [53]. Essentially regularization creates a
very stiff system where the solution decays very quickly to the solution manifold of
the high-index DAE. Regularizations have been applied with mixed success to high-
index DAEs [21], but their use has not advanced to the point where they may be
easily applied to arbitrary-index general DAEs. Moreover, in [2] it is shown that
the method of dummy derivatives described later in this chapter is more efficient
numerically than a regularization technique, due to the high degree of stiffness of the
regularized system.
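As a concrete illustration of the idea (not of any specific method from the references), the sketch below regularizes the prescribed-level tank problem (5.10), (5.11), (5.13) with a high-gain “controller” equation: the inlet flow is chosen so that the level relaxes to f̄(t) on a time scale ε. The valve constant, the choice of f̄, and the tolerances are assumptions.

import numpy as np
from scipy.integrate import solve_ivp

a = 2.0                                    # valve constant in (5.11), assumed

def hbar(t):
    return 1.0 + 0.1 * np.sin(t)           # assumed prescribed level fbar(t)

def regularized_tank(eps):
    """Replace the algebraic constraint h = fbar(t) by a fast controller so that
    dh/dt = -(h - fbar(t))/eps; the index drops, but the ODE becomes stiff."""
    def rhs(t, z):
        h = z[0]
        Fout = a * h                        # (5.11)
        Fin = Fout + (h - hbar(t)) / eps    # high-gain controller replacing (5.13)
        return [Fout - Fin]                 # (5.10)
    return rhs

for eps in (1e-1, 1e-3, 1e-5):
    sol = solve_ivp(regularized_tank(eps), (0.0, 10.0), [0.5],
                    method="BDF", rtol=1e-8, atol=1e-10)
    print(f"eps={eps:g}  steps={sol.t.size}  "
          f"|h - hbar(tf)| = {abs(sol.y[0, -1] - hbar(sol.t[-1])):.2e}")

As ε is reduced the tracking error shrinks, at the price of an increasingly stiff integration.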
There have been several attempts to develop numerical methods for direct solution
of high-index DAEs. Projected Runge-Kutta collocation methods have been shown
to work for some classes of high-index DAEs [4, 90]. Another interesting approach
is the least squares projection technique described in [11, 27, 28, 29]. This method
works by determining a local set of coordinates for x and then solving the derivative
array equations numerically using least-squares for ẏ. Although these approaches
seem promising, neither has been developed to the point where it is as reliable and
easy to use as BDF methods are for index-1 DAEs.
Finally, a method for finding an index-1 DAE with the same solution set as the
high-index DAE was described in [5, 58, 60, 139]. This method is termed the elim-
ination method of deriving an equivalent index-1 DAE. Essentially the elimination
method takes the nonredundant equations in the UDAEs and substitutes them into
the original high-index DAE, eliminating some variables and thus creating an index-1
DAE with a reduced number of degrees of freedom. As an example of this method
applied to the pendulum (5.22–5.24) the nonredundant equations from the UDAEs
are the ones created by differentiating the length constraint (5.24) twice:
2xẋ + 2yẏ = 0   (5.32)

2xẍ + 2ẋ² + 2yÿ + 2ẏ² = 0   (5.33)
When (5.32–5.33) and (5.22) are used to eliminate ẍ and ẋ, an equivalent index-1
DAE is obtained:

m(−y²ẏ²/x³ − ẏ²/x − yÿ/x) + (λ/L) x = 0   (5.34)

mÿ + (λ/L) y = −mg   (5.35)

x² + y² = L²   (5.36)

This system can be explicitly solved for its highest order time derivatives (λ, ÿ, x)
in terms of (ẏ, y) and the parameters to get:

λ = −(y³mg − ymgL² + mẏ²L²) / (L(y² − L²))   (5.37)

ÿ = (y⁴mg − 2y²mgL² + ymẏ²L² + mgL⁴) / (mL²(y² − L²))   (5.38)

x = ±√(−y² + L²)   (5.39)
These equations no longer uniquely determine the highest order time derivatives when
y → ±L. Therefore, this system is either locally high-index or locally unsolvable when
y = ±L. It can easily be seen that no finite number of differentiations of this DAE
will produce a UDAE that uniquely defines the highest order time derivatives when
y = ±L, since the denominators of all time derivatives of (5.37–5.38) will contain
powers of (y² − L²). It appears that this system is locally unsolvable when y = ±L,
since the definition of solvability given in [21] would seem to require that the index
be finite. This same solvability phenomenon shall be seen again in the discussion of
the dummy derivative method.
However, there is another equivalent index-1 DAE, formed by eliminating ÿ
and ẏ instead of ẍ and ẋ, which is solvable when x = 0 but not when y = 0.
Theoretically the elimination method is valid for solving high-index DAEs without
constraint drift, and it does work for small systems such as the one demonstrated
above, but in practice the algebraic elimination required is extremely computationally
expensive for large nonlinear DAEs.
A more promising approach is the somewhat related method of dummy derivatives
[97], which is described in the later sections of this chapter. Although this
method was originally described in [97], it required considerable development to cre-
ate a practical automated algorithm. This method allows for solution of a broad class
of nonlinear arbitrary-index DAEs efficiently and with guaranteed accuracy, and it
does not require expensive algebraic eliminations. Since the method is based on the
ability to find the nonredundant UDAE equations that permit the solution for con-
sistent initial values, the next sections of this chapter are used to describe Pantelides’
algorithm for obtaining a consistent initial condition for high-index DAEs.
5.2 Consistent Initialization of High-Index DAEs
In general, all of the UDAEs of a DAE must be satisfied at the initial condition;
however, only a subset of the UDAE equations is nonredundant. The set of all
UDAEs of a DAE is also called the derivative array. All the equations in the derivative
array that are nonredundant and constrain the state variables or their time derivatives
must be found in order to find a numerical solution using the BDF method. This
problem is related to the problem of finding a consistent initial condition, which
requires determining which equations must be differentiated, and how many times,
in order to constrain the independent initial conditions.
The problem of determining which equations in (5.1) and (5.9) need to be dif-
ferentiated at the initial condition was addressed in a structural sense in [107]. In
this section, Pantelides’ structural algorithm is described. Also described are mod-
ifications to the algorithm, and an implementation of the modified algorithm. In
later sections this modified algorithm is incorporated into a general framework for
obtaining numerical solutions to high-index DAEs.
that gives a nonzero structural diagonal of maximum size.
[Figure 5-5: bipartite graph of the system (5.40–5.42): equation nodes (5.40), (5.41), (5.42) and variable nodes v̇, F, ẋ.]
Consider the following very simple system that was given in [97]:
mv̇ = F (5.40)
ẋ = v (5.41)
x = x̄(t) (5.42)
The solution to this problem may be interpreted as the force F required to make the
mass m follow a given trajectory x̄(t). Note that the selection of x as the input to the
system has caused it to be high-index. If instead F had been selected as the input
the problem would be an index-0 system of ODEs in state-space form.
Pantelides’ algorithm may be applied to the system (5.40–5.42) to determine any
additional equations that must be explicitly satisfied by the initial condition. For
this example, the structural graph will be shown using circles for the vertices of a
graph, and the lines connecting them as the graph edges. The system (5.40–5.42) is
structurally represented by Figure 5-5.
The bold edges show one possible attempt to find a matching, but no matching
is possible on this graph because (5.42) has no edges connecting it with a variable.
Therefore, this graph is structurally singular with respect to (5.42), which is differentiated, giving:

ẋ = dx̄/dt   (5.42′)
[Figure 5-6 graph: equation nodes (5.40), (5.41), (5.42′); variable nodes v̇, F, ẋ.]
Figure 5-6: Graph of index-3 system after one step of Pantelides’ algorithm
[Figure 5-7 graph: equation nodes (5.40), (5.41′), (5.42′′); variable nodes v̇, F, ẍ.]
Figure 5-7: Graph of index-3 system after two steps of Pantelides’ algorithm
which replaces the (5.42) equation node in Figure 5-5 to produce the graph in Figure 5-6
for the next step of the algorithm.
It can be seen from Figure 5-6 that no matching exists because (5.41) and (5.42′)
each have only one edge connecting to the same variable, ẋ. In the scenario depicted
in Figure 5-6, the algorithm has assigned ẋ to (5.41), and cannot assign (5.42′). Since
the two equations cannot both be assigned to ẋ, they are a structurally singular subset
and are differentiated, giving:

ẍ = v̇   (5.41′)

ẍ = d²x̄/dt²   (5.42′′)
The UDAE returned by the algorithm on the last step is index-1 because it contains an algebraic
state variable F . The final UDAE returned by the algorithm must be structurally
index-1, since the algorithm is guaranteed to return a UDAE with structural index at
most one, and it cannot be index-0 since there is an algebraic variable F in the graph.
Therefore, the original DAE (5.40–5.42) was index-3, since (5.42) was differentiated
twice, and the final UDAE returned by the algorithm is index-1. The initial conditions
must satisfy all of the equations in all of the UDAEs produced by the algorithm, i.e.:
(5.40–5.42), (5.41′), (5.42′), and (5.42′′).
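For this example the consistent initialization can be computed directly from the UDAE equations listed above. The sketch below does so symbolically with SymPy, using sin(t) as a stand-in for the unspecified trajectory x̄(t).

import sympy as sp

t, m = sp.symbols("t m")
xbar = sp.sin(t)                         # stand-in for the prescribed trajectory

# Consistent initial values at t0 follow from (5.42), (5.42'), (5.42''),
# together with (5.41) and (5.40):
t0 = 0
x0 = xbar.subs(t, t0)                    # (5.42):   x = xbar(t0)
v0 = sp.diff(xbar, t).subs(t, t0)        # (5.42'):  xdot = v, so v = d(xbar)/dt at t0
vdot0 = sp.diff(xbar, t, 2).subs(t, t0)  # (5.42''): xddot = vdot at t0
F0 = m * vdot0                           # (5.40):   F = m vdot
print(x0, v0, vdot0, F0)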
of equations and the differentiation approach employed.
Modifications to Pantelides’ algorithm are also necessary to keep the differentiated
DAE from having an order higher than 1, so that the standard DAE solvers may be
used. This criterion is enforced simply by introducing new equations and variables as
necessary. Thus, when:
f1 (ẋ, x, y, t) = 0 (5.43)
is differentiated:

(∂f1/∂ẋ) ȧ + (∂f1/∂x) ẋ + (∂f1/∂y) ẏ + ∂f1/∂t = 0   (5.44)

a = ẋ   (5.45)
is obtained, where (5.45) is a new equation and a is a new variable introduced into the
model unless it is already present. This is somewhat more complicated in practice
than it would seem because of the desire not to introduce a if it is already in the
problem.
A sketch of the index detection and differentiation algorithm REDUCE-INDEX
that has been implemented in ABACUSS is given below. It uses the AUGMENT-
PATH algorithm given in [107]. The inputs to the algorithm are the DAE f , the
initial conditions φ, the time derivatives of the differential state variables ẋ, and the
algebraic state variables y. Upon successful termination, the algorithm returns the
extended DAE F̄ , its initial conditions D, and the structural index of F̄ .
REDUCE-INDEX(f ,φ,ẋ,y)
3. ∀ {i} ∈ FIndex :
(a) Apply the AUGMENTPATH algorithm to equation {i} and variable set z
to obtain a set of structurally singular equations W ⊆ FIndex .
(b) ∀ {j} ∈ W :
ii. For every algebraic variable q ∈ y that is in equation {j} set ẋ := ẋ∪ q̇,
y := y \ q, and z := (z \ q) ∪ q̇.
4. ∀ {i} ∈ FIndex :
(a) Apply the AUGMENTPATH algorithm to equation {i} and variable set ẋ
to obtain a set of structurally singular equations W ⊆ FIndex .
6. F̄ := FIndex−1 ∪ Q.
7. ∀ {i} ∈ G:
(a) Apply the AUGMENTPATH algorithm to equation {i} and variable set
ẑ = z ∪ x to obtain a set of structurally singular equations W ⊆ G.
8. RETURN(Index, F̄ , D, F0 , . . . , FIndex−1 ).
Like the original algorithm in [107], REDUCE-INDEX will terminate if and only
if the corresponding extended system:
f (ẋ, x, y, t) = 0 (5.46)
hi (ẋ, x) = 0 ∀i = 1 . . . mx (5.47)
is structurally nonsingular with respect to all occurring variables. The exact form of
(5.47) is not important since the condition is structural, but in practice it could be
thought of as a difference formula relating ẋ and x. In [139] the termination condition
(5.46–5.47) is noted to be equivalent to nonsingularity of the structural matrix pencil:

pat(∂F/∂ż) + pat(∂F/∂z)   (5.48)
Step 7 is a check of the structural nonsingularity of the extended DAE and its
initial condition with respect to (ẋ, x, y) to ensure that the initial conditions are
consistent in a structural sense. This check has proven to be useful in practical
modeling activities, although it by no means ensures that it is easy to obtain a set of
consistent initial values.
Implementation of this algorithm in ABACUSS has proved to be very useful for
several reasons:
• Structurally ill-posed problems are detected before the start of the integration.
• The user is informed that the DAE is high-index, which is important because
the formulation of a high-index DAE could be unintentional.
• The correct equations for initializing a high-index DAE are automatically de-
rived and reported.
adds variables so that the entire extended system may be solved. The main advan-
tage of the dummy derivative algorithm is that it does not require computationally
expensive algebraic substitutions. The key to the method is to pick a set of time
derivatives to replace by dummy algebraic variables.
The method of dummy derivatives is described in detail in [97], but it is summa-
rized here for the purpose of clarity in the rest of this chapter. For convenience, let
us rewrite the DAE as an operator equation:
Fz = 0 (5.49)
algebraic variables.
5. The equivalent index-1 DAE is constructed by including all original and all
differentiated equations, with dummy algebraic variables substituted for those
variables indicated in Step 3.
a = ẋ   (5.50)

b = ẏ   (5.51)

mȧ + (λ/L) x = 0   (5.52)

mḃ + (λ/L) y = −mg   (5.53)

x² + y² = L²   (5.54)

2xẋ + 2yẏ = 0   (5.55)
derivatives z1 = (ȧ, ḃ, ẋ, ẏ, λ) and the Jacobian H11 is:
H11 = [ 2x   2y   0   0   0 ]   (5.57)
There are two choices of columns that would make M11 nonsingular, because neither
x nor y is nonzero for all t. The states x and y are not zero simultaneously because
of the length constraint. When either x or y is locally zero, the choice of M11 is clear.
When neither x nor y is locally zero, either choice is valid.
If ȧ is chosen as the dummy derivative, z2 = [ẋ]. Even though ẋ does not
appear to be a time derivative of one order less than ȧ, the substitution ẍ = ȧ
was made when doing the differentiation to obtain (5.56), and therefore ẋ is a time
derivative of one order less than ȧ. The H21 matrix is simply:
H21 = [ 2x ]   (5.58)
Depending on the choice of M11 matrices, the following two equivalent index-1
formulations of (5.22–5.24) can be obtained:
a = x̄   (5.59)

b = ẏ   (5.60)

mā + (λ/L) x = 0   (5.61)

mḃ + (λ/L) y = −mg   (5.62)

x² + y² = L²   (5.63)

2xx̄ + 2yẏ = 0   (5.64)

2xā + 2x̄² + 2yḃ + 2ẏ² = 0   (5.65)
and:
a = ẋ   (5.66)

b = ȳ   (5.67)

mȧ + (λ/L) x = 0   (5.68)

mb̄ + (λ/L) y = −mg   (5.69)

x² + y² = L²   (5.70)

2xẋ + 2yȳ = 0   (5.71)

2xȧ + 2ẋ² + 2yb̄ + 2ȳ² = 0   (5.72)
where ā, b̄, x̄, ȳ denote dummy derivatives (that is, they are algebraic state variables).
This example is useful because it illustrates several key points about the dummy
derivative algorithm. Since in general the choice of a nonsingular square sub-matrix
at each step of the algorithm is not unique, a general high-index DAE will have a
“family” of equivalent index-1 DAEs. Furthermore, it is important to recognize that
the dummy derivative method relies on numerical nonsingularity of the M matrices,
which in general may be a local property of the DAE. Therefore, it may become nec-
essary to perform dummy derivative pivoting between the family of equivalent index-1
DAEs, in which different M matrices and therefore a different set of dummy deriva-
tives are selected during the solution of the DAE depending on the local properties
of H.
Detecting the need for dummy derivative pivoting would seem to require the rect-
angular H matrices to be factored at every integration step of the simulation, which
is computationally prohibitive. In [97] it is demonstrated that nonsingularity of Mi
implies nonsingularity of Hi−1 , hence only Hi−1 need be monitored. In general, each
matrix M could be as large as the Jacobian matrix of the original high-index system,
although in many practical problems it has been found to be much smaller. Described
below is a strategy that avoids this expensive factorization at each step, which other-
wise would make the dummy derivative method extremely costly in a similar manner
to the staggered direct method for sensitivities described in Chapter 4.
5.3 Differentiation in ABACUSS
Derivation of the family of equivalent index-1 DAEs required by the method of dummy
derivatives in an automated fashion requires the application of computational differ-
entiation technology. This section describes the differentiation strategy adopted for
implementation of the dummy derivative method in ABACUSS.
IVP is characterized by a sequence of discrete changes to the functional form of the
system (5.1) along the trajectory. For example, in the case of the activation of an
inequality path constraint at a specific time, the active inequality must be augmented
to the DAE and REDUCE-INDEX applied to derive a family of equivalent index-1
models for this new DAE. These changes in the functional form of the DAE pose
problems for the code generation approach: either new code must be generated and
linked each time the DAE changes, or all possible functional forms must be antici-
pated a priori and appropriate code generated for each one. The former approach
is very costly at run time, whereas the latter is in general combinatorial, since, for
example, all admissible combinations of inequality constraint activations must be enu-
merated, even if only a small subset is encountered along the trajectory. On the other
hand, the interpretative approach allows for very efficient manipulation of the DAE
by merely adjusting the relevant data structures and performing differentiation in an
incremental manner as necessary along the trajectory.
Details of the ABACUSS implementation are given in [13]. Sketched here is how
a typical problem is processed. The user defines the model equations in natural
language using the high level ABACUSS input language. In the first phase, this input
is then compiled into an intermediate representation of the model held in appropriate
data structures. In the second phase, a particular calculation is selected and the
ABACUSS simulation executive transforms this intermediate representation into the
run-time data structures for a calculation. In particular, a function fi in (5.1) is stored
in a binary tree using a dynamic data structure. Given values for the unknowns, a
subroutine can then “interpret” this binary tree to evaluate the function.
Derivatives of all of the functions are obtained using automatic differentiation
technology. Automatic differentiation is a method of obtaining derivatives symbolically
without the unnecessary and highly inefficient expression “swell” produced by naive
symbolic differentiation. The automatic differentiation in ABACUSS uses an algorithm
detailed in [135, 134] that is highly efficient for large sparse equations.
5.4 Dummy Derivative Pivoting
The key to the dummy derivative method is the selection of dummy derivatives. As
shown in Section 5.2.5, the choice is in many cases non-unique, and a practical
implementation must select the set of dummy derivatives, monitor the set to see if it
is no longer optimal or valid, and switch (pivot) among equivalent index-1 DAEs as
necessary.
The problem of selecting an appropriate equivalent index-1 DAE from the family
of possible equivalent index-1 DAEs depends on selecting a nonsingular square sub-
matrix Mi1 . Since structural algorithms are needed in the REDUCE-INDEX algo-
rithm, it might seem reasonable to choose any structurally nonsingular submatrix of
Mi1 . However, as the example in Section 5.2.5 demonstrates, structural criteria are
not sufficient to detect the local points at which an equivalent index-1 model is not
valid. A reliable method for selecting nonsingular sub-matrices from a rectangular
matrix (provided that such a matrix exists) is to use Gaussian elimination with full
pivoting. This Gaussian elimination has been implemented using the MA48 code [45],
which is capable of LU factoring large sparse rectangular matrices. Although this step
is fairly expensive computationally, the overall expense may be reduced by minimizing
the number of times that it is done during solution of the DAE, as described below.
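A minimal sketch of this selection step is shown below. It uses a rank-revealing QR factorization with column pivoting as a stand-in for the full-pivoting Gaussian elimination performed with MA48; the function name, tolerance, and the assumption that H has full row rank are all part of the illustration.

import numpy as np
from scipy.linalg import qr

def select_pivot_columns(H, tol=1e-10):
    """Pick columns of the wide matrix H that form a well-conditioned square
    submatrix M; the variables owning those columns become dummy derivatives."""
    _, R, perm = qr(H, pivoting=True)
    rank = int(np.sum(np.abs(np.diag(R)) > tol * abs(R[0, 0])))
    if rank < H.shape[0]:
        raise ValueError("H is numerically rank deficient: no valid pivot set")
    return perm[:H.shape[0]]

# H11 of the pendulum, eq. (5.57), evaluated near x = 0 (y = -1): only the
# second column can be pivoted on, so ḃ must become the dummy derivative.
H11 = np.array([[2 * 0.0, 2 * (-1.0), 0.0, 0.0, 0.0]])
print(select_pivot_columns(H11))       # -> [1]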
The questions that arise when the example in Section 5.2.5 is examined are what
happens when the Mi1 matrix of an equivalent index-1 DAE becomes singular,
and whether there are reasons to prefer one equivalent index-1 DAE over another
provided their respective Mi1 matrices are nonsingular.
Dummy derivative pivoting is required because the current matrices Mi1 may
become locally singular at certain points in state space. The consequence of a singular
Mi1 is that the equivalent index-1 DAE may cease to define uniquely all of its highest-
order time derivatives at some points in the solution trajectory. To see this, consider
one of the equivalent index-1 models (5.59–5.65) of the pendulum. When this system
is solved for explicit expressions for the highest-order time derivatives, the following
system is obtained:
a = x̄   (5.73)

b = ẏ   (5.74)

λ = −(L²ymg − L²mb² − y³mg) / ((−y² + L²)L)   (5.75)

ḃ = −(yb²L² + gy⁴ − 2gy²L² + gL⁴) / ((−y² + L²)L²)   (5.76)

x = ±√(−y² + L²)   (5.77)

x̄ = ∓ yb / √(−y² + L²)   (5.78)

ā = ±(L²yg − L²b² − y³g) / (L²√(−y² + L²))   (5.79)
This system does not uniquely define the highest order time derivatives when
y = ±L. The situation is similar to that encountered with the elimination method
(5.37–5.39), since no finite number of differentiations of (5.73–5.79) will produce a
UDAE for which there are not powers of (−y 2 + L2 ) in the denominator. In general,
all that is known about points where Mi1 becomes singular is that the highest-order
time derivatives of the equivalent index-1 DAE may not be uniquely defined, which
can result when the system is locally high-index or locally unsolvable.
Interestingly, the corrector matrix used by the BDF method (which is essentially the
local matrix pencil (1/h)(∂F/∂ż) + (∂F/∂z) of the system (5.73–5.79)) does not become
locally singular when y = ±L. Indeed, high-index models also do not necessarily have
a singular corrector matrix. However, standard codes do experience difficulty when
solving an equivalent index-1 model in the neighborhood of such points (the chances
of the integrator stepping exactly onto one of these points are vanishingly small).
The codes can cut the step-size drastically in the vicinity of the singular point, and
sometimes fail to integrate past the point. It was observed that the corrector matrix
[Figure 5-8 plot: corrector condition number versus h, on a logarithmic scale, for x = 0.00, 0.10, 0.25, 0.50, 0.75, 0.90, and 1.00.]
Figure 5-8: Condition number of the corrector matrix at different points on the so-
lution trajectory of the equivalent index-1 pendulum as a function of the step size
h
becomes ill-conditioned near such points, which can cause inaccuracy in the solution
to the corrector and trigger step failures [2].
Figure 5-8 shows the LINPACK estimate of the corrector matrix condition number
for (5.59–5.65) as the integrator step size h → 0. The corrector matrix condition
number increases as x → 0. When the corrector matrix becomes very ill-conditioned
(condition number above about 5 · 10^5 in this case), the corrector iteration may not
converge, or it may converge to an incorrect point and trigger a truncation error test failure.
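The growth of the condition number can be reproduced with a rough numerical sketch of the corrector matrix for the equivalent index-1 model (5.59)–(5.65). Finite-difference Jacobians, an only approximately consistent state, g = 9.8, and all function names below are assumptions made for the illustration; this is not the LINPACK estimate used to produce Figure 5-8.

import numpy as np

m, g, L = 1.0, 9.8, 1.0

def F(zdot, z):
    """Residuals of (5.59)-(5.65); z = (x, y, a, b, lam, xbar, abar),
    and only y and b are differential variables."""
    x, y, a, b, lam, xbar, abar = z
    ydot, bdot = zdot
    return np.array([
        a - xbar,                                                  # (5.59)
        b - ydot,                                                  # (5.60)
        m * abar + lam * x / L,                                    # (5.61)
        m * bdot + lam * y / L + m * g,                            # (5.62)
        x**2 + y**2 - L**2,                                        # (5.63)
        2 * x * xbar + 2 * y * ydot,                               # (5.64)
        2 * x * abar + 2 * xbar**2 + 2 * y * bdot + 2 * ydot**2,   # (5.65)
    ])

def jac(f, v, eps=1e-7):
    """Dense finite-difference Jacobian of f with respect to v."""
    f0 = f(v)
    J = np.zeros((f0.size, v.size))
    for i in range(v.size):
        vp = v.copy()
        vp[i] += eps
        J[:, i] = (f(vp) - f0) / eps
    return J

def corrector_condition(x0, h, ydot0=0.5):
    """Condition number of dF/dz + (1/h) dF/dzdot at a roughly consistent point."""
    y0 = -np.sqrt(L**2 - x0**2)
    xbar0 = -y0 * ydot0 / x0            # from (5.64)
    z = np.array([x0, y0, xbar0, ydot0, 0.0, xbar0, 0.0])
    zdot = np.array([ydot0, 0.0])
    dFdz = jac(lambda v: F(zdot, v), z)
    dFdzdot = jac(lambda v: F(v, z), zdot)
    full = np.zeros((7, 7))
    full[:, [1, 3]] = dFdzdot           # y and b occupy columns 1 and 3 of z
    return np.linalg.cond(dFdz + full / h)

for x0 in (1.0, 0.5, 0.1, 0.01):
    print(x0, ["%.1e" % corrector_condition(x0, h) for h in (1e-1, 1e-2, 1e-3)])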
A previously unanswered problem for the dummy derivative method is how to detect
automatically when dummy derivative pivoting is necessary during the solution of the
DAE. The brute-force method would be to perform Gaussian elimination on the Hi1
matrices at every integration step and detect whether a different equivalent index-1
model is chosen than the one most recently solved.
The observation made in the previous section that singular Mi1 matrices lead
to ill-conditioned corrector matrices which can trigger truncation error test failures
provides a convenient heuristic for checking whether dummy derivative pivoting is
necessary. The heuristic is to check the factorization of the Hi1 matrix whenever
there are repeated corrector failures or truncation error failures and the integrator
calls for a cut in the step size. An example demonstrating the use of this heuristic is
given below.
This heuristic is somewhat too conservative. It may call for dummy derivative
pivoting at points where none is necessary and the step-size is being reduced for
reasons unrelated to the index of the model. On the other hand, the heuristic does
not call for pivoting unless the integrator experiences difficulty, which means that
some ill-conditioned points may be passed without pivoting. Our experience shows
that the heuristic does result in infrequent dummy derivative pivoting checks for
most problems. Other situations where there are ill-conditioned corrector matrices
are analyzed in [2, 1].
When it has been decided to switch between equivalent index-1 DAEs during the
solution, it is unnecessary to restart the integration code. Doing so is undesirable
because it requires small step-sizes, many corrector matrix factorizations, and low
order polynomial interpolation, increasing the cost of solving the DAE.
One issue with dummy derivative pivoting is that, although it is not necessary the-
oretically to restart the integration after the pivoting, in practice the DAE integrator
may be forced to cut the step size drastically. This happens because the integrator is
controlling the truncation error on the differential and algebraic state variables, but
not directly on the time derivatives. When one of the time derivatives becomes an al-
gebraic dummy derivative, accurate predictor information does not exist for this new
algebraic variable, and the solver is forced to cut the step-size, sometimes drastically.
One way to address this is to introduce an extra equation into the system of the form:
a = ẋ (5.80)
where a is a new algebraic variable, for all the time derivatives in the current DAE that
have the possibility of becoming dummy derivatives. It is only necessary to include
these equations in the truncation error control, since solution of (5.80) is trivial and
may be performed after the corrector iteration has finished. The DAE solver is able
to take fewer steps after a dummy pivoting operation if it is able to avoid cutting the
step by using extrapolation information obtained from equations like (5.80).
[Figure: pendulum length computed with the dummy derivative method; the length constraint is maintained to within about 10⁻⁵ of L = 1 over the whole integration.]
1. Check for pivot when the DAE solver calls for a cut in the step size.
Table 5-1: Different Pivoting Strategies

Strat.   Pivot Checks   Pivots   Steps   f-evals   Jac   Trunc. Fail   ΔL           ΔE
1        7              5        256     772       110   7             1.4 · 10⁻⁷   4.7 · 10⁻⁷
2        n/a            6        221     656       103   4             6.3 · 10⁻⁸   6.2 · 10⁻⁶
3        215            6        221     656       103   0             6.3 · 10⁻⁸   6.2 · 10⁻⁶
The headings in Table 5-1 refer to the number of pivot checks, the number of piv-
ots, the number of integration steps, the number of function evaluations, the number
of evaluations and factorizations of the corrector matrix, and truncation error test
failures. There were no corrector convergence failures under any of the strategies.
The headings ΔL and ΔE refer to the maximum deviation in the length constraint and
the total energy, respectively, during the time interval. The total energy was calculated after the
integration using the formula:
E = (1/2) m(ẋ² + ẏ²) + mgy   (5.81)
The deviation in the energy was measured relative to the value of the energy on
the first step of the integration. The length constraint is present explicitly, and the
drift in this constraint was maintained below the integrator tolerances for all of the
strategies. The energy constraint is an implicit constraint, which is why it was not
enforced as closely as the length constraint, but the drift is still very small.
Strategies 2 and 3 have identical solution statistics because they result in exactly
the same pivot times. Strategy 2 checks the pivot selection at every step except
immediately following dummy derivative pivots, which accounts for the discrepancy
in the number of steps and the number of pivot checks. Table 5-1 shows that the
policy of checking the pivots only when the integrator cuts the step (strategy 1) carries
a price in terms of the number of steps, the number of residual evaluations, and the
[Figure: x and y trajectories of the index-3 pendulum solved with the dummy derivative method, with pivot times marked, over 0 ≤ t ≤ 10.]
[Figure: corrector matrix condition number along the dummy derivative solution of the index-3 pendulum, with pivot times marked.]
Table 5-3: Example problem solution statistics

                           Condenser   Standard   CSTR   Batch Column
Model Equations            5           20         4      297
Additional Eqns. Derived   2           0          5      145
Input Functions            1           4          0      23
Input Funcs. Derived       0           19         0      2
Initial Conditions         1           0          0      9
Integration Steps          363         63         65     96
Residual Evaluations       825         128        144    201
Jacobian Factorizations    62          7          16     28
Corrector Conv. Failures   0           0          0      0
Truncation Test Failures   17          1          2      3
Pivot Checks               17          0          0      3
Pivots                     0           0          0      2
This section provides several examples of the use of the dummy derivative method
on problems of interest in chemical engineering. The problems used in this chapter
have been discussed in the literature in the context of high-index DAEs, but numerical
solutions have not been reported for most of them. Table 5-3 gives the
solution statistics that were obtained by solving these problems using the dummy
derivative method implemented in ABACUSS.
5.6.1 Fixed-volume condenser with no liquid holdup
The following simple model of a condenser with fixed volume and negligible liquid
holdup was discussed in [108, 139].
Ṅ = F − L (5.82)
P V = NRT (5.84)
P = (101325/760) exp(A − B/(C − T))   (5.85)
where (5.82) represents a material balance, (5.83) the energy balance, (5.84) an
equation of state for the vapor, and (5.85) the vapor-liquid equilibrium. It was noted
in [139] that the assumptions used to derive this model are somewhat questionable,
since the vapor phase is modeled as both an ideal gas (5.84) and a saturated vapor
(5.85).
F = 9000 + 1000 sin(πt/2)   (5.87)
The definitions of the variables and parameters are given in Tables 5-4 and 5-5.
The parameters for the heat capacity equation, vapor pressure equation, and heat of
vaporization are for water and are taken from [50].
Table 5-4: Variables in the high-index condenser model
Variable Units Description
N mols Molar Holdup in the vessel
T K Temperature in the vessel
F (mols/hr) Feed flow rate
P Pa Pressure
L (mols/hr) Liquid flow rate out of vessel
Cp (J/mol · K) Vapor heat capacity
The profiles shown below are obtained over the time interval (0, 10 hrs] from an initial condition of T(0) = 400 K.
The following are the equations in the model:
Input Equation # 1
CONDENSER.F = 9000 + 1000*SIN(3.14286E+00*TIME/2) ;
Equation # 2
CONDENSER.$N = CONDENSER.F - CONDENSER.L ;
Equation # 3
CONDENSER.N*CONDENSER.CP*CONDENSER.$T = CONDENSER.F*CONDENSER.CP*
(5.13150E+02 - CONDENSER.T) + -4.06560E+04*CONDENSER.L -
5.00000E+05*(CONDENSER.T - 2.80000E+02) ;
Equation # 4
CONDENSER.P*1.00000E+02 = CONDENSER.N*8.31450E+00*CONDENSER.T ;
Equation # 5
CONDENSER.P = 1.33322E+02*EXP(7.96681E+00 - 1.66821E+03/
(2.28000E+02 + (CONDENSER.T - 2.73150E+02))) ;
Equation # 6
CONDENSER.CP = 3.34600E+01 + 6.88000E-02*(CONDENSER.T - 2.73150E+02)^2
+ 7.60400E-06*(CONDENSER.T - 2.73150E+02)^3 + -3.59300E-09*
(CONDENSER.T - 2.73150E+02)^4 ;
Total Unknowns : 9
[Figure: temperature profile for the high-index condenser, T (K) versus t (hr).]
Figure 5-14: Dummy derivative mole holdup profile for high-index condenser
[Figure 5-15: trajectories of x17, x18, x19, and x20 for the index-20 standard problem.]
A ‘standard high-index’ model was proposed and solved in [76]. The model is:
xi = f (t) (5.89)
The index of this system is N, and hence the model is primarily interesting because
the index is a function of a parameter. An index-20 version of this problem was
solved, with f (t) = sin(t), which required ABACUSS to differentiate 19 equations.
Some of the state trajectories are shown in Figure 5-15.
Table 5-6: Parameters for the CSTR example
Parameter Value
C0 5 mol/L
T0 273 K
K1 20 s−1
K2 3000 (K · L)/mol
K3 1500 s−1
K4 3000 K
Ċ = K1 (C0 − C) − R   (5.90)

Ṫ = K1 (T0 − T) + K2 R − K3 (T − Tc)   (5.91)

R = K3 exp(−K4 /T) C   (5.92)
[Figure: CSTR concentration profile, C (mol/L) versus t (hr).]
[Figure: CSTR temperature profile, T and Tc (K) versus t (hr).]
[Figure 5-18: reboiler temperature profile for the one-tray BatchFrac column, T (K) versus t.]
An interesting high-index example is the BatchFrac model [20]. As written the model
is index-2 due to assumptions about the holdup on the trays given in equation (6)
of [20]. The fact that the problem is index-2 was not specifically noted in [20],
but the problem was solved by replacing the equations for the holdup and enthalpy
derivatives with finite difference approximations (see equations (13) and (14) in [20]).
The method of dummy derivatives eliminates the need for this approximation. The
BatchFrac model was solved in ABACUSS for a five component mixture using ideal
thermodynamic assumptions with one tray in the column.
The original index-2 DAE system for this problem consisted of 155 equations,
and an additional 75 equations were automatically derived to form the equivalent
index-1 system. This model is large because it includes thermodynamic properties
as equations, not procedures. The reboiler temperature profile obtained from the
solution of the 1-tray system is given in Figure 5-18.
5.7 Conclusions
This chapter has shown that it is possible to reliably solve a large class of arbitrary-
index DAEs using the dummy derivative method. The advantages of the dummy
derivative method over other methods that have been proposed are that it directly
enforces all of the implicit constraints and it may be easily automated and combined
with automatic differentiation technology to solve high-index DAEs of practical en-
gineering interest. The dummy derivative method does not work for all high-index
DAEs, notably those for which the index and/or dynamic degrees of freedom can-
not be determined correctly using structural criteria. However, our experience has
shown that numerical solutions to many high-index engineering problems are easily
obtained.
Implementation of the dummy derivative algorithm in ABACUSS required sig-
nificant development of the details of the algorithm, as evidenced in the REDUCE-
INDEX algorithm and the dummy derivative pivoting heuristic. This implementation
appears to be the first example of a computational environment capable of easily solv-
ing high-index DAEs. The ability to solve high-index DAEs directly is necessary for
the development of algorithms to solve state-constrained dynamic optimization prob-
lems, which are discussed in the next chapters.
Chapter 6
This chapter is concerned with the solution of equality path-constrained dynamic op-
timization problems. Equality path constraints (as distinguished from inequality path
constraints, which are the subject of the next chapter) constrain the state variables
of the DAE in the dynamic optimization problem. To date, this class of problem
has not been satisfactorily handled within the control parameterization framework.
However, it is shown in this chapter that equality path-constrained problems contain
high-index DAEs which may be solved efficiently using the dummy derivative method
detailed in Chapter 5.
An equality path-constrained subset of the dynamic optimization formulation
given in Chapter 2 is considered in this chapter:
min_{u(t), tf}   J = ψ(x(tf), y(tf), tf) + ∫_{t0}^{tf} L(x, u, t) dt   (6.1)

subject to:

f(ẋ, x, y, u, t) = 0   (6.2)

h(x, y) = 0   (6.3)
In this formulation, x are differential state variables, y are algebraic state variables
and u are control variables. The path constraint (6.3) is assumed to be a vector
function mapping into R^nh. The DAE (6.2) is assumed to have index ≤ 1, although
the results are easily extended if (6.2) is a high-index DAE for which an equivalent
index-1 DAE may be derived using the dummy derivative method. Note that (6.1–
6.4) does not include point constraints at times other than t0 , which are irrelevant for
this chapter, or inequality path constraints on the state variables, which are discussed
in Chapter 7.
Our goal is to solve (6.2–6.3) simultaneously as an IVP within the control pa-
rameterization method, since the efficiency of the control parameterization method
increases as more of the constraints are handled by the IVP solver rather than the NLP
solver. Since (6.3) constrains the states of the system (6.2), it can be expected that
the combined system may be high-index. However, the combined system (6.2–6.3) is
also overspecified, and thus not all of the control variables u are truly independent.
The two problems that must be addressed are: first, the combined system (6.2–6.3) may be high-index; and second, a valid subset of the control variables u must be selected to be determined implicitly by the path constraints. Existing methods for handling path constraints within control parameterization are reviewed in Section 6.1. The relationship between equality path constraints and high-index DAEs is established in Section 6.2, and the feasibility of the resulting problems is discussed in Section 6.3. The control matching problem is addressed in Section 6.4. Finally, examples are presented in Section 6.5.
6.1 Review of Methods for Solving Path-Constrained Dynamic Optimization Problems
Several methods have been developed to date for solving path-constrained dynamic
optimization problems within the control parameterization framework. Since it was
not possible to solve high-index DAEs directly at the time that these methods were
developed, the emphasis was on finding methods that discretize the path constraints so
that they could be included either in the objective function or as a set of NLP equality
constraints. Therefore, these methods all handle the path constraints indirectly, in
the sense that they are included in the master NLP rather than the IVP subproblem.
One method for enforcing path constraints is to modify the objective function to
include a measure of the constraint violation [24]:
J̄ = J + Σ_{i=1}^{nh} Ki ∫_{t0}^{tf} hi² dt    (6.5)
where K is a vector of large numbers. The path constraints are satisfied exactly
only if K → ∞. The problem with this approach is that it has been shown to cause
numerical difficulties with the NLP master problem because it modifies the shape of
the objective function, possibly making it more nonlinear or introducing additional
locally optimal points.
Another method [128] is to replace the path constraints with a single end-point
constraint:
φ = ∫_{t0}^{tf} hᵀ h dt = 0    (6.6)
This end-point constraint collapses information about the constraint violation along
the entire state trajectory into a single measure, and thus it provides only limited
information about how to modify the input functions to achieve feasibility. Also,
both (6.6) and (6.7) have gradients with respect to the optimization parameters that
are zero at the optimum, which reduces the efficiency of gradient-based optimization
methods.
A somewhat more sophisticated method was proposed in [142, 143], where the
relationship between path-constrained dynamic optimization problems and high-index
DAEs was noted. It was still not possible at the time of that research to solve most
classes of high-index DAEs, so a method was proposed to append directly to the
DAE any state variable constraints that did not cause the resulting system to be
high-index. The other state variable constraints were transformed into a set of NLP
equality constraints. To form these NLP equality constraints, a hybrid method was
used which combines both global constraints like (6.7) and point constraints like:
hi (x(tj ), y(tj )) = 0    i = 1 . . . nh ,  j = 1 . . . npt ,  tj ∈ (t0 , tf ]    (6.8)
These point constraints may be evaluated either by sampling the state variables as the
simulation progresses, or by interpolating the state trajectories after the simulation
finishes. The point constraints provide local information along the trajectory to the
optimizer, while the global constraint attempts to prevent the constraint from being
violated at times other than the points where (6.8) were evaluated. Similar methods
were proposed in [63], although the relationship between the path constraints and
high-index DAEs was not noted in that work.
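As a small illustration of the second evaluation route, the sketch below (Python, with hypothetical trajectory data, a hypothetical constraint h, and arbitrarily chosen sample times; it is not the implementation of [142, 143]) interpolates a stored state trajectory after the simulation and evaluates point constraints of the form (6.8) for the master NLP:

    # Hypothetical sketch: evaluate point constraints h(x(t_j)) = 0 by interpolating
    # a state trajectory stored at the IVP solver's output points.
    import numpy as np

    t_grid = np.linspace(0.0, 1.0, 11)        # output times saved by the IVP solver
    x_grid = np.sin(t_grid)                   # hypothetical state trajectory x(t)
    h = lambda x: x - 0.5                     # hypothetical single path constraint h(x) = 0

    t_pts = [0.25, 0.5, 0.75, 1.0]            # point-constraint times t_j in (t0, tf]
    residuals = [h(np.interp(tj, t_grid, x_grid)) for tj in t_pts]
    print(residuals)                          # returned to the master NLP as equality constraints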
In [76] a similar method was described in which high-index DAEs were solved
via a transformation to a path-constrained dynamic optimization problem. In this
method, a high-index DAE:
f̄ (ẋ, x, y, t) = 0    (6.9)
is partitioned into two sets of equations, f^(1)(ẋ, x, y, t) = 0 and f^(2)(ẋ, x, y, t) = 0, such that the matrix [ ∂f^(1)/∂ẋ^(1)   ∂f^(1)/∂y^(1) ] is nonsingular, and the state variables are partitioned into x = [x^(1)T : x^(2)T]^T and y = [y^(1)T : y^(2)T]^T. The solution to the high-
index DAE may be obtained by solving the following dynamic optimization problem:
min over x^(2)(t), y^(2)(t) of  ‖f^(2)(ẋ, x, y, t)‖
subject to:
f^(1)(ẋ, x, y, t) = 0
where ‖·‖ is an appropriate norm on the domain [t0 , tf ]. The dynamic optimization method for solving high-index DAEs requires the following steps:
1. Solve the IVP defined by f^(1) over the time horizon for the current guesses of the profiles x^(2)(t) and y^(2)(t), which are treated as control variables.
2. Evaluate the resulting violation of the constraints f^(2).
3. Adjust the control profiles x^(2), y^(2) to try to minimize the constraint violation on the next step. Goto Step 1.
This method is interesting here because it demonstrates the problems with all methods
that solve path-constrained dynamic optimization problems by including the path
constraints in the master NLP.
The problems with this method include:
• Since the most expensive step is Step 1, this method is much less efficient than
direct methods for solving high-index DAEs (such as the dummy derivative
method described in Chapter 5), which requires the solution of only one (albeit
larger) IVP.
• Any method for choosing a time discretization of the controls will be somewhat
ad hoc compared to the step size selection algorithm built into the numerical
DAE solver that handles the IVP. Time discretization of the controls is ar-
guably not a problem for unconstrained dynamic optimization problems, where
the space of implementable control functions is often limited by physical con-
siderations. However, it is a problem here because state trajectories are being
discretized for which the functional form is not constrained by implementable
control functions.
• Independent of the discretization, the NLP has more equations and decision
variables than necessary. Some of the decision variables in an equality path-
constrained optimization are actually completely determined by the solution to
the path constrained DAE. Therefore, it is more efficient to find the solution of
these variables directly, using the IVP solver, rather than indirectly, using the
NLP solver.
In short, the combination of inefficiency and uncontrollable accuracy makes all of the above methods unattractive for solving equality path-constrained dynamic optimization problems within the control parameterization framework.
Although the problems described above have not been explicitly recognized by
other authors, there have been some methods proposed for solving high-index dy-
namic optimization problems directly. In [66, 67] dynamic optimization of index-2
DAEs was described, and in [109] a method similar to the dummy derivative method
was proposed to derive an equivalent index-1 DAE for a given arbitrary-index DAE.
However, neither implementation nor numerical results were reported in [109], and
the method described requires detection of numerical singularity of matrices, which
is problematic both practically and theoretically in the nonlinear sense. In both of
these works it was assumed that (6.2) was high-index, and neither proposed a method
for directly appending (6.3) to (6.2) to form a high-index system.
6.2 Equality Path Constraints and High-Index
DAEs
The method proposed in this chapter for solving equality path-constrained dynamic
optimization problems is to append the path constraint (6.3) directly to the DAE
(6.2), allowing a subset of the control variables to be determined by the solution of
the resulting combined IVP. Then the dummy derivative method is used to derive
an equivalent index-1 DAE for the high-index DAE. For the purposes of this section,
it is assumed that a control matching can be found; the focus is instead on the
properties of the resulting combined system.
Chapter 2 made reference to the definition of the structural index presented in [21]. The relationship between the structural index and the differential index is given by the following theorem:
Theorem 6.1. The index and the structural index of a DAE are related by i ≥ is .
Proof. Consider a modification to Pantelides’ algorithm described in Chapter 5, in
which the variables a and equations:
a = ẋ (6.14)
are appended to the underlying DAE as necessary at each step of the algorithm so
that time derivatives of order greater than one do not appear in the underlying DAE
obtained through the next step. Since the new variable a is uniquely determined by
(6.14), the augmented system has the same index as the original DAE.
Define ẑ_is as the set of highest-order time derivatives appearing in the final underlying DAE obtained with this modified Pantelides' algorithm, denoted f^(is). Define Ĵ_is as the Jacobian of f^(is) with respect to ẑ_is.
There are four cases:
i = is : If Ĵ_is is nonsingular and ẑ_is does not contain any algebraic variables, then f^(is) is an implicit ODE: according to the Implicit Function Theorem, all time derivatives are uniquely determined given the state variables, and hence i = is.
i = is + 1 : If Ĵ_is is nonsingular and ẑ_is does contain some algebraic variables, then not all time derivatives are uniquely determined by this final underlying DAE. Since by assumption the matrix
Ĵ_is = [ ∂f^(is)/∂ẋ    ∂f^(is)/∂y ]
is nonsingular, where ẋ and y are respectively the time derivatives and algebraic variables in ẑ_is, one further differentiation of f^(is) produces a system of the form
f^(is+1)(ȧ, ẏ, a, x, y, û, t) = 0
where û = {u, (du/dt), . . . , (d^(is) u/dt^(is) )}. Because Ĵ_is is nonsingular, the time derivatives ȧ and ẏ are uniquely determined by f^(is+1), and hence i = is + 1.
i > is : If Ĵ_is is singular, then by the properties of Pantelides' algorithm Ĵ_is must be numerically but not structurally singular. Since the general DAE is nonlinear, no information about whether ẑ_is is uniquely determined or not is conveyed by the numerical singularity of Ĵ_is. It is possible that additional differentiations must be performed to uniquely determine ẑ_is, and therefore the differential index may be larger than the structural index.
i < is : Since Pantelides' algorithm terminates with the first underlying DAE that possesses a structurally nonsingular Ĵ_is, and since structural nonsingularity of Ĵ_is is a necessary condition for f^(is) to uniquely define the highest-order time derivatives ẑ_is, it is not possible for any previous f^(is−j), j = 1, . . . , is, to uniquely determine ẑ_(is−j). Therefore, the structural index is a lower bound on the differential index.
Appending equality path constraints to an index-one DAE may result in a high-index DAE. There are cases in which the
index of the augmented DAE remains unchanged. For example, consider the index-1
DAE:
ẋ + y + u = 0 (6.18)
y−u−x= 0 (6.19)
2x + y − 5 = 0 (6.20)
where x and y are state variables and u is a control variable. When u is treated as
an algebraic variable, the augmented DAE (6.18–6.20) also has index = 1.
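To see concretely why the augmented system (6.18–6.20) remains index-1, the following minimal sketch (Python/NumPy; not ABACUSS code) checks that the Jacobian of the three equations with respect to the unknowns that must be determined at each time, (ẋ, y, u), is nonsingular, so that no differentiation of the constraint is required:

    # Jacobian of (6.18)-(6.20) with respect to (xdot, y, u); x is a known state.
    import numpy as np

    J = np.array([
        [1.0, 1.0,  1.0],   # (6.18): xdot + y + u = 0
        [0.0, 1.0, -1.0],   # (6.19): y - u - x = 0
        [0.0, 1.0,  0.0],   # (6.20): 2x + y - 5 = 0
    ])
    print(np.linalg.matrix_rank(J))  # 3: the augmented system determines xdot, y, u directly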
However, there are some important classes of DAEs for which appending equality path constraints to the DAE in the manner described above will always result in a high-index DAE.
Theorem 6.2. Appending ng ≤ nu state constraints of the form:
g(x) = 0    (6.21)
to an explicit ODE of the form:
ẋ − φ(x, u, t) = 0    (6.22)
yields a DAE with i ≥ 2, assuming the DAE is solvable.
Proof. Appending the path constraints requires ng controls to become algebraic state
variables denoted by y ⊆ u in the augmented DAE. Define χ̄ = x \ χ and ū = u \ y.
A suitable partitioning of (6.22) and differentiation of the constraints (6.21) twice with respect to time yields:
(∂gj/∂χ)ᵀ χ̈ + χ̇ᵀ (∂²gj/∂χ²) χ̇ + 2 (dχ̄/dt)ᵀ (∂²gj/∂χ̄∂χ) χ̇ + (dχ̄/dt)ᵀ (∂²gj/∂χ̄²) (dχ̄/dt) + (∂gj/∂χ̄)ᵀ (d²χ̄/dt²) = 0    (6.28)
for all j = 1, . . . ng .
Theorem 6.3. Appending ng ≤ nu state constraints of the form:
g(x, u) = 0 (6.30)
to an explicit ODE of the form (6.22) yields a DAE with i ≥ 1, assuming the DAE
is solvable.
Proof. Since (6.30) does not contain any time derivatives, the augmented system is
structurally rank-deficient with respect to the time derivatives and hence must be at
least index-1.
A consequence of Theorems 6.2 and 6.3 is that optimal control of ODE systems
subject to state path constraints requires either implicit or explicit treatment of DAEs.
For cases where the DAE is not an explicit ODE, the index may stay the same or
increase when the DAE is augmented with state path constraints. In problems where
the index increases, as the theorems indicate, the new index may indeed rise by more
than one. An example of the index rising by more than one is the index-0 DAE:
x3 = 10 (6.34)
If the Jacobian of the path constraints with respect to the state variables is rank deficient, a subset of the state path constraints can be ignored at least locally.
6.3 Dynamic Optimization Feasibility
When equality path constraints are appended to the DAE, one control variable per appended constraint must become a state variable in the resulting augmented DAE to prevent the system from being overdetermined. The rationale is simply that the number of state variables in
a properly specified DAE must equal the number of equations in the DAE. One of
the implications is that if the number of control variables is less than the number
of constraints, the problem is either over-constrained or some of the constraints are
redundant. Therefore, the number of control variables must be greater than or equal
to the number of equality path constraints for the dynamic optimization problem to
be solvable.
If the DAE has an equal number of control variables and equality path constraints,
it is still possible that the resulting system is unsolvable, which is to say, there is no
state trajectory for a given set of input trajectories. Assuming that the original
DAE is well-posed and solvable, the augmented system could be unsolvable because
the augmented DAE is uncontrollable, or because the constraints are inconsistent or
redundant. For example, no control trajectory u exists for which the system:
ẋ = 1/x + u²    (6.35)
x = 0    (6.36)
has a solution, since the path constraint (6.36) makes the right-hand side of (6.35) undefined.
Likewise, any dynamic optimization problem that includes the equality path constraints:
x1 + x2 = 5    (6.37)
x1 − x2 = 1    (6.38)
Note that consistency, nonredundancy, and controllability are all local properties of
nonlinear systems, and therefore it is difficult in practice to detect the presence or
absence of these conditions.
However, as in the solution of nonlinear high-index DAEs, some information can be
gained from the structural properties of the augmented system. A necessary condition
for a set of nonredundant constraints to be consistent is that an assignment exists
for every equation to a unique variable in the system. If this condition is not met,
the equations must be either inconsistent or redundant. In the example (6.37–6.39)
no assignment exists for each equation to a nonredundant variable because there are
three equations but only two variables.
A precondition for controllability is input connectability, which exists for a system
“if the system inputs are able to influence all the state variables” [121]. Input con-
nectability is defined in a graph-theoretic sense if a path exists for each state variable
to at least one control variable in the digraph of the system. The digraph of a DAE
was defined in Section 5.2.1.
ẋ1 = x2 (6.40)
ẋ2 = x2 + u (6.41)
ẋ3 = 4 (6.42)
x21 = 3 (6.43)
x3 = 9 (6.44)
The input variable u directly influences x2 through (6.41), which in turn influences x1 through (6.40). Therefore, the state variable x1 in the path constraint (6.43) is input con-
nectable with u. However, the variable x3 in the state path constraint (6.44) is not
influenced by the control variable u; therefore x3 is not input connectable and no
feasible solution exists to a dynamic optimization problem that contains (6.40–6.44)
which satisfies (6.44).
Input connectability of all the state variables is not a condition for solvability of
the DAE. For example, the DAE (6.40–6.43) is solvable (and a dynamic optimization
problem would be feasible) even though the state variable x3 is not input connectable.
However, at least one state variable in each of the state variable path constraints must
have the ability to be influenced by an input variable in the unconstrained DAE or
else the constraint cannot be satisfied. Therefore, the entire augmented DAE does
not have to be input connectable, but the state variable constraints do.
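Input connectability of the constrained variables is easy to test mechanically. The sketch below (Python; it assumes a digraph convention in which an edge v → w means that v appears in the equation determining w, which may differ in detail from the digraph of Section 5.2.1) checks reachability from the control u for the example (6.40)–(6.44):

    # Reachability check: which state variables can be influenced by the input u?
    from collections import deque

    edges = {
        "u":  ["x2"],          # u appears in xdot2 = x2 + u      (6.41)
        "x2": ["x1", "x2"],    # x2 appears in xdot1 = x2 (6.40) and in (6.41)
        "x1": [],
        "x3": [],              # xdot3 = 4 (6.42) is influenced by nothing
    }

    def reachable(src):
        seen, queue = {src}, deque([src])
        while queue:
            for w in edges[queue.popleft()]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return seen

    influenced = reachable("u")
    print("x1" in influenced)  # True:  the constraint (6.43) on x1 can be satisfied
    print("x3" in influenced)  # False: the constraint (6.44) on x3 cannot be satisfied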
Also, there exist input-connectable systems which are still infeasible. For example,
consider the system:
ẋ1 = x2 (6.45)
ẋ2 = x2 + u1 (6.46)
ẋ3 = 4 + u2 (6.47)
x21 = 3 (6.48)
x2 = sin(t) (6.49)
Although both (6.48) and (6.49) are input connectable to u1 , the control variable u1
cannot satisfy them both simultaneously. However, if (6.48) is replaced with:
x1 = − cos(t) (6.50)
the control u1 could simultaneously satisfy the state path constraints. Even though
the constraints (6.49–6.50) do not appear to be redundant when examined in isola-
tion, they are in fact redundant when coupled with the DAE (6.45–6.47). Therefore,
the nonredundancy consideration requires each state path constraint to be input con-
nectable to a unique control variable in the unconstrained DAE.
The result of the discussion in this section is the following theorem:
Theorem 6.4. Necessary conditions for the equality path-constrained dynamic optimization problem defined by (6.1–6.3) to be feasible are:
1. The number of control variables is greater than or equal to the number of equality path constraints (nu ≥ nh ).
2. An assignment exists for every equation of the combined system (6.2–6.3) to a unique variable.
3. There exists at least one transversal of the Jacobian of the constraints (6.3) with
respect to the state variables such that a path exists in the graph of (6.2–6.3)
from every member of the set of nh control variables û ⊆ u that have become
algebraic state variables in (6.2–6.3), to a unique member of the set of state
variables in the transversal.
Proof. All of the following statements are based on the assumption that the con-
straints are nonredundant. If Condition 1 is not true, then the dynamic optimization
problem does not have enough degrees of freedom to satisfy its constraints. If Condi-
tion 2 is not true, then (6.3) must be inconsistent, and therefore no state trajectories
exist that will satisfy all of the constraints simultaneously. If Condition 3 is not true,
then the combined system is either not input connectable, or it is structurally singular
with respect to its state variables.
The assumption that the constraints are nonredundant is difficult to check, but it is a reasonable one since it is usually obvious to the person specifying the problem which nonredundant constraints may be imposed. Theorem 6.4 is
useful in practice, since all of the conditions may be easily checked using structural
criteria. However, the theorem, like the other structural criteria used in this thesis, provides only necessary conditions: there exist sets of state path constraints that satisfy the theorem but are nevertheless infeasible because they do not satisfy consistency or input connectability requirements locally.
6.4 Control Matching
The results presented in this chapter thus far have assumed that it is possible to find
a valid subset of the control variables of the unconstrained problem to make into
algebraic variables in the constrained problem. This section assumes that a feasible
path constrained dynamic optimization problem has been posed as described in the
previous section, and describes how to determine which control variables become
algebraic variables in the path constraint augmented DAE.
The equality path-constrained dynamic optimization problem may have more controls than equality path constraints. In this case, a valid subset of the control variables must
be selected to become algebraic state variables in the augmented DAE. Such a subset
is termed a control variable matching. If Theorem 6.4 is satisfied and there are ex-
actly the same number of controls as state path constraints, there is no uncertainty
about which controls are in the control matching, but it is still necessary to determine
whether the resulting augmented DAE is solvable.
The most general statement that may be made about a control variable matching
is that it must result in a solvable DAE. Solvability criteria for nonlinear high-index
DAEs are discussed in [21, 29], but are of little practical use. Rather, since the
dummy derivative method is dependent upon Pantelides’ algorithm, a related but
more relevant question is to find a control variable matching that will lead to a DAE
for which Pantelides’ algorithm will terminate. If Pantelides’ algorithm terminates,
the DAE is solvable in a structural sense, and the corrector iteration of the BDF
method will be structurally nonsingular.
The following definitions and theorem were presented in [107] in terms of the DAE system:
f (ẋ, x, y, u) = 0    (6.51)
Definition 6.5 (DAE Extended System). Given the DAE system (6.51), the corresponding extended system is (6.51) augmented with the equations:
υ(ẋ, x) = 0    (6.52)
which structurally associate each time derivative in ẋ with its corresponding state variable in x.
Definition 6.6 (Structurally inconsistent DAE). The DAE (6.51) is said to be
structurally inconsistent if it can become structurally singular with respect to all oc-
curring state variables (x, y) by the addition of the time differentials of a
(possibly empty) subset of its equations.
Theorem 6.5. Pantelides’ algorithm terminates if and only if the extended system
(6.51–6.52) is structurally nonsingular. Furthermore, if (6.51–6.52) is structurally
singular, then the DAE system (6.51) is structurally inconsistent.
For example, consider the DAE:
f1 (x, y1 , y2 ) = 0    (6.53)
f3 (x, u2 ) = 0 (6.55)
with control variables u1 and u2 , which was given in [107]. The extended system is
formed by appending:
υ(ẋ, x) = 0 (6.56)
Assume that application of the AUGMENTPATH algorithm (see [43, 44, 107] and Chapter 5) to the extended system of (6.2) has resulted in a vector ASSIGN of length 2nx + ny . The equality path constraints and the control variables are then appended to the extended system.
The AUGMENTPATH algorithm is then applied to each equality path constraint,
hopefully resulting in a new vector ASSIGN of length 2nx + ny + nh . Any control
variable ui for which ASSIGN(2nx + ny + i) is nonzero is defined to be part of a
control matching. If AUGMENTPATH returns PATHFOUND=FALSE for any of
the equality path constraints, no control matching is possible for this system.
For example, the structural Jacobian with respect to the state variables and time
derivatives for the following extended system:
f1 (ẋ1 , x2 , u1 ) = 0 (6.57)
υ1 (ẋ1 , x1 ) = 0 (6.60)
υ2 (ẋ2 , x2 ) = 0 (6.61)
where the x variables are states and the u variables are controls is:
ẋ1 ẋ2 x1 x2 y
f1 × ⊗
f2 ⊗ × ×
(6.62)
f3 × ⊗
υ1 × ⊗
υ2 ⊗ ×
where the symbols ‘×’ and ‘⊗’ denote structural nonzeros, and the set of ‘⊗’ de-
notes an augmenting path found by successive application of the AUGMENTPATH
algorithm to each equation. When the following equality path constraints are added:
h1 (x1 , x2 ) = 0 (6.63)
h2 (y) = 0 (6.64)
to form an augmented DAE, the constraints and the u variables are added to the
structural Jacobian, giving:
ẋ1 ẋ2 x1 x2 y u1 u2 u3
f1 × ⊗ ×
f2 × × × × ⊗
f3 × × × ⊗
(6.65)
υ1 ⊗ ×
υ2 ⊗ ×
h1 ⊗ ×
h2 ⊗
when the AUGMENTPATH algorithm is applied to the new equations. The control
matching consists of the variables u2 and u3 , which become algebraic state variables in
the augmented DAE. Note that since the structural singularity of (6.62) is typically
checked when solving a DAE in a modeling environment such as ABACUSS, the
additional computational effort required to obtain the control matching is minimal.
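The augmenting-path idea behind the control matching can be sketched in a few lines. The code below (Python) is only a schematic of the procedure described above, not the ABACUSS AUGMENTPATH implementation, and the incidence data are a small hypothetical system rather than (6.57)–(6.61):

    # Schematic control matching: assign the extended system first (controls treated as
    # inputs), then append a path constraint and let an augmenting path displace one
    # assignment onto a control variable.
    def augment(eq, incidence, assign, visited):
        for var in incidence[eq]:
            if var in visited:
                continue
            visited.add(var)
            if var not in assign or augment(assign[var], incidence, assign, visited):
                assign[var] = eq          # ASSIGN(var) := eq
                return True
        return False

    incidence = {
        "f1": ["x1dot", "x2", "u1"],      # hypothetical DAE equations
        "f2": ["x2dot", "x1", "u2"],
        "v1": ["x1dot", "x1"],            # extended-system equations pairing xdot with x
        "v2": ["x2dot", "x2"],
        "h1": ["x1"],                     # appended equality path constraint h1(x1) = 0
    }
    controls = {"u1", "u2"}
    no_ctrl = {e: [v for v in vs if v not in controls] for e, vs in incidence.items()}

    assign = {}
    for eq in ["f1", "f2", "v1", "v2"]:   # assign the extended system without the controls
        augment(eq, no_ctrl, assign, set())
    augment("h1", incidence, assign, set())      # append the constraint; controls now assignable
    print([v for v in assign if v in controls])  # ['u1']: u1 enters the control matching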
It is possible that the control matching found with this method is nonunique. In
the previous example, the augmenting path for the extended Jacobian could have
been:
ẋ1 ẋ2 x1 x2 y u1 u2 u3
f1 × ⊗ ×
f2 × × × ⊗ ×
f3 × × × ⊗
(6.66)
υ1 ⊗ ×
υ2 ⊗ ×
h1 ⊗ ×
h2 ⊗
and therefore the control matching could be the variables u1 and u3 . There is no
structural information that would indicate that one choice is preferred over the other,
and in many cases it may be that all possible control matchings are acceptable. One
consideration may be that the controls that are not in the control matching will be
subject to the control parameterization, while the functional form of those in the
control matching is unrestricted. If, for example, the implementable function space
was limited for some controls but not others, the control matching choice could be
guided by the desire to limit the functional form of some of the controls. Also, since
structural criteria were used to determine the possible control matchings, it may be
that some choices are in fact not permissible because they lead to DAEs which are
numerically unsolvable.
6.5 Examples
This section gives numerical results for several equality path-constrained dynamic
optimization problems. For other examples of path constrained dynamic optimization
problems in this thesis, see Chapter 7 and Chapter 8. There are very few examples
of equality path-constrained dynamic optimization problems in the literature (there
are examples of inequality path constrained problems, but those are discussed in
the next chapter). One of the few problems for which a numerical solution has been
reported is the high-index pendulum, which was solved in [76] and [142] as an equality
path-constrained dynamic optimization problem. However, the high-index pendulum
was solved in Chapter 5 using a single IVP integration using the dummy derivative
method, and so it has not been included here.
6.5.1 Two-dimensional car problem
In this problem a car is able to move in two dimensions (x and y) under the influence of two controls, the acceleration angle θ and magnitude a. The objective
is to find the control profiles that cause the car to move from point A to point B in
minimum time. An equality path constraint is imposed by requiring the car to remain
on a road of a given shape. Mathematically, this dynamic optimization problem is:
Table 6-1: Solution statistics for the two-dimensional car problem
Integration Tolerance | Optimization Tolerance | IVP Solutions | Objective Function | CPU
10⁻⁵ | 10⁻³ | 23 | 26.85 | 3.88 s
min_{a(t),θ(t),tf}  tf    (6.67)
subject to:
v̇x = ax (6.68)
ẋ = vx (6.69)
v̇y = ay (6.70)
ẏ = vy (6.71)
ax = a cos(θ) (6.72)
ay = a sin(θ) (6.73)
y = x2 (6.74)
vx (0) = 0 vx (tf ) = 0
The equality path constraint (6.74) can be matched to θ. The resulting augmented
DAE is index-3, and the dummy derivative algorithm differentiates (6.74) twice to
obtain an equivalent index-1 DAE. This problem was solved using two constant finite
elements to approximate a. The control was bounded by −500 ≤ a ≤ 500, and the
initial guess was a = 10. Solution statistics are given in Table 6-1 and Figures 6-1 to
6-4 show the solution trajectories. The solution found satisfies the path constraint to
within the integrator tolerances over the entire state trajectory.
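For reference, the two differentiations of the road constraint (6.74) that the dummy derivative algorithm performs can be reproduced symbolically; the short SymPy sketch below is illustrative only and is not the symbolic machinery used in ABACUSS:

    # Differentiate the road constraint y = x**2 twice; the resulting hidden constraints
    # couple the velocities and accelerations and implicitly determine theta on the road.
    import sympy as sp

    t = sp.symbols('t')
    x = sp.Function('x')(t)
    y = sp.Function('y')(t)

    c0 = y - x**2                 # (6.74)
    c1 = sp.diff(c0, t)           # v_y - 2 x v_x = 0
    c2 = sp.diff(c0, t, 2)        # a_y - 2 v_x**2 - 2 x a_x = 0
    print(c1, c2)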
Figure 6-1: State space plot showing the optimal trajectory of the two-dimensional car
problem
Figure 6-3: The optimal velocity in the x direction for the two-dimensional car prob-
lem
Figure 6-4: The optimal velocity in the y direction for the two-dimensional car problem
6.5.2 Brachistochrone
For a description of the brachistochrone problem, see Section 8.1, where it is solved
with an inequality path constraint. There are several possible formulations of the
brachistochrone problem [24, 84], including the following one [14]:
min_{θ(t),F(t),tf}  tf    (6.75)
subject to:
ẋ = u (6.76)
ẏ = v (6.77)
u̇ = F sin(θ) (6.78)
v̇ = g − F cos(θ) (6.79)
x(tf ) = 1
where u and v are respectively the horizontal and vertical velocities, θ is the angle at
which the bead is currently heading, and F is the normal contact force. Equations
(6.76–6.79) describe the forces acting on the bead, so a path constraint defining the
shape of the wire must be added:
tan(θ) = v / u    (6.80)
Interestingly, in this problem the index is dependent on the control matching. Either
F or θ can be matched to the constraint (6.80). If F is matched to (6.80), the problem
is index-2, while if θ is matched to (6.80) the problem is index-1. However, in the
latter case ẋ(0) = 0, ẏ(0) = 0 is not a valid initial condition because (6.80) is not
capable of uniquely defining θ at that point. Therefore, the index-2 formulation was used, and the initial condition:
x(0) = 0    (6.81)
y(0) = 1    (6.82)
ẋ(0) = 0    (6.83)
was imposed.

Table 6-2: Solution statistics for the brachistochrone problem
Integration Tolerance | Optimization Tolerance | IVP Solutions | Objective Function | CPU
10⁻⁵ | 10⁻³ | 10 | 1.772 | 1.33 s

[Figure 6-5: State-space plot (y versus x) of the optimal brachistochrone trajectory]
The dummy derivative algorithm calls for differentiating (6.80) once. This dy-
namic optimization problem was solved using a single linear element to approximate
θ. Solution statistics are given in Table 6-2, a state-space plot is shown in Figures 6-5,
and the F and θ trajectories are given in Figures 6-6 and 6-7. The solution agrees
with numerical solutions obtained for other (lower-index) formulations of the brachis-
tochrone problem.
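The single differentiation of (6.80) required by the dummy derivative algorithm can likewise be reproduced symbolically; the SymPy sketch below simply substitutes the force balances (6.78)–(6.79) into the differentiated constraint and is illustrative only:

    # Differentiate tan(theta) = v/u once and substitute udot, vdot from (6.78)-(6.79).
    import sympy as sp

    t, g = sp.symbols('t g')
    u = sp.Function('u')(t)           # horizontal velocity
    v = sp.Function('v')(t)           # vertical velocity
    theta = sp.Function('theta')(t)
    F = sp.Function('F')(t)           # normal contact force

    c = sp.tan(theta) - v/u           # (6.80)
    dc = sp.diff(c, t).subs({
        sp.diff(u, t): F*sp.sin(theta),
        sp.diff(v, t): g - F*sp.cos(theta),
    })
    print(sp.simplify(dc))            # hidden constraint relating thetadot, F, u, v and g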
[Figure 6-6: The optimal F (t) trajectory for the brachistochrone problem]
[Figure 6-7: The optimal θ(t) trajectory for the brachistochrone problem]
6.6 Conclusions
This chapter has demonstrated that equality path-constrained dynamic optimization
problems require the solution of high-index DAEs. Our method of appending the
equality path constraints to the DAE and solving the resulting augmented system
directly is superior to indirect methods for handling equality path constraints which
rely on including constraint violation measures in the NLP. Not only is our method
more efficient, but it eliminates problems that the indirect methods have associated
with the error control of the equality path constraints. Also, it sheds light on the
difficulties that are involved in specifying a feasible equality path-constrained dy-
namic optimization problem. Since the DAE is nonlinear, very few general statements
can be made about whether it is solvable, and, by extension, whether the dynamic
optimization problem is feasible.
The problem of control matching is also very difficult for nonlinear DAEs. How-
ever, the structural control matching algorithm is efficient and consistent with the
structural criteria that we use to detect and solve high-index DAEs. In the next
chapter, we extend these results for inequality path-constrained dynamic optimiza-
tion problems, where the state variables may not be path-constrained at all points on
the state trajectories.
Chapter 7
Inequality Path-Constrained
Dynamic Optimization
This chapter is concerned with the solution of dynamic optimization problems with inequality path constraints on the state variables:
min_{u(t),tf}  J = ψ(x(tf ), tf ) + ∫_{t0}^{tf} L(x, u, t) dt    (7.1)
f (ẋ, x, u, t) = 0    (7.2)
g(x) ≤ 0 (7.3)
In this formulation, x are state variables and u are control variables. The DAE (7.2)
is assumed to have index ≤ 1, although the results of this chapter are easily extended
for equality path-constrained or high-index DAEs for which an equivalent index-1
DAE may be derived using the dummy derivative method.
As in Chapter 6, the focus of this chapter is restricted to direct dynamic op-
timization methods that use control parameterization to approximate the control
variable trajectories. Chapter 6 showed how to include equality path-constraints in
the dynamic optimization problem, but did not address how to include inequality
path-constraints, which add an additional layer of complexity to the problem.
Two substantial issues must be addressed to solve problems of the form (7.1–7.4)
using control parameterization. First, the DAE (7.2) can become high-index along
any trajectory segments on which (7.3) is active, and the trajectories of some subset
of u are prescribed during those segments. Therefore, the index of the DAE and the
dynamic and design degrees of freedom can fluctuate between segments where the
constraints are not active and segments where constraints become active. Second,
the sequence of constraint activations and deactivations is unknown a priori and
thus inequality state-constrained dynamic optimization problems can be viewed from
the perspective of hybrid discrete-continuous dynamic optimization problems. This
fact has not been previously recognized, but it is a very significant insight because it
means that there are potentially combinatorial aspects to the solution of inequality
constrained dynamic optimization problems.
The first part of this chapter demonstrates the connection between hybrid discrete-
continuous and inequality path-constrained dynamic optimization problems. Exist-
ing methods for handling these problems using control parameterization are then
reviewed. The later sections of the chapter describe three new methods developed
in the course of this thesis which take advantage of the high-index nature of state
constrained dynamic optimization problems. Of these, the fluctuating index infeasi-
ble path method has proven the most robust method at present, and examples of its
use are provided in Chapter 8. The other two methods, the modified slack variable
method and the fluctuating index feasible path method are not useful for as broad a
class of problems, but are described since they are instructive in illustrating the issues
involved in solving state inequality constrained dynamic optimization problems. Fur-
ther, if the nonsmoothness of the resulting master NLP can be dealt with effectively,
the fluctuating index feasible path method may be useful in the future because it can
handle the sequencing aspect of the problem without introducing integer variables.
7.1 Inequality Path Constraints and the Hybrid
Dynamic Optimization Problem
When a subset of the inequality path constraints (7.3) becomes active along the trajectory, the DAE (7.2) is augmented with the corresponding equality constraints:
gj (x) = 0    (7.5)
where gj denotes the set of locally active constraints in (7.3). Each equation in gj takes up one degree of freedom in the augmented DAE, and therefore for each active constraint a control is released and permitted to be implicitly determined by the active inequality.
However, an IVP in which the degrees of freedom are fluctuating along the solution
trajectory is problematic from the point of view of analysis and solution.
[Figure 7-1: Feasible trajectories between the initial and final points, showing constrained and unconstrained segments along the inequality path constraint]
During unconstrained portions of the trajectory, the relevant system is:
f (ẋ, x, us , t) = 0    (7.6)
us = uc    (7.7)
uc = uc (t)    (7.8)
whereas during constrained portions of the trajectory the system becomes:
f (ẋ, x, us , t) = 0    (7.9)
g(x) = 0    (7.10)
uc = uc (t)    (7.11)
Note that (7.7) has been replaced with (7.10), but the degrees of freedom for the
overall dynamic system remain unchanged. Thus, during unconstrained portions of
the trajectory the control that actually influences the state (us ) is equivalent to the
forcing function for the system (uc ), whereas in constrained portions of the trajectory
us is determined implicitly by the active path constraint (7.10) and is unrelated to uc .
In a control parameterization context, uc would be prescribed by the control parameters over the entire time horizon, but during constrained portions of the trajectory it would not influence the solution trajectory.
Figure 7-2: Autonomous switching of constrained dynamic simulation
Equations (7.6–7.11) correspond to a hybrid dynamic system that experiences au-
tonomous switching in response to state (or implicit) events [110]. This behavior is
illustrated in Figure 7-2 using the finite automaton representation introduced in [15].
Further, as shown below, the differential index of the constrained problem will in
general differ from that of the unconstrained problem.
Given these preliminaries, it is now evident that a path-constrained dynamic op-
timization with a single control and a single inequality is equivalent to the following
hybrid discrete/continuous dynamic optimization problem:
min_{uc(t),tf}  J = ψ(x(tf ), tf ) + ∫_{t0}^{tf} L(x, us , t) dt    (7.12)
subject to:
f (ẋ, x, us , t) = 0 (7.13)
us = uc      ∀t ∈ T : g(x) < 0
g(x) = 0     ∀t ∈ T : g(x) = 0        (7.14)
Provided each inequality is matched with a unique control, this problem formulation can be extended to multiple controls and inequalities. Note that this hybrid dynamic optimization problem exhibits the following properties:
• The number and order of implicit events at the optimum are unknown a priori.
subject to:
Ṗ5 = (ln(2)/5)(0.79 P − P5 )    (7.16)
Ṗt = −(q/Vt ) P    (7.17)
Ṗ = u    (7.18)
State A: −1 ≤ u(t) ≤ 3; switch to State B if P5 ≥ 1.58 P
State B: u(t) = 0; wait for 4 min, then switch to State A    (7.19)
Pt ≥ P ∀t ∈ [0, tf ] (7.20)
P (0) = 1 (7.22)
P (tb ) = 6 (7.23)
P (tf ) = 1 (7.24)
0 ≤ tb ≤ tf (7.25)
In this example, only a tissue with a five minute half-time is modeled, yielding equa-
tion (7.16), where P5 is the partial pressure of N2 in the tissue, and P is the sur-
rounding pressure.
The control for this problem is the rate of ascent/descent u, and since pressure
P is equivalent to depth, u is equivalent to the rate of change of P (7.18). Diving
tables establish decompression stops at different depths depending on the depth-
time profile of the dive. These tables follow conceptually the theoretical principles
developed by Haldane combined with more recent developments and data. In this
simplified example, it is assumed that when the partial pressure in the tissue becomes
twice the environmental pressure, the diver must make a decompression stop of four
minutes, which effectively constrains the admissible control profile (7.19). Note that
discontinuities in the control profile occur at points in time determined by the state
trajectory P5 (t) crossing the state trajectory 1.58P (t), and therefore they are not
known a priori but rather are determined implicitly by the solution of the model
equations (this type of implicit discontinuity is commonly called a state event in the
simulation literature [110]). Hence, the dynamic optimization has to determine the
number and ordering of these implicit discontinuities along the optimal trajectory. In
addition, the descent and ascent rates are constrained (at 30 m/min and 10 m/min
respectively).
Imagine a scenario in which the diver wishes to collect an item from the ocean
floor at 50 m with minimum total consumption of air. The volume of a tank of
air is Vt = 15 L and it is charged to Pt (0) = 200 bar. Average air consumption
is q = 8 L/min, but the mass consumed varies with the depth (pressure), yielding
(7.17). A parameter tb is introduced to model the time at which the diver reaches
the bottom, and point constraints (7.23–7.24) force the diver to travel to the bottom
and then surface.
In the first approach, two admissible control profiles are examined: constant ascent rates of v = 5 m/min and v = 10 m/min (Figures 7-3 to 7-6).
Observe that the different admissible controls yield different sequences of decom-
pression stops. The consequences of these control policies are that in the first case
the diver is forced to make two decompression stops, augmenting the time and air
consumption of the dive, whereas in the second case a more rapid ascent rate only
requires one decompression stop. Further, if only constant ascent rates are admitted and the ascent rate is dropped further, only one stop becomes necessary again.
In general, this need to determine the number and ordering of implicit discontinu-
ities along the optimum trajectory appears to be the most difficult issue with the
optimization of discontinuous dynamic systems.
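The implicit nature of these discontinuities is easy to see in a simple simulation. The sketch below (Python, forward Euler with naive sign-change checking rather than the state event location algorithm of [110]; the initial tissue loading is a hypothetical value) locates the first crossing of P5(t) and 1.58 P(t) during a constant-rate ascent:

    # Constant-rate ascent from the bottom; the decompression stop is triggered by the
    # zero crossing of the discontinuity function z = P5 - 1.58*P, a state event.
    import math

    dt = 0.001                       # min
    P, P5, t = 6.0, 4.0, 0.0         # bottom pressure 6 bar; hypothetical tissue loading 4 bar
    v = -1.0                         # ascent at 10 m/min is roughly -1 bar/min

    z = P5 - 1.58*P
    while P > 1.0:
        P5 += dt*(math.log(2)/5.0)*(0.79*P - P5)   # five-minute half-time tissue, (7.16)
        P  += dt*v
        t  += dt
        z_new = P5 - 1.58*P
        if z < 0.0 <= z_new:                        # sign change brackets the state event
            print("decompression stop triggered at t = %.2f min, P = %.2f bar" % (t, P))
            break
        z = z_new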
[Figure 7-3: P, P5, and P5/P versus t for a constant ascent rate v = 5 m/min]
[Figure 7-4: Tank pressure PT versus t for a constant ascent rate v = 5 m/min]
[Figure 7-5: P, P5, and P5/P versus t for a constant ascent rate v = 10 m/min]
[Figure 7-6: Tank pressure PT versus t for a constant ascent rate v = 10 m/min]
7.2 Review of Methods for Handling Inequality
Path Constraints in Control Parameterization
Several methods have been proposed for handling state variable path constraints
within the control parameterization framework. Vassiliadis [142] also reviews such
methods. There are four classes of methods: slack variables, random search, penalty
functions, and interior-point constraints.
7.2.1 Slack variables
Although the slack variable approach [74, 75, 140] (often known as Valentine's method)
was not developed for use with control parameterization, its extension to this frame-
work is straightforward. The terminology and approach detailed in [74] is somewhat
ad hoc because the paper predates all of the work on DAEs. However, this work can
be put in the context of modern DAE theory.
The slack variable approach described in [74] is valid when the DAE (7.2) is an
ODE:
ẋ = f (x, u, t) (7.26)
The method proceeds by appending the state variable inequality (7.3) to (7.26) using
a slack variable a(t):
g(x) + ½ a² = 0    (7.27)
where it is assumed here that g(·) → R. Squaring the slack variable has the effect of
enforcing the inequality constraint for all admissible trajectories of a.
Equation (7.27) is then differentiated repeatedly, where g^(i) = d^i g/dt^i and a^(i) = d^i a/dt^i denote the i-th time derivatives. The differentiation is carried out p times until the system formed by (7.26–7.30) may be solved for some ui ∈ u. The control ui is then eliminated from the problem, a^(p) becomes a control variable, and the time derivatives a, . . . , a^(p−1) become differential state variables. Note that the a^(p−1) are essentially being enforced as dummy derivatives, and in fact this method is very similar to the
elimination index-reduction method of [22].
There are several significant problems with this method:
• Also observed in [74] is that there is no obvious extension of this method for
problems that contain more state path inequality constraints than control vari-
ables.
7.2.2 Random search technique
A random search dynamic optimization solution technique was proposed in [7, 8, 32].
Although this method uses control parameterization, it does not make use of gradient
information in the NLP optimizer. Instead, normal probability distributions are used
to generate different decision parameter vectors, which are then used to solve the
DAE and evaluate the objective function.
7.2.3 Penalty functions
Another method that has been used to enforce inequality path constraints is the use
of a penalty function (for example [24, 146]). One way to use penalty functions is to
augment the objective function (7.1) as:
J̃ = J + K ∫_{t0}^{tf} rᵀ(t) r(t) dt ,    r ∈ R^ng    (7.31)
or as:
Ĵ = J + Σ_{i=1}^{ng} Ki ∫_{t0}^{tf} ri (x(t)) dt    (7.32)
where K ∈ R+ is a large positive number and r(t) ∈ R^ng is a measure of the constraint violation, e.g.:
ri (t) = max (0, gi (x(t)))     i = 1 . . . ng    (7.33)
This approach can cause numerical difficulties because it requires K → ∞ to sat-
isfy the constraint exactly. In addition, many authors that have used operators such
as max in (7.33) have not recognized that such operators introduce implicit discon-
tinuities that require special treatment during integration [110] and calculation of
sensitivities [55].
A second approach, analogous to (6.6), transforms the inequality path constraints into end-point constraints. This approach also causes numerical difficulties because the gradients of the end-point constraints are zero at the optimum, which reduces the rate of convergence near the solution.
Penalty function methods and end-point constraint methods suffer from problems
due to the selection of a suitable r(t). In [133] computational experience was reported
which showed that using r(t) of the form (7.33) rarely converges for nontrivial dy-
namic optimization problems. The primary reason for these difficulties is that (7.33)
is nondifferentiable when gi (t) = 0. Such constraints also introduce implicit disconti-
nuities in the simulation subproblem, which must also be handled carefully to avoid
numerical difficulties [110]. In [133], a different r(t) was proposed of the form:
ri (t) = gi (t)                     if gi (t) > ε
ri (t) = (gi (t) + ε)² / (4ε)       if −ε ≤ gi (t) ≤ ε        i = 1 . . . ng    (7.35)
ri (t) = 0                          if gi (t) ≤ −ε
This smooth function is differentiable everywhere, including at gi (t) = 0, and contains no implicit discontinuities. However, (7.35) does reduce the feasible state space region since the true inequality path constraints can only be active in the limiting case when ε = 0.
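The difference between the two measures is easy to tabulate; the sketch below uses the reconstructed middle branch of (7.35) given above (assumed to be (gi + ε)²/(4ε)) and compares it with the max-based measure (7.33):

    # Smoothed violation measure (7.35) versus the nonsmooth measure (7.33).
    def r_max(g):
        return max(0.0, g)

    def r_smooth(g, eps):
        if g > eps:
            return g
        if g >= -eps:
            return (g + eps)**2 / (4.0*eps)
        return 0.0

    for g in (-0.2, -0.05, 0.0, 0.05, 0.2):
        print(g, r_max(g), r_smooth(g, eps=0.05))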
7.2.5 Interior-point constraints
The interior-point (or point constraint) approach enforces the path constraints only at a finite number of times ti by adding constraints of the form:
ρ( g(x(ti )) ) ≤ εi      i = 1 . . . npc
to the master NLP, where ρ(·) : R^ng → R is a function that provides a measure of the violation of the constraints at a given ti , the parameter εi ∈ R+ is a small positive number, and npc is the number of point constraints.
The problem with this method is that npc → ∞ is necessary to guarantee that
the state path constraint is not violated during any portion of the optimal trajectory.
Since each point constraint adds a dense row to the Jacobian of the constraints of
the NLP, large numbers of point constraints can create NLPs that cannot be solved
using currently available numerical algorithms. If only a few point constraints are
used (one per finite element is typical) there is no guarantee that state trajectories
will satisfy the state constraint to within the tolerances that were used to obtain a
numerical solution of the IVP.
A hybrid approach was used in [33, 142] that combines both the penalty function
and interior point constraint methods. This approach transforms the state variable
inequality path constraint into an end point inequality constraint:
χi (tf ) ≤ εi    (7.37)
where εi is a small positive number, and χi is evaluated by appending the following equation to (7.2):
χ̇i = [ max (0, gi (x)) ]²    (7.38)
χ(0) = 0    (7.39)
In addition, the following point constraints are included:
gi (x(tk )) ≤ 0    k = 1 . . . NFE    (7.40)
where tk are the boundaries of the control finite elements. Although the constraint
(7.37) is in principle sufficient to ensure that the original inequality constraint is not
violated, it provides no information to the NLP solver when gi (x) = 0. The point
constraints (7.40) are included to provide some information to the optimizer when
gi (x) = 0.
The disadvantages of this hybrid method are:
• The state variable path constraints are satisfied to within known tolerances
only at the points during the state trajectory where NLP point constraints
have been imposed. The state variable path constraints may not be satisfied to
within acceptable tolerances at other points along the trajectory.
• The value of the control that causes the constraint to be active is implicitly de-
fined, and therefore the NLP solver must iterate to find the control whenever an
inequality path constraint is active. Such iteration decreases the computational
efficiency of the method. This method can be viewed in some sense as solving
a high-index DAE by iterating on an index-1 DAE until all of the equations of
the high-index DAE are satisfied.
• The use of the max operator in (7.38), coupled with the discrete nature of (7.40)
can cause the method to converge slowly if the constraint violation is occurring
at points along the solution trajectory other than at the end of the control finite
elements.
7.3 A Modified Slack Variable Approach
This section describes a modification to the slack variable approach that is a significant
improvement over the original slack variable method. As mentioned in the previous
section, there are several difficulties with the original slack variable approach that
have kept it from being widely used. An example of these difficulties can be seen by
considering the following problem which was discussed in [66, 74, 90, 142]:
min_{u(t)}  V = ∫₀¹ ( x1² + x2² + 0.005 u² ) dt    (7.41)
subject to:
ẋ1 = x2 (7.42)
x1 (0) = 0 x2 (0) = −1
t ∈ [0, 1]
A slightly different form was used in [66, 142] where (7.43) was replaced by ẋ2 = x2 +u.
When the method of [74] is applied to this problem (7.41–7.44), (7.44) is replaced
by:
x2 − 8(t − 0.5)² + 0.5 + ½ a² = 0    (7.45)
Differentiating (7.45) once, solving for u, and eliminating u from the problem yields the following unconstrained problem:
min_{a1(t)}  V = ∫₀¹ ( x1² + x2² + 0.005 (x2 + 8(t − 0.5) − a a1 )² ) dt    (7.47)
subject to:
ẋ1 = x2 (7.48)
ȧ = a1 (7.50)
x1 (0) = 0    x2 (0) = −1    a(0) = √5
t ∈ [0, 1]
The results of a numerical solution of this problem using the ABACUSS control
parameterization algorithm are shown in the first row of Table 7-1.
Alternatively, applying the dummy derivative method directly to the slack variable formulation of (7.41–7.44) gives:
min_{a1(t)}  V = ∫₀¹ ( x1² + x2² + 0.005 u² ) dt    (7.51)
subject to:
ẋ1 = x2 (7.52)
x1 (0) = 0 x2 (0) = −1
t ∈ [0, 1]
However, the H11 matrix for this problem that is used in the dummy derivative method becomes singular when a = 0, that is, whenever the inequality constraint is active.
A modified slack variable method is proposed here to handle both the problems
with algebraic manipulations and problems with singular arcs. The modification is
to make a the control variable, and to apply the method of dummy derivatives to the
resulting high-index DAE.
Thus, the state inequality-constrained problem given by (7.1), (7.2), and (7.3)
may be transformed into a state equality-constrained problem using a slack variable
a:
min_{a(t),tf}  J = ψ (x(tf ), tf ) + ∫_{t0}^{tf} L (x, u, t) dt    (7.57)
f (ẋ, x, u, t) = 0 (7.58)
g(x) = − ½ a²    (7.59)
φ (ẋ (to ) , x (to ) , to ) = 0 (7.60)
provided that the dimension of g does not exceed that of u, and that the resulting
(possibly high-index) DAE is solvable using the dummy derivative method.
Using this modified slack variable method on the problem (7.41–7.44) gives:
min_{a(t)}  V = ∫₀¹ ( x1² + x2² + 0.005 u² ) dt    (7.61)
subject to:
ẋ1 = x2 (7.62)
ẋ2 = x2 + u (7.63)
x2 − 8(t − 0.5)² + 0.5 = − ½ a²    (7.64)
x1 (0) = 0    a(0) = √5
t ∈ [0, 1]
Applying the dummy derivative method to (7.61–7.64) yields the equivalent index-1 DAE:
ẋ1 = x2 (7.65)
x̄2 = x2 + u (7.66)
x2 − 8(t − 0.5)² + 0.5 = − ½ a²    (7.67)
x̄2 − 16(t − 0.5) = −a ā    (7.68)
ā = da/dt    (7.69)
This problem does not have a singular arc when a = 0, and the corrector matrix for
the equivalent index-1 formulation (7.65–7.69) does not become ill-conditioned when
a = 0. Solution statistics for this problem are also shown in Table 7-1. They show
that the modified slack variable approach required significantly fewer iterations to
find the optimum, and actually found a slightly better value for the optimal objective
function.
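The dummy derivative equation (7.68) is simply one time differentiation of (7.67); this can be checked symbolically with the short SymPy sketch below (illustrative only):

    # Differentiate the slacked constraint once; the result is (7.68) with x2' and a'
    # replaced by the dummy variables.
    import sympy as sp

    t = sp.symbols('t')
    x2 = sp.Function('x2')(t)
    a = sp.Function('a')(t)

    c = x2 - 8*(t - sp.Rational(1, 2))**2 + sp.Rational(1, 2) + a**2/2   # (7.67) as c = 0
    print(sp.expand(sp.diff(c, t)))   # x2' + a*a' - 16*t + 8, i.e. x2' - 16(t - 1/2) = -a*a'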
Figure 7-7 shows the optimal trajectory for the control variable u. The u trajectory
contains discontinuities at the control finite element boundaries, which is due to the
choice of control parameterization of a. Since a controls an equation that involves
only a state variable, u is related to the derivative of a. Since a has been control
parameterized with linear finite elements with C0 continuity, a variable such as u that
is algebraically dependent upon the derivative of a can not be C0 continuous at the
finite element boundaries. The discontinuities in u can be eliminated if higher-order
polynomials are used in the control parameterization of a. Figure 7-8 corresponds to
the last row of Table 7-1, and shows the trajectory of u when quadratic finite elements
are used to approximate a.
The ability to solve high-index DAEs has allowed the creation of a modified slack
variable method with performance characteristics that are superior to the original
method. The modified version is interesting because it clearly shows the link between
state inequality constrained dynamic optimization problems and high-index DAEs.
However, there are a number of problems with this method that keep it from becoming
generally useful. They are:
• It does not appear to be possible to extend the method for problems that
contain more inequalities than control variables. In general, it is not possible
to have more independent active state variable constraints at any point in time
than there are control variables. However, there exist many problems that have
more state inequality constraints than control variables, but that never have
more active constraints than they have control variables, which could not be
solved using this method.
• The problem becomes high-index for its entire trajectory, even if a state inequal-
ity constraint never becomes active. This phenomenon requires us to solve an
equivalent index-1 DAE that may be much larger than the DAE of the problem
without constraints, thus decreasing the efficiency of the overall optimization.
Figure 7-7: Control variable in modified slack variable method example with ‘a’ ap-
proximated by linear finite elements
Figure 7-8: Control variable in modified slack variable method example with ‘a’ ap-
proximated by quadratic finite elements
7.4 Fluctuating-Index Feasible-Path Method for
Dynamic Optimization with Inequality Path
Constraints
This section describes the fluctuating-index feasible-path (FIFP) method for solving inequality path-constrained dynamic optimization problems. The inequality path constraints are not violated during intermediate iterations
because the FIFP method uses an implicit event constrained dynamic simulation al-
gorithm. That is, the hybrid discrete/continuous problem given by (7.13–7.14) is
solved directly. The activation and deactivation of the inequality path constraints
are implicit state events that are detected during the integration. Inequality path
constraint activation and deactivation events are termed constraint events. Implicit
event constrained dynamic simulation requires detecting the constraint events, deter-
mining which controls cease to be degrees of freedom in the constrained portions of
the trajectory, and integrating numerically the resulting (possibly) fluctuating-index
DAE.
The implicit event constrained dynamic simulation problem is:
f (ẋ, x, us , t) = 0    (7.70)
uc = uc (t)    (7.71)
αi (usi − uci ) + Σ_{j=1}^{ng} βij gj (x) = 0    ∀i = 1 . . . nu , ∀t ∈ T    (7.72)
αi + Σ_{j=1}^{ng} βij = 1    ∀i = 1 . . . nu , ∀t ∈ T    (7.73)
Σ_{i=1}^{nu} βij ≤ 1    ∀j = 1 . . . ng , ∀t ∈ T    (7.74)
This formulation shows how the equations in the active DAE change upon activation
and deactivation of path constraints. The variables α and β are determined by the
solution of the implicit event constrained dynamic simulation problem. When a con-
straint gj becomes active, some βij = 1 and αi = 0. The βij and αi are determined
by matching controls ui to constraints gj , which is obtained using the method de-
scribed in Chapter 6. It is assumed that all path constraints are inactive at the initial
condition, and therefore that α(t0 ) = 1 and β(t0 ) = 0.
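The bookkeeping implied by (7.73)–(7.74) is straightforward; the sketch below (Python, with a hypothetical control matching supplied as data) sets α and β for a given set of active constraints:

    # Set alpha and beta for the currently active constraints; each active constraint j
    # captures its matched control i (beta_ij = 1, alpha_i = 0), satisfying (7.73)-(7.74).
    def update_switches(active, matching, n_u, n_g):
        alpha = [1]*n_u
        beta = [[0]*n_g for _ in range(n_u)]
        for j in active:
            i = matching[j]
            alpha[i] = 0
            beta[i][j] = 1
        return alpha, beta

    alpha, beta = update_switches(active={0}, matching={0: 1}, n_u=2, n_g=1)
    print(alpha, beta)   # [1, 0] [[0], [1]]: the second control is determined by g1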
Assuming that a standard BDF integration method is used to solve the DAE, the
algorithm performed at each step of the integration is:
(a) For every active gj (x), set one βij = 1 and αi = 0. Set all other β\{βij } = 0
and α \ {αi } = 1.
3. Apply the method of dummy derivatives to derive an equivalent index-1 formu-
lation for the current problem.
5. Take one integration step with the current index-1 formulation of the problem,
performing dummy pivoting as necessary. If not at the end of the simulation,
go to Step 1.
From a practical standpoint, there are several aspects of the above algorithm that need
to be described in detail. Among these are the detection of activation and deactivation of
constraints and transferring initial conditions between constrained and unconstrained
portions of the trajectory. These details are discussed in the following subsections.
7.4.1 Constraint activation
An inequality constraint activates at the earliest point in time at which the previously
inactive constraint becomes satisfied with equality. These points in time are often
termed state events, because they are defined implicitly by the evolution of the system
state and are not known a priori. The state events marking the constraint activation
may be identified algorithmically by introducing a new algebraic variable zj for each
currently inactive constraint:
zj = gj (x) (7.75)
These discontinuity functions are appended to the current equivalent index-1 for-
mulation of the problem, and constraint activations are detected by advancing the
simulation speculatively until a zero crossing occurs in zj (t). This situation corre-
sponds to a special case of the state events handled by the algorithm described in
[110], which handles equations of the form (7.75) efficiently, and guarantees that all
state events are processed in strict time order.
7.4.2 Constraint deactivation
Deactivation of an active constraint is detected by considering a perturbed form of (7.72):
αi (usi − uci ) + Σ_{j=1}^{ng} βij ( gj (x) − εj ) = 0    ∀i = 1 . . . nu    (7.76)
Equation (7.76) contains a set of parameters {εj } such that when βij = 1 and εj = 0, the corresponding constraint gj is active; and if εj < 0, the corresponding constraint has been “tightened” and the feasible state space decreased. As shown below, (7.76) is not actually solved; rather it allows the calculation of the sensitivity of the system to {εj }, which is used to detect inequality deactivation.
Differentiating the system with respect to εj gives the sensitivity equations:
(∂f/∂ẋ)(∂ẋ/∂εj ) + (∂f/∂x)(∂x/∂εj ) + (∂f/∂us )(∂us/∂εj ) = 0    ∀j    (7.77)
αi (∂usi/∂εj ) + Σ_{k=1}^{ng} βik (∂gk/∂x)ᵀ (∂x/∂εj ) = Σ_{k=1}^{ng} βik δjk    ∀i, j    (7.78)
where δjk is the Kronecker delta function. Note that, although a large number of sensitivity equations are defined by (7.77–7.78), all sensitivities with respect to εj are identically zero if gj (x) is inactive. Provided that all βij = 0 at t0 , all sensitivities with respect to ε may be initialized to zero. Transfer conditions for these sensitivities when
constraints become active are the same as the general sensitivity transfer conditions
given in Chapter 3.
For each active constraint gj and its matched control ui , the discontinuity function:
zij = uci − usi    (7.80)
is monitored during the integration. These algebraic equations are appended to the DAE and solved as detailed in [110].
Theorem 7.1. If at any state event time defined by a zero crossing of (7.80), the condition:
( (∂usi/∂εj ) < 0 ∧ (uci > usi ) )  ∨  ( (∂usi/∂εj ) > 0 ∧ (uci < usi ) )    (7.81)
holds, then the active inequality constraint gj deactivates at that time.
Proof. A negative value of εj corresponds to tightening the constraint, and thus moving the solution trajectory into the feasible region. If the sensitivity (∂usi/∂εj ) is negative, then uci must increase for the solution trajectory to move away from the inequality and into the feasible region. Therefore, if uci becomes greater than usi at some point in the time domain, the specified control will inactivate the inequality. The reasoning for the case in which (∂usi/∂εj ) > 0 follows the same logic.
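In code, the deactivation test (7.81) amounts to a sign comparison; the following sketch (Python, with hypothetical numerical values) applies it to a single matched control/constraint pair:

    # Deactivation test (7.81): compare the sign of the sensitivity of the implicitly
    # determined control us_i with the relative position of the specified control uc_i.
    def constraint_deactivates(dus_deps, u_spec, u_impl):
        return (dus_deps < 0 and u_spec > u_impl) or (dus_deps > 0 and u_spec < u_impl)

    print(constraint_deactivates(-0.3, u_spec=1.2, u_impl=0.8))  # True: uc has overtaken us
    print(constraint_deactivates(-0.3, u_spec=0.5, u_impl=0.8))  # False: constraint stays active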
A transition function maps the values of the state variables before a constraint event to the state variables after the constraint event. The transition function is:
T (x− , y − , x+ , y + ) = 0    (7.82)
where the “−” denotes the values before the constraint activation, and the “+” de-
notes the values after activation. In some cases, (7.82) could describe a loss function
upon activation of the constraint, i.e., frictional losses due to an object interacting
with a surface. However, the usual method for determining this transition function is
to specify some subset of the state variables x and y which remain continuous across
the constraint event. The dimension of the vector function T is equal to the dynamic
degrees of freedom of the DAE that is active after the constraint event.
Continuity conditions for arbitrary-index DAEs are stated in [64], assuming that
the index and the vector field are the same on both sides of the discontinuity. The
condition is that state variables are continuous across a discontinuity in the controls
provided that the corresponding underlying ODE does not depend on any derivatives
of the controls that are causing the discontinuity. A successive linear programming
(SLP) algorithm was proposed in [64] to find the dependencies of the variable deriva-
tives on the derivatives of the controls that are causing the discontinuity.
The continuity conditions given in [64] are not directly applicable to the FIFP
method because the FIFP method constraint events cause the DAE after the con-
straint event to have a different vector field and possibly a different index. It is
possible, however, to define transition functions for the FIFP method by noting that
there must be a corresponding explicit input-jump simulation for any valid implicit
event simulation. That is, there must be some input function ū(t) for which the
simulation:
f (ẋ, x, us , t) = 0 (7.83)
us = ū(t) (7.84)
produces the same state variable trajectories as the implicit-event simulation (7.70–
7.74). Note that ū(t) is permitted to contain discontinuities. The existence of a
corresponding explicit input-jump simulation is guaranteed because a valid implicit
event simulation must define a unique trajectory us which can then be used to set
ū(t) in the explicit input-jump simulation. The corresponding explicit input-jump
simulation is not necessarily unique since there could be more than one ū(t) which
produce the same state trajectories.
The transition conditions in [64] are valid for the corresponding explicit input-
jump simulation. Since the implicit event and explicit input-jump simulations produce
the same state trajectories, the transition conditions for the state variables must also
be the same for the two DAEs. In practice, defining the transition conditions for the
implicit event simulation is even easier than the SLP algorithm in [64] would imply
when the explicit input-jump simulation has index ≤ 1, since then it is possible to
assume continuity of the variables that are differential state variables in the explicit
input-jump simulation.
Since the index of the DAE in the implicit event simulation may be greater than
the index of the DAE in the explicit input-jump simulation, there may be more con-
tinuity assumptions for state variables than there are dynamic degrees of freedom for
the implicit event simulation. In general, the number of degrees of freedom available
will be less than or equal to the number of continuity conditions required for (7.2).
For example, if (7.2) has index 0, then the number of conditions on the continuity
of x must be equal to the dimensionality of x, whereas the number of differential
state variables in the augmented equivalent index-1 system derived by the method of
dummy derivatives will be equal to or less than the dimensionality of x. Hence, in
general only that subset of the continuity conditions corresponding to the differential
state variables remaining in the equivalent index-1 DAE describing the new segment
can be satisfied by the implicit event constrained dynamic simulation algorithm. The
additional continuity assumptions may or may not be automatically enforced by the
DAE itself.
To see an example of continuity assumptions for the FIFP method, consider the
DAE:
ẋ1 = x1 + x2 (7.85)
u = u(t) (7.87)
x1 + x2 ≤ 10 (7.88)
The continuity assumptions for (7.85–7.87) upon a jump in u(t) are that x1 and x2 are
continuous. However, an equivalent index-1 system for the implicit event simulation
after activation of (7.88) is:
x̄1 = x1 + x2 (7.89)
x1 + x2 = 10 (7.91)
which has only one dynamic degree of freedom. However, either of the transition
functions $x_1^{(+)} = x_1^{(-)}$ or $x_2^{(+)} = x_2^{(-)}$ may be chosen, since whichever continuity
assumption is not explicitly defined will in this case be implicitly enforced by (7.91).
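For this small example, the reinitialization after the activation reduces to a linear system in $x_1^{(+)}$, $x_2^{(+)}$ and the dummy variable $\bar{x}_1$; the sketch below is only an illustration, assuming that $x_2^{(+)} = x_2^{(-)}$ is the continuity condition chosen and that the pre-event values already satisfy the constraint (as they do at an implicit activation event):

    import numpy as np

    # Values immediately before the constraint event (illustrative, with x1 + x2 = 10).
    x1_minus, x2_minus = 7.5, 2.5

    # Unknowns: [x1_plus, x2_plus, x1bar]
    # Equations:  x1_plus + x2_plus          = 10        (active constraint, 7.91)
    #            -x1_plus - x2_plus + x1bar  = 0         (dummy-derivative equation, 7.89)
    #             x2_plus                    = x2_minus  (chosen continuity condition)
    A = np.array([[1.0, 1.0, 0.0],
                  [-1.0, -1.0, 1.0],
                  [0.0, 1.0, 0.0]])
    b = np.array([10.0, 0.0, x2_minus])
    x1_plus, x2_plus, x1bar = np.linalg.solve(A, b)
    print(x1_plus, x2_plus, x1bar)  # 7.5 2.5 10.0 -- x1 is also continuous, enforced by (7.91)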
The master NLP adjusts the $u(t)$ control profiles in a manner such that, at the optimum, the extra
continuity conditions that cannot be enforced by the implicit event constrained dynamic simulation
algorithm are satisfied by the NLP. Thus, while the method is always feasible with
respect to path constraints, it is not necessarily feasible with respect to continuity
conditions at intermediate iterations.
Consider, as an example, a tank level control problem. In the model, $F_{in}$ and $F_{out}$ are the
flowrates in and out of the tank, $A$ is the area of the tank, $h$ is the liquid height in the tank,
and $f$ and $g$ are functions that describe, respectively, the hydraulics of the flow out of the
tank and the flow response to the control signal.
Since the liquid height is constrained to be lower than hmax , there is an inequality
constraint on h:
h ≤ hmax (7.96)
When this inequality is active the control u becomes an algebraic state variable in
the resulting high-index augmented DAE. Applying the dummy derivative method to
the augmented DAE results in the following equivalent index-1 DAE:
h = hmax (7.102)
h̄ = 0 (7.103)
h̃ = 0 (7.104)
where h̄, h̃, F̄in , and F̄out are dummy algebraic variables (h̃ eliminates ḧ).
Equations (7.97) and (7.103) indicate that the flow into the tank must equal the
flow out to maintain the level at its threshold, and hence the stem position u required
to achieve this is also prescribed algebraically. Since for an arbitrary control signal
profile there is no guarantee that the stem position immediately before the constraint
activation will equal that prescribed at the beginning of the constrained segment
of the trajectory, it must be allowed to jump, which is clearly non-physical and
forbidden by the continuity conditions for the unconstrained DAE. In fact, in this case
the equivalent index-1 DAE has no dynamic degrees of freedom to define the initial
condition, so both the stem position and the level could potentially jump. However,
continuity of the liquid level will always be redundant with the liquid level inequality,
so only the stem position can actually jump. Although the constrained dynamic
simulation subproblems allow jumps, the point constraints in the master NLP will
force the optimal control profile to be one in which the stem position does not jump.
If the unconstrained DAE (7.2) is already high index (e.g., if it includes equality path
constraints), then the continuity conditions stated in [64] must be enforced by point
constraints in the master NLP.
The points along the solution trajectory where inequality deactivations occur are
defined by implicit events. Therefore, the sensitivity variable transfer conditions are
defined by the relations given in Chapter 3. However, the deactivation event (7.81)
requires special attention because it contains a sensitivity variable.
For the deactivation condition (7.81) to be true, one of (7.79–7.80) must hold at
the constraint activation time t∗ . The transfer conditions for the sensitivity functions
follow the derivation given in Chapter 3. Obtaining the sensitivity transfer functions
requires differentiating the discontinuity function, which if (7.79) is currently true
gives:
for all αi = 0.
If the discontinuity function is instead (7.80), the sensitivity transfer function is:
This condition has the added complication that it requires second-order sensitivity
information, namely $(\partial^2 u_i^s/\partial \epsilon_j\,\partial p)$. Obtaining this second order sensitivity informa-
tion requires the solution of the second-order analog to (7.77–7.78). It is possible
to extend the algorithm described in Chapter 4 to handle these second order sen-
sitivities. Although the second order sensitivities are more costly to calculate than
first-order sensitivities, only a few are required and hence the additional cost should
not be significant.
There are two possible behaviors for the control variable upon deactivation of an
inequality. The state and parameterized trajectories of the control may cross, or else
the special sensitivity may cross zero, making the control jump. The latter case is
more costly to handle in a numerical algorithm because of the need for second order
sensitivities.
7.4.5 Example
$$\min_{\theta(t),\, t_f} \; t_f \qquad (7.107)$$
subject to:
ẋ = v cos θ (7.108)
ẏ = v sin θ (7.109)
v̇ = g sin θ (7.110)
y ≥ ax − b (7.111)
x(tf ) = xf (7.112)
where tf is the final time, g is the gravitational constant, and a and b are given.
Equation (7.111) defines an inequality constraint on the state variables x and y,
(7.112) defines a final-point constraint, and (7.113) gives the initial conditions.
Table 7-2: Solution statistics for constrained brachistochrone
Initial Guess QP LS CPU
θ(t) = −1.558 + 2.645t 2 3 0.38s
θ(t) = −1.6 + 2.2857t 3 3 0.47s
θ(t) = 0.0 3 5 0.52s
θ(t) = −1.6 5 5 0.67s
θ(t) = −1.0 6 6 0.75s
This problem was solved using the feasible-path control parameterization method
described in this section. The discretization chosen for the control θ(t) was a single
linear element. There were therefore three parameters in the NLP (the two for the
control variable and tf ). The parameters g, a, b, and xf were set to −1.0, −0.4,
0.3, and 1.1 respectively. The integrator relative and absolute tolerances (RTOL and
ATOL) were both set to 10−7 and the optimization tolerance was 10−5 . The optimal
objective function value was 1.86475. The ABACUSS input file is shown in Figure 2-
2, and Table 7-2 gives solution statistics from different initial guesses for the control
variable. In this table, “QP” refers to the number of master iterations of the NLP
solver, “LS” refers to the number of line searches performed, and “CPU” gives the
total CPU time for the solution on an HP C160 workstation. These statistics compare
very favorably with those reported for other similarly-sized problems [142].
Figure 7-10 shows the state and control variable solution of the constrained bra-
chistochrone problem described above using the constrained dynamic optimization
algorithm. The ‘kink’ in the control variable profile is due to the control becoming
determined by the high-index DAE during the constrained portions of the trajectory.
[Figure 7-10: State and control variable trajectories for the constrained brachistochrone
problem (ABACUSS dynamic optimization output: the state trajectory y versus x together
with the constraint line, and the control θ versus time).]
The FIFP method is interesting because it is the first method proposed that explicitly
addresses the two issues that have been shown to be characteristic of inequality state-
constrained dynamic optimization: the high-index segments of the state trajectory
and the hybrid discrete/continuous nature of the problem. FIFP appears to solve the
problems associated with the modified slack variable approach, since it has the ability
to handle more inequality constraints than controls (provided that there is no point
in the problem where there are more active inequality constraints than controls), and
very little extra work is performed when constraints are inactive.
However, experience with the FIFP method has shown that it suffers from a
pathological problem. Recall from Chapter 3 that part of the sufficient conditions
for existence and uniqueness of sensitivity functions was that the sequence of state
events remains qualitatively the same: the location of the state events in state space
may change, but existence and uniqueness cannot be guaranteed if the ordering of the
events changes. This problem was demonstrated in Figures 3-3, 3-4, and 3-5, where
a differential change in a parameter produced a discrete jump in the values of both
the state and sensitivity functions.
The fact that an infinitely small movement in the value of a parameter can produce
a finite jump in the value of a state variable implies that the dynamic optimization
master NLP is not smooth. Because gradient-based NLP solution techniques (such as SQP
algorithms) are generally based on assumptions of smoothness, nonsmooth problems can
prevent their convergence or slow it unacceptably.
The FIFP method does work for some problems like the simple constrained bra-
chistochrone problem shown above because the FIFP problems are “smooth” in some
neighborhood of the optimum. That is, small movements of the parameter values away
from their optimal values generally do not change the sequence of events and thus
the objective function does not contain discontinuities. However, larger movements
of the parameters from their optimal values do cause the state variables to jump. For
example, in the constrained brachistochrone problem, some values of the parameters
produce state trajectories for which the inequality path constraint does not become
active. Fortunately for the constrained brachistochrone problem presented above, the
region around the optimum for which the sequence of events remains constant is fairly
large and convergence is fairly robust.
In a nutshell, the problem with the FIFP method is that it has an unpredictable
and potentially very small convergence region. That is, gradient based NLP algo-
rithms can be expected to converge to a local optimum (if one exists) from a starting
guess in the neighborhood of the optimum. This convergence neighborhood is defined
as the set of parameter values that produce the same constraint event sequence as
the locally optimal parameter values do. Since the constraints are defined implic-
itly, there is no general method at present for defining the convergence neighborhood
without actually solving hybrid IVPs.
Note that the convergence problem with the FIFP algorithm exists only when
FIFP is used with gradient-based NLP algorithms. Nonsmoothness of the NLP may
not be a problem for non-gradient based (e.g., stochastic) optimization algorithms
(see, for example [8]). Since the FIFP method generates trajectories that are always
feasible with respect to the path constraints, stochastic methods can then avoid the
need to discard many infeasible trajectories, and will be significantly more efficient.
7.5 Fluctuating-Index Infeasible-Path Method for Dynamic Optimization with Inequality Path Constraints
The fluctuating-index infeasible path (FIIP) method for inequality constrained dy-
namic optimization avoids the convergence difficulties that the FIFP method expe-
riences when the constraint event ordering changes during intermediate iterations
by requiring that each dynamic simulation subproblem follows the same sequence of
constraint activations and deactivations. This requirement is imposed by explicitly
defining the constraint activation and deactivation times. The method is an infeasi-
ble path method from the standpoint of the state variable inequality path constraints
because these constraints are not required to be satisfied for all time at intermediate
iterations of the FIIP NLP.
Since the inequality path constraints may be violated during intermediate itera-
tions, an explicit event constrained dynamic simulation algorithm can be used to solve
the IVP subproblem. The activations and deactivations of the state variable inequal-
ity path constraints occur at explicitly defined points in the solution trajectory that
are determined by the master NLP solver. During segments of the state trajectory
where an inequality constraint gk (x) is defined to be active, the equality:
$$g_k(x) = \epsilon_k, \qquad g_k \subseteq g \qquad (7.114)$$
is added to the DAE using the method described in Chapter 6. The parameter $\epsilon_k$ is
used because at the time when the constraint is defined explicitly to be active, the
values of the state variables may not satisfy the constraint. Therefore, the constraint
is offset by $\epsilon_k$ such that it is active at the current time, as shown in Figure 7-11.
The overall NLP contains a constraint that forces $\epsilon_k = 0$ at the optimum. Note that
the original state variable inequality path constraint is active (that is, $g_k(x) = 0$) when $\epsilon_k = 0$.
[Figure 7-11: The FIIP method works by offsetting the constraint to the state variable
trajectory at points where the constraint activates. The figure labels the state inequality
constraint, the state trajectory, the constraint offset, and the surrounding unconstrained
and constrained elements.]
$$f(\dot{x}, x, u^s, t) = 0 \qquad (7.115)$$
$$u^c = u^c(t) \qquad (7.116)$$
$$\alpha_i^{(k)}\left(u_i^s - u_i^c\right) + \sum_{j=1}^{n_g} \beta_{ij}^{(k)}\left(g_j(x) - \epsilon_j^{(k)}\right) = 0 \qquad (7.117)$$
$$\epsilon_j^{(k)} = g_j\left(x(t_k), y(t_k)\right) \qquad (7.118)$$
$$\alpha_i^{(k)} + \sum_{j=1}^{n_g} \beta_{ij}^{(k)} = 1 \qquad (7.119)$$
$$\sum_{j=1}^{n_g} \beta_{ij}^{(k)} \le 1 \qquad (7.120)$$
$$\alpha_i^{(k)},\, \beta_{ij}^{(k)} \in \{0,1\}, \qquad i = 1 \ldots n_u, \quad k = 1 \ldots N_{FE}$$
As in the implicit event dynamic simulation method (7.70–7.74), the α and β vari-
ables are used to express the matching between active state path constraints and
the control variables that are not free to be independently specified in the resulting
high-index DAE. However, in this case, the α and β variables are not determined by
the occurrence of an implicitly defined state event, but instead are defined explicitly
for each control variable finite element. Therefore, the α and β are functions of the
finite element index, rather than the simulation time. The start and end times of
the finite elements are in general optimization parameters that are determined by the
NLP solver.
The $\epsilon_j$ variable in (7.117) permits continuity assumptions for the state variables
when the constraint activates by allowing the inequality path constraint to be vio-
lated. Essentially, $\epsilon_j$ is used to offset the constraint so that the state variables do
not jump when it activates. The amount of constraint violation is measured at the
point where the constraint activates, and remains constant throughout the entire con-
strained segment. The inequality constraint is satisfied at the optimum by imposing
a constraint in the master NLP such that $\epsilon_j = 0$.
Assuming a standard BDF method is used to solve the DAE, and that a set of α
and β variables have been defined, the steps of the explicit event dynamic simulation
method are as follows (a toy illustration is sketched after the list):
1. Solve the IVP until the end of the next control finite element is encountered at
t = tk .
2. If tk = tf then stop.
3. For every $j$ where $\sum_{i=1}^{n_u} \beta_{ij}^{(k)} = 1$, determine $\epsilon_j^{(k)}$ by solving:
$$g_j(x) = \epsilon_j^{(k)} \qquad (7.121)$$
at $t = t_k$.
6. Set k = k + 1. Go to Step 1.
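A toy illustration of these steps follows. It is only a sketch under strong simplifications, not the ABACUSS algorithm: a scalar ODE ẋ = u with one constraint x ≤ 1, an explicitly prescribed activation sequence, forward-Euler integration in place of the BDF method, and the constrained element handled by simply holding the state at its offset value.

    import numpy as np

    dt = 0.01
    elements = [(0.0, 0.6), (0.6, 1.2), (1.2, 1.8)]   # hypothetical finite element boundaries
    active   = [False, True, False]                   # explicit activation sequence (alpha/beta analog)
    u_guess  = [2.0, 0.0, -1.0]                       # guessed control; ignored on the active element

    def g(x):                 # path constraint written as g(x) = x - 1 <= 0
        return x - 1.0

    x, offsets = 0.0, {}
    for k, ((t0, t1), is_active) in enumerate(zip(elements, active)):
        if is_active:
            offsets[k] = g(x)                    # offset measured at activation, cf. (7.118)/(7.121)
        for _ in np.arange(t0, t1, dt):
            u = 0.0 if is_active else u_guess[k] # on the active element the control holds g(x) constant
            x += dt * u                          # forward Euler stands in for the BDF integrator

    # The master NLP would drive the offsets to zero and adjust the element boundaries.
    print("constraint offsets:", offsets, " final state:", round(x, 3))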
The FIIP method is less complicated than the FIFP method because there are no
implicit constraint activations and deactivations. However, there are still two aspects
of the algorithm which require further explanation: how to specify the transfer con-
ditions for the state variables when constraints activate, and how to determine the
sequence of activations and deactivations.
As with the FIFP method, transition conditions must be defined for the state variables
when a constraint becomes active. Even though the constraint event time is explicitly
defined in the FIIP method, special consideration is required to define the FIIP
transition conditions because the dynamic degrees of freedom and possibly the index
of the DAE will change after a constraint event.
The principle that was used to define the state transition conditions in Sec-
tion 7.4.3 for the FIFP method is useful for the FIIP method as well. There is
an explicit input-jump simulation that corresponds to the explicit event constrained
dynamic simulation, and the continuity conditions for the explicit input-jump sys-
tem can be used to derive continuity assumptions for the explicit event simulation.
The use of the offset parameter with the active constraints removes the ambiguity
that would be associated with constraint activation. For example, suppose that the
constraint:
x1 + x2 + x3 ≤ 0 (7.122)
became active when the values of the state variables were x1 = −1, x2 = −2, and
x3 = −1; that the corresponding explicit input-jump simulation indicated that all of
the state variables in the constraint remain continuous; but that the dynamic degrees
of freedom for the DAE after the constraint activation allowed only two of the three
state variables to be specified as continuous. Then one of the state variables would
have to experience a jump. However, allowing one of the state variables to jump would
violate the continuity assumptions of the corresponding explicit input-jump simulation,
and it would be ambiguous which state variable should jump.
If however the active constraint is defined as:
$$x_1 + x_2 + x_3 = \epsilon \qquad (7.123)$$
where $\epsilon = -4$, then no state variable will jump, even though continuity is enforced
on only two of the three state variables.
To illustrate the state variable transfer conditions under the FIIP method, consider
the DAE:
ẋ = x + y (7.124)
y = 3x + u (7.125)
u = u(t) (7.126)
Upon a jump in $u(t)$ at time $t^*$, the DAE (7.124)–(7.125) is re-solved together with
$$u = u(t^*) \qquad (7.128)$$
to find values for $x^{(+)}$, $y^{(+)}$, and $\dot{x}^{(+)}$. Now suppose that (7.124–7.125) is the DAE
part of a dynamic optimization problem that is being solved using the FIIP method
and the constraint:
x≤5 (7.129)
becomes active at t(∗) . After application of the dummy derivative method, the system
solved to find a consistent initialization at t(∗) is:
x̄(+) = 0 (7.133)
where $\bar{x}$ is a dummy derivative and $\epsilon = (x^{(-)} - 5)$ in this case. Note that even
though this system does not require specification of additional transfer conditions,
the differential state variable x remained continuous when the constraint activated.
The same state trajectories could have been obtained with the appropriate jump in
u in the system (7.124–7.125).
Theoretically, there is no need for additional constraints in the NLP to handle in-
equality path constraints in the FIIP method. However, including a point constraint
representation of the inequality path constraints improves the robustness of the FIIP
method. If the point constraints are not included, the FIIP method sometimes finds
locally optimal solutions for which the inequality path constraints are satisfied dur-
ing finite elements where they are defined to be active, but violated during finite
elements where the constraint is not supposed to be active. Including a measure of
the inequality constraint violation at a discrete number of points as inequality con-
straints in the NLP tends to solve this problem. However, since these NLP inequality
constraints increase the size of the NLP, it is desirable not to include too many. In
practice, one NLP inequality constraint is included per inequality path constraint per
finite element.
7.5.3 Specifying a sequence of constraint events
Specifying the sequence of constraint events can be difficult in the FIIP method.
The FIIP method decouples the inequality path-constrained dynamic optimization
problem into discrete and continuous subproblems. The discrete subproblem is to
find the optimal sequence of constraint activations and deactivations. The continuous
subproblem is, given a fixed sequence of constraint activations and deactivations,
to find the optimal control trajectories. The use of the explicit event constrained
dynamic simulation algorithm allows us to solve the continuous subproblem.
Solution of the discrete subproblem is combinatorial in nature, yielding a mixed-
integer dynamic optimization subproblem (MIDO) [1, 3]. Putting the FIIP NLP in
the form of the MIDO formulation given in [3] yields:
$$\min_{u(t),\, v,\, \alpha,\, \beta,\, t_f} \; \phi\left(x(t_f), u(t_f), v, t_f\right) + \int_0^{t_f} L\left(x(t), u(t), v, t\right)\, dt \qquad (7.134)$$
subject to:
$$f(\dot{x}, x, u^s, t) = 0 \qquad (7.135)$$
$$u^c = u^c(t) \qquad (7.136)$$
$$\alpha_i^{(k)}\left(u_i^s - u_i^c\right) + \sum_{j=1}^{n_g} \beta_{ij}^{(k)}\left(g_j(x) - \epsilon_j^{(k)}\right) = 0 \qquad (7.137)$$
$$\epsilon_j^{(k)} = g_j\left(x(t_k), y(t_k)\right) \qquad (7.138)$$
$$\epsilon_j^{(k)} = 0 \qquad (7.139)$$
$$\alpha_i^{(k)} + \sum_{j=1}^{n_g} \beta_{ij}^{(k)} = 1 \qquad (7.140)$$
$$\sum_{j=1}^{n_g} \beta_{ij}^{(k)} \le 1$$
$$\alpha_i^{(k)},\, \beta_{ij}^{(k)} \in \{0,1\}, \qquad i = 1 \ldots n_u, \quad k = 1 \ldots N_{FE}$$
At present, there exist no rigorous strategies for solution of general MIDOs. Further,
in the worst case where there are many controls, inequality constraints, and control
finite elements, the only rigorous strategy at present that will guarantee an optimal
answer is to enumerate explicitly all of the possible constraint event sequences and
compare the optimal values obtained by solving the associated continuous subprob-
lems.
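For a small number of finite elements and constraints the enumeration can be written down directly; the sketch below (illustrative dimensions only) generates the candidate activation sequences, each of which fixes the α and β variables of one continuous subproblem:

    from itertools import product

    n_elements = 4        # hypothetical number of control finite elements
    n_constraints = 2     # hypothetical number of inequality path constraints

    # seq[k] = 0 means no constraint is active on element k; seq[k] = j > 0 means
    # constraint j is active on element k (a single control, so at most one active
    # constraint per element).
    candidates = list(product(range(n_constraints + 1), repeat=n_elements))
    print(f"{len(candidates)} candidate activation sequences to screen")
    for seq in candidates[:5]:
        print(seq)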
Most dynamic optimization problems with state inequality constraints that have
been solved in the optimal control literature to date contain only one inequality
constraint and one control variable. These problems may be easily solved using the
FIIP method. The best way is to solve the unconstrained problem and determine the
finite elements during which the constraint is violated. Then, use the FIIP method
to solve the constrained problem where those same finite elements are the ones where
the constraint is defined to be active. An alternate method for single-control, single-
inequality problems is to specify alternating constrained and unconstrained finite
elements. Since the end points of the finite elements are optimization parameters,
the optimization problem can determine where the constraint activation takes place
by shrinking the length of some elements to zero if necessary.
For problems with multiple inequality constraints, one ad hoc strategy is to solve
the unconstrained problem and determine the sequence of constraint violations, and
then solve the constrained problem with that sequence. This strategy does not guar-
antee an optimal solution and some enumeration of constraint event orderings is
usually required. On the other hand, other methods for handling inequality path
constraints generally have problems with multiple constraints, and the FIIP method
does provide an extremely efficient method (see Chapter 8) to obtain the optimal
solution when the constraint activation order is known.
7.6 Conclusions
The advantage of the three methods described in this chapter over other methods
for solving inequality path constrained dynamic optimization problems is that they
guarantee that the inequality path constraints are not violated along an optimal
solution. In addition, these methods explicitly recognize the high-index nature of
path constrained problems, and employ the dummy derivative method to solve them
efficiently.
This chapter has clearly shown the hybrid discrete/continuous nature of inequality
constrained dynamic optimization problems that arises because of the need to make
decisions concerning the ordering of constraint events. Although the modified slack
variable method is capable of transforming some inequality path-constrained problems
into completely continuous optimization problems, the range of problems that it can
handle is limited.
The advantage of the FIFP method over other solution methods is that the discrete
nature of the inequality constrained problem is explicitly recognized, and solution is
possible if the constraint activation sequence is known. Although it is difficult at this
time to use the FIFP method to solve dynamic optimization problems with many con-
straints because of the combinatorial nature of these problems, dynamic optimization
problems with multiple inequality constraints are in fact very difficult to solve by any
currently existing method. Furthermore, as shown in the next chapter, the method
works well for a broad range of constrained dynamic optimization problems.
Chapter 8
Numerical examples
In this chapter, the FIIP algorithm is applied to several interesting examples in order
to demonstrate the ease with which it can be used to solve state-constrained dynamic
optimization problems that are considered difficult to solve using other methods.
In particular, the abilities of FIIP to track nonlinear constraints, to guarantee that
constraints are not violated at the optimum, and to solve problems without the use
of complex control basis functions and many finite elements are demonstrated.
An implementation of FIIP in ABACUSS was used to solve these problems. The
Harwell code VF13 is used to solve the master NLPs. VF13 uses a sequential quadratic
programming method with a gradient-based line search technique, and hence all of
the IVP subproblems require solution of the combined DAE and sensitivity system.
This combined system is solved using the DSL48S code detailed in Chapter 4. ABA-
CUSS uses the efficient automatic differentiation algorithm of [135, 134] to obtain
the Jacobian of the DAE and right hand sides of the sensitivity equations. ABA-
CUSS does sacrifice some computational efficiency compared to methods where the
equations and derivatives are hard-coded and compiled, because it creates and stores
the equations and derivatives as memory structures. However, this computational
inefficiency is more than compensated by the modeling efficiency that the ABACUSS
input language and architecture provide.
There are several solution statistics reported for each problem. The integration
tolerance was used for both the relative and absolute tolerance for the numerical IVP
solver. The optimization tolerance was used for the NLP solver. The number of IVP
solutions is a measure of how difficult the NLP was to solve, since most of the solution
time is spent in numerical integration of the DAE and sensitivity equations. The CPU
times reported were obtained on an HP 9000 C160 single-processor workstation. The
number of IVP solutions and the CPU time statistics are of only limited usefulness
since they are highly dependent on the initial guess employed for the control variables;
however, they are included for use in comparing some of the examples with results
obtained in other works that used the same initial guesses.
For most of the problems, results for several tolerance levels are shown. It is
common with control parameterization for the number of IVP subproblems required to
solve the problem to increase significantly as the tolerances are tightened. This effect
occurs because the objective function is often relatively flat near the optimum with
respect to the exact positions of the finite element boundaries. When the tolerances
are tightened, the optimizer continues to move the finite element boundaries, resulting
in small improvements in the objective function value. The solution trajectories shown
for each example correspond to the results for the tightest tolerances that were tried.
However, the solution statistics for most of the problems show that the work done
increases immensely at tight tolerance levels, and the improvements to the solution
are very small. Reasonably accurate solutions may be obtained without resorting to
the strictest tolerances that were used here.
8.1 Line-Constrained Brachistochrone
The constrained brachistochrone problem was presented both in [24] and in a slightly
different form in [84]. The formulation used here is that of [84], who solved the prob-
lem using a discretized constraint control parameterization algorithm. The problem
is to find the shape of a wire on which a frictionless bead will slide between two points
under the force of gravity in minimum time. The brachistochrone is constrained by
requiring it to lie on or above a line drawn in state space.
$$\min_{\theta(t),\, t_f} \; t_f \qquad (8.1)$$
subject to:
ẋ = v cos θ (8.2)
ẏ = v sin θ (8.3)
v̇ = g sin θ (8.4)
y ≥ ax + b (8.5)
x(tf ) = xf (8.6)
where tf is the final time, g is the gravitational constant, and a and b are given
constants. Equation (8.5) defines an inequality constraint on the state variables x
and y, (8.6) defines a final-point constraint, and (8.7) gives the initial conditions.
Following [84], the parameters used were g = −1.0, a = −0.4, b = 0.2, and xf =
1.0. The starting guess for the control was θ(t) = −π/2 and for the final time was tf =
2.0. The control was discretized using three linear finite elements, and the inequality
constraint was active during the second finite element. The optimal trajectories for
the states and controls are shown in Figure 8-1 and Figure 8-2. Solution statistics are given in Table 8-1.
Table 8-1: Solution statistics for the line-constrained brachistochrone problem
Integration Tolerance   Optimization Tolerance   IVP Solutions   Objective Function   CPU
10^-7                   10^-5                    8               1.8046               0.59s
10^-9                   10^-7                    18              1.795220             2.06s
10^-11                  10^-9                    22              1.79522045           3.98s
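To make the IVP subproblem of the feasible-path method concrete, the sketch below integrates (8.2)–(8.4) for a fixed guess of θ(t) and evaluates the final-point residual (8.6) and the margin of the path constraint (8.5) along the trajectory. It is an illustration only: the single-element slope of θ is an arbitrary choice, and the initial conditions x(0) = y(0) = v(0) = 0 are assumed here because (8.7) is not reproduced above.

    import numpy as np
    from scipy.integrate import solve_ivp

    g_const, a, b, xf, tf = -1.0, -0.4, 0.2, 1.0, 2.0   # parameter values from the text

    def theta(t):
        # One linear finite element; the slope is an illustrative guess.
        return -np.pi / 2 + 1.0 * t

    def rhs(t, s):
        x, y, v = s
        th = theta(t)
        return [v * np.cos(th), v * np.sin(th), g_const * np.sin(th)]

    sol = solve_ivp(rhs, (0.0, tf), [0.0, 0.0, 0.0], max_step=0.01)
    final_residual = sol.y[0, -1] - xf                   # x(tf) - xf, cf. (8.6)
    path_margin = np.min(sol.y[1] - (a * sol.y[0] + b))  # min over time of y - (a x + b), cf. (8.5)
    print(f"x(tf) - xf = {final_residual:.4f}, minimum path-constraint margin = {path_margin:.4f}")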
[Figure 8-1 and Figure 8-2: Optimal control profile θ(t) versus time and optimal state
trajectory y versus x for the line-constrained brachistochrone problem.]
Table 8-2: Solution statistics for the curve-constrained brachistochrone problem
Integration Tolerance   Optimization Tolerance   IVP Solutions   Objective Function   CPU
10^-7                   10^-5                    10              1.8230               0.96s
10^-9                   10^-7                    34              1.812723             5.38s
10^-11                  10^-9                    35              1.81272311           23.55s
8.2 Curve-Constrained Brachistochrone
The curve-constrained brachistochrone is the same problem as in the previous section, except that the linear path constraint (8.5) is replaced by a quadratic constraint:
$$y \ge a x^2 + b x + c \qquad (8.8)$$
This problem was solved with the parameters in (8.8) set to a = −0.375, b =
−0.245, and c = −0.4. The initial guesses, parameter values, and control discretiza-
tion were the same as those used for the line-constrained brachistochrone. The opti-
mal trajectories for the states and controls are shown in Figure 8-3 and Figure 8-4.
The solution statistics are given in Table 8-2. The statistics show that the CPU
time grows at a faster rate than the number of IVP solutions as the tolerances are
tightened. This effect occurs because the number of integration steps and Jacobian
factorizations taken during the solution of each IVP increases as the tolerances are
tightened.
[Figure 8-3 and Figure 8-4: Optimal control profile θ(t) versus time and optimal state
trajectory y versus x for the curve-constrained brachistochrone problem.]
8.3 Constrained Van Der Pol Oscillator
subject to:
$$\dot{y}_2 = y_1 \qquad (8.11)$$
$$y_1 \ge -0.4 \qquad (8.14)$$
The problem was solved using ten linear finite elements to discretize the control,
and the inequality constraint was active during the third finite element. The initial
guess for the control variable was u(t) = 0.70. Solution statistics are given in Table 8-
3 and the optimal trajectories for the states and controls are shown in Figure 8-5 and
Figure 8-6. The results shown are those corresponding to the tighter tolerance levels.
It may be noted that the CPU times are much longer for this problem than for the
brachistochrone examples, even though all of the models contain similar numbers of
equations. The reason for the increased solution time for the Van der Pol problem is
that it contains ten finite elements, and the integrator must restart at the beginning
of each finite element. The BDF method can be inefficient while it is starting because
it takes small steps, and therefore a large number of integration restarts can exact
a significant toll on the overall computational efficiency. A modification to the BDF
method was proposed in [2] which allows the integrator to take larger steps as it is
starting.
Several slightly different answers to this problem were presented in [142].
Table 8-3: Statistics for the constrained Van der Pol oscillator problem
Integration Tolerance   Optimization Tolerance   IVP Solutions   Objective Function   CPU
10^-7                   10^-5                    25              2.9548               11.37s
10^-9                   10^-7                    65              2.954756             44.88s
10^-11                  10^-9                    78              2.9547566            80.39s
The objective function values reported in that work vary from 2.95678 to 2.95421, depending
on the scheme used to measure the violation of the state path constraint. The FIIP
method converged significantly faster, although the best objective function value the
algorithm found was slightly larger than the lowest reported in [142]. One explanation
of this very small difference is that the results presented in [142] may have actually
allowed small violations of the state path constraint, whereas our method guarantees
that the solution does not violate this constraint, and hence solves a more constrained
problem. Alternatively, this problem, like many control parameterization problems,
is multimodal, and the algorithm may not have converged to the same local optimum
that was found in [142]. However, the FIIP solution is guaranteed not to violate the
inequality path constraint.
[Figure 8-5 and Figure 8-6: Optimal control u(t) and state trajectories versus time for the
constrained Van der Pol oscillator problem.]
Table 8-4: Solution statistics for the constrained car problem
Integration Tolerance   Optimization Tolerance   IVP Solutions   Objective Function   CPU
10^-5                   10^-3                    17              35.000               0.36s

8.4 Constrained Car Problem
The constrained car problem is a classic optimal control problem (see for example,
[90], [142]). The problem is to find the minimum time for a car to travel between two
points, subject to constraints on the velocity and acceleration. The model used was:
$$\min_{a(t),\, t_f} \; t_f \qquad (8.16)$$
subject to:
v̇ = a (8.17)
ẋ = v (8.18)
v ≤ 10 (8.19)
−2 ≤ a ≤ 1 (8.20)
where x is the position of the car, v is the velocity, and a is the acceleration. The
initial and final point constraints require the optimal solution to be one where the
car starts and finishes with zero velocity. There is a speed limit that constrains the
velocity to be less than 10.
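A minimal simulation of the three-element structure used below (a sketch only; the element durations here are hypothetical, whereas in the actual solution they are optimization parameters determined by the NLP) illustrates how the acceleration is fixed at zero while the velocity constraint is active:

    # Piecewise-constant acceleration over three elements: accelerate, cruise on the
    # active constraint v = 10 (where a is determined to be zero), then brake at -2.
    accel     = [1.0, 0.0, -2.0]
    durations = [10.0, 20.0, 5.0]        # hypothetical element lengths

    t = v = x = 0.0
    for a, dt_el in zip(accel, durations):
        x += v * dt_el + 0.5 * a * dt_el ** 2
        v += a * dt_el
        t += dt_el
        print(f"t = {t:5.1f}   v = {v:5.1f}   x = {x:6.1f}")
    # The master NLP adjusts the durations so that the point constraints (zero final
    # velocity, required final position) are satisfied while minimizing t.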
The problem was solved using three constant finite elements, and the inequality
constraint was active during the second element. The optimal trajectories for the
acceleration and velocity are shown in Figure 8-7 and Figure 8-8. They match the
numerical solution given in [142] and the analytical solution given in [90]. Solution
statistics are given in Table 8-4. Statistics for only one tolerance level are reported
because no improvement is possible on the solution found, and tighter tolerances do
not increase the number of IVPs or improve the objective function.
[Figure 8-7 and Figure 8-8: Optimal acceleration a(t) and velocity v(t) trajectories for the
constrained car problem.]
8.5 Index-2 Jacobson and Lele Problem
This problem was originally presented in [74], and was later solved using different
methods in [66, 90, 98, 106, 142]. The problem is:
subject to:
$$\dot{y}_1 = y_2 \qquad (8.24)$$
There are two versions of this problem. The original form of the problem presented
in [74] and solved in [34, 106, 98] is shown above. A modified version was used in [66]
and [142], where (8.25) was changed to:
$$\dot{y}_2 = y_2 + u \qquad (8.30)$$
The latter problem is unstable. Hence, the application of numerical integration algo-
rithms to this problem should be treated with suspicion.
The original form of the problem was solved using twelve finite elements, with the
inequality constraint active on the seventh element. The initial guess for the control
profile was u(t) = 6.0. The optimal trajectories for the control and constrained
state variable are shown in Figure 8-9 and Figure 8-10. Solution statistics are given
in Table 8-5. The value that was obtained for the objective function is very close
(within 0.1%) to the value reported in [90].
A sequenced initial guess method was used in this problem. That is, the solution
Table 8-5: Solution statistics for the original index-2 Jacobson and Lele problem
Integration Tolerance   Optimization Tolerance   IVP Solutions   Objective Function   CPU
10^-5                   10^-3                    11              0.217                2.9s
10^-7                   10^-4                    13              0.17982              6.0s
10^-9                   10^-7                    61              0.1698402            39.8s
10^-11                  10^-9                    54              0.169832150          49.9s
Table 8-6: Solution statistics for the modified index-2 Jacobson and Lele problem
was obtained for the tighter tolerances by using as the initial guess the answer from
the problem with the next loosest tolerances. This method often decreases the time
to solve control parameterization problems because tight integration tolerances do
not need to be enforced when the NLP is far from the optimal solution.
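The sequencing amounts to a warm-start loop over tolerance levels. The sketch below illustrates only the bookkeeping, using a stand-in algebraic objective and scipy's SQP implementation rather than the actual master NLP and IVP subproblem:

    import numpy as np
    from scipy.optimize import minimize

    def objective(p):
        # Stand-in for the objective that would be evaluated by an IVP solve.
        return (p[0] - 1.3) ** 2 + 10.0 * (p[1] - p[0] ** 2) ** 2

    schedule = [(1e-5, 1e-3), (1e-7, 1e-4), (1e-9, 1e-7)]   # (integration tol, optimization tol)
    p = np.array([0.0, 0.0])                                # initial guess for the loosest level
    for int_tol, opt_tol in schedule:
        # In the real algorithm int_tol would be passed to the DAE/sensitivity integrator;
        # here only the NLP tolerance is used, and the previous answer seeds the next solve.
        res = minimize(objective, p, method="SLSQP", tol=opt_tol)
        p = res.x
        print(f"int_tol={int_tol:g}, opt_tol={opt_tol:g}: objective = {res.fun:.10f}")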
The modified version of the problem was also solved using a control parameteri-
zation of seven finite elements with the constraint enforced during the fourth finite
element. The solution statistics for the modified problem are given in Table 8-6, and
the control and constrained state variable trajectory are shown in Figure 8-11 and
Figure 8-12. The optimal value for the objective function are slightly better than the
one reported in [142].
The optimal trajectories for the control and constrained state variable for the
original form are shown in Figure 8-9 and Figure 8-10. The same variables are shown
for the modified form in Figure 8-11 and Figure 8-12.
[Figure 8-9 and Figure 8-10: Optimal control u(t) and constrained state variable trajectories
for the original index-2 Jacobson and Lele problem.]
[Figure 8-11 and Figure 8-12: Optimal control u(t) and constrained state variable trajectories
for the modified index-2 Jacobson and Lele problem.]
Table 8-7: Solution statistics for the index-3 Jacobson and Lele problem
Integration Tolerance   Optimization Tolerance   IVP Solutions   Objective Function   CPU
10^-7                   10^-5                    71              0.75156              35.4s
10^-9                   10^-7                    37              0.7514907            25.2s
10^-11                  10^-9                    251             0.75144685           210.9s
8.6 Index-3 Jacobson and Lele Problem
This change has the effect of making the problem index-3 when the constraint is
active, rather than index-2 when the problem is constrained as above with (8.27).
This problem was solved using twelve linear finite elements, with the constraint
enforced on the third element. A discontinuity in the control was permitted after the
second element. The finite element size of all the finite elements except the constrained
element was bounded from below by 0.05. The statistics shown in Table 8-7 use the
sequenced initial guess method. The optimal trajectories for the control and the
constrained state variable in Figure 8-13 and Figure 8-14 show that the constraint is
active only at one point. The objective function values reported here are very close
to those reported in [90], even though a much simpler control discretization than the
one used in that work was used.
[Figure 8-13 and Figure 8-14: Optimal control u(t) and constrained state variable trajectories
for the index-3 Jacobson and Lele problem.]
8.7 Pressure-constrained Batch Reactor
$$\mathrm{A} \;\underset{k_2}{\overset{k_1}{\rightleftharpoons}}\; 2\,\mathrm{B} \qquad (8.32)$$
$$\mathrm{A} + \mathrm{B} \;\xrightarrow{k_3}\; \mathrm{D} \qquad (8.33)$$
This simple problem is useful for showing the ability of the high-index DAE to find
a highly nonlinear control that tracks a constraint.
The dynamic optimization problem is:
subject to:
ĊB = k1 CA − k2 CB CB − k3 CA CB (8.36)
ĊD = k3 CA CB (8.37)
N = V (CA + CB + CD ) (8.38)
$$P V = N R T \qquad (8.39)$$
P ≤ 340000 (8.40)
0 ≤ F ≤ 8.5 (8.41)
Table 8-8: Solution statistics for the pressure-constrained batch reactor problem
Integration Tolerance   Optimization Tolerance   IVP Solutions   Objective Function   CPU
10^-7                   10^-5                    10              11.7284              0.99s
The temperature was held constant at T = 400 K, and the gas constant was R = 8.314 J/(mol · K). The initial guess for the feed
rate of species A was F (t) = 0.5 mol/hr.
This problem was solved using two constant finite elements to approximate the
control, and the constraint was active during the second element. The optimal tra-
jectories for the control, the pressure, and the concentrations are shown in Figures 8-
15–8-17. Solution statistics are given in Table 8-8. During the constrained portion
of the trajectory, the control is highly nonlinear. If this problem were solved using
a penalty function approach, the control would require a complex discretization to
approximate this nonlinear optimal control trajectory. However, the FIIP method is
able to exploit the fact that the control is not independent when the constraint is
active, and a simple control discretization may be used.
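The reason the constrained segment needs no elaborate control discretization can be seen by differentiating the active constraint. Assuming the reactor volume V and temperature T are constant while the constraint is active (T = 400 K is given; constant V is an assumption of this sketch), the ideal gas relation (8.39) gives
$$P = \frac{N R T}{V} = P_{\max} \;\Longrightarrow\; \dot{P} = \frac{R T}{V}\,\dot{N} = 0 \;\Longrightarrow\; \dot{N} = 0,$$
so the feed rate is fixed implicitly by the requirement that the net rate of change of the total moles in the reactor be zero, and the nonlinear control profile in Figure 8-15 is generated by the DAE solver rather than by the control parameterization.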
[Figure 8-15 and Figure 8-16: Optimal feed rate F(t) (mol/hr) and reactor pressure (Pa)
versus time for the pressure-constrained batch reactor problem.]
[Figure 8-17: Concentrations CA(t), CB(t), and CD(t) (mol/m3) versus time for the
pressure-constrained batch reactor problem.]
8.8 Fed-Batch Penicillin Fermentation
subject to:
$$\dot{X} = \mu X - \frac{F X}{V} \qquad (8.44)$$
$$\dot{P} = \theta X - 0.01 P - \frac{F P}{V} \qquad (8.45)$$
$$\mu = \frac{0.11\, S}{S + 0.1 X} \qquad (8.46)$$
$$\theta = \frac{0.004}{1 + 0.0001/S + S/0.1} \qquad (8.47)$$
$$\dot{V} = F \qquad (8.48)$$
$$X \le 41 \qquad (8.50)$$
$$\left[X(0),\, P(0),\, V(0)\right] = \left[1,\, 0,\, 2.5\cdot 10^5\right] \qquad (8.51)$$
The objective function in this problem takes into account the revenue from the prod-
uct, the cost due to product deactivation, the cost of the substrate, and the daily
cost. The state variables in the problem are the biomass concentration X (g/L),
the amount of penicillin product present P (activity/L), the substrate concentration
S (g/L), the reactor volume V (L), the specific product formation rate θ, and the
specific growth rate μ. The feed rate parameter F was set to 1666.67 L/h.
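As an illustration of the IVP subproblem only, the state equations can be integrated for a given substrate profile; the sketch below uses the rate expressions (8.46)–(8.47) as reconstructed above and a hypothetical piecewise-constant S(t), and ignores the biomass constraint (8.50):

    import numpy as np
    from scipy.integrate import solve_ivp

    F = 1666.67   # feed rate parameter (L/h), from the text

    def S_profile(t):
        # Hypothetical piecewise-constant substrate concentration control.
        return 0.4 if t < 20.0 else 0.01

    def rhs(t, s):
        X, P, V = s
        S = S_profile(t)
        mu = 0.11 * S / (S + 0.1 * X)                    # (8.46)
        theta = 0.004 / (1.0 + 0.0001 / S + S / 0.1)     # (8.47)
        return [mu * X - F * X / V,                      # (8.44)
                theta * X - 0.01 * P - F * P / V,        # (8.45)
                F]                                       # (8.48)

    sol = solve_ivp(rhs, (0.0, 140.0), [1.0, 0.0, 2.5e5], max_step=0.5)
    print(f"peak biomass X = {np.max(sol.y[0]):.2f} g/L,  P(tf) = {sol.y[1, -1]:.2f},  V(tf) = {sol.y[2, -1]:.0f} L")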
This problem is somewhat sensitive to the initial guess for the control trajectory.
Table 8-9: Solution statistics for the fed-batch penicillin fermentation problem
Integration Tolerance   Optimization Tolerance   IVP Solutions   Objective Function   CPU
10^-7                   10^-5                    21              −1.1230 · 10^6       8.94s
The initial guess used in this work was a step function starting out at S(t) = 0.4 and
falling after the first finite element to S(t) = 0.01 at t = 40.0.
The solution statistics for this problem are shown in Table 8-9. The optimal
trajectories for the control and selected state variables are shown in Figure 8-18 and
Figure 8-19. The trajectories and the objective function value agree closely with those
reported in [127].
[Figure 8-18: Optimal control profile S(t) (g/L) versus time for the fed-batch penicillin
fermentation problem.]
[Figure 8-20: Flowsheet for the reactor and column startup problem: feeds A and B to the
reactor R with a cooling jacket (cooling water in/out), reactor effluent Fout to the column,
reboiler with duty Qreboiler, and condenser with duty Qcondenser.]
8.9 Reactor and Column Startup
This example demonstrates how constraining the dynamic optimization problem can
reduce the number of controls in the dynamic optimization problem, thus making it
easier to solve. It also demonstrates the ability of FIIP to handle problems that are
modeled by large-scale DAEs.
Consider the flowsheet containing a continuous stirred tank reactor (CSTR) and
distillation column shown in Figure 8-20. There are two simultaneous reactions which
take place in the liquid phase in the CSTR:
k
A + B →1 C (8.52)
k
B + C →2 D (8.53)
The desired product is C, but there is a side reaction which is favored at higher tem-
peratures which consumes the desired product and produces the undesired product
D.
The reactor contents must be kept below 400K at all times to avoid vaporization
of the reacting mixture. The reactor temperature may be controlled by adjusting the
flowrate of water to a cooling jacket which surrounds the CSTR.
The reaction mixture leaves the reactor at a prescribed flowrate and flows into a
stripper column. The column is intended to strip off the lighter reactants A and B,
and the top product from the column is recycled back to the reactor.
The dynamic model for this example is detailed below. The model and its param-
eters are loosely based on one given in [33].
A mole balance on each species in the reactor gives:
$$\dot{N}_i = x_i^{in} F^{in} + V \sum_{j=1}^{N_R} \nu_{ji}\, r_j - x_i^r F^{out} \quad \forall i = 1 \ldots NC \qquad (8.54)$$
where $N_i$ is the number of moles of species $i$ in the reactor, $F^{in}$ and $F^{out}$ are respectively
the mole flows into and out of the reactor, $N_R$ is the number of independent reactions,
and $NC$ is the number of components.
The rate of each reaction is given by:
$$r_j = k_j \prod_{i=1}^{NC} C_i^{\beta_{ij}} \quad \forall j = 1 \ldots N_R \qquad (8.55)$$
where Ci is the molar concentration of species i, βij is the order of reaction j with
respect to species i, and kj is the rate constant of reaction j given by the Arrhenius
equation:
$$k_j = A_j \exp\!\left(-\frac{E_j}{R T}\right) \quad \forall j = 1 \ldots N_R \qquad (8.56)$$
The total enthalpy $H^r$ of the mixture in the reactor is:
$$H^r = \sum_{i=1}^{NC} N_i \hat{H}_i^r \qquad (8.57)$$
where Ĥir is the specific enthalpy of species i in the reactor, and an energy balance
on the entire reactor gives:
$$\dot{H}^r = F^{in} \sum_{i=1}^{NC} x_i^{in} \hat{H}_i^{in} - F^{out} \sum_{i=1}^{NC} x_i^{out} \hat{H}_i^{r} - \dot{Q}_{jacket} \qquad (8.58)$$
where $\hat{H}_i^{in}$ is the specific enthalpy of species $i$ in the reactor feed and $\dot{Q}_{jacket}$ is the
heat removed by the reactor cooling jacket.
The liquid volume in the reactor is:
$$V = \sum_{i=1}^{NC} \frac{N_i}{\rho_i} \qquad (8.59)$$
The reactor concentration and mole fraction are respectively given by:
$$C_i = \frac{N_i}{V} \quad \forall i = 1 \ldots NC \qquad (8.60)$$
$$x_i^r = \frac{N_i}{N_{tot}} \quad \forall i = 1 \ldots NC \qquad (8.61)$$
$$N_{tot} = \sum_{i=1}^{NC} N_i \qquad (8.62)$$
The parameters used in this model are given in Table 8-10. Both reactions are
first order (therefore βij = 1 ∀(i, j)), and the gas constant is 8.314 (kJ/kmol · K).
Table 8-10: Parameters for the reactor model
Parameter   Value        Parameter   Value
NR          2            A1          100.0 m3/(hr · kmol)
NC          4            A2          120.0 m3/(hr · kmol)
νA1         −1           E1          17000 kJ/kmol
νA2         0            E2          20000 kJ/kmol
νB1         −1           ρA          11.0 kmol/m3
νB2         −1           ρB          16.0 kmol/m3
νC1         1            ρC          10.0 kmol/m3
νC2         −1           ρD          10.4 kmol/m3
νD1         0
νD2         −1
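For concreteness, the rate law (8.55) and the Arrhenius expression (8.56) can be evaluated directly from the parameters of Table 8-10. The sketch below is illustrative only; it assumes that the reaction orders β are nonzero (and equal to one) only for the reactant species of each reaction, and the concentrations are arbitrary sample values.

    import numpy as np

    R_GAS = 8.314                          # kJ/(kmol K)
    A_pre = np.array([100.0, 120.0])       # pre-exponential factors, m3/(hr kmol)
    E_act = np.array([17000.0, 20000.0])   # activation energies, kJ/kmol

    # beta[j, i]: order of reaction j in species i (order A, B, C, D); assumed
    # first order in each reactant, zero otherwise.
    beta = np.array([[1.0, 1.0, 0.0, 0.0],    # reaction 1: A + B -> C
                     [0.0, 1.0, 1.0, 0.0]])   # reaction 2: B + C -> D

    def rates(C, T):
        """Reaction rates r_j = k_j * prod_i C_i**beta_ij with Arrhenius k_j, cf. (8.55)-(8.56)."""
        k = A_pre * np.exp(-E_act / (R_GAS * T))
        return k * np.prod(C ** beta, axis=1)

    C_sample = np.array([5.0, 3.0, 1.0, 0.2])   # kmol/m3, arbitrary sample concentrations
    print(rates(C_sample, 350.0))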
where $T_r$ is the reactor temperature, $T^w_{out}$ is the outlet temperature of the cooling
water, $U$ is the heat transfer coefficient of the jacket, and $A$ is the heat transfer
surface area.
The heat is transferred to the cooling water, which gives the temperature rela-
tionship:
where $T^w_{in}$ is the inlet temperature of the cooling water and $C_p^w$ is the heat capacity
of the cooling water.
The values of the parameters used for the cooling jacket were $U$ = 3000 kW/(m2 · K),
$A$ = 30 m2, and $T^w_{in}$ = 288 K. The heat capacity of water was assumed to be
independent of temperature at 4200 kJ/(tonne · K).
8.9.3 Column tray model
The column has NS stages, but the first stage is the condenser, and the last is the
reboiler, so there are NS − 2 equilibrium trays. The overall material balance on each
tray is:
where Mk is the molar holdup on stage k, xik and yki are respectively the liquid and
vapor mole fractions of species i on stage k, Fk is the feed flow rate to stage k, and
zki is the composition of the feed on stage k (assumed to be liquid).
Assuming fast energy dynamics and adiabatic operation, the energy balance on
each tray is:
$$0 = L_{k-1}\hat{H}^l_{k-1} - L_k \hat{H}^l_k + V_{k+1}\hat{H}^v_{k+1} - V_k \hat{H}^v_k + F_k \hat{H}^f_k \quad \forall k = 2 \ldots NS-1 \qquad (8.67)$$
where Ĥkl , Ĥkv , and Ĥkf are respectively the molar specific enthalpies of the liquid,
vapor, and feed on stage k.
The vapor mole fractions on each tray are normalized:
$$\sum_{i=1}^{NC} y_k^i = 1 \quad k = 2 \ldots NS \qquad (8.68)$$
and, since thermodynamic equilibrium is assumed on each tray, the liquid phase
composition must satisfy:
$$y_k^i = K_k^i x_k^i \qquad (8.69)$$
where Kki is the vapor/liquid equilibrium distribution coefficient for species i on tray
k.
The liquid overflow on each tray is related to the tray holdup by the Francis weir
equation:
$$L_k \sum_{i=1}^{NC} x_k^i Z^i = \bar{\rho}^l_k\, W_L \left(\frac{h_k}{750}\right)^{1.5} \qquad (8.70)$$
$$\bar{\rho}^l_k = \left(\sum_{i=1}^{NC} \frac{x_k^i}{Z^i \rho_i}\right)^{-1} \qquad (8.71)$$
$$\bar{\rho}^v_k = \frac{P_k}{R T_k} \sum_{i=1}^{NC} \left(y_k^i Z^i\right) \qquad (8.72)$$
$$V_k = \frac{M_k}{\bar{\rho}^l_k} \sum_{i=1}^{NC} x_k^i Z^i \qquad (8.73)$$
$$h_k = \frac{V_k}{A_{tray}} - W_H \qquad (8.74)$$
where Atray is the area of the tray, and WH is the height of the weir.
The pressure drop on each tray is related to the vapor flow by:
$$P_k = P_{k-1} + \Delta P^{tray}_{k-1} \qquad (8.75)$$
where B is the bottoms flowrate. The material balance for each species is:
$$\sum_{i=1}^{NC} y^i_{NS} = 1 \qquad (8.82)$$
and the liquid phase composition must satisfy:
$$y^i_{NS} = K^i_{NS} x^i_{NS} \qquad (8.83)$$
The flow out of the reboiler is related to the reboiler holdup by the proportional
control law:
V2 = D + L1 (8.85)
V1 = 0 (8.87)
The amount refluxed back to the top tray obeys the relation:
V2 r = L1 (8.88)
The heat required to condense the vapor into the condenser Q̇C is given by:
The liquid mole fractions are normalized:
$$\sum_{i=1}^{NC} y_1^i = 1 \qquad (8.90)$$
The specific enthalpy of a pure species in the vapor phase is assumed to be a function
of temperature:
$$\hat{H}_i^v = \Delta H_i^f(T_{ref}) + \int_{T_{ref}}^{T} a_i\, dT \qquad (8.91)$$
The mixture enthalpies are:
$$\hat{H}^l = \sum_{i=1}^{NC} x^i \hat{H}_i^l \qquad (8.93)$$
$$\hat{H}^v = \sum_{i=1}^{NC} y^i \hat{H}_i^v \qquad (8.94)$$
The values of the parameters for the enthalpy model are given in Table 8-11. The
reference temperature was set to $T_{ref}$ = 298 K.
Raoult’s law is assumed, and therefore the vapor/liquid equilibrium distribution co-
efficients are independent of composition, and are given by:
$$K_i = \frac{P_i^{vap}}{P} \qquad (8.95)$$
Table 8-11: Parameters for the enthalpy model
Parameter     Value
aA            172.3 kJ/(kmol · K)
aB            200.0 kJ/(kmol · K)
aC            160.0 kJ/(kmol · K)
aD            155.0 kJ/(kmol · K)
ΔH_A^vap      31000 kJ/kmol
ΔH_B^vap      26000 kJ/kmol
ΔH_C^vap      28000 kJ/kmol
ΔH_D^vap      34000 kJ/kmol
ΔH_A^f        −30000 kJ/kmol
ΔH_B^f        −50000 kJ/kmol
ΔH_C^f        −20000 kJ/kmol
ΔH_D^f        −20000 kJ/kmol
$$\ln P_i^{vap} = -\frac{\alpha_i}{T} + b_i \qquad (8.96)$$
The parameters for the vapor pressure model are given in Table 8-12. The vapor
pressure is given in bar with these parameters. The pressure in the condenser was 1
atm.
Parameter Value
αA 4142.75 K
αB 3474.56 K
αC 3500.00 K
αD 4543.71 K
bA 11.7158
bB 9.9404
bC 8.9000
bD 11.2599
Table 8-12: Parameters for the vapor pressure model (vapor pressure in bar)
8.9.8 Path constraints
The reactor and column model that has been specified thus far has five controls (cor-
responding to the valves shown in Figure 8-20). Dynamic optimization problems with
many control variables are difficult to solve using any method. In control parameter-
ization, the size of the NLP increases with the number of controls, and in practice,
the degree of nonlinearity of the NLP increases. However, there are some additional
constraints on the state variables that may be specified which, due to the results of
Chapter 6, reduce the number of independent control variables. Note that the re-
boiler and condenser duties, listed as controls in Figure 8-20, are in fact completely
determined by the model that has been specified.
The path constraints that have been explicitly imposed on this model are:
2. Constant ratio between A and B entering the reactor: The flowrate of B was
assumed to be 15% of the flowrate of A.
3. Constant flowrate of material out of the reactor. The flowrate was set to 15
kmols/hr.
The only independent control variables remaining in this problem are the column
reflux ratio and the flow rate of the cooling water to the reactor jacket. The reactor
temperature constraint causes the flow rate of the cooling water to be specified when-
ever the constraint is active. The flow rate of the cooling water was approximated
using a piecewise continuous linear control discretization. The column reflux ratio was
made a time-invariant optimization parameter. Hence, the complexity of the dynamic
optimization problem has been reduced to a manageable level using path constraints
which prescribe the values of some of the controls. The index of the DAE is two when
the inequality path constraint is inactive. The index is also two when the inequal-
ity path constraint is active, but a larger set of equations must be differentiated in
order to derive the equivalent index-1 DAE. Figures 8-21–8-22 show the ABACUSS
index-reduction output for the model both without and with the inequality constraint
active.
[Figure 8-21: ABACUSS index-reduction output for the reactor and column startup model
when the constraint is inactive. The output reports: "Before differentiation, this model was
index 2."]
[Figure 8-22: ABACUSS index-reduction output for the reactor and column startup model
when the constraint is active. The output reports: "Before differentiation, this model was
index 2."]
where:
$$\dot{\bar{C}} = B\, x^C_{NS} \qquad (8.98)$$
$$\dot{\bar{D}} = B\, x^D_{NS} \qquad (8.99)$$
$$\bar{C}(0) = 0 \qquad (8.100)$$
$$\bar{D}(0) = 0 \qquad (8.101)$$
$$t_f = 100\ \mathrm{hr} \qquad (8.102)$$
This objective function measures the relative cost associated with producing C and
D during the startup (i.e., producing an extra kmol of C offsets the disadvantage
associated with producing 10 kmols of the undesired species D). The final time was
chosen to be long enough so that the system will be close to steady state at the end
of the simulation.
The condenser pressure was set to 1 atmosphere for the entire startup procedure.
The feed makeup temperature is 300 K. The setpoint for the control equation in the
column reboiler was set to $M^*_{NS}$ = 22 kmol. At the initial time, the reactor contains
50 kmols of A at 300K, the total holdup in the column is 46.586382 kmols of A, and
each of the trays is filled to the weir height with A at its bubble point. Note that
since one of the path constraints fixes the ratio between A and B being fed to the
reactor, the flow of B into the reactor undergoes a step function at t0 .
The control variable $F^{water}(t)$ was approximated by four linear finite elements,
and the inequality path constraint was active during the second element. The initial
guess for the control was $F^{water}(t)$ = 3 tonne/hr, and for the time-invariant parameter
was R = 0.5. The control was bounded by 0 ≤ $F^{water}$ ≤ 5, and the reflux ratio was
bounded by 0.4 ≤ R ≤ 0.83. The size of the DAE was 741 equations, and there
were 13 optimization parameters, which brought the size of the combined DAE and
sensitivity system to 10374 equations. Solution statistics for this problem are given
in Table 8-13. Optimal solution profiles are shown in Figures 8-23–8-28.
Table 8-13: Solution statistics for the CSTR and column startup problem
Integration Tolerance   Optimization Tolerance   IVP Solutions   Objective Function   CPU
10^-5                   10^-3                    91              153.9297             2455s
[Figure 8-23: The optimal trajectory for the cooling water flowrate $F^w$ (tonne/hr) versus
time t (hr).]
[Figure 8-24: Temperature T (K) versus time t (hr).]
[Figure 8-25: The molar flowrates (kmol/hr) of species A, B, C, and D leaving the system
versus time t (hr).]
[Figure 8-26: Temperatures T1, T2, and T3 (K) versus time t (hr).]
[Figure 8-27: Reboiler duty $\dot{Q}_r$ (kJ/hr) versus time t (hr) for the reactor and column
startup problem.]
[Figure 8-28: Condenser duty $\dot{Q}_c$ (kJ/hr) versus time t (hr).]
Chapter 9
Conclusions
This thesis has demonstrated several developments which facilitate reliable numerical
solution of the types of dynamic optimization problems found in process engineering.
Specifically, advances were made in three interrelated areas: sensitivity analysis, nu-
merical solution of high-index DAEs, and solution of dynamic optimization problems
with state variable path constraints.
The staggered corrector algorithm for numerical sensitivity analysis of DAEs has
been demonstrated to be significantly more efficient than other existing methods.
Also, it has been implemented in DSL48S, a numerical solver for large-scale sparse
DAEs that is extensively tested and stable. Therefore, efficient
numerical sensitivity analysis is now an easily available tool in process engineering,
and can be used, for example, in designing control strategies, finding the response of
a system to disturbances, and determining the design variables to which a system is
most sensitive.
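To make the reuse of information concrete, the following sketch (an illustrative Python fragment, not the DSL48S implementation; the function and argument names are assumptions) shows a staggered sensitivity corrector that reuses the matrix already factored while converging the state variables at the current integration step:

import numpy as np

def staggered_sensitivity_corrector(lu_solve, dfdy, dfdyp, dfdp, alpha_h, s_pred, sp_pred):
    # lu_solve(b) solves J*x = b using the already-factored state corrector
    # matrix J = dfdy + alpha_h*dfdyp; dfdp is n-by-np; s_pred and sp_pred hold
    # predictor values of the sensitivities and their time derivatives.
    s, sp = s_pred.copy(), sp_pred.copy()
    for i in range(dfdp.shape[1]):
        # residual of the i-th (linear) sensitivity equation at the predictor
        res = dfdy @ s[:, i] + dfdyp @ sp[:, i] + dfdp[:, i]
        ds = lu_solve(-res)          # reuse the factored corrector matrix
        s[:, i] += ds
        sp[:, i] += alpha_h * ds     # BDF relation between the y and y' corrections
    return s, sp

Because the sensitivity equations are linear in the sensitivities, one correction per parameter suffices whenever the factored matrix is current; with a stale matrix the loop is simply repeated, which is still far cheaper than an additional factorization.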
The use of efficient sensitivity analysis makes the control parameterization method
more attractive relative to indirect methods for solving dynamic optimization prob-
lems. Use of indirect methods typically requires forward-integration of the DAE and
then backward-integration of the adjoint equations, and it does not appear possible
to take advantage of the fact that the Jacobians of both the adjoint and DAE sys-
tems are based on similar information. By using sensitivities instead of adjoints, the
staggered corrector method is able to minimize both the number of Jacobian factorizations
and the number of Jacobian updates by taking advantage of the similarities
between the DAE and its sensitivity system.
An important result of this thesis is the clarification of the fact that path-con-
strained dynamic optimization problems are naturally high-index DAEs, and therefore
all methods that solve path constrained dynamic optimization problems are capable
of solving high-index DAEs. However, most dynamic optimization methods handle
these high-index DAEs indirectly. This thesis shows that there are advantages to
be gained by directly solving high-index DAEs, and vice-versa. This observation led
to the development in this thesis of the dummy derivative method as a practical
technique for solving a broad class of arbitrary-index DAEs. When used with con-
trol parameterization, the dummy derivative method allows the path constraints to
be included in the IVP subproblem, rather than in the Master NLP. The result is
that fewer IVP subproblems are required to solve many dynamic optimization prob-
lems, and the accuracy of the path constraints is guaranteed over the entire solution
trajectory to within the integration tolerances.
Including path constraints directly in the IVP raises a set of issues concerning
dynamic degrees of freedom and feasibility of the dynamic optimization problem.
Whenever a path constraint is appended to the system, a control variable becomes
determined by the solution to the augmented DAE, and thus is not free to be in-
dependently specified. A control matching algorithm was proposed that is able to
determine in many problems which, if any, controls cease to be design degrees of free-
dom in the augmented DAE. Furthermore, it was shown that the constrained dynamic
optimization problem can be feasible only if the augmented DAE is well-posed.
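As a schematic illustration (not one of the thesis examples), consider a single ODE with one control u and one equality path constraint:

    ẋ = f(x, u),    g(x) = 0.

Differentiating the constraint along the trajectory exposes the hidden equation

    0 = (∂g/∂x) f(x, u),

which determines u whenever ∂[(∂g/∂x) f]/∂u is nonsingular; the augmented system is then an index-2 DAE and u ceases to be a design degree of freedom.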
One of the most interesting results of this work is that there are several strate-
gies which allow inequality path constraints on dynamic optimization problems to
be enforced by the IVP solver within the control parameterization framework. The
slack variable method transforms the inequality constraints into equality constraints.
The FIFP method transforms the inequality constrained problem into a hybrid dis-
crete/continuous optimization problem. The FIIP method transforms the inequality
constrained problem into a mixed-integer dynamic optimization problem (MIDO).
Each of these methods has its own set of advantages and disadvantages. The slack
variable method avoids problems with sequencing decisions for constraint activations
and deactivations. However, the slack variable method is not capable of handling
dynamic optimization problems with more inequality constraints than control vari-
ables, and it creates a high-index DAE whether or not the inequality constraint is
active. The FIFP method does not have restrictions on the relative number of control
variables and inequality constraints, and it handles the constraints efficiently, but it
creates nonsmooth NLPs with small regions of convergence. Such problems cannot be
easily solved with standard gradient-based optimization methods. The FIIP method
has all the advantages of the FIFP method and it creates smooth NLPs, but it requires
that the sequence of constraint activation and deactivation events be determined a
priori.
Of these methods, the FIIP method proved to be the most successful. To use
the method, a sequence of constraint events must be specified, and the index of the
DAE is permitted to fluctuate between unconstrained and constrained segments. The
path constraints are permitted to be violated during intermediate iterations in order
to provide transition conditions for the state variables between unconstrained and
constrained segments. The FIIP method was successfully demonstrated on several
examples. FIIP is efficient and it guarantees that the path constraints are satisfied
over the optimal solution trajectory.
Finding the optimal sequence of constraint activations and deactivations for the
FIIP method can be difficult. If the dynamic optimization problem contains few
constraints, as is the case with almost all problems that have been solved in the lit-
erature, it is usually possible to find the sequence by examining the solution of the
unconstrained dynamic optimization problem. In the case where the dynamic opti-
mization problem contains many constraints, finding the optimal sequence becomes
a combinatorial problem.
The methods developed in this thesis are capable of solving inequality constrained
dynamic optimization problems that cannot be solved using other methods, and of solving
problems that other methods can handle with greater accuracy and efficiency.
9.1 Directions for Future Work
An interesting area for future research is to find methods to solve the classes of
high-index DAEs that cannot be solved using the dummy derivative algorithm. The
dummy derivative method is not currently capable of solving problems for which
the index cannot be detected structurally. Detection of the index using numerical,
rather than structural, methods is problematic from both theoretical and practical
standpoints. Theoretically, singularity of the Jacobian of a nonlinear DAE with
respect to the highest order time derivatives does not necessarily indicate that the
DAE is high-index. Practically, it is difficult to differentiate between a singular matrix
and one that is highly ill-conditioned.
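A small numerical illustration of this last point (a generic sketch, not an example from the thesis): the estimated rank of the Jacobian with respect to the highest order time derivatives, and hence any conclusion drawn about the index, can flip with the tolerance used to decide which singular values count as zero.

import numpy as np

def numerical_rank(J, rtol):
    # rank estimate from the singular values of J, relative to the largest one
    s = np.linalg.svd(J, compute_uv=False)
    return int(np.sum(s > rtol * s[0]))

# A nearly singular Jacobian with respect to the highest order derivatives:
# full rank or rank deficient? The answer depends entirely on the tolerance.
J = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-12]])
print(numerical_rank(J, rtol=1e-15))   # 2 : looks full rank (index-1 interpretation)
print(numerical_rank(J, rtol=1e-10))   # 1 : looks rank deficient (high-index interpretation)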
Structural criteria are also used in the methods developed in this thesis to deter-
mine feasibility and find control matchings for path-constrained dynamic optimization
problems. Since the structural criteria do not work for all problems, there are dynamic
optimization problems that appear, according to these criteria, to be feasible and/or to
have a valid control matching, but which are in fact unsolvable. Also, structural
criteria do not indicate the ‘best’ control matching for those problems where there
are several possible control matchings, and solution of such problems requires further
research. However, these difficulties are related to the problem of nonlinear control-
lability, which has been an outstanding problem in the literature for many years.
The algorithm for numerical sensitivity analysis described in this thesis has sig-
nificantly improved the computational efficiency of numerical solution of dynamic
optimization problems. However, a number of interesting research issues remain.
The size of the dynamic optimization problem solvable with the control parameter-
ization method is limited by the number of parameters in the master NLP, rather
than by the number of state variables in the DAE. Numerical solution and sensitivity
analysis of sparse DAEs containing 10,000-100,000 variables is possible with current
technology, but solution of relatively large (1,000-10,000 parameters) dense master
NLPs is problematic. Such master NLPs arise in dynamic optimization problems
with large numbers of controls in the plant-wide model, higher-order basis functions,
or many finite elements. Furthermore, in such problems the partial derivatives of the
point constraints with respect to all of the parameters may be nonzero, which means that
strategies such as range and null space decomposition SQP will not work. Therefore,
methods are needed which can solve large dense NLPs for which the explicit func-
tional form of the constraints and objective is not known. A further major challenge
is that these large-scale master NLPs are often multi-modal [8].
Another important direction for future work concerns hybrid discrete/continuous dynamic optimization problems, in which some of the decisions are discrete (e.g., controller setpoints).
Hybrid dynamic optimization problems are difficult to solve because they often
cause the master NLP to be nonsmooth, which creates problems with gradient-based
optimization methods. Sub-gradient optimization techniques are capable of handling
some classes of nonsmooth optimization problems, and it may be possible to find a
method that transforms the inequality path-constrained dynamic optimization prob-
lem into a hybrid problem that has an NLP of the appropriate form. On the other
hand, stochastic or random search methods are relatively insensitive to nonsmooth-
ness, so these methods offer an immediate and practical way to make sequencing de-
cisions in dynamic optimization problems. However, a more sophisticated approach
that exploits the evident structure of such problems (see Chapter 4) would ultimately
be more desirable.
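As a concrete (and deliberately naive) illustration of the random-search idea, the fragment below treats the inner constrained dynamic optimization as a black box; the function name solve_for_sequence and the event encoding are assumptions made purely for illustration.

import random

def random_search_sequencing(solve_for_sequence, n_constraints, max_events, n_trials, seed=0):
    # solve_for_sequence(seq) is assumed to run a FIIP-style dynamic optimization
    # for a fixed activation/deactivation sequence and return its objective value,
    # raising RuntimeError if the sequence cannot be solved.
    rng = random.Random(seed)
    best_seq, best_obj = None, float("inf")
    for _ in range(n_trials):
        n_events = rng.randint(0, max_events)
        seq = tuple((rng.randrange(n_constraints), rng.choice(("activate", "deactivate")))
                    for _ in range(n_events))
        try:
            obj = solve_for_sequence(seq)
        except RuntimeError:
            continue                   # infeasible or failed sequence
        if obj < best_obj:
            best_seq, best_obj = seq, obj
    return best_seq, best_obj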
The difficulty with solution of MIDO problems is finding a ‘link’ between the con-
tinuous and discrete aspects of the problem. That is, there is no satisfactory method
for determining how to vary the discrete variables based on information obtained from
the solution to the continuous problem. Several approaches have been attempted thus
far. In [102, 101], an attempt was made to solve MIDOs by using orthogonal colloca-
tion on finite elements to transform the MIDO into a finite-dimensional mixed integer
nonlinear program (MINLP). More recently, in [103] an analog of the control param-
eterization method for ordinary dynamic optimization problems was proposed. In
this scheme, numerical integration is used to obtain gradient and objective function
information for a master MINLP. It is shown that the adjoint variables of the dy-
namic optimization problem can be used to find the dual information necessary for
MINLP algorithms. However, the problem with these approaches is that the NLPs
are nonconvex, and therefore general methods for solving MINLPs (for a review, see
[52]) do not work [3] (in fact, the methods may converge to points that are not even
local extrema).
Appendix A
DECLARE
WaterFlowRate = 0 : -1e-2 : 1E20 unit = "ton/hr"
END # Declarations
#===============================================================================
MODEL LiquidEnthalpy
PARAMETER
NC AS INTEGER # number of components
RGAS AS REAL
TREF AS REAL
ENTHA AS ARRAY(NC) OF REAL
ENTHB AS ARRAY(NC) OF REAL
ENTHC AS ARRAY(NC) OF REAL
ENTHD AS ARRAY(NC) OF REAL
HVAP AS ARRAY(NC) OF REAL
Heat_Formation AS ARRAY(NC) OF REAL
VARIABLE
Specific_Enthalpy_Liquid AS ARRAY(NC) OF MolarEnthalpy
Specific_Enthalpy_Vapor AS ARRAY(NC) OF MolarEnthalpy
Temp AS Temperature
EQUATION
Specific_Enthalpy_Liquid=
Specific_Enthalpy_Vapor-HVAP ;
Specific_Enthalpy_Vapor=Heat_Formation+
ENTHA*(Temp-TREF)+ENTHB*(Temp-TREF)^2+
(1/2)*ENTHC*(Temp-TREF)^3+(1/3)*ENTHD*(Temp-TREF)^4 ;
END #LiquidEnthalpy
#===============================================================================
MODEL RD1_Reactions
PARAMETER
A, B, C, D AS INTEGER # identifiers for the components
NR AS INTEGER # number of reactions
NC AS INTEGER # number of components
MOLAR_VOLUME AS ARRAY(NC) of REAL
PRE_EXP_FACTOR AS ARRAY(NR) of REAL
RGAS AS REAL
ACTIVATION_ENERGY AS ARRAY(NR) of REAL
STOICH_COEFF AS ARRAY(NC,NR) of INTEGER
#Enthalpy_Reaction AS ARRAY(NR) OF REAL
HVAP AS ARRAY(NC) OF REAL
VARIABLE
Concentration AS ARRAY(NC) of Molar_Concentration
no_Mols AS ARRAY(NC) of MolarHoldup
ReactionRate AS ARRAY(NR) of Reaction_Rate
temp AS Temperature
feed_A AS MolarFlowRate
feed_B AS MolarFlowRate
feed_C AS MolarFlowRate
feed_D AS MolarFlowRate
TotalFeed AS MolarFlowRate
FlowOut AS MolarFlowRate
volume AS Volume
QJacket AS EnergyFlowRate
TotalMols AS MolarHoldup
X AS ARRAY(NC) OF Fraction
Enthalpy AS Energy
Specific_Enthalpy AS ARRAY(NC) OF Energy
FeedEnthalpy AS ARRAY(NC) OF Energy
FeedTemp AS Temperature
EQUATION
ReactionRate(2)
= PRE_EXP_FACTOR(2)*EXP(-ACTIVATION_ENERGY(2)/(RGAS*temp))
*Concentration(B)*Concentration(C) ;
TotalMols=SIGMA(no_Mols) ;
X=no_Mols/TotalMols ;
Enthalpy=SIGMA(No_Mols*Specific_Enthalpy);
$enthalpy = (FeedEnthalpy(A)+FeedEnthalpy(B)+FeedEnthalpy(C)+FeedEnthalpy(D)
-FlowOut*SIGMA(X*Specific_Enthalpy)
-QJacket);
END # RD1_Reactions
#===============================================================================
MODEL Reactor_Jacket
PARAMETER
Heat_Transfer_Coeff AS REAL
Jacket_Area AS REAL
CP_water AS REAL
Twater_in AS REAL
VARIABLE
QJacket AS EnergyFlowRate
TWater_out AS Temperature
Temp_Reactor AS Temperature
Flow_Water AS WaterFlowRate
EQUATION
#Heat removed by jacket
QJacket=Heat_Transfer_Coeff*Jacket_Area*(Temp_Reactor-Twater_out) ;
# (the remaining equations of this model appear to be missing from the listing)
END # Reactor_Jacket
#===============================================================================
MODEL ColumnVLE
PARAMETER
NSTAGE AS INTEGER
NC AS INTEGER
VPA AS ARRAY(NC) OF REAL
VPB AS ARRAY(NC) OF REAL
A, B, C, D AS INTEGER # identifiers for the components
TREF AS REAL
ENTHA AS ARRAY(NC) OF REAL
ENTHB AS ARRAY(NC) OF REAL
ENTHC AS ARRAY(NC) OF REAL
ENTHD AS ARRAY(NC) OF REAL
HVAP AS ARRAY(NC) OF REAL
VARIABLE
X AS ARRAY(NSTAGE,NC) OF Fraction
Y AS ARRAY(NSTAGE,NC) OF Fraction
K AS ARRAY(NSTAGE,NC) OF Value
VAPORENTHALPY AS ARRAY(NSTAGE) OF MolarEnthalpy
LIQUIDENTHALPY AS ARRAY(NSTAGE) OF MolarEnthalpy
VAPORPRESSURE AS ARRAY(NSTAGE,NC) OF Pressure
P AS ARRAY(NSTAGE) OF Pressure
Temp AS ARRAY(NSTAGE) OF Temperature
Specific_Enthalpy_Liquid AS ARRAY(NSTAGE,NC) OF MolarEnthalpy
Specific_Enthalpy_Vapor AS ARRAY(NSTAGE,NC) OF MolarEnthalpy
EQUATION
# Vapor Pressure
FOR I:=1 TO NSTAGE DO
VAPORPRESSURE(I,)=EXP(-VPA/Temp(I)+VPB) ;
END
# (loop header restored; some equations appear to be missing from the listing here)
FOR I:=1 TO NSTAGE DO
Specific_Enthalpy_Vapor(I,)=
ENTHA*(Temp(I)-TREF)+ENTHB*(Temp(I)-TREF)^2+
(1/2)*ENTHC*(Temp(I)-TREF)^3+(1/3)*ENTHD*(Temp(I)-TREF)^4 ;
LIQUIDENTHALPY(I) = SIGMA(X(I,)*Specific_Enthalpy_Liquid(I,));
VAPORENTHALPY(I) = SIGMA(Y(I,)*Specific_Enthalpy_Vapor(I,));
END
END
MODEL Column
PARAMETER
NSTAGE AS INTEGER
FEEDSTAGE AS INTEGER
NC AS INTEGER
A, B, C, D AS INTEGER # identifiers for the components
MOLAR_VOLUME AS ARRAY(NC) of REAL
MW AS ARRAY(NC) of REAL
STAGE_AREA AS REAL
WEIR_LENGTH AS REAL
WEIR_HEIGHT AS REAL
ORIFCON AS REAL
RGAS AS REAL
AFREE AS REAL
ALINE AS REAL
KLOSS AS REAL
VARIABLE
M AS ARRAY(NSTAGE) OF MolarHoldup
L AS ARRAY(NSTAGE) OF MolarFlowRate
V AS ARRAY(NSTAGE) OF MolarFlowRate
TotalHoldup AS MolarHoldup
VAPORENTHALPY AS ARRAY(NSTAGE) OF MolarEnthalpy
LIQUIDENTHALPY AS ARRAY(NSTAGE) OF MolarEnthalpy
X AS ARRAY(NSTAGE,NC) OF Fraction
Y AS ARRAY(NSTAGE,NC) OF Fraction
K AS ARRAY(NSTAGE,NC) OF Value
DISTILLOUT AS MolarFlowRate
QR AS EnergyFlowRate
QC AS EnergyFlowRate
REFLUXRATIO AS Value
P AS ARRAY(NSTAGE) OF Pressure
DPSTAT AS ARRAY(NSTAGE) OF Pressure
DPDRY AS ARRAY(NSTAGE) OF Pressure
DPTRAY AS ARRAY(NSTAGE) OF Pressure
T AS ARRAY(NSTAGE) OF Temperature
FEED AS ARRAY(NSTAGE) OF MolarFlowRate
BOTTOMS AS MolarFlowRate
XFEED AS ARRAY(NC) OF Fraction
FEEDENTHALPY AS MolarEnthalpy
VOLUMEHOLDUP AS ARRAY(NSTAGE) OF VOLUME
HEAD AS ARRAY(NSTAGE) OF Length
LIQHEIGHT AS ARRAY(NSTAGE) OF Length
LIQDENS AS ARRAY(NSTAGE) OF Molar_Concentration
VAPDENS AS ARRAY(NSTAGE) OF Molar_Concentration
LIQMDENS AS ARRAY(NSTAGE) OF Molar_Concentration
VAPMDENS AS ARRAY(NSTAGE) OF Molar_Concentration
# FeedTemp AS Temperature
MWV AS ARRAY(NSTAGE) OF Value
MWL AS ARRAY(NSTAGE) OF Value
EQUATION
# Condenser Model
# Material Balance
V(2)=DISTILLOUT+L(1) ;
V(2)*REFLUXRATIO=L(1) ;
X(1,)=Y(2,) ;
# Energy Balance
QC=V(2)*VAPORENTHALPY(2)-(L(1)+DISTILLOUT)*LIQUIDENTHALPY(1) ;
# Phase Equilibrium
Y(1,)=K(1,)*X(1,);
SIGMA(Y(1,))=1 ;
HEAD(1)=0 ;
((V(2)*MWV(2)/3600)/VAPMDENS(1)*ALINE)^2=2.0*DPTRAY(1)/VAPMDENS(1)/KLOSS;
# Tray Model
FOR I:=2 TO NSTAGE-1 DO
# Material Balance
$M(I)=L(I-1)-L(I)+V(I+1)-V(I)+FEED(I) ;
$M(I)*X(I,)+M(I)*$X(I,)=
L(I-1)*X(I-1,)-L(I)*X(I,)+
V(I+1)*Y(I+1,)-V(I)*Y(I,)+
FEED(I)*XFEED ;
# Energy Balance
#$M(I)*LIQUIDENTHALPY(I)+M(I)*$LIQUIDENTHALPY(I)=
0=
L(I-1)*LIQUIDENTHALPY(I-1)-L(I)*LIQUIDENTHALPY(I)+
V(I+1)*VAPORENTHALPY(I+1)-V(I)*VAPORENTHALPY(I)+
FEED(I)*FEEDENTHALPY;
# Phase Equilibrium
Y(I,)=K(I,)*X(I,);
END # end of tray loop (END restored; further tray equations appear to be missing from the listing)
#Reboiler Model
# Material Balance
$M(NSTAGE)=L(NSTAGE-1)-L(NSTAGE)-V(NSTAGE) ;
BOTTOMS=L(NSTAGE) ;
$M(NSTAGE)*X(NSTAGE,)+M(NSTAGE)*$X(NSTAGE,)=
L(NSTAGE-1)*X(NSTAGE-1,)-L(NSTAGE)*X(NSTAGE,)-
V(NSTAGE)*Y(NSTAGE,) ;
# Energy Balance
#$M(NSTAGE)*LIQUIDENTHALPY(NSTAGE)+M(NSTAGE)*$LIQUIDENTHALPY(NSTAGE)=
0=
L(NSTAGE-1)*LIQUIDENTHALPY(NSTAGE-1)-L(NSTAGE)*LIQUIDENTHALPY(NSTAGE)
-V(NSTAGE)*VAPORENTHALPY(NSTAGE)+QR;
# Phase Equilibrium
Y(NSTAGE,)=K(NSTAGE,)*X(NSTAGE,);
SIGMA(Y(NSTAGE,))=SIGMA(X(NSTAGE,));
END # Column
MODEL Flowsheet
PARAMETER
A, B, C, D AS INTEGER # identifiers for the components
NR AS INTEGER # number of reactions
NC AS INTEGER # number of components
MOLAR_VOLUME AS ARRAY(NC) of REAL
MW AS ARRAY(NC) of REAL
PRE_EXP_FACTOR AS ARRAY(NR) of REAL
RGAS AS REAL
ACTIVATION_ENERGY AS ARRAY(NR) of REAL
STOICH_COEFF AS ARRAY(NC,NR) of INTEGER
#Enthalpy_Reaction AS ARRAY(NR) OF REAL
Heat_Formation AS ARRAY(NC) OF REAL
Heat_Transfer_Coeff AS REAL
Jacket_Area AS REAL
CP_water AS REAL
Twater_in AS REAL
TREF AS REAL
ENTHA AS ARRAY(NC) OF REAL
ENTHB AS ARRAY(NC) OF REAL
ENTHC AS ARRAY(NC) OF REAL
ENTHD AS ARRAY(NC) OF REAL
HVAP AS ARRAY(NC) OF REAL
NSTAGE AS INTEGER
FEEDSTAGE AS INTEGER
STAGE_AREA AS REAL
WEIR_LENGTH AS REAL
WEIR_HEIGHT AS REAL
SET
NR := 2 ;
NC := 4 ;
A := 1;
B := 2;
C := 3;
D := 4;
MOLAR_VOLUME(A) := 1.0/11.0 ;
MOLAR_VOLUME(B) := 1.0/16.0 ;
MOLAR_VOLUME(C) := 1.0/10.4 ;
MOLAR_VOLUME(D) := 1.0/10.0 ;
MW(A):= 76 ;
MW(B):= 52 ;
MW(C):= 170 ;
MW(D):= 264 ;
PRE_EXP_FACTOR(1) := 100.0 ;
PRE_EXP_FACTOR(2) := 120.0 ;
ACTIVATION_ENERGY(1) := 17000 ;
ACTIVATION_ENERGY(2) := 20000 ;
RGAS := 8.314 ;
# R1 R2
STOICH_COEFF(A,) := [-1, 0] ;
STOICH_COEFF(B,) := [-1, -1] ;
STOICH_COEFF(C,) := [ 1, -1] ;
STOICH_COEFF(D,) := [ 0, 1] ;
# Enthalpy_Reaction(1) := -60000 ;
# Enthalpy_Reaction(2) := -50000 ;
Heat_Formation(A) := -30000;
Heat_Formation(B) := -50000;
Heat_Formation(C) := -20000;
Heat_Formation(D) := -20000;
Heat_Transfer_Coeff := 3000 ;
Jacket_Area := 30 ;
CP_water := 4200 ;
Twater_in := 298 ;
TREF := 298 ;
ENTHA(A) := 172.3 ;
ENTHA(B) := 200.0 ;
ENTHA(C) := 160.0 ;
ENTHA(D) := 155.0 ;
ENTHB(1:NC) := 0.0 ;
ENTHC(1:NC) := 0.0 ;
ENTHD(1:NC) := 0.0 ;
HVAP(A) := 31000 ;
HVAP(B) := 26000 ;
HVAP(C) := 28000 ;
HVAP(D) := 34000 ;
NSTAGE := 10 ;
FEEDSTAGE := 5 ;
STAGE_AREA := 1.05;
AFREE := 0.1*STAGE_AREA ;
ALINE := .03;
KLOSS := 1 ;
ORIFCON :=0.6 ;
WEIR_LENGTH :=1 ;
WEIR_HEIGHT :=0.25;
VPA(A) := 4142.75;
VPA(B) := 3474.56;
VPA(C) := 3500;
VPA(D) := 4543.71;
VPB(A) := 11.7158;
VPB(B) := 9.9404;
VPB(C) := 8.9;
VPB(D) := 11.2599;
END #Flowsheet
EQUATION
Reactor.Temp IS Jacket.Temp_Reactor ;
Reactor.Temp IS LiqEnthalpy.Temp ;
Reactor.Specific_Enthalpy IS LiqEnthalpy.Specific_Enthalpy_Liquid ;
Reactor.QJacket IS Jacket.QJacket ;
Reactor.FeedTemp IS ReactorFeedEnthalpy.Temp ;
Reactor.FeedEnthalpy IS ReactorFeedEnthalpy.Specific_Enthalpy_Liquid ;
Reactor.$TotalMols=0 ;
Reactor.Feed_A=Reactor.Feed_B;
END # ReactorFlowsheet
EQUATION
Column.X IS VLE.X ;
Column.Y IS VLE.Y ;
Column.K IS VLE.K ;
Column.T IS VLE.Temp ;
Column.P IS VLE.P ;
Column.VAPORENTHALPY IS VLE.VAPORENTHALPY ;
Column.LIQUIDENTHALPY IS VLE.LIQUIDENTHALPY ;
#Column.$TotalHoldup=0 ;
#Column.TotalHoldup=43.911382-17.325 ;
Column.BOTTOMS=0.1*(Column.M(Column.NSTAGE)-22) ;
#Column.BOTTOMS=0.1*Column.M(Column.NSTAGE) ;
#Column.FeedTemp IS ColumnFeedEnthalpy.Temp;
Column.FeedEnthalpy=
SIGMA(Column.XFEED*ColumnFeedEnthalpy.Specific_Enthalpy_Liquid)*
SIGMA(Column.FEED);
Column.$TotalHoldup=0 ;
END # ColumnFlowsheet
UNIT
Column AS Column
VLE AS ColumnVLE
Reactor AS RD1_Reactions
Jacket AS Reactor_Jacket
LiqEnthalpy AS LiquidEnthalpy
MakeupEnthalpy AS LiquidEnthalpy
VARIABLE
BFraction AS Fraction
Makeup AS ARRAY(NC) OF MolarFlowRate
EQUATION
WITHIN Column DO
X IS VLE.X ;
Y IS VLE.Y ;
K IS VLE.K ;
T IS VLE.Temp ;
P IS VLE.P ;
VAPORENTHALPY IS VLE.VAPORENTHALPY ;
LIQUIDENTHALPY IS VLE.LIQUIDENTHALPY ;
BOTTOMS=0.1*Column.M(Column.NSTAGE);
$TotalHoldup=0 ;
END
WITHIN Reactor DO
Temp = Jacket.Temp_Reactor+0 ;
Temp= LiqEnthalpy.Temp+0 ;
Specific_Enthalpy IS LiqEnthalpy.Specific_Enthalpy_Liquid ;
QJacket IS Jacket.QJacket ;
#FeedTemp IS ReactorFeedEnthalpy.Temp ;
#FeedEnthalpy IS ReactorFeedEnthalpy.Specific_Enthalpy_Liquid ;
FeedTemp IS MakeupEnthalpy.Temp;
FeedEnthalpy =
(Column.DISTILLOUT*Column.X(1,)*VLE.SPECIFIC_ENTHALPY_LIQUID(1,)+
Makeup*MakeupEnthalpy.Specific_Enthalpy_Liquid);
$TotalMols=0 ;
Feed_B=BFraction*Feed_A;
# Need to change 0.01 in the figure below to 1
Feed_A=Column.DISTILLOUT*Column.X(1,A)+Makeup(A);
Feed_B=Column.DISTILLOUT*Column.X(1,B)+Makeup(B);
Feed_C=Column.DISTILLOUT*Column.X(1,C);
Feed_D=Column.DISTILLOUT*Column.X(1,D);
Makeup(C)=0 ;
Makeup(D)=0 ;
END
WITHIN Column DO
# FeedTemp IS Reactor.Temp;
FeedEnthalpy=
SIGMA(XFEED*Reactor.Specific_Enthalpy)*SIGMA(FEED);
XFEED IS Reactor.X ;
FEED(1)=0 ;
FOR I:=2 TO FEEDSTAGE-1 DO
FEED(I)=0 ;
END
FEED(FEEDSTAGE)= Reactor.FlowOut;
FOR I:=FEEDSTAGE+1 TO NSTAGE-1 DO
FEED(I)=0 ;
END
FEED(NSTAGE)=0 ;
END
END # BothFlowsheet
SIMULATION RunReactor
UNIT
System As ReactorFlowSheet
INPUT
WITHIN System.Reactor DO
FlowOut := 10.0 ;
FeedTemp := 300 ;
Feed_C := 0.0 ;
Feed_D := 0.0 ;
END
WITHIN System.Jacket DO
Flow_Water := 1 ;
END
INITIAL
WITHIN System.Reactor DO
No_Mols(A) = 22.27 ;
No_Mols(B) = 36.81 ;
No_Mols(C) = 0.0 ;
No_Mols(D) = 0.0 ;
Temp=300 ;
END
SCHEDULE
CONTINUE FOR 0
END
SIMULATION RunColumn
UNIT
System As ColumnFlowSheet
INPUT
WITHIN System.Column DO
P(1) :=1.01325 ;
REFLUXRATIO :=0.5 ;
FEED(1):=0 ;
FOR I:=2 TO FEEDSTAGE-1 DO
FEED(I):=0 ;
END
FEED(FEEDSTAGE):=(10/6+8*10/6) ;
FOR I:=FEEDSTAGE+1 TO NSTAGE-1 DO
FEED(I):=0 ;
END
FEED(NSTAGE):=0 ;
XFEED(1) := 1 ;
XFEED(2) := 0 ;
XFEED(3) := 0 ;
XFEED(4) := 0 ;
# FEEDTEMP := 300 ;
#TotalHoldup:=3.6586382E+01+10 ;
END
PRESET
##############################
# Values of All Active Variables #
##############################
SYSTEM.COLUMNFEEDENTHALPY.TEMP := 3.0000000E+02 ;
SYSTEM.COLUMNFEEDENTHALPY.SPECIFIC_ENTHALPY_VAPOR(1) := 3.4460000E+02 ;
SYSTEM.COLUMNFEEDENTHALPY.SPECIFIC_ENTHALPY_VAPOR(2) := 4.0000000E+02 ;
SYSTEM.COLUMNFEEDENTHALPY.SPECIFIC_ENTHALPY_VAPOR(3) := 3.2000000E+02 ;
SYSTEM.COLUMNFEEDENTHALPY.SPECIFIC_ENTHALPY_VAPOR(4) := 3.1000000E+02 ;
SYSTEM.COLUMNFEEDENTHALPY.SPECIFIC_ENTHALPY_LIQUID(1) := -3.0655400E+04 ;
SYSTEM.COLUMNFEEDENTHALPY.SPECIFIC_ENTHALPY_LIQUID(2) := -2.5600000E+04 ;
SYSTEM.COLUMNFEEDENTHALPY.SPECIFIC_ENTHALPY_LIQUID(3) := -2.7680000E+04 ;
SYSTEM.COLUMNFEEDENTHALPY.SPECIFIC_ENTHALPY_LIQUID(4) := -3.3690000E+04 ;
SYSTEM.COLUMN.MWV(1) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(2) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(3) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(4) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(5) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(6) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(7) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(8) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(9) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(10) := 7.6000000E+01 ;
SYSTEM.COLUMN.BOTTOMS := 2.2005890E+00 ;
SYSTEM.COLUMN.VAPORENTHALPY(1) := 9.6490429E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(2) := 9.6493013E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(3) := 9.7613871E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(4) := 9.8713496E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(5) := 9.9792593E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(6) := 1.0094036E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(7) := 1.0206730E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(8) := 1.0317422E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(9) := 1.0426188E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(10) := 1.0533100E+04 ;
SYSTEM.COLUMN.LIQMDENS(1) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(2) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(3) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(4) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(5) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(6) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(7) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(8) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(9) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(10) := 8.3600000E+02 ;
SYSTEM.COLUMN.VAPMDENS(1) := 2.6164666E+00 ;
SYSTEM.COLUMN.VAPMDENS(2) := 2.6165852E+00 ;
SYSTEM.COLUMN.VAPMDENS(3) := 2.6684553E+00 ;
SYSTEM.COLUMN.VAPMDENS(4) := 2.7201430E+00 ;
SYSTEM.COLUMN.VAPMDENS(5) := 2.7716449E+00 ;
SYSTEM.COLUMN.VAPMDENS(6) := 2.8272809E+00 ;
SYSTEM.COLUMN.VAPMDENS(7) := 2.8827759E+00 ;
SYSTEM.COLUMN.VAPMDENS(8) := 2.9381325E+00 ;
SYSTEM.COLUMN.VAPMDENS(9) := 2.9933533E+00 ;
SYSTEM.COLUMN.VAPMDENS(10) := 3.0484406E+00 ;
SYSTEM.COLUMN.QR := 6.4072745E+06 ;
SYSTEM.COLUMN.DPSTAT(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.DPSTAT(2) := 2.0759991E-02 ;
SYSTEM.COLUMN.DPSTAT(3) := 2.0726226E-02 ;
SYSTEM.COLUMN.DPSTAT(4) := 2.0689946E-02 ;
SYSTEM.COLUMN.DPSTAT(5) := 2.2496062E-02 ;
SYSTEM.COLUMN.DPSTAT(6) := 2.2483939E-02 ;
SYSTEM.COLUMN.DPSTAT(7) := 2.2471846E-02 ;
SYSTEM.COLUMN.DPSTAT(8) := 2.2459782E-02 ;
SYSTEM.COLUMN.DPSTAT(9) := 2.2447747E-02 ;
SYSTEM.COLUMN.DPSTAT(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.DPTRAY(1) := 5.0229571E-05 ;
SYSTEM.COLUMN.DPTRAY(2) := 2.1986243E-02 ;
SYSTEM.COLUMN.DPTRAY(3) := 2.1952478E-02 ;
SYSTEM.COLUMN.DPTRAY(4) := 2.1916198E-02 ;
SYSTEM.COLUMN.DPTRAY(5) := 2.3722479E-02 ;
SYSTEM.COLUMN.DPTRAY(6) := 2.3710350E-02 ;
SYSTEM.COLUMN.DPTRAY(7) := 2.3698251E-02 ;
SYSTEM.COLUMN.DPTRAY(8) := 2.3686181E-02 ;
SYSTEM.COLUMN.DPTRAY(9) := 2.3674140E-02 ;
SYSTEM.COLUMN.DPTRAY(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(1) := -2.1350957E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(2) := -2.1350699E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(3) := -2.1238613E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(4) := -2.1128650E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(5) := -2.1020741E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(6) := -2.0905964E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(7) := -2.0793270E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(8) := -2.0682578E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(9) := -2.0573812E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(10) := -2.0466900E+04 ;
SYSTEM.COLUMN.LIQHEIGHT(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.LIQHEIGHT(2) := 2.5313481E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(3) := 2.5272311E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(4) := 2.5228073E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(5) := 2.7430342E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(6) := 2.7415560E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(7) := 2.7400814E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(8) := 2.7386104E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(9) := 2.7371429E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.HEAD(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.HEAD(2) := 3.1348103E+00 ;
SYSTEM.COLUMN.HEAD(3) := 2.7231056E+00 ;
SYSTEM.COLUMN.HEAD(4) := 2.2807310E+00 ;
SYSTEM.COLUMN.HEAD(5) := 2.4303419E+01 ;
SYSTEM.COLUMN.HEAD(6) := 2.4155598E+01 ;
SYSTEM.COLUMN.HEAD(7) := 2.4008142E+01 ;
SYSTEM.COLUMN.HEAD(8) := 2.3861041E+01 ;
SYSTEM.COLUMN.HEAD(9) := 2.3714286E+01 ;
SYSTEM.COLUMN.HEAD(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.K(1,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(1,2) := 1.1186637E+00 ;
SYSTEM.COLUMN.K(1,3) := 3.6783189E-01 ;
SYSTEM.COLUMN.K(1,4) := 2.0422134E-01 ;
SYSTEM.COLUMN.K(2,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(2,2) := 1.1186548E+00 ;
SYSTEM.COLUMN.K(2,3) := 3.6782906E-01 ;
SYSTEM.COLUMN.K(2,4) := 2.0422232E-01 ;
SYSTEM.COLUMN.K(3,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(3,2) := 1.1147884E+00 ;
SYSTEM.COLUMN.K(3,3) := 3.6660608E-01 ;
SYSTEM.COLUMN.K(3,4) := 2.0464705E-01 ;
SYSTEM.COLUMN.K(4,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(4,2) := 1.1110220E+00 ;
SYSTEM.COLUMN.K(4,3) := 3.6541455E-01 ;
SYSTEM.COLUMN.K(4,4) := 2.0506307E-01 ;
SYSTEM.COLUMN.K(5,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(5,2) := 1.1073514E+00 ;
SYSTEM.COLUMN.K(5,3) := 3.6425316E-01 ;
SYSTEM.COLUMN.K(5,4) := 2.0547070E-01 ;
SYSTEM.COLUMN.K(6,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(6,2) := 1.1034745E+00 ;
SYSTEM.COLUMN.K(6,3) := 3.6302637E-01 ;
SYSTEM.COLUMN.K(6,4) := 2.0590358E-01 ;
SYSTEM.COLUMN.K(7,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(7,2) := 1.0996952E+00 ;
SYSTEM.COLUMN.K(7,3) := 3.6183030E-01 ;
SYSTEM.COLUMN.K(7,4) := 2.0632791E-01 ;
SYSTEM.COLUMN.K(8,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(8,2) := 1.0960090E+00 ;
SYSTEM.COLUMN.K(8,3) := 3.6066354E-01 ;
SYSTEM.COLUMN.K(8,4) := 2.0674404E-01 ;
SYSTEM.COLUMN.K(9,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(9,2) := 1.0924118E+00 ;
SYSTEM.COLUMN.K(9,3) := 3.5952480E-01 ;
SYSTEM.COLUMN.K(9,4) := 2.0715229E-01 ;
SYSTEM.COLUMN.K(10,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(10,2) := 1.0888996E+00 ;
SYSTEM.COLUMN.K(10,3) := 3.5841286E-01 ;
SYSTEM.COLUMN.K(10,4) := 2.0755297E-01 ;
SYSTEM.COLUMN.L(1) := 1.2799411E+01 ;
SYSTEM.COLUMN.L(2) := 1.0700897E+01 ;
SYSTEM.COLUMN.L(3) := 8.6636325E+00 ;
SYSTEM.COLUMN.L(4) := 6.6407080E+00 ;
SYSTEM.COLUMN.L(5) := 2.3099594E+02 ;
SYSTEM.COLUMN.L(6) := 2.2889167E+02 ;
SYSTEM.COLUMN.L(7) := 2.2679899E+02 ;
SYSTEM.COLUMN.L(8) := 2.2471775E+02 ;
SYSTEM.COLUMN.L(9) := 2.2264778E+02 ;
SYSTEM.COLUMN.L(10) := 2.2005890E+00 ;
SYSTEM.COLUMN.$M(2) := 1.2242624E-18 ;
SYSTEM.COLUMN.$M(3) := -4.6540611E-19 ;
SYSTEM.COLUMN.$M(4) := -5.9937802E-19 ;
SYSTEM.COLUMN.$M(5) := 1.1802807E-18 ;
SYSTEM.COLUMN.$M(6) := 1.9926115E-19 ;
SYSTEM.COLUMN.$M(7) := -3.8043140E-19 ;
SYSTEM.COLUMN.$M(8) := -8.5739929E-19 ;
SYSTEM.COLUMN.$M(9) := -4.9483541E-19 ;
SYSTEM.COLUMN.M(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.M(2) := 2.9237071E+00 ;
SYSTEM.COLUMN.M(3) := 2.9189519E+00 ;
SYSTEM.COLUMN.M(4) := 2.9138424E+00 ;
SYSTEM.COLUMN.M(5) := 3.1682045E+00 ;
SYSTEM.COLUMN.M(6) := 3.1664972E+00 ;
SYSTEM.COLUMN.M(7) := 3.1647940E+00 ;
SYSTEM.COLUMN.M(8) := 3.1630950E+00 ;
SYSTEM.COLUMN.M(9) := 3.1614000E+00 ;
SYSTEM.COLUMN.M(10) := 2.2005890E+01 ;
SYSTEM.COLUMN.LIQDENS(1) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(2) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(3) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(4) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(5) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(6) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(7) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(8) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(9) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(10) := 1.1000000E+01 ;
SYSTEM.COLUMN.VAPDENS(1) := 3.4427192E-02 ;
SYSTEM.COLUMN.VAPDENS(2) := 3.4428753E-02 ;
SYSTEM.COLUMN.VAPDENS(3) := 3.5111254E-02 ;
SYSTEM.COLUMN.VAPDENS(4) := 3.5791355E-02 ;
SYSTEM.COLUMN.VAPDENS(5) := 3.6469012E-02 ;
SYSTEM.COLUMN.VAPDENS(6) := 3.7201064E-02 ;
SYSTEM.COLUMN.VAPDENS(7) := 3.7931261E-02 ;
SYSTEM.COLUMN.VAPDENS(8) := 3.8659638E-02 ;
SYSTEM.COLUMN.VAPDENS(9) := 3.9386227E-02 ;
SYSTEM.COLUMN.VAPDENS(10) := 4.0111061E-02 ;
SYSTEM.COLUMN.P(2) := 1.0133002E+00 ;
SYSTEM.COLUMN.P(3) := 1.0352865E+00 ;
SYSTEM.COLUMN.P(4) := 1.0572390E+00 ;
SYSTEM.COLUMN.P(5) := 1.0791551E+00 ;
SYSTEM.COLUMN.P(6) := 1.1028776E+00 ;
SYSTEM.COLUMN.P(7) := 1.1265880E+00 ;
SYSTEM.COLUMN.P(8) := 1.1502862E+00 ;
SYSTEM.COLUMN.P(9) := 1.1739724E+00 ;
SYSTEM.COLUMN.P(10) := 1.1976465E+00 ;
SYSTEM.COLUMN.DISTILLOUT := 1.2799411E+01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(2) := 2.6579155E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(3) := 2.6535926E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(4) := 2.6489477E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(5) := 2.8801859E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(6) := 2.8786338E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(7) := 2.8770855E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(8) := 2.8755409E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(9) := 2.8740000E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(10) := 2.0005354E+00 ;
SYSTEM.COLUMN.T(1) := 3.5400141E+02 ;
SYSTEM.COLUMN.T(2) := 3.5400291E+02 ;
SYSTEM.COLUMN.T(3) := 3.5465344E+02 ;
SYSTEM.COLUMN.T(4) := 3.5529164E+02 ;
SYSTEM.COLUMN.T(5) := 3.5591793E+02 ;
SYSTEM.COLUMN.T(6) := 3.5658408E+02 ;
SYSTEM.COLUMN.T(7) := 3.5723813E+02 ;
SYSTEM.COLUMN.T(8) := 3.5788057E+02 ;
SYSTEM.COLUMN.T(9) := 3.5851183E+02 ;
SYSTEM.COLUMN.T(10) := 3.5913233E+02 ;
SYSTEM.COLUMN.V(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.V(2) := 2.5598822E+01 ;
SYSTEM.COLUMN.V(3) := 2.3500308E+01 ;
SYSTEM.COLUMN.V(4) := 2.1463044E+01 ;
SYSTEM.COLUMN.V(5) := 1.9440119E+01 ;
SYSTEM.COLUMN.V(6) := 2.2879535E+02 ;
SYSTEM.COLUMN.V(7) := 2.2669108E+02 ;
SYSTEM.COLUMN.V(8) := 2.2459840E+02 ;
SYSTEM.COLUMN.V(9) := 2.2251716E+02 ;
SYSTEM.COLUMN.V(10) := 2.2044719E+02 ;
SYSTEM.COLUMN.DPDRY(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.DPDRY(2) := 1.8679046E-09 ;
SYSTEM.COLUMN.DPDRY(3) := 1.5277952E-09 ;
SYSTEM.COLUMN.DPDRY(4) := 1.2295567E-09 ;
SYSTEM.COLUMN.DPDRY(5) := 1.6714726E-07 ;
SYSTEM.COLUMN.DPDRY(6) := 1.6085789E-07 ;
SYSTEM.COLUMN.DPDRY(7) := 1.5486202E-07 ;
SYSTEM.COLUMN.DPDRY(8) := 1.4914136E-07 ;
SYSTEM.COLUMN.DPDRY(9) := 1.4367911E-07 ;
SYSTEM.COLUMN.DPDRY(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(1,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.X(1,2) := -1.4211639E-30 ;
SYSTEM.COLUMN.X(1,3) := -7.7119210E-33 ;
SYSTEM.COLUMN.X(1,4) := -6.9770011E-33 ;
SYSTEM.COLUMN.X(2,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(2,1) := -1.1837221E-27 ;
SYSTEM.COLUMN.X(2,2) := -1.2704223E-30 ;
SYSTEM.COLUMN.$X(2,2) := 5.7989273E-33 ;
SYSTEM.COLUMN.X(2,3) := -2.0966046E-32 ;
SYSTEM.COLUMN.$X(2,3) := -2.6702840E-35 ;
SYSTEM.COLUMN.X(2,4) := -6.1284288E-32 ;
SYSTEM.COLUMN.$X(2,4) := -1.6449487E-34 ;
SYSTEM.COLUMN.X(3,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(3,1) := 1.2030481E-26 ;
SYSTEM.COLUMN.X(3,2) := -1.2080724E-30 ;
SYSTEM.COLUMN.$X(3,2) := 5.5717889E-33 ;
SYSTEM.COLUMN.X(3,3) := -3.8197066E-32 ;
SYSTEM.COLUMN.$X(3,3) := -1.0199658E-34 ;
SYSTEM.COLUMN.X(3,4) := -8.1682874E-32 ;
SYSTEM.COLUMN.$X(3,4) := -2.0340822E-34 ;
SYSTEM.COLUMN.X(4,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(4,1) := -1.3084953E-26 ;
SYSTEM.COLUMN.X(4,2) := -1.0440124E-30 ;
SYSTEM.COLUMN.$X(4,2) := 5.5665390E-33 ;
SYSTEM.COLUMN.X(4,3) := -6.9179846E-32 ;
SYSTEM.COLUMN.$X(4,3) := -1.7839640E-34 ;
SYSTEM.COLUMN.X(4,4) := -6.5639146E-32 ;
SYSTEM.COLUMN.$X(4,4) := -1.6350563E-34 ;
SYSTEM.COLUMN.X(5,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(5,1) := -1.9839325E-24 ;
SYSTEM.COLUMN.X(5,2) := -9.9188031E-31 ;
SYSTEM.COLUMN.$X(5,2) := 5.6461486E-33 ;
SYSTEM.COLUMN.X(5,3) := -9.4816353E-32 ;
SYSTEM.COLUMN.$X(5,3) := -2.4050174E-34 ;
SYSTEM.COLUMN.X(5,4) := -4.3865365E-33 ;
SYSTEM.COLUMN.$X(5,4) := -1.1121583E-35 ;
SYSTEM.COLUMN.X(6,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(6,1) := 7.4938318E-24 ;
SYSTEM.COLUMN.X(6,2) := -8.9965879E-31 ;
SYSTEM.COLUMN.$X(6,2) := 4.9062142E-33 ;
SYSTEM.COLUMN.X(6,3) := -2.6625428E-31 ;
SYSTEM.COLUMN.$X(6,3) := -6.7514852E-34 ;
SYSTEM.COLUMN.X(6,4) := -1.2628664E-32 ;
SYSTEM.COLUMN.$X(6,4) := -3.2429701E-35 ;
SYSTEM.COLUMN.X(7,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(7,1) := -2.4396441E-24 ;
SYSTEM.COLUMN.X(7,2) := -8.2074687E-31 ;
SYSTEM.COLUMN.$X(7,2) := 4.4714835E-33 ;
SYSTEM.COLUMN.X(7,3) := -1.0224736E-30 ;
SYSTEM.COLUMN.$X(7,3) := -1.8627965E-33 ;
SYSTEM.COLUMN.X(7,4) := -5.2858763E-32 ;
SYSTEM.COLUMN.$X(7,4) := -1.3644363E-34 ;
SYSTEM.COLUMN.X(8,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(8,1) := 1.9593269E-24 ;
SYSTEM.COLUMN.X(8,2) := -7.5079031E-31 ;
SYSTEM.COLUMN.$X(8,2) := 4.0864429E-33 ;
SYSTEM.COLUMN.X(8,3) := -2.3610164E-30 ;
SYSTEM.COLUMN.$X(8,3) := -4.3205815E-33 ;
SYSTEM.COLUMN.X(8,4) := -2.4917836E-31 ;
SYSTEM.COLUMN.$X(8,4) := -6.4403095E-34 ;
SYSTEM.COLUMN.X(9,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(9,1) := -4.8662107E-23 ;
SYSTEM.COLUMN.X(9,2) := -6.8857471E-31 ;
SYSTEM.COLUMN.$X(9,2) := 3.7443424E-33 ;
SYSTEM.COLUMN.X(9,3) := -6.1456693E-30 ;
SYSTEM.COLUMN.$X(9,3) := -1.1199828E-32 ;
SYSTEM.COLUMN.X(9,4) := -1.2057443E-30 ;
SYSTEM.COLUMN.$X(9,4) := -3.1172520E-33 ;
SYSTEM.COLUMN.X(10,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(10,1) := 6.2732799E-24 ;
SYSTEM.COLUMN.X(10,2) := -6.3306961E-31 ;
SYSTEM.COLUMN.$X(10,2) := 3.4394567E-33 ;
SYSTEM.COLUMN.X(10,3) := -1.6841651E-29 ;
SYSTEM.COLUMN.$X(10,3) := -3.0684627E-32 ;
SYSTEM.COLUMN.X(10,4) := -5.7768554E-30 ;
SYSTEM.COLUMN.$X(10,4) := -1.5124539E-32 ;
SYSTEM.COLUMN.Y(1,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(1,2) := -1.5898045E-30 ;
SYSTEM.COLUMN.Y(1,3) := -2.8366905E-33 ;
SYSTEM.COLUMN.Y(1,4) := -1.4248525E-33 ;
SYSTEM.COLUMN.Y(2,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(2,2) := -1.4211639E-30 ;
SYSTEM.COLUMN.Y(2,3) := -7.7119210E-33 ;
SYSTEM.COLUMN.Y(2,4) := -6.9770011E-33 ;
SYSTEM.COLUMN.Y(3,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(3,2) := -1.3467451E-30 ;
SYSTEM.COLUMN.Y(3,3) := -1.4003277E-32 ;
SYSTEM.COLUMN.Y(3,4) := -1.6716159E-32 ;
SYSTEM.COLUMN.Y(4,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(4,2) := -1.1599208E-30 ;
SYSTEM.COLUMN.Y(4,3) := -2.5279322E-32 ;
SYSTEM.COLUMN.Y(4,4) := -1.3460165E-32 ;
SYSTEM.COLUMN.Y(5,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(5,2) := -1.0983600E-30 ;
SYSTEM.COLUMN.Y(5,3) := -3.4537156E-32 ;
SYSTEM.COLUMN.Y(5,4) := -9.0130471E-34 ;
SYSTEM.COLUMN.Y(6,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(6,2) := -9.9275052E-31 ;
SYSTEM.COLUMN.Y(6,3) := -9.6657325E-32 ;
SYSTEM.COLUMN.Y(6,4) := -2.6002870E-33 ;
SYSTEM.COLUMN.Y(7,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(7,2) := -9.0257137E-31 ;
SYSTEM.COLUMN.Y(7,3) := -3.6996193E-31 ;
SYSTEM.COLUMN.Y(7,4) := -1.0906238E-32 ;
SYSTEM.COLUMN.Y(8,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(8,2) := -8.2287294E-31 ;
SYSTEM.COLUMN.Y(8,3) := -8.5153253E-31 ;
SYSTEM.COLUMN.Y(8,4) := -5.1516140E-32 ;
SYSTEM.COLUMN.Y(9,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(9,2) := -7.5220712E-31 ;
SYSTEM.COLUMN.Y(9,3) := -2.2095205E-30 ;
SYSTEM.COLUMN.Y(9,4) := -2.4977270E-31 ;
SYSTEM.COLUMN.Y(10,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(10,2) := -6.8934926E-31 ;
SYSTEM.COLUMN.Y(10,3) := -6.0362642E-30 ;
SYSTEM.COLUMN.Y(10,4) := -1.1990035E-30 ;
SYSTEM.COLUMN.FEEDENTHALPY := -4.5983100E+05 ;
SYSTEM.COLUMN.MWL(1) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(2) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(3) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(4) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(5) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(6) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(7) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(8) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(9) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(10) := 7.6000000E+01 ;
SYSTEM.COLUMN.TOTALHOLDUP := 4.6586382E+01 ;
SYSTEM.COLUMN.$TOTALHOLDUP := 0.0000000E+00 ;
SYSTEM.COLUMN.QC := 7.9357010E+05 ;
SYSTEM.VLE.VAPORENTHALPY(1) := 9.6490429E+03 ;
SYSTEM.VLE.VAPORENTHALPY(2) := 9.6493013E+03 ;
SYSTEM.VLE.VAPORENTHALPY(3) := 9.7613871E+03 ;
SYSTEM.VLE.VAPORENTHALPY(4) := 9.8713496E+03 ;
SYSTEM.VLE.VAPORENTHALPY(5) := 9.9792593E+03 ;
SYSTEM.VLE.VAPORENTHALPY(6) := 1.0094036E+04 ;
SYSTEM.VLE.VAPORENTHALPY(7) := 1.0206730E+04 ;
SYSTEM.VLE.VAPORENTHALPY(8) := 1.0317422E+04 ;
SYSTEM.VLE.VAPORENTHALPY(9) := 1.0426188E+04 ;
SYSTEM.VLE.VAPORENTHALPY(10) := 1.0533100E+04 ;
SYSTEM.VLE.TEMP(1) := 3.5400141E+02 ;
SYSTEM.VLE.TEMP(2) := 3.5400291E+02 ;
SYSTEM.VLE.TEMP(3) := 3.5465344E+02 ;
SYSTEM.VLE.TEMP(4) := 3.5529164E+02 ;
SYSTEM.VLE.TEMP(5) := 3.5591793E+02 ;
SYSTEM.VLE.TEMP(6) := 3.5658408E+02 ;
SYSTEM.VLE.TEMP(7) := 3.5723813E+02 ;
SYSTEM.VLE.TEMP(8) := 3.5788057E+02 ;
SYSTEM.VLE.TEMP(9) := 3.5851183E+02 ;
SYSTEM.VLE.TEMP(10) := 3.5913233E+02 ;
SYSTEM.VLE.LIQUIDENTHALPY(1) := -2.1350957E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(2) := -2.1350699E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(3) := -2.1238613E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(4) := -2.1128650E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(5) := -2.1020741E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(6) := -2.0905964E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(7) := -2.0793270E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(8) := -2.0682578E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(9) := -2.0573812E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(10) := -2.0466900E+04 ;
SYSTEM.VLE.K(1,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(1,2) := 1.1186637E+00 ;
SYSTEM.VLE.K(1,3) := 3.6783189E-01 ;
SYSTEM.VLE.K(1,4) := 2.0422134E-01 ;
SYSTEM.VLE.K(2,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(2,2) := 1.1186548E+00 ;
SYSTEM.VLE.K(2,3) := 3.6782906E-01 ;
SYSTEM.VLE.K(2,4) := 2.0422232E-01 ;
SYSTEM.VLE.K(3,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(3,2) := 1.1147884E+00 ;
SYSTEM.VLE.K(3,3) := 3.6660608E-01 ;
SYSTEM.VLE.K(3,4) := 2.0464705E-01 ;
SYSTEM.VLE.K(4,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(4,2) := 1.1110220E+00 ;
SYSTEM.VLE.K(4,3) := 3.6541455E-01 ;
SYSTEM.VLE.K(4,4) := 2.0506307E-01 ;
SYSTEM.VLE.K(5,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(5,2) := 1.1073514E+00 ;
SYSTEM.VLE.K(5,3) := 3.6425316E-01 ;
SYSTEM.VLE.K(5,4) := 2.0547070E-01 ;
SYSTEM.VLE.K(6,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(6,2) := 1.1034745E+00 ;
SYSTEM.VLE.K(6,3) := 3.6302637E-01 ;
SYSTEM.VLE.K(6,4) := 2.0590358E-01 ;
SYSTEM.VLE.K(7,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(7,2) := 1.0996952E+00 ;
SYSTEM.VLE.K(7,3) := 3.6183030E-01 ;
SYSTEM.VLE.K(7,4) := 2.0632791E-01 ;
SYSTEM.VLE.K(8,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(8,2) := 1.0960090E+00 ;
SYSTEM.VLE.K(8,3) := 3.6066354E-01 ;
SYSTEM.VLE.K(8,4) := 2.0674404E-01 ;
SYSTEM.VLE.K(9,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(9,2) := 1.0924118E+00 ;
SYSTEM.VLE.K(9,3) := 3.5952480E-01 ;
SYSTEM.VLE.K(9,4) := 2.0715229E-01 ;
SYSTEM.VLE.K(10,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(10,2) := 1.0888996E+00 ;
SYSTEM.VLE.K(10,3) := 3.5841286E-01 ;
SYSTEM.VLE.K(10,4) := 2.0755297E-01 ;
SYSTEM.VLE.VAPORPRESSURE(1,1) := 1.0132500E+00 ;
SYSTEM.VLE.VAPORPRESSURE(1,2) := 1.1334860E+00 ;
SYSTEM.VLE.VAPORPRESSURE(1,3) := 3.7270566E-01 ;
SYSTEM.VLE.VAPORPRESSURE(1,4) := 2.0692728E-01 ;
SYSTEM.VLE.VAPORPRESSURE(2,1) := 1.0133002E+00 ;
SYSTEM.VLE.VAPORPRESSURE(2,2) := 1.1335331E+00 ;
SYSTEM.VLE.VAPORPRESSURE(2,3) := 3.7272127E-01 ;
SYSTEM.VLE.VAPORPRESSURE(2,4) := 2.0693853E-01 ;
SYSTEM.VLE.VAPORPRESSURE(3,1) := 1.0352865E+00 ;
SYSTEM.VLE.VAPORPRESSURE(3,2) := 1.1541254E+00 ;
SYSTEM.VLE.VAPORPRESSURE(3,3) := 3.7954232E-01 ;
SYSTEM.VLE.VAPORPRESSURE(3,4) := 2.1186832E-01 ;
SYSTEM.VLE.VAPORPRESSURE(4,1) := 1.0572390E+00 ;
SYSTEM.VLE.VAPORPRESSURE(4,2) := 1.1746158E+00 ;
SYSTEM.VLE.VAPORPRESSURE(4,3) := 3.8633050E-01 ;
SYSTEM.VLE.VAPORPRESSURE(4,4) := 2.1680067E-01 ;
SYSTEM.VLE.VAPORPRESSURE(5,1) := 1.0791551E+00 ;
SYSTEM.VLE.VAPORPRESSURE(5,2) := 1.1950039E+00 ;
SYSTEM.VLE.VAPORPRESSURE(5,3) := 3.9308568E-01 ;
SYSTEM.VLE.VAPORPRESSURE(5,4) := 2.2173476E-01 ;
SYSTEM.VLE.VAPORPRESSURE(6,1) := 1.1028776E+00 ;
SYSTEM.VLE.VAPORPRESSURE(6,2) := 1.2169973E+00 ;
SYSTEM.VLE.VAPORPRESSURE(6,3) := 4.0037366E-01 ;
SYSTEM.VLE.VAPORPRESSURE(6,4) := 2.2708645E-01 ;
SYSTEM.VLE.VAPORPRESSURE(7,1) := 1.1265880E+00 ;
SYSTEM.VLE.VAPORPRESSURE(7,2) := 1.2389034E+00 ;
SYSTEM.VLE.VAPORPRESSURE(7,3) := 4.0763366E-01 ;
SYSTEM.VLE.VAPORPRESSURE(7,4) := 2.3244654E-01 ;
SYSTEM.VLE.VAPORPRESSURE(8,1) := 1.1502862E+00 ;
SYSTEM.VLE.VAPORPRESSURE(8,2) := 1.2607240E+00 ;
SYSTEM.VLE.VAPORPRESSURE(8,3) := 4.1486630E-01 ;
SYSTEM.VLE.VAPORPRESSURE(8,4) := 2.3781482E-01 ;
SYSTEM.VLE.VAPORPRESSURE(9,1) := 1.1739724E+00 ;
SYSTEM.VLE.VAPORPRESSURE(9,2) := 1.2824613E+00 ;
SYSTEM.VLE.VAPORPRESSURE(9,3) := 4.2207220E-01 ;
SYSTEM.VLE.VAPORPRESSURE(9,4) := 2.4319108E-01 ;
SYSTEM.VLE.VAPORPRESSURE(10,1) := 1.1976465E+00 ;
SYSTEM.VLE.VAPORPRESSURE(10,2) := 1.3041169E+00 ;
SYSTEM.VLE.VAPORPRESSURE(10,3) := 4.2925192E-01 ;
SYSTEM.VLE.VAPORPRESSURE(10,4) := 2.4857510E-01 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,1) := 9.6490429E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,2) := 1.1200282E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,3) := 8.9602256E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,4) := 8.6802185E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,1) := 9.6493013E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,2) := 1.1200582E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,3) := 8.9604655E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,4) := 8.6804509E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,1) := 9.7613871E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,2) := 1.1330687E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,3) := 9.0645499E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,4) := 8.7812827E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,1) := 9.8713496E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,2) := 1.1458328E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,3) := 9.1666624E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,4) := 8.8802042E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,1) := 9.9792593E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,2) := 1.1583586E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,3) := 9.2668688E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,4) := 8.9772791E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,1) := 1.0094036E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,2) := 1.1716815E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,3) := 9.3734520E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,4) := 9.0805316E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,1) := 1.0206730E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,2) := 1.1847626E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,3) := 9.4781009E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,4) := 9.1819102E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,1) := 1.0317422E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,2) := 1.1976113E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,3) := 9.5808907E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,4) := 9.2814879E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,1) := 1.0426188E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,2) := 1.2102365E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,3) := 9.6818924E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,4) := 9.3793332E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,1) := 1.0533100E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,2) := 1.2226466E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,3) := 9.7811726E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,4) := 9.4755110E+03 ;
SYSTEM.VLE.P(1) := 1.0132500E+00 ;
SYSTEM.VLE.P(2) := 1.0133002E+00 ;
SYSTEM.VLE.P(3) := 1.0352865E+00 ;
SYSTEM.VLE.P(4) := 1.0572390E+00 ;
SYSTEM.VLE.P(5) := 1.0791551E+00 ;
SYSTEM.VLE.P(6) := 1.1028776E+00 ;
SYSTEM.VLE.P(7) := 1.1265880E+00 ;
SYSTEM.VLE.P(8) := 1.1502862E+00 ;
SYSTEM.VLE.P(9) := 1.1739724E+00 ;
SYSTEM.VLE.P(10) := 1.1976465E+00 ;
SYSTEM.VLE.X(1,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(1,2) := -1.4211639E-30 ;
SYSTEM.VLE.X(1,3) := -7.7119210E-33 ;
SYSTEM.VLE.X(1,4) := -6.9770011E-33 ;
SYSTEM.VLE.X(2,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(2,2) := -1.2704223E-30 ;
SYSTEM.VLE.X(2,3) := -2.0966046E-32 ;
SYSTEM.VLE.X(2,4) := -6.1284288E-32 ;
SYSTEM.VLE.X(3,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(3,2) := -1.2080724E-30 ;
SYSTEM.VLE.X(3,3) := -3.8197066E-32 ;
SYSTEM.VLE.X(3,4) := -8.1682874E-32 ;
SYSTEM.VLE.X(4,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(4,2) := -1.0440124E-30 ;
SYSTEM.VLE.X(4,3) := -6.9179846E-32 ;
SYSTEM.VLE.X(4,4) := -6.5639146E-32 ;
SYSTEM.VLE.X(5,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(5,2) := -9.9188031E-31 ;
SYSTEM.VLE.X(5,3) := -9.4816353E-32 ;
SYSTEM.VLE.X(5,4) := -4.3865365E-33 ;
SYSTEM.VLE.X(6,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(6,2) := -8.9965879E-31 ;
SYSTEM.VLE.X(6,3) := -2.6625428E-31 ;
SYSTEM.VLE.X(6,4) := -1.2628664E-32 ;
SYSTEM.VLE.X(7,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(7,2) := -8.2074687E-31 ;
SYSTEM.VLE.X(7,3) := -1.0224736E-30 ;
SYSTEM.VLE.X(7,4) := -5.2858763E-32 ;
SYSTEM.VLE.X(8,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(8,2) := -7.5079031E-31 ;
SYSTEM.VLE.X(8,3) := -2.3610164E-30 ;
SYSTEM.VLE.X(8,4) := -2.4917836E-31 ;
SYSTEM.VLE.X(9,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(9,2) := -6.8857471E-31 ;
SYSTEM.VLE.X(9,3) := -6.1456693E-30 ;
SYSTEM.VLE.X(9,4) := -1.2057443E-30 ;
SYSTEM.VLE.X(10,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(10,2) := -6.3306961E-31 ;
SYSTEM.VLE.X(10,3) := -1.6841651E-29 ;
SYSTEM.VLE.X(10,4) := -5.7768554E-30 ;
SYSTEM.VLE.Y(1,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(1,2) := -1.5898045E-30 ;
SYSTEM.VLE.Y(1,3) := -2.8366905E-33 ;
SYSTEM.VLE.Y(1,4) := -1.4248525E-33 ;
SYSTEM.VLE.Y(2,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(2,2) := -1.4211639E-30 ;
SYSTEM.VLE.Y(2,3) := -7.7119210E-33 ;
SYSTEM.VLE.Y(2,4) := -6.9770011E-33 ;
SYSTEM.VLE.Y(3,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(3,2) := -1.3467451E-30 ;
SYSTEM.VLE.Y(3,3) := -1.4003277E-32 ;
SYSTEM.VLE.Y(3,4) := -1.6716159E-32 ;
SYSTEM.VLE.Y(4,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(4,2) := -1.1599208E-30 ;
SYSTEM.VLE.Y(4,3) := -2.5279322E-32 ;
SYSTEM.VLE.Y(4,4) := -1.3460165E-32 ;
SYSTEM.VLE.Y(5,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(5,2) := -1.0983600E-30 ;
SYSTEM.VLE.Y(5,3) := -3.4537156E-32 ;
SYSTEM.VLE.Y(5,4) := -9.0130471E-34 ;
SYSTEM.VLE.Y(6,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(6,2) := -9.9275052E-31 ;
SYSTEM.VLE.Y(6,3) := -9.6657325E-32 ;
SYSTEM.VLE.Y(6,4) := -2.6002870E-33 ;
SYSTEM.VLE.Y(7,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(7,2) := -9.0257137E-31 ;
SYSTEM.VLE.Y(7,3) := -3.6996193E-31 ;
SYSTEM.VLE.Y(7,4) := -1.0906238E-32 ;
SYSTEM.VLE.Y(8,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(8,2) := -8.2287294E-31 ;
SYSTEM.VLE.Y(8,3) := -8.5153253E-31 ;
SYSTEM.VLE.Y(8,4) := -5.1516140E-32 ;
SYSTEM.VLE.Y(9,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(9,2) := -7.5220712E-31 ;
SYSTEM.VLE.Y(9,3) := -2.2095205E-30 ;
SYSTEM.VLE.Y(9,4) := -2.4977270E-31 ;
SYSTEM.VLE.Y(10,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(10,2) := -6.8934926E-31 ;
SYSTEM.VLE.Y(10,3) := -6.0362642E-30 ;
SYSTEM.VLE.Y(10,4) := -1.1990035E-30 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,1) := -2.1350957E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,2) := -1.4799718E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,3) := -1.9039774E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,4) := -2.5319781E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,1) := -2.1350699E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,2) := -1.4799418E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,3) := -1.9039535E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,4) := -2.5319549E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,1) := -2.1238613E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,2) := -1.4669313E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,3) := -1.8935450E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,4) := -2.5218717E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,1) := -2.1128650E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,2) := -1.4541672E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,3) := -1.8833338E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,4) := -2.5119796E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,1) := -2.1020741E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,2) := -1.4416414E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,3) := -1.8733131E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,4) := -2.5022721E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,1) := -2.0905964E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,2) := -1.4283185E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,3) := -1.8626548E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,4) := -2.4919468E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,1) := -2.0793270E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,2) := -1.4152374E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,3) := -1.8521899E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,4) := -2.4818090E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,1) := -2.0682578E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,2) := -1.4023887E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,3) := -1.8419109E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,4) := -2.4718512E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,1) := -2.0573812E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,2) := -1.3897635E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,3) := -1.8318108E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,4) := -2.4620667E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,1) := -2.0466900E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,2) := -1.3773534E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,3) := -1.8218827E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,4) := -2.4524489E+04 ;
INITIAL
WITHIN System.Column DO
FOR I:=2 TO NSTAGE DO
X(I,D)= 0.0 ;
X(I,B)= 0.0 ;
X(I,C)= 0.0 ;
END
TotalHoldup=4.6586382E+01 ;
M(2)=2.9237071E+00 ;
M(3)=2.9189519E+00 ;
M(4)=2.9138424E+00 ;
M(5)=3.1682045E+00 ;
M(6)=3.1664972E+00 ;
M(7)=3.1647940E+00 ;
M(8)=3.1630950E+00 ;
M(9)=3.1614000E+00 ;
FOR I:=2 TO NSTAGE DO
SIGMA(X(I,))=1 ;
END
END
SCHEDULE
CONTINUE FOR 1000
END
SIMULATION RunBoth
UNIT
System As BothFlowSheet
INPUT
WITHIN System.Reactor DO
FlowOut := 15 ;
FeedTemp := 300 ;
END
WITHIN System.Jacket DO
Flow_Water := 3 ;
END
WITHIN System.Column DO
P(1) :=1.01325 ;
REFLUXRATIO :=0.5 ;
END
WITHIN System DO
BFraction:=0.15 ;
END
PRESET
##############################
# Values of All Active Variables #
##############################
SYSTEM.MAKEUP(1) := 7.6341614E-02 ;
SYSTEM.MAKEUP(2) := 7.5000000E+00 ;
SYSTEM.MAKEUP(3) := 0.0000000E+00 ;
SYSTEM.MAKEUP(4) := 0.0000000E+00 ;
SYSTEM.COLUMN.MWV(1) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(2) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(3) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(4) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(5) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(6) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(7) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(8) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(9) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(10) := 7.6000000E+01 ;
SYSTEM.COLUMN.BOTTOMS := 2.2005890E+00 ;
SYSTEM.COLUMN.VAPORENTHALPY(1) := 9.6490429E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(2) := 9.6493013E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(3) := 9.7613871E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(4) := 9.8713496E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(5) := 9.9792593E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(6) := 1.0094036E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(7) := 1.0206730E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(8) := 1.0317422E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(9) := 1.0426188E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(10) := 1.0533100E+04 ;
SYSTEM.COLUMN.LIQMDENS(1) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(2) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(3) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(4) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(5) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(6) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(7) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(8) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(9) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(10) := 8.3600000E+02 ;
SYSTEM.COLUMN.VAPMDENS(1) := 2.6164666E+00 ;
SYSTEM.COLUMN.VAPMDENS(2) := 2.6165852E+00 ;
SYSTEM.COLUMN.VAPMDENS(3) := 2.6684553E+00 ;
SYSTEM.COLUMN.VAPMDENS(4) := 2.7201430E+00 ;
SYSTEM.COLUMN.VAPMDENS(5) := 2.7716449E+00 ;
SYSTEM.COLUMN.VAPMDENS(6) := 2.8272809E+00 ;
SYSTEM.COLUMN.VAPMDENS(7) := 2.8827759E+00 ;
SYSTEM.COLUMN.VAPMDENS(8) := 2.9381325E+00 ;
SYSTEM.COLUMN.VAPMDENS(9) := 2.9933533E+00 ;
SYSTEM.COLUMN.VAPMDENS(10) := 3.0484406E+00 ;
SYSTEM.COLUMN.QR := 6.4072745E+06 ;
SYSTEM.COLUMN.DPSTAT(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.DPSTAT(2) := 2.0759991E-02 ;
SYSTEM.COLUMN.DPSTAT(3) := 2.0726226E-02 ;
SYSTEM.COLUMN.DPSTAT(4) := 2.0689946E-02 ;
SYSTEM.COLUMN.DPSTAT(5) := 2.2496062E-02 ;
SYSTEM.COLUMN.DPSTAT(6) := 2.2483940E-02 ;
SYSTEM.COLUMN.DPSTAT(7) := 2.2471846E-02 ;
SYSTEM.COLUMN.DPSTAT(8) := 2.2459782E-02 ;
SYSTEM.COLUMN.DPSTAT(9) := 2.2447747E-02 ;
SYSTEM.COLUMN.DPSTAT(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.XFEED(1) := 1.0000000E+00 ;
SYSTEM.COLUMN.XFEED(2) := 0.0000000E+00 ;
SYSTEM.COLUMN.XFEED(3) := 0.0000000E+00 ;
SYSTEM.COLUMN.XFEED(4) := 0.0000000E+00 ;
SYSTEM.COLUMN.DPTRAY(1) := 5.0229571E-05 ;
SYSTEM.COLUMN.DPTRAY(2) := 2.1986243E-02 ;
SYSTEM.COLUMN.DPTRAY(3) := 2.1952478E-02 ;
SYSTEM.COLUMN.DPTRAY(4) := 2.1916197E-02 ;
SYSTEM.COLUMN.DPTRAY(5) := 2.3722479E-02 ;
SYSTEM.COLUMN.DPTRAY(6) := 2.3710350E-02 ;
SYSTEM.COLUMN.DPTRAY(7) := 2.3698251E-02 ;
SYSTEM.COLUMN.DPTRAY(8) := 2.3686181E-02 ;
SYSTEM.COLUMN.DPTRAY(9) := 2.3674140E-02 ;
SYSTEM.COLUMN.DPTRAY(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(1) := -2.1350957E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(2) := -2.1350699E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(3) := -2.1238613E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(4) := -2.1128650E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(5) := -2.1020741E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(6) := -2.0905964E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(7) := -2.0793270E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(8) := -2.0682578E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(9) := -2.0573812E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(10) := -2.0466900E+04 ;
SYSTEM.COLUMN.LIQHEIGHT(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.LIQHEIGHT(2) := 2.5313481E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(3) := 2.5272311E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(4) := 2.5228073E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(5) := 2.7430342E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(6) := 2.7415560E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(7) := 2.7400814E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(8) := 2.7386104E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(9) := 2.7371429E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.HEAD(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.HEAD(2) := 3.1348139E+00 ;
SYSTEM.COLUMN.HEAD(3) := 2.7231082E+00 ;
SYSTEM.COLUMN.HEAD(4) := 2.2807273E+00 ;
SYSTEM.COLUMN.HEAD(5) := 2.4303420E+01 ;
SYSTEM.COLUMN.HEAD(6) := 2.4155602E+01 ;
SYSTEM.COLUMN.HEAD(7) := 2.4008139E+01 ;
SYSTEM.COLUMN.HEAD(8) := 2.3861039E+01 ;
SYSTEM.COLUMN.HEAD(9) := 2.3714286E+01 ;
SYSTEM.COLUMN.HEAD(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.K(1,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(1,2) := 1.1186637E+00 ;
SYSTEM.COLUMN.K(1,3) := 3.6783189E-01 ;
SYSTEM.COLUMN.K(1,4) := 2.0422134E-01 ;
SYSTEM.COLUMN.K(2,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(2,2) := 1.1186548E+00 ;
SYSTEM.COLUMN.K(2,3) := 3.6782906E-01 ;
SYSTEM.COLUMN.K(2,4) := 2.0422232E-01 ;
SYSTEM.COLUMN.K(3,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(3,2) := 1.1147884E+00 ;
SYSTEM.COLUMN.K(3,3) := 3.6660608E-01 ;
SYSTEM.COLUMN.K(3,4) := 2.0464705E-01 ;
SYSTEM.COLUMN.K(4,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(4,2) := 1.1110220E+00 ;
SYSTEM.COLUMN.K(4,3) := 3.6541455E-01 ;
SYSTEM.COLUMN.K(4,4) := 2.0506307E-01 ;
SYSTEM.COLUMN.K(5,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(5,2) := 1.1073514E+00 ;
SYSTEM.COLUMN.K(5,3) := 3.6425316E-01 ;
SYSTEM.COLUMN.K(5,4) := 2.0547070E-01 ;
SYSTEM.COLUMN.K(6,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(6,2) := 1.1034745E+00 ;
SYSTEM.COLUMN.K(6,3) := 3.6302637E-01 ;
SYSTEM.COLUMN.K(6,4) := 2.0590358E-01 ;
SYSTEM.COLUMN.K(7,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(7,2) := 1.0996952E+00 ;
SYSTEM.COLUMN.K(7,3) := 3.6183030E-01 ;
SYSTEM.COLUMN.K(7,4) := 2.0632791E-01 ;
SYSTEM.COLUMN.K(8,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(8,2) := 1.0960090E+00 ;
SYSTEM.COLUMN.K(8,3) := 3.6066354E-01 ;
SYSTEM.COLUMN.K(8,4) := 2.0674404E-01 ;
SYSTEM.COLUMN.K(9,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(9,2) := 1.0924118E+00 ;
SYSTEM.COLUMN.K(9,3) := 3.5952480E-01 ;
SYSTEM.COLUMN.K(9,4) := 2.0715229E-01 ;
SYSTEM.COLUMN.K(10,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(10,2) := 1.0888996E+00 ;
SYSTEM.COLUMN.K(10,3) := 3.5841286E-01 ;
SYSTEM.COLUMN.K(10,4) := 2.0755297E-01 ;
SYSTEM.COLUMN.L(1) := 1.2799411E+01 ;
SYSTEM.COLUMN.L(2) := 1.0700915E+01 ;
SYSTEM.COLUMN.L(3) := 8.6636449E+00 ;
SYSTEM.COLUMN.L(4) := 6.6406918E+00 ;
SYSTEM.COLUMN.L(5) := 2.3099596E+02 ;
SYSTEM.COLUMN.L(6) := 2.2889172E+02 ;
SYSTEM.COLUMN.L(7) := 2.2679894E+02 ;
SYSTEM.COLUMN.L(8) := 2.2471772E+02 ;
SYSTEM.COLUMN.L(9) := 2.2264778E+02 ;
SYSTEM.COLUMN.L(10) := 2.2005890E+00 ;
SYSTEM.COLUMN.FEED(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(2) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(3) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(4) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(5) := 1.5000000E+01 ;
SYSTEM.COLUMN.FEED(6) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(7) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(8) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(9) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.M(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.$M(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.M(2) := 2.9237071E+00 ;
SYSTEM.COLUMN.$M(2) := -5.8431630E-05 ;
SYSTEM.COLUMN.M(3) := 2.9189519E+00 ;
SYSTEM.COLUMN.$M(3) := 1.9291184E-05 ;
SYSTEM.COLUMN.M(4) := 2.9138424E+00 ;
SYSTEM.COLUMN.$M(4) := 8.9519949E-05 ;
SYSTEM.COLUMN.M(5) := 3.1682045E+00 ;
SYSTEM.COLUMN.$M(5) := -1.1165674E-04 ;
SYSTEM.COLUMN.M(6) := 3.1664972E+00 ;
SYSTEM.COLUMN.$M(6) := -1.0041827E-04 ;
SYSTEM.COLUMN.M(7) := 3.1647940E+00 ;
SYSTEM.COLUMN.$M(7) := 3.0944158E-04 ;
SYSTEM.COLUMN.M(8) := 3.1630950E+00 ;
SYSTEM.COLUMN.$M(8) := -5.6979389E-05 ;
SYSTEM.COLUMN.M(9) := 3.1614000E+00 ;
SYSTEM.COLUMN.$M(9) := -7.1432749E-05 ;
SYSTEM.COLUMN.M(10) := 2.2005890E+01 ;
SYSTEM.COLUMN.$M(10) := -1.9333939E-05 ;
SYSTEM.COLUMN.LIQDENS(1) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(2) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(3) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(4) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(5) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(6) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(7) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(8) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(9) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(10) := 1.1000000E+01 ;
SYSTEM.COLUMN.VAPDENS(1) := 3.4427192E-02 ;
SYSTEM.COLUMN.VAPDENS(2) := 3.4428753E-02 ;
SYSTEM.COLUMN.VAPDENS(3) := 3.5111254E-02 ;
SYSTEM.COLUMN.VAPDENS(4) := 3.5791355E-02 ;
SYSTEM.COLUMN.VAPDENS(5) := 3.6469012E-02 ;
SYSTEM.COLUMN.VAPDENS(6) := 3.7201064E-02 ;
SYSTEM.COLUMN.VAPDENS(7) := 3.7931261E-02 ;
SYSTEM.COLUMN.VAPDENS(8) := 3.8659638E-02 ;
SYSTEM.COLUMN.VAPDENS(9) := 3.9386227E-02 ;
SYSTEM.COLUMN.VAPDENS(10) := 4.0111061E-02 ;
SYSTEM.COLUMN.P(2) := 1.0133002E+00 ;
SYSTEM.COLUMN.P(3) := 1.0352865E+00 ;
SYSTEM.COLUMN.P(4) := 1.0572390E+00 ;
SYSTEM.COLUMN.P(5) := 1.0791551E+00 ;
SYSTEM.COLUMN.P(6) := 1.1028776E+00 ;
SYSTEM.COLUMN.P(7) := 1.1265880E+00 ;
SYSTEM.COLUMN.P(8) := 1.1502862E+00 ;
SYSTEM.COLUMN.P(9) := 1.1739724E+00 ;
SYSTEM.COLUMN.P(10) := 1.1976465E+00 ;
SYSTEM.COLUMN.DISTILLOUT := 1.2799411E+01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(2) := 2.6579155E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(3) := 2.6535926E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(4) := 2.6489476E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(5) := 2.8801859E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(6) := 2.8786338E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(7) := 2.8770855E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(8) := 2.8755409E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(9) := 2.8740000E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(10) := 2.0005354E+00 ;
SYSTEM.COLUMN.T(1) := 3.5400141E+02 ;
SYSTEM.COLUMN.T(2) := 3.5400291E+02 ;
SYSTEM.COLUMN.T(3) := 3.5465344E+02 ;
SYSTEM.COLUMN.T(4) := 3.5529164E+02 ;
SYSTEM.COLUMN.T(5) := 3.5591793E+02 ;
SYSTEM.COLUMN.T(6) := 3.5658408E+02 ;
SYSTEM.COLUMN.T(7) := 3.5723813E+02 ;
SYSTEM.COLUMN.T(8) := 3.5788057E+02 ;
SYSTEM.COLUMN.T(9) := 3.5851183E+02 ;
SYSTEM.COLUMN.T(10) := 3.5913233E+02 ;
SYSTEM.COLUMN.V(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.V(2) := 2.5598822E+01 ;
SYSTEM.COLUMN.V(3) := 2.3500268E+01 ;
SYSTEM.COLUMN.V(4) := 2.1463017E+01 ;
SYSTEM.COLUMN.V(5) := 1.9440153E+01 ;
SYSTEM.COLUMN.V(6) := 2.2879531E+02 ;
SYSTEM.COLUMN.V(7) := 2.2669097E+02 ;
SYSTEM.COLUMN.V(8) := 2.2459850E+02 ;
SYSTEM.COLUMN.V(9) := 2.2251722E+02 ;
SYSTEM.COLUMN.V(10) := 2.2044721E+02 ;
#SYSTEM.COLUMN.FEEDTEMP := 3.0000000E+02 ;
SYSTEM.COLUMN.DPDRY(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.DPDRY(2) := 1.8678982E-09 ;
SYSTEM.COLUMN.DPDRY(3) := 1.5277914E-09 ;
SYSTEM.COLUMN.DPDRY(4) := 1.2295610E-09 ;
SYSTEM.COLUMN.DPDRY(5) := 1.6714719E-07 ;
SYSTEM.COLUMN.DPDRY(6) := 1.6085773E-07 ;
SYSTEM.COLUMN.DPDRY(7) := 1.5486215E-07 ;
SYSTEM.COLUMN.DPDRY(8) := 1.4914144E-07 ;
SYSTEM.COLUMN.DPDRY(9) := 1.4367913E-07 ;
SYSTEM.COLUMN.DPDRY(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(1,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.X(1,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(1,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(1,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(2,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(2,1) := 1.0816707E-21 ;
SYSTEM.COLUMN.X(2,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(2,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(2,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(2,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(2,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(2,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(3,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(3,1) := -1.5264713E-22 ;
SYSTEM.COLUMN.X(3,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(3,2) := 2.4869401E-23 ;
SYSTEM.COLUMN.X(3,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(3,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(3,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(3,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(4,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(4,1) := 2.1141297E-21 ;
SYSTEM.COLUMN.X(4,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(4,2) := -2.1385140E-23 ;
SYSTEM.COLUMN.X(4,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(4,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(4,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(4,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(5,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(5,1) := -1.2429451E-21 ;
SYSTEM.COLUMN.X(5,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(5,2) := -3.2446314E-24 ;
SYSTEM.COLUMN.X(5,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(5,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(5,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(5,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(6,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(6,1) := 8.0887492E-22 ;
SYSTEM.COLUMN.X(6,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(6,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(6,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(6,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(6,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(6,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(7,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(7,1) := -6.1828866E-22 ;
SYSTEM.COLUMN.X(7,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(7,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(7,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(7,3) := 4.1248869E-24 ;
SYSTEM.COLUMN.X(7,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(7,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(8,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(8,1) := 1.2278487E-21 ;
SYSTEM.COLUMN.X(8,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(8,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(8,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(8,3) := -4.1271025E-24 ;
SYSTEM.COLUMN.X(8,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(8,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(9,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(9,1) := 1.0411440E-21 ;
SYSTEM.COLUMN.X(9,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(9,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(9,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(9,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(9,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(9,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(10,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(10,1) := -8.9506882E-23 ;
SYSTEM.COLUMN.X(10,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(10,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(10,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(10,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(10,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(10,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(1,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(1,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(1,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(1,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(2,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(2,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(2,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(2,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(3,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(3,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(3,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(3,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(4,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(4,2) := 3.3822172E-24 ;
SYSTEM.COLUMN.Y(4,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(4,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(5,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(5,2) := 5.2878473E-25 ;
SYSTEM.COLUMN.Y(5,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(5,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(6,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(6,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(6,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(6,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(7,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(7,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(7,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(7,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(8,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(8,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(8,3) := 5.8123352E-26 ;
SYSTEM.COLUMN.Y(8,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(9,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(9,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(9,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(9,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(10,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(10,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(10,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(10,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEEDENTHALPY := -4.5983100E+05 ;
SYSTEM.COLUMN.MWL(1) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(2) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(3) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(4) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(5) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(6) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(7) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(8) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(9) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(10) := 7.6000000E+01 ;
SYSTEM.COLUMN.TOTALHOLDUP := 4.6586382E+01 ;
SYSTEM.COLUMN.$TOTALHOLDUP := 0.0000000E+00 ;
SYSTEM.COLUMN.QC := 7.9357010E+05 ;
SYSTEM.MAKEUPENTHALPY.TEMP := 3.0000000E+02 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_VAPOR(1) := 3.4460000E+02 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_VAPOR(2) := 4.0000000E+02 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_VAPOR(3) := 3.2000000E+02 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_VAPOR(4) := 3.1000000E+02 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_LIQUID(1) := -3.0655400E+04 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_LIQUID(2) := -2.5600000E+04 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_LIQUID(3) := -2.7680000E+04 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_LIQUID(4) := -3.3690000E+04 ;
SYSTEM.REACTOR.TOTALMOLS := 5.0000000E+01 ;
SYSTEM.REACTOR.$TOTALMOLS := 0.0000000E+00 ;
SYSTEM.REACTOR.TEMP := 3.0000000E+02 ;
SYSTEM.REACTOR.ENTHALPY := -1.5327700E+06 ;
SYSTEM.REACTOR.$ENTHALPY := -4.3344954E+06 ;
SYSTEM.REACTOR.TOTALFEED := 1.5000000E+01 ;
SYSTEM.REACTOR.QJACKET := 4.8152866E+04 ;
SYSTEM.REACTOR.NO_MOLS(1) := 5.0000000E+01 ;
SYSTEM.REACTOR.$NO_MOLS(1) := -7.5000000E+00 ;
SYSTEM.REACTOR.NO_MOLS(2) := 0.0000000E+00 ;
SYSTEM.REACTOR.$NO_MOLS(2) := 7.5000000E+00 ;
SYSTEM.REACTOR.NO_MOLS(3) := 0.0000000E+00 ;
SYSTEM.REACTOR.$NO_MOLS(3) := 0.0000000E+00 ;
SYSTEM.REACTOR.NO_MOLS(4) := 0.0000000E+00 ;
SYSTEM.REACTOR.$NO_MOLS(4) := 0.0000000E+00 ;
SYSTEM.REACTOR.VOLUME := 4.5454545E+00 ;
SYSTEM.REACTOR.CONCENTRATION(1) := 1.1000000E+01 ;
SYSTEM.REACTOR.CONCENTRATION(2) := 0.0000000E+00 ;
SYSTEM.REACTOR.CONCENTRATION(3) := 0.0000000E+00 ;
SYSTEM.REACTOR.CONCENTRATION(4) := 0.0000000E+00 ;
SYSTEM.REACTOR.FEED_A := 7.5000000E+00 ;
SYSTEM.REACTOR.FEED_B := 7.5000000E+00 ;
SYSTEM.REACTOR.FEED_C := 0.0000000E+00 ;
SYSTEM.REACTOR.X(1) := 1.0000000E+00 ;
SYSTEM.REACTOR.X(2) := 0.0000000E+00 ;
SYSTEM.REACTOR.X(3) := 0.0000000E+00 ;
SYSTEM.REACTOR.X(4) := 0.0000000E+00 ;
SYSTEM.REACTOR.FEED_D := 0.0000000E+00 ;
SYSTEM.REACTOR.FEEDENTHALPY(1) := -5.0553546E+05 ;
SYSTEM.REACTOR.FEEDENTHALPY(2) := -1.9395435E+05 ;
SYSTEM.REACTOR.FEEDENTHALPY(3) := -2.0971314E+05 ;
SYSTEM.REACTOR.FEEDENTHALPY(4) := -2.5524695E+05 ;
SYSTEM.REACTOR.SPECIFIC_ENTHALPY(1) := -3.0655400E+04 ;
SYSTEM.REACTOR.SPECIFIC_ENTHALPY(2) := -2.5600000E+04 ;
SYSTEM.REACTOR.SPECIFIC_ENTHALPY(3) := -2.7680000E+04 ;
SYSTEM.REACTOR.SPECIFIC_ENTHALPY(4) := -3.3690000E+04 ;
SYSTEM.REACTOR.REACTIONRATE(1) := 0.0000000E+00 ;
SYSTEM.REACTOR.REACTIONRATE(2) := 0.0000000E+00 ;
SYSTEM.VLE.VAPORENTHALPY(1) := 9.6490429E+03 ;
SYSTEM.VLE.VAPORENTHALPY(2) := 9.6493013E+03 ;
SYSTEM.VLE.VAPORENTHALPY(3) := 9.7613871E+03 ;
SYSTEM.VLE.VAPORENTHALPY(4) := 9.8713496E+03 ;
SYSTEM.VLE.VAPORENTHALPY(5) := 9.9792593E+03 ;
SYSTEM.VLE.VAPORENTHALPY(6) := 1.0094036E+04 ;
SYSTEM.VLE.VAPORENTHALPY(7) := 1.0206730E+04 ;
SYSTEM.VLE.VAPORENTHALPY(8) := 1.0317422E+04 ;
SYSTEM.VLE.VAPORENTHALPY(9) := 1.0426188E+04 ;
SYSTEM.VLE.VAPORENTHALPY(10) := 1.0533100E+04 ;
SYSTEM.VLE.TEMP(1) := 3.5400141E+02 ;
SYSTEM.VLE.TEMP(2) := 3.5400291E+02 ;
SYSTEM.VLE.TEMP(3) := 3.5465344E+02 ;
SYSTEM.VLE.TEMP(4) := 3.5529164E+02 ;
SYSTEM.VLE.TEMP(5) := 3.5591793E+02 ;
SYSTEM.VLE.TEMP(6) := 3.5658408E+02 ;
SYSTEM.VLE.TEMP(7) := 3.5723813E+02 ;
SYSTEM.VLE.TEMP(8) := 3.5788057E+02 ;
SYSTEM.VLE.TEMP(9) := 3.5851183E+02 ;
SYSTEM.VLE.TEMP(10) := 3.5913233E+02 ;
SYSTEM.VLE.LIQUIDENTHALPY(1) := -2.1350957E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(2) := -2.1350699E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(3) := -2.1238613E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(4) := -2.1128650E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(5) := -2.1020741E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(6) := -2.0905964E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(7) := -2.0793270E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(8) := -2.0682578E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(9) := -2.0573812E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(10) := -2.0466900E+04 ;
SYSTEM.VLE.K(1,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(1,2) := 1.1186637E+00 ;
SYSTEM.VLE.K(1,3) := 3.6783189E-01 ;
SYSTEM.VLE.K(1,4) := 2.0422134E-01 ;
SYSTEM.VLE.K(2,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(2,2) := 1.1186548E+00 ;
SYSTEM.VLE.K(2,3) := 3.6782906E-01 ;
SYSTEM.VLE.K(2,4) := 2.0422232E-01 ;
SYSTEM.VLE.K(3,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(3,2) := 1.1147884E+00 ;
SYSTEM.VLE.K(3,3) := 3.6660608E-01 ;
SYSTEM.VLE.K(3,4) := 2.0464705E-01 ;
SYSTEM.VLE.K(4,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(4,2) := 1.1110220E+00 ;
SYSTEM.VLE.K(4,3) := 3.6541455E-01 ;
SYSTEM.VLE.K(4,4) := 2.0506307E-01 ;
SYSTEM.VLE.K(5,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(5,2) := 1.1073514E+00 ;
SYSTEM.VLE.K(5,3) := 3.6425316E-01 ;
SYSTEM.VLE.K(5,4) := 2.0547070E-01 ;
SYSTEM.VLE.K(6,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(6,2) := 1.1034745E+00 ;
SYSTEM.VLE.K(6,3) := 3.6302637E-01 ;
SYSTEM.VLE.K(6,4) := 2.0590358E-01 ;
SYSTEM.VLE.K(7,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(7,2) := 1.0996952E+00 ;
SYSTEM.VLE.K(7,3) := 3.6183030E-01 ;
SYSTEM.VLE.K(7,4) := 2.0632791E-01 ;
SYSTEM.VLE.K(8,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(8,2) := 1.0960090E+00 ;
SYSTEM.VLE.K(8,3) := 3.6066354E-01 ;
SYSTEM.VLE.K(8,4) := 2.0674404E-01 ;
SYSTEM.VLE.K(9,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(9,2) := 1.0924118E+00 ;
SYSTEM.VLE.K(9,3) := 3.5952480E-01 ;
SYSTEM.VLE.K(9,4) := 2.0715229E-01 ;
SYSTEM.VLE.K(10,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(10,2) := 1.0888996E+00 ;
SYSTEM.VLE.K(10,3) := 3.5841286E-01 ;
SYSTEM.VLE.K(10,4) := 2.0755297E-01 ;
SYSTEM.VLE.VAPORPRESSURE(1,1) := 1.0132500E+00 ;
SYSTEM.VLE.VAPORPRESSURE(1,2) := 1.1334860E+00 ;
SYSTEM.VLE.VAPORPRESSURE(1,3) := 3.7270566E-01 ;
SYSTEM.VLE.VAPORPRESSURE(1,4) := 2.0692728E-01 ;
SYSTEM.VLE.VAPORPRESSURE(2,1) := 1.0133002E+00 ;
SYSTEM.VLE.VAPORPRESSURE(2,2) := 1.1335331E+00 ;
SYSTEM.VLE.VAPORPRESSURE(2,3) := 3.7272127E-01 ;
SYSTEM.VLE.VAPORPRESSURE(2,4) := 2.0693853E-01 ;
SYSTEM.VLE.VAPORPRESSURE(3,1) := 1.0352865E+00 ;
SYSTEM.VLE.VAPORPRESSURE(3,2) := 1.1541254E+00 ;
SYSTEM.VLE.VAPORPRESSURE(3,3) := 3.7954232E-01 ;
SYSTEM.VLE.VAPORPRESSURE(3,4) := 2.1186832E-01 ;
SYSTEM.VLE.VAPORPRESSURE(4,1) := 1.0572390E+00 ;
SYSTEM.VLE.VAPORPRESSURE(4,2) := 1.1746158E+00 ;
SYSTEM.VLE.VAPORPRESSURE(4,3) := 3.8633050E-01 ;
SYSTEM.VLE.VAPORPRESSURE(4,4) := 2.1680067E-01 ;
SYSTEM.VLE.VAPORPRESSURE(5,1) := 1.0791551E+00 ;
SYSTEM.VLE.VAPORPRESSURE(5,2) := 1.1950039E+00 ;
SYSTEM.VLE.VAPORPRESSURE(5,3) := 3.9308568E-01 ;
SYSTEM.VLE.VAPORPRESSURE(5,4) := 2.2173476E-01 ;
SYSTEM.VLE.VAPORPRESSURE(6,1) := 1.1028776E+00 ;
SYSTEM.VLE.VAPORPRESSURE(6,2) := 1.2169973E+00 ;
SYSTEM.VLE.VAPORPRESSURE(6,3) := 4.0037366E-01 ;
SYSTEM.VLE.VAPORPRESSURE(6,4) := 2.2708645E-01 ;
SYSTEM.VLE.VAPORPRESSURE(7,1) := 1.1265880E+00 ;
SYSTEM.VLE.VAPORPRESSURE(7,2) := 1.2389034E+00 ;
SYSTEM.VLE.VAPORPRESSURE(7,3) := 4.0763366E-01 ;
SYSTEM.VLE.VAPORPRESSURE(7,4) := 2.3244654E-01 ;
SYSTEM.VLE.VAPORPRESSURE(8,1) := 1.1502862E+00 ;
SYSTEM.VLE.VAPORPRESSURE(8,2) := 1.2607240E+00 ;
SYSTEM.VLE.VAPORPRESSURE(8,3) := 4.1486630E-01 ;
SYSTEM.VLE.VAPORPRESSURE(8,4) := 2.3781482E-01 ;
SYSTEM.VLE.VAPORPRESSURE(9,1) := 1.1739724E+00 ;
SYSTEM.VLE.VAPORPRESSURE(9,2) := 1.2824613E+00 ;
SYSTEM.VLE.VAPORPRESSURE(9,3) := 4.2207220E-01 ;
SYSTEM.VLE.VAPORPRESSURE(9,4) := 2.4319108E-01 ;
SYSTEM.VLE.VAPORPRESSURE(10,1) := 1.1976465E+00 ;
SYSTEM.VLE.VAPORPRESSURE(10,2) := 1.3041169E+00 ;
SYSTEM.VLE.VAPORPRESSURE(10,3) := 4.2925192E-01 ;
SYSTEM.VLE.VAPORPRESSURE(10,4) := 2.4857510E-01 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,1) := 9.6490429E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,2) := 1.1200282E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,3) := 8.9602256E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,4) := 8.6802185E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,1) := 9.6493013E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,2) := 1.1200582E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,3) := 8.9604655E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,4) := 8.6804509E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,1) := 9.7613871E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,2) := 1.1330687E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,3) := 9.0645499E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,4) := 8.7812827E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,1) := 9.8713496E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,2) := 1.1458328E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,3) := 9.1666624E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,4) := 8.8802042E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,1) := 9.9792593E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,2) := 1.1583586E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,3) := 9.2668688E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,4) := 8.9772791E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,1) := 1.0094036E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,2) := 1.1716815E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,3) := 9.3734520E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,4) := 9.0805316E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,1) := 1.0206730E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,2) := 1.1847626E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,3) := 9.4781009E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,4) := 9.1819102E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,1) := 1.0317422E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,2) := 1.1976113E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,3) := 9.5808907E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,4) := 9.2814879E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,1) := 1.0426188E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,2) := 1.2102365E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,3) := 9.6818924E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,4) := 9.3793332E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,1) := 1.0533100E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,2) := 1.2226466E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,3) := 9.7811726E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,4) := 9.4755110E+03 ;
SYSTEM.VLE.P(1) := 1.0132500E+00 ;
SYSTEM.VLE.P(2) := 1.0133002E+00 ;
SYSTEM.VLE.P(3) := 1.0352865E+00 ;
SYSTEM.VLE.P(4) := 1.0572390E+00 ;
SYSTEM.VLE.P(5) := 1.0791551E+00 ;
SYSTEM.VLE.P(6) := 1.1028776E+00 ;
SYSTEM.VLE.P(7) := 1.1265880E+00 ;
SYSTEM.VLE.P(8) := 1.1502862E+00 ;
SYSTEM.VLE.P(9) := 1.1739724E+00 ;
SYSTEM.VLE.P(10) := 1.1976465E+00 ;
SYSTEM.VLE.X(1,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(1,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(1,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(1,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(2,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(2,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(2,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(2,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(3,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(3,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(3,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(3,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(4,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(4,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(4,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(4,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(5,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(5,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(5,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(5,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(6,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(6,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(6,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(6,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(7,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(7,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(7,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(7,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(8,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(8,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(8,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(8,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(9,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(9,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(9,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(9,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(10,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(10,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(10,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(10,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(1,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(1,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(1,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(1,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(2,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(2,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(2,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(2,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(3,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(3,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(3,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(3,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(4,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(4,2) := 3.3822172E-24 ;
SYSTEM.VLE.Y(4,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(4,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(5,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(5,2) := 5.2878473E-25 ;
SYSTEM.VLE.Y(5,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(5,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(6,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(6,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(6,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(6,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(7,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(7,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(7,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(7,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(8,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(8,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(8,3) := 5.8123352E-26 ;
SYSTEM.VLE.Y(8,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(9,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(9,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(9,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(9,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(10,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(10,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(10,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(10,4) := 0.0000000E+00 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,1) := -2.1350957E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,2) := -1.4799718E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,3) := -1.9039774E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,4) := -2.5319781E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,1) := -2.1350699E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,2) := -1.4799418E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,3) := -1.9039535E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,4) := -2.5319549E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,1) := -2.1238613E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,2) := -1.4669313E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,3) := -1.8935450E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,4) := -2.5218717E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,1) := -2.1128650E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,2) := -1.4541672E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,3) := -1.8833338E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,4) := -2.5119796E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,1) := -2.1020741E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,2) := -1.4416414E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,3) := -1.8733131E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,4) := -2.5022721E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,1) := -2.0905964E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,2) := -1.4283185E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,3) := -1.8626548E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,4) := -2.4919468E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,1) := -2.0793270E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,2) := -1.4152374E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,3) := -1.8521899E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,4) := -2.4818090E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,1) := -2.0682578E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,2) := -1.4023887E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,3) := -1.8419109E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,4) := -2.4718512E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,1) := -2.0573812E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,2) := -1.3897635E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,3) := -1.8318108E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,4) := -2.4620667E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,1) := -2.0466900E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,2) := -1.3773534E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,3) := -1.8218827E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,4) := -2.4524489E+04 ;
SYSTEM.JACKET.TWATER_OUT := 2.9946497E+02 ;
SYSTEM.JACKET.QJACKET := 4.8152866E+04 ;
SYSTEM.JACKET.TEMP_REACTOR := 3.0000000E+02 ;
SYSTEM.LIQENTHALPY.TEMP := 3.0000000E+02 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_VAPOR(1) := 3.4460000E+02 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_VAPOR(2) := 4.0000000E+02 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_VAPOR(3) := 3.2000000E+02 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_VAPOR(4) := 3.1000000E+02 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_LIQUID(1) := -3.0655400E+04 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_LIQUID(2) := -2.5600000E+04 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_LIQUID(3) := -2.7680000E+04 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_LIQUID(4) := -3.3690000E+04 ;
INITIAL
WITHIN System.Reactor DO
# No_Mols(A) = 22.27 ;
# No_Mols(B) = 36.81 ;
No_Mols(A) = 50 ;
No_Mols(B) = 0 ;
No_Mols(C) = 0.0 ;
No_Mols(D) = 0.0 ;
Temp=300 ;
END
WITHIN System.Column DO
FOR I:=2 TO NSTAGE DO
X(I,D)= 0.0 ;
X(I,B)= 0.0 ;
X(I,C)= 0.0 ;
END
TotalHoldup=4.6586382E+01 ;
M(2)=2.9237071E+00 ;
M(3)=2.9189519E+00 ;
M(4)=2.9138424E+00 ;
M(5)=3.1682045E+00 ;
M(6)=3.1664972E+00 ;
M(7)=3.1647940E+00 ;
M(8)=3.1630950E+00 ;
M(9)=3.1614000E+00 ;
FOR I:=2 TO NSTAGE DO
SIGMA(X(I,))=1 ;
#SIGMA(K(I,)*X(I,))=1 ;
END
END
SCHEDULE
CONTINUE FOR 100
END
OPTIMIZATION OptBoth
UNIT
System As BothFlowSheet
VARIABLE
Mols_A_OUT AS MolarFlowRate
Mols_B_OUT AS MolarFlowRate
Mols_C_OUT AS MolarFlowRate
Mols_D_OUT AS MolarFlowRate
ChangeMols_A_Out AS Value
ChangeMols_B_Out AS Value
ChangeMols_C_Out AS Value
ChangeMols_D_Out AS Value
Total_C_OUT AS MolarHoldup
Total_D_OUT AS MolarHoldup
#Final_Time AS Value
OBJECTIVE
MAXIMIZE Total_C_OUT-0.1*Total_D_OUT;
#MAXIMIZE Total_C_OUT;
#MINIMIZE Final_Time ;
#MINIMIZE 100000*(ChangeMols_A_Out^2 +ChangeMols_B_Out^2+
# ChangeMols_C_Out^2 +ChangeMols_D_Out^2);
#MINIMIZE Final_Time+ 100000*(ChangeMols_A_Out^2 +ChangeMols_B_Out^2+
# ChangeMols_C_Out^2 +ChangeMols_D_Out^2+
# System.Reactor.$No_Mols(1)^2+System.Reactor.$No_Mols(2)^2+
# System.Reactor.$No_Mols(3)^2+System.Reactor.$No_Mols(4)^2);
EQUATION
ChangeMols_A_Out= $Mols_A_OUT ;
ChangeMols_B_Out= $Mols_B_OUT ;
ChangeMols_C_Out= $Mols_C_OUT ;
ChangeMols_D_Out= $Mols_D_OUT ;
Mols_A_OUT=System.Column.BOTTOMS*System.Column.X(System.Column.NSTAGE,1) ;
Mols_B_OUT=System.Column.BOTTOMS*System.Column.X(System.Column.NSTAGE,2) ;
Mols_C_OUT=System.Column.BOTTOMS*System.Column.X(System.Column.NSTAGE,3) ;
Mols_D_OUT=System.Column.BOTTOMS*System.Column.X(System.Column.NSTAGE,4) ;
$Total_C_OUT=Mols_C_OUT;
$Total_D_OUT=Mols_D_OUT;
INEQUALITY
System.Jacket.Temp_Reactor<=400 ;
INPUT
WITHIN System.Reactor DO
FlowOut := 15 ;
FeedTemp := 300 ;
END
WITHIN System.Column DO
P(1) :=1.01325 ;
# REFLUXRATIO :=0.5 ;
END
WITHIN System DO
BFraction := .15 ;
END
CONTROL
WITHIN System.Jacket DO
Flow_Water := 3 : 0 : 5 ;
END
TIME_INVARIANT
WITHIN System.Column DO
REFLUXRATIO := 0.5 : 0.4 : 0.83 ;
END
#Final_Time:=90:71:500 ;
PRESET
##############################
# Values of All Active Variables #
##############################
SYSTEM.MAKEUP(1) := 7.6341614E-02 ;
SYSTEM.MAKEUP(2) := 7.5000000E+00 ;
SYSTEM.MAKEUP(3) := 0.0000000E+00 ;
SYSTEM.MAKEUP(4) := 0.0000000E+00 ;
SYSTEM.COLUMN.MWV(1) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(2) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(3) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(4) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(5) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(6) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(7) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(8) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(9) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWV(10) := 7.6000000E+01 ;
SYSTEM.COLUMN.BOTTOMS := 2.2005890E+00 ;
SYSTEM.COLUMN.VAPORENTHALPY(1) := 9.6490429E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(2) := 9.6493013E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(3) := 9.7613871E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(4) := 9.8713496E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(5) := 9.9792593E+03 ;
SYSTEM.COLUMN.VAPORENTHALPY(6) := 1.0094036E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(7) := 1.0206730E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(8) := 1.0317422E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(9) := 1.0426188E+04 ;
SYSTEM.COLUMN.VAPORENTHALPY(10) := 1.0533100E+04 ;
SYSTEM.COLUMN.LIQMDENS(1) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(2) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(3) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(4) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(5) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(6) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(7) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(8) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(9) := 8.3600000E+02 ;
SYSTEM.COLUMN.LIQMDENS(10) := 8.3600000E+02 ;
SYSTEM.COLUMN.VAPMDENS(1) := 2.6164666E+00 ;
SYSTEM.COLUMN.VAPMDENS(2) := 2.6165852E+00 ;
SYSTEM.COLUMN.VAPMDENS(3) := 2.6684553E+00 ;
SYSTEM.COLUMN.VAPMDENS(4) := 2.7201430E+00 ;
SYSTEM.COLUMN.VAPMDENS(5) := 2.7716449E+00 ;
SYSTEM.COLUMN.VAPMDENS(6) := 2.8272809E+00 ;
SYSTEM.COLUMN.VAPMDENS(7) := 2.8827759E+00 ;
SYSTEM.COLUMN.VAPMDENS(8) := 2.9381325E+00 ;
SYSTEM.COLUMN.VAPMDENS(9) := 2.9933533E+00 ;
SYSTEM.COLUMN.VAPMDENS(10) := 3.0484406E+00 ;
SYSTEM.COLUMN.QR := 6.4072745E+06 ;
SYSTEM.COLUMN.DPSTAT(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.DPSTAT(2) := 2.0759991E-02 ;
SYSTEM.COLUMN.DPSTAT(3) := 2.0726226E-02 ;
SYSTEM.COLUMN.DPSTAT(4) := 2.0689946E-02 ;
SYSTEM.COLUMN.DPSTAT(5) := 2.2496062E-02 ;
SYSTEM.COLUMN.DPSTAT(6) := 2.2483940E-02 ;
SYSTEM.COLUMN.DPSTAT(7) := 2.2471846E-02 ;
SYSTEM.COLUMN.DPSTAT(8) := 2.2459782E-02 ;
SYSTEM.COLUMN.DPSTAT(9) := 2.2447747E-02 ;
SYSTEM.COLUMN.DPSTAT(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.XFEED(1) := 1.0000000E+00 ;
SYSTEM.COLUMN.XFEED(2) := 0.0000000E+00 ;
SYSTEM.COLUMN.XFEED(3) := 0.0000000E+00 ;
SYSTEM.COLUMN.XFEED(4) := 0.0000000E+00 ;
SYSTEM.COLUMN.DPTRAY(1) := 5.0229571E-05 ;
SYSTEM.COLUMN.DPTRAY(2) := 2.1986243E-02 ;
SYSTEM.COLUMN.DPTRAY(3) := 2.1952478E-02 ;
SYSTEM.COLUMN.DPTRAY(4) := 2.1916197E-02 ;
SYSTEM.COLUMN.DPTRAY(5) := 2.3722479E-02 ;
SYSTEM.COLUMN.DPTRAY(6) := 2.3710350E-02 ;
SYSTEM.COLUMN.DPTRAY(7) := 2.3698251E-02 ;
SYSTEM.COLUMN.DPTRAY(8) := 2.3686181E-02 ;
SYSTEM.COLUMN.DPTRAY(9) := 2.3674140E-02 ;
SYSTEM.COLUMN.DPTRAY(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(1) := -2.1350957E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(2) := -2.1350699E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(3) := -2.1238613E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(4) := -2.1128650E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(5) := -2.1020741E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(6) := -2.0905964E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(7) := -2.0793270E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(8) := -2.0682578E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(9) := -2.0573812E+04 ;
SYSTEM.COLUMN.LIQUIDENTHALPY(10) := -2.0466900E+04 ;
SYSTEM.COLUMN.LIQHEIGHT(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.LIQHEIGHT(2) := 2.5313481E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(3) := 2.5272311E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(4) := 2.5228073E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(5) := 2.7430342E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(6) := 2.7415560E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(7) := 2.7400814E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(8) := 2.7386104E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(9) := 2.7371429E-01 ;
SYSTEM.COLUMN.LIQHEIGHT(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.HEAD(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.HEAD(2) := 3.1348139E+00 ;
SYSTEM.COLUMN.HEAD(3) := 2.7231082E+00 ;
SYSTEM.COLUMN.HEAD(4) := 2.2807273E+00 ;
SYSTEM.COLUMN.HEAD(5) := 2.4303420E+01 ;
SYSTEM.COLUMN.HEAD(6) := 2.4155602E+01 ;
SYSTEM.COLUMN.HEAD(7) := 2.4008139E+01 ;
SYSTEM.COLUMN.HEAD(8) := 2.3861039E+01 ;
SYSTEM.COLUMN.HEAD(9) := 2.3714286E+01 ;
SYSTEM.COLUMN.HEAD(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.K(1,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(1,2) := 1.1186637E+00 ;
SYSTEM.COLUMN.K(1,3) := 3.6783189E-01 ;
SYSTEM.COLUMN.K(1,4) := 2.0422134E-01 ;
SYSTEM.COLUMN.K(2,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(2,2) := 1.1186548E+00 ;
SYSTEM.COLUMN.K(2,3) := 3.6782906E-01 ;
SYSTEM.COLUMN.K(2,4) := 2.0422232E-01 ;
SYSTEM.COLUMN.K(3,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(3,2) := 1.1147884E+00 ;
SYSTEM.COLUMN.K(3,3) := 3.6660608E-01 ;
SYSTEM.COLUMN.K(3,4) := 2.0464705E-01 ;
SYSTEM.COLUMN.K(4,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(4,2) := 1.1110220E+00 ;
SYSTEM.COLUMN.K(4,3) := 3.6541455E-01 ;
SYSTEM.COLUMN.K(4,4) := 2.0506307E-01 ;
SYSTEM.COLUMN.K(5,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(5,2) := 1.1073514E+00 ;
SYSTEM.COLUMN.K(5,3) := 3.6425316E-01 ;
SYSTEM.COLUMN.K(5,4) := 2.0547070E-01 ;
SYSTEM.COLUMN.K(6,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(6,2) := 1.1034745E+00 ;
SYSTEM.COLUMN.K(6,3) := 3.6302637E-01 ;
SYSTEM.COLUMN.K(6,4) := 2.0590358E-01 ;
SYSTEM.COLUMN.K(7,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(7,2) := 1.0996952E+00 ;
SYSTEM.COLUMN.K(7,3) := 3.6183030E-01 ;
SYSTEM.COLUMN.K(7,4) := 2.0632791E-01 ;
SYSTEM.COLUMN.K(8,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(8,2) := 1.0960090E+00 ;
SYSTEM.COLUMN.K(8,3) := 3.6066354E-01 ;
SYSTEM.COLUMN.K(8,4) := 2.0674404E-01 ;
SYSTEM.COLUMN.K(9,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(9,2) := 1.0924118E+00 ;
SYSTEM.COLUMN.K(9,3) := 3.5952480E-01 ;
SYSTEM.COLUMN.K(9,4) := 2.0715229E-01 ;
SYSTEM.COLUMN.K(10,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.K(10,2) := 1.0888996E+00 ;
SYSTEM.COLUMN.K(10,3) := 3.5841286E-01 ;
SYSTEM.COLUMN.K(10,4) := 2.0755297E-01 ;
SYSTEM.COLUMN.L(1) := 1.2799411E+01 ;
SYSTEM.COLUMN.L(2) := 1.0700915E+01 ;
SYSTEM.COLUMN.L(3) := 8.6636449E+00 ;
SYSTEM.COLUMN.L(4) := 6.6406918E+00 ;
SYSTEM.COLUMN.L(5) := 2.3099596E+02 ;
SYSTEM.COLUMN.L(6) := 2.2889172E+02 ;
SYSTEM.COLUMN.L(7) := 2.2679894E+02 ;
SYSTEM.COLUMN.L(8) := 2.2471772E+02 ;
SYSTEM.COLUMN.L(9) := 2.2264778E+02 ;
SYSTEM.COLUMN.L(10) := 2.2005890E+00 ;
SYSTEM.COLUMN.FEED(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(2) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(3) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(4) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(5) := 1.5000000E+01 ;
SYSTEM.COLUMN.FEED(6) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(7) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(8) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(9) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEED(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.M(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.$M(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.M(2) := 2.9237071E+00 ;
SYSTEM.COLUMN.$M(2) := -5.8431630E-05 ;
SYSTEM.COLUMN.M(3) := 2.9189519E+00 ;
SYSTEM.COLUMN.$M(3) := 1.9291184E-05 ;
SYSTEM.COLUMN.M(4) := 2.9138424E+00 ;
SYSTEM.COLUMN.$M(4) := 8.9519949E-05 ;
SYSTEM.COLUMN.M(5) := 3.1682045E+00 ;
SYSTEM.COLUMN.$M(5) := -1.1165674E-04 ;
SYSTEM.COLUMN.M(6) := 3.1664972E+00 ;
SYSTEM.COLUMN.$M(6) := -1.0041827E-04 ;
SYSTEM.COLUMN.M(7) := 3.1647940E+00 ;
SYSTEM.COLUMN.$M(7) := 3.0944158E-04 ;
SYSTEM.COLUMN.M(8) := 3.1630950E+00 ;
SYSTEM.COLUMN.$M(8) := -5.6979389E-05 ;
SYSTEM.COLUMN.M(9) := 3.1614000E+00 ;
SYSTEM.COLUMN.$M(9) := -7.1432749E-05 ;
SYSTEM.COLUMN.M(10) := 2.2005890E+01 ;
SYSTEM.COLUMN.$M(10) := -1.9333939E-05 ;
SYSTEM.COLUMN.LIQDENS(1) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(2) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(3) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(4) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(5) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(6) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(7) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(8) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(9) := 1.1000000E+01 ;
SYSTEM.COLUMN.LIQDENS(10) := 1.1000000E+01 ;
SYSTEM.COLUMN.VAPDENS(1) := 3.4427192E-02 ;
SYSTEM.COLUMN.VAPDENS(2) := 3.4428753E-02 ;
SYSTEM.COLUMN.VAPDENS(3) := 3.5111254E-02 ;
SYSTEM.COLUMN.VAPDENS(4) := 3.5791355E-02 ;
SYSTEM.COLUMN.VAPDENS(5) := 3.6469012E-02 ;
SYSTEM.COLUMN.VAPDENS(6) := 3.7201064E-02 ;
SYSTEM.COLUMN.VAPDENS(7) := 3.7931261E-02 ;
SYSTEM.COLUMN.VAPDENS(8) := 3.8659638E-02 ;
SYSTEM.COLUMN.VAPDENS(9) := 3.9386227E-02 ;
SYSTEM.COLUMN.VAPDENS(10) := 4.0111061E-02 ;
SYSTEM.COLUMN.P(2) := 1.0133002E+00 ;
SYSTEM.COLUMN.P(3) := 1.0352865E+00 ;
SYSTEM.COLUMN.P(4) := 1.0572390E+00 ;
SYSTEM.COLUMN.P(5) := 1.0791551E+00 ;
SYSTEM.COLUMN.P(6) := 1.1028776E+00 ;
SYSTEM.COLUMN.P(7) := 1.1265880E+00 ;
SYSTEM.COLUMN.P(8) := 1.1502862E+00 ;
SYSTEM.COLUMN.P(9) := 1.1739724E+00 ;
SYSTEM.COLUMN.P(10) := 1.1976465E+00 ;
SYSTEM.COLUMN.DISTILLOUT := 1.2799411E+01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(2) := 2.6579155E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(3) := 2.6535926E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(4) := 2.6489476E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(5) := 2.8801859E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(6) := 2.8786338E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(7) := 2.8770855E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(8) := 2.8755409E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(9) := 2.8740000E-01 ;
SYSTEM.COLUMN.VOLUMEHOLDUP(10) := 2.0005354E+00 ;
SYSTEM.COLUMN.T(1) := 3.5400141E+02 ;
SYSTEM.COLUMN.T(2) := 3.5400291E+02 ;
SYSTEM.COLUMN.T(3) := 3.5465344E+02 ;
SYSTEM.COLUMN.T(4) := 3.5529164E+02 ;
SYSTEM.COLUMN.T(5) := 3.5591793E+02 ;
SYSTEM.COLUMN.T(6) := 3.5658408E+02 ;
SYSTEM.COLUMN.T(7) := 3.5723813E+02 ;
SYSTEM.COLUMN.T(8) := 3.5788057E+02 ;
SYSTEM.COLUMN.T(9) := 3.5851183E+02 ;
SYSTEM.COLUMN.T(10) := 3.5913233E+02 ;
SYSTEM.COLUMN.V(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.V(2) := 2.5598822E+01 ;
SYSTEM.COLUMN.V(3) := 2.3500268E+01 ;
SYSTEM.COLUMN.V(4) := 2.1463017E+01 ;
SYSTEM.COLUMN.V(5) := 1.9440153E+01 ;
SYSTEM.COLUMN.V(6) := 2.2879531E+02 ;
SYSTEM.COLUMN.V(7) := 2.2669097E+02 ;
SYSTEM.COLUMN.V(8) := 2.2459850E+02 ;
SYSTEM.COLUMN.V(9) := 2.2251722E+02 ;
SYSTEM.COLUMN.V(10) := 2.2044721E+02 ;
#SYSTEM.COLUMN.FEEDTEMP := 3.0000000E+02 ;
SYSTEM.COLUMN.DPDRY(1) := 0.0000000E+00 ;
SYSTEM.COLUMN.DPDRY(2) := 1.8678982E-09 ;
SYSTEM.COLUMN.DPDRY(3) := 1.5277914E-09 ;
SYSTEM.COLUMN.DPDRY(4) := 1.2295610E-09 ;
SYSTEM.COLUMN.DPDRY(5) := 1.6714719E-07 ;
SYSTEM.COLUMN.DPDRY(6) := 1.6085773E-07 ;
SYSTEM.COLUMN.DPDRY(7) := 1.5486215E-07 ;
SYSTEM.COLUMN.DPDRY(8) := 1.4914144E-07 ;
SYSTEM.COLUMN.DPDRY(9) := 1.4367913E-07 ;
SYSTEM.COLUMN.DPDRY(10) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(1,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.X(1,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(1,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(1,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(2,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(2,1) := 1.0816707E-21 ;
SYSTEM.COLUMN.X(2,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(2,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(2,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(2,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(2,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(2,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(3,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(3,1) := -1.5264713E-22 ;
SYSTEM.COLUMN.X(3,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(3,2) := 2.4869401E-23 ;
SYSTEM.COLUMN.X(3,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(3,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(3,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(3,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(4,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(4,1) := 2.1141297E-21 ;
SYSTEM.COLUMN.X(4,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(4,2) := -2.1385140E-23 ;
SYSTEM.COLUMN.X(4,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(4,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(4,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(4,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(5,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(5,1) := -1.2429451E-21 ;
SYSTEM.COLUMN.X(5,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(5,2) := -3.2446314E-24 ;
SYSTEM.COLUMN.X(5,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(5,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(5,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(5,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(6,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(6,1) := 8.0887492E-22 ;
SYSTEM.COLUMN.X(6,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(6,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(6,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(6,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(6,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(6,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(7,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(7,1) := -6.1828866E-22 ;
SYSTEM.COLUMN.X(7,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(7,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(7,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(7,3) := 4.1248869E-24 ;
SYSTEM.COLUMN.X(7,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(7,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(8,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(8,1) := 1.2278487E-21 ;
SYSTEM.COLUMN.X(8,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(8,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(8,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(8,3) := -4.1271025E-24 ;
SYSTEM.COLUMN.X(8,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(8,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(9,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(9,1) := 1.0411440E-21 ;
SYSTEM.COLUMN.X(9,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(9,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(9,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(9,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(9,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(9,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(10,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.$X(10,1) := -8.9506882E-23 ;
SYSTEM.COLUMN.X(10,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(10,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(10,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(10,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.X(10,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.$X(10,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(1,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(1,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(1,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(1,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(2,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(2,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(2,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(2,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(3,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(3,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(3,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(3,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(4,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(4,2) := 3.3822172E-24 ;
SYSTEM.COLUMN.Y(4,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(4,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(5,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(5,2) := 5.2878473E-25 ;
SYSTEM.COLUMN.Y(5,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(5,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(6,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(6,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(6,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(6,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(7,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(7,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(7,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(7,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(8,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(8,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(8,3) := 5.8123352E-26 ;
SYSTEM.COLUMN.Y(8,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(9,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(9,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(9,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(9,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(10,1) := 1.0000000E+00 ;
SYSTEM.COLUMN.Y(10,2) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(10,3) := 0.0000000E+00 ;
SYSTEM.COLUMN.Y(10,4) := 0.0000000E+00 ;
SYSTEM.COLUMN.FEEDENTHALPY := -4.5983100E+05 ;
SYSTEM.COLUMN.MWL(1) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(2) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(3) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(4) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(5) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(6) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(7) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(8) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(9) := 7.6000000E+01 ;
SYSTEM.COLUMN.MWL(10) := 7.6000000E+01 ;
SYSTEM.COLUMN.TOTALHOLDUP := 4.6586382E+01 ;
SYSTEM.COLUMN.$TOTALHOLDUP := 0.0000000E+00 ;
SYSTEM.COLUMN.QC := 7.9357010E+05 ;
SYSTEM.MAKEUPENTHALPY.TEMP := 3.0000000E+02 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_VAPOR(1) := 3.4460000E+02 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_VAPOR(2) := 4.0000000E+02 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_VAPOR(3) := 3.2000000E+02 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_VAPOR(4) := 3.1000000E+02 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_LIQUID(1) := -3.0655400E+04 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_LIQUID(2) := -2.5600000E+04 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_LIQUID(3) := -2.7680000E+04 ;
SYSTEM.MAKEUPENTHALPY.SPECIFIC_ENTHALPY_LIQUID(4) := -3.3690000E+04 ;
SYSTEM.REACTOR.TOTALMOLS := 5.0000000E+01 ;
SYSTEM.REACTOR.$TOTALMOLS := 0.0000000E+00 ;
SYSTEM.REACTOR.TEMP := 3.0000000E+02 ;
SYSTEM.REACTOR.ENTHALPY := -1.5327700E+06 ;
SYSTEM.REACTOR.$ENTHALPY := -4.3344954E+06 ;
SYSTEM.REACTOR.TOTALFEED := 1.5000000E+01 ;
SYSTEM.REACTOR.QJACKET := 4.8152866E+04 ;
SYSTEM.REACTOR.NO_MOLS(1) := 5.0000000E+01 ;
SYSTEM.REACTOR.$NO_MOLS(1) := -7.5000000E+00 ;
SYSTEM.REACTOR.NO_MOLS(2) := 0.0000000E+00 ;
SYSTEM.REACTOR.$NO_MOLS(2) := 7.5000000E+00 ;
SYSTEM.REACTOR.NO_MOLS(3) := 0.0000000E+00 ;
SYSTEM.REACTOR.$NO_MOLS(3) := 0.0000000E+00 ;
SYSTEM.REACTOR.NO_MOLS(4) := 0.0000000E+00 ;
SYSTEM.REACTOR.$NO_MOLS(4) := 0.0000000E+00 ;
SYSTEM.REACTOR.VOLUME := 4.5454545E+00 ;
SYSTEM.REACTOR.CONCENTRATION(1) := 1.1000000E+01 ;
SYSTEM.REACTOR.CONCENTRATION(2) := 0.0000000E+00 ;
SYSTEM.REACTOR.CONCENTRATION(3) := 0.0000000E+00 ;
SYSTEM.REACTOR.CONCENTRATION(4) := 0.0000000E+00 ;
SYSTEM.REACTOR.FEED_A := 7.5000000E+00 ;
SYSTEM.REACTOR.FEED_B := 7.5000000E+00 ;
SYSTEM.REACTOR.FEED_C := 0.0000000E+00 ;
SYSTEM.REACTOR.X(1) := 1.0000000E+00 ;
SYSTEM.REACTOR.X(2) := 0.0000000E+00 ;
SYSTEM.REACTOR.X(3) := 0.0000000E+00 ;
SYSTEM.REACTOR.X(4) := 0.0000000E+00 ;
SYSTEM.REACTOR.FEED_D := 0.0000000E+00 ;
SYSTEM.REACTOR.FEEDENTHALPY(1) := -5.0553546E+05 ;
SYSTEM.REACTOR.FEEDENTHALPY(2) := -1.9395435E+05 ;
SYSTEM.REACTOR.FEEDENTHALPY(3) := -2.0971314E+05 ;
SYSTEM.REACTOR.FEEDENTHALPY(4) := -2.5524695E+05 ;
SYSTEM.REACTOR.SPECIFIC_ENTHALPY(1) := -3.0655400E+04 ;
SYSTEM.REACTOR.SPECIFIC_ENTHALPY(2) := -2.5600000E+04 ;
SYSTEM.REACTOR.SPECIFIC_ENTHALPY(3) := -2.7680000E+04 ;
SYSTEM.REACTOR.SPECIFIC_ENTHALPY(4) := -3.3690000E+04 ;
SYSTEM.REACTOR.REACTIONRATE(1) := 0.0000000E+00 ;
SYSTEM.REACTOR.REACTIONRATE(2) := 0.0000000E+00 ;
SYSTEM.VLE.VAPORENTHALPY(1) := 9.6490429E+03 ;
SYSTEM.VLE.VAPORENTHALPY(2) := 9.6493013E+03 ;
SYSTEM.VLE.VAPORENTHALPY(3) := 9.7613871E+03 ;
SYSTEM.VLE.VAPORENTHALPY(4) := 9.8713496E+03 ;
SYSTEM.VLE.VAPORENTHALPY(5) := 9.9792593E+03 ;
SYSTEM.VLE.VAPORENTHALPY(6) := 1.0094036E+04 ;
SYSTEM.VLE.VAPORENTHALPY(7) := 1.0206730E+04 ;
SYSTEM.VLE.VAPORENTHALPY(8) := 1.0317422E+04 ;
SYSTEM.VLE.VAPORENTHALPY(9) := 1.0426188E+04 ;
SYSTEM.VLE.VAPORENTHALPY(10) := 1.0533100E+04 ;
SYSTEM.VLE.TEMP(1) := 3.5400141E+02 ;
SYSTEM.VLE.TEMP(2) := 3.5400291E+02 ;
SYSTEM.VLE.TEMP(3) := 3.5465344E+02 ;
SYSTEM.VLE.TEMP(4) := 3.5529164E+02 ;
SYSTEM.VLE.TEMP(5) := 3.5591793E+02 ;
SYSTEM.VLE.TEMP(6) := 3.5658408E+02 ;
SYSTEM.VLE.TEMP(7) := 3.5723813E+02 ;
SYSTEM.VLE.TEMP(8) := 3.5788057E+02 ;
SYSTEM.VLE.TEMP(9) := 3.5851183E+02 ;
SYSTEM.VLE.TEMP(10) := 3.5913233E+02 ;
SYSTEM.VLE.LIQUIDENTHALPY(1) := -2.1350957E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(2) := -2.1350699E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(3) := -2.1238613E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(4) := -2.1128650E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(5) := -2.1020741E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(6) := -2.0905964E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(7) := -2.0793270E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(8) := -2.0682578E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(9) := -2.0573812E+04 ;
SYSTEM.VLE.LIQUIDENTHALPY(10) := -2.0466900E+04 ;
SYSTEM.VLE.K(1,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(1,2) := 1.1186637E+00 ;
SYSTEM.VLE.K(1,3) := 3.6783189E-01 ;
SYSTEM.VLE.K(1,4) := 2.0422134E-01 ;
SYSTEM.VLE.K(2,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(2,2) := 1.1186548E+00 ;
SYSTEM.VLE.K(2,3) := 3.6782906E-01 ;
SYSTEM.VLE.K(2,4) := 2.0422232E-01 ;
SYSTEM.VLE.K(3,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(3,2) := 1.1147884E+00 ;
SYSTEM.VLE.K(3,3) := 3.6660608E-01 ;
SYSTEM.VLE.K(3,4) := 2.0464705E-01 ;
SYSTEM.VLE.K(4,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(4,2) := 1.1110220E+00 ;
SYSTEM.VLE.K(4,3) := 3.6541455E-01 ;
SYSTEM.VLE.K(4,4) := 2.0506307E-01 ;
SYSTEM.VLE.K(5,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(5,2) := 1.1073514E+00 ;
SYSTEM.VLE.K(5,3) := 3.6425316E-01 ;
SYSTEM.VLE.K(5,4) := 2.0547070E-01 ;
SYSTEM.VLE.K(6,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(6,2) := 1.1034745E+00 ;
SYSTEM.VLE.K(6,3) := 3.6302637E-01 ;
SYSTEM.VLE.K(6,4) := 2.0590358E-01 ;
SYSTEM.VLE.K(7,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(7,2) := 1.0996952E+00 ;
SYSTEM.VLE.K(7,3) := 3.6183030E-01 ;
SYSTEM.VLE.K(7,4) := 2.0632791E-01 ;
SYSTEM.VLE.K(8,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(8,2) := 1.0960090E+00 ;
SYSTEM.VLE.K(8,3) := 3.6066354E-01 ;
SYSTEM.VLE.K(8,4) := 2.0674404E-01 ;
SYSTEM.VLE.K(9,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(9,2) := 1.0924118E+00 ;
SYSTEM.VLE.K(9,3) := 3.5952480E-01 ;
SYSTEM.VLE.K(9,4) := 2.0715229E-01 ;
SYSTEM.VLE.K(10,1) := 1.0000000E+00 ;
SYSTEM.VLE.K(10,2) := 1.0888996E+00 ;
SYSTEM.VLE.K(10,3) := 3.5841286E-01 ;
SYSTEM.VLE.K(10,4) := 2.0755297E-01 ;
SYSTEM.VLE.VAPORPRESSURE(1,1) := 1.0132500E+00 ;
SYSTEM.VLE.VAPORPRESSURE(1,2) := 1.1334860E+00 ;
SYSTEM.VLE.VAPORPRESSURE(1,3) := 3.7270566E-01 ;
SYSTEM.VLE.VAPORPRESSURE(1,4) := 2.0692728E-01 ;
SYSTEM.VLE.VAPORPRESSURE(2,1) := 1.0133002E+00 ;
SYSTEM.VLE.VAPORPRESSURE(2,2) := 1.1335331E+00 ;
SYSTEM.VLE.VAPORPRESSURE(2,3) := 3.7272127E-01 ;
SYSTEM.VLE.VAPORPRESSURE(2,4) := 2.0693853E-01 ;
SYSTEM.VLE.VAPORPRESSURE(3,1) := 1.0352865E+00 ;
SYSTEM.VLE.VAPORPRESSURE(3,2) := 1.1541254E+00 ;
SYSTEM.VLE.VAPORPRESSURE(3,3) := 3.7954232E-01 ;
SYSTEM.VLE.VAPORPRESSURE(3,4) := 2.1186832E-01 ;
SYSTEM.VLE.VAPORPRESSURE(4,1) := 1.0572390E+00 ;
SYSTEM.VLE.VAPORPRESSURE(4,2) := 1.1746158E+00 ;
SYSTEM.VLE.VAPORPRESSURE(4,3) := 3.8633050E-01 ;
SYSTEM.VLE.VAPORPRESSURE(4,4) := 2.1680067E-01 ;
SYSTEM.VLE.VAPORPRESSURE(5,1) := 1.0791551E+00 ;
SYSTEM.VLE.VAPORPRESSURE(5,2) := 1.1950039E+00 ;
SYSTEM.VLE.VAPORPRESSURE(5,3) := 3.9308568E-01 ;
SYSTEM.VLE.VAPORPRESSURE(5,4) := 2.2173476E-01 ;
SYSTEM.VLE.VAPORPRESSURE(6,1) := 1.1028776E+00 ;
SYSTEM.VLE.VAPORPRESSURE(6,2) := 1.2169973E+00 ;
SYSTEM.VLE.VAPORPRESSURE(6,3) := 4.0037366E-01 ;
SYSTEM.VLE.VAPORPRESSURE(6,4) := 2.2708645E-01 ;
SYSTEM.VLE.VAPORPRESSURE(7,1) := 1.1265880E+00 ;
SYSTEM.VLE.VAPORPRESSURE(7,2) := 1.2389034E+00 ;
SYSTEM.VLE.VAPORPRESSURE(7,3) := 4.0763366E-01 ;
SYSTEM.VLE.VAPORPRESSURE(7,4) := 2.3244654E-01 ;
SYSTEM.VLE.VAPORPRESSURE(8,1) := 1.1502862E+00 ;
SYSTEM.VLE.VAPORPRESSURE(8,2) := 1.2607240E+00 ;
SYSTEM.VLE.VAPORPRESSURE(8,3) := 4.1486630E-01 ;
SYSTEM.VLE.VAPORPRESSURE(8,4) := 2.3781482E-01 ;
SYSTEM.VLE.VAPORPRESSURE(9,1) := 1.1739724E+00 ;
SYSTEM.VLE.VAPORPRESSURE(9,2) := 1.2824613E+00 ;
SYSTEM.VLE.VAPORPRESSURE(9,3) := 4.2207220E-01 ;
SYSTEM.VLE.VAPORPRESSURE(9,4) := 2.4319108E-01 ;
SYSTEM.VLE.VAPORPRESSURE(10,1) := 1.1976465E+00 ;
SYSTEM.VLE.VAPORPRESSURE(10,2) := 1.3041169E+00 ;
SYSTEM.VLE.VAPORPRESSURE(10,3) := 4.2925192E-01 ;
SYSTEM.VLE.VAPORPRESSURE(10,4) := 2.4857510E-01 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,1) := 9.6490429E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,2) := 1.1200282E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,3) := 8.9602256E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(1,4) := 8.6802185E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,1) := 9.6493013E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,2) := 1.1200582E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,3) := 8.9604655E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(2,4) := 8.6804509E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,1) := 9.7613871E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,2) := 1.1330687E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,3) := 9.0645499E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(3,4) := 8.7812827E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,1) := 9.8713496E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,2) := 1.1458328E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,3) := 9.1666624E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(4,4) := 8.8802042E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,1) := 9.9792593E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,2) := 1.1583586E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,3) := 9.2668688E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(5,4) := 8.9772791E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,1) := 1.0094036E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,2) := 1.1716815E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,3) := 9.3734520E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(6,4) := 9.0805316E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,1) := 1.0206730E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,2) := 1.1847626E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,3) := 9.4781009E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(7,4) := 9.1819102E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,1) := 1.0317422E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,2) := 1.1976113E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,3) := 9.5808907E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(8,4) := 9.2814879E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,1) := 1.0426188E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,2) := 1.2102365E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,3) := 9.6818924E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(9,4) := 9.3793332E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,1) := 1.0533100E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,2) := 1.2226466E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,3) := 9.7811726E+03 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_VAPOR(10,4) := 9.4755110E+03 ;
SYSTEM.VLE.P(1) := 1.0132500E+00 ;
SYSTEM.VLE.P(2) := 1.0133002E+00 ;
SYSTEM.VLE.P(3) := 1.0352865E+00 ;
SYSTEM.VLE.P(4) := 1.0572390E+00 ;
SYSTEM.VLE.P(5) := 1.0791551E+00 ;
SYSTEM.VLE.P(6) := 1.1028776E+00 ;
SYSTEM.VLE.P(7) := 1.1265880E+00 ;
SYSTEM.VLE.P(8) := 1.1502862E+00 ;
SYSTEM.VLE.P(9) := 1.1739724E+00 ;
SYSTEM.VLE.P(10) := 1.1976465E+00 ;
SYSTEM.VLE.X(1,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(1,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(1,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(1,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(2,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(2,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(2,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(2,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(3,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(3,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(3,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(3,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(4,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(4,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(4,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(4,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(5,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(5,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(5,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(5,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(6,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(6,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(6,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(6,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(7,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(7,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(7,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(7,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(8,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(8,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(8,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(8,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(9,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(9,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(9,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(9,4) := 0.0000000E+00 ;
SYSTEM.VLE.X(10,1) := 1.0000000E+00 ;
SYSTEM.VLE.X(10,2) := 0.0000000E+00 ;
SYSTEM.VLE.X(10,3) := 0.0000000E+00 ;
SYSTEM.VLE.X(10,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(1,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(1,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(1,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(1,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(2,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(2,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(2,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(2,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(3,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(3,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(3,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(3,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(4,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(4,2) := 3.3822172E-24 ;
SYSTEM.VLE.Y(4,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(4,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(5,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(5,2) := 5.2878473E-25 ;
SYSTEM.VLE.Y(5,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(5,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(6,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(6,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(6,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(6,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(7,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(7,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(7,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(7,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(8,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(8,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(8,3) := 5.8123352E-26 ;
SYSTEM.VLE.Y(8,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(9,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(9,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(9,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(9,4) := 0.0000000E+00 ;
SYSTEM.VLE.Y(10,1) := 1.0000000E+00 ;
SYSTEM.VLE.Y(10,2) := 0.0000000E+00 ;
SYSTEM.VLE.Y(10,3) := 0.0000000E+00 ;
SYSTEM.VLE.Y(10,4) := 0.0000000E+00 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,1) := -2.1350957E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,2) := -1.4799718E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,3) := -1.9039774E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(1,4) := -2.5319781E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,1) := -2.1350699E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,2) := -1.4799418E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,3) := -1.9039535E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(2,4) := -2.5319549E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,1) := -2.1238613E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,2) := -1.4669313E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,3) := -1.8935450E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(3,4) := -2.5218717E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,1) := -2.1128650E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,2) := -1.4541672E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,3) := -1.8833338E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(4,4) := -2.5119796E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,1) := -2.1020741E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,2) := -1.4416414E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,3) := -1.8733131E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(5,4) := -2.5022721E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,1) := -2.0905964E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,2) := -1.4283185E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,3) := -1.8626548E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(6,4) := -2.4919468E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,1) := -2.0793270E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,2) := -1.4152374E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,3) := -1.8521899E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(7,4) := -2.4818090E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,1) := -2.0682578E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,2) := -1.4023887E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,3) := -1.8419109E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(8,4) := -2.4718512E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,1) := -2.0573812E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,2) := -1.3897635E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,3) := -1.8318108E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(9,4) := -2.4620667E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,1) := -2.0466900E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,2) := -1.3773534E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,3) := -1.8218827E+04 ;
SYSTEM.VLE.SPECIFIC_ENTHALPY_LIQUID(10,4) := -2.4524489E+04 ;
SYSTEM.JACKET.TWATER_OUT := 2.9946497E+02 ;
SYSTEM.JACKET.QJACKET := 4.8152866E+04 ;
SYSTEM.JACKET.TEMP_REACTOR := 3.0000000E+02 ;
SYSTEM.LIQENTHALPY.TEMP := 3.0000000E+02 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_VAPOR(1) := 3.4460000E+02 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_VAPOR(2) := 4.0000000E+02 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_VAPOR(3) := 3.2000000E+02 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_VAPOR(4) := 3.1000000E+02 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_LIQUID(1) := -3.0655400E+04 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_LIQUID(2) := -2.5600000E+04 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_LIQUID(3) := -2.7680000E+04 ;
SYSTEM.LIQENTHALPY.SPECIFIC_ENTHALPY_LIQUID(4) := -3.3690000E+04 ;
INITIAL
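# Initial conditions, as read from the assignments below: the reactor is
# charged with 50 mol of pure A at 300 K, and every column stage starts as
# pure component A (X(I,B) = X(I,C) = X(I,D) = 0 with SIGMA(X(I,)) = 1).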
TOTAL_C_OUT = 0;
TOTAL_D_OUT = 0;
WITHIN System.Reactor DO
No_Mols(A) = 50 ;
No_Mols(B) = 0 ;
No_Mols(C) = 0.0 ;
No_Mols(D) = 0.0 ;
Temp=300 ;
END
WITHIN System.Column DO
#FOR I:=2 TO NSTAGE-1 DO
# $M(I)=0;
#END
FOR I:=2 TO NSTAGE DO
X(I,D)= 0.0 ;
X(I,B)= 0.0 ;
X(I,C)= 0.0 ;
END
TotalHoldup=4.6586382E+01 ;
M(2)=2.9237071E+00 ;
M(3)=2.9189519E+00 ;
M(4)=2.9138424E+00 ;
M(5)=3.1682045E+00 ;
M(6)=3.1664972E+00 ;
M(7)=3.1647940E+00 ;
M(8)=3.1630950E+00 ;
M(9)=3.1614000E+00 ;
FOR I:=2 TO NSTAGE DO
SIGMA(X(I,))=1 ;
#SIGMA(K(I,)*X(I,))=1 ;
END
END
# FINAL
#TOTAL_C_OUT>=130 ;
#ChangeMols_A_Out^2 +ChangeMols_B_Out^2+
# ChangeMols_C_Out^2 +ChangeMols_D_Out^2<=0.00001;
#SYSTEM.REACTOR.$ENTHALPY<=0.01 ;
#SYSTEM.REACTOR.$NO_Mols(1)<=0.01 ;
#SYSTEM.REACTOR.$NO_Mols(2)<=0.01 ;
#SYSTEM.REACTOR.$NO_Mols(3)<=0.01 ;
#SYSTEM.REACTOR.$NO_Mols(4)<=0.01 ;
#MOLS_D_OUT<=0.065 ;
SCHEDULE
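# Integrate the combined reactor/column model for 100 time units; the
# commented alternative would instead continue until FINAL_TIME.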
CONTINUE FOR 100
#CONTINUE FOR FINAL_TIME
END