
Optimization & Application in Model Calibration

Introduction to Optimization
Optimization is the act of achieving the best possible result under given circumstances.

Common goals of optimization:
• Reduce cost
• Improve efficiency
• Improve performance
• Improve safety
Parametric Optimization

Performed by changing the values of parametric variables that are:

• Continuous: part thickness, height, width, …
• Discrete: number of holes, number of pillars, …
• Categorical (nominal scale): selection of material, selection of cross-section, …

Examples:
• Continuous: XYZ coordinates of key points in structure morphing
• Discrete: number of holes
• Categorical: material selection (1 – steel, 2 – aluminum, 3 – composites, 4 – plastic)
• Categorical: shape of the cross-section (1 – square, 2 – round, 3 – H shape, 4 – L shape)
Topology Optimization
Topology optimization (TO) is a mathematical method that optimizes material layout within
a given design space, for a given set of loads, boundary conditions and constraints with the
goal of maximizing the performance of the system.
Examples of Optimization in Materials Engineering
Composite layup optimization:
• Number of layers
• Fiber orientation of each layer
• Fiber type
• Volume fraction

[Diagram: process-integration workflow — HPC simulations, user interface, metamodeling & optimization, optimal designs, and validation — targeting performance, light-weighting, safety, and NVH; the model parameters include the number of layers and the fiber orientation.]
Examples of Optimization in Materials Engineering

Design of metamaterials with negative Poisson's ratio.

Vogiatzis, Panagiotis, Shikui Chen, Xianfeng David Gu, Ching-Hung Chuang, Hongyi Xu, and Na Lei. "Multi-Material Topology Optimization of Structures Infilled With Conformal Metamaterials." In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, vol. 51760, p. V02BT03A009. American Society of Mechanical Engineers, 2018.
Examples of Optimization in Materials Engineering
Integrated Computational Materials Engineering (ICME) of carbon fiber composites
• Parametric geometry design variables
• Manufacturing process variables
• Multiscale material design variables
Essential Features of Optimization
An objective function: a response that needs to be either maximized or minimized.
• Cost
• Stiffness
• Strength
• Fatigue
• Turnaround time
• Etc.

Design variable: an input parameter that can be changed to influence the response.
Not all model parameters are design variables, especially those you cannot change
in the real world.

Optimization can only be formulated for an underdetermined system: if all the
design variables are fixed, there is no optimization. Thus one or more variables
are relaxed, and the system becomes an underdetermined system, which has more
than one solution.
Terminologies

Design representation:
Define design variables. Each design variable corresponds to one dimension of the design space.

Design evaluation:
Obtain the performance of the design. For example, run simulation to obtain the
stiffness/crashworthiness/durability of a structure design.

Design synthesis:
Find the optimal designs by using design representation and design evaluation tools.
Essential Features of Optimization (Cont.)

Constraints: usually the optimization is performed subject to certain constraints/restrictions. The design constraints can be defined on:

• Design variables (inputs): the value of a design variable should be in the range of …
• Responses (outputs): the value of a response should be in the range of …

Designs satisfying all constraints: feasible designs
Designs violating one or more constraints: infeasible designs
Terminologies

Contour plot: a 2-D visualization of a response over two design variables X1 and X2; each contour line connects points with equal response values.

[Figure: contour plot of a response over X1 ∈ [−2, 2] and X2 ∈ [−2, 2], with contour levels labeled 0.1–0.8.]
Optimization

Given design variables $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$, an objective function $f(\mathbf{x})$, a set of equality constraints $g(\mathbf{x}) = 0$, and a set of inequality constraints $h(\mathbf{x}) \le 0$, the general problem formulation is:

$$\min_{\mathbf{x}} f(\mathbf{x}) \quad \text{s.t.} \quad g(\mathbf{x}) = 0, \; h(\mathbf{x}) \le 0$$
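A minimal sketch of how this formulation maps onto a solver, for a hypothetical two-variable problem (a quadratic objective with one inequality and one equality constraint), assuming the Optimization Toolbox's fmincon is available:

% objective f(x), to be minimized
f = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;
% nonlinear constraints: first output is h(x) <= 0, second is g(x) = 0
nonlcon = @(x) deal(x(1)^2 + x(2)^2 - 9, ...   % h(x) <= 0
                    x(1) + x(2) - 2);          % g(x) = 0
x0 = [0; 0];                                   % initial guess
xopt = fmincon(f, x0, [], [], [], [], [], [], nonlcon);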
Questions

Find an example of an equality constraint:
The sum of the volume percentages of the material phases is equal to 100%.

Find an example of an inequality constraint:
The weight of the structure should be less than 5 kg.


Engineering Optimization Algorithms

One-dimensional search

Random search methods
• Random jumping method
• Random walk method

Gradient-based algorithms
• Steepest descent method
• Newton's method

Generation-based algorithms
• Simulated Annealing (SA) algorithm
• Genetic Algorithm (GA)
• Particle Swarm Optimization (PSO) algorithm
1D Search: Section Methods

Given a closed interval [ a0, b0 ], a unimodal function (having one and only one local minimum
in the interval), find a point that is no more than 𝜀 away from the local optimum (minimum).

[Figure: a unimodal function f(x) on the closed interval [a0, b0].]
1D Search: Section Methods

[Figure: f(x) on [a0, b0] with two intermediate points a1 and b1 — sectioning with two intermediate points.]
How to Choose Intermediate Points

Equal spacing: 2 new points need to be generated in each iteration (2 objective function evaluations).

Iteration #1: intermediate points a1, b1 inside [a0, b0]
Iteration #2: intermediate points a2, b2 inside [a1, b0]
Why not reuse previous points?
We would like to minimize the number of objective function evaluations.

Place the two intermediate points a fraction $\rho$ of the interval length from each end, dividing $[a_0, b_0]$ into segments of relative length $\rho$, $1 - 2\rho$, and $\rho$. Requiring that one old point can serve as an intermediate point of the next (shorter) interval gives:

$$\frac{\rho}{1} = \frac{1 - 2\rho}{1 - \rho}$$
What is this Ratio?

Solving $\rho(1 - \rho) = 1 - 2\rho$, i.e. $\rho^2 - 3\rho + 1 = 0$:

$$\rho = \frac{3 - \sqrt{5}}{2} \approx 0.382, \qquad 1 - \rho \approx 0.618$$

The golden ratio! The golden ratio also appears in nature, e.g. the nautilus shell.
Golden Section Search

Given $\rho = \frac{3 - \sqrt{5}}{2} \approx 0.382$: each iteration shrinks the interval by the factor $1 - \rho \approx 0.618$ (so $L_1 = (1 - \rho) L_0$), the two intermediate points sit a distance $\rho \cdot L$ from each end of the current interval, and one point is always reused from the previous iteration.
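A minimal MATLAB sketch of the golden-section search; the test function, bracket, and tolerance are hypothetical, and f is assumed unimodal on [a0, b0]:

f = @(x) (x - 1.3).^2;          % stand-in unimodal function
a = 0;  b = 3;                  % initial bracket [a0, b0]
rho = (3 - sqrt(5))/2;          % ~0.382
tol = 1e-4;                     % stop when the bracket is this small
x1 = a + rho*(b - a);  f1 = f(x1);
x2 = b - rho*(b - a);  f2 = f(x2);
while (b - a) > tol
    if f1 < f2                  % minimum lies in [a, x2]
        b = x2;  x2 = x1;  f2 = f1;        % reuse x1 as the new x2
        x1 = a + rho*(b - a);  f1 = f(x1); % only one new evaluation
    else                        % minimum lies in [x1, b]
        a = x1;  x1 = x2;  f1 = f2;        % reuse x2 as the new x1
        x2 = b - rho*(b - a);  f2 = f(x2);
    end
end
xmin = (a + b)/2;               % within tol of the local minimizer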
Random Search Algorithms

Usually, these methods are used for education purposes. They may not be applicable to real engineering cases.

Random jumping method (random search):
• Generate random points, keep the best

Random walk method:
• Generate random unit direction vectors
• Walk to the new point if it is better
• Decrease the step size after N steps


Another simple optimization method: one factor at a time.

Objective function: minimize $f(x_1, x_2) = \frac{x_1^2}{4} + x_2^2$

Start point: [−7, −7]. Step size: 2.

[Figure: the search path over the (x1, x2) contour plot.]
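A minimal MATLAB sketch of the one-factor-at-a-time search on this example (the number of passes is a hypothetical choice). Note that with the fixed step size of 2 the search stalls at [−1, −1], which illustrates why step sizes must eventually be reduced:

f = @(x) x(1)^2/4 + x(2)^2;     % the objective above
x = [-7; -7];  step = 2;        % start point and fixed step size
for pass = 1:10
    for i = 1:2                 % vary one factor at a time
        for s = [step, -step]   % try both directions
            xt = x;  xt(i) = xt(i) + s;
            if f(xt) < f(x), x = xt; end   % keep the move if it improves f
        end
    end
end
% x ends at [-1; -1]: no +/-2 move improves f any further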
Gradient-based Algorithms
Definition of Gradient: 1 Dimensional
Definition of Gradient: N Dimensional
The gradient is defined by a vector of partial derivatives:

$$\nabla f(x_1, \ldots, x_n) := \left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \right)$$

The first-order change of the function follows the total differential:

$$\Delta z = \frac{\partial f}{\partial x} \Delta x + \frac{\partial f}{\partial y} \Delta y$$

[Figure: surface z = f(x, y) over the X and Y axes.]
Basic Idea behind Gradient-based Search

[Figure: a 1-D function f(x); at the point x0 the derivative f′(x0) < 0, so we search along that descent direction for the minimum.]
Basic Idea behind Gradient-based Search

$$f: \mathbb{R}^2 \to \mathbb{R}, \qquad \nabla f(x, y) := \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right)$$

To find a maximum, search in the direction of:

$$\mathbf{v} = \frac{\nabla f|_p}{\left\| \nabla f|_p \right\|}$$

To find a minimum, search in the direction of:

$$\mathbf{v} = -\frac{\nabla f|_p}{\left\| \nabla f|_p \right\|}$$

[Figure: surface f(x, y) with the gradient direction at the point p.]
Example: Determine the Search Direction

Find MIN of $f = x_1^4 - 2x_1^2 x_2 + x_2^2$. Current location: (0.2, 1.5).

$$\nabla f = \begin{bmatrix} 4x_1^3 - 4x_1 x_2 \\ -2x_1^2 + 2x_2 \end{bmatrix} = 2(x_1^2 - x_2)\begin{bmatrix} 2x_1 \\ -1 \end{bmatrix} = \begin{bmatrix} -1.168 \\ 2.92 \end{bmatrix}$$

So the search direction is $-\nabla f = \begin{bmatrix} 1.168 \\ -2.92 \end{bmatrix}$.

[Figure: contour plot with the start point (0.2, 1.5) and the −∇f direction.]
Steepest Descent Method (1D)

• We are at the point $x_i$. How do we reach $x_{i+1}$?
• Idea: go in the direction in which f(x) decreases most quickly, i.e. $-\nabla f(x_i)$
• How far should we go (step size)?

$$x_{i+1} = x_i - \nabla f(x_i) \cdot t_i$$

[Figure: f(x) with x_i and x_{i+1}; here ∇f(x_i) < 0, so the next x location lies to the right.]

One simple solution: choose a constant $\alpha$ as the step size, $t_i = \alpha$. But $\alpha$ may be too large or too small.

* Steepest descent is used in deep learning.
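A minimal MATLAB sketch of steepest descent with a constant step size alpha, on the earlier example f = x1^2/4 + x2^2; the alpha value and iteration count are hypothetical:

f     = @(x) x(1)^2/4 + x(2)^2;
gradf = @(x) [x(1)/2; 2*x(2)];        % analytical gradient of f
x = [-7; -7];
alpha = 0.4;                          % constant step size t_i = alpha
for i = 1:100
    x = x - alpha*gradf(x);           % x_{i+1} = x_i - grad f(x_i) * t_i
end
% x converges toward [0; 0]; a larger alpha can diverge, a smaller one is slow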
Steepest Descent Method (2D)

• We are at the point $x_i$. How do we reach $x_{i+1}$?
• Idea: go in the direction in which f(x) decreases most quickly, i.e. $-\nabla f(x_i)$
• How far should we go (step size)?

$$x_{i+1} = x_i - \nabla f(x_i) \cdot t_i$$

Choose $t_i$ so that $f(x_{i+1})$ is minimized along the direction of $-\nabla f(x_i)$. Any 1-dimensional search algorithm will work.

[Figure: contour plot with two start points; the inset shows the function shape along the dashed search line.]
Steepest Descent Method – Newton's Method

$$x_{i+1} = x_i - \left[ \nabla^2 f(x_i) \right]^{-1} \nabla f(x_i)$$

where $\nabla^2 f(x_i)$ is the Hessian matrix (second-order partial derivatives). This chooses the step that minimizes the quadratic approximation of the original function (Taylor series):

$$f(x_{i+1}) \approx f(x_i) + \nabla f(x_i)^T (x_{i+1} - x_i) + \frac{1}{2}(x_{i+1} - x_i)^T \nabla^2 f(x_i)(x_{i+1} - x_i)$$

[Figure: the quadratic approximation of f around x_i; its minimizer gives x_{i+1}.]
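A minimal MATLAB sketch of one Newton step on the same quadratic example; because f is exactly quadratic, a single step lands on the minimizer:

gradf = @(x) [x(1)/2; 2*x(2)];        % gradient of f = x1^2/4 + x2^2
H     = [1/2 0; 0 2];                 % its (constant) Hessian matrix
x = [-7; -7];
x = x - H \ gradf(x);                 % x_{i+1} = x_i - inv(H) * grad f(x_i)
% x is now exactly [0; 0], the minimizer of the quadratic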
Two things to be noted

1. The gradient-based method can only find the nearby local optimum.

[Figure: f(x) with a start point descending into the nearest local minimum ("Start" → "End").]

2. Ways to obtain the gradient information:
• An analytical gradient function is available (e.g. a neural network, as we know the mathematical equations of all neurons)
• The gradient is obtained numerically (e.g. the finite difference method; see the sketch below)
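When no analytical gradient exists, a forward finite difference costs one extra function evaluation per variable. A minimal MATLAB sketch, with a stand-in function playing the role of a black-box evaluator:

f = @(x) x(1)^2/4 + x(2)^2;     % stand-in for a black-box evaluator
x = [-7; -7];  h = 1e-6;        % evaluation point and perturbation size
fx = f(x);
g = zeros(2,1);
for i = 1:2
    e = zeros(2,1);  e(i) = h;
    g(i) = (f(x + e) - fx)/h;   % forward difference for each variable
end
% g approximates the true gradient [x1/2; 2*x2] = [-3.5; -14]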
What if we do not have an analytical gradient?

Sometimes an analytical gradient function is not available, and computing the gradient numerically may not be efficient. What can we do?
Engineering Cases without Analytical Gradient Information

• Complexity (black box)


• Bifurcation
• Noise
• Experimental test
“Black box” tools used in engineering
Simulated Annealing

• Random method inspired by a natural process: annealing
• Annealing is a process of heating metal/glass to relieve stresses
• The heated material undergoes a controlled cooling process to achieve a state of stable equilibrium with minimal internal stresses
• The probability of an internal energy change follows Boltzmann's probability distribution function
• The SA algorithm is based on this probability concept
Simulated Annealing

1. Set a starting "temperature" T, pick a set of starting designs x, and evaluate their responses f(x)
2. Randomly generate a set of new designs x′ by perturbing the existing designs x
3. Obtain f(x′). If better, accept the new design. If worse, generate a random number r, and accept the new design when

$$r \le P = e^{\frac{f(x) - f(x')}{T}}$$

(note: $f(x) - f(x') < 0$ for a worse design, so $P < 1$)

4. Stop if the design performance does not improve in several steps. Otherwise, update the temperature: $T \leftarrow \alpha T$, $\alpha < 1$. A lower temperature means a smaller magnitude of perturbation.
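A minimal 1-D MATLAB sketch of this loop; the objective, starting temperature, cooling factor, and perturbation amplitude are all hypothetical choices:

rng(0);                               % reproducible randomness
f = @(x) x.^2 + 10*sin(x);            % stand-in objective, to be minimized
x = 5;  fx = f(x);                    % 1. starting design and its response
T = 10;  alpha = 0.9;                 %    starting temperature, cooling factor
amp = 2;                              %    perturbation amplitude
for k = 1:200
    xnew = x + amp*randn;             % 2. perturb the existing design
    fnew = f(xnew);
    if fnew < fx || rand < exp((fx - fnew)/T)
        x = xnew;  fx = fnew;         % 3. accept better, or worse with prob. P
    end
    T = alpha*T;  amp = alpha*amp;    % 4. cool down; shrink the perturbation
end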
Illustration of the SA process (1D)

Find x to minimize the function value. Pay attention to the reduced "perturbation amplitude".

Q: From Step 2 to Step 3, we accepted a worse design. Why?
Illustration of the SA process (2D)

Min $f(x_1, x_2)$

[Figure: designs scattered over the (x1, x2) contour plot at Iteration 1 and at Iteration N.]
Genetic Algorithms

• Based on Darwin's theory of evolution: survival of the fittest
• Design objective = fitness function
• Designs are encoded in chromosomal strings, ~ genes, e.g. binary strings:

1 1 0 1 0 0 1 0 1 1 0 0 1 0 1

(one segment of the string encodes x1, the rest encodes x2)
Workflow of a Typical GA
Key Operators in GA

• Reproduction:
• Exact copy/copies of individual

• Crossover:
• Randomly exchange genes of different parents
• Many possibilities: how many genes, parents, children …

• Mutation:
• Randomly flip some bits of a gene string
• Used sparingly, but important to explore new designs
GA Operations (Cont.)

• Crossover (here, exchange after bit 4):

Parent 1: 1 1 0 1 | 0 0 1 0 1 1 0 0 1 0 1
Parent 2: 0 1 1 0 | 1 0 0 1 0 1 1 0 0 0 1

Child 1:  0 1 1 0 | 0 0 1 0 1 1 0 0 1 0 1
Child 2:  1 1 0 1 | 1 0 0 1 0 1 1 0 0 0 1

• Mutation (flip one bit, here bit 6):

Before: 1 1 0 1 0 0 1 0 1 1 0 0 1 0 1
After:  1 1 0 1 0 1 1 0 1 1 0 0 1 0 1
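A minimal MATLAB sketch of one-point crossover and bit-flip mutation on binary strings like the ones above; the crossover point is drawn at random, and the mutation probability is a hypothetical choice:

rng(0);
p1 = [1 1 0 1 0 0 1 0 1 1 0 0 1 0 1];     % parent 1
p2 = [0 1 1 0 1 0 0 1 0 1 1 0 0 0 1];     % parent 2
cut = randi(length(p1) - 1);              % random crossover point
c1 = [p2(1:cut) p1(cut+1:end)];           % child 1: swap gene segments
c2 = [p1(1:cut) p2(cut+1:end)];           % child 2
pm = 0.05;                                % small mutation probability
flip = rand(size(c1)) < pm;               % bits selected for mutation
c1(flip) = 1 - c1(flip);                  % flip the selected bits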
Particle Swarm Optimization (PSO)

• No genes and reproduction, but a population that travels through the design space
• Derived from simulations of flocks/schools in nature
• Individuals (search agents, design points) tend to follow the individual with the best fitness value, but also determine their own path
• Some randomness is added to give exploration properties (the "craziness parameter")

* Many research papers on improving PSO have been published over the past 10 years.
Particle Swarm Optimization (PSO)

1. Initialize location and speed of individuals (search agents) randomly
2. Evaluate fitness
3. Update best scores: individual (y) and overall (Y)
4. Update velocity and position:

$$\mathbf{v}_{i+1} = \mathbf{v}_i + c_1 r_1 (\mathbf{y}_i - \mathbf{x}_i) + c_2 r_2 (\mathbf{Y}_i - \mathbf{x}_i)$$

$$\mathbf{x}_{i+1} = \mathbf{x}_i + \mathbf{v}_{i+1}$$

where $c_1$ and $c_2$ control "individual" vs. "social" behavior, and $r_1$, $r_2$ are random numbers between 0 and 1.
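A minimal MATLAB sketch of this update rule on the earlier example f = x1^2/4 + x2^2; the swarm size, c1, c2, velocity limit, and iteration count are hypothetical choices (implicit expansion, MATLAB R2016b+, is assumed for G - X):

rng(0);
f = @(X) X(:,1).^2/4 + X(:,2).^2;       % fitness, vectorized over rows
n = 20;                                 % number of search agents
X = 10*(2*rand(n,2) - 1);               % 1. random initial locations
V = zeros(n,2);                         %    and initial velocities
P = X;  fP = f(X);                      % individual bests y
[fG, ib] = min(fP);  G = X(ib,:);       % overall best Y
c1 = 1.5;  c2 = 1.5;                    % individual vs. social weights
for k = 1:100
    r1 = rand(n,2);  r2 = rand(n,2);    % random numbers between 0 and 1
    V = V + c1*r1.*(P - X) + c2*r2.*(G - X);   % 4. velocity update
    V = max(min(V, 4), -4);             % velocity clamping, a common safeguard
    X = X + V;                          %    position update
    fX = f(X);                          % 2. evaluate fitness
    b = fX < fP;                        % 3. update individual bests ...
    P(b,:) = X(b,:);  fP(b) = fX(b);
    [fmin, ib] = min(fP);
    if fmin < fG, G = P(ib,:); fG = fmin; end  % ... and the overall best
end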
Illustration of PSO process

[Figure: search agents scattered over the design space converge toward the location of the optimal design.]
Summary: the Non-Gradient, Generation-based Optimization Algorithms

[Flowchart: initial generation of designs → design evaluation → identify the "elite" designs → generate the next generation based on predefined rules and the current "elite" designs → check the stop criteria: if not satisfied, repeat; if satisfied, end.]
Surrogate Modeling + Optimization

[Diagram: the optimization algorithm sends design variables to a design evaluator and receives back the performance; the design evaluator is either simulations/experiments OR a surrogate model.]
Review: Surrogate Modeling Process

1. Collect sample points in the design (input) space X
2. Obtain the response Y of each sample point
3. Build a surrogate model to replace the original evaluator

[Figure: sample points in the (x1, x2) plane, their responses Y, and the fitted response surface.]
"Virtual" optimum found by surrogate model-based optimization

• The surrogate model has prediction errors.
• The "optimal performance" found by the surrogate model may not be true (a "virtual" optimum).
• ALWAYS use the real design evaluator (simulation or experiment) to double-check the performance of the "virtual" designs.
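A minimal MATLAB sketch of the whole loop: sample, build a surrogate (a quadratic polynomial here, standing in for Kriging or another metamodel), optimize the surrogate, then verify the "virtual" optimum with the real evaluator. The true function and sample size are hypothetical:

rng(0);
truef = @(x) (x - 0.6).^2 + 0.05*sin(20*x);   % stand-in "real evaluator"
xs = rand(8,1);  ys = truef(xs);              % sample points and responses
p  = polyfit(xs, ys, 2);                      % build the surrogate model
xv = fminbnd(@(x) polyval(p, x), 0, 1);       % "virtual" optimum of the surrogate
yv = truef(xv);                               % ALWAYS re-check with the real evaluator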
Adv. Topic: Bayesian Optimization

Reference:
Ariyarit, A. and Kanazaki, M., 2017. Multi-fidelity multi-objective efficient global optimization applied
to airfoil design problems. Applied Sciences, 7(12), p.1318.
Concept of Bayesian Optimization

Recap: Gaussian process regression (Kriging) surrogate model.

General steps of Bayesian Optimization (BO), for a maximization problem:

1. Create a surrogate model to represent the true function f and define its prior
2. Given the dataset of existing samples x1, x2, …, xt, use Bayes' rule to obtain the posterior
3. Use an acquisition function α(x) to decide the next sample point: $x_{t+1} = \text{argmax}_x \, \alpha(x)$
4. Add the new sample to the dataset

A tutorial on Bayesian Optimization: https://distill.pub/2020/bayesian-optimization/

Also refer to: Snoek, J., Larochelle, H. and Adams, R.P., 2012. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems (pp. 2951-2959).
Concept of Bayesian Optimization: Acquisition Functions

Probability of improvement (PI):

$$x_{t+1} = \text{argmax}_x \, \Phi\!\left( \frac{\mu_t(x) - f(x^+) - \epsilon}{\sigma_t(x)} \right)$$

where $\Phi(\cdot)$ is the CDF of the Gaussian distribution, $f(x^+)$ is the current max, $\epsilon$ is a small positive number, $\mu_t(x)$ is the predicted mean of the point to be added, and $\sigma_t(x)$ is its standard deviation. PI chooses the point $x_{t+1}$ that has the highest probability of improvement over the current best.

Expected improvement (EI):

$$x_{t+1} = \text{argmax}_x \, \mathbb{E}\left[ \max\left(0, \, h_{t+1}(x) - f(x^+)\right) \mid \mathcal{D}_t \right]$$

where $\mathbb{E}(\cdot)$ is the expectation, $\mathcal{D}_t$ is the existing training data, and $h_{t+1}(x)$ is the posterior mean of the surrogate at time step $t+1$. EI chooses the next query point as the one with the highest expected improvement over the current best.
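A minimal MATLAB sketch evaluating both acquisition functions on a grid of candidates, given a surrogate's predicted mean and standard deviation (stand-in values below). The EI line uses the standard closed form for a Gaussian surrogate; normcdf and normpdf assume the Statistics and Machine Learning Toolbox:

xgrid = linspace(0, 1, 201)';         % candidate points
mu    = sin(8*xgrid);                 % stand-in for the posterior mean mu_t(x)
sigma = 0.2 + 0.3*xgrid;              % stand-in for the posterior std sigma_t(x)
fbest = max(mu);                      % stand-in for f(x+), the current max
eps0  = 0.01;                         % the small positive epsilon
PI = normcdf((mu - fbest - eps0)./sigma);          % probability of improvement
z  = (mu - fbest)./sigma;
EI = (mu - fbest).*normcdf(z) + sigma.*normpdf(z); % expected improvement
[~, i] = max(EI);
xnext = xgrid(i);                     % next sample point x_{t+1}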
Concept of Bayesian Optimization: Acquisition Functions
Probability of improvement (PI) and expected improvement (EI):

[Figure: two panels, each showing candidate points (green, yellow) against the best current design on a better/worse scale — one panel for PI, one for EI. Q: Which location to choose, green or yellow?]
Concept of Bayesian Optimization: Demo
Model Calibration by Optimization
Design Variables vs. Model Parameters

Analysis: the deflection of a steel cantilever beam.

Design variables:
• Length
• Width
• Thickness

Response:
• Deflection

Model parameters:
• Elastic modulus
• Yield strength

Design variables: the engineers can change their values to obtain different responses.
Model parameters: cannot be changed; they may or may not be measurable.
Model Calibration
Sometimes, we do not have accurate values for the model parameters (e.g. the distribution of strength).
Model Calibration
Model calibration is the act of predicting the most proper values of the model parameters. The basic idea is to "try" different model parameter values, and then find the best values that minimize the prediction error.

Workflow:
1. Choose a design (fix the design variable values) and conduct an experiment to obtain the true response
2. Build the simulation model, assign values to the model parameters, and predict the response
3. Evaluate the prediction error of the model

Optimization formulation:
• Input variables: model parameters
• Objective: minimize the prediction error
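A minimal MATLAB sketch of this loop, with a hypothetical two-parameter linear "simulation model" and synthetic experimental data; fminsearch (base MATLAB) plays the role of the optimizer over the model parameters:

rng(0);
x_exp = (0:0.1:1)';                                % experiment input settings
y_exp = 2.0*x_exp + 0.5 + 0.02*randn(size(x_exp)); % measured true responses
model = @(theta, x) theta(1)*x + theta(2);         % simulation model, parameters theta
err   = @(theta) sum((model(theta, x_exp) - y_exp).^2); % prediction error
theta = fminsearch(err, [1; 0]);                   % calibrated model parameters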
Example 1: Vehicle Occupant Restraint System Model

[Figure: time history of the upper neck load — simulation before calibration vs. experiment, and simulation after calibration vs. experiment.]

Zhan, Z., Fu, Y., Yang, R.J. and Peng, Y., "An automatic model calibration method for occupant restraint systems," Structural and Multidisciplinary Optimization, 44(6), pp. 815-822 (2011).
Example 2: Multiscale Model of Additively Manufactured AlSi10Mg

A nanoscale-to-microscale-to-macroscale model predicts the macroscale properties (the stress-strain curve), which are compared against the macroscale stress-strain curve from a coupon test.

Objective: minimize the difference between simulation and experiment (the shaded area, i.e. the cumulative difference between the two curves).

Input parameters, for the melting pool (MP) and the melting pool boundary (MPB):
• Hardening modulus: h0,MP and h0,MPB
• Saturation stress: τs,MP and τs,MPB
• Critical resolved shear stress: τ0,MP, with τ0,MPB = τ0,MP − 5 MPa (based on the differences in subgranular cell dimension and the Hall-Petch equation)
Example 2: Multiscale Model of Additively Manufactured AlSi10Mg

[Figure: engineering stress-strain curves (0-400 MPa), experiment vs. simulation, before and after calibration.]

Wang, Z., Xu, H., Li, Y., "Material Model Calibration by Deep Learning for Additively Manufactured Alloys," ASME 2020 International Symposium on Flexible Automation, ISFA2020-16724.
Adv. Topic 2: System Design Optimization using Analytical Target Cascading (ATC)

References:
Kim, H.M., Michelena, N.F., Papalambros, P.Y. and Jiang, T., 2003. Target cascading in optimal
system design. J. Mech. Des., 125(3), pp.474-480.
Kim, H.M., Rideout, D.G., Papalambros, P.Y. and Stein, J.L., 2003. Analytical target cascading in
automotive vehicle design. J. Mech. Des., 125(3), pp.481-489.
Examples of Engineering Systems

Mechanical system: a vehicle suspension system — the vehicle at the system level, with a performance evaluation module and sub-systems below it.

Materials and manufacturing system: ICME of AM alloy structures — micromechanics of materials, process simulation and control, and structure design.
Design Representation of an Engineering System

Notation: $X_s$, $X_{ss}$ are local variables (of the system and the sub-systems); $X_L$ are link (shared) variables; $R_i$ are sub-system responses.

System-level analysis — inputs: $X_s$, $X_{L1}$, $X_{L2}$, $R_1$, $R_2$, …, $R_N$; outputs: $y_1, \ldots, y_m$, giving the product performance $f(y_1, \ldots, y_m)$.

Sub-system 1 — inputs: $X_{ss1}$, $X_{L1}$; returns $R_1$
Sub-system 2 — inputs: $X_{ss2}$, $X_{L1}$, $X_{L2}$; returns $R_2$
Sub-system 3 — inputs: $X_{ss3}$, $X_{L2}$; returns $R_3$
Analytical Target Cascading

Analytical target cascading (ATC): a system design approach enabling top-level design targets to be cascaded down to the lowest level of the modeling hierarchy.

1) System partitioning
• Natural partition, model-based partition [1]
• Identify input variables (link and local variables) and responses
2) Model development
• Simulation models, data-driven surrogate models, etc.
3) Target cascading
• Link variables and upper-level variables → targets for the lower level
4) Embodiment design
• Solve the problem through a coordination strategy (an iterative process)

[1] Michelena, N.F. and Papalambros, P.Y., 1995, September. Optimal model-based decomposition of powertrain system design. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (Vol. 17162, pp. 165-172). American Society of Mechanical Engineers.
A Mathematical Example of System Optimization

Product design objective: MIN f

Product performance:

$$f = R_1^2 + R_2^2$$

System-level responses ($P_s$):

$$R_1 = R_3^2 + x_4^{-2} + x_5^2, \qquad R_2 = R_6^2 + x_5^2 + x_7^2$$

System-level constraints:

$$\frac{R_3^{-2} + x_4^2}{x_5^2} \le 1, \qquad \frac{x_5^2 + R_6^{-2}}{x_7^2} \le 1, \qquad R_3, R_6, x_4, x_5, x_7, x_{11} \ge 0$$

Subsystem-level responses:

$$P_{ss1}: \; R_3 = x_8^2 + x_9^{-2} + x_{10}^{-2} + x_{11}^2, \qquad P_{ss2}: \; R_6 = x_{11}^2 + x_{12}^2 + x_{13}^2 + x_{14}^2$$

Subsystem-level constraints:

$$P_{ss1}: \; \frac{x_8^2 + x_9^2}{x_{11}^2} \le 1, \quad \frac{x_8^{-2} + x_{10}^2}{x_{11}^2} \le 1, \quad x_8, x_9, x_{10}, x_{11} \ge 0$$

$$P_{ss2}: \; \frac{x_{11}^2 + x_{12}^{-2}}{x_{13}^2} \le 1, \quad \frac{x_{11}^2 + x_{12}^2}{x_{14}^2} \le 1, \quad x_{12}, x_{13}, x_{14}, x_{11} \ge 0$$

Link variable: $x_{11}$ (shared by $P_{ss1}$ and $P_{ss2}$)

Local design variables: $P_s$: $x_4, x_5, x_7$; $P_{ss1}$: $x_8, x_9, x_{10}$; $P_{ss2}$: $x_{12}, x_{13}, x_{14}$
ATC Formulation

Upper level ($P_s$, product performance):

$$\min \; (R_1^2 + R_2^2) + \varepsilon_1 + \varepsilon_2 + \varepsilon_3$$

$$\text{s.t.} \quad \left(x_{11} - x_{11,s1}^L\right)^2 + \left(x_{11} - x_{11,s2}^L\right)^2 \le \varepsilon_1, \quad \left(R_3 - R_3^L\right)^2 \le \varepsilon_2, \quad \left(R_6 - R_6^L\right)^2 \le \varepsilon_3$$

$$\frac{R_3^{-2} + x_4^2}{x_5^2} \le 1, \qquad \frac{x_5^2 + R_6^{-2}}{x_7^2} \le 1, \qquad R_3, R_6, x_4, x_5, x_7, x_{11} \ge 0$$

where $R_1 = R_3^2 + x_4^{-2} + x_5^2$ and $R_2 = R_6^2 + x_5^2 + x_7^2$. The upper level passes the targets $R_3^U$, $R_6^U$, $x_{11}^U$ down to the subsystems; the subsystems return $R_3^L$, $x_{11,s1}^L$ and $R_6^L$, $x_{11,s2}^L$.

Lower level $P_{ss1}$:

$$\min \; \left(R_3 - R_3^U\right)^2 + \left(x_{11} - x_{11}^U\right)^2 \quad \text{s.t.} \quad \frac{x_8^2 + x_9^2}{x_{11}^2} \le 1, \quad \frac{x_8^{-2} + x_{10}^2}{x_{11}^2} \le 1, \quad x_8, x_9, x_{10}, x_{11} \ge 0$$

with $R_3 = x_8^2 + x_9^{-2} + x_{10}^{-2} + x_{11}^2$.

Lower level $P_{ss2}$:

$$\min \; \left(R_6 - R_6^U\right)^2 + \left(x_{11} - x_{11}^U\right)^2 \quad \text{s.t.} \quad \frac{x_{11}^2 + x_{12}^{-2}}{x_{13}^2} \le 1, \quad \frac{x_{11}^2 + x_{12}^2}{x_{14}^2} \le 1, \quad x_{12}, x_{13}, x_{14}, x_{11} \ge 0$$

with $R_6 = x_{11}^2 + x_{12}^2 + x_{13}^2 + x_{14}^2$.
The Iterative Optimization Process

(1) Conduct optimization at the upper level
• Design variables: R3_U, R6_U, x4, x5, x7, x11_U, e1, e2, e3
• Record the optimization result
• R3_U, R6_U, x11_U are obtained and passed to the lower level

(2.1) Conduct optimization at the lower level, Pss1
• Design variables: x8, x9, x10, x11_L1
• Record the optimization result
• R3_L, x11_L1 are obtained and passed back to the upper level

(2.2) Conduct optimization at the lower level, Pss2
• Design variables: x12, x13, x14, x11_L2
• Record the optimization result
• R6_L, x11_L2 are obtained and passed back to the upper level

(3) If converged, end; otherwise, return to step (1).
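A minimal runnable MATLAB sketch of this coordination idea on a toy one-variable problem (not the mathematical example above, whose subproblems are vector-valued): the upper level proposes a target t for the link variable, the lower level responds with its best design x, and the loop repeats until the two agree. The subproblems and the penalty weight w (analogous to the "weight" parameter mentioned in the exercises) are hypothetical:

w = 2;                                   % penalty weight on target mismatch
t = 0;                                   % initial target for the link variable
opts = optimset('TolX', 1e-10);          % tight tolerance for the 1-D solves
for iter = 1:100
    % lower level: best local design x, given the cascaded target t
    x = fminbnd(@(xx) (xx - 2)^2 + w*(xx - t)^2, -10, 10, opts);
    % upper level: best target t, given the lower-level response x
    tn = fminbnd(@(tt) tt^2/4 + w*(tt - x)^2, -10, 10, opts);
    if abs(tn - t) < 1e-6, t = tn; break; end   % converged: levels are consistent
    t = tn;
end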
Implementation in MATLAB

ATC workflow (ATC_math_example.m): the objective and constraint functions call the "simulation models".

• Upper-level optimization: Upper_obj.m and Upper_cons.m call Upper_model1.m ($R_1$) and Upper_model2.m ($R_2$)
• Lower-level optimization, Pss1: lower_obj1.m and lower_cons1.m call lower_model1.m ($R_3$)
• Lower-level optimization, Pss2: lower_obj2.m and lower_cons2.m call lower_model2.m ($R_6$)
• Move to the next iteration

Exercises

(1) Revise the code and use a "while" or "for" loop to automate the iterative search process:
• Define convergence criteria to stop the iterative search process;
• If you choose to use the "for" loop, the code should return a warning message if it reaches the maximum iteration number;
• Generate proper plots to visualize the search history.

(2) After you automate the search process, conduct a parametric study. Change the parameter named "weight" in the upper-level objective function (upper_obj.m), and report its impact on:
• The optimization result;
• The time to converge;
• The consistency in link variable values (i.e. "accuracy").
