8.2. SE5072_Optimization
Introduction to Optimization
Optimization is the act of achieving the best possible result under given circumstances.
[Figure: common goals of optimization: improve efficiency, reduce cost, improve performance, and improve safety.]
Parametric Optimization
Example of discrete design choices:
• Material: 1 – steel, 2 – aluminum, 3 – composites, 4 – plastic
• Cross-section shape: 1 – square, 2 – round, 3 – H shape, 4 – L shape
Topology Optimization
Topology optimization (TO) is a mathematical method that optimizes material layout within
a given design space, for a given set of loads, boundary conditions and constraints with the
goal of maximizing the performance of the system.
Examples of Optimization in Materials Engineering
Composite layup optimization
• Number of layers
• Fiber orientation of each layer
[Figure: optimization framework combining HPC simulations, process integration, a user interface, and metamodeling & optimization, applied to light-weighting, performance, and safety targets.]
Vogiatzis, Panagiotis, Shikui Chen, Xianfeng David Gu, Ching-Hung Chuang, Hongyi Xu, and Na Lei. "Multi-Material
Topology Optimization of Structures Infilled With Conformal Metamaterials." In International Design Engineering Technical
Conferences and Computers and Information in Engineering Conference, vol. 51760, p. V02BT03A009. American Society
of Mechanical Engineers, 2018.
Examples of Optimization in Materials Engineering
Integrated Computational Materials Engineering (ICME) of carbon fiber composites
• Parametric geometry design variables
• Manufacturing process variables
• Multiscale material design variables
Essential Features of Optimization
An objective function: a response that needs to be either maximized or minimized.
• Cost
• Stiffness
• Strength
• Fatigue
• Turnaround time
• Etc.
Design variable: an input parameter that can be changed to influence the response.
Not all model parameters are design variables, especially those you cannot change in the real world.
Design representation:
Define design variables. Each design variable corresponds to one dimension of the design space.
Design evaluation:
Obtain the performance of the design. For example, run a simulation to obtain the
stiffness/crashworthiness/durability of a structural design.
Design synthesis:
Find the optimal designs by using design representation and design evaluation tools.
Essential Features of Optimization (Cont.)
Constraints: limits on the design variables and the responses.
• Design variables (inputs): the value of each design variable should be in the range of ….
• Responses (outputs): the value of each response should be in the range of ….
[Figure: contour plot of the response over the design space (X1, X2).]
Terminologies
Given design variables x = [x1, x2, …, xn]ᵀ, a general optimization problem is stated as
min_x f(x)
s.t. g(x) ≤ 0
     h(x) = 0
where f is the objective function, g collects the inequality constraints, and h collects the equality constraints.
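As an illustration only (not from the lecture), this standard form maps directly onto MATLAB's fmincon from the Optimization Toolbox; the objective and constraints below are made-up placeholders.

% Illustrative sketch: min f(x) s.t. g(x) <= 0, h(x) = 0 (requires the Optimization Toolbox)
f = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;                        % assumed objective f(x)
g_and_h = @(x) deal(x(1)^2 + x(2)^2 - 4, x(1) + x(2) - 1);   % g(x) <= 0 and h(x) = 0 (assumed)
x0 = [0; 0];                                                 % initial guess
[xopt, fopt] = fmincon(f, x0, [], [], [], [], [], [], g_and_h);
fprintf('x* = (%.3f, %.3f), f* = %.3f\n', xopt(1), xopt(2), fopt);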
Questions
Gradient-based algorithms
• Steepest descent method
• Newton’s method
Generation-based algorithms
• Simulated Annealing (SA) algorithm
• Genetic Algorithm (GA)
• Particle Swarm Optimization (PSO) algorithm
1D Search: Section Methods
Given a closed interval [ a0, b0 ], a unimodal function (having one and only one local minimum
in the interval), find a point that is no more than 𝜀 away from the local optimum (minimum).
[Figure: a unimodal function f(x) on the closed interval [a0, b0].]
1D Search: Section Methods
[Figure: sectioning with two intermediate points a1 and b1 placed inside the interval [a0, b0].]
How to Choose Intermediate Points
Iteration #1: with equal spacing, place each intermediate point a fraction ρ of the interval length from one end, so the middle segment has length 1 − 2ρ (for a unit interval).
Why not reuse previous points? We would like to minimize the number of objective function evaluations: if one of the current intermediate points can serve again as an intermediate point of the reduced interval, each later iteration needs only one new evaluation.
[Figure: the interval [a0, b0] with intermediate points a1 and b1 at distance ρ from each end; the reduced interval has length 1 − ρ and reuses one of the old points.]
Requiring the reduced interval to be sectioned in the same proportions gives
ρ / 1 = (1 − 2ρ) / (1 − ρ)
What is this Ratio?
Cross-multiplying gives ρ(1 − ρ) = 1 − 2ρ, i.e., ρ² − 3ρ + 1 = 0, so
ρ = (3 − √5) / 2 ≈ 0.382 and 1 − ρ = (√5 − 1) / 2 ≈ 0.618
The golden ratio!
[Figure: the golden ratio in nature: a Nautilus shell.]
Golden Section Search
Given ρ = (3 − √5)/2 ≈ 0.382, each iteration places the two intermediate points a distance ρ·L0 from the ends of the current interval of length L0. The interval is then reduced to length L1 = (1 − ρ)·L0, and one of the old intermediate points already lies a distance ρ·L1 from an end of the new interval, so only one new function evaluation is needed per iteration.
[Figure: nested intervals L0 and L1 = (1 − ρ)·L0 with intermediate points at ρ·L0 and ρ·L1.]
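A minimal golden-section search sketch (the test function, interval, and tolerance below are assumed for illustration):

% Golden section search on a unimodal f over [a, b]
rho = (3 - sqrt(5)) / 2;             % ~0.382
f   = @(x) (x - 1.3).^2 + 0.5;       % assumed unimodal test function
a = 0;  b = 4;  eps_tol = 1e-4;      % interval and stopping tolerance (assumed)
x1 = a + rho*(b - a);  f1 = f(x1);   % left intermediate point
x2 = b - rho*(b - a);  f2 = f(x2);   % right intermediate point
while (b - a) > eps_tol
    if f1 < f2                       % the minimum lies in [a, x2]
        b = x2;  x2 = x1;  f2 = f1;  % reuse the old left point as the new right point
        x1 = a + rho*(b - a);  f1 = f(x1);   % only one new evaluation
    else                             % the minimum lies in [x1, b]
        a = x1;  x1 = x2;  f1 = f2;
        x2 = b - rho*(b - a);  f2 = f(x2);
    end
end
fprintf('xmin ~ %.4f, f(xmin) ~ %.4f\n', (a + b)/2, f((a + b)/2));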
Random Search Algorithms
Usually, these methods are used for educational purposes. They may not be applicable to real engineering cases.
[Figure: a random search trajectory in the (x1, x2) design space with a step size of 2.]
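For completeness, a tiny random search sketch (the objective, the accept-only-if-better rule, and the iteration budget are assumptions, not taken from the slide):

% Random search with a fixed step size: accept a random step only if it improves f
f = @(x) x(1)^2 + 2*x(2)^2;              % assumed objective
x = [3; 3];  step = 2;  fbest = f(x);    % start point and step size of 2
for k = 1:200
    d = randn(2, 1);  d = d / norm(d);   % random unit direction
    xtrial = x + step * d;               % candidate design
    if f(xtrial) < fbest                 % keep only improving steps
        x = xtrial;  fbest = f(xtrial);
    end
end
fprintf('best point (%.2f, %.2f), f = %.3f\n', x(1), x(2), fbest);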
Gradient-based Algorithms
Definition of Gradient: 1 Dimensional
Definition of Gradient: N Dimensional
The gradient is defined as a vector of partial derivatives:
∇f(x1, …, xn) := [∂f/∂x1, …, ∂f/∂xn]ᵀ
For a surface z = f(x, y), a small change in the inputs produces Δz ≈ (∂f/∂x)·Δx + (∂f/∂y)·Δy.
[Figure: a 3D surface z = f(x, y) plotted over the X and Y axes.]
Basic Idea behind Gradient-based Search
[Figure: a 1D function with a point x0 where f′(x0) < 0; because the slope is negative, moving in the +x direction decreases f.]
Basic Idea behind Gradient-based Search
For f : R² → R, the gradient is ∇f(x, y) := [∂f/∂x, ∂f/∂y]ᵀ.
To find a maximum, search in the direction of the normalized gradient at the current point p:
v = ∇f(p) / ‖∇f(p)‖
[Figure: a 3D surface with the gradient direction drawn at the point p.]
Example: find the minimum of f(x1, x2) = x1⁴ − 2·x1²·x2 + x2², starting from the point (0.2, 1.5).
The gradient is
∇f = [4x1³ − 4x1·x2, 2(x2 − x1²)]ᵀ
At the start point (0.2, 1.5), ∇f = [−1.168, 2.92]ᵀ, and the search moves along −∇f.
The update rule is
x_(i+1) = x_i − ∇f(x_i)·t_i
where t_i is the step size at iteration i.
[Figure: contour plot of f in the (x1, x2) plane showing the start point and the descent direction −∇f.]
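A minimal steepest-descent sketch for this example (the fixed step size and the stopping rule are assumed):

% Steepest descent on f(x1, x2) = x1^4 - 2*x1^2*x2 + x2^2, starting from (0.2, 1.5)
f     = @(x) x(1)^4 - 2*x(1)^2*x(2) + x(2)^2;
gradf = @(x) [4*x(1)^3 - 4*x(1)*x(2); 2*(x(2) - x(1)^2)];
x = [0.2; 1.5];  t = 0.05;                % start point and (assumed) fixed step size t_i
for i = 1:100
    g = gradf(x);
    if norm(g) < 1e-6, break; end         % stop once the gradient is nearly zero
    x = x - t * g;                        % x_(i+1) = x_i - t_i * grad f(x_i)
end
fprintf('x = (%.4f, %.4f), f = %.6f\n', x(1), x(2), f(x));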
Two things to be noted:
1. The gradient-based method can only find the nearby local optimum.
2. There are different ways to obtain the gradient information.

What if we do not have an analytical gradient?
Sometimes an analytical gradient function is not available, and computing the gradient numerically may not be efficient. What can we do?
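One common option is a finite-difference approximation; the sketch below (central differences, with an assumed step h) also shows why this can be expensive: every gradient costs about 2n extra function evaluations.

% Central finite-difference gradient (save as fd_gradient.m)
% usage example: g = fd_gradient(@(x) x(1)^2 + x(2)^2, [1; 2], 1e-6)
function g = fd_gradient(f, x, h)
    n = numel(x);
    g = zeros(n, 1);
    for j = 1:n
        e = zeros(n, 1);  e(j) = h;            % perturb one variable at a time
        g(j) = (f(x + e) - f(x - e)) / (2*h);  % two evaluations per variable
    end
end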
Engineering Cases without Analytical Gradient Information
[Figure: an engineering case, minimize f(x1, x2), where gradient information is not available; contour plots of the design space at Iteration 1 through Iteration N show how the candidate designs evolve.]
Genetic Algorithms
Example chromosome: 1 1 0 1 0 0 1 0 1 1 0 0 1 0 1, where one group of bits encodes x1 and the remaining bits encode x2.
Workflow of a Typical GA
Key Operators in GA
• Reproduction:
• Exact copy/copies of individual
• Crossover:
• Randomly exchange genes of different parents
• Many possibilities: how many genes, parents, children …
• Mutation:
• Randomly flip some bits of a gene string
• Used sparingly, but important to explore new designs
GA Operations (Cont.)
• Crossover (single-point, after the 4th bit in this example):
Parent 1: 1 1 0 1 | 0 0 1 0 1 1 0 0 1 0 1
Parent 2: 0 1 1 0 | 1 0 0 1 0 1 1 0 0 0 1
Child 1:  0 1 1 0 | 0 0 1 0 1 1 0 0 1 0 1
Child 2:  1 1 0 1 | 1 0 0 1 0 1 1 0 0 0 1
• Mutation (flip a randomly selected bit; here the 6th bit flips from 0 to 1):
Before: 1 1 0 1 0 0 1 0 1 1 0 0 1 0 1
After:  1 1 0 1 0 1 1 0 1 1 0 0 1 0 1
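An illustrative sketch of the two operators on bit strings (the crossover point and the mutation rate are assumed):

% Single-point crossover and bit-flip mutation on binary chromosomes
p1 = [1 1 0 1 0 0 1 0 1 1 0 0 1 0 1];     % Parent 1
p2 = [0 1 1 0 1 0 0 1 0 1 1 0 0 0 1];     % Parent 2
cut = 4;                                   % crossover point (assumed)
c1 = [p2(1:cut), p1(cut+1:end)];           % Child 1: head of Parent 2 + tail of Parent 1
c2 = [p1(1:cut), p2(cut+1:end)];           % Child 2: head of Parent 1 + tail of Parent 2
pm = 0.05;                                 % mutation rate (assumed)
flip = rand(size(c1)) < pm;                % bits selected for mutation
c1(flip) = 1 - c1(flip);                   % flip the selected bits
disp([c1; c2]);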
Particle Swarm Optimization (PSO)
* A lot of research papers have been published on improving PSO in the past 10 years.
Particle Swarm Optimization (PSO)
Velocity and position update:
v_(i+1) = v_i + c1·r1·(y_i − x_i) + c2·r2·(Y_i − x_i)
x_(i+1) = x_i + v_(i+1)
where x_i and v_i are the particle's position and velocity at iteration i, y_i is the best position found so far by the particle itself, Y_i is the best position found by the whole swarm, c1 and c2 control the balance between “individual behavior” and “social behavior”, and r1, r2 are random numbers between 0 and 1.
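One update step in code (the coefficients c1, c2 and the numerical values are assumed for illustration):

% One PSO velocity/position update for a single particle
c1 = 2;  c2 = 2;                          % individual vs. social weights (assumed)
x  = [0.5; -1.0];   v = [0.1; 0.2];       % current position and velocity
y  = [0.4; -0.8];                         % the particle's own best position so far
Y  = [1.0;  0.3];                         % the best position found by the whole swarm
r1 = rand;  r2 = rand;                    % random numbers between 0 and 1
v  = v + c1*r1*(y - x) + c2*r2*(Y - x);   % velocity update
x  = x + v;                               % position update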
Illustration of PSO process
[Figure: PSO flowchart: generate the initial generation of designs (search agents) → design evaluation → check the stop criteria; if not satisfied, update the swarm and repeat; if satisfied, end.]
Surrogate Modeling + Optimization
[Figure: in the optimization loop, each design can be evaluated by the real design evaluator (simulation or experiment) OR by a surrogate model.]
Review: Surrogate Modeling Process
[Figure: surrogate modeling process: sample designs in the (x1, x2) space, evaluate the response Y, and fit a surrogate surface over (x1, x2).]
“Virtual” optimum found by surrogate model-based optimization
• The “optimal performance” found by the surrogate model may not be true (a “virtual” optimum).
• ALWAYS use the real design evaluator (simulation or experiment) to double-check the performance of the “virtual” designs.
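A minimal sketch of this workflow; the "expensive" evaluator, the sampling plan, and the quadratic surrogate form are all assumed for illustration:

% Surrogate-based optimization followed by verification with the real evaluator
truef = @(x) (x(:,1)-1).^2 + 3*(x(:,2)+0.5).^2 + 0.3*sin(4*x(:,1));  % stand-in for the expensive simulation
X = 4*rand(30, 2) - 2;                          % 30 sampled designs in [-2, 2]^2
Y = truef(X);                                   % run the (expensive) design evaluations
A = [ones(30,1) X X.^2 X(:,1).*X(:,2)];         % quadratic basis for the surrogate
b = A \ Y;                                      % least-squares fit
surr = @(x) b(1) + b(2)*x(1) + b(3)*x(2) + b(4)*x(1)^2 + b(5)*x(2)^2 + b(6)*x(1)*x(2);
xv = fminsearch(surr, [0; 0]);                  % "virtual" optimum found on the surrogate
fprintf('surrogate prediction %.3f vs. verified true value %.3f\n', surr(xv), truef(xv'));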
Adv. Topic: Bayesian Optimization
Reference:
Ariyarit, A. and Kanazaki, M., 2017. Multi-fidelity multi-objective efficient global optimization applied
to airfoil design problems. Applied Sciences, 7(12), p.1318.
Concept of Bayesian Optimization
Recap: Gaussian process regression (Kriging) surrogate model.
General steps of Bayesian Optimization (BO), for a maximization problem:
• Probability of improvement: choose a point x_t that leads to the highest probability of improvement over the current best.
• Expected improvement: choose the next query point as the one that has the highest expected improvement over the current best.
Concept of Bayesian Optimization: Acquisition Functions
Probability of improvement (PI)
[Figure: Gaussian process posterior over the design space, with the response axis running from worse to better; Q: which location should be sampled next, the green point or the yellow one?]
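For a Gaussian process posterior with mean mu(x) and standard deviation sigma(x), the two acquisition functions can be evaluated with the standard closed forms below (generic formulas, not code from the lecture; the numbers are assumed):

% Probability of improvement (PI) and expected improvement (EI) at one candidate point,
% for a maximization problem with current best value fbest
mu = 1.2;  sigma = 0.4;  fbest = 1.0;     % assumed GP posterior mean/std and current best
z   = (mu - fbest) / sigma;               % standardized improvement
Phi = 0.5 * erfc(-z / sqrt(2));           % standard normal CDF
phi = exp(-z^2 / 2) / sqrt(2*pi);         % standard normal PDF
PI  = Phi;                                % probability of improvement
EI  = (mu - fbest) * Phi + sigma * phi;   % expected improvement
fprintf('PI = %.3f, EI = %.3f\n', PI, EI);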
Model parameters vs. design variables (example):
Response: • Deflection
Model parameters: • Elastic modulus • Yield strength
Design variables: the engineers can change their values to obtain different responses.
[Figure: experimental setup and the distribution of strength.]
Model Calibration
Model calibration is the act of determining the most appropriate values of the model parameters.
The basic idea is to “try” different model parameter values, and then find the best values that minimize the prediction error.
Optimization
• Input variables: model parameters
• Objective: minimize the prediction error
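A minimal calibration sketch; everything here is hypothetical for illustration: a lumped one-parameter deflection model, made-up "experimental" data, and fminsearch as the optimizer.

% Model calibration as optimization: tune the model parameter E to minimize prediction error
loads  = [100 200 300 400];                        % applied loads (made-up data)
d_exp  = [0.21 0.43 0.62 0.85];                    % measured deflections (made-up data)
model  = @(E, P) P .* 1e3 ./ E;                    % hypothetical deflection model with parameter E
err    = @(E) sum((model(E, loads) - d_exp).^2);   % prediction error (sum of squared differences)
E_cal  = fminsearch(err, 1e5);                     % "try" different values and keep the best
fprintf('calibrated E = %.3g, residual error = %.4g\n', E_cal, err(E_cal));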
Example 1: Vehicle Occupant Restraint System Model
Example 2: Multiscale Model of Additively Manufactured AlSi10Mg
Input parameters, for the melting pool (MP) and the melting pool boundary (MPB):
• Hardening modulus: h0,MP and h0,MPB
• Saturation stress: τs,MP and τs,MPB
• Critical resolved shear stress: τ0,MP and τ0,MPB = τ0,MP − 5 MPa
The MP/MPB differences are based on the differences in subgranular cell dimension and the Hall-Petch equation.
Goal: predict the macroscale properties (the stress-strain curve).
Shaded area: the cumulative difference between the experimental and simulated curves.
[Figure: engineering stress vs. engineering strain curves (up to about 400 MPa) comparing experiment and simulation for two cases.]
Wang, Z., Xu, H., Li, Y., “Material Model Calibration by Deep Learning for Additively Manufactured Alloys”, ASME 2020
International Symposium on Flexible Automation, ISFA2020-16724.
Adv. Topic 2: System Design Optimization using
Analytical Target Cascading (ATC)
References:
Kim, H.M., Michelena, N.F., Papalambros, P.Y. and Jiang, T., 2003. Target cascading in optimal
system design. J. Mech. Des., 125(3), pp.474-480.
Kim, H.M., Rideout, D.G., Papalambros, P.Y. and Stein, J.L., 2003. Analytical target cascading in
automotive vehicle design. J. Mech. Des., 125(3), pp.481-489.
Examples of Engineering Systems
• Mechanical system: a vehicle suspension system, with a vehicle performance evaluation module.
• Materials and manufacturing system: ICME of AM alloy structures, with a micromechanics-of-materials module.
System-level analysis:
• Inputs: system-level design variables Xs, link variables XL1 and XL2, and the sub-system responses R1, R2, …, RN
• Outputs: system responses y1, …, ym
Sub-system analyses (each returns its responses, e.g., R1, R2, R3, to the system level):
• Sub-system 1: inputs Xss1 and link variables XL1
• Sub-system 2: inputs Xss2 and link variables XL1, XL2
• Sub-system 3: inputs Xss3 and link variables XL2
[1] Michelena, N.F. and Papalambros, P.Y., 1995, September. Optimal model-based decomposition of powertrain system design. In International Design
Engineering Technical Conferences and Computers and Information in Engineering Conference (Vol. 17162, pp. 165-172). American Society of Mechanical
Engineers.
A Mathematical Example of System Optimization
Product design objective: MIN f
Product performance: f = R1² + R2²

System level (Ps):
• System-level responses: R1 = (R3² + x4⁻² + x5²)^(1/2), R2 = (R6² + x5² + x7²)^(1/2)
• Local design variables: x4, x5, x7
• Constraints: (R3⁻² + x4²)/x5² ≤ 1, (x5² + R6⁻²)/x7² ≤ 1, and R3, R6, x4, x5, x7, x11 ≥ 0

Subsystem level:
• Pss1: response R3 = (x8² + x9⁻² + x10⁻² + x11²)^(1/2); local design variables x8, x9, x10; constraints (x8² + x9²)/x11² ≤ 1, (x8⁻² + x10²)/x11² ≤ 1, and x8, x9, x10, x11 ≥ 0
• Pss2: response R6 = (x11² + x12² + x13² + x14²)^(1/2); local design variables x12, x13, x14; constraints (x11² + x12⁻²)/x13² ≤ 1, (x11² + x12²)/x14² ≤ 1, and x12, x13, x14, x11 ≥ 0
• Link variable shared by the two subsystems: x11
ATC Formulation
Product performance: f = R1² + R2². Link variable: x11.

Upper level (Ps):
Min: (R1² + R2²) + ε1 + ε2 + ε3
s.t. (x11 − x11,s1^L)² + (x11 − x11,s2^L)² ≤ ε1
     (R3 − R3^L)² ≤ ε2
     (R6 − R6^L)² ≤ ε3
Here the superscript L marks values passed up from the lower level, and the superscript U below marks targets passed down from the upper level.

Lower level:
Pss1 Min: (R3 − R3^U)² + (x11 − x11^U)²
Pss2 Min: (R6 − R6^U)² + (x11 − x11^U)²

Iterative procedure:
(1) Conduct optimization at the upper level, Ps, and pass the targets R3^U, R6^U, x11^U down to the lower level.
(2.1) Conduct optimization at the lower level, Pss1:
• Design variables: x8, x9, x10, x11_L1
• Record the optimization result
• R3_L and x11_L1 are obtained and passed back to the upper level
(2.2) Conduct optimization at the lower level, Pss2:
• Design variables: x12, x13, x14, x11_L2
• Record the optimization result
• R6_L and x11_L2 are obtained and passed back to the upper level
Converge? If no, repeat from (1); if yes, end.
Implementation in MATLAB
• Upper-level optimization (Ps): Upper_obj.m, Upper_cons.m; analysis models Upper_model1.m (computes R1) and Upper_model2.m (computes R2)
• Lower-level optimization, Pss1: lower_obj1.m, lower_cons1.m; analysis model lower_model1.m (computes R3)
• Lower-level optimization, Pss2: lower_obj2.m, lower_cons2.m; analysis model lower_model2.m (computes R6)
(1) Revise the code and use a "while" or "for" loop to automate the iterative search process (a minimal loop sketch is given at the end of this section):
• Define convergence criteria to stop the iterative search process;
• If you choose to use the “for” loop, the code should return a warning message if it reaches the maximum iteration number;
• Generate proper plots to visualize the search history.
(2) After you automate the search process, conduct a parametric study. Change the parameter named “weight” in the upper-level objective function (upper_obj.m), and report its impact on:
• The optimization result;
• The time to converge;
• The consistency in link variable values (i.e., “accuracy”).
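A minimal sketch of the automated ATC loop with a "while" loop and a convergence check. The handles upper_problem, lower_problem1, and lower_problem2 are placeholders standing in for the actual calls to the course files (Upper_obj.m, lower_obj1.m, and so on), whose exact interfaces are not shown on the slides.

% Automated ATC iteration (illustrative structure only)
% Stand-ins so the sketch runs; replace them with the real Ps / Pss1 / Pss2 optimizations
upper_problem  = @(R3L, R6L, x11a, x11b) struct('R3', R3L, 'R6', R6L, 'x11', (x11a + x11b)/2);
lower_problem1 = @(t) deal(0.9*t.R3 + 0.1, 0.9*t.x11 + 0.1);
lower_problem2 = @(t) deal(0.9*t.R6 + 0.1, 0.9*t.x11 + 0.1);
tol = 1e-4;  max_iter = 50;  gap = inf;  iter = 0;          % convergence settings (assumed)
R3_L = 1;  R6_L = 1;  x11_s1 = 1;  x11_s2 = 1;              % initial lower-level results (assumed)
while gap > tol && iter < max_iter
    iter = iter + 1;
    % (1) upper-level optimization: produce targets R3, R6, x11 for the lower level
    targets = upper_problem(R3_L, R6_L, x11_s1, x11_s2);
    % (2.1)-(2.2) lower-level optimizations: return responses and link-variable values
    [R3_L, x11_s1] = lower_problem1(targets);
    [R6_L, x11_s2] = lower_problem2(targets);
    % convergence criterion: consistency between targets and lower-level results
    gap = abs(targets.R3 - R3_L) + abs(targets.R6 - R6_L) + ...
          abs(targets.x11 - x11_s1) + abs(targets.x11 - x11_s2);
    fprintf('iteration %d: consistency gap = %.3e\n', iter, gap);
end
if iter >= max_iter
    warning('ATC loop reached the maximum number of iterations without converging.');
end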