Intelligent Optimization Algorithm for Master (2)
Algorithm for optimization problems
• Optimization problem example
Optimization problem example: a case from project scheduling
• Subject to (St): ft[i] = es[i] + d[i]
• If i is a predecessor of j (i in pa[j]), then es[j] >= ft[i]
• For any day t, the amount of type-k resource consumed by the activities i running on that day must not exceed the limit: ∑_i rs[i][k][t] ≤ rs_l[k] (a feasibility check is sketched below)
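A minimal sketch of checking these constraints, assuming a single resource type and made-up data for the durations d, predecessors pa, resource usage rs and limit rs_l (all names and values here are illustrative, not from the slides):

```python
# Feasibility check for the scheduling constraints above (illustrative data).
d    = {1: 2, 2: 3, 3: 1}          # duration of each activity
pa   = {3: [1, 2]}                 # predecessors: activity 3 needs 1 and 2 finished
rs   = {1: 2, 2: 3, 3: 1}          # per-day resource usage (one resource type)
rs_l = 4                           # daily resource limit

def is_feasible(es):
    """es: dict activity -> start day; True if all constraints hold."""
    ft = {i: es[i] + d[i] for i in es}                 # ft[i] = es[i] + d[i]
    # precedence: es[j] >= ft[i] for every predecessor i of j
    for j, preds in pa.items():
        if any(es[j] < ft[i] for i in preds):
            return False
    # resource limit: for every day t, usage of the running activities <= rs_l
    horizon = max(ft.values())
    for t in range(horizon):
        used = sum(rs[i] for i in es if es[i] <= t < ft[i])
        if used > rs_l:
            return False
    return True

print(is_feasible({1: 0, 2: 0, 3: 3}))   # False: day 0 uses 5 > 4
print(is_feasible({1: 0, 2: 2, 3: 5}))   # True
```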
It is not possible to solve the problem manually for this operation. A heuristic approach is the promising one for this problem.
How can we use intelligent algorithms (IA) to solve the optimization problem?
• Hill-climbing algorithm (competition between two individuals)
• Compare f(x0) with f(x0 + delta); the better of the two survives as the new x0 (see the sketch below)
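A minimal hill-climbing sketch: the current point competes against a perturbed copy of itself and the better one is kept. The toy objective and step size are illustrative assumptions.

```python
import random

def f(x):
    return -(x - 3.0) ** 2          # toy objective to maximize (optimum at x = 3)

x0 = 0.0
for _ in range(10000):
    delta = random.uniform(-0.1, 0.1)
    if f(x0 + delta) > f(x0):       # competition between the two candidates
        x0 = x0 + delta             # the better one survives
print(round(x0, 2))                 # close to 3.0
```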
Example
• Many individuals
• Crossover and mutation
GA process
• Step 1: Generate an individual answer (the answer should be a feasible answer)
• Step 2: Generate a population of answers
• Step 3: Build the objective function for the problem
• Step 4: Evaluate the population using the objective function
• Step 5: Select the feasible answers according to their fitness values
• Step 6: Crossover
• Step 7: Mutation
• Step 8: Go back to Step 4 (the whole loop is sketched below)
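A minimal sketch of Steps 1 to 8 on a toy problem (maximize the number of 1s in a bit string). Population size, number of generations and the fitness function are illustrative assumptions, not from the slides.

```python
import random

N_BITS, POP_SIZE, GENERATIONS = 20, 30, 50

def fitness(ind):                                        # Step 3: objective function
    return sum(ind)

pop = [[random.randint(0, 1) for _ in range(N_BITS)]     # Steps 1-2: feasible answers
       for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):                             # Step 8: loop back to Step 4
    scored = sorted(pop, key=fitness, reverse=True)      # Step 4: evaluate
    parents = scored[:POP_SIZE // 2]                     # Step 5: select by fitness
    children = []
    while len(children) < POP_SIZE:
        p1, p2 = random.sample(parents, 2)
        cut = random.randint(1, N_BITS - 1)
        child = p1[:cut] + p2[cut:]                      # Step 6: one-point crossover
        i = random.randrange(N_BITS)
        child[i] = 1 - child[i]                          # Step 7: bit-flip mutation
        children.append(child)
    pop = children

print(max(fitness(ind) for ind in pop))                  # close to N_BITS
```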
Variation: crossover and mutation for binary values
Variation: crossover and mutation for decimal values
Variation: mutation for decimal values
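A minimal sketch of the decimal-valued variation operators: arithmetic (blend) crossover between two parents and Gaussian mutation of one gene. The blend weight and mutation scale are illustrative assumptions; binary individuals would instead use one-point crossover and bit flips as in the GA sketch above.

```python
import random

def crossover(p1, p2):
    a = random.random()                               # blend weight in [0, 1]
    return [a * x + (1 - a) * y for x, y in zip(p1, p2)]

def mutate(ind, sigma=0.1):
    i = random.randrange(len(ind))
    child = ind[:]
    child[i] += random.gauss(0, sigma)                # Gaussian perturbation of one gene
    return child

p1, p2 = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
child = mutate(crossover(p1, p2))
print([round(v, 3) for v in child])
```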
Advantages and disadvantages
• Question-free (black box): no problem-specific knowledge such as gradients is required
• No guarantee of reaching the global optimum
• Many parameters to tune
• Operates slowly because of the repeated evolutionary operators
Several algorithms with few parameters and a simple evolution structure
• (1+1) ES
• Only mutation (see the sketch below)
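A minimal (1+1) ES sketch: one parent produces one child by Gaussian mutation and the better of the two becomes the next parent. The sphere objective and fixed step size are illustrative assumptions.

```python
import random

def f(x):                                    # minimize the sphere function
    return sum(v * v for v in x)

parent = [random.uniform(-5, 5) for _ in range(3)]
sigma = 0.3                                  # mutation step size
for _ in range(5000):
    child = [v + random.gauss(0, sigma) for v in parent]   # only mutation, no crossover
    if f(child) <= f(parent):                # keep the better of parent and child
        parent = child
print([round(v, 3) for v in parent])         # close to [0, 0, 0]
```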
Several algorithms with few parameters and a simple evolution structure
• (μ+λ) ES: μ parents, each parent produces λ children, all are evaluated, the best μ are selected, repeat (see the sketch below)
• Only mutation
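A minimal (mu+lambda) ES sketch: each of the mu parents produces lambda children by mutation, parents and children are evaluated together, and the best mu survive. The objective and parameter values are illustrative assumptions.

```python
import random

MU, LAM, DIM = 5, 10, 3

def f(x):
    return sum(v * v for v in x)              # minimize the sphere function

parents = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(MU)]
for _ in range(300):
    children = [[v + random.gauss(0, 0.3) for v in p]      # only mutation
                for p in parents for _ in range(LAM)]
    pool = parents + children                              # (mu + lambda) selection pool
    parents = sorted(pool, key=f)[:MU]                      # keep the best mu
print(round(f(parents[0]), 4))                              # small value near 0
```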
DE flow chart (more on mutation)
Differential evolution (DE)
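A minimal differential evolution sketch (the common DE/rand/1/bin scheme, used here as an illustration): the mutant vector is a base individual plus a scaled difference of two other individuals, followed by binomial crossover and greedy selection. The objective and parameter values are illustrative assumptions.

```python
import random

POP, DIM, F_SCALE, CR = 20, 3, 0.8, 0.9

def f(x):
    return sum(v * v for v in x)                      # minimize the sphere function

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
for _ in range(300):
    for i in range(POP):
        r1, r2, r3 = random.sample([j for j in range(POP) if j != i], 3)
        mutant = [pop[r1][d] + F_SCALE * (pop[r2][d] - pop[r3][d])   # differential mutation
                  for d in range(DIM)]
        trial = [mutant[d] if random.random() < CR else pop[i][d]    # binomial crossover
                 for d in range(DIM)]
        if f(trial) <= f(pop[i]):                     # greedy selection
            pop[i] = trial
print(round(min(f(x) for x in pop), 5))               # small value near 0
```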
PSO (competition and cooperation)
• Particle Swarm Optimization (PSO)
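A minimal PSO sketch showing the cooperation/competition idea: each particle is pulled toward its own best position and toward the swarm's best position. The inertia and acceleration coefficients and the objective are illustrative assumptions.

```python
import random

N, DIM, W, C1, C2 = 20, 2, 0.7, 1.5, 1.5

def f(x):
    return sum(v * v for v in x)                       # minimize the sphere function

pos   = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel   = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]                            # each particle's own best position
gbest = min(pbest, key=f)                              # the swarm's best position

for _ in range(200):
    for i in range(N):
        for d in range(DIM):
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])   # cognitive pull
                         + C2 * random.random() * (gbest[d] - pos[i][d]))     # social pull
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=f)
print(round(f(gbest), 5))                              # small value near 0
```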
What is a neural network (NN)?
Surrogate optimization
• To find an approximate function for the data, traditionally using a Gaussian process with a kernel function
Neural network (surrogate optimization)
• The concept of surrogate optimization
• To find an approximate function for the data, traditionally using a Gaussian process with a kernel function
• But a NN is more powerful at fitting the data
• (an example) … NN for optimization
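A minimal surrogate sketch, assuming scikit-learn is available: a Gaussian process with an RBF kernel is fit on a few expensive evaluations, and the cheap surrogate is then queried on a dense grid to suggest the next promising point. A neural network regressor could be swapped in for the GP, as the slide suggests.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_f(x):                         # stand-in for a costly objective
    return (x - 3.0) ** 2

X = np.array([[0.0], [1.0], [2.0], [5.0]])  # a few expensive samples
y = expensive_f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X, y)                                # fit the surrogate to the data

grid = np.linspace(0, 5, 501).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)   # cheap prediction and uncertainty everywhere
best = grid[np.argmin(mean)][0]
print(round(best, 2))                       # surrogate's estimate of the minimizer
```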
• State: (fire)
• Action: (use oil), (use water)
• Rw_f(state, action) = reward (sketched below)
• Rw_f(fire, use oil) = -50
• Rw_f(fire, use water) = 100
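A tiny sketch of the reward function Rw_f above as a lookup table; the numbers come from the fire example, and the dictionary encoding is an illustrative choice.

```python
reward = {
    ("fire", "use oil"):   -50,   # pouring oil on a fire is punished
    ("fire", "use water"):  100,  # using water is rewarded
}

def rw_f(state, action):
    return reward[(state, action)]

print(rw_f("fire", "use water"))   # 100
print(rw_f("fire", "use oil"))     # -50
```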
Using a Q table to store the knowledge
• The data are stored in a table with the results for each (state, action) pair
• Given the Q table, a greedy strategy selects the action for the current state of the environment
• Here, the states are discrete and independent in the fire example.
(Q table layout: one row per state, one column per action, e.g. Action a and Action b)
• For a trajectory s1, r1, s2, r2, …: v(s1) = r1 + dis * v(s2), where dis is the discount factor (see the sketch below)
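A minimal sketch of a Q table with greedy action selection, plus the discounted value recursion v(s1) = r1 + dis * v(s2). The table entries reuse the fire example; dis = 0.9 follows the later slides.

```python
dis = 0.9                                    # discount factor

q_table = {                                  # (state, action) -> q value
    ("fire", "use oil"):   -50,
    ("fire", "use water"):  100,
}

def greedy_action(state, actions=("use oil", "use water")):
    return max(actions, key=lambda a: q_table[(state, a)])

print(greedy_action("fire"))                 # 'use water'

def v(rewards):                              # rewards r1, r2, ... along a trajectory
    value = 0.0
    for r in reversed(rewards):
        value = r + dis * value              # v(s1) = r1 + dis * v(s2)
    return value

print(v([10, -100]))                         # 10 + 0.9 * (-100) = -80
```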
Monte Carlo Q table
• One episode: 2, left, 10, 1, left, -100, 0
• Transition 1: (state 2, left, reward 10, state 1)
• Transition 2: (state 1, left, reward -100, state 0) (state 0 is the end state)
• q(1, left) = -100 + dis * 0 = -100
• q(2, left) = 10 + 0.9 * (-100) = -80 (the backward computation is sketched below)
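A minimal sketch of this backward return computation: each transition is (state, action, reward, next state), and the Monte Carlo return is propagated from the end of the episode with discount dis = 0.9.

```python
dis = 0.9
episode = [(2, "left", 10, 1), (1, "left", -100, 0)]   # 2, left, 10, 1, left, -100, 0

q = {}
g = 0.0                                   # return after the end state
for state, action, reward, _next in reversed(episode):
    g = reward + dis * g                  # q(s,a) = r + dis * (return from next state)
    q[(state, action)] = g

print(q)                                  # {(1, 'left'): -100.0, (2, 'left'): -80.0}
```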
Update the knowledge
• q(1, left) = -100
• q(2, left) = -80
• New episode: 2, left, 10, 1, right, -100, 0
• Transition 1: (state 2, left, reward 10, state 1)
• Transition 2: (state 1, right, reward -100, state 0)
• q(1, right) = -100 + dis * 0 = -100
• q(2, left) = 10 + 0.9 * (-100) = -80
• Update the knowledge again, averaging the returns for each (state, action) pair:
• q(1, left) = -100, q(1, right) = -100, q(2, left) = (-80 - 80) / 2 = -80 (see the sketch below)
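A minimal sketch of the Monte Carlo update over both episodes: returns are computed backward per episode and the Q value of each (state, action) pair is the average of all returns observed for it, reproducing the numbers above.

```python
from collections import defaultdict

dis = 0.9
episodes = [
    [(2, "left", 10, 1), (1, "left",  -100, 0)],   # first episode
    [(2, "left", 10, 1), (1, "right", -100, 0)],   # new episode
]

returns = defaultdict(list)
for episode in episodes:
    g = 0.0
    for state, action, reward, _next in reversed(episode):
        g = reward + dis * g                        # backward discounted return
        returns[(state, action)].append(g)

q = {sa: sum(gs) / len(gs) for sa, gs in returns.items()}   # average over episodes
print(q)   # q(1,left)=-100, q(1,right)=-100, q(2,left)=(-80-80)/2=-80
```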
Monte Carlo Q table