Reinforcement Learning
MOUNTAIN CAR PROBLEM
USING TEMPORAL DIFFERENCE (TD)
& VALUE ITERATION (VI)
REINFORCEMENT LEARNING
ALGORITHMS
By
Muzammil Abdulrahman
&
Yusuf Garba Dambatta
Mevlana University, Konya, Turkey
2013
INTRODUCTION
The aim of the mountain car problem is for the
car to learn, using two continuous variables,
• position and
• velocity,
how to reach the top of the mountain in a
minimum number of steps.
Starting the car from rest, its engine alone is not
powerful enough to bring the car over the hill in front.
INTRODUCTION CONT.
To climb up the hill, the car needs to swing back
and forth inside the valley,
accelerating forward and backward in order
to gather momentum.
The agent receives a negative reward at every
time step in which the goal is not reached.
The agent has no information about the goal
until an initial success, so it must rely on
reinforcement learning methods.
In this project, we employed the TD Q-learning and
value iteration algorithms.
REINFORCEMENT LEARNING
Reinforcement learning is a distinct class of learning
algorithms in the field of machine learning,
where only an estimate of the correctness of the
answer is provided to the system.
It deals with how an agent should take actions
in an environment so as to maximize a
cumulative reward.
It is learning from interaction,
and it is goal-oriented learning.
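As a point of reference, the cumulative reward the agent tries to maximize is usually written as a discounted return (a standard formulation, not stated explicitly on the slide):

\[
G_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1}, \qquad 0 \le \gamma \le 1
\]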
CHARACTERISTICS
No direct training examples – (delayed) rewards instead
Goal-oriented learning
Learning about, from, and while interacting with
an external environment
Need to balance exploration of the environment with exploitation
The environment might be stochastic and/or unknown
The actions the agent takes while learning affect its future rewards
EXAMPLES
Chess Master
UNSUPERVISED LEARNING
SUPERVISED LEARNING
TEMPORAL DIFFERENCE (TD)
Temporal difference (TD) learning is a
prediction method.
It has been mostly used for solving the
reinforcement learning problem.
TD learning is a combination of Monte Carlo
ideas and dynamic programming (DP) ideas.
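As a reference (not shown on the original slide), the basic TD(0) prediction update, which blends a sampled reward (the Monte Carlo idea) with bootstrapping from the current estimate (the DP idea), is:

\[
V(s) \leftarrow V(s) + \alpha \left[ r + \gamma V(s') - V(s) \right]
\]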
TD Q-LEARNING
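The original slide presents the update rule as a figure; in its standard form, the TD Q-learning update that the algorithm below refers to as "the above equation" is:

\[
Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]
\]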
TD Q-LEARNING ALGORITHM
Initialize Q values for all states 's' and actions 'a'
Obtain the current state
Select an action according to the current state
Implement the selected action and obtain an immediate
reward and the next state
Update the Q function according to the above equation
Update the system state
Stop the algorithm if the maximum number of iterations
is reached (see the sketch below)
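A minimal, illustrative Python sketch of this loop on a discretized state space. This is a reconstruction, not the authors' original code: the state ranges and the three actions follow these slides, while the dynamics, the discretization, and the constants (ALPHA, GAMMA, episode counts) are assumptions.

```python
# Minimal TD Q-learning sketch for the mountain car problem (illustrative
# reconstruction). Position/velocity ranges and the three actions follow the
# slides; the dynamics are the commonly used mountain-car equations (assumed).
import numpy as np

N_POS, N_VEL = 40, 40                    # discretization of position and velocity
ACTIONS = (-1, 0, 1)                     # backward, neutral, forward
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.01  # learning rate, discount, exploration

def step(pos, vel, action):
    """One step of the assumed mountain-car dynamics."""
    vel = np.clip(vel + 0.001 * action - 0.0025 * np.cos(3 * pos), -0.07, 0.07)
    pos = np.clip(pos + vel, -1.5, 0.55)
    if pos <= -1.5:                      # hitting the back wall kills the velocity
        vel = 0.0
    done = pos >= 0.55                   # goal: top of the hill in front
    return pos, vel, (0.0 if done else -1.0), done

def discretize(pos, vel):
    """Map the two continuous variables to a grid cell."""
    i = int((pos + 1.5) / (0.55 + 1.5) * (N_POS - 1))
    j = int((vel + 0.07) / 0.14 * (N_VEL - 1))
    return i, j

Q = np.zeros((N_POS, N_VEL, len(ACTIONS)))

for episode in range(2000):
    pos, vel, done = -0.5, 0.0, False    # start at rest inside the valley
    for t in range(5000):                # cap on steps per episode
        s = discretize(pos, vel)
        # epsilon-greedy action selection
        if np.random.rand() < EPSILON:
            a = np.random.randint(len(ACTIONS))
        else:
            a = int(np.argmax(Q[s]))
        pos, vel, reward, done = step(pos, vel, ACTIONS[a])
        s2 = discretize(pos, vel)
        # TD Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + GAMMA * np.max(Q[s2]) * (not done)
        Q[s][a] += ALPHA * (target - Q[s][a])
        if done:
            break
```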
ε-GREEDY SELECTION (Q, S, EPSILON)
The agent selects an action from the Q table
based on the ε-greedy strategy.
Initially, epsilon = 0.01, which is the probability of
selecting a random action.
It becomes approximately equal to zero when the
car agent has fully learned how to climb the
front hill (no randomness, because it has learned
the best action).
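A minimal helper matching the (Q, S, EPSILON) signature named in this slide's title; the body is an assumption, written to mirror the selection step used in the Q-learning sketch above.

```python
import numpy as np

def e_greedy_selection(Q, s, epsilon):
    """Return a random action index with probability epsilon, else the greedy one."""
    if np.random.rand() < epsilon:
        return np.random.randint(Q.shape[-1])   # explore: pick any action
    return int(np.argmax(Q[s]))                 # exploit: pick the best-known action
```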
STATE, ACTION & REWARD
State: the state consists of position and speed. Position
lies in the range -1.5 to 0.55 and speed lies in
the range -0.07 to 0.07.
Action: the agent takes one of these 3 actions at every
time step: forward, backward, or neutral (forward
acceleration = +1 m/s², backward deceleration
= -1 m/s², neutral = 0 m/s²).
Reward: the agent receives a reward of -1 for all
actions except when the agent reaches the goal
state, where it receives a reward of 0.
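The slides do not give the equations of motion; the code sketches in this document assume the commonly used mountain-car dynamics (after Sutton and Barto), clipped to the position and velocity bounds quoted above, with the slide's three actions mapped to a ∈ {-1, 0, +1}:

\[
v_{t+1} = \operatorname{clip}\big(v_t + 0.001\,a_t - 0.0025\cos(3 x_t),\ -0.07,\ 0.07\big), \qquad
x_{t+1} = \operatorname{clip}\big(x_t + v_{t+1},\ -1.5,\ 0.55\big)
\]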
VALUE ITERATION
The value iteration algorithm, which is also called
backward induction,
combines policy improvement and a truncated
policy evaluation into a single update step.
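In its standard form (not reproduced on the original slide), that single update step is the Bellman optimality backup:

\[
V_{k+1}(s) = \max_{a} \sum_{s'} P(s' \mid s, a)\,\big[ R(s, a, s') + \gamma V_k(s') \big]
\]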
VALUE ITERATION ALGORITHM
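The original slides show the algorithm as a figure. A minimal, illustrative Python sketch on the same discretized mountain-car model is given below; it is a reconstruction, not the authors' code, and assumes deterministic dynamics, the grid resolution from the TD sketch, and the convergence threshold quoted on the later slides.

```python
# Minimal value iteration sketch on a discretized mountain-car model
# (illustrative reconstruction; dynamics and discretization are assumptions).
import numpy as np

N_POS, N_VEL = 40, 40
ACTIONS = (-1, 0, 1)
GAMMA, THETA = 0.99, 1e-4                # THETA is the slides' threshold epsilon

positions = np.linspace(-1.5, 0.55, N_POS)
velocities = np.linspace(-0.07, 0.07, N_VEL)

def step(pos, vel, action):
    """Assumed deterministic mountain-car dynamics (same as in the TD sketch)."""
    vel = np.clip(vel + 0.001 * action - 0.0025 * np.cos(3 * pos), -0.07, 0.07)
    pos = np.clip(pos + vel, -1.5, 0.55)
    done = pos >= 0.55
    return pos, vel, (0.0 if done else -1.0), done

def nearest(pos, vel):
    """Index of the grid cell closest to a continuous state."""
    return int(np.abs(positions - pos).argmin()), int(np.abs(velocities - vel).argmin())

V = np.zeros((N_POS, N_VEL))
while True:
    V_new = np.empty_like(V)
    for i, pos in enumerate(positions):
        for j, vel in enumerate(velocities):
            if pos >= 0.55:                              # goal states keep value 0
                V_new[i, j] = 0.0
                continue
            backups = []
            for a in ACTIONS:                            # Bellman optimality backup
                p2, v2, r, done = step(pos, vel, a)
                backups.append(r + GAMMA * (0.0 if done else V[nearest(p2, v2)]))
            V_new[i, j] = max(backups)
    error = np.max(np.abs(V_new - V))                    # convergence error of this sweep
    V = V_new
    if error < THETA:                                    # terminate below the threshold
        break
```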
GRAPH
VI RESULTS
The graph below shows the convergence error
over iterations
VI CONT.
Figure 6 shows the graph of optimal positions and velocities over time on top,
while the bottom one displays the car learning in the mountain.
VI CONT.
The first episode records the highest error.
This is because the error is the difference
between the current (updated) value function and the
previous one, i.e. Error = V′(s) - V(s).
But initially the previous value function is 0,
hence Error = V′(s).
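Restated in iteration-indexed notation (assuming, as in the sketch above, the maximum absolute change over states; the slides only say "difference"):

\[
\text{Error}_k = \max_{s} \big| V_{k+1}(s) - V_k(s) \big|, \qquad \text{stop when } \text{Error}_k < \varepsilon = 10^{-4}
\]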
VI CONT.
At subsequent episodes, the error keeps
decreasing, because each sweep changes the
value function less than the previous one.
At convergence, the error approaches 0 and falls
below the threshold value (ε = 0.0001), which is
the termination criterion for this project.
Finally, the optimal policy is returned.
VI CONT.
The graphs below show the optimal positions
and velocities over time.
The first graph is that of the optimal positions
over time.
It simply shows the optimal positions attained by
the car as it attempts to reach the goal state at
different times.
CONT.
Also, the second graph shows the optimal
velocities attained by the car as it attempts to
reach the goal state at different times.
The car initially accelerates from its rest position to
reach a position of -0.2; it then swings back to
gather enough momentum, attaining a
position of -0.95; finally it accelerates forward
again and reaches the goal state.
CONCLUSION
In this project, the temporal difference and value
iteration learning algorithms were implemented for the
mountain car problem. Both algorithms
converged, determining the optimal policy for
reaching the goal state.