Advanced Motion Control of Four In-wheel Motor Actuated Vehicles*


Huanning Yang‡, Jieshu Wang‡, Boxiong Yang‡, Xiao Hu‡, Ping Wang, Yunfeng Hu, and Hong Chen

Abstract— Advanced motion controllers combining model-based and model-free methods are proposed to solve two problems of the benchmark challenge organized by IEEE CDC 2023. The proposed controllers allow the vehicle to track the target speed and trajectory while effectively suppressing its vertical acceleration on rough roads. First, the vehicle dynamics model is established, and its unknown parameters are identified via the nonlinear least squares method. Second, a longitudinal motion controller is proposed based on nonlinear control methods to track the target speed. Finally, the vehicle states, including the vertical acceleration, body roll angle, and body pitch angle, are suppressed by torque allocation based on model-free reinforcement learning. Co-simulations of Modelon Impact and MATLAB/Simulink have been performed, and the results show that our methods are initially effective and promising.

[Fig. 1. The diagram of the proposed control strategy: the target speed enters a model-based controller that outputs the total torque; a model-free reinforcement learning block allocates it into motor torques for the vehicle, with the longitudinal speed and vertical acceleration fed back.]
I. INTRODUCTION

Automation and electrification are two important trends in the development of the automotive industry, and vehicle motion control is the basis for the realization of autonomous driving technologies. Although this problem has been addressed in the field of vehicle engineering over the past decades, it has not attracted much attention in the academic field of control and decision-making. Therefore, the study of motion control of four in-wheel motor-actuated vehicles is meaningful and important. The two problems of the benchmark challenge organized by IEEE CDC 2023 are as follows:

• Problem 1: For acceleration and braking on rough, wet, straight roads, the controller must not only track the target speed but also regulate the vehicle states and body angles with minimum energy consumption.
• Problem 2: For the ISO double-lane-change problem on rough and uneven roads, the controller must not only track the desired route as closely as possible but also regulate the driving state and body posture of the vehicle with minimum energy consumption.

The control strategies for longitudinal motion mainly include model-based optimal control, neural network control, and fuzzy control. These methods can realize accurate tracking of the longitudinal speed when the vehicle parameters are known; however, they have not yet considered whether the speed tracking remains effective when the vehicle model is unknown. There are also many studies focusing on lateral-longitudinal-vertical vehicle motion control. F. Xiao et al. [1] studied the three-dimensional stability region and proposed an integrated control framework of active front-wheel steering, active suspension, and direct yaw moment control. J. Zhao et al. [2] proposed a new integrated controller with a three-layer recursive structure to coordinate the three interactions. S. Zhao et al. [3] proposed a multilevel recursive control theory realizing the function decoupling of the vehicle chassis system.

Reinforcement learning is the branch of machine learning that emphasizes exploring actions and learning from the environment so as to maximize the expected return. Its basic principle is to learn an optimal policy through trial and error and constant interaction with the environment, continually revising the agent's policy until the cumulative reward is maximized or a specified goal is achieved. Q-learning is a typical value-based reinforcement learning algorithm; it requires few parameters, needs no model of the environment, can be implemented offline, and is one of the most effective algorithms currently applied to four-wheel-drive vehicle path planning.
*This work was supported by the National Natural Science Foundation of China under Grant 62073152. Corresponding author: Ping Wang.
‡Huanning Yang, Jieshu Wang, Boxiong Yang, and Xiao Hu contributed equally to this work and should be regarded as co-first authors.
Huanning Yang, Jieshu Wang, Boxiong Yang, Ping Wang, and Yunfeng Hu are with the College of Communication Engineering, Jilin University, 130025 Changchun, China, {yanghn22, wangjs23, yangbx22}@mails.jlu.edu.cn, {wangping12, huyf}@jlu.edu.cn.
Xiao Hu is with the School of Artificial Intelligence, Jilin University, 130015 Changchun, China, [email protected].
Hong Chen is with the College of Communication Engineering, Jilin University, 130025 Changchun, China, also with the School of Artificial Intelligence, Jilin University, 130015 Changchun, China, and also with the College of Electronic and Information Engineering, Tongji University, 518060 Shanghai, China, [email protected].

II. METHODS

The diagram of the proposed control strategy is shown in Fig. 1. First, a model-based control strategy built on the longitudinal model is adopted to obtain the total torque demand; the unknown parameters of the longitudinal model are identified using the nonlinear least squares method. Then, reinforcement learning is used to allocate the total required torque so as to reduce energy consumption and suppress the vertical acceleration.

The model-based part of the strategy identifies the vehicle's longitudinal model and obtains the total torque demand of the vehicle. The longitudinal motion controller is designed within the framework of the output feedback nonlinear control method.
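For illustration, a minimal Python sketch of these two steps is given below, assuming a simple longitudinal model of the form m · dv/dt = T_total / r_w − (c0 + c1 · v²); the model structure, parameter values, and the proportional gain are assumptions made for this sketch, not the paper's exact model or controller.

```python
# Minimal sketch (assumed model, not the paper's exact formulation):
#   m * dv/dt = T_total / r_w - (c0 + c1 * v**2)
# where c0, c1 are unknown resistance parameters to be identified.
import numpy as np
from scipy.optimize import least_squares

m, r_w = 1500.0, 0.3  # assumed vehicle mass (kg) and wheel radius (m)

def residuals(params, v, v_dot, T_total):
    """Difference between measured and model-predicted acceleration."""
    c0, c1 = params
    v_dot_model = (T_total / r_w - (c0 + c1 * v**2)) / m
    return v_dot - v_dot_model

# In practice v, v_dot, T_total would be logged from the co-simulation;
# here they are synthesized so the sketch runs stand-alone.
rng = np.random.default_rng(0)
v = rng.uniform(0.0, 20.0, 200)
T_total = rng.uniform(0.0, 800.0, 200)
v_dot = (T_total / r_w - (120.0 + 0.45 * v**2)) / m  # synthetic "measurements"

c0_hat, c1_hat = least_squares(residuals, x0=[50.0, 0.1],
                               args=(v, v_dot, T_total)).x

def total_torque_demand(v, v_ref, k=1.2):
    """Speed tracking: cancel the identified resistances, add P feedback."""
    a_des = k * (v_ref - v)
    return r_w * (m * a_des + c0_hat + c1_hat * v**2)
```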

The specific method of reinforcement learning is to use the Q-learning method, which uses the constraints on vertical acceleration and energy consumption as the reward function. The specific algorithm is shown in Algorithm 1, where c is the step size, r is the discount factor, s is the state of the vehicle, a is the distributed torque, and R is the reward.

Algorithm 1 Q-learning Algorithm
1: Initialize Q(s, a) with zeros, initialize R_j with zero
2: for i = 1 : N do
3:   Initialize s_0
4:   for j = 1 : M do
5:     Choose action a_j, observe R_j and s_{j+1} from the environment
6:     Update the Q-table: Q(s_j, a_j) ← Q(s_j, a_j) + c [R_j + r · max_a Q(s_{j+1}, a) − Q(s_j, a_j)]
7:   end for
8: end for
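A tabular Python implementation of Algorithm 1 might look like the following sketch; the state and action dimensions, the hyperparameter values, and the stub environment standing in for the Modelon Impact co-simulation are placeholders introduced here, not the authors' setup.

```python
import numpy as np

# Placeholder dimensions and hyperparameters for the sketch.
N_STATES, N_ACTIONS = 20, 9
N_EPISODES, M_STEPS = 500, 200
c, r = 0.1, 0.95  # step size c and discount factor r, as in Algorithm 1

class StubEnv:
    """Stand-in for the Modelon Impact co-simulation (illustrative only)."""
    def reset(self):
        return 0                              # initial discrete state s_0
    def step(self, a):
        s_next = np.random.randint(N_STATES)  # random transition for the sketch
        R = -abs(a - N_ACTIONS // 2)          # dummy reward
        return R, s_next

env = StubEnv()
Q = np.zeros((N_STATES, N_ACTIONS))           # 1: initialize Q(s, a) with zeros

def epsilon_greedy(s, eps=0.1):
    """Explore with probability eps, otherwise exploit the current Q-table."""
    if np.random.rand() < eps:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[s]))

for i in range(N_EPISODES):                   # 2: for i = 1 : N
    s = env.reset()                           # 3: initialize s_0
    for j in range(M_STEPS):                  # 4: for j = 1 : M
        a = epsilon_greedy(s)                 # 5: choose a_j, observe R_j, s_{j+1}
        R, s_next = env.step(a)
        # 6: Q(s_j, a_j) <- Q(s_j, a_j) + c [R_j + r * max_a Q(s_{j+1}, a) - Q(s_j, a_j)]
        Q[s, a] += c * (R + r * np.max(Q[s_next]) - Q[s, a])
        s = s_next                            # 7-8: end inner and outer loops
```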

The reward R can be calculated as

R = j_v3 + pena,    (1)

pena = −1000, if |a_z| > 0.4, |ϕ| > 0.014, or |θ| > 0.005; pena = 0, otherwise,    (2)

where j_v3 is the energy consumption, pena is the penalty for exceeding the constraints, a_z is the vertical acceleration, ϕ is the pitch angle, and θ is the roll angle.
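Concretely, the reward of (1)–(2) can be evaluated as in the sketch below; we read the comma-separated constraint list in (2) as a logical OR (any violation triggers the penalty), and j_v3 is assumed to be a precomputed energy-consumption term.

```python
def reward(j_v3, a_z, phi, theta):
    """R = j_v3 + pena, per equations (1)-(2).

    j_v3 : energy-consumption term (precomputed, sign per the paper's convention)
    a_z  : vertical acceleration (m/s^2)
    phi  : pitch angle (rad)
    theta: roll angle (rad)
    """
    # Any constraint violation triggers the penalty (our reading of (2)).
    violated = abs(a_z) > 0.4 or abs(phi) > 0.014 or abs(theta) > 0.005
    pena = -1000.0 if violated else 0.0
    return j_v3 + pena
```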

Decision-making is performed by constructing a Q-table, where each element measures the maximum expected cumulative payoff obtained when a given action is taken in a given state. The agent can therefore select the optimal action in each state according to the Q-table. From vehicle dynamics it is known that changes in road conditions are coupled to the vehicle through the vertical load, so in this paper the vertical acceleration is used as the state quantity in Q-learning, while the body pitch angle, roll angle, and front/rear axle allocation ratio enter the reward function, training an allocation strategy that controls the body attitude as well as the vehicle stability.
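As an illustration of this choice of state, the vertical acceleration can be binned into the discrete Q-table states and the greedy allocation action read off per state; the bin count and range below are assumptions of the sketch, not values from the paper.

```python
import numpy as np

N_STATES = 20
# 19 interior edges spanning the +/-0.4 m/s^2 constraint -> 20 bins;
# values outside the range fall into the two edge bins.
A_Z_EDGES = np.linspace(-0.4, 0.4, N_STATES - 1)

def state_index(a_z):
    """Map a continuous vertical acceleration to a discrete Q-table row."""
    return int(np.digitize(a_z, A_Z_EDGES))

def best_allocation(Q, a_z):
    """Greedy torque-allocation action for the current state."""
    return int(np.argmax(Q[state_index(a_z)]))
```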

III. RESULTS

The preliminary results of our work are given based on co-simulations of Modelon Impact and MATLAB/Simulink, as shown in Figs. 2 and 3. It should be noted that these preliminary results are obtained with a basic PID controller, not the methods described in Section II; we are developing the more advanced controllers of Section II, and new results will be given in the poster at the end of November. The co-simulation results show that the longitudinal speed tracks the target speed and that the vertical acceleration stays within its constraint range. In addition, the pitch angle and roll angle also remain below their constraints, satisfying the requirements.

[Fig. 2. Co-simulation results of Problem 1: speed (target vs. vehicle), vertical acceleration a_z, pitch/roll angles, and torque over 0–25 s.]

[Fig. 3. Co-simulation results of Problem 2: speed (target vs. vehicle), vertical acceleration a_z, pitch/roll angles, and torque over 0–10 s.]

IV. CONCLUSIONS

The research content of this paper is to design vehicle motion controllers for state suppression, energy conservation, and the tracking of speed and trajectory of four in-wheel-motor-actuated vehicles. The proposed methods will be developed further until the beginning of the Autonomous Driving Control Benchmark Challenge of IEEE CDC 2023.

REFERENCES

[1] F. Xiao, J. Hu, M. Jia, P. Zhu, and C. Deng, “A novel integrated control framework of AFS, ASS, and DYC based on ideal roll angle to improve vehicle stability,” Advanced Engineering Informatics, vol. 54, p. 101764, 2022.
[2] J. Zhao, P. K. Wong, X. Ma, and Z. Xie, “Chassis integrated control for active suspension, active front steering and direct yaw moment systems using hierarchical strategy,” Vehicle System Dynamics, vol. 55, no. 1, pp. 72–103, 2017.
[3] S.-e. Zhao, Y. Li, X. Qu et al., “Vehicle chassis integrated control based on multimodel and multilevel hierarchical control,” Mathematical Problems in Engineering, vol. 2014, pp. 1–13, 2014.
motion controllers for state suppression, energy conversation,
