Nonlinear Dynamics and Machine Learning for Robotic Control Systems in IoT Environments
1 Faculty of Technology and Technical Sciences, University St. Kliment Ohridski, 7000 Bitola, North Macedonia
2 Faculty of Technical Science, Mother Teresa University, 1000 Skopje, North Macedonia;
[email protected]
3 Faculty of Food Technology and Biotechnology, University of Zagreb, Pierottijeva 6, 10000 Zagreb, Croatia;
[email protected]
* Correspondence: [email protected]
Abstract: This paper presents a novel approach to robotic control by integrating nonlinear dynamics
with machine learning (ML) in an Internet of Things (IoT) framework. This study addresses the
increasing need for adaptable, real-time control systems capable of handling complex, nonlinear
dynamic environments, and highlights the role of machine learning in meeting that need. The proposed hybrid control system
is designed for a 20 degrees of freedom (DOFs) robotic platform, combining traditional nonlinear
control methods with machine learning models to predict and optimize robotic movements. The
machine learning models, including neural networks, are trained using historical data and real-
time sensor inputs to dynamically adjust the control parameters. Through simulations, the system
demonstrated improved accuracy in trajectory tracking and adaptability, particularly in nonlinear
and time-varying environments. The results show that combining traditional control strategies with
machine learning significantly enhances the robot’s performance in real-world scenarios. This work
offers a foundation for future research into intelligent control systems, with broader implications for
industrial applications where precision and adaptability are critical.
Figure 1. Architecture of the IoT and a robot.
Figure 2. Architecture of mobile robot and code for generating flier object.
In this case, however, due to the complexity of the humanoid structure, multiple
kinematic chains are required, each originating from specific segments of the mechanism.
Figure 2 depicts the mechanism as consisting of three kinematic chains, indicated by curved
lines with arrows showing the direction of the chain extension. Chain I comprises the
pelvis, torso, and head; chain II, the pelvis, torso, and right arm; and chain III, the pelvis,
torso, and left arm.
Each chain is composed of segments (numbered: torso—4, head—7, right upper arm—10,
right forearm—12, right hand—14, left upper arm—17, left forearm—19, left hand—21) in
Figure 2. All other segments are imaginary, with approximately zero dimensions and negligible
dynamic characteristics, so as not to affect the real part of the mechanism (their masses m and
moments of inertia J are set to 0). This division into kinematic chains is a functional
approach for this model.
The head and torso segments are separated, introducing three additional degrees of
freedom between them, replicating realistic movement. A rotational degree of freedom
around the z-axis is introduced between the pelvis and torso, providing a third degree of
freedom at the waist. The arm complexity is increased by adding two more degrees of
freedom at the shoulders (allowing rotation around the y-axis) and incorporating triangular
joints along the z-axis in both arms. The model includes hands, which can rotate relative to
the forearm, achieved through a combination of rotations around the x- and y-axes. The
resulting structure thus provides a total of N = 20 + 6 = 26 degrees of freedom.
The state of the robotic system is defined by the vector X [23,39], which includes the
joint angles, angular velocities, and accelerations. The pose of a base segment in space is
given by the following equation:
X = [ x, y, z, φ, θ, ψ] T (1)
The vector contains six components that describe the position and orientation in a
three-dimensional space: x, y, z represent the Cartesian coordinates of the position in space;
φ, θ, ψ represent angles that are typically Euler angles, describing the orientation of the
segment in space (φ is the roll rotation about the x-axis; θ is the pitch rotation about the
y-axis; and ψ is the yaw rotation about the z-axis).
φ(roll) = atan2(2·(w·x + y·z), 1 − 2·(x² + y²))   (2)
where w, x, y, z here denote the components of the quaternion describing the orientation.
This vector describes the spatial positioning and orientation of an object (such as a
robot) in terms of the translational coordinates x, y, z and rotational coordinates φ, θ, ψ. It is
used for determining the location and orientation in space, suitable for tasks like navigation,
positioning, and alignment in a fixed reference frame. Its purpose is to provide a general
pose in 3D space, useful for external tasks and interactions, whereas a state vector that
includes joint dynamics is more specialized for internal control and analysis within robotic
systems and is given in Equation (5), which includes the joint angles, angular velocities,
and accelerations. This would indeed be a more detailed and dynamic focused version of a
state vector in robotics.
Q = [X q]^T = [x, y, z, φ, θ, ψ, q1, q2, q3, ..., qn]^T   (5)
This vector typically encompasses not only the positions of the joints but also their
velocities and accelerations over time, making it highly suitable for dynamic modeling
and control purposes. The dynamic model, governed by nonlinear dynamic equations
derived from the Euler–Lagrange formulation [40,41], for a robotic system with n degrees
of freedom (DOFs) is described as follows:
H(q)·q̈ + h(q, q̇) + g(q) = τ   (6)
where
H(q) is the inertia matrix, influenced by the robot's configuration;
q̈ represents the joint accelerations;
h(q, q̇) represents the Coriolis and centrifugal forces, dependent on both the positions and velocities of the joints;
g(q) is the gravitational force vector, depending solely on the configuration;
τ is the control input vector (torques or forces).
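To make the structure of Equation (6) concrete, the sketch below evaluates the required joint torques for a hypothetical two-joint system; the inertia, Coriolis, and gravity terms are illustrative stand-ins rather than the actual 20 DOFs model.

```python
import numpy as np

def inertia(q):
    # Illustrative configuration-dependent inertia matrix H(q) for two joints
    return np.array([[2.0 + np.cos(q[1]), 0.5],
                     [0.5,                1.0]])

def coriolis(q, dq):
    # Illustrative Coriolis/centrifugal term h(q, dq)
    return np.array([-0.3 * np.sin(q[1]) * dq[0] * dq[1],
                      0.1 * np.sin(q[1]) * dq[0] ** 2])

def gravity(q):
    # Illustrative gravity vector g(q)
    return np.array([9.81 * np.cos(q[0]), 4.9 * np.cos(q[0] + q[1])])

def inverse_dynamics(q, dq, ddq):
    """Equation (6): tau = H(q)*ddq + h(q, dq) + g(q)."""
    return inertia(q) @ ddq + coriolis(q, dq) + gravity(q)

tau = inverse_dynamics(np.array([0.2, 0.4]), np.array([0.1, -0.1]), np.array([0.5, 0.0]))
print(tau)  # required joint torques for the commanded accelerations
```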
The robot’s velocity motion is governed by the following differential equations:
cosθ 0
. v
q = sinθ 0 (7)
ω
0 1
where q represents the joint positions, θ represents the orientation, υ represents the linear
velocities, and ω represents the angular velocities. The key engineered features include the
Euler angles (derived from quaternion orientation data), total angular velocity ( ωt ), and
total linear acceleration (v t ).
ωt = √((angular_velocity_X)² + (angular_velocity_Y)² + (angular_velocity_Z)²)   (8)
vt = √((linear_acceleration_X)² + (linear_acceleration_Y)² + (linear_acceleration_Z)²)   (9)
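As a sketch of the feature engineering in Equations (8) and (9), the snippet below computes the total angular velocity and total linear acceleration from raw sensor columns; the column names are assumptions chosen to mirror the formulas.

```python
import numpy as np
import pandas as pd

# Hypothetical sensor frame; column names mirror Equations (8) and (9).
df = pd.DataFrame({
    "angular_velocity_X": [0.10, 0.12], "angular_velocity_Y": [0.05, 0.04],
    "angular_velocity_Z": [0.01, 0.02], "linear_acceleration_X": [0.30, 0.20],
    "linear_acceleration_Y": [0.10, 0.00], "linear_acceleration_Z": [9.80, 9.70],
})

# Equation (8): total angular velocity
df["omega_t"] = np.sqrt(df["angular_velocity_X"]**2
                        + df["angular_velocity_Y"]**2
                        + df["angular_velocity_Z"]**2)

# Equation (9): total linear acceleration
df["v_t"] = np.sqrt(df["linear_acceleration_X"]**2
                    + df["linear_acceleration_Y"]**2
                    + df["linear_acceleration_Z"]**2)
print(df[["omega_t", "v_t"]])
```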
Analyzing the movement of the robot in the environment, the Jacobian matrix J = ∂X/∂Q is used, which gives the relationship between the point of the robot located in the local coordinate system and the speed of the entire mechanism in relation to the global coordinate system:
H(q)·q̈ + h(q, q̇) + g(q) = τ + Jᵀ(q)·F_ext   (10)
Jᵀ(q) is the transpose of the Jacobian matrix. This matrix relates the external forces and torques F_ext acting on the robot to the torques at the joints. The Jacobian J(q) transforms the joint velocity vectors into end-effector velocity vectors in the workspace, and its transpose Jᵀ(q) maps the external forces applied to the end-effector back to the equivalent joint torques.
F_ext represents the external forces. This vector represents the forces and torques from the environment acting on the robot. These could be due to interaction with objects, external loads, or any other environmental influence exerting force on the robot.
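The mapping from external end-effector forces to joint torques in Equation (10) amounts to a single matrix product; the Jacobian and force values below are illustrative only.

```python
import numpy as np

# Hypothetical 3x2 Jacobian (task space: x, y, yaw; two joints)
J = np.array([[-0.6, -0.2],
              [ 1.1,  0.4],
              [ 1.0,  1.0]])

F_ext = np.array([2.0, -1.0, 0.1])  # external force/torque acting on the end-effector

tau_ext = J.T @ F_ext               # equivalent joint torques, the J^T(q) F_ext term of Equation (10)
print(tau_ext)
```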
In an IoT-integrated robotic system, the motion of the robot can be monitored through
a connected device that communicates with the controller, publishing pose data in a
structured format, such as JSON sensor data for “current_pose”, which include information
about the robot’s position and orientation (the position of the robot’s tool reference frame is
described by the coordinates in Equation (1) and the orientation by Equation (5)).
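A hedged illustration of such a structured message is given below; the exact field names and layout of the "current_pose" payload are assumptions, since the paper does not list them.

```python
import json

# Hypothetical JSON payload published by the IoT-connected robot
message = """
{
  "current_pose": {
    "position": {"x": 0.42, "y": 1.05, "z": 0.00},
    "orientation": {"roll": 0.01, "pitch": -0.02, "yaw": 1.57},
    "timestamp": "2024-10-01T12:00:00Z"
  }
}
"""

pose = json.loads(message)["current_pose"]
# Assemble the pose vector of Equation (1): X = [x, y, z, roll, pitch, yaw]
X = [pose["position"][k] for k in ("x", "y", "z")] + \
    [pose["orientation"][k] for k in ("roll", "pitch", "yaw")]
print(X)
```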
Once all of these data are organized, they are ready to be used to create a virtual “flyer”
object. However, before using this object, it must first be created in the computer’s memory.
This is performed with the help of the k_flier constructor. When this constructor is called, it
creates a new flyer object (in this case, named flier20), based on the structured data from
the gen_links function.
In simple terms, the ‘gen_links’ function gathers all the complex information needed
to define the flyer, while the ‘k_flier’ constructor brings it to life in the digital environment,
allowing researchers to use this model in simulations and to further study its behavior.
The flowchart (Figure 3) effectively outlines a robust system for managing commands
in an automated or robotic system, ensuring that actions are taken based on successful
connections and accurate sensor data, with contingencies for failures. It emphasizes a
structured approach to operational readiness, monitoring, and execution, allowing for real-
time adjustments and precise control based on environmental feedback and operational requirements.
Figure 3. Command and control process flowchart for robotic operations.
2.2. Integrating Nonlinear Methods with Neural Networks
The diagram in Figure 4 outlines a hybrid control architecture that integrates traditional numerical calculations with neural network predictions to optimize robotic control. This architecture is designed to adaptively refine control strategies through iterative learning, making it highly effective for complex robotic applications. We proposed a hybrid control architecture that combines the strengths of nonlinear mathematical methods and neural networks. A neural network is used to optimize the robot's control strategy, such as path planning, joint control, or disturbance handling. The inputs to the neural network were as follows: the joint angles, joint velocities, external forces, etc. The outputs of the neural network were as follows: the predicted joint torques, optimized trajectories, or control signals.
Machine learning models are integrated to learn from the robot’s past trajectories and
to dynamically adjust the control parameters.
The inputs to the neural network are the variables that describe the current state of the robot and the environment. In this case, these include the following: the position of each joint in the robot, which can be represented as a vector Q (Equation (5)) of size n, where n is the number of joints, q = [q1, q2, ..., qn]; the speed at which each joint is moving, the vector dq = q̇ of size n, dq = q̇ = [q̇1, q̇2, q̇3, ..., q̇n]; and the external forces acting on the robot (including disturbances, gravity, or forces from contact with the environment), F = [Fx, Fy, Fz], where the components are forces in the three-dimensional space. The previous control signals or torques applied to the joints are used as inputs if the control strategy depends on past actions.
Figure 4. Hybrid control architecture workflow for robotic systems.
Applying Equation (5), the outputs of the neural network are the variables that represent the control actions the robot should take (Figure 5). The joint torques τ are applied to each joint to achieve the desired motion. With a vector τ of size n, y = [τ1, τ2, τ3, ..., τn]^T, and the desired angles qn (joint positions) and q̇n (joint velocities) for each joint, the robot should be guided by the neural network to a specific posture or trajectory.
From the MATLAB robot code, the inputs are as follows: q(7:26), the angles of the
robot’s joints; dq(7:26), the velocities of the robot’s joints; and FW, the external forces acting
on the robot.
Outputs: the torque values that should be applied to each joint to achieve the desired motion, as computed in the main loop, where the torques are currently calculated with physics-based models.
Neural network architecture: The input layer has a size 2n + m, where n is the number
of joints, and m is the number of force components. Hidden layers: the number of layers
and neurons are chosen per layer based on the complexity of the control problem. A
common starting point is two hidden layers with 64 neurons each. The output layer has
a size n, corresponding to the torques for each joint. Activation functions: typically, the
ReLU (Rectified Linear Unit) is used for the hidden layers and a linear activation for the
output layer.
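A minimal sketch of the architecture just described, assuming TensorFlow/Keras; the number of force components and the optimizer are assumptions.

```python
import tensorflow as tf

n_joints = 20                        # n: joints of the 20 DOFs platform
m_forces = 3                         # m: external force components (assumed Fx, Fy, Fz)
input_dim = 2 * n_joints + m_forces  # joint angles + joint velocities + forces

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(input_dim,)),
    tf.keras.layers.Dense(64, activation="relu"),          # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),          # hidden layer 2
    tf.keras.layers.Dense(n_joints, activation="linear"),  # one torque per joint
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```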
The control strategy for the robotic system involves using the torques predicted by a
neural network to precisely actuate the robot’s joints. This method incorporates real-time
data on the joint positions, velocities, and external forces to calculate the necessary torques
(Figure 6). These torques are then applied directly to the joints, enabling the robot to achieve
the desired movements and postures efficiently. This approach ensures both accuracy and
reduced energy consumption, enhancing the robot’s performance and durability.
Figure 6. Neural network framework for solving nonlinear dynamics optimization in robotic control.
Weight matrices W1, W2, ..., Wj are used in each layer of a neural network to transform the input data into a format that the network can use to make decisions or predictions. For instance, W1 is the weight matrix used in the first layer of the network to transform the initial input vector into the first hidden layer's output (h1). Similarly, W2 transforms h1 into h2, and so on. Neural networks use weight matrices in conjunction with nonlinear activation functions (like the ReLU) to introduce nonlinearity into the network. This nonlinearity allows the network to learn complex patterns beyond what a linear model could achieve.
During the training phase, these weight matrices are adjusted to minimize the loss function, the measure of how far the network's predictions are from the actual values. This is typically performed using optimization algorithms like gradient descent, where Wj is updated iteratively. This is described by the following equation:
Wj(t+1) = Wj(t) − η·(∂L/∂Wj)   (11)
Wj(t) denotes the weight matrix at the j-th layer at iteration t. The gradient ∂L/∂Wj tells us how to adjust Wj to reduce errors in the predictions, and η is the learning rate that determines how big each update should be.
W1 and Wj are crucial for transforming and processing the input data through each layer of the network, allowing the model to learn from the data and make increasingly accurate predictions or control decisions. The use of multiple weight matrices enables the network to handle a variety of tasks and adapt to different data patterns and complexities, which is especially vital in applications like robotic control where the dynamics can be highly variable and complex.
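The update rule of Equation (11) can be written out directly; the sketch below applies one gradient step to the two weight matrices of a small ReLU network trained with the MSE loss, using made-up dimensions and data.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (43, 64))   # first-layer weights (43 assumed input features)
W2 = rng.normal(0.0, 0.1, (64, 20))   # output-layer weights (20 joint torques)
eta = 1e-3                            # learning rate

x = rng.normal(size=(32, 43))         # a batch of robot states (placeholder data)
y = rng.normal(size=(32, 20))         # target torques (placeholder data)

h1 = np.maximum(0.0, x @ W1)          # ReLU hidden layer h1
y_hat = h1 @ W2                       # linear output (predicted torques)
err = (y_hat - y) / len(x)            # derivative of the MSE loss w.r.t. y_hat (up to a factor of 2)

grad_W2 = h1.T @ err                               # dL/dW2
grad_W1 = x.T @ ((err @ W2.T) * (h1 > 0.0))        # dL/dW1, backpropagated through the ReLU

W2 -= eta * grad_W2                   # Equation (11): W_j <- W_j - eta * dL/dW_j
W1 -= eta * grad_W1
```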
The main loop of the MATLAB code (Algorithm 1) performs the execution of the
planned motions:
Algorithm 1 Predict Function
1: while (t < T)
2:     t = i*dt;
3:     % Update the states
4:     Q_132 = [q; TetaA; TetaB; dq; dTetaA; dTetaB];
5:     options = odeset('RelTol', 1e-2, 'AbsTol', 1e-4, 'MaxOrder', 3);
6:     [tout, Q_132_out] = ode113(@ECCERdof_PomPod, [t t + dt], Q_132, options);
7:     Q_132 = Q_132_out(end,:)';
8:     q = Q_132(1:26);
9:     ...
10:    % Control adjustments
Here, the robot states (positions, velocities, etc.) are updated over time using a numerical integration method (ode113), which approximates the continuous time dynamics of the robot.
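A Python analogue of this update loop is sketched below using SciPy's LSODA integrator (which, like ode113, relies on Adams methods for non-stiff dynamics); the dynamics function is a simple stand-in for the full ECCERdof_PomPod model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def robot_dynamics(t, Q):
    # Stand-in for the full nonlinear model: 26 positions + 26 velocities
    n = len(Q) // 2
    q, dq = Q[:n], Q[n:]
    ddq = -0.5 * dq - 1.0 * q          # placeholder accelerations
    return np.concatenate([dq, ddq])

dt, T = 0.01, 3.5
Q = np.zeros(2 * 26)                   # initial state, as in Algorithm 1
t = 0.0
while t < T:
    sol = solve_ivp(robot_dynamics, (t, t + dt), Q,
                    method="LSODA", rtol=1e-2, atol=1e-4)
    Q = sol.y[:, -1]                   # state at the end of the step
    q = Q[:26]                         # updated joint positions
    # ... control adjustments would be applied here ...
    t += dt
```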
The loss function is defined as the mean squared error between the desired and actual joint angles. The training process involves backpropagation, where the gradients of the loss function are computed with respect to the network parameters and used to update the weights.
Loss function. The mean squared error (MSE) between the predicted torques ŷ and the actual torques y is determined as follows:
L = (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)²   (12)
where h is the step size and βi and αi are the coefficients of the Adams–Bashforth–Moulton predictor–corrector formulas underlying ode113, which depend on the order of the method.
This method is particularly effective for ensuring that the solution remains stable and
accurate over long simulation periods, making it ideal for scenarios that integrate both the
outputs from numerical methods and adjustments from machine learning to generate the
optimal control inputs for the robotic system.
The hybrid control strategy, combining traditional control (PID and inverse kinematics)
with machine learning for model-free adaptation, can be implemented within the controller
block. The transformation to robot coordinates and feedforward components would be
primarily handled by traditional control methods, while the controller can adapt using
machine learning to fine-tune responses based on sensor feedback.
The provided diagram (Figure 7) is consistent with the mathematical and control methodologies discussed in this paper. It effectively visualizes the hybrid control architecture where traditional control principles, numerical methods, and machine learning work together to optimize the robotic motion control.
Figure 7. Block diagram of the control architecture for a mobile robot.
The control structure can be broken down into the following stages:
Reference trajectory generator: This block generates the desired reference trajectory defined by xed, which is the desired position in the x-direction; yed, which is the desired position in the y-direction; ψed, which is the desired heading angle; and Vd, which is the desired velocity.
The feedforward block compensates for known disturbances and path deviations. In the methodology discussed earlier, this is supported by the traditional control methods, where the feedforward signal works as a primary reference, while the feedback corrects errors.
Coordinate transformation and guidance errors: This block transforms the global reference trajectory into the robot's local coordinate frame. It computes the errors between the desired trajectory (from the reference generator) and the current trajectory of the robot. The outputs include the following: eψ, which is the error in the heading angle, and ex, ey, which are the position errors in the x- and y-directions. The desired heading and velocity values ψed, Vd are passed to the controller.
The controller processes the errors and generates control signals u1 and u2 , which corre-
spond to the commands for the steering wheel and driving force, respectively. The controller
seeks to minimize the errors eψ and ex , ey by adjusting the steering and speed commands.
This is analogous to the inverse kinematics calculation q = f −1 ( p).
Controller: The controller block is responsible for generating the actuation commands
u1 (steering) and u2 (driving force). In the context of the earlier methodology, this would in-
volve both traditional control (PID and adaptive control) and machine learning components
for fine-tuning the actuation.
The mathematical formulation for control is presented with the following equations:
u1 = Kp·ex + Ki·∫ ex dt + Kd·(dex/dt)   (15)
u2 = Kp·eψ + Ki·∫ eψ dt + Kd·(deψ/dt)   (16)
where K p , Ki , Kd are the control gains, adaptively tuned using the neural network; ψd is
the desired heading angle; and ψ is the current heading angle.
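Equations (15) and (16) can be realized as two discrete-time PID channels whose gains are left exposed so that a learning component can retune them; the gain values below are placeholders.

```python
class AdaptivePID:
    """Discrete-time PID for one error channel (Equations (15) and (16))."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.01
u1_pid = AdaptivePID(kp=2.0, ki=0.1, kd=0.5, dt=dt)    # Equation (15), driven by e_x
u2_pid = AdaptivePID(kp=1.5, ki=0.05, kd=0.3, dt=dt)   # Equation (16), driven by e_psi

e_x, e_psi = 0.20, 0.05        # example tracking errors (placeholder values)
u1 = u1_pid.update(e_x)
u2 = u2_pid.update(e_psi)
# An adaptive layer may overwrite the kp, ki, kd attributes between control steps.
```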
Mobile robot dynamics: The dynamics of the mobile robot, such as the steering and driving
forces, are controlled by the inputs u1 and u2. This block represents the physical model of the robot,
whose dynamics were mathematically modeled using the Adams–Bashforth–Moulton method.
Sensors and feedback loop: The sensor block provides the actual state y = [ xe , ye , ψ] T
back to the controller for feedback-based correction. The sensor inputs, such as the current
pose (position and orientation), are used to refine the predictions from the neural network.
Forward kinematics maps the joint configuration of the robot to the pose of its end-effector in the task space and is represented as follows:
p = f(q)
where
p is the position and orientation of the end-effector in the task space;
q is the vector of the joint angles in the configuration space;
f is the forward kinematics function.
This function maps the joint angles to the end-effector’s position and orientation in
the task space.
Inverse kinematics is the process of determining the required joint angles (in config-
uration space q) to achieve a desired position and orientation of the end-effector (in task
space p).
Mathematically, it is represented as follows:
q = f⁻¹(p)
where q is the vector of the joint angles in the configuration space; p is the desired position
and orientation of the end-effector in the task space; and f−1 is the inverse kinematics
function. This function maps a desired end-effector position and orientation back to the
necessary joint angles.
Mapping the relationship between the spaces: the actuation inputs u are mapped to the configuration space q using specific functions, denoted as fspec⁻¹.
From configuration space to task space: the joint angles q are then mapped to the task space p using the forward kinematics function.
Inverse mapping: conversely, the task space p can be mapped back to the configuration space q using inverse kinematics, ffind⁻¹, and then from the configuration space to the actuation space using fspec:
q = ffind⁻¹(p)   (18)
Additional mappings fspec and fspec⁻¹ connect the actuation commands with the configuration space, effectively linking the entire control system from the actuator inputs to the task-specific outputs.
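The forward and inverse mappings can be illustrated on a planar two-link arm; the link lengths and the pseudo-inverse iteration below are assumptions used only to show the q = f⁻¹(p) relationship numerically.

```python
import numpy as np

L1, L2 = 1.0, 0.8   # link lengths of a hypothetical planar 2-link arm

def f(q):
    """Forward kinematics p = f(q): joint angles -> end-effector (x, y)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def inverse_kinematics(p_des, q0, iters=100, tol=1e-6):
    """Numerical q = f^-1(p) via Jacobian pseudo-inverse iterations."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = p_des - f(q)
        if np.linalg.norm(err) < tol:
            break
        q += np.linalg.pinv(jacobian(q)) @ err
    return q

q_sol = inverse_kinematics(np.array([1.2, 0.6]), q0=[0.1, 0.1])
print(q_sol, f(q_sol))   # recovered joint angles and the pose they reproduce
```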
The system’s foundation is built on mathematical equations representing inverse
kinematics and the dynamic model of the robot. The equations below define the positional
errors and control inputs: ex = xd − x, ey = yd − y, eψ = ψd − ψ.
The robot’s control inputs u1 and u2 are derived based on these error measurements
and are fed into the controller to generate the desired driving force FD and steering wheel
commands δ.
In Figure 8, the control system, which integrates both the traditional control methods and a neural network within a hybrid control architecture, is presented. This configuration is composed of two main loops: the tracking loop and the nonlinear inner loop with a neural network.
This structure allows the system to continuously refine its actions based on the current robot state, ensuring accurate trajectory tracking and force application.
Unlike conventional feedback mechanisms, which typically rely on fixed control gains or linear models, the neural network adapts in real time based on state feedback, providing an output f̂(x) that is combined with the robust control term v(t). The combined control input to the robot system is τ = v(t) + f̂(x). The error in the tracking loop is e = qd − q. By leveraging the neural network's learning capabilities, the control law becomes adaptive, ensuring that the system can cope with dynamic environmental changes, which is especially valuable in applications involving variable loads or complex trajectories. The prediction error eN = F(x) − f̂(x) is associated with the neural network's approximation. If we make a comparison to traditional control, in this neural network control model, L functions similarly to Kp (proportional control). Kf is a gain matrix that determines how the system reacts to differences between the desired and measured interaction forces. Kv plays a role akin to Kd (derivative control), and the effect of Ki (integral control) is adaptively handled by the neural network's feedback learning capability and the robust control adjustments over time.
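A compact sketch of the combined control law τ = v(t) + f̂(x): the robust term is taken here as a PD-style term on the tracking errors (an assumption, since its exact form is not fixed above), and the neural network estimate is obtained from any model exposing a Keras-style predict method.

```python
import numpy as np

def hybrid_control(q, dq, q_d, dq_d, nn_model, Kp, Kv):
    """Combined control input tau = v(t) + f_hat(x) (sketch)."""
    e = q_d - q                       # tracking error e = q_d - q
    de = dq_d - dq
    v = Kp @ e + Kv @ de              # robust control term v(t), assumed PD form
    x = np.concatenate([q, dq, e])    # state feedback fed to the network (assumed layout)
    f_hat = nn_model.predict(x[np.newaxis, :], verbose=0)[0]
    return v + f_hat                  # joint torques applied to the robot
```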
3. Results
An examination of the feature distributions between the training and test datasets is presented in Figures 9 and 10.
Figure 9. Distribution of orientation, velocity, and acceleration features in training and test datasets.
Figure 9 displays the basic sensory data acquired from the robot, including orientation, angular velocity, and linear acceleration along different axes. The distributions reveal consistent patterns between the training and test data, indicating that the experimental setup effectively captures the dynamic behavior of the robotic system under varied conditions.
Figure 10 further extends this analysis to include engineered features that are critical for the robotic control algorithms, such as the total angular velocity and Euler angles. This plot showcases the distributions for the engineered features such as the total angular velocity, total linear acceleration, Euler angles, and derived velocity and acceleration metrics in the training and test datasets.
The consistency across these feature distributions validates the data processing and feature engineering steps undertaken, ensuring that the machine learning models trained on these data are well equipped to generalize from training to real-world application scenarios. This robust feature engineering is further supported by the correlation analysis presented in Figure 11, which provides a deeper insight into the relationships between the orientation, angular velocity, and linear acceleration parameters.
Figure 10. Distribution of engineered features reflecting robot dynamics in training and test datasets.
Figure 11. Correlation matrix of orientation, angular velocity, and linear acceleration parameters.
From the results, as can be seen, a very strong correlation (1.0) is evident between roll (orientation_x) and scalar_part, as well as between yaw (orientation_z) and pitch (orientation_y). There is a notable negative correlation (−0.8) between the angular velocity in the z-direction (w_z) and the angular velocity in the y-direction (w_y). Furthermore, a moderate positive correlation (0.4) exists between the linear acceleration in the y-direction (dv_y) and the linear acceleration in the z-direction (dv_z). The strong correlations observed in the data not only highlight the critical relationships between the orientation and velocity parameters but also provide valuable insights into the feature importance for the subsequent predictive modeling.
In Figure 12, the structure of the neural network model is presented. The developed model consists of a sequential architecture designed to predict the robotic joint torques, which is crucial for the accurate control of a 20 DOFs robotic platform. The input layer receives 41 features, comprising the joint angles, velocities, and force measures, reflecting the comprehensive state of the robot necessary for effective torque computation.
Figure 12. Structure of the neural network model.
First layer: The model begins with a dense layer of 3810 units. Although this number appears large, it was initially chosen to test the capacity of the network to capture complex patterns in the high-dimensional data. This layer uses ReLU activation to introduce nonlinearity, allowing the model to learn more complex functions.
Dropout and regularization: A dropout rate of 20% follows to prevent overfitting by randomly omitting subsets of features during training, thus ensuring that the model does not rely too heavily on any single neuron.
Hidden layers: A subsequent dense layer with 128 units further processes the learned representations, with another dropout layer at 10% to continue regularization. ReLU activation is used here as well to maintain nonlinear learning.
Output layer: The final layer consists of 20 units corresponding to each joint torque, with a linear activation function. This setup is crucial as the task is a regression problem where each output unit predicts a continuous value representing the torque.
The training progress of the neural network model is presented in Figure 13, which shows the performance over 100 epochs. It includes the loss and accuracy metrics for each epoch, demonstrating how the model's performance improves as training progresses. As the epochs increase, the loss decreases and the accuracy increases, indicating effective learning and adaptation by the model to the training data. By the final epochs, the model achieves a high accuracy and low loss, suggesting that it has effectively captured the underlying patterns in the training dataset. The final accuracy is 0.9304 and the loss is 0.1850.
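A minimal sketch of the described model and training run, assuming TensorFlow/Keras; the optimizer, batch size, and the use of an accuracy metric for this regression task mirror the reported setup and are otherwise assumptions.

```python
import tensorflow as tf

# 41 input features -> 3810 ReLU units -> 20% dropout -> 128 ReLU units
# -> 10% dropout -> 20 linear outputs (one torque per joint)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(41,)),
    tf.keras.layers.Dense(3810, activation="relu"),
    tf.keras.layers.Dropout(0.20),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.10),
    tf.keras.layers.Dense(20, activation="linear"),
])
model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])

# X_train and y_train stand for the prepared feature matrix and target torques.
# history = model.fit(X_train, y_train, epochs=100, batch_size=32, validation_split=0.2)
```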
Cart Motion with Trapezoidal Velocity Profile
distance (m)                 4
time (s)                     3.5
max acceleration (m/s²)      6.5
max speed (m/s)              1.13
Figure 14 presents the positions of the robot's joints during the simulation of a cart motion with a trapezoidal velocity profile with a distance of 1 m and a time period of T = 3.5 s.
Figure 14. Position of the robot's joints during a cart motion with a trapezoidal velocity profile with a distance of 1 m and a time period of T = 3.5 s.
Figure 15 shows the tracking errors affecting the stability during the cart motion with a trapezoidal velocity profile with a distance of 1 m and T = 3.5 s. The tracking errors indicate that the most significant deviation occurs in the X joint during the stride, but it still remains within the bounds of stability.
Figure 15. Tracking errors during the cart motion with a trapezoidal velocity profile with a distance of 1 m and a time period of T = 3.5 s.
This research emphasizes the importance of simulations that compare the intended (reference) and actual paths that a robot follows during circular motion. By examining both the trajectory and the orientation angles (ψd for reference and ψ for actual), this study assesses how closely the robot adheres to its planned course. The tracking error (eψ), which quantifies the deviation between the robot's actual path and its intended trajectory, as shown in Figure 16, is a critical measure in this analysis of the robot's cart motion with a trapezoidal velocity profile.
Figure 16. A detailed visual representation of the simulation: reference ψd and actual ψ course angles and tracking error eψ during the robot's cart motion with a trapezoidal velocity profile movement.
A simulation of the robot’s circular motion was conducted to analyze its dynamic
A simulation
behavior. of the robot’s
The characteristics of thecircular
circularmotion
motion,was conducted
with to analyze its
a radius (amplitude) of dynamic
1 m and
abehavior.
period ofThe
3.5 characteristics of theincircular
s, are summarized Table 2.motion, with a radius (amplitude) of 1 m and a
period of 3.5 s, are summarized in Table 2.
Table 2. Characteristics of circular motion.

Circular Motion
radius/amplitude (m)         1
time (s)                     3.5
max acceleration (m/s²)      6.445
max speed (m/s)              1.8
Figure 17 presents the positions of the robot's joints during the simulation of circular motion with a radius of 1 m and a time period of T = 3.5 s. Figure 18 shows the tracking errors affecting the stability during the circular motion with a radius of 1 m and T = 3.5 s. The tracking errors indicate that the most significant deviation occurs in the X joint during the stride, but it still remains within the bounds of stability.
Figure 17. Position of the robot’s joints during the circular motion with a radius of 1 m and a time
Figure 17. Position of the robot’s joints during the circular motion with a radius of 1 m and a time
period of T = 3.5 s.
period of T = 3.5 s.
By examining both the trajectory and the orientation angles (ψd for the reference and
ψ for the actual trajectories), this study assesses how closely the robot adheres to its planned
course (Figure 19). The tracking error (eψ ), which quantifies the deviation between the
robot’s actual path and its intended trajectory, is a critical measure in this analysis of the
robot’s circular movement.
5. Discussion
The results of this study demonstrate the efficacy of integrating nonlinear dynamics
with machine learning (ML) in optimizing control systems for a 20 degrees of freedom
(DOFs) robotic platform. By utilizing a hybrid control approach that combines traditional
feedback methods with neural networks, the proposed system adapts to real-time changes
in complex environments, such as those characterized by nonlinearity and time variance.
Figure 18. Tracking errors during the circular motion with a radius of 1 m and a time period of T =
3.5 s.
Figure 19. A detailed visual representation of the simulation: references and actual trajectories during
the robot’s circular motion.
In comparison to previous studies, this work advances the field of robotic control
in several ways [43]. For instance, El-Hussieny et al. [11,44] successfully applied a deep
learning-based Model Predictive Control (MPC) framework to a three DOFs biped robot
leg, showing improvements in trajectory tracking. However, their focus was on a lower
dimensional system and did not account for as much real-time adaptability in nonlinear
environments. Similarly, Yuan et al. [24,45,46] applied auxiliary physics-informed neural
networks to solve nonlinear integral differential equations, showing promise in adapt-
ing to complex environments, but with limited integration in real-time control systems
for robotics.
Chen and Wen [32] explored the use of multi-layer neural networks in trajectory track-
ing for industrial robots. Their results highlighted the potential for ML in improving control
precision, yet their study did not integrate the additional complexity of nonlinear dynamics.
Our hybrid control architecture addresses this gap by providing a more comprehensive
solution that allows for faster and more accurate adjustments in dynamic environments,
particularly with the inclusion of real-time sensor data from IoT platforms.
This study also builds upon earlier works on robotic control using deep reinforcement
learning [47,48], where Tang et al. reviewed real-world successes in the application of
these techniques. While deep reinforcement learning offers significant benefits for robotic
systems, our hybrid approach enhances the control system by combining machine learn-
ing models with traditional nonlinear control techniques. This hybrid method provides
superior adaptability in real-time applications, particularly in IoT-driven settings.
Furthermore, Levine [49] explored deep and recurrent neural architectures for con-
trol tasks in high-dimensional robotic systems. Similarly, studies by Li et al. [33] and
Zheng et al. [34] applied recurrent neural networks (RNNs) in trajectory tracking for high-
dimensional robotic systems, underscoring the importance of adaptive learning models
in nonlinear environments. While these works contributed valuable insights into ML
applications with high-dimensional control, our study goes further by leveraging the itera-
tive learning capabilities of neural networks within a feedback control loop, allowing for
continuous system optimization in real-time scenarios.
Additionally, Wei and Zhu [50] demonstrated the application of MPC for trajectory
tracking and control [51,52] in mobile robots, addressing challenges in time-varying en-
vironments. Our approach builds upon these findings by integrating neural networks to
predict joint torques directly, allowing for faster adaptation and reduced computational
complexity in real-time robotic control.
Moreover, the integration of the IoT with robotic control systems has been extensively
discussed, particularly in smart farming applications where real-time data processing and
adaptability are crucial [35–37]. This study further advances the field by demonstrating
how an IoT framework can enhance the efficacy of ML models in dynamically adjusting
the control parameters based on real-time sensor inputs [44]. The adaptability of this
system is critical for environments requiring constant adjustments due to rapidly changing
conditions [42].
Our neural network model demonstrated a high accuracy and low loss, with a final
accuracy of 0.9304 and a loss of 0.1850. These results suggest that the model effectively
captured the underlying patterns in the training dataset, demonstrating strong potential
for real-world application in robotic control systems. However, while this level of accuracy
is commendable, there is room for improvement when compared to the results obtained in
studies such as that of Almassri et al. [53], where a neural network approach integrated
with Inertial Measurement Unit (IMU) and Ultra-Wideband (UWB) data fusion achieved a
99% positioning accuracy for moving robots.
The combination of nonlinear control methods, machine learning, and IoT technologies
creates a robust platform for future research and development. While this study provides
a solid foundation, there are several avenues for future work. One area of focus could
be improving the scalability of the system for even higher degrees of freedom in robotic
platforms. Additionally, exploring more advanced neural network architectures, such as
deep reinforcement learning models, could further enhance the system’s adaptability and
decision-making capabilities in highly uncertain environments.
Moreover, future studies could investigate the integration of other emerging technolo-
gies, such as edge computing and 5G, to further reduce latency and improve real-time
control in IoT environments [54,55]. The potential to extend this hybrid approach to other
industries, such as autonomous transportation or healthcare robotics, offers promising
directions for further exploration [56,57].
Among these factors, model inaccuracies and sensor delays contribute most signifi-
cantly to the total tracking error. Neural network prediction errors are also influential, but
their impact can be minimized with adequate training. Understanding these contributions
allows for targeted improvements in model accuracy, network training, and delay manage-
ment strategies to enhance real-world performance. In an IoT environment, where commu-
nication delays can occasionally occur, the control system’s inherent robustness—derived
from the combination of the robust control term and the adaptive neural network—enables
it to tolerate short-term data unavailability or latency. If the delays are persistent, further
techniques such as predictive control can be integrated into the system, where the neural
network could predict the likely future states based on historical data, thus maintaining
continuity in the control response.
6. Conclusions
The integration of nonlinear dynamics, machine learning, and the IoT in this study
demonstrates significant improvements in robotic control system performance, particu-
larly in real-time adaptability and precision. These findings contribute to the growing
body of knowledge in intelligent control systems and present valuable insights for future
developments in both industrial and research applications.
Author Contributions: Conceptualization, V.A.K., O.P. and J.G.K.; methodology, V.A.K.; software, V.A.K.;
validation, V.A.K.; formal analysis, V.A.K., O.P. and J.G.K.; investigation, V.A.K.; resources, V.A.K.; data
curation, V.A.K.; writing—original draft preparation, V.A.K., O.P. and J.G.K.; writing—review and editing,
V.A.K., O.P. and J.G.K.; and visualization, V.A.K., O.P. and J.G.K. All authors have read and agreed to the
published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.
Acknowledgments: I would like to express sincere gratitude to Veljko Potkonjak, who first introduced me to the field of robotics and to software for robot programming in MATLAB simulations.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Song, Q.; Zhao, Q. Recent Advances in Robotics and Intelligent Robots Applications. Appl. Sci. 2024, 14, 4279. [CrossRef]
2. Zaitceva, I.; Andrievsky, B. Methods of Intelligent Control in Mechatronics and Robotic Engineering: A Survey. Electronics 2022, 11, 2443.
[CrossRef]
3. Wang, Y.; Hou, M.; Plataniotis, K.N.; Kwong, S.; Leung, H.; Tunstel, E.; Rudas, I.J.; Trajkovic, L. Towards a Theoretical Framework
of Autonomous Systems Underpinned by Intelligence and Systems Sciences. IEEE/CAA J. Autom. Sin. 2021, 8, 52–63. [CrossRef]
4. Gabsi, A.E.H. Integrating Artificial Intelligence in Industry 4.0: Insights, Challenges, and Future Prospects—A Literature Review.
Ann. Oper. Res. 2024. [CrossRef]
5. Antoska Knights, V.; Gacovski, Z. Methods for Detection and Prevention of Vulnerabilities in the IoT (Internet of Things) Systems.
In Internet of Things—New Insights; IntechOpen: London, UK, 2024. [CrossRef]
6. Knights, V.; Petrovska, O.; Prchkovska, M. Enhancing Smart Parking Management through Machine Learning and AI Integration
in IoT Environments. In Navigating the Internet of Things in the 22nd Century—Concepts, Applications, and Innovations [Working Title];
IntechOpen: London, UK, 2024. [CrossRef]
7. Chataut, R.; Phoummalayvane, A.; Akl, R. Unleashing the Power of IoT: A Comprehensive Review of IoT Applications and
Future Prospects in Healthcare, Agriculture, Smart Homes, Smart Cities, and Industry 4.0. Sensors 2023, 23, 7194. [CrossRef]
[PubMed]
8. Sadeghzadeh, N.; Farajzadeh, N.; Dattatri, N.; Acevedo, B.P. SPS Vision Net: Measuring Sensory Processing Sensitivity via an
Artificial Neural Network. Cogn. Comput. 2024, 16, 1379–1392. [CrossRef]
9. Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160.
[CrossRef]
10. Khanna, A.; Kaur, S. Internet of Things (IoT), Applications and Challenges: A Comprehensive Review. Wirel. Pers. Commun. 2020,
114, 1687–1762. [CrossRef]
11. El-Hussieny, H. Real-Time Deep Learning-Based Model Predictive Control of a 3-DOF Biped Robot Leg. Sci. Rep. 2024, 14, 16243.
[CrossRef]
12. Knights, V.; Petrovska, O. Dynamic Modeling and Simulation of Mobile Robot Under Disturbances and Obstacles in an
Environment. J. Appl. Math. Comput. 2024, 8, 59–67. [CrossRef]
13. Antoska Knights, V.; Gacovski, Z.; Deskovski, S. Guidance and Control System for Platoon of Autonomous Mobile Robots. J.
Electr. Eng. 2018, 6, 281–288. [CrossRef]
14. Richards, S.M.; Azizan, N.; Slotine, J.-J.; Pavone, M. Adaptive-Control-Oriented Meta-Learning for Nonlinear Systems. arXiv
2021, arXiv:2103.04490. Available online: https://arxiv.org/abs/2103.04490 (accessed on 1 September 2024).
15. Knights, V.; Prchkovska, M. From Equations to Predictions: Understanding the Mathematics and Machine Learning of Multiple
Linear Regression. J. Math. Comput. Appl. 2024, 3, 137. [CrossRef]
16. Sakaguchi, H. Machine Learning of Nonlinear Dynamical Systems with Control Parameters Using Feedforward Neural Networks.
arXiv 2024, arXiv:2409.07468. [CrossRef]
17. Meindl, M.; Lehmann, D.; Seel, T. Bridging Reinforcement Learning and Iterative Learning Control: Autonomous Motion
Learning for Unknown, Nonlinear Dynamics. Front. Robot. AI 2022, 9, 793512. [CrossRef]
18. Lewis, F.L.; Jagannathan, S.; Yesildirek, A. Neural Network Control of Robot Manipulators and Nonlinear Systems; Taylor & Francis
Ltd.: London, UK, 1999; ISBN 0-7484-0596-8.
19. Sayeed, A.; Verma, C.; Kumar, N.; Koul, N.; Illés, Z. Approaches and Challenges in Internet of Robotic Things. Future Internet
2022, 14, 265. [CrossRef]
20. Afanasyev, I.; Mazzara, M.; Chakraborty, S.; Zhuchkov, N.; Maksatbek, A.; Kassab, M.; Distefano, S. Towards the Internet of
Robotic Things: Analysis, Architecture, Components and Challenges. In Proceedings of the 2019 IEEE International Conference
on Developments in eSystems Engineering (DeSE), Kazan, Russia, 7–10 October 2019. [CrossRef]
21. Sikder, A.K.; Petracca, G.; Aksu, H.; Jaeger, T.; Uluagac, A.S. A Survey on Sensor-Based Threats to Internet-of-Things (IoT)
Devices and Applications. arXiv 2018, arXiv:1802.02041. Available online: https://www.researchgate.net/publication/322975901
(accessed on 15 September 2024).
22. Vermesan, O.; Bahr, R.; Ottella, M.; Serrano, M.; Karlsen, T.; Wahlstrøm, T.; Sand, H.E.; Ashwathnarayan, M.; Gamba, M.T. Internet
of Robotic Things Intelligent Connectivity and Platforms. Front. Robot. AI 2020, 7, 104. [CrossRef]
23. Antoska, V.; Jovanović, K.; Petrović, V.M.; Baščarević, N.; Stankovski, M. Balance Analysis of the Mobile Anthropomimetic Robot
Under Disturbances—ZMP Approach. Int. J. Adv. Robot. Syst. 2013, 10, 206. [CrossRef]
24. Yuan, L.; Ni, Y.-Q.; Deng, X.-Y.; Hao, S. A-PINN: Auxiliary Physics Informed Neural Networks for Forward and Inverse Problems
of Nonlinear Integro-Differential Equations. J. Comput. Phys. 2022, 462, 111260. [CrossRef]
25. Pascal, C.; Raveica, L.-O.; Panescu, D. Robotized Application Based on Deep Learning and Internet of Things. In Proceedings of
the 2018 22nd International Conference on System Theory, Control and Computing (ICSTCC), Sinaia, Romania, 10–12 October 2018.
[CrossRef]
26. Li, Q.; Sompolinsky, H. Statistical Mechanics of Deep Linear Neural Networks: The Backpropagating Kernel Renormalization.
Phys. Rev. X 2021, 11, 031059. [CrossRef]
27. Meng, X.; Li, Z.; Zhang, D.; Karniadakis, G.E. PPINN: Parareal Physics-Informed Neural Network for Time-Dependent PDEs.
arXiv 2019, arXiv:1909.10145. [CrossRef]
28. Gardašević, G.; Katzis, K.; Bajić, D.; Berbakov, L. Emerging Wireless Sensor Networks and Internet of Things Technologies—
Foundations of Smart Healthcare. Sensors 2020, 20, 3619. [CrossRef] [PubMed]
29. Coronado, E.; Venture, G. Towards IoT-Aided Human–Robot Interaction Using NEP and ROS: A Platform-Independent, Accessible
and Distributed Approach. Sensors 2020, 20, 1500. [CrossRef]
30. Yilmaz, N.; Wu, J.Y.; Kazanzides, P.; Tumerdem, U. Neural Network-Based Inverse Dynamics Identification and External Force
Estimation on the da Vinci Research Kit. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation
(ICRA), Paris, France, 31 May–31 August 2020. [CrossRef]
31. Antoska Knights, V.; Stankovski, M.; Nusev, S.; Temeljkovski, D.; Petrovska, O. Robots for Safety and Health at Work. Mech. Eng.—Sci. J. 2015, 33, 275–279.
32. Chen, S.; Wen, J.T. Industrial Robot Trajectory Tracking Control Using Multi-Layer Neural Networks Trained by Iterative Learning
Control. Robotics 2021, 10, 50. [CrossRef]
33. Li, J.; Su, J.; Yu, W.; Mao, X.; Liu, Z.; Fu, H. Recurrent Neural Network for Trajectory Tracking Control of Manipulator with
Unknown Mass Matrix. Front. Neurorobotics 2024, 18, 1451924. [CrossRef]
34. Zheng, X.; Ding, M.; Liu, L.; Guo, J.; Guo, Y. Recurrent Neural Network Robust Curvature Tracking Control of Tendon-Driven
Continuum Manipulators with Simultaneous Joint Stiffness Regulation. Nonlinear Dyn. 2024, 112, 11067–11084. [CrossRef]
35. Dhanaraju, M.; Chenniappan, P.; Ramalingam, K.; Pazhanivelan, S.; Kaliaperumal, R. Smart Farming: Internet of Things
(IoT)-Based Sustainable Agriculture. Agriculture 2022, 12, 1745. [CrossRef]
36. Amertet Finecomess, S.; Gebresenbet, G.; Alwan, H.M. Utilizing an Internet of Things (IoT) Device, Intelligent Control Design,
and Simulation for an Agricultural System. IoT 2024, 5, 58–78. [CrossRef]
37. Friha, O.; Ferrag, M.A.; Shu, L.; Maglaras, L.; Wang, X. Internet of Things for the Future of Smart Agriculture: A Comprehensive
Survey of Emerging Technologies. IEEE/CAA J. Autom. Sin. 2021, 8, 718–752. [CrossRef]
38. GeeksforGeeks. Architecture of Internet of Things (IoT). GeeksforGeeks. 2024. Available online: https://www.geeksforgeeks.org/architecture-of-internet-of-things-iot/ (accessed on 29 September 2024).
39. Antoska, V.; Potkonjak, V.; Stankovski, M.J.; Baščarević, N. Robustness of Semi-Humanoid Robot Posture with Respect to External
Disturbances. Facta Univ. Ser. Autom. Control Robot. 2012, 11, 99–110.
40. Lagrange Equations (in Mechanics); Encyclopedia of Mathematics; EMS Press: Berlin, Germany, 2001; Available online: https://encyclopediaofmath.org/wiki/Euler-Lagrange_equation (accessed on 20 September 2024).
41. Weisstein, E.W. Euler-Lagrange Differential Equation. In MathWorld; Wolfram Research, Inc.: Champaign, IL, USA, 2024; Available
online: https://mathworld.wolfram.com/Euler-LagrangeDifferentialEquation.html (accessed on 20 September 2024).
42. Antoska-Knights, V.; Gacovski, Z.; Deskovski, S. Obstacles Avoidance Algorithm for Mobile Robots, Using the Potential Fields
Method. Univ. J. Electr. Electron. Eng. 2017, 5, 75–84. [CrossRef]
43. Patil, S.; Vasu, V.; Srinadh, K.V.S. Advances and Perspectives in Collaborative Robotics: A Review of Key Technologies and
Emerging Trends. Discov. Mech. Eng. 2023, 2, 13. [CrossRef]
44. Piga, D.; Bemporad, A. New Trends in Modeling and Control of Hybrid Systems. Int. J. Robust Nonlinear Control 2020, 30, 5775–5776.
[CrossRef]
45. Roy, S.; Rana, D. Machine Learning in Nonlinear Dynamical Systems. Resonance 2021, 26, 953–970. [CrossRef]
46. Gilpin, W. Generative Learning for Nonlinear Dynamics. Nat. Rev. Phys. 2024, 6, 194–206. [CrossRef]
47. Tang, C.; Abbatematteo, B.; Hu, J.; Chandra, R.; Martín-Martín, R.; Stone, P. Deep Reinforcement Learning for Robotics: A Survey
of Real-World Successes. arXiv 2024, arXiv:2408.03539.
48. Han, D.; Mulyana, B.; Stankovic, V.; Cheng, S. A Survey on Deep Reinforcement Learning Algorithms for Robotic Manipulation.
Sensors 2023, 23, 3762. [CrossRef]
49. Levine, S. Exploring Deep and Recurrent Architectures for Optimal Control; Stanford University: Stanford, CA, USA, 2013; Available
online: https://people.eecs.berkeley.edu/~svlevine/papers/dlctrl.pdf (accessed on 25 September 2024).
50. Wei, J.; Zhu, B. Model Predictive Control for Trajectory-Tracking and Formation of Wheeled Mobile Robots. Neural Comput. Appl.
2022, 34, 16351–16365. [CrossRef]
51. Silaa, M.Y.; Barambones, O.; Bencherif, A. Robust Adaptive Sliding Mode Control Using Stochastic Gradient Descent for Robot
Arm Manipulator Trajectory Tracking. Electronics 2024, 13, 3903. [CrossRef]
52. Schwenzer, M.; Ay, M.; Bergs, T.; Abel, D. Review on Model Predictive Control: An Engineering Perspective. Int. J. Adv. Manuf.
Technol. 2021, 117, 1327–1349. [CrossRef]
53. Almassri, A.M.M.; Shirasawa, N.; Purev, A.; Uehara, K.; Oshiumi, W.; Mishima, S.; Wagatsuma, H. Artificial Neural Network
Approach to Guarantee the Positioning Accuracy of Moving Robots by Using the Integration of IMU/UWB with Motion Capture
System Data Fusion. Sensors 2022, 22, 5737. [CrossRef] [PubMed]
54. Ma, X.; Xu, M.; Li, Q.; Li, Y.; Zhou, A.; Wang, S. 5G Edge Computing: Technologies, Applications and Future Visions; Springer Nature: Berlin/Heidelberg, Germany, 2024; Available online: https://books.google.mk/books?id=zGgFEQAAQBAJ&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false (accessed on 8 October 2024).
55. Attaran, M. The Impact of 5G on the Evolution of Intelligent Automation and Industry Digitization. J. Ambient. Intell. Humaniz.
Comput. 2023, 14, 5977–5993. [CrossRef] [PubMed]
56. Biswas, A.; Wang, H.-C. Autonomous Vehicles Enabled by the Integration of IoT, Edge Intelligence, 5G, and Blockchain. Sensors
2023, 23, 1963. [CrossRef]
57. Carvalho, G.; Cabral, B.; Pereira, V.; Bernardino, J. Edge Computing: Current Trends, Research Challenges and Future Directions.
Computing 2021, 103, 993–1023. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.