Implementation of a Self-Driving Vehicle
The following document details the execution of a project that consists of programming a toy car to drive across a predetermined path autonomously and to avoid certain obstacles located on that path. This project simulates the functioning of autonomous cars that are currently being developed. Thanks to this technology, automobiles will no longer need drivers, given that they will rely on sensors and artificial vision that will help the car recognize its surroundings, in addition to localization and path planning. These features have the potential of making trips safer and quicker. In the following sections, the hardware and software of the project will be explained. Moreover, the performance tests and results of the toy car will be shown. Finally, the document will address the conclusions related to the development of this technology that we reached while executing the project.

II. HARDWARE DESCRIPTION

For convenience, this vehicle uses an RC toy car as a base, which provides the chassis, the steering system and the throttle system. Besides, the vehicle requires a camera and three ultrasonic sensors to obtain data about the surroundings. For the processing and control tasks, a Raspberry Pi is chosen.

A. Chassis
As specified before, the chassis we decided to use is obtained from an RC car. This chassis needs to be able to support all the other elements required to build the self-driving vehicle. Fig. 1 shows a picture of the chassis used for this specific project.

B. Steering System
In this case, the RC car we chose features front-wheel steering with only three positions: left, right and straight. For this purpose, a DC motor and a gearbox determine the direction of the wheels.

Fig. 2 Steering System of the RC car

C. Throttle System
For the throttle system, the car uses a DC motor and a driver that allows us to control the speed and the direction of rotation. The DC motor is the one that comes with the RC car, but in order to control the vehicle, a driver is required. For this reason, we decided to use the TB6612FNG driver, with a power supply voltage of 5 V and four function modes: CW, CCW, short brake and stop. [1]
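As an illustration of these four modes, the sketch below shows how the driver's direction inputs and PWM speed pin could be driven from the Raspberry Pi using the RPi.GPIO library. It is a minimal sketch: the pin numbers are placeholders rather than the project's actual wiring, and the driver's standby (STBY) pin is assumed to be tied high.

```python
import RPi.GPIO as GPIO

# Placeholder BCM pin numbers for one TB6612FNG channel (illustrative, not the real wiring).
AIN1, AIN2, PWMA = 20, 21, 12

GPIO.setmode(GPIO.BCM)
GPIO.setup([AIN1, AIN2, PWMA], GPIO.OUT)
pwm = GPIO.PWM(PWMA, 1000)   # 1 kHz PWM on the speed pin
pwm.start(0)

def throttle(mode, duty=0):
    """Select one of the driver's four modes: 'cw', 'ccw', 'brake' or 'stop'."""
    if mode == "cw":            # clockwise rotation
        GPIO.output(AIN1, GPIO.HIGH)
        GPIO.output(AIN2, GPIO.LOW)
    elif mode == "ccw":         # counter-clockwise rotation
        GPIO.output(AIN1, GPIO.LOW)
        GPIO.output(AIN2, GPIO.HIGH)
    elif mode == "brake":       # short brake: both inputs high
        GPIO.output(AIN1, GPIO.HIGH)
        GPIO.output(AIN2, GPIO.HIGH)
    else:                       # stop: both inputs low
        GPIO.output(AIN1, GPIO.LOW)
        GPIO.output(AIN2, GPIO.LOW)
    pwm.ChangeDutyCycle(duty)   # duty cycle in percent sets the motor speed
```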
D. Camera

Fig. 4 LifeCam HD 3000 camera

E. Ultrasonic Sensors
Ultrasonic sensors measure distance by sending a signal; the time it takes for the signal to return is proportional to the distance. This application requires ultrasonic sensors to detect the distance of obstacles from the car. For this reason, we decided to use three sensors with the following disposition: one on the front of the vehicle and the other two on each side. The sensors used are the HC-SR04, with a working voltage of 5 V. They have a measuring range from 2 cm to 400 cm. [3]
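A minimal sketch of how one of these sensors could be read from the Raspberry Pi is shown below, assuming the RPi.GPIO library and placeholder trigger/echo pins; the sensor encodes the distance in the width of its echo pulse, which is converted to centimeters using the speed of sound.

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24              # placeholder BCM pins for one HC-SR04 (illustrative only)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    """Trigger the sensor and convert the echo pulse width to centimeters."""
    GPIO.output(TRIG, GPIO.HIGH)         # 10 microsecond trigger pulse
    time.sleep(10e-6)
    GPIO.output(TRIG, GPIO.LOW)

    start = end = time.time()
    while GPIO.input(ECHO) == 0:         # wait for the echo pulse to begin
        start = time.time()
    while GPIO.input(ECHO) == 1:         # wait for the echo pulse to end
        end = time.time()

    # Sound travels roughly 343 m/s; the pulse covers the round trip, so divide by 2.
    return (end - start) * 34300 / 2
```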
F. Raspberry Pi
The Raspberry Pi is a small single-board computer. The model used for this self-driving vehicle is the Raspberry Pi 3B. It presents 4 USB ports, an HDMI port, 1 GB of RAM and 40 GPIO pins. The Raspberry Pi allows us to process the images obtained by the camera and the information from the ultrasonic sensors, and to apply algorithms in order to obtain the signals that control the car. [4]

Fig. 6 Raspberry Pi 3B

G. Power Supply
To energize the Raspberry Pi we decided to use a power bank, and to energize the sensors and the DC motors we use eight 1.5 V batteries in series.

H. Assembly
Fig. 7 shows how the vehicle was assembled. The camera is attached to an acrylic bracket that keeps it on top of the vehicle. This location helps the camera get images of the path right in front of the car, so that we can determine if the car needs to turn or keep going straight. The ultrasonic sensors are mounted using acrylic holders. The Raspberry Pi is mounted on top of an acrylic platform, and the connections are made on a protoboard located on top of the chassis. Lastly, the batteries and the power bank are attached to the back of the chassis.

Fig. 7 Assembly of the self-driving vehicle

III. SOFTWARE DESCRIPTION

The programming of the Raspberry Pi was done in the Python programming language. Libraries such as OpenCV, NumPy and Matplotlib were used to develop the code.
Initially, the Canny edge-detection method was considered for the line detection algorithm; however, this method generates a high computational cost. Since the test track only contains black and white colors, it was decided to use a simple masking by intensities. In this way, the black lines could be separated from the white background.
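A minimal sketch of this masking step is shown below, assuming the frames come from a USB camera opened with OpenCV; the intensity threshold of 100 is an illustrative value, not the one tuned for the project.

```python
import cv2

cap = cv2.VideoCapture(0)                # USB camera, assuming device index 0

ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixels darker than the threshold (the black lines) become white in the mask,
    # while the bright background becomes black.
    _, mask = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
```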
After obtaining that mask, the Hough transform was applied to detect lines. In this way, data such as the starting and ending point of each line could be obtained. This is important, since it provides a lot of information about the position and orientation of the lines on the track.
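A sketch of this step using OpenCV's probabilistic Hough transform is shown below, where `mask` is the binary image from the previous step; the threshold, minimum line length and maximum gap values are illustrative, not the parameters used in the project.

```python
import cv2
import numpy as np

# Each detected segment is returned with its two endpoints (x1, y1, x2, y2).
lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print(f"segment from ({x1}, {y1}) to ({x2}, {y2})")
```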
In the developed project, it was decided to use two main characteristics of each detected line: its distance (length) and its slope.
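As a minimal sketch, both characteristics can be computed directly from the endpoints returned by the Hough transform; the helper below is illustrative and is not taken from the project's code.

```python
import math

def line_features(x1, y1, x2, y2):
    """Return the length (distance) and slope of a segment given its endpoints."""
    length = math.hypot(x2 - x1, y2 - y1)
    # Guard against vertical segments, which would otherwise cause a division by zero.
    slope = float("inf") if x2 == x1 else (y2 - y1) / (x2 - x1)
    return length, slope
```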
Lane recognition was done as follows: since the center lines are segmented, the longest detected line will belong to the outer (extreme) line. Consequently, the position of the line with the maximum distance found will be the one that determines the lane in which the vehicle is located.

The movement control was performed using the slope of this maximum line. When it is inclined with a positive slope greater than a delta, the vehicle should turn to the right, and when the slope is more negative than minus that delta, it should turn to the left. In this way, whenever the detected line stays within the acceptable range of slope values (-d < m < +d), the vehicle will only move forward.
Movement   Traction   Steering
Forward    100%       0%
Right      60%        100%
Left       50%        100%
Back       100%       0%
The traction PWM percentage for turning right is greater than for turning left because the geometry of the vehicle requires it in order to complete the turn.
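These duty cycles can be grouped into a simple lookup table in code; the sketch below is illustrative, and `set_traction` and `set_steering` are placeholder functions standing in for the actual driver calls (for example, PWM duty-cycle updates like the ones in the throttle sketch above).

```python
# Duty cycles (in %) for the traction and steering motors, taken from the table above.
PWM_TABLE = {
    "forward": (100, 0),
    "right":   (60, 100),
    "left":    (50, 100),
    "back":    (100, 0),
}

def drive(command):
    """Apply the traction and steering duty cycles associated with a movement command."""
    traction, steering = PWM_TABLE[command]
    set_traction(traction)   # placeholder: update the throttle driver's PWM duty cycle
    set_steering(steering)   # placeholder: update the steering motor's PWM duty cycle
```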
The next step was to test the ultrasonic sensors, because the obstacles that arise while the vehicle is driving will be identified by these sensors. We wanted to check what the optimal and necessary detection distance was, considering the geometry of the cart, and it turned out to be 30 cm.

V. CONCLUSIONS