Traffic Signs Recognition With Deep Learning
Abstract—In this paper, a deep learning based road traffic signs recognition method is developed, which is very promising for the development of Advanced Driver Assistance Systems (ADAS) and autonomous vehicles. The system architecture is designed to extract the main features from images of traffic signs in order to classify them under different categories. The presented method uses a modified LeNet-5 network to extract a deep representation of the traffic signs and perform the recognition. It consists of a Convolutional Neural Network (CNN) modified by connecting the output of all convolutional layers to the Multilayer Perceptron (MLP). The training is conducted on the German Traffic Sign Dataset and achieves good results in recognizing traffic signs.

Keywords—Classification, Recognition, Artificial Neural Network (ANN), Convolutional Neural Network (CNN), Multilayer Perceptron (MLP), Deep learning, Artificial Intelligence, Road signs, Autonomous vehicles.

I. INTRODUCTION

The human factor remains the most common cause of road mortality. Indeed, the potentially dangerous choices made by the driver might be intentional (speeding, for example) or they might be the result of physical tiredness, drowsiness, or a poor perception and interpretation of the observed scene. The introduction of autonomous vehicles will certainly reduce these causes or even make them disappear.

As part of the development of these autonomous vehicles, and particularly of driving assistance systems, several manufacturers and laboratories have oriented their work towards the exploitation of visual information because of its usefulness for the detection of roads, vehicles, pedestrians and traffic signs. The principle of driving assistance systems aimed at road signs recognition is to detect the signs, interpret their meaning, then transmit the information to the driver (by projection on the windshield, on a screen or on a smartphone) or, even better, transmit the information to the vehicle, which carries out the execution without needing a human decision. However, given that the classical approach has been bounded only by well-structured models of traffic signs (undistorted and completely visible models), it became necessary to consider the real characteristics of the road environment. For this reason, current research is moving towards the development of recognition systems that are more adapted to real images of road signs, which do not generally look like their models.

Motivated by the success of Deep learning based classification and recognition methods in different domains, we are interested in using these new advances in Machine learning for traffic signs recognition.

The remainder of this paper is organized as follows: section 2 discusses some related works in Traffic Signs Recognition (TSR). In section 3, the datasets used in the development of our approach are presented. Section 4 details the proposed method, and section 5 discusses the ideas developed to improve its performance. Section 6 presents the implementation results of the network before and after the application of the improvement operations. A summary of the key points and future works concludes the paper.
II. RELATED WORKS
The last decade has shown a rapid evolution in the development of intelligent transportation systems (ITS), especially ADAS and Self-Driving Cars (SDC). In these systems, traffic signs detection and recognition is one of the difficult tasks that confront researchers and developers. This issue is addressed as a problem of detecting, recognizing, and classifying objects (traffic signs) using computer vision, and it remains a challenge to this day.

The work presented in this paper focuses on traffic signs recognition without consideration of the detection step. For this purpose, this section discusses only related works from this angle. Traffic signs recognition is divided into two parts: feature extraction and sign recognition. For the first step, several methods have been proposed, including edge detection [1], the scale-invariant feature transform (SIFT) [2], speeded-up robust features (SURF) [3], the histogram of oriented gradients (HOG) [4] and others. In [5], a Bag of Words (BOW) built from SURF features and k-means clustering was used. Typically, the output of this step is the input of the classification algorithms that perform the recognition of the road signs. Many algorithms have been used for traffic signs classification, such as the K-Nearest Neighbor (KNN) classifier [3], the Support Vector Machine (SVM) [6] and neural networks [5][7]. The authors in [5] proposed the evaluation of three methods, namely an Artificial Neural Network (ANN), a Support Vector Machine (SVM) and an Ensemble Subspace KNN, using BoW where every road sign is encoded with 200 features; among them, the Multi-layer Perceptron neural network provides the best results.
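To make this two-stage pipeline concrete, the sketch below illustrates one classical combination, HOG features [4] feeding an SVM classifier [6]. It is an illustration under assumptions, not code from the cited works: the 32x32 grayscale crops, the HOG parameters and the variable names (X_train, y_train, X_test) are all placeholders.

# Illustrative sketch of the classical two-stage pipeline:
# hand-crafted features (step 1) feeding a classifier (step 2).
import numpy as np
from skimage.feature import hog   # HOG descriptor, as in [4]
from sklearn.svm import SVC       # SVM classifier, as in [6]

def extract_features(images):
    # images: grayscale sign crops, assumed shape (N, 32, 32)
    return np.array([hog(img,
                         orientations=9,
                         pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2))
                     for img in images])

# X_train, y_train and X_test are assumed to be prepared beforehand.
classifier = SVC(kernel="rbf")
classifier.fit(extract_features(X_train), y_train)
predicted = classifier.predict(extract_features(X_test))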
Currently, Convolutional networks are gradually replacing traditional computer vision algorithms in different applications such as object classification and pattern recognition [7][8]. They are used to extract and learn a deep representation of the traffic signs. This solution avoids the explicit extraction of descriptors, a step which is very sensitive to many factors. Such a network takes a 2D image as input, processes it with convolution operations, and has the ability to learn a representative description of the image.
Fig. 1. LeNet-5 architecture
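The paper describes the modification of LeNet-5 [10] only at the architectural level: the output of all convolutional layers is connected to the MLP. The Keras sketch below is one possible reading of that description, not the authors' exact implementation; the 32x32x3 input size is assumed, the filter counts (6 and 16) and the dense sizes (120 and 84) are taken from the classical LeNet-5, and the 43 outputs correspond to the GTSB classes.

import tensorflow as tf
from tensorflow.keras import layers

# Sketch of a modified LeNet-5: the flattened output of EVERY
# convolutional stage (not only the last one) feeds the MLP.
inputs = tf.keras.Input(shape=(32, 32, 3))            # assumed input size
c1 = layers.Conv2D(6, 5, activation="relu")(inputs)   # first conv stage
p1 = layers.MaxPooling2D(2)(c1)
c2 = layers.Conv2D(16, 5, activation="relu")(p1)      # second conv stage
p2 = layers.MaxPooling2D(2)(c2)
merged = layers.Concatenate()([layers.Flatten()(p1),
                               layers.Flatten()(p2)]) # all conv outputs to the MLP
fc1 = layers.Dense(120, activation="relu")(merged)
fc2 = layers.Dense(84, activation="relu")(fc1)
outputs = layers.Dense(43, activation="softmax")(fc2) # one unit per GTSB class
model = tf.keras.Model(inputs, outputs)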
III. TRAFFIC SIGNS DATASET
A rich dataset is needed for object recognition based on neural networks, both to train the system and to evaluate its results. For the purpose of traffic signs classification, we used the German Traffic Sign Benchmark (GTSB) [9], which contains 43 classes divided into 3 categories, as represented in Table I.

TABLE I. THE DATASET DISTRIBUTION

Category          Task                                            Number of images
Training data     Used to train the network                       34799
Validation data   Allows supervising the network's performance    4410
                  while training it (a reduced version of the
                  testing data)
Testing data      Used to evaluate the final network              12630

Each category is stored as a 4-dimensional tensor whose dimensions determine the image index in the dataset, the pixel's row and column, and the information the pixel carries (Red, Green and Blue values).
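For illustration, the tensor layout of Table I corresponds to arrays shaped as follows. Only the number of images and the RGB channels are fixed by the table; the 32x32 image size and the variable names are assumptions of this sketch.

import numpy as np

# Assumed pre-processed GTSB arrays; the 32x32 size is an assumption.
X_train = np.zeros((34799, 32, 32, 3), dtype=np.uint8)  # training data
X_valid = np.zeros((4410, 32, 32, 3), dtype=np.uint8)   # validation data
X_test  = np.zeros((12630, 32, 32, 3), dtype=np.uint8)  # testing data
y_train = np.zeros(34799, dtype=np.int64)               # class labels in [0, 42]

# axis 0: image index in the dataset
# axes 1 and 2: the pixel's row and column
# axis 3: the information the pixel carries (Red, Green, Blue)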
IV. PROPOSED METHOD

The network extracts deep features of the input image with its convolutional layers and determines the traffic signs class by processing them in a 4-layer fully connected network.

The training phase of our neural network updates its parameters Φ (weights and biases) in order to reach an adequate accuracy value. The update algorithm chosen in our application is a supervised learning algorithm called gradient descent with mini-batches, where a multi-dimensional error function C (which depends on all the network parameters, over 70 000 in the case of LeNet-5) is calculated over mini-batches of 64 training examples (to avoid computing it over all 34799 images at every step). Once the error function is obtained, the algorithm searches for the function's decreasing direction by using the gradient with respect to each parameter, and then updates the parameters with formula (1) [8], where γ is the learning rate:

Φt+1 = Φt − γ∇C(Φt)    (1)

The algorithm repeats the described process until it reaches the desired results. At the end, the parameters of the neural network are well trained to know which features the network must extract (convolution phase) and which class it must attribute to the input (classification phase).
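As a sketch of how formula (1) translates into code, the following TensorFlow [12] training step applies one mini-batch update with a batch size of 64, as stated in the text. The learning rate value and the loss function are assumptions; the model and the (X_train, y_train) arrays are the ones sketched above.

import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)    # γ, value assumed
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()  # error function C, assumed

def train_step(model, x_batch, y_batch):
    # One application of formula (1): Φ ← Φ − γ∇C(Φ), with the gradient
    # estimated on a mini-batch of 64 examples instead of all 34799 images.
    with tf.GradientTape() as tape:
        loss = loss_fn(y_batch, model(x_batch, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

batches = tf.data.Dataset.from_tensor_slices((X_train, y_train)).batch(64)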
V. PERFORMANCE IMPROVEMENT
REFERENCES

[7] L. Abdi, "Deep learning traffic sign detection, recognition and augmentation," Proceedings of the Symposium on Applied Computing, Morocco, 2017, pp. 131-136.
[8] Y. Moualek, "Deep learning pour la classification des images," Master's thesis, Abou Bakr Belkaid University, Tlemcen, 2017.
[9] http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset
[10] Y. Le Cun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, Vol. 86, No. 11, pp. 2278-2324, 1998.
[11] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever and R. Salakhutdinov, "Dropout: A Simple Way to Prevent Neural Networks from Overfitting," Journal of Machine Learning Research, Vol. 15, pp. 1929-1958, 2014.
[12] https://www.tensorflow.org