Detection of Traffic Sign Using CNN
Simran* and Sristi Tandon (Students), Shilpi Khanna (Assistant Professor), Radhey Shyam (Professor)
Department of Computer Science and Engineering, SRMCEM, Lucknow, Uttar Pradesh, India
*Author for Correspondence. E-mail: [email protected]
Received: May 13, 2022; Accepted: May 23, 2022; Published: May 27, 2022
Citation: Simran, Sristi Tandon, Shilpi Khanna, Radhey Shyam. Detection of Traffic Sign Using CNN. Recent Trends in Parallel Computing. 2022; 9(1): 14–23p.
Abstract
For more than half a decade, the detection and recognition of traffic signs have been an active research area, especially in the field of driving automation, which is critical for driverless vehicles, as road accidents have increased drastically due to ignorance of traffic signs and rules. Such systems are frequently used for recognizing permanent or temporary road signs displayed at the side of small and large roads. A complete recognition system consists of the detection of traffic signs as well as their recognition, and it is typically deployed on portable devices. This helps the devices make better decisions and improve their driving algorithms. The parameters that need to be considered are the size of the traffic sign boards and the speed of the vehicles. This paper addresses the real-time detection and recognition of traffic signs from traffic sign boards through convolutional neural networks. Image segmentation techniques such as edge-based, region-based, and cluster-based segmentation play an important role in traffic sign detection and recognition. Furthermore, convolutional neural networks lead to a robust system that achieves higher accuracy in the detection phase as well as higher performance in the training and recognition phases. The proposed model achieves a recognition accuracy of up to 98.13% during training.
INTRODUCTION
Automatic detection and recognition of traffic signs is a popular problem, in which driving systems need to recognise the various traffic signs displayed along the large and small roads of our country. However, because of the complexity of driving surroundings, elements such as passing vehicles, buildings, and roadside vegetation can impair the detection and recognition of traffic signs. When neither the driver nor the vehicle vision system can effectively discern the meaning of a traffic sign, they are unable to make the best decision for the next driving manoeuvre. As a result, researchers have proposed various methods for the detection and recognition of traffic signs in real time.
The method to determine the location of a traffic sign from captured images is, firstly, to identify the traffic sign and, secondly, to recognize the identified traffic sign for further decisions. In addition to that, there are now two widely used methods for detecting traffic signs.
The first splits the traffic sign into two categories, shape and colour. For instance, the literature uses a Gaussian model to segment the image, determine the position of the traffic sign, extract histogram of oriented gradients (HOG) features, and lastly categorise the signs using support vector machines (SVM) [1]. Similarly, the position of the traffic sign can be discriminated by detecting Harris corners and applying random sample consensus (RANSAC) matching against traffic sign templates [2].
The other approach employs neural networks trained on traffic signs to improve recognition accuracy. The work in [3] creates a super-pixel map model based on the colour and boundary features of the scene image, as well as a priori position information, thresholds the segmented regions of interest of the traffic sign, and employs a convolutional neural network trained in Caffe to recognise traffic signs.
Furthermore, support vector machines have been combined with deep convolutional neural networks to extract image features and detect traffic signs [4, 5]. Deep learning, a subset of machine learning based on the concept of artificial neural networks, has achieved improved recognition accuracy [6–11]. Researchers have also used the Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm and pre-trained deep neural network models for image segmentation and classification [12]. On the other hand, AlexNet has been used to detect traffic signs in images, based on the concept of transfer learning [13]. After putting these ideas to the test, a model capable of detecting traffic signs was created, and its validity was confirmed by evaluating it on the photographs of the test set.
Such neural networks are mainly used for image classification. Nevertheless, various automatic image segmentation techniques have been proposed in the last four years [14]. AutoSegNet, for instance, uses a recurrent neural network to explore search designs of three kinds: (i) stacking of the down-sampling layer, (ii) the bridge layer, and (iii) the up-sampling layer. This method has a small search space compared with other relevant image segmentation methods, yet it can cover most of the state-of-the-art supervised image segmentation models. The authors tested AutoSegNet on two different datasets, and the findings reveal that it produces better segmentation results with crisp and continuous segmented edges, as well as with more relevant image information.
The remainder of the paper is organised as follows: the operation of the proposed model is described in "PROPOSED SYSTEM", the outcomes are presented in "RESULT", and the paper closes with "CONCLUSIONS".
PROPOSED SYSTEM
For our project, we used Indian traffic signs. We first pre-process the image in order to aid detection. For segmenting traffic signs from the background, a deep learning-based automatic segmentation method has been applied. The segmented traffic sign area is then given to a CNN to recognise the final result. The schematic diagram of the proposed system is illustrated in Figure 1.
Input Image
For the training and recognition of traffic signs, we have used colour images of size 960 × 720. The images have been captured with various image-capturing devices, and the database was prepared using various online and offline sources. One of the input images that has been considered for testing is illustrated in Figure 2.
Pre-processing
It begins by converting a coloured image, such as an RGB (3-channel) image, to LAB (L for lightness, and a and b for the colour opponents green–red and blue–yellow). After that, Contrast Limited Adaptive Histogram Equalization (CLAHE) is applied to the luminance plane to produce an improved image. Then, this image is converted back to RGB. An illustration of this process is shown in Figure 3.
Figure 3. The left image is the original image and the right image is the improved image [21].
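The paper does not include code, but the pre-processing step described above can be sketched as follows using OpenCV in Python; the library choice, function name, and file path are illustrative assumptions rather than the authors' actual implementation.

```python
import cv2

def enhance_image(bgr_image):
    """Apply CLAHE on the luminance channel of a LAB-converted image."""
    # Convert the 3-channel colour image to the LAB colour space
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    # Contrast Limited Adaptive Histogram Equalization on the L (lightness) plane
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)
    # Merge the equalized luminance back with the colour-opponent channels
    lab_eq = cv2.merge((l_eq, a, b))
    # Convert back to RGB/BGR for the subsequent stages
    return cv2.cvtColor(lab_eq, cv2.COLOR_LAB2BGR)

if __name__ == "__main__":
    img = cv2.imread("traffic_sign.jpg")  # hypothetical input path
    cv2.imwrite("traffic_sign_enhanced.jpg", enhance_image(img))
```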
ResNet
An illustration of ResNet in the form of a regular block and a residual block is shown in Figure 4, and the ResNet block with and without the 1 × 1 convolution is shown in Figure 5. Here, x is an input image and f(x) is the desired mapping obtained after a number of layers, such as weighted layers and the activation function (ReLU). The choice of activation function may also affect the final result.
In Figure 4, the portion within the dotted-line box on the left must directly learn the mapping f(x), whereas the portion within the dotted-line box on the right must learn the residual mapping f(x) - x, which is how the residual block gets its name. If the identity mapping f(x) = x is the desired underlying mapping, the residual mapping is easier to learn. The residual block of ResNet is illustrated on the right in Figure 4, where the solid line carrying the layer input x to the addition operator is called a residual connection. With residual blocks, inputs can propagate more quickly between layers through these residual connections.
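As a concrete illustration of the residual block discussed above, the following sketch follows the formulation in [20], with an optional 1 × 1 convolution on the skip path as in Figure 5; the PyTorch framework and layer sizes are assumptions, since the paper does not specify its implementation.

```python
import torch
from torch import nn
import torch.nn.functional as F

class Residual(nn.Module):
    """Residual block: learns f(x) - x and adds the input x back via a skip connection."""
    def __init__(self, in_channels, out_channels, use_1x1conv=False, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               padding=1, stride=stride)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        # Optional 1x1 convolution to match shapes when channels or stride change
        self.conv3 = (nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride)
                      if use_1x1conv else None)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        y = F.relu(self.bn1(self.conv1(x)))   # first weighted layer + ReLU
        y = self.bn2(self.conv2(y))           # second weighted layer
        if self.conv3:
            x = self.conv3(x)                 # reshape the identity path if needed
        return F.relu(y + x)                  # residual connection: add the input x
```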
Training
For training, a set of different images is used in the form of original images and labelled images. An original image and its labelled counterpart are shown in Figure 7. The background of the labelled image is green, and the traffic sign is red. Figure 8 displays the training outcomes; at the end of training, the proposed model reaches an accuracy of 98.13%.
Figure 8. Performance of the proposed model during training, shown in terms of (a) accuracy and (b) loss.
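A minimal sketch of how such pixel-wise training on image/label pairs could be set up is given below, assuming PyTorch and a tiny stand-in network; the tensors, batch size, learning rate, and image resolution are illustrative placeholders rather than the paper's actual configuration.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for the paired data described above: original images and
# per-pixel labels (0 = background, shown green; 1 = traffic sign, shown red).
images = torch.rand(25, 3, 180, 240)          # 25 samples, downscaled here for brevity
masks = torch.randint(0, 2, (25, 180, 240))   # per-pixel class labels
loader = DataLoader(TensorDataset(images, masks), batch_size=5, shuffle=True)

# A deliberately tiny stand-in for the ResNet-based segmentation network.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, kernel_size=1),          # 2 output maps: background / sign
)
criterion = nn.CrossEntropyLoss()             # pixel-wise cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        logits = model(x)                     # shape: (batch, 2, H, W)
        loss = criterion(logits, y)
        loss.backward()
        optimizer.step()
```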
Segmentation
The trained ResNet receives the input image and produces the segmented output, which comprises the background region and the sign region. The resulting segmented image is shown in Figure 9. In the segmented image, the red region represents a sign that is useful for decision-making, while the remaining green portion represents background that can be discarded.
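The segmentation step can be illustrated with the following hedged sketch: a trained network (here replaced by an untrained placeholder) predicts a per-pixel class, and the sign region is cropped out of the mask for the recognition stage. All names and sizes are assumptions for illustration.

```python
import torch
from torch import nn

# Placeholder model and image; in practice these would be the trained ResNet-based
# segmentation network and a pre-processed input image from the earlier stages.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 2, 1))
image = torch.rand(3, 180, 240)

model.eval()
with torch.no_grad():
    logits = model(image.unsqueeze(0))        # add a batch dimension
    pred = logits.argmax(dim=1).squeeze(0)    # per-pixel class: 0 = background, 1 = sign

sign_mask = pred == 1                         # the red region in Figure 9
ys, xs = torch.nonzero(sign_mask, as_tuple=True)
if len(ys) > 0:
    # Bounding box of the sign region, cropped and handed to the recognition CNN.
    sign_crop = image[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```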
CNN
The extracted traffic sign is provided as input to a CNN, which has been used to recognise traffic signs. For detecting five different traffic signs in the images, we have selected five different traffic sign classes from the popular Kaggle datasets. The training results are illustrated in Figure 11.
Figure 11. Performance of CNN training, shown in terms of (a) accuracy and (b) loss.
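For the recognition stage, a small five-class CNN along the lines described above might look like the following PyTorch sketch; the layer sizes and input resolution are illustrative assumptions, not the paper's exact architecture.

```python
import torch
from torch import nn

class SignCNN(nn.Module):
    """Small CNN that classifies a cropped sign into one of five classes."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),          # one logit per traffic sign class
        )

    def forward(self, x):                         # x: (batch, 3, 64, 64) resized crops
        return self.classifier(self.features(x))

model = SignCNN()
crop = torch.rand(1, 3, 64, 64)                   # a resized sign crop from segmentation
predicted_class = model(crop).argmax(dim=1)       # index of the recognised sign
```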
RESULT
Jupyter Notebook was used for the implementation of our proposed work. A recognised traffic sign, obtained using the image in Figure 2 as input, is shown in Figure 12. A comparison of different image segmentation techniques, in terms of their advantages and disadvantages, is shown in Table 1.
CONCLUSIONS
This article addresses the detection and recognition of different traffic signs that are displayed at the side of small and large roads. A convolutional neural network has been used to improve the efficiency and accuracy of traffic sign detection and recognition. Our interest is in developing a model that can be used for the detection and recognition of traffic signs when automated cars move on the roads. Here, automatic segmentation is done through deep learning. The size of the training dataset is 25 samples. The proposed model improves recognition accuracy, and the recognition accuracy gets better as the number of samples in the dataset is increased.
REFERENCES
1. Chang Failing, Huang Cui, Liu Chengdu. Traffic sign detection based on Gaussian colour model and SVM. Chinese Journal of Scientific Instrument. 2014; 35(1): 43–49.
2. Ge Xia, Yu Feng in, Chen Ying. RANSAC algorithm with Harris corner points for detecting traffic signs. Transducer and Microsystem Technologies. 2017; 36(3): 124–127.
3. Liu Hansen, Zhao Xiangmo, Li Qian. Traffic sign recognition method based on graphical model and convolutional neural network. Journal of Traffic and Transportation Engineering. 2016; (3): 124–127.
4. Wang Xiaoping, Huang Janie, Liu Wensum. Traffic sign recognition based on optimized convolutional neural network architecture. Journal of Computer Applications. 2017; 37(2): 530–534.
5. Xin Jin, Cain Fixing, Deng Haiti. Traffic sign classification based on deep learning of image invariant features. Journal of Computer-Aided Design & Computer Graphics. 2017; 29(4): 632–640.
6. Radhey Shyam. Convolutional Neural Network and its Architectures. Journal of Computer Technology & Applications. 2021; 12(2): 6–14p.
7. Srivastava Vartika, Shyam Radhey. Enhanced object detection with deep convolutional neural networks. International Journal of All Research Education and Scientific Methods. 2021; 9(7).
8. Zhou Zhihua. Machine Learning. Beijing: Tsinghua University Press; 2016: 23–47.
9. Radhey Shyam, Ria Singh. A Taxonomy of Machine Learning Techniques. Journal of Advancements in Robotics. 2021; 8(3): 18–25p.
10. Radhey Shyam, Riya Chakraborty. Machine Learning and its Dominant Paradigms. Journal of Advancements in Robotics. 2021; 8(2): 1–10p.
11. Radhey Shyam, Gautami Awasthi. Role of Deep Learning in Image Recognition. Journal of Image Processing & Pattern Recognition Progress. 2021; 8(2): 34–39p.
12. Ren S, He K, Girshick R. Faster R-CNN: Towards real-time object detection with region proposal networks. International Conference on Neural Information Processing Systems. MIT Press; 2015: 91–99.
13. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. International Conference on Neural Information Processing Systems. 2012: 1097–1105.
14. Zhimin Xu, Si Zuo. AutoSegNet: An automated neural network for image segmentation. IEEE Access. 2020; 8: 92452–92461.
15. Houben S, Stallkamp J, Salmen J. Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark. International Joint Conference on Neural Networks (IJCNN). IEEE; 2013: 1–8.
16. Zoph B, Le QV. Neural architecture search with reinforcement learning. arXiv preprint; 2016.
17. Baker B, Gupta O, Naik N, Raskar R. Designing neural network architectures using reinforcement learning. arXiv preprint; 2016.
18. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. Lecture Notes in Computer Science. 2015: 234–241.
19. Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille AL. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018; 40(4): 834–848.
20. Dive into Deep Learning. 7.6. Residual Networks (ResNet) [Online]. Available from: https://d2l.ai/chapter_convolutional-modern/resnet.html.
21. Jessin Mathew, Hari S. Traffic Sign Detection using Deep Learning Image Segmentation and CNN. International Research Journal of Engineering and Technology (IRJET). 2021; 8(5): 989–993.