Map Report Final
A PROJECT REPORT
Submitted by
J. ABINAYA 912420104002
B. MEENAKSHI 912420104019
S. PRAVINA 912420104025
ANNA UNIVERSITY :: CHENNAI 600 025
BONAFIDE CERTIFICATE
ACKNOWLEDGEMENT
ABSTRACT
In contemporary society, where time is a precious commodity, the
imperative to complete tasks efficiently is a pervasive human tendency. One
notable arena where this urgency manifests is in vehicular transportation.
However, the escalating density of vehicles, coupled with lax adherence to traffic
regulations, poses significant challenges to road safety. The prevalence of drivers
exceeding speed limits in restricted zones exacerbates the problem, leading to a
surge in road accidents. To address these concerns, we propose an innovative
solution: an Automatic Speed Control System (ASCS) for vehicles employing
video processing technology. This system aims to mitigate the risks associated
with over-speeding by detecting restricted areas and gradually reducing vehicle
speed in response. The methodology involves capturing real-time video footage
of road scenes using small cameras installed on vehicles. Through the
implementation of Convolutional Neural Network (CNN) algorithms, the system
identifies and interprets road signs denoting speed limits, school zones, and
hospital zones. Upon detection of a restricted area, the microcontroller integrated
into the vehicle initiates a process to gradually reduce vehicle speed. Additionally,
the system alerts the driver, ensuring awareness of the impending speed
adjustment. By integrating video processing technology with existing vehicle
systems, our proposed ASCS offers a proactive approach to enhancing road
safety. Through its ability to detect and respond to restricted areas in real-time,
this system has the potential to significantly reduce the incidence of over-
speeding-related accidents.
TABLE OF CONTENTS
ABSTRACT
LIST OF FIGURES
LIST OF ABBREVIATIONS
1. INTRODUCTION
1.1 Vehicle Density and Accidents
1.2 Contributing Factors to Road Accidents
1.3 Role of Human Behaviour in Road Safety
1.4 Utilization of Deep Learning Techniques for Zone Detection
1.5 Understanding Artificial Intelligence in Road Safety
1.6 The Significance of Speed Limits in Road Safety
2. LITERATURE SURVEY
2.1 Double FCOS: A Two-Stage Model Utilizing FCOS for Vehicle Detection in Various Remote Sensing Scenes
2.1.1 Merits
2.1.2 Demerits
2.2 A DVB-S-Based Multichannel Passive Radar System for Vehicle Detection
2.2.1 Merits
2.2.2 Demerits
2.3 VAID: An Aerial Image Dataset for Vehicle Detection and Classification
2.3.1 Merits
2.3.2 Demerits
2.4 CMNet: A Connect-and-Merge Convolutional Neural Network for Fast Vehicle Detection in Urban Traffic Surveillance
2.4.1 Merits
2.4.2 Demerits
2.5 Part-Aware Region Proposal for Vehicle Detection in High Occlusion Environment
2.5.1 Merits
2.5.2 Demerits
3. SYSTEM ANALYSIS
3.1 Existing System
3.1.1 Limitations of the Existing System
3.2 Proposed System
3.2.1 Advantages of the Proposed System
3.3 System Architecture
3.3.1 Architecture Diagram
3.3.2 Block Diagram
3.3.3 Flowchart Diagram
3.4 Module Description
3.4.1 Image Capture and Recognition Module
3.4.2 Decision-Making Logic Module
3.4.3 Voice Prompt Generation Module
3.4.4 Integration with PIC Microcontroller Module
3.4.5 Relay Drive for Speed Control Module
4. SYSTEMS DESIGN
4.1 UML Diagrams
4.2 Types of UML Diagram
4.2.1 Use Case Diagram
4.2.2 Class Diagram
4.2.3 Activity Diagram
4.2.4 State Chart Diagram
4.2.5 Deployment Diagram
5. IMPLEMENTATION
5.1 Optimization and Classification
5.1.1 CNN Algorithm
5.2 System Specification
5.2.1 Software Requirements
5.2.2 Hardware Requirements
5.2.3 Software Description
5.2.4 Hardware Description
6. CONCLUSION AND FUTURE WORK
6.1 Results and Discussion
6.2 Conclusion
6.3 Future Work
APPENDIX I
APPENDIX II
REFERENCES
LIST OF FIGURES
5.1 Feature

LIST OF ABBREVIATIONS
CHAPTER 1
1. INTRODUCTION
This report undertakes the ambitious task of dissecting the multifaceted issues
underpinning the escalating rate of accidents, with a primary focus on the
pervasive problem of over-speed driving. Concurrently, the report endeavors to
present innovative solutions harnessing advanced technologies such as deep
learning and artificial intelligence (AI). By delving into the intricate interplay
between societal dynamics, infrastructural deficiencies, and human behaviour,
this report aspires to provide comprehensive insights into the root causes of the
prevailing road safety crisis. Moreover, it seeks to articulate a roadmap for
effective intervention, leveraging cutting-edge technological tools to mitigate
risks and cultivate a safer driving environment for all road users.
The report advocates a holistic approach to road safety that integrates
technological innovations with policy reforms and public awareness campaigns.
Ultimately, the goal is to pave the way for a future where road accidents are
minimized, and every individual can travel safely and confidently on the
nation's roads.
The period between 2001 and 2015 marked a significant juncture in the
trajectory of vehicular traffic on Indian roadways, emblematic of both economic
expansion and societal progress. However, this epoch of exponential growth in
vehicle density has not unfolded without its accompanying challenges. Statistical
analyses conducted during this period underscore a clear and alarming correlation
between the burgeoning number of vehicles and the escalating frequency of road
accidents, thereby casting a profound shadow over the nation's road safety
landscape. Indeed, the proliferation of vehicles has served to spotlight a myriad
of intricate issues, necessitating a concerted and multifaceted approach to
addressing the underlying causes and effectively mitigating the associated risks
posed to public safety.
Together, these factors expose road users to accidents and underscore the
imperative for comprehensive interventions aimed at enhancing road safety
across the nation.
In recent times, there has been a notable surge in the utilization of deep
learning techniques, particularly Convolutional Neural Networks (CNNs), within
the domain of road safety. This trend reflects the growing recognition of AI's
potential to revolutionize traditional approaches to image analysis, classification,
and pattern recognition. By harnessing the power of AI, specifically in the realms
of image processing and feature extraction, these advanced techniques offer a
promising avenue for enhancing road safety measures. Through the analysis of
real-time images captured by strategically positioned cameras, these systems can
discern critical information regarding accident-prone zones, hospital precincts,
and school areas. This analytical prowess enables proactive identification of
potential hazards, empowering authorities to implement timely interventions and
alert drivers, thereby mitigating risks and fostering a safer driving environment
for all road users.
1.5 Understanding Artificial Intelligence in Road Safety
CHAPTER 2
2. LITERATURE SURVEY
2.1 TITLE : Double FCOS: A Two-Stage Model Utilizing FCOS for Vehicle
Detection in Various Remote Sensing Scenes
AUTHOR : Peng Gao, Tian Tian
YEAR : 2022
2.1.1 MERITS
• Improved detection accuracy, especially for tiny or weak vehicles, through
the carefully designed two-stage detection process.
2.1.2 DEMERITS
• Potentially increased computational complexity compared to single-stage
detection models, due to the additional stages and classification branches.
2.2 TITLE : A DVB-S-Based Multichannel Passive Radar System for Vehicle
Detection
AUTHOR : Junjie Li, Junkang Wei
YEAR : 2021
2.2.1 MERITS
• Simple implementation and low computational complexity compared to
multichannel techniques.
2.2.2 DEMERITS
• Limited detection gain and susceptibility to interference, particularly in
environments with high levels of independently and identically distributed
(i.i.d.) interference and Doppler effects.
2.3 TITLE : VAID: An Aerial Image Dataset for Vehicle Detection and
Classification
AUTHOR : Huei-Yung Lin
YEAR : 2020
2.3.1 MERITS
• VAID dataset offers a rich collection of aerial images with diverse traffic
conditions and annotated vehicle categories, facilitating robust algorithm
development and evaluation.
2.3.2 DEMERITS
• Limited to aerial imagery, VAID may not fully capture the complexities of
vehicle detection in ground-level scenarios, potentially impacting
algorithm generalization.
2.4 TITLE : CMNet: A Connect-and-Merge Convolutional Neural Network
for Fast Vehicle Detection in Urban Traffic Surveillance
AUTHOR : Fukai Zhang
YEAR : 2019
2.4.1 MERITS
• The CMNet architecture offers efficient feature extraction through
connect-and-merge residual branches, enhancing detection accuracy and
speed for real-world traffic surveillance data.
2.4.2 DEMERITS
• The complexity of the CMNet architecture may require significant
computational resources, potentially limiting its applicability on resource-
constrained platforms.
2.5 TITLE : Part-Aware Region Proposal for Vehicle Detection in High
Occlusion Environment
AUTHOR : Weiwei Zhang
YEAR : 2019
This paper presents a two-stage detector based on Faster R-CNN for highly
occluded vehicle detection, in which a part-aware region proposal network is
integrated to sense global and local visual knowledge among different vehicle
attributes. The model simultaneously generates part-level proposals and
instance-level proposals in the first stage. Then, the parts belonging to the
same vehicle are encoded and reconfigured into a compositional whole-vehicle
proposal through part affinity fields, allowing the model to generate integral
candidates and mitigate the impact of occlusion to the utmost extent. Extensive
experiments conducted on the KITTI benchmark show that the method outperforms
most machine-learning-based vehicle detection methods and achieves high recall
in severely occluded application scenarios.
2.5.1 MERITS
• The integration of a part-aware region proposal network enables the model
to generate both partial-level and instance-level proposals simultaneously,
enhancing detection accuracy and robustness, especially in highly
occluded scenarios.
2.5.2 DEMERITS
• The complexity introduced by the part-aware region proposal network may
increase computational overhead, potentially impacting real-time
performance on resource-constrained platforms.
CHAPTER 3
3. SYSTEM ANALYSIS
The proposed system for accident, school, and hospital zone detection,
utilizing Convolutional Neural Networks (CNN), is designed to enhance road
safety by accurately pinpointing areas prone to accidents. Through the integration
of cameras and sensors, the system captures images or video footage of targeted
areas, which are then analysed by CNN to identify and categorize accident,
school, and hospital zones. By training the CNN on extensive datasets comprising
images of these specific zones, the system becomes adept at detecting accidents
swiftly.
3.3 SYSTEM ARCHITECTURE
3.3.2 BLOCK DIAGRAM
The system employs cameras and sensors to capture information about the
surrounding environment, which is then processed by Convolutional Neural
Networks (CNNs). These CNNs analyse the data to identify accident-prone areas,
school zones, and hospital zones. The system then relays real-time information to
drivers, prompting them to exercise caution and reduce speed in designated zones.
Additionally, the system integrates with traffic management systems to optimize
traffic flow and prevent accidents. By automatically adjusting vehicle speed in
school and college zones based on the detected data, the system contributes to
maintaining road safety standards. Continuous monitoring ensures that the system
remains effective in its operation.
3.4 MODULES USED
1. Image Capture And Recognition Module
2. Decision-Making Logic Module
3. Voice Prompt Generation Module
4. Integration With PIC Microcontroller Module
5. Relay Drive For Speed Control Module
MODULE DESCRIPTION
3.4.2 DECISION-MAKING LOGIC MODULE
3.4.4 INTEGRATION WITH PIC MICROCONTROLLER MODULE
The Integration with PIC Microcontroller Module acts as the intermediary
between the Decision-Making Logic Module and the PIC microcontroller
responsible for vehicle speed control. This module enables seamless
communication by transmitting instructions and relevant data from the decision-
making logic to the PIC microcontroller. These instructions include commands
for implementing speed control measures based on the detected zones, such as
adjusting vehicle speed in accident-prone areas or school zones. By facilitating
this communication, the module ensures that the PIC microcontroller receives
accurate and timely information to effectively regulate vehicle speed in response
to changing road conditions. This integration enhances the system's ability to
enforce safety measures and optimize traffic flow, ultimately contributing to
improved road safety outcomes.
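To make the hand-off concrete, here is a hypothetical sketch of how a detected zone could be framed as a command for the PIC microcontroller. The start byte, command codes, and checksum scheme are illustrative assumptions, not the report's actual protocol; in the real system, the resulting bytes would be transmitted to the PIC over a serial link.

```python
# Hypothetical zone-to-command mapping; the actual codes used by the
# firmware are not specified in this report.
ZONE_COMMANDS = {
    "normal":   0x00,
    "school":   0x01,
    "hospital": 0x02,
    "accident": 0x03,
}

def encode_zone_command(zone: str) -> bytes:
    """Frame a detected zone as a start byte, command byte, and checksum."""
    cmd = ZONE_COMMANDS[zone]
    start = 0xAA                      # assumed start-of-frame marker
    checksum = (start + cmd) & 0xFF   # simple additive checksum
    return bytes([start, cmd, checksum])

print(encode_zone_command("school").hex())  # aa01ab
```

A framing-plus-checksum scheme like this lets the microcontroller reject corrupted bytes on a noisy in-vehicle link before acting on a speed command.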
CHAPTER 4
4 SYSTEMS DESIGN
The system design phase is a pivotal step in any project, laying the
groundwork for its architecture and functionality. It begins with a comprehensive
analysis of requirements to ensure alignment with user needs. Architectural
blueprints are then crafted, outlining the structure and relationships of system
components. Detailed designs for each element are developed, clarifying
functionality and inter-component interfaces. Data flow pathways are
meticulously mapped to ensure efficient handling throughout the system.
Technology selection is critical, with tools chosen to best implement each
component's requirements. Integration planning is paramount to ensure seamless
operation and data exchange among components. Rigorous testing strategies are
devised to validate functionality and performance. Deployment, maintenance,
and support plans are established to ensure successful implementation and
ongoing operation. Ultimately, the system design process ensures that the
proposed solution is robust, scalable, and capable of meeting user expectations.
Furthermore, use case diagrams highlight the system's functionalities from the
perspective of its users, illustrating the different use cases and the actors involved
in each scenario. This provides stakeholders with a clear understanding of the
system's capabilities and how users interact with it.
Overall, UML diagrams play a vital role in system design by providing a visual
representation of its structure, behaviour, and functionalities, facilitating
communication, collaboration, and decision-making among stakeholders
throughout the development process.
• CLASS DIAGRAM
• USECASE DIAGRAM
• ACTIVITY DIAGRAM
• STATECHART DIAGRAM
• DEPLOYMENT DIAGRAM
4.2.1 USECASE DIAGRAM
4.2.2 CLASS DIAGRAM
4.2.3 ACTIVITY DIAGRAM
4.2.4 STATECHART DIAGRAM
Statechart diagrams depict the various states of an object or system and the
transitions between these states in response to events. States represent different
conditions or modes in which the system can exist. Transitions depict the
movement from one state to another triggered by specific events or conditions.
Initial and final states mark the start and end points of the diagram, respectively.
Actions associated with transitions denote the behavior or activities performed
when transitioning between states. Statechart diagrams aid in modeling complex
systems, capturing their dynamic behavior and interaction over time. They are
valuable tools in software engineering for designing and understanding stateful
systems and their behavior under different scenarios.
4.2.5 DEPLOYMENT DIAGRAM
CHAPTER 5
5 IMPLEMENTATION
One of the best-known artificial neural networks (ANNs) is the convolutional
neural network (CNN). It is used extensively in the domains of image and video
recognition and is founded on the mathematical idea of convolution. It closely
resembles a multi-layer perceptron; however, before the fully connected hidden
neuron layer, it has a succession of convolution layers and pooling layers.
Here,
• Two series of convolution and pooling layers are used; they receive and
process the input (e.g. an image).
• A single fully connected layer is used to output the result (e.g. the
classification of the image).
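The layer arrangement described above can be sketched in Keras. The filter counts, the 28x28 input size, and the ten output classes are illustrative assumptions, not values specified by this report.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Two convolution + pooling stages followed by a single fully connected
# output layer, matching the structure described above.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # e.g. ten sign classes
])
model.summary()
```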
CNNs restrict connectivity so that each layer's neurons analyze specific areas
called "local receptive fields" without considering changes beyond these
borders. Convolutions occur as connections pass weights from one layer to
another, enabling feature extraction. Pooling layers, following the
convolutional layers, condense the feature maps generated by the convolutional
networks, facilitating the creation of new layers with neurons from earlier
ones.
• Convolution
• Pooling
• The input will contain the image's raw pixel values and its three colour
channels, R, G, and B.
• CONV layer will compute the output of neurons that are connected to
local regions in the input, each computing a dot product between their weights
and a small region they are connected to in the input volume.
• RELU layer will apply an element-wise activation function. This leaves
the size of the volume unchanged.
• The POOL layer conducts downsampling across spatial dimensions,
yielding a volume like [16x16x12]. The FC (fully-connected) layer calculates
class scores, resulting in a volume of a specific size. Similar to regular Neural
Networks, each neuron in this layer connects to all numbers in the preceding
volume.
Layers in CNN
➢ Input layer
➢ Convo layer (Convo + ReLU)
➢ Pooling layer
➢ Fully connected(FC) layer
➢ Softmax/logistic layer
➢ Output layer
I. Input Layer
Image data in a CNN requires transformation from its original three-dimensional
matrix representation into a single column; for example, a 28x28 image is
converted into a 784x1 array before input.
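This reshaping step can be illustrated with NumPy (the random pixel values stand in for real image data):

```python
import numpy as np

# A dummy 28x28 grayscale image.
image = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# Flatten the 2-D matrix into a single 784x1 column before feeding it
# to the input layer.
column = image.reshape(784, 1)

print(column.shape)  # (28, 28) becomes (784, 1)
```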
Convo Layer
Because the features of the picture are extracted within this layer, it is also
known as the feature extractor layer. To perform the convolution operation, a
portion of the image is first connected to the Convo layer, and the dot product
is calculated between the filter and the receptive field, a local area of the
input image that has the same size as the filter. The operation's output is a
single number in the output volume. Next, via a stride, the filter is moved
over the next receptive field of the same input image and the process is
repeated. The same procedure continues until the entire image has been covered;
the output then becomes the input for the next layer. The Convo layer also
contains a ReLU activation to set all negative values to zero.
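The dot-product-and-stride procedure described above can be written directly in NumPy. This is a minimal sketch; the 4x4 input and 2x2 filter are assumed example values.

```python
import numpy as np

def convolve2d(image, kernel, stride=1):
    """Slide `kernel` over `image` with the given stride, computing a dot
    product at each receptive field, as described above."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out_h = (ih - kh) // stride + 1
    out_w = (iw - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Receptive field: a local region the same size as the filter.
            field = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(field * kernel)
    return out

image = np.arange(16).reshape(4, 4)
kernel = np.ones((2, 2))
print(convolve2d(image, kernel, stride=2))  # [[10. 18.] [42. 50.]]
```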
Pooling Layer
Pooling layers provide several benefits to CNNs. Firstly, they help in achieving
translation invariance, meaning that the network can recognize patterns
regardless of their position in the input image. Additionally, pooling layers
contribute to feature learning by extracting the most relevant information from
the input feature maps. Finally, pooling layers aid in reducing overfitting by
discarding unnecessary information and promoting generalization.
The pooling layer has no learnable parameters, but it has two hyperparameters:
filter size (F) and stride (S). For an input of width W1, height H1, and depth
D1, the output dimensions are
W2 = (W1 − F)/S + 1
H2 = (H1 − F)/S + 1
D2 = D1
where W2, H2, and D2 are the width, height, and depth of the output.
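A quick check of these formulas in Python; the 32x32x3 input, 2x2 filter, and stride of 2 are example values, not figures from this report.

```python
def pool_output_shape(w1, h1, d1, f, s):
    """Apply W2 = (W1 - F)/S + 1, H2 = (H1 - F)/S + 1, D2 = D1."""
    w2 = (w1 - f) // s + 1
    h2 = (h1 - f) // s + 1
    return w2, h2, d1

# A 32x32x3 input pooled with a 2x2 filter at stride 2 halves each
# spatial dimension while leaving the depth unchanged.
print(pool_output_shape(32, 32, 3, f=2, s=2))  # (16, 16, 3)
```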
The fully connected layer involves weights, biases, and neurons. It connects
the neurons in one layer to the neurons in another layer and is trained to
classify images into different categories.
The Softmax or logistic layer is the last layer of the CNN, residing at the end
of the FC layer. A logistic layer is used for binary classification, while
softmax is used for multi-class classification.
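The role of the softmax layer can be shown numerically: it converts a vector of class scores into probabilities that sum to one. A minimal sketch with assumed example scores:

```python
import numpy as np

def softmax(scores):
    # Subtract the maximum for numerical stability, then exponentiate
    # and normalize so the outputs sum to 1.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # example class scores
probs = softmax(scores)
print(probs.sum())   # 1.0
print(probs.argmax())  # 0, the highest-scoring class
```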
Output Layer
The output layer contains the label in one-hot encoded form.
5.2.1 SOFTWARE REQUIREMENTS
• CNN
• Embedded C language
5.2.2 HARDWARE REQUIREMENTS
• Power supply
• PIC(16F877A) Microcontroller
• Relay driver
• DC motor
• CAMERA
➢ PYTHON
➢ TENSORFLOW
➢ KERAS
➢ NUMPY
➢ PILLOW
➢ SCIPY
➢ OPENCV
➢ CONVOLUTIONAL NEURAL NETWORK (CNN)
➢ MPLAB IDE
PYTHON
It is used for:
• Software development,
• Mathematics,
• System scripting.
PYTHON NUMPY
NumPy, short for Numeric Python, is a Python library for computing and
processing multidimensional and linear array elements. NumPy offers several
benefits for data analysis.
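A small example of the loop-free array arithmetic NumPy provides:

```python
import numpy as np

# Element-wise arithmetic and reductions on a multidimensional array,
# with no explicit Python loops.
a = np.array([[1, 2, 3], [4, 5, 6]])
doubled = a * 2             # every element multiplied by 2
col_means = a.mean(axis=0)  # mean of each column: [2.5, 3.5, 4.5]
print(doubled)
print(col_means)
```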
OPEN CV
OpenCV provides functions such as Canny Edge Detection, Template Matching,
Blob Detection, Contour detection, Mouse Events, Gaussian Blur, and others.
Each number represents the picture intensity at a particular location. In a
grayscale image, each pixel holds only one value: the intensity of the black
colour at that location.
1. Grayscale
Grayscale images contain only shades ranging from black to white. Black is
considered to have the lowest intensity, whereas white has the highest,
according to the contrast assessment of intensity. The computer assigns each
pixel in a grayscale image a value depending on how dark it is.
2. RGB
An RGB colour is a mixture of red, green, and blue that results in a new
colour. The computer extracts each pixel's values and organizes the
information into an array for interpretation.
Installation of OpenCV
The first step is to download the latest Anaconda graphical installer for
Windows from its official site, choosing the 32-bit or 64-bit installer as
appropriate. A distribution that works with Python 3.7 is suggested.
Open the command prompt and run the following code to check whether OpenCV
is installed.
Scikit-learn :
PANDAS
Library features:
CAMERA :
TENSORFLOW- INTRODUCTION
TensorFlow — Installation
TENSORFLOW: CONVOLUTIONAL NEURAL NETWORKS
After understanding machine-learning concepts, we can now shift our focus to
deep learning concepts. Deep learning is a division of machine learning and is
considered a crucial step taken by researchers in recent decades. Examples of
deep learning implementations include applications like image recognition and
speech recognition.
METHODOLOGY:
CNNs consist of convolutional layers, pooling layers, ReLU layers, and fully
connected layers, forming two main subnetworks: convolutional layers for
feature extraction and a deep neural network. Pooling layers, such as max
pooling, downscale the image by determining the maximum value in a region.
Fully connected layers, akin to traditional neural network hidden layers, require
consideration of the number of outputs during training and can utilize transfer
learning techniques.
• AlexNet
• VGG-16
• VGG-19
• Inception V3
• XCeption
• ResNet-50
The differences among these networks lie mainly in the network architecture
(the number, arrangement, and parameters of the convolution, pooling, dropout,
and fully connected layers) and, consequently, in the overall number of
trainable parameters, the computational complexity of the training process,
and the attained performance.
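A transfer-learning sketch using one of the architectures above as a frozen feature extractor. This is a minimal Keras example; `weights=None` is used here so the sketch runs offline, whereas real transfer learning would load `weights="imagenet"`, and the three output classes are an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# VGG16 convolutional base as the feature extractor; only the small
# classifier head on top is trained for our own classes.
base = tf.keras.applications.VGG16(
    weights=None,            # use "imagenet" to load pretrained weights
    include_top=False,
    input_shape=(224, 224, 3),
)
base.trainable = False  # freeze the convolutional layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),  # e.g. school/hospital/accident
])
print(model.output_shape)  # (None, 3)
```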
IMAGE ARCHIVES
The Python Imaging Library is best suited for image archival and batch
processing applications. The Python Pillow package can be used for creating
thumbnails, converting images from one format to another, printing images, etc.
IMAGE DISPLAY
Image Processing
3. Installing Pillow using pip
To install Pillow using pip, run "pip install pillow" in your command prompt.
If pip and Pillow are already installed on your computer, the command simply
reports 'Requirement already satisfied'.
To display images, the Pillow library uses the Image class. The Image module
inside the Pillow package contains important built-in functions, such as
loading images and creating new images.
Example

from PIL import Image

im = Image.open("images/cuba.jpg")
im = im.rotate(45)
im.show()
MPLAB IDE SOFTWARE
MPLAB is a proprietary freeware integrated development environment for the
development of embedded applications on PIC and dsPIC microcontrollers, and is
developed by Microchip Technology.
MPLAB 8.X is the last version of the legacy MPLAB IDE technology, custom built
by Microchip Technology in Microsoft Visual C++. MPLAB supports project
management, editing, debugging, and programming of Microchip 8-bit, 16-bit, and
32-bit PIC microcontrollers. MPLAB only works on Microsoft Windows. MPLAB is
still available from Microchip's archives but is not recommended for new
projects.
➢ POWER SUPPLY
➢ PIC MICROCONTROLLER
➢ RELAY DRIVER
➢ DC MOTOR
POWER SUPPLY
Transformer:
Vs × Is = Vp × Ip (in an ideal transformer, output power equals input power)
The low voltage AC output is suitable for lamps, heaters and special AC motors.
It is not suitable for electronic circuits unless they include a rectifier and a
smoothing capacitor.
Rectifier:
The varying DC output is suitable for lamps, heaters and standard motors. It is
not suitable for electronic circuits unless they include a smoothing capacitor.
Bridge rectifier:
Single diode rectifier:
A single diode can be used as a rectifier, but this produces half-wave varying
DC, which has gaps when the AC is negative. It is hard to smooth this
sufficiently well to supply electronic circuits unless they require a very
small current, so that the smoothing capacitor does not significantly
discharge during the gaps.
Smoothing:
Smoothing involves using a large electrolytic capacitor across the DC
supply to act as a reservoir, supplying current to the output during voltage drops.
When the rectified DC voltage fluctuates, the capacitor quickly charges near its
peak and then discharges as it provides current to the output. This process results
in a smoothed output voltage, reducing fluctuations and creating a more stable
power supply.
The smooth DC output has a small ripple. It is suitable for most electronic circuits.
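The ripple left after smoothing can be estimated from the standard relation V_ripple ≈ I / (2·f·C) for a full-wave rectifier. A back-of-the-envelope sketch; the 100 mA load, 50 Hz mains, and 1000 µF capacitor are assumed figures, not values from this design.

```python
# Peak-to-peak ripple estimate for a full-wave rectifier with a
# reservoir capacitor: V_ripple ≈ I / (2 * f * C).
I = 0.1        # load current in amperes (assumed)
f = 50         # mains frequency in hertz (assumed)
C = 1000e-6    # reservoir capacitance in farads (assumed)

v_ripple = I / (2 * f * C)
print(round(v_ripple, 3))  # 1.0 volt peak-to-peak
```

Working the formula the other way gives the capacitance needed to hold ripple below a chosen limit for a given load.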
Regulator:
Voltage regulator ICs come in fixed (like 5V, 12V, and 15V) or variable
output voltages and are rated by their maximum current capacity. Negative
voltage regulators are also available, often for dual supply setups. These ICs
typically feature automatic protection against excessive current and overheating.
The LM78XX series offers various fixed output voltages, suitable for diverse
applications such as local on-card regulation, logic systems, instrumentation, and
HiFi equipment. While primarily fixed voltage regulators, they can also be
configured for adjustable voltages and currents using external components. Many
fixed voltage regulator ICs have 3 leads and resemble power transistors, such as
the 7805 +5V 1A regulator, often equipped with a heat sink attachment hole if
needed.
1. Positive regulator
1. Input pin
2. Ground pin
3. Output pin
It regulates the positive voltage.
2. Negative regulator
1. Ground pin
2. Input pin
3. Output pin
It regulates the negative voltage.
The regulated DC output is very smooth with no ripple. It is suitable for all
electronic circuits.
PIC MICROCONTROLLER
Figure: PIC IC pin diagram
The PIC16F877A microcontroller is widely applied in devices such as remote
sensors, security systems, home automation, and various industrial instruments.
Its EEPROM feature enables permanent storage of vital information like
transmitter codes and receiver frequencies, enhancing its utility. With low cost
and user-friendly handling, it finds versatility in coprocessor applications and
timer functions, expanding its usage beyond conventional microcontroller
applications. The PIC series, known for its RISC architecture and CMOS
fabrication, offers advantages like low power consumption, compact chip size,
and immunity to noise, making it a preferred choice for diverse projects.
PIC 16F877A:
CORE FEATURES
Number of pins: 40
Number of I/O pins: 33
Comparators: 2
PIN DESCRIPTION
RELAY DRIVER
Relays are electrically operated switches that use a magnetic field generated
by the coil current to change the switch contacts. Typically, relays have two
switch positions and double-throw (changeover) switch contacts. They allow one
circuit to switch a second circuit, completely separate from the first, using
a magnetic and mechanical link rather than an electrical connection. The coil
of a relay requires a relatively large current, typically around 30 mA for a
12 V relay, which most ICs cannot provide. Therefore, transistors are commonly
used to amplify the small IC current to the larger value required for the
relay coil.
DC MOTOR
A simple motor has six parts:
1. Armature or rotor
2. Commutator
3. Brushes
4. Axle
5. Field magnet
6. DC power supply of some sort
Working Principle of DC Motor
DC motors are classified by the way the field winding is connected:
• Shunt wound.
• Series wound.
• Compound wound.
• Separately excited.
CHAPTER 6
6. CONCLUSION AND FUTURE WORK
6.1 RESULTS AND DISCUSSION
The implementation of the proposed system for accident zone, school zone,
and hospital zone detection using Convolutional Neural Networks (CNN) has
yielded promising results in enhancing road safety. Through rigorous testing and
analysis, the system's effectiveness in accurately identifying these critical areas
in real-time has been demonstrated. The system showcased high accuracy in
detecting accident-prone areas, school zones, and hospital zones using image
processing techniques and CNN algorithms. Integration with a PIC
microcontroller facilitated efficient speed control measures based on the detected
zones, successfully adjusting vehicle speeds and mitigating the risk of accidents.
Discussions surrounding the system's performance emphasized its scalability and
potential for integration with existing traffic management infrastructure.
Challenges include refining zone detection algorithms, optimizing speed control
mechanisms, and addressing scalability issues in large-scale deployments. Future
work may explore sensor integration for enhanced data collection and advanced
AI algorithms for predictive analysis. Moreover, its user-friendly interface and
seamless integration with existing traffic control systems ensure ease of adoption
and operation. Further refinements in the system's hardware and software
components can enhance its robustness and reliability in diverse environmental
conditions. Continuous evaluation and optimization are essential to ensure the
system's long-term effectiveness in mitigating road accidents and ensuring safer
roadways for all users. Overall, leveraging advanced technologies such as deep
learning and artificial intelligence can significantly improve road safety and
reduce road accidents.
6.2 CONCLUSION
Implemented within the current infrastructure, this project not only ensures
seamless integration but also boasts a cost-effective and durable design,
guaranteeing maximum safety for pedestrians and the community. By providing
drivers with real-time road information without distracting them from their
primary task of driving, the system enhances overall road safety. Additionally,
drivers receive timely notifications regarding their vehicle's status, further
improving their awareness and control. With its low power consumption, the
project prioritizes energy efficiency, contributing to sustainability efforts.
Furthermore, the automatic speed control feature enhances safety measures by
promptly responding to hazard signals from the external environment. Overall,
this comprehensive approach underscores the project's commitment to enhancing
road safety and user experience.
APPENDICES
APPENDIX I
SOURCE CODE
SOFTWARE
import numpy as np
import os
import sys
import tensorflow as tf
from distutils.version import StrictVersion
from collections import defaultdict
from PIL import Image
from object_detection.utils import ops as utils_ops

sys.path.append("..")

if StrictVersion(tf.__version__) < StrictVersion('1.9.0'):
    raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!')

from utils import label_map_util
from utils import visualization_utils as vis_util

MODEL_NAME = 'inference_graph'
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'
PATH_TO_LABELS = 'training/labelmap.pbtxt'

# Load the frozen detection graph into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

category_index = label_map_util.create_category_index_from_labelmap(
    PATH_TO_LABELS, use_display_name=True)

def run_inference_for_single_image(image, graph):
    # Relies on the module-level `sess` and `tensor_dict` set up in the
    # capture loop below.
    global a2
    if 'detection_masks' in tensor_dict:
        # The following processing is only for a single image.
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
        # Reframing translates the masks from box coordinates to image
        # coordinates so they fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0],
                                   [real_num_detection, -1])
        detection_masks = tf.slice(detection_masks, [0, 0, 0],
                                   [real_num_detection, -1, -1])
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes, image.shape[0], image.shape[1])
        detection_masks_reframed = tf.cast(
            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
        # Follow the convention by adding back the batch dimension.
        tensor_dict['detection_masks'] = tf.expand_dims(
            detection_masks_reframed, 0)

    image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
    # Run inference.
    output_dict = sess.run(tensor_dict,
                           feed_dict={image_tensor: np.expand_dims(image, 0)})
    # All outputs are float32 numpy arrays, so convert types as appropriate.
    output_dict['num_detections'] = int(output_dict['num_detections'][0])
    output_dict['detection_classes'] = output_dict[
        'detection_classes'][0].astype(np.uint8)
    output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
    output_dict['detection_scores'] = output_dict['detection_scores'][0]
    if 'detection_masks' in output_dict:
        output_dict['detection_masks'] = output_dict['detection_masks'][0]

    # Report the zone type when the top detection is confident enough.
    if output_dict['detection_classes'][0] == 1 and output_dict['detection_scores'][0] > 0.70:
        print('Accident_zone')
        #a2=1
    if output_dict['detection_classes'][0] == 2 and output_dict['detection_scores'][0] > 0.70:
        print('Hospital_zone')
        #a2=1
    if output_dict['detection_classes'][0] == 3 and output_dict['detection_scores'][0] > 0.70:
        print('School_zone')
    if a2 == 1:
        a2 = 0
##        sleep(1)
##        email()
##        sleep(1)
    return output_dict

a1 = 0
a2 = 0

import cv2
cap = cv2.VideoCapture(0)
try:
    with detection_graph.as_default():
        with tf.Session() as sess:
            # Get handles to the input and output tensors.
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops
                                for output in op.outputs}
            tensor_dict = {}
            for key in ['num_detections', 'detection_boxes',
                        'detection_scores', 'detection_classes',
                        'detection_masks']:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
                        tensor_name)
            while True:
                (__, image_np) = cap.read()
                # Expand dimensions since the model expects images to have
                # shape [1, None, None, 3].
                image_np_expanded = np.expand_dims(image_np, axis=0)
                cv2.imwrite('capture.jpg', image_np)
                # Actual detection.
                output_dict = run_inference_for_single_image(image_np,
                                                             detection_graph)
                # Visualization of the results of a detection.
                vis_util.visualize_boxes_and_labels_on_image_array(
                    image_np,
                    output_dict['detection_boxes'],
                    output_dict['detection_classes'],
                    output_dict['detection_scores'],
                    category_index,
                    instance_masks=output_dict.get('detection_masks'),
                    use_normalized_coordinates=True,
                    line_thickness=8)
                cv2.imshow('object_detection', cv2.resize(image_np, (800, 600)))
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    cap.release()
                    cv2.destroyAllWindows()
                    break
except Exception as e:
    print(e)
    cap.release()
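The PIC firmware in the hardware listing that follows expects single-character commands over UART ('1' to engage speed reduction, '2' to release it), but the detection loop above does not show the serial link. A minimal, hypothetical mapping from the top detection to that command byte might look like this (the function name and 0.70 threshold mirror the detection code; the serial write itself would use a library such as pyserial and is omitted):

```python
def zone_to_command(class_id, score, threshold=0.70):
    """Map the top detection to the UART command byte the firmware expects.

    Classes follow the label map used above:
    1 = accident zone, 2 = hospital zone, 3 = school zone.
    """
    restricted = {1, 2, 3}
    if score > threshold and class_id in restricted:
        return b'1'   # restricted zone detected: slow the vehicle
    return b'2'       # no restricted zone: release the speed limit
```

For example, `zone_to_command(3, 0.85)` returns `b'1'` (confident school-zone detection), while a low-confidence detection falls back to `b'2'`.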
HARDWARE
#include <pic.h>
__CONFIG(0X3F72);

#define R1 RC0

int x;

void delay(unsigned int X)
{
    while(X--);
}

//**************************
// UART INIT
//**************************
void TX_init()
{
    TXSTA = 0X24;
    RCSTA = 0X90;
    SPBRG = 25;
}

void RX_init()
{
    TXSTA = 0x24;
    RCSTA = 0x90;
    SPBRG = 25;
    BRGH = 1;
    GIE = 1;
    PEIE = 1;
    RCIE = 1;
}

// UART receive interrupt: store the latest command byte in x.
void interrupt rcx(void)
{
    if(RCIF == 1)
    {
        RCIF = 0;
        x = RCREG;
    }
}

// Transmit a null-terminated string over UART.
void serial(const unsigned char *a)
{
    unsigned char o;
    for(o = 0; a[o] != 0; o++)
    {
        TXREG = a[o];
        delay(20000);
        //TXREG = 0X0D;
    }
}

int main()
{
    TRISC = 0X80;
    PORTC = 0X00;
    RX_init();
    TX_init();
    delay(100);
    while(1)
    {
        if(x == '1')   // restricted zone: engage speed reduction
        {
            R1 = 1;
            delay(2000);
        }
        if(x == '2')   // zone cleared: release speed reduction
        {
            R1 = 0;
            delay(2000);
        }
    }
}
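The UART initialization above sets SPBRG = 25 with BRGH = 1 (high-speed mode), which on mid-range PICs corresponds to baud = Fosc / (16 × (SPBRG + 1)). The crystal frequency is not stated in the code; assuming a 4 MHz oscillator, this works out to approximately 9600 baud, as a quick check shows:

```python
def uart_baud(fosc_hz, spbrg, brgh=1):
    """Asynchronous baud rate for a mid-range PIC USART.

    High-speed mode (BRGH = 1) divides Fosc by 16*(SPBRG+1);
    low-speed mode (BRGH = 0) divides by 64*(SPBRG+1).
    """
    divisor = 16 if brgh else 64
    return fosc_hz / (divisor * (spbrg + 1))

print(round(uart_baud(4_000_000, 25)))  # 9615, i.e. the standard 9600-baud setting
```

The same SPBRG value with BRGH = 0 would give roughly 2400 baud, so the firmware's choice of BRGH = 1 matters for matching the host's serial settings.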
APPENDIX II
➢ ACCIDENT_ZONE
➢ HOSPITAL_ZONE
➢ SCHOOL_ZONE
REFERENCES
1. S. Arabi, A. Haghighat and A. Sharma, "A deep-learning-based computer
vision solution for construction vehicle detection", Comput.-Aided Civil
Infrastruct. Eng., vol. 35, no. 7, pp. 753-767, 2020.
5. X. Zhao, P. Sun, Z. Xu, H. Min and H. Yu, "Fusion of 3D LiDAR and camera
data for object detection in autonomous vehicle applications", IEEE Sensors
J., vol. 20, no. 9, pp. 4901-4913, May 2020.
7. D.-Y. Ge, X.-F. Yao, W.-J. Xiang and Y.-P. Chen, "Vehicle detection and
tracking based on video image processing in intelligent transportation
system", Neural Comput. Appl., vol. 35, no. 3, pp. 1-13, 2022.
9. D. N.-N. Tran, L. H. Pham, H.-H. Nguyen and J. W. Jeon, "City-scale multi-camera
vehicle tracking of vehicles based on YOLOv7", Proc. IEEE Int. Conf.
Consum. Electron.-Asia (ICCE-Asia), pp. 1-4, Oct. 2022.
10. A. Holla, U. Verma and R. M. Pai, "Enhanced vehicle re-identification for ITS:
A feature fusion approach using deep learning", Proc. IEEE Int. Conf.
Electron. Comput. Commun. Technol. (CONECCT), pp. 1-6, Jul. 2022.
11. K.-S. Yang, Y.-K. Chen, T.-S. Chen, C.-T. Liu and S.-Y. Chien, "Tracklet-refined
multi-camera tracking based on balanced cross-domain re-identification
for vehicles", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit.
14. S. Han, P. Huang, H. Wang, E. Yu, D. Liu and X. Pan, "MAT: Motion-aware
multi-object tracking", Neurocomputing, vol. 476, pp. 75-86, Mar. 2022.
15. M. Chen, S. Banitaan, M. Maleki and Y. Li, "Pedestrian group detection with
K-means and DBSCAN clustering methods", Proc. IEEE Int. Conf. Electro
Inf. Technol. (eIT), pp. 1-6, May 2022.
17. Y. Liu, X. Zhang, B. Zhang, X. Zhang, S. Wang and J. Xu, "Multi-camera
vehicle tracking based on occlusion-aware and inter-vehicle information",
Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW),
pp. 3257-3264, Jun. 2022.
19. Y. Zhang, C. Wang, X. Wang, W. Zeng and W. Liu, "FairMOT: On the fairness
of detection and re-identification in multiple object tracking", Int. J. Comput.
Vis., vol. 129, no. 11, pp. 3069-3087, Nov. 2021.
21. X. Xie, G. Cheng, J. Wang, X. Yao and J. Han, "Oriented R-CNN for object
detection", Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pp. 3500-3509,
Oct. 2021.
22. P. Sun, R. Zhang, Y. Jiang, T. Kong, C. Xu, W. Zhan, et al., "Sparse R-CNN:
End-to-end object detection with learnable proposals", Proc. IEEE/CVF Conf.
Comput. Vis. Pattern Recognit. (CVPR), pp. 14449-14458, Jun. 2021.
23. Z. Sun, J. Chen, L. Chao, W. Ruan and M. Mukherjee, "A survey of multiple
pedestrian tracking based on tracking-by-detection framework", IEEE Trans.
Circuits Syst. Video Technol., vol. 31, no. 5, pp. 1819-1833, May 2021.
24. M. Z. Shanti, C.-S. Cho, Y.-J. Byon, C. Y. Yeun, T.-Y. Kim, S.-K. Kim, et al.,
"A novel implementation of an AI-based smart construction safety inspection
protocol in the UAE", IEEE Access, vol. 9, pp. 166603-166616, 2021.
25. K. Han and X. Zeng, "Deep learning-based workers safety helmet wearing
detection on construction sites using multi-scale features", IEEE Access, vol.
10, pp. 718-729, 2022.
26. H. D. Najeeb and R. F. Ghani, "A survey on object detection and tracking in
soccer videos", Muthanna J. Pure Sci., vol. 8, no. 1, pp. 1-13, Jan. 2021.
27. P. Garnier and T. Gregoir, "Evaluating soccer player: From live camera to deep
reinforcement learning", arXiv:2101.05388, 2021.
29. N. D. Nath, A. H. Behzadan and S. G. Paal, "Deep learning for site safety:
Real-time detection of personal protective equipment", Autom. Construct.,
vol. 112, Apr. 2020.