
IoT BASED WEAPON DETECTION

USING OpenCV

A PROJECT REPORT

Submitted by

RASIKA J (1803118)
SANDHYAVATHI K (1803122)
SHRI RAMYA CHINMAI G (1803133)

in partial fulfillment for the award of the


degree of
BACHELOR OF ENGINEERING
in
ELECTRICAL AND ELECTRONICS ENGINEERING

SRI RAMAKRISHNA ENGINEERING COLLEGE


[Educational Service: SNR Sons Charitable Trust]
[Autonomous Institution, Reaccredited by NAAC with ‘A+’ Grade]
[Approved by AICTE and Permanently Affiliated to Anna University,
Chennai] [ISO 9001:2015 Certified and All Eligible Programmes Accredited
by NBA]
Vattamalaipalayam, N.G.G.O. Colony Post,

COIMBATORE–641022

ANNA UNIVERSITY::CHENNAI 600025


MAY 2022
SRI RAMAKRISHNA ENGINEERING
COLLEGE
BONAFIDE CERTIFICATE

Department of Electrical and Electronics Engineering

PROJECT WORK – MAY 2022

This is to certify that the project entitled

IoT BASED WEAPON DETECTION USING OpenCV

is the bonafide record of project work done by

RASIKA J (1803118)
SANDHYAVATHI K (1803122)
SHRI RAMYA CHINMAI G (1803133)

who carried out the project work under my supervision. It is certified further that,
to the best of my knowledge, the work reported herein does not form part of
any other thesis or dissertation on the basis of which a degree or award was
conferred on an earlier occasion on this or any other candidate of B.E.
Electrical and Electronics Engineering during the year 2018-2022.

-----------------------------                  ---------------------------
Supervisor                                     Dr.C.Kathirvel
                                               Head of the Department

Submitted for the Project Viva-Voce examination held on __________

--------------------- ----------------------

Internal Examiner External Examiner


ACKNOWLEDGEMENT

We would like to express our sincere thanks to our esteemed and Honorable Managing
Trustee, Sri.D.LAKSHMINARAYANASWAMY, B.Tech., MBA., and Joint Managing
Trustee, Sri.R.SUNDAR, M.Tech., MBA., for providing the necessary facilities for
carrying out our project work in the institution.

We would like to thank our beloved Principal, Dr.N.R.ALAMELU, B.E. (Hons).,
M.E., Ph.D., for her continuous motivation towards the development of the
students.

We are very grateful to Dr.C.KATHIRVEL, Professor and Head, Department
of Electrical and Electronics Engineering, who encouraged us and gave valuable
suggestions for the completion of this project.

We express our sincere thanks to our Supervisor, Dr. P. Sebastian Vindro
Jude, Professor (SI Grade), Department of Electrical and Electronics Engineering,
for his valuable guidance for the completion of this project.

We also take this opportunity to thank our Project Coordinators,
Dr.C.KATHIRVEL, Professor and Head, and Mrs.R.RUBIA GANDHI, Assistant
Professor, Department of Electrical and Electronics Engineering, for their constructive
criticism and support in completing this project.

We extend our sincere gratitude to all the teaching and non-teaching staff members of
our department who helped us during this project.

We would like to express our gratitude to the Almighty for sustaining us
throughout our project. We offer special thanks to our beloved parents and friends
for their support and encouragement.
TABLE OF CONTENTS

Chapter No.    Title    Page No.
ABSTRACT – ENGLISH ix

ABSTRACT – தமிழ் x

1 INTRODUCTION 11
1.1.1 EMBEDDED SYSTEMS 12

1.1.2 BLOCK DIAGRAM OF AN EMBEDDED SYSTEM 13
1.2 EXISTING SYSTEM 13

1.2.1 DISADVANTAGES 13

1.3 PROPOSED SYSTEM 13

1.3.1 ADVANTAGES 14
1.4 CASCADE CLASSIFIER 14

1.5 DIGITAL IMAGE PROCESSING 15


1.5.1 INTRODUCTION 15
1.6 BASIC OF IMAGE PROCESSING 16
1.6.1 IMAGE 16

1.6.2 IMAGE FILE SIZES 17


1.7 IMAGE PROCESSING 18
1.7.1 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING 18
1.7.2 IMAGE ACQUISITION 19

1.7.3 IMAGE ENHANCEMENT 19

1.7.4 SEGMENTATION 20

1.7.5 IMAGE COMPRESSION 21

2 LITERATURE SURVEY 22

3 BLOCK DIAGRAM 24

3.1 BLOCK DIAGRAM FOR WEAPON DETECTION SYSTEM 25

3.1.1 BLOCK DIAGRAM FOR TRAINING PROCESS 26
3.1.2 BLOCK DIAGRAM FOR TESTING PROCESS 26

3.2 OPEN CV 27
3.3 PANDAS 28
3.4 NUMPY 28
3.5 TKINTER 28
3.6 PYTHON 28
3.6.1 INDENTATION 29

3.6.2 STATEMENTS AND CONTROL FLOW 29

4 HARDWARE DESCRIPTION 31

4.1 NODEMCU 32
4.1.1 WINDOWS VS LINUX 32
4.2 FLASHING 33
4.2.1 FLASHING FROM LINUX 33
4.2.2 FLASHING FROM OS X 34

4.2.3 FLASHING FROM WINDOWS 34


4.3 CONNECTING 35
4.3.1 CONNECTING EXTERNAL STORAGE 35

4.4 ESP8266 FEATURES 35

4.5 CAMERA MODULE 37


4.5.1 PRINCIPLE 37
4.5.2 PRODUCT FEATURES 37
4.5.3 WIRE CONNECTIONS 38


4.6 ULTRASONIC SENSOR 39
4.6.1 PRINCIPLE 39
4.6.2 PRODUCT FEATURES 39
4.6.3 WIRE CONNECTIONS 39
4.6.4 OPERATION 40
4.6.5 TIMING DIAGRAM 40
4.6.6 INTERFACING WITH CONTROLLER 41
4.7 METAL DETECTOR 42
4.7.1 METHODOLOGY 42
4.8 BUZZER 43

5 OUTPUT OF THE PROJECT 44

5.1 OUTPUT 45
6 ADVANTAGES AND APPLICATIONS 48

6.1 ADVANTAGES 48

6.2 APPLICATIONS 48

7 CONCLUSION AND FUTURE SCOPE 49

7.1 CONCLUSION 50

7.2 FUTURE SCOPE 50

8 SNAPSHOT OF HARDWARE 51
REFERENCES 53
APPENDIX 54

LIST OF FIGURES

Figure No.    Title    Page No.
1.1 BLOCK DIAGRAM OF A TYPICAL EMBEDDED SYSTEM 13
1.2 FEATURES 14

1.3 EXAMPLE IMAGE FOR PROCESSING 16
1.4 GRID VIEW OF THE IMAGE 17
1.5 PIXELS OF THE IMAGE 17

1.6 IMAGE PROCESSING FUNDAMENTALS 18
1.7 IMAGE ACQUISITION 19
1.8 2-D IMAGE FROM SCANNER 19
1.9 IMAGE ENHANCEMENT 20
1.10 SEGMENTATION 20
1.11 IMAGE COMPRESSION MODEL 21
3.1 BLOCK DIAGRAM 25
3.2 BLOCK DIAGRAM FOR TRAINING PROCESS 26
3.3 BLOCK DIAGRAM FOR TESTING PROCESS 26
3.4 WEAPON DETECTION 27
4.1 NODEMCU 37
4.2 CAMERA MODULE 37
4.3 WIRE CONNECTION 38
4.4 ULTRASONIC SENSOR 39
4.5 ULTRASONIC SENSOR OPERATION 40
4.6 TIMING DIAGRAM 41
4.7 INTERFACING WITH CONTROLLER 41
4.8 METAL DETECTOR 42
4.9 BUZZER 43
5.1 OUTPUT 45
7.1 SOFTWARE OUTPUT 51
7.2 HARDWARE OUTPUT 52

LIST OF ABBREVIATIONS

DVI - Digital Visual Interface
CNN - Convolutional Neural Network
SVM - Support Vector Machine
CT - Computed Tomography
MRI - Magnetic Resonance Imaging
NodeMCU - Node Microcontroller Unit
UMS - USB Mass Storage
UID - User ID
GID - Group ID
CPU - Central Processing Unit
GPU - Graphics Processing Unit
CEA - Consumer Electronics Association
HDTV - High-Definition Television
VESA - Video Electronics Standards Association
CMOS - Complementary Metal–Oxide–Semiconductor
CCD - Charge-Coupled Device
SSD - Solid-State Drive
USB - Universal Serial Bus
OpenCV - Open Source Computer Vision
ADC - Analog-to-Digital Converter
NumFOCUS - Numerical Foundation for Open Code and Useable Science
SoC - System-on-a-Chip
GPIO - General-Purpose Input/Output
IDE - Integrated Development Environment

ABSTRACT

This project is mainly intended for detecting metallic objects. In the army, soldiers
use handheld metal detectors to locate land mines, and the mine sometimes explodes
during the search, killing the soldiers. This project is designed to solve that problem,
and it includes an advanced alert system in the form of message alerts. The system is
designed using Arduino technology, and the robot car is controlled through a webpage
over IoT using a NodeMCU module. When the metal detector detects metal, an alert is
raised; the project needs only one board, because all three application programs are
merged onto it. Security is always a main concern in every domain, owing to the rising
crime rate in crowded events and suspicious, isolated areas. Anomaly detection and
monitoring are major applications of computer vision for tackling such problems.
Owing to the growing demand for the protection of safety, security and personal
property, video surveillance systems that can recognize and interpret scenes and
anomalous events play a vital role in intelligent monitoring. This project implements
automatic gun (or weapon) detection using convolutional neural network (CNN)
algorithms; CNN is one of the best algorithms for classification tasks.

KEYWORDS: NodeMCU, Convolutional Neural Network, Weapon Detection, Land Mines,
Image Processing, OpenCV.

ஆய்வுச்சுருக்கம்

இந்த திட்டம் முக்கியமாக உலோக கூறுகளை கண்டுபிடிப்பதற்காக


பயன்படுத்தப்படுகிறது. ராணுவத்தில் கண்ணிவெடிகளைக் கண்டறிவதற்காக, சில
சமயங்களில் கண்ணிவெடி வெடித்துச் சிதறும் வகையில், கையாளப்பட்ட மெட்டல்
டிடெக்டர்களைப் பயன்படுத்துவார்கள். இதனால் வீரர்கள் உயிரிழந்தனர். இந்த
சிக்கலை தீர்க்க, இந்த திட்டம் பயன்படுத்தப்படுகிறது. இந்த திட்டத்தில் ஒரு
மேம்பட்ட எச்சரிக்கை அமைப்பு உள்ளது, அது செய்தி எச்சரிக்கை ஆகும். இது 'Arduino
தொழில்நுட்பத்தைப் பயன்படுத்தி வடிவமைக்கப்பட்டுள்ளது. இந்த கார் நோட் MCU
தொகுதியுடன் கூடிய IoT தொழில்நுட்பத்தின் மூலம் வலைப்பக்கத்தால்
கட்டுப்படுத்தப்படுகிறது. மெட்டல் டிடெக்டர் உலோகத்தைக் கண்டறியும் போது,
இந்தத் திட்டமானது ஒரே ஒரு Arduino போர்டைப் பெறுகிறது, ஏனெனில் மூன்று
பயன்பாட்டு நிரல்களும் ஒரே பலகையில் இணைக்கப்பட்டுள்ளன. நெரிசலான
நிகழ்வுகள் அல்லது சந்தேகத்திற்கிடமான தனிமையான பகுதிகளில் குற்ற விகிதம்
அதிகரிப்பதால், ஒவ்வொரு டொமைனிலும் பாதுகாப்பு எப்போதும் ஒரு முக்கிய
கவலையாக உள்ளது. அசாதாரண கண்டறிதல் மற்றும் கண்காணிப்பு பல்வேறு
சிக்கல்களைச் சமாளிக்க கணினி பார்வையின் முக்கிய பயன்பாடுகளைக்
கொண்டுள்ளது. பாதுகாப்பு மற்றும் தனிப்பட்ட சொத்துக்களின் பாதுகாப்பிற்கான
வளர்ந்து வரும் தேவை காரணமாக, வீடியோ கண்காணிப்பு அமைப்புகளின் தேவைகள்
மற்றும் வரிசைப்படுத்தல் ஆகியவை காட்சியை அடையாளம் கண்டு விளக்குகின்றன
மற்றும் நுண்ணறிவு கண்காணிப்பில் ஒரு முக்கிய பங்கைக் கொண்டுள்ளன.
கன்வல்யூஷன் நியூரல் நெட்வொர்க் (சிஎன்என்) அல்காரிதம்களைப் பயன்படுத்தி
தானியங்கி துப்பாக்கி (அல்லது) ஆயுதக் கண்டறிதலை இந்தத் தாள்
செயல்படுத்துகிறது. அனைத்து வகைப்பாடு செயல்முறைகளுக்கும் CNN சிறந்த
வழிமுறைகளில் ஒன்றாகும்.

முக்கிய வார்த்தைகள்: நோட் MCU, கன்வல்யூஷன் நியூரல் நெட்வொர்க், ஆயுதம்


கண்டறிதல், கண்ணிவெடிகள், பட செயலாக்கம், திறந்த CV.

CHAPTER 1

INTRODUCTION
This project is mainly intended for detecting metallic objects. In the army, soldiers use handheld
metal detectors to locate land mines, and the mine sometimes explodes during the search, killing
the soldiers. This project is designed to solve that problem, and it includes an advanced alert
system in the form of message alerts. The system is designed using a 'Raspberry Pi'
microprocessor. The metal detector is connected to an ADC, and the whole circuit is mounted on
a robot car, which is controlled through a webpage using IoT technology. When the metal detector
detects metal, an alert is raised; the project needs only one Raspberry Pi board, because all three
application programs are merged onto it. Security is always a main concern in every domain,
owing to the rising crime rate in crowded events and suspicious, isolated areas. Anomaly detection
and monitoring are major applications of computer vision for tackling such problems. Owing to
the growing demand for the protection of safety, security and personal property, video surveillance
systems that can recognize and interpret scenes and anomalous events play a vital role in intelligent
monitoring. This project implements automatic gun (or weapon) detection using convolutional
neural network (CNN) algorithms; CNN is one of the best algorithms for classification tasks.

1.1.1 EMBEDDED SYSTEMS


An embedded system is a special-purpose computer system designed to perform one or a few
dedicated functions, often with real-time computing constraints. It is usually embedded as part of a
complete device including hardware and mechanical parts. In contrast, a general-purpose computer,
such as a personal computer, can do many different tasks depending on programming. Embedded
systems have become very important today as they control many of the common devices we use.
Since the embedded system is dedicated to specific tasks, design engineers can optimize it,
reducing the size and cost of the product, or increasing the reliability and performance. Some
embedded systems are mass-produced, benefiting from economies of scale.
Physically, embedded systems range from portable devices such as digital watches and MP3
players, to large stationary installations like traffic lights, factory controllers, or the systems
controlling nuclear power plants. Complexity varies from low, with a single microcontroller chip,
to very high with multiple units, peripherals and networks mounted inside a large chassis or
enclosure.
In general, “embedded system” is not an exactly defined term, as many systems have some element
of programmability. For example, handheld computers share some elements with embedded
systems such as the operating systems and microprocessors which power them — but are not truly
embedded systems, because they allow different applications to be loaded and peripherals to be
connected.
Embedded systems provide several functions:
 Monitor the environment: embedded systems read data from input sensors; this data is then
processed, and the results are displayed in some format to a user or users.
 Control the environment: embedded systems generate and transmit commands for actuators.
 Transform the information: embedded systems transform the collected data in some meaningful
way, such as data compression/decompression.
Although interaction with the external world via sensors and actuators is an important aspect of
embedded systems, these systems also provide functionality specific to their applications.
Embedded systems typically execute applications such as control laws, finite state machines, and
signal processing algorithms. These systems must also detect and react to faults in both the internal
computing environment and the surrounding electromechanical systems.

1.1.2 BLOCK DIAGRAM OF AN EMBEDDED SYSTEM:


An embedded system usually contains an embedded processor. Many appliances that have a digital
interface – microwaves, VCRs, cars – utilize embedded systems. Some embedded systems include
an operating system. Others are very specialized resulting in the entire logic being implemented as
a single program. These systems are embedded into some device for some specific purpose other
than to provide general purpose computing.

Fig 1.1 Block diagram of a typical embedded system

1.2 EXISTING SYSTEM


 The SVM (Support Vector Machine) algorithm is not suitable for large data sets.
 The working process is slow.
 The robot's working process is slightly slow.

1.2.1 DISADVANTAGES

1. High cost.
2. Some materials may cause signal problems.

1.3 PROPOSED SYSTEM


In this project we use OpenCV to detect weapons. OpenCV is one of the
best tools for the detection process. Our proposed system extracts nodal (feature) points from the captured image.

1.3.1 ADVANTAGES

1. Takes less time.

2. OpenCV is free and open source.

1.4 CASCADE CLASSIFIER

It is a machine-learning-based approach in which a cascade function is trained from a
large number of positive and negative images; the trained cascade is then used to detect objects in
other images. Here we will work with face detection. Initially, the algorithm needs a lot of positive
images (images of faces) and negative images (images without faces) to train the classifier. Then
we need to extract features from them. For this, the Haar features shown in the image below are
used. They are just like our convolutional kernel. Each feature is a single value obtained by
subtracting the sum of pixels under the white rectangle from the sum of pixels under the black
rectangle.

FEATURES:

Fig 1.2 Features

• Now all possible sizes and locations of each kernel are used to calculate plenty of features. (Just
imagine how much computation it needs: even a 24x24 window results in over 160,000 features.)
For each feature calculation, we need to find the sum of pixels under the white and black
rectangles.

• To solve this, they introduced the integral image. It simplifies the calculation of the sum of
pixels, however large the number of pixels may be, to an operation involving just four pixels.
Nice, isn't it? It makes things super-fast.

• But among all these features we calculate, most of them are irrelevant. For example, consider the
image below. The top row shows two good features.

• The first feature selected seems to focus on the property that the region of the eyes is often darker
than the region of the nose and cheeks.

• The second feature selected relies on the property that the eyes are darker than the bridge of the
nose.

• But the same windows applied to the cheeks or any other place are irrelevant. So how do we
select the best features out of 160,000+ features? It is achieved by Adaboost.
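
The whole pipeline (load a trained cascade, scan an image at multiple scales, mark detections) is
only a few OpenCV calls. Below is a minimal sketch; the cascade file name is a placeholder for
whatever trained cascade is used, not a file shipped with this report.

    import cv2

    # hypothetical trained cascade; any Haar cascade XML works the same way
    cascade = cv2.CascadeClassifier("weapon_cascade.xml")
    frame = cv2.imread("test_frame.jpg")              # any test image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # cascades run on grayscale

    # detectMultiScale slides the trained stages over the image at several scales
    objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in objects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("result.jpg", frame)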

1.5 DIGITAL IMAGE PROCESSING

1.5.1 INTRODUCTION
A central task is the identification of objects in an image. This process would probably start with
image processing techniques such as noise removal, followed by (low-level) feature extraction to
locate lines, regions and possibly areas with certain textures. The clever bit is to interpret
collections of these shapes as single objects, e.g. cars on a road, boxes on a conveyor belt or cancerous cells on a
microscope slide. One reason this is an AI problem is that an object can appear very different when
viewed from different angles or under different lighting. Another problem is deciding what features
belong to what object and which are background or shadows etc. The human visual system
performs these tasks mostly unconsciously but a computer requires skillful programming and lots
of processing power to approach human performance. Manipulating data in the form of an image
through several possible techniques. An image is usually interpreted as a two-dimensional array of
brightness values, and is most familiarly represented by such patterns as those of a photographic
print, slide, television screen, or movie screen. An image can be processed optically or digitally
with a computer. There are 3 types of images used in Digital Image Processing. They are,
 Binary Image
 Gray Scale Image
 Color Image

BINARY IMAGE:
A binary image is a digital image that has only two possible values for each pixel. Typically, the
two colors used for a binary image are black and white, though any two colors can be used. The
color used for the objects in the image is the foreground color, while the rest of the image is the
background color. Binary images are also called bi-level or two-level, meaning that each pixel is
stored as a single bit (0 or 1). The names black-and-white, monochrome or monochromatic are
often used for this concept, but may also designate any image that has only one sample per pixel,
such as a grayscale image. Binary images often arise in digital image processing as masks or as the
result of operations such as segmentation, thresholding, and dithering. Some input/output devices,
such as laser printers, fax machines, and bi-level computer displays, can only handle bi-level
images.

GRAY SCALE IMAGE


A grayscale image is a digital image in which the value of each pixel is a single sample, that is, it
carries only intensity information. Images of this sort, also known as black-and-white, are
composed exclusively of shades of gray (0-255), varying from black (0) at the weakest intensity to
white (255) at the strongest. Grayscale images are distinct from one-bit black-and-white images,
which in the context of computer imaging are images with only two colors, black and white (also
called bi-level or binary images); grayscale images have many shades of gray in between.
Grayscale images are also called monochromatic, denoting the absence of any chromatic
variation.
COLOUR IMAGE:
A (digital) color image is a digital image that includes color information for each pixel.
Each pixel has a particular value which determines its appearing color. This value is qualified by
three numbers giving the decomposition of the color in the three primary colors Red, Green and
Blue. The decomposition of a color in the three primary colors is quantified by a number between 0
and 255. For example, white will be coded as R = 255, G = 255, B = 255; black will be known as
(R,G,B) = (0,0,0); and, say, bright pink will be (255,0,255).
In other words, an image is an enormous two-dimensional array of color values (pixels), each of
them coded on 3 bytes, representing the three primary colors. This allows the image to contain a
total of 256x256x256 = 16.8 million different colors.
1.6 BASIC OF IMAGE PROCESSING
1.6.1 IMAGE:

An image is a two-dimensional picture that has a similar appearance to some subject, usually a
physical object or a person. An image may be two-dimensional, such as a photograph or a screen
display, or three-dimensional, such as a statue. Images may be captured by optical devices such as
cameras, mirrors, lenses, telescopes and microscopes, or by natural objects and phenomena, such
as the human eye or water surfaces. The word image is also used in the broader sense of any
two-dimensional figure such as a map, a graph, a pie chart, or an abstract painting. In this wider
sense, images can also be rendered manually, such as by drawing, painting or carving, rendered
automatically by printing or computer graphics technology, or developed by a combination of
methods, especially in a pseudo-photograph.

Fig 1.3 Example Image for Processing

An image is a rectangular grid of pixels. It has a definite height and a definite width counted
in pixels. Each pixel is square and has a fixed size on a given display. However different computer
monitors may use different sized pixels. The pixels that constitute an image are ordered as a grid
(columns and rows); each pixel consists of numbers representing magnitudes of brightness and
color.

Fig 1.4 Grid view of the image

Each pixel has a color. The color is a 32-bit integer. The first eight bits determine the
redness
of the pixel, the next eight bits the greenness, the next eight bits the blueness, and the
remaining eight bits the transparency of the pixel.

Fig 1.5 Pixels of the image
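
Following the packing just described (red in the first eight bits, then green, blue, and
transparency), a single 32-bit pixel value can be unpacked with bit shifts. The example value below
is arbitrary, chosen only to illustrate the arithmetic.

    pixel = 0xFF8040C0              # example 32-bit pixel value

    red   = (pixel >> 24) & 0xFF    # first eight bits
    green = (pixel >> 16) & 0xFF    # next eight bits
    blue  = (pixel >> 8)  & 0xFF    # next eight bits
    alpha = pixel & 0xFF            # remaining eight bits (transparency)
    print(red, green, blue, alpha)  # 255 128 64 192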

1.6.2 IMAGE FILE SIZES:

Image file size is expressed as the number of bytes that increases with the number of pixels
composing an image, and the color depth of the pixels. The greater the number of rows and
columns, the greater the image resolution, and the larger the file. Also, each pixel of an image
increases in size when its color depth increases, an 8-bit pixel (1 byte) stores 256 colors, a 24-bit
pixel (3 bytes) stores 16 million colors, the latter known as true color.

Image compression uses algorithms to decrease the size of a file. High resolution cameras produce
large image files, ranging from hundreds of kilobytes to megabytes, per the camera's resolution
and the image-storage format capacity. High-resolution digital cameras record 12-megapixel
(1 MP = 1,000,000 pixels) images, or more, in true color. For example, consider an image recorded
by a 12 MP camera: since each pixel uses 3 bytes to record true color, the uncompressed image
would occupy 36,000,000 bytes of memory, a great amount of digital storage for one image, given
that cameras must record and store many images to be practical. Faced with large file sizes, both
within the camera and a storage disc, image file formats were developed to store such large
images.
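
The 36,000,000-byte figure above follows directly from the pixel count and color depth; a quick
check:

    pixels = 12_000_000               # 12 megapixels
    bytes_per_pixel = 3               # 24-bit true color
    print(pixels * bytes_per_pixel)   # 36000000 bytes, as stated above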

1.7 IMAGE PROCESSING:

Digital image processing, the manipulation of images by computer, is a relatively recent
development in terms of man's ancient fascination with visual stimuli. In its short history, it has
been applied to practically every type of image with varying degrees of success. The inherent
subjective appeal of pictorial displays attracts perhaps a disproportionate amount of attention from
scientists and laymen alike. Digital image processing, like other glamour fields, suffers from
myths, misconceptions, misunderstandings and misinformation. It is a vast umbrella under which
fall diverse aspects of optics, electronics, mathematics, photography, graphics and computer
technology. It is a truly multidisciplinary endeavor plagued with imprecise jargon. Several factors
combine to indicate a lively future for digital image processing. A major factor is the declining
cost of computer equipment. Several new technological trends promise to further promote digital
image processing, including parallel processing made practical by low-cost microprocessors, the
use of charge-coupled devices (CCDs) for digitizing, storage during processing and display, and
large, low-cost image storage arrays.

1.7.1 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING:

Fig 1.6 Image processing Fundamentals

1.7.2 IMAGE ACQUISITION:

Image Acquisition is to acquire a digital image. To do so requires an image sensor and the
capability to digitize the signal produced by the sensor. The sensor could be monochrome or color
TV camera that produces an entire image of the problem domain every 1/30 second. The image
sensor could also be a line-scan camera that produces a single image line at a time; in this case, the
object's motion past the line scanner produces a two-dimensional image.

Fig 1.7 Image Acquisition

If the output of the camera or other imaging sensor is not in digital form, an analog-to-digital
converter digitizes it. The nature of the sensor and the image it produces are determined by the
application.

Fig 1.8 2-D image from scanner

1.7.3 IMAGE ENHANCEMENT:

Image enhancement is among the simplest and most appealing areas of digital image processing.
Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or
simply to highlight certain features of interest in an image. A familiar example of enhancement is
when we increase the contrast of an image because “it looks better.” It is important to keep in
mind that enhancement is a very subjective area of image processing.

Fig 1.9 Image Enhancement

1.7.4 SEGMENTATION:

Segmentation procedures partition an image into its constituent parts or objects. In general,
autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged
segmentation procedure brings the process a long way toward successful solution of imaging
problems that require objects to be identified individually.

Fig 1.10 Segmentation


On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual
failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.
A digital image is defined as a two-dimensional function f(x, y), where x and y are spatial (plane)
coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or grey
level of the image at that point. The field of digital image processing refers to processing digital
images by means of a digital computer. A digital image is composed of a finite number of
elements, each of which has a particular location and value. These elements are referred to as
picture elements, image elements, pels, and pixels; pixel is the term most widely used.

1.7.5 IMAGE COMPRESSION

Digital image compression addresses the problem of reducing the amount of data required to
represent a digital image. The underlying basis of the reduction process is the removal of redundant
data. From the mathematical viewpoint, this amounts to transforming a 2-D pixel array into a
statistically uncorrelated data set. Data redundancy is not an abstract concept but a mathematically
quantifiable entity. If n1 and n2 denote the number of information-carrying units in two data sets
that represent the same information, the relative data redundancy [2] of the first data set (the one
characterized by n1) can be defined as

R_D = 1 - 1/C_R

where C_R, called the compression ratio, is defined as

C_R = n1 / n2

In image compression, three basic data redundancies can be identified and exploited: coding
redundancy, interpixel redundancy, and psychovisual redundancy. Image compression is achieved
when one or more of these redundancies are reduced or eliminated.
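
As a worked example of the two formulas, suppose a raw image needs n1 = 36,000,000 bytes and
its compressed file needs n2 = 4,500,000 bytes; these numbers are illustrative, not taken from the
report.

    n1, n2 = 36_000_000, 4_500_000
    CR = n1 / n2        # compression ratio C_R = 8.0
    RD = 1 - 1 / CR     # relative data redundancy R_D = 0.875
    print(CR, RD)

In other words, 87.5% of the data in the first set is redundant relative to the second.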

The image compression is mainly used for image transmission and storage. Image transmission
applications are in broadcast television; remote sensing via satellite, air-craft, radar, or sonar;
teleconferencing; computer communications; and facsimile transmission. Image storage is
required most commonly for educational and business documents, medical images that arise in
computer tomography (CT), magnetic resonance imaging (MRI) and digital radiology, motion
pictures, satellite images, weather maps, geological surveys, and so on.

Fig 1.11 Image Compression Model

CHAPTER 2
LITERATURE SURVEY

2.1 LITERATURE SURVEY


The literature review attempts to discuss the work carried out in the weapon detection process
using AI and machine learning.
[1] Weapon detection using artificial Intelligence and deep learning for security applications;
Harsha Jain ICESC 2020.
Security is always a main concern in every domain, due to a rise in crime rate in a crowded event
or suspicious lonely areas. Abnormal detection and monitoring have major applications of
computer vision to tackle various problems. Due to growing demand in the protection of safety,
security and personal properties, needs and deployment of video surveillance systems can
recognize and interpret the scene and anomaly events play a vital role in intelligence monitoring.
This paper implements automatic gun (or weapon) detection using convolutional neural network
(CNN) based SSD and Faster R-CNN algorithms. The proposed implementation uses two types of
datasets.

[2] Automatic handgun and knife detection algorithms; Arif Warsi IEEE Conference 2019.
Nowadays, the surveillance of criminal activities requires constant human monitoring. Most of
these activities involve handheld weapons, mainly pistols and guns. Object detection algorithms
have been used to detect weapons such as knives and handguns. Handgun and knife detection is
one of the most challenging tasks due to occlusion. This paper reviews and categorizes the various
algorithms that have been used in the detection of handguns and knives, along with their strengths
and weaknesses.

[3] Weapon Detection in Real-Time CCTV Videos Using Deep Learning; Muhammad Tahir
Bhatti , Muhammad Gufran Khan, 2021
Deep learning is a branch of machine learning inspired by the functionality and structure of the
human brain, also called an artificial neural network. The methodology adopted in this work
features state-of-the-art deep learning, especially convolutional neural networks, owing to their
exceptional performance in this field. These techniques are used both for classification and for
localizing a specific object in a frame, so both object classification and object detection algorithms
were used; because the target object is small and appears against other objects in the background,
the best algorithm for this case was chosen after experimentation.

[4] Handheld Gun detection using Faster R-CNN Deep Learning; Gyanendra Kumar Verma
IEEE Conference 2019.
In this paper, we present an automatic handheld gun detection system using deep learning,
particularly a CNN model. Gun detection is a very challenging problem because of the various
subtleties associated with it. One of the most important challenges is occlusion of the gun, which
arises frequently. There are two types of gun occlusion, namely gun-to-object and gun-to-site/scene
occlusion. Normally, occlusion in gun detection arises under three conditions: self-occlusion,
inter-object occlusion, or occlusion by background site/scene structure. Self-occlusion arises when
one portion of the gun is occluded by another.

CHAPTER 3
BLOCK DIAGRAM

3.1 BLOCK DIAGRAM FOR WEAPON DETECTION SYSTEM

Fig 3.1 Block Diagram

• In this method, we use a NodeMCU to control and monitor the surroundings.

• The camera is used to identify the weapon type.
• The metal detector is used to find the weapon.
• The buzzer is used for alert purposes.

DESCRIPTION
 The ultrasonic sensor senses obstacles nearby.
 The PC runs the OpenCV operations and takes the image input.
 The NodeMCU is connected to sensors such as the metal detector, which checks for metals.
 Once metal is detected, the buzzer switches ON.
 The webpage displays the name of the detected weapon.

3.1.1 BLOCK DIAGRAM FOR TRAINING PROCESS

Fig 3.2 Block diagram for training process


DESCRIPTION

 The weapon is shown to the camera, which takes an image of the weapon in real time.
 The camera is switched ON; once it is switched ON, video recording starts.
 The video is converted into pictures. In OpenCV, the images are uploaded and the model is
trained with the uploaded images.
 It detects the weapon and extracts the weapon's features. In the end it saves the image.
 This is repeated for further images to improve accuracy.
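
A minimal sketch of the capture step described above: open the camera, grab frames, and save
them as training images. The camera index, frame count and file names are assumptions for
illustration.

    import os
    import cv2

    os.makedirs("dataset", exist_ok=True)
    cap = cv2.VideoCapture(0)              # default camera
    for i in range(100):                   # number of training frames is arbitrary
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite("dataset/weapon_%d.jpg" % i, frame)   # video frame -> picture
    cap.release()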

3.1.2 BLOCK DIAGRAM FOR TESTING PROCESS

Fig 3.3 Block diagram for testing process

DESCRIPTION

 The weapon is shown to the camera, which takes an image of the weapon in real time.
 The camera is switched ON; once it is switched ON, video recording starts.
 The video is converted into pictures. In OpenCV, the images are compared with the dataset.
 It detects the weapon and extracts the weapon's features. In the end it saves the image, which is
compared with the trained model.
 The weapon is recognized, and its name is displayed on the screen once the accuracy check and
weapon comparison are done.
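
Once a weapon name is recognized, it has to reach the NodeMCU so the webpage can display it.
The report does not give the exact interface, so the sketch below assumes a simple HTTP endpoint
served by the NodeMCU on the local network; the URL and parameter name are hypothetical.

    import requests

    def report_detection(name):
        # hypothetical NodeMCU endpoint; adjust IP and path to the actual firmware
        requests.get("http://192.168.1.50/update", params={"weapon": name}, timeout=2)

    report_detection("handgun")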

3.2 OPEN CV

Open Computer Vision is commonly abbreviated as OpenCV. Officially launched in 1999, the
OpenCV project was initially an Intel Research initiative to advance CPU-intensive applications,
part of a series of projects including real-time ray tracing and 3-D display walls. The main
contributors to the project included a number of optimization experts at Intel Russia, as well as
Intel's Performance Library Team. OpenCV is written in C++ and its primary interface is in C++,
but it still retains a less comprehensive though extensive older C interface. All of the new
developments and algorithms appear in the C++ interface. There are bindings in Python, Java and
MATLAB; the API for these interfaces can be found in the online documentation. Wrappers in
several programming languages have been developed to encourage adoption by a wider audience.
In our project, OpenCV is used to detect the weapon in the captured frames; OpenCV acts as the
eye of the computer or machine.

WEAPON CHARACTERISTIC
A weapon characteristic is a property related to the weapon's purpose that can be identified, and
from which distinguishing, repeatable features can be extracted for the purpose of automated
recognition of a particular weapon. An example is a gun. This characteristic, recorded with a
capture device, can be compared with sample representations of weapon characteristics. The
features are information extracted from weapon samples, which can be used for comparison with
a weapon reference. The aim of extracting features from a sample is to remove any information
that does not contribute to weapon recognition. This enables fast comparison and improved
performance, and may have privacy advantages.

Fig 3.4 Weapon detection

3.3 PANDAS

Pandas is a software library written for the Python programming language for data manipulation
and analysis. In particular, it offers data structures and operations for manipulating numerical
tables and time series. It is free software released under the three-clause BSD license. The name is
derived from the term “panel data”, an econometrics term for data sets that include observations
over multiple time periods for the same individuals. Its name is a play on the phrase “Python data
analysis” itself. Pandas is mainly used for data analysis. Pandas allows importing data from
various file formats such as comma-separated values, JSON, SQL database tables or queries,
and Microsoft Excel. Pandas allows various data manipulation operations such as
merging, reshaping, selecting, as well as data cleaning, and data wrangling features.
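
As an illustration of where pandas fits in a project like this, the snippet below loads a CSV log of
detections and summarizes it; the file name and column names are hypothetical.

    import pandas as pd

    log = pd.read_csv("detections.csv")    # assumed columns: time, weapon, confidence
    print(log.groupby("weapon")["confidence"].mean())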

3.4 NUMPY

NumPy is a library for the Python programming language, adding support for large,
multi-dimensional arrays and matrices, along with a large collection of high-level mathematical
functions to operate on these arrays. The ancestor of NumPy, Numeric, was
originally created by Jim Hugunin with contributions from several other developers. In
2005, Travis Oliphant created NumPy by incorporating features of the competing Numarray into
Numeric, with extensive modifications. NumPy is open-source software and has many
contributors. NumPy is a NumFOCUS fiscally sponsored project.
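
In image work, a NumPy array is the natural container for pixel data; OpenCV itself returns
images as NumPy arrays. A small sketch:

    import numpy as np

    img = np.zeros((480, 640, 3), dtype=np.uint8)   # blank 640x480 3-channel image
    img[:, :, 2] = 255                              # fill one color channel
    print(img.shape, img.dtype)                     # (480, 640, 3) uint8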

3.5 TKINTER
Tkinter is a Python binding to the Tk GUI toolkit. It is the standard Python interface to the Tk
GUI toolkit and is Python’s de facto standard GUI. Tkinter is included with
standard GNU/Linux, Microsoft Windows and macOS installs of Python. The
name Tkinter comes from "Tk interface". Tkinter was written by Steen Lumholt and Guido van
Rossum, then later revised by Fredrik Lundh. Tkinter is free software released under a Python
license.
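
A minimal Tkinter window of the kind a simple detection GUI might use; the window title and
label text are illustrative only.

    import tkinter as tk

    root = tk.Tk()
    root.title("Weapon Detection")
    tk.Label(root, text="No weapon detected").pack(padx=20, pady=20)
    root.mainloop()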

3.6 PYTHON
Python is an interpreted, high-level, general-purpose programming language. Created by Guido van
Rossum and first released in 1991, Python has a design philosophy that emphasizes code
readability, notably using significant whitespace. It provides constructs that enable clear
programming on both small and large scales. Python features a dynamic type system and
automatic memory management. It also supports multiple programming paradigms, including
object-oriented, imperative, functional and procedural, and has a large and comprehensive
standard library.
Python interpreters are available for many operating systems. CPython, the reference
implementation of Python, is open source software and has a community-based development
model, as do nearly all of Python's other implementations. Python and CPython are managed by
the non-profit Python Software Foundation. Python is meant to be an easily readable language. Its
formatting is visually uncluttered, and it often uses English keywords where other languages use
punctuation.

3.6.1 INDENTATION
Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An
increase in indentation comes after certain statements; a decrease in indentation signifies the end
of the current block. Thus, the program's visual structure accurately represents its semantic
structure. This feature is also sometimes termed the off-side rule.
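
A two-line example makes the rule concrete: indentation alone decides which statements belong to
the loop.

    for i in range(3):
        print("iteration", i)           # inside the loop
        print("still inside the loop")  # same indentation, same block
    print("after the loop")             # dedented, so outside the block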

3.6.2 STATEMENTS AND CONTROL FLOW

PYTHON’S STATEMENTS INCLUDE (AMONG OTHERS):

 The assignment statement (token ‘=’, the equals sign). This operates differently than in
traditional imperative programming languages, and this fundamental mechanism (including the
nature of Python’s version of variables) illuminates many other features of the language.
Assignment in C, e.g., x = 2, translates to “typed variable name x receives a copy of numeric
value 2”. The (right-hand) value is copied into an allocated storage location for which the (left-
hand) variable name is the symbolic address. The memory allocated to the variable is large
enough (potentially quite large) for the declared type. In the simplest case of Python assignment,
using the same example, x = 2, translates to “(generic) name x receives a reference to a separate,
dynamically allocated object of numeric (int) type of value 2.” This is termed binding the name to
the object. Since the name’s storage location doesn’t contain the indicated value, it is improper to
call it a variable. Names may be subsequently rebound at any time to objects of greatly varying
types, including strings, procedures, complex objects with data and methods, etc. Successive
assignments of a common value to multiple names, e.g., x = 2; y = 2; z = 2 result in allocating
storage to (at most) three names and one numeric object, to which all three names are bound.
Since a name is a generic reference holder it is unreasonable to associate a fixed data type with it.
However at a given time a name will be bound to some object, which will have a type; thus there
is dynamic typing.
 The if statement, which conditionally executes a block of code, along with else and elif (a
contraction of else-if).
 The for statement, which iterates over an iterable object, capturing each element to a local
variable for use by the attached block.
 The while statement, which executes a block of code as long as its condition is true.
 The try statement, which allows exceptions raised in its attached code block to be caught and
handled by except clauses; it also ensures that clean-up code in a finally block will always be run
regardless of how the block exits.
 The raise statement, used to raise a specified exception or re-raise a caught exception.
 The class statement, which executes a block of code and attaches its local namespace to a class,
for use in object-oriented programming.
 The def statement, which defines a function or method.
 The with statement, from Python 2.5 (released in September 2006), which encloses a code block
within a context manager (for example, acquiring a lock before the block of code is run and
releasing the lock afterwards, or opening a file and then closing it), allowing Resource Acquisition
Is Initialization (RAII)-like behavior and replaces a common try/finally idiom.
 The pass statement, which serves as a NOP. It is syntactically needed to create an empty code
block.
 The assert statement, used during debugging to check for conditions that ought to apply.
 The yield statement, which returns a value from a generator function. From Python 2.5, yield is
also an operator. This form is used to implement coroutines.
 The import statement, which is used to import modules whose functions or variables can be used
in the current program. There are three ways of using import: import <module name> [as
<alias>] or from <module name> import * or from <module name> import <definition 1> [as
<alias 1>], <definition 2> [as <alias 2>], ....
 The print statement was changed to the print() function in Python 3.
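
Several of these statements can be seen together in a short, self-contained function; the file name
is illustrative.

    def count_lines(path):
        with open(path) as f:          # the with statement closes the file for us
            total = 0
            for line in f:             # the for statement iterates over the file
                if line.strip():       # the if statement skips blank lines
                    total += 1
            return total

    try:                               # the try statement catches the error below
        print(count_lines("labels.txt"))
    except FileNotFoundError:
        print("no such file")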

CHAPTER 4
HARDWARE DESCRIPTION
HARDWARE MODULES

 NodeMCU
 PC
 Ultrasonic Sensor
 Metal Detector
 Buzzer

4.1 NODE MCU


NodeMCU is an open-source Lua-based firmware developed for the ESP8266 Wi-Fi chip. To make
the ESP8266's functionality easy to explore, the NodeMCU firmware is paired with the ESP8266
development board/kit, i.e. the NodeMCU development board. The NodeMCU (Node
MicroController Unit) is an open-source software and hardware development environment built
around an inexpensive System-on-a-Chip (SoC) called the ESP8266. The ESP8266, designed and
manufactured by Espressif Systems, contains the crucial elements of a computer: CPU, RAM,
networking (WiFi), and even a modern operating system and SDK. That makes it an excellent
choice for Internet of Things (IoT) projects of all kinds.

However, as a chip, the ESP8266 is also hard to access and use. You must solder wires, with the
appropriate analog voltage, to its pins for the simplest tasks such as powering it on or sending a
keystroke to the “computer” on the chip. You also have to program it in low-level machine
instructions that can be interpreted by the chip hardware. This level of integration is not a problem
using the ESP8266 as an embedded controller chip in mass-produced electronics. It is a huge
burden for hobbyists, hackers, or students who want to experiment with it in their own IoT
projects.

4.1.1 WINDOWS VS. LINUX

Another important difference between the Raspberry Pi and your desktop or laptop, other than the
size and price, is the operating system—the software that allows you to control the computer. The
majority of desktop and laptop computers available today run one of two operating systems:
Microsoft Windows or Apple OS X. Both platforms are closed source, created in a secretive
environment using proprietary techniques. These operating systems are known as closed source
for the nature of their source code, the computer-language recipe that tells the system what to do.
In closed-source software, this recipe is kept a closely-guarded secret. Users are able to obtain the
finished software, but never to see how it’s made.

Unlike Windows or OS X, Linux is open source: it’s possible to download the source code for the
entire operating system and make whatever changes you desire. Nothing is hidden, and all changes

are made in full view of the public. This open source development ethos has allowed Linux to be
quickly altered to run on new hardware, a process known as porting. At the time of this writing, several versions of
Linux—known as distributions—have been ported to the BCM2835 chip, including Debian,
Fedora Remix and Arch Linux. The different distributions cater to different needs, but they all
have something in common: they’re all open source. They’re also all, by and large, compatible
with each other: software written on a Debian system will operate perfectly well on Arch Linux
and
vice versa.

4.2 FLASHING

The easiest way to get the ESP8266 into the flash mode is:

1. Pull down GPIO 0 (connect it to GND or DTR; DTR may not work with esptool).

2. Start the flash (press the Flash button or execute the cmd command).

3. Reset the ESP8266 by pulling down the GPIO16/RST pin (touch any of the GND pins with a
male end).

Open the folder with esptool and bring up the cmd (right click + shift > Open command window
here). Check what port has been assigned to your FTDI232 adapter by bringing up the device
manager. Execute the line and then immediately reset the ESP. You should see this: if you are on
Linux or the Raspberry Pi, you can continue with esptool; if you are using Windows, you can now
switch to the NodeMCU flasher tool. The command is built in the following way: (path to python)
esptool.py --port [your com port] write_flash -fm [mode: dio for 4MB+, qio for <4MB] -fs (flash
size, 8Mb in my case) 0x00000 (path to bin file)

As before, execute the command and reset the ESP8266 by shorting the GPIO16/RST pin to
GND. If you are using Windows and the NodeMCU flash tool, the procedure is similar: press
Flash and reset the ESP8266 by shorting GPIO16/RST to GND. The flash will take a few moments
and you will see the progress. Once the flash is complete, you are ready to upload your Lua files.
This is where ESPlorer comes in handy. Disconnect GPIO 0; we don't need it any more. Set the
baud rate to 115200, open the COM port and reset the ESP8266 by shorting GPIO16/RST and
GND.

4.2.1 FLASHING FROM LINUX

If your current PC is running a variant of Linux already, you can use the dd command to write the
contents of the image file out to the SD card. This is a text-interface program operated from the
command prompt, known as a terminal in Linux parlance. Follow these steps to flash the SD card:

1. Open a terminal from your distribution’s applications menu.
2. Plug your blank SD card into a card reader connected to the PC.
3. Type sudo fdisk -l to see a list of disks. Find the SD card by its size, and note the device address
(/dev/sdX, where X is a letter identifying the storage device. Some systems with integrated SD
card readers may use the alternative format /dev/mmcblkX—if this is the case, remember to
change the target in the following instructions accordingly).
4. Use cd to change to the directory with the .img file you extracted from the Zip archive.
5. Type sudo dd if=imagefilename.img of=/dev/sdX bs=2M to write the file imagefilename.img to
the SD card connected to the device address from step 3. Replace imagefilename.img with the
actual name of the file extracted from the Zip archive. This step takes a while, so be patient!
During flashing, nothing will be shown on the screen until the process is fully complete.

4.2.2 FLASHING FROM OS X

If your current PC is a Mac running Apple OS X, you’ll be pleased to hear that things are as
simple as with Linux. Thanks to a similar ancestry, OS X and Linux both contain the dd utility,
which you can use to flash the system image to your blank SD card as follows:

1. Select Utilities from the Application menu, and then click on the Terminal application.
2. Plug your blank SD card into a card reader connected to the Mac.
3. Type diskutil list to see a list of disks. Find the SD card by its size, and note the device address
(/dev/diskX, where X is a letter identifying the storage device).
4. If the SD card has been automatically mounted and is displayed on the desktop, type diskutil
unmountdisk /dev/diskX to unmount it before proceeding.
5. Use cd to change to the directory with the .img file you extracted from the Zip archive.
6. Type dd if=imagefilename.img of=/dev/diskX bs=2M to write the file imagefilename.img to the
SD card connected to the device address from step 3. Replace imagefilename.img with the actual
name of the file extracted from the Zip archive. This step takes a while, so be patient!

4.2.3 FLASHING FROM WINDOWS

If your current PC is running Windows, things are slightly trickier than with Linux or OS X.
Windows does not have a utility like dd, so some third-party software is required to get the image
file flashed onto the SD card. Although it’s possible to install a Windows-compatible version of
dd, there is an easier way: the Image Writer for Windows. Designed specifically for creating USB
or SD card images of Linux distributions, it features a simple graphical user interface that makes
the creation of a Raspberry Pi SD card straightforward.
Follow these steps to download, install and use the Image Writer for Windows software to prepare
the SD card for the Pi:

1. Download the binary (not source) Image Writer for Windows Zip file, and extract it to a folder
on your computer.
2. Plug your blank SD card into a card reader connected to the PC.
3. Double-click the Win32DiskImager.exe file to open the program, and click the blue folder icon
to open a file browse dialogue box.
4. Browse to the imagefilename.img file you extracted from the distribution archive, replacing
imagefilename.img with the actual name of the file extracted from the Zip archive, and then click
the Open button.
5. Select the drive letter corresponding to the SD card from the Device drop-down dialogue box. If
you’re unsure which drive letter to choose, open My Computer or Windows Explorer to check.

6. Click the Write button to flash the image file to the SD card. This process takes a while, so be
patient!

4.3 CONNECTING

4.3.1 CONNECTING EXTERNAL STORAGE

While the Raspberry Pi uses an SD card for its main storage device—known as a boot device—
you may find that you run into space limitations quite quickly. Although large SD cards holding
32 GB, 64 GB or more are available, they are often prohibitively expensive. Thankfully, there are
devices that provide an additional hard drive to any computer when connected via a USB cable.
Known as USB Mass Storage (UMS) devices, these can be physical hard drives, solid-state drives
(SSDs) or even portable pocket-sized flash drives.

The majority of USB Mass Storage devices can be read by the Pi, whether or not they have
existing content. In order for
the Pi to be able to access these devices, their drives must be mounted—a process you will learn in
Chapter 2, “Linux System Administration”. For now, it’s enough to connect the drives to the Pi in
readiness.

CONNECTING POWER

The NodeMCU is powered by the small micro-USB connector found on the lower left side of the
circuit board. This connector is the same as found on the majority of smartphones and some tablet
devices. Many chargers designed for smartphones will work, but not all. The board is more
power-hungry than most micro-USB devices, and requires up to 700mA in order to operate. Some
chargers can only supply up to 500mA, causing intermittent problems in operation (see Chapter 3,
“Troubleshooting”). Connecting to the USB port on a desktop or laptop computer is possible, but
not recommended: as with smaller chargers, the USB ports on a computer can’t provide the power
required to work properly.

Only connect the micro-USB power supply when you are ready to start using the board. With no
power button on the device, it will start working the instant power is connected, and can only be
turned off again by physically removing the power cable.

4.4 ESP8266 FEATURES


 Low-cost, compact and powerful Wi-Fi module
 Power supply: +3.3V only
 Current consumption: 100mA
 I/O voltage: 3.6V (max)
 I/O source current: 12mA (max)
 Built-in low-power 32-bit MCU @ 80MHz
 512kB flash memory
 Can be used as a Station or Access Point, or both combined
 Supports deep sleep (<10uA)
 Supports serial communication, hence compatible with many development platforms like Arduino
 Can be programmed using the Arduino IDE, AT commands or Lua script

SPECIFICATIONS

Pin Name    Alternate Purpose

Ground      Connected to the ground of the circuit
TX          Connected to the Rx pin of the programmer/uC to upload the program
GPIO-2      General-purpose input/output pin
CH_EN       Chip enable (active high)
GPIO-0      Flash
Reset       Resets the module
RX          General-purpose input/output pin
Vcc         Connect to +3.3V only

Fig 4.1 NodeMCU

NODEMCU COMPATIBILITY WITH ARDUINO IDE


The NodeMCU offers a variety of development environments, including compatibility with the
Arduino IDE (Integrated Development Environment). The NodeMCU/ESP8266 community took
the IDE selection a step further by creating an Arduino add-on. Whether you are just getting
started with the ESP8266 or are an established developer, this is the recommended environment.

4.5 CAMERA MODULE:

4.5.1 PRINCIPLE:
A camera records and stores photographic images in digital form. Many current models are also
able to capture sound or video, in addition to still images. Capture is usually accomplished by use
of a photosensor, using a charge-coupled device.

4.5.2 PRODUCT FEATURES:

The images are actually stored as an array of pixels. For every pixel in the sensor, the brightness
data, represented by a number from 0 to 4095 for a 12-bit A/D converter, is stored in a file along
with the coordinates of the pixel's location. Although the camera can record 12 bits, or 4096 steps,
of brightness information, almost all output devices can only display 8 bits, or 256 steps, per color
channel. The original 12-bit (2^12 = 4096) input data must therefore be converted to 8 bits
(2^8 = 256) for output. For example, an indicated pixel might have a brightness level of 252 in the
red channel, 231 in the green channel, and 217 in the blue channel. Each color's brightness can
range from 0 to 255, for 256 total steps in each color channel, when it is displayed on a computer
monitor or output to a desktop printer.
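
The 12-bit to 8-bit conversion described above is a simple rescaling from the range 0..4095 to
0..255; a one-line sketch:

    def to_8bit(value_12bit):
        return value_12bit * 255 // 4095   # map 0..4095 onto 0..255

    print(to_8bit(0), to_8bit(2048), to_8bit(4095))   # 0 127 255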

Fig 4.2 Camera Module

Zero indicates pure black, and 255 indicates pure white. 256 levels each of red, green and blue
may not seem like a lot, but it is actually a huge number, because 256 x 256 x 256 = more than 16
million individual colors.

4.5.3 WIRE CONNECTIONS:

 USB Connection with NodeMCU

4.5.4 OPERATION:
An image sensor is an electronic, photosensitive device which converts an optical image into an
electronic signal. It is composed of millions of photodiodes and is used as the image receiver in
digital imaging equipment. An image sensor reacts to the impact of photons, converting them
into an electrical current that is then passed to an analog-to-digital converter. The most common
types of image sensors are CCD and CMOS sensors. Image sensors are mostly used in camera
modules, digital cameras and other imaging devices; some of the earliest analog sensors were
video camera tubes. Currently, the most common image sensors are digital charge-coupled
device (CCD) or complementary metal-oxide-semiconductor (CMOS) active-pixel sensors. In a
camera, the photoelectronic image sensor converts the light passing through the lens into per-
photodiode charges of varying sizes. These charges are then processed by the camera's
electronics and converted into image information by the camera's software. Applications also
extend to the dental X-ray field, including intra-oral and panoramic imaging.
When the camera operates, an aperture opens at the front and light streams in through the lens.
A piece of electronic equipment captures the incoming light rays and turns them into electrical
signals; this light detector is one of two types, either a charge-coupled device (CCD) or a CMOS
image sensor. Light from the object being photographed passes into the camera lens. This
incoming "picture" hits the image sensor chip, which breaks it up into millions of pixels. The
sensor measures the color and brightness of each pixel and stores it as a number. The digital
photograph is effectively an enormously long string of numbers describing the exact details of
each pixel it contains.

Fig 4.3 Wire Connection

4.6 ULTRASONIC SENSOR

4.6.1 PRINCIPLE

A special sonic transducer is used for the ultrasonic proximity sensors, which allows for
alternate transmission and reception of sound waves. The sonic waves emitted by the transducer
are reflected by an object and received back in the transducer. After having emitted the sound
waves, the ultrasonic sensor will switch to receive mode. The time elapsed between emitting and
receiving is proportional to the distance of the object from the sensor.

4.6.2 PRODUCT FEATURES:

Target detection: sonic waves are best reflected from hard surfaces. Targets may be solids,
liquids, granules or powders. In general, ultrasonic sensors are deployed for object detection
where optical principles would lack reliability.
Standard target: the ultrasonic distance sensor provides precise, non-contact distance
measurements from about 2 cm (0.8 inches) to 3 meters (3.3 yards).
Measuring method: a bidirectional TTL pulse interface on a single I/O pin can communicate
with 5 V TTL or 3.3 V CMOS microcontrollers.
Input trigger: positive TTL pulse, 2 µs min, 5 µs typical.
Echo pulse: positive TTL pulse, 115 µs minimum to 18.5 ms maximum.

Fig 4.4 Ultrasonic Sensor

4.6.3 WIRE CONNECTIONS:


• 5 V supply
• Trigger pulse input
• Echo pulse output
• 0 V ground

4.6.4 OPERATION:
The Trig pin of the SR04 must receive a HIGH (5 V) pulse of at least 10 µs; this initiates the
sensor, which transmits a burst of eight ultrasonic cycles at 40 kHz and waits for the reflected
burst. When the sensor detects the reflected ultrasound at its receiver, it sets the Echo pin HIGH
(5 V) for a period (pulse width) proportional to the distance. To obtain the distance, measure the
width (Ton) of the Echo pulse, as the sketch after this list illustrates.
Time = width of Echo pulse, in µs (microseconds)
• Distance in centimeters = Time / 58
• Distance in inches = Time / 148
• Alternatively, you can use the speed of sound, which is 340 m/s
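A minimal Arduino-style sketch along these lines is given below; it assumes the sensor's Trig
and Echo pins are wired to NodeMCU pins D1 and D2 (both the pin choices and the 30 ms
timeout are illustrative, and since the HC-SR04 Echo output is 5 V, a voltage divider is advisable
on a 3.3 V board).

const int trigPin = D1;  // assumed wiring: Trig on D1
const int echoPin = D2;  // assumed wiring: Echo on D2

void setup() {
  Serial.begin(115200);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  // Send the 10 us HIGH pulse on Trig to start a measurement
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // Measure the width of the Echo pulse in microseconds
  long duration = pulseIn(echoPin, HIGH, 30000);  // 30 ms timeout

  // Apply the formula above: distance (cm) = time (us) / 58
  long distanceCm = duration / 58;
  Serial.print("Distance: ");
  Serial.print(distanceCm);
  Serial.println(" cm");

  delay(60);  // keep the measurement cycle above 60 ms
}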

Fig 4.5 Ultrasonic sensor operation

4.6.5 TIMING DIAGRAM:

The timing diagram is shown below. You only need to supply a short 10 µs pulse to the trigger
input to start the ranging; the module will then send out an 8-cycle burst of ultrasound at 40 kHz
and raise its Echo pin. The width of the Echo pulse is proportional to the range, so you can
calculate the range from the time interval between sending the trigger signal and receiving the
echo signal. Formula: range (cm) = µs / 58, or range (inches) = µs / 148; alternatively, range =
high-level time × velocity of sound (340 m/s) / 2. We suggest using a measurement cycle of over
60 ms in order to prevent the trigger signal from interfering with the echo signal.

Fig 4.6 Timing Diagram

4.6.6 INTERFACING WITH CONTROLLER:

• VCC = +5 V DC
• Trig = trigger input of the sensor
• Echo = echo output of the sensor
• GND = ground

Fig 4.7 Interfacing with Controller

4.7 METAL DETECTOR

A metal detector is a portable electronic instrument which detects the presence of metal nearby.
Metal detectors are useful for finding metal inclusions hidden within objects, or metal objects
buried underground. They often consist of a handheld unit with a sensor probe which can be
swept over the ground or other objects. If the sensor comes near a piece of metal, this is
indicated by a changing tone in earphones or a needle moving on an indicator. Usually the
device gives some indication of distance: the closer the metal is, the higher the tone in the
earphone or the higher the needle goes.
The simplest form of metal detector consists of an oscillator producing an alternating current
that passes through a coil, producing an alternating magnetic field. If a piece of electrically
conductive metal is close to the coil, eddy currents will be induced in the metal, and this
produces a magnetic field of its own. If another coil is used to measure the magnetic field
(acting as a magnetometer), the change in the magnetic field due to the metallic object can be
detected.
The first industrial metal detectors were developed in the 1960s and were used extensively for
mineral prospecting and other industrial applications. Uses include de-mining (the detection of
land mines), the detection of weapons such as knives and guns (especially in airport security),
geophysical prospecting, archaeology and treasure hunting. Metal detectors are also used to
detect foreign bodies in food, and in the construction industry to detect steel reinforcing bars in
concrete and pipes and wires buried in walls and floors.

Fig 4.8 Metal Detector

4.7.1 METHODOLOGY
In this work, we have attempted to develop an integrated surveillance-security framework that
detects weapons in real time; when a detection is confirmed, it alerts the security personnel to
handle the situation by reaching the place of the incident through IP cameras. We propose a
model that gives a machine the visual ability to identify an unsafe weapon and to alert the
human operator when a gun or firearm is visible in the frame. Moreover, we have programmed a
door-locking system for when a shooter appears to be carrying a weapon. Where possible, live
pictures can also be shared through IP webcams with nearby security personnel so that action
can be taken in the meantime; a sketch of such an alert is given below. Finally, we have built an
information system that records all activities, providing a database from which prompt actions
can be taken in future emergencies in metropolitan areas.
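As one hedged illustration of the alerting step, the fragment below posts a plain-text alert from
the NodeMCU to a server over HTTP using the ESP8266 core for Arduino; the endpoint address
and message are placeholders, and an actual deployment would use whatever alert server and
protocol the installation provides.

#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>

// Assumes Wi-Fi is already connected (see Section 4.4).
void sendAlert(const String& message) {
  WiFiClient client;
  HTTPClient http;
  http.begin(client, "http://192.168.1.100/alert");  // placeholder server URL
  http.addHeader("Content-Type", "text/plain");
  int code = http.POST(message);  // e.g. "Weapon detected at main gate"
  Serial.printf("Alert sent, HTTP response code: %d\n", code);
  http.end();
}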

4.8 BUZZER

The buzzer is a sounding device that converts audio signals into sound. It is usually powered by
a DC voltage and is widely used as a sound device in alarms, computers, printers and other
electronic products. Buzzers are mainly divided into piezoelectric and electromagnetic types,
represented by the letter "H" or "HA" in circuit diagrams. According to its design and use, a
buzzer can emit various sounds such as music, siren, buzz, alarm and electric bell. The
electromagnetic type uses a pulsed current to drive the vibration of a metal plate to generate
sound. The piezoelectric buzzer is mainly composed of a multivibrator, a piezoelectric plate, an
impedance matcher, a resonance box and a housing; some piezoelectric buzzers are also
equipped with light-emitting diodes. The multivibrator consists of transistors or integrated
circuits. When the power supply is switched on (1.5–15 V DC operating voltage), the
multivibrator oscillates and outputs a 1.5–2.5 kHz audio signal, and the impedance matcher
drives the piezoelectric plate to generate sound. The piezoelectric plate is made of lead zirconate
titanate or lead magnesium niobate piezoelectric ceramic, with silver electrodes plated on both
sides of the ceramic sheet. After being polarized and aged, the silver electrodes are bonded
together with brass or stainless steel sheets.

Fig 4.9 Buzzer

CHAPTER 5

OUTPUT OF THE PROJECT


STEP 1

STEP 2

STEP 3

STEP 4

STEP 5

CHAPTER 6
ADVANTAGES AND APPLICATIONS

6.1 ADVANTAGES

• The automation of weapon detection is more consistent and faster than manual review.
• Increased safety and anomaly-event monitoring in crowded events or public places.
• In mission-critical situations, such systems flag critical events for immediate human review.
• Computer-vision-based weapon detection is highly scalable and can operate with a large
number of cameras and in complex, crowded scenes.
• Weapon identification and classification aid further investigation by security personnel.

6.2 APPLICATIONS

• This system can be incorporated into robots and used for detecting landmines.
• This type of weapon detection has huge military applications.
• It can be used wherever greater security is needed.
• There are many places where the crime rate caused by firearms or knives is very high,
especially where such weapons are permitted. The early detection of potentially violent
situations is of paramount importance for citizens' security, and one way to prevent these
situations is to detect the presence of dangerous objects.
• It can be used in crowded areas, malls and even at frisking points.

49
CHAPTER 7
CONCLUSION AND FUTURE SCOPE

7.1 CONCLUSION

Weapon detection in a surveillance system using the YOLOv3 algorithm is faster than the
earlier CNN, R-CNN and Faster R-CNN approaches. In this era where things are automated,
object detection has become one of the most interesting fields. When it comes to object
detection in surveillance systems, speed plays an important role in locating an object quickly
and alerting the authority. This work tried to achieve the same, and it is able to produce faster
results than previously existing systems. The accuracy and sensitivity in the identification and
classification of weapons are high, so the project's outcomes have been good. The information
contained in the weapon photos has proven to be as useful as the pre-processing procedures
employed to limit the amount of data input during classification. The goal of developing these
subsystems was to be able to use only the important data: the pixels around the locations that,
based on their intensity, may be weapons. Although not widely used, a CNN operating on
three-channel input images has been built and performs well. Its main disadvantage is that it
requires more input data, which increases the number of parameters that must be trained in the
CNN.
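For reference, a minimal OpenCV DNN sketch of the detection step is given below (in C++ for
consistency with the other listings); it assumes Darknet files yolov3.cfg and yolov3.weights
trained on the weapon classes and a test image test.jpg are available locally, and the 416×416
input size and 0.5 confidence threshold are common illustrative choices rather than values fixed
by this report.

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

int main() {
    // Load the trained YOLOv3 network (file names are placeholders)
    cv::dnn::Net net = cv::dnn::readNetFromDarknet("yolov3.cfg", "yolov3.weights");
    cv::Mat frame = cv::imread("test.jpg");

    // YOLOv3 expects a normalized, square input blob (416x416 is common)
    cv::Mat blob = cv::dnn::blobFromImage(frame, 1 / 255.0, cv::Size(416, 416),
                                          cv::Scalar(), true, false);
    net.setInput(blob);

    // Run a forward pass through all YOLO output layers
    std::vector<cv::Mat> outs;
    net.forward(outs, net.getUnconnectedOutLayersNames());

    // Each output row holds [cx, cy, w, h, objectness, class scores...]
    for (const cv::Mat& out : outs) {
        for (int i = 0; i < out.rows; ++i) {
            cv::Mat scores = out.row(i).colRange(5, out.cols);
            cv::Point classId;
            double confidence;
            cv::minMaxLoc(scores, nullptr, &confidence, nullptr, &classId);
            if (confidence > 0.5)
                std::cout << "Detected class " << classId.x
                          << " with confidence " << confidence << "\n";
        }
    }
    return 0;
}

In a full system, the surviving detections would additionally be filtered with non-maximum
suppression and drawn onto the frame before it is shown to the operator.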

7.2 FUTURE SCOPE

There are many future scopes for this project, such as the following:
1. If conditions permit, the system can be implemented using a multimedia GSM module.
2. To achieve more robust security, an iris-scan method can be used.
3. To improve the system's performance, advanced versions of the Raspberry Pi module can be
used as required.
4. If needed, the system can be adapted for use in air services.
5. The system can also be operated through an Android application if the user requires it.
Because this model simply reports the detected weapon's name, it can be improved in the future
by adding more features, such as counting the weapons when more than one is present. It can be
improved further by using a bounding box to label the weapon's name within the image itself,
and by identifying weapons in a live image so that the model can be used in a CCTV system for
weapon detection. The suggested method detects weapons of the same type in a single image
and can be extended in the future to recognize the names of many types of weapons in a single
image.

51
CHAPTER 8
SNAPSHOT OF OUTPUT


Fig 8.1 Software Output

Fig 8.2 Hardware Output

REFERENCES


[1] Harsha Jain et al., "Weapon Detection Using Artificial Intelligence and Deep Learning for
Security Applications," ICESC, 2020.

[2] Arif Warsi et al., "Automatic Handgun and Knife Detection Algorithms," IEEE Conference,
2019.

[3] Neelam Dwivedi et al., "Weapon Classification Using Deep Convolutional Neural
Networks," IEEE Conference CICT, 2020.

[4] Gyanendra Kumar Verma et al., "Handheld Gun Detection Using Faster R-CNN Deep
Learning," IEEE Conference, 2019.

[5] Abhiraj Biswas et al., "Classification of Objects in Video Records Using Neural Network
Framework," International Conference on Smart Systems and Inventive Technology, 2018.

[6] Pallavi Raj et al., "Simulation and Performance Analysis of Feature Extraction and Matching
Algorithms for Image Processing Applications," IEEE International Conference on Intelligent
Sustainable Systems, 2019.

[7] Mohana et al., "Simulation of Object Detection Algorithms for Video Surveillance
Applications," International Conference on I-SMAC (IoT in Social, Mobile, Analytics and
Cloud), 2018.

[8] Glowacz et al., "Visual Detection of Knives in Security Applications Using Active
Appearance Model," Multimedia Tools and Applications, 2015.

[9] Amrutha C. V., C. Jyotsna, and Amudha J., "Deep Learning Approach for Suspicious
Activity Detection from Surveillance Video," IEEE Xplore, 2019.

[10] Hyeseung Park, Seungchul Park, and Youngbok Joo, "Detection of Abandoned and Stolen
Objects Based on Dual Background Model and Mask R-CNN," IEEE Access, 2020.
