Myrep Changes
USING OpenCV
A PROJECT REPORT
Submitted by
RASIKA J (1803118)
SANDHYAVATHI K (1803122)
SHRI RAMYA CHINMAI G (1803133)
COIMBATORE–641022
RASIKA J (1803118)
SANDHYAVATHI K (1803122)
SHRI RAMYA CHINMAI G (1803133)
who carried out the project work under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate of B.E. Electrical and Electronics Engineering during the year 2018-2022.
----------------------------- ---------------------------
                                        Dr. C. Kathirvel
Supervisor                              Head of the Department
--------------------- ----------------------
We extend our sincere gratitude to all the teaching and non-teaching staff of our department who helped us during this project.
CONTENTS

ABSTRACT (TAMIL)
1 INTRODUCTION
1.1.1 EMBEDDED SYSTEMS
1.2.1 DISADVANTAGES
1.3.1 ADVANTAGES
1.4 CASCADE CLASSIFIER
1.7.3 IMAGE ENHANCEMENT
1.7.4 SEGMENTATION
2 LITERATURE SURVEY
3 BLOCK DIAGRAM
3.2 OPENCV
3.3 PANDAS
3.4 NUMPY
3.5 TKINTER
3.6 PYTHON
3.6.1 INDENTATION
4 HARDWARE DESCRIPTION
4.1 NODEMCU
4.1.1 WINDOWS VS LINUX
4.2 FLASHING
4.2.1 FLASHING FROM LINUX
4.2.2 FLASHING FROM OS X
4.7.1 METHODOLOGY
4.8 BUZZER
5.1 OUTPUT
6 ADVANTAGES AND APPLICATIONS
6.1 ADVANTAGES
6.2 APPLICATIONS
7.1 CONCLUSION
8 SNAPSHOT OF HARDWARE
REFERENCES
APPENDIX
LIST OF FIGURES
LIST OF ABBREVIATIONS
ABSTRACT
ABSTRACT
This project is mainly used for finding metal elements. In the army, soldiers use handheld metal detectors to search for land mines, and sometimes a mine explodes during the search, killing the soldiers. This project is designed to solve that problem. It includes an advanced alert system in the form of a message alert, and it is designed using 'Arduino' technology. The robot car is controlled from a webpage through IoT technology with a NodeMCU module. When the metal detector detects metal, an alert is raised; the project requires only one Arduino board because all three application programs are merged onto a single board. Security is always a main concern in every domain, due to the rise in crime at crowded events and in suspicious lonely areas. Abnormality detection and monitoring are major applications of computer vision for tackling such problems. Owing to the growing demand for the protection of safety, security and personal property, video surveillance systems that can recognize and interpret a scene and detect anomalous events play a vital role in intelligent monitoring. This paper implements automatic gun (or) weapon detection using a convolutional neural network (CNN) algorithm, one of the best algorithms for classification tasks.
KEYWORDS: NodeMCU, Convolutional Neural Network, Weapon Detection, Land Mines, Image Processing, OpenCV.
ABSTRACT (TAMIL)
CHAPTER 1
INTRODUCTION
This project is mainly used for finding metal elements. In the army, soldiers use handheld metal detectors to search for land mines, and sometimes a mine explodes during the search, killing the soldiers. This project is designed to solve that problem. It includes an advanced alert system in the form of a message alert, and it is designed around a 'Raspberry Pi' microprocessor. In this project the metal detector is connected to an ADC, and the whole circuit is mounted on a robot car. The car is controlled from a webpage through IoT technology. When the metal detector detects metal, an alert is raised; the project requires only one Raspberry Pi board because all three application programs are merged onto a single board. Security is always a main concern in every domain, due to the rise in crime at crowded events and in suspicious lonely areas. Abnormality detection and monitoring are major applications of computer vision for tackling such problems. Owing to the growing demand for the protection of safety, security and personal property, video surveillance systems that can recognize and interpret a scene and detect anomalous events play a vital role in intelligent monitoring. This paper implements automatic gun (or) weapon detection using a convolutional neural network (CNN) algorithm, one of the best algorithms for classification tasks.
1.2.1 DISADVANTAGES
1. High cost.
2. Some materials may have signal problems.
1.3.1 ADVANTAGES
FEATURES:
• All possible sizes and locations of each kernel are used to calculate plenty of features. (Just imagine how much computation it needs: even a 24x24 window results in over 160,000 features.) For each feature calculation, we need to find the sum of the pixels under the white and black rectangles.
• To solve this, the authors introduced integral images, which simplify the calculation of the sum of pixels: however large the number of pixels, the sum reduces to an operation involving just four values. Nice, isn't it? It makes things super-fast.
• But among all the features we calculate, most are irrelevant.
• For example, consider the image below. The top row shows two good features.
• The first feature selected seems to focus on the property that the region of the eyes is often darker than the region of the nose and cheeks.
• The second feature selected relies on the property that the eyes are darker than the bridge of the nose.
• But the same windows applied to the cheeks or anywhere else are irrelevant. So how do we select the best features out of 160,000+? This is achieved by AdaBoost.
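As a rough illustration of why integral images make these sums cheap, the following sketch (plain Python, not the project's actual code; OpenCV computes this internally) builds a summed-area table and then evaluates any rectangle sum from just four table entries:

```python
def integral_image(img):
    """Build a summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, from four look-ups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```

Once the table is built, every white or black rectangle of every Haar feature costs the same four look-ups, regardless of its size.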
1.5.1 INTRODUCTION
Object recognition is the identification of objects in an image. This process would probably start with image processing techniques such as noise removal, followed by (low-level) feature extraction to locate lines, regions and possibly areas with certain textures. The clever bit is to interpret collections of these shapes as single objects, e.g. cars on a road, boxes on a conveyor belt or cancerous cells on a microscope slide. One reason this is an AI problem is that an object can appear very different when viewed from different angles or under different lighting. Another problem is deciding which features belong to which object and which are background, shadows, etc. The human visual system performs these tasks mostly unconsciously, but a computer requires skillful programming and lots of processing power to approach human performance. Image processing means manipulating data in the form of an image through several possible techniques. An image is usually interpreted as a two-dimensional array of brightness values, and is most familiarly represented by such patterns as those of a photographic print, slide, television screen, or movie screen. An image can be processed optically or digitally with a computer. There are three types of images used in digital image processing:
Binary Image
Gray Scale Image
Color Image
BINARY IMAGE:
A binary image is a digital image that has only two possible values for each pixel. Typically the two colors used for a binary image are black and white, though any two colors can be used. The color used for the objects in the image is the foreground color, while the rest of the image is the background color. Binary images are also called bi-level or two-level, meaning that each pixel is stored as a single bit (0 or 1). The names black-and-white, monochrome or monochromatic are often used for this concept, but may also designate any image that has only one sample per pixel, such as a grayscale image. Binary images often arise in digital image processing as masks or as the result of operations such as segmentation, thresholding, and dithering. Some input/output devices, such as laser printers, fax machines, and bi-level computer displays, can only handle bi-level images.
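A minimal thresholding sketch makes the binary-image idea concrete (plain Python for illustration; in practice OpenCV's cv2.threshold does this). Every pixel above the threshold becomes 1, the rest 0:

```python
def to_binary(gray, threshold=128):
    """Map a grayscale image (nested lists) to a 0/1 binary mask."""
    return [[1 if px > threshold else 0 for px in row] for row in gray]

gray = [[ 10, 200,  90],
        [250,  40, 130],
        [ 70, 180, 255]]
print(to_binary(gray))  # [[0, 1, 0], [1, 0, 1], [0, 1, 1]]
```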
Fig 1.3 Example Image for Processing
An image is a rectangular grid of pixels. It has a definite height and a definite width counted in pixels. Each pixel is square and has a fixed size on a given display, although different computer monitors may use different sized pixels. The pixels that constitute an image are ordered as a grid (columns and rows); each pixel consists of numbers representing magnitudes of brightness and color.
Each pixel has a color, stored as a 32-bit integer. The first eight bits determine the redness of the pixel, the next eight bits the greenness, the next eight bits the blueness, and the remaining eight bits the transparency of the pixel.
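The 32-bit layout above can be unpacked with simple bit operations. The sketch below follows the ordering stated in the text (red in the highest eight bits, then green, blue, transparency); real pixel formats such as ARGB or BGRA order the channels differently, so treat the positions as an assumption:

```python
def unpack_pixel(value):
    """Split a 32-bit pixel into (red, green, blue, alpha) per the text."""
    red   = (value >> 24) & 0xFF   # first eight bits
    green = (value >> 16) & 0xFF   # next eight bits
    blue  = (value >> 8) & 0xFF    # next eight bits
    alpha = value & 0xFF           # remaining eight bits (transparency)
    return red, green, blue, alpha

print(unpack_pixel(0xFF8040C0))  # (255, 128, 64, 192)
```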
Fig 1.5 Pixels of the image
Image file size is expressed as the number of bytes, which increases with the number of pixels composing an image and the color depth of the pixels. The greater the number of rows and columns, the greater the image resolution, and the larger the file. Also, each pixel of an image increases in size when its color depth increases: an 8-bit pixel (1 byte) stores 256 colors, while a 24-bit pixel (3 bytes) stores 16 million colors, the latter known as true color.
Image compression uses algorithms to decrease the size of a file. High-resolution cameras produce large image files, ranging from hundreds of kilobytes to megabytes, depending on the camera's resolution and the image-storage format. High-resolution digital cameras record 12-megapixel (1 MP = 1,000,000 pixels) images, or more, in true color. For example, consider an image recorded by a 12 MP camera: since each pixel uses 3 bytes to record true color, the uncompressed image would occupy 36,000,000 bytes of memory, a great amount of digital storage for one image, given that cameras must record and store many images to be practical. Faced with such large file sizes, both within the camera and on a storage disc, image file formats were developed to store such large images.
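The file-size arithmetic above is easy to reproduce: pixels times bytes per pixel gives the uncompressed size.

```python
def uncompressed_size(megapixels, bytes_per_pixel=3):
    """Uncompressed image size in bytes (3 bytes/pixel = true color)."""
    return int(megapixels * 1_000_000 * bytes_per_pixel)

print(uncompressed_size(12))  # 36000000 bytes, as in the 12 MP example
```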
Fig 1.6 Image processing Fundamentals
Image acquisition is the process of acquiring a digital image. It requires an image sensor and the capability to digitize the signal produced by the sensor. The sensor could be a monochrome or color TV camera that produces an entire image of the problem domain every 1/30 second. The image sensor could also be a line-scan camera that produces a single image line at a time; in this case, the object's motion past the line scanner produces a two-dimensional image. If the output of the camera or other imaging sensor is not in digital form, an analog-to-digital converter digitizes it. The nature of the sensor and the image it produces are determined by the application.
Fig 1.8 2-D image from scanner
Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.
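Contrast stretching is one such enhancement. This plain-Python sketch (illustrative only; OpenCV or NumPy would be used in practice) rescales pixel values so the darkest pixel maps to 0 and the brightest to 255:

```python
def stretch_contrast(gray):
    """Linearly rescale a grayscale image to use the full 0..255 range."""
    flat = [px for row in gray for px in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [row[:] for row in gray]  # flat image: nothing to stretch
    return [[round((px - lo) * 255 / (hi - lo)) for px in row]
            for row in gray]

print(stretch_contrast([[100, 150], [125, 200]]))  # [[0, 128], [64, 255]]
```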
1.7.4 SEGMENTATION:
Segmentation procedures partition an image into its constituent parts or objects. In general,
autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged
segmentation procedure brings the process a long way toward successful solution of imaging
problems that require objects to be identified individually.
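Thresholding is one of the simplest segmentation procedures. The sketch below (plain Python, illustrative only; OpenCV offers this via cv2.threshold with the THRESH_OTSU flag) picks a threshold by Otsu's method, which maximizes the between-class variance of the background and foreground intensity groups:

```python
def otsu_threshold(gray):
    """Pick the threshold maximizing between-class variance (Otsu)."""
    flat = [px for row in gray for px in row]
    hist = [0] * 256
    for px in flat:
        hist[px] += 1
    total = len(flat)
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = sum(hist[:t + 1])          # background pixel count
        w1 = total - w0                 # foreground pixel count
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t + 1)) / w0
        mu1 = sum(i * hist[i] for i in range(t + 1, 256)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two clearly separated intensity clusters: threshold falls between them.
gray = [[10, 12, 11], [200, 202, 201], [10, 201, 12]]
print(10 <= otsu_threshold(gray) < 200)  # True
```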
Digital image compression addresses the problem of reducing the amount of data required to represent a digital image. The underlying basis of the reduction process is removal of redundant data. From the mathematical viewpoint, this amounts to transforming a 2D pixel array into a statistically uncorrelated data set. Data redundancy is not an abstract concept but a mathematically quantifiable entity. If n1 and n2 denote the number of information-carrying units in two data sets that represent the same information, the relative data redundancy [2] of the first data set (the one characterized by n1) is defined as

R_D = 1 - 1/C_R

where C_R = n1/n2 is the compression ratio.
In image compression, three basic data redundancies can be identified and exploited: coding redundancy, interpixel redundancy, and psychovisual redundancy. Image compression is achieved when one or more of these redundancies are reduced or eliminated.
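The definitions above can be checked numerically. This small sketch computes the compression ratio and relative redundancy for hypothetical data-set sizes n1 and n2:

```python
def compression_ratio(n1, n2):
    """C_R = n1 / n2, the compression ratio."""
    return n1 / n2

def relative_redundancy(n1, n2):
    """R_D = 1 - 1/C_R, the relative data redundancy of the first set."""
    return 1 - 1 / compression_ratio(n1, n2)

# If the original image needs n1 = 100 units and the compressed
# representation needs n2 = 25, then C_R = 4 and R_D = 0.75,
# i.e. 75% of the original data is redundant.
print(compression_ratio(100, 25), relative_redundancy(100, 25))  # 4.0 0.75
```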
The image compression is mainly used for image transmission and storage. Image transmission
applications are in broadcast television; remote sensing via satellite, air-craft, radar, or sonar;
teleconferencing; computer communications; and facsimile transmission. Image storage is
required most commonly for educational and business documents, medical images that arise in
computer tomography (CT), magnetic resonance imaging (MRI) and digital radiology, motion
pictures, satellite images, weather maps, geological surveys, and so on.
CHAPTER 2
LITERATURE SURVEY
[2] Automatic handgun and knife detection algorithms; Arif Warsi, IEEE Conference, 2019.
Nowadays, the surveillance of criminal activities requires constant human monitoring. Most of these activities involve handheld weapons, mainly pistols and guns. Object detection algorithms have been used to detect weapons such as knives and handguns. Handgun and knife detection is one of the most challenging tasks due to occlusion. This paper reviewed and categorized various algorithms that have been used in the detection of handguns and knives, with their strengths and weaknesses.
[3] Weapon Detection in Real-Time CCTV Videos Using Deep Learning; Muhammad Tahir Bhatti, Muhammad Gufran Khan, 2021.
Deep learning is a branch of machine learning inspired by the functionality and structure of the human brain, also called an artificial neural network. The methodology adopted in this work features state-of-the-art deep learning, especially convolutional neural networks, due to their exceptional performance in this field. These techniques are used both to classify and to localize the specific object in a frame, so both object classification and detection algorithms were used; because the target object is small and appears against other objects in the background, the best algorithm for this case was found through experimentation.
[4] Handheld Gun detection using Faster R-CNN Deep Learning; Gyanendra Kumar Verma, IEEE Conference, 2019.
This paper presents an automatic handheld gun detection system using deep learning, particularly a CNN model. Gun detection is a very challenging problem because of the various subtleties associated with it. One of the most important challenges is occlusion of the gun, which arises frequently. There are two types of gun occlusion, namely gun-to-object and gun-to-site/scene occlusion. Normally, occlusions in gun detection arise under three conditions: self-occlusion, inter-object occlusion, or occlusion by background site/scene structure. Self-occlusion arises when one portion of the gun is occluded by another.
CHAPTER 3
BLOCK DIAGRAM
DESCRIPTION
The ultrasonic sensor senses for obstacles nearby.
The PC is used for the OpenCV operation and takes the image input.
The NodeMCU is connected with sensors such as the metal detector, which checks for metals.
Once metal is detected, the buzzer switches ON.
The webpage displays the name of the weapon detected.
The weapon is shown to the camera, which captures the image of the weapon in real time.
The camera is switched ON. Once it is switched ON, the video starts recording.
The video is converted into pictures. In OpenCV, the images are uploaded and the model is trained with the uploaded images.
It detects the weapon and extracts the features of weapons. In the end it saves the image.
This is repeated for further images to improve accuracy.
Fig 3.3 Block diagram for testing process
DESCRIPTION
The weapon is shown to the camera, which captures the image of the weapon in real time.
The camera is switched ON. Once it is switched ON, the video starts recording.
The video is converted into pictures. In OpenCV, the images are compared with the dataset.
It detects the weapon and extracts the features of weapons. In the end it saves the image, which is compared with the trained model.
The weapon gets recognized, and its name is displayed on the screen once the accuracy check and weapon comparison are done.
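The steps above can be sketched as a single loop. Note that capture_frame() and match_against_dataset() below are hypothetical stand-ins, not the project's code: the real pipeline would use OpenCV (cv2.VideoCapture for frames, a trained classifier for matching).

```python
def capture_frame(source):
    # Stand-in for reading one video frame; here frames are pre-baked.
    return source.pop(0) if source else None

def match_against_dataset(frame, dataset):
    # Stand-in for feature extraction + comparison with the trained model.
    return dataset.get(frame, "unknown")

def recognition_loop(source, dataset):
    """Grab frames until the stream ends, collecting recognized labels."""
    detections = []
    while True:
        frame = capture_frame(source)
        if frame is None:
            break
        label = match_against_dataset(frame, dataset)
        if label != "unknown":
            detections.append(label)  # would be shown on the webpage
    return detections

frames = ["frame_a", "frame_b", "frame_c"]
dataset = {"frame_b": "handgun"}
print(recognition_loop(frames, dataset))  # ['handgun']
```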
3.2 OPENCV
Open Computer Vision is abbreviated as OpenCV. Officially launched in 1999, the OpenCV project was initially an Intel Research initiative to advance CPU-intensive applications, part of a series of projects including real-time ray tracing and 3D display walls. The main contributors to the project included a number of optimization experts at Intel Russia, as well as Intel's Performance Library Team. OpenCV is written in C++ and its primary interface is C++, but it still retains a less comprehensive though extensive older C interface. All of the new developments and algorithms appear in the C++ interface. There are bindings for Python, Java and MATLAB; the API for these interfaces can be found in the online documentation. Wrappers in several programming languages have been developed to encourage adoption by a wider audience. In our project, Open Computer Vision is used to detect the face in our face recognition technique. OpenCV acts like the eye of the computer or any machine.
WEAPON CHARACTERISTIC
A weapon characteristic is a property related to its purpose that can be identified and from which distinguishing, repeatable features can be extracted for the automated recognition of a particular weapon. An example is the gun. This characteristic, recorded with a capture device, can be compared with sample representations of the characteristics of weapons. The features are information extracted from weapon samples, which can be used for comparison with a weapon reference. The aim of extracting features from a sample is to remove any information that does not contribute to weapon recognition. This enables fast comparison, improved performance, and may have privacy advantages.
3.3 PANDAS
Pandas is a software library written for the Python programming language for data manipulation
and analysis. In particular, it offers data structures and operations for manipulating numerical
tables and time series. It is free software released under the three-clause BSD license. The name is
derived from the term “panel data”, an econometrics term for data sets that include observations
over multiple time periods for the same individuals. Its name is a play on the phrase “Python data
analysis” itself. Pandas is mainly used for data analysis. Pandas allows importing data from
various file formats such as comma-separated values, JSON, SQL database tables or queries,
and Microsoft Excel. Pandas allows various data manipulation operations such as
merging, reshaping, selecting, as well as data cleaning, and data wrangling features.
3.4 NUMPY
NumPy is a library for the Python programming language, adding support for large, multi-
dimensional arrays and matrices, along with a large collection of high-
level mathematical functions to operate on these arrays. The ancestor of NumPy, Numeric, was
originally created by Jim Hugunin with contributions from several other developers. In
2005, Travis Oliphant created NumPy by incorporating features of the competing Numarray into
Numeric, with extensive modifications. NumPy is open-source software and has many
contributors. NumPy is a NumFOCUS fiscally sponsored project.
3.5 TKINTER
Tkinter is a Python binding to the Tk GUI toolkit. It is the standard Python interface to the Tk GUI toolkit and is Python's de facto standard GUI. Tkinter is included with standard GNU/Linux, Microsoft Windows and macOS installs of Python. The name Tkinter comes from "Tk interface". Tkinter was written by Steen Lumholt and Guido van Rossum, then later revised by Fredrik Lundh. Tkinter is free software released under a Python license.
3.6 PYTHON
Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably using significant whitespace. It provides constructs that enable clear programming on both small and large scales. Python features a dynamic type system and automatic memory management. It also supports multiple programming paradigms, including object-oriented, imperative, functional and procedural, and has a large and comprehensive standard library.
Python interpreters are available for many operating systems. CPython, the reference implementation of Python, is open-source software and has a community-based development model, as do nearly all of Python's other implementations. Python and CPython are managed by the non-profit Python Software Foundation. Python is meant to be an easily readable language. Its formatting is visually uncluttered, and it often uses English keywords where other languages use punctuation.
3.6.1 INDENTATION
Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An
increase in indentation comes after certain statements; a decrease in indentation signifies the end
of the current block. Thus, the program’s visual structure accurately represents the program’s
semantic structure.[1] This feature is also sometimes termed the off-side rule.
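The off-side rule is visible in a few lines: each block is exactly the code indented beneath the statement that introduces it (the function name and threshold below are made up for illustration).

```python
def classify(reading):
    # The function body is this indented block; each branch's body is
    # the block indented beneath its if/else line.
    if reading > 100:
        return "metal detected"
    else:
        return "clear"

print(classify(150))  # metal detected
print(classify(20))   # clear
```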
Consider the assignment statement (token '=', the equals sign). This operates differently than in traditional imperative programming languages, and this fundamental mechanism (including the nature of Python's version of variables) illuminates many other features of the language. Assignment in C, e.g., x = 2, translates to "typed variable name x receives a copy of numeric value 2". The (right-hand) value is copied into an allocated storage location for which the (left-hand) variable name is the symbolic address. The memory allocated to the variable is large enough (potentially quite large) for the declared type. In the simplest case of Python assignment, using the same example, x = 2 translates to "(generic) name x receives a reference to a separate, dynamically allocated object of numeric (int) type of value 2." This is termed binding the name to the object. Since the name's storage location doesn't contain the indicated value, it is improper to call it a variable. Names may be subsequently rebound at any time to objects of greatly varying types, including strings, procedures, complex objects with data and methods, etc. Successive assignments of a common value to multiple names, e.g., x = 2; y = 2; z = 2, result in allocating storage to (at most) three names and one numeric object, to which all three names are bound. Since a name is a generic reference holder, it is unreasonable to associate a fixed data type with it. However, at a given time a name will be bound to some object, which will have a type; thus there is dynamic typing.
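Name binding can be observed directly: two names bound to the same object compare identical with the `is` operator, and rebinding one name leaves the other untouched.

```python
x = 2
y = x            # y is bound to the same int object as x
print(x is y)    # True: both names reference one object

x = "now a string"   # rebinding x does not affect y
print(y)             # 2
print(type(x).__name__, type(y).__name__)  # str int
```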
The if statement conditionally executes a block of code, along with else and elif (a contraction of else-if).
The for statement iterates over an iterable object, capturing each element to a local variable for use by the attached block.
The while statement executes a block of code as long as its condition is true.
The try statement allows exceptions raised in its attached code block to be caught and handled by except clauses; it also ensures that clean-up code in a finally block will always be run regardless of how the block exits.
The raise statement is used to raise a specified exception or re-raise a caught exception.
The class statement executes a block of code and attaches its local namespace to a class, for use in object-oriented programming.
The def statement defines a function or method.
The with statement, from Python 2.5 (released in September 2006), encloses a code block within a context manager (for example, acquiring a lock before the block of code is run and releasing the lock afterwards, or opening a file and then closing it), allowing Resource Acquisition Is Initialization (RAII)-like behavior and replacing a common try/finally idiom.
The pass statement serves as a NOP. It is syntactically needed to create an empty code block.
The assert statement is used during debugging to check for conditions that ought to apply.
The yield statement returns a value from a generator function. From Python 2.5, yield is also an operator. This form is used to implement coroutines.
The import statement is used to import modules whose functions or variables can be used in the current program. There are three ways of using import: import <module name> [as <alias>], from <module name> import *, or from <module name> import <definition 1> [as <alias 1>], <definition 2> [as <alias 2>], ....
The print statement was changed to the print() function in Python 3.
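A few of the statements listed above fit in one short example: a generator built from yield, driven by a for loop, with a finally block guaranteeing clean-up.

```python
def countdown(n):
    """Generator: yields n, n-1, ..., 1."""
    while n > 0:
        yield n          # produces one value per iteration
        n -= 1

values = []
try:
    for v in countdown(3):
        values.append(v)
finally:
    values.append(0)     # the finally block always runs

print(values)  # [3, 2, 1, 0]
```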
CHAPTER 4
HARDWARE DESCRIPTION
HARDWARE MODULES
NodeMCU
PC
Ultrasonic Sensor
Metal Detector
Buzzer
However, as a chip, the ESP8266 is also hard to access and use. You must solder wires, with the
appropriate analog voltage, to its pins for the simplest tasks such as powering it on or sending a
keystroke to the “computer” on the chip. You also have to program it in low-level machine
instructions that can be interpreted by the chip hardware. This level of integration is not a problem when the ESP8266 is used as an embedded controller chip in mass-produced electronics, but it is a huge burden for hobbyists, hackers, or students who want to experiment with it in their own IoT projects.
Another important difference between the Raspberry Pi and your desktop or laptop, other than the
size and price, is the operating system—the software that allows you to control the computer. The
majority of desktop and laptop computers available today run one of two operating systems:
Microsoft Windows or Apple OS X. Both platforms are closed source, created in a secretive
environment using proprietary techniques. These operating systems are known as closed source
for the nature of their source code, the computer-language recipe that tells the system what to do.
In closed-source software, this recipe is kept a closely-guarded secret. Users are able to obtain the
finished software, but never to see how it’s made.
Unlike Windows or OS X, Linux is open source: it’s possible to download the source code for the
entire operating system and make whatever changes you desire. Nothing is hidden, and all changes
are made in full view of the public. This open-source development ethos has allowed Linux to be quickly altered to run on new hardware, a process known as porting. At the time of this writing, several versions of Linux, known as distributions, have been ported to the BCM2835 chip, including Debian, Fedora Remix and Arch Linux. The different distributions cater to different needs, but they all have something in common: they're all open source. They're also, by and large, compatible with each other: software written on a Debian system will operate perfectly well on Arch Linux and vice versa.
4.2 FLASHING
The easiest way to get the ESP8266 into flash mode is:
1. Pull down GPIO 0 (connect it to GND or DTR; DTR may not work with esptool).
2. Start the flash (press the Flash button or execute the cmd command).
3. Reset the ESP8266 by pulling down the GPIO16/RST pin (touch any of the GND pins with a male end).
Open the folder with esptool and bring up cmd (right click + Shift > Open command window here). Check which port has been assigned to your FTDI232 adapter by bringing up Device Manager. Execute the line and then immediately reset the ESP. If you are on Linux or a Raspberry Pi, you can continue with esptool; if you are using Windows, you can now switch to the NodeMCU flasher tool. The command is built in the following way: (path to python) esptool.py --port [your COM port] write_flash -fm [mode: dio for 4MB+, qio for <4MB] -fs [flash size, 8Mb in this case] 0x00000 (path to bin file).
As before, execute the command and reset the ESP8266 by shorting the GPIO16/RST pin to GND. If you are using Windows and the NodeMCU flash tool, the procedure is similar: press Flash and reset the ESP8266 by shorting GPIO16/RST to GND. The flash will take a few moments and you will see the progress. Once the flash is complete you are ready to upload your Lua files; this is where ESPlorer comes in handy. Disconnect GPIO 0, as we don't need it any more. Set the baud rate to 115200, open the COM port and reset the ESP8266 by shorting GPIO16/RST and GND.
4.2.1 FLASHING FROM LINUX
If your current PC is running a variant of Linux already, you can use the dd command to write the contents of the image file out to the SD card. This is a text-interface program operated from the command prompt, known as a terminal in Linux parlance. Follow these steps to flash the SD card:
1. Open a terminal from your distribution’s applications menu.
2. Plug your blank SD card into a card reader connected to the PC.
3. Type sudo fdisk -l to see a list of disks. Find the SD card by its size, and note the device address
(/dev/sdX, where X is a letter identifying the storage device. Some systems with integrated SD
card readers may use the alternative format /dev/mmcblkX—if this is the case, remember to
change the target in the following instructions accordingly).
4. Use cd to change to the directory with the .img file you extracted from the Zip archive.
5. Type sudo dd if=imagefilename.img of=/dev/sdX bs=2M to write the file imagefilename.img to
the SD card connected to the device address from step 3. Replace imagefilename.img with the
actual name of the file extracted from the Zip archive. This step takes a while, so be patient!
During flashing, nothing will be shown on the screen until the process is fully complete
Flashing the SD card using the dd command in Linux
4.2.2 FLASHING FROM OS X
If your current PC is a Mac running Apple OS X, you'll be pleased to hear that things are as simple as with Linux. Thanks to a similar ancestry, OS X and Linux both contain the dd utility, which you can use to flash the system image to your blank SD card as follows:
1. Select Utilities from the Application menu, and then click on the Terminal application.
2. Plug your blank SD card into a card reader connected to the Mac.
3. Type diskutil list to see a list of disks. Find the SD card by its size, and note the device address
(/dev/diskX, where X is a letter identifying the storage device).
4. If the SD card has been automatically mounted and is displayed on the desktop, type diskutil unmountDisk /dev/diskX to unmount it before proceeding.
5. Use cd to change to the directory with the .img file you extracted from the Zip archive.
6. Type dd if=imagefilename.img of=/dev/diskX bs=2M to write the file imagefilename.img to the
SD card connected to the device address from step 3. Replace imagefilename.img with the actual
name of the file extracted from the Zip archive. This step takes a while, so be patient!
If your current PC is running Windows, things are slightly trickier than with Linux or OS X.
Windows does not have a utility like dd, so some third-party software is required to get the image
file flashed onto the SD card. Although it’s possible to install a Windows-compatible version of
dd, there is an easier way: the Image Writer for Windows. Designed specifically for creating USB
or SD card images of Linux distributions, it features a simple graphical user interface that makes
the creation of a Raspberry Pi SD card straightforward.
Follow these steps to download, install and use the Image Writer for Windows software to prepare
the SD card for the Pi:
1. Download the binary (not source) Image Writer for Windows Zip file, and extract it to a folder
on your computer.
2. Plug your blank SD card into a card reader connected to the PC.
3. Double-click the Win32DiskImager.exe file to open the program, and click the blue folder icon
to open a file browse dialogue box.
4. Browse to the imagefilename.img file you extracted from the distribution archive, replacing
imagefilename.img with the actual name of the file extracted from the Zip archive, and then click
the Open button.
5. Select the drive letter corresponding to the SD card from the Device drop-down dialogue box. If
you’re unsure which drive letter to choose, open My Computer or Windows Explorer to check.
6. Click the Write button to flash the image file to the SD card. This process takes a while, so be
patient!
4.3 CONNECTING
While the Raspberry Pi uses an SD card for its main storage device—known as a boot device—
you may find that you run into space limitations quite quickly. Although large SD cards holding
32 GB, 64 GB or more are available, they are often prohibitively expensive. Thankfully, there are
devices that provide an additional hard drive to any computer when connected via a USB cable.
Known as USB Mass Storage (UMS) devices, these can be physical hard drives, solid-state drives
(SSDs) or even portable pocket-sized flash drives (see Figure 1-6).
Fig: Two USB Mass Storage devices, a pen drive and an external hard drive.
The majority of USB Mass Storage devices can be read by the Pi, whether or not they have existing content. In order for
the Pi to be able to access these devices, their drives must be mounted—a process you will learn in
Chapter 2, “Linux System Administration”. For now, it’s enough to connect the drives to the Pi in
readiness.
CONNECTING POWER
The NodeMCU is powered by the small micro-USB connector found on the lower left side of the
circuit board. This connector is the same as found on the majority of smartphones and some tablet
devices. Many chargers designed for smartphones will work with the Raspberry Pi, but not all.
The board is more power-hungry than most micro-USB devices, and requires up to 700 mA in order to operate. Some chargers can only supply up to 500 mA, causing intermittent problems in operation (see Chapter 3, "Troubleshooting"). Connecting to the USB port on a desktop or laptop computer is possible, but not recommended: as with smaller chargers, the USB ports on a computer may not provide the power required for reliable operation.
Only connect the micro-USB power supply when you are ready to start using the board. With no power button on the device, it will start working the instant power is connected and can only be turned off again by physically removing the power cable.
SPECIFICATIONS
GPIO 0 - Flash
Vcc - Connect to +3.3 V only
4.5.1 PRINCIPLE:
A digital camera records and stores photographic images in digital form. Many current models are also able to capture sound or video, in addition to still images. Capture is usually accomplished by use of a photo sensor, such as a charge-coupled device.
The image is actually stored as an array of pixels: for every pixel in the sensor, the brightness data, represented by a number from 0 to 4095 for a 12-bit A/D converter, is stored in a file along with the coordinates of the pixel's location. Although the camera can record 12 bits, or 4096 steps, of brightness information, almost all output devices can only display 8 bits, or 256 steps, per color channel. The original 12-bit (2^12 = 4096) input data must therefore be converted to 8-bit (2^8 = 256) data for output. For example, the indicated pixel has a brightness level of 252 in the red channel, 231 in the green channel, and 217 in the blue channel. Each color's brightness can range from 0 to 255, for 256 total steps in each color channel when it is displayed on a computer monitor or output to a desktop printer.
Zero indicates pure black, and 255 indicates pure white. 256 levels each of red, green and blue may not seem like a lot, but it is actually a huge number, because 256 x 256 x 256 is more than 16 million individual colors.
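The 12-bit to 8-bit conversion described above can be illustrated with a minimal NumPy sketch. Note that the simple linear mapping shown here (discarding the 4 least significant bits) is only an approximation; real cameras typically apply a tone curve during conversion.

```python
import numpy as np

# Simulated 12-bit sensor readings (0..4095), one value per pixel.
raw_12bit = np.array([0, 1024, 2048, 4095], dtype=np.uint16)

# Linear conversion to 8-bit (0..255): drop the 4 least significant bits.
out_8bit = (raw_12bit >> 4).astype(np.uint8)

print(out_8bit.tolist())  # [0, 64, 128, 255]

# Three 8-bit channels give 256 x 256 x 256 possible colors per pixel.
print(256 ** 3)  # 16777216
```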
OPERATION:
An image sensor is an electronic, photosensitive device which converts an optical image into an
electronic signal. It is composed of millions of photodiodes and is used as an image receiver in
digital imaging equipment. An image sensor is capable of reacting to the impact of photons, thus
converting them into an electrical current that is then passed onto an analog-digital converter. The
most common types of image sensors are CCD and CMOS sensors. Image sensors are mostly used
in camera modules, digital cameras and other imaging devices. Some of the earliest analog sensors
were video camera tubes. Currently, the most common image sensors are digital charge-coupled
device (CCD) or complementary metal–oxide–semiconductor (CMOS) active pixel sensors. In a
camera, a photo electronic image sensor converts the light passing through the lens into per-
photodiode charges of varying sizes. These charges are then processed by the camera’s electronics
and are converted into image information by the camera's software. Applications extend even to the dental X-ray field, including intra-oral and panoramic imaging.
When the camera operates, an aperture opens at the front of the camera and light streams in through the lens. A piece of electronic equipment captures the incoming light rays and turns them into electrical signals. This light detector is one of two types: either a charge-coupled device (CCD) or a CMOS image sensor. Light from the subject being photographed enters the camera lens. This incoming "picture" hits the image sensor chip, which breaks it up into millions of pixels. The sensor measures the color and brightness of each pixel and stores it as a number. A digital photograph is effectively an enormously long string of numbers describing the exact details of each pixel it contains.
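The idea that a digital photograph is just a long string of per-pixel numbers can be seen directly with NumPy. This is a minimal sketch; the 2x2 image and its color values are made up for illustration.

```python
import numpy as np

# A tiny 2x2 RGB "photograph": each pixel is three numbers (R, G, B), 0-255.
image = np.array(
    [[[252, 231, 217], [0, 0, 0]],       # top row: warm grey, pure black
     [[255, 255, 255], [128, 64, 32]]],  # bottom row: pure white, brown
    dtype=np.uint8,
)

print(image.shape)               # (2, 2, 3): height, width, color channels
print(image.flatten().tolist())  # the image as one long string of numbers
```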
Fig 4.3 Wire Connection
4.6.1 PRINCIPLE
A special sonic transducer is used for ultrasonic proximity sensors, which allows for alternate transmission and reception of sound waves. The sonic waves emitted by the transducer are reflected by an object and received back by the transducer. After having emitted the sound waves, the ultrasonic sensor switches to receive mode. The time elapsed between emitting and receiving is proportional to the distance of the object from the sensor.
Target Detection: Sonic waves are best reflected from hard surfaces. Targets may be solids,
liquids, granules or powders. In general, ultrasonic sensors are deployed for object detection
where optical principles would lack reliability.
Standard Target: The ultrasonic distance sensor provides precise, non-contact distance measurements from about 2 cm (0.8 inches) to 3 meters (3.3 yards).
Measuring Methods: Bidirectional TTL pulse interface on a single I/O pin can communicate
with 5 V TTL or 3.3 V CMOS microcontrollers.
Input trigger: positive TTL pulse, 2 μs min, 5 μs typ.
Echo pulse: positive TTL pulse, 115 μs minimum to 18.5 ms maximum.
Fig 4.4 Ultrasonic Sensor
4.6.4 OPERATION:
The Trig pin of the SR04 must receive a high (5 V) pulse for at least 10 µs; this initiates the sensor, which transmits a burst of 8 ultrasonic cycles at 40 kHz and waits for the reflected ultrasonic burst. When the sensor detects ultrasound at its receiver, it sets the Echo pin high (5 V) for a period (pulse width) proportional to the distance. To obtain the distance, measure the width (Ton) of the Echo pin:
Time = width of Echo pulse, in µs (microseconds)
Distance in centimeters = Time / 58
Distance in inches = Time / 148
Or you can utilize the speed of sound, which is 340 m/s.
The timing diagram is shown below. You only need to supply a short 10 µs pulse to the trigger input to start the ranging; the module will then send out an 8-cycle burst of ultrasound at 40 kHz and raise its Echo pin. The width of the Echo pulse is proportional to the range, so you can calculate the range from the time interval between sending the trigger signal and receiving the echo signal. Formula: µs / 58 = centimeters, or µs / 148 = inches; alternatively, range = high-level time x velocity (340 m/s) / 2. We suggest using a measurement cycle of over 60 ms, in order to prevent the trigger signal from interfering with the echo signal.
VCC = +5VDC
Trig = Trigger input of Sensor
Echo = Echo output of Sensor
GND = GND
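The echo-width-to-distance conversion above can be sketched in Python. This is a minimal illustration of the arithmetic only; reading the Echo pin width from the hardware is omitted.

```python
SPEED_OF_SOUND_CM_PER_US = 0.0340  # 340 m/s expressed in cm per microsecond

def distance_cm(echo_width_us: float) -> float:
    """Distance from the datasheet divisor: centimeters = microseconds / 58."""
    return echo_width_us / 58

def distance_cm_from_speed(echo_width_us: float) -> float:
    """Same distance via the speed of sound: the pulse covers the path twice,
    so the one-way distance is half of (time x velocity)."""
    return echo_width_us * SPEED_OF_SOUND_CM_PER_US / 2

# A 580 us echo corresponds to roughly 10 cm by either method.
print(round(distance_cm(580), 2))             # 10.0
print(round(distance_cm_from_speed(580), 2))  # 9.86
```

The small difference between the two results comes from the rounded divisor 58 in the datasheet formula.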
Fig 4.7 Interfacing with Controller
A metal detector is a portable electronic instrument which detects the presence of metal nearby.
Metal detectors are useful for finding metal inclusions hidden within objects, or metal objects
buried underground. They often consist of a handheld unit with a sensor probe which can be swept
over the ground or other objects. If the sensor comes near a piece of metal, this is indicated by a changing tone in earphones, or a needle moving on an indicator. Usually the device gives some
indication of distance; the closer the metal is, the higher the tone in the earphone or the higher the
needle goes. The simplest form of a metal detector consists of an oscillator producing an
alternating current that passes through a coil producing an alternating magnetic field. If a piece of
electrically conductive metal is close to the coil, eddy currents will be induced in the metal, and
this produces a magnetic field of its own. If another coil is used to measure the magnetic field
(acting as a magnetometer), the change in the magnetic field due to the metallic object can be
detected. The first industrial metal detectors were developed in the 1960s and were used
extensively for mineral prospecting and other industrial applications. Uses include de-mining (the
detection of land mines), the detection of weapons such as knives and guns (especially in airport
security), geophysical prospecting, archaeology and treasure hunting. Metal detectors are also
used to detect foreign bodies in food, and in the construction industry to detect steel reinforcing
bars in concrete and pipes and wires buried in walls and floors.
Fig 4.8 Metal Detector
4.7.1 METHODOLOGY
In this work, we have attempted to develop an integrated framework for surveillance security that detects weapons in real time; if a detection is confirmed, the system alerts the security personnel so that they can handle the situation by reaching the place of the incident, guided by IP cameras. We propose a model that gives a machine the visual sense to identify an unsafe weapon and alert the human operator whenever a gun or firearm is visible in the frame. Moreover, we have an automated door-locking mechanism for when a shooter appears to be carrying a dangerous weapon. Where possible, through IP webcams we can also share the live picture with the approaching security personnel so that they can act in the meantime. Finally, we have built an information system that records all activity in the metropolitan areas; this involves designing a database that logs every event so that prompt action can be taken in a future emergency.
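The alerting flow described above can be sketched as plain decision logic. This is a minimal illustration: the class labels, the confidence threshold and the `Alert` fields are hypothetical, and a real system would feed in actual detector output and drive real door locks and notifications.

```python
from dataclasses import dataclass, field

WEAPON_LABELS = {"gun", "knife", "rifle"}  # hypothetical class names
CONFIDENCE_THRESHOLD = 0.6                 # hypothetical tuning value

@dataclass
class Alert:
    notify_operator: bool
    lock_doors: bool
    weapons: list = field(default_factory=list)

def decide(detections):
    """Given (label, confidence) pairs from a detector, build an alert."""
    weapons = [
        label for label, conf in detections
        if label in WEAPON_LABELS and conf >= CONFIDENCE_THRESHOLD
    ]
    return Alert(
        notify_operator=bool(weapons),
        lock_doors=bool(weapons),  # lock entrances whenever a weapon is seen
        weapons=weapons,
    )

alert = decide([("person", 0.98), ("gun", 0.83), ("knife", 0.40)])
print(alert.weapons)  # ['gun'] - the knife is below the threshold
```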
4.8 BUZZER
The buzzer is a sounding device that can convert audio signals into sound signals. It is usually
powered by DC voltage. It is widely used in alarms, computers, printers and other electronic
products as sound devices. It is mainly divided into piezoelectric buzzer and electromagnetic
buzzer, represented by the letter "H" or "HA" in the circuit. According to different designs and
uses, the buzzer can emit various sounds such as music, siren, buzzer, alarm, and electric bell. The electromagnetic buzzer uses a pulse current to drive the vibration of a metal plate to generate sound. The piezoelectric buzzer is mainly composed of a multi-resonator, a piezoelectric plate, an impedance matcher, a resonance box, a housing, etc. Some piezoelectric buzzers are also equipped with light-emitting diodes. The
multi-resonator consists of transistors or integrated circuits. When the power supply is switched on
(1.5~15V DC operating voltage), the multi-resonator oscillates and outputs 1.5~2.5kHz audio
signal. The impedance matcher pushes the piezoelectric plate to generate sound. The piezoelectric
plate is made of lead zirconate titanate or lead magnesium niobate piezoelectric ceramic, and
silver electrodes are plated on both sides of the ceramic sheet. After being polarized and aged, the
silver electrodes are bonded together with brass or stainless steel sheets.
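The 1.5 to 2.5 kHz drive signal mentioned above amounts to toggling an output at the target frequency; the timing arithmetic can be sketched as follows (a minimal illustration, with the hardware-specific GPIO toggling itself omitted):

```python
def half_period_us(frequency_hz: float) -> float:
    """Time the output must stay high (then low) to produce a square wave
    at the given frequency: one full period is 1/f seconds."""
    period_us = 1_000_000 / frequency_hz
    return period_us / 2

# A 2 kHz buzzer tone needs the pin toggled every 250 microseconds.
print(half_period_us(2000))  # 250.0
print(half_period_us(1500))  # ~333.3
```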
Fig 4.9 Buzzer
CHAPTER 5
OUTPUT OF THE PROJECT
STEP 2
STEP 3
STEP 4
STEP 5
CHAPTER 6
ADVANTAGES AND
APPLICATIONS
6.1 ADVANTAGES
The automation of weapon detection is more consistent and faster than manual review.
Increased safety and anomaly event monitoring in crowded events or public places.
In mission-critical situations, such systems flag critical situations for immediate human review.
Computer vision based weapon detection is highly scalable and can operate with a high number
of cameras and in complex and crowded scenes.
Weapon identification and classification to aid further investigation by security personnel.
6.2 APPLICATIONS
CHAPTER 7
CONCLUSION AND
FUTURE SCOPE
7.1 CONCLUSION
Weapon detection in a surveillance system using the YOLOv3 algorithm is faster than the earlier CNN, R-CNN and Faster R-CNN algorithms. In this era where things are automated, object detection has become one of the most interesting fields. When it comes to object detection in surveillance systems, speed plays an important role in locating an object quickly and alerting the authorities. This work tried to achieve the same, and it is able to produce faster results than previously existing systems. The accuracy and sensitivity in the identification and classification of weapons are high, so the project's outcomes have been good. The information contained in weapon photos has proven to be as useful as the pre-processing procedures employed to limit the amount of data input during classification. The goal of developing these subsystems was to be able to use only the important data: the pixels around the locations that, based on their intensity, may be weapons. Even though it is not widely used, a CNN taking input images of three deep dimensions has been created and performs well. Its main disadvantage is that it requires more input data, which increases the number of parameters that must be trained in the CNN.
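The pre-processing idea mentioned above, keeping only the pixels whose intensity suggests they may be worth classifying, can be sketched with NumPy. The 4x4 intensity map and the threshold value here are made up for illustration.

```python
import numpy as np

# A made-up 4x4 intensity map standing in for one channel of a frame.
frame = np.array([
    [ 10,  12,  11,  10],
    [ 13, 250, 248,  12],
    [ 11, 249, 251,  10],
    [ 10,  11,  12,  13],
])

THRESHOLD = 200  # hypothetical intensity cut-off for "interesting" pixels

# Keep only the coordinates of pixels bright enough to be worth classifying.
candidates = np.argwhere(frame > THRESHOLD)
print(candidates.tolist())  # [[1, 1], [1, 2], [2, 1], [2, 2]]
print(f"{len(candidates)} of {frame.size} pixels passed to the classifier")
```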
There are many future scopes for this project, such as the following:
1. If conditions improve, we can implement this system using a multimedia GSM module in future.
2. To achieve stronger security, we can use the iris-scan method.
3. To improve system performance, we can use advanced versions of the Raspberry Pi module as required.
4. If needed, we can adapt this system for use in air services.
5. If the user needs to operate this system through an Android application, it is possible.
Because this model simply detects the weapon's name, it can be improved in the future by adding more features, such as counting the weapons when more than one is present. It can be improved further by using a bounding box to label the weapon's name within the image itself, and by identifying the weapon in a live image, which could then be used in a CCTV system for the purpose of detecting the weapon. The suggested method detects weapons of the same type in a single image and can be improved in the future to recognize the names of many types of weapons in a single image.
CHAPTER 8
SNAPSHOT OF OUTPUT
Fig 7.2 Hardware Output
REFERENCES
[1] Harsha Jain et al., "Weapon detection using artificial intelligence and deep learning for security applications", ICESC 2020.
[2] Arif Warsi et al., "Automatic handgun and knife detection algorithms", IEEE Conference 2019.
[4] Gyanendra Kumar Verma et al., "Handheld gun detection using Faster R-CNN deep learning", IEEE Conference 2019.