Weapon-Detection-System Using DIP (Seminar Project)

ABSTRACT

We have recently witnessed a series of bomb blasts in Dilsukhnagar, Hyderabad. The bombs killed many people and injured many more. On February 22nd, two explosions took place within one hour, leaving the world in shock and India in terror. This situation is not limited to Hyderabad; it can happen anywhere and at any time. People assume bomb blasts cannot be anticipated. Here we present a technology that helps anticipate suicide bombers and concealed explosives through image processing for concealed weapon detection (CWD).
The detection of weapons concealed underneath a person's clothing is very important to improving the security of the general public as well as the safety of public assets such as airports, buildings, and railway stations. Manual screening procedures for detecting concealed weapons such as handguns, knives, and explosives are common in controlled-access settings like airports, entrances to sensitive buildings, and public events. It is sometimes desirable to be able to detect concealed weapons from a standoff distance, especially when it is impossible to channel the flow of people through a controlled procedure. In the present paper we describe the concepts behind the technology of "Concealed Weapon Detection": the sensor improvements, how the imaging takes place, and the challenges involved. We also describe techniques for simultaneous noise suppression and object extraction.
INTRODUCTION
Until now, the detection of concealed weapons has been done by manual screening procedures, used to control explosives at places such as airports, sensitive buildings, and famous monuments. These manual screening procedures do not give satisfactory results: they can screen a person only when he or she is close to the screening machine, and they sometimes raise false alarms. We therefore need a technology that detects a weapon by scanning from a distance. This can be achieved by imaging for concealed weapons (CWD). The goal is the eventual deployment of automatic detection and recognition of concealed weapons. This is a technological challenge that requires innovative solutions in sensor technologies and image processing. The problem also presents challenges in the legal arena. A number of sensors based on different phenomenologies, together with supporting image processing, are being developed to observe objects underneath people's clothing.

 Imaging Sensors

Imaging sensors developed for CWD applications can be classified by their portability, their proximity to the subject, and whether they use active or passive illumination. The different types of imaging sensors for CWD are summarized in the table below.

 Infrared Imager

Infrared imagers utilize the temperature distribution of the target to form an image. They are normally used for a variety of night-vision applications, such as viewing vehicles and people. The underlying principle is that infrared radiation emitted by the human body is absorbed by clothing and then re-emitted by it. As a result, infrared radiation can reveal the image of a concealed weapon only when the clothing is tight, thin, and stationary. For normal loose clothing, the emitted infrared radiation is spread over a larger clothing area, decreasing the ability to image a weapon.

 PMW Imaging Sensors

 First Generation
Passive millimeter wave (MMW) sensors measure the apparent temperature of a scene through the energy that is emitted or reflected by its sources. The output of the sensors is a function of the emissivity of the objects in the MMW spectrum as measured by the receiver. Clothing penetration for concealed weapon detection is made possible by MMW sensors because of the low emissivity and high reflectivity of objects like metallic guns. In early 1995, MMW data were obtained by means of scans using a single detector that took up to 90 minutes to generate one image.
Fig. 1(a) below shows a visual image of a person wearing a heavy sweater that conceals two guns made of metal and ceramic. The corresponding 94-GHz radiometric image, Fig. 1(b), was obtained by scanning a single detector across the object plane using a mechanical scanner. The radiometric image clearly shows both firearms.

Fig. 1: (a) Visible and (b) MMW Image of a Person Concealing 2 Guns beneath a Heavy Sweater

 Second Generation
Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. One such camera is the pupil-plane array from Trex Enterprises. It is a 94-GHz radiometric pupil-plane imaging system that employs frequency scanning to achieve vertical resolution and uses an array of 32 individual waveguide antennas for horizontal resolution. This system collects up to 30 frames/s of MMW data.

 CWD through Image Fusion


By fusing passive MMW image data with the corresponding infrared (IR) or electro-optical (EO) image, more complete information can be obtained; that information can then be used to facilitate concealed weapon detection. Fusion of an IR image revealing a concealed weapon with its corresponding MMW image has been shown to facilitate extraction of the concealed weapon.
 Image Processing Architecture

An image processing architecture for CWD is shown in Fig. 2. The input can be multisensor data (i.e., MMW + IR, MMW + EO, or MMW + IR + EO) or MMW data alone. In the latter case, the registration and fusion blocks drop out of the figure. The output can take several forms. It can be as simple as a processed image/video sequence displayed on a screen; a cued display in which potential concealed weapon types and locations are highlighted with associated confidence measures; a "yes," "no," or "maybe" indicator; or a combination of the above. The image processing procedures that have been investigated for CWD applications range from simple denoising to automatic pattern recognition.

Fig. 2: An Image Processing Architecture Overview for CWD
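To make this branching concrete, here is a minimal Python sketch of the architecture. Every stage function is a hypothetical placeholder (the names cwd_process, register, fuse, denoise, and detect are ours, not the paper's), and the defaults simply pass data through.

```python
import numpy as np

def cwd_process(mmw, ir=None,
                register=lambda a, b: (a, b),   # placeholder alignment stage
                fuse=lambda a, b: (a + b) / 2,  # placeholder fusion stage
                denoise=lambda x: x,            # placeholder preprocessing stage
                detect=lambda x: "maybe"):      # placeholder decision stage
    """Skeleton of the CWD architecture: multisensor input passes
    through registration and fusion; MMW-only input bypasses both."""
    if ir is not None:
        mmw, ir = register(mmw, ir)  # align the two modalities
        frame = fuse(mmw, ir)        # combine into one composite frame
    else:
        frame = mmw                  # MMW-only path: no registration/fusion
    frame = denoise(frame)           # e.g., wavelet denoising (see below)
    return frame, detect(frame)      # processed image plus cued indicator
```

Swapping the placeholders for the registration, fusion, and denoising sketches given later in this paper would yield one end-to-end processing chain.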

 Wavelet Approaches for Preprocessing

Before an image or video sequence is presented to a human observer for operator-assisted weapon detection, or fed into an automatic weapon detection algorithm, it is desirable to preprocess the image or video data to maximize its exploitation. The preprocessing steps considered in this section include enhancement and filtering for the removal of shadows, wrinkles, and other artifacts. When more than one sensor is used, preprocessing must also include registration and fusion procedures.

 Image Denoising Through Wavelets


There have been several investigations into additive noise suppression in signals and images using wavelet transforms. The principal work is that of Donoho and Johnstone [Donoho 1992; Donoho and Johnstone 1992], which is based on thresholding the DWT of an image and then reconstructing it. The method relies on the fact that noise commonly manifests itself as fine-grained structure in the image, while the wavelet transform provides a scale-based decomposition; most of the noise therefore tends to be represented by wavelet coefficients at the finer scales, and discarding these coefficients results in a natural filtering out of noise on the basis of scale. Because the coefficients at such scales also tend to be the primary carriers of edge information, the method of Donoho and Johnstone [1992] sets a wavelet coefficient to zero only if its value is below a threshold; such coefficients mostly correspond to noise, while the edge-related coefficients are usually above the threshold. An alternative to such hard thresholding is soft thresholding, which leads to less severe distortion of the object of interest. Several approaches have been suggested for setting the threshold for each band of the wavelet decomposition. A common approach is to compute the sample variance σ² of the coefficients in a band and set the threshold to some multiple of the standard deviation σ; a soft threshold at that value is then applied to the DWT coefficients of the band. The inverse wavelet transform of the thresholded coefficients is the denoised image. Such denoising has been found effective: the noise is suppressed, while edge features are retained without much damage.
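As a concrete illustration of the scheme above, here is a minimal sketch using NumPy and the PyWavelets package. The wavelet choice (db4), the decomposition level, and the multiplier k on each band's standard deviation are our assumptions, not values given in the paper.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(image, wavelet="db4", level=3, k=3.0, mode="soft"):
    """Threshold the DWT detail bands of an image and reconstruct.

    Each detail band gets its own threshold, set to k times that
    band's standard deviation, following the variance-based rule in
    the text; mode="soft" shrinks surviving coefficients toward zero,
    mode="hard" leaves them untouched.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [coeffs[0]]  # keep the coarse approximation as-is
    for bands in coeffs[1:]:  # (H, V, D) detail triples at each scale
        out.append(tuple(
            pywt.threshold(band, k * band.std(), mode=mode)
            for band in bands
        ))
    return pywt.waverec2(out, wavelet)
```

In practice the wavelet, level, and k would be tuned per sensor; soft thresholding is the default here because, as noted above, it distorts the object of interest less than hard thresholding.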

Fig. 3: Block Diagram of DWT-Based Denoising Scheme


Fig. 4: Edge Extraction from Original Image

Fig. 5: Edge Extraction from Denoised Image


 Registration of Multisensor Images

As indicated earlier, making use of multiple sensors may increase the efficacy of a CWD system. The first step toward image fusion is precise alignment of the images (i.e., image registration). Even though MMW imagers penetrate clothing, they do not provide the best picture, owing to their limited resolution. Infrared (IR) sensors, on the other hand, provide well-resolved pictures with less capability for clothing penetration. Combining both technologies should provide a well-resolved image with a better view of a concealed weapon. Combining IR and MMW image sensors requires registration and fusion procedures. The registration procedure is based on the observation that body shapes are well preserved in both IR and MMW images. We first scale the IR image according to prior knowledge about the sensors. Then the body shapes are extracted from the backgrounds of the IR and MMW images. Finally, we apply correlation to the resulting binary images to determine the X and Y displacements.
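A minimal sketch of that final correlation step follows, assuming the body silhouettes have already been extracted as binary masks and the IR image has already been rescaled; the function and parameter names are illustrative only.

```python
import numpy as np
from scipy.signal import correlate2d

def silhouette_offset(mmw_mask, ir_mask):
    """Estimate the (dy, dx) translation that best aligns two binary
    body-shape masks, via the peak of their cross-correlation."""
    corr = correlate2d(mmw_mask.astype(float), ir_mask.astype(float),
                       mode="full")
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    # In "full" mode a zero shift lands at index (rows-1, cols-1) of
    # the second input, so subtract that origin to get signed offsets.
    dy = peak_y - (ir_mask.shape[0] - 1)
    dx = peak_x - (ir_mask.shape[1] - 1)
    return dy, dx
```

Shifting the IR image by the returned offsets, e.g. np.roll(ir_mask, (dy, dx), axis=(0, 1)), would then bring the two modalities into rough alignment before fusion.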

Fig. 6: Block Diagram for Registration Procedure


 Image Decomposition

The most straightforward approach to image fusion is to take the average of the source images, but this can produce undesirable results such as a decrease in contrast. Many of the more advanced image fusion methods involve multiresolution image decomposition based on the wavelet transform. First, an image pyramid is constructed for each source image by applying the wavelet transform to it. This transform-domain representation emphasizes important details of the source images at different scales, which is useful for choosing fusion rules. Then, using a feature selection rule, a fused pyramid is formed for the composite image from the pyramid coefficients of the source images. The simplest feature selection rule is to choose the maximum of the two corresponding transform values; this allows details from two or more images to be integrated into one. Finally, the composite image is obtained by taking the inverse pyramid transform of the composite wavelet representation. The process can be applied to the fusion of multiple source images, and this type of method has been used to fuse IR and MMW images for CWD applications. The first fusion example for a CWD application is given in Fig. 8: two IR images taken by separate IR cameras from different viewing angles are fused, and the advantage of image fusion in this case is clear, since a complete gun shape can be observed only in the fused image. A second fusion example, the fusion of IR and MMW images, is shown in Fig. 9.
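The maximum-selection rule is simple enough to sketch directly. The snippet below is a hypothetical illustration (again using PyWavelets; the wavelet and level are arbitrary choices): it builds a wavelet pyramid for each registered source, keeps the larger-magnitude coefficient at every position, and inverts the composite pyramid.

```python
import numpy as np
import pywt

def _pick_max(a, b):
    """Keep, at each position, the coefficient of larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Fuse two registered, same-size images with the
    maximum-coefficient selection rule described in the text."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [_pick_max(ca[0], cb[0])]  # approximation band
    for bands_a, bands_b in zip(ca[1:], cb[1:]):  # detail triples per scale
        fused.append(tuple(_pick_max(a, b)
                           for a, b in zip(bands_a, bands_b)))
    return pywt.waverec2(fused, wavelet)
```

Averaging the approximation band instead of max-selecting it is a common variant that keeps the fused image's overall brightness closer to the sources.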

Fig. 7: Block Diagram of Fusion Procedure



Fig. 8: (a) and (b) Original IR Images; (c) Fused Image
Fig. 9: Example of Fusion Procedure

 Automatic Weapon Detection

After preprocessing, the images or video sequences can be displayed for operator-assisted weapon detection or fed into a module for automated weapon detection. Several steps are required toward this aim, including object extraction, shape description, and weapon recognition.

 Segmentation for Object Extraction

Object extraction is an important step toward automatic recognition of a weapon, regardless of whether or not the image fusion step is involved. Segmentation has been used successfully to extract the gun shape from fused IR and MMW images, something that could not be achieved using either original image alone. Another segmentation procedure applied successfully to MMW video sequences for CWD applications is the Slamani Mapping Procedure (SMP); a block diagram of this procedure is shown in Fig. 10. The procedure computes multiple important thresholds of the image data in the Automatic Threshold Computation (ATC) stage for 1) regions with distinguishable intensity levels and 2) regions with close intensity levels. Regions with distinguishable intensity levels have multimodal histograms, whereas regions with close intensity levels have overlapping histograms. The thresholds from both cases are fused to form the set of important thresholds in the scene. At the output of the ATC stage, the scene is quantized for each threshold value to separate the data above and below it. Adaptive filtering is then used to perform homogeneous pixel grouping and obtain the "objects" present at each threshold level. The resulting scene is referred to as a component image. When the component images obtained for all thresholds are added together, they form a composite image that displays objects in different colors. Fig. 11 shows an original scene and its corresponding composite image; note that the weapon appears as a single object in the composite image.
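The sketch below conveys the flavor of the SMP's quantize-group-compose loop; it is not the published procedure. In particular, SciPy's connected-component labeling stands in for the adaptive-filtering grouping stage, and the thresholds (the ATC output) must be supplied by the caller.

```python
import numpy as np
from scipy import ndimage

def smp_composite(image, thresholds, min_size=50):
    """Quantize the scene at each supplied threshold, group the pixels
    above it into objects, and stack the per-threshold component
    images into one integer-labeled composite."""
    composite = np.zeros(image.shape, dtype=int)
    for level, t in enumerate(sorted(thresholds), start=1):
        above = image >= t                     # quantize at this threshold
        labels, count = ndimage.label(above)   # stand-in for adaptive grouping
        for obj in range(1, count + 1):
            mask = labels == obj
            if mask.sum() >= min_size:         # suppress speckle "objects"
                composite[mask] = level        # this threshold's component image
    return composite
```

Rendering the returned composite with a categorical colormap would reproduce the objects-in-different-colors effect of Fig. 11.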

Fig. 10: Block Diagram of SMP


Fig. 11: Original and Composite Images

Fig. 12: Block Diagram for Recognition Procedure


Fig. 13: A CWD Example

Challenges

There are several challenges ahead. One critical issue is performing detection at a distance with a high probability of detection and a low probability of false alarm. Another difficulty to be surmounted is building portable multisensor instruments. Finally, detection systems go hand in hand with the subsequent response by the operator, so system development should take into account the overall context of deployment.
Conclusion
Imaging techniques based on a combination of sensor technologies and image processing will potentially play a key role in addressing the concealed weapon detection problem. In this paper, we first briefly reviewed the sensor technologies being investigated for CWD; among these, passive MMW imaging sensors received particular attention. Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. However, MMW cameras alone cannot provide useful information about the detail and location of the individual being monitored. To enhance the practical value of passive MMW cameras, sensor fusion approaches using MMW and IR or MMW and EO cameras were described. By integrating the complementary information from different sensors, a more effective CWD system is expected. In the second part of the paper, we surveyed the image processing techniques being developed toward this goal, specifically MMW image and video enhancement, filtering, registration, fusion, extraction, description, and recognition.
References

1. www.wikipedia.org
2. www.google.com
3. R. M. Rao, "Wavelet Transforms and Its Applications."
4. Article in IEEE Signal Processing Magazine, Mar. 2005, pp. 52-61.
5. S. Mallat and W. L. Hwang, "Singularity detection and processing with wavelets," IEEE Trans. Inform. Theory, vol. 38, no. 2, pp. 617-643, 1992.
6. R. Polikar, "Wavelets" (The Wavelet Tutorial).
7. N. G. Paulter, "Guide to the Technologies of Concealed Weapon Imaging and Detection," NIJ Guide 602-00, 2001.
8. F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens, "Multimodality image registration by maximization of mutual information," IEEE Trans. Med. Imag., vol. 16, no. 2, pp. 187-198, 1997.
