Weapon-Detection-System Using DIP (Seminar Project)
Imaging Sensors
The imaging sensors developed for CWD applications can be classified according to their
portability, proximity, and whether they use active or passive illumination. The different
types of imaging sensors for CWD are shown in the following table.
Infrared Imager
Infrared imagers utilize the temperature distribution information of the target to form
an image. Normally they are used for a variety of night-vision applications, such as
viewing vehicles and people. The underlying theory is that the infrared radiation
emitted by the human body is absorbed by clothing and then re-emitted by it. As a
result, infrared radiation can be used to show the image of a concealed weapon only
when the clothing is tight, thin, and stationary. For normally loose clothing, the emitted
infrared radiation will be spread over a larger clothing area, thus decreasing the
ability to image a weapon.
First Generation
Passive millimeter-wave (MMW) sensors measure the apparent temperature
through the energy that is emitted or reflected by sources. The output of the sensors
is a function of the emissivity of the objects in the MMW spectrum as measured by the
receiver. Clothing penetration for concealed weapon detection is made possible by
MMW sensors because of the low emissivity and high reflectivity of objects like metallic
guns. In early 1995, MMW data were obtained by means of scans using a single
detector, which took up to 90 minutes to generate one image.
Fig. 1(a) shows a visual image of a person wearing a heavy sweater that
conceals two guns made of metal and ceramic. The corresponding 94-GHz
radiometric image in Fig. 1(b) was obtained by scanning a single detector across
the object plane using a mechanical scanner. The radiometric image clearly shows
both firearms.
Fig. 1: (a) Visible and (b) MMW Image of a Person Concealing 2 Guns beneath a Heavy Sweater
Second Generation
Recent advances in MMW sensor technology have led to video-rate (30 frames/s)
MMW cameras. One such camera is the pupil-plane array from Terex Enterprises. It
is a 94-GHz radiometric pupil-plane imaging system that employs frequency
scanning to achieve vertical resolution and uses an array of 32 individual waveguide
antennas for horizontal resolution. This system collects up to 30 frames/s of MMW
data.
An image processing architecture for CWD is shown in Figure 4. The input can be
multisensor (i.e., MMW + IR, MMW + EO, or MMW + IR + EO) data or MMW
data alone. In the latter case, the registration and fusion blocks can be
removed from the figure. The output can take several forms. It can be as simple as a
processed image/video sequence displayed on a screen; a cued display where potential
concealed weapon types and locations are highlighted with associated confidence
measures; a “yes,” “no,” or “maybe” indicator; or a combination of the above. The
image processing procedures that have been investigated for CWD applications range from
simple denoising to automatic pattern recognition.
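The architecture described above can be sketched as a simple processing pipeline. The function names and the mean-filter denoising step below are illustrative assumptions, not part of any published system; a minimal numpy-only sketch:

```python
import numpy as np

def denoise(img):
    # Placeholder denoising stage: a 3x3 mean filter with edge padding,
    # standing in for whatever denoising a real CWD system would use.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

def cwd_pipeline(mmw, ir=None):
    """Minimal CWD pipeline sketch: denoise, optionally register/fuse,
    and return the image that would feed detection or display."""
    mmw = denoise(mmw)
    if ir is None:
        return mmw  # MMW-only path: registration/fusion blocks removed
    ir = denoise(ir)
    # Registration and fusion would go here; a plain average stands in.
    return 0.5 * (mmw + ir)
```

The two return paths mirror the two configurations in the text: MMW-only input skips the registration and fusion blocks entirely.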
As indicated earlier, making use of multiple sensors may increase the efficacy of a
CWD system. The first step toward image fusion is a precise alignment of images (i.e.,
image registration). Even though MMW imagers penetrate clothing, they do not
provide the best picture because of their limited resolution. Infrared (IR) sensors, on the
other hand, provide well-resolved pictures with less capability for clothing penetration.
Combining the two technologies should provide a well-resolved image with a better
view of a concealed weapon. Combining IR and MMW image sensors requires
registration and fusion procedures. The registration procedure is
based on the observation that the body shapes are well preserved in IR and MMW
images. We first scale the IR image according to some prior knowledge about the
sensors. Then the body shapes are extracted from the backgrounds of the IR and
MMW images. Finally, we apply correlation to the resulting binary images to determine
the X and Y displacements.
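The registration steps above (silhouette extraction, then correlation to find the X and Y displacements) can be sketched with numpy. The mean-based threshold and the circular FFT correlation are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def silhouette(img, thresh=None):
    # Extract a binary body shape by simple thresholding
    # (assumed extraction method for illustration).
    if thresh is None:
        thresh = img.mean()
    return (img > thresh).astype(float)

def register_shift(ir_img, mmw_img):
    """Estimate the (dy, dx) displacement aligning the IR silhouette with
    the MMW silhouette, via FFT-based circular cross-correlation."""
    a = silhouette(ir_img)
    b = silhouette(mmw_img)
    # The peak of the cross-correlation surface gives the displacement.
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size into negative offsets.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

In practice the IR image would first be rescaled using prior knowledge of the sensors, as the text notes, before the correlation step.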
The most straightforward approach to image fusion is to take the average of the
source images, but this can produce undesirable results such as a decrease in
contrast. Many of the advanced image fusion methods involve multiresolution
image decomposition based on the wavelet transform. First, an image pyramid is
constructed for each source image by applying the wavelet transform to the source
images. This transform domain representation emphasizes important details of the
source images at different scales, which is useful for choosing the best fusion rules.
Then, using a feature selection rule, a fused pyramid is formed for the composite
image from the pyramid coefficients of the source images. The simplest feature
selection rule is choosing the maximum of the two corresponding transform values.
This allows the integration of details into one image from two or more images. Finally,
the composite image is obtained by taking an inverse pyramid transform of the
composite wavelet representation. The process can be applied to fusion of multiple
source imagery. This type of method has been used to fuse IR and MMW images for
CWD application. The first fusion example for CWD application is given in Figure 7.
Two IR images taken from separate IR cameras at different viewing angles are
considered in this case. The advantage of image fusion is clear here, since a
complete gun shape can be observed only in the fused image. The second fusion example,
fusion of IR and MMW images, is provided in Figure 9.
Fig. 8: (a) and (b) Original IR Images; (c) Fused Image
Fig. 9: Example of Fusion Procedure
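The maximum-selection fusion rule described above can be sketched with a single-level Haar decomposition implemented in plain numpy (a library such as PyWavelets would normally be used; this hand-rolled version assumes even image dimensions):

```python
import numpy as np

def haar2(img):
    # One-level 2-D Haar transform: approximation + 3 detail subbands.
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # Inverse of haar2: rebuild columns, then rows.
    H, W = ll.shape
    a = np.zeros((H, 2 * W)); d = np.zeros((H, 2 * W))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.zeros((2 * H, 2 * W))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def fuse_max(img1, img2):
    """Fuse two registered images: average the approximation subbands and
    keep the larger-magnitude detail coefficient (maximum-selection rule)."""
    c1, c2 = haar2(img1), haar2(img2)
    ll = 0.5 * (c1[0] + c2[0])
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2(ll, *details)
```

A full system would build a multilevel pyramid and may use more sophisticated feature-selection rules, but the max rule shown here is the simplest one mentioned in the text.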
There are several challenges ahead. One critical issue is the challenge of
performing detection at a distance with high probability of detection and
low probability of false alarm. Yet another difficulty to be surmounted is
building portable multisensor instruments. Also, detection systems go
hand in hand with subsequent response by the operator, and system
development should take into account the overall context of deployment.
Conclusion
Imaging techniques based on a combination of sensor technologies and
processing will potentially play a key role in addressing the concealed weapon
detection problem. In this paper, we first briefly reviewed the sensor
technologies being investigated for the CWD application, with emphasis on
passive MMW imaging sensors. Recent advances in MMW sensor technology
have led to video-rate (30 frames/s) MMW cameras. However, MMW cameras
alone cannot provide useful information about the detail and location of the
individual being monitored. To enhance the practical value of passive MMW
cameras, sensor fusion approaches using MMW and IR, or MMW and EO,
cameras were described. By
integrating the complementary information from different sensors, a more
effective CWD system is expected. In the second part of this paper, we
provided a survey of the image processing techniques being developed to
achieve this goal. Specifically, topics such as MMW image or video
enhancement, filtering, registration, fusion, extraction, description, and
recognition were discussed.