A Project Presentation on
Hand Gesture Recognition for Physically Handicapped People

Under the guidance of:
Dr. Rachana Asthana
Professor & Head, Electronics Engineering Dept., H.B.T.I. Kanpur

Submitted by:
Ankur Singh (1204530006)
Deepak Dubey (1204530017)
Sanjeev Yadav (1204530028)
Satyam Srivastava (1204530029)
Smrithy Sivakumar (1204530032)
TOOLS/COMPONENTS USED:
1. MATLAB R2012a
2. Camera (webcam)
Hand Gesture Recognition - A Literature Survey of Various Techniques, Methods and Algorithms:
Gesture Recognition Techniques
Instrumented Gloves
Vision-Based Technology
Comparison of Various Models
Method                     Glove-Based Model   Vision-Based Model
Cost                       Higher              Lower
User comfort               Lower               Higher
Hand anatomy restriction   High                Low
Calibration                Critical            Not critical
Portability                Low                 High
Gesture Recognition Algorithms:
1. Template Matching
2. Feature Extraction Analysis
3. Active Shapes Model
4. Principal Component Analysis
5. Linear Fingertip Models
Comparison of Various Algorithms:
Template Matching (VB and GB)
  Advantages: simplest; accurate for a small set of postures; small amount of calibration
  Disadvantages: does not work for large posture sets

Feature Extraction Analysis (VB)
  Advantages: handles both postures and gestures; layered architecture
  Disadvantages: computationally expensive

Active Shape Models (VB)
  Advantages: real-time recognition; handles both hand postures and gestures
  Disadvantages: tracks only the open hand

Principal Component Analysis (both)
  Advantages: recognizes on the order of 25 to 35 postures
  Disadvantages: more training needed

Linear Fingertip Models (VB)
  Advantages: simple; good recognition accuracy
  Disadvantages: not real time; recognizes only a small set of postures
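As a concrete illustration of the simplest technique above, template matching slides a stored template over the input image and scores each position. A minimal Python sketch using the sum of squared differences (SSD) on plain 2-D lists — all names here are illustrative, not from the project code:

```python
def match_template(image, template):
    """Return ((row, col), score) of the best-matching position of
    `template` inside `image`, scored by sum of squared differences.
    Both arguments are 2-D lists of grayscale intensities."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_score, best_pos = float("inf"), None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best_score:
                best_score, best_pos = ssd, (r, c)
    return best_pos, best_score

# Toy posture "image" with the template embedded at row 1, col 2
image = [[0, 0, 0, 0, 0],
         [0, 0, 9, 9, 0],
         [0, 0, 9, 9, 0],
         [0, 0, 0, 0, 0]]
template = [[9, 9],
            [9, 9]]
print(match_template(image, template))  # ((1, 2), 0)
```

The exhaustive scan is why the method degrades for large posture sets: every stored template must be scanned over the whole image.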
DESIGN ISSUES:
1. Variation of illumination conditions
2. Rotation problem
3. Background problem
4. Scale problem
5. Translation problem
6. Hardware problems
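For the first issue, a common first step is to normalize intensities so that uniform brightness or contrast changes cancel out. A minimal Python sketch of min-max normalization (illustrative only; the project itself works in MATLAB):

```python
def normalize(pixels):
    """Min-max normalize grayscale intensities to [0, 1] so that a
    uniform brightness offset or contrast scaling yields the same values."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0.0 for _ in pixels]
    return [(p - lo) / (hi - lo) for p in pixels]

original = [10, 20, 30, 40]
brighter = [p + 50 for p in original]   # uniform illumination offset
print(normalize(original) == normalize(brighter))  # True
```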
Algorithm Development:
Working of Hand Gesture Recognition System
Point Pattern Matching Algorithm:
SIFT Algorithm:
To perform reliable recognition, it is important that the features extracted from the training image be detectable even under changes in image scale, noise and illumination. Such points usually lie in high-contrast regions of the image, such as object edges.
Another important characteristic of these features is that the relative positions between them in the original scene should not change from one image to another.
Key Point matching by SIFT algorithm
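The idea of high-contrast feature points can be illustrated with a one-dimensional toy detector — a Python sketch, not part of the project's MATLAB code, and only a crude stand-in for SIFT's actual difference-of-Gaussians detector:

```python
def high_contrast_points(signal, threshold):
    """Return indices where the absolute central difference exceeds
    `threshold` -- a 1-D toy stand-in for edge/keypoint detection."""
    return [i for i in range(1, len(signal) - 1)
            if abs(signal[i + 1] - signal[i - 1]) > threshold]

# A step edge between indices 3 and 4 produces a strong response there
signal = [10, 10, 10, 10, 200, 200, 200]
print(high_contrast_points(signal, 50))  # [3, 4]
```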
MK-RoD Algorithm:
Algorithm Implementation:
The implementation can be subdivided into the following nine steps:
1. Input the query image (execHGR.m).
2. Check whether the image is in RGB format and, if so, convert it to grayscale (isRGB.m).
3. Read an image and return its SIFT keypoints (SIFT.m).
4. For each descriptor in the first image, select its match in the second image (match.m).
5. Show the matched points for both images.
6. Show both images side by side (appendimages.m).
7. Calculate the distance of the matched points to the center of the keypoints (formresults.m).
8. Calculate the validity ratio.
9. Print and display the results (HGR.m).
1. Input the query image:
input='Images/Inputs/sample/[Link]';
results=hgr(input);
2. Check whether the image is in RGB format and convert it to grayscale:
y = isrgb1(image);
if y
    image = rgb2gray(image);
end
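rgb2gray combines the R, G and B channels with the standard ITU-R BT.601 luminance weights. A one-pixel Python sketch of the same conversion (illustrative, not the project code):

```python
def rgb_to_gray(r, g, b):
    """Convert one RGB pixel to a grayscale intensity using the
    ITU-R BT.601 luminance weights, as MATLAB's rgb2gray does."""
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

print(round(rgb_to_gray(255, 255, 255)))  # 255 (white stays white)
print(rgb_to_gray(0, 0, 0))               # 0.0 (black stays black)
```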
3. Read an image and return its SIFT keypoints:
[Image, descriptors, locs] = sift(imageFile)
Image: The image array in double format.
Descriptors: A K-by-128 matrix, where each row gives an invariant descriptor for one of the K keypoints. The descriptor is a vector of 128 values normalized to unit length.
Locs: A K-by-4 matrix, in which each row holds the 4 values of a keypoint location (row, column, scale, orientation). The orientation is in the range [-PI, PI] radians.
The output of the SIFT algorithm is given as an input parameter to the match function.
4. Match point calculation:
[match1, match2, cx1, cy1, cx2, cy2, num] = match(image1, image2, distRatio);
Find the SIFT keypoints for Image 1.
Find the SIFT keypoints for Image 2.
For each descriptor in the first image, select its match in the second image.
Calculate the center points.
num returns the number of matched keypoints.
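The match step follows Lowe's nearest-neighbour distance-ratio test: because descriptors are unit length, similarity can be measured by the angle between them (acos of their dot product), and a match is accepted only when the nearest neighbour is markedly closer than the second nearest. A Python sketch of this test on 2-D toy descriptors (illustrative, not the project's match.m):

```python
from math import acos, sqrt

def unit(v):
    """Normalize a descriptor to unit length, as SIFT descriptors are."""
    n = sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def match_descriptors(desc1, desc2, dist_ratio=0.6):
    """For each unit-length descriptor in desc1, find its nearest
    neighbour in desc2 by angle, and accept the match only if it is
    dist_ratio times closer than the second nearest (Lowe's ratio
    test). Returns a list of (index1, index2) pairs."""
    matches = []
    for i, d1 in enumerate(desc1):
        angles = sorted(
            (acos(min(1.0, sum(a * b for a, b in zip(d1, d2)))), j)
            for j, d2 in enumerate(desc2))
        if len(angles) > 1 and angles[0][0] < dist_ratio * angles[1][0]:
            matches.append((i, angles[0][1]))
    return matches

d1 = [unit([1.0, 0.0]), unit([1.0, 1.0])]
d2 = [unit([0.99, 0.05]), unit([0.0, 1.0])]
# The first descriptor has one clearly closest neighbour and is matched;
# the second is nearly equidistant from both candidates and is rejected.
print(match_descriptors(d1, d2))  # [(0, 0)]
```

Rejecting ambiguous matches this way is what keeps the later validity-ratio computation meaningful.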
5. Show the matched points for both images:
The function imshow can be used to display an image.
6. Show both images side by side:
im = appendimages(image1, image2)
Select the image with the fewest rows and fill in enough empty rows to make it the same height as the other
image.
if (rows1 < rows2)
    image1(rows2, 1) = 0;
else
    image2(rows1, 1) = 0;
end
Now append both images side-by-side.
im = [image1 image2];
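The same pad-then-append logic, sketched in Python on grayscale images stored as 2-D lists (illustrative, not the project's appendimages.m):

```python
def append_images(image1, image2):
    """Place two grayscale images (2-D lists) side by side, padding
    the shorter one with zero-filled rows so the heights match."""
    rows1, rows2 = len(image1), len(image2)
    width1, width2 = len(image1[0]), len(image2[0])
    if rows1 < rows2:
        image1 = image1 + [[0] * width1 for _ in range(rows2 - rows1)]
    elif rows2 < rows1:
        image2 = image2 + [[0] * width2 for _ in range(rows1 - rows2)]
    # Append row by row: each output row is row1 followed by row2
    return [r1 + r2 for r1, r2 in zip(image1, image2)]

a = [[1, 1]]                # 1 x 2 image
b = [[2, 2], [2, 2]]        # 2 x 2 image
print(append_images(a, b))  # [[1, 1, 2, 2], [0, 0, 2, 2]]
```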
7. Calculate the distance of the matched points to the center of the keypoints:
If the number of matched keypoints is greater than 2, start the algorithm.
Calculate the distances of the matched keypoints to the center of the keypoints.
Sum the distances and calculate the distance ratio array.
Calculate the total valid points by summing the distanceMask.
Calculate the validity ratio of the keypoints by dividing the valid matched keypoints by the matched keypoints.
8. Calculate the validity ratio:
Validity Ratio = Number of Valid Points / Number of Matched Points
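Steps 7 and 8 can be sketched together: compute each matched keypoint's distance to the centroid of the matched points in its image, form the per-point distance ratio between the two images, mark points whose ratio stays close to the mean ratio as valid, and divide. The specific validity criterion below (ratio within a tolerance of the mean) is an assumption for illustration; the project's formresults.m may define the mask differently:

```python
from math import hypot, fsum

def validity_ratio(pts1, pts2, tol=0.2):
    """Toy version of steps 7-8: pts1 and pts2 are matched keypoint
    coordinates from images 1 and 2. A match counts as valid when its
    centroid-distance ratio stays within `tol` of the mean ratio
    (an illustrative assumption, not the project's exact rule)."""
    def dists(pts):
        # Distances of each point to the centroid of its point set
        cx = fsum(x for x, _ in pts) / len(pts)
        cy = fsum(y for _, y in pts) / len(pts)
        return [hypot(x - cx, y - cy) for x, y in pts]
    d1, d2 = dists(pts1), dists(pts2)
    ratios = [a / b for a, b in zip(d1, d2)]
    mean = fsum(ratios) / len(ratios)
    valid = sum(1 for r in ratios if abs(r - mean) <= tol * mean)
    return valid / len(ratios)   # validity ratio = valid / matched

# Four matched points; image 2 is image 1 uniformly scaled by 2,
# so every distance ratio equals 0.5 and all matches are valid.
pts1 = [(0, 0), (2, 0), (0, 2), (2, 2)]
pts2 = [(0, 0), (4, 0), (0, 4), (4, 4)]
print(validity_ratio(pts1, pts2))  # 1.0
```

Using ratios of distances to the centroid makes the test tolerant to uniform scaling and translation between the query and database images.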
9. Print and display the results.
RESULTS:
Figure: Input image obtained
Figure: The grayscale pre-processed image
Figure: SIFT keypoints for the input/query image
Figure: Matched points for "Database Image-1" and "Image-2" (7 matched keypoints found)
Figure: Matched points for "Database Image-2" and "Image-2" (no matched keypoints found)
Figure: Matched database image versus input image
Conclusion and Future Work:
1. Developed a hand gesture recognition system for hearing-impaired persons by designing and building a man-machine interface that uses a video input to interpret various gestures.
2. The system is designed to be cost-efficient; a further advantage is that the user can communicate from a distance and needs no physical contact with the computer.
3. A visual system was chosen over an audio-based system because the latter would fail in real-time use in noisy environments, or in situations where sound would cause a disturbance.
4. Extending the system to dynamic gestures remains future work.
5. Gesture recognition is widely used in the consumer electronics industry.
6. Applications in the health care industry are expected to emerge significantly.
7. Google is presently working on Project Soli, which uses radar.
REFERENCES
[1] David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, 2004.
[2] David G. Lowe, "Object recognition from local scale-invariant features," International Conference on Computer Vision, Corfu, Greece, pp. 1150-1157, September 1999.
[3] S. Pandita, S. P. Narote, "Hand Gesture Recognition using SIFT," International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, Vol. 2, Issue 1, January 2013.
[4] K. G. Derpanis, "A Review of Vision-Based Hand Gestures," Internal Report, Department of Computer Science, York University, February 2004.
[5] Richard Watson, "A Survey of Gesture Recognition Techniques," Technical Report TCD-CS-93-11, Department of Computer Science, Trinity College Dublin, 1993.
[6] Mitra Khaledian, Mohammad Bagher Menhaj, "Real-time Vision-based Hand Gesture Recognition Using SIFT Features," TELKOMNIKA Indonesian Journal of Electrical Engineering, Vol. 15, No. 1, July 2015, pp. 162-170, DOI: 10.11591/telkomnika.v15i1.8091.
[7] [Link], Ms [Link], "Hand Gesture Recognition Analysis of Various Techniques, Methods and Their Algorithms," International Journal of Innovative Research in Science, Engineering and Technology, Volume 3, Special Issue 3, March 2014.