
Automatic Region Detection of Facial Feature using Haar Classifier

N. S. Priya, Department of Information Technology, LJCET, Nagercoil.

Abstract: This paper proposes automatic region detection of facial features in an image, an important stage for many facial image manipulation tasks such as face recognition, facial expression recognition, 3D face modeling and facial feature tracking. Detecting regions for facial features such as the eyes, pupils, mouth, nose, nostrils, lip corners and eye corners across facial images with different expressions and illumination is a challenging task. In this paper we present different methods for fully automatic region detection of facial features. An object detector with cascaded Haar-like features is used to detect the face, eyes and nose. Techniques using the basic concepts of facial geometry are proposed to locate the mouth, nose and eye positions. Estimating a detection region for each feature (eye, nose and mouth) effectively improves detection accuracy. An algorithm using the H plane of the HSV color space is proposed for detecting the eye pupil within the detected eye region. The proposed algorithms are tested on 100 frontal face images with two facial expressions (neutral and smiling).

Keywords: ROI (Region of Interest), Facial Expression, AdaBoost, Cascaded Classifier, HSV.

I. INTRODUCTION

The ability to recognize human faces is a demonstration of remarkable human intelligence. Over the last three decades, researchers from diverse areas have attempted to replicate this visual perception of human beings in machine recognition of faces [1]. However, substantial challenges remain, such as intra-class variations in three-dimensional pose, facial expression, make-up and lighting conditions, as well as occlusion and cluttered backgrounds. Facial feature detection is a very important stage in vision-related applications such as face identification, feature tracking, facial expression recognition, face synthesis and head pose estimation. Facial features generally include salient points that can be tracked easily, such as the corners of the eyes, the nostrils and the lip corners. Currently, most facial expression tracking applications rely on manually marked points as the initial feature points for tracking. The face has a regular geometry that can be estimated from the eye, mouth and nose positions. Eye detection using the AdaBoost algorithm [2] gives an estimated location of the face and eyes. AdaBoost combines a set of weak cascaded classifiers, which greatly improves classification accuracy. We have developed a mouth detection method based on facial geometry and tested it on a number of faces, including a set of faces from a database. In the proposed paper, pupil detection is done using the hue information [6] obtained from the hue plane of the image. In the hue image, the pupil is the darkest region compared with its neighborhood. The pupil contains reddish color information that can be separated easily from the rest of the eye region by simple thresholding of the hue plane. The eye center gives information about the head pose, and the locations of other facial features, such as the lips and nose, can be estimated once the eye center is known. Lip muscles are highly deformable and subject to change. The Shi-Tomasi corner detection algorithm used in this paper selects features that can be tracked easily. We have used samples of smiling faces as well as faces wearing glasses to detect the relevant features. Nostrils are detected by thresholding the gray-scale image of the nose and then finding the contours within the region; the center of the rectangle bounding each contour is taken as the nostril. The methodology for detecting lip corner points is also explained in detail. The algorithm is tested on various frontal face images under different illumination.
The remainder of this paper describes the facial feature detection approach, followed by an overview of the AdaBoost detection algorithm used for face detection. We then outline the proposed feature detection methodologies for eye corner detection, nostril detection, mouth location estimation and mouth corner detection.

II. Proposed Algorithm for Facial Feature Region Detection


In our proposed approach, the face is first detected using the Viola-Jones boosting algorithm [2] with a set of cascaded Haar-like features. The eye search area is reduced by assuming that the eyes lie in the upper part of the face, and a Haar-like feature cascade is then used for eye detection, locating the rectangular regions containing the eyes. Given the eye ROI, an algorithm locates the eye pupil from the hue information of the eye image: the hue image is thresholded, contours are detected in the thresholded image, and the centroid of the contour is taken as the eye pupil. A minimal sketch of this pupil step is given below.
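As an illustration of the pupil-location step, the following sketch implements hue-plane thresholding followed by a contour centroid. It is written in Python with OpenCV rather than the IDL used by the authors, and the hue threshold value, the choice of the largest contour, and the OpenCV 4.x findContours signature are assumptions, not details taken from the paper.

import cv2

def detect_pupil(eye_roi_bgr, hue_thresh=20):
    """Locate the pupil inside a detected eye ROI by thresholding the
    hue plane and taking the centroid of the largest contour.
    hue_thresh is an illustrative value, not taken from the paper."""
    hsv = cv2.cvtColor(eye_roi_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]                      # H plane of the HSV image
    # The pupil appears as the darkest (lowest-hue, reddish) region in the
    # hue plane, so keep the pixels below the threshold.
    _, mask = cv2.threshold(hue, hue_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)  # largest dark blob
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))  # (x, y)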

Next, the nose is detected using Haar-like features. With the eye centers and the nose position known, an approach based on facial geometry is proposed for estimating the mouth location. An algorithm is then developed to locate the lip corner points, which are good features for tracking lip movement. Finally, the nostrils are detected from the nose ROI by thresholding the gray-scale nose image and finding the contours in the thresholded image.

A. Face Detection
The Viola-Jones [2] face detection algorithm is based on Haar-like features. Four types of rectangular feature are used: for each, the sum of pixel values under the shaded area of the rectangle is subtracted from the sum under the white area. A hedged sketch of this detection stage follows.
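As a concrete example of face detection with a Haar classifier, the sketch below runs a stock OpenCV frontal-face cascade over a gray-scale image. The cascade file name and the detection parameters (scaleFactor, minNeighbors, minSize) are illustrative assumptions; the paper does not specify them, and its own implementation is in IDL.

import cv2

# Stock frontal-face cascade shipped with OpenCV (assumed here; the paper
# uses an AdaBoost-trained Haar classifier of its own).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_bgr):
    """Return a list of (x, y, w, h) face rectangles."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)           # normalize illumination
    return face_cascade.detectMultiScale(gray,
                                         scaleFactor=1.1,
                                         minNeighbors=5,
                                         minSize=(60, 60))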

Fig 2: Eye corner detection to obtain the pupil

Eye corner detection algorithm:
1) Get the ROI image of the eye.
2) Extract the gray-scale image of the ROI.
3) Using the Shi-Tomasi "Good Features to Track" method, obtain all good corner features within the ROI.
4) For each corner, compute the difference between the pupil x-coordinate and the corner x-coordinate; corners with a positive difference lie to the left of the pupil and those with a negative difference lie to the right.
5) The eye corners usually lie near the y-coordinate of the eye pupil, mostly slightly below it, so a constraint is applied in the y-direction.
6) The largest positive value gives the left-most corner point and the largest negative value gives the right-most corner point.
7) The two resulting extreme points are the required eye corners.
A sketch of this procedure is given below.
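The following Python/OpenCV sketch illustrates the listed procedure; it is not the authors' IDL code. The corner-detector parameters and the y-direction tolerance y_tol are assumed values, and the pupil position is taken from the earlier pupil-detection step.

import cv2

def detect_eye_corners(eye_roi_bgr, pupil_xy, y_tol=8):
    """Shi-Tomasi corners in the eye ROI, split into left/right of the
    pupil by the sign of the x-difference, keeping the two extreme points.
    y_tol (pixels) is an assumed constraint around the pupil row."""
    gray = cv2.cvtColor(eye_roi_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=25,
                                      qualityLevel=0.01, minDistance=3)
    if corners is None:
        return None
    px, py = pupil_xy
    left, right = [], []
    for (x, y) in corners.reshape(-1, 2):
        if y < py - y_tol or y > py + 2 * y_tol:   # stay near (and slightly below) the pupil row
            continue
        dx = px - x
        if dx > 0:
            left.append((x, y, dx))    # positive difference: left of the pupil
        elif dx < 0:
            right.append((x, y, dx))   # negative difference: right of the pupil
    if not left or not right:
        return None
    left_corner = max(left, key=lambda p: p[2])[:2]    # largest +ve difference
    right_corner = min(right, key=lambda p: p[2])[:2]  # largest -ve difference
    return left_corner, right_corner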

Fig 1: Haar classifier for face region detection

The AdaBoost algorithm for classifier learning [1] is used to detect the facial features.

B. Eye Detection
The eyes are the most prominent features of the face. Accurate detection of the eyes and of features such as the eye corners and eye pupils is necessary for initializing the feature points used for tracking. In this paper, eyes are detected from the face ROI using an eye detection cascade of boosted tree classifiers with Haar-like features. To improve the accuracy of eye detection, the approximate location of the eyes within the face is estimated first. The eye corners are more stable than other eye features, but their accurate detection is difficult because they lie in the skin region and do not have unique gray-scale characteristics. A sketch of this region-restricted eye detection is given below.
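The sketch below illustrates the region-restricted eye detection: a stock OpenCV eye cascade is run only inside the upper part of the detected face rectangle and the hits are mapped back to full-image coordinates. The cascade file and the fraction of the face height that is searched are assumptions, not values given in the paper.

import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")  # stock eye cascade (assumed)

def detect_eyes(image_gray, face_rect, upper_frac=0.55):
    """Run the eye cascade only inside the upper part of the face box and
    translate the results back to full-image coordinates."""
    fx, fy, fw, fh = face_rect
    roi = image_gray[fy:fy + int(fh * upper_frac), fx:fx + fw]
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=4)
    # Map ROI coordinates back into the original image frame.
    return [(fx + ex, fy + ey, ew, eh) for (ex, ey, ew, eh) in eyes]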

D. Nose Detection and Nostril Detection


The nose is detected within the face region using cascaded Haar-like features and the AdaBoost classification algorithm. To improve nose detection accuracy, an estimated nose region is first computed in the given face image. A pictorial view of the nose region estimation is shown in Fig. 3.

C. Eye Corners Detection


Eye corners are among the most important features of the face. The eye corner points are considered global features because they do not change with changes in facial expression. Fig. 2 gives a pictorial description of the eye corners and their locations. In the proposed work, the eye corners are detected using the algorithm listed alongside Fig. 2 above.

Fig 3: Nose and Nostril Detection

The nose ROI is detected using the Haar classifier [1]. Detecting the nostrils is the more complex part and is described next.

1) Nostril Detection: Nostrils are relatively darker than the surrounding nose regions even under a wide range of lighting conditions. Once the nose is detected, the nostrils can be found by searching for two dark regions. The algorithm is:
1) Get the nose ROI after it is detected using the cascaded Haar-like features.
2) Extract the gray-scale image of the ROI.
3) Threshold the gray image (a conventional thresholding method is used here).
4) Use morphological operations such as erosion to remove small particles and disjoint parts.
5) Obtain rectangles bounding the contours and find the centroid of each rectangle.
6) The resulting centroids are the two nostrils.

E. Mouth Detection

The proposed method of mouth detection uses a simple fact of facial geometry: the approximate width of the lips equals the distance between the two eye centers. The mouth y-location starts just after the nose tip, and its x-extent is given by the x-locations of the two eye centers. The height of the mouth is estimated as 3/4 of the detected nose height; it can also be taken as equal to the nose height to avoid cutting off the lower lip edge, especially when a person is smiling. Fig. 4 gives a pictorial description of the proposed mouth region estimation method, and a combined sketch of the nostril detection and the mouth region estimate is given below.

Fig 4: Mouth Detection
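The following sketch combines the nostril algorithm above with the geometric mouth-region estimate. It is an illustration in Python/OpenCV, not the authors' IDL implementation; the gray-level threshold, the erosion kernel size, and the use of the two largest contours are assumed details, and all coordinates are taken in the full-image frame.

import cv2
import numpy as np

def detect_nostrils(nose_roi_bgr, thresh=40):
    """Nostrils as the centers of the two dark blobs in the thresholded
    nose ROI. thresh is an illustrative value."""
    gray = cv2.cvtColor(nose_roi_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.erode(mask, np.ones((3, 3), np.uint8))   # remove small specks
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    # Keep the two largest dark blobs; their bounding-box centers are the nostrils.
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    centers = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        centers.append((x + w // 2, y + h // 2))
    return centers

def estimate_mouth_roi(left_eye_c, right_eye_c, nose_rect):
    """Mouth box from facial geometry: width = inter-eye distance,
    top = just below the nose box, height = 3/4 of the nose height."""
    nx, ny, nw, nh = nose_rect
    x0, x1 = left_eye_c[0], right_eye_c[0]     # mouth x-range from the eye centers
    y0 = ny + nh                                # start below the nose tip
    height = int(0.75 * nh)                     # 3/4 of the nose height
    return (x0, y0, x1 - x0, height)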

Following this simple calculation, the mouth region can be computed. The proposed method for detecting the lip corners is given below (a sketch follows the list):
1) Get the detected mouth ROI, as described in Section E.
2) Extract the gray-level image.
3) Threshold the gray image; this gives clean edge information for a closed mouth and a contour region for an open mouth.
4) Using the Shi-Tomasi corner detection method, obtain all corner points in the thresholded image.
5) Taking the midpoint of the lip as (x10, y10), compute the differences Δx1 = x10 − x1i and Δy1 = y10 − y1i for each corner point (x1i, y1i).
6) Negative differences give points to the right of the midpoint and positive differences give points to the left.
7) Take the points at both extremes; the resulting points are the lip corners.
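A hedged sketch of this lip-corner procedure follows. Otsu thresholding is used here in place of the unspecified "conventional" threshold, and the Shi-Tomasi parameters are illustrative; the paper's own implementation is in IDL.

import cv2
import numpy as np

def detect_lip_corners(mouth_roi_bgr):
    """Shi-Tomasi corners on the thresholded mouth ROI; the points with the
    largest positive / negative x-difference from the ROI midpoint are taken
    as the left and right lip corners."""
    gray = cv2.cvtColor(mouth_roi_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    corners = cv2.goodFeaturesToTrack(binary, maxCorners=40,
                                      qualityLevel=0.01, minDistance=3)
    if corners is None:
        return None
    pts = corners.reshape(-1, 2)
    mid_x = mouth_roi_bgr.shape[1] / 2.0        # midpoint of the mouth ROI
    dx = mid_x - pts[:, 0]
    left = tuple(pts[int(np.argmax(dx))])       # largest positive difference
    right = tuple(pts[int(np.argmin(dx))])      # largest negative difference
    return left, right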

III. Experimental Results

The IDL language is used for the implementation, and detection is performed on frontal images; the frontal results of the implementation are shown in the accompanying figures.

IV. References

[1] J. Chen and O. Lemon, Automatic and robust detection of frontal features in frontal images, in Signal and Image Processing Applications (ICSIPA), 2009 IEEE International Conference on. IEEE, 2010, pp. 279-284.

[2] P. Viola and M. Jones, Robust real-time object detection, International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2002.
[3] B. Xiang and X. Cheng, Eye detection based on improved AdaBoost algorithm, in Signal Processing Systems (ICSPS), 2010 2nd International Conference on, vol. 2. IEEE, 2010, p. V2.
[4] C. Thomaz and G. Giraldi, A new ranking method for principal components analysis and its application to face image analysis, Image and Vision Computing, vol. 28, no. 6, pp. 902-913, 2010.
[5] L. Ding and A. Martinez, Precise detailed detection of faces and facial features, in Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on. IEEE, 2008, pp. 1-7.
[6] Z. Zheng, J. Yang, M. Wang, and Y. Wang, A novel method for eye features extraction, Computational and Information Science, pp. 1002-1007, 2005.
[7] C. Xu, Y. Zheng, and Z. Wang, Semantic feature extraction for accurate eye corner detection, in Pattern Recognition, 2008. ICPR 2008. 19th International Conference on. IEEE, 2008, pp. 1-4.
[8] J. Shi and C. Tomasi, Good features to track, in Computer Vision and Pattern Recognition, 1994. Proceedings CVPR '94, 1994 IEEE Computer Society Conference on. IEEE, 1994, pp. 593-600.
[9] D. Vukadinovic and M. Pantic, Fully automatic facial feature point detection using Gabor feature based boosted classifiers, in Systems, Man and Cybernetics, 2005 IEEE International Conference on, vol. 2. IEEE, 2005, pp. 1692-1698.
[10] J. van de Kraats and D. van Norren, Directional and nondirectional spectral reflection from the human fovea, Journal of Biomedical Optics, vol. 13, p. 024010, 2008.
[11] G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, 2008.
