Efficient IRIS Recognition Through Improvement of Feature Extraction and Subset Selection

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 2, No. 1, June 2009
I. INTRODUCTION
B. Related works
The usage of iris patterns for personal identification began in the late 19th century; however, the major investigations on iris recognition started in the last decade. In [9], the iris signals were projected onto a bank of basis vectors derived by independent component analysis, and the resulting projection coefficients were quantized as features. A prototype was proposed in [10] to develop a 1D representation of the gray-level profiles of the iris. In [11], biometrics based on the concealment of random kernels and the iris images to synthesize a minimum average correlation energy filter for iris authentication were formulated. In [5, 6, 12], multiscale Gabor filters were used to demodulate the texture phase structure information of the iris. In [13], an iris segmentation method was proposed based on the crossed chord theorem and the collarette area.

[Figure: block diagram of the preprocessing stages — iris image → pupillary localization → localization of the collarette area → eyelids, eyelashes, and noise detection → normalization and isolation.]

In [14], iris recognition technology was applied in mobile phones. In [15], correlation filters were utilized to measure the consistency of iris images from the same eye. An interesting solution to defeat fake-iris attacks based on the Purkinje image was depicted in [16]. An iris image was decomposed in [17] into four levels by using the 2D Haar wavelet transform; the fourth-level high-frequency information was quantized to form an 87-bit code, and a modified competitive learning neural network (LVQ) was adopted for classification. In [18], a modification to the Hough transform was made to improve the iris segmentation, and an eyelid detection technique was used in which each eyelid was modeled as two straight lines. A matching method was implemented in [19], and its performance was evaluated on a large dataset. In [20], a personal identification method based on iris texture analysis was described. An algorithm was proposed for iris recognition by characterizing the key local variations in [21]. A phase-based iris recognition algorithm was proposed in [22], where the phase components of the 2D discrete Fourier transform of the iris image were used with a simple matching strategy. In [23], a system was proposed that is capable of a detailed analysis of eye region images in terms of the position of the iris, the degree of eyelid opening, and the shape, complexity, and texture of the eyelids. A directional filter bank was used in [24] to decompose an iris image into eight directional sub-band outputs; the normalized directional energy was extracted as features, and iris matching was performed by computing the Euclidean distance between the input and template feature vectors. In [25], a genetic algorithm was applied to develop a technique to improve the performance of an iris recognition system. In [26], the global texture information of iris images was used for ethnic classification. The iris representation method of [10] was further developed in [27] to use different similarity measures for matching. The iris recognition algorithm described in [28] exploited integro-differential operators to detect the inner and outer boundaries of the iris, Gabor filters to extract the unique binary vectors constituting the iris code, and a statistical matcher that analyzes the average Hamming distance between two codes. In [29], the performance of an iris-based identification system was analyzed at the matching score level. A biometric system that achieves the offline verification of certified and cryptographically secured documents, called "EyeCerts", was reported in [30] for the identification of people. An iris recognition method based on the 2D wavelet transform for feature extraction and direct linear discriminant analysis for feature reduction, with SVM techniques as iris pattern classifiers, was used in [31]. In [32], an iris recognition method was proposed based on the histogram of local binary patterns to represent the iris texture and a graph matching algorithm for structural classification. An elastic iris blob matching algorithm was proposed to overcome the limitations of local feature based classifiers (LFC) in [33], and in order to recognize various iris images properly, a novel cascading scheme was used to combine the LFC and an iris blob matcher. In [34], the authors described the
determination of eye blink states by tracking the iris and the eyelids. An intensity-based iris recognition system was presented in [35], where the system exploited the local intensity changes of the visible iris textures. In [36], the iris characteristics were analyzed by using the analytic image constructed from the original image and its Hilbert transform. The binary emergent frequency functions were sampled to form a feature vector, and the Hamming distance was deployed for matching [37, 38]. In [39], the Hough transform was applied for iris localization, a Laplacian pyramid was used to represent the distinctive spatial characteristics of the human iris, and a modified normalized correlation was applied for the matching process. In [40], various techniques were suggested to solve the occlusion problem caused by the eyelids and the eyelashes.

From the above discussion, we may divide the existing iris recognition approaches roughly into four major categories based on the feature extraction scheme, namely the phase-based methods [5, 6, 12, 22], the zero-crossing representation methods [10, 27], the texture-analysis-based methods [18, 21, 24, 28, 39, 41–43], and the intensity variation analysis methods [9, 21, 44]. Our proposed iris recognition scheme falls in the first category. It is a well-established fact that the usual two-dimensional tensor product wavelet bases are not optimal for representing images consisting of different regions of smoothly varying grey values separated by smooth boundaries. This issue is addressed by directional transforms such as contourlets, which have the property of preserving edges. The contourlet transform is an efficient directional multiresolution image representation which differs from the wavelet transform: it uses non-separable filter banks developed directly in the discrete form; thus it is a true 2D transform, and it overcomes the difficulty of exploring the geometry of digital images that arises from the discrete nature of the image data.

The remainder of this paper is organized as follows: Section 2 deals with iris image preprocessing. Section 3 describes the feature extraction method. Section 4 presents the feature subset selection and vector creation techniques. Section 5 shows our experimental results, and Section 6 concludes the paper.
II. IRIS IMAGE PREPROCESSING

First, we outline our approach, and then we describe further details in the following subsections. The iris is surrounded by various non-relevant regions such as the pupil, the sclera, the eyelids, and also by noise caused by the eyelashes, the eyebrows, the reflections, and the surrounding skin [9]. We need to remove this noise from the iris image to improve the iris recognition accuracy.

A. Iris / Pupil Localization
The iris is an annular portion of the eye situated between the pupil (inner boundary) and the sclera (outer boundary). Both the inner boundary and the outer boundary of a typical iris can be taken as approximate circles. However, the two circles are usually not concentric [20, 21].

B. Eyelids, Eyelashes, and noise detection
(i) Eyelids are isolated by first fitting a line to the upper and lower eyelids using the linear Hough transform. A second horizontal line is then drawn, which intersects with the first line at the iris edge that is closest to the pupil [45].
(ii) Separable eyelashes are detected using 1D Gabor filters, since a low output value is produced by the convolution of a separable eyelash with the Gaussian smoothing function. Thus, if a resultant point is smaller than a threshold, this point is noted as belonging to an eyelash.
(iii) Multiple eyelashes are detected using the variance of intensity: if the values in a small window are lower than a threshold, the centre of the window is considered a point in an eyelash, as shown in Figure 3.

Figure 3: CASIA iris images (a), (b), and (c) with the detected collarette area, and the corresponding images (d), (e), and (f) after the detection of noise, eyelids, and eyelashes.
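The window-based eyelash rule of Section II-B (iii) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window size, the darkness threshold, the variance threshold, and the combination of the two tests are all assumed values chosen only to make the idea concrete.

```python
import numpy as np

def eyelash_mask(image: np.ndarray, win: int = 5,
                 var_thresh: float = 25.0, dark_thresh: float = 60.0) -> np.ndarray:
    """Mark window centres whose neighbourhood is both dark and nearly
    uniform, an interpretation of the variance-of-intensity rule for
    clustered (multiple) eyelashes. Thresholds are illustrative."""
    h, w = image.shape
    half = win // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(half, h - half):
        for x in range(half, w - half):
            window = image[y - half:y + half + 1, x - half:x + half + 1]
            # low variance alone also fires on uniform iris texture,
            # so we additionally require the window to be dark
            if window.var() < var_thresh and window.mean() < dark_thresh:
                mask[y, x] = True
    return mask
```

Points flagged by the mask would then be excluded from the normalized iris pattern along with the eyelid regions.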
C. Iris Normalization
We use the rubber sheet model [12] for the normalization of the isolated collarette area. The center of the pupil is considered as the reference point, and radial vectors are passed through the collarette region. We select a number of data points along each radial line; this number is defined as the radial resolution, while the number of radial lines going around the collarette region is considered the angular resolution. A constant number of points is chosen along each radial line, so that a constant number of radial data points is taken irrespective of how narrow or wide the radius is at a particular angle. We build the normalized pattern by backtracking to find the Cartesian coordinates of data points from the radial and angular positions in the normalized pattern [3, 5, 6]. The normalization approach produces a 2D array whose horizontal dimension equals the angular resolution and whose vertical dimension equals the radial resolution, formed from the circular-shaped collarette area (see Figure 4(I)). In order to prevent non-iris region data from corrupting the normalized representation, the data points which occur along the pupil border or the iris border are discarded. Figure 4(II)(a), (b) shows the normalized images after the isolation of the collarette area.
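The rubber sheet unwrapping described above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: it assumes concentric circular boundaries (the paper notes that the pupil and iris circles are generally not concentric) and nearest-neighbour sampling, and the function and parameter names are ours.

```python
import numpy as np

def rubber_sheet(image, pupil_xy, inner_r, outer_r,
                 radial_res=16, angular_res=128):
    """Daugman-style rubber sheet model: sample `radial_res` points along
    `angular_res` radial lines between an inner and an outer circular
    boundary, yielding a radial_res x angular_res rectangular pattern."""
    cx, cy = pupil_xy
    thetas = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(inner_r, outer_r, radial_res)
    out = np.zeros((radial_res, angular_res), dtype=image.dtype)
    for j, t in enumerate(thetas):
        for i, r in enumerate(radii):
            x = int(round(cx + r * np.cos(t)))   # nearest-neighbour sampling
            y = int(round(cy + r * np.sin(t)))
            out[i, j] = image[y, x]
    return out
```

Each row of the output then corresponds to one radius between the pupil and the collarette boundary, and each column to one angular position.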
III. FEATURE EXTRACTION AND ENCODING

Only the significant features of the iris must be encoded so that comparisons between templates can be made. Gabor filters and wavelets are the well-known techniques in texture analysis [5, 20, 42, 46, 47]. In the wavelet family, the Haar wavelet [48] was applied by Jafer Ali to the iris image, and an 87-length binary feature vector was extracted. The major drawback of wavelets in two dimensions is their limited ability to capture directional information. The contourlet transform is a new extension of the wavelet transform in two dimensions using multiscale and directional filter banks.

Figure 4: (I) shows the normalization procedure on the CASIA dataset; (II) (a), (b) show the normalized images of the isolated collarette regions.
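For comparison with the contourlet construction that follows, one level of the 2D Haar decomposition mentioned above can be sketched as below. This is the generic textbook construction, not the 87-bit encoder of [17]: it only shows that a separable wavelet yields one approximation band and three fixed detail orientations.

```python
import numpy as np

def haar2d(x):
    """One level of the separable 2D Haar transform: returns the
    approximation (LL) and the three detail sub-bands (LH, HL, HH).
    Input side lengths must be even."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

The three fixed detail bands (horizontal, vertical, diagonal) illustrate the "only three directions" limitation that motivates the directional filter banks of the contourlet transform.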
A. Contourlet Transform
The contourlet transform (CT) allows for a different and flexible number of directions at each scale. CT is constructed by combining two distinct decomposition stages [49]: a multiscale decomposition followed by a directional decomposition. The grouping of wavelet coefficients suggests that one can obtain a sparse image expansion by applying a multiscale transform followed by a local directional transform, which gathers the nearby basis functions at the same scale into linear structures. In essence, a wavelet-like transform is used for edge (point) detection, and a local directional transform is then used for the detection of contour segments. A double filter bank structure is used in CT, in which the Laplacian pyramid (LP) [50] captures the point discontinuities, and a directional filter bank (DFB) [51] links point discontinuities into linear structures. The combination of this double filter bank is named the pyramidal directional filter bank (PDFB), as shown in Figure 5.

Figure 5: Two-level contourlet decomposition [49].
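The LP stage of the PDFB can be illustrated with a minimal sketch. The Gaussian blur, the zero-order up/downsampling, and the sigma value are simplifying assumptions, not the filters used in the paper; the point is only that each level stores a band-pass residual, so the decomposition is exactly invertible.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lp_decompose(image, levels=2, sigma=1.0):
    """Laplacian pyramid: each level stores the residual between the
    current image and an upsampled coarse approximation."""
    residuals, cur = [], image.astype(float)
    for _ in range(levels):
        coarse = gaussian_filter(cur, sigma)[::2, ::2]           # blur + downsample
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)  # zero-order upsample
        up = up[:cur.shape[0], :cur.shape[1]]
        residuals.append(cur - up)                               # band-pass residual
        cur = coarse
    return residuals, cur                                        # details + coarsest level

def lp_reconstruct(residuals, coarse):
    """Invert lp_decompose exactly by re-adding each residual."""
    cur = coarse
    for res in reversed(residuals):
        up = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1)[:res.shape[0], :res.shape[1]]
        cur = res + up
    return cur
```

In the full contourlet transform, each band-pass residual produced here would then be fed to the DFB and split into directional sub-bands.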
B. Benefits of the Contourlet Transform in Iris Feature Extraction
To capture smooth contours in images, the representation should contain basis functions with a variety of shapes, in particular with different aspect ratios. A major challenge in capturing geometry and directionality in images comes from the discrete nature of the data: the input is typically a sampled image defined on a rectangular grid, and because of pixelization, smooth contours are not obvious on sampled images. For these reasons, unlike other transforms that were initially developed in the continuous domain and then discretized for sampled data, this approach starts with a discrete-domain construction and then investigates its convergence to an expansion in the continuous domain. The construction results in a flexible multiresolution, local, and directional image expansion using contour segments. Directionality and anisotropy are the important characteristics of the contourlet transform. Directionality means having basis functions in many directions, whereas wavelets offer only three. The anisotropy property means that the basis functions appear at various aspect ratios, whereas wavelets are separable functions and thus their aspect ratio is one. Due to these properties, CT can efficiently handle 2D singularities, i.e., edges in an image. This property is utilized in this paper for extracting directional features with various pyramidal and directional filters.
C. The Best Bits in an Iris Code
Biometric systems apply filters to iris images to extract information about the iris texture. Daugman's approach maps the filter output to a binary iris code. The fractional Hamming distance between two iris codes is computed, and decisions about the identity of a person are based on the computed distance. The fractional Hamming distance weights all bits in an iris code equally. However, not all bits in an iris code are equally useful. For a given iris image, a bit in its corresponding iris code is defined as "fragile" if there is any substantial probability of it ending up a 0 for some images of the iris and a 1 for other images of the same iris. According to the per-row percentages of fragile bits reported in [52], the rows in the middle of the iris code (rows 5 through 12) are the most consistent (see Figure 6).

IV. FEATURE SUBSET SELECTION AND VECTOR CREATION IN THE PROPOSED METHODS

It is necessary to select the most representative feature sequence from a feature set of relatively high dimension [53]. In this paper, we propose several methods to select the optimal set of features which provide the discriminating information to classify the iris patterns. In this section we describe the methods proposed for optimal feature selection and vector creation. According to the method mentioned in Section III-A, we concluded that the middle band of the normalized iris images carries more important information and is less affected by fragile bits. Therefore, for the iris feature vector based on the contourlet transform, the rows between 5 and 12 of the normalized iris image are decomposed into eight directional sub-band outputs using the DFB at three different scales, and their coefficients are extracted.

A. Gray Level Co-occurrence Matrix (GLCM)
In this method we use the Grey Level Co-occurrence Matrix (GLCM) [54]. The technique uses the GLCM of an image, which provides a simple approach to capture the spatial relationship between two points in a texture pattern. It is calculated from the normalized iris image using pixels as primary information. The GLCM is a square matrix of size G × G, where G is the number of gray levels in the image. Each element in the GLCM is an estimate of the joint probability of a pair of pixel intensities at predetermined relative positions in the image. The (i, j)-th element of the matrix is generated by finding the probability that, if the pixel location (x, y) has gray level I_i, then the pixel location (x + dx, y + dy) has gray level I_j. The offsets dx and dy are defined by considering various scales and orientations. Various textural features have been defined based on the work done by Haralick [56]. These features are derived by weighting each of the co-occurrence matrix values and then summing these weighted values to form the feature value. The specific features considered in this research are defined as follows:
1) Energy = \sum_i \sum_j P(i, j)^2

2) Contrast = \sum_{n=0}^{N_g - 1} n^2 \Big[ \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i, j),\ |i - j| = n \Big]

3) Correlation = \Big( \sum_i \sum_j (ij) P(i, j) - \mu_x \mu_y \Big) / (\sigma_x \sigma_y)

4) Homogeneity = \sum_i \sum_j P(i, j) / (1 + (i - j)^2)

6) Dissimilarity = \sum_i \sum_j |i - j| \cdot P(i, j)
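A minimal sketch of the GLCM construction and of four of the features above (correlation is omitted for brevity). The offset, the number of levels, and the function names are illustrative; the input is assumed to be an integer image already quantized to values in [0, levels).

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey level co-occurrence matrix for offset (dx, dy), normalized
    to a joint probability distribution P(i, j)."""
    h, w = image.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_features(p):
    """Energy, contrast, homogeneity, and dissimilarity of a normalized
    GLCM; contrast uses the equivalent form sum P(i,j) * (i-j)^2."""
    i, j = np.indices(p.shape)
    return {
        "energy": np.sum(p ** 2),
        "contrast": np.sum(p * (i - j) ** 2),
        "homogeneity": np.sum(p / (1.0 + (i - j) ** 2)),
        "dissimilarity": np.sum(p * np.abs(i - j)),
    }
```

Computing these features over several offsets (scales and orientations) and concatenating them yields one texture feature vector per normalized iris image.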
Positive Local Maximum = 1
Negative Local Maximum = -1
Other = 0
The feature vector in our method using ICA has 1100 elements, similar to PCA.

D. Feature Vector in the Coefficient Domain
One of the most common methods for creating a feature vector is to use the coefficients extracted by various transformations such as Gabor filters, wavelets, etc.; Daugman made use of this technique in his method. In our proposed method in this section, the feature vector is created from the coefficients extracted at level 2 of the contourlet transform. Techniques for decreasing the vector dimensions are also used.

1) Binary vector creation with coefficients: as stated in the previous section, the level 2 sub-bands are extracted and, according to the following rule, converted into binary form:

If Coeff(i) >= 0 then NewCoeff(i) = 1
Else NewCoeff(i) = 0

The Hamming distance between the vectors of the generated coefficients is then calculated. The intra-class distances range roughly from 0 to 0.5, and the inter-class distances between 0.45 and 0.6. In total, 192699 inter-class comparisons and 1679 intra-class comparisons were carried out. Figure 7 shows the inter-class and intra-class distributions. In implementing this method, we used the point 0.42 as the inter-class/intra-class separation point.

Figure 8: binary chromosome representation — 111110000001110…0000111100011110000 (the first feature is selected for the classifier; the 15th feature is not).
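The binarization rule and the Hamming distance matching above can be sketched directly; the function names are ours.

```python
import numpy as np

def binarize(coeffs):
    """NewCoeff(i) = 1 if Coeff(i) >= 0, else 0, as in the rule above."""
    return (np.asarray(coeffs) >= 0).astype(np.uint8)

def hamming(a, b):
    """Fractional Hamming distance between two equal-length binary codes."""
    a, b = np.asarray(a), np.asarray(b)
    return np.count_nonzero(a != b) / a.size
```

A pair of codes would then be declared the same class when their distance falls below the separation point (0.42 in our experiments).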
only 48 elements.

The average absolute deviation (AAD) of each filtered image is computed as

AAD = (1/N) \sum_{x, y} | f(x, y) - m |

where N is the number of pixels in the image, m is the mean of the image, and f(x, y) is the value at point (x, y). The AAD feature is a statistic similar to the variance, but experimental results show that the former gives slightly better performance than the latter. The average absolute deviation of each filtered image constitutes the components of our feature vector. These features are arranged to form a 1D feature vector of length 1280 for each input image (160 elements for each sub-band in level 3 of the contourlet transform).

3) Genetic Algorithm (GA): optimal feature subset selection with the aid of a genetic algorithm is studied in this section. For creating the iris feature vector we use the level 2 binary coefficients, and by using a GA we try to reduce the dimensions of the iris feature vector. In this method, we use MOGA [53] to select the optimal set of features which provide the discriminating information to classify the iris patterns. In this subsection, we present the choice of a representation for encoding the candidate solutions to be manipulated by the GAs; each individual in the population represents a candidate solution to the feature subset selection problem. If m is the total number of features available to represent the patterns to be classified (m = 600 in our case), the individual is represented by a binary vector of dimension m. If a bit is 1, the corresponding feature is selected; otherwise it is not selected (see Figure 8). This is the simplest and most straightforward representation scheme [53]. In this work, we use roulette wheel selection [53], one of the most common and easiest to implement selection mechanisms. Usually, a fitness value is associated with each chromosome; for example, in a minimization problem, a lower fitness value means that the chromosome or solution is more optimized for the problem, while a higher fitness value indicates a less optimized chromosome. Our problem consists of optimizing two objectives: (i) minimization of the number of features, and (ii) minimization of the recognition error rate of the classifier. Therefore, we deal with a multi-objective optimization problem. Table I lists the parameters used in the genetic algorithm.

Table II: comparison of the proposed methods (feature vector length, classifier, correct classification (%), feature extraction time):
GA: 600, SVM, 97.81, 20.3
Other methods:
AAD: 1280, SVM, 92.63, 20.3
PCA: 1100, SVM, 90, 20.3
ICA: 1100, SVM, 85.9, 20.3
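The binary-chromosome representation and the roulette wheel selection described above can be sketched as follows. This is a generic single-objective illustration, not the MOGA of [53]: the fitness inversion for the minimization objectives is only indicated, and the function names are ours.

```python
import random

def random_chromosome(m, rng=random):
    """A binary feature mask of length m (m = 600 in our setting):
    bit k = 1 means feature k is selected for the classifier."""
    return [rng.randint(0, 1) for _ in range(m)]

def roulette_select(population, fitness, rng=random):
    """Roulette wheel selection: draw a chromosome with probability
    proportional to its fitness. For minimization objectives the raw
    objective f would first be inverted, e.g. fitness = 1 / (1 + f)."""
    total = sum(fitness)
    pick = rng.uniform(0.0, total)
    acc = 0.0
    for chrom, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return chrom
    return population[-1]
```

Crossover and mutation over these bit strings, with the classifier error and the number of selected bits driving the fitness, complete the feature subset search.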
V. EXPERIMENTAL RESULTS

To evaluate the performance of the proposed system, we use the CASIA iris image database (version 1) [7], created by the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, which consists of 108 subjects with 7 samples each. The images of the CASIA database are mainly from Asians. For each iris class, the images were captured in two different sessions, with an interval of one month between the sessions. There is no overlap between the training and test samples. In our experiments, a three-level contourlet decomposition is adopted. The experiments were performed in Matlab 7.0. The normalized iris image obtained from the localized iris image is segmented by Daugman's method. For the quincunx filter banks in the DFB stage, we used the filters designed by A. Cohen, I. Daubechies, and J.-C. Feauveau. In Table II we compare our proposed methods with some other well-known methods from three viewpoints: feature vector length, correct classification percentage, and feature extraction time. We also changed the classifier of the well-known methods to SVM for a fairer comparison.

A. Discussion
• Using the GLCM yields a feature vector with appropriate dimensions and acceptable classification accuracy.
• The highest classification accuracy is achieved with the GA.
• The feature vector in the coefficient domain and the PCA, ICA, and AAD methods are easily implemented.
• Using the global and local properties in isolation does not give good results, while a combination of the two is noise resistant and leads to good results.
• NLAC has appropriate dimensions for the feature vector and acceptable classification accuracy.
• All the methods proposed in this paper save time in the processing and extraction of features in comparison with the known existing methods.
[7] CASIA, "Chinese Academy of Sciences – Institute of Automation," database of 756 grayscale eye images, http://www.sinobiometrics.com, Version 1.0, 2003.
[8] X. He and P. Shi, "An efficient iris segmentation method for recognition," in Proceedings of the 3rd International Conference on Advances in Pattern Recognition (ICAPR '05), vol. 3687 of Lecture Notes in Computer Science, pp. 120–126, Springer, Bath, UK, August 2005.
[9] K. Bae, S. Noh, and J. Kim, "Iris feature extraction using independent component analysis," in Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '03), vol. 2688, pp. 1059–1060, Guildford, UK, June 2003.
[10] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185–1188, 1998.
[11] S. C. Chong, A. B. J. Teoh, and D. C. L. Ngo, "Iris authentication using privatized advanced correlation filter," in Proceedings of the International Conference on Advances in Biometrics (ICB '06), vol. 3832 of Lecture Notes in Computer Science, pp. 382–388, Springer, Hong Kong, January 2006.
[12] J. Daugman, "Statistical richness of visual phase information: update on recognizing persons by iris patterns," International Journal of Computer Vision, vol. 45, no. 1, pp. 25–38, 2001.
[13] X. He and P. Shi, "An efficient iris segmentation method for recognition," in Proceedings of the 3rd International Conference on Advances in Pattern Recognition (ICAPR '05), vol. 3687 of Lecture Notes in Computer Science, pp. 120–126, Springer, Bath, UK, August 2005.
[14] D. S. Jeong, H.-A. Park, K. R. Park, and J. Kim, "Iris recognition in mobile phone based on adaptive Gabor filter," in Proceedings of the International Conference on Advances in Biometrics (ICB '06), vol. 3832 of Lecture Notes in Computer Science, pp. 457–463, Springer, Hong Kong, January 2006.
[15] B. V. K. Vijaya Kumar, C. Xie, and J. Thornton, "Iris verification using correlation filters," in Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '03), vol. 2688 of Lecture Notes in Computer Science, pp. 697–705, Guildford, UK, June 2003.
[16] E. C. Lee, K. R. Park, and J. Kim, "Fake iris detection by using Purkinje image," in Proceedings of the International Conference on Advances in Biometrics (ICB '06), vol. 3832 of Lecture Notes in Computer Science, pp. 397–403, Springer, Hong Kong, January 2006.
[17] S. Lim, K. Lee, O. Byeon, and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier," ETRI Journal, vol. 23, no. 2, pp. 61–70, 2001.
[18] X. Liu, K. W. Bowyer, and P. J. Flynn, "Experiments with an improved iris segmentation algorithm," in Proceedings of the 4th IEEE Workshop on Automatic Identification Advanced Technologies (AutoID '05), pp. 118–123, Buffalo, NY, USA, October 2005.
[19] X. Liu, K. W. Bowyer, and P. J. Flynn, "Experimental evaluation of iris recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), vol. 3, pp. 158–165, San Diego, Calif, USA, June 2005.
[20] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Personal identification based on iris texture analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1519–1533, 2003.
[21] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.
[22] K. Miyazawa, K. Ito, T. Aoki, K. Kobayashi, and H. Nakajima, "A phase-based iris recognition algorithm," in Proceedings of the International Conference on Advances in Biometrics (ICB '06), vol. 3832 of Lecture Notes in Computer Science, pp. 356–365, Springer, Hong Kong, January 2006.
[23] T. Moriyama, T. Kanade, J. Xiao, and J. F. Cohn, "Meticulously detailed eye region model and its application to analysis of facial images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 738–752, 2006.
[24] C.-H. Park, J.-J. Lee, M. J. T. Smith, and K.-H. Park, "Iris-based personal authentication using a normalized directional energy feature," in Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '03), vol. 2688, pp. 224–232, Guildford, UK, June 2003.
[25] M. B. Pereira and A. C. P. Veiga, "Application of genetic algorithms to improve the reliability of an iris recognition system," in Proceedings of the IEEE Workshop on Machine Learning for Signal Processing (MLSP '05), pp. 159–164, Mystic, Conn, USA, September 2005.
[26] X. Qiu, Z. Sun, and T. Tan, "Global texture analysis of iris images for ethnic classification," in Proceedings of the International Conference on Advances in Biometrics (ICB '06), vol. 3832 of Lecture Notes in Computer Science, pp. 411–418, Springer, Hong Kong, January 2006.
[27] C. Sanchez-Avila, R. Sanchez-Reillo, and D. de Martin-Roche, "Iris-based biometric recognition using dyadic wavelet transform," IEEE Aerospace and Electronic Systems Magazine, vol. 17, no. 10, pp. 3–6, 2002.
[28] R. Sanchez-Reillo and C. Sanchez-Avila, "Iris recognition with low template size," in Proceedings of the 3rd International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '01), pp. 324–329, Halmstad, Sweden, June 2001.
[29] N. A. Schmid, M. V. Ketkar, H. Singh, and B. Cukic, "Performance analysis of iris-based identification system at the matching score level," IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 154–168, 2006.
[30] D. Schonberg and D. Kirovski, "EyeCerts," IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 144–153, 2006.
[31] B. Son, H. Won, G. Kee, and Y. Lee, "Discriminant iris feature and support vector machines for iris recognition," in Proceedings of the International Conference on Image Processing (ICIP '04), vol. 2, pp. 865–868, Singapore, October 2004.
[32] Z. Sun, T. Tan, and X. Qiu, "Graph matching iris image blocks with local binary pattern," in Proceedings of the International Conference on Advances in Biometrics (ICB '06), vol. 3832 of Lecture Notes in Computer Science, pp. 366–372, Springer, Hong Kong, January 2006.
[33] Z. Sun, Y. Wang, T. Tan, and J. Cui, "Improving iris recognition accuracy via cascaded classifiers," IEEE Transactions on Systems, Man and Cybernetics C, vol. 35, no. 3, pp. 435–441, 2005.
[34] H. Tan and Y.-J. Zhang, "Detecting eye blink states by tracking iris and eyelids," Pattern Recognition Letters, vol. 27, no. 6, pp. 667–675, 2006.
[35] Q. M. Tieng and W. W. Boles, "Recognition of 2D object contours using the wavelet transform zero-crossing representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 8, pp. 910–916, 1997.
[36] C. Tisse, L. Martin, L. Torres, and M. Robert, "Person identification technique using human iris recognition," in Proceedings of the 15th International Conference on Vision Interface (VI '02), pp. 294–299, Calgary, Canada, May 2002.
[37] J. P. Havlicek, D. S. Harding, and A. C. Bovik, "The multicomponent AM-FM image representation," IEEE Transactions on Image Processing, vol. 5, no. 6, pp. 1094–1100, 1996.
[38] T. Tangsukson and J. P. Havlicek, "AM-FM image segmentation," in Proceedings of the International Conference on Image Processing (ICIP '00), vol. 2, pp. 104–107, Vancouver, Canada, September 2000.
[39] R. P. Wildes, J. C. Asmuth, G. L. Green, et al., "A machine vision system for iris recognition," Machine Vision and Applications, vol. 9, no. 1, pp. 1–8, 1996.
[40] A. Poursaberi and B. N. Araabi, "Iris recognition for partially occluded images: methodology and sensitivity analysis," EURASIP Journal on Advances in Signal Processing, vol. 2007, Article ID 36751, 12 pages, 2007.
[41] Y. Zhu, T. Tan, and Y. Wang, "Biometric personal identification based on iris patterns," in Proceedings of the 15th International Conference on Pattern Recognition (ICPR '00), vol. 2, pp. 801–804, Barcelona, Spain, September 2000.
[42] L. Ma, Y. Wang, and T. Tan, "Iris recognition based on multichannel Gabor filtering," in Proceedings of the 5th Asian Conference on Computer Vision (ACCV '02), vol. 1, pp. 279–283, Melbourne, Australia, January 2002.
[43] L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filters," in Proceedings of the 16th International Conference on Pattern Recognition (ICPR '02), vol. 2, pp. 414–417, Quebec City, Canada, August 2002.
[44] L. Ma, Personal identification based on iris recognition, Ph.D. dissertation, Institute of Automation, Chinese Academy of Sciences, Beijing, China, 2003.
[45] L. Masek, Recognition of human iris patterns for biometric identification, B.Eng. thesis, University of Western Australia, Perth, Australia, 2003.
[46] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30, January 2004.