
(IJCSIS) International Journal of Computer Science and Information Security,

Vol. 2, No.1, June 2009

Efficient Iris Recognition Through Improvement of Feature Extraction and Subset Selection

Amir Azizi
Islamic Azad University, Mashhad Branch
Mashhad, Iran
[email protected]

Hamid Reza Pourreza
Ferdowsi University of Mashhad
Mashhad, Iran
[email protected]

Abstract—The selection of the optimal feature subset and its classification has become an important issue in the field of iris recognition. In this paper we propose several methods for iris feature subset selection and feature vector creation. A deterministic feature sequence is extracted from the iris image using the contourlet transform, which captures the intrinsic geometrical structure of the iris image by decomposing it into a set of directional sub-bands with texture details captured at different orientations and scales. To reduce the feature vector dimensions, we extract only the significant bits and information from the normalized iris images and ignore the fragile bits. Finally, we use an SVM (Support Vector Machine) classifier to estimate the identification accuracy of the proposed system. Experimental results show that most of the proposed methods reduce processing time and increase classification accuracy, while producing iris feature vectors that are much shorter than those of other methods.

Keywords—Biometrics; Iris Recognition; Contourlet; Support Vector Machine (SVM)

I. INTRODUCTION

There has been a rapid increase in the need for accurate and reliable personal identification infrastructure in recent years, and biometrics has become an important technology for security. Iris recognition has been considered one of the most reliable biometric technologies in recent years [1, 2]. The human iris is an important biometric candidate that can be used to differentiate individuals. For systems based on high-quality imaging, a human iris has an extraordinary amount of unique detail, as illustrated in Figure 1. Features extracted from the human iris can be used to identify individuals, even among genetically identical twins [3]. Iris-based recognition systems can be noninvasive to the user, since the iris is an internal organ that is nonetheless externally visible, which is of great importance for real-time applications [4]. Based on the technology developed by Daugman [3, 5, 6], iris scans have been used in several international airports for the rapid processing through immigration of passengers who have pre-registered their iris images.

Figure 1. Samples of iris images from CASIA [7]

A. Proposed Method: The Main Steps

Figure 2 illustrates the main steps of our proposed approach. First, the image preprocessing step performs the localization of the pupil, detects the iris boundary, and isolates the collarette region, which is regarded as one of the most important areas of the complex iris pattern. The collarette region is less sensitive to pupil dilation and is usually unaffected by the eyelids and the eyelashes [8]. We also detect the eyelids and the eyelashes, which are the main sources of possible occlusion. In order to achieve invariance to translation and scale, the isolated annular collarette area is transformed to a rectangular block of fixed dimensions. The discriminating features are extracted from the transformed image, and the extracted features are used to train the classifiers. The optimal feature subset is selected using several methods to increase the matching accuracy based on the recognition performance of the classifiers.

Figure 2: Flow diagram of the proposed iris recognition scheme. Iris image preprocessing (pupillary localization, localization of the iris area, collarette isolation, eyelid/eyelash and noise detection, and normalization) is followed by feature extraction, feature selection, and iris pattern classification.

B. Related Works

The usage of iris patterns for personal identification began in the late 19th century; however, the major investigations into iris recognition started in the last decade. In [9], the iris signals were projected onto a bank of basis vectors derived by independent component analysis, and the resulting projection coefficients were quantized as features. A prototype was proposed in [10] to develop a 1D representation of the gray-level profiles of the iris. In [11], biometrics based on the concealment of random kernels and the iris images to synthesize a minimum average correlation energy filter for iris authentication were formulated. In [5, 6, 12], multiscale Gabor filters were used to demodulate the texture phase structure information of the iris. In [13], an iris segmentation method was proposed based on the crossed chord theorem and the collarette area. In [14], iris recognition technology was applied to mobile phones. In [15], correlation filters were utilized to measure the consistency of iris images from the same eye. An interesting solution to defeat fake-iris attacks based on the Purkinje image was depicted in [16]. An iris image was decomposed in [17] into four levels using the 2D Haar wavelet transform, the fourth-level high-frequency information was quantized to form an 87-bit code, and a modified competitive learning neural network (LVQ) was adopted for classification. In [18], a modification to the Hough transform was made to improve the iris segmentation, and an eyelid detection technique was used in which each eyelid was modeled as two straight lines. A matching method was implemented in [19], and its performance was evaluated on a large dataset. In [20], a personal identification method based on iris texture analysis was described. An algorithm was proposed for iris recognition by characterizing the key local variations in [21]. A phase-based iris recognition algorithm was proposed in [22], where the phase components of the 2D discrete Fourier transform of the iris image were used with a simple matching strategy. In [23], a system was proposed that is capable of a detailed analysis of eye region images in terms of the position of the iris, the degree of eyelid opening, and the shape, complexity, and texture of the eyelids. A directional filter bank was used in [24] to decompose an iris image into eight directional sub-band outputs; the normalized directional energy was extracted as features, and iris matching was performed by computing the Euclidean distance between the input and the template feature vectors. In [25], a genetic algorithm was applied to develop a technique that improves the performance of an iris recognition system. In [26], the global texture information of iris images was used for ethnic classification. The iris representation method of [10] was further developed in [27] to use different similarity measures for matching. The iris recognition algorithm described in [28] exploited integro-differential operators to detect the inner and outer boundaries of the iris, Gabor filters to extract the unique binary vectors constituting the iris code, and a statistical matcher that analyzes the average Hamming distance between two codes. In [29], the performance of an iris-based identification system was analyzed at the matching score level. A biometric system that achieves the offline verification of certified and cryptographically secured documents, called "EyeCerts", was reported in [30] for the identification of people. An iris recognition method was used in [31] based on the 2D wavelet transform for feature extraction and direct linear discriminant analysis for feature reduction, with SVM techniques as iris pattern classifiers. In [32], an iris recognition method was proposed based on the histogram of local binary patterns to represent the iris texture and a graph matching algorithm for structural classification. An elastic iris blob matching algorithm was proposed to overcome the limitations of local feature based classifiers (LFC) in [33], and in order to recognize various iris images properly, a novel cascading scheme was used to combine the LFC and an iris blob matcher.

In [34], the authors described the determination of eye blink states by tracking the iris and the eyelids. An intensity-based iris recognition system was presented in [35], where the system exploited the local intensity changes of the visible iris textures. In [36], the iris characteristics were analyzed using the analytic image constructed from the original image and its Hilbert transform. The binary emergent frequency functions were sampled to form a feature vector, and the Hamming distance was deployed for matching [37, 38]. In [39], the Hough transform was applied for iris localization, a Laplacian pyramid was used to represent the distinctive spatial characteristics of the human iris, and a modified normalized correlation was applied for the matching process. In [40], various techniques were suggested to solve the occlusion problem caused by the eyelids and the eyelashes. From the above discussion, we may divide the existing iris recognition approaches roughly into four major categories based on the feature extraction scheme, namely the phase-based methods [5, 6, 12, 22], the zero-crossing representation methods [10, 27], the texture-analysis-based methods [18, 21, 24, 28, 39, 41-43], and the intensity-variation-analysis methods [9, 21, 44]. Our proposed iris recognition scheme falls into the first category. It is a well-established fact that the usual two-dimensional tensor-product wavelet bases are not optimal for representing images consisting of different regions of smoothly varying grey values separated by smooth boundaries. This issue is addressed by directional transforms such as contourlets, which have the property of preserving edges. The contourlet transform is an efficient directional multiresolution image representation which differs from the wavelet transform: it uses non-separable filter banks developed directly in the discrete domain; thus it is a true 2D transform, and it overcomes the difficulty in exploring the geometry of digital images caused by the discrete nature of the image data. The remainder of this paper is organized as follows: Section 2 deals with iris image preprocessing; Section 3 discusses the feature extraction method; Section 4 presents the feature subset selection and vector creation techniques; Section 5 shows our experimental results; and finally Section 6 concludes the paper.

II. IRIS IMAGE PREPROCESSING

First, we outline our approach, and then we describe further details in the following subsections. The iris is surrounded by various non-relevant regions such as the pupil, the sclera, the eyelids, and also noise caused by the eyelashes, the eyebrows, the reflections, and the surrounding skin [9]. We need to remove this noise from the iris image to improve the iris recognition accuracy.

A. Iris / Pupil Localization

The iris is an annular portion of the eye situated between the pupil (inner boundary) and the sclera (outer boundary). Both the inner boundary and the outer boundary of a typical iris can be taken as approximate circles. However, the two circles are usually not concentric [20, 21].

B. Eyelids, Eyelashes, and Noise Detection

(i) Eyelids are isolated by first fitting a line to the upper and lower eyelids using the linear Hough transform. A second horizontal line is then drawn, which intersects with the first line at the iris edge that is closest to the pupil [45].
(ii) Separable eyelashes are detected using 1D Gabor filters, since a low output value is produced by the convolution of a separable eyelash with the Gaussian smoothing function. Thus, if a resultant point is smaller than a threshold, this point is noted as belonging to an eyelash.
(iii) Multiple eyelashes are detected using the variance of intensity: if the values in a small window are lower than a threshold, the centre of the window is considered as a point in an eyelash, as shown in Figure 3.

Figure 3: CASIA iris images (a), (b), and (c) with the detected collarette area, and the corresponding images (d), (e), and (f) after detection of noise, eyelids, and eyelashes.

C. Iris Normalization

We use the rubber sheet model [12] for the normalization of the isolated collarette area. The center of the pupil is considered as the reference point, and radial vectors are passed through the collarette region. We select a number of data points along each radial line, which defines the radial resolution, and the number of radial lines going around the collarette region is considered as the angular resolution. A constant number of points is chosen along each radial line so that a constant number of radial data points is taken, irrespective of how narrow or wide the radius is at a particular angle. We build the normalized pattern by backtracking to find the Cartesian coordinates of data points from the radial and angular positions in the normalized pattern [3, 5, 6]. The normalization approach produces a 2D array whose horizontal dimension is the angular resolution and whose vertical dimension is the radial resolution, formed from the circular-shaped collarette area (see Figure 4(I)). In order to prevent non-iris region data from corrupting the normalized representation, the data points which occur along the pupil border or the iris border are discarded. Figure 4(II)(a), (b) shows the normalized images after the isolation of the collarette area.

Figure 4: (I) shows the normalization procedure on the CASIA dataset (the black portion represents the region of interest of the unwrapped iris image; the white region denotes noise); (II) (a), (b) show the normalized images of the isolated collarette regions.
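As an illustration of the rubber sheet remapping described above, the following Python sketch unwraps an annular region into a fixed-size rectangular block. The pupil centre, radii, and resolutions used below are hypothetical parameters for the example, not values taken from the paper.

```python
import numpy as np

def rubber_sheet_unwrap(image, cx, cy, r_inner, r_outer,
                        radial_res=20, angular_res=240):
    """Map the annulus between r_inner and r_outer (centred on the pupil
    at (cx, cy)) to a radial_res x angular_res rectangular block."""
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0, 1, radial_res)
    normalized = np.zeros((radial_res, angular_res), dtype=image.dtype)
    for j, theta in enumerate(thetas):
        for i, r in enumerate(radii):
            # interpolate linearly between the inner and outer boundary
            radius = r_inner + r * (r_outer - r_inner)
            x = int(round(cx + radius * np.cos(theta)))
            y = int(round(cy + radius * np.sin(theta)))
            # clamp to the image so border samples do not fall outside it
            x = int(np.clip(x, 0, image.shape[1] - 1))
            y = int(np.clip(y, 0, image.shape[0] - 1))
            normalized[i, j] = image[y, x]
    return normalized

# Example with a synthetic eye image and hypothetical localization results.
eye = np.random.randint(0, 256, (280, 320), dtype=np.uint8)
block = rubber_sheet_unwrap(eye, cx=160, cy=140, r_inner=40, r_outer=80)
print(block.shape)  # (20, 240): radial x angular resolution
```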
III. FEATURE EXTRACTION AND ENCODING

Only the significant features of the iris must be encoded so that comparisons between templates can be made. Gabor filters and wavelets are well-known techniques in texture analysis [5, 20, 42, 46, 47]. Within the wavelet family, the Haar wavelet [48] was applied by Jafar Ali to iris images, and an 87-bit binary feature vector was extracted. The major drawback of wavelets in two dimensions is their limited ability to capture directional information. The contourlet transform is a new extension of the wavelet transform in two dimensions that uses multiscale and directional filter banks. The feature representation should contain enough information to classify various irises and be less sensitive to noise. Moreover, with the most appropriate feature extraction we attempt to extract only the significant information; by reducing the feature vector dimensions, the processing is lessened while enough information is still supplied for iris feature vector classification.

A. Contourlet Transform

The contourlet transform (CT) allows for a different and flexible number of directions at each scale. CT is constructed by combining two distinct decomposition stages [49]: a multiscale decomposition followed by a directional decomposition. The grouping of wavelet coefficients suggests that one can obtain a sparse image expansion by applying a multiscale transform followed by a local directional transform, which gathers the nearby basis functions at the same scale into linear structures. In essence, a wavelet-like transform is used for edge (point) detection, and then a local directional transform is used for the detection of contour segments. A double filter bank structure is used in CT, in which the Laplacian pyramid (LP) [50] captures the point discontinuities and a directional filter bank (DFB) [51] links point discontinuities into linear structures. The combination of this double filter bank is named the pyramidal directional filter bank (PDFB), as shown in Figure 5.

Figure 5: Two-level contourlet decomposition [49]

B. Benefits of the Contourlet Transform in Iris Feature Extraction

To capture smooth contours in images, the representation should contain basis functions with a variety of shapes, in particular with different aspect ratios. A major challenge in capturing geometry and directionality in images comes from the discrete nature of the data: the input is typically sampled images defined on rectangular grids. Because of pixelization, the smooth contours of sampled images are not obvious. For these reasons, unlike other transforms that were initially developed in the continuous domain and then discretized for sampled data, the new approach starts with a discrete-domain construction and then investigates its convergence to an expansion in the continuous domain. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments. Directionality and anisotropy are the important characteristics of the contourlet transform. Directionality means having basis functions in many directions, in contrast to only three directions for wavelets. The anisotropy property means that the basis functions appear at various aspect ratios, whereas wavelets are separable functions and thus their aspect ratio equals one. Due to these properties, CT can efficiently handle 2D singularities, i.e., edges in an image. This property is utilized in this paper for extracting directional features with various pyramidal and directional filters.
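The sketch below illustrates only the multiscale (Laplacian pyramid) stage of the double filter bank described above, implemented with simple Gaussian filters; it is not the paper's implementation, which relies on an existing contourlet toolbox [61], and the DFB stage that would split each detail image into directional sub-bands is only indicated in the comments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(image, levels=3, sigma=1.0):
    """Multiscale stage of the PDFB: each level keeps the band-pass detail
    (point discontinuities); the DFB stage, not shown here, would then split
    every detail image into directional sub-bands."""
    current = image.astype(np.float64)
    details = []
    for _ in range(levels):
        low = gaussian_filter(current, sigma)
        coarse = low[::2, ::2]                                   # downsample by 2
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)  # crude upsampling
        up = up[:current.shape[0], :current.shape[1]]
        details.append(current - gaussian_filter(up, sigma))     # band-pass residual
        current = coarse
    return details, current                                      # detail levels + coarse approximation

normalized_iris = np.random.rand(20, 240)     # stand-in for a normalized collarette block
details, approx = laplacian_pyramid(normalized_iris, levels=3)
print([d.shape for d in details], approx.shape)
```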

C. The Best Bits in an Iris Code

Biometric systems apply filters to iris images to extract information about the iris texture. Daugman's approach maps the filter output to a binary iris code. The fractional Hamming distance between two iris codes is computed, and decisions about the identity of a person are based on the computed distance. The fractional Hamming distance weights all bits in an iris code equally. However, not all bits in an iris code are equally useful. For a given iris image, a bit in its corresponding iris code is defined as "fragile" if there is any substantial probability of it ending up a 0 for some images of the iris and a 1 for other images of the same iris. According to the percentages of fragile bits in each row of the iris code reported in [52], the rows in the middle of the iris code (rows 5 through 12) are the most consistent (see Figure 6).

Figure 6: Percentage of fragile bits in the iris pattern [52]

IV. FEATURE SUBSET SELECTION AND VECTOR CREATION IN THE PROPOSED METHODS

It is necessary to select the most representative feature sequence from a feature set with relatively high dimension [53]. In this paper, we propose several methods to select the optimal set of features which provide the discriminating information needed to classify the iris patterns. In this section we describe the several methods proposed for optimal feature selection and vector creation. Also, according to the observations discussed in Section III.C, we conclude that the middle band of the normalized iris image carries the most important information and is least affected by fragile bits; therefore, to build an iris feature vector based on the contourlet transform, rows 5 through 12 of the normalized iris image are decomposed into eight directional sub-band outputs using the DFB at three different scales, and their coefficients are extracted.
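A minimal sketch of this row selection is shown below; the band limits simply follow the "rows 5 through 12" observation quoted from [52], while the directional decomposition itself is assumed to come from an external contourlet/DFB implementation and is not reproduced here.

```python
import numpy as np

def select_consistent_band(normalized_iris, first_row=5, last_row=12):
    """Keep only the middle rows of the normalized iris image (1-indexed,
    inclusive), which [52] reports as the least fragile region."""
    return normalized_iris[first_row - 1:last_row, :]

normalized_iris = np.random.rand(20, 240)   # stand-in normalized image
band = select_consistent_band(normalized_iris)
print(band.shape)                           # (8, 240): the 8 most consistent rows
# 'band' would then be passed to a contourlet/DFB decomposition
# (e.g. the Contourlet Toolbox [61]) to obtain directional sub-bands.
```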
A. Gray Level Co-occurrence Matrix (GLCM)

In this method we use the Grey Level Co-occurrence Matrix (GLCM) [54]. The technique uses the GLCM of an image, which provides a simple approach to capture the spatial relationship between two points in a texture pattern. It is calculated from the normalized iris image using pixels as the primary information. The GLCM is a square matrix of size G x G, where G is the number of gray levels in the image. Each element of the GLCM is an estimate of the joint probability of a pair of pixel intensities at predetermined relative positions in the image. The (i, j)-th element of the matrix is generated by finding the probability that if the pixel location (x, y) has gray level I_i, then the pixel location (x+dx, y+dy) has gray level I_j. The offsets dx and dy are defined by considering various scales and orientations. Various textural features have been defined based on the work done by Haralick [56]. These features are derived by weighting each of the co-occurrence matrix values and then summing these weighted values to form the feature value. The specific features considered in this research are defined as follows:

1) Energy = \sum_i \sum_j P(i,j)^2
2) Contrast = \sum_{n=0}^{N_g - 1} n^2 \left[ \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j) \;\middle|\; |i-j| = n \right]
3) Correlation = \dfrac{\sum_i \sum_j (ij) P(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y}
4) Homogeneity = \sum_i \sum_j \dfrac{P(i,j)}{1 + (i-j)^2}
5) Autocorrelation = \sum_i \sum_j (ij) P(i,j)
6) Dissimilarity = \sum_i \sum_j |i-j| \, P(i,j)
7) Inertia = \sum_i \sum_j (i-j)^2 P(i,j)

Here \mu_x, \mu_y, \sigma_x, \sigma_y are the means and standard deviations along the x and y axes. For creating the iris feature vector we carried out the following steps (a sketch of the underlying computation is given after this subsection):

1) The normalized iris image (rows 5 through 12, the middle of the iris code) is decomposed up to level two (for each image, 2 sub-bands are created at level one and 4 sub-bands at level two).
2) The sub-bands of each level are put together; therefore, at level one a matrix with 4 x 120 elements and at level two a matrix with 16 x 120 elements is created. We name these matrices Matrix1 and Matrix2.
3) By putting together Matrix1 and Matrix2, a new matrix named Matrix3 with 20 x 120 elements is created. The co-occurrence matrices of these three matrices are computed with an offset of one pixel and angles of 0, 45, and 90 degrees, and are named CO1, CO2, and CO3. In this case, for each image, three co-occurrence matrices with 8 x 8 dimensions are created.
4) According to Haralick's theory [56], the co-occurrence matrix has 14 properties, of which we use 7 in our iris biometric system. These are computed for the 3 matrices, so the feature vector is as follows:

F = [En1, Cont1, Cor1, Hom1, Acor1, Dis1, Ine1, En2, Cont2, Cor2, Hom2, Acor2, Dis2, Ine2, En3, Cont3, Cor3, Hom3, Acor3, Dis3, Ine3]

In other words, the feature vector in this method has only 21 elements. Also, to improve the results, we create a feature vector using the GLCM for each sub-band and scale; that is, for each of the eight sub-bands in level 3 of the contourlet transform we compute the GLCM properties separately, and the feature vector is then created by combining these properties. In this case the feature vector has 56 elements.
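The sketch below computes a small GLCM and the seven Haralick-style features listed above with plain NumPy; the quantization to 8 gray levels and the single-pixel horizontal offset are illustrative choices rather than parameters prescribed by the paper.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy)."""
    q = np.floor(image.astype(np.float64) / image.max() * (levels - 1)).astype(int)
    P = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def haralick_features(P):
    """Energy, contrast, correlation, homogeneity, autocorrelation,
    dissimilarity and inertia of a normalized GLCM."""
    i, j = np.indices(P.shape)
    mu_x, mu_y = (i * P).sum(), (j * P).sum()
    sd_x = np.sqrt(((i - mu_x) ** 2 * P).sum())
    sd_y = np.sqrt(((j - mu_y) ** 2 * P).sum())
    return {
        "energy": (P ** 2).sum(),
        "contrast": ((i - j) ** 2 * P).sum(),
        "correlation": ((i * j * P).sum() - mu_x * mu_y) / (sd_x * sd_y),
        "homogeneity": (P / (1.0 + (i - j) ** 2)).sum(),
        "autocorrelation": (i * j * P).sum(),
        "dissimilarity": (np.abs(i - j) * P).sum(),
        "inertia": ((i - j) ** 2 * P).sum(),
    }

sub_band = np.random.rand(20, 120)                # stand-in for Matrix3 in the text
features = haralick_features(glcm(sub_band))
print([round(v, 4) for v in features.values()])   # 7 values per co-occurrence matrix
```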
B. Combination of Local and Global Properties of an Iris Image

Another method we use for creating the iris feature vector is based on the local and global properties of an iris image. The detailed changes in an iris image are called the local properties; for example, edges are considered a local property. The edges should be extracted from the lower levels, because in the upper levels the edges are usually removed. Another point is that the first level is usually very sensitive to noise. By studying the coefficients, we find that where an edge is present the coefficient is positive, and where the edge disappears the coefficient is negative. After running the contourlet transform, an extremum value is reached which can be positive or negative. For coding these values we use the rules below:

Positive local maximum = 1
Negative local maximum = -1
Other = 0

In other words, in the image extracted from the iris by the contourlet transform, black points represent 0, white points represent 1, and grayscale points represent -1. The feature vector length in this method is 2520 elements.

Global properties represent the global structure of the image; therefore rotation and noise do not affect them. In this method, using 12 sub-bands (in levels 2 and 3 of the contourlet transform), we extract the global properties by computing the average and variance of each sub-band. The feature vector in this method is as follows:

F = [a1, v1, a2, v2, a3, v3, a4, v4, a5, v5, a6, v6, a7, v7, a8, v8, a9, v9, a10, v10, a11, v11, a12, v12]

The feature vector in this method has only 24 elements. By combining local and global properties, not only does the system withstand more noise, but it also dispenses with the several comparisons otherwise needed to overcome the effect of rotated iris images. According to [57, 58], in the combined system the local feature vector is created first; then, in the matching step, a value representing the similarity between the input local code and the local code of the class in question stored in the database is computed. If the distance falls outside the local-properties domain, the global properties are extracted, the distance between the input global code and the global code of the class in question is compared, and the decision is made according to a pre-decided threshold. In this technique the feature vector has 2544 elements (a sketch of the local coding and the global descriptor follows).
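A minimal sketch of the two parts of this representation is given below: the ternary coding of contourlet coefficients by the sign of their local extrema, and the per-sub-band mean/variance global descriptor. The local-extremum test used here (comparison against the immediate neighbourhood) is an assumption about how "local maximum" is detected, since the paper does not spell it out.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def ternary_code(coefficients, size=3):
    """Code contourlet coefficients as +1 (positive local maximum),
    -1 (negative local extremum) or 0 (everything else)."""
    pos_peak = (coefficients == maximum_filter(coefficients, size)) & (coefficients > 0)
    neg_peak = (coefficients == minimum_filter(coefficients, size)) & (coefficients < 0)
    code = np.zeros(coefficients.shape, dtype=np.int8)
    code[pos_peak] = 1
    code[neg_peak] = -1
    return code

def global_descriptor(sub_bands):
    """24-element vector: average and variance of each of 12 sub-bands."""
    return np.array([stat for band in sub_bands
                     for stat in (band.mean(), band.var())])

# Stand-ins for 12 contourlet sub-bands of a normalized iris image.
sub_bands = [np.random.randn(8, 120) for _ in range(12)]
local_code = ternary_code(sub_bands[0])
global_vec = global_descriptor(sub_bands)
print(local_code.shape, global_vec.shape)   # e.g. (8, 120) and (24,)
```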
C. Creating the Iris Feature Vector Using PCA and ICA

In this method, the features in question are extracted from the generated sub-bands using the PCA (Principal Component Analysis) and ICA (Independent Component Analysis) techniques. PCA is a classic method for analyzing statistical data, extracting features, and condensing data. By transforming the data, this method provides an appropriate representation with smaller dimensions and less redundant information; the coordinate axes are defined in such a way that the mapped data have the highest variance. To implement this method, we made use of [59], and the feature axes are developed according to the level-3 sub-bands of the contourlet transform. In this method the feature vector has 1100 elements. ICA is also a statistical method for finding the components of multivariate data. However, what makes this method distinct from other data representation methods is that it searches for statistically independent components which at the same time have non-Gaussian distributions. In fact, this method can be regarded as an extension of methods such as PCA and FA (Factor Analysis), and it is considerably stronger and of much use in many cases where the classic methods are insufficient. Similar to the PCA method, we use the level-3 sub-bands of the contourlet transform for creating the iris feature vector. To implement the ICA method we use the technique described in [60]. The feature vector obtained using ICA has 1100 elements, the same as with PCA.
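The fragment below shows how such projections could be fitted with scikit-learn's PCA and FastICA (the fixed-point ICA algorithm of [60]); the synthetic feature matrix and the small number of components are illustrative stand-ins, since the paper's 1100-dimensional vectors would require correspondingly many samples and coefficients.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Stand-in data: one row per training image, columns are flattened
# level-3 contourlet coefficients (dimensions here are illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 960))          # 200 images, 960 coefficients each

n_components = 20                            # the paper keeps 1100 components;
                                             # kept small here so the demo runs
pca = PCA(n_components=n_components).fit(X)
ica = FastICA(n_components=n_components, max_iter=1000, random_state=0).fit(X)

pca_features = pca.transform(X)              # PCA-based iris feature vectors
ica_features = ica.transform(X)              # ICA-based iris feature vectors
print(pca_features.shape, ica_features.shape)
```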
D. Feature Vector in the Coefficient Domain

One of the most common methods for creating a feature vector is to use the coefficients extracted by various transformations such as Gabor filters, wavelets, etc.; Daugman made use of this technique in his method. In our proposed method, the feature vector is created from the coefficients extracted at level 2 of the contourlet transform. Techniques for decreasing the vector dimensions are also used.

1) Binary vector creation with coefficients: As stated in the previous section, the level-2 sub-bands are extracted and converted to binary form according to the following rule:

If Coeff(i) >= 0 then NewCoeff(i) = 1, else NewCoeff(i) = 0.

The Hamming distance between the vectors of the generated coefficients is then calculated. Distances ranging from 0 to 0.5 are observed for the inter-class distribution and from 0.45 to 0.6 for the intra-class distribution. In total, 192,699 inter-class comparisons and 1,679 intra-class comparisons were carried out. Figure 7 shows the inter-class and intra-class distributions. In implementing this method, we used the point 0.42 as the inter-class/intra-class separation point.

Figure 7: Inter-class and intra-class distributions: (a) inter-class distribution, (b) intra-class distribution.
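The binarization rule and the fractional Hamming distance used for this comparison can be sketched as follows; the coefficient arrays are stand-ins, since the actual values would come from the level-2 contourlet decomposition.

```python
import numpy as np

def binarize(coefficients):
    """NewCoeff(i) = 1 if Coeff(i) >= 0, else 0."""
    return (np.asarray(coefficients) >= 0).astype(np.uint8)

def fractional_hamming(code_a, code_b):
    """Fraction of positions at which two binary codes disagree."""
    code_a, code_b = np.ravel(code_a), np.ravel(code_b)
    return np.count_nonzero(code_a != code_b) / code_a.size

# Stand-in level-2 coefficient blocks for two iris images.
coeffs_1 = np.random.randn(2520)
coeffs_2 = np.random.randn(2520)
hd = fractional_hamming(binarize(coeffs_1), binarize(coeffs_2))
print(round(hd, 3))   # compared against a separation threshold such as 0.42
```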
2) Non-Linear Approximation Coefficients (NLAC): In this method we use non-linear approximation to select the significant coefficients from the binary feature vector created in the previous subsection. For this purpose we use the following formula:

Nsignif = round(npixel * 2.5 / 100)    (1)

where npixel is the number of pixels in the normalized iris image and Nsignif is the number of significant coefficients. In other words, it has been shown [61] that the image can be reconstructed from only 2.5% of the coefficients. The feature vector in this method has only 48 elements.
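One possible reading of equation (1) is sketched below: keep the 2.5% largest-magnitude coefficients and record their binarized values. Selecting coefficients by magnitude is an assumption about what "significant" means here; the paper itself only states the 2.5% budget.

```python
import numpy as np

def nlac_select(coefficients, percent=2.5):
    """Keep the top `percent` of coefficients by absolute value (eq. (1))."""
    coefficients = np.ravel(coefficients)
    n_signif = int(round(coefficients.size * percent / 100.0))
    top = np.argsort(np.abs(coefficients))[-n_signif:]   # indices of largest magnitudes
    return (coefficients[top] >= 0).astype(np.uint8)     # binarized significant coefficients

level2_coeffs = np.random.randn(1920)    # stand-in: npixel coefficients
feature = nlac_select(level2_coeffs)
print(feature.size)                      # round(1920 * 2.5 / 100) = 48 elements
```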
3) Genetic Algorithm (GA): The selection of the optimal feature subset with the aid of a genetic algorithm is studied in this subsection. For creating the iris feature vector we use the level-2 binary coefficients, and by using a GA we try to reduce the dimensions of the iris feature vector. In this method, we use a MOGA [53] to select the optimal set of features which provide the discriminating information needed to classify the iris patterns. We present here the choice of a representation for encoding the candidate solutions to be manipulated by the GA; each individual in the population represents a candidate solution to the feature subset selection problem. If m is the total number of features available to represent the patterns to be classified (m = 600 in our case), the individual is represented by a binary vector of dimension m. If a bit is 1, the corresponding feature is selected; otherwise the feature is not selected (see Figure 8). This is the simplest and most straightforward representation scheme [53]. In this work, we use roulette wheel selection [53], which is one of the most common and easiest to implement selection mechanisms. Usually, there is a fitness value associated with each chromosome; for example, in a minimization problem, a lower fitness value means that the chromosome or solution is more optimized for the problem, while a higher fitness value indicates a less optimized chromosome. Our problem consists of optimizing two objectives:
(i) minimization of the number of features;
(ii) minimization of the recognition error rate of the classifier.
Therefore, we deal with a multi-objective optimization problem. Table I lists the parameters used in the genetic algorithm, and a sketch of this selection scheme is given after the table.

Figure 8: Binary feature vector (chromosome) of dimension l = feature dimension; for example, a 1 in position 1 means feature 1 is selected for the classifier, while a 0 in position 15 means feature 15 is not selected.

Table I: GA parameters (CASIA dataset)
Population size: 108 (the scale of the iris sample set)
Length of chromosome code: 600 (selected dimensionality of the feature sequence)
Crossover probability: 0.65
Mutation probability: 0.002
Number of generations: 110
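The sketch below shows the chromosome encoding, a two-objective fitness (scalarized into a single weighted score purely to keep the example short; the paper uses a true multi-objective GA [53]), and roulette wheel selection. The population size, chromosome length, and rates follow Table I, while the classifier error is faked by a placeholder function.

```python
import numpy as np

rng = np.random.default_rng(0)
POP_SIZE, CHROM_LEN = 108, 600
P_CROSS, P_MUT = 0.65, 0.002

def error_rate(mask):
    """Placeholder for the real objective: the recognition error of a
    classifier trained on the selected features (not reproduced here)."""
    return rng.uniform(0.0, 0.1)

def fitness(mask):
    # Two objectives to minimize: number of selected features and error rate.
    # A weighted sum is used here only for illustration.
    return 0.5 * mask.sum() / CHROM_LEN + 0.5 * error_rate(mask)

def roulette_wheel(population, scores):
    # Lower fitness is better, so invert the scores before sampling.
    weights = 1.0 / (np.asarray(scores) + 1e-9)
    probs = weights / weights.sum()
    idx = rng.choice(len(population), size=len(population), p=probs)
    return [population[i].copy() for i in idx]

population = [rng.integers(0, 2, CHROM_LEN, dtype=np.uint8) for _ in range(POP_SIZE)]
for generation in range(3):                        # the paper runs 110 generations
    scores = [fitness(ind) for ind in population]
    parents = roulette_wheel(population, scores)
    next_pop = []
    for a, b in zip(parents[0::2], parents[1::2]):
        if rng.random() < P_CROSS:                 # single-point crossover
            cut = rng.integers(1, CHROM_LEN)
            a, b = (np.concatenate([a[:cut], b[cut:]]),
                    np.concatenate([b[:cut], a[cut:]]))
        for child in (a, b):
            flip = rng.random(CHROM_LEN) < P_MUT   # bit-flip mutation
            child[flip] ^= 1
            next_pop.append(child)
    population = next_pop

best = min(population, key=fitness)
print(best.sum(), "features selected")
```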

E. Average Absolute Deviation (AAD): In this algorithm, the feature value is the average absolute deviation (AAD) of each output image, defined as follows:

F = \frac{1}{N} \left[ \sum_{N} \left| f(x, y) - m \right| \right]    (2)

where N is the number of pixels in the image, m is the mean of the image, and f(x, y) is the value at point (x, y). The AAD feature is a statistic similar to the variance, but experimental results show that the former gives slightly better performance than the latter. The average absolute deviation of each filtered image constitutes one component of our feature vector. These features are arranged to form a 1D feature vector of length 1280 for each input image (160 elements for each sub-band in level 3 of the contourlet transform).
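Equation (2) maps directly to a short computation; the sub-band list below is again a stand-in for the level-3 contourlet outputs, and the grouping of AAD values into 160 elements per sub-band (for the 1280-length vector) is not reproduced here.

```python
import numpy as np

def aad(image):
    """Average absolute deviation of an image: (1/N) * sum |f(x, y) - m|."""
    values = np.ravel(image).astype(np.float64)
    return np.mean(np.abs(values - values.mean()))

# One AAD value per filtered output, concatenated into the feature vector.
sub_bands = [np.random.randn(8, 160) for _ in range(8)]   # stand-in level-3 outputs
feature_vector = np.array([aad(band) for band in sub_bands])
print(feature_vector.shape)
```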
V. EXPERIMENTAL RESULTS

To evaluate the performance of the proposed system we use the CASIA iris image database (version 1) [7], created by the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, which consists of 108 subjects with 7 samples each. Images of the CASIA iris image database are mainly from Asian subjects. For each iris class, images were captured in two different sessions, with an interval of one month between the sessions. There is no overlap between the training and test samples. In our experiments, a three-level contourlet decomposition is adopted. The experiments were performed in Matlab 7.0. The normalized iris image is obtained from the localized iris image, which is segmented by Daugman's method. For the quincunx filter banks in the DFB stage we used the filters designed by A. Cohen, I. Daubechies, and J.-C. Feauveau. In Table II we compare our proposed methods with some other well-known methods from three viewpoints: feature vector length, correct classification percentage, and feature extraction time. We also modified the classifiers of the well-known methods to SVM for a better comparison.
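A typical way to obtain the SVM classification accuracy reported in Table II is sketched below with scikit-learn; the feature matrix, labels, and train/test split are placeholders, since the paper does not publish its exact evaluation script (which was run in Matlab).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Stand-in data: 108 subjects x 7 samples, one feature vector per image
# (e.g. the 56-element GLCM vector described above).
rng = np.random.default_rng(0)
X = rng.standard_normal((108 * 7, 56))
y = np.repeat(np.arange(108), 7)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=2 / 7, stratify=y, random_state=0)   # ~2 test images per class

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"correct classification percentage: {100 * accuracy:.1f}%")
```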
Table II: Comparison of our methods with some well-known methods
(columns: Method | Feature vector length (bits) | Classifier | Correct classification (%) | Feature extraction time (ms))

Well-known methods:
Daugman [3] | 2048 | HD / SVM | 100 / 100 | 628.5
Lim [17] | 87 | LVQ / SVM | 90.4 / 92.3 | 180
Jafar Ali [48] | 87 | HD / SVM | 92.1 / 92.8 | 260.3
Ma [20] | 1600 | ED / SVM | 95.0 / 95.9 | 80.3

Our proposed methods:
GLCM [54] | 21 | SVM | 94.2 | 20.3
GLCM (combining sub-bands) | 56 | SVM | 96.3 | 20.3
Local feature | 2520 | HD | 90.32 | 20.3
Global feature | 24 | ED | 78.6 | 20.3
Combined local and global feature | 2544 | HD+ED | 93.2 | 20.3
Binary vector in coefficient domain | 2520 | HD | 96.5 | 20.3
NLAC [55] | 48 | SVM | 91.3 | 20.3
GA | 600 | SVM | 97.81 | 20.3

Other methods:
AAD | 1280 | SVM | 92.63 | 20.3
PCA | 1100 | SVM | 90 | 20.3
ICA | 1100 | SVM | 85.9 | 20.3

A. Discussion

• Using the GLCM yields a feature vector with appropriate dimensions and acceptable classification accuracy.
• The highest classification accuracy percentage is achieved with the GA.
• The feature vectors in the coefficient domain and the PCA, ICA, and AAD methods are easily implemented.
• Using the global and the local properties in isolation does not give good results, while a combination of the two is very noise resistant and leads to good results.
• NLAC provides very compact feature vector dimensions with an acceptable classification accuracy percentage.
• All the methods proposed in this paper save time in the processing and extraction of features compared with the known existing methods.

VI. CONCLUSION

In this paper we proposed an effective algorithm for iris feature extraction using the contourlet transform, and we used several techniques to reduce the iris feature vector. For segmentation and normalization we use Daugman's methods. The contourlet transform is used to extract the discriminating features, and several methods are applied for the feature subset selection. Our proposed methods can classify iris feature vectors properly; the classification rates obtained for the fairly large number of experiments in this paper verify this claim. In other words, most of the methods proposed in this paper provide a smaller feature vector length with only an insignificant reduction in the percentage of correct classification.

REFERENCES

[1] R. P. Wildes, "Iris recognition: an emerging biometric technology," Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, 1997.
[2] A. Jain, R. Bolle, and S. Pankanti, Biometrics: Personal Identification in a Networked Society, Kluwer Academic Publishers, Norwell, Mass, USA, 1999.
[3] J. Daugman, "Biometric personal identification system based on iris analysis," US patent no. 5291560, 1994.
[4] T. Mansfield, G. Kelly, D. Chandler, and J. Kane, "Biometric product testing," Final Report, National Physical Laboratory, Middlesex, UK, 2001.
[5] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, 1993.
[6] J. Daugman, "Demodulation by complex-valued wavelets for stochastic pattern recognition," International Journal of Wavelets, Multiresolution and Information Processing, vol. 1, no. 1, pp. 1-17, 2003.

[7] CASIA,”Chinese Academy of Sciences – Institute of Automation”. Database Workshop on Machine Learning for Signal Processing (MLSP ’05), pp.
of 756 Grayscale Eye Images.http://www.sinobiometrics.com Versions 1.0, 159–164,Mystic, Conn, USA, September 2005.
2003. [26] X. Qiu, Z. Sun, and T. Tan, “Global texture analysis of iris images for ethnic
[8] X. He and P. Shi, “An efficient iris segmentation method for recognition,” in classification,” in Proceedings of the International Conference on Advances on
Proceedings of the 3rd International Conference on Advances in Patten Biometrics (ICB ’06),vol. 3832 of Lecture Notes in Computer Science, pp.
Recognition (ICAPR ’05), vol. 3687 of Lecture Notes in Computer Science, pp. 411–418,Springer, Hong Kong, January 2006.
120–126, Springer, Bath, UK, August 2005. [27] C. Sanchez-Avila, R. Sanchez-Reillo, and D. de Martin-Roche, “Iris-based
[9] K. Bae, S. Noh, and J. Kim, “Iris feature extraction using independent biometric recognition using dyadic wavelet transform,” IEEE Aerospace and
component analysis,” in Proceedings of the 4th International Conference on Electronic Systems Magazine,vol. 17, no. 10, pp. 3–6, 2002.
Audio- and Video-Based Biometric Person Authentication (AVBPA ’03), vol. [28] R. Sanchez-Reillo and C. Sanchez-Avila, “Iris recognition with low
2688, pp. 1059–1060,Guildford, UK, June 2003. template size,” in Proceedings of the 3rd International Conference on Audio-
[10] W. W. Boles and B. Boashash, “A human identification technique using and Video-Based Biometric Person Authentication (AVBPA ’01), pp. 324–329,
images of the iris and wavelet transform,”IEEE Transactions on Signal Halmstad, Sweden,June 2001.
Processing, vol. 46, no. 4, pp. 1185–1188, 1998. [29] N. A. Schmid, M. V. Ketkar, H. Singh, and B. Cukic,“Performance analysis
[11] S. C. Chong, A. B. J. Teoh, and D. C. L. Ngo, “Iris authentication using of iris-based identification system at the matching score level,” IEEE
privatized advanced correlation filter, “in Proceedings of the International Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 154–168,
Conference on Advances on Biometrics (ICB ’06), vol. 3832 of Lecture Notes in 2006.
Computer Science, pp. 382–388, Springer, Hong Kong, January 2006. [30] D. Schonberg and D. Kirovski, “EyeCerts,” IEEE Transactionson
[12] J. Daugman, “Statistical richness of visual phase information: update on Information Forensics and Security, vol. 1, no. 2, pp. 144–153, 2006.
recognizing persons by iris patterns,” International Journal of Computer Vision, [31] B. Son, H. Won, G. Kee, and Y. Lee, “Discriminant irisfeature and support
vol. 45, no. 1, pp. 25–38, 2001. vector machines for iris recognition,” in Proceedings of the International
[13] X. He and P. Shi, “An efficient iris segmentation method for recognition,” Conference on Image Processing (ICIP ’04), vol. 2, pp. 865–868, Singapore,
in Proceedings of the 3rd International Conference on Advances in Patten October 2004.
Recognition (ICAPR ’05), vol. 3687 of Lecture Notes in Computer Science, pp. [32] Z. Sun, T. Tan, and X. Qiu, “Graph matching iris image blocks with local
120–126, Springer, Bath, UK, August 2005. binary pattern,” in Proceedings of the International Conference on Advances on
[14] D. S. Jeong, H.-A. Park, K. R. Park, and J. Kim, “Iris recognition in mobile Biometrics (ICB ’06), vol. 3832 of Lecture Notes in Computer Science, pp.
phone based on adaptive Gabor filter,” in Proceedings of the International 366–372, Springer, Hong Kong, January 2006.
Conference on Advances on Biometrics (ICB ’06), vol. 3832 of Lecture Notes in [33] Z. Sun, Y. Wang, T. Tan, and J. Cui, “Improving iris recognition accuracy
Computer Science, pp. 457–463, Springer, Hong Kong, January 2006. via cascaded classifiers,” IEEE Transactions on Systems, Man and Cybernetics
[15] B. V. K. Vijaya Kumar, C. Xie, and J. Thornton, “Iris verification using C, vol. 35, no. 3, pp. 435–441, 2005.
correlation filters,” in Proceedings of the 4th International Conference Audio- [34] H. Tan and Y.-J. Zhang, “Detecting eye blink states by tracking iris and
and Video-Based Biometric Person Authentication (AVBPA ’03), vol. 2688 of eyelids,” Pattern Recognition Letters, vol. 27, no. 6, pp. 667–675, 2006.
Lecture Notes in Computer Science, pp. 697–705, Guildford, UK, June 2003. [35] Q. M. Tieng and W. W. Boles, “Recognition of 2D object contours using
[16] E. C. Lee, K. R. Park, and J. Kim, “Fake iris detection by using purkinje the wavelet transform zero-crossing representation,”IEEE Transactions on
image,” in Proceedings of the International Conference on Advances on Pattern Analysis and Machine Intelligence, vol. 19, no. 8, pp. 910–916, 1997.
Biometrics (ICB ’06), vol. 3832 of Lecture Notes in Computer Science, pp. [36] C. Tisse, L. Martin, L. Torres, and M. Robert, “Person identification
397–403, Springer, Hong Kong, January 2006. technique using human iris recognition,” in Proceedings of the 15th
[17] S. Lim, K. Lee, O. Byeon, and T. Kim, “Efficient iris recognition through International Conference on Vision Interface (VI ’02), pp. 294–299, Calgary,
improvement of feature vector and classifier,” Electronics and Canada, May 2002.
Telecommunications Research Institute Journal, vol. 23, no. 2, pp. 61–70, 2001. [37] J. P. Havlicek, D. S. Harding, and A. C. Bovik, “The mutlicomponent
[18] X. Liu, K. W. Bowyer, and P. J. Flynn, “Experiments with an improved iris AM-FM image representation,” IEEE Transactions on Image Processing, vol. 5,
segmentation algorithm,” in Proceedings of the 4th IEEE Workshop on no. 6, pp. 1094–1100, 1996.
Automatic Identification Advanced Technologies (AUTO ID ’05), pp. 118–123, [38] T. Tangsukson and J. P. Havlicek, “AM-FM image segmentation,”in
Buffalo, NY, USA, October 2005. Proceedings of the International Conference on Image Processing (ICIP ’00),
[19] X. Liu, K. W. Bowyer, and P. J. Flynn, “Experimental evaluation of iris vol. 2, pp. 104–107, Vancouver, Canada, September 2000.
recognition,” in Proceedings of the IEEE Computer Society Conference on [39] R. P. Wildes, J. C. Asmuth, G. L. Green, et al., “A machine vision system
Computer Vision and Pattern Recognition (CVPR ’05), vol. 3, pp. 158–165, San for iris recognition,” Machine Vision and Applications, vol. 9, no. 1, pp. 1–8,
Diego, Calif, USA, June 2005. 1996.
[20] L.Ma, T. Tan, Y.Wang, and D. Zhang, “Personal identification based on iris [40] A. Poursaberi and B. N. Araabi, “Iris recognition for partially occluded
texture analysis,” IEEE Transactions on Pattern Analysis and Machine images: methodology and sensitivity analysis,”EURASIP Journal on Advances
Intelligence, vol. 25, no. 12, pp. 1519–1533, 2003. in Signal Processing, vol. 2007, Article ID 36751, 12 pages, 2007.
[21] L.Ma, T. Tan, Y. Wang, and D. Zhang, “Efficient iris recognition by [41] Y. Zhu, T. Tan, and Y. Wang, “Biometric personal identification based on
characterizing key local variations,” IEEE Transactions on Image Processing, iris patterns,” in Proceedings of the 15th International Conference on Pattern
vol. 13, no. 6, pp. 739–750, 2004. Recognition (ICPR ’00), vol. 2, pp. 801–804, Barcelona, Spain, September
[22] K.Miyazawa, K. Ito, T. Aoki, K. Kobayashi, and H. Nakajima,“A 2000.
phase-based iris recognition algorithm,” in Proceedings of the International [42] L. Ma, Y. Wang, and T. Tan, “Iris recognition based on multichannel Gabor
Conference on Advances on Biometrics (ICB ’06), vol. 3832 of Lecture Notes in filtering,” in Proceedings of the 5th Asian Conference on Computer Vision
Computer Science, pp. 356–365, Springer, Hong Kong, January 2006. (ACCV ’02), vol. 1, pp. 279–283, Melbourne, Australia, January 2002.
[23] T.Moriyama, T. Kanade, J. Xiao, and J. F. Cohn, “Meticulously detailed [43] L. Ma, Y. Wang, and T. Tan, “Iris recognition using circular symmetric
eye region model and its application to analysis of facial images,” IEEE filters,” in Proceedings of the 16th International Conference on Pattern
Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. Recognition (ICPR ’02), vol. 2, pp. 414–417, Quebec City, Canada, August
738–752, 2006. 2002
[24] C.-H. Park, J.-J. Lee,M. J. T. Smith, and K.-H. Park, “Iris-based personal [44] L. Ma, Personal identification based on iris recognition, Ph.D dissertation,
authentication using a normalized directional energy feature,” in Proceedings of Institute of Automation, Chinese Academy of Sciences, Beijing, China, 2003.
the 4th International Conference on Audio- and Video-Based Biometric Person [45] L. Masek, Recognition of human iris patterns for biometrics identification,
Authentication (AVBPA ’03), vol. 2688, pp. 224–232, Guildford, UK, June2003. B. Eng. thesis, University of Western Australia, Perth, Australia, 2003.
[25] M. B. Pereira and A. C. P. Veiga, “Application of genetic algorithms to [46]J. Daugman. “How Iris Recognition works”. IEEE Transactions on
improve the reliability of an iris recognition system,” in Proceedings of the IEEE Circuits and systems for video Technology, Vol.14, No.1, pp: 21-30, January
2004.

[47] Y. Zhu, T. Tan, and Y. Wang, "Biometric personal identification based on iris patterns," in Proceedings of the International Conference on Pattern Recognition, vol. 2, pp. 805-808, 2000.
[48] J. M. H. Ali and A. E. Hussanien, "An iris recognition system to enhance e-security environment based on wavelet theory," AMO - Advanced Modeling and Optimization, vol. 5, no. 2, 2003.
[49] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091-2106, 2005.
[50] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532-540, April 1983.
[51] R. H. Bamberger and M. J. T. Smith, "A filter bank for the directional decomposition of images: theory and design," IEEE Transactions on Signal Processing, vol. 40, no. 4, pp. 882-893, April 1992.
[52] K. P. Hollingsworth and K. W. Bowyer, "The best bits in an iris code," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), April 2008.
[53] S. Bandyopadhyay, S. K. Pal, and B. Aruna, "Multiobjective GAs, quantitative indices, and pattern classification," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 34, no. 5, pp. 2088-2099, 2004.
[54] A. Azizi and H. R. Pourreza, "A new method for iris feature extraction based on contourlet transform and co-occurrence matrix," 3rd IADIS International Conference on Intelligent Systems and Agents 2009, Algarve, Portugal, 21-23 June 2009.
[55] A. Azizi and H. R. Pourreza, "A new method for iris recognition based on contourlet transform and non linear approximation coefficients," 5th International Conference on Intelligent Computing (ICIC 2009), Ulsan, South Korea, 16-19 September 2009.
[56] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. 3, no. 6, pp. 610-621, 1973.
[57] Z. Sun and Y. Wang, "Improving iris recognition accuracy via cascaded classifiers," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 35, no. 3, August 2005.
[58] Z. Sun and Y. Wang, "Cascading statistical and structural classifiers for iris recognition," in Proceedings of the IEEE International Conference on Image Processing, vol. 2, pp. 1261-1264, October 2004.
[59] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[60] A. Hyvärinen and E. Oja, "A fast fixed-point algorithm for independent component analysis," Neural Computation, vol. 9, no. 7, pp. 1483-1492, 1997.
[61] Contourlet Toolbox (version 2.0, November 2003), http://www.ifp.uiuc.edu/~minhdo/software/

AUTHORS PROFILE

Amir Azizi graduated from Islamic Azad University, Mashhad Branch, and received the B.S. in Computer Engineering. He received the M.S. degree in Artificial Intelligence from the Islamic Azad University, Qazvin Branch.

Hamid Reza Pourreza graduated from Ferdowsi University of Mashhad in Electrical Engineering and received the Ph.D. in Computer Engineering from Amirkabir University of Technology. He is an Assistant Professor in the Computer Engineering Department, School of Engineering, Ferdowsi University of Mashhad.
