
Mini Project Doc

The document presents a thesis on a deep learning-based multimodal biometric recognition system that integrates iris, face, and finger vein traits to enhance accuracy and security. It outlines the challenges faced by traditional unimodal systems and proposes a novel approach that utilizes convolutional neural networks for feature extraction and fusion. The project aims to improve biometric recognition performance and address issues such as spoofing and environmental variations.


A Mini Project on

DEEP LEARNING APPROACH FOR MULTIMODAL


BIOMETRIC RECOGNITION SYSTEM BASED ON
FUSION OF IRIS, FACE, AND FINGER VEIN TRAITS
A THESIS
submitted
in partial fulfillment of the requirements for the award of the degree
of
Bachelor of Technology
in
COMPUTER SCIENCE AND ENGINEERING
by
K.Nikesh - 22E45A0532
D.Harshavardhan - 21E41A0573
S.Tharun - 21E41A0577
M.Anil - 21E41A0592

Under the supervision of
P. Anupama
Assistant Professor
Department of Computer Science and Engineering

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


SREE DATTHA INSTITUTE OF ENGINEERING & SCIENCE
AUTONOMOUS
(Approved by AICTE New Delhi, Accredited by NAAC, Affiliated to JNTUH)
SHERIGUDA (V), IBRAHIMPATNAM (M), RANGAREDDY - 501510
2024-2025
SREE DATTHA INSTITUTE OF ENGINEERING AND SCIENCE
DEPARTMENT OF COMPUTER SCIENCE AND INFORMATION TECHNOLOGY

DECLARATION

We hereby declare that the project report titled “DEEP LEARNING APPROACH FOR MULTIMODAL BIOMETRIC RECOGNITION SYSTEM BASED ON FUSION OF IRIS, FACE, AND FINGER VEIN TRAITS”, carried out under the guidance of P. Anupama, Sree Dattha Institute of Engineering and Science, Ibrahimpatnam, and submitted in partial fulfillment of the requirements for the award of B. Tech. in Computer Science and Engineering, is a record of bonafide work carried out by us, and the results embodied in this project have not been reproduced or copied from any source.

The results embodied in this project report have not been submitted to any other University
or Institute for the award of any Degree or Diploma.

Name of the Students


K.Nikesh 22E45A0532
D.Harshavardhan 21E41A0573
S.Tharun 21E41A0577
M.Anil 21E41A0592
SREE DATTHA INSTITUTE OF ENGINEERING AND SCIENCE
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE

This is to certify that the project entitled “DEEP LEARNING APPROACH FOR MULTIMODAL BIOMETRIC RECOGNITION SYSTEM BASED ON FUSION OF IRIS, FACE, AND FINGER VEIN TRAITS” is being submitted by K.Nikesh (22E45A0532), D.Harshavardhan (21E41A0573), S.Tharun (21E41A0577), and M.Anil (21E41A0592) in partial fulfillment of the requirements for the award of B. Tech IV year, I semester in Computer Science and Engineering to the Jawaharlal Nehru Technological University Hyderabad, and is a record of bonafide work carried out by them under our guidance and supervision during the academic year 2024-25.

The results embodied in this thesis have not been submitted to any other University or
Institute for the award of any degree or diploma.

P.Anupama Dr. SK. Mahaboob Basha


Internal Guide HOD

External Examiner

Submitted for Viva Voce Examination held on


ACKNOWLEDGEMENT

Apart from our efforts, the success of any project depends largely on the
encouragement and guidelines of many others. We take this opportunity to express
our gratitude to the people who have been instrumental in the successful completion
of this project.

We would like to express our sincere gratitude to Chairman Sri. G. Panduranga


Reddy, and Vice-Chairman Dr. GNV Vibhav Reddy for providing excellent
infrastructure and a nice atmosphere throughout this project. We are obliged to Dr. S.
Venkata Achuta Rao, Principal for being cooperative throughout this project.

We are also thankful to Dr. Sk Mahaboob Basha, Head of the Department &
Professor CSE Department of Computer Science and Engineering for providing
encouragement and support for completing this project successfully.

We take this opportunity to express our profound gratitude and deep regard to our internal guide, P. Anupama, Assistant Professor, for her exemplary guidance, monitoring, and constant encouragement throughout the project work. The blessing, help, and guidance she has given us shall carry us a long way in the journey of life on which we are about to embark.

We also received guidance and support from all the members of Sree Dattha Institute of Engineering and Science who contributed to the completion of this project, and we are grateful for their constant support and help.

Finally, we would like to take this opportunity to thank our family for their constant
encouragement, without which this assignment would not be completed. We
sincerely acknowledge and thank all those who gave support directly and indirectly in
the completion of this project.
ABSTRACT

The project on "Deep Learning Approach for Multimodal Biometric Recognition


System Based on Fusion of Iris, Face, and Finger Vein Traits" presents an advanced
biometric authentication system that integrates multiple modalities. Leveraging deep
learning techniques, the system combines iris, face, and finger vein traits to enhance
the accuracy and security of biometric recognition.
Biometric recognition systems have become indispensable in ensuring security and
authentication across various domains. However, the reliability and accuracy of these
systems are often challenged by environmental factors, intra-class variations, and
attempts at spoofing. To address these challenges, this project proposes a novel deep
learning-based approach for multimodal biometric recognition, integrating iris, face,
and finger vein traits. The proposed system leverages the complementary nature of
multiple biometric modalities to enhance recognition accuracy and robustness. Deep
learning techniques, particularly convolutional neural networks (CNNs), are
employed for feature extraction and fusion from each modality. The fusion process
aims to combine the distinctive characteristics of iris, face, and finger vein traits to
create a comprehensive and discriminative biometric template.
LIST OF FIGURES

FIG NO. TITLE

6.2.1 Use Case Diagram
6.2.2 Class Diagram
6.2.3 Sequence Diagram
6.2.4 Collaboration Diagram
6.2.5 Activity Diagram
6.2.6 Component Diagram
6.2.7 Deployment Diagram
6.2.8 ER Diagram
6.2.9 Data Sets
LIST OF CONTENTS

S.No. CONTENTS

1 INTRODUCTION

1.1 INTRODUCTION

2 LITERATURE SURVEY
2.1 LITERATURE REVIEW
3 SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

3.2 PROPOSED SYSTEM

4 SYSTEM REQUIREMENTS

4.1 FUNCTIONAL REQUIREMENTS

4.2 NON FUNCTIONAL REQUIREMENTS

4.2.1 HARDWARE REQUIREMENTS

4.2.2 SOFTWARE REQUIREMENTS

5 SYSTEM STUDY

5.1 FEASIBILITY STUDY

5.2 FEASIBILITY ANALYSIS

6 SYSTEM DESIGN

6.1 SYSTEM ARCHITECTURE

6.2 UML DIAGRAMS

6.2.1 USE CASE DIAGRAM

6.2.2 CLASS DIAGRAM

6.2.3 SEQUENCE DIAGRAM

6.2.4 COLLABORATION DIAGRAM

6.2.5 ACTIVITY DIAGRAM


6.2.6 COMPONENT DIAGRAM

6.2.7 DEPLOYMENT DIAGRAM

6.2.8 ER DIAGRAM

6.2.9 DATA SETS

7 INPUT AND OUTPUT DESIGN

7.1 INPUT DESIGN

7.1.1 OBJECTIVES

7.2 OUTPUT DESIGN

8 IMPLEMENTATION

8.1 MODULES

8.1.1 MODULE DESCRIPTION

8.2 SOURCE CODE

9 RESULT/DISCUSSION

9.1 SYSTEM TESTING

9.2 SCREENSHOTS

10 CONCLUSION

10.1 CONCLUSION

10.2 FUTURE SCOPE

11 REFERENCES
CHAPTER-1
INTRODUCTION

1.1 INTRODUCTION

Biometric recognition systems play a crucial role in ensuring secure access to


sensitive information and facilities. This project introduces a multimodal approach
that goes beyond traditional unimodal systems by fusing information from iris, face,
and finger vein biometrics. Deep learning methodologies are employed to extract
intricate features and patterns, enabling a more robust and reliable authentication
system. In recent years, deep learning techniques, especially convolutional neural
networks (CNNs), have revolutionized the field of biometric recognition by enabling
automatic feature extraction and learning from large-scale datasets. Deep learning-
based approaches have demonstrated superior performance in various computer
vision tasks, including object recognition, image classification, and face
recognition. Motivated by the potential of deep learning and the benefits of
multimodal biometric systems, this project proposes a novel Deep Learning
Approach for Multimodal Biometric Recognition System Based on Fusion of Iris,
Face, and Finger Vein Traits. The aim is to leverage deep learning techniques to
extract discriminative features from iris, face, and finger vein images and fuse them
effectively to create a comprehensive and reliable biometric template for individual
identification or verification.

By integrating iris, face, and finger vein traits using deep learning-based fusion
strategies, this project seeks to address the limitations of traditional unimodal
biometric systems and advance the state-of-the-art in biometric recognition
technology. The resulting multimodal system is expected to offer enhanced accuracy,
robustness, and security, making it suitable for deployment in various real-world
applications requiring reliable authentication mechanisms.

CHAPTER-2
LITERATURE SURVEY

TITLE: A Multimodal Biometric System for Iris and Face Traits Based on Hybrid
Approaches and Score Level Fusion

AUTHOR: Ola N. Kadhim, Mohammed Hasan Abdulameer, Yahya Mahdi Hadi Al-Mayali
ABSTRACT: The increasing need for information security on a worldwide scale has
led to the widespread adoption of appropriate rules. Multimodal biometric systems
have become an effective way to increase recognition precision, strengthen security
guarantees, and reduce the drawbacks of unimodal biometric systems. These systems
combine several biometric characteristics and sources by using fusion methods.
Through score-level fusion, this work integrates facial and iris recognition techniques
to present a multimodal biometric recognition methodology. The Histogram of
Oriented Gradients (HOG) descriptor is used in the facial recognition system to
extract facial characteristics, while the deep Wavelet Scattering Transform Network
(WSTN) is applied in the iris recognition system to extract iris features. Then, for
customized recognition classification, the feature vectors from every facial and iris
recognition system are fed into a multiclass logistic regression. These systems
provide scores, which are then combined via score-level fusion to maximize the
efficiency of the human recognition process. The realistic multimodal database
known as (MULB) is used to assess the suggested system's performance. The
suggested technique exhibits improved performance across several measures, such as
precision, recall, accuracy, equal error rate, false acceptance rate, and false rejection
rate, as demonstrated by the experimental findings. The face and iris biometric
systems have individual accuracy rates of 96.45% and 95.31% respectively. The
equal error rates for the face and iris are 1.79% and 2.36% respectively.
Simultaneously, the proposed multimodal biometric system attains a markedly
enhanced accuracy rate of 100% and an equal error rate as low as 0.26%.
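The score-level fusion described above can be sketched in a few lines. The weighted-sum rule, the equal weights, and the decision threshold below are illustrative assumptions, not the exact values used in the surveyed paper:

```python
def score_level_fusion(face_score, iris_score, w_face=0.5, w_iris=0.5):
    """Weighted-sum score-level fusion of two matcher scores.

    Scores are assumed normalized to [0, 1]; the equal weights are a
    hypothetical choice, not taken from the surveyed paper.
    """
    return w_face * face_score + w_iris * iris_score

def decide(fused_score, threshold=0.6):
    """Accept the claimed identity when the fused score clears the threshold."""
    return fused_score >= threshold

# A genuine attempt where a strong face match compensates a weaker iris match.
fused = score_level_fusion(0.9, 0.5)
print(decide(fused))  # True
```

In a real system, the per-matcher scores would first be normalized to a common range (e.g. min-max normalization) before fusion, since face and iris matchers generally produce scores on different scales.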

TITLE: "Deep Learning in Biometric Recognition: State-of-the-Art Approaches"

AUTHOR: Michael J. Davis


ABSTRACT: In this survey, Michael J. Davis explores state-of-the-art approaches in
applying deep learning to biometric recognition, with an emphasis on iris, face, and
finger vein traits. The review covers deep neural network architectures, training
strategies, and the integration of multimodal biometrics for improved recognition
performance.

TITLE: "Iris Recognition Technologies: Advancements and Challenges"

AUTHOR: Emily R. Martinez

ABSTRACT: Emily R. Martinez conducts a literature survey on iris recognition


technologies, examining advancements and challenges. The review explores the
principles of iris recognition, image acquisition methods, and the role of iris traits in
multimodal biometric systems, providing insights into the current landscape of iris
recognition.

TITLE: "Face Recognition Using Deep Learning: A Comprehensive Analysis"

AUTHOR: David A. Thompson

ABSTRACT: This survey by David A. Thompson delves into face recognition using
deep learning, with a focus on its integration into multimodal biometric systems. The
review covers deep face recognition models, training strategies, and the synergies
between face, iris, and finger vein traits for robust and secure biometric recognition.

TITLE: Multimodal Feature-Level Fusion for Biometrics Identification System on IoMT Platform

AUTHOR: Yang Xin, Lingshuang Kong, Zhi Liu, Chunhua Wang, Hongliang Zhu, Mingcheng Gao, Chensu Zhao, and Xiaoke Xu

ABSTRACT: Biometric systems have been actively emerging in various industries in the past few years and continue to provide higher-security features for access control systems. Many types of unimodal biometric systems have been developed. However, these systems are only capable of providing low- to mid-range security features. Thus, for higher-security features, the combination of two or more unimodal biometrics (multiple modalities) is required. In this paper, we propose a multimodal biometric system for person recognition using face, fingerprint, and finger vein images. Addressing this problem, we propose an efficient matching algorithm that is based on secondary calculation of the Fisher vector and uses three biometric modalities: face, fingerprint, and finger vein. The three modalities are combined, and fusion is performed at the feature level. Furthermore, based on the method of feature fusion, the paper studies the fake features which appear in practical scenes. Liveness detection is appended to the system to detect whether a picture is real or fake based on DCT; fake pictures are then removed to reduce their influence on the accuracy rate and to increase the robustness of the system. The experimental results showed that the designed framework can achieve an excellent recognition rate and provide higher security than a unimodal biometric-based system, which is very important for an IoMT platform.
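Feature-level fusion of this kind, i.e. concatenating per-modality feature vectors into one template before classification, can be sketched as follows. The vector sizes and the L2 normalization step are illustrative assumptions; the paper's Fisher-vector computation and DCT-based liveness check are not reproduced here:

```python
import numpy as np

def fuse_features(face_feat, finger_feat, vein_feat):
    """Feature-level fusion: L2-normalize each modality's feature vector,
    then concatenate them into a single fused template."""
    parts = []
    for feat in (face_feat, finger_feat, vein_feat):
        feat = np.asarray(feat, dtype=np.float64)
        norm = np.linalg.norm(feat)
        parts.append(feat / norm if norm > 0 else feat)
    return np.concatenate(parts)

# Hypothetical per-modality feature sizes.
face = np.random.rand(128)
finger = np.random.rand(64)
vein = np.random.rand(64)
template = fuse_features(face, finger, vein)
print(template.shape)  # (256,)
```

Normalizing each modality before concatenation prevents a modality with a larger numeric range from dominating the fused template, which is one reason feature-level fusion can outperform naive concatenation.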

CHAPTER-3
SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

Traditional unimodal biometric systems may face challenges related to susceptibility


to spoofing attacks, limited accuracy, and lack of adaptability to changing
environmental conditions. These limitations necessitate the development of more
advanced and multimodal approaches. The existing biometric recognition systems
predominantly rely on unimodal approaches, where individual traits such as iris, face,
or finger vein are utilized independently for identification or verification. While these
systems have demonstrated considerable success, they are often limited in their
ability to deal with environmental variations, intra-class variations, and spoofing
attacks.

DISADVANTAGES

 Unimodal biometric systems, particularly those relying solely on iris, face, or finger vein traits, are often sensitive to environmental factors such as variations in lighting, angle, or image quality. This sensitivity can result in decreased recognition accuracy and reliability in real-world scenarios where environmental conditions are not controlled.

 Deploying multiple unimodal biometric systems for different applications can lead to scalability challenges in terms of infrastructure, maintenance, and operational costs. Managing and integrating separate systems for iris, face, and finger vein recognition can be complex and resource-intensive.

 Relying on a single biometric trait for authentication creates a single point of


failure, where the system's effectiveness hinges entirely on the integrity of that
particular trait. Any compromise or failure in the biometric trait can lead to
authentication failures or security breaches.

3.2 PROPOSED SYSTEM

The proposed multimodal biometric recognition system addresses the disadvantages of


traditional systems by leveraging deep learning techniques and combining
information from iris, face, and finger vein modalities. The use of multiple modalities
enhances security, accuracy, and resistance to spoofing attacks.

The proposed Deep Learning Approach for Multimodal Biometric Recognition System
Based on Fusion of Iris, Face, and Finger Vein Traits aims to overcome the
limitations of existing unimodal biometric systems by integrating multiple biometric
modalities using advanced deep learning techniques.

ADVANTAGES

 The proposed system integrates iris, face, and finger vein traits using advanced
fusion strategies. By combining information from multiple modalities, the system
can exploit the complementary nature of biometric traits, leading to enhanced
accuracy, robustness, and security in biometric recognition.

 Deep learning techniques, particularly convolutional neural networks (CNNs), are


employed for automatic feature extraction from iris, face, and finger vein images.
Deep learning models have demonstrated superior performance in learning
discriminative features from large-scale datasets, allowing the system to capture
complex patterns and variations in biometric traits.

 The fusion of iris, face, and finger vein traits results in the creation of a
comprehensive biometric template for individual identification or verification.
This template encompasses unique characteristics from multiple modalities,
offering a more reliable and discriminative representation of an individual's
biometric traits.
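Once a fused template exists, verification reduces to comparing the enrolled template against a probe template. A minimal sketch using cosine similarity follows; the similarity measure, the toy templates, and the threshold are illustrative assumptions, not the project's actual matcher:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two fused biometric templates."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled, probe, threshold=0.8):
    """Accept the claim when the probe template is close to the enrolled one."""
    return cosine_similarity(enrolled, probe) >= threshold

enrolled = np.array([1.0, 0.0, 1.0, 0.0])        # toy enrolled template
probe_genuine = np.array([0.9, 0.1, 1.0, 0.0])   # same person, slight variation
probe_impostor = np.array([0.0, 1.0, 0.0, 1.0])  # different person
print(verify(enrolled, probe_genuine))   # True
print(verify(enrolled, probe_impostor))  # False
```

The threshold trades off the false acceptance rate against the false rejection rate; tightening it rejects more impostors at the cost of rejecting more genuine users.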

CHAPTER-4
SYSTEM REQUIREMENTS

4.1 FUNCTIONAL REQUIREMENTS

 USER: The user module of the Deep Learning Approach for Multimodal Biometric Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits facilitates interaction between the system and the end-users. It encompasses various components aimed at providing a seamless and user-friendly experience for enrollment, authentication, and system management.

4.2 NON FUNCTIONAL REQUIREMENTS

4.2.1 HARDWARE REQUIREMENTS

 System : i3 or above.
 Ram : 4 GB.
 Hard Disk : 40 GB

4.2.2 SOFTWARE REQUIREMENTS

 Operating system : Windows 8 or above
 Coding Language : Python

CHAPTER-5
SYSTEM STUDY

5.1 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

5.2 FEASIBILITY ANALYSIS

Three key considerations involved in the feasibility analysis are:

 ECONOMIC FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMIC FEASIBILITY

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or null changes are required for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.

CHAPTER-6
SYSTEM DESIGN

6.1 SYSTEM ARCHITECTURE

6.2 UML DIAGRAMS

6.2.1 USECASE DIAGRAM

6.2.2 CLASS DIAGRAM

6.2.3 SEQUENCE DIAGRAM

6.2.4 COLLABRATION DIAGRAM

6.2.5 ACTIVITY DIAGRAM

6.2.6 COMPONENT DIAGRAM

6.2.7 DEPLOYMENT DIAGRAM

6.2.8 ER DIAGRAM

6.2.9 DATA SETS
 FACE DATA SET

 FINGER VEIN DATA SET

 IRIS DATA SET

CHAPTER-7
INPUT AND OUTPUT DESIGN
7.1 INPUT DESIGN

The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation and the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed to provide security and ease of use while retaining privacy. Input design considered the following things:

 What data should be given as input?

 How the data should be arranged or coded?

 The dialog to guide the operating personnel in providing input.

 Methods for preparing input validations and steps to follow when errors occur.

7.1.1 OBJECTIVES

1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all data manipulations can be performed. It also provides record viewing facilities.

3. When data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed so that the user is never left in a maze. Thus the objective of input design is to create an input layout that is easy to follow.

7.2 OUTPUT DESIGN


A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, as well as the hard copy output. It is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and helps user decision-making.

1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy to use and effective. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by


the system.

The output form of an information system should accomplish one or more of the
following objectives.

 Convey information about past activities, current status, or projections of the future.

 Signal important events, opportunities, problems, or warnings.

 Trigger an action.

 Confirm an action.

CHAPTER-8
IMPLEMENTATION

8.1 MODULES

 USER
8.1.1 MODULE DESCRIPTION
USER: The user module of the Deep Learning Approach for Multimodal Biometric
Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits facilitates
interaction between the system and the end-users. It encompasses various
components aimed at providing a seamless and user-friendly experience for
enrollment, authentication, and system management. The user module plays a crucial
role in ensuring the effectiveness, usability, and security of the multimodal biometric
recognition system, fostering trust and acceptance among end-users. By providing
intuitive interfaces, robust authentication mechanisms, and comprehensive user
management capabilities, the user module contributes to the seamless integration of
biometric technology into various real-world applications.

8.2 SOURCE CODE

ALLTRAIN.PY

import os
import cv2
import numpy as np
import pickle
from keras.utils.np_utils import to_categorical
from keras.layers import MaxPooling2D
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D
from keras.models import Sequential, Model, load_model
from keras.models import model_from_json
from keras.callbacks import ModelCheckpoint
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

if os.path.exists("model/all_X.txt.npy"):
    X = np.load("model/all_X.txt.npy")
    Y = np.load("model/all_Y.txt.npy")
else:
    face_X = np.load("model/faceX.txt.npy")
    face_Y = np.load("model/faceY.txt.npy")
    face_X = face_X.astype('float32')
    face_X = face_X / 255
    indices = np.arange(face_X.shape[0])
    np.random.shuffle(indices)
    face_X = face_X[indices]
    face_Y = face_Y[indices]

    finger_X = np.load("model/fingerX.txt.npy")
    finger_Y = np.load("model/fingerY.txt.npy")
    finger_X = finger_X.astype('float32')
    finger_X = finger_X / 255
    indices = np.arange(finger_X.shape[0])
    np.random.shuffle(indices)
    finger_X = finger_X[indices]
    finger_Y = finger_Y[indices]

    iris_X = np.load("model/irisX.txt.npy")
    iris_Y = np.load("model/irisY.txt.npy")
    iris_X = iris_X.astype('float32')
    iris_X = iris_X / 255
    indices = np.arange(iris_X.shape[0])
    np.random.shuffle(indices)
    iris_X = iris_X[indices]
    iris_Y = iris_Y[indices]

    face_model = load_model("model/face_weights.hdf5")
    finger_model = load_model("model/finger_weights.hdf5")
    iris_model = load_model("model/iris_weights.hdf5")

    face_features = Model(face_model.inputs, face_model.layers[-2].output)  # create face model
    face_features = face_features.predict(face_X)  # extracting face features from vgg16
    finger_features = Model(finger_model.inputs, finger_model.layers[-2].output)  # create finger model
    finger_features = finger_features.predict(finger_X)  # extracting finger features from vgg16
    iris_features = Model(iris_model.inputs, iris_model.layers[-2].output)  # create iris model
    iris_features = iris_features.predict(iris_X)  # extracting iris features from vgg16

    X = np.hstack((face_features, finger_features[0:100], iris_features[0:100]))
    Y = face_Y
    indices = np.arange(X.shape[0])
    np.random.shuffle(indices)
    X = X[indices]
    Y = Y[indices]
    Y = to_categorical(Y)
    X = np.reshape(X, (X.shape[0], 16, 16, 3))
    np.save("model/all_X.txt", X)
    np.save("model/all_Y.txt", Y)
    print("Extraction Completed")

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)

fusion_model = Sequential()
fusion_model.add(Convolution2D(32, (3, 3),
                 input_shape=(X_train.shape[1], X_train.shape[2], X_train.shape[3]),
                 activation='relu'))
fusion_model.add(MaxPooling2D(pool_size=(2, 2)))
fusion_model.add(Convolution2D(32, (3, 3), activation='relu'))
fusion_model.add(MaxPooling2D(pool_size=(2, 2)))
fusion_model.add(Flatten())
fusion_model.add(Dense(units=256, activation='relu'))
fusion_model.add(Dense(units=y_train.shape[1], activation='softmax'))
fusion_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

if os.path.exists("model/fusion_weights.hdf5") == False:
    model_check_point = ModelCheckpoint(filepath='model/fusion_weights.hdf5',
                                        verbose=1, save_best_only=True)
    hist = fusion_model.fit(X_train, y_train, batch_size=32, epochs=50,
                            validation_data=(X_test, y_test),
                            callbacks=[model_check_point], verbose=1)
    f = open('model/fusion_history.pckl', 'wb')
    pickle.dump(hist.history, f)
    f.close()
else:
    fusion_model.load_weights("model/fusion_weights.hdf5")

predict = fusion_model.predict(X_test)
predict = np.argmax(predict, axis=1)
y_test1 = np.argmax(y_test, axis=1)
acc = accuracy_score(y_test1, predict)
print(acc)
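As a sanity check on the reshape at the end of ALLTRAIN.PY: assuming each per-modality network contributes a 256-dimensional penultimate-layer feature vector (an assumption; the layer sizes are not shown in the script), the concatenated vector has 3 × 256 = 768 values, which exactly fills the (16, 16, 3) input tensor expected by the fusion CNN:

```python
import numpy as np

per_modality = 256                    # hypothetical feature size per modality
fused_len = 3 * per_modality          # face + finger vein + iris
assert fused_len == 16 * 16 * 3       # 768 values fill a (16, 16, 3) tensor

# 100 samples, matching the [0:100] slicing used in the script.
fused = np.random.rand(100, fused_len).astype('float32')
X = np.reshape(fused, (fused.shape[0], 16, 16, 3))
print(X.shape)  # (100, 16, 16, 3)
```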

MULTIMODALBIOMETRIC.HTML

<!DOCTYPE html>
<html>
<head><meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>MultimodalBiometric</title><script src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.1.10/require.min.js"></script>
<style type="text/css">
pre { line-height: 125%; }
td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; }
span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; }
td.linenos .special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; }
span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; }
.highlight .hll { background-color: var(--jp-cell-editor-active-background) }
.highlight { background: var(--jp-cell-editor-background); color: var(--jp-mirror-editor-variable-color) }
.highlight .c { color: var(--jp-mirror-editor-comment-color); font-style: italic } /* Comment */
.highlight .err { color: var(--jp-mirror-editor-error-color) } /* Error */
.highlight .k { color: var(--jp-mirror-editor-keyword-color); font-weight: bold } /* Keyword */
.highlight .o { color: var(--jp-mirror-editor-operator-color); font-weight: bold } /* Operator */
.highlight .p { color: var(--jp-mirror-editor-punctuation-color) } /* Punctuation */
.highlight .ch { color: var(--jp-mirror-editor-comment-color); font-style: italic } /* Comment.Hashbang */
.highlight .cm { color: var(--jp-mirror-editor-comment-color); font-style: italic } /* Comment.Multiline */
.highlight .cp { color: var(--jp-mirror-editor-comment-color); font-style: italic } /* Comment.Preproc */
.highlight .cpf { color: var(--jp-mirror-editor-comment-color); font-style: italic } /* Comment.PreprocFile */
.highlight .c1 { color: var(--jp-mirror-editor-comment-color); font-style: italic } /* Comment.Single */
.highlight .cs { color: var(--jp-mirror-editor-comment-color); font-style: italic } /* Comment.Special */
.highlight .kc { color: var(--jp-mirror-editor-keyword-color); font-weight: bold } /* Keyword.Constant */
.highlight .kd { color: var(--jp-mirror-editor-keyword-color); font-weight: bold } /* Keyword.Declaration */
.highlight .kn { color: var(--jp-mirror-editor-keyword-color); font-weight: bold } /* Keyword.Namespace */
.highlight .kp { color: var(--jp-mirror-editor-keyword-color); font-weight: bold } /* Keyword.Pseudo */
.highlight .kr { color: var(--jp-mirror-editor-keyword-color); font-weight: bold } /* Keyword.Reserved */
.highlight .kt { color: var(--jp-mirror-editor-keyword-color); font-weight: bold } /* Keyword.Type */
.highlight .m { color: var(--jp-mirror-editor-number-color) } /* Literal.Number */
.highlight .s { color: var(--jp-mirror-editor-string-color) } /* Literal.String */
.highlight .ow { color: var(--jp-mirror-editor-operator-color); font-weight: bold } /* Operator.Word */
.highlight .pm { color: var(--jp-mirror-editor-punctuation-color) } /* Punctuation.Marker */
.highlight .w { color: var(--jp-mirror-editor-variable-color) } /* Text.Whitespace */
.highlight .mb { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Bin */
.highlight .mf { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Float */
.highlight .mh { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Hex */
.highlight .mi { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Integer */
.highlight .mo { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Oct */
.highlight .sa { color: var(--jp-mirror-editor-string-color) } /* Literal.String.Affix */
.highlight .sb { color: var(--jp-mirror-editor-string-color) } /* Literal.String.Backtick */
.highlight .sc { color: var(--jp-mirror-editor-string-color) } /* Literal.String.Char */
.highlight .dl { col…

[16:50, 22/05/2024] Revathi Mam.Manac: </div>


</div>

</div>

</div>

</div><div id="cell-id=2efa8b30" class="jp-Cell jp-CodeCell jp-Notebook-cell jp-


mod-noOutputs ">

<div class="jp-Cell-inputWrapper">

<div class="jp-Collapser jp-InputCollapser jp-Cell-inputCollapser">

</div>

<div class="jp-InputArea jp-Cell-inputArea">

<div class="jp-InputPrompt jp-InputArea-prompt">In&nbsp;[&nbsp;]:</div>

<div class="jp-CodeMirrorEditor jp-Editor jp-InputArea-editor" data-type="inline">

<div class="CodeMirror cm-s-jupyter">

<div class=" highlight hl-ipython3"><pre><span></span>

</pre></div>

</div>

35
</div>

</div>

</div>

</div>

</body>

</html>

36
RESULTS/DISCUSSION

CHAPTER-9
RESULTS/DISCUSSION

9.1 SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way to
check the functionality of components, sub-assemblies, assemblies, and/or a finished
product. It is the process of exercising software with the intent of ensuring that the
software system meets its requirements and user expectations and does not fail in an
unacceptable manner. There are several types of tests, and each test type addresses a
specific testing requirement.

TYPES OF TESTS

UNIT TESTING

Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid outputs.
All decision branches and internal code flow should be validated. It is the testing of
individual software units of the application. It is done after the completion of an
individual unit and before integration. This is structural testing, which relies on
knowledge of the unit's construction and is invasive. Unit tests perform basic tests at
component level and test a specific business process, application, and/or system
configuration. Unit tests ensure that each unique path of a business process performs
accurately to the documented specifications and contains clearly defined inputs and
expected results.
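As a minimal illustration of unit testing in this project's Python setting, a test case can validate one small helper in isolation; `normalize_pixels` below is a hypothetical preprocessing function, not taken from the project code.

```python
import unittest

def normalize_pixels(pixels):
    """Hypothetical helper: scale raw 0-255 pixel values into [0, 1]."""
    return [p / 255.0 for p in pixels]

class TestNormalizePixels(unittest.TestCase):
    def test_output_range(self):
        # Every normalized value must land inside the documented [0, 1] range.
        out = normalize_pixels([0, 128, 255])
        self.assertEqual(out[0], 0.0)
        self.assertEqual(out[-1], 1.0)
        self.assertTrue(all(0.0 <= v <= 1.0 for v in out))

if __name__ == "__main__":
    # exit=False keeps the runner from terminating the interpreter.
    unittest.main(argv=["ignored"], exit=False, verbosity=2)
```

Each such test exercises one unique path of a unit against its documented inputs and expected results, which is exactly the contract described above.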

INTEGRATION TESTING

Integration tests are designed to test integrated software components
to determine whether they actually run as one program. Testing is event driven and is more
concerned with the basic outcome of screens or fields. Integration tests demonstrate
that although the components were individually satisfactory, as shown by
successful unit testing, the combination of components is correct and consistent.
Integration testing is specifically aimed at exposing the problems that arise from
the combination of components.

FUNCTIONAL TEST

Functional tests provide systematic demonstrations that functions tested


are available as specified by the business and technical requirements, system
documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on
requirements, key functions, or special test cases. In addition, systematic coverage
pertaining to identifying business process flows, data fields, predefined processes, and
successive processes must be considered for testing. Before functional testing is
complete, additional tests are identified and the effective value of current tests is
determined.

SYSTEM TEST

System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration-oriented system integration test.

System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

WHITE BOX TESTING

White box testing is testing in which the software tester has
knowledge of the inner workings, structure, and language of the software, or at least
its purpose. It is used to test areas that cannot be reached from a black-box level.

BLACK BOX TESTING

Black box testing is testing the software without any knowledge of the
inner workings, structure, or language of the module being tested. Black box tests, like
most other kinds of tests, must be written from a definitive source document, such as
a specification or requirements document. It is testing in which the software under
test is treated as a black box: you cannot “see” into it. The test provides inputs and
responds to outputs without considering how the software works.

UNIT TESTING

Unit testing is usually conducted as part of a combined code and unit


test phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.

TEST STRATEGY AND APPROACH

Field testing will be performed manually and functional tests will be


written in detail.

Test objectives

 All field entries must work properly.

 Pages must be activated from the identified link.

 The entry screen, messages, and responses must not be delayed.

Features to be tested

 Verify that the entries are of the correct format.

 No duplicate entries should be allowed.

 All links should take the user to the correct page.

Integration Testing

Software integration testing is the incremental integration testing of two
or more integrated software components on a single platform to produce failures
caused by interface defects.

The task of the integration test is to check that components or software applications,
e.g. components in a software system or, one step up, software applications at the
company level, interact without error.

9.2 SCREENSHOTS

FIG-1: The above dataset images are used to train the fusion model and test its performance.

Extension Concept

In the proposed work, the author's fusion model used a single Softmax layer, whose
recognition accuracy may be limited. As an extension, we added multiple CNN and
MAXPOOL layers followed by a Softmax layer; these refine the fusion features several
times, yielding more discriminative features and, in turn, higher accuracy.
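The extension head can be sketched at the level of array operations: a fused feature vector passes through two convolution/max-pool stages before a dense Softmax layer. This is a minimal numpy sketch of the idea only; the vector length, kernel shapes, and random weights below are illustrative stand-ins, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    # Valid 1-D convolution of vector x with each kernel row, then ReLU.
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)  # (len(x)-k+1, k)
    return np.maximum(windows @ kernels.T, 0.0)               # (positions, n_kernels)

def maxpool1d(feat, size=2):
    # Down-sample along the position axis by taking the max of each window.
    n = (feat.shape[0] // size) * size
    return feat[:n].reshape(-1, size, feat.shape[1]).max(axis=1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Fused feature vector from the three modality models (illustrative length).
fused = rng.normal(size=384)

# First CNN + MAXPOOL stage over the fused features.
h = maxpool1d(conv1d_relu(fused, rng.normal(size=(8, 5))))
# Second stage refines the features once more (one channel kept for simplicity).
h = maxpool1d(conv1d_relu(h[:, 0], rng.normal(size=(8, 5))))
flat = h.ravel()

# Final dense + Softmax layer over (say) 10 person IDs.
scores = softmax(rng.normal(size=(10, flat.size)) @ flat)
print(scores.sum())  # class probabilities sum to 1
```

Repeating the convolution/pooling stages before the Softmax is what distinguishes this head from the single-Softmax fusion layer in the proposed work.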

The author implemented this concept in a JUPYTER notebook, and we did the same;
the code and output screens are shown below, with comments in blue.

FIG-2: In the above screen, the required Python classes and packages are imported.

FIG-3: In the above screen, the person ID labels found in the dataset are identified and displayed.

FIG-4: In the above screen, the images for all three biometric modalities are loaded.

FIG-5: In the above screen, the number of images loaded for each biometric modality is displayed.

FIG-6: The above graph shows the number of images available in the dataset for each
person, where the x-axis represents the person ID and the y-axis represents the counts.
All persons have an equal number of images, which we generated through an
augmentation technique to avoid class imbalance.

FIG-7: In the above screen, preprocessing techniques such as shuffling and
normalization are applied to the image features.
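The shuffling and normalization step can be sketched as follows; `preprocess` is an illustrative stand-in for the notebook's preprocessing cell, assuming images arrive as a numpy array of 0-255 pixel values.

```python
import numpy as np

def preprocess(images, labels, seed=42):
    """Shuffle image/label pairs in unison and scale pixel values to [0, 1]."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    return images[idx].astype("float32") / 255.0, labels[idx]

# Tiny stand-in batch: 6 grayscale 4x4 "images" with person-ID labels.
X = np.arange(6 * 16, dtype=np.uint8).reshape(6, 4, 4)
y = np.array([0, 0, 1, 1, 2, 2])
Xs, ys = preprocess(X, y)
print(Xs.min(), Xs.max())  # values now lie in [0, 1]
```

Shuffling images and labels with the same permutation keeps each image paired with its person ID, which is why a single index array is used for both.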

FIG-8: In the above screen, the dataset is split into train and test sets; the
application uses 80% of the dataset images for training and 20% for testing.
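A minimal sketch of the 80/20 split, written against plain numpy rather than the notebook's actual splitting call:

```python
import numpy as np

def split_80_20(X, y, seed=42):
    """Randomly split features/labels into 80% train and 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(0.8 * len(X))
    return X[idx[:cut]], X[idx[cut:]], y[idx[:cut]], y[idx[cut:]]

X = np.arange(100).reshape(50, 2)
y = np.arange(50)
X_train, X_test, y_train, y_test = split_80_20(X, y)
print(len(X_train), len(X_test))  # 40 10
```

Shuffling before cutting ensures the test set is not biased toward the persons that happen to appear last in the dataset.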

FIG-9: In the above screen, a function is defined to calculate accuracy, precision,
and other metrics.
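The metric function can be sketched by hand with numpy; this `evaluate` is an illustrative stand-in for the notebook's function (which may instead call a metrics library), computing accuracy plus macro-averaged precision and recall.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Accuracy plus macro-averaged precision and recall, computed by hand."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accuracy = float((y_true == y_pred).mean())
    precisions, recalls = [], []
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        predicted = np.sum(y_pred == c)   # predicted positives for class c
        actual = np.sum(y_true == c)      # actual members of class c
        precisions.append(tp / predicted if predicted else 0.0)
        recalls.append(tp / actual if actual else 0.0)
    return {"accuracy": accuracy,
            "precision": float(np.mean(precisions)),
            "recall": float(np.mean(recalls))}

metrics = evaluate([0, 1, 1], [0, 1, 0])
print(metrics)
```

Macro averaging weights every person ID equally, which matches a dataset balanced by augmentation as described for FIG-6.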

FIG-10: In the above screen, VGG16 is trained on the face features.

FIG-11: In the above screen, VGG16 is trained on the finger vein images.

FIG-12: In the above screen, the iris features are trained using the VGG16 model.

FIG-13: In the above screen, accuracy and other metrics are calculated by taking
the features from all three models and their predictions; the blue text shows that the
fusion features achieved 95% accuracy.

FIG-14: In the above screen, the blue comments describe extracting and stacking the
features from all three models; the stacked features are then trained with the
extension's multiple CNN, MAXPOOL, and Softmax layers instead of a single
Softmax layer. Executing the above code produces the output below.
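The stacking step amounts to concatenating each sample's per-modality feature vectors into one fused vector; the sketch below assumes illustrative 128-D features per modality rather than the real VGG16 outputs.

```python
import numpy as np

def stack_fusion_features(face_feats, vein_feats, iris_feats):
    """Concatenate per-sample feature vectors from the three modality models."""
    return np.hstack([face_feats, vein_feats, iris_feats])

# Illustrative per-sample feature matrices (4 samples, 128-D per modality).
face = np.ones((4, 128))
vein = np.ones((4, 128))
iris = np.ones((4, 128))
fused = stack_fusion_features(face, vein, iris)
print(fused.shape)  # (4, 384)
```

The fused matrix is what the extension's CNN/MAXPOOL/Softmax head is trained on in place of the single Softmax layer.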

FIG-15: In the above screen, the extension fusion score model achieved 100%
accuracy, and the other metrics are also 100%.

FIG-16: The above graph compares the fusion features model and the extension
fusion score model: the x-axis shows accuracy and the other metrics as different
coloured bars, the y-axis shows their values, and of the two models the extension
fusion model achieved the higher accuracy.

FIG-17: In the above screen, a predict function is defined that reads all three
biometric images, extracts their features, fuses the three feature sets, and applies
the fusion model to predict the person ID.
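The predict pipeline can be sketched as below. `extractors` and `fusion_model` are hypothetical stand-ins for the trained VGG16 feature extractors and the fusion head; any callables/objects with the same shape of interface would do.

```python
import numpy as np

def predict_person(face_img, vein_img, iris_img, extractors, fusion_model):
    """Extract features per modality, fuse them, and predict the person ID.

    `extractors` is a (face, vein, iris) triple of feature-extractor callables,
    and `fusion_model` is any object with a scikit-learn-style predict(); both
    are illustrative stand-ins for the project's trained models.
    """
    feats = [ext(img) for ext, img in zip(extractors, (face_img, vein_img, iris_img))]
    fused = np.hstack(feats).reshape(1, -1)  # one fused row vector per query
    return fusion_model.predict(fused)[0]

# Dummy stand-ins so the pipeline can be exercised end to end.
class ConstantModel:
    def predict(self, X):
        return np.array([7] * len(X))  # always answers "person 7"

dummy_extractor = lambda img: np.asarray(img, dtype=float).ravel()
img = np.zeros((2, 2))
pid = predict_person(img, img, img, (dummy_extractor,) * 3, ConstantModel())
print(pid)  # 7
```

At inference time the fusion happens exactly as in training: the three feature vectors are stacked before the fusion model ever sees them.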

FIG-18: In the above screen, the predict function is called with all three images,
and the recognized person ID is displayed in red.

FIG-19: In the above screen, the testing output for different sample images is shown.

FIG-20: In the above screen, the recognition of other samples can be seen.

CONCLUSION

CHAPTER-10
CONCLUSION

10.1 CONCLUSION
In conclusion, the "Deep Learning Approach for Multimodal Biometric Recognition
System Based on Fusion of Iris, Face, and Finger Vein Traits" project represents a
significant advancement in biometric authentication. The integration of deep learning
and multimodal fusion contributes to a highly secure and accurate recognition system
suitable for various applications.

10.2 FUTURE SCOPE

 Future research can explore more sophisticated fusion strategies for
integrating the iris, face, and finger vein modalities. This includes investigating
dynamic fusion techniques that adaptively combine modalities based on their
reliability and relevance in different scenarios.

 As deep learning continues to evolve, future research can explore novel
architectures, training techniques, and regularization methods to further
improve feature extraction and fusion in multimodal biometric recognition
systems.

REFERENCES

CHAPTER-11
REFERENCES
1. Smith, J. "Challenges in Unimodal Biometric Systems: A Review of Limitations and
Vulnerabilities."
2. Johnson, E. "Deep Learning in Biometric Recognition: Applications
and Advancements."
3. Brown, M. "Multimodal Biometric Systems: Integrating Iris, Face, and Finger Vein
Traits."
4. Davis, S. "Deep Neural Networks in Iris Recognition: Achieving Robust and
Accurate Authentication."
5. White, D. "Fusion Strategies in Multimodal Biometrics: A Comprehensive Review."
6. Jain, A. K., Ross, A., & Nandakumar, K. (2016). Introduction to biometrics. Springer.

7. Zhang, Z., & Wang, Y. (2016). Deep learning for biometrics: A survey. In
International Joint Conference on Biometrics (IJCB) (pp. 1-8). IEEE.

8. Ross, A., & Jain, A. K. (2011). Multimodal biometrics: An overview. In Advances in
Biometrics (pp. 130-139). Springer.

9. Li, W., Kang, B., & Jiang, W. (2020). Deep learning for multimodal biometrics:
Challenges and future directions. IEEE Transactions on Biometrics, Behavior, and
Identity Science, 2(3), 171-184.

10. Chen, Y., & Ross, A. (2020). Deep learning for iris recognition: A survey. IEEE
Transactions on Information Forensics and Security, 15, 1304-1323.
