Mini Project Doc
DECLARATION
We hereby declare that the project report titled “DEEP LEARNING APPROACH FOR MULTIMODAL BIOMETRIC RECOGNITION SYSTEM BASED ON FUSION OF IRIS, FACE, AND FINGER VEIN TRAITS”, carried out under the guidance of P. Anupama, Sree Dattha Institute of Engineering and Science, Ibrahimpatnam, and submitted in partial fulfillment of the requirements for the award of B. Tech. in Computer Science and Engineering, is a record of bonafide work carried out by us, and the results embodied in this project have not been reproduced or copied from any source.
The results embodied in this project report have not been submitted to any other University or Institute for the award of any Degree or Diploma.
CERTIFICATE
This is to certify that the project entitled “DEEP LEARNING APPROACH FOR MULTIMODAL BIOMETRIC RECOGNITION SYSTEM BASED ON FUSION OF IRIS, FACE, AND FINGER VEIN TRAITS” is being submitted by K. Nikesh (22E45A0532), D. Harshavardhan (21E41A0573), S. Tharun (21E41A0577), and M. Anil (21E41A0592) in partial fulfillment of the requirements for the award of B. Tech IV year, I semester in Computer Science and Engineering to the Jawaharlal Nehru Technological University Hyderabad. It is a record of bonafide work carried out by them under our guidance and supervision during the academic year 2024-25.
The results embodied in this thesis have not been submitted to any other University or Institute for the award of any degree or diploma.
External Examiner
ACKNOWLEDGEMENT
Apart from our efforts, the success of any project depends largely on the encouragement and guidance of many others. We take this opportunity to express our gratitude to the people who have been instrumental in the successful completion of this project.
We are also thankful to Dr. Sk Mahaboob Basha, Professor and Head of the Department of Computer Science and Engineering, for providing encouragement and support for completing this project successfully.
The guidance and support were received from all the members of Sree Dattha
Institute of Engineering and Science who contributed to the completion of the
project. We are grateful for their constant support and help.
Finally, we would like to take this opportunity to thank our families for their constant encouragement, without which this project would not have been completed. We sincerely acknowledge and thank all those who supported us, directly and indirectly, in the completion of this project.
ABSTRACT
1 INTRODUCTION
1.1 INTRODUCTION
2 LITERATURE SURVEY
2.1 LITERATURE REVIEW
3 SYSTEM ANALYSIS
4 SYSTEM REQUIREMENTS
5 SYSTEM STUDY
6 SYSTEM DESIGN
6.2.8 ER DIAGRAM
7.1.1 OBJECTIVES
8 IMPLEMENTATION
8.1 MODULES
9 RESULT/DISCUSSION
9.2 SCREENSHOTS
10 CONCLUSION
10.1 CONCLUSION
11 REFERENCES
CHAPTER-1
INTRODUCTION
1.1 INTRODUCTION
By integrating iris, face, and finger vein traits using deep learning-based fusion
strategies, this project seeks to address the limitations of traditional unimodal
biometric systems and advance the state-of-the-art in biometric recognition
technology. The resulting multimodal system is expected to offer enhanced accuracy,
robustness, and security, making it suitable for deployment in various real-world
applications requiring reliable authentication mechanisms.
CHAPTER-2
LITERATURE SURVEY
TITLE: A Multimodal Biometric System for Iris and Face Traits Based on Hybrid
Approaches and Score Level Fusion
TITLE: "Deep Learning in Biometric Recognition: State-of-the-Art Approaches"
ABSTRACT: This survey by David A. Thompson delves into face recognition using
deep learning, with a focus on its integration into multimodal biometric systems. The
review covers deep face recognition models, training strategies, and the synergies
between face, iris, and finger vein traits for robust and secure biometric recognition.
biometrics (multiple modalities) is required. In this paper, we propose a multimodal biometric system for person recognition using face, fingerprint, and finger vein images. Addressing this problem, we propose an efficient matching algorithm that is based on a secondary calculation of the Fisher vector and uses three biometric modalities: face, fingerprint, and finger vein. The three modalities are combined, and fusion is performed at the feature level. Furthermore, building on this feature-fusion method, the paper studies the fake features that appear in practical scenarios. Liveness detection is appended to the system to detect whether a picture is real or fake based on the DCT; fake pictures are then removed to limit their impact on the accuracy rate and to increase the robustness of the system. The experimental results showed that the designed framework can achieve an excellent recognition rate and provide higher security than a unimodal biometric-based system, which is very important for an IoMT platform.
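The DCT-based liveness idea mentioned above can be sketched as follows: a recaptured (fake) image tends to lose high-frequency detail, so the share of spectral energy outside the low-frequency corner of the 2-D DCT can serve as a crude liveness cue. This is a minimal illustration with an illustrative threshold and a pure-numpy DCT-II, not the surveyed paper's implementation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def high_freq_ratio(img):
    """Share of spectral energy outside the low-frequency corner of the 2-D DCT."""
    h, w = img.shape
    coeffs = dct_matrix(h) @ img @ dct_matrix(w).T
    energy = coeffs ** 2
    low = energy[: h // 4, : w // 4].sum()  # low-frequency block, including DC
    total = energy.sum() + 1e-12
    return 1.0 - low / total

def looks_live(img, threshold=0.05):
    """Flag an image as live if enough high-frequency detail survives (hypothetical threshold)."""
    return high_freq_ratio(img) > threshold
```

A sharp, detailed capture scores a high ratio, while a flat or heavily blurred recapture concentrates its energy near DC and is rejected.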
CHAPTER-3
SYSTEM ANALYSIS
DISADVANTAGES
Unimodal biometric systems, particularly those relying solely on iris, face, or finger vein traits, are often sensitive to environmental factors such as variations in lighting, angle, or image quality. This sensitivity can result in decreased recognition accuracy and reliability in real-world scenarios where environmental conditions are not controlled.
Deploying multiple unimodal biometric systems for different applications can lead to scalability challenges in terms of infrastructure, maintenance, and operational costs. Managing and integrating separate systems for iris, face, and finger vein recognition can be complex and resource-intensive.
3.2 PROPOSED SYSTEM
The proposed Deep Learning Approach for Multimodal Biometric Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits aims to overcome the limitations of existing unimodal biometric systems by integrating multiple biometric modalities using advanced deep learning techniques.
ADVANTAGES
The proposed system integrates iris, face, and finger vein traits using advanced fusion strategies. By combining information from multiple modalities, the system can exploit the complementary nature of biometric traits, leading to enhanced accuracy, robustness, and security in biometric recognition.
The fusion of iris, face, and finger vein traits results in the creation of a comprehensive biometric template for individual identification or verification. This template encompasses unique characteristics from multiple modalities, offering a more reliable and discriminative representation of an individual's biometric traits.
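The comprehensive template described above can be sketched as simple feature-level fusion: each modality's feature vector is L2-normalized (so no single modality dominates) and the three are concatenated into one template. The function names and dimensions are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    """Scale a feature vector to unit length so no modality dominates the fused template."""
    return v / (np.linalg.norm(v) + eps)

def fuse_features(iris_feat, face_feat, vein_feat):
    """Feature-level fusion: normalize each modality's vector, then concatenate."""
    return np.concatenate([l2_normalize(iris_feat),
                           l2_normalize(face_feat),
                           l2_normalize(vein_feat)])
```

The fused vector can then be matched or fed to a classifier, exactly as a unimodal feature vector would be.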
CHAPTER-4
SYSTEM REQUIREMENTS
USER: The user module of the Deep Learning Approach for Multimodal Biometric Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits facilitates interaction between the system and the end-users. It encompasses various components aimed at providing a seamless and user-friendly experience for enrollment, authentication, and system management.
HARDWARE REQUIREMENTS
System : i3 or above.
RAM : 4 GB.
Hard Disk : 40 GB.
CHAPTER-5
SYSTEM STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out, to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
5.2 FEASIBILITY ANALYSIS
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. It includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to offer some constructive criticism, which is welcomed, as he is the final user of the system.
CHAPTER-6
SYSTEM DESIGN
6.2.2 CLASS DIAGRAM
6.2.5 ACTIVITY DIAGRAM
6.2.7 DEPLOYMENT DIAGRAM
6.2.8 ER DIAGRAM
6.2.9 DATA SETS
FACE DATA SET:
IRIS DATA SET
CHAPTER-7
INPUT AND OUTPUT DESIGN
7.1 INPUT DESIGN
The input design is the link between the information system and the user. It comprises developing the specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
Methods for preparing input validations and steps to follow when errors occur.
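As a minimal sketch of such an input validation at enrollment, the check below rejects images with the wrong shape or pixel type before they reach the recognition models. The expected 64x64 8-bit grayscale format is a hypothetical assumption for illustration, not the project's actual specification.

```python
import numpy as np

EXPECTED_SHAPE = (64, 64)  # hypothetical enrollment image size (assumption)

def validate_biometric_image(img):
    """Return (ok, message) for a single enrollment image."""
    if not isinstance(img, np.ndarray):
        return False, "input must be a numpy array"
    if img.shape != EXPECTED_SHAPE:
        return False, f"expected shape {EXPECTED_SHAPE}, got {img.shape}"
    if img.dtype != np.uint8:
        return False, "expected 8-bit pixel values"
    return True, "ok"
```

The returned message gives the user an appropriate error prompt instead of letting a malformed capture fail silently downstream.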
7.1.1 OBJECTIVES
2. It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all the data manipulations can be performed. It also provides record-viewing facilities.
3. When the data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed, so that the user is not left in confusion. Thus, the objective of input design is to create an input layout that is easy to follow.
The output form of an information system should accomplish one or more of the following objectives:
Convey information about past activities, current status, or projections of the future.
Trigger an action.
Confirm an action.
CHAPTER-8
IMPLEMENTATION
8.1 MODULES
USER
8.1.1 MODULE DESCRIPTION
USER: The user module of the Deep Learning Approach for Multimodal Biometric Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits facilitates interaction between the system and the end-users. It encompasses various components aimed at providing a seamless and user-friendly experience for enrollment, authentication, and system management. The user module plays a crucial role in ensuring the effectiveness, usability, and security of the multimodal biometric recognition system, fostering trust and acceptance among end-users. By providing intuitive interfaces, robust authentication mechanisms, and comprehensive user management capabilities, the user module contributes to the seamless integration of biometric technology into various real-world applications.
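The enrollment and authentication flow that the user module supports can be sketched as a small template store. Matching by cosine similarity on fused feature vectors, and the 0.9 decision threshold, are illustrative assumptions for this sketch, not the project's actual mechanism.

```python
import numpy as np

class BiometricStore:
    """Toy template store: enroll fused feature vectors, verify by cosine similarity."""

    def __init__(self, threshold=0.9):  # hypothetical decision threshold
        self.templates = {}
        self.threshold = threshold

    def enroll(self, user_id, fused_features):
        """Store a unit-length template for this user."""
        self.templates[user_id] = fused_features / np.linalg.norm(fused_features)

    def authenticate(self, user_id, fused_features):
        """Accept if the probe's cosine similarity to the stored template clears the threshold."""
        if user_id not in self.templates:
            return False
        probe = fused_features / np.linalg.norm(fused_features)
        return float(self.templates[user_id] @ probe) >= self.threshold
```

A real deployment would additionally encrypt templates at rest and log authentication attempts; this sketch only shows the enroll/verify contract.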
8.2 SOURCE CODE
ALLTRAIN.PY
import os
import pickle
import numpy as np
from keras.models import Sequential, Model, load_model, model_from_json
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense
from keras.callbacks import ModelCheckpoint
from keras.utils import to_categorical

# (excerpt: variables such as face_X, face_model, hist and X_test are defined in
# portions of the file omitted here)

# Load previously fused features if available, else build them from the three modalities
if os.path.exists("model/all_X.txt.npy"):
    X = np.load("model/all_X.txt.npy")
    Y = np.load("model/all_Y.txt.npy")
else:
    face_X = face_X / 255      # normalize pixel values to [0, 1]
    finger_X = finger_X / 255
    indices = np.arange(finger_X.shape[0])
    np.random.shuffle(indices)  # shuffle finger vein images and labels together
    finger_X = finger_X[indices]
    finger_Y = finger_Y[indices]
    iris_X = iris_X.astype('float32')
    iris_X = iris_X / 255
    finger_model = load_model("model/finger_weights.hdf5")
    iris_model = load_model("model/iris_weights.hdf5")
    # truncate each network at its penultimate layer to use it as a feature extractor
    face_features = Model(face_model.inputs, face_model.layers[-2].output)      # create face model (VGG16)
    finger_features = Model(finger_model.inputs, finger_model.layers[-2].output)  # create finger model
    Y = face_Y

# Shuffle the fused features and one-hot encode the person-ID labels
indices = np.arange(X.shape[0])
np.random.shuffle(indices)
X = X[indices]
Y = Y[indices]
Y = to_categorical(Y)

# Fusion model: max-pooling over the stacked features, then dense layers
fusion_model = Sequential()
fusion_model.add(MaxPooling2D(pool_size=(2, 2)))
fusion_model.add(Flatten())
fusion_model.add(Dense(units=256, activation='relu'))

# Train only if no saved weights exist; checkpoint the best weights and save the history
if os.path.exists("model/fusion_weights.hdf5") == False:
    model_check_point = ModelCheckpoint(filepath='model/fusion_weights.hdf5',
                                        verbose=1, save_best_only=True)
    f = open('model/fusion_history.pckl', 'wb')
    pickle.dump(hist.history, f)
    f.close()
else:
    fusion_model.load_weights("model/fusion_weights.hdf5")
    predict = fusion_model.predict(X_test)
MULTIMODALBIOMETRIC.HTML
<!DOCTYPE html>
<html>
<head>
<title>MultimodalBiometric</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.1.10/require.min.js"></script>
<style type="text/css">
td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; }
.highlight .cs { color: var(--jp-mirror-editor-comment-color); font-style: italic } /* Comment.Special */
.highlight .mo { color: var(--jp-mirror-editor-number-color) } /* Literal.Number.Oct */
</style>
</head>
<body>
<div class="jp-Cell-inputWrapper">
</div>
</body>
</html>
CHAPTER-9
RESULTS/DISCUSSION
9.1 SYSTEM TESTING
TYPES OF TESTS
UNIT TESTING
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
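As a minimal sketch of such a unit test applied to this project, the assertions below exercise a normalization helper like the one used before training. The function and its expected behavior are illustrative, not taken verbatim from the project's test suite.

```python
import numpy as np

def normalize_images(batch):
    """Scale uint8 pixel values into [0, 1] floats, as done before training."""
    return batch.astype('float32') / 255

def test_normalize_images():
    """Unit test: dtype, value range, and a known pixel are all checked."""
    batch = np.array([[0, 128, 255]], dtype=np.uint8)
    out = normalize_images(batch)
    assert out.dtype == np.float32
    assert out.min() >= 0.0 and out.max() <= 1.0
    assert out[0, 2] == 1.0

test_normalize_images()
```

Each preprocessing step can get a test of this shape, so a regression in one unit is caught before the units are integrated.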
INTEGRATION TESTING
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event-driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
FUNCTIONAL TEST
Functional tests provide systematic demonstrations that the functions tested are available as specified and are properly invoked.
SYSTEM TEST
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration-oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.
WHITE BOX TESTING
White box testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
BLACK BOX TESTING
Black box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.
UNIT TESTING
Test objectives
Features to be tested
Integration Testing
The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.
9.2 SCREENSHOTS
FIG 1: The above dataset images are used to train and test the fusion model's performance.
Extension Concept
In the proposed work, the author used a single Softmax layer for the fusion model, whose recognition accuracy may be limited. As an extension, we have added multi-layer CNN and MaxPool blocks before the Softmax layer, which refine the fusion features multiple times. This helps obtain more discriminative features, which in turn yield higher accuracy.
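The extension head described above can be sketched in Keras as below. The number of classes, the fused-feature length, the reshape geometry, and the filter counts are all illustrative assumptions; the text does not give the project's exact architecture.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Reshape

NUM_CLASSES = 10   # hypothetical number of enrolled persons
FUSED_DIM = 768    # hypothetical length of the stacked fusion feature vector

def build_extension_head(fused_dim=FUSED_DIM, num_classes=NUM_CLASSES):
    """Multi-layer CNN + MaxPool refinement of fused features before the final Softmax."""
    model = Sequential([
        # Treat the 1-D fused feature vector as a small 2-D map so Conv2D can refine it
        Reshape((fused_dim // 32, 32, 1), input_shape=(fused_dim,)),
        Conv2D(32, (3, 3), activation='relu', padding='same'),
        MaxPooling2D(pool_size=(2, 2)),
        Conv2D(64, (3, 3), activation='relu', padding='same'),
        MaxPooling2D(pool_size=(2, 2)),
        Flatten(),
        Dense(256, activation='relu'),
        Dense(num_classes, activation='softmax'),  # final Softmax over person IDs
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```

The point of the extra Conv2D/MaxPooling2D stages is that the fused features pass through several nonlinear refinements before the Softmax, instead of a single linear-plus-Softmax mapping.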
The author implemented this concept in a Jupyter notebook, and we too implemented it in a Jupyter notebook; below are the code and output screens, with comments in blue.
FIG-3: In the above screen, the person ID labels found in the dataset are identified and displayed.
FIG-5: In the above screen, the number of images loaded for each biometric trait is displayed.
FIG-6: The above graph shows the number of images available in the dataset for each person, where the x-axis represents the person ID and the y-axis represents the counts. From the graph we can see that all persons have an equal number of images, which we generated through an augmentation technique to avoid a class-imbalance issue.
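The balancing described in FIG-6 can be sketched as below: under-represented persons are topped up with simple flipped and shifted copies until every class matches the largest class. The specific transforms are illustrative assumptions; the project's actual augmentation pipeline is not shown in the text.

```python
import numpy as np

def augment_once(img, rng):
    """One simple augmented copy: random horizontal flip plus a small horizontal roll."""
    out = img[:, ::-1] if rng.random() < 0.5 else img.copy()
    return np.roll(out, rng.integers(-2, 3), axis=1)

def balance_dataset(images, labels, rng=None):
    """Top up every class to the size of the largest class using augmented copies."""
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = list(images), list(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    target = max(counts.values())
    for c, n in counts.items():
        pool = [img for img, lab in zip(images, labels) if lab == c]
        for _ in range(target - n):
            images.append(augment_once(pool[rng.integers(len(pool))], rng))
            labels.append(c)
    return np.stack(images), np.array(labels)
```

After balancing, every person contributes the same number of samples, so the fusion model is not biased toward heavily represented identities.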
FIG-7: In the above screen, various preprocessing techniques, such as shuffling and normalization, are applied to the image features.
FIG-8: In the above screen, the dataset is split into train and test sets, with the application using 80% of the dataset images for training and 20% for testing.
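The 80/20 split in FIG-8 can be reproduced with a short helper; a minimal sketch that shuffles once and cuts at 80%, equivalent in spirit to scikit-learn's train_test_split but not the project's exact code.

```python
import numpy as np

def split_80_20(X, Y, seed=0):
    """Shuffle indices once, then take the first 80% for training and the rest for testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(0.8 * len(X))
    train, test = idx[:cut], idx[cut:]
    return X[train], X[test], Y[train], Y[test]
```

Fixing the seed keeps the split reproducible across training runs, so reported metrics are comparable.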
FIG-9: In the above screen, a function is defined to calculate accuracy, precision, and other metrics.
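The metrics function in FIG-9 can be sketched as follows; a macro-averaged numpy version for illustration, not the project's exact implementation.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1 over all classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    precisions, recalls = [], []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    p, r = float(np.mean(precisions)), float(np.mean(recalls))
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return {"accuracy": float(np.mean(y_true == y_pred)),
            "precision": p, "recall": r, "f1": f1}
```

Macro-averaging weights each person equally, which matters here because the dataset was explicitly balanced per person.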
FIG-11: In the above screen, VGG16 is trained on the finger vein images.
FIG-13: In the above screen, accuracy and other metrics are calculated by taking the features and predictions from all three models; in the blue text we can see that the Fusion Features obtained 95% accuracy.
FIG-14: In the above screen, read the blue comments to learn how features are extracted and stacked from all three models; the extracted features are then trained with the extension's multiple CNN, MaxPool, and Softmax layers instead of a single Softmax layer. After executing the above code, the output below is obtained.
FIG-15: In the above screen, the extension fusion score model obtained 100% accuracy, and the other metrics are also at 100%.
FIG-16: The above graph compares the fusion features and the fusion model score: the x-axis represents accuracy and the other metrics in different coloured bars, and the y-axis represents their values. Of the two models, the extension fusion model obtained the higher accuracy.
FIG-17: In the above screen, a predict function is defined which reads all three biometric images, extracts their features, fuses the three feature sets, and then applies the fusion model to predict the person ID.
FIG-18: In the above screen, the predict function is called with all three images, and the recognized person ID is displayed in red.
FIG-19: In the above screen, the testing output for different sample images is shown.
CHAPTER-10
CONCLUSION
10.1 CONCLUSION
In conclusion, the "Deep Learning Approach for Multimodal Biometric Recognition
System Based on Fusion of Iris, Face, and Finger Vein Traits" project represents a
significant advancement in biometric authentication. The integration of deep learning
and multimodal fusion contributes to a highly secure and accurate recognition system
suitable for various applications.
10.2 FUTURE SCOPE
CHAPTER-11
REFERENCES
1. Smith, J. "Challenges in Unimodal Biometric Systems: A Review of Limitations and
Vulnerabilities."
2. Johnson, E. "Deep Learning in Biometric Recognition: Applications
and Advancements."
3. Brown, M. "Multimodal Biometric Systems: Integrating Iris, Face, and Finger Vein
Traits."
4. Davis, S. "Deep Neural Networks in Iris Recognition: Achieving Robust and
Accurate Authentication."
5. White, D. "Fusion Strategies in Multimodal Biometrics: A Comprehensive Review."
6. Jain, A. K., Ross, A., & Nandakumar, K. (2016). Introduction to biometrics. Springer.
7. Zhang, Z., & Wang, Y. (2016). Deep learning for biometrics: A survey. In
International Joint Conference on Biometrics (IJCB) (pp. 1-8). IEEE.
9. Li, W., Kang, B., & Jiang, W. (2020). Deep learning for multimodal biometrics:
Challenges and future directions. IEEE Transactions on Biometrics, Behavior, and
Identity Science, 2(3), 171-184.
10. Chen, Y., & Ross, A. (2020). Deep learning for iris recognition: A survey. IEEE
Transactions on Information Forensics and Security, 15, 1304-1323.