
(Image Classification)

Project report submitted in partial fulfillment of the requirements for


the degree of Bachelor of Technology

In

Computer Science and Engineering/Information Technology

By

(Varun Choudhary (161271))


(Rishabh Agarwal (161336))

Under the supervision of

(Dr. Rakesh Kanji)

to

Department of Computer Science & Engineering and Information


Technology

Jaypee University of Information Technology Waknaghat,


Solan-173234, Himachal Pradesh

i
Candidate’s Declaration

I hereby declare that the work presented in this report entitled Image Classification in
partial fulfillment of the requirements for the award of the degree of Bachelor of
Technology in Computer Science and Engineering/Information Technology
submitted in the department of Computer Science & Engineering and Information

Technology, Jaypee University of Information Technology Waknaghat is an authentic


record of my own work carried out over a period from January 2020 to May 2020 under
the supervision of (Dr. Rakesh Kanji) (Assistant Professor (SG) Department of
Computer Science Engineering & Information Technology).
The matter embodied in the report has not been submitted for the award of any other
degree or diploma.

(Student Signature) (Student Signature)


Varun Choudhary, 161271 Rishabh Agarwal, 161336

This is to certify that the above statement made by the candidate is true to the best of my
knowledge.

(Supervisor Signature)
Dr. Rakesh Kanji
Assistant Professor (SG)
Computer Science Engineering & Information Technology
Dated:

ii
Acknowledgement

No serious and lasting achievement is possible without the help, guidance
and co-operation of the numerous people involved in the work.

First and foremost, we would like to express our gratitude to Prof. Dr. Samir Dev
Gupta, Head of the Department of Computer Science & Engineering and Information
Technology, Jaypee University of Information Technology, for providing us the
opportunity to carry out this project as our final-year project. It gives us immense
pleasure to express our deepest gratitude and thanks to Dr. Rakesh Kanji, Assistant
Professor (SG), Department of Computer Science & Engineering and Information
Technology, for not only imparting his knowledge but also for his constant supervision,
advice and guidance throughout the project, without which this project would not have
been possible.

We would also like to thank all the other faculty of the department at Jaypee University
of Information Technology. Not only did they teach us and make us capable enough to
undertake this project, but they were always there in the hour of need and provided all
the help, facilities and co-operation required for the completion of our project.

A special mention to Ravi Raina Sir, who assisted us in the project lab and guided us
through all the minor issues.

Last but not the least, we would like to express our thanks to our parents and family
members for their support at every step of our lives.

iii
Table of Content

Chapter Page No.

1) Introduction 1
1.1) Introduction 1

1.2) Problem Statement 4

1.3) Objectives 6

1.4) Methodology 7

2) Literature Survey 12

3) System Development 25
3.1) Model Development 25

4) Results and Performance Analysis 44


4.1) Using LBPH 44

4.2) Using OpenFace 47

4.3) Difference between LBPH and OpenFace 49

5) Conclusions 51
5.1) Conclusion 53
5.2) Application 53
5.3) Future work

iv
List of Abbreviations

IC – Image Classification

OD – Object Detection

CV – Computer Vision

SC – Supervised Classification

UC – Unsupervised Classification

FR – Face Recognition

FI – Face Identification

v
List of Figures

Figure Page
No.

1. Figure 1: The machine learning process 7

2. Figure 2: Cartesian with original dataset 9

3. Figure 3: Cartesian with modified dataset 9

4. Figure 4: Object Based Classification 10

5. Figure 5: Architecture of LeNet 13

6. Figure 6: Data Flow Diagram 14

7. Figure 7: Alex-Net Architecture 16

8. Figure 8: Dataset of above illustration 19

9. Figure 9: Face Dataset of UC 19

10. Figure 10: OpenFace 22

11. Figure 11: Layers in OpenFace 22

12. Figure 12: Haar Cascade based detection 25

13. Figure 13: Face Detection Algorithm 26

14. Figure 14: Sample Original and Segmented 27

15. Figure 15: Sample Original and Segmented 28

16. Figure 16: Sample Image Facial Feature 29

17. Figure 17: Model Approach 30

18. Figure 18: FR Model Approach 33

vi
19. Figure 19: Sample Image LBP 35

20. Figure 20: Sample Image LBP2 37

21. Figure 21: Histogram 37

22. Figure 22: Face Embeddings 39

23. Figure 23: Triplet Embedding 39

24. Figure 24: Isolating Face 40

25. Figure 25: Affine Transformation 41

26. Figure 26: Sample Mean Landmarks 42

27. Figure 27: Classification Approach 43

28. Figure 28: Sampled Matches with labeled 44

29. Figure 29: IC 1 with Haar Cascade 45

30. Figure 30: IC 2 with Haar Cascade 45

31. Figure 31: IC with multiple faces 46

32. Figure 32: Loading images into Embeddings 47

33. Figure 33: Loading Dataset and Model 48

34. Figure 34: FaceNet Model with accuracy 49

35. Figure 35: Comparison between Accuracy 50

36. Figure 36: Comparison between Training Time 50

vii
Abstract
Image classification is widely used for tasks such as face recognition and
object detection. Face recognition in particular is a widely used biometric
method because of its natural and non-intrusive nature. Recently, deep
learning networks trained with the Triplet Loss have become a common
framework for person identification and verification. In this report, we
present a method for selecting appropriate hard negatives for training with
the Triplet Loss. We show that incorporating pairs which would otherwise
have been discarded yields better accuracy and performance. We also apply
the Adaptive Moment Estimation (Adam) algorithm to mitigate the risk of
early convergence caused by the additional hard-negative pairs. We achieved
an accuracy of 0.968 with OpenFace and observed much lower accuracy with
LBPH.

viii
Chapter 1 INTRODUCTION

1.1 INTRODUCTION

IC refers to the process of taking an input image and assigning it to one
of several labels, such that an algorithm or classifier can predict the
class (label) of that input image. The main goal of IC is to predict the
label of the input image accurately and efficiently, making as few
mistakes as possible. In order to group large amounts of data into
different classes or categories, the classes into which they are sorted
must be well understood. To accomplish this task, the classifier must be
well trained so that error is minimized and accuracy is maximized. IC
strategies originally grew out of research in the field of pattern
recognition. Classification of remotely sensed images follows a
well-established procedure of learning the relationship between the data
and the evidence classes. Two of the most important ingredients of
accurate classification are the learning technique and the feature set.
The data must also be organized: it has to be distinct and kept in one
precise format in order to solve an IC problem. We have to arrange our
data in a specific format; if it is not in that particular order, we have
to transform the input data so that predictions work fairly for every
single input image.

IC also plays a significant role in our day-to-day life and in various
fields such as information security and biometrics. IC is a technique that
includes image processing (IP): extracting key features and matching those
features against a specific image. With modern IC methods we can obtain
information about a particular image faster than ever before, and we can
apply it to scientific experiments, traffic identification, security,
medical equipment, FR and various other fields.

IP and CV techniques are currently being applied in numerous areas. One of
these domains is face recognition from an image. Face recognition is a
challenging task since there is a large similarity between faces. In
addition, image classification datasets require far more computation power
than text-based data classification.

An IC algorithm might, for instance, be designed to tell whether a picture
contains a human figure or not. While recognizing an object is trivial for
humans, robust image classification is still a challenging problem in
computer vision. In our pipeline the image is first converted to grayscale
and its pixels are stored as a 2-D matrix. IC is possibly the most
significant part of digital image analysis.

There are two widespread approaches to IC: SC and UC. In SC (supervised
classification) the user chooses the images that are used in the training
dataset, and all images in a given set belong to the same class. In UC
(unsupervised classification) the collection of pictures contains several
classes, and we then have to analyse which subset of the data is to be
used when building the model.

The process of model building can be divided into several distinct stages.

1 Loading the dataset and pre-processing the data

Data is to a learning model what the brain is to a human being; without
it, the model is just an empty machine. Data is therefore a goldmine for
our learning models. The accuracy, efficiency and performance of the model
depend heavily on how well organized the dataset is: the better prepared
it is, the better the outcomes will be. The process commonly follows this
path: we first check how well the model performs on unknown data (data it
has not seen), and for this purpose we hold back a set which is later used
for validation.

2 Building the model architecture

This is one of the most important and crucial steps in forming the model.
We need to define what the model will look like, which requires addressing
questions such as:

 How many convolutional layers do we want?

 What should be the activation function for each layer?

 How many hidden units should each layer have?

3 Training the learning model

To train the learning model we usually follow this approach:

 Place the images with their correct labels and train on those labelled
images.

 Once we find the image in our dataset that resembles a particular input
image, we validate that record against the correct label.

4 Evaluating the model's accuracy and performance

Finally, we load the test data and predict the classes of these images
using the trained learning model.
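To make these four stages concrete, the following is a minimal, hypothetical sketch of such a pipeline using TensorFlow/Keras; the library choice, the layer sizes and the random stand-in data are illustrative assumptions and not the exact setup used in this project.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# 1) Load and pre-process the data (random stand-ins for real labelled images),
#    holding back a validation split for later checking.
x = np.random.rand(200, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, size=200)                     # two assumed classes
x_train, x_val = x[:160], x[160:]
y_train, y_val = y[:160], y[160:]

# 2) Define the architecture: number of convolutional layers, activations, hidden units.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),
])

# 3) Train on the labelled images.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)

# 4) Evaluate accuracy on held-out data (the validation split stands in for a test set).
loss, acc = model.evaluate(x_val, y_val)
print("accuracy:", acc)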

1.2 PROBLEM STATEMENT

Many problems in CV had saturated in accuracy, and refining the
effectiveness of models was difficult. However, with the rise of DL
methods, the precision on these problems has improved over time. One of
the key problems is IC, defined as predicting the class of an image. A
slightly more complex variant of IC is to take a stack of images of known
people and build a well-defined labelled dataset, which is then used for
image localization, where an image may contain a single object or multiple
objects. The goal here is to detect the input face at every angle and to
increase the accuracy and efficiency of the model. When the system
distinguishes an object in the input image, it draws a bounding box around
that object. The critical difficulty of our undertaking is OD, which
incorporates both classification and localization. We take as input a
real-time image, and the output image contains bounding boxes around all
objects in the picture together with the class of each object present. To
get the class of an object we first need to extract facial features. Our
fundamental concern is the accuracy of identification across different
angles. Recognizing people from face pictures is one of the most natural
and widely used techniques; we humans do it constantly and effortlessly,
faces can be captured without disturbing the subject, and it is one of the
most common methods for automatic machine verification. The initial and
fundamental step of the model is to extract the face from a particular
input image, separate it from the rest of the image and store it in a
pickle file. In the next step we extract a feature vector from the test
picture. These features are taken into account, and we correct the white
balance, contrast and alignment; after all these steps the improved input
image is added to our dataset. The dataset (gallery) contains the same
arrangement of features previously extracted and stored during the
enrolment phase, which is used when validation of the improved images
takes place.

5
1.3 OBJECTIVES

The main objectives of our IC project are as follows.

1. Create a well-defined dataset of labelled images. Gather as many images
as we can so that the accuracy of the model increases as we carry out the
entire technique.

2. Find a face inside a large database of faces. In this procedure the
system returns a candidate list of faces from the database. The most
useful applications include crowd surveillance, video content indexing,
personal identification (for example, driver's licences) and mug-shot
matching.

3. Real-time FR: here, face recognition is used to identify an individual
on the spot and grant access to a building or compound, thereby avoiding
security hassles. In this case the face is compared against numerous
training samples (the dataset) of that individual.

4. Obtain a 2-D matrix of the face from which face contour points can be
extracted. The matrix can be located in the interior of the face contour.

5. Compare the face contour across different models to determine which
process gives the more accurate result.

6. Develop an FR module using the CV2 and dlib packages, integrate it with
Haar Cascade and OpenFace, and compare the accuracy of the model under the
different procedures.

1.4 METHODOLOGY

The methodology revolves around the classification approach and the
randomization approach. It uses techniques, languages and packages such as
CV2 (to capture real-time video/images), NumPy (to store data for the
model), Haar Cascade (detection model), FaceNet (one-shot learning, which
aims to learn about object categories from one, or only a few, training
pictures), OpenFace, dlib and Python in order to "compile" and "execute"
this project for the practical implementation.

1.4.1 Classification Approach

Classification approaches are the machine learning algorithms used to
categorize large given datasets using various trained models. There are
diverse approaches in machine learning, but a few principal procedures are
classification, clustering, regression and recommendation systems. This is
a data science field where the machines learn and improve without being
explicitly programmed. The different kinds of machine learning are
supervised learning, unsupervised learning and reinforcement learning.
Each of these types differs in the way it learns, predicts and defines its
outcomes.

7
Figure 1: The machine learning process

1.4.1.1 Principal Component Analysis

PCA is a well-defined approach used for decreasing dimensionality while
also increasing the interpretability of a dataset, with minimal loss of
information. PCA achieves this by constructing new, uncorrelated variables
(principal components) that successively maximize the variance of the
dataset. Finding such new variables reduces to solving an eigenvalue
problem, and the new components are defined entirely by the current
dataset.

These procedures fall into one of two classes.

1 Feature Elimination

We reduce the feature space by removing features. Rather than considering
each and every feature, we may drop all features except, say, the three we
believe will perform best.

2 Feature Extraction

Suppose we have six independent variables. During feature extraction we
form six "new" independent variables, where each new variable is a
combination of the six "old" variables; we can then keep only the few new
variables that explain the most variance. A minimal code sketch is given
after the figures below.

Figure 2: Cartesian with original dataset

9
Figure 3: Cartesian with modified dataset
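As an illustration of feature extraction, the following is a minimal sketch of PCA with scikit-learn; the library choice, the random data and the six-to-three component reduction are assumptions for illustration only.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))           # 100 samples, 6 "old" independent variables

pca = PCA(n_components=3)               # keep the 3 components with the most variance
X_new = pca.fit_transform(X)            # 100 x 3 matrix of "new" variables

print(pca.explained_variance_ratio_)    # fraction of the variance kept per component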

1.4.1.2 Nearest Neighbors Classification

This is similar to supervised classification: after multi-resolution
segmentation the user identifies sample sites for each class of interest.
The statistics of these labelled image objects are then computed. Finally,
the nearest-neighbour classifier assigns objects based on their similarity
to the training sites under the chosen distance metric.

There are several distance measures for comparing test and training data
(a small code sketch follows the list):

1 L1 norm

2 L2 norm
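The following is a minimal, hypothetical k-nearest-neighbour example with scikit-learn showing both metrics; the library and the digits dataset are illustrative assumptions.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)               # small image dataset as flat vectors
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for p, name in [(1, "L1 norm"), (2, "L2 norm")]:
    knn = KNeighborsClassifier(n_neighbors=3, p=p)  # p selects the Minkowski norm
    knn.fit(X_tr, y_tr)
    print(name, "accuracy:", knn.score(X_te, y_te))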

Figure 4: Object Based Classification

1.5 ORGANIZATION

The project report is organized as follows.

CHAPTER 1: We give a basic introduction to the project and to the
framework. The introduction covers the overall objective and the problem
statement we are working on, along with a concise overview of the various
methodologies used and a brief introduction to the relevant machine
learning algorithms.

CHAPTER 2: We survey the literature related to our project. By studying
this work we gained considerable familiarity with the task and its problem
statement, which helped us identify the best solution for it.

CHAPTER 3: This chapter contains the entire system design: the design
structure, the system architecture and the algorithms used.

CHAPTER 4: This chapter covers all the results obtained so far, with
screenshots. We also explain the different types of algorithms along with
detailed pseudo-code.

CHAPTER 5: This chapter presents the conclusions of our work, with a focus
on how to improve the accuracy and time complexity of the project.

Chapter 2 LITERATURE SURVEY

2.1 [1]

Research paper title: Simple convolutional neural network on image


classification

Author's Name: Tianmei Guo, Jiwen Dong, Henjian Li, Yunxing Gao

Abstract:

This paper was published by Tianmei Guo, Jiwen Dong, Henjian Li and
Yunxing Gao. The authors explain that IC has a huge impact in the field of
CV and plays a crucial role in many applications. IC is a procedure that
incorporates image preprocessing, image segmentation, key feature
extraction and matching/identification. Thanks to modern image
classification procedures we can obtain image information faster than
before and apply it to scientific experiments, traffic identification,
security, medical assistance, face recognition and many other areas. In an
era where DL is growing so fast, feature extraction and classification are
already integrated into the learning framework, which has overcome many
drawbacks of traditional feature-selection methods. In the previous
decade, optimization of CNNs has mainly concerned the following aspects.

1. Design of the convolution layer

2. Design of the pooling layer

3. Design of the loss function

In this paper the authors propose a simple yet effective CNN for image
classification. Building on this CNN, they also analyse several methods
for setting the learning rate and propose different optimization
techniques for solving the highly parametric problems that arise in image
classification.

Basic CNN Components

A CNN typically has three kinds of layers:

1. Convolution layer

2. Pooling layer

3. Fully-connected layer

13
Figure 5: Architecture of LeNet

1. Convolution Layer

This layer is the core of a CNN; internally it has many local connections
and heavily shared weights. The purpose of the convolution layer is to
learn feature representations of the inputs. As noted above, a convolution
layer consists of several feature maps.

2. Pooling Layer

The sampling process is closely related to fuzzy filtering. This layer is
responsible for secondary feature extraction. Pooling is always placed
between two convolution layers, and the kernel size and stride govern the
dimensions of the pooling output.

3. Fully-Connected Layer

The classifier of a CNN consists of at least one fully-connected layer. No
spatial information is preserved in fully-connected layers. The last
fully-connected layer is followed by an output layer. For classification
tasks, softmax regression is usually used because it produces a
well-behaved probability distribution over the outputs.

Figure 6: Data Flow Diagram

2.2 [2]

Research paper title: Image classification using Deep learning

Author's Name: M Manoj krishna, M Neelima, M Harshali , M Venu


Gopala Rao

Abstract:

This paper was published by M Manoj Krishna, M Neelima, M Harshali and M
Venu Gopala Rao. The authors define classification as the systematic
arrangement of objects into groups and categories based on their features.
Image classification emerged to reduce the gap between computer vision and
human vision by training computers with data. Image classification is
achieved by assigning the image to the designated category based on the
content of the image. In this paper, the authors investigate image
classification using deep learning. The conventional techniques used for
image classification belong to the branch of artificial intelligence (AI)
formally known as machine learning. Machine learning consists of a
feature-extraction module that extracts the important features, such as
edges and textures, and a classification module that classifies based on
the features extracted. The fundamental limitation of machine learning is
that it can only extract a certain set of hand-designed features from
images and is unable to learn discriminating features from the training
data itself. This limitation is remedied by using deep learning. Deep
learning (DL) is a sub-area of machine learning capable of learning
through its own method of computing. A deep learning model is designed to
continuously analyse data with a layered composition of several
algorithms, similar to how a person would draw conclusions. To achieve
this, deep learning uses a layered structure expressed as an artificial
neural network (ANN). The design of the ANN is inspired by the biological
neural network of the human brain. This makes deep learning generally more
capable than standard machine learning models.

Four test pictures (sea anemone, indicator, cystoscope and radio measuring
instrument) were chosen from the AlexNet database for testing and
validation of image classification using deep learning. The convolutional
neural network of the AlexNet architecture is used for classification.
From the experiments it is seen that the images are classified correctly,
even for difficult test pictures, which shows the effectiveness of the
deep learning algorithm.

16
Figure 7: Alex-Net Architecture

2.3 [3]

Research paper title: Image classification for content-based indexing

Author's Name: A. Vailaya, M.A.T. Figueiredo , A.K. Jain , Hong-Jiang


Zhang

Abstract:

This paper was published by A. Vailaya, M.A.T. Figueiredo, A.K. Jain and
Hong-Jiang Zhang. The authors combine several two-class classifiers into a
single hierarchical classifier. Grouping images into (semantically)
meaningful categories using low-level visual features is a challenging and
important problem in content-based image retrieval. Using binary Bayesian
classifiers, they attempt to capture high-level concepts from low-level
image features under the constraint that the test image belongs to one of
the classes. In particular, they consider the hierarchical classification
of vacation images: at the highest level, images are classified as indoor
or outdoor; outdoor images are further classified as city or landscape;
finally, a subset of landscape images is classified into sunset, forest
and mountain classes. They demonstrate that a small vector quantizer
(whose optimal size is selected using a modified MDL criterion) can be
used to model the class-conditional densities of the features required by
the Bayesian methodology. The classifiers were designed and evaluated on a
database of 6,931 vacation photographs. The system achieved a
classification accuracy of approximately 90% for indoor/outdoor, 95% for
city/landscape, 96% for sunset/forest-and-mountain, and 96% for
forest/mountain classification problems. They further develop a learning
method to incrementally train the classifiers as additional data become
available, and also show preliminary results for feature reduction using
clustering techniques.

2.4 [4]

Research paper title: KNN based image classification relying on local


feature similarity

Author's Name: Giuseppe Amato, Fabrizio Falchi

18
Abstract:

This paper was published by Giuseppe Amato and Fabrizio Falchi. The
authors propose a novel image classification approach, derived from the
kNN classification method, that is particularly suited to classifying
images described by local features. Their proposal relies on the
possibility of performing similarity search between local image features.
Using local features generated around interest points, they revise the
single-label kNN classification approach to consider similarity between
the local features of the images in the training set rather than
similarity between whole images, opening up new opportunities to explore
more efficient and effective measures. Working at the level of local
features lets them exploit global information contained in the training
set which cannot be used when classifying only at the level of whole
images, for instance the effect of local-feature cleaning techniques. They
perform several experiments by testing the proposed approach with various
kinds of local image features in a tourist-landmark recognition task.

19
Figure 8: Dataset of above illustration

Figure 9: Face Dataset of UC

20
2.5 [5]

Research paper title: OpenFace: A general-purpose face recognition


library with mobile applications

Author's Name: Brandon Amos, Bartosz Ludwiczuk,† Mahadev


Satyanarayanan

Abstract:

This paper was published by Brandon Amos, Bartosz Ludwiczuk and Mahadev
Satyanarayanan. The authors describe OpenFace, a face recognition library
that bridges the accuracy gap between publicly available and
state-of-the-art private systems. They show that OpenFace provides
near-human accuracy on the LFW benchmark and present a new classification
benchmark for mobile scenarios. The paper is intended for non-experts
interested in using OpenFace and gives a light introduction to the deep
neural network techniques they use. OpenFace provides the logic flow for
obtaining low-dimensional face representations for the faces in an image,
and the neural network handles training and inference. The Python library
uses NumPy for arrays and linear algebra operations, OpenCV for computer
vision primitives, and scikit-learn for classification; plotting scripts
use matplotlib. The project structure is agnostic to the neural network
architecture, and they currently use FaceNet's architecture. They used
dlib's pre-trained face detector for higher accuracy than OpenCV's
detector, and the native C code is compiled with CUDA support for NVIDIA
GPUs.

The face detection stage returns a list of bounding boxes around the faces
in an image, which can be under various pose and illumination conditions.
A potential issue with using the bounding boxes directly as input to the
neural network is that faces could be looking in different directions or
under different lighting conditions. FaceNet can deal with this given an
enormous training dataset, but a heuristic for a smaller dataset is to
reduce the size of the input space by normalizing faces so that the eyes,
nose and mouth appear at similar locations in each image.

The paper presents OpenFace, a face recognition library. The network was
trained on the largest datasets available for research, which are an order
of magnitude smaller than DeepFace's private datasets. They show
competitive accuracy and performance results on the LFW verification
benchmark despite the smaller training dataset, and they introduce an LFW
classification benchmark and show competitive performance on it.

The accuracy in the restricted protocol is obtained by averaging the
accuracy of ten experiments. The data is divided into ten equally sized
folds, and each experiment trains on nine folds and computes the accuracy
on the remaining test fold. The OpenFace results are obtained by computing
the squared Euclidean distance between pairs of embeddings and labelling
pairs below a threshold as the same person and pairs above it as different
people. The best threshold on the training folds is used as the threshold
on the remaining fold. In most of the experiments the best threshold is
close to one.
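As a minimal sketch of this verification rule (the threshold value and the random stand-in embeddings are assumptions for illustration; the real 128-D embeddings come from the OpenFace network):

import numpy as np

def same_person(emb_a, emb_b, threshold=1.0):
    # Label a pair as the same person if the squared Euclidean distance
    # between their embeddings falls below the threshold.
    d2 = float(np.sum((emb_a - emb_b) ** 2))
    return d2 < threshold

a, b = np.random.rand(128), np.random.rand(128)   # stand-in embeddings
print(same_person(a, b))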

For classification tasks, the embeddings can be fed to a support vector
machine, which is commonly used to match a real-time representation
against the stored dataset.

22
Figure 10: OpenFace vs VGG

23
Figure 11: Layers in OpenFace

2.6 [6]

Research paper title: FaceNet: A Unified Embedding for Face Recognition


and Clustering

Author's Name: Florian Schroff, Dmitry Kalenichenko, James Philbin

Abstract:

This paper was published by Florian Schroff, Dmitry Kalenichenko and James
Philbin. The authors present a system called FaceNet that directly learns
a mapping from face images to a compact Euclidean space where distances
directly correspond to a measure of face similarity. Once this space has
been produced, tasks such as face recognition, verification and clustering
can be easily implemented using standard techniques with FaceNet
embeddings as feature vectors. They present a unified system for face
verification (is this the same person), recognition (who is this person)
and clustering (find common people among these faces). Their technique
relies on learning a Euclidean embedding per image using a deep
convolutional network.

In contrast to earlier approaches, FaceNet directly trains its output to
be a compact 128-D embedding using a triplet-based loss function inspired
by LMNN. The triplets consist of two matching face thumbnails and a
non-matching face thumbnail, and the loss aims to separate the positive
pair from the negative by a distance margin. The thumbnails are tight
crops of the face region; no 2-D or 3-D alignment other than scaling and
translation is performed.

They evaluated on four datasets and, except for Labelled Faces in the Wild
and YouTube Faces, they assessed their method on the face verification
task.

FaceNet uses a deep convolutional network. Two different core
architectures are discussed: the Zeiler & Fergus style networks and the
more recent Inception-type networks.

They provide a method to directly learn an embedding into a Euclidean
space for face verification. This sets it apart from other techniques that
use a CNN bottleneck layer or require extra post-processing such as
concatenation of multiple models and PCA, as well as SVM classification.
Their end-to-end training both simplifies the setup and shows that
directly optimizing a loss relevant to the task at hand improves
performance.

Another strength of their model is that it only requires minimal
alignment; it was not clear whether further alignment would be worth the
extra complexity.

Their method uses a deep convolutional network trained to directly
optimize the embedding itself, rather than an intermediate bottleneck
layer as in other deep learning approaches described in the literature.
For training they used triplets of roughly aligned matching/non-matching
face patches generated with a novel online triplet mining method. They
achieve state-of-the-art face recognition performance using only 128 bytes
per face. Their system achieves a record accuracy of 99.63% on LFW and
cuts the error rate considerably compared with the best published result.

25
Chapter 3 SYSTEM DEVELOPMENT

3.1 Model Development

3.1.1 Haar Cascade

Design

The design of any system depends upon the problem and on the sequence of
phases in which it is built. The design of our problem mainly depends upon
the size of the database (a larger database is directly related to higher
model accuracy). A review of existing work shows that different
procedures, and combinations of these procedures, can be applied when
developing a new face recognition model. Among the many possible
approaches, based on the results we obtained, we have chosen to use a
combination of statistics-based approaches: Haar Cascade and dlib for the
face detection part and OpenFace for the face recognition part. The main
purpose of this project is to evaluate how smoothly these approaches apply
in practice and how reliable they are. Our approach for this project is
given below.

26
Figure 12: Haar Cascade based detection

Input Design

The input stage is the prerequisite for the FR system. Image acquisition
is accomplished here. Real-time captured pictures are converted into
digital data (in grayscale form, so that the operations are performed on a
single channel instead of a multicolour image) so that post-processing
computations can be executed on the image. The captured pictures are then
sent to the FD algorithm, which further classifies these images.
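A minimal sketch of this acquisition and detection step with OpenCV is shown below; the webcam index and the detector parameters are illustrative assumptions.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                # default webcam (assumed device index)
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)               # single channel
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print("detected", len(faces), "face(s)")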

Face Detection Part

FD accomplishes the task of finding and decoding face regions for the
image recognition stage. From studying various research papers we
concluded that performing skin segmentation as the initial step has a
large impact in reducing the computation time needed to search the whole
picture. When segmentation is applied, only the segmented part of the
picture is analysed further to decide whether it contains a face or not.

27
Figure 13: Face Detection Algorithm

We take face (skin) segmentation as the first step of the FD part because
it reduces computational time; RGB is used only to determine the skin
colour and otherwise plays a small role in FD. The white balance of a
picture varies from one place to another because the lighting differs
between locations, and such situations can cause non-skin objects to be
detected as skin.

a)

28
b)

Figure 14: a) User Input image b) Segmented Image

a)

29
b)

Figure 15: a) User Input image b) Segmented Image

At this point, face candidates are selected using two conditions: the
aspect ratio of the candidate's bounding box and the presence of holes
inside the candidate area. The aspect ratio of the bounding box should lie
in the range 0.3 to 1.5.

Using these conditions, facial regions are located in the input picture,
refined using the bounding box, and cropped out of the original picture.
These extracted facial regions are then sent to the next procedure,
extraction of facial features.

30
a) b)

Figure 16: Sample Image Facial Feature

At this stage the detection part is essentially complete: the faces
acquired from the sample pictures are matched against facial features of
images. The process can detect not just one face but many faces in the
sample input at the same time. A limited number of located faces is
acceptable, and the results are satisfactory for general use.

Face Recognition Part

The aligned face crops are then passed to the FR module, which is able to
identify persons (objects) from a large dataset in which numerous pictures
are marked with the user's label. The FR part consists of pre-processing
the pictures, vectorizing them, and a training model for storing these
pictures. The classification is accomplished using a CNN model.

31
Figure 17: Model Approach

After the model approach, the last step is to store the face
representation. The image is preprocessed with a histogram operation so
that the face can be separated from the background, and it is converted to
grayscale. The picture grid is then resized and placed in a vectorized
frame of 30 x 30.

Once this structure is created, we query it with the input image of
interest from the dataset. This is the point of building a dataset: it
makes comparison between pictures much easier. We built the dataset for
many individuals, with many sample images for each one, such that every
entry in it was unique, and were thus able to create a network of unique
templates.

Algorithm

LBPH

People perform face recognition automatically every day and practically
without effort.

Although it seems like a very simple task for us, it has proven to be a
complex task for a computer, as there are numerous factors that can impair
the accuracy of the methods, for instance light variation, low resolution
and occlusion, among others.

In computer science, face recognition is essentially the task of
recognizing an individual based on a facial image. It has become popular
over the last two decades, mainly because of the new methods developed and
the high quality of current videos/cameras.

Note that face recognition is different from face detection:

 Face Detection: its goal is to find the faces (locations and sizes) in
an image and, most likely, extract them to be used by the recognition
algorithm.

 Face Recognition: with the facial images already extracted, cropped,
resized and usually converted to grayscale, the face recognition algorithm
is responsible for finding the characteristics which best describe the
image.

The face recognition framework works in the following modes.

 Verification (validating a face): the key feature of this mode is to
compare the stored information for a claimed face with the input image
provided by the user, which is then used for verification.

 Identification: here we use pattern searching; the main focus of the
model is matching the extracted features against those stored in our
dataset, and if a match is found we label the face. This comparison is
essentially facial feature x N, where N is the number of entries in the
dataset.

Algorithm(Pseudo Code)

34
Figure 18: FR Model Approach

35
Step by Step

Now that we know a little more about face recognition and LBPH, let us go
further and look at the steps of the algorithm:

1 Parameters: it uses the parameters below.

 Radius: the radius is used to build the circular local binary pattern
and denotes the radius around the central pixel; the value is set to a
constant 1.

 Neighbors: the number of sample points used to build the circular local
binary pattern. Keep in mind that the more sample points you include, the
higher the computational cost; the value is set to a constant 8.

 Grid X: the number of cells in the horizontal direction. The more cells,
the finer the grid and the higher the dimensionality of the resulting
feature vector; the value is set to a constant 8.

 Grid Y: the number of cells in the vertical direction. The more cells,
the finer the grid and the higher the dimensionality of the resulting
feature vector; the value is set to a constant 8.

Do not worry about the parameters right now; you will understand them
after reading the following steps.

2 Training the Algorithm: first, we have to train the algorithm. To do so,
we need a dataset with facial photos of the people we want to recognize.
We also need to set an ID (it may be a number or the name of the person)
for each image, so the algorithm will use this information to recognize an
input picture and give an output. Photos of the same person must have the
same ID. With the training set prepared, let us look at the algorithm's
computational steps.

3 Applying the LBP operation: the first computational step of this
algorithm is to create an intermediate image that describes the original
picture in a better way, by highlighting the facial characteristics. To do
so, the algorithm uses the concept of a sliding window, based on the
parameters radius and neighbors.

The picture below shows this procedure:

37
Figure 19: Sample Image LBP

• Suppose we have a facial image in grayscale.

• We can take part of this image as a window of 3x3 pixels.

• This can also be represented as a 3x3 matrix containing the intensity of
each pixel.

• We then take the central value of the matrix to be used as a threshold.

• This value is used to define new values for the eight neighbours.

• For each neighbour of the central value (threshold), we set a new binary
value: 1 for values equal to or greater than the threshold and 0 for
values lower than the threshold.

• Now the matrix contains only binary values (ignoring the central value).
We concatenate each binary value from each position of the matrix, line by
line, into a new binary number. Some authors use other ways of
concatenating the binary values, but the final result is the same.

• We then convert this binary number to a decimal value and set it as the
new value of the central pixel of the matrix, which is actually a pixel of
the original image.

• At the end of this procedure we have a new image which represents the
characteristics of the original image better (a small numeric sketch is
given after this list).
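The following is a minimal numeric sketch of this thresholding step for a single 3x3 window (pure NumPy; the sample values and the clockwise bit order are illustrative assumptions, as different authors order the bits differently):

import numpy as np

window = np.array([[90, 120, 60],
                   [200, 100, 40],      # centre pixel = 100 acts as the threshold
                   [110, 95, 130]])

centre = window[1, 1]
# Neighbours read clockwise starting from the top-left corner (one common convention).
neighbours = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
              window[2, 2], window[2, 1], window[2, 0], window[1, 0]]

bits = [1 if v >= centre else 0 for v in neighbours]       # thresholding step
lbp_value = int("".join(str(b) for b in bits), 2)          # binary -> decimal
print(bits, "->", lbp_value)   # [0, 1, 0, 0, 1, 0, 1, 1] -> 75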

Figure 20: Sample Image LBP2

4 Extracting the histograms: now, using the image created in the previous
step, we can use the Grid X and Grid Y parameters to divide the image into
a grid of cells, extract a histogram from each cell and concatenate them
into one larger histogram, as can be seen in the following picture.

Figure 21: Histogram
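Putting the parameters and the training step together, a minimal sketch using OpenCV's contrib module (opencv-contrib-python) is shown below; the random face crops and labels are stand-ins for the real grayscale training images.

import cv2
import numpy as np

# Assumed to be prepared elsewhere: grayscale face crops and integer person IDs.
faces = [np.random.randint(0, 256, (100, 100), dtype=np.uint8) for _ in range(4)]
labels = np.array([0, 0, 1, 1], dtype=np.int32)

recognizer = cv2.face.LBPHFaceRecognizer_create(
    radius=1, neighbors=8, grid_x=8, grid_y=8)    # the four LBPH parameters above
recognizer.train(faces, labels)

label, confidence = recognizer.predict(faces[0])  # lower confidence = closer match
print(label, confidence)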

39
3.1.2 Open-Face

Design

OpenFace is an open-source library that helps improve model quality by
focusing on performance and, most importantly, accuracy, because that is
what matters in the end. It is implemented using Python as the development
language and Torch, which can use both the CPU and the GPU. There are many
other models available, but we chose this one because the accuracy it
offers is excellent. The model is built on dlib, a computer vision module
for deep learning with a built-in face recognition component, and its main
focus is real-time face recognition. The underlying network is trained on
a very large dataset of over 600K images; these images are passed through
the neural network for feature extraction and then through the
FaceNet-style network, which is trained with a triplet loss, and this
helps in computing accurate face clusters. Once the faces are normalized
by OpenCV's affine transformation so that all faces are oriented the same
way, they are sent through the trained neural network in a single forward
pass. The result is a 128-dimensional face embedding that can be used
directly for classification or in a clustering algorithm for similarity
detection.

40
Figure 22: Face Embeddings

The triplet loss works by decreasing the distance between an anchor image
and a positive image of the same person in the database, while at the same
time increasing the distance between the anchor and a negative image of a
different person.
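A minimal sketch of this loss for a single (anchor, positive, negative) triplet, L = max(||f(a) - f(p)||^2 - ||f(a) - f(n)||^2 + margin, 0), is shown below; the margin value and the random stand-in embeddings are illustrative assumptions.

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the positive closer to the anchor than the negative by at least the margin.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

a, p, n = (np.random.rand(128) for _ in range(3))   # stand-in 128-D embeddings
print(triplet_loss(a, p, n))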

Figure 23: Triplet Embedding

41
Isolating Face from Noisy Background

For any FR application, the first step is to take the image and separate
the faces from the noisy background, so that each previously unknown face
can be handled on its own. An FR application has to deal with every
situation, good or bad: the user may face lighting issues, white-balance
issues, or awkward head positions, and the FR system has to cope with all
of them. That is why dlib combined with OpenCV is more than enough to
handle all of this at once. It is dlib's job to detect the fiducial facial
points so that the process can handle the position of each user's face in
the image.

When we use this method we have a wide choice of implementations: dlib
combined with the face_recognition package, which offers both a HOG-based
and a CNN-based model, with an SVM on top. These detectors are trained on
both positive and negative images; poor training here reflects badly on
the overall accuracy.
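A minimal sketch of this detection step using the face_recognition package (which wraps dlib) is shown below; the image file name and the choice of the HOG model are illustrative assumptions.

import face_recognition

image = face_recognition.load_image_file("person.jpg")     # assumed sample file
# model="hog" is faster on a CPU; model="cnn" is more accurate but needs a GPU.
boxes = face_recognition.face_locations(image, model="hog")

for top, right, bottom, left in boxes:
    face_crop = image[top:bottom, left:right]               # isolate each face
    print("face of size", face_crop.shape)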

42
Figure 24: Isolating Face

Preprocessing

After locating each face in the user's input image, the next major part is
preprocessing what we have obtained. The main concerns here are
unpredictable, bad illumination and converting the user input to grayscale
in order to obtain a faster and more reliable model and features.

43
Figure 25: Affine Transformation

Many FR models handle these issues by training extensively on such input
images. OpenCV has built-in support for affine transformation, which is
very helpful here: it can correct the face if it is poorly positioned or
if the angle of the input image is off. The model uses sixty-eight facial
landmarks in this step. After obtaining these landmarks, our main concern
is to compute the distances between the features and then compare those
landmark results with every image in the dataset.
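A minimal sketch of this alignment step, using dlib's 68-point landmark model and an OpenCV affine warp on three anchor points, is shown below; the model file path, the chosen landmark indices and the target coordinates are illustrative assumptions.

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed path

img = cv2.imread("person.jpg")                    # assumed sample file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
rect = detector(gray, 1)[0]                       # assumes at least one face is found
shape = predictor(gray, rect)

# Use the outer eye corners (landmarks 36 and 45) and the nose tip (33) as anchors.
src = np.float32([[shape.part(36).x, shape.part(36).y],
                  [shape.part(45).x, shape.part(45).y],
                  [shape.part(33).x, shape.part(33).y]])
dst = np.float32([[30, 35], [66, 35], [48, 60]])  # assumed positions in a 96 x 96 crop

M = cv2.getAffineTransform(src, dst)
aligned = cv2.warpAffine(img, M, (96, 96))        # aligned face ready for embedding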

44
Figure 26: Sample Mean Landmarks

Classification

Once the face has been isolated from the background and preprocessing has
been done with the help of dlib, all we need to do is send the
landmark-aligned photo through the trained neural network, which we
created with the help of Torch, passing the user input through the
pipeline. A single forward pass then produces the one hundred and
twenty-eight embedding values used in the subsequent calculation. This
low-dimensional embedding is then used for the classification procedure.
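A minimal sketch of this final step, classifying 128-D embeddings with a linear SVM in scikit-learn, is shown below; the random embeddings and the two names are stand-ins for the real OpenFace outputs and dataset labels.

import numpy as np
from sklearn.svm import SVC

# Stand-in data: in practice each row is the embedding of one face from the network.
embeddings = np.random.rand(20, 128)
names = np.array(["person_a"] * 10 + ["person_b"] * 10)   # assumed labels

clf = SVC(kernel="linear", probability=True)
clf.fit(embeddings, names)

query = np.random.rand(1, 128)                    # embedding of a new face
print(clf.predict(query), clf.predict_proba(query).max())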

45
Figure 27: Classification Approach

Chapter 4 PERFORMANCE ANALYSIS


4.1 Using LBPH

In this chapter we evaluate the entire work done so far. The algorithm we
have been working on successfully determined the label of the image.

The algorithm was able to distinguish multiple objects in the input
picture and then correctly matched the input image against the given
dataset (training model).

The programming language used for the whole process is Python 3.6.

46
The command-line output below shows all the results that were matched with
the labelled images.

After they were matched with the labelled images, the classifier searches
for those facial features across the whole dataset, and the number
displayed shows that the match was located at index N in the dataset.

Figure 28: Sampled Matches with labeled database

47
Figure 29: Image Classification 1 with Haar Cascade

Figure 30: Image Classification 2 with Haar Cascade

48
Our classifier was successfully able to predict multiple faces; it can
detect up to 8-10 people in the frame. This might be helpful for
identifying unknown people in a large group.

Figure 31: Image Classification with multiple faces

49
4.2 Using OpenFace

The algorithm was able to distinguish multiple objects in the input
picture and then correctly matched the input image against the given
dataset (training model).

First, we load the data into embeddings so that we have an anchor for
every identity in the dataset. These anchors are used during matching: the
classifier searches for the facial features across the whole dataset, and
the number displayed shows that the match was located at index N in the
dataset.

Figure 32: Loading images into Embeddings

50
After storing the embeddings in the dataset, we have to load the data
encodings, the dataset and the Caffe model used in the process; this can
take a considerable amount of time depending on the configuration of the
system.
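A minimal sketch of this loading step is shown below; the pickle file name, the Caffe detector files and the serialized OpenFace Torch model are assumptions based on a common workflow, not the project's exact files.

import pickle
import cv2

# Embeddings previously written with pickle.dump({"embeddings": ..., "names": ...}, f).
with open("embeddings.pickle", "rb") as f:          # assumed file name
    data = pickle.load(f)

# Commonly used model files in this kind of pipeline (file names are assumptions).
detector = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                                    "res10_300x300_ssd_iter_140000.caffemodel")
embedder = cv2.dnn.readNetFromTorch("openface_nn4.small2.v1.t7")

print(len(data["embeddings"]), "embeddings loaded")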

We were using

Graphics card: NVIDIA GTX 960M

CPU @ 3.25 GHz

All of this affects the loading process considerably.

Figure 33: Loading Dataset and Model

51
Figure 34: FaceNet Model with accuracy

Using a CNN and OpenFace in combination with dlib, we obtain an accuracy
of 96.81 ± 1.00.

4.3 Difference between OpenFace and LBPH

We observed a drastic decrease in the accuracy of LBPH compared to the
accuracy of FaceNet, which remained almost constant. For LBPH, accuracy
drops as we increase the number of input images in the dataset, whereas
for FaceNet it stays roughly constant.

52
Classification Accuracy (%)

Images in dataset    LBPH    OpenFace
10                   55      97
25                   45      96
50                   35      94
100                  25      93

Figure 35: Comparison between Accuracy

(Bar chart: training time of LBPH and OpenFace for dataset sizes of 10,
25, 50 and 100 images.)

Figure 36: Comparison between Training Time

Lower is better.

As we can see, OpenFace outperformed LBPH in every aspect, so we can say
that OpenFace is a much better classifier than LBPH.

53
Chapter 5 CONCLUSIONS
5.1 CONCLUSION

With the ever-increasing amount of data that is accessible, it is very
difficult to know what data to search for and where to look for it.
Machine-based methods have been developed to support the exploration and
retrieval process; recommendation is one such method, which guides users
in their exploration of the available data by suggesting the most
important and relevant items. Classification schemes have their origins in
a variety of research areas, including information retrieval, and they
shape how such systems are used.

Having first presented the considerations common to data and
information-processing systems (data systems, decision-support systems and
classification systems) and drawn a distinction between recommendation and
personalization, we then presented the most widespread approaches used in
building classifications for users, together with the main methods used in
classification systems. These ideas were then illustrated by a discussion
of their practical applications in a variety of domains. Finally, we
considered the various methods used in evaluating the quality of
classification systems.

However, systems and policies need to improve over time, with the aim of
enhancing performance, relevance and responsiveness to the needs of users.
A number of challenges remain to be met.

1 The improvement of community-based analysis techniques, using more data
sources or combining systems that currently cannot be used together.

2 The volume of available information is constantly growing, and
classification systems run into problems because of it. They need to give
good suggestions in record time regardless of this growth in data volume.

3 Multi-criteria recommendation approaches are undergoing significant
advances. The exploitation of multi-criteria scores containing relevant
information would be particularly important in improving the quality of
suggestions and making them effective.

4 Classification systems use customer data (profiles, etc.) to construct
personalized classifications. These systems try to gather as much data as
reasonably possible, which affects user privacy (the system knows too
much). Systems therefore need to make specific and reasonable use of user
data and to guarantee a certain level of data security (non-disclosure,
etc.).

5 Contextual approaches essentially aim to evaluate an individual's
emotional context: for example, a person in love will find a romantic film
more relevant than someone in a different emotional state.

All things considered, classification systems still need to respond to a
great many challenges. Developed across various research areas, they take
a huge variety of forms and arise from many different requirements. This
research field needs to remain as broad as possible in order to identify
the most suitable frameworks and systems for each specific application.

55
5.2 Application

1. Making payments: Face recognition (FR) can be used for verification when a payment is made; the system scans the user's face and tries to recognize the user. If the user is successfully verified, the payment is completed; otherwise the payment fails (a minimal sketch of this verification step is given after this list).

2. Access control and security: An FR framework uses biometrics to extract facial features from a photo or video and compares this data against a database of known faces to find a match. Facial recognition can thus help confirm a person's identity. Law-enforcement personnel can use this technology to identify people by scanning everyone entering the various checkpoints and then comparing each individual against a list of flagged persons.
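Both applications above reduce to the same decision: compare the embedding of the presented face with one or more stored embeddings and accept only if the distance is small enough. The sketch below assumes 128-d face embeddings (as produced by an OpenFace-style model) are already available; the threshold value, the toy database and the random vectors are hypothetical placeholders, not part of the report's actual code.

import numpy as np

THRESHOLD = 0.6  # hypothetical distance threshold; must be tuned on real data

def verify(probe, enrolled, threshold=THRESHOLD):
    # Accept the user if the Euclidean distance between embeddings is small enough.
    return np.linalg.norm(probe - enrolled) < threshold

def identify(probe, database, threshold=THRESHOLD):
    # Return the name of the closest enrolled person, or None if nobody is close enough.
    # `database` maps person names to stored 128-d embeddings.
    best_name, best_dist = None, float("inf")
    for name, emb in database.items():
        dist = np.linalg.norm(probe - emb)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Toy usage with random vectors standing in for real OpenFace embeddings.
db = {"alice": np.random.rand(128), "bob": np.random.rand(128)}
probe = db["alice"] + 0.01 * np.random.rand(128)
print(verify(probe, db["alice"]))   # expected: True (distance is tiny)
print(identify(probe, db))          # expected: "alice"

For the payment use case, verify would gate the transaction; for checkpoint screening, identify would be run against the list of flagged persons.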

5.3 FUTURE WORK AND SCOPE

We will focus on increasing the accuracy of our project by applying new methods. Our main goal is to enlarge the dataset, since a larger dataset should give a modest further improvement in accuracy.
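One practical way to enlarge the dataset, if collecting new images proves difficult, is simple data augmentation of the existing face images. The sketch below is only a suggestion and is not part of the report's pipeline; the directory names are hypothetical, and it assumes OpenCV is installed.

import os
import cv2

SRC_DIR = "dataset/faces"        # hypothetical input directory of face images
DST_DIR = "dataset/faces_aug"    # hypothetical output directory
os.makedirs(DST_DIR, exist_ok=True)

for fname in os.listdir(SRC_DIR):
    img = cv2.imread(os.path.join(SRC_DIR, fname))
    if img is None:
        continue  # skip non-image files
    base, ext = os.path.splitext(fname)

    # Horizontal flip: a face flipped left-right is still the same person.
    cv2.imwrite(os.path.join(DST_DIR, f"{base}_flip{ext}"), cv2.flip(img, 1))

    # Small rotations simulate slight head tilt.
    h, w = img.shape[:2]
    for angle in (-10, 10):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(img, M, (w, h))
        cv2.imwrite(os.path.join(DST_DIR, f"{base}_rot{angle}{ext}"), rotated)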

APPENDICES

LBPH Model Training
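The original appendix shows this as a code screenshot, which is not reproduced here. As a stand-in, the following is a minimal sketch of how LBPH training is typically done with OpenCV's contrib module; the dataset layout and file paths are assumptions, not the report's exact code.

import os
import cv2
import numpy as np

DATASET_DIR = "dataset"  # hypothetical layout: dataset/<person_name>/<image>.jpg

faces, labels, names = [], [], {}
for label, person in enumerate(sorted(os.listdir(DATASET_DIR))):
    names[label] = person
    person_dir = os.path.join(DATASET_DIR, person)
    for fname in os.listdir(person_dir):
        img = cv2.imread(os.path.join(person_dir, fname), cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue
        faces.append(img)
        labels.append(label)

# Train the LBPH recognizer (requires opencv-contrib-python) and save the model.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, np.array(labels, dtype=np.int32))
recognizer.write("lbph_model.yml")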

Working of LBPH
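Again, the original screenshot is omitted. To illustrate what LBPH actually computes, here is a small, simplified sketch of the basic 3x3 Local Binary Pattern operator (no radius/neighbour generalisation and no per-cell grid, which the full LBPH method adds); the input image here is synthetic.

import numpy as np

def lbp_3x3(gray):
    """Basic LBP: compare each pixel with its 8 neighbours and pack the
    comparison bits into one byte per pixel (borders are left at zero)."""
    h, w = gray.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    # Neighbour offsets in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = gray[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if gray[y + dy, x + dx] >= center:
                    code |= 1 << bit
            codes[y, x] = code
    return codes

gray = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
codes = lbp_3x3(gray)
# LBPH then splits the image into a grid and concatenates the per-cell
# histograms of these codes into the final feature vector.
hist, _ = np.histogram(codes[1:-1, 1:-1], bins=256, range=(0, 256))
print(hist.sum())  # number of interior pixels: 6 * 6 = 36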

Loading Embeddings in OpenFace
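The corresponding screenshot is omitted; the sketch below shows one common way to obtain 128-d OpenFace embeddings with OpenCV's dnn module. The model filename openface_nn4.small2.v1.t7 is the publicly distributed OpenFace Torch model, and the face crop is assumed to be already detected and aligned; both are assumptions rather than a transcription of the report's code.

import cv2
import numpy as np

# Load the pretrained OpenFace network (Torch format) via OpenCV's dnn module.
embedder = cv2.dnn.readNetFromTorch("openface_nn4.small2.v1.t7")

def embed(face_bgr):
    """Return the 128-d embedding of an already-detected, aligned face crop."""
    blob = cv2.dnn.blobFromImage(face_bgr, 1.0 / 255, (96, 96),
                                 (0, 0, 0), swapRB=True, crop=False)
    embedder.setInput(blob)
    return embedder.forward().flatten()  # shape (128,)

face = cv2.imread("face_crop.jpg")  # hypothetical aligned face image
vec = embed(face)
print(vec.shape)                    # (128,)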

Model Training OpenFace
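Finally, in place of the omitted screenshot, this is a minimal sketch of training a classifier on top of OpenFace embeddings, using a linear SVM from scikit-learn; the pickle filenames and the serialized data layout are hypothetical.

import pickle
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC

# Hypothetical serialized data: {"embeddings": [...128-d vectors...], "names": [...]}
with open("embeddings.pickle", "rb") as f:
    data = pickle.load(f)

le = LabelEncoder()
labels = le.fit_transform(data["names"])

# Linear SVM with probability estimates, trained on the 128-d embeddings.
recognizer = SVC(C=1.0, kernel="linear", probability=True)
recognizer.fit(data["embeddings"], labels)

# Persist both the classifier and the label encoder for later recognition.
with open("recognizer.pickle", "wb") as f:
    pickle.dump(recognizer, f)
with open("label_encoder.pickle", "wb") as f:
    pickle.dump(le, f)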

JAYPEE UNIVERSITY OF INFORMATION TECHNOLOGY, WAKNAGHAT
PLAGIARISM VERIFICATION REPORT

Date: 15/07/2020
Type of Document: B.Tech Project Report
Name: Rishabh Agarwal
Department: CSE
Enrolment No.: 161336
Contact No.: 9761760383
E-mail: [email protected]
Name of the Supervisor: Dr. Rakesh Kanji
Title of the Thesis/Dissertation/Project Report/Paper (in capital letters): IMAGE CLASSIFICATION

UNDERTAKING
I undertake that I am aware of the plagiarism-related norms/regulations. If I am found guilty of any plagiarism or copyright violation in the above report, even after the award of the degree, the University reserves the right to withdraw/revoke my degree/report. Kindly allow me to avail the plagiarism verification report for the document mentioned above.

Complete Thesis/Report Pages Detail:
- Total No. of Pages = 65
- Total No. of Preliminary Pages = 53
- Total No. of Pages accommodating bibliography/references = 12

(Signature of Student)

FOR DEPARTMENT USE
We have checked the thesis/report as per norms and found the Similarity Index at 7%. Therefore, we are forwarding the complete thesis/report for the final plagiarism check. The plagiarism verification report may be handed over to the candidate.

(Signature of Guide/Supervisor)    (Signature of HOD)

FOR LRC USE
The above document was scanned for plagiarism check. The outcome of the same is reported below:
Copy Received on: __________
Excluded: All Preliminary Pages / Bibliography, Images, Quotes / 14-Word Strings
Similarity Index (%): __________
Generated Plagiarism Report Details (Title, Abstract & Chapters): Word Count __________; Character Count __________; Submission ID __________; Total Pages Scanned __________; File Size __________
Report Generated on: __________
Checked by (Name & Signature): Librarian

Please send your complete thesis/report in PDF, with Title Page, Abstract and Chapters in a Word file, through the supervisor at [email protected]
JAYPEE UNIVERSITY OF INFORMATION TECHNOLOGY, WAKNAGHAT
PLAGIARISM VERIFICATION REPORT

Date: 15/07/2020
Type of Document: B.Tech Project Report
Name: Varun Choudhary
Department: CSE
Enrolment No.: 161271
Contact No.: 8894518242
E-mail: [email protected]
Name of the Supervisor: Dr. Rakesh Kanji
Title of the Thesis/Dissertation/Project Report/Paper (in capital letters): IMAGE CLASSIFICATION

UNDERTAKING
I undertake that I am aware of the plagiarism-related norms/regulations. If I am found guilty of any plagiarism or copyright violation in the above report, even after the award of the degree, the University reserves the right to withdraw/revoke my degree/report. Kindly allow me to avail the plagiarism verification report for the document mentioned above.

Complete Thesis/Report Pages Detail:
- Total No. of Pages = 65
- Total No. of Preliminary Pages = 53
- Total No. of Pages accommodating bibliography/references = 12

(Signature of Student)

FOR DEPARTMENT USE
We have checked the thesis/report as per norms and found the Similarity Index at 7%. Therefore, we are forwarding the complete thesis/report for the final plagiarism check. The plagiarism verification report may be handed over to the candidate.

(Signature of Guide/Supervisor)    (Signature of HOD)

FOR LRC USE
The above document was scanned for plagiarism check. The outcome of the same is reported below:
Copy Received on: __________
Excluded: All Preliminary Pages / Bibliography, Images, Quotes / 14-Word Strings
Similarity Index (%): __________
Generated Plagiarism Report Details (Title, Abstract & Chapters): Word Count __________; Character Count __________; Submission ID __________; Total Pages Scanned __________; File Size __________
Report Generated on: __________
Checked by (Name & Signature): Librarian

Please send your complete thesis/report in PDF, with Title Page, Abstract and Chapters in a Word file, through the supervisor at [email protected]
