
1. Introduction
In this project, I focus solely on authentication through online handwritten signature verification, which encompasses features such as speed, velocity, and pressure in addition to the finished signature's static shape (image). Tests performed on a reasonably large database have shown good results for this method, both for verification and identification problems. A number of signatures are required to confirm or determine the identity of an individual requesting a service. The purpose of this scheme is to ensure that the rendered services are accessed only by a legitimate user, and not anyone else. By using this method it is possible to confirm or establish an individual's identity. We have also outlined opinions about the usability of signature authentication systems and compared different techniques, with their advantages and disadvantages, in this paper.

To achieve more reliable verification or identification we should use something that really characterizes the given person.
Biometrics offer automated methods of identity verification or
identification on the principle of measurable physiological or
behavioral characteristics such as a signature or a voice
sample. These characteristics should not be duplicable, but it is
unfortunately often possible to create a copy that is accepted by
the biometric system as a true sample.

Signature authentication technology uses the dynamic analysis of a signature to authenticate a person. The technology is based on measuring the speed, pressure and angle used by the person when a signature is produced. This technology uses the individual's handwritten signature as a basis for authentication of entities and data. An electronic drawing tablet and stylus are used to record the direction, speed and coordinates of a handwritten signature. There is no encryption or message confidentiality offered yet with signature dynamics, but more modern examples use one-way hash functions to encrypt the signature dynamics and data and append them to the document being signed. In one implementation, created by Topaz Systems, the signature actually disappears from view if the document is tampered with after signing.

2. Techniques

To authenticate a person through his/her signature, that is, to test whether the person is genuine or not, the signature can be checked in many ways. In this project two techniques are used:

1. Authentication through PIXEL POSITION MATCHING

2. Authentication through ARTIFICIAL NEURAL NETWORKS

A handwritten signature contains a considerable amount of information about its writer's identity. First, the signature is drawn with a pen on a device, or obtained by scanning the signature written by the person on paper, onto the screen panel. We must remember that PDAs and other handheld devices are able to capture only pen position signals, while pen tablets provide additional signals such as pressure and pen inclination angles. So we capture the signature through a pen tablet.

3. PIXEL POSITION MATCHING

After getting the signature, the drawn image is captured, processed, put into an array, and the pixel values are counted. After getting the pixel values of the original signature and the other sample signatures, the pixel values are matched and the matches are counted. If the matching rate is greater than or equal to 80% (>= 80%), the signature is authenticated; if it is below 80%, the signature is not authenticated.

The counter checks whether the bit positions of the original image match the bit positions of the other image or not. If a certain percentage or more matches (the threshold decided there), the signature is authenticated; otherwise it is not.

This method was implemented by me using Java: JSP for client-side coding, Servlets for the server side, and JavaScript for the rest of the program development.

3.1 Tools Used

Java, JSP, Servlet, JavaScript
IDE: NetBeans 6.1
Database: MySQL

3.2 Database

Images are stored as BLOB (Binary Large Object) values.
3.3 Flow Chart

Figure 1: putting the pixel values of an image into an array.

Figure 2: comparing the two byte arrays and taking the authentication decision.

In the flow charts above, fig. 1 represents how we put the pixel values of the images into arrays, and from fig. 2 we can understand the comparison of image 1's (the original image) byte array with image 2's byte array, counting the matches. If the matching value is greater than or equal to 80% the signature is authenticated; if the comparison result is less than 80% the signature is not authenticated.

First, store the original image and the image we want to compare with it (call them image 1 and image 2) into the database as byte arrays. Then declare two byte arrays, img1 and img2. Fetch the two images with respect to a clientId from the database and store them into these byte-array variables. The code for storing an image as a byte array and for the comparison is given in sections 3.6.1 and 3.6.2.

3.4 Authentication Algorithm

1. Store image 1 into the database as a byte array.
2. Store image 2 into the database as a byte array.
3. Declare two byte arrays, img1 and img2.
4. Fetch the two images with respect to a clientId from the database and store them into the above byte-array variables.
5. Declare an integer variable counterori and initialize it with img1's length.
6. Declare an integer variable counter and initialize it with 0.
7. Initialize a variable i as 0.
8. For i < img1.length:
       If img1[i] == img2[i] THEN
           counter = counter + 1;
9. res = (counter * 100) / counterori;
10. Show "res % Matched".
11. IF res >= 80 THEN
        show the message "Signature Authenticated".
    IF res < 80 THEN
        show the message "Signature is not Authenticated".

3.5 Bit-Pattern

Figure 3: bit pattern of the original signature. Figure 4: bit pattern of another signature.

Fig. 3 represents the bit pattern of the original signature and fig. 4 the bit pattern of another signature. We consider some bits to represent the original signature's bit pattern and then compare them with the bit pattern of the other signature.

Figure 5: bit-by-bit comparison of the two patterns.

Consider eight bits, divide them into groups of four, then divide the four into single bits, and compare each bit position.

4.1 ARTIFICIAL NEURAL NETWORKS

Artificial neural networks are relatively crude electronic models based on the neural structure of the brain, which basically learns from experience. It is natural proof that some problems beyond the scope of current computers are indeed solvable, and a promising way to develop machine solutions. This new approach to computing also provides a more graceful degradation during system overload than its more traditional counterparts. These biologically inspired methods of computing are thought to be the next major advancement in the computing industry.

Now, advances in biological research promise an initial understanding of the natural thinking mechanism. This research shows that brains store information as patterns. Some of these patterns are very complicated and allow us the ability to recognize individual faces from many different angles. This process of storing information as patterns, utilizing those patterns and then solving problems encompasses a new field in computing. This field, as mentioned before, does not utilize traditional programming but involves the creation of massively parallel networks and the training of those networks to solve specific problems. This field also utilizes words very different from traditional computing, words like behave, react, self-organize, learn, generalize and forget.

The power of the human mind comes from the sheer number of these basic components and the multiple connections between them. It also comes from genetic programming and learning.

The individual neurons are complicated. They have a myriad of parts, sub-systems, and control mechanisms. They convey information via a host of electrochemical pathways. There are over one hundred different classes of neurons, depending on the classification method used. Together these neurons and their connections form a process which is not binary, not stable and not synchronous. In short, it is nothing like the currently available electronic computers or even artificial neural networks.

These artificial neural networks try to replicate only the most basic elements of this complicated, versatile and powerful organism. They do it in a primitive way. But for the software engineer who is trying to solve problems, neural computing was never about replicating human brains. It is about machines and a new way to solve problems.

4.2. MODELS OF A NEURON

A neuron is an information processing unit that is fundamental to the operation of a neural network. The block diagram of fig. 3.1 shows the model of a neuron, which forms the basis for designing artificial neural networks.

Fig. 3.1 Nonlinear model of a neuron (inputs x1 ... xm, synaptic weights wk1 ... wkm, bias bk, summing junction, activation function φ(.), output yk).

4.2.1. THE BASIC ELEMENTS OF THE NEURONAL MODEL ARE:

1. A set of synapses or connecting links, each of which is characterized by a weight or strength of its own. Specifically, a signal xj at the input of synapse j connected to neuron k is multiplied by the synaptic weight wkj. Unlike a synapse in the brain, the synaptic weight of an artificial neuron may lie in a range that includes negative as well as positive values.

2. An adder for summing the input signals, weighted by the respective synapses of the neuron; the operation described here constitutes a linear combiner.

3. An activation function for limiting the amplitude of the output of the neuron.

The neuronal model of fig. 3.1 also includes an externally applied bias, denoted by bk. The bias bk has the effect of increasing or lowering the net input of the activation function, depending on whether it is positive or negative, respectively.

In mathematical terms, we may describe a neuron k by writing the following pair of equations:

uk = Σj wkj xj

and yk = φ(uk + bk)

where x1, x2, ..., xm are the input signals; wk1, wk2, ..., wkm are the synaptic weights of neuron k; uk is the linear combiner output due to the input signals; bk is the bias; φ(.) is the activation function; and yk is the output signal of the neuron. The use of the bias bk has the effect of applying an affine transformation to the output uk of the linear combiner in the model of fig. 3.1, as shown by

vk = uk + bk

In particular, depending on whether the bias bk is positive or negative, the relationship between the induced local field (activation potential) vk of neuron k and the linear combiner output uk is shifted up or down. Equivalently, we may write

vk = Σj wkj xj

yk = φ(vk)

where the synapse with input

x0 = +1

has weight wk0 = bk.

We may therefore reformulate the model of the neuron as in fig. 3.2. In this figure, the effect of the bias is accounted for by doing two things: 1. adding a new input signal fixed at +1, and 2. adding a new synaptic weight equal to the bias bk.

Fig. 3.2 Nonlinear model of a neuron with the bias represented as a fixed input x0 = +1 with synaptic weight wk0 = bk (inputs x1 ... xm, weights wk1 ... wkm, summing junction, activation function φ(.), output yk).
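
The computation performed by this neuron model can be sketched in Java as below; the inputs, weights and bias are illustrative values only, and a logistic function is assumed for the activation φ(.).

public class Neuron {
    // Computes yk = phi( sum_j wkj * xj + bk ) for a single neuron k.
    static double output(double[] x, double[] w, double bias) {
        double v = bias;                        // induced local field vk = uk + bk
        for (int j = 0; j < x.length; j++) {
            v += w[j] * x[j];                   // linear combiner uk
        }
        return 1.0 / (1.0 + Math.exp(-v));      // logistic activation phi(.)
    }

    public static void main(String[] args) {
        double[] x = {0.5, -1.0, 0.25};         // illustrative inputs x1..xm
        double[] w = {0.4, 0.1, -0.7};          // illustrative synaptic weights wk1..wkm
        System.out.println(output(x, w, 0.2));  // illustrative bias bk = 0.2
    }
}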

4.3. TRANSFER FUNCTION

The behavior of an ANN (Artificial Neural Network) depends on both the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories.

 Linear (or ramp)

 Threshold

 Sigmoid
For linear units, the output activity is proportional to the total weighted input.

For threshold units, the outputs are set at one of two levels, depending on whether the total input is greater than or less than some threshold value.

For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations.

Fig. 3.3 Log-Sigmoid Transfer Function (output a between 0 and +1 against net input n).

Fig. 3.4 Tan-Sigmoid Transfer Function (output a between -1 and +1 against net input n).

Fig. 3.5 Purelin Transfer Function (output a proportional to net input n).
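A minimal Java sketch of the three categories of transfer function follows; the threshold value of 0 and the particular sigmoid formulas are assumptions made for illustration.

public class TransferFunctions {
    static double linear(double n)     { return n; }                           // purelin: output proportional to input
    static double threshold(double n)  { return (n >= 0.0) ? 1.0 : 0.0; }      // hard limit around an assumed threshold of 0
    static double logSigmoid(double n) { return 1.0 / (1.0 + Math.exp(-n)); }  // smooth output in (0, +1)
    static double tanSigmoid(double n) { return Math.tanh(n); }                // smooth output in (-1, +1)

    public static void main(String[] args) {
        double n = 0.8;  // illustrative net input
        System.out.printf("linear=%.3f threshold=%.1f logsig=%.3f tansig=%.3f%n",
                linear(n), threshold(n), logSigmoid(n), tanSigmoid(n));
    }
}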
4.4. TRAINING METHODS

4.4.1. SUPERVISED TRAINING

In supervised training, both the inputs and the outputs are provided. The network then processes the inputs and compares its resulting outputs against the desired outputs. Errors are then propagated back through the system, causing the system to adjust the weights which control the network. This process occurs over and over as the weights are continually tweaked. The set of data which enables the training is called the "training set". During the training of a network the same set of data is processed many times as the connection weights are ever refined.

The current commercial network development packages provide tools to monitor how well an artificial neural network is converging on the ability to predict the right answer. These tools allow the training process to go on for days, stopping only when the system reaches some statistically desired point, or accuracy. However, some networks never learn. This could be because the input data does not contain the specific information from which the desired output is derived. Networks also don't converge if there is not enough data to enable complete learning. Ideally, there should be enough data so that part of the data can be held back as a test. Many layered networks with multiple nodes are capable of memorizing data. To determine whether the network is simply memorizing its data in some non-significant way, supervised training needs to hold back a set of data to be used to test the system after it has undergone its training (note: memorization is avoided by not having too many processing elements).

If a network simply can't solve the problem, the designer then has to review the inputs and outputs, the number of layers, the number of elements per layer, the connections between the layers, the summation and transfer functions, and even the initial weights themselves. The changes required to create a successful network constitute a process wherein the "art" of neural networking occurs.

Another part of the designer's creativity governs the rules of training. There are many laws (algorithms) used to implement the adaptive feedback required to adjust the weights during training. The most common technique is backward error propagation, more commonly known as back propagation.

Yet, training is not just a technique. It involves a "feel" and conscious analysis, to ensure that the network is not over-trained. Initially, an artificial neural network configures itself with the general statistical trends of the data. Later, it continues to "learn" about other aspects of the data, which may be spurious from a general viewpoint.

When finally the system has been correctly trained, and no further learning is needed, the weights can, if desired, be "frozen". In some systems this finalized network is then turned into hardware so that it can be fast. Other systems don't lock themselves in but continue to learn while in production use.

4.4.2. UNSUPERVISED OR ADAPTIVE TRAINING

The other type of training is called unsupervised training. In unsupervised training, the network is provided with inputs but not with desired outputs. The system itself must then decide what features it will use to group the input data. This is often referred to as self-organization or adaptation.

At the present time, unsupervised learning is not well understood. This adaptation to the environment is the promise which would enable science-fiction types of robots to continually learn on their own as they encounter new situations and new environments. Life is filled with situations where exact training sets do not exist. Some of these situations involve military action where new combat techniques and new weapons might be encountered. Because of this unexpected aspect of life and the human desire to be prepared, there continues to be research into, and hope for, this field. Yet, at the present time, the vast bulk of neural network work is in systems with supervised learning. Supervised learning is achieving results.

One of the leading researchers into unsupervised training is Teuvo Kohonen, an electrical engineer at the Helsinki University of Technology. He has developed a self-organizing network, sometimes called an auto-associator, that learns without the benefit of knowing the right answer. It is an unusual-looking network in that it contains one single layer with many connections. The weights for those connections have to be initialized and the inputs have to be normalized. The neurons are set up to compete in a winner-take-all function.

4.5. THE BACK PROPAGATION ALGORITHM

Multilayer feed-forward networks form an important class of neural networks. Typically the network consists of a set of sensory units (source nodes) that constitute the input layer, one or more hidden layers of computation nodes, and an output layer of computation nodes. The input signal propagates through the network in a forward direction, on a layer-by-layer basis. These neural networks are commonly referred to as multilayer perceptrons (MLPs).

Multilayer perceptrons have been applied successfully to solve some difficult and diverse problems by training them in a supervised manner with a highly popular algorithm known as the error back-propagation algorithm. This algorithm is based on the error-correction learning rule. Basically, error back-propagation learning consists of two passes through the different layers of the network: a forward pass and a backward pass. In the forward pass an activity pattern (input vector) is applied to the sensory nodes of the network, and its effect propagates through the network layer by layer. Finally a set of outputs is produced as the actual response of the network. During the forward pass the synaptic weights of the network are all fixed. During the backward pass, on the other hand, the synaptic weights are all adjusted in accordance with an error-correction rule. Specifically, the actual response of the network is subtracted from a desired (target) response to produce an error signal. This error signal is then propagated backward through the network, against the direction of the synaptic connections, hence the name "error back-propagation". The synaptic weights are adjusted to make the actual response of the network move closer to the desired response of the network in a statistical sense. The error back-propagation algorithm is also referred to as the back-propagation algorithm.

4.5.1. NOTATION

 The indices i, j and k refer to different neurons in the network; with signals propagating through the network from left to right, neuron j lies in a layer to the right of neuron i, and neuron k lies in a layer to the right of neuron j when neuron j is a hidden unit.
 In iteration (time step) n, the nth training pattern (example) is presented to the network.
 The symbol ξ(n) refers to the instantaneous sum of error squares, or error energy, at iteration n. The average of ξ(n) over all values of n (i.e., the entire training set) yields the average error energy ξav.
 The symbol ej(n) refers to the error at the output of neuron j for iteration n.
 The symbol dj(n) refers to the desired response for neuron j and is used to compute ej(n).
 The symbol yj(n) refers to the function signal appearing at the output of neuron j at iteration n.
 The symbol wji(n) denotes the synaptic weight connecting the output of neuron i to the input of neuron j at iteration n. The correction applied to this weight at iteration n is denoted by Δwji(n).
 The induced local field (i.e., weighted sum of all synaptic inputs plus bias) of neuron j at iteration n is denoted by vj(n); it constitutes the signal applied to the activation function associated with neuron j.
 The activation function describing the input-output functional relationship of the nonlinearity associated with neuron j is denoted by φj(.).
 The bias applied to neuron j is denoted by bj; its effect is represented by a synapse of weight wj0 = bj connected to a fixed input equal to +1.
 The ith element of the input vector (pattern) is denoted by xi(n).
 The kth element of the overall output vector (pattern) is denoted by ok(n).
 The learning-rate parameter is denoted by η.
 The symbol ml denotes the size (i.e., number of nodes) of layer l of the multilayer perceptron; l = 0, 1, ..., L, where L is the "depth" of the network. Thus m0 denotes the size of the input layer, m1 denotes the size of the first hidden layer, and mL denotes the size of the output layer. The notation mL = M is also used.

The error signal at the output of neuron j at iteration n (i.e., presentation of the nth training example) is defined by

ej(n) = dj(n) − yj(n),  neuron j is an output node ..........(4.1)

We define the instantaneous value of the error energy for neuron j as (1/2) ej²(n). Correspondingly, the instantaneous value ξ(n) of the total error energy is obtained by summing (1/2) ej²(n) over all neurons in the output layer; these are the only "visible" neurons for which error signals can be calculated directly. We may thus write

ξ(n) = (1/2) Σ_{j∈C} ej²(n) ..........(4.2)

where the set C includes all the neurons in the output layer of the network. Let N denote the total number of patterns (examples) contained in the training set. The average squared error energy is obtained by summing ξ(n) over all n and then normalizing with respect to the training-set size N, as shown by

ξav = (1/N) Σ_{n=1}^{N} ξ(n) ..........(4.3)

The instantaneous error energy ξ(n), and therefore the average squared error energy ξav, is a function of all the free parameters (i.e., synaptic weights and bias levels) of the network. For a given training set, ξav represents the cost function as a measure of learning performance. The objective of the learning process is to adjust the free parameters of the network to minimize ξav. Specifically, we consider a simple method of training in which the weights are updated on a pattern-by-pattern basis until one epoch, that is, one complete presentation of the entire training set, has been dealt with. The adjustments to the weights are made in accordance with the respective errors computed for each pattern presented to the network. The arithmetic average of these individual weight changes over the training set is therefore an estimate of the true change that would result from modifying the weights based on minimizing the cost function ξav over the entire training set.

The induced local field vj(n) produced at the input of the activation function associated with neuron j is

vj(n) = Σ_{i=0}^{m} wji(n) yi(n) ..........(4.4)

where m is the total number of inputs (excluding the bias) applied to neuron j. The synaptic weight wj0 (corresponding to the fixed input y0 = +1) equals the bias bj applied to neuron j. Hence the function signal yj(n) appearing at the output of neuron j at iteration n is

yj(n) = φj(vj(n)) ..........(4.5)

The back-propagation algorithm applies a correction Δwji(n) to the synaptic weight wji(n), which is proportional to the partial derivative ∂ξ(n)/∂wji(n). According to the chain rule of calculus, we may express this gradient as

∂ξ(n)/∂wji(n) = [∂ξ(n)/∂ej(n)] [∂ej(n)/∂yj(n)] [∂yj(n)/∂vj(n)] [∂vj(n)/∂wji(n)] ..........(4.6)

The partial derivative ∂ξ(n)/∂wji(n) represents a sensitivity factor, determining the direction of search in weight space for the synaptic weight wji.

Differentiating both sides of eqn (4.2) with respect to ej(n), we get

∂ξ(n)/∂ej(n) = ej(n) ..........(4.7)

Differentiating both sides of eqn (4.1) with respect to yj(n), we get

∂ej(n)/∂yj(n) = −1 ..........(4.8)

Differentiating both sides of eqn (4.5) with respect to vj(n), we get

∂yj(n)/∂vj(n) = φ'j(vj(n)) ..........(4.9)

Finally, differentiating eqn (4.4) with respect to wji(n) yields

∂vj(n)/∂wji(n) = yi(n) ..........(4.10)

Combining these results in eqn (4.6) gives

∂ξ(n)/∂wji(n) = −ej(n) φ'j(vj(n)) yi(n) ..........(4.11)

The correction Δwji(n) applied to wji(n) is defined by the delta rule

Δwji(n) = −η ∂ξ(n)/∂wji(n) ..........(4.12)

where η is the learning-rate parameter of the back-propagation algorithm. The use of the minus sign accounts for gradient descent in weight space. Using eqn (4.11) in (4.12) yields

Δwji(n) = η δj(n) yi(n) ..........(4.13)

where the local gradient δj(n) is defined by

δj(n) = −∂ξ(n)/∂vj(n) = ej(n) φ'j(vj(n)) ..........(4.14)

We note that a key factor involved in the calculation of the weight adjustment Δwji(n) is the error signal ej(n) at the output of neuron j. In this context we may identify two distinct cases, depending on where in the network neuron j is located.

CASE 1: NEURON J IS AN OUTPUT NODE

When neuron j is located in the output layer of the network, it is supplied with a desired response of its own. We may use eqn (4.1) to compute the error signal ej(n) associated with this neuron; having determined ej(n), it is a straightforward matter to compute the local gradient δj(n) using eqn (4.14).

CASE 2: NEURON J IS A HIDDEN NODE

When neuron j is located in a hidden layer of the network, there is no specified desired response for that neuron. Accordingly, the error signal for a hidden neuron has to be determined recursively in terms of the error signals of all the neurons to which that hidden neuron is directly connected. Following eqn (4.14), we may define the local gradient δj(n) for hidden neuron j as

δj(n) = −[∂ξ(n)/∂yj(n)] [∂yj(n)/∂vj(n)] = −[∂ξ(n)/∂yj(n)] φ'j(vj(n))

With the total error energy written over the output neurons k ∈ C,

ξ(n) = (1/2) Σ_{k∈C} ek²(n)

we obtain

∂ξ(n)/∂yj(n) = Σ_k ek(n) [∂ek(n)/∂vk(n)] [∂vk(n)/∂yj(n)]

with

ek(n) = dk(n) − φk(vk(n))

Finally we get the back-propagation formula for the local gradient δj(n):

δj(n) = φ'j(vj(n)) Σ_k δk(n) wkj(n),  neuron j is hidden ..........(4.17)

We now summarize the relations that we have derived for the back-propagation algorithm. First, the correction Δwji(n) applied to the synaptic weight connecting neuron i to neuron j is defined by the delta rule:

[weight correction Δwji(n)] = [learning-rate parameter η] × [local gradient δj(n)] × [input signal of neuron j, yi(n)]

Second, the local gradient δj(n) depends on whether neuron j is an output node or a hidden node:

1. If neuron j is an output node, δj(n) equals the product of the derivative φ'j(vj(n)) and the error signal ej(n), both of which are associated with neuron j.
2. If neuron j is a hidden neuron, δj(n) equals the product of the associated derivative φ'j(vj(n)) and the weighted sum of the δs computed for the neurons in the next hidden or output layer that are connected to neuron j.
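
A minimal Java sketch of the two cases follows, assuming a logistic activation φ(v) = 1/(1 + e^(-v)), for which φ'(v) = y(1 − y); the method names are illustrative only.

public class LocalGradient {
    // Case 1: output neuron j -- delta_j = ej * phi'(vj) = (dj - yj) * yj * (1 - yj)
    static double outputDelta(double desired, double y) {
        return (desired - y) * y * (1.0 - y);
    }

    // Case 2: hidden neuron j -- delta_j = phi'(vj) * sum_k delta_k * w_kj
    // deltasNext are the local gradients of the next layer,
    // weightsToNext are the weights from neuron j to that layer.
    static double hiddenDelta(double y, double[] deltasNext, double[] weightsToNext) {
        double sum = 0.0;
        for (int k = 0; k < deltasNext.length; k++) {
            sum += deltasNext[k] * weightsToNext[k];
        }
        return y * (1.0 - y) * sum;
    }
}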

4.5.2. PRINCIPLES OF TRAINING A MULTI-LAYER NEURAL NETWORK USING BACKPROPAGATION

This part describes the teaching process of a multi-layer neural network employing the back-propagation algorithm. To illustrate the process, a three-layer neural network with two inputs and one output, as shown in the picture below, is used:

Each neuron is composed of two units. The first unit adds the products of the weight coefficients and the input signals. The second unit realises a nonlinear function, called the neuron activation function. Signal e is the adder output signal, and y = f(e) is the output signal of the nonlinear element. Signal y is also the output signal of the neuron.

To teach the neural network we need a training data set. The training data set consists of input signals (x1 and x2) assigned with the corresponding target (desired output) z. The network training is an iterative process. In each iteration the weight coefficients of the nodes are modified using new data from the training data set. The modification is calculated using the algorithm described below. Each teaching step starts with forcing both input signals from the training set. After this stage we can determine the output signal values for each neuron in each network layer. The pictures below illustrate how the signal propagates through the network. Symbols w(xm)n represent the weights of the connections between network input xm and neuron n in the input layer. Symbol yn represents the output signal of neuron n.

Propagation of signals through the hidden layer: symbols wmn represent the weights of the connections between the output of neuron m and the input of neuron n in the next layer.

Propagation of signals through the output layer.

In the next algorithm step the output signal of the network y is compared with the desired output value (the target), which is found in the training data set. The difference is called the error signal δ of the output-layer neuron.

It is impossible to compute the error signal for internal neurons directly, because the output values of these neurons are unknown. For many years an effective method for training multilayer networks was unknown. Only in the middle eighties was the back-propagation algorithm worked out. The idea is to propagate the error signal δ (computed in a single teaching step) back to all neurons whose output signals were inputs for the discussed neuron.

The weight coefficients wmn used to propagate the errors back are equal to those used during computing the output value. Only the direction of data flow is changed (signals are propagated from outputs to inputs one after the other). This technique is used for all network layers. If the propagated errors come from several neurons they are added. The illustration is below:

When the error signal for each neuron is computed, the weight coefficients of each neuron's input nodes may be modified. In the formulas below, df(e)/de represents the derivative of the activation function of the neuron whose weights are modified.

The coefficient η affects the network teaching speed. There are a few techniques to select this parameter. The first method is to start the teaching process with a large value of the parameter; while the weight coefficients are being established the parameter is decreased gradually. The second, more complicated, method starts teaching with a small parameter value; during the teaching process the parameter is increased as the teaching advances and then decreased again in the final stage. Starting the teaching process with a low parameter value makes it possible to determine the signs of the weight coefficients.
4.5.3 Back propagation Algorithm

Initialize the weights

Repeat

For each training pattern

Train on the pattern

End

Until the error is acceptably low

Step 1: Normalize the inputs and outputs with respect to their maximum values. It has been shown that neural networks work better if the inputs and outputs lie between 0 and 1. For each training pair, assume there are 'l' inputs given by {I}I (l x 1) and 'n' outputs {O}O (n x 1) in normalized form.

Step 2: Assume the number of neurons m in the hidden layer to lie in the range l < m < 2l.

Step 3: [V] represents the weights of the synapses connecting the input neurons and the hidden neurons, and [W] represents the weights of the synapses connecting the hidden neurons and the output neurons. Initialize the weights to small random values, usually from -1 to 1. For general problems, λ can be assumed as 1 and the threshold values can be taken as zero.

[V]0 = [random weights]
[W]0 = [random weights]
[ΔV]0 = [ΔW]0 = [0]

Step 4: For the training data, present one set of inputs and outputs. Present the pattern to the input layer. Using a linear activation function, the output of the input layer may be evaluated as

{O}I = {I}I  (l x 1)

Step 5: Compute the inputs to the hidden layer by multiplying with the corresponding synaptic weights:

{I}H = [V]T {O}I  (m x 1) = (m x l)(l x 1)

Step 6: Let the hidden-layer units evaluate their outputs using the sigmoidal function:

{O}H : OHi = 1 / (1 + e^(-IHi))  (m x 1)

Step 7: Compute the inputs to the output layer by multiplying with the corresponding synaptic weights:

{I}O = [W]T {O}H  (n x 1) = (n x m)(m x 1)

Step 8: Let the output-layer units evaluate their outputs using the sigmoidal function:

{O}O : OOi = 1 / (1 + e^(-IOi))

The above is the network output.

Step 9: Calculate the error as the difference between the network output and the desired output for the ith training set:

Ep = sqrt( Σj (Tj − Ooj)² / n )

Step 10: Find {d} as

{d} : dk = (Tk − Ook) Ook (1 − Ook)  (n x 1)

Step 11: Find the [Y] matrix as

[Y] = {O}H ⟨d⟩  (m x n) = (m x 1)(1 x n)

Step 12: Find [ΔW]t+1 = α[ΔW]t + η[Y]  (m x n)

Step 13: Find {e} = [W]{d}  (m x 1) = (m x n)(n x 1)

{d*} : d*i = ei (OHi)(1 − OHi)  (m x 1)

Find the [X] matrix as

[X] = {O}I ⟨d*⟩ = {I}I ⟨d*⟩  (l x m) = (l x 1)(1 x m)

Step 14: Find [ΔV]t+1 = α[ΔV]t + η[X]  (l x m)

Step 15: Find [V]t+1 = [V]t + [ΔV]t+1

[W]t+1 = [W]t + [ΔW]t+1

Step 16: Find the error rate as

error rate = Σ Ep / nset

Step 17: Repeat steps 4 to 16 until the convergence in the error rate is less than the tolerance value.

End algorithm BPN
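
A compact Java sketch of one pattern update (steps 4 to 15) is given below. It assumes a single hidden layer, sigmoidal units, zero thresholds as in Step 3, and illustrative values for the learning rate η (eta) and momentum coefficient α (alpha); it is a restatement of the steps above, not the project's actual code.

public class BpnSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // oI: normalized inputs (length l), t: normalized targets (length n),
    // V: input-to-hidden weights (l x m), W: hidden-to-output weights (m x n),
    // dV, dW: previous weight changes kept for the momentum term.
    // Returns the error Ep for this pattern (Step 9).
    static double trainPattern(double[] oI, double[] t, double[][] V, double[][] W,
                               double[][] dV, double[][] dW, double eta, double alpha) {
        int l = oI.length, m = W.length, n = t.length;

        // Steps 5-6: hidden-layer input {I}H = [V]T {O}I and output {O}H
        double[] oH = new double[m];
        for (int j = 0; j < m; j++) {
            double s = 0.0;
            for (int i = 0; i < l; i++) s += V[i][j] * oI[i];
            oH[j] = sigmoid(s);
        }
        // Steps 7-8: output-layer input {I}O = [W]T {O}H and output {O}O
        double[] oO = new double[n];
        for (int k = 0; k < n; k++) {
            double s = 0.0;
            for (int j = 0; j < m; j++) s += W[j][k] * oH[j];
            oO[k] = sigmoid(s);
        }
        // Steps 9-10: error and {d}
        double err = 0.0;
        double[] d = new double[n];
        for (int k = 0; k < n; k++) {
            err += (t[k] - oO[k]) * (t[k] - oO[k]);
            d[k] = (t[k] - oO[k]) * oO[k] * (1.0 - oO[k]);
        }
        // Steps 11-12: output-layer weight change [dW] = alpha*[dW] + eta*[Y]
        for (int j = 0; j < m; j++)
            for (int k = 0; k < n; k++)
                dW[j][k] = alpha * dW[j][k] + eta * oH[j] * d[k];
        // Steps 13-14: hidden-layer gradients {d*} and weight change [dV]
        double[] dStar = new double[m];
        for (int j = 0; j < m; j++) {
            double e = 0.0;
            for (int k = 0; k < n; k++) e += W[j][k] * d[k];   // {e} = [W]{d}
            dStar[j] = e * oH[j] * (1.0 - oH[j]);
        }
        for (int i = 0; i < l; i++)
            for (int j = 0; j < m; j++)
                dV[i][j] = alpha * dV[i][j] + eta * oI[i] * dStar[j];
        // Step 15: apply the corrections
        for (int j = 0; j < m; j++)
            for (int k = 0; k < n; k++) W[j][k] += dW[j][k];
        for (int i = 0; i < l; i++)
            for (int j = 0; j < m; j++) V[i][j] += dV[i][j];

        return Math.sqrt(err / n);   // Step 9 error measure for this pattern
    }
}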

4.6 Flow Chart

Figure 6: creating the final adjusted weight (wt) table.

Figure 7: producing the network output (G4).

Figure 8: the decision branch of the training flow.
4.7 Algorithm

Six algorithms required for this project are discussed here.

4.7.1 How to calculate the gray value of any bitmap image:

GRAYVALUECALCULATION(colorR, colorG, colorB)
INPUT: each bitmap pixel (i.e. R, G, B).
OUTPUT: the GRAY VALUE of each input pixel.
. . . colorR, colorG, colorB represent the Red, Green and Blue channels respectively.
repeat
    gradient = (colorR + colorG + colorB) / 3 ....(1)
    . . . Each R, G, B value of a bitmap pixel is typecast from unsigned char to unsigned int before equation (1) is computed.
    . . . When gradient is output, it must be typecast from float back to unsigned char (say initially gradient was of float type).
until the termination condition is met, i.e. the image file is exhausted;
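
A minimal Java sketch of equation (1); obtaining the R, G, B channel values from the bitmap file is assumed to happen elsewhere.

public class GrayValue {
    // Returns the gray value of one pixel as in equation (1): (R + G + B) / 3.
    static int gray(int colorR, int colorG, int colorB) {
        // The unsigned char to unsigned int typecast of the pseudocode is implicit here,
        // since the channel values are already passed in as ints in the range 0..255.
        float gradient = (colorR + colorG + colorB) / 3.0f;
        return (int) gradient;   // cast back to a byte-sized value before output
    }

    public static void main(String[] args) {
        System.out.println(gray(200, 120, 40));   // illustrative pixel, prints 120
    }
}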
4.7.2 How to calculate a random number between -0.5 and 0.5:

RANDOMNUMBERGENERATION()
INPUT: the library function rand().
OUTPUT: random numbers in the range -0.5 to 0.5.
for i = 0 to 20 do
    . . . Here 0 to 20 means that the algorithm gives 21 random numbers within the specific range -0.5 to 0.5
    a = rand() .......... (2)
    b = a mod 1000 .......... (3)
    f = -0.5 + b / 1000 .......... (4)
end
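
A minimal Java sketch of the same procedure; the use of java.util.Random in place of the C-style rand() is an assumption.

import java.util.Random;

public class RandomWeights {
    public static void main(String[] args) {
        Random rnd = new Random();
        for (int i = 0; i <= 20; i++) {               // 21 numbers, as in the pseudocode
            int a = rnd.nextInt(Integer.MAX_VALUE);   // a = rand(), non-negative
            int b = a % 1000;                         // b = a mod 1000
            double f = -0.5 + b / 1000.0;             // f lies in the range -0.5 to 0.5
            System.out.println(f);
        }
    }
}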

4.7.3 TRANSPOSE OF A MATRIX:

TRANSPOSE(A[m][n])
INPUT: A, an m x n matrix.
OUTPUT: B, the transpose of that matrix (n x m).
for row = 0 to (m - 1) do
    for col = 0 to (n - 1) do
        B[col][row] = A[row][col] ......(5)
    end
end

4.7.4 MULTIPLICATION BETWEEN TWO MATRICES

MULTIPLICATION(A[m][n], B[k][q])
INPUT: A, an m x n matrix, and B, a k x q matrix (with n = k).
OUTPUT: C, an m x q matrix.
for row = 0 to (m - 1) do
    for col = 0 to (q - 1) do
        C[row][col] = 0 ......(6)
        for ip = 0 to (n - 1) do
            C[row][col] = C[row][col] + A[row][ip] * B[ip][col] ......(7)
        end
    end
end

4.7.5 SIGMOID OUTPUT OF A MATRIX

SIGMOID(A[m][1])
INPUT: A, an m x 1 matrix.
OUTPUT: B, an m x 1 matrix.
for row = 0 to (m - 1) do
    deno = exp(A[row][0] * (-1)) ......(8)
    B[row][0] = 1.0 / (1.0 + deno) ......(9)
end
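
The three routines of sections 4.7.3 to 4.7.5 can be sketched together in Java as follows; this is a minimal illustration and assumes the matrix dimensions are compatible.

public class MatrixUtils {
    // 4.7.3: B = transpose of A (A is m x n, B is n x m)
    static double[][] transpose(double[][] a) {
        int m = a.length, n = a[0].length;
        double[][] b = new double[n][m];
        for (int row = 0; row < m; row++)
            for (int col = 0; col < n; col++)
                b[col][row] = a[row][col];
        return b;
    }

    // 4.7.4: C = A * B (A is m x n, B is n x q, C is m x q)
    static double[][] multiply(double[][] a, double[][] b) {
        int m = a.length, n = a[0].length, q = b[0].length;
        double[][] c = new double[m][q];
        for (int row = 0; row < m; row++)
            for (int col = 0; col < q; col++)
                for (int ip = 0; ip < n; ip++)
                    c[row][col] += a[row][ip] * b[ip][col];
        return c;
    }

    // 4.7.5: element-wise sigmoid of an m x 1 column vector
    static double[][] sigmoid(double[][] a) {
        double[][] b = new double[a.length][1];
        for (int row = 0; row < a.length; row++) {
            double deno = Math.exp(-a[row][0]);
            b[row][0] = 1.0 / (1.0 + deno);
        }
        return b;
    }
}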
4.7.6 BACKPROPAGATION ALGORITHM

BACKPROPAGATION(training_examples, η, n_in, n_out, n_hidden)

Each training example is a pair of the form (x, t), where x is the vector of network input values and t is the vector of target network output values. η is the learning rate (e.g. 0.05), n_in is the number of network inputs, n_hidden the number of units in the hidden layer, and n_out the number of output units.

. . . Create a feed-forward network with n_in inputs, n_hidden units in the hidden layer, and n_out output units.
. . . Initialize all network weights to small random numbers (e.g. between -0.05 and 0.05).
. . . For each (x, t) in training_examples, do:

repeat
    Propagate the input forward through the network:
    1. Input the instance x to the network and compute the output o_u of every unit u in the network.
    Propagate the errors backward through the network:
    2. For each network output unit k, calculate its error term
           δk ← ok (1 − ok)(tk − ok) ..........(10)
    3. For each hidden unit h, calculate its error term
           δh ← oh (1 − oh) Σ_{k ∈ outputs} wkh δk ..........(11)
    4. Update each network weight wji
           wji ← wji + Δwji, where Δwji = η δj xji ..........(12)
until the termination condition is met, i.e. training_examples is exhausted and E ≤ ε, where ε is very small;

5.1 Picture Password Authentication Scheme:

To address the problems with traditional username-password authentication, alternative authentication methods, such as biometrics, have been used. In this section, however, we will focus on another alternative: using pictures as passwords.

A Picture Password is an authentication system that works by


having the user select from images, in a specific order, presented in a
graphical user interface (GUI). For this reason, the graphical-password
approach is sometimes called graphical user authentication (GUA).

Adequate user authentication is a persistent problem, particularly with


handheld devices such as Personal Digital Assistants (PDAs), which
tend to be highly personal and at the fringes of an organization's
influence. Yet, these devices are being used increasingly in corporate
settings where they pose a security risk, not only by containing sensitive
information, but also by providing the means to access such information
over wireless network interfaces. User authentication is the first line of
defense for a lost or stolen PDA. However, motivating users to enable
simple PIN or password mechanisms and periodically update their
authentication information is a constant struggle. This paper describes
a general-purpose mechanism for authenticating a user to a PDA using
a visual login technique called Picture Password. The underlying
rationale is that image recall is an easy and natural way for users to
authenticate, removing a serious barrier to compliance with
organizational policy. Features of Picture Password include style
dependent image selection, password reuse, and embedded salting,
which overcome a number of problems with knowledge-based
authentication for handheld devices. Though designed specifically for handheld devices, Picture Password is also suitable for notebooks, workstations, and other computational devices.

The Picture Password authentication mechanism has two distinct parts:


the initial password enrollment and subsequent password verification.
During enrollment, a user selects a theme identifying the thumbnail
photos to be applied and then registers a sequence of thumbnail images
that are used to derive the associated password. When the device is
powered on or booted up, the user must enter the currently enrolled
image sequence for verification to gain access to the device. After a
successful authentication, the user may change the password, selecting
a new sequence and/or theme.
Picture Password offers benefits over PINs and textual passwords,
especially for the visually inclined user. As with textual passwords, a
similar password length and alphabet size is used. However, instead of
having to memorize and enter a string of random-like alphanumeric
characters, a sequence of thumbnail images must be selected and
retained. Experimental results suggest that human visual memory is well
suited to such visual and cognitive tasks [Mel01, Gol71]. Moreover, an
image sequence that has some meaning to the individual user (e.g.,
logos of sports league teams in order of preference, one's family
members in order of birth, or vacation spots in order visited) can be
used. If forgotten, the sequence may be reconstructed from the inherent
visual cues. The display interface presents images in an easy-to-select
size, reducing error entries. The underlying mechanism, which handles
random character code assignment to images, password composition,
enrollment, and verification, is completely hidden from the user.
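
As a hedged illustration of how a tapped image sequence could be composed into a password string, consider the sketch below; the character codes, cell numbers and class name are assumptions made for illustration, not the actual Picture Password implementation.

import java.util.List;

public class PicturePasswordSketch {
    // Illustrative only: assign each of the 30 thumbnail cells a character code
    // and build the password from the sequence of cells the user taps.
    static final String CODES = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123";   // 30 codes, one per cell

    public static void main(String[] args) {
        List<Integer> tappedCells = List.of(4, 17, 2, 29, 11, 7, 23);   // illustrative 7-image sequence
        StringBuilder password = new StringBuilder();
        for (int cell : tappedCells) {
            password.append(CODES.charAt(cell));   // composed password, hidden from the user
        }
        System.out.println("Derived password: " + password);
    }
}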

The presentation of images to the user for selection is based


on tiling a portion of the user’s graphical interface window. Various
ways exist to tile a surface with both regular and irregular patterns.
Picture Password uses squares of identical size (40 x 40 pixels),
grouped into a 5 x 6 matrix of elements. Cells of this size provide a clear
recognizable image that is easily selectable for most users. Figure 9
illustrates the PDA screen image for the Cats & Dogs theme, one of three predefined themes. The message area at the top of the window
guides the user’s actions, while the display area in the center displays
thumbnail images selectable with a single tap of the stylus. The controls
at the bottom allow the user to clear out any incorrect input entered or
submit the entered image sequence for verification. Selecting and
submitting the correct sequence of thumbnail images authenticates the
user to the device.

Figure 9: Picture Password Cats and Dogs Theme

Picture Password allows users flexibility in choosing a predefined


theme that suits their personality and taste or providing their own set of
images for display. All thumbnail images must be in a predefined digital
format, which can be created using an image manipulation tool such as
Photoshop or GIMP. Besides a random layout of individual thumbnail
images, several thumbnail images may be structured to compose a
single composite image as in a mosaic, where each thumbnail image
contributes a portion to some larger image. Figure 10 illustrates the Sea & Shore theme, where all 30 thumbnail images in the display area form
a single contiguous image.

Figure 10: Sea & Shore Theme

5.2 Password Strength


As mentioned earlier, with 30 thumbnail images to choose from, the effective size of the alphabet is 930 (30 + (30*30)). Passwords formed with so large an alphabet space are quite strong. Thus, passwords 7 entries long have 930^7 possible values, for a password space of approximately 6.017009e+20, which is an order of magnitude greater than that for 10-character-long alphanumeric passwords formed from the 95 printable ASCII characters, which is 95^10 or approximately 5.987369e+19. The general strength relationship between visual passwords formed from a 30-element matrix and textual passwords formed from the 95 printable ASCII characters is approximately

Npp = ⌈ (2/3) * Ntp ⌉,

where Ntp is the required character length for textual password input, Npp is the corresponding input sequence length required for Picture Password, and ⌈ x ⌉ is the "ceiling" function. In simple terms this
means that image sequences formed with dual selection styles require
approximately one-third less length than that of a traditional
alphanumeric password. This presumes, of course, that just as
additional keystrokes are needed to select special and capital characters
on a keyboard, a comparable number of additional strokes are used
when forming an image sequence involving paired image selections.
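
The figures above can be checked with a short Java calculation (illustrative only).

import java.math.BigInteger;

public class PasswordSpace {
    public static void main(String[] args) {
        // Picture Password: 30 single selections + 30*30 paired selections = 930 symbols
        BigInteger pictureAlphabet = BigInteger.valueOf(30 + 30 * 30);
        BigInteger pictureSpace = pictureAlphabet.pow(7);          // 7-entry picture password

        // Textual password: 95 printable ASCII characters, 10 characters long
        BigInteger textSpace = BigInteger.valueOf(95).pow(10);

        System.out.println("930^7 = " + pictureSpace);             // approximately 6.0e20
        System.out.println("95^10 = " + textSpace);                // approximately 6.0e19
    }
}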

5.3 FLOW CHART

Figure 11: select pictures from the picture code, get the pixel values of each picture, and store the pixel values of each picture sequentially into an array.

Figure 12: compare array xi with array yi; if the two arrays match, the user can log in, otherwise the user cannot log in.
First, store the original images and the images we want to compare with them (call them image xi and image yi) into the database as byte arrays. Fetch the sequential images with respect to a clientId from the database and store them into byte-array variables. If xi matches yi then the user can log in.

6. Supports Multiple Capturing Devices

Source: PenOp Smartpen. These are special-purpose devices used to capture the signature dynamics. Both are wireless. Special pens are able to capture movements in all 3 dimensions. Tablets have two significant disadvantages. First, the resulting digitalized signature looks different from the usual user signature. And second, while signing, the user does not see what he/she has written so far. He/she has to look at the computer monitor to see the signature. This is a considerable drawback for many (inexperienced) users. Some special pens work like normal pens; they have an ink cartridge inside and can be used to write with them on paper.

E-pad, Smartpen, Tablet

The E-pad device shows the signature on its digital display, while the Smartpen has got its own ink cartridge and can be used to write onto any paper.

8.1 APPLICATIONS IN VARIOUS FIELDS

 Citizen Identification:
Identify/authenticate citizens interacting with government agencies

 PC / Network Access:
Secure access to PCs, networks and other computer resources

 Physical Access / Time and Attendance:


Secure access to a given area at a given time

 Surveillance and Screening:


Identify/authenticate individuals present in a given location

 Retail / ATM / Point of Sale:


Provide identification/authentication for in-person transactions for
goods/services

 E-Commerce / Telephony:
Provide identification/authentication for remote transactions for
goods/services

 Criminal Identification:
Identify/verify individuals in law enforcement applications

In each of those applications, biometric systems can be used to either replace or complement existing authentication methods.
 Government Sector
 Travel and Transportation
 Financial Sector
 Health Care
 Law Enforcement
 SPEECH & VISION RECOGNITION SYSTEMS: not new, but neural networks are becoming increasingly a part of such systems. They are used as a system component, in conjunction with traditional computers.

 PEN PCs: where one can write on a tablet, and the writing will be recognized and translated into (ASCII) text.

9.1 ADVANTAGES

 Sign and seal documents using your own signature
 Cannot be lost or stolen
 No PIN codes or passwords to memorize
 Highly resistant to impostor attempts
 Non-invasive (as opposed to other identification methods)
 Compliant with digital signature laws
 No change in habits – sign as you always did
 Eliminates paperwork costs:
 Shipping & handling
 Scanning
 Labor costs
 Storage & retrieval
 Seamless integration with current workflow procedures of the organization
 Supports multiple capturing devices

CONCLUSION

We have presented a new approach for online signature authentication, which is based on image processing. The matching process was then performed using fairly standard statistical correlation methods. Tests performed on a reasonably large database have shown good results for our method, both for verification and identification problems. By choosing adequate event feature sets, the equal-error rate can be reduced further. In summary, this approach can:

1) Provide a solution to avoid forgery

2) Ensure the confidentiality of information in the field of Information Technology security

3) Make a signature scheme more secure by making its security model stronger

3.6.1. Storing images into a byte array:

// Write the uploaded file part into an in-memory stream, then convert it
// to a byte array and keep it as either the client picture or the signature.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
filep.writeTo(baos);
if (filep.getName().equals("clientimage")) {
    clientpicture = baos.toByteArray();
} else if (filep.getName().equals("clientsignature")) {
    clientsign = baos.toByteArray();
}

3.6.2. Image Comparison:

byte img1[] = null, img2[] = null;
int countori = 0, counteq = 0;   // total bytes of the original image / matching bytes
ResultSet rs = null;
PrintWriter out = response.getWriter();
try {
    try {
        // Fetch the original image and the image to compare for this client.
        psmt = con.prepareStatement("select image,imagecmp from clientimage where clientid=?");
        psmt.setString(1, clid);
        rs = psmt.executeQuery();
        while (rs.next()) {
            img1 = rs.getBytes("image");
            img2 = rs.getBytes("imagecmp");
        }
    } catch (Exception e) {
        System.out.println("Exception :: " + e);
    }
    countori = img1.length;                    // number of positions in the original image
    for (int i = 0; i < img1.length; i++) {    // count positions where both images agree
        if (i >= img2.length)
            break;
        if (img1[i] == img2[i]) {
            counteq++;
        }
    }
    System.out.println("Img1: " + countori + " Img2: " + counteq);
    double res = (counteq * 100.0) / countori; // matching rate as a percentage
    message = Math.floor(res) + "% Matched";
    System.out.println("Message:: " + message);
    // A rate of 80% or more means the signature is authenticated (see section 3.4).
