1. Introduction
In this project, I focus solely on authentication through online
handwritten signature verification, which encompasses dynamic features
such as speed, velocity, and pressure in addition to the finished
signature’s static shape (image). Tests performed on a reasonably
large database have shown good results for this method, both for
verification and for identification problems. Signatures are widely
required to confirm or determine the identity of an individual
requesting a service. The purpose of such schemes is to ensure that
the rendered services are accessed only by a legitimate user, and not
by anyone else. By using this method it is possible to confirm or
establish an individual’s identity. This paper also outlines opinions
about the usability of signature authentication systems and compares
different techniques, with their advantages and disadvantages.
2. Techniques
To authenticate a person through his/her signature, that is, to decide
whether the person is genuine or not, we can test the signature in many
ways. In this project two methods are used.
3. PIXEL POSITION MATCHING
After the signature is drawn, its image is captured and processed, and
its pixel values are put into an array. After getting the pixel values
of the original signature and of the sample signatures, the pixel
values are matched and the number of matches is put into a counter. If
the matching rate is greater than or equal to 80% (>= 80%), the
signature is authenticated; below 80%, the signature is not
authenticated.
The counter checks whether each bit position of the original image
matches the corresponding bit position of the other image. If the
match reaches a certain percentage or above, which is decided
beforehand, the signature is authenticated; otherwise it is not.
I implemented this method in JAVA, using JSP for the client-side
coding, Servlets for the server side, and JavaScript for program
development.
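As an illustration, the 80% pixel-matching rule can be sketched as follows. This is a minimal sketch, not the project's actual source: the class and method names are assumptions, and only the counter logic and the 80% threshold come from the text above. The pixel arrays can be obtained from a java.awt.image.BufferedImage with getRGB(0, 0, w, h, null, 0, w).

// Minimal sketch of pixel-position matching; names are illustrative.
public final class PixelMatcher {

    // Returns true when at least 80% of the pixel positions match.
    public static boolean isAuthenticated(int[] originalPixels, int[] samplePixels) {
        int length = Math.min(originalPixels.length, samplePixels.length);
        if (length == 0) {
            return false;                     // nothing to compare
        }
        int counter = 0;                      // counts matching pixel positions
        for (int i = 0; i < length; i++) {
            if (originalPixels[i] == samplePixels[i]) {
                counter++;
            }
        }
        double matchingRate = (100.0 * counter) / length;
        return matchingRate >= 80.0;          // >= 80% means authenticated
    }
}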
3.1 Tools Used
3.3 Flow Chart
Figure 1 (flowchart: capture the signature image and put the pixel values into an array)
Figure 2 (flowchart: compare the two byte arrays, count the matches, and branch on the 80% threshold)
In the above flow charts, fig. 1 represents how we put the pixel
values of the images into the array, and from fig. 2 we can understand
the comparison of image1’s (the original image’s) byte array with
image2’s byte array, with the matches put into a counter. If the
matching value is greater than or equal to 80%, the signature is
authenticated; if the comparison result is less than 80%, the
signature is unauthenticated.
First, store the original image and the image we want to compare with
it (call them image1 and image2) into the database as byte arrays.
Then declare two byte arrays, img1 and img2, fetch the two images for
a given clientId from the database, and store them in the byte-array
variables. The code for storing an image as a byte array and for the
comparison is given in Section 3.6.1.
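A sketch of the fetch-and-compare step is given below. The table and column names are hypothetical (the text only says the images are stored against a clientId), and the byte-wise counter mirrors the 80% rule above; the project's own storage code appears in Section 3.6.1.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: fetch the two stored signature images for a client and compare
// them byte by byte. Table and column names are hypothetical.
public final class SignatureDao {

    public static boolean compareForClient(Connection con, int clientId) throws SQLException {
        String sql = "SELECT original_sign, sample_sign FROM signatures WHERE client_id = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, clientId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return false;             // no record for this client
                }
                byte[] img1 = rs.getBytes(1); // original signature bytes
                byte[] img2 = rs.getBytes(2); // sample signature bytes
                int n = Math.min(img1.length, img2.length);
                int counter = 0;              // counts matching byte positions
                for (int i = 0; i < n; i++) {
                    if (img1[i] == img2[i]) {
                        counter++;
                    }
                }
                return n > 0 && (100.0 * counter) / n >= 80.0;
            }
        }
    }
}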
3.5. Bit-Pattern
Figure 3 (bit pattern of the original signature)    Figure 4 (bit pattern of another signature)
Fig. 3 represents the bit pattern of the original signature and fig. 4
the bit pattern of another signature. We take some bits to represent
the original signature’s bit pattern and then compare them with the
bit pattern of the other signature.
Figure 5 (splitting an eight-bit pattern into four-bit groups and single bits)
Consider eight bits: divide them into groups of four, then divide each
group of four into single bits, and compare each bit position.
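Read in code, the position-wise comparison might look like the sketch below; the class and method names are illustrative, and walking each byte from the high bit down stands in for the eight-to-four-to-one splitting described above.

// Sketch of the bit-pattern comparison: each byte (8 bits) is examined
// bit by bit, and agreeing bit positions are counted.
public final class BitPatternMatcher {

    // Counts how many bit positions agree between the two byte arrays.
    public static int countMatchingBits(byte[] original, byte[] sample) {
        int n = Math.min(original.length, sample.length);
        int matches = 0;
        for (int i = 0; i < n; i++) {
            for (int bit = 7; bit >= 0; bit--) {   // 8 bits, high to low
                int a = (original[i] >> bit) & 1;
                int b = (sample[i] >> bit) & 1;
                if (a == b) {
                    matches++;
                }
            }
        }
        return matches;
    }
}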
The power of the human mind comes from the sheer number of these basic
components and the multiple connections between them. It also comes
from genetic programming and learning.
The individual neurons are complicated. They have a myriad of parts,
sub-systems, and control mechanisms. They convey information via a
host of electrochemical pathways. There are over one hundred different
classes of neurons, depending on the classification method used.
Together these neurons and their connections form a process which is
not binary, not stable, and not synchronous. In short, it is nothing
like the currently available electronic computers or even artificial
neural networks.
These artificial neural networks try to replicate only the most basic
elements of this complicated, versatile and powerful organism. They do
it in a primitive way. But for the software engineer who is trying to
solve problems, neural computing was never about replicating human
brains. It is about machines and a new way to solve problems.
Fig. 5.2: Nonlinear model of a neuron (inputs x1, x2, ..., xm; synaptic weights wk1, wk2, ..., wkm; bias bk; summing junction; activation function φ(·); output yk)
The neuronal model has three basic elements:
1. A set of synapses or connecting links, each characterized by a
weight of its own.
2. An adder for summing the input signals, weighted by the respective
synapses of the neuron; the operation described here constitutes a
linear combiner.
3. An activation function for limiting the amplitude of the output of
the neuron.
The neuronal model of the figure also includes an externally applied
bias, denoted by bk. The bias bk has the effect of increasing or
lowering the net input of the activation function, depending on
whether it is positive or negative, respectively.
In mathematical terms, the linear combiner output is

$$u_k = \sum_{j=1}^{m} w_{kj} x_j$$

where x1, x2, ..., xm are the input signals; wk1, wk2, ..., wkm are
the synaptic weights of neuron k; uk is the linear combiner output due
to the input signals; bk is the bias; φ(·) is the activation function;
and yk is the output signal of the neuron. The use of the bias bk has
the effect of applying an affine transformation to the output uk of
the linear combiner in the model of the figure, as shown by

$$v_k = u_k + b_k$$
$$v_k = \sum_{j=0}^{m} w_{kj} x_j$$

$$y_k = \varphi(v_k)$$

where a new synapse has been added with input $x_0 = +1$ and weight
$w_{k0} = b_k$. We may therefore reformulate the model of the neuron
as in fig. 3.2. In this figure, the effect of the bias is accounted
for by doing two things: 1. adding a new input signal fixed at +1, and
2. adding a new synaptic weight equal to the bias bk.
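As a small worked check (the numbers are assumed for illustration), take m = 2, $x_1 = 0.5$, $x_2 = -1$, $w_{k1} = 0.4$, $w_{k2} = 0.3$, and $b_k = 0.1$. Then

$$u_k = w_{k1} x_1 + w_{k2} x_2 = 0.2 - 0.3 = -0.1, \qquad v_k = u_k + b_k = 0$$

Equivalently, with $x_0 = +1$ and $w_{k0} = b_k$, $v_k = \sum_{j=0}^{2} w_{kj} x_j = 0.1 + 0.2 - 0.3 = 0$; for the logistic activation $\varphi(v) = 1/(1+e^{-v})$ this gives $y_k = \varphi(0) = 0.5$.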
Fig. 3.2: Reformulated model of a neuron (fixed input x0 = +1 with weight wk0 = bk; inputs x1, ..., xm with weights wk1, ..., wkm; summing junction; activation function φ(·); output yk)
For linear units, the output activity is proportional to the total
weighted input.
For threshold units, the output is set at one of two levels, depending
on whether the total input is greater than or less than some threshold
value.
For sigmoid units, the output varies continuously but not linearly as
the input changes. Sigmoid units bear a greater resemblance to real
neurons than do linear or threshold units, but all three must be
considered rough approximations.
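The three unit types translate directly into code; the sketch below is only an illustration of the definitions above (tanh is used as the sigmoid, and the ±1 output levels of the threshold unit are assumptions matching the plots that follow).

// Sketch of the three transfer functions discussed above.
public final class TransferFunctions {

    // Linear unit: output proportional to the total weighted input.
    static double linear(double n) {
        return n;
    }

    // Threshold unit: one of two levels, depending on the threshold theta.
    static double threshold(double n, double theta) {
        return (n >= theta) ? 1.0 : -1.0;
    }

    // Sigmoid unit: varies continuously but not linearly, saturating at +/-1.
    static double tanSigmoid(double n) {
        return Math.tanh(n);
    }
}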
[Transfer function plot: output a versus input n, saturating at +1 and -1]
Fig. 3.4: Tan-Sigmoid transfer function (output a versus input n, saturating at +1 and -1)
[Transfer function plot: output a versus input n]
4.4. TRAINING METHODS
In supervised training, both the inputs and the outputs are provided.
The network then processes the inputs and compares its resulting
outputs against the desired outputs. Errors are then propagated back
through the system, causing the system to adjust the weights which
control the network. This process occurs over and over as the weights
are continually tweaked. The set of data which enables the training is
called the “training set”. During the training of a network, the same
set of data is processed many times as the connection weights are
further refined.
16
memorizing its data in some no significant way, supervised training needs to hold
back a set of data to be used to test the system after it has undergone its
training(note: memorization is avoided by not having too many processing
elements.)
If a network simply cannot solve the problem, the designer then has to
review the inputs and outputs, the number of layers, the number of
elements per layer, the connections between the layers, the summation
and transfer functions, and even the initial weights themselves. The
changes required to create a successful network constitute a process
wherein the “art” of neural networking occurs.
Another part of the designer’s creativity governs the rules of
training. There are many laws (algorithms) used to implement the
adaptive feedback required to adjust the weights during training. The
most common technique is backward error propagation, more commonly
known as back-propagation.
Yet training is not just a technique. It involves a “feel” and
conscious analysis, to ensure that the network is not overtrained.
Initially, an artificial neural network configures itself with the
general statistical trends of the data. Later, it continues to “learn”
about other aspects of the data, which may be spurious from a general
viewpoint.
When finally the system has been correctly trained and no further
learning is needed, the weights can, if desired, be “frozen”. In some
systems this finalized network is then turned into hardware so that it
can be fast. Other systems do not lock themselves in but continue to
learn while in production use.
At the present time, unsupervised learning is not well understood.
This adaptation to the environment is the promise which would enable
science-fiction types of robots to continually learn on their own as
they encounter new situations and new environments. Life is filled
with situations where exact training sets do not exist. Some of these
situations involve military action, where new combat techniques and
new weapons might be encountered. Because of this unexpected aspect of
life and the human desire to be prepared, there continues to be
research into, and hope for, this field. Yet at the present time the
vast bulk of neural network work is in systems with supervised
learning. Supervised learning is achieving results.
Multilayer perceptrons have been applied successfully to solve some
difficult and diverse problems by training them in a supervised manner
with a highly popular algorithm known as the error back-propagation
algorithm. This algorithm is based on the error-correction learning
rule. Basically, error back-propagation learning consists of two
passes through the different layers of the network: a forward pass and
a backward pass. In the forward pass an activity pattern (input
vector) is applied to the sensory nodes of the network, and its effect
propagates through the network layer by layer. Finally a set of
outputs is produced as the actual response of the network. During the
forward pass the synaptic weights of the network are all fixed. During
the backward pass, on the other hand, the synaptic weights are all
adjusted in accordance with an error-correction rule. Specifically,
the actual response of the network is subtracted from a desired
(target) response to produce an error signal. This error signal is
then propagated backward through the network, against the direction of
the synaptic connections, hence the name “error back-propagation”. The
synaptic weights are adjusted to make the actual response of the
network move closer to the desired response in a statistical sense.
The error back-propagation algorithm is also referred to simply as the
back-propagation algorithm.
4.5.1. NOTATION
$$\xi(n) = \frac{1}{2} \sum_{j \in C} e_j^2(n) \qquad (4.2)$$
where the set C includes all the neurons in the output layer of the
network. Let N denote the total number of patterns (examples)
contained in the training set. The average squared error energy is
obtained by summing ξ(n) over all n and then normalizing with respect
to the set size N, as shown by
$$\xi_{av} = \frac{1}{N} \sum_{n=1}^{N} \xi(n) \qquad (4.3)$$
The instantaneous error energy ξ(n), and therefore the average squared
error ξav, is a function of all the free parameters (i.e., synaptic
weights and bias levels) of the network. For a given training set, ξav
represents the cost function as a measure of learning performance. The
objective of the learning process is to adjust the free parameters of
the network so as to minimize ξav. Specifically, we consider a simple
method of training in which the weights are updated on a
pattern-by-pattern basis until one epoch, that is, one complete
presentation of the entire training set, has been dealt with. The
adjustments to the weights are made in accordance with the respective
errors computed for each pattern presented to the network. The
arithmetic average of these individual weight changes over the
training set is therefore an estimate of the true change that would
result from modifying the weights based on minimizing the cost
function ξav over the entire training set.
The induced local field vj(n) produced at the input of the activation
function associated with neuron j is therefore

$$v_j(n) = \sum_{i=0}^{m} w_{ji}(n)\, y_i(n) \qquad (4.4)$$

where m is the total number of inputs (excluding the bias) applied to
neuron j. The synaptic weight wj0 (corresponding to the fixed input
y0 = +1) equals the bias bj applied to neuron j. Hence the function
signal yj(n) appearing at the output of neuron j at iteration n is

$$y_j(n) = \varphi_j(v_j(n)) \qquad (4.5)$$
By the chain rule of calculus, the gradient may be expressed as

$$\frac{\partial \xi(n)}{\partial w_{ji}(n)} = \frac{\partial \xi(n)}{\partial e_j(n)} \frac{\partial e_j(n)}{\partial y_j(n)} \frac{\partial y_j(n)}{\partial v_j(n)} \frac{\partial v_j(n)}{\partial w_{ji}(n)} \qquad (4.6)$$

The partial derivative ∂ξ(n)/∂wji(n) represents a sensitivity factor,
determining the direction of search in weight space for the synaptic
weight wji. Differentiating the four factors in turn gives

$$\frac{\partial \xi(n)}{\partial e_j(n)} = e_j(n) \qquad (4.7)$$

$$\frac{\partial e_j(n)}{\partial y_j(n)} = -1 \qquad (4.8)$$

$$\frac{\partial y_j(n)}{\partial v_j(n)} = \varphi'_j(v_j(n)) \qquad (4.9)$$

$$\frac{\partial v_j(n)}{\partial w_{ji}(n)} = y_i(n) \qquad (4.10)$$

Combining these results,

$$\frac{\partial \xi(n)}{\partial w_{ji}(n)} = -e_j(n)\, \varphi'_j(v_j(n))\, y_i(n) \qquad (4.11)$$
The correction Δwji(n) applied to wji(n) is defined by the delta rule

$$\Delta w_{ji}(n) = -\eta \frac{\partial \xi(n)}{\partial w_{ji}(n)} \qquad (4.12)$$

where η is the learning-rate parameter. Accordingly,

$$\Delta w_{ji}(n) = \eta\, \delta_j(n)\, y_i(n) \qquad (4.13)$$

where the local gradient δj(n) is defined by

$$\delta_j(n) = -\frac{\partial \xi(n)}{\partial v_j(n)} \qquad (4.14)$$
We note that a key factor involved in the calculation of the weight
adjustment Δwji(n) is the error signal ej(n) at the output of neuron
j. In this context we may identify two distinct cases, depending on
where in the network neuron j is located.
When neuron j is located in a hidden layer of the network, there is no
specified desired response for that neuron. Accordingly, the error
signal for a hidden neuron has to be determined recursively in terms
of the error signals of all the neurons to which that hidden neuron is
directly connected. According to eq. (4.14), we may define the local
gradient δj(n) for hidden neuron j as

$$\delta_j(n) = -\frac{\partial \xi(n)}{\partial y_j(n)}\, \varphi'_j(v_j(n)), \quad \text{neuron } j \text{ hidden} \qquad (4.15)$$

where, using index k for an output neuron (as in eq. (4.2)),

$$\xi(n) = \frac{1}{2} \sum_{k \in C} e_k^2(n) \qquad (4.16)$$

Differentiating this with respect to yj(n) and substituting eventually
yields the back-propagation formula for the local gradient of hidden
neuron j:

$$\delta_j(n) = \varphi'_j(v_j(n)) \sum_k \delta_k(n)\, w_{kj}(n), \quad \text{neuron } j \text{ hidden} \qquad (4.17)$$
We now summarize the relations we have derived for the
back-propagation algorithm. First, the correction Δwji(n) applied to
the synaptic weight connecting neuron i to neuron j is defined by the
delta rule:

$$\begin{bmatrix} \text{weight correction} \\ \Delta w_{ji}(n) \end{bmatrix} = \begin{bmatrix} \text{learning-rate parameter} \\ \eta \end{bmatrix} \times \begin{bmatrix} \text{local gradient} \\ \delta_j(n) \end{bmatrix} \times \begin{bmatrix} \text{input signal of neuron } j \\ y_i(n) \end{bmatrix}$$
Second, the local gradient δj(n) depends on whether neuron j is an
output node or a hidden node: for an output node, δj(n) = ej(n) φ'j(vj(n));
for a hidden node, δj(n) is given by eq. (4.17).
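A compact sketch of these two cases and of the delta rule of eq. (4.13) follows. The logistic activation, for which φ'(v) = y(1 − y), is an assumption made for illustration, as are the method names.

// Sketch of the delta rule (eq. 4.13) with the two cases for the local
// gradient, assuming the logistic activation (phi'(v) = y * (1 - y)).
public final class DeltaRule {

    // Output neuron j: delta_j = e_j * phi'(v_j).
    static double outputDelta(double target, double y) {
        return (target - y) * y * (1.0 - y);
    }

    // Hidden neuron j: delta_j = phi'(v_j) * sum_k delta_k * w_kj (eq. 4.17).
    static double hiddenDelta(double y, double[] deltaNext, double[] wNext) {
        double sum = 0.0;
        for (int k = 0; k < deltaNext.length; k++) {
            sum += deltaNext[k] * wNext[k];    // weights from j into layer above
        }
        return y * (1.0 - y) * sum;
    }

    // Weight correction: delta_w_ji = eta * delta_j * y_i (eq. 4.13).
    static double weightCorrection(double eta, double deltaJ, double yI) {
        return eta * deltaJ * yI;
    }
}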
Each neuron is composed of two units. The first unit adds the products
of the weight coefficients and the input signals. The second unit
realises a nonlinear function, called the neuron activation function.
Signal e is the adder output signal, and y = f(e) is the output signal
of the nonlinear element. Signal y is also the output signal of the
neuron.
To teach the neural network we need a training data set. The training
data set consists of input signals (x1 and x2) assigned with the
corresponding target (desired output) z. The network training is an
iterative process. In each iteration the weight coefficients of the
nodes are modified using new data from the training data set. The
modification is calculated using the algorithm described below. Each
teaching step starts with forcing both input signals from the training
set. After this stage we can determine the output signal values for
each neuron in each network layer. The pictures below illustrate how
the signal propagates through the network. Symbols w(xm)n represent
the weights of the connections between network input xm and neuron n
in the input layer. Symbols yn represent the output signal of neuron n.
Propagation of signals through the hidden layer. Symbols wmn represent
the weights of the connections between the output of neuron m and the
input of neuron n in the next layer.
Propagation of signals through the output layer.
In the next algorithm step, the output signal of the network, y, is
compared with the desired output value (the target), which is found in
the training data set. The difference is called the error signal of
the output layer neuron.
The weight coefficients wmn used to propagate the errors back are
equal to those used during computing the output value. Only the
direction of data flow is changed (signals are propagated from outputs
to inputs one after the other). This technique is used for all network
layers. If the propagated errors come from several neurons, they are
added. This is illustrated below:
When the error signal for each neuron is computed, the weight
coefficients of each neuron input node may be modified. In the
formulas below, df(e)/de represents the derivative of the activation
function of the neuron whose weights are modified.
The learning coefficient η affects the network teaching speed. There
are a few techniques for selecting this parameter. The first method is
to start the teaching process with a large value of the parameter;
while the weight coefficients are being established, the parameter is
gradually decreased. The second, more complicated, method starts
teaching with a small parameter value. During the teaching process the
parameter is increased as the teaching advances and then decreased
again in the final stage. Starting the teaching process with a low
parameter value makes it possible to determine the signs of the weight
coefficients.
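The first technique might be realized as in the sketch below; the hyperbolic decay form and the constants are assumptions, since the text does not fix them.

// Sketch of the first technique: start teaching with a large coefficient
// and decrease it gradually as the weight coefficients settle.
public final class LearningRateSchedule {

    static double coefficient(int epoch) {
        double initial = 0.9;   // large starting value (assumed)
        double decay = 0.01;    // decay constant (assumed)
        return initial / (1.0 + decay * epoch);
    }
}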
4.5.3 Back-propagation Algorithm
Step 1: Normalize the inputs and outputs with respect to their maximum
values. It has been shown that neural networks work better if the
inputs and outputs lie between 0 and 1. For each training pair, assume
there are ℓ inputs given by {I}I (size ℓ×1) and n outputs given by
{O}O (size n×1) in normalized form.
Step 2: Assume the number m of neurons in the hidden layer to lie in
the range ℓ < m < 2ℓ.
Step 3: [V] represents the weights of the synapses connecting the
input neurons to the hidden neurons, and [W] represents the weights of
the synapses connecting the hidden neurons to the output neurons.
Initialize the weights to small random values, usually from -1 to 1.
For general problems, λ can be assumed as 1 and the threshold values
can be taken as zero.

[V]^0 = [random weights]
[W]^0 = [random weights]
[ΔV]^0 = [ΔW]^0 = [0]
Step 4: For the training data, present one set of inputs and outputs.
Present the pattern to the input layer. Using the linear activation
function, the output of the input layer may be evaluated as

$$\{O\}_I = \{I\}_I \qquad (\ell \times 1)$$

Step 5: Compute the inputs to the hidden layer by multiplying the
corresponding weights of the synapses as

$$\{I\}_H = [V]^T \{O\}_I \qquad (m \times 1)$$
Step 6: Let the hidden layer units evaluate the output using the
sigmoidal function as

$$\{O\}_{Hi} = \frac{1}{1 + e^{-I_{Hi}}} \qquad (m \times 1)$$
Step 7: Compute the inputs to the output layer by multiplying the
corresponding weights of the synapses as

$$\{I\}_O = [W]^T \{O\}_H \qquad (n \times 1)$$
Step 8: Let the output layer units evaluate the output using the
sigmoidal function as

$$\{O\}_{Oi} = \frac{1}{1 + e^{-I_{Oi}}} \qquad (n \times 1)$$
Step 9: Calculate the error, the difference between the network output
and the desired output, for the i-th training set as

$$E_p = \frac{\sqrt{\sum_j (T_j - O_{Oj})^2}}{n}$$
Step 10: Find {d} as

$$\{d\} = \{(T_k - O_{Ok})\, O_{Ok}\, (1 - O_{Ok})\} \qquad (n \times 1)$$

Step 11: Find [Y] = {O}_H {d}^T  ((m×1)(1×n) = m×n).

Step 12: Find [ΔW]^{t+1} = α[ΔW]^t + η[Y]  (m×n), where α is the
momentum coefficient and η the learning rate.

Step 13: Find {e} = [W] {d}  ((m×n)(n×1) = m×1).

Step 14: Find {d*} = {e_i O_{Hi} (1 - O_{Hi})}  (m×1).

Step 15: Find [X] = {O}_I {d*}^T  ((ℓ×1)(1×m) = ℓ×m) and
[ΔV]^{t+1} = α[ΔV]^t + η[X]; update the weights as
[V]^{t+1} = [V]^t + [ΔV]^{t+1} and [W]^{t+1} = [W]^t + [ΔW]^{t+1}.
Step 16: Find the error rate as

$$\text{error rate} = \frac{\sum E_p}{n_{set}}$$

Step 17: Repeat steps 4-16 until the convergence in the error rate is
less than the tolerance value.
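The steps above can be collected into one sketch. It follows the step numbering of this section under stated assumptions: logistic activation, a plain learning-rate update without the momentum term, and illustrative names; it is a sketch, not the project's code.

import java.util.Random;

// One-pattern sketch of Steps 3-15: [V] connects the input layer to the
// hidden layer, [W] connects the hidden layer to the output layer.
public final class BackpropSketch {

    static final Random RNG = new Random();

    // Step 3: small random weights in [-1, 1].
    static double[][] randomWeights(int rows, int cols) {
        double[][] w = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                w[i][j] = 2.0 * RNG.nextDouble() - 1.0;
        return w;
    }

    // Steps 5 and 7: layer input, e.g. {I}H = [V]^T {O}I.
    static double[] layerInput(double[][] weights, double[] out) {
        int cols = weights[0].length;
        double[] in = new double[cols];
        for (int j = 0; j < cols; j++)
            for (int i = 0; i < out.length; i++)
                in[j] += weights[i][j] * out[i];
        return in;
    }

    // Steps 6 and 8: sigmoidal output of a layer.
    static double[] sigmoid(double[] in) {
        double[] out = new double[in.length];
        for (int i = 0; i < in.length; i++)
            out[i] = 1.0 / (1.0 + Math.exp(-in[i]));
        return out;
    }

    // One training presentation (Steps 4-15) for the pair (input, target).
    static void trainOnce(double[][] V, double[][] W,
                          double[] input, double[] target, double eta) {
        double[] oI = input;                       // Step 4: {O}I = {I}I
        double[] oH = sigmoid(layerInput(V, oI));  // Steps 5-6
        double[] oO = sigmoid(layerInput(W, oH));  // Steps 7-8

        // Step 10: output-layer error term {d}.
        double[] d = new double[oO.length];
        for (int k = 0; k < oO.length; k++)
            d[k] = (target[k] - oO[k]) * oO[k] * (1.0 - oO[k]);

        // Steps 13-14: hidden-layer error term via {e} = [W]{d}.
        double[] dStar = new double[oH.length];
        for (int i = 0; i < oH.length; i++) {
            double e = 0.0;
            for (int k = 0; k < d.length; k++)
                e += W[i][k] * d[k];
            dStar[i] = e * oH[i] * (1.0 - oH[i]);
        }

        // Steps 11-12 and 15: weight updates (momentum term omitted here).
        for (int i = 0; i < oH.length; i++)
            for (int k = 0; k < d.length; k++)
                W[i][k] += eta * oH[i] * d[k];
        for (int i = 0; i < oI.length; i++)
            for (int j = 0; j < oH.length; j++)
                V[i][j] += eta * oI[i] * dStar[j];
    }
}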
Figure 6, Figure 7, Figure 8 (flowcharts of the back-propagation training procedure)
4.7 Algorithms
I discuss here the six algorithms which are required for this project.
RANDOMNUMBERGENERATION()
INPUT: Algorithm [5.7.3].
OUTPUT: random numbers in the range -0.5 to +0.5.
for i ← 0 to 20 do
    // 0 to 20 means the algorithm gives 21 random numbers within the
    // specified range -0.5 to +0.5
    a ← rand() ……(2)
    b ← a mod 1000 ……(3)
    f ← -0.5 + b/1000 ……(4)
end
TRANSPOSE(A[m][n])
INPUT: A, an m×n matrix.
OUTPUT: B, the n×m transpose of that matrix.
for row ← 0 to (n - 1) do
    for col ← 0 to (m - 1) do
        B[row][col] ← A[col][row] ……(5)
    end
end
MULTIPLICATION(A[m][n], B[k][q])
INPUT: A, an m×n matrix, and B, a k×q matrix (with n = k).
OUTPUT: C, an m×q matrix.
for row ← 0 to (m - 1) do
    for col ← 0 to (q - 1) do
        C[row][col] ← 0 ……(6)
        for ip ← 0 to (n - 1) do
            C[row][col] ← C[row][col] + A[row][ip] * B[ip][col] ……(7)
        end
    end
end
4.7.5. SIGMOID OUTPUT OF A MATRIX
SIGMOID(A[m][1])
INPUT: A, an m×1 column vector.
OUTPUT: B, an m×1 column vector.
for row ← 0 to (m - 1) do
    deno ← exp(A[row][0] * (-1)) ……(8)
    B[row][0] ← 1.0 / (1.0 + deno) ……(9)
end
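These helpers translate directly to Java; the sketch below assumes matrices are stored as double[][] and column vectors as m×1 arrays, as in the pseudocode.

import java.util.Random;

// Java sketch of the helper algorithms above (random numbers, transpose,
// multiplication, sigmoid). Matrices are double[][], vectors m x 1 arrays.
public final class MatrixOps {

    // 21 random numbers in the range -0.5 to +0.5 (cf. eqs. 2-4).
    static double[] randomNumbers() {
        Random rng = new Random();
        double[] f = new double[21];
        for (int i = 0; i <= 20; i++) {
            int b = rng.nextInt(1000);    // plays the role of a mod 1000
            f[i] = -0.5 + b / 1000.0;
        }
        return f;
    }

    // Transpose of an m x n matrix (eq. 5).
    static double[][] transpose(double[][] a) {
        int m = a.length, n = a[0].length;
        double[][] b = new double[n][m];
        for (int row = 0; row < n; row++)
            for (int col = 0; col < m; col++)
                b[row][col] = a[col][row];
        return b;
    }

    // Product of an m x n and an n x q matrix (eqs. 6-7).
    static double[][] multiply(double[][] a, double[][] b) {
        int m = a.length, n = a[0].length, q = b[0].length;
        double[][] c = new double[m][q];
        for (int row = 0; row < m; row++)
            for (int col = 0; col < q; col++)
                for (int ip = 0; ip < n; ip++)
                    c[row][col] += a[row][ip] * b[ip][col];
        return c;
    }

    // Element-wise sigmoid of an m x 1 column vector (eqs. 8-9).
    static double[][] sigmoid(double[][] a) {
        double[][] b = new double[a.length][1];
        for (int row = 0; row < a.length; row++) {
            double deno = Math.exp(-a[row][0]);
            b[row][0] = 1.0 / (1.0 + deno);
        }
        return b;
    }
}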
4.7.6 BACKPROPAGATION ALGORITHM
Create a feed-forward network with n_in inputs, n_hidden hidden units,
and n_out output units. Initialize all network weights to small random
numbers.
repeat
    For each training example ⟨x, t⟩, propagate the input forward
    through the network:
    1. Input the instance x to the network and compute the output o_u
       of every unit u in the network.
    Propagate the errors backward through the network:
    2. For each network output unit k, calculate its error term δ_k:
        δ_k ← o_k (1 - o_k)(t_k - o_k) ……(10)
    3. For each hidden unit h, calculate its error term δ_h from the
       error terms of the units it feeds (cf. eq. (4.17)):
        δ_h ← o_h (1 - o_h) Σ_k w_kh δ_k
    4. Update each network weight w_ji (cf. eq. (4.13)):
        w_ji ← w_ji + η δ_j x_ji
until the termination condition is met
5.1 Picture Password Authentication Scheme:
$$N_{pp} = \lceil (2/3) \cdot N_{tp} \rceil$$

where Ntp is the required character length for textual password input,
Npp is the corresponding input-sequence length required for Picture
Password, and ⌈x⌉ is the “ceiling” function. In simple terms this
means that image sequences formed with dual selection styles require
approximately one-third less length than a traditional alphanumeric
password; for example, an 8-character textual password corresponds to
⌈2/3 × 8⌉ = 6 picture selections. This presumes, of course, that just
as additional keystrokes are needed to select special and capital
characters on a keyboard, a comparable number of additional strokes is
used when forming an image sequence involving paired image selections.
5.3 FLOW CHART
Figure 11 (flowchart: build the stored image array x_i and the entered image array y_i)
Figure 12 (flowchart: compare the two arrays; if the two arrays match, the user can log in, otherwise the user cannot log in)
First, store the original images and the images we want to compare
with them (call them image x_i and image y_i) into the database as
byte arrays. Fetch the sequential images for a given clientId from the
database and store them in byte-array variables. If x_i matches y_i,
the user can log in.
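A minimal sketch of this check follows; the method name and the use of java.util.Arrays.equals per image are assumptions.

import java.util.Arrays;
import java.util.List;

// Sketch of the picture-password check: the stored sequence x_i and the
// entered sequence y_i (images as byte arrays, fetched by clientId) must
// match image by image for the user to log in.
public final class PicturePasswordCheck {

    static boolean canLogIn(List<byte[]> x, List<byte[]> y) {
        if (x.size() != y.size()) {
            return false;                 // different sequence lengths
        }
        for (int i = 0; i < x.size(); i++) {
            if (!Arrays.equals(x.get(i), y.get(i))) {
                return false;             // one image differs: no login
            }
        }
        return true;                      // all images matched: user logs in
    }
}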
6. Supports Multiple Capturing Devices
Source: PenOp. Tablets and special pens such as the ePad and the
Smartpen are special-purpose devices used to capture the signature
dynamics; both are wireless. Special pens are able to capture
movements in all three dimensions. Tablets have two significant
disadvantages. First, the resulting digitized signature looks
different from the usual user signature. Second, while signing, the
user does not see what he/she has written so far; he/she has to look
at the computer monitor to see the signature. This is a considerable
drawback for many (inexperienced) users. Some special pens work like
normal pens: they have an ink cartridge inside and can be used to
write on paper. The ePad device shows the signature on its digital
display, while the Smartpen has its own ink cartridge and can be used
to write onto any paper.
8.1 APPLICATIONS IN VARIOUS FIELDS
Citizen Identification:
Identify/authenticate citizens interacting with government agencies
PC / Network Access:
Secure access to PCs, networks and other computer resources
E-Commerce / Telephony:
Provide identification/authentication for remote transactions for
goods/services
Criminal Identification:
Identify/verify individuals in law enforcement applications
In each of those applications, biometric systems can be used to either
replace or complement existing authentication methods.
Government Sector
Travel and Transportation
Financial Sector
Health Care
Law Enforcement
SPEECH & VISION RECOGNITION SYSTEMS: not new, but neural networks are
becoming an increasingly important part of such systems. They are used
as a system component, in conjunction with traditional computers.
PEN PCs: where one can write on a tablet, and the writing will be
recognized and translated into (ASCII) text.
9.1 ADVANTAGES
CONCLUSION
3.6.1. Storing images into byte array:

// Write the uploaded file's bytes into the ByteArrayOutputStream, then
// keep them as either the client's picture or the client's signature.
filep.writeTo(baos);
if (filep.getName().equals("clientimage")) {
    clientpicture = baos.toByteArray();
} else if (filep.getName().equals("clientsignature")) {
    clientsign = baos.toByteArray();
}