Learning Performance of Neuron Model Based On Quantum Superposition
Abstract
In recent years, some researchers have been exploring quantum computers in view of the neural network, to realize a distributed and strongly connectionist system that achieves parallel and fast information processing. We have proposed a qubit-like neuron model based on quantum mechanics and constructed the Quantum Back Propagation learning rule (QBP). In this paper, we present our improved QBP neural network model and discuss its performance in solving the 4-bit parity check problem, the function identification problem, and the gray-scale pattern identification problem. We find that our model surpasses the conventional one in information processing efficiency.

1 Introduction
In recent years, we have had to process an enormous amount of information, for example, in searches on online databases and in visual and sound cognition, using computer systems. However, standard computer systems have a limitation on their performance. Therefore, the demand for computation more efficient than Von Neumann's has increased. Neural network computation, modeled on brain information processing [1], offers potentially greater computational power than the conventional approach because of its parallel processing. However, when neural computing is simulated on a conventional system, the computational velocity is lost and the parallel processing is not fully achieved.
On the other hand, since the study of Deutsch, quantum computation based on quantum physics has been explored [2], and Shor found that a quantum algorithm factorizes large integers in polynomial time [3]. Perus and Kak discussed that quantum physics makes it possible to describe the processing state of a neural network [4],[5].
We have already proposed the qubit-like neuron model, in which a neuron state is described by quantum superposition, and we have constructed the Quantum Back Propagation learning rule (QBP). The QBP learning rule is the BP learning rule for the qubit-like neuron model, corresponding to the conventional BP rule (CBP) [6]. We have also reported the information processing efficiency of the QBP-based model in a multi-layered neural network, and shown that its performance surpasses that of BP neural networks [7],[8]. In our previous simulations with simple problems such as basic logic operations and the 4-bit parity check problem, we used one 'dummy input' in the input layer whose role was not clear. In this paper, to establish a more sophisticated quantum neural computing method, we reconstruct our QBP neural network by removing the dummy input from it, and we discuss the improved QBP neural network model and its performance in solving the 4-bit parity check problem, the arbitrary function identification problem, and the gray-scale pattern identification problem. We then conclude that our model surpasses the conventional one in information processing efficiency.
In section 2 we give a short review of quantum information theory, and we recapitulate the qubit-like neuron model and the QBP neural network in sections 3 and 4, respectively. In section 5, we report numerical
© 2000 IEEE
simulation results concerning our model.

2 Quantum Neuron Theory

2.1 Qubit
In the field of quantum computers, the 'qubit' has been introduced as the counterpart of the 'bit' in conventional computers to describe the state of a quantum computation circuit.
In quantum computer systems, the two quantum physical states labeled |0> and |1> express 1 bit of information. |0> corresponds to the bit 0 of classical computers, while |1> corresponds to the bit 1.
A qubit state |φ> maintains a coherent superposition of the states |0> and |1>,

    |φ> = α|0> + β|1>,  (1)

where α and β are complex numbers called probability amplitudes. That is, the qubit state |φ> collapses into either the |0> state with probability |α|² or the |1> state with probability |β|², and

    |α|² + |β|² = 1.  (2)

2.2 Quantum Gate and Its Representation
The quantum logic gate is constructed by connecting the single-bit rotation gate shown in Fig.1 and the two-bit controlled NOT gate shown in Fig.2. This means that these two gates are fundamental for constructing a quantum logic gate. The state of one qubit is rotated by the angle θ through the bit rotation gate in Fig.1. The two-bit controlled NOT gate performs the XOR operation, as in Fig.2: if qubit 'a' is |0>, the output corresponds to qubit 'b', while if qubit 'a' is |1>, the output is the inversion of qubit 'b' (through the NOT operation).
We relate the probability amplitude of |0> to the real part and that of |1> to the imaginary part. We then have a representation of the qubit state that uses complex numbers as follows:

    f(θ) = e^{iθ} = cos θ + i sin θ.  (3)

Here, i is the imaginary unit √-1. Equation (3), which represents qubit states, provides the following representations of the rotation gate and the two-bit controlled NOT gate.
a) The rotation gate operation
The rotation gate is a phase-shifting gate that transforms the phase of qubit states. As the qubit state is represented by equation (3), the gate is realized as the product of such representations:

    f(θ1)·f(θ2) = f(θ1 + θ2).  (4)

b) The two-bit controlled NOT operation
This operation is realized through the controlled input parameter γ as follows:

    f((π/2)γ − θ) = sin θ + i cos θ  (γ = 1)
                  = cos θ − i sin θ  (γ = 0),  (5)

where γ = 1 corresponds to the reversal rotation and γ = 0 to no rotation. In the case of γ = 0, the phase of the probability amplitude of the quantum state |1> is reversed. However, its observed probability is invariant, so we are able to regard this case as no rotation.

FIG.1 SINGLE BIT ROTATION.
FIG.2 TWO-BIT CONTROLLED NOT (inputs a, b; outputs a, a⊕b).

3 Qubit-like Neuron Model
We make a connection between neuron states and quantum states. That is, we assume the following: the firing neuron state is defined as the qubit state |1>, the non-firing neuron state is defined as the qubit state |0>, and an arbitrary neuron state is the coherent superposition of the two. From Section 2 above, our qubit-like neuron model is defined as follows.
The k-th neuron state x_k that receives inputs from L other neurons is given by

    u_k = Σ_{l=1}^{L} f(θ_{l,k})·x_l − f(λ_k).  (6)
Here, f(x) is the same function as Eq.(3), and g(x) is the sigmoid function

    g(x) = 1/(1 + exp(−x)).  (9)

In our qubit-like neuron model, we have three kinds of important parameters: the phase parameters θ_{l,k} and λ_k, and the reversal parameter δ_k. The parameter θ relates to the synaptic connections between neurons, and the parameter λ relates to the threshold of the conventional neuron model. The reversal parameter δ exists in each neuron in order to increase its degrees of freedom.

4.2 QBP Learning
In order to estimate our network, we consider the squared error function

    E = (1/2) Σ_{p=1}^{K} (t_p − o_p)²,  (10)

that is used in the CBP, where o_p denotes the network output for the p-th pattern. Here K is the number of learning patterns and t_p is the teaching signal for the p-th pattern. We then use the steepest descent learning rule; that is, each parameter w ∈ {θ_{l,k}, λ_k, δ_k} is updated as

    w ← w − η ∂E/∂w.  (11)
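To make the model concrete, here is a minimal numerical sketch (Python) of the qubit representation of Eq.(3), the controlled-NOT form of Eq.(5), the internal state of Eq.(6), the sigmoid of Eq.(9), and a steepest-descent update in the spirit of Eq.(11). The quadratic error below is a toy stand-in, not the network error used in our experiments, and the numerical gradient is only one way to realize the update.

```python
import cmath
import math

def f(theta):
    # Eq.(3): qubit-state representation, f(theta) = e^{i*theta} = cos(theta) + i*sin(theta)
    return cmath.exp(1j * theta)

def controlled_not(gamma, theta):
    # Eq.(5): f((pi/2)*gamma - theta); gamma = 1 gives the reversal rotation,
    # gamma = 0 only reverses the phase of the |1> (imaginary-part) amplitude
    return f(math.pi / 2 * gamma - theta)

def g(x):
    # Eq.(9): the sigmoid used in the CBP
    return 1.0 / (1.0 + math.exp(-x))

def internal_state(inputs, thetas, lam):
    # Eq.(6): u_k = sum_l f(theta_{l,k}) * x_l - f(lambda_k)
    return sum(f(th) * x for th, x in zip(thetas, inputs)) - f(lam)

# rotation gates compose by adding phases (product rule of Sec. 2.2)
assert abs(f(0.3) * f(0.4) - f(0.7)) < 1e-12
# Eq.(5), gamma = 1: sin(theta) + i*cos(theta)
assert abs(controlled_not(1, 0.8) - (math.sin(0.8) + 1j * math.cos(0.8))) < 1e-12

# --- steepest descent: w <- w - eta * dE/dw, with a numerical gradient ---

def descent_step(E, w, eta=0.1, eps=1e-6):
    grads = []
    for i in range(len(w)):
        wp = w[:i] + [w[i] + eps] + w[i + 1:]
        wm = w[:i] + [w[i] - eps] + w[i + 1:]
        grads.append((E(wp) - E(wm)) / (2 * eps))  # central difference
    return [wi - eta * gi for wi, gi in zip(w, grads)]

# toy quadratic error standing in for the squared pattern error
E = lambda w: 0.5 * sum((wi - 1.0) ** 2 for wi in w)
w = [3.0, -2.0, 0.5]  # e.g. one theta, one lambda and one delta parameter
for _ in range(200):
    w = descent_step(E, w)
```

With the quadratic stand-in, each step shrinks (w − 1) by the factor (1 − η), so 200 steps drive the error to numerical zero; the same update form applies to the real network parameters once the analytic error is substituted.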
The convergence condition is E_lower = 0.001 and L_upper = 10000.
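The 4-bit parity check problem used in these experiments maps every 4-bit input to the parity (XOR) of its bits. A data-generation sketch follows; the 0/1 float encoding is an assumption, since the exact input conversion is not fixed here.

```python
from itertools import product

# Training set for the 4-bit parity check problem: the teaching signal is the
# XOR (odd/even parity) of the four input bits. The 0/1 float encoding is an
# illustrative assumption.
patterns = []
for bits in product((0, 1), repeat=4):
    target = bits[0] ^ bits[1] ^ bits[2] ^ bits[3]
    patterns.append(([float(b) for b in bits], float(target)))
```

This yields all 16 input patterns, half of which have teaching signal 1, which is what makes the task linearly inseparable and a standard benchmark for BP-style learning rules.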
2) Identification of the arbitrary function
We train our network to solve the arbitrary function identification problem. In this report, we use the following function, p(x), as an arbitrary function whose output is normalized to the range from 0.0 to 1.0:

    p(x) = (sin x + sin 2x + 2)/4  (0 ≤ x ≤ 2π).  (14)

In this simulation, we set the convergence condition to E_lower = 0.001 and L_upper = 10000.

FIG.4 LEARNING COEFFICIENT DEPENDENCE OF CONVERGENCE RATE.
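The target function of Eq.(14) can be sampled directly for training; the sketch below also checks that p(x) stays inside the normalized range [0.0, 1.0]. The choice of 101 sample points is illustrative.

```python
import math

def p(x):
    # Eq.(14): p(x) = (sin x + sin 2x + 2) / 4 on 0 <= x <= 2*pi
    return (math.sin(x) + math.sin(2 * x) + 2.0) / 4.0

# evenly spaced sample points over [0, 2*pi] as a training set
xs = [2.0 * math.pi * k / 100 for k in range(101)]
samples = [(x, p(x)) for x in xs]
assert all(0.0 <= y <= 1.0 for _, y in samples)  # normalized as stated
```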
3) Identification of the gray-scale pattern
Next, we train the network to identify a gray-scale pattern, as a problem that applies a large-structure network to pattern identification. In this problem, we input the gray-scale pattern to the network and train it to output the same pattern as the input one within a fixed iteration number. In this simulation, we use a 256×256-pixel gray-scale pattern with 256 intensity levels as the test pattern. The test pattern is converted from the 256-level intensity of each pixel.
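One plausible per-pixel input encoding for this task (an assumption, since we do not fix the conversion here) normalizes each 256-level intensity into [0, 1]:

```python
# One plausible per-pixel input encoding for the 256*256, 256-level gray-scale
# pattern: normalize each intensity v in {0, ..., 255} into [0, 1]. Both the
# encoding and the synthetic test pattern below are illustrative assumptions.

def normalize_pixel(v):
    return v / 255.0

pattern = [[(r + c) % 256 for c in range(256)] for r in range(256)]  # synthetic stand-in image
inputs = [[normalize_pixel(v) for v in row] for row in pattern]
```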
(Figure: horizontal axis = the number of data.)
References
[1] D.E. Rumelhart, G.E. Hinton, R.J. Williams, "Learning internal representations by error propagation", in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol.1, D.E. Rumelhart, J.L. McClelland (Eds.), Cambridge, MA: MIT Press, pp.318-362.
[2] D. Deutsch and R. Jozsa, "Rapid solution of problems by quantum computation", Proc. of the Royal Society of London, Series A, 439, pp.553-558, 1992.
[3] P.W. Shor, "Algorithms for quantum computation: discrete logarithms and factoring", Proc. of the 35th Annual IEEE Symposium on Foundations of Computer Science, pp.124-134, 1994.
[4] Mitja Perus, "Neuro-Quantum Parallelism in Brain-Mind and Computers", Informatica, 20, pp.173-183, 1996.
[5] S.C. Kak, "On quantum neural computing", Information Sciences, vol.83, pp.143-163, March 1995.
[6] N. Matsui, M. Takai, H. Nishimura, "A network model based on qubit-like neuron corresponding to quantum circuit", IEICE, Vol.J81-A, No.12, pp.1687-1692, 1998.
[7] M. Takai, N. Matsui, H. Nishimura, "A neural network based on quantum information theory", Proc. SICE Kansai Branch Annual Symposium, pp.154-157, Oct. 1998.
[8] Nobuyuki Matsui, Masato Takai, Haruhiko Nishimura, "A Learning Network Based On Qubit-Like Neuron Model", Proc. of the Seventeenth IASTED International Conference on Applied Informatics, 1999.