
Neural Networks

1.1 Neural Processing
Artificial neural networks are the result of academic investigations that use mathematical formulations to model nervous system operations. The resulting techniques are being successfully applied in a variety of everyday business applications.
Neural networks (NNs) represent a meaningfully different approach to using computers in the workplace. A neural network is used to learn patterns and relationships in data. The data may be the results of a market research effort, a production process given varying operational conditions, or the decisions of a loan officer given a set of loan applications. Regardless of the specifics involved, applying a neural network is substantially different from traditional approaches.
Traditionally, a programmer or an analyst specifically 'codes' for every facet of the problem in order for the computer to 'understand' the situation. Neural networks do not require explicit coding of the problem. For example, to generate a model that performs a sales forecast, a neural network needs to be given only raw data related to the problem. The raw data might consist of the history of past sales, prices, competitors' prices and other economic variables. The neural network sorts through this information and produces an understanding of the factors impacting sales. The model can then be called upon to provide a prediction of future sales given a forecast of the key factors.
These advancements are due to the creation of neural network learning rules, which are the algorithms used to 'learn' the relationships in the data. The learning rules enable the network to 'gain knowledge' from available data and apply that knowledge to assist a manager in making key decisions.
What are the Capabilities of Neural Networks?
In principle, NNs can compute any computable function, i.e. they can do everything a normal digital computer can do. In particular, anything that can be represented as a mapping between vector spaces can be approximated to arbitrary precision by feedforward NNs (which is the most often used type).
In practice, NNs are especially useful for mapping problems which are tolerant of some errors and have lots of example data available, but to which hard and fast rules cannot easily be applied. However, NNs are, as of now, difficult to apply successfully to problems that concern manipulation of symbols and memory.
Who is Concerned with Neural Networks?
Neural networks are of interest to quite a lot of people from different fields:
• Computer scientists want to find out about the properties of non-symbolic information processing with neural networks and about learning systems in general.
• Engineers of many kinds want to exploit the capabilities of neural networks in many areas (e.g. signal processing) to solve their application problems.
• Cognitive scientists view neural networks as a possible apparatus to describe models of thinking and consciousness (high-level brain function).
• Neurophysiologists use neural networks to describe and explore medium-level brain function (e.g. memory, sensory system).
• Physicists use neural networks to model phenomena in statistical mechanics and for a lot of other tasks.
• Biologists use neural networks to interpret nucleotide sequences.
• Philosophers and some other people may also be interested in neural networks to gain knowledge about human behavior, conduct, character and intelligence; the modelling of such systems, of environmental behavior, and of marketing and business functioning can likewise be attempted via neural networks.

1.2 Neural Networks - An Overview
The development of Artificial Neural Networks started 50 years ago. Artificial neural networks (ANNs) are gross simplifications of real (biological) networks of neurons. The paradigm of neural networks, which began during the 1940s, promises to be a very important tool for studying the structure-function relationship of the human brain. Due to the complexity and incomplete understanding of biological neurons, various architectures of artificial neural networks have been reported in the literature. Most of the ANN structures used commonly for many applications often consider the behavior of a single neuron as the basic computing unit for describing neural information processing operations. Each computing unit, i.e. the artificial neuron in the neural network, is based on the concept of an ideal neuron. An ideal neuron is assumed to respond optimally to the applied inputs. However, experimental studies in neuro-physiology show that the response of a biological neuron appears random, and only by averaging many observations is it possible to obtain predictable results. Inspired by this observation, some researchers have developed neural structures based on the concept of neural populations.
In common with biological neural networks, ANNs can accommodate many inputs in parallel and encode the information in a distributed fashion. Typically, the information that is stored in a neural net is shared by many of its processing units. This type of coding is in sharp contrast to traditional memory schemes, where a particular piece of information is stored in only one memory location: there the recall process is time consuming and generalization is usually absent. The distributed storage scheme provides many advantages, the most important of them being the redundancy in information representation. Thus, an ANN can undergo partial destruction of its structure and still be able to function well. Although redundancy can also be built into other types of systems, the ANN has a natural way of implementing this. The result is a natural fault-tolerant system, which is very similar to biological systems.
The aim of neural networks is to mimic the human ability to adapt to changing circumstances and the current environment. This depends heavily on being able to learn from events that have happened in the past and to be able to apply this to future situations. For example, the decisions made by doctors are rarely based on a single symptom, because of the complexity of the human body; one symptom could signify any number of problems. An experienced doctor is far more likely to make a sound decision than a trainee, because from his past experience he knows what to look out for and what to ask, and may have etched on his mind a past mistake, which he will not repeat. Thus the senior doctor is in a superior position than the trainee. Similarly, it would be beneficial if machines, too, could use past events as part of the criteria on which their decisions are based, and this is the role that artificial neural networks seek to fill.
Artificial neural networks consist of many nodes, i.e. processing units analogous to neurons in the brain. Each node has a node function associated with it which, along with a set of local parameters, determines the output of the node, given an input. Modifying the local parameters may alter the node function. Artificial neural networks thus form an information-processing system. In this information-processing system, the elements called neurons process the information. The signals are transmitted by means of connection links. The links possess an associated weight, which is multiplied along with the incoming signal (net input) for any typical neural net. The output signal is obtained by applying activations to the net input.
The neural net can generally be a single-layer or a multilayer net. The structure of the simple artificial neural net is shown in Fig. 1.1.
Figure 1.1 shows a simple artificial neural net with two input neurons (x1, x2) and one output neuron (y). The interconnection weights are given by w1 and w2. In a single-layer net there is a single layer of weighted interconnections.
Fig. 1.1 | A Simple Artificial Neural Net
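In other words, the net of Fig. 1.1 multiplies each input by its weight, sums the products, and applies an activation. A minimal Python sketch of this computation follows; the weight values, threshold and step activation are illustrative assumptions, not values from the text:

    # Output of the two-input, one-output net of Fig. 1.1:
    # multiply each input by its weight, sum, then apply an activation.
    def simple_net(x1, x2, w1, w2, theta=1.0):
        net = x1 * w1 + x2 * w2          # net input
        return 1 if net >= theta else 0  # binary step activation

    print(simple_net(1, 1, 0.6, 0.6))    # 1, since 1.2 >= 1.0
    print(simple_net(1, 0, 0.6, 0.6))    # 0, since 0.6 <  1.0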
A multilayer neural network (MNN) possesses an input layer, an intermediate (hidden) layer or layers of neurons, and an output layer; such nets are often called layered networks. They can implement arbitrarily complex input/output mappings or decision surfaces separating different pattern classes. A three-layer MNN is shown in Fig. 1.2, and a simplified block diagram representation in Fig. 1.3.
Fig. 1.2 | A Densely Interconnected Three-layered Static Neural Network. Each Shaded Circle, or Node, Represents an Artificial Neuron
Fig. 1.3 | A Block Diagram Representation of a Three-layered MNN
In a MNN, a layer of input units is connected to a layer of hidden units, which is connected to the layer of output units. The activity of neurons in the input layer represents the raw information that is fed into the network. The activity of neurons in the hidden layer is determined by the activities of the input neurons and the connecting weights between the input and hidden units. Similarly, the behavior of the output units depends on the activity of the neurons in the hidden layer and the connecting weights between the hidden and the output layers. This simple neural structure is interesting because neurons in the hidden layers are free to construct their own representation of the input.
MNNs provide an increase in computational power over a single-layer neural network only when there is a nonlinear activation function between layers. Many capabilities of neural networks, such as nonlinear functional approximation, learning, generalization, etc., are in fact performed due to the nonlinear activation function of each neuron.
ANNs have become a technical folk legend. The market is flooded with new, increasingly technical software and hardware products, and many more are sure to come. Among the most popular implementations are the Hopfield network, Multilayer Perceptron, Self-organizing Feature Map, Learning Vector Quantization, Radial Basis Function network, Cellular Neural network, Counter Propagation network, Adaptive Resonance Theory (ART) networks, Back Propagation networks, the Neocognitron, etc. As a result of the existence of all these networks, the application of the neural network is increasing tremendously.
Thus the artificial neural network represents a major extension of computation; neural networks perform operations similar to those of the human brain. Hence it is reasonable to expect a rapid increase in our understanding of artificial neural networks, leading to improved network paradigms and a host of application opportunities.
1.3 The Rise of Neurocomputing
A majority of information processing today is carried out by digital computers. This has led to the widely held perception that information ...
2.1 Introduction
The basic preliminaries involved in the Artificial Neural Network (ANN) are described in this chapter. A brief summary of the history of neural networks, in terms of the development of architectures and algorithms, is given; the structure of the biological neuron is discussed and compared with the artificial neuron. The basic building blocks and the various terminologies of the artificial neural network are explained towards the end of the chapter. The chapter concludes by giving the summary of notations which are used in all the network algorithms, architectures, etc. discussed in the forthcoming chapters.
2.2 Artificial Neural Networks
Artificial neural networks are nonlinear information (signal) processing devices, which are built from interconnected elementary processing devices called neurons.
An Artificial Neural Network (ANN) is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
ANNs are a type of artificial intelligence that attempts to imitate the way a human brain works. Rather than using a digital model, in which all computations manipulate zeros and ones, a neural network works by creating connections between processing elements, the computer equivalent of neurons. The organization and weights of the connections determine the output.
A neural network is a massively parallel-distributed processor that has a natural propensity for storing experimental knowledge and making it available for use. It resembles the brain in two respects:
1. Knowledge is acquired by the network through a learning process, and
2. Inter-neuron connection strengths known as synaptic weights are used to store the knowledge.
Neural networks can also be defined as parameterized computational nonlinear algorithms for (numerical) data/signal/image processing. These algorithms are either implemented on a general-purpose computer or are built into dedicated hardware.
Artificial neural networks thus form an information-processing system. In this information-processing system, the elements called neurons process the information. The signals are transmitted by means of connection links. The links possess an associated weight, which is multiplied along with the incoming signal (net input) for any typical neural net. The output signal is obtained by applying activations to the net input.
An artificial neuron is characterized by:
1. Architecture (connection between neurons)
2. Training or learning (determining weights on the connections)
3. Activation function
Figure 2.1 shows a simple artificial neural net with two input neurons (x1, x2) and one output neuron (y). The interconnected weights are given by w1 and w2.
Fig. 2.1 | A Simple Artificial Neural Net
An artificial neuron is a multiple-input, single-output signal-processing element, which can be thought of as a simple model of a non-branching biological neuron. In Fig. 2.1, the various inputs to the network are represented by the mathematical symbol x(n). Each of these inputs is multiplied by a connection weight; these weights are represented by w(n). In the simplest case, these products are simply summed, fed through a transfer function to generate a result, and then delivered as output. This process lends itself to physical implementation on a large scale in a small package. Electronic implementation is still possible with other network structures, which utilize different summing functions as well as different transfer functions.
Why Artificial Neural Networks?
The long course of evolution has given the human brain many desirable characteristics not present in Von Neumann or modern parallel computers. These include:
• Massive parallelism,
• Distributed representation and computation,
• Learning ability,
• Generalization ability,
• Adaptivity,
• Inherent contextual information processing,
• Fault tolerance, and
• Low energy consumption.
It is hoped that devices based on biological neural networks will possess some of these desirable characteristics. Modern digital computers outperform humans in the domain of numeric computation and related symbol manipulation. However, humans can effortlessly solve complex perceptual problems (like recognizing a man in a crowd from a mere glimpse of his face) at such a high speed and extent as to dwarf the world's fastest computer. Why is there such a remarkable difference in their performance? The biological neural system architecture is completely different from the Von Neumann architecture (see Table 2.1). This difference significantly affects the type of functions each computational model can best perform.
Numerous efforts to develop "intelligent" programs based on Von Neumann's centralized architecture have not resulted in any general-purpose intelligent programs. Inspired by biological neural networks, ANNs are massively parallel computing systems consisting of an extremely large number of simple processors with many interconnections. ANN models attempt to use some "organizational" principles believed to be used in the human brain.
Table 2.1 | Von Neumann Computer Versus Biological Neural System

                        Von Neumann Computer           Biological Neural System
Processor               Complex                        Simple
                        High speed                     Low speed
                        One or a few                   A large number
Memory                  Separate from a processor      Integrated into processor
                        Localized                      Distributed
                        Non-content addressable        Content addressable
Computing               Centralized                    Distributed
                        Sequential                     Parallel
                        Stored programs                Self-learning
Reliability             Very vulnerable                Robust
Expertise               Numerical and symbolic         Perceptual problems
                        manipulations
Operating environment   Well-defined,                  Poorly defined,
                        well-constrained               unconstrained
Either humans or other computer techniques can use neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, to extract patterns and detect trends that are too complex to be noticed otherwise. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer "what if" questions. Other advantages include:
1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
2. Self-organization: An ANN can create its own organisation or representation of the information it receives during learning time.
3. Real-time operation: ANN computations may be carried out in parallel, using special hardware devices designed and manufactured to take advantage of this capability.
4. Fault tolerance via redundant information coding: Partial destruction of a network leads to a corresponding degradation of performance. However, some network capabilities may be retained even after major network damage due to this feature.
2.3 Historical Development of Neural Networks
The historical development of the neural networks can be traced as follows:
• 1943 - McCulloch and Pitts: start of the modern era of neural networks
This forms a logical calculus of neural networks. A network consisting of a sufficient number of neurons (using a simple model) and properly set synaptic connections can compute any computable function.
Neurons of this simple type realize elementary logic functions through weighted connections with a hard-limiting output: a neuron fires when the combination of inputs applied to it exceeds a specified threshold, so the behavior of every neuron in the network can be predicted.
• 1949 - Hebb's book 'The Organization of Behavior'
An explicit statement of a physiological learning rule for synaptic modification was presented for the first time. Hebb proposed that the connectivity of the brain is continually changing as an organism learns differing functional tasks, and that neural assemblies are created by such changes. The concept behind the Hebb theory, rooted in dynamic psychology, is that the strength of the connection between two neurons increases with their repeated and correlated activation, so that the degree of correlation may be stored in the connection.

• 1958 - Rosenblatt introduces the perceptron [Block, 1962; Minsky and Papert, 1988]
In the perceptron network, the weights on the connection paths can be adjusted. A method of iterative weight adjustment can be used in the perceptron net. The perceptron net is found to converge if the weights obtained allow the net to reproduce exactly all the training input and target output pairs.
• 1960 - Widrow and Hoff introduce the adaline
ADALINE, abbreviated from Adaptive Linear Neuron, uses a learning rule called the Least Mean Square (LMS) rule or Delta rule. This rule adjusts the weights so as to reduce the difference between the net input to the output unit and the desired output. The convergence criterion in this case is the reduction of the mean square error to a minimum value. This delta rule for a single-layer net can be called a precursor of the backpropagation net used for multilayer nets. The multilayer extension of Adaline formed the Madaline [Widrow and Lehr, 1990].
• 1982 - John Hopfield's networks
Hopfield showed how to use an "Ising spin glass" type of model to store information in dynamically stable networks. His work paved the way for physicists to enter neural modeling, thereby transforming the field of neural networks. Hopfield nets are widely used as associative memory nets, and are found in both continuous valued and discrete valued forms. This net provides an efficient solution for the 'Travelling Salesman Problem'.
• 1982 - Kohonen's Self-Organizing Maps (SOM)
Kohonen's Self-Organizing Maps are capable of reproducing important aspects of the structure of biological neural nets. They make use of data representation through topographic maps, which are common in the nervous system. SOM also has a wide range of applications. It shows how the output layer can pick up the correlational structure of the inputs in the form of the spatial arrangement of units. These nets are applied to many recognition problems.
• 1985 - Parker, 1986 - LeCun
During this period, the backpropagation net paved its way into neural networks. This method propagates the error information at the output units back to the hidden units using a generalized delta rule. This net is basically a multilayer, feedforward net trained by means of backpropagation. Originally, even though the work was performed by Parker (1985), the credit of publishing this net goes to Rumelhart, Hinton and Williams (1986). The backpropagation net emerged as the most popular learning algorithm for the training of multilayer perceptrons and has been the workhorse for many neural network applications.
• 1988 - Grossberg
Grossberg developed a learning rule similar to that of Kohonen, which is widely used in the Counter Propagation net. This Grossberg type of learning is also used as outstar learning. This learning occurs for all the units in a particular layer; no competition among these units is assumed.
• 1987, 1990 - Carpenter and Grossberg
Carpenter and Grossberg invented Adaptive Resonance Theory (ART). ART was designed for both binary inputs and continuous valued inputs. The design for the binary inputs formed ART1, and ART2 came into being when the design became applicable to continuous valued inputs. The most important feature of these nets is that the input patterns can be presented in any order.
• 1988 - Broomhead and Lowe developed Radial Basis Functions (RBF). This is also a multilayer net that is quite similar to the backpropagation net.
• 1990 - Vapnik developed the support vector machine.

2.4 Biological Neural Networks
A biological neuron or a nerve cell consists of synapses, dendrites, the cell body (or hillock), and the axon. The "building blocks" are discussed as follows:
• The synapses are elementary signal processing devices.
  ■ A synapse is a biochemical device, which converts a pre-synaptic electrical signal into a chemical signal and then back into a post-synaptic electrical signal.
  ■ The input pulse train has its amplitude modified by parameters stored in the synapse. The nature of this modification depends on the type of the synapse, which can be either inhibitory or excitatory.
• The postsynaptic signals are aggregated and transferred along the dendrites to the nerve cell body.
• The cell body generates the output neuronal signal, a spike, which is transferred along the axon to the synaptic terminals of other neurons.
• The frequency of firing of a neuron is proportional to the total synaptic activities and is controlled by the synaptic parameters (weights).
• The pyramidal cell can receive 10^4 synaptic inputs and it can fan out the output signal to thousands of target cells - a connectivity difficult to achieve in artificial neural networks.
In general, the function of the main elements can be given as:
Dendrite    Receives signals from other neurons
Soma        Sums all the incoming signals
Axon        When a particular amount of input is received, the cell fires; it transmits the signal through the axon to other cells.
The fundamental processing element of a neural network is a neuron. This building block of human awareness encompasses a few general capabilities. Basically, a biological neuron receives inputs from other sources, combines them in some way, performs a generally nonlinear operation on the result, and then outputs the final result. Figure 2.2 shows the relationship of these four parts.
Fig. 2.2 | A Biological Neuron - the 4 Parts of a Typical Nerve Cell:
  Dendrites: Accept inputs
  Soma: Process the inputs
  Axon: Turn the processed inputs into outputs
  Synapses: The electrochemical contact between neurons
The properties of the biological neuron pose some features on the artificial neuron. They are:
1. Signals are received by the processing elements.
2. The weight at the receiving end has the capability to modify the incoming signal.
3. The processing element sums the weighted inputs.
4. The neuron fires (transmits output) when sufficient input is obtained.
5. The output produced from one neuron may be transmitted to other neurons.
6. The processing of information is found to be local.
7. The weights can be modified by experience.
8. Neurotransmitters for the synapse may be excitatory or inhibitory.
9. Both artificial and biological neurons have in-built fault tolerance.
Figure 2.3 and Table 2.2 indicate how the biological net is associated with the artificial neural net.
Fig. 2.3 | Association of Biological Net With Artificial Net (cell body, dendrites, summation, threshold, axon)
Table 2.2 | Associated Terminologies of Biological and Artificial Neural Net

Biological Neural Network    Artificial Neural Network
Cell Body                    Neurons
Dendrite                     Weights or interconnections
Soma                         Net input
Axon                         Output
2.5 Comparison Between the Brain and the Computer
The main differences between the brain and the computer are:
• Biological neurons, the basic building blocks of the brain, are slower than silicon logic gates. The neurons operate in milliseconds, which is about six orders of magnitude slower than the silicon gates operating in the nanosecond range.
• The brain makes up for the slow rate of operation with two factors:
  • a huge number of nerve cells (neurons) and interconnections between them. The human brain contains approximately 10^14 to 10^15 interconnections.
  • the function of a biological neuron seems to be much more complex than that of a logic gate.
• The brain is very energy efficient. It consumes only about 10^-16 joules per operation per second, compared with 10^-6 joules per operation per second for a digital computer.
• The brain is a highly complex, non-linear, parallel information processing system. It performs tasks like pattern recognition, perception and motor control many times faster than the fastest digital computers.
• Consider the efficiency of the visual system, which provides a representation of the environment that enables us to interact with it. For example, a complex task of perceptual recognition, e.g. recognition of a familiar face embedded in an unfamiliar scene, can be accomplished in 100-200 ms, whereas tasks of much lesser complexity can take hours if not days on conventional computers.
• As another example, consider the efficiency of the SONAR system of a bat. SONAR is an active echo location system. A bat's SONAR provides information about the distance from a target, its relative velocity and size, the size of various features of the target, and its azimuth and elevation. The complex neural computations needed to extract all this information from the target echo occur within a brain that has the size of a plum. The precision and success rate of the target location is rather impossible to match by RADAR or SONAR engineers.

2.6 Comparison Between Artificial and Biological Neural Network


Table 2.3 shows the major differences between the biological and the artificial neural network.

Table 2.3 | Comparison Between Artificial and Biological Neural Network

Speed
  Artificial: Neural networks are faster in processing information. The cycle time corresponding to execution of one step of a program in the central processing unit is in the range of a few nanoseconds.
  Biological: Biological neurons are slow in processing information. The cycle time corresponding to a neural event prompted by an external stimulus occurs in the millisecond range.

Processing
  Artificial: Many programs have a large number of instructions, and they operate in a sequential mode, one instruction after another, on a conventional computer.
  Biological: Biological neural networks can perform massively parallel operations. The brain possesses the capability to operate with massively parallel operations, each of them having only a few steps.

Size and complexity
  Artificial: These do not involve as many computational neurons. Hence it is difficult to perform complex pattern recognition.
  Biological: Neural networks have a large number of computing elements, and the computing is not restricted to within neurons. The number of neurons in the brain is estimated at about 10^11 and the total number of interconnections at around 10^15. The size and complexity of connections gives the brain the power of performing complex pattern recognition tasks, which cannot be realized on a computer.

Storage
  Artificial: In a computer, the information is stored in the memory, which is addressed by its location. Any new information in the same location destroys the old information. Hence here it is strictly replaceable.
  Biological: Neural networks store information in the strengths of the interconnections. Information in the brain is adaptable, because new information is added by adjusting the interconnection strengths, without destroying the old information.

Fault tolerance
  Artificial: Artificial nets are inherently not fault tolerant, since information corrupted in the memory cannot be retrieved.
  Biological: They exhibit fault tolerance, since the information is distributed in the connections throughout the network. Even though a few connections are not working, the information is still preserved due to the distributed nature of the encoded information.

Control mechanism
  Artificial: There is a control unit, which monitors all the activities of computing.
  Biological: There is no central control for processing information in the brain. The neuron acts based on locally available information and transmits its output to the neurons connected to it.
2.7 Basic Building Blocks of Artificial Neural Networks
The basic building blocks of the artificial neural network are:
1. Network architecture
2. Setting the weights
3. Activation function

2.7.1 Network Architecture
The arrangement of neurons into layers and the pattern of connection within and in-between layers are generally called the architecture of the net. The neurons within a layer may be fully interconnected or not interconnected. The number of layers in the net can be defined as the number of layers of weighted interconnected links between the particular slabs of neurons. If two layers of interconnected weights are present, then the net is found to have a hidden layer. There are various types of network architectures: feedforward, feedback, fully interconnected net, competitive net.
Artificial neural networks (and real neural networks for that matter) come in many different shapes and sizes (see Fig. 2.4). In feedforward architectures, the activations of the input units are set and then propagated through the network until the values of the output units are determined. The network acts as a vector-valued function taking one vector on the input and returning another vector on the output. For instance, the input vector might represent the characteristics of a bank customer and the output might be a prediction of whether that customer is likely to default on a loan. Or the inputs might represent the characteristics of a gang member and the output might be a prediction of the gang to which that person belongs.
A discussion on some commonly used nets follows.
Fig. 2.4 | Some Artificial Neural Network Connection Structures: Single Layer Feedforward, Multi Layer Feedforward, Fully Recurrent Network, Competitive Network, Jordan Network, Simple Recurrent Network


Feed Forward Net
Feedforward networks may have a single layer of weights, where the inputs are directly connected to the outputs, or multiple layers with intervening sets of hidden units (see Fig. 2.4). Neural networks use hidden units to create internal representations of the input patterns. In fact, it has been shown that, given enough hidden units, it is possible to approximate arbitrarily well any function with a simple feedforward network. This result has encouraged people to use neural networks to solve many kinds of problems.
1. Single layer net: It is a feedforward net. It has only one layer of weighted interconnections. The inputs may be connected fully to the output units. But there is a chance that none of the input units and output units are connected with other input and output units respectively. There is also a case where the input units are connected with other input units and the output units with other output units. In a single layer net the weights from one output unit do not influence the weights for other output units.
2. Multilayer net: It is also a feedforward net, i.e. a net where the signals flow from the input units to the output units in a forward direction. The multilayer net possesses one or more layers of nodes between the input and output units. This is advantageous over the single layer net in the sense that it can be used to solve more complicated problems; a small sketch of such a layered forward pass follows.
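To make the layer-by-layer flow concrete, here is a small NumPy sketch of a forward pass through a multilayer feedforward net; the layer sizes, random weights and tanh activation are illustrative assumptions, not a prescription from the text:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(3)                # activities of 3 input units
    W_hid = rng.random((4, 3))       # weights: input layer -> 4 hidden units
    W_out = rng.random((2, 4))       # weights: hidden layer -> 2 output units

    hidden = np.tanh(W_hid @ x)      # hidden activities from input activities
    y = np.tanh(W_out @ hidden)      # output activities from hidden activities
    print(y)                         # the network's vector-valued output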
Competitive Net
The competitive net is similar to a single-layered feedforward network, except that there are connections, usually negative, between the output nodes. Because of these connections the output nodes tend to compete to represent the current input pattern. Sometimes the output layer is completely connected and sometimes the connections are restricted to units that are close to each other (in some neighborhood). With an appropriate learning algorithm the latter type of network can be made to organize itself topologically. In a topological map, neurons near each other represent similar input patterns. Networks of this kind have been used to explain the formation of the topological maps that occur in many animal sensory systems, including vision, audition, touch and smell.

Recurrent Net
The fully recurrent network is perhaps the simplest of neural network architectures. All units are connected to all other units, and every unit is both an input and an output. Typically, a set of patterns is instantiated on all of the units, one at a time. As each pattern is instantiated the weights are modified. When a degraded version of one of the patterns is presented, the network attempts to reconstruct the pattern.
Recurrent networks are also useful in that they allow networks to process sequential information. Processing in recurrent networks depends on the state of the network at the last time step. Consequently, the response to the current input depends on previous inputs. Figure 2.4 shows two such networks: the simple recurrent network and the Jordan network.

2.7.2 Setting the Weights
The method of setting the value for the weights enables the process of learning or training. The process of modifying the weights in the connections between network layers, with the objective of achieving the expected output, is called training a network. The internal process that takes place when a network is trained is called learning. Generally, there are three types of training, as follows.
1. Supervised Training
Supervised training is the process of providing the network with a series of sample inputs and comparing the output with the expected responses. The training continues until the network is able to provide the expected response. In a neural net, for a sequence of training input vectors there may exist target output vectors. The weights may then be adjusted according to a learning algorithm. This process is called supervised training.
In a logic circuit, we might have the target output as '+1', if the necessary logic condition is satisfied, or '-1', if the logic condition is not satisfied. These types of logical nets are trained using supervised algorithms. The same criterion is applicable for pattern classification nets also.
Supervised training is adopted in pattern association as well. If a neural net is trained to associate a set of input vectors with a corresponding set of output vectors, then it is called an associative memory net. If the output is the same as the input, then it forms an auto-associative memory; if the output is different from the input, then it is hetero-associative.
Some of the supervised learning algorithms include the Hebb net, pattern association memory net, back propagation net, counter propagation net, etc.
2. Unsupervised Training
In a neural net, if for the training input vectors the target output is not known, the training method adopted is called unsupervised training. The net may modify the weights so that the most similar input vectors are assigned to the same output unit. The net is found to form an exemplar or code book vector for each cluster formed.
Unsupervised networks are far more complex and difficult to implement. They involve looping connections back into feedback layers and iterating through the process until some sort of stable recall can be achieved. Unsupervised networks are also called self-learning networks or self-organizing networks because of their ability to carry out self-learning. This is the method adopted in the case of self-organizing feature maps, adaptive resonance theory, etc. The training process extracts the statistical properties of the training set and groups similar vectors into classes.
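As a hedged sketch of this clustering idea, the winner-take-all step below assigns the input to the most similar code book vector and moves that vector towards the input; the vectors and learning rate are illustrative assumptions:

    import numpy as np

    def unsupervised_step(weights, x, lr=0.1):
        # the unit whose code book vector is most similar to x wins ...
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        # ... and its vector is moved towards the input (clustering)
        weights[winner] += lr * (x - weights[winner])
        return winner

    codebook = np.array([[0.0, 0.0], [1.0, 1.0]])  # two exemplar vectors
    print(unsupervised_step(codebook, np.array([0.9, 0.8])))  # 1
    print(codebook)  # the winning vector has moved to [0.99, 0.98]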

3. Reinforcement Training
In this method, a teacher is also assumed to be present, but the right answer is not presented to the network. Instead, the network is only presented with an indication of whether the output answer is right or wrong. The network must then use this information to improve its performance. Reinforcement learning is a very general approach to learning that can be applied when the knowledge required to apply supervised learning is not available. If sufficient information is available, reinforcement learning can readily handle a specific problem. However, it is usually better to use other methods such as supervised and unsupervised learning, because they are more direct and their underlying analytical basis is usually well understood.
Reinforcement training is related to supervised training. The output in this case may not be indicated as the desired output, but the condition whether it is 'success' (+1) or 'failure' (0) may be indicated. Based on this, the error may be calculated and the training process may be continued. The error signal produced from reinforcement training is found to be binary. Reinforcement learning attempts to learn the input-output mapping through trial and error with a view to maximize a performance index called the reinforcement signal. The system knows whether the output is correct or not, but does not know the correct output.
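The following toy sketch illustrates this trial-and-error loop; the particular update (reinforce on success, randomly perturb on failure) is an illustrative assumption rather than an algorithm from the text:

    import random

    def reinforcement_step(w, x, success, lr=0.05):
        if success:  # reinforcement signal +1: strengthen current behaviour
            return [wi + lr * xi for wi, xi in zip(w, x)]
        # reinforcement signal 0: perturb the weights and try again
        return [wi + lr * random.uniform(-1.0, 1.0) for wi in w]

    w = reinforcement_step([0.2, -0.1], [1, 0], success=True)
    print(w)  # [0.25, -0.1]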
Many of these learning methods are closely connected with a certain (class of) network topology. Now here is the list, just giving some names:
1. Unsupervised learning (i.e. without a "teacher"):
(1) Feedback Nets:
(a) Binary Adaptive Resonance Theory (ART1)
(b) Analog Adaptive Resonance Theory (ART2, ART2a)
(c) Discrete Hopfield (DH)
(d) Continuous Hopfield (CH)
(e) Discrete Bi-directional Associative Memory (BAM)
(f) Temporal Associative Memory (TAM)
(g) Adaptive Bi-directional Associative Memory (ABAM)
(h) Kohonen Self-Organizing Map/Topology-Preserving Map (SOM/TPM)
(i) Competitive learning
(2) Feedforward-only Nets:
(a) Learning Matrix (LM)
(b) Driver-Reinforcement Learning (DR)
(c) Counter Propagation (CPN)
2. Supervised learning (i.e. with a "teacher"):
(1) Feedback Nets:
(a) Boltzmann Machine (BM)
(b) Mean Field Annealing (MFT)
(c) Recurrent Cascade Correlation (RCC)
(d) Learning Vector Quantization (LVQ)
(e) Backpropagation through time (BPTT)
(f) Real-time recurrent learning (RTRL)
(2) Feedforward-only Nets:
(a) Perceptron
(b) Adaline, Madaline
(c) Backpropagation (BP)
(d) Cauchy Machine (CM)
(e) Artmap
(f) Cascade Correlation (CasCor)
2.7.3 Activation Function
The activation function is discussed in detail in Section 2.8.2.
2.8 Artificial Neural Network (ANN) Terminologies
The key terms used in the discussion on artificial neural networks are discussed below.

2.8.1 Weights
As discussed in the previous sections, a neural network consists of a large number of simple processing elements called neurons. These neurons are connected to each other by directed communication links, which are associated with weights. "Weight is an information used by the neural net to solve a problem."
Figure 2.5 indicates a simple neural network. The weights that carry information are denoted by w1 and w2. They may be fixed, or can take random values. Weights can be set to zero, or can be calculated by some methods. Initialization of weights is an important criterion in a neural net. The weight changes indicate the overall performance of the neural net. From Fig. 2.5,
x1 = Activation of neuron 1 (input signal)
x2 = Activation of neuron 2 (input signal)
y = Output neuron
w1 = Weight connecting neuron 1 to the output
w2 = Weight connecting neuron 2 to the output
Fig. 2.5 | A Simple Neural Net
Based on all these parameters, the net input 'Net' is calculated. The Net is the summation of the products of the weights and the input signals:
Net = x1 w1 + x2 w2
Generally, it can be written as,
Net input = Net = Σi xi wi
From the calculated net input, applying the activation functions, the output may be calculated.
2.8.2 Activation Functions
The activation function is used to calculate the output response of a neuron. The sum of the weighted input signals is applied with an activation to obtain the response. For neurons in the same layer, the same activation functions are used. There may be linear as well as nonlinear activation functions. The nonlinear activation functions are used in a multilayer net. A few linear and nonlinear activation functions are discussed here.
Identity Function
The function is given by,
f(x) = x, for all x.
This is shown in Fig. 2.6.
Fig. 2.6 | Identity Function


Binary Step Function
The function is given by,
f(x) = 1, if x ≥ θ
f(x) = 0, if x < θ
The threshold θ is discussed in the following sections. Mostly, single layer nets use the binary step function for calculating the output from the net input. The binary step function is also called the threshold function or Heaviside function. Figure 2.7 shows a binary step function.
Fig. 2.7 | Binary Step Function
2.8.3 Sigmoidal Functions
These functions are usually S-shaped curves. The hyperbolic tangent and logistic functions are commonly used. These are used in multilayer nets like the back propagation network, the radial basis function network, etc. There are two main types of sigmoidal functions:
Binary Sigmoidal Function
This is also called the logistic function. It ranges between 0 and 1.
f(x) = 1 / (1 + exp(-σx))
where σ is called the steepness parameter. If f(x) is differentiated, we get
f'(x) = σ f(x) [1 - f(x)]
Figure 2.8 shows the binary sigmoidal function (for σ = 1 and σ = 2).
Fig. 2.8 | Binary Sigmoidal Function
Bipolar Sigmoidal Function
The desired range here is between +1 and -1. The function is related to the hyperbolic tangent function. The bipolar sigmoidal function is given as,
b(x) = 2 f(x) - 1
b(x) = 2 / (1 + exp(-σx)) - 1
b(x) = (1 - exp(-σx)) / (1 + exp(-σx))
On differentiating the function b(x), we get,
b'(x) = (σ/2) [(1 + b(x)) (1 - b(x))]
Mostly it is found that bipolar data is used, hence this activation function is widely used. Figure 2.9 shows the bipolar sigmoidal function.
Fig. 2.9 | Bipolar Sigmoidal Function
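These functions are easy to check numerically; the sketch below implements the activations of Sections 2.8.2-2.8.3 and verifies the derivative relation f'(x) = σ f(x)[1 - f(x)] at an arbitrary sample point (the point and σ are assumptions for illustration):

    import numpy as np

    def identity(x):
        return x

    def binary_step(x, theta=0.0):
        return np.where(x >= theta, 1.0, 0.0)

    def binary_sigmoid(x, sigma=1.0):        # logistic; range (0, 1)
        return 1.0 / (1.0 + np.exp(-sigma * x))

    def bipolar_sigmoid(x, sigma=1.0):       # range (-1, 1)
        return 2.0 * binary_sigmoid(x, sigma) - 1.0

    # check f'(x) = sigma * f(x) * (1 - f(x)) against a numerical derivative
    x, sigma, h = 0.7, 2.0, 1e-6
    numeric = (binary_sigmoid(x + h, sigma) - binary_sigmoid(x - h, sigma)) / (2 * h)
    analytic = sigma * binary_sigmoid(x, sigma) * (1 - binary_sigmoid(x, sigma))
    print(np.isclose(numeric, analytic))     # True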

2.8.4 Calculation of Net Input Using Matrix Multiplication Method
If the weights are given as W = (wij) in matrix form, the net input to output unit yj is given as the dot product of the input vector x = (x1, ..., xi, ..., xn) and wj (the jth column of the weight matrix):
y_inj = x · wj
y_inj = Σ(i=1 to n) xi wij
Hence the net input can be calculated using the matrix multiplication method.
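For example, with made-up numbers, the same computation in NumPy:

    import numpy as np

    x = np.array([0.2, 0.6, 0.9])      # input vector (x1, x2, x3)
    W = np.array([[0.1, 0.4],          # w_ij: row i = input neuron i,
                  [0.5, 0.2],          #       column j = output unit j
                  [0.3, 0.7]])
    y_in = x @ W                       # net input to each output unit y_j
    print(y_in)                        # [0.59 0.83]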

2.8.5 Bias
A bias acts exactly as a weight on a connection from a unit whose activation is always 1. Increasing the bias increases the net input to the unit (b = w0). Figure 2.10 shows a simple neural net with the bias included.
Fig. 2.10 | A Simple Net with Bias Included
The bias improves the performance of the neural network. Similar to the initialization of weights, the bias should also be initialized either to 0, or to any specified value, based on the neural net. If bias is present, then the net input is calculated as,
Net = b + Σi xi wi
where, Net = net input
b = bias
xi = input from neuron i
wi = weight of neuron i to the output neuron.
Hence, if bias is included, the activation function is obtained as,
f(Net) = +1, if Net ≥ 0
f(Net) = -1, if Net < 0

2.8.6 Threshold
The threshold θ is a factor which is used in calculating the activations of the given net. Based on the value of the threshold, the output may be calculated, i.e. the activation function is based on the value of θ.
For example, the activation functions may be:
(i) y = f(Net) = +1, if Net > θ; -1, if Net ≤ θ
(ii) y = f(y_in) = 1, if y_in > θ; y, if y_in = θ; -1, if y_in < θ (used for the bidirectional associative memory net)
Hence θ indicates the threshold value, based on which the system's response is calculated; the threshold value is defined by the user.
H
A
p What You Will Learn
T • The basic funda men tal neuro n
E model , i.e. the McCu lloch- Pitts
neuro n mode l.
R • Excita tory and inhibit ory con nection
paths in the McCu lloch-P itts neuron.
• Proce ss of learni ng in a neural
netwo rk.
• Types of learni ng algorit hms and
3 their classification based on the way
in which the adjust ment to a synaptic
weigh t of a neuro n is formu lated.

Fu nd am en ta l • Various learnin g algorit hms like


Hebb learnin g rule, perceptron
learning rule , Delta rule , etc.
M odel s of • The linear separability concept used
to form the decisi on boundary
regions .
A rt ifici al Neural
N et w orks
3.1 Introduction
In this chapter, the fundamental models of Artificial Neural Networks (ANN) are dealt with, along with the various learning rules and the linear separability concept.

3.2 McCulloch-Pitts Neuron Model
The first formal definition of a synthetic neuron model based on the highly simplified considerations of the biological model was formulated by Warren McCulloch and Walter Pitts in 1943. The McCulloch-Pitts model of a neuron is characterized by its formalism and its elegant, precise mathematical definition. The McCulloch-Pitts neuron allows binary 0 or 1 states only, i.e. it is binary activated. The neurons are connected by direct weighted paths. A connection path can be excitatory or inhibitory: excitatory connections have positive weights and inhibitory connections have negative weights. There is a single weight for all the excitatory connections entering into a particular neuron. Each neuron is associated with a threshold value; the neuron fires if the net input to the neuron is greater than the threshold. The threshold is set so that the inhibition is absolute, because a non-zero inhibitory input will prevent the neuron from firing. It takes only one time step for a signal to pass over one connection link.

3.2.1 Architecture
The architecture of the McCulloch-Pitts neuron is shown in Fig. 3.1.
Fig. 3.1 | Architecture of a McCulloch-Pitts Neuron
Y is the McCulloch-Pitts neuron; it can receive signals from any number of other neurons. The connection weights from x1, ..., xn are excitatory, denoted by 'w', and the connection weights from xn+1, ..., xn+m are inhibitory, denoted by '-p'. The McCulloch-Pitts neuron Y has the activation function:
f(y_in) = 1, if y_in ≥ θ
f(y_in) = 0, if y_in < θ
where θ is the threshold and y_in is the total net input signal received by neuron Y.
The threshold θ should satisfy the relation
θ > nw - p
This is the condition for absolute inhibition.
The McCulloch-Pitts neuron will fire if it receives k or more excitatory inputs and no inhibitory inputs, where
kw ≥ θ > (k - 1)w.

Solved Examples

Example 3.1 Generate the output of the logic AND function by the McCulloch-Pitts neuron model.
Solution The AND function returns a true value only if both the inputs are true, else it returns a false value. '1' represents the true value and '0' represents the false value.
The truth table for the AND function is,
x1  x2  y
1   1   1
1   0   0
0   1   0
0   0   0
A McCulloch-Pitts neuron to implement the AND function is shown in Fig. 3.2. The threshold on unit Y is 2.
Fig. 3.2 | McCulloch-Pitts Neuron to Perform the Logical AND Function
The output Y is,
Y = f(y_in)
The net input is given by
y_in = Σ weights * inputs = 1 * x1 + 1 * x2 = x1 + x2
From this, the activation of the output neuron can be formed:
Y = f(y_in) = 1, if y_in ≥ 2; 0, if y_in < 2
Now present the inputs:
(i) x1 = x2 = 1: y_in = x1 + x2 = 1 + 1 = 2
    y = f(y_in) = 1, since y_in = 2 ≥ 2.
(ii) x1 = 1, x2 = 0: y_in = x1 + x2 = 1 + 0 = 1
    y = f(y_in) = 0, since y_in = 1 < 2.
    The same holds when x1 = 0 and x2 = 1.
(iii) x1 = 0, x2 = 0: y_in = x1 + x2 = 0 + 0 = 0
    Hence, y = f(y_in) = 0, since y_in = 0 < 2.
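The solution can also be checked mechanically; the short sketch below implements the neuron of Fig. 3.2 (both weights 1, threshold 2, as in the example) and reproduces the truth table:

    def mcp_and(x1, x2, theta=2):
        y_in = 1 * x1 + 1 * x2            # net input with both weights = 1
        return 1 if y_in >= theta else 0  # fire only when y_in >= theta

    for x1 in (1, 0):
        for x2 in (1, 0):
            print(x1, x2, mcp_and(x1, x2))
    # y = 1 only for (1, 1), matching the AND truth table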
Example 3.2 Generate the OR function using the McCulloch-Pitts neuron model.
Solution The OR function returns a high ('1') if any one of the inputs is high; it returns a low ('0') if none of the inputs is high.
The truth table for the OR function is,
x1  x2  y
1   1   1
1   0   1
0   1   1
0   0   0
A McCulloch-Pitts neuron for the OR function is shown in Fig. 3.3, with a weight of 3 on each input. The threshold for the unit is 3.
Fig. 3.3 | McCulloch-Pitts Neuron to Perform the Logical OR Function
The net input is calculated as,
y_in = 3x1 + 3x2
The output is given by,
Y = f(y_in) = 1, if y_in ≥ 3; 0, if y_in < 3
Presenting the inputs:
(i) x1 = x2 = 1: y_in = 3x1 + 3x2 = 3 × 1 + 3 × 1 = 6 > threshold 3. Hence, y = 1.
(ii) x1 = 1, x2 = 0: y_in = 3x1 + 3x2 = 3 × 1 + 3 × 0 = 3 = threshold. Applying the activation formula, y = f(y_in) = 1. This is also the case when x1 = 0 and x2 = 1.
(iii) x1 = x2 = 0: y_in = 3x1 + 3x2 = 3 × 0 + 3 × 0 = 0 < threshold. Hence output y = 0.
This type of synapse is called a Hebbian synapse. The four key mechanisms that characterize a Hebbian synapse are the time dependent mechanism, local mechanism, interactive mechanism and correlational mechanism.
The simplest form of Hebbian learning is described by
Δwi = xi y
This Hebbian learning rule represents a purely feedforward, unsupervised learning. It states that if the cross product of output and input is positive, this results in an increase of the weight; otherwise the weight decreases.
In some cases, the Hebbian rule needs to be modified to counteract unconstrained growth of weight values, which takes place when excitations and responses consistently agree in sign. This corresponds to the Hebbian learning rule with saturation of the weights at a certain preset level.
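A minimal sketch of this update follows; the saturation cap is an illustrative assumption implementing the preset level mentioned above:

    def hebb_step(w, x, y, cap=10.0):
        # delta w_i = x_i * y, clipped to the preset saturation level
        return [max(-cap, min(cap, wi + xi * y)) for wi, xi in zip(w, x)]

    w = hebb_step([0.0, 0.0], x=[1, -1], y=1)
    print(w)  # [1.0, -1.0]: weights grow where input and output agree in sign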

3.3.2 Perceptron Learning Rule
For the perceptron learning rule, the learning signal is the difference between the desired and the actual neuron's response. This type of learning is supervised.
The fact that the weight vector is perpendicular to the plane separating the input patterns during the learning process can be used to interpret the degree of difficulty of training a perceptron for different types of input.
The perceptron learning rule states that for a finite 'n' number of input training vectors,
x(n), where n = 1 to N
each with an associated target value,
t(n), where n = 1 to N
which is +1 or -1, and an activation function y = f(y_in), where
y = 1, if y_in > θ
y = 0, if -θ ≤ y_in ≤ θ
y = -1, if y_in < -θ
the weight updation is given by:
if y ≠ t, then w_new = w_old + t x
if y = t, then there is no change in weights.
The perceptron learning rule is of central importance for supervised learning of neural networks. The weights can be initialized at any values in this method.
There is a perceptron learning rule convergence theorem which states: "If there is a weight vector w* such that f(x(p) w*) = t(p) for all p, then for any starting vector w, the perceptron learning rule will converge to a weight vector that gives the correct response for all training patterns, and this will be done in a finite number of steps."
3.3.3 Delta Learning Rule (Widrow-Hoff Rule or Least Mean Square (LMS) Rule)
The delta learning rule is also referred to as the Widrow-Hoff rule, named after its originators (Widrow and Hoff, 1960). The delta learning rule is valid only for continuous activation functions and in the supervised training mode.
:' . -,,, anct in th
- ~ -
