Soft Computing

The document provides an introduction to soft computing, defining it as a collection of AI techniques that mimic human-like problem-solving capabilities, contrasting it with hard computing which relies on precise mathematical methods. It also covers the basics of artificial neural networks (ANNs), their structure, functionality, types, and applications in various fields. Key concepts such as learning types, activation functions, and neuron interconnections in neural networks are discussed to illustrate their operational mechanics.


INTRO TO SOFT COMPUTING
UNIT-I
WHAT IS COMPUTING?
 Computing is a process that takes an input, uses a formal method (an algorithm or a mapping function) to process it, and delivers an output.
 This formal method or mapping function has control actions that convert a particular input to a particular output.
MATHEMATICAL FUNCTION

Input X --> [ function F ] --> Output Y, where Y = F(X)
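In code, this mapping can be sketched as an ordinary function; the particular map chosen here (x squared plus one) is purely illustrative:

```python
# Computing as a mapping: input X is transformed by a formal method F
# into output Y. F here is an arbitrary illustrative function.
def F(x):
    return x * x + 1

X = 3        # input
Y = F(X)     # output produced by the mapping function, Y = F(X)
```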
TYPES OF COMPUTING

 Hard computing
 Soft computing
HARD COMPUTING
 Hard computing uses traditional mathematical methods, such as algorithms and mathematical models, to solve problems.
 It is based on deterministic and precise calculations and is ideal for solving problems that have well-defined mathematical solutions.
Drawbacks:
 Incapable of solving real-world problems whose behavior is imprecise and whose information changes continuously.
 It works only on exact (noise-free) data.
 Programs have to be written explicitly.
What is Soft Computing?
 Soft computing is a collection of artificial intelligence (AI) computing techniques that give devices human-like problem-solving capabilities.
 It includes the basics of neural networks, fuzzy logic and genetic algorithms.
History
 SC theory and techniques were introduced in
the 1980s.
 The term SC was coined in 1992 by Lotfi A. Zadeh, a mathematician, computer scientist, electrical engineer, AI researcher and professor emeritus of Computer Science at the University of California, Berkeley.
BASIC FEATURES OF ANY
COMPUTING METHODS
1. The solution produced by the computing function must be precise.
2. Control actions in the computing function must be unambiguous and accurate.
3. There must be a mathematical model to solve the problem.
INTRODUCTION TO ARTIFICIAL
NEURAL NETWORKS
 Neural networks, also known as artificial
neural networks (ANNs) or simulated
neural networks (SNNs), are a subset
of machine learning and are at the heart
of deep learning algorithms.
 Their name and structure are inspired
by the human brain, mimicking the way
that biological neurons signal to one
another.
WHAT IS A NEURON?
 Neurons in deep learning models are nodes
through which data and computations flow.

Neurons work like this:

 They receive one or more input signals. These
input signals can come from either the raw
data set or from neurons positioned at a
previous layer of the neural net.
 They perform some calculations.
 They send some output signals to neurons
deeper in the neural net through a synapse.
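The three steps above (receive inputs, calculate, send an output) can be sketched as a single artificial neuron. The weights, bias, and sigmoid activation below are illustrative assumptions, not a prescribed design:

```python
import math

def neuron(inputs, weights, bias):
    # 1. Receive input signals, 2. perform the calculation:
    # a weighted sum of the inputs plus a bias term.
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    # 3. Send the output signal onward, squashed by a sigmoid activation
    # so it lies in (0, 1).
    return 1.0 / (1.0 + math.exp(-net))

out = neuron([0.5, 0.2], [0.4, 0.7], bias=0.1)  # a value in (0, 1)
```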
FUNCTIONALITY OF A NEURON IN
A DEEP LEARNING NEURAL NET
INSPIRATION OF NEURAL
NETWORKS
 The human brain is the inspiration behind
neural network architecture. Human brain
cells, called neurons, form a complex, highly
interconnected network and send electrical
signals to each other to help humans process
information.
 Similarly, an artificial neural network is made
of artificial neurons that work together to
solve a problem. Artificial neurons are
software modules, called nodes, and artificial
neural networks are software programs or
algorithms that, at their core, use computing
systems to solve mathematical calculations.
TYPICAL DIAGRAM OF
BIOLOGICAL NEURAL NETWORK
NN COMPARISON WITH
ANN
 Dendrites in a biological neural network correspond to inputs in an artificial neural network, the cell nucleus to nodes, synapses to weights, and the axon to the output.
APPLICATIONS
 Neural networks are widely used in a variety
of applications, including image recognition,
predictive modeling and natural language
processing (NLP).
 Examples of significant commercial
applications since 2000 include handwriting
recognition for check processing, speech-to-
text transcription, oil exploration data
analysis, weather prediction and facial
recognition.
TYPICAL ARTIFICIAL NEURAL
NETWORK
ARCHITECTURE OF
NEURAL NETWORK
AN ILLUSTRATION OF
NEURAL NETWORK
BASIC MODEL OF NEURAL
NETWORKS
 The model of an artificial neural network
can be specified by three entities:
 Interconnections
 Activation functions and
 Learning rules
INTERCONNECTION
 Interconnection can be defined as the
way processing elements (Neuron) in
ANN are connected to each other.
 These arrangements always have two layers that are common to all network architectures: the input layer, which buffers the input signal, and the output layer, which generates the output of the network.
 The third layer is the hidden layer, whose neurons belong to neither the input layer nor the output layer.
 These neurons are hidden from the people interfacing with the system and act as a black box to them.
There exist five basic types of neuron connection architectures:
1. Single-layer feed-forward network
2. Multilayer feed-forward network
3. Single node with its own feedback
4. Single-layer recurrent network
5. Multilayer recurrent network
SINGLE-LAYER FEED-
FORWARD NETWORK
 In this type of network, there are only two layers, the input layer and the output layer, but the input layer is usually not counted because no computation is performed in it.
 The output layer is formed when different weights are applied to the input nodes and the cumulative effect per node is taken.
 The neurons in the output layer then collectively compute the output signals.
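A minimal sketch of the single-layer feed-forward pass described above; the weight values are arbitrary illustrative choices:

```python
def single_layer_forward(inputs, weights):
    # weights[j][i] connects input node i to output node j.
    # Each output node takes the cumulative weighted effect of all
    # inputs; no computation happens in the input layer itself.
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

# Two inputs feeding two output nodes:
y = single_layer_forward([1.0, 2.0], [[0.5, -0.25], [1.0, 1.0]])
# y == [0.0, 3.0]
```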
SINGLE-LAYER FEED-
FORWARD NETWORK
MULTILAYER FEED-
FORWARD NETWORK
 This network also has a hidden layer that is internal to the network and has no direct contact with the external layer.
 The existence of one or more hidden layers makes the network computationally stronger. It is a feed-forward network because information flows from the input layer through the intermediate computations of the hidden layers to determine the output Z.
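The input-to-hidden-to-output flow can be sketched as below; the layer sizes, weights, and sigmoid activation are illustrative assumptions:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def layer(inputs, weights):
    # One dense layer: weighted sums followed by a sigmoid activation.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def multilayer_forward(x, hidden_w, output_w):
    # Information flows input -> hidden -> output; the hidden layer's
    # intermediate computations determine the final output Z.
    h = layer(x, hidden_w)
    return layer(h, output_w)

Z = multilayer_forward([1.0, 0.5],
                       hidden_w=[[0.2, 0.8], [-0.4, 0.6]],
                       output_w=[[1.0, -1.0]])
```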
MULTILAYER FEED-
FORWARD NETWORK
SINGLE NODE WITH ITS
OWN FEEDBACK
 When outputs can be directed back as
inputs to the same layer or preceding
layer nodes, then it results in feedback
networks.
 Recurrent networks are feedback
networks with closed loops.
 The simplest case is a single recurrent network with one neuron whose output feeds back to itself.
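A single neuron feeding its own output back to itself can be sketched as a loop over time steps; the feedback weight of 0.5 is an arbitrary illustrative choice:

```python
def feedback_node(x_inputs, w_self=0.5):
    # The node's previous output is fed back as an extra input,
    # forming the closed loop of a feedback (recurrent) network.
    y = 0.0
    for x in x_inputs:
        y = x + w_self * y   # current input plus weighted feedback
    return y

final = feedback_node([1.0, 1.0, 1.0])  # 1 + 0.5 + 0.25 = 1.75
```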
SINGLE NODE WITH ITS
OWN FEEDBACK
SINGLE-LAYER
RECURRENT NETWORK
 A recurrent neural network is a class of
artificial neural networks where
connections between nodes form a
directed graph along a sequence.
 This allows it to exhibit dynamic
temporal behavior for a time sequence.
SINGLE-LAYER
RECURRENT NETWORK
MULTILAYER RECURRENT
NETWORK
 In this type of network, processing
element output can be directed to the
processing element in the same layer
and in the preceding layer forming a
multilayer recurrent network.
 They perform the same task for every
element of a sequence, with the output
being dependent on the previous
computations.
 Inputs are not needed at each time
step.
MULTILAYER RECURRENT
NETWORK
LEARNING
 The main property of an ANN is its
capability to learn.
 Learning or training is a process by means of which a neural network adapts itself to a stimulus by making proper parameter adjustments, resulting in the production of the desired response.
There are two kinds of learning in ANNs:
1. Parameter learning: updates the connecting weights in a neural net.
2. Structure learning: focuses on changes in the network structure.
The above two types of learning can be performed simultaneously or separately.
 Apart from these two categories of learning,
the learning in an ANN can be generally
classified into three categories as:
 Supervised learning
 Unsupervised learning
 Reinforcement learning
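Parameter learning under supervision can be sketched with a simple delta-rule weight update; the learning rate, data, and target are illustrative assumptions:

```python
def train_step(weights, x, target, lr=0.1):
    # Parameter learning: adjust the connecting weights so the
    # network's response moves toward the desired (supervised) target.
    y = sum(w * xi for w, xi in zip(weights, x))   # current response
    error = target - y                             # supervision signal
    return [w + lr * error * xi for w, xi in zip(weights, x)]

w = [0.0, 0.0]
for _ in range(50):
    w = train_step(w, x=[1.0, 2.0], target=1.0)
# w now maps the input [1.0, 2.0] very close to the target 1.0
```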
ACTIVATION FUNCTION
 The activation function is applied over
the net input to calculate the output of
an ANN.
 The information processing of a
processing element can be viewed as
consisting of two major parts: input and
output.
TYPES OF ACTIVATION
FUNCTION
 Identity function
 Binary step function
 Bipolar step function
 Sigmoid function
 Ramp function
a) Identity function:
   f(x) = x for all x

b) Binary step function:
   f(x) = 1 if x >= θ
          0 if x <  θ

c) Bipolar step function:
   f(x) =  1 if x >= θ
          -1 if x <  θ

d) Sigmoid function:
   1. Binary sigmoid:  f(x) = 1 / (1 + e^(-λx))
   2. Bipolar sigmoid: f(x) = (1 - e^(-λx)) / (1 + e^(-λx))
   Note: here ^ means power.

e) Ramp function:
   f(x) = 1 if x > 1
          x if 0 <= x <= 1
          0 if x < 0
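The five functions above translate directly to code; θ (theta) and λ (lam) are parameters, and the defaults chosen here are illustrative:

```python
import math

def identity(x):
    return x                               # f(x) = x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0          # 1 if x >= θ, else 0

def bipolar_step(x, theta=0.0):
    return 1 if x >= theta else -1         # 1 if x >= θ, else -1

def binary_sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))

def bipolar_sigmoid(x, lam=1.0):
    e = math.exp(-lam * x)
    return (1.0 - e) / (1.0 + e)

def ramp(x):
    if x > 1:
        return 1
    if x < 0:
        return 0
    return x                               # linear between 0 and 1
```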
