1.Introduction
A neural network is a fusion of artificial intelligence and brain-inspired
design that is reshaping modern computing. With layers of
interconnected artificial neurons, these networks emulate the intricate
workings of the human brain, enabling remarkable feats in machine
learning. There are different types of neural networks, from feedforward
to recurrent and convolutional, each tailored for specific tasks.
This article covers their real-world applications across industries,
including image recognition, natural language processing, and more. Read on
to learn about neural networks in machine learning!
Neural networks mimic the basic functioning of the human brain and
are inspired by how the brain interprets information. They solve
various real-time tasks thanks to their ability to perform computations
quickly and respond fast.
An artificial neural network has a large number of interconnected
processing elements, also known as nodes. These nodes are connected
to other nodes using connection links. Each connection link carries a
weight, and these weights hold information about the input signal.
Each iteration and input, in turn, leads to an update of these weights.
After all the data instances from the training data set have been fed in,
the final weights of the neural network, together with its architecture,
are known as the trained neural network.
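To make this concrete, here is a minimal sketch of a single node whose connection weights are updated after every training example. The delta-rule update, the step activation, and the learning rate are illustrative assumptions for this example, not part of any specific architecture.

```python
import numpy as np

def step(z):
    # Threshold activation: fires 1 if the weighted sum is non-negative.
    return 1.0 if z >= 0.0 else 0.0

def train_neuron(inputs, targets, epochs=10, lr=0.1):
    # One artificial neuron: weights on its connection links are
    # nudged after each input, as described above (delta rule).
    weights = np.zeros(inputs.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = step(np.dot(weights, x) + bias)  # forward pass
            error = t - y                        # prediction error
            weights += lr * error * x            # update the link weights
            bias += lr * error
    return weights, bias  # the "trained" parameters

# Usage: learn the logical AND function from four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_neuron(X, y)
print(w, b)
```

After training, the final weights and bias are exactly the "trained network" in miniature: the architecture plus the learnt parameters.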
2.History
In 1943, neurophysiologist Warren McCulloch and mathematician
Walter Pitts wrote a paper on how neurons might work. To describe
how neurons in the brain might operate, they modeled a simple
neural network using electrical circuits.
3.Types of Neural Networks
A. Perceptron
C. Multilayer Perceptron
The multilayer perceptron is an entry point towards complex neural nets,
where input data travels through several layers of artificial neurons.
Every single node is connected to all neurons in the next layer, which
makes it a fully connected neural network. Input and output layers are
present, along with one or more hidden layers, i.e. at least three layers
in total. It has bi-directional propagation, i.e. forward propagation and
backward propagation.
Inputs are multiplied with weights and fed to the activation function,
and in backpropagation the weights are modified to reduce the loss. In
simple words, weights are values learnt by the neural network. They
self-adjust depending on the difference between the predicted outputs
and the training targets. Nonlinear activation functions are used in the
hidden layers, typically with softmax as the output-layer activation
function.
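To make the forward pass concrete, here is a minimal sketch of a fully connected network with one hidden layer: inputs are multiplied with weights, passed through a nonlinear activation (ReLU here, an illustrative choice), and the output layer applies softmax. The layer sizes and random initialization are assumptions for the example; backward propagation is only noted in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Nonlinear activation for the hidden layer.
    return np.maximum(0.0, z)

def softmax(z):
    # Output-layer activation: turns scores into class probabilities.
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative layer sizes: 4 inputs, 8 hidden units, 3 output classes.
W1 = rng.normal(scale=0.1, size=(8, 4))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(3, 8))
b2 = np.zeros(3)

def forward(x):
    h = relu(W1 @ x + b1)        # inputs multiplied with weights, then activation
    return softmax(W2 @ h + b2)  # softmax at the output layer

x = rng.normal(size=4)
probs = forward(x)
print(probs, probs.sum())  # class probabilities summing to 1

# In backward propagation (not shown), the loss gradient is pushed back
# through these layers and the weights are adjusted to reduce the
# difference between predicted and target outputs.
```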
D. Radial Basis Function Network
When a new input vector (the n-dimensional vector to be classified) is
presented, each neuron calculates the Euclidean distance between the
input and its prototype. For example, if we have two classes, class A and
class B, then the new input is assigned to whichever class's prototype
lies nearer to it.
Each RBF neuron compares the input vector to its prototype and
outputs a value between 0 and 1 that measures their similarity. When
the input equals the prototype, the output of that RBF neuron is 1, and
as the distance between the input and the prototype grows, the
response falls off exponentially towards 0. The curve of a neuron's
response is a typical bell curve.
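This bell-shaped response is usually realized with a Gaussian, phi(x) = exp(-||x - mu||^2 / (2 sigma^2)), where mu is the neuron's prototype. A minimal sketch follows; the prototype, the width sigma, and the test inputs are illustrative assumptions.

```python
import numpy as np

def rbf_response(x, prototype, sigma=1.0):
    # Gaussian RBF: outputs 1 when the input equals the prototype and
    # decays exponentially towards 0 as the Euclidean distance between
    # input and prototype grows.
    dist_sq = np.sum((x - prototype) ** 2)
    return np.exp(-dist_sq / (2.0 * sigma ** 2))

prototype = np.array([1.0, 2.0])
print(rbf_response(np.array([1.0, 2.0]), prototype))  # 1.0: exact match
print(rbf_response(np.array([3.0, 4.0]), prototype))  # close to 0: far away
```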
4.Advantages
Ability to Learn Complex Patterns: Neural networks can learn intricate
patterns and relationships within data, making them effective for tasks
like image recognition, natural language processing, and time-series
prediction.
Adaptability: They can adapt and learn from new data, making them
suitable for dynamic environments or tasks where the underlying
patterns may change over time.
Parallel Processing: Neural networks can perform computations in
parallel, leveraging GPUs and specialized hardware to accelerate
training and inference, leading to faster processing times.
Feature Learning: They can automatically extract relevant features
from raw data, reducing the need for manual feature engineering,
which can be time-consuming and domain-specific.
Generalization: Neural networks can generalize well to unseen data,
provided they are properly trained and validated, making them robust
in handling diverse inputs.
Non-linear Relationships: They can model complex non-linear
relationships between inputs and outputs, allowing them to capture
intricate dependencies in data.
5.Disadvantages
Complexity: Neural networks can be complex and challenging to
understand, particularly deep neural networks with many layers. This
complexity can make debugging and interpreting model decisions
difficult.
Overfitting: Neural networks are prone to overfitting, where the model
memorizes the training data rather than generalizing to new data.
Techniques such as regularization and dropout are often used to
mitigate this issue; a minimal dropout sketch follows this list.
Black Box Nature: Neural networks are often considered "black box"
models, meaning that understanding how they arrive at a particular
decision can be challenging, leading to concerns about interpretability
and trustworthiness, especially in critical applications like healthcare or
finance.
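As mentioned under overfitting above, dropout is one common mitigation. The sketch below shows the usual inverted-dropout formulation: during training, each activation is randomly zeroed with probability p and the survivors are rescaled, while at inference time the layer is left unchanged. The drop probability and input values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    # Inverted dropout: during training, zero each activation with
    # probability p and rescale survivors by 1/(1-p) so the expected
    # value is unchanged; do nothing at inference time.
    if not training or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

h = np.array([0.2, 1.5, -0.3, 0.8])
print(dropout(h, p=0.5))           # some units randomly zeroed, rest scaled
print(dropout(h, training=False))  # unchanged at inference
```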
6.Conclusion
Neural networks are a vast subject; many data scientists focus solely
on neural network techniques. In this session, we practiced only the
introductory concepts. There are many more advanced techniques, and
many algorithms other than backpropagation. Neural networks work
particularly well on certain classes of problems, such as image
recognition. Neural network algorithms are very calculation-intensive
and require highly efficient computing machines. Large datasets take a
significant amount of runtime in R, so we need to try different options
and packages. Currently, there is a lot of exciting research going on
around neural networks. After gaining sufficient knowledge in this basic
session, you may want to explore reinforcement learning and deep
learning.
7.References
1. Shao, Feng; Shen, Zheng (9 January 2022). "How can artificial
neural networks approximate the brain?". Frontiers in Psychology.
970214.
2. Levitan, Irwin; Kaczmarek, Leonard (August 19, 2015).
"Intercellular communication". The Neuron: Cell and Molecular
Biology (4th ed.). New York, NY: Oxford University Press. pp. 153–
328. ISBN 978-0199773893.
3. Rosenblatt, F. (1958). "The Perceptron: A Probabilistic Model for
Information Storage and Organization in the Brain". Psychological
Review.
4. Bishop, Christopher M. (2006). Pattern Recognition and Machine
Learning. New York: Springer. ISBN 978-0-387-31073-2.
5. Vapnik, Vladimir N. (1998). The Nature of Statistical Learning
Theory (Corrected 2nd printing). New York: Springer. ISBN
978-0-387-94559-0.
6. Bain, Alexander (1873). Mind and Body: The Theories of Their
Relation. New York: D. Appleton and Company.
7. James, William (1890). The Principles of Psychology. New York: H.
Holt and Company.