Neural Networks: A Brief Overview

Neural networks are a type of machine learning model inspired by the human brain. They consist of interconnected nodes that send signals to each other. Common types include feedforward, recurrent, convolutional, and generative adversarial networks. Convolutional neural networks are often used for image tasks, using convolutional and pooling layers to detect features. Key components include the input layer, hidden layers for computation, output layer, weights and biases, activation functions, loss functions, and optimizers. Increasing layers can increase capacity and hierarchical representations but also computational complexity and overfitting risk, while decreasing layers reduces complexity but limits representational power. The optimal depth depends on task complexity and available data.


CIIT/FA20-BEE-056/LHR AI - Lab Assignment Instructor: Dr. Usman Iqbal

1. What is a Neural Network?


Neural networks, also known as artificial neural networks (ANNs) or simulated neural
networks (SNNs), are a subset of machine learning and are at the heart of deep learning
algorithms. Their name and structure are inspired by the human brain, mimicking the way
that biological neurons signal to one another.
ANNs consist of layers of nodes: an input layer, one or more hidden layers, and an
output layer. Each node, or artificial neuron, is connected to others and has an associated
weight and threshold. A node sends data onward to the next layer of the network only if its
output exceeds the designated threshold value; otherwise, no data is passed along.
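
To make this concrete, below is a minimal NumPy sketch of a single artificial neuron: a weighted sum of its inputs plus a bias, passed through a fire-or-not threshold rule as described above. The step rule and the example numbers are illustrative assumptions, not part of the assignment.

import numpy as np

def neuron(x, w, b, threshold=0.0):
    # One artificial neuron: weighted sum of inputs plus bias,
    # firing only if the result exceeds the threshold.
    z = np.dot(w, x) + b
    return 1.0 if z > threshold else 0.0

# Illustrative values (assumed, not from the assignment):
x = np.array([0.5, 0.3, 0.2])    # inputs from the previous layer
w = np.array([0.4, 0.7, -0.2])   # learned weights
print(neuron(x, w, b=0.1))       # -> 1.0, the neuron fires

In practice, modern networks replace the hard threshold with smooth activation functions such as ReLU (covered later in this report), which makes them trainable by gradient descent.
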
Neural networks rely on training data to learn and to improve their accuracy over
time. Once calibrated for accuracy, they become powerful tools for classification and
clustering in computer science and artificial intelligence, operating at very high speed.
Tasks such as speech recognition and image recognition can be completed in minutes,
compared with the hours a human specialist would need. A famous example is Google's
search algorithm.

Types of Neural Networks:


1. Feedforward Neural Network (FNN): Neurons are organized in layers, and information flows in
one direction—from the input layer through hidden layers to the output layer.

2. Recurrent Neural Network (RNN): Neurons have connections that form directed cycles,
allowing the network to maintain a memory of previous inputs. This is beneficial for sequential data;
a minimal sketch of this recurrence follows the list.

3. Convolutional Neural Network (CNN): Specifically designed for processing grid-like data, such
as images. It uses convolutional layers to automatically learn hierarchical features.

4. Generative Adversarial Network (GAN): Consists of a generator and a discriminator that are
trained against each other. GANs are used for generating new, realistic data.

5. Long Short-Term Memory Network (LSTM): A type of RNN with specialized neurons capable
of learning and remembering over long sequences, addressing the vanishing gradient problem.
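
As noted in item 2 above, an RNN keeps a memory of previous inputs through its cyclic connections. Below is a minimal NumPy sketch of one recurrent step; the tanh activation, layer sizes, and random weights are assumptions chosen purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8                            # assumed sizes for illustration
W_xh = rng.normal(0, 0.1, (n_hidden, n_in))      # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden-to-hidden (the directed cycle)
b_h = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    # One RNN step: the new hidden state mixes the current input
    # with the previous hidden state, which acts as the network's memory.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(n_hidden)                   # empty memory at the start
for x_t in rng.normal(size=(5, n_in)):   # a toy sequence of 5 inputs
    h = rnn_step(x_t, h)                 # memory is carried forward each step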

How Does a CNN Work?


CNNs are primarily used for image-related tasks. They consist of convolutional layers, pooling
layers, and fully connected layers. The convolutional layers apply filters to the input image, detecting
features like edges and textures. Pooling layers then downsample the output, reducing computational
complexity. Finally, fully connected layers process the high-level features and produce the output.
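
To illustrate the convolution-then-pooling pipeline just described, here is a minimal NumPy sketch: one 3x3 vertical-edge filter applied to a toy image, followed by 2x2 max pooling. The Sobel-style filter and the toy image are illustrative assumptions.

import numpy as np

def conv2d(image, kernel):
    # Valid 2D convolution (strictly, cross-correlation, as in most
    # deep learning libraries): slide the kernel over the image.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Non-overlapping max pooling: keep the strongest response in each
    # size x size window, downsampling the feature map.
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 6x6 image containing a vertical edge (assumed example data):
image = np.zeros((6, 6))
image[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # vertical-edge filter
features = conv2d(image, sobel_x)  # strong responses along the edge
pooled = max_pool(features)        # smaller feature map, less computation
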
Components of a Neural Network:
1. Input Layer: Receives the initial data.
2. Hidden Layers: Intermediate layers where computation and learning occur.
3. Output Layer: Produces the final result or prediction.
4. Weights and Biases: Parameters adjusted during training.
5. Activation Function: Introduces non-linearity to the model.
6. Loss Function: Measures the difference between predicted and actual output.
7. Optimizer: Adjusts weights and biases to minimize the loss.
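
Components 4 through 7 come together in a single training step: a forward pass, a loss measurement, and an optimizer update. The sketch below assumes a plain linear model, mean squared error, and vanilla gradient descent on toy data; all names and values are illustrative.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))             # toy inputs (assumed data)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.3  # toy targets

W = np.zeros(3)  # weights (component 4)
b = 0.0          # bias    (component 4)
lr = 0.1         # learning rate of the optimizer

for step in range(200):
    y_pred = X @ W + b                 # forward pass through the model
    loss = np.mean((y_pred - y) ** 2)  # loss function (component 6)
    # Gradients of the MSE loss with respect to W and b:
    grad_y = 2 * (y_pred - y) / len(y)
    grad_W = X.T @ grad_y
    grad_b = grad_y.sum()
    W -= lr * grad_W                   # optimizer step (component 7):
    b -= lr * grad_b                   # adjust parameters to reduce the loss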

ReLU (Rectified Linear Unit):


ReLU is an activation function commonly used in neural networks. It replaces all negative
values in its input with zero and passes positive values through unchanged, introducing the
non-linearity that lets the model learn complex patterns while also speeding up training
convergence.

How Layers Work in a Neural Network:

Input Layer: Accepts input data.
Hidden Layers: Comprise neurons that transform input using learned weights.
Output Layer: Produces the final prediction or result.

Merits and Demerits of Increasing/Decreasing Layers:


Merits of Increasing Layers:
1. Increased Capacity: More layers can capture intricate patterns and representations.
2. Hierarchical Abstractions: Deeper networks can learn hierarchical features.
3. Expressiveness: Greater depth allows for more expressive and powerful models.

Demerits of Increasing Layers:


1. Computational Complexity: Deeper networks require more computation.
2. Overfitting: Too many layers can lead to overfitting on the training data.
3. Vanishing/Exploding Gradients: Training deep networks can be challenging due to gradient-
related issues.

Merits of Decreasing Layers:


1. Reduced Complexity: Shallower networks may be computationally more efficient.
2. Less Prone to Overfitting: Shallower networks might generalize better on smaller datasets.

Demerits of Decreasing Layers:


1. Limited Representation: Shallow networks may struggle to capture complex relationships.
2. Reduced Expressiveness: Insufficient depth may limit the ability to learn intricate patterns.
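
One way to see the capacity-versus-complexity trade-off discussed above is to count trainable parameters as layers are added. The sketch below compares a shallow and a deeper fully connected network; the layer sizes are assumed purely for illustration.

def mlp_param_count(layer_sizes):
    # Parameters of a fully connected network: each layer has a weight
    # matrix (fan_in x fan_out) plus one bias per output unit.
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

shallow = [784, 128, 10]            # one hidden layer (assumed sizes)
deep = [784, 256, 128, 64, 32, 10]  # four more hidden layers

print(mlp_param_count(shallow))     # 101,770 parameters
print(mlp_param_count(deep))        # 244,522 parameters

Roughly 2.4 times the parameters means correspondingly more computation per forward and backward pass, which is the complexity cost listed under the demerits of increasing layers.
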
In conclusion, the choice of network depth depends on the complexity of the task and the
available data. Balancing depth to avoid overfitting while capturing essential patterns is crucial for
effective neural network design.
