
Blida 1 University

M2 COMPUTER & NETWORK ENGINEERING

Advanced Topics in Computer Systems and Networks

Overview of Deep Learning

2023-2024
Neural Network Definition

◆ Hecht-Nielsen, a neural network researcher in the US, defines a neural network as "a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."

Based on the origin, features, and interpretations of the neural network, it can be simply defined as an information processing system designed to simulate the structure and functions of the human brain.

◆ Artificial neural network (neural network for short): refers to a network composed of artificial neurons. It abstracts and simplifies the human brain based on its microscopic structure and functions. It is an important way to simulate human intelligence and reflects some basic features of human brain functions, such as parallel information processing, learning, association, pattern classification, and memory.
Deep Learning?

Deep Learning

Machine Learning VS Deep Learning

o Manually designed features are often over-specified, incomplete, and take a long time to design and validate.
o Learned features are easy to adapt and fast to learn.
o Deep learning provides a very flexible, (almost?) universal, learnable framework for representing world, visual, and linguistic information.
o It can learn in both unsupervised and supervised settings.
o It enables effective end-to-end joint system learning.
o It can utilize large amounts of training data.

Quotes: Hype or Reality?

Deep learning Milestones

Artificial Neurone

Artificial Neurone Model

Activation functions

Artificial Neurone Model: Perceptron

Perceptron training

AN Learning

Artificial Neural Network (ANN)

➢ Input layer: receives input from the outside world.
➢ Output layer: responds with the result, reflecting how the network learned the task.
➢ Hidden layers: the main layers, where all the processing happens to transform the input into the output (a minimal sketch of this structure follows below).
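
To make those three roles concrete, here is a minimal sketch in Python/NumPy (not from the slides; the layer sizes and the sigmoid activation are illustrative assumptions):

    import numpy as np

    def sigmoid(z):
        # Squashes values into (0, 1); one common activation choice.
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # Illustrative sizes: 3 inputs, 4 hidden units, 2 outputs.
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input  -> hidden
    W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden -> output

    x = np.array([0.5, -1.2, 3.0])      # input layer: data from the outside world
    h = sigmoid(W1 @ x + b1)            # hidden layer: transforms the input
    y = sigmoid(W2 @ h + b2)            # output layer: the network's response
    print(y)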

Deep learning: Activation functions

Most deep networks nowadays use ReLU, max(0, x), for hidden layers, since it trains much faster, is more expressive than the logistic function, and mitigates the vanishing gradient problem.

Non-linearity is needed to learn complex (non-linear) representations of the data; otherwise the NN would just be a linear function.
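
As a small illustration (not from the slides), both activations can be written in a few lines of Python/NumPy; note how the sigmoid's gradient shrinks toward zero when it saturates, which is what ReLU helps avoid:

    import numpy as np

    def relu(x):
        # max(0, x): passes positive values through, zeroes out negatives.
        return np.maximum(0.0, x)

    def sigmoid(x):
        # Logistic function: saturates near 0 and 1 for large |x|.
        return 1.0 / (1.0 + np.exp(-x))

    x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
    print(relu(x))          # [0. 0. 0. 1. 5.]
    s = sigmoid(x)
    print(s * (1 - s))      # sigmoid's gradient: close to 0 at both extremes
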
ANN Learning

The network learns by generating an error signal that measures the difference between its predictions and the desired values, and then using this error signal to change the weights (or parameters) so that the predictions get more accurate.
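
A minimal illustration of such an error signal (illustrative only; the slides do not fix a particular cost function, so mean squared error is assumed here):

    import numpy as np

    def mse(prediction, target):
        # Mean squared error: one common way to compute the error signal.
        return np.mean((prediction - target) ** 2)

    prediction = np.array([0.8, 0.1])   # what the network currently outputs
    target     = np.array([1.0, 0.0])   # the desired values
    print(mse(prediction, target))      # 0.025 -- training tries to drive this down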

ANN Learning

Gradient descent finds a (local) minimum of the cost function (which measures the output error) and is used to adjust the weights.
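
A toy sketch of gradient descent on a single linear neuron (hypothetical example, assuming a squared-error cost and a hand-picked learning rate):

    import numpy as np

    # Toy data: the target function is y = 2 * x.
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = 2.0 * x

    w = 0.0                 # initial weight
    lr = 0.05               # learning rate (step size)

    for step in range(100):
        y_hat = w * x                        # forward pass
        grad = np.mean(2 * (y_hat - y) * x)  # dCost/dw for the squared error
        w -= lr * grad                       # step downhill on the cost surface

    print(w)                # approaches 2.0, the weight that minimises the cost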

Deep learning

◆ Deep learning generally involves a deep neural network, where the depth refers to the number
of layers of the neural network.
[Figure: a shallow neural network / ANN (perceptron) with an input layer, a hidden layer, and an output layer, compared with a deep neural network that has multiple hidden layers.]


• Deep: Hidden layers (cascading tiers) of processing; "deep" networks (5+ layers) versus "shallow" (1-2 layers), as sketched below.
• Learning: Algorithms "learn" from data by modeling features and updating probability weights assigned to feature nodes in testing how relevant specific features are in determining the general type of item.
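
To illustrate the "shallow" versus "deep" distinction (a sketch in PyTorch, which the slides do not prescribe; all sizes are arbitrary):

    import torch.nn as nn

    # "Shallow": a single hidden layer between input and output.
    shallow = nn.Sequential(
        nn.Linear(10, 16), nn.ReLU(),
        nn.Linear(16, 2),
    )

    # "Deep": several hidden layers stacked (cascading tiers of processing).
    deep = nn.Sequential(
        nn.Linear(10, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, 16), nn.ReLU(),
        nn.Linear(16, 2),
    )
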
Deep Learning

So, 1. what exactly is deep learning?

And, 2. why is it generally better than other methods on image, speech and certain other types of data?

The short answers:
1. "Deep Learning" means using a neural network with several layers of nodes between input and output.
2. The series of layers between input and output do feature identification and processing in a series of stages, just as our brains seem to.

Yes... but multilayer neural networks have been around for 25 years. What's actually new? [1]

Deep learning

There are good algorithms for learning the weights in networks with one hidden layer, but these algorithms are not good at learning the weights for networks with more hidden layers.

What's new is: algorithms for training many-layer networks. [1]

Deep learning: The new way to train multi-layer NNs

Train this layer first


then this layer
then this layer
then this layer
finally this layer
Deep learning

The new way to train multi-layer NNs…

Each of the (non-output) layers is trained to be an auto-encoder.

Basically, it is forced to learn good features that describe what comes from the previous layer. [1]

Deep learning: Deep Autoencoders

An auto-encoder is trained, with an absolutely standard weight-adjustment algorithm, to reproduce the input.

By making this happen with (many) fewer units than the inputs, this forces the 'hidden layer' units to become good feature detectors. [1]
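
A minimal sketch of such a bottleneck auto-encoder (PyTorch; the 784-to-32 sizes and the MSE reconstruction loss are illustrative assumptions, not taken from the slides):

    import torch
    import torch.nn as nn

    # Far fewer hidden units than inputs: the bottleneck.
    autoencoder = nn.Sequential(
        nn.Linear(784, 32), nn.ReLU(),   # hidden layer forced to learn good features
        nn.Linear(32, 784),              # tries to reproduce the original input
    )

    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
    x = torch.rand(64, 784)              # a batch of (fake) inputs

    for _ in range(10):
        x_hat = autoencoder(x)                   # reconstruction
        loss = nn.functional.mse_loss(x_hat, x)  # how well the input is reproduced
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(loss.item())
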
Deep learning: Deep Autoencoders

The intermediate layers are each trained to be auto-encoders (or similar).

Deep learning: Deep Autoencoders

The final layer is trained to predict the class based on the outputs from the previous layers.
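
Putting the last few slides together, here is a hedged sketch (PyTorch, illustrative sizes and fake data) of the overall recipe: each hidden layer is first trained as an auto-encoder on the output of the layer below it, and only the final layer is trained to predict the class.

    import torch
    import torch.nn as nn

    def pretrain_as_autoencoder(layer, data, epochs=5, lr=1e-3):
        # Train `layer` to reproduce its own input via a temporary decoder.
        decoder = nn.Linear(layer.out_features, layer.in_features)
        opt = torch.optim.Adam(list(layer.parameters()) + list(decoder.parameters()), lr=lr)
        for _ in range(epochs):
            recon = decoder(torch.relu(layer(data)))
            loss = nn.functional.mse_loss(recon, data)
            opt.zero_grad(); loss.backward(); opt.step()

    # Illustrative stack: 784 -> 256 -> 64, then a 10-class output layer.
    layer1, layer2, head = nn.Linear(784, 256), nn.Linear(256, 64), nn.Linear(64, 10)
    x = torch.rand(128, 784)                 # fake (unlabelled) inputs
    y = torch.randint(0, 10, (128,))         # fake class labels for the final stage

    pretrain_as_autoencoder(layer1, x)       # train this layer first...
    h1 = torch.relu(layer1(x)).detach()
    pretrain_as_autoencoder(layer2, h1)      # ...then this layer...
    h2 = torch.relu(layer2(h1)).detach()

    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(5):                       # ...finally the output layer,
        loss = nn.functional.cross_entropy(head(h2), y)   # trained to predict the class
        opt.zero_grad(); loss.backward(); opt.step()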

Deep learning: Deep Autoencoders

And that’s that


• That’s the basic idea
• There are many many types of deep
learning,
• different kinds of autoencoder, variations
on architectures and training algorithms,
etc…
• Very fast growing area …

What's hidden in the hidden layers?

Deep learning

Deep learning architectures

Deep learning: Deep Autoencoders

A deep autoencoder is composed of two symmetrical deep-belief networks. The encoding network learns to compress the input into a condensed vector (dimensionality reduction). The decoding network can be used to reconstruct the data.
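
A compact sketch of that symmetric encoder/decoder structure (PyTorch; plain feed-forward layers are used here instead of deep-belief networks, and the sizes are illustrative):

    import torch
    import torch.nn as nn

    # Encoder: progressively compresses the input to a condensed vector.
    encoder = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 64),  nn.ReLU(),
        nn.Linear(64, 16),                  # condensed 16-dimensional code
    )
    # Decoder: the mirror image, used to reconstruct the data.
    decoder = nn.Sequential(
        nn.Linear(16, 64),   nn.ReLU(),
        nn.Linear(64, 256),  nn.ReLU(),
        nn.Linear(256, 784),
    )

    x = torch.rand(8, 784)
    code = encoder(x)                       # dimensionality reduction
    x_hat = decoder(code)                   # reconstruction
    print(code.shape, x_hat.shape)          # [8, 16] and [8, 784]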

Deep learning: Convolutional Neural Nets (CNN)

➢ Convolutional Neural Networks learn a complex representation of visual data using vast amounts of data. They are inspired by the human visual system and learn multiple layers of transformations, which are applied on top of each other to extract a progressively more sophisticated representation of the input.
➢ Every layer of a CNN takes a 3D volume of numbers and outputs a 3D volume of numbers. For example, a 224x224x3 (RGB) image is transformed into a 1x1000 vector of probabilities, as sketched below.
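
A small sketch of that shape transformation (PyTorch; the particular layers are illustrative and much smaller than a real image classifier):

    import torch
    import torch.nn as nn

    # Each stage maps a 3D volume (channels x height x width) to another 3D volume.
    cnn = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # 3x224x224 -> 16x224x224
        nn.MaxPool2d(2),                                         # -> 16x112x112
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # -> 32x112x112
        nn.AdaptiveAvgPool2d(1),                                 # -> 32x1x1
        nn.Flatten(),
        nn.Linear(32, 1000),                                     # -> 1000 class scores
        nn.Softmax(dim=1),                                       # -> 1000 probabilities
    )

    image = torch.rand(1, 3, 224, 224)      # one 224x224 RGB image
    print(cnn(image).shape)                 # torch.Size([1, 1000])
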
Deep learning: Recurrent Neural Nets (RNN)

RNNs are general computers which can learn algorithms to map input sequences to output sequences (flexible-sized vectors). The output vector's contents are influenced by the entire history of inputs. Everything can be processed sequentially.

State-of-the-art results in time series prediction, adaptive robotics, handwriting recognition, image classification, speech recognition, stock market prediction, and other sequence learning problems.
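
A minimal sketch of sequence-to-sequence mapping with a plain RNN (PyTorch; the sequence length and feature sizes are illustrative):

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)
    readout = nn.Linear(32, 4)              # map each hidden state to an output vector

    x = torch.rand(1, 20, 8)                # one input sequence: 20 steps, 8 features each
    hidden_states, _ = rnn(x)               # each state depends on the whole history so far
    y = readout(hidden_states)              # one output vector per time step
    print(y.shape)                          # torch.Size([1, 20, 4])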

Deep learning: Long Short-Term Memory RNN (LSTM)

A Long Short-Term Memory (LSTM) network is a particular type of recurrent network that works slightly better in practice, owing to its more powerful update equation and some appealing backpropagation dynamics. LSTMs are general computers which can learn algorithms to map input sequences to output sequences. The LSTM units give the network memory cells with read, write and reset operations. During training, the network can learn when it should remember data and when it should throw it away.

LSTMs are well-suited to learn from experience to classify, process and predict time series when there are very long time lags of unknown size between important events.
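
The same interface with an LSTM looks like this (PyTorch sketch; the gating and memory-cell machinery lives inside nn.LSTM, and the sizes are illustrative):

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
    readout = nn.Linear(32, 4)

    x = torch.rand(1, 100, 8)               # a long sequence: 100 time steps
    outputs, (h_n, c_n) = lstm(x)           # c_n is the memory cell state that the gates
                                            # learn to write to, keep, or reset
    print(readout(outputs).shape)           # torch.Size([1, 100, 4])
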
Sources
[1] Dr. Hentabli Hamza, Deep Learning, Supervisor: Prof. Naomie Salim, 2019.

Thanks

