Deep Learning Unit 2
Figure: the ResNet-34 architecture.
RESNET
ResNet, short for Residual Network, is a deep
convolutional neural network architecture
introduced by Microsoft researchers in 2015.
It was designed to address the problem of vanishing
gradients in very deep networks by introducing skip
connections, or residual connections, which let
gradients propagate directly from later layers back
to earlier layers during backpropagation.
The basic building block of the ResNet architecture is
the residual block, which consists of two or three
convolutional layers and a skip connection.
The skip connection adds the input of the block to its
output, allowing the network to learn residual functions
rather than complete mappings.
By doing so, the network is able to preserve information
from earlier layers and avoid the degradation of
accuracy that can occur in very deep networks.
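The skip connection described above can be sketched in a few lines. This is a minimal NumPy illustration, not the actual ResNet implementation: dense layers stand in for convolutions, and the weight matrices `w1` and `w2` are placeholder parameters, but the structure F(x) + x is the same.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Two-layer residual block: the layers compute a residual F(x),
    and the skip connection adds the input x back to the output,
    so the block learns F(x) rather than a complete mapping."""
    out = relu(x @ w1)      # first layer + ReLU
    out = out @ w2          # second layer (activation comes after the add)
    out = out + x           # skip connection: add the block's input
    return relu(out)        # final ReLU applied to F(x) + x

# With identity weights, F(x) = x, so the block outputs
# relu(x + x) = 2x for non-negative inputs.
x = np.ones((1, 4))
I = np.eye(4)
print(residual_block(x, I, I))
```

Note that if the weights are driven toward zero, the block reduces to the identity function, which is why stacking many such blocks does not degrade accuracy the way stacking plain layers can.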
ResNet comes in various depths, with ResNet-18,
ResNet-34, ResNet-50, ResNet-101, and ResNet-152
being the most common variants. The number in each
name is the count of weight layers in the network.
ResNet-50, for example, has 50 weight layers:
49 convolutional layers and one fully connected layer.
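The depth numbers follow directly from the per-stage block counts in the original paper: each variant has one stem convolution, four stages of residual blocks (2 convolutions per basic block in ResNet-18/34, 3 per bottleneck block in ResNet-50/101/152), and one fully connected layer. A quick arithmetic check:

```python
def weight_layers(blocks_per_stage, convs_per_block):
    """Count weight layers: stem conv + residual-block convs + final FC."""
    return 1 + sum(blocks_per_stage) * convs_per_block + 1

print(weight_layers([2, 2, 2, 2], 2))   # ResNet-18 (basic blocks)
print(weight_layers([3, 4, 6, 3], 2))   # ResNet-34 (basic blocks)
print(weight_layers([3, 4, 6, 3], 3))   # ResNet-50 (bottleneck blocks)
print(weight_layers([3, 4, 23, 3], 3))  # ResNet-101
print(weight_layers([3, 8, 36, 3], 3))  # ResNet-152
```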
ResNet has been widely used in computer vision
applications and has achieved state-of-the-art
results on various benchmarks, winning the
ImageNet Large Scale Visual Recognition
Challenge (ILSVRC) in 2015.
Its success has inspired later architectures that
build on skip connections, such as DenseNet; the
slightly earlier Highway Networks used a related,
gated form of skip connection.