Introduction To Convolutional Neural Networks
Different types of neural networks are used for different purposes: for example, to predict a sequence of words we use Recurrent Neural Networks (more precisely, an LSTM), while for image classification we use Convolutional Neural Networks. Before looking at CNNs specifically, recall the basic structure of a plain neural network:
1. Input Layer: This is the layer in which we give input to our model. The number of neurons in this layer equals the total number of features in our data (the number of pixels in the case of an image).
2. Hidden Layers: The output of the input layer is then fed into the hidden layers. There can be many hidden layers, depending on our model and the size of the data. Each hidden layer can have a different number of neurons, generally greater than the number of features.
3. Output Layer: The output from the hidden layers is then fed into a logistic function such as sigmoid or softmax, which converts the raw score for each class into a probability.
Feeding the data through the model and computing the output of each layer in this way is called the feedforward pass. We then calculate the error using an error (loss) function; common choices are cross-entropy and squared error.
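As a rough illustration of this input → hidden → output flow, here is a minimal sketch in PyTorch (the layer sizes and data are made-up placeholders, not values from the text). Note that nn.CrossEntropyLoss applies the softmax internally, so the model only needs to produce raw class scores:

```python
import torch
import torch.nn as nn

num_features = 784   # e.g. a 28 x 28 image flattened into one vector (assumed size)
num_hidden = 1024    # hidden layer, generally wider than the input
num_classes = 10

model = nn.Sequential(
    nn.Linear(num_features, num_hidden),  # input layer -> hidden layer
    nn.ReLU(),                            # nonlinearity in the hidden layer
    nn.Linear(num_hidden, num_classes),   # hidden layer -> output layer (class scores)
)

x = torch.rand(32, num_features)          # a batch of 32 flattened "images"
logits = model(x)                         # feedforward pass
targets = torch.randint(0, num_classes, (32,))
loss = nn.CrossEntropyLoss()(logits, targets)  # cross-entropy error (softmax applied inside)
loss.backward()                           # gradients used to update the weights
```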
CNN architecture
A Convolutional Neural Network consists of multiple layers: the input layer, convolutional layers, pooling layers, and fully connected layers.
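A minimal sketch of this layer order in PyTorch (the channel counts and sizes are illustrative assumptions, not prescribed by the text):

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 12, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),                                   # activation
    nn.MaxPool2d(2, 2),                          # pooling layer
    nn.Flatten(),
    nn.Linear(12 * 16 * 16, 10),                 # fully connected layer -> 10 class scores
)

scores = cnn(torch.rand(1, 3, 32, 32))           # assumed 32 x 32 RGB input
print(scores.shape)                              # torch.Size([1, 10])
```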
Now imagine taking a small patch of an image and running a small neural network, called a filter or kernel, on it, with, say, K outputs represented vertically. Now slide that filter across the whole image; the result is another "image" with a different width, height, and depth. Instead of just the R, G, and B channels we now have more channels, but a smaller width and height. This operation is called convolution. If the patch size were the same as the image size, this would just be a regular neural network. Because the patch is small, we have far fewer weights.
Now let's look at the mathematics involved in the convolution process.
• Convolution layers consist of a set of learnable filters (or kernels) with a small width and height and the same depth as the input volume (3 if the input is a colour image).
• For example, if we run a convolution on an image of dimensions 34×34×3, the filters can have size a×a×3, where 'a' can be 3, 5, or 7, but small compared to the image dimensions.
• During the forward pass, we slide each filter across the whole input volume step by step; the step size is called the stride (commonly 1 or 2, and sometimes 3 or 4 for high-dimensional images), and at each position we compute the dot product between the kernel weights and the corresponding patch of the input volume.
• As we slide each filter we get a 2-D output per filter; stacking these together gives an output volume whose depth equals the number of filters. The network learns all of these filters during training (see the sketch below).
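The following NumPy sketch spells out this arithmetic with illustrative sizes (a 34×34×3 input, 3×3×3 filters, 12 filters, stride 1); it is a naive loop, not an efficient implementation:

```python
import numpy as np

H, W, C = 34, 34, 3        # input volume: height x width x depth
a = 3                      # filter size: a x a x 3
num_filters = 12
stride = 1

image = np.random.rand(H, W, C)
filters = np.random.rand(num_filters, a, a, C)   # learnable kernels (random here)

out_h = (H - a) // stride + 1
out_w = (W - a) // stride + 1
output = np.zeros((out_h, out_w, num_filters))

for k in range(num_filters):                     # one 2-D map per filter
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i*stride:i*stride+a, j*stride:j*stride+a, :]
            # dot product between the kernel weights and the input patch
            output[i, j, k] = np.sum(patch * filters[k])

print(output.shape)   # (32, 32, 12): depth equals the number of filters
```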
Types of layers:
Let's take an example by running a ConvNet on an image of dimensions 32 x 32 x 3.
• Input Layer: This is the layer in which we give input to our model. In a CNN, the input is generally an image or a sequence of images. This layer holds the raw image, with width 32, height 32, and depth 3.
• Convolutional Layers: This layer extracts features from the input. It applies a set of learnable filters, known as kernels, to the input images. The filters/kernels are small matrices, usually of shape 2×2, 3×3, or 5×5. Each filter slides over the input image and computes the dot product between the kernel weights and the corresponding input patch. The outputs of this layer are referred to as feature maps. If we use a total of 12 filters for this layer (with padding that preserves the spatial size), we get an output volume of dimensions 32 x 32 x 12.
• Activation Layer: By applying an activation function to the output of the preceding layer, activation layers add nonlinearity to the network. An element-wise activation function is applied to the output of the convolution layer; common choices are ReLU (max(0, x)), Tanh, and Leaky ReLU. The volume dimensions are unchanged, so the output volume is still 32 x 32 x 12.
• Pooling Layer: This layer is inserted periodically in the ConvNet. Its main function is to reduce the size of the volume, which speeds up computation, reduces memory use, and helps prevent overfitting. Two common types of pooling are max pooling and average pooling. If we use max pooling with 2 x 2 filters and stride 2, the resulting volume has dimensions 16 x 16 x 12 (traced in the sketch after this list).
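The shape bookkeeping in this example can be checked with a short PyTorch sketch; the 3×3 kernel and padding of 1 are assumptions chosen so that the convolution preserves the 32 x 32 spatial size, as in the example above:

```python
import torch
import torch.nn as nn

x = torch.rand(1, 3, 32, 32)                   # PyTorch uses (batch, channels, height, width)

conv = nn.Conv2d(in_channels=3, out_channels=12, kernel_size=3, padding=1)
act = nn.ReLU()
pool = nn.MaxPool2d(kernel_size=2, stride=2)

feat = conv(x)        # torch.Size([1, 12, 32, 32])  -> 32 x 32 x 12 feature maps
feat = act(feat)      # activation layer: shape unchanged
feat = pool(feat)     # torch.Size([1, 12, 16, 16])  -> 16 x 16 x 12 after max pooling
print(feat.shape)
```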