MLT Assignment 6
Regression is a technique used to predict the value of a response (dependent) variable from one or
more predictor (independent) variables, where the variables are numeric. There are various forms of
regression, such as linear, multiple, logistic, polynomial and non-parametric regression.
The probability of the event of interest is represented as a function of a linear combination of the
predictor variables.
The independent variables should not be correlated with each other (no multicollinearity should exist).
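As a rough illustration (assuming NumPy and made-up numeric data, not anything from the assignment itself), a simple linear regression can be fit by ordinary least squares:

```python
import numpy as np

# Hypothetical data: one numeric predictor x and a numeric response y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 6.2, 8.1, 9.9])  # roughly y = 2x

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares: solve for [intercept, slope].
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, slope:", coef)

# Predict the response for a new predictor value.
print("prediction at x = 6:", coef[0] + coef[1] * 6)
```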
- This involves the process of learning by example, where a system tries to induce a general rule from a
set of observed instances.
It also involves classification: assigning, to a particular input, the name of a class to which it belongs.
Classification is important to many problem-solving tasks.
Hardware Dependencies
Generally, deep learning depends on high-end machines, while traditional machine learning can run on
low-end machines. GPUs are therefore an integral part of deep learning, since deep learning models
perform a large number of matrix multiplication operations.
- Backpropagation is a supervised learning algorithm for training multi-layer perceptrons (artificial neural
networks).
Training a neural network means finding the set of weights for which the network's outputs match the
desired outputs as closely as possible.
The backpropagation algorithm looks for the minimum value of the error function in weight space using
a technique called the delta rule, or gradient descent. The weights that minimize the error function are
then considered to be a solution to the learning problem.
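A minimal sketch of this idea, assuming NumPy and a toy XOR dataset (the hidden-layer size, learning rate and epoch count are arbitrary choices for illustration, not values from the assignment):

```python
import numpy as np

# Toy problem (XOR) to illustrate backpropagation with gradient descent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error at the output layer.
    err = out - y

    # Backward pass (delta rule): propagate the error gradient layer by layer.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# After training, the outputs should be close to the XOR targets.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```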
7. What is a perceptron?
- A perceptron is a simple model of a biological neuron in an artificial neural network. Perceptron is also
the name of an early algorithm for supervised learning of binary classifiers.
The perceptron algorithm was designed to classify visual inputs, categorizing subjects into one of two
types and separating groups with a line. Classification is an important part of machine learning and
image processing. Machine learning algorithms find and classify patterns by many different means. The
perceptron algorithm classifies patterns and groups by finding the linear separation between different
objects and patterns that are received through numeric or visual input.
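A minimal sketch of the perceptron learning rule on hypothetical, linearly separable 2-D points (the data, learning rate and number of epochs are made up purely for illustration):

```python
import numpy as np

# Hypothetical data: points labelled +1 above the line x1 + x2 = 1, -1 below it.
X = np.array([[2.0, 2.0], [1.5, 1.0], [0.2, 0.1], [0.0, 0.5], [1.2, 1.5], [0.3, 0.2]])
y = np.array([1, 1, -1, -1, 1, -1])

w = np.zeros(2)
b = 0.0
lr = 0.1

# Perceptron learning rule: update the weights only on misclassified examples.
for epoch in range(20):
    for xi, yi in zip(X, y):
        if yi * (np.dot(w, xi) + b) <= 0:   # misclassified (or on the boundary)
            w += lr * yi * xi
            b += lr * yi

print("weights:", w, "bias:", b)
print("predictions:", np.sign(X @ w + b))   # the learned line separates the two groups
```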
9. What is PCA?
- The main idea of principal component analysis (PCA) is to reduce the dimensionality of a data set
consisting of many variables that are correlated with each other, either heavily or lightly, while retaining
as much of the variation present in the data set as possible. This is done by transforming the variables
into a new set of variables, known as the principal components (or simply, the PCs), which are
orthogonal and ordered so that the amount of variation they retain from the original variables decreases
as we move down the order. In this way, the first principal component retains the maximum variation
that was present in the original variables. The principal components are the eigenvectors of the
covariance matrix of the data.
Importantly, the dataset on which the PCA technique is to be used must be scaled, since the results are
sensitive to the relative scaling of the variables. In layman's terms, PCA is a method of summarizing
data. Imagine some wine bottles on a dining table, where each wine is described by attributes such as
colour, strength and age. Many of these attributes measure related properties, so redundancy will arise;
what PCA does in this case is construct a smaller set of new, uncorrelated characteristics that summarize
each wine. Intuitively, principal component analysis supplies the user with a lower-dimensional picture, a
projection or "shadow" of this object when viewed from its most informative viewpoint.
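A minimal sketch of this procedure, assuming NumPy and synthetic correlated data (the variable count and sample size are arbitrary illustrative choices): scale the variables, take the eigenvectors of the covariance matrix, order them by eigenvalue, and project onto the leading components.

```python
import numpy as np

# Synthetic data: three attributes, two of which measure nearly the same underlying property.
rng = np.random.default_rng(1)
base = rng.normal(size=(50, 1))
data = np.hstack([base + 0.1 * rng.normal(size=(50, 1)),
                  2 * base + 0.1 * rng.normal(size=(50, 1)),
                  rng.normal(size=(50, 1))])

# Scale first: centre each variable and divide by its standard deviation.
scaled = (data - data.mean(axis=0)) / data.std(axis=0)

# The principal components are the eigenvectors of the covariance matrix.
cov = np.cov(scaled, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Order the components by decreasing eigenvalue (decreasing retained variance).
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print("variance explained by each PC:", eigvals / eigvals.sum())

# Project onto the first two PCs: the lower-dimensional "shadow" of the data.
projected = scaled @ eigvecs[:, :2]
print(projected.shape)  # (50, 2)
```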
10. What is dimensionality reduction?
- In machine learning classification problems, there are often too many factors on the basis of which the
final classification is done. These factors are basically variables called features. The higher the number of
features, the harder it gets to visualize the training set and then work on it. Sometimes, most of these
features are correlated, and hence redundant. This is where dimensionality reduction algorithms come
into play. Dimensionality reduction is the process of reducing the number of random variables under
consideration, by obtaining a set of principal variables. It can be divided into feature selection and
feature extraction.
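A rough sketch contrasting the two, assuming NumPy and synthetic data (the correlation threshold and the number of retained components are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=100)   # feature 3 is nearly a copy of feature 0

# Feature selection: keep a subset of the original features,
# e.g. drop one feature from each highly correlated pair.
corr = np.corrcoef(X, rowvar=False)
to_drop = {j for i in range(corr.shape[0]) for j in range(i + 1, corr.shape[1])
           if abs(corr[i, j]) > 0.95}
X_selected = np.delete(X, list(to_drop), axis=1)
print("after selection:", X_selected.shape)   # (100, 4)

# Feature extraction: build new variables from all the original ones,
# e.g. project onto the top two principal directions found by SVD.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_extracted = Xc @ Vt[:2].T
print("after extraction:", X_extracted.shape)  # (100, 2)
```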