Chapter 2: Cost Functions
A cost function measures how well the model can estimate the relationship between input and output parameters. It therefore plays a crucial role in understanding how well your model captures the relationship between the input and the output.
In this topic, we will explain the cost function in machine learning, gradient descent, and the types of cost functions.
In machine learning, once we train our model, we want to see how well it is performing. Although there are various accuracy metrics that tell you how your model is performing, they do not give insights on how to improve it. So, we need a function that can find when the model is most accurate, by finding the sweet spot between an undertrained and an overtrained model.
In simple, "Cost function is a measure of how wrong the model is in estimating the
relationship between X(input) and Y(output) Parameter." A cost function is
sometimes also referred to as Loss function, and it can be estimated by iteratively
running the model to compare estimated predictions against the known values of Y.
The main aim of each ML model is to determine parameters or weights that can
minimize the cost function.
In the above image, the green dots are cats, and the yellow dots are dogs. Below are
the three possible solutions for this classification problem.
Among these solutions, all three classifiers have high accuracy, but the third is the best because it correctly classifies every data point. The reason is that its decision boundary lies in the middle of the two classes, neither too close to nor too far from either of them.
To get such results, we need a cost function: it calculates the difference between the actual values and the predicted values and measures how wrong our model was in its predictions. By minimizing the value of the cost function, we can get the optimal solution.
Gradient descent is one of the most common ways to minimize a cost function. It is an iterative process in which the model gradually converges towards a minimum value; once the model iterates beyond this point, further iterations produce little or no change in the loss. This point is known as convergence, and at this point the error is smallest and the cost function is optimized.
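As a rough illustration (the toy data, learning rate, and stopping threshold below are our own assumptions, not part of the original article), here is a minimal NumPy sketch of gradient descent fitting a single weight w of a linear model y ≈ w*x by minimizing the mean squared error:

```python
import numpy as np

# Toy data (illustrative): y is roughly 3 * x plus a little noise
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 5.9, 9.2, 11.8, 15.1])

w = 0.0               # initial weight
learning_rate = 0.01
prev_loss = float("inf")

for step in range(10_000):
    y_pred = w * x
    loss = np.mean((y - y_pred) ** 2)           # MSE cost
    grad = -2.0 * np.mean((y - y_pred) * x)     # dLoss/dw
    w -= learning_rate * grad                   # gradient descent update

    # Convergence: iterating further changes the loss very little
    if abs(prev_loss - loss) < 1e-9:
        break
    prev_loss = loss

print(f"converged after {step} steps, w = {w:.3f}, loss = {loss:.4f}")
```

When the change in loss between iterations becomes negligible, the loop stops: that is the convergence point described above.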
There are three commonly used regression cost functions, which are as follows:
a. Mean Error
In this type of cost function, the error is calculated for each training example, and then the mean of all the error values is taken.
The errors from the training data can be either negative or positive. When taking the mean, they can cancel each other out and produce a zero mean error even for a poor model, so this is not a recommended cost function (see the sketch below).
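A small sketch with made-up numbers (our own, for illustration) showing how positive and negative errors cancel out in the plain mean error:

```python
import numpy as np

y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([12.0, 18.0, 30.0])   # errors of -2, +2, and 0

mean_error = np.mean(y_true - y_pred)
print(mean_error)  # 0.0 -- the errors cancel, even though the model is not perfect
```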
b. Mean Squared Error (MSE)
Mean Squared Error is one of the most commonly used cost functions. It fixes the drawback of the mean error cost function because it squares the difference between the actual value and the predicted value; thanks to the squaring, negative and positive errors can no longer cancel out.
In MSE, each error is squared, which pushes the model to reduce even small deviations in its predictions compared to MAE. But if the dataset has outliers that generate large prediction errors, squaring those errors amplifies them many times over. Hence, we can say MSE is less robust to outliers.
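A hedged sketch (same made-up numbers as before) showing that squaring removes the cancellation problem and that a single outlier can dominate the MSE:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average of the squared differences
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([12.0, 18.0, 30.0])
print(mse(y_true, y_pred))              # ~2.67 -- errors no longer cancel

# One outlying prediction error dominates the cost once squared
y_pred_outlier = np.array([12.0, 18.0, 60.0])
print(mse(y_true, y_pred_outlier))      # ~302.67 -- MSE is less robust to outliers
```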
c. Mean Absolute Error (MAE)
Mean Absolute Error also overcomes the issue of the mean error cost function, by taking the absolute difference between the actual value and the predicted value.
The mean absolute error cost function is also known as the L1 loss. It is not strongly affected by noise or outliers, and hence gives better results if the dataset contains noise or outliers.
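A minimal sketch of MAE on the same invented numbers, showing that the outlier's contribution grows only linearly rather than quadratically:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error (L1 loss): average of the absolute differences
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([10.0, 20.0, 30.0])
y_pred_outlier = np.array([12.0, 18.0, 60.0])
print(mae(y_true, y_pred_outlier))   # ~11.33 -- the outlier hurts far less than in MSE
```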
One of the most commonly used loss functions for classification is cross-entropy loss.
The binary cross-entropy cost function is a special case of categorical cross-entropy where there are only two output classes, for example, classification between red and blue.
To better understand it, let's suppose there is only a single output variable Y, which is 1 for green points and 0 for red points.
For each green point (y = 1), the loss adds -log(p(y)), the negative log probability of it being green.
Conversely, for each red point (y = 0), it adds -log(1 - p(y)), the negative log probability of it being red.
1. Cross-entropy(D) = -y*log(p) when y = 1
2. Cross-entropy(D) = -(1-y)*log(1-p) when y = 0
The error in binary classification is calculated as the mean of the cross-entropy over all N training examples, which means:
Binary Cross-Entropy = -(1/N) * Σ [ y_i*log(p_i) + (1 - y_i)*log(1 - p_i) ]
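A hedged NumPy sketch of that mean (the labels and predicted probabilities are invented for illustration; the eps clipping is a common numerical safeguard, not part of the article):

```python
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    # Mean of -[y*log(p) + (1-y)*log(1-p)] over all N examples
    p = np.clip(p_pred, eps, 1.0 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))

y_true = np.array([1, 1, 0, 0])            # 1 = green, 0 = red
p_pred = np.array([0.9, 0.7, 0.2, 0.1])    # predicted probability of being green
print(binary_cross_entropy(y_true, p_pred))   # ~0.20, lower is better
```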
Categorical cross-entropy is designed for multi-class classification, where the target takes one of n possible class labels (0, 1, 2, ..., n-1).
For a perfect model the cross-entropy value is zero, so the goal is to minimize this score.
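A small sketch of categorical cross-entropy with one-hot targets (again, the example targets and probabilities are our own illustrative numbers):

```python
import numpy as np

def categorical_cross_entropy(y_true_onehot, p_pred, eps=1e-12):
    # Mean of -sum_k y_k * log(p_k) over all N examples
    p = np.clip(p_pred, eps, 1.0)
    return -np.mean(np.sum(y_true_onehot * np.log(p), axis=1))

# Three examples, three classes (one-hot targets)
y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1]])
p_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.2, 0.6]])
print(categorical_cross_entropy(y_true, p_pred))   # ~0.36; 0 would be a perfect model
```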
Reference:
https://www.javatpoint.com/cost-function-in-machine-learning