Machine Learning

Chapter 3
Regression Algorithms

Dr. Mahmoud Elsabagh


Contents
Chapter 1: Introduction to Machine Learning
Chapter 2: Supervised Learning Fundamentals
Chapter 3: Regression Algorithms
Chapter 4: Classification Algorithms
Chapter 5: Decision Trees and Ensemble Learning
Chapter 6: Unsupervised Learning Basics
Chapter 7: Neural Networks and Deep Learning
Chapter 8: Natural Language Processing (NLP) and Sequence Models
Chapter 9: Reinforcement Learning
Chapter 10: Model Evaluation and Tuning
Chapter 11: Model Deployment and Applications
Chapter 3: Regression Algorithms

Contents
1. Introduction to Regression
2. Types of Regression Algorithms
3. Linear Regression
4. Polynomial Regression
5. Ridge Regression (L2 Regularization)
6. Lasso Regression (L1 Regularization)
7. Elastic Net Regression
8. Logistic Regression
9. Support Vector Regression (SVR)
10. Decision Tree Regression
11. Random Forest Regression
12. Bayesian Regression
13. Applications of Regression Algorithms
1. Introduction to Regression

➜ Definition:
Regression is a type of supervised learning task where the goal is to predict a continuous
numerical value based on input features. It models the relationship between a dependent
variable (also called the target or outcome) and one or more independent variables (features
or predictors).

➜ Use Cases:
Regression is commonly used for problems where you want to predict continuous values,
such as house prices, stock prices, or temperature.
2. Types of Regression Algorithms

➜ Linear Regression
➜ Polynomial Regression
➜ Ridge Regression (L2 Regularization)
➜ Lasso Regression (L1 Regularization)
➜ Elastic Net Regression
➜ Logistic Regression (used for classification)
➜ Support Vector Regression (SVR)
➜ Decision Tree Regression
➜ Random Forest Regression
➜ Bayesian Regression
3. Linear Regression
➜ 3.1 Overview
➜ Concept:
Linear regression models the relationship between a dependent variable and one or
more independent variables using a straight line. It assumes a linear relationship
between the input features and the output.

Mathematical Formula:

y = w_1 x_1 + w_2 x_2 + \dots + w_n x_n + b

where:
• y is the predicted outcome (dependent variable).
• x_1, ..., x_n are the input features (independent variables).
• w_1, w_2, ..., w_n are the weights (coefficients).
• b is the intercept (bias term).


3. Linear Regression
➜ Ordinary Least Squares (OLS):
The most common way to train a linear regression model is by minimizing the sum of
squared errors between the actual and predicted values:
MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

where y_i is the true value and \hat{y}_i is the predicted value.
➜ 3.2 Assumptions of Linear Regression:


Linearity: The relationship between the independent and dependent variables is linear.
Independence: The observations in the dataset are independent of each other.
Homoscedasticity: The variance of the residuals (errors) is constant across all levels of
the independent variables.
Normality of Errors: The residuals (errors) are normally distributed.
3. Linear Regression
➜ 3.3 Evaluation Metrics for Linear Regression
- Mean Squared Error (MSE): The average of the squared differences between the
actual and predicted values.
- R-squared (R²): Measures how well the independent variables explain the variance in
the dependent variable:

R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}
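The following is a minimal illustrative sketch (not part of the original slides) of fitting ordinary least squares with scikit-learn and reporting MSE and R²; the synthetic dataset and all parameter values are assumptions chosen for demonstration.

# Minimal sketch: ordinary least squares with scikit-learn on assumed synthetic data.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic data (assumption for illustration): 200 samples, 3 features.
X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression()           # fits the weights w and intercept b via OLS
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("weights:", model.coef_, "intercept:", model.intercept_)
print("MSE:", mean_squared_error(y_test, y_pred))
print("R^2:", r2_score(y_test, y_pred))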
4. Polynomial Regression
➜ Concept:
Polynomial regression is an extension of linear regression where the relationship
between the independent and dependent variables is modeled as a polynomial of
degree n.

➜ Mathematical Formula:
y = w_1 x + w_2 x^2 + \dots + w_n x^n + b
Here, the relationship between the input feature x and the output y can take non-linear forms, but the model is still linear in terms of its parameters.

➜ Use Case:
It is used when the data shows a non-linear trend that can be approximated by a
polynomial. However, polynomial regression can easily overfit the data if the degree of
the polynomial is too high.
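A minimal sketch of polynomial regression, assuming a scikit-learn pipeline that expands the input feature into polynomial terms before fitting a linear model; the synthetic data, degree, and names are illustrative assumptions, not from the slides.

# Minimal sketch (assumed example): degree-3 polynomial regression via a pipeline.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = 0.5 * x[:, 0] ** 3 - x[:, 0] + rng.normal(scale=1.0, size=200)  # non-linear trend

# PolynomialFeatures expands x into [x, x^2, x^3]; the model stays linear in its parameters.
poly_model = make_pipeline(PolynomialFeatures(degree=3, include_bias=False), LinearRegression())
poly_model.fit(x, y)
print(poly_model.predict([[2.0]]))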
5. Ridge Regression (L2 Regularization)

➜ Concept:
Ridge regression is a form of linear regression that adds a penalty term to the cost
function, penalizing large coefficients. This helps prevent overfitting, especially when there
is multicollinearity (high correlation between input features).
• Mathematical Formula:

Cost Function = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{n} w_j^2

The term \lambda \sum_{j} w_j^2 is the regularization term, where \lambda controls the strength of regularization.
➜ Effect:
It forces the model to reduce the magnitude of the weights, making the model less complex
and less prone to overfitting.
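Below is a minimal sketch (not from the slides) comparing plain OLS against Ridge regression in scikit-learn, where the alpha parameter plays the role of \lambda; the synthetic data and chosen alpha are assumptions for illustration.

# Minimal sketch (assumed example): Ridge regression shrinks the weights.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge

# Assumed synthetic data: 20 features, only 5 of which are informative.
X, y = make_regression(n_samples=100, n_features=20, n_informative=5, noise=5.0, random_state=0)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)   # L2 penalty shrinks the weights toward zero

print("largest |w| (OLS):  ", abs(ols.coef_).max())
print("largest |w| (Ridge):", abs(ridge.coef_).max())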
5. Ridge Regression (L2 Regularization)

[Figure: Ridge regression illustration — fitted lines of the outcome versus High School GPA, showing how the L2 penalty reduces the steepness of the slope. Ridge cost: \sum_i (y^{(i)} - \hat{y}^{(i)})^2 + \lambda \cdot (\text{slope})^2]
6. Lasso Regression (L1 Regularization)

➜ Concept:
Lasso regression is similar to Ridge regression but uses an L1 penalty instead of an L2
penalty. It can set some coefficients to exactly zero, effectively performing feature
selection.
• Mathematical Formula:

Cost Function — — [ ih Vif + A u' j


n
i I j 1

➜ Key Feature:
Lasso is particularly useful when you expect only a subset of the features to be relevant,
as it can automatically eliminate irrelevant features by shrinking their coefficients to
zero.
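A minimal sketch (not part of the original slides) showing Lasso's feature-selection effect with scikit-learn; the synthetic data and alpha value are assumptions for demonstration.

# Minimal sketch (assumed example): Lasso sets some coefficients exactly to zero.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Assumed synthetic data: 20 features, only 5 informative.
X, y = make_regression(n_samples=100, n_features=20, n_informative=5, noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)    # L1 penalty; larger alpha -> more zero weights
print("non-zero coefficients:", (lasso.coef_ != 0).sum(), "of", lasso.coef_.size)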
6. Lasso Regression (L1 Regularization)

[Figure: Lasso regression illustration. Lasso cost: \sum_i (y^{(i)} - \hat{y}^{(i)})^2 + \lambda \cdot |\text{slope}|, for a model y = m_1 x_1 + m_2 x_2 + m_3 x_3 + \dots + c]
7. Elastic Net Regression

➜ Concept:
Elastic Net is a hybrid of Ridge and Lasso regression. It combines both L1 and L2
regularization to handle cases where there are multiple correlated features.
• Mathematical Formula:

Cost Function = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \lambda_1 \sum_{j=1}^{n} |w_j| + \lambda_2 \sum_{j=1}^{n} w_j^2

It takes advantage of both L1 (feature selection) and L2 (shrinkage) regularization methods.

➜ Use Case:
Elastic Net is effective when dealing with datasets that have many features, some of
which may be correlated.
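A minimal sketch (not from the slides) of Elastic Net in scikit-learn, where alpha scales the overall penalty and l1_ratio balances the L1 and L2 terms; the data and hyperparameter values are assumptions for illustration.

# Minimal sketch (assumed example): Elastic Net mixes L1 and L2 penalties.
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=100, n_features=50, n_informative=10, noise=5.0, random_state=0)

# alpha scales the overall penalty; l1_ratio balances L1 (=1.0) versus L2 (=0.0).
enet = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)
print("non-zero coefficients:", (enet.coef_ != 0).sum())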
8. Logistic Regression

➜ Concept:
Logistic regression is used for binary classification rather than regression. It models
the probability that a given input belongs to a particular class, often using a sigmoid
function.

• Mathematical Formula:

P(y = 1) = \frac{1}{1 + e^{-z}}

where z = w_1 x_1 + w_2 x_2 + \dots + w_n x_n + b.

➜ Key Use Case:


Logistic regression is commonly used for problems where the target variable is
binary, such as predicting whether an email is spam or not (0 or 1).
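Below is a minimal sketch (not part of the original slides) of logistic regression for binary classification with scikit-learn; the synthetic dataset is an assumption chosen only to make the example runnable.

# Minimal sketch (assumed example): logistic regression for binary classification.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

clf = LogisticRegression().fit(X, y)
print("P(y=1) for first sample:", clf.predict_proba(X[:1])[0, 1])  # sigmoid of z = w.x + b
print("predicted class:", clf.predict(X[:1])[0])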
9. Support Vector Regression (SVR)

➜ Concept:
Support Vector Regression (SVR) extends the principles of Support Vector
Machines (SVM) to regression problems. Instead of trying to classify data points,
SVR aims to find a line that fits the data within a certain margin of error.

➜ Key Idea:
SVR tries to find a function that deviates by at most ε from the actual
target values for all training data, while at the same time keeping the model as simple
as possible.
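A minimal sketch (not from the slides) of ε-insensitive Support Vector Regression in scikit-learn; the synthetic data, kernel, and the values of C and epsilon are assumptions for illustration.

# Minimal sketch (assumed example): epsilon-insensitive Support Vector Regression.
from sklearn.datasets import make_regression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)

# Errors smaller than epsilon are ignored; C trades model flatness against margin violations.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
svr.fit(X, y)
print(svr.predict(X[:3]))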
10. Decision Tree Regression

➜ Concept:
Decision tree regression works by recursively splitting the data into smaller and
smaller subsets based on the input features. The goal is to create a tree where each
leaf node represents a predicted value for the dependent variable.

➜ Advantages:
○ Non-linear relationships between input and output variables can be captured.
○ Simple to interpret.

➜ Disadvantages:
○ Prone to overfitting, especially with deep trees.
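A minimal sketch (not part of the slides) of decision tree regression with scikit-learn; the synthetic data and the max_depth value are assumptions used to illustrate how depth is typically limited to curb overfitting.

# Minimal sketch (assumed example): decision tree regression with limited depth.
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)

# max_depth caps how many recursive splits are allowed, which limits overfitting.
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
print("leaf predictions for first 3 samples:", tree.predict(X[:3]))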
11. Random Forest Regression

➜ Concept:
Random Forest regression is an ensemble learning method that builds multiple
decision trees and averages their predictions. It reduces overfitting by using a
combination of multiple trees trained on different random subsets of the data.

➜ Key Features:
○ Reduces the variance of predictions by averaging multiple decision trees.
○ Robust to noise and outliers.
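Below is a minimal sketch (not from the slides) of random forest regression in scikit-learn; the synthetic data and number of trees are assumptions for demonstration.

# Minimal sketch (assumed example): a random forest averages many trees' predictions.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# 100 trees, each trained on a bootstrap sample with random feature subsets per split.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict(X[:3]))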
12. Bayesian Regression

➜ Concept:
Bayesian regression incorporates prior beliefs or knowledge about the parameters
into the regression model and uses Bayesian inference to update these beliefs as
new data is observed.

➜ Key Features:
○ Provides a probabilistic approach to regression.
○ Regularization is naturally built into the model through the prior distribution.
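A minimal sketch (not part of the original slides) using scikit-learn's BayesianRidge as one concrete Bayesian regression model; the synthetic data is an assumption, and the example shows the probabilistic output (predictive mean and standard deviation).

# Minimal sketch (assumed example): Bayesian ridge regression with predictive uncertainty.
from sklearn.datasets import make_regression
from sklearn.linear_model import BayesianRidge

X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)

bayes = BayesianRidge().fit(X, y)                    # priors on the weights act as regularization
mean, std = bayes.predict(X[:3], return_std=True)    # posterior predictive mean and std
print("predictions:", mean)
print("uncertainty (std):", std)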
13. Applications of Regression Algorithms
➜ Predicting Continuous Outcomes:
- Predicting House Prices
- Stock Price Prediction
- Sales Forecasting
- Medical Data Analysis (e.g., predicting patient outcomes)
- Energy Consumption Prediction
- Temperature Forecasting
➜ Data Imputation:
Regression algorithms are often used to fill in missing values in datasets by
predicting the missing value based on other features.
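A minimal sketch (not from the slides) of regression-based imputation using scikit-learn's IterativeImputer, which models each feature with missing values as a regression on the other features; the tiny dataset is an assumption for demonstration.

# Minimal sketch (assumed example): regression-based imputation of missing values.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer

# Small assumed dataset with one missing entry (np.nan).
X = np.array([[1.0, 2.0], [2.0, 4.1], [3.0, np.nan], [4.0, 7.9]])

# Each feature with missing values is predicted from the remaining features.
imputer = IterativeImputer(random_state=0)
print(imputer.fit_transform(X))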
Thanks!
Any
questions?
