Regression Analysis

"Regression analysis is the hydrogen bomb of the statistics arsenal."
- Charles Wheelan, Naked Statistics: Stripping the Dread from the Data

Regression analysis is a supervised learning method for predicting continuous variables. The difference between classification and regression analysis is that regression methods are used to predict quantitative or continuous variables, unlike the categorical variables or labels predicted in classification. Regression is used to model linear or non-linear relationships among the variables of a given dataset. This chapter gives an introduction to regression and its various types.

Learning Objectives
• Understand the basics of regression analysis
• Introduce the concepts of correlation and causation
• Learn about linear regression and its validation techniques

5.1 INTRODUCTION TO REGRESSION

Regression analysis is the premier method of supervised learning, and one of the most popular and oldest supervised learning techniques. Given a training dataset $D$ containing $N$ training points $(x_i, y_i)$, where $i = 1, \ldots, N$, regression analysis is used to model the relationship between one or more independent variables $x$ and a dependent variable $y$. The relationship between the dependent and independent variables can be represented as a function:

$$y = f(x) \tag{5.1}$$

Here, the feature variable $x$ is also known as an explanatory variable, a predictor variable, an independent variable, a covariate, or a domain point. $y$ is the dependent variable; dependent variables are also called labels, target variables, or response variables. Regression analysis determines the change in the response variable when one explanatory variable is varied while all other parameters are kept constant. This is used to determine the relationship that each explanatory variable exhibits with the response. Thus, regression analysis is used for prediction and forecasting.

Regression is used to predict continuous or quantitative variables such as price and revenue. The primary concern of regression analysis is therefore to find answers to questions such as:
1. What is the relationship between the variables?
2. What is the strength of the relationship?
3. What is the nature of the relationship, linear or non-linear?
4. What is the relevance of each attribute?
5. What is the contribution of each attribute?

Some of the applications of regression include predicting:
1. Sales of a good or service
2. Value of bonds in portfolio management
3. Premiums in insurance
4. Yield of crops in agriculture
5. Prices of real estate

5.2 INTRODUCTION TO LINEARITY, CORRELATION, AND CAUSATION

The quality of regression analysis is determined by factors such as correlation and causation.

Regression and Correlation

Correlation between two variables can be assessed effectively using a scatter plot, which is a 2D graph plotting the explanatory variable against the response variable. The x-axis of the scatter plot carries the independent (input or predictor) variable and the y-axis carries the dependent (output or predicted) variable. The scatter plot is useful in exploring data. The Pearson correlation coefficient, denoted by $r$, is the most common test for determining whether there is an association between two variables. Correlation is discussed in Chapter 2 of this book.

Positive, negative, and random correlations are illustrated in Figure 5.1. In positive correlation, a change in one variable is associated with a change in the other variable in the same direction. In negative correlation, the relationship between the variables is reciprocal, while in random correlation no relationship exists between the variables.

[Figure 5.1: Examples of (a) Positive Correlation (b) Negative Correlation (c) Random Points with No Correlation]
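As a concrete illustration, the minimal sketch below computes Pearson's $r$ for three small synthetic datasets, one for each case in Figure 5.1. The data values are invented for illustration and are not from the chapter; NumPy's corrcoef is used for the computation.

```python
import numpy as np

# Illustrative data (not from the chapter): three kinds of relationship
x = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
y_pos = 2.0 * x + 1.0                # positive correlation
y_neg = -1.5 * x + 10.0              # negative correlation
rng = np.random.default_rng(0)
y_rand = rng.uniform(0, 10, x.size)  # no systematic relationship

for name, y in [("positive", y_pos), ("negative", y_neg), ("random", y_rand)]:
    r = np.corrcoef(x, y)[0, 1]      # Pearson correlation coefficient r
    print(f"{name:>8}: r = {r:+.3f}")
```

For the noise-free positive and negative cases, $r$ comes out as exactly $+1$ and $-1$; the random case gives a value near zero.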
While correlation is about the relationship between variables, say $x$ and $y$, regression is about predicting one variable given the other.

Regression and Causation

Causation is about a causal relationship between variables, say $x$ and $y$: knowing whether $x$ causes $y$ to happen or vice versa. "$x$ causes $y$" is often denoted as $x$ implies $y$. Correlation and regression relationships are not the same as causation relationships. For example, the correlation between economic background and marks scored does not imply that economic background causes high marks. Similarly, the relationship between higher sales of cool drinks and a rise in temperature is not a causal relation; even though high temperature contributes to cool drink sales, the sales depend on other factors too.

Linearity and Non-linearity Relationships

A linear relationship between variables means that the relationship between the dependent and independent variables can be visualized as a straight line. A line of the form $y = ax + b$ can be fitted to the data points to indicate the relationship between $x$ and $y$. By linearity, it is meant that as one variable increases, the corresponding variable also increases in a linear manner. A linear relationship is shown in Figure 5.2(a). Non-linear relationships exist in functions such as the exponential function and the power function; these are relationships between the dependent and independent variables that cannot be fitted by a straight line, as shown in Figures 5.2(b) and 5.2(c).

[Figure 5.2: (a) Example of a Linear Relationship of the Form y = ax + b (b) Example of a Non-linear (Exponential) Relationship (c) Example of a Non-linear (Power) Relationship]

Types of Regression Methods

The classification of regression methods is shown in Figure 5.3: regression methods divide into linear regression (single and multiple linear regression), non-linear regression (polynomial regression), and logistic regression. A short sketch contrasting linear and polynomial fits follows this list.

[Figure 5.3: Types of Regression Methods]

• Linear Regression: A type of regression in which a line is fitted to the given data to find the linear relationship between one independent variable and one dependent variable.
• Multiple Regression: A type of regression in which a line is fitted to find the linear relationship between two or more independent variables and one dependent variable.
• Polynomial Regression: A type of non-linear regression in which an Nth-degree polynomial is used to model the relationship between one independent variable and one dependent variable. Polynomial multiple regression is used to model two or more independent variables and one dependent variable.
• Logistic Regression: Used for predicting categorical variables involving one or more independent variables and one dependent variable. It is also known as a binary classifier.
• Lasso and Ridge Regression: Special variants of regression in which regularization methods are used to limit the number and size of the coefficients of the independent variables.
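To make the distinction between linear and polynomial regression concrete, the sketch below fits a degree-1 and a degree-2 model to the same small dataset using NumPy's polyfit. The data values and the choice of degrees are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Illustrative data with gentle curvature (not from the chapter)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.1, 2.3, 4.2, 6.9, 10.2, 14.1])

# Degree-1 fit: simple linear regression, y = a1*x + a0
a1, a0 = np.polyfit(x, y, deg=1)

# Degree-2 fit: polynomial regression, y = b2*x^2 + b1*x + b0
b2, b1, b0 = np.polyfit(x, y, deg=2)

for name, pred in [("linear", a1 * x + a0),
                   ("quadratic", b2 * x**2 + b1 * x + b0)]:
    sse = np.sum((y - pred) ** 2)   # sum of squared residuals
    print(f"{name:>9}: SSE = {sse:.3f}")
```

On curved data like this, the quadratic fit yields a visibly smaller sum of squared residuals than the straight line, which is exactly the case polynomial regression is meant to handle.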
Limitations of Regression Methods

1. Outliers: Outliers are abnormal data points. They can bias the outcome of the regression model, as outliers pull the regression line towards themselves.
2. Number of cases: The ratio of samples to independent variables should be at least 20:1, that is, at least 20 samples for every explanatory variable; at least five samples per variable are required in extreme cases.
3. Missing data: Missing data in the training set can make the model unfit for the sampled data.
4. Multicollinearity: If explanatory variables are highly correlated (0.9 and above), the regression is vulnerable to bias; singularity means a perfect correlation of 1. The remedy is to remove one of the highly correlated explanatory variables; if there is a tie, the tolerance (1 − R²) is used to decide which variable to eliminate.

5.3 INTRODUCTION TO LINEAR REGRESSION

In its simplest form, the linear regression model can be created by fitting a line through the scattered data points. The line is of the form given in Eq. (5.2):

$$y = a_0 + a_1 x + e \tag{5.2}$$

Here, $a_0$ is the intercept, which represents the bias, and $a_1$ represents the slope of the line. These are called the regression coefficients; $e$ is the error in prediction.

The assumptions of linear regression are as follows:
1. The observations ($y$) are random and mutually independent.
2. The difference between the predicted and true values is called the error. The errors are also mutually independent, with the same distribution, such as a normal distribution with zero mean and constant variance.
3. The distribution of the error term is independent of the joint distribution of the explanatory variables.
4. The unknown parameters of the regression model are constants.

The idea of linear regression is based on the Ordinary Least Squares (OLS) approach. In this method, the data points are modelled using a straight line; an arbitrarily drawn line is not an optimal line. In Figure 5.4, three data points and their errors ($e_1$, $e_2$, $e_3$) are shown. The vertical distance between each point and the line predicted by the approximate line equation $y = a_0 + a_1 x$ is called an error. The individual errors are added to compute the total error of the predicted line; this is called the sum of residuals. The squares of the individual errors can also be computed and added to give the sum of squared errors. The line with the lowest sum of squared errors is called the line of best fit.

[Figure 5.4: Data Points and their Errors]

In other words, OLS is an optimization technique in which the difference between the data points and the line is minimized.
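The sketch below illustrates this idea numerically: it evaluates the sum of squared errors for a few candidate lines over a tiny dataset, showing that some lines fit much better than others. Both the data and the candidate coefficients are invented for illustration; OLS is the procedure that finds the best such pair systematically.

```python
import numpy as np

# Tiny illustrative dataset (not from the chapter)
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 3.9, 6.2])

# Candidate lines (a0 = intercept, a1 = slope); OLS seeks the pair
# that minimizes the sum of squared errors over all possible lines.
candidates = [(0.0, 1.0), (0.0, 2.0), (0.1, 2.0), (1.0, 1.0)]

for a0, a1 in candidates:
    e = y - (a0 + a1 * x)    # residuals e_i = y_i - (a0 + a1*x_i)
    sse = np.sum(e ** 2)     # sum of squared errors for this line
    print(f"a0={a0:.1f}, a1={a1:.1f}: SSE = {sse:.3f}")
```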
Mathematically, based on Eq. (5.2), the line equations for the points $(x_1, x_2, \ldots, x_n)$ are:

$$\begin{aligned} y_1 &= (a_0 + a_1 x_1) + e_1 \\ y_2 &= (a_0 + a_1 x_2) + e_2 \\ &\;\vdots \\ y_n &= (a_0 + a_1 x_n) + e_n \end{aligned} \tag{5.3}$$

In general, the error is given as:

$$e_i = y_i - (a_0 + a_1 x_i) \tag{5.4}$$

This can be extended into the set of equations shown in Eq. (5.3). Here, the terms $(e_1, e_2, \ldots, e_n)$ are the errors associated with the data points and denote the difference between the true value of the observation and the point on the line. These are also called residuals. The residuals can be positive, negative, or zero. A regression line is the line of best fit for which the sum of the squares of the residuals is minimum.

The minimization can be done as minimization of the sum of the individual errors:

$$E = \sum_{i} e_i = \sum_{i} \left( y_i - (a_0 + a_1 x_i) \right) \tag{5.5}$$

or as minimization of the sum of the absolute values of the individual errors:

$$E = \sum_{i} |e_i| = \sum_{i} \left| y_i - (a_0 + a_1 x_i) \right| \tag{5.6}$$

or as minimization of the sum of the squares of the individual errors:

$$E = \sum_{i} e_i^2 = \sum_{i} \left( y_i - (a_0 + a_1 x_i) \right)^2 \tag{5.7}$$

The sum of the squares of the individual errors is often preferred, as the individual errors (positive and negative) do not cancel out and are always positive, and the sum of squares shows a large increase even for a small change in the error. Therefore, it is preferred for linear regression, which is modelled as minimization of the following function:

$$J(a_0, a_1) = \sum_{i} (y_i - \hat{y}_i)^2 = \sum_{i} \left( y_i - (a_0 + a_1 x_i) \right)^2 \tag{5.8}$$

Here, $J(a_0, a_1)$ is the criterion function of the parameters $a_0$ and $a_1$, which needs to be minimized. This is done by differentiating it with respect to the parameters and setting the derivatives to zero, which yields the coefficient values. The estimate of $a_1$ is given as:

$$a_1 = \frac{\overline{xy} - \bar{x}\,\bar{y}}{\overline{x^2} - (\bar{x})^2} \tag{5.9}$$

and the value of $a_0$ is given as:

$$a_0 = \bar{y} - a_1 \bar{x} \tag{5.10}$$

Let us consider a simple problem to illustrate the usage of the above concepts.

Example 5.1: Consider the five weeks' sales data (in thousands) shown in Table 5.1. Apply the linear regression technique to predict the sales of the 7th and 12th weeks.

Table 5.1: Sample Data

| Week ($x_i$) | Sales in Thousands ($y_i$) |
|---|---|
| 1 | 1.2 |
| 2 | 1.8 |
| 3 | 2.6 |
| 4 | 3.2 |
| 5 | 3.8 |

Solution: Here, there are five samples, so $i$ ranges from 1 to 5. The computation table is shown below (Table 5.2).

Table 5.2: Computation Table

| $x_i$ | $y_i$ | $x_i^2$ | $x_i \times y_i$ |
|---|---|---|---|
| 1 | 1.2 | 1 | 1.2 |
| 2 | 1.8 | 4 | 3.6 |
| 3 | 2.6 | 9 | 7.8 |
| 4 | 3.2 | 16 | 12.8 |
| 5 | 3.8 | 25 | 19 |
| Sum = 15 | Sum = 12.6 | Sum = 55 | Sum = 44.4 |

The averages are $\bar{x} = 15/5 = 3$, $\bar{y} = 12.6/5 = 2.52$, $\overline{x^2} = 55/5 = 11$, and $\overline{xy} = 44.4/5 = 8.88$.

Let us compute the slope and intercept now. Using Eq. (5.9):

$$a_1 = \frac{8.88 - 3 \times 2.52}{11 - 3^2} = \frac{1.32}{2} = 0.66$$

and using Eq. (5.10):

$$a_0 = 2.52 - 0.66 \times 3 = 0.54$$

Modelling the relationship as $y = a_0 + a_1 x$, the fitted line for the above data is $y = 0.54 + 0.66x$. The fitted line is shown in Figure 5.5.

[Figure 5.5: Linear Regression Model Constructed (regression line y = 0.66x + 0.54)]

The predicted 7th week sales would be (when $x = 7$) $y = 0.54 + 0.66 \times 7 = 5.16$, and for the 12th week, $y = 0.54 + 0.66 \times 12 = 8.46$. All sales are in thousands.
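The sketch below reproduces Example 5.1 with NumPy, computing the slope and intercept from Eqs. (5.9) and (5.10) and then predicting the sales for weeks 7 and 12. Only the data from Table 5.1 is used; it should print $a_1 = 0.66$, $a_0 = 0.54$, and predictions 5.16 and 8.46.

```python
import numpy as np

# Sales data from Table 5.1 (in thousands)
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([1.2, 1.8, 2.6, 3.2, 3.8])

# Eq. (5.9): slope from the averages of x, y, x^2, and x*y
a1 = (np.mean(x * y) - np.mean(x) * np.mean(y)) / (np.mean(x**2) - np.mean(x) ** 2)
# Eq. (5.10): intercept
a0 = np.mean(y) - a1 * np.mean(x)

print(f"slope a1 = {a1:.2f}, intercept a0 = {a0:.2f}")   # 0.66, 0.54
for week in (7, 12):
    print(f"week {week}: predicted sales = {a0 + a1 * week:.2f} thousand")
```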
Linear Regression in Matrix Form

Matrix notation can be used for representing the values of the independent and dependent variables; this is illustrated through Example 5.2. Eq. (5.3) can be written in matrix form as:

$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} + \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix} \tag{5.11}$$

This can be written as $Y = Xa + e$, where $X$ is an $n \times 2$ matrix, $Y$ is an $n \times 1$ vector, $a$ is a $2 \times 1$ column vector, and $e$ is an $n \times 1$ column vector.

Example 5.2: Find the linear regression of the week and product sales (in thousands) data given in Table 5.3. Use linear regression in matrix form.

Table 5.3: Sample Data

| Week ($x_i$) | Product Sales in Thousands ($y_i$) |
|---|---|
| 1 | 1 |
| 2 | 3 |
| 3 | 4 |
| 4 | 8 |

Solution: Here, the independent variable is $x^T = (1\;\; 2\;\; 3\;\; 4)$ and the dependent variable is $y^T = (1\;\; 3\;\; 4\;\; 8)$.

The data can be given in matrix form, with the first column used for setting the bias:

$$X = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \end{pmatrix}, \qquad Y = \begin{pmatrix} 1 \\ 3 \\ 4 \\ 8 \end{pmatrix}$$

The regression coefficients are given as $a = (X^T X)^{-1} X^T Y$. The computation of this equation proceeds step by step:

1. Computation of $X^T X$:
$$X^T X = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \end{pmatrix} = \begin{pmatrix} 4 & 10 \\ 10 & 30 \end{pmatrix}$$

2. Computation of the matrix inverse:
$$(X^T X)^{-1} = \begin{pmatrix} 1.5 & -0.5 \\ -0.5 & 0.2 \end{pmatrix}$$

3. Computation of $(X^T X)^{-1} X^T$:
$$(X^T X)^{-1} X^T = \begin{pmatrix} 1.5 & -0.5 \\ -0.5 & 0.2 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 0.5 & 0 & -0.5 \\ -0.3 & -0.1 & 0.1 & 0.3 \end{pmatrix}$$

4. Finally:
$$a = (X^T X)^{-1} X^T Y = \begin{pmatrix} 1 & 0.5 & 0 & -0.5 \\ -0.3 & -0.1 & 0.1 & 0.3 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \\ 4 \\ 8 \end{pmatrix} = \begin{pmatrix} -1.5 \\ 2.2 \end{pmatrix}$$

where $-1.5$ is the intercept and $2.2$ is the slope. Thus, substituting these values in Eq. (5.11) yields the fitted line $y = 2.2x - 1.5$.
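The following sketch reproduces Example 5.2 using the normal equations $a = (X^T X)^{-1} X^T Y$ with NumPy; it should recover the intercept $-1.5$ and slope $2.2$. One deliberate deviation from the worked steps: np.linalg.solve is used instead of forming the explicit inverse, which is the standard, numerically safer way to evaluate the same expression.

```python
import numpy as np

# Data from Table 5.3
x = np.array([1.0, 2.0, 3.0, 4.0])
Y = np.array([1.0, 3.0, 4.0, 8.0])

# Design matrix: the first column of ones provides the bias (intercept) term
X = np.column_stack([np.ones_like(x), x])

# Normal equations: (X^T X) a = X^T Y, solved for a = (a0, a1)
a = np.linalg.solve(X.T @ X, X.T @ Y)

print(f"intercept = {a[0]:.1f}, slope = {a[1]:.1f}")  # -1.5, 2.2
```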
5.4 VALIDATION OF REGRESSION METHODS

A regression model should be evaluated using some metrics to check its correctness. The following metrics are used to validate the results of regression.

Standard Error

Residuals, or errors, are the differences between the actual values ($y$) and the predicted values ($\hat{y}$). If the residuals have a normal distribution, then their mean is zero, which is desirable. The standard error is a measure of the variability in estimating the coefficients; it is preferable that it be less than the coefficient estimate itself. The standard deviation of the residuals is called the residual standard error. If it is zero, the model fits the data perfectly.

Mean Absolute Error (MAE)

MAE is the mean of the absolute residuals, the differences between the estimated or predicted target values and the actual target values. It is defined as:

$$\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i| \tag{5.12}$$

Here, $\hat{y}$ is the estimated or predicted target output, $y$ is the actual target output, and $n$ is the number of samples used for regression analysis.

Mean Squared Error (MSE)

MSE is the mean of the squares of the residuals. This value is always positive, and values closer to 0 are better. It is given as:

$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \tag{5.13}$$

Root Mean Square Error (RMSE)

The square root of the MSE is called RMSE:

$$\text{RMSE} = \sqrt{\text{MSE}} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2} \tag{5.14}$$

Relative MSE

Relative MSE is the ratio of the prediction error of the model to the error of the trivial predictor that always outputs the average of the target values. A value of zero indicates a perfect model, and for a useful model the value lies between 0 and 1; if the value is more than 1, the created model is not a good one. It is given as:

$$\text{RelMSE} = \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} \tag{5.15}$$

Coefficient of Variation

The coefficient of variation is unitless and is given as:

$$\text{CV} = \frac{\text{RMSE}}{\bar{y}} \tag{5.16}$$

Example 5.3: Consider the following training set (Table 5.4) for predicting the sales of items.

Table 5.4: Training Item Table

| Item | Actual Value |
|---|---|
| 1 | 80 |
| 2 | 90 |
| 3 | 100 |
| 4 | 110 |
| 5 | 120 |

Consider two fresh items whose actual values are 80 and 75, respectively. A regression model predicts the values of these items as 75 and 85, respectively. Find the MAE, MSE, RMSE, RelMSE, and CV.

Solution: The test items' actual and predicted values are given in Table 5.5.

Table 5.5: Test Item Table

| Test Item | Actual Value ($y_i$) | Predicted Value ($\hat{y}_i$) |
|---|---|---|
| 1 | 80 | 75 |
| 2 | 75 | 85 |

Mean Absolute Error (MAE) using Eq. (5.12):

$$\text{MAE} = \frac{1}{2}\left( |80 - 75| + |75 - 85| \right) = \frac{15}{2} = 7.5$$

Mean Squared Error (MSE) using Eq. (5.13):

$$\text{MSE} = \frac{1}{2}\left( (80 - 75)^2 + (75 - 85)^2 \right) = \frac{125}{2} = 62.5$$

Root Mean Square Error using Eq. (5.14):

$$\text{RMSE} = \sqrt{\text{MSE}} = \sqrt{62.5} \approx 7.91$$

For finding RelMSE and CV, the training table should be used to find the average of $y$:

$$\bar{y} = \frac{80 + 90 + 100 + 110 + 120}{5} = \frac{500}{5} = 100$$

RelMSE using Eq. (5.15):

$$\text{RelMSE} = \frac{(80 - 75)^2 + (75 - 85)^2}{(80 - 100)^2 + (75 - 100)^2} = \frac{125}{1025} \approx 0.1219$$

CV using Eq. (5.16):

$$\text{CV} = \frac{7.91}{100} \approx 0.08$$

Coefficient of Determination

To understand the coefficient of determination, one needs to understand the total variation in regression analysis. The sum of the squares of the differences between the y-value of each data pair and the average of y is called the total variation. The following variations can then be defined.

The explained variation is given as:

$$\sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 \tag{5.17}$$

The unexplained variation is given as:

$$\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \tag{5.18}$$

The total variation is equal to the sum of the explained variation and the unexplained variation. The coefficient of determination $r^2$ is the ratio of the explained variation to the total variation:

$$r^2 = \frac{\text{Explained variation}}{\text{Total variation}} \tag{5.19}$$

It is a measure of how well future samples are likely to be predicted by the regression model. Its value ranges from $-\infty$ to 1, where 1 is the optimum value. It also signifies the proportion of variance explained. Here, $r$ is the correlation coefficient. If $r = 0.95$, then $r^2 = 0.95 \times 0.95 = 0.9025$. This means that about 90% of the variation can be explained by the relationship between $x$ and $y$; the remaining 10% is unexplained and may be due to various factors such as noise, chance, or error.

Standard Error Estimate

The standard error estimate is another useful measure of regression. It is the standard deviation of the observed values about the predicted values:

$$s = \sqrt{\frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n - 2}} \tag{5.20}$$

Here, as usual, $y_i$ is the observed value, $\hat{y}_i$ is the predicted value, and $n$ is the number of samples.

Example 5.4: Consider the actual and predicted values given in Table 5.6. Find the standard error estimate.

Solution: The observed and predicted values, along with the squared residuals, are given below in Table 5.6.

Table 5.6: Sample Data

| Observed ($y_i$) | Predicted ($\hat{y}_i$) | $(y_i - \hat{y}_i)^2$ |
|---|---|---|
| 1.5 | 1.46 | $(1.5 - 1.46)^2 = 0.0016$ |
| 2.9 | 2.02 | $(2.9 - 2.02)^2 = 0.7744$ |
| 2.7 | 2.58 | $(2.7 - 2.58)^2 = 0.0144$ |
| 3.1 | 3.14 | $(3.1 - 3.14)^2 = 0.0016$ |

The sum of $(y_i - \hat{y}_i)^2$ for all $i = 1, 2, \ldots, n$ (number of samples $n = 4$) is 0.792. The standard error estimate as given in Eq. (5.20) is:

$$s = \sqrt{\frac{0.792}{4 - 2}} = \sqrt{0.396} \approx 0.63$$
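As a cross-check on Examples 5.3 and 5.4, the sketch below computes all of the above metrics directly from Eqs. (5.12) through (5.16) and (5.20) with NumPy, using only the numbers from Tables 5.4, 5.5, and 5.6. The $n - 2$ denominator in the last step follows the standard-error formula as reconstructed above.

```python
import numpy as np

# Example 5.3: test items (actual vs. predicted) and training targets
y_true = np.array([80.0, 75.0])
y_pred = np.array([75.0, 85.0])
y_train = np.array([80.0, 90.0, 100.0, 110.0, 120.0])

resid = y_true - y_pred
mae = np.mean(np.abs(resid))     # Eq. (5.12) -> 7.5
mse = np.mean(resid ** 2)        # Eq. (5.13) -> 62.5
rmse = np.sqrt(mse)              # Eq. (5.14) -> ~7.91
y_bar = y_train.mean()           # average of trivial predictor -> 100
relmse = np.sum(resid ** 2) / np.sum((y_true - y_bar) ** 2)  # Eq. (5.15) -> ~0.1219
cv = rmse / y_bar                # Eq. (5.16) -> ~0.08
print(mae, mse, round(rmse, 2), round(relmse, 4), round(cv, 2))

# Example 5.4: standard error estimate, Eq. (5.20), data from Table 5.6
y_obs = np.array([1.5, 2.9, 2.7, 3.1])
y_hat = np.array([1.46, 2.02, 2.58, 3.14])
s = np.sqrt(np.sum((y_obs - y_hat) ** 2) / (len(y_obs) - 2))  # -> ~0.63
print(round(s, 2))
```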