CHAPTER 5
APPLIED ECONOMETRICS AND SOFTWARE APPLICATIONS
5.1 Basic concepts
Regression is an econometric technique for estimating the relationship between
a dependent variable and one or more independent variables.
It helps to analyse how the typical value of the dependent variable changes
when any one of the independent variables changes, while the others are held
constant (ceteris paribus).
The main purpose of linear regression analysis is to assess associations between
dependent and independent variables.
16 November 2024 Prepared by Urgaia Rissa(PhD) for MBA 1
Steps in Applied Regression Analysis of Econometrics
The first step is choosing the dependent variable; this step is determined by
the purpose of the research.
After choosing the dependent variable, it is logical to proceed in the following
sequence:
1. Review the literature and develop the theoretical model
2. Specify the model: Select the independent variables and the
functional form
3. Hypothesize the expected signs of the coefficients
4. Collect the data. Inspect and clean the data
5. Estimate and evaluate the equation
6. Document the results
5.2 Multiple Regression Analysis of Cross-sectional Data
See the difference between the regression results using Stata and EViews below.
A) Using Stata: 10 observations, three explanatory variables (X's) and one
dependent variable (Y).
Steps in Stata Software application:
Go to File, Import, choose Excel spreadsheet, Browse to pick the file from
where it is located, tick the box "Import first row as variable names", and click
OK.
Write ‘reg‘, click on your dependent variable first, followed by the
independent variables; the regression results will appear as
shown on the next slide.
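The menu steps above amount to fitting Y on X1-X3 by ordinary least squares. A minimal sketch of the same estimation in Python with numpy; the data here are hypothetical stand-ins, since the original spreadsheet is not reproduced in the slides:

```python
import numpy as np

# Hypothetical data standing in for the 10-observation spreadsheet
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))          # three explanatory variables X1, X2, X3
y = 2.0 + X @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.1, size=10)

# OLS: prepend a constant column, then solve the least-squares problem
Xc = np.column_stack([np.ones(10), X])
beta = np.linalg.lstsq(Xc, y, rcond=None)[0]   # [const, b1, b2, b3]

# R-squared from the residuals
resid = y - Xc @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(beta, r2)
```

This mirrors what `reg y x1 x2 x3` reports in Stata (coefficients and R-squared), without the standard errors and t-statistics shown in the output table.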
OLS Regression Results using Stata
Steps in EViews software application for the same model:
• Go to File, Import, Import from file, double-click on your file, click
Finish, and answer No to "Link imported series…?".
• Click on your dependent variable first, followed by the independent
variables, and Open as Equation.
• Copy-paste or write the variables in the box that appears in Equation
Estimation and choose Least Squares as the method.
B) Using EViews on the same data, the regression results are as follows:
Dependent Variable: Y
Method: Least Squares
Sample: 1 10
Included observations: 10
Variable Coefficient Std. Error t-Statistic Prob.
X1 -0.362254 0.109777 -3.299909 0.0164
X2 0.760462 0.130119 5.844352 0.0011
X3 0.327330 0.147341 2.221586 0.0680
C 33.80356 5.784788 5.843527 0.0011
R-squared 0.880274 Mean dependent var 57.80000
Adjusted R-squared 0.820411 S.D. dependent var 6.941021
S.E. of regression 2.941461 Akaike info criterion 5.284864
Sum squared resid 51.91314 Schwarz criterion 5.405898
Log likelihood -22.42432 Hannan-Quinn criter. 5.152090
F-statistic 14.70483 Durbin-Watson stat 1.192552
Prob(F-statistic) 0.003581
5.2.1 Conduct tests and get remedies for the problems of cross-sectional
data
Test for model specification
Steps: Estimate the equation, go to View, Stability Diagnostics, Ramsey RESET Test.
Null hypothesis, Ho: No misspecification (correct specification)
Ha: Misspecification
Decision:
• We don't reject the null hypothesis of correct specification at any conventional
level of significance, since the probability values of the t-statistic, F-statistic
and likelihood ratio are all insignificant, as can be seen in the table on the next slide.
• Note: If not, transform the variables into logs or first differences, repeat
the same steps, and check whether the probability values become
insignificant.
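The RESET test behind this menu item augments the fitted equation with powers of the fitted values and tests their joint significance. A numpy sketch with one added regressor (squared fitted values, as in the table that follows), on hypothetical data:

```python
import numpy as np

def reset_test(y, X):
    """Ramsey RESET F-statistic with squared fitted values as the one omitted variable."""
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])
    beta = np.linalg.lstsq(Xc, y, rcond=None)[0]
    fitted = Xc @ beta
    rss_r = np.sum((y - fitted) ** 2)          # restricted RSS

    Xu = np.column_stack([Xc, fitted ** 2])    # add fitted^2
    bu = np.linalg.lstsq(Xu, y, rcond=None)[0]
    rss_u = np.sum((y - Xu @ bu) ** 2)         # unrestricted RSS

    q = 1                                      # one restriction tested
    return ((rss_r - rss_u) / q) / (rss_u / (n - Xu.shape[1]))

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
y = 1 + X @ np.array([0.5, 0.2, -0.3]) + rng.normal(scale=0.2, size=10)
print(reset_test(y, X))   # compare with the F critical value for (1, n-5) df
```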
Ramsey RESET Test
Equation: UNTITLED
Specification: Y X1 X2 X3 C
Omitted Variables: Squares of fitted values
Value df Probability
t-statistic 1.059549 5 0.3378
F-statistic 1.122645 (1, 5) 0.3378
Likelihood ratio 2.025563 1 0.1547
F-test summary:
Sum of Sq. df Mean Squares
Test SSR 9.518767 1 9.518767
Restricted SSR 51.91314 6 8.652191
Unrestricted SSR 42.39438 5 8.478875
LR test summary:
Value df
Restricted LogL -22.42432 6
Unrestricted LogL -21.41154 5
Test for multicollinearity
Steps: Estimate, Go to View, Coefficient Diagnostics, Variance Inflation Factors
Null hypothesis, Ho: No multicollinearity
Ha: multicollinearity
Variance Inflation Factors
Sample: 1 10
Included observations: 10
Variable    Coefficient Variance    Uncentered VIF    Centered VIF
X1 0.012051 4.144622 1.006581
X2 0.016931 15.93263 1.026169
X3 0.021709 17.72436 1.022714
C 33.46377 38.67664 NA
Decision: We do not reject the null hypothesis of no multicollinearity because
the centred VIFs for all explanatory variables are well below 10.
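The centred VIF reported above is 1/(1 - R²_j), where R²_j comes from regressing each explanatory variable on the others. A numpy sketch on hypothetical, nearly uncorrelated regressors:

```python
import numpy as np

def centred_vif(X):
    """Centred VIFs: regress each column on the others; VIF_j = 1 / (1 - R^2_j)."""
    n, k = X.shape
    vifs = []
    for j in range(k):
        xj = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        b = np.linalg.lstsq(others, xj, rcond=None)[0]
        resid = xj - others @ b
        r2 = 1 - resid @ resid / np.sum((xj - xj.mean()) ** 2)
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))   # hypothetical regressors
print(centred_vif(X))          # values near 1 indicate little collinearity
```

Values well below 10, as in the table, support not rejecting the null of no multicollinearity.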
Note: If we reject the null hypothesis of no multicollinearity because the
centred VIFs of the explanatory variables are greater than 10, then convert the
original data into logs and repeat the same steps.
Other remedies are:
• Run a correlation analysis of all independent variables, find the pair with
the highest correlation coefficient (say above 0.8), drop one of the two, and
re-run the regression of the dependent variable on the remaining independent
variables.
• Drop the insignificant variable or transform it into logs.
• Obtain a larger sample and run the regression using the same procedures we
have followed so far.
Test for Heteroskedasticity
Heteroskedasticity occurs when the error variance is not constant (σᵢ² varies
across observations). Testing can be carried out in two ways: informal and
formal econometric tests.
1. Informal method:
Using the OLS method, obtain the auxiliary coefficient of determination as follows:
Y = β0 + β1X1 + β2X2 + β3X3
RSS = θ0 + θ1X1 + θ2X2 + θ3X3
Take one of the methods, such as the Breusch-Pagan-Godfrey LM test of
heteroskedasticity:
H0: θ1 = θ2 = θ3 = 0
Ha: at least one θj ≠ 0
Steps: Go to Proc and save the error term, generate RSS, then regress RSS
on the auxiliary variables and obtain the results.
Dependent Variable: RSS
Method: Least Squares
Sample: 1 10
Included observations: 10
Variable Coefficient Std. Error t-Statistic Prob.
C 234.4999 77.21742 3.036879 0.0229
X1 -0.896608 1.465344 -0.611875 0.5631
X2 -1.873711 1.736877 -1.078782 0.3221
X3 -5.216730 1.966758 -2.652452 0.0379
R-squared 0.575555 Mean dependent var 34.73578
Adjusted R-squared 0.363333 S.D. dependent var 49.20788
S.E. of regression 39.26367 Akaike info criterion 10.46765
Sum squared resid 9249.814 Schwarz criterion 10.58868
Log likelihood -48.33825 Hannan-Quinn criter. 10.33488
F-statistic 2.712037 Durbin-Watson stat 2.538458
Prob(F-statistic) 0.137860
LM = N*R²_aux = 10 × 0.575555 = 5.76, and the chi-squared critical value with
df = k - 1 = 4 - 1 = 3 is χ²(3) = 7.8147.
Note: the procedure to generate the chi-squared critical value in EViews is:
chis=@qchisq(.95,3)
The decision rule is that if the value of LM is greater than χ²(3), we reject
the null hypothesis of homoskedasticity. Here LM = 5.76 < 7.8147, so we do not
reject the null hypothesis and conclude that there is no problem of
heteroskedasticity.
OR
Alternatively, as can be seen in the auxiliary regression table, we don't reject
H0 since the slope coefficients are jointly insignificant (Prob(F-statistic) =
0.1379), so there is no problem of heteroskedasticity.
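The LM arithmetic is simple enough to check directly; a plain-Python sketch using the auxiliary R² and the tabulated χ²(3) critical value from the slide:

```python
# Breusch-Pagan-Godfrey LM statistic: N times the auxiliary R-squared
n = 10
r2_aux = 0.575555            # auxiliary R-squared from the slide
lm = n * r2_aux              # approximately 5.76

chi2_crit = 7.8147           # chi-squared critical value, df = 3, 5% level
print(lm, lm > chi2_crit)    # LM < critical value: do not reject H0
```

Since LM falls short of the critical value, the homoskedasticity null is not rejected, matching the formal test on the later slide.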
2. Formal method of testing
Steps: Estimate, View, Residual Diagnostics, Heteroskedasticity Tests, and
then choose one of them.
Decision: Here we don't reject the null hypothesis of no heteroskedasticity
because the probability value of Obs*R-squared is greater than any
conventional level of significance.
Note: if there is heteroskedasticity, remove it using the GLS/WLS method
with prior information:
Apply OLS, choose Weight,
Inverse std. dev., and write your prior information, say X2^(−.5), in the
weight series box.
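The WLS remedy amounts to scaling each observation by the weight series before running OLS. A numpy sketch under the assumed prior that the error variance is proportional to X2 (so the weight is X2^(-0.5), as on the slide); the data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
x2 = rng.uniform(1.0, 5.0, size=10)
X = np.column_stack([rng.normal(size=10), x2, rng.normal(size=10)])
# Hypothetical model whose error variance grows with X2
y = 2 + X @ np.array([0.4, 0.7, 0.3]) + np.sqrt(x2) * rng.normal(size=10)

# Prior: Var(u_i) proportional to X2_i, so weight each row by X2^(-0.5)
w = x2 ** -0.5
Xc = np.column_stack([np.ones(10), X])
beta_wls = np.linalg.lstsq(Xc * w[:, None], y * w, rcond=None)[0]
print(beta_wls)
```

Multiplying both sides of the equation by the weight makes the transformed errors homoskedastic, which is what EViews does internally with the weight series.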
Heteroskedasticity Test: Breusch-Pagan-Godfrey
F-statistic 1.060152 Prob. F(3,6) 0.4329
Obs*R-squared 3.464376 Prob. Chi-Square(3) 0.3254
Scaled explained SS 0.992313 Prob. Chi-Square(3) 0.8031
Test Equation:
Dependent Variable: RESID^2
Method: Least Squares
Date: 09/25/22 Time: 08:40
Sample: 1 10
Included observations: 10
Variable Coefficient Std. Error t-Statistic Prob.
C -7.756788 13.44142 -0.577081 0.5849
X1 -0.287357 0.255076 -1.126553 0.3030
X2 0.179733 0.302342 0.594468 0.5739
X3 0.476771 0.342358 1.392609 0.2132
R-squared 0.346438 Mean dependent var 5.191314
Adjusted R-squared 0.019656 S.D. dependent var 6.902901
S.E. of regression 6.834721 Akaike info criterion 6.971083
Sum squared resid 280.2805 Schwarz criterion 7.092117
Log likelihood -30.85541 Hannan-Quinn criter. 6.838309
F-statistic 1.060152 Durbin-Watson stat 1.115098
Prob(F-statistic) 0.432879
5.2.2 Dummy Variables Regression Analysis
By definition, dummy variables are indicators or categorical and qualitative
variables that are used to quantify the qualitative, nominal scale variables by
giving them the value of 0 and 1.
Qualitative variables such as gender, politics, race, religion, region,
union,children,party,nationality,residency,occupation,profession etc are
nominal scale variables which have no specific numerical values.
Reference category in dummy variable that has the value of 0 is called the
reference category, benchmark or comparison category.
All the comparisons of the dummy variable are made in relation to its
reference category.
Dummy Variable Trap: When the number of dummy variables created is
equal to the number of values of the categorical value can take on. This leads
to multicollinearity, which causes incorrect calculations of regression
coefficients and p-values.
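The trap can be seen directly in the design matrix: with an intercept, including all m category dummies makes the columns linearly dependent, so dropping one (the reference category) is required. A numpy sketch with a hypothetical three-category variable:

```python
import numpy as np

regions = np.array(["north", "south", "east", "north", "south", "east"])
cats = ["north", "south", "east"]

# All m dummies plus an intercept: the dummies sum to the constant column,
# so the matrix is rank deficient (the dummy variable trap)
full = np.column_stack([np.ones(6)] + [(regions == c).astype(float) for c in cats])
print(np.linalg.matrix_rank(full))   # 3, not 4

# Drop one category (the reference, here "north") to restore full column rank
ok = np.column_stack([np.ones(6)] + [(regions == c).astype(float) for c in cats[1:]])
print(np.linalg.matrix_rank(ok))     # 3 = number of columns
```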
Example of Dummy Variables Regression Analysis:
Dependent Variable: Y
Method: Least Squares
Sample: 1 10
Included observations: 10
Variable Coefficient Std. Error t-Statistic Prob.
X1 0.387616 0.062565 6.195396 0.0004
DI 1.262693 0.314127 4.019695 0.0051
C 0.930495 0.466974 1.992606 0.0866
R-squared 0.859195 Mean dependent var 3.820000
Adjusted R-squared 0.818964 S.D. dependent var 1.078888
S.E. of regression 0.459048 Akaike info criterion 1.524002
Sum squared resid 1.475077 Schwarz criterion 1.614778
Log likelihood -4.620012 Hannan-Quinn criter. 1.424422
F-statistic 21.35700 Durbin-Watson stat 1.136004
Prob(F-statistic) 0.001048
5.3 Time series Data Analysis
Time series data are observations generated over time.
Time series regression can suffer from the problem of non-stationarity.
The two approaches to time series data analysis are the Johansen
cointegration test and the ARDL bounds test for cointegration.
5.3.1 The Johansen cointegration approach
After importing the raw data into EViews, we can convert the variables into
logarithms using the following steps:
Go to Quick,
Open Generate Series by Equation,
Write your equation in the box that appears, e.g. lny=log(y), and then
click OK.
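The Generate Series step is a natural-log transform of each series; a numpy equivalent on hypothetical level values:

```python
import numpy as np

y = np.array([120.0, 135.5, 150.2, 168.9])   # hypothetical level series
lny = np.log(y)                              # EViews: lny = log(y)
print(lny)
```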
Lag selection and unit root tests
Steps for optimal lag length:
Click on the dependent variable, then the independent variables,
Open as VAR (all variables appear in the box as endogenous),
Choose Unrestricted VAR,
Click OK,
Go to View, Lag Structure, Lag Length Criteria.
Table A: Optimum lag length
The optimum lag length is 2 by all criteria, since each criterion takes its
lowest value at that lag, indicating the best model to be chosen.
To run the unit root test, which identifies whether each variable is
non-stationary at the level and at the first difference, follow these steps:
Go to Quick,
Series Statistics,
Unit Root Test,
Write the variable in the box that appears, then click OK; choose the
test type and insert the optimal lag you obtained, for both the level and
first-difference tests.
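Behind these menu steps, the ADF test regresses the differenced series on its lagged level and lagged differences and reports the t-statistic on the lagged level. A minimal numpy sketch (constant only, fixed lag length; MacKinnon critical values omitted; a simulated random walk stands in for the actual data):

```python
import numpy as np

def adf_tstat(y, lags):
    """ADF t-statistic (constant, fixed lag): regress dy_t on
    [1, y_{t-1}, dy_{t-1}, ..., dy_{t-lags}]; return the t-stat on y_{t-1}."""
    dy = np.diff(y)
    rows = []
    for t in range(lags, len(dy)):
        rows.append([1.0, y[t]] + [dy[t - i] for i in range(1, lags + 1)])
    X = np.array(rows)
    z = dy[lags:]
    beta = np.linalg.lstsq(X, z, rcond=None)[0]
    resid = z - X @ beta
    sigma2 = resid @ resid / (len(z) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(4)
rw = np.cumsum(rng.normal(size=200))   # a random walk has a unit root
print(adf_tstat(rw, lags=2))           # usually not very negative: fail to reject
```

A strongly negative t-statistic (below the MacKinnon critical value) would reject the unit-root null, as happens for the first-differenced series in the tables below.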
Table Unit Root Test
Null Hypothesis: LNY has a unit root
Exogenous: Constant
Lag Length: 2 (Automatic - based on AIC, maxlag=2)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic 1.832436 0.9996
Test critical values: 1% level -3.621023
5% level -2.943427
10% level -2.610263
*MacKinnon (1996) one-sided p-values.
Null Hypothesis: D(LNY) has a unit root
Exogenous: Constant
Lag Length: 2 (Automatic - based on AIC, maxlag=2)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -3.588645 0.0110
Test critical values: 1% level -3.626784
5% level -2.945842
10% level -2.611531
*MacKinnon (1996) one-sided p-values.
Null Hypothesis: LNX1 has a unit root
Exogenous: Constant
Lag Length: 1 (Automatic - based on AIC, maxlag=2)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic 1.470695 0.9989
Test critical values: 1% level -3.615588
5% level -2.941145
10% level -2.609066
*MacKinnon (1996) one-sided p-values.
Null Hypothesis: D(LNX1) has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on AIC, maxlag=2)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -3.294465 0.0222
Test critical values: 1% level -3.615588
5% level -2.941145
10% level -2.609066
*MacKinnon (1996) one-sided p-values.
Null Hypothesis: LNX2 has a unit root
Exogenous: Constant
Lag Length: 1 (Automatic - based on AIC, maxlag=2)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic 1.103809 0.9968
Test critical values: 1% level -3.615588
5% level -2.941145
10% level -2.609066
*MacKinnon (1996) one-sided p-values.
Null Hypothesis: D(LNX2) has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on AIC, maxlag=2)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -3.843495 0.0055
Test critical values: 1% level -3.615588
5% level -2.941145
10% level -2.609066
*MacKinnon (1996) one-sided p-values.
As we can see from the tables above, the Augmented Dickey-Fuller test
statistic in absolute terms is less than the critical value in absolute
terms at the conventional levels of significance, say 5%, so we don't reject
the null hypothesis of a unit root for each data series at the level.
However, we reject the null hypothesis of a unit root for each series
at the first difference.
Therefore, each series is integrated of order one, I(1), and we proceed to
test for the long-run relationship with the Johansen cointegration
approach.
Steps: Go to Quick, Group Statistics, Johansen Cointegration Test, click OK
and write your data series in the box that appears, then choose "Summarize
all 5 sets of assumptions" and get the results as follows.
Unrestricted cointegration rank test in Johansen Cointegration Test
The preceding slide shows that in the unrestricted cointegration rank tests,
both the trace and the maximum eigenvalue versions, there is 1 cointegrating
equation at the 0.05 level.
We fail to reject the hypothesized numbers of cointegrating equations of at
most 1 and at most 2, because the trace statistic is less than its 0.05
critical value and the max-eigen statistic is less than its 5% critical
value, respectively.
From this, we conclude that all three data series have a long-run
relationship.
Note: If the trace test and the maximum eigenvalue test indicate different
numbers of cointegrating equations, we consider the trace test, because it
is stronger than the latter.
Test for stability and structural breaks
Steps: Go to Quick, estimate the equation using the OLS method, then View,
Stability Diagnostics, Recursive Estimates (CUSUM).
[CUSUM plot, 1985-2020, with the 5% significance bands]
The graph shows the model is unstable, because the CUSUM line moves outside
the range of the two 5% significance lines.
To see structural breaks in the data trend, draw the graph. Steps: Choose
the variables, Open as Group, go to View, Graph; in Graph Options choose the
basic type, Line & Symbol, and click OK; you get the graph shown below.
[Line graph of LNY, LNX1 and LNX2, 1985-2020]
From the trends, there is a structural break at year 1992.
To test for a structural break: estimate all series with OLS, go to View,
Stability Diagnostics, Chow Breakpoint Test, write the break year (1992) in
the box, and get the results as shown below.
Table Test for Structural Breaks
Chow Breakpoint Test: 1992
Null Hypothesis: No breaks at specified breakpoints
Varying regressors: All equation variables
Equation Sample: 1981 2020
F-statistic 6.819128 Prob. F(3,34) 0.0010
Log likelihood ratio 18.84232 Prob. Chi-Square(3) 0.0003
Wald Statistic 20.45738 Prob. Chi-Square(3) 0.0001
We reject the null hypothesis of no breaks at the specified breakpoint because
the probability of the F-statistic is significant. That means there is a break
point at year 1992, and to remove this break we use a dummy variable.
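Under the hood, the Chow test compares the pooled residual sum of squares with the sub-sample RSSs before and after the break: F = [(RSS_pooled - RSS_1 - RSS_2)/k] / [(RSS_1 + RSS_2)/(n - 2k)]. A numpy sketch on simulated data with an artificial break, since the actual series are not reproduced here:

```python
import numpy as np

def rss(y, X):
    """Residual sum of squares from an OLS fit with intercept."""
    Xc = np.column_stack([np.ones(len(y)), X])
    b = np.linalg.lstsq(Xc, y, rcond=None)[0]
    e = y - Xc @ b
    return e @ e

def chow_f(y, X, split):
    """Chow breakpoint F-statistic, k parameters per sub-sample regression."""
    k = X.shape[1] + 1                  # slopes plus intercept
    n = len(y)
    rss_p = rss(y, X)                   # pooled
    rss_1 = rss(y[:split], X[:split])
    rss_2 = rss(y[split:], X[split:])
    return ((rss_p - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (n - 2 * k))

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 2))
y = 1 + X @ np.array([0.5, 0.2]) + rng.normal(scale=0.1, size=40)
y[20:] += 3.0                           # hypothetical break at observation 20
print(chow_f(y, X, split=20))           # large F: evidence of a break
```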
To create the dummy variable series, go to Generate Series and write:
a) dummy1=@year>1992,
b) interactiondummywithlnx1=dummy1*lnx1 and
c) interactiondummywithlnx2=dummy1*lnx2.
Then estimate your model including a), b) and c), and check the stability
condition again.
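The series-generation step above can be sketched in numpy: a post-1992 indicator plus interaction terms (the log series here is a hypothetical stand-in for lnx1):

```python
import numpy as np

years = np.arange(1981, 2021)           # sample 1981-2020, as in the equation output
lnx1 = np.log(np.linspace(1.0e9, 3.0e9, len(years)))   # hypothetical series

dummy1 = (years > 1992).astype(float)   # EViews: dummy1 = @year > 1992
interaction_lnx1 = dummy1 * lnx1        # EViews: dummy1 * lnx1
print(dummy1[:15])
```

The interaction terms let both the intercept (via the dummy) and the slopes (via the interactions) shift after the break year.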
With the dummy and interaction terms included, the structural breaks are
removed, and we can now estimate the short run using a VECM and the long
run by different methods such as FMOLS.
VECM: Open as VAR, choose Vector Error Correction, set the number of
cointegrating equations to 1 with no intercept nor trend, then click OK;
go to Proc, Make System, Order by Variable.
Short-run estimation
Table Short-run estimation
Dependent Variable: D(LNY)
Method: Vector Error Correction Estimates
Sample (adjusted): 1984 2020
Included observations: 37 after adjustments
D(LNY) = C(1)*( LNY(-1) - 30.4646788726*LNX1(-1) + 28.8836160056*LNX2(-1)
 - 16.0103875756*DUMMY1(-1) + 29.8374559211*INTERACTIONDUMMYWITHX1(-1)
 - 28.5667904689*INTERACTIONDUMMYWITHX2(-1) ) + C(2)*D(LNY(-1))
 + C(3)*D(LNY(-2)) + C(4)*D(LNX1(-1)) + C(5)*D(LNX1(-2)) + C(6)*D(LNX2(-1))
 + C(7)*D(LNX2(-2)) + C(8)*D(DUMMY1(-1)) + C(9)*D(DUMMY1(-2))
 + C(10)*D(INTERACTIONDUMMYWITHX1(-1)) + C(11)*D(INTERACTIONDUMMYWITHX1(-2))
 + C(12)*D(INTERACTIONDUMMYWITHX2(-1)) + C(13)*D(INTERACTIONDUMMYWITHX2(-2))
Coefficient Std. Error t-Statistic Prob.
C(1) -0.059711 0.018805 -3.175365 0.0041
C(2) 0.251698 0.114218 2.203671 0.0374
C(3) -0.739322 0.137770 -5.366345 0.0000
C(4) -3.271694 0.902689 -3.624388 0.0014
C(5) -2.750004 0.700296 -3.926918 0.0006
C(6) 3.131871 0.762998 4.104691 0.0004
C(7) 1.722467 0.572454 3.008916 0.0061
C(8) -36.96652 14.17236 -2.608353 0.0154
C(9) -10.43914 13.86459 -0.752935 0.4588
C(10) 3.302430 0.968268 3.410658 0.0023
C(11) 2.968163 0.782466 3.793344 0.0009
C(12) -1.530168 0.790719 -1.935159 0.0648
C(13) -2.400485 0.697610 -3.441016 0.0021
R-squared 0.760076 Mean dependent var 0.037271
Adjusted R-squared 0.640114 S.D. dependent var 0.081387
S.E. of regression 0.048824 Akaike info criterion -2.931333
Sum squared resid 0.057212 Schwarz criterion -2.365334
Log likelihood 67.22965 Hannan-Quinn criter. -2.731792
Durbin-Watson stat 2.156272
Table: Diagnostic tests
Breusch-Godfrey Serial Correlation LM Test:
F-statistic 2.335543 Prob. F(2,25) 0.1175
Obs*R-squared 5.824868 Prob. Chi-Square(2) 0.0543
Heteroskedasticity Test: Breusch-Pagan-Godfrey
F-statistic 0.857948 Prob. F(12,24) 0.5961
Obs*R-squared 11.10730 Prob. Chi-Square(12) 0.5197
Scaled explained SS 10.43358 Prob. Chi-Square(12) 0.5780
Long-run estimation parameters
Dependent Variable: LNY
Method: Fully Modified Least Squares (FMOLS)
Sample (adjusted): 1982 2020
Included observations: 39 after adjustments
Cointegrating equation deterministics: C
Long-run covariance estimate (Bartlett kernel, Newey-West fixed bandwidth
= 4.0000)
Variable Coefficient Std. Error t-Statistic Prob.
LNX1 -0.746918 0.255084 -2.928127 0.0061
LNX2 0.720750 0.322638 2.233928 0.0324
DUMMY1 -12.17930 3.866140 -3.150249 0.0035
INTERACTIONDUMMYWITHX1 0.773450 0.270955 2.854527 0.0074
INTERACTIONDUMMYWITHX2 -0.196781 0.336786 -0.584293 0.5630
C 22.86475 3.843211 5.949386 0.0000
R-squared 0.981920 Mean dependent var 23.15020
Adjusted R-squared 0.979181 S.D. dependent var 0.513042
S.E. of regression 0.074025 Sum squared resid 0.180832
Long-run variance 0.005403
Dependent Variable: LNY
Method: Generalized Method of Moments
Sample (adjusted): 1982 2020
Included observations: 39 after adjustments
Linear estimation with 1 weight update
Estimation weighting matrix: HAC (Bartlett kernel, Newey-West fixed
bandwidth = 4.0000)
Standard errors & covariance computed using TSLS weighting matrix
Instrument specification: LNY LNX1 LNX2(-1) DUMMY1 INTERACTIONDUMMYWITHX1 INTERACTIONDUMMYWITHX2
Constant added to instrument list
Variable Coefficient Std. Error t-Statistic Prob.
LNX1 -0.739446 0.289660 -2.552809 0.0155
LNX2 0.881965 0.380026 2.320802 0.0266
DUMMY1 -8.225527 4.176731 -1.969370 0.0574
INTERACTIONDUMMYWITHX1 0.761095 0.304585 2.498791 0.0176
INTERACTIONDUMMYWITHX2 -0.364614 0.392737 -0.928393 0.3599
C 19.18586 4.155005 4.617530 0.0001
R-squared 0.981746 Mean dependent var 23.15020
Adjusted R-squared 0.978980 S.D. dependent var 0.513042
S.E. of regression 0.074382 Sum squared resid 0.182579
Durbin-Watson stat 1.449467 J-statistic 3.919096
Instrument rank 7 Prob(J-statistic) 0.047741
5.3.2 ARDL Model Estimation
Decision Rule:
If the F-statistic is statistically significant, so that cointegration
exists, we proceed with the VECM to explain the long-run dynamic causality
and the ARDL to explain the short-run specification.
If the F-statistic is not statistically significant, so that no
cointegration exists, we proceed with the ARDL to explain only the
short-run specification.
In both cases, we have to perform model diagnostics, such as serial
correlation and stability tests, for both the ARDL short-run and the VECM
long-run estimates.
If the variables have the same order of integration, we can also estimate
the coefficients by way of the Johansen cointegration approach.
• Since the table indicates that the F-statistic is greater than the upper
bound at all levels of significance, we reject the null hypothesis of no
levels relationship.
• So, we conclude that cointegration exists among the variables.
Steps: Go to View, Coefficient Diagnostics, Error Correction Form.
5.4 Panel Data
To choose between Fixed and Random Effects Models
Hausman test to distinguish between the two effects models
We don't reject the null hypothesis that the random-effects estimator is
consistent (the individual effects are uncorrelated with the regressors),
since the chi-squared statistic for the cross-section random test is
insignificant at any conventional level of significance, so we choose the
random-effects over the fixed-effects model according to the Hausman test.
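The Hausman statistic compares the two estimators directly: H = (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE), chi-squared with k degrees of freedom under the null. A numpy sketch on hypothetical fixed- and random-effects estimates (the actual panel output is not reproduced in the slides):

```python
import numpy as np

def hausman(b_fe, b_re, v_fe, v_re):
    """Hausman statistic: chi-squared with len(b_fe) df under H0 (RE consistent)."""
    d = b_fe - b_re
    return d @ np.linalg.inv(v_fe - v_re) @ d

# Hypothetical estimates and covariance matrices for two slope coefficients
b_fe = np.array([0.52, -0.31])
b_re = np.array([0.50, -0.30])
v_fe = np.array([[0.020, 0.001], [0.001, 0.030]])
v_re = np.array([[0.010, 0.001], [0.001, 0.020]])
h = hausman(b_fe, b_re, v_fe, v_re)
print(h)   # compare with the chi-squared(2) 5% critical value, 5.991
```

A small, insignificant statistic (as here) favours random effects; a large one favours fixed effects.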