Lecture Note Two

The document discusses the advantages and theoretical background of quantile regression (QR), highlighting its robustness to outliers, ability to analyze both location and scale parameters, and suitability for heteroskedastic data. It explains the model formulation, the use of asymmetric penalties in QR, and the ease of computation with modern software like Stata. Additionally, it emphasizes the non-parametric nature of QR and its asymptotic properties related to univariate sample quantiles.


QUANTILE REGRESSION

Dr. Muhammad Shahadat Hossain Siddiquee


Professor, Department of Economics
University of Dhaka
Email: [email protected]
Cell: +8801719397749
ADVANTAGES OF USING QUANTILE REGRESSION (CONTD.)
QRS HAVE CONSIDERABLE APPEAL FOR SEVERAL REASONS:
a) Median regression, also called least absolute-deviations regression, is more robust to outliers
than is mean regression.
b) QR permits us to study the impact of regressors on both the location and scale parameters of
the model, thereby allowing a richer understanding of the data.
c) QR avoids assumptions about the parametric distribution of the regression errors. In the
context of regression analysis, a "parametric distribution of the regression errors" means
assuming that the errors (the differences between predicted and actual values) follow a
specific, known probability distribution, such as the normal, characterized by a finite number
of parameters. This feature makes QR especially suitable for heteroskedastic data.
d) QR is particularly useful when the effect of a predictor variable varies across different
quantiles.
e) Recently, computation of QR models has become easier.
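A minimal pure-Python sketch of point (a): a single gross outlier pulls the sample mean far away, while the median barely moves. The data here are invented purely for illustration.

```python
import statistics

# A clean sample, and the same sample contaminated by one gross outlier
clean = [10, 11, 12, 13, 14]
contaminated = clean + [1000]

# The mean is pulled heavily toward the outlier...
print(statistics.mean(clean))          # 12
print(statistics.mean(contaminated))   # ~176.67

# ...while the median barely moves
print(statistics.median(clean))        # 12
print(statistics.median(contaminated)) # 12.5
```

The same contrast carries over to regression: mean (OLS) regression inherits the mean's sensitivity to outliers in the response, while median (LAD) regression inherits the median's robustness.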

This lecture explores the application of QR using several of Stata’s QR commands. We also discuss
the presentation and interpretation of QR computer output using different examples.
THEORETICAL BACKGROUND OF QR ANALYSIS
KEY THEORETICAL PROPERTIES OF QR
 ASYMPTOTIC THEORY
The asymptotic properties of quantile regression estimators are closely
related to the theory of univariate sample quantiles. Univariate sample
quantiles represent points that divide a sorted data set into equal parts,
estimating quantiles of the underlying distribution. They're used to
summarize data and can be found by sorting the sample and identifying the
appropriate value based on the desired quantile.
Asymptotic theory, or large sample theory, is a statistical framework that
analyzes the behavior of estimators and statistical tests as the sample size
approaches infinity. It provides insights into properties like consistency and
efficiency under these asymptotic conditions.
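The sort-and-pick recipe for univariate sample quantiles described above can be sketched as follows. This uses the simple nearest-rank convention (sort, then take the value whose rank is the ceiling of q·n); statistical packages offer several interpolation variants, so exact results can differ slightly across software.

```python
import math

def sample_quantile(data, q):
    """Return the q-th sample quantile (0 < q <= 1) by the
    nearest-rank method: sort the sample, then pick the value
    whose 1-based rank is ceil(q * n)."""
    s = sorted(data)
    n = len(s)
    rank = max(1, math.ceil(q * n))  # clamp so q near 0 still yields rank 1
    return s[rank - 1]

x = [7, 1, 5, 3, 9]              # sorted: [1, 3, 5, 7, 9]
print(sample_quantile(x, 0.5))   # 5 (the median)
print(sample_quantile(x, 0.25))  # 3
```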
THEORETICAL BACKGROUND OF QR ANALYSIS…
 NON-PARAMETRIC NATURE
Quantile regression is often considered semiparametric, as it does not
rely on strong distributional assumptions about the error terms.
MODEL FORMULATION
• The quantile regression model can be written as Y = β'X + ei, with
conditional quantile function Qq(Y|X) = β'X, where Qq(Y|X) is the qth
conditional quantile of Y given X, X is the vector of independent
variables, and β is the vector of parameters to be estimated. The
objective of quantile regression is to estimate these β parameters for
different quantiles q.
THEORETICAL BACKGROUND OF QR ANALYSIS…
• Let ei denote the model prediction error. Then ordinary least squares
(OLS) minimizes Σ ei^2, median regression minimizes Σ |ei|, and QR
minimizes a sum that applies the asymmetric penalties (1 − q)|ei| to
over-predictions and q|ei| to under-predictions.
• Linear programming methods need to be used to obtain the QR
estimator, but it is still asymptotically normally distributed and easily
obtained using Stata commands.
• In quantile regression, the use of asymmetric penalties, or loss
functions, stems from the method's focus on modeling the quantiles of
the conditional distribution rather than just its mean; except at the
median (q = 0.5), these penalties are not symmetric.
• The penalties are calculated differently for errors below and above the
predicted value, penalizing over- and under-predictions unevenly.
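The asymmetric penalty described above is often called the "check" (or pinball) loss: q·e for an under-prediction (e ≥ 0) and (1 − q)·|e| for an over-prediction (e < 0). The brute-force sketch below, in plain Python with invented data, shows the key property: the constant that minimizes the summed check loss is a qth sample quantile, so with q = 0.5 it recovers the median. (Real QR software solves this by linear programming, not by enumeration.)

```python
def check_loss(e, q):
    """Asymmetric QR penalty: q*|e| for under-prediction (e >= 0),
    (1 - q)*|e| for over-prediction (e < 0)."""
    return q * e if e >= 0 else (q - 1) * e

def best_constant(data, q):
    """Candidate value in `data` minimizing the total check loss.
    For an intercept-only model this is a q-th sample quantile."""
    return min(data, key=lambda c: sum(check_loss(y - c, q) for y in data))

y = [2, 4, 4, 5, 9, 10, 15]
print(best_constant(y, 0.5))   # 5 (the sample median)
print(best_constant(y, 0.25))  # 4 (a lower quartile)
```

With q = 0.5 both penalties equal 0.5·|e|, so the objective reduces to (half) the sum of absolute errors, which is exactly the median-regression criterion mentioned above.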
Conditional Quantiles
