Linear regression
In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variables) denoted X. The case of one explanatory variable is called simple linear regression. For more
than one explanatory variable, the process is called multiple linear regression.[1] (This term should be distinguished
from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a
single scalar variable.)[2]
1 Introduction
In linear regression, the relationships are modeled using linear predictor functions whose unknown model
parameters are estimated from the data. Such models are
called linear models.[3] Most commonly, the conditional
mean of y given the value of X is assumed to be an
affine function of X; less commonly, the median or some
other quantile of the conditional distribution of y given
X is expressed as a linear function of X. Like all forms
of regression analysis, linear regression focuses on the
conditional probability distribution of y given X, rather
than on the joint probability distribution of y and X,
which is the domain of multivariate analysis.
Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications.[4] This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters, and because the statistical properties of the resulting estimators are easier to determine.

[Figure: Example of simple linear regression, which has one independent variable.]
Linear regression has many practical uses. Most applications fall into one of the following two broad categories:

If the goal is prediction, or forecasting, or error reduction, linear regression can be used to fit a predictive model to an observed data set of y and X values. After developing such a model, if an additional value of X is then given without its accompanying value of y, the fitted model can be used to make a prediction of the value of y.

Given a variable y and a number of variables X1, ..., Xp that may be related to y, linear regression analysis can be applied to quantify the strength of the relationship between y and the Xj, to assess which Xj may have no relationship with y at all, and to identify which subsets of the Xj contain redundant information about y.

[Figure: Example of a cubic polynomial regression, which is a type of linear regression.]
Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways.

Given a data set $\{y_i, x_{i1}, \dots, x_{ip}\}_{i=1}^{n}$ of n statistical units, a linear regression model assumes that the relationship between the dependent variable y_i and the p-vector of regressors x_i is linear, so that

$$ y_i = x_i^{\mathsf T}\beta + \varepsilon_i, \qquad i = 1, \dots, n, $$

or, stacking these n equations in matrix notation,

$$ y = X\beta + \varepsilon, $$

where

$$
y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad
X = \begin{pmatrix} x_1^{\mathsf T} \\ x_2^{\mathsf T} \\ \vdots \\ x_n^{\mathsf T} \end{pmatrix}
  = \begin{pmatrix} x_{11} & \cdots & x_{1p} \\ x_{21} & \cdots & x_{2p} \\ \vdots & \ddots & \vdots \\ x_{n1} & \cdots & x_{np} \end{pmatrix}, \qquad
\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{pmatrix}, \qquad
\varepsilon = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}.
$$
Usually a constant is included as one of the regressors. For example, we can take x_{i1} = 1 for i = 1, ..., n. The corresponding element of β is called the intercept. Many statistical inference procedures for linear models require an intercept to be present, so it is often included even if theoretical considerations suggest that its value should be zero.
Sometimes one of the regressors can be a non-linear function of another regressor or of the data, as in polynomial regression and segmented regression. The model remains linear as long as it is linear in the parameter vector β.
The regressors x_{ij} may be viewed either as random variables, which we simply observe, or they can be considered as predetermined fixed values which we can choose. Both interpretations may be appropriate in different cases, and they generally lead to the same estimation procedures; however, different approaches to asymptotic analysis are used in these two situations.
β is a p-dimensional parameter vector. Its elements are also called effects, or regression coefficients. Statistical estimation and inference in linear regression focuses on β. The elements of this parameter vector are interpreted as the partial derivatives of the dependent variable with respect to the various independent variables.
ε_i is called the error term, disturbance term, or noise. This variable captures all other factors which influence the dependent variable y_i other than the regressors x_i. The relationship between the error term and the regressors, for example whether they are correlated, is a crucial step in formulating a linear regression model, as it will determine the method to use for estimation.
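As an illustration of the matrix form above, the following sketch (in Python with NumPy, which the article does not prescribe; the sample size, number of regressors, and parameter values are invented for the example) builds a design matrix whose first column is the constant regressor x_{i1} = 1 and simulates responses from y = Xβ + ε.

import numpy as np

rng = np.random.default_rng(0)

n, p = 100, 3                            # n statistical units, p regressors (incl. intercept)
beta_true = np.array([2.0, 0.5, -1.0])   # hypothetical parameter vector; first entry is the intercept

# Design matrix X: first column is the constant regressor x_i1 = 1,
# the remaining columns are observed predictor values.
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

# Error term eps captures everything not explained by the regressors.
eps = rng.normal(scale=0.3, size=n)

# The linear model in matrix form: y = X beta + eps.
y = X @ beta_true + eps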
1.1 Assumptions
Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variables and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Some methods are general enough that they can relax multiple assumptions at once, and in other cases this can be achieved by combining different extensions. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model.

The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g. ordinary least squares):
Weak exogeneity. This essentially means that the predictor variables x can be treated as fixed values, rather than random variables. This means, for example, that the predictor variables are assumed to be error-free, that is, not contaminated with measurement errors. Although this assumption is not realistic in many settings, dropping it leads to significantly more difficult errors-in-variables models.
Linearity. This means that the mean of the response variable is a linear combination of the parameters (regression coefficients) and the predictor variables. Note that this assumption is much less restrictive than it may at first seem. Because the predictor variables are treated as fixed values (see above), linearity is really only a restriction on the parameters. The predictor variables themselves can be arbitrarily transformed, and in fact multiple copies of the same underlying predictor variable can be added, each one transformed differently. This trick is used, for example, in polynomial regression, which uses linear regression to fit the response variable as an arbitrary polynomial function (up to a given degree) of a predictor variable. This makes linear regression an extremely powerful inference method. In fact, models such as polynomial regression are often too powerful, in that they tend to overfit the data. As a result, some kind of regularization must typically be used to prevent unreasonable solutions coming out of the estimation process. Common examples are ridge regression and lasso regression. Bayesian linear regression can also be used, which by its nature is more or less immune to the problem of overfitting. (In fact, ridge regression and lasso regression can both be viewed as special cases of Bayesian linear regression, with particular types of prior distributions placed on the regression coefficients.)
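To make the "transformed copies of a predictor" idea concrete, here is a small sketch (Python with NumPy; the data, the degree, and the penalty value are invented for illustration) that fits a cubic polynomial by ordinary least squares on powers of x, and then refits with a ridge (L2) penalty as one simple form of regularization.

import numpy as np

rng = np.random.default_rng(1)

# Invented one-dimensional data: y is a noisy cubic function of x.
x = np.linspace(-2, 2, 50)
y = 1.0 - 0.5 * x + 0.3 * x**3 + rng.normal(scale=0.2, size=x.size)

# Polynomial regression is still *linear* regression: the regressors are
# transformed copies of x (here 1, x, x^2, x^3) and the model is linear in beta.
degree = 3
X = np.vander(x, N=degree + 1, increasing=True)   # columns: x^0, x^1, ..., x^degree

# Ordinary least squares fit.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge regression adds an L2 penalty lam * ||beta||^2, shrinking the
# coefficients and guarding against overfitting with high-degree polynomials.
lam = 0.1
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)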
Constant variance (a.k.a. homoscedasticity). This means that different response variables have the same variance in their errors, regardless of the values of the predictor variables. In practice this assumption is invalid (i.e. the errors are heteroscedastic) if the response variables can vary over a wide scale. In order to check for heterogeneous error variance, or when a pattern of residuals violates model assumptions of homoscedasticity (error is equally variable around the 'best-fitting line' for all points of x), it is prudent to look for a "fanning effect" between residual error and predicted values. This is to say there will be a systematic change in the absolute or squared residuals when plotted against the predicted values. Error will not be evenly distributed across the regression line. Heteroscedasticity will result in the averaging over of distinguishable variances around the points to get a single variance that inaccurately represents all the variances of the line. In effect, residuals appear clustered and spread apart on their predicted plots for larger and smaller values for points along the linear regression line, and the mean squared error for the model will be wrong. Typically, for example, a response variable whose mean is large will have a greater variance than one whose mean is small. For example, a given person whose income is predicted to be $100,000 may easily have an actual income of $80,000 or $120,000 (a standard deviation of around $20,000), while another person with a predicted income of $10,000 is unlikely to have the same $20,000 standard deviation, which would imply their actual income would vary anywhere between -$10,000 and $30,000. (In fact, as this shows, in many cases, often the same cases where the assumption of normally distributed errors fails, the variance or standard deviation should be predicted to be proportional to the mean, rather than constant.) Simple linear regression estimation methods give less precise parameter estimates and misleading inferential quantities such as standard errors when substantial heteroscedasticity is present. However, various estimation techniques (e.g. weighted least squares and heteroscedasticity-consistent standard errors) can handle heteroscedasticity in a quite general way. Bayesian linear regression techniques can also be used when the variance is assumed to be a function of the mean. It is also possible in some cases to fix the problem by applying a transformation to the response variable (e.g. fitting the logarithm of the response variable using a linear regression model, which implies that the response variable has a log-normal distribution rather than a normal distribution).
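The sketch below (Python with NumPy; the assumption that the error standard deviation grows with the mean response is made only for this example) illustrates how weighted least squares handles known heteroscedasticity by downweighting observations with larger error variance.

import numpy as np

rng = np.random.default_rng(2)

n = 200
X = np.column_stack([np.ones(n), rng.uniform(1, 10, size=n)])
beta_true = np.array([1.0, 2.0])
mean = X @ beta_true

# Heteroscedastic errors: the standard deviation grows with the mean response.
sigma = 0.2 * mean
y = mean + rng.normal(scale=sigma)

# Weighted least squares: weight each observation by 1 / sigma_i^2,
# i.e. solve (X^T W X) beta = X^T W y with W = diag(1 / sigma_i^2).
W = np.diag(1.0 / sigma**2)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Ordinary least squares for comparison (still unbiased here, but less precise).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)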
Independence of errors. This assumes that the errors of the response variables are uncorrelated with each other. (Actual statistical independence is a stronger condition than mere lack of correlation and is often not needed, although it can be exploited if it is known to hold.) Some methods (e.g. generalized least squares) are capable of handling correlated errors, although they typically require significantly more data unless some sort of regularization is used to bias the model towards assuming uncorrelated errors. Bayesian linear regression is a general way of handling this issue.
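The following sketch (Python with NumPy; the AR(1)-style error covariance is an invented example, not something the article prescribes) shows the generalized least squares estimator β̂ = (X^T Ω^{-1} X)^{-1} X^T Ω^{-1} y for a known error covariance matrix Ω.

import numpy as np

rng = np.random.default_rng(3)

n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.5])

# Assume (for illustration) AR(1)-correlated errors with known covariance Omega.
rho = 0.8
idx = np.arange(n)
Omega = rho ** np.abs(idx[:, None] - idx[None, :])

# Draw one sample of correlated errors and form the responses.
eps = rng.multivariate_normal(np.zeros(n), Omega)
y = X @ beta_true + eps

# Generalized least squares: beta_hat = (X^T Omega^{-1} X)^{-1} X^T Omega^{-1} y.
Omega_inv = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)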
Lack of multicollinearity in the predictors. For standard least squares estimation methods, the design matrix X must have full column rank p; otherwise, we have a condition known as multicollinearity in the predictor variables. This can be triggered by having two or more perfectly correlated predictor variables (e.g. if the same predictor variable is mistakenly given twice, either without transforming one of the copies or by transforming one of the copies linearly). It can also happen if there is too little data available compared to the number of parameters to be estimated (e.g. fewer data points than regression coefficients). In the case of multicollinearity, the parameter vector β will be non-identifiable: it has no unique solution. At most we will be able to identify some of the parameters, i.e. narrow down its value to some linear subspace of R^p. See partial least squares regression. Methods for fitting linear models with multicollinearity have been developed;[5][6][7][8] some require additional assumptions such as "effect sparsity", that a large fraction of the effects are exactly zero. Note that the more computationally expensive iterated algorithms for parameter estimation, such as those used in generalized linear models, do not suffer from this problem; in fact it is quite normal when handling categorically valued predictors to introduce a separate indicator variable predictor for each possible category, which inevitably introduces multicollinearity.
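A quick way to detect multicollinearity in this sense is to check the column rank (or the condition number) of the design matrix, as in the sketch below (Python with NumPy; the duplicated predictor is a contrived example).

import numpy as np

rng = np.random.default_rng(4)

n = 50
x1 = rng.normal(size=n)
x2 = 2.0 * x1            # a linearly transformed copy of x1: perfect collinearity
X = np.column_stack([np.ones(n), x1, x2])

rank = np.linalg.matrix_rank(X)   # 2, although X has p = 3 columns
cond = np.linalg.cond(X)          # huge (effectively infinite) condition number

print(f"columns: {X.shape[1]}, rank: {rank}, condition number: {cond:.2e}")
# With rank < p, (X^T X) is singular and the OLS estimate of beta is not unique.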
Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods:

The statistical relationship between the error terms and the regressors plays an important role in determining whether an estimation procedure has desirable sampling properties such as being unbiased and consistent.

The arrangement, or probability distribution, of the predictor variables x has a major influence on the precision of estimates of β. Sampling and design of experiments are highly developed subfields of statistics that provide guidance for collecting data in such a way as to achieve a precise estimate of β.
[Figure: The data sets in Anscombe's quartet have the same linear regression line but are themselves very different.]
… predictor variable. This is the only interpretation of "held fixed" that can be used in an observational study.

The notion of a "unique effect" is appealing when studying a complex system where multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.[9] A commonality analysis may be helpful in disentangling the shared and unique impacts of correlated independent variables.[10]

2.3 Heteroscedastic models

Various models have been created that allow for heteroscedasticity, i.e. the errors for different response variables may have different variances. For example, weighted least squares is a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors. (See also Weighted linear least squares, and generalized least squares.) Heteroscedasticity-consistent standard errors is an improved method for use with uncorrelated but potentially heteroscedastic errors.
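A sketch of heteroscedasticity-consistent (White, HC0) standard errors for the OLS estimator (Python with NumPy; the data are simulated and the HC0 variant is chosen only because it is the simplest):

import numpy as np

rng = np.random.default_rng(5)

n = 500
X = np.column_stack([np.ones(n), rng.uniform(0, 5, size=n)])
beta_true = np.array([1.0, 0.5])
# Heteroscedastic errors whose spread grows with the predictor value.
eps = rng.normal(scale=0.1 + 0.3 * X[:, 1], size=n)
y = X @ beta_true + eps

# OLS point estimates and residuals.
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# Classical (homoscedastic) standard errors.
sigma2 = resid @ resid / (n - X.shape[1])
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# Heteroscedasticity-consistent (HC0) standard errors:
# sandwich estimator (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}.
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc0 = XtX_inv @ meat @ XtX_inv
se_hc0 = np.sqrt(np.diag(cov_hc0))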
2.4 Generalized linear models
2 Extensions

Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed.
2.1 Simple and multiple linear regression
The very simplest case of a single scalar predictor variable x and a single scalar response variable y is known as simple linear regression. The extension to multiple and/or vector-valued predictor variables (denoted with a capital X) is known as multiple linear regression, also known as multivariable linear regression. Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression. The difference between multivariate linear regression and multivariable linear regression should be emphasized as it causes much confusion and misunderstanding in the literature.
2.5 Hierarchical linear models
Hierarchical linear models (or multilevel regression) organize the data into a hierarchy of regressions, for example where A is regressed on B, and B is regressed on C. It is often used where the variables of interest have a natural hierarchical structure, such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping, such as a school district. The response variable might be a measure of student achievement such as a test score, and different covariates would be collected at the classroom, school, and school district levels.

[Figure: Comparison of the Theil–Sen estimator (black) and simple linear regression (blue) for a set of points with outliers.]
2.6 Errors-in-variables
Errors-in-variables models (or measurement error models) extend the traditional linear regression model to allow the predictor variables X to be observed with error. This error causes standard estimators of β to become biased. Generally, the form of bias is an attenuation, meaning that the effects are biased toward zero.
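A small simulation of this attenuation effect (Python with NumPy; the measurement-error standard deviation is an arbitrary choice made for the example): regressing on a noisily observed copy of the true predictor shrinks the estimated slope toward zero.

import numpy as np

rng = np.random.default_rng(6)

n = 10_000
x_true = rng.normal(size=n)
y = 2.0 * x_true + rng.normal(scale=0.5, size=n)   # true slope is 2

# The predictor is observed with measurement error.
x_obs = x_true + rng.normal(scale=1.0, size=n)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("slope using true predictor: ", slope(x_true, y))   # close to 2.0
print("slope using noisy predictor:", slope(x_obs, y))    # attenuated, about 1.0 here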
2.7 Others
In Dempster–Shafer theory, or a linear belief function in particular, a linear regression model may be represented as a partially swept matrix, which can be combined with similar matrices representing observations and other assumed normal distributions and state equations. The combination of swept or unswept matrices provides an alternative method for estimating linear regression models.
3 Estimation methods
In ordinary least squares (OLS), the parameter estimates have the closed form

$$ \hat\beta = (X^{\mathsf T} X)^{-1} X^{\mathsf T} y = \Bigl( \sum_i x_i x_i^{\mathsf T} \Bigr)^{-1} \Bigl( \sum_i x_i y_i \Bigr). $$
The estimator is unbiased and consistent if the errors have finite variance and are uncorrelated with the regressors:[12]

$$ \mathrm{E}[\, x_i \varepsilon_i \,] = 0. $$
It is also efficient under the assumption that the errors have finite variance and are homoscedastic, meaning that E[ε_i^2 | x_i] does not depend on i. The condition that the errors are uncorrelated with the regressors will generally be satisfied in an experiment, but in the case of observational data, it is difficult to exclude the possibility of an omitted covariate z that is related to both the observed covariates and the response variable. The existence of such a covariate will generally lead to a correlation between the regressors and the response variable, and hence to an inconsistent estimator of β. The condition of homoscedasticity can fail with either experimental or observational data.
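Returning to the closed-form OLS expression above, a direct transcription in Python with NumPy (simulated data; in practice a QR-based solver such as np.linalg.lstsq is usually preferred for numerical stability, as the numerical least-squares references in the further reading discuss):

import numpy as np

rng = np.random.default_rng(7)

n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Closed-form OLS estimate: beta_hat = (X^T X)^{-1} X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically more stable equivalent.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)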
… only one iteration is sufficient to achieve an efficient estimate of β.[15][16]
Instrumental variables regression (IV) can be performed when the regressors are correlated with the errors. In this case, we need the existence of some auxiliary instrumental variables z_i such that E[z_i ε_i] = 0. If Z is the matrix of instruments, then the estimator can be given in closed form as

$$ \hat\beta = \bigl(X^{\mathsf T} Z (Z^{\mathsf T} Z)^{-1} Z^{\mathsf T} X\bigr)^{-1} X^{\mathsf T} Z (Z^{\mathsf T} Z)^{-1} Z^{\mathsf T} y. $$
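A sketch of this closed-form IV estimator (Python with NumPy; the instrument z, the unobserved confounder, and the coefficient values are simulated purely for illustration):

import numpy as np

rng = np.random.default_rng(8)

n = 5_000
z = rng.normal(size=n)                      # instrument: correlated with x, not with the error
u = rng.normal(size=n)                      # unobserved confounder
x = 0.8 * z + u + rng.normal(scale=0.5, size=n)
eps = u + rng.normal(scale=0.5, size=n)     # error correlated with x through u
y = 1.0 + 2.0 * x + eps                     # true slope is 2

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# IV estimator: beta_hat = (X'Z (Z'Z)^{-1} Z'X)^{-1} X'Z (Z'Z)^{-1} Z'y.
ZtZ_inv = np.linalg.inv(Z.T @ Z)
A = X.T @ Z @ ZtZ_inv
beta_iv = np.linalg.solve(A @ Z.T @ X, A @ Z.T @ y)    # close to [1, 2]

# Plain OLS is inconsistent here because E[x * eps] != 0.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)       # slope biased away from 2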
… the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.[23]
Least-angle regression[6] is an estimation procedure for linear regression models that was developed to handle high-dimensional covariate vectors, potentially with more covariates than observations.

The Theil–Sen estimator is a simple robust estimation technique that chooses the slope of the fit line to be the median of the slopes of the lines through pairs of sample points. It has similar statistical efficiency properties to simple linear regression but is much less sensitive to outliers.[25]
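A minimal sketch of the Theil–Sen idea (Python with NumPy; the brute-force loop over all pairs is O(n^2) and fine only for small samples, and the intercept rule used here, the median of y_i − m·x_i, is one common convention rather than the only one):

import numpy as np
from itertools import combinations

rng = np.random.default_rng(9)

# Simulated line with a few gross outliers.
x = rng.uniform(0, 10, size=40)
y = 3.0 + 1.5 * x + rng.normal(scale=0.3, size=x.size)
y[:4] += 25.0                                   # outliers

# Theil-Sen slope: median of the slopes through all pairs of sample points.
slopes = [(y[j] - y[i]) / (x[j] - x[i])
          for i, j in combinations(range(x.size), 2)
          if x[j] != x[i]]
m = np.median(slopes)
b = np.median(y - m * x)                        # a common choice of intercept

print(f"Theil-Sen fit: y ≈ {b:.2f} + {m:.2f} x")  # close to 3 + 1.5x despite the outliers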
Other robust estimation techniques, including the trimmed mean approach, and L-, M-, S-, and R-estimators, have been introduced.
4.2 Epidemiology

4.4 Economics

4.5 Environmental science

5 See also
Analysis of variance
Censored regression model
Cross-sectional regression
Curve fitting
Empirical Bayes methods
Errors and residuals
Lack-of-fit sum of squares
Line fitting
Linear classifier
Linear equation
Logistic regression
M-estimator
6 Notes
[1] David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 26. "A simple regression equation has on the right hand side an intercept and an explanatory variable with a slope coefficient. A multiple regression equation has two or more explanatory variables on the right hand side, each with its own slope coefficient."
[2] Rencher, Alvin C.; Christensen, William F. (2012), "Chapter 10, Multivariate regression, Section 10.1, Introduction", Methods of Multivariate Analysis, Wiley Series in Probability and Statistics, 709 (3rd ed.), John Wiley & Sons, p. 19, ISBN 9781118391679.
[3] Hilary L. Seal (1967). "The historical development of the Gauss linear model". Biometrika. 54 (1/2): 1–24. doi:10.1093/biomet/54.1-2.1.
[4] Yan, Xin (2009), Linear Regression Analysis: Theory and Computing, World Scientific, pp. 1–2, ISBN 9789812834119, "Regression analysis ... is probably one of the oldest topics in mathematical statistics dating back to about two hundred years ago. The earliest form of the linear regression was the least squares method, which was published by Legendre in 1805, and by Gauss in 1809 ... Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the sun."
[5] Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the Lasso". Journal of the Royal Statistical Society, Series B. 58 (1): 267–288. JSTOR 2346178.
[6] Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert (2004). "Least Angle Regression". The Annals of Statistics. 32 (2): 407–451. doi:10.1214/009053604000000067. JSTOR 3448465.
[7] Hawkins, Douglas M. (1973). "On the Investigation of Alternative Regressions by Principal Component Analysis". Journal of the Royal Statistical Society, Series C. 22 (3): 275–286. JSTOR 2346776.
[10] Warne, R. T. (2011). "Beyond multiple regression: Using commonality analysis to better understand R2 results". Gifted Child Quarterly, 55, 313–318. doi:10.1177/0016986211422217
[11] Brillinger, David R. (1977). "The Identification of a Particular Nonlinear Time Series System". Biometrika. 64 (3): 509–515. doi:10.1093/biomet/64.3.509. JSTOR 2345326.
[12] Lai, T.L.; Robbins, H.; Wei, C.Z. (1978). "Strong consistency of least squares estimates in multiple regression". PNAS. 75 (7): 3034–3036. Bibcode:1978PNAS...75.3034L. doi:10.1073/pnas.75.7.3034. JSTOR 68164.
[13] Tofallis, C (2009). "Least Squares Percentage Regression". Journal of Modern Applied Statistical Methods. 7: 526–534. doi:10.2139/ssrn.1406472.
[14] del Pino, Guido (1989). "The Unifying Role of Iterative Generalized Least Squares in Statistical Algorithms". Statistical Science. 4 (4): 394–403. doi:10.1214/ss/1177012408. JSTOR 2245853.
[15] Carroll, Raymond J. (1982). "Adapting for Heteroscedasticity in Linear Models". The Annals of Statistics. 10 (4): 1224–1233. doi:10.1214/aos/1176345987. JSTOR 2240725.
[16] Cohen, Michael; Dalal, Siddhartha R.; Tukey, John W. (1993). "Robust, Smoothly Heterogeneous Variance Regression". Journal of the Royal Statistical Society, Series C. 42 (2): 339–353. JSTOR 2986237.
[17] Nievergelt, Yves (1994). "Total Least Squares: State-of-the-Art Regression in Numerical Analysis". SIAM Review. 36 (2): 258–264. doi:10.1137/1036055. JSTOR 2132463.
[18] Lange, Kenneth L.; Little, Roderick J. A.; Taylor, Jeremy M. G. (1989). "Robust Statistical Modeling Using the t Distribution". Journal of the American Statistical Association. 84 (408): 881–896. doi:10.2307/2290063. JSTOR 2290063.
[19] Swindel, Benee F. (1981). "Geometry of Ridge Regression Illustrated". The American Statistician. 35 (1): 12–15. doi:10.2307/2683577. JSTOR 2683577.
[20] Draper, Norman R.; van Nostrand, R. Craig (1979). "Ridge Regression and James-Stein Estimation: Review and Comments". Technometrics. 21 (4): 451–466. doi:10.2307/1268284. JSTOR 1268284.
[21] Hoerl, Arthur E.; Kennard, Robert W.; Hoerl, Roger W. (1985). "Practical Use of Ridge Regression: A Challenge Met". Journal of the Royal Statistical Society, Series C. 34 (2): 114–120. JSTOR 2347363.
[22] Narula, Subhash C.; Wellington, John F. (1982). "The Minimum Sum of Absolute Errors Regression: A State of the Art Survey". International Statistical Review. 50 (3): 317–326. doi:10.2307/1402501. JSTOR 1402501.
[23] Stone, C. J. (1975). "Adaptive maximum likelihood estimators of a location parameter". The Annals of Statistics. 3 (2): 267–284. doi:10.1214/aos/1176343056. JSTOR 2958945.
[24] Goldstein, H. (1986). "Multilevel Mixed Linear Model Analysis Using Iterative Generalized Least Squares". Biometrika. 73 (1): 43–56. doi:10.1093/biomet/73.1.43. JSTOR 2336270.
[25] Theil, H. (1950). "A rank-invariant method of linear and polynomial regression analysis. I, II, III". Nederl. Akad. Wetensch., Proc. 53: 386–392, 521–525, 1397–1412. MR 0036489; Sen, Pranab Kumar (1968). "Estimates of the regression coefficient based on Kendall's tau". Journal of the American Statistical Association. 63 (324): 1379–1389. doi:10.2307/2285891. JSTOR 2285891. MR 0258201.
[26] Wilkinson, J.H. (1963). "Chapter 3: Matrix Computations". Rounding Errors in Algebraic Processes. London: Her Majesty's Stationery Office (National Physical Laboratory, Notes in Applied Science, No. 32)
[27] Deaton, Angus (1992). Understanding Consumption. Oxford University Press. ISBN 0-19-828824-7.
[28] Krugman, Paul R.; Obstfeld, M.; Melitz, Marc J. (2012).
International Economics: Theory and Policy (9th global
ed.). Harlow: Pearson. ISBN 9780273754091.
[29] Laidler, David E. W. (1993). The Demand for Money:
Theories, Evidence, and Problems (4th ed.). New York:
Harper Collins. ISBN 0065010981.
[30] Ehrenberg; Smith (2008). Modern Labor Economics
(10th international ed.). London: Addison-Wesley. ISBN
9780321538963.
[31] EEMP webpage
7 References
Cohen, J., Cohen P., West, S.G., & Aiken, L.S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates
Charles Darwin. The Variation of Animals and Plants under Domestication. (1868) (Chapter XIII describes what was known about reversion in Galton's time. Darwin uses the term "reversion".)
Draper, N.R.; Smith, H. (1998). Applied Regression Analysis (3rd ed.). John Wiley. ISBN 0-471-17082-8.
Francis Galton. "Regression Towards Mediocrity in Hereditary Stature", Journal of the Anthropological Institute, 15:246–263 (1886). (Facsimile at: )
Robert S. Pindyck and Daniel L. Rubinfeld (1998, 4th ed.). Econometric Models and Economic Forecasts, ch. 1 (Intro, incl. appendices on operators & derivation of parameter est.) & Appendix 4.3 (mult. regression in matrix form).
8 Further reading
Barlow, Jesse L. (1993). "Chapter 9: Numerical aspects of Solving Linear Least Squares Problems". In Rao, C.R. Computational Statistics. Handbook of Statistics. 9. North-Holland. ISBN 0-444-88096-8

Björck, Åke (1996). Numerical methods for least squares problems. Philadelphia: SIAM. ISBN 0-89871-360-9.
Goodall, Colin R. (1993). "Chapter 13: Computation using the QR decomposition". In Rao, C.R. Computational Statistics. Handbook of Statistics. 9. North-Holland. ISBN 0-444-88096-8
Pedhazur, Elazar J (1982). Multiple regression
in behavioral research: Explanation and prediction
(2nd ed.). New York: Holt, Rinehart and Winston.
ISBN 0-03-041760-0.
National Physical Laboratory (1961). "Chapter 1: Linear Equations and Matrices: Direct Methods". Modern Computing Methods. Notes on Applied Science. 16 (2nd ed.). Her Majesty's Stationery Office

National Physical Laboratory (1961). "Chapter 2: Linear Equations and Matrices: Direct Methods on Automatic Computers". Modern Computing Methods. Notes on Applied Science. 16 (2nd ed.). Her Majesty's Stationery Office