
Long Range Planning 45 (2012) 320–340 http://www.elsevier.com/locate/lrp

The Use of Partial Least Squares Structural Equation Modeling in Strategic Management Research: A Review of Past Practices and Recommendations for Future Applications

Joseph F. Hair, Marko Sarstedt, Torsten M. Pieper and Christian M. Ringle

Every discipline needs to frequently review the use of multivariate analysis methods to
ensure rigorous research and publications. Even though partial least squares structural
equation modeling (PLS-SEM) is frequently used for studies in strategic management,
this kind of assessment has only been conducted by Hulland (1999) for four studies
and a limited number of criteria. This article analyzes the use of PLS-SEM in thirty-seven
studies that have been published in eight leading management journals for dozens of
relevant criteria, including reasons for using PLS-SEM, data characteristics, model
characteristics, model evaluation and reporting. Our results reveal several problematic
aspects of PLS-SEM use in strategic management research, but also substantiate some
improvement over time. We find that researchers still often do not fully make use of
the method’s capabilities, sometimes even misapplying it. Our review of PLS-SEM
applications and recommendations on how to improve the use of the method are
important to disseminate rigorous research and publication practices in the strategic
management discipline.
© 2012 Elsevier Ltd. All rights reserved.

0024-6301/$ - see front matter © 2012 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.lrp.2012.09.008
Introduction
Research into the strategic management discipline recognized relatively early the potential of struc-
tural equation modeling (SEM) to empirically test theories and conceptual models. Indeed, by the
late 1980s (e.g., Birkinshaw et al., 1995; Cool et al., 1989; Fornell et al., 1990; Govindarajan, 1989;
Johansson and Yip, 1994), the strategic management discipline acknowledged the different and, in
many research situations, advantageous properties of variance-based partial least squares SEM
(PLS-SEM; Lohmöller, 1989; Wold, 1982) in comparison with the alternative covariance-based
SEM (CB-SEM; Jöreskog, 1978; Jöreskog, 1982) method to estimate structural equation models.
In short, CB-SEM and PLS-SEM are different but complementary statistical methods for SEM,
whereby the advantages of the one method are the disadvantages of the other, and vice versa
(Jöreskog and Wold, 1982).
PLS-SEM is particularly appealing when the research objective focuses on prediction and explain-
ing the variance of key target constructs (e.g., strategic success of firms) by different explanatory
constructs (e.g., sources of competitive advantage); the sample size is relatively small and/or the
available data is non-normal; and, when CB-SEM provides no, or at best questionable, results
(Hair et al., 2011; Hair et al., 2012; Henseler et al., 2009; Reinartz et al., 2009). Moreover, forma-
tively measured constructs are particularly useful for explanatory constructs (e.g., sources of com-
petitive advantage) of key target constructs, such as success (i.e., success factor studies; Albers,
2010). PLS-SEM is the preferred alternative over CB-SEM in these situations, since it enables
researchers to create and estimate such models without imposing additional limiting constraints.
PLS-SEM applications in strategic management often address topics such as long-term survival
of firms (Agarwal et al., 2002; Cool et al., 1989); performance of global firms (Birkinshaw et al.,
1998; Birkinshaw et al., 1995; Devinney et al., 2000; Johansson and Yip, 1994; Robins et al.,
2002); knowledge sourcing and collaborations (Gray and Meister, 2004; Im and Rai 2008;
Jarvenpaa and Majchrzak, 2008; Purvis et al., 2001); and, cooperation of firms (Doz et al., 2000;
Fornell et al., 1990; Sarkar et al., 2001).
Despite recognizing the SEM method and, more specifically, the advantageous features of
PLS-SEM, the number of existing studies, as we show below, is considerably smaller than in
other disciplines such as marketing (Hair et al., 2012) and management information systems
(MIS) (Ringle et al., 2012). Researchers in management and especially strategic management
seem to predominantly rely on first-generation multivariate analysis techniques (e.g., factor analy-
sis, multiple linear regression, etc.) in their empirical studies, and thus may miss opportunities that
researchers in other disciplines frequently exploit by using the second-generation SEM technique.
Potential reasons may be the restrictive assumptions of the CB-SEM method (e.g., sample size re-
quirements, data distribution, model specification) and the improper use of PLS-SEM in a few early
applications (Hulland, 1999). More recent articles, however, conclude that PLS can indeed be a
“silver bullet” in many research situations, if correctly applied (Hair et al., 2011).
As with other statistical methods, users can only benefit from the unique properties of PLS-SEM
if they understand the principles underlying the method, apply it properly, and report the results
correctly. Due to the complexities involved in using PLS-SEM, systematic assessments on how the
technique has been applied in prior research can provide important guidance and, if necessary,
opportunities for course correction in future applications. But despite the importance of this
research question, corresponding assessments are very limited. Hulland (1999) provided an assess-
ment of four studies in the strategic management area, showing the PLS-SEM technique had been
applied with considerable variability in terms of authors appropriately handling conceptual and
methodological issues.
Many disciplines frequently review the methods used to disseminate rigorous research and pub-
lication practices. While reviews of CB-SEM usage have been carried out across many disciplines in
business research (e.g., Babin, Hair and Boles, 2008; Baumgartner and Homburg, 1996; Brannick,
1995; Garver and Mentzer, 1999; Shah and Goldstein, 2006; Shook et al., 2004; Steenkamp and van
Trijp, 1991), recent reviews of PLS-SEM usage cover only accounting (Lee et al., 2011),

Long Range Planning, vol 45 2012 321


management information systems (Ringle et al., 2012), and marketing (Hair et al., 2012). Against
this background, an update and extension of Hulland’s (1999) case study-based assessment of PLS-
SEM specific to the strategic management discipline seems timely and warranted.
Like all statistical methods, “PLS-SEM requires several choices that, if not made correctly,
can lead to improper findings, interpretations, and conclusions.” (Hair et al., 2012, p. 415).
The objective of this article is to provide recommendations for the use of PLS-SEM in
strategic management research. For explanations of the PLS-SEM method itself, the reader
is referred to recent articles (Chin, 2010; Hair et al., 2011; Henseler et al., 2012; Henseler
et al., 2009) and a forthcoming text (Hair et al., 2013). Toward this aim, we review thirty-
seven empirical applications of PLS-SEM in eight leading journals publishing strategic
management research, and analyze these applications according to several key dimensions, in-
cluding reasons for using PLS-SEM, data characteristics, model characteristics, model evalu-
ation and reporting. We contrast the findings in strategic management research with
standards applied in other disciplines. Where possible, we indicate best practices as guidelines
for future applications of PLS-SEM in strategic management and suggest avenues for further
research involving the technique.
Our results reveal several problematic aspects of PLS-SEM use in strategic management research,
but also substantiate some improvement over time. Researchers still often do not fully utilize avail-
able analytical potential, and sometimes incorrectly apply methods in top-tier strategic manage-
ment journals. For this reason, our review of PLS-SEM applications and guidelines on how to
properly use the method are important to disseminate rigorous research and publication practices
in the strategic management discipline.

Review of PLS-SEM research


Our review includes studies published in the Academy of Management Journal, Administrative
Science Quarterly, Journal of Management, Journal of Management Studies, Long Range Planning,
Management Science, Organization Science and Strategic Management Journal, which were selected
as representative of the leading journals in management (e.g., Furrer et al., 2008; Raisch and
Birkinshaw, 2008). These eight journals were searched for the 30-year period from 1981 through
2010, to identify all empirical applications of PLS-SEM. We accessed the ABI/INFORM Com-
plete, EBSCO Business Source Complete, ISI WEB OF KNOWLEDGE and JSTOR databases as
well as online versions of the journals, using the keywords “partial least squares” and “PLS”
to search the full text of articles previously published. To identify studies eligible for inclusion
in the review, the list was then examined independently by two professors proficient in the tech-
nique. In this process, conceptual papers on methodological aspects (e.g., Echambadi et al., 2006)
and empirical studies that mentioned having used PLS-SEM for validation purposes without re-
porting concrete results (e.g., Tiwana, 2008; Zott and Amit, 2007) were removed.1 Using these
criteria, we identified thirty-seven studies (Table 1) containing 112 PLS-SEM model estimations.
The number of model estimations was larger than the number of studies reviewed since several
articles estimated multiple models using different set-ups and/or data originating from different
sources, countries, or years. The Strategic Management Journal published the largest number of
PLS-SEM studies of the reviewed journals. In contrast, Long Range Planning did not include a sin-
gle PLS-SEM study in the period of time reviewed.
Figure 1 shows the (cumulative) number of studies between 1985 (the first year an application
was found) and 2010 (the bars indicate the number of studies per year and the line indicates the
cumulative number of studies). It is apparent that the use of PLS-SEM has significantly increased
over time. Regressing the number of studies on the linear effects of time yields significant model
(F = 14.25; p < 0.01) and time effects (t = 3.76; p < 0.01). A quadratic effect of time, however,
is not significant (t = 0.35; p > 0.10). Therefore, the use of PLS-SEM in strategic management
1. The coding agreement on the relevant articles was 94 percent, which compares well with the study by Hair et al. (2012).

322 The Use of Partial Least Squares Structural Equation Modeling in Strategic Management Research
Table 1. PLS-SEM studies in the top management journals

Academy of Management Journal: Avolio et al., 1999; Cording et al., 2008; Groth et al., 2009; Mezner and Nigh, 1995; Shamir et al., 1998
Administrative Science Quarterly: House et al., 1991; Howell and Higgins, 1990
Journal of Management: Ashill and Jobber, 2010; Shea and Howell, 2000
Journal of Management Studies: Barthelemy and Quelin, 2006; Lee and Tsang, 2001; Van Riel et al., 2009
Management Science: Fichman and Kemerer, 1997; Fornell et al., 1985; Fornell et al., 1990; Graham et al., 1994; Gray and Meister, 2004; Im and Rai, 2008; Mitchell and Nault, 2007; Venkatesh and Agarwal, 2006; Xu et al., 2010
Organization Science: Jarvenpaa and Majchrzak, 2008; Milberg et al., 2000; Nambisan and Baron, 2010; Purvis et al., 2001; Staples et al., 1999
Strategic Management Journal: Birkinshaw et al., 1995; Birkinshaw et al., 1998; Cool et al., 1989; Delios and Beamish, 1999; Doz et al., 2000; Gruber et al., 2010; Johansson and Yip, 1994; Olk and Young, 1997; Robins et al., 2002; Sarkar et al., 2001; Tsang, 2002
research has grown linearly as a function of time, which is typical for the early introduction stage of
a new research technique’s diffusion. In the field of marketing, in contrast, the use of PLS-SEM has
accelerated substantially over time (Hair et al., 2012).
In light of these results (i.e., a linear growth of the number of studies per year), we applied the
Bass diffusion model (Bass, 1969) to investigate how the application of the PLS-SEM method in
strategic management has developed over time. Fitting the Bass model to the data yielded a co-
efficient of innovation (p) of 0.11 and a coefficient of imitation (q) of 0.23. The value of p + q
falls within boundaries commonly encountered in prior research, and the ratio of p/q suggests
that the use of PLS-SEM can be regarded as slightly contagious in terms of diffusion
(Mahajan et al., 1995).
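For readers who want to trace the diffusion figures, the reported coefficients can be plugged into the closed-form Bass curve. The sketch below is our own illustration (not the authors' estimation code) of the cumulative adoption fraction F(t) = (1 - e^-(p+q)t) / (1 + (q/p)e^-(p+q)t):

```python
import math

def bass_cumulative(t, p=0.11, q=0.23):
    """Cumulative adoption fraction of the Bass (1969) diffusion model,
    evaluated with the innovation (p) and imitation (q) coefficients
    reported in the review; t is years since the first application."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

# The curve starts at zero and saturates toward full diffusion.
print(round(bass_cumulative(0), 4))   # -> 0.0
print(bass_cumulative(25) > 0.95)
```

With p < q, imitation dominates innovation, which is what "slightly contagious" diffusion means in the Mahajan et al. (1995) sense.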

Critical issues in PLS-SEM research


The thirty-seven articles included in our review were analyzed according to five key criteria
previously used to evaluate critical issues and common misapplications in research involving
PLS-SEM (Hair et al., 2012). The criteria used to analyze the studies and model estimations
were: 1) reasons for using PLS-SEM, 2) data characteristics, 3) model characteristics, 4) model
evaluation, and 5) reporting. We also distinguish between two time periods to assess whether
the use of PLS-SEM has changed between these periods. Using Chin’s (1998), Chin and
Newsted’s (1999) and Hulland’s (1999) PLS-SEM articles published in the late 1990s as seminal
milestones, we subsequently differentiate between studies published before 2000 (sixteen stud-
ies with sixty-one models) and studies published in 2000 and beyond (twenty-one studies with
fifty-one models).2

Reasons for using PLS-SEM


Given that PLS-SEM has only recently attracted increased interest in business research disciplines, its
use requires a more detailed explanation of the rationale leading to the selection of this method. Of the
thirty-seven studies, a total of thirty-two (86.5%) provided explicit reasons for using PLS-SEM.
Moreover, the proportion of studies providing explicit reasons for using PLS-SEM remained rela-
tively consistent before 2000 (15 studies) and from 2000 onward (17 studies).
2. In the following, we consistently use the term “studies” when referring to the thirty-seven journal articles and the term “models” when referring to the 112 PLS-SEM applications in these articles.



Figure 1. Applications of PLS-SEM in Management Journal Publications over Time

The four most frequently cited reasons for using PLS-SEM are, in order of importance: non-normal
data (22 studies, 68.8%), small sample size (17 studies, 53.1%), formative measures (10
studies, 31.3%), and focus on prediction (10 studies, 31.3%).3 A comparison of studies published
before 2000 with those published in 2000 and onward shows a fairly consistent pattern. Apart from
gradual shifts in the order of importance, the prevalence of the reasons for using PLS-SEM remains
relatively consistent over time, with non-normal data, formative measures, small sample size, and
focus on prediction being the most prevalent reasons in recent years. This observation is consistent
with patterns observed in marketing research (Hair et al., 2012).
Wold (1985, p. 589) originally designed PLS-SEM for research situations that are “simulta-
neously data-rich and theory-primitive”. He envisioned a discovery-oriented process: “a dialogue
between the investigator and the computer” (Wold, 1985, p. 590). Rather than commit to
a specific model a priori and frame the statistical analysis as a hypothesis test, Wold imagined
a researcher estimating numerous models in the course of learning something about the data
and about the phenomena underlying the data (Rigdon, in press). However, only seven studies
(21.9%) indicate theory development as a rationale for using PLS-SEM and one study (3.1%)
mentions exploratory research purposes. It was surprising, therefore, to find that practically all
studies argue their case in confirmatory terms, which likely reflects an academic bias in favor
of presenting findings in a confirmatory context (Greenwald et al., 1986). While this practice un-
derlines an apparent misconception in the appropriate use of PLS-SEM shown also in other fields
such as marketing (Hair et al., 2012), one should recognize that CB-SEM, on the other hand, is
rarely used in a truly confirmatory sense. In fact, research reality is that models estimated with
CB-SEM rarely fit initially, and modifying models in an effort to yield what reviewers and editors
might perceive as acceptable model fit is typical. Choices in statistical methods often in-
volve tradeoffs, and Wold (1985) recognized both strengths and limitations in his “intentionally
approximate” technique. It is important for modern users to consider these same issues when
making the choice between PLS-SEM versus CB-SEM and when applying a particular technique.

Data characteristics
A primary advantage of PLS-SEM over CB-SEM is that it works particularly well with small sample
sizes (e.g., Chin and Newsted, 1999; Reinartz et al., 2009). It is not surprising, therefore, that the
average sample size of the studies included in our review (5% trimmed mean = 154.9) is
considerably lower than that reported in previous reviews of CB-SEM studies (mean = 246.4) (Shah
and Goldstein, 2006). Noteworthy, however, is a significant difference in the sample sizes of studies
published before 2000 (5% trimmed mean = 95.4) and in 2000 and beyond (5% trimmed
mean = 207.1). The sample size of studies in 2000 and beyond has more than doubled compared
to studies published before 2000, but is still below the average sample size reported in comparable
PLS-SEM reviews in marketing (5% trimmed mean = 211.3) (Hair et al., 2012) and MIS (5%
trimmed mean = 238.1) (Ringle et al., 2012). Similarly, the median sample size across all models
included in our review (median = 83) is considerably lower than that reported in the Hair et al.
(2012) marketing (median = 159) and Ringle et al. (2012) MIS (median = 198) studies. This trend
is also apparent for models with fewer than 100 observations (58 of 112 models in total).

3. The total of the percentages exceeds 100 percent because various studies mentioned multiple reasons for the use of PLS-SEM.
Overall, PLS-SEM studies in the strategic management discipline rely on much smaller sample
sizes compared to other fields. Even though from a statistical standpoint PLS-SEM can be used
with smaller sample sizes, this observation is not without problems for at least two reasons. First,
relying on small sample sizes tends to capitalize on the idiosyncrasies of the sample at hand. All else
being equal, the more heterogeneous the underlying population, the larger the required sample size
necessary to adequately reflect the population and to yield accurate estimates. Researchers need to
be aware that no statistical method can offset the fact that smaller sample sizes go hand in hand
with higher sampling error, especially when the population and the sample are heterogeneous in
composition. Second, the biasing effects of small sample sizes are likely to be accentuated when
data are extremely non-normal. Even though PLS-SEM is well-known to be robust when used
on highly skewed data (e.g., Cassel et al., 1999; Reinartz et al., 2009), such data inadequacies inflate
bootstrapping standard errors, thereby reducing the statistical power of the method. Considering
the tendency of PLS-SEM to underestimate inner model relationships (Hui and Wold, 1982),
non-normal data may represent a concern in combination with small sample sizes. It is therefore
of concern that none of the studies included in our review reported a check of the skewness and
kurtosis of the data underlying the analyses.
What might have contributed to the misconception of the universal suitability of PLS-SEM to
handle small sample sizes is the widespread application of the “ten times rule of thumb”
(Barclay et al., 1995; Hair et al., 2013). This rule recommends a minimum sample size of ten times
the maximum number of independent variables in the outer model and inner model. This approach
is equivalent to using a sample size of ten times the largest number of formative indicators used to
measure any construct in the outer model (in other words, the number of indicators per formative
construct) or ten times the largest number of structural paths directed at a particular latent con-
struct in the inner model. Most models (93; 83.0%) meet this rule of thumb. The nineteen models
(17.0%) that did not meet this criterion are on average 26.7 percent short of the recommended
sample size. Over time, 14 of the 61 models published before 2000 did not meet the ten times
rule of thumb, whereas only five of the 51 models published in 2000 and beyond fell short,
revealing a significant difference (p < 0.10) and indicating that re-
searchers have become more aware of sample size issues in PLS-SEM in recent years.
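The rule is mechanical enough to state as a one-line computation. The following sketch (function and parameter names are ours, and the example model is hypothetical) illustrates it:

```python
def ten_times_rule(max_formative_indicators, max_inner_paths):
    """Minimum sample size under the 'ten times' rule of thumb
    (Barclay et al., 1995): ten times the larger of (a) the largest
    number of formative indicators on any one construct and (b) the
    largest number of structural paths directed at any one construct."""
    return 10 * max(max_formative_indicators, max_inner_paths)

# Hypothetical model: a construct with six formative indicators and an
# endogenous construct receiving four inner model paths.
print(ten_times_rule(6, 4))  # -> 60
```

As the next paragraph notes, this is only a broad heuristic: it ignores effect size, reliability, and the total number of indicators.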
While this rule of thumb may provide a broad estimate of minimum sample size requirements
for the use of PLS-SEM (Hair et al., 2011), it needs to be pointed out that it does not consider effect
size, reliability, the total number of indicators, and other issues likely affecting the statistical power
of the PLS-SEM method. Since sample size recommendations in PLS-SEM essentially build on the
properties of ordinary least squares regression, researchers can revert to more differentiated rules of
thumb such as those provided by Cohen (1992) in his statistical power analyses for multiple regres-
sion models. For instance, when the maximum number of independent variables in the outer and
inner models is five, one would need ninety-one observations to achieve a statistical power of 80
percent, assuming a medium effect size and a 5 percent α-level. Cohen’s (1992) statistical power
analyses generally match those from Reinartz et al. (2009) in their comparison of CB-SEM and
PLS-SEM, provided that the outer model has an acceptable quality in terms of outer loadings
(i.e., loadings should be above the common threshold of 0.70). Likewise, Hair et al. (2013) provide
minimum sample size recommendations, based on regression-based power analyses.
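As an illustration of the regression-based power analysis referred to above, the sketch below (our own function names) uses Cohen's f² effect-size convention and the noncentral F distribution to find the smallest sample reaching 80 percent power:

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n_obs, n_predictors, f2=0.15, alpha=0.05):
    """Power of the overall F-test in an OLS regression (Cohen, 1992).
    f2 is Cohen's effect size; 0.02/0.15/0.35 denote small/medium/large."""
    df1 = n_predictors
    df2 = n_obs - n_predictors - 1
    nc = f2 * n_obs  # noncentrality parameter under the alternative
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, nc)

def min_sample_size(n_predictors, target=0.80, f2=0.15, alpha=0.05):
    """Smallest n whose power reaches the target level."""
    n = n_predictors + 2  # smallest n with positive error df
    while regression_power(n, n_predictors, f2, alpha) < target:
        n += 1
    return n

# Five predictors, medium effect, 5% alpha: the result is close to the
# ninety-one observations cited in the text.
```

Exact results can differ by an observation or two from Cohen's tabulated values, since the tables round the noncentrality parameter.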



Another benefit of PLS-SEM is its ability to process nominal, ordinal, interval, and ratio scaled vari-
ables (Fornell and Bookstein, 1982; Haenlein and Kaplan, 2004; Reinartz et al., 2009). Of the 112
models included in our review, only six models (5.4%) included categorical variables with more
than two modalities. A considerably larger number (40 models; 35.7%) used binary variables. The
use of categorical variables in PLS-SEM should be approached with caution as the number of binary
indicators and the position of the corresponding construct in the path model may restrict the use of
categorical variables. For instance, an issue occurs when a binary single-item is used to measure an
endogenous latent variable representing a choice situation. In the final inner approximation of the
PLS-SEM algorithm, the endogenous latent variable is regressed on its predecessor variables. Because
the construct then equals its single binary measure, and thus takes only two values (choice versus no choice),
a basic premise of ordinary least squares regression is violated. Researchers should be acquainted,
therefore, with the fundamental steps in the PLS-SEM algorithm (e.g., Henseler et al., 2012; Henseler
et al., 2009) to avoid model set-ups that are problematic in this respect.
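A small simulation (generic, not drawn from any reviewed study) makes the violation concrete: an ordinary least squares fit to a two-valued outcome can produce fitted values outside the admissible range, and its residuals take only two values at each fitted point:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: one standardized predictor score and a binary
# "choice" outcome, mimicking a binary single-item endogenous construct.
x = rng.normal(size=200)
y = (x + rng.normal(size=200) > 0).astype(float)

# OLS fit (the final inner approximation in PLS-SEM is an ordinary
# least squares regression of the endogenous score on its predecessors).
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta

# Nothing constrains the fitted values of a binary outcome to [0, 1].
out_of_range = (fitted < 0).any() or (fitted > 1).any()
print(out_of_range)
```

A logistic link would be the standard remedy in a plain regression setting; inside the PLS-SEM algorithm, the model set-up itself has to be changed.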

Model characteristics
Table 2 provides an overview of model characteristics of the PLS-SEM studies included in our
review. On average, the number of latent variables in path models is 7.5, which is similar to the
7.9 and 8.1 reported in prior PLS-SEM reviews in marketing (Hair et al., 2012) and MIS (Ringle
et al., 2012), but much higher than in comparable studies in a CB-SEM context (e.g.,
Baumgartner and Homburg, 1996; Shah and Goldstein, 2006). This increased model complexity
is also mirrored in the higher number of inner model relationships being analyzed
(mean = 10.4), which has increased significantly (p ≤ 0.10) over time (9.4 before 2000 and 11.6
thereafter). Likewise, models incorporate a relatively large average number of indicators (27 in
our review), which is much higher than generally encountered in CB-SEM (e.g., Baumgartner
and Homburg, 1996; Shah and Goldstein, 2006). This finding is not due to a larger average number
of indicators per construct but rather a result of the relatively larger number of constructs used in
the models. Specifically, the average number of indicators per reflective construct is 3.4, and 3.6 for
formative constructs, both of which have increased significantly over time (p ≤ 0.01). The relatively
small difference in the number of indicators per reflective and formative construct is striking, given
that formative indicators should capture the entire content domain of the construct under consid-
eration (Diamantopoulos et al., 2008). This especially holds for PLS-SEM which is restricted to es-
timating formative constructs without an error term (Diamantopoulos, 2011).
Taken jointly, these results suggest that researchers benefit from the ability of PLS-SEM to use
fewer data points in estimating complex models with many constructs, inner model relationships
and indicator variables. In contrast, CB-SEM quickly reaches its limits in similar situations. For ex-
ample, a complex model such as that described in Staples et al. (1999), with fifteen constructs
measured by a total of seventy indicator variables, has 2,383 degrees of freedom. Hence, the statistical
power for the test of model fit based on the root-mean-square error of approximation, using
α = 0.05, ε0 = 0.05, and εa = 0.08, would be 1.000 (MacCallum et al., 1996). Therefore, rounding
errors would likely be detected at the 10th decimal place, and the model probably would fail even with
simulated data (Haenlein and Kaplan, 2004).
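The degrees-of-freedom figure can be reproduced by counting non-redundant covariance moments against free parameters. In this sketch, the count of 102 free parameters is inferred from the reported df, not stated explicitly in Staples et al. (1999):

```python
def cb_sem_degrees_of_freedom(n_indicators, n_free_parameters):
    """Degrees of freedom in covariance-based SEM: non-redundant
    elements of the indicator covariance matrix minus the number of
    freely estimated parameters."""
    n_moments = n_indicators * (n_indicators + 1) // 2
    return n_moments - n_free_parameters

# Seventy indicators give 70 * 71 / 2 = 2,485 distinct variance and
# covariance terms; 102 free parameters leave the 2,383 df cited above.
print(cb_sem_degrees_of_freedom(70, 102))  # -> 2383
```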
An important characteristic of PLS-SEM is that it readily incorporates both reflective and forma-
tive measures. Drawing on this characteristic, half of the models used a combination of both reflec-
tively and formatively measured latent variables (56 models; 50.0%), and the number increased
significantly (p < 0.05) over time. Very few models were composed of solely reflectively measured
latent variables (12 models; 10.7%) or solely formatively measured latent variables (12 models;
10.7%). One quite surprising finding was that thirty-two models (28.6%) did not specify the mea-
surement mode for the constructs at all, despite the ongoing and rather vibrant debate on measure-
ment specification (Bagozzi, 2007; Diamantopoulos, 2006; Diamantopoulos et al., 2008;
Diamantopoulos and Siguaw, 2006; Diamantopoulos and Winklhofer, 2001; Edwards and
Bagozzi, 2000; Howell et al., 2007). It is encouraging, however, to see that the number of models
lacking an outer model description decreased significantly (p < 0.10) over time (Table 2).

Table 2. Descriptive statistics for model characteristics

Criterion                                     Results     Proportion (%)   Before 2000   2000 onward
                                              (n = 112)                    (n = 61)      (n = 51)

Number of latent variables
  Mean                                        7.5         -                7.0           8.1
  Median                                      6.0                          6.0           6.0
  Range                                       (2; 31)                      (3; 15)       (2; 31)
Number of inner model path relations
  Mean                                        10.4        -                9.4           11.6*
  Median                                      9.0                          7.0           10.0
  Range                                       (2; 39)                      (3; 39)       (2; 31)
Mode of outer models
  Only reflective                             12          10.7             5             7
  Only formative                              12          10.7             11***         1
  Reflective and formative                    56          50.0             24            32**
  Not specified                               32          28.6             21*           11
Number of indicators per reflective construct (a)
  Mean                                        3.4         -                2.3           4.3***
  Median                                      3.0                          2.0           4.0
  Range                                       (1; 10)                      (1; 5)        (1; 10)
Number of indicators per formative construct (b)
  Mean                                        3.6         -                2.6           4.7***
  Median                                      2.5                          2.0           5.0
  Range                                       (1; 10)                      (1; 6)        (1; 10)
Total number of indicators in models
  Mean                                        27.0        -                19.7          35.7***
  Median                                      19.0                         18.0          23.0
  Range                                       (7; 114)                     (7; 70)       (9; 114)
Number of models with single-item constructs  76          67.9             45            31

*** (**, *) indicates a significant difference between “before 2000” and “2000 onward” at a 1% (5%, 10%) significance level; results based on independent samples t-tests and (one-tailed) Fisher’s exact tests (no tests for median differences).
(a) Includes only models marked as including reflective indicators (n before 2000 = 29; n 2000 onward = 39).
(b) Includes only models marked as including formative indicators (n before 2000 = 35; n 2000 onward = 33).

Our review revealed that 76 of 112 PLS path models (67.9%) used single-item measures. While
PLS-SEM readily incorporates single-item measures, researchers need to be vigilant since PLS-SEM
requires reasonable outer model quality (i.e., a sufficient number of indicators per construct and
higher loadings) for the technique to provide acceptable parameter estimates under a restricted
sample size (Reinartz et al., 2009). Apart from that, recent research shows that, in terms of
predictive validity, single-item measures perform as well as multi-item scales only under very
specific conditions (Diamantopoulos et al., 2012). As Diamantopoulos et al. (2012, p. 446) point out, “opting for
single-item measures in most empirical settings is a risky decision as the set of circumstances that
would favor their use is unlikely to be frequently encountered in practice.” Despite their ease of
implementation in PLS-SEM, researchers should follow Diamantopoulos et al.’s (2012) guideline
and only consider single items (rather than a multi-item scale) when 1) small sample sizes are pres-
ent (i.e., N < 50), and 2) effect sizes of 0.30 and lower are expected, and 3) the items of the orig-
inating multi-item scale are highly homogeneous (i.e., Cronbach’s alpha > 0.90), and 4) the items
are semantically redundant. This especially holds since PLS-SEM focuses on explaining the variance
in the endogenous variables, thereby placing emphasis on prediction.
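The four conditions can be read as a conjunctive checklist: all of them must hold before a single item is defensible. A minimal sketch (thresholds taken from the text above; the function name and inputs are illustrative, not a standard API):

```python
# Minimal sketch: Diamantopoulos et al.'s (2012) four conditions as a
# conjunctive checklist (thresholds from the text; names are illustrative).

def single_item_defensible(sample_size, expected_effect_size,
                           scale_cronbach_alpha, items_semantically_redundant):
    return (sample_size < 50                      # 1) small sample (N < 50)
            and expected_effect_size <= 0.30      # 2) effect sizes of 0.30 or lower
            and scale_cronbach_alpha > 0.90       # 3) highly homogeneous scale
            and items_semantically_redundant)     # 4) semantically redundant items

print(single_item_defensible(40, 0.25, 0.93, True))    # -> True
print(single_item_defensible(200, 0.25, 0.93, True))   # -> False
```

Because the conditions are joined by "and", failing any single one (here, an adequate sample size) already rules out the single-item shortcut.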

Long Range Planning, vol 45 2012 327


Model evaluation
Outer model evaluation
To assess the extent to which constructs are appropriately measured by their indicator variables (in-
dividually or jointly), researchers need to differentiate between reflective and formative measure-
ment approaches (Diamantopoulos et al., 2008), with each approach relying on a different set of
criteria. Reflective measures are commonly evaluated through criteria of internal consistency,
such as Cronbach’s alpha and composite reliability (Hair et al., 2011). The internal consistency per-
spective that underlies reflective outer model evaluation cannot be universally applied to formative
models since formative measures do not necessarily covary (e.g., Diamantopoulos and Winklhofer,
2001). Thus, any attempt to purify formative indicators based on correlation patterns can have ad-
verse consequences for construct measures’ content validity (e.g., Diamantopoulos and Riefler,
2011; Diamantopoulos and Siguaw, 2006). This especially holds for PLS-SEM which assumes
that the formative indicators fully capture the content domain of the construct under consideration
(Diamantopoulos, 2011). Therefore, instead of considering measures such as composite reliability
or AVE, researchers have to resort to other criteria to assess formatively measured constructs. These
include, for example, the indicator weights, their significance, and collinearity among the indicators.
Our review assesses whether and how authors evaluated reflective (Table 3, Panel A) and formative
measures (Table 3, Panel B) and whether the standards used for evaluating reflective outer models
were applied to formative models.

Reflective outer models


Despite the importance of providing evidence for the reflective measures’ reliability and validity in
order to adequately interpret inner model estimates (Henseler et al., 2009), authors oftentimes do
not engage in these analyses. For instance, of the sixty-eight models reporting the use of reflectively
measured constructs, fifty-three models (77.9%) reported outer loadings, thereby indirectly speci-
fying indicator reliability. Internal consistency reliability was reported in 38 of 68 models (55.9%)
using reflectively measured constructs and is significantly more prevalent in recent years (p ≤ 0.05). Most
models report composite reliability values, either exclusively (17 models; 25.0%) or in conjunction
with Cronbach’s Alpha (14 models; 20.6%). A relatively small number (seven models, 10.3%) re-
port only Cronbach’s Alpha. Since the PLS-SEM algorithm gives greater weight to indicators with
higher reliability, composite reliability is generally regarded as the more appropriate criterion for
establishing internal consistency reliability, compared with Cronbach’s Alpha; the latter is generally
perceived as a lower bound of reliability, whereas composite reliability is an upper bound.
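A minimal numerical sketch of the two reliability criteria may help; the formulas below are the standard ones (Cronbach's alpha from item-score variances, composite reliability from standardized outer loadings), and the example values are invented for illustration:

```python
# Minimal sketch (illustrative values, standard formulas): Cronbach's alpha
# from raw item scores; composite reliability from the standardized outer
# loadings of one reflective construct.

def cronbach_alpha(items):
    """items: list of equal-length score lists, one list per indicator."""
    k = len(items)

    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)

    totals = [sum(scores) for scores in zip(*items)]  # scale score per case
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

def composite_reliability(loadings):
    """loadings: standardized loadings; error variance = 1 - loading**2."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

print(round(composite_reliability([0.8, 0.8, 0.8]), 3))  # -> 0.842
```

With equal loadings the two criteria nearly coincide; with unequal loadings, alpha drops below composite reliability, consistent with the lower-bound/upper-bound reading above.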
In a PLS-SEM context, Tenenhaus et al. (2005) suggest using the above-mentioned reliability
measures to assess the unidimensionality of a set of manifest variables hypothesized to reflect an
underlying construct. However, Sahmer et al. (2006) show that Cronbach’s Alpha and composite
reliability are both inadequate to assess whether a set of reflective manifest variables is unidimen-
sional or not. Instead, researchers should apply the Kaiser-Guttman criterion (see Karlis et al., 2003
for a modified version) or Revelle’s (1979) coefficient beta.
Convergent validity was assessed in 30 of 68 models (44.1%) with the AVE being by far the most
prevalent measure. Furthermore, a total of twenty-seven models (39.7%) report some measure of
discriminant validity. Specifically, thirteen models (19.1%) exclusively relied on the Fornell-
Larcker criterion (Fornell and Larcker, 1981), which compares the AVE of each construct with
the squared inter-construct correlations. This criterion has been applied significantly (p ≤ 0.05)
more frequently in recent years. Similarly, thirteen models (19.1%) reported cross-loadings only,
a more liberal form of assessing discriminant validity, which has likewise been reported significantly
(p ≤ 0.01) more frequently in recent years.
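As a concrete sketch of the two discriminant-validity checks just mentioned (invented numbers, standard formulas): the AVE is the mean squared standardized loading, and the Fornell-Larcker criterion requires each construct's AVE to exceed its squared correlation with every other construct.

```python
# Minimal sketch (illustrative values): AVE and the Fornell-Larcker check
# for two reflective constructs A and B.

def ave(loadings):
    """Average variance extracted from standardized outer loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def fornell_larcker_ok(loadings_a, loadings_b, corr_ab):
    """Both AVEs must exceed the squared inter-construct correlation."""
    return min(ave(loadings_a), ave(loadings_b)) > corr_ab ** 2

# Loadings of 0.7-0.9 and an inter-construct correlation of 0.6
print(fornell_larcker_ok([0.8, 0.7, 0.9], [0.75, 0.8], 0.6))  # -> True
```

The same check fails when the shared variance between constructs (here 0.6² = 0.36) exceeds either AVE, signaling a discriminant-validity problem.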

Formative outer models


Our analysis shows that 68 of 112 models (60.7%) included at least one formatively measured con-
struct. Indicator weights, the most common criterion to assess formative measures, were reported

Table 3. Evaluation of outer models

Panel A: Reflective outer models

Criterion                 Empirical test                 Number of models     Proportion       Before 2000   2000 onward
                          criterion in PLS-SEM (a)       reporting (n = 68)   reporting (%)    (n = 29)      (n = 39)

Indicator reliability     Indicator loadings             53                   77.9             22            31
Internal consistency      Only composite reliability     17                   25.0             3             14**
reliability               Only Cronbach’s Alpha          7                    10.3             0             7**
                          Both                           14                   20.6             0             14***
Convergent validity       AVE                            29                   42.7             2             27
                          Other                          1                    1.5              0             1
Discriminant validity     Only Fornell-Larcker           13                   19.1             2             11**
                          criterion
                          Only cross-loadings            13                   19.1             0             13***
                          Other                          1                    1.5              0             1

Panel B: Formative outer models

Criterion                 Empirical test                 Number of models     Proportion       Before 2000   2000 onward
                          criterion in PLS-SEM (a)       reporting (n = 68)   reporting (%)    (n = 35)      (n = 33)

–                         Reflective criteria used       17                   25.0             11            6
                          to evaluate formative
                          constructs
Indicator’s absolute      Indicator weights              26                   38.2             21***         5
contribution to
the construct
Significance of weights   Standard errors,               3                    4.4              0             3
                          significance levels,
                          t-values/p-values for
                          indicator weights
Multicollinearity         Only VIF/tolerance             0                    0.0              –             –
                          Only condition index           0                    0.0              –             –
                          Both                           1                    1.5              0             1

*** (**, *) indicates a significant difference between “before 2000” and “2000 onward” at a 1% (5%, 10%)
significance level; results based on (one-tailed) Fisher’s exact tests.
(a) Single-item constructs were excluded from this analysis.

in 26 of 68 models (38.2%). A formative indicator’s weight represents the partialized effect of the
indicator on its corresponding construct, controlling for the effect of all other indicators of that
construct (Cenfetelli and Bassellier, 2009). Therefore, weights are generally smaller than loadings
(i.e., the zero-order bivariate correlation between the indicator and the associated construct) and
thus need to be assessed for their significance through resampling procedures such as bootstrap-
ping or jackknifing. However, only three models (4.4%) reported standard errors, significance
levels, t-values, or p-values for indicator weights, all of which were published in more recent
years.
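The bootstrap logic for testing a weight's significance can be sketched as follows. This is illustrative only: `estimate` stands in for re-estimating the full PLS path model on each resample, and is reduced here to a simple mean so the snippet stays self-contained.

```python
import random

# Minimal sketch of a bootstrap t-value for an estimate such as a formative
# indicator weight. In a real analysis, `estimate` would re-run the PLS-SEM
# estimation on each resample; here it is a mean, purely for illustration.

def bootstrap_t(data, estimate, n_boot=1000, seed=1):
    rng = random.Random(seed)
    theta = estimate(data)                           # full-sample estimate
    draws = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]  # sample with replacement
        draws.append(estimate(resample))
    mean = sum(draws) / n_boot
    se = (sum((d - mean) ** 2 for d in draws) / (n_boot - 1)) ** 0.5
    return theta / se                                # |t| > 1.96: sig. at 5%

values = [0.9, 1.1, 1.0, 0.8, 1.2, 1.05, 0.95, 1.1]
print(bootstrap_t(values, lambda d: sum(d) / len(d)) > 1.96)  # -> True
```

Jackknifing follows the same pattern but omits one observation per replication instead of resampling with replacement.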
Multicollinearity among indicators represents an important concern in assessing formative mea-
sures since it can inflate bootstrap standard errors and therefore trigger type II errors (Cenfetelli
et al., 2009). Surprisingly, however, multicollinearity assessment is almost entirely missing from
the models included in our review, with only one exception. Since the weights of formative indicators are



usually smaller than those of reflective indicators, multicollinearity can cause misinterpretations of
the indicators’ relevance for the construct domain (Diamantopoulos and Winklhofer, 2001). The
lack of multicollinearity assessment is an important oversight and should be a reporting require-
ment in future studies that include formative measures.
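A minimal multicollinearity check can be sketched as follows (invented data): with only one other predictor, the auxiliary R² equals the squared bivariate correlation, so the VIF reduces to 1/(1 − r²); with more indicators one would regress each indicator on all of the others. The VIF > 5 cutoff is a common rule of thumb (some sources use 10), not a prescription from this article.

```python
# Minimal sketch (invented data): VIF between two formative indicators.
# With a single other predictor, R-squared equals r**2, so VIF = 1/(1 - r**2).

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def vif_two_indicators(x, y):
    return 1.0 / (1.0 - pearson_r(x, y) ** 2)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.8, 5.1]          # nearly collinear with x
print(vif_two_indicators(x, y) > 5)    # rule of thumb: VIF > 5 is critical -> True
```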
Overall, formative outer model assessment in the management discipline leaves much to be de-
sired. Researchers neglect fundamental principles of outer model evaluation such as significance
testing and multicollinearity assessment, casting measurement quality and thus the studies’ findings
into doubt. More severely, a total of seventeen models (25.0%) with formatively measured con-
structs inappropriately used reflective criteria to evaluate the corresponding measures. This mistake
has been made consistently over time. In fact, it is surprising that six of 33 models (18.2%) used
reflective criteria to evaluate formative measures in recent years since the discussion about the eval-
uation of formative measures has taken place not only in the marketing and MIS disciplines (e.g.,
Diamantopoulos and Winklhofer, 2001; Petter et al., 2007), but also in the management domain
(e.g., Podsakoff et al., 2006; Podsakoff et al., 2003).
Researchers should place more emphasis on evaluating formative measures by applying estab-
lished as well as more recently proposed guidelines (Hair et al. (2013) provide an overview of
the state of the art in formative measurement evaluation). For instance, researchers can examine
the correlation between the formatively measured construct and a reflective measure of the
same construct. This analysis, also known as redundancy analysis (Chin, 1998), indicates the val-
idity of the designated set of formative indicators in tapping the construct of interest. Similarly, on
an indicator-level, Hair et al. (2011) argue that researchers should also consider a formative indi-
cator’s absolute contribution to its construct; that is, the information an indicator provides with-
out considering any other indicators (Cenfetelli and Bassellier, 2009). The absolute contribution is
given by the formative indicator’s outer loading, which is always provided along with the indicator
weights in PLS-SEM. When an indicator’s outer weight is insignificant but its outer loading is high
(i.e., above 0.50), the indicator should be interpreted as absolutely important but not as relatively
important. In this situation, the indicator would generally be retained. However, when an indicator
has an insignificant weight and the outer loading is below 0.50, the researcher should decide
whether to retain or delete the indicator by examining its theoretical relevance and potential con-
tent overlap with other indicators of the same construct. Only if the theory-driven conceptualiza-
tion of the construct strongly supports retaining the indicator (e.g., by means of expert
assessment), should it be kept in the formative outer model. Finally, if the outer loading is low,
insignificant, and there is no empirical support for the indicator’s relevance regarding providing
content to the formative index, it should be considered a strong candidate for removal.
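The retention logic above can be condensed into a small decision rule. The function below merely paraphrases the text's guidance (significance of the weight, the 0.50 loading threshold, the theory check); the return strings are illustrative labels, not standard terminology.

```python
# Minimal sketch: decision rule for keeping or dropping a formative
# indicator, paraphrasing the guidance in the text. Labels are illustrative.

def formative_indicator_decision(weight_significant, outer_loading,
                                 theoretically_relevant=False):
    if weight_significant:
        return "retain (relatively important)"
    if outer_loading >= 0.50:                    # absolutely important
        return "retain (absolutely, though not relatively, important)"
    if theoretically_relevant:                   # e.g., expert assessment
        return "retain (content validity outweighs statistics)"
    return "candidate for removal"

print(formative_indicator_decision(False, 0.62))
# -> retain (absolutely, though not relatively, important)
```

Note that the final branch is only a "candidate" for removal: as the Einhorn quotation stresses, the content-validity check always precedes the statistical verdict.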
It is important to note that eliminating formative indicators that do not meet threshold levels
in terms of their contribution has, from an empirical perspective, almost no effect on the param-
eter estimates when re-estimating the model. Nevertheless, formative indicators should never be
discarded simply on the basis of statistical outcomes. In this context, Einhorn’s (1972, p. 378)
conclusion that “just as the alchemists were not successful in turning base metal into gold, the
modern researcher cannot rely on the ‘computer’ to turn his data into meaningful and valuable
scientific information” still holds true today. Therefore, before removing an indicator from the
formative outer model, researchers need to carefully check its relevance from a content validity
point of view.

Inner model evaluation


Unlike CB-SEM, PLS-SEM does not optimize a unique global scalar function. The lack of a global
scalar function and the consequent lack of global goodness-of-fit measures is traditionally con-
sidered a major drawback of PLS-SEM. It is important to recognize that the term “fit” has dif-
ferent meanings in the contexts of CB-SEM and PLS-SEM. Fit statistics for CB-SEM are derived
from the discrepancy between the empirical and the model-implied (theoretical) covariance ma-
trix, whereas PLS-SEM focuses on the discrepancy between the observed (in the case of manifest
variables) or approximated (in the case of latent variables) values of the dependent variables and

the values predicted by the model in question. As a consequence, researchers using PLS-SEM rely
on measures indicating the model’s predictive capabilities to judge the model’s quality (Table 4).
The central criterion in this respect is the R², which 90 of the 112 models (80.4%) report. Only
twelve models (10.7%) report the effect size (f²), which considers the relative impact of a particular ex-
ogenous latent variable on an endogenous latent variable by means of changes in the R² (Cohen,
1988), although in recent years significantly more models (p < 0.01) reported it. The
cross-validated redundancy measure Q², a common sample re-use technique (Geisser, 1974;
Stone, 1974), allows for assessing a model’s predictive validity. More precisely, Q² represents a syn-
thesis of cross-validation and function fitting and is a recommended assessment criterion for PLS-
SEM applications (Wold, 1982). It is notable that only three models (2.7%) reported this criterion,
all of which appeared in recent years. Similar to the f² value, the Q² value can also be used to assess
the predictive relevance of an individual construct to the model (labeled q²), but none of the
models reported the predictive relevance of individual constructs.
Overall, the results point to an apparent problem in researchers’ reporting when evaluating PLS-
SEM-based path models. Table 4 summarizes the review of the inner model evaluations. The fact
that reviews of PLS-SEM use in marketing research (Hair et al., 2012) and MIS (Ringle et al., 2012)
showed similarly problematic results is not encouraging. In accordance with PLS-SEM’s statistical
properties, researchers should make broader use of relevant criteria to assess the model’s predictive
capabilities.
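For reference, the two predictive criteria reduce to simple formulas. The sketch below uses the forms commonly given for PLS-SEM (f² as the change in R² when an exogenous construct is omitted, relative to the unexplained variance; Q² as one minus the ratio of blindfolding prediction errors to the trivial benchmark), with invented numbers.

```python
# Minimal sketch: effect size f2 and cross-validated redundancy Q2 in the
# forms commonly used for PLS-SEM (illustrative helpers, invented numbers).

def f_squared(r2_included, r2_excluded):
    """Relative impact of an exogenous construct via the change in R2."""
    return (r2_included - r2_excluded) / (1 - r2_included)

def q_squared(sse, sso):
    """1 - (squared prediction errors / squared observations), blindfolding."""
    return 1 - sse / sso

print(round(f_squared(0.60, 0.50), 2))  # -> 0.25
print(q_squared(30.0, 100.0))           # -> 0.7
```

A Q² value above zero indicates that the model predicts the construct's indicators better than the trivial benchmark, i.e., it has predictive relevance.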

Table 4. Evaluation of inner models

Criterion                  Empirical test                Number of models      Proportion      Before 2000   2000 onward
                           criterion in PLS-SEM          reporting (n = 112)   reporting (%)   (n = 61)      (n = 51)

Endogenous constructs’     R²                            90                    80.4            45            45*
explained variance
Effect size                f²                            12                    10.7            0             12***
Predictive relevance       Cross-validated               3                     2.7             0             3*
                           redundancy Q²
Relative predictive        q²                            0                     0.0             –             –
relevance
Overall goodness-of-fit    GoF                           0                     0.0             –             –
Path coefficients          Absolute values               107                   95.5            59            48
Significance of            Standard errors,              107                   95.5            59            48
path coefficients          significance levels,
                           t-values, p-values
Confidence intervals       –                             0                     0.0             –             –
Total effects              –                             12                    10.7            9             3

Criterion                  Empirical test                Number of studies     Proportion      Before 2000   2000 onward
                           criterion in PLS-SEM          reporting (n = 37)    reporting (%)   (n = 16)      (n = 21)

Observed heterogeneity     Categorical moderator         8                     21.6            5             3
                           Continuous moderator          2                     5.4             0             2
Unobserved heterogeneity   Response-based                0                     0.0             –             –
                           segmentation techniques
                           (e.g., FIMIX-PLS)

*** (**, *) indicates a significant difference between “before 2000” and “2000 onward” at a 1% (5%, 10%) significance level;
results based on (one-tailed) Fisher’s exact tests.



As PLS-SEM aims at maximizing the explained variance of the dependent variables, the model
quality criteria described above cannot indicate fit or a lack thereof in a CB-SEM sense. This
also holds for the goodness-of-fit index (GoF) which Tenenhaus et al. (2004) originally proposed
as a means to validate a path model globally. Henseler and Sarstedt (in press) recently challenged
the usefulness of the GoF both conceptually and empirically. For instance, in a simulation study, the
authors show that the GoF is not able to separate valid models from invalid ones. Since the GoF is
also not applicable to formative outer models and does not penalize overparameterization efforts,
researchers are advised to maintain the current practice and not apply this measure.
After assessing the predictive quality of an inner model, researchers examine the standardized
path coefficients to analyze whether the hypothesized relationships among constructs are reflected
by the data. The significance of these coefficients should be assessed using resampling procedures
(Henseler et al., 2009). Of the 112 models in our review, 107 (95.5%) reported path coefficients and
their significance. Unfortunately, researchers typically interpreted only the coefficients’ t-values,
even though there are other approaches to assessing a coefficient’s stability when the complete
bootstrap output is considered. Superior techniques involve the construction of (bias-corrected)
bootstrap confidence intervals (e.g., Gudergan et al., 2008; Sarstedt et al., 2011), which, unlike regular
confidence intervals, may be asymmetrically distributed around the mean estimate from the full data.
This is a valuable property, as the forced symmetry of regular confidence intervals may have negative
influence on estimation accuracy and statistical power (Efron and Tibshirani, 1986).
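A percentile bootstrap interval, the simplest relative of the bias-corrected intervals discussed above, can be sketched as follows (invented data; `estimate` again stands in for re-estimating a path coefficient on each resample). The bounds come from the empirical distribution of bootstrap draws and therefore need not be symmetric around the full-sample estimate.

```python
import random

# Minimal sketch: percentile bootstrap confidence interval for an estimate
# such as a path coefficient. Bias-corrected variants would shift the
# percentile indices; this plain version illustrates the asymmetry idea.

def percentile_ci(data, estimate, alpha=0.05, n_boot=2000, seed=7):
    rng = random.Random(seed)
    draws = sorted(
        estimate([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = draws[int(alpha / 2 * n_boot)]            # 2.5th percentile
    hi = draws[int((1 - alpha / 2) * n_boot) - 1]  # 97.5th percentile
    return lo, hi

coeffs = [0.1, 0.3, 0.2, 0.5, 0.4, 0.35, 0.25, 0.15, 0.45, 0.3]
lo, hi = percentile_ci(coeffs, lambda d: sum(d) / len(d))
print(0.0 < lo < hi < 0.6)  # interval excludes zero -> True
```

An interval that excludes zero supports the significance of the coefficient without imposing a symmetric t-distribution on the bootstrap draws.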
It is important to note that the examination of inner model estimates, both in terms of values
and significance, is not restricted to direct relationships. Rather, researchers can also examine total
effects; that is, the sum of direct and indirect effects. Interpretation of total effects is particularly
useful in studies with the objective of exploring the differential impact of different driver constructs
on a criterion construct via several mediating variables (Albers, 2010). In our review, 12 of 112
models (10.7%) reported total effects.
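The total-effect computation is simple arithmetic: the direct path plus the product of the coefficients along each mediated route. A sketch with invented coefficients:

```python
# Minimal sketch: total effect = direct effect + sum over mediated routes of
# the product of path coefficients along each route (invented coefficients).

def total_effect(direct, indirect_paths):
    """indirect_paths: list of tuples of path coefficients, one per route."""
    total = direct
    for path in indirect_paths:
        product = 1.0
        for coef in path:
            product *= coef
        total += product
    return total

# X -> Y direct effect 0.20, plus X -> M -> Y with paths 0.50 and 0.40
print(round(total_effect(0.20, [(0.50, 0.40)]), 2))  # -> 0.4
```

Here the mediated route contributes 0.50 × 0.40 = 0.20, doubling the direct effect, which is exactly the kind of driver-construct comparison the text describes.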
Recently, methodological research has devoted considerable interest to the identification and
treatment of heterogeneous data structures within a PLS-SEM framework. As a result of these
efforts, researchers can apply a broad range of procedures to take observed heterogeneity into ac-
count. These include approaches to modeling continuous moderating variables (Henseler and
Fassott, 2010; Henseler and Chin, 2010) as well as categorical moderating variables (i.e., Sarstedt
et al., 2011), which only a few studies in our review made use of (Table 4). However, variables caus-
ing heterogeneous data structures are not always observed (known). As a result, researchers have
developed approaches that facilitate identifying and treating unobserved heterogeneity (Sarstedt,
2008), which none of the studies included. Among the various approaches that have been proposed,
Hahn et al.’s (2002) finite mixture PLS (FIMIX-PLS) procedure is considered a primary approach.
By fitting mixture regression models on the data, FIMIX-PLS effectively detects heterogeneity in
inner model estimates and enables the derivation of latent classes. More recently, Ringle et al.
(in press) proposed a combination of PLS-SEM with genetic algorithms (PLS-GAS), which have
routinely been used in various business research disciplines to handle complex optimization prob-
lems. The authors show that PLS-GAS effectively uncovers heterogeneity and allows for a more pre-
cise segmentation of the data compared to prior approaches. Further assessment, especially with
regard to (formative) outer models is still pending but preliminary analyses show that the approach
holds considerable promise for segmentation tasks in PLS-SEM.

Reporting
A fundamental issue in the use of any statistical technique relates to the reporting of the choice of
computational options as these can have a significant bearing on the analysis results. This also holds
for the use of PLS-SEM, which leaves the researcher with several degrees of freedom when running
the algorithm or using complementary techniques such as jackknifing or bootstrapping. Unfortu-
nately, reporting practices in management research are lacking in several respects.
For instance, while practically all studies used resampling techniques such as jackknifing and
bootstrapping, only 20 of 37 studies (54.1%) explicitly mentioned their use. Furthermore, only

10 of the 20 studies (50.0%) reported the concrete parameter settings (on the upside, settings were
reported significantly more frequently in recent years; p < 0.05).
Detailed reporting on resampling procedures is critical, however, in PLS-SEM studies. For exam-
ple, the different bootstrapping sign change options yield considerably different results when pa-
rameter estimates are close to zero. Specifically, when the no sign change option indicates
a nonsignificant relationship, the individual sign changes option is more likely to indicate a relation-
ship as significant. Likewise, a misspecification of bootstrap sample size vis-à-vis the original sample
size and bootstrap cases can significantly bias the results. For instance, using a small number of
bootstrap samples, particularly when the original sample size is much larger, will considerably de-
flate standard errors. Considered jointly, bootstrap parameter settings leave the researcher with
many degrees of freedom in their analysis of results, making their concrete reporting a must.
None of the studies provides information on the use of the PLS-SEM algorithm; that is, on the
weighting scheme used and the stop criterion. While in practice the choice of weighting scheme
has little bearing on the analysis results, researchers have to consider that the schemes are not univer-
sally applicable to all kinds of model set-ups. For instance, the centroid scheme must not be used when
estimating higher order models (Hair et al., 2012; Hair et al., 2013). Similarly, although the PLS-SEM
algorithm usually converges (Henseler, 2010), it may not do so when the stop criterion is extremely
low (e.g., 10⁻²⁰). Therefore, researchers should provide the (maximum) number of iterations to assess
whether or not the PLS-SEM algorithm converged before reaching the pre-specified stop criterion.
Reporting of the software used (in accordance with the license agreements) can provide some
information in this respect as the different programs rely on different default settings. However,
only 18 of 37 studies (48.7%) specified which software was used for model estimation. While recent
studies reported software package information significantly more often (p < 0.01) than earlier stud-
ies, the number is still relatively small. Of the eighteen studies in our review providing software
package information, ten studies used PLS Graph (Chin, 2003), five studies used LVPLS
(Lohmöller, 1987), two studies used SmartPLS (Ringle et al., 2005), and one study used PLS-GUI (Li, 2005).
Surprisingly, and in contrast to previous reviews of PLS-SEM studies in the marketing literature
(Hair et al., 2012), twenty-five of 37 studies (67.6%) reported the covariance/correlation
matrix for the indicator variables, which enables readers to replicate and validate the analytical find-
ings. The number of studies reporting the covariance/correlation matrix is significantly higher
(p < 0.01) in recent than in previous years.
In summary, researchers should pay closer attention to reporting technical aspects when using
PLS-SEM. The fact that the PLS-SEM algorithm practically always converges might tempt re-
searchers to put less emphasis on the technicalities of the analyses. For the reasons mentioned
above, however, this practice needs to be changed.

Observations and conclusions


Today, our understanding of PLS-SEM is much more developed as a result of recent analyses that
compare the method’s properties with those of CB-SEM (e.g., Chin and Newsted, 1999; Reinartz
et al., 2009) or newly emerging techniques for estimating structural equation models (Henseler,
2012; Hwang et al., 2010; Lu et al., 2011). While the comparative studies’ approaches and research
aims differ, collectively they show the benefits of PLS-SEM lie in its ability to identify relationships
among latent variables in the model when they in fact exist in the population (i.e., its statistical
power), especially in situations when sample sizes are small. This property makes PLS-SEM partic-
ularly useful in exploratory research settings. PLS-SEM also facilitates more flexibility in estimating
complex models and those incorporating formative indicators, situations in which the uses of clas-
sical covariance-based techniques often reach their limits (Hair et al., 2011). These characteristics
make PLS-SEM particularly useful for strategic management research that often deals with small
sample sizes, complex models, and formative measures, especially when analyzing the sources of
competitive advantage.



As with any statistical technique, PLS-SEM requires researchers to make several choices that, if
made incorrectly, can have substantial consequences for the validity of the results. Based on prior
methodological discussions, our own work in the area, and particularly the present review of pre-
vious applications of PLS-SEM in the Academy of Management Journal, Administrative Science
Quarterly, Journal of Management, Journal of Management Studies, Long Range Planning, Manage-
ment Science, Organization Science and Strategic Management Journal, we offer the following general
guidelines to future users of the technique.
First, more careful thought should be given to data characteristics. Even though PLS-SEM per-
forms well with small samples and non-normal data, researchers should not be careless in imple-
menting these advantages. Small sample sizes and skewed data easily increase sampling error
yielding inflated bootstrap standard errors. When this occurs, the technique’s statistical power is
reduced, offsetting one of PLS-SEM’s major advantages.
Second, researchers should pay closer attention to model specification issues. For example, using
formatively measured constructs in PLS-SEM implies that the indicators capture the entire con-
struct domain (or at least major parts of it). Similarly, using single-item measures is not without
problems. Considering that single-item measures significantly lag behind multi-item scales in terms
of predictive validity, their use should be avoided in PLS-SEM in most situations, especially in light
of the technique’s predictive focus.
Third, researchers should make greater use of model evaluation criteria, especially when assessing
the quality of formatively measured constructs. Our review shows that current practice leaves much
to be desired in this regard, casting doubt on the validity of some of the measures. Similarly, re-
searchers should make use of the full range of criteria available to assess the model’s predictive ca-
pabilities, such as the cross-validated redundancy index or the effect size. Researchers need to
understand that these measures are by definition not indicative of model fit in a covariance-
based sense, and any effort to interpret them as such should clearly be rejected.
Lastly, it is important to note that some of the criticism stated in the review might be misplaced
since there is no indication whether some of the admonished reporting elements have been dis-
carded in the course of the revision process. However, every study should provide the reader
with sufficient information to fully assess the quality of the research as well as replicate the results
(Stewart, 2009). To improve reporting practices of PLS-SEM studies in strategic management and
other disciplines, the aspects in Table 5 must be clearly addressed by providing details on 1) data
used and their characteristics, 2) model characteristics, 3) PLS-SEM algorithm settings and software
used, and 4) evaluation criteria results of inner and outer models. Acknowledging the trade-off be-
tween page restrictions and number of studies being published, journal editors should nevertheless
devote more journal space to fundamental reporting elements to improve the readers’ confidence in
the results.
Besides basic PLS-SEM analyses, researchers can take advantage of a much larger set of meth-
odological extensions of the PLS-SEM method, ranging from evaluation techniques such as blind-
folding (e.g., Chin, 1998) and confirmatory tetrad analysis (e.g., Gudergan et al., 2008) to
importance-performance analyses (e.g., Rigdon et al., 2011; Völckner et al., 2010), PLS-SEM multi-
group analyses (Henseler et al., 2009; Sarstedt et al., 2011), and response-based segmentation ap-
proaches such as finite mixture PLS (e.g., Hair et al., 2011; Rigdon et al., 2010). However, PLS-SEM
studies in strategic management very infrequently exploit these potentials and thus miss oppor-
tunities to further substantiate the appropriateness of their findings and to improve their analyses. For
instance, uncovering unobserved heterogeneity represents a key area of concern in every PLS-SEM
study and should be addressed in the results evaluation (Hair et al., 2012), while the
importance-performance analysis adds a second dimension (i.e., performance) to the analysis of results
and thereby yields richer, more differentiated findings. Hence, PLS-SEM research in strategic man-
agement should focus more strongly on presenting state-of-the-art applications.
As this and previous reviews of PLS-SEM use (Hair et al., 2012; Lee et al., 2011; Ringle et al.,
2012) have shown, PLS-SEM has already had a substantial impact on empirical research in several
disciplines. However, to further increase its usefulness in empirical research, the problem areas and

Table 5. Issues, Implications and Recommendations in the Application of PLS-SEM

Issue: Sample size requirements
Implications: Model generally identified, but low sample sizes deflate statistical power, especially when the outer model quality is poor and data are highly skewed
Recommendation: Use the ten times rule as a rough estimate of the required sample size; for a more thorough assessment, consider Cohen’s (1992) statistical power tables or carry out distinct power analyses
References: Hair et al. (2013); Reinartz et al. (2009)

Issue: Non-normal data
Implications: PLS-SEM is very robust when used on extremely non-normal data; however, bootstrapping standard errors may become inflated, especially when the sample size is small, yielding lower levels of statistical power
Recommendation: Examine the degree to which data are non-normal using Q-Q plots, Kolmogorov-Smirnov or Shapiro-Wilk tests
References: Cassel et al. (1999); Mooi and Sarstedt (2011); Reinartz et al. (2009)

Issue: Use of formative measures
Implications: PLS-SEM readily accommodates formative measures but assumes that the indicators cover the entire domain of the construct
Recommendation: Establish content validity by determining how well the indicators cover the entire content domain of the construct (or at least major aspects of it); also see formative outer model assessment
References: Diamantopoulos (2011)

Issue: Use of categorical variables
Implications: PLS-SEM can generally accommodate categorical variables, but their inclusion depends on their position in the outer model set-up and the number of indicators used per construct
Recommendation: Consider the PLS-SEM algorithm to assess the suitability of the model
References: Henseler (2010); Henseler et al. (2009)

Issue: Use of single-item measures
Implications: Leads to poor outer model quality in terms of predictive validity; aggravates PLS-SEM’s tendency to underestimate inner model relationships
Recommendation: Avoid using single-item measures; they should only be considered when practical considerations (e.g., population/sample is limited in size) require their use
References: Diamantopoulos et al. (2012)

Issue: Reflective outer model assessment
Implications: Establishing reliability and validity of reflective measures is a precondition for interpreting inner model estimates
Recommendation: Fully make use of popular criteria to assess the reflective outer models
References: Henseler et al. (2009); Hair et al. (2012, 2013)

Issue: Formative outer model assessment
Implications: Criteria used for reflective outer model evaluation are not universally applicable to formative measures
Recommendation: Consider established criteria (weights and their significance, multicollinearity) as well as more recently proposed methods (e.g., redundancy analysis, indicator loadings) to evaluate formative measures’ quality
References: Hair et al. (2011, 2013); Cenfetelli and Bassellier (2009)

Issue: Evaluation of the inner model
Implications: As PLS-SEM aims at maximizing the explained variance of the dependent variables, model quality criteria cannot indicate fit or a lack thereof in a CB-SEM sense
Recommendation: Consider the full range of criteria to assess the model’s predictive capabilities (i.e., R², Q²); bootstrapping analyses should also be used to construct (bias-corrected) confidence intervals
References: Hair et al. (2012, 2013)

Issue: Consideration of heterogeneous data structures
Implications: Analyses on the aggregate data level can seriously bias the results if the data structure is heterogeneous
Recommendation: Consider observable moderating variables or, in case of unobserved heterogeneity, apply latent class approaches such as FIMIX-PLS or PLS-GAS
References: Sarstedt et al. (2011); Rigdon et al. (2010); Ringle et al. (in press); Rigdon et al. (2011)

Issue: Reporting of computational settings
Implications: Reporting computational settings is highly important because specific (combinations of) settings
Recommendation: Fully report the following elements: standard PLS-SEM algorithm: weighting scheme, maximum
References: Hair et al. (2012); Hair et al. (2013)
can be considered conservative iterations, abort criterion
or liberal -Resampling procedures:
sign change option, number
of bootstrap cases and samples

recommendations about how to improve the practice of using PLS-SEM should be carefully taken
into account in future applications of the technique.
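The sample size guidance in Table 5 can be made concrete. The sketch below (an illustration added here, not part of the original article) encodes the "ten times rule" as it is commonly stated, e.g. by Barclay et al. (1995): the minimum sample size is ten times the larger of the maximum number of formative indicators on any one construct and the maximum number of structural paths directed at any one construct.

```python
def ten_times_rule(max_formative_indicators: int, max_inner_paths: int) -> int:
    """Rough minimum sample size for a PLS path model.

    Ten times the larger of:
      (a) the largest number of formative indicators on a single construct,
      (b) the largest number of inner model paths pointing at a single construct.
    This is only a rule of thumb; Table 5 recommends a proper power analysis
    (e.g., via Cohen's (1992) tables) for a more thorough assessment.
    """
    return 10 * max(max_formative_indicators, max_inner_paths)


# A construct measured with 4 formative indicators and a dependent construct
# receiving 6 structural paths imply a minimum of 60 observations.
print(ten_times_rule(4, 6))  # -> 60
```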
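For the non-normal data issue in Table 5, the degree of non-normality can be screened numerically as well as with Q-Q plots and formal tests. A minimal, stdlib-only Python sketch follows; the |value| > 1 cut-offs used to flag results are common rules of thumb, not thresholds taken from this article, and the data are hypothetical.

```python
import statistics

def skewness(xs):
    """Adjusted Fisher-Pearson sample skewness (0 for symmetric data)."""
    n, m, s = len(xs), statistics.fmean(xs), statistics.stdev(xs)
    return n / ((n - 1) * (n - 2)) * sum(((x - m) / s) ** 3 for x in xs)

def excess_kurtosis(xs):
    """Sample excess kurtosis (0 for normally distributed data)."""
    n, m, s = len(xs), statistics.fmean(xs), statistics.stdev(xs)
    g2 = (n * (n + 1)) / ((n - 1) * (n - 2) * (n - 3)) * sum(((x - m) / s) ** 4 for x in xs)
    return g2 - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))

# Hypothetical indicator scores with one extreme observation.
scores = [3.1, 4.2, 2.8, 5.1, 3.9, 4.4, 3.6, 4.7, 3.3, 9.8]
for name, value in [("skewness", skewness(scores)),
                    ("excess kurtosis", excess_kurtosis(scores))]:
    flag = "potentially problematic" if abs(value) > 1 else "unremarkable"
    print(f"{name}: {value:+.2f} ({flag})")
```

Severely skewed indicators flagged this way would, per Table 5, call for caution when interpreting bootstrap standard errors at small sample sizes.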
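Similarly, the bootstrapped confidence intervals that Table 5 recommends for inner model evaluation follow a simple resampling recipe. In PLS-SEM software the resampled statistic is a re-estimated path coefficient; this stdlib-only sketch uses the mean of a small hypothetical sample purely to show the percentile-bootstrap mechanics.

```python
import random
import statistics

def percentile_bootstrap_ci(sample, stat=statistics.fmean,
                            n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic on each resample, and read the interval off the empirical
    quantiles of the bootstrap distribution."""
    rng = random.Random(seed)
    boot = sorted(
        stat(rng.choices(sample, k=len(sample))) for _ in range(n_boot)
    )
    lo = boot[int(alpha / 2 * n_boot)]
    hi = boot[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical values standing in for a path coefficient's inputs.
estimates = [0.31, 0.42, 0.28, 0.51, 0.39, 0.44, 0.36, 0.47, 0.33, 0.40]
low, high = percentile_bootstrap_ci(estimates)
print(f"95% percentile bootstrap CI: [{low:.3f}, {high:.3f}]")
```

Bias-corrected variants, as mentioned in Table 5, adjust the quantiles for the offset between the original estimate and the median of the bootstrap distribution.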

References
Agarwal, R., Sarkar, M.B., Echambadi, R., 2002. The conditioning effect of time on firm survival: an industry life cycle approach. Academy of Management Journal 45 (5), 971–994.
Albers, S., 2010. PLS and success factor studies in marketing. In: Esposito Vinzi, V., Chin, W.W., Henseler, J., Wang, H. (Eds.), Handbook of Partial Least Squares: Concepts, Methods and Applications in Marketing and Related Fields. Springer, Berlin et al., pp. 409–425.
Babin, B.J., Hair, J.F., Boles, J.S., 2008. Publishing research in marketing journals using structural equations modeling. Journal of Marketing Theory and Practice 16 (4), 279–285.
Bagozzi, R.P., 2007. On the meaning of formative measurement and how it differs from reflective measurement: comment on Howell, Breivik, and Wilcox (2007). Psychological Methods 12 (2), 229–237.
Barclay, D.W., Higgins, C.A., Thompson, R., 1995. The partial least squares approach to causal modeling: personal computer adoption and use as illustration. Technology Studies 2 (2), 285–309.
Bass, F.M., 1969. A new product growth for model consumer durables. Management Science 15 (5), 215–227.
Baumgartner, H., Homburg, C., 1996. Applications of structural equation modeling in marketing and consumer research: a review. International Journal of Research in Marketing 13 (2), 139–161.
Birkinshaw, J., Hood, N., Jonsson, S., 1998. Building firm-specific advantages in multinational corporations: the role of subsidiary initiative. Strategic Management Journal 19 (3), 221–242.
Birkinshaw, J., Morrison, A., Hulland, J., 1995. Structural and competitive determinants of a global integration strategy. Strategic Management Journal 16 (8), 637–655.
Brannick, M.T., 1995. Critical comments on applying covariance structure modeling. Journal of Organizational Behavior 16 (3), 201–213.
Cassel, C., Hackl, P., Westlund, A.H., 1999. Robustness of partial least-squares method for estimating latent variable quality structures. Journal of Applied Statistics 26 (4), 435–446.
Cenfetelli, R.T., Bassellier, G., 2009. Interpretation of formative measurement in information systems research. MIS Quarterly 33 (4), 689–708.

Chin, W.W., 1998. The partial least squares approach to structural equation modeling. In: Marcoulides, G.A. (Ed.), Modern Methods for Business Research. Lawrence Erlbaum, Mahwah, pp. 295–358.
Chin, W.W., 2003. PLS Graph 3.0. Soft Modeling Inc., Houston.
Chin, W.W., 2010. How to write up and report PLS analyses. In: Esposito Vinzi, V., Chin, W.W., Henseler, J., Wang, H. (Eds.), Handbook of Partial Least Squares: Concepts, Methods and Applications in Marketing and Related Fields. Springer, Berlin et al., pp. 655–690.
Chin, W.W., Newsted, P.R., 1999. Structural equation modeling analysis with small samples using partial least squares. In: Hoyle, R.H. (Ed.), Statistical Strategies for Small Sample Research. Sage, Thousand Oaks, pp. 307–341.
Cohen, J., 1988. Statistical Power Analysis for the Behavioral Sciences, second ed. Lawrence Erlbaum Associates, Hillsdale, NJ.
Cohen, J., 1992. A power primer. Psychological Bulletin 112 (1), 155–159.
Cool, K., Dierickx, I., Jemison, D., 1989. Business strategy, market structure and risk-return relationships: a structural approach. Strategic Management Journal 10 (6), 507–522.
Devinney, T.M., Midgley, D.F., Venaik, S., 2000. The optimal performance of the global firm: formalizing and extending the integration-responsiveness framework. Organization Science 11 (6), 674–695.
Diamantopoulos, A., 2006. The error term in formative measurement models: interpretation and modeling implications. Journal of Modelling in Management 1 (1), 7–17.
Diamantopoulos, A., 2011. Incorporating formative measures into covariance-based structural equation models. MIS Quarterly 35 (2), 335–358.
Diamantopoulos, A., Riefler, P., 2011. Using formative measures in international marketing models: a cautionary tale using consumer animosity as an example. Advances in International Marketing 22, 11–30.
Diamantopoulos, A., Riefler, P., Roth, K.P., 2008. Advancing formative measurement models. Journal of Business Research 61 (12), 1203–1218.
Diamantopoulos, A., Sarstedt, M., Fuchs, C., Wilczynski, P., Kaiser, S., 2012. Guidelines for choosing between multi-item and single-item scales for construct measurement: a predictive validity perspective. Journal of the Academy of Marketing Science 40 (3), 434–449.
Diamantopoulos, A., Siguaw, J.A., 2006. Formative vs. reflective indicators in measure development: does the choice of indicators matter? British Journal of Management 17 (4), 263–282.
Diamantopoulos, A., Winklhofer, H.M., 2001. Index construction with formative indicators: an alternative to scale development. Journal of Marketing Research 38 (2), 269–277.
Doz, Y.L., Olk, P.M., Ring, P.S., 2000. Formation processes of R&D consortia: which path to take? Where does it lead? Strategic Management Journal 21 (3), 239–266.
Echambadi, R., Campbell, B., Agarwal, R., 2006. Encouraging best practice in quantitative management research: an incomplete list of opportunities. Journal of Management Studies 43 (8), 1801–1820.
Edwards, J.R., Bagozzi, R.P., 2000. On the nature and direction of relationships between constructs and measures. Psychological Methods 5 (2), 155–174.
Efron, B., Tibshirani, R., 1986. Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science 1 (1), 54–75.
Einhorn, H.J., 1972. Alchemy in the behavioral sciences. Public Opinion Quarterly 36 (3), 367–378.
Fornell, C., Lorange, P., Roos, J., 1990. The cooperative venture formation process: a latent variable structural modeling approach. Management Science 36 (10), 1246–1255.
Fornell, C.G., Bookstein, F.L., 1982. Two structural equation models: LISREL and PLS applied to consumer exit-voice theory. Journal of Marketing Research 19 (4), 440–452.
Fornell, C.G., Larcker, D.F., 1981. Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research 18 (1), 39–50.
Furrer, O., Thomas, H., Goussevskaia, A., 2008. The structure and evolution of the strategic management field: a content analysis of 26 years of strategic management research. International Journal of Management Reviews 10 (1), 1–23.
Geisser, S., 1974. A predictive approach to the random effects model. Biometrika 61 (1), 101–107.
Govindarajan, V., 1989. Implementing competitive strategies at the business unit level: implications of matching managers to strategies. Strategic Management Journal 10 (3), 251–269.
Gray, P.H., Meister, D.B., 2004. Knowledge sourcing effectiveness. Management Science 50 (6), 821–834.
Greenwald, A.G., Pratkanis, A.R., Leippe, M.R., Baumgardner, M.H., 1986. Under what conditions does theory obstruct research progress? Psychological Review 93 (2), 216–229.


Gudergan, S.P., Ringle, C.M., Wende, S., Will, A., 2008. Confirmatory tetrad analysis in PLS path modeling. Journal of Business Research 61 (12), 1238–1249.
Haenlein, M., Kaplan, A.M., 2004. A beginner's guide to partial least squares analysis. Understanding Statistics 3 (4), 283–297.
Hahn, C., Johnson, M.D., Herrmann, A., Huber, F., 2002. Capturing customer heterogeneity using a finite mixture PLS approach. Schmalenbach Business Review 54 (3), 243–269.
Hair, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M., 2013. A Primer on Partial Least Squares Structural Equation Modeling. Sage, Thousand Oaks.
Hair, J.F., Ringle, C.M., Sarstedt, M., 2011. PLS-SEM: indeed a silver bullet. Journal of Marketing Theory and Practice 19 (2), 139–151.
Hair, J.F., Sarstedt, M., Ringle, C.M., Mena, J.A., 2012. An assessment of the use of partial least squares structural equation modeling in marketing research. Journal of the Academy of Marketing Science 40 (3), 414–433.
Henseler, J., 2010. On the convergence of the partial least squares path modeling algorithm. Computational Statistics 25 (1), 107–120.
Henseler, J., 2012. Why generalized structured component analysis is not universally preferable to structural equation modeling. Journal of the Academy of Marketing Science 40 (3), 402–413.
Henseler, J., Chin, W.W., 2010. A comparison of approaches for the analysis of interaction effects between latent variables using partial least squares path modeling. Structural Equation Modeling 17 (1), 82–109.
Henseler, J., Fassott, G., 2010. Testing moderating effects in PLS path models: an illustration of available procedures. In: Esposito Vinzi, V., Chin, W.W., Henseler, J., Wang, H. (Eds.), Handbook of Partial Least Squares: Concepts, Methods and Applications in Marketing and Related Fields. Springer, Berlin et al., pp. 713–735.
Henseler, J., Ringle, C.M., Sarstedt, M., 2012. Using partial least squares path modeling in international advertising research: basic concepts and recent issues. In: Okazaki, S. (Ed.), Handbook of Research in International Advertising. Edward Elgar Publishing, Cheltenham, pp. 252–276.
Henseler, J., Ringle, C.M., Sinkovics, R.R., 2009. The use of partial least squares path modeling in international marketing. Advances in International Marketing 20, 277–320.
Henseler, J., Sarstedt, M. Goodness-of-fit indices for partial least squares path modeling. Computational Statistics, in press.
Howell, R.D., Breivik, E., Wilcox, J.B., 2007. Is formative measurement really measurement? Reply to Bollen (2007) and Bagozzi (2007). Psychological Methods 12 (2), 238–245.
Hui, B.S., Wold, H., 1982. Consistency and consistency at large of partial least squares estimates. In: Jöreskog, K.G., Wold, H. (Eds.), Systems Under Indirect Observation: Part II. North-Holland, Amsterdam, pp. 119–130.
Hulland, J., 1999. Use of Partial Least Squares (PLS) in strategic management research: a review of four recent studies. Strategic Management Journal 20 (2), 195–204.
Hwang, H., Malhotra, N.K., Kim, Y., Tomiuk, M.A., Hong, S.A., 2010. A comparative study on parameter recovery of three approaches to structural equation modeling. Journal of Marketing Research 47 (4), 699–712.
Im, G., Rai, A., 2008. Knowledge sharing ambidexterity in long-term interorganizational relationships. Management Science 54 (7), 1281–1296.
Jarvenpaa, S.L., Majchrzak, A., 2008. Knowledge collaboration among professionals protecting national security: role of transactive memories in ego-centered knowledge networks. Organization Science 19 (2), 260–276.
Johansson, J.K., Yip, G.S., 1994. Exploiting globalization potential: U.S. and Japanese strategies. Strategic Management Journal 15 (8), 579–601.
Jöreskog, K.G., 1978. Structural analysis of covariance and correlation matrices. Psychometrika 43 (4), 443–477.
Jöreskog, K.G., 1982. The LISREL approach to causal model-building in the social sciences. In: Wold, H., Jöreskog, K.G. (Eds.), Systems Under Indirect Observation: Part I. North-Holland, Amsterdam, pp. 81–100.
Jöreskog, K.G., Wold, H., 1982. The ML and PLS techniques for modeling with latent variables: historical and comparative aspects. In: Jöreskog, K.G., Wold, H. (Eds.), Systems Under Indirect Observation: Part I. North-Holland, Amsterdam, pp. 263–270.
Karlis, D., Saporta, G., Spinakis, A., 2003. A simple rule for the selection of principal components. Communications in Statistics – Theory and Methods 32 (3), 643–666.

Lee, L., Petter, S., Fayard, D., Robinson, S., 2011. On the use of partial least squares path modeling in accounting research. International Journal of Accounting Information Systems 12 (4), 305–328.
Li, Y., 2005. PLS-GUI – Graphic User Interface for Partial Least Squares (PLS-PC 1.8). University of South Carolina, Columbia, SC.
Lohmöller, J.-B., 1987. LVPLS 1.8. Lohmöller, Cologne.
Lohmöller, J.-B., 1989. Latent Variable Path Modeling with Partial Least Squares. Physica, Heidelberg.
Lu, I.R.R., Kwan, E., Thomas, D.R., Cedzynski, M., 2011. Two new methods for estimating structural equation models: an illustration and a comparison with two established methods. International Journal of Research in Marketing 28 (3), 258–268.
MacCallum, R.C., Browne, M.W., Sugawara, H.M., 1996. Power analysis and determination of sample size for covariance structure modeling. Psychological Methods 1 (2), 130–149.
Mahajan, V., Muller, E., Bass, F.M., 1995. Diffusion of new products: empirical generalizations and managerial uses. Marketing Science 14 (3), G79–G88.
Mooi, E., Sarstedt, M., 2011. A Concise Guide to Market Research: The Process, Data, and Methods Using IBM SPSS Statistics. Springer, Berlin et al.
Petter, S., Straub, D., Rai, A., 2007. Specifying formative constructs in information systems research. MIS Quarterly 31 (4), 623–656.
Podsakoff, N.P., Shen, W., Podsakoff, P.M., 2006. The role of formative measurement models in strategic management research: review, critique, and implications for future research. In: Ketchen, D.J., Bergh, D.D. (Eds.), Research Methodology in Strategy and Management. Emerald, pp. 197–252.
Podsakoff, P.M., MacKenzie, S.B., Podsakoff, N.P., Lee, J.Y., 2003. The mismeasure of man(agement) and its implications for leadership research. The Leadership Quarterly 14 (6), 615–656.
Purvis, R.L., Sambamurthy, V., Zmud, R.W., 2001. The assimilation of knowledge platforms in organizations: an empirical investigation. Organization Science 12 (2), 117–135.
Raisch, S., Birkinshaw, J., 2008. Organizational ambidexterity: antecedents, outcomes, and moderators. Journal of Management 34 (3), 375–409.
Reinartz, W.J., Haenlein, M., Henseler, J., 2009. An empirical comparison of the efficacy of covariance-based and variance-based SEM. International Journal of Research in Marketing 26 (4), 332–344.
Revelle, W., 1979. Hierarchical clustering and the internal structure of tests. Multivariate Behavioral Research 14 (1), 57–74.
Rigdon, E.E. Partial least squares path modeling. In: Hancock, G., Mueller, R.O. (Eds.), Structural Equation Modeling: A Second Course. Information Age Publishing, Charlotte, NC, in press.
Rigdon, E.E., Ringle, C.M., Sarstedt, M., 2010. Structural modeling of heterogeneous data with partial least squares. In: Malhotra, N.K. (Ed.), Review of Marketing Research. Sharpe, Armonk, pp. 255–296.
Rigdon, E.E., Ringle, C.M., Sarstedt, M., Gudergan, S.P., 2011. Assessing heterogeneity in customer satisfaction studies: across industry similarities and within industry differences. Advances in International Marketing 22, 169–194.
Ringle, C.M., Sarstedt, M., Schlittgen, R., Taylor, C.R. PLS path modeling and evolutionary segmentation. Journal of Business Research, in press.
Ringle, C.M., Sarstedt, M., Straub, D.W., 2012. A critical look at the use of PLS-SEM in MIS Quarterly. MIS Quarterly 36 (1), iii–xiv.
Ringle, C.M., Wende, S., Will, A., 2005. SmartPLS 2.0. Hamburg, www.smartpls.de.
Robins, J.A., Tallman, S., Fladmoe-Lindquist, K., 2002. Autonomy and dependence of international cooperative ventures: an exploration of the strategic performance of U.S. ventures in Mexico. Strategic Management Journal 23 (10), 881–901.
Sahmer, K., Hanafi, M., Qannari, M., 2006. Assessing unidimensionality within PLS path modeling framework. In: Spiliopoulou, M., Kruse, R., Borgelt, C., Nürnberger, A., Gaul, W. (Eds.), From Data and Information Analysis to Knowledge Engineering. Springer, Berlin Heidelberg, pp. 222–229.
Sarkar, M.B., Echambadi, R., Harrison, J.S., 2001. Alliance entrepreneurship and firm market performance. Strategic Management Journal 22 (6/7), 701–711.
Sarstedt, M., 2008. A review of recent approaches for capturing heterogeneity in partial least squares path modelling. Journal of Modelling in Management 3 (2), 140–161.
Sarstedt, M., Henseler, J., Ringle, C.M., 2011. Multi-group analysis in partial least squares (PLS) path modeling: alternative methods and empirical results. Advances in International Marketing 22, 195–218.
Shah, R., Goldstein, S.M., 2006. Use of structural equation modeling in operations management research: looking back and forward. Journal of Operations Management 24 (2), 148–169.


Shook, C.L., Ketchen, D.J., Hult, T., Kacmar, K.M., 2004. An assessment of the use of structural equation modeling in strategic management research. Strategic Management Journal 25 (4), 397–404.
Staples, S.D., Hulland, J.S., Higgins, C.A., 1999. A self-efficacy theory explanation for the management of remote workers in virtual organizations. Organization Science 10 (6), 758–776.
Steenkamp, J.-B.E.M., van Trijp, H.C.M., 1991. The use of LISREL in validating marketing constructs. International Journal of Research in Marketing 8 (4), 283–299.
Stewart, D., 2009. The role of method: some parting thoughts from a departing editor. Journal of the Academy of Marketing Science 37 (4), 381–383.
Stone, M., 1974. Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, Series B 36 (2), 111–147.
Tenenhaus, M., Amato, S., Esposito Vinzi, V., 2004. A global goodness-of-fit index for PLS structural equation modeling. Proceedings of the XLII SIS Scientific Meeting. CLEUP, Padova, pp. 739–742.
Tenenhaus, M., Esposito Vinzi, V., Chatelin, Y.-M., Lauro, C., 2005. PLS path modeling. Computational Statistics & Data Analysis 48 (1), 159–205.
Tiwana, A., 2008. Do bridging ties complement strong ties? An empirical examination of alliance ambidexterity. Strategic Management Journal 29 (3), 251–272.
Völckner, F., Sattler, H., Hennig-Thurau, T., Ringle, C.M., 2010. The role of parent brand quality for service brand extension success. Journal of Service Research 13 (4), 359–361.
Wold, H., 1982. Soft modeling: the basic design and some extensions. In: Jöreskog, K.G., Wold, H. (Eds.), Systems Under Indirect Observation: Part II. North-Holland, Amsterdam, pp. 1–54.
Wold, H., 1985. Partial least squares. In: Kotz, S., Johnson, N.L. (Eds.), Encyclopedia of Statistical Sciences. Wiley, New York, pp. 581–591.
Zott, C., Amit, R., 2007. Business model design and the performance of entrepreneurial firms. Organization Science 18 (2), 181–199.

Biographies
Dr. Joseph F. Hair is Professor of Marketing at Coles College of Business, Kennesaw State University, USA. His
research mainly focuses on multivariate analysis methods and their application in business research. E-mail:
[email protected]. Please visit his webpage (http://coles.kennesaw.edu/departments_faculty/faculty-pages/Hair-
JoeF.htm) for more information on Dr. Hair.
Dr. Marko Sarstedt is Professor of Marketing at Otto-von-Guericke-University Magdeburg and Visiting Professor
at the University of Newcastle, Australia. His research interests include PLS-SEM, measurement principles, and
corporate reputation. His research has been published in journals such as the Journal of the Academy of Marketing
Science, Journal of Business Research, and MIS Quarterly. E-mail: [email protected]
Dr. Torsten M. Pieper is Assistant Professor at the Department of Management and Entrepreneurship, Kennesaw
State University, USA. His research mainly addresses family business, strategic management, and research methods.
E-mail: [email protected]. Please visit his webpage (http://coles.kennesaw.edu/departments_faculty/faculty-
pages/Pieper-Torsten.htm) for more information on Dr. Pieper.
Dr. Christian M. Ringle is a Full Professor and Managing Director of the Institute for Human Resource Manage-
ment and Organizations (HRMO) at Hamburg University of Technology (TUHH), Germany, and Visiting Pro-
fessor at the University of Newcastle, Australia. His research mainly addresses strategic management, organizations,
marketing, human resource management, and quantitative methods for business and market research.
E-mail: [email protected]. Please visit http://www.tuhh.de/hrmo for more information on Dr. Ringle.


