Measuring service quality in higher education: HEdPERF versus SERVPERF

Firdaus Abdullah
MARA University of Technology, Jalan Meranek, Malaysia

Received May 2004
Revised October 2005
Accepted October 2005
Abstract
Purpose – This paper aims to test and compare the relative efficacy of three measuring instruments
of service quality (namely Higher Education PERFormance (HEdPERF), SERVPERF and the
moderating scale of HEdPERF-SERVPERF) within a higher education setting. The objective was to
determine which instrument had the superior measuring capability in terms of unidimensionality,
reliability, validity and explained variance.
Design/methodology/approach – After a pilot test, data were collected from students in two
public universities, one private university and three private colleges in Malaysia between January and
March 2004, by the “contact person” route. From a total of 560 questionnaires, 381 were usable: a response rate of 68.0 per cent. This sample, drawn from a population of nearly 400,000 students in Malaysian tertiary institutions, was in line with the generalized scientific guideline for sample size decisions. Data were subjected to regression analysis.
Findings – A modified five-factor structure of HEdPERF is put forward as the most appropriate
scale for the higher education sector.
Research limitations/implications – Since this study only examined the respective utilities of
each instrument within a single industry, any suggestion that the HEdPERF is generally superior
would still be premature. Nonetheless, the current findings do provide some important insights into
how these instruments of service quality compare with one another.
Practical implications – The single dominant factor in this study is “access”, which has clear implications for institutions’ marketing strategies.
Originality/value – This is believed to be the first study of its kind carried out among consumers of
the higher education service.
Keywords Service quality assurance, Higher education, Measuring instruments
Paper type Research paper
Introduction
Service industries are playing an increasingly important role in the economy of many
nations. In today’s world of global competition, rendering quality service is a key for
success, and many experts concur that the most powerful competitive trend currently
shaping marketing and business strategy is service quality. Since the 1980s service
quality has been linked with increased profitability, and it is seen as providing an
important competitive advantage by generating repeat sales, positive word-of-mouth
feedback, customer loyalty and competitive product differentiation. As Zeithaml and Bitner (1996, p. 76) point out:

. . . the issue of highest priority today involves understanding the impact of service quality on profit and other financial outcomes of the organisation.
Service quality has since emerged as a pervasive strategic force and a key strategic issue on management’s agenda. It is no surprise that practitioners and academics alike are keen on accurately measuring service quality in order better to understand its essential antecedents and consequences, and ultimately, establish methods for improving quality to achieve competitive advantage and build customer loyalty. The pressures driving successful organisations toward top quality services make the measurement of service quality and its subsequent management of utmost importance.
Interest in the measurement of service quality is thus understandably high. However,
the problem inherent in the implementation of such a strategy has been compounded
by the elusive nature of the service quality construct, rendering it extremely difficult to
define and measure. Although researchers have devoted a great deal of attention
to service quality, there are still some unresolved issues that need to be addressed, and
the most controversial one refers to the measurement instrument.
Attempts to define an evaluation standard independent of any particular service context have stimulated the development of several methodologies. In the last decade, the
emergence of diverse instruments of measurement such as SERVQUAL (Parasuraman
et al., 1988), SERVPERF (Cronin and Taylor, 1992) and evaluated performance (EP)
(Teas, 1993a, b) has contributed enormously to the development in the study of service
quality. SERVQUAL operationalises service quality by comparing the perceptions of
the service received with expectations, while SERVPERF maintains only the
perceptions of service quality. The EP scale, on the other hand, measures the gap between perceived performance and the ideal amount of a feature rather than the customer’s expectations. Studies using these scales have revealed difficulties arising as much from their conceptual and theoretical foundations as from their empirical application.
Nevertheless, many authors concur that customers’ assessments of continuously
provided services may depend solely on performance, thereby suggesting that
performance-based measure explains more of the variance in an overall measure of
service quality (Oliver, 1989; Bolton and Drew, 1991a, b; Cronin and Taylor, 1992;
Boulding et al., 1993; Quester et al., 1995). These findings are consistent with other
research that has compared these methods across a range of service activities, thus
confirming that SERVPERF (performance-only) results in more reliable estimations,
greater convergent and discriminant validity, greater explained variance, and
consequently less bias than the SERVQUAL and EP scales (Cronin and Taylor, 1992;
Parasuraman et al., 1994; Quester et al., 1995; Llusar and Zornoza, 2000). Whilst its impact in the service quality domain is undeniable, SERVPERF, being a generic measure of service quality, may not be an entirely adequate instrument for assessing perceived quality in higher education.
Nowadays, higher education is being driven towards commercial competition by economic forces resulting from the development of global education markets and the reduction of government funding, which compels tertiary institutions to seek other financial sources. Tertiary institutions have had to be concerned not only with what society values in the skills and abilities of their graduates (Ginsberg, 1991; Lawson, 1992), but also with how their students feel about their educational experience (Bemowski,
1991). These new perspectives call attention to the management processes within the
institutions as an alternative to the traditional areas of academic standards,
accreditation and performance indicators of teaching and research.
Tertiary educators are being called to account for the quality of education that they provide. While more accountability in tertiary education is probably desirable, the mechanisms for its achievement are being hotly debated. Hattie (1990) and Soutar and McNeil (1996) oppose the current system of centralised control, in which the government sets up a number of performance indicators that are linked to funding decisions. There are a number of problems in developing performance indicators in tertiary education. One such problem is that performance indicators tend to become measures of activity rather than true measures of the quality of students’ educational service (Soutar and McNeil, 1996). These performance indicators may have something to do with the provision of tertiary education, but they certainly fail to measure the quality of education provided in any comprehensive way.
A survey conducted by Owlia and Aspinwall (1997) examined the views of different professionals and practitioners on quality in higher education and concluded that customer-orientation in higher education is a generally accepted principle. They found that, among the different customers of higher education, students were ranked highest. The student experience in a tertiary institution should therefore be a key issue that performance indicators need to address. Thus it becomes important to identify the determinants or critical factors of service quality from the standpoint of students as the primary customers.
In view of that, Firdaus (2005) proposed HEdPERF (Higher Education PERFormance-only), a new and more comprehensive performance-based measuring scale that attempts to capture the authentic determinants of service quality within the higher education sector. The 41-item instrument has been empirically tested for unidimensionality, reliability and validity using both exploratory and confirmatory factor analysis. The primary question of this study therefore concerns the measurement of the service quality construct within a single empirical study utilising customers of a single industry, namely higher education. Specifically, the ability of the more concise HEdPERF scale is compared with that of two alternatives, namely the SERVPERF instrument and the merged HEdPERF-SERVPERF moderating scale. The goal is to assess the relative strengths and weaknesses of each instrument in order to determine which has the superior measurement capability in terms of unidimensionality, reliability, validity and explained variance of service quality. The results of this comparative study were eventually used to refine the HEdPERF scale, transforming it into an ideal measuring instrument of service quality for the higher education sector.
Research foundations
Many researchers (Parasuraman et al., 1985; Carman, 1990; Bolton and Drew, 1991a, b)
concur that service quality is an elusive concept, and there is considerable debate about
how best to conceptualise this phenomenon. Lewis and Booms (1983, p. 100) were
perhaps the first to define service quality as a “. . . measure of how well the service level
delivered matches the customer’s expectations”. Thereafter, there seems to be a broad
consensus that service quality is an attitude of overall judgement about service
superiority, although the exact nature of this attitude is still hazy. Some suggest
that it stems from a comparison of performance perceptions with expectations
(Parasuraman et al., 1988), while others argue that it is derived from a comparison of
performance with ideal standards (Teas, 1993a, b) or from perceptions of performance
alone (Cronin and Taylor, 1992).
In terms of measurement methodologies, a review of the literature provides plenty of service quality evaluation scales. Some stem from the realisation of conceptual models produced to understand the evaluation process (Parasuraman et al., 1985), and others come from empirical analysis and experimentation on different service sectors (Cronin and Taylor, 1992; Franceschini and Rossetto, 1997b; Parasuraman et al., 1988). The most widely used methods applied to measure perceived quality can be characterised as primarily quantitative multi-attribute measurements. Within the attribute-based methods, a great number of variants exist and, among these variants, the SERVQUAL and SERVPERF instruments have attracted the greatest attention.
Generally, most researchers acknowledge that customers have expectations and
these serve as standards or reference points to evaluate the performance of an
organisation. However, the unresolved issues of expectations as a determinant of
perceived service quality have resulted in two conflicting measurement paradigms: the
disconfirmation paradigm (SERVQUAL) which compares the perceptions of the
service received with expectations, and the perception paradigm (SERVPERF) which
maintains only the perceptions of service quality. These instruments share the same
concept of perceived quality. The main difference between these scales lies in the
formulation adopted for their calculation, and more concretely, the utilisation of
expectations and the type of expectations that should be used.
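The difference between the two formulations can be illustrated with a brief sketch. The item ratings below are hypothetical, and the unweighted averaging is only one common scoring convention, not a procedure taken from this study:

```python
import numpy as np

def servqual_score(perceptions, expectations):
    """Disconfirmation paradigm: quality = perception minus expectation,
    averaged over items (illustrative, unweighted)."""
    return np.mean(np.asarray(perceptions) - np.asarray(expectations))

def servperf_score(perceptions):
    """Perception paradigm: quality = perception only, averaged over items."""
    return np.mean(np.asarray(perceptions))

# Hypothetical 7-point Likert ratings for five items from one respondent
p = [6, 5, 7, 6, 5]   # perceived performance
e = [7, 6, 6, 7, 6]   # expectations
print(servqual_score(p, e))   # -0.6: performance falls short of expectations
print(servperf_score(p))      #  5.8: performance-only score
```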
Most research studies do not support the five-factor structure of SERVQUAL
posited by Parasuraman et al. (1988), and administering expectation items is also
considered unnecessary (Carman, 1990; Parasuraman et al., 1991a, b; Babakus and
Boller, 1992). Cronin and Taylor (1992) were particularly vociferous in their critiques,
thus developing their own performance-based measure, dubbed SERVPERF. In fact,
the SERVPERF scale comprises the unweighted perceptions component of SERVQUAL, consisting of 22 perception items and excluding any consideration of expectations. In their empirical work in four industries, Cronin and Taylor (1992) found that the unweighted SERVPERF measure (performance-only) performs better than any other measure of service quality, and that it has greater predictive power (ability to
provide an accurate service quality score) than SERVQUAL. They argue that current
performance best reflects a customer’s perception of service quality, and that
expectations are not part of this concept.
Likewise, Boulding et al. (1993) reject the value of an expectations-based
SERVQUAL, and concur that service quality is only influenced by perceptions. Quester et al. (1995) performed a similar analysis to Cronin and Taylor in the Australian advertising industry, and their empirical tests show that SERVPERF performs best while SERVQUAL performs worst, although the differences are small. Teas (1993a), on the other hand, discusses the conceptual and operational difficulties of using the “expectations minus performance” approach, with a particular emphasis on expectations. His empirical test subsequently produced two alternative measures of perceived service quality, namely EP and normed quality (NQ). He concludes that the
EP instrument, which measures the gap between perceived performance and the ideal
amount of a feature rather than the customer’s expectations, outperforms both
SERVQUAL and NQ.
A review of the service quality literature brings forward diverse arguments concerning the advantages and disadvantages of these instruments. In general, the arguments refer to the characteristics of the scales, notably their reliability and validity. Recently, Llusar and Zornoza (2000) concurred that SERVPERF results in more reliable estimations, greater convergent and discriminant validity, greater explained variance, and consequently less bias than the EP scale. These results are consistent with earlier research that had compared these methods across a range of service activities (Cronin and Taylor, 1992; Parasuraman et al., 1994). In fact, the marketing literature appears to offer considerable support for the superiority of simple performance-based measures of service quality (Mazis et al., 1975; Churchill and Surprenant, 1982; Carman, 1990; Bolton and Drew, 1991a, b; Boulding et al., 1993; Teas, 1993a; Quester et al., 1995).
Research methodology
Research objectives
On the basis of the conceptual and operational concerns associated with the generic
measures of service quality, the present research attempts to compare and contrast
empirically the HEdPERF scale against two alternatives namely the SERVPERF and
the merged HEdPERF-SERVPERF scales. The primary goal is to assess the relative
strengths and weaknesses of each instrument in order to determine which has the superior measurement capability in terms of unidimensionality, reliability, validity and explained variance of service quality. The findings were eventually used to transform HEdPERF into an ideal measuring instrument of service quality for the higher education sector. The various steps involved in this comparative study are shown by means of a flow chart in Figure 1.
Research design
Data were collected by means of a structured questionnaire comprising four sections, namely A, B, C and D. Section A contained nine questions pertaining to the student respondent’s profile. Sections B and C required respondents to evaluate the service components of their tertiary institution; only perceptions data were collected and analysed. Specifically, section B consisted of 22 perception items extracted from the original SERVPERF scale (Cronin and Taylor, 1992) and modified to fit the higher education context.
Section C, on the other hand, comprised 41 items extracted from the original HEdPERF (Firdaus, 2005), a scale uniquely developed to embrace different aspects of a tertiary institution’s service offering. As the items were generated and validated within the higher education context, no modification was required. All the items in sections B and C were presented as statements on the questionnaire, with the same rating scale used throughout, and measured on a 7-point Likert-type scale ranging from 1 = strongly disagree to 7 = strongly agree. In addition to the main scale addressing individual items, respondents were asked in section D to provide an overall rating of service quality, satisfaction level and future visits. There were also three open-ended questions allowing respondents to give their personal views on how any aspect of the service could be improved.
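As a rough sketch of how such responses might be organised for analysis, the record below uses hypothetical field names and ratings; only a handful of items are shown, whereas the actual instrument contains nine profile questions, 22 SERVPERF items and 41 HEdPERF items:

```python
# A minimal sketch of one respondent's record; field names and values are hypothetical
respondent = {
    "profile": {"institution": "private college", "semester": 4},   # section A (9 questions)
    "servperf_items": [5, 6, 4, 7, 5],                              # section B (22 items, rated 1-7)
    "hedperf_items": [6, 6, 5, 7, 6, 4],                            # section C (41 items, rated 1-7)
    "overall": {"service_quality": 6, "satisfaction": 6},           # section D overall ratings
}

# Performance-only scale scores are simple means of the 1-7 item ratings
servperf_score = sum(respondent["servperf_items"]) / len(respondent["servperf_items"])
hedperf_score = sum(respondent["hedperf_items"]) / len(respondent["hedperf_items"])
print(round(servperf_score, 2), round(hedperf_score, 2))
```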
The draft questionnaire was eventually subjected to pilot testing with a total of 30 students, who were asked to comment on any perceived ambiguities, omissions or errors. The feedback received was rather ambiguous, thus only minor changes were made; for instance, technical jargon was rephrased to ensure clarity and simplicity. The revised questionnaire was subsequently submitted to three experts (an academician, a researcher and a practitioner) for feedback before being administered in a full-scale survey. These experts viewed the draft questionnaire as rather lengthy, which in fact coincided with the preliminary feedback from students. Nevertheless, in terms of the number of items in the questionnaire, the current study somewhat conforms with similar research works (Cronin and Taylor, 1992; Teas, 1993a, b; Lassar et al., 2000; Mehta et al., 2000; Robledo, 2001) that attempted to compare various measuring instruments of service quality.

Figure 1. Comparing HEdPERF, SERVPERF and HEdPERF-SERVPERF
In the subsequent full-scale survey, data were collected from students of six higher
learning institutions (two public universities, one private university and three private
colleges) in Malaysia between January and March 2004. Data were collected using the “personal-contact” approach suggested by Sureshchandar et al. (2002), whereby “contact persons” (the registrar or assistant registrar) were approached personally and the survey explained in detail. The final questionnaire together with a
cover letter was then handed personally or mailed to the “contact persons”, who in turn
distributed it randomly to students within their respective institutions.
A total of 560 questionnaires were distributed to the six tertiary institutions; of these, 390 were returned and nine were discarded due to incomplete responses, leaving 381 usable responses and a usable response rate of 68.0 per cent. A usable sample of 381 from a population of nearly 400,000 students in Malaysian tertiary institutions is in line with the generalized scientific guideline for sample size decisions proposed by Krejcie and Morgan (1970).
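For readers who wish to verify these figures, the response-rate arithmetic and the formula underlying the Krejcie and Morgan (1970) sample-size guideline can be sketched as follows; the formula parameters assume the conventional 95 per cent confidence level and 5 per cent margin of error, which the paper itself does not state explicitly:

```python
import math

# Response-rate arithmetic reported above
distributed, returned, discarded = 560, 390, 9
usable = returned - discarded                 # 381 usable responses
print(usable, round(100 * usable / distributed, 1))   # 381, 68.0 per cent

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Krejcie and Morgan (1970) required sample size for population N
    (chi-square at 1 d.f. and 95 per cent confidence, P = 0.5, 5 per cent margin)."""
    return math.ceil((chi2 * N * P * (1 - P)) / (d**2 * (N - 1) + chi2 * P * (1 - P)))

print(krejcie_morgan(400_000))   # approximately 384, so 381 is broadly in line
```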
In order to determine which instrument has the superior measurement capability, a new scale was developed by merging the two measuring instruments, HEdPERF and SERVPERF. The scope of the empirical investigation, notably the methodology utilised for the development of the merged scale, was defined. Next, the results obtained from the three instruments were computed and compared on the widely used criteria of unidimensionality, reliability, validity and ability to predict service quality. The results of this comparative study were subsequently used to refine the HEdPERF scale, transforming it into an ideal measuring instrument of service quality for the higher education sector.
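Two of these comparison criteria lend themselves to a brief numerical illustration: reliability is commonly summarised by Cronbach’s alpha, and predictive ability by the variance in an overall quality rating explained by a scale’s score. The sketch below uses hypothetical ratings and is not the study’s own procedure:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def explained_variance(scale_scores, overall_quality):
    """R-squared from regressing an overall quality rating on a scale's mean score."""
    X = np.column_stack([np.ones_like(scale_scores), scale_scores])
    beta, *_ = np.linalg.lstsq(X, overall_quality, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((overall_quality - fitted) ** 2)
    ss_tot = np.sum((overall_quality - overall_quality.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Hypothetical data: six respondents rating four items, plus an overall quality rating
ratings = np.array([[6, 5, 6, 7], [4, 4, 5, 4], [7, 6, 7, 7],
                    [3, 4, 3, 4], [5, 5, 6, 5], [6, 6, 6, 7]])
overall = np.array([6, 4, 7, 3, 5, 6])
print(cronbach_alpha(ratings), explained_variance(ratings.mean(axis=1), overall))
```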
It is important to note that the four factors identified did not conform exactly with either the six-factor structure of HEdPERF or the five-factor structure of SERVPERF. In fact, the new dimensions extracted were the result of the amalgamation of the HEdPERF and SERVPERF scales, in which two factors (non-academic aspects and academic aspects) were found in HEdPERF and the other two (reliability and empathy) were identified in SERVPERF.

Table I. Results of factor analysis (factor loadings)
Factor 1: non-academic aspects; Factor 2: academic aspects; Factor 3: reliability; Factor 4: empathy

Variables (loadings of 0.3 and above):
Promises kept (0.65)
Sympathetic and reassuring in solving problems (0.38, 0.57)
Dependability (0.52)
On-time service provision (0.74)
Responding to requests promptly (0.51)
Trust (0.44)
Feeling secure with the transaction (0.47, 0.31)
Politeness (0.45, 0.32)
Individualised attention (0.68)
Giving personalized attention (0.74)
Knowing student needs (0.66)
Keeping student interests at heart (0.55)
Knowledge in course content (0.33, 0.57)
Showing positive attitude (0.66)
Good communication (0.75)
Feedback on progress (0.68)
Sufficient and convenient consultation time (0.56, 0.32)
Excellent quality programmes (0.62)
Variety of programmes/specialisations (0.43)
Flexible syllabus and structure (0.31, 0.60)
Reputable academic programmes (0.31, 0.56)
Educated and experienced academicians (0.50)
Efficient/prompt dealing with complaints (0.51)
Good communication (0.51, 0.37)
Positive work attitude (0.53, 0.36)
Knowledge of systems/procedures (0.50, 0.36)
Providing service within reasonable time (0.51)
Equal treatment and respect (0.75)
Fair amount of freedom (0.56)
Confidentiality of information (0.65)
Easily contacted by telephone (0.57)
Counselling services (0.53, 0.32)
Student’s union (0.33)
Feedback to improve service performance (0.36, 0.32, 0.33)
Standardised and simple delivery procedures (0.42, 0.35)

Eigenvalues: 10.29 (Factor 1); 2.79 (Factor 2); 2.68 (Factor 3); 1.72 (Factor 4)
Percentage of variance: 26.2; 5.9; 5.8; 3.0
Cumulative percentage of variance: 26.2; 32.2; 38.0; 41.0
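The eigenvalues and percentages of variance reported in Table I are standard outputs of an exploratory factor analysis of the item correlation matrix. The sketch below, run on randomly generated stand-in ratings rather than the study’s data, shows how such quantities are typically computed:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in response matrix: 300 respondents x 20 Likert items
# (the actual study pooled 41 HEdPERF and 22 SERVPERF items)
responses = rng.integers(1, 8, size=(300, 20)).astype(float)

# Eigen-decomposition of the item correlation matrix; each eigenvalue
# indicates how much variance the corresponding factor accounts for
R = np.corrcoef(responses, rowvar=False)
eigenvalues = np.linalg.eigh(R)[0][::-1]          # sort descending

pct_variance = 100 * eigenvalues / eigenvalues.sum()
cumulative = np.cumsum(pct_variance)
retained = eigenvalues > 1.0                      # Kaiser criterion: eigenvalue > 1
print(np.round(eigenvalues[:4], 2), np.round(pct_variance[:4], 1), np.round(cumulative[:4], 1))
print("factors retained:", retained.sum())
```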
References
Anderson, J.C. and Gerbing, D.W. (1991), “Predicting the performance of measures in a
confirmatory factor analysis with a pretest assessment of their substantive validities”,
Journal of Applied Psychology, Vol. 76 No. 5, pp. 732-40.
Babakus, E. and Boller, G.W. (1992), “An empirical assessment of the SERVQUAL scale”, Journal
of Business Research, Vol. 24 No. 3, pp. 253-68.
Bemowski, K. (1991), “Restoring the pillars of higher education”, Quality Progress, October,
pp. 37-42.
Bollen, K.A. (1989), Structural Equations with Latent Variables, Wiley, New York, NY.
Bolton, R.N. and Drew, J.H. (1991a), “A longitudinal analysis of the impact of service changes on
customer attitudes”, Journal of Marketing, Vol. 55, pp. 1-9.
Bolton, R.N. and Drew, J.H. (1991b), “A multi stage model of customer’s assessments of service
quality and value”, Journal of Consumer Research, Vol. 17, pp. 375-84.
Boulding, W., Kalra, A., Staelin, R. and Zeithaml, V.A. (1993), “A dynamic process model of
service quality: from expectations to behavioural intentions”, Journal of Marketing
Research, Vol. 30, pp. 7-27.
Brown, M.W. and Cudeck, R. (1993), “Alternative ways of assessing model fit”, in Bollen, K.A.
and Long, J.S. (Eds), Testing Structural Equation Models, Sage, Newbury Park, CA.
Byrne, B.M. (1994), Structural Equation Modelling with EQS and EQS/Windows: Basic Concepts, Applications and Programming, Sage, Thousand Oaks, CA.
Byrne, B.M. (1998), Structural Equation Modeling with LISREL, PRELIS and SIMPLIS: Basic Concepts, Applications and Programming, Lawrence Erlbaum Associates, Mahwah, NJ.
Carman, J.M. (1990), “Consumer perceptions of service quality: an assessment of the SERVQUAL
dimensions”, Journal of Retailing, Vol. 66, pp. 33-55.
Cattell, R.B. (1966), “The scree test for the number of factors”, Multivariate Behavioural Research,
Vol. 1, pp. 245-76.
Churchill, G.A. and Surprenant, C. (1982), “An investigation into the determinants of customer
satisfaction”, Journal of Marketing Research, Vol. 19, pp. 491-504.
Cronin, J.J. and Taylor, S.A. (1992), “Measuring service quality: reexamination and extension”,
Journal of Marketing, Vol. 56, pp. 55-68.
Diamantopoulos, A. and Siguaw, J.A. (2000), Introducing LISREL, Sage, London.
Eisen, S.V., Wilcox, M. and Leff, H.S. (1999), “Assessing behavioural health outcomes in
outpatient programs: reliability and validity of the BASIS-32”, Journal of Behavioural
Health Sciences & Research, Vol. 26 No. 4, pp. 5-17.
Finn, D.W. and Lamb, C.W. (1991), “An evaluation of the SERVQUAL scale in a retailing
setting”, in Holman, R. and Solomon, M.R. (Eds), Advances in Consumer Research,
Association for Consumer Research, Provo, UT, pp. 483-90.
Firdaus, A. (2005), “The development of HEdPERF: a new measuring instrument of service quality for higher education”, International Journal of Consumer Studies, online publication, 20 October.
Franceschini, F. and Rossetto, S. (1997b), “On-line service quality control: the ‘Qualitometro’
method”, De Qualitac, Vol. 6 No. 1, pp. 43-57.
Ginsberg, M.B. (1991), Understanding Educational Reforms in Global Context: Economy, Ideology and the State, Garland, New York, NY.
Hair, J.F. Jr, Anderson, R.E., Tatham, R.L. and Black, W.C. (1995), Multivariate Data Analysis with
Readings, Prentice-Hall, Englewood Cliffs, NJ.
Hattie, J. (1985), “Methodology review: assessing unidimensionality of tests and items”, Applied
Psychological Measurement, Vol. 9, pp. 139-64.
Hattie, J. (1990), “Performance indicators in education”, Australian Journal of Education, No. 3,
pp. 249-76.
Joreskog, K.G. and Sorbom, D. (1978), Analysis of Linear Structural Relationships by Method of
Maximum Likelihood, National Educational Resources, Chicago, IL.
Kaiser, H.F. (1970), “A second-generation little jiffy”, Psychometrika, Vol. 35, pp. 401-15.
Krejcie, R. and Morgan, D. (1970), “Determining sample size for research activities”, Educational
and Psychological Measurement, Vol. 30, pp. 607-10.
Lassar, W.M., Manolis, C. and Winsor, R.D. (2000), “Service quality perspective and satisfaction
in private banking”, Journal of Services Marketing, Vol. 14 No. 3, pp. 244-71.
Lawson, S.B. (1992), “Why restructure? An international survey of the roots of reform”, Journal of
Education Policy, Vol. 7, pp. 139-54.
Lewis, R.C. and Booms, B.H. (1983), “The marketing aspects of service quality”, in Berry, L.,
Shostack, G. and Upah, G. (Eds), Emerging Perspectives on Services Marketing, American
Marketing, Chicago, IL, pp. 99-107.
Llusar, J.C.B. and Zornoza, C.C. (2000), “Validity and reliability in perceived quality measurement
models: an empirical investigation in Spanish ceramic companies”, International Journal of
Quality & Reliability Management, Vol. 17 No. 8, pp. 899-918.
Mazis, M.B., Ahtola, O.T. and Klippel, R.E. (1975), “A comparison of four multi-attribute models
in the prediction of consumer attitudes”, Journal of Consumer Research, Vol. 2, pp. 38-52.
Mehta, S.C., Lalwani, A.K. and Han, S.L. (2000), “Service quality in retailing: relative efficiency of
alternative measurement scales for different product-service environments”, International
Journal of Retail & Distribution Management, Vol. 28 No. 2, pp. 62-72.
Nunnally, J.C. (1988), Psychometric Theory, McGraw-Hill, Englewood-Cliffs, NJ.
Oliver, R.L. (1989), “Processing of the satisfaction response in consumption: a suggested
framework and research propositions”, Journal of Consumer Satisfaction, Dissatisfaction,
and Complaining Behaviour, No. 2, pp. 1-16.
Owlia, M.S. and Aspinwall, E.M. (1997), “TQM in higher education – a review”, International
Journal of Quality & Reliability Management, Vol. 14 No. 5, pp. 527-43.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1985), “A conceptual model of service quality
and its implications for future research”, Journal of Marketing, Vol. 49, pp. 41-50.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), “SERVQUAL: a multiple-item scale for
measuring consumer perceptions of service quality”, Journal of Retailing, Vol. 64 No. 1,
pp. 12-40.
Parasuraman, A., Berry, L.L. and Zeithaml, V.A. (1991a), “Refinement and reassessment of the
SERVQUAL scale”, Journal of Retailing, Vol. 67 No. 4, pp. 420-50.
Parasuraman, A., Berry, L.L. and Zeithaml, V.A. (1991b), “More on improving service quality measurement”, Journal of Retailing, Vol. 69 No. 1, pp. 140-7.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1994), “Reassessment of expectations as a comparison standard in measuring service quality: implications for future research”, Journal of Marketing, Vol. 58, pp. 111-24.
Quester, P., Wilkinson, J.W. and Romaniuk, S. (1995), “A test of four service quality measurement scales: the case of the Australian advertising industry”, Working Paper 39, Centre de Recherche et d’Etudes Appliquees, Group ESC Nantes Atlantique, Graduate School of Management, Nantes.
Robledo, M.A. (2001), “Measuring and managing service quality: integrating customer
expectations”, Managing Service Quality, Vol. 11 No. 1, pp. 22-31.
Soutar, G. and McNeil, M. (1996), “Measuring service quality in a tertiary institution”, Journal of
Educational Administration, Vol. 34 No. 1, pp. 72-82.
Sureshchandar, G.S., Rajendran, C. and Anantharaman, R.N. (2002), “Determinants of
customer-perceived service quality: a confirmatory factor analysis approach”, Journal of
Services Marketing, Vol. 16 No. 1, pp. 9-34.
Teas, R.K. (1993a), “Expectations, performance evaluation, and consumers’ perceptions of
quality”, Journal of Marketing, Vol. 57 No. 4, pp. 18-34.
Teas, R.K. (1993b), “Consumer expectations and the measurement of perceived service quality”,
Journal of Professional Services Marketing, Vol. 8 No. 2, pp. 33-54.
Zeithaml, V.A. and Bitner, M.J. (1996), Services Marketing, McGraw-Hill, Singapore.