
Bus Transit Service Reliability and Improvement Strategies: Integrating the
Perspectives of Passengers and Transit Agencies in North America

Ehab Diab, Madhav Badami, and Ahmed El-Geneidy


School of Urban Planning
McGill University

Email:
[email protected]
[email protected]
[email protected]

2015

For citation please use: Diab, E., Badami, M., & El‐Geneidy, A. (2015). Bus transit service reliability and
improvement strategies: Integrating the perspectives of passengers and transit agencies in North
America. Transport Reviews, 35(3), 292–328.

1. Abstract
Transit agencies are consistently trying to improve service reliability and attract new passengers
by employing various strategies. Previous literature reviews have focused on either passengers’
or transit agencies’ perspectives on service reliability. However, none of the earlier reviews have
simultaneously addressed these differing perspectives on service reliability in an integrated
manner. In response to this gap in the literature, this paper first reviews previous work on
passengers’ perspectives of transit service reliability and their response to service adjustments
made by different agencies. Second, it analyzes transit agencies’ plans and reports regarding
their reliability goals and the strategies they use to improve service reliability, while looking at
the impacts of these strategies on service. Reviewing these two parts together provides a needed
contribution to the literature from a practical viewpoint since it allows for the identification of
gaps in the public transit planning and operations field in the area of reliability and provides
transit planners and decision makers with effective and valuable policy-relevant information.

2. Introduction
Public transit systems are essential to the sustainability, equity, and livability of
any city. In fact, during the past decade, transportation planning has shifted its focus from car
mobility goals to embrace broader environmental and social goals, in particular, by providing
and improving transport alternatives that provide access to destinations regardless of car
ownership (Jabareen, 2006; Lucas et al., 2007). This shift in paradigm has encouraged operators
to incorporate various strategies to improve transit service operation with the goal of attracting
new passengers and retaining existing ones. This shift is supported by enormous funding
commitments from federal, state and local governments in order to improve transit service. For
example, in the United States, total government spending increased at an annual average
inflation-adjusted rate of about 3% between 1997 and 2012, from $26.1 billion to $58.5 billion
(NTD, 2013).
Transit agencies are responsible for providing an efficient, productive and reliable
service that is positively perceived by the public (Vuchic, 2005). It is clear that providing a
reliable transit service is necessary in order to maintain an efficient and attractive system, which
increases users’ satisfaction and loyalty. Reliability is also important for operators because it can
easily improve internal efficiency, reduce operating costs, and improve revenues by retaining and
attracting users. Therefore, improving reliability is a win-win situation for both users and transit
agencies and enables cities to achieve their broader goals. The present review of the literature
aims to understand transit service reliability from different perspectives. More specifically, it
attempts to identify passengers’ and transit agencies’ perspectives, while linking both
perspectives to empirical studies that investigate the impacts of service improvement strategies.
This paper uses a systematic review method to identify the international literature that covers the
passengers’ perspective, while analyzing North America’s transit agencies’ perspectives
regarding service reliability.
Within the transportation setting, there is a wide range of definitions for the concept of
reliability. It can be defined as the availability and stability of transit service attributes at certain
locations, affecting people and operators’ decision-making (Abkowitz et al., 1978; Cham, 2006).
On the other hand, reliability can also be defined in terms of performance measures. Kimpel
(2001) defined it as “a multidimensional phenomenon in that there is no single measure that can
adequately address service quality.” (p. 3) Different measures have been identified by
researchers and range from minimizing schedule delays, running time delays and headway
delays to achieving on-time performance (OTP) standards (Kimpel, 2001; Strathman et al., 1999;
Turnquist, 1981). Other researchers used a holistic standpoint to define reliability from the
passengers’ perspective. Passengers perceive the service as reliable when it (a) decreases their
efforts to access the service, (b) has short and consistent travel times, and (c) arrives predictably,
resulting in short waiting time (El-Geneidy et al., 2011; Koenig, 1980; Murray and Wu, 2003).
Researchers argue that public transport patronage growth can result from service
reliability improvements whereas it can decay due to unreliable service (Bates et al., 2001; Nam
et al., 2005; Noland and Polak, 2002; Vuchic, 2005). A lively discussion about the importance of
reliability issues for passengers can be found throughout the literature. Peek and Van Hagen
(2002) suggested an approach based on Maslow’s pyramid, which represents passengers’
priorities. This approach argues that safety and reliability are the foundation of traveler
satisfaction, and accordingly, must be provided. The upper part of the pyramid includes
additional aspects of quality such as comfort. Hensher, Stopher and Bullock (2003), and Brons
and Rietveld (2007) confirm this hierarchical prioritization for both regular and non-
regular users. Other researchers have argued that reliability is the second most important transit
attribute after arriving safely at destinations (Iseki and Taylor, 2010; Perk et al., 2008; Taylor et
al., 2007; Yoh et al., 2011).

3. Methodology
This section describes the methodology used in the analysis and contains two parts. The
first part describes the review of academic literature concentrating on the passengers’
perspective and reliability improvement strategies, while the second part focuses on the
analysis of transit agencies’ plans and reports in order to understand their perspective.

3.1 Literature Review


A systematic literature review is an important and useful approach to identify and analyze all
relevant research on a given topic. The present study uses a Realist method to understand the
literature concerning: (a) passengers’ perspective, and (b) reliability improvement strategies.
This method builds on the conventional systematic review template to provide a more
explanatory rather than a solely judgmental focus (Pawson et al., 2005). For each section of the
literature review, a search strategy consisting of two phases is conducted. The first phase
includes a search of the Web of Knowledge, Scopus and TRID online article databases in
November of 2013. TRID is a comprehensive database that includes more than one million
records of transportation research worldwide (TRID, 2013). Only results yielding full articles
and papers are included in the analysis. Additionally, the search is also restricted to include only
publications in English related to transportation, urban studies, social sciences and engineering.
There were no date restrictions on the results of the search. The second phase of the search
strategy began once the database search had identified the relevant articles based on a
predetermined set of inclusion and exclusion criteria. Table 1 shows the criteria. The reference
lists of all articles were examined, and articles found through this method were subject to the
same exclusion criteria after their full texts had been read.

3.1.1 Passengers' perspective
The search consisted of the following terms within the ‘‘title’’ search field: “(Bus OR
Transit) AND perception or time value”, OR “(bus OR transit) AND satisfaction or demand or
ridership.” The first phase of the search yielded 340 papers in total, of which 316 were excluded
due to irrelevance and application of exclusion criteria. The second phase of the search strategy
began once the database search results had been reduced to 22 relevant articles based on the
predetermined set of exclusion criteria. Then, the reference lists of all articles were examined and
yielded an additional 7 articles. Finally, articles that passed this review process were read in their
entirety (see Appendix 1). The studies range in publishing date from June 1987 to November
2013. The studies selected for the review focused on one or more aspects of transit users’ point
of view in terms of their perception, estimation of their time value, demand and satisfaction.

3.1.2 Reliability improvement strategies


The search consisted of the following terms within the ‘‘title’’ search field: "(Bus OR
Transit) AND improvement strategies or Automatic data collection or AVL or APC or AFC” OR
“(Bus OR Transit) AND Reliable or Reliability or On-time performance”, OR “(Bus OR Transit)
AND travel time or dwell time.” The first phase of the search yielded 230 papers in total, of
which 218 were excluded due to irrelevance and application of exclusion criteria. Studies using
actual automatic operational data, e.g. data extracted from Automatic Vehicle Location (AVL),
Automatic Passenger Counting (APC), and Automated Fare Collection (AFC) systems, were included if
the results were based on empirical model-driven analyses. Strathman and Hopper (1993)
demonstrate the importance of the emergence of these automatic data collection technologies in
the 90s. They provide researchers and agencies with a rich and accurate source of information,
facilitating extensive and detailed analysis of transit operations (Feng and Figliozzi, 2011; Furth
et al., 2006; Furth and Muller, 2007; Hickman, 2004; Peng et al., 2008; Uniman et al., 2010).
The second phase of the search strategy was based on the reference lists of the 12 relevant
articles and yielded one additional article. Appendix 3 presents these studies. The studies range
in publishing date from June 2000 to July 2013. Findings from these studies are discussed in the
transit agencies’ perspective section following the introduction of what measures agencies use to
improve the service.

Table 1. Inclusion and Exclusion Criteria for Literature Review

Passengers' perspective
Inclusion criteria:
- Uses surveys or real-world observations
- Focuses on passenger-related issues (i.e. demand, perception, satisfaction and time value)
- Investigates the factors impacting passengers’ perception
- Empirical analysis
- Published up to November 2013
- Peer-reviewed
- Full articles only
- English language only
Exclusion criteria:
- Focuses on private automobile
- Focuses on other public transport modes, e.g. trams and trains, planes, undergrounds, and ferries
- Focuses on vehicle emissions and economics, and users’ life satisfaction issues
- Focuses on simulation techniques and mathematical optimization methods
- Not peer reviewed
- All languages other than English

Reliability improvement strategies*
Inclusion criteria:
- Uses automatic data collection (e.g. AVL, AFC, APC)
- Analyzes the impact of improvement strategies (e.g. bus type, reserved lanes, TSP, etc.)
- Focuses on one of the service operational aspects (e.g. running time, on-time performance, dwell time) or their variation
- Empirical model-driven analyses
- Published up to November 2013
- Peer-reviewed
- Full articles only
- English language only
Exclusion criteria:
- Focuses on private automobile
- Focuses on other public transport modes, e.g. trams and trains, planes, undergrounds, and ferries
- Focuses on vehicle emissions and contracting
- Focuses on simulation techniques, mathematical optimization methods and visualization
- Only a summary statistics study
- Not peer reviewed
- All languages other than English

* AVL: Automatic Vehicle Location, APC: Automatic Passenger Counting, AFC: Automated Fare Collection, TSP: Transit Signal Priority

3.2 Transit Agencies' Plans


The existing literature rarely discusses how transit agencies define and resolve reliability
issues or realize their reliability objectives and employ strategies to achieve these objectives.
Previous studies focus solely on aspects such as understanding transit agencies’ performance
measures (Bates, 1986; Benn, 1995; Kittelson & Associates et al., 2003b), employing archived
AVL-APC data to improve transit performance and management (Furth et al., 2006), or planning
processes (Mistretta et al., 2010). This section reviews 15 of the largest bus transit agencies'
plans and reports in the U.S. and Canada, which are ranked by annual ridership (American Public
Transportation Association, 2011a, b). The search criteria regarding plans and reports were as
follows: large transit agencies with recent documents published after 2004 available from an
agency’s official website. Appendix 2 shows the results of transit agencies’ plans that were
reviewed.
Transit agencies’ plans and reports represent one of their main outputs illustrating their
guidelines, policies and approaches, and are used to communicate these to the public. The
purpose of this approach is not only to understand their performance measures, but also to
understand the main reliability goals these agencies articulated, and strategies they use to achieve
them. The idea of transportation plan analysis and examination is well-established in the
literature. Researchers have employed this approach to understand existing policies regarding
various goals, including agencies’ sustainability orientations and approaches, or their social goals
(Berke and Conroy, 2000; Feitelson, 2002; Geurs et al., 2009; Stanley and Villa-Brodrick, 2009).
Finally, the study identifies the areas of overlap, disconnect and mismatch between the
perspectives of transit agencies and passengers, regarding service reliability and the impacts of
service improvement strategies. The areas of disconnect represent the important gaps in
understanding that need to be integrated and addressed to enable transit agencies to achieve
better service that is positively perceived by passengers.

4. Passenger Perspectives
4.1 Passengers’ Time Value
A sizable body of literature has developed around how users value their time during a transit
trip and has attempted to assign a dollar value to passenger time, with an underlying assumption
that the value of time is equal to its opportunity cost, usually defined as the wage rate (Wardman,
2004). These studies tend to focus on the relationship between out-of-vehicle time and in-vehicle
time. For example, Mohring et al. (1987) estimate the value associated with in-vehicle time as
half of an hourly wage whereas waiting time is valued at a level two to three times that of in-
vehicle time. One example is Wardman’s (2001) study that uses a regression model to analyze
evidence drawn from 143 British academic and consultancy studies conducted between 1980 and
1996. He estimated that walking time, waiting time, and combined walking and waiting time are
respectively valued 1.66, 1.47, and 1.46 times as much as in-vehicle time. Later, Wardman
(2004) suggested that previous estimations for waiting time values were too low, and it is
reasonable to value waiting time at 2.5 times as much as in-vehicle time. However, some studies
he referenced indicated that the waiting time is valued up to 4.5 times more than walking time,
which is valued at two times that of in-vehicle time. Similarly, several studies reviewed by Reed
(1995) indicate a significantly different estimation for waiting time value, ranging from less than
1.5 times to as much as 12 times that of travel time value. It is important to note that the
calculated values of waiting time vary by income, location, trip distance and purpose, and by
survey method (Abrantes and Wardman, 2011; Chang and HSU, 2003; Lam and Morrall, 1982;
Wardman, 2004). Shires and de Jong (2009) indicate similar factors that impact the value of
travel time savings. However, it is rare to find empirical studies in the literature investigating the
value of time savings that come as a result of service improvements.
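To make these multipliers concrete, the following minimal sketch converts a hypothetical trip into “generalized” (in-vehicle-equivalent) minutes using the walking and waiting weights reported by Wardman (2001); the trip values and function name are illustrative assumptions, not data from any study reviewed here.

```python
# Minimal sketch: "generalized" trip time using out-of-vehicle time multipliers.
# Weights follow Wardman (2001): walking ~1.66x and waiting ~1.47x in-vehicle time.
# The trip components below are hypothetical.

WALK_WEIGHT = 1.66   # walking time valued at 1.66 x in-vehicle time
WAIT_WEIGHT = 1.47   # waiting time valued at 1.47 x in-vehicle time

def generalized_time(walk_min: float, wait_min: float, in_vehicle_min: float) -> float:
    """Return trip time expressed in equivalent in-vehicle minutes."""
    return WALK_WEIGHT * walk_min + WAIT_WEIGHT * wait_min + in_vehicle_min

# Hypothetical trip: 5 min walk, 8 min wait, 20 min ride.
print(generalized_time(5, 8, 20))  # about 40.1 equivalent minutes vs 33 clock minutes
```

Under such a weighting, a one-minute reduction in waiting time is worth roughly one and a half minutes of in-vehicle time, which helps explain why improvements that shorten or stabilize waits can matter more to passengers than equal reductions in riding time.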
Nevertheless, from the perspective of behavioral decision research, the value of time is
subject to context effects. People tend to treat time and money analogously in most situations;
however, their behavior differs completely in situations involving risk (or uncertainty) (Leclerc et
al., 1995). Behavioral decision researchers have more recently extended this argument
in the context of time versus money and have stated that there are quantitative and qualitative
differences in how people process temporal information in relation to monetary information to
arrive at judgments and decisions (Monga and Saini, 2008; Soman, 2001; Zauberman and Lynch,
2005). While most of the studies regarding the cost of travel time reliability focused on car users’
perceptions (Carrion and Levinson, 2012; Chen et al., 2003; Li et al., 2010; Small et al., 1999), it
is rare to find studies focused specifically on transit users' perceptions. One of the rare examples
is an empirical analysis done by Pinjari and Bhat (2006) which indicates that transit users, during
the first 15 minutes of a trip, place a small value on travel time while placing a higher value on
travel time reliability. However, the value of travel time increases rapidly after the first 15
minutes while the valuation of travel time reliability falls radically.

4.2 Passengers’ Time Perception


Research indicates that passengers perceive waiting time differently from the actual time
for reasons such as being exposed to adverse weather conditions, the surrounding environment,
and the experience of being stressed by waiting anxiety (Daskalakis and Stathopoulos, 2008).
Mishalani et al. (2006) used linear regression to investigate the relationship between passengers’
perceptions of waiting time and actual time. In this study a surveyor went to a bus stop, noted the
arrival time of a passenger, and then asked him or her about their time perceptions. The results of
this study indicate an overestimation of waiting time by 0.84 minutes. Psarros et al. (2011) used
the same data-collection technique and revealed that for all trip purposes – work, education,
shopping and personal affairs – waiting time is perceived as longer than it actually is, by 27%,
43%, 30% and 30%, respectively. However, these estimates may
not present the actual case because perception of waiting time tends to differ significantly from
the actual measured waiting time depending on whether passengers make a conscious decision to
wait compared to when the wait is imposed on them by others, such as transit agencies (Moreau,
1992).
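As a rough illustration of the kind of perceived-versus-actual waiting time regression used by Mishalani et al. (2006), the sketch below fits an ordinary least squares line to synthetic data; the data and fitted coefficients are invented for illustration and do not reproduce the original study.

```python
# Illustrative sketch: regressing perceived waiting time on actual waiting time.
# Data are synthetic, constructed to mimic a mean overestimation of ~0.84 minutes.
import numpy as np

rng = np.random.default_rng(0)
actual = rng.uniform(1, 15, size=200)                     # actual waiting time (min)
perceived = actual + 0.84 + rng.normal(0, 1.5, size=200)  # assumed perception with noise

# Ordinary least squares: perceived = b0 + b1 * actual
X = np.column_stack([np.ones_like(actual), actual])
b0, b1 = np.linalg.lstsq(X, perceived, rcond=None)[0]
print(f"intercept = {b0:.2f} min, slope = {b1:.2f}")
```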
Hall (2001) indicated that passengers who knew the schedule were more inclined to
believe the bus was late than those who did not know the schedule. Hess, Brown and Shoup
(2004) report that passengers overestimate their waiting time by a factor of two compared to the
actual wait time when it is imposed by others (e.g. transit system) whereas they accurately
estimate their waiting time when they themselves chose to wait (e.g. for a free bus). Other
researchers indicate that this tendency to overestimate waiting time is further affected by the
individual’s personal experience in terms of whether the passenger is experiencing time drag or
not. Time drag occurs when a passenger perceives his time spent at a stop as unproductive and
useless, which occurs when the passenger is not involved in other activities such as reading a
book while waiting. In this case, waiting time can seem much longer (Dziekan and Vermeulen,
2006; Moreau, 1992; Reed, 1995). However, no study explicitly focused on understanding the
impacts of bus delay or arrival variation on transit users' waiting time perception.
Regarding travel time perception, the Transit Capacity and Quality Service Manual
(TCQSM) (2003a) suggests that perceived travel time is equal to actual travel time. However,
this does not provide understanding about how passengers perceive travel time variability, which
is clearly an added time cost that passengers must account for during their trip planning
(Daskalakis and Stathopoulos, 2008). According to the scheduling approach theory, transit users’
preferred departure time would change (later or earlier) in response to transit schedule
constraints and structures and their perceptions about travel time variation. Hollander (2006)
confirms that the impacts of travel time variability on passengers is best explained through
scheduling considerations. Nam et al. (2005) indicates that, at the same level of improvement,
policies designed to decrease travel time variability are more beneficial than policies designed to
reduce travel time. In addition, there is some empirical evidence that suggests there is an inherent
disutility associated with a failure to adhere to the schedule for both the early and the late arrival,
particularly if there is a transfer point in the trip (Bates et al., 2001; Noland and Polak, 2002). In
other words, arriving early at a destination (e.g. a transfer point) is not necessarily better than arriving late,
because the time gained cannot be recovered and used for other purposes, and users will regard the time spent
waiting after an early arrival as wasted time that they could have used better had they taken the
following trip instead. In short, passengers overestimate their waiting time at bus stops and value
this waiting time more than any other time component of their trip.

4.3 Transit Strategies Impact on Passengers’ Perception


A number of studies examined the immediate impacts of the implementation of different
strategies on users’ perceptions, and they generally indicated that passengers tend to perceive the
service more positively after the implementation of a new strategy (Cain et al., 2010; Conlon et
al., 2001; Currie, 2006; 2010). For instance, using a before-and-after rating survey in
Chicago after the implementation of a limited-stop service running parallel to an existing bus route, users
indicated high satisfaction in many areas, including overall satisfaction and satisfaction with
travel time and waiting time, on both the regular and the limited-stop routes (Conlon et al.,
2001). Dziekan & Vermeulen (2006), Dziekan & Kottenhoff (2007) and Watkins et al. (2011),
among others, have investigated the impact of the introduction of real-time information on
passengers’ waiting time perception. Results from these studies indicated that the perceived
waiting time decreased after the implementation, without reporting any actual improvement in
the service frequency. El-Geneidy & Surprenant-Legault (2010) focused on users’ travel time
perception after the implementation of a new limited stop service, indicating that users
overestimate their perceived travel savings compared to the actual time savings.
Cain et al. (2010) revealed that the implementation of express lanes significantly improved
users’ travel time and service reliability ratings. Diab & El-Geneidy (2012) investigated the
impact of a combination of strategies on passengers’ travel time perception, indicating that
passengers tend to overestimate the travel time savings associated with the implementation of
this combination of strategies, while there was almost no actual saving in buses’ running time.
This indicates a positive attitude towards the implementation of improvement strategies.
However, previously mentioned studies in this subsection focused on measuring users’
perceptions and/or satisfaction immediately (at one time point) after the implementation of a new
measure or route. Thus, it is rare to find studies that investigate how these perceptions change
over time. Only Dziekan & Vermeulen (2006) investigated the effects of the introduction of real-
time information on people’s waiting time perception changes over time, using surveys one
month before, and three months and 16 months after the system implementation. However, their
study suffered from a limited study sample size.

4.4 Section Summary


To summarize, several studies investigated how users value their time during a transit trip
and indicated that the relative value of waiting and travel times varies with income, location, trip
distance and purpose, and survey method. Nevertheless, it is infrequent to find empirical studies
that investigate the value of time savings and their reliability for transit users. It is common to
find studies investigating passengers’ waiting time perception, however, no study explicitly
focused on understanding the impacts of bus delay or arrival variation on transit users’ waiting
time perception. Finally, although, several studies indicate a positive impact of service
improvement strategies on user’s perception after the immediate implementation of a new
strategy, it is rare to find studies that investigate why exactly these strategies impact perception
and how these perceptions change over time.

5. Transit Agency Perspectives
Across the U.S. and Canada, transit services are funded in part through public subsidies
(American Public Transportation Association, 2011a). In addition, in each country, there is a
national organization that tracks and supports public transit service, which requires transit
agencies to file annual reports, to develop future plans, and to comply with various other
requirements in order to receive federal funds (FTA, 2012; Transport Canada, 2012). Therefore,
and due to similarities in spatial, political and financial contexts, this study focuses solely on
industry practice in North America. The following section discusses transit agencies’
perspectives on reliability. The discussion provides insight into the following questions:
- how do transit agencies understand and realize reliability;
- how and to what extent do they measure riders’ perceptions of service reliability;
- what reliability indicators do they use; and,
- what are their service improvement strategies?
A systematic evaluation method for transit agencies’ plans was applied to identify each
agency’s definition of reliability, and its reliability goals, objectives and strategies. A key word
search for “reliability”, “punctuality”, “transit”, “bus”, “perception”, and “satisfaction” was
performed to locate the sections that needed to be reviewed. If agencies used words such as
“mission”, “goal” and “task”, or employed key verbs such as “define” or “refer”, or the verb ‘to
be’ (e.g. reliability is…), the sentence’s purpose was considered to be a goal or a definition,
respectively. If agencies used words such as “target” or “objective”, or key verbs such as
“aim”, “intend”, and “require”, the sentence’s purpose was considered to be an objective.
Then, the related paragraphs were checked to make sure that the word in question related to
reliability and to bus and/or transit service. If the agency indicated reliability as a main goal, the
strategies used to improve the service were collected. For each transit agency, more than one
report was included in the analysis to give a more holistic picture of its perspective.

5.1 Transit Agencies’ Understanding of Reliability


All the transit agencies included in this review indicate reliability as a priority. Most of
them mentioned reliability in their broad mission statement or president’s message as one of the
most important strategic goals to be achieved. Among the examples, the chairman of NJ
TRANSIT, New Jersey, stated that their mission is to “enhance reliability and safety” of transit
services (NJ TRANSIT, 2012). In Chicago, the CTA president stated that his charge is to make
sure that “(the service) is operating as reliably and efficiently as possible, … to strive to evolve
and improve and to deliver on-time… service each and every day”(p.7) (Chicago Transit
Authority (CTA), 2011). Similar examples of commitment to improve transit service reliability
can be found across the reviewed transit agencies’ plans.
Transit agencies define reliability in different ways. Among those who provided a
definition of reliability, nearly all agencies define and operationalize reliability in terms of
measures, particularly those related to OTP. As an example, reliable service for TransLink,
Vancouver, is regarded to be “designed to ensure OTP, avoiding being early & minimizing
running late” (p.3) (TransLink, 2004). WMATA, Washington, is “dedicated to delivering
service on time… to improve reliability” (p.4) (WMATA, 2012). Other transit agencies including
the King County Metro Transit, Seattle, defined it in terms of the overall availability of service.
Regarding the objective of achieving reliability, around 80% of the reviewed transit agencies
consider reliability as an objective in order to increase customer convenience, or as the measure
that should be monitored in order to keep customers satisfied and to improve ridership. For example,
the MTA in New York City regards service reliability as the key factor in increasing ridership
(Metropolitan Transportation Authority (MTA), 2008). NJ TRANSIT (2011) stated that
reliability is an important measure for “meeting customers’ needs.” OC Transpo, Ottawa, stated
that “reliability is a key factor” in building customer satisfaction (OC Transpo, 2012).

5.2 How Transit Agencies Measure Riders’ Perceptions of Service Reliability


It is important to understand how transit operators view and recognize transit users’
responses to service quality changes, particularly regarding their perspectives concerning
reliability. Despite the fact that most of the reviewed transit agencies regard reliability as a key
factor in building customer satisfaction, only 20% of transit agencies (3 out of 15) reported
users’ satisfaction with service reliability (or schedule adherence and OTP). For example,
Miami-Dade Transit, Miami, indicated that the percentage of respondents satisfied with the
reliability of bus service was 35% in 2008, against a target of 45%. The MTA indicated the
passenger satisfaction level for their local buses’ OTP reached 6.6 out of 10 in 2008
(Metropolitan Transportation Authority (MTA), 2008).
On the other hand, approximately 12% of the sampled agencies reported changes in the
passenger complaint rate concerning reliability of service, including the MBTA in
Massachusetts and Metrolinx in Toronto. Other transit agencies reported overall customer
satisfaction of transit service along with other measures without reporting satisfaction with
reliability. For example, the STM, Montreal, in their 2009 report, stated that “the average level of
customer satisfaction about all aspects of service is 86%” (p.8). In addition, the STM in 2008
reported the level of overall customer satisfaction with transit (82%), the level of satisfaction
with driver courtesy (81%) and safety (91%), without reporting reliability separately. It should
be noted that a rating system (e.g. 1=Poor to 10=Excellent) was the major tool reported by transit
agencies to indicate changes in passengers’ level of satisfaction.

5.3 Transit Agencies Reliability Indicators


Indicators are the quantitative measurement tools used to assess progress toward a desired
outcome or objective (Maryland Department of Transportation, 2009a). Bates (1986), Benn
(1995), and Kittelson & Associates et al., (2003b) reviewed operators’ performance measures.
They report that OTP is the most commonly recognized and employed measurement used by
transit operators in order to understand and achieve reliability. Along with previous research, our
study indicates that most transit agencies define reliability in terms of OTP and are still using
OTP-related measures. A few transit agencies use other measurements besides OTP that relate
to service interruption percentages, the percentage of delivered trips, or the mean distance
between failures (MDBF). However, it is rare to find measurements related to headway
adherence (the importance of these measures will be discussed later). Only 20% of reviewed
transit agencies (3 out of 15) used the percentage of big gap intervals and bunched intervals,
headway adherence percentage and waiting time assessments as measures of reliability.
OTP is commonly expressed as the percentage of buses that depart or arrive at a given
location within a predetermined range of time. The acceptable percentage threshold varies from
one agency to another according to the target goal and to the range of earliness or delay that an
agency assumes passengers would find acceptable. For example, a transit agency can set a goal
that requires 78% of its buses to be on time, using an acceptable range from 2 minutes early to 7
minutes late, as WMATA does. Another agency’s goal can be the same (78%), using an
acceptable range from 1 minute early to 4 minutes late, as in
the case of SEPTA in Philadelphia. In addition, while the majority of transit agencies measure
OTP as the bus arrival time at a number of points along the system, such as the last stop of some
routes, NJ TRANSIT measures OTP as the bus departure time within 1 minute early and 5
minutes late at a few time points along the system (mainly layover points). On this basis,
NJ TRANSIT achieved 94% in 2010 (NJ TRANSIT, 2010).
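As a minimal sketch of how such an OTP figure can be computed from archived AVL schedule deviations, the snippet below counts the share of observations falling inside an agency-defined early/late window; the deviation values are hypothetical, and real agency calculations typically add rules about which timepoints and trips to include.

```python
# Minimal sketch: on-time performance (OTP) from schedule deviations at timepoints.
# Deviation = actual minus scheduled time in minutes (negative = early).
# The thresholds mirror the windows cited in the text (WMATA: -2/+7; SEPTA: -1/+4).

def on_time_performance(deviations_min, early_limit=-2.0, late_limit=7.0):
    """Share of observations within [early_limit, late_limit] minutes of schedule."""
    on_time = sum(1 for d in deviations_min if early_limit <= d <= late_limit)
    return on_time / len(deviations_min)

deviations = [-3.0, -1.5, 0.0, 2.0, 4.5, 6.0, 8.5, 1.0, -0.5, 9.0]  # hypothetical data
print(f"OTP with a -2/+7 window: {on_time_performance(deviations):.0%}")             # 70%
print(f"OTP with a -1/+4 window: {on_time_performance(deviations, -1.0, 4.0):.0%}")  # 40%
```

The same set of observed deviations yields very different headline OTP values depending on the window, which is one reason cross-agency comparisons of OTP figures are difficult.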

5.4 Agencies Strategies to Improve Service Reliability


Agencies report several strategies that they use to enhance their service reliability.
These strategies differ from one transit agency to another according to the level
of improvement required or already provided by existing measures (Hemily and King,
2008; Smith et al., 2005). These strategies, in decreasing order of frequency of appearance, are:
transit signal priority (TSP), bus rapid transit (BRT) or BRT-like systems (rapid transit system or
networks), new buses (low-floor buses and articulated buses), reserved bus lanes, limited-stop
services (express buses), intelligent transportation system (ITS) and (AVL/APC) systems, and
smart cards. Because BRT and BRT-like systems that combine more than one approach are more
attractive than conventional transit routes operating with less speed and reliability, these systems
are considered one of the most effective tools to increase service reliability, efficiency and
ridership (Currie, 2006; The Canadian Urban Transit Association (CUTA), 2007). About 20% of
transit agencies (3 out of 15) considered reviewing their bus stop location, route design and
structure, and driver training.

5.5 Impact of Strategies on Service


A number of studies discussed the impacts of different improvement strategies on transit
service. These studies are presented in Appendix 3. Most of these studies resulted from
cooperation between transit agencies and researchers seeking to understand the impacts of agency
actions on service. Thus, they are evaluation studies that use a before-and-after design
to assess and provide evidence of the impacts of interventions. Other studies not included in the
review generally focused on understanding the general factors impacting the service, such as
distance, weather, time periods, number of passengers and land use (Mazloumi et al., 2010;
Patnaik et al., 2004; Rajbhandari et al., 2003).
The majority of the studies concentrated on running time improvements that resulted from
implementing these strategies. Several studies agreed that limited-stop bus service and reserved
bus lane decrease running time (El-Geneidy et al., 2006; El-Geneidy and Surprenant-Legault,
2010; Surprenant-Legault and El-Geneidy, 2011), while low-floor buses decrease dwell time
(Dueker et al., 2004). Strathman et al. (2000) indicates buses’ running times are significantly
shorter due to the implementation of the dispatch system. The use of articulated buses along a
transit corridor is expected to have a mixed effect on running time (El-Geneidy and
Vijayakumar, 2011). It decreases running time due to the existence of the buses’ third door,
while also increasing it due to the longer acceleration and deceleration time. The use of the smart
card increases running time compared to using the traditional flash passes (Diab and El-Geneidy,
2012), while it decreases the running time compared to magnetic strip tickets, but only when the
bus is not crowded (Milkovits, 2008). Kimpel et al. (2005b) indicate that the expected benefits of
TSP are not consistent across routes and time periods.
Concerning service variation, a few studies indicated that driver experience and behavior
are important factors affecting transit service running time and its variability (Abkowitz et al.,
1978; El-Geneidy et al., 2011; Levinson, 1991; Strathman and Hopper, 1993; Strathman et al.,
2002). El-Geneidy et al. (2006) analyzed the impacts of bus stop consolidation on bus
performance. They indicate that while bus running time improves due to the consolidation, this
does not impact running time variation or headway variation. Yetiskul and Senbil
(2012) indicate that new buses decrease running time variation. Finally, Diab and El-Geneidy
(2012; 2013) provided two detailed studies that explore the impact of a combination of service
improvement strategies on service running time and its variation. They indicated that strategies
may have unexpected impacts when they are implemented together. Therefore, understanding the
synergies and the collective impacts of strategies is needed.
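To indicate the general form of the before-and-after, model-driven analyses referred to above, the sketch below regresses trip running time on strategy indicator variables plus simple controls; the variables, coefficients and data are invented for illustration and do not correspond to the specification of any cited study.

```python
# Hypothetical sketch of a before/after running-time regression with strategy dummies.
# All data are synthetic; coefficients are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 500
distance_km = rng.uniform(8, 12, n)      # route distance control
boardings = rng.poisson(40, n)           # passenger activity control
reserved_lane = rng.integers(0, 2, n)    # 1 = trip observed after reserved-lane rollout
tsp = rng.integers(0, 2, n)              # 1 = transit signal priority active

# Synthetic running time (minutes), with assumed time savings from each strategy.
run_time = (3.0 * distance_km + 0.08 * boardings
            - 2.0 * reserved_lane - 1.0 * tsp + rng.normal(0, 2, n))

X = np.column_stack([np.ones(n), distance_km, boardings, reserved_lane, tsp])
coefs = np.linalg.lstsq(X, run_time, rcond=None)[0]
print(dict(zip(["const", "distance_km", "boardings", "reserved_lane", "tsp"],
               coefs.round(2))))
```

In a study of combined strategies, interaction terms between the strategy indicators would be added to capture the synergies discussed above.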

5.6 Section Summary


To summarize, transit agencies consider reliability to be a priority, defining it in terms of
OTP measures to achieve the objective of increasing customer satisfaction. They do not
frequently report users’ satisfaction regarding service reliability despite its perceived importance.
Additionally, the majority of transit agencies use OTP measures with differing standards.
Finally, no transit agency indicated using only one improvement strategy; they often employ TSP
and BRT or BRT-like systems that combine a few strategies in order to enhance the service. On
the other hand, discussion of the impact of improvement strategies focused on understanding the
effect of only one or two strategies on the service running time and dwell time. Only a few
studies focused on exploring the impact of a set of strategies on the service variation.

6. DISCUSSION AND CONCLUSIONS


The main objective of this paper is to address simultaneously, within the scope of
reliability, passengers' and transit agencies' perspectives. Figure 1 illustrates the research
structure and the key findings. The shaded area within the arrows shows the overlap in the
understanding and linkage areas. The area outside the arrows presents the disconnect area, which
signifies the important gaps and mismatches in the understanding of reliability. The factors in
this area need to be integrated and addressed to enable transit agencies to achieve better service
that is positively perceived by passengers. The following section discusses this paper’s key
findings in detail.

Figure 1. Research Structure and Main Findings

6.1 Passengers and Agencies Perspectives


The overlap between passengers’ and transit agencies’ perspectives on reliability centers
on agreement about its importance to the service provided. The key differences between both
perspectives are related to the definition of reliability, to the standard viewpoint regarding OTP,
and to the unaddressed waiting time variation.
Passengers think about reliability in terms of consistently minimizing their overall
waiting time and traveling time. They consider waiting and running times and their variation as
reliability measures since they affect their decision of departure time (Hollander, 2006) and daily
activity planning (Leclerc et al., 1995). In contrast, operators mainly define reliability in terms of
on-time performance standards (or adherence to schedules). From a passenger’s perspective,
there are a few drawbacks related to an OTP standard because it only reports the number or
percentage of vehicles that arrive within a fixed tolerance around the schedule. OTP does not take
into account the amount or severity of delay or the bandwidth of arrival deviation from schedules
(Camus et al., 2005). Therefore, it does not provide much information about the changes that
occur in passengers’ waiting times.
In addition, particular attention must be given to the main aspect of passengers’ views
relating to the reliability of transit planning: their response to waiting time variation due to bus
delays. In fact, capturing and isolating waiting time variations experienced by users due to late
buses is difficult. Researchers simply cannot know when users’ actual waiting time starts, nor
how much longer users waited beyond the schedule for a late bus. This is
because researchers have to interrupt users to ask them about their perception, which does not
capture the full impact of delays on perception. Therefore, in the literature, it is still unclear
how people perceive wait time variation and how they act during that experience. Thus, transit
planners should support the concept that measurement of service variation can fundamentally
address the quality of service, which can then decrease service variations and, consequently,
users' waiting time variations. Variation can be expressed using various measures including
headway variation and travel time variation. These measures are more relevant to a passenger’s
experiences of daily changes and delays than a discrete on-time window that may be practical for
evaluating the reliability of the system’s operational plan from a transit agency’s perspective.
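One standard way to connect headway variation to the passenger experience is the random-arrival result that expected waiting time equals half the mean headway inflated by the squared coefficient of variation of headways; the sketch below applies it to a hypothetical set of observed headways (the headway values are invented for illustration).

```python
# Sketch: headway-based reliability measures. Assuming passengers arrive at the stop
# at random, expected waiting time is E[W] = (E[H] / 2) * (1 + CV(H)**2), so irregular
# headways lengthen average waits even when the mean headway is unchanged.
import statistics

headways = [6, 14, 9, 11, 4, 16, 10, 10]  # minutes between successive buses (hypothetical)

mean_h = statistics.mean(headways)
cv_h = statistics.pstdev(headways) / mean_h            # coefficient of variation of headways
expected_wait = (mean_h / 2) * (1 + cv_h ** 2)

print(f"mean headway = {mean_h:.1f} min, CV = {cv_h:.2f}, "
      f"expected wait = {expected_wait:.1f} min (vs {mean_h / 2:.1f} min if perfectly regular)")
```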
Accordingly, given the classic dilemma of valuing passenger time, transit agencies
should account for passengers’ waiting time more carefully by determining and addressing the
difference between expected waiting time values for passengers and the added waiting time
imposed by operators due to delays. Waiting imposed by operators makes passengers spend time
stressed because they experience anxiety related to the fear of not meeting their target arrival
time at their destination. Therefore, the value of waiting time can reach as much as 12 times the
value of in-vehicle time and it changes according to users’ preferences, time planning and their
situations, as stated earlier (Iseki et al., 2006; Reed, 1995).
The majority of transit agencies indicate using passenger surveys to measure users’
perception. Nevertheless, these surveys should not only be utilized to track changes in service
quality but also to help prioritize future improvements for service quality initiatives and
strategies. Rather than using a satisfaction rating system, these surveys should consistently
require users to quantify their waiting time and travel time (and their changes). This would give a
better connection between passengers’ perceptions and improvement efforts made by agencies,
which may lead to more accurate integration between users’ perceptions and policy making
during the service planning and operation process.

6.2 Passenger Perspective Relative to Service Improvement Strategies


A number of studies examined the immediate impacts of the implementation of different
strategies on users’ perceptions, and they generally indicate that passengers tend to perceive the
service more positively after the implementation of a new strategy (Cain et al., 2010; Conlon et
al., 2001; 2010). Figure 2-A shows a conceptual framework of how transit agencies measure
their performance, and the nature of the passengers’ perception of the regular or standard
service attributes. It shows that while agencies measure and capture the actual average service,
passengers perceive it differently, particularly concerning their waiting time (Kittelson &
Associates et al., 2003a). The main conflict is related to passengers’ perception when the agency
implements strategies in order to improve the service. Figure 2-B shows this conceptual
framework related to when transit agencies implement an improvement strategy. In this case,
transit users tend to be satisfied and significantly overestimate their benefits (ICF Macro, 2011).
This bias may occur because users are witnessing the implementation of such measures, as well
as the related time cost saving that they experience. However, the question of why ‘exactly’
users overestimate these benefits is not addressed in the literature. In addition, it is rarely
discussed how these positive estimated perceptions can change over time (shifting back from
Figure 2-B to Figure 2-A).
Unfortunately this tendency to be satisfied is yet to be successfully quantified and put to
use, and will remain that way as long as transit agencies and researchers are capturing
passengers’ satisfaction and perception using mainly customer satisfaction rating techniques. The
traditional rating techniques’ results are devoid of specific insight into how people are
overestimating and quantifying their time changes according to changes in service quality. In
fact, the availability, affordability and accuracy of AVL/APC systems data offers a good
opportunity to understand and to present better estimations of how passengers estimate and
perceive actual time changes in relation to implemented strategies. This is an important policy-
relevant issue, since agencies should understand the quantitative effects of their policies
and implemented strategies not only on their performance, but also on passengers' perception. Such
knowledge will provide an understanding of the link between passengers’ perception and the
benefits of using a specific strategy, which may lead to more accurate measures and predictors of
behavioral responses and, as a result, improved cost-benefit evaluations of transportation
projects.

Figure 2. (A) Perception of Regular Service; and (B) Perception after the Implementation of
Improvement Strategies.

6.3 Transit Agencies Perspective Relative to Service Improvement Strategies


It is essential to assess to what extent the academic literature provides transit agencies
with useful information related to the impacts of various strategies. The impacts of various
strategies on run time and dwell time have long been discussed in the literature. However, it
appears that less attention has been given to the impact of various strategies on service variation,
particularly related to dwell time variation. Furthermore, it is rare to find studies that provide a
comprehensive analysis of the impacts of implementing a set of strategies on service reliability
as well as passengers’ perception of these changes. These are important issues since strategies
may have unexpected impacts when they are implemented together. Therefore, understanding the
synergies and the collective impacts of these strategies is needed (Diab and El-Geneidy, 2012,
2013). This is particularly relevant to transit agencies’ practice, since no transit agencies
indicated using only one strategy to improve their service, and they often employ BRT or BRT-
like systems (that combine a few strategies in order to improve the service).
This knowledge is important to help transit agencies prioritize one strategy or a set of
strategies over the others. The current literature’s limited focus on transit agencies’ knowledge
needs may be limiting the latter’s ability to correctly anticipate the impacts of their efforts on the
service, and accordingly, on passengers’ perception. Therefore, it is suggested that researchers
should provide more in-depth studies regarding the comprehensive impacts of improvement
strategies while understanding how these may function together to affect the transit performance
and its variation. This level of complexity can be investigated using different automatic data
collection systems, thereby giving transit agencies a better idea about the impacts of efforts on
service and on passengers. Finally, while this research has focused on the North American
experience regarding transit agencies’ perspective, lessons can be learned and applied across
different areas in the world, enabling transit agencies to achieve better service reliability that is
positively perceived by the public.

7. REFERENCES
Abkowitz, M., Slavin, H., Waksman, R., Englisher, L., Wilson, N., 1978. Transit service
reliability, In: UMTA-MA-06-0049-78-1 (Ed.). USDOT Transportation Systems Center,
Cambridge.
Abrantes, P., Wardman, M., 2011. Meta-analysis of UK values of travel time: An update.
Transportation Research Part A: Policy and Practice 45(1), 1-17.
American Public Transportation Association, 2011a. 2011 Public transportation fact book, 62nd
ed. American Public Transportation Association (APTA), Washington.
American Public Transportation Association, 2011b. Transit ridership report, First Quarter
2011. American Public Transportation Association (APTA), Washington.
Axhausen, K., Hess, S., König, A., Abay, G., Bates, J., Bierlaire, M., 2008. Income and distance
elasticities of values of travel time savings: New Swiss results. Transport Policy 15(3), 173-185.
Barr, J., Beaton, E., Chiarmonte, J., Orosz, T., 2010. Select bus service on Bx12 in New York
City. Transportation Research Record: Journal of the Transportation Research Board No. 2145,
40–48.
Bates, J., 1986. Definition of practices for bus transit on-time performance: Preliminary study.
Transportation Research Circular 300, 1-5.
Bates, J., Polak, J., Jones, P., Cook, A., 2001. The valuation of reliability for personal travel.
Transportation Research Part E: Logistics and Transportation Review 37(2–3), 191-229.
Benn, H., 1995. Bus route evaluation standards, TCRP synthesis of transit practice 10.
Transportation Research Board, Washington.
Berke, P., Conroy, M., 2000. Are we planning for sustainable development? An evaluation of 30
comprehensive plans. Journal of the American Planning Association 66(1), 21-33.
Brons, M., Rietveld, P., 2007. Betrouwbaarheid en klanttevredenheid in de OVketen: een
statistische analyse. Transumo project Betrouwbaarheid van transportketens, Amsterdam.
Cain, A., Van Nostrand, C., Flynn, J., 2010. Impact of Miami, Florida, Urban Partnership
Agreement Phase 1A on transit user perceptions of I-95 Express bus service. Transportation
Research Record: Journal of the Transportation Research Board 2144(-1), 44-51.
Camus, R., Longo, G., Macorini, C., 2005. Estimation of transit reliability level-of-service based
on automatic vehicle location data, pp. 277-286.
Carrion, C., Levinson, D., 2012. Value of travel time reliability: A review of current evidence.
Transportation Research Part A: Policy and Practice 46(4), 720-741.
Cham, L., 2006. Understanding bus service reliability : A practical framework using AVL/APC
data, Massachusetts Institute of Technology. Dept. of Civil and Environmental Engineering.
Massachusetts Institute of Technology, Massachusetts.
Chang, S., HSU, S., 2003. Modeling of passenger waiting time in intermodal station with
constrained capacity on intercity transit. Journal of the Eastern Asia Society for Transportation
Studies 5.
Chen, C., Skabardonis, A., Varaiya, P., 2003. Travel-Time reliability as a measure of service.
Transportation Research Record: Journal of the Transportation Research Board 1855(-1), 74-
79.
Chicago Transit Authority (CTA), 2011. Meeting the challenge of a struggling economy.
Chicago Transit Authority, Chicago.
Chicago Transit Authority (CTA), 2013. December 2013 performance metrics. Chicago Transit
Authority, Chicago.
Cirillo, C., Eboli, L., Mazzulla, G., 2011. On the asymmetric user perception of transit service
quality. International Journal of Sustainable Transportation 5(4), 216-232.
City of Denver, 2008. Denver strategic transportation plan: Moving people. City of Denver,
Denver.
Conlon, M., Foote, P., O'Malley, K., Stuart, D., 2001. Successful arterial street limited-stop
express bus service in Chicago. Transportation Research Record(1760), 74-80.
Currie, G., 2006. Bus rapid transit in Australasia: Performance, lessons learned and futures.
Journal of Public Transportation 9 (3), 1-22.
Daskalakis, N., Stathopoulos, A., 2008. Users’ perceptive evaluation of bus arrival time
deviations in stochastic networks. Journal of Public Transportation 11(4), 25-38.
dell’Olio, L., Ibeas, A., Cecín, P., 2010. Modelling user perception of bus transit quality.
Transport Policy 17(6), 388-397.
Diab, E., El-Geneidy, A., 2012. Understanding the impacts of a combination of service
improvement strategies on bus running time and passenger’s perception. Transportation
Research Part A: Policy and Practice 46(3), 614-625.
Diab, E., El-Geneidy, A., 2013. Variation in bus transit service: understanding the impacts of
various improvement strategies on transit service reliability. Public Transport 4(3), 209-231.
Dueker, K.J., Kimpel, T.J., Strathman, J.G., Callas, S., 2004. Determinants of bus dwell time.
Journal of Public Transportation 7(1), 21-40.
Dziekan, K., Kottenhoff, K., 2007. Dynamic at-stop real-time information displays for public
transport: effects on customers. Transportation Research Part A: Policy and Practice 41(6), 489-
501.
Dziekan, K., Vermeulen, A., 2006. Psychological effects of and design preferences for real-time
information displays. Journal of Public Transportation 1, 71–89.
El-Geneidy, A., Horning, J., Krizek, K., 2011. Analyzing transit service reliability using detailed
data from automatic vehicular locator systems. Journal of Advanced Transportation 45(1), 66-
79.
El-Geneidy, A., Strathman, J., Kimpel, T., Crout, D., 2006. The effects of bus stop consolidation
on passenger activity and transit operations. Transportation Research Record: Journal of the
Transportation Research Board(1971), 32-41.
El-Geneidy, A., Surprenant-Legault, J., 2010. Limited bus stop service: An evaluation of an
implementation strategy. Public Transport: Planning and Operations 2(4), 291-306.
El-Geneidy, A., Vijayakumar, N., 2011. The effects of articulated buses on dwell and running
times. Journal of Public Transportation 14(3), 63-86.
Fan, W., Machemehl, R., 2009. Do transit users just wait for buses or wait with strategies?
Transportation Research Record: Journal of the Transportation Research Board 2111(-1), 169-
176.
Feitelson, E., 2002. Introducing environmental equity dimensions into the sustainable transport
discourse: issues and pitfalls. Transportation Research Part D 7, 99-118.
Feng, W., Figliozzi, M., 2011. Empirical Findings of Bus Bunching Distributions and Attributes
Using Archived AVL/APC Bus Data, ICCTP 2011, pp. 4330-4341.
FTA, 2012. About FTA. The Federal Transit Administration (FTA).
Furth, P., Hemily, B., Muller, T., Strathman, J., 2006. Using Archived AVL-APC data to
improve transit performance and management, TCRP Report 113. Transportation Cooperative
Research Program (TCRP), Washington, D.C.
Furth, P., Muller, T., 2007. Service reliability and optimal running time schedules.
Transportation Research Record: Journal of the Transportation Research Board 2034(-1), 55-
61.
Geurs, K., Boon, W., Van Wee, B., 2009. Social impacts of transport: Literature review and the
state of the practice of transport appraisal in the Netherlands and the United Kingdom. Transport
Reviews 29(1), 69-90.
Hall, R., 2001. Passenger waiting time and information acquisition using automatic vehicle
location for verification. Transportation Planning and Technology 24, 249-269.
Hemily, B., King, R., 2008. Uses of higher capacity buses in transit service, TCRP Synthesis.
Transportation Research Board, Washington, D.C.
Hensher, D., Stopher, P., Bullock, P., 2003. Service quality––developing a service quality index
in the provision of commercial bus contracts. Transportation Research Part A: Policy and
Practice 37(6), 499-517.
Hess, D., Brown, J., Shoup, D., 2004. Waiting for the Bus. Journal of Public Transportation
7(4), 67–84.
Hickman, M., 2004. Bus Automatic Vehicle Location (AVL) Systems, In: Gillen, D., Levinson,
D. (Eds.), Assessing the Benefits and Costs of ITS. Springer US, pp. 59-88.
Hollander, Y., 2006. Direct versus indirect models for the effects of unreliability. Transportation
Research Part A: Policy and Practice 40(9), 699-711.
ICF Macro, 2011. New Jersey Transit: South Jersey bus study final report. New Jersey
TRANSIT, New Jersey.
Iseki, H., Taylor, B., 2010. Style versus service? An analysis of user perceptions of transit stops
and stations. Journal of Public Transportation 13(3), 38-63.
Iseki, H., Taylor, B., Miller, M., 2006. The effects of out-of-vehicle time on travel behavior:
Implications for transit transfers. California Department of Transportation, California.

Jabareen, R., 2006. Sustainable urban forms: Their typologies, models, and concepts. Journal of
Planning Education and Research 26(1), 38-52.
Kimpel, T., 2001. Time point-level analysis of transit service reliability and passenger demand,
Urban Studies and Planning. Portland State University, Portland, OR, p. 154.
Kimpel, T., Strathman, J., Bertini, R., Bender, P., Callas, S., 2005a. Analysis of transit signal
priority using archived TriMet bus dispatch system data. Transportation Research Record(1925),
156-166.
Kimpel, T., Strathman, J., Bertini, R., Callas, S., 2005b. Analysis of transit signal priority using
archived TriMet bus dispatch system data. Transportation Research Record(1925), 156-166.
King County Metro Transit, 2007. 2007 Annual management report. King County Metro Transit,
Seattle.
King County Metro Transit, 2013a. 2013 Service guidelines report. King County Metro Transit,
Seattle.
King County Metro Transit, 2013b. Strategic plan for public transportation 2011-2021. King
County Metro Transit, Seattle.
Kittelson & Associates, KFH Group, Parsons Brinckerhoff Quade & Douglass, Hunter-Zaworski,
K., 2003a. Transit capacity and quality of service manual, 2nd Edition. Transportation Research
Board, Washington, D.C., USA.
Kittelson & Associates, Urbitran, LKC Consulting Services, Morpace International, Queensland
University of Technology, Nakanishi, Y., 2003b. A guidebook for developing a transit
performance-measurement system, TCRP Report 88. Transit Cooperative Research
Program (TCRP), Washington, D.C.
Koenig, J., 1980. Indicators of urban accessibility: Theory and application. Transportation 9,
145-172.
Lam, W., Morrall, J., 1982. Bus passenger walking distances and waiting times: A summer–
winter comparison. Transportation Quarterly 36(3), 407–421.
Leclerc, F., Schmitt, B., Dube, L., 1995. Waiting time and decision making: is time like money?
Journal of Consumer Research 22(1), 110-119.
Levinson, H., 1991. Supervision strategies for improved reliability of bus routes. Transportation
Research Board, Washington, D.C.
Li, Z., Hensher, D., Rose, J., 2010. Willingness to pay for travel time reliability in passenger
transport: A review and some new empirical evidence. Transportation Research Part E:
Logistics and Transportation Review 46(3), 384-403.
Lucas, K., Marsden, G., Brooks, M., Kimble, M., 2007. Assessment of capabilities for examining
long-term social sustainability of transport and land use strategies. Transportation Research
Record: Journal of the Transportation Research Board 2013(-1), 30-37.
Maryland Department of Transportation, 2009a. 2009 Annual attainment report on transportation
system performance. Maryland Department of Transportation, Maryland.
Maryland Department of Transportation, 2009b. Consolidated transportation program. Maryland
Department of Transportation, Maryland.
Maryland Department of Transportation, 2011. 2011 Annual attainment report on transportation
system performance. Maryland Department of Transportation, Maryland.
Massachusetts Bay Transportation Authority (MBTA), 2008. Final 2008 service plan: Bus, rapid
transit, and boat service changes and service delivery policy modifications. Massachusetts Bay
Transportation Authority, Boston.

Massachusetts Bay Transportation Authority (MBTA), 2009. MBTA Service Delivery Policy.
Massachusetts Bay Transportation Authority, Boston.
MassDOT, 2013. The Way Forward: A 21st-century transportation plan. Massachusetts
Department of Transportation (MassDOT), Boston.
Mazloumi, E., Currie, G., Rose, G., 2010. Using GPS data to gain insight into public transport
travel time variability. Journal of Transportation Engineering 136(7).
Metrolinx, 2008. The Big Move: Transforming transportation in the greater Toronto and
Hamilton Area. Greater Toronto Transportation Authority (Metrolinx), Toronto.
Metropolitan Transportation Authority (MTA), 2008. 2008 Annual report. Metropolitan
Transportation Authority (MTA), New York.
Metropolitan Transportation Authority (MTA), 2009. Proposed MTA capital program: 2010-
2014. Metropolitan Transportation Authority (MTA), New York.
Metropolitan Transportation Authority (MTA), 2011. Performance goals by agency 2011-2015.
Metropolitan Transportation Authority (MTA), New York.
Miami-Dade Transit, 2009. 2008 Direction finder survey. Miami-Dade Transit, Miami.
Miami-Dade Transit, 2012a. SCORECARD DETAIL-Miami Dade Transit Department FY 11-
12. Miami-Dade Transit, Miami.
Miami-Dade Transit, 2012b. Strategic plan 2012. Miami-Dade Transit, Miami.
Milkovits, M., 2008. Modeling the Factors Affecting Bus Stop Dwell Time: Use of Automatic
Passenger Counting, Automatic Fare Counting, and Automatic Vehicle Location Data.
Transportation Research Record: Journal of the Transportation Research Board 2072(-1), 125-
130.
Mishalani, R., McCord, M., Wirtz, J., 2006. Passenger wait time perceptions at bus stops:
Empirical results and impact on evaluating real-time bus arrival information. Journal of Public
Transportation 9(2), 89-106.
Mistretta, M., Goodwill, J., Gregg, R., DeAnnuntis, C., 2010. Best practices in transit service
planning. Center for Urban Transportation Research: University of South Florida, Tampa,
Florida.
Mohring, H., Schroeter, J., Wiboonchutikula, P., 1987. The value of waiting time, travel time,
and a seat on a bus. Rand Journal of Economics 18(1), 40-56.
Monga, A., Saini, R., 2008. Time versus money: Differential use of heuristics. Advances in
Consumer Research 35.
Moreau, A., 1992. Public transport waiting times as experienced by customers: Marketing
research involving the Grenoble system. Public Transport International 41(3), 52–68.
Murray, A., Wu, X., 2003. Accessibility tradeoffs in public transit planning. Journal of
Geographical Systems 5(1), 93-107.
Nam, D., Park, D., Khamkongkhun, A., 2005. Estimation of value of travel time reliability.
Journal of Advanced Transportation 39(1), 39-61.
NJ TRANSIT, 2010. Moving NJ forward: 2010 NJ TRANSIT annual report. NJ TRANSIT, New
Jersey.
NJ TRANSIT, 2011. Moving the needle: 2011 NJ transit annual report. NJ TRANSIT, New
Jersey.
NJ TRANSIT, 2012. Moving the Needle one trip at a time: 2012 NJ TRANSIT Annual report.
NJ TRANSIT, New Jersey.

Noland, R., Polak, J., 2002. Travel time variability: A review of theoretical and empirical issues.
Transport Reviews 22(1), 39-54.
NTD, 2013. National Transit Database data.
OC Transpo, 2009. Transit services annual performance report. The Transit Services Department
of the City of Ottawa, Ottawa.
OC Transpo, 2012. OC Transpo 2012 business plan: Choosing transit for everyday life. OC
Transpo, Ottawa.
Patnaik, J., Chien, S., Bladikas, A., 2004. Estimation of bus arrival times using APC data.
Journal of Public Transportation 7(1), 1-20.
Pawson, R., Greenhalgh, T., Harvey, G., Walshe, K., 2005. Realist review - A new method of
systematic review designed for complex policy interventions. Journal of Health Services
Research and Policy 10(SUPPL. 1), 21-34.
Peek, G., Van Hagen, M., 2002. Creating synergy in and around stations: Three strategies for
adding value. Transportation Research Record: Journal of the Transportation Research Board
1793(-1), 1-6.
Peng, Z., Lynde, E., Chen, W., 2008. Improving service restoration using automatic vehicle
location. Midwest Regional University Transportation Center, Chicago.
Perk, V., Flynn, J., Volinski, J., 2008. Transit Ridership, Reliability, and Retention. National
Center For Transit Research (NCTR), University of South Florida, Florida.
Pinjari, A., Bhat, C., 2006. Nonlinearity of response to level-of-service variables in travel mode
choice models. Transportation Research Record: Journal of the Transportation Research Board
1977(-1), 67-74.
Politis, I., Papaioannou, P., Basbas, S., Dimitriadis, N., 2010. Evaluation of a bus passenger
information system from the users’ point of view in the city of Thessaloniki, Greece. Research in
Transportation Economics 29(1), 249-255.
Psarros, I., Kepaptsoglou, K., Karlaftis, M., 2011. An empirical investigation of passenger wait
time perceptions using hazard-based duration models. Journal of Public Transportation 14(3),
109-122.
Rajbhandari, R., Chien, S., Daniel, J., 2003. Estimation of bus dwell times with automatic
passenger counter information. Transportation Research Record: Journal of the Transportation
Research Board 1841(-1), 120-127.
Reed, T., 1995. Reduction in the burden of waiting for public transit due to real-time schedule
information: a conjoint analysis study, Vehicle Navigation and Information Systems Conference,
1995. Proceedings. In conjunction with the Pacific Rim TransTech Conference, pp. 83-89.
RTD, 2011. Connecting communities: Report to the region, 2010/2011. Denver Regional
Transportation District (RTD), Denver.
RTD, 2012. Service Performance 2012. Denver Regional Transportation District (RTD), Denver.
San Francisco Municipal Transportation Agency (SFMTA), 2011. SFMTA Strategic plan: Fiscal
year 2013 - Fiscal year 2018. San Francisco Municipal Transportation Agency (SFMTA), San
Francisco.
San Francisco Municipal Transportation Agency (SFMTA), 2013. SFMTA 20-Year capital plan.
San Francisco Municipal Transportation Agency (SFMTA), San Francisco.
SFMTA, Nelson\Nygaard Consulting Associates, 2008. Proposition E: Municipal transportation
quality review final report July 1, 2006 – June 30, 2008. San Francisco Municipal
Transportation Agency (SFMTA), San Francisco.

Shires, J., de Jong, G., 2009. An international meta-analysis of values of travel time savings.
Evaluation and Program Planning 32(4), 315-325.
Small, K., Noland, R., Chu, X., Lewis, D., 1999. Valuation of travel-time savings and
predictability in congested conditions for highway user-cost estimation, In: Transportation
Research Board, N.R.C. (Ed.), NCHRP Report 431. National Academy Press, Washington, D.C.
Smith, H., Hemily, B., Ivanovic, M., 2005. Transit signal priority (TSP): A planning and
implementation handbook. ITS America, Washington, D.C.
Société de transport de Montréal, 2009. The STM in 2008 activity report. Société de transport de
Montréal, Montreal, Québec.
Société de transport de Montréal, 2010. On the move 2009 activity report. Société de transport de
Montréal, Montreal, Québec.
Société de transport de Montréal, 2011. Strategic plan 2020. Société de transport de Montréal
(STM), Montreal.
Soman, D., 2001. The mental accounting of sunk time costs: Why time is not like money.
Journal of Behavioral Decision Making 14, 169-185.
Southeastern Pennsylvania Transportation Authority (SEPTA), 2010. Fiscal Years 2010 – 2014:
Five–Year strategic business plan. Southeastern Pennsylvania Transportation Authority
(SEPTA), Philadelphia.
Southeastern Pennsylvania Transportation Authority (SEPTA), 2011. Fiscal year 2011 operating
budget and Fiscal years 2012 to 2020 financial projections. Southeastern Pennsylvania
Transportation Authority (SEPTA), Philadelphia.
Southeastern Pennsylvania Transportation Authority (SEPTA), 2013. 2012 Customer satisfaction
survey final report. Southeastern Pennsylvania Transportation Authority (SEPTA), Philadelphia.
Stanley, J., Vella-Brodrick, D., 2009. The usefulness of social exclusion to inform social policy
in transport. Transport Policy 16, 90-96.
Strathman, J., Dueker, K., Kimpel, T., Gerhart, R., Turner, K., Taylor, P., Callas, S., Griffin, D.,
Hopper, H., 1999. Automated bus dispatching, operations control, and service reliability baseline
analysis. Transportation Research Record(1666), 28-36.
Strathman, J., Hopper, J., 1993. Empirical analysis of bus transit on-time performance.
Transportation Research Part A: Policy and Practice 27, 93-100.
Strathman, J., Kimpel, T., Dueker, K., Gerhart, R., Callas, S., 2002. Evaluation of transit
operations: Data applications of Tri-Met’s automated bus dispatching system. Transportation
29(3), 321-345.
Strathman, J., Kimpel, T., Gerhart, R., Turner, K., Taylor, P., Callas, S., Griffin, D., 2000.
Service reliability impacts of computer-aided dispatching and automatic vehicle location
technology: A Tri-Met case study. Transportation Quarterly 54(3), 85-102.
Surprenant-Legault, J., El-Geneidy, A., 2011. Introduction of reserved bus lane: Impact on bus
running time and on-time performance. Transportation Research Record: Journal of the
Transportation Research Board 2218(-1), 10-18.
Taylor, B., Iseki, H., Miller, M., Smart, M., 2007. Thinking outside the bus: Understanding user
perceptions of waiting and transferring in order to increase transit use. UCLA Institute of
Transportation Studies, Los Angeles.
The Canadian Urban Transit Association (CUTA), 2007. Bus Rapid Transit: A Canadian
perspective, Toronto.
The City of Ottawa, 2008. Transportation master plan, Beyond Ottawa 20/20: planning for
future, Ottawa.

Toronto Transit Commission (TTC), 2009. Transit City Bus Plan. Toronto Transit Commission
(TTC), Toronto.
Toronto Transit Commission (TTC), 2013. Five-Year Corporate Plan 2013-2017. Toronto
Transit Commission (TTC), Toronto.
TransLink, 2004. Transit service guidelines: Public summary report. South Coast British
Columbia Transportation Authority (TransLink), Vancouver.
TransLink, 2009. TransLink 2009 annual report: Creating the foundation for the future. South
Coast British Columbia Transportation Authority (TransLink), Vancouver.
TransLink, 2012. 2013 Base plan and outlook: Transportation and financial base plan for 2013 to
2015 and outlook for 2016 to 2022. South Coast British Columbia Transportation Authority
(TransLink), Vancouver.
Transport Canada, 2012. Organization.
Transport for London, 2013. Annual report and statement of accounts. Transport for London,
London.
TRID, 2013. Home page.
Turnquist, M., 1981. Strategies for improving reliability of bus transit service. Transportation
Research Record 818, 7-13.
Uniman, D., Attanucci, J., Mishalani, R., Wilson, N., 2010. Service reliability measurement
using automated fare card data. Transportation Research Record: Journal of the Transportation
Research Board 2143(-1), 92-99.
Vuchic, V., 2005. Urban transit: Operations, planning and economics. John Wiley & Sons, New
York.
Wardman, M., 2001. A review of British evidence on time and service quality valuations.
Transportation Research Part E: Logistics and Transportation Review 37(2-3), 107-128.
Wardman, M., 2004. Public transport values of time. Transport Policy 11, 363-377.
Watkins, K., Ferris, B., Borning, A., Rutherford, G., Layton, D., 2011. Where is my bus? Impact
of mobile real-time information on the perceived and actual wait time of transit riders.
Transportation Research Part A: Policy and Practice 45(8), 839-848.
WMATA, 2010. Approved fiscal 2010 annual budget. Washington Metropolitan Area Transit
Authority (WMATA), Washington, D.C.
WMATA, 2012. Momentum: Strategic plan 2013-2025. Washington Metropolitan Area Transit
Authority (WMATA), Washington, D.C.
WMATA, 2013. Vital Signs Report: A scorecard of Metro's key performance indicators (KPI),
2013 1st Quarter Results. Washington Metropolitan Area Transit Authority (WMATA),
Washington, D.C.
Yetiskul, E., Senbil, M., 2012. Public bus transit travel-time variability in Ankara (Turkey).
Transport Policy 23(0), 50-59.
Yoh, A., Iseki, H., Smart, M., Taylor, B.D., 2011. Hate to wait: Effects of wait time on public
transit travelers' perceptions. Transportation Research Record: Journal of the Transportation
Research Board(2216), 116-124.
Zauberman, G., Lynch, J., 2005. Resource slack and propensity to discount delayed investments
of time versus money. Journal of Experimental Psychology: General 134(1), 23-37.

Appendix 1. Summary of Studies on Passengers' Perspectives Included in Review

For each study, the issue addressed, the data source(s) and sample size, the analysis methods, the measures used, and the key findings are summarized.

Passenger time value

Mohring et al. (1987), Singapore. Issue: wage, on-board travel time, and waiting time values. Data: survey (n = 11,438). Analysis: maximum likelihood estimates. Measures: travel time; waiting time. Key findings: the value associated with time is usually higher than the fare; the value of in-vehicle time is around half the equivalent of an hourly wage, while waiting time is valued at 2 to 3 times in-vehicle time.

Leclerc et al. (1995), New York, USA. Issue: risk behavior and the value of money and time. Data: 8 surveys (n = 756). Analysis: descriptive statistics. Measures: waiting time. Key findings: the value of consumers' waiting time is not constant but depends on contextual characteristics of the decision situation; respondents preferred risk-averse choices with respect to decisions in the domain of time.

Wardman (2001), England. Issue: the value of time and service quality. Data: meta-analysis using various data sources (143 studies). Analysis: regression models. Measures: walking time; waiting time; travel time. Key findings: walking time, waiting time, and combined walking and waiting time are valued at 1.66, 1.47, and 1.46 times in-vehicle time, respectively.

Wardman (2004), England. Issue: the value of walk time and waiting time. Data: meta-analysis using various data sources (171 studies). Analysis: regression models. Measures: walk time; wait time; headway; travel time. Key findings: waiting time is valued at 2.5 times as much as in-vehicle time, with walking time valued at a similar multiple of in-vehicle time; the value of headway depends upon journey purpose and distance.

Nam et al. (2005). Issue: importance of travel time reliability. Data: on-site survey (sample size not reported). Analysis: multinomial and nested logit models. Measures: travel time. Key findings: the value of reliability, expressed in terms of the standard deviation of travel time, is greater than the value of travel time.

Pinjari & Bhat (2006), Austin, USA. Issue: the value of travel time and travel time variation. Data: web-based survey (n = 317). Analysis: mixed logit model. Measures: travel time. Key findings: the values of travel time and travel time unreliability were found to be nonlinear; during the first 15 minutes of a trip, passengers place a small value on travel time and a higher value on travel time reliability, after which the value of travel time increases rapidly while the valuation of reliability falls radically.

Shires and de Jong (2009). Issue: the value of travel time savings. Data: meta-analysis using various data sources (77 studies). Analysis: panel data models. Measures: travel time. Key findings: the value of travel time savings varies by income, country, travel purpose, mode, distance, and survey method.

Politis et al. (2010), Thessaloniki, Greece. Issue: the value of a real-time information system. Data: on-site survey (n = 300). Analysis: descriptive statistics. Measures: waiting time; number of trips. Key findings: users value real-time information services at, on average, 24.0% of the current fare; women value the service more than men; about 20% of the overall sample stated that they had undertaken more trips as a consequence of the information system.

Abrantes & Wardman (2011), England. Issue: the value of travel time. Data: meta-analysis using various academic studies and reports (226 studies). Analysis: regression models. Measures: travel time. Key findings: the ratio of walk and wait time to in-vehicle time was found to be lower than the commonly used value of two; there is a large and significant difference between the results of studies based on different types of stated preference survey presentation.

Passenger perceptions regarding time

Strathman et al. (1999), Portland, USA. Issue: automated bus dispatching impacts. Data: on-board survey with a 1-4 rating scale (n = 1,815). Analysis: descriptive statistics. Measures: reliability; satisfaction. Key findings: users rated a frequent service as the most reliable and gave it the highest overall satisfaction rating, even though it had the lowest reliability in terms of the coefficient of variation of running times and headways.

Hall (2001), Los Angeles, USA. Issue: perception of waiting time. Data: on-site survey and AVL data (n = 1,199). Analysis: regression and logit models. Measures: waiting time. Key findings: perceived waiting time varies according to age group, destination, and primary language, as well as for first-time users; people who knew the schedule were more inclined to believe the bus was late than those who did not know the schedule.

Hess et al. (2004), Los Angeles, USA. Issue: perception of waiting time. Data: on-site survey and manual headway data (n = 281). Analysis: descriptive statistics. Measures: waiting time. Key findings: riders overestimated their wait time by a factor of two when it was imposed by the transit system, but accurately estimated their wait time when they chose to wait for the free bus ride.

Hollander (2006), City of York, England. Issue: travel time variability and trip time choice. Data: web-based survey (n = 244). Analysis: multinomial logit. Measures: travel time variability. Key findings: the influence of travel time variability on bus users is best explained indirectly through scheduling considerations; the penalty placed on early arrival at the destination is similar to the penalty on travel time itself, while late arrivals are much more heavily penalized.

Mishalani et al. (2006), Ohio, USA. Issue: perception of waiting time. Data: on-site survey (n = 83). Analysis: regression models and descriptive statistics. Measures: waiting time. Key findings: passengers overestimated their waiting time at stops by 0.84 minutes relative to their actual waiting time.

Daskalakis & Stathopoulos (2008), Athens, Greece. Issue: perception of waiting time and headways. Data: on-site survey (n = 300). Analysis: mathematical models. Measures: waiting time. Key findings: the greater the headway, the greater the deviation users perceive, but at a diminishing rate; a reliable service with smaller deviations is more appreciated by the public than a service with shorter headways and less reliability.

Fan & Machemehl (2009), Texas, USA. Issue: waiting time and arrival pattern. Data: observation and video recording (n = 2,237). Analysis: linear regressions. Measures: waiting time; arrival pattern. Key findings: a threshold of about 11 minutes was identified, beyond which passengers begin to coordinate their arrivals at bus stops with the published schedules.

Dell'Olio et al. (2010), Santander, Spain. Issue: how the perception of quality varies according to the available information. Data: focus groups and on-board and on-site surveys (n = 768). Analysis: ordered probit models. Measures: waiting time; travel time; reliability. Key findings: the perception of quality changes with the category of user (frequency of use, income, gender, age, car ownership); users tend to be more critical in their perception of overall quality until they are stimulated into thinking more deeply about other influential variables; as a general rule, the number of users improving their score is practically double the number reducing it for the same situations.

Eboli & Mazzulla (2011), Italy. Issue: asymmetric user perception. Data: survey with a 1-10 rating scale (n = 470). Analysis: mixed logit model. Measures: service quality. Key findings: users' perceptions of transit services are heterogeneous for many reasons, including the qualitative nature of some service aspects, differences in users' socioeconomic characteristics, and the diversity of tastes and attitudes towards transit.

Psarros et al. (2011), Athens, Greece. Issue: perception of waiting time. Data: on-site survey (n = 1,000). Analysis: hazard-based duration models. Measures: waiting time. Key findings: for work, education, shopping, and personal-affairs trips, perceived waiting time exceeds actual waiting time by 27%, 43%, 30%, and 30%, respectively; younger people estimate their waiting time more accurately than older people.

Transit strategies' impacts on passengers' perceptions

Conlon et al. (2001), Chicago, USA. Issue: express service. Data: on-site surveys with a 1-5 rating scale (n = 1,178, 1,006, and 730). Analysis: descriptive statistics. Measures: travel time; waiting time. Key findings: customer satisfaction and loyalty, as measured by before-and-after customer satisfaction surveys, increased significantly for both local and express customers following the implementation of the new express service.

Dziekan & Vermeulen (2006), The Hague, Netherlands. Issue: impacts of real-time information displays. Data: three mailed surveys (n = 53). Analysis: descriptive statistics. Measures: waiting time. Key findings: passengers' perceived waiting time decreased by 20% (1.30 minutes) after the implementation, without any reported improvement in actual service, and with no significant change in perception over the long term.

Dziekan & Kottenhoff (2007), Stockholm, Sweden. Issue: impacts of real-time information displays. Data: review of 11 studies. Measures: waiting time. Key findings: only 4 studies report that users' perceived wait times were reduced due to the real-time information system implementation.

Cain et al. (2010), Miami, USA. Issue: impacts of reserved lanes. Data: two on-board surveys with a 1-5 rating scale (n = 572 and 349). Analysis: descriptive statistics and t-tests. Measures: travel time; on-time performance. Key findings: express lanes, as measured by before-and-after surveys, improved user perceptions of travel time and service reliability; the travel time rating increased by 0.23 points and the service reliability rating by 0.16 points; 63.9% of respondents perceived a saving of 5 to 29 minutes, while the actual saving was 17 minutes.

Barr et al. (2010), New York, USA. Issue: impacts of a BRT system. Measures: travel time. Key findings: 89% of respondents said that BRT is better than the limited-stop services and 84% said that BRT is faster than the limited-stop service, while 30% said that they were riding more frequently than before.

El-Geneidy & Surprenant-Legault (2010), Montreal, Canada. Issue: impacts of express service. Data: on-site survey and AVL/APC data (n = 340). Analysis: linear regressions and t-tests. Measures: travel time. Key findings: implementing a limited-stop service yielded savings of 4.6 minutes in running time for the new limited service (13% compared to the local service); passengers tend to overestimate the savings associated with the new limited-stop service by 4 to 7 minutes more than the actual savings.

Yoh et al. (2011), California, USA. Issue: the relative importance of stop amenities in waiting time perception. Data: on-site survey with a 1-4 rating scale and a stated value for waiting time (n = 900). Analysis: regression models. Measures: waiting time. Key findings: regardless of waiting time, safety and on-time performance were paramount to riders; lighting, cleanliness, information, shelter, and the presence of guards were less important to travelers when waits were short, but became more important with longer wait times.

Watkins et al. (2011), Seattle, USA. Issue: impacts of real-time information delivered via mobile devices. Data: on-site survey, of which 13% of respondents were real-time information users (n = 655). Analysis: linear regression models. Measures: waiting time. Key findings: measured wait time, real-time information, the PM peak period, bus frequency, and aggravation level all influence users' perceptions; real-time information users' perceived wait time equals their measured wait time, and the addition of real-time information decreases perceived wait time by 0.73 minutes.

Diab & El-Geneidy (2012), Montreal, QC, Canada. Issue: impacts of a combination of strategies. Data: AVL/APC data and on-site survey (n = 60,973). Analysis: linear regression models. Measures: travel time. Key findings: the combination of strategies led to a 10.5% decline in running time along the limited-stop service compared to the regular service, although the regular route's running time increased by 1% compared to the initial time period; users tend to overestimate the savings associated with this combination of strategies by 3.5 to 6.0 minutes on the regular route and by 2.5 to 4.1 minutes on the limited-stop service.
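Note: several of the studies summarized above (e.g. Hess et al., 2004; Mishalani et al., 2006; Watkins et al., 2011) contrast riders' self-reported wait times with wait times measured in the field or derived from AVL records, typically using descriptive statistics or t-tests. The short sketch below illustrates, with invented numbers rather than data from any of the cited studies, how such a paired comparison could be set up in Python:

    from scipy import stats

    # Hypothetical observations for the same eight riders, in minutes; these values
    # are illustrative only and are not drawn from any study cited in this review.
    perceived_min = [8.0, 12.5, 6.0, 15.0, 9.5, 11.0, 7.5, 10.0]  # self-reported waits
    measured_min = [5.5, 9.0, 5.0, 11.5, 7.0, 8.5, 6.0, 7.5]      # AVL-derived waits

    # Paired t-test: is the mean perceived-minus-measured gap different from zero?
    t_stat, p_value = stats.ttest_rel(perceived_min, measured_min)
    gaps = [p - m for p, m in zip(perceived_min, measured_min)]
    print(f"Mean overestimation: {sum(gaps) / len(gaps):.2f} min (t = {t_stat:.2f}, p = {p_value:.3f})")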

Appendix 2. Summary of Transit Agencies Plans Included in Review
For each agency, the reliability definition or goal, the objective or expected benefits, the performance measures used, the strategies and policies employed, the treatment of users' perceptions, and the source documents are summarized.

South Coast British Columbia Transportation Authority (TransLink), Vancouver, Canada. Reliability goal: improve on-time performance (OTP), avoid running early, and minimize running late. Expected benefits: increased customer satisfaction. Performance measures: OTP (0 min, +3 min); delivered trips as a percentage of scheduled trips. Strategies and policies: transit priority system (TSP, bus lanes, queue jumpers); express service; bus bay improvements; articulated buses. Users' perception: overall satisfaction ratings (e.g. 7.3 out of 10 in 2009). References: TransLink (2004, 2009, 2012).

Toronto Transit Commission (TTC), Toronto, Canada. Reliability goal: increase OTP and decrease cancellations. Expected benefits: compete effectively with the automobile. Performance measures: OTP, monitored monthly. Strategies and policies: BRT; rapid transit network; TSP, bypasses, shoulders, and ITS. Users' perception: fewer complaints about reliability, reviewed every three months; customer satisfaction rating. References: Metrolinx (2008); Toronto Transit Commission (2009, 2013).

Société de transport de Montréal (STM), Montréal, Canada. Reliability goal: increase bus punctuality. Expected benefits: improved customer experience. Performance measures: OTP (-1 min, +3 min), with a target of 83% (82.6% in 2008 and 83.6% in 2009). Strategies and policies: TSP and ITS; express service; reserved bus lanes; street layout changes. Users' perception: overall customer satisfaction (81% in 2008 and 86% in 2009). References: Société de transport de Montréal (2009, 2010, 2011).

OC Transpo, Ottawa, Canada. Reliability goal: achieve scheduled service availability and OTP. Performance measures: OTP (0 min, +5 min) at time points; cancelled trips as a percentage of scheduled trips; average transit vehicle speed. Strategies and policies: rapid transit system; TSP; road geometry changes; reserved lanes; queue jumps. Users' perception: customer satisfaction surveys. References: OC Transpo (2009, 2012); The City of Ottawa (2008).

Metropolitan Transportation Authority (MTA), New York, USA. Reliability goal: improve performance. Expected benefits: ridership. Performance measures: mean distance between failures (MDBF); bus wait assessment percentage for high-volume bus lines and limited-stop service. Strategies and policies: express service; new buses; TSP; BRT (off-board fare collection, TSP, and real-time bus information); a new fare collection system; managing fleet defects; improved schedules. Users' perception: ridership; customer satisfaction ratings (1 to 10) for OTP, safety, and overall service. References: Metropolitan Transportation Authority (2008, 2009, 2011).

San Francisco Municipal Transportation Agency (SFMTA), San Francisco, USA. Reliability goal: meet core operational performance objectives (e.g. achieve OTP). Expected benefits: ability to speed up transit; meet Transit Effectiveness Project (TEP) objectives. Performance measures: OTP (-1 min, +4 min); headway adherence (as a secondary measure). Strategies and policies: BRT; reserved bus lanes; all-door boarding; stop spacing; TSP and signal timing; articulated buses; improving the fare collection system. Users' perception: customer satisfaction surveys. References: San Francisco Municipal Transportation Agency (2011, 2013); SFMTA and Nelson\Nygaard Consulting Associates (2008).

Chicago Transit Authority (CTA), Chicago, USA. Reliability goal: minimize system delays and manage rail and bus intervals. Expected benefits: decreased delay and bus bunching. Performance measures: percentage of big gap intervals; percentage of bunched intervals. Strategies and policies: BRT; TSP; articulated buses; bus arrival information. Users' perception: number of complaints. References: Chicago Transit Authority (2011, 2013).

Maryland Transit Administration (MTA), Maryland, USA. Reliability goal: quality of service and efficiency; OTP. Expected benefits: ridership. Performance measures: OTP (87% achieved in 2010 against a 90% target). Strategies and policies: AVL system and a centralized control center; the CharmCard smart card; express service; fleet replacements. Users' perception: customer satisfaction rating (1 to 5). References: Maryland Department of Transportation (2009b, 2011).

Massachusetts Bay Transportation Authority (MBTA), Boston, USA. Reliability goal: service should be operated as scheduled. Expected benefits: decreased unpredictable wait and/or travel times. Performance measures: OTP; for headways of 10 minutes or more, OTP at the start (0 min, +3 min), mid-route (0 min, +7 min), and end (-3 min, +7 min) of the trip; for headways under 10 minutes, OTP within 1.5 times the scheduled headway and OTP at the end of the trip within 20% of the run time. Strategies and policies: BRT; AVL/APC; newer buses (low-floor buses). Users' perception: rider complaints; public meeting feedback. References: Massachusetts Bay Transportation Authority (2008, 2009); MassDOT (2013).

Southeastern Pennsylvania Transportation Authority (SEPTA), Philadelphia, USA. Reliability goal: improve OTP. Performance measures: OTP (-59 sec, +4 min), with a target of 78% in 2011 (75% in 2010); bus arrival; MDBF, with a target of 9,125 in 2012 (7,066 in 2010); reported every 6 months. Strategies and policies: new technologies; new payment methods; schedule evaluation; route adjustments. Users' perception: reliability rating for all modes (7.8 out of 10 in 2012); customer satisfaction rating (1 to 10). References: Southeastern Pennsylvania Transportation Authority (2010, 2011, 2013).

NJ TRANSIT, New Jersey, USA. Reliability goal: achieve OTP. Expected benefits: decreased delays. Performance measures: OTP (-59 sec, +5 min); bus departures at a few main stations; 94% in 2010 (no target). Strategies and policies: newer full-size buses; low-floor buses; the "Tap & Go" system. Users' perception: customer satisfaction rating (1 to 10). References: NJ TRANSIT (2010, 2012).

Washington Metropolitan Area Transit Authority (WMATA), Washington, D.C., USA. Reliability goal: OTP. Expected benefits: meet customer expectations by consistently delivering quality service. Performance measures: OTP (-2 min, +7 min), based on arrival time at a time point, with a target of 78% in 2013 (77.5% in 2012); MDBF, with a target of 8,100 miles in 2013 (8,485 miles in 2012); reported quarterly. Strategies and policies: Priority Corridor Network (TSP and exclusive bus lanes); management actions; express service; route adjustments. Users' perception: reliability (73% in 2012); overall customer satisfaction (81% in 2013). References: WMATA (2010, 2012, 2013).

King County Metro Transit, Department of Transportation, Seattle, USA. Reliability goal: decrease late trips. Expected benefits: improved satisfaction. Performance measures: OTP (-1 min, +5 min), measured at time points and reported monthly, with a target of 80% in 2013 (77.5% in 2012) and 65% in the PM peak period. Strategies and policies: rapid transit; schedule revisions; TSP; reserved bus lanes; queue bypasses; stop consolidation. Users' perception: customer satisfaction with OTP; customer complaints. References: King County Metro Transit (2007, 2013a, 2013b).

Denver Regional Transportation District (RTD), Denver, USA. Reliability goal: on-time service as scheduled; riders deserve on-time service. Expected benefits: decreased user waiting times; ridership. Performance measures: OTP (-1 min, +5 min); a maximum delay of 30 minutes. Strategies and policies: TSP; bus lanes; BRT. References: City of Denver (2008); RTD (2011, 2012).

Miami-Dade Transit, Miami, USA. Reliability goal: improve OTP. Expected benefits: improved rider satisfaction. Performance measures: OTP (-2 min, +5 min), with a target of 80% (79% in 2009 and 80% in 2012). Strategies and policies: TSP. Users' perception: percentage of users satisfied with service reliability (35% in 2008, with a target of 45%). References: Miami-Dade Transit (2009, 2012a, 2012b).

Notes: OTP refers to on-time performance, with the figures in parentheses indicating how early and how late a bus may be and still be considered on time; MDBF refers to mean distance between failures. A big gap interval is an instance when the time between buses is more than double the scheduled interval, or a gap of more than 15 minutes. The percentage of bunched intervals is the number of bus intervals (the time between two buses at a bus stop) that are 60 seconds or less, divided by the total number of weekday bus intervals traveled during the month. The bus wait assessment is the percentage of actual intervals between vehicles that are no more than the scheduled interval plus 25% of the headway.
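To make these measures concrete, the sketch below shows one possible way of computing on-time performance, the share of big gap intervals, and the share of bunched intervals from archived departure times at a single stop. The on-time window, the data, and the function itself are illustrative assumptions only and do not reproduce any agency's published method.

    def reliability_indicators(scheduled, actual, early=1.0, late=5.0):
        # On-time performance: share of departures within [-early, +late] minutes of schedule.
        deviations = [a - s for s, a in zip(scheduled, actual)]
        otp = sum(-early <= d <= late for d in deviations) / len(deviations)

        # Actual and scheduled intervals (headways) between consecutive buses.
        actual_gaps = [b - a for a, b in zip(actual, actual[1:])]
        sched_gaps = [b - a for a, b in zip(scheduled, scheduled[1:])]

        # Big gap: an actual interval more than double the scheduled one, or over 15 minutes.
        big_gap = sum(g > 2 * s or g > 15 for g, s in zip(actual_gaps, sched_gaps)) / len(actual_gaps)

        # Bunched interval: two consecutive buses 60 seconds (1 minute) or less apart.
        bunched = sum(g <= 1.0 for g in actual_gaps) / len(actual_gaps)
        return otp, big_gap, bunched

    scheduled = [0, 10, 20, 30, 40, 50]  # scheduled departures, minutes past the hour (hypothetical)
    actual = [1, 12, 13, 34, 41, 58]     # observed departures (hypothetical)
    print(reliability_indicators(scheduled, actual))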

Appendix 3. Summary of Studies on Service Improvement Strategies Included in Review

For each study, the issue addressed, the data source(s) and sample size, the analysis methods, the measures used, and the key findings are summarized.

Strathman et al. (2000), Portland, USA. Issue: impact of a bus dispatching system (BDS). Data: manual and AVL/APC data (n = 830). Analysis: linear regression models. Measures: running time. Key findings: the implementation of the bus dispatching system decreased running time by 1.45 minutes (around 3% of the running time before the BDS).

Strathman et al. (2002), Portland, USA. Issue: impact of drivers' experience. Data: AVL/APC data (n = 110,743). Analysis: linear regression models. Measures: running time. Key findings: bus operators are an important source of running time variation after controlling for factors such as route design, time of day, direction of service, and passenger activity; operators' relative running time decreases by 0.57 seconds for each month of additional experience.

Dueker et al. (2004), Portland, USA. Issue: impact of low-floor buses. Data: AVL/APC data (n = 353,552; 2,347; 16,504; and 18,098). Analysis: linear regression models. Measures: dwell time. Key findings: in the sub-sample without lift operations, a low-floor bus is estimated to reduce dwell time by 0.11 seconds (0.93%) per dwell; for lift operations, a low-floor bus reduces dwell time by nearly 5 seconds (4.74 seconds, or 5.8%).

Kimpel et al. (2005a), Portland, USA. Issue: impact of transit signal priority (TSP). Data: AVL/APC data (n = 18,132). Analysis: summary statistics and a regression model. Measures: running time. Key findings: the expected benefits of TSP are not consistent across routes and time periods, nor across the various performance measures (e.g. running time, running time variation, headway, and OTP).

El-Geneidy et al. (2006), Portland, USA. Issue: impact of stop consolidation. Data: AVL/APC data (n = 332). Analysis: linear regression models. Measures: running time; running time variation. Key findings: bus stop consolidation had no significant effect on passenger activity, whereas bus running times improved by nearly 6%; running time improvements may have been limited by insufficient schedule adjustments; no evidence was found of an impact of stop consolidation on running time variation or headway variation.

Milkovits (2008), Chicago, USA. Issue: impact of smart cards and bus type. Data: AFC/AVL/APC data (n = 165,000). Analysis: linear regression models. Measures: dwell time. Key findings: smart cards are estimated to have a 1.5-second faster transaction time than magnetic strip tickets, but only in uncrowded situations; when the number of onboard passengers exceeds the seating capacity, there is no statistically significant difference between the fare media types.

El-Geneidy et al. (2011), Minneapolis, MN, USA. Issue: impact of drivers' experience. Data: AVL/APC data (n = 21,275 and 97). Analysis: linear regression models. Measures: running time; running time variation. Key findings: drivers' experience decreases run time by 0.34 for each additional year of experience, and a 1% variation in drivers' experience leads to a 5% decline in the run time coefficient of variation.

Surprenant-Legault & El-Geneidy (2011), Montreal, Canada. Issue: impacts of reserved lanes. Data: AVL/APC data (n = 4,384). Analysis: linear regression and logit models. Measures: running time; OTP. Key findings: the reserved bus lane yielded savings of 1.3% to 2.2% in total running time, with more significant benefits for northbound afternoon peak trips than for southbound morning peak trips because of northbound congestion levels; the introduction of the reserved lane increased the odds of being on time by 65% for both routes.

El-Geneidy & Vijayakumar (2011), Montreal, QC, Canada. Issue: impact of articulated buses. Data: AVL/APC data (n = 253,260 and 9,235). Analysis: linear regression models. Measures: running time; dwell time. Key findings: the operation of articulated buses yielded savings in dwell time, especially with high levels of passenger activity and use of the third door for alighting; however, these savings were not reflected in running time, since articulated buses are generally slower than regular buses.

Yetiskul & Senbil (2012), Ankara, Turkey. Issue: impact of new buses in the fleet. Data: AFC data (n = 3,150; 2,481; and 7,424). Analysis: linear regression models. Measures: running time variation. Key findings: three main causes of travel-time variability were identified and tested (the temporal dimension, the spatial dimension, and service characteristics), and model results indicate that all of these factors affect travel-time variability.

Diab & El-Geneidy (2013), Montreal, QC, Canada. Issue: impacts of a set of strategies. Data: AVL/APC data (n = 255,000 and 447). Analysis: linear regression models. Measures: running time variation; running time deviation. Key findings: the introduction of a smart card fare collection system increased bus running time and service variation compared to the initial situation; articulated buses, limited-stop bus service, and reserved bus lanes have mixed effects on variation relative to the changes in running time, while TSP did not show an impact on variation.
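Note: most of the studies summarized in this appendix estimate strategy impacts by regressing observed running or dwell times from archived AVL/APC records on an indicator for the strategy together with controls such as passenger activity and time of day. A minimal sketch of that type of specification, using an assumed input file and hypothetical column names rather than any study's actual data, is shown below:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical archived AVL/APC extract; the file name and columns are assumptions.
    trips = pd.read_csv("avl_apc_trips.csv")

    # running_time: observed trip time (min); reserved_lane: 1 for trips made after the
    # strategy was implemented; boardings and alightings: passenger activity; pm_peak:
    # 1 for afternoon peak trips.
    model = smf.ols("running_time ~ reserved_lane + boardings + alightings + pm_peak",
                    data=trips).fit()

    print(model.summary())  # the reserved_lane coefficient approximates the strategy's effect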
