
Agreeing To Cross: How Drivers and Pedestrians Communicate*

Amir Rasouli, Iuliia Kotseruba and John K. Tsotsos1

*This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the NSERC Strategic Network for Field Robotics (NCFRN), and the Canada Research Chairs Program through grants to JKT.
1 The authors are with the Department of Electrical Engineering and Computer Science and the Center for Vision Research, York University, Toronto, Canada. {aras; yulia k; tsotsos}@eecs.yorku.ca

Abstract— The contribution of this paper is twofold. The first is a novel dataset for studying the behaviors of traffic participants while crossing. Our dataset contains more than 650 samples of pedestrian behaviors in various street configurations and weather conditions. These examples were selected from approximately 240 hours of driving in city, suburban and urban roads. The second contribution is an analysis of our data from the point of view of joint attention. We identify what types of non-verbal communication cues road users use at the point of crossing, their responses, and under what circumstances the crossing event takes place. It was found that in more than 90% of the cases pedestrians gaze at the approaching cars prior to crossing at non-signalized crosswalks. The crossing action, however, depends on additional factors such as time to collision, the driver's explicit reaction or the structure of the crosswalk.

Fig. 1: An overview of joint attention in crossing. The timeline of events is recovered from the behavioral data and shows a single pedestrian crossing the parking lot. Initially, the driver is moving slowly and, as he notices the pedestrian ahead, slows down to let her pass. At the same time the pedestrian crosses without looking first, then turns to check if the road is safe, and, as she sees the driver yielding, continues to cross.

I. INTRODUCTION

The fascination with autonomously driving vehicles goes as far back as the mass production of early automobiles. Since the early 1920s the automotive industry has witnessed numerous attempts to achieve full autonomy in the form of radio signal controlled cars [1], wire following vehicles [2], lane detection and car following [3] and, in more recent works, cars that can drive fully autonomously under certain conditions [4].

Despite such success stories in autonomous control systems, designing fully autonomous vehicles suitable for urban environments remains an unsolved problem. Aside from the challenges associated with developing suitable infrastructure [5] and regulating autonomous behaviors [6], one of the major dilemmas faced by autonomous vehicles is how to communicate with other road users in a chaotic traffic scene [7]. In addition to the official rules that govern the flow of traffic, humans often rely on informal rules resulting from non-verbal communication among them and anticipation of the other traffic participants' intentions. For instance, pedestrians intending to cross a street where there is no stop sign or traffic signal often establish eye contact with the driver to ensure that the approaching car will stop for them. Other forms of non-verbal communication such as hand gestures or body posture are also used to resolve ambiguities in typical traffic situations. Furthermore, the characteristics of a road user (e.g. age and gender), the physical environment (the structure of the crosswalk, weather, etc.) and even cultural differences make estimating the intention of traffic participants particularly challenging [8].

Our contribution in the proposed work is twofold. First, we introduce a novel visual dataset for the detection and analysis of pedestrians' behaviors while crossing (or attempting to cross) the street under various conditions. We call this dataset Joint Attention in Autonomous Driving (JAAD). Then we present some of our findings regarding the course of actions taken and the non-verbal cues used by pedestrians in different crossing scenarios. We show that crossing behavior can be influenced by various contextual elements such as crossway structure, the driver's behavior, distance to the approaching vehicles, etc.

II. RELATED WORKS

A. Studies of driver and pedestrian interaction

Numerous psychological studies have examined the behaviors of drivers and pedestrians before crossing events. Usually, the following aspects are considered: the likelihood of the driver yielding ([9], [10], [11]), driver awareness of the pedestrian [12], [13] and the pedestrian's decision making [14], [15]. Multiple factors affecting these behaviors have been identified: vehicle speed and time to collision (TTC) ([16], [17]), the size of the gap between vehicles [18], geometry and other features of the road (signs and delineation) [14], weather conditions [15], crossing conditions (whether the pedestrian is crossing from a standstill or walking), the number of pedestrians
crossing [18], gender and age of the drivers and pedestrians [14], eye contact between the pedestrian and the driver ([11], [19]), etc.

Typically, the interactions between traffic participants are treated mechanistically. For instance, TTC takes into account the speed of the vehicle and the distance to the pedestrian and is thought to affect his/her crossing behavior ([20], [17], [21], [16]).

However, several recent studies show that non-verbal communication is also important for determining the intentions of traffic participants. For example, drivers are more likely to yield if they are looked at by the pedestrian waiting to cross ([11], [19]).

In a psychological experiment by Schmidt et al. [17], participants were unable to correctly evaluate pedestrians' crossing intentions based only on the trajectories of their motion, suggesting that parameters of body language (posture, leg and head movements) are valuable cues.

In computer vision and robotics, passive approaches are prevalent for predicting pedestrians' actions during the crossing. These works mainly look at dynamic factors in the scene such as pedestrians' trajectories [22] and velocities [23], or try to predict changes in the behavior of pedestrians crossing as a group [24].

In more recent works, the pedestrian's body language is used as a means of predicting behavior [25], [26]. In these works, head orientation is associated with the pedestrian's level of awareness; however, the learning is crude and the context is not taken into account. For instance, the driver's reaction or the vehicle's speed, as well as the structure of the crossway such as the presence of a traffic signal or the width of the street, is not considered.

B. Existing Datasets

There are many datasets for pedestrian detection introduced by the computer vision and robotics communities, to name a few, KITTI [27], the Caltech pedestrian detection benchmark [28] and the Daimler Pedestrian Benchmark Dataset [29]. These datasets are accompanied by ground truth information in the form of bounding boxes, stereo information, sensor readings and occlusion tags.

To the best of our knowledge, there are no datasets facilitating the study of pedestrians' crossing behavior. Most of the data for the relevant psychological studies is collected at select locations and involves direct observation by the researchers on site. Another potential source is data collected for Naturalistic Driving Studies (NDS). These were introduced to eliminate the observer's effect and aggregate large volumes of data on everyday driving patterns over an extended period of time. A number of such studies have been launched in the USA [30], [31], Europe [32], Asia [33] and Australia [34]. Although these studies produced petabytes of video recordings of everyday driving situations, at present the processing of this data has been focused on identifying crash and near-crash events and the factors that caused them. Since access to the raw NDS data is restricted and only general anonymized statistics are available, we conducted a small-scale naturalistic driving study and extracted data on the non-verbal communication occurring between the traffic participants in various situations. The following sections discuss the data collection procedure, general statistics and the preliminary results of our study.

III. THE JAAD DATASET

The JAAD dataset (http://data.nvision2.eecs.yorku.ca/JAAD_dataset/; ethics certificate #2016-203 from York University) was created to study the behavior of traffic participants. The data consists of 346 high-resolution video clips (5-15 s) showing various situations typical for urban driving. These clips were extracted from approximately 240 hours of driving videos collected in several locations. Two vehicles equipped with wide-angle video cameras were used for data collection (Table I). Cameras were mounted inside the cars in the center of the windshield below the rear-view mirror.

TABLE I: Properties of the samples in the database.

# Clips | Location | Resolution | Camera Model
55 | Toronto, Canada | 1920 × 1080 | GoPro HERO+
276 | Kremenchuk, Ukraine | 1920 × 1080 | Garmin GDR-35
6 | Hamburg, Germany | 1280 × 720 | Highscreen Black Box Connect
5 | New York, USA | 1920 × 1080 | GoPro HERO+
4 | Lviv, Ukraine | 1280 × 720 | Highscreen Black Box Connect

The video clips represent a wide variety of scenarios involving pedestrians and other drivers. Most of the data is collected in urban areas (downtown and suburban); only a few clips are filmed in rural locations. The samples cover a variety of situations such as pedestrians crossing individually or as a group, pedestrians occluded by objects, pedestrians walking along the road and many more. The dataset contains fewer clips of interactions with other drivers; most of them occur at uncontrolled intersections, in parking lots or when another driver is moving across several lanes to make a turn.

The videos are recorded at different times of the day and under various weather and lighting conditions. Some of them are particularly challenging, for example, due to sun glare. The weather can also impact the behavior of road users: for example, during heavy snow or rain, people wearing hooded jackets or carrying umbrellas may have limited visibility of the road. Since their faces are obstructed, it is also harder to tell from the driver's perspective whether they are paying attention to the traffic.

We attempted to capture all of these conditions for further analysis by providing two kinds of annotations for the data: bounding boxes and textual annotations. Bounding boxes are provided only for cars and pedestrians that interact with or require the attention of the driver (e.g. another car yielding to the driver, a pedestrian waiting to cross the street, etc.). Bounding boxes for each video are written into an XML file with frame number, coordinates, width, height, and an occlusion flag. The textual annotations are created using the BORIS software for video observations [35]. It allows predefined behavior labels to be assigned to different subjects seen in the video, and can also save additional data, such as the video file id, the location where the observation was made, etc. (see Fig. 1 for an example).
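To make the bounding-box format concrete, the minimal sketch below reads a per-video annotation file of the kind described above (frame number, box coordinates, width, height and occlusion flag). The element and attribute names used here (track, box, frame, x, y, width, height, occluded) are assumptions for illustration only, not the exact schema shipped with JAAD.

```python
# Minimal sketch: load per-frame boxes from a JAAD-style XML annotation file.
# Tag/attribute names are assumed for illustration; consult the dataset's own
# annotation files for the actual schema.
import xml.etree.ElementTree as ET
from collections import defaultdict

def load_boxes(xml_path):
    """Return {frame_number: [(label, x, y, w, h, occluded), ...]}."""
    boxes = defaultdict(list)
    root = ET.parse(xml_path).getroot()
    for track in root.iter("track"):            # one track per pedestrian/car (assumed)
        label = track.get("label", "unknown")   # e.g. "pedestrian1", "car1"
        for box in track.iter("box"):
            frame = int(box.get("frame"))
            boxes[frame].append((
                label,
                float(box.get("x")), float(box.get("y")),
                float(box.get("width")), float(box.get("height")),
                box.get("occluded") == "1",
            ))
    return boxes

if __name__ == "__main__":
    # Hypothetical file name; one annotation file per video clip is assumed.
    for frame, items in sorted(load_boxes("video_0001.xml").items())[:5]:
        print(frame, items)
```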
Fig. 2: Joint attention motifs of pedestrians: (a) crossing events, (b) no-crossing events. Diagram (a) shows a summary of 345 sequences of pedestrians' actions before and after crossing. Diagram (b) shows 92 sequences of actions when pedestrians did not cross. Vertical bars represent actions color-coded as the precondition to crossing, attention, reaction to the driver's actions, crossing or ambiguous actions. Curved lines between the bars show connections between consecutive actions. The thickness of lines reflects the frequency of the action in the 'crossing' or 'non-crossing' subset. The sequences longer than 10 actions (e.g. when the pedestrian hesitates to cross) are extremely rare and are not shown.

We save the following data for each video clip: weather, time of the day, age and gender of the pedestrians, location and whether it is a designated crosswalk.

Each pedestrian is assigned a label (pedestrian1, pedestrian2, etc.). We also distinguish between the driver inside the car and other drivers, which are labeled as Driver and car1, car2, etc., respectively. This is necessary for situations where two or more drivers are interacting. Finally, a range of behaviors is defined for drivers and pedestrians: walking, standing, looking, moving, etc. A more detailed example of textual annotation can be found in [36].

IV. THE DATA

In our data, we observed high variability in the behaviors of pedestrians at the point of crossing/no-crossing, with more than 100 distinct patterns of actions. For instance, Fig. 2a shows sequences of actions during the completed crossing scenarios found in the dataset. Two typical patterns, "standing, looking, crossing" and "crossing, looking", cover only half of the situations observed in the dataset. Similarly, in one third of the non-crossing scenarios (Fig. 2b) pedestrians are waiting at the curb and looking at the traffic. Otherwise, the behaviors vary significantly both in the number of actions before and after crossing and in the meaning of particular actions (e.g. standing may be both a precondition and a reaction to the driver's actions).

For further analysis we split these behavioral patterns into 9 groups depending on the initial state of the pedestrian and whether attention or the act of crossing is happening. We list these actions and the number of samples in Table II. Here attention refers to the first moment the pedestrian is assessing the environment and expressing his/her intention to the approaching vehicles; it is therefore considered a form of non-verbal communication.

Visual attention takes two forms: looking and glancing. Looking refers to scenarios in which the pedestrian inspects the approaching car (typically for 1 second or longer), assesses the environment and in some cases establishes eye contact with the driver. The other form of attention, glancing, usually lasts less than a second and is used to quickly assess the location or speed of the approaching vehicles. Pedestrians glance when they have a certain level of confidence in predicting the driver's behavior, e.g. the vehicle is stopped or moving very slowly, or is otherwise sufficiently far away and does not pose any immediate danger.

V. OBSERVATIONS AND ANALYSIS

Our data contains various scenarios in which pedestrians are observed during or prior to crossing. Two categories from Table II, crossing and action, are omitted from the analysis. Since the crossing scenarios do not demonstrate the full crossing event, it is difficult to assess the behavior of the pedestrians at the point of crossing. As for the action cases, the intentions of the pedestrians are ambiguous; for example, the pedestrians are not approaching the curb or are standing far away from the crossway.
TABLE II: The behavioral patterns observed in the data.

Behavior Sequence | Meaning | Number of Samples
Crossing | The pedestrian is observed at the point of crossing and no attention takes place | 152
Crossing + Attention | The pedestrian is observed at the point of crossing and some form of attention occurs | 64
Crossing + Attention + Reaction | The pedestrian is observed at the point of crossing, some form of attention occurs and the pedestrian changes behavior | 29
Precondition + Crossing | The pedestrian is walking/standing and crosses without paying attention | 37
Precondition + Attention + Crossing | The pedestrian is walking/standing and crosses after paying attention | 160
Precondition + Attention + Reaction + Crossing | The pedestrian is walking/standing, pays attention and changes behavior prior to crossing | 64
Action | The pedestrian is walking/standing and his/her intention is ambiguous | 56
Action + Attention | The pedestrian is about to cross and pays attention | 43
Action + Attention + Reaction | The pedestrian is about to cross, pays attention and responds | 49
Total | | 654
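As a purely illustrative sketch of how an annotated action sequence could be mapped to one of the nine behavioral patterns in Table II, the snippet below collapses an ordered list of event labels into the precondition / attention / reaction / crossing categories and builds the pattern name. The event-to-category mapping is an assumption made for this example; the actual annotation vocabulary follows the BORIS labels described in [36].

```python
# Minimal sketch (not the authors' code): derive a Table II-style pattern name
# from an ordered list of annotated pedestrian events.
# The CATEGORY mapping below is assumed for illustration only.
CATEGORY = {
    "standing": "Precondition",
    "walking": "Precondition",
    "looking": "Attention",
    "glance": "Attention",
    "stop": "Reaction",
    "slow down": "Reaction",
    "speed up": "Reaction",
    "hand gesture": "Reaction",
    "nod": "Reaction",
    "crossing": "Crossing",
}

def pattern_name(events):
    """Collapse an event sequence into the ordered set of categories it contains."""
    seen = []
    for event in events:
        cat = CATEGORY.get(event, "Action")  # unknown/ambiguous events -> Action
        if cat not in seen:
            seen.append(cat)
    return " + ".join(seen)

# Example: a pedestrian stands, looks at the approaching car, then crosses.
print(pattern_name(["standing", "looking", "crossing"]))
# -> Precondition + Attention + Crossing
```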

A. Forms of non-verbal communication

In the course of a crossing event, pedestrians often use different forms of non-verbal communication (in more than 90% of the cases in our dataset). The most prominent signal to transmit the crossing intention is looking (90%) or glancing (10%) towards the oncoming traffic. Other forms of communication are rarer, e.g. nodding (as a form of gratitude and acknowledgement) and hand gestures (as a form of gratitude or yielding), and are usually performed in response to the driver's action.

The pedestrians' response to the communication is not always explicit and is often realized as a change in their behavior. For instance, when a pedestrian slows down or stops, it could be an indicator of noticing the approaching vehicle or of the driver not yielding. Table III summarizes the forms of communication and responses observed in the data. In this table we distinguish between the primary and secondary occurrence of attention. Primary attention is the first instance when the pedestrian inspects the environment prior to crossing. Secondary attention refers to subsequent inspection of the environment or checking of the traffic while crossing.

TABLE III: Forms of pedestrians' communication and response. Primary (PO) and secondary (SO) occurrence of attention.

Category | Form of Communication | Number of Occurrences
attention | looking (PO) | 328
attention | glance (PO) | 37
attention | looking (SO) | 106
attention | glance (SO) | 19
response | stop | 71
response | clear path | 29
response | slow down | 24
response | speed up | 14
response | hand gesture | 13
response | nod | 11

B. Attention occurrence prior to crossing

As mentioned earlier, there are scenarios in which pedestrians do not pay attention to the moving traffic. To investigate the probability of attention occurrence, one important factor to consider is TTC, i.e. how long it takes the approaching vehicle to arrive at the position of the pedestrian, given that it maintains its current speed and trajectory.
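As a concrete illustration of this quantity, the sketch below computes TTC from the vehicle's speed and its distance to the pedestrian's position, and bins events into coarse TTC intervals of the kind used in the analysis that follows. The interval boundaries are assumptions loosely based on the thresholds mentioned in this section (2 s, 6 s, 10 s), not the paper's exact bins, and the function itself is not the authors' measurement pipeline.

```python
# Minimal sketch (not the authors' pipeline): time-to-collision and a coarse
# binning of the kind used to summarize attention vs. TTC.
import math

def time_to_collision(distance_m: float, speed_mps: float) -> float:
    """TTC in seconds: time for the vehicle to reach the pedestrian's position,
    assuming it keeps its current speed and trajectory."""
    if speed_mps <= 0.0:
        return math.inf  # a stopped vehicle never "arrives"
    return distance_m / speed_mps

def ttc_interval(ttc_s: float) -> str:
    """Assign a TTC value to one of four illustrative intervals.
    Boundaries are assumed for this example, not taken from the paper."""
    if ttc_s < 2.0:
        return "< 2 s"
    if ttc_s < 6.0:
        return "2-6 s"
    if ttc_s < 10.0:
        return "6-10 s"
    return ">= 10 s"

# Example: a car 30 m away travelling at 36 km/h (10 m/s).
ttc = time_to_collision(distance_m=30.0, speed_mps=10.0)
print(ttc, ttc_interval(ttc))  # -> 3.0 2-6 s
```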
The relationship between attention occurrence and TTC is illustrated in Fig. 3. Crossing without attention comprises only about 10% of all crossing scenarios, out of which more than 50% of the cases occurred when TTC was above 10 s (including situations where the approaching vehicle is stopping). There are also no cases of crossing without attention when TTC is less than 2 s.

Fig. 3: Relationship between TTC and the probability of attention occurring prior to crossing.

The context in which the crossing takes place also plays a role in crossing behavior. The context can be described by factors such as the weather conditions, the street structure and the driver's reaction. Since analyzing all of these factors is beyond the scope of this paper, here we only look at the effect of the street structure.

There are two factors that characterize a crosswalk: whether it is designated (there is a zebra crossing or a traffic signal) and its width (measured as the number of lanes). In our samples, crossing without attention only happened at non-designated crosswalks when TTC was higher than 6 seconds (see Fig. 4).

Fig. 4: The pedestrian attention frequency at (a) designated and (b) non-designated crosswalks.

The full crossing events happen in streets with widths ranging from 1 lane (narrow one-way streets) to 4 lanes (main streets).

We report on the data by dividing the results into 4 intervals with respect to the TTC values and, in each category, we group them based on the number of lanes (see Fig. 5). As illustrated, when TTC is below 3 s there is no occurrence of crossing without attention in streets wider than 2 lanes. In fact, only 18% of the crossings happened in streets wider than 2 lanes.

Fig. 5: Attention occurrence with respect to the number of lanes.

The duration of attention, or how fast pedestrians tend to begin crossing from the moment they gaze at the approaching car, also may vary. As illustrated in Fig. 6, the duration of looking depends on time to collision. The further away the vehicle is from the pedestrians, the longer it will take them to assess the intention of the driver, hence they will attend longer. The gaze duration increases up to a maximum safe TTC threshold (from 7 s for adults up to 8 s for the elderly), after which it dramatically declines when the vehicle is either far away or stopped. In addition, elderly pedestrians, in comparison to adults and children, tend to be more conservative and spend on average about 1 s longer looking prior to crossing.

Fig. 6: Average duration of the pedestrian's attention prior to crossing based on TTC for different age groups.

C. Crossing action post attention occurrence

Although the pedestrian's head orientation and attentive behavior are strong indicators of crossing intention, they are not always followed by a crossing event. In addition to TTC, which reflects both the approaching driver's speed and their distance to the contact point, the structure of the street and the driver's reaction can impact the pedestrian's level of confidence to cross.

To investigate this we divide the crosswalks into three categories: non-designated, without a zebra crossing or traffic signal; zebra crossing, with a zebra crossing and/or a pedestrian crossing sign; and traffic signal, with a signal such as a traffic light or stop sign which forces the driver to stop.

Fig. 7: Pedestrians' crossing behavior at crosswalks with different properties: (a) crossing and crosswalk property, (b) non-designated, (c) zebra crossing, (d) traffic signal.

Fig. 7a shows that pedestrians are less likely to cross the street after communicating their intention if the crosswalk is not designated, and more likely to cross if some form of signal or dedicated pathway is present.

To understand under what circumstances the crossing takes place at different types of crosswalks, we look at the driver's reaction to the pedestrian's intention of crossing. The driver's behavior can be grouped into speeds (when the driver either maintains the current speed or speeds up), slows down and stops.

Figs. 7b and 7c show that when there is no traffic signal present, in the majority of the cases pedestrians cross if the driver acknowledges their intention of crossing by slowing down or stopping. In a few scenarios the pedestrian still crosses the street even though the vehicle accelerates; in these cases either TTC is very high (25.7 s on average) or the car is in traffic congestion and the pedestrian anticipates that it will shortly stop. Moreover, crossing also might not take place when the driver slows down or stops (even in the presence of a traffic signal) (see Figs. 7b and 7d). In these cases the pedestrian either hesitates to cross or explicitly (often by some form of hand gesture) yields to the driver.

VI. CONCLUSIONS

Pedestrians often engage in various forms of non-verbal communication with other road users. These include gazing,
hand gestures, nodding or changing their behavior. At the point of crossing, in more than 90% of the cases pedestrians use some form of attention to communicate their intention of crossing. The most prominent form of attention (or primary communication) is looking in the direction of the approaching vehicles. The duration of looking may also vary depending on the age of the pedestrian or the time to collision. Other forms of explicit communication such as nodding or hand gestures were observed in 15% of the cases as a response to the driver's action and were often used to show gratitude or acknowledgement, or to yield to the driver.

The crossing event does not always follow the first communication of intention. Crossing depends on additional factors such as the structure of the street (e.g. designated/non-designated, the width of the street), the driver's reaction to the communication or the time to collision (how soon the driver arrives at the crosswalk).

Future work will include analysis of pedestrians' gait patterns with and without attention during the crossing. In addition, to better assess the nature of communication, it would be beneficial to record the driver's data such as the driver's gestures, eye movements and any reaction that involves changing the state of the vehicle.

REFERENCES

[1] F. Kroger, "Automated driving in its social, historical and cultural contexts," in Autonomous Driving: Technical, Legal and Social Aspects, pp. 41–68, 2016.
[2] M. Mann, "The car that drives itself," Popular Science, vol. 175, no. 5, p. 76, 1958.
[3] E. D. Dickmanns, B. Mysliwetz, and T. Christians, "An integrated spatio-temporal approach to automatic visual guidance of autonomous vehicles," IEEE Transactions on Systems, Man, and Cybernetics, vol. 20, no. 6, pp. 1273–1284, December 1990.
[4] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, K. Lau, C. Oakley, M. Palatucci, V. Pratt, and P. Stang, "Stanley: The robot that won the DARPA Grand Challenge," Journal of Field Robotics, vol. 23, no. 9, pp. 661–692, 2006.
[5] B. Friedrich, "The effect of autonomous vehicles on traffic," in Autonomous Driving: Technical, Legal and Social Aspects, pp. 317–334, 2016.
[6] T. M. Gasser, "Fundamental and special legal questions for autonomous vehicles," in Autonomous Driving: Technical, Legal and Social Aspects, pp. 523–551, 2016.
[7] W. Knight, "Can this man make AI more human?" MIT Technology Review, Dec. 2015. [Online]. Available: https://www.technologyreview.com/s/544606/can-this-man-make-aimore-human/
[8] I. Wolf, "The interaction between humans and autonomous agents," in Autonomous Driving: Technical, Legal and Social Aspects, pp. 103–124, 2016.
[9] D. Sun, S. Ukkusuri, R. F. Benekohal, and S. T. Waller, "Modeling of motorist-pedestrian interaction at uncontrolled mid-block crosswalks," Urbana, vol. 51, 2002.
[10] K. Salamati, B. Schroeder, D. Geruschat, and N. Rouphail, "Event-based modeling of driver yielding behavior to pedestrians at two-lane roundabout approaches," Transportation Research Record: Journal of the Transportation Research Board, no. 2389, 2013.
[11] N. Guéguen, S. Meineri, and C. Eyssartier, "A pedestrian's stare and drivers' stopping behavior: A field experiment at the pedestrian crossing," Safety Science, vol. 75, pp. 87–89, 2015.
[12] Y.-C. Lee, J. D. Lee, and L. N. Boyle, "The interaction of cognitive load and attention-directing cues in driving," Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 51, no. 3, pp. 272–280, 2009.
[13] Y. Fukagawa and K. Yamada, "Estimating driver awareness of pedestrians from driving behavior based on a probabilistic model," in Intelligent Vehicles Symposium (IV). IEEE, 2013, pp. 1155–1160.
[14] A. Tom and M.-A. Granié, "Gender differences in pedestrian rule compliance and visual search at signalized and unsignalized crossroads," Accident Analysis and Prevention, vol. 43, no. 5, pp. 1794–1801, 2011.
[15] R. Sun, X. Zhuang, C. Wu, G. Zhao, and K. Zhang, "The estimation of vehicle speed and stopping distance by pedestrians crossing streets in a naturalistic traffic environment," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 30, pp. 97–106, 2015.
[16] N. Lubbe and J. Davidsson, "Drivers' comfort boundaries in pedestrian crossings: A study in driver braking characteristics as a function of pedestrian walking speed," Safety Science, vol. 75, pp. 100–106, 2015.
[17] S. Schmidt and B. Farber, "Pedestrians at the kerb – recognising the action intentions of humans," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 12, no. 4, pp. 300–310, 2009.
[18] T. Wang, J. Wu, P. Zheng, and M. McDonald, "Study of pedestrians' gap acceptance behavior when they jaywalk outside crossing facilities," in 13th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2010.
[19] Z. Ren, X. Jiang, and W. Wang, "Analysis of the influence of pedestrians' eye contact on drivers' comfort boundary during the crossing conflict," Procedia Engineering, vol. 137, pp. 399–406, 2016.
[20] R. R. Oudejans, C. F. Michaels, B. van Dort, and E. J. P. Frissen, "To cross or not to cross: The effect of locomotion on street-crossing behavior," Ecological Psychology, vol. 8, no. 3, pp. 259–267, 1996.
[21] E. Du, K. Yang, F. Jiang, P. Jiang, R. Tian, M. Luzetski, Y. Chen, R. Sherony, and H. Takahashi, "Pedestrian behavior analysis using 110-car naturalistic driving data in USA," in 23rd International Technical Conference on the Enhanced Safety of Vehicles (ESV), 2013.
[22] T. Bandyopadhyay, C. Z. Jie, D. Hsu, M. H. A. Jr., D. Rus, and E. Frazzoli, "Intention-aware pedestrian avoidance," in The 13th International Symposium on Experimental Robotics, 2013.
[23] S. Pellegrini, A. Ess, K. Schindler, and L. V. Gool, "You'll never walk alone: Modeling social behavior for multi-target tracking," in 12th International Conference on Computer Vision, 2009, pp. 261–268.
[24] W. Choi and S. Savarese, "Understanding collective activities of people from videos," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 6, pp. 1242–1257, 2014.
[25] J. Kooij, N. Schneider, F. Flohr, and D. M. Gavrila, "Context-based pedestrian path prediction," in European Conference on Computer Vision (ECCV), 2014, pp. 618–633.
[26] A. T. Schulz and R. Stiefelhagen, "Pedestrian intention recognition using latent-dynamic conditional random fields," in Intelligent Vehicles Symposium (IV), 2015.
[27] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," International Journal of Robotics Research (IJRR), 2013.
[28] P. Dollar, C. Wojek, B. Schiele, and P. Perona, "Pedestrian detection: An evaluation of the state of the art," PAMI, vol. 34, 2012.
[29] M. Enzweiler and D. M. Gavrila, "Monocular pedestrian detection: Survey and experiments," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179–2195, 2009.
[30] SHRP2 naturalistic driving study. [Online]. Available: https://insight.shrp2nds.us/
[31] Virginia Tech Transportation Institute data warehouse. [Online]. Available: http://forums.vtti.vt.edu/index.php?/files/category/2-vtti-data-sets/
[32] Y. Barnard, F. Utesch, N. Nes, R. Eenink, and M. Baumann, "The study design of UDRIVE: the naturalistic driving study across Europe for cars, trucks and scooters," European Transport Research Review, vol. 8, no. 2, 2016.
[33] N. Uchida, M. Kawakoshi, T. Tagawa, and T. Mochida, "An investigation of factors contributing to major crash types in Japan based on naturalistic driving data," IATSS Research, vol. 34, no. 1, pp. 22–30, 2010.
[34] A. Williamson, R. Grzebieta, J. Eusebio, Y. Wu, J. Wall, J. L. Charlton, M. Lenne, J. Haley, B. Barnes, and A. Rakotonirainy, "The Australian naturalistic driving study: From beginnings to launch," in Proceedings of the 2015 Australasian Road Safety Conference, 2015.
[35] O. Friard and M. Gamba, "BORIS: a free, versatile open-source event-logging software for video/audio coding and live observations," Methods in Ecology and Evolution, vol. 7, no. 11, 2016.
[36] I. Kotseruba, A. Rasouli, and J. K. Tsotsos, "Joint attention in autonomous driving (JAAD)," arXiv preprint arXiv:1609.04741, 2016.
