
MetaBot: Automated and Dynamically Schedulable Robotic Behaviors in Retail Environments

Jon Francis, Utsav Drolia, Kunal Mankodiya, Rolando Martins, Rajeev Gandhi, and Priya Narasimhan
Department of Electrical & Computer Engineering, Carnegie Mellon University
Intel Science & Technology Center for Embedded Computing, Pittsburgh, PA, USA
{ jmfrancis, utsav, kunalm, rolandomartins }@cmu.edu, [email protected], [email protected]

Abstract—The ever-increasing popularity of online stores is reshaping traditional commerce models. In particular, brick-and-mortar stores are presently facing the challenge of reinventing themselves and their business models to offer attractive yet low-cost alternatives to e-commerce. Although other industries have already introduced new concepts to fight inefficiency (e.g., "Just-in-Time" inventory management in the automotive sector), retail stores face a more challenging environment that these models cannot accommodate. Stores remain heavily vested in battling the overhead costs of personnel management when, instead, a robotic automation scheme with retail-oriented behaviors could reduce the detection latency of out-of-stock and compliance-error phenomena throughout the store. These behaviors must be automated, multi-purpose, and schedulable; they must also ensure that the robot accounts for store nuances, adapting its functionality appropriately. In this paper, we present an architecture that defines retail robot behaviors as a collection of reusable activities which, when permuted in various ways, allow high-level, application-specific tasks to be accomplished effectively. We evaluate this system on our robotic platform by scrutinizing the integrity of navigation and machine-vision tasks, which we perform concurrently in an experimental store setup. Results show the feasibility and efficiency of our proposed architecture.

I. INTRODUCTION

In the last decade, several major innovations in "service robotics" have automated processes for rescue operations, surveillance/patrolling applications, underwater monitoring, and so forth. According to the International Federation of Robotics (IFR), "service robots" are those that operate semi- or fully-autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations [1]. The IFR also reports that about 16,400 professional service robots were sold in 2011, a 9% sales increase over the previous year. Despite the large coverage of service-oriented robotics in various sectors, few attempts have been made to solve inherent, retail-industry-specific problems.

Due to high competition with online vendors, physical retailers may only survive by decreasing product costs (and, thus, profit margins); they depend heavily on the efficiency of in-store operations such as stock-keeping and personnel management [2]. One of the main challenges retailers face is allocating appropriate space for the thousands of merchandise items they sell, and making sure popular items are sufficiently stocked at all times. Shelf-space utilization is paramount, as distributors often pay hundreds of dollars per square foot—based on the category of the merchandise, its location in the store, and shelf-level visibility [2]. In fact, many consumer studies reveal that out-of-stock (OOS) events on shelf-space contribute a large annual loss to stores [3][4]. Unfortunately, the manual approach to stock maintenance is costly (in man-hours spent), and still does not even approach the efficiency levels that many stores need to stay solvent. Thus, the demand for automated solutions is increasing: they would provide higher efficiency at a lower cost of operation. In this paper, we propose one such automated retail solution that embeds the functionality of aisle stock management in a mobile service robot, equipped with on-board sensors, computing, and communication.

MetaBot, our retail-centric service robot, is designed to autonomously navigate retail environments and execute micro-merchandizing tasks that are commanded by retail staff or scheduled at predefined time-intervals. We do this by modulating behavior activities based on each activity's functionality "class." In the following section, we present state-of-the-art navigation techniques in mobile robotics; the remaining sections describe our system design, in-lab experiments, and results.

II. BACKGROUND

Mobile service robots are mission-oriented machines, built to perform actions associated with desired goals. Accordingly, a mobile robot requires knowledge of how to coordinate several actions relative to environmental and timing constraints—for example, navigating slowly and precisely through retail aisle corridors, but swiftly elsewhere. Earlier autonomous robots [5], based on classical artificial intelligence, were slow and required enormous procedural computation due to extensive pre-planning and hardcoded task implementations. This approach was superseded by the behavior-based approach, wherein robot control is modularized into reusable subroutines, called behaviors [6]. Each behavior is driven by sensory data and provides event reactiveness for accomplishing the task, piece-wise. Some examples of such activities/subgoals are "obstacle avoidance", "wall following", or "approach landmark". A couple of behavior-modulation protocols already exist—coordination (running behaviors of different priorities, individually) and fusion (combining the outputs of different behaviors)—but these only deal with modulation of behaviors within one actuation paradigm, for example, those with strictly navigation-oriented outputs [7][8].
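To make the distinction concrete, the two protocols can be contrasted in a brief sketch. The one-dimensional velocity outputs, function names, and numbers below are illustrative only, not part of our implementation:

```python
# Sketch: two classic behavior-modulation protocols, using hypothetical
# 1-D velocity outputs. "Coordination" runs only the highest-priority
# active behavior; "fusion" blends the outputs of all active behaviors.

def coordinate(behaviors):
    """behaviors: list of (priority, active, velocity); highest priority wins."""
    active = [b for b in behaviors if b[1]]
    if not active:
        return 0.0
    return max(active, key=lambda b: b[0])[2]

def fuse(behaviors, weights):
    """Normalized weighted sum of every active behavior's velocity output."""
    total = sum(w * v for (_, act, v), w in zip(behaviors, weights) if act)
    norm = sum(w for (_, act, _), w in zip(behaviors, weights) if act)
    return total / norm if norm else 0.0

# Example: obstacle avoidance (priority 2) vs. wall following (priority 1)
bs = [(2, True, -0.1), (1, True, 0.4)]
print(coordinate(bs))        # highest-priority active behavior wins outright
print(fuse(bs, [0.5, 0.5]))  # blended command
```

Neither scheme, however, can mix outputs from different actuation paradigms (e.g., a velocity command and a camera trigger), which motivates the hybridized protocol discussed next.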

978-1-4673-2939-2/13/$31.00 ©2013 IEEE


Because a mission-focused retail robotic system must coordinate subgoals in navigation, machine vision, and even limb-actuation paradigms simultaneously, a hybridized behavior-modulation protocol is required. For example, the machine may need to capture a photo of a retail shelf, depth-scan a shelf for an OOS phenomenon, and so forth—all while avoiding an obstacle or path-planning through a particular section of the store. This necessitates either a split horizon (where a "universal behavior" may lie dormant while a "local behavior" is engaged) or a multi-objective goalset (where a universal behavior consists of many subgoals that the robot needs to accomplish, possibly in tandem). We consider the latter case in this paper, with specific emphasis on retail-centric solutions.

III. PROBLEM DESCRIPTION

We find that the aforementioned retail problems translate into a set of schedulable robotic use-cases that retail staff would find tangibly attractive [9]. We assemble these use-cases, or "domain-wants," into a series of tasks that the robot must satisfy, dynamically, without user input (or supervision). In this paper, we focus on the following domain-wants:

• D.W.1. Automated Out-of-Stock (OOS) Detection. The robot must identify OOS phenomena within an aisle, several aisles, or across all points of interest in the entire store.

• D.W.2. Automated Spacings Integrity-Check. The robot must identify areas in the store where products are misaligned; additional products could be stocked in this "wasted" shelf-space.

• D.W.3. Automated Facings Integrity-Check. The robot must retrieve high-resolution photographs from points of interest within the store, for retail distributors' inventory-compliance analysis.

For convenience, mission-scheduling should be seamless even for non-technical users—requiring little more than a button press to initiate comprehensive (and complex) robotic behaviors. We additionally assume that the robot leverages a distributed software computation-graph paradigm, where nodes communicate via publish/subscribe message-passing protocols.

IV. FORMULATION

First, we define a behavior as a high-level goal directive (e.g., "scan all aisles"):

    B_i = {D_i, P_i, C_i}                                        (1)

where D_i denotes the specific behavior directive, P_i denotes a set of runtime parameters, and C_i denotes a behavior callback (exit function). Via data-tree associations, the behavior directive must be translated into a set of subgoals. Thus, we have:

    B_i = {{S_i1, ..., S_iJ_i}, P_i, C_i}                        (2)

where S_ij is a subgoal of B_i, j = 1, ..., J_i, and J_i is the number of subgoals defined or algorithmically inferred for B_i. It is important to note that the behaviors discussed in this paper have a wide range of activity outputs: subgoal outputs are not strictly navigation-oriented velocity commands, but may be generalized to include vision and learning tasks as well. Therefore, activity-summation paradigms for behavior modulation [7] are not entirely appropriate for our implementation. Instead, each S_ij must maintain its own sensor-state specifications (e.g., sonar:engaged:yes, cameraarray:polled:yes) and system actuation-type descriptions (e.g., translate:yes, move_head:no).

Thus, our behavior-modulation structure must apply focus to different subgoals by choosing which to switch between and which to run in parallel. Before any behavior B_i may be considered "satisfied," all mission subgoal translations of D_i must be completed and B_i must exit with callback C_i—all according to the set of parameters in P_i.

For aisle-scanning behaviors, specifically, we define k aisle locations in map as a multidimensional array of robot pose specifications; these are best-estimates of aisle-beginnings (based on indoor structure and specifications from retail operational staff), spaced appropriately from the front-plane of the aisle shelf (for machine-vision integrity), such that:

    {A_0, A_1, A_2, ..., A_{k-1}} =
        {{A_x0, A_y0, A_z0, A_worient0},
         ...,
         {A_x(k-1), A_y(k-1), A_z(k-1), A_worient(k-1)}}         (3)

Next, we define a set of m home locations within map, as a multidimensional array of robot pose specifications. We have:

    {H_0, H_1, H_2, ..., H_{m-1}} =
        {{H_x0, H_y0, H_z0, H_worient0},
         ...,
         {H_x(m-1), H_y(m-1), H_z(m-1), H_worient(m-1)}}         (4)

In this paper, we present a system architecture for autonomously satisfying the domain-wants described in Section III, by iterating through S to service all aisles A, having started at some home location H_i. Each behavior implementation must map backward to its corresponding domain-want (numbered respectively). We implement these in Section V-B3.
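As a concrete rendering of Eqs. (1)–(2), the behavior tuple and its data-tree translation into subgoals can be sketched as plain data structures. The directive string, the SUBGOAL_TREE lookup, and the pose values are illustrative stand-ins, not the actual roscpp types:

```python
# Sketch of the formulation: a behavior B_i = {D_i, P_i, C_i}, whose
# directive D_i is translated (via a lookup standing in for the paper's
# data-tree association) into an ordered subgoal set {S_i1, ..., S_iJi}.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Behavior:
    directive: str                               # D_i, e.g. "scan all aisles"
    params: dict = field(default_factory=dict)   # P_i, runtime parameters
    callback: Optional[Callable] = None          # C_i, exit function

# Hypothetical data-tree association: directive -> ordered subgoal set
SUBGOAL_TREE = {
    "scan all aisles": ["AutoNav", "OOS-Detection", "Wall-following"],
}

def subgoals(b: Behavior) -> list:
    """Return the subgoal set S_i for behavior b (Eq. 2)."""
    return SUBGOAL_TREE[b.directive]

# An aisle pose specification per Eq. (3): made-up coordinates
A0 = {"x": 2.0, "y": 1.5, "z": 0.0, "w_orient": 1.0}

b = Behavior("scan all aisles", params={"aisles": 3})
print(subgoals(b))  # → ['AutoNav', 'OOS-Detection', 'Wall-following']
```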
V. SYSTEM ARCHITECTURE

Here, we present our robotic platform implementation, for scheduled behavior-automation of phenomena-detection in retail environments. Our design considerations included a distributed architecture in both hardware and software, because both aspects must collaborate seamlessly when necessary but also remain mutually independent in hardware dependencies. As a result, our system enjoys development modularity and optimal code-reuse, and conforms to context-dependent behavior functionality. For these reasons, we selected the Fuerte version of the Robot Operating System (ROS) [10] as our software development environment, and a multi-tier, heterogeneous computing paradigm in hardware. We describe these subsystems below.

A. Hardware Subsystem

We operate under a heterogeneous, distributed-computing hardware paradigm. Multiple motherboards were mounted inside a shelf-like frame of acrylic and aluminum, which was mounted atop a MobileRobots P3-DX rover (Figure 1a). In the interest of optimizing power-resource efficiency, we distribute our computational load across the system proportionally, as a function of the performance capabilities of each computing unit. We are interested in assessing the efficiency of behavior/subgoal mode-switching while the majority of computation rests within robot-centric hardware (rather than external LAN-connected PCs).^1 Robot hardware interconnects are shown in Figure 1b.

Fig. 1. Hardware architecture: (a) annotated prototype mobile robot; (b) sensor-computer interconnect descriptions.

B. Software Subsystem

Equipped with the roscpp compilation framework, we developed a wealth of proprietary algorithms; these are leveraged to form subgoals. As discussed in Section IV, these subgoals are then permuted to form desired behaviors.^2

1) Algorithms: Each ROS node in our computation graph features at least one algorithm that performs a specified task (e.g., polling a sensor for real-time data, providing calculations of reference-frame transformations between sensor FOVs, etc.):

• Base Controller Node (A): Interfaces directly with the MobileRobots Pioneer 3-DX (P3-DX) ground-translation robotic base platform; subscribes to velocity and state directives and actuates the base accordingly; also publishes base-state information (pose, battery voltage levels, sensor state, and raw sensor data) to the network graph; based on the ROSARIA ROS package [11]

• Camera Sensing Node (B): Requires on-board webcam device(s) (attached to the host machine); spawns a series of camera-sensor nodelets, manages image-capture RPC requests, and serves those images when requested to do so

• Depth Frame Sensing Node (C): Requires an on-board depth-sensing device (attached to the host machine); spawns a depth-camera sensor nodelet, manages depth-frame capture RPC requests, and serves those frames when requested to do so

• Pointcloud Transformation Node (D): Subscribes to depth-frame scan messages, processes each frame (down-projection, error-correction, recursive frame-stitching), and publishes a laserscan frame

• Runtime Mapping Node (E): Subscribes to transformed pointcloud messages, performs minimal reconstruction, and spatially stitches frames to create a 2-dimensional navigation map for autonomous SLAM; saves the output static map; based on the gmapping ROS package [12]

• Localization Node (F): Cross-analyzes incoming transformed pointcloud data against static map to obtain a best pose estimate; outputs the pose estimate as the ground-truth pose; based on the amcl ROS package [13]

• Move Base Node (G): Subscribes to sensor transforms, odometry, laserscans, static-map data, and navigational goal commands; modulates local and global costmaps to generate real-time navigation path-plans through mapped spatial environments; outputs differential-drive velocity commands

• Data Aggregation Node (H): Collects raw sensor data, prepares the corpus, and publishes it as messages

• TeleOperation Node (I): Provides a manual robot-control interface for velocity, pose, manual behavior triggers, and sensor states; takes manual keyboard inputs and transforms them into the respective system event signals

2) Subgoals: These are "meta-algorithms" that assemble multiple algorithms and add interstitial logic to perform useful tasks (e.g., "wall-following"). We need this intermediary level of abstraction because algorithmic nodes, alone, are not sufficient to modularly engage in behaviors that feature multiple actuation realms (concurrent event-based navigation, learning, etc.). The subgoals we have devised for engaging in aisle-monitoring behaviors are as follows:

• Wall-following Subgoal (A): The robot uses sonar PID control (lowpass-filtered, running velocity configuration, Ziegler-Nichols closed-loop tuning) to maintain a dynamically-configurable distance from the front-plane of a hard surface (i.e., a store shelf). We utilize algorithms {A, H, I}.

• Stop & Go Subgoal (B): During forward translation, the robot engages in dynamically-configurable periodic stops, to facilitate shelf image-sensing. We utilize algorithms {A, H}.

• Mapping Subgoal (C): Process for creating map, a 2D occupancy-grid map used for autonomous programmatic navigation. An engineer teleoperates the robot (to ensure proper area-coverage) as it creates map dynamically. We use algorithms {A, C, D, E, H, I}.

• AutoNav Subgoal (D): Given a map of the store, the robot navigates autonomously to any known cleared space therein, via a programmatic command. We utilize algorithms {A, C, D, F, G, H}.

^1 From our extensive robot-computation workload-characterization studies, we identified workload polarization—wherein powerful systems (i.e., remote power-PC, cloud) were assigned almost all computation tasks, according to typical "robot power-conservation" schema, IEEE 802.11 network latency notwithstanding.

^2 The list of algorithms/behaviors we present is not intended to be exhaustive for the tasks we perform; we merely provide a baseline implementation for the set of retail-oriented scanning behaviors we achieve.
• OOS Detection Subgoal (E): This subgoal begins by capturing a depth-frame from the robot's FOV, whose points are then clustered into three distance-groups via K-means, where the seed points are the minimum distance (foreground), the maximum distance (background), and the average of the two (points in between). The foreground cluster is then extracted for connected-component analysis: the foreground is partitioned such that each segment represents a shelf, and each segment is processed independently. This foreground shelf-segment represents the state of the shelf's frontal plane. We hypothesize that this plane should comprise (i) in-stock items and (ii) the spine of the shelf; if a product is out-of-stock, it should create a "void" in this plane. Hence, we scan through each shelf-segment to detect such voids. Voids could also be created by the space between products, which does not necessarily mean a product is out-of-stock (see the next bulletpoint). Only if the detected void size crosses a set threshold (found empirically) do we flag it as "out-of-stock." We utilize algorithms {C, H}. See Figure 3ab.

• Spacings Check Subgoal (F): Similar to subgoal {E}, the depth sensor processes frames to extract the frontal plane of each shelf. Now, however, all possible "void" areas are flagged. The width of each such void is aggregated per shelf; this sum returns the approximate amount of unused shelf-space. This information is valuable to both retailers and vendors, as it optimizes product-placement. We utilize algorithms {C, H}. See Figure 3cd.

• Facings Check Subgoal (G): Enables retailers and vendors to schedule the robot to retrieve a snapshot of shelf-state at a specified location. We utilize algorithms {A, B, H}.

• Arbiter Subgoal* (H): This subgoal implements our behavior-modulation function. It is not the Arbiter's job to operate on any existing algorithms, except to monitor other subgoals and make decisions based on the events that their output data and the given behavior directive imply.

Require: map
Require: parameters
Require: home-positions
Require: aisle-positions
Ensure: self-localized in map
 1: procedure DoBehavior(D, P, C)
 2:   P ← {}                        ▷ Load global params
 3:   H ← {}                        ▷ Load HomePosition param database
 4:   A ← {}                        ▷ Load AislePosition param database
 5:   S ← TREE_ASSOC(D_i)           ▷ Find subgoal set
 6:   S′ ← PERMUTE_SUBTASKS(S)      ▷ Order subgoal set
 7:   for i ← 0, A do
 8:     NAVIGATION(S′_globnav, A_i)
 9:     P′ ← LOAD_PARAMS(S′_cv)
10:     while S_i ≠ EXIT_FLAG do
11:       VISION(S′_cv, P′)
12:       NAVIGATION(S′_subnav, A_i)
13:       if ARBITER_PREEMPT then
14:         BREAK(FLAG)
15:       end if
16:     end while
17:     STATE_TRANSITION(S′)        ▷ Called by Arbiter
18:   end for
19:   if EXISTS(C) then DoBehavior(C, P, NULL)
20:   end if
21: end procedure

Fig. 2. Generalized Behavior Construct.
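The depth-based void detection of subgoal {E} can be illustrated with a toy, one-scanline sketch: K-means with three fixed seeds (minimum, mean, and maximum depth) separates the foreground from the rest, and runs of non-foreground samples wider than an empirical threshold are flagged as voids. The simplified 1-D treatment and all numbers are illustrative only, not the implementation:

```python
# Toy sketch of subgoal {E}: cluster depth samples into three groups
# (seeds: min = foreground, max = background, mean = in between), then
# flag runs of non-foreground samples wider than an empirical threshold.

def kmeans_1d(samples, iters=10):
    """1-D K-means with the three seed points described in the text."""
    centers = [min(samples), (min(samples) + max(samples)) / 2.0, max(samples)]
    for _ in range(iters):
        groups = [[], [], []]
        for s in samples:
            idx = min(range(3), key=lambda i: abs(s - centers[i]))
            groups[idx].append(s)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def find_voids(samples, min_width):
    """Return (start, width) runs where depth falls outside the foreground."""
    centers = kmeans_1d(samples)
    fg = [min(range(3), key=lambda i: abs(s - centers[i])) == 0 for s in samples]
    voids, run = [], 0
    for i, is_fg in enumerate(fg + [True]):   # sentinel closes a trailing run
        if not is_fg:
            run += 1
        elif run:
            if run >= min_width:
                voids.append((i - run, run))
            run = 0
    return voids

# One scanline: products at ~1.0 m, with a wide gap exposing the 2.0 m shelf back
row = [1.0] * 10 + [2.0] * 6 + [1.0] * 10
print(find_voids(row, min_width=4))  # → [(10, 6)]
```

A gap narrower than min_width (e.g., ordinary spacing between products) is ignored, mirroring the empirical threshold described above.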
3) Autonomous Behaviors: These are fully-defined, end-to-end missions that are scheduled programmatically. Each behavior comprises a set of activities/subgoals that are modulated by the Arbiter and are functionally loop-closed (i.e., the robot returns to a dormant state, at a deterministic location, after completion). We define a generalized mission construct (Figure 2), and reference each directive (line 6 of the construct) with its respective domain-want mapping from Section III:

• Behavior—B.1.: OOS Detection (→ D.W.1.)
    {S′_globnav, S′_cv, S′_subnav} = subgoals {D, E, A}
• Behavior—B.2.: Spacings Check (→ D.W.2.)
    {S′_globnav, S′_cv, S′_subnav} = subgoals {D, F, A}
• Behavior—B.3.: Facings Check (→ D.W.3.)
    {S′_globnav, S′_cv, S′_subnav} = subgoals {D, G, A}

Fig. 3. Input/output images from real-time, depth-based vision subgoals {E, F}: (a, top-left) OOS input (subgoal {E}); (b, top-right) OOS output; (c, lower-left) Spacings input (subgoal {F}); (d, lower-right) Spacings output. The white/grey boxes in the outputs are generated by the subgoal routine itself, indicating what it considers "problem locations" in its FOV.

C. System Calibration & Initialization

The following steps were required before experimentation:

1. Construct a static 2D occupancy-grid map of the store environment; we use subgoal {C}. We ensure map is loop-closed and built at sufficiently slow velocities, with additional considerations for error-reduction, as in [14].
2. Define k = 3 aisle locations, A, in map (Figure 4a).
TABLE I. DATA FROM NAVIGATION/VISION TRIALS

Columns: (1) starting position ("Home" coordinate); (2) average time to reach A1 (min); (3) average scan time (min); (4) average time to return (min); (5)-(7) average starting-position error in A1/A2/A3 (in); (8) average end-position error (in); (9)-(11) average distance from A1/A2/A3 (sonar); (12) number of vision events that exist; (13) number of correct vision-scan hits; (14) number of wrong vision-scan hits. Columns (12)-(14) each list OOS, Spacings, and Facings counts.

Home #1 | 0:01:48 | 0:03:56 | 0:02:50 | 4.33" | 0.66" | 2.66" | 7"    | 1143    | 1162    | 1185    | OOS:6, Spac.:11, Fac.:1 | OOS:6, Spac.:11, Fac.:1 | OOS:2, Spac.:10, Fac.:N/A
Home #2 | 0:01:48 | 0:03:13 | 0:01:25 | 3"    | 4.66" | 3.33" | 2.33" | 1130.33 | 1175.33 | 1142    | OOS:6, Spac.:11, Fac.:1 | OOS:6, Spac.:11, Fac.:1 | OOS:2, Spac.:10, Fac.:N/A
Home #3 | 0:01:55 | 0:03:53 | 0:02:04 | 4.33" | 2.66" | 1.33" | 3.33" | 1130.33 | 1103.66 | 1175.66 | OOS:6, Spac.:11, Fac.:1 | OOS:6, Spac.:11, Fac.:1 | OOS:2, Spac.:10, Fac.:N/A
Home #4 | 0:00:26 | 0:01:07 | 0:01:45 | 2.33" | 3.33" | 2"    | 4"    | 1134.22 | 1175    | 1159    | OOS:6, Spac.:11, Fac.:1 | OOS:6, Spac.:11, Fac.:1 | OOS:2, Spac.:10, Fac.:N/A

3. Define m = 4 home locations, H, in map (Figure 4b).

4. Save the sets A and H in a YAML config file, which serves as secondary input to the Arbiter (the primary input is the behavior directive); these arrays are loaded into ROS's param server.

5. Physically initialize the robot at some home location H_0 and trigger a programmatic pose estimate.
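The YAML file of step 4 might look as follows. The key names and coordinates are illustrative; only the structure (k = 3 aisle poses and m = 4 home poses, per Eqs. (3)-(4)) mirrors the text:

```yaml
# Illustrative aisle/home pose arrays for the ROS param server.
# Each entry is {x, y, z, w_orient}; all values here are made up.
aisle_positions:
  - {x: 2.0, y: 1.5, z: 0.0, w_orient: 1.0}   # A0
  - {x: 2.0, y: 4.5, z: 0.0, w_orient: 1.0}   # A1
  - {x: 2.0, y: 7.5, z: 0.0, w_orient: 1.0}   # A2
home_positions:
  - {x: 0.5, y: 0.5, z: 0.0, w_orient: 0.0}   # H0
  - {x: 0.5, y: 9.5, z: 0.0, w_orient: 0.0}   # H1
  - {x: 9.5, y: 0.5, z: 0.0, w_orient: 0.0}   # H2
  - {x: 9.5, y: 9.5, z: 0.0, w_orient: 0.0}   # H3
```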
VI. EXPERIMENTAL DESIGN

The experimental procedure is predicated on evaluating the following hypotheses:

• Hypothesis—Hyp.1.: Predefined activities may be permuted into efficient behaviors for machine-vision analysis of products in a retail store.

• Hypothesis—Hyp.2.: Mode-switching between tasks occurs smoothly.

• Hypothesis—Hyp.3.: Mode-switches decouple activities from inherent errors in map-wide robot localization.

A. Experimental Setup

We wish to evaluate the average time for reaching various checkpoints along a multiple-aisle scanning task, from varied starting locations for the robot in a global reference frame (i.e., the predefined static map). This ensures robustness and allows for ease of precision measurement.

We then conducted each experimental run by scheduling a vague "do all scans in all aisles" directive; the machine schema for that is as follows: the robot initializes at a recorded home location, H_i, at some coordinate (x_i, y_i) in map, with some angle of orientation (w_i) → navigational target set as Aisle 1 → on reaching Aisle 1, switch to the appropriate subgoal for scanning → scan Aisle 1 for out-of-stock/spacings phenomena → capture a photo of the shelf if necessary → at the end of Aisle 1, switch to higher-level path planning and set Aisle 2 as the goal → repeat the necessary scans for Aisle 2 and then Aisle 3 → at the end of Aisle 3, set the goal as the home location → end the run at the home location. Navigate to H_{i+1} and repeat all steps.

B. Experimental Results

This experiment was conducted from four distinct home locations, chosen specifically to test behavior robustness and the precision of scan-times, with three trials per home location. We recorded: (i) mission checkpoint timings (minutes); (ii) position deviations from aisle and home "checkpoints" (inches); (iii) sonar sensor readings (analog integer values ranging from 0-5000; PID setpoint at 1150); and (iv) accuracy measures of the vision tasks performed; see Table I and Figure 5.
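The run schema above can be condensed into a small driver loop. This is a hypothetical sketch in which navigate() and scan_aisle() are stubs standing in for the navigation and vision subgoals:

```python
# Sketch of one experimental run: start at a home pose, visit each aisle,
# scan it, and end the run back at the home location.

def run_mission(home, aisles, navigate, scan_aisle):
    """Execute the 'do all scans in all aisles' schema; returns the event log."""
    log = []
    for aisle in aisles:
        navigate(aisle)              # higher-level path planning to aisle start
        log.append(("scanned", aisle, scan_aisle(aisle)))  # OOS/spacings/facings
    navigate(home)                   # end run at the home location
    log.append(("home", home))
    return log

visited = []
log = run_mission(
    home="H1",
    aisles=["A1", "A2", "A3"],
    navigate=visited.append,                      # stub: record navigation goals
    scan_aisle=lambda a: {"oos": 0, "voids": 0},  # stub: empty scan result
)
print(visited)  # → ['A1', 'A2', 'A3', 'H1']
```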

C. Observation & Discussion

Fig. 4. 2D occupancy-grid map of the experimental setup (Intel Labs, Carnegie Mellon University), annotated with: (a) aisle shelves (boxes) and corresponding aisle poses (arrows); (b) aisle shelves and home-location poses; (c) an example A1 scan mission (with no exit function). The blue arrow denotes the H1 pose; the red line is a historical robot pose trail, composed of the robot's pose vectors as it traveled to A1 autonomously. The large cluster of red at the A1 position is the robot's instantaneous pose-estimate array.

We may now evaluate the correctness of the experimental hypotheses:

• Hypothesis 1—Verified. The constant average distance from the aisles affirms that the robot maintains the required distance from the shelf for depth analysis, namely out-of-stock detection. This establishes that the Wall-following (A) subgoal is reliable and performs as desired.

• Hypothesis 2—Verified. The average aisle-position-error columns (#5-7) show that mode-switches only occur when the Arbiter catches the goal-reached flag or observes sensor-data events that should precipitate a state transition; this implies that mode-switches happen correctly and at locations that have bounded margins of error (fixed encoder-drift error, plus the xy-tolerance parameter setting in the path-planner configuration), which are independently corrected (e.g., via PID). This verifies that the AutoNav (D) subgoal performs as desired, and that its error is within acceptable limits.

• Hypothesis 3—Verified. The "Average Distance from A*" columns (#9-11), the "Number Vision Events Exist" column (#12), and the "Number Correct Vision Scan Hits" column (#13) show that, even with different aisle-start position errors, the wall-following activity in the scanning behavior resolved any significant position errors. Moreover, we see little variation in scan timing across trials (see the A* blocks for trials across different home positions—Figure 5). This ensured proper operation of the OOS/Spacings-detection and image-retrieval activities, which establishes autonomy between activities across a mode-switch and, thus, verifies hypothesis 3. Table I also establishes that the vision activities perform reliably. However, we see in column #14 that there are errors in the OOS/Spacings scans: this is because that particular shelf does not have a backing, which the current algorithm requires. Backless shelves, however, will rarely be encountered in stores.

By verifying the hypotheses, we also verify that the subgoals utilized in the experiments perform reliably. Since all the behaviors are constructed by permuting the subgoals, this implies that the behaviors themselves complete reliably as well. The behaviors have been defined to match the "domain-wants" from Section III; hence, by completing the behaviors reliably, we satisfy the domain-wants.

Fig. 5. Gantt chart of mission trial timings.

VII. CONCLUSION

The retail industry is in dire need of application-specific automation, and we have identified avenues where managers and vendors would benefit significantly. To satisfy these demands, we have developed MetaBot: a system capable of increasing the efficiency of physical-store operation—by performing out-of-stock detection, spacing-error checks, and facings snapshot retrieval—using coordinated and dynamically schedulable behaviors. To achieve this, we modularly defined and permuted various meta-algorithms. To establish the validity of our approach, we tested the implementation and found that, indeed, mode-switching contains errors and prevents them from accumulating and propagating into future system states. We claim that our architecture for combining behaviors toward high-level automation goals in retail operations is the first of its kind. It will help store managers schedule repetitive tasks and get results in near real-time—hence increasing efficiency by, for example, reducing the amount of time an out-of-stock product goes unnoticed. We have had successful controlled experiments in the lab; the next step is to take the robot to an actual store to test its real-world performance.

REFERENCES

[1] IFR Statistical Department, hosted by the VDMA Robotics + Automation association, "World Robotics (2010)," 2010. [Online]. Available: http://www.worldrobotics.org
[2] Dreze, X., Hoch, S., and Purk, M., "Shelf management and space elasticity," Journal of Retailing, vol. 70, no. 4, pp. 301-326, 1994.
[3] Che, H., Chen, X., and Chen, Y., "Investigating Effects of Out-of-Stock on Consumer Stock Keeping Unit Choice," Journal of Marketing Research, vol. 49, no. 4, pp. 502-513, 2012.
[4] Gruen, T. W., and Corsten, D., "A Comprehensive Guide to Retail Out-of-Stock Reduction in the Fast-Moving Consumer Goods Industry," research study funded by Procter and Gamble, 2007.
[5] Moravec, H. P., "The Stanford Cart and the CMU Rover," Proceedings of the IEEE, vol. 71, no. 7, pp. 872-884, 1983.
[6] Arkin, R. C., Behavior-Based Robotics. MIT Press, 1998.
[7] Huq, R., Mann, G., and Gosine, R., "Behavior modulation technique in mobile robotics using fuzzy discrete event system," IEEE Transactions on Robotics, vol. 22, pp. 903-916, 2006.
[8] Yahmedi, A. S. A., and Fatmi, M. A., "Fuzzy Logic Based Navigation of Mobile Robots," in Recent Advances in Mobile Robotics. InTech, 2011.
[9] Mankodiya, K., Gandhi, R., and Narasimhan, P., "Challenges and Opportunities for Embedded Computing in Retail Environments," in Sensor Systems and Software, ser. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, F. Martins, L. Lopes, and H. Paulino, Eds. Springer Berlin Heidelberg, 2012, vol. 102, pp. 121-136.
[10] Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., and Ng, A. Y., "ROS: an open-source Robot Operating System," in ICRA Workshop on Open Source Software, vol. 3, no. 3.2, 2009.
[11] Autonomous Mobile Robotics Group (University of Zagreb), "amor-ros-pkg," https://code.google.com/p/amor-ros-pkg [Online; accessed 17-October-2012].
[12] Grisetti, G., Stachniss, C., and Burgard, W., "GMapping: Highly Efficient Rao-Blackwellized Particle Filter to Learn Grid Maps from Laser Range Data." [Online]. Available: http://openslam.org/gmapping.html
[13] Gerkey, B. P., "AMCL, Probabilistic Localization System," 2007. [Online]. Available: http://www.ros.org/wiki/amcl
[14] Bonnal, E. P., and Bona, B., "3D Mapping of Indoor Environments Using RGBD Kinect Camera for Robotic Mobile Applications," Ph.D. dissertation, Politecnico di Torino, Department of Control and Computer Engineering, 2011.