Evaluation of Flexible Graphical User Interface For Intuitive Human Robot Interactions
Abstract: A new approach to industrial robot user interfaces is necessary because small and medium sized enterprises are increasingly interested in automation. The growing number of robot applications in small volume production requires new techniques to ease the use of these sophisticated systems. This paper focuses on shop-floor operation. A Flexible Graphical User Interface is presented which is based on cognitive infocommunication (CogInfoCom) and implements the Service Oriented Robot Operation concept. The definition of CogInfoCom icons is extended by the introduction of identification, interaction and feedback roles. The user interface is evaluated with experiments. The results show that a significant reduction in task execution time and a lower number of required interactions are achieved thanks to the intuitiveness of the system with human-centred design.
1 Introduction
In the era of touch-screen-based smartphones, interaction between humans and machines has become a frequent event. In particular, communication between people and robots is an active field of research. Human Robot Interaction (HRI) studies aim at the user-friendly application of different robot systems.
Figure 1
Modern Teach Pendants.
From top-left corner: FANUC iPendant [11], Yaskawa Motoman NX100 teach pendant [12], ABB
FlexPendant [13], KUKA smartPad [14], ReisPAD [15], Nachi FD11 teach pendant
For industrial robotic manipulators the standard user interface is the Teach Pendant. It is a mobile, hand-held device, usually with a customized keyboard and a graphical display. Most robot manufacturers develop a distinctive design (see Figure 1) and include a great number of features, thereby increasing both complexity and flexibility.
Input methods vary from key-centric to purely touch-screen based. Feedback and system information is presented on a graphical display in all cases; however, the operator may utilize a great number of other information channels: touch, vision, hearing, smell and taste. The operator can also take advantage of the brain's ability to integrate the information acquired from his or her senses. Thus, although the operator's sight and attention are mainly oriented towards the robot's tool, she will immediately shift her attention to any robot link colliding with an obstacle as soon as it is caught by her peripheral vision.
The science of integrating informatics, infocommunication and cognitive sciences, called CogInfoCom, investigates the advantages of combining information modalities and has working examples in virtual environments, robotics and telemanipulation [10, 16]. The precise definition and description of CogInfoCom is presented in [17], terminology is provided in [18], while [19] walks the reader through the history and evolution of CogInfoCom.
Introducing this concept in the design of industrial robot user interfaces is a powerful tool to meet the goal of human-centred systems and more efficient human-robot interaction.
In current practice, the composition of the user interface for a robot cell is done by the system integrator. Naturally, the integrator cannot be fully aware of the needs of the operator, thus the flexibility of a robot cell also depends on the capabilities of the framework in which the graphical user interface is working. The communication barrier between the operator and the integrator influences the efficiency of production through inadequate and non-optimized information flow (see Figure 2). Differences in competence level, geographical distance or cultural distance can all cause difficulties in overcoming this barrier.
The Flexible Graphical User Interface (FGUI) concept aims to close this gap. The system integrator still has the opportunity to compile task-specific user interfaces, but the framework includes pre-defined, robot-specific elements which are at the operator's hands at all times. The functionality is programmed by the integrator and represented in the way she considers most fitting, but the final information channel can be rearranged by the user/operator to achieve an efficient, human (operator) centred surface.
Figure 2
Connection of a robot system with the operator and the integrator
During shop-floor operation the user should communicate with the robot cell on a high level. In high-volume, highly automated production lines this interaction is ideally restricted to starting a program at the beginning of the worker's shift and stopping it when the shift is over. For SMEs this approach is not feasible: frequent reconfiguration and constant supervision are inevitable. Therefore a service-oriented approach to robot operation is applied.
Figure 3
Connections between functionality and service-oriented layers
The selection of programs for the operation is reduced to service requests initiated by button presses paired with an image of the part. Using CogInfoCom semantics [18], the images of parts, the buttons, the indicator lamps, the progress bar and the text box are all visual icons bridging the technical robot data with operation parameters and statuses. The high-level message generated by these entities is the list of achievable item movements in the case of a pick-and-place service, together with feedback on the current operation, provided by the user interface rather than by visual observation of the cell; thus we consider this application of the CogInfoCom channel an eyecon message transferring system in general.
Furthermore, considering a group of screen elements (e.g. the image of the gear, the buttons "Table 1" and "Table 2" and the indicator lamps), we use low-level direct conceptual mapping. The picture of the part generates an instant identification of the robot service objective and of the surface to interact with the robot controller. The buttons represent identification and also interaction with the robot cell. The robot system's internal mapping of the real world is sent back to the user through the indicator lamps in the form of feedback.
These applications indicate that CogInfoCom icons usually not only generate and represent a message in the communication but also have roles. These roles may depend on the actual implementation and on the concept transmitted by the messages. In human-robot interaction three main roles may be distinguished:
- identification role,
- interaction role,
- feedback role.
The simple instruction "Move the gear from Table 1 to Table 2" given to the operator does not require additional mapping from the user, thus the human-robot interaction is simplified and becomes human-centred. The gear is identified by its image, manipulated by interacting with the button, and feedback is given through the indicator lamp.
Practically, the user interface implements an abstraction level between the technical realization of operations and the service-oriented operation. A main robot program monitors the inputs from the user. Pressing one of the buttons loads a number into a variable, which causes the controller to switch to the program indicated by this number. The program contains the pre-defined pose values to be executed sequentially. The current program line number divided by the total number of lines indicates the progress of the task for the user in the progress bar, since one program contains only one item movement from one table to another. Exiting the main program also sets the text box to "Operation in progress" and re-entering it resets it to "Standby".
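The following minimal Python sketch illustrates this abstraction layer under stated assumptions: all names (programs, selected_program, execute, main_cycle) are hypothetical, since the actual controller variables are not listed in the paper, but the behaviour mirrors the description above.

```python
# Hypothetical sketch of the main-program logic described above; variable and
# function names are illustrative, not taken from the actual robot controller.

programs = {
    1: ["approach Table 1", "grip", "lift", "approach Table 2", "release"],
    2: ["approach Table 2", "grip", "lift", "approach Table 1", "release"],
}

selected_program = 0      # loaded by a button press on the FGUI
status_text = "Standby"   # mirrored by the text box on the user interface


def execute(instruction: str) -> None:
    """Placeholder for executing one pre-defined pose of the motion program."""
    print("robot:", instruction)


def on_button_press(program_number: int) -> None:
    """A button press loads a number into the program-selection variable."""
    global selected_program
    selected_program = program_number


def main_cycle() -> None:
    """One pass of the main robot program that monitors the user input."""
    global selected_program, status_text
    if selected_program in programs:
        lines = programs[selected_program]
        status_text = "Operation in progress"        # set on exiting the main program
        for i, instruction in enumerate(lines, 1):
            execute(instruction)
            progress = 100 * i / len(lines)          # value shown in the progress bar
            print(f"progress: {progress:.0f}%")
        selected_program = 0
        status_text = "Standby"                      # reset on re-entering the main program


on_button_press(1)   # e.g. the operator requests one gear movement
main_cycle()
```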
The robot controller keeps track of objects in the robot cell by adjusting an integer variable according to the current position of the object in the motion sequence. Values 1, 2 and 0 represent the item placed on Table 1, placed on Table 2, and lifted and moved by the robot, respectively. Mapping this information onto the indicator lamps means that when the object is on one of the tables, the lamp next to the corresponding table button lights up, sending the message that this table is occupied. When the object is in the air, both lamps are dimmed.
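A minimal sketch of this mapping, assuming a hypothetical lamp_states helper, could look as follows:

```python
def lamp_states(item_position: int) -> dict:
    """Map the controller's tracking variable to the two indicator lamps.

    1 -> item on Table 1, 2 -> item on Table 2, 0 -> lifted / moved by the robot.
    """
    return {
        "table1_lamp": item_position == 1,  # lit only while the item occupies Table 1
        "table2_lamp": item_position == 2,  # lit only while the item occupies Table 2
    }                                       # both dimmed while the item is in the air (0)


assert lamp_states(0) == {"table1_lamp": False, "table2_lamp": False}
assert lamp_states(1) == {"table1_lamp": True, "table2_lamp": False}
```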
It is generally important to rate the success of a concept and its implementations with experiments. The improvement in communication may be measured both quantitatively and qualitatively.
Test participants executed two main tasks, first using the traditional system (Figure 4) and then repeating them using the flexible user interface (Figure 5 and Figure 6). During the tests, user interactions with the robot system were recorded by cameras, key logging, and screen capturing.
Figure 4
Traditional Graphical User Interface
At the beginning of each task a briefing was given to the participants on the general aim of the task to be executed (Table 1). After that they read through a summary of the necessary settings (Table 2 and Table 3) to be made before the robot can perform its operation. At this point participants had the opportunity to go on with a user manual containing step-by-step instructions on the necessary steps, or to continue on their own. Having chosen the second option, they always had the chance to refer to the manual if they needed suggestions on how to continue.
Figure 5
Flexible Graphical User Interface for Task 3
Figure 6
Flexible Graphical User Interface for Task 4
During the execution of Task 1 the only action needed was to select the appropriate program for the robot based upon the required part movement. Task 2 was more complicated; three internal variables had to be set according to the number of nuts and bolts and the number of the delivery box.
The participant was provided with the new user interface for the execution of Task 3 and Task 4, thus the direct interaction with robot controller variables and settings was hidden.
Table 1
General aims of tasks
Table 2
Settings for Task 1
Table 3
Settings for Task 2
3.2.2 Results
The evaluation was conducted with four participants, all male, between the ages of 25 and 27. All four have an engineering background; two have moderate and two have advanced experience in robot programming. Participants were advised that audio and video were recorded solely for scientific analysis, and they were assured that the test could be interrupted at any time on their initiative.
All participants were able to execute all of the tasks in a reasonable amount of time. Two users reported difficulties in setting the program number during Task 1.2. The problem turned out to be a software bug in the traditional robot controller user interface; the user manual had been modified accordingly, although none of the participants followed the user manual steps strictly, most likely due to their previous experience with robots.
Figure 7
Synchronised videos taken during testing
The collected data were evaluated after the tests. Three parameters were selected to represent the difference between the traditional graphical user interface (TGUI) and the flexible approach (FGUI): task execution duration, number of interactions, and the ratio of touch screen interactions to all inputs (touch ratio).
Execution time is measured between the first interaction with the Teach Pendant, as recorded by the key logger and mouse logger running on the robot controller, and the last command which ordered the robot to move. For the TGUI measurements, the last interaction was determined from the time stamp on the video, since logging of the program start button on the controller housing was not in place at the time of the experiment. For the FGUI, the start of the robot movement could be determined from the mouse logging data.
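A minimal sketch of how such a duration can be derived from timestamped interaction logs is shown below; the log format and entries are purely illustrative assumptions, not the actual logging output of the robot controller.

```python
from datetime import datetime

# Hypothetical log entries: (timestamp, source, event); format and values are assumed.
log = [
    ("2013-05-14 10:02:11", "keylogger", "MENU"),
    ("2013-05-14 10:02:25", "mouselogger", "touch PROGRAM_SELECT"),
    ("2013-05-14 10:02:53", "mouselogger", "touch START"),  # last command moving the robot
]

t_first = datetime.strptime(log[0][0], "%Y-%m-%d %H:%M:%S")
t_last = datetime.strptime(log[-1][0], "%Y-%m-%d %H:%M:%S")
execution_time_s = (t_last - t_first).total_seconds()
print(f"execution time: {execution_time_s:.0f} s")  # -> 42 s (illustrative)
```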
The mean execution times using the traditional system for Task 1.1, Task 1.2 and Task 2 are 36, 37 and 135 seconds, respectively. In contrast, the durations of Task 3.1, Task 3.2 and Task 4 were 31, 0 and 30 seconds with the use of the FGUI. The data are presented in Table 4. The total mean times spent on Task 1 and Task 3 are 72.1 and 38.1 seconds, respectively. All mean values presented are based on three measurements. Due to the low number of measurements, further statistical data were not computed.
Table 4
Task execution time
Participants had to configure the robot controller using the Teach Pendant, which offered two possible interactions: key presses and touch screen interactions. A virtual representation of the keyboard is depicted in Figure 7. In addition, users had to deactivate the joint brakes and energize the motors, as well as start the robot program, by pressing buttons on the robot controller housing. All interactions were counted by the logging software and the touch interaction ratio was later calculated as follows:
\[ TR\,[\%] = \frac{n_{\mathrm{touch}}}{n_{\mathrm{touch}} + n_{\mathrm{keypress}} + n_{\mathrm{controller}}} \cdot 100 , \qquad (1) \]
where n_touch is the number of touch screen interactions, n_keypress is the number of button presses on the Teach Pendant and n_controller is the number of button presses on the controller housing. The total number of interactions and the touch ratio are listed in Table 5. The data loss in the execution time measurement did not affect the counting of interactions, thus the mean values are calculated from four samples.
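As a worked example of Equation (1), the following snippet computes the touch ratio from hypothetical interaction counts (not values taken from Table 5).

```python
def touch_ratio(n_touch: int, n_keypress: int, n_controller: int) -> float:
    """Touch ratio TR [%] as defined in Equation (1)."""
    return 100 * n_touch / (n_touch + n_keypress + n_controller)


# Hypothetical counts for illustration only:
print(touch_ratio(n_touch=14, n_keypress=76, n_controller=3))  # ~15% (TGUI-like)
print(touch_ratio(n_touch=26, n_keypress=0, n_controller=2))   # ~93% (FGUI-like)
```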
Table 5
Number of interactions and touch ratio
2 Task duration is zero because the first interaction already started the task execution for the robot.
3 Due to data corruption, Task 1.1 and Task 1.2 durations are not available. Mean values are calculated from three measurements in all cases.
3.2.3 Discussion
This testing aimed to evaluate the new concept of the service-oriented flexible user interface. No particular selection criteria were applied to the participants, and the dataset is not large enough for statistical analysis and a true usability evaluation [21]. However, trends and impressions regarding the difference in user performance can be synthesized. Inspection of the video recordings shows that the participants did not fully realize that Task 1 and Task 3, as well as Task 2 and Task 4, were the same except for the user interface used during execution. Although they did not follow the step-by-step instructions of the user manual in their series of actions, participants returned to it over and over again to confirm their progress. The number of interactions performed is excessive compared with the number of interactions required by the user manual (Table 6).
Table 6
Comparison of required and performed interactions
At the beginning, the state of the user interface and the controller was the same in every test, but at the start of Task 1.2 this changed due to the users' different preferences on how to stop a robot program. The significant difference for Task 2 is caused by the fact that there are several ways of inputting a variable in the traditional GUI, but the shortest one (based upon the robot manufacturer's user manual) was not used by any of the participants.
The excess interactions for Task 3 are the actions needed to dismiss messages on the flexible user interface caused by previous actions of the users. The increased number of touches in Task 4 is due to a usability issue: the display item for selecting the amount of parts to be delivered was too small, thus the selection could not be made without repeated inputs. The verdict of this investigation is that users tend to use less efficient ways to set up the robot controller, which may induce errors and increase execution time due to the need to recover from those errors.
Figure 8
Interaction pattern with traditional (Task 2) and flexible, intuitive system (Task 4)
The intuitiveness of the new approach can be demonstrated by examining interaction patterns. Figure 8 shows the interactions of User 1 in detail. The user input for the traditional system comes in bursts. The slope of each burst is close to the final slope of the FGUI interaction (Task 4 in Figure 8), but the time between these bursts decreases the overall speed of the setup. The recordings show that in the case of Task 2 this time is generally spent on figuring out the next step to execute, either by looking in the manual or by searching for clues on the display. Finally, the press of the start button for Task 2 is delayed because the user double-checked the entered values.
In contrast, the inputs for Task 4 are distributed uniformly in time and the delay between the interactions is significantly lower. The user did not have to refer to the user manual because the user interface itself gave enough information for the setup. This means that this composition of the user interface, made specifically for this service of the robot cell, offers a more intuitive and easy-to-use interface than the traditional one, and that the CogInfoCom messages were transmitted successfully.
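As a minimal illustration of this kind of analysis (with invented timestamps, not the measured data), the delays between consecutive interactions can be computed from the logs as follows; long gaps expose the burst-like TGUI pattern, while short uniform gaps correspond to the FGUI pattern.

```python
def inter_interaction_delays(timestamps_s: list) -> list:
    """Delays (in seconds) between consecutive logged interactions."""
    return [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]


# Illustrative timestamps (seconds from the first interaction), not measured data:
task2_tgui = [0, 1, 2, 3, 25, 26, 27, 60, 61, 62, 120]   # bursts with long pauses
task4_fgui = [0, 3, 6, 9, 12, 15, 18, 21, 24, 27, 30]    # uniformly distributed

print(max(inter_interaction_delays(task2_tgui)))  # longest pause between bursts (58 s here)
print(max(inter_interaction_delays(task4_fgui)))  # short, uniform delays (3 s here)
```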
The overall picture shows that the FGUI performs significantly better than the TGUI (see Figure 9). For comparison, the execution time of Task 3 was 34 seconds shorter than that of Task 1, while Task 4 was 116 seconds shorter than Task 2. Regarding the necessary interactions with the system, the FGUI reduced them to 23.4%, leaving around a quarter of the opportunities for errors.
Figure 9
Final results showing the improved performance of the FGUI: reduced task execution time, decreased number of interactions and increased ratio of touch screen usage to key presses
The Flexible Graphical User Interface also increased the use of the touch screen significantly (e.g. from 15% to 93% between Task 2 and Task 4). Participants reported that the use of images and the composition of the UI helped them connect more easily the parameters given in the instructions with the actions necessary to input these values into the controller. This is the result of the deliberate design of the abstraction level connected to the robot cell's service; the design is based upon the principle of CogInfoCom messages to ensure human-centred operation.
Concluding the discussion, the Flexible Graphical User Interface performed as expected; users were able to operate the robot cell faster, more intuitively and with greater self-confidence. Due to the low number of participants, further verification is necessary; the organisation of a new test for a deeper usability and intuitiveness investigation (including non-expert users) is underway at the time of writing this paper and its results will be published in later papers.
Conclusions
The Flexible Graphical User Interface implementation based on Service Oriented Robot Operation was presented. The application of CogInfoCom principles was described and a new property of the CogInfoCom icon notion was introduced. This new property is the role of the icon in message transfer; for human-robot interaction, identification, interaction and feedback roles were identified.