Haptic Augmented Reality: Taxonomy, Research Status, and Challenges
CONTENTS
10.1 Introduction
10.2 Taxonomies
10.2.1 Visuo-Haptic Reality–Virtuality Continuum
10.2.2 Artificial Recreation and Augmented Perception
10.2.3 Within- and Between-Property Augmentation
10.3 Components Required for Haptic AR
10.3.1 Interface for Haptic AR
10.3.2 Registration between Real and Virtual Stimuli
10.3.3 Rendering Algorithm for Augmentation
10.3.4 Models for Haptic AR
10.4 Stiffness Modulation
10.4.1 Haptic AR Interface
10.4.2 Stiffness Modulation in Single-Contact Interaction
10.4.3 Stiffness Modulation in Two-Contact Squeezing
10.5 Application: Palpating Virtual Inclusion in Phantom with Two Contacts
10.5.1 Rendering Algorithm
10.6 Friction Modulation
10.7 Open Research Topics
10.8 Conclusions
References
10.1 INTRODUCTION
This chapter introduces an emerging research field in augmented reality (AR), called
haptic AR. As AR enables a real space to be transformed to a semi-virtual space by
providing a user with the mixed sensations of real and virtual objects, haptic AR does
the same for the sense of touch; a user can touch a real object, a virtual object, or a real
object augmented with virtual touch. Visual AR is a relatively mature technology and
is being applied to diverse practical applications such as surgical training, industrial
manufacturing, and entertainment (Azuma et al. 2001). In contrast, the technology
for haptic AR is quite recent and poses a great number of new research problems
ranging from modeling to rendering in terms of both hardware and software.
Haptic AR promises great potential to enrich user interaction in various applications.
For example, suppose that a user is holding a pen-shaped magic tool in the hand, which
allows the user to touch and explore a virtual vase overlaid on a real table. In addition, the
user may draw a picture on the table with an augmented feel of using a paint brush on
a smooth piece of paper, or using a marker on a stiff white board. In a more practical
setting, medical students can practice cancer palpation skills by exploring a phantom
body while trying to find virtual tumors that are rendered inside the body. A consumer-
targeted application can be found in online stores. Consumers can see clothes displayed
on the touchscreen of a tablet computer and feel their textures with bare fingers, for
which the textural and frictional properties of the touchscreen are modulated to those
of the clothes. Another prominent example is augmentation or guidance of motor skills
by means of external haptic (force or vibrotactile) feedback, for example, shared con-
trol or motor learning of complex skills such as driving and calligraphy. Creating such
haptic modulations belongs to the realm of haptic AR. Although we have a long way to
go in order to realize all the envisioned applications of haptic AR, some representative
examples that have been developed in recent years are shown in Figure 10.1.
FIGURE 10.1 Representative applications of haptic AR. (a) AR-based open surgery simulator. (From Harders, M. et al., IEEE Trans. Visual. Comput. Graph., 15, 138,
2009.) (b) Haptic AR breast tumor palpation system. (From Jeon, S. and Harders, M., IEEE
Trans. Haptics, 99, 1, 2014.) (c) Texture modeling and rendering based on contact accelera-
tion data. (Reprinted from Romano, J.M. and Kuchenbecker, K.J., IEEE Trans. Haptics, 5,
109, 2011. With permission.) (d) Conceptual illustration of the haptic AR drawing example.
In this chapter, we first address three taxonomies for haptic AR based on a com-
posite visuo-haptic reality–virtuality continuum, a functional aspect of haptic AR
applications, and the subject of augmentation (Section 10.2). A number of studies
related to haptic AR are reviewed and classified based on the three taxonomies.
Based on the review, associated research issues along with components needed for
a haptic AR system are elucidated in Section 10.3. Sections 10.4 through 10.6 intro-
duce our approach for the augmentation of real object stiffness and friction, in the
interaction with one or two contact points. A discussion of the open research issues
for haptic AR is provided in Section 10.7, followed by brief conclusions in Section
10.8. We hope that this chapter will prompt more research interest in this exciting,
yet largely unexplored, area of haptic AR.
10.2 TAXONOMIES
10.2.1 Visuo-Haptic Reality–Virtuality Continuum
General concepts associated with AR, or more generally, mixed reality (MR) were
defined earlier by Milgram and Colquhoun Jr. (1999) using the reality–virtuality
continuum shown in Figure 10.2a. The continuum includes all possible combinations
of purely real and virtual environments, with the intermediate area corresponding to
MR. Whether an environment is closer to reality or virtuality depends on the amount
of overlay or augmentation that the computer system needs to perform; the more aug-
mentation performed, the closer to virtuality. This criterion allows MR to be further
classified into AR (e.g., a heads-up display in an aircraft cockpit) and augmented
virtuality (e.g., a computer game employing a virtual dancer with the face image
of a famous actress). We, however, note that the current literature does not strictly
discriminate the two terms, and uses AR and MR interchangeably.
Extending the concept, we can define a similar reality–virtuality continuum for
the sense of touch and construct a visuo-haptic reality–virtuality continuum by com-
positing the two unimodal continua shown in Figure 10.2b. This continuum can be
valuable for building the taxonomy of haptic MR. In Figure 10.2b, the whole visuo-
haptic continuum is classified into nine categories, and each category is named in an
abbreviated form. The shaded regions belong to the realm of MR. In what follows, we
review the concepts and instances associated with each category, with more attention
to those of MR. Note that the continuum for touch includes all kinds of haptic feed-
back and does not depend on the specific types of haptic sensations (e.g., kinesthetic,
tactile, or thermal) or interaction paradigms (e.g., tool-mediated or bare-handed).
In the composite continuum, the left column has the three categories of haptic
reality, vR-hR, vMR-hR, and vV-hR, where the corresponding environments pro-
vide only real haptic sensations. Among them, the simplest category is vR-hR,
which represents purely real environments without any synthetic stimuli. The other
end, vV-hR, refers to the conventional visual virtual environments with real touch,
for example, using a tangible prop to interact with virtual objects. Environments
between the two ends belong to vMR-hR, in which a user sees mixed objects but
still touches real objects. A typical example is the so-called tangible AR that has
been actively studied in the visual AR community.

FIGURE 10.2 (a) The reality–virtuality continuum defined by Milgram and Colquhoun Jr. (1999), spanning purely real to purely virtual environments with mixed reality (augmented reality and augmented virtuality) in between. (b) The composite visuo-haptic reality–virtuality continuum.

In tangible AR, a real prop held in the hand is usually used as a tangible interface for visually mixed environments
(e.g., the MagicBook in Billinghurst et al. 2001), and its haptic property is regarded
unimportant for the applications. Another example is the projection augmented
model. A computer-generated image is projected onto a real physical model to create
a realistic-looking object, and the model can be touched by the bare hand (e.g., see
Bennett and Stevens 2006). Since the material property (e.g., texture) of the real
object may not agree with its visually augmented model, haptic properties are usu-
ally incorrectly displayed in this application.
The categories in the right column of the composite continuum, vR-hV, vMR-hV,
and vV-hV, are for haptic virtuality, corresponding to environments with only virtual
haptic sensations, and have received the most attention from the haptics research
community. Robot-assisted motor rehabilitation can be an example of vR-hV where
objects. Earlier attempts in this category focused on how to integrate haptic render-
ing of virtual objects into the existing visual AR framework, and they identified
the precise registration between the haptic and the visual coordinate frame as a key
issue (Adcock et al. 2003, Vallino and Brown 1999). For this topic, Kim et al. (2006)
applied an adaptive low-pass filter to reduce the trembling error of a low-cost vision-
based tracker using ARToolkit, and upsampled the tracking data for use with 1 kHz
haptic rendering. Bianchi et al. further improved the registration
accuracy via intensive calibration of a vision-based object tracker (Bianchi et al.
2006a,b). Their latest work explored the potential of visuo-haptic AR technology
for medical training with their highly stable and accurate AR system (Harders et al.
2009). Ott et al. also applied the HMD-based visuo-haptic framework to training
processes in industry and demonstrated its potential (Ott et al. 2007). In applica-
tions, a half mirror was often used for constructing a visuo-haptic framework due to
the better collocation of visual and haptic feedback, for example, ImmersiveTouch
(Luciano et al. 2005), Reachin Display (Reachin Technology), PARIS display
(Johnson et al. 2000), and SenseGraphics 3D-IW (SenseGraphics). Such frameworks
were, for instance, applied to cranial implant design (Scharver et al. 2004) or an MR
painting application (Sandor et al. 2007).
The last categories for haptic MR, vR-hMR, vMR-hMR, and vV-hMR, with which
the rest of this chapter is concerned, lie in the middle column of the composite con-
tinuum. A common characteristic of haptic MR is that synthetic haptic signals that
are generated by a haptic interface modulate or augment stimuli that occur due to a
contact between a real object and a haptic interface medium, that is, a tool or a body
part. The VisHap system (Ye et al. 2003) is an instance of vR-hMR that provides
mixed haptic sensations in a real environment. In this system, some properties of a
virtual object (e.g., shape and stiffness) are rendered by a haptic device, while others
(e.g., texture and friction) are supplied by a real prop attached at the end-effector of
the device. Other examples in this category are the SmartTool (Nojima et al. 2002)
and SmartTouch systems (Kajimoto et al. 2004). They utilized various sensors (opti-
cal and electrical conductivity sensors) to capture real signals that could hardly be
perceived by the bare hand, transformed the signals into haptic information, and then
delivered them to the user in order to facilitate certain tasks (e.g., peeling off the
white from the yolk in an egg). The MicroTactus system (Yao et al. 2004) is another
example of vR-hMR, which detects and magnifies acceleration signals caused by
the interaction of a pen-type probe with a real object. The system was shown to
improve the performance of tissue boundary detection in arthroscopic surgical train-
ing. A similar pen-type haptic AR system, Ubi-Pen (Kyung and Lee 2009), embed-
ded miniaturized texture and vibrotactile displays in the pen, adding realistic tactile
feedback for interaction with a touch screen in mobile devices.
On the other hand, environments in vV-hMR use synthetic visual stimuli. For exam-
ple, Borst et al. investigated the utility of haptic MR in a visual virtual environment
by adding synthetic force to a passive haptic response for a panel control task (Borst
and Volz 2005). Their results showed that mixed force feedback was better than syn-
thetic force alone in terms of task performance and user preference. In vMR-hMR,
both modalities rely on mixed stimuli. Ha et al. installed a vibrator in a real tangible
prop to produce virtual vibrotactile sensations in addition to the real haptic informa-
tion of the prop in a visually mixed environment (Ha et al. 2007). They demonstrated
that the virtual vibrotactile feedback enhances immersion for an AR-based handheld
game. Bayart et al. introduced a teleoperation framework where force measured at
the remote site is presented at the master side with additional virtual force and mixed
imagery (Bayart et al. 2007, 2008). In particular, they tried to modulate a certain real
haptic property with virtual force feedback for a hole-patching task and a painting
application, unlike most of the related studies introduced earlier.
Several remarks need to be made. First, the vast majority of related work, except
(Bayart et al. 2008, Borst and Volz 2005, Nojima et al. 2002), has used the term
haptic AR without distinguishing vMR-hV and hMR, although research issues asso-
ciated with the two categories are fundamentally different. Second, haptic MR can
be further classified to haptic AR and haptic augmented virtuality using the same
criterion of visual MR. All of the research instances of hMR introduced earlier cor-
respond to haptic AR, since little knowledge regarding an environment is managed
by the computer for haptic augmentation. However, despite its potential, attempts to
develop systematic and general computational algorithms for haptic AR have been
scanty. An instance of haptic augmented virtuality can be haptic rendering systems
that use haptic signals captured from a real object (e.g., see Hoever et al. 2009,
Okamura et al. 2001, Pai et al. 2001, Romano and Kuchenbecker 2011) in addition
to virtual object rendering, although such a concept has not been formalized before.
Third, although the taxonomy is defined for composite visuo-haptic configurations,
a unimodal case (e.g., no haptic or visual feedback) can also be mapped to the cor-
responding 1D continuum on the axes in Figure 10.2b.
TABLE 10.1
Classification of Related Studies Using the Composite Taxonomy (column headings: Artificial Recreation and Augmented Perception)
Further, the last two taxonomies are combined to construct a composite taxonomy,
and all relevant literature in the hMR category is classified using this taxonomy in
Table 10.1. Note that most of the haptic AR systems have both within- and between-
property characteristics to some degree. For clear classification, we only examined
key augmentation features in Table 10.1.
(Figure: conceptual structure of a haptic AR interface. An interaction tool is coupled with the real environment (an object whose re/action is based on physics), with the computer-driven haptic interface (sensing and actuation), and with the user's brain and sensorimotor system (perception and action).)
both the real environment and the haptic interface are mixed and transmitted to
the user. Therefore, designing this feel-through tool is of substantial importance
in designing a haptic AR interface.
The feel-through can be either direct or indirect. Direct feel-through, analogous
to optical see-through in visual AR, transmits relevant physical signals directly to
the user via a mechanically coupled implement. In contrast, in indirect feel-through
(similar to video see-through), relevant physical signals are sensed, modeled, and
synthetically reconstructed for the user to feel, for example, in master–slave tele-
operation. In direct feel-through, preserving the realism of a real environment and
mixing real and virtual stimuli is relatively easy, but real signals must be compen-
sated for with great care for augmentation. To this end, the system may need to
employ very accurate real response estimation methods for active compensation
or special hardware for passive compensation, for example, using a ball bearing
tip to remove friction (Jeon and Choi 2010) and using a deformable tip to compen-
sate for real contact vibration (Hachisu et al. 2012). By contrast, in indirect
feel-through, modulating real signals is easier since all the final stimuli are syn-
thesized, but more sophisticated hardware is required for transparent rendering of
virtual stimuli with high realism.
Different kinds of coupling may exist. Mechanical coupling is typical, for example,
a force-feedback haptic stylus instrumented with a contact tip (Jeon and
Choi 2011). Other forms such as thermal coupling and electric coupling are also pos-
sible depending on the target property. In between-property augmentation, coupling
may not be very tight, for example, only position data and timing are shared (Borst
and Volz 2005).
Haptic AR tools can come in many different forms. In addition to typical styli,
very thin sheath-type tools are also used, for example, sensors on one side and
actuators on the other side of a sheath (Nojima et al. 2002). Sometimes a real object
itself is a tool, for example, when both sensing and actuation modules are embedded
in a tangible marker (Ha et al. 2006).
A tool and coupling for haptic AR needs to be very carefully designed. Each of
the three components involved in the interaction requires a proper attachment to the
tool, appropriate sensing and actuation capability, and eventually, all of these should
be compactly integrated into the tool in a way that it can be appropriately used by
a user. To this end, the form factors of the sensors, attachment joints, and actuation
parts should be carefully designed to maximize the reliability of sensing and actua-
tion while maintaining a sufficient degree of freedom of movement.
between virtual and real signals. In the case of within-property augmentation, mixing
happens in a single property, and thus virtual signals related to a target property need to
be exactly aligned with corresponding real signals for harmonious merging and smooth
transition along the line between real and virtual. This needs very sophisticated regis-
tration, often with the estimation of real properties based on sensors and environment
models (see Section 10.4 for how we have approached this issue). However, in between-
property augmentation, different properties are usually treated separately, and virtual
signals of one target property do not have to be closely associated with real signals of
the other properties. Thus, the registration can be less accurate in this case.
Step 1 prepares data for steps 2 and 3 by sensing variables from the real envi-
ronment. Signal processing can also be applied to the sensor values.
Step 2 conducts a registration process based on the sensed data and pre-
identified models (see Section 10.3.4 for examples). This step usually esti-
mates the spatial and temporal state of the tool and the real environment and
then conducts the registration as indicated in Section 10.3.2, for example,
property-related registration and contact detection between the tool and real
objects. Depending on the result of this step, the system decides whether to
proceed to step 3 or go back to step 1 in this frame.
Step 3 is dedicated to the actual calculation of virtual feedback (in direct feel-
through) or mixed feedback (in indirect feel-through). Computational proce-
dures in this step largely depend on the categories of haptic AR (Table 10.1).
For artificial recreation, this step simulates the behaviors of the properties
involved in the rendering using physically based models. However, augmented
perception may need to derive the target signal based on purely sensed signals
and/or using simpler rules, for example, doubling the amplitude of measured
contact vibration (Yao et al. 2004). In addition, within-property augmenta-
tion often requires an estimation of the properties of a real object in order to
compensate for or augment it. For instance, modulating the feel of a brush
in the AR drawing example first needs the compensation of the real tension
and friction of the manipulandum. This estimation can be done either using a
model already identified in a preprocessing step or by real-time estimation of
the property using sensor values, or both (see Section 10.3.4 for more details).
In between-property augmentation, however, this estimation process is not
required in general, and providing virtual properties is simpler.
Step 4 sends commands to the haptic AR interface to display the feedback
calculated in Step 3. Sometimes we need techniques for controlling the
hardware for the precise delivery of stimuli.
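To make the four steps above concrete, the Python sketch below shows how such a loop might be organized. It is a minimal illustration, not the implementation of any system described in this chapter; the function names, the 1 kHz timing, and the simple force-threshold contact test are assumptions made for the example. In a direct feel-through configuration, step 3 would return only the virtual portion of the stimulus; in indirect feel-through, it would return the full mixed feedback.

# Minimal Python sketch of the four-step rendering loop described above.
# All names (read_sensors, register, compute_feedback, send_command) and the
# toy virtual-stiffness feedback are illustrative assumptions, not part of
# any system cited in this chapter.
import time

def read_sensors():
    """Step 1: sample force/position sensors and apply basic filtering."""
    return {"force": (0.0, 0.0, 0.0), "position": (0.0, 0.0, 0.0)}

def register(sensed, models):
    """Step 2: spatio-temporal (and property-related) registration.
    Returns None when no contact is detected so the frame can be skipped."""
    magnitude = sum(f * f for f in sensed["force"]) ** 0.5
    return sensed if magnitude > models["contact_threshold"] else None

def compute_feedback(state, models):
    """Step 3: compute virtual (direct feel-through) or mixed (indirect
    feel-through) feedback; here a toy 1-D virtual spring."""
    return (0.0, 0.0, -models["virtual_stiffness"] * state["position"][2])

def send_command(force):
    """Step 4: command the haptic interface (device-specific in practice)."""
    pass

models = {"contact_threshold": 0.5, "virtual_stiffness": 200.0}
for _ in range(1000):                          # roughly 1 s of a 1 kHz loop
    sensed = read_sensors()                    # Step 1
    state = register(sensed, models)           # Step 2
    if state is not None:
        send_command(compute_feedback(state, models))   # Steps 3 and 4
    time.sleep(0.001)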
TABLE 10.2
Characteristics of the Categories

Registration and rendering, within-property augmentation:
• Registration: position and timing registration as well as property-related registration needed.
• Rendering: includes estimation and compensation of real signals and merging of them with virtual signals.

Registration and rendering, between-property augmentation:
• Registration: only basic position and timing registration needed.
• Rendering: algorithms for haptic VR can be applied.
FIGURE 10.4 Haptic AR interface. (Reprinted from Jeon, S. and Harders, M., Extending
haptic augmented reality: Modulating stiffness during two-point squeezing, in Proceedings
of the Haptics Symposium, 2012, pp. 141–146. With permission.)
$\mathbf{f}_r(t) = -\left\{\mathbf{f}_h(t) + \mathbf{f}_d(t)\right\}.$ (10.1)
The reaction force f_r(t) during contact can be decomposed into two orthogonal
force components, as shown in Figure 10.5:

$\mathbf{f}_r(t) = \mathbf{f}_r^n(t) + \mathbf{f}_r^t(t),$ (10.2)

where
f_r^n(t) is the result of object elasticity in the normal direction
f_r^t(t) is the frictional tangential force
FIGURE 10.5 Variables for single-contact stiffness modulation. (Reprinted from Jeon, S.
and Choi, S., Presence Teleop. Virt. Environ., 20, 337, 2011. With permission.)
Let x(t) be the displacement caused by the elastic force component, which represents
the distance between the haptic interface tool position, p(t), and the original non-
deformed position pc(t) of a contacted particle on the object surface. If we denote the
unit vector in the direction of f_r^n(t) by u_n(t) and the target modulation stiffness by $\tilde{k}(t)$,
the force that a user should feel is

$\tilde{\mathbf{f}}_h(t) = \tilde{k}(t)\,x(t)\,\mathbf{u}_n(t) + \mathbf{f}_r^t(t).$ (10.3)

Using (10.3), the force that the haptic device needs to exert is

$\tilde{\mathbf{f}}_d(t) = \tilde{\mathbf{f}}_h(t) - \mathbf{f}_r(t).$ (10.4)
This equation indicates the tasks that a stiffness modulation algorithm has to do in
every loop: (1) detection of the contact between the haptic tool and the real object for
spatial and temporal registration, (2) measurement of the reaction force fr(t), (3) esti-
mation of the direction un(t) and magnitude x(t) of the resulting deformation for stiff-
ness augmentation, and (4) control of the device-rendered force f_d(t) to produce the
desired force $\tilde{\mathbf{f}}_d(t)$. The following section describes how we address these four steps.
In Step 1, we use force sensor readings for contact detection since the entire
geometry of the real environment is not available. A collision is regarded to have
occurred when forces sensed during interaction exceed a threshold. To increase the
accuracy, we developed algorithms to suppress noise, as well as to compensate for
the weight and dynamic effects of the tool. See Jeon and Choi (2011) for details.
Step 2 is also simply done with the force sensor attached to the probing tool.
Step 3 is the key process for stiffness modulation. We first identify the friction
and deformation dynamics of a real object in a preprocessing step, and use them later
during rendering to estimate the unknown variables needed for merging real and virtual forces.
The details of this process are summarized in the following section.
Before augmentation, we carry out two preprocessing steps. First, the friction
between the real object and the tool tip is identified using the Dahl friction model (Jeon
and Choi 2011). The original Dahl model is transformed to an equivalent discrete-time
difference equation, as described in Mahvash and Okamura (2006). It also includes
a velocity-dependent term to cope with viscous friction. The procedure for friction
data bins for the presliding regime are used to identify the parameters that define
behavior at almost zero velocity, while the others are used for Coulomb and viscous
parameters.
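As a rough illustration of the friction model used here, the following Python sketch advances a Dahl-type presliding state in discrete time and adds a velocity-dependent viscous term. The specific update rule, parameter values, and stroking trajectory are illustrative assumptions; the chapter itself follows the discrete-time form of Mahvash and Okamura (2006).

# Python sketch of a discrete-time Dahl friction model with an added viscous
# term.  The update rule, parameter values (sigma, f_c, b), and the stroking
# trajectory are illustrative assumptions for this example.
import numpy as np

def dahl_friction(x, dt, sigma=1000.0, f_c=0.6, b=0.8):
    """Simulate the friction force for a tangential position trajectory x
    (1-D array, metres) sampled every dt seconds."""
    v = np.gradient(x, dt)                    # tangential velocity
    f = np.zeros_like(x)                      # presliding (Dahl) friction state
    for k in range(len(x) - 1):
        dx = x[k + 1] - x[k]
        # Dahl presliding dynamics: dF/dx = sigma * (1 - F / f_c * sign(v))
        f[k + 1] = f[k] + sigma * (1.0 - f[k] / f_c * np.sign(v[k])) * dx
    return f + b * v                          # add the velocity-dependent term

# Example: slow back-and-forth strokes of 2 cm amplitude sampled at 1 kHz
t = np.arange(0.0, 2.0, 0.001)
x = 0.02 * np.sin(2.0 * np.pi * 0.5 * t)
print(dahl_friction(x, dt=0.001)[:5])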
The second preprocessing step is for identifying the deformation dynamics of the
real object. We use the Hunt–Crossley model (Hunt and Crossley 1975) to account
for nonlinearity. The model determines the response force magnitude given dis-
placement x(t) and velocity $\dot{x}(t)$ by

$f(t) = k\,x(t)^m + b\,x(t)^m\,\dot{x}(t),$ (10.5)
where
k and b are stiffness and damping constants
m is a constant exponent (usually 1 < m < 2)
For identification, the data triples consisting of displacement, velocity, and reaction
force magnitude are collected through repeated presses and releases of a deformable
sample in the normal direction. The data are passed to a recursive least-squares algo-
rithm for an iterative estimation of the Hunt–Crossley model parameters (Haddadi
and Hashtrudi-Zaad 2008).
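The following Python sketch illustrates one simple way to fit the Hunt–Crossley parameters from such (displacement, velocity, force) triples: for a fixed exponent m the model is linear in k and b, so a linear least-squares fit can be run over a grid of candidate exponents. This batch procedure is only an illustration; the chapter uses the recursive least-squares estimator of Haddadi and Hashtrudi-Zaad (2008), and the grid, parameter values, and synthetic data below are assumptions for the example.

# Python sketch of a simple batch fit of Hunt-Crossley parameters (k, b, m)
# from (displacement, velocity, force) triples.  For a fixed exponent m the
# model f = x**m * (k + b * xdot) is linear in (k, b), so we solve a linear
# least-squares problem for each candidate m and keep the best fit.
import numpy as np

def identify_hunt_crossley(x, xdot, f, m_grid=np.linspace(1.0, 2.0, 51)):
    best = None
    for m in m_grid:
        phi = np.column_stack([x**m, (x**m) * xdot])   # regressor matrix
        sol, *_ = np.linalg.lstsq(phi, f, rcond=None)  # [k, b] for this m
        err = np.sum((phi @ sol - f) ** 2)
        if best is None or err < best[0]:
            best = (err, sol[0], sol[1], m)
    return best[1:]                                    # (k, b, m)

# Synthetic press-release data generated from known parameters
rng = np.random.default_rng(0)
x = 0.01 * np.sin(np.linspace(0.0, np.pi, 500))        # displacement (m)
xdot = np.gradient(x, 0.001)                           # velocity (m/s)
f = (x**1.5) * (3000.0 + 20.0 * xdot) + rng.normal(0.0, 0.01, x.size)
print(identify_hunt_crossley(x, xdot, f))              # close to (3000, 20, 1.5)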
For rendering, the following computational process is executed in every haptic ren-
dering frame. First, two variables, the deformation direction un(t) and the magnitude
of the deformation x(t) are estimated. The former is derived as follows. Equation 10.2
indicates that the response force fr(t) consists of two perpendicular force compo-
nents: frn (t ) and frt (t ). Since un(t) is the unit vector of frn (t ), un(t) becomes:
$\mathbf{u}_n(t) = \frac{\mathbf{f}_r(t) - \mathbf{f}_r^t(t)}{\left\|\mathbf{f}_r(t) - \mathbf{f}_r^t(t)\right\|}.$ (10.6)
The only unknown variable in (10.6) is f_r^t(t). The magnitude of f_r^t(t) is estimated using the
identified Dahl model. Its direction is derived from the tangent vector at the current
contact point p(t), which is found by projecting Δp(t) onto un(t−Δt) and subtracting
it from Δp(t).
The next part is the estimation of x(t). The assumption of material homogeneity
allows us to directly approximate it from the inverse of the Hunt–Crossley model
identified previously. Finally, using the estimated u_n(t) and x(t), $\tilde{\mathbf{f}}_d(t)$ is calculated
using (10.4), which is then sent to the haptic AR interface.
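A compact Python sketch of this per-frame computation is given below. The helper callables standing in for the identified Dahl and inverse Hunt–Crossley models, and the final force balance, are illustrative reconstructions of the procedure described above rather than the verbatim algorithm of Jeon and Choi (2011).

# Python sketch of the per-frame single-contact computation.  The callables
# standing in for the identified Dahl (friction magnitude) and inverse
# Hunt-Crossley (force-to-displacement) models, and the final force balance,
# are illustrative assumptions rather than the exact published algorithm.
import numpy as np

def stiffness_modulation_force(f_r, p, p_prev, u_n_prev, k_target,
                               dahl_magnitude, hc_inverse):
    """f_r            : measured reaction force (3-vector)
       p, p_prev      : current and previous tool-tip positions
       u_n_prev       : normal direction estimated in the previous frame
       k_target       : desired (modulated) stiffness
       dahl_magnitude : callable giving the friction magnitude |f_r^t|
       hc_inverse     : callable mapping a normal force magnitude to x(t)"""
    dp = p - p_prev
    tangent = dp - np.dot(dp, u_n_prev) * u_n_prev       # motion along the surface
    t_hat = tangent / (np.linalg.norm(tangent) + 1e-9)
    f_rt = dahl_magnitude(np.linalg.norm(tangent)) * t_hat   # friction estimate
    f_rn = f_r - f_rt                                    # elastic (normal) component
    u_n = f_rn / (np.linalg.norm(f_rn) + 1e-9)           # Eq. (10.6)
    x = hc_inverse(np.linalg.norm(f_rn))                 # inverse Hunt-Crossley
    # Device force so that the felt force matches the target stiffness,
    # following the reconstruction in Eqs. (10.3) and (10.4):
    return k_target * x * u_n + f_rt - f_r

# Toy usage with stand-in models (linear inverse model, constant friction)
f_d = stiffness_modulation_force(
    f_r=np.array([0.1, 0.0, 2.0]), p=np.array([0.001, 0.0, -0.002]),
    p_prev=np.array([0.0, 0.0, -0.002]), u_n_prev=np.array([0.0, 0.0, 1.0]),
    k_target=800.0, dahl_magnitude=lambda dx: 0.1, hc_inverse=lambda fn: fn / 1500.0)
print(f_d)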
In Jeon and Choi (2011), we assessed the physical performance of each compo-
nent and the perceptual performance of the final rendering result using various real
samples. In particular, the perceptual quality of modulated stiffness evaluated in a
psychophysical experiment showed that rendering errors were less than the human
discriminability of stiffness. This demonstrates that our system can provide percep-
tually convincing stiffness modulation.
f_r,*(t) can be further decomposed into the pure weight f_w,*(t) and a force component
in the squeezing direction f_sqz,*(t), as shown in Figure 10.6, resulting in

$\mathbf{f}_{r,*}(t) = \mathbf{f}_{w,*}(t) + \mathbf{f}_{sqz,*}(t).$ (10.8)
Since the displacement and the force along the squeezing direction contribute to stiffness
perception, the force component of interest is fsqz,* (t). Then, (10.7) can be rewritten as
FIGURE 10.6 Variables for two-contact stiffness modulation. (Reprinted from Jeon, S.
and Harders, M., Extending haptic augmented reality: Modulating stiffness during two-point
squeezing, in Proceedings of the Haptics Symposium, 2012, pp. 141–146. With permission.)
$\mathbf{f}_{h,*}(t) = \tilde{k}(t)\,x_*(t)\,\mathbf{u}_*(t),$ (10.10)
where x*(t) represents the displacement along the squeezing direction and u*(t)
is the unit vector toward the direction of that deformation. Combining (10.9) and
(10.10) results in the virtual force for the haptic interfaces to render for the desired
augmentation:

$\tilde{\mathbf{f}}_{d,*}(t) = \mathbf{f}_{h,*}(t) - \mathbf{f}_{sqz,*}(t) = \tilde{k}(t)\,x_*(t)\,\mathbf{u}_*(t) - \mathbf{f}_{sqz,*}(t).$ (10.11)
Here again, (10.11) indicates that we need to estimate the displacement x*(t) and the
deformation direction u*(t) at each contact point. The known variables are the reac-
tion forces fr,*(t) and the tool tip positions p*(t). To this end, the following three obser-
vations about an object held in the steady state are utilized. First, the magnitudes of
the two squeezing forces fsqz,1(t) and fsqz,2(t) are the same, but the directions are the
opposite (fsqz,1(t) = −fsqz,2(t)). Second, each squeezing force falls on the line connect-
ing the two contact locations. Third, the total weight of the object is equal to the sum
of the two reaction force vectors:
fr ,1 ( t ) + fr ,2 ( t ) = fw,1 ( t ) + fw,2 ( t ) .
The first and second observations provide the directions of f_sqz,*(t) (= u_*(t), the unit
vector along $\overrightarrow{\mathbf{p}_1(t)\mathbf{p}_2(t)}$ or $\overrightarrow{\mathbf{p}_2(t)\mathbf{p}_1(t)}$; also see l(t) in Figure 10.6). The magnitude f_sqz,*(t)
is determined as follows. The sum of the reaction forces along the l(t) direction,
$f_{r\downarrow sqz}(t) = \mathbf{f}_{r,1}(t)\cdot\mathbf{u}_l(t) + \mathbf{f}_{r,2}(t)\cdot\mathbf{u}_l(t)$, includes not only the two squeezing forces, but
also the weight. Thus, f_sqz(t) can be calculated by subtracting the effect of the weight
along l(t) from f_r↓sqz(t):

$f_{sqz}(t) = f_{r\downarrow sqz}(t) - f_{w\downarrow sqz}(t),$ (10.12)

where f_w↓sqz(t) can be derived based on the third observation such that

$f_{w\downarrow sqz}(t) = \left(\mathbf{f}_{r,1}(t) + \mathbf{f}_{r,2}(t)\right)\cdot\mathbf{u}_l(t).$ (10.13)
Then, the squeezing force at each contact point can be derived based on the first
observation:
FIGURE 10.7 Example snapshot of visuo-haptic augmentation. Reaction force (dark gray
arrow), weight (gray arrow), and haptic device force (light gray arrow) are depicted. Examples
with increased stiffness (virtual forces oppose squeezing) and decreased stiffness (virtual
forces assist squeezing) are shown on left and right, respectively.
Steps for the estimation of the displacement x* (t) in (10.11) are as follows. Let
the distance between the two initial contact points on the non-deformed surface
(pc,1(t) and pc,2(t) in Figure 10.6) be d0. It is constant over time due to the no-slip
assumption. Assuming homogeneity, x1(t) is equal to x2(t), and the displacements can
be derived by
$x_1(t) = x_2(t) = 0.5\left(d_0 - d(t)\right),$ (10.15)
where d(t) is the distance between p_1(t) and p_2(t). All the unknown variables are now estimated and the final
virtual force can be calculated using (10.11).
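The Python sketch below illustrates the two-contact case, keeping only the parts stated explicitly in the text: the squeezing direction taken along the line between the contacts and the displacement estimate of (10.15). The final force expression and sign conventions are assumptions standing in for (10.11), and the estimated real squeezing force magnitude per contact is taken as a given input.

# Python sketch of the two-contact case.  The displacement estimate of
# Eq. (10.15) and the line-between-contacts direction come from the text; the
# final force expression, the sign conventions, and f_sqz_each (the estimated
# real squeezing force magnitude per contact) are illustrative assumptions.
import numpy as np

def two_contact_forces(p1, p2, d0, k_target, f_sqz_each):
    """p1, p2     : current tool-tip positions (3-vectors)
       d0         : initial distance between the two contact points
       k_target   : desired (modulated) stiffness
       f_sqz_each : estimated real squeezing force magnitude per contact"""
    d = np.linalg.norm(p2 - p1)
    out1 = (p1 - p2) / d                     # outward direction at contact 1
    out2 = -out1                             # outward direction at contact 2
    x = 0.5 * (d0 - d)                       # Eq. (10.15): displacement per contact
    # Virtual force so that the outward force at each contact corresponds to
    # the target stiffness instead of the real one:
    delta = k_target * x - f_sqz_each
    return delta * out1, delta * out2

# Toy usage: a 6 cm object squeezed to 5.5 cm, target stiffness 300 N/m
print(two_contact_forces(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.055, 0.0]),
                         d0=0.06, k_target=300.0, f_sqz_each=1.2))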
In Jeon and Harders (2012), we also evaluated the system performance through
simulations and a psychophysical experiment. Overall, the evaluation indicated that
our system can provide physically and perceptually sound stiffness augmentation.
In addition, the system has further been integrated with a visual AR framework
(Harders et al. 2009). To our knowledge, this is among the first systems that can
augment both visual and haptic sensations. We used the visual system to display
information related to haptic augmentation, such as the force vectors involved in the
algorithm. Figure 10.7 shows exemplar snapshots.
FIGURE 10.8 Palpating a silicone tissue mock-up containing a virtual tumor with two contacts. The combined forces fH,1 and fH,2, the real response forces fR,1 and fR,2, and the virtual inclusion forces fT,1 and fT,2 act at the two tool tips.
with the consideration of the mutual effects between the contacts. The final com-
bined forces fH,* (t) enable a user to feel augmented sensations of the stiffer inclusion,
given as
$\mathbf{f}_{H,*}(t) = \mathbf{f}_{R,*}(t) + \mathbf{f}_{T,*}(t).$ (10.16)
Here, estimating and simulating fT,*(t) is the key for creating a sound illusion. The
hardware setup we used is the same as the one shown in Figure 10.4.
A two-step, measurement-based approach is taken to model the dynamic behavior
of the inclusion. First, a contact dynamics model representing the pure response of the
inclusion is identified using the data captured during palpating a physical mock-up. Then,
another dynamics model is constructed to capture the movement characteristics of the
inclusion in response to external forces. Both models are then used in rendering to deter-
mine fT,* (t) in real-time. The procedures are detailed in the following paragraphs.
The first preprocessing step is for identifying the overall contact force resulting
purely from an inclusion (inclusion-only case) as a function of the distance between
the inclusion and the contact point. Our approach is to extract the difference between
the responses of a sample with a stiffer inclusion (inclusion-embedded) and a sam-
ple without it (no-inclusion). To this end, we first identify the Hunt–Crossley model
using the no-inclusion model. We use the same identification procedure described in
Section 10.4.2. This model is denoted by $f = H_{NT}(x, \dot{x})$. Then, we obtain the data from
the inclusion-embedded sample by manually poking along a line from pTs to pT0 (see
Figure 10.9 for the involved quantities). This time, we also record the position changes
of pT using a position tracking system (TrackIR; NaturalPoint, Inc.). This gives us the
state vector when palpating the tumor-embedded model, $(x_{TE}, \dot{x}_{TE}, f_{TE}, \mathbf{p}_T, \mathbf{p}_H)$.
As depicted in Figure 10.8, the force f TE (t) can be decomposed into fR (t) and fT (t).
Since $f = H_{NT}(x, \dot{x})$ represents the magnitude of f_R(t), the magnitude of f_T(t) can be
obtained by passing all data pairs $(x_{TE}, \dot{x}_{TE})$ to $H_{NT}(x, \dot{x})$ and by computing differences using

$f_T = f_{TE} - H_{NT}(x_{TE}, \dot{x}_{TE}).$ (10.17)
FIGURE 10.9 Variables for inclusion model identification. (Reprinted from Jeon, S. and
Harders, M., IEEE Trans. Haptics, 99, 1, 2014. With permission.)
f_T(t) can be expressed as a function of the distance between the inclusion and the tool
tip. Let the distance from p_H(t) to p_T(t) be l_HT(t), and the initial distance from p_Ts to
p_T0 be l_0. Then, the difference, l(t) = l_0 − l_HT(t), becomes a relative displacement toward
the inclusion. By using the data triples $(l, \dot{l}, f_T)$, a new response model with respect
to l(t) can be derived, which is denoted as $H_T(l, \dot{l})$. This represents the inclusion-only
force response at the single contact point p_Ts, poking into the direction of p_T.
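The first preprocessing step can be summarized in the short Python sketch below: subtract the no-inclusion response H_NT from the recorded inclusion-embedded forces, as in (10.17), and pair the residual with the relative displacement l(t). The stand-in model parameters and the synthetic poking trajectory are assumptions for illustration; in practice both come from measurements.

# Python sketch of the first preprocessing step: subtract the no-inclusion
# response H_NT from the recorded inclusion-embedded forces (Eq. 10.17) and
# pair the residual with the relative displacement l(t) = l_0 - l_HT(t).
import numpy as np

def h_nt(x, xdot, k=2500.0, b=15.0, m=1.4):
    """No-inclusion Hunt-Crossley model identified beforehand (example values)."""
    return (x**m) * (k + b * xdot)

def inclusion_only_data(x_te, xdot_te, f_te, p_h, p_t, p_ts, p_t0):
    """Return (l, f_T) pairs for fitting the inclusion-only model H_T."""
    f_t = f_te - h_nt(x_te, xdot_te)             # Eq. (10.17): residual force
    l0 = np.linalg.norm(p_ts - p_t0)             # initial tool-inclusion distance
    l_ht = np.linalg.norm(p_h - p_t, axis=1)     # current tool-inclusion distance
    return l0 - l_ht, f_t                        # relative displacement, force

# Synthetic poking trajectory straight toward the inclusion
n = 200
x_te = np.linspace(0.0, 0.01, n)                 # indentation depth (m)
xdot_te = np.gradient(x_te, 0.001)               # indentation velocity (m/s)
p_ts, p_t0 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -0.02])
p_h = np.column_stack([np.zeros(n), np.zeros(n), -x_te])      # tool-tip positions
p_t = np.tile(p_t0, (n, 1))                      # inclusion kept static here
f_te = h_nt(x_te, xdot_te) + 800.0 * np.maximum(x_te - 0.004, 0.0)   # toy forces
l, f_t = inclusion_only_data(x_te, xdot_te, f_te, p_h, p_t, p_ts, p_t0)
print(l[-3:], f_t[-3:])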
In the second step, the inclusion movement in response to external forces is characterized.
Nonlinear changes of d(t) with respect to an external force f_T(t) can be
approximated, again using the Hunt–Crossley model. After determining d(t) using a
position tracker and f_T(t) using our rendering algorithm described in the next subsection,
vector triples $(\mathbf{d}, \dot{\mathbf{d}}, \mathbf{f}_T)$ are employed to identify three Hunt–Crossley models
for the three Cartesian directions, denoted by $G_x(d_x, \dot{d}_x)$, $G_y(d_y, \dot{d}_y)$, and $G_z(d_z, \dot{d}_z)$.
$\mathbf{f}_{T,*}(t) = f_{T,*}(t)\,\frac{\mathbf{p}_{H,*}(t) - \mathbf{p}_T(t)}{\left\|\mathbf{p}_{H,*}(t) - \mathbf{p}_T(t)\right\|}.$ (10.18)
Equation 10.18 indicates that the unknown values, f T,* (t) and pT (t), should be approxi-
mated during the rendering.
f T,* (t) is derived based on HT. To this end, we first scale the current indentation
distance to match those during the recording:
$l_*(t) = \frac{l_0}{l_{0,*}}\left(l_{0,*} - l_{HT,*}(t)\right).$ (10.19)
FIGURE 10.10 Variables for inclusion augmentation rendering. (Reprinted from Jeon, S.
and Harders, M., IEEE Trans. Haptics, 99, 1, 2014. With permission.)
$d_i(t) = \left(\frac{\sum_{*=1}^{n} f_{T,*,i}(t)}{k + b\,\dot{d}_i(t)}\right)^{1/m}, \quad i = x, y, z,$ (10.20)
where
n is the number of contact points
m is the exponential parameter in the Hunt–Crossley model
Finally, fT,*(t) is determined using (10.18), which is directly sent to the haptic AR
interface.
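Putting (10.18) through (10.20) together, a per-frame rendering sketch in Python might look as follows. The placeholder model H_T, the Hunt–Crossley movement parameters, the omission of the velocity argument, and the sign convention of the displacement update are simplifying assumptions, not the exact implementation of Jeon and Harders (2014).

# Python sketch combining Eqs. (10.18) through (10.20) into one rendering
# frame.  The inclusion-only model h_t, the Hunt-Crossley movement parameters
# (k, b, m), and the sign-preserving displacement update are assumptions.
import numpy as np

def h_t(l, ldot=0.0, k=900.0, b=5.0, m=1.3):
    """Inclusion-only response model identified in preprocessing (example)."""
    lp = np.maximum(l, 0.0)                       # no force before reaching the tumor
    return (lp**m) * (k + b * ldot)

def render_inclusion_forces(p_h, p_t0, d_prev, ddot, l0, l0_star,
                            k=1e4, b=50.0, m=1.3):
    """p_h         : (n, 3) array of current tool-tip positions
       p_t0        : initial inclusion position; d_prev, ddot: displacement state
       l0, l0_star : recorded and per-contact initial tool-inclusion distances"""
    p_t = p_t0 + d_prev                           # currently displaced inclusion
    l_ht = np.linalg.norm(p_h - p_t, axis=1)
    l_star = (l0 / l0_star) * (l0_star - l_ht)    # Eq. (10.19): scaled indentation
    f_mag = h_t(l_star)                           # inclusion force magnitudes
    dirs = (p_h - p_t) / l_ht[:, None]            # Eq. (10.18): force directions
    f_t = f_mag[:, None] * dirs                   # force rendered at each contact
    f_sum = f_t.sum(axis=0)                       # summed contact forces
    # Eq. (10.20), applied per Cartesian component with the sign of f_sum:
    d_new = np.sign(f_sum) * (np.abs(f_sum) / (k + b * ddot)) ** (1.0 / m)
    return f_t, d_new

p_h = np.array([[0.0, 0.0, 0.012], [0.0, 0.03, 0.012]])       # two tool tips
f_t, d_new = render_inclusion_forces(p_h, p_t0=np.array([0.0, 0.015, 0.0]),
                                     d_prev=np.zeros(3), ddot=np.zeros(3),
                                     l0=0.02, l0_star=np.array([0.02, 0.021]))
print(f_t, d_new)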
In Jeon and Harders (2014), we compared the simulation results of our algo-
rithm with actual measurement data recorded from eight different real mock-ups via
various interaction methods. Overall, inclusion movements and the mutual effects
between contacts are captured and simulated with reasonable accuracy; the force
simulation errors were less than the force perception thresholds in most cases.
FIGURE 10.11 Variables for friction modulation. (Reprinted from Jeon, S. et al., Extensions
to haptic augmented reality: Modulating friction and weight, in Proceedings of the IEEE
World Haptics Conference (WHC), 2011, pp. 227–232. With permission.)
with a tool. As illustrated in Figure 10.11, this is done by adding a modulation fric-
tion force f_mod(t) to the real friction force:

$\mathbf{f}_{targ}(t) = \mathbf{f}_{real}(t) + \mathbf{f}_{mod}(t).$ (10.21)
Thus, the task reduces to: (1) simulation of the desired friction response ftarg(t) and (2)
measurement of the real friction force freal(t).
For the simulation of the desired friction force ftarg(t) during rendering, we iden-
tify the modified Dahl model describing the friction of a target surface. For the Dahl
model parameter identification, a user repeatedly strokes the target surface with the
probe tip attached to the PHANToM. The identification procedure is the same as
that given in Section 10.4.2. The model is then used to calculate ftarg(t) using the tool
tip position and velocity and the normal contact force during augmented rendering.
freal(t) can be easily derived from force sensor readings after a noise reduction
process. Given the real friction and the target friction, the appropriate modulation
force that needs to be rendered by the device is finally computed using (10.21). The
modulation force is sent to the haptic interface for force control.
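The friction modulation step can be sketched in Python as below. The Dahl-type target model, its parameter values, and the scaling of the friction state by the measured normal force are illustrative assumptions; the actual target model is identified as described in Section 10.4.2.

# Python sketch of the friction modulation step: simulate the target surface's
# friction with a Dahl-type model and subtract the measured real friction.
# The Dahl update, its parameters, and the normal-force scaling are assumptions.
import numpy as np

class DahlTargetFriction:
    def __init__(self, sigma=1200.0, mu_c=0.4, b=0.5):
        self.sigma, self.mu_c, self.b = sigma, mu_c, b
        self.z = 0.0                              # presliding friction state

    def step(self, dx, v, f_normal):
        """Advance the target-surface model by one frame and return f_targ."""
        self.z += self.sigma * (1.0 - self.z / self.mu_c * np.sign(v)) * dx
        # Coulomb-like level scaled by the measured normal force (assumption)
        return (self.z + self.b * v) * f_normal

def friction_modulation_force(f_real_tangential, dx, v, f_normal, target):
    """f_mod(t) = f_targ(t) - f_real(t): extra tangential force to render."""
    return target.step(dx, v, f_normal) - f_real_tangential

target = DahlTargetFriction()
print(friction_modulation_force(f_real_tangential=0.15, dx=1e-4, v=0.1,
                                f_normal=1.5, target=target))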
We tested the accuracy of our friction identification and modulation algorithms
with four distinctive surfaces (Jeon et al. 2011). The results showed that regardless of
the base surface, the friction was modulated to a target surface without perceptually
significant errors.
behaviors, and approaches that are based on more in-depth contact mechanics are
necessary for appropriate modeling and augmentation. This has been one direction
of our research, with an initial result that allows the user to model the shape of a soft
object using a haptic interface without the need for other devices (Yim and Choi 2012).
Our work has used a handheld tool for the exploration of real objects. This must
be extended to those which allow for the use of bare hands, or at least very similar
cases such as thin thimbles enclosing fingertips. Such an extension will enlarge the
application area of haptic AR to a great extent, for example, palpation training on
a real phantom that includes virtual organs and lumps. To this end, we have begun to
examine the feasibility of sensing not only contact force but also contact pressure in
a compact device and its utility for haptic AR (Kim et al. 2014).
Another important topic is multi-finger interaction. This functionality
requires very complicated haptic interfaces that provide multiple, independent forces
with a large number of degrees of freedom (see Barbagli et al. 2005), as well as very
sophisticated deformable body rendering algorithms that take into account the inter-
play between multiple contacts. Research effort on this topic is still ongoing even for
haptic VR.
Regarding material properties, we need methods to augment friction, texture, and
temperature. Friction is expected to be relatively easier in both modeling and render-
ing for haptic AR, as long as deformation is properly handled. Temperature modula-
tion is likely to be more challenging, especially due to the difficulty of integrating a
temperature display to the fingertip that touches real objects. This functionality can
greatly improve the realism of AR applications.
The last critical topic we wish to mention is texture. Texture is one of the most
salient material properties and determines the identifying tactual characteristics of
an object (Katz 1925). As such, a great amount of research has been devoted to
haptic perception and rendering of surface texture. Texture is also one of the most
complex issues because of the multiple perceptual dimensions involved in texture
perception; surface microgeometry and the material's elasticity, viscosity, and friction
all play an important role (Hollins et al. 1993, 2000). See Choi and Tan (2004a,b,
2005, 2007) for a review of texture perception relevant to haptic rendering, Campion
and Hayward (2007) for passive rendering of virtual textures, and Fritz and Barner
(1996), Guruswamy et al. (2011), Lang and Andrews (2011), and Romano and
Kuchenbecker (2011) for various models. All of these studies pertained to haptic VR
rendering. Among these, the work of Kuchenbecker and her colleagues appears the
most feasible for application to haptic AR; they have developed a high-quality texture
rendering system that overlays artificial vibrations on a touchscreen to deliver the
textures of real samples (Romano and Kuchenbecker 2011) and an open database of
textures (Culbertson et al. 2014). This research can be a cornerstone for the modeling
and augmentation of real textures.
10.8 CONCLUSIONS
This chapter overviewed the emerging AR paradigm for the sense of touch. We first
outlined the conceptual, functional, and technical aspects of this new paradigm with
three taxonomies and a thorough review of existing literature. Then, we moved to
recent attempts at realizing haptic AR, where hardware and algorithms for augmenting
the stiffness and friction of a real object were detailed. These frameworks
were applied to medical palpation training, where stiffer virtual inclusions are rendered
in a real tissue mock-up. Lastly, we elucidated several challenges and future
research topics in this area. We hope that the endeavors introduced in this
chapter will pave the way to more diverse and mature research in the exciting field
of haptic AR.
REFERENCES
Abbott, J., P. Marayong, and A. Okamura. 2007. Haptic virtual fixtures for robot-assisted
manipulation. In Robotics Research, eds. S. Thrun, R. Brooks, and H. Durrant-Whyte,
pp. 49–64. Springer-Verlag: Berlin, Germany.
Abbott, J. and A. Okamura. 2003. Virtual fixture architectures for telemanipulation.
Proceedings of the IEEE International Conference on Robotics and Automation,
pp. 2798–2805. Taipei, Taiwan.
Adcock, M., M. Hutchins, and C. Gunn. 2003. Augmented reality haptics: Using ARToolKit
for display of haptic applications. Proceedings of Augmented Reality Toolkit Workshop,
pp. 1–2. Tokyo, Japan.
Azuma, R., Y. Baillot, R. Behringer, S. Feiner, S. Julier, and B. MacIntyre. 2001. Recent
advances in augmented reality. IEEE Computer Graphics & Applications 21 (6):34–47.
Barbagli, F., D. Prattichizzo, and K. Salisbury. 2005. A multirate approach to haptic interac-
tion with deformable objects single and multipoint contacts. International Journal of
Robotics Research 24 (9):703–716.
Bayart, B., J. Y. Didier, and A. Kheddar. 2008. Force feedback virtual painting on real
objects: A paradigm of augmented reality haptics. Lecture Notes in Computer Science
(EuroHaptics 2008) 5024:776–785.
Bayart, B., A. Drif, A. Kheddar, and J.-Y. Didier. 2007. Visuo-haptic blending applied to a
tele-touch-diagnosis application. Lecture Notes on Computer Science (Virtual Reality,
HCII 2007) 4563: 617–626.
Bayart, B. and A. Kheddar. 2006. Haptic augmented reality taxonomy: Haptic enhancing and
enhanced haptics. Proceedings of EuroHaptics, 641–644. Paris, France.
Bennett, E. and B. Stevens. 2006. The effect that the visual and haptic problems associ-
ated with touching a projection augmented model have on object-presence. Presence:
Teleoperators and Virtual Environments 15 (4):419–437.
Bianchi, G., C. Jung, B. Knoerlein, G. Szekely, and M. Harders. 2006a. High-fidelity visuo-
haptic interaction with virtual objects in multi-modal AR systems. Proceedings of the
IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 187–
196. Santa Barbara, USA.
Bianchi, G., B. Knoerlein, G. Szekely, and M. Harders. 2006b. High precision augmented real-
ity haptics. Proceedings of EuroHaptics, pp. 169–168. Paris, France.
Billinghurst, M., H. Kato, and I. Poupyrev. 2001. The MagicBook—Moving seamlessly
between reality and virtuality. IEEE Computer Graphics & Applications 21 (3):6–8.
Borst, C. W. and R. A. Volz. 2005. Evaluation of a haptic mixed reality system for interac-
tions with a virtual control panel. Presence: Teleoperators and Virtual Environments
14 (6):677–696.
Bose, B., A. K. Kalra, S. Thukral, A. Sood, S. K. Guha, and S. Anand. 1992. Tremor compen-
sation for robotics assisted microsurgery. Engineering in Medicine and Biology Society,
1992, 14th Annual International Conference of the IEEE, October 29, 1992–November
1 1992, pp. 1067–1068. Paris, France.
Brewster, S. and L. M. Brown. 2004. Tactons: Structured tactile messages for non-visual
information display. Proceedings of the Australasian User Interface Conference,
pp. 15–23. Dunedin, New Zealand.
Brown, L. M. and T. Kaaresoja. 2006. Feel who’s talking: Using tactons for mobile phone
alerts. Proceedings of the Annual SIGCHI Conference on Human Factors in Computing Systems.
Haddadi, A. and K. Hashtrudi-Zaad. 2008. A new method for online parameter estimation
of hunt-crossley environment dynamic models. Proceedings of the IEEE International
Conference on Intelligent Robots and Systems, pp. 981–986. Nice, France.
Harders, M., G. Bianchi, B. Knoerlein, and G. Szekely. 2009. Calibration, registration, and
synchronization for high precision augmented reality haptics. IEEE Transactions on Visualization and Computer Graphics 15 (1):138–149.
Kurita, Y., A. Ikeda, T. Tamaki, T. Ogasawara, and K. Nagata. 2009. Haptic augmented real-
ity interface using the real force response of an object. Proceedings of the ACM Virtual
Reality Software and Technology, pp. 83–86. Kyoto, Japan.
Kyung, K.-U. and J.-Y. Lee. 2009. Ubi-Pen: A haptic interface with texture and vibrotactile
display. IEEE Computer Graphics and Applications 29 (1):24–32.
Lang, J. and S. Andrews. 2011. Measurement-based modeling of contact forces and textures
for haptic rendering. IEEE Transactions on Visualization and Computer Graphics 17
(3):380–391.
Lee, H., W. Kim, J. Han, and C. Han. 2012a. The technical trend of the exoskeleton robot
system for human power assistance. International Journal of Precision Engineering and
Manufacturing 13 (8):1491–1497.
Lee, I. and S. Choi. 2014. Vibrotactile guidance for drumming learning: Method and per-
ceptual assessment. Proceedings of the IEEE Haptics Symposium, pp. 147–152.
Houston, TX.
Lee, I., K. Hong, and S. Choi. 2012. Guidance methods for bimanual timing tasks. Proceedings
of IEEE Haptics Symposium, pp. 297–300. Vancouver, Canada.
Li, M., M. Ishii, and R. H. Taylor. 2007. Spatial motion constraints using virtual fixtures gener-
ated by anatomy. IEEE Transactions on Robotics 23 (1):4–19.
Luciano, C., P. Banerjee, L. Florea, and G. Dawe. 2005. Design of the ImmersiveTouch™:
A high-performance haptic augmented virtual reality system. Proceedings of
International Conference on Human-Computer Interaction. Las Vegas, NV.
Mahvash, M. and A. M. Okamura. 2006. Friction compensation for a force-feedback telero-
botic system. Proceedings of the IEEE International Conference on Robotics and
Automation, pp. 3268–3273. Orlando, FL.
Milgram, P. and H. Colquhoun, Jr. 1999. A taxonomy of real and virtual world display integra-
tion. In Mixed Reality—Merging Real and Virtual Worlds, ed. by Y. O. A. H. Tamura,
pp. 1–16. Springer-Verlag: Berlin, Germany.
Minamizawa, K., H. Kajimoto, N. Kawakami, and S. Tachi. 2007. Wearable haptic display to
present gravity sensation. Proceedings of the World Haptics Conference, pp. 133–138.
Tsukuba, Japan
Mitchell, B., J. Koo, M. Iordachita, P. Kazanzides, A. Kapoor, J. Handa, G. Hager, and
R. Taylor. 2007. Development and application of a new steady-hand manipulator for
retinal surgery. Proceedings of the IEEE International Conference on Robotics and
Automation, pp. 623–629. Rome, Italy.
Nojima, T., D. Sekiguchi, M. Inami, and S. Tachi. 2002. The SmartTool: A system for
augmented reality of haptics. Proceedings of the IEEE Virtual Reality Conference,
pp. 67–72. Orlando, FL.
Ochiai, Y., T. Hoshi, J. Rekimoto, and M. Takasaki. 2014. Diminished haptics: Towards digital
transformation of real world textures. Lecture Notes on Computer Science (Eurohaptics
2014, Part I) LNCS 8618: pp. 409–417.
Okamura, A. M., M. R. Cutkosky, and J. T. Dennerlein. 2001. Reality-based models for vibra-
tion feedback in virtual environments. IEEE/ASME Transactions on Mechatronics
6 (3):245–252.
Ott, R., D. Thalmann, and F. Vexo. 2007. Haptic feedback in mixed-reality environment. The
Visual Computer: International Journal of Computer Graphics 23 (9):843–849.
Pai, D. K., K. van den Doel, D. L. James, J. Lang, J. E. Lloyd, J. L. Richmond, and S. H. Yau. 2001.
Scanning physical interaction behavior of 3D objects. Proceedings of the Annual Conference
on ACM Computer Graphics and Interactive Techniques, pp. 87–96. Los Angeles, CA.
Park, G., S. Choi, K. Hwang, S. Kim, J. Sa, and M. Joung. 2011. Tactile effect design and
evaluation for virtual buttons on a mobile device Touchscreen. Proceedings of the
International Conference on Human-Computer Interaction with Mobile Devices and
Services (MobileHCI), pp. 11–20. Stockholm, Sweden.
Parkes, R., N. N. Forrest, and S. Baillie. 2009. A mixed reality simulator for feline abdominal
palpation training in veterinary medicine. Studies in Health Technology and Informatics
142:244–246.
Powell, D. and M. K. O’Malley. 2011. Efficacy of shared-control guidance paradigms for
robot-mediated training. Proceedings of the IEEE World Haptics Conference, pp. 427–