Virtual Humans
Today and Tomorrow
David Burden, Maggi Savin-Baden
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter
invented, including photocopying, microfilming, and recording, or in any information storage or retrieval
system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access
www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
René Descartes
The Philosophical Writings of Descartes: Volume 3,
The Correspondence
(Trans. J. Cottingham, R. Stoothoff, D. Murdoch, A. Kenny.
Cambridge, UK: Cambridge University Press, 1991)
To our partners, Deborah and John,
the least virtual humans we know.
Permissions
INTRODUCTION, XXIX
xi
xii ◾ Contents
Can It Learn? 11
Is It Imaginative? 12
Sentient or Non-Sentient? 12
DEMONSTRATING INTELLIGENCE? 12
A VIRTUAL HUMAN PROFILE 13
VIRTUAL HUMANOIDS AND VIRTUAL SAPIENS 14
TOWARDS A WORKING DEFINITION 16
EXAMPLES OF VIRTUAL HUMANS 17
Chatbots 17
Autonomous Agents 18
Conversational Agents 19
Pedagogical Agents 20
Virtual Mentors 20
ARTIFICIAL INTELLIGENCE, MACHINE LEARNING
AND VIRTUAL HUMANS 21
CONCLUSION 21
REFERENCES 22
Drama 36
Literature 37
Games 38
CONCLUSION 42
REFERENCES 43
Section II Technology
Chapter 4 ◾ Mind 71
UNDERSTANDING WHAT CONSTITUTES THE MIND 71
PERCEPTION 72
ATTENTION 73
APPRAISAL, EMOTION AND MOOD 75
PERSONALITY 77
MOTIVATION, GOALS AND PLANNING 79
DECISION-MAKING, PROBLEM-SOLVING
AND REASONING 81
World Models 82
MEMORY 84
LEARNING 85
IMAGINATION AND CREATIVITY 87
META-MANAGEMENT AND SELF-MONITORING 89
CONCLUSION 90
REFERENCES 91
Chapter 5 ◾ Communications 97
INTRODUCTION 97
COMMUNICATIONS: NON-LANGUAGE MODALITIES 97
COMMUNICATIONS: LANGUAGE-BASED MODALITIES 98
Speech Recognition 98
Speech Generation (Text to Speech) 101
NATURAL LANGUAGE UNDERSTANDING
AND COMMUNICATION 103
Machine Learning 105
Conversation Management 105
Uses and Future Developments 106
NATURAL LANGUAGE GENERATION 107
INTERNAL DIALOGUE 109
CONCLUSION 110
REFERENCES 110
GLOSSARY, 271
INDEX, 279
List of Figures
xxiii
Acknowledgements
xxv
Authors
xxvii
Introduction
xxix
xxx ◾ Introduction
The book is not particularly concerned with the use by physical humans
of avatars in, for example, virtual worlds (another use of the term virtual
human). However, such situations can put the virtual human and physical
human on an equal footing within a virtual environment.
The first section of the book, Part I: The Landscape, outlines understandings
of virtual humans and begins by providing much needed definitions
and a taxonomy of artificial intelligence. This section includes Chapter 1:
What Are Virtual Humans?, which presents an introductory analysis of
the traits which are important when considering virtual humans, argues
for a spectrum of virtual human types, and evaluates the common virtual
human forms against that spectrum. The second chapter, Chapter 2: Virtual
Humans and Artificial Intelligence, broadens this discussion by engaging
with the issue that there is a range of perspectives about what counts as
artificial intelligence (AI) and virtual humans. Thus, this chapter presents a
virtual humans/AI landscape and positions the different terms within it, as
well as identifying three main challenges within the landscape that need to
be overcome to move from today’s chatbots and AI to the sort of AI envisaged
in the popular imagination and science fiction literature.
The second section of the book, Part II: Technology, explores approaches
to developing and using relevant technologies, and ways of creating virtual
humans. It presents a comprehensive overview of this rapidly developing
field. This section begins with Chapter 3: Body, which examines the technologies
which enable the creation of a virtual body for the virtual human,
explores the extent to which current approaches and techniques allow a
human body to be modelled realistically as a digital avatar and analyses
how the capability might develop over the coming years. The chapter also
explores the senses, and examines how human (and other) senses could be
created for a virtual human. Chapter 4: Mind complements the study of
the ‘body’ and ‘senses’ in Chapter 3 by examining the different technologies
and approaches involved in the creation of the mind or ‘brain’ of a
virtual human. The chapter will consider current research in the areas
of perception, emotion, attention and appraisal, decision-making, person-
ality, memory, learning, and meta-cognition, and anticipates the future
directions that may develop over the coming decade.
The next chapter, Chapter 5: Communications, explores how the senses
and abilities created for a virtual human can support language and non-verbal
communications. Chapter 6: Architecture then reviews some of the
leading architectures for creating a virtual human. The range considered
will show the breadth of approaches, from those which are theoretical, to
virtualhumans.ai Website
Supporting material for this book, including links to virtual human images,
videos, applications and related work and papers, can be found on the
website at www.virtualhumans.ai.
I
The Landscape
INTRODUCTION
Part I sets the bounds of this book. Many similar terms are used for
artefacts, which are, to some greater or lesser extent, virtual or digital
versions of a physical human. These range from chatbots, conversational
agents and autonomous agents to virtual humans and artificial
intelligences. There is also a blurring between the digital and physical
versions of virtual humans, the latter being represented by robots and
androids. Somewhere between the two sit the digital entities which are
linked to specific physical platforms such as Siri and Alexa.
Chapter 1 will consider some of these different manifestations of
virtual humans and identify a set of traits which can be used to separate
virtual humans from other software systems, and to compare and contrast
different versions of virtual humans. This leads to a working definition of
a virtual human and also to the identification of lower-function virtual
humans – termed virtual humanoids, and higher-function virtual humans,
which are the true equivalent of physical humans and have been termed
virtual sapiens. The chapter closes with an examination of several key use
cases of virtual humans which will be considered throughout the book,
namely those of chatbots, conversational agents and pedagogic agents.
2 ◾ Virtual Humans
INTRODUCTION
Virtual Humans are human-like characters, which may be seen on a
computer screen, heard through a speaker, or accessed in some other way.
They exhibit human-like behaviours, such as speech, gesture and movement,
and might also show other human characteristics, such as emotions,
empathy, reasoning, planning, motivation and the development and use of
memory. However, a precise definition of what constitutes a virtual human,
or even ‘artificial intelligence’ (AI), is challenging. Likewise, the
distinctions between different types of virtual human, such as a chatbot,
conversational agent, autonomous agent or pedagogic agent, are unclear, as is
how virtual humans relate to robots and androids. This chapter presents an
introductory analysis of component parts of a virtual human and examines
the traits that are important when considering virtual humans. It examines
existing definitions of a virtual human before developing a practical working
definition, argues for a spectrum of virtual human types, and presents
some common examples of virtual humans.
[Figure: a notional virtual human architecture – the body/avatar model passes sensory data in from, and action instructions out to, the environment (and APIs), linking to internal components for appraisal, emotion and mood; motivation, goals and planning; imagination and creativity; natural language; reasoning; learning; memory; and the enaction of actions.]
• Is it physical or digital?
• Is it manifest in a visual, auditory or textual form?
• Is it embodied or disembodied?
• Is it humanoid or non-humanoid?
• Does it use natural language or command-driven communication?
• Is it autonomous or controlled?
• Is it emotional or unemotional?
• Does it have a personality?
• Can it reason?
• Can it learn?
• Is it imaginative?
• How self-aware is it?
Physical or Digital?
The first of these traits is whether the entity is defined by a physical or digital
presence. In much popular literature, for example, replicants in Blade Runner
or Kryten in Red Dwarf, physical androids are considered virtual humans.
Whilst possessing a presence in some form of humanoid or non-humanoid
physical robot body may well be useful at times to a virtual human, the
essence of the virtual human is in its digital form. This is particularly salient
when, represented as an avatar within a virtual world, it is able to present
itself as just as ‘human’ as any avatar controlled by a physical human.
It should be noted that many authors refer to software-based virtual
humans as ‘digital humans’ (for example, Jones et al., 2015 and Perry, 2014),
but the term ‘digital humans’ is also used in other areas, such as filmmaking
(Turchet et al., 2016) and ergonomic design (Keyvani et al., 2013) to refer
only to the creation of the ‘body’ of the virtual human, and not of any higher
functions. Hence, there is a preference in this work for the term ‘virtual human’.
Embodied or Disembodied?
Whilst closely linked to the issue of manifestation, there is also the
matter of whether the virtual human needs to be embodied, digitally or
physically. One belief in cognitive science is that ‘intelligence’ needs to be
embodied (Iida et al., 2004); it needs the sensation, agency and grounding
of having a ‘body’ within a rich and changing environment in order to
develop and experience ‘intelligence’. Whilst these issues will be discussed
in more detail in Chapter 7, there does appear to be a case that an entity
which never has some form of embodied manifestation may never be able
to become a ‘true’ virtual human—although it could still be a very smart
computer or artificial intellect, a so-called artilect. If being embodied is a
requirement of a virtual human, the implication would be that many of
the ‘virtual human’ computers of science fiction (Hal, Orac, J.A.R.V.I.S,
Holly) would be better considered as artilects, not virtual humans.
Humanoid or Non-Humanoid?
The next concern is whether the entity has a humanoid form. Myth and
popular literature contain numerous examples of humans able to take
on animal and other forms, such as lycanthropes and Native American
skin-walkers; so just because a virtual human can represent itself as something
other than human does not mean that it cannot be a virtual human.
If the entity’s core personality, default appearance, actions and thought
processes are those of a human, then it should be considered as a virtual
human. However, if the entity only ever represented itself in a particular
animal form in appearance, deed and thought, then it should be termed a
virtual animal.
Autonomous or Controlled?
Despite some of the current philosophical and neurological debates about
freewill (for example, Caruso, 2013), it is generally accepted that a human
has autonomy for most practical purposes, although such autonomy is
limited by laws and social, moral, and ethical frameworks. A true virtual
human should also therefore be expected to have a similar degree of autonomy
and be bound by similar frameworks. Whilst a virtual human may
not initially exhibit the same level of autonomy as a physical human, its
level of autonomy should still be extensive within the scope of its coding.
A further consideration is how much intrinsic motivation the virtual
human possesses. Autonomy often exists within a well-defined set of
tasks: a self-driving car, for instance, chooses a route and then makes a
more complex set of second-by-second decisions over speed and direction,
based on a rapidly evolving and complex environment.
Once the journey is completed, the car just waits for its next command
or possibly drives itself off home to its garage to recharge. A true virtual
human, though, should always be operating in a relatively autonomous
way. Once it finishes one task, it needs to decide on its next one. Such
decisions should be driven by a set of long-term goals, by motivation, as
they are in physical humans.
Emotional or Unemotional?
Demonstrating and responding to emotions is certainly seen in popular
culture as being evidence of humanity (for example, the Voight-Kampff test
in Blade Runner), and the lack of emotions is often taken as an indication of
a disturbed or even psychotic personality. Certainly, within the literature
(for example, Mell, 2015; Mykoniatis, 2014), the ability for a virtual human
to be able to show and respond to emotions is seen as an important feature.
One of the key questions is to what extent the virtual human is ‘faking
it’ – does it ‘feel’ emotion or empathy – or is it just exhibiting the features
and responses that we associate with those traits? Often, though, it is the
emotional response of the human party to the virtual human’s condition
that can be just as important (for example, Bouchard et al., 2013). So, if
an emotional reaction is elicited in a physical human to a virtual human
showing emotion, then do the mechanisms through which that emotion
was generated matter? After all, almost any emotion portrayed on stage
or in film is artificial as well.
Presence of a Personality?
In considering the possible traits of a virtual human, the word ‘personality’
is often used. Personality can be defined in a variety of ways, and personality
theories include: dispositional (trait) perspective, psychodynamic,
humanistic, biological, behaviourist, evolutionary and those based on
What Are Virtual Humans? ◾ 11
social learning. This suggests that a virtual human should also appear
to behave, feel and think in a unique and individual way, not identical
to any other virtual human (except, possibly, clones of itself) or even to
any other physical human. If the virtual human does not show a unique
personality, or indeed any personality, then it is possibly not worthy of the
term. However, there is again the danger of anthropomorphism: people
will quite readily attribute personalities to very obviously non-human
objects, from cars to printers, and it is important to be cautious about
whether any perceived personality is just being implied by the observer or
is actually present within the system.
Ability to Reason?
Reasoning here is used to refer to the ability to accept a set of inputs and
make a sensible decision based on them. Reasoning can include theories
about moral reasoning, as suggested by Kohlberg (1984), as well as models
of novice to expert reasoning used in professional education (Benner,
1984). Reasoning is also taken here as including problem-solving, which is
a more constrained version of the reasoning ability. At its lowest level within
a virtual human, the reasoning may be as simple as identifying that if a
website customer has enquired about lighting, then they should be shown
all the desk, table and floor lamps in stock. In a more developed virtual
human, it would be expected that the reasoning capability is beginning
to match that of a human – in other words, given the same inputs, it will
make a similar decision to a human, even though the number of factors, or
their relation to the output, might be more complex, or there may be high
degrees of uncertainty involved, so-called fuzzy or even wicked problems.
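The low-level, rule-driven end of this spectrum can be sketched in a few lines of code. The rule table, product names and `recommend` function below are purely illustrative assumptions, not drawn from the book:

```python
# A minimal sketch of low-level rule-based reasoning: an enquiry about
# lighting leads to the lamp products in stock being shown.
# All rules and product names here are illustrative only.

def recommend(enquiry_topic, catalogue):
    """Return the stock items a simple rule associates with the enquiry."""
    rules = {
        "lighting": {"desk lamp", "table lamp", "floor lamp"},
        "seating": {"chair", "sofa"},
    }
    wanted = rules.get(enquiry_topic, set())
    return [item for item in catalogue if item in wanted]

catalogue = ["desk lamp", "sofa", "floor lamp", "bookcase"]
print(recommend("lighting", catalogue))  # -> ['desk lamp', 'floor lamp']
```

A more developed virtual human would replace the fixed rule table with reasoning that weighs many, possibly uncertain, factors before deciding.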
Can It Learn?
One common definition of an intelligent system is that of having the
capacity to learn how to deal with new situations (Sternberg et al., 1981).
Intelligence is not so much about having the facts and knowledge to
answer questions (a more popular view of what intelligence is), but rather
an adaptive ability to cope with new situations, often by applying patterns
and knowledge (reason) previously acquired. As such, the ability to learn
(in a whole variety of different ways and applied to a whole variety of
different situations) must be an important trait for a virtual human.
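Learning in this adaptive sense, rather than as a store of facts, can be sketched minimally: the agent below starts with no answers and simply adjusts its behaviour from feedback. The greeting scenario and all class and method names are illustrative assumptions, not from the book:

```python
# A minimal sketch of adaptive learning: behaviour is not fixed in advance
# but shifts in response to feedback from new situations.

class GreetingLearner:
    def __init__(self):
        self.scores = {}  # greeting -> cumulative feedback received

    def choose(self, options):
        # Prefer the greeting with the best feedback so far.
        return max(options, key=lambda g: self.scores.get(g, 0))

    def feedback(self, greeting, reward):
        self.scores[greeting] = self.scores.get(greeting, 0) + reward

agent = GreetingLearner()
options = ["Hello", "Hey there"]
agent.feedback("Hey there", +1)  # a user responded well
agent.feedback("Hello", -1)      # a user responded badly
print(agent.choose(options))     # -> Hey there
```

However trivial, the pattern is the important part: the same agent placed in a new situation changes what it does, which is the trait the definition above emphasises.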
One of the ultimate goals of AI research is so-called Artificial General
Intelligence (discussed in more detail in Chapter 13), a computer system
that exhibits a very generalized and human form of such learning and
Is It Imaginative?
As will be discussed in Chapter 4, there are a lot of computer programs
which demonstrate creativity, using parametrics, neural networks, genetic
algorithms or other approaches to create pieces of music, paintings, poems
or other works of art. There is, however, a difference between creativity
and imagination. The imagination trait is more about an internal ability
to visualise something, something which may not exist or at least has
not been sensed, and perhaps to take an existing trope and change its
parameters to create a whole new experience. The ‘creative’ element is then
more about taking this piece of imagination and using craft, skills and
‘creativity’ to make it manifest and bring it into the social domain. So, the
important trait is probably that of imagination, with creativity coming
from combining imagination with other traits, such as reasoning (what
colour where) and learning (how did I do this last time).
Sentient or Non-Sentient?
In common discourse, sentience can be viewed as equivalent to
‘thinking’: Does the machine have a cognitive function? Is it sentient?
There is also some potential overlap with free will and autonomy. A further
definition of sentience aligns it with consciousness, but defining that is
similarly fraught with problems. Indeed, the question of what consciousness
means in terms of the way that we have subjective, phenomenal experiences
is often described as the ‘hard problem’. Such consciousness implies some
form of self-awareness and internal narrative, for example Nagel’s ‘What
Is It Like to Be a Bat?’ (Nagel, 1974). Achieving sentience using current
technologies is beyond the present capabilities for a virtual human.
However, an aspiration to develop some form of internal self-awareness and
internal narrative and dialogue would seem to be desirable, and whether
that results in, or can enable, some form of true sentience is probably a key
philosophical and research question for our times.
DEMONSTRATING INTELLIGENCE?
It should be noted that in the 10 traits above, ‘intelligence’ has been
deliberately omitted. There is no real agreement as to what an ‘intelligent’ system
is, just as there is no agreement about what counts as human intelligence.
[Figure: a decagon profile with axes Self-Aware, Embodied, Imaginative, Humanoid, Learning, Natural-Language, Reasoning, Autonomous, Personality and Emotional; each axis runs from the trait’s absence (e.g. command-driven rather than natural-language, unlearning rather than learning) at the centre to its full expression at the edge.]
FIGURE 1.4 Profiles of virtual humanoids (dashed line), virtual sapiens (solid
line – effectively the complete edge of the decagon) and virtual humans (the
shaded space in between).
At the upper (most developed) end is the ‘virtual sapien’, a digital entity
which:
The two new definitions are shown as profiles in Figure 1.4, with virtual
human as the more overarching term.
It should be noted that the lines between a virtual humanoid, virtual
humans (the all-embracing term), and virtual sapiens are significantly
blurred, and on many measures, it is a matter of degree rather than of
absolutes.
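The ten-trait profile can also be sketched as a simple data structure, with each axis scored between 0 and 1. The threshold values in `classify` below are illustrative assumptions only, chosen to reflect the point that the boundaries are a matter of degree rather than absolutes:

```python
# A sketch of the ten-trait profile as trait scores in [0, 1].
# The 0.3 and 0.9 thresholds are illustrative, not from the book.

TRAITS = ["self-aware", "embodied", "imaginative", "humanoid", "learning",
          "natural-language", "reasoning", "autonomous", "personality",
          "emotional"]

def classify(profile):
    mean = sum(profile[t] for t in TRAITS) / len(TRAITS)
    if mean < 0.3:
        return "virtual humanoid"
    if mean > 0.9:
        return "virtual sapien"
    return "virtual human"

chatbot = dict.fromkeys(TRAITS, 0.1)
chatbot["natural-language"] = 0.6
print(classify(chatbot))  # -> virtual humanoid
```

On this sketch, a text-only chatbot with modest natural-language ability profiles as a virtual humanoid, while only an entity scoring near the full edge of every axis would count as a virtual sapien.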
Note: The term digital entity has been used above, but perhaps a stricter and
more general definition would be an informational entity, as it then avoids the
limitation on form that ‘digital’ (and also program) could imply—for example,
ruling out some biological possibilities and even intelligent windows, as in
Permutation City (Egan, 1994). The term ‘infomorph’ (Muzyka, 2013) has
been used for such an informational entity. The term ‘artilect’, introduced
earlier, would then represent a relatively well-developed infomorph.
Chatbots
Chatbot is a generic term for a piece of software that mimics
human conversation. It emphasises the conversational capability
but says nothing about any other elements of the virtual human. A
system like Siri or Alexa that does not really engage in conversation
is probably not even a chatbot, rather being a question-answering or
[Figure: the same decagon axes as Figure 1.4 – Self-Aware, Embodied, Imaginative, Humanoid, Learning, Natural-Language, Reasoning, Autonomous, Personality and Emotional.]
FIGURE 1.5 Mapping current virtual human types: chatbots (dashed), conversational
agents (dotted), and pedagogical agents (solid).
Autonomous Agents
Autonomous agent is a very broad term that has been used for well-
developed chatbots, with elements of the virtual human (Bogdanovych,
2005), for massively replicated software entities running in a crowd
Conversational Agents
Conversational agents (Cassell, 2000) are virtual humans whose task is
to provide a conversational interface to a particular application, rather
than through command line instructions or the clicking of icons or menu
items, for tasks ranging from making travel bookings and buying furniture
to interrogating sales and marketing data.
These ‘agents’ can be represented textually, orally or in conjunction
with an avatar (or all three), and, like chatbots, may also exhibit some
of the behaviours and characteristics of humans, such as speech,
locomotion, gestures and movements of the head, eye or other parts of
the body (Dehn and Van Mulken, 2000). However, their role goes beyond
simply maintaining a conversation with no particular goal. The level of
sophistication of these types of agent can thus determine their utility
within differing contexts.
Across the services, manufacturing and raw materials sectors of
industry, conversational agents have been used to increase the usability of
devices and as methods to assist the retrieval of information. For example,
numerous websites have utilised agents as virtual online assistants to
improve access to information, such as ‘Anna’ on the Ikea website or the
shopping assistants from H&M or Sephora accessed through the KiK
service (KiK Interactive Inc., 2016).
Similarly, smartphone and tablet interfaces now include options to
use personal assistant applications such as ‘Siri’, ‘Cortana’ or ‘Google
Now’ to assist users in searching for information, starting applications
and performing routine tasks, such as sending messages. Other software,
such as Amazon’s ‘Alexa’, incorporated into their ‘Echo’ device, allow
the control of various household smart devices such as fridges and
heating systems. These agents, while being helpful, do not necessarily
have a high level of contextual awareness, and so, whilst being effective
at assisting with simple tasks, such as information retrieval or acting on
simple commands (for example, Alexa), agents are not always positioned
to provide guidance.
Conversational agents are likely to possess a defined goal and set
of capabilities. Granting them additional open-ended conversational
Mind
Ackerman, M. , Goel, A. , Johnson, C. G. , Jordanous, A. , León, C. , y Pérez, R. P. , & Ventura,
D. (2017). Teaching computational creativity. In Proceedings of the 8th International
Conference on Computational Creativity (pp. 9–16). Atlanta.
Adam, C. , & Lorini, E . (2014). A BDI emotional reasoning engine for an artificial companion.
In: Corchado J. M. et al. (Eds.), Highlights of Practical Applications of Heterogeneous Multi-
Agent Systems. The PAAMS Collection. PAAMS 2014. Communications in Computer and
Information Science (vol. 430). Cham, Switzerland: Springer.
Agüero, C. E. , Martín, F. , Rubio, L. , & Cañas, J. M. (2012). Comparison of smart visual
attention mechanisms for humanoid robots. International Journal of Advanced Robotic
Systems, 9(6), 233.
Aguilar, R. A. , de Antonio, A. , & Imbert, R . (2007). Emotional Agents with team roles to
support human group training. In Pelachaud, C. , Martin, J. C. , André, E. , Chollet, G. ,
Karpouzis, K. , & Pelé, D. (Eds.), Intelligent Virtual Agents. IVA 2007. Lecture Notes in
Computer Science, (pp. 352–353). vol 4722. Berlin, Germany: Springer.
Andriamasinoro, F. (2004). Modeling natural motivations into hybrid artificial agents. In Fourth
International ICSC Symposium on Engineering of Intelligent Systems (EIS 2004).
Anthony, B. , Majid, M. A. , & Romli, A . (2017). Application of intelligent agents and case based
reasoning techniques for green software development. Technics Technologies Education
Management, 12(1), 30.
Baddeley, A. (2012). Working memory: Theories, models, and controversies. Annual Review of
Sychology, 63, 1–29.
Becker-Asano, C. (2014). WASABI for affect simulation in human-computer interaction. In
Proceedings of the International Workshop on Emotion Representations and Modelling for HCI
Systems (pp. 1–10). Istanbul: ACM.
Belbin, R. M. (2012). Team Roles at Work. London, UK: Routledge.
Berners-Lee, T. , Hendler, J. , & Lassila, O . (2001). The semantic web. Scientific American,
284(5), 28–37. Available online http://web.cs.miami.edu/home/saminda/csc688/tblSW.pdf.
Blair, M. (2016). Crossroads of boulez and cage: Automatism in music. Available online
https://mdsoar.org/bitstream/handle/11603/3751/Verge13_BlairMark.pdf?sequence=1.
Boden, M. A. (1998). Creativity and artificial intelligence. Artificial Intelligence, 103(1–2),
347–356.
Bogdanovych, A. , Trescak, T. , & Simoff, S . (2015). Formalising believability and building
believable virtual agents. In Chalup, S. K. , Blair, A. D. , & Randall, M. (Eds.), Artificial Life and
Computational Intelligence. ACALCI 2015. Lecture Notes in Computer Science (vol. 8955, pp.
142–156). Cham, Switzerland: Springer.
Borji, A. , Sihite, D. N. , & Itti, L . (2012). Modeling the influence of action on spatial attention in
visual interactive environments. Robotics and Automation (ICRA), 2012 IEEE International
Conference on (pp. 444–450). IEEE.
Breazeal, C. (2004). Designing Sociable Robots. Cambridge, MA: MIT Press.
Bredeche, N. , Montanier, J. M. , Liu, W. , & Winfield, A. F. (2012). Environment-driven
distributed evolutionary adaptation in a population of autonomous robotic agents. Mathematical
and Computer Modelling of Dynamical Systems, 18(1), 101–129.
Bringsjord, S. , & Ferrucci, D . (1999). Artificial Intelligence and Literary Creativity: Inside the
Mind of Brutus, A Storytelling Machine. London, UK: Psychology Press.
Cattell, R. B. , Eber, H. W. , & Tatsuoka, M. M. (1970). Handbook for the 16PF. Champaign, IL:
IPAT.
Cid, F. , Moreno, J. , Bustos, P. , & Núñez, P. (2014). Muecas: A multi-sensor robotic head for
affective human robot interaction and imitation. Sensors, 14(5), 7711–7737.
Cohen, P. R. , & Feigenbaum, E. A. (Eds.), (2014). The Handbook of Artificial Intelligence (Vol.
3). Oxford, UK: Butterworth-Heinemann.
Colton, S. & Wiggins, G. A. (2012). Computational creativity: The final frontier? In de Raedt, L. ,
Bessiere, C. , Dubois, D. , & Doherty, P. (Eds)., Proceeding ECAI Frontiers (pp. 21–16).
Amsterdam, the Netherlands: IOS Press.
Davis, E. , & Morgenstern, L . (2004). Introduction: Progress in formal common sense
reasoning. Artificial Intelligence, 153(1), 1–12.
Doce, T. , Dias, J. , Prada, R. , & Paiva, A . (2010). Creating individual agents through
personality traits. 10th International Conference on Intelligent Virtual Agents (pp. 257–264).
Berlin, Germany: Springer.
Ekman, P. (1989). The argument and evidence about universals in facial expressions of
emotion. In H. Wagner & A. Manstead (Eds.), Handbook of Social Psychophysiology (pp.
143–164). Chichester, UK: Wiley.
Frith, C. , & Frith, U . (2005). Theory of mind. Current Biology, 15(17), R644–R645.
Gaut, B. , & Livingstone, P. (Eds). (2003) The Creation of Art: New Essays in Philosophical
Aesthetics. Cambridge, MA: Cambridge University Press.
Gervás, P. (2018). Computer-driven creativity stands at the forefront of artificial intelligence and
its potential impact on literary composition. AC/E Digital Culture Annual Report.: Digital Trends
in Culture. Focus: Readers in the Digital Age, 88.
Goldman, A. I. (2012). Theory of mind. In E. Margolis. , R. Samuels. , & S. P. Stich (Eds.), The
Oxford Handbook of Philosophy of Cognitive Science. New York: Oxford University Press.
Hopgood, A.A. (2011). Intelligent Systems for Engineers and Scientists. Boca Raton, FL: CRC
press.
Izard, C. E. (2013). Human Emotions. New York: Springer Science & Business Media.
Jones, N. A. , Ross, H. , Lynam, T. , Perez, P. , & Leitch, A . (2011). Mental models: An
interdisciplinary synthesis of theory and methods. Ecology and Society, 16(1). Available online
https://www.ecologyandsociety.org/vol16/iss1/art46/.
Johnson-Laird, P. J. (2005). Mental models and thought. In K. Holyoak & B. Morrison (Eds.),
The Cambridge Handbook of Thinking and Reasoning (pp. 185–208). Cambridge, MA:
Cambridge University Press.
Kanov, M. (2017). “Sorry, what was your Name Again?”: How to Use a Social Robot to Simulate
Alzheimer’s Disease and Exploring the Effects on its Interlocutors. Stockholm, Sweden: KTH
Royal Institute of Technology.
Kefalas, P. , Sakellariou, I. , Basakos, D. , & Stamatopoulou, I . (2014). A formal approach to
model emotional agents behavior in disaster management situations. In Likas A. , Blekas K. ,
Kalles D. (Eds.), Artificial Intelligence: Methods and Applications, SETN 2014, Lecture Notes in
Computer Science (vol. 8445, pp. 237–250). Cham, Switzerland: Springer.
Kelley, T. D. (2014). Robotic dreams: A computational justification for the post-hoc processing
of episodic memories. International Journal of Machine Consciousness, 6(2), 109–123.
Khalili, A. , Auer, S. , & Ngonga, Ngomo AC. (2014). Context–Lightweight text analytics using
linked data. In: V. Presutti , C. d’Amato , F. Gandon , M. d’Aquin , S. Staab , A. Tordai (Eds.),
The Semantic Web: Trends and Challenges, ESWC 2014, Lecture Notes in Computer Science
(vol. 8465, pp. 628–643). Cham, Switzerland: Springer.
Kiesel, D. (2005). A brief introduction to neural networks (ZETA2-EN). Available online
http://www.dkriesel.com/_media/science/neuronalenetze-en-zeta2-2col-dkrieselcom.pdf.
Lamb, C. , Brown, D. G. , & Clarke, C. L. (2018). Evaluating computational creativity: An
interdisciplinary tutorial. ACM Computing Surveys (CSUR), 51(2), 28.
Lilly, J. (2010). Programming the Human Biocomputer. Berkeley, CA: Ronin Publishing.
Lim, M. Y. , Dias, J. , Aylett, R. , & Paiva, A . (2012). Creating adaptive affective autonomous
NPCs. Autonomous Agents and Multi-Agent Systems, 24(2), 287–311.
Lorenz, K. , & Leyhausen, P . (1973). Motivation of Human and Animal Behavior: An Ethological
View. New York: Van Nostrand Reinhold Company.
Loughran, R. , & O’Neill, M. (2017). Application domains considered in computational creativity.
In A. Goel , A. Jordanous , & A. Pease (Eds.), Proceedings of the Eighth International
Conference on Computational Creativity ICCC 2017. Atlanta, Georgia, June 19–23.
Mahadevan, S. (2018). Imagination machines: A new challenge for artificial intelligence.
Available online https://people.cs.umass.edu/~mahadeva/papers/aaai2018-imagination.pdf.
Maslow, A. (1954). Toward a Psychology of Being, 3rd ed. London, UK: Wiley & Sons.
Mataric, M. J. (1994). Reward functions for accelerated learning. Machine Learning
Proceedings, 181–189.
Mautner, J. , Makmal, A. , Manzano, D. , Tiersch, M. , & Briegel, H. J. (2014). Projective
simulation for classical learning agents: A comprehensive investigation. New Generation
Computing, 33(1), 69–114.
McCormick, J. , Vincs, K. , Nahavandi, S. , Creighton, D. , & Hutchison, S . (2014). Teaching a
digital performing agent: Artificial neural network and hidden Markov model for recognising and
performing dance movement. In Proceedings of the 2014 International Workshop on Movement
and Computing (p. 70). New York: ACM.
McRorie, M. , Sneddon, I. , McKeown, G. , Bevacqua, E. , De Sevin, E. , & Pelachaud, C .
(2012). Evaluation of four designed virtual agent personalities. Affective Computing, IEEE
Transactions on, 3(3), 311–322.
Minton, S. , Carbonell, J. G. , Knoblock, C. A. , Kuokka, D. R. , Etzioni, O. , & Gil, Y . (1989).
Explanation-based learning: A problem solving perspective. Artificial Intelligence, 40(1–3),
63–118.
Moore, G. T. , & Golledge, R. G. (1976). Environmental knowing: Concepts and theories. In G.
T. Moore & R. G. Golledge (Eds.), Environmental Knowing: Theories, Research and Methods
(pp. 3–24). Stroudsburg, PA: Dowden Hutchinson and Ross Inc.
Ortony, A. , Clore, G. L. , & Collins, A . (1990). The Cognitive Structure of Emotions.
Cambridge, UK: Cambridge University Press.
Parasuraman, R. , Sheridan, T. B. , & Wickens, C. D. (2000). A model for types and levels of
human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics-Part
A: Systems and Humans, 30(3), 286–297.
Pease, A. , Chaudhri, V. , Lehmann, F. , & Farquhar, A . (2000). Practical knowledge
representation and the DARPA high performance knowledge bases project. In KR –2000:
Proceedings of the Conference on Knowledge Representation and Reasoning (pp. 717–724).
Breckenridge, CO.
Pérez-Pinillos D. , Fernández S. , Borrajo D. (2013). Modeling motivations, personality traits
and emotional states in deliberative agents based on automated planning. In J. Filipe , A. Fred
(Eds.), Agents and Artificial Intelligence, ICAART 2011, Communications in Computer and
Information Science (vol. 271, pp. 146–160). Berlin, Germany: Springer.
Perner, J. (1991). Understanding the Representational Mind. Cambridge, MA: The MIT Press.
Pomerol, J. C. (1997). Artificial intelligence and human decision making. European Journal of
Operational Research, 99(1), 3–25.
Prinz, J. J. (2006). Is emotion a form of perception? Canadian Journal of Philosophy, 36(sup1),
137–160.
Puică, M. A. , & Florea, A. M. (2013). Emotional belief-desire-intention agent model: Previous
work and proposed architecture. International Journal of Advanced Research in Artificial
Intelligence, 2(2), 1–8.
Rahwan, T. , Rahwan, T. , Rahwan, I. , & Ashri, R . (2004). Agent-based support for mobile
users using agentSpeak (L). In P. Giorgini , B. Henderson-Sellers & M. Winikoff . (Eds.), Agent-
Oriented Information Systems (AOIS 2003): Revised Selected Papers (pp. 45–60). Berlin,
Germany: Springer.
Springer.
Rao, A. S. , & Georgeff, M. P. (1991). Modeling rational agents within a BDI-architecture. In: R.
Fikes and E. Sandewall (Eds.), Proceedings of Knowledge Representation and Reasoning
(KR&R-91) (pp. 473–484). Burlington, MA: Morgan Kaufmann.
Rozo, L. , Jiménez, P. , & Torras, C . (2013). A robot learning from demonstration framework to
perform force-based manipulation tasks. Intelligent Service Robotics, 6(1), 33–51.
Salgado, R. , Bellas, F. , Caamano, P. , Santos-Diez, B. , & Duro, R. J. (2012). A procedural
long term memory for cognitive robotics. In Evolving and Adaptive Intelligent Systems (EAIS),
2012 IEEE Conference on (pp. 57–62). IEEE.
Salichs, M. A. , & Malfaz, M . (2012). A new approach to modeling emotions and their use on a
decision-making system for artificial agents. Affective Computing, IEEE Transactions on, 3(1),
56–68.
Sansonnet, J. P. , & Bouchet, F. (2013) A framework covering the influence of FFM/NEO PI-R
traits over the dialogical process of rational agents. In J. Filipe & A. Fred (Eds.) Agents and
Artificial Intelligence, ICAART 2013, Communications in Computer and Information Science
(vol. 449, pp. 62–79). Berlin, Germany: Springer.
Schank, R. C. (1990). Tell Me A Story: A New Look at Real and Artificial Memory. New York:
Charles Scribner’s Sons.
Scherer, K. R. , Bänziger, T. , & Roesch, E . (2010). A Blueprint for Affective Computing: A
Sourcebook and Manual. Oxford, UK: Oxford University Press.
Scirea, M. , Togelius, J. , Eklund, P. , & Risi, S . (2017). Affective evolutionary music
composition with MetaCompose. Genetic Programming and Evolvable Machines, 18(4),
433–465.
Si, M. (2015). Should I stop thinking about it: A computational exploration of reappraisal based
emotion regulation. Advances in Human-Computer Interaction, 5.
Simons, D. J. , & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness
for dynamic events. Perception, 28(9), 1059–1074.
Singh, P. (2002). The Open Mind Common Sense project. Available online
http://www.kurzweilai.net/.
Slater, S. , & Burden, D . (2009). Emotionally responsive robotic avatars as characters in virtual
worlds. In Games and Virtual Worlds for Serious Applications, 2009 (VS-GAMES’09)
Conference (pp. 12–19). IEEE.
Stachowicz, D. , & Kruijff, G. J. M. (2012). Episodic-like memory for cognitive robots.
Autonomous Mental Development, IEEE Transactions on, 4(1), 1–16.
Stokes, D. (2014). The role of imagination in creativity. In E. S. Paul & S. B. Kaufman (Eds.),
The Philosophy of Creativity: New Essays (pp. 157–184). Oxford, UK: Oxford Scholarship Online.
Stokes, D. (2016). Imagination and creativity. In A. Kind (Ed.), The Routledge Handbook of the
Philosophy of Imagination (pp. 247–261). London, UK: Routledge.
Sutton, R. S. , & Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA:
MIT Press.
Svensson, H. , & Thill, S. (2013) Should robots dream of electric sheep? Adaptive Behavior,
21(4), 222–238.
Vernon, D. , Beetz, M. , & Sandini, G . (2015). Prospection in cognition: The case for joint
episodic-procedural memory in cognitive robotics. Frontiers in Robotics and AI, 2, 19.
W3C. (2014). RDF 1.1 concepts and abstract syntax. W3C recommendation. Available online
http://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/.
Williams, D. , Kirke, A. , Miranda, E. R. , Roesch, E. , Daly, I. , & Nasuto, S . (2015).
Investigating affect in algorithmic composition systems. Psychology of Music, 43(6), 831–854.
Wilson, I. (2000). The artificial emotion engine, driving emotional behavior. In AAAI Spring
Symposium on Artificial Intelligence and Interactive Entertainment (pp. 20–22). Palo Alto, CA.
Available online http://www.aaai.org/Papers/Symposia/Spring/2000/SS-00-02/SS00-02-015.pdf.
Communications
Abdul-Kader, S. A. , & Woods, J . (2015). Survey on chatbot design techniques in speech
conversation systems. International Journal of Advanced Computer Science and Applications,
6(7), 72–80.
Adam, C. , & Cavedon, L . (2013). A companion robot that can tell stories. In R. Aylett , B.
Krenn , C. Pelachaud & H. Shimodaira (Eds.), Proceedings of Intelligent Virtual Agents. 13th
International Conference, IVA. Heidelberg, Germany: Springer.
Besacier, L. , Barnard, E. , Karpov, A. , & Schultz, T . (2014). Automatic speech recognition for
under-resourced languages: A survey. Speech Communication, 56, 85–100.
Bibauw, S. , François, T. , & Desmet, P. (2015). Conversational Agents for Language Learning:
State of the Art and Avenues for Research on Task-based Agents. CALICO edition. Boulder,
CO. Available online
https://lirias.kuleuven.be/bitstream/123456789/499442/1/2015.05.28+Dialogue+systems+for
+language+learning.pdf.
Black, A. W. , Bunnell, H. T. , Dou, Y. , Muthukumar, P. K. , Metze, F. , Perry, D. , Polzehl, T. ,
Prahallad, K. , Steidl, S. , & Vaughn, C . (2012). Articulatory features for expressive speech
synthesis. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International
Conference on (pp. 4005–4008). IEEE.
Bradeško, L. , & Mladenić, D. (2012). A survey of chatbot systems through a Loebner Prize
competition. In Proceedings of Slovenian Language Technologies Society Eighth Conference of
Language Technologies (pp. 34–37).
Burden, D. J. H. , Savin-Baden, M. , & Bhakta, R . (2016). Covert implementations of the Turing
test: A more level playing field? In International Conference on Innovative Techniques and
Applications of Artificial Intelligence (pp. 195–207). Cham, Switzerland: Springer.
Cassell, J. (2001). Embodied conversational agents: Representation and intelligence in user
interfaces. AI Magazine, 22(4), 67.
Clancey, W. J. (1997). Situated Cognition: On Human Knowledge and Computer
Representations. Cambridge, UK: Cambridge University Press.
Cockayne, D. , Leszczynski, A. , & Zook, M . (2017). #HotForBots: Sex, the non-human and
digitally mediated spaces of intimate encounter. Environment and Planning D: Society and
Space, 35(6), 1115–1133.
Cutajar, M. , Gatt, E. , Grech, I. , Casha, O. , & Micallef, J . (2013). Comparative study of
automatic speech recognition techniques. Signal Processing, IET, 7(1), 25–46.
De Mantaras, R. L. , & Arcos, J. L. (2002). AI and music: From composition to expressive
performance. AI Magazine, 23(3), 43.
Deemter, K. V. , Theune, M. , & Krahmer, E . (2005). Real versus template-based natural
language generation: A false opposition? Computational Linguistics, 31(1), 15–24.
Ekenel, H. K. , Stallkamp, J. , Gao, H. , Fischer, M. , & Stiefelhagen, R . (2007). Face
recognition for smart interactions. In Multimedia and Expo, 2007 IEEE International Conference
on (pp. 1007–1010). IEEE.
Gazzaniga, M. S. (2000). Cerebral specialization and interhemispheric communication: Does
the corpus callosum enable the human condition? Brain, 123(7), 1293–1326.
Gee, S. (2018). Facebook closes M, its virtual assistant. i-programmer.info. Available online
http://www.i-programmer.info/news/105-artificial-intelligence/11448-facebook-closes-virtual-
assistant.html.
Gilbert, R. L. , & Forney, A . (2015). Can avatars pass the Turing test? Intelligent agent
perception in a 3D virtual environment. International Journal of Human-Computer Studies, 73,
30–36.
Hinton, G. , Deng, L. , Yu, D. , Dahl, G. E. , Mohamed, A. R. , Jaitly, N. , & Kingsbury, B. (2012).
Deep neural networks for acoustic modeling in speech recognition: The shared views of four
research groups. Signal Processing Magazine, IEEE, 29(6), 82–97.
Jaffe, E. , White, M. , Schuler, W. , Fosler-Lussier, E. , Rosenfeld, A. , & Danforth, D . (2015).
Interpreting questions with a log-linear ranking model in a virtual patient dialogue system. The
Twelfth Workshop on Innovative Use of NLP for Building Educational Applications (pp. 86–96).
Stroudsburg, PA: The Association for Computational Linguistics.
Johnson, M. , Lapkin, S. , Long, V. , Sanchez, P. , Suominen, H. , Basilakis, J. , & Dawson, L .
(2014). A systematic review of speech recognition technology in health care. BMC Medical
Informatics and Decision Making, 14(1), 94.
Kelly, S. D. (2001). Broadening the units of analysis in communication: Speech and nonverbal
behaviours in pragmatic comprehension. Journal of Child Language, 28(2), 325–349.
Khanna, A. , Pandey, B. , Vashishta, K. , Kalia, K. , Pradeepkumar, B. , & Das, T . (2015). A
study of today’s AI through chatbots and rediscovery of machine intelligence. International
Journal of u- and e-Service, Science and Technology, 8(7), 277–284.
Kisner, J. (2018). The technology giving voice to the voiceless. The Guardian. Available online
https://www.theguardian.com/news/2018/jan/23/voice-replacement-technology-adaptive-
alternative-communication-vocalid.
Koolagudi, S. G. , & Rao, K. S. (2012). Emotion recognition from speech: A review.
International Journal of Speech Technology, 15(2), 99–117.
Lala, R. , Jeuring, J. T. , & Overbeek, T . (2017). Analysing and adapting communication
scenarios in virtual learning environments for one-to-one communication skills training.
[ILRN2017] Available online http://castor.tugraz.at/doku/iLRN2017/iLRN2017paper34.pdf.
Lapakko, D. (1997). Three cheers for language: A closer examination of a widely cited study of
nonverbal communication. Communication Education, 46(1), 63–67.
McMillan, D. , Loriette, A. , & Brown, B . (2015). Repurposing conversation: Experiments with
the continuous speech stream. In Proceedings of the 33rd annual ACM Conference on Human
Factors in Computing Systems (pp. 3953–3962). New York: ACM.
Newitz, A. (2015). The Fembots of Ashley Madison. Gizmodo. Available online:
http://gizmodo.com/the-fembots-of-ashley-madison-1726670394.
Nolan, B. (2014). Extending a lexicalist functional grammar through speech acts, constructions
and conversational software agents. In B. Nolan & C. Periñán-Pascual (Eds.), Language
Processing and Grammars: The Role of Functionally Oriented Computational Models (pp.
143–164) Amsterdam, the Netherlands: John Benjamins Publishing Company.
Oh, A. H. , & Rudnicky, A. I. (2000). Stochastic language generation for spoken dialogue
systems. In Proceedings of the 2000 ANLP/NAACL Workshop on Conversational Systems-
Volume 3 (pp. 27–32). Morristown, NJ: Association for Computational Linguistics.
Ramos-Soto, A. , Bugarin, A. J. , Barro, S. , & Taboada, J . (2015). Linguistic descriptions for
automatic generation of textual short-term weather forecasts on real prediction data. IEEE
Transactions on Fuzzy Systems, 23(1), 44–57.
Reiter, E. , & Dale, R . (1997). Building applied natural language generation systems. Natural
Language Engineering, 3(1), 57–87.
Savin-Baden, M. , Tombs, G. , Burden, D. , & Wood, C . (2013). ‘It’s almost like talking to a
person’: Student disclosure to pedagogical agents in sensitive settings. International Journal of
Mobile and Blended Learning, 5(2), 78–93.
Sidnell, J. (2011). Conversation Analysis: An Introduction. Hoboken, NJ: John Wiley & Sons.
Taylor, P. (2009). Text-to-Speech Synthesis. Cambridge, UK: Cambridge University Press.
Taylor, P. , & Isard, A . (1997). SSML: A speech synthesis markup language. Speech
Communication, 21(1), 123–133.
The Guardian. (2016). Microsoft ‘deeply sorry’ for racist and sexist tweets by AI chatbot.
Available online https://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-
for-offensive-tweets-by-ai-chatbot
Thimm, M. , Villata, S. , Cerutti, F. , Oren, N. , Strass, H. , & Vallati, M . (2016). Summary report
of the first international competition on computational models of argumentation. AI Magazine,
37(1), 102.
Vinyals, O. , & Le, Q . (2015). A neural conversational model. Proceedings of the International
Conference on Machine Learning, Deep Learning Workshop. Available online
https://arxiv.org/abs/1506.05869
Wallace, R. (2003). The Elements of AIML Style. Alice AI Foundation. Available online
https://files.ifi.uzh.ch/cl/hess/classes/seminare/chatbots/style.pdf.
Warwick, K. , & Shah, H . (2016). Can machines think? A report on Turing test experiments at
the Royal Society. Journal of Experimental & Theoretical Artificial Intelligence, 28(6), 989–1007.
Wei, B. , & Prakken, H . (2017). Defining the structure of arguments with AI models of
argumentation. College Publications, 68, 1–22. Available online
https://dspace.library.uu.nl/bitstream/handle/1874/356057/WeibinPrakken17.pdf?sequence=1.
Wen, T. H. , Gasic, M. , Mrksic, N. , Su, P. H. , Vandyke, D. , & Young, S . (2015). Semantically
conditioned LSTM-based natural language generation for spoken dialogue systems. In
Proceedings of EMNLP (pp. 1711–1721), Lisbon, Portugal, September.
Yamagishi, J. , Veaux, C. , King, S. , & Renals, S . (2012). Speech synthesis technologies for
individuals with vocal disabilities: Voice banking and reconstruction. Acoustical Science and
Technology, 33(1), 1–5.
Zen, H. , & Senior, A . (2014). Deep mixture density networks for acoustic modeling in statistical
parametric speech synthesis. In Acoustics, Speech and Signal Processing (ICASSP), 2014
IEEE International Conference on (pp. 3844–3848). IEEE.
Zheng, T. F. , Jin, Q. , Li, L. , Wang, J. , & Bie, F . (2014). An overview of robustness related
issues in speaker recognition. In Asia-Pacific Signal and Information Processing Association,
2014 Annual Summit and Conference (APSIPA) (pp. 1–10). IEEE.
Architecture
Anderson, J. R. (1983). A spreading activation theory of memory. Journal of Verbal Learning
and Verbal Behavior, 22(3), 261–295.
Anderson, J. R. (1996). ACT: A simple theory of complex cognition. American Psychologist,
51(4), 355.
Arafa, Y. , & Mamdani, A . (2003). Scripting embodied agents behaviour with CML: character
markup language. In Proceedings of the 8th International Conference on Intelligent User
Interfaces (pp. 313–316). New York: ACM.
Bechara, A. , Damasio, H. , & Damasio, A. R. (2000). Emotion, decision making and the
orbitofrontal cortex. Cerebral Cortex, 10(3), 295–307.
Becker, C. , Kopp, S. , & Wachsmuth, I . (2004). Simulating the emotion dynamics of a
multimodal conversational agent. In E. André , L. Dybkjær , W. Minker , & P. Heisterkamp (Eds.),
Affective Dialogue Systems: Proceedings of the Tutorial and Research Workshop, ADS 2004,
Lecture Notes in Computer Science (LNAI) (Vol. 3068, pp. 154–165). Berlin, Germany: Springer.
Becker-Asano, C. (2014). WASABI for affect simulation in human-computer interaction. In
Proceedings of the International Workshop on Emotion Representations and Modelling for HCI
Systems. New York: ACM.
Becker-Asano, C. , & Wachsmuth, I . (2010). Affective computing with primary and secondary
emotions in a virtual human. Autonomous Agents and Multi-Agent Systems, 20(1), 32.
Bower, G. H. (1981). Mood and memory. American Psychologist, 36(2), 129.
Brockman, G. , Cheung, V. , Pettersson, L. , Schneider, J. , Schulman, J. , Tang, J. , &
Zaremba, W . (2016). OpenAI Gym. arXiv preprint arXiv:1606.01540.
Brockman, G. , & Sutskever, I . (2015). Introducing OpenAI [Web log post]. Available online
https://blog.openai.com/introducing-openai/
Courgeon, M. , Martin, J. C. , & Jacquemin, C. (2008). MARC: A multimodal affective and
reactive character. In Proceedings of the 1st Workshop on Affective Interaction in Natural
Environments (pp. 12–16). New York: ACM.
De Carolis, B. , Pelachaud, C. , Poggi, I. , & Steedman, M . (2004). APML, a markup language for
believable behavior generation. In H. Prendinger and M. Ishizuka (Eds) Life-Like Characters.
Cognitive Technologies (pp. 65–85). Berlin, Germany: Springer.
Dias, J. , & Paiva, A . (2005). Feeling and reasoning: A computational model for emotional
characters. In C. Bento , A. Cardoso & G. Dias (Eds.), Progress in Artificial Intelligence. EPIA
2005. Lecture Notes in Computer Science (Vol. 3808, pp. 127–140). Berlin, Germany: Springer.
Dörner, D. (2003). The mathematics of emotions. In F. Detje & H. Schaub (Eds.), Proceedings
of the Fifth International Conference on Cognitive Modeling (pp. 75–79). Bamberg, Germany:
Universitäts-Verlag Bamberg, April 10–12.
Ernst, G. W. , & Newell, A. (1967). Some issues of representation in a general problem solver.
In Proceedings of the April 18–20, 1967, Spring Joint Computer Conference (pp. 583–600).
New York: ACM.
Goertzel, B. , & Duong, D . (2009). OpenCog NS: A deeply-interactive hybrid neural-symbolic
cognitive architecture designed for global/local memory synergy. In AAAI Fall Symposium:
Biologically Inspired Cognitive Architectures (pp. 63–68). Palo Alto, CA: AAAI.
Goertzel, B. , Hanson, D. , & Yu, G . (2014). Toward a robust software architecture for generally
intelligent humanoid robotics. Procedia Computer Science, 41, 158–163.
Gomes, P. F. , & Jhala, A. (2013). AI authoring for virtual characters in conflict. In Proceedings
of the Ninth Annual AAAI Conference on Artificial Intelligence and Interactive Digital
Entertainment. Palo Alto, CA: AAAI.
Granger, R. (2006). Engines of the brain: The computational instruction set of human cognition.
AI Magazine, 27(2), 15.
Gratch, J. , Hartholt, A. , Dehghani, M. , & Marsella, S . (2013). Virtual humans: A new toolkit for
cognitive science research. Applied Artificial Intelligence, 19, 215–233.
Gratch, J. , Rickel, J. , André, E. , Cassell, J. , Petajan, E. , & Badler, N . (2002). Creating
interactive virtual humans: Some assembly required. IEEE Intelligent systems, 17(4), 54–63.
Hart, D. , & Goertzel, B . (2008). OpenCog: A software framework for integrative artificial
general intelligence. In Frontiers in Artificial Intelligence and Applications, Proceedings of the
1st AGI Conference (Vol. 171, pp. 468–472).
Hartholt, A. , Traum, D. , Marsella, S. C. , Shapiro, A. , Stratou, G. , Leuski, A. , Morency, L. P. ,
& Gratch, J . (2013). All together now: Introducing the virtual human toolkit. In Proceedings of
the 13th International Conference on Intelligent Virtual Agents (pp. 368–381). Berlin, Germany:
Springer, August 29–31.
Hilgard, E. R. (1980). The trilogy of mind: Cognition, affection, and conation. Journal of the
History of the Behavioral Sciences, 16(2), 107–117.
Ho, W. C. , & Dautenhahn, K. (2008). Towards a narrative mind: The creation of coherent life
stories for believable virtual agents. In H. Prendinger , J. Lester , M. Ishizuka (Eds.), Intelligent
Virtual Agents, IVA 2008, Lecture Notes in Computer Science (Vol. 5208, pp. 59–72). Berlin,
Germany: Springer.
Johnson, T. R. (1997). Control in ACT-R and Soar. In M. Shafto & P. Langley (Eds.),
Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society (pp.
343–348). Hillsdale, NJ: Lawrence Erlbaum Associates.
Kiryazov, K. , & Grinberg, M . (2010). Integrating emotions in the TRIPLE ECA model. In A.
Esposito , N. Campbell , C. Vogel , A. Hussain , A. Nijholt (Eds.) Development of Multimodal
Interfaces: Active Listening and Synchrony, Lecture Notes in Computer Science (Vol. 5967, pp.
122–133). Berlin, Germany: Springer.
Klug, M. , & Zell, A . (2013). Emotion-based human-robot-interaction. In Computational
Cybernetics (ICCC), 2013 IEEE 9th International Conference on (pp. 365–368). IEEE.
Kopp, S. , Krenn, B. , Marsella, S. , Marshall, A. N. , Pelachaud, C. , Pirker, H. , Thórisson, K.
R. , & Vilhjálmsson, H. (2006). Towards a common framework for multimodal generation: The
behavior markup language. In J. Gratch , M. Young , R. Aylett , D. Ballin , P. Olivier (Eds.)
Intelligent Virtual Agents, IVA 2006, Lecture Notes in Computer Science (Vol. 4133, pp.
205–217). Berlin, Germany: Springer.
Kshirsagar, S. , Guye-Vuilleme, A. , Kamyab, K. , Magnenat-Thalmann, N. , Thalmann, D. , &
Mamdani, E . (2002). Avatar markup language. In Proceedings of the 8th Eurographics
Workshop on Virtual Environments (pp. 169–177). New York: ACM Press.
Laird, J. E. (2012). The Soar Cognitive Architecture. Cambridge, MA: MIT Press.
Lehman, J. F. , Laird, J. E. , & Rosenbloom, P. S. (1996). A gentle introduction to Soar, an
architecture for human cognition. Invitation to Cognitive Science, 4, 212–249.
Lim, M. Y. , Dias, J. , Aylett, R. , & Paiva, A . (2012). Creating adaptive affective autonomous
NPCs. Autonomous Agents and Multi-Agent Systems, 24(2), 287–311.
Lin, J. , Spraragen, M. , Blythe, J. , & Zyda, M . (2011). EmoCog: Computational integration of
emotion and cognitive architecture. In Proceedings of the Twenty-Fourth FLAIRS Conference.
Palo Alto, CA: AAAI.
Marinier, R. P. , & Laird, J. E. (2007, January). Computational modeling of mood and feeling
from emotion. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 29,
No. 29). Hillsdale, NJ: Lawrence Erlbaum Associates.
Marriott, A. (2001). VHML–Virtual human markup language. In Talking Head Technology
Workshop, at OzCHI Conference (pp. 252–264). Available online
http://ivizlab.sfu.ca/arya/Papers/Others/Representation%20and%20Agent%20Languages/OzC
HI-01-VHML.pdf.
Martínez-Miranda, J. , Bresó, A. , & García-Gómez, J. M. (2012). The construction of a
cognitive-emotional module for the Help4Mood virtual agent. Information and Communication
Technologies Applied to Mental Health, 34(3).
Newell, A. (1992). Précis of unified theories of cognition. Behavioral and Brain Sciences, 15,
425–492.
Newell, A. , & Simon, H . (1956). The logic theory machine–A complex information processing
system. IRE Transactions on Information Theory, 2(3), 61–79.
Nilsson, N. (1998). Artificial Intelligence: A New Synthesis. San Francisco, CA: Morgan
Kaufmann.
Ortony, A. , Clore, G. L. , & Collins, A . (1990). The Cognitive Structure of Emotions.
Cambridge, UK: Cambridge University Press.
Ritter, S. , Anderson, J. R. , Koedinger, K. R. , & Corbett, A . (2007). Cognitive Tutor: Applied
research in mathematics education. Psychonomic Bulletin & Review, 14(2), 249–255.
Rodriguez, L. F. , Galvan, F. , Ramos, F. , Castellanos, E. , García, G. , & Covarrubias, P .
(2010). A cognitive architecture based on neuroscience for the control of virtual 3D human
creatures. International Conference on Brain Informatics (pp. 328–335).
Russell, J. A. , & Mehrabian, A . (1977). Evidence for a three-factor theory of emotions. Journal
of Research in Personality, 11(3), 273–294.
Scherer, S. , Marsella, S. , Stratou, G. , Xu, Y. , Morbini, F. , Egan, A. , & Morency, L. P. (2012).
Perception markup language: Towards a standardized representation of perceived nonverbal
behaviors. In Y. Nakano , M. Neff , A. Paiva , M. Walker . (Eds.) Intelligent Virtual Agents,
LNCS (Vol. 7502, pp. 455–463). Berlin, Germany: Springer.
Slater, S. , & Burden, D . (2009). Emotionally responsive robotic avatars as characters in virtual
worlds. In Games and Virtual Worlds for Serious Applications, 2009 (VS-GAMES’09)
Conference (pp. 12–19). IEEE.
Sloman, A. (2001). Beyond shallow models of emotion. Cognitive Processing, 2(1), 177–198.
Sloman, A. (2003). The Cognition and Affect Project: Architectures, Architecture-Schemas, and
the New Science of Mind. Technical Report School of Computer Science, University of
Birmingham, Birmingham, UK. Available online
https://pdfs.semanticscholar.org/b376/bcfcd69798a5027eae518d001cbaf629deae.pdf.
Smart, P. R. , Scutt, T. , Sycara, K. , & Shadbolt, N. R. (2016). Integrating ACT-R cognitive
models with the unity game engine. In J. Turner , M. Nixon , U. Bernardet , & S. DiPaola (Eds.),
Integrating Cognitive Architectures into Virtual Character Design (pp. 35–64). Hershey, PA: IGI
Global.
Smart, P. R. , Tang, Y. , Stone, P. , Sycara, K. , Bennati, S. , Lebiere, C. , Mott, D. , Braines, D.
, & Powell, G. (2014). Socially-distributed cognition and cognitive architectures: Towards an
ACT-R-based cognitive social simulation capability. At the Annual Fall Meeting of the
International Technology Alliance, United Kingdom (p. 8), September 15, 2014.
Sutskever, I. , Brockman, G. , Altman, S. , & Musk, E . (2016). OpenAI technical goals [Web log
post]. Available online https://blog.openai.com/openai-technical-goals/.
Swartout, W. R. , Gratch, J. , Hill Jr, R. W. , Hovy, E. , Marsella, S. , Rickel, J. , & Traum, D .
(2006). Toward virtual humans. AI Magazine, 27(2), 96.
Tulving, E. (1983). Ecphoric processes in episodic memory. Philosophical Transactions of the
Royal Society of London B, 302(1110), 361–371.
Ylvisaker, M. , Hibbard, M. , & Feeney, T . (2006). What is cognition? Available online at
LEARNet, The Brain Injury Association of New York State website:
http://www.projectlearnet.org/tutorials/cognition.html.
Embodiment
Agarwal, R. , & Karahanna, E . (2000). Time flies when you’re having fun: Cognitive absorption
and beliefs about information technology usage. MIS Quarterly, 24(4), 665–694.
Anderson, M. L. (2003). Embodied cognition: A field guide. Artificial Intelligence, 149(1),
91–130.
Brooks, R. A. (1991). Intelligence without reason. In R. Chrisley & S. Begeer (Eds.), Artificial
Intelligence: Critical Concepts (Vol. 3, pp. 107–163). Cambridge, MA: MIT Press.
Calongne, C. , & Hiles, J . (2007). Blended realities: A virtual tour of education in second life. In
TCC Worldwide Online Conference (pp. 70–90). TCC Hawaii.
Dreyfus, H. L. (2007). Why Heideggerian AI failed and how fixing it would require making it
more Heideggerian, Artificial Intelligence, 171(18), 1137–1160.
Freeman, W. J. (1991). The physiology of perception. Scientific American, 264(2), 78–87.
Froese T. , & Ziemke T. (2009). Enactive artificial intelligence: Investigating the systemic
organization of life and mind. Artificial Intelligence 173(3–4), 466–500.
Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1–3),
335–346.
Haugeland, J. (1998). ‘Mind embodied and embedded,’ In J. Haugeland (Ed). Having Thought:
Essays in the Metaphysics of Mind (pp. 207–237). Cambridge, MA: Harvard University Press.
Heidegger, M. (1958). The Question of Being. Lanham, MD: Rowman & Littlefield.
Jackson, F. (1986). What Mary didn’t know. The Journal of Philosophy, 83(5), 291–295.
Lakoff, G. , & Johnson, M . (1999). Philosophy in the Flesh (Vol. 4). New York: Basic Books.
Lenat, D. , & Guha, R. V. (1989). Building Large Knowledge-Based Systems: Representation
and Inference in the Cyc Project. Reading, MA: Addison-Wesley Publishing Company.
Matuszek, C. , Cabral, J. , Witbrock, M. J. , & DeOliveira, J. (2006). An introduction to the
syntax and content of Cyc. In AAAI Spring Symposium: Formalizing and Compiling Background
Knowledge and Its Applications to Knowledge Representation and Question Answering (pp.
44–49). Palo Alto, CA: AAAI.
Merleau-Ponty, M. (1962). Phenomenology of Perception. London, UK: Routledge.
Piaget, J. (1954). The Construction of Reality in the Child. London, UK: Routledge.
Sloman, A. (2009). ‘Some requirements for human-like robots: Why the recent over-emphasis
on embodiment has held up progress’ In B. Sendhoff et al. (Eds.), Creating Brain-Like
Intelligence, LNAI (Vol. 5436, pp. 248–277). Berlin, Germany: Springer-Verlag.
Varela, F. J. , Thompson, E. , & Rosch, E . (1991). The Embodied Mind: Cognitive Science and
Human Experience. Cambridge, MA: MIT Press.
Ziemke, T. (2001). Are robots embodied? In Proceedings of the First International Workshop on
Epigenetic Robotics (Vol. 85). Lund, Sweden: Lund University Cognitive Studies.
Digital Ethics
Borenstein, J., & Arkin, R. C. (2016). Robots, ethics, and intimacy: The need for scientific
research. In Conference of the International Association for Computing and Philosophy (IACAP
2016), Ferrara, Italy, June.
Corritore, C. L., Kracher, B., & Wiedenbeck, S. (2003). On-line trust: Concepts, evolving
themes, a model. International Journal of Human-Computer Studies, 58(6), 737–758.
Culley, K. E., & Madhavan, P. (2013). A note of caution regarding anthropomorphism in HCI
agents. Computers in Human Behavior, 29(3), 577–579.
Docherty, B. (2016). Losing control: The dangers of killer robots. The Conversation, June 16.
Ess, C. M. (2016). Phronesis for machine ethics? Can robots perform ethical judgments? In J.
Seibt, M. Nørskov, & S. S. Andersen (Eds.), What Social Robots Can and Should Do,
Frontiers in Artificial Intelligence and Applications (pp. 386–389). Amsterdam, the Netherlands:
IOS Press.
Ess, C. M. (2017a). Communication and technology. Annals of the International Communication
Association, 41(3–4), s209–s212.
Ess, C. M. (2017b). Digital media ethics. Oxford Research Encyclopedia of Communication.
doi:10.1093/acrefore/9780190228613.013.508.
European Union. (1995). Directive 95/46/EC of 24 October 1995 on the protection of
individuals with regard to the processing of personal data and on the free movement of such data.
Hasler, B. S., Tuchman, P., & Friedman, D. (2013). Virtual research assistants: Replacing
human interviewers by automated avatars in virtual worlds. Computers in Human Behavior,
29(4), 1608–1616.
Hern, A. (2017). Give robots ‘personhood’ status, EU committee argues. The Guardian,
January 17.
Levy, D. (2008). Love and Sex with Robots: The Evolution of Human-Robot Relationships. New
York: Harper Perennial.
Malle, B. F. (2016). Integrating robot ethics and machine morality: The study and design of
moral competence in robots. Ethics and Information Technology, 18, 243–256.
Markham, A., & Buchanan, E. (2012). Ethical Decision-Making and Internet Research:
Recommendations from the AoIR Ethics Working Committee (Version 2.0). Association of Internet
Researchers. Available online http://aoir.org/reports/ethics2.pdf.
Michel, A. H. (2013). Interview: The professor of robot love. Center for the Study of the Drone,
October 5. Available online http://dronecenter.bard.edu/interview-professor-robot-love/.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral
Research (1978). Belmont report: Ethical principles and guidelines for the protection of human
subjects of research. Available online http://www.fda.gov/ohrms/dockets/ac/05/briefing/2005-
4178b_09_02_Belmont%20Report.pdf (January 18, 2016).
National Institutes of Health. (1949). Nuremberg Code. Available online
https://history.nih.gov/research/downloads/nuremberg.pdf.
Öhman, C., & Floridi, L. (2018). An ethical framework for the digital afterlife industry. Nature
Human Behaviour, 2, 318–320.
Prinz, J. J. (2011). Is empathy necessary for morality? In A. Coplan & P. Goldie (Eds.),
Empathy: Philosophical and Psychological Perspectives (pp. 211–229). Oxford, UK: Oxford
University Press.
Riek, L. D., & Howard, D. (2014). A code of ethics for the human-robot interaction profession
(April 4). Proceedings of We Robot 2014. Available online https://ssrn.com/abstract=2757805.
Savin-Baden, M., Burden, D., & Taylor, H. (2017). The ethics and impact of digital immortality.
Knowledge Cultures, 5(2), 11–29.
Savin-Baden, M., & Major, C. (2013). Qualitative Research: The Essential Guide to Theory
and Practice. London, UK: Routledge.
Savin-Baden, M., & Tombs, G. (2017). Research Methods for Education in the Digital Age.
London, UK: Bloomsbury.
Savin-Baden, M., Tombs, G., & Bhakta, R. (2015). Beyond robotic wastelands of time:
Abandoned pedagogical agents and new pedalled pedagogies. E-Learning and Digital Media,
12(3–4), 295–314.
Savin-Baden, M., Tombs, G., Burden, D., & Wood, C. (2013). It’s almost like talking to a
person. International Journal of Mobile and Blended Learning, 5(2), 78–93.
Turkle, S. (2010). In good company? On the threshold of robotic companions. In Y. Wilks (Ed.),
Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design
Issues (pp. 3–10). Amsterdam, the Netherlands: John Benjamins Publishing Company.
United Nations. (1948). Universal declaration of human rights. Available online
http://www.un.org/en/universal-declaration-human-rights/.
Waddell, T. F., Ivory, J. D., Conde, R., Long, C., & McDonnell, R. (2014). White man’s virtual
world: A systematic content analysis of gender and race in massively multiplayer online games.
Journal for Virtual Worlds Research, 7(2). Available online
https://jvwr-ojs-utexas.tdl.org/jvwr/index.php/jvwr/article/view/7096.
World Medical Association. (1964). Declaration of Helsinki. Helsinki, Finland: World Medical
Association.
Digital Immortality
Bainbridge, W. (2013). Perspectives on virtual veneration. Information Society, 29(3), 196–202.
Bassett, D. (2015). Who wants to live forever? Living, dying and grieving in our digital society.
Social Sciences, 4, 1127–1139.
Bassett, D. (2017). Shadows of the dead: Social media and our changing relationship with the
departed. Discover Society. Available online
http://discoversociety.org/2017/01/03/shadows-of-the-dead-social-media-and-our-changingrelationship-with-the-departed/.
Birnhack, M., & Morse, T. (2018). Regulating access to digital remains – research and policy
report. Israeli Internet Association. Available online
https://www.isoc.org.il/wp-content/uploads/2018/07/digital-remains-ENG-for-ISOC-07-2018.pdf.
Bowlby, J. (1981). Attachment and Loss (Vol. 3). New York: Basic Books.
Bridge, M. (2016). Good grief: Chatbots will let you talk to dead relatives. The Times. Available
online https://www.thetimes.co.uk/article/27aa07c8-8f28-11e6-baac-bee673517c57.
Brubaker, J., & Callison-Burch, V. (2016). Legacy contact: Designing and implementing
post-mortem stewardship at Facebook. In Proceedings of the ACM Conference on Human Factors in
Computing Systems (pp. 2908–2919). Santa Clara, CA: ACM.
Burden, D. J. H. (2012). Digital immortality. TEDx presentation, Birmingham, UK.
Cumiskey, K., & Hjorth, L. (2018). Haunting Hands. Oxford, UK: Oxford University Press.
Dennett, D. C. (1995). The unimagined preposterousness of zombies. Journal of
Consciousness Studies, 2(4), 322–326.
Dubbin, R. (2013). The rise of Twitter bots. The New Yorker. Available online
http://www.newyorker.com/tech/elements/the-rise-of-twitter-bots.
Eternime. (2017). Available online http://eterni.me/.
Hallam, E., & Hockey, J. (2001). Death, Memory, and Material Culture. Oxford, UK: Berg
Publishers.
Harbinja, E. (2017). Post-mortem privacy 2.0: Theory, law, and technology. International
Review of Law, Computers & Technology, 31(1), 26–42.
Kasket, E. (2019). All the Ghosts in the Machine: Illusions of Immortality in the Digital Age.
London, UK: Robinson.
Klass, D., Silverman, P. R., & Nickman, S. L. (Eds.) (1996). Continuing Bonds: New Understandings
of Grief. Washington, DC: Taylor & Francis Group.
Kübler-Ross, E. (1969). On Death and Dying. New York: Macmillan.
LifeNaut Project. (2017). Available online https://www.lifenaut.com/.
Maciel, C. (2011). Issues of the social web interaction project faced with afterlife digital legacy.
In Proceedings of the 10th Brazilian Symposium on Human Factors in Computing Systems and
the 5th Latin American Conference on Human-Computer Interaction (pp. 3–12). ACM Press.
Maciel, C., & Pereira, V. (2013). Digital Legacy and Interaction. Heidelberg, Germany:
Springer.
Nansen, B., Arnold, M., Gibbs, M., & Kohn, T. (2015). The restless dead in the digital
cemetery. In C. M. Moreman & A. D. Lewis (Eds.), Digital Death: Mortality and Beyond in the
Online Age (pp. 111–124). Santa Barbara, CA: Praeger.
Pennington, N. (2017). Tie strength and time: Mourning on social networking sites. Journal of
Broadcasting and Electronic Media, 61(1), 11–23.
Savin-Baden, M., & Burden, D. (2018). Digital immortality and virtual humans. Paper
presented at the Death Online Research Symposium, University of Hull, August 15–17.
Savin-Baden, M., Burden, D., & Taylor, H. (2017). The ethics and impact of digital immortality.
Knowledge Cultures, 5(2), 11–29.
Sofka, C., Cupit, I. N., & Gilbert, K. R. (Eds.) (2012). Dying, Death, and Grief in an Online
Universe: For Counselors and Educators. New York: Springer Publishing Company.
Steinhart, E. C. (2014). Your Digital Afterlives. Basingstoke, UK: Palgrave Macmillan.
Stroebe, M., & Schut, H. (1999). The dual process model of coping with bereavement:
Rationale and description. Death Studies, 23, 197–224.
Tonkin, L. (2012). Haunted by a ‘Present Absence’. Studies in the Maternal, 4(1), 1–17.
Walter, T. (1996). A new model of grief: Bereavement and biography. Mortality, 1(1), 7–25.
Walter, T. (2017). How the dead survive: Ancestors, immortality, memory. In M. H. Jacobsen
(Ed.), Postmortal Society: Towards a Sociology of Immortality. London, UK: Routledge.