
REVIEW ARTICLE

Affective Computing: Recent Advances, Challenges, and Future Trends

Guanxiong Pei 1, Haiying Li 2, Yandi Lu 3, Yanlei Wang 4, Shizhen Hua 1, and Taihao Li 1*

1 Research Center for Multi-Modal Intelligence, Research Institute of Artificial Intelligence, Zhejiang Lab, Hangzhou, China. 2 National Science Library, Chinese Academy of Sciences, Beijing, China. 3 Center for Psychological Sciences, Zhejiang University, Hangzhou, China. 4 De.InnoScience, Deloitte, Shanghai, China.

*Address correspondence to: [email protected]

Citation: Pei G, Li H, Lu Y, Wang Y, Hua S, Li T. Affective Computing: Recent Advances, Challenges, and Future Trends. Intell. Comput. 2024;3:Article 0076. https://doi.org/10.34133/icomputing.0076

Submitted 15 October 2023. Accepted 6 December 2023. Published 5 January 2024.

Copyright © 2024 Guanxiong Pei et al. Exclusive licensee Zhejiang Lab. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution License 4.0 (CC BY 4.0).

Downloaded from https://spj.science.org on April 08, 2024

Affective computing is a rapidly growing multidisciplinary field that encompasses computer science, engineering, psychology, neuroscience, and other related disciplines. Although the literature in this field has progressively grown and matured, the lack of a comprehensive bibliometric analysis limits the overall
understanding of the theory, technical methods, and applications of affective computing. This review
presents a quantitative analysis of 33,448 articles published in the period from 1997 to 2023, identifying
challenges, calling attention to 10 technology trends, and outlining a blueprint for future applications. The
findings reveal that the emerging forces represented by China and India are transforming the global research
landscape in affective computing, injecting transformative power and fostering extensive collaborations,
while emphasizing the need for more consensus regarding standard setting and ethical norms. The 5 core
research themes identified via cluster analysis not only represent key areas of international interest but
also indicate new research frontiers. Important trends in affective computing include the establishment
of large-scale datasets, the use of both data and knowledge to drive innovation, fine-grained sentiment
classification, and multimodal fusion, among others. Amid rapid iteration and technology upgrades,
affective computing has great application prospects in fields such as brain–computer interfaces, empathic
human–computer dialogue, assisted decision-making, and virtual reality.

Introduction

According to basic emotion theory, emotion is the grammar of social living and serves as a crucial means of exchanging information, maintaining relationships, and communicating ideas between individuals. Moreover, it is a fundamental psychological element that ensures basic human survival while shaping social habits and supporting advanced thinking [1,2]. Given its central role in numerous human intellectual activities such as perception, learning, decision-making, reasoning, and socializing, emotion is an important force driving the continuous and diverse prosperity of human civilization.

The importance of emotions to human beings can be summarized in 5 crucial aspects. First, the survival function is a learned physiological response that allows individuals to adapt positively to their environment [3]. Emotions play a pivotal role in strengthening the capacity to adapt to the environment by regulating attention, memory, perception, and other cognitive processes. This ensures a greater chance of survival and development during the evolutionary process. Second, the communication function highlights the importance of emotions for the accurate expression and understanding of human intentions [4]. The same words spoken with different emotions carry different connotations. Thus, emotions are inseparable from natural language and are critical for semantic disambiguation. Third, emotions have a decision-making function that manifests in both fast and slow modes of thinking. The commonly used unconscious "System 1" mainly relies on emotions and experiences, while the conscious "System 2" depends on rational deliberation [5]. Therefore, emotions are widely involved in higher-level thinking and decision-making processes that profoundly affect the results and efficiency of decisions. Fourth, emotions serve a motivational function in stimulating and sustaining individuals' behaviors, thereby affecting the degree of resource input, behavioral persistence, and evaluation of outcomes [6]. Finally, emotions perform a maintenance function as bonds between members of ethnic groups, families, social circles, social classes, and other groups. During human socialization, emotions serve as the core of low-cost maintenance of social relations, forming potential social interaction contracts, and are closely tied to individual moral constraints and codes of conduct [7,8]. Hence, the nature and functions of emotions ensure that they are inseparable from human survival and development.

As the era of a human–machine symbiotic society approaches, endowing machines with emotional intelligence becomes increasingly crucial. Emotional intelligence represents a fundamental

Pei et al. 2024 | https://doi.org/10.34133/icomputing.0076 1


technology and an essential prerequisite for realizing naturalized and anthropomorphic human–computer interaction. It is of great value for opening up the era of intelligence and digitization. Picard is credited with being the first to propose a comprehensive definition of affective computing. In her 1997 book, Affective Computing, she defined it as "computing that relates to, arises from, or deliberately influences emotions" [9]. The goal of affective computing is to create a computing system capable of perceiving, recognizing, and understanding human emotions and responding intelligently, sensitively, and naturally, thus making human–computer interaction more natural. The epochal importance of affective computing lies in its impact on changing how emotions are perceived as abstractions within psychology, making it possible for emotions to be measured, computed, and machine-learned.

Affective computing encompasses various disciplines, including computer science, engineering science, brain and psychological science, and social sciences. Computer science and engineering science focus on providing various information technology tools and engineering capabilities to enable digital reconstruction and computational realization of emotion perception, recognition, understanding, and feedback, allowing machines to possess human-like emotional and cognitive functions. The psychological and consciousness aspects of the brain and psychological sciences provide theories on the basic definition of human emotions and the structure of related elements, laying the foundation for modeling emotion theories. Cognitive neuroscience, another branch of the brain and psychological sciences, examines the emotion-processing mechanism of the human brain and establishes a functional network of psychological elements associated with emotions, providing important inspiration and strategic guidance for developing affective computing models. Social and medical sciences offer numerous opportunities for the application of affective computing and serve as a resource for designing application scenarios for such technologies.

Research in affective computing
The research content of affective computing primarily covers 5 aspects. The first aspect is the fundamental theory of emotion, which currently relies on the discrete emotion model and the dimensional emotion model from the field of psychology to define various types of emotions, ranging from basic to compound. The second aspect involves collecting emotional signals, such as text, speech, facial expressions, gestures, and physiological signals, to establish corresponding datasets. The third aspect is sentiment analysis, which utilizes machine-learning and deep-learning algorithms to model and identify emotional signals. The fourth aspect is multimodal fusion, which leverages multimodal emotional features and fusion algorithms to enhance the accuracy of emotional classification. Finally, the fifth aspect is generating and expressing emotions, processes that enable robots to express emotional states through facial expressions, voice intonation, body movements, etc., and facilitate natural, anthropomorphic, and personified human–robot interaction. Figure 1 illustrates the specific content and development status of these 5 aspects.

Basic theory of emotion
The field of affective psychology has numerous grounded theories of emotion and serves as an important source of inspiration for the development of computable emotion models. The discrete emotion model and the dimensional emotion model are the most commonly used theoretical models for artificial intelligence emotion modeling. The discrete emotion model categorizes emotions individually rather than in correlated groups, as does Ekman's basic emotion classification model, which is based on facial expression analysis [10] and comprises happiness, sadness, anger, disgust, surprise, fear, and contempt. Although the discrete emotion model is clearly defined, interpretable, easy to understand, and capable of semantically integrating vocabulary and concepts, it lacks granularity and provides a limited quantitative description of emotions. In contrast, dimensional affective models represent different emotions through multidimensional vectors in affective space. Such models include the valence–arousal affective model [11] and the 3-dimensional pleasure–arousal–dominance model [12,13]. These models are highly quantitative, abstract, and inductive and have continuous emotional value vectors. They are suitable for handling changes in emotional states over time but are not intuitively interpretable; thus, it is difficult for machines to use them to develop rich coping strategies for emotional interactions. The selection of the model depends on the actual application tasks and scene requirements, as both discrete and dimensional emotion models have advantages and disadvantages.

Collection of emotional signals
To support data acquisition and the comparison of algorithms in affective computing, numerous open-source databases have been established. They contain datasets that can be categorized as textual, speech/audio, visual, physiological, or multimodal. The characteristics of these databases considerably impact model design and network architecture in affective computing.

Text-based resources on various communication carriers serve as massive datasets for emotional text mining [14]. Representative datasets include the internet movie database (IMDb) [15], the Stanford sentiment treebank, which contains sentences from movie reviews [16], and the Multi-Domain Sentiment Dataset, which contains Amazon.com product reviews [17]. Speech is another crucial modality for decoding emotions in human intercommunication. Speech signals comprise both the emotional content of the speech and the emotional characteristics of the sound itself. Representative datasets include EmoDB [18], the SEMAINE database [19], and CSED [20]. Visual-emotional signals such as body movements and facial expressions are now more convenient to gather because of low-cost sensors such as cameras and camcorders, and they do not require direct contact with the user [21]. This field has vast amounts of data and many related research papers with considerable data collected directly from real-world scenarios, making it more conducive to grounded applications [22]. Representative datasets include the Expression-in-the-Wild (ExpW) dataset [23], AffectNet [24], the Real-world Affective Faces Database (RAF-DB) [25], and SMIC, a database of spontaneous microexpressions [26].

Physiological data have an advantage over signal data such as text, speech, and facial expressions in that they can more directly, objectively, and accurately reflect an individual's emotional state while being less influenced by subjective consciousness [27,28]. Consequently, physiological data have become a research hotspot in affective computing. Commonly used physiological data in this field include electroencephalograms (EEGs), skin electricity, cardiac electricity, electromyography (EMG), eye electricity, respiration, skin temperature, and blood volume pulse. However, obtaining physiological data requires the use of complex sensors. Thus, such data are expensive and challenging to collect for use in practical applications.
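The five-way dataset taxonomy introduced at the start of this subsection (textual, speech/audio, visual, physiological, and multimodal) can be sketched as a simple lookup table. The mapping below is purely illustrative and lists only representative datasets named in this review; it is not an exhaustive catalog.

```python
# Illustrative taxonomy of representative affective datasets named in this
# review, keyed by signal modality (groupings follow the text above).
EMOTION_DATASETS = {
    "textual": ["IMDb", "Stanford Sentiment Treebank", "Multi-Domain Sentiment Dataset"],
    "speech/audio": ["EmoDB", "SEMAINE", "CSED"],
    "visual": ["ExpW", "AffectNet", "RAF-DB", "SMIC"],
    "physiological": ["DEAP", "SEED", "WESAD"],
}

def datasets_for(modality: str) -> list:
    """Return the representative datasets listed for a given modality."""
    return EMOTION_DATASETS.get(modality.lower(), [])
```

A multimodal corpus would simply draw on several of these keys at once, which is why it is listed as its own category in the text rather than in this lookup.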

Fig. 1. Research content of affective computing.

Consequently, the scale of physiological data used in laboratory research is generally small [29]. Representative datasets include the Database for Emotion Analysis using Physiological Signals (DEAP) [30], the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) [31], and WESAD, a dataset for wearable stress and affect detection [32].

Sentiment analysis
Text analysis. This method focuses on extracting, analyzing, understanding, and generating emotional information in natural language. Early text affective recognition relied mainly on manually constructed affective dictionaries and rules for affective analysis. These methods judge sentiment polarity by matching sentiment words with grammatical rules in a text [33,34]. However, this approach is limited by emotional lexicon coverage and rules, making it challenging to support multidomain sentiment analysis. With the advancement of machine learning, text emotion recognition methods based on statistical and machine learning algorithms have emerged. By training on large-scale text datasets, machine learning models can automatically learn emotional expression and semantic features, enhancing the accuracy and generalization ability of sentiment classification [35,36]. In recent years, deep-learning technology has considerably impacted text emotion recognition. Neural network-based models, such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), long short-term memory (LSTM) networks, bidirectional encoder representations from transformers (BERT), and generative pretrained transformers (GPT), have been successful in various sentiment analysis tasks [37–39]. They can capture contextual information and semantic relationships to better understand and analyze sentiments.

Speech analysis. Speech emotion recognition is the process by which a computer automatically recognizes the emotional state signaled by speech. Speech contains emotional information, such as speech rate and intonation, in addition to semantic information. Speech emotion analysis combines linguistic and acoustics-related technologies to analyze the syntax, semantics, and acoustic feature information related to the speaker's emotional state [40]. This analysis mainly revolves around rhyme, spectrum, and sound quality features. The numerous acoustic features related to affective states include fundamental frequency, duration, speech rate, resonance peaks, pitch, mel-filter bank (MFB), log-frequency power coefficients (LFPC), linear predictive cepstral coefficients (LPCC), and mel-frequency cepstral coefficients (MFCC) [41–43]. These features are represented as fixed dimensional feature vectors, with each component representing the statistical value of each acoustic parameter, including the mean, variance, maximum or minimum value, and range of variation. Recently, the ability of neural networks to extract suitable feature parameters has received increasing attention. Deep speech emotion features are learned from speech signals or spectrograms through tasks related to speech emotion recognition. Deep speech features learned from large-scale training data are widely used as speech emotion features in speech event detection and speech emotion recognition tasks, as in the VGGish and wav2vec projects [44,45], for example. In recent years, algorithms such as ConvNet learning [46], ConvNet-RNN [47], and adversarial learning [48] have considerably improved speech emotion recognition performance.
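The fixed-dimensional statistical feature vectors described above (mean, variance, extrema, and range of variation for each acoustic parameter) can be sketched with plain NumPy. The per-frame tracks below (fundamental frequency, energy) are hypothetical placeholders for the output of some acoustic front end; this is a minimal illustration of the vectorization step, not a full feature extractor.

```python
import numpy as np

def utterance_features(tracks: dict) -> np.ndarray:
    """Collapse per-frame acoustic parameter tracks (e.g., fundamental
    frequency, energy) into one fixed-dimensional vector holding, for
    each track: mean, variance, minimum, maximum, and range of variation."""
    stats = []
    for name in sorted(tracks):  # sorted keys give a stable feature order
        t = np.asarray(tracks[name], dtype=float)
        stats.extend([t.mean(), t.var(), t.min(), t.max(), np.ptp(t)])
    return np.array(stats)

# Hypothetical per-frame tracks for one utterance -> a 10-dimensional vector.
vec = utterance_features({
    "f0": [110.0, 120.0, 130.0],   # fundamental frequency per frame (Hz)
    "energy": [0.2, 0.4, 0.3],     # short-time energy per frame
})
```

Because every utterance maps to a vector of the same length regardless of its duration, such statistics can feed directly into the classifiers discussed elsewhere in this section (e.g., SVMs or MLPs).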



Visual analysis. Visual emotion recognition research primarily focuses on facial expression recognition (FER) and emotional body gesture recognition. The conventional method involves feature extraction followed by classification. Typically, handcrafted features for static image analysis include local binary pattern (LBP), histogram of oriented gradients (HOG), local phase quantization (LPQ), and Gabor features [49,50]. Some scholars have proposed dynamic feature extraction methods, such as LBP on three orthogonal planes (LBP-TOP) [34]. Features are usually classified using pattern recognition classification methods such as K-nearest neighbors, support vector machines (SVMs), or multi-layer perceptrons (MLPs). Another approach is feature learning, which combines the end-to-end training of feature representations and classifiers on a given task target, typically a combination of fully connected layers and a softmax. The feature-learning method employs features learned from big data through layer-by-layer feature transformation and can describe the intrinsic information of data better than handcrafted features. However, supervised training methods such as deep CNNs are not universal and rely on large amounts of sample data. Therefore, it is too early to abandon traditional feature-extraction methods. In visual emotion analysis, automatically trained features can be extracted and integrated with traditional features, which may further improve system performance.

Physiological signal analysis. Physiological changes that occur with emotions, including brain electrical activity, heart rate changes, electrical skin response, muscle tension, and respiration rate, are supported by mainstream theories, such as the physiological theory of emotion [51] and Lange's theory of emotion [52]. By detecting changes in these physiological signals, patterns associated with emotions can be recognized and then used to develop computer systems that can automatically recognize emotions. Physiological signals are more challenging to recognize than the text, speech, and facial expression signals mentioned above, and they have unique properties. For example, computing EEG data requires more complex preprocessing, including electrode position localization, bandpass filtering, reference conversion, segment analysis interception, artifact removal, and bad electrode interpolation. Researchers must have cross-field knowledge to apply machine learning or deep learning methods to recognize emotions from physiological signals [53].

Affective computing mainly employs peripheral nervous system (PNS) features, such as facial EMG, galvanic skin potential (GSP), photoplethysmography (PPG), heart rate variability (HRV), respiratory rate, and electrocardiogram (ECG), whereas central nervous system (CNS) features include EEG, near-infrared, and brain-imaging features. EEG features have dominated the studies published on this topic. For instance, manual feature extraction involves multidimensional feature extraction from EEG signals in the time, frequency, time–frequency, and nonlinear domains for emotion recognition and classification. Recent studies have emphasized the integrity and relevance of these features. To construct functional brain networks, many studies have started defining a channel as a node and quantifying the relationship between individual nodes using phase synchronization, inter-correlation, and mutual information, treating strength as the functional connectivity between the brain regions of the corresponding channels. Complex network measures, including efficiency, clustering coefficients, degree distribution, small-world features, and average shortest distance, are then used to extract functional brain network features. Since 2018, deep learning methods such as CNNs, RNNs, deep belief networks (DBNs), and stacked autoencoders (SAEs) [54–56] have been increasingly used for emotional computation of EEG data, generalizing sentiment analysis to various physiological signals.

Multimodal fusion
Early affective computing primarily involved unimodal data analysis and emotion recognition, focusing on a single modality, such as text, speech, facial expression, body movement, or physiological signals. However, this approach fails to conform to the human perception and expression patterns of emotions and has limitations in terms of the information obtained for emotion recognition [22]. Humans communicate their emotions through multiple channels, including language, tone of voice, facial expressions, and body movements. Textual, auditory, and visual information together provide more comprehensive emotional information than they do individually, just as the brain relies on multiple sensory input sources to validate events. Moreover, unimodal information is insufficient and can be easily affected by various external factors [21]. Emotional signals can be disguised or affected by other signals from a single channel, for example, when facial expressions are obscured or when noise interferes with speech, resulting in a considerable reduction in emotion analysis performance. Multimodal emotion analysis considers the complementarity of emotion expression among modalities and is thus more robust and aligned with natural human behavior expression. Therefore, research on multimodal fusion in affective computing has received increasing attention. Multimodal fusion algorithms integrate information from different modalities into a stable multimodal representation, enabling comprehensive processing and coordinated optimization to identify human emotions as accurately as possible [57]. Common multimodal fusion methods can be categorized into feature-level, model-level, and decision-level fusion depending on the fusion stage [58].

Generation and expression of emotions
Affective computing enables machines to provide empathic feedback based on deep contextual understanding. Building on the results of sentiment analysis and recognition, robots and other agents can deliver expressions and responses, conveying emotional temperature to the user through facial expressions, emotional text responses, and body movements [59,60]. Emotional text generation and speech synthesis are the most-studied areas of research. Emotional text generation involves the automatic generation of emotional response content that matches the message of the dialogue and is consistent with the machine's strategy, which is chosen according to the context [61]. For instance, a traffic enforcement robot may exhibit a fundamental difference between the language used for persuasion and the language used for enforcement, a difference that is crucial to obtaining effective practical traffic management results. The goal of emotional text generation is for the model to generate text that conforms to a specified sentiment category, as expressed by emotion-related keywords or techniques such as metaphors [62]. Pretrained models such as GPTs are increasingly being utilized as a base for emotionally controllable text generation and are achieving powerful results [63]. Responding to text content with emotional color is only the first step. The generated text needs to be expressed using a related emotional voice. Emotional coding information is integrated into the speech synthesis model to make human–machine dialogue less cold and mechanical, thereby allowing individuals to perceive "machine empathy" and feel warmth and affinity. Emotional speech synthesis uses a specific voice style and combines text content with emotional tags to provide a robot or agent with a voice that expresses a particular emotion



[64]. This process inputs textual content and a specific voice style into a neural network that synthesizes an output voice in that style by utilizing the spectral, rhythmic, and linguistic features of human voices that express emotion.

Applications of affective computing
Affective computing is a technology that advances according to the actual needs of the industry, which drives progress and iteration. To build up reliability, general applications initially focused on recreation, leisure, or serving people with urgent needs, then gradually expanded to more fields, transforming the technology and contributing to productive endeavors. In 2021, the value of affective computing reached $21.6 billion, and it is expected to double by 2024 [65,66]. As the industry grows, the creative applications of affective computing technologies will flourish, yielding satisfactory results in various fields.

Education
In the field of education, affective computing is primarily used to recognize the emotional state of learners and provide corresponding feedback and adjustment [67]. For example, teachers can utilize intelligent emotional teaching systems to better understand students' engagement levels and adjust the pace and content of their teaching to improve the learning experience. An intelligent system can recommend customized learning content based on the sentiment analysis of students' interests. Students can provide authentic teaching feedback through intelligent systems to improve the comprehensiveness and accuracy of teaching evaluations. One advantage of an intelligent system is that it can be used in both traditional and online classrooms to strengthen the contextualization of online teaching, enhance emotional interaction between teachers and students, and improve teaching quality. Affective computing techniques are also conducive to the research and development of educational games and robots [68], providing improved human–computer interaction and achieving educational objectives more effectively.

Healthcare
Affective computing research has expanded into various psychiatric disorders in the affective disorders category, such as Alzheimer's [69], Parkinson's [70], bipolar disorder [71], and post-traumatic stress [72], and into healthcare areas including relaxation service healthcare [73] and health office systems [74]. Affective computing enables the scientific and objective identification and judgment of patients' emotions, particularly in psychological disorder treatments, providing a useful complement to more subjective traditional diagnostic tools such as behavioral observation and scale filling. Objective data collection can improve personalized and precise medical treatment [75]. In addition, affective computing can be used for the initial screening and efficacy assessment of diseases. For instance, patients with social anxiety disorder exhibit important differences in emotional facial processing compared to the normal population, differences that can be identified by automated monitoring of differential features [76].

Business services
In marketing, where the consumer experience is highly correlated with emotions, affective computing is widely used to understand and recognize the user's emotional state. The application of affective computing can reveal the user's true preferences and improve and streamline the buying process [77]. In the field of financial credit, affective computing technologies can be used to analyze the emotional state and moral level of a customer based on voice and tone, determine the probability of the customer lying, and provide a guide for lending decisions. In the field of stock investment, investor decisions are influenced by irrational judgments. The price trend of a stock is determined not only by a company's fundamentals but also to a large extent by fluctuations in investor emotions. The study of investor sentiment from social media data (e.g., data from X, formerly known as Twitter) can help identify investors' emotional preferences and cognitive biases for the purpose of predicting the direction of the stock market [78].

Integration of science and art
In the current digital era, image, audio, and video data have become plentiful and important. Extracting useful information from them and retrieving and mining them effectively are crucial. For example, in recommending music to users, resource management and audio search efficiency are essential. Traditional music search methods match content using text (e.g., song title, artist name, or lyrics). Including sentiment, a high-level semantic feature of music, improves the match between user preferences and music, thus aiding in the primary task in music sentiment analysis [79]. Affective computing also empowers automated poetry generation, where deep learning methods such as RNNPG, an RNN-based poem generator, and SeqGAN, a sequence generative adversarial network, are gradually replacing Word Salada, genetic algorithms, and statistical machine translation methods [80–82]. Expressing emotions more richly is key in making generated poetry spiritual, i.e., in moving beyond resemblance of form to resemblance in spirit.

Importance of this study
The field of affective computing has grown considerably and exploded in popularity in the last decade for 2 reasons: technological developments providing tools for affective computing and the growth and expansion of demand. In the era of human–machine symbiosis, the deepened human understanding of emotional connotation and the improvement of the "double quotient" (i.e., IQ + EQ) of intelligent machines will become a vital innovative force promoting the affective computing discipline, technological evolution, and industrial progress. Despite the rapid development in affective computing, a comprehensive review of research and systematic analysis of hotspots and trends is lacking. Continuous innovation in algorithmic technology, broadening application requirements, and increasing research efforts necessitate that existing research be summarized and future technological directions be identified. Doing so will enable academia and industry to better understand the development of affective computing technology, thus facilitating affective computing research, empowering applications, and benefiting society.

This study aims to fill the gaps in existing research through a comprehensive review of affective computing from 1997, when Picard formally proposed the concept, up to 2023. We adopted a bibliometric analysis method to accurately portray the current status of the development of the field and provide insights into present challenges and future trends. The main contributions of this study are as follows. (a) Facing the academic frontier, we list the research hotspots and trends that we identified by analyzing full-scale papers. This allows readers to

Pei et al. 2024 | https://doi.org/10.34133/icomputing.0076 5


quickly and comprehensively grasp the development dynamics of the field and understand key common and frontier-leading technologies. (b) Facing major needs and the main battlefield of the economy, we provide blueprints for technological development and insights into current applications. This promotes the application and transformation of affective computing, facilitating high-quality economic development and digital transformation. (c) Facing future trends, we introduce challenges and developments in the field of affective computing, along with predictions for future technology and industry application directions. This serves as a forward-looking guide to the field.

Table 1. Search strategy for this study

Index field        Search strategy
Theme keywords     “affective recognition” or “mood recognition” or “affective computing” or “artificial emotional intelligence” or “emotion AI” or “expression recognition” or “emotion recognition” or “emotion learning” or “sentiment analysis” or “sentiment recognition”
Literature types   proceedings papers, articles, review articles, early access

Materials and Methods

Data collection
This study searched for papers published in affective computing
from January 1997 to September 2023 in the Web of Science Core
Collection (WoSCC), which includes the Science Citation Index Expanded, Social Sciences Citation Index, Arts & Humanities Citation Index, Emerging Sources Citation Index, Conference Proceedings Citation Index—Science (CPCI-S), and Conference Proceedings Citation Index—Social Sciences & Humanities (CPCI-SSH). The search strategy is summarized in Table 1. This study uses 1997 as the starting point of the timeline because the book Affective Computing [9], published in that year, is regarded as the work that established affective computing as an independent academic research field. Papers outside this time range were not included in the calculation of citation statistics. In the statistics of Chinese papers, Hong Kong, Macau, and Taiwan are included. The results show that 33,448 papers were published worldwide. Among them, 16,097 (48.13%) were conference papers and 17,351 (51.87%) were journal papers. It should be noted that institution names were standardized using machine and manual methods. However, because authors do not always write institution names in a standardized way, some papers may have been omitted from the statistics, causing a deviation in the index calculation results.
In addition, this study combined the following 3 databases for data acquisition: (a) InCites: This database is based on the publication date of all document types in the major index databases of the WoSCC. It performs publication counts and index calculations to provide research performance analysis. (b) Essential Science Indicators (ESI): This is an in-depth analytical research tool based on the Web of Science. ESI can identify influential countries, institutions, papers, and publications, as well as the cutting edge of a research field. (c) Journal Citation Reports (JCRs): This is a multidisciplinary journal evaluation tool that provides journal evaluation resources based on citation data statistics. By citing and counting references, the JCR can measure the influence of research at the journal level, revealing the relationships between citing and cited journals.

Data analysis
Statistical analysis was performed using a bibliometric method. Bibliometrics applies quantitative methods such as mathematics and statistics to the literature of a scientific or other field and processes statistical data based on information science theory. This widely accepted approach provides quantitative analysis pathways and innovative insights into the assessment of research trends based on previous literature [83,84]. Unlike peer review and expert judgment, bibliometrics can provide quantitative indicators to ensure objectivity through statistical analysis of academic achievements [85]. Bibliometric analysis enables monitoring and summarizing the status, hotspots, and trends of a particular topic, helping researchers identify future research directions [86]. In this study, we first cleaned and analyzed the data using the Derwent Data Analyzer (DDA, version 10, Clarivate, London, UK), which is well integrated with the source data from the Web of Science platform. DDA was used for multidimensional data mining, preprocessing, standardization, and statistical analysis. Subsequently, the bibliometric analysis and knowledge visualization software tool VOSviewer (version 1.6.15, Leiden University, Leiden, Netherlands) was employed. This tool provides valuable insights into the structure, advancement, and collaboration in the field of affective computing. Notably, its distinctive feature lies in the graphical representation of bibliometric maps, which is particularly suitable for large-scale data analysis [87]. VOSviewer was used to visualize the data in this study.

Results

Publication trends
From 1997 to 2009, the number of articles published in this field steadily increased, exhibiting an overall growth trend despite occasional fluctuations (Fig. 2). From 2010 to 2019, with the rise of deep learning, rapid development was observed in the field of affective computing, and the number of published articles rose quickly, indicating an explosive growth stage of research. After 2019, because of a plateau in the innovation of deep learning methods and the impact of the coronavirus disease 2019 (COVID-19) pandemic on academia, research in the field of affective computing also reached a plateau, and the rising trend slowed.

Comparison of countries
To analyze the main research positions in the field of affective computing, the country/region fields of all the authors and of the first author of each paper were counted. As shown in Table 2, among the top 20 countries with publications in the field of affective computing, China has the largest number of publications, accounting for 26.2% of all authors and 24.6% of first authors. China, the United States, India, the United Kingdom, and Germany rank among the top 5 in the number



of papers published counting all authors or first author, and are the most important in terms of research in the field of affective computing. The United States ranks second in the number of papers published counting all authors, but third in the number of papers published counting only first authors, after India.
In addition to the 2-year step in 2021–2022, a 4-year step was used to count the publication volume of the top 10 countries in the field of affective computing. The results are shown in Fig. 3. Given that the concept of “affective computing” originated in the United States, which has been a major research force in this field, we chose the United States as the benchmark. During the entire period, the relative volume of publications by China and the United States changed considerably, as shown in Fig. 3. From 1997 to 2004, the number of papers published by the United States far exceeded that of China. From 1997 to 2000, the total number of papers published by China was 20% of that of the United States. From 2001 to 2004, the total number of papers published by China rose to 31% of that of the United States. In the period from 2005 to 2008, the number of papers published by China surpassed that of the United States, and the number of papers published by China in 2021–2022 was about 3 times that of the United States. It can be seen that in recent years, China’s research in the field of affective computing has accumulated rapidly, and its large volume of research gives it certain advantages compared with the United States. In addition, in 2021–2022, the number of papers published by India surpassed that of the United States for the first time. India has gradually become a major research center in the field of affective computing because of its advantages in computer science, engineering, and other disciplines.

Main journals
This section analyzes basic data on journal papers. The 17,351 published papers were distributed across 1,300 journals, among which IEEE Access [impact factor (IF) 3.9] had the most (875), as shown in Table 3. Across all journals, 1,209 had an IF listed in the 2022 JCRs. The distribution of the IFs of these 1,209 journals is shown in Table 4. Among them, 54 journals have IFs greater than 10, and the 5 journals with the highest IFs are World Psychiatry (73.3), Lancet Psychiatry (64.3), Nature Reviews Neuroscience (34.7), Nature Human Behaviour (29.2), and JAMA Psychiatry (25.8). The IFs of most journals fall in the 2 intervals 2 ≤ IF < 4 and 4 ≤ IF < 7. It is worth noting that IEEE Transactions on Affective Computing (IF 11.2) is a high-level journal focusing on the field of affective computing. It is a cross-disciplinary and international archive journal aimed at disseminating the results of research on the design of systems that can recognize, interpret, and simulate human emotions and related affective phenomena. In addition, Expert Systems with Applications, Knowledge-Based Systems, Information Processing & Management, IEEE Transactions on Multimedia, Neurocomputing, Information Sciences, Pattern Recognition, Applied Soft Computing, Decision Support Systems, and Future Generation Computer Systems are also high-level journals favored by scholars in the field of affective computing.

Table 2. The top 20 countries in the field of affective computing

No.   Country (All authors)   Number of papers (All authors)   Country (First author)   Number of papers (First author)
1 China 8,780 China 8,223
2 USA 4,715 India 3,632
3 India 3,829 USA 3,274
4 UK 2,535 UK 1,432
5 Germany 1,706 Germany 1,253
6 Japan 1,321 Italy 1,022
7 Italy 1,302 Japan 977
8 Australia 1,234 South Korea 931
9 Spain 1,178 Spain 862
10 South Korea 1,121 Australia 788
11 Canada 1,100 Canada 720
12 France 943 France 587
13 Netherlands 778 Turkey 581
14 Saudi Arabia 765 Netherlands 484
15 Turkey 691 Malaysia 479
16 Singapore 640 Pakistan 454
17 Malaysia 609 Brazil 443
18 Pakistan 595 Greece 413
19 Brazil 522 Iran 398
20 Greece 483 Singapore 394
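The two counting schemes behind Table 2 (“all authors” versus “first author”) can be sketched with whole counting, where a paper credits each country at most once. A minimal sketch; the paper records below are hypothetical.

```python
from collections import Counter

# Hypothetical records: each inner list holds the authors' countries in
# byline order, so countries[0] is the first author's country.
papers = [
    ["China", "USA"],
    ["China"],
    ["USA", "China"],
    ["India", "UK"],
]

# "All authors" counting: a paper credits every country appearing on it,
# but each country at most once per paper (whole counting).
all_authors = Counter()
for countries in papers:
    for country in set(countries):
        all_authors[country] += 1

# "First author" counting: a paper credits only the first author's country.
first_author = Counter(countries[0] for countries in papers)

print(all_authors["China"], first_author["China"])  # 3 2
```

Fractional counting (splitting one credit across a paper's countries) would give different totals; Table 2 reads as whole counting, since the same paper can appear under several countries.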



Table 3. Top 20 journals with the largest number of articles in the field of affective computing

No. Journal Number of papers


1 IEEE Access 875
2 Multimedia Tools and Applications 474
3 IEEE Transactions on Affective Computing 419
4 Sensors 378
5 Frontiers in Psychology 362
6 Applied Sciences-Basel 349
7 Expert Systems with Applications 290
8 International Journal of Advanced Computer Science and Applications 272
9 Neurocomputing 248
10 Knowledge-Based Systems 226
11 Psychiatry Research 191
12 Electronics 167
13 Journal of Intelligent & Fuzzy Systems 151
14 Neural Computing & Applications 144
15 Neuropsychologia 137



16 Schizophrenia Research 135
17 Information Processing & Management 132
18 Computational Intelligence and Neuroscience 114
19 Cognitive Computation 112
20 Information Sciences 110

Fig. 2. Annual scientific production on “affective computing” from January 1, 1997 to September 25, 2023.
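The annual counts behind Fig. 2 can be grouped into the three growth phases described under Publication trends. A minimal sketch, with hypothetical publication years standing in for the real 1997–2023 dataset.

```python
from collections import Counter

# Hypothetical publication years; the real dataset spans 1997-2023.
years = [1998, 2005, 2011, 2011, 2015, 2020, 2020, 2021]
per_year = Counter(years)

# Aggregate per-year counts into the three phases named in the text:
# steady growth, explosive growth, plateau.
phases = {"1997-2009": 0, "2010-2019": 0, "2020-2023": 0}
for year, count in per_year.items():
    if year <= 2009:
        phases["1997-2009"] += count
    elif year <= 2019:
        phases["2010-2019"] += count
    else:
        phases["2020-2023"] += count

print(phases)  # {'1997-2009': 2, '2010-2019': 3, '2020-2023': 3}
```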



High-level international conferences
Combining ESI’s highly cited and hot papers with the “China Computer Federation Recommended International Academic Conferences” and the CORE Computer Science Conference Rankings, we identified the high-level international conferences related to affective computing. These include the ACM International Conference on Multimedia (ACM MM), the AAAI Conference on Artificial Intelligence (AAAI), the Annual Meeting of the Association for Computational Linguistics (ACL), the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), the IEEE International Conference on Computer Vision (ICCV), the International Conference on Affective Computing and Intelligent Interaction (ACII), the IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), and the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).

Discipline distribution
This section analyzes the distribution of research fields based on statistics on the Web of Science categories of papers in the field of affective computing. Studies related to the topic of affective

Table 4. Journal impact factor distribution

Journal impact factor Number of journals


IF ≥ 10 54
7 ≤ IF < 10 74
4 ≤ IF < 7 255



2 ≤ IF < 4 406
1 ≤ IF < 2 245
IF ≤ 1 175
Fig. 3. Comparison between the top 10 countries and the United States in the number
of publications.
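The interval assignment used in Table 4 can be sketched as a simple bucketing function. Note that the table’s bottom two rows (“1 ≤ IF < 2” and “IF ≤ 1”) overlap at IF = 1; this sketch assumes the last bin is meant as IF < 1. The journal IF values below are the ones quoted in the surrounding text.

```python
from collections import Counter

def if_bucket(impact_factor):
    """Assign a journal to the impact-factor intervals used in Table 4."""
    if impact_factor >= 10:
        return "IF >= 10"
    if impact_factor >= 7:
        return "7 <= IF < 10"
    if impact_factor >= 4:
        return "4 <= IF < 7"
    if impact_factor >= 2:
        return "2 <= IF < 4"
    if impact_factor >= 1:
        return "1 <= IF < 2"
    return "IF < 1"

# IF values named in the text.
ifs = {"IEEE Access": 3.9,
       "IEEE Transactions on Affective Computing": 11.2,
       "World Psychiatry": 73.3}
print(Counter(if_bucket(v) for v in ifs.values()))
```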

Table 5. Top 20 categories with the most papers in the field of affective computing

Web of Science category Number of papers Percentage (%)


Computer Science, Artificial Intelligence 12,687 37.90
Engineering, Electrical & Electronic 9,820 29.36
Computer Science, Information Systems 8,714 26.05
Computer Science, Theory & Methods 8,405 25.13
Computer Science, Interdisciplinary Applications 3,930 11.75
Telecommunications 3,133 9.37
Computer Science, Software Engineering 2,982 8.92
Neurosciences 2,376 7.10
Psychiatry 2,100 6.28
Computer Science, Cybernetics 1,904 5.69
Imaging Science & Photographic Technology 1,077 3.22
Engineering, Multidisciplinary 1,045 3.12
Automation & Control Systems 997 2.98
Computer Science, Hardware & Architecture 981 2.93
Psychology, Multidisciplinary 884 2.64
Robotics 793 2.37
Engineering, Biomedical 735 2.20
Acoustics 724 2.16
Linguistics 637 1.90
Clinical Neurology 610 1.82



computing involve computer science, communication, engineering, psychology, medicine, and other disciplines, reflecting distinct interdisciplinary characteristics. The top 20 categories with the largest number of publications are listed in Table 5. The category with the largest proportion is “Computer Science, Artificial Intelligence,” with 12,687 publications (37.90% of the total), followed by “Engineering, Electrical & Electronic,” with 9,820 publications (29.36% of the total).

Technology transfer and conversion
This study searched the Derwent Innovation Index, the world’s most comprehensive database of value-added patent information. Among effective invention patents with transfer records and high value, the transferred patents with an IncoPat patent value of 10 (the highest level) include “Cognitive content display device” (US10902058B2, transferred from IBM to Kyndryl Inc.) and “Signal processing approach to sentiment analysis for entities in documents” (US9436674B2, transferred from Attivio Inc. to Servicenow Inc.). However, the number of patent transfer records related to affective computing is small, indicating that technology transfer activity needs to be improved.

Table 6. Number of first authors in the field of affective computing (top 20 countries)

No.  Country      Number of scholars   No.  Country       Number of scholars
1    China        4,240                11   Canada        533
2    India        2,391                12   France        425
3    USA          2,390                13   Turkey        403
4    UK           999                  14   Netherlands   349
5    Germany      825                  15   Malaysia      331
6    Italy        690                  16   Pakistan      324
7    Japan        631                  17   Brazil        366
8    South Korea  514                  18   Greece        248
9    Spain        545                  19   Iran          270
10   Australia    496                  20   Singapore     229



Global distribution of scholars
This section presents a statistical analysis of publications based on the country of the first author to provide a macroscopic understanding of the global distribution of scholars in the field of affective computing. As shown in Table 6, China has the largest number (4,240), followed by India (2,391) and the United States (2,390). In Fig. 4, darker shading indicates a larger number of scholars. It can be seen that Asia and North America are the regions with the most concentrated distribution of scholars in the field of affective computing.

International collaboration
There is a wide range of international cooperation in the field of affective computing. A count of collaborations between the top 20 countries is shown in Table 7. The number of articles published jointly by China and the United States is the largest (641), followed by China and the United Kingdom (343). Although cooperation between China and the United States has been challenging in recent years, in the field of affective computing they remain each other’s largest partners, maintaining vital and continuous cooperation.

Important research institutions
The top 10 institutions in the world by number of publications (counting all authors) are listed in Table 8. This study used indicators such as Citation Impact, Category Normalized Citation Impact (CNCI), and Highly Cited Papers to further evaluate the influence of various institutions in the field of affective computing. Among them, CNCI is a valuable and unbiased impact indicator that excludes the influence of publication year, subject field, and document type. A CNCI value of 1 indicates that the cited performance of a group of papers is equivalent to the

Fig. 4. Global distribution of scholars in the field of affective computing.



Table 7. Collaborations between the top 20 countries in the field of affective computing

C1 U1 I1 U2 G1 J I2 A S1 S2 C2 F N S3 T S4 M P B G2
C1 / 641 79 343 79 256 59 218 43 61 137 44 28 61 12 161 38 57 6 4
U1 641 / 128 232 174 42 120 122 62 83 153 100 105 55 42 71 10 36 53 23
I1 79 128 / 73 15 15 22 37 19 33 26 26 8 67 10 48 26 11 3 2
U2 343 232 73 / 294 40 132 119 96 15 60 89 160 77 25 64 30 43 30 48
G1 79 174 15 294 / 41 69 47 39 11 43 59 99 4 17 17 5 7 13 19
J 256 42 15 40 41 / 4 20 17 7 27 15 11 6 5 18 16 1 3 1
I2 59 120 22 132 69 4 / 22 58 13 25 65 53 10 13 37 4 10 6 7
A 218 122 37 119 47 20 22 / 26 14 30 22 24 27 14 35 25 23 11 3
S1 43 62 19 96 39 17 58 26 / 15 15 42 46 23 12 11 5 10 24 18
S2 61 83 33 15 11 7 13 14 15 / 10 16 6 29 2 9 71 3 3
C2 137 153 26 60 43 27 25 30 15 10 / 45 19 40 10 9 2 12 17 3
F 44 100 26 89 59 15 65 22 42 16 45 / 39 11 4 2 8 16 18 10
N 28 105 8 160 99 11 53 24 46 6 19 39 / 19 8 3 1 11 14
S3 61 55 67 77 4 6 10 27 23 29 40 11 / 8 4 41 120 1 3
T 12 42 10 25 17 5 13 14 12 10 4 19 8 / 5 7 3 2 1



S4 161 71 48 64 17 18 37 35 11 2 9 2 8 4 5 / 4 2 1
M 38 10 26 30 5 16 4 25 5 9 2 8 3 41 7 4 / 40 3
P 57 36 11 43 7 1 10 23 10 71 12 16 1 120 3 2 40 / 3
B 6 53 3 30 13 3 6 11 24 3 17 18 11 1 2 3 /
G2 4 23 2 48 19 1 7 3 18 3 3 10 14 3 1 1 3 /
Note: C1, China; U1, USA; I1, India; U2, UK; G1, Germany; J, Japan; I2, Italy; A, Australia; S1, Spain; C2, Canada; S2, South Korea; F, France; N, Netherlands; T, Turkey;
S3, Saudi Arabia; S4, Singapore; M, Malaysia; P, Pakistan; B, Brazil; G2, Greece.
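The pairwise counting behind Table 7 can be sketched as follows: each paper contributes one count to every unordered pair of countries among its authors, giving a symmetric matrix. The paper records below are hypothetical.

```python
from collections import Counter
from itertools import combinations

# Hypothetical papers, each reduced to the distinct countries of its authors.
papers = [
    {"China", "USA"},
    {"China", "USA", "UK"},
    {"China", "UK"},
]

# Each paper contributes one count to every unordered pair of countries
# among its authors, as in the symmetric collaboration matrix of Table 7.
pairs = Counter()
for countries in papers:
    for a, b in combinations(sorted(countries), 2):
        pairs[(a, b)] += 1

print(pairs[("China", "USA")], pairs[("China", "UK")])  # 2 2
```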

global average level, a value greater than 1 indicates higher performance, and a value less than 1 indicates lower performance; a value of 2 indicates performance twice as high as the global average. The top 5 institutions according to the CNCI rankings were Nanyang Technological University (5.06), Imperial College London (3.58), Tsinghua University (3.23), the Chinese Academy of Sciences (3.15), and the University of California System (2.77).

Citation network analysis
This section analyzes the direct citations of all authors in the field of affective computing. To highlight the key authors, 40 authors who had published no fewer than 30 papers were selected for analysis. The results are shown in Fig. 5. Authors in clusters of the same color have strong correlations and inheritance in research content. Representative scholars from the 5 clusters are listed in Table 9.

Word frequency analysis
Word frequency refers to the number of times a word occurs in the document being analyzed. In scientometric research, word frequency dictionaries can be established for specific subject areas to quantify the analysis of scientists’ creative activities. Word frequency analysis is the method of extracting keywords or subject words that express the core content of the articles in the literature, to study the development trends and research hotspots of the field through the frequency distribution of these words. The results of conducting frequency and co-occurrence analysis on keywords assigned to papers by authors in the field of affective computing are shown in Table 10.
The Thomson Data Analyzer was used to automatically and manually clean the keywords assigned by the authors of papers in the dataset. Subsequently, VOSviewer was used to cluster the core (high-frequency) subject words, setting a certain co-occurrence frequency and co-occurrence intensity according to the size of the dataset. Combined with expert interpretation, each cluster was named and interpreted, and the topics of the journal articles were identified and analyzed. After keyword cleaning, 613 keywords appearing more than 20 times were selected as analysis objects for cluster calculation. Five clusters were obtained by clustering the core subject words with the highest co-occurrence intensity, as shown in Table 11 and Fig. 6.
The average number of citations of a research theme is the average number of times that a paper containing these subject words has been cited since publication, and the average correlation strength of a research theme indicates the closeness of the connection between the core subject words contained in the theme concept. The greater the correlation strength, the greater the co-occurrence intensity between the core subject words and the more concentrated the research. In contrast, relatively lower correlation is associated with more scattered research. Research on the application of affective computing in the analysis of affective disorders has the highest average citation frequency, which shows that interdisciplinary research involving affective



Table 8. Top 10 institutions by number of publications in affective computing

No.  Institution                                          Number of papers  Citation impact  CNCI  H-index  Percentage in Q1 journals  Country
1    Chinese Academy of Sciences                          699               20.97            3.15  60       59.87                      China
2    University of London                                 443               50.26            2.29  77       69.45                      UK
3    UDICE-French Research Universities                   388               18.86            1.37  42       50.43                      France
4    Centre National de la Recherche Scientifique (CNRS)  377               19.33            1.36  42       51.56                      France
5    University of California System                      371               40.83            2.77  64       58.72                      USA
6    National Institute of Technology (NIT System)        364               9.68             1.46  29       26.43                      India
7    Indian Institute of Technology System (IIT System)   360               13.51            1.99  36       44.7                       India
8    Nanyang Technological University                     350               46.35            5.06  69       68.99                      Singapore
9    Tsinghua University                                  302               24.87            3.23  44       62.93                      China
10   Imperial College London                              300               41.00            3.58  49       70.75                      UK

Notes: 1. Citation impact: The citation impact of a set of documents is calculated by dividing the total number of citations of the set by the number of documents; it shows the average number of citations received per document in the group. 2. Category Normalized Citation Impact (CNCI): The CNCI of a document is obtained by dividing its actual number of citations by the expected number of citations for documents of the same type, publication year, and subject. When a document is classified into multiple subject areas, the average of the ratios of actual to expected citations is used. The CNCI of a country is the average of the CNCIs of that country’s publications.
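The CNCI definition in the notes above can be written out directly; the citation counts below are hypothetical.

```python
def paper_cnci(actual, expected_by_category):
    """CNCI of one document: its actual citations divided by the expected
    citations for documents of the same type, publication year, and subject;
    the ratios are averaged when the document sits in several categories."""
    return sum(actual / e for e in expected_by_category) / len(expected_by_category)

def group_cnci(cncis):
    """CNCI of a group (institution or country): the mean of its papers' CNCIs."""
    return sum(cncis) / len(cncis)

# Hypothetical paper with 30 citations, indexed in two subject categories
# whose expected citation counts are 10 and 20.
print(paper_cnci(30, [10, 20]))  # (3.0 + 1.5) / 2 = 2.25
print(group_cnci([2.25, 0.75]))  # 1.5
```

A value of 2.25 would mean the paper is cited at more than twice the world average for comparable documents, matching the reading of CNCI given in the notes.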

computing and medicine, especially research on affective disorders and depression recognition, has a greater influence. The average correlation strength of multimodal sentiment analysis based on deep learning is the largest, which shows that research on this topic is the most concentrated.

Discussion
This paper presents a comprehensive analysis and review of systematically collected data on papers and major intellectual property rights in the field of affective computing. The results reveal that over the past 25 years, affective computing has experienced rapid growth in the number of published papers, representing a vibrant academic ecology and an interdisciplinary character with a wide range of disciplines. Additionally, scholars worldwide actively participate in a relatively close cooperation network. In particular, Chinese scholars have led the world in terms of the number of publications, scholars, and collaborative papers in this field. Among important research institutions, Tsinghua University and the Chinese Academy of Sciences stand out, with CNCI values indicating that the average number of citations of their papers was more than twice the global average. Citation network analysis showed that Chinese scholars are representative and have become essential nodes in the citation network, indicating that China is constructing a large-scale talent team for affective computing and progressing in both the quantity and quality of research. However, China also faces disadvantages in academic journals, international conferences, and other aspects, leading to weak dominance, which restricts the improvement of China’s academic discourse in this field. Notably, in recent years, India’s publication volume has exceeded that of the United States for the first time, revealing robust development potential linked to its advantages in computing. Nonetheless, India still has room for growth in terms of research quality and paper impact, as it lacks representative scholars in the field of affective computing.

Challenges and technology development trends
Modeling of cultural contexts
This study found that affective computing researchers are distributed across various countries globally and have a wide range of cultural backgrounds. While emotional expression has a



Fig. 5. Citation network of scholars.
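The selection step described under Citation network analysis (keeping authors with at least 30 papers) and the construction of weighted direct-citation links can be sketched as follows; the author names and counts are hypothetical.

```python
from collections import Counter

# Hypothetical data: papers per author, and (citing, cited) author pairs.
papers_per_author = {"A": 35, "B": 31, "C": 12}
citations = [("A", "B"), ("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")]

# Keep only authors above the paper threshold (the study used 30 papers,
# yielding 40 authors) before building the direct-citation network.
THRESHOLD = 30
keep = {a for a, n in papers_per_author.items() if n >= THRESHOLD}

# Weighted edges between the retained authors only.
edges = Counter((s, t) for s, t in citations if s in keep and t in keep)
print(dict(edges))  # {('A', 'B'): 2}
```

A tool such as VOSviewer then clusters this weighted network, which is how the colored author clusters of Fig. 5 are produced.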

degree of consistency across humanity, it is considerably influenced by cultural background. Cultural norms and values determine the different emotional experiences of individuals and how others perceive these emotions. Therefore, affective computing systems developed using a single cultural group may fail in other cultural contexts. For example, Chinese, Germans, and Japanese express emotions relatively implicitly, whereas Americans, Britons, and Brazilians express emotions more overtly. This indicates that emotion agents must match emotion calculation rules with the cultural context. Many Western cultural standards may not necessarily apply in Eastern contexts. For example, Japanese researchers tend to develop robots that can express emotions implicitly because overly direct expressions of emotions may cause user dissatisfaction [88]. Therefore, cultural characteristics must be considered in developing universal cross-cultural emotional agents for people from different cultural backgrounds. Hofstede defined culture in terms of 5 measures—power distance, identity, gender, uncertainty avoidance, and long-term orientation—which can be used to summarize the typical rules of emotional expression in different cultural contexts [89]. When it is challenging to obtain culture-specific empirical affective data, it is more feasible to design affective computational models using cultural theories and rules.

Emotion generation techniques
The cluster analysis of topic terms in affective computing revealed 5 important core topics, including “natural language processing techniques for affective computing and opinion mining” and “facial expression and micro-expression recognition and analysis.” Current research focuses more on emotion recognition, with relatively limited attention accorded to emotion generation. Emotion recognition and generation are both essential aspects of affective computing and constitute an important technical basis for the closed loop of human–computer interaction. To enable machines to provide more anthropomorphic and natural feedback, it is crucial to focus on the following 2 research areas. (a) Generation of facial expressions. The fact that human emotions are expressed through visual (55%), vocal (38%), and verbal (7%) signals is known as the “3V rule,” which reflects the importance of human facial expressions in emotion analysis [90]. Appropriate use of facial expressions by avatars and robots can enhance human–robot interaction. Thus, current research aims to build a lexicon of facial expressions that can translate communicative intent into associated expressive morphology and dynamic features to express various meanings. Meanwhile, a team of animation experts is required to achieve realistic facial rendering effects, including lighting and muscle textures. (b) Generation of emotional body movement. This requires the design of embodied agents using computer models of body expression. This area involves studying human kinematics; however, researchers have yet to determine how to characterize the organic combination of body parts, movement strength, and posture for specific emotional states.

Fine-grained sentiment classification models
Ekman’s basic emotion theory model is a widely used classification model for emotion computation [10]. However, in real life, people’s



Table 9. Representative scholars in the citation network

Scholar          Organization                                                Research fields
Baoliang Lu      Shanghai Jiao Tong University, China                        Brain-like computing, neural networks, deep learning, emotion AI, affective brain–computer interface
Bjoern Schuller  Imperial College London, UK                                 Machine intelligence, signal processing, affective computing, digital health, speech recognition
Erik Cambria     Nanyang Technological University, Singapore                 Affective computing, sentiment analysis, commonsense reasoning, natural language understanding
Fuji Ren         The University of Tokushima, Japan; the University of       Natural language processing, artificial intelligence, affective computing, and emotional robots
                 Electronic Science and Technology of China, China
Wenming Zheng    Southeast University, China                                 Multimodal affective computing, neural computation, pattern recognition, machine learning, and computer vision

Downloaded from https://spj.science.org on April 08, 2024


emotions often exist in a mixed state. For example, people often simultaneously express surprise and joy, sadness and pain, etc. Du et al. [91] proposed the concept of mixed emotions based on research conducted using the Facial Action Coding System (FACS). They suggested that the combination of 2 basic emotions creates mixed emotions and defined different types using scenario examples. Using a FACS-based face recognition algorithm model, microvariations in facial muscles can be analyzed to accurately discriminate between different types of mixed emotions. Martinez [92] assessed whether mixed emotions can be semantically labeled correctly. The test tasks included prioritization and forced selection of mixed emotion labels, and the results showed that subjects performed consistent and accurate categorization. Mixed emotion is an essential research direction for expression-based fine-grained emotion classification. This concept extends the core idea of FACS, aiming to reveal the relationship between mixed and basic emotions. It offers a better solution to the problem of differentiation of emotions and clarifies the relationship between differentiated emotions and their original emotions, providing traceable clues and measurement possibilities for the generation, development, and change of emotions. It summarizes complex emotional changes into a logical dynamic composite form with similar configuration effects, resulting in strong interpretability, logic, and unity.

Code of ethics and technical standards
Recording an individual's emotional state has implications for privacy, particularly when it comes to recording video or audio. Subjects may not agree to provide researchers with authentic and naturalistic emotional data and may feel uncomfortable being monitored in daily life. For example, the results of AI emotion monitoring tools may be analyzed alongside employee performance evaluations, predictions of the risk of leaving the job, and patterns of employee–team interactions for predicting behavior. Although the use of such technology reduces employee turnover and saves costs for organizations [66], employees may experience constant psychological stress, leading to burnout [93]. Additionally, individuals may lose autonomy as they become more hesitant to display emotions in public, instead choosing to use a "poker face." While there should be openness in the use of affective computing, appropriate regulation is necessary to assess potential risks involving privacy and security, and the technology should be reviewed and documented for each industry to maximize benefits while minimizing harm, risks, and costs. Ethical issues are more likely to be overlooked in computing and engineering than in psychology. The collection of individual data, particularly physiological data, should be regulated by human research ethics committees, which are best suited to managing informed consent and privacy issues. Efforts should be made to strengthen the development of international standards in the field of affective computing to form a universally accepted specification. Currently, the available standard is "Information technology—Affective computing user interface (AUI)" (standard number ISO/IEC 30150-1:2022). The first part, "Model," was released in June 2022, and the second part, "Affective Characteristics," is under construction. However, there is a lack of standards for data collection, data security, and personal privacy protection in the field of affective computing. Therefore, the International Organization for Standardization (ISO), International Electrotechnical Commission (IEC), and International Telecommunication Union (ITU) should improve relevant standards and unify them for global use.

Cognitive neuroscience-inspired affective computing
Just as CNN architectures are inspired by biological visual processing and reinforcement learning methods are inspired by behaviorist theories in psychology, spiking (impulse) neural network models are inspired by neuroplasticity. Cognitive neuroscience has also developed theories on affective circuits [94], multiple-wave models [95], embodied cognition [96], and other related areas, providing brain-inspired insights into the design of affective computation models. Studies on the physiological representations of different emotions offer theoretical foundations and guidelines for feature extraction in affective computing based on facial expressions, psychophysiological measurements, and



Table 10. Frequency analysis of top 25 keywords in affective computing

No. | Number of occurrences | Technical keyword | Number of co-occurrences with other keywords | Time period | Proportion of occurrences within last 3 years (%)
1 | 7,621 | Sentiment analysis | Machine learning [958]; Opinion mining [936]; Natural language processing [829] | 2006–2023 | 21
2 | 4,566 | Emotion recognition | Feature extraction [422]; Affective computing [397]; Deep learning [372] | 1997–2023 | 24
3 | 2,457 | Affective computing | Emotion recognition [397]; Machine learning [191]; Emotion [137] | 2000–2023 | 15
4 | 2,232 | Deep learning | Sentiment analysis [691]; Emotion recognition [372]; Machine learning [268] | 2012–2023 | 40
5 | 2,054 | Machine learning | Sentiment analysis [958]; Natural language processing [275]; Deep learning [268] | 2002–2023 | 27
6 | 1,816 | Facial expression recognition | Deep learning [182]; Feature extraction [150]; Face recognition [109] | 1997–2023 | 18
7 | 1,348 | Natural language processing | Sentiment analysis [829]; Machine learning [275]; Deep learning [209] | 2006–2023 | 30
8 | 1,214 | Feature extraction | Emotion recognition [422]; Sentiment analysis [213]; Task analysis [181] | 2003–2023 | 32
9 | 1,209 | Opinion mining | Sentiment analysis [936]; Natural language processing [159]; Machine learning [151] | 2006–2023 | 11
10 | 1,067 | Emotion | Affective computing [137]; Emotion recognition [80]; Facial expression [78] | 1999–2023 | 13
11 | 1,007 | Twitter | Sentiment analysis [770]; Machine learning [160]; Social media [145] | 2011–2023 | 18
12 | 975 | Speech emotion recognition | Deep learning [86]; Feature extraction [72]; Emotion recognition [61] | 2006–2023 | 29
13 | 852 | Social media | Sentiment analysis [587]; Twitter [145]; Machine learning [105] | 2009–2023 | 21
14 | 732 | Social cognition | Schizophrenia [193]; Emotion recognition [184]; Theory of mind [179] | 2001–2023 | 16
15 | 657 | Text mining | Sentiment analysis [486]; Natural language processing [87]; Opinion mining [87] | 2006–2023 | 15
16 | 635 | EEG | Emotion recognition [357]; Affective computing [87]; Emotion [51] | 2004–2023 | 27
17 | 620 | Classification | Sentiment analysis [208]; Machine learning [100]; Emotion recognition [84] | 2003–2023 | 19
18 | 618 | Facial expression | Emotion recognition [175]; Emotion [78]; Affective computing [49] | 1998–2023 | 15
19 | 582 | Convolutional neural network | Deep learning [146]; Facial expression recognition [102]; Emotion recognition [100] | 2003–2023 | 30
20 | 535 | Schizophrenia | Social cognition [193]; Emotion recognition [88]; Theory of mind [68] | 1998–2023 | 8
21 | 478 | Support vector machine | Sentiment analysis [123]; Facial expression recognition [79]; Emotion recognition [64] | 2002–2023 | 9
22 | 470 | Feature selection | Sentiment analysis [119]; Emotion recognition [95]; Feature extraction [59] | 2001–2023 | 16
23 | 423 | Face recognition | Feature extraction [155]; Emotion recognition [124]; Facial expression recognition [109] | 1997–2023 | 29
24 | 422 | Transfer learning | Emotion recognition [89]; Deep learning [81]; Sentiment analysis [77] | 2009–2023 | 40
25 | 404 | Data mining | Sentiment analysis [251]; Feature extraction [64]; Machine learning [64] | 2006–2023 | 22
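The co-occurrence counts in Table 10 are the number of articles whose keyword lists contain both terms of a pair. A minimal sketch of how such counts are derived — the 4 keyword lists below are hypothetical examples, not records from the study's corpus:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, one per article; the study itself
# harvested keywords from its bibliometric corpus.
articles = [
    ["sentiment analysis", "machine learning", "twitter"],
    ["sentiment analysis", "opinion mining"],
    ["emotion recognition", "feature extraction", "deep learning"],
    ["sentiment analysis", "machine learning"],
]

occurrences = Counter()     # keyword -> number of articles mentioning it
co_occurrences = Counter()  # sorted keyword pair -> number of articles mentioning both

for keywords in articles:
    unique = sorted(set(keywords))  # de-duplicate within an article; fix pair ordering
    occurrences.update(unique)
    co_occurrences.update(combinations(unique, 2))

print(occurrences["sentiment analysis"])                           # 3
print(co_occurrences[("machine learning", "sentiment analysis")])  # 2
```

Sorting each keyword list first stores every pair under one canonical key, so "machine learning × sentiment analysis" and its reverse are counted together.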

neuroimaging. Further human research in the field of cognitive neuroscience will ultimately affect the development of affective computing and artificial intelligence as a whole. The cognitive process of human brain emotion processing, its neural mechanism, and its anatomical basis provide essential inspiration for the development of affective computing models. However, to ensure that machines have genuine emotions rather than just appearing to have emotions, further research in cognitive neuroscience is required. This research may involve exploring the neural basis for the generation of human consciousness, the neural mechanism for the construction of human values, and other key scientific issues. Based on this neural theoretical foundation, simulation and machine implementation are feasible options for providing machines with authentic emotions.

Construction of large-scale multimodal datasets
The development of affective computing is highly dependent on the construction of large-scale open datasets. Three major trends are described below. The first trend predicts that dataset sizes will continue to grow to meet the demands of deep learning algorithm training. Deep-learning models have a substantial number of parameters, and the selection of these parameters requires samples that are typically 100 times the number of parameters. A larger dataset size enables the trained model to avoid overfitting, which improves model learning. However, the challenge lies in labeling these massive datasets. Thus, it is necessary to explore active, weakly supervised, and unsupervised learning methods to label the meaningful data in large unlabeled datasets or train machines for labeling. The second trend highlights the need for the collection of multimodal data, the accumulation of richer modal information, and fine-grained alignment between different modalities. At this stage, machines differ from human beings in 2 critical aspects: First, humans exist in a multimodal social environment, as evidenced by their joint expression of intentions and emotions through language, facial expressions, speech, and actions; second, humans can switch between modalities for emotional reasoning when dealing with emotions. They can also switch between different modalities to search for clues, eliminate ambiguities, and conduct emotional reasoning through interconnections. Therefore, creating a large-scale multimodal emotion dataset can contribute to the development of human-like emotion intelligence technology



Table 11. Five research themes in affective computing

No. | Research theme | Number of core subject words | Average number of citations | Average correlation strength
1 | Natural language processing techniques used for affective computing and opinion mining | 153 | 10.41 | 197.80
2 | Facial expression and micro-expression recognition and analysis | 134 | 15.89 | 178.77
3 | Affective computing studies in human–computer interaction | 121 | 18.69 | 110.38
4 | Applied research of affective computing in affective disorder analysis | 30 | 33.5 | 165.59
5 | Multimodal sentiment analysis based on deep learning | 81 | 9.8 | 260.95


Fig. 6. Five research themes in affective computing.

and the realization of more accurate emotion recognition. The third trend focuses on collecting natural-scene data, as emotional data collected in performance or evoked mode may not accurately represent real-life scenarios. However, collecting high-quality labeled emotional-physiological data in daily life remains a challenge due to the lack of hardware collection devices that are sufficiently comfortable and resistant to interference.
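The fine-grained cross-modal alignment that such datasets require can be made concrete with a small data-structure sketch; the field names and the alignment-by-timestamp scheme are illustrative assumptions rather than a published dataset format:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One time-stamped unit from a single modality (word, audio frame, video frame)."""
    start: float   # seconds from clip onset
    end: float
    payload: str   # token text, or an ID/path for an audio or video frame

@dataclass
class MultimodalSample:
    """One emotion-labeled clip holding a segment stream per modality."""
    label: str
    text: list = field(default_factory=list)
    audio: list = field(default_factory=list)
    video: list = field(default_factory=list)

    def aligned_with(self, word: Segment):
        """Return the audio and video segments that overlap a word's time span."""
        def overlaps(seg: Segment) -> bool:
            return seg.start < word.end and seg.end > word.start
        return ([s for s in self.audio if overlaps(s)],
                [s for s in self.video if overlaps(s)])

sample = MultimodalSample(label="joy")
sample.text.append(Segment(0.4, 0.9, "great"))
sample.audio.append(Segment(0.0, 0.5, "frame_000"))
sample.audio.append(Segment(0.5, 1.0, "frame_001"))
sample.video.append(Segment(0.4, 0.8, "face_012"))

audio_hits, video_hits = sample.aligned_with(sample.text[0])
# The word "great" (0.4–0.9 s) overlaps both audio frames and the face crop.
```

Here each text token is linked to whichever audio and video segments overlap it in time — the kind of word-level alignment that lets a model jointly attend to what was said and how it was said.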



Multimodal fusion technology innovation
Multimodal fusion combines information from multiple modalities using multimodal representations for sentiment classification. It can enhance the performance of sentiment-computing models by playing a complementary and disambiguating role [21]. Multimodal fusion methods can be classified as model-independent or model-based. Model-independent fusion methods do not rely on a specific deep-learning method, whereas model-based fusion methods do.

There are 3 categories of model-independent fusion methods: early fusion (feature-based fusion), late fusion (decision-based fusion), and hybrid fusion (a combination of the 2). Early fusion integrates features immediately after they are extracted and uses multiple signals to create a single feature vector, which is then modeled using machine-learning algorithms. The larger the number of features and the greater the variation in these features, the more challenging feature-level fusion becomes and the easier it is to overfit the training data. In contrast, late fusion performs integration only after each model outputs its results (e.g., classification or regression results). It handles overfitting better but does not allow the classifier to train on all data simultaneously. The Dempster–Shafer theory of evidence is a generalization of Bayesian theory to subjective probability. It is widely used in late-fusion models because of its ability to model uncertain knowledge and combine beliefs from different sources to obtain new beliefs that take into account all available evidence. Hybrid fusion combines the outputs of early fusion methods and unimodal predictors. Although it is flexible, careful design is required to determine the timing, modalities, and method of fusion based on the specific application problem and research content. Researchers must select the appropriate approach at their discretion.

Model-based fusion methods address the multimodal fusion problem through implementation techniques and models, using 3 common methods: multiple kernel learning (MKL), graphical models (GMs), and neural networks (NNs). As these methods easily exploit the spatial and temporal structure of the data, they are particularly suitable for time-related modeling tasks. Additionally, they allow human expert knowledge to be embedded in the model, thereby enhancing interpretability. However, their disadvantage is that they are computationally expensive and challenging to train.

Research has shown that synesthesia is generated not only in the cerebral cortex but also in the subcortical limbic system, including the thalamus, amygdala, and hippocampus, which are closely related to emotional processing [97]. Inspired by this multistage fusion phenomenon, in which the brain integrates multisensory information, a multistage multimodal emotion fusion method can be developed: first train a unimodal model, splice its hidden state with the features of another modality, train the bimodal model in the same way, and continue this process until a multimodal model is obtained. In conclusion, multimodal fusion technology effectively utilizes the synergistic complementarity of different modal information [57], enhances emotional understanding and expression, and improves model robustness and performance. This represents an important direction for future research.

Data- and knowledge-driven technological innovation
In its early stages, affective computing research relied heavily on collected data to make inferences. However, this data-driven approach is both inefficient and ineffective at the application level. For humans to understand data fully, they must activate other associated information, such as latent knowledge or common sense. The human brain can seamlessly combine this information to enable more generalized, intelligent, and frugal computation for complex problems. Therefore, affective computing requires not only big data and extensive computing power but also the integration of knowledge. Knowledge guidance and inspiration can compensate for insufficient or uneven data quality while conserving computational power. For instance, in constructing a multidisciplinary, multifaceted emotional knowledge map, fine-grained emotional knowledge integrated through emotional commonsense associations enables the modeling of hierarchical logical relationships between aspect words and emotional words. This approach facilitates the dynamic correlation, aggregation, and reasoning of domain, aspect, and emotional knowledge. It provides an optimal solution for various applications of affective computing, such as efficient real-time online sentiment analysis, emotion-injected dialogue systems, and emotion-injected story generation. These applications provide dynamic and accurate domain-adaptive sentiment knowledge.

Group affective computing
Current research in affective computing primarily focuses on sentiment analysis at the individual level, neglecting the potential value of group affective computing. For instance, emotions felt by individual employees can aggregate and spread to create "collective emotions" in the workplace. These shared emotions can considerably affect the organization by offering insights into absenteeism, intra-team communication, team cohesion and performance, and organizational citizenship behavior. As such, affective computing research could expand its focus from individual to collective affect analysis and the propagation of affect across people. Furthermore, group affective computing can predict consumer behavior. EEG-based hyperscanning technology, which explores dynamic brain activity between 2 or more interacting customers and their underlying neuroemotional activities, can be used to anticipate shared consumption intentions, panic buying, and group-buying marketing effects. Although group affective computing currently lacks a well-established research methodology, it is a promising direction for future studies.

Unique emotional carriers
Emotions are ubiquitous in human political, economic, and cultural life, and the carriers of emotions are continually increasing in number, making them a popular research topic. Several areas have been identified as key carriers of emotions. (a) Political speeches: CORPS is a corpus that contains political speeches with markers indicating audience reactions such as applause, standing ovations, and boos [98]. Researchers can use this information to predict emotion-evoking actions and persuasive content that may induce empathy and sympathy in audiences. (b) Music and drama: Affective computing in music and drama provides a basis for the categorized retrieval of relevant emotional carriers. Advancements in artificial intelligence-generated content (AIGC) technology have made machine-generated music possible, and affective computing can enhance the generation of music to conform to emotional classifications. (c) Oil painting: As a representative art form, oil painting allows creators to express their innermost emotions. Its charm lies not in the degree of realism but in the emotions it conveys. Combining affective computing with oil painting would enable the exploration of artificial intelligence methods for emotional expression, the integration of technology and art, and the establishment of a library of emotion-inducing materials for oil paintings, thereby providing resources for the development of affective computing disciplines.

Outlook for future applications

Affective brain–computer interfaces
Affective brain–computer interfaces (aBCIs) are primarily designed to measure emotional states through neurological measurements and to recognize and/or regulate human emotions. Currently, aBCIs are one of the main methods of realizing emotional intelligence. At this stage, the most commonly used physiological signals for emotional brain–computer interfaces are EEG signals, which map closely to an individual's emotional state. As in motor brain–computer interfaces, the human brain plays the role of a controller for the entire system. The first step involves decoding an individual's initial emotional state and then recognizing and understanding their emotions. Subsequently, a control strategy or system is designed to achieve the target emotion using control signals or parameters that provide feedback to the brain, thereby forming a closed-loop system.

Unlike facial expressions, physiological signals such as EEG signals are difficult to disguise and provide an accurate reflection of the real emotional state of the individual. As a result, affective brain–computer interfaces play a crucial role in clinical diagnostics and therapy. Their uses include detecting workload and mental state, using neurofeedback for stress relief, aiding in the diagnosis of social anxiety and other disorders [76], and enabling objective assessment of and intervention in depression. Furthermore, affective brain–computer interfaces have considerable potential for military applications. They can help maximize the physiological capabilities of individual soldiers, enhance their endurance and tolerance of extreme environments, and improve their overall physical and mental fitness. These objectives are achieved by installing electroencephalography electrodes inside combat helmets to detect threats and emotional signals emitted by the brain. The signals are then converted into computer language using computer algorithms, analyzed, and confirmed by combat command. Subsequently, threat warnings and reminders about emotional regulation are sent to the affected soldiers, and signals to cooperate in combat are transmitted to surrounding soldiers. In addition, transcranial direct current stimulation, transcranial electromagnetic stimulation, and deep brain cortex stimulation can act on the brain to eliminate fatigue, reduce stress and anxiety, control pain sensation, and enhance cognitive ability. This system helps improve the situational awareness of soldiers on the battlefield, thereby improving their ability to survive.

The primary obstacle to the application of affective brain–computer interfaces is their unstable performance. Cross-modal affective models that rely on heterogeneous transfer learning (HTL) may be necessary for establishing reliable and robust aBCI technology in complex real-world environments. To address the missing-modalities problem, cross-modal emotion models comprehensively analyze signals from multiple modalities and extract correlation characteristics during the training process. In the testing stage, predictions are made based on partial modal information. For example, correlating EEG signals with eye movement enables the use of eye movement alone to assess emotions in scenarios where collecting EEG signals is difficult. The HTL approach ensures that performance degradation in the absence of modalities is acceptable, thereby improving model robustness. In addition, transfer-learning techniques based on deep and generative adversarial networks can solve the problem of individual differences. These techniques enable generalization from the source domain to the target domain, thereby expanding the scope of possible applications of affective brain–computer interfaces.

Empathic human–computer dialogue
There have been 4 waves of change in the way people interact with machines. The first wave, represented by Microsoft, involved the organic fusion of the user interface, operating system, keyboard, and mouse. This greatly reduced the difficulty of human–computer interaction and contributed to the rapid popularization of the personal computer. The second wave, represented by Google, involved the organic integration of search engine and internet technologies. This integration broke down information silos and considerably expanded the boundaries of interaction. The third wave, represented by Apple, involved the miniaturization of computing embodied by the smartphone. This breakthrough removed the physical space limitations of human–computer interaction, enabling interconnectivity anytime, anywhere. Currently, we are in the fourth wave, represented by OpenAI. This wave involves the comprehensive application of human–computer dialogue systems that make human–computer interaction more anthropomorphic and natural.

The essence of human–computer dialogue is to make human–computer interaction more human-like. Humans exchange information through natural language and multiple senses, and human–computer interaction can imitate this process through multimodal information for joint analysis and decision-making. Human–computer dialogue involves a diverse range of signals, including speech, text, and images (such as individual facial expressions and body movements), conveying information in both the rational and perceptual dimensions. Linguistic text serves as the ontology of intent understanding, but emotional information conveyed through voice intonation, facial expressions, and body movements plays a crucial role in disambiguation, which is essential for in-depth communication between humans and machines. Using different emotional colors to express the same sentence results in entirely different connotations. As Nobel Prize winner Simon noted, emotion recognition is crucial for the communication and understanding of information. Therefore, affective computing offers machines the ability to achieve deep contextual understanding.

In advanced technology fields, research has expanded to include machine expression and action generation, referred to as "multimodal emotional expression generation." A current focus area is the development of a "virtual human" interface that not only appears human-like but also simulates human demeanor and behavior. For instance, voice-driven facial-expression animation generation technology can create virtual humans with facial expressions and lip, head, and body movements that closely resemble those of real people. The virtual human is no longer an empty shell but appears more 3-dimensional and vivid. The personalization of human–computer interaction lays crucial foundations for future applications in areas such as elderly companionship, intelligent customer service, and mayor hotlines, revealing important prospects for practical use.

Emotion-assisted decision-making
Human–computer interaction involves both shallow and deep levels. At the shallow level, machines are equipped with the ability to read and speak, whereas at the deep level, they are capable of thinking and making decisions like humans. Nobel Prize winner Kahneman described human decision-making as entailing 2 processes: fast (System 1) and slow (System 2). The unconscious "System 1" relies on emotions, experience, and rapid judgments, while the conscious "System 2" relies on rational deliberation. Emotions play an important role in advanced human thinking and decision-making. The book "Descartes' Error" emphasizes that emotions are crucial for rational decision-making and behavior [99]. Numerous studies have indicated that purely rational decision-making may not always be the optimal solution for humans when dealing with problems, owing to the complexity of the social environment. Incorporating emotional factors into the decision-making process may help individuals identify better solutions. Therefore, inputting emotional variables can enable machines to make decisions in a more human-like manner. In building a harmonious human–machine symbiotic society, it is essential to master this high-level function, which is also an important direction in affective computing research. The modeling of machine agents has begun to incorporate patterns of emotional influence on human rational decision-making and mechanisms for deciding and interrupting behaviors based on goals [100,101].

Emotion-assisted decision-making abilities can be applied widely across various fields of human–machine collaboration. For example, in the operation of production tools, the operator's emotional state affects adherence to operating specifications, safety awareness, and accurate judgment. Monitoring and early warning of negative emotions, psychological stress, fatigue, and drowsiness can help identify potential human-caused risks to production safety. Machines can then optimize management decisions and intervene early and intelligently to avoid major accidents. In assisted driving, negative emotions such as anger and anxiety can seriously affect the driver's concentration and may lead to traffic accidents. Emotion-assisted decision-making can be incorporated into driver monitoring systems (DMS) that use facial-expression recognition technology and wearable devices to provide real-time monitoring of the driver's emotional state. This approach equips the vehicle with enhanced safety performance and improves the overall driving experience [102].

Affective virtual reality
The metaverse is generating considerable interest in both industrial and academic circles as the next generation of immersive, full-fledged internet. It is considered a theme park for digitized human beings, a virtual complex resulting from the development of cutting-edge technologies, and a utopia where the human body and consciousness can cross physical time and space. As a new type of future living space, the development of the metaverse cannot be limited to creating a virtual space parallel to the real world. It should exist in human life like air, enabling humans to shuttle freely between the virtual and real worlds. Affective virtual reality is crucial for constructing the metaverse because it can considerably enhance an individual's experience of bodily ownership, sense of agency, and situational awareness. In particular, an individual's avatar, a core element of metaverse construction, uses voice tone, facial expressions, body movements, and gestures to express the individual's emotions richly and 3-dimensionally, creating scenes and spaces for emotional twins [103]. As in movies and literature, complex and emotionally rich avatar characters engage audiences more than simple and stable characters do. This appeal creates the illusion that avatars are alive and pass the Turing test, which enhances the audience's interest and engagement in the virtual world [104]. Affective virtual reality has considerable potential for applications in virtual reality socialization, virtual reality anchors, and virtual reality marketing.

Limitations
This bibliometric analysis has several limitations that should be acknowledged. First, the basic processing unit of information in this study is the article in its entirety, and the full content of the literature has not been systematically broken down, which may result in incomplete analysis and conclusions. Second, the assumption that the articles contain information of equal quality makes it difficult to consider the objective differences in the value of the literature. In future research, a combination of bibliometrics and content analysis could be used to enhance the reliability and accuracy of the analytical results.

Conclusion
Affective computing is a rapidly developing field with broad prospects. Emerging forces such as China and India are injecting strong momentum into the field. However, the field of affective computing also faces challenges and development trends in 10 aspects, including cultural background modeling, ethical and moral norms, and multimodal integration. Affective computing has great potential for application in 4 major fields and requires the joint efforts of researchers and industry practitioners. These efforts can make affective computing beneficial to the progress of human society by building a more anthropomorphic, harmonious, and natural human–computer symbiotic social form.

Acknowledgments
Funding: This work was supported by the National Natural Science Foundation of China (grant number T2241018), the Zhejiang Provincial Natural Science Foundation of China (grant number LQ22C090007), the National Science and Technology Major Project of the Ministry of Science and Technology of China (grant number 2021ZD0114303), and the Open Research Project of the Key Laboratory of Brain-Machine Intelligence for Information Behavior (Ministry of Education of Shanghai) (grant numbers 2023KFKT003 and 2022KFKT002).
Author contributions: G.P.: Conceptualization, methodology, writing (original draft), and funding acquisition. H.L.: Methodology, data curation, formal analysis, and visualization. Y.L.: Writing (review and editing). Y.W.: Data curation, formal analysis, and visualization. S.H.: Writing (original draft). T.L.: Resources, supervision, validation, and funding acquisition.
Competing interests: The authors declare that they have no competing interests.

Data Availability
The data and code used in this study are available from the corresponding author upon request.

References
1. Keltner D, Sauter D, Tracy J, Cowen A. Emotional expression: Advances in basic emotion theory. J Nonverbal Behav. 2019;43(2):133–160.

Pei et al. 2024 | https://doi.org/10.34133/icomputing.0076 20


2. Soleymani M, Garcia D, Jou B, Schuller B, Chang S-F, Pantic M. A survey of multimodal sentiment analysis. Image Vis Comput. 2017;65:3–14.
3. Bach DR, Dayan P. Algorithms for survival: A comparative perspective on emotions. Nat Rev Neurosci. 2017;18:311–319.
4. Chen L, Zhou M, Wu M, She J, Liu Z, Dong F, Hirota K. Three-layer weighted fuzzy support vector regression for emotional intention understanding in human–robot interaction. IEEE Trans Fuzzy Syst. 2018;26(5):2524–2538.
5. Kahneman D. Thinking, fast and slow. New York (NY): Farrar, Straus and Giroux; 2011.
6. Fanselow MS. Emotion, motivation and function. Curr Opin Behav Sci. 2018;19:105–109.
7. Lopes PN, Salovey P, Coté S, Beers M. Emotion regulation abilities and the quality of social interaction. Emotion. 2005;5:113–118.
8. Suvilehto JT, Glerean E, Dunbar RIM, Hari R, Nummenmaa L. Topography of social touching depends on emotional bonds between humans. Proc Natl Acad Sci U S A. 2015;112:13811–13816.
9. Picard RW. Affective computing. Cambridge (MA): MIT Press; 1997.
10. Ekman P. Are there basic emotions? Psychol Rev. 1992;99(3):550–553.
11. Russell JA. A circumplex model of affect. J Pers Soc Psychol. 1980;39:1161–1178.
12. Mehrabian A. Framework for a comprehensive description and measurement of emotional states. Genet Soc Gen Psychol Monogr. 1995;121(3):339–361.
13. Bakker I, Van Der Voordt T, Vink P, De Boon J. Pleasure, arousal, dominance: Mehrabian and Russell revisited. Curr Psychol. 2014;33:405–421.
14. Pozzi FA, Fersini E, Messina E, Liu B. Chapter 1—Challenges of sentiment analysis in social networks: An overview. In: Pozzi FA, Fersini E, Messina E, Liu B, editors. Sentiment analysis in social networks. Boston: Morgan Kaufmann; 2017. p. 1–11.
15. Maas AL, Daly RE, Pham PT, Huang D, Ng AY, Potts C. Learning word vectors for sentiment analysis. Poster presented at: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies; 2011; Portland, OR, USA. p. 142–150.
16. Socher R, Perelygin A, Wu J, Chuang J, Manning CD, Ng AY, Potts C. Recursive deep models for semantic compositionality over a sentiment treebank. Paper presented at: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing; 2013; Seattle, WA, USA. p. 1631–1642.
17. Blitzer J, Dredze M, Pereira F. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. Poster presented at: Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics; 2007; Prague, Czech Republic. p. 440–447.
18. Burkhardt F, Paeschke A, Rolfes M, Sendlmeier WF, Weiss B. A database of German emotional speech. Interspeech. 2005;5:1517–1520.
19. McKeown G, Valstar M, Cowie R, Pantic M, Schroder M. The SEMAINE database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Trans Affect Comput. 2011;3(1):5–17.
20. Xu L, Xu M, Yang D. Chinese emotional speech database for the detection of emotion variations. J Tsinghua Univ Nat Sci. 2009;49(S1):1413–1418.
21. Poria S, Cambria E, Bajpai R, Hussain A. A review of affective computing: From unimodal analysis to multimodal fusion. Inf Fusion. 2017;37:98–125.
22. Wang Y, Song W, Tao W, Liotta A, Yang D, Li X, Gao S, Sun Y, Ge W, Zhang W, et al. A systematic review on affective computing: Emotion models, databases, and recent advances. Inf Fusion. 2022;83–84:19–52.
23. Zhang Z, Luo P, Loy CC, Tang X. From facial expression recognition to interpersonal relation prediction. Int J Comput Vis. 2018;126:550–569.
24. Mollahosseini A, Hasani B, Mahoor MH. AffectNet: A database for facial expression, valence, and arousal computing in the wild. IEEE Trans Affect Comput. 2019;10:18–31.
25. Li S, Deng W, Du J. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. Paper presented at: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017; Honolulu, HI, USA. p. 2584–2593.
26. Li X, Pfister T, Huang X, Zhao G, Pietikäinen M. A spontaneous micro-expression database: Inducement, collection and baseline. Paper presented at: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG); 2013; Shanghai, China. p. 1–6.
27. Galvão F, Alarcão SM, Fonseca MJ. Predicting exact valence and arousal values from EEG. Sensors (Basel). 2021;21(10):3414.
28. Shalbaf A, Bagherzadeh S, Maghsoudi A. Transfer learning with deep convolutional neural network for automated detection of schizophrenia from EEG signals. Phys Eng Sci Med. 2020;43(4):1229–1239.
29. Shirahama K, Grzegorzek M. Emotion recognition based on physiological sensor data using codebook approach. In: Piętka E, Badura P, Kawa J, Wieclawek W, editors. Information technologies in medicine. Cham: Springer International Publishing; 2016. p. 27–39.
30. Koelstra S, Muhl C, Soleymani M, Lee J-S, Yazdani A, Ebrahimi T, Pun T, Nijholt A, Patras I. DEAP: A database for emotion analysis using physiological signals. IEEE Trans Affect Comput. 2012;3(1):18–31.
31. Duan R-N, Zhu J-Y, Lu B-L. Differential entropy feature for EEG-based emotion classification. Paper presented at: 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER); 2013; San Diego, CA, USA. p. 81–84.
32. Schmidt P, Reiss A, Duerichen R, Marberger C, Van Laerhoven K. Introducing WESAD, a multimodal dataset for wearable stress and affect detection. Paper presented at: Proceedings of the 20th ACM International Conference on Multimodal Interaction; 2018; Boulder, CO, USA. p. 400–408.
33. Taboada M, Brooke J, Tofiloski M, Voll K, Stede M. Lexicon-based methods for sentiment analysis. Comput Linguist. 2011;37(2):267–307.
34. Ding X, Liu B, Yu PS. A holistic lexicon-based approach to opinion mining. Paper presented at: Proceedings of the International Conference on Web Search and Web Data Mining—WSDM '08; 2008; Palo Alto, CA, USA. p. 231.
35. Mullen T, Collier N. Sentiment analysis using support vector machines with diverse information sources. Paper presented at: Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing; 2004; Barcelona, Spain. p. 412–418.
36. Pak A, Paroubek P. Text representation using dependency tree subgraphs for sentiment analysis. In: Xu J, Yu G, Zhou S, Unland R, editors. Database systems for advanced applications. Berlin, Heidelberg: Springer Berlin Heidelberg; 2011. p. 323–332.
37. Deng J, Ren F. A survey of textual emotion recognition and its challenges. IEEE Trans Affect Comput. 2023;14(1):49–67.
38. Heaton CT, Schwartz DM. Language models as emotional classifiers for textual conversation. Paper presented at: Proceedings of the 28th ACM International Conference on Multimedia; 2020; Seattle, WA, USA. p. 2918–2926.
39. Mao R, Liu Q, He K, Li W, Cambria E. The biases of pre-trained language models: An empirical study on prompt-based sentiment analysis and emotion detection. IEEE Trans Affect Comput. 2022;14(3):1743–1753.
40. Lee CM, Narayanan SS. Toward detecting emotions in spoken dialogs. IEEE Trans Audio Speech Lang Process. 2005;13(2):293–303.
41. Lugger M, Yang B. The relevance of voice quality features in speaker independent emotion recognition. Paper presented at: 2007 IEEE International Conference on Acoustics, Speech and Signal Processing—ICASSP '07; 2007; Honolulu, HI, USA. p. IV-17–IV-20.
42. Likitha MS, Gupta SRR, Hasitha K, Raju AU. Speech based human emotion recognition using MFCC. Paper presented at: 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET); 2017; Chennai, India. p. 2257–2260.
43. Bitouk D, Verma R, Nenkova A. Class-level spectral features for emotion recognition. Speech Commun. 2010;52(7–8):613–625.
44. Alisamir S, Ringeval F. On the evolution of speech representations for affective computing: A brief history and critical overview. IEEE Signal Process Mag. 2021;38(6):12–21.
45. Stappen L, Baird A, Schumann L, Schuller B. The multimodal sentiment analysis in car reviews (MuSe-CaR) dataset: Collection, insights and improvements. IEEE Trans Affect Comput. 2023;14(2):1334–1350.
46. Huang Z, Dong M, Mao Q, Zhan Y. Speech emotion recognition using CNN. Paper presented at: Proceedings of the 22nd ACM International Conference on Multimedia; 2014; New York, NY, USA. p. 801–804.
47. Neumann M, Vu NT. Improving speech emotion recognition with unsupervised representation learning on unlabeled speech. Paper presented at: ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2019; Brighton, UK. p. 7390–7394.
48. Abdelwahab M, Busso C. Domain adversarial for acoustic emotion recognition. IEEE/ACM Trans Audio Speech Lang Process. 2018;26(12):2423–2435.
49. Shan C, Gong S, McOwan PW. Facial expression recognition based on Local Binary Patterns: A comprehensive study. Image Vis Comput. 2009;27(6):803–816.
50. Chao W-L, Ding J-J, Liu J-Z. Facial expression recognition based on improved local binary pattern and class-regularized locality preserving projection. Signal Process. 2015;117:1–10.
51. James W. Review of La pathologie des émotions by Ch. Féré. Philos Rev. 1893;2:333–336.
52. Cannon WB. The James-Lange theory of emotions: A critical examination and an alternative theory. Am J Psychol. 1987;100:567–586.
53. Kim M-K, Kim M, Oh E, Kim S-P. A review on the computational methods for emotional state estimation from the human EEG. Comput Math Methods Med. 2013;2013:Article e573734.
54. Craik A, He Y, Contreras-Vidal JL. Deep learning for electroencephalogram (EEG) classification tasks: A review. J Neural Eng. 2019;16(3):Article 031001.
55. Maria MA, Akhand MAH, Shimamura T. Emotion recognition from EEG with normalized mutual information and convolutional neural network. Paper presented at: 2022 12th International Conference on Electrical and Computer Engineering (ICECE); 2022; Dhaka, Bangladesh. p. 372–375.
56. Rahman MM, Sarkar AK, Hossain MA, Hossain MS, Islam MR, Hossain MB, Quinn JMW, Moni MA. Recognition of human emotions using EEG signals: A review. Comput Biol Med. 2021;136:Article 104696.
57. D'mello SK, Kory J. A review and meta-analysis of multimodal affect detection systems. ACM Comput Surv. 2015;47(3):1–36.
58. He Z, Li Z, Yang F, Wang L, Li J, Zhou C, Pan J. Advances in multimodal emotion recognition based on brain–computer interfaces. Brain Sci. 2020;10(10):687.
59. Filippini C, Perpetuini D, Cardone D, Chiarelli AM, Merla A. Thermal infrared imaging-based affective computing and its application to facilitate human robot interaction: A review. Appl Sci. 2020;10(8):2924.
60. Spezialetti M, Placidi G, Rossi S. Emotion recognition for human-robot interaction: Recent advances and future perspectives. Front Robot AI. 2020;7:Article 532279.
61. Peng Y, Fang Y, Xie Z, Zhou G. Topic-enhanced emotional conversation generation with attention mechanism. Knowl Based Syst. 2019;163:429–437.
62. Dybala P, Ptaszynski M, Rzepka R, Araki K, Sayama K. Metaphor, humor and emotion processing in human-computer interaction. Int J Comput Linguist Res. 2013.
63. Goswamy T, Singh I, Barkati A, Modi A. Adapting a language model for controlled affective text generation. Paper presented at: Proceedings of the 28th International Conference on Computational Linguistics; 2020; Barcelona, Spain. p. 2787–2801.
64. Lei Y, Yang S, Wang X, Xie L. MsEmoTTS: Multi-scale emotion transfer, prediction, and control for emotional speech synthesis. IEEE/ACM Trans Audio Speech Lang Process. 2022;30:853–864.
65. Crawford K. Time to regulate AI that interprets human emotions. Nature. 2021;592(7853):167.
66. Ho M-T, Mantello P, Nguyen H-KT, Vuong Q-H. Affective computing scholarship and the rise of China: A view from 25 years of bibliometric data. Humanit Soc Sci Commun. 2021;8:Article 282.
67. Yadegaridehkordi E, Noor NFBM, Ayub MNB, Affal HB, Hussin NB. Affective computing in education: A systematic review and future research. Comput Educ. 2019;142:Article 103649.
68. Wu C-H, Huang Y-M, Hwang J-P. Review of affective computing in education/learning: Trends and challenges. Br J Educ Technol. 2016;47(6):1304–1323.
69. Liberati G, Veit R, Kim S, Birbaumer N, von Arnim C, Jenner A, Lulé D, Ludolph AC, Raffone A, Belardinelli MO, da Rocha JD, Sitaram R. Development of a binary fMRI-BCI for Alzheimer patients: A semantic conditioning paradigm using affective unconditioned stimuli. Paper presented at: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction; 2013; Geneva, Switzerland. p. 838–842.
70. Yuvaraj R, Murugappan M, Mohamed Ibrahim N, Iqbal Omar M, Sundaraj K, Mohamad K, Palaniappan R, Mesquita E, Satiyan M. On the analysis of EEG power, frequency and asymmetry in Parkinson's disease during emotion processing. Behav Brain Funct. 2014;10:12.
71. Baki P, Kaya H, Çiftçi E, Güleç H, Salah AA. A multimodal approach for mania level prediction in bipolar disorder. IEEE Trans Affect Comput. 2022;13(4):2119–2131.
72. Mohammadi-Ziabari SS, Treur J. Integrative biological, cognitive and affective modeling of a drug-therapy for a post-traumatic stress disorder. In: Fagan D, Martín-Vide C, O'Neill M, Vega-Rodríguez MA, editors. Theory and practice of natural computing. Cham: Springer International Publishing; 2018. p. 292–304.
73. Tivatansakul S, Ohkura M. Healthcare system focusing on emotional aspects using augmented reality—Implementation of breathing control application in relaxation service. Paper presented at: 2013 International Conference on Biometrics and Kansei Engineering; 2013; Tokyo, Japan. p. 218–222.
74. Zenonos A, Khan A, Kalogridis G, Vatsikas S, Lewis T, Sooriyabandara M. HealthyOffice: Mood recognition at work using smartphones and wearable sensors. Paper presented at: 2016 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops); 2016; Sydney, NSW, Australia. p. 1–6.
75. Weziak-Bialowolska D, Bialowolski P, Lee MT, Chen Y, VanderWeele TJ, McNeely E. Psychometric properties of flourishing scales from a comprehensive well-being assessment. Front Psychol. 2021;12:Article 652209.
76. Pei G, Xiao Q, Pan Y, Li T, Jin J. Neural evidence of face processing in social anxiety disorder: A systematic review with meta-analysis. Neurosci Biobehav Rev. 2023;152:Article 105283.
77. Pei G, Li T. A literature review of EEG-based affective computing in marketing. Front Psychol. 2021;12:Article 602843.
78. Valle-Cruz D, Fernandez-Cortez V, López-Chau A, Sandoval-Almazán R. Does Twitter affect stock market decisions? Financial sentiment analysis during pandemics: A comparative study of the H1N1 and the COVID-19 periods. Cognit Comput. 2022;14(1):372–387.
79. Gómez LM, Cáceres MN. Applying data mining for sentiment analysis in music. In: De la Prieta F, Vale Z, Antunes L, Pinto T, Campbell AT, Julián V, Neves AJR, Moreno MN, editors. Trends in cyber-physical multi-agent systems. Cham: Springer International Publishing; 2018. p. 198–205.
80. Yu L, Zhang W, Wang J, Yu Y. SeqGAN: Sequence generative adversarial nets with policy gradient. Paper presented at: Proceedings of the AAAI Conference on Artificial Intelligence; 2017; San Francisco, CA, USA. p. 31.
81. Oliveira HG. A survey on intelligent poetry generation: Languages, features, techniques, reutilisation and evaluation. Paper presented at: Proceedings of the 10th International Conference on Natural Language Generation; 2017; Santiago de Compostela, Spain. p. 11–20.
82. Zhang X, Lapata M. Chinese poetry generation with recurrent neural networks. Paper presented at: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP); 2014; Doha, Qatar. p. 670–680.
83. Mao G, Liu X, Du H, Zuo J, Wang L. Way forward for alternative energy research: A bibliometric analysis during 1994–2013. Renew Sustain Energy Rev. 2015;48:276–286.
84. Haustein S, Larivière V. The use of bibliometrics for assessing research: Possibilities, limitations and adverse effects. In: Welpe I, Wollersheim J, Ringelhan S, Osterloh M, editors. Incentives and performance: Governance of research organizations. Cham: Springer International Publishing; 2014. p. 121–139.
85. Hammarfelt B, Rushforth AD. Indicators as judgment devices: An empirical study of citizen bibliometrics in research evaluation. Res Eval. 2017;26(3):169–180.
86. Wang J, Veugelers R, Stephan P. Bias against novelty in science: A cautionary tale for users of bibliometric indicators. Res Policy. 2017;46(8):1416–1436.
87. Van Eck NJ, Waltman L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics. 2010;84(2):523–538.
88. Šabanović S. Robots in society, society in robots. Int J Soc Robotics. 2010;2:439–450.
89. Hofstede G. Culture's consequences: Comparing values, behaviors, institutions and organizations across nations. London, UK: Sage; 2001.
90. Mehrabian A. Communication without words. In: Communication theory. 2nd ed. London, UK: Routledge; 2008.
91. Du S, Tao Y, Martinez AM. Compound facial expressions of emotion. Proc Natl Acad Sci U S A. 2014;111(15):E1454–E1462.
92. Martinez AM. Computational models of face perception. Curr Dir Psychol Sci. 2017;26(3):263–269.
93. Dragano N, Lunau T. Technostress at work and mental health: Concepts and research results. Curr Opin Psychiatry. 2020;33(4):407–413.
94. LeDoux J. The emotional brain: The mysterious underpinnings of emotional life. New York, NY, USA: Simon and Schuster; 1998.
95. Pessoa L, Adolphs R. Emotion processing and the amygdala: From a 'low road' to 'many roads' of evaluating biological significance. Nat Rev Neurosci. 2010;11(11):773–782.
96. Price TF, Peterson CK, Harmon-Jones E. The emotive neuroscience of embodiment. Motiv Emot. 2012;36:27–37.
97. Cytowic RE. Synesthesia: A union of the senses. Cambridge, MA, USA: MIT Press; 2002.
98. Guerini M, Strapparava C, Stock O. CORPS: A corpus of tagged political speeches for persuasive communication processing. J Inf Technol Politics. 2008;5(1):19–32.
99. Damasio AR. Descartes' error. New York, NY, USA: Random House; 2006.
100. Scheutz M. The inherent dangers of unidirectional emotional bonds between humans and social robots. In: Lin P, Abney K, Bekey GA, editors. Robot ethics: The ethical and social implications of robotics. Cambridge (MA): MIT Press; 2011. p. 205.
101. Scheutz M, Schermerhorn P. Dynamic robot autonomy: Investigating the effects of robot decision-making in a human-robot team task. Paper presented at: 4th ACM International Conference on Human-Robot Interaction; 2009; La Jolla, CA, USA.
102. Gill R, Singh J. A review of neuromarketing techniques and emotion analysis classifiers for visual-emotion mining. Paper presented at: 2020 9th International Conference System Modeling and Advancement in Research Trends (SMART); 2020; Moradabad, India. p. 103–108.
103. Pei G, Li B, Li T, Xu R, Dong J, Jin J. Decoding emotional valence from EEG in immersive virtual reality. Paper presented at: 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC); 2022; Chiang Mai, Thailand. p. 1469–1476.
104. Ochs M, Sadek D, Pelachaud C. A formal model of emotions for an empathic rational dialog agent. Auton Agent Multi-Agent Syst. 2012;24:410–440.

Downloaded from https://spj.science.org on April 08, 2024
