Bosses Without A Heart - Socio-Demographic and Cross-Cultural Determinants of Attitude Toward Emotional AI in The Workplace
https://doi.org/10.1007/s00146-021-01290-1
ORIGINAL ARTICLE
Abstract
Biometric technologies are becoming more pervasive in the workplace, augmenting managerial processes such as hiring,
monitoring and terminating employees. Until recently, these devices consisted mainly of GPS tools that track location,
software that scrutinizes browser activity and keyboard strokes, and heat/motion sensors that monitor workstation presence.
Today, however, a new generation of biometric devices has emerged that can sense, read, monitor and evaluate the affective
state of a worker. More popularly known by its commercial moniker, Emotional AI, the technology stems from advancements
in affective computing. But whereas previous generations of biometric monitoring targeted the exterior physical body of the worker, we argue, in line with the writings of Foucault and Hardt, that emotion-recognition tools signal a far more invasive disciplinary gaze that exposes and makes vulnerable the inner regions of the worker-self. Our paper explores attitudes
towards empathic surveillance by analyzing a survey of 1015 responses of future job-seekers from 48 countries with Bayesian statistics. Our findings reveal that affect tools, left unregulated in the workplace, may lead to heightened stress and anxiety among disadvantaged ethnic, gender and income groups. We also discuss a stark cross-cultural discrepancy whereby East
Asians, compared to Western subjects, are more likely to profess a trusting attitude toward EAI-enabled automated manage-
ment. While this emerging technology is driven by neoliberal incentives to optimize the worksite and increase productivity,
ultimately, empathic surveillance may create more problems in terms of algorithmic bias, opaque decisionism, and the ero-
sion of employment relations. Thus, this paper nuances and extends emerging literature on emotion-sensing technologies in
the workplace, particularly through its highly original cross-cultural study.
AI & SOCIETY
Current applications range from Spotify's voice assistant that suggests music playlists tempered to a user's mood, Honda's automobile bio-sensors that sense whether drivers are stressed or drowsy, Grammarly's natural language processing that can detect an email's tone, Amazon's Halo bracelet that promotes mood awareness, and smart toys such as Moxie that foster a child's emotional, social and cognitive development through play-based learning exercises.

Yet, the fastest growing application of EAI is in the workplace. While legacy companies such as IBM, Unilever, and Softbank are using emotional analytics for recruitment purposes (Richardson 2020), affect tools are increasingly embedded in automated management systems. For example, to increase efficiency and productivity in call centers, the Japanese company Empath and the Boston start-up Cogito have developed voice recognition software. While Empath's technology allows managers to read the moods of employees to assess their well-being, Cogito's tone detector is designed to gauge customers' sentiments to provide better services. To de-escalate the potential for office environments to turn toxic, the US company Spot markets an AI chatbot that identifies patterns associated with workplace harassment (Fouriezos 2019). Additionally, the security company Vibraimage sells 'suspect AI' camera recognition systems to global sporting events that allegedly 'predict' criminal intention by monitoring and analyzing a person's gait, head and eye movements, as well as facial expressions (Wright 2021). Vibraimage products have been used in Russian airports, Russian and Japanese nuclear power plants, and convenience and retail stores in Japan (Kobata, personal communication, 2021).

For businesses, besides alleviating costs and administrative burdens associated with workplace wellness programs, affect recognition tools are primarily purposed to optimize efficiency, compliance, and productivity. This is accomplished through automated Human Resource (HR) systems that promise faster "measurement of individual employee performance," allowing supervisors "to encourage goal achievement, productivity and development" so that employees can benefit from "continuous feedback and coaching" (Cornerstoneondemand.com 2021). But as Hochschild (2012) notes, because the ultimate goal of emotional surveillance is to monetize a worker's affective state, emotions are no longer private or personal (p.7). Rather, emotions can be transformed into money and profit in excess of costs normally associated with the labor process.

Although there exists a growing body of literature on digital surveillance in the workplace (Ball 2010; Marciano 2019; Rosenblat 2018; Manokha 2020; Moore and Woodcock 2021), the impact of EAI on workers, managers and the labor process is understudied apart from Andrew McStay's seminal book, Emotional AI: The Rise of Empathic Media. This article identifies two major streams of interest evolving out of affect-driven automated management systems. The first centers on the legitimacy of the 'science' upon which affect technologies are predicated. Kappas (2010) asks how scientists can create technology that measures human emotions when they do not first understand what emotions are or how they are constructed. Besides highlighting the complexity of social and cultural modulators that give rise to affective states, Kappas criticizes the determinist logic of Emotional AI developers who believe accuracy and reliability "is just something that will eventually be solved with a better algorithm" (p.7).

The second concern involves the ethical and legal implications of affect-driven automated management systems. For example, while mindful of the dangers of misuse, proponents of EAI such as McStay believe that, given proper regulatory oversight, EAI is a form of biopower that can help managers find better ways of understanding and communicating with their employees. Critical labor scholars, however, maintain a far more skeptical stance, pointing out historical links between technologies of surveillance and labor exploitation which challenge the 'neutrality of technology' assumptions advanced by EAI proponents such as McStay. For example, Crawford (2021) points out that many EAI vendors insist on operating with a black-box approach that hides the algorithmic bias of their technologies under a veneer of scientific objectivity. Rhue (2019) notes that this opacity can lead to discriminatory managerial practices and abusive power relations. La Torre et al. (2019) and Rosenblat (2018) both agree that automated management can foment higher degrees of anxiety and stress through target setting, time tracking, gamification, ticketing systems and performance monitoring. Finally, Manokha (2020) and Marciano (2019) maintain that automated surveillance can erode employer–employee relations, leading to lower trust levels and stalled productivity.

On the one hand, EAI vendors claim that their technologies can assist human managers to find better ways of understanding and supervising employees as well as lead to greater levels of workplace satisfaction (Gal et al. 2020). They also insist they can help to make objective and unbiased managerial decisions about a worker's performance (Moore and Woodcock 2021). On the other hand, affect-driven automated management tools, whether operationalized through self-tracking devices or imposed externally through panoptic systems, can foment higher degrees of anxiety (La Torre et al. 2019), lower trust levels (Brougham and Haar 2017), and encourage discrimination (Rhue 2019).

We suggest that the rise of EAI in the workplace signals a novel and perhaps more insidious genus of neo-Taylorism seeking to optimize workplace efficiency, productivity and profit. Whereas previous generations of biometric devices targeted the exterior corporeality of labor, we argue empathic surveillance passes into the inner and most intimate recesses of the worker-self, exposing it to techniques
of actuarial measurement and behavioral control. Put succinctly, EAI is the latest application of "numerous strategies and techniques to subjugate bodies and control populations" (Foucault 1978) by transforming the affective state of physical labor into an emerging form of biopower. As such, we understand EAI as the most recent development by logistical regimes to maximize the productivity of populations by making bare 'life' (in this case, human emotion) its referent object. This paper nuances and extends nascent literature on emotion-sensing technologies through a highly original, cross-cultural study that focuses on future job-seekers' perceptions of EAI.

1.1 Research questions

Regardless of the issues mentioned above, empathic surveillance in the workplace is unequivocally and uncritically being ushered in as part of the 'new normal' in the golden age of big data. Similar to the influence of late nineteenth-century industrialization on HR management, the growth and unbridled acceptance of EAI in the workplace is reconfiguring age-old practices in organizational management. Thus, our study suggests the need for a systematic way of understanding how people perceive the prospects of pursuing jobs that will be monitored and assessed by automated management systems that have access to the most intimate regions of their self. Moreover, as affect detection tools migrate across national and cultural borders, especially in the context of transnational corporations, there is an urgent need to understand the cross-cultural factors that influence perceptions and understanding of the technology in the workplace.

Thus, we survey a large body of international students, 1015 future job-seekers, from 48 countries and 8 regions, and apply a combination of descriptive statistics and Bayesian multi-level analysis to answer the following research questions.

RQ1: What are the general concerns of future job-seekers regarding EAI as managers vs. AI as their replacement?
RQ2: What is the level of awareness of EAI among future job-seekers?
RQ3: How do socio-demographic and cross-cultural factors influence respondents' perception toward automated management systems?
RQ4: How do socio-demographic and cross-cultural factors influence self-rated knowledge regarding AI?
RQ5: How does self-rated familiarity with AI influence respondents' attitudes toward automated management?

To answer RQ1 and RQ2, we use descriptive statistics; for the rest of the RQs we use Bayesian statistical analysis. The intention of the survey is to better understand how socio-demographic, cultural, gender and economic factors influence perception and attitude toward three aspects of AI-enabled human resources (HR) management: job entry gatekeeping, workplace monitoring, and the threat to a worker's sense of agency, thus enabling a comprehensive and cross-culturally informed discussion of AI ethics and governance in the age of the quantified workplace. The following section provides an in-depth and critical review of the relevant literature on this subject.

2 Literature review

2.1 Philosophical background: from Taylorism to empathic surveillance

Critical labor scholars use the term 'Neo-Taylorism' to describe the post-Fordist intensification and acceleration of labor management systems that prioritize standardization, routinization and specialized techniques in assigned work tasks to maximize efficiency and productivity (Vázquez and García 2011). Whereas classical Taylorism omitted the human factor in its efficiency equation, proponents of neo-Taylorism, especially in the post-WWII era, saw a correlation between productivity and a worker's physical well-being. Yet, as Crowley et al. (2010) observe, concern for the worker came not as the result of a more compassionate or enlightened view of labor relations. Rather, it grew out of the negative consequences of an "increasingly rigorous application of the principles of scientific management" (Crowley et al. 2010, p. 421). In other words, the neo-Taylorist's fanatical obsession with efficiency placed heightened pressures on the worker, leading to a general deterioration of conditions in both blue- and white-collar occupations. Similarly, Reardon (1998) points out that the evolution of wellness programs in the latter half of the twentieth century owed less to concern for a worker's health than to empirical studies showing that illness-related absences diminished productivity and profit as well as increased the financial burden of health care costs to the employer. Critically, like classical Taylorism's corporeal obsession, twentieth-century wellness programs emphasized the physical rather than emotional health of the worker (Moore and Robinson 2015). For the most part, emotions in the workplace were deemed unstable and irrational and, as such, had no bearing on human performance or productivity (Simon 1986). This neglect reflected a larger ontological disregard for human emotions in organizational management theory (Dean 1999). These ideas were further supported by Drucker's (1992) writing on the rise of 'knowledge workers' who by definition could not be measured with the corporeal metrics and techniques associated with Taylorism. Importantly, the idea that emotions could not be quantified was largely premised on and supported by the fact that
besides scientific laboratory settings, medical institutions, or focus groups, no technologies existed in the workplace to measure a person's affective state (Davies 2015).

Contrary to prominent labor theorists in the late 70s and early 80s who understood the computer only in Taylorist terms as an efficiency-multiplier tool, Cooley (1980) warned that the computational workplace was in fact a Trojan horse. Rather than increasing efficiency and liberating the worker from the many dreary demands of repetitive tasks, Cooley argued that computers would lead to greater exploitation of social relations in the labor process. Similar to Marx's (1983) prescient warnings about the dangers of technology in Grundrisse: Foundations of the Critique of Political Economy, Cooley predicted that computer management systems would usher in a more authoritarian form of Taylorism. More than two decades later, this argument would resurface in Deleuze and Guattari's seminal A Thousand Plateaus (1987), in which they discuss how technology enchains human labor by transforming workers into biological prosthetics of the machine. Deleuze and Guattari refer to this abstraction of human labor as 'machinic enslavement', where instead of the worker using the technology, the technology uses the worker to increase productivity and profit. Following this argument, Adorno and Horkheimer (2002) contend that the technological workplace creates a novel form of indentured servitude where exchangeability and precarity are normalized. Healey (2020) argues that, as a dominant characteristic of late capitalism, the pervasiveness of digital monitoring devices in the neoliberal workplace has fundamentally eroded the qualitative character of the labor process.

The acceptance of biometric monitoring practices in the 80s exemplifies the neo-Taylorist logic of increasing centralized control over the physical body of the laborer. In this regard, EAI signals the emergence of an industrial emotional complex devoted to "psycho-physical informatics," and in turn, a new genus of wealth creation that monetizes the non-conscious data of workers in order to optimize the workplace and maximize profits. Similar to the interpellative effects of panopticism, under the invisible eye of empathic surveillance, a worker's emotions are made transparent and vulnerable to measurement, manipulation and control (Jeung et al. 2018; Gu and You 2020). Without the ability to backstage, empathic surveillance demands that a worker's persona must always be authentic and positive (Moore and Woodcock 2021). Under such conditions the regulation of emotion becomes work itself (Woodcock 2016; Cabanas and Illouz 2019). Indregard et al. (2018) refer to this type of personal estrangement that occurs under empathetic control as 'emotional dissonance.'

2.2 The rise of empathic surveillance

The rise of EAI in the workplace puts into sharp relief the informalization and monetization of affective labor. Whereas affective labor originally referred to emotional work carried out with organizational outsiders (Hardt 1999; Lupton 2016), such as in the fields of hospitality, entertainment, office work and care work, the term has now broadened to include emotional labor amongst organizational insiders (Leighton 2012). In her seminal book, The Managed Heart, Hochschild (2012) observes that emotions are not simply integral to the service economy; they are a service in themselves. In other words, emotions, especially in knowledge work and the service industry, are now construed as having exchange value. The growth of affective recognition tools in the workplace recalibrates the horizons of capital not by expanding outward into the consumer domain (like surveillance capitalism) but rather by turning inward, extracting greater value from the labor process. As a new source of wealth accumulation, mood monitoring dictates that a worker's emotional state must be surveilled, measured, and controlled. As a result, workplace performance and productivity are now intimately tied to expressions of authenticity, positivity and spontaneity (Cabanas and Illouz 2021; Davies 2015). Whereas first-generation biometric monitoring sought to optimize performance by reading the exterior body, empathic surveillance allows for control over the microsocial dynamics and inner subjective processes in more fluid and open-ended working environments (Moore and Richardson 2015). For example, affect-driven automated management vendors such as Humanyze use data analytics to optimize workplace social dynamics through wearables equipped with GPS, microphones and Bluetooth that monitor employee physical interactions and conversations. Rather than the probing eyes of a human supervisor, in the emotionally quantified workplace electronic dashboards, bio-sensors and deep learning algorithms monitor and score the performance of each and every worker, making granular second-to-second assessments that can lead to promotion, warning or termination (Mateescu and Nguyen 2019). Often these managerial decisions will simply be communicated with an automatic screen prompt or email (Lecher 2019). In the data-driven workplace, employees are no longer regarded simply as physical capital but instead as conduits of actuarial and statistical intelligence gleaned from the extraction of their non-conscious body data.

Beyond the Neo-Taylorist disregard for the human element comes the shaky science supporting emotion-sensing tools (Barrett 2017; Barrett et al. 2019; Crawford 2021; Heaven 2020). For decades now, researchers in disciplines such as neuroscience, sociology, anthropology, biology and psychology have been unable to agree on whether emotions are hard-wired into the psycho-physical make-up of the human body or whether they are social constructions contingent on social situations and understandings (Leys 2017). Added to this dispute are claims by EAI vendors that all humans manifest a discrete number of universal emotions
and that they are innate and identical from culture to culture (Crawford 2021). Problematically, as EAI technologies cross international borders, their data sets and algorithms are seldom tweaked for gender, ethnic, and cultural differences or, importantly, 'attitudinal diversity' (McStay 2021). McStay uses the term "machinic verisimilitude" (2018, p.5) to capture the sense of "good enough" that technologists and business communities are striving for without fully dealing with the social-constructivist complexities of ethnocentric, context-dependent views of emotions.

Thus, the 'science' of emotions is further problematized by a growing body of literature questioning the validity of the so-called 'universality thesis' of emotion, which serves as the foundation for the empathic media industry. Prior to advances in AI and machine learning, early research on affective computing focused on the reliability of computer vision to decipher human emotion (Picard 1997; Lisetti and Schiano 2000; Picard and Klein 2002; Russell et al. 2003). However, the claims made by computer scientists in these studies were mostly premised on Paul Ekman's (1999) now-disputed face-coding model (Crawford 2021). A review of more than 1000 academic articles on emotional expression has shown that the communication and inference of anger, fear, disgust, and other basic emotions carry significant cultural and contextual variations (Barrett et al. 2019; Chen et al. 2018). Moreover, modes of emoting are increasingly seen not as static but evolving, since cultures themselves are dynamic and unbounded (Boyd et al. 2011; Vuong and Napier 2015; Vuong 2021). The fluidity of emotions in relation to culture challenges the traditional/normative and static ways of structuring emotion datasets favored by the tech companies (McStay 2018). The fact that many job-seekers are now aware of AI hiring and are starting to game the algorithms by presenting themselves differently, using different words and facial expressions than they naturally would (Partner 2020), makes the concern over accuracy even graver. This is evidenced by the plethora of videos on YouTube by amateur and professional consultants that teach users 'how to beat AI recruiting' (Partner 2020). The implications of job-seekers migrating to crowd-sourced platforms to learn how to game the already gamified AI hiring process warrant further investigation beyond the scope of this article.

2.3 Correlates of perception of AI and empathic surveillance use in the workplace

Of the few studies on the perception of AI in the modern workplace, it is clear that the research methods to measure awareness of AI, especially EAI, and its effects are still at an early stage. Critically, there is a vacuum in the empirical literature devoted to the correlates of perception of EAI as a preeminent tool of HR management. Thus, our current study can be situated within two relevant bodies of literature: (i) technological adoption in the workplace; and (ii) AI-augmented management practice. This section reviews relevant studies on various factors that influence the perception of AI in the workplace, namely socio-demographic, behavioral, and cross-cultural.

2.3.1 Socio-demographic and cross-cultural factors

Regarding socio-demographic factors, men are found to be more willing than women to adopt new technologies, including ICTs (Ali 2012) and self-tracking mobile apps (Urueña et al. 2018). McClure (2017) also finds women report a higher level of fear related to technology that they know little about, such as AI or smart technology. This tendency might be explained by a higher level of perceived technological self-efficacy among male respondents, i.e., the belief that one is capable of performing a task using technologies (Cai et al. 2017; Huffman et al. 2013).

Higher income is also a reliable predictor of willingness to adopt new technologies (Ali 2012; McClure 2017; Urueña et al. 2018). Batte and Arnholt (2003) argue that people from dominant social classes tend to be early adopters of technology, as they can afford the risks and are often viewed as local opinion leaders. Concurringly, McClure (2017) shows technophobes are more likely to come from lower-income and non-White groups. A higher level of education has also been shown to correlate positively with attitude toward automated decision-making and news recommendations by AI (Araujo et al. 2020; Thurman et al. 2019). Damerji and Salimi (2021) find third- and fourth-year university students have higher perceived ease of use, perceived utility, and acceptance towards AI. Although these socio-demographic factors are indeed useful in predicting AI perception, most of these studies are conducted from a single-country perspective (Ali 2012; McClure 2017; Batte and Arnholt 2003; Damerji and Salimi 2021; Araujo et al. 2020). Yet, as we next discuss, there is a growing body of literature that explores the cross-cultural nuances in tech-acceptance behaviors.

Curiously, existing theories on technology adoption and acceptance such as the 'Theory of Planned Behavior', 'Theory of Reasoned Action', and 'Uses and Gratification Theory' have struggled to account for cross-cultural differences in norms and values (Taherdoost 2018). Most of these theories account for an individual's reasoning process based on a cost-and-benefit calculation. The 'Technology Acceptance Model' (TAM), despite being one of the most cited theories in the field (Davis 1989), purposefully neglects subjective norms on the grounds that they are hard to quantify (Muk and Chung 2015). Even though Venkatesh and Davis (2000) expand the original TAM model to include subjective norms, the authors' understanding of the term is based on
whether most people who are close to a person think he or she should or should not adopt a technology (p.187). Such a narrow modulator for human behavior does not capture the complexity of cultural nuances in norms, social roles, notions of self or personal values. For example, decades of psychological science research have shown that people in collectivist cultures are more likely to conform to their group's expectations compared to those in individualist cultures (Henrich 2020).

Indeed, a growing body of literature indicates the importance of cultural values for explaining the behavioral mechanism in tech-adoption. Cultural values are shown to be the antecedents to perceived risk, perceived self-efficacy, and subjective norms (Alsaleh et al. 2004, 2019; Muñoz-Leiva et al. 2018). In other words, cultural and socio-religious values play a decisive role in influencing users' perception of the risks of and rewards in the adoption of new technology. For example, a number of U.S. national surveys found that, compared to highly religious people, non-religious and less religious people (measured, for example, by the number of times they attend religious services (Brewer et al. 2020)) held a more favorable view of AI (Northeastern University and Gallup 2018; West 2018). Except for a few empirical studies that focus on Muslim populations (Adnan et al. 2019), very few studies seek to quantify and compare the effects of specific religions on tech-acceptance behaviors. Thus, this study fills such a gap in the existing literature.

2.3.2 Behavioral factors: trust and self-knowledge regarding EAI

One consistent finding in the literature is that people have little concern over job loss due to AI (Brougham and Haar 2017; Pinto dos Santos et al. 2019). For example, a recent survey of 487 pathologists indicated that nearly 75% of the participants displayed excitement and interest at the prospect of AI integration in their work (Sarwar et al. 2019). Alternatively, there is also evidence that suggests greater anxiety related to the rise of AI applications in the workplace. Brougham and Haar (2017) find in a New Zealand study that the greater an employee's awareness of these technologies, the lower their organizational commitment and career satisfaction. These findings are concurrent with previous studies that have examined the relationship between biometric surveillance and employee trust in the workplace (Rosenblat 2018; Marciano 2019; Mateescu and Nguyen 2019; Manokha 2020). Similarly, a Saudi Arabian study of medical students (Bin Dahmash et al. 2020) found that anxiety toward using AI was correlated with a higher self-perceived understanding of this technology.

Of the few studies that look at varying student attitudes toward AI across university majors, there are mixed results. For example, a 1996 study on university students' and faculty's perceptions of business ethics indicates that business and humanities majors share similar value judgements (Curren and Harich 1996). However, more recent studies concerning AI ethics provide evidence to the contrary. In terms of future sustainability, Gherheș and Obrad (2018) find Romanian students at technical universities hold more positive views of AI than their humanities counterparts. Likewise, Chen and Lee (2019) show that Taiwanese students majoring in science and engineering are more positive about AI's social impacts than those in humanities, social science, management, education and the arts. Importantly, it appears that the curricula of business schools with Association to Advance Collegiate Schools of Business (AACSB) accreditation emphasize the importance and advantages of acquiring data analytics skills to enter the increasingly AI-enabled business world but mention very little about data ethics and algorithmic bias (Clayton and Clopton 2019). It is also common for business and marketing academic journals to emphasize the positive rather than negative aspects of AI in optimizing various operations and processes (Prentice et al. 2020). Consequently, one would expect business students to be more familiar with AI and to hold more positive attitudes toward AI in HR management, the central concern explored in this paper.

Thus, this study addresses three major concerns in the present literature. First, the absence of studies on the impact of emotion-sensing technologies in the workplace calls for further research to fill the intellectual vacuum. Second, empirical studies on the subject indicate a shortage of consistent measuring and testing instruments for the determinants of AI perception. Finally, there is a clear lack of cross-cultural and cross-regional comparison of perceptions of AI use in the workplace.

3 Research design

3.1 Hypotheses

Based on the literature review on socio-demographic, behavioral, and cross-cultural factors influencing technological adoption in the workplace context, this study formulates a series of hypotheses (H) to answer RQ3, RQ4 and RQ5.

In Fig. 1A, income, male gender, business major, and school year are hypothesized to have a positive correlation with attitude toward EAI use in the workplace, the dependent variable (H1–H4). Meanwhile, self-rated knowledge regarding EAI and religiosity are hypothesized to have a negative correlation (H5, H6) with the dependent
3.3 Data treatment
Outcome variables

Attitude (continuous): The attitude toward the application of EAI in the workplace ("1" strongly disagree/very worried; "5" strongly agree/not worried). The Attitude variable is calculated by averaging the answers to three Likert-scale questions:
(1) Do you agree that a company manager should use AI/smart algorithms to measure employees' performances?
(2) Do you agree that a company manager should use AI/smart algorithms to screen job applicants?
(3) Are you worried about protecting your autonomy at work due to the wider application of AI/smart algorithms?

Familiarity (continuous): The average of four familiarity questions ("1" Not familiar; "5" Very familiar). The Familiarity variable is calculated by averaging the answers to four Likert-scale questions:
(1) How familiar are you with coding/programming?
(2) How familiar are you with the topic of EAI?
(3) How familiar are you with the concept of smart cities?
(4) How familiar are you with the topic of Artificial Intelligence (AI)?

Predictive variables

SchoolYear (ordinal/continuous): 1st, 2nd, 3rd, and 4th year.

Sex (binary): Male ("1") vs. Female ("0"). Respondents choose their biological sex.

Income (ordinal/continuous): low ("1"), middle ("2"), and high ("3"). Self-perceived level of household income.

Major (binary): Social studies ("0") vs. Business ("1"). Students are asked to specify their majors.

Religions (binary): Christianity "1" if identified; Islam "1" if identified; Buddhism "1" if identified; Atheism "1" if identified. Respondents are asked to specify their official religion or the lack thereof. There are very few Jewish and Shintoist respondents; thus they are not included in our analyses.

Religiosity (binary): "1" for the very religious, "0" for the non-religious or mildly religious. Respondents are asked to choose their level of religiosity.
Cultural notions of autonomy in the workplace are particularly relevant considering the cultural disposition of Asians toward consensus and collectivity, as opposed to the Western affinity for individualism (Henrich 2020).

3.4 Bayesian multi-level analysis

3.4.1 Model construction

Following the recent guidelines on conducting Bayesian inference (Aczel et al. 2020; Vuong et al. 2018), twelve models are constructed which gradually expand the number of variables and levels. They are then fit with the data using the Hamiltonian Monte Carlo simulation approach with the bayesvl R package (Vuong et al. 2020). All Bayesian priors are set as default, which is 'uninformative' (McElreath 2020). Each model is represented by an equation in Table 2, which seeks to establish a mathematical relationship among the variables. For example, Equation No. 1 models the linear relationship between attitude towards the use of EAI for automated HR management, the dependent variable, and four independent (exploratory) socio-demographic variables: income level, school year, biological sex, and school major. Model 10 is the most complex, as it is a multi-level model where the Region variable functions as the varying intercept and all other variables are present. The multi-level model also helps improve the estimates under sampling imbalance and explicitly studies the variation among groups. Partial pooling (or adaptive pooling) is another advantage of multi-level modeling: it enables us to produce estimates that are less underfit than complete pooling and less overfit than no-pooling (McElreath 2020; Spiegelhalter 2019). It is worth noting that models with both religion and religiosity variables are nonlinear to avoid confounding effects.
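In our own notation (a sketch based on the description above, not the authors' exact Table 2 equations), the varying-intercept structure of Model 10 can be written as:

```latex
\begin{aligned}
\text{Attitude}_i &\sim \mathrm{Normal}(\mu_i,\ \sigma)\\
\mu_i &= a_{\text{Region}[i]}
      + \beta_{\text{Income}}\,\text{Income}_i
      + \beta_{\text{Sex}}\,\text{Sex}_i
      + \beta_{\text{Major}}\,\text{Major}_i
      + \beta_{\text{SchoolYear}}\,\text{SchoolYear}_i
      + \cdots\\
a_{\text{Region}} &\sim \mathrm{Normal}(\bar{a},\ \sigma_{\text{Region}})
\end{aligned}
```

Because the regional intercepts a_Region are drawn from a common distribution, partial pooling shrinks the estimates for under-sampled regions toward the overall mean, which is how the model mitigates sampling imbalance.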
To guard against overfitting and select the model best fitted to the data, the models are compared in detail using the Pareto smoothed importance-sampling leave-one-out cross-validation (PSIS-LOO) approach, and their weights are computed to assess the plausibility of each model (La and Vuong 2019; Vehtari and Gabry 2019).

4 Results

4.1 Descriptive statistics

First, answering RQ1 on the general concerns of the job-seekers concerning EAI, we presented students a list of nine ethical problems with AI proposed by the World Economic Forum (Bossman 2016) and asked them to choose the top three. Interestingly, Fig. 2 shows the top concern for international students is essentially human-machine interaction, i.e., "Humanity. How do machines affect our behavior and interaction?", with 561 responses (55.3%). The second greatest concern, with 488 responses or 48.1%, is the security of these smart systems, i.e., "How do we keep AI safe from adversaries?". In third place is unemployment, with 467 responses or 46%, and in fourth place the unintended consequences of deploying AI, with 445 responses or 43.8%.
Although previous studies on AI integration at work have pointed out that people are not concerned about AI replacement, at least in the short term (Pinto dos Santos et al. 2019; Sarwar et al. 2019), our survey results provide a more nuanced understanding of people's perception of the various risks regarding automated management systems.

Second, concerning RQ2 on the level of awareness of EAI, when the students are asked to choose the most appropriate definition of this technology to the best of their knowledge (Fig. 3A), nearly 80% chose intelligent machines/algorithms that attempt to read (44.7%) or display (34%) the emotions of humans. This means nearly 78% chose roughly correct definitions of EAI and affective computing (McStay 2018; Richardson 2020; Rukavina et al. 2016). Meanwhile, 21.3% of the respondents chose AI that displays human consciousness.

Table 3 shows 52% of the respondents hold a worried view toward automated management, and 51% rated themselves below average regarding AI knowledge.

4.2 Technical validation

4.2.1 Convergence diagnostics

After running the MCMC analyses for all models (4 chains, 5000 iterations, 2000 warm-ups), all Rhat values equal one (1), and all the effective sample sizes (n_eff) are above 1000, suggesting a good fit with the data. The detailed results and visualizations of the diagnostic tests are in the Supplementary file. As an example, Fig. 4 presents the mixing of the Markov chains after fitting Model 10 with the data (see Table 2 for equations). The Markov chains are mixing well together, and there is no divergent chain. This indicates the coefficients reliably converge to a range of values, i.e., the posterior distribution, which we explore in the Results section.
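The Rhat statistic reported above compares between-chain and within-chain variance. A minimal sketch of the classic Gelman-Rubin formulation (ours, not the bayesvl implementation; production tools use the refined split-Rhat variant):

```python
import statistics

def rhat(chains):
    """Potential scale reduction factor for m equal-length chains of draws.

    Values close to 1 indicate the chains have mixed and converged.
    """
    m, n = len(chains), len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    grand_mean = statistics.fmean(means)
    # Between-chain variance B and mean within-chain variance W
    b = n / (m - 1) * sum((mu - grand_mean) ** 2 for mu in means)
    w = statistics.fmean([statistics.variance(c) for c in chains])
    # Pooled estimate of the posterior variance
    var_plus = (n - 1) / n * w + b / n
    return (var_plus / w) ** 0.5
```

In practice this is paired with the effective sample size (n_eff), as in the diagnostics reported above.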
Fig. 3 Familiarity of the respondents with EAI. A Students choose among three definitions of EAI. B Students rate their familiarity with the topic
4.2.2 Model comparison and robustness check

We run the PSIS-LOO test and find that all Pareto k estimates are good (k < 0.5) for all models, suggesting a good fit with the data. In Bayesian statistics, the plausibilities of models with the same outcome variable given the data are represented by weights, which must add up to 1. Three types of weights are used and reported as follows: Pseudo-BMA without Bayesian bootstrap; Pseudo-BMA with Bayesian bootstrap; and Bayesian stacking. Model 10 starkly outperforms the other models with Attitude as the outcome variable (0.999; 0.924; 0.833). Meanwhile, of the models with self-rated familiarity with EAI as the outcome variable, Model 5 fits the data best (0.821; 0.672; 0.685). We have also conducted a robustness check on the prior sensitivity of Models 10 and 5; tweaking the Bayesian priors results in no real differences in the posterior distributions, suggesting the models are robust (see Supplementary file).
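Pseudo-BMA weighting (without bootstrap), described above, is essentially a softmax over each model's estimated expected log predictive density (elpd) from PSIS-LOO. A minimal sketch with hypothetical elpd values, not numbers from this study:

```python
import math

def pseudo_bma_weights(elpds):
    """Pseudo-BMA model weights: exp(elpd_k) normalized over all models."""
    shift = max(elpds)  # subtract the maximum for numerical stability
    exps = [math.exp(e - shift) for e in elpds]
    total = sum(exps)
    return [x / total for x in exps]

# Hypothetical elpd_loo estimates for three candidate models
weights = pseudo_bma_weights([-1205.3, -1212.8, -1214.1])
# The weights add up to 1 and favor the model with the highest elpd
```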
4.3 Major findings

4.3.1 The multi-faceted nature of attitude toward EAI as automated management

The best performances belong to Models 5 and 10. Indeed, attitude toward EAI-enabled HR management is a very
Fig. 4 The mixing of the Markov chains after fitting Model 10 with the data
multi-faceted issue, as it is best predicted from a host of factors: not only socio-demographic and behavioral factors, but also cultural and political factors (religion, religiosity, and region) (Model 10). Here, we show cross-cultural factors are indeed important in predicting the attitude toward automated management, thus validating hypotheses H6 and H7 (RQ3 and RQ4). This result contradicts theories such as the Technology Acceptance Model, the Theory of Planned Behavior, or the Theory of Reasoned Action, which only prioritize cost-benefit calculation in predicting human behaviors (Davis 1989; Taherdoost 2018). Self-rated familiarity with EAI, however, is a less complicated issue. It is best predicted from basic factors such as sex, school year, income, and study major (Model 5), thus validating H12 and H13. Figure 5 shows business major and sex are the most predictive of self-rated familiarity with AI (validating H9 and H10), while income and school year's effects are ambiguous (invalidating H8 and H11).

4.3.2 Determinants of attitude toward EAI-enabled automated management

4.3.2.1 Sex, income, school year, major, familiarity

Figure 6 shows the regression results of Model 10, which has the highest goodness-of-fit among the class of models with attitude as the outcome variable. Here, students with higher income, men, business majors, and those in higher school years are likely to have a less-worried outlook toward EAI-enabled HR management, thus validating H1–H4. This is consistent with results from Models 1, 4, 9, and 10 (see Table 2 for model equations, and the Supplementary File for the details of each model's goodness-of-fit and posterior distribution). Regarding income, an explanation might be that students with higher income are likely to have higher educational attainment (Aakvik et al. 2005; Blanden and Gregg 2004) and end up in high-status occupations (Macmillan et al. 2015); thus, in all likelihood, they are more likely to become future managers who will use those AI tools to recruit and monitor their employees.

Regarding the sex variable, validating H2 and H9, our result is aligned with the literature showing that being male is correlated with higher perceived technological self-efficacy (Cai et al. 2017; Huffman et al. 2013). The fact that being a business major is correlated with less anxiety about EAI-enabled HR management might be a product of the lack of emphasis on AI's ethical and social implications in business education. Another reason may be that hoping to become a manager would incline a person to adopt the company position, thus seeing management supervision only in terms of productivity and performance
Fig. 5 Highest density interval (HDPI) plot of the posterior distribution of income, school year, sex, and major to predict self-familiarity with EAI from Model 5
results. Future studies are required to understand the underlying cause.

Model 10 shows that students who have higher self-rated familiarity with AI tend to view EAI-enabled HR management more positively (rejecting H5). This result contradicts a Saudi Arabian study of medical students (Bin Dahmash et al. 2020), which found anxiety toward using AI was correlated with a higher self-perceived understanding of this technology. This divergence from the literature can be explained by the diversity of the surveyed population, which spans 48 countries in 8 regions and many possible future professions.

4.3.3 Religions and religiosity

Our analyses show religiosity indeed negatively correlates with attitude toward EAI-enabled HR management, supporting H6. First, Model 2b shows that atheism positively correlates with attitude (Fig. 7A), while students from a religious background are found to express more concern (Fig. 7B). Curiously, Buddhist students are least likely to have a worried outlook toward non-human bosses. While Muslim students are most likely to have a negative attitude, the coefficient b_Islam_Attitude (mean = −0.10, sd = 0.09) is distributed mostly on the negative side. Christian students are more ambiguous, but the majority of the b_Christianity_Attitude distribution is on the negative side (mean = −0.10, sd = 0.09).

Higher religiosity of the Muslim and Buddhist students appears to have made the students more anxious about AI tools in human resource management. Our computation shows β_Islam_Attitude (mean = −0.16, sd = 0.10) and β_Islam_Religiosity_Attitude (mean = −0.24, sd = 0.18). There is a similar trend for the Buddhist students as well (β_Buddhism_Attitude: mean = −0.05, sd = 0.07; β_Buddhism_Religiosity_Attitude: mean = −0.15, sd = 0.19). However, the Christian respondents' high religiosity seems to generate a slight shift of the distribution toward the
Fig. 6 Density plot from Model 10 for five variables: familiarity, income, major, school year, and sex
positive range and makes the distribution wider. The mean value of β_Christian_Attitude is −0.10 (sd = 0.09), while the mean value of β_Christianity_Religiosity_Attitude is −0.05 (sd = 0.17).

4.3.4 Region

Figure 8 shows the attitudes toward AI use in HR management of the respondents from the different geographical regions are also different (validating H7). Respondents from East Asia have the lowest anxiety (a_Region[Eastern Asia] = 1.78; sd = 0.18), while respondents from Europe are the most likely to worry about the use of EAI in the workplace (a_Region[Europe] = 1.36; sd = 0.26). Such findings might be rooted in cultural differences among the regions, as well-established results from the psychology literature show stark differences between collectivist and individualist cultures (de Oliveira and Nisbett 2017). In a collectivist culture, for example, in East Asia, concerns about privacy and self-autonomy are less pronounced compared to their Western counterparts (Whitman 1985). In addition, notably, students from underdeveloped regions (Africa, Central Asia, Oceania) also tend to have a lower level of anxiety toward being managed by AI (Fig. 9).

5 Discussion

5.1 Implications

Besides being among the few cross-cultural empirical studies on the perception of EAI tools in HR management, the paper discovers that being managed by AI is the greatest AI risk perceived by the international future job-seekers, which answers RQ1 on the concerns of future job-seekers regarding AI as managers versus AI as their replacement. Moreover, the analytical insights highlight the urgent need for better education and science communication concerning the risks of AI in the workplace. Our study, in answering RQ2 on the level of awareness of EAI among the future job-seekers, shows that although nearly 80% picked a very close definition of EAI (Fig. 3A), when students are asked to rate their level of familiarity with EAI, roughly 40% rate themselves as unfamiliar or very unfamiliar and 36.7% of the respondents are unsure
Fig. 7 A The density plot of the Religion variable from Model 10: Religious students are likely to have a worried attitude toward EAI-enabled management. B HDPI interval plot of the Atheism variable from Model 2b: Non-religious students are likely to worry about the EAI-enabled management
of their level of knowledge (Fig. 3B). Finally, in exploring the effects of various factors on the attitude toward automated management (RQ3, 4, 5) via the Bayesian MCMC approach, this study also highlights various cross-cultural and socio-demographic discrepancies in concern and ignorance about the EAI-enabled management of the workplace that must be bridged to bring more equality to the AI-augmented workplace. Table 4 below summarizes the decisions regarding each hypothesis and the relevant literature.
Fig. 8 Interval plot of the Region variable: (1) Africa; (2) Central Asia; (3) East Asia; (4) Europe; (5) North America; (6) South-East Asia; (7) South Asia; (8) Oceania
Fig. 9 Count of responses in each attitude-score bin (1: very worried; 3: neutral; 5: not worried at all), shown separately for EU/North America, Chinese, Japanese, and South Korean respondents
workplace, as shown in various studies on algorithmic biases (Rhue 2019; Crawford 2021; Moore and Woodcock 2021; Buolamwini and Gebru 2018). Even though the problem of algorithmic bias has now moved to the center of public discourse in Western media (Singh 2020), when it comes to a multi-national sample, this study indicates a clear lack of knowledge, as 51% of the respondents rated themselves below average in AI knowledge (Table 3).

Past studies have shown student engagement with ethics is contingent on several factors: first, the type of curriculum adopted by higher education institutions (Culver et al. 2013); second, how the concept of bias is communicated and understood through the course literature. As such, our study indicates that university curricula would strongly benefit from the inclusion of courses on the social and ethical implications of AI in the workplace, especially in the business major, which has been shown in this paper to correlate with less concern about AI in HR management (see Fig. 6 and H3). This would correct students' misconceptions and enrich their understanding of the positive and negative potential of such technologies. Given the strong emphasis on the importance and advantages of acquiring data analytics skills in the current curricula of AACSB-accredited business schools (Clayton and Clopton 2019), ethical training and critical thinking about these technologies should be integral to the institutional higher learning epistemology that prepares younger generations for the quantified workforce.

5.1.3 Bridging the cross-cultural discrepancies

Answering RQ4 on the effects of socio-demographic and cross-cultural differences, this study shows people from different socio-cultural and economic backgrounds do tend to form different perceptions of emerging technologies (validating H1, 2, 6, 7). Here, it is worth mentioning that previous studies show employees' awareness of the presence of smart surveillance technologies negatively correlates with organizational commitment (Ball 2010; Brougham and Haar 2017). These two tendencies, combined with the risk of AI being misunderstood (Wilkens 2020), are important obstacles to overcome before such technologies can be harnessed in ways that safeguard the worker's best interests.

Our analysis also shows people from economically less developed regions (Africa, Oceania, Central Asia) exhibit less concern for EAI-enabled management, while people from more prosperous regions (Europe, Northern America) tend to be more cautious. Interestingly, however, an economically prosperous region such as East Asia correlates with less anxiety toward EAI-enabled HR management. Our data in Fig. 9 show that, among East Asians, 63.62% of the Japanese, 56.32% of the South Korean, and 41.77% of the Chinese respondents express a more accepting attitude (an average score of 3 or more on the attitude scale).
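The "accepting attitude" percentages above are simple proportions of respondents whose composite attitude score is at least 3 (neutral or better). A minimal sketch with hypothetical toy records, not the survey data:

```python
from collections import defaultdict

def accepting_share(records, threshold=3.0):
    """Proportion of respondents per group with attitude score >= threshold."""
    totals = defaultdict(int)
    accepting = defaultdict(int)
    for group, score in records:
        totals[group] += 1
        if score >= threshold:
            accepting[group] += 1
    return {g: accepting[g] / totals[g] for g in totals}

# Hypothetical toy data: (group, composite attitude score)
toy = [("Japan", 4.0), ("Japan", 3.5), ("Japan", 2.0),
       ("EU/North America", 2.0), ("EU/North America", 3.5)]
shares = accepting_share(toy)
# shares["Japan"] == 2/3; shares["EU/North America"] == 0.5
```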
For Europeans and Northern Americans, by contrast, an overwhelming majority of 75% hold a worried attitude toward being
Table 4 A summary of decisions regarding the hypotheses and relevant literature examined in this study

H1 (RQ3): Income is positively correlated with attitude toward automated management. Decision: Accept. Literature: Ali (2012); Urueña et al. (2018); McClure (2017).
H2 (RQ3): Being male is positively correlated with attitude toward automated management, while the opposite is true for females. Decision: Accept. Literature: Brewer et al. (2020); McClure (2017); Cai et al. (2017); Huffman et al. (2013).
H3 (RQ3): Business major is positively correlated with attitude toward automated management, while the opposite is true for social studies majors. Decision: Accept. Literature: Clayton and Clopton (2019); Prentice et al. (2020); Gherheș and Obrad (2018); Chen and Lee (2019).
H4 (RQ3): Number of years in higher education is positively correlated with attitude toward automated management. Decision: Accept. Literature: Thurman et al. (2019); Damerji and Salimi (2021); Araujo et al. (2020).
H5 (RQ5): Self-rated familiarity with AI is negatively correlated with attitude toward automated management. Decision: Reject. Literature: Brougham and Haar (2017); Bin Dahmash et al. (2020).
H6 (RQ3): Religiosity is negatively correlated with attitude toward automated management. Decision: Accept. Literature: Brewer et al. (2020); Northeastern University and Gallup (2018); West (2018).
H7 (RQ3): There are regional differences in the attitude toward automated management. Decision: Accept. Literature: Alsaleh et al. (2019); Muñoz-Leiva et al. (2018); Henrich (2020).
H8 (RQ4): Income is positively correlated with self-rated familiarity with AI. Decision: Reject. Literature: Ali (2012); Urueña et al. (2018); McClure (2017).
H9 (RQ4): Being male is positively correlated with self-rated familiarity with AI, while the opposite is true for females. Decision: Accept. Literature: Ali (2012); Urueña et al. (2018); McClure (2017).
H10 (RQ4): Business major is positively correlated with self-rated familiarity with AI, while the opposite is true for social studies majors. Decision: Accept. Literature: Clayton and Clopton (2019).
H11 (RQ4): Number of years in higher education is positively correlated with self-rated familiarity with AI. Decision: Reject. Literature: Thurman et al. (2019); Damerji and Salimi (2021); Araujo et al. (2020).
H12 (RQ4): Religiosity does not affect self-rated familiarity with AI. Decision: Accept. Literature: very little evidence in the literature.
H13 (RQ4): Regions do not affect self-rated familiarity with AI. Decision: Accept. Literature: very little evidence in the literature.
managed by AI. Since these East Asian countries have different political systems, the consistency of accepting attitudes toward EAI across these countries could be explained by a common factor: Confucianism. Specifically, there might be antipathy toward individual rights in Confucian cultures (Weatherley 2002), as well as a stronger emphasis on harmony, duty, and loyalty to the collective will (Vuong et al. 2020; Whitman 1985). Finally, in Confucian culture, there is much more acceptance of intervention by a higher authority, as it is thought of as a source of moral guidance (Roberts et al. 2020).

Such cross-regional and cross-cultural differences prompt us to further investigate the differences among the top ten countries represented in our sample. Controlling for all other socio-demographic and behavioral variables, the Japanese have the strongest correlation with an accepting attitude toward EAI in HR management, followed by the Vietnamese, Chinese, and Koreans (see the Supplementary file, Model 11). Indians, on the other hand, correlate with the highest level of anxiety toward automated management, followed by their Bangladeshi and Indonesian counterparts. The Japanese participants' lack of reservation about automated management is unsurprising given the extent to which workplace norms and conventions dictate unquestioning obedience, loyalty, and mandatory volunteerism (Stukas et al. 1999), especially in relation to managerial superiors (Meek 2004; Rear 2020). For example, it is an unspoken convention in Japanese corporate culture that no one leaves the office before the kacho (office head) does. Our findings suggest that, as a more invasive form of automated management, EAI may exacerbate anxiety amongst foreign workers in Japan, opening up the possibility of conflict with Japanese managers who are culturally conditioned to value conformity and loyalty, and to punish 'attitudinal diversity'. As the Japanese saying goes, "出る杭は打たれる" (deru kugi wa utareru, "the nail that sticks up must be hammered down") (Sana 1991; Luck 2019).

The empirical findings on such stark cross-cultural and cross-regional differences could help educators, businesses, and policymakers to shape their action programs to address any stakeholder's concern, or lack thereof, for the future of AI-driven work.
5.1.4 Ethical and legal implications

Our analysis has highlighted two main areas of ethical and legal concern. First, algorithmically driven management systems measure performance based on established benchmarks of what others have done in the past and what a company believes a worker should achieve in the present. Yet EAI can only quantify statistics of productivity; it does not have the ability to take into account human particularities such as attitudinal diversity, gender differences, or cultural idiosyncrasies. Automated monitoring systems are unlikely to know if a worker is ill, physically or mentally disabled, experiencing domestic problems, or simply having a bad week. Rather, automated management runs the risk of diminishing the need for the once-valued interpersonal communication skills of an HR manager. Second, while technologically mediated workplaces can provide added perks such as flexible working hours, they also run the risk of eroding labor relations due to ethical and legal grey areas over the rights of workers to have access to and control over the personal data that is gathered through automated management systems. These points are particularly salient as traditionally homogeneous workplaces, such as those in Japan, are undergoing greater cultural hybridity.

Fortunately, some policy and legislative efforts are underway. For example, the Switzerland-based UNI Global Union has established a set of ten principles for ethical AI along with ten principles for ensuring the protection of workers' data rights, seeking to promote more inclusive practices in the future workplace (Colcough 2018; UNI Global Union 2021). More recently, the European Union's (EU) draft AI regulations have identified the use of AI tools as a 'high risk' practice, including the use of AI for recruitment, promotion, performance management, task allocation, and workplace monitoring (European Commission 2021). Additionally, as of April 14, 2021, another EU draft proposal titled "Regulation on a European approach for artificial intelligence" has been leaked that seeks to regulate the collection of non-conscious data by emotion-recognition AI systems (Vincent 2021). The proposal requires that "any natural person whose personal data is being processed by an emotion-recognition system or a categorization system shall be notified that they are exposed to such a system" (European Commission 2021, p. 34).

Similar to how early twentieth-century trade unions' criticism of Taylorism led to the enactment of labor laws safeguarding the interests of factory workers, our analysis contributes to an emerging body of literature calling for greater regulatory scrutiny of algorithmic management and workforce analytics. This article opens the door for future researchers to explore strategies and practices to strengthen workers' collective bargaining power and to ensure transparency in how their data is collected and used by AI platforms and their employers. Given the increase in teleworking practices due to COVID-19 and the fact that many business enterprises are now creating their own platforms to monitor work engagement, concentration, and performance levels at a distance (Vallas and Schor 2020), our findings are timely and poignant.

5.2 Limitations and future research directions

This study suffers from several limitations. First, it has the inherent limitations of the convenience sampling method: the surveyed population is young students who study on a multicultural, bilingual campus (Nguyen et al. 2021). Although many of the respondents will find a job in Japan, the diversity in career options and locations has allowed us to discuss cultural expectations outside of Japan. According to the statistics on graduates of the academic year 2020, 56.8% (684) of 1204 graduates reported finding a job, 6.6% (80) continued to higher education, while 36.6% (440) found other options, including returning to their home countries. Regarding successful job-seekers, among international students, 85.6% (256/299) obtained an offer, while 36% (94/256) found a job outside of Japan. For Japanese graduates, 428 out of 441 job-seekers obtained an offer (not specified where, but presumably the majority are located in Japan) (APU 2021). Third, some regions such as East Asia and South-East Asia are over-represented in the sample, which is corrected for by the partial pooling of the Bayesian multi-level analysis. As such, the results should be interpreted in this context. Future studies can further explore the attitudes of working professionals regarding Emotional AI, as well as the causal mechanisms of the correlations established in this study. For example, conducting in-depth interviews and controlled experiments with respondents from diverse backgrounds can explain the influences of educational background, industry, work position, entrepreneurial experience, religious background, and geographical region.

6 Conclusions

Our study suggests three fundamental concerns for future job-seekers who will be governed and assessed in either small or large ways by non-human resource management. The first is about privacy. The increased accuracy of emotion-sensing biometric technologies relies on a further blurring of the personal/employee distinction and the harvesting of real-time subjective states. The invasive disciplinary gaze of emotion-recognition technologies does not allow for backstaging. Rather, it exposes and makes vulnerable an employee's affective inner self to top-down and, in the case of workplace wellness programs, peer-to-peer horizontal surveillance conflated as communal care initiatives.
The second is a concern for explainability. As EAI and its machine learning capabilities move toward greater complexity in automated thinking, many technologists believe that it will not be clear even to the creators of these systems how decisions are reached (Mitchell 2019). Finally, at a deeper biopolitical level, EAI represents an emerging era of automated governance where Foucauldian strategies and techniques of control are relegated to software systems. Instead of physically monitoring and confining individuals in brick-and-mortar enclosures or enacting forms of control based on the body's exteriority, the 'algorithmic governmentality' of emotion-sensing AI ultimately targets the mind and behavioral processes of workers to encourage their productivity and compliance (Mantello 2016). Our empirical results suggest that, left unregulated, EAI will only exacerbate labor-relation tensions, especially conflicts that may arise due to culture, gender, social class, ethnicity, and attitudinal disposition.

This study advances earlier biopolitical understandings of EAI as suggested by proponents such as McStay (2018). It does so by pointing out a darker discursive cloud that hangs over all forms of biopower: namely, its proprietary logic of making life its referent object, yet its willingness to compromise the human element to maximize the productivity of populations. In conclusion, the empirical cross-cultural and socio-demographic discrepancies observed in this paper seek to promote awareness and discussion, as well as serve as a platform for further intercultural research on the ethical and social implications of EAI as an emerging tool in non-human resource management.

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s00146-021-01290-1.

Acknowledgements This study is part of the project "Emotional AI in Cities: Cross Cultural Lessons from UK and Japan on Designing for an Ethical Life," funded by the JST-UKRI Joint Call on Artificial Intelligence and Society (2019) (Grant No. JPMJRX19H6). The authors would like to thank all the APU faculty members who helped us distribute the survey. Author Manh-Tung Ho would like to express his gratitude toward the SGH Foundation for their support.

References

Aakvik A, Salvanes KG, Vaage K (2005) Educational attainment and family background. German Econ Rev 6(3):377–394. https://doi.org/10.1111/j.1468-0475.2005.00138.x
Ali J (2012) Factors affecting the adoption of information and communication technologies (ICTs) for farming decisions. J Agric Food Inf 13(1):78–96. https://doi.org/10.1080/10496505.2012.636980
Alsaleh DA, Elliott MT, Fu FQ, Thakur R (2019) Cross-cultural differences in the adoption of social media. J Res Interact Mark 13(1):119–140. https://doi.org/10.1108/JRIM-10-2017-0092
APU Website (2021) Student enrolment by country/region. Retrieved 2021, August 18 from https://en.apu.ac.jp/home/about/content250/Student_Enrollment_by_CountryRegion_E.pdf
APU (2021) Job placement and advancement. Retrieved 2021, August 12 from https://en.apu.ac.jp/home/career/content9/
Araujo T, Helberger N, Kruikemeier S, de Vreese CH (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc 35(3):611–623. https://doi.org/10.1007/s00146-019-00931-w
Ball K (2010) Workplace surveillance: an overview. Labor History 51(1):87–106
Barrett LF (2017) How emotions are made: the secret life of the brain. Houghton Mifflin Harcourt, Boston
Barrett LF, Adolphs R, Marsella S, Martinez AM, Pollak SD (2019) Emotional expressions reconsidered: challenges to inferring emotion from human facial movements. Psychol Sci Public Interest 20(1):1–68. https://doi.org/10.1177/1529100619832930
Batte MT, Arnholt MW (2003) Precision farming adoption and use in Ohio: case studies of six leading-edge adopters. Comput Electron Agric 38(2):125–139
Bin Dahmash A, Alabdulkareem M, Alfutais A, Kamel AM, Alkholaiwi F, Alshehri S, Al Zahrani Y, Almoaiqel M (2020) Artificial intelligence in radiology: does it impact medical students' preference for radiology as their future career? BJR Open 2(1):20200037. https://doi.org/10.1259/bjro.20200037
Blanden J, Gregg P (2004) Family income and educational attainment: a review of approaches and evidence for Britain. Oxf Rev Econ Policy 20(2):245–263. https://doi.org/10.1093/oxrep/grh014
Bossman J (2016) Top 9 ethical issues in artificial intelligence. Retrieved 2021, May 15 from https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
Boyd R, Richerson PJ, Henrich J (2011) The cultural niche: why social learning is essential for human adaptation. Proc Natl Acad Sci 108(Supplement 2):10918. https://doi.org/10.1073/pnas.1100290108
Brewer P, Wilson D, Bingaman J, Paintsil A, Obozintsev L (2020) Media messages and U.S. public opinion about artificial intelligence. University of Delaware, Newark
Brougham D, Haar J (2017) Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA): employees' perceptions of our future workplace. J Manag Organ 24(2):239–257. https://doi.
Data availability The raw data and codes used in this paper can be org/10.1017/jmo.2016.55
accessed at https://doi.org/10.5061/dryad.c2fqz617b. Buolamwini J and Gebru T (2018) Gender shades: intersectional accu-
racy disparities in commercial gender classification. In: Proceed-
ings of the 1st conference on fairness, accountability and trans-
parency, proceedings of machine learning research. http://proce
edings.mlr.press
References Cabanas E, Illouz E (2019) Manufacturing happy citizens: How the sci-
ence and industry of happiness control our lives. Wiley, Hoboken
Aczel B, Hoekstra R, Gelman A, Wagenmakers E-J et al (2020) Discus- Cai Z, Fan X, Du J (2017) Gender and attitudes toward technology
sion points for Bayesian inference. Nat Hum Behav 4(6):561– use: a meta-analysis. Comput Educ 105:1–13. https://doi.org/10.
563. https://doi.org/10.1038/s41562-019-0807-z 1016/j.compedu.2016.11.003
Adnan AAZ, Yunus NKY, Ghouri AM (2019) Does religiosity mat- Chen S-Y, Lee C (2019) Perceptions of the impact of high-level-
ter in the era of industrial revolution 4.0? Asian Acad Manag J machine-intelligence from university students in Taiwan: the case
24(2):67–77. https://doi.org/10.2139/ssrn.3508417 for human professions, autonomous vehicles, and smart homes.
Adorno TW, Horkheimer M (2002) Dialectic of enlightenment. Stan- Sustainability. https://doi.org/10.3390/su11216133
ford University Press, Stanford
AI & SOCIETY
Chen C, Crivelli C, Garrod OGB, Schyns PG, Fernández-Dols J-M, Jack RE (2018) Distinct facial expressions represent pain and pleasure across cultures. Proc Natl Acad Sci 115(43):E10013. https://doi.org/10.1073/pnas.1807862115
Clayton PR, Clopton J (2019) Business curriculum redesign: integrating data analytics. J Educ Bus 94(1):57–63
Colclough C (2018) When algorithms hire and fire. Int Union Rights 25(3):6–7. http://www.thefutureworldofwork.org/media/35506/iur-colclough.pdf
Cooley M (1980) Computerization: Taylor's latest disguise. Econ Ind Democr 1(4):523–539
Cornerstoneondemand (2021) Adapt your people. Accelerate your business. Retrieved 2021, August 20 from https://www.cornerstoneondemand.com/releases/february2021/
Crawford K (2021) Time to regulate AI that interprets human emotions. Nature. https://doi.org/10.1038/d41586-021-00868-5
Crowley M, Tope D, Chamberlain LJ, Hodson R (2010) Neo-Taylorism at work: occupational change in the post-Fordist era. Soc Probl 57(3):421–447. https://doi.org/10.1525/sp.2010.57.3.421
Culver SM, Puri IK, Wokutch RE, Lohani V (2013) Comparison of engagement with ethics between an engineering and a business program. Sci Eng Ethics 19(2):585–597. https://doi.org/10.1007/s11948-011-9346-3
Curren MT, Harich KR (1996) Business ethics: a comparison of business and humanities students and faculty. J Educ Bus 72(1):9–11. https://doi.org/10.1080/08832323.1996.10116818
Damerji H, Salimi A (2021) Mediating effect of use perceptions on technology readiness and adoption of artificial intelligence in accounting. Acc Educ 30(2):107–130. https://doi.org/10.1080/09639284.2021.1872035
Davies W (2015) The happiness industry: how the government and big business sold us well-being. Verso Books, New York
Davis FD (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 13(3):319–340
de Oliveira S, Nisbett RE (2017) Culture changes how we think about thinking: from "Human Inference" to "Geography of Thought". Perspect Psychol Sci 12(5):782–790
Drucker PF (1992) The new society of organizations. Harvard Business Review (September–October 1992)
Ekman P (1999) Basic emotions. In: Dalgleish T, Power M (eds) Handbook of cognition and emotion. Wiley, Hoboken
European Commission (2021) Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Document 52021PC0206. https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF
Fouriezos N (2019) How A.I. could fix workplace harassment. Retrieved 2021, March 14 from https://www.ozy.com/the-new-and-the-next/ai-has-its-biases-now-it-might-also-fix-discrimination-harassment/96772/
Gal U, Jensen TB, Stein M-K (2020) Breaking the vicious cycle of algorithmic management: a virtue ethics approach to people analytics. Inf Organ 30(2):100301. https://doi.org/10.1016/j.infoandorg.2020.100301
Gherheș V, Obrad C (2018) Technical and humanities students' perspectives on the development and sustainability of artificial intelligence (AI). Sustainability. https://doi.org/10.3390/su10093066
Gu Y, You X (2020) Recovery experiences buffer against adverse well-being effects of workplace surface acting: a two-wave study of hospital nurses. J Adv Nurs 76(1):209–220. https://doi.org/10.1111/jan.14236
Hardt M (1999) Affective labor. Boundary 2 26(2):89–100
Healey K (2020) Coercion, consent, and the struggle for social media. In: Wilkins L, Clifford GC (eds) The Routledge handbook of mass media ethics. Routledge, pp 321–335
Heaven D (2020) Why faces don't always tell the truth about feelings. Nature 578:502–504. https://doi.org/10.1038/d41586-020-00507-5
Henrich J (2020) The WEIRDest people in the world: how the West became psychologically peculiar and particularly prosperous. Farrar, Straus and Giroux, New York
Hochschild AR (2012) The managed heart: commercialization of human feeling. University of California Press, California
Huffman AH, Whetten J, Huffman WH (2013) Using technology in higher education: the influence of gender roles on technology self-efficacy. Comput Hum Behav 29(4):1779–1786. https://doi.org/10.1016/j.chb.2013.02.012
Indregard AMR, Ulleberg P, Knardahl S, Nielsen MB (2018) Emotional dissonance and sickness absence among employees working with customers and clients: a moderated mediation model via exhaustion and human resource primacy. Front Psychol 9:436
Jeung DY, Kim C, Chang SJ (2018) Emotional labor and burnout: a review of the literature. Yonsei Med J 59(2):187–193
Kappas A (2010) Smile when you read this, whether you like it or not: conceptual challenges to affect detection. IEEE Trans Affect Comput 1(1):38–41. https://doi.org/10.1109/T-AFFC.2010.6
La V-P, Vuong Q-H (2019) bayesvl: visually learning the graphical structure of Bayesian networks and performing MCMC with 'Stan'. https://cran.r-project.org/web/packages/bayesvl/index.html (version 0.8.5, officially published on May 24, 2019)
La Torre G, Esposito A, Sciarra I, Chiappetta M (2019) Definition, symptoms and risk of techno-stress: a systematic review. Int Arch Occup Environ Health 92(1):13–35
Larradet F, Niewiadomski R, Barresi G, Caldwell DG, Mattos LS (2020) Toward emotion recognition from physiological signals in the wild: approaching the methodological issues in real-life data collection [Review]. Front Psychol. https://doi.org/10.3389/fpsyg.2020.01111
Lecher C (2019) How Amazon automatically tracks and fires warehouse workers for 'productivity'. Retrieved 2021, March 23 from https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations
Leighton CL (2012) Workplace emotion regulation: making the case for emotional labour and emotion work. PhD thesis, University of Western Australia, Perth
Leys R (2017) The ascent of affect: genealogy and critique. University of Chicago Press, Chicago
Lisetti C, Schiano D (2000) Automatic facial expression interpretation: where human-computer interaction, artificial intelligence and cognitive science intersect. Pragmat Cogn 8:185–235
Luck K (2019) The nail that sticks up isn't always hammered down: women, employment discrimination, and litigiousness in Japan (Publication Number 5842) [Virginia Commonwealth University]. https://doi.org/10.25772/2238-D408
Lupton D (2016) The diverse domains of quantified selves: self-tracking modes and dataveillance. Econ Soc 45(1):101–122
Macmillan L, Tyler C, Vignoles A (2015) Who gets the top jobs? The role of family background and networks in recent graduates' access to high-status professions. J Soc Policy 44(3):487–515
Manokha I (2018) Surveillance, panopticism, and self-discipline in the digital age. Surveill Soc 16(2):219–237. https://doi.org/10.24908/ss.v16i2.8346
Manokha I (2020) The implications of digital employee monitoring and people analytics for power relations in the workplace. Surveill Soc 18(4):540–554. https://doi.org/10.24908/ss.v18i4.13776
Mantello P (2016) The machine that ate bad people: the ontopolitics of the precrime assemblage. Big Data Soc 3(2):2053951716682538. https://doi.org/10.1177/2053951716682538
Marciano A (2019) Reframing biometric surveillance: from a means of inspection to a form of control. Ethics Inf Technol 21(2):127–136
Marx K (1983) Grundrisse: foundations of the critique of political economy. Penguin, London
Mateescu A, Nguyen A (2019) Explainer: workplace monitoring and surveillance. Retrieved 2021, May 15 from https://datasociety.net/wp-content/uploads/2019/02/DS_Workplace_Monitoring_Surveillance_Explainer.pdf
McClure PK (2017) "You're Fired", says the robot: the rise of automation in the workplace, technophobes, and fears of unemployment. Soc Sci Comput Rev 36(2):139–156. https://doi.org/10.1177/0894439317698637
McElreath R (2020) Statistical rethinking: a Bayesian course with examples in R and Stan. CRC Press, Boca Raton
McStay A (2018) Emotional AI: the rise of empathic media. Sage, Thousand Oaks
McStay A (2020) Emotional AI, soft biometrics and the surveillance of emotional life: an unusual consensus on privacy. Big Data Soc 7(1):2053951720904386. https://doi.org/10.1177/2053951720904386
Meek CB (2004) The dark side of Japanese management in the 1990s. J Manag Psychol 19(3):312–331. https://doi.org/10.1108/02683940410527775
Mitchell M (2019) Artificial intelligence: a guide for thinking humans. Penguin UK, London
Moore PV, Woodcock J (eds) (2021) Augmented exploitation: artificial intelligence, automation, and work. Pluto Press, London
Moore P, Robinson A (2015) The quantified self: what counts in the neoliberal workplace. New Media Soc 18(11):2774–2792. https://doi.org/10.1177/1461444815604328
Muk A, Chung C (2015) Applying the technology acceptance model in a two-country study of SMS advertising. J Bus Res 68(1):1–6. https://doi.org/10.1016/j.jbusres.2014.06.001
Muñoz-Leiva F, Mayo-Muñoz X, De la Hoz-Correa A (2018) Adoption of homesharing platforms: a cross-cultural study. J Hosp Tour Insights 1(3):220–239. https://doi.org/10.1108/JHTI-01-2018-0007
Nguyen M-H, Serik M, Vuong T-T, Ho M-T (2019) Internationalization and its discontents: help-seeking behaviors of students in a multicultural environment regarding acculturative stress and depression. Sustainability. https://doi.org/10.3390/su11071865
Nguyen M-H, Le T-T, Nguyen H-KT et al (2021) Alice in suicideland: exploring the suicidal ideation mechanism through the sense of connectedness and help-seeking behaviors. Int J Environ Res Public Health. https://doi.org/10.3390/ijerph18073681
Northeastern University and Gallup (2018) Optimism and anxiety: views on the impact of artificial intelligence and higher education's response. Retrieved 2021, August 11 from https://www.northeastern.edu/gallup/pdf/OptimismAnxietyNortheasternGallup.pdf
Ota H (2018) Internationalization of higher education: global trends and Japan's challenges. Educ Stud Jpn 12:91–105. https://doi.org/10.7571/esjkyoiku.12.91
Partner P (2020) How to beat A.I. in landing a job. Retrieved 2021, October 4 from https://bigthink.com/technologyinnovation/how-to-beat-a-i-in-landing-a-job?rebelltitem=2#rebelltitem2
Picard RW (1997) Affective computing. MIT Press, Cambridge
Picard R, Klein J (2002) Computers that recognize and respond to user emotion: theoretical and practical implications. Interact Comput 14:141–169
Picard RW (1995) Affective computing. MIT Media Laboratory Perceptual Computing Section Technical Report No. 321
Pinto dos Santos D, Giese D, Brodehl S, Chon SH, Staab W, Kleinert R, Maintz D, Baeßler B (2019) Medical students' attitude towards artificial intelligence: a multicentre survey. Eur Radiol 29(4):1640–1646. https://doi.org/10.1007/s00330-018-5601-1
Prentice C, Dominique Lopes S, Wang X (2020) Emotional intelligence or artificial intelligence—an employee perspective. J Hosp Market Manag 29(4):377–403. https://doi.org/10.1080/19368623.2019.1647124
Rear D (2020) Persisting values in the Japanese workplace: managerial attitudes towards work skills. Japan Forum. https://doi.org/10.1080/09555803.2020.1726434
Reardon J (1998) The history and impact of worksite wellness. Nurs Econ 16(3):117
Rhue L (2019) Anchored to bias: how AI-human scoring can induce and reduce bias due to the anchoring effect. SSRN J. https://doi.org/10.2139/ssrn.3492129
Richardson S (2020) Affective computing in the modern workplace. Bus Inf Rev 37(2):78–85. https://doi.org/10.1177/0266382120930866
Roberts H, Cowls J, Morley J, Taddeo M, Wang V, Floridi L (2020) The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI Soc. https://doi.org/10.1007/s00146-020-00992-2
Rosenblat A (2018) Uberland: how algorithms are rewriting the rules of work. University of California Press, California
Rukavina S, Gruss S, Hoffmann H, Tan J-W, Walter S, Traue HC (2016) Affective computing and the impact of gender and age. PLoS ONE 11(3):e0150584. https://doi.org/10.1371/journal.pone.0150584
Russell JA, Bachorowski J, Fernandez-Dols J (2003) Facial and vocal expressions of emotion. Annu Rev Psychol 54:329–349
Sana A (1991) Zen and Japanese economic performance. Int J Sociol Soc Policy 11(4):17–36. https://doi.org/10.1108/eb013135
Sarwar S, Dent A, Faust K, Richer M, Djuric U, Van Ommeren R, Diamandis P (2019) Physician perspectives on integration of artificial intelligence into diagnostic pathology. NPJ Digit Med 2(1):28. https://doi.org/10.1038/s41746-019-0106-0
Simon HA (1986) Rationality in psychology and economics. J Bus 59(4):S209–S224. http://www.jstor.org/stable/2352757
Singh M (2020) Google workers demand reinstatement and apology for fired Black AI ethics researcher. Retrieved 2021, January 23 from https://www.theguardian.com/technology/2020/dec/16/google-timnit-gebru-fired-letter-reinstated-diversity
Spiegelhalter D (2019) The art of statistics: learning from data. Penguin UK, London
Stukas AA, Snyder M, Clary EG (1999) The effects of "Mandatory Volunteerism" on intentions to volunteer. Psychol Sci 10(1):59–64. https://doi.org/10.1111/1467-9280.00107
Taherdoost H (2018) A review of technology acceptance and adoption models and theories. Procedia Manuf 22:960–967. https://doi.org/10.1016/j.promfg.2018.03.137
Telford T (2019) 'Emotion detection' AI is a $20 billion industry. New research says it can't do what it claims. Retrieved 2021, January 5 from https://www.washingtonpost.com/business/2019/07/31/emotion-detection-ai-isbillion-industry-new-research-says-it-cant-do-what-it-claims/
Thurman N, Moeller J, Helberger N, Trilling D (2019) My friends, editors, algorithms, and I. Digit J 7(4):447–469. https://doi.org/10.1080/21670811.2018.1493936
UNI Global Union (n.d.) Top 10 principles for ethical artificial intelligence. Retrieved 2021, May 14 from http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf
Urueña A, Arenas EÁ, Hidalgo A (2018) Understanding workers' adoption of productivity mobile applications: a fuzzy set qualitative comparative analysis (fsQCA). Econ Res-Ekonomska Istraživanja 31(1):967–981
Vallas S, Schor JB (2020) What do platforms do? Understanding the gig economy. Ann Rev Sociol 46:273–294. https://doi.org/10.1146/annurev-soc-121919-054857
Vázquez J, García M (2011) From Taylorism to neo Taylorism: a 100 year journey in human resource management. Int Rev Public Nonprofit Mark Madrid 8(2):111–130
Vehtari A, Gabry J (2019) Bayesian stacking and pseudo-BMA weights using the loo package. loo 2.2.0. Retrieved Dec 27 from https://mc-stan.org/loo/articles/loo2-weights.html
Venkatesh V, Davis FD (2000) A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage Sci 46(2):186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
Vincent J (2021) The EU is considering a ban on AI for mass surveillance and social credit scores. Leaked regulation suggests strong new laws on AI uses. Retrieved 2021, April 16 from https://www.theverge.com/2021/4/14/22383301/eu-ai-regulation-draft-leak-surveillance-social-credit
Vuong Q-H (2020) Reform retractions to make them more transparent. Nature 582(7811):149. https://doi.org/10.1038/d41586-020-01694-x
Vuong Q-H (2021) The semiconducting principle of monetary and environmental values exchange. Econ Bus Lett 10(3):284–290
Vuong Q-H, Napier N-K (2015) Acculturation and global mindsponge: an emerging market perspective. Int J Intercult Relat 49:354–367
Vuong Q-H, Bui Q-K, La V-P et al (2018) Cultural additivity: behavioural insights from the interaction of Confucianism, Buddhism and Taoism in folktales. Palgrave Commun 4(1):143. https://doi.org/10.1057/s41599-018-0189-2
Vuong Q-H, Ho M-T, Nguyen H-KT et al (2020) On how religions could accidentally incite lies and violence: folktales as a cultural transmitter. Palgrave Commun 6(1):82. https://doi.org/10.1057/s41599-020-0442-3
Weatherley R (2002) Harmony, hierarchy and duty based morality: the Confucian antipathy towards rights. J Asian Pacific Commun 12(2):245–267
West DM (2018) Brookings survey finds worries over AI impact on jobs and personal privacy, concern U.S. will fall behind China. Retrieved 2021 from https://www.brookings.edu/blog/techtank/2018/05/21/brookings-survey-finds-worries-over-ai-impact-on-jobs-and-personal-privacy-concern-u-s-will-fall-behind-china/
Whitman CB (1985) Privacy in Confucian and Taoist thought. In: Munro D (ed) Individualism and holism: studies in Confucian and Taoist values. Univ. of Michigan, Center for Chinese Studies, Michigan
Wilkens U (2020) Artificial intelligence in the workplace—a double-edged sword. Int J Inf Learn Technol 37(5):253–265. https://doi.org/10.1108/IJILT-02-2020-0022
Woodcock J (2016) Working the phones: control and resistance in call centres. Pluto Press, London
Wright J (2021) Suspect AI: vibraimage, emotion recognition technology and algorithmic opacity. Sci Technol Soc. https://doi.org/10.1177/09717218211003411

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.