ARTICLE INFO

Handling Editor: Jen-Her Wu

Keywords:
Asynchronous video interview (AVI)
Human–computer interaction (HCI)
Trustworthy AI
User interface (UI)
User experience (UX)

ABSTRACT

As the demand for automatic video interviews powered by artificial intelligence (AI) increases among employers in the postpandemic era, so do concerns over job applicants' trust in the technology. There are various forms of AI-based video interviews with and without the features of tangibility, immediacy, and transparency used for preemployment screening, and these features may distinctively influence applicants' trust in the technology and whether they engage in or disengage from the hiring process accordingly. This field study tested the effects of various forms of AI-based video interviews on interviewees' cognitive and affective trust, based on the self-reports of 152 real job applicants. The study found that AI used in asynchronous video interviews (AI-AVI) increased applicants' cognitive trust relative to the non-AI condition. Moreover, when the AI-AVI had the features of tangibility and transparency, the applicants' cognitive and affective trust increased. However, the feature of immediacy did not have a statistically significant impact. Contrary to concerns over the potential negative effects caused by AI and its features, no statistically significant negative impacts were found in this study.
1. Introduction

Nine out of 10 employers now use artificial intelligence (AI) in employment interviews, according to Harvard Business Review (Jaser et al., 2022). The rapid adoption of asynchronous video interviews (AVIs) involving AI for personnel selection has been studied in academic research (e.g., Gonzalez et al., 2022; Hunkenschroer & Luetge, 2022). AVIs are one-way interviews in which interviewees answer questions in front of their webcam and interviewers review the video at a later time (Mejia & Torres, 2018). AI applications use a visual-audio recognition technique in tandem with deep learning to match job applicants with job vacancies by automatically assessing the applicants' verbal (e.g., content), paraverbal (e.g., prosody), and nonverbal (e.g., head twist) signals, which helps employers screen their applicants (Hickman et al., 2022). However, a lack of humanity and transparency can be fatal flaws of AI video interviews, which raises concerns about applicants' trust in such automated systems (Jaser et al., 2022). Therefore, many AI service providers have tried to add interfaces that meet human norms to increase trust in AI (Chi et al., 2021). Trust is defined in this study as "trustworthiness perceptions toward the characteristics of automated systems that support decisions that affect individuals' fates" (Langer et al., 2022).

There are a variety of interfaces that simulate human interaction and promote trustworthiness through the use of tangibility, immediacy, or transparency (Glikson & Woolley, 2020). For example, job applicants talk to a void in the AI-based AVI platform HireVue.com (no tangibility, immediacy, or transparency). Robot Vera (ai.robotvera.com) is an AI-based avatar used to interview and interact with job applicants (tangibility). HRDA.pro has a chatbot option by which applicants are aware that they can communicate with the AI in the AVI (immediacy). Pymetrics.ai adopts game-based AI agents to evaluate job applicants' capabilities and explains the "science" behind the game to applicants (transparency). Studies of human–computer interaction have found that user trust in AI is a critical factor in how humans interact with AI; therefore, trust in AI is an important consideration in AI interface design (Chi et al., 2021).

On the one hand, AI interview technologies empower the personnel selection process (Woods et al., 2020); on the other hand, various AI interview interfaces may influence job applicants' feelings, including cognitive and affective trust (Glikson & Woolley, 2020), in different ways and may accordingly affect their withdrawal from the recruitment process (Basch et al., 2020). Therefore, it is essential for both scholars and practitioners to examine whether job applicants respond with trust when interacting with various AI interview interfaces, since applicants' trust in AI interviewing affects their application willingness and thereby alters employers' recruitment effectiveness (Basch et al., 2020).
Although various AI interfaces have been widely adopted for personnel selection, our understanding of how real job applicants interact with various AI interview interfaces, and of their trust in those interfaces, remains insufficient (Hu et al., 2021). Hu and colleagues (2021) have urged more studies to examine the consequences of applicant–AI interactions. As a result, we adopt a unique context for employment video interviews to explore how job applicants develop trust when interacting with various AI interfaces.

As mentioned, AI interfaces may have various features, including tangibility, immediacy, and transparency. To examine the effects of different AI interview interfaces on job applicants' trust, this study aims to explore 1) whether job applicants have different cognitive and affective trust perceptions toward AVIs with and without AI and 2) whether job applicants' cognitive/affective trust varies across AI interfaces with and without tangibility, immediacy, and transparency in the AI-AVI condition. Prior to answering these research questions, the authors developed AI interfaces to embody tangibility with a 2-D affinitive avatar according to the principles of anthropomorphism (Waytz et al., 2014) and uncanny valley theory (Mori, 1970), to convey immediacy with a chatbot based on a standard script (Adamopoulou & Moussiades, 2020), and to realize transparency with a text description explaining how the AI assesses an interviewee (see Bedué & Fritzsche, 2022). Afterward, this study used a series of experimental conditions in a real hiring situation with two layers: in the first layer, job applicants' trust perceptions were examined with and without AI in the AVI condition; in the second layer, job applicants' trust perceptions were examined across individual AI interfaces.

2. Background and hypotheses

Since AI-based AVIs have been incorporated into employment interviews to screen job applicants (Gupta et al., 2018; Jatobá et al., 2019), it is essential to examine how applicants react differently toward these new digital selection tools, such as by investigating their trust in AI interviews (Woods et al., 2020). Humans' trust in AI influences how they interact with machines, and the properties of AI interfaces shape human trust, which influences how job applicants engage in the AI selection process (van Esch & Black, 2019). Trust has become a critical challenge in implementing AI-based AVIs (see Tambe et al., 2019).

2.1. Job applicants' trust in AI- and non-AI-based AVIs

In addition to perceived usefulness and ease of use based on the technology acceptance model (TAM; Davis, 1989), many studies have found that users' trust is another critical factor in determining user acceptance of new technology (Glikson & Woolley, 2020). Theoretical models of trust in automation (Lee & See, 2004) show that users' perception of the trustworthiness of an automated system arises from the perceived characteristics of the system as well as the system's performance, which can help achieve an individual's goals in a context of uncertainty and vulnerability (Kohn et al., 2021). In this study's context, this means that job applicants assess trustworthiness in relation to their goals of gaining a better interview rating and a job offer. The trustworthiness of automated systems can be conceptualized through multiple facets (Lee & See, 2004), and we examined two facets of trustworthiness that job applicants may consider for an AI-based AVI: cognitive trust and affective trust (Glikson & Woolley, 2020). Cognitive trust is determined by rational thinking, whereas affective trust is determined by feelings (Glikson & Woolley, 2020). Both cognitive and affective trust affect human willingness to rely on automated systems in uncertain situations (Hoff & Bashir, 2015), such as in technology-mediated employment interviews (Basch et al., 2020). Job applicants who have less trust in an interview technology may perceive the interview process to be unfair, and those with greater trust may perceive greater fairness (Basch et al., 2020). As a result, the former may withdraw from the recruitment process or decline job offers (Blacksmith et al., 2016).

Recent studies have found that interviewees develop disfavor toward automatic interview interfaces due to irrational feelings, such as creepiness (Langer et al., 2017; Suen et al., 2019a), which may diminish their affective trust. If job applicants believe that their performance will be evaluated fairly, they develop more cognitive trust toward the automatic interview process (Suen et al., 2019a), according to Gilliland's (1993) fairness model.

Some AVI service providers (e.g., hirevue.com, retorio.com) use AI that can automatically predict interviewees' personality traits (Suen et al., 2019b, 2020), interview performance, interviewability (Naim et al., 2018), or interpersonal communication skills (Rao et al., 2017) according to the interviewees' audio-visual expressions on video (see Celiktutan & Gunes, 2017). AI can also make hiring recommendations in real employment interviews by extracting verbal, paraverbal, or nonverbal cues from both the interviewee and the interviewer (Hickman et al., 2022; Nguyen et al., 2014).

Job applicants in AI-based AVIs are fully or partially evaluated by AI algorithms, whereas job applicants in non-AI-based AVIs are evaluated by human raters (Suen et al., 2019a). Since human raters have personal biases in evaluating job applicants, AI algorithms offer more objective solutions (van Esch et al., 2019). From a rational perspective, signaling theory states that job applicants form an understanding of employers based on incomplete information from their perceptions of the interviewers (Rynes & Miller, 1983). AI-based interviewers or raters may provide a consistent and objective hiring process without personal biases (Black & van Esch, 2020), which signals to job applicants that employers value equality and novelty (i.e., Acikgoz et al., 2020; van Esch et al., 2020), thereby boosting job applicants' cognitive trust. From an emotional perspective, AI-based interviewers or raters may engender anxiety because job applicants do not know what the AI algorithms are assessing (Langer et al., 2020; van Esch et al., 2020), which relates especially to the explainability concerns of the AI black box (Tambe et al., 2019). Additionally, evidence from past studies shows that using new technology provokes people's anxiety because of its unfamiliarity (Meuter et al., 2003). Therefore, AI-based AVIs may decrease job applicants' affective trust more than non-AI-based AVIs do because of this uncertainty and unfamiliarity. We propose the following hypotheses:

Hypothesis 1a. Interviewees perceive a higher level of cognitive trust in AI-based AVIs than in non-AI-based AVIs after automated interviewing.

Hypothesis 1b. Interviewees perceive a lower level of affective trust in AI-based AVIs than in non-AI-based AVIs after automated interviewing.

2.2. Job applicants' trust in AI-based AVIs with and without tangibility

According to Long's (2001) social interface theory, people interact with computer interfaces much as they do with other humans; computer interfaces can arouse responses from users that are similar to those in interpersonal interactions depending on whether the computer interface has humanizing cues. Therefore, interviewees tend to treat computer interfaces the same as they treat humans in video interviews if the interfaces have humanizing features (Gerich, 2012), and their responses are similar to socially desirable responses in face-to-face interviews (Haan et al., 2017). When AVIs use AI with humanizing features, interviewees may respond to the AI-enabled AVIs as they would to human interviewers (Suen et al., 2019a), including by developing affective and cognitive trust.
According to Glikson and Woolley (2020), there are various embodiments of AI: AI-enabled robots, AI-enabled virtual agents, and embedded AI. AI-enabled robots have a physical presence and human-like features; AI-enabled virtual agents have some distinguishing identifiers (e.g., an avatar or a chatbot) and may possess a face, body, or voice or the ability to text without a physical presence; embedded AI applications, such as computer vision or voice recognition, have neither a physical presence nor distinguishing identifiers, preventing users from being aware of their existence. In the context of AVIs, virtual agents (e.g., Vera.ai) and embedded AI (e.g., HireVue.com) are commonly used in commercial solutions to evaluate job applicants and make hiring recommendations (Suen et al., 2019a).

Virtual agents and embedded AI can have human-like interface features, such as tangibility, immediacy, and transparency (see Glikson & Woolley, 2020). Tangibility is the ability of AI to be physically perceived or touched by human individuals (Liu & London, 2016). Immediacy is the interpersonal closeness perceived by audiences through AI's verbal and nonverbal responsiveness and human-like interactions (Kreps & Neuhauser, 2013). Transparency is the degree to which individuals can understand why and how AI assesses and decides something and follows human rules and logic (Hoff & Bashir, 2015).

Although there are a variety of AI embodiments, virtual agents and embedded AI are the AI interfaces used in AI-based AVIs. According to the social psychology literature, when users feel they can predict an object's behavior or use and have confidence in it, they have more cognitive trust in the object (Johnson & Grayson, 2005). Research on AI interfaces also supports this notion (Glikson & Woolley, 2020). When AI has a tangible interface, such as a virtual agent that can be visually perceived by users, users feel that the interface is more predictable and reliable than when the interface uses invisible embedded AI that has no tangibility (Krämer et al., 2017). One study also found that an avatar picture on a commercial website increased visitors' cognitive trust and intention to revisit the website (Chattaraman et al., 2014).

With regard to affective trust in AI, researchers have found that AI interfaces with a human-like social presence conveyed by a "persona" increase users' affective trust (de Visser et al., 2017). In contrast, when users encounter no AI representation, AI-based applications may decrease affective trust and evoke anger because nontangibility makes users feel unsafe and unsupported (Hengstler et al., 2016). Eslami et al. (2015) found that more than 60% of users were unaware of intangible embedded AI managing their information on many social media sites, which may cause users to feel uncomfortable or even angry about not being informed of the use of AI. A study also showed that the presence of a "persona", such as an avatar, in virtual interactions can significantly reduce users' anxiety and increase their perceived social support (Chattaraman et al., 2014). Therefore, if an avatar appears in AI-based AVIs, the tangibility of the AI should increase both users' cognitive trust and their affective trust. As discussed above, the following hypotheses are proposed:

Hypothesis 2a. Interviewees perceive a higher level of cognitive trust in AI-based AVIs with tangibility than in AI-based AVIs without tangibility after automated interviewing.

Hypothesis 2b. Interviewees perceive a higher level of affective trust in AI-based AVIs with tangibility than in AI-based AVIs without tangibility after automated interviewing.

2.3. Job applicants' trust in AI-based AVIs with and without immediacy

AI can exhibit immediacy by continuously tracking users' behavior and responding to it in real time, as ride-hailing platforms do when their algorithms track drivers' driving (Glikson & Woolley, 2020). On the one hand, tracking signals can help drivers make better decisions and thereby increase their cognitive trust; on the other hand, this immediacy can be perceived as surveillance, which may violate drivers' autonomy (although they can ignore or game the system) and consequently decrease their affective trust (see Möhlmann & Zalmanson, 2017). In the context of AI-based AVIs, a constant voice tracker plus a verbal response shown on the screen may create a perception of immediacy for interviewees during automatic interviews (e.g., hrda.pro). Therefore, an AI interface that exhibits immediacy behaviors may increase cognitive trust but decrease affective trust. We thus propose the following:

Hypothesis 3a. Interviewees perceive a higher level of cognitive trust in AI-based AVIs with immediacy than in AI-based AVIs without immediacy after automated interviewing.

Hypothesis 3b. Interviewees perceive a lower level of affective trust in AI-based AVIs with immediacy than in AI-based AVIs without immediacy after automated interviewing.

2.4. Job applicants' trust in AI-based AVIs with and without transparency

Some scholars have argued that AI virtual agents' transparency has a greater impact on cognitive trust than tangibility (Wang et al., 2016) and immediacy (Glikson & Woolley, 2020). According to explanation-for-trust theory (Pieters, 2011), users have more cognitive trust in technology when they can compare alternatives through a detailed explanation of how the inner system works, which constitutes the system's transparency (Glikson & Woolley, 2020). Accordingly, when AI can explain how its algorithm makes decisions, users have more cognitive trust in its competence and benevolence (Wang & Benbasat, 2007).

Although the full transparency of AI algorithms is hard to achieve in AI with a deep-learning black box (Ananny & Crawford, 2018), an explanation of the rationale behind the AI's recommendations or decisions that can be understood by users without technical knowledge would raise users' expectations of the AI's performance and thereby increase users' cognitive trust (Glikson & Woolley, 2020). In the case of AI-AVIs, explaining how the AI evaluates the interviewees before the interview would allow the interviewees to have more cognitive trust in the AI-AVIs than when no explanation is provided. Because this transparency affects users' rational thinking but not their emotional feelings (Glikson & Woolley, 2020), explaining how the AI evaluates interviewees in AI-based AVIs should not influence the interviewees' affective trust. Thus, the last hypothesis is proposed:

Hypothesis 4. Interviewees perceive a higher level of cognitive trust in AI-based AVIs with transparency than in AI-based AVIs without transparency after automated interviewing.
Although the trust scales used to study AI vary, researchers have not made a clear distinction between cognitive and affective trust (Glikson & Woolley, 2020). We follow Glikson and Woolley's (2020) categorization to develop cognitive and affective trust measures based on Lee et al.'s (2015) five-point scale (1 = strongly disagree to 5 = strongly agree), which includes 6 items. An exploratory factor analysis (EFA) also confirmed that all items could be categorized either as cognitive trust, based on rationality, or as affective trust, based on emotional feelings, as shown in Table 1. Additionally, the content validity was reviewed and confirmed by 3 experts in human–computer interaction recruited through the authors' connections.

Fig. 1. Illustration of an AI-AVI avatar.
One-way MANOVA was used to test the hypotheses for each experimental condition between the independent groups established by our manipulation (AI-AVI vs. non-AI-AVI, tangible vs. intangible AI-AVI, immediate vs. nonimmediate AI-AVI, transparent vs. nontransparent AI-AVI) on the AVI interviewees' cognitive and affective trust perceptions.
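To make this analysis pipeline concrete, the following is a minimal sketch of such a one-way MANOVA in Python with statsmodels; the file name, column names, and condition coding are illustrative assumptions, not the study's actual materials.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: one row per applicant, the manipulated condition as a
# categorical column, and the two dependent variables as scale scores.
df = pd.read_csv("avi_trust.csv")  # assumed columns: condition, cognitive_trust, affective_trust

# One-way MANOVA of both trust facets on the experimental condition
# (e.g., condition coded "AI-AVI" vs. "non-AI-AVI").
fit = MANOVA.from_formula("cognitive_trust + affective_trust ~ C(condition)", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, and related test statistics
```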
4. Results

4.1. Scale reliability

Following Lee et al.'s (2015) procedure, a principal component analysis with varimax rotation was executed, and two major components were generated, accounting for 55.87% of the total variance, in accordance with Glikson and Woolley's (2020) classification of users' trust in AI into affective and cognitive components. A Kaiser‒Meyer‒Olkin (KMO) statistic of 0.850 was achieved, exceeding the cutoff level of 0.5, which indicates that our sample size is adequate. Bartlett's test of sphericity was statistically significant (χ2(45) = 536.222, p < .001), which indicates that the correlations within our dataset are significant and unlikely to be due to chance. The component structure and reliability are shown in Table 1, which indicates that all the factor loadings were above the cutoff value of 0.4 and all Cronbach's alpha (α) values were above the generally acceptable level of 0.7 (Hair et al., 2019).
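As a rough, reproducible sketch of this procedure (Bartlett's test, the KMO statistic, a two-component varimax solution, and Cronbach's alpha), the Python snippet below uses the factor_analyzer package; the item file and column layout are assumptions for illustration, not the study's data.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("trust_items.csv")  # assumed: one column per Likert item

# Adequacy checks reported above: Bartlett's sphericity test and KMO.
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)

# Principal component extraction with varimax rotation, two components.
fa = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
explained = fa.get_factor_variance()  # (variances, proportions, cumulative)

def cronbach_alpha(df: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = df.shape[1]
    return k / (k - 1) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))
```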
To understand the patterns of interaction and the linear relationships among the variables in this study, we conducted a Pearson correlation analysis, eliminating the non-AI-AVI data so that the correlations among tangibility, immediacy, transparency, and the other variables could be computed, as shown in Table 2.

Table 2 shows that cognitive trust (mean 3.509, SD 0.746) and affective trust (mean 3.524, SD 0.701) were moderately intercorrelated, while the various treatments were also slightly intercorrelated because the AI interfaces were under the same experimental conditions as the AI-AVIs. Regarding the treatments' correlations with cognitive and affective trust, AI and transparency were positively associated with cognitive trust, whereas tangibility was positively associated with affective trust. Contrary to our expectations, tangibility and immediacy were not significantly associated with cognitive trust, and AI and its immediacy did not influence affective trust. The matrix also shows that the demographics of the interviewees varied with their job functions, with the older participants having more work experience, but their demographics were not correlated with the study treatments or dependent variables. Additionally, the interviewees' previous AVI experience in employment interviews did not influence their cognitive or affective trust in this study.

4.4. Analysis of hypotheses

Table 3 displays descriptive statistics by experimental group; the mean of each treatment group was higher than that of the control group, except for the mean of immediacy for affective trust.
Table 2
Correlation matrix.

Variables                     1        2        3        4        5        6        7        8        9        10       11
1. Cognitive trust            –
2. Affective trust            .559**   –
3. AI ᵃ                       .225**   .104     –
4. Tangibility ᵃ              .060     .188*    .167*    –
5. Immediacy ᵃ                .045     .025     −.045    .142     –
6. Transparency ᵃ             .238**   .133     .186*    .152     −.069    –
7. Sex ᵇ                      −.003    −.153    .179     −.102    −.060    .102     –
8. Age                        −.022    −.020    .040     .052     .055     −.013    −.098    –
9. Education ᶜ                −.001    .151     −.105    .069     −.089    .119     −.111    .074     –
10. Work experience           −.014    −.021    .016     .017     .119     .050     −.098    .894**   −.151    –
11. AVI experience            .081     .003     .008     .150     −.077    −.116    −.102    −.008    .028     −.030    –
12. Applied job function ᵃ
    Human Resources           −.018    −.066    .142     .078     −.035    −.065    −.098    −.348**  .307**   −.367**  .127
    Financial                 −.015    −.068    .102     .106     −.029    −.109    −.031    .149     −.084    .143     .046
    Information Technology    .062     .036     .109     −.117    .039     .143     .232**   −.176*   −.187*   −.140    −.107
    Operation                 .040     .019     .123     −.107    .069     −.116    .148     .390**   −.307**  .328**   .000
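A matrix like Table 2 can be assembled with pandas and scipy; the sketch below computes pairwise Pearson coefficients with two-tailed p-values, assuming the conventional starring thresholds (* p < .05, ** p < .01) and column names that are our own rather than the authors' variable labels.

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("ai_avi_only.csv")  # assumed: non-AI-AVI rows already dropped

def correlation_table(data: pd.DataFrame) -> pd.DataFrame:
    """Lower-triangular Pearson r matrix, starred by conventional p thresholds."""
    cols = data.columns
    out = pd.DataFrame("", index=cols, columns=cols)
    for i, a in enumerate(cols):
        out.loc[a, a] = "–"  # diagonal, as in the published table
        for b in cols[:i]:
            r, p = pearsonr(data[a], data[b])
            stars = "**" if p < .01 else "*" if p < .05 else ""
            out.loc[a, b] = f"{r:.3f}{stars}"
    return out

print(correlation_table(df[["cognitive_trust", "affective_trust",
                            "tangibility", "immediacy", "transparency"]]))
```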
However, an AI-AVI supplied with immediacy, embodied by a chatbot with a voice-tracking signal and standard text-based responsiveness, may neither increase job applicants' cognitive trust nor decrease their affective trust. Whether more intelligent chatbots that convey immediacy influence job applicants' trust should be explored in future studies (see Nordheim et al., 2019).

Accordingly, the study results provide some recommendations for designing AI-AVIs and choosing optimal AI-AVI interfaces to increase job applicants' trust in the initial preemployment selection process. First, job applicants should be informed in writing that their interview performance will be assessed by AI algorithms, when this is true, to activate their cognitive trust by conveying objectivity. Second, which AI will be used and how the AI will assess the interviewees should be conveyed in simple words through text to increase cognitive trust by conveying transparency. Finally, an attractive virtual agent or avatar should be displayed on the screen to increase affective trust by simulating a social presence and human interaction, as in conferencing interviews.

Although this study tested and determined how to increase job applicants' trust in AI-AVIs using various interface features, some notable limitations of this study should be addressed by future research. First, because of the wide adoption of AI-AVIs for personnel selection and the media reports about them, the participants may have been familiar with and accepting of the technology, and they faced no restrictions on the use of AI video interviews (e.g., Illinois' Artificial Intelligence Video Interview Act). Additionally, cultural differences might have some impact on their evaluation of trust in the AI video interview system. Future studies should repeat our experiment in other states or countries with different digital maturities, legal requirements, and cultural contexts to examine whether negative trust responses can occur. Second, this study involved only 152 participants, who had an average age of 27.6 years, mostly held a bachelor's degree or higher (90%), came from a single PEO, and applied for only four types of job functions, which may have limited the study's generalizability because these demographics may mediate or moderate the dependent variables. Future studies should solicit more diverse job applicants to test our research hypotheses. Third, the participants' prior perceptions of AI itself may have influenced their trust in AI (Langer et al., 2022). Future studies should measure and control for prior perceptions of AI. Finally, the three AI interfaces and their impacts on the results were limited to our interface designs. However, different AI interface designs may have different impacts on users' trust in AI, according to Bedué and Fritzsche's (2022) qualitative study. Future studies should use a male avatar to display tangibility (see Machneva et al., 2022), apply machine learning and natural language processing (NLP) to develop an intelligent chatbot to convey immediacy (see Hu et al., 2021), or use different interpretable messages to demonstrate transparency (see Liu & Wei, 2021).

6. Conclusions

As the demand for AI-AVIs increases in the postpandemic era, so do concerns about their lack of humanity and transparency impairing users' trust (Jaser et al., 2022). This study tested various modalities of AI-AVI and found that such trust concerns were not borne out among actual job applicants. Moreover, the findings show that AI-AVIs equipped with various interface designs can increase applicants' cognitive and/or affective trust. We believe that this study can benefit future academic research through this important finding and can help vendors and users develop and adopt the most trustworthy approaches to using AI-AVIs for personnel selection.

CRediT author statement

Hung-Yue Suen: Conceptualization, Methodology, Validation, Formal analysis, Investigation, Writing – original draft, Writing – review & editing, Supervision, Project administration, Funding acquisition. Kuo-En Hung: Software, Resources, Data curation, Visualization.

Declarations of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

The data that has been used is confidential.

Acknowledgements

This work was supported by the Ministry of Science and Technology (National Science and Technology Council), Taiwan (R.O.C.), under Grant MOST-110-2511-H-003-044-MY2. The authors would like to express their special thanks to Shu-Ming Yang and Jui-Ching Chen for the data collection associated with this work.

References

Acikgoz, Y., Davison, K. H., Compagnone, M., & Laske, M. (2020). Justice perceptions of artificial intelligence in selection. International Journal of Selection and Assessment, 28(4), 399–416. https://doi.org/10.1111/ijsa.12306
Adamopoulou, E., & Moussiades, L. (2020). Chatbots: History, technology, and applications. Machine Learning with Applications, 2, Article 100006. https://doi.org/10.1016/j.mlwa.2020.100006
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Basch, J. M., Melchers, K. G., Kurz, A., Krieger, M., & Miller, L. (2020). It takes more than a good camera: Which factors contribute to differences between face-to-face interviews and videoconference interviews regarding performance ratings and interviewee perceptions? Journal of Business and Psychology, 36(5), 921–940. https://doi.org/10.1007/s10869-020-09714-3
Bedué, P., & Fritzsche, A. (2022). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management, 35(2), 530–549. https://doi.org/10.1108/jeim-06-2020-0233
Blacksmith, N., Willford, J., & Behrend, T. (2016). Technology in the employment interview: A meta-analysis and future research agenda. Personnel Assessment and Decisions, 2(1), 12–20. https://doi.org/10.25035/pad.2016.002
Black, J. S., & van Esch, P. (2020). AI-enabled recruiting: What is it and how should a manager use it? Business Horizons, 63(2), 215–226. https://doi.org/10.1016/j.bushor.2019.12.001
Celiktutan, O., & Gunes, H. (2017). Automatic prediction of impressions in time and across varying context: Personality, attractiveness and likeability. IEEE Transactions on Affective Computing, 8(1), 29–42. https://doi.org/10.1109/taffc.2015.2513401
Chattaraman, V., Kwon, W.-S., Gilbert, J. E., & Li, Y. (2014). Virtual shopping agents. Journal of Research in Interactive Marketing, 8(2), 144–162. https://doi.org/10.1108/jrim-08-2013-0054
Chi, O. H., Jia, S., Li, Y., & Gursoy, D. (2021). Developing a formative scale to measure consumers' trust toward interaction with artificially intelligent (AI) social robots in service delivery. Computers in Human Behavior, 118, Article 106700. https://doi.org/10.1016/j.chb.2021.106700
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
van Esch, P., & Black, J. S. (2019). Factors that influence new generation candidates to engage with and complete digital, AI-enabled recruiting. Business Horizons, 62(6), 729–739. https://doi.org/10.1016/j.bushor.2019.07.004
van Esch, P., Black, J. S., & Arli, D. (2020). Job candidates' reactions to AI-enabled job application processes. AI and Ethics, 1(2), 119–130. https://doi.org/10.1007/s43681-020-00025-0
van Esch, P., Black, J. S., & Ferolie, J. (2019). Marketing AI recruitment: The next phase in job application and selection. Computers in Human Behavior, 90, 215–222. https://doi.org/10.1016/j.chb.2018.09.009
Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., & Sandvig, C. (2015). "I always assumed that I wasn't really that close to [her]": Reasoning about invisible algorithms in news feeds. In Proceedings of the 33rd annual ACM conference on human factors in computing systems (pp. 153–162). Association for Computing Machinery.
Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. https://doi.org/10.3758/brm.41.4.1149
Gerich, J. (2012). Video-enhanced self-administered computer interviews: Design and outcomes. In C. N. Silva (Ed.), Online research methods in urban and planning studies: Design and outcomes (pp. 99–119). IGI Global.
Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18(4), 694–734. https://doi.org/10.5465/amr.1993.9402210155
Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. The Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
Gonzalez, M. F., Liu, W., Shirase, L., Tomczak, D. L., Lobbe, C. E., Justenhoven, R., & Martin, N. R. (2022). Allying with AI? Reactions toward human-based, AI/ML-based, and augmented hiring processes. Computers in Human Behavior, 130, Article 107179. https://doi.org/10.1016/j.chb.2022.107179
Gorman, C. A., Robinson, J., & Gamble, J. S. (2018). An investigation into the validity of asynchronous web-based video employment-interview ratings. Consulting Psychology Journal: Practice and Research, 70(2), 129–146. https://doi.org/10.1037/cpb0000102
Gupta, P., Fernandes, S. F., & Jain, M. (2018). Automation in recruitment: A new frontier. Journal of Information Technology Teaching Cases, 8(2), 118–125. https://doi.org/10.1057/s41266-018-0042-x
Haan, M., Ongena, Y. P., Vannieuwenhuyze, J. T. A., & de Glopper, K. (2017). Response behavior in a video-web survey: A mode comparison study. Journal of Survey Statistics and Methodology, 5(1), 48–69. https://doi.org/10.1093/jssam/smw023
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2019). Multivariate data analysis. Prentice-Hall.
Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust—the case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014
Hickman, L., Bosch, N., Ng, V., Saef, R., Tay, L., & Woo, S. E. (2022). Automated video interview personality assessments: Reliability, validity, and generalizability investigations. Journal of Applied Psychology, 107(8), 1323–1351. https://doi.org/10.1037/apl0000695
Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
Hu, P., Lu, Y., & Gong, Y. (2021). Dual humanness and trust in conversational AI: A person-centered approach. Computers in Human Behavior, 119, Article 106727. https://doi.org/10.1016/j.chb.2021.106727
Hunkenschroer, A. L., & Luetge, C. (2022). Ethics of AI-enabled recruiting and selection: A review and research agenda. Journal of Business Ethics, 178(4), 977–1007. https://doi.org/10.1007/s10551-022-05049-6
Jaser, Z., Petrakaki, D., Starr, R., & Oyarbide-Magaña, E. (2022). Where automated job interviews fall short. Harvard Business Review. https://hbr.org/2022/01/where-automated-job-interviews-fall-short
Jatobá, M., Santos, J., Gutierriz, I., Moscon, D., Fernandes, P. O., & Teixeira, J. P. (2019). Evolution of artificial intelligence research in human resources. Procedia Computer Science, 164, 137–142. https://doi.org/10.1016/j.procs.2019.12.165
Johnson, D., & Grayson, K. (2005). Cognitive and affective trust in service relationships. Journal of Business Research, 58(4), 500–507. https://doi.org/10.1016/s0148-2963(03)00140-1
Kim, J.-Y., & Heo, W. (2022). Artificial intelligence video interviewing for employment: Perspectives from applicants, companies, developer and academicians. Information Technology & People, 35(3), 861–878. https://doi.org/10.1108/itp-04-2019-0173
Köchling, A., Wehner, M. C., & Warkocz, J. (2022). Can I show my skills? Affective responses to artificial intelligence in the recruitment process. Review of Managerial Science. https://doi.org/10.1007/s11846-021-00514-4
Kohn, S. C., de Visser, E. J., Wiese, E., Lee, Y.-C., & Shaw, T. H. (2021). Measurement of trust in automation: A narrative review and reference guide. Frontiers in Psychology, 12, Article 604977. https://doi.org/10.3389/fpsyg.2021.604977
König, C. J., Melchers, K. G., Kleinmann, M., Richter, G. M., & Klehe, U.-C. (2007). Candidates' ability to identify criteria in nontransparent selection procedures: Evidence from an assessment center and a structured interview. International Journal of Selection and Assessment, 15, 283–292. https://doi.org/10.1111/j.1468-2389.2007.00388.x
Krämer, N. C., Lucas, G., Schmitt, L., & Gratch, J. (2017). Social snacking with a virtual agent – On the interrelation of need to belong and effects of social responsiveness when interacting with artificial entities. International Journal of Human-Computer Studies, 109, 112–121. https://doi.org/10.1016/j.ijhcs.2017.09.001
Kreps, G. L., & Neuhauser, L. (2013). Artificial intelligence and immediacy: Designing health communication to personally engage consumers and providers. Patient Education and Counseling, 92(2), 205–210. https://doi.org/10.1016/j.pec.2013.04.014
Langer, M., König, C. J., Back, C., & Hemsing, V. (2022). Trust in artificial intelligence: Comparing trust processes between human and automated trustees in light of unfair bias. Journal of Business and Psychology. https://doi.org/10.1007/s10869-022-09829-9
Langer, M., König, C. J., & Hemsing, V. (2020). Is anybody listening? The impact of automatically evaluated job interviews on impression management and applicant reactions. Journal of Managerial Psychology, 35(4), 271–284. https://doi.org/10.1108/jmp-03-2019-0156
Langer, M., König, C. J., & Krause, K. (2017). Examining digital interviews for personnel selection: Applicant reactions and interviewer ratings. International Journal of Selection and Assessment, 25(4), 371–382. https://doi.org/10.1111/ijsa.12191
Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., & Baum, K. (2021). What do we want from explainable artificial intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296, Article 103473. https://doi.org/10.1016/j.artint.2021.103473
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50.30392
Lee, H.-S., Sun, P.-C., Chen, T.-S., & Jhu, Y.-J. (2015). The effects of avatar on trust and purchase intention of female online consumer: Consumer knowledge as a moderator. International Journal of Electronic Commerce Studies, 6(1), 99–118. https://doi.org/10.7903/ijecs.1395
Liu, X., & London, K. (2016). T.A.I: A tangible AI interface to enhance human-artificial intelligence (AI) communication beyond the screen. In Proceedings of the 2016 ACM conference on designing interactive systems (pp. 281–285). Association for Computing Machinery.
Liu, B., & Wei, L. (2021). Machine gaze in online behavioral targeting: The effects of algorithmic human likeness on social presence and social influence. Computers in Human Behavior, 124, Article 106926. https://doi.org/10.1016/j.chb.2021.106926
Long, N. (2001). Development sociology: Actor perspectives. Routledge.
Machneva, M., Evans, A. M., & Stavrova, O. (2022). Consensus and (lack of) accuracy in perceptions of avatar trustworthiness. Computers in Human Behavior, 126, Article 107017. https://doi.org/10.1016/j.chb.2021.107017
Mejia, C., & Torres, E. N. (2018). Implementation and normalization process of asynchronous video interviewing practices in the hospitality industry. International Journal of Contemporary Hospitality Management, 30(2), 685–701. https://doi.org/10.1108/ijchm-07-2016-0402
Meuter, M. L., Ostrom, A. L., Bitner, M. J., & Roundtree, R. (2003). The influence of technology anxiety on consumer use and experiences with self-service technologies. Journal of Business Research, 56(11), 899–906. https://doi.org/10.1016/S0148-2963(01)00276-4
Möhlmann, M., & Zalmanson, L. (2017). Hands on the wheel: Navigating algorithmic management and Uber drivers' autonomy. In Proceedings of the international conference on information systems (ICIS) (pp. 10–13).
Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7, 33–35.
Naim, I., Tanveer, M. I., Gildea, D., & Hoque, M. E. (2018). Automated analysis and prediction of job interview performance. IEEE Transactions on Affective Computing, 9(2), 191–204. https://doi.org/10.1109/taffc.2016.2614299
Nguyen, L. S., Frauendorfer, D., Mast, M. S., & Gatica-Perez, D. (2014). Hire me: Computational inference of hirability in employment interviews based on nonverbal behavior. IEEE Transactions on Multimedia, 16(4), 1018–1031. https://doi.org/10.1109/tmm.2014.2307169
Nordheim, C. B., Følstad, A., & Bjørkli, C. A. (2019). An initial model of trust in chatbots for customer service—findings from a questionnaire study. Interacting with Computers, 31(3), 317–335. https://doi.org/10.1093/iwc/iwz022
O'Brien, M. (2021). Want a job? Employers say: Talk to the computer. Fortune. https://fortune.com/2021/06/18/job-interview-artificial-intelligence-remote-online/
O'Connor, S. (2021). AI is making applying for jobs even more miserable. Financial Times. https://www.ft.com/content/a81245ee-9916-47e2-81b9-846e9403be00
Pieters, W. (2011). Explanation and trust: What to tell the user in security and AI? Ethics and Information Technology, 13(1), 53–64. https://doi.org/10.1007/s10676-010-9253-3
Rao, S. B. P., Rasipuram, S., Das, R., & Jayagopi, D. B. (2017). Automatic assessment of communication skill in non-conventional interview settings: A comparative study. In E. Lank, A. Vinciarelli, & E. H. S. Subramanian (Eds.), Proceedings of the 19th ACM international conference on multimodal interaction (pp. 221–229). Association for Computing Machinery.
Roulin, N., Wong, O., Langer, M., & Bourdage, J. S. (2022). Is more always better? How preparation time and re-recording opportunities impact fairness, anxiety, impression management, and performance in asynchronous video interviews. European Journal of Work & Organizational Psychology. https://doi.org/10.1080/1359432X.2022.2156862
Rynes, S. L., & Miller, H. E. (1983). Recruiter and job influences on candidates for employment. Journal of Applied Psychology, 68(1), 147–154. https://doi.org/10.1037/0021-9010.68.1.147
Suen, H.-Y., Chen, M. Y.-C., & Lu, S.-H. (2019a). Does the use of synchrony and artificial intelligence in video interviews affect interview ratings and applicant attitudes? Computers in Human Behavior, 98, 93–101. https://doi.org/10.1016/j.chb.2019.04.012
Suen, H.-Y., Hung, K.-E., & Lin, C.-L. (2019b). TensorFlow-based automatic personality recognition used in asynchronous video interviews. IEEE Access, 7, 61018–61023. https://doi.org/10.1109/access.2019.2902863
Suen, H.-Y., Hung, K.-E., & Lin, C.-L. (2020). Intelligent video interview agent used to predict communication skill and perceived personality traits. Human-centric Computing and Information Sciences, 10(1), 3. https://doi.org/10.1186/s13673-020-0208-3
Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: Challenges and a path forward. California Management Review, 61(4), 15–42. https://doi.org/10.1177/0008125619867910
de Visser, E. J., Monfort, S. S., Goodyear, K., Lu, L., O'Hara, M., Lee, M. R., Parasuraman, R., & Krueger, F. (2017). A little anthropomorphism goes a long way: Effects of oxytocin on trust, compliance, and team performance with automated agents. Human Factors, 59(1), 116–133. https://doi.org/10.1177/0018720816687205
Wang, W., & Benbasat, I. (2007). Recommendation agents for electronic commerce: Effects of explanation facilities on trusting beliefs. Journal of Management Information Systems, 23(4), 217–246. https://doi.org/10.2753/mis0742-1222230410
Wang, W., Qiu, L., Kim, D., & Benbasat, I. (2016). Effects of rational and social appeals of online recommendation agents on cognition- and affect-based trust. Decision Support Systems, 86, 48–60. https://doi.org/10.1016/j.dss.2016.03.007
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. https://doi.org/10.1016/j.jesp.2014.01.005
Woods, S. A., Ahmed, S., Nikolaou, I., Costa, A. C., & Anderson, N. R. (2020). Personnel selection in the digital age: A review of validity and applicant reactions, and future research challenges. European Journal of Work & Organizational Psychology, 29(1), 64–77. https://doi.org/10.1080/1359432x.2019.1681401