
Human Resource Management Review 33 (2023) 100881


Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management☆
Markus Langer *, Cornelius J. König
Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany

A R T I C L E  I N F O

Keywords: Explainable artificial intelligence; Opacity; Transparency; Human resource management; AI ethics

A B S T R A C T

Artificial Intelligence and algorithmic technologies support or even automate a large variety of human resource management (HRM) activities. This affects a range of stakeholders with different, partially conflicting perspectives on the opacity and transparency of algorithm-based HRM. In this paper, we explain why opacity is a key characteristic of algorithm-based HRM, describe reasons for opaque algorithm-based HRM, and highlight the implications of opacity from the perspective of the main stakeholders involved (users, affected people, deployers, developers, and regulators). We also review strategies to reduce opacity and promote transparency of algorithm-based HRM (technical solutions, education and training, regulation and guidelines), and emphasize that opacity and transparency in algorithm-based HRM can simultaneously have beneficial and detrimental consequences that warrant taking a multi-stakeholder view when considering these consequences. We conclude with a research agenda highlighting stakeholders' interests regarding opacity, strategies to reduce opacity, and consequences of opacity and transparency in algorithm-based HRM.

1. Introduction

Artificial Intelligence (AI) and algorithmic technologies that support or even automate human resource management (HRM) activities are a key driver of innovation in HRM (Kellogg, Valentine, & Christin, 2020; Makarius, Mukherjee, Fox, & Fox, 2020). In line with Kellogg, Valentine, and Christin (2020, p. 366), we consider AI and algorithmic technologies (in the following, we subsume them under the term algorithm) to encompass “computer-programmed procedures that transform input data into desired outputs in ways that tend to be more encompassing, instantaneous, interactive, and opaque than previous technological systems.” As this understanding suggests, opacity is a key characteristic associated with current algorithms and can thus render algorithm-based HRM activities opaque. Opacity in this regard means that (a) the inputs used in algorithm-based HRM remain unknown or not understandable, (b) relations between inputs and outputs remain hidden, and (c) there is no further explanation for a given output (e.g., a prediction or classification) (Arrieta et al., 2020; Burrell, 2016; Sokol & Flach, 2020). Consequently, opacity can be located at one end of a continuum, with transparency at the other end, and can contribute to a lack of understandability of algorithm-based processes and outputs.


Work on this paper was funded by the Volkswagen Foundation, Germany grant number AZ98513 and by the DFG, Germany grant 389792660
as part of TRR 248.
* Corresponding author at: Universität des Saarlandes, Arbeits- & Organisationspsychologie, Campus A1 3, 66123 Saarbrücken, Germany.
E-mail address: [email protected] (M. Langer).

https://doi.org/10.1016/j.hrmr.2021.100881
Received 30 October 2020; Received in revised form 1 November 2021; Accepted 16 November 2021
Available online 24 November 2021
1053-4822/© 2021 Elsevier Inc. All rights reserved.

Previous work has highlighted the potential for both negative and positive implications of opacity for algorithm-based HRM. For instance, for people working with algorithm-based systems, opacity can undermine adequate trust in system outputs, thus diminishing decision-making performance (Yeomans, Shah, Mullainathan, & Kleinberg, 2019); at the same time, opacity can increase efficiency because users are not distracted by unnecessary additional information (Tintarev & Masthoff, 2007). Similarly, for people affected by algorithm-based HRM decisions (e.g., applicants), opacity can either negatively or positively affect reactions to algorithm-based decisions (Langer, König, & Fitili, 2018; Newman, Fast, & Harmon, 2020). For companies using algorithm-based systems to control worker activities, opacity can serve to uphold control mechanisms but at the same time may conflict with legal regulations requiring a certain level of transparency (Goodman & Flaxman, 2017; Kellogg et al., 2020).
Although previous work has already investigated implications of opacity for single stakeholders in HRM and has highlighted that
opacity can have negative and positive consequences for single stakeholders, a comprehensive analysis of the implications associated
with opacity from the perspectives of diverse stakeholders in algorithm-based HRM is still missing (Kellogg et al., 2020). Specifically,
advantages and disadvantages associated with opacity only become apparent and especially important when considering the perspectives of the main stakeholders in algorithm-based HRM simultaneously. Additionally, previous work on algorithm-based HRM has
predominantly focused on reasons for opacity or the implications of opacity without considering strategies to reduce opacity and
promote transparency. As there are benefits and downsides associated with opacity, stakeholders in HRM need to actively consider
whether and to what extent algorithmic opacity aligns with their goals and needs regarding algorithm-based HRM and also need to
have the tools to potentially increase transparency when they conclude that the downsides of opacity outweigh the benefits.
In this paper, we argue that opacity is a central aspect of algorithm-based HRM and can contribute to promises and perils associated
with algorithm-based HRM. We thus present the key reasons for opacity in algorithm-based HRM and emphasize the necessity to take a
multi-stakeholder view when considering the implications of opacity in algorithm-based HRM, as advantages and disadvantages associated with opacity only become apparent when simultaneously considering these perspectives. Moreover, our paper contributes to the literature on algorithm-based HRM by providing an overview of strategies to reduce opacity (technical solutions, education and training, regulation and guidelines) that address the main reasons for opacity (system-based opacity, opacity due to illiteracy, and intentional opacity; Burrell, 2016), thus highlighting the strategies that HRM researchers and practitioners could use to deliberately
reduce opacity. Finally, we emphasize that there are trade-offs associated with the strategies to reduce opacity and provide an agenda
for future research investigating consequences of opacity and strategies to reduce opacity from a multi-stakeholder view.

2. Algorithm-based HRM

HRM involves a large variety of activities that could be supported or even fully automated by algorithm-based systems (M. M.
Cheng & Hackett, 2021; Kellogg et al., 2020). For instance, Lepak, Bartol, and Erhardt (2005) categorize HR activities ranging from
transactional to transformational. Transactional activities reflect administrative components of HR aimed at maintaining the organization's HR infrastructure, whereas transformational activities contribute to strategic organizational goals. Importantly, the same HR task can be transactional or transformational depending on how central the task is to the organization's strategy (Lepak et al., 2005). For instance, whereas compensation might be more transactional for traditional organizations with full-time employed workers, algorithm-controlled, dynamic compensation is at the core of the gig economy's business model, where workers are considered independent contractors (e.g., Uber; M. K. Lee, Kusbit, Metsky, & Dabbish, 2015; Möhlmann, Zalmanson, & Gregory, in press).
For each HRM activity, there is then a variety of possible development strategies regarding algorithm-based systems to support or
automate HR activities (M. M. Cheng & Hackett, 2021). Broadly, we can distinguish between manual development, self-learning, and continuous learning. Manual development involves human developers who formalize tasks in a way that allows them to be automatically
fulfilled by algorithm-based systems (M. M. Cheng & Hackett, 2021). For instance, for well-defined tasks in scheduling, systems could
follow an explicitly programmed set of rules to determine shifts and schedules for employees. However, a large share of tasks and
outcomes in HRM are less well-defined and thus self-learning development strategies might be more promising to automate (parts of)
these tasks (M. M. Cheng & Hackett, 2021; Makarius et al., 2020; Möhlmann et al., in press). In such cases, human developers set up the
initial instance of a system's algorithm and then feed it with training data to allow the algorithm to learn relationships between inputs
and outputs. Depending on the task and strategy, developers can choose from a variety of machine learning methods for this purpose. For instance, screening applicant job interviews through algorithm-based systems could follow this strategy, where the
respective algorithm would learn to distinguish more or less suitable applicants based on a database of previous applicant interviews
and interviewer ratings (Hickman et al., 2021; Naim, Tanveer, Gildea, & Hoque, 2018). Based on a self-learning strategy, developers
would then distribute a system that was tested on a training dataset but that would not adapt to changing environments. In contrast,
certain HRM tasks are more dynamic and might require adaptive systems (Kellogg et al., 2020; Möhlmann et al., in press). Conse­
quently, a continuous learning development strategy might be appropriate where a system is set up with the capacity to learn from
newly incoming data. An example of such a system is Uber's algorithmic management system, which assigns tasks, determines compensation, and manages ongoing evaluation processes for its large driver workforce (M. K. Lee et al., 2015).
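To make the distinction more concrete, the following minimal Python sketch (our illustration, not part of the cited work; the feature names, thresholds, and scikit-learn setup are assumptions) contrasts a manually developed rule with a self-learning strategy trained on historical interview data.

# A minimal, illustrative sketch (not from the paper): a manually developed rule versus a
# self-learning strategy; feature names, thresholds, and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Manual development: an explicitly programmed, fully transparent scheduling-style rule.
def assign_night_shift(hours_worked_this_week: float, prefers_nights: bool) -> bool:
    return hours_worked_this_week < 35 and prefers_nights

# Self-learning development: the mapping from interview features to interviewer ratings
# is learned from historical training data instead of being programmed explicitly.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))                 # e.g., speaking time, word count, voice pitch
y_train = ((X_train @ np.array([0.8, 0.2, -0.5])
            + rng.normal(scale=0.5, size=200)) > 0).astype(int)   # past interviewer ratings
model = LogisticRegression().fit(X_train, y_train)

# After deployment, the fitted model scores new applicants. Under a pure self-learning
# strategy it stays frozen; a continuous learning strategy would keep refitting it as
# newly labeled data arrive.
new_applicant = rng.normal(size=(1, 3))
print(model.predict_proba(new_applicant))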
Algorithms resulting from these different development strategies can then be implemented in HRM as descriptive, predictive, or
prescriptive algorithms (Leicht-Deobald et al., 2019). Descriptive algorithms analyze historic data and try to provide insights regarding
their implications for present organizational states (Leicht-Deobald et al., 2019). For example, such algorithms could analyze employee
behavior and customer satisfaction outcomes to find patterns of behavior that have positively influenced customer satisfaction in the past, which organizations can then learn from and use in future customer interactions (Leicht-Deobald et al., 2019). Predictive algorithms are those
that analyze past or real-time data in order to predict future outcomes. For instance, such systems could analyze applicant behavior in
job interviews to predict their future job performance. This could result in a score that HR managers use as additional information supporting their hiring decisions (Langer, König and Busch, 2021). Finally, prescriptive algorithms go beyond predictive ones by
providing simulations of what would happen if one decides on a specific course of action, including explicit suggestions to human decision-makers, and may even automatically implement decisions. For instance, such algorithms could be used to implement
automated task assignment or compensation for workers (M. K. Lee et al., 2015; Ravenelle, 2019).
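As a hedged illustration of this distinction (hypothetical function and variable names; fitted_model stands for any estimator that exposes a predict_proba method, such as the model sketched above), the same predictive score can either be surfaced to an HR manager as decision support or be turned into a prescriptive, automatically implemented action:

# Illustrative sketch only; names and threshold are hypothetical.
def predictive_score(applicant_features, fitted_model):
    # Predictive use: return a suitability score that an HR manager weighs alongside
    # other information when making the hiring decision.
    return float(fitted_model.predict_proba([applicant_features])[0, 1])

def prescriptive_decision(applicant_features, fitted_model, threshold=0.7):
    # Prescriptive use: the system itself recommends (or automatically implements) an action.
    score = predictive_score(applicant_features, fitted_model)
    return "invite_to_interview" if score >= threshold else "reject"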

3. Reasons for and implications of the opacity of algorithm-based HRM

3.1. Three reasons for the opacity of algorithm-based HRM

Independent of the HRM activity, the algorithm's development strategy, and the implementation of algorithm-based systems, one underlying topic is the opacity of algorithm-based systems (Kellogg et al., 2020; Leicht-Deobald et al., 2019). In fact, a large share of
papers that investigate algorithm-based HRM highlight the potential for opacity of algorithm-based HRM (M. Cheng & Foley, 2019;
Griesbach, Reich, Elliott-Negri, & Milkman, 2019; Höddinghaus, Sondern, & Hertel, 2020; Jarrahi & Sutherland, 2019; Kellogg et al.,
2020; M. K. Lee et al., 2015; Leicht-Deobald et al., 2019; Möhlmann et al., in press; Möhlmann & Zalmanson, 2017; Myhill, Richards, &
Sang, 2021; Veen, Barratt, & Goods, 2020). In line with recent research (Kellogg et al., 2020), we consider opacity as a key characteristic and even as the default for algorithm-based HRM.
Specifically, there are three broad reasons that contribute to opacity in algorithm-based HRM scenarios: system-based opacity,
opacity due to illiteracy, and intentional opacity (Burrell, 2016). System-based opacity refers to opacity resulting from characteristics of
systems in use. There are several factors that contribute to this form of opacity and we exemplify those factors with the use of a system
for the automatic evaluation of job interviews. (a) Algorithm-based systems that support information processing and decisions usually consist of a combination of systems (Burrell, 2016), and this will likely be the case for a system automatically evaluating interviews: there might be one subsystem extracting nonverbal and content-related information from interview videos, and another subsystem that is trained on previous applicant information to distinguish the most suitable applicants from the less suitable. Making the decision
logic of one subsystem transparent does not necessarily affect transparency of another subsystem (Raghavan, Barocas, Kleinberg, &
Levy, 2020). (b) Algorithm-based systems usually analyze a large number and variety of features (predictors) in order to arrive at their
outputs (Kellogg et al., 2020). In the case of a video interview recording, every word applicants use can serve as a potential feature to
determine their suitability for a job (Campion, Campion, Campion, & Reider, 2016), every frame of a video produces additional
features to use for prediction (Liem et al., 2018), and every second of audio data provides additional applicant information (e.g., about applicants' voice pitch). In order to make this large number of features useful for algorithms, they have to be condensed from a raw to a preprocessed format, and during these preprocessing steps, opacity may increase because features lose a directly graspable meaning
(Burrell, 2016). (c) Finally, some kind of algorithm is needed to detect patterns in features and to link features to outcome variables.
One of the most commonly cited classes of opaque algorithms is artificial neural networks, which are often used for self-learning and continuous learning development strategies (Arrieta et al., 2020; Felzmann, Villaronga, Lutz, & Tamò-Larrieux, 2019). Heavily simplified, developers determine the initial structure of artificial neural networks and then feed them with training data. Neural networks then detect patterns in the data and adjust the weights associated with input variables in order to better classify or predict target variables. This happens without further human programming, making artificial neural networks opaque even to their developers. Furthermore, neural networks use internal representations of data that do not readily translate to human semantics and to human ways of problem solving (Ananny & Crawford, 2018; Burrell, 2016). This means that even if a human could peek into the “decision-making” process of neural networks, seeing the representation of information and the decision processes might not reduce opacity. This also indicates that
system-based opacity might be more or less pronounced for different development strategies (Burrell, 2016). Specifically, in contrast to
self-learning and continuous learning, system-based opacity will usually be lower if a system was manually developed, as human developers might have explicitly programmed a set of rules that a respective algorithm-based system would follow to arrive at its outputs.
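A minimal sketch (synthetic data; the pipeline, feature counts, and scikit-learn components are our assumptions) illustrates how such system-based opacity arises: raw interview descriptors are compressed by one subsystem and fed into a neural network whose learned weights can be printed but not read as humanly meaningful decision rules.

# Illustrative sketch (synthetic data, assumed scikit-learn components): a preprocessing
# subsystem feeds a neural network, and the learned parameters do not map onto humanly
# meaningful decision rules.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
raw_features = rng.normal(size=(300, 50))                    # e.g., per-frame video and audio descriptors
ratings = (raw_features[:, :5].sum(axis=1) > 0).astype(int)  # stand-in for past interviewer ratings

# Subsystem 1 compresses the raw features; subsystem 2 learns the rating model on top.
pipeline = make_pipeline(
    PCA(n_components=10),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1),
)
pipeline.fit(raw_features, ratings)

# The parameters exist and can be printed, but inspecting them reveals no applicant-level
# rationale: the PCA components mix raw inputs, and the network weights operate on those
# mixtures rather than on directly interpretable predictors.
print(pipeline.named_steps["mlpclassifier"].coefs_[0].shape)  # first weight matrix, shape (10, 16)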
Opacity due to illiteracy is the second reason for opacity and results from illiteracy with regard to algorithm-based systems and their
underlying mathematical or developmental foundations (Burrell, 2016). With the spread of algorithm-based systems in HRM, there are
more and more stakeholders who may have little knowledge of the general logic of algorithm-based processes and solutions. For
instance, people who work in HRM are usually not trained in programming (Burrell, 2016; Oswald, Behrend, Putka, & Sinar, 2020) but
are now more than ever required to understand system outputs and processes, scrutinize when to rely on system recommendations, or
evaluate the usefulness and validity of algorithm-based solutions for decisions (Höddinghaus et al., 2020; Langer, König and Busch, 2021). Without a basic level of programming literacy, reducing algorithm-based opacity might remain a hopeless endeavor independent of the HR activity for which an algorithm is used, and independent of the development or implementation strategy. In fact,
even linear regressions can be opaque to stakeholders with no background in statistics (Páez, 2019).
Finally, intentional opacity means that it can be an intentional choice to keep algorithm-based systems, processes, and outputs
opaque. On the one hand, this choice lies in the hands of developers who might favor system performance over system transparency
when developing systems (Brock, 2018). On the other hand, deployers of systems (e.g., upper-level managers deciding to use
algorithm-based systems for certain HRM activities) can purposefully reduce available information regarding systems. Reasons for this
can be that organizations want to protect their intellectual property, keep their competitive advantage, or uphold information
asymmetries for exercising control over workers (Rosenblat & Stark, 2016; Schnackenberg & Tomlinson, 2016). Making code details of
algorithms openly available or decision processes more transparent could make it easier for competitors to copy the companies' approaches (Sokol & Flach, 2020). Relatedly, providing workers or the public with insights into the decision logic of algorithms may
undermine their usefulness and can lead to people using the information to game the system (Rosenblat & Stark, 2016; Schnackenberg
& Tomlinson, 2016).


3.2. Implications of opacity for HRM stakeholders

The reasons for opacity and their combination have important implications with respect to algorithm-based HRM. In fact, there are
daily high-stakes decisions where algorithm-based HRM determines the future of more and more people, of whom only a small number have experience in working with such systems (M. M. Cheng & Hackett, 2021). Consequently, algorithm-based systems used for HRM activities are opaque to an increasing number of people who will not, or only partially, be able to understand the reasons for algorithm-based decisions. These are people who use algorithm-based systems as decision support and who may want to understand the logic of algorithm-based recommendations (Dhaliwal & Benbasat, 1996), people who are affected by algorithm-based HRM and who may want to understand the rationale behind algorithm-based decisions affecting their lives (M. K. Lee et al., 2015), or people who deploy algorithm-based systems in HRM and may need to understand whether these systems and their outputs align with company strategy and comply with legislation (Arrieta et al., 2020). Whereas this indicates that opacity can be detrimental because it prevents stakeholders from better understanding algorithm-based processes and outputs, opacity can also be beneficial as it can increase efficiency for users of systems, or can be used by organizations to prevent insights into algorithm-based systems that may be used in an adversarial way. In the case of HRM, there are advantages and disadvantages associated with opacity that become especially apparent when considering the diversity of key stakeholders with potentially diverging interests with respect to opacity. Fig. 1 shows the relation of algorithm-based systems in HRM to the five main classes of HRM stakeholders – users, affected people, deployers, developers, and regulators – and the implications of opaque algorithm-based systems in HRM associated with these stakeholders.

3.2.1. Opacity for users of algorithm-based systems in HRM


Users of algorithm-based systems in HRM might be HR managers or general managers who use systems with the goal of increasing
decision-making efficiency and quality (Kellogg et al., 2020; Langer, König and Busch, 2021). Opacity of algorithm-based systems can
affect the efficiency and quality (e.g., decision quality) of human-system teams (Yeomans et al., 2019). Specifically, with opaque
systems, users may miss insights regarding decision outputs that could help them adequately consider system outputs in their decision-
making (Lai & Tan, 2019). However, opacity can also contribute to efficient processes because the system would provide streamlined
information (Tintarev & Masthoff, 2007). Furthermore, through a technology acceptance lens, opacity can impact perceived usefulness
and ease of use of a system (Venkatesh, Morris, Davis, & Davis, 2003; Yang, Linder, & Bolchini, 2012). Opacity can decrease these variables because users lack sufficient insight into system decision processes to use the system adequately (Yang et al., 2012). At the same time, opacity could make processes easier, as system outputs may focus on the most important information, thus preventing information overload. Further affecting user-system collaboration, it is commonly assumed that opacity undermines the possibility to build
adequate trust in systems and their outputs (Endsley, 2017; Hoff & Bashir, 2015). For example, in the case of systems supporting
performance evaluations, only providing a numeric value for employee evaluations might not be enough to trust the system recommendation. This can lead to issues with under- or over-trusting the system (Glikson & Woolley, 2020; Parasuraman & Manzey, 2010;
Parasuraman & Riley, 1997). In the case of under-trust, users would not incorporate recommendations by the system in their decision-
making process, making the system redundant. In the case of over-trust, users would not adequately supervise systems or would rely
too much on outcomes given by the system without challenging their adequacy (Langer, Oster, et al., 2021). Furthermore, opacity can
affect users' well-being at work, job satisfaction, and motivation (Hackman & Oldham, 1976; Morgeson, Garza, & Campion, 2012).

Fig. 1. Overview of stakeholders and their relationships to algorithm-based system in HRM, and sample implications of opacity of algorithm-based
HRM (figure adapted from Langer, Oster, et al., 2021).


For instance, it has been assumed that opacity can undermine human self-determination and autonomy (Jobin, Ienca, & Vayena,
2019). Likewise, according to work characteristics research, opacity would relate to perceived responsibility for work results and to
outcomes such as job motivation (Hackman & Oldham, 1976; Morgeson et al., 2012; Parker & Grote, 2020). Finally, users such as HR
managers can be in the role of explainers and sustainers of algorithm-based systems (Kellogg et al., 2020; Makarius et al., 2020; Wilson,
Daugherty, & Morini-Bianzino, 2017). The explainer role involves being able to explain algorithm-based systems and their outputs to other stakeholders (e.g., to upper management or to people affected by algorithm-based decisions, such as applicants in personnel selection or employees in performance evaluation situations). The sustainer role involves continuously monitoring algorithm-based systems to ensure that they are operating as intended (e.g., whether continuous learning systems produce fair outputs) and to determine whether they need to be updated. In both roles, opacity undermines users' ability to fulfill their responsibilities, as they might not be able to explain system outputs to other stakeholders or might have no insight into system processes to monitor potentially unwanted changes in a system's decision logic over time (Kellogg et al., 2020).

3.2.2. Opacity for people affected by algorithm-based HRM


People affected by algorithm-based HRM are those who usually cannot choose whether they want to be affected by algorithm-based decisions but for whom the outcomes of such decisions have wide-ranging effects on their everyday life in an organization (e.g., through algorithm-based scheduling or algorithm-based task assignment), or even on their future career (e.g., through algorithm-based promotion decisions) (Langer & Landers, 2021; Leicht-Deobald et al., 2019). This class of stakeholders consists, for instance, of applicants or employees affected by personnel selection decisions, performance evaluations, or scheduling outcomes. For them, opacity especially concerns the decision processes and outputs of algorithm-based systems in HRM, as they want to understand whether decisions based on these processes and outputs were fair (Arrieta et al., 2020; M. K. Lee et al., 2015; Myhill, Richards,
& Sang, 2021). Just as reactions to managers depend on whether they explain why a particular team member was promoted to a team leader position over others, reactions to decisions made in collaboration with algorithm-based systems might be negative when transparency is lacking (Schnackenberg & Tomlinson, 2016; Shaw, Wild, & Colquitt, 2003). In general, people affected by algorithm-based HRM might want
to evaluate justice and fairness of algorithm-based processes and outcomes (M. K. Lee, 2018; Ötting & Maier, 2018). This highlights the
importance of opacity for HRM in relation to organizational justice (Colquitt, Conlon, Wesson, Porter, & Ng, 2001). Most directly,
opacity could reduce informational justice, which could also impair insight into other facets of justice (i.e., procedural and distributive
justice) (Gregor & Benbasat, 1999; Schlicker et al., 2021). Yet, opacity could also positively affect justice perceptions as sometimes less
information is better than too much information (Langer et al., 2018; Newman et al., 2020). Especially salient are the implications of
opacity for gig and platform workers who are directed, evaluated, and disciplined by algorithm-based systems (Kellogg et al., 2020;
Rosenblat & Stark, 2016). Directed means that workers receive tasks and recommendations from algorithm-based systems, which then steer their behavior; evaluated means that algorithm-based systems use worker behavior to curate ratings of workers on which payment and future work opportunities might depend; and disciplined means that workers might automatically be fired and replaced when not
following algorithm-based instructions (Kellogg et al., 2020). For all those procedures, opacity can undermine perceived control and
autonomy at work as workers may have no insight into the algorithmic decision-making processes and outputs that structure their
everyday work experience (Kellogg et al., 2020; Myhill, Richards, & Sang, 2021). However, opacity can also potentially be beneficial
for workers with respect to control and autonomy as it may enable them to detect and take advantage of system features that are not
even known to system developers or deployers (Kellogg et al., 2020; Langer & Landers, 2021).1 Furthermore, although such systems are opaque, gig workers sometimes seem to prefer algorithmic management over human bosses, as they feel more autonomous in the former case (Langer & Landers, 2021; Möhlmann et al., in press).

3.2.3. Opacity for deployers of algorithm-based systems for HRM


Deployers of algorithm-based systems consist of two subgroups. First, deployers can be organizations providing algorithm-based
solutions for HRM; second, deployers can be decision-makers in organizations who decide which system to implement and how to
implement a system within an existing process. These stakeholders need to ensure that the algorithm-based systems they deploy adhere
to a level of transparency that legal guidelines (e.g., the European Union General Data Protection Regulation 2018 [GDPR]2) demand
in relation to the use of algorithms in decision-making (Burrell, 2016; Goodman & Flaxman, 2017). In this regard, deployers are in a
unique role as they can directly influence system opacity and need to consider its advantages and disadvantages. Specifically, they
might make strategic decisions regarding which information to disclose to other stakeholders affected by algorithm-based HRM
(Felzmann et al., 2019). For instance, deployers in the area of algorithmic management may keep their systems intentionally opaque to
maintain control over workers managed by the respective algorithm-based system (Kellogg et al., 2020; Rosenblat & Stark, 2016).
Furthermore, deployers may decide to hide what kind of inputs algorithm-based systems for HRM take as this can impact acceptance
among users or people affected by HRM systems (Langer et al., 2018). This way, they may also attempt to protect their intellectual
property from competitors or their systems from adversarial attacks (Arrieta et al., 2020; Kellogg et al., 2020). Similarly, deployers
need to consider whether reducing opacity will foster or undermine systems' usefulness (Gregor & Benbasat, 1999; Kellogg et al.,
2020). For instance, whereas for users less opacity can foster the utility of such systems as decision support, for people affected by HRM systems any insights into system functioning can make systems useless if these insights enable them to game the system (e.g., by revealing how applicants can influence their scores or by informing workers how to get higher payment; Möhlmann et al., in press; Raghavan et al., 2020; Rosenblat & Stark, 2016).

1 We thank an anonymous reviewer for highlighting this possibility.
2 https://gdpr.eu/tag/gdpr/



3.2.4. Opacity for developers of algorithm-based HRM systems


Behind every algorithm-based system there are developers developing, maintaining, and updating systems. For instance, those
might be developers of a third-party provider of algorithm-based solutions or developers within a data science team of an organization.
This class of stakeholders is mostly interested in better ways to design, improve, and update algorithm-based systems. Although opacity
is usually associated with high-performance systems (Burrell, 2016), opacity of system processes and outputs can complicate improving, debugging, and inspecting systems (Miller, Howe, & Sonenberg, 2017), as it can make it harder to find out how to produce better outputs (e.g., to predict job performance more validly) or how to prevent biased outcomes. Opacity also makes it harder to maintain systems. Specifically, it is unlikely that algorithm-based systems in HRM will work as intended without recurring
human intervention (Kellogg et al., 2020). HRM operates in a dynamic environment where systems need to be adapted to changing
demands and contexts. For instance, as applicant pools or job requirements change, algorithm-based systems for personnel selection
need to be examined and possibly updated to uphold predictive accuracy as well as to prevent unfair bias. Opacity might prevent issues from being recognized at an early point in time and may thus delay the detection that a system needs an update. Simultaneously, opacity may
reduce the likelihood that other stakeholders (e.g., HR managers in their role of sustainers of algorithm-based systems) report issues
with algorithm-based systems. Moreover, opacity can hinder tracing the reasons for system failures. Note that developers of algorithm-based systems are (like deployers) in a unique role as they can deliberately alter systems' opacity (e.g., by opting for a more transparent
system design), although the extent of their influence on opacity might be restricted by technical constraints (Ananny & Crawford,
2018).

3.2.5. Opacity for regulators of algorithm-based HRM


Regulators will be predominantly interested in opacity for legal and ethical reasons (Arrieta et al., 2020). For instance, this class of stakeholders consists of employee representatives (e.g., works councils), juries, lawyers, and policy-makers who provide the regulatory frame for algorithm-based HRM. Opacity can protect intellectual property and can thus also be a deliberate regulatory strategy
that enables companies to maintain their competitive advantages (Arrieta et al., 2020; Burrell, 2016). However, opacity might also
make it harder to perform system auditing to determine whether algorithm-based systems follow legal and ethical standards (Kellogg
et al., 2020). Similarly, opacity might make it harder to determine who is accountable in case of unfavorable outcomes of algorithm-
based HRM (e.g., in discrimination lawsuits that might arise due to systems producing biased outputs; Dastin, 2018). Jobin et al. (2019) analyzed the global landscape of ethics guidelines on AI algorithms, to which regulators (but also deployers) of algorithm-based systems contribute. Their analysis showed that those guidelines converge towards transparency as the most important principle surrounding the use of algorithms in high-stakes decision-making. This is because there is hope that transparency improves monitoring and auditing of algorithm-based systems, empowers people to act against algorithm-based systems (e.g., enables whistleblowing), and promotes other ethical principles in the application of algorithm-based systems (e.g., autonomy, justice, trust) (Jobin
et al., 2019). Although those ethical guidelines calling for transparency are not legally binding, legislation such as the European
Union's GDPR as well as the European proposal for AI-specific legislation (the AI Act) shows that opacity and transparency will be central to future legal considerations in algorithm-based HRM (Floridi, 2021; Goodman & Flaxman, 2017).

4. How to promote transparency in algorithm-based HRM

The fact that there are various stakeholders with their own perspectives and interests regarding algorithm-based HRM makes
considerations surrounding opacity and transparency in algorithm-based HRM especially challenging. For some stakeholder goals,
reducing opacity and promoting transparency will be necessary to fulfill those goals, whereas for other goals, maintaining opacity can
be beneficial. Similarly, some stakeholders might want algorithm-based HRM to be more transparent whereas others want to preserve
opacity. Considering all stakeholders' perspectives, as well as the advantages and disadvantages associated with opacity and transparency, seems necessary for the effective and accepted use of algorithm-based HRM.
However, research on algorithm-based HRM has so far predominantly focused on reasons for opacity as well as their implications
without providing strategies to reduce opacity and promote transparency. Being aware of possible strategies to reduce opacity is
crucial as it gives stakeholders in algorithm-based HRM the opportunity to consider these strategies when dealing with the advantages
and disadvantages of opacity. In the following section, we introduce suggested strategies to reduce system-based opacity, opacity due
to illiteracy, and intentional opacity in order to promote transparency.

4.1. Technical solutions to reduce system-based opacity

There is a variety of technical solutions to address system-based opacity (Adadi & Berrada, 2018; Arrieta et al., 2020). The
literature on technical solutions is vast and a comprehensive review of this research is beyond the scope of the current paper (we refer
readers to Adadi & Berrada, 2018; Arrieta et al., 2020; Guidotti et al., 2019; Lipton, 2018, for overviews on those methods). Broadly
speaking, the two big classes of technical solutions to reduce system-based opacity are transparency-by-design and post-hoc interpretability or explainability methods (Guidotti et al., 2019; Lipton, 2018). Transparency-by-design means implementing algorithm-based solutions using (at least partly) transparent algorithmic models and features for prediction and classification. Transparency-by-design therefore relates to the actual transparency of the algorithmic models used in algorithm-based systems. It could mean models whose entire logic is transparent to people, models where single components (e.g., inputs, parameters, calculations) are transparent, and models where at least the underlying algorithm is transparent (e.g., that linear regression describes linear relations; Arrieta et al., 2020; Lipton, 2018).
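As a simple illustration of transparency-by-design (synthetic data and hypothetical feature names, not an example from the cited studies), a logistic regression model exposes coefficients whose sign and magnitude can be inspected directly:

# Illustrative sketch (synthetic data, hypothetical feature names): a transparent-by-design
# linear model whose coefficients can be read directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["structured_interview_score", "work_sample_score", "years_experience"]
rng = np.random.default_rng(2)
X = rng.normal(size=(150, 3))
y = ((X @ np.array([1.0, 0.8, 0.3]) + rng.normal(scale=0.5, size=150)) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")   # sign and magnitude of each input's weight are visible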
Post-hoc interpretability and explainability methods play a crucial role when there is low or virtually no model transparency (Lipton, 2018). In such cases, researchers (mostly computer scientists) attempt to design technical solutions that make system processes or outputs less opaque. For instance, post-hoc interpretability and explainability methods can augment black-box models with methods that generate text or symbols to explain the functioning of a system or the rationale behind its outputs (e.g., “these applicants were rejected because they did not respond to question 2”). Other post-hoc interpretability and explainability methods provide visual information or feature relevance information that gives an intuition of the importance of features for a given output. As another example, there are methods that try to derive information regarding what would have happened with different input information (e.g., what would have happened if an applicant had been male instead of female) or methods that inform the user about what would have needed to be different for a different outcome (e.g., this applicant would have been invited for a job interview had they had one more year of job experience) (Karimi, Schölkopf, & Valera, 2021; Mittelstadt, Russell, & Wachter, 2019). As a final example, post-hoc interpretability and explainability methods could provide representative data examples that relate to a respective output. For instance, in addition to the recommendation regarding an applicant (e.g., “this applicant received 8 out of 10 points”), other applicants who are representative of a given recommendation category (i.e., the category “8 out of 10 points”) could be presented together with their input information (see Arrieta et al., 2020, for an overview of post-hoc interpretability and explainability methods).
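The following sketch (synthetic data; scikit-learn's permutation importance serves as one stand-in for the feature-relevance methods discussed above, and the counterfactual probe is a deliberately simplified illustration rather than a full counterfactual-explanation method) shows how such post-hoc information can be generated for a black-box model:

# Illustrative sketch (synthetic data): feature-relevance information via permutation
# importance, plus a simplified counterfactual-style probe of a black-box model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 3))                                # e.g., interview, work sample, experience
y = ((X @ np.array([1.0, 0.7, 0.4]) + rng.normal(scale=0.5, size=300)) > 0).astype(int)
black_box = GradientBoostingClassifier(random_state=3).fit(X, y)

# Feature relevance: how much does shuffling each feature degrade the model's accuracy?
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=3)
print(result.importances_mean)

# Counterfactual-style probe: would increasing feature 2 (say, "one more year of experience")
# flip this applicant's predicted outcome? Dedicated counterfactual methods search far more
# systematically (e.g., Karimi et al., 2021), but the underlying question is the same.
applicant = X[:1].copy()
modified = applicant.copy()
modified[0, 2] += 1.0
print(black_box.predict(applicant), black_box.predict(modified))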
The extent to which these technical solutions reduce opacity without considering other reasons for opacity is still up for debate.
Specifically, opacity due to illiteracy will likely undermine the positive effects that those technical solutions can realistically achieve
(Langer, König and Busch, 2021). For instance, understanding an inherently transparent system still requires basic knowledge on how
algorithms generally work. Similarly, highlighting the most influential predictors in an algorithm still requires an understanding of
what this means for the outputs of the respective algorithm. Furthermore, technical solutions can themselves be designed to be intentionally opaque. For instance, deployers can decide to implement technical solutions that only reveal parts of a system's actual decision-making process, arguing that they would otherwise overwhelm users with additional information (Chromik, Eiband, Völkel, & Buschek, 2019). Thus, it might be necessary to implement regulations that prevent the misuse of technical solutions that address opacity.

4.2. Education and training as strategies to reduce opacity due to illiteracy

To address the issue of opacity due to illiteracy, it is necessary to educate and train stakeholders (Oswald et al., 2020). Beginning in
primary school, policy-makers are attempting to bring programming literacy to the broader public (Lepri, Oliver, Letouzé, Pentland, & Vinck, 2018). Furthermore, online courses teaching people the basics of programming and machine learning are booming and could be used as
training methods within organizations that want to enable their HR staff to become more knowledgeable regarding algorithm-based
systems (Oswald et al., 2020). However, it is still unknown what the effects of education and training are on the use of algorithm-based
systems in practice (Kellogg et al., 2020).
Again, we need to consider interaction effects with the other reasons for opacity and strategies to reduce opacity. For example,
implementing technical solutions could make algorithm-based systems and their outputs less opaque without the need for strong
algorithmic literacy. However, the development of technical solutions has so far mostly focused on methods that benefit the goals of developers (e.g., increasing developers' understanding to help them improve system quality) (Brock, 2018; Miller et al., 2017). This
implies that research is necessary to make technical solutions helpful for people with limited algorithmic literacy. Eventually, technical
solutions that are tailored to the needs of people with low algorithmic literacy could lower the bar for knowledge necessary to
effectively use algorithm-based systems in practice (e.g., Doshi-Velez & Kim, 2017; Sokol & Flach, 2020). Unfortunately, research on
the effects of technical solutions for non-experts in applied settings (Liao, Gruen, & Miller, 2020; see Tonekaboni, Joshi, McCradden, &
Goldenberg, 2019, for an exception in the medical domain) is particularly rare and practically non-existent for the current applications
of algorithm-based systems in management. This makes it questionable whether existing technical solutions would even align with
stakeholder needs in applied settings or with regulation that was set in place to address algorithmic opacity (Goodman & Flaxman,
2017).

4.3. Regulation and ethical guidelines as strategies to reduce intentional opacity

In the case of intentional opacity, reasons for opacity lie in the hands of developers and deployers (Felzmann et al., 2019). In such
cases, it might be necessary to legally require developers and deployers to reduce opacity and implement means of increasing algorithmic transparency. For instance, the GDPR clearly taps into issues surrounding opacity and transparency. Particularly relevant in
this regard are Articles 12, 13, 14, 22 and Recital 71 of the GDPR (Goodman & Flaxman, 2017). Article 12 builds the foundation for the
use of “transparent, intelligible, and easily accessible” information and communication in the other articles. Articles 13 and 14 refer to
providing “meaningful information about the logic involved” when using automated decision-making and profiling (i.e., using
automated processing of personal data to evaluate people's characteristics and to evaluate and predict human behavior). Article 22
puts restrictions on the use of completely automated profiling and calls for safeguards in cases where automated profiling is used (e.g.,
a right to contest and a right to human intervention). Finally, Recital 71 includes what is commonly discussed as a “right to an explanation” (Goodman & Flaxman, 2017): “[automated] processing should be subject to suitable safeguards, which should include specific
information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation
of the decision reached after such assessment and to challenge the decision.”


Whereas those parts of the GDPR aim towards more transparency of algorithm-based decision processes, there is still a lot of room
for interpretation with the right to explanation and with other parts of the GDPR (which is also a reason for recent proposals on more
specific European legislation with respect to transparency requirements of algorithm-based systems; Floridi, 2021).3 How deployers
will implement those aspects in algorithm-based HRM is an open question. For instance, deployers could implement technical solutions
or make source code openly available. However, this might not be especially helpful to reduce opacity for people with low algorithmic
literacy. Additionally, there are cases where this is not practically feasible. In such cases, it might be possible to invite independent
auditors. Those could audit respective algorithms and disclose their inner workings to, for instance, regulators to ensure ethical and
legal standards (Burrell, 2016) and at the same time preserve an organization's intellectual property by maintaining opacity towards competitors.
Aside from legal regulations, there has been a recent upsurge in ethical guidelines calling for increased transparency of algorithm-
based systems (Jobin et al., 2019). Although not legally binding, they can influence decision-makers in organizations to invest in
increasing algorithmic transparency (Jobin et al., 2019; Kellogg et al., 2020). However, even though most ethical guidelines call for
transparency, they are less uniform regarding their interpretation and definition of transparency and how it should be implemented.
Additionally, the content of those guidelines might reflect the underlying motivation and goals of the respective institution that
developed the guidelines. For instance, Jobin et al. (2019) emphasize that ethical guidelines developed by private companies differ
from those of non-profit organizations or policy makers (e.g., private companies support technical solutions to reduce opacity whereas
other institutions call for more legal regulation and auditing). This indicates that interests with regard to (intentional) opacity might be
reflected in ethical guidelines that actually aim to address respective issues of opacity.

4.4. Trade-offs associated with the strategies to reduce opacity for HRM stakeholders

Up to this point, we have presented the centrality of opacity for algorithm-based HRM, considered the implications of opacity for the main stakeholders, and provided an overview of strategies to reduce opacity and promote transparency. Throughout the paper, we have seen that opacity can be beneficial for certain desired outcomes in algorithm-based HRM and detrimental for others, and that opacity can serve or undermine the interests of different stakeholders. Similarly, there are possible trade-offs to be
aware of when considering strategies to reduce opacity in order to promote transparency from the perspectives of the involved
stakeholders.

4.4.1. Trade-offs for users


Augmenting algorithm-based systems with technical solutions to reduce opacity can reduce the efficiency of processes, as users need to process additional information (Adadi & Berrada, 2018; Gregor & Benbasat, 1999; Tintarev & Masthoff, 2007). If users want to quickly
fulfill daily tasks and routines, any additional information that accompanies system outputs might lead to negative reactions and decreased efficiency (Gregor & Benbasat, 1999; Tintarev & Masthoff, 2007). However, additional information (and training) can also
enhance efficiency of decision-making as system outputs become more convincing or more easily integrable into users' decision-
making processes (Langer, König and Busch, 2021). There is another trade-off associated with the faithfulness and simplicity of information generated by technical solutions. Although people prefer simple explanations (Lombrozo, 2007), these might not cover the full rationale of an algorithm-based outcome and therefore will not faithfully represent the reasons for an algorithm-based decision. Thus, if technical solutions only reveal parts of the reasons for an outcome, there might be cases where users falsely assume that a system-based
recommendation is based on, for instance, fair processes (Ribeiro, Singh, & Guestrin, 2016; Sokol & Flach, 2020). Moreover, generating information that faithfully informs about the reasons for the outputs of algorithm-based systems remains a task for future research in computer science. To date, it is not really possible to verify whether advanced systems provide insights into their decision-making processes that actually reflect the reasons for the respective outputs (Schölkopf, 2019). Furthermore, a considerable number of users will need more and better education and training to address opacity due to illiteracy. Although there will hopefully soon be an emerging generation of HR professionals equipped with data-scientific foundations, developing evidence-based training to reduce algorithmic illiteracy might pose a challenge if individuals and organizations cannot cover the associated training costs.

4.4.2. Trade-offs for people affected by algorithm-based HRM


Increasing transparency of algorithm-based processes might overwhelm people affected by algorithm-based HRM. For instance,
Lee, Jain, Cha, Ojha, and Kusbit (2019) argue that if the complexity of a decision-making process is made transparent, people might
think that system outputs are the best possible solution – after seeing the complexity of system-based decision processes, they might not
even consider that there could have been a fairer solution. This speculation is, for instance, in line with Bigman, Gray, Waytz, Arnestad, and Wilson (2020), whose findings indicate that moral outrage is less likely for outcomes provided by systems, and with findings by
Elsbach and Stigliani (2019) showing that people believe novel technologies are just too complex to understand. Additionally, affected
people might not even expect information or explanation in relation to algorithm-based decisions (Schlicker et al., 2021). In some
cases, providing clarifying information on algorithm-based decision processes can even detrimentally affect reactions to algorithm-based processes (Newman, Fast and Harmon, 2020).

3 See https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN for the Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.


4.4.3. Trade-offs for deployers


There is a trade-off between information leakage and endeavors to make algorithm-based systems more transparent (e.g., technical
solutions and legal regulations) (Sokol & Flach, 2020). Every piece of information that increases transparency of algorithm-based
systems can help competitors to unravel and potentially copy a system. Furthermore, reducing opacity can help people to game
systems (M. K. Lee et al., 2015). In machine learning, this issue is discussed in relation to adversarial attacks on machine learning models, meaning that people try to unravel a model's decision process in order to manipulate it to their own advantage (Arrieta et al., 2020). Especially in algorithmic management contexts, the trade-off between transparency and gameability becomes apparent (Kellogg et al., 2020; M. K. Lee et al., 2015; Möhlmann et al., in press). For instance, Uber drivers commonly try to exchange information
about how the Uber algorithm determines compensation for rides in order to be better able to control their compensation (Möhlmann
& Zalmanson, 2017). Similarly, Upwork workers (people working on the freelancing platform Upwork) try to find out how Upwork
monitors their work or determines work performance in order to find ways to circumvent monitoring or to enhance their performance
evaluation (Jarrahi & Sutherland, 2019). For deployers, those situations are likely undesirable, thus motivating them to maintain or
even increase opacity. We can also imagine systems that are able to tailor the provided information in a way that makes outcomes deployers find desirable more likely. For instance, systems could learn what kind of explanation they have to provide in order to increase users' trust in a system, or to increase the commitment of workers on algorithmic management platforms to follow the rules of the platform provider (Ravenelle, 2019). This, again, points to the potentially conflicting interests of different stakeholders regarding algorithm-based HRM, and again highlights opacity as a strategic lever for deployers (Ananny & Crawford, 2018; Schnackenberg & Tomlinson, 2016). For instance, deployers can strategically increase or decrease system transparency. In the case of algorithm-based systems for HRM, this means that deployers of algorithm-based solutions might provide users with overly detailed information, which occludes actually relevant information and can decrease motivation to even consider the provided information (Ananny & Crawford, 2018). Similarly, making certain parts of algorithmic prediction models transparent (e.g., inputs) while leaving others opaque (e.g., outputs) might be another strategic decision in relation to the transparency of algorithm-based systems in HRM.
Another trade-off for deployers is associated with education and training as a way to reduce opacity due to illiteracy. On the one hand,
deployers might want to train users to be better able to make high-quality decisions in collaboration with algorithm-based systems (Oswald et al., 2020). On the other hand, they might want to keep some opacity, as transparency can also lead to less efficiency (Gregor
& Benbasat, 1999) or to users being better able to game systems (Kellogg et al., 2020).

4.4.4. Trade-offs for developers


First, when trying to make systems less opaque, there are important technical limitations. Specifically, completely solving the issue
of system-based opacity seems challenging if not impossible (Ananny & Crawford, 2018; Zerilli, Knott, Maclaurin, & Gavaghan, 2018).
For instance, it might never be fully possible to translate internal representations of deep neural nets into human semantics, and for continuous learning systems, any explanation might just reflect a snapshot of a system's current internal decision logic (Ananny
& Crawford, 2018). Second, there is a trade-off that concerns system accuracy and transparency with potentially strong implications
for the implementation of algorithm-based systems in practice. Previous research frequently refers to an accuracy-transparency trade-
off in machine learning (Arrieta et al., 2020; Yarkoni & Westfall, 2017): Higher accuracy in machine learning algorithms (e.g.,
classification or prediction accuracy) often goes hand in hand with lower transparency. Transparency-by-design (i.e., using transparent
models) can be a roadblock for accuracy because more transparent models (e.g., rule-based or tree-based models) tend to be less accurate than less transparent models (e.g., artificial neural networks). As another example, preprocessing (e.g., deleting improbable data values) can boost prediction accuracy but will reduce transparency (Burrell, 2016). This likely requires purposeful decision-making that balances the prediction accuracy and transparency of an algorithm-based system. Beyond research to find (mathematically) Pareto-optimal solutions, it might be fruitful to study how much prediction accuracy one is willing to sacrifice to improve transparency in algorithm-based HRM. The answer to this question likely depends on the application context of algorithm-based systems (Arrieta et al., 2020; Felzmann
et al., 2019; Kellogg et al., 2020; Leicht-Deobald et al., 2019). For instance, using algorithm-based systems for transactional activities
(e.g., everyday work scheduling; Lepak et al., 2005) might call for less transparency than for ethically sensitive, transformational
decisions in promotion, personnel selection, or organizational development.
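To illustrate the kind of comparison such purposeful decision-making rests on, the following sketch contrasts a transparent-by-design model with a less transparent one. It is a minimal, hypothetical example using scikit-learn on synthetic data; the feature set, model choices, and any resulting accuracy gap are illustrative assumptions, not results from the studies cited above.

```python
# Minimal sketch of an accuracy-transparency comparison on synthetic data.
# Assumes scikit-learn is installed; all names and numbers are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for applicant data (e.g., test scores, interview ratings).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent-by-design model: a shallow decision tree whose rules can be printed.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Less transparent model: a multilayer perceptron (an artificial neural network).
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0).fit(X_train, y_train)

print("Decision tree accuracy:", round(tree.score(X_test, y_test), 3))
print("Neural network accuracy:", round(mlp.score(X_test, y_test), 3))

# The tree's decision logic is directly inspectable; the network's weights are not.
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
print(export_text(tree, feature_names=feature_names))
```

In an applied HRM setting, such a comparison would of course use validated predictors and job-relevant criteria; the point is merely that the directly inspectable decision rules of the tree may come at some cost in predictive accuracy relative to the less transparent network.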

4.4.5. Trade-offs for regulators


Many ethical guidelines refer to transparency in relation to algorithm-based systems in practice, assuming that more transparency
is better (Jobin et al., 2019; Martin, 2019). However, transparency might be achievable for only certain algorithm-based systems; it does not automatically ensure understandability (e.g., for people with low algorithmic literacy); it might be necessary for some activities but not for others; and it might be in the interest of only some stakeholders. Thus, regulators have to
consider conflicts and trade-offs between different interests in relation to opacity (Langer, Oster, et al., 2021; Sokol & Flach, 2020).
Whereas developers are more likely to call for technical insights that help them improve systems, affected people might call for easily accessible information that increases their control over algorithm-based systems. Furthermore, deployers might call for transparency because they want their systems to adhere to legal regulation, whereas developers either cannot provide a certain level of transparency due to technical constraints or would need to significantly lower the performance of systems to provide such transparency. Addressing specific stakeholders' interests in a given situation requires context-aware technical solutions, training and education, or regulation tailored to stakeholders' needs (Mittelstadt et al., 2019). With respect to technical solutions, this is already an active field of research. However, there are scholars who question whether it will be possible to develop technical solutions with this kind of responsiveness, adaptability, interactivity, and context-awareness in the near future (Ananny & Crawford, 2018; Burrell, 2016).
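As a purely illustrative sketch of what stakeholder-tailored, context-aware transparency could look like at the software level, the following hypothetical snippet maps stakeholder roles and HRM activities to different explanation styles. The roles, explanation styles, and function names are our assumptions for illustration and do not correspond to an existing system or API.

```python
# Hypothetical sketch of selecting an explanation style per stakeholder and HRM activity.
# Stakeholder roles, explanation styles, and helper names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ExplanationRequest:
    stakeholder: str   # e.g., "user", "affected_person", "developer", "regulator"
    hrm_activity: str  # e.g., "work_scheduling", "personnel_selection"


def select_explanation_style(request: ExplanationRequest) -> str:
    """Return an explanation style tailored to the requesting stakeholder."""
    styles = {
        "user": "feature_importance_summary",     # supports day-to-day decision-making
        "affected_person": "counterfactual",      # supports contestability and recourse
        "developer": "global_model_diagnostics",  # supports debugging and improvement
        "regulator": "audit_report",              # supports checking adherence to rules
    }
    style = styles.get(request.stakeholder, "plain_language_summary")
    # Ethically sensitive, transformational activities might warrant richer explanations.
    if request.hrm_activity == "personnel_selection" and style == "feature_importance_summary":
        style = "counterfactual"
    return style


print(select_explanation_style(ExplanationRequest("affected_person", "personnel_selection")))
```

Such a mapping is, of course, only a starting point; whether the chosen explanation styles actually serve the respective stakeholders is exactly the kind of empirical question raised in the following section.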


5. Future research directions

In summary, there is a strong need for research on the basic issue of opacity, on proposed strategies to reduce opacity (technical
solutions, training and education, regulation and guidelines), and on consequences of opacity and transparency with respect to
stakeholder perspectives in algorithm-based HRM. To holistically address opacity in algorithm-based HRM, it is crucial to investigate
the three reasons for opacity and strategies to reduce opacity from the perspective of each type of stakeholder, and with regard to
potential advantages, disadvantages, and trade-offs. Furthermore, our analysis of the consequences of opacity and transparency shows
that they cover various topics central to the area of algorithm-based HRM such as work design (Parker & Grote, 2020), control (Kellogg
et al., 2020), decision-making (Grove, Zald, Lebow, Snitz, & Nelson, 2000), trust (J. D. Lee & See, 2004; Mayer, Davis, & Schoorman,
1995), justice and fairness (Ötting & Maier, 2018), acceptance of algorithm-based systems in HRM (Newman et al., 2020),
improvement of the performance of algorithm-based systems in HRM (Tambe, Cappelli, & Yakubovich, 2019), as well as legal and
ethical issues (Goodman & Flaxman, 2017; Jobin et al., 2019). Table 1 provides an overview of future research directions from the perspective of the stakeholders in HRM and with respect to the strategies to reduce opacity, understood as deliberate activities to address the reasons for opacity and to balance the advantages and disadvantages associated with opacity and transparency.
Regarding technical solutions to system-based opacity, some of the most pressing research questions arise from the fact that technical solutions have rarely found their way into real-world settings. The increasing use of algorithm-based systems for HRM (M. M. Cheng & Hackett, 2021; Tambe et al., 2019) offers an opportunity to investigate the effects of different technical solutions on the stakeholders involved. Because these effects likely depend on the stakeholder, the respective HRM activity, and further contextual variables (e.g., time pressure), there is a strong need for a structured research agenda investigating how technical solutions contribute to the implementation of algorithm-based HRM in practice. Specifically, this means (a) investigating the use of different technical solutions for different application areas, (b) exploring the effects of technical solutions on a variety of possible outcomes (e.g., fairness, trust, autonomy, responsibility, work satisfaction, and human-AI team performance), (c) scrutinizing the processes of how technical solutions affect those outcomes, (d) examining contextual influences and individual differences as moderators of the relation between technical solutions, stakeholders and outcomes, and (e) iteratively improving technical solutions and their use in applied settings (see also Langer, Oster, et al., 2021).

Table 1
Research questions in algorithm-based HRM resulting from the perspective of the main stakeholders in algorithm-based HRM, from the proposed strategies to reduce opacity, and from the trade-offs associated with the strategies to reduce opacity.

Users
Technical solutions to reduce system-based opacity:
- How do users react to technical solutions (e.g., trust, acceptance)?
- How do technical solutions contribute to work design when implementing algorithm-based HRM?
- How do technical solutions affect performance of human-system teams (e.g., decision-making performance)?
- *Are there trade-offs with respect to efficiency and performance when implementing technical solutions?*
Education and training as strategies to reduce opacity due to illiteracy:
- What are the effects of training and educating users (e.g., regarding perceived usefulness, decision-making quality)?
- How to train HR employees on the use of algorithm-based systems in HRM?
- How to efficiently implement user training?
- *What are the costs associated with user training?*
Regulation and guidelines as strategies to reduce intentional opacity:
- What are the effects of regulation and guidelines for users (e.g., regarding autonomy)?
- How do users react to regulations and guidelines?
- How to ensure that users perceive regulations and guidelines as impactful?

Affected people
Technical solutions to reduce system-based opacity:
- How do affected people react to technical solutions (e.g., justice, fairness, controllability)?
- Do technical solutions promote justice in algorithm-based decisions?
- *Do technical solutions have side effects for affected people (e.g., overwhelm people, allow gaming systems)?*
Education and training as strategies to reduce opacity due to illiteracy:
- What are the effects of training and educating affected people (e.g., regarding acceptance, controllability)?
- How to educate the public regarding algorithm-based HRM?
- *Does education and training of affected people influence system gameability?*
Regulation and guidelines as strategies to reduce intentional opacity:
- What are the effects of regulation and guidelines for affected people (e.g., regarding worker autonomy, applicant reactions)?
- *Will regulation and guidelines increase bureaucracy (e.g., for gig workers)?*

Deployers
Technical solutions to reduce system-based opacity:
- How do deployers react to technical solutions (e.g., anticipating acceptance, trust)?
- Do technical solutions contribute to algorithm-based HRM strategy (e.g., only with technical solutions will algorithm-based HRM be used for transformational HRM activities)?
- *Do deployers use technical solutions strategically (e.g., to keep opacity high)?*
Education and training as strategies to reduce opacity due to illiteracy:
- What are the effects of training and educating deployers (e.g., regarding risk assessment of algorithm-based HRM)?
- Do algorithmically literate deployers implement algorithm-based systems differently?
- *Will deployers develop trainings that keep certain information intentionally opaque?*
Regulation and guidelines as strategies to reduce intentional opacity:
- How do deployers respond to regulation (legal and ethical guidelines)?
- How do regulations and guidelines affect algorithm-based HRM strategy?
- *Will deployers try to circumvent regulation and guidelines to keep the strategic lever of opacity?*

Developers
Technical solutions to reduce system-based opacity:
- Do technical solutions improve the development of algorithm-based HRM?
- Do technical solutions help to realize issues with algorithm-based systems earlier?
- *How to treat accuracy-transparency trade-offs?*
Education and training as strategies to reduce opacity due to illiteracy:
- Can developers be made aware of other stakeholders' perspectives to be better able to improve systems?
- How to train developers in weighing different stakeholders' perspectives and the advantages and disadvantages of opacity?
Regulation and guidelines as strategies to reduce intentional opacity:
- How do developers implement regulation in algorithm-based systems?
- *How to prevent regulation and guidelines from negatively affecting system quality?*

Regulators
Technical solutions to reduce system-based opacity:
- How do regulators react to technical solutions (e.g., evaluation of auditability, adherence to regulation of algorithm-based HRM activities)?
- *How to ensure that technical solutions are designed to the benefit of users and affected people and not only in the strategic interests of deployers?*
Education and training as strategies to reduce opacity due to illiteracy:
- What are the effects of training and educating regulators (e.g., more practically implementable regulation)?
- How to train regulators?
- Does literacy affect the demands and interests of regulators regarding algorithm-based HRM?
Regulation and guidelines as strategies to reduce intentional opacity:
- How do regulators in companies (e.g., employee representatives) enforce regulations and guidelines?
- *How to prevent overregulation and bureaucracy?*

Note. Sentences marked with asterisks represent sample trade-offs to investigate that are associated with the strategies to reduce opacity.

Regarding education and training as strategies to reduce opacity due to illiteracy, it seems particularly important to develop and
validate educational and training efforts regarding the use of algorithms in management. Although there have been calls for more
education and training (Oswald et al., 2020), it is not yet clear how to effectively and efficiently implement them. Furthermore, there is
a lack of research showing that education and training actually affect the intended outcomes for the respective stakeholders, such as human-system team performance in HRM, adequate trust in systems, acceptance of algorithm-based decisions, or risk assessment for deploying algorithm-based systems in HRM.
Regarding regulation and guidelines as strategies to reduce intentional opacity, there has been a recent boom in ethical guidelines
and legislation addressing the use of algorithms in practice (for overviews, see Hagendorff, 2020, or Jobin et al., 2019). However, their effects on the behavior of developers and deployers are unclear. Will developers perceive those regulations as necessary, or will they consider them ignorable? In the case of deployers, it is conceivable that they could creatively interpret regulation and ethical guidelines (e.g., similar to the privacy statements required by the GDPR; Degeling et al., 2019) to retain the strategic lever of opacity and transparency. Furthermore, it is an open question whether other stakeholders (i.e., users, affected people) feel empowered by those regulations and whether they would react favorably to organizations adhering to ethical guidelines regarding the use of algorithm-based systems in HRM.
In Table 1, we present technical solutions, education and training, as well as regulation and guidelines as possible strategies to reduce opacity. However, given the aforementioned trade-offs, those strategies might run into issues in practice. For instance, although various ethical guidelines, legal regulations, and many scholars promote increasing the transparency of algorithm-based systems (Adadi & Berrada, 2018; Jobin et al., 2019), other authors criticize this line of work, emphasizing that increasing transparency does not always lead to the expected or even positive outcomes (Ananny & Crawford, 2018; Felzmann et al., 2019), or arguing that there is a double standard in relation to transparency because human decisions are essentially black boxes as well (Zerilli et al., 2018). Furthermore, there is already research showing that there could be "dark patterns" of technical solutions to opacity (Chromik et al., 2019). For instance, technical solutions could be designed to confuse or divert affected people so that they do not question algorithm-based decisions, or to convince regulators that algorithm-based decisions adhere to legal regulations even when the underlying decision process is problematic. Consequently, there is a need for a systematic research agenda investigating how to legally, ethically, and effectively balance opacity and transparency while considering stakeholders' individual perspectives.

6. Conclusion

Algorithm-based HRM will continue to play a central role in organizational practices (M. M. Cheng & Hackett, 2021; Möhlmann
et al., in press; Tambe et al., 2019). With the spread of algorithm-based HRM, there will be increasing demands from a variety of
stakeholders to make algorithm-based HRM more efficient, useful, trustworthy, ethical, fair, legal, and accountable (Floridi et al.,
2018). In this paper we have emphasized that opacity is a key characteristic of algorithm-based HRM activities that might affect all of
these demands. We have also shown that weighing opacity against transparency is complex when implementing algorithm-based HRM and that there are different reasons for opacity, different strategies to reduce opacity, and different stakeholders with (partly) conflicting interests. Furthermore, the HR activity, stakeholders' perspectives, and the application context will inform whether a system should be designed in a less opaque way, whether users need to be better trained in working with systems, or whether regulation is needed to promote transparency. Opacity seems to be the default when implementing algorithm-based HRM
and any strategy to reduce opacity needs to be examined with respect to the various stakeholders and their (strategic) interests
regarding algorithm-based HRM.

References

Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/
10.1109/ACCESS.2018.2870052
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media &
Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts,
taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
Bigman, Y. E., Gray, K., Waytz, A., Arnestad, M., & Wilson, D. (2020). Algorithmic discrimination causes less moral outrage than human discrimination [preprint].
PsyArXiv. https://doi.org/10.31234/osf.io/m3nrp
Brock, D. C. (2018). Learning from artificial intelligence’s previous awakenings: The history of expert systems. AI Magazine, 39(3), 3–15. https://doi.org/10.1609/
aimag.v39i3.2809
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), Article 2053951715622512. https://doi.org/10.1177/2053951715622512
Campion, M. C., Campion, M. A., Campion, E. D., & Reider, M. H. (2016). Initial investigation into computer scoring of candidate essays for personnel selection.
Journal of Applied Psychology, 101(7), 958–975. https://doi.org/10.1037/apl0000108
Cheng, M., & Foley, C. (2019). Algorithmic management: The case of Airbnb. International Journal of Hospitality Management, 83, 33–36. https://doi.org/10.1016/j.
ijhm.2019.04.009
Cheng, M. M., & Hackett, R. D. (2021). A critical review of algorithms in HRM: Definition, theory, and practice. Human Resource Management Review, 31(1), Article
100698. https://doi.org/10.1016/j.hrmr.2019.100698


Chromik, M., Eiband, M., Völkel, S. T., & Buschek, D. (2019). Dark patterns of explainability, transparency, and user control for intelligent systems. In Workshop on
explainable smart systems at the ACM conference on intelligent user interfaces (Paper Presentation) https://www.medien.ifi.lmu.de/pubdb/publications/pub/
chromik2019iuiworkshop/chromik2019iuiworkshop.pdf.
Colquitt, J. A., Conlon, D. E., Wesson, M. J., Porter, C. O. L. H., & Ng, K. Y. (2001). Justice at the millenium: A meta-analytic review of 25 years of organizational
justice research. Journal of Applied Psychology, 86(3), 425–445. https://doi.org/10.1037//0021-9010.86.3.425
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.Com. https://www.reuters.com/article/us-amazon-com-jobs-
automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
Degeling, M., Utz, C., Lentzsch, C., Hosseini, H., Schaub, F., & Holz, T. (2019). We value your privacy … Now take some cookies: Measuring the GDPR’s impact on web
privacy. In Proceedings 2019 network and distributed system security symposium. https://doi.org/10.14722/ndss.2019.23378
Dhaliwal, J. S., & Benbasat, I. (1996). The use and effects of knowledge-based system explanations: Theoretical foundations and a framework for empirical valuation.
Information Systems Research, 7(3), 342–362. https://doi.org/10.1287/isre.7.3.342
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. ArXiv. https://arxiv.org/abs/1702.08608.
Elsbach, K. D., & Stigliani, I. (2019). New information technology and implicit bias. Academy of Management Perspectives, 33(2), 185–206. https://doi.org/10.5465/
amp.2017.0079
Endsley, M. R. (2017). From here to autonomy: Lessons learned from human–automation research. Human Factors, 59(1), 5–27. https://doi.org/10.1177/
0018720816681350
Felzmann, H., Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), Article 2053951719860542. https://doi.org/10.1177/2053951719860542
Floridi, L. (2021). The European legislation on AI: A brief analysis of its philosophical approach. Philosophy & Technology, 34(2), 215–222. https://doi.org/10.1007/
s13347-021-00460-9
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018).
AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.
org/10.1007/s11023-018-9482-5
Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.
org/10.5465/annals.2018.0057
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 50–57. https://doi.
org/10.1609/aimag.v38i3.2741
Gregor, S., & Benbasat, I. (1999). Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS Quarterly, 23(4), 497. https://doi.
org/10.2307/249487
Griesbach, K., Reich, A., Elliott-Negri, L., & Milkman, R. (2019). Algorithmic control in platform food delivery work. Socius: Sociological Research for a Dynamic World, 5, Article 2378023119870041. https://doi.org/10.1177/2378023119870041
Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12(1),
19–30. https://doi.org/10.1037//1040-3590.12.1.19
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2019). A survey of methods for explaining black box models. ACM Computing Surveys,
51(5), 1–42. https://doi.org/10.1145/3236009
Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance, 16(2), 250–279.
https://doi.org/10.1016/0030-5073(76)90016-7
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
Hickman, L., Bosch, N., Ng, V., Saef, R., Tay, L., & Woo, S. E. (2021). Automated video interview personality assessments: Reliability, validity, and generalizability
investigations. Journal of Applied Psychology. https://doi.org/10.1037/apl0000695. Advance Online Publication.
Höddinghaus, M., Sondern, D., & Hertel, G. (2020). The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior,
116, Article 106635. https://doi.org/10.1016/j.chb.2020.106635
Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/
10.1177/0018720814547570
Jarrahi, M. H., & Sutherland, W. (2019). Algorithmic management and algorithmic competencies: Understanding and appropriating algorithms in gig work. In
N. G. Taylor, C. Christian-Lamb, M. H. Martin, & B. Nardi (Eds.), Information in contemporary society (11420th ed., pp. 578–589). Springer. https://doi.org/
10.1007/978-3-030-15742-5_55. Lecture notes in computer science.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-
019-0088-2
Karimi, A.-H., Schölkopf, B., & Valera, I. (2021). Algorithmic recourse: From counterfactual explanations to interventions. In Proceedings of the 2021 FAccT conference
on fairness, accountability, and transparency (pp. 353–362). https://doi.org/10.1145/3442188.3445899
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410.
https://doi.org/10.5465/annals.2018.0174
Lai, V., & Tan, C. (2019). On human predictions with explanations and predictions of machine learning models: A case study on deception detection. In Proceedings of
the 2019 FAT* conference on fairness, accountability, and transparency (pp. 29–38). https://doi.org/10.1145/3287560.3287590
Langer, M., König, C. J., & Busch, V. (2021). Changing the means of managerial work: Effects of automated decision-support systems on personnel selection tasks.
Journal of Business and Psychology, 36(5), 751–769. https://doi.org/10.1007/s10869-020-09711-6
Langer, M., König, C. J., & Fitili, A. (2018). Information as a double-edged sword: The role of computer experience and information on applicant reactions towards
novel technologies for personnel selection. Computers in Human Behavior, 81, 19–30. https://doi.org/10.1016/j.chb.2017.11.036
Langer, M., & Landers, R. N. (2021). The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by
algorithms and third-party observers. Computers in Human Behavior, 123. https://doi.org/10.1016/j.chb.2021.106878
Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., … Baum, K. (2021). What do we want from explainable artificial intelligence (XAI)? A
stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296, Article 103473. https://doi.org/
10.1016/j.artint.2021.103473
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50.30392
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), Article 2053951718756684. https://doi.org/10.1177/2053951718756684
Lee, M. K., Jain, A., Cha, H. J., Ojha, S., & Kusbit, D. (2019). Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair
algorithmic mediation. In Proceedings of the ACM on human-computer interaction, 3(CSCW) (pp. 1–26). https://doi.org/10.1145/3359284
Lee, M. K., Kusbit, D., Metsky, E., & Dabbish, L. (2015). Working with machines: The impact of algorithmic and data-driven management on human workers. In
Proceedings of the 33rd annual ACM conference on human factors in computing systems - CHI ’15 (pp. 1603–1612). https://doi.org/10.1145/2702123.2702548
Leicht-Deobald, U., Busch, T., Schank, C., Weibel, A., Schafheitle, S., Wildhaber, I., & Kasper, G. (2019). The challenges of algorithm-based HR decision-making for
personal integrity. Journal of Business Ethics, 160(2), 377–392. https://doi.org/10.1007/s10551-019-04204-w
Lepak, D. P., Bartol, K. M., & Erhardt, N. L. (2005). A contingency framework for the delivery of HR practices. Human Resource Management Review, 15(2), 139–159.
https://doi.org/10.1016/j.hrmr.2005.06.001
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes: The premise, the
proposed solutions, and the open challenges. Philosophy & Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x
Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: Informing design practices for explainable AI user experiences. In Proceedings of the 2020 CHI conference
on human factors in computing systems (pp. 1–15). https://doi.org/10.1145/3313831.3376590


Liem, C. C. S., Langer, M., Demetriou, A., Hiemstra, A. M. F., Sukma Wicaksana, A., Born, M. P., & König, C. J. (2018). Psychology meets machine learning:
Interdisciplinary perspectives on algorithmic job candidate screening. In H. J. Escalante, S. Escalera, I. Guyon, X. Baró, Y. Güçlütürk, U. Güçlü, & M. van Gerven
(Eds.), Explainable and interpretable models in computer vision and machine learning (pp. 197–253). Springer. https://doi.org/10.1007/978-3-319-98131-4_9.
Lipton, Z. C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 36–43. https://doi.org/10.1145/3233231
Lombrozo, T. (2007). Simplicity and probability in causal explanation. Cognitive Psychology, 55, 232–257. https://doi.org/10.1016/j.cogpsych.2006.09.006
Makarius, E. E., Mukherjee, D., Fox, J. D., & Fox, A. K. (2020). Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the
organization. Journal of Business Research, 120, 262–273. https://doi.org/10.1016/j.jbusres.2020.07.045
Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4), 835–850. https://doi.org/10.1007/s10551-018-3921-3
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(2), 709–726. https://doi.org/
10.2307/258792
Miller, T., Howe, P., & Sonenberg, L. (2017). Explainable AI: Beware of inmates running the asylum. In Proceedings of the 17th international joint conference on artificial
intelligence IJCAI (pp. 36–42). http://www.intelligentrobots.org/files/IJCAI2017/IJCAI-17_XAI_WS_Proceedings.pdf#page=36.
Mittelstadt, B. D., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In Proceedings of the 2019 FAT* conference on fairness, accountability, and
transparency (pp. 279–288). https://doi.org/10.1145/3287560.3287574
Möhlmann, M., & Zalmanson, L. (2017). Hands on the wheel: Navigating algorithmic management and Uber drivers’ autonomy. In Proceedings of the 2017 international
conference on information system. https://aisel.aisnet.org/icis2017/DigitalPlatforms/Presentations/3.
Möhlmann, M., Zalmanson, L., & Gregory, R. W. (in press). Algorithmic management of work on online labor platforms: When matching meets control. MIS Quarterly.
Advance Online Publication.
Morgeson, F. P., Garza, A. S., & Campion, M. A. (2012). Work design. In I. B. Weiner, N. W. Schmitt, & S. Highhouse (Eds.), Handbook of psychology (pp. 525–559). Wiley. https://doi.org/10.1002/9781118133880.hop212020.
Myhill, K., Richards, J., & Sang, K. (2021). Job quality, fair work and gig work: The lived experience of gig workers. International Journal of Human Resource
Management, 32(19), 4110–4135. https://doi.org/10.1080/09585192.2020.1867612
Naim, I., Tanveer, M. I., Gildea, D., & Hoque, M. E. (2018). Automated analysis and prediction of job interview performance. IEEE Transactions on Affective Computing,
9(2), 191–204. https://doi.org/10.1109/TAFFC.2016.2614299
Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions.
Organizational Behavior and Human Decision Processes, 160, 149–167. https://doi.org/10.1016/j.obhdp.2020.03.008
Oswald, F. L., Behrend, T. S., Putka, D. J., & Sinar, E. (2020). Big data in industrial-organizational psychology and human resource management: Forward progress for
organizational research and practice. Annual Review of Organizational Psychology and Organizational Behavior, 7(1), 505–533. https://doi.org/10.1146/annurev-
orgpsych-032117-104553
Ötting, S. K., & Maier, G. W. (2018). The importance of procedural justice in human-machine-interactions: Intelligent systems as new decision agents in organizations.
Computers in Human Behavior, 89, 27–39. https://doi.org/10.1016/j.chb.2018.07.022
Páez, A. (2019). The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines, 29(3), 441–459. https://doi.org/10.1007/s11023-019-09502-w
Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. https://doi.
org/10.1177/0018720810376055
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. https://doi.org/10.1518/
001872097778543886
Parker, S. K., & Grote, G. (2020). Automation, algorithms, and beyond: Why work design matters more than ever in a digital world. Applied Psychology. https://doi.
org/10.1111/apps.12241. Advance Online Publication.
Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 FAT*
conference on fairness, accountability, and transparency. https://doi.org/10.1145/3351095.3372828
Ravenelle, A. J. (2019). “We’re not uber:” control, autonomy, and entrepreneurship in the gig economy. Journal of Managerial Psychology, 34(4), 269–285. https://doi.
org/10.1108/JMP-06-2018-0256
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 2016 ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135–1144). https://doi.org/10.18653/v1/n16-3020
Rosenblat, A., & Stark, L. (2016). Algorithmic labor and information asymmetries: A case study of Uber’s drivers. International Journal of Communication, 10,
3758–3784. https://doi.org/10.2139/ssrn.2686227
Schlicker, N., Langer, M., Ötting, S. K., König, C. J., Baum, K., & Wallach, D. (2021). What to expect from opening “Black Boxes”? Comparing perceptions of justice
between human and automated agents. Computers in Human Behavior, 122, 106837. https://doi.org/10.1016/j.chb.2021.106837
Schnackenberg, A. K., & Tomlinson, E. C. (2016). Organizational transparency: A new perspective on managing trust in organization-stakeholder relationships.
Journal of Management, 42(7), 1784–1810. https://doi.org/10.1177/0149206314525202
Schölkopf, B. (2019). Causality for machine learning. ArXiv. http://arxiv.org/abs/1911.10500.
Shaw, J. C., Wild, E., & Colquitt, J. A. (2003). To justify or excuse?: A meta-analytic review of the effects of explanations. Journal of Applied Psychology, 88(3),
444–458. https://doi.org/10.1037/0021-9010.88.3.444
Sokol, K., & Flach, P. (2020). Explainability fact sheets: A framework for systematic assessment of explainable approaches. In Proceedings of the 2020 conference on
fairness, accountability, and transparency (pp. 56–67). https://doi.org/10.1145/3351095.3372870
Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: Challenges and a path forward. California Management
Review, 61(4), 15–42. https://doi.org/10.1177/0008125619867910
Tintarev, N., & Masthoff, J. (2007). A survey of explanations in recommender systems. In Proceedings of the international conference on data engineering workshop (pp.
801–810). https://doi.org/10.1109/ICDEW.2007.4401070
Tonekaboni, S., Joshi, S., McCradden, M. D., & Goldenberg, A. (2019). What clinicians want: Contextualizing explainable machine learning for clinical end use. In Proceedings of Machine Learning Research (pp. 359–380).
Veen, A., Barratt, T., & Goods, C. (2020). Platform-capital’s ‘app-etite’ for control: A labour process analysis of food-delivery work in Australia. Work, Employment and
Society, 34(3), 388–406. https://doi.org/10.1177/0950017019836911
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. Management Information Systems
Quarterly, 27, 425–478. https://doi.org/10.2307/30036540
Wilson, H. J., Daugherty, P. R., & Morini-Bianzino, N. (2017). The jobs that artificial intelligence will create. MIT Sloan Management Review, 58(4), 14–16.
Yang, T., Linder, J., & Bolchini, D. (2012). DEEP: Design-oriented evaluation of perceived usability. International Journal of Human Computer Interaction, 28(5),
308–346. https://doi.org/10.1080/10447318.2011.586320
Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12,
1100–1122. https://doi.org/10.1177/1745691617693393
Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403–414. https://
doi.org/10.1002/bdm.2118
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2018). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy &
Technology, 32(4), 661–683. https://doi.org/10.1007/s13347-018-0330-6
