Keywords: Human resource management; Algorithms; Algorithmic management; Value creation; Job autonomy; Duality

Abstract
This study proposes the 'duality of algorithmic management' as a conceptual lens to unravel the complex relationship between human resource management (HRM) algorithms, job autonomy and the value to workers who are subject to algorithmic management. Against tendencies to present algorithmic management as having predetermined, undesired consequences (e.g. restriction of job autonomy, poor financial compensation and deteriorating working conditions), our 'duality of algorithmic management' perspective offers two amendments to the dominant thinking on HRM algorithms and their outcomes to workers. First, we showcase how algorithmic management simultaneously restrains and enables autonomy and value to workers – with the latter referring to both use (i.e. non-monetary benefits) and exchange value (i.e. monetary benefits) that workers derive from working (under algorithmic management). In doing so, we make the case that the desired consequences of HRM algorithms to workers co-exist alongside the undesired consequences that the literature has mostly reported on. Second, we argue that algorithmic management is shaped by, as much as it shapes, the autonomy and value to workers. We do so by highlighting the 'recursivity' of algorithmic management that occurs when software designers and/or self-learning algorithms reinforce or limit worker acts for (re)gaining job autonomy and/or creating value out of HRM algorithms. We conclude this paper with the presentation of avenues for future research into the duality of algorithmic management, which sets the stage for a future line of inquiry into the complex interrelationships among HRM algorithms, job autonomy and value.
1. Introduction
Human resource management (HRM) activities are increasingly executed by software algorithms, that is, a set of computer-
programmed steps to automatically accomplish a task by transforming data into output. Inputs in the form of data coming from workers (e.g. via their social media accounts, smartphones or sociometric badges) (Garcia-Arroyo & Osca, 2019; Strohmeier, 2018) allow organizations to operate software algorithms for generating HRM-related outputs. These algorithm-enabled HRM outputs
are mostly related to (automated) decision making in areas such as staffing (e.g. resume screening and text-mining), training (e.g.
prediction of skill gaps), compensation (e.g. online job ranking and calibration), appraisal (e.g. processing data on worker performance) and workforce planning (e.g. assigning workers to shifts) (Cheng & Hackett, 2021; Kellogg, Valentine, & Christin, 2020; Leicht-
Deobald et al., 2019; Newlands, 2021; Strohmeier & Piazza, 2015). It is the collection of these diverse algorithm-enabled HRM activities that has been broadly referred to as 'algorithmic management' (Duggan, Sherman, Carbery, & McDonnell, 2020; Lee, Kusbit,
Metsky, & Dabbish, 2015; Meijerink, Boons, Keegan, & Marler, 2021; Möhlmann & Zalmanson, 2017).
Since HRM algorithms augment and/or automate decision-making about workers, academics have begun to examine how workers
are impacted by algorithmic management. This strand of research has made several conceptual contributions, two of which are mostly
important for, and extended by, our current study. First, researchers showed that algorithmic management limits job autonomy and
value to workers (Gandini, 2019; Kellogg et al., 2020; Newlands, 2021; Veen, Barratt, & Goods, 2019; Zuboff, 2019). Here, we refer to job autonomy (hereafter: autonomy) as the freedom of workers to exercise control over aspects of their work (Langfred, 2007). Value to
workers refers to the monetary (or exchange value; e.g. income) and non-monetary benefits (or use value; e.g. personal growth,
identity and accomplishment) that employees derive from their work (Maatman, Bondarouk, & Looise, 2010; Meijerink & Bondarouk,
Scholars argue that algorithmic management reduces autonomy and value to workers, among other reasons, through automating wage theft (Van Doorn, 2019), creating information asymmetries that curb workers' leeway to make optimal (economic)
decisions for themselves (Rosenblat, 2018; Shapiro, 2018), decreasing human sensemaking where algorithms crowd out human
freedom (Leicht-Deobald et al., 2019), and disciplining without room for personal growth and development (Kellogg et al., 2020).
While studies into these downsides of algorithmic management are important, they reinforce negative deterministic assumptions about
algorithmic management as having predetermined, undesired consequences to workers. We argue that HR management by algorithms
is more complex than reinforcing negative outcomes for workers only. Instead, HRM algorithms can simultaneously offer value to, and
foster autonomy for, workers. There is a growing body of knowledge to support this non-deterministic view on HRM algorithms. For
instance, software algorithms embed organizational resources such as data, rules, and procedures that limit worker autonomy
(Orlikowski & Scott, 2015; Strohmeier, 2018) while simultaneously offering workers the freedom to create value for themselves out of
HRM activities (Meijerink & Bondarouk, 2018). Moreover, although HRM algorithms limit worker freedom, research shows that not all
activities performed by workers can be monitored by algorithmic management systems such that selected worker behaviors remain at
the discretion of workers (Gal, Jensen, & Stein, 2020; Newlands, 2021; Wood, Graham, Lehdonvirta, & Hjorth, 2019). This implies that
HRM activities, when put into the hands of software algorithms, not only restrain/limit, but simultaneously enable/offer autonomy and
value to workers.
Second, researchers have observed how workers attempt to regain autonomy and value by offsetting algorithm-enabled control
Notes to Fig. 1:
a) HRM algorithms are embedded with rules and resources such as worldviews, meanings, assumptions, power relationships, procedures, norms, and values. These so-called structural properties of algorithmic management manifest as algorithmic input (i.e. machine-readable data about worker attributes and behaviors), algorithmic processes (e.g. software code for automated data processing) and algorithmic output (e.g. information, statistics or predictions that humans input in decision-making processes, as well as automated decision-making and -execution).
b) The structural properties of algorithmic management are simultaneously restraining and enabling to workers. Specifically, HRM algorithms enable and restrain autonomy and value to workers because of the duality in algorithmic input, processes and outcomes.
c) Worker acts such as value co-creation and algoactivism for (re)gaining autonomy may trigger human managers/software designers to (re)design HRM algorithms. Such (re)design attempts may serve to further support workers in gaining autonomy and/or creating value, or restrain workers in doing so. Propelled by artificial intelligence and self-learning, HRM algorithms can also change when inputted with worker data that reflect worker acts to (re)gain autonomy and/or (co-)create value with selected stakeholders.
d) The structural properties (i.e. rules and resources embedded in HRM algorithms) of HRM algorithms are sustained or changed, depending on whether software designers and self-learning processes reinforce or go against worker acts to gain autonomy or (co-)create value.
through so-called ‘algoactivism’ (Kellogg et al., 2020). This involves novel tactics of resistance such as ‘data obfuscation’ to sabotage
software algorithms by feeding them with misleading data (Newlands, 2021), or scripts that workers deploy to monitor their online
workplaces (Irani & Silberman, 2013). Ultimately, these tactics are believed to prevent employers from capturing excessive amounts of
(monetary) value and stripping away autonomy from workers subject to algorithmic management (Gandini, 2019; Kellogg et al., 2020;
Shapiro, 2018; Veen et al., 2019). We argue that these insights open the way for future research to study the ill-explored recursive implications for software algorithms at work. Here, the notion of 'recursivity' (Orlikowski, 1992) reflects a cyclical process in which algorithmic management is shaped and influenced by, as much as it shapes and influences, the autonomy and value to workers (see
Fig. 1). There are empirical observations that support such an assumption about the recursive nature of algorithmic management. First,
HRM algorithms are fed with data about workers' behaviors (Garcia-Arroyo & Osca, 2019; Strohmeier, 2018), including their responses to algorithmic management, thereby shaping the functioning of software algorithms at work. Second, software developers
may decide to alter software algorithms to counter the resistance to algorithmic management by workers. Provided that workers'
algoactivism serves to (re)gain autonomy and value (Kellogg et al., 2020), we argue that algorithmic management does not unilaterally
(pre-)determine autonomy and value to workers. Instead, when engaging in algoactivism for dealing with the autonomy- and value-
related consequences of algorithmic management, workers shape the nature of algorithmic management. Put differently, algorithmic
management is both the driver and the outcome of autonomy and value to workers. To illustrate this, we develop a conceptual lens that
allows future studies to go beyond deterministic views on algorithmic management.
Specifically, we aim to develop a conceptual lens that allows us to see autonomy and value as being limited and fostered by algorithmic management, while simultaneously shaping (or: having recursive implications for) the working of algorithmic management.
To do so, we draw on the concept of duality of technology, which holds that technology is the product of human action, while at the same time enabling and restraining that action (Leonardi, Nardi, & Kallinikos, 2012; Orlikowski, 1992). It originated as a response to the
call to go beyond deterministic thinking on the consequences of technology (Leonardi et al., 2012; Orlikowski, 1992) and therefore fits
our aim to conceptually show that HRM algorithms are the product of autonomy and value to workers, while at the same time fostering and limiting that autonomy and value. In line with this, we propose the concept of duality of algorithmic management, which holds that
algorithmic management simultaneously enables and restrains autonomy and value to workers, which reciprocally shape algorithmic
management (see Fig. 1). On the way toward our goal, we offer two contributions to the literature. First, we provide an overview of the
use of algorithms in HRM and offer a much-needed conceptual lens that allows us to see how worker outcomes are recursively related to,
rather than (pre-)determined by, algorithmic management. Second, we formulate questions for future research into the duality of
algorithmic management, thereby setting a research agenda on the complex and non-deterministic interplay between HRM algorithms,
autonomy and value to workers.
This paper is structured as follows. We start with defining software algorithms at work and review the literature on algorithmic
management to highlight the controlling potential of software algorithms and workers' resistance to algorithmic control. This is
followed by an outline of the notion of the duality of algorithmic management. We finalize with the presentation of a duality-inspired
research agenda on how algorithmic management enables and restrains autonomy and value to workers that reciprocally shape
algorithmic management.
In the literature, algorithmic management has been defined as a system of control that relies on machine-readable data and
software algorithms that support and/or automate managerial decision-making about work (Duggan et al., 2020; Lee et al., 2015;
Meijerink et al., 2021; Möhlmann & Zalmanson, 2017). This definition highlights three important features of algorithmic management:
(1) machine-readable data as input, (2) automated processing of data, and (3) decision-making and -execution as output:
data into output without (much) human involvement. In fact, software algorithms can be deployed to perform activities that computers
can do more effortlessly than human managers, such as the cleaning, extracting, sorting, and filtering of data generated by workers
(Garcia-Arroyo & Osca, 2019; Strohmeier & Piazza, 2015). Although software algorithms process data in an automated manner,
algorithmic management nevertheless requires human involvement. For instance, the data that propel algorithmic management are
generated by workers while performing (work-related) activities, like executing a job-related task, interacting with colleagues or
taking a break. Therefore, depending on how workers behave (at work), HRM algorithms may be fed with data that are different in kind. Besides workers generating the data that are inputted into software algorithms, human involvement also extends to programmers and/or human managers deciding what type of software code to operate, which parameters to include and what weight to assign to each parameter.
Such decisions may not be disclosed to those subject to software algorithms, rendering algorithmic management opaque to workers
(Burrell, 2016; Faraj, Pachidi, & Sayegh, 2018; Rosenblat & Stark, 2016). Especially online labor platforms like Uber and Deliveroo are
known for strategically withholding information about their HRM algorithms in an attempt to control their workforces (Burrell, 2016;
Pasquale, 2015; Rosenblat & Stark, 2016; Veen et al., 2019). Moreover, the opaqueness of software algorithms may increase further
when they are built using artificial intelligence, which makes them ‘self-learning’ in that they automatically improve (e.g. by adjusting
the weights of parameters) through experience (i.e. the data coming from workers). As noted by Burrell (2016), when operating on
(big) data sets that include heterogeneous properties of data, self-learning algorithms become so complex that they (even) become black
boxes to their designers. In a response, academics and practitioners alike call for the use of explainable artificial intelligence (Adadi &
Berrada, 2018; Arrieta et al., 2020), which refers to the “details and reasons a [algorithmic] model gives to make its functioning clear
or easy to understand” (Arrieta et al., 2020: 85). In the case of algorithmic management, explainable HRM algorithms can be achieved
by operating so-called 'white-box' algorithms that reveal the algorithm's structure and by increasing the technical literacy of workers
(Burrell, 2016). However, given that (1) workers differ in their technical literacy and thus abilities to understand the workings of HRM
algorithms (Arrieta et al., 2020), (2) employers may be reluctant to operate white-box algorithms (Burrell, 2016; Rosenblat & Stark,
2016), and/or (3) software algorithms may operate on complex, large-scale data sets (Adadi & Berrada, 2018), we expect that
algorithmic management remains, to varying degrees, opaque to workers (Pasquale, 2015).
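To make the contrast with opaque algorithms concrete, the sketch below illustrates what a deliberately transparent, 'white-box' scoring rule could look like: every parameter, weight and threshold is explicit and could be disclosed to workers. It is a minimal illustration in Python; the features, weights and threshold are hypothetical assumptions rather than the design of any system cited above.

```python
# Minimal sketch of a 'white-box' HRM scoring algorithm: every parameter,
# weight and threshold is explicit and could be disclosed to workers.
# The feature names and numbers below are illustrative assumptions only.

WEIGHTS = {
    "customer_rating": 0.5,      # average rating on a 0-5 scale
    "acceptance_rate": 0.3,      # share of offered tasks accepted (0-1)
    "on_time_rate": 0.2,         # share of tasks completed on time (0-1)
}
PRIORITY_THRESHOLD = 0.75        # workers above this score get priority access


def worker_score(record):
    """Compute a transparent, weighted score from worker data."""
    normalized = {
        "customer_rating": record["customer_rating"] / 5.0,
        "acceptance_rate": record["acceptance_rate"],
        "on_time_rate": record["on_time_rate"],
    }
    return sum(WEIGHTS[k] * normalized[k] for k in WEIGHTS)


def explain(record):
    """Produce a worker-readable explanation of how the score was computed."""
    parts = [f"{k}: weight {WEIGHTS[k]:.1f}" for k in WEIGHTS]
    score = worker_score(record)
    return f"score = {score:.2f} (threshold {PRIORITY_THRESHOLD}); " + ", ".join(parts)


if __name__ == "__main__":
    example = {"customer_rating": 4.6, "acceptance_rate": 0.9, "on_time_rate": 0.95}
    print(explain(example))
```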
Table 1
Examples of algorithmic-enabled HRM activities.

HRM activity | Descriptive algorithms | Predictive algorithms | Prescriptive algorithms | Example studies
Selection | Assessment of job candidates' personality traits on the basis of their social media profiles | Predicting job candidates' potential and performance | Automated resume screening; automated suggestions on which job candidates to invite for a job interview | Cheng and Hackett (2021); Kellogg et al. (2020); Mallafi and Widyantoro (2016); Stoughton, Thompson, and Meade (2013); Strohmeier and Piazza (2015)
Training | Automated web-search of available training programs; evaluation of training effectiveness | Predicting the need for upskilling; prediction of workforce competence gaps | Automated instructions to poor performing workers | Cheng and Hackett (2021); Kellogg et al. (2020); Ramamurthy et al. (2015)
Appraisal | Sentiment analysis; aggregation and computing of performance scores | Predicting when projects go off track; predicting future worker performance | Alerting managers to take corrective actions; automated sanctioning (e.g. deactivation) of poor performing workers | Jarrahi and Sutherland (2019); Kinder et al. (2019); Rosenblat (2018); Schweyer (2018); Strohmeier and Piazza (2015); Veen et al. (2019)
Compensation and benefits | Automated salary surveying; job ranking | Predicting desired compensation level | Surge pricing; automated variable pay; priority access to work assignments | Cheng and Hackett (2021); Griesbach et al. (2019); Meijerink et al. (2019); Veen et al. (2019)
Workforce planning | Construction of competency profiles; employee inventory | Turnover prediction; predicting future labor demand | Automated staff rostering; automated task allocation | Griesbach et al. (2019); Meijerink et al. (2019); Strohmeier and Piazza (2015)
Second, predictive algorithms are used for forecasting purposes by providing a score that represents the likelihood that an event or
outcome will occur (Leicht-Deobald et al., 2019). This can involve the use of advanced regression techniques, machine-learning algorithms or data-mining approaches (Davenport, 2013) that assist decision makers in areas such as recruitment and selection (e.g.
predicting future potential of a job candidate), workforce planning (e.g. predicting turnover) or performance management (e.g.
predicting future performance of a worker). Although such predictions may not be optimal, they can nevertheless assist (HR) managers
in choosing a selected course of action (Cheng & Hackett, 2021).
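As a stylized illustration of such a predictive HRM algorithm, the sketch below fits a simple logistic regression that outputs a turnover-risk score which a decision maker could use as one input among others. The features and the toy data are hypothetical and serve illustration only.

```python
# Illustrative sketch of a predictive HRM algorithm: a logistic regression
# that scores the likelihood of worker turnover. Features and data are
# hypothetical toy values, not taken from any study cited in this article.
from sklearn.linear_model import LogisticRegression

# Toy training data: [tenure_years, overtime_hours_per_week, engagement_score]
X = [
    [0.5, 12, 2.1],
    [4.0,  2, 4.5],
    [1.0, 10, 2.8],
    [6.0,  1, 4.9],
    [2.0,  8, 3.0],
    [5.0,  3, 4.2],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = left the organization within a year, 0 = stayed

model = LogisticRegression().fit(X, y)

# The score supports, rather than replaces, a human decision: an HR manager
# might review workers whose predicted turnover risk exceeds some cut-off.
new_worker = [[1.5, 9, 2.5]]
risk = model.predict_proba(new_worker)[0][1]
print(f"Predicted turnover risk: {risk:.2f}")
```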
Finally, prescriptive algorithms extend predictive algorithms by including simulations and scenario-based techniques to propose
what should be done in the light of possible scenarios (Davenport, 2013; Leicht-Deobald et al., 2019). This serves two functions: (1)
decision support with the algorithm proposing which scenario to follow and a human manager making the final call (e.g. a software
algorithm that filters out job candidates on the basis of their resume and in turn, a human manager deciding on which job candidates to
invite for a job interview), or (2) decision automation that involves a computer taking a decision without human intervention (Leicht-
Deobald et al., 2019; Meijerink et al., 2021; Strohmeier & Piazza, 2015). The use of algorithms for automated decision-making is
particularly prominent among online labor platforms like Uber, Deliveroo or Amazon Mechanical Turk that matchmake between
freelance workers and organizations (Duggan et al., 2020; Lee et al., 2015; Meijerink & Keegan, 2019; Möhlmann & Zalmanson, 2017;
Newlands, 2021; Rosenblat & Stark, 2016; Veen et al., 2019). Because they charge a fee per match made, online labor platforms have an interest in scaling the number of freelance workers on their platform. HRM algorithms are helpful here to automate decision making, limit transaction costs and control freelance workers at scale (Gandini, 2019; Rosenblat, 2018). In line with this,
online labor platforms operate a wide range of predictive HRM algorithms to automate decisions in areas such as workforce planning
(e.g. a software algorithm that automatically assigns activities to workers), selection (e.g. automated admission to the online platform
on the basis of selected worker characteristics), compensation (e.g. surge pricing that determines the level of variable pay) and performance appraisal (e.g. dismissal/deactivation of poor performing workers). As such, when discussing the notion of algorithmic
management in this study, we will often draw on examples coming from studies into online labor platforms and their use of HRM
algorithms.
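The distinction between decision support and decision automation described above can be sketched as follows; the screening criterion and candidate data are hypothetical and only serve to show where the human is, or is not, kept in the loop.

```python
# Illustrative sketch contrasting decision support (a human makes the final
# call) with decision automation (the algorithm decides and executes).
# The screening criterion and candidate data are hypothetical.

def screen(candidates, min_years=2):
    """Prescriptive step: filter resumes against a preset criterion."""
    return [c for c in candidates if c["years_experience"] >= min_years]


def decision_support(candidates):
    """Return a shortlist; a human manager decides whom to actually invite."""
    return screen(candidates)


def decision_automation(candidates, invite):
    """Invite every shortlisted candidate without human intervention."""
    for c in screen(candidates):
        invite(c)


if __name__ == "__main__":
    pool = [
        {"name": "A", "years_experience": 1},
        {"name": "B", "years_experience": 4},
    ]
    print("Shortlist for the manager:", decision_support(pool))
    decision_automation(pool, invite=lambda c: print("Automatically invited:", c["name"]))
```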
In existing studies into software algorithms at work, algorithmic management is predominantly conceptualized as a means for
managers to control workers, thereby limiting worker autonomy and value (Duggan et al., 2020; Gandini, 2019; Kellogg et al., 2020;
Leicht-Deobald et al., 2019; Möhlmann & Zalmanson, 2017; Shapiro, 2018; Veen et al., 2019; Zuboff, 2019). This is in line with labor
process theory which predicts that managers seek innovative ways to establish control over workers to capture the (monetary) value
created by workers' labor (Gandini, 2019; Smith, 2015). Provided that the efforts of workers enhance monetary value when turning
inputs possessed by the employer (e.g. production facilities, raw materials, know-how) into products and services, it is in the interest of
employers to control labor processes. Accordingly, labor process theorists have examined how employers use control mechanisms that
involve directing, surveilling and disciplining workers to capture (monetary) value from workers (Barley & Kunda, 1992; Edwards,
1979; Thompson & Van den Broek, 2010). Along similar lines, HRM researchers have shown how high-control HRM systems allow employers to improve worker productivity through HRM practices such as well-defined jobs, training that emphasizes compliance with rules and procedures, performance appraisal against preset behaviors, and contingent/hourly pay (Arthur, 1994; Hauff, Alewell, &
Hansen, 2014; Lepak & Snell, 2002; Walton, 1985). These control-enhancing HRM activities come at the expense of job autonomy in
terms of the workers' freedom to exercise control over aspects of their work such as its content, location, timing, remuneration and/or
performance standard (Langfred, 2007). Research shows that this limits workers' non-monetary value derived from work in terms of
building a desired (work) identity, personal growth, being satisfied at work or experiencing a sense of accomplishment (Goods, Veen, &
Barratt, 2019; Wood et al., 2019). Labor process theorists predict that this ultimately creates an antagonistic relationship between workers and employers/managers, with the former seeking to resist managerial control, causing the latter to seek new ways to exercise
control over workers (Smith, 2015; Thompson & Van den Broek, 2010). In the literature, algorithmic management is regarded as an
‘innovative’ way for employers to reinforce control over workers (Gandini, 2019; Kellogg et al., 2020; Veen et al., 2019) by means of
directing, evaluating and disciplining workers (Edwards, 1979; Kellogg et al., 2020):
Algorithmic direction entails employers using algorithms to offer only certain information to workers and/or making suggestions
that make workers decide in line with the employers' interests (Kellogg et al., 2020; Rosenblat, 2018; Veen et al., 2019). It is the opaque manner in which algorithms turn input into output that affords this type of control, which manifests in algorithm-enabled HRM activities
such as job design and selection. For instance, the ride-hailing platform Uber relies on algorithms to automatically allocate tasks to
drivers. Drivers, however, are not presented with a complete overview of all available ride requests. This limits the degree of choice to workers through the creation of information asymmetries (Leicht-Deobald et al., 2019; Rosenblat & Stark, 2016). This reduces (monetary) value to Uber drivers – who are freelancers and therefore paid per ride – as they cannot select those rides that are
most beneficial to them. Moreover, research into the meal-delivery platform Deliveroo shows that the addresses of consumers remain algorithmically concealed until a meal deliverer has picked up the order from the restaurant, thereby preventing deliverers from declining orders that are less (financially) lucrative to them (Meijerink, Keegan, & Bondarouk, 2021; Veen et al., 2019).
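A minimal sketch of algorithmic direction through information asymmetry is given below: the dispatcher offers one task at a time and withholds the field a worker would need to judge how lucrative it is (here, the drop-off location) until the worker is committed. The data structure and field names are hypothetical and do not describe any platform's actual implementation.

```python
# Illustrative sketch of algorithmic direction via information asymmetry:
# the worker sees only a partial description of the offered task and learns
# the destination (and thus its attractiveness) only after accepting.
# Field names and values are hypothetical.
from dataclasses import dataclass


@dataclass
class Order:
    order_id: int
    pickup: str
    dropoff: str       # withheld from the worker until acceptance/pickup
    payout: float


def offer_to_worker(order):
    """What the algorithm discloses before acceptance: no drop-off address."""
    return {"order_id": order.order_id, "pickup": order.pickup, "payout": order.payout}


def reveal_after_acceptance(order):
    """Full information is disclosed only once the worker is committed."""
    return {"order_id": order.order_id, "pickup": order.pickup,
            "dropoff": order.dropoff, "payout": order.payout}


if __name__ == "__main__":
    order = Order(order_id=17, pickup="Restaurant X", dropoff="Far suburb", payout=4.50)
    print("Offered:", offer_to_worker(order))
    print("Revealed after acceptance:", reveal_after_acceptance(order))
```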
Algorithmic evaluation involves both the real-life, remote surveillance of worker behavior and the evaluation of worker performance
by means of ratings and rankings (Gandini, 2019; Kellogg et al., 2020; Newlands, 2021; Strohmeier & Piazza, 2015). This is afforded by
the variety and veracity of the big data that are inputted into software algorithms that semi-automate HRM activities such as performance appraisal, compensation and selection. For instance, Schweyer (2018) shows how the consulting firm Klick Health operates a
machine-learning tool to calculate the average time that workers take to “complete a variety of tasks and alerts leaders when projects
appear to be going off track” (p. 7). Software algorithms at work can also be used to perform text mining analyses on employees'
opinions and sentiments in web-based documents, blogs and social network sites (Gegenhuber, Ellmer, & Schüßler, 2021; Strohmeier &
Piazza, 2015). This limits workers' autonomy through identity and discourse control by managers (Alvesson & Kärreman, 2007). In line
with this, online platform firms like Upwork, Fiverr and Amazon Mechanical Turk rely on software algorithms to aggregate customer
ratings to identify poor performing workers and take corrective measures if needed (Jarrahi & Sutherland, 2019).
Algorithmic disciplining entails the use of software algorithms for rewarding and/or punishing workers (Kellogg et al., 2020) thereby
enabling HRM activities such as performance appraisal and compensation & benefits. As an example, online platform firms automatically dismiss workers when their performance ratings fall below a certain threshold (Rosenblat & Stark, 2016). This involves workers who repeatedly receive poor customer ratings being either deactivated (i.e. losing access to the online workplace) or
receiving fewer jobs (Jarrahi & Sutherland, 2019; Rosenblat & Stark, 2016; Shapiro, 2018). Workers that are subject to algorithmic
disciplining often are easily replaceable by others and work on short, fixed-term contracts, meaning that the use of software algorithms
at work creates precarious working conditions for them (Gandini, 2019; Kellogg et al., 2020; Wood et al., 2019). On the other hand,
workers that are willing to comply with preset performance criteria may be algorithmically rewarded with more work and/or higher
pay – often, however, at the expense of their autonomy in terms of the freedom to decide how and when to do their work (Lehdonvirta, Kässi, Hjorth,
Barnard, & Graham, 2019; Meijerink et al., 2021; Möhlmann & Zalmanson, 2017; Wood et al., 2019).
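In their simplest form, algorithmic evaluation and disciplining can be sketched as the aggregation of customer ratings into a score and the automated triggering of sanctions or rewards against preset thresholds. The thresholds and actions below are hypothetical illustrations.

```python
# Illustrative sketch of algorithmic evaluation and disciplining: ratings are
# aggregated, and preset thresholds trigger automated sanctioning or reward.
# The thresholds and actions are hypothetical.
from statistics import mean

DEACTIVATION_THRESHOLD = 4.2   # below this average rating: lose platform access
PRIORITY_THRESHOLD = 4.8       # at or above this: priority access to assignments


def evaluate(ratings):
    """Evaluation: aggregate customer ratings into a single performance score."""
    return mean(ratings)


def discipline(worker_id, ratings):
    """Disciplining: reward or sanction automatically, based on the score."""
    score = evaluate(ratings)
    if score < DEACTIVATION_THRESHOLD:
        return f"{worker_id}: deactivated (score {score:.2f})"
    if score >= PRIORITY_THRESHOLD:
        return f"{worker_id}: priority access to new tasks (score {score:.2f})"
    return f"{worker_id}: no action (score {score:.2f})"


if __name__ == "__main__":
    print(discipline("worker-1", [5, 4, 4, 3, 4]))
    print(discipline("worker-2", [5, 5, 5, 5, 4]))
```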
The overview above shows that a substantial body of research into algorithmic management demonstrates that a variety of HRM activities are augmented and/or automated by means of software algorithms. Although HRM algorithms differ by being either
descriptive, predictive or prescriptive in nature, research evidence is converging around the idea that algorithmic management reduces
workers' autonomy and (non-)monetary value, among other reasons, through the creation of information asymmetries (Rosenblat,
2018; Shapiro, 2018), decreased human sensemaking (Leicht-Deobald et al., 2019) and automated disciplining (Kellogg et al., 2020).
In parallel to the stream of studies on algorithmic control, there is a growing body of literature that shows how workers seek to
regain autonomy through so-called algoactivism. We borrow the definition from Kellogg et al. (2020) and view algoactivism as the
tactics employed by workers to resist the control that HRM algorithms afford (Kellogg et al., 2020). Research shows that workers seek
to regain autonomy under algorithmic management regimes in at least two ways: through non-cooperation and data obfuscation (Irani
& Silberman, 2013; Kellogg et al., 2020; Lee et al., 2015; Lehdonvirta, 2018; Newlands, 2021). Non-cooperation entails the ignorance of
algorithmic direction and recommendation (Kellogg et al., 2020). For instance, workers on the Amazon Mechnical Turk platform
crowdsourced an application called Turkopticon that enables workers to report and avoid malicious clients. In so doing, they can avoid
contracting with clients that are algorithmically suggested to them by Amazon Mechanical Turk (Irani & Silberman, 2013). Moreover,
research has shown that taxi drivers on the Uber platform reported not to be influenced by Uber's surge pricing mechanism that
algorithmically directs drivers by offering higher pay in areas where customer demand increases. Rather than following algorithm-based
recommendations, the Uber drivers relied on their own knowledge in deciding which city districts to do business in, thereby limiting
the reduction in their autonomy (Lee et al., 2015; Rosenblat, 2018). Other Uber drivers, however, engage in data obfuscation by collectively gaming the surge price algorithm, calling on others in online forums to log off from the Uber app (Möhlmann & Zalmanson, 2017). As another example of data obfuscation, Lehdonvirta (2018) showed that workers on platforms like MobileWorks and CloudFactory operate scripts that monitor the online marketplace and alert workers when suitable tasks become available, thereby
seeking to gain the upper hand over the platforms' algorithmic control mechanisms (Kellogg et al., 2020). Along similar lines, workers
seek to avoid the control exercised by automated screenshots taken of their computer screens, by installing a second monitor (Wood
et al., 2019). In conclusion, while one stream of research has shown how algorithmic management limits worker autonomy and value,
a second research stream showed that this can be negated by workers through algoactivism for resisting algorithmic control, (re)
gaining autonomy and creating value out of HRM algorithms.
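The kind of worker-built monitoring script described by Lehdonvirta (2018) can be sketched as a simple polling loop that alerts the worker when a task matching their own criteria appears; the fetch_open_tasks() function and the task fields below are hypothetical placeholders for whatever interface a particular platform exposes.

```python
# Illustrative sketch of a worker-built monitoring script: poll the online
# marketplace and alert the worker when a suitable task appears. The
# fetch_open_tasks() function and its fields are hypothetical placeholders
# for whatever a particular platform actually exposes.
import time


def fetch_open_tasks():
    """Placeholder for a call to the platform's task listing."""
    return [{"task_id": 101, "pay": 12.0, "category": "transcription"}]


def is_suitable(task, min_pay, categories):
    """The worker's own criteria, not the platform's recommendations."""
    return task["pay"] >= min_pay and task["category"] in categories


def monitor(min_pay, categories, interval_seconds=60):
    """Poll the marketplace and print an alert for every suitable task."""
    while True:
        for task in fetch_open_tasks():
            if is_suitable(task, min_pay, categories):
                print(f"Alert: suitable task {task['task_id']} paying {task['pay']}")
        time.sleep(interval_seconds)


if __name__ == "__main__":
    # A single polling pass for demonstration (the real script would loop).
    for task in fetch_open_tasks():
        if is_suitable(task, min_pay=10.0, categories={"transcription"}):
            print("Alert:", task)
```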
2.4. Moving our understanding beyond notions of algorithmic control and resistance
Based on the two streams of studies into algorithmic control and algoactivism, we will build our understanding of HRM algorithms
further, in at least two ways. First, if we know that workers may resist algorithmic-enabled control and regain autonomy through
algoactivism, we still need to investigate the implications this may have for the properties of algorithmic management. After all, HRM
algorithms are fed with data that are generated through workers' behaviors and actions at work (Garcia-Arroyo & Osca, 2019) –
including their algoactivist acts. Accordingly, algoactivism may lead to changes in algorithmic management when workers feed HRM
algorithms with obfuscated data to offset algorithmic control, which may change the weight and relevance of algorithmic parameters.
Similarly, managers and software developers can decide to redesign software algorithms to offset resistance tactics of workers to regain
autonomy under algorithmic management and create value out of HRM algorithms. Provided that algoactivism by workers serves to
(re)gain autonomy and value from work, we expect that autonomy and value, besides being shaped by algorithmic management, equally shape it. Second, if we know that algorithmic management can restrain worker autonomy and value, it is now time to put
emphasis on how software algorithms can simultaneously foster autonomy and value to workers.
Given the above, we argue that a one-sided research perspective – i.e. algorithmic management as a threat to worker autonomy or
workers' resistance to it – that is predominantly informed by labor process theory is meaningful, yet does not echo the real-life complexity of algorithmic management. In fact, overlooking this complexity creates the risk of ending up with a deterministic, instrumental way of conducting empirical studies into the relationship between algorithmic management, autonomy and value. That
is, current perspectives on algorithmic management imply that HRM algorithms predetermine autonomy and value to workers, who can do no more than either accept or resist the algorithmic control they are subject to and the undesired consequences that HRM algorithms bring. We argue that algorithmic management is more complex, and that workers can engage in alternative courses of action
beyond algoactivism to increase autonomy under, and create value out of, HRM algorithms. To ensure future research can unpack this
complexity and avoid reinforcing deterministic assumptions about algorithmic management as having predetermined, undesired
consequences to workers, we propose a conceptual lens that puts the duality of algorithmic management at center stage. The dualistic
perspective allows us to see that algorithmic management limits and fosters autonomy/value, while simultaneously being shaped by the
actions of workers to increase autonomy and value at work. In the remainder of this article we discuss the notion of duality of HRM
algorithms, translate this to the duality of algorithmic management concept, and conclude with a duality-informed research agenda on
the recursive relationships among algorithmic management, autonomy and value.
We take the notion of duality from the scholarly stream of structuration thinking (after Giddens (1984)). Particularly, we were
inspired by the notion of duality of technology, which originated as a response to the call to go beyond deterministic thinking on the relationship between technology and human (inter)action (Leonardi et al., 2012; Orlikowski, 1992). In deterministic studies, algorithms are seen as mediating mechanisms between action and organizational structures by providing workers with new opportunities
and capabilities. The notion of duality changes this view. While the algorithm-determinist approach views activities as communications between people in organizations, duality thinking alters this understanding as it considers algorithm usage to be an action on its
own that impacts the constitution of organizational structures. In other words, software algorithms – like any other technology at work
– do not affect the organizational structures through communications; instead, their use is considered to be an action that changes the
organizational structure (Orlikowski, 1992).
Following this scholarly tradition, we view algorithmic management neither as an objective force that determines the actions/responses of workers nor as merely socially produced by them. Accordingly, we borrow and apply to algorithmic management the central premise of the duality of technology: that human actions and their outcomes are enabled and constrained by algorithmic management
(Leonardi et al., 2012; Orlikowski, 1992). Here, the idea is that HRM algorithms embed rules and resources such as worldviews,
assumptions, norms, power relations or conventions (Giddens, 1984). For example, software developers draw on their knowledge, experiences and worldviews when designing technological artefacts, thereby embedding their own meanings in algorithms (Leicht-Deobald et al., 2019). Algorithms can also encompass built-in power relationships because managers and designers make decisions about what
can and cannot be done with algorithms. Also, one can expect that rules are built into algorithms to legitimize selected actions, while
delegitimizing others (like collecting or reporting private information about workers). As such, we propose algorithms at work to be
seen as both enabling and restraining the actions of those engaging with algorithmic management.
The above implies that the duality of algorithmic management involves another notion – algorithms themselves develop gradually
and recursively. Rather than merely enabling or restraining workers' action (obeying or resistance), algorithms themselves become a
target of influence by workers, and therefore, are subject to change. Changes in algorithms – either through design or usage – are seen
to be triggered by the so-called interpretive flexibility of software algorithms (after Orlikowski (1992)). As noted by Pinch and Bijker
(1987), this concerns “the idea that technological artefacts are both culturally constructed and interpreted, that is flexibility is
manifested in how people think of or interpret artefacts as well as how they design them” (p. 40). Interpretive flexibility allows users of
algorithms to be conscious about the meanings, resources and norms/rules that are built into algorithms. Here, the idea is that users are
able to change the properties of algorithms when they are aware of these properties and capable of modifying them. Provided that algorithms are built by humans, workers are provided with selected worldviews, resources and rules which they draw upon in their day-
to-day activities (Leicht-Deobald et al., 2019). Therefore, algorithms are socially constructed by workers through the everyday usage,
making sense of, and/or deciding how to use selected technical features. In so doing, workers (users) reproduce and sustain the rules
and resources embedded into HRM algorithms. It can also occur that users ascribe different meanings to algorithms or use them as a
resource to achieve other outcomes than intended by designers. Over time, this may result in a structural change in algorithms whereby
they lose their connection with individual users (Leonardi et al., 2012; Orlikowski, 1992).
Taken together, the above implies that the concept of algorithms' duality embodies two notions: algorithms will enable and constrain
actions of workers simultaneously; and in the course of usage algorithms will evolve and change their properties.
In line with the two notions outlined above, we propose the duality of algorithmic management to argue that HRM algorithms (1) both
restrain and enable worker autonomy and value, and (2) are simultaneously resultant from workers' acts to uphold autonomy and
create value out of HRM algorithms (see Fig. 1).
4.1. Algorithmic management as restraining and fostering worker autonomy and value
Provided that HRM algorithms are built and deployed on the basis of rules and resources, we argue that the structural features of
HRM algorithms enable and restrain workers' actions to gain autonomy and create value out of HRM algorithms. We see four reasons for this: algorithms (1) are fed with proxy data which are dual in nature, (2) incorporate automated processes which are dual in nature, (3) constitute resources out of which workers create value, and (4) enable value co-creation between workers and other
value for them. As an example, when demand outstrips supply, the meal-delivery platform Uber Eats offers minimum hourly rates (so-
called 'hourly guarantees') of approx. €15 per hour. Workers are algorithmically offered these minimum hourly rates when they perform at least one meal delivery an hour and refrain from declining incoming orders (Veen et al., 2019). Meijerink, Keegan, and
Bondarouk (2019) show that this algorithm-enabled compensation scheme does benefit Uber Eats (i.e. more labor supply), but not the workers (i.e. demand outstrips supply, meaning that workers' income exceeds the minimum hourly rate anyway). However, some
workers were able to identify a bug in the system that allowed them to maximize income at minimal additional effort. Namely, during
the first 50 min of each hour worked, these workers earn an income via a rival platform firm (e.g. Deliveroo). Only during the last 10 min do they switch to the Uber Eats platform. During these 10 min they perform one meal delivery and therefore do not decline any
orders. As such, they are algorithmically awarded the hourly guarantee of €15 by Uber Eats on top of the income generated via the rival
platform (Meijerink et al., 2019). Along similar lines, Wood et al. (2019) show that online freelance workers with high online reputation scores on platforms like Upwork and Fiverr are able to attract many clients. They benefit from these algorithm-computed scores
by sub-contracting some of the work to peers via the same online platforms at a lower hourly rate than charged to their client. Taken
together, these examples show that the output of algorithmic management enables and restrains value to workers, depending on how
they put this output to use.
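The arithmetic behind this tactic can be made explicit with a small worked example. The €15 hourly guarantee is taken from the studies cited above; the assumed earnings on the rival platform during the first 50 minutes are hypothetical.

```python
# Worked example of the hourly-guarantee tactic described above. The €15
# guarantee is taken from the cited studies; the rival-platform earnings
# during the first 50 minutes of the hour are a hypothetical assumption.

HOURLY_GUARANTEE = 15.00        # Uber Eats minimum hourly rate (from the text)
RIVAL_EARNINGS_50_MIN = 11.00   # assumed income on a rival platform (hypothetical)

# Baseline: working the full hour for the guarantee only.
baseline_income = HOURLY_GUARANTEE

# Tactic: earn on the rival platform for 50 minutes, then perform one Uber Eats
# delivery in the last 10 minutes without declining orders, so the guarantee
# is still paid on top of the rival-platform income.
tactic_income = RIVAL_EARNINGS_50_MIN + HOURLY_GUARANTEE

print(f"Full hour on Uber Eats only: EUR {baseline_income:.2f}")
print(f"50 min rival platform + 10 min Uber Eats: EUR {tactic_income:.2f}")
print(f"Extra income per hour: EUR {tactic_income - baseline_income:.2f}")
```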
4.2. The recursive implications of worker autonomy and value for algorithmic management
Besides being shaped by HRM algorithms, autonomy and value equally shape algorithmic management. The latter we refer to as the
recursive implications of autonomy/value for algorithmic management. We propose that worker autonomy and value creation have
recursive implications for HRM algorithms by triggering (1) the purposeful redesign of software algorithms by humans and (2)
automated changes in software algorithms by means of artificial intelligence (see left-hand side of our Fig. 1).
contrary, software engineers (upon the requests of their managers and/or capitalists) may also redesign software algorithms to avoid
unwanted use of HRM algorithms for limiting freedom to workers. Research into algoactivism has shown that workers indeed ignore
algorithmic direction (e.g. recommendations on which work activities to execute) or circumvent algorithmic disciplining to regain
autonomy (Kellogg et al., 2020). These acts may motivate managers to revisit algorithmic control mechanisms. For instance, in an
attempt to avoid so-called disintermediation – i.e. platform users engaging in transactions outside their online marketplace – the
Upwork platform designed an algorithm that automatically searches for worker-client communication that is indicative of disintermediation attempts (Jarrahi & Sutherland, 2019). Moreover, to escape intermediation fees, platform workers and clients move
transactions off-platform, which Upwork seeks to address by algorithmically deactivating accounts of workers who engage in disintermediation (Kinder et al., 2019). Along similar lines, and as noted by Gregory et al. (2021), "Uber tries to prevent fraudulent
behavior such as prearranged trips between riders and drivers that limit open competition by letting its algorithms monitor signs of
fake trips (e.g., requesting, accepting and completing trips on the same device or with the same payment profile, excessive promotional
trips, excessive cancellations) in real time for faster prediction and action recommendations or sanctions to enforce rules more quickly”
(p. 14). Finally, Griesbach, Reich, Elliott-Negri, and Milkman (2019) show that meal delivery workers consistently reject
algorithmically-dispatched orders coming from restaurants with bad reputations. In an attempt to address this situation, meal delivery
platforms change pay/compensation algorithms to induce workers to accept orders by algorithmically increasing delivery fees for
orders coming from restaurants that workers previously did not want to work for (Meijerink et al., 2021). Taken together, this implies
that worker actions to (re)gain autonomy and value have recursive implications for algorithmic management when designers
attempt to suppress (or reinforce/support) these worker actions through HRM algorithm redesign.
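A redesign of this kind can be sketched, in its simplest form, as a message-screening rule that flags worker-client communication containing phrases indicative of moving a transaction off-platform. The phrase list and flagging rule below are hypothetical illustrations, not a description of Upwork's actual system.

```python
# Illustrative sketch of an algorithm redesign aimed at curbing
# disintermediation: scan worker-client messages for phrases that suggest
# taking the transaction off-platform. The phrase list and the action are
# hypothetical and do not describe any platform's actual implementation.
import re

OFF_PLATFORM_SIGNALS = [
    r"pay(ing)? (me )?outside the platform",
    r"let'?s continue (this )?over email",
    r"avoid the (service )?fee",
]


def flags_disintermediation(message):
    """Return True when a message matches any off-platform signal phrase."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in OFF_PLATFORM_SIGNALS)


if __name__ == "__main__":
    messages = [
        "Could you share the project brief here?",
        "Let's continue over email and avoid the service fee.",
    ]
    for msg in messages:
        action = "flag account for review" if flags_disintermediation(msg) else "no action"
        print(f"{msg!r} -> {action}")
```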
In line with our duality of algorithmic management concept, we propose two avenues for future research to uncover the full
complexity associated with the interrelations among HRM algorithms, autonomy and value: one on the enabling and restraining nature
of algorithmic management, the other on the recursive relationship between algorithmic management and worker autonomy/value:
5.1. Future research questions on the restraining and enabling nature of algorithmic management
Although we predict that algorithmic management simultaneously enables and restrains autonomy/value, it is unlikely that HRM
algorithms are enabling and restraining to a similar degree. That is, HRM algorithms that are designed to foster autonomy may still
limit worker freedom, while control-enhancing algorithms may still afford some degree of autonomy to workers. As such, an interesting avenue for future research is to uncover under what conditions algorithmic management tends to be more enabling or more restraining. For instance, does this depend on the type of algorithm put into place? That is, can we expect differences in worker autonomy depending on whether organizations deploy descriptive/predictive algorithms where humans rely on algorithmic output in
decision making processes but still make the final call versus prescriptive algorithms that fully automate decision making? Moreover,
research shows that the type of algorithm-enabled (HRM) activity and complexity have implications for worker outcomes (Langer,
König, & Papathanasiou, 2019; Nagtegaal, 2021). For instance, algorithmic decision making is shown to have negative implications for
workers' justice perceptions (Newman, Fast, & Harmon, 2020) – a situation which is however negated in cases (e.g. automated
interview training) when little is at stake for the worker (Langer et al., 2019) or when algorithmic HR-decision making involves little
complexity (Nagtegaal, 2021). In fact, Nagtegaal (2021) shows that procedural justice increases when algorithms are used for automating HRM activities that are low in complexity (and vice versa). In such cases, the use of white-box HRM algorithms that are
transparent and reveal the algorithm's structure to workers (Burrell, 2016), may be desirable and effective as non-complex algorithmic
decision making may be most comprehensible to workers. On the contrary, complex and high-stake decision making by algorithms
should be black-boxed, or even better, be placed in the hands of a human decision maker (Langer et al., 2019; Nagtegaal, 2021). This
presents interesting questions for future research into the implications of algorithmic management for autonomy and value, such as: does
the enablement versus restriction of autonomy depend on the level of complexity of algorithm-based HR decision making? Do HRM
algorithms limit value to workers in high-stake scenarios and increase value when algorithmic decision making involves low stakes? Is
worker autonomy/value enabled or restrained depending on the transparency of algorithmic management?
Another line of inquiry is to examine how the enabling and restraining potential of algorithmic management is affected by the
institutional context in which they are applied. Provided that algorithms are embedded with rules and resources, we can expect them
to reflect the institutional context in which they are applied (Orlikowski, 1992). While some legal jurisdictions are more stringent on
the collection of worker data (e.g. the GDPR in Europe) and on limiting worker autonomy, others likely leave more room for corporations to
control workers by means of algorithmic management. Here, future studies can apply the institutional logics concept to describe how
ideal-type sets of norms, values and belief systems impact algorithmic management (Frenken, Vaskelainen, Fünfschilling, & Piscicelli, 2020; Meijerink et al., 2021). This would allow studies that examine whether the restraining or enabling features of HRM algorithms differ across the ideal-type institutional logics of the market, corporation, state or profession.
Since algorithms have interpretive flexibility, we expect that the autonomy and value coming from algorithmic management differ
across workers. For instance, previous research has shown that employees derive value from HRM activities by integrating their
personal resources (i.e. their knowledge, skills and abilities) with resources provided by the organization (e.g. relationships with
colleagues, information technologies, operating procedures) (Meijerink & Bondarouk, 2018). As such, an interesting question is: what personal and organizational resources do employees draw upon to create value out of HRM algorithms? Such personal and organizational
resources follow from HRM activities that are not necessarily automated by algorithms, such as training or team work (Van Beurden,
Van De Voorde, & Van Veldhoven, 2020). To our knowledge, there is little research that examines the interactions between HRM
algorithms and non-algorithmic HRM. This is surprising as research has shown that HRM activities in highly digitalized workplaces
continue to be executed by human managers that operate alongside HRM algorithms (Meijerink et al., 2021; Newlands, 2021; Shapiro,
2018; Veen et al., 2019). Accordingly, we see the need for research that examines questions such as: How do non-algorithmic and algorithmic HRM activities work together? What is the best synergy between them? What role do HRM activities performed by human managers
play in balancing whether HRM algorithms enable or restrain worker autonomy? What resources do non-algorithmic HRM activities
offer to workers for allowing them to create value out of HRM algorithms? And, do changes in non-algorithmic HRM activities trigger
changes in how workers make sense of and engage with algorithmic-enabled HRM activities?
Finally, we see possibilities for studies into how the enabling and restraining properties of algorithmic management are reflected in
worker behavior. Here, we encourage future studies on new forms of algoactivism that offset the constraining potential of HRM algorithms (Kellogg et al., 2020). For instance, a group of Uber drivers recently announced that they will start a court case against Uber with the aim of obtaining the worker data inputted into Uber's HRM algorithms. Using these data, these workers want to reverse engineer the algorithms they are subject to, which can ultimately inform future court cases aimed at limiting the control that Uber exercises over its drivers. In
line with this, future studies can examine whether such acts impact the degree to which algorithmic management restrains autonomy.
While some workers seek to reactively resist algorithmic control through algoactivism, others may take a more proactive approach to
create value out of HRM algorithms. As such, we ask: what makes some workers (reactively) resist algorithmic management, while others seek to use algorithmic management to their own advantage? Such worker acts to create value out of HRM algorithms may have
negative consequences for others. Earlier, we described how freelance workers exploit their peers by using their high online reputation
scores to attract clients and in turn sub-contract that work to their peers at a lower hourly rate than charged to their client (Wood et al.,
2019). This implies that for some workers, algorithmic management can be enabling, while for others it is restraining. Accordingly,
future research may seek to answer questions like: To what extent does algorithmic management result in 'winners' and 'losers'? Is the creation of value out of HRM algorithms a zero-sum game to workers? What role do reflexivity and algorithmic output (e.g.
online reputations) play in explaining whether HRM algorithms enable value creation to some workers, while limiting value to others?
5.2. Future research questions on the recursive relationship among HRM algorithms, autonomy and value
We proposed that worker autonomy and value creation trigger automated changes in HRM algorithms and/or motivate software
engineers to redesign the structural properties of algorithmic management (see Fig. 1). Although the use of self-learning algorithms in
HRM may yet be limited (Cheng & Hackett, 2021), research into online labor platforms and their use of algorithmic management shows that an uptake of artificial intelligence algorithms in 'traditional' organizational settings may be foreseeable in the near future (Prassl, 2018). If this were to happen, HRM algorithms would be more likely to develop autonomously, depending on the worker-generated
data that they are trained with and co-evolve along. This may minimize the role that interventions by human designers play in the use and (re)design of HRM algorithms. In terms of the duality of HRM algorithms, it implies that, in cases when self-learning HRM algorithms are deployed, the recursive implications of worker autonomy and value for algorithmic management are more likely to be based on automated processes than on deliberate changes by human designers. However, as noted by Raisch and Krakowski (2021), even the
most automated and autonomous algorithmic systems require human-made changes and include human managers ‘in the loop’ in cases
when algorithms need to be optimized to realize a new desired state, perform a new/different task, or need to be compliant with novel
regulatory regimes. This implies that the redesign of HRM algorithms always remains – to varying degrees – a responsibility of humans.
In fact, human decision makers (like HR professionals and line managers) perform HRM activities (e.g. job interviews, appraisal talks
or task allocation) alongside those automated by HRM algorithms. These human-performed HRM activities are likely to be dual in
nature too in that they shape, and are shaped by, algorithm-enabled HRM activities. On the one hand, HRM algorithms have implications for human managers in case they perform HRM activities that are augmented and shaped by algorithmic output (e.g.
decisions about which job candidates to invite for an interview) or replaced by autonomous algorithmic processes, which makes their HRM role redundant or offers them the freedom to work on HRM activities that algorithms cannot or do not take over. On the other hand, HRM algorithms
can be shaped by human-performed HRM activities as well. This occurs in cases when hiring decisions or performance evaluations that
were made by human decision makers are used as training data for developing HRM algorithms. Taken together, this triggers research
questions such as: What has a bigger impact on the structural properties of HRM algorithms: automated changes or human-made
changes? Will self-learning algorithms adjust differently to worker acts to regain autonomy in comparison to situations where software
engineers redesign HRM algorithms? If software designers decide to change HRM algorithms, is this upon their own initiative, that of
workers or a manager/capitalist? Do the worldviews of software engineers (and their managers) change when workers engage with
algorithmic management in unintended ways? Or, will they change HRM algorithms to reinforce and superimpose their pre-existing
worldviews onto workers? What rules and resources do software engineers change when redesigning HRM algorithms: algorithmic
parameters, the weight of these parameters, the descriptive/prescriptive/predictive nature of algorithmic management? How do
power relationships change when workers create autonomy and value out of HRM algorithms? And, will (automated) changes in
algorithmic management ultimately bring about change in the degree/balance to which HRM algorithms enable and restrain worker
autonomy and value?
Since algorithmic management and worker autonomy/value are recursively interrelated, we see the need for longitudinal research
on the evolution of algorithmic management. Here, interesting questions for future research are: Through which processes will the use
and outcomes of algorithmic management become stabilized? How much time does it take before workers make habitual use of HRM algorithms, meaning that the restraining and enabling features of algorithmic management remain unchanged? Or, will the structural
properties of HRM algorithms never change provided that self-learning algorithms continue to adapt? How does the evolution of
algorithmic management depend on the characteristics and (inter)actions of its users? For instance, will the differences in meanings
attached to HRM algorithms become smaller among workers, designers and managers such that the use of algorithmic management
becomes more or less stabilized? Will power relationships between workers and managers/capitalists stabilize such that workers stop resisting algorithmic control, meaning that the enabling and restraining properties of HRM algorithms remain unchanged? And, what
changes do external shocks such as the COVID-19 pandemic or the loss of court cases over worker rights under algorithmic management bring to HRM algorithms? Will such shocks change the meanings, worldviews, norms and rules embedded in HRM algorithms and, thereby, alter whether algorithmic management enables or restrains worker autonomy and value? It is by answering such questions that future studies can further untangle the deeper complexity associated with algorithmic management.
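The stabilization questions above can also be read as questions about feedback loops between algorithmic adjustment and worker behaviour. The toy simulation below is a deliberately simplified, hypothetical sketch (the update rule, parameters and worker-response function are invented for illustration) of a self-learning allocation algorithm that repeatedly adjusts a bonus multiplier in response to the rate at which workers accept tasks. Whether such a loop converges to a stable configuration or keeps changing is precisely the kind of empirical question a longitudinal research design could address.

```python
# Toy simulation of recursivity: an algorithm adapts a bonus multiplier to worker
# behaviour, while worker behaviour adapts to the bonus. Purely illustrative.
def acceptance_rate(bonus: float) -> float:
    """Hypothetical worker response: a higher bonus leads to more accepted tasks."""
    return min(1.0, 0.4 + 0.3 * bonus)

bonus = 1.0                 # multiplier applied to base task pay
target_acceptance = 0.85    # level of task acceptance the platform aims for
learning_rate = 0.5

for step in range(20):
    observed = acceptance_rate(bonus)
    # Self-learning update: raise the bonus when too few tasks are accepted,
    # lower it when workers accept more than the target.
    bonus += learning_rate * (target_acceptance - observed)
    print(f"step {step:2d}: acceptance={observed:.3f}, new bonus={bonus:.3f}")

# Whether such a loop settles at a stable bonus (and thus stable enabling and
# restraining properties) or keeps changing is an empirical question of the
# kind the longitudinal research agenda above calls for.
```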
6. Conclusions
This conceptual study builds on two important observations made by earlier studies into algorithmic management: (1) that HRM
algorithms limit worker autonomy and value through the increased surveillance and control exercised over workers, and (2) that workers seek to resist this by means of algoactivism, a novel type of worker resistance vis-à-vis management/capitalists. Building further on these earlier contributions, we offer a deeper understanding of the complexity in the relationship between algorithmic management, autonomy and value. By outlining the duality of HRM algorithms and the ‘duality of algorithmic management’ concept, we showcased
how algorithmic management simultaneously restrains and enables worker autonomy and value. Moreover, we propose that HRM
algorithms are recursively interrelated with worker autonomy and value; this recursivity occurs when software designers and/or self-learning algorithms
reinforce or limit worker acts for (re)gaining autonomy (e.g. algoactivism) and/or creating value out of HRM algorithms. On this basis,
we discussed avenues for future research into the duality of algorithmic management. As such, we hope that this study sets the stage for
a future line of inquiry into the complex interrelationships among HRM algorithms, autonomy and value.