Governance of Artificial Intelligence
1. Introduction
Artificial intelligence (AI) is rapidly changing how transactions and social interactions are organised in society today. AI systems and the algorithms supporting their operations play an increasingly important role in making value-laden decisions for society, ranging from clinical decision support systems that make medical diagnoses and policing systems that predict the likelihood of criminal activity to filtering algorithms that categorise and personalise content for users (Helbing, 2019; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016). The ability to mimic or rival human intelligence in complex problem-solving sets AI apart from other technologies, as many of these cognitive tasks
involve frequent interactions with users, as in robots for personal care, autonomous vehicles, and automated service providers.
The decision-making autonomy of AI significantly reduces human control over its decisions, creating new challenges for ascribing responsibility and legal liability for the harms AI imposes on others. Existing legal frameworks for ascribing responsibility and liability for machine operation treat machines as tools controlled by their human operators, based on the assumption that humans have a certain degree of control over the machine's specification (Matthias, 2004; Leenes & Lucivero, 2014).
A vast body of literature and government reports have highlighted issues of data
privacy and surveillance that can arise from AI applications. As algorithms in AI systems
utilise sensors to collect data and big data technologies to store, process and transmit data
through external communication networks, there have been concerns regarding the
potential misuse of personal data by third parties and increasing calls for more holistic
data governance frameworks to ensure reliable sharing of data within and between
organisations (Gasser & Almeida, 2017; Janssen, Brous, Estevez, Barbosa, & Janowski,
AI systems store extensive personal information about their users that can be misused if accessed by unauthorised third parties.

AI also raises concerns over the displacement of jobs through automation (Frey & Osborne, 2017; Linkov et al., 2018). The effects of automation are already felt in
sectors such as manufacturing, entertainment, healthcare, finance, and transport, as companies increasingly invest in AI to reduce labour costs and boost efficiency
(Linkov et al., 2018). While technological advancements have historically created new
jobs as well, there are concerns that the distribution of employment opportunities is
uneven across sectors and skill levels. Studies show that the highly routine cognitive tasks that characterise many middle-skilled jobs are at high risk of automation. In contrast, tasks at relatively lower risk of automation are those that machines cannot yet perform well, such as tasks requiring creativity and social intelligence (Frey & Osborne, 2017).
4. Governing AI
4.1 Why AI governance is important
Understanding and managing the risks posed by AI is crucial to realising the benefits of the technology. Increased efficiency and quality in the delivery of goods and services, greater autonomy and mobility for the elderly and disabled, and improved safety from using AI in safety-critical operations such as healthcare, transport, and emergency response are among the many socio-economic benefits arising from AI that can propel smart and sustainable development (Agarwal, Gurjar, Agarwal, & Birla, 2015; Lim & Taeihagh, 2018; Yigitcanlar et al., 2018). At the same time, as AI systems develop and increase in complexity, their risks and interconnectivity with other smart devices and systems will also increase, necessitating both specific governance mechanisms, such as for healthcare, transport, and autonomous weapons, and a broader global governance framework for AI (Butcher & Beridze, 2019).
Requirements to make algorithms transparent and explainable have also been shown to undermine accuracy and performance (Felzmann, Villaronga, Lutz, & Tamò-Larrieux, 2019; Piano, 2020). This issue is highlighted as a severe limitation of the EU General Data Protection Regulation in increasing algorithmic transparency to tackle discrimination (Goodman & Flaxman, 2017). Algorithms are often kept intentionally opaque by their developers to prevent cyber-attacks and to safeguard trade secrets, a practice legally justified by intellectual property rights. Moreover, the complexity of the extensive datasets used by ML algorithms makes it nearly impossible to identify and remove all variables that are correlated with sensitive categories of personal data. Regulators therefore need new ways of acquiring information and devising effective policies that can adapt to the evolving AI landscape (Guihot et al., 2017; Wirtz et al., 2020).
Amidst the issues with 'hard' regulatory frameworks, industry bodies and governments have increasingly adopted self-regulatory or 'soft law' approaches to govern AI design, but these remain limited in their effectiveness. Soft law approaches refer to 'nonbinding norms and techniques' that create 'substantive expectations that are not directly enforceable'. Industry bodies have released voluntary standards, guidelines, and codes of conduct (IEEE, 2019), and governments alike have formed expert committees to develop guidance on AI. Scholars have also proposed soft law approaches that involve collaboration with the affected stakeholders to develop guidelines, as well as legal experimentation and regulatory sandboxes to test innovative frameworks for liability and accountability for AI that can be adapted in iterative phases (Cath et al., 2018; Hemphill, 2020; Linkov et al., 2018; Philipsen, Stamhuis, & De Jong, 2021).
New governance frameworks can also be adapted from the approaches taken to regulate previous emerging technologies (Gasser & Almeida, 2017). Studies have analysed hybrid governance to regulate the Internet and emerging digital technologies. For instance, Leiser and Murray (2016) highlight the need to account for the increasing role of non-state actors and institutions in the governance of new and emerging digital technologies.

Concepts of governance and governance models are also used to analyse AI policy documents and establish how they frame AI. The emerging governance of AI is analysed according to diverse governance models such as market, participatory, flexible, and deregulated governance (Peters, 2001). This analysis contributes to the study of emerging disruptive technologies by examining how the framing of the risks and uncertainties of AI leads to the development of specific governance and regulatory arrangements, mapping similarities and differences across countries, regions, and organisations.
Dickinson et al. (2021) draw attention to the implementation of robots in care settings and its implications for policymaking. Informed by semi-structured interviews with policymakers, care providers, suppliers of robots, and technology experts, the authors elicit a series of governance 'dilemmas' faced in this space. They identify three dilemmas, relating to independence and surveillance, the re-shaping of human interactions, and the dynamics of giving and receiving care, that illustrate some of the tensions involved in the governance of robotics in care services, and they highlight the issues that governments need to address in this domain.
5.6 Law and tech collide: foreseeability, reasonableness, and advanced driver
assistance systems (Leiman, 2021)
A number of highly publicised fatalities have involved automated vehicles since May 2016. Over the same period, millions of serious injuries and fatalities have resulted from collisions involving human-operated vehicles. In jurisdictions where compensation depends on the establishment of fault, many of the injured or their dependents will not recover any compensation (Leiman, 2021). In many Australian jurisdictions, the standard of care required presents significant challenges when applied to partly, highly, and fully automated vehicles (ibid).
Leiman (2021) explores how the existing regulatory framework in Australia considers perceptions of risk in establishing legal liability for motor vehicles and whether this approach can be applied to automated vehicles. The article examines whether the law itself may affect perceptions of risk in view of the data generated by automated vehicles. It also considers the efficacy of the no-fault and hybrid schemes for legal liability in existence in Australia and compares them with the alternative legislation passed by the UK Parliament, which imposes responsibility on insurers. Leiman also discusses proposals concerning the role of government in assuring safety and determining fault in the case of automated vehicles.
Porto and Zuppetta (2021) examine disclosure regulation for digital platforms. The authors advocate that regulators should tackle information asymmetry, low bargaining power, and incorrect information, and they endorse an enforced co-regulatory approach that allows the participation of platforms and consumers and the development of personalised disclosures based on consumers' needs to empower them.
Acknowledgments
Araz Taeihagh is grateful for the funding support provided by the Lee Kuan Yew School of Public Policy.
Funding
This work was supported by the National University of Singapore.
Notes on contributor
Araz Taeihagh (DPhil, Oxon) is the Head of Policy Systems Group at the Lee Kuan Yew School of
Public Policy, Principal Investigator at the Centre for Trusted Internet and Community at the
National University of Singapore, and Visiting Associate Professor at Dynamics of Inclusive
Prosperity Initiative at the Rotterdam School of Management, Erasmus University. He has lived and conducted research on four continents, and since 2007 his research has focused on the interface of technology and society. Taeihagh is interested in socio-technical systems and focuses on the unique challenges that arise from introducing new technologies to society (e.g., crowdsourcing, the sharing economy, autonomous vehicles, AI, MOOCs, ridesharing). Taeihagh focuses on: a)
how to shape policies to accommodate new technologies and facilitate sustainable transitions; b)
the effects of these new technologies on the policy process; and c) changing the way we design and
analyse policies by developing innovative practical approaches to address the growth in the
interdependence and complexity of socio-technical systems. Araz has more than two decades of
experience working with firms on PMC and design issues relating to chemical, petroleum, and construction projects and provides technical and policy consultations on environmental, transportation, energy, and technology-related issues.
ORCID
Araz Taeihagh http://orcid.org/0000-0002-4812-4745
Disclosure statement
No potential conflict of interest was reported by the author.
References
Agarwal, P. K., Gurjar, J., Agarwal, A. K., & Birla, R. (2015). Application of artificial intelligence for
development of intelligent transport system in smart cities. Journal of Traffic and
Transportation Engineering, 1(1), 20–30.
Allen, G. C. (2019). Understanding China’s AI Strategy: Clues to Chinese strategic thinking on
artificial intelligence and national security. Washington, DC: Center for a New American
Security.
Alonso Raposo, M., Grosso, M., Després, J., Fernández-Macías, E., Galassi, C., Krasenbrink, A., Krause, J., Levati, L., Mourtzouchou, A., Saveyn, B., Thiel, C., & Ciuffo, B. (2018). An analysis of possible socio-economic effects of a Cooperative, Connected and Automated Mobility (CCAM) in Europe. Luxembourg: Publications Office of the European Union.
Gasser, U., & Almeida, V. A. (2017). A layered model for AI governance. IEEE Internet Computing,
21(6), 58–62.
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making
and a “right to explanation”. AI Magazine, 38(3), 50–57.
Guihot, M., Matthew, A. F., & Suzor, N. P. (2017). Nudging robots: Innovative solutions to
regulate artificial intelligence. Vanderbilt Journal of Entertainment & Technology Law, 20, 385.
He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., & Zhang, K. (2019). The practical implementation of
artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30–36.
Helbing, D. (2019). Societal, economic, ethical and legal challenges of the digital revolution: From big data to deep learning, artificial intelligence, and manipulative technologies. In D. Helbing (Ed.), Towards digital enlightenment: Essays on the dark and light sides of the digital revolution. Cham: Springer.
Leiser, M., & Murray, A. (2016). The role of non-state actors and institutions in the governance of new and emerging digital technologies. In R. Brownsword, E. Scotford, & K. Yeung (Eds.), The Oxford handbook of law, regulation and technology (pp. 670–704). Oxford: Oxford University Press.
Lele, A. (2019a). Disarmament, arms control and arms race. In Disruptive technologies for the militaries and security (pp. 217–229). Singapore: Springer.
Lele, A. (2019b). Artificial intelligence. In Disruptive technologies for the militaries and security (pp.
139–154). Singapore: Springer.
Li, Y., Taeihagh, A., & De Jong, M. (2018). The governance of risks in ridesharing: A revelatory
case from Singapore. Energies, 11(5), 1277.
Li, Y., Taeihagh, A., De Jong, M., & Klinke, A. (2021). Toward a commonly shared public policy perspective for analyzing risk coping strategies. Risk Analysis, 41(3), 519–532.
Radu, R. (2021). Steering the governance of artificial intelligence: National strategies in perspective. Policy and Society, 40(2).
Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and
Information Technology, 20(1), 5–14.
Robinson, H., MacDonald, B., & Broadbent, E. (2014). The role of healthcare robots for older
people at home: A review. International Journal of Social Robotics, 6(4), 575–591.
Roff, H. M. (2014). The strategic robot problem: Lethal autonomous weapons in war. Journal of
Military Ethics, 13(3), 211–227.
Sætra, H. S. (2020). A shallow defence of a technocracy of artificial intelligence: Examining the
political harms of algorithmic governance in the domain of government. Technology in Society, 62, 101283.
Yigitcanlar, T., Kamruzzaman, M., Foth, M., Sabatini, J., Da Costa, E., & Ioppolo, G. (2018). Can
cities become smart without being sustainable? A systematic review of the literature. Sustainable Cities and Society, 45, 348–365.
Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. Oxford, UK:
University of Oxford.
Zhang, B., & Dafoe, A. (2020). US public opinion on the governance of artificial intelligence. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 187–193). New York, NY: ACM.