A Bibliometric View of AI Ethics Development

Di Kevin Gao
Management Department, California State University, East Bay, Hayward, CA, USA
[email protected]

Andrew Haverly
Computer Science and Engineering Dept, Mississippi State University, Mississippi State, MS 39762
[email protected]

Dr. Sudip Mittal
Computer Science and Engineering Dept, Mississippi State University, Mississippi State, MS 39762
[email protected]

Dr. Jingdao Chen
Computer Science and Engineering Dept, Mississippi State University, Mississippi State, MS 39762
[email protected]

2023 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE) | 979-8-3503-4107-2/23/$31.00 ©2023 IEEE | DOI: 10.1109/CSDE59766.2023.10487710
Abstract— Artificial Intelligence (AI) Ethics is a nascent yet critical research field. Recent developments in generative AI and foundational models necessitate a renewed look at the problem of AI Ethics. In this study, we perform a bibliometric analysis of AI Ethics literature for the last 20 years based on keyword search. Our study reveals a three-phase development in AI Ethics, namely an incubation phase, a making-AI-human-like-machines phase, and a making-AI-human-centric-machines phase. We conjecture that the next phase of AI Ethics is likely to focus on making AI more machine-like as AI matches or surpasses humans intellectually, a stage we coin as "machine-like human".

Keywords— artificial intelligence ethics, AI ethics, machine ethics, algorithm ethics, Roboethics, human-like machine, machine-like human

I. INTRODUCTION

Artificial Intelligence (AI) Ethics is the study of the ethical and responsible development and deployment of AI technology. Our bibliometric analysis of AI Ethics literature published between 2004 and 2023 points us to a three-phase development: 1. Incubation; 2. Making AI human-like machines; 3. Making AI human-centric machines.

This article contributes to AI Ethics discussions with unique insights based on keyword usage patterns. It also contrasts "human-like machine", "human-centric machine", and "machine-like human", which represent the past, current, and potential future phases of AI Ethics development.

II. DEFINITIONS AND HISTORICAL DEVELOPMENT

The term AI was first coined in 1956 at the Dartmouth Workshop [1]. John McCarthy, one of the key contributors at the conference and an AI pioneer, defined AI as "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable" [2].

AI has gone through multiple cycles of boom and bust. It was a rising star from its inception to 1973. Scientists were excited by its potential to solve algebra word problems, prove geometry theorems, and even learn to speak. However, it failed to deliver on the hyped expectations. That led to the Lighthill Report in 1973, which triggered a massive loss of confidence in AI [3]. In the United States, the Defense Advanced Research Projects Agency (DARPA) also drastically reduced its AI funding. AI sank into an "AI Winter" until 1980. The relief in the 1980s turned out to be short-lived: in 1987, AI was again placed in a freezer. By the early 2000s, AI was haunted by over-promises and under-delivery. In 2005, John Markoff of the New York Times reported that some computer scientists avoided the term artificial intelligence altogether "for fear of being viewed as wild-eyed dreamers" [4]. In 2007, Alex Castro referred to artificial intelligence as a subject that has "too often failed to live up to their promises" [5]. The result was that "once something becomes useful enough and common enough it's not labeled AI anymore" [6]. In the 1990s and 2000s, new computer science disciplines flourished. However, they were deliberately not categorized under Artificial Intelligence, for example, informatics, machine learning, machine perception, analytics, predictive analytics, decision support systems, knowledge-based systems, business rules management, cognitive systems, intelligent systems, language models, intelligent agents, and computational intelligence.

AI regained its popularity with Google Translate, Google Image Search, and IBM Watson's win on Jeopardy! in 2011 [7]. 2012 marked a turning point for AI due to breakthroughs in deep learning and GPU technology. AlexNet used GPUs to train its Convolutional Neural Network (CNN) model to recognize and label images automatically and won the ImageNet 2012 Challenge by a large margin [8][9]. AI excitement quickly built, and its resurgence became unstoppable. AI broadened its scope to absorb many downstream research fields. It became an aggregator and a destination.

In November 2022, OpenAI released ChatGPT, which attracted intense interest from the general public. It triggered an all-out war between the Big Techs and stiffened competition between rival countries. Ethical AI development and deployment have become more important than ever.

III. METHODS

We selected SCOPUS (www.scopus.com) as the main data source and VOSviewer (www.vosviewer.com) as the data aggregator. In the SCOPUS database, we searched for "AI
[Fig. 1 bar chart: usage counts of different ethics keywords since 2004 (2023 partial year through 7/28); the most frequent keywords appear 148, 114, 99, and 57 times, the remainder in single or low double digits.]
Fig 1: Usage of different ethics in literature keywords. AI Ethics first appeared in 2008. It consistently appeared after 2014 and has taken off since 2020.
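The per-year keyword tallies summarized in Fig. 1 can be sketched in a few lines. This is a minimal, hypothetical illustration, assuming records are exported from Scopus as a CSV file with "Author Keywords" and "Year" columns; the file name and column labels are assumptions, not a guaranteed Scopus export schema.

```python
# Hypothetical sketch of a yearly keyword-frequency tally, as behind Fig. 1.
# Assumed input: a Scopus CSV export with "Author Keywords" (semicolon-
# separated) and "Year" columns. Column names are illustrative assumptions.
import csv
from collections import Counter

def yearly_keyword_counts(csv_path, target="ai ethics"):
    """Count, per year, how many records list `target` among their keywords."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            keywords = [k.strip().lower()
                        for k in row.get("Author Keywords", "").split(";")]
            if target in keywords:
                counts[row["Year"]] += 1
    # Return years in chronological order for plotting
    return dict(sorted(counts.items()))
```

A tally like this, repeated for each ethics-related keyword, yields the bar heights of Fig. 1 and the first-appearance years noted in the caption.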
Fig 2: AI Ethics and related fields based on bibliographical data in VOSviewer by using Scopus data from 2004 to July 28, 2023.
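The keyword co-occurrence links underlying a map like Fig. 2 can be sketched as pair counting over per-record keyword lists. This is a simplified illustration, not VOSviewer's actual implementation (which additionally normalizes link strengths and clusters the network); the sample records are invented for demonstration.

```python
# Hypothetical sketch of keyword co-occurrence counting, the raw link-strength
# data that a tool like VOSviewer visualizes. Sample records are illustrative.
from collections import Counter
from itertools import combinations

def cooccurrence_counts(keyword_lists):
    """Count how often each unordered keyword pair appears in the same record."""
    pairs = Counter()
    for keywords in keyword_lists:
        # Normalize case, deduplicate, and sort so each pair has one canonical key
        normalized = sorted({k.strip().lower() for k in keywords})
        for a, b in combinations(normalized, 2):
            pairs[(a, b)] += 1
    return pairs

records = [
    ["AI ethics", "fairness", "transparency"],
    ["AI ethics", "fairness"],
]
link_strength = cooccurrence_counts(records)
# ("ai ethics", "fairness") co-occurs in both sample records
```

Edges with higher pair counts appear as thicker links in the co-occurrence map; errors at this counting stage would propagate into the clusters, which is the second limitation noted in Section V.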
Fig 3: AI Ethics Development Phases Based on Keyword Analysis
perception, text analysis, natural language processing (NLP), logical reasoning, game-playing, decision support systems, data analytics, and predictive analytics became AI's upstream or supporting disciplines. Robotics (including autonomous vehicles) was a fast-developing field enabled by AI. AI became an aggregator.

AI Ethics keywords during this phase reflected many human-like features, for example, "accountability", "care", "compassion", "empathy", "fairness", "justice", "transparency", "trust", and Explainable AI (XAI). The AI Ethics community wanted to make this intelligent technology accountable, caring, compassionate, empathetic, fair, unbiased, transparent, and trustworthy, just like an ethical human.

3) Phase III: Make AI Human-centric Machines (2020-present)

In Phase III, while continuing its rapid ascension, AI had shown aspects that were far from angelic. The AI Ethics community focused on grounding AI into an explainable, responsible, and trustworthy machine that serves humans instead of being a runaway alien technology. That is why we titled this phase "Make AI Human-centric Machines".

By 2020, AI had surpassed humans in handwriting recognition, speech recognition, image recognition, reading comprehension, and language understanding [17]. In the meantime, deepfakes exacerbated online misinformation and undermined basic human trust. In May 2021, the United States National Security Commission urged the US to win the AI arms race against China, reminiscent of the costly and dangerous Cold War [18]. In July 2022, Google fired an engineer who claimed that its LaMDA language model was sentient, exacerbating the general public's suspicion of AI [19]. In November 2022, OpenAI released ChatGPT to the public and triggered an all-out AI race. It looked increasingly like the AI companies were racing to the bottom to win, putting safety and security on the back burner [20]. In 2023, ChatGPT became the second Large Language Model to pass the Turing Test [21][22]. Also in 2023, Goldman Sachs estimated that 300 million jobs could be displaced by AI [23]. Scientists, entrepreneurs, and public officials began warning the general public about the consequences of unconstrained AI development. Public distrust of AI surged.

During this phase, precautionary keywords showed up very frequently in the literature, such as "algorithm bias", "AI bias", "gender bias", and "discrimination" in 2020, and "opacity", "responsibility gap", and "social justice" in 2021. Meanwhile, keywords such as "explainability" and "trustworthy AI" popped up in 2020. "Human-centric" and "responsible AI" showed up in 2021, and "interpretability" and "sustainable AI" showed up in 2022. The AI Ethics community wanted to make AI responsible, explainable, and trustworthy to humans. AI Ethics entered a phase to make AI "human-centric".

D. The Future of AI Ethics

AI technology is disruptive in nature. AI Ethics is pivotal to the benign and benevolent rollout of AI technology. At the current development pace, it is almost inevitable that AI and robotics will match or surpass humans both physically and intellectually. AI can become near-human. AI ethicists may need to explore how to make these intelligent AI embodiments "machine-like humans": machines that are intelligent and capable, but never achieve the same status as full humans.

Another challenge is the confluence of AI, robotics, and biotechnology. Machine-augmented humans and machine-augmented non-humans (animals or newly created species) could blur the definition of humans. AI ethicists may need to study the ethical ramifications of this development and draw redlines on socially acceptable creations of alien beings.

Superintelligence, a form of AI that is superior to human intelligence, can be a concern. Even if the risks are extremely low, should it indeed happen, the consequences could be incredibly serious. AI ethicists may need to develop a framework to prevent Superintelligence from ever happening.

V. LIMITATION

Our bibliometric analysis has a few limitations:
1. We relied heavily on SCOPUS for data sources. Literature unlisted on SCOPUS would be excluded.
2. We relied on VOSviewer to generate keyword co-occurrence data. Errors in VOSviewer could be carried into our final analysis.
3. The literature search was based on titles and keywords. That could result in the inclusion of unwanted papers, and some legitimate AI Ethics articles might be excluded.
4. The majority of the literature included in this study was in English (97%). It is highly probable that some non-English literature was missed.

Despite these caveats, we believe that our bibliometric analysis remains highly valuable to the scientific and engineering community, since most AI and AI Ethics research is published in English-language journals and conferences that are indexed by Scopus.

VI. CONCLUSION

The bibliometric analysis of AI Ethics literature has pointed to a three-phase AI Ethics development, namely incubation, making AI human-like machines, and making AI human-centric machines. AI ethicists may need to get ahead of AI technology development and research making AI machine-like humans, prohibit unethical development of machine-augmented non-humans, and prevent the development of malicious or malevolent Superintelligence.

VII. ACKNOWLEDGEMENT

The work reported herein was supported by the National Science Foundation (NSF) (Award #2246920). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

REFERENCES

[1] Dartmouth University, "Dartmouth workshop," 1956. https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth
[2] John McCarthy, "What is AI?," 2023. http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html
[3] James Lighthill, "Lighthill report: Artificial intelligence: a paper symposium," Science Research Council, London, 1973.
[4] John Markoff, "Behind artificial intelligence, a squadron of bright real people," New York Times, 2005.
[5] Patty Tascarella, "Are you talking to me?," 2005. https://www.economist.com/technology-quarterly/2007/06/09/are-you-talking-to-me
[6] CNN, "AI set to exceed human brain power," 2006. CNN.com.
[7] Mike Hale, "Actors and their roles for $300, HAL? HAL!," New York Times, 2011.
[8] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, 2017.
[9] Dave Gershgorn, "The inside story of how AI got good enough to dominate Silicon Valley," Quartz, 2018.
[10] Bruce Weber, "Swift and slashing, computer topples Kasparov," 1997. https://www.nytimes.com/1997/05/12/nyregion/swift-and-slashing-computer-topples-kasparov.html
[11] Steve Russell, "DARPA Grand Challenge winner: Stanley the robot!," 2006. https://www.popularmechanics.com/technology/robots/a393/2169012/
[12] John Markoff, "Computer wins on 'Jeopardy!': Trivial, it's not!," 2011. https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html
[13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei, "ImageNet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
[14] Adrian Cho, "'Huge leap forward': Computer that mimics human brain beats professional at game of Go," Science, 2016.
[15] Alexa Lardieri, "AI beats doctors at cancer diagnoses," 2018. https://www.usnews.com/news/health-care-news/articles/2018-05-28/artificial-intelligence-beats-dermatologists-at-diagnosing-skin-cancer
[16] Jon Fingas, "Waymo launches its first commercial self-driving car service," 2019. https://en.wikipedia.org/wiki/Waymo#:~:text=In%20December%202018%2C%20Waymo%20launched,and%20request%20a%20pick%2Dup
[17] Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams, "Dynabench: Rethinking benchmarking in NLP," 2021.
[18] National Security Commission on Artificial Intelligence, "National Security Commission on Artificial Intelligence - final report," 2021. https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
[19] Miles Kruppa, "Google parts with engineer who claimed its AI system is sentient," 2022. https://www.wsj.com/articles/google-parts-with-engineer-who-claimed-its-ai-system-is-sentient-11658538296?ns=prod/accounts-wsj
[20] Samantha Lock, "What is AI chatbot phenomenon ChatGPT and could it replace humans?," 2022. Accessed: 2022-12-05. https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans
[21] Celeste Biever, "ChatGPT broke the Turing test - the race is on for new ways to assess AI," Nature, vol. 619, 2023.
[22] Will Oremus, "Google's AI passed a famous test — and showed how the test is broken," The Washington Post, 2022.
[23] Beatrice Nolan, "AI systems like ChatGPT could impact 300 million full-time jobs worldwide, with administrative and legal roles some of the most at risk, Goldman Sachs report says," Business Insider, 2023. https://www.businessinsider.com/generative-ai-chatpgt-300-million-full-time-jobs-goldman-sachs-2023