A Bibliometric View of AI Ethics Development

Di Kevin Gao, Management Department, California State University, East Bay, Hayward, CA, USA ([email protected])
Andrew Haverly, Computer Science and Engineering Department, Mississippi State University, Mississippi State, MS 39762 ([email protected])
Dr. Sudip Mittal, Computer Science and Engineering Department, Mississippi State University, Mississippi State, MS 39762 ([email protected])
Dr. Jingdao Chen, Computer Science and Engineering Department, Mississippi State University, Mississippi State, MS 39762 ([email protected])

2023 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE) | DOI: 10.1109/CSDE59766.2023.10487710

Abstract— Artificial Intelligence (AI) Ethics is a nascent yet critical research field. Recent developments in generative AI and foundational models necessitate a renewed look at the problem of AI Ethics. In this study, we perform a bibliometric analysis of AI Ethics literature from the last 20 years based on keyword search. Our study reveals a three-phase development in AI Ethics, namely an incubation phase, a "making AI human-like machines" phase, and a "making AI human-centric machines" phase. We conjecture that the next phase of AI Ethics is likely to focus on making AI more machine-like as AI matches or surpasses humans intellectually, a term we coin as "machine-like human".

Keywords— artificial intelligence ethics, AI ethics, machine ethics, algorithm ethics, Roboethics, human-like machine, machine-like human

I. INTRODUCTION

Artificial Intelligence (AI) Ethics is the study of the ethical and responsible development and deployment of AI technology. Our bibliometric analysis of AI Ethics literature published between 2004 and 2023 points to a three-phase development: 1. Incubation; 2. Making AI human-like machines; 3. Making AI human-centric machines.

This article contributes to AI Ethics discussions with unique insights based on keyword usage patterns. It also contrasts "human-like machine", "human-centric machine", and "machine-like human", which represent the past, current, and potential future phases of AI Ethics development.

II. DEFINITIONS AND HISTORICAL DEVELOPMENT

The term AI was first coined in 1956 at the Dartmouth Workshop [1]. John McCarthy, one of the key contributors at the conference and an AI pioneer, defined AI as "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable" [2].

AI has gone through multiple cycles of boom and bust. It was a rising star from its inception until 1973. Scientists were excited by its potential to solve algebra word problems, prove geometry theorems, and even learn to speak. However, it failed to deliver on the hyped expectations. That led to the Lighthill Report in 1973, which triggered a massive loss of confidence in AI [3]. In the United States, the Defense Advanced Research Projects Agency (DARPA) also drastically reduced its AI funding. AI sank into an "AI Winter" until 1980. The relief of the 1980s turned out to be short-lived: in 1987, AI was again put in the freezer. By the early 2000s, AI was haunted by over-promises and under-delivery. In 2005, John Markoff of the New York Times reported that some computer scientists avoided the term artificial intelligence altogether "for fear of being viewed as wild-eyed dreamers" [4]. In 2007, Alex Castro referred to artificial intelligence as a subject that has "too often failed to live up to their promises" [5]. The result was that "once something becomes useful enough and common enough it's not labeled AI anymore" [6]. In the 1990s and 2000s, new computer science disciplines flourished, but they were deliberately not categorized under Artificial Intelligence, for example, informatics, machine learning, machine perception, analytics, predictive analytics, decision support systems, knowledge-based systems, business rules management, cognitive systems, intelligent systems, language models, intelligent agents, and computational intelligence.

AI regained its popularity with Google Translate, Google Image Search, and IBM Watson's win on the Jeopardy! game show in 2011 [7]. The year 2012 marked a turning point for AI due to breakthroughs in deep learning and GPU technology. AlexNet used GPUs to train its Convolutional Neural Network (CNN) model to recognize and label images automatically, and it won the ImageNet 2012 Challenge by a large margin [8][9]. AI excitement quickly built, and its resurgence became unstoppable. AI broadened its scope to absorb many downstream research fields; it became an aggregator and a destination.

In November 2022, OpenAI released ChatGPT, which attracted intense interest from the general public. It triggered an all-out war among the Big Tech companies and intensified competition between rival countries. Ethical AI development and deployment have become more important than ever.

III. METHODS

We selected SCOPUS (www.scopus.com) as the main data source and VOSviewer (www.vosviewer.com) as the data aggregator. In the SCOPUS database, we searched for "AI ethics" OR "artificial intelligence ethics" OR "machine ethics" OR "algorithm ethics" OR "information ethics" OR "ethics of technology" OR "Robotic Ethics" OR "Robot Ethics" OR "artificial moral agent" OR "artificial moral agents" from 2004 to 2023, a period of 20 years, for all languages, countries, and territories. In total, 2,517 articles were selected. After removing 60 entries with missing information, 2,457 pieces of literature were included in this analysis. For keyword analysis, we used 2004-2023 co-occurrence author keywords from VOSviewer with "full counting", which means each keyword is counted once per paper regardless of how many keywords the paper lists. We then exported the results for time series and pattern analysis.
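
To make the counting procedure concrete, the sketch below shows one way the yearly author-keyword tallies described above could be reproduced from a Scopus CSV export. This is a minimal illustration only, not the authors' code: the file name, the "Year" and "Author Keywords" column labels, the semicolon separator, and the tracked-keyword subset are assumptions about the export format, and the actual study used VOSviewer for this step.

# Minimal sketch (assumed export format, not the authors' pipeline):
# tally author-keyword usage per year with "full counting".
import csv
from collections import defaultdict

TRACKED = {"ai ethics", "machine ethics", "roboethics", "data ethics"}  # example subset

counts = defaultdict(int)  # (year, keyword) -> number of papers listing that keyword
with open("scopus_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        year = row.get("Year", "").strip()
        keywords = {k.strip().lower() for k in row.get("Author Keywords", "").split(";") if k.strip()}
        # Full counting: each listed keyword contributes one count for this paper,
        # no matter how many other keywords the paper lists.
        for kw in keywords & TRACKED:
            counts[(year, kw)] += 1

for (year, kw), n in sorted(counts.items()):
    print(year, kw, n)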

ethics" OR “artificial intelligence ethics" OR “machine ethics" humans, e.g., trust, empathy, justice, care, and fairness. From
OR “algorithm ethics" OR “information ethics" OR "ethics of 2020, however, the AI Ethics keywords were increasingly
technology" OR “Robotic Ethics" OR “Robot Ethics" OR focused on protecting the downside risks and making AI
“artificial moral agent" OR “artificial moral agents" from 2004 explainable, accountable, trustworthy, non-biased, non-
to 2023, a period of 20 years, for all languages, all countries, and discriminatory, less opaque, and for-diversity.
territories. In total, 2,517 articles were selected. After removing Based on this observation, we grouped AI Ethics
60 entries due to missing info, a total of 2,457 pieces of literature development into the following three phases:
were included in this analysis. For keyword analysis, we used
2004-2023 co-occurrence author keywords from VOSviewer. • Phase I: Incubation (2004 to 2013)
We used “Full counting", which means each keyword is counted • Phase II: Making AI human-like machines (2014 to 2019)
as one regardless of how many keywords were listed in the • Phase III: Making AI human-centric machines (2020 and
literature. We then exported the results for time series and on)
pattern analysis.
C. AI Ethics Development Phases
IV. AI ETHICS DEVELOPMENT In the following section, we will go through each
phase and highlight the major developments.
A. AI ethics and related ethics usage analysis
1) Phase I: Incubation (2004-2013)
We calculated the keyword usage frequencies for AI AI Ethics trailed AI’s rapid development. In 1997,
Ethics and other related ethical fields based on SCOPUS data. IBM’s Deep Blue defeated the world chess champion Garry
Other related ethical fields included information ethics, machine Kasparov [10]. In 2005, the Stanford autonomous vehicle,
ethics, roboethics, technology ethics, computer ethics, data Stanley, successfully crossed 212 kilometers of terrain in the
ethics, engineering ethics, digital ethics, and computational Mojave Desert [11]. In 2011, IBM’s Watson won the Jeopardy!
ethics. The data is summarized in Figure 1. that conventional wisdom believed only humans could master
"AI Ethics" or "Artificial Intelligence ethics" first [12]. AI technology’s fast development galvanized many
appeared in keywords in 2008, followed by five 5 years of exciting research fronts and, in a way, “pushed” AI Ethics to
hibernation. In 2014, AI Ethics reemerged and has since enjoyed the forefront. It became apparent that other ethics would not be
exponential growth. In 2014, there was only one occurrence of sufficient to cover the wide spectrum of fields that AI covers.
the keyword “AI Ethics” in our literature research. However, by AI Ethics as a research field was born.
2022, the keyword frequency had increased to 148. In the 2023 The popular keyword in 2004 was “privacy". Privacy
partial year till July 28, the usage has also reached 114. The was not a new discipline; neither was it exclusive to AI. It
keyword “AI Ethics” completely outnumbered the rest of the became a hot topic with the explosive growth of data collection
terms such as “roboethics”, “data ethics”, or “machine ethics”. and data usage in the Internet age. In the next few years,
This finding is important. In the historical development “autonomy", “reliability", “safety", “security", and
section, we mentioned that 2012 was the turning point for AI as “sustainability" surfaced. The majority of the keywords were
a research field. We believe 2014 is the year that AI Ethics was product oriented.
formed. Before that, AI Ethics was dispersed across information 2) Phase II: Make AI Human-like Machines (2014-2019)
ethics, machine ethics, roboethics, technology ethics, and In Phase II, AI has increasingly demonstrated its
computer ethics. Thus, we define the pre-2014 period as the potential to function like a human. AI Ethicists and the general
incubation period. public welcomed the development. AI Ethics’ focus was on the
B. Ethics principles usage pattern analysis ethical application of AI, the mini human, in different fields.
We leveraged VOSviewer to unpack keyword usages For this reason, we labeled this phase “Make AI Human-like
that are related to AI Ethics. Figure 2 is generated from the Machines".
VOSviewer based on data since 2004 by using co-occurrence During this phase, AI continued its breakneck
data and author keywords. We used full counting instead of advancement and pushed deep into new frontiers. In 2014,
partial counting, which means each author keyword is counted generative adversarial networks (GAN) were developed to
as one regardless of how many keywords were used in the synthesize new and creative images from existing ones. In
literature. The size of the circle indicates the keywords’ relative 2015, AI-enabled machines to "see" and label images better
frequency. Color represents the closeness of the topics. than humans [13]. In 2016, Deep Mind’s AlphaGo defeated
We sorted this information chronologically and further world Go champion Lee Sedol [14]. In 2018, AI beat human
bifurcated the keywords based on product orientation and their dermatologists in accurately detecting skin cancer [15]. In
associations with AI Ethics principles. The information is 2018, Google Waymo’s Robotaxi started roaming in Phoenix’s
presented in Figure 3. The top rectangles are product-oriented streets [16]. The general public viewed the development
features, the middle ovals illustrate when keywords started to be positively and was excited by AI’s boundless applications.
consistently used. However, AI Ethics development lagged behind AI
The results revealed a significant shift in AI Ethics technology development. As mentioned in the historical
research principles from 2020. Between 2014 and 2019, AI background section, AI regained popularity and broadened its
Ethics keywords focused on principles to make AI ethical scope to include any computer science disciplines that enabled
human-like intelligence in 2012. Machine learning, machine

Fig 1: Usage of different ethics in literature keywords (since 2004; 2023 partial year till 7/28). AI Ethics first appeared in 2008. It consistently appeared after 2014 and has taken off since 2020.

Fig 2: AI Ethics and related fields based on bibliographical data in VOSviewer, using Scopus data from 2004 to July 28, 2023.
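
For illustration, a co-occurrence map such as the one summarized in Fig 2 rests on a simple counting rule: two author keywords co-occur whenever they appear on the same paper. The sketch below shows that rule under full counting. It is a rough stand-in for what VOSviewer does internally; the small records list is made-up example data, not values from this study.

# Illustrative sketch of keyword co-occurrence counting with full counting.
# The records below are made-up examples, not data from the study.
from collections import Counter
from itertools import combinations

records = [
    {"year": 2021, "keywords": ["ai ethics", "explainability", "trustworthy ai"]},
    {"year": 2021, "keywords": ["ai ethics", "trustworthy ai"]},
    {"year": 2022, "keywords": ["ai ethics", "responsible ai"]},
]

cooccurrence = Counter()
for rec in records:
    kws = sorted(set(rec["keywords"]))   # de-duplicate within one paper
    for a, b in combinations(kws, 2):    # every unordered pair co-occurs once per paper
        cooccurrence[(a, b)] += 1

for (a, b), n in cooccurrence.most_common():
    print(f"{a} <-> {b}: {n}")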

Fig 3: AI Ethics Development Phases Based on Keyword Analysis

Machine learning, machine perception, text analysis, natural language processing (NLP), logical reasoning, game playing, decision support systems, data analytics, and predictive analytics became AI's upstream or supporting disciplines. Robotics (including autonomous vehicles) was a fast-developing field enabled by AI. AI became an aggregator.

AI Ethics keywords during this phase reflected many human-like features, for example, "accountability", "care", "compassion", "empathy", "fairness", "justice", "transparency", "trust", and Explainable AI (XAI). The AI Ethics community wanted to make this intelligent technology accountable, caring, compassionate, empathetic, fair, unbiased, transparent, and trustworthy, just like an ethical human.

3) Phase III: Make AI Human-centric Machines (2020-present)

In Phase III, while continuing its rapid ascension, AI showed aspects that were far from angelic. The AI Ethics community focused on grounding AI as an explainable, responsible, and trustworthy machine that serves humans instead of becoming a runaway alien technology. That is the reason we titled this phase "Make AI Human-centric Machines".

By 2020, AI had surpassed humans in handwriting recognition, speech recognition, image recognition, reading comprehension, and language understanding [17]. In the meantime, deepfakes exacerbated online misinformation and undermined basic human trust. In May 2021, the United States National Security Commission on Artificial Intelligence urged the US to win the AI arms race against China, reminiscent of the costly and dangerous Cold War [18]. In July 2022, Google fired an engineer who claimed that its LaMDA language model was sentient, deepening the general public's suspicion of AI [19]. In November 2022, OpenAI released ChatGPT to the public and triggered an all-out AI race. It looked increasingly as if the AI companies were racing to the bottom to win, putting safety and security on the back burner [20]. In 2023, ChatGPT became the second Large Language Model to pass the Turing Test [21] [22]. Also in 2023, Goldman Sachs estimated that 300 million jobs could be displaced by AI [23]. Scientists, entrepreneurs, and public officials began to warn the general public about the consequences of unconstrained AI development. Public distrust of AI surged.

During this phase, precautionary keywords appeared frequently in the literature, such as "algorithm bias", "AI bias", "gender bias", and "discrimination" in 2020, and "opacity", "responsibility gap", and "social justice" in 2021. Meanwhile, keywords such as "explainability" and "trustworthy AI" popped up in 2020, "human-centric" and "responsible AI" showed up in 2021, and "interpretability" and "sustainable AI" showed up in 2022. The AI Ethics community wanted to make AI responsible, explainable, and trustworthy to humans. AI Ethics entered a phase of making AI "human-centric".

D. The future of AI Ethics

AI technology is disruptive in nature, and AI Ethics is pivotal to the benign and benevolent rollout of AI technology. At the current development pace, it is almost inevitable that AI and robotics will match or surpass humans both physically and intellectually; AI can become near-human. AI ethicists may need to explore how to make these intelligent AI embodiments "machine-like humans": machines that are intelligent and capable, but never achieve the same status as full humans.

Another challenge is the confluence of AI, robotics, and biotechnology.

Machine-augmented humans and machine-augmented non-humans (animals or newly created species) could blur the definition of humans. AI ethicists may need to study the ethical ramifications of this development and draw red lines on socially acceptable creations of alien beings.

Superintelligence, a form of AI superior to human intelligence, can be a concern. Even if the risks are extremely low, the consequences, should it ever happen, could be incredibly serious. AI ethicists may need to develop a framework to prevent even the remote possibility of Superintelligence emerging.

V. LIMITATIONS

Our bibliometric analysis has a few limitations:

1. We relied heavily on SCOPUS as the data source. Literature not indexed by SCOPUS was excluded.
2. We relied on VOSviewer to generate keyword co-occurrence data. Errors in VOSviewer could be carried into our final analysis.
3. The literature search was based on titles and keywords. That could result in the inclusion of unwanted papers, and some legitimate AI Ethics articles might have been excluded.
4. The majority of the literature included in this study was in English (97%). It is highly probable that some non-English literature was missed.

Despite these caveats, we believe that our bibliometric analysis remains highly valuable to the scientific and engineering community, since most AI and AI Ethics research is published in English-language journals and conferences that are indexed by Scopus.

VI. CONCLUSION

Our bibliometric analysis of AI Ethics literature points to a three-phase AI Ethics development, namely incubation, making AI human-like machines, and making AI human-centric machines. AI ethicists may need to get ahead of AI technology development and research how to make AI machine-like humans, prohibit the unethical development of machine-augmented non-humans, and prevent the development of malicious or malevolent Superintelligence.

VII. ACKNOWLEDGEMENT

The work reported herein was supported by the National Science Foundation (NSF) (Award #2246920). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

REFERENCES

[1] Dartmouth University, "Dartmouth workshop," 1956. https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth
[2] John McCarthy, "What is AI?," 2023. http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html
[3] James Lighthill, "Lighthill report: Artificial intelligence: a paper symposium," Science Research Council, London, 1973.
[4] John Markoff, "Behind artificial intelligence, a squadron of bright real people," New York Times, 2005.
[5] Patty Tascarella, "Are you talking to me?," 2005. https://www.economist.com/technology-quarterly/2007/06/09/are-you-talking-to-me
[6] CNN, "AI set to exceed human brain power," CNN.com, 2006.
[7] Mike Hale, "Actors and their roles for $300, HAL? HAL!," New York Times, 2011.
[8] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, 2017.
[9] Dave Gershgorn, "The inside story of how AI got good enough to dominate Silicon Valley," Quartz, 2018.
[10] Bruce Weber, "Swift and slashing, computer topples Kasparov," 1997. https://www.nytimes.com/1997/05/12/nyregion/swift-and-slashing-computer-topples-kasparov.html
[11] Steve Russell, "DARPA Grand Challenge winner: Stanley the robot!," 2006. https://www.popularmechanics.com/technology/robots/a393/2169012/
[12] John Markoff, "Computer wins on 'Jeopardy!': Trivial, it's not!," 2011. https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html
[13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei, "ImageNet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248-255.
[14] Adrian Cho, "'Huge leap forward': Computer that mimics human brain beats professional at game of Go," Science, 2016.
[15] Alexa Lardieri, "AI beats doctors at cancer diagnoses," 2018. https://www.usnews.com/news/health-care-news/articles/2018-05-28/artificial-intelligence-beats-dermatologists-at-diagnosing-skin-cancer
[16] Jon Fingas, "Waymo launches its first commercial self-driving car service," 2019. https://en.wikipedia.org/wiki/Waymo#:~:text=In%20December%202018%2C%20Waymo%20launched,and%20request%20a%20pick%2Dup
[17] Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams, "Dynabench: Rethinking benchmarking in NLP," 2021.
[18] National Security Commission on Artificial Intelligence, "National Security Commission on Artificial Intelligence - Final Report," 2021. https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
[19] Miles Kruppa, "Google parts with engineer who claimed its AI system is sentient," 2022. https://www.wsj.com/articles/google-parts-with-engineer-who-claimed-its-ai-system-is-sentient-11658538296?ns=prod/accounts-wsj
[20] Samantha Lock, "What is AI chatbot phenomenon ChatGPT and could it replace humans?," 2022. Accessed: 2022-12-05. https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans
[21] Celeste Biever, "ChatGPT broke the Turing test - the race is on for new ways to assess AI," Nature, vol. 619, 2023.
[22] Will Oremus, "Google's AI passed a famous test - and showed how the test is broken," The Washington Post, 2022.
[23] Beatrice Nolan, "AI systems like ChatGPT could impact 300 million full-time jobs worldwide, with administrative and legal roles some of the most at risk, Goldman Sachs report says," 2023. https://www.businessinsider.com/generative-ai-chatpgt-300-million-full-time-jobs-goldman-sachs-2023

