
AI in society and culture: decision making and values

https://doi.org/10.1145/3334480.XXXXXXX

Abstract

With the increased expectations of artificial intelligence, academic research faces complex questions of human-centred, responsible and trustworthy technology embedded into society and culture. Several academic debates, social consultations and impact studies are available to reveal the key aspects of the changing human-machine ecosystem. To contribute to these studies, hundreds of related academic sources are summarized below regarding AI-driven decisions and valuable AI. In detail, sociocultural filters, a taxonomy of human-machine decisions and perspectives of value-based AI are the focus of this literature review. For better understanding, we propose inviting stakeholders to a planned large-scale survey about next-generation AI that investigates issues going beyond the technology.

Katalin Feher
Budapest Business School University of Applied Sciences, Hungary & Drexel University
3141 Chestnut St, Philadelphia, PA 19104, USA
Fulbright Research Fellow
[email protected]

Asta Zelenkauskaite
Drexel University
3141 Chestnut St, Philadelphia, PA 19104, USA
[email protected]

Author Keywords
AI-driven decisions; valuable AI; responsible AI; trustworthy AI; fairness; outsourced decisions; society; culture

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
CHI 2020 Extended Abstracts, April 25–30, 2020, Honolulu, HI, USA.
© 2020 Copyright is held by the owner/author(s).
ACM ISBN 978-1-4503-6819-3/20/04.
DOI: https://doi.org/10.1145/3334480.XXXXXXX

CCS Concepts
• Human-centered computing~Human-computer interaction (HCI)

Introduction
As digital technology becomes more complex, users will progressively empower artificial intelligence (AI) to make decisions without human verification. In this change of decision making, users outsource certain activities to AI. The question is which decision types are in focus in recent AI research and development, and which concepts orient AI development towards a human-centric approach.

Social science, in its interdisciplinary context, presents a diverse landscape of AI-related research with constant human-centric and moral questions. This outline focuses on that scope along human-machine decision making.

Our goal is to contribute to the CHI 2020 workshop with a summary of the academic research landscape of AI-driven decisions and human-centric AI, and to provide future directions.

Theoretical considerations
According to the inevitable trends, AI-related big data or big social data [1], human-machine augmentation or perceptive and fused intelligence [2], HCI design [3] and software-based narratives [4] are overloaded with a wide selection of choices and options. "New forms of intelligence are making decisions in complex ways that escape the limits of human comprehension" [5]. Therefore, users do not intend to control all online activity or to make every digital decision. Decisions and activities are increasingly outsourced to smart or AI services, considering only certain risks and benefits [6]. This transfer of control to empowered technology recalls the constant dilemma of the good AI society, with a focus on responsibility [7] and trust (European Commission 2018, https://ec.europa.eu).

The next section presents an outline of current academic trends in this field.

AI-driven decisions in sociocultural context
This overview is based on a literature review of the issues of AI covered so far. First of all, the sociocultural context of AI is covered to understand the key subject areas. The number of related academic sources has grown significantly in recent years; therefore, hundreds of research papers have become available to analyze.

Our review draws on three fundamental academic databases: ArXiv, a repository of electronic preprints with a strong focus on technological developments; Scopus, the largest abstract and citation database of peer-reviewed literature; and EbscoHost, a leading provider of research databases. After merging the outputs from the last five years in the context of society and culture, and after data cleaning, more than four hundred academic sources constituted the database (n=432).

Two keywords were by far the most frequent in the studied academic abstracts: "human" and "information". Searching the word pairs of these results, "structure of information" and "human being" were the most frequent. In the context of society and culture, these pairs are assumed to act as filters determining which technologies are supported or declined.

Having these filters, the database was narrowed to AI research & development (AI-RD) topics of human-machine decisions. After data cleaning, almost twenty percent of the abstracts remained with relevant results. According to the findings of a manual content analysis [8], three categories of decision types were revealed:

• AI-driven operation and decisions, such as algorithmic, data-driven, automated and autonomous decisions.
• Human-related decisions with options to be outsourced to the machines, serving professional decision making or policy creation.
• Human decisions to preserve sociocultural values and to train trustworthy technology. Subjective, behavioral, emotionally intelligent, value-loaded or ethical decisions represent this category.

The boundaries between the categories are occasionally blurred, but the taxonomy was clearly drawn from the research topics. The laboratory funnel of Figure 1 represents the complex and slow process of developing AI concepts: the funnel filter works via the currently available structure of information and the human-driven perspectives extracted from recent research studies on AI. The filter passes through only those AI-RD efforts which are adaptable as human-centric technologies. It is critical that decision types be filtered by academic research, as they concern control and power. The question that remains unanswered is how trust contributes to these contexts.

Figure 1. AI-driven decision types in sociocultural context according to academic publications (N=432). [Funnel diagram: sociocultural context → filters "structure of information" and "human being" → decision types: algorithmic or data-driven; automated or autonomous; professional or policy-focused; behavioral, emotionally intelligent or subjective; value-loaded or ethical.]

Valuable AI
The transfer of control and the outsourced decisions presuppose a valuable technology in order to be trustable. One-tenth of the studied abstracts highlighted the importance of responsibility, trust or fairness. Trust and responsibility as values appeared almost equally at the top as critical requirements, while fairness was less represented (see Figure 2).

Figure 2. Expected AI values (N=48).

According to the findings of the manual content analysis [8] of the reviewed studies, three approaches were revealed regarding valuable AI or the good AI society:

• Issues of "responsibility" are moral and ethical questions to preserve the core tenets of humanity, geared to avoid discrimination or inequality. "Responsibility" is primarily mentioned along governmental or organizational policies.
• Topics of "trust" represent the human-machine relationship with a strong focus on human-like robots, digital agents, and social-emotional intelligence. The focus is on shared control or empowered AI.
• The subject of "fairness" is less frequent. Fairness requires transparency and accountability for equality, creativity and respected diversity.

These values, expectations and requirements have become essential for academic research over the past five years. What are some implications of AI values?

Discussion and future direction
There are several implications of these findings. First, regardless of the number of studies on AI and related fields, there is still a pressing need to further understand AI-driven technologies. Second, definitions still focus heavily on values and trust issues that go beyond technological solutions. Those are rather conceptual, definitional, and broad questions of interest. As such, application-based practices are still at a nascent phase, even if the field seems to be saturated with AI-driven research. Third, given the complexity in values, we would like to propose next steps in this research to better understand the state of AI among various stakeholders. As such, future research should conduct a targeted large-scale survey with various stakeholders who could provide insights for the next generation of AI research.

Acknowledgement
Sincere gratitude to the Fulbright Commission for supporting this project.
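The corpus-narrowing step described in the methodology above (counting the most frequent keywords in the merged abstracts, then their most frequent word pairs, then keeping only abstracts matching the derived filter phrases) can be sketched as follows. This is a minimal illustration under assumed inputs, not the authors' actual pipeline: the two sample abstracts, the tokenizer, and the filter phrases are hypothetical stand-ins for the real n=432 corpus.

```python
from collections import Counter

# Hypothetical two-abstract corpus; the study merged n=432 abstracts
# from ArXiv, Scopus and EbscoHost and then cleaned them manually.
abstracts = [
    "The structure of information shapes how a human being decides.",
    "Human trust in automated decisions depends on information quality.",
]

def tokens(text):
    """Lowercased word list with trailing punctuation stripped."""
    return [w.strip(".,;:") for w in text.lower().split()]

# Step 1: frequency of single keywords across all abstracts.
unigrams = Counter(w for a in abstracts for w in tokens(a))

# Step 2: frequency of adjacent word pairs (e.g. "human being").
bigrams = Counter(
    f"{ws[i]} {ws[i + 1]}"
    for a in abstracts
    for ws in [tokens(a)]
    for i in range(len(ws) - 1)
)

# Step 3: narrow the corpus to abstracts containing the filter
# phrases derived from the most frequent keywords and pairs.
filters = ("structure of information", "human being")
relevant = [a for a in abstracts if any(f in a.lower() for f in filters)]
```

On this toy corpus the top unigrams come out as "information" and "human", echoing the paper's reported finding, and only the first abstract survives the phrase filter; the real study additionally applied manual content analysis [8] on the surviving abstracts.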

References

  1. Asta Zelenkauskaite and Eric P. Bucy. 2016. A scholarly divide: Social media, Big Data, and unattainable scholarship. First Monday, 21, 5. https://doi.org/10.5210/fm.v21i5.6358
  2. Yunhe Pan. 2016. Heading toward Artificial Intelligence 2.0. Engineering, 2, 4: 409-413. https://doi.org/10.1016/J.ENG.2016.04.018
  3. Yang Chen. 2019. Exploring Design Guidelines of Using User-Centered Design in Gamification Development: A Delphi Study. INT J HUM-COMPUT INT, 35, 13: 1170-1181. https://doi.org/10.1080/10447318.2018.1514823
  4. Simone Natale. 2018. If software is narrative: Joseph Weizenbaum, artificial intelligence and the biographies of ELIZA. New Media Soc, 21, 3: 712-728. https://doi.org/10.1177/1461444818804980
  5. Alexandre P. Casares. 2018. The brain of the future and the viability of democratic governance: The role of artificial intelligence, cognitive machines, and viable systems. Futures, 103, October: 5-16. https://doi.org/10.1016/j.futures.2018.05.002
  6. Katalin Feher. 2019. Digital identity and online self: footprint strategies. An exploratory and comparative research study. Int. J. Inf. Sci. First Published: October 17. https://doi.org/10.1177/0165551519879702
  7. Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo and Luciano Floridi. 2018. Artificial Intelligence and the 'Good Society': the US, EU, and UK approach. Sci. Eng. Ethics, 24, 2: 505-528. https://doi.org/10.1007/s11948-017-9901-7
  8. Klaus Krippendorff. 2004. Content Analysis: An Introduction to Its Methodology. London: Sage.