

The Ethics of AI in Games


David Melhart, Julian Togelius, Benedikte Mikkelsen, Christoffer Holmgård, Georgios N. Yannakakis
modl.ai
Copenhagen, Denmark
[email protected], [email protected], [email protected], [email protected], [email protected]

arXiv:2305.07392v1 [cs.HC] 12 May 2023

Abstract—Video games are one of the richest and most popular forms of human-computer interaction and, hence, their role is critical for our understanding of human behaviour and affect at a large scale. As artificial intelligence (AI) tools are gradually adopted by the game industry a series of ethical concerns arise. Such concerns, however, have so far not been extensively discussed in a video game context. Motivated by the lack of a comprehensive review on the ethics of AI as applied to games, we survey the current state of the art in this area and discuss ethical considerations of these systems from the holistic perspective of the affective loop. Through the components of this loop, we study the ethical challenges that AI faces in video game development. Elicitation highlights the ethical boundaries of artificially induced emotions; sensing showcases the trade-off between privacy and safe gaming spaces; and detection, as utilised during in-game adaptation, poses challenges to transparency and ownership. This paper calls for an open dialogue and action for the games of today and the virtual spaces of the future. By setting an appropriate framework we aim to protect users and to guide developers towards safer and better experiences for their customers.

Index Terms—artificial intelligence, ethics, video games, affective computing

Figure 1. The Affective Game Loop [9]. The loop relies on the game's parameter space to elicit an emotional response. This response is sensed by an AI model that detects change(s) in the player's emotional state. The output of the affect model can be used to adapt the game content and generate a new set of stimuli for the player.

I. INTRODUCTION
Video games are key to our understanding of human behaviour due to their vast popularity, the multi-modal ways players can interact with them, and the various ways games can express emotion and adapt to a player's style. Even though values such as transparency, trustworthiness and responsibility are core aspects of ethical systems in other domains, video games present unique challenges in terms of ethics. Dark patterns in game design [1], predatory monetisation strategies [2], and the black-box nature of games hinder transparency [3] and raise several ethical concerns. These issues are far-reaching from game design and development [4], [5] to societal impact and research ethics [6].

In this survey paper, we aim to address the ethical considerations of game AI tools and methods through the lens of affective computing. In particular, we focus primarily on player modelling [7] as a field of game research that considers the aggregation, simulation [8], and understanding of gameplay and user experience in games. We, thus, structure the discussion of AI ethics in games around the affective game loop [9] (see Fig. 1). The affective game loop describes the relationships between emotion expression, elicitation, detection, prediction, and subsequent reaction. It presents a complex game system which facilitates these processes and adapts to the user's emotional response. This loop can assist AI systems to generate personalised aspects of games such as agent behaviour, levels and images [10] or guide an orchestration process [11], [12] across creative facets such as text, levels and visuals.

The concept of the affective game loop has been explored thoroughly in academia [13]–[17]. Meanwhile, the adoption of affect-driven adaptation systems in games has been gradual over the last twenty years; indicative yet representative examples include Façade (Procedural Arts, 2005)—see Fig. 2—and Nevermind (Flying Mollusk, 2016)—see Fig. 3.

The paper is structured as follows. After an overview of related literature (Section II), we discuss the ethical dimensions of game AI through the phases of the affective game loop—for a detailed structure of our survey see Table I. In particular, Section III covers aspects of elicitation and how dark patterns are used to manipulate and reduce the players' emotional agency in harmful or exploitative ways; Section IV takes a thorough look at sensing and issues related to the tradeoff between privacy and control, and malicious action in games; Section V discusses affect detection and the complexities of transparency in limited information systems such as games; and finally Section VI reflects on questions of data and model ownership during the affect-driven adaptation phase. The paper ends with a discussion on several other issues related to game AI ethics including AI algorithmic biases, compute fairness, and in-game toxicity and violence.
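Read as a control loop, the affective game loop of Fig. 1 can be summarised in a few lines of code. The sketch below is our own illustration of the loop's structure, not an implementation from the cited works; sense_signals, detect_affect, and adapt_content are hypothetical stand-ins for whatever sensing, modelling, and orchestration components a given game would use.

```python
# Illustrative control-loop view of the affective game loop (Fig. 1).
# The three callables are placeholders for real sensing/modelling/adaptation code.
from typing import Callable

def affective_game_loop(game_state: dict,
                        sense_signals: Callable[[dict], dict],
                        detect_affect: Callable[[dict], dict],
                        adapt_content: Callable[[dict, dict], dict],
                        max_ticks: int = 1000) -> dict:
    for _ in range(max_ticks):
        # Elicitation: the current content and parameters elicit a player response.
        signals = sense_signals(game_state)             # Sensing: capture behaviour, peripherals, etc.
        affect = detect_affect(signals)                 # Detection: model the player's emotional state.
        game_state = adapt_content(game_state, affect)  # Adaptation: new stimuli close the loop.
    return game_state
```

Each step of this loop is where the ethical questions of Sections III to VI attach: what the content is allowed to elicit, what may be sensed, how the detection is disclosed, and who owns the adapted output.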
Table I
ASPECTS OF THE AFFECTIVE GAME LOOP WITH THEIR ASSOCIATED AI VIRTUES (INTRODUCED BY BOSTROM AND YUDKOWSKY [18]); MAJOR PITFALLS; AND POSITIVE INITIATIVES UPHOLDING AI VIRTUES AND BENEFITING END-USERS.

Affective Loop | Section | AI Virtues [18] | Major Pitfalls | Positive Initiatives
Elicitation | III | Responsibility, Auditability | Predatory monetisation, dark design patterns [1], and generating harmful content | Governments tightening regulation around predatory monetisation practices [19]; prolific designers taking a critical stance against dark patterns (e.g. Six to Start CEO Adrian Hon [20])
Sensing | IV | Transparency, Incorruptibility | Lack of transparency in data inferred by AI systems | Ubisoft's ML models sensing for toxic behaviour in For Honor (2017) [21]
Detection | V | Transparency, Auditability, Predictability | AI systems overfitting to skewed populations and perpetuating harmful historical biases | Game studios sharing datasets with the academic community (e.g. EA and Nintendo [22], Ubisoft [23] and Riot Games [24])
Adaptation | VI | Responsibility, Transparency, Incorruptibility | Unclear chain of responsibility and ownership of data and output of human-AI co-creation | Microsoft's Xbox Transparency Reports showing actions taken in content moderation1

Figure 2. In Façade (Procedural Arts, 2005), the player can interact with the game's agents through free-form text. The underlying AI responds to the player input based on its semantic and emotional content.

Figure 3. In Nevermind (Flying Mollusk, 2016), the player explores dream-like horror environments. The game content is adjusted based on the player's emotional state by introducing more dangers as the player's stress level increases.

II. RELATED WORK

This section reviews the literature on ethics research and ethical frameworks in AI and games research.

A. Ethics in Artificial Intelligence

Ethics has been a constant challenge in the field of AI, fuelled by academic and practical interest into the governance of autonomous systems and public anxiety towards data-driven black-box infrastructures [25], [26]. Ethical frameworks have been developed to address these anxieties, generally aiming to provide guidelines for creating beneficial, transparent, and trustworthy applications.

One of the most popular ethics frameworks applied in AI consists of the virtues of Responsibility, Transparency, Auditability, Incorruptibility, and Predictability introduced by Bostrom and Yudkowsky [18]. The responsibility of AI algorithms refers to their clear oversight on the chain of responsibility as the output of the algorithm can be attributed to either individuals or organisations [27]. As we will discuss in Section III and VI, a clear chain of responsibility is often lost between the original data, the inferred models, and large-scale ensemble architectures using third-party AI.

Transparency is one of the more complex ethical virtues and cornerstones of AI Trustworthiness [28], [29]. On the one hand, it can refer to a kind of algorithmic transparency that promotes AI decision-making processes that are explainable [30] and clearly understood by their users [27]. On the other hand, it can refer to a systemic transparency and openness of AI-powered applications; that is, legal access to AI infrastructures themselves [28]. As we will show in Section IV, Section V, and Section VI, the industry has a troubled relationship with transparency, with many companies not disclosing their use of player models to gain further insights from user data. While transparency in relation to direct data-collection is clear, "inferred" information such as computational models and their output is much less protected by legal frameworks [31].

Auditability implies that the correctness of the output of AI systems should be verifiable by a third party. As we discuss in Section III and Section V, auditability and general transparency is a serious blind spot of the video games industry. Although some of this blind spot can be attributed to the inherent opacity of AI systems [32], there is a definite limitation raised by legal opacity restricting access to AI architecture and training data by external auditors [33].

Incorruptibility means that the system is robust against
manipulation. Even though the obfuscation of datasets, algorithms, and their output definitely provides some level of protection, obfuscation is fundamentally clashing with the principle of transparency. Due to their interactivity games are under constant siege by malicious users, however, their corruption is not necessarily an outside force. As Gebru points out, AI bias tends to exacerbate the sociopolitical and socioeconomic disparities in our society as they perpetuate inherent biases of the creators of AI models and our social reality [34]. As we discuss in Section IV, while one of the primary goals of applied AI in the game industry is to increase the robustness of systems against external attacks, there is much less discussion and transparency about the inherent bias in the employed AI systems.

Finally, predictability refers to self-consistent AI outputs and algorithmic behaviour. Predictability is a less prominent yet important aspect of AI ethics, which aims to push applications towards a more reliable and fair implementation [27]. Predictability goes a long way towards eliminating AI bias, which we detail in Section V. The aforementioned virtues are being understood as the cornerstones of AI ethics and solidified [3], [35]—in some shape or form—in the IEEE Ethically Aligned Design Guidelines [36], the Humane AI Ethical Framework [27] and the newly emerging concept of Trustworthy AI [29], [37], which also plays a fundamental role in the new Ethical Guidelines for Artificial Intelligence of the European Union [38].

A recent meta-review by Yu et al. [26] of the AAAI, AAMAS, ECAI and IJCAI conferences mapped out the field of Ethics in AI (EAI) and identified four major areas under this domain. The first category is research focusing on leveraging AI techniques to explore questions of ethics faced by humans. The second and third categories focus on internal decision-making frameworks for AI agents acting either as individual units or collectives. Finally, the last category focuses on ethics in human-computer interactions. For a complete review of all these avenues of research we refer to Yu et al. [26]; here we focus only on the latter category as it is the most relevant to the domain and purposes investigated in this paper. As positioned by Yu et al. [26], [39] and echoed by the larger research [3], [35], [40] and policy making [36], [37] communities, ethical HCI systems should conserve the autonomy of humans, be beneficial to the user, and minimise underlying risks. Summarising this sentiment in relation to affective computing, the IEEE Ethically Aligned Design Guidelines explicitly state:

"To ensure that intelligent technical systems will be used to help humanity to the greatest extent possible in all contexts, autonomous and intelligent systems that participate in or facilitate human society should not cause harm by either amplifying or dampening human emotional experience" [36, page 6].

Beyond the scope of emotional autonomy, however, there is also the question of transparency and autonomy in human-AI interaction in general. Rovatsos raises the issue of the general distrust towards machines and whether it is ethical for an AI system to conceal itself [41]. Although it can be easy to consider total transparency as the most ethical, the issue is more complex. A new ethical conundrum emerges when we consider that in some human-computer interactions, a lack of transparency can improve the efficacy of the system [42]. If this is true, wouldn't the performance drop—that was induced by increased transparency—hurt the user in the long run? Would in this situation total transparency take away from the user's autonomy? On the other hand, could an opaque system even present fair choices to the user? These questions—raised by Rovatsos [41]—presuppose a benevolent system. However, AI is not always designed to be benevolent. Perhaps the most striking example of this is lethal autonomous weapon systems, which are designed to kill humans without considerable oversight [43]. Even though real-life killing robots might seem to be removed from the domain of games, pushing a military agenda and aiding both recruitment and research has never been far from video games [6]. And as we discuss at many points in this paper neither is emotional exploitation nor psychological manipulation.

AI researchers from the fields of computer science, engineering, robotics, medicine, games, and more are calling for stronger regulations on exploitative and harmful AI and a push for benevolent AI applications [6], [43], [44]. Their fears are not unfounded as current research and industrial application of AI are more than capable of exploiting and harming humans en masse, from social engineering [45], through psychological manipulation [2] and exacerbating existing socioeconomic disparities to physical harm [6], [43].

B. AI Ethics in Game Research

AI ethics in game research is a fairly under-researched area. The handful of papers existent in the literature focus mainly on player modelling [3], [46], ethical development practices [4], [5] and ethical practices in research [6]. In contrast, more work has been carried out on games as ethical systems [47] and the outcome of responsibility of game design [1].

In a review of the field of player modelling, Mikkelsen et al. provided an overview of emerging ethical issues [3]. In their analysis relying on the framework laid out by [18], Mikkelsen et al. identified a number of areas of concern from monetisation through content management to dynamic adaptation and privacy. Most issues emerging in these areas are connected to the lack of transparency and auditability of computer models, especially in industrial settings. One solution to the lack of transparency and interpretability is offered by the field of explainable AI [48], [49]. A possible avenue for adapting explainable AI is through open player models [46], which are based on Open Learner Models applied to games [50]. Open player models incorporate an explanatory module into a given AI application which gives clear feedback to the user on the behaviour and predictions of the model. As the module providing transparency is removed from the main pipeline of the algorithm, in principle open player models can be cost-effective to implement in existing systems as well. Nevertheless, although explainable AI principles can help build more transparent systems in theory, the practical application of such frameworks appears to be challenging. Black-box algorithms such as deep learning neural networks are very popular in data science due to their performance, and they are notoriously hard to explain and interpret—despite advances [49]. On the other hand, effective white-box systems are still an open challenge to the field [48] and might not even be sustainable or desirable from a business perspective as the games industry is known to treat datasets, data-processing pipelines, and AI models as strictly-kept trade secrets.
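To make the open player model idea discussed above more concrete, the sketch below wraps a hypothetical affect classifier with an explanation step that reports the features driving a particular prediction in plain language. This is our own minimal illustration rather than a system from the cited works; the feature names, the linear surrogate model, and the "frustrated" label are assumptions made purely for the example.

```python
# Illustrative sketch of an "open player model": a transparent surrogate model
# plus a per-decision explanation surfaced to the player. All names are placeholders.
import math
from dataclasses import dataclass

@dataclass
class OpenPrediction:
    label: str               # e.g. "frustrated"
    confidence: float        # 0..1
    explanation: list[str]   # plain-language reasons shown to the player

class OpenPlayerModel:
    """A linear surrogate whose every prediction comes with an explanation."""

    def __init__(self, weights: dict[str, float], threshold: float = 0.5):
        self.weights = weights
        self.threshold = threshold

    def predict(self, features: dict[str, float]) -> OpenPrediction:
        contributions = {name: self.weights.get(name, 0.0) * value
                         for name, value in features.items()}
        score = sum(contributions.values())
        confidence = 1.0 / (1.0 + math.exp(-score))   # logistic squash to 0..1
        top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:3]
        reasons = [f"{name} contributed {value:+.2f} to this prediction"
                   for name, value in top]
        label = "frustrated" if confidence >= self.threshold else "not frustrated"
        return OpenPrediction(label, round(confidence, 3), reasons)

# Example usage with made-up behavioural features.
model = OpenPlayerModel({"deaths_per_minute": 1.2, "menu_idle_seconds": 0.4,
                         "rage_quit_history": 0.9})
print(model.predict({"deaths_per_minute": 1.5, "menu_idle_seconds": 0.2,
                     "rage_quit_history": 1.0}))
```

The explanatory layer sits beside, not inside, the production model, which is what makes this pattern comparatively cheap to retrofit onto existing pipelines.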
Beyond the concerns of transparency there is an alarming issue of intentionally harmful usage of AI models that exploit addiction and irresponsible spending habits [3], [5]. Despite a growing concern against aggressive and deceptive monetisation techniques—often aimed at children—there is still a lack of legal and practical frameworks that are capable of addressing such issues [2]. King et al. [2], for instance, examined 13 different patents connected to video game monetisation and found that almost all of them relied on the exploitation of the players' data to optimise the delivery and timing of ads and purchase offers. They note that with the expansion of AI methods it is expected that such systems will become more sophisticated and ubiquitous in the future, making the issue of ethics in player modelling more pressing than ever.

C. State of AI Ethics in Practice

Although ethical frameworks have been developed to provide guidance, the lack of specificity often leads to a small scale of adoption. If we look at the core issue of transparency, which is also often required for the assessment of other components of AI trustworthiness, we find that both affective computing and games applications are lagging behind [51]. This is true despite the issue of transparency being propped up by more robust legal frameworks than many other components of ethical AI. In the European Union, the General Data Protection Regulation (GDPR) [52] is meant to give a legal framework and transparency to data handling (also in an AI context). However, a review of serious games—games developed for healthcare, educational, hiring, or other non-entertainment purposes—found that two years after the adoption of GDPR, it has had little to no effect on the research community [53]. Similarly, in a recent exploration of affective computing through the lens of GDPR laws Hauselmann found that the field faces serious issues in terms of transparency, responsibility, and predictability [54]. Hauselmann highlights the delicate nature of emotional data as something that is not necessarily protected under current legal frameworks but extremely personal to the users. However, the question of emotional data is further complicated by the fact that while user behaviour is relatively easy to observe and record, emotional data is often extracted through means of peripheral signals and machine learning. In this sense affective data is inferred and not observed [31]; as a result, the majority of affective computing applications appear to be inherently opaque. As there should be a right to an accurate portrayal of personal data, inaccurate predictors might infringe on the personal rights of users. This is hard to prove, however, as these models are often difficult to audit. This phenomenon is amplified because there are fewer practical concerns for inaccurate models up to a certain degree. Often even if a user is profiled inaccurately, an imperfect prediction can still be used to great effect in an adaptive system [55]. Moreover, commercial applications often safeguard their models as trade secrets or cannot handle the constraints and overhead of implementing ethical safeguards on a fundamental level.

The above examples focus predominantly on the research community; in the games industry, the problem of transparency can be even more prevalent. More often than not, users are unaware of the data collected and inferred by algorithms. As Kröger points out, data collection in games is generally made invisible to the players as it is "woven into a game's environment" [51]. Given this opaqueness and a blasé attitude of users towards—what they perceive as—anonymous play, it is questionable to which extent regulations such as the aforementioned GDPR could reasonably be upheld. A thorough review of five companies by Vakkuri et al. [35] revealed that even though developers might consider ethics as an important question, they have little to no tools to address it in a systematic manner. Mitigation of ethical risks in AI systems thus becomes low-priority and generally addressed in a post- and ad-hoc manner, if at all. Reviews of game industry applications that span from player modelling [3], [46], to data-driven game development [5], and to procedural content generation [4] reveal a similar pattern. Because the games industry is a fast-moving field with growing pressure on producing more and more content with the advent of live-service games, the application of ethical frameworks to AI in games—including but not limited to player modelling—remains an after-thought without clear ways to integrate the mitigation of ethical problems into existing industry pipelines.

III. ELICITATION – BOUNDARIES OF ARTIFICIALLY INDUCED EMOTIONS

We start examining the affective game loop from the Elicitation phase. Doing so we are faced with the ethical boundaries of artificially induced emotions. Although inherently personal and subjective, emotions do not enjoy legal protection to the same extent as other personal data [54]. The core issues we encounter in this area are ownership and autonomy over one's own emotions. The so-called dark design patterns [1] have been used in games to compel players' behaviour through affective manipulation and with the advent of big data analysis and machine learning, there is a potential for a new wave of dark design patterns [2]. As games are often marketed towards children, the ethical side of the emotion elicitation in games, their use and their goal have to be considered. Importantly, the challenge of dark design patterns is core to game design principles but not necessarily to the AI algorithm associated with a game. One should thus take a dive into the problematic ethical aspects of the game design prior to examining the role of AI within a particular game.

While a few years ago loot boxes—i.e. virtual items that can be redeemed for other random items that provide some value to the players [56]—made waves [57]–[59], the new monetisation technique sweeping across the industry is the battle pass or season pass system. Unlike previous iterations of premium subscriptions, in-game currencies, downloadable content packs, loot boxes, and gated progression, the battle pass system does not promise any immediate tangible reward to players. Instead, players buy into access to time-limited content updates, which they still have to unlock in-game within a given time frame [60]. This type of monetisation reformulates the value proposition of online games and shifts the focus from commodities to services [61]. While the loot boxes of yesteryear were designed to operate on the same psychological buttons as gambling [57], [58], emerging battle pass systems build more on a feeling of missing out [60] and societal pressure [62]. In many modern online games—such as Fortnite (Epic Games, 2017), Apex Legends (Respawn Entertainment, 2019), Fall Guys (Mediatonic, 2020) and Overwatch 2 (Blizzard Entertainment, 2022)—these monetisation practices often coalesce into a virtual storefront, where in-game currencies can be bought for real money, then spent on single purchase upgrades and battle passes alike.

Where player modelling techniques can make loot boxes, battle passes, and other similar monetisation techniques more concerning is the ability to target people more prone to spending. Predictive models have already been in place for years in the industry for the estimation of churn [63] to keep track of players lost and their velocity through a game. Similar models, however, can also be used to find and target potential excessive spenders—often called "whales" in the industry [64]. Affective computing models estimating the users' emotional state can be used for the targeted and timed delivery of ads and promotional offers to maximise user spending. Affective modelling methods, however, can also help build more responsible systems that detect a risk of problematic behaviour. Out of the psychosocial aspects of addiction (salience, mood modification, tolerance, withdrawal symptoms, relapse, conflict) [65], affective computing methods could be especially useful to pick up on mood modification and tolerance (as a diminishing affective response) at the very least and flag users as at-risk consumers. In the same way machine learning models based on affective and behavioural feedback [66] are deployed to target monetisation and retain consumers, they can also be used to deploy "precision psychiatry" [67]. Most notably, EA was accused of leveraging their patented dynamic difficulty adjustment system to push players to spend more money on loot boxes [68]. Even though the case was dismissed and EA swore to uphold "fair play" in their online games [69], this indicative example goes to show how algorithms can be used to elicit emotions that influence players to act against their own interest.
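As a concrete illustration of the responsible use described above, the sketch below flags an account whose affective response to purchases is diminishing while spending keeps rising, a rough proxy for the tolerance aspect of addiction. This is our own minimal example under assumed inputs (a per-purchase arousal estimate and the amount spent, ordered in time); it is not a deployed industry system, and the thresholds are arbitrary.

```python
# Minimal sketch: flag potential at-risk spenders from a diminishing affective
# response (tolerance). Inputs are hypothetical: one arousal estimate in [0, 1]
# per purchase event, plus the amount spent, ordered in time.
from statistics import mean

def slope(values: list[float]) -> float:
    """Least-squares slope of a sequence against its index (a simple trend)."""
    n = len(values)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(values)
    denom = sum((x - x_bar) ** 2 for x in xs) or 1.0
    return sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values)) / denom

def at_risk(arousal_per_purchase: list[float],
            spend_per_purchase: list[float],
            min_events: int = 10) -> bool:
    """True when emotional response trends down while spending trends up."""
    if len(arousal_per_purchase) < min_events:
        return False  # not enough evidence to flag anyone
    return slope(arousal_per_purchase) < -0.01 and slope(spend_per_purchase) > 0.0

# Example: response to purchases flattens while purchase size keeps growing.
arousal = [0.9, 0.85, 0.8, 0.7, 0.65, 0.6, 0.5, 0.45, 0.4, 0.35]
spend = [2, 2, 5, 5, 10, 10, 20, 20, 40, 60]
print(at_risk(arousal, spend))  # -> True: route to a duty-of-care process
```

A flag like this should feed a duty-of-care process rather than a marketing pipeline; the very same signal could just as easily drive targeted offers, which is exactly the dual-use risk this section describes.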
While affective computing systems in general often obscure how they infer information and predictions [54], the black-box nature of games is even more apparent. Games are often viewed as "smoke and mirrors" when it comes to the dichotomy between the game's parameter space and the conveyed aesthetic. Game designers are invested in hiding what lies within the game rules to facilitate a suspension of disbelief and thus make the experience more impactful and believable for players [70]. Often game designers are relying on transgressive aesthetics [71] to create experiences that have a larger emotional weight. Although the area is still largely unexplored, some games like Flying Mollusk's Nevermind (2016) integrated computer models to guide their elicitation. In Nevermind the game's transgressive aesthetics are amplified when the player is under stress and subdued when they calm down. Intentionally transgressive content—even when controlled by autonomous systems—can be tuned and managed more consciously; unintentionally harmful or offensive content, however, poses a much more complex problem.

Of course some more obvious errors are easier to catch with simple pruning, but computational models can also encode biases—like gender biases or offensive stereotypes—which in turn result in harmful content [72], [73] unintended by the designer. A good encapsulation of the complicated nature of using AI generators, content moderation, and privacy is AI Dungeon (Latitude, 2019), a text adventure game based on the GPT-2 [74] and GPT-3 [75] language models. Latitude became the focus of a controversy when they decided to take action against offensive content in AI Dungeon—mainly involving stories containing non-consensual sexual content and child pornography [76]. Although much of the questionable content banned by Latitude was generated deliberately by their users, the model was also known to generate sexually explicit content seemingly unprompted, including "writing children into sexual scenarios" [76]. Users raised concerns about the decision of Latitude to address the issue with strict moderation, automatic flagging of problematic materials, and monitoring the content of users' privately generated stories. On one hand, this decision pushed all the responsibility of the content to the users even though it was co-created with Latitude's algorithm; on the other hand, the human moderation of private content raised privacy concerns. The controversy showed that despite the best intentions of Latitude, a lack of transparency and a clean line of responsibility [18] leads to detrimental outcomes for both the company and the end-users. The swift shift in how moderation was done on the platform made the already black-box system even harder to navigate for players, which in turn reduced both the transparency and the users' trust in the system. Many players felt unfairly flagged for content that either fell within community guidelines or was generated by the model virtually unprompted [76]. As Latitude was not the developer of the underlying foundation language model [77], it was unclear how the system can be effectively audited, and since there was no established responsibility for the system, all the blame fell to users who interacted with the model.

As we can see, elicitation through AI-assisted systems has potentially harmful effects on the end user. On the one hand, video game companies can rely on affective computing models to fine-tune and personalise targeted monetisation strategies. This carries the danger of intentionally or unintentionally facilitating addiction or pressuring users on an emotional or social basis. On the other hand, generative systems can be unreliable and surprising and generate unwanted content without the designer's knowledge or the user's consent. To prevent subsequent issues, generative systems in games should demarcate a chain of responsibility for the model's output and offer tools to players to mitigate unwanted content. Even though foundation models offer a robust solution for generating the content, often the complexity, black-box nature, and ownership of the models limit the auditability of these algorithms [77].

While leveraging predatory monetisation strategies has been a prevalent pattern—especially in the mobile games industry—not all studios have followed suit. A good counter-example is Six to Start, the developer of the immensely popular Zombies, Run! (Six to Start, 2012) mobile exergame. The studio has not only forgone the usual dark design patterns, showing that games can be successful without putting psychological pressure on their players, but they have also been making a firm public stance against these practices [20].

IV. SENSING – PRIVACY AND CONTROL

Following elicitation, the next step of the affective loop is sensing: the capture and processing of the manifested emotions. The central issue of sensing is that of privacy—i.e. when, how, and what kind of data is being captured. While user privacy might define a clear issue in other domains, games present a special case. Particularly due to the interactivity of the medium, a certain amount of dynamic control is needed to maintain oversight over toxic and malicious actors in a game's ecosystem and prevent unintended or negative effects on the players' mental well-being.

It has been shown that users of affective computing systems prefer clear notifications and potential control over the output of sensors [78]. This need is also supported by the principle of autonomy when it comes to ethical applications [79], and more concretely by the right of accurate portrayal [54]. Unfortunately, in multiple instances affective computing applications fail to address these needs as the data and data pipeline is kept secret from the user [54]; game companies are no exception to this. In a recent comprehensive study on data types collected by the video game industry, Kröger et al. [80] reveal that companies are collecting and inferring a wide range of data often without the user's knowledge.

Most of the data collection focuses on in-game behavioural metrics which can be used to predict player skill, preferences, or content consumption and spending habits. The inference of spending habits and personal identifiers—such as location, gender, financial status, etc.—can clearly fuel harmful policies. Even game mechanics, which on the surface are there to benefit the player—such as matchmaking or dynamic difficulty adjustment—have led to concerns in the past. As mentioned in Section III, one of the most recent examples of dynamic difficulty adjustment working against the users was the case of EA Games. In a lawsuit, EA was accused of using difficulty adjustment to influence player spending on loot boxes [68]. The lawsuit itself was later dismissed [69]; however, this case still highlights the public distrust towards systems that collect behavioural data. This is not to say that games rely solely on such data. As Kröger et al. point out, game companies are adapting sensor data in their datasets at an increasing rate [80]. Eye-tracking, voice data, GPS information, and peripheral signals from smart accessories can all be used to enrich game datasets and potentially reveal a wide variety of personal information about the user. In today's interconnected world, it is becoming exponentially easy to triangulate and infer the identity of players—using their in-game data, username, GPS location, preferences, and distinct play patterns they use to interact with the game [81], [82]—to the point where it is questionable whether game data can be truly anonymised at all [80].

Figure 4. Traditional flagging methods for toxic behaviour in For Honor (Ubisoft, 2017). Such methods often depend on user reporting which, in turn, may allow many toxic events to remain unnoticed and stay unreported.

Complex player profiles can be built on inferred gender, age, socioeconomic status, and interests, which can fuel harmful models exacerbating problem behaviour such as gambling and excessive spending. Beyond static high-level profiles it is also possible to infer the emotional state of users through keystroke patterns [83], voice [84], or in-game behaviour [14], [85]. Although the secrecy of the industry is a major concern, there is a tradeoff in terms of privacy and autonomy that is afforded to the players. Similarly to how behavioural analytics is used to infer a rich player profile [86], emotional data can also be used to model and subsequently enhance the play experience [9], [87]. These types of tradeoffs were identified by Ishowo-Oloko et al. as the transparency–efficiency tradeoff of human-machine cooperation [42]. While not always applicable to human-computer interaction, there are instances where the inherent bias against AI [41] can hinder a human-computer system if total transparency is maintained.

Models incorporating emotional data can also be used to enhance other game systems involved in the moderation of user content and interaction. Most recently, Canossa et al. presented a robust method to flag the occurrence and severity of toxic and emotionally abusive behaviour in For Honor (Ubisoft, 2017) [21]. Although the input features of these models are mainly behavioural in nature, the inferred actions are emotional. While community guidelines are generally presented clearly, human moderation can become cumbersome with the breadth of data increasing with new players. Additionally, traditional reporting systems rely on user input, which could come with its own limitations including unreported events and subversion of the system by malicious users (see Fig. 4). As toxic players are often trying to find new ways to circumvent regulations, in the future automatic flagging systems—that incorporate emotional data as part of their input or output features—can make games a safer place for players. The aforementioned study is a good indicative example of how affective computing applications can be used to enhance the predictability of the
system towards its users and how we can use AI to both deliver clear value to players and improve the game experience.
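To ground the discussion of automatic flagging, the following sketch shows one defensible shape for such a system: a model score is combined with traditional user reports, high-severity cases are routed to human review rather than automated punishment, and every flag is logged so the decision can be audited and explained later. This is our own illustrative example, not the For Honor pipeline from [21]; the thresholds, field names, and actions are assumptions.

```python
# Illustrative flagging pipeline: score -> route to human review -> audit log.
# Not the system described in [21]; names and thresholds are placeholders.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FlagDecision:
    player_id: str
    severity: float                # model-estimated severity in [0, 1]
    reports: int                   # traditional user reports received
    action: str                    # "none", "human_review", "mute_pending_review"
    rationale: list[str] = field(default_factory=list)
    timestamp: str = ""

audit_log: list[FlagDecision] = []   # append-only trail for later auditing

def flag_toxicity(player_id: str, severity: float, reports: int) -> FlagDecision:
    decision = FlagDecision(player_id, severity, reports, action="none",
                            timestamp=datetime.now(timezone.utc).isoformat())
    if severity >= 0.9:
        # Highest-severity cases are muted temporarily but still go to a person:
        # the model prioritises, a human decides on sanctions.
        decision.action = "mute_pending_review"
        decision.rationale.append("model severity >= 0.9")
    elif severity >= 0.6 or reports >= 3:
        decision.action = "human_review"
        decision.rationale.append("model severity >= 0.6 or 3+ user reports")
    audit_log.append(decision)     # every decision remains traceable after the fact
    return decision

print(flag_toxicity("player-42", severity=0.72, reports=1).action)  # human_review
```

The key design choice in this shape of system is that the model only prioritises cases; sanctioning remains a human decision, which keeps the chain of responsibility intact.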
When it comes to privacy and transparency the state of practice in the game industry appears to be severe. The lack of consent, autonomy, or in many cases just knowledge about the collected data and its usage is a serious and prevalent ethical issue. In many instances this ethical failing can be traced back to lax regulations, where legal requirements for consent and compliance are technically fulfilled but do not facilitate transparency in a tangible way. However, despite recent legal efforts to institutionalise privacy requirements such as the GDPR of the EU, many industry players are still falling behind, unable or unwilling to address the most pressing issues [84]. On the other hand, not all data is collected for the purpose of exploiting and manipulating users. Often a wide range of data—otherwise considered "non-essential"—can enable the automatic flagging and moderation of toxic and malicious users. Some players aim to actively harm others in- and outside of the game through emotional abuse. Affect-driven applications of AI can offer solutions for identifying both the occurrence and the impact of toxic behaviour, making online games a safer, more reliable and predictable space for everyone involved. For this to happen, however, clear rules have to be put in place and users must be notified of what kind of data is being collected for what purpose. Even though the detection of toxic behaviour and negative gameplay outcomes might be beneficial for the players, it doesn't mean that the principle of transparency cannot be upheld.

V. AFFECT DETECTION – TRANSPARENCY IN LIMITED INFORMATION SYSTEMS

The third core step of the affective loop is Affect Detection which refers to the computational processing and prediction of certain aspects of affect [88]. One of the major ethical challenges of deploying affective models, in general, is their transparency towards users [3], [54]. More often than not companies are not disclosing that user data is modelled, let alone informing users about their system's predictions. Similarly to privacy challenges discussed in Section IV, the issue of transparency is of ambiguous nature too.

We have already touched upon the issue of games as limited information systems in Section III. Games often withhold information to create uncertainty, decrease cognitive load, and construct challenges for players [89]–[91]. Moreover, games limit the up-front information that players have access to so that they facilitate learning [90] and an experience of flow [92]. Abuhamdeh et al. have shown that greater outcome uncertainty does indeed lead to greater satisfaction when the player is succeeding [91]. They also found that as perceived competence rises, suspense and uncertainty become a major facilitator of intrinsic motivation in video games; for an overview of intrinsic motivation in games see [93]. Because uncertainty is a fundamental element of game design, it is very difficult to mitigate issues of transparency. While explainable AI frameworks generally advocate for open communication towards the user about model predictions [46], [48], when it comes to in-game adjustments this can be detrimental to the player experience [42]. It is important, however, that this tradeoff only applies to systems that use behaviour and emotion prediction to adjust in-game content, where the models only reach as far as the "magic circle" of the game experience [94], [95].

It is important to note that games do not exist in a vacuum and the experience is far from being a closed bubble. In systems where affect detection is used to inform monetisation strategies, users should be informed in a clear and comprehensive way about the output and the goal of the algorithm to preserve the system's transparency and predictability. Of course, as cited above, some game studios have a bad track record keeping users informed about their practices behind the curtains [84]. Although some of the secrecy can be chalked up to attempts to address the incorruptibility of the system by keeping it obscure, more often than not it seems video game companies rather want to protect their resources (e.g. in terms of trade secrets, trained models, and datasets).

Beyond the questions of responsibility, transparency, and auditability when it comes to detecting emotional and behavioural outcomes, game developers must also face the consequences of the inherent bias present in AI systems. For example, even though it might be responsible and beneficial to filter players based on certain emotional or behavioural states, models can also propagate unseen harmful biases. Even though transparency becomes a critical duty of developers in such instances, other ethical standards must be drawn as well to preserve the integrity of these applications and reconcile with the experience being provided to the players.

One of the most common causes attributed to algorithmic bias is a faulty dataset [96]–[99]. The more apparent issues in this regard are a skewed population, lack of control for diversity, and the non-critical capture of historical biases [96], [97]. The latter of these issues makes it especially hard to mitigate algorithmic biases. On one hand, "clean", unbiased data might either not be available or impossible to attain. On the other hand, systems relying on historical biases can propagate patterns that seem true to a casual observer and are only revealed as biased under a more critical analysis [99], [100]. The issue becomes more severe due to the lack of transparency and auditability in the field. It is often next to impossible to recognise a biased dataset unless the algorithm breaches the trust of the users in a serious and very apparent way. The most prolific of these instances are tied to sexist and racist outcomes [34], [98]. In one instance Google Photos' algorithm was mislabelling pictures featuring black people as "gorillas" [101]. Of course, Google's algorithm was not created to be racist. The issue instead stems from the lack of diversity in the dataset that was used to train the model. As the model was trained on a dataset featuring predominantly white people, it learned to associate "whiteness" with "people". This error reveals a fundamental issue with a less-than-critical approach to data. Historic and institutionalised injustice defines our social reality. As injustice is ingrained at a fundamental level in our society, this type of bias is very hard to eliminate. The responsibility of the curators of large datasets and the developers of AI models is to apply critical forethought to processing and modelling to reduce the impact of these biases.

While models used in the video game industry can arguably
fail in similar contexts, there are more potential pitfalls unique to games. Similarly to other skewed datasets, game data can also be skewed towards atypical players, deriving an unfair distribution of the population as a whole. One example of this would be the overabundance of data from players with large amounts of playtime. Because they might not represent the population, an algorithm that focuses on these players would likely produce sub-optimal content for the remainder of the player base. This can especially be true in the initial phases of development as initial testers tend to be young and relatively good players. Another example of in-game bias would be discrimination towards atypical behaviour or emotional response. Systems monitoring toxic behaviour and bots are often based on high-level aggregated data on in-game actions and chat interactions [21]. If these algorithms do not account for diversity and expect a behavioural and emotional response based on a Western, neuro-typical audience, they might flag good-faith players who are not conforming to certain behaviour or communication standards. On the other end of the spectrum, outliers can outperform the expectations of the model and are labelled as cheaters as happened to Julias Jackson, an autistic boy on the Xbox Live ecosystem [102]. This error reveals the fragility of many automated systems when they have to apply their predictions outside of their trained boundaries. As we cannot be sure where those boundaries are in the wild [3], the challenge of organising and labelling our training data becomes a core issue. There is only so much we can anticipate in terms of future diversity requirements and even if we do, we might lack the tools to label our data correctly. Although it is relatively easy to rely on user reporting—especially when it comes to moderation—this also injects a large amount of bias into our systems both by malicious and good-faith users. Unfortunately, user reporting can often exacerbate other underlying biases, such as men accusing women of cheating for outperforming their peers [103]. Even though these AI tools are in most cases used as flagging systems and not automated banning systems, the story of Julias Jackson shows that the infrastructure is far from being perfect, even with human oversight.
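One practical response to the population-skew problem described above is to audit the training set against a reference distribution of the live player base before training, and to down-weight over-represented cohorts. The snippet below is a minimal sketch of such a check under assumed playtime buckets and population shares; it is not taken from any of the cited studies.

```python
# Minimal audit sketch: compare the playtime mix of a training sample with the
# live population and derive per-cohort sample weights. Buckets are assumptions.
from collections import Counter

def cohort(playtime_hours: float) -> str:
    if playtime_hours < 10:
        return "casual"
    if playtime_hours < 100:
        return "regular"
    return "heavy"

def audit_and_weight(train_playtimes: list[float],
                     population_share: dict[str, float]) -> dict[str, float]:
    counts = Counter(cohort(h) for h in train_playtimes)
    total = sum(counts.values())
    weights = {}
    for name, target in population_share.items():
        observed = counts.get(name, 0) / total if total else 0.0
        # Weight > 1 boosts under-represented cohorts, < 1 shrinks dominant ones.
        weights[name] = target / observed if observed > 0 else 0.0
        print(f"{name}: {observed:.0%} of training data vs {target:.0%} of players")
    return weights

# Example: heavy players dominate the sample even though most players are casual.
sample = [250, 300, 120, 80, 5, 400, 150, 2, 90, 500]
print(audit_and_weight(sample, {"casual": 0.6, "regular": 0.3, "heavy": 0.1}))
```

Reweighting only addresses representation; it does not repair labels that already encode historical bias, which is why the section argues for critical curation rather than purely technical fixes.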
Nevertheless, many industry leaders are well-invested in creating fulfilling experiences that promote player well-being. As part of these initiatives, companies such as EA and Nintendo [22], Ubisoft [23] and Riot Games [24] often share data with academics and enable studies into fostering well-being, combating toxicity, and evaluating content moderation. Beyond their main purpose, these studies also expose some of the data that is collected and the information that is detected by these companies, lending transparency and auditability to the industry. These types of cooperation provide a good example of how data transparency can bring value to industry players.

VI. ADAPTIVE SYSTEMS – OWNERSHIP IN THE AFFECTIVE LOOP

Adaptation and affect expression are the steps that close the affective loop. Within games, an affect-based interaction system uses the parameterised output of its affect detection module to adjust the game's parameters to the user experience. The type of adaptation can occur at a macro level through an orchestrator that governs content in the game or at the micro level through the behaviour and affect expression of individual game agents. The goal of the game adaptation might differ depending on the given use case. It might serve to maintain, amplify or change the user's experience, but regardless of the interaction task, the adaptation module will yield a new set of emotional stimuli, thereby closing the affective loop (see Fig. 1).

When we look at an adaptive system described by the affective game loop as a whole, the question of ownership arises. While AI models can incorporate data from a large number of players, it is unclear how much ownership these players have over these affective models. This issue is even more complex in closed ecosystems that can facilitate co-creation with AI designers. As discussed earlier in Section II, affective computing applications face challenges addressing these questions under the current regulatory frameworks [51], [53], [54]. Even though users should have (at the very least) rights to have control over their own data, to portray themselves accurately, to be forgotten [104], and to be self-determined, in reality, information inferred from big data is often exempt from the same legal protection afforded to first-hand personal data [31]. To handle the data used to train models and the inferences made by these models on an individual basis, Wachter and Mittelstadt propose the concepts of "high-risk inferences" and the "right to reasonable inferences" [31]. The former concept refers to inferences made from big data through algorithmic means that are either harmful to the privacy of the user or have low verifiability in what the authors call "important decisions"—with loans and employment brought as examples [31]. The latter concept would enshrine a right that could force data controllers to provide certain information about their inferences:

"This disclosure would address (1) why certain data form a normatively acceptable basis from which to draw inferences; (2) why these inferences are relevant and normatively acceptable for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable." [31, page 8].

While this proposal—if it were to go into effect—could help provide more robust protection to users, unfortunately, when it comes to the models themselves matters get complicated. At the moment of writing, IP protection virtually takes precedence over the individual's autonomy over their personal data when it comes to the trained models themselves. As the models are considered only "inferred from data" they are further removed from the users whose data is used to create the models [31]. One key challenge with the existing legal framework is that the training data and the model are not as well separated or modular as the guidelines suggest. In many cases, it is possible to reverse engineer the model and extract information about the training set including sensitive personal information about the original subjects [105]. There are algorithmic solutions to address this problem, however. Ongoing research
in the field of machine unlearning aims to offer methods that attempt to remove knowledge from a trained model as if the given datapoint was never part of the training set [106]–[109]. Although early approaches focused on very specific applications—such as decremental learning in Support-Vector Machines [110]—recent methods seem to be able to generalise well over different architectures [106], [108], [109]. However, for a practical application of machine unlearning, the owner of the model has to retain the raw training data in most cases. Not all methods require this from a technical perspective [109], but the removal of the datapoint has to be verified to preserve the predictability of the system. Although the implementation of unlearning would most certainly pose a computational and organisational overhead, it is still more cost- and resource-efficient than retraining the models from scratch. Even though machine unlearning has not been adopted widely yet, contemporary research results show a promising path ahead for mitigating some of the privacy issues in small-scale architectures.
scale architectures. been leaked through ChatGPT [112]. While users can opt-out
Ethical questions in small-scale models can potentially be of this data collection, it is unclear if already submitted data
addressed through the aforementioned methods, however, far can be removed from the trained model. The ownership issues
more complex ethical challenges are posed by large-scale are further complicated by the secretiveness of the industry
world models or foundation models [77]. These are large- stakeholders. Industry players are often invested in creating
scale pre-trained models built using hundreds of billions of legal opacity around their systems through restrictive licences
parameters and massive-scale datasets often scraped from the and digital rights management tools to restrict the transparency
internet indiscriminately. Foundation models have the potential and usage of the software. Although this type of opacity does
to provide basic knowledge in a domain or generate content not stem from the AI architecture itself, it prevents public
out of the box. While many applications like AI Dungeon access to the inner workings of such systems and limits the
utilize these foundation models, the lines of responsibilities are overview of the AI decision-making process [32].
blurred as a given company has no access to the source code
or the original data of these models. As the source of the data Even though the ownership over the models themselves is
is often scraped from the internet, the ethics of constructing a central question, we must not forget about the ownership
such a dataset is also highly questionable. As users are not and responsibility over the output of said models either. Who
notified they have no way to revoke their participation or owns the results of a human-AI co-creation process? Looking
retaliate against their creations used for training these models. at contemporary legal frameworks, it is hard to say. Of course,
In addition, the underlying data is often discarded or kept judgement can be passed based on the circumstances and
secret, removing even the moral right to the output of these the particularities on a case-by-case basis, but there is no
generators once the model is constructed. Moreover, as these apparent clear line [113] and in most cases, the question
foundation models can only be constructed using an immense is sidestepped entirely by the end-user licence agreement of
amount of resources, large industry players can essentially specific AI-assisted tools. There is no comprehensive frame-
monopolise the market. This trend can already be seen in work for either professional creative tools or interactive media
language-based models, where the current dominance of the meant for entertainment. The matter is further complicated
closed-source GPT-3 and—increasingly more popular—GPT-4 because—even if we focus just on games—it can be hard
[111] models imply that new applications have to subscribe to to make a clear demarcation between a creative tool and
the black-box rules of that system. Ownership over the input, curated entertainment. A recent example of this conundrum
output and the models themselves is not a trivial problem. is Media Molecule’s Dreams (Sony Interactive Entertainment,
The models are owned by their respective companies, and 2020), a “game about making games”—see Fig. 5. Although
even though users generally retain rights to their input and Dreams presents itself very much like a game, it perhaps
to the output of the models, there are some major caveats. has more in common with game engines, such as Unity3
As an example, the Terms and Conditions of OpenAI to or GameMaker4 . Nevertheless, until recently games created
their GTP-3 and GPT-4 algorithms2 which warns users that with Dreams were not monetisable by the creators and solely
OpenAI themselves retain the rights to both the user input beholden to Sony’s PlayStation ecosystem [114]. While Media
and the system output to improve the system in the future. Molecule maintains that their users retain the rights to their
Nevertheless, users have input confidential information into own creations, the options for the users to exercise these rights
the system that lead to security leaks. Most recently in the
case of Samsung, where confidential notes and source code has
3 https://unity.com/
2 https://openai.com/policies/terms-of-use 4 https://gamemaker.io/
10

remain limited⁵. After the runaway success of user-created content such as Defense of the Ancients—originally created in the Warcraft III (Blizzard Entertainment, 2002) map editor—which led to a boom of Multiplayer Online Battle Arena games, it is easy to see why companies are trying to retain as much control over user-generated content as possible.

⁵ Media Molecule launched a Beta Evaluation for projects that aim to commercialise their creations off the PlayStation ecosystem. However, this program is not open to everybody and approval is granted on an opaque case-by-case basis. Interestingly—as of writing—games are completely excluded from the program. (Read more at https://docs.indreams.me/en/community/news/dreams-beta-evaluation)

There is a case to be made for the user's moral right over their creations, however. Especially in systems where the player demonstrates considerable creative effort during the co-creation process, they should be able to retain all rights to their own intellectual property. While contemporary examples are still subject to ad-hoc judgement, we expect the right to the output of co-creative systems to become a central topic in the near future as AI-powered generative systems become more ubiquitous.

The conversation around co-creation is not just about rights but also responsibilities. Who is responsible for the output of the system when an agent learns to act like a bully, creates offensive and abusive content, or is instructed to generate misinformation? While it is easy—and companies are more than ready—to push the blame onto the user, as we demonstrated before in Section III, addressing this issue is not as trivial. The main tools to combat the uncertainty around the models' output preemptively are transparency and predictability—not just from the AI perspective, but from the larger view of the organisation itself. A delineation between harmful content produced on purpose and harmful content resulting from a biased or erroneous algorithm has to rely on clear and transparent guidelines on how people are expected to interact with the system, and on what the owners of the system consider malicious use. Predictability of the model output and of the larger organisational response to adversarial attacks can facilitate a safer environment for all users involved. The maintenance of the reliability and incorruptibility of the models should take precedence over the user's input. The responsibility for designing and deploying these security measures should, however, fall on the creator of the system. Employing security measures in highly modular software systems is far from trivial given the integration of third-party models, especially out-of-the-box solutions. Nevertheless, as long as industrial systems maintain their opaqueness, the end users cannot be considered fully autonomous. The extent of their autonomy will always be limited by the design of the application, the complexity of the system, and the limited transparency afforded to them.

The industry has been slow to react to the growing concern about trustworthiness; the landscape is changing, however. An excellent example of recent initiatives is Microsoft's Xbox Transparency Reports⁶. In these reports, Microsoft publishes explanations and statistics about their content moderation policies to increase both the reliability and transparency of their ecosystem. Although the first report was only released in 2022, the company pledges to release these transparency reports every 6 months. If successful, a large industry player such as Microsoft can inspire the industry at large to follow suit.

⁶ https://www.xbox.com/en-GB/legal/xbox-transparency-report

VII. OTHER ISSUES IN GAME AI ETHICS

There are several issues in the ethics of game AI that do not fit comfortably into our current structure, which revolves around the affective loop. Even though these are not the core concern of our paper, and going in depth on each question would require more space than we have here, we want to at least mention these concerns and provide some pointers for further reading.

All games are, in some way, partial representations of the world and processes therein. One might see games as being "about" certain real-world processes [115]. A key feature of games is also that you learn to perform the actions required to win the game as you play them; well-designed games are typically pedagogical sequences that introduce gradually harder versions of the same challenge, and it has been argued that this is a key component of why games are fun and appealing [90]. While there are many games whose in-game processes are representations of peaceful real-life activities, such as gardening in FarmVille (Zynga, 2009) or warehouse stocking in Sokoban (Thinking Rabbit, 1982), very many games represent some kind of real-life violence. Games about fighting in various forms are ubiquitous, and have been so since the birth of the medium; many, perhaps most, video games have "hit" or "shoot" as one of their most important mechanics. This might be because it is comparatively easy to design engaging games around fighting compared to any other purpose; regardless, combat has been pervasive in games since the era of Pac-Man (Namco, 1980) and its ghost-eating capability.

One potential issue of this is whether playing video games inspires, encourages, or teaches violent behaviour. There has been a debate around the effects of video games on violent behaviour for at least three decades, producing many studies of varying quality. To some extent, this debate and field of inquiry can be subsumed under the broader question of media effects, where the effects of other media (such as TV) on violence and other behavioural aspects have been studied for a longer time. Although the idea that video games in some way lead to violent behaviour has some plausibility, as video games do teach some kind of skills, thorough and well-performed studies have largely failed to find any causal link [116]. Even if such an unknown link might exist, AI models can now assist in the detection of such violent or toxic behaviours [21].
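To make this last point concrete, the sketch below shows one plausible shape of such a detection pipeline: a classifier trained on per-player behavioural telemetry whose output is used to triage accounts for human moderation. This is a minimal illustration under stated assumptions; the feature names, labels, placeholder data and review threshold are all hypothetical, and it does not reproduce the methodology of [21], which should be consulted for a validated approach.

# Illustrative sketch only: flag potentially toxic players from behavioural
# telemetry. Feature names, labels and the review threshold are hypothetical;
# they are not taken from [21] or from any specific game.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Per-player features: [reports_received, friendly_fire_events,
# chat_messages_flagged, matches_abandoned] -- placeholder random data here.
X = rng.poisson(lam=[1.0, 2.0, 0.5, 0.3], size=(1000, 4)).astype(float)
y = (X[:, 0] + X[:, 2] > 3).astype(int)  # stand-in for human moderation labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Rather than auto-banning, route high-risk accounts to human moderators.
risk = model.predict_proba(X_test)[:, 1]
flagged_for_review = np.where(risk > 0.8)[0]
print(f"{len(flagged_for_review)} accounts flagged for human review")

The important design choice in the sketch is that the model output serves as a triage signal for human review rather than as grounds for automated sanctions, in line with the transparency and accountability concerns raised above.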
Given the very high computing demands of much current AI research and the advantages of having access to large datasets, it is worth pondering whom modern game AI methods will benefit the most. It is possible that modern game AI will exacerbate the divide between large developers with deep pockets, multiple titles and existing user bases, and small independent game developers. If models trained on user data stay proprietary, the large developers will have a considerable additional advantage over small creators.

Concerns about fairness and bias are ubiquitous in machine learning and, as discussed previously, these concerns are very real for AI in games as well. It is often claimed that biased models come out of biased teams; in other words, that the composition of the human workforce defining and developing the AI solution impacts bias. This is certainly a concern in the game industry, which appears to be at least as demographically imbalanced as the rest of the tech industry [117], [118].
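As a minimal, hedged illustration of the kind of technical audit that can accompany such organisational concerns, the sketch below checks a player model for demographic parity, i.e., whether its positive prediction rate differs markedly between groups of players. The group labels, features, placeholder data and the 0.1 tolerance are illustrative assumptions only and are not drawn from the studies cited here.

# Illustrative sketch only: a demographic-parity check for a player model.
# Group labels, features and the 0.1 disparity threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Placeholder data: gameplay features, a binary target (e.g. "predicted to
# churn"), and a protected attribute such as self-reported gender or region.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
group = rng.integers(0, 2, size=2000)  # two demographic groups, 0 and 1

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Demographic parity difference: gap in positive prediction rates per group.
rates = {int(g): float(pred[group == g].mean()) for g in np.unique(group)}
disparity = max(rates.values()) - min(rates.values())
print(f"positive rate per group: {rates}, disparity: {disparity:.3f}")

if disparity > 0.1:  # hypothetical tolerance; real audits need domain input
    print("Warning: the model treats demographic groups unevenly; review features and data.")

Such audits can flag uneven treatment early, but they complement rather than replace addressing the composition of the teams that define and develop these models.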
Finally, a far-reaching potential ethical concern is that we one day develop artificial general intelligence that is as capable as we are across a large range of areas and tasks. Games could play a critical role in that development. Such entities might become very influential in human affairs, and may also gain the ability to improve themselves, potentially leading to what has been termed an intelligence explosion [119]. If that happens, the alignment problem becomes acute: making sure that the goals and principles of such an entity are aligned with human society. Given that video games are abundantly used in AI research, it is worth pondering what impact training on video games might have on the ethics of a potential superintelligence.

VIII. CONCLUSIONS

This survey paper thoroughly discussed the various ethical aspects of artificial intelligence in and for games. We opted to view the most critical of these aspects under the affective game loop concept [9]. Based on that concept we reviewed the current game AI state-of-the-art and the game industry state-of-practice with respect to player experience elicitation, sensing, detection and, finally, adaptation. We raised a number of ethical dimensions and concerns and reviewed the current (lack of) measures and tools available to address them. We also made a number of recommendations and suggested future steps for making ethics an integral part of AI and games research and innovation. The dialogue between the game industry and academic stakeholders is currently active across various conferences (e.g. GDC, IEEE CoG, FDG, CHI Play), seminars and summer schools (e.g. the AI and Games Summer School) with a focus on the area. Moreover, the ethical aspects of AI in games and media at large are currently a top priority item on the agenda of policymakers (e.g. the European Commission), as manifested through research and innovation projects⁷,⁸ and policies such as the AI Act⁹. We expect affective computing researchers to take a leading role in these efforts. Affective computing is uniquely positioned as a multidisciplinary field between sensor technology, AI, and applied psychology; hence it offers a comprehensive overview of most of the issues this survey has touched upon. Although in many cases the response to emerging issues has to be regulatory rather than technical, affective computing can still provide a shared language between the fields involved and help highlight potential issues. This paper aims to further facilitate and moderate this dialogue among all stakeholders involved—AI and affective computing researchers and practitioners, game developers, and ultimately players—with the hope that ethical awareness is increased and that necessary action is taken for the mutual benefit of players and their games.

⁷ https://www.ai4media.eu/
⁸ https://learnml.eu/
⁹ https://artificialintelligenceact.eu/

REFERENCES

[1] J. P. Zagal, S. Björk, and C. Lewis, "Dark patterns in the design of games," in Foundations of Digital Games 2013, 2013.
[2] D. L. King, P. H. Delfabbro, S. M. Gainsbury, M. Dreier, N. Greer, and J. Billieux, "Unfair play? video games as exploitative monetized services: An examination of game patents from a consumer protection perspective," Computers in Human Behavior, vol. 101, pp. 131–143, 2019.
[3] B. Mikkelsen, C. Holmgård, and J. Togelius, "Ethical considerations for player modeling," in 31st AAAI Conference on Artificial Intelligence, AAAI 2017. AI Access Foundation, 2017, pp. 975–982.
[4] M. Cook, "Ethical procedural generation," in Procedural Generation in Game Design. AK Peters/CRC Press, 2017, pp. 43–54.
[5] M. Seif El-Nasr and E. Kleinman, "Data-driven game development: ethical considerations," in International Conference on the Foundations of Digital Games, 2020, pp. 1–10.
[6] M. Cook, "The social responsibility of game ai," in 2021 IEEE Conference on Games (CoG). IEEE, 2021, pp. 1–8.
[7] G. N. Yannakakis, P. Spronck, D. Loiacono, and E. André, "Player modeling," Dagstuhl Follow-Ups, 2013.
[8] C. Holmgård, A. Liapis, J. Togelius, and G. N. Yannakakis, "Evolving personas for player decision modeling," in Proceedings of the Conference on Computational Intelligence and Games (CIG), 2014.
[9] G. N. Yannakakis and A. Paiva, "Emotion in games," Handbook on affective computing, vol. 2014, pp. 459–471, 2014.
[10] G. N. Yannakakis and J. Togelius, "Experience-driven procedural content generation," IEEE Transactions on Affective Computing, vol. 2, no. 3, pp. 147–161, 2011.
[11] A. Liapis, G. N. Yannakakis, M. J. Nelson, M. Preuss, and R. Bidarra, "Orchestrating game generation," IEEE Transactions on Games, vol. 11, no. 1, pp. 48–68, 2018.
[12] J. Togelius and G. N. Yannakakis, "General general game ai," in 2016 IEEE Conference on Computational Intelligence and Games (CIG). IEEE, 2016, pp. 1–8.
[13] G. N. Yannakakis and J. Hallam, "Real-time game adaptation for optimizing player satisfaction," IEEE Transactions on Computational Intelligence and AI in Games, vol. 1, no. 2, pp. 121–133, 2009.
[14] N. Shaker, G. Yannakakis, and J. Togelius, "Towards automatic personalized content generation for platform games," in Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, vol. 6, no. 1, 2010, pp. 63–68.
[15] M. Van Rooij, A. Lobel, O. Harris, N. Smit, and I. Granic, "Deep: A biofeedback virtual reality game for children at-risk for anxiety," in Proceedings of the 2016 CHI conference extended abstracts on human factors in computing systems, 2016, pp. 1989–1997.
[16] D. Villani, C. Carissoli, S. Triberti, A. Marchetti, G. Gilli, and G. Riva, "Videogames for emotion regulation: a systematic review," Games for health journal, vol. 7, no. 2, pp. 85–99, 2018.
[17] D. Melhart, A. Liapis, and G. N. Yannakakis, "Towards general models of player experience: A study within genres," in 2021 IEEE Conference on Games (CoG). IEEE, 2021, pp. 1–8.
[18] N. Bostrom and E. Yudkowsky, "The ethics of artificial intelligence," The Cambridge handbook of artificial intelligence, vol. 1, pp. 316–334, 2014.
[19] I. Subhan, "18 european countries call for better regulation of loot boxes following new report," 2022. [Online]. Available: https://www.eurogamer.net/18-european-countries-call-for-better-regulation-of-loot-boxes-following-new-report
[20] A. Hon, You've Been Played: How Corporations, Governments and Schools Use Games to Control Us All. Swift Press, 2022.
[21] A. Canossa, D. Salimov, A. Azadvar, C. Harteveld, and G. Yannakakis, "For honor, for toxicity: Detecting toxic behavior through gameplay," Proceedings of the ACM on Human-Computer Interaction, vol. 5, no. CHI PLAY, pp. 1–29, 2021.
[22] N. Johannes, M. Vuorre, and A. K. Przybylski, "Video game play is positively correlated with well-being," Royal Society Open Science, vol. 8, no. 2, p. 202049, 2021.
[23] D. Melhart, A. Azadvar, A. Canossa, A. Liapis, and G. N. Yannakakis, "Your gameplay says it all: Modelling motivation in Tom Clancy's The Division," in Proc. of the IEEE Conference on Games (CoG), 2019.
[24] C. Monge and T. O'Brien, "Effects of individual toxic behavior on team performance in league of legends," Media Psychology, vol. 25, no. 1, pp. 82–105, 2022.
[25] J. J. Bryson and P. P. Kime, "Just an artifact: Why machines are perceived as moral agents," in Twenty-second international joint conference on artificial intelligence, 2011.
[26] H. Yu, Z. Shen, C. Miao, C. Leung, V. R. Lesser, and Q. Yang, "Building ethics into artificial intelligence," arXiv preprint arXiv:1812.02953, 2018.
[27] J. Crowley, A. O'Sullivan, A. Nowak, C. Jonker, D. Pedreschi, F. Giannotti, and Y. Rogers, "Toward ai systems that augment and empower humans by understanding us, our society and the world around us," Report of, vol. 761758, pp. 1–32, 2019.
[28] S. Larsson and F. Heintz, "Transparency in artificial intelligence," Internet Policy Review, vol. 9, no. 2, 2020.
[29] S. Thiebes, S. Lins, and A. Sunyaev, "Trustworthy artificial intelligence," Electronic Markets, vol. 31, no. 2, pp. 447–464, 2021.
[30] A. Das and P. Rad, "Opportunities and challenges in explainable artificial intelligence (xai): A survey," arXiv preprint arXiv:2006.11371, 2020.
[31] S. Wachter and B. Mittelstadt, "A right to reasonable inferences: rethinking data protection law in the age of big data and ai," Colum. Bus. L. Rev., p. 494, 2019.
[32] L. Beckman, J. Hultin Rosenberg, and K. Jebari, "Artificial intelligence and democratic legitimacy. the problem of publicity in public authority," AI & SOCIETY, pp. 1–10, 2022.
[33] H. Liu, Y. Wang, W. Fan, X. Liu, Y. Li, S. Jain, Y. Liu, A. K. Jain, and J. Tang, "Trustworthy ai: A computational perspective," arXiv preprint arXiv:2107.06641, 2021.
[34] T. Gebru, "Race and gender," The Oxford handbook of ethics of AI, pp. 251–269, 2020.
[35] V. Vakkuri, K.-K. Kemell, J. Kultanen, M. Siponen, and P. Abrahamsson, "Ethically aligned design of autonomous systems: Industry viewpoint and an empirical study," arXiv preprint arXiv:1906.07946, 2019.
[36] The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, "Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems," 2019. [Online]. Available: https://standards.ieee.org/industry-connections/ec/autonomous-systems
[37] L. Floridi, "Establishing the rules for building trustworthy ai," Nature Machine Intelligence, vol. 1, no. 6, pp. 261–262, 2019.
[38] N. A. Smuha, "The eu approach to ethics guidelines for trustworthy artificial intelligence," Computer Law Review International, vol. 20, no. 4, pp. 97–106, 2019.
[39] H. Yu, C. Miao, C. Leung, and T. J. White, "Towards ai-powered personalization in mooc learning," npj Science of Learning, vol. 2, no. 1, pp. 1–5, 2017.
[40] V. Dignum, "Ethics in artificial intelligence: introduction to the special issue," pp. 1–3, 2018.
[41] M. Rovatsos, "We may not cooperate with friendly machines," Nature Machine Intelligence, vol. 1, no. 11, pp. 497–498, 2019.
[42] F. Ishowo-Oloko, J.-F. Bonnefon, Z. Soroye, J. Crandall, I. Rahwan, and T. Rahwan, "Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation," Nature Machine Intelligence, vol. 1, no. 11, pp. 517–521, 2019.
[43] S. Russell, S. Hauert, R. Altman, and M. Veloso, "Ethics of artificial intelligence," Nature, vol. 521, no. 7553, pp. 415–416, 2015.
[44] S. Russell, Human compatible: Artificial intelligence and the problem of control. Penguin, 2019.
[45] T. Hagendorff, "The ethics of ai ethics: An evaluation of guidelines," Minds and Machines, vol. 30, no. 1, pp. 99–120, 2020.
[46] J. Zhu and M. S. El-Nasr, "Open player modeling: Empowering players through data transparency," arXiv preprint arXiv:2110.05810, 2021.
[47] M. Sicart, The ethics of computer games. MIT press, 2011.
[48] J. Zhu, A. Liapis, S. Risi, R. Bidarra, and G. M. Youngblood, "Explainable ai for designers: A human-centered perspective on mixed-initiative co-creation," in 2018 IEEE Conference on Computational Intelligence and Games (CIG). IEEE, 2018, pp. 1–8.
[49] A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins et al., "Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai," Information fusion, vol. 58, pp. 82–115, 2020.
[50] D. Hooshyar, E. Bardone, N. E. Mawas, and Y. Yang, "Transparent player model: Adaptive visualization of learner model in educational games," in International Conference on Innovative Technologies and Learning. Springer, 2020, pp. 349–357.
[51] J. L. Kröger, P. Raschke, J. P. Campbell, and S. Ullrich, "Surveilling the gamers: Privacy impacts of the video game industry," Entertainment Computing, vol. 44, p. 100537, 2023.
[52] P. Voigt and A. Von dem Bussche, "The eu general data protection regulation (gdpr)," A Practical Guide, 1st Ed., Cham: Springer International Publishing, vol. 10, no. 3152676, pp. 10–5555, 2017.
[53] P. Jost and M. Lampert, "Two years after: A scoping review of gdpr effects on serious games research ethics reporting," in International Conference on Games and Learning Alliance. Springer, 2020, pp. 372–385.
[54] A. Häuselmann, "Fit for purpose? affective computing meets eu data protection law," International Data Privacy Law, 2021.
[55] S. McCrea, G. Geršak, and D. Novak, "Absolute and relative user perception of classification accuracy in an affective video game," Interacting with Computers, vol. 29, no. 2, pp. 271–286, 2017.
[56] D. Zendle and P. Cairns, "Video game loot boxes are linked to problem gambling: Results of a large-scale survey," PloS one, vol. 13, no. 11, p. e0206767, 2018.
[57] A. Drummond, J. D. Sauer, L. C. Hall, D. Zendle, and M. R. Loudon, "Why loot boxes could be regulated as gambling," Nature Human Behaviour, vol. 4, no. 10, pp. 986–988, 2020.
[58] E. Gibson, M. Griffiths, F. Calado, and A. Harris, "The relationship between videogame micro-transactions and problem gaming and gambling: A systematic review," Computers in Human Behavior, p. 107219, 2022.
[59] S. E. Hodge, M. Vykoukal, J. McAlaney, R. D. Bush-Evans, R. Wang, and R. Ali, "What's in the box? exploring uk players' experiences of loot boxes in games; the conceptualisation and parallels with gambling," PloS one, vol. 17, no. 2, p. e0263567, 2022.
[60] E. Petrovskaya and D. Zendle, "The battle pass: A mixed-methods investigation into a growing type of video game monetisation," OSF Preprints, Sep, 2020.
[61] D. Joseph, "Battle pass capitalism," Journal of Consumer Culture, vol. 21, no. 1, pp. 68–83, 2021.
[62] D. L. King, A. M. Russell, P. H. Delfabbro, and D. Polisena, "Fortnite microtransaction spending was associated with peers' purchasing behaviors but not gaming disorder symptoms," Addictive Behaviors, vol. 104, p. 106311, 2020.
[63] Á. Periáñez, A. Saas, A. Guitart, and C. Magne, "Churn prediction in mobile social games: towards a complete assessment using survival ensembles," in Proceedings of the International Conference on Data Science and Advanced Analytics (DSAA), 2016, pp. 564–573.
[64] P. P. Chen, A. Guitart, A. F. del Río, and A. Periáñez, "Customer lifetime value in video games using deep learning and parametric models," in 2018 IEEE international conference on big data (big data). IEEE, 2018, pp. 2134–2140.
[65] M. Griffiths, "A 'components' model of addiction within a biopsychosocial framework," Journal of Substance use, vol. 10, no. 4, pp. 191–197, 2005.
[66] M. Xi, Z. Luo, N. Wang, and J. Yin, "A latent feelings-aware rnn model for user churn prediction with behavioral data," arXiv preprint arXiv:1911.02224, 2019.
[67] K. K. Mak, K. Lee, and C. Park, "Applications of machine learning in addiction studies: A systematic review," Psychiatry research, vol. 275, pp. 53–60, 2019.
[68] R. Valentine, "Ea faces yet another class-action lawsuit connected to loot boxes," 2020. [Online]. Available: https://www.gamesindustry.biz/ea-faces-yet-another-class-action-lawsuit-over-alleged-use-of-dynamic-difficulty-adjustment
[69] J. Batchelor, "'dynamic difficulty' loot box lawsuit against ea dropped," 2021. [Online]. Available: https://www.gamesindustry.biz/dynamic-difficulty-loot-box-lawsuit-against-ea-dropped
[70] F. Tencé, C. Buche, P. De Loor, and O. Marc, "The challenge of believability in video games: Definitions, agents models and imitation learning," arXiv preprint arXiv:1009.0451, 2010.
[71] K. Jorgensen and F. Karlsen, Transgression in games and play. MIT Press, 2019.
[72] E. Sheng, K.-W. Chang, P. Natarajan, and N. Peng, "The woman worked as a babysitter: On biases in language generation," arXiv preprint arXiv:1909.01326, 2019.
[73] L. Lucy and D. Bamman, "Gender and representation bias in gpt-3 generated stories," in Proceedings of the Third Workshop on Narrative Understanding, 2021, pp. 48–55.
[74] P. Budzianowski and I. Vulić, "Hello, it's gpt-2–how can i help you? towards the use of pretrained language models for task-oriented dialogue systems," arXiv preprint arXiv:1907.05774, 2019.
[75] L. Floridi and M. Chiriatti, "Gpt-3: Its nature, scope, limits, and consequences," Minds and Machines, vol. 30, no. 4, pp. 681–694, 2020.
[76] T. Simonite, "It began as an ai-fueled dungeon game. it got much darker," 2021. [Online]. Available: https://www.wired.com/story/ai-fueled-dungeon-game-got-much-darker/
[77] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill et al., "On the opportunities and risks of foundation models," arXiv preprint arXiv:2108.07258, 2021.
[78] C. Reynolds and R. Picard, "Affective sensors, privacy, and ethical contracts," in CHI'04 extended abstracts on Human factors in computing systems, 2004, pp. 1103–1106.
[79] R. Cowie, "Ethical issues in affective computing," The Oxford handbook of affective computing, pp. 334–348, 2015.
[80] J. L. Kröger, P. Raschke, J. P. Campbell, and S. Ullrich, "Surveilling the gamers: Privacy impacts of the video game industry," Available at SSRN 3881279, 2021.
[81] S. Makarovych, A. Canossa, J. Togelius, and A. Drachen, "Like a dna string: Sequence-based player profiling in Tom Clancy's The Division," in Proceedings of the Artificial Intelligence and Interactive Digital Entertainment Conference. York, 2018.
[82] C. M. Myers, L. F. Laris Pardo, A. Acosta-Ruiz, A. Canossa, and J. Zhu, "'try, try, try again:' sequence analysis of user interaction data with a voice user interface," in CUI 2021-3rd Conference on Conversational User Interfaces, 2021, pp. 1–8.
[83] L. M. Vizer, L. Zhou, and A. Sears, "Automated stress detection using keystroke and linguistic features: An exploratory study," International Journal of Human-Computer Studies, vol. 67, no. 10, pp. 870–886, 2009.
[84] J. L. Kröger, O. H.-M. Lutz, and P. Raschke, "Privacy implications of voice and speech analysis–information disclosure by inference," in IFIP International Summer School on Privacy and Identity Management. Springer, 2019, pp. 242–258.
[85] M. S. El-Nasr, A. Drachen, and A. Canossa, Game analytics. Springer, 2016.
[86] S. C. Bakkes, P. H. Spronck, and G. van Lankveld, "Player behavioural modelling for video games," Entertainment Computing, vol. 3, no. 3, pp. 71–79, 2012.
[87] R. Hare and Y. Tang, "Player modelling and adaptation methods within adaptive serious games," in 2021 International Conference on Cyber-Physical Social Intelligence (ICCSI). IEEE, 2021, pp. 1–6.
[88] R. A. Calvo and S. D'Mello, "Affect detection: An interdisciplinary review of models, methods, and their applications," IEEE Transactions on affective computing, vol. 1, no. 1, pp. 18–37, 2010.
[89] H. Wang and C.-T. Sun, "Game reward systems: Gaming experiences and social meanings," in DiGRA conference, vol. 114, 2011.
[90] R. Koster, Theory of fun for game design. O'Reilly Media, Inc., 2013.
[91] S. Abuhamdeh, M. Csikszentmihalyi, and B. Jalal, "Enjoying the possibility of defeat: Outcome uncertainty, suspense, and intrinsic motivation," Motivation and Emotion, vol. 39, no. 1, pp. 1–10, 2015.
[92] B. Cowley, D. Charles, M. Black, and R. Hickey, "Toward an understanding of flow in video games," Computers in Entertainment (CIE), vol. 6, no. 2, pp. 1–27, 2008.
[93] S. Rigby and R. M. Ryan, Glued to games: How video games draw us in and hold us spellbound. ABC-CLIO, 2011.
[94] J. Huizinga, Homo Ludens: A Study of the Play Element in Culture. Beacon Press, 1955.
[95] K. S. Tekinbas and E. Zimmerman, Rules of play: Game design fundamentals. MIT press, 2003.
[96] C. O'Neil, Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books, 2016.
[97] K. Lum and W. Isaac, "To predict and serve?" Significance, vol. 13, no. 5, pp. 14–19, 2016.
[98] A. Yapo and J. Weiss, "Ethical implications of bias in machine learning," in Proceedings of the 51st Hawaii International Conference on System Sciences, 2018.
[99] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé III, and K. Crawford, "Datasheets for datasets," Communications of the ACM, vol. 64, no. 12, pp. 86–92, 2021.
[100] D. Roselli, J. Matthews, and N. Talagala, "Managing bias in ai," in Companion Proceedings of The 2019 World Wide Web Conference, 2019, pp. 539–544.
[101] J. Vincent, "Google 'fixed' its racist algorithm by removing gorillas from its image-labeling tech," 2018. [Online]. Available: https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
[102] M. Fahey, "Autistic boy branded a cheater by xbox live [update]," 2013. [Online]. Available: https://kotaku.com/autistic-boy-branded-a-cheater-by-xbox-live-update-5743970
[103] B. Ashcraft, "Korean woman kicks ass at overwatch, gets accused of cheating [update]," 2016. [Online]. Available: https://kotaku.com/korean-woman-kicks-ass-at-overwatch-gets-accused-of-ch-1782343447
[104] E. F. Villaronga, P. Kieseberg, and T. Li, "Humans forget, machines remember: Artificial intelligence and the right to be forgotten," Computer Law & Security Review, vol. 34, no. 2, pp. 304–313, 2018.
[105] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, "Stealing machine learning models via prediction APIs," in 25th USENIX security symposium (USENIX Security 16), 2016, pp. 601–618.
[106] C. Guo, T. Goldstein, A. Hannun, and L. Van Der Maaten, "Certified data removal from machine learning models," arXiv preprint arXiv:1911.03030, 2019.
[107] L. Graves, V. Nagisetty, and V. Ganesh, "Amnesiac machine learning," arXiv preprint arXiv:2010.10981, 2020.
[108] L. Bourtoule, V. Chandrasekaran, C. A. Choquette-Choo, H. Jia, A. Travers, B. Zhang, D. Lie, and N. Papernot, "Machine unlearning," in 2021 IEEE Symposium on Security and Privacy (SP). IEEE, 2021, pp. 141–159.
[109] A. Sekhari, J. Acharya, G. Kamath, and A. T. Suresh, "Remember what you want to forget: Algorithms for machine unlearning," Advances in Neural Information Processing Systems, vol. 34, 2021.
[110] G. Cauwenberghs and T. Poggio, "Incremental and decremental support vector machine learning," Advances in neural information processing systems, vol. 13, 2000.
[111] OpenAI, "GPT-4 technical report," arXiv preprint arXiv:2303.08774, 2023.
[112] L. Maddison, "Samsung workers made a major error by using chatgpt," 2023. [Online]. Available: https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt
[113] J. K. Eshraghian, "Human ownership of artificial creativity," Nature Machine Intelligence, vol. 2, no. 3, pp. 157–160, 2020.
[114] J. Castello, "Ps4 game dreams is an amazing creation tool with an exposure problem," 2020. [Online]. Available: https://www.theverge.com/2020/2/14/21136244/dreams-ps4-game-creation-tool-exposure-problem-curation-media-molecule
[115] I. Bogost, Persuasive games: The expressive power of videogames. MIT Press, 2010.
[116] J. Breuer, J. Vogelgesang, T. Quandt, and R. Festl, Violent video games and physical aggression: Evidence for a selection effect among adolescents. Educational Publishing Foundation, 2015, vol. 4, no. 4.
[117] E. N. Bailey, K. Miyata, and T. Yoshida, "Gender composition of teams and studios in video game development," Games and Culture, vol. 16, no. 1, pp. 42–64, 2021.
[118] C. J. Passmore, R. Yates, M. V. Birk, and R. L. Mandryk, "Racial diversity in indie games: Patterns, challenges, and opportunities," in Extended abstracts publication of the annual symposium on computer-human interaction in play, 2017, pp. 137–151.
[119] N. Bostrom, "How long before superintelligence?" International Journal of Futures Studies, vol. 2, 1998.
David Melhart is a Senior Member of Technical Staff at modl.ai, and a Postdoctoral Researcher at the Institute of Digital Games, University of Malta. He received an MA degree in Cognition and Communication from the University of Copenhagen in 2016 and a Ph.D. degree in Game Research from the University of Malta in 2021. His research focuses on Machine Learning, Affective Computing, and Games User Modelling. He has been the Communication Chair of FDG 2020, a recurring organiser and Publicity Chair of the Summer School series on Artificial Intelligence and Games (2018-2023), the Workshop and Panels Chair of FDG 2023, Editorial Assistant of the IEEE Transactions on Games, Guest Associate Editor of the User States in Extended Reality Media Experiences for Entertainment Games Special Issue of Frontiers in Virtual Reality and Human Behaviour, and Review Editor of Frontiers in Human-Media Interaction.

Julian Togelius is a co-founder and Research Director of modl.ai, and an Associate Professor in the Department of Computer Science and Engineering, New York University. He works on artificial intelligence for games and on games for artificial intelligence. His current main research directions involve procedural content generation in games, general video game playing, player modelling, and fair and relevant benchmarking of AI through game-based competitions. Additionally, he works on topics in evolutionary computation, quality-diversity algorithms, and reinforcement learning. From 2018 to 2021, he was the Editor-in-Chief of the IEEE Transactions on Games. Togelius holds a BA from Lund University, an MSc from the University of Sussex, and a PhD from the University of Essex. He has previously worked at IDSIA in Lugano and at the IT University of Copenhagen.

Benedikte Mikkelsen is a co-founder and Chief Product Officer of modl.ai. She holds a BA degree in Architecture from The Royal Danish Academy of Fine Arts and an MSc in Media Technology from the IT University of Copenhagen. Mikkelsen formed her first game development company, Duck and Cover Games, focusing on learning game development, with Christoffer Holmgård in 2008, where they shipped multiple bespoke game titles for large public and private actors in Denmark. Subsequently, she worked with user experience and web application development, among other things developing design-driven integrated web and mobile platforms for actors in the public sector. She has experience working in research and development from international European Union research projects focused on building non-expert end-user behavioural modelling tools leveraging game telemetry data.

Christoffer Holmgård is a co-founder and Chief Executive Officer of modl.ai. He holds a BA degree in Psychology from the University of Copenhagen and an MSc in Media Technology from the IT University of Copenhagen. He worked in statistics, psychometrics, and organisational psychology with the Royal Danish Defence College from 2004 until 2011, maintaining the Danish national draft board intelligence test, conducting psychometric assessment of special forces operatives, fighter pilots, and officers, and supporting Danish veterans. In 2015, he earned a PhD in Artificial Intelligence and Procedural Content Generation from the IT University of Copenhagen and further earned a post-doctorate in Game Engineering from New York University. Before starting modl.ai, he served as a tenure-track Assistant Professor at Northeastern University and the head of their Master's program in Game Science and Design. Holmgård formed his first game development company, Duck and Cover Games, focusing on learning games, together with Benedikte Mikkelsen in 2008. In 2011, he co-founded the award-winning game studio Die Gute Fabrik, leading the studio as Managing Director and today serving as the chair of the board. During his tenure, the studio shipped multiple games on PC, Mac, mobile, and console platforms, securing more than 22 industry nominations and awards, including the GDC Innovation Award and the IndieCade Grand Jury Award.

Georgios N. Yannakakis is a co-founder of modl.ai, and a Professor and Director of the Institute of Digital Games, University of Malta (UM). He received the Ph.D. degree in Informatics from the University of Edinburgh in 2006. Prior to joining UM, in 2012 he was an Associate Professor at the Center for Computer Games Research at the IT University of Copenhagen. He does research at the crossroads of artificial intelligence, affective computing, games and computational creativity. He has published more than 350 papers in the aforementioned fields and his work has been cited broadly. His research has been supported by numerous national and European grants (including a Marie Skłodowska-Curie Fellowship). He is currently the Editor-in-Chief of the IEEE Transactions on Games, an Associate Editor of the IEEE Transactions on Evolutionary Computation, and a former Associate Editor of the IEEE Transactions on Affective Computing and the IEEE Transactions on Computational Intelligence and AI in Games journals. Among the several awards he has received for his papers, he is the recipient of the IEEE Transactions on Affective Computing Most Influential Paper Award and the IEEE Transactions on Games Outstanding Paper Award. He is a Senior Member of the IEEE.