Art and the Science of Generative AI: A Deeper Dive
∗To whom correspondence should be addressed; E-mail: [email protected].
A new class of tools, colloquially called generative AI, can produce high-quality
artistic media for visual arts, concept art, music, fiction, literature, video, and
animation. The generative capabilities of these tools are likely to fundamen-
tally alter the creative processes by which creators formulate ideas and put
them into production. As creativity is reimagined, so too may be many sec-
tors of society. Understanding the impact of generative AI—and making pol-
icy decisions around it—requires new interdisciplinary scientific inquiry into
culture, economics, law, algorithms, and the interaction of technology and cre-
ativity. We argue that generative AI is not the harbinger of art’s demise, but
rather is a new medium with its own distinct affordances. In this vein, we
consider the impacts of this new medium on creators across four themes: aes-
thetics and culture, legal questions of ownership and credit, the future of cre-
ative work, and impacts on the contemporary media ecosystem. Across these
themes, we highlight key research questions and directions to inform policy
and beneficial uses of the technology.
Note: This white paper is an expanded version of Epstein et al. (2023), published in Science Perspectives on July 16, 2023, which you can find at the following DOI: 10.1126/science.adh4451.
1 Introduction
Generative AI systems increasingly have the capability to produce high-quality artistic media
for visual arts, concept art, music, fiction, literature, and video/animation. For example, dif-
fusion models can synthesize high-quality images [1] and large language models can produce
sensible-sounding and impressive prose and verse in a wide range of contexts [2]. The genera-
tive capabilities of these tools are likely to fundamentally alter the creative processes by which
creators formulate ideas and put them into production. As creativity is reimagined, so too may
be many sectors of society. Understanding the impact of generative AI—and making policy
decisions around it—requires new interdisciplinary scientific inquiry into culture, economics,
law, algorithms, and the interaction of technology and creativity.
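To ground the discussion, the following minimal sketch shows how a user might query an off-the-shelf text-to-image diffusion model from a few lines of code. It is illustrative only: it assumes the open-source Hugging Face diffusers library, publicly released Stable Diffusion weights, and a GPU, none of which are specific to the arguments of this paper.

import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained latent diffusion model (public Stable Diffusion weights).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A single natural-language prompt is the user's entire control surface here.
image = pipe("an impressionist painting of a harbor at dawn").images[0]
image.save("harbor.png")

The brevity of this interaction is precisely what makes questions of authorship and creative control pressing: the user supplies one sentence, and the model supplies everything else.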
Generative AI tools, at first glance, seem to fully automate artistic production—an impres-
sion that mirrors past instances when traditionalists viewed new technologies as threatening
“art itself.” In fact, these moments of technological change did not indicate the “end of art,” but
had much more complex effects, recasting the roles and practices of creators and shifting the
aesthetics of contemporary media [3]. For example, some 19th-century artists saw the advent
of photography as a threat to painting. Instead of replacing painting, however, photography
eventually liberated it from realism, giving rise to Impressionism and the Modern Art move-
ment. On the other hand, portrait photography did largely replace portrait painting, leading to
a short-term loss of jobs among portraitists and postcard painters [4]. Many other historical
analogies illustrate similar trends, with a new artistic technology challenging traditional creative
practices and jobs while in time creating new roles for and genres of art. The digitization of
music production (e.g., digital sampling and sound synthesis) was decried as “the end of mu-
sic.” Instead, it altered the ways we produce and listen to music and helped spawn new genres,
like Hip Hop and Drum’n’bass. This follows trends in computer animation (where traditional
animators thought that computers would replace animators entirely, but instead computer an-
imation flourished as a medium and jobs for computer animators increased [5, 6]) and digital
photography (which in its time challenged photographic principles and assumptions, but now it
is commonplace and widely used [7, 8]).
Like these historical analogs, generative AI is not necessarily the harbinger of art’s demise,
but rather is a new medium with its own distinct affordances. As a suite of tools used by
human creators, generative AI is positioned to upend many sectors of the creative industry and
beyond—threatening existing jobs and labor models in the short term, while ultimately enabling
new models of creative labor and reconfiguring the media ecosystem. These immediate impacts
require serious consideration and discussion across academia, industry and civil society.
Unlike past disruptions, however, generative AI relies on training data made by people: the
models “learn” to generate art by extracting statistical patterns from existing artistic media.
This reliance on training data raises new issues—such as where the data is sourced, how it
influences the resulting outputs, and how to determine authorship. By leveraging existing work
to automate aspects of the creative process, generative AI challenges conventional definitions
of authorship, ownership, creative inspiration, sampling, and remixing and thus complicates
existing conceptions of media production. It is therefore important to consider generative AI’s
impacts on aesthetics and culture, legal questions of ownership and credit, the future of
creative work, and impacts on the contemporary media ecosystem. Across these themes, there
are key research questions to inform policy and beneficial uses of this technology that we outline
in this white paper.
Meaningful human control (MHC) relates to intent, predictability, and accountability. In order to be considered meaningful human control, a generative system should
be capable of incorporating a human author’s intent into its output. If a user starts with no spe-
cific goal, the system should allow for open-ended, curiosity-driven exploration. As the user’s
goal becomes clearer through interaction, the system should be able to both guide and deliver
this intent. Such systems should have a degree of predictability, allowing users to gradually
understand the system to the extent that they can learn to anticipate the results of their actions.
Given these conditions, we can consider the human user as accountable for the outputs of the
generative system. In other words, MHC is achieved if human creators can creatively express
themselves through the generative system, leading to an outcome that aligns with their inten-
tions and carries their personal, expressive signature. Future work is needed to investigate in
what ways generative systems and interfaces can be developed that allow more meaningful hu-
man control by adding input streams that provide users fine-grained causal manipulation over
outputs.
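As one concrete (and hedged) example of such an input stream, image-to-image pipelines already expose a few causal knobs. The sketch below assumes the diffusers library and public Stable Diffusion weights; the point is only that an artist's own drawing, together with parameters like strength, gives more fine-grained and predictable control than a prompt alone.

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The artist's own drawing anchors the composition (placeholder file path).
user_sketch = Image.open("my_sketch.png").convert("RGB").resize((512, 512))

# `strength` bounds how far the model may depart from the artist's input,
# and `guidance_scale` sets how literally the prompt is followed: two
# predictable, causal control surfaces beyond the prompt itself.
result = pipe(
    prompt="a watercolor seascape",
    image=user_sketch,
    strength=0.4,
    guidance_scale=7.5,
).images[0]
result.save("controlled_output.png")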
Generative AI systems are diffuse, sociotechnical systems [18] and therefore more work is
needed to understand how people reason about the complex interplay between human actors
and computational processes. For example, how do perceptions of the generative process (e.g.,
the relative salience or invisibility of various stakeholders, or the disclosure of AI involvement)
affect attitudes towards artifacts produced by those systems [19]? And how do these perceptions
affect attitudes towards various stakeholders involved in the generative AI systems in the first
place [13]? These insights can help us design systems that properly disclose the generative
process and avoid misleading interpretations.
So far, this discussion of generative AI has centered on the self-contained case in which a
user queries a generative AI model directly (e.g. via prompting) to create a static artifact via
inference. However, other regimes within AI art involve the development of systems that go
beyond this fixed user-prompting paradigm. Many of these systems use feedback from a large
number of users to guide the creation of content [20, 21, 22, 23, 24, 25], and thus fall into the
lineage of collective or crowd art (e.g. the r/Place experiment [26] or Agnieszka Kurant’s The
End of Signature [27]). Given that unforeseen outputs from AI systems can fuel perceptions and
fear of AI agency, the well-established category of human-created generative art systems with
intentionally unexpected outputs provides an important reminder: humans create these systems
and are responsible for their outputs.
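The toy sketch below (our illustration, not the implementation of any cited system) captures the basic loop that many of these collective systems share: candidates are generated, crowd feedback scores them, and the highest-rated candidates seed the next round. The generator and the voting rule are deliberately trivial stand-ins.

import numpy as np

rng = np.random.default_rng(0)

def generate(seeds, noise=0.3):
    # Stand-in for a generative model: perturb seed latents into candidates.
    return seeds + noise * rng.standard_normal(seeds.shape)

def crowd_score(candidates, taste):
    # Stand-in for aggregated user votes: closeness to a shared latent taste.
    return -np.linalg.norm(candidates - taste, axis=1)

taste = rng.standard_normal(8)          # the crowd's latent preference
pool = rng.standard_normal((16, 8))     # initial candidate latents
for _ in range(20):
    top = pool[np.argsort(crowd_score(pool, taste))[-4:]]  # best-rated four
    pool = generate(np.repeat(top, 4, axis=0))             # next generation

print("distance to crowd preference:", np.linalg.norm(pool.mean(0) - taste))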
Generative AI has at least three unique affordances. First, it enables the production of high-quality content with very low barriers to entry. Second, this content is viewed on algorithmically-mediated
social media platforms, where attention is scarce and explicitly monetized [29, 30]. Finally, the
vast computational infrastructure necessary to produce AI-generated content (e.g. large num-
bers of GPUs) is developed and maintained by a few large companies, who can therefore control
the functionality of and access to the technology.
These unique affordances in turn give rise to a medium with its own aesthetics that may have
a long-term effect on art and culture [31]. Primarily, we note how this medium will recast the
practices and roles of creators. In traditional art forms characterized by direct manipulation [32]
of a material (e.g., painting, tattoo, or sculpture), the creator has a direct hand in creating the
final output, and therefore it is relatively straightforward to identify the creator’s intentions and
style in the output. Indeed, previous research has shown the relative importance of “intention
guessing” in the artistic viewing experience [33, 34], as well as the increased creative value
afforded to an artwork if elements of the human process (e.g., brushstrokes) are visible [35].
However, generative techniques have strong aesthetics themselves [36]; for instance, it has
become apparent that certain generative tools are built to be as “realistic” as possible, resulting
in a hyperrealistic aesthetic style. As these aesthetics propagate through visual culture, it can be
difficult for a casual viewer to identify the creator’s intention and individuality within the out-
puts. Indeed, some creators have spoken about the challenges of getting generative AI models
to produce images in new, different, or unique aesthetic styles [36, 37]. This unique position
of the creator relative to the tool calls into question the particular role of the creator in exert-
ing their artistic intention on AI-generated artifacts. While there is a long history of generative
and computer art, these art forms usually involve software built by the artist with distinctive
aesthetics.
Therefore, AI-based artists using generative AI systems must find ways to express their artistic intention and rigor in other stages of the creation process, such as how they select training data, craft prompts [38, 39], or use AI-generated artifacts for downstream creative applications [40]. Future work should explore what constitutes meaningful human control in the
context of generative AI. How does it relate to intent, predictability, accountability and expres-
sion? What existing interactions with generative AI are sites for artistic agency and meaningful
human control? How can additional sites of artistic agency and meaningful human control be
introduced into generative AI systems, such as through increased explainability, transparency
and responsiveness? These explorations can be organized into distinct layers: the user-facing interface layer (i.e., user-experience design) and a deeper layer that incorporates particular desired controls into the models themselves.
As generative AI tools become more widespread, and knowledge of these tools becomes
commonplace (as consumer photography did a century ago), an open question remains regard-
ing how the aesthetics of their outputs will affect the range of artistic outputs. On one hand,
generative AI could increase the overall diversity of artistic outputs by expanding the set of
creators who engage with artistic practice.
But on the other hand, the aesthetic and cultural norms and biases embedded in the training
data of generative-AI models affect their outputs. It is well documented that biases in the
training data of an algorithmic system can create outputs that reflect or even amplify those
biases [41, 42]. The data used to train generative-AI models primarily comes from the web;
web image search results have been shown to amplify existing racial and gender inequalities [43, 44], and to be geographically concentrated rather than representative of all cultures [45].
Without documentation of what data is used in the training of generative-AI models, it is more
difficult to identify, quantify and mitigate the biases the models have [46, 47], although efforts
have been made to overcome this issue [48, 49]. Going beyond the data, algorithmic decisions,
such as which types of outputs to reward when training the model, implicitly reflect the values
of the generative AI creators [50]. For example, models may learn to produce outputs that more
closely mimic the “common” rather than the “rare” or “unique” inputs, or focus on representing
just a subset of the data [51]. Thus, it is possible that the generative AI models will in fact
entrench bias in cultural production and decrease aesthetic diversity.
AI-generated content may also feed future generative models, creating a self-referential
aesthetic flywheel that could perpetuate AI-driven cultural norms. This flywheel may in turn
reinforce generative AI’s aesthetics, as well as the biases these models exhibit.
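A toy simulation (ours, purely illustrative rather than an empirical claim about deployed systems) makes the flywheel concern concrete: when each generation of a trivially simple model is fit only to samples from its predecessor, estimation noise compounds and diversity tends to decay in expectation.

import numpy as np

rng = np.random.default_rng(1)

# Generation 0: "human-made" data with mean 0 and standard deviation 1.
data = rng.normal(0.0, 1.0, size=50)
for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()    # "train" a trivial Gaussian model
    data = rng.normal(mu, sigma, size=50)  # next corpus = the model's outputs
    if generation % 5 == 0:
        print(f"generation {generation}: std = {sigma:.3f}")
# The fitted sigma performs a noisy multiplicative walk that shrinks in
# expectation: a minimal analog of diversity loss under self-referential
# retraining.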
Another key aspect of the aesthetics of AI-generated artifacts is the very knowledge that the
artifact was created by generative AI, and how that knowledge influences the viewer’s percep-
tion [52]. As mentioned above, viewers often engage in “intention guessing,” and the presence
of human intention leads to enhanced perceptions of creativity and creative value [53,54,55,56].
However, as viewers increasingly anthropomorphize generative-AI systems by ascribing intention and agency to them, the credit given to various human actors may change. For in-
stance, we may witness decreasing perceived credit for the human artist or increasing perceived
credit for the creator of the technology [13]. Future work should explore ways to quantify and
increase output diversity, and study how generative-AI tools may influence aesthetics and aes-
thetic diversity. In addition, we need new ways of communicating about artist intention in AI
production.
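As one hedged starting point for such quantification, a simple diversity proxy is the mean pairwise distance between feature embeddings of a set of outputs; the sketch below assumes the embeddings (e.g., CLIP features of generated images) have already been computed.

import numpy as np

def diversity_score(embeddings):
    # Mean pairwise cosine distance over an (n, d) array of output embeddings;
    # higher values indicate a more diverse set of outputs.
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = x @ x.T
    off_diag = sims[~np.eye(len(x), dtype=bool)]
    return float(1.0 - off_diag.mean())

# Example with synthetic embeddings: a spread-out batch vs. a clustered one.
rng = np.random.default_rng(0)
broad = rng.standard_normal((64, 512))
narrow = broad[:1] + 0.05 * rng.standard_normal((64, 512))
print(diversity_score(broad), ">", diversity_score(narrow))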
The proliferation of AI-generated content is embedded in a social media landscape where
users post content to platforms and these platforms serve content to other users through the fil-
ter of opaque, engagement-maximizing recommendation algorithms that leverage personalized
patterns gleaned from browsing behavior. The distinct logic of this technological context can
shift practices of both production and consumption. To increase visibility on these platforms,
creators might continue to prioritize the production of content that satisfies their perceptions of
what the algorithms will surface [57, 58, 59]. As both curation algorithms and content creators
try to maximize engagement, this may result in further homogenization of content [31]. How-
ever, some preliminary experiments [60] suggest that incorporating engagement metrics when
curating AI-generated content can, in some cases, diversify content. It remains an open ques-
tion what styles are amplified by recommender algorithms, and how that prioritization affects
the types of content creators make and share. Future work must explore the complex, dynamic
systems formed by the interplay between generative models, recommender algorithms, and so-
cial media platforms, and their resulting impact on aesthetics and conceptual diversity.
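The sketch below is a deliberately minimal model of that interplay (ours, with made-up dynamics): creators drift toward whatever style the engagement ranking surfaces, and the population's stylistic variance shrinks toward a noise floor.

import numpy as np

rng = np.random.default_rng(2)
styles = rng.standard_normal((100, 4))   # each creator's style vector
rewarded = np.zeros(4)                   # the style the ranking rewards most

for step in range(50):
    scores = -np.linalg.norm(styles - rewarded, axis=1)  # engagement proxy
    surfaced = styles[np.argsort(scores)[-10:]]          # top-ranked content
    # Creators imitate what gets surfaced, retaining some originality (noise).
    styles = (0.9 * styles + 0.1 * surfaced.mean(0)
              + 0.05 * rng.standard_normal(styles.shape))

print("mean style variance after adaptation:", styles.var(axis=0).mean())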
Legal Dimensions of Authorship
Generative AI’s reliance on training data to automate aspects of creation raises legal and ethical
challenges regarding authorship and thus should prompt technical research into the nature of
these systems [61, 62]. Copyright law must balance the benefits to creators, users of generative
AI tools, and society at large. In this section, we focus on two distinct (but related) legal
challenges. The first is the legal treatment of a model’s training data itself, and the second is the
legal treatment of the model’s outputs.
The legal treatment of training data

Several questions arise around the legal treatment of a model's training data:

1. Does collecting third-party data for training violate copyright?

2. How often do these models directly copy elements from the training data, versus creating entirely new works [71, 65]?
3. Even when models do not directly copy from existing works, how should artists’ individ-
ual styles be protected [72]?
4. What mechanisms could protect and compensate the artists whose work is used for train-
ing these models, or even allow them to opt out, while simultaneously allowing for new
cultural contributions from generative AI models?
Answering these questions and determining how copyright law should treat training data re-
quires substantial technical research to develop and understand the AI systems, social science
research to understand perceptions of similarity, and legal research to apply existing precedents
to novel technology. Of course, these points represent only an American legal perspective.¹

¹It is worth mentioning, though, that US fair-use laws are much more permissive than, for example, UK fair-use laws, so ultimately the coverage will depend on jurisdiction.
The legal treatment of model outputs
A related but distinct legal question is: who can claim legal ownership over the output of genera-
tive AI systems? Answering this requires understanding the creative contributions of a system’s
users versus other stakeholders, such as the system’s developers and creators of the training
data. AI developers could, for example, claim ownership over outputs through terms of use.
In contrast, if a user of a system (e.g., the prompter for text-to-image models or LLMs) has
engaged in a meaningfully creative way (e.g., the process is not fully automated, or does not
emulate specific works), then they might be considered as the default copyright holders. But
how substantial must users’ creative influence be for them to claim ownership? An important
exception arises when a major artistic element from the training data or a prompt is part of an
output, in which case the artist that owns the relevant work may claim that the output represents
a derivative work. How likely is it that major elements from the training data unintentionally
appear in the output? Ultimately, answering these questions involves studying not just the models themselves but also the creative process of using AI-based tools. And the answers to these questions may change as users gain more direct control through, e.g., painting interfaces.
Generative AI can also be used to deliberately emulate a specific existing work, either
through the use of prompt material or by fine-tuning the AI [65, 73]. The resulting outputs
could be characterized as derivative works over which the original artists can claim ownership,
although it may also be possible to reward prompt artists through compulsory licenses [70]
or joint ownership [74]. Copyright law does not usually protect artistic styles, but artists
may legally object to their names being associated with a certain style under misappropria-
tion laws [73]. Apportioning ownership over outputs requires studying the creative process of
using AI-based tools, and may become complex as algorithms provide more direct control to
users, for example, through painting interfaces.
Future research should examine the specific steps of the creative process, precisely which and how those steps might be impacted by generative AI tools, and the resulting effects on workplace requirements and activities of varying cognitive occupations [82]. For example, human-in-the-loop interactive paradigms
could both advance workers’ productivity with AI while highlighting areas for future tools to
better complement workers [84, 85].
Although these tools may threaten some occupations, they could increase the productivity of
others and perhaps create new ones. For example, historically, music automation technologies
enabled more musicians to create—even as earnings became more skewed [86]. Generative-AI systems can create hundreds of outputs per minute, which may greatly accelerate the creative process through
rapid ideation [87,40]. This may reduce production time, and thus reduce costs. In turn, demand
for creative work may increase (e.g., the same marketing budget now buys more ads). On the
other hand, this acceleration of ideation may undermine aspects of creativity by removing the
initial period of prototyping and envisioning associated with a tabula rasa. The impacts on
creativity of generative AI tools for ideation require continued thought and research, yet in
either case, the production of creative goods may become more efficient, leading to the same
amount of output with fewer workers. Furthermore, some work-for-hire occupations using
conventional tools, like illustration or stock photography, might face some displacement.
Several historical examples bear this out. Most notably, the Industrial Revolution enabled
mass-scale production of traditionally artisanal crafts (e.g., ceramics, textiles, and steelmaking)
with the labor of non-artisans [88]. In turn, hand-made goods became treated as luxury items
with increased artistic intention [89, 90]. Similarly, when photography reached the mainstream,
photographers replaced portraitists and documentary painters. And, the digitization of music
removed constraints of learning to physically manipulate instruments and enabled more com-
plex arrangements with many more contributors. By lowering the barrier to entry while simultaneously making creative tasks more efficient, these tools may change who can work as an artist, in which case employment may rise for artists even as average wages fall.
One approach relies on active techniques for provenance and authentication. For example, visual watermarks could indicate an image's source, but they are easily cropped out or manipulated. The C2PA protocol [93] involves cryptographically binding media with provenance metadata on when and where media is recorded and by whom. Cryptographic signatures attached to images in commercial cameras (e.g. Sony) currently serve to authenticate the images the camera captures at the moment of capture. However, digital signatures in metadata are not a panacea because they require widespread adoption across the media ecosystem.
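As a hedged illustration of the mechanism underlying such schemes (not the C2PA specification itself, which binds structured provenance manifests rather than raw bytes), the sketch below uses the Python cryptography library to sign media at capture time and verify it downstream.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At capture time, a key embedded in the camera signs the media bytes
# (real systems sign a manifest of content hashes and provenance metadata).
camera_key = Ed25519PrivateKey.generate()
image_bytes = b"<raw image bytes from the sensor>"  # placeholder payload
signature = camera_key.sign(image_bytes)

# Downstream, anyone holding the camera maker's public key can check that
# the bytes are unchanged since capture.
public_key = camera_key.public_key()
try:
    public_key.verify(signature, image_bytes)
    print("provenance verified")
except InvalidSignature:
    print("media was altered after signing")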
A second, complementary, approach relies on post-hoc machine learning and forensic anal-
ysis to passively identify statistical and physical artifacts left behind by media manipulation.
For example, learning-based forensic analysis techniques use machine learning to automati-
cally detect manipulated visual and auditory content (see e.g. [94]). However, these learning-
based approaches have been shown to be vulnerable to adversarial attacks [95] and context
shift [96]. Artifact-based techniques exploit low-level pixel artifacts introduced during synthe-
sis. But these techniques are vulnerable to counter-measures like recompression or additive
noise. Other approaches involve biometric features of an individual (e.g., the unique motion produced by the ears in synchrony with speech [97]) or behavioral mannerisms [98].
ric and behavioral approaches are robust to compression changes and do not rely on assump-
tions about the moment of media capture, but they do not scale well. However, they may be
vulnerable to future generative-AI systems that may adapt and synthesize individual biometric
signals.
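The sketch below is a toy version of artifact-based analysis (a drastic simplification of published spectral methods, not a usable detector): it measures how much of an image's energy sits at the highest spatial frequencies, where some synthesis pipelines leave periodic upsampling artifacts.

import numpy as np

def high_frequency_ratio(gray, cutoff=0.75):
    # Fraction of spectral energy beyond `cutoff` of the Nyquist radius for a
    # 2D grayscale array; unusually high values can flag upsampling artifacts.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    outer = radius > cutoff * min(h, w) / 2
    return float(spectrum[outer].sum() / spectrum.sum())

# Example: a smooth "natural-like" image vs. one with a checkerboard pattern
# of the kind naive upsampling can introduce.
rng = np.random.default_rng(0)
smooth = rng.standard_normal((128, 128)).cumsum(axis=0).cumsum(axis=1)
checker = smooth + 5 * (np.indices((128, 128)).sum(axis=0) % 2)
print(high_frequency_ratio(smooth), "<", high_frequency_ratio(checker))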
On social media platforms, people will need to consider whether the media they are con-
suming may be produced by generative AI. A body of scholarship on the science of misinformation has explored how to improve people's ability to discern which news headlines are true or false [99]. This work has focused primarily on why and how people come to believe misinforma-
tion and why they share it. Pennycook and Rand address these questions by examining the role
of reasoning and heuristics in people’s ability to discern truth from falsehood [99]. Fundamen-
tally, people can accurately distinguish between real and fake news when they are paying atten-
tion to accuracy, but distraction by social motivations can undermine discernment [100, 101].
Indeed, people’s attention is often focused away from accuracy when browsing social media and
instead attention is often oriented towards moral [102], emotional [103, 104] and sensational
content [105]. As a result, misinformation may spread faster than accurate information [106]
(though see caveats [107]). This body of scholarship raises the question: how will synthetic
media impact the extent to which people are susceptible to and spread misinformation?
Preliminary evidence suggests that photographs can influence people’s susceptibility to fake
news headlines without providing any direct visual evidence of the headlines’ claims [108].
While visual information is more powerful than textual information for inciting believability in
content, this only translates to a small increase in persuasiveness [109]. Furthermore, recent
research reveals that the visual components of synthetic media can be useful for identifying its
synthetic origins [110].
There are many other potential impacts of generative AI on the information environment
beyond the potential for explicitly faked photorealistic imagery or plausible-sounding audio.
Large language models (LLMs) that assist creators can generate fluent written content, without
reliable verification of the information in the output, or in service of a particular ideology [111]. While the focus of this paper, artistic creation, bleeds into these broader trends in content creation
writ large, these capabilities of LLMs ultimately introduce a set of issues distinct from image
generation that are beyond the scope of this paper. While the provenance and watermarking approaches discussed above may be useful for mitigation, future work is needed
to properly diagnose the impact of LLMs on the information environment and propose solutions.
With the proliferation of both AI-generated visual and written media, another principal con-
cern is how the increased amount of information will impact online environments. Lorenz-Spreen et al. [112] find that increases in the amount of information available can decrease collective at-
tention spans. The explosion of AI-generated content may in turn hamper society’s ability to
engage in collective discussion and action in important arenas such as climate and democracy.
Furthermore, generative AI systems may allow authoritarian governments or bad actors to mass-produce articles, blogs, and memes that drown out organic public discourse.
These concerns and approaches raise important research questions:
1. What is the role of platform interventions such as tracking source provenance and detect-
ing synthetic media downstream for governance and promoting trust [113]?
2. How does the proliferation of synthetic media affect trust in real media, such as unedited
journalistic photographs?
Research Questions Raised by Generative AI

Perceptions of generative AI systems:
- How do perceptions of the generative process affect attitudes towards artifacts produced by those systems?
- How do these perceptions affect attitudes towards various stakeholders involved in the generative AI systems?
- How to design systems that properly disclose the generative process and avoid misleading interpretations?

Attribution and training data:
- Does collecting third-party data for training violate copyright?
- How often do these models directly copy elements from the training data, versus creating entirely new works?
- Even when models do not directly copy from existing works, should artists' individual styles be protected, and, if so, how?
- What mechanisms could protect and compensate the artists whose work is used for training, or even allow them to opt out?

Future of creative work:
- How substantial must users' creative influence be for them to claim ownership?
- What constitutes meaningful human control in the context of generative AI, and what are the sites for increased artistic agency and meaningful human control?
- What are the specific steps of the creative process, and how will each of those steps be impacted by generative AI?
- How will generative AI tools impact ideation, and the quality of outputs?

Impacts on the media ecosystem:
- How do generative AI tools influence aesthetics and aesthetic diversity?
- How do generative models, recommendation algorithms and social media platforms interact?
- Will generative AI undermine attention spans?

Misinformation:
- How does the proliferation of synthetic media affect trust in authentically-captured media?
- What is the role of platform interventions such as tracking source provenance and detecting synthetic media downstream for governance and promoting trust?

Cross-cutting:
- How to measure the carbon impact of these systems and promote less resource use?
- How to prevent the exploitation of crowdworkers who work in service of developing these systems?
- How to prevent the concentration of market power towards a small number of entities?
One path forward is using generative AI earlier in the workflow for speculation and idea generation [127, 128, 40], or building algorithms that are explicitly designed to interact with distinct modes of human
creativity.
Every artistic medium mirrors and comments on the issues of its time, and contemporary AI-
generated art reflects present issues surrounding automation, corporate control, and the attention
economy. Ultimately, we express our humanity through art, so understanding and shaping the
impact of AI on creative expression is at the center of broader questions about its impact on
society.
The widespread adoption of generative AI is not inevitable. Rather, its uses and impacts
will be shaped by the collective decisions made by technology developers, users, regulators
and civil society. Therefore, new research into generative AI is required to ensure that the use of these technologies is beneficial. This research must engage with critical stakeholders, particularly artists and creative laborers themselves, many of whom actively engage with difficult questions at the vanguard of societal change.
References and Notes
1. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Om-
mer. High-resolution image synthesis with latent diffusion models. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695,
2022.
2. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N
Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in
neural information processing systems, 30, 2017.
3. Aaron Hertzmann. Can computers create art? In Arts, volume 7, page 18. MDPI, 2018.
5. Karen Paik. To Infinity and Beyond!: The Story of Pixar Animation Studios. Chronicle
Books, 2007.
6. Tom Sito. Moving Innovation: A History of Computer Animation. MIT Press, 2013.
7. Charlotte Cotton. Photograph as Contemporary Art (World of Art). Thames & Hudson,
2020.
8. Andy Grundberg, Helmut Erich Robert Gernsheim, Beaumont Newhall, and Naomi Rosenblum. History of photography. Encyclopedia Britannica, 2023.
10. Zachary C Lipton and Jacob Steinhardt. Troubling trends in machine learning scholarship.
arXiv preprint arXiv:1807.03341, 2018.
11. Byron Reeves and Clifford Nass. The media equation: How people treat computers, television, and new media like real people. Cambridge University Press, 1996.
12. David Watson. The rhetoric and reality of anthropomorphism in artificial intelligence.
Minds and Machines, 29(3):417–440, 2019.
13. Ziv Epstein, Sydney Levine, David G Rand, and Iyad Rahwan. Who gets credit for ai-
generated art? iScience, 23(9):101515, 2020.
14. Madeleine Clare Elish. Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 2019.
15. Michael F Cohen. Imagination amplification. IEEE Computer Graphics and Applications,
20(1):54–55, 2000.
16. Filippo Santoni de Sio and Jeroen Van den Hoven. Meaningful human control over au-
tonomous systems: A philosophical account. Frontiers in Robotics and AI, 5:15, 2018.
17. Memo Akten. Deep visual instruments: realtime continuous, meaningful human control
over deep neural networks for creative expression. PhD thesis, Goldsmiths, University of
London, 2021.
18. Nick Seaver. Algorithms as culture: Some tactics for the ethnography of algorithmic
systems. Big data & society, 4(2):2053951717738104, 2017.
19. Manav Raj, Justin Berg, and Rob Seamans. Artificial intelligence: The effect of AI dis-
closure on evaluations of creative content. arXiv preprint arXiv:2303.06217, 2023.
20. Ziv Epstein, Océane Boulais, Skylar Gordon, and Matt Groh. Interpolating GANs to scaffold autotelic creativity. In International Conference on Computational Creativity Casual Creators Workshop, 2020.
21. Skylar Gordon, Robert Mahari, Manaswi Mishra, and Ziv Epstein. Co-creation and ownership for AI radio. arXiv preprint arXiv:2206.00485, 2022.
22. Mario Klingemann, Simon Hudson, and Ziv Epstein. Botto: A decentralized autonomous
artist. In NeurIPS Machine Learning for Creativity and Design Workshop, 2021.
23. Scott Draves. The electric sheep screen-saver: a case study in aesthetic evolution. In
Proceedings of the 3rd European conference on Applications of Evolutionary Computing,
2005.
24. Jimmy Secretan, Nicholas Beato, David B D’Ambrosio, Adelein Rodriguez, Adam Camp-
bell, Jeremiah T Folsom-Kovarik, and Kenneth O Stanley. Picbreeder: A case study
in collaborative evolutionary exploration of design space. Evolutionary computation,
19(3):373–403, 2011.
25. Gene Kogan. Artist in the cloud: Towards an autonomous artist. In Neurips Machine
Learning for Creativity and Design Workshop, 2019.
26. Jérémie Rappaz, Michele Catasta, Robert West, and Karl Aberer. Latent structure in
collaboration: the case of Reddit R/place. In Proceedings of the International AAAI Con-
ference on Web and Social Media, volume 12, 2018.
27. Divya Shanmugam, Katie Lewis, Jose Javier Gonzalez-Ortiz, Agnieszka Kurant, and John
Guttag. At the intersection of deep learning and conceptual art: The end of signature.
arXiv preprint arXiv:2207.04312, 2022.
28. Eva Cetinic and James She. Understanding and creating art with AI: Review and out-
look. ACM Transactions on Multimedia Computing, Communications, and Applications
(TOMM), 18(2):1–22, 2022.
29. Thomas Poell, David B Nieborg, and Brooke Erin Duffy. Platforms and cultural produc-
tion. John Wiley & Sons, 2021.
30. Tim Hwang. Subprime attention crisis: Advertising and the time bomb at the heart of the
Internet. FSG originals, 2020.
33. Paul Bloom. Intention, history, and artifact concepts. Cognition, 60(1):1–29, 1996.
34. Leslie Snapper, Cansu Oranç, Angelina Hawley-Dolan, Jenny Nissel, and Ellen Winner.
Your kid could not have done that: Even untutored observers can discern intentionality
and structure in abstract expressionist art. Cognition, 137:154–165, 2015.
35. Laura Mariah Herman and Angel Hsing-Chi Hwang. In the eye of the beholder: A
viewer-defined conception of online visual creativity. New Media & Society, page
14614448221089604, 2022.
36. L Manovich and E Arielli. Artificial aesthetics: A critical guide to AI. Media and Design,
2021.
37. Renée Zachariou. Machine learning art: An interview with Memo Akten. Artnome.com,
16, 2018.
38. Vivian Liu and Lydia B Chilton. Design guidelines for prompt engineering text-to-image generative models. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pages 1–23, 2022.
39. Jon McCormack, Camilo Cruz Gambardella, Nina Rajcic, Stephen James Krol,
Maria Teresa Llano, and Meng Yang. Is writing prompts really making art? In Artifi-
cial Intelligence in Music, Sound, Art and Design: 12th International Conference, Evo-
MUSART 2023, Held as Part of EvoStar 2023, Brno, Czech Republic, April 12–14, 2023,
Proceedings, pages 196–211. Springer, 2023.
40. Amy Smith, Hope Schroeder, Ziv Epstein, Mike Cook, Simon Colton, and Andrew Lipp-
man. Trash to treasure: Using text-to-image models to inform the design of physical
artefacts. In The AAAI-23 Workshop on Creative AI Across Modalities, 2023.
41. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. Men
also like shopping: Reducing gender bias amplification using corpus-level constraints. In
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017.
42. Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. fairmlbook.org, 2019. http://www.fairmlbook.org.
43. Safiya Umoja Noble. Algorithms of Oppression. New York University Press, 2018.
44. Matthew Kay, Cynthia Matuszek, and Sean A Munson. Unequal representation and gender
stereotypes in image search results for occupations. In Proceedings of the 33rd Annual
ACM Conference on Human Factors in Computing Systems, pages 3819–3828, 2015.
45. Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D Sculley.
No classification without representation: Assessing geodiversity issues in open data sets
for the developing world. NeurIPS Workshop on Machine Learning for the Developing
World , 2017.
46. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna
Wallach, Hal Daumé III, and Kate Crawford. Datasheets for datasets. Communications of
the ACM, 64(12):86–92, December 2021.
47. Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell.
On the dangers of stochastic parrots: Can language models be too big? In Proceedings
of the ACM conference on Fairness, Accountability, and Transparency, pages 610–623,
2021.
48. Aditya Grover, Jiaming Song, Ashish Kapoor, Kenneth Tran, Alekh Agarwal, Eric
Horvitz, and Stefano Ermon. Bias correction of learned generative models via likelihood-
free importance weighting. In Neural Information Processing Systems (NeurIPS), 2019.
49. Alexandra Sasha Luccioni, Christopher Akiki, Margaret Mitchell, and Yacine Jernite.
Stable bias: Analyzing societal representations in diffusion models. arXiv preprint
arXiv:2303.11408, 2023.
50. Vongani H. Maluleke, Neerja Thakkar, Tim Brooks, Ethan Weber, Trevor Darrell,
Alexei A. Efros, Angjoo Kanazawa, and Devin Guillory. Studying bias in GANs through
the lens of race. In European Conference on Computer Vision (ECCV), 2022.
51. Tong Che, Yanran Li, Athul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized
generative adversarial networks. In International Conference on Learning Representa-
tions (ICLR), 2017.
52. Caterina Moruzzi. Should human artists fear AI?: A report on the perception of creative AI. In xCoAx 2020: the Eighth Conference on Computation, Communication, Aesthetics
& X, pages 170–185, 2020.
53. Ellen Yi-Luen Do, Mark D Gross, Bennett Neiman, and Craig Zimring. Intentions in and
relations among design drawings. Design studies, 21(5):483–503, 2000.
54. Rebecca Chamberlain, Caitlin Mullin, Bram Scheerlinck, and Johan Wagemans. Putting
the art in artificial: Aesthetic responses to computer-generated art. Psychology of Aesthet-
ics, Creativity, and the Arts, 12(2):177, 2018.
55. Justin Kruger, Derrick Wirtz, Leaf Van Boven, and T William Altermatt. The effort heuris-
tic. Journal of Experimental Social Psychology, 40(1):91–98, 2004.
56. Odette Da Silva, Nathan Crilly, and Paul Hekkert. How people’s appreciation of products
is affected by their knowledge of the designers’ intentions. 2015.
57. Rebecca Giblin and Cory Doctorow. Chokepoint Capitalism: How Big Tech and Big
Content Captured Creative Labor Markets and How We’ll Win Them Back. Beacon Press,
2022.
58. Sophie Bishop. Algorithmic experts: Selling algorithmic lore on YouTube. Social Media+
Society, 6(1):2056305119897323, 2020.
60. Ziv Epstein, Matthew Groh, Abhimanyu Dubey, and Alex Pentland. Social influence leads
to the formation of diverse local trends. Proceedings of the ACM on Human-Computer
Interaction, 5(CSCW2):1–18, 2021.
61. Jason K Eshraghian. Human ownership of artificial creativity. Nature Machine Intelli-
gence, 2(3):157–160, 2020.
62. Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A Lemley, and
Percy Liang. Foundation models and fair use. arXiv preprint arXiv:2303.15715, 2023.
63. Thomas Margoni and Martin Kretschmer. A deeper look into the EU text and data mining
exceptions: harmonisation, data ownership, and the future of technology. GRUR Interna-
tional, 71(8):685–701, 2022.
64. Pierre N Leval. Toward a fair use standard. Harvard Law Review, 103(5):1105–1136,
1990.
65. Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein.
Diffusion art or digital forgery? Investigating data replication in diffusion models. arXiv
preprint arXiv:2212.03860, 2022.
66. James Grimmelmann. Copyright for literate robots. Iowa Law Review, 101:657, 2015.
67. Benjamin LW Sobel. Artificial intelligence’s fair use crisis. Columbia Journal of Law &
Arts, 41:45, 2017.
68. Mark A Lemley and Bryan Casey. Fair learning. Texas Law Review, 99:743, 2020.
69. Saffron Huang and Divya Siddarth. Generative AI and the digital commons. arXiv preprint
arXiv:2303.11074, 2023.
70. Jessica Fjeld and Mason Kortz. Re: USPTO Request for Comments on Intellectual Prop-
erty Protection for Artificial Intelligence Innovation, 2020.
71. Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian
Tramer, Borja Balle, Daphne Ippolito, and Eric Wallace. Extracting training data from
diffusion models. arXiv preprint arXiv:2301.13188, 2023.
72. Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, and Ben Y Zhao.
Glaze: Protecting artists from style mimicry by text-to-image models. arXiv preprint
arXiv:2302.04222, 2023.
73. Andy Baio. Invasive diffusion: How one unwilling illustrator found herself turned into an
AI model, 2022.
74. Jessica Fjeld and Mason Kortz. Re: WIPO Conversation on Intellectual Property (IP) and
Artificial Intelligence (AI) , 2020.
75. Eli Berman, John Bound, and Stephen Machin. Implications of skill-biased technological
change: International evidence. The Quarterly Journal of Economics, 113(4):1245–1279,
1998.
76. Aaron Smith and Monica Anderson. Automation in everyday life. Pew Research Center,
2017.
77. Wassily Leontief. Machines and man. Scientific American, 187(3):150–164, 1952.
78. John Maynard Keynes. Economic possibilities for our grandchildren. Springer, 2010.
79. Daron Acemoglu and David Autor. Skills, tasks and technologies: Implications for em-
ployment and earnings. In Handbook of labor economics, volume 4, pages 1043–1171.
Elsevier, 2011.
80. David H Autor, Frank Levy, and Richard J Murnane. The skill content of recent techno-
logical change: An empirical exploration. NBER Working paper, 8377, 2001.
81. Carl Benedikt Frey and Michael A Osborne. The future of employment: How susceptible
are jobs to computerisation? Technological forecasting and social change, 114:254–280,
2017.
82. Morgan R Frank, David Autor, James E Bessen, Erik Brynjolfsson, Manuel Cebrian,
David J Deming, Maryann Feldman, Matthew Groh, José Lobo, Esteban Moro, et al.
Toward understanding the impact of artificial intelligence on labor. Proceedings of the
National Academy of Sciences, 116(14):6531–6539, 2019.
83. Erik Brynjolfsson, Tom Mitchell, and Daniel Rock. What can machines learn, and what
does it mean for occupations and the economy? In AEA papers and proceedings, volume
108, pages 43–47, 2018.
84. Jichen Zhu, Antonios Liapis, Sebastian Risi, Rafael Bidarra, and G Michael Young-
blood. Explainable AI for designers: A human-centered perspective on mixed-initiative
co-creation. In 2018 IEEE Conference on Computational Intelligence and Games (CIG),
pages 1–8. IEEE, 2018.
85. Elizabeth B-N Sanders and Pieter Jan Stappers. Co-creation and the new landscapes of
design. Co-design, 4(1):5–18, 2008.
86. Peter Knees, Andres Ferraro, and Moritz Hübler. Bias and feedback loops in music recom-
mendation: Studies on record label impact. In Workshop of Multi-Objective Recommender
Systems (MORS’22), in conjunction with the 16th ACM Conference on Recommender Sys-
tems, RecSys, volume 22, page 2022, 2022.
87. Neil Leach. Architecture in the Age of Artificial Intelligence: An introduction to AI for
architects. Bloomsbury Publishing, 2022.
88. Carl Benedikt Frey. The Technology Trap. Princeton University Press, 2019.
89. Laura Herman. Globalized creative economies: Rethinking local craft, provenance, and
platform design. Feminist Futures of Work, page 53.
90. Christoph Fuchs, Martin Schreier, and Stijn MJ Van Osselaer. The handmade effect:
What’s love got to do with it? Journal of marketing, 79(2):98–110, 2015.
91. Josh A Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, and Ka-
terina Sedova. Generative language models and automated influence operations: Emerg-
ing threats and potential mitigations. arXiv preprint arXiv:2301.04246, 2023.
92. Bobby Chesney and Danielle Citron. Deep fakes: A looming challenge for privacy,
democracy, and national security. Calif. L. Rev., 107:1753, 2019.
93. Leonard Rosenthol. C2PA: the world’s first industry standard for content provenance. In
Applications of Digital Image Processing XLV, volume 12226, page 122260P. SPIE, 2022.
94. Huy H Nguyen, Junichi Yamagishi, and Isao Echizen. Capsule-forensics: Using cap-
sule networks to detect forged images and videos. In IEEE International Conference on
Acoustics, Speech and Signal Processing, pages 2307–2311. IEEE, 2019.
95. Shehzeen Hussain, Paarth Neekhara, Malhar Jere, Farinaz Koushanfar, and Julian
McAuley. Adversarial deepfakes: Evaluating vulnerability of deepfake detectors to adver-
sarial examples. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3348–3357, 2021.
96. Matthew Groh. Identifying the context shift between test benchmarks and production data.
arXiv preprint arXiv:2207.01059, 2022.
97. Shruti Agarwal and Hany Farid. Detecting deep-fake videos from aural and oral dynamics.
In Workshop on Media Forensics at CVPR, 2021, pages 981–989, 2021.
98. Matyáš Boháček and Hany Farid. Protecting world leaders against deep fakes using fa-
cial, gestural, and vocal mannerisms. Proceedings of the National Academy of Sciences,
119(48):e2216035119, 2022.
99. Gordon Pennycook and David G Rand. The psychology of fake news. Trends in cognitive
sciences, 25(5):388–402, 2021.
100. Gordon Pennycook, Ziv Epstein, Mohsen Mosleh, Antonio Arechar, Dean Eckles, and
David Rand. Understanding and reducing the spread of misinformation online. ACR
North American Advances, 2020.
101. Ziv Epstein, Nathaniel Sirlin, Antonio Arechar, Gordon Pennycook, and David Rand. The
social media context interferes with truth discernment. Science Advances, 9(9):eabo6169,
2023.
102. William J Brady, Molly J Crockett, and Jay J Van Bavel. The MAD model of moral
contagion: The role of motivation, attention, and design in the spread of moralized content
online. Perspectives on Psychological Science, 15(4):978–1010, 2020.
103. William J Brady, Julian A Wills, John T Jost, Joshua A Tucker, and Jay J Van Bavel.
Emotion shapes the diffusion of moralized content in social networks. Proceedings of the
National Academy of Sciences, 114(28):7313–7318, 2017.
104. Jonah Berger and Katherine L Milkman. What makes online content viral? Journal of
marketing research, 49(2):192–205, 2012.
105. Ziv Epstein, Hause Lin, Gordon Pennycook, and David Rand. Quantifying attention
via dwell time and engagement in a social media browsing environment. arXiv preprint
arXiv:2209.10464, 2022.
106. Soroush Vosoughi, Deb Roy, and Sinan Aral. The spread of true and false news online.
Science, 359(6380):1146–1151, 2018.
107. Jonas L Juul and Johan Ugander. Comparing information diffusion mechanisms
by matching on cascade size. Proceedings of the National Academy of Sciences,
118(46):e2100786118, 2021.
108. Eryn J Newman, Maryanne Garry, Daniel M Bernstein, Justin Kantner, and D Stephen
Lindsay. Nonprobative photographs (or words) inflate truthiness. Psychonomic Bulletin
& Review, 19:969–974, 2012.
109. Chloe Wittenberg, Ben M Tappin, Adam J Berinsky, and David G Rand. The (minimal)
persuasive advantage of political video over text. Proceedings of the National Academy
of Sciences, 118(47):e2114388118, 2021.
110. Matthew Groh, Ziv Epstein, Chaz Firestone, and Rosalind Picard. Deepfake detection by
human crowds, machines, and machine-informed crowds. Proceedings of the National
Academy of Sciences, 119(1):e2110013119, 2022.
111. Maurice Jakesch, Advait Bhat, Daniel Buschek, Lior Zalmanson, and Mor Naaman. Co-
writing with opinionated language models affects users’ views. In Proceedings of the
2023 CHI Conference on Human Factors in Computing Systems, CHI ’23, New York,
NY, USA, 2023. Association for Computing Machinery.
112. Philipp Lorenz-Spreen, Bjarke Mørch Mønsted, Philipp Hövel, and Sune Lehmann. Ac-
celerating dynamics of collective attention. Nature Communications, 10(1):1759, 2019.
113. Hany Farid. Creating, using, misusing, and detecting deep fakes. Journal of Online Trust
and Safety, 1(4), 2022.
114. Grant Fergusson, Caitriona Fitzgerald, Chris Frascella, Megan Iorio, Tom McBrien, Calli
Schroeder, Ben Winters, and Enid Zhou. Generating harms: Generative AI’s impact &
paths forward. Electronic Privacy Information Center, 2023.
115. Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243, 2019.
116. Laura Westra and Bill Lawson. Faces of environmental racism: Confronting issues of
global justice. Rowman & Littlefield Publishers, 2001.
117. Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quanti-
fying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700, 2019.
118. Lasse F Wolff Anthony, Benjamin Kanding, and Raghavendra Selvan. Carbontracker:
Tracking and predicting the carbon footprint of training deep learning models. arXiv
preprint arXiv:2007.03051, 2020.
119. Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle
Pineau. Towards the systematic reporting of the energy and carbon footprints of machine
learning. The Journal of Machine Learning Research, 21(1):10039–10081, 2020.
120. Mary L Gray and Siddharth Suri. Ghost work: How to stop Silicon Valley from building a
new global underclass. Eamon Dolan Books, 2019.
121. P Kalluri. Don’t ask if artificial intelligence is good or fair, ask how it shifts power.
Nature., 2020.
122. Shoshana Zuboff. Big other: surveillance capitalism and the prospects of an information
civilization. Journal of Information Technology, 30(1):75–89, 2015.
123. Neil Savage. How AI and neuroscience drive each other forwards. Nature,
571(7766):S15–S15, 2019.
124. Terence Broad, Sebastian Berns, Simon Colton, and Mick Grierson. Active divergence
with generative deep learning–a survey and taxonomy. arXiv preprint arXiv:2107.05599,
2021.
125. Marvin Zammit, Antonios Liapis, and Georgios N Yannakakis. Seeding diversity into AI
art. 2022.
126. Aaron Hertzmann. Toward modeling creative processes for algorithmic painting. In Pro-
ceedings of the International Conference on Computational Creativity, 2022.
127. Ziv Epstein, Hope Schroeder, and Dava Newman. When happy accidents spark creativ-
ity: Bringing collaborative speculation to life with generative AI. In Proceedings of the
International Conference on Computational Creativity, 2022.
128. Simon Colton, Amy Smith, Sebastian Berns, Ryan Murdock, and Michael Cook. Gener-
ative search engines: Initial experiments. In Proceedings of the International Conference
on Computational Creativity, 2021.