https://doi.org/10.1007/s12599-023-00834-7

CATCHWORD

Generative AI

Stefan Feuerriegel • Jochen Hartmann • Christian Janiesch • Patrick Zschech

Received: 29 April 2023 / Accepted: 7 August 2023 / Published online: 12 September 2023
© The Author(s) 2023
Keywords Generative AI • Artificial intelligence • Decision support • Content creation • Information systems

Accepted after one revision by Susanne Strahringer.

S. Feuerriegel (✉)
LMU Munich and Munich Center for Machine Learning, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
e-mail: [email protected]

J. Hartmann
Technical University of Munich, TUM School of Management, Arcisstr. 21, 80333 Munich, Germany
e-mail: [email protected]

C. Janiesch
TU Dortmund University, Otto-Hahn-Str. 12, 44319 Dortmund, Germany
e-mail: [email protected]

P. Zschech
FAU Erlangen-Nürnberg, Lange Gasse 20, 90403 Nürnberg, Germany
e-mail: [email protected]

1 Introduction

Tom Freston is credited with saying "Innovation is taking two things that exist and putting them together in a new way". For a long time in history, it has been the prevailing assumption that artistic, creative tasks such as writing poems, creating software, designing fashion, and composing songs could only be performed by humans. This assumption has changed drastically with recent advances in artificial intelligence (AI) that can generate new content in ways that cannot be distinguished anymore from human craftsmanship.

The term generative AI refers to computational techniques that are capable of generating seemingly new, meaningful content such as text, images, or audio from training data. The widespread diffusion of this technology with examples such as Dall-E 2, GPT-4, and Copilot is currently revolutionizing the way we work and communicate with each other. Generative AI systems can not only be used for artistic purposes to create new text mimicking writers or new images mimicking illustrators, but they can and will assist humans as intelligent question-answering systems. Here, applications include information technology (IT) help desks where generative AI supports transitional knowledge work tasks and mundane needs such as cooking recipes and medical advice. Industry reports suggest that generative AI could raise global gross domestic product (GDP) by 7% and replace 300 million jobs of knowledge workers (Goldman Sachs 2023). Undoubtedly, this has drastic implications not only for the Business & Information Systems Engineering (BISE) community, where we will face revolutionary opportunities, but also challenges and risks that we need to tackle and manage to steer the technology and its use in a responsible and sustainable direction.

In this Catchword article, we provide a conceptualization of generative AI as an entity in socio-technical systems and provide examples of models, systems, and applications. Based on that, we introduce limitations of current generative AI and provide an agenda for BISE research.
Previous papers discuss generative AI around specific methods such as language models (e.g., Teubner et al. 2023; Dwivedi et al. 2023; Schöbel et al. 2023) or specific applications such as marketing (e.g., Peres et al. 2023), innovation management (Burger et al. 2023), scholarly research (e.g., Susarla et al. 2023; Davison et al. 2023), and education (e.g., Kasneci et al. 2023; Gimpel et al. 2023). Different from these works, we focus on generative AI in the context of information systems, and, to this end, we discuss several opportunities and challenges that are unique to the BISE community and make suggestions for impactful directions for BISE research.

Generative AI is primarily based on generative modeling, which has distinctive mathematical differences from discriminative modeling (Ng and Jordan 2001) often used in data-driven decision support. In general, discriminative modeling tries to separate data points X into different classes Y by learning decision boundaries between them (e.g., in classification tasks with Y ∈ {0, 1}). In contrast to that, generative modeling aims to infer some actual data distribution. Examples can be the joint probability distribution P(X, Y) of both the inputs and the outputs or P(Y), but where Y is typically from some high-dimensional space. By doing so, a generative model offers the ability to produce new synthetic samples (e.g., generate new observation-target-pairs (X, Y) or new observations X given a target value Y) (Bishop 2006).
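To make this distinction tangible, the following minimal sketch fits a generative model (class-conditional Gaussians for P(X | Y) together with P(Y)) and a discriminative classifier on the same toy data; the data, the variable names, and the Gaussian assumption are illustrative choices of ours and not part of the original formalism:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy training data: two classes Y in {0, 1} with two-dimensional inputs X
    X0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(500, 2))
    X1 = rng.normal(loc=[2.0, 0.0], scale=1.0, size=(500, 2))
    X = np.vstack([X0, X1])
    Y = np.array([0] * 500 + [1] * 500)

    # Generative view: estimate the joint distribution P(X, Y) = P(X | Y) P(Y)
    # with one Gaussian per class, which makes it possible to sample new (X, Y) pairs.
    p_y = np.array([np.mean(Y == c) for c in (0, 1)])
    means = [X[Y == c].mean(axis=0) for c in (0, 1)]
    covs = [np.cov(X[Y == c], rowvar=False) for c in (0, 1)]

    def sample_pairs(n):
        ys = rng.choice([0, 1], size=n, p=p_y)  # Y ~ P(Y)
        xs = np.array([rng.multivariate_normal(means[y], covs[y]) for y in ys])  # X ~ P(X | Y)
        return xs, ys

    new_X, new_Y = sample_pairs(5)  # synthetic observation-target pairs

    # Discriminative view: learn only the decision boundary P(Y | X);
    # it can classify the synthetic points but cannot generate new observations X.
    clf = LogisticRegression().fit(X, Y)
    print(clf.predict(new_X), new_Y)

The sketch highlights the asymmetry: the generative estimate can produce new synthetic observation-target pairs, whereas the discriminative model only separates given inputs into classes.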
Building upon the above, a generative AI model refers to generative modeling that is instantiated with a machine learning architecture (e.g., a deep neural network) and, therefore, can create new data samples based on learned patterns.¹ Further, a generative AI system encompasses the entire infrastructure, including the model, data processing, and user interface components. The model serves as the core component of the system, which facilitates interaction and application within a broader context. Lastly, generative AI applications refer to the practical use cases and implementations of these systems, such as search engine optimization (SEO) content generation or code generation that solve real-world problems and drive innovation across various domains. Figure 1 shows a systematization of generative AI across selected data modalities (e.g., text, image, and audio) and the model-, system-, and application-level perspectives, which we detail in the following section.

¹ It should be noted, however, that advanced generative AI models are often not based on a single modeling principle or learning mechanism, but combine different approaches. For example, language models from the GPT family first apply a generative pre-training stage to capture the distribution of language data using a language modeling objective, while downstream systems typically then apply a discriminative fine-tuning stage to adapt the model parameters to specific tasks (e.g., document classification, question answering). Similarly, ChatGPT combines techniques from generative modeling together with discriminative modeling and reinforcement learning (see Fig. 2).

Note that the modalities in Fig. 1 are neither complete nor entirely distinctive and can be detailed further. In addition, many unique use cases such as, for example, modeling functional properties of proteins (Unsal et al. 2022) can be represented in another modality such as text.

A generative AI model is a type of machine learning architecture that uses AI algorithms to create novel data instances, drawing upon the patterns and relationships observed in the training data. A generative AI model is of critically central yet incomplete nature, as it requires further fine-tuning to specific tasks through systems and applications.

Deep neural networks are particularly well suited for the purpose of data generation, especially as deep neural networks can be designed using different architectures to model different data types (Janiesch et al. 2021; Kraus et al. 2020), for example, sequential data such as human language or spatial data such as images. Table 1 presents an overview of the underlying concepts and model architectures that are common in the context of generative AI, such as diffusion probabilistic models for text-to-image generation or the transformer architecture and (large) language models (LLMs) for text generation. GPT (short for generative pre-trained transformer), for example, represents a popular family of LLMs, used for text generation, for instance, in the conversational agent ChatGPT.

Large generative AI models that can model output in and across specific domains or specific data types in a comprehensive and versatile manner are oftentimes also called foundation models (Bommasani et al. 2021). Due to their size, they exhibit two key properties: emergence, meaning the behavior is oftentimes implicitly induced rather than explicitly constructed (e.g., GPT models can create calendar entries in the .ical format even though such models were not explicitly trained to do so), and homogenization, where a wide range of systems and applications can now be powered by a single, consolidated model (e.g., Copilot can generate source code across a wide range of programming languages).

Figure 1 presents an overview of generative AI models along different, selected data modalities, which are pre-trained on massive amounts of data. Note that we structure the models in Fig. 1 by their output modality such as X-to-text or X-to-image. For example, GPT-4 as the most recent generative AI model underlying OpenAI's popular conversational agent ChatGPT (OpenAI 2023a) accepts both image and text inputs to generate text outputs. Similarly,
Table 1 Overview of concepts and model architectures common in the context of generative AI

Diffusion probabilistic models: Diffusion probabilistic models are a class of latent variable models that are common for various tasks such as image generation (Ho et al. 2020). Formally, diffusion probabilistic models capture the image data by modeling the way data points diffuse through a latent space, which is inspired by statistical physics. Specifically, they typically use Markov chains trained with variational inference and then reverse the diffusion process to generate a natural image. A notable variant is Stable Diffusion (Rombach et al. 2022). Diffusion probabilistic models are also used in commercial systems such as DALL-E and Midjourney.

Generative adversarial network: A GAN is a class of neural network architecture with a custom, adversarial learning objective (Goodfellow et al. 2014). A GAN consists of two neural networks that contest with each other in the form of a zero-sum game, so that samples from a specific distribution can be generated. Formally, the first network G is called the generator, which generates candidate samples. The second network D is called the discriminator, which evaluates how likely the candidate samples come from a desired distribution. Thanks to the adversarial learning objective, the generator learns to map from a latent space to a data distribution of interest, while the discriminator distinguishes candidates produced by the generator from the true data distribution (see Fig. 2).

(Large) language model: A (large) language model (LLM) refers to neural networks for modeling and generating text data that typically combine three characteristics. First, the language model uses a large-scale, sequential neural network (e.g., a transformer with an attention mechanism). Second, the neural network is pre-trained through self-supervision in which auxiliary tasks are designed to learn a representation of natural language without risk of overfitting (e.g., next-word prediction). Third, the pre-training makes use of large-scale datasets of text (e.g., Wikipedia, or even multi-language datasets). Eventually, the language model may be fine-tuned by practitioners with custom datasets for specific tasks (e.g., question answering, natural language generation). Recently, language models have evolved into so-called LLMs, which combine billions of parameters. Prominent examples of massive LLMs are BERT (Devlin et al. 2018) and GPT-3 (Brown et al. 2020) with 340 million and 175 billion parameters, respectively.

Reinforcement learning from human feedback: RLHF learns sequential tasks (e.g., chat dialogues) from human feedback. Different from traditional reinforcement learning, RLHF directly trains a so-called reward model from human feedback and then uses the model as a reward function to optimize the policy, which is optimized through data-efficient and robust algorithms (Ziegler et al. 2019). RLHF is used in conversational systems such as ChatGPT (OpenAI 2022) for generating chat messages, such that new answers accommodate the previous chat dialogue and ensure that the answers are in alignment with predefined human preferences (e.g., length, style, appropriateness).

Prompt learning: Prompt learning is a method for LLMs that uses the knowledge stored in language models for downstream tasks (Liu et al. 2023). In general, prompt learning does not require any fine-tuning of the language model, which makes it efficient and flexible. A prompt is a specific input to a language model (e.g., "The movie was superb. Sentiment: "), and then the most probable output s ∈ {"positive", "negative"} is picked to fill the blank space. Recent advances allow for more complex data-driven prompt engineering, such as tuning prompts via reinforcement learning (Liu et al. 2023).

seq2seq: The term sequence-to-sequence (seq2seq) refers to machine learning approaches where an input sequence is mapped onto an output sequence (Sutskever et al. 2014). An example is machine learning-based translation between different languages. Such seq2seq approaches consist of two main components: An encoder turns each element in a sequence (e.g., each word in a text) into a corresponding hidden vector containing the element and its context. The decoder reverses the process, turning the vector into an output element (e.g., a word from the new language) while considering the previous output to model dependencies in language. The idea of seq2seq models has been extended to allow for multi-modal mappings such as text-to-image or text-to-speech mappings.

Transformer: A transformer is a deep learning architecture (Vaswani et al. 2017) that adopts the mechanism of self-attention, which differentially weights the importance of each part of the input data. Like recurrent neural networks (RNNs), transformers are designed to process sequential input data, such as natural language, with applications for tasks such as translation and text summarization. However, unlike RNNs, transformers process the entire input all at once. The attention mechanism provides context for any position in the input sequence. Eventually, the output of a transformer (or an RNN in general) is a document embedding, which presents a lower-dimensional representation of text (or other input) sequences where similar texts are located in closer proximity, which typically benefits downstream tasks as this allows to capture semantics and meaning (Siebers et al. 2022).

Variational autoencoder: A variational autoencoder (VAE) is a type of neural network that is trained to learn a low-dimensional representation of the input data by encoding it into a compressed latent variable space and then reconstructing the original data from this compressed representation. VAEs differ from traditional autoencoders by using a probabilistic approach to the encoding and decoding process, which enables them to capture the underlying structure and variation in the data and generate new data samples from the learned latent space (Kingma and Welling 2013). This makes them useful for tasks such as anomaly detection and data compression but also image and text generation.

Zero-shot learning / few-shot learning: Zero-shot learning and few-shot learning refer to different paradigms of how machine learning deals with the problem of data scarcity. Zero-shot learning is when a machine is taught how to learn a task from data without ever needing to access the data itself, while few-shot learning refers to when there are only a few specific examples. Zero-shot learning and few-shot learning are often desirable in practice as they reduce the cost of setting up AI systems. LLMs are few-shot or zero-shot learners (Brown et al. 2020) as they just need a few samples to learn a task (e.g., predicting the sentiment of reviews), which makes LLMs highly flexible as a general-purpose tool.
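To make the prompt-learning idea from Table 1 concrete, the following sketch scores the two candidate outputs under a small causal language model and picks the more probable one; the choice of the gpt2 checkpoint and the likelihood shortcut via the model's averaged loss are illustrative assumptions of ours, not prescribed by the article:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The movie was superb. Sentiment:"
    candidates = [" positive", " negative"]  # the output space for s

    def total_log_likelihood(text):
        # The model reports the mean token-level negative log-likelihood as loss;
        # rescaling by the number of predicted tokens yields a sequence-level score.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return -loss.item() * (ids.shape[1] - 1)

    # The prompt tokens contribute identically to both scores, so the comparison
    # effectively picks the most probable continuation s given the prompt.
    best = max(candidates, key=lambda s: total_log_likelihood(prompt + s))
    print(best.strip())  # expected to print "positive" for this prompt

No fine-tuning of the language model is involved; the downstream task is solved purely by scoring candidate completions of the prompt.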
Fig. 2 Examples of different training procedures for generative AI models. a Generative adversarial network (GAN). b Reinforcement learning
from human feedback (RLHF) as used in conversational generative AI models
ChatGPT system in November 2022, ChatGPT's ease of use also for non-expert users was a core contributing factor to its explosive worldwide adoption.

Moreover, on the system level, multiple components of a generative AI system can be integrated or connected to other systems, external databases with domain-specific knowledge, or platforms. For example, common limitations in many generative AI models are that they were trained on historical data with a specific cut-off date and thus do not store information beyond it, or that an information compression takes place because of which generative AI models may not remember everything that they saw during training (Chiang 2023). Both limitations can be mitigated by augmenting the model with functionality for real-time information retrieval, which can substantially enhance its accuracy and usefulness. Relatedly, in the context of text generation, online language modeling addresses the problem of outdated models by continuously training them on up-to-date data.² Thereby, such models can then be knowledgeable of recent events that their static counterparts would not be aware of due to their training cut-off dates.

² See https://github.com/huggingface/olm-datasets (accessed 25 Aug 2023) for a script that enables users to pull up-to-date data from the web for training online language models, for instance, from Common Crawl and Wikipedia.
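As an illustration of such retrieval augmentation on the system level, the sketch below grounds a prompt in up-to-date documents before calling a generative model; the generate() helper, the toy documents, and the keyword-overlap retriever are hypothetical stand-ins for a real model API and for embedding-based search:

    import datetime

    def generate(prompt):
        # Hypothetical wrapper around a generative AI model (e.g., a hosted LLM API).
        raise NotImplementedError("plug in a call to your generative model here")

    # Tiny in-memory knowledge base with up-to-date, domain-specific entries (toy data).
    documents = [
        "2023-08-01: The IT help desk migrated to a new ticketing system.",
        "2023-08-20: VPN access now requires multi-factor authentication.",
    ]

    def retrieve(query, k=1):
        # Naive keyword-overlap scoring; production systems would use vector search.
        def overlap(doc):
            return len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(documents, key=overlap, reverse=True)[:k]

    def answer(query):
        context = "\n".join(retrieve(query))
        prompt = (
            f"Today is {datetime.date.today()}.\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            "Answer using only the information in the context above."
        )
        return generate(prompt)

    # Example call (requires generate() to be implemented):
    # print(answer("Why does my VPN login ask for a second factor?"))

The design point is that freshness comes from the retrieved context rather than from the model weights, which sidesteps the training cut-off limitation described above.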
2.2.3 Application-Level View

Generative AI applications are generative AI systems situated in organizations to deliver value by solving dedicated business problems and addressing stakeholder needs. They can be regarded as human-task-technology systems or information systems that use generative AI technology to augment human capacities to accomplish specific tasks. This level of generative AI encompasses countless real-world use cases: These range from SEO content generation (Reisenbichler et al. 2022), over synthetic movie generation (Metz 2023) and AI music generation (Garcia 2023), to natural language-based software development (Chen et al. 2021).

Generative AI applications will give rise to novel technology-enabled modes of work. The more users will familiarize themselves with these novel applications, the more they will trust or mistrust them as well as use or disuse them. Over time, applications will likely transition from mundane tasks such as writing standard letters and getting a dinner reservation to more sensitive tasks such as soliciting medical or legal advice. They will involve more consequential decisions, which may even involve moral judgment (Krügel et al. 2023). This ever-increasing scope and pervasiveness of generative AI applications give rise to an imminent need not only to provide prescriptions and principles for trustworthy and reliable designs, but also for scrutinizing the effects on the user to calibrate qualities such as trust appropriately. The (continued) use and adoption of such applications by end users and organizations entails a number of fundamental socio-technical considerations to descry innovation potential and affordances of generative AI artifacts.

2.3 A Socio-Technical View on Generative AI

As technology advances, the definition and extent of what constitutes AI are continuously refined, while the reference point of human intelligence stays comparatively constant (Berente et al. 2021). With generative AI, we are approaching a further point of refinement. In the past, the capability of AI was mostly understood to be analytic, suitable for decision-making tasks. Now, AI gains the capability to perform generative tasks, suitable for content creation. While the procedure of content creation to some respect can still be considered analytic as it is inherently probabilistic, its results can be creative or even artistic as generative AI combines elements in novel ways. Further, IT artifacts were considered passive as they were used directly by humans. With the advent of agentic IT artifacts (Baird and Maruping 2021) powered by LLMs (Park et al. 2023), this human agency primacy assumption needs to be revisited and impacts how we devise the relation between human and AI based on their potency. Eventually, this may require AI capability models to structure, explain, guide, and constrain the different abilities of AI systems and their uses as AI applications.

Focusing on the interaction between humans and AI, so far, for analytic AI, the concept of delegation has been discussed to establish a hierarchy for decision-making (Baird and Maruping 2021). With generative AI, a human uses prompts to engage with an AI system to create content, and the AI then interprets the human's intentions and provides feedback to presuppose further prompts. At first glance, this seems to follow a delegation pattern as well. Yet, the subsequent process does not, as the output of the AI can be suggestive to the other and will inform their further involvement directly or subconsciously. Thus, the process of creation rather follows a co-creation pattern, that is, the practice of collaborating in different roles to align and offer diverse insights to guide a design process (Ramaswamy and Ozcan 2018). Using the lens of agentic AI artifacts, initiation is not limited to humans.

The abovementioned interactions also impact our current understanding of hybrid intelligence as the integration of humans and AI, leveraging the unique strengths of both. Hybrid intelligence argues to address the limitations of each intelligence type by combining human intuition, creativity, and empathy with the computational power, accuracy, and scalability of AI systems to achieve enhanced decision-making and problem-solving capabilities (Dellermann et al. 2019). With generative AI and the AI's capability to co-create, the understanding of what constitutes this collective intelligence begins to shift. Hence, novel human-AI interaction models and patterns may become necessary to explain and guide the behavior of humans and AI systems to enable effective and efficient use in AI applications on the one hand and, on the other hand, to ensure envelopment of AI agency and reach (Asatiani et al. 2021).
On a theoretical level, this shift in human-computer or rather human-AI interaction fuels another important observation: The theory of mind is an established theoretical lens in psychology to describe the cognitive ability of individuals to understand and predict the mental states, emotions, and intentions of others (Carlson et al. 2013; Baron-Cohen 1997; Gray et al. 2007). This skill is crucial for social interactions, as it facilitates empathy and allows for effective communication. Moreover, conferring a mind to an AI system can substantially drive usage intensity (Hartmann et al. 2023a). The development of a theory of mind in humans is unconscious and evolves throughout an individual's life. The more natural AI systems become in terms of their interface and output, the more a theory of mind for human-computer interactions becomes necessary. Research is already investigating how AI systems can become theory-of-mind-aware to better understand their human counterpart (Rabinowitz et al. 2018; Çelikok et al. 2019). However, current AI systems hardly offer any cues for interactions. Thus, humans are rather void of a theory to explain their understanding of intelligent behavior by AI systems, which becomes even more important in a co-creation environment that does not follow a task delegation pattern. A theory of the artificial mind that explains how individuals perceive and assume the states and rationale of AI systems to better collaborate with them may alleviate some of these concerns.

3 Limitations of Current Generative AI

In the following, we discuss four salient boundaries of generative AI that, we argue, are important limitations in real-world applications. The following limitations are of a technical nature in that they refer to how current generative AI models make inferences, and, hence, the limitations arise at the model level. Because of this, it is likely that limitations will persist in the long run, with system- and application-level implications.

Incorrect outputs. Generative AI models may produce output with errors. This is owed to the underlying nature of machine learning models relying on probabilistic algorithms for making inferences. For example, generative AI models generate the most probable response to a prompt, not necessarily the correct response. As such, challenges arise as, by now, outputs are indistinguishable from authentic content and may present misinformation or deceive users (Spitale et al. 2023). In LLMs, this problem in emergent behavior is called hallucination (Ji et al. 2023), which refers to mistakes in the generated text that are semantically or syntactically plausible but are actually nonsensical or incorrect. In other words, the generative AI model produces content that is not based on any facts or evidence, but rather on its own assumptions or biases. Moreover, the output of generative AI, especially that of LLMs, is typically not easily verifiable.

The correctness of generative AI models is highly dependent on the quality of training data and the according learning process. Generative AI systems and applications can implement correctness checks to inhibit certain outputs. Yet, due to the black-box nature of state-of-the-art AI models (Rai 2020), the usage of such systems critically hinges on users' trust in reliable outputs. The closed source of commercial off-the-shelf generative AI systems aggravates this fact and prohibits further tuning and re-training of the models. One solution for addressing the downstream implications of incorrect outputs is to use generative AI to produce explanations or references, which can then be verified by users. However, such explanations are again probabilistic and thus subject to errors; nevertheless, they may help users in their judgment and decision-making when to accept outputs of generative AI and when not.

Bias and fairness. Societal biases permeate everyday human-generated content (Eskreis-Winkler and Fishbach 2022). The unbiasedness of vanilla generative AI is very much dependent on the quality of training data and the alignment process. Training deep learning models on biased data can amplify human biases, replicate toxic language, or perpetuate stereotypes of gender, sexual orientation, political leaning, or religion (e.g., Caliskan et al. 2017; Hartmann et al. 2023b). Recent studies expose the harmful biases embedded in multimodal generative AI models such as CLIP (contrastive language-image pre-training; Wolfe et al. 2022) and the CLIP-filtered LAION dataset (Birhane et al. 2021), which are core components of generative AI models (e.g., Dall-E 2 or Stable Diffusion). Human biases can also creep into the models in other stages of the model engineering process. For instruction-based language models, the RLHF process is an additional source of bias (OpenAI 2023b). Careful coding guidelines and quality checks can help address these risks.
Addressing bias and thus fairness in AI receives increasing attention in the academic literature (Dolata et al. 2022; Schramowski et al. 2022; Ferrara 2023; De-Arteaga et al. 2022; Feuerriegel et al. 2020; von Zahn et al. 2022), but remains an open and ongoing research question. For example, the developers of Stable Diffusion flag "probing and understanding the limitations and biases of generative models" as an important research area (Rombach et al. 2022). Some scholars even attest to models certain moral self-correcting capabilities (Ganguli et al. 2023), which may attenuate concerns of embedded biases and result in more fairness. In addition, on the system and application level, mitigation mechanisms can be implemented to address biases embedded in the deep learning models and create more diverse outputs (e.g., updating the prompts "under the hood" as done by Dall-E 2 to increase the demographic diversity of the outputs). Yet, more research is needed to get closer to the notion of fair AI.

Copyright violation. Generative AI models, systems, and applications may cause a violation of copyright laws because they can produce outputs that resemble or even copy existing works without permission or compensation to the original creators (Smits and Borghuis 2022). Here, two potential infringement risks are common. On the one hand, generative AI may make illegal copies of a work, thus violating the reproduction right of creators. Among others, this may happen when a generative AI was trained on original content that is protected by copyright but where the generative AI produces copies. Hence, a typical implication is that the training data for building generative AI systems must be free of copyrights. Crucially, copyright violation may nevertheless still happen even when the generative AI has never seen a copyrighted work before, such as, for example, when it simply produces a trademarked logo similar to that of Adidas but without ever having seen that logo before. On the other hand, generative AI may prepare derivative works, thus violating the transformation right of creators. To this end, legal questions arise around the balance of originality and creativity in generative AI systems. Along these lines, legal questions also arise around who holds the intellectual property for works (including patents) produced by a generative AI.

Environmental concerns. Lastly, there are substantial environmental concerns from developing and using generative AI systems due to the fact that such systems are typically built around large-scale neural networks, and, therefore, their development and operation consume large amounts of electricity with an immense negative carbon footprint (Schwartz et al. 2020). For example, training a generative AI model such as GPT-3 was estimated to have produced the equivalent of 552 t CO2, which amounts to the annual CO2 emissions of several dozens of households (Khan 2021). Owing to this, there are ongoing efforts in AI research to make the development and deployment of AI algorithms more carbon-friendly, through more efficient training algorithms, through compressing the size of neural network architectures, and through optimized hardware (Schwartz et al. 2020).

4 Implications and Future Directions for the BISE Community

In this section, we draw a number of implications and future research directions which, on the one hand, are of direct relevance to the BISE community as an application-oriented, socio-technical research discipline and, on the other hand, offer numerous research opportunities, especially for BISE researchers due to their interdisciplinary background. We organize our considerations according to the individual departments of the BISE journal (see Table 2 for an overview of exemplary research questions).

4.1 Business Process Management

Generative AI will have a strong impact on the field of Business Process Management (BPM) as it can assist in automating routine tasks, improving customer and employee satisfaction, and revealing process innovation opportunities (Beverungen et al. 2021), especially in creative processes (Haase and Hanel 2023). Concrete implications and research directions can be connected to various phases of the BPM lifecycle model (Vidgof et al. 2023). For example, in the context of process discovery, generative AI models could be used to generate process descriptions, which can help businesses identify and understand the different stages of a process (Kecht et al. 2023). From the perspective of business process improvement, generative process models could be used for idea generation and to support innovative process (re-)design initiatives (van Dun et al. 2023). In this regard, there is great potential for generative AI to contribute to both exploitative as well as explorative BPM design strategies (Grisold et al. 2022). In addition, natural language processing tasks related to BPM such as process extraction from text could benefit from generative AI without further fine-tuning using prompt engineering (Busch et al. 2023). Likewise, other phases can benefit owing to generative AI's ability to learn complex and non-linear relationships in dynamic business processes that can be used for implementation as well as in simulation and predictive process monitoring among other things.

In the short term, robotic process automation (van der Aalst et al. 2018; Herm et al. 2021) will benefit as formerly handcrafted processing rules can not only be replaced, but entirely new types of automation can be enabled by retrofitting and thus intelligentizing legacy software. In the long run, we also see a large potential to support the phase of business process execution in traditional BPM. Specifically, we anticipate the development of a new generation of process guidance systems. While traditional system designs are based on static and manually-crafted knowledge bases (Morana et al. 2019), more dynamic and adaptive systems are feasible on the basis of large enterprise-wide trained language models. Such systems could improve knowledge retrieval tasks from a wide variety of heterogeneous sources, including manuals, handbooks, e-mails, wikis, job descriptions, etc. This opens up new avenues of research into how unstructured and distributed organizational knowledge can be incorporated into intelligent process guidance systems.
Table 2 Overview of exemplary research questions, organized by the departments of the BISE journal

Business process management
How can generative AI assist in automating routine tasks?
How can generative AI reveal process innovation opportunities and support process (re-)design initiatives?

Decision analytics and data science
How can generative AI models be effectively fine-tuned for domain-specific applications?
How can the reliability of generative AI systems be improved?

Digital business management and digital leadership
How can generative AI support managerial tasks such as resource allocation?
How will the digital work of employees change with smart assistants powered by generative AI?

Economics of information systems
What are the welfare implications of generative AI?
Which jobs and tasks are affected most by generative AI?

Enterprise modeling and enterprise engineering
How can generative AI be used to support the construction and maintenance of enterprise models?
How can generative AI support in enterprise applications (e.g., CRM, BI, etc.)?

Human computer interaction and social computing
How should generative AI systems be designed to foster trust?
What countermeasures are effective to prevent users from falling for AI-generated disinformation?
To what extent can generative AI replace or augment crowdsourcing tasks?
How can generative AI assist in education?

Information systems engineering and technology
What are effective design principles for developing generative AI systems?
How can generative AI support design science projects to foster creativity in the development of new IT artifacts?
4.2 Decision Analytics and Data Science

Despite the huge progress in recent years, several analytical and technical questions around the development of generative AI have yet to be solved. One open question relates to how generative AI can be effectively customized for domain-specific applications and thus improve performance through higher degrees of contextualization. For example, novel and scalable techniques are needed to customize conversational agents based on generative AI for applications in medicine or finance. This will be crucial in practice to solve specific BISE-related tasks where customization may bring additional performance gains. Novel techniques for customization must be designed in a way that ensures the safety of proprietary data and prevents the data from being disclosed. Moreover, new frameworks are needed for prompt engineering that are designed from a user-centered lens and thus promote interpretability and usability.

Another important research direction is to improve the reliability of generative AI systems. For example, algorithmic solutions are needed on how generative AI can detect and mitigate hallucination. In addition to algorithmic solutions, more effort is also needed to develop user-centered solutions, that is, how users can reduce the risk of falling for incorrect outcomes, for example, by developing better ways how outputs can be verified (e.g., by offering additional explanations or references).

Finally, questions arise about how generative AI can natively support decision analytics and data science projects by closing the gap between modeling experts and domain users (Zschech et al. 2020). For instance, it is commonly known that many AI models used in business analytics are difficult to understand by non-experts (cf. Senoner et al. 2022). As a remedy, generative AI could be used to generate descriptions that explain the logic of business analytics models and thus make the decision logic more intelligible. One promising direction could be, for example, to use generative AI for translating post hoc explanations derived from approaches like SHAP or LIME into more intuitive textual descriptions or generate user-friendly descriptions of models that are intrinsically interpretable (Slack et al. 2023; Zilker et al. 2023).
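To sketch this last idea, the snippet below turns simple post hoc feature attributions into a plain-language explanation request for a generative model; the linear attribution rule is a stand-in for SHAP/LIME outputs, the toy churn data are invented, and verbalize_with_llm() is a hypothetical model call:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy business-analytics model: predict customer churn from two features.
    feature_names = ["support_tickets", "monthly_usage_hours"]
    X = np.array([[0, 40], [5, 10], [1, 35], [7, 5], [2, 30], [6, 8]], dtype=float)
    y = np.array([0, 1, 0, 1, 0, 1])
    model = LogisticRegression().fit(X, y)

    def attributions(x):
        # For a linear model, coefficient * (x - mean) matches the SHAP value of the
        # log-odds output under feature independence; it stands in for SHAP/LIME here.
        return model.coef_[0] * (x - X.mean(axis=0))

    def explanation_prompt(x):
        lines = [f"- {name}: {value:+.2f}"
                 for name, value in zip(feature_names, attributions(x))]
        return ("Explain in two plain-language sentences for a business user why the "
                "model flags this customer as a churn risk, given these feature "
                "contributions to the prediction:\n" + "\n".join(lines))

    prompt = explanation_prompt(np.array([6.0, 7.0]))
    print(prompt)
    # A generative AI system would then verbalize the attributions, e.g.:
    # answer = verbalize_with_llm(prompt)   # hypothetical call to a generative model

The generative model only rephrases attribution scores that were computed separately, which keeps the numeric explanation auditable while making it intelligible to domain users.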
4.3 Digital Business Management and Digital Leadership

Generative AI has great potential to contribute to different types of value creation mechanisms, including knowledge creation, task augmentation, and autonomous agency. However, this also requires the necessary organizational capabilities and conditions, where further research is needed to examine these ingredients more closely for the context of generative AI to steer the technological possibilities in a successful direction (Shollo et al. 2022).
That is, generative AI will lead to the development of new business ideas, unseen product and service innovations, and ultimately to the emergence of completely new business models. At the same time, it will also have a strong impact on intra-organizational aspects, such as work patterns, organizational structures, leadership models, and management practices. In this regard, we see AI-based assistant systems previously centered around desktop automation taking over more and more routine tasks such as event management, resource allocation, and social media account management to free up even more human capacity (Maedche et al. 2019). Further, in algorithmic management (Benlian et al. 2022; Cameron et al. 2023), it should be examined how existing theories and frameworks need to be contextualized or fundamentally extended in light of the increasingly powerful capabilities of generative AI.

However, there are not only implications at the management level. The future of work is very likely to change at all levels of an organization (Feuerriegel et al. 2022). Due to the multi-modality of generative AI models, it is conceivable that employees will work increasingly via smart, speech-based interfaces, whereby the formulation of prompts and the evaluation of their results could become a key activity. Against this background, it is worth investigating which new competencies are required to handle this emerging technology (cf. Debortoli et al. 2014) and which entirely new job profiles, such as prompt engineers, may evolve in the near future (Strobelt et al. 2023).

Generative AI is also expected to fundamentally reform the way organizations manage, maintain, and share knowledge. Referring to the sketched vision of a new process guidance system in Sect. 4.1, we anticipate a number of new opportunities for digital knowledge management, among others automated knowledge discovery based on large amounts of unstructured distributed data (e.g., identification of new product combinations), improved knowledge sharing by automating the process of creating, summarizing, and disseminating content (e.g., automated creation of wikis and FAQs in different languages), and personalized knowledge delivery to individual employees based on their specific needs and preferences (e.g., recommendations for specific training material).

4.4 Economics of Information Systems

Generative AI will have significant economic implications across various industries and markets. Generative AI can increase efficiency and productivity by automating many tasks that were previously performed by humans, such as content creation, customer service, code generation, etc. This can reduce costs and open up new opportunities for growth and innovation (Eloundou et al. 2023). For example, AI-based translation between different languages is responsible for significant economic gains (Brynjolfsson et al. 2019). The BISE community can contribute by providing quantification through rigorous causal evidence. Given the velocity of AI research, it may be necessary to take a more abstract problem view instead of a concrete tool view. For example, BISE research could run field experiments to compare programmers with and without AI support and thereby assess whether generative AI systems for coding can improve the speed and quality of code development. Similarly, researchers could test whether generative AI will make artists more creative as they can more easily create new content. A similar pattern was previously observed for AlphaGo, which has led humans to become better players in the board game Go (Shin et al. 2023).

Generative AI is likely to transform the industry as a whole. This may hold true in the case of platforms that make user-generated content available (e.g., shutterstock.com, pixabay.com, stackoverflow.com), which may be replaced by generative AI systems. Here, further research questions arise as to whether the use of generative AI can lead to a competitive advantage and how generative AI changes competition. For example, what are the economic implications if generative AI is developed as open-source vs. closed-source systems? In this regard, a salient success factor for the development of conversational agents based on generative AI (e.g., ChatGPT) is data from user interactions through dialogues and feedback on whether the dialog was helpful. Hence, the value of such interaction data is poorly understood, as is the question of what it means if such data are only available to a few Big Tech companies.

The digital transformation from generative AI also poses challenges and opportunities for economic policy. It may affect future work patterns and, indirectly, worker capability via restructured learning mechanisms. It may also affect content sharing and distribution and, hence, have non-trivial implications on the exploitation and protection of intellectual properties. On top of that, a growing concentration of power over AI innovation in the hands of a few companies may result in a monopoly of AI capabilities and hamper future innovation, fair competition, scientific progress, and thus welfare and human development at large. All of these future impacts are important to understand and provide meaningful directions for shaping economic policy.

4.5 Enterprise Modeling and Enterprise Engineering

Enterprise models are important artifacts for capturing insights into the core components and structures of an organization, including business processes, resources, information flows, and IT systems (Vernadat 2020).
A major drawback of traditional enterprise models is that they are static and may not provide the level of abstraction that is required by the end user. Likewise, their construction and maintenance are time-consuming and expensive and require manual effort and human expertise (Silva et al. 2021). We see a large potential that many of these limitations can be addressed by generative AI as assistive technology (Sandkuhl et al. 2018), for example by automatically creating and updating enterprise models at different levels of abstraction or generating multi-modal representations.

First empirical results suggest that generative AI is able to generate useful conceptual models based on textual problem descriptions. Fill et al. (2023) show that ER, BPMN, UML, and Heraklit models can not only be generated with very high to perfect accuracy from textual descriptions, but they also explored the interpretation of existing models and received good results. In the near future, we expect more research that deals with the development, evaluation, and application of more advanced approaches. Specifically, we expect that learned representations of enterprise models can be transformed into more application-specific formats and can either be enriched with further details or reduced to the essential content.
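As a flavor of what such text-to-model generation can look like in practice, the sketch below asks a generative model for a conceptual data model from a textual problem description; the prompt wording, the PlantUML target notation, and the generate() helper are illustrative assumptions rather than the approach of Fill et al. (2023):

    def generate(prompt):
        # Hypothetical wrapper around a generative AI model (e.g., a hosted LLM).
        raise NotImplementedError("plug in a call to your generative model here")

    problem_description = (
        "A customer places orders. Each order contains one or more order items, "
        "and each order item refers to exactly one product."
    )

    prompt = (
        "Translate the following problem description into a conceptual data model.\n"
        "Return only PlantUML class diagram code with classes, attributes, and "
        "associations including cardinalities.\n\n"
        f"Description: {problem_description}"
    )

    print(prompt)
    # plantuml_model = generate(prompt)  # e.g., returns "@startuml ... @enduml"

Because the output is plain modeling notation, it can be rendered, validated, and iteratively refined by modeling experts rather than accepted as-is.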
Against this background, the concept of "digital twins", virtual representations of enterprise assets, may experience new accentuation and extensions (Dietz and Pernul 2020). Especially in the public sector, where most organizational assets are non-tangible in the form of defined services, specified procedures, legal texts, manuals, and organizational charts, generative AI can play a crucial role in digitally mirroring and managing such assets along their lifecycles. Similar benefits could be explored with physical assets in Industry 4.0 environments (Lasi et al. 2014).

In enterprise engineering, the role of generative AI systems in existing as well as newly emerging IT landscapes to support the business goals and strategies of an organization gives rise to numerous opportunities (e.g., in office solutions, customer relationship management and business analytics applications, knowledge management systems, etc.). Generative AI systems have the potential to evolve into core enterprise applications that can either be hosted on-premise or rented in the cloud. Unsanctioned use bears the risk that third-party applications will be used for job-related tasks without explicit approval or even knowledge of the organization. This phenomenon is commonly known as shadow IT, and theories and frameworks have been proposed to explain this phenomenon, as well as recommending actions and policies to mitigate associated risks (cf. Haag and Eckhardt 2017; Klotz et al. 2022). In the light of generative AI, however, such approaches have to be revisited for their applicability and effectiveness and, if necessary, need to be extended. Nevertheless, this situation also offers the potential to explore and design new approaches for more effective API management (e.g., including novel app store solutions, privacy and security mechanisms, service level definitions, pricing, and licensing models) so that generative AI solutions can be smoothly integrated into existing enterprise IT infrastructures without risking any unauthorized use and confidentiality breaches.

4.6 Human Computer Interaction and Social Computing

Salient behavioral questions related to the interactions between humans and generative AI systems are still unanswered. Examples are related to the perception, acceptance, adoption, and trust of systems using generative AI. A study found that news was believed less if generated by generative AI instead of humans (Longoni et al. 2022) and another found that there is a replicant effect (Jakesch et al. 2019). Such behavior is likely to be context-specific and will vary by other antecedents, highlighting the need for a principled theoretical foundation to build successful generative AI systems. The BISE community is well positioned to develop rigorous design recommendations.

Further, generative AI is a key enabler for developing high-quality interfaces for information systems based on natural language that promote usability and accessibility. For example, such interfaces will not only make interactions more intuitive but will also facilitate people with disabilities. Generative AI is likely to increase the "degree of intelligence" of user assistance systems. However, the design of effective interactions must also be considered when increasing the degree of intelligence (Maedche et al. 2016). Similarly, generative AI will undoubtedly have an impact on (computer-mediated) communication and collaboration, such as within companies. For example, generative AI can create optimized content for social media, emails, and reports. It can also help to improve the onboarding of new employees by creating personalized and interactive training materials. It can also enhance collaboration within teams by providing creative and intelligent conversational agents that suggest, summarize, and synthesize information based on the context of the team (e.g., automated meeting notes).

Several applications and research opportunities are related to the use of generative AI in marketing and, especially, e-commerce. It is expected that generative AI can automate the creation of personalized marketing content, for instance, different sales slogans for introverts vs. extroverts (Matz et al. 2017) or other personality traits, as personalized marketing content is more effective than a one-content-fits-all approach (Matz et al. 2023).
Generative AI may automate various tasks in marketing and media where content generation is needed (e.g., writing news stories, summarizing web pages for mobile devices, creating thumbnail images for news stories, translating written news to audio for blind people and Braille-supported formats for deaf people) that may be studied in future research. Moreover, generative AI may be used in recommender systems to boost the effectiveness of information dissemination through personalization as content can be tailored better to the abilities of the recipient.

The education sector is another example that will need to reinvent itself in some parts following the availability of conversational agents (Kasneci et al. 2023; Gimpel et al. 2023). At first glance, generative AI seems to constitute an unauthorized aid that jeopardizes student grading so far relying on written examinations and term papers. However, over time, examinations will adapt, and generative AI will enable the development of comprehensive digital teaching assistants as well as the creation of supplemental teaching material such as teaching cases and recap questions. Further, the educators' community will need to develop novel guidelines and governance frameworks that educate learners to rely appropriately on generative AI systems, how to verify model outputs, and to engineer prompts rather than the output itself.

In addition, generative AI, specifically LLMs, can not only be used to spot harmful content on social media (e.g., Maarouf et al. 2023), but it can also create realistic disinformation (e.g., fake news, propaganda) that is hard to detect by humans (Kreps et al. 2022; Jakesch et al. 2023). Notwithstanding, AI-generated disinformation has previously evolved as so-called deepfakes (Mirsky and Lee 2021), but recent advances in generative AI reduce the cost of creating such disinformation and allow for unprecedented personalization. For example, generative AI can automatically adapt the tone and narrative of misinformation to specific audiences that identify as extroverts or introverts, left- or right-wing partisans, or people with particular religious beliefs.

Lastly, generative AI can facilitate, or even replace, traditional crowdsourcing where annotations or other knowledge tasks are handled by a larger pool of crowd workers, for example in social media content annotation (Gilardi et al. 2023) or market research on willingness-to-pay for services and products (Brand et al. 2023). In general, we expect that generative AI will automate many other tasks, being a zero-shot / few-shot learner. However, this may also unfold negative implications: Users may contribute less to question-answering forums such as stackoverflow.com, which thus may reduce human-based knowledge creation, impairing the future performance of AI-based question-answering systems that rely upon human question-answering content for training. In a similar vein, the widespread availability of generative AI systems may also propel research around virtual assistants. Previously, research made use of "Wizard-of-Oz" experiments (Diederich et al. 2020), while future research may build upon generative AI systems instead.
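To indicate how such crowdsourcing-style annotation can be handed to a generative model, a minimal sketch follows; the label set, the example post, and the generate() helper are illustrative assumptions of ours:

    def generate(prompt):
        # Hypothetical wrapper around a generative AI model (e.g., a hosted LLM).
        raise NotImplementedError("plug in a call to your generative model here")

    LABELS = ["harmless", "harmful"]

    def annotate(post):
        prompt = (
            "You are a content annotator. Classify the following social media post "
            f"with exactly one of these labels: {', '.join(LABELS)}.\n\n"
            f"Post: {post}\n"
            "Label:"
        )
        label = generate(prompt).strip().lower()
        return label if label in LABELS else "needs_human_review"  # simple guardrail
  
    # Example call (requires generate() to be implemented):
    # print(annotate("This post is a scam link promising free crypto."))

A simple fallback to human review, as in the last line of annotate(), is one way to keep crowd workers in the loop for ambiguous cases rather than replacing them entirely.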
Crucially, automated content generation using generative AI is a new phenomenon, but automation in general and how people are affected by automated systems has been studied by scholars for decades. Thus, existing theories on the interplay of humans with automated systems may be contextualized to generative AI systems.

4.7 Information Systems Engineering and Technology

Generative AI offers many engineering- and technology-oriented research opportunities for the Information Systems community as a design-oriented discipline. This includes developing and evaluating design principles for generative AI systems and applications to extend the limiting boundaries of this technology (cf. Section 3). As such, design principles can focus on how generative AI systems can be made explainable to enable interpretability, understanding, and trust; how they can be designed to be reliable to avoid discrimination effects or privacy issues; and how they can be built more energy efficient to promote environmental sustainability (cf. Schoormann et al. 2023b). While a lot of research is already being conducted in technology-oriented disciplines such as computer science, the BISE community can add its strength by looking at design aspects through a socio-technical lens, involving individuals, teams, organizations, and societal groups in design activities, and thereby driving the field forward with new insights from a human–machine perspective (Maedche et al. 2019).

Further, we see great potential that generative AI can be leveraged to improve current practices in design science research projects when constructing novel IT artifacts (see Hevner et al. 2019). Here, one of the biggest potentials could lie in the support of knowledge retrieval tasks. Currently, design knowledge in the form of design requirements, design principles, and design features is often only available in encapsulated written papers or implicitly embedded in instantiated artifacts. Generative AI has the potential to extract such design knowledge that is spread over a broad body of interdisciplinary research and make it available in a collective form for scholars and practitioners. This could also overcome the limitation that design knowledge is currently rarely reused, which hampers the fundamental idea of knowledge accumulation in design science research (Schoormann et al. 2021).
initiating new design projects. In this regard, a promising indicated otherwise in a credit line to the material. If material is not
direction could be to incorporate generative AI in design included in the article’s Creative Commons licence and your intended
thinking and similar methodologies to combine human use is not permitted by statutory regulation or exceeds the permitted
use, you will need to obtain permission directly from the copyright
creativity with computational creativity (Hawlitschek
holder. To view a copy of this licence, visit http://creativecommons.
2023). This may support different phases and steps of org/licenses/by/4.0/.
innovation projects, such as idea generation, user needs
elicitation, prototyping, design evaluation, and design
automation, in which different types of generative AI References
models and systems could be used and combined with each
other to form applications for creative industries (e.g., Agostinelli A, Denk TI, Borsos Z, Engel J, Verzetti M, Caillon A,
Huang Q, Jansen A, Roberts A, Tagliasacchi M, et al (2023)
generated user stories with textual descriptions, visual
MusicLM: generating music from text. arXiv:2301.11325
mock-ups for user interfaces, and quick software proto- Asatiani A, Malo P, Nagbøl PR, Penttinen E, Rinta-Kahila T,
types for proofs-of-concept). If generative AI is used to co- Salovaara A (2021) Sociotechnical envelopment of artificial
create innovative outcomes, it may also enable better intelligence: an approach to organizational deployment of
inscrutable artificial intelligence systems. J Assoc Inf Syst
reflection of the different design activities to ensure the
22(2):8
necessary learning (Schoormann et al. 2023a). Baird A, Maruping LM (2021) The next generation of research on IS
use: a theoretical framework of delegation to and from agentic IS
artifacts. MIS Q 45(1):315–341
Baron-Cohen S (1997) Mindblindness: an essay on autism and theory
5 Conclusion

Generative AI is a branch of AI that can create new content, such as text, images, or audio, that increasingly often cannot be distinguished anymore from human craftsmanship. For this reason, generative AI has the potential to transform domains and industries that rely on creativity, innovation, and knowledge processing. In particular, it enables new applications that were previously impossible or impractical for automation, such as realistic virtual assistants, personalized education and services, and digital art. As such, generative AI has substantial implications for BISE practitioners and scholars as an interdisciplinary research community. In our Catchword article, we offered a conceptualization of the principles of generative AI along a model-, system-, and application-level view as well as a socio-technical view, and we described the limitations of current generative AI. Ultimately, we provided an impactful research agenda for the BISE community and thereby highlighted the manifold affordances that generative AI offers through the lens of the BISE discipline.
Acknowledgements During the preparation of this Catchword, we contacted all current department editors at BISE to actively seek their feedback on our suggested directions. We gratefully acknowledge their support.

Funding Open Access funding enabled and organized by Projekt DEAL.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References

Agostinelli A, Denk TI, Borsos Z, Engel J, Verzetti M, Caillon A, Huang Q, Jansen A, Roberts A, Tagliasacchi M, et al (2023) MusicLM: generating music from text. arXiv:2301.11325
Asatiani A, Malo P, Nagbøl PR, Penttinen E, Rinta-Kahila T, Salovaara A (2021) Sociotechnical envelopment of artificial intelligence: an approach to organizational deployment of inscrutable artificial intelligence systems. J Assoc Inf Syst 22(2):8
Baird A, Maruping LM (2021) The next generation of research on IS use: a theoretical framework of delegation to and from agentic IS artifacts. MIS Q 45(1):315–341
Baron-Cohen S (1997) Mindblindness: an essay on autism and theory of mind. MIT Press, Cambridge
Benlian A, Wiener M, Cram WA, Krasnova H, Maedche A, Möhlmann M, Recker J, Remus U (2022) Algorithmic management. Bus Inf Syst Eng 64(6):825–839. https://doi.org/10.1007/s12599-022-00764-w
Berente N, Gu B, Recker J, Santhanam R (2021) Special issue editor's comments: managing artificial intelligence. MIS Q 45(3):1433–1450
Beverungen D, Buijs JCAM, Becker J, Di Ciccio C, van der Aalst WMP, Bartelheimer C, vom Brocke J, Comuzzi M, Kraume K, Leopold H, Matzner M, Mendling J, Ogonek N, Post T, Resinas M, Revoredo K, del Río-Ortega A, La Rosa M, Santoro FM, Solti A, Song M, Stein A, Stierle M, Wolf V (2021) Seven paradoxes of business process management in a hyper-connected world. Bus Inf Syst Eng 63(2):145–156. https://doi.org/10.1007/s12599-020-00646-z
Birhane A, Prabhu VU, Kahembwe E (2021) Multimodal datasets: misogyny, pornography, and malignant stereotypes. arXiv:2110.01963
Bishop C (2006) Pattern recognition and machine learning. Springer, New York
Bommasani R, Hudson DA, Adeli E, Altman R, Arora S, von Arx S, Bernstein MS, Bohg J, Bosselut A, Brunskill E, Brynjolfsson E, Buch S, Card D, Castellon R, Chatterji NS, Chen AS, Creel KA, Davis J, Demszky D, Donahue C, Doumbouya M, Durmus E, Ermon S, Etchemendy J, Ethayarajh K, Fei-Fei L, Finn C, Gale T, Gillespie LE, Goel K, Goodman ND, Grossman S, Guha N, Hashimoto T, Henderson P, Hewitt J, Ho DE, Hong J, Hsu K, Huang J, Icard TF, Jain S, Jurafsky D, Kalluri P, Karamcheti S, Keeling G, Khani F, Khattab O, Koh PW, Krass MS, Krishna R, Kuditipudi R, Kumar A, Ladhak F, Lee M, Lee T, Leskovec J, Levent I, Li XL, Li X, Ma T, Malik A, Manning CD, Mirchandani SP, Mitchell E, Munyikwa Z, Nair S, Narayan A, Narayanan D, Newman B, Nie A, Niebles JC, Nilforoshan H, Nyarko JF, Ogut G, Orr L, Papadimitriou I, Park JS, Piech C, Portelance E, Potts C, Raghunathan A, Reich R, Ren H, Rong F, Roohani YH, Ruiz C, Ryan J, Ré C, Sadigh D, Sagawa S, Santhanam K, Shih A, Srinivasan KP, Tamkin A, Taori R, Thomas AW, Tramèr F, Wang RE, Wang W, Wu B, Wu J, Wu Y, Xie SM, Yasunaga M, You J, Zaharia MA, Zhang M, Zhang T, Zhang X, Zhang Y, Zheng L, Zhou K, Liang P (2021) On the opportunities and risks of foundation models. arXiv:2108.07258. https://doi.org/10.48550/arXiv.2108.07258
Brand J, Israeli A, Ngwe D (2023) Using GPT for market research. SSRN 4395751
Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A et al (2020) Language models are few-shot learners. Adv Neural Inf Process Syst 33:1877–1901
Brynjolfsson E, Hui X, Liu M (2019) Does machine translation affect international trade? Evidence from a large digital platform. Manag Sci 65(12):5449–5460
Burger B, Kanbach DK, Kraus S, Breier M, Corvello V (2023) On the use of AI-based tools like ChatGPT to support management research. Europ J Innov Manag 26(7):233–241. https://doi.org/10.1108/EJIM-02-2023-0156
Busch K, Rochlitzer A, Sola D, Leopold H (2023) Just tell me: prompt engineering in business process management. arXiv:2304.07183
Caliskan A, Bryson JJ, Narayanan A (2017) Semantics derived automatically from language corpora contain human-like biases. Science 356(6334):183–186
Cameron L, Lamers L, Leicht-Deobald U, Lutz C, Meijerink J, Möhlmann M (2023) Algorithmic management: its implications for information systems research. Commun AIS 52(1):518–537. https://doi.org/10.17705/1CAIS.05221
Carlson SM, Koenig MA, Harms MB (2013) Theory of mind. WIREs Cogn Sci 4:391–402
Çelikok MM, Peltola T, Daee P, Kaski S (2019) Interactive AI with a theory of mind. In: ACM CHI 2019 workshop: computational modeling in human-computer interaction, vol 80, pp 4215–4224
Chen L, Zaharia M, Zou J (2023) How is ChatGPT's behavior changing over time? arXiv:2307.09009
Chen M, Tworek J, Jun H, Yuan Q, Pinto HPdO, Kaplan J, Edwards H, Burda Y, Joseph N, Brockman G, et al (2021) Evaluating large language models trained on code. arXiv:2107.03374
Chiang T (2023) ChatGPT is a blurry JPEG of the web. https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web, accessed 25 Aug 2023
Davison RM, Laumer S, Tarafdar M, Wong LHM (2023) ISJ editorial: pickled eggs: generative AI as research assistant or co-author? Inf Syst J Early View. https://doi.org/10.1111/isj.12455
De-Arteaga M, Feuerriegel S, Saar-Tsechansky M (2022) Algorithmic fairness in business analytics: directions for research and practice. Prod Oper Manag 31(10):3749–3770
Debortoli S, Müller O, vom Brocke J (2014) Comparing business intelligence and big data skills. Bus Inf Syst Eng 6(5):289–300. https://doi.org/10.1007/s12599-014-0344-2
Dellermann D, Ebel P, Söllner M, Leimeister JM (2019) Hybrid intelligence. Bus Inf Syst Eng 61(5):637–643. https://doi.org/10.1007/s12599-019-00595-2
Devlin J, Chang MW, Lee K, Toutanova K (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805
Diederich S, Brendel AB, Kolbe LM (2020) Designing anthropomorphic enterprise conversational agents. Bus Inf Syst Eng 62(3):193–209
Dietz M, Pernul G (2020) Digital twin: empowering enterprises towards a system-of-systems approach. Bus Inf Syst Eng 62(2):179–184. https://doi.org/10.1007/s12599-019-00624-0
Dolata M, Feuerriegel S, Schwabe G (2022) A sociotechnical view of algorithmic fairness. Inf Syst J 32(4):754–818
van Dun C, Moder L, Kratsch W, Röglinger M (2023) ProcessGAN: supporting the creation of business process improvement ideas through generative machine learning. Decis Support Syst 165:113880. https://doi.org/10.1016/j.dss.2022.113880
Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, Baabdullah AM, Koohang A, Raghavan V, Ahuja M et al (2023) "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Inf Manag 71:102642
Eloundou T, Manning S, Mishkin P, Rock D (2023) GPTs are GPTs: an early look at the labor market impact potential of large language models. arXiv:2303.10130, accessed 03 April 2023
Eskreis-Winkler L, Fishbach A (2022) Surprised elaboration: when white men get longer sentences. J Personal Soc Psychol 123:941–956
Ferrara E (2023) Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv:2304.03738
Feuerriegel S, Dolata M, Schwabe G (2020) Fair AI: challenges and opportunities. Bus Inf Syst Eng 62:379–384
Feuerriegel S, Shrestha YR, von Krogh G, Zhang C (2022) Bringing artificial intelligence to business management. Nat Machine Intell 4(7):611–613
Fill HG, Fettke P, Köpke J (2023) Conceptual modeling and large language models: impressions from first experiments with ChatGPT. EMISAJ 18(3):1–15. https://doi.org/10.18417/emisa.18.3
Ganguli D, Askell A, Schiefer N, Liao T, Lukošiūtė K, Chen A, Goldie A, Mirhoseini A, Olsson C, Hernandez D, et al (2023) The capacity for moral self-correction in large language models. arXiv:2302.07459
Garcia T (2023) David Guetta replicated Eminem's voice in a song using artificial intelligence. https://variety.com/2023/music/news/david-guetta-eminem-artificial-intelligence-1235516924/, accessed 25 Aug 2023
Gilardi F, Alizadeh M, Kubli M (2023) ChatGPT outperforms crowd-workers for text-annotation tasks. arXiv:2303.15056
Gimpel H, Hall K, Decker S, Eymann T, Lämmermann L, Mädche A, Röglinger M, Ruiner C, Schoch M, Schoop M, et al (2023) Unlocking the power of generative AI models and systems such as GPT-4 and ChatGPT for higher education. https://digital.uni-hohenheim.de/fileadmin/einrichtungen/digital/Generative_AI_and_ChatGPT_in_Higher_Education.pdf, accessed 25 Aug 2023
Goldman Sachs (2023) Generative AI could raise global GDP by 7%. https://www.goldmansachs.com/insights/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. Adv Neural Inf Process Syst 27:2672–2680
Gray HM, Gray K, Wegner DM (2007) Dimensions of mind perception. Science 315(5812):619
Grisold T, Groß S, Stelzl K, vom Brocke J, Mendling J, Röglinger M, Rosemann M (2022) The five diamond method for explorative business process management. Bus Inf Syst Eng 64(2):149–166. https://doi.org/10.1007/s12599-021-00703-1
Haag S, Eckhardt A (2017) Shadow IT. Bus Inf Syst Eng 59(6):469–473. https://doi.org/10.1007/s12599-017-0497-x
Haase J, Hanel PHP (2023) Artificial muses: generative artificial intelligence chatbots have risen to human-level creativity. arXiv:2303.12003
Hartmann J, Bergner A, Hildebrand C (2023a) MindMiner: uncovering linguistic markers of mind perception as a new lens to understand consumer-smart object relationships. J Consum Psychol. https://doi.org/10.1002/jcpy.1381
Hartmann J, Schwenzow J, Witte M (2023b) The political ideology of conversational AI: converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. arXiv:2301.01768
Hawlitschek F (2023) Interview with Samuel Tschepe on "Quo vadis design thinking?". Bus Inf Syst Eng 65(2):223–228. https://doi.org/10.1007/s12599-023-00792-0
Herm LV, Janiesch C, Reijers HA, Seubert F (2021) From symbolic RPA to intelligent RPA: challenges for developing and operating intelligent software robots. In: International conference on business process management, pp 289–305
Hevner A, vom Brocke J, Maedche A (2019) Roles of digital innovation in design science research. Bus Inf Syst Eng 61(1):3–8. https://doi.org/10.1007/s12599-018-0571-z
Ho J, Jain A, Abbeel P (2020) Denoising diffusion probabilistic models. Adv Neural Inf Process Syst 33:6840–6851
Jakesch M, French M, Ma X, Hancock JT, Naaman M (2019) AI-mediated communication: how the perception that profile text was written by AI affects trustworthiness. In: Conference on human factors in computing systems (CHI)
Jakesch M, Hancock JT, Naaman M (2023) Human heuristics for AI-generated language are flawed. Proc Natl Acad Sci 120(11):e2208839120
Janiesch C, Zschech P, Heinrich K (2021) Machine learning and deep learning. Electron Market 31(3):685–695. https://doi.org/10.1007/s12525-021-00475-2
Ji Z, Lee N, Frieske R, Yu T, Su D, Xu Y, Ishii E, Bang YJ, Madotto A, Fung P (2023) Survey of hallucination in natural language generation. ACM Comput Surv 55(12):1–38
Kasneci E, Seßler K, Küchemann S, Bannert M, Dementieva D, Fischer F, Gasser U, Groh G, Günnemann S, Hüllermeier E et al (2023) ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individ Differ 103:102274
Kecht C, Egger A, Kratsch W, Röglinger M (2023) Quantifying chatbots' ability to learn business processes. Inf Syst 113:102176. https://doi.org/10.1016/j.is.2023.102176
Khan J (2021) AI's carbon footprint is big, but easy to reduce, Google researchers say. Fortune
Kingma DP, Welling M (2013) Auto-encoding variational Bayes. https://doi.org/10.48550/arXiv.1312.6114
Klotz S, Westner M, Strahringer S (2022) Critical success factors of business-managed IT: it takes two to tango. Inf Syst Manag 39(3):220–240
Kraus M, Feuerriegel S, Oztekin A (2020) Deep learning in business analytics and operations research: models, applications and managerial implications. Europ J Oper Res 281(3):628–641. https://doi.org/10.1016/j.ejor.2019.09.018
Kreps S, McCain RM, Brundage M (2022) All the news that's fit to fabricate: AI-generated text as a tool of media misinformation. J Exp Polit Sci 9(1):104–117
Krügel S, Ostermaier A, Uhl M (2023) ChatGPT's inconsistent moral advice influences users' judgment. Sci Rep 13(1):4569
Lasi H, Fettke P, Kemper HG, Feld T, Hoffmann M (2014) Industry 4.0. Bus Inf Syst Eng 6(4):239–242. https://doi.org/10.1007/s12599-014-0334-4
Li Y, Choi D, Chung J, Kushman N, Schrittwieser J, Leblond R, Eccles T, Keeling J, Gimeno F, Dal Lago A et al (2022) Competition-level code generation with AlphaCode. Science 378(6624):1092–1097
Liu P, Yuan W, Fu J, Jiang Z, Hayashi H, Neubig G (2023) Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing. ACM Comput Surv 55(9):1–35
Longoni C, Fradkin A, Cian L, Pennycook G (2022) News from generative artificial intelligence is believed less. In: ACM conference on fairness, accountability, and transparency (FAccT), pp 97–106
Maarouf A, Bär D, Geissler D, Feuerriegel S (2023) HQP: a human-annotated dataset for detecting online propaganda. arXiv:2304.14931
Maedche A, Morana S, Schacht S, Werth D, Krumeich J (2016) Advanced user assistance systems. Bus Inf Syst Eng 58:367–370
Maedche A, Legner C, Benlian A, Berger B, Gimpel H, Hess T, Hinz O, Morana S, Söllner M (2019) AI-based digital assistants: opportunities, threats, and research perspectives. Bus Inf Syst Eng 61(4):535–544. https://doi.org/10.1007/s12599-019-00600-8
Matz S, Teeny J, Vaid SS, Harari GM, Cerf M (2023) The potential of generative AI for personalized persuasion at scale. PsyArXiv
Matz SC, Kosinski M, Nave G, Stillwell DJ (2017) Psychological targeting as an effective approach to digital mass persuasion. Proc Natl Acad Sci 114(48):12714–12719
Metz C (2023) Instant videos could represent the next leap in A.I. technology. https://www.nytimes.com/2023/04/04/technology/runway-ai-videos.html, accessed 25 Aug 2023
Mirsky Y, Lee W (2021) The creation and detection of deepfakes: a survey. ACM Comput Surv 54(1):1–41
Morana S, Maedche A, Schacht S (2019) Designing process guidance systems. J Assoc Inf Syst, pp 499–535. https://doi.org/10.17705/1jais.00542
Ng A, Jordan M (2001) On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes. In: Advances in neural information processing systems, vol 14, pp 841–848. https://papers.nips.cc/paper_files/paper/2001/hash/7b7a53e239400a13bd6be6c91c4f6c4e-Abstract.html, accessed 25 Aug 2023
OpenAI (2022) Introducing ChatGPT. https://openai.com/blog/chatgpt, accessed 25 Aug 2023
OpenAI (2023a) GPT-4 technical report. arXiv:2303.08774
OpenAI (2023b) How should AI systems behave, and who should decide? https://openai.com/blog/how-should-ai-systems-behave, accessed 25 Aug 2023
Park JS, O'Brien JC, Cai CJ, Morris MR, Liang P, Bernstein MS (2023) Generative agents: interactive simulacra of human behavior. arXiv:2304.03442
Peres R, Schreier M, Schweidel D, Sorescu A (2023) On ChatGPT and beyond: how generative artificial intelligence may affect research, teaching, and practice. Int J Res Market 40:269–275
Rabinowitz NC, Perbet F, Song HF, Zhang C, Eslami SMA, Botvinick MM (2018) Machine theory of mind. In: International conference on machine learning, PMLR, vol 80, pp 4215–4224. http://proceedings.mlr.press/v80/rabinowitz18a.html, accessed 25 Aug 2023
Rai A (2020) Explainable AI: from black box to glass box. J Acad Market Sci 48:137–141
Ramaswamy V, Ozcan K (2018) What is co-creation? An interactional creation framework and its implications for value creation. J Bus Res 84:196–205
Reisenbichler M, Reutterer T, Schweidel DA, Dan D (2022) Frontiers: supporting content marketing with natural language generation. Market Sci 41(3):441–452
Rombach R, Blattmann A, Lorenz D, Esser P, Ommer B (2022) High-resolution image synthesis with latent diffusion models. In: IEEE/CVF conference on computer vision and pattern recognition, pp 10684–10695
Sandkuhl K, Fill H, Hoppenbrouwers S, Krogstie J, Matthes F, Opdahl AL, Schwabe G, Uludag Ö, Winter R (2018) From expert discipline to common practice: a vision and research agenda for extending the reach of enterprise modeling. Bus Inf Syst Eng 60(1):69–80. https://doi.org/10.1007/s12599-017-0516-y
Schoormann T, Möller F, Hansen MRP (2021) How do researchers (re-)use design principles: an inductive analysis of cumulative research. In: The next wave of sociotechnical design. Lecture Notes in Computer Science, Springer, Cham, pp 188–194. https://doi.org/10.1007/978-3-030-82405-1_20
Schoormann T, Stadtländer M, Knackstedt R (2023a) Act and reflect: integrating reflection into design thinking. J Manag Inf Syst 40(1):7–37. https://doi.org/10.1080/07421222.2023.2172773
Schoormann T, Strobel G, Möller F, Petrik D, Zschech P (2023b) Artificial intelligence for sustainability: a systematic review of information systems literature. Commun AIS 52(1):8
Schramowski P, Turan C, Andersen N, Rothkopf CA, Kersting K (2022) Large pre-trained language models contain human-like biases of what is right and wrong to do. Nat Machine Intell 4(3):258–268
Schwartz R, Dodge J, Smith NA, Etzioni O (2020) Green AI. Commun ACM 63(12):54–63
Schöbel S, Schmitt A, Benner D, Saqr M, Janson A, Leimeister JM (2023) Charting the evolution and future of conversational agents: a research agenda along five waves and new frontiers. Inf Syst Front. https://doi.org/10.1007/s10796-023-10375-9
Senoner J, Netland T, Feuerriegel S (2022) Using explainable artificial intelligence to improve process quality: evidence from semiconductor manufacturing. Manag Sci 68(8):5704–5723
Shin M, Kim J, van Opheusden B, Griffiths TL (2023) Superhuman artificial intelligence can improve human decision-making by increasing novelty. Proc Natl Acad Sci 120(12):e2214840120
Shollo A, Hopf K, Thiess T, Müller O (2022) Shifting ML value creation mechanisms: a process model of ML value creation. J Strateg Inf Syst 31(3):101734. https://doi.org/10.1016/j.jsis.2022.101734
Siebers P, Janiesch C, Zschech P (2022) A survey of text representation methods and their genealogy. IEEE Access 10:96492–96513. https://doi.org/10.1109/ACCESS.2022.3205719
Silva N, Sousa P, Mira da Silva M (2021) Maintenance of enterprise architecture models. Bus Inf Syst Eng 63(2):157–180. https://doi.org/10.1007/s12599-020-00636-1
Slack D, Krishna S, Lakkaraju H, Singh S (2023) Explaining machine learning models with interactive natural language conversations using TalkToModel. Nat Machine Intell 5:873–883
Smits J, Borghuis T (2022) Generative AI and intellectual property rights. In: Law and artificial intelligence: regulating AI and applying AI in legal practice. Springer, Heidelberg, pp 323–344
Spitale G, Biller-Andorno N, Germani F (2023) AI model GPT-3 (dis)informs us better than humans. Sci Adv 9:eadh1850
Strobelt H, Webson A, Sanh V, Hoover B, Beyer J, Pfister H, Rush AM (2023) Interactive and visual prompt engineering for ad-hoc task adaptation with large language models. IEEE Transact Visual Comput Graphics 29(1):1146–1156. https://doi.org/10.1109/TVCG.2022.3209479
Susarla A, Gopal R, Thatcher JB, Sarker S (2023) Editorial: the Janus effect of generative AI: charting the path for responsible conduct of scholarly activities in information systems. Inf Syst Res 34(2):399–408. https://doi.org/10.1287/isre.2023.ed.v34.n2
Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. Adv Neural Inf Process Syst 27:3104–3112
Teubner T, Flath CM, Weinhardt C, van der Aalst W, Hinz O (2023) Welcome to the era of ChatGPT. Bus Inf Syst Eng 65(2):95–101. https://doi.org/10.1007/s12599-023-00795-x
Unsal S, Atas H, Albayrak M, Turhan K, Acar AC, Doğan T (2022) Learning functional properties of proteins with language models. Nat Machine Intell 4(3):227–245
van der Aalst WMP, Bichler M, Heinzl A (2018) Robotic process automation. Bus Inf Syst Eng 60(4):269–272. https://doi.org/10.1007/s12599-018-0542-4
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst 30:6000–6010
Vernadat F (2020) Enterprise modelling: research review and outlook. Comput Ind 122:103265. https://doi.org/10.1016/j.compind.2020.103265
Vidgof M, Bachhofner S, Mendling J (2023) Large language models for business process management: opportunities and challenges. In: Business process management forum. Lecture Notes in Computer Science, Springer, Cham, pp 107–123
von Zahn M, Feuerriegel S, Kuehl N (2022) The cost of fairness in AI: evidence from e-commerce. Bus Inf Syst Eng 64:335–348
Wolfe R, Banaji MR, Caliskan A (2022) Evidence for hypodescent in visual semantic AI. In: ACM conference on fairness, accountability, and transparency, pp 1293–1304
Ziegler DM, Stiennon N, Wu J, Brown TB, Radford A, Amodei D, Christiano P, Irving G (2019) Fine-tuning language models from human preferences. arXiv:1909.08593
Zilker S, Weinzierl S, Zschech P, Kraus M, Matzner M (2023) Best of both worlds: combining predictive power with interpretable and explainable results for patient pathway prediction. In: Proceedings of the 31st European Conference on Information Systems (ECIS), Kristiansand, Norway
Zschech P, Horn R, Höschele D, Janiesch C, Heinrich K (2020) Intelligent user assistance for automated data mining method selection. Bus Inf Syst Eng 62(3):227–247. https://doi.org/10.1007/s12599-020-00642-3