
International Journal of Information Management 79 (2024) 102811


Research Note

Large language models present new questions for decision support


Abram Handler ᵃ,*,¹, Kai R. Larsen ᵃ,², Richard Hackathorn ᵇ,³

ᵃ Leeds School of Business, University of Colorado, 995 Regent Dr., Boulder, CO 80309, USA
ᵇ Bolder Technology, Inc., 4740 Hancock Dr., Boulder, CO 80303, USA

ARTICLE INFO

Keywords: Decision support systems; Generative artificial intelligence; Large language models; Natural language processing; Business intelligence

ABSTRACT

Large language models (LLMs) have proven capable of assisting with many aspects of organizational decision making, such as helping to collect information from databases and helping to brainstorm possible courses of action ahead of making a choice. We propose that broad adoption of these technologies introduces new questions in the study of decision support systems, which assist people with complex and open-ended choices in business. Where traditional study of decision support has focused on bespoke tools to solve narrow problems in specific domains, LLMs offer a general-purpose decision support technology which can be applied in many contexts. To organize the wealth of new questions which result from this shift, we turn to a classic framework from Herbert Simon, which proposes that decision making requires collecting evidence, considering alternatives, and finally making a choice. Working from Simon’s framework, we describe how LLMs introduce new questions at each stage of this decision-making process. We then group new questions into three overarching themes for future research, centered on how LLMs will change individual decision making, how LLMs will change organizational decision making, and how to design new decision support technologies which make use of the new capabilities of LLMs.

1. Introduction

Since the release of ChatGPT, practitioners have begun to explore the use of large language models to guide decision making in business. This presents a change for decision support systems (DSS), an area of IS focused on how software can help people make complex and open-ended choices. Traditionally, DSS researchers built and evaluated tools that focused on specific domains (Eom et al., 1998; Eom & Kim, 2006). But today, practitioners are testing what may be a form of general-purpose “intelligence” (Bubeck et al., 2023) to guide decision making in the wild. Some practitioners describe using LLMs for the exact same problems already addressed in the academic literature on DSS.⁴ Inspired by practitioner use, we propose that the broad adoption of large language models will introduce many new interrelated behavioral, technical, and economic questions for decision support, which should strive to address topics that are relevant to current practice (Benbasat & Zmud, 1999; Rosemann & Vessey, 2008; Starkey & Madan, 2001). To ground new developments within established frameworks, we organize new questions using Herbert Simon’s (1960) three-phase model of decision making. This model proposes that reasoned action requires collecting information, considering alternatives, and finally making a choice.⁵ We propose that applications of large language models in each of these stages will introduce new and important questions for the field of DSS.

* Corresponding author.
E-mail addresses: [email protected] (A. Handler), [email protected] (K.R. Larsen), [email protected] (R. Hackathorn).
¹ ORCID: 0000-0002-8663-2628.
² ORCID: 0000-0002-8812-9866.
³ ORCID: 0009-0007-4904-4070.
⁴ Examples include the paper from Beraldi et al. (2011) and the practitioner post from Martinez (2023), which each focus on portfolio optimization; the paper from Kleij et al. (2022) and the post from Keary (2023), which each focus on responding to cybersecurity threats; and the paper from Huntley et al. (1995) and the post from FluidTruck (2023), which focus on the logistics of transportation.
⁵ Simon’s three-phase model was central to the founding of decision support (Gorry & Scott Morton, 1971, 1989). Simon (1960) explains that his three stages of decision making are closely related to John Dewey’s three stages of problem solving. Dewey describes each stage with a question: ‘What is the problem?’, ‘What are the alternatives?’, and ‘Which alternative is best?’ We include the three questions from Dewey in our section headers because they help clarify each stage in Simon’s framework. Simon also specifies that he uses the term intelligence in the sense of military intelligence gathering, so we describe this stage as intelligence gathering.

https://doi.org/10.1016/j.ijinfomgt.2024.102811
Received 22 February 2024; Received in revised form 23 May 2024; Accepted 24 May 2024
Available online 4 June 2024
0268-4012/© 2024 Elsevier Ltd. All rights are reserved, including those for text and data mining, AI training, and similar technologies.

Our analysis centers on two core properties that distinguish LLM-based approaches from prior machine learning methods, which are already widely used in DSS (Merkert et al., 2015). First, following Bommasani et al. (2021), we emphasize that LLM-based models are marked by emergence, meaning that their behavior is induced via a general training process rather than through deliberate design. For instance, GPT-3’s apparent ability to translate from English to French emerged from learning to predict upcoming words in large text corpora (Brown et al., 2020), rather than from training on English–French sentence pairs. Second, again following Bommasani et al. (2021), we emphasize how LLM-based methods are marked by homogenization. LLMs allow researchers and practitioners to perform diverse text-to-text tasks using a small number of similar pretrained networks.

Emergence and homogenization are central to how LLMs enable more general decision support technologies. Diverse emergent behaviors allow for a unified computational approach to diverse decision tasks. However, this same diversity encourages both researchers and practitioners to rely on a few homogeneous models. Throughout our work, we explain how emergence and homogenization drive many new questions at every stage of decision making. We then conclude by organizing such questions into broader themes to define a research agenda for LLMs and DSS.

2. Intelligence gathering: what is the problem?

2.1. How can LLMs assist with intelligence gathering?

Simon proposes that decision making begins with collecting information to inform choice. Yet in Simon’s framework “information gathering is not a free activity” (Simon, 1978, p. 10); people and organizations incur costs for collecting intelligence, which must be weighed against possible benefits of additional information. From this perspective, LLMs can assist with decision making by making it easier and less expensive for organizations to gather intelligence required for choice.

For example, the OpenAI Codex model (OpenAI, 2021) generates computer code based on a natural language prompt. Van Haren (2023) describes the use of this model for a “natural language analytics” tool that helps non-technical users generate SQL queries for a relational database, without dedicated support from a data analyst (Patterns, 2024). Other work considers how similar tools will change other aspects of data management by making it easier to discover and link different databases (Fernandez et al., 2023), or by lowering the cost of software development (Amankwah-Amoah et al., 2024).

LLMs may also lower the cost of intelligence gathering by allowing non-specialists to use once-esoteric machine learning methods, without developing custom models or collecting specialized training data (Brown et al., 2020). For example, a firm once might have enlisted a costly data scientist or machine learning engineer to classify customer comments on social media. Now an entry-level data analyst might perform this work using the ChatGPT API. These changes in the cost of intelligence gathering will introduce new questions for decision support. We provide a few examples below.

2.2. Will easy data science distribute or concentrate decision making within firms?

Large language models exhibit emergent behaviors such as the ability to write SQL queries, which might help people at all levels across an organization gather and analyze information more easily. In theory, lowering the cost of data science might help smaller teams or even individual employees act more autonomously, using easy access to high-quality information.⁶ Large language models may also help middle-skill workers gain greater decision-making autonomy, by equipping them with some of the same information as experts (Autor, 2024). Decentralization itself may also offer organizational advantages, such as greater efficiency in competitive markets (Eva et al., 2019; Moran, 2015; Robertson, 2015).

⁶ We say “easy data science” to emphasize how LLMs make it easier to organize and analyze information. However, for now, such features come with high computational costs. We discuss this further in the next section.

However, LLMs may also consolidate decision-making authority. Some theories propose that leaders delegate to subordinates because superiors do not have all the information needed for choice (Bendor et al., 2001). This view of delegation is closely related to Simon’s view of decision making (Simon, 1955); organizations must delegate because no single person has the cognitive capacity to collect or process all relevant information. From this perspective, LLMs may consolidate decision-making power by making it easier for leaders to collect required information using MIS (Brynjolfsson & Mendelson, 1993).⁷ For example, LLMs may make it easier for senior managers to integrate, organize, and analyze information from individual branch locations, reducing the need for middle management.

⁷ Surveys of managers have found that some business leaders are reluctant to employ artificial intelligence for decision making, due to perceived threats (Cao et al., 2021). We expect that AI tools will find wider adoption as managers perceive greater benefits from the technology and gain greater familiarity with the strengths and limitations of AI systems (e.g., via training programs or direct experience).

Yet homogenization from LLMs may also consolidate decision-making authority. If all intelligence gathering in a firm relies on a single large language model, then that model’s representation of reality will influence how individuals across an organization perceive, analyze, and act on information. Such “algorithmic monoculture” may bring new risks (Kleinberg & Raghavan, 2021); if all analysts use the same model, all analysts may be susceptible to the same biases and mistakes (Dwivedi et al., 2023). Moreover, competing firms may predict and exploit this behavior. Thus LLMs introduce many new empirical questions about possible shifts in decision-making structures, and about possible new risks from such shifts.

2.3. How can we develop a just and sustainable ecosystem around LLMs?

Training large language models requires high-quality text that is written by people. However, the emergent behavior of LLMs obscures the (complex) relationships between human labor and artificial neural network output. This poorly understood dynamic opens complex sociotechnical questions surrounding how to design just systems that sustain the people who provide information for machine intelligence. At present, the ethical norms, legal guidelines, and business models surrounding this new technology are still emerging (Klein, 2024). For instance, Reddit currently licenses its content to Google to assist with training language models, but does not pay the people who contribute content to the platform (Tong et al., 2024). Alternately, The New York Times pays skilled reporters to collect new and accurate information, but is litigating against (rather than cooperating with) OpenAI and Microsoft. The publisher alleges that these technology companies “free-ride” off costly Times journalism without “permission or payment” (Roth, 2023).⁸

⁸ At the time of writing, US courts have not decided if this is a violation of US copyright law.

As the legal, ethical, and business ecosystem emerges, IS researchers might consider how to design information systems which sustain the people who provide information that will inform organizational decision making. In some ways, these questions seem best suited to theoretical


traditions within IS, such as work from Kane et al. (2021), who propose an “emancipatory” approach to designing ML systems. Yet these broad qualitative questions also intersect with more narrowly technical concerns. For instance, researchers might use open language models like OLMo (Groeneveld et al., 2024) to examine the extent to which LLMs are copying and merging from existing sources or creating new ideas, which might inform legal, ethical, and regulatory perspectives. Alternately, researchers may explore how to employ language models such as SILO (Min et al., 2023) to perform inference using specific documents. In principle, such models might allow a technology company to pay a person or organization who collects relevant information. Such intersecting social, legal, and technical questions offer many future opportunities for IS research.

2.4. How will better intelligence affect firm productivity?

IS research has long focused on measuring and describing the economic benefits of information technology (Schweikl & Obermaier, 2020). Early work in this area was driven by a paradox. Economists observed that digitization was beginning to reshape many aspects of large economies. However, they did not observe any corresponding gains in macroeconomic productivity. “You can see the computer age everywhere,” Solow (1987) wrote, except “in the productivity statistics.” Much like in the early days of digitization, economists have recently observed that dramatic improvements in artificial intelligence have not brought productivity gains (Brynjolfsson et al., 2017). As before, this may be because it takes time for firms to learn to use new digital tools (Kohli & Devaraj, 2003), or because managers may not yet understand how to invest wisely in new technologies (Schweikl & Obermaier, 2020). Investigating this dynamic offers an opportunity for future DSS research.

Such research questions seem most amenable to econometric research methods. Much existing work in this area uses the Cobb and Douglas (1928) production function to model the effect of both capital and labor on firm productivity (Polák, 2017). Research in this paradigm might examine the extent to which LLMs allow a firm to make more efficient use of capital, reduce labor costs, or produce superior outputs with the same production inputs (Schweikl & Obermaier, 2020). However, non-econometric research methods, such as case studies (Yin, 2013) focused on specific decision support tools in specific firms, will also shape understanding. As in earlier work on IT payoff (Soh & Markus, 1995), as DSS researchers gain a clearer understanding of new changes, they might work towards broader theories of how improved intelligence gathering can improve firm productivity.

3. Design: what are the alternatives?

3.1. How can LLMs assist with design?

In Simon’s model, practitioners enumerate different possible courses of action after gathering intelligence and before ultimately making a choice. Simon describes this as the design stage of decision making, emphasizing that because human cognition is resource-constrained, people can only consider a finite number of alternatives. From this perspective, LLMs might assist with decision making by helping people search through many possible alternatives more quickly. Yet using LLMs to enumerate and refine possible courses of action introduces many new research questions for decision support. We describe a few sample questions below.

3.2. What kinds of user interfaces best help people enumerate alternatives?

Dialog-tuned language models allow people to develop documents easily and interactively, in collaboration with a computer. This new emergent behavior allows for many new kinds of possible user interfaces for decision support. Early human-in-the-loop approaches in natural language processing and code generation may offer inspiration. For instance, because skilled human translators can often improve automatic machine translation, Green et al. (2015) describe a specialized user interface which allows a human expert to quickly create a high-quality translation in collaboration with a machine. Alternately, the code generation tool (Cursor, 2024) proposes an “AI-first Code Editor,” which allows software engineers to write computer code by chatting with an LLM.

Practitioners are beginning to explore similar forms of human–machine collaboration for design. For example, the program Notion (Notion Labs, 2024) proposes features that help practitioners consider possible courses of action as they type professional notes. A practitioner can ask Notion to “brainstorm five creative marketing campaigns” and then modify the first draft of a note created by an LLM. In the future, this form of human–machine collaboration opens many new possibilities for artifact-oriented DSS research.

3.3. How can DSS integrate intelligence for generation?

To be most useful, systems that generate alternatives must integrate and analyze information collected during intelligence gathering. For instance, the cybersecurity firm Recorded Future has explored using a specialized generative language model to automatically create threat advisories based on information collected from diverse sources like customer network logs, public Internet sites, and the dark web (Keary, 2023). Yet automatically combining and drawing conclusions from such diverse structured and unstructured sources to inform general-purpose text generation is an open technical challenge; much work from computer science focuses on much simpler kinds of data-to-text problems, such as writing summaries of individual basketball games based on tables of game statistics (Wiseman et al., 2017). Research on retrieval-augmented generation (Lewis et al., 2020) may offer a helpful starting point. This paradigm proposes generating text based on documents collected with an information retrieval system, which may contain text that is relevant for a given task. However, some problem domains will benefit from other forms of reasoning about evidence, as described in Section 5.

3.4. Does generating alternatives actually lead to better decisions?

While large language models may help practitioners explore possible courses of action, automatically enumerating alternatives may have unintended consequences. Many studies in cognitive science have documented a “generation effect” (Bertsch et al., 2007), in which people are more likely to remember information that they actively create, rather than simply observe. Similarly, active learning is known to promote understanding in math, science, and engineering classes, as compared to listening to traditional lectures (Freeman et al., 2014). By analogy, simply generating alternatives may not lead to better decision making, because thinking over possible courses of action (without a computer) may help practitioners reason through the ramifications of a choice. On the other hand, generating alternatives (with a computer) may also help practitioners think of unexpected or unusual possibilities. Understanding the exact costs and benefits of automatically generating alternatives during decision making presents an opportunity for future work.

4. Choice: which alternative is best?

4.1. How can LLMs assist with choice?

In Simon’s decision-making model, practitioners choose among alternatives after gathering intelligence and enumerating possible courses of action. While training large models to predict missing words in a sequence has given rise to many complex, surprising, and emergent


behaviors that some argue resemble general intelligence (Bubeck et al., 2023), we do not believe that such next-token predictors (Van Haren, 2023) can currently provide practitioners with optimal solutions to the complex, open-ended, multi-faceted, unique, and context-dependent choices at the center of decision support (Gorry & Scott Morton, 1989).⁹ Nevertheless, applying LLMs during the choice stage of decision making presents many new research questions for DSS.

⁹ This assumption is consistent with prior work on the use of artificial intelligence for decision making in business (Duan et al., 2019), which also emphasizes the ways that AI will augment human decision makers rather than fully automate complex business decisions.

4.2. How can decision support systems attribute, explain and calibrate advice from a “stochastic parrot”?

Following the release of BERT, Bender et al. (2021) articulated an influential critique of large language models. They argued that LLMs can generate plausible-sounding, seemingly fluent, and seemingly coherent text without achieving actual natural language understanding. They note that these emergent behaviors introduce new risks “because humans are prepared to interpret strings belonging to languages they speak as meaningful and corresponding to the communicative intent of some individual.” From this perspective, LLMs are “stochastic parrots” that offer practitioners and researchers a risky illusion of intelligence. Within the context of decision support, this means that language models trained to predict the next word in a sequence may parrot plausible-sounding guidance on possible courses of action, without actually offering well-informed or well-reasoned advice. This poses a danger to practitioners, who may act uncritically on generated suggestions. It also limits the utility of LLMs for DSS, as practitioners currently have good reason to distrust machine suggestions on what to do.

Overcoming these limitations presents new technical challenges for DSS research. For instance, using LLMs to offer business advice may require attributing generated suggestions to underlying evidence (Rashkin et al., 2023) or explaining how a machine reached particular conclusions (Adadi & Berrada, 2018). Offering advice may also require appropriately conveying a machine’s degree of certainty, which is sometimes described as calibration. Recent work towards these goals in computer science focuses on much simpler settings, such as how to calibrate (Tian et al., 2023) or attribute (Bohnet et al., 2022) answers to fact-based questions. Significant challenges in attributing, explaining, and calibrating business advice from a “stochastic parrot” present major new research questions in decision support.

4.3. Can chatting with a robot improve decision making?

Yet emergent behaviors of dialog-tuned language models may help people make better choices, even if they do not directly give advice. For example, practitioners might talk through consequential choices with specialized chatbots in much the same way that high-level managers sometimes work with expensive executive coaches to think through consequential decisions (Bartleby Column, 2023). A dialog-based decision coach might be trained to help people spot well-known cognitive biases (Tversky & Kahneman, 1974) or work through structured approaches to decision making such as cost-benefit analysis (Mishan & Quah, 2020) or the analytic hierarchy process (Saaty, 1987). Such processes for avoiding inappropriate heuristics and biases are closely related to Simon’s perspective on choice. Because people have limited cognitive resources, they sometimes must rely on fast but sub-optimal approaches to making a decision.

Yet LLMs may reduce the costs of such structured forms of reasoning. For instance, in some areas like marketing, scholars have developed rigorous decision-making frameworks that have not yet found wide use (Lilien, 2011); chat interfaces might encourage adoption by making it easier to communicate with such formal models (Little, 1970). In much the same way that existing work in DSS examines how software can help mitigate cognitive bias (Arnott, 2006; Roy & Lerch, 1996) or induce reflection (Abdel-Karim et al., 2023), future research might similarly investigate how dialog-tuned language models could help people weigh evidence ahead of consequential choice, or how well-known biases in such models (Rossi et al., 2024) may distort decision making.

5. Future research directions

We propose that large language models present many new research questions for decision support. We conclude by abstracting away from the individual questions enumerated throughout this work to articulate three overarching themes in the study of large language models for DSS (Table 1).

5.1. Theme one: how will large language models affect individual decision making?

In 2011, psychologists examined how widespread adoption of Google had altered human cognition (Sparrow et al., 2011). They found that when people had access to information stored in a computer, they were less likely to remember the information itself, and more likely to remember where to find it in a digital tool. They also found that people were less likely to remember specific facts (e.g., names or dates) when they expected that such information would be stored in retrievable digital form. From such experiments, researchers proposed that web-scale information retrieval systems serve as a form of collective memory, which stores shared information for individual minds. Similar work has examined how technologies like smartphones or email have altered other aspects of cognition (Gloria et al., 2012; Wilmer et al., 2017).

Widespread adoption of large language models will introduce similar questions about how people make decisions using LLMs. For example, we consider whether automatically generating alternatives will actually lead to better decision making (Section 3) and whether chatting with a robot can help people make better choices (Section 4). Yet researchers could also examine when and how people can detect “hallucinations” (Zhang et al., 2023) which may lead to faulty decisions, or the factors that engender (warranted or unwarranted) trust in artificial reasoning (Jacovi et al., 2021). Such isolated questions reflect a broader theme: how will large language models affect individual decision making? This high-level topic seems well-suited to cognitively-oriented research traditions such as NeuroIS (Dimoka et al., 2012). But other methods, such as ethnographic observation (Myers, 1999) or interview experiments (Schultze & Avital, 2011), may also shed light on general questions about how machine guidance distorts, improves, or shifts how people make choices. IS scholars could also work towards a more fine-grained, cognitively informed, and theoretical understanding of the kinds of decisions that are more or less amenable to LLM support and how LLMs can best help people make such choices. Researchers have begun to explore similar questions in related areas of computer science (Doshi-Velez & Kim, 2017; Lubars & Tan, 2019), which may offer inspiration and initial guidance. Future theoretical work may also organize new findings about how people make choices using this new form of computational help.

5.2. Theme two: how will large language models affect organizational decision making?

Large language models may also change the decision-making behavior of groups. In the future, we expect that groups of workers will use specialized firm- or sector-specific language models, such as BloombergGPT (Wu et al., 2023). The ways in which such tools retrieve, organize, and present information will shape how organizations find, weigh, and act on knowledge required for choice.

For example, the parameters of large language models store


Table 1
Broad research themes and sample questions for large language models and decision support.

How will large language models affect individual decision making?
• Does generating alternatives actually lead to better decisions? (Section 3)
• Can chatting with a robot improve decision making? (Section 4)

How will large language models affect organizational decision making?
• Will easy data science distribute or concentrate decision making within firms? (Section 2)
• How will better intelligence affect firm productivity? (Section 2)
• How can we develop a just and sustainable ecosystem around LLMs? (Section 2)

How should we design and engineer decision support systems based on large language models?
• What kinds of user interfaces best help people enumerate alternatives? (Section 3)
• How can DSS integrate intelligence for generation? (Section 3)
• How can DSS attribute, explain and calibrate advice from a “stochastic parrot”? (Section 4)

information that can be used to answer simple questions (Roberts et al., 2020), such as who was the first person to see Earth from space (Kwiatkowski et al., 2019). However, this emergent behavior requires tens or even hundreds of thousands of supporting documents (Kandpal et al., 2023); such models are much worse at learning long-tail information that appears less frequently in a corpus. An organization that uses a single homogeneous LLM to organize and retrieve facts simply may not see infrequent yet relevant information.

This example illustrates how emergence and homogenization introduce a second theme for decision support: how will large language models affect organizational decision making? In this work, we explore two example questions along this theme: will cheap data science distribute or concentrate decision making within firms, and will the benefits of intelligence gathering from LLMs increase investment in information technology, spurring greater organizational use of DSS (Section 2)? However, many other questions are possible. For example, training a large language model often involves selecting suitable documents using an automatic “quality filter” (Brown et al., 2020; Soldaini et al., 2024). Much work has documented implicit social perspectives in such choices. For example, a test with high school newspapers revealed that the GPT-3 quality filter was more likely to exclude documents from zip codes with lower levels of income and education (Gururangan et al., 2022). Documents deemed low quality are not included in training data, and do not influence the behavior of the LLM. By analogy, DSS researchers might examine how organizations that embrace homogeneous language models may implicitly exclude perspectives and information from marginalized groups. Bommasani et al. (2021) discuss these and similar risks in detail.

5.3. Theme three: how should we design and engineer decision-support systems based on LLMs?

In the past, technologies like graphical user interfaces, color displays, wide area networks, and machine learning have each dramatically changed decision support (Merkert et al., 2015; Shim et al., 2002; Sprague, 1980). Dialog-tuned language models may bring a similar shift. New emergent features such as the ability to enumerate alternatives or generalize from a few in-context examples seem to enable totally new kinds of DSS. Such capabilities introduce a third theme for future DSS research: how should we design and engineer decision-support systems based on the capacities of LLMs?

In this work, we describe a few example questions under this broad theme, such as what kinds of user interfaces help people enumerate alternatives (Section 3), how can DSS integrate intelligence for generation (Section 3), and how to attribute, explain and calibrate machine advice (Section 4). However, many other questions are possible. For example, because LLMs offer a homogeneous architecture for many different tasks, future design science research (Hevner et al., 2004) may explore the costs and benefits of domain-specific and general-purpose DSS. From one perspective, using general-purpose tools for decision support seems to offer clear advantages; employing general chat-based systems does not require custom development or special user education and can leverage expensive computation (e.g., pretraining) for niche decision tasks. Yet such gains also come with clear downsides. For instance, Murty et al. (2005) describe a DSS tool that reasons about the structure of the road network in the Port of Hong Kong to maximize container throughput. ChatGPT is not trained to perform this kind of reasoning,10 and will not have deep knowledge of this problem domain. Future design science theorists might work towards a more general understanding of the possible gains and limitations in approaching decision support problems using homogeneous LLMs, or the ways in which practitioners might customize or adapt general-purpose language models for specific decision tasks.

Yet LLMs do not need to perform domain-specific reasoning to play a key role in domain-specific DSS. DeepMind co-founder Demis Hassabis has argued that in the future, dialog-tuned LLMs may serve as a common interface for different kinds of more specialized artificial reasoning systems (Klein, 2023). For example, WolframAlpha uses symbolic reasoning rather than machine learning to solve calculus problems (Wolfram Research, 2023), but it is possible to chat with WolframAlpha using a GPT plugin. In much the same way, researchers might work towards general-purpose tools that enable practitioners to interact with domain-specific DSS. From this perspective, chatbots might represent one “tool” (Sprague, 1980) in a decision support system, much like other subsystems such as data visualization libraries or relational databases. In the future, behind new chat interfaces, we may find the same decision support technology that has been there all along.

10 Although some LLMs may acquire some ability for symbolic reasoning via training (Brown et al., 2020).

6. Implications for practice

In the future, we expect that more people and groups will make decisions using LLMs. Therefore, we expect that future academic research into each of our three themes will offer insights for practice. For individuals, clearer understanding of how LLMs enhance or distort cognition (Theme 1) may inform future decision making. For example, hundreds of business courses around the world (Open Syllabus, 2023) expose future practitioners to well-known limitations in human reasoning such as confirmation bias (Bazerman & Moore, 2012), and work introducing core ideas from Prospect Theory ranks among the best-selling business books of all time (Kahneman, 2011). As practitioners adopt new language technologies, future scholarly research might offer a similar systematic perspective on how to best make


decisions using LLMs. Other research into how LLMs shape group decision making (Theme 2) may also have managerial implications. Academic research into closely-related phenomena like Groupthink (Janis, 1972) has already influenced how practitioners conceive of collective choices. For example, the popular practitioner-oriented work Superforecasting (Tetlock, 2015) presents research into how diversity among team members can help groups make better decisions about the probability of future outcomes. In the future, similar research into how groups discover, analyze, and act on evidence using new language technologies could have similar influence. Finally, large language models offer the possibility of entirely new kinds of DSS (Theme 3). In the future, scholars can help apply these new forms of software artifacts (Hevner et al., 2004) through collaborations with industry partners. Such industry-oriented research would follow in the rich history of decision support (Eom et al., 1998; Eom & Kim, 2006), which has long fostered academic–industry relationships aimed at real-world impact.

7. Conclusion

We propose that broad adoption of large language models will introduce new questions for decision support. We outline eight new questions for DSS, which arise from both the emergent features of large language models and the homogenization that results from using a small number of very similar models for many different analytic tasks. We then group example questions into broader themes to organize a research agenda for large language models and DSS. In total, we propose that applying LLMs to assist decision making in business will introduce many unexplored topics in decision support.

CRediT authorship contribution statement

Richard Hackathorn: Writing – review & editing, Writing – original draft, Conceptualization. Kai Larsen: Writing – review & editing, Writing – original draft, Conceptualization. Abram Kaufman Handler: Writing – review & editing, Writing – original draft, Conceptualization.

Declaration of Competing Interest

No author has interests to declare.

References

Abdel-Karim, B. M., Pfeuffer, N., Carl, K. V., & Hinz, O. (2023). How AI-based systems can induce reflections: The case of AI-augmented diagnostic work. MIS Quarterly, 47(4).
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6.
Amankwah-Amoah, J., Abdalla, S., Mogaji, E., Elbanna, A., & Dwivedi, Y. K. (2024). The impending disruption of creative industries by generative AI: Opportunities, challenges, and research agenda. International Journal of Information Management.
Arnott, D. (2006). Cognitive biases and decision support systems development: A design science approach. Information Systems Journal, 16(1).
Autor, D. (2024). Applying AI to rebuild middle class jobs (Working Paper 32140). National Bureau of Economic Research.
Bartleby Column, The Economist. (2023). Executive coaching is useful therapy that you can expense. The Economist.
Bazerman, M. H., & Moore, D. A. (2012). Judgment in managerial decision making (8th ed.). Wiley.
Benbasat, I., & Zmud, R. W. (1999). Empirical research in information systems: The practice of relevance. MIS Quarterly, 23(1).
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency.
Bendor, J., Glazer, A., & Hammond, T. (2001). Theories of delegation. Annual Review of Political Science, 4(1).
Beraldi, P., Violi, A., & De Simone, F. (2011). A decision support system for strategic asset allocation. Decision Support Systems, 51(3).
Bertsch, S., Pesta, B. J., Wiscott, R., & McDaniel, M. A. (2007). The generation effect: A meta-analytic review. Memory & Cognition, 35(2).
Bohnet, B., Tran, V. Q., Verga, P., Aharoni, R., Andor, D., Soares, L. B., Ciaramita, M., Eisenstein, J., Ganchev, K., Herzig, J., Hui, K., Kwiatkowski, T., Ma, J., Ni, J., Saralegui, L. S., Schuster, T., Cohen, W. W., Collins, M., Das, D., … Webster, K. (2022). Attributed question answering: Evaluation and modeling for attributed large language models. arXiv preprint. 〈https://arxiv.org/abs/2212.08037〉.
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., Arx, S. von, Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N. S., Chen, A. S., Creel, K. A., Davis, J., Demszky, D., … Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint. 〈https://arxiv.org/abs/2108.07258〉.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., … Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems.
Brynjolfsson, E., & Mendelson, H. (1993). Information systems and the organization of modern enterprise. Journal of Organizational Computing, 3(3).
Brynjolfsson, E., Rock, D., & Syverson, C. (2017). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics (Working Paper 24001). National Bureau of Economic Research.
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., & Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint. 〈https://arxiv.org/abs/2303.12712〉.
Cao, G., Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2021). Understanding managers’ attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation, 106.
Cobb, C. W., & Douglas, P. H. (1928). A theory of production. The American Economic Review, 18(1).
Cursor. (2024). Cursor: The AI-first code editor. 〈https://cursor.sh/〉.
Dimoka, A., Davis, F. D., Gupta, A., Pavlou, P. A., Banker, R. D., Dennis, A. R., Ischebeck, A., Müller-Putz, G., Benbasat, I., Gefen, D., Kenning, P. H., Riedl, R., Brocke, J. vom, & Weber, B. (2012). On the use of neurophysiological tools in IS research: Developing a research agenda for NeuroIS. MIS Quarterly, 36(3).
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint. 〈https://arxiv.org/abs/1702.08608〉.
Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of big data – Evolution, challenges and research agenda. International Journal of Information Management, 48.
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71.
Eom, S., Lee, S. M., Kim, E. B., & Somarajan, C. (1998). A survey of decision support system applications (1988–1994). Journal of the Operational Research Society, 49(2).
Eom, S., & Kim, E. (2006). A survey of decision support system applications (1995–2001). Journal of the Operational Research Society, 57(11).
Eva, N., Robin, M., Sendjaya, S., van Dierendonck, D., & Liden, R. C. (2019). Servant leadership: A systematic review and call for future research. The Leadership Quarterly, 30(1).
Fernandez, R. C., Elmore, A. J., Franklin, M. J., Krishnan, S., & Tan, C. (2023). How large language models will disrupt data management. Proceedings of the VLDB Endowment, 16(11).
FluidTruck. (2023). Using ChatGPT for route optimization and more! 〈https://www.youtube.com/watch?v=6GubIlC7dzo〉.
Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23).
Mark, G., Voida, S., & Cardello, A. (2012). “A pace not dictated by electrons”: An empirical study of work without email. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Gorry, G. A., & Scott Morton, M. S. (1971). A framework for management information systems. Sloan Management Review, 13(1).
Gorry, G. A., & Scott Morton, M. S. (1989). A framework for management information systems. Sloan Management Review, 30(3).
Green, S., Heer, J., & Manning, C. D. (2015). Natural language translation at the intersection of AI and HCI: Old questions being answered with both AI and HCI. Queue, 13(6).
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A. H., Ivison, H., Magnusson, I., Wang, Y., et al. (2024). OLMo: Accelerating the science of language models. arXiv preprint. 〈https://arxiv.org/abs/2402.00838〉.
Gururangan, S., Card, D., Dreier, S., Gade, E., Wang, L., Wang, Z., Zettlemoyer, L., & Smith, N. A. (2022). Whose language counts as high quality? Measuring language ideologies in text data selection. In Y. Goldberg, Z. Kozareva, & Y. Zhang (Eds.), Proceedings of the 2022 conference on empirical methods in natural language processing.
Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1).
Huntley, C. L., Brown, D. E., Sappington, D. E., & Markowicz, B. P. (1995). Freight routing and scheduling at CSX transportation. Interfaces, 25(3).
Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency.
Janis, I. L. (1972). Victims of groupthink: A psychological study of foreign-policy decisions and fiascoes. Houghton Mifflin.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.


Kandpal, N., Deng, H., Roberts, A., Wallace, E., & Raffel, C. (2023). Large language models struggle to learn long-tail knowledge. In Proceedings of the 40th international conference on machine learning.
Kane, G. C., Young, A. G., Majchrzak, A., & Ransbotham, S. (2021). Avoiding an oppressive future of machine learning: A design theory for emancipatory assistants. MIS Quarterly, 45(1).
Keary, T. (2023). GPT has entered the security threat intelligence chat. VentureBeat. 〈https://venturebeat.com/security/gpt-has-entered-the-security-threat-intelligence-chat/〉.
Kleij, R. van der, Schraagen, J. M., Cadet, B., & Young, H. (2022). Developing decision support for cybersecurity threat and incident managers. Computers & Security, 113(C).
Klein, E. (2023). A.I. could solve some of humanity’s hardest problems. It already has. Episode: Ezra Klein Show. 〈https://podcasts.apple.com/us/podcast/a-i-could-solve-some-of-humanitys-hardest-problems/id1548604447〉.
Klein, E. (2024). Will A.I. break the internet? Or save it? Episode: Ezra Klein Show. 〈https://podcasts.apple.com/us/podcast/will-a-i-break-the-internet-or-save-it/id1548604447?i=1000651522107〉.
Kleinberg, J., & Raghavan, M. (2021). Algorithmic monoculture and social welfare. Proceedings of the National Academy of Sciences, 118(22).
Kohli, R., & Devaraj, S. (2003). Measuring information technology payoff: A meta-analysis of structural variables in firm-level empirical research. Information Systems Research, 14(2).
Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J., Lee, K., Toutanova, K., Jones, L., Kelcey, M., Chang, M.-W., Dai, A. M., Uszkoreit, J., Le, Q., & Petrov, S. (2019). Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7.
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33.
Lilien, G. L. (2011). Bridging the academic–practitioner divide in marketing decision models. Journal of Marketing, 75(4).
Little, J. D. C. (1970). Models and managers: The concept of a decision calculus. Management Science, 16(8).
Lubars, B., & Tan, C. (2019). Ask not what AI can do, but what AI should do: Towards a framework of task delegability. In Proceedings of the 33rd international conference on neural information processing systems.
Martinez, C. (2023). 5 ways to use Chat GPT for investment portfolio optimization. Medium. 〈https://christianmartinezfinancialfox.medium.com/5-ways-to-use-chat-gpt-for-investment-portfolio-optimization-f5979da52173〉.
Merkert, J., Mueller, M., & Hubl, M. (2015). A survey of the application of machine learning in decision support systems. European Conference on Information Systems.
Min, S., Gururangan, S., Wallace, E., Hajishirzi, H., Smith, N., & Zettlemoyer, L. (2023). SILO language models: Isolating legal risk in a nonparametric datastore. arXiv preprint. 〈https://arxiv.org/abs/2308.04430〉.
Mishan, E. J., & Quah, E. (2020). Cost-benefit analysis. Taylor & Francis.
Moran, A. (2015). Managing agile: Strategy, implementation, organisation and people. Springer.
Murty, K. G., Liu, J., Wan, Y., & Linn, R. (2005). A decision support system for operations in a container terminal. Decision Support Systems, 39(3).
Myers, M. (1999). Investigating information systems with ethnographic research. CAIS, 2(4).
Notion Labs. (2024). Notion AI. 〈https://www.notion.so/product/ai〉.
Open Syllabus. (2023). Business syllabi featuring Bazerman and Moore. 〈https://analytics.opensyllabus.org/record/syllabi?field_name=Business&field_ids=10&author_name=Max+H.+Bazerman&author_ids=Max+H.+Bazerman〉.
OpenAI. (2021). OpenAI Codex. 〈https://openai.com/blog/openai-codex〉.
Patterns. (2024). Patterns documentation. 〈https://docs.patterns.app〉.
Polák, P. (2017). The productivity paradox: A meta-analysis. Information Economics and Policy, 38.
Rashkin, H., Nikolaev, V., Lamm, M., Aroyo, L., Collins, M., Das, D., Petrov, S., Tomar, G. S., Turc, I., & Reitter, D. (2023). Measuring attribution in natural language generation models. Computational Linguistics.
Roberts, A., Raffel, C., & Shazeer, N. (2020). How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 conference on empirical methods in natural language processing.
Robertson, B. J. (2015). Holacracy: The new management system for a rapidly changing world.
Rosemann, M., & Vessey, I. (2008). Toward improving the relevance of information systems research to practice: The role of applicability checks. MIS Quarterly, 32(1).
Rossi, S., Rossi, M., Mukkamala, R. R., Thatcher, J. B., & Dwivedi, Y. K. (2024). Augmenting research methods with foundation models and generative AI. International Journal of Information Management.
Roth, E. (2023). The New York Times is suing OpenAI and Microsoft for copyright infringement. The Verge. 〈https://www.theverge.com/2023/12/27/24016212/new-york-times-openai-microsoft-lawsuit-copyright-infringement〉.
Roy, M. C., & Lerch, F. J. (1996). Overcoming ineffective mental representations in base-rate problems. Information Systems Research, 7(2).
Saaty, R. W. (1987). The analytic hierarchy process—What it is and how it is used. Mathematical Modelling, 9(3).
Schultze, U., & Avital, M. (2011). Designing interviews to generate rich data for information systems research. Information and Organization, 21(1).
Schweikl, S., & Obermaier, R. (2020). Lessons from three decades of IT productivity research: Towards a better understanding of IT-induced productivity effects. Management Review Quarterly, 70(4).
Shim, J. P., Warkentin, M., Courtney, J. F., Power, D. J., Sharda, R., & Carlsson, C. (2002). Past, present, and future of decision support technology. Decision Support Systems, 33(2).
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1).
Simon, H. A. (1960). The new science of management decision. Harper & Brothers.
Simon, H. A. (1978). Rationality as process and as product of thought. The American Economic Review, 68(2).
Soh, C., & Markus, M. L. (1995). How IT creates business value: A process theory synthesis.
Soldaini, L., Kinney, R., Bhagia, A., Schwenk, D., Atkinson, D., Authur, R., Bogin, B., Chandu, K., Dumas, J., Elazar, Y., et al. (2024). Dolma: An open corpus of three trillion tokens for language model pretraining research. arXiv preprint. 〈https://arxiv.org/abs/2402.00159〉.
Solow, R. (1987). We’d better watch out. New York Times Book Review.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043).
Sprague, R. H. (1980). A framework for the development of decision support systems. MIS Quarterly, 4(4).
Starkey, K., & Madan, P. (2001). Bridging the relevance gap: Aligning stakeholders in the future of management research. British Journal of Management, 12(s1).
Tetlock, P. E. (2015). Superforecasting: The art and science of prediction. Random House.
Tian, K., Mitchell, E., Zhou, A., Sharma, A., Rafailov, R., Yao, H., Finn, C., & Manning, C. (2023). Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. In H. Bouamor, J. Pino, & K. Bali (Eds.), Proceedings of the 2023 conference on empirical methods in natural language processing.
Tong, A., Wang, E., & Coulter, M. (2024). Exclusive: Reddit in AI content licensing deal with Google. Reuters. 〈https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/〉.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157).
Van Haren, K. (2023). Replacing a SQL analyst with 26 recursive GPT prompts. Patterns blog. 〈https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/〉.
Wilmer, H. H., Sherman, L. E., & Chein, J. M. (2017). Smartphones and cognition: A review of research exploring the links between mobile technology habits and cognitive functioning. Frontiers in Psychology, 8.
Wiseman, S., Shieber, S., & Rush, A. (2017). Challenges in data-to-document generation. In Proceedings of the 2017 conference on empirical methods in natural language processing.
Wolfram Research. (2023). Symbolic calculations. Wolfram Language & System Documentation Center. 〈https://reference.wolfram.com/language/tutorial/SymbolicCalculations.html〉.
Wu, S., Irsoy, O., Lu, S., Dabravolski, V., Dredze, M., Gehrmann, S., Kambadur, P., Rosenberg, D., & Mann, G. (2023). BloombergGPT: A large language model for finance. arXiv preprint. 〈https://arxiv.org/abs/2303.17564〉.
Yin, R. K. (2013). Case study research: Design and methods (5th ed.). SAGE Publications, Inc.
Zhang, Y., Li, Y., Cui, L., Cai, D., Liu, L., Fu, T., Huang, X., Zhao, E., Zhang, Y., Chen, Y., et al. (2023). Siren’s song in the AI ocean: A survey on hallucination in large language models. arXiv preprint. 〈https://arxiv.org/abs/2309.01219〉.
