Cognitive Biases and Decision Support Systems Development: A Design Science Approach
Abstract. This paper presents design science research that aims to improve deci-
sion support systems (DSS) development in organizations. Evolutionary develop-
ment has been central to DSS theory and practice for decades, but a significant
problem for DSS analysts remains how to conceptualize the improvement of
a decision task during evolutionary DSS development. The objective of a DSS
project is to improve the decision process and outcome for a manager making an
important decision. The DSS analyst needs to have a clear idea of the nature of the
target decision task and a clear strategy of how to support the decision process.
Existing psychological research was examined for help with the conceptualization
problem, and the theory of cognitive bias is proposed as a candidate for this assis-
tance. A taxonomy of 37 cognitive biases that codifies a complex area of psycho-
logical research is developed. The core of the project involves the construction of
a design artefact – an evolutionary DSS development methodology that uses cog-
nitive bias theory as a focusing construct, especially in its analysis cycles. The
methodology is the major contribution of the project. The feasibility and effective-
ness of the development methodology are evaluated in a participatory case study
of a strategic DSS project where a managing director is supported in a decision
about whether to close a division of a company.
INTRODUCTION
Decision support systems (DSS) is the area of information systems (IS) devoted to supporting
and improving human decision-making. The DSS field began in the early 1970s as a radical
alternative to large-scale management IS (MIS). Over time, major changes in information tech-
nology (IT) have enabled new decision support movements. Financial modelling software and
spreadsheets created a boom in personal DSS in the early 1980s; 5 years later, multi-dimen-
sional modelling and online analytical processing technology enabled the deployment of large-
scale executive IS (EIS). Advances in storage technology and data modelling in the mid-1990s
led to the data warehousing and business intelligence movements (Arnott & Pervan, 2005).
Despite this substantial technical progress, laboratory experiments investigating the influence
of DSS on decision performance have reported mixed, often disappointing, outcomes (Ben-
basat & Nault, 1990). In contrast, the results from case study research show that a focus on
decision-making and tailored support can lead to successful systems (e.g. Courbon, 1996;
Igbaria et al., 1996; Botha et al., 1997). A persistent theme in descriptions of successful IS for
managers is the use of evolutionary systems development methods (Poon & Wagner, 2001).
Evolutionary development has been central to DSS theory and practice for decades. Spra-
gue & Carlson (1982, p. 132) argued, ‘DSS must evolve or grow to reach a “final” design
because no one can predict or anticipate in advance what is required. The system can never
be final; it must change frequently to track changes in the problem, user, and environment
because these factors are inherently volatile’. As a result, the functionality of a DSS evolves
over a series of development cycles where both the client and the systems analyst are active
contributors to the shape, nature and logic of the system (Arnott, 2004). While there is uni-
versal acceptance of the value of evolutionary development for decision support projects, there
is little advice available to system developers about how to proceed with evolutionary DSS
development. The objective of a DSS project is usually to improve the decision process and
outcome for a manager making an important decision. The DSS analyst needs to have a clear
idea of the nature of the target decision task and a clear strategy of how to support the decision
process. A persistent problem for analysts is how to conceptualize the aspects of the decision
task that need improvement during the various iterations of the evolutionary development pro-
cess. This problem is the focus of this paper.
System development is fundamentally a process of design. Hevner et al. (2004), in a dis-
cussion of the role of design theory in IS, clearly articulated the nature of the problem that DSS
analysts face: ‘the existing knowledge base is often insufficient for design purposes and
designers must rely on intuition, experience, and trial-and-error methods’ (p. 99). Given the
strategic nature of most DSS to organizations, any guidance to help analysts cope with a trial-
and-error design situation could lead to more effective systems. It follows that because DSS is
fundamentally about decision-making, a DSS analyst should have considerable knowledge
about human decision processes and how to improve them. Further, DSS development meth-
ods should support the analyst’s strategies for decision improvement. This paper reports a
design science project that attempts to provide guidance to analysts developing a DSS. It
grounds this guidance in an important part of behavioural decision theory – the theory of cog-
nitive bias. The outcome of the research project is a systems development methodology that
is effective in developing strategic personal DSS.
The paper is organized as follows: first, the research method and design are presented. A
feature of this section is the synthesis of a design science research method from previous
studies and frameworks. Next, the theoretical background of the project in judgement and deci-
sion-making is defined. The development of a taxonomy of cognitive biases is an important
contribution of this section. The fourth section presents the major contribution of the research
project: a DSS development methodology that uses cognitive biases as a focusing construct.
This methodology is then tested in a strategic DSS project where a managing director is sup-
ported in a decision about whether to close a division of a company. Finally, the limitations, pro-
fessional and theoretical contributions and future directions of the research are discussed.
As mentioned in the Introduction, this project addresses the development of a DSS and, in par-
ticular, considers how to conceptualize the improvement of a decision task during evolutionary
DSS development. The research uses a design science approach. Design science is an alter-
native, or complement, to the natural science approach that is dominant in IS research. In
design science, the researcher ‘creates and evaluates IT artefacts intended to solve identified
organizational problems’ (Hevner et al., 2004, p. 77). March & Smith (1995) clearly draw the
distinction between natural and design science: ‘Whereas natural science tries to understand
reality, design science attempts to create things that serve human purposes’ (p. 253).
Design science is particularly relevant to IS research because it helps to address two of the
current controversies of the discipline: the role of the IT artefact in IS research (Orlikowski &
Iacono, 2001) and the low level of professional relevance of many IS studies (Benbasat &
Zmud, 1999). These controversies are addressed by making systems and methods the unit of
analysis and by evaluating research outcomes in an organizational context, preferably in a real
IT application. Figure 1 presents the research method used in this project. On the left-hand
side of the figure are five distinct research processes. These are adapted from Vaishnavi &
Kuechler (2005), who proposed a design research methodology with the following major pro-
cess steps: awareness of problem, suggestion, development, evaluation and conclusion. They
also identified knowledge feedback flows between the steps. The method in Figure 1 also
includes aspects of other frameworks and models for conducting design science research in IS.
Gregg et al. (2001) developed a design science-style software engineering research method-
ology framework for IS that comprises three interrelated phases: conceptualization, formaliza-
tion and development. They argued that rigorous design research must address at least two of
the three phases. In Figure 1 conceptualization is covered by the problem recognition and sug-
gestion steps, and development is addressed by artefact development and evaluation. March
& Smith (1995) proposed build and evaluate as the two fundamental design research pro-
cesses. Build effectively covers the first three processes in Figure 1. Teasing out build into three
subprocesses makes the research design much clearer and the execution much easier.
The right-hand side of Figure 1 shows how the current project uses the design science meth-
odology. The first process, problem recognition, has already been addressed in the Introduc-
tion, with the problem being defined as ‘how to conceptualize the aspects of the decision task
that need improvement during the various iterations of the evolutionary development’. In the
second process, suggestion, the idea of cognitive bias is proposed as a focusing construct. The
third phase, artefact development, is the heart of a design science project. March & Smith
(1995) define IT design artefacts as constructs, models, methods, or instantiations. The arte-
fact at the core of this project is a DSS development method. The instantiation of the design
artefact in this project is the development of a strategic DSS using the new methodology. In the
fourth phase, evaluation, researchers can use a variety of methods and techniques from both
positivist and interpretive IS traditions. Hevner et al. (2004) provide a set of guidelines for
design science research in IS and identify five classes of methods for evaluating design arte-
facts. Their first class of evaluation, observational, comprises case studies and field studies.
This project uses a participatory case study to study the design artefact intensively in an orga-
nizational context. The aim of the evaluation stage of this project is to test the feasibility of using
the development method in the field, and also to test its effectiveness in use. The details of the
design of this empirical study are presented in the section on research design.
THEORETICAL BACKGROUND
This section addresses the suggestion stage of the design science research method. In trying
to conceptualize the improvement of a decision task during evolutionary DSS development, a
number of alternative theories of decision-making may be useful. DSS theory has been dom-
inated by the process-oriented model of decision-making associated with the Nobel laureate
Herbert Simon (Simon, 1960). Simon’s model was an integral component of the framework
that first defined DSS (Gorry & Scott Morton, 1971) and was part of the theoretical founda-
tion of the most influential early DSS books (Keen & Scott Morton, 1978; Sprague & Carlson,
1982). Despite the importance of Simon’s original theory to the history of DSS, more recent
contributions to decision-making theory need to be better integrated into DSS theory. Ange-
hrn & Jelassi (1994) argue that Simon’s theory ‘has become a serious obstacle for the evo-
lution of DSS theory and practice’ (p. 269). Elam et al. (1992) argue that research on
behavioural decision-making needs to be integrated with research on the effect of DSS on
decision-making. One aspect of behavioural decision theory that is of potential value to DSS
researchers and systems analysts involved in developing DSS is the notion of predictable
bias in decision-making.
Cognitive biases
Cognitive biases are cognitions or mental behaviours that prejudice decision quality in a sig-
nificant number of decisions for a significant number of people; they are inherent in human rea-
soning. Cognitive biases are often called decision biases or judgement biases. One way of
viewing cognitive biases is as predictable deviations from rationality. A rational choice is one
based on the decision-maker’s current assets and the possible consequences of the choice
(Hastie & Dawes, 2001, Chapter 1). Many cognitive biases have been identified by decision
theory researchers. Following a detailed literature review and analysis, 37 biases were iden-
tified. They are presented in Table 1. This taxonomy arranges biases into categories of mem-
ory, statistical, confidence, adjustment, presentation and situation biases. Memory biases have
to do with the storage and recall of information. Statistical biases are concerned with the gen-
eral tendency of humans to process information contrary to the normative principles of prob-
ability theory. Confidence biases act to increase a person’s confidence in his or her prowess
as a decision-maker. An important aspect of confidence bias is the curtailment of the search
for new information about the decision task. Presentation biases should not be thought of as
only being concerned with the display of data. They act to bias the way information is perceived
and processed, and are some of the most important biases from a decision-making perspec-
tive. Situation biases relate to how a person responds to the general decision situation and rep-
resent the highest level of bias abstraction. It is important to recognize that these cognitive
biases are not necessarily as discrete as the taxonomy implies, and that they are likely to over-
lap in definition and effect. Further details of the individual biases and bias taxonomies can be
found in Arnott (2002).
The research on biases summarized in Table 1 indicates a predictable propensity of human
decision-makers towards irrationality. While the nature of the underlying psychological pro-
cesses that lead to biased behaviour is the subject of considerable debate (Keren, 1990; Gig-
erenzer, 1991; 1996; Dawes & Mulford, 1996), the experimental findings on cognitive biases
show persistent biasing in laboratory studies. This behaviour has also been shown in many
cases to generalize to real-world situations, albeit with a reduced effect (Joyce & Biddle, 1981;
Wright & Ayton, 1990). Normally excluded from consideration in cognitive bias research are
factors that influence decisions arising from psychological pathology, religious belief or social pressure (including customs, tradition and hero worship). The role of intelligence and individual differences in cognitive bias research has been largely ignored, as have the effects of visceral or ‘hot’ factors on decision-making (Loewenstein, 1996).
Table 1. A taxonomy of cognitive biases

Memory biases
Hindsight: In retrospect, the degree to which an event could have been predicted is often overestimated (Fischhoff, 1982a; Mazursky & Ofir, 1997).
Imaginability: An event may be judged more probable if it can be easily imagined (Tversky & Kahneman, 1974; Taylor & Thompson, 1982).
Recall: An event or class may appear more numerous or frequent if its instances are more easily recalled than other equally probable events (Tversky & Kahneman, 1981; Taylor & Thompson, 1982).
Search: An event may seem more frequent because of the effectiveness of the search strategy (Tversky & Kahneman, 1974; Bazerman, 2002).
Similarity: The likelihood of an event occurring may be judged by the degree of similarity with the class it is perceived to belong to (Horton & Mills, 1984; Joram & Read, 1996).
Testimony: The inability to recall details of an event may lead to seemingly logical reconstructions that may be inaccurate (Wells & Loftus, 1984; Ricchiute, 1997).

Statistical biases
Base rate: Base rate data tends to be ignored when other data are available (Fischhoff & Beyth-Marom, 1983; Bar-Hillel, 1990).
Chance: A sequence of random events can be mistaken for an essential characteristic of a process (Wagenaar, 1988; Ayton et al., 1989).
Conjunction: Probability is often overestimated in compound conjunctive problems (Bar-Hillel, 1973; Teigen et al., 1996).
Correlation: The probability of two events occurring together can be overestimated if they have co-occurred in the past (Tversky & Kahneman, 1973; Alloy & Tabachnik, 1984).
Disjunction: Probability is often underestimated in compound disjunctive problems (Bar-Hillel, 1973; Bazerman, 2002).
Sample: The size of a sample is often ignored in judging its predictive power (Nisbett et al., 1983; Sedlmeier & Gigerenzer, 1997).
Subset: A conjunction or subset is often judged more probable than its set (Thuring & Jungermann, 1990; Briggs & Krantz, 1992).

Confidence biases
Completeness: The perception of an apparently complete or logical data presentation can stop the search for omissions (Fischhoff et al., 1978; Hogarth, 1987).
Control: A poor decision may lead to a good outcome, inducing a false feeling of control over the judgement situation (Greenberg, 1996; Hastie & Dawes, 2001).
Confirmation: Often decision-makers seek confirmatory evidence and do not search for disconfirming information (Russo et al., 1996; Heath, 1996).
Desire: The probability of desired outcomes may be inaccurately assessed as being greater (Olsen, 1997; Hastie & Dawes, 2001).
Overconfidence: The ability to solve difficult or novel problems is often overestimated (Brenner et al., 1996; Keren, 1997).
Redundancy: The more redundant and voluminous the data, the more confidence may be expressed in its accuracy and importance (Remus & Kotterman, 1986; Arkes et al., 1989).
Selectivity: Expectation of the nature of an event can bias what information is thought to be relevant (Schwenk, 1988; Kahneman & Tversky, 1973).
Success: Often failure is associated with poor luck, and success with the abilities of the decision-maker (Miller, 1976; Hogarth, 1987).
Test: Some aspects and outcomes of choice cannot be tested, leading to unrealistic confidence in judgement (Einhorn, 1980; Christensen-Szalanski & Bushyhead, 1981).

Adjustment biases
Anchoring and adjustment: Adjustments from an initial position are usually insufficient (Chapman & Johnson, 1994; Ganzach, 1996).
Conservatism: Often estimates are not revised appropriately on the receipt of significant new data (Fischhoff & Beyth-Marom, 1983; Nelson, 1996).
Reference: The establishment of a reference point or anchor can be a random or distorted act (Tversky & Kahneman, 1974; Bazerman, 2002).
Regression: That events will tend to regress towards the mean on subsequent trials is often not allowed for in judgement (Kahneman & Tversky, 1973; Joyce & Biddle, 1981).

Presentation biases
Framing: Events framed as either losses or gains may be evaluated differently (Kahneman & Tversky, 1979; Kühberger, 1997).
Linear: Decision-makers are often unable to extrapolate a non-linear growth process (Wagenaar & Timmers, 1979; Mackinnon & Wearing, 1991).
Mode: The mode and mixture of presentation can influence the perceived value of data (Saunders & Jones, 1990; Dusenbury & Fennma, 1996).
Order: The first or last item presented may be overweighted in judgement (Yates & Curley, 1986; Chapman et al., 1996).
Scale: The perceived variability of data can be affected by the scale of the data (Remus, 1984; Ricketts, 1990).

Situation biases
Attenuation: A decision-making situation can be simplified by ignoring or significantly discounting the level of uncertainty (Beer, 1981; Hogarth, 1987).
Complexity: Time pressure, information overload and other environmental factors can increase the perceived complexity of a task (Maule & Edland, 1997; Ordonez & Benson, 1997).
Escalation: Often decision-makers commit to follow or escalate a previous unsatisfactory course of action (Northcraft & Wolf, 1984; Drummond, 1994).
Habit: An alternative may be chosen only because it was used before (Hogarth, 1987; Slovic, 1975).
Inconsistency: Often a consistent judgement strategy is not applied to an identical repetitive set of cases (Showers & Charkrin, 1981; Moskowitz & Sarin, 1983).
Rule: The wrong decision rule may be used (Sage, 1981; Goodwin & Wright, 1991).
Debiasing
Debiasing is a procedure for reducing or eliminating biases from the cognitive strategies of a
decision-maker. Keren (1990, p. 523) proposed a debiasing framework based on medical diag-
nosis and prescription. This framework aims to:
1 Identify the existence and nature of the potential bias. This includes understanding the envi-
ronment of the bias and the cognitive triggers of the bias;
2 Consider alternative means for reducing or eliminating the bias;
3 Monitor and evaluate the effectiveness of the debiasing technique chosen. The possibility of
negative side effects should be a particular concern.
In step 2, Keren distinguished between procedural techniques, where the user is unaware
of the internal structure of the problem and hence the operation of the bias, and structure-
modifying techniques, whereby the user can manipulate the internal structure of the task.
Most reported debiasing research is of a procedural nature, although the deeper under-
standing of the task and biases required for structure modifying may lead to more effective
outcomes.
In one of the most influential works on debiasing, Fischhoff (1982b) proposed a classification
of debiasing methods that focused on the source of bias. Sources were identified as faulty deci-
sion-makers, faulty tasks and mismatches between decision-makers and tasks. Fischhoff’s
category of faulty tasks implies that a redesign of the task environment may have an effect on
cognitive biases. Klayman & Brown (1993) support this view and suggest that redesigning the
task environment is an alternative to debiasing the individual decision-maker. IS has much to
offer in this area, as task and process redesign is a core activity in systems analysis and design
(Avison & Fitzgerald, 1995, Chapter 3).
The aspect of Fischhoff’s classification that has attracted the most attention is his strategy
for ‘perfecting individuals’. This assumes that the primary source of biased judgement is the
decision-maker, rather than the task. Kahneman & Tversky (1982) distinguish between those
situations where people lack competence (comprehension errors) and those where they are
competent but fail on a given decision (application errors). A debiasing strategy for an appli-
cation error needs to focus on educating the decision-maker about the decision task, relevant
biases and decision rules. Comprehension errors are more difficult to overcome than applica-
tion errors. Fischhoff’s strategy to overcome these errors is an escalation design where each
level represents an increase in the degree of support provided to the individual. The steps in
this escalation of involvement are as follows:
1 Warn the decision-maker about the possibility of bias, without providing a description of its
nature.
2 Describe the nature of the bias. This description should include the direction (positive or
negative influence) and the strength of the bias.
3 Provide feedback. This feedback should personalize the warning and description of the bias
and the decision-maker’s reaction to the bias for the target task.
4 Provide an extended programme of training, with coaching, feedback, discussion or any
other intervention that will overcome the bias effect.
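To make the ordering of these interventions concrete, the following minimal Python sketch models Fischhoff’s four levels as an ordered sequence through which an analyst can step until the bias effect is overcome. The class and function names are illustrative only and are not part of Fischhoff’s framework.

```python
from enum import IntEnum
from typing import Optional

class Escalation(IntEnum):
    """Fischhoff's (1982b) escalating levels of support for 'perfecting individuals'."""
    WARN = 1       # warn that a bias may be present, without describing its nature
    DESCRIBE = 2   # describe the bias, including its direction and strength
    FEEDBACK = 3   # personalize the warning and description for the target task
    TRAIN = 4      # extended programme of training, coaching, feedback and discussion

def next_level(current: Optional[Escalation]) -> Optional[Escalation]:
    """Return the next (stronger) level of support, or None when all levels are exhausted."""
    if current is None:
        return Escalation.WARN
    if current < Escalation.TRAIN:
        return Escalation(current + 1)
    return None

# Example: if a simple warning does not overcome the bias, escalate to a description.
print(next_level(Escalation.WARN).name)  # DESCRIBE
```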
Fischhoff’s third category, mismatch between decision-maker and task, addresses Kahne-
man and Tversky’s application errors in that the decision-maker is thought to have the requisite
cognitive skills but somehow they are not applied effectively. Fischhoff calls the debiasing strat-
egies in this category cognitive engineering.
Bazerman (2002, pp. 155–157) suggested a general debiasing strategy based on the
Lewin–Schein model of social change (Lewin, 1947; Schein, 1962). The Lewin–Schein model
views change as a sequence of unfreezing, moving and refreezing processes. Unfreezing
involves altering the forces on an individual such that the current equilibrium is disturbed to the
extent that the individual wants to change. This can result from external direct pressure or indi-
rectly by a reduction in the forces that constrain change. Moving involves instruction into the
nature of change and the actual process of learning new social behaviours. Refreezing
involves integrating the changes into the personality or cognitive make-up of the individual.
Bazerman used the Lewin–Schein model because he argues that debiasing must be guided by
a psychological framework for change. Bazerman believes that unfreezing is the key to debi-
asing for three reasons. The first is that decision-makers are likely to have used their current
strategy for a considerable time and that any change will be psychologically disturbing. People
will avoid disturbing information that questions their cognitive abilities. Second, most managers
(who are the principal users of DSS) will have been rewarded for their current decision-making
strategies. Indeed, their successive promotions will probably have been based on the results
of their intuitive strategies. Third, individuals tend to keep cognitions in order and debiasing is
a threat to this order or cognitive balance.
Bazerman terms the moving stage of the Lewin–Schein model as change. He prescribes
three steps for decision-making change: clarification of the existence of cognitive biases, expla-
nation of the causes of the biases and reassurance that the biases are not a threat to the
decision-maker’s self-esteem (Bazerman, 2002, p. 156). It is important for the decision-maker
to realize that everyone’s decision-making is biased and that debiasing is meant to make an
already effective decision-maker even more effective. Refreezing is important as biases can
easily resurface after the effort of the moving/change stage is over. The decision-maker needs
to continually use the new approach to ensure that it becomes the dominant cognitive process.
In summary, human decision-making is subject to cognitive biases that can often adversely
affect decision quality. It is important for managers to realize that cognitive biases may lead to
serious errors of judgement in strategic decisions. The theory of cognitive biases and the pro-
cess of debiasing provide a conceptual foundation for improving decision performance in a
DSS project. If developing DSS can help to overcome the negative effects of one or more
biases, then the process and outcome of decision-making should be improved.
This section addresses the third process in the design science research method, artefact
development. The design artefact in this project is a DSS development method that uses cog-
nitive bias as a focusing construct. A model of the development method is presented in
Figure 2. It conceptualizes DSS development at two levels: a major cycle level, represented by
dark circles, and a development activity level, represented by white ellipses. The major cycles
are initiation, analysis and delivery.
Figure 2 attempts to portray DSS development in a realistic manner. It is unlike most systems
development schematics in that it does not indicate procedural flow through a model using
arrows that link discrete elements. The development of the first generation of a DSS is often
presented visually as involving left-to-right progress in the model, which obscures the fact that
many activities overlap in time and nature. For example, it is common for system construction,
system use and design to be undertaken in rapid succession, sometimes simultaneously.
Figure 2 shows that the major cycles are linked by shared activities – planning and resourcing
links initiation and analysis cycles, and design links analysis and delivery cycles. This attempts
to capture the organic nature of DSS development, although it is very difficult to depict the
dynamics of DSS development in a static, two-dimensional diagram.
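One way to read the static diagram is as a set of overlapping cycles joined by shared activities rather than as a flowchart. The sketch below is only an illustration of that structure, using the cycle and activity names mentioned in the text; the dictionary representation is not part of the methodology itself.

```python
# A minimal sketch of the structure portrayed in Figure 2: three major cycles,
# with shared activities (not arrows) linking adjacent cycles.
MAJOR_CYCLES = {
    "initiation": ["planning and resourcing"],
    "analysis":   ["planning and resourcing", "decision diagnosis", "design"],
    "delivery":   ["design", "system construction", "use"],
}

SHARED_ACTIVITIES = {
    ("initiation", "analysis"): "planning and resourcing",
    ("analysis", "delivery"): "design",
}

def linking_activity(cycle_a: str, cycle_b: str) -> str:
    """Return the activity shared by two adjacent major cycles."""
    return SHARED_ACTIVITIES[(cycle_a, cycle_b)]

print(linking_activity("analysis", "delivery"))  # design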
Initiation cycles
Initiation cycles are triggered when the client realizes the need for a new DSS application or
recognizes the need for significant change to an existing application. This realization means
that the decision-maker sees that some improvement to decision-making is required. This
makes unfreezing easier than if a system development project is imposed, as is often the case
with large-scale operational systems. If a DSS is using a debiasing strategy, it is ethically
important to make the client aware of the nature of the strategy. Debiasing can be more per-
sonally challenging than other DSS development approaches, and the manager/client has the
right to choose the level of cognitive process intervention that they are comfortable with. During
initiation, the general problem area or decision is defined, resources are allocated and stake-
holders engaged. An initiation cycle is completed when a decision is made by the client to con-
tinue with the development of the application.
Analysis cycles
Analysis cycles focus on understanding and diagnosing the decision task. To help the decision-maker begin to reflect on the task, the analyst can:
1 Make the decision-maker articulate what they know about the decision.
2 Encourage decision-makers to search for discrepant information or information that challenges the adopted or preferred position.
3 Offer ways to decompose the problem into more understandable subproblems or themes.
4 Consider a wider set of decision situations or scenarios. Then, consider the nature of the …
The analyst can work through some, or all of these steps, depending on the nature of the
project. After this process, the manager will have become accustomed to thought experiments
about the decision task and will be ready to explicitly consider cognitive biases. Keren’s diag-
nosis and prescription framework, Fischhoff’s perfecting individuals escalation design and
Bazerman’s steps for decision-making change (all presented in the section on debiasing) can
be combined into a strategy that can be used to approach debiasing. The steps in this com-
bined approach are:
1 Identify the biases that are likely to affect the target decision, using the taxonomy in Table 1.
2 Assess the likely impact and magnitude of each identified bias on the decision.
3 Select a method for reducing or eliminating the biases judged to be most damaging.
4 Use the chosen debiasing strategy to guide systems design, and monitor and evaluate its effectiveness in use.
The taxonomy presented in Table 1 can be used to help with the identification of biases. The
analyst should start bias identification at the highest level of the taxonomy (the memory,
statistical, confidence, adjustment, presentation and situation categories) and judge if there is
any likely effect under each classification. The analyst may then proceed to the individual bias
level. The descriptions that are provided in Table 1 for each individual bias are useful with this
identification.
In identifying the likely impact and magnitude of the bias or biases, the analyst is specifically
interested in that subset of the identified biases that may have a strong negative influence on
the target decision. The selection of the method for reducing or eliminating the bias will depend
on the particular bias and the particular decision-maker. The citations for each bias in Table 1
can be consulted to provide additional knowledge about the bias and possible corrective
action. The nature of the bias and the debiasing strategy will then guide the systems design
activity. The systems analyst will consider what is possible to implement in an IT-based system,
and this may cause a change in the debiasing approach.
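The top-down use of the taxonomy can be illustrated with a short Python sketch. The category and bias names below come from Table 1; the rating scheme, the threshold and the function names are assumptions made for this illustration only, not a prescribed instrument.

```python
# Illustrative sketch of the bias-identification step of an analysis cycle.
TAXONOMY = {
    "memory":       ["hindsight", "imaginability", "recall", "search", "similarity", "testimony"],
    "statistical":  ["base rate", "chance", "conjunction", "correlation", "disjunction",
                     "sample", "subset"],
    "confidence":   ["completeness", "control", "confirmation", "desire", "overconfidence",
                     "redundancy", "selectivity", "success", "test"],
    "adjustment":   ["anchoring and adjustment", "conservatism", "reference", "regression"],
    "presentation": ["framing", "linear", "mode", "order", "scale"],
    "situation":    ["attenuation", "complexity", "escalation", "habit", "inconsistency", "rule"],
}

def diagnose(category_judgements, bias_impacts, threshold=2):
    """Top-down identification: keep only the categories judged relevant, then keep the
    individual biases whose estimated negative impact (0-3) reaches the threshold."""
    candidates = [bias
                  for category, relevant in category_judgements.items() if relevant
                  for bias in TAXONOMY[category]]
    return {bias: impact for bias, impact in bias_impacts.items()
            if bias in candidates and impact >= threshold}

# Example: the analyst judges only the confidence category relevant and rates two biases.
shortlist = diagnose({"confidence": True, "memory": False},
                     {"confirmation": 3, "overconfidence": 1})
print(shortlist)  # {'confirmation': 3}
```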
Delivery cycles
The delivery cycles involve both iterations and parallel application of design, system construc-
tion and use. These cycles cover the moving/change and refreezing stages of the Lewin–
Schein model. The use of a DSS can be viewed as a process of feedback and training. Training
will be more effective when the decision-maker, having misunderstood the basic principles of
the task, has the experience and ability to realize this, and learn what is required. This form of
debiasing also relies on the bias being triggered by the characteristics of the task. Analyst and
client learning has always been a central theme in DSS development (Keen, 1980; Courbon,
1996). By using the system, the decision-maker will change his or her understanding of the
decision task and the biases associated with that task. In reaction to this new understanding,
the systems analyst should redesign the DSS and construct new versions or applications. In
this sense, a DSS can be viewed as a learning system. Keen’s (1980) adaptive design model
remains the most cited exposition of this cycle. Courbon (1996, p. 119) describes these cycles
as sequences of ‘action – whenever the designer implements a new version and the user works
with it and . . . reflection, i.e. the feedback where the user and the designer think about what
should be done next based on the preceding active use’. Often the action of a delivery cycle
triggers a new analysis cycle and occasionally a new initiation cycle.
The DSS analyst should pay particular attention to refreezing the decision process. Refreez-
ing will be enabled by the continued use of a stable DSS. Given the probability of many iter-
ations of the delivery cycle, a decision-maker might remain in a constant state of moving/change, which may be psychologically stressful. The DSS may even be abandoned.
On the other hand, if the moving/change stage is completed too quickly, the possible benefit of
the DSS development will be reduced. An important activity during delivery cycles is to monitor
and evaluate the effectiveness of the chosen debiasing technique (Keren, 1990). In particular,
the possibility of negative side effects of the debiasing effort, including the triggering of other
cognitive biases, should be assessed.
The development of a design artefact is the major creative stage of the design science
research method. The evaluation of the artefact is the next stage of the project. As foreshad-
owed in the second section of this paper, this project uses a case study of a real DSS imple-
mentation to evaluate the feasibility and effectiveness of the development methodology. The
present section begins with the design of the case study. This is followed by a description of the
project and a discussion on the evaluation of the development method.
Research design
The empirical study used a single case design (Yin, 1994, Chapter 2). The unit of analysis was
the system development process. An intensive case study captures more detail than a survey
(Galliers, 1992), especially in identifying the nature and important characteristics of the sys-
tems development process (Benbasat et al., 1987). The selection of the case was opportu-
nistic. It can also be termed an instrumental case study in that the actual case was less
important than the process being studied (Stake, 1994, p. 237).
The data collection technique was participant observation (Cole, 1991; Atkinson & Ham-
mersley, 1994). The author was the systems analyst for the DSS project, and two systems
developers, one of whom was a master’s student, programmed the applications. Although
the researcher was involved in the DSS project, the method cannot be strictly categorized as
action research as there was no process of theory building through iterations of planned inter-
vention, reflection and learning (Baskerville & Wood-Harper, 1998, p. 101). This is partly due
to the very short 5-week project lifetime. The main benefit of participant observation for this
project was access to senior staff members and organizational processes (especially meetings
and project discussions) which would not have been possible in non-participant observation
(Cole, 1991). Everyone involved in the DSS project was aware that the case was being used
for research into systems development. The development team recorded their experiences in
diaries, and some sessions between the client and the systems analyst were audiotaped and
transcribed. In addition, the analyst kept a meta-diary that reflected on the overall development
process in the spirit of Schon (1983). A condition for approval of the research project by the uni-
versity ethics committee was anonymity for the organization and subjects and, as a result, the
identity of the organization was disguised. The essential elements of the project description
were unaltered.
Project description
Context
‘Delta Consulting’ is a business services firm whose services include strategic consulting,
project management, training and IT development. These areas reflect the interests of the
founders, who mostly came from an academic environment, except one, who came from a
large multi-national consulting firm. Delta has five office staff and 26 principal consultants.
When required, external contractors are employed for specific projects. The service and prod-
uct portfolio of Delta was under formal review by the board of directors. All areas of the com-
pany were profitable, although the training area was barely breaking even. The board had
commissioned an external consultant (who was not considered a competitor) to review Delta’s
performance and prospects. His report recommended that Delta maintain its core activities of
strategic consulting and project management at the current level. He recommended that the
training area be wound up and that a strategic alliance with a specialist training provider be
investigated. He argued that the time and energy that Delta would save from this alliance could
be devoted to the IT development area, which he believed had a huge potential for revenue
growth. Delta’s training services involved 19 courses that ranged from half-day to three-day
programmes. Most of Delta’s consultants were involved in training, but the only full-time
employee in the area was the training manager. The board considered the external consultant’s
report and other briefing information, and after 15 minutes of discussion there was a general
feeling that the closure of training services was a desirable strategy, although no final decision
was taken. The possible closure of the training area was flagged as an item for detailed dis-
cussion and decision at a board meeting in 2 months’ time. After the meeting, the managing
director began to have reservations about the external consultant’s recommendation and the
prospect of Delta not having a training function. At this time, the board meeting to consider the
training area closure was 5 weeks away.
Although the managing director had to recommend formally a course of action to the board, he
had the strong impression that the decision was largely his and that the board would probably
adopt his recommendation, as it had on numerous other occasions. However, with only
5 weeks available, he was unsure of the correct strategy. To help his decision process, he
engaged a consultant systems analyst to develop a DSS, triggering the first initiation cycle of
the project. He had no firm idea about what support he needed, just that he needed more infor-
mation and more options. There was no need for the analyst to explicitly address unfreezing
the decision-making process as the manager had effectively unfrozen himself when he iden-
tified the need for specialist support. The decision problem was classified as a possible appli-
cation error (Kahneman & Tversky, 1982) in that the managing director was a competent
decision-maker but was faced with a decision situation that he had not encountered before.
The analyst discussed with the managing director the general notion of cognitive biases and
outlined the degree and nature of the possible interventions into his decision-making pro-
cesses that could accompany a bias-focused DSS development. The manager agreed to fol-
low a bias-focused strategy. The analyst began the first analysis cycle with a number of
unstructured conversations with the managing director. He studied the financial documentation
that was presented to the board, as well as the external consultant’s report.
The project then moved from planning and resourcing to a decision diagnosis activity. The train-
ing closure decision was modelled by using functional decomposition (Avison & Fitzgerald,
1995, p. 62) and influence diagrams (Bodily, 1988). The decision was then analysed for the
influence of any major cognitive biases by using the taxonomy presented in Table 1. It became
apparent that the confirmation bias was likely to have a major negative impact on the decision,
as the information available to the board seemed to support strongly a closure strategy. The
confirmation bias acts against a fundamental principle of the scientific method, which holds that
information that refutes a hypothesis is more valuable than information that supports it. How-
ever, under the confirmation bias, people tend to search for information that confirms their
hypotheses and gloss over, or even actively ignore, disconfirming information (Evans, 1989;
Russo et al., 1996). The only known previous work on the confirmation bias and DSS is Ang
(1992). The analyst researched the confirmation bias in the psychology literature to better
understand the effect. After this he undertook a series of semi-structured interviews with the
managing director to elucidate the hypotheses or propositions that were addressed by the
managing director and the board when the prima facie case to close the training area was
made. The information sources known to be used by the managing director and the board were
then attributed to the various propositions, and the information was classified as being con-
firming, disconfirming or neutral. As can be seen in Table 2, virtually all of the information was
found to be confirming in nature.
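The information-stream analysis can be expressed as a simple tally. The sketch below is only an illustrative re-creation of that analysis, using the classifications reported in Table 2; the code is not the instrument used in the project.

```python
# Illustrative tally of the information available to the board, classified by decision impact.
from collections import Counter

board_information = {
    "Profit and loss statements (YTD and last 2 years)": "confirming",
    "Report on the future of Delta Consulting": "confirming",
    "Revenue and expenditure forecasts (total company, next 3 years)": "confirming",
    "Revenue and expenditure forecasts (by division, next 3 years)": "confirming",
    "Course attendance history (last 3 years)": "neutral",
}

tally = Counter(board_information.values())
print(tally)  # Counter({'confirming': 4, 'neutral': 1}) -- no disconfirming information at all
```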
During this diagnostic activity, the analyst began to develop a vague idea of what sort of DSS
could help the managing director; it would probably have a data focus, rather than a model
focus, but it would probably not be a standard database application. This vague speculation
about the IS marked the start of design activities in the engagement. The next event in the
project was deeply symbolic. Rather than refer to the project as the ‘training closure decision’,
through a number of conversations, the analyst convinced the managing director to rename the
project the ‘training area evaluation’. This neutral reframing of the decision task was noticed
and commented on by a number of company staff. It was the first time that they knew the train-
ing area closure was not a ‘done deal’ and that the managing director was considering other
strategies.
To counter the effect of the confirmation bias, the analyst adopted the escalation approach
to debiasing by ‘perfecting individuals’, which was discussed in the third section of this paper (Fis-
chhoff, 1982b). The analyst described to the managing director the nature of the confirmation
bias and briefed him on the results of the information stream analysis. They mutually decided
to develop a system that would attempt to reduce the effect of overconfirmation in the target
decision. A search for possible disconfirming information was undertaken, led by the managing director and assisted by Delta’s office manager. Much of this information was of a qualitative nature and was included in documents such as office memos and consultant performance reviews.
Table 2. Information used by the board for the prima facie closure decision

Information | Type | Source | Decision impact
Profit and loss statements (YTD and last 2 years) | Quantitative | Office manager | Confirming
Report on the future of Delta Consulting | Qualitative | Consultant’s report | Confirming
Revenue and expenditure forecasts (Total company, next 3 years) | Quantitative | Consultant’s report | Confirming
Revenue and expenditure forecasts (By divisions, next 3 years) | Quantitative | Consultant’s report | Confirming
Course attendance history (last 3 years) | Quantitative | Training manager | Neutral

YTD, year-to-date.
The first delivery cycle produced a DSS, which became known as the intelligence system. The
system was named by the managing director. It was constructed by using hyperlinked docu-
ments on a dedicated personal computer; in essence it was an unpublished web site. The doc-
ument navigation tree was based on the decision influence diagram and a hierarchy chart of
identified hypotheses. In this way, the theory of confirmation bias was used to provide the phys-
ical structure of the IS. Financial statements and other board reports were pasted into the rel-
evant documents, as was relevant disconfirming information. The system was then used by the
managing director to explore the training area decision. All other board members were given
access to the system. While the document structure implied which information sources could
be used to arrive at the decision, the system did not force a set retrieval pattern on the user.
The developer inserted as many hyperlinks as possible into each document to allow users to
follow hunches that were triggered by system use. While using the system, the managing direc-
tor repeatedly asked for additional information to be added, as did another director who briefly
used the system. These minor delivery cycles significantly increased the amount of information
contained in the Intelligence System but did not significantly change the logic or structure of the
system.
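The organizing idea behind the intelligence system, deriving a navigation tree of hyperlinked pages from a hierarchy of hypotheses and attaching confirming and disconfirming evidence to each node, can be sketched as follows. The hierarchy, evidence and function names shown are invented placeholders for illustration; the actual system was hand-assembled from board documents.

```python
# A sketch of a hypothesis hierarchy rendered as hyperlinked pages.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hypothesis:
    title: str
    confirming: List[str] = field(default_factory=list)
    disconfirming: List[str] = field(default_factory=list)
    children: List["Hypothesis"] = field(default_factory=list)

def to_html(node: Hypothesis) -> str:
    """Render one hypothesis as a page fragment with links to its sub-hypotheses."""
    evidence = "".join(f"<li>For: {e}</li>" for e in node.confirming)
    evidence += "".join(f"<li>Against: {e}</li>" for e in node.disconfirming)
    links = "".join(f'<li><a href="#{c.title}">{c.title}</a></li>' for c in node.children)
    children = "".join(to_html(c) for c in node.children)
    return f'<h2 id="{node.title}">{node.title}</h2><ul>{evidence}{links}</ul>{children}'

root = Hypothesis(
    "Training area should be closed",
    confirming=["Training barely breaks even"],
    disconfirming=["Training may generate consulting work"],
    children=[Hypothesis("Training cross-subsidises consulting")],
)
print(to_html(root))
```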
While using the intelligence system, the managing director developed new ideas about the role
of the training area. He began to wonder if training was generating business for the other areas
of Delta or if it was important in retaining clients. The possible presence of a cross-subsidy was
difficult to assess as the additional business generated by the training activities could follow the
initial work by a significant period of time, or be from a seemingly unrelated client because a
person previously related to Delta through training could have changed employer. These ideas
triggered the second initiation cycle of the project. The managing director called this second
stage the subsidy system, because it emerged from his training cross-subsidy hypothesis.
The subsidy system was not a discrete decision support application or set of applications in the
sense of the intelligence system. Rather, it is best described as a series of ephemera – appli-
cations that existed sometimes for hours, sometimes for days. This phase of the overall project
was characterized by chaotic analysis and design cycles. Design cycles were much more
numerous and used more human resources than the analysis cycles, although it was hard at
times to tell when one cycle ended and another began.
The people involved with the intelligence system were also involved with the subsidy system.
The managing director was personally involved in virtually every DSS application and devoted
significant time to the ‘system’. He indicated that the project was one of his highest priorities
and asked to be interrupted if new reports became available. The analyst and system devel-
opers worked full-time on the project and at times their effort was augmented by a company IT
consultant. The office manager was more involved in this stage of the project than for the intel-
ligence system. His main role was as a data provider for models and databases developed by
the development team.
The applications that made up the subsidy system were organized around questions artic-
ulated by the managing director. Once a question was defined, the analyst and programmers
built an information system as quickly as possible and populated it with data. Applications were
developed by using relational database and spreadsheet packages. Answering some of the
questions involved non-IT support or data gathering, e.g., asking a long-standing client about
an issue at a business lunch to inform system development. Table 3 illustrates how the man-
aging director’s questions guided the development of the subsidy system applications.
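The flavour of one such ephemeral application can be suggested with a short sketch: a throwaway database built to answer a single question (the first row of Table 3). The schema, sample rows and counts are invented placeholders, not Delta’s actual data.

```python
# Illustrative 'ephemeral' subsidy-system application built around one question.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE consulting_clients (client TEXT);
    CREATE TABLE training_clients   (client TEXT);
    INSERT INTO consulting_clients VALUES ('Acme'), ('Birch & Co'), ('Corvid');
    INSERT INTO training_clients   VALUES ('Acme'), ('Corvid'), ('Dunmore');
""")

# How many of our clients for strategic consulting were initially clients of the training area?
(count,) = conn.execute("""
    SELECT COUNT(*) FROM consulting_clients
    WHERE client IN (SELECT client FROM training_clients)
""").fetchone()
print(count)  # 2
```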
As a result of using the various applications that made up the subsidy system phase of the
project, the managing director decided to retain the training area of Delta. He believed that con-
sultants benefited significantly from the formalization of knowledge and experience that was
required to conduct a training course. He believed that this benefit manifested in increased con-
sultant performance and in increased sales. That is, he believed that a significant cross-subsidy
existed between training and the core consulting areas. He also discovered that the consult-
ants enjoyed the training work and that this contributed to their decision to remain with Delta.
This was an important finding because maintaining a high quality staff establishment in the
highly mobile consulting industry is very difficult. Using material from the decision support
applications, the managing director prepared a paper for the board that recommended retaining the training function. As predicted, the board accepted the managing director’s recommendation and resolved to investigate potential efficiencies in other areas.
Table 3. The managing director’s questions and the decision support provided

Question | IT-based decision support | Data sources | Non-IT-based decision support
How many of our clients for strategic consulting were initially clients of the training area? | Databases | Client files, training mailing list | Managing director contacts selected clients
Is there a relationship between consultant participation in training and their consulting performance? | Databases, spreadsheets | Sales data, consultant staff files, training evaluations, survey | Managing director has conversations with selected project leaders and consultants; formal survey of all consultants
What are the infrastructure and HR costs of expanding IT development? | Spreadsheets | Generic building cost data, HR budget | Office manager consults with building owner

IT, information technology; HR, human resources.
The first issue is whether Delta’s DSS project can be considered successful. The assessment
of success is a difficult problem for design research studies because it is impossible, after the
research intervention, to determine if an alternative intervention would have been more suc-
cessful or have led to a different outcome. The main argument indicating a successful project
is the opinion of the managing director. In a study of DSS success factors, Finlay & Forghani
(1998) argued that success is ‘equated with repeat use and user satisfaction’ (p. 54). In this
case, the managing director regarded the project as a success; he even offered a bonus pay-
ment to the development team, citing the importance of the outcome to Delta as the reason for
the offer. His continued personal involvement in the project equates with repeat use, which
reinforces the ‘success’ evaluation. A common occurrence in DSS projects is that the com-
missioning manager has already made his or her decision before project initiation and wants
a DSS developed to justify this decision. This situation is unlikely to have occurred in this case.
The bias-focused approach adopted by the project represented a significant challenge to the
managing director’s cognitive strategies and required much more personal involvement than a
standard DSS engagement. If his objective in commissioning the project was post-decision jus-
tification, a less demanding development process could have been followed.
The case study was conducted to evaluate the design artefact at the centre of this design
science project: a DSS development methodology that uses cognitive bias theory as a focusing
construct. The case study shows that the development method is both feasible and effective.
Using a decision-debiasing approach within an evolutionary development method, the systems
analyst had a clear strategy for improving decision performance using a DSS. The new devel-
opment method adds a psychological theory of cognitive process change to DSS development.
In the case study, the process of change involved the agreed intervention in the decision-
making process of an experienced and successful executive. The approach in Delta’s project
was a combination of cognitive engineering and procedural debiasing.
The case study was also a classical evolutionary DSS development in the spirit of Keen
(1980) and Courbon (1996). By using the DSS, the managing director learnt more about the decision task, which triggered system evolution. Sometimes this evolution involved changes to an application; sometimes it led to the development of new applications. Two clusters of adap-
tive loops defined the major development cycles of the engagement. The analysis cycles that
linked planning and resourcing, decision diagnosis, and design were quite chaotic and
occurred over short periods of time. The loops clustered in systems delivery were more
orderly and tended to be cycles of design to system construction to use to design again. As
with many DSS, the development activities were non-linear, and often aspects of the develop-
ment process proceeded in parallel and in an opposite direction to that normally assumed. For
example, in the subsidy system, some database applications were built (delivery cycle) in
order to begin understanding the nature of the question that was guiding development (anal-
ysis cycle). This is contrary to the normal instantiation of a classical systems development life
cycle.
The interpretation of the subsidy system as a series of ephemera may be of considerable
theoretical and practical importance. Most IS research is focused on projects that are relatively
large and stable. In the DSS domain, it may be that the majority of systems are more like the
ephemera that composed the subsidy system. As in the case of the company’s training area
decision, the impact of these microsystems on an organization may be much more significant
than a high-cost, large-scale operational system. This is because the decisions based on the
use of ephemeral DSS can determine the strategic success or failure of an organization. Fur-
ther research into the ephemeral nature of many IS is needed.
CONCLUSION
This paper has presented design science research that aims to improve DSS development in
organizations. The first stage of the research was the recognition that conceptualizing the
improvement of a decision task during evolutionary DSS development is a significant problem
for DSS analysts. The second stage investigated existing psychological research to see if any
theory could help solve the problem. The theory of cognitive bias was proposed as a candidate
for this assistance. A taxonomy of 37 cognitive biases that codifies a complex area of psycho-
logical research was developed. The third stage of the project involved the construction of the
design artefact: an evolutionary DSS development methodology that uses cognitive bias the-
ory as a focusing construct, especially in its analysis cycles. The systems development meth-
odology is the major contribution of the project. The fourth stage of the project involved the
evaluation of the methodology. Its feasibility and effectiveness were successfully tested in a par-
ticipatory case study of a strategic DSS project.
The design science research presented in this paper is subject to a number of limitations.
The participant observation approach of the case study can have a number of biases,
including the need to take advocacy rather than observer roles, becoming a supporter of the
group under study and not having enough time for observations (Yin, 1994, p. 89). These
biases were minimized by keeping knowledge of the potential problems explicit throughout the
project. There was ample time for observation and reflection during the project. With respect
to advocacy bias, it was inevitable that the researcher engaged in some advocacy for the
development method. However, the researcher did not become a supporter of the client, or an
advocate of any decision alternative or the project outcome. The next limitation is the difficulty
in generalizing a single case study to other engagements. In design science research, the aim
of the evaluation phase is to demonstrate the feasibility and effectiveness of the design arte-
fact. The case study in the fifth section has arguably demonstrated such feasibility and effec-
tiveness, but as identified by other researchers, design science research outcomes in one
project may not generalize to other projects (Markus et al., 2002). It is essential that further
cases or action research studies be undertaken, possibly using a replication logic (Yin, 1994,
p. 36).
The goal of design science research is utility, especially through the creation of new methods
and technologies that are useful in practice. The systems development method described in
this paper is at an early stage of development. Considerable further research is required to
enable practising systems analysts to use it with confidence. The method requires systems
analysts to have a reasonable understanding of behavioural decision theory, and this under-
standing is particularly important for cognitive bias identification. The bias taxonomy is a useful
starting point for this activity, but more research is required to develop an identification process
or method that is operationally effective. One possible direction could be the development of a
web-based assistant for bias identification. Another important area for further research is the
psychological contract between the client/user and the DSS analyst. The bias-focused DSS
methodology could place the client in a potentially stressful situation as it challenges the deci-
sion-making processes of the manager. The analyst needs to be both aware of, and sensitive
to, this challenge. Further research is required to produce strategies and guidelines to assist
the analyst with establishing and maintaining this contract.
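The shape of such a bias-identification assistant remains an open design question. As a rough illustration only, the following Python sketch shows one possible starting point: a small catalogue that maps observable characteristics of a decision task to candidate biases and to probe questions an analyst might put to the client. All cue labels, bias names and probe questions in the sketch are invented for illustration; they are not drawn from the taxonomy developed in this paper or from any implemented tool.

    # Illustrative sketch only: a hypothetical cue-to-bias lookup that a web-based
    # bias-identification assistant might use. Names and content are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class BiasEntry:
        name: str                     # bias label
        cues: set[str]                # task characteristics suggesting the bias
        probes: list[str] = field(default_factory=list)  # questions for the analyst to ask

    # A toy subset of a bias catalogue keyed by observable task characteristics.
    CATALOGUE = [
        BiasEntry('anchoring and adjustment',
                  {'initial estimate supplied', 'forecast revision'},
                  ['Was an initial figure suggested before the estimate was made?']),
        BiasEntry('confirmation',
                  {'hypothesis already favoured', 'selective information search'},
                  ['Is disconfirming evidence sought as actively as confirming evidence?']),
        BiasEntry('hindsight',
                  {'post-outcome review'},
                  ['Would the outcome have seemed as predictable before it occurred?']),
    ]

    def candidate_biases(observed_cues: set[str]) -> list[tuple[str, list[str]]]:
        """Return biases whose cue sets overlap the observed task characteristics,
        strongest overlap first, together with their probe questions."""
        matches = [(len(entry.cues & observed_cues), entry) for entry in CATALOGUE]
        ranked = sorted((m for m in matches if m[0] > 0), key=lambda m: m[0], reverse=True)
        return [(entry.name, entry.probes) for _, entry in ranked]

    if __name__ == '__main__':
        # Example: an analyst records two characteristics of the current decision task.
        cues = {'initial estimate supplied', 'post-outcome review'}
        for name, probes in candidate_biases(cues):
            print(name, probes)

A production assistant would of course need a richer cue vocabulary, coverage of the full taxonomy and empirical validation of the cue-to-bias mappings; the sketch only indicates the kind of structure further research could evaluate.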
The final reflection involves the research methodology used in this project. Design science
is an important movement in IS research (Markus et al., 2002; Hevner et al., 2004). It can help
the discipline address its problem of professional relevance and can help bring the IT artefact
to the centre of IS research. This project has shown that design science can tackle IS problems
of both theoretical and practical importance.
REFERENCES
Alloy, L.B. & Tabachnik, N. (1984) Assessment of covariation by humans and animals: joint influence of prior expectations and current situational information. Psychological Review, 91, 112–149.
Ang, D. (1992) An investigation into the use of a decision support system to reduce the confirmation bias. Unpublished Master of Computing Minor Thesis. Monash University, Melbourne, Australia.
Angehrn, A.A. & Jelassi, T. (1994) DSS research and practice in perspective. Decision Support Systems, 12, 257–275.
Arkes, H.R., Hackett, C. & Boehm, L. (1989) The generality of the relation between familiarity and judged validity. Journal of Behavioural Decision Making, 2, 81–94.
Arnott, D. (2002) A Taxonomy of Decision Biases (Technical Report No. 2002/01). Decision Support Systems Laboratory, Monash University, Melbourne, Australia. [WWW document]. URL http://dsslab.sims.monash.edu.au/taxonomy.pdf.
Arnott, D. (2004) Decision support systems evolution: framework, case study and research agenda. European Journal of Information Systems, 13, 247–259.
Arnott, D. & Pervan, G. (2005) A critical analysis of decision support systems research. Journal of Information Technology, 20, 1–21.
Atkinson, P. & Hammersley, M. (1994) Ethnography and participant observation. In: Handbook of Qualitative Research, Denzin, N.K. & Lincoln, Y.S. (eds), pp. 248–261. Sage Publications, Thousand Oaks, CA, USA.
Avison, D.E. & Fitzgerald, G. (1995) Information Systems Development: Methodologies, Techniques and Tools, 2nd edn. McGraw-Hill, Maidenhead, UK.
Ayton, P., Hunt, A.J. & Wright, G. (1989) Psychological conceptions of randomness. Journal of Behavioural Decision Making, 2, 221–238.
Bar-Hillel, M. (1973) On the subjective probability of compound events. Organizational Behavior and Human Performance, 9, 396–406.
Bar-Hillel, M. (1990) Back to base rates. In: Insights in Decision Making, Hogarth, R. (ed.), pp. 200–216. University of Chicago Press, Chicago, IL, USA.
Baskerville, R. & Wood-Harper, A.T. (1998) Diversity in information systems action research methods. European Journal of Information Systems, 7, 90–107.
Bazerman, M.H. (2002) Judgement in Managerial Decision Making, 5th edn. Wiley, New York, NY, USA.
Beer, S. (1981) Brain of the Firm, 2nd edn. Wiley, Chichester, UK.
Benbasat, I., Goldstein, D.K. & Mead, M. (1987) The case research strategy in studies of information systems. Management Information Systems Quarterly, 11, 369–386.
Benbasat, I. & Nault, B. (1990) An evaluation of empirical research in managerial support systems. Decision Support Systems, 6, 203–226.
Benbasat, I. & Zmud, R.W. (1999) Empirical research in information systems: the question of relevance. MIS Quarterly, 23, 3–16.
Bodily, S.E. (1988) Modern Decision Making: A Guide to Modelling with Decision Support Systems. McGraw-Hill, New York, NY, USA.
Botha, S., Gryffenberg, I., Hofmeyer, F.R., Lausberg, J.L., Nicolay, R.P., Smit, W.J., et al. (1997) Guns or butter: decision support for determining the size and shape of the South African National Defence Force. Interfaces, 27, 7–28.
Brenner, L.A., Koehler, D.J., Liberman, V. & Tversky, A. (1996) Overconfidence in probability and frequency judgements: a critical examination. Organisational Behaviour and Human Decision Processes, 65, 212–219.
Briggs, L.K. & Krantz, D.H. (1992) Judging the strength of designated evidence. Journal of Behavioral Decision Making, 5, 77–106.
Chapman, G.B., Bergus, G.R. & Elstein, A.S. (1996) Order of information affects clinical judgement. Journal of Behavioral Decision Making, 9, 201–211.
Chapman, G.B. & Johnson, E.J. (1994) The limits of anchoring. Journal of Behavioral Decision Making, 7, 223–242.
Christensen-Szalanski, J.J. & Bushyhead, J.B. (1981) Physicians' use of probabilistic judgement in a real clinical setting. Journal of Experimental Psychology: Human Perception and Performance, 7, 928–935.
Cole, R.E. (1991) Participant observer research: an activist role. In: Participatory Action Research, Whyte, W.F. (ed.), pp. 159–166. Sage Publications, Newbury Park, CA, USA.
Courbon, J.-C. (1996) User-centered DSS design and implementation. In: Implementing Systems for Supporting Management Decisions: Concepts, Methods and Experiences, Humphreys, P., Bannon, L., McCosh, A., Migliarese, P. & Pomerol, J.-C. (eds), pp. 108–122. Chapman & Hall, London, UK.
Dawes, R.M. & Mulford, M. (1996) The false consensus effect and overconfidence: flaws in judgement or flaws in how we study judgement. Organisational Behaviour and Human Decision Processes, 65, 201–211.
Drummond, H. (1994) Escalation in organisational decision making: a case of recruiting an incompetent employee. Journal of Behavioral Decision Making, 7, 43–56.
Dusenbury, R. & Fennema, M.G. (1996) Linguistic-numeric presentation mode effects on risky option preferences. Organisational Behaviour and Human Decision Processes, 68, 109–122.
Einhorn, H.J. (1980) Learning from experience and suboptimal rules in decision making. In: Cognitive Processes in Choice and Decision Making, Wallsten, T.S. (ed.), pp. 1–20. Erlbaum, Hillsdale, NJ, USA.
Elam, J.J., Jarvenpaa, S.L. & Schkade, D.A. (1992) Behavioural decision theory and DSS: new opportunities for collaborative research. In: Information Systems and Decision Processes, Stohr, E.A. & Konsynski, B.R. (eds), pp. 51–74. IEEE Computer Society Press, Los Alamitos, CA, USA.
Evans, J.St.B.T. (1989) Bias in Human Reasoning: Causes and Consequences. Lawrence Erlbaum, London, UK.
Finlay, P.N. & Forghani, M. (1998) A classification of success factors for decision support systems. Journal of Strategic Information Systems, 7, 53–70.
Fischhoff, B. (1982a) For those condemned to study the past: heuristics and biases in hindsight. In: Judgement under Uncertainty: Heuristics and Biases, Kahneman, D., Slovic, P. & Tversky, A. (eds), pp. 335–351. Cambridge University Press, New York, NY, USA.
Fischhoff, B. (1982b) Debiasing. In: Judgement under Uncertainty: Heuristics and Biases, Kahneman, D., Slovic, P. & Tversky, A. (eds), pp. 422–444. Cambridge University Press, New York, NY, USA.
Fischhoff, B. & Beyth-Marom, R. (1983) Hypothesis evaluation from a Bayesian perspective. Psychological Review, 90, 239–260.
Fischhoff, B., Slovic, P. & Lichtenstein, S. (1978) Fault trees: sensitivity of estimated failure probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance, 4, 330–344.
Galliers, R.D. (1992) Choosing information systems research approaches. In: Information Systems Research: Issues, Methods and Practical Guidelines, Galliers, R.D. (ed.), pp. 144–162. Blackwell Scientific, London, UK.
Ganzach, Y. (1996) Preference reversals in equal-probability gambles: a case for anchoring and adjustment. Journal of Behavioral Decision Making, 9, 95–109.
Gigerenzer, G. (1991) How to make cognitive illusions disappear: beyond heuristics and biases. European Review of Social Psychology, 2, 83–115.
Gigerenzer, G. (1996) On narrow norms and vague heuristics: a reply to Kahneman and Tversky. Psychological Review, 103, 592–596.
Goodwin, P. & Wright, G. (1991) Decision Analysis for Management Judgement. Wiley, Chichester, UK.
Gorry, G.A. & Scott Morton, M.S. (1971) A framework for management information systems. Sloan Management Review, 13, 1–22.
Greenberg, J. (1996) 'Forgive me, I'm new': three experimental demonstrations of the effects of attempts to excuse poor performance. Organisational Behaviour and Human Decision Processes, 66, 165–178.
Gregg, D.G., Kulkarni, U.R. & Vinze, A.S. (2001) Understanding the philosophical underpinnings of software engineering research in information systems. Information Systems Frontiers, 3, 169–183.
Hastie, R. & Dawes, R.M. (2001) Rational Choice in an Uncertain World. Sage, Thousand Oaks, CA, USA.
Heath, C. (1996) Do people prefer to pass along good or bad news? Valence and relevance of news as predictors of transmission propensity. Organisational Behaviour and Human Decision Processes, 68, 79–94.
Hevner, A.R., March, S.T., Park, J. & Ram, S. (2004) Design science in information systems research. MIS Quarterly, 28, 75–106.
Hogarth, R. (1987) Judgement and Choice: The Psychology of Decision, 2nd edn. Wiley, Chichester, UK.
Horton, D.L. & Mills, C.B. (1984) Human learning and memory. Annual Review of Psychology, 35, 361–394.
Igbaria, M., Sprague, R.H. Jr, Basnet, C. & Foulds, L. (1996) The impacts and benefits of a DSS: the case of FleetManager. Information and Management, 31, 215–225.
Joram, E. & Read, D. (1996) Two faces of representativeness: the effects of response format on beliefs about random sampling. Journal of Behavioral Decision Making, 9, 249–264.
Joyce, E.J. & Biddle, G.C. (1981) Are auditors' judgements sufficiently regressive? Journal of Accounting Research, 19, 323–349.
Kahneman, D. & Tversky, A. (1973) On the psychology of prediction. Psychological Review, 80, 237–251.
Kahneman, D. & Tversky, A. (1979) Prospect theory: an analysis of decision under risk. Econometrica, 47, 263–291.
Kahneman, D. & Tversky, A. (1982) Intuitive prediction: biases and corrective procedures. In: Judgement under Uncertainty: Heuristics and Biases, Kahneman, D., Slovic, P. & Tversky, A. (eds), pp. 414–421. Cambridge University Press, New York, NY, USA.
Keen, P.G.W. (1980) Adaptive design for decision support systems. Data Base, 12, 15–25.
Keen, P.G.W. & Scott Morton, M.S. (1978) Decision Support Systems: An Organisational Perspective. Addison-Wesley, Reading, MA, USA.
Keren, G. (1990) Cognitive aids and debiasing methods: can cognitive pills cure cognitive ills. In: Cognitive Biases, Caverni, J.P., Fabre, J.M. & Gonzalez, M. (eds), pp. 523–555. North-Holland, Amsterdam, the Netherlands.
Keren, G. (1997) On the calibration of probability judgments: some critical comments and alternative perspectives. Journal of Behavioral Decision Making, 10, 269–278.
Klayman, J. & Brown, K. (1993) Debias the environment instead of the judge: an alternative approach to reducing error in diagnostic (and other) judgement. Cognition, 49, 97–122.
Kühberger, A. (1997) Theoretical conceptions of framing effects in risky decisions. In: Decision Making: Cognitive Models and Explanations, Ranyard, R., Crozier, W.R. & Svenson, O. (eds), pp. 128–144. Routledge, London, UK.
Lewin, K. (1947) Group decision and social change. In: Readings in Social Psychology, Newcomb, T.M. & Hartley, E.L. (eds), pp. 330–344. Holt, New York, NY, USA.
Loewenstein, G. (1996) Out of control: visceral influences on behavior. Organisational Behaviour and Human Decision Processes, 65, 272–292.
Mackinnon, A.J. & Wearing, A.J. (1991) Feedback and the forecasting of exponential change. Acta Psychologica, 76, 177–191.
March, S. & Smith, G.F. (1995) Design and natural science research on information technology. Decision Support Systems, 15, 251–266.
Markus, M.L., Majchrzak, A. & Gasser, L. (2002) A design theory for systems that support emergent knowledge processes. MIS Quarterly, 26, 179–212.
Maule, A.J. & Edland, A.C. (1997) The effects of time pressure on human judgement and decision making. In: Decision Making: Cognitive Models and Explanations, Ranyard, R., Crozier, W.R. & Svenson, O. (eds), pp. 189–204. Routledge, London, UK.
Mazursky, D. & Ofir, C. (1997) 'I knew it all along' under all conditions? Or possibly 'I could not have expected it to happen' under some conditions? Organisational Behaviour and Human Decision Processes, 66, 237–240.
Miller, D.T. (1976) Ego involvement and attributions for success and failure. Journal of Personality and Social Psychology, 34, 901–906.
Moskowitz, H. & Sarin, R.K. (1983) Improving the consistency of conditional probability assessments for forecasting and decision making. Management Science, 29, 735–749.
Nelson, M.W. (1996) Context and the inverse base rate effect. Journal of Behavioral Decision Making, 9, 23–40.
Nisbett, R.E., Krantz, D.H., Jepson, C. & Kunda, Z. (1983) The use of statistical heuristics in everyday inductive reasoning. Psychological Review, 90, 339–363.
Northcraft, G.B. & Wolf, G. (1984) Dollars, sense and sunk costs: a life cycle model of resource allocation decisions. Academy of Management Review, 9, 225–234.
Olsen, R.A. (1997) Desirability bias among professional investment managers: some evidence from experts. Journal of Behavioral Decision Making, 10, 65–72.
Ordonez, L. & Benson, L. III (1997) Decisions under time pressure: how time constraint affects risky decision making. Organisational Behaviour and Human Decision Processes, 71, 121–140.
Orlikowski, W.J. & Iacono, C.S. (2001) Desperately seeking the 'IT' in IT research – a call for theorizing the IT artifact. Information Systems Research, 12, 121–134.
Poon, P. & Wagner, C. (2001) Critical success factors revisited: success and failure cases of information systems for senior executives. Decision Support Systems, 30, 393–418.
Remus, W.E. (1984) An empirical investigation of the impact of graphical and tabular data presentations on decision making. Management Science, 30, 533–542.
Remus, W.E. & Kottemann, J.E. (1986) Toward intelligent decision support systems: an artificially intelligent statistician. MIS Quarterly, 10, 403–418.
Ricchiute, D.N. (1997) Effects of judgement on memory: experiments in recognition bias and process dissociation in a professional judgement task. Organisational Behaviour and Human Decision Processes, 70, 27–39.
Ricketts, J.A. (1990) Powers-of-ten information biases. MIS Quarterly, 14, 63–77.
Russo, J.E., Medvec, V.H. & Meloy, M.G. (1996) The distortion of information during decisions. Organisational Behaviour and Human Decision Processes, 66, 102–110.
Sage, A.P. (1981) Behavioural and organisational considerations in the design of information systems and processes for planning and decision support. IEEE Transactions on Systems, Man and Cybernetics, 11, 640–678.
Saunders, C. & Jones, J.W. (1990) Temporal sequences in information acquisition for decision making: a focus on source and medium. Academy of Management Review, 15, 29–46.
Schein, E.H. (1962) Management development as a system of influence. Industrial Management Review, 2, 59–77.
Schon, D.A. (1983) The Reflective Practitioner: How Professionals Think in Action. Ashgate, Aldershot, UK.
Schwenk, C.R. (1988) The cognitive perspective on strategic decision making. Journal of Management Studies, 25, 41–55.
Sedlmeier, P. & Gigerenzer, G. (1997) Intuitions about sample size: the empirical law of large numbers. Journal of Behavioral Decision Making, 10, 33–51.
Showers, J.L. & Chakrin, L.M. (1981) Reducing uncollectible revenue from residential telephone customers. Interfaces, 11, 21–34.
Simon, H.A. (1960) The New Science of Management Decision. Harper & Row, New York, NY, USA.
Slovic, P. (1975) Choice between equally valued alternatives. Journal of Experimental Psychology: Human Perception and Performance, 1, 280–287.
Sprague, R.H. Jr & Carlson, E.D. (1982) Building Effective Decision Support Systems. Prentice Hall, Englewood Cliffs, NJ, USA.
Stake, R.E. (1994) Case studies. In: Handbook of Qualitative Research, Denzin, N.K. & Lincoln, Y.S. (eds), pp. 236–247. Sage Publications, Thousand Oaks, CA, USA.
Taylor, S.E. & Thompson, S.C. (1982) Stalking the elusive 'vividness' effect. Psychological Review, 89, 155–181.
Teigen, K.H., Martinussen, M. & Lund, T. (1996) Linda versus World Cup: conjunctive probabilities in three-event fictional and real-life predictions. Journal of Behavioral Decision Making, 9, 77–93.
Thuring, M. & Jungermann, H. (1990) The conjunction fallacy: causality vs. event probability. Journal of Behavioural Decision Making, 3, 51–74.
Tversky, A. & Kahneman, D. (1973) Availability: a heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.
Tversky, A. & Kahneman, D. (1974) Judgment under uncertainty: heuristics and biases. Science, 185, 1124–1131.
Tversky, A. & Kahneman, D. (1981) The framing of decisions and the psychology of choice. Science, 211, 453–458.
Vaishnavi, V. & Kuechler, B. (2005) Design research in information systems. [WWW document]. URL http://www.isworld.org/Researchdesign/drisISworld.htm (accessed 20 March 2005).
Wagenaar, W.A. (1988) Paradoxes of Gambling Behaviour. Lawrence Erlbaum, East Sussex, UK.
Wagenaar, W.A. & Timmers, H. (1979) The pond-and-duckweed problem: three experiments on the misperception of exponential growth. Acta Psychologica, 43, 239–251.
Wells, G.L. & Loftus, E.F. (eds) (1984) Eyewitness Testimony: Psychological Perspectives. Cambridge University Press, New York, NY, USA.
Wright, G. & Ayton, P. (1990) Biases in probabilistic judgement: a historical perspective. In: Cognitive Biases, Caverni, J.P., Fabre, J.M. & Gonzalez, M. (eds), pp. 425–441. North-Holland, Amsterdam, the Netherlands.
Yates, J.F. & Curley, S.P. (1986) Contingency judgement: primacy effects and attention decrement. Acta Psychologica, 62, 293–302.
Yin, R.K. (1994) Case Study Research: Design and Methods, 2nd edn. Sage Publications, Newbury Park, CA, USA.
Biography
David Arnott is Professor of Information Systems at Monash University, Melbourne, Australia and Associate Dean, Education, of Monash's Faculty of Information Technology (IT). His current research areas include the development of IT-based systems for managers, business intelligence, data warehousing and IT governance. He is the author of more than 60 scientific papers in the decision support area, including papers in journals such as the European Journal of Information Systems, Information Systems Journal, Decision Support Systems and the Journal of Information Technology.