A Prompt Pattern Catalog To Enhance Prompt Engineering
with ChatGPT
Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse
Spencer-Smith, and Douglas C. Schmidt
Department of Computer Science, Vanderbilt University
Nashville, TN, USA
Additional Key Words and Phrases: Large Language Models, ChatGPT, Prompt Patterns, Prompt Engineering
Abstract
Prompt engineering is becoming a critical skill for software developers by facilitating enhanced interactions with
conversational large language models (LLMs), such as ChatGPT, Claude, and Gemini. This emerging discipline
focuses on crafting prompts, which are instructions that guide LLMs in generating precise outputs, automating
tasks, and ensuring adherence to specific qualitative and quantitative standards. Prompts are also a form of natural
language programming that tailor the dialogue between users and LLMs, optimizing input, output, and interaction
dynamics for many computational tasks, such as developing software, analyzing documents, and/or addressing
cyber vulnerabilities.
This paper introduces a comprehensive catalog of prompt engineering techniques—structured as a collection
of patterns—aimed at addressing common challenges encountered when integrating LLMs into the software
development lifecycle. These prompt patterns serve as an effective means for knowledge transfer, similar to
software patterns. In particular, they provide reusable solutions to common problems faced in particular contexts,
such as output generation and interaction when conversing with LLMs in the domain of software-reliant systems.
This paper provides three contributions to research on—and the practice of—prompt engineering for applying
LLMs to aid users performing computational tasks. First, it establishes a framework for documenting and deploying
prompt patterns across various domains, focusing on enhancing LLM utility in software development endeavors.
Second, it curates a catalog of prompt patterns, validated through successful application in refining LLM interactions
and outputs. Third, it explores the synergistic potential of creating more complex prompts by combining multiple
prompt patterns.
1. INTRODUCTION
The impact of large language models (LLMs) on the software development lifecycle. Conversational large
language models (LLMs) [Bommasani et al. 2021; Zhou et al. 2023], such as ChatGPT, Claude, and Gemini,
are generating immense interest in a wide range of domains. Example applications include answering questions
on medical licensing exams [Gilson et al. 2022], generating code in multiple languages and programming
paradigms [Buscemi 2023], grading programming assignments in upper-level computer science courses [Grandel
et al. 2024], and accelerating certain aspects of the software acquisition process [Robert and Schmidt 2024].
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that
copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first
page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission. A preliminary
version of this paper was presented in a writers’ workshop at the 30th Conference on Pattern Languages of Programs (PLoP’23),
October 22-25, 2023, Allerton Park, Monticello, Illinois, USA. Copyright 2024 is held by the author(s). HILLSIDE 978-1-941652-19-0
LLMs answer questions posed by users via “prompts” [Liu et al. 2023], which are natural language instructions
that “program” a conversational LLM by requesting, customizing, enhancing, and/or refining its capabilities. For
example, here’s a simple conversation with ChatGPT that demonstrates a prompt followed by its response:
Prompt: “Please briefly explain what a ’prompt’ is in the context of an LLM like ChatGPT.”
Response: In the context of an LLM like ChatGPT, a “prompt” refers to the input text provided by a
user to the model. This input text serves as a request or instruction, guiding the LLM on what kind of
information, response, or output the user is seeking. The prompt can range from a simple question, a
statement requiring elaboration, a command to generate text in a specific format, or even a complex
scenario for the LLM to interpret and respond to.
The LLM processes the prompt, leveraging its pre-trained knowledge and patterns learned from vast
amounts of text data, to generate a response that aligns with the user’s request. The quality and
relevance of the LLM’s response can significantly depend on how the prompt is structured, including its
clarity, specificity, and the context it provides. Prompts are fundamental in interacting with LLMs, acting
as the primary interface through which users communicate their queries and tasks to the model.
Prompts not only respond to user questions but also shape future interactions and the outputs produced by
LLMs. They guide the direction of the subsequent dialogue and determine the nature of LLM-generated content
by establishing clear rules and guidelines from the outset of a conversation. Moreover, prompts establish the
conversation’s context, indicating to LLMs which information is crucial and defining the preferred formats and
substance of both inputs and outputs.
For example, a prompt could instruct an LLM to only generate code that follows certain coding principles (such
as applying the SOLID principles [Martin 2023]) or programming paradigms (such as object-oriented, functional, or
reactive programming). Likewise, a prompt could instruct an LLM to flag certain phrases in software requirement
documents and provide additional information (such as inconsistencies in definitions and usages) related to those
phrases. By guiding interactions with an LLM, prompts facilitate more structured and nuanced outputs to aid a
large variety of computational tasks via natural language programming.
Emerging trends, challenges, and opportunities for LLMs. Although conversational LLMs have been gen-
erally available only since the end of 2022, they are now widely applied to generate and assess computer
programs. For example, LLMs have been integrated into software tools, such as Microsoft/GitHub Copilot [git
2024; Asare et al. 2022; Pearce et al. 2022] and included in integrated development environments (IDEs), such
as IntelliJ [Krochmalski 2014] and Visual Studio Code. Software teams can thus access these AI-assisted tools
directly from their preferred IDEs, underscoring the transformative role
LLMs now play in enhancing productivity and innovation across the software development lifecycle, from initial
coding to testing to final deployment.
Despite the rapid integration of LLMs into software development, a comprehensive understanding of these
models—particularly their capabilities and limitations—is still emerging. Viewing LLMs as a computing platform
introduces a unique programming paradigm based on natural language, presenting both challenges and learning
curves. This early adoption stage highlights the need for rigorous exploration and disciplined knowledge formation
so programmers can apply LLMs effectively throughout the software development lifecycle.
However, this new paradigm also provides unprecedented opportunities for the future of software development.
For example, LLMs are set to revolutionize IDEs, enabling a collaborative ecosystem where humans and augmented
intelligence (AI+) tools [Ozkaya et al. 2023] work together as trusted partners [Carleton et al. 2022]. This
collaboration not only promises to accelerate the development and maintenance of software-reliant systems but
also lowers the barrier to entry for computational thinking, making it accessible to a broader range of individuals
with varying levels of formal education in programming [Diamandis 2024].
Motivating the need for prompt engineering. Programming first-generation LLMs, including (but not limited
to) ChatGPT [Bang et al. 2023], involves natural language prompts, such as asking an LLM to explain a software
vulnerability or generate JavaScript for a web page. These simple examples of prompts, however, only hint at the
significantly more sophisticated computational abilities of LLMs. Harnessing the potential of LLMs in productive
ways requires a systematic focus on prompt engineering [Chen et al. 2023]. This emerging discipline studies
structured interactions with—and programming of—LLM computational systems to solve complex problems via
natural language interfaces.
This paper is an extensive revision of an earlier paper [White et al. 2023a] that described how pattern-oriented
prompt engineering techniques can enhance the application of conversational LLMs for tasks in the software
development lifecycle. Example applications of LLMs in this domain include helping developers code effectively
and efficiently with unfamiliar APIs, allowing students to acquire new coding skills and techniques, and enabling
cybersecurity professionals to rapidly detect and thwart cyber-attacks [Gennari et al. 2024].
To demonstrate the promise of prompt engineering in the context of the software development lifecycle, we
provided the following prompt to ChatGPT:
Prompt: “From now on, I would like you to ask me questions to deploy a Python application to AWS.
When you have enough information to deploy the application, create a Python script to automate the
deployment.”
This prompt causes ChatGPT to begin asking questions about the Python application. ChatGPT will drive the
question-asking process until it reaches a point where it has sufficient information to generate a Python program
that automates deployment. This example demonstrates the programming potential of prompts beyond conventional
“generate a method that does X”-style prompts or “answer the following quiz question: ...”
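To illustrate how such a prompt can be driven programmatically, the following is a minimal sketch of a conversation loop built around the deployment prompt above. It assumes the OpenAI Python SDK (v1.x) with an OPENAI_API_KEY environment variable; the model name and the user-typed "quit" convention are illustrative choices rather than part of the prompt itself.

# Minimal sketch of driving the deployment prompt above in a conversation loop.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY in the environment;
# the model name and "quit" convention are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "user",
    "content": (
        "From now on, I would like you to ask me questions to deploy a Python "
        "application to AWS. When you have enough information to deploy the "
        "application, create a Python script to automate the deployment."
    ),
}]

while True:
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    text = reply.choices[0].message.content
    print(text)
    messages.append({"role": "assistant", "content": text})

    answer = input("Your answer (or 'quit' once the script is generated): ")
    if answer.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": answer})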
Prompts can also be engineered to program an LLM to do more than simply dictate the output type or filter the
information provided to the LLM. With the right prompt it is possible to create entirely new interaction paradigms. For
example, an LLM can generate and give a quiz associated with a software engineering concept or tool. Likewise, it
can simulate a Linux terminal on a computer that’s been compromised by a cyber-attack (see Section 6.2 for more
discussion on this use case).
Prompts also have the potential for self-adaptation. For example, a prompt can suggest other prompts to gather
additional information or generate related artifacts (see Section 9 for more discussion on these use cases). These
advanced capabilities of prompts highlight the importance of engineering them systematically to provide value
beyond simple generation of text, code, or unit tests.
Prompt patterns are an essential foundation to an effective discipline of prompt engineering. A key
contribution of this paper is codifying successful approaches for systematically engineering different input, output,
and interaction behaviors when working with conversational LLMs via prompt patterns. Prompt patterns are similar
to software patterns [Gamma et al. 1995; Schmidt et al. 2013] since they both offer reusable solutions to problems
arising within particular contexts. The contexts they focus on, however, relate to interactions with LLMs, such as
(but again not limited to) ChatGPT.
By documenting and leveraging prompt patterns in the context of tasks performed within the software develop-
ment lifecycle, individuals and teams can enforce constraints on input formats and generated output to ensure
relevant information is included. Likewise, prompt patterns can modify and adapt interactions with LLMs to better
solve software-related problems. Prompt patterns can be viewed as a corollary to the broad corpus of software
patterns, but are adapted to specific contexts of LLM interactions.
This paper focuses on domain-independent prompt patterns and presents a catalog of such patterns that
have been applied to simplify and/or automate tasks in the software development lifecycle. Examples of such
tasks include generating visualizations, code artifacts, and test cases; automating output steps for code editing;
and identifying discrepancies between software requirements specifications and regulatory documents. Just as
catalogs of software patterns [Gamma et al. 1995; Schmidt et al. 2013] codify ways of enhancing common software
development tasks, catalogs of prompt patterns codify ways of enhancing LLM inputs, outputs, and interactions.
Paper organization. The remainder of this paper is organized as follows: Section 2 introduces prompt patterns
and compares these patterns to classic software patterns; Section 3 summarizes and categorizes thirteen prompt
patterns we identified to solve common problems in the domain of conversational LLM interaction and output
generation for tasks in the software development lifecycle; Section 4 describes a prompt pattern that controls
the contextual information on which LLMs operate; Section 5 describes a prompt pattern that dictates how LLMs
understand user input and how they translate this input into formats they can use to generate output; Section 6
describes prompt patterns that constrain or tailor the types, formats, structures, and/or other properties of the
output generated by LLMs; Section 7 describes a prompt pattern that identifies and resolves errors in the output
generated by LLMs; Section 8 describes prompt patterns that improve the quality of LLM input and output; Section 9
describes prompt patterns that manage the interaction between users and LLMs; Section 10 discusses related
work; Section 11 presents concluding remarks and lessons learned; and Appendix A explains our approach for
defining a prompt pattern’s structure and key ideas.
—A name and classification. Each pattern has a name that identifies the pattern and should be used consistently.
Software patterns can be classified in various ways, including purpose (e.g., creational, structural, or behavioral
patterns), granularity (e.g., design, architectural, or enterprise patterns), etc.
—The intent concisely conveys the purpose the pattern is intended to achieve.
—The motivation documents the underlying problem and “forces” the pattern is meant to address and underscores
the importance of the problem.
—The structure and participants. The structure describes key pattern participants (such as subsystems, classes,
objects, and/or functions) and depicts how they collaborate to form a generalized solution.
—Example code concretely maps the pattern to some concrete programming language(s) to help developers
gain insight on how to apply the pattern effectively.
—Consequences summarize considerations users should take into account when deciding whether or how to
apply this pattern.
—The Context Control category focuses on controlling the contextual information that LLMs operate upon, which
is critical to ensure coherent, relevant, and accurate responses from LLMs.
—The Input Semantics category focuses on how LLMs understand user input and how they translate this input
into formats they can use to generate output.
—The Output Customization category focuses on constraining or tailoring the types, formats, structures, and/or
other properties of the output generated by LLMs.
—The Error Identification category focuses on identifying and resolving errors in the output generated by LLMs.
—The Prompt Improvement category focuses on improving the quality of LLM input and output.
—The Interaction category focuses on the dynamics between users and LLMs.
We identified these six categories and thirteen prompt patterns during our initial work with ChatGPT. Other
categories and prompt patterns we identified to improve code quality, refactoring, requirements elicitation, and
software design appear in [White et al. 2023b].
The remainder of this paper presents each of the thirteen prompt patterns using the pattern form described
in Section 2.2. Each prompt pattern is accompanied by concrete implementation examples. These examples
were obtained through a combination of exploring the corpus of community-posted prompts on the Internet and
independent prompt creation through our application of ChatGPT to help automate tasks throughout the software
development lifecycle.
All prompts and examples in this paper were tested with ChatGPT [OpenAI 2023a] due to its widespread
availability, popularity, and capabilities. Although the output from ChatGPT has been limited or omitted for brevity
in most cases, we encourage readers to use ChatGPT to test the prompt patterns documented below. These
patterns are all readily testable using any version of ChatGPT, though we recommend using ChatGPT-4 since it is
more powerful and reliable than ChatGPT-3.5.
3 The Context Conveyor pattern was called the Context Manager pattern in [White et al. 2023a]. We changed the name of this pattern in this
paper to better reflect its intent.
Structure and Key Ideas. The fundamental contextual statements associated with the Context Conveyor pattern
are shown in the table below:
Contextual Statements
1. Within scope X
2. Please consider Y
3. Please ignore Z
4. (Optional) start over
Statement #1 helps to scope the context, such as “when performing a code review” or “when generating a
deployment script,” to establish the boundaries within which certain information should be considered. The more
explicit these statements are about the context, the more likely the LLM will take appropriate action. For example,
focusing on modularity may be important for a code review but not for a deployment script.
Statements #2 and #3 describe information to incorporate into the output. Statements about what to consider
or ignore should list key concepts, facts, instructions, etc. that should be included or removed from the context.
For example, if users ask an LLM to ignore subjects related to a topic—yet some of those statements were
discussed far back in the conversation—the LLM may not properly disregard the relevant information. The more
explicit the list is, therefore, the better the LLM’s inclusion/exclusion behavior will be.
Finally, statement #4 optionally instructs an LLM to explicitly “start over.” This statement can be added at the end
of a prompt if the goal is to wipe the slate clean. A better way to accomplish this task, however, may be to simply
start a new interaction session with the LLM.
Example Implementation. To specify context consider using the following prompt:
“When analyzing the following pieces of code, only consider security aspects.”
Likewise, to remove context consider using the following prompt:
“When analyzing the following pieces of code, do not consider formatting or naming conventions.”
Clarity and specificity are important when providing or removing context to/from LLMs so they can better
understand the intended scope of the conversations and generate more relevant responses. In many situations,
users may want to completely start over and can employ this prompt to reset LLM context:
“Ignore everything that we have discussed and start over.”
The “start over” statement instructs the LLM to produce a complete reset of the context.
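The following is a small sketch of how these contextual statements can be assembled mechanically; the helper function and its argument names are hypothetical and serve only to make the statement slots of the Context Conveyor pattern explicit.

# Hypothetical helper that assembles a Context Conveyor prompt from the
# pattern's contextual statements: scope, consider-list, ignore-list, and an
# optional "start over" reset.
def context_conveyor(scope, consider=(), ignore=(), start_over=False):
    parts = []
    if consider:
        parts.append(f"When {scope}, only consider {', '.join(consider)}.")
    if ignore:
        parts.append(f"When {scope}, do not consider {', '.join(ignore)}.")
    if start_over:
        parts.append("Ignore everything that we have discussed and start over.")
    return " ".join(parts)

# Reproduces the security-focused code review context from the examples above.
print(context_conveyor(
    "analyzing the following pieces of code",
    consider=["security aspects"],
    ignore=["formatting", "naming conventions"],
))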
Consequences. The following are a summary of considerations users should take into account when deciding
whether or how to apply the Context Conveyor pattern.
Unintentional reset risks. An organization may transparently inject a series of policy filters or helpful prompts
into the start of each LLM conversation to restrict and/or enhance conversations. However, users may not be aware
of these filters/prompts. Applying this pattern may therefore inadvertently reset or eliminate previously applied
filters or prompts within a conversation, potentially diminishing the LLM’s guardrails or functionality without users’
knowledge.
Solution strategies for context preservation. Users should be made aware of existing contexts and the
potential consequences of resets. It may therefore be useful to incorporate strategies to inform users about what
might be lost before any context alteration occurs, such as integrating prompt explanations or confirmations that
summarize the impacts of context changes. These strategies provide a safeguard against unintentional functionality
loss and help ensure users make informed decisions about altering the conversation’s context.
Motivation. Users may not know precisely what types of outputs or details are important for an LLM to focus
on to achieve a given task. They may know, however, the role or type of person they would normally ask for help
performing these tasks. For example, it may be useful to conduct a code review as if the LLM was a security expert,
even if users themselves are not security experts. The Persona pattern enables users to express what they need
help with, without knowing the exact details of the outputs they need.
Structure and Key Ideas. The fundamental contextual statements associated with the Persona pattern are
shown in the table below:
Contextual Statements
1. Act as persona X
2. Provide outputs that persona X would create
Statement #1 instructs an LLM to act as a specific persona and provide outputs like such a persona would. This
persona can be expressed in a number of ways, including a job description, title, fictional character, or historical
figure. The persona should elicit a set of attributes associated with a well-known job title, type of person, etc.
Statement #2 offers opportunities for customization by instructing the LLM to provide outputs that someone with
persona X would create. For example, a computer science teacher might provide a large variety of different output
types, ranging from programming assignments to reading lists to lectures. If more specific scope(s) to the type(s)
of output(s) are known, users can provide them as part of the contextual statement.
Example Implementation. A sample implementation for applying the Persona pattern in a code review is shown
below:
“From now on, act as a security reviewer. Pay close attention to the security details of any code that we
look at. Provide outputs that a security reviewer would regarding the code.”
In this example, the LLM is instructed to provide outputs that a “security reviewer” would. The prompt further
sets the context within which the code will be evaluated. Finally, the user refines the prompt by scoping the persona
further to outputs regarding the code’s security posture.
Personas can also represent inanimate or non-human entities, such as a Linux terminal, a database, or even an
animal’s perspective. When using this pattern to represent these types of entities, it can be useful to specify how
the inputs will be delivered to the entity, such as “assume my input is what the owner is saying to the dog and your
output is the sounds the dog is making.”
An example prompt for a non-human entity that uses a “pretend to be” wording is shown below:
“You will pretend to be a Linux terminal for a computer that has been compromised by an attacker.
When I type in a command, you will output the corresponding text that the Linux terminal would
produce.”
This prompt is designed to simulate a computer that has been compromised by an attacker and is being
controlled through a Linux terminal. The prompt specifies that the user will first input commands into the terminal.
In response, the simulated terminal will output the corresponding text that would be produced by a real Linux
terminal. This prompt is more prescriptive in the persona and asks the LLM to not only be a Linux terminal, but to
act as a computer compromised by an attacker.
The “compromised Linux terminal” persona causes ChatGPT to generate outputs to commands that have files
and contents indicative of a computer that was hacked. This example shows how an LLM can bring its situational
awareness to a persona, e.g., by creating evidence of a cyberattack in its generated outputs. Such a persona can
be highly effective by asking the LLM to play a game, where the goal is to hide exact details of the output characteristics
from users (e.g., not give away what the cyberattack did by describing it explicitly in the prompt).
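A minimal sketch of wrapping this persona in a small read-eval-print loop is shown below; it assumes the OpenAI Python SDK (v1.x), and the model name and "exit" convention are placeholders rather than part of the pattern.

# Sketch of a REPL around the "compromised Linux terminal" persona.
# Assumes the OpenAI Python SDK (v1.x); model name and "exit" are placeholders.
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "system",
    "content": (
        "You will pretend to be a Linux terminal for a computer that has been "
        "compromised by an attacker. When I type in a command, you will output "
        "the corresponding text that the Linux terminal would produce."
    ),
}]

while True:
    command = input("$ ")
    if command.strip() == "exit":
        break
    messages.append({"role": "user", "content": command})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    output = reply.choices[0].message.content
    print(output)
    messages.append({"role": "assistant", "content": output})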
Consequences. The following are a summary of considerations users should take into account when deciding
whether or how to apply the Persona pattern.
Assumptions and hallucinations. An interesting aspect of instructing an LLM to take on non-human per-
sonas is that it may make interesting assumptions or “hallucinations” regarding the context. For example, ChatGPT
can be instructed to act as a Linux terminal and generate the output expected if a user typed the same text into a
terminal. Henceforth, commands like ls -l will generate a file listing for an imaginary UNIX file system, complete
with files that can have other Linux commands like cat file1.txt run on them.
Contextual prompts for realism. An LLM may prompt the user for more context, such as when ChatGPT is
asked to act as a MySQL database and thus prompts the user for the structure of a table the user is pretending to
query. ChatGPT can then generate synthetic rows, such as generating imaginary rows for a “people” table with
columns like “name” and “job.”
Persona limitations and policy filters. When personas are based on real individuals, privacy issues become
paramount. LLMs must navigate the fine line between providing engaging, persona-driven content and respecting
the privacy and consent of the individuals being represented. An LLM may therefore disregard requests for
personas relating to living people or people considered harmful due to underlying policy filters, such as privacy
and security rules.
Contextual Statements
1. Generate an X that I can provide to tool Y to visualize it
Statement #1 indicates to an LLM that the output it produces (i.e., “X”) will be some type of image or visualization.
When an LLM can’t generate a particular type of image or visualization natively, the portion of statement #1 that
says “that I can provide to tool Y to visualize it” clarifies that the LLM is not expected to generate a visualization
per se, but instead is expected to produce a description of a visualization consumable by tool Y for production of
the visualization.
Many tools may support multiple types of visualizations or formats, and thus naming the target tool itself may
provide insufficient information to accurately produce what the user wants. Users may therefore need to state
the precise types of visualizations (e.g., bar chart, directed graph, UML class diagram) that should be produced.
For example, Graphviz Dot can create diagrams for both UML class diagrams and directed graphs. As will be
discussed in the example implementation below, it can be advantageous to specify a list of possible tools and formats
and let the LLM select the appropriate target for visualization.
Example Implementation. A sample implementation for applying the Visualization Generator pattern in a code
review is shown below:
“Whenever I ask you to visualize something, please create either a Graphviz Dot file or Vega-lite
specification that I can use to create the visualization. Choose the appropriate tools based on what
needs to be visualized.”
This example adds a qualification that the output type for the visualization can be either for Graphviz or Vega-lite.
This approach allows an LLM to use its semantic understanding of the output format to automatically select the
target tooling based on what will be displayed.
In this example, Graphviz would be used to visualize graphs that require an exactly defined structure.
Vega-lite would be effective at visualizing graphs, charts, and more data-driven visualizations. An LLM can select
the tool based on the needs of the visualization and capabilities of each tool.
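The pipeline implied by this pattern can be scripted end to end, as in the sketch below: the LLM returns a Graphviz Dot description, which is then rendered locally. It assumes the openai (v1.x) and graphviz Python packages plus a local Graphviz installation; the model name, the example diagram, and the output file name are illustrative placeholders.

# Sketch of the Visualization Generator pipeline: the LLM produces Graphviz Dot
# source, which is rendered locally. Assumes the openai (v1.x) and graphviz
# Python packages plus a local Graphviz install; the model name, the example
# diagram, and the output file name are placeholders.
from openai import OpenAI
import graphviz

client = OpenAI()

prompt = (
    "Whenever I ask you to visualize something, please create a Graphviz Dot "
    "file that I can use to create the visualization. Output only the Dot source. "
    "Visualize a directed graph of a three-stage build pipeline: checkout -> "
    "build -> test."
)

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
dot_source = reply.choices[0].message.content  # may need Markdown fences stripped

graphviz.Source(dot_source).render("visualization", format="png")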
Consequences. The following are a summary of considerations users should take into account when deciding
whether or how to apply the Visualization Generator pattern.
Enhanced communication through hybrid text/visualization outputs. This pattern creates a pathway for
the tools to produce visualizations that are associated with other outputs. This pattern can provide a more
comprehensive and effective way of communicating information by combining the strengths of both the text
generation and external visualization tools.
Expanded expressive capabilities. The pattern creates a target pipeline for the output to render a visualization.
The pipeline may include AI generators, such as Midjourney, that can produce rich visualizations but are provided
by other vendors. The pattern allows the user to expand the expressive capabilities of the output into the visual
domain.
Dependency on third-party visualization tools. One drawback of applying the Visualization Generator pattern
is its reliance on external visualization tools and AI generators like Midjourney, which introduces dependencies on
third-party tools that can lead to issues of consistency, reliability, and control over the visualizations.
Consistency and quality control challenges. While this pattern enriches the expressive capabilities of the
output by extending it into the visual domain, it also complicates the process by incorporating tools that may have
varying standards of quality, different operational limitations, and distinct licensing requirements. These factors can
impact the seamless integration of textual and visual information, potentially hindering the overall effectiveness
and accessibility of the generated content.
Motivation. Users often know—or have an idea of—what the end goal of a prompt should look like and
what “ingredients” belong in the prompt. However, they may not necessarily know the precise ordering of steps to
achieve that end goal. For example, a user may want a precise specification on how a piece of code should be
implemented or automated, such as “create an Ansible playbook to SSH into a set of servers, copy text files from
each server, spawn a monitoring process on each server, and then close the SSH connection to each server.”
The Recipe pattern generalizes the example of “given the ingredients in my fridge, provide dinner recipes.” A
user may also want to specify a set number of alternative possibilities. For example, “provide three different ways
of deploying a web application to AWS using Docker containers and Ansible using step-by-step instructions.”
Structure and Key Ideas. The fundamental contextual statements associated with the Recipe pattern are shown
in the table below:
Contextual Statements
1. I would like to achieve X
2. I know that I need to perform steps A, B, C
3. Provide a complete sequence of steps for me
4. Fill in any missing steps
5. Identify any unnecessary steps
Statement #1 focuses the LLM on the overall goal (“X”) the recipe must be built to achieve. The LLM will organize
and complete these steps to achieve the specified goal sequentially.
Statement #2 provides the partial list of steps (“A, B, C”) the user would like the LLM to include in the overall
recipe. These steps serve as intermediate waypoints on the path the LLM may take to generate the recipe or as
constraints on its structure.
Statement #3 indicates to the LLM that the goal is to provide a complete sequential ordering of steps.
Statement #4 helps ensure the LLM will attempt to complete the recipe without further follow-up by making
some choices on the user’s behalf regarding missing steps, as opposed to just stating additional information that is
needed.
Statement #5 helps flag inaccuracies in the user’s original request so the final recipe is efficient.
This pattern combines elements of the Output Template, Alternative Approaches, and Question Decomposition
patterns, as follows:
—The Recipe pattern borrows the structured output format from the Output Template pattern (page 16) by
organizing the information provided by the user into a coherent sequence of steps. By applying the Output
Template pattern, the Recipe pattern ensures that the LLM produces outputs in a predictable, structured format
that aligns with the user’s request for a “complete sequence of steps,” thereby facilitating easier interpretation
and implementation of the solution.
—From the Alternative Approaches pattern (page 19), the Recipe pattern adopts the flexibility of exploring different
methods to achieve the stated goal (“X”), which is crucial when the initial steps provided (“A, B, C”) may not
be the only or the most efficient path to the desired outcome. The inclusion of this element allows an LLM
to consider and possibly suggest alternative strategies or steps that might not have been initially apparent or
provided by the user, enhancing the recipe’s effectiveness and adaptability.
—The Recipe pattern utilizes the Question Decomposition pattern’s (page 20) approach to break down the overall
goal into smaller, manageable tasks. By identifying and filling in missing steps (Statement #4) and eliminating
unnecessary ones (Statement #5), the LLM effectively decomposes the complex problem (“achieve X”) into a
series of simpler tasks. This decomposition aids in the logical sequencing of steps and ensures that the final
recipe is both complete and optimized for efficiency.
By incorporating elements from these other three patterns, the Recipe pattern can deliver a comprehensive
and efficient solution by structuring the output in a clear format, considering multiple pathways to the goal, and
breaking down the task into simpler components for better clarity and execution.
Example Implementation. An example usage of this pattern in the context of deploying a software application to
the cloud is shown below:
“I am trying to deploy an application to the cloud. I know that I need to install the necessary depen-
dencies on a virtual machine for my application. I know that I need to sign up for an AWS account.
Please provide a complete sequence of steps. Please fill in any missing steps. Please identify any
unnecessary steps.”
Depending on the use case and constraints, “installing necessary dependencies on a virtual machine” may be
an unnecessary step. For example, if the application is already packaged in a Docker container, the container
could be deployed directly to the AWS Fargate service, which does not require any management of underlying virtual
machines. The inclusion of the “identify unnecessary steps” language will cause the LLM to flag this issue and
omit these steps from the final recipe.
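As with the Context Conveyor sketch earlier, the Recipe pattern's contextual statements can be assembled from a goal and a partial list of known steps; the helper below is hypothetical and simply reconstructs the cloud-deployment prompt shown above.

# Hypothetical helper that fills the Recipe pattern's contextual statements
# from a goal and a partial list of known steps.
def recipe_prompt(goal, known_steps):
    known = " ".join(f"I know that I need to {step}." for step in known_steps)
    return (
        f"I am trying to {goal}. {known} "
        "Please provide a complete sequence of steps. "
        "Please fill in any missing steps. "
        "Please identify any unnecessary steps."
    )

print(recipe_prompt(
    "deploy an application to the cloud",
    [
        "install the necessary dependencies on a virtual machine for my application",
        "sign up for an AWS account",
    ],
))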
Consequences. The following are a summary of considerations users should take into account when deciding
whether or how to apply the Recipe pattern.
Specificity of user input. When users apply this pattern, the effectiveness of an LLM’s output heavily depends
on the clarity and detail of users' initial descriptions. If these descriptions are vague or lack specificity, LLMs
might generate solutions that are overly general, missing critical nuances of the given task. Although this output
may technically address the request, it may not fully capture the actual needs or intentions of users. In contrast, a
well-specified input can guide an LLM towards generating more tailored and applicable solutions, underscoring the
importance of precise and detailed user input when applying this pattern.
Introduction of bias. The initial steps or requirements outlined by users when applying the Recipe pattern can
inadvertently introduce bias into an LLM’s processing and output. This bias may manifest in the LLM prioritizing
solutions that align with the specifics of the user-provided steps, even when those steps are not necessary or
optimal for the task.
For instance, if users specify certain dependencies or tools in their requests, an LLM might focus on solutions
that incorporate these elements, potentially overlooking simpler or more efficient alternatives. This consequence
highlights the need for careful consideration of the initial input provided to an LLM since it can significantly influence
the direction and nature of the solution generated by the LLM.
Motivation. In some cases, LLM output must be produced in a precise format that is specific to a particular
application or use case, but which is unknown to the LLM. For example, users might need to generate URLs
that insert generated information into specific positions within URL paths. If an LLM is unaware of this template
structure, it must be instructed what the format is and where the different parts of its output should go. These
instructions could take the form of a sample data structure to generate, a series of form letters being filled in, etc.
Structure and Key Ideas. The fundamental contextual statements associated with the Output Template pattern
are shown in the table below:
Contextual Statements
1.A I am going to provide a template for your output
1.B X is my placeholder for content
1.C Try to fit the output into one or more of the placeholders that I list
1.D Please preserve the formatting and overall template that I provide
1.E This is the template: PATTERN with PLACEHOLDERS
Statement #1.A directs the LLM to follow a specific template for its output. This template will be used to coerce
the LLM's responses into a structure consistent with user formatting needs, which is useful when the LLM does
not know the target format. If the LLM already has knowledge of the format (such as a specific file type), the
Output Template pattern can be skipped and the user can simply specify the known format. However, there may
be cases, such as generating JavaScript Object Notation (JSON), where large variation exists in how data could
be represented within a format. In such cases the Output Template pattern helps ensure the LLM’s output meets
additional user constraints specified by the target format.
Statement #1.B makes the LLM aware that the template contains a set of placeholders that enable users to
explain how the output should be inserted into the template. They also allow users to target where information
should be inserted semantically. Placeholders can use formats (e.g., NAME) that allow an LLM to infer semantic
meaning and determine where output should be inserted (e.g., insert the person’s name in the NAME placeholder).
Placeholders also enable users to indicate what is not needed in the output, e.g., if a placeholder does not exist
for a component of the generated output that component can be omitted. Ideally, placeholders should use a format
(e.g., all caps or enclosure in brackets) commonly employed in text the LLM was trained on.
Statements #1.C and #1.D constrain the LLM so it does not arbitrarily rewrite the template or attempt to modify
it so all the output components can be inserted. These statements, however, may not preclude an LLM from
generating additional text before or after the template. In practice, LLMs typically follow the template, but it may be
4 The Output Template pattern was called the Template pattern in [White et al. 2023a]. We changed the name of this pattern in this paper to
better reflect its intent.
hard to eliminate additional text being generated beyond the template without further experimentation with prompt
wording.
Finally, statement #1.E provides the actual template the LLM should output its response in, such as an email reply
template.
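As an illustration of statement #1.E, the snippet below assembles a complete Output Template prompt around a hypothetical email reply template; the all-caps placeholder names (NAME, SUBJECT, etc.) are our own illustrative choices, not ones prescribed by the pattern.

# Hypothetical Output Template prompt built around an email reply template.
# The all-caps placeholders (NAME, SUBJECT, etc.) are illustrative choices.
template_prompt = (
    "I am going to provide a template for your output. "
    "CAPITALIZED WORDS are my placeholders for content. "
    "Try to fit the output into one or more of the placeholders that I list. "
    "Please preserve the formatting and overall template that I provide. "
    "This is the template:\n"
    "Dear NAME,\n"
    "Thank you for your message about SUBJECT. SUMMARY_OF_REPLY\n"
    "NEXT_STEPS\n"
    "Best regards,\n"
    "SENDER"
)
print(template_prompt)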
Structure and Key Ideas. The fundamental contextual statements associated with the Fact Check List pattern
are shown in the table below:
Contextual Statements
1.A Generate a set of facts that are contained in the output
1.B The set of facts should be inserted in the output at [a specific point]
1.C The set of facts should be the fundamental facts that could undermine the
veracity of the output if any of them are incorrect
Statement #1.A instructs an LLM to identify the facts that are contained within its output. The LLM should be
able to identify facts effectively since they are a well-understood concept and not impacted by the actual content,
i.e., the concept of a “fact” is domain-independent.
Statement #1.B tells the LLM the specific point in the output where the facts should be included. For example,
the facts could be included at the end or beginning of the output. Of course, other arrangements could be employed.
Statement #1.C expresses the idea that facts should be the ones most important to the overall veracity of the
statements, i.e., choose facts fundamental to the argument and not derived facts flowing from those facts. This
statement is crucial since it helps to scope the output to those facts most important to the veracity and not derived
statements that may be less important. Of course, this constraint could also be relaxed.
One point of variation in this pattern is where the facts are output. Given that the facts may be terms that the
user is not familiar with, it may be preferable if the list of facts comes after the output. This after-output presentation
ordering allows users to read and (attempt to) understand the statements before seeing what statements should
be checked. Users may also identify additional facts on their own before consulting the list of facts at the end of
the output that should be checked.
Example Implementation. A sample wording of the Fact Check List pattern is shown below:
“From now on, when you generate an answer, create a set of facts that the answer depends on that
should be fact-checked and list this set of facts at the end of your output. Only include facts related to
cybersecurity.”
Users may have expertise in some topics related to the question but not others. A fact check list can be tailored
to topics that users are not as experienced in or where there is the most risk. For example, in the prompt above,
the user is scoping the fact check list to security topics since these are likely important from a risk perspective and
are often poorly understood by software developers. Targeting the facts also reduces user cognitive burden by
potentially listing fewer items for investigation.
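A minimal sketch of applying this prompt and separating the fact list from the answer for review is shown below; it assumes the OpenAI Python SDK (v1.x), and the "Facts to check:" heading, model name, and sample question are illustrative additions rather than part of the pattern.

# Sketch: apply the Fact Check List pattern and split the trailing fact list off
# the answer for separate review. Assumes the OpenAI Python SDK (v1.x); the
# "Facts to check:" heading, model name, and question are illustrative.
from openai import OpenAI

client = OpenAI()

system = (
    "From now on, when you generate an answer, create a set of facts that the "
    "answer depends on that should be fact-checked and list this set of facts "
    "at the end of your output under the heading 'Facts to check:'. "
    "Only include facts related to cybersecurity."
)

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "How should I store user passwords in a web application?"},
    ],
)
answer, _, facts = reply.choices[0].message.content.partition("Facts to check:")
print("ANSWER:\n" + answer)
print("FACTS TO VERIFY:\n" + facts)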
Consequences. The following are a summary of considerations users should take into account when deciding
whether or how to apply the Fact Check List pattern.
Domain-specific utility. This pattern should be employed whenever users are not experts in the domain for
which they are generating output, which may occur when combining this prompt pattern with the Persona pattern
(page 11). For example, software developers reviewing code may benefit from security consideration suggestions.
In contrast, an expert on software architecture may identify errors in statements about the software structure and
may not need a fact check list for these outputs.
Integration with other patterns. Errors are possible in all LLM outputs, so Fact Check List is an effective
pattern to combine with other patterns, such as the Question Decomposition pattern (page 20). This combination
of prompt patterns enables a more comprehensive approach to verifying LLM outputs and enhancing the reliability
of generated content.
Fact verification process. A key aspect of the Fact Check List pattern is that users can inherently check the
list of facts against the output. In particular, users can directly compare a fact check list to the output to verify
the facts in the fact check list actually appear in the output. Users can also identify any omissions from the list.
Although a fact check list may also have errors, users often have sufficient knowledge and context to determine its
completeness and accuracy relative to the LLM’s output.
Applicability and limitations. One caveat of this pattern is that it only applies when the output type is amenable
to fact-checking. For example, this pattern works when asking ChatGPT to generate a Python “requirements.txt”
file since it lists the versions of libraries as facts that should be checked, which is handy as versions commonly
have errors. However, ChatGPT will refuse to generate a fact check list for a code sample and indicate that this is
something it cannot check, even though the code may have errors.
Contextual Statements
1. Within scope X, if there are alternative ways to accomplish the same thing, list
the best alternate approaches
2. (Optional) compare/contrast the pros and cons of each approach
3. (Optional) include the original way that I asked
4. (Optional) prompt me for which approach I would like to use
The initial portion of statement #1 (“within scope X”) scopes the interaction to a particular goal, topic, or bounds
on the questioning. The scope is the constraint(s) users place on alternative approaches. For example, the scope
could be “for implementation decisions” or “for the deployment of the application.” This scope ensures that any
alternatives fit within the boundaries or constraints to which users must adhere.
The next portion of statement #1 (“if there are alternative ways to accomplish the same thing, list the best
alternate approaches”) instructs an LLM to suggest alternatives. As with other prompt patterns, instruction
specificity can be increased or can include domain-specific contextual information. For example, this statement
could be scoped to “if there are alternative ways to accomplish the same thing with the software framework that I
am using” to prevent an LLM from suggesting alternatives that are inherently non-viable because they require too
many changes to other parts of an application.
Statement #2 optionally adds decision making criteria to the analysis, which is needed when users are not
aware of alternative approaches and may thus not be aware of why and when to choose an alternative. This
statement ensures the LLM provides users with the necessary rationale for alternative approaches.
Statement #3 optionally instructs an LLM to include the original approach given by the user for completeness
when evaluating the LLM's output.
Statement #4 optionally eliminates the need for users to copy/paste or manually enter an alternative approach if
one is selected.
Example Implementation. The following is an example prompt that generates, compares, and allows a user to
select one or more alternative approaches:
“Whenever I ask you to deploy an application to a specific cloud service, if there are alternative services
to accomplish the same thing with the same cloud service provider, list the best alternative services
and then compare/contrast the pros and cons of each approach with respect to cost, availability, and
maintenance effort and include the original way that I asked. Then ask me which approach I would like
to proceed with.”
This implementation of the Alternative Approaches pattern is specifically tailored for the context of software
engineering and focuses on deploying applications to cloud services. The prompt is intended to intercept places
where developers have made a cloud service selection without full awareness of alternative services that may
be priced more competitively or are easier to maintain. This prompt directs ChatGPT to list the best alternative
services that can accomplish the same task with the same cloud service provider (providing constraints on the
alternatives), as well as to compare and contrast the pros and cons of each approach.
Consequences. The following are a summary of considerations users should take into account when deciding
whether or how to apply the Alternative Approaches pattern.
Versatility and application. This pattern is broadly applicable, being effective in its generic form across diverse
tasks and challenges. Its adaptability allows users to explore multiple solutions to a problem, enhancing creativity
and problem-solving skills by encouraging users to consider a range of strategies they may not have been aware
of previously.
Domain-specific catalogs. Refinements of this pattern could be used to implement a standardized catalog of
acceptable alternatives within a specific domain, thereby guiding users towards the most relevant and effective
solutions. These catalogs could make it easier for users to navigate their options within a structured framework,
minimizing an otherwise overwhelming number of choices and ensuring that the alternatives presented are vetted
and viable.
Informed decision-making. By presenting users with an approved set of approaches—complete with insights
into their respective advantages and disadvantages—the Alternative Approaches pattern plays a crucial role in
facilitating informed decision-making. It educates users about the potential outcomes of their choices. It also
empowers users to make decisions that are best aligned with their goals and the specific nuances of their situation,
ultimately yielding more satisfactory outcomes.
Contextual Statements
1. When you are asked a question, follow these rules
A. Generate a number of additional questions that would help more accurately answer the question
B. Combine the answers to the individual questions to produce the final answer to the overall question
Rule A in statement #1 instructs an LLM to generate a number of additional questions that help answer the
original question more accurately. This step requires the LLM to (1) consider the context of the question, (2) identify
any information that may be missing or unclear, and (3) combine the answers to the additional questions to provide
context to help answer the overall question. By generating these additional questions, the LLM can help ensure its
ultimate answer is as complete and accurate as possible. This step also encourages critical thinking by users and
can help uncover new insights or approaches that may not have been considered initially, thereby yielding better
follow-on questions.
Rule B in statement #1 instructs an LLM to combine answers to individual questions to produce its ultimate
answer to the overall question. This step ensures all the information gathered from the individual questions is
incorporated into the final answer. By combining answers, the LLM can provide a more comprehensive and
accurate response to the original question. This step also helps ensure all relevant information is taken into
account and the final answer is not based on any single answer.
Example Implementation. The following is an example prompt that instructs an LLM to dissect the main question
into three sub-questions, whose answers are then combined to provide one response to the initial inquiry.
“When I ask you a question, generate three additional questions that would help you give a more
accurate answer. When I have answered the three questions, combine the answers to produce the
final answers to my original question.”
5 The Question Decomposition pattern was called the Cognitive Verifier pattern in [White et al. 2023a]. We changed the name of this pattern in
this paper to better reflect its intent.
This specific instance of the Question Decomposition pattern refines the original pattern by specifying a set
number of additional questions that the LLM should generate in response to a question. In this case, the prompt
instructs the LLM to generate three additional questions that help it answer the original question more accurately.
This specific number can be based on a user’s experience and/or willingness to provide follow-up information.
The following refinement to the prompt above provides more context for the amount of domain knowledge an
LLM can assume the user has to guide the creation of additional questions:
“When I ask you a question, generate three additional questions that would help you give a more
accurate answer. Assume that I know little about the topic that we are discussing and please define
any terms that are not general knowledge. When I have answered the three questions, combine the
answers to produce the final answers to my original question.”
This refinement also specifies that the user may not have a strong understanding of the topic being discussed,
so the LLM should define any terms that are not general knowledge. The goal is to ensure follow-up questions
are not only relevant and focused, but also accessible to the user, who may be unfamiliar with technical or
domain-specific terms. By providing clear and concise definitions, the LLM helps ensure its follow-up questions
are easy to understand and the final answer is accessible to users with varying levels of knowledge and expertise.
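The refined prompt above can be exercised in a short two-round exchange, as sketched below; it assumes the OpenAI Python SDK (v1.x), and the model name and sample question are placeholders.

# Sketch of the Question Decomposition flow: the LLM proposes three
# sub-questions, the user answers them, and the answers are combined into a
# final response. Assumes the OpenAI Python SDK (v1.x); the model name and
# sample question are placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": (
        "When I ask you a question, generate three additional questions that "
        "would help you give a more accurate answer. Assume that I know little "
        "about the topic that we are discussing and please define any terms "
        "that are not general knowledge. When I have answered the three "
        "questions, combine the answers to produce the final answers to my "
        "original question.")},
    {"role": "user", "content": "How should I secure the REST API for my web application?"},
]

sub_questions = client.chat.completions.create(model="gpt-4", messages=messages)
print(sub_questions.choices[0].message.content)

messages.append({"role": "assistant", "content": sub_questions.choices[0].message.content})
messages.append({"role": "user", "content": input("Your answers to the three questions: ")})

final = client.chat.completions.create(model="gpt-4", messages=messages)
print(final.choices[0].message.content)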
Consequences. The following are a summary of considerations users should take into account when deciding
whether or how to apply the Question Decomposition pattern.
Defining question limits: precision vs. flexibility. Specifying an exact number of questions an LLM should
generate ensures the user interaction is concise and focused, making it more likely that users can provide all the
necessary information without feeling overburdened. This precision helps keep the dialogue within a manageable
scope, ensuring efficiency and relevance in the exchange. However, setting a rigid limit on the number of questions
may inadvertently exclude critical follow-up inquiries. An invaluable N +1 question that could provide key insights or
clarification might be left unasked, potentially compromising the completeness of information exchanged between
a user and an LLM.
Allowing LLM discretion: adaptability vs. overload. Giving an LLM the flexibility to determine the number of
questions—or to ask additional questions as needed—introduces adaptability into the conversation. This approach
can yield a more thorough exploration of the topic, as the LLM can adapt its questioning based on the evolving
context of the dialogue.
However, this flexibility increases the risk of information overload for the user. Without a predefined limit,
LLMs might generate a large number of follow-up questions, which could overwhelm users and detract from the
effectiveness of the interactions. Users may therefore find it hard to keep up with the demand for information,
leading to fatigue or disengagement.
Motivation. LLMs may sometimes refuse to answer a question, either because they lack the required knowledge
or because they do not understand the way the question is phrased. This outcome can be frustrating for users
who seek answers. In some situations, therefore, the Refusal Breaker pattern can help users find a way to either
rephrase their questions or ask different questions the LLM is better equipped to answer.
The Refusal Breaker pattern helps users be more productive and get answers to questions that are within
the capabilities of an LLM to answer, but only if worded correctly. For example, users could ask for clarification
on why their questions were refused since there may be specific reasons why LLMs could not provide answers.
Another approach is to ask a different question that may be within an LLM’s knowledge cut-off or expertise. All
these approaches require user intervention and knowledge of how to (attempt to) bypass the refusal.
Structure and Key Ideas. The fundamental contextual statements associated with the Refusal Breaker pattern
are shown in the table below:
Contextual Statements
1. Whenever you can’t answer a question
A. Explain why you can’t answer the question
B. Provide one or more alternative wordings of the question that you can answer
The first part of statement #1 is a preamble that indicates the following set of instructions should only be
applied when a question cannot be answered (i.e., users do not want to trigger these instructions under normal
circumstances). The following instruction #1.A tells the LLM to explain why the question cannot be answered,
which may include assumptions, constraints, or misunderstandings that users can leverage to strategize about
how to reword their questions.
Although just the explanations of why questions cannot be answered may help users overcome refusals, the
Refusal Breaker pattern adds instruction #1.B that asks an LLM to suggest acceptable rewordings. This pattern
combines reasoning with suggested rewordings to improve the chances that users will find paths around whatever
roadblocks are preventing their questions from being answered.
Example Implementation. The Refusal Breaker pattern can be implemented and applied with the following
phrasing:
“Whenever you can’t answer a question, explain why and provide one or more alternate wordings of
the question that you can answer so that I can improve my questions.”
A sample conversation after submitting the prompt above to an LLM is shown below:
User: What is the meaning of life?
ChatGPT: As an AI language model, I do not have personal beliefs or opinions. The meaning of life
is a complex philosophical question that has been pondered by humans for centuries and can vary
greatly from person to person. It may be more productive to rephrase the question in a way that can be
answered by information and knowledge, such as “What are some philosophical perspectives on the
meaning of life?” or “What are some common beliefs about the purpose of life?”
Our experiments with ChatGPT indicate that altering the phrasing of a question or prompting an LLM to engage
in a role-playing game can also potentially serve as strategies to overcome LLM response refusals.
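For users who script their LLM interactions, the Refusal Breaker prompt can be installed once at the start of a session. The following is a minimal sketch that assumes the OpenAI Python client and an illustrative model name; it simply prepends the pattern as a system message so that any refusal is accompanied by an explanation and reworded alternatives:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# The Refusal Breaker pattern, installed once as a session-level instruction.
REFUSAL_BREAKER = (
    "Whenever you can't answer a question, explain why and provide one or more "
    "alternate wordings of the question that you can answer so that I can "
    "improve my questions."
)

def ask(question: str) -> str:
    """Send a single question with the Refusal Breaker prompt prepended."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name; any conversational LLM applies
        messages=[
            {"role": "system", "content": REFUSAL_BREAKER},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What is the meaning of life?"))
```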
Consequences. The following is a summary of considerations users should take into account when deciding
whether or how to apply the Refusal Breaker pattern.
Potential for misuse. This pattern provides a potential path for misuse, e.g., it could be used to generate phishing
emails or perform other actions that violate LLM policy filters. In such cases, organizations, parents, and/or other
stakeholders may therefore need to restrict LLM usage accordingly. The first step in safeguarding LLM usage is to
understand where the guardrails are. In future work, a complement of this pattern may be developed to hide the
underlying prompt information and rationale from users to prevent discovery.
Ethical and policy considerations. This pattern has been used with some LLMs to circumvent the underlying prompts
that program the LLM to suppress harmful output generation. For example, many LLMs have policy filters that
refuse to answer questions regarding illegal activities, such as “How do I create a false alibi for a crime?” [Yang
et al. 2023]. Caution should therefore be exercised when applying this pattern to ensure it is used ethically and
responsibly.
Limitations and unpredictability of outcomes. Although the rationale and alternate rewordings are generated,
there is no assurance that users will be able to overcome the refusal. The alternate questions that are generated
may not be of interest to users or be helpful in answering their original questions. The Refusal Breaker pattern
mainly helps users determine what LLMs can or cannot answer, but provides no guarantee they will answer
semantically equivalent variations of the original question.
Contextual Statements
1. I would like you to ask me questions to achieve objective X
2. You should ask questions until condition Y is met or to achieve this goal (alternatively, forever)
3. (Optional) ask me the questions one at a time, two at a time, etc.
A prompt for a flipped interaction should always specify the goal of the interaction. Statement #1 (i.e., get an LLM
to ask questions to achieve objective “X”) communicates this goal to the LLM. Equally important is that questions
should focus on a particular topic or outcome. By providing the goal to an LLM, it can understand what should be
accomplished through the interaction and tailor its questions accordingly. This “inversion of control” enables more
focused and efficient interactions since an LLM only asks questions it deems relevant to achieving the specified
goal.
Statement #2 provides the context for how long the interaction should occur. A flipped interaction can be
terminated with a response like “stop asking questions.” It is often better, however, to scope the interaction to a
reasonable length or only as far as is needed to reach the goal. Goals can be surprisingly open-ended and LLMs
will continue to work towards a goal by asking questions, as shown in the motivating example above where the
LLM should continue asking questions until it has enough information to generate a Python script.
Statement #3 can optionally be applied to improve usability by limiting (or expanding) the number of questions
that an LLM generates per cycle. By default, an LLM may generate multiple questions per iteration. If a precise
number/format for the questioning is not specified, the questioning will be semi-random and may lead to one-at-a-
time questions, ten-at-a-time questions, etc. A prompt can thus be tailored to include the number of questions
asked at a time, the order of the questions, and/or any other formatting/ordering considerations to facilitate user
interaction.
Example Implementation. A sample prompt for a flipped interaction with ChatGPT is shown below:
“From now on, I would like you to ask me questions to deploy a Python application on the AWS cloud
platform. When you have enough information to deploy the application, create a Python script to
automate the deployment.”
In general, an LLM will produce better output if it receives better context from a user, i.e., if the prompt is more
specific regarding the constraints and the information to collect. For instance, the example prompt above could
provide a menu of possible AWS services (such as Lambda or EC2) with which to deploy the application. In other
cases, an LLM may be permitted to simply make appropriate choices on its own for things the user makes no
explicit decisions about.
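Readers who automate this pattern can run the flipped interaction as a simple loop: the pattern prompt is sent once, the LLM asks its questions, and the user's typed answers are appended to the conversation until the LLM produces the deployment script and the user ends the session. The following is a minimal sketch that assumes the OpenAI Python client and an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

FLIPPED_PROMPT = (
    "From now on, I would like you to ask me questions to deploy a Python "
    "application on the AWS cloud platform. When you have enough information "
    "to deploy the application, create a Python script to automate the deployment."
)

messages = [{"role": "user", "content": FLIPPED_PROMPT}]

# Let the LLM drive the conversation: it asks, the user answers. The user types
# "done" once the LLM has produced the deployment script, ending the session.
while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    content = reply.choices[0].message.content
    print(content)
    messages.append({"role": "assistant", "content": content})
    answer = input("> ")
    if answer.strip().lower() == "done":
        break
    messages.append({"role": "user", "content": answer})
```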
Consequences. The following is a summary of considerations users should take into account when deciding
whether or how to apply the Flipped Interaction pattern.
Prompt openness vs. specificity. One consideration when designing a prompt based on this pattern is how
much to dictate to the LLM regarding what information to collect prior to termination. In the example above, the
flipped interaction is open-ended and can vary significantly in the final generated artifact. This open-endedness
makes the prompt generic and reusable, but the LLM may ask additional questions that could be skipped if
additional context is given.
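As an illustration of supplying that additional context, the variant below narrows the example prompt with a menu of AWS services and a few assumed application details; these specifics are hypothetical choices for illustration, not part of the pattern itself.

```python
# A hypothetical, more constrained variant of the flipped-interaction prompt.
# Naming the candidate services and the application's shape up front lets the
# LLM skip questions the open-ended version would otherwise need to ask.
CONSTRAINED_FLIPPED_PROMPT = (
    "From now on, I would like you to ask me questions to deploy a Python "
    "application on the AWS cloud platform, using either Lambda or EC2. "
    "Assume the application is a single Flask service with a requirements.txt "
    "file. When you have enough information, create a Python script to "
    "automate the deployment."
)
```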
Phrasing and question flow. Another consideration with the Flipped Interaction pattern is that users may need
to experiment with the precise phrasing to get an LLM to ask the questions in the appropriate number and flow to
best suit a given task. For example, users may need to determine how many questions an LLM can ask at a time
and how to best tailor the sequence of questions to perform the task.
Precision in information provision. If specific requirements are known in advance, it is better to inject them
into the prompt rather than hoping the LLM will somehow obtain the needed information. Otherwise, an LLM may
non-deterministically decide whether to prompt the user for the information or make an educated guess as to an
appropriate value. For example, users can state they would like to deploy an application to Amazon AWS EC2,
rather than simply to “the cloud,” which would require more interactions to narrow down the deployment target. The
more precise the initial information, therefore, the better an LLM can use the limited questions that a user may be
willing to answer to obtain information the LLM requires to improve its output.
User knowledge, engagement, and control. When developing prompts for flipped interactions, it is important
to consider the level of user knowledge, engagement, and control. If the aim is to accomplish the goal with as
little user interaction as possible (i.e., minimal control), that should be stated explicitly. Conversely, if the aim is
to ensure users are aware of all key decisions and confirm them (i.e., maximum engagement), that should also
be stated explicitly. Likewise, if users are expected to have minimal knowledge and should have the questions
targeted at their level of expertise, such information should be engineered into the prompt.
Contextual Statements
1. I would like you to generate output forever, X output(s) at a time.
2. (Optional) here is how to use the input I provide between outputs.
3. (Optional) stop when I ask you to.
Statement #1 specifies the user wants an LLM to generate output indefinitely, which effectively conveys the
information that the same prompt should be reused repeatedly. By specifying the number of outputs that should be
generated at a time (i.e., “X outputs at a time”), the user can rate-limit the generation. Rate limiting is particularly
important if there is a risk that the output will exceed the length limitations of the LLM for a single output.
Statement #2 provides optional instructions for how to use the input provided by the user between outputs.
By specifying how additional user inputs between outputs can be provided and leveraged, users can create a
prompting strategy that incorporates their feedback in the context of the original prompt. The original prompt is still
in the context of the generation, but each user input between generation steps is incorporated into the original
prompt to refine the output using prescribed rules.
Statement #3 provides an optional way for the user to stop the output generation process. This step is not
always needed, but can be useful when there may be ambiguity regarding whether user-provided input
between outputs is meant as a refinement for the next generation or a command to stop. For example, an explicit
stop phrase could be created if the user was generating data related to road signs, where the user might want to
enter a refinement of the generation like “stop” to indicate that a stop sign should be added to the output.
Example Implementation. The following is a sample infinite generation prompt for producing a series of URLs:
“From now on, I want you to generate a name and job until I say stop. I am going to provide a template
for your output. Everything in all caps is a placeholder. Any time that you generate text, try to fit it into
one of the placeholders that I list. Please preserve the formatting and overall template that I provide:
https://myapi.com/NAME/profile/JOB”
This prompt combines the functionality of both the Infinite Generation pattern and the Output Template pattern
(page 16). The user requests the LLM to generate a name and job title continuously until explicitly told to “stop.”
The generated outputs are then formatted into the template provided, which includes placeholders for the name
and job title.
By applying the Infinite Generation pattern, users receive multiple outputs without having to reenter the template
repeatedly. Likewise, the Output Template pattern is applied to provide a consistent format for the LLM outputs.
Together, these two patterns ensure a streamlined and efficient process for generating structured data, allowing
the automatic population of predefined templates with dynamically generated content. This combination enhances
productivity by minimizing manual intervention and ensuring output consistency, making it ideal for tasks requiring
repetitive data generation within a specific format.
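The combined prompt can also be driven programmatically: it is sent once, and each subsequent turn either requests the next batch of outputs or forwards a user refinement, until the user types “stop”. The following is a minimal sketch that assumes the OpenAI Python client and an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

INFINITE_PROMPT = (
    "From now on, I want you to generate a name and job until I say stop. "
    "I am going to provide a template for your output. Everything in all caps "
    "is a placeholder. Any time that you generate text, try to fit it into one "
    "of the placeholders that I list. Please preserve the formatting and overall "
    "template that I provide: https://myapi.com/NAME/profile/JOB"
)

messages = [{"role": "user", "content": INFINITE_PROMPT}]

# Each iteration yields one more templated output; any input other than "stop"
# is passed along as a refinement of the next generation.
while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    content = reply.choices[0].message.content
    print(content)
    messages.append({"role": "assistant", "content": content})
    user_input = input("refinement (or 'stop'): ").strip()
    if user_input.lower() == "stop":
        break
    messages.append({"role": "user", "content": user_input or "next"})
```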
Consequences. The following is a summary of considerations users should take into account when deciding
whether or how to apply the Infinite Generation pattern.
Gradual loss of initial intent. In conversational LLMs, the input to the model at each time step is the previous
output and the new user input. Although the details of what is preserved and reintroduced in the next output cycle
are LLM- and implementation-dependent, they are often limited in scope. The LLM is therefore constantly being fed
only its most recent outputs and inputs, which can result in the model losing track of the original prompt instructions
over time once they fall outside the scope of what it is being provided as input.
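A minimal sketch of why this happens, assuming a hypothetical client that truncates the conversation to its most recent turns: once only those turns are re-sent each cycle, the original instructions eventually fall outside of what the model sees.

```python
# Illustrative only: many clients keep just the most recent turns to respect
# the model's context window. Once the original prompt falls outside that
# window, the model no longer "sees" its instructions.

def build_context(history: list[str], max_turns: int = 6) -> list[str]:
    """Keep only the last `max_turns` entries of the conversation."""
    return history[-max_turns:]

history = ["PROMPT: generate a name and job in the given template, forever"]
for i in range(10):
    history.append(f"output {i}")
    history.append(f"user refinement {i}")

context = build_context(history)
# Prints False: the original instructions have been dropped from the context.
print(any(entry.startswith("PROMPT") for entry in context))
```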
The need for continuous oversight. Most LLMs have a finite context window, meaning they can only “remember”
or consider a certain amount of text from their immediate past output. As additional outputs are generated,
therefore, the context surrounding prompts may fade, leading to LLMs deviating from their intended behavior. It is
essential to monitor the outputs produced by LLMs to ensure they still adhere to the desired behavior and provide
corrective feedback to LLMs if/when necessary.
The challenge of redundancy. Another issue to consider is that an LLM may generate highly repetitive outputs,
which may not be desired if users find this repetition tedious and error-prone to process. This challenge arises
primarily because LLMs rely on what they learned from their training data, which can lead them to recycle
phrases, ideas, or even entire sentences over long stretches of text generation. This repetition not
only affects the novelty and readability of the generated content but also may diminish user experience since the
content may seem monotonous or lack depth.
—The goal of prompts is to communicate knowledge in a clear and concise way to conversational LLM users, who
may or may not be computer scientists or programmers. The patterns community has long emphasized creating
an approachable format that communicates knowledge clearly to a diverse target audience.
—It is possible to phrase a prompt in many different ways, most commonly by typing phrases into a terminal using
free-form natural language. It is hard, however, to define a grammar that accurately and completely expresses
all the nuanced ways that components of a prompt could be expressed in text or symbols.
—Prompts fundamentally convey ideas to a conversational LLM and are not simply the production of tokens for
input. In particular, an idea built into a prompt pattern can be communicated in many ways and its expression
should be at a higher level than the underlying tokens representing the idea.
—It is possible to program an LLM to introduce novel semantics for words and phrases that create new ways for
communicating an idea. In contrast, grammars may not easily represent ideas that can be expressed through
completely new symbology or languages of which the grammar designer was not aware.