STRATEGIES FOR GETTING BETTER RESULTS
1) Write Clear Instructions
These models can't read your mind. If outputs are too
long, ask for brief replies.
If outputs are too simple, ask for expert-level writing.
If you dislike the format, demonstrate the format you'd
like to see.
The less the model has to guess at what you want, the
more likely you'll get it.
TACTICS:
• Include details in your query to get more relevant
answers.
• Ask the model to adopt a persona.
• Use delimiters to clearly indicate distinct parts of the
input.
• Specify the steps required to complete a task.
• Provide examples.
• Specify the desired length of the output.
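Taken together, these tactics might look like the following sketch. It assumes the OpenAI Python client; the model name and the article text are placeholders.

```python
# Sketch only: persona, delimiters, explicit steps, and desired length
# combined in one request. Model name and article are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article = "..."  # the text to be summarized

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # Persona: tell the model who it should write as.
        {"role": "system",
         "content": "You are a senior technical editor who writes concise, expert-level prose."},
        # Details, steps, desired length, and delimiters around the input.
        {"role": "user",
         "content": (
             "Summarize the article delimited by triple quotes.\n"
             "Step 1: List the three main claims as bullets.\n"
             "Step 2: Write a single summary paragraph of about 80 words.\n"
             f'"""{article}"""'
         )},
    ],
)
print(response.choices[0].message.content)
```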
2) Provide Reference Text
Language models can confidently invent fake answers,
especially when asked about esoteric topics or for
citations and URLs.
In the same way that a sheet of notes can help a student
do better on a test, providing reference text to these
models can help in answering with fewer fabrications.
TACTICS:
• Instruct the model to answer using a reference text.
• Instruct the model to answer with citations from a
reference text.
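Both tactics might be combined as in this sketch; the OpenAI Python client is assumed, and the document, question, and model name are placeholders.

```python
# Sketch only: answer from a delimited reference text and cite the
# passages used; refuse when the text does not contain the answer.
from openai import OpenAI

client = OpenAI()

document = "..."  # the reference text
question = "..."  # the user's question

prompt = (
    "Use the document delimited by triple quotes to answer the question. "
    "Quote the passages you relied on in your answer. If the answer "
    'cannot be found in the document, write "I could not find an answer."\n\n'
    f'"""{document}"""\n\nQuestion: {question}'
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```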
3) Split Complex Tasks into Simpler Subtasks
Just as it is good practice in software engineering to
decompose a complex system into a set of modular
components, the same is true of tasks submitted to a
language model.
Complex tasks tend to have higher error rates than
simpler tasks.
Furthermore, complex tasks can often be redefined as a
workflow of simpler tasks in which the outputs of earlier
tasks are used to construct the inputs to later tasks.
TACTICS:
• Use intent classification to identify the most relevant
instructions for a user query.
• For dialogue applications that require very long
conversations, summarize or filter previous dialogue.
• Summarize long documents piecewise and construct a
full summary recursively.
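The last tactic might be implemented as in this sketch of recursive piecewise summarization. The OpenAI Python client is assumed; the chunk size, model name, and helper function names are illustrative choices, not a prescribed design.

```python
# Sketch only: summarize a long document in chunks, then summarize the
# summaries. Outputs of earlier subtasks feed the final subtask.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # Hypothetical helper: one prompt in, one completion out.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def summarize_long_document(text: str, chunk_chars: int = 8000) -> str:
    # Naive fixed-size chunking; real systems often split on sections.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partial = [ask(f'Summarize the following text:\n"""{chunk}"""') for chunk in chunks]
    combined = "\n\n".join(partial)
    return ask(f'Combine these partial summaries into one coherent summary:\n"""{combined}"""')
```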
4) Give the Model Time to Think
If asked to multiply 17 by 28, you might not know it
instantly, but can still work it out with time.
Similarly, models make more reasoning errors when trying
to answer right away, rather than taking time to work out
an answer.
Asking for a "chain of thought" before an answer can help
the model reason its way toward correct answers more
reliably.
TACTICS:
• Instruct the model to work out its own solution before
rushing to a conclusion.
• Use inner monologue or a sequence of queries to hide
the model's reasoning process.
• Ask the model if it missed anything on previous
passes.
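The first of these tactics might be prompted as in this sketch; the OpenAI Python client is assumed, and the problem, the student's answer, and the model name are placeholders.

```python
# Sketch only: make the model derive its own solution before it judges
# the student's answer, instead of evaluating the answer immediately.
from openai import OpenAI

client = OpenAI()

problem = "What is 17 multiplied by 28?"
student_answer = "486"  # deliberately wrong for illustration (17 * 28 = 476)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": (
             "First work out your own solution to the problem, step by step. "
             "Then compare your solution to the student's answer, and only "
             "then state whether the student is correct."
         )},
        {"role": "user",
         "content": f"Problem: {problem}\nStudent's answer: {student_answer}"},
    ],
)
print(response.choices[0].message.content)
```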
5) Use External Tools
Compensate for the weaknesses of the model by feeding it
the outputs of other tools.
For example, a text retrieval system (sometimes called
RAG or retrieval augmented generation) can tell the model
about relevant documents.
A code execution engine like OpenAI's Code Interpreter
can help the model do math and run code.
If a task can be done more reliably or efficiently by a tool
rather than by a language model, offload it to get the
best of both.
TACTICS:
• Use embeddings-based search to implement efficient
knowledge retrieval.
• Use code execution to perform more accurate
calculations or call external APIs.
• Give the model access to specific functions.
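The first tactic might look like this sketch of embeddings-based retrieval. It assumes the OpenAI embeddings endpoint and NumPy; the model names, corpus, and query are placeholders.

```python
# Sketch only: embed a small corpus and a query, rank documents by
# cosine similarity, and paste the best match into the prompt.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    result = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(result.data[0].embedding)

docs = ["First document...", "Second document..."]  # placeholder corpus
doc_vecs = [embed(d) for d in docs]

query = "..."  # placeholder question
q = embed(query)

# Cosine similarity between the query and each document vector.
scores = [v @ q / (np.linalg.norm(v) * np.linalg.norm(q)) for v in doc_vecs]
best = docs[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user",
               "content": f'Answer using this document:\n"""{best}"""\n\n{query}'}],
)
print(answer.choices[0].message.content)
```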
6) Test Changes Systematically
Improving performance is easier if you can measure it.
In some cases a modification to a prompt will achieve
better performance on a few isolated examples but lead to
worse overall performance on a more representative set of
examples.
Therefore, to be sure that a change is net positive to
performance, it may be necessary to define a
comprehensive test suite (also known as an "eval").
TACTIC:
• Evaluate model outputs with reference to gold-
standard answers.
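Such an eval might start as small as this sketch; the OpenAI Python client is assumed, and the test cases and exact-match scoring are illustrative only, since real evals often need fuzzier comparison or model-based grading.

```python
# Sketch only: run each test question through the model and score the
# output against a gold-standard answer by exact match.
from openai import OpenAI

client = OpenAI()

eval_set = [
    {"question": "What is 17 multiplied by 28?", "gold": "476"},
    # ... more cases
]

def run_eval() -> float:
    correct = 0
    for case in eval_set:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user",
                       "content": case["question"] + " Answer with only the number."}],
        )
        output = response.choices[0].message.content.strip()
        correct += output == case["gold"]
    return correct / len(eval_set)

print(f"accuracy: {run_eval():.0%}")
```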