Best Practices For Prompt Engineering With OpenAI API - OpenAI Help Center
💡 If you're just getting started with the OpenAI API, we recommend reading the Introduction
and Quickstart tutorials first.
1. Use the latest model
For best results, we generally recommend using the latest, most capable models. Newer models tend to be easier to prompt engineer.
2. Put instructions at the beginning of the prompt and use ### or """ to
separate the instruction and context
Less effective ❌ :
Summarize the text below as a bullet point list of the most important points.
{text input here}
Better ✅ :
Summarize the text below as a bullet point list of the most important points.
Text: """
{text input here}
"""
3. Be specific, descriptive and as detailed as possible about the desired
context, outcome, length, format, style, etc.
Less effective ❌ :
Write a poem about OpenAI.
Better ✅ :
Write a short inspiring poem about OpenAI, focusing on the recent DALL-E product launch
(DALL-E is a text to image ML model) in the style of a {famous poet}
4. Articulate the desired output format through examples (example 1,
example 2).
Less effective ❌ :
Extract the entities mentioned in the text below. Extract the following 4 entity types: company names, people names, specific topics and general themes.
Text: {text}
Show and tell: the models respond better when shown specific format requirements. This also
makes it easier to programmatically parse multiple outputs reliably.
Better ✅ :
Extract the important entities mentioned in the text below. First extract all company names, then extract all people names, then extract specific topics which fit the content and finally extract general overarching themes
Desired format:
Company names: <comma_separated_list_of_company_names>
People names: -||-
Specific topics: -||-
General themes: -||-
Text: {text}
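A fixed output template like the one above is what makes the completion machine-parseable. As an illustrative sketch (the `parse_entities` helper and the sample completion are assumptions, not part of the API), the labelled lines can be split back into a dictionary:

```python
import re

def parse_entities(completion: str) -> dict:
    """Parse lines like 'Company names: Stripe, OpenAI' from a completion
    that followed the 'Desired format' template."""
    entities = {}
    for line in completion.splitlines():
        match = re.match(r"^([A-Za-z ]+):\s*(.*)$", line.strip())
        if match:
            label, values = match.groups()
            entities[label] = [v.strip() for v in values.split(",") if v.strip()]
    return entities

sample = "Company names: Stripe, OpenAI\nPeople names: Patrick Collison"
print(parse_entities(sample))
# {'Company names': ['Stripe', 'OpenAI'], 'People names': ['Patrick Collison']}
```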
5. Start with zero-shot, then few-shot; if neither works, then fine-tune
Zero-shot ✅ :
Extract keywords from the below text.
Text: {text}
Keywords:
Few-shot (provide a couple of examples) ✅ :
Extract keywords from the corresponding texts below.
Text 1: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.
Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications
##
Text 2: OpenAI has trained cutting-edge language models that are very good at understanding and generating text. Our API provides access to these models and can be used to solve virtually any task that involves processing language.
Keywords 2: OpenAI, language models, text processing, API.
##
Text 3: {text}
Keywords 3:
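A few-shot prompt like the one above can be assembled programmatically from example pairs. This is a minimal sketch; `build_few_shot_prompt` is a hypothetical helper, and the `##` separator simply mirrors the template shown above:

```python
def build_few_shot_prompt(instruction: str, examples: list, query: str) -> str:
    """Assemble a few-shot prompt: instruction, numbered worked examples
    separated by '##', then a final unanswered slot for the model to fill."""
    parts = [instruction]
    for i, (text, keywords) in enumerate(examples, start=1):
        parts.append(f"Text {i}: {text}\nKeywords {i}: {keywords}\n##")
    n = len(examples) + 1
    parts.append(f"Text {n}: {query}\nKeywords {n}:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Extract keywords from the corresponding texts below.",
    [("Stripe provides payment APIs.", "Stripe, payment processing, APIs")],
    "{text}",
)
print(prompt)
```

Adding or removing worked examples is then just a change to the `examples` list.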
6. Reduce "fluffy" and imprecise descriptions
Less effective ❌ :
The description for this product should be fairly short, a few sentences only, and not too much more.
Better ✅ :
Use a 3 to 5 sentence paragraph to describe this product.
7. Instead of just saying what not to do, say what to do instead
Less effective ❌ :
The following is a conversation between an Agent and a Customer. DO NOT ASK USERNAME OR PASSWORD. DO NOT REPEAT.
Customer: I can't log in to my account.
Agent:
Better ✅ :
The following is a conversation between an Agent and a Customer. The agent will attempt to diagnose the problem and suggest a solution, whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article www.samplewebsite.com/help/faq
Customer: I can't log in to my account.
Agent:
8. Code Generation Specific - Use "leading words" to nudge the model toward a particular pattern
In the code example below, adding "import" hints to the model that it should start writing in
Python. (Similarly, "SELECT" is a good hint for the start of a SQL statement.)
Better ✅ :
# Write a simple python function that
# 1. Asks me for a number in miles
# 2. Converts miles to kilometers
import
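Appending a leading word is just string concatenation; a minimal sketch (the helper name `with_leading_word` is an assumption, not an API):

```python
def with_leading_word(prompt: str, leading_word: str) -> str:
    """Append a 'leading word' so the completion is nudged to continue in
    the intended pattern (e.g. 'import' for Python, 'SELECT' for SQL)."""
    return f"{prompt.rstrip()}\n\n{leading_word}"

sql_prompt = with_leading_word(
    "-- Write a query returning the ten most recent orders", "SELECT"
)
print(sql_prompt)
```

The model's completion then continues from the leading word, so remember to prepend it back when assembling the final output.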
Parameters
Generally, we find that model and temperature are the most commonly used parameters to alter
the model output.
1. model - Higher performance models are more expensive and have higher latency.
2. temperature - A measure of how often the model outputs a less likely token. The higher the
temperature, the more random (and usually more creative) the output. This, however, is not the
same as "truthfulness". For most factual use cases, such as data extraction and truthful
Q&A, a temperature of 0 is best.
3. max_tokens (maximum length) - Does not control the length of the output, but sets a hard
cutoff for token generation. Ideally you won't hit this limit often, as the model will stop either
when it considers the output finished, or when it hits a stop sequence you defined.
4. stop (stop sequences) - A set of characters (tokens) that, when generated, will cause the
text generation to stop.
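The four parameters above can be sketched as a request-parameter dictionary. The model name and values below are illustrative assumptions, and the exact client method for sending them depends on your SDK version:

```python
# Illustrative parameters for a completion request; values are examples only.
request_params = {
    "model": "gpt-3.5-turbo-instruct",  # higher-performance models cost more and add latency
    "temperature": 0,                   # 0 suits factual/extraction use cases
    "max_tokens": 256,                  # hard cutoff on generation, not a target length
    "stop": ["\n\n"],                   # generation halts when this sequence is produced
}
print(request_params["model"])
```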
For other parameter descriptions see the API reference.
Additional Resources
If you're interested in additional resources, we recommend:
Guides
Text completion - learn how to generate or edit text using our models
Code completion - explore prompt engineering for Codex
Fine-tuning - learn how to train a custom model for your use case
Embeddings - learn how to search, classify, and compare text
Moderation
OpenAI cookbook repo - contains example code and prompts for accomplishing common
tasks with the API, including Question-answering with Embeddings
Community Forum