Prompt Engineering
What is a Prompt?
A prompt, in the context of language models such as ChatGPT, is the
initial input or stimulus provided to the model to generate a response. It is
the starting point, the query given to the model to produce text.
Prompts vary in length and complexity, ranging from simple
questions or statements to detailed descriptions or scenarios.
Example:
Let’s say you’re making spaghetti marinara for dinner. Sauce from a jar
is perfectly fine. But what if you buy your tomatoes and basil from the
farmers market to make your own sauce? Chances are it will taste a lot
better. And what if you grow your own ingredients in your garden and
make your own fresh pasta? A whole new level of savory deliciousness.
Just as better ingredients can make for a better dinner, better inputs into
a generative AI (gen AI) model can make for better results. These inputs
are called prompts, and the practice of writing them is called prompt
engineering.
Customer Support:
Crafting prompts that draft clear, consistent replies to customer
queries.
Using the model to classify and route incoming support requests.
Data Analysis:
Formulating prompts to generate insights and summaries from
datasets.
Using the model for exploratory data analysis and hypothesis
generation.
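As a rough illustration of formulating a data-analysis prompt, the helper below assembles a dataset summary into a single prompt string. The function name and template wording are assumptions for this sketch, not a fixed API.

```python
# Illustrative sketch: turn column names and sample rows into an analysis prompt.
def build_analysis_prompt(columns, sample_rows, question):
    """Assemble a prompt asking the model to analyze tabular data."""
    header = ", ".join(columns)
    rows = "\n".join(", ".join(str(v) for v in row) for row in sample_rows)
    return (
        "You are a data analyst. Given the dataset below, "
        f"{question}\n\n"
        f"Columns: {header}\n"
        f"Sample rows:\n{rows}"
    )

prompt = build_analysis_prompt(
    ["region", "sales"],
    [["North", 1200], ["South", 950]],
    "summarize the key trends and suggest one hypothesis to test.",
)
```

The resulting string would then be sent to the model as a single prompt.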
Prompt chaining: The prompter splits a complex task into smaller (and
easier) subtasks, then uses the generative AI's outputs to accomplish the
overarching task. This method can improve reliability and consistency
for some of the most complicated tasks.
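The chaining idea above can be sketched in a few lines: each subtask gets its own prompt, and the model's output for one step becomes input to the next. Here `run_model` is a placeholder standing in for a real generative AI call (an actual implementation would invoke an API).

```python
# Sketch of prompt chaining: split a task into subtasks and feed each
# model output into the next prompt.

def run_model(prompt):
    # Placeholder for a real LLM API call; returns a dummy string here.
    return f"<model output for: {prompt[:40]}...>"

def summarize_then_translate(document):
    # Subtask 1: summarize the document.
    summary = run_model(f"Summarize the following text:\n{document}")
    # Subtask 2: the first output becomes input to the second prompt.
    translation = run_model(f"Translate this summary into French:\n{summary}")
    return translation

result = summarize_then_translate("Quarterly sales grew 8% in the North region.")
```

Because each step is small and checkable, failures are easier to localize than with one monolithic prompt.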
Structure the prompt: start by defining the model's role, then give
context or input data, and finish with the instruction.
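The role → context → instruction structure can be captured in a small template function. This is a minimal sketch; the separators and field names are assumptions, not a required format.

```python
# Minimal sketch of the role -> context -> instruction prompt structure.
def structured_prompt(role, context, instruction):
    return f"{role}\n\nContext:\n{context}\n\nInstruction:\n{instruction}"

prompt = structured_prompt(
    role="You are a helpful customer-support agent.",
    context="The customer's order #1042 shipped two days late.",
    instruction="Draft a short, apologetic reply offering a discount code.",
)
```

Keeping the three parts explicit makes prompts easier to reuse and review.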
Use specific, varied examples to help the model narrow its focus and
generate more accurate results.
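One common way to supply specific, varied examples is a few-shot prompt: a handful of labeled input/output pairs precede the new input. The sentiment-classification task below is just an illustration of the pattern.

```python
# Sketch of a few-shot prompt: varied labeled examples narrow the model's focus.
EXAMPLES = [
    ("The package arrived broken.", "negative"),
    ("Delivery was fast and the item works great!", "positive"),
    ("It's fine, nothing special.", "neutral"),
]

def few_shot_prompt(new_input):
    shots = "\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in EXAMPLES
    )
    return (
        "Classify the sentiment of each review.\n\n"
        f"{shots}\nReview: {new_input}\nSentiment:"
    )

prompt = few_shot_prompt("Battery died after a week.")
```

Ending the prompt at "Sentiment:" invites the model to complete the pattern the examples establish.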
Use constraints to limit the scope of the model's output. This helps keep
the output from drifting away from the instructions into factual
inaccuracies.
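Constraints can be made explicit by appending them to the instruction as a short list. The helper and the example constraints below are illustrative assumptions, not a prescribed format.

```python
# Sketch: append explicit constraints to keep the model's output in scope.
def constrained_prompt(instruction, constraints):
    bullet_list = "\n".join(f"- {c}" for c in constraints)
    return f"{instruction}\n\nConstraints:\n{bullet_list}"

prompt = constrained_prompt(
    "Summarize the attached meeting notes.",
    [
        "Use at most 3 bullet points.",
        "Only include facts stated in the notes.",
        "Do not speculate about next steps.",
    ],
)
```

Stating limits on length, sources, and scope gives the model fewer opportunities to wander off the instructions.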