Module 1
Prompts
Prompts are input instructions or cues that shape the model's response.
They can take the form of natural language instructions, system-defined
instructions, or conditional constraints.
A prompt can be as simple as a single sentence, or it can be more
complex, with multiple clauses and instructions.
The goal of a prompt is to provide the LLM with enough information to
understand what is being asked of it, and to generate a relevant and
informative response.
Clear and explicit prompts enable developers to guide the model’s
behaviour and shape the resulting output.
2. Context
The AI uses this input data to perform the requested task, in this
case, sentiment analysis. High-quality input leads to more accurate
outputs.
Example
Each of these components plays a critical role in ensuring the AI generates the
correct output. By following this structure, the model knows:
What to do (Instruction),
What data to work with (Context), and
What kind of output is expected (Example).
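As a hypothetical illustration, the instruction-plus-context structure for the sentiment-analysis example above could be assembled like this (the review text and labels are invented for illustration):

```python
# Hypothetical prompt for the sentiment-analysis example: an instruction
# telling the model what to do, plus the context (input data) to act on.
instruction = (
    "Classify the sentiment of the following review as "
    "positive, negative, or neutral."
)
context = "Review: 'The delivery was late, but the product itself is excellent.'"

# Combine the components into a single prompt string.
prompt = f"{instruction}\n{context}\nSentiment:"
print(prompt)
```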
Types of Prompts
Natural Language Prompts
System Prompts
Conditional Prompts
1. Databases & Other Info Stores: These are sources of data that can be
referenced or accessed to gather information for crafting the prompt.
7. Generative AI: This is the final output phase, where the AI generates
text, images, or other results based on the prompt.
Be specific
The more specific you are in your prompt, the more likely the LLM is
to generate a relevant and informative response.
For example, instead of asking the LLM to "write a poem," you could ask
it to "write a poem about our nation."
Use examples
If possible, provide the LLM with examples of the kind of output you
are looking for.
This will help the LLM to understand your expectations and to
generate more accurate results.
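One common way to provide such examples is to include a few worked input-output pairs directly in the prompt, a few-shot sketch (the reviews and labels below are invented):

```python
# Few-shot sketch: worked examples show the model the expected output format.
examples = [
    ("I loved the battery life.", "positive"),
    ("The screen cracked after a week.", "negative"),
]

# Render each example as a Review/Sentiment pair, then append the new input.
shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
prompt = f"{shots}\nReview: The camera is decent.\nSentiment:"
print(prompt)
```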
Experiment
Clarity is key
Ensure that the prompt is clear and unambiguous. Avoid jargon unless
it's necessary for the context.
Try role-playing
Use constraints
Constraints such as word limits, required formats, or elements to include
narrow the model's output toward what you need.
Avoid leading questions
Leading questions can bias the model's output. It's essential to remain
neutral to get an unbiased response.
Iterate and evaluate
You can achieve a lot with simple prompts, but the quality of results
depends on how much information you provide and how well-crafted
the prompt is.
A prompt can contain information like the instruction or question you are
passing to the model and include other details such as context, inputs, or
examples.
You can use these elements to instruct the model more effectively to
improve the quality of results.
Prompt:
The sky is
Output:
blue.
Prompt playgrounds
If you are using the OpenAI Playground or any other LLM playground, you can
prompt the model directly in a chat-style interface.
Something to note is that when using OpenAI chat models like
gpt-3.5-turbo or gpt-4, you can structure your prompt using three different
roles: system, user, and assistant.
The system message is not required but helps to set the overall behavior
of the assistant.
The example above only includes a user message which you can use to
directly prompt the model.
For simplicity, all of the examples, except when it's explicitly mentioned,
will use only the user message to prompt the gpt-3.5-turbo model. The
assistant message in the example above corresponds to the model
response.
You can also define an assistant message to pass examples of the desired
behavior you want. You can learn more about working with chat models in
the OpenAI documentation.
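A minimal sketch of assembling the three roles into a message list, assuming the official openai Python client (the model name and message strings are illustrative):

```python
# Build a chat-completion message list from the three roles described above.
def build_messages(user_prompt, system_prompt=None, example_pairs=()):
    messages = []
    if system_prompt:
        # Optional: the system message sets the assistant's overall behavior.
        messages.append({"role": "system", "content": system_prompt})
    for user_example, assistant_example in example_pairs:
        # Assistant messages can carry examples of the desired behavior.
        messages.append({"role": "user", "content": user_example})
        messages.append({"role": "assistant", "content": assistant_example})
    # The actual prompt always goes last, as a user message.
    messages.append({"role": "user", "content": user_prompt})
    return messages

messages = build_messages(
    "Complete the sentence: The sky is",
    system_prompt="You are a concise assistant.",
)

# Sending the request requires an API key, e.g.:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-3.5-turbo", messages=messages
# )
# print(response.choices[0].message.content)
```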
You can observe from the prompt example above that the language model
responds with a sequence of tokens that make sense given the context
"The sky is".
The output might be unexpected or far from the task you want to
accomplish. In fact, this basic example highlights the necessity to provide
more context or instructions on what specifically you want to achieve
with the system. This is what prompt engineering is all about.
2. Controlling Style and Tone
Prompts help users control the tone, style, and length of the AI's output.
By fine-tuning the prompt, users can guide the model to produce content
that is formal, casual, detailed, or concise.
3. Handling Ambiguity
AI models rely heavily on prompts for context. The clearer and more
specific the prompt, the better the model performs.
Vague or ambiguous prompts can result in outputs that miss the user's
intent, highlighting the importance of precision in designing prompts.
4. Contextual Awareness
Advanced AI models can maintain context within a conversation through
sequential prompts.
By referencing prior exchanges, prompts enable the model to build upon
earlier responses, making interactions more coherent and allowing for
deeper, multi-turn dialogue.
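Multi-turn context is usually maintained by replaying the conversation history with every request; a minimal sketch (the function and message contents are invented):

```python
# Keep the full conversation in a history list; each new request replays it
# so the model can reference earlier exchanges.
def add_turn(history, user_message, model_reply):
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": model_reply})
    return history

history = []
add_turn(history, "Name one cause of habitat loss.", "Deforestation.")
# The follow-up only makes sense because the earlier turns are replayed:
add_turn(history, "How does that cause affect biodiversity?",
         "Deforestation removes habitats that many species depend on.")
```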
9. Iterative Improvement
Prompts are rarely perfect on the first try; they are typically refined
over several attempts, with each revision informed by the quality of the
previous output.
10. Ethical Considerations
The way prompts are framed can lead to ethical challenges. Manipulative,
harmful, or deceptive prompts can cause the AI to generate biased,
inappropriate, or misleading content. Users need to be mindful of how
they design prompts to ensure ethical AI usage.
Conclusion
Depending on the exact role and how technical it is, a prompt engineer needs a
solid foundation in several technical areas:
Familiarity with LLMs. Experience with models like GPT, PaLM 2, and
other emerging models, as well as their underlying architectures.
While technical prowess is vital, a prompt engineer also needs a suite of non-
technical skills:
1. Chain-of-thought prompting
For example, given a question: “How does climate change affect biodiversity?”
Instead of directly providing an answer, an AI model that uses chain-of-thought
prompting would break the question into three components or subproblems. The
subproblems might include:
The effect of climate change on temperature
The effect of temperature change on habitats
The effect of habitat destruction on biodiversity
Then, the model starts analyzing and investigating how the changed climate
affects temperature, how temperature change affects habitat, and how the
destruction of a habitat affects biodiversity.
This approach allows the model to address each part of the issue and give a
more detailed answer to the initial question of the influence of climate change
on biodiversity.
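The decomposition above can be sketched as a prompt template; the wording of the subproblems is illustrative:

```python
# Chain-of-thought sketch: ask the model to reason through named
# subproblems in order before giving its final answer.
def chain_of_thought_prompt(question, subproblems):
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(subproblems, 1))
    return (
        f"Question: {question}\n"
        "Think step by step, addressing each subproblem in order:\n"
        f"{steps}\n"
        "Then combine your answers into a final response."
    )

prompt = chain_of_thought_prompt(
    "How does climate change affect biodiversity?",
    [
        "How does climate change alter temperatures?",
        "How do temperature changes affect habitats?",
        "How does habitat destruction affect biodiversity?",
    ],
)
print(prompt)
```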
2. Tree-of-thought prompting
Tree-of-thought prompting generalizes chain-of-thought by having the
model explore several candidate reasoning paths in parallel, evaluate how
promising each one is, and backtrack when a path looks unproductive.
3. Maieutic prompting
In maieutic prompting, the model is asked to answer a question with an
explanation, then to explain parts of that explanation in turn; branches
that prove inconsistent are pruned, improving answers to complex
reasoning questions.
4. Complexity-based prompting
Complexity-based prompting favours chains of thought with more reasoning
steps, on the assumption that longer, more elaborate reasoning tends to
reach more reliable answers.
5. Generated-knowledge prompting
This method advises the model to source the explicit information required
before creating the content.
This makes the generated content better informed and of higher
quality. For example, a user who wants a presentation covering the topic
of renewable sources could prompt the model with "Make a presentation
about renewable sources."
The model might first generate explicit facts such as "Solar power frees
us from finite fossil fuels" and "Solar power lowers the demand for the
mostly coal-fired power plants that produce our electricity."
Finally, drawing on those facts, the model could construct an argument
for how beneficial it would be for humanity to switch to renewable
sources.
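The two-stage flow (first generate facts, then use them) could be sketched like this; ask_model is a stand-in for a real LLM call:

```python
# Generated-knowledge sketch: stage 1 asks the model for relevant facts,
# stage 2 feeds those facts back in when generating the final content.
def ask_model(prompt):
    # Placeholder: a real implementation would call an LLM here.
    return "Solar power lowers the demand for coal-fired power plants."

topic = "renewable sources"
facts = ask_model(f"List key facts about {topic}.")  # stage 1
final_prompt = (                                     # stage 2
    f"Using these facts:\n{facts}\n"
    f"Make a presentation about {topic}."
)
```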
6. Least-to-most prompting
Using the least-to-most prompting technique, the model will list the
subproblems involved in solving a given task. Then, the model solves the
subproblems in a sequence to ensure that every subsequent step uses the
solutions to the previous ones.
For example, a user may prompt the model with the following cooking-
themed request: "Bake a cake for me." The model's first output would
list subproblems such as "preheat the oven" and "mix the ingredients,"
and it would then work through them in order until the final step of
baking the cake is complete.
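The sequential flow can be sketched as a loop in which each subproblem's prompt includes the solutions found so far (ask_model is a stand-in for a real LLM call):

```python
# Least-to-most sketch: solve subproblems in order, feeding each solution
# into the prompt for the next one.
def ask_model(prompt):
    # Placeholder: a real implementation would call an LLM here.
    return f"[solution for: {prompt.splitlines()[-1]}]"

def least_to_most(task, subproblems):
    solutions = []
    for sub in subproblems:
        prompt = (
            f"Task: {task}\n"
            f"Solved so far: {'; '.join(solutions) or 'nothing yet'}\n"
            f"Next step: {sub}"
        )
        solutions.append(ask_model(prompt))
    return solutions

steps = least_to_most(
    "Bake a cake",
    ["preheat the oven", "mix the ingredients", "bake the cake"],
)
```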
7. Self-refine prompting
8. Directional-stimulus prompting
Directional-stimulus prompting gives the model hints or keywords that
steer what it should write.
For example, when asking the model to write a poem about love, you might
suggest including the words "heart," "passion," and "eternal." Such hints
help the model produce the desired output across a variety of tasks and
domains.
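A directional stimulus can be as simple as appending hint keywords to the task; the hints below mirror the example above:

```python
# Directional-stimulus sketch: hint keywords steer the model's output.
def directional_prompt(task, hints):
    return f"{task}\nHint: try to include the words {', '.join(hints)}."

prompt = directional_prompt(
    "Write a poem about love",
    ["heart", "passion", "eternal"],
)
print(prompt)
```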
9. Zero-shot prompting
For example, a chatbot can check in real time whether a particular prompt
generated a useful answer, based on the consumer's next reply. If the
prompt confuses or frustrates the user, the chatbot can adapt its
prompting strategy on the fly, adding more explanation, for example, or
proposing another solution. As a result, the chatbot learns to identify
which kinds of prompts perform poorly, based solely on insights from
individual users.
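By contrast with few-shot techniques, a zero-shot prompt states the task directly with no worked examples; a minimal sketch (the review text is invented):

```python
# Zero-shot sketch: the task is described, but no examples are given;
# the model must rely entirely on what it learned during training.
zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'I loved the battery life, but the screen is dim.'"
)
print(zero_shot)
```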
1. Content generation
2. Language translation
3. Text summarization
4. Dialogue systems
Dialogue systems like chatbots and virtual assistants rely on prompt engineering
to facilitate natural and engaging user interactions. By designing prompts that
anticipate user queries and preferences, prompt engineers can guide AI models
to generate relevant, coherent, and contextually appropriate responses,
enhancing the overall user experience.
5. Information retrieval
6. Code generation
7. Educational tools
Data Acquisition:
Complexity of Models:
Testing and Debugging:
AI prompt engineers must also test and debug their solutions to ensure
they function as expected, which can be an involved and time-consuming
process.
Understanding User Intent and Context:
User intent and context are two essential components of any AI-powered
app, and AI engineers play a pivotal role in making sure both are
accurately captured, understood, and represented in the design and
development of these applications.
Autonomous Vehicles:
Smart Homes:
Robotics:
Image Recognition:
Cybersecurity:
With cyber threats increasingly present in modern organizations, AI
engineers have an opportunity to develop AI-powered security solutions
that can thwart attacks from internal or external sources.
AI technologies allow organizations to detect and respond to
cyberattacks quickly, efficiently reducing the risk of data breaches.
Health Care:
Social Media:
Business Intelligence:
Customer Intelligence:
Automation:
As businesses search for ways to cut costs and boost productivity,
automation has become an invaluable asset.
AI engineers have an opportunity to develop solutions that automate
mundane tasks, freeing human resources for more strategic activities.
Invest in Automation:
Stay Current:
Enhanced control
Prompt engineering gives users more control over AI than ever by
letting them steer the models directly with prompts. This, in turn,
ensures that the generated content closely matches the user's needs and
expectations.
As stated earlier, the same mechanism can be employed across different
writing services, including, but not limited to, content generation,
summarization, and translation.
Improved relevance
Increased efficiency
Versatility
Customization
Prompt engineering is all about creating a suitable basis for the design of
AI-driven products, taking into account a customer's needs, tastes, and
target group. This flexibility makes it easy to tailor content to a
person's particular goals and objectives.
Limitations
Domain specificity
Potential bias