ChatGPT Prompt Engineering For Developers
#course #chatGPT
Introduction
In the development of LLMs, there have broadly been two types: base LLMs and instruction-tuned LLMs.
An instruction-tuned LLM is typically trained by starting with a base LLM that has been trained on a huge
amount of text data, then fine-tuning it with inputs and outputs that are instructions and good attempts to follow those
instructions, often refined further with RLHF (Reinforcement Learning from Human Feedback).
Because instruction-tuned LLMs have been trained to be helpful, honest, and harmless, they are less likely to
output problematic text, such as toxic outputs, compared to base LLMs.
When you use an instruction-tuned LLM, think of it as giving instructions to another person, say someone who is smart but
doesn't know the specifics of your task. So when an LLM doesn't work, it is sometimes because the instructions
weren't clear enough.
For example, rather than just saying "please write me something about Alan Turing", it can be helpful to be
clear about whether you want the text to focus on his scientific work, his personal life, his role in history, or
something else. It also helps to specify the tone: should it read like something a professional journalist would write,
or more like a casual note dashed off to a friend? And if you can specify which snippets of text the LLM should
read in advance to write this text about Alan Turing, that's even better.
Guidelines
Prompting Principles
The library needs to be configured with your account's secret key, which is available on the website.
You can either set it as the OPENAI_API_KEY environment variable before using the library:
!export OPENAI_API_KEY='sk-...'
Or set openai.api_key to its value directly:
import openai
openai.api_key = "sk-..."
import openai
import os
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())
openai.api_key = os.getenv('OPENAI_API_KEY')
def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # degree of randomness of the model's output
    )
    return response.choices[0].message["content"]
text = f"""
prompt = f"""
```{text}```
"""
response = get_completion(prompt)
print(response)
Output
Clear and specific instructions should be provided to guide a model towards the desired output, and
longer prompts can provide more clarity and context for the model, leading to more detailed and relevant
outputs.
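The first tactic under "write clear and specific instructions" is to use delimiters to clearly separate the text to operate on from the instructions themselves. A minimal sketch of the pattern (the sample text and the tag-style delimiters here are illustrative, not the exact course prompt):

```python
# Delimiters can be triple quotes, triple dashes, XML-style tags, etc.
text = (
    "Express what you want a model to do by providing instructions that "
    "are as clear and specific as you can possibly make them."
)

prompt = f"""
Summarize the text delimited by <text> tags into a single sentence.
<text>{text}</text>
"""
print(prompt)
```

Delimiters also help guard against prompt injection, since instructions inside the delimited text are clearly marked as data rather than as commands.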
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
[
  {
    "book_id": 1,
    "genre": "Fantasy"
  },
  {
    "book_id": 2
  },
  {
    "book_id": 3,
    "genre": "Romance"
  }
]
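A second tactic is to ask for a structured output such as JSON, which downstream code can parse directly. A small sketch of that parsing step (the response string below is a stand-in for what a model might return, not real output):

```python
import json

# Stand-in for a model response to a prompt that asked for a JSON list
# of made-up books with book_id, title, author and genre fields.
response = """
[
    {"book_id": 1, "title": "The Lost City", "author": "A. Writer", "genre": "Fantasy"},
    {"book_id": 2, "title": "Starfall", "author": "B. Author", "genre": "Sci-Fi"}
]
"""

books = json.loads(response)  # parse the JSON text into Python objects
print(books[0]["genre"])      # -> Fantasy
```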
text_1 = f"""
grab a cup and put a tea bag in it. Once the water is \
hot enough, just pour it over the tea bag. \
"""
prompt = f"""
Step 1 - ...
Step 2 - ...
Step N - ...
\"\"\"{text_1}\"\"\"
"""
response = get_completion(prompt)
print(response)
Output:
Step 3 - Once the water is hot enough, pour it over the tea bag.
text_2 = f"""
"""
prompt = f"""
Step 1 - ...
Step 2 - ...
Step N - ...
\"\"\"{text_2}\"\"\"
"""
response = get_completion(prompt)
print(response)
Output:
No steps provided.
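This tactic, asking the model to check whether conditions are satisfied, works best when the prompt names an exact fallback phrase ("No steps provided.") that calling code can branch on. A hedged sketch of that branching (the helper name is illustrative):

```python
def steps_or_fallback(model_response: str) -> list[str]:
    # The prompt told the model to answer exactly "No steps provided."
    # when the text contains no instructions; branch on that sentinel.
    if model_response.strip() == "No steps provided.":
        return []
    return [line for line in model_response.splitlines() if line.startswith("Step")]

print(steps_or_fallback("No steps provided."))  # -> []
```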
prompt = f"""
response = get_completion(prompt)
print(response)
Output
<grandparent>: Resilience is like a tree that bends with the wind but never breaks. It is the ability to
bounce back from adversity and keep moving forward, even when things get tough. Just like a tree that
grows stronger with each storm it weathers, resilience is a quality that can be developed and
strengthened over time.
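The grandparent answer above comes from "few-shot" prompting: giving one or more successful examples of the task before asking the model to perform it, so it imitates the demonstrated style. A sketch of such a prompt (the wording approximates the course example and is illustrative):

```python
prompt = """
Your task is to answer in a consistent style.

<child>: Teach me about patience.

<grandparent>: The river that carves the deepest valley flows from a \
modest spring; the grandest symphony originates from a single note.

<child>: Teach me about resilience.
"""
print(prompt)
```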
text = f"""
"""
# example 1
prompt_1 = f"""
Text:
```{text}```
"""
response = get_completion(prompt_1)
print(response)
Output:
Two siblings, Jack and Jill, go on a quest to fetch water from a hilltop well, but misfortune strikes as
they both fall down the hill, yet they return home slightly battered but with their adventurous spirits
undimmed.
Deux frères et sœurs, Jack et Jill, partent en quête d'eau d'un puits au sommet d'une colline, mais ils
tombent tous les deux et retournent chez eux légèrement meurtris mais avec leur esprit d'aventure intact.
{
  "french_summary": "Deux frères et sœurs, Jack et Jill, partent en quête d'eau d'un puits au sommet d'une
colline, mais ils tombent tous les deux et retournent chez eux légèrement meurtris mais avec leur esprit
d'aventure intact.",
  "num_names": 2
}
prompt_2 = f"""
Summary: <summary>
Text: <{text}>
"""
response = get_completion(prompt_2)
print(response)
Output:
Completion for prompt 2:
Summary: Jack and Jill go on a quest to fetch water, but misfortune strikes and they tumble down the
hill, returning home slightly battered but with their adventurous spirits undimmed.
Translation: Jack et Jill partent en quête d'eau, mais la malchance frappe et ils dégringolent la
colline, rentrant chez eux légèrement meurtris mais avec leurs esprits aventureux intacts.
Output JSON: {"french_summary": "Jack et Jill partent en quête d'eau, mais la malchance frappe et ils
dégringolent la colline, rentrant chez eux légèrement meurtris mais avec leurs esprits aventureux
intacts.", "num_names": 2}
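The pattern behind prompt_2 is to specify the steps required to complete the task and pin down an output format, so the sections (Summary, Translation, Names, Output JSON) are easy to locate programmatically. A hedged sketch of that pattern (wording approximates the course prompt; field names are as shown in the output above):

```python
text = "Jack and Jill went up the hill to fetch a pail of water."

prompt = f"""
Your task is to perform the following actions:
1 - Summarize the following text delimited by <> with 1 sentence.
2 - Translate the summary into French.
3 - List each name in the French summary.
4 - Output a json object that contains the keys: french_summary, num_names.

Use the following format:
Text: <text to summarize>
Summary: <summary>
Translation: <summary translation>
Names: <list of names in summary>
Output JSON: <json with summary and num_names>

Text: <{text}>
"""
print(prompt)
```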
Tactic 2 - Instruct the model to work out its own solution before rushing to a conclusion
prompt = f"""
Question:
foot
Student's Solution:
Costs:
"""
response = get_completion(prompt)
We can fix this by instructing the model to work out its own solution first.
prompt = f"""
is correct or not.
Question:
```
question here
```
Student's solution:
```
```
Actual solution:
```
```
just calculated:
```
yes or no
```
Student grade:
```
correct or incorrect
```
Question:
```
foot
```
Student's solution:
```
Costs:
```
Actual solution:
"""
response = get_completion(prompt)
print(response)
Output:
Costs:
Student grade:
Incorrect
Even though the model is exposed to a vast amount of knowledge during its training process, it has not perfectly
memorized the information it has seen, and so it doesn't know the boundary of its knowledge very well. This means it
might try to answer questions about obscure topics and can make things up that sound plausible but are not actually
true. We call these fabricated ideas hallucinations.
One tactic to reduce hallucinations: first ask the model to find any relevant information in the text, then ask it to
answer the question based on that relevant information. Having a way to trace the answer back to the source
document also helps.
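A hedged sketch of that two-step, quote-then-answer pattern (the document text, question, and fallback phrase below are all illustrative):

```python
document = """
Boie's toothbrush line uses soft silver-infused bristles and a replaceable head.
"""  # illustrative source text, not a real product sheet

prompt = f"""
First find any sentences in the document delimited by triple quotes that are \
relevant to the question, and quote them verbatim. Then answer the question \
using only those quoted sentences. If the document contains no relevant \
information, say "Not found in the document."

Question: Does the toothbrush have a pressure sensor?

Document: \"\"\"{document}\"\"\"
"""
print(prompt)
```

Grounding the answer in quoted sentences makes it possible to verify the model's claim against the source instead of trusting a fluent fabrication.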
Example of hallucinations: Boie is a real company, the product name is not real.
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
The AeroGlide UltraSlim Smart Toothbrush by Boie is a high-tech toothbrush that uses advanced sonic
technology to provide a deep and thorough clean. It features a slim and sleek design that makes it easy
to hold and maneuver, and it comes with a range of smart features that help you optimize your brushing
routine.
One of the key features of the AeroGlide UltraSlim Smart Toothbrush is its advanced sonic technology,
which uses high-frequency vibrations to break up plaque and bacteria on your teeth and gums. This
technology is highly effective at removing even the toughest stains and buildup, leaving your teeth
feeling clean and refreshed.
In addition to its sonic technology, the AeroGlide UltraSlim Smart Toothbrush also comes with a range of
smart features that help you optimize your brushing routine. These include a built-in timer that ensures
you brush for the recommended two minutes, as well as a pressure sensor that alerts you if you're
brushing too hard.
Overall, the AeroGlide UltraSlim Smart Toothbrush by Boie is a highly advanced and effective toothbrush
that is perfect for anyone looking to take their oral hygiene to the next level. With its advanced sonic
technology and smart features, it provides a deep and thorough clean that leaves your teeth feeling fresh
and healthy.
Iterative Prompt Development
The process starts with an idea for the task you want to complete. You then take a first attempt at writing a prompt
that is hopefully clear and specific and, if appropriate, gives the system time to think.
Then you run it and see what result you get. If it doesn't work well enough the first time, the iterative process of
figuring out why (for example, the instructions were not clear enough, or the model wasn't given enough time to
think) lets you refine the idea, refine the prompt, and so on, going around this loop multiple times until you end up
with a prompt that works for your application.
fact_sheet_chair = """
OVERVIEW
CONSTRUCTION
DIMENSIONS
- WIDTH 53 CM | 20.87”
- DEPTH 51 CM | 20.08”
- HEIGHT 80 CM | 31.50”
OPTIONS
- Soft or hard-floor caster options.
MATERIALS
SEAT
- HD36 foam
COUNTRY OF ORIGIN
- Italy
"""
prompt = f"""
triple backticks.
"""
response = get_completion(prompt)
print(response)
Output:
Introducing our stunning mid-century inspired office chair, the perfect addition to any home or business
setting. Part of a beautiful family of office furniture, including filing cabinets, desks, bookcases,
meeting tables, and more, this chair is available in several options of shell color and base finishes to
suit your style. Choose from plastic back and front upholstery (SWC-100) or full upholstery (SWC-110) in
10 fabric and 6 leather options.
The chair is constructed with a 5-wheel plastic coated aluminum base and features a pneumatic chair
adjust for easy raise/lower action. It is available with or without armrests and is qualified for
contract use. The base finish options are stainless steel, matte black, gloss white, or chrome.
Measuring at a width of 53 cm, depth of 51 cm, and height of 80 cm, with a seat height of 44 cm and seat
depth of 41 cm, this chair is designed for ultimate comfort. You can also choose between soft or hard-
floor caster options and two choices of seat foam densities: medium (1.8 lb/ft3) or high (2.8 lb/ft3).
The armrests are available in either an armless or 8 position PU option.
The materials used in the construction of this chair are of the highest quality. The shell base glider is
made of cast aluminum with modified nylon PA6/PA66 coating and has a shell thickness of 10 mm. The seat
is made of HD36 foam, ensuring maximum comfort and durability.
This chair is made in Italy and is the perfect combination of style and functionality. Upgrade your
workspace with our mid-century inspired office chair today!
prompt = f"""
triple backticks.
"""
response = get_completion(prompt)
print(response)
Output:
Introducing our mid-century inspired office chair, perfect for home or business settings. Available in a
range of shell colors and base finishes, with or without armrests. Choose from 10 fabric and 6 leather
options for full or plastic upholstery. With a 5-wheel base and pneumatic chair adjust, it's both stylish
and functional. Made in Italy.
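When you ask for "at most 50 words", remember that LLMs see tokens rather than words and are not precise counters, so it helps to measure the response yourself and re-prompt if needed. A small sketch:

```python
response = (
    "Introducing our mid-century inspired office chair, perfect for home "
    "or business settings. Made in Italy."
)  # stand-in for a model response

word_count = len(response.split())
print(word_count)  # -> 15
if word_count > 50:
    print("Response exceeded the 50-word limit; consider re-prompting.")
```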
Ask it to focus on the aspects that are relevant to the intended audience.
prompt = f"""
triple backticks.
"""
response = get_completion(prompt)
print(response)
Output:
Introducing our mid-century inspired office chair, perfect for both home and business settings. With a
range of shell colors and base finishes, including stainless steel and matte black, this chair is
available with or without armrests. The 5-wheel plastic coated aluminum base and pneumatic chair adjust
make it easy to move and adjust to your desired height. Made with high-quality materials, including a
cast aluminum shell and HD36 foam seat, this chair is built to last.
prompt = f"""
triple backticks.
"""
response = get_completion(prompt)
print(response)
Output:
Introducing our mid-century inspired office chair, perfect for home or business settings. With a range of
shell colors and base finishes, and the option of plastic or full upholstery, this chair is both stylish
and comfortable. Constructed with a 5-wheel plastic coated aluminum base and pneumatic chair adjust, it's
also practical. Available with or without armrests and suitable for contract use. Product ID: SWC-100,
SWC-110.
prompt = f"""
triple backticks.
"""
response = get_completion(prompt)
print(response)
Output:
<div>
<p>Introducing our mid-century inspired office chair, part of a beautiful family of office furniture that
includes filing cabinets, desks, bookcases, meeting tables, and more. This chair is available in several
options of shell color and base finishes, allowing you to customize it to your liking. You can choose
between plastic back and front upholstery or full upholstery in 10 fabric and 6 leather options. The base
finish options are stainless steel, matte black, gloss white, or chrome. The chair is also available with
or without armrests, making it suitable for both home and business settings. Plus, it's qualified for
contract use, ensuring its durability and longevity.</p>
<p>The chair's construction features a 5-wheel plastic coated aluminum base and a pneumatic chair adjust
for easy raise/lower action. You can also choose between soft or hard-floor caster options and two
choices of seat foam densities: medium (1.8 lb/ft3) or high (2.8 lb/ft3). The armrests are also
customizable, with the option of armless or 8 position PU armrests.</p>
<p>The chair's shell base glider is made of cast aluminum with modified nylon PA6/PA66 coating, with a
shell thickness of 10 mm. The seat is made of HD36 foam, ensuring comfort and support during long work
hours. This chair is made in Italy, ensuring its quality and craftsmanship.</p>
<table>
<caption>Product Dimensions</caption>
<tr>
<th>Dimension</th>
<th>Measurement (inches)</th>
</tr>
<tr>
<td>Width</td>
<td>20.87"</td>
</tr>
<tr>
<td>Depth</td>
<td>20.08"</td>
</tr>
<tr>
<td>Height</td>
<td>31.50"</td>
</tr>
<tr>
<td>Seat Height</td>
<td>17.32"</td>
</tr>
<tr>
<td>Seat Depth</td>
<td>16.14"</td>
</tr>
</table>
</div>
Summarizing texts
There's so much text in today's world that pretty much none of us have enough time to read everything we wish we
had time to. So one of the most exciting applications of LLMs is using them to summarize text.
Text to summarize
prod_review = """
to her.
"""
prompt = f"""
Review: ```{prod_review}```
"""
response = get_completion(prompt)
print(response)
Output:
Soft and cute panda plush toy loved by daughter, but a bit small for the price. Arrived early.
prompt = f"""
Shipping department.
Review: ```{prod_review}```
"""
response = get_completion(prompt)
print(response)
Output:
The panda plush toy arrived a day earlier than expected, but the customer felt it was a bit small for the
price paid.
prompt = f"""
Review: ```{prod_review}```
"""
response = get_completion(prompt)
print(response)
Output:
The panda plush toy is soft, cute, and loved by the recipient, but the price may be too high for its
size.
prompt = f"""
Review: ```{prod_review}```
"""
response = get_completion(prompt)
print(response)
Output:
review_1 = prod_review
"""
review_4 = """
blade for a finer flour, and use the cross cutting blade \
that you plan to use that way you can avoid adding so \
two days.
"""
reviews = [review_1, review_2, review_3, review_4]  # review_2 and review_3 elided above

for i in range(len(reviews)):
    prompt = f"""
    Review: ```{reviews[i]}```
    """
    response = get_completion(prompt)
    print(i, response, "\n")
Output:
0 Soft and cute panda plush toy loved by daughter, but a bit small for the price. Arrived early.
1 Affordable lamp with storage, fast shipping, and excellent customer service. Easy to assemble and
missing parts were quickly replaced.
2 Good battery life, small toothbrush head, but effective cleaning. Good deal if bought around $50.
3 The product was on sale for $49 in November, but the price increased to $70-$89 in December. The base
doesn't look as good as previous editions, but the reviewer plans to be gentle with it. A special tip for
making smoothies is to freeze the fruits and vegetables beforehand. The motor made a funny noise after a
year, and the warranty had expired. Overall quality has decreased.
Inferring
Inferring covers tasks where the model takes a text as input and performs some kind of analysis, for example
extracting labels, extracting names, or understanding the sentiment of a text.
If you want to extract a sentiment, positive or negative, from a piece of text, in the traditional machine learning
workflow you'd have to collect a labeled data set, train a model, and figure out how to deploy the model somewhere
in the cloud to make inferences. That can work pretty well, but it is a lot of work to go through. Also, for every task,
such as sentiment versus extracting names versus something else, you have to train and deploy a separate model.
One of the really nice things about a large language model is that for many tasks like these, you can just write a prompt
and have it start generating results pretty much right away. That gives tremendous speed in terms of application
development. And you can also just use one model, one API, to do many different tasks rather than needing to figure out
how to train and deploy a lot of different models.
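The one-model-many-tasks idea above can be sketched as a single prompt-building helper where only the instruction changes per task (the helper name and wording are illustrative):

```python
def make_task_prompt(task: str, text: str) -> str:
    # One model, one helper, many tasks: swap the instruction, keep the format.
    return f"""
{task}

Review text: '''{text}'''
"""

sentiment_prompt = make_task_prompt(
    'What is the sentiment of the following product review? '
    'Give your answer as a single word, either "positive" or "negative".',
    "I love this lamp, great value and fast shipping!",
)
print(sentiment_prompt)
```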
lamp_review = """
"""
Sentiment (positive/negative)
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
prompt = f"""
or "negative".
"""
response = get_completion(prompt)
print(response)
Output:
positive
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
Identify anger
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
no
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
{
  "Item": "lamp",
  "Brand": "Lumina"
}
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
{
  "Sentiment": "positive",
  "Anger": false,
  "Brand": "Lumina"
}
story = """
hear that our employees are satisfied with their work at NASA.
We have a talented and dedicated team who work tirelessly
Infer 5 topics
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
government survey, job satisfaction, NASA, Social Security Administration, employee concerns
topic_list = [
    "nasa", "local government", "engineering",
    "employee satisfaction", "federal government"
]
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
nasa: 1
local government: 0
engineering: 0
employee satisfaction: 1
federal government: 1
topic_dict = {line.split(': ')[0]: int(line.split(': ')[1]) for line in response.split('\n')}
if topic_dict['nasa'] == 1:
    print("ALERT: New NASA story!")
Output:
ALERT: New NASA story!
Transforming
LLMs are very good at transforming their input into a different format: translation, spelling and grammar correction,
taking as input a piece of text that may not be fully grammatical and helping you fix it up a bit, or even
transforming formats such as inputting HTML and outputting JSON.
Translation
ChatGPT is trained with sources in many languages. This gives the model the ability to do translation. Here are some
examples of how to use this capability.
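A minimal sketch of a translation prompt (the wording is illustrative; the Vietnamese output below suggests the course request was to translate an order for a blender):

```python
text = "Hi, I would like to order a blender."
prompt = f"Translate the following English text to Vietnamese: '''{text}'''"
print(prompt)
```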
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
Xin chào, tôi muốn đặt mua một máy xay sinh tố.
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
This is French.
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
prompt = f"""
response = get_completion(prompt)
print(response)
Output:
Universal Translator
Imagine you are in charge of IT at a large multinational e-commerce company. Users are messaging you with IT issues in
all their native languages. Your staff is from all over the world and speaks only their native languages. You need a
universal translator!
user_messages = [
    "La performance du système est plus lente que d'habitude.",  # system performance is slower than usual
    # ...more user messages elided
]
for issue in user_messages:
    prompt = f"Tell me what language this is: {issue}"
    lang = get_completion(prompt)
    prompt = f"Translate the following text to English and Vietnamese: {issue}"
    response = get_completion(prompt)
    print(f"Original message ({lang}): {issue}")
    print(response, "\n")
Output:
Original message (This is French.): La performance du système est plus lente que d'habitude.
Vietnamese: Màn hình của tôi có các pixel không sáng lên.
Tone Transformation
Writing can vary based on the intended audience. ChatGPT can produce different tones.
prompt = f"""
Translate the following from slang to a business letter:
'Dude, This is Joe, check out this spec on this standing lamp.'
"""
response = get_completion(prompt)
print(response)
Output:
Dear Sir/Madam,
I am writing to bring to your attention a standing lamp that I believe may be of interest to you. Please
find attached the specifications for your review.
Sincerely,
Joe
Format Conversion
ChatGPT can translate between formats. The prompt should describe the input and output formats.
data_json = { "resturant employees" :[
{"name":"Shyam", "email":"[email protected]"},
{"name":"Bob", "email":"[email protected]"},
{"name":"Jai", "email":"[email protected]"}
]}
prompt = f"""
"""
response = get_completion(prompt)
print(response)
Output:
<table>
<caption>Restaurant Employees</caption>
<thead>
<tr>
<th>Name</th>
<th>Email</th>
</tr>
</thead>
<tbody>
<tr>
<td>Shyam</td>
<td>[email protected]</td>
</tr>
<tr>
<td>Bob</td>
<td>[email protected]</td>
</tr>
<tr>
<td>Jai</td>
<td>[email protected]</td>
</tr>
</tbody>
</table>
Spellcheck/Grammar check
Here are some examples of common grammar and spelling problems and the LLM's response.
To signal to the LLM that you want it to proofread your text, you instruct the model to 'proofread' or 'proofread and
correct'.
text = [
    "The girl with the black and white puppies have a ball.",  # The girl has a ball.
    "Its going to be a long day. Does the car need it’s oil changed?",  # Homonyms
    "That medicine effects my ability to sleep. Have you heard of the butterfly affect?",  # Homonyms
    "Their goes my freedom. There going to bring they're suitcases.",  # Homonyms
]

for t in text:
    prompt = f"""Proofread and correct the following text. If you don't find any errors, just say "No errors found":
    ```{t}```"""
    response = get_completion(prompt)
    print(response)
Output:
The girl with the black and white puppies has a ball.
No errors found.
It's going to be a long day. Does the car need its oil changed?
Their goes my freedom. There going to bring they're suitcases.
Corrected version:
text = f"""
Got this for my daughter for her birthday cuz she keeps taking \
mine from my room. Yes, adults also like pandas too. She takes \
it everywhere with her, and it's super soft and cute. One of the \
ears is a bit lower than the other, and I don't think that was \
"""
prompt = f"proofread and correct this review: ```{text}```"
response = get_completion(prompt)
print(response)
Output:
I got this for my daughter's birthday because she keeps taking mine from my room. Yes, adults also like
pandas too. She takes it everywhere with her, and it's super soft and cute. However, one of the ears is a
bit lower than the other, and I don't think that was designed to be asymmetrical. Additionally, it's a
bit small for what I paid for it. I think there might be other options that are bigger for the same
price. On the positive side, it arrived a day earlier than expected, so I got to play with it myself
before I gave it to my daughter.
from redlines import Redlines
from IPython.display import display, Markdown

diff = Redlines(text, response)
display(Markdown(diff.output_markdown))
Output:
prompt = f"""
Text: ```{text}```
"""
response = get_completion(prompt)
display(Markdown(response))
Output:
Title: A Soft and Cute Panda Plush Toy for All Ages
Introduction: As a parent, finding the perfect gift for your child's birthday can be a daunting task.
However, I stumbled upon a soft and cute panda plush toy that not only made my daughter happy but also
brought joy to me as an adult. In this review, I will share my experience with this product and provide
an honest assessment of its features.
Product Description: The panda plush toy is made of high-quality materials that make it super soft and
cuddly. Its cute design is perfect for children and adults alike, making it a versatile gift option. The
toy is small enough to carry around, making it an ideal companion for your child on their adventures.
Pros: The panda plush toy is incredibly soft and cute, making it an excellent gift for children and
adults. Its small size makes it easy to carry around, and its design is perfect for snuggling. The toy
arrived a day earlier than expected, which was a pleasant surprise.
Cons: One of the ears is a bit lower than the other, which makes the toy asymmetrical. Additionally, the
toy is a bit small for its price, and there might be other options that are bigger for the same price.
Conclusion: Overall, the panda plush toy is an excellent gift option for children and adults who love
cute and cuddly toys. Despite its small size and asymmetrical design, the toy's softness and cuteness
make up for its shortcomings. I highly recommend this product to anyone looking for a versatile and
adorable gift option.
Expanding
Expanding is the task of taking a short piece of text, such as a set of instructions or a list of topics, and having the large
language model generate a longer piece of text, such as an email or an essay about some topic. There are some great
uses of this, such as if you use a large language model as a brainstorming partner.
We'll go through an example of how you can use a language model to generate a personalized email based on some
information. The email is kind of self-proclaimed to be from an AI bot.
review = f"""
blade for a finer flour, and use the cross cutting blade \
that you plan to use that way you can avoid adding so \
two days.
"""
prompt = f"""
their review.
"""
response = get_completion(prompt)
print(response)
Output:
Thank you for taking the time to leave a review about our product. We are sorry to hear that you
experienced a price increase and that the quality of the product did not meet your expectations. We
apologize for any inconvenience this may have caused you.
If you have any further concerns or questions, please do not hesitate to reach out to our customer
service team. They will be more than happy to assist you in any way they can.
Thank you again for your feedback. We appreciate your business and hope to have the opportunity to serve
you better in the future.
Best regards,
AI customer agent
We've been using temperature zero; if you're trying to build a system that is reliable and predictable, you should
stick with that. If you're trying to use the model in a more creative way, where you want a wider variety of
different outputs, you might want to use a higher temperature.
At higher temperatures, the outputs from the model are kind of more random. You can almost think of it as that at higher
temperatures, the assistant is more distractible, but maybe more creative.
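An intuition for why this happens: a common way samplers implement temperature is to divide each candidate token's logit by the temperature before the softmax, so low temperatures sharpen the distribution toward the most likely token while high temperatures flatten it. A toy sketch (not the OpenAI implementation, just the standard formula):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature, then normalize; as T -> 0 this approaches argmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.2))  # sharply peaked on the first token
print(softmax_with_temperature(logits, 2.0))  # much flatter distribution
```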
prompt = f"""
their review.
"""
response = get_completion(prompt, temperature=0.7)  # higher temperature for more varied output
print(response)
Output:
Thank you for taking the time to leave a review of our product. We apologize for any inconvenience caused
by the recent price increase of the 17 piece system. We assure you that price gouging is not our
intention, and we continuously monitor and adjust our prices to remain competitive.
We appreciate your feedback on the blade locking mechanism not looking as good as previous editions. We
will forward this information to our product development team for review.
We are sorry to hear that your motor started making a funny noise after a year of use. We recommend
reaching out to our customer service team for assistance with any issues, even if the warranty has
expired. Our team may be able to offer repair options or suggest a replacement.
Thank you again for your review and for choosing our product. We value your loyalty and feedback.
Best regards,
AI customer agent
Chatbot
Setup
import os
import openai
openai.api_key = os.getenv('OPENAI_API_KEY')
def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message["content"]
def get_completion_from_messages(messages, model="gpt-3.5-turbo", temperature=0):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,
    )
    # print(str(response.choices[0].message))
    return response.choices[0].message["content"]
messages = [
print(response)
Output:
messages = [
print(response)
Output:
Hello Isa, it's nice to meet you! How can I assist you today?
messages = [
print(response)
Output:
I'm sorry, but I do not have access to your name unless you tell me what it is. What would you like me to
call you?
messages = [
print(response)
Output:
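The chat format behind the examples above is a list of messages with system/user/assistant roles; because each API call is stateless, earlier turns must be re-sent for the model to "remember" them. A sketch of an assembled history (contents illustrative):

```python
messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "Hi, my name is Isa."},
    {"role": "assistant", "content": "Hello Isa, it's nice to meet you!"},
    {"role": "user", "content": "Can you remind me what my name is?"},
]
# Because the earlier turns are included in the list, the model can answer
# "Isa"; without them, it has no access to the name, as shown above.
```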
OrderBot
We can automate the collection of user prompts and assistant responses to build an OrderBot, which will take
orders at a pizza restaurant.
def collect_messages(_):
    prompt = inp.value_input
    inp.value = ''
    context.append({'role':'user', 'content':f"{prompt}"})
    response = get_completion_from_messages(context)
    context.append({'role':'assistant', 'content':f"{response}"})
    panels.append(
        pn.Row('User:', pn.pane.Markdown(prompt, width=600)))
    panels.append(
        pn.Row('Assistant:', pn.pane.Markdown(response, width=600)))
    return pn.Column(*panels)
pn.extension()
You are OrderBot, an automated service to collect orders for a pizza restaurant. \
You wait to collect the entire order, then summarize it and check for a final \
Toppings: \
mushrooms 1.50 \
sausage 3.00 \
canadian bacon 3.50 \
AI sauce 1.50 \
peppers 1.00 \
Drinks: \
inp = pn.widgets.TextInput(value="Hi", placeholder='Enter text here...')
button_conversation = pn.widgets.Button(name="Chat!")
interactive_conversation = pn.bind(collect_messages, button_conversation)

dashboard = pn.Column(
    inp,
    pn.Row(button_conversation),
    pn.panel(interactive_conversation, loading_indicator=True),
)
dashboard
messages = context.copy()
messages.append(
    {'role':'system', 'content':'create a json summary of the previous food order. \
Itemize the price for each item. The fields should be 1) pizza, include size \
2) list of toppings 3) list of drinks, include size 4) list of sides, include size \
5) total price'},
)
# Alternative fields: 1) pizza, price 2) list of toppings 3) list of drinks, include size,
# include price 4) list of sides, include size, include price 5) total price
response = get_completion_from_messages(messages, temperature=0)
print(response)
Output:
```
{
  "pizza": [
    {
      "type": "pepperoni",
      "size": "large",
      "price": 12.95
    },
    {
      "type": "cheese",
      "size": "medium",
      "price": 9.25
    }
  ],
  "toppings": [
    {
      "price": 2.00
    },
    {
      "type": "mushrooms",
      "price": 1.50
    }
  ],
  "drinks": [
    {
      "type": "coke",
      "size": "medium",
      "price": 2.00
    },
    {
      "type": "sprite",
      "size": "small",
      "price": 1.00
    }
  ],
  "sides": [
    {
      "type": "fries",
      "size": "large",
      "price": 4.50
    }
  ],
  "total_price": 34.20
}
```