02 Build Natural Language Solutions With Azure OpenAI Service

Getting started: Build natural language solutions with Azure OpenAI Service

Introduction

• Azure OpenAI provides a platform for developers to add artificial intelligence functionality to their applications with the help of both Python and C# SDKs and REST APIs.

• The platform has various AI models available, each specializing in different tasks, which can be deployed through the Azure OpenAI Service.

• This module guides you through how to build Azure OpenAI into your own
application, giving you a starting point for developing solutions with
generative AI.
Integrate Azure OpenAI into your app
• Azure OpenAI offers both C# and Python SDKs and a REST API that developers
can use to add AI functionality to their applications.

• Generative AI capabilities in Azure OpenAI are provided through models.

• The models available in the Azure OpenAI service belong to different families,
each with their own focus.

• To use one of these models, you need to deploy through the Azure OpenAI
Service.
Create an Azure OpenAI resource
• An Azure OpenAI resource can be deployed through both the Azure command
line interface (CLI) and the Azure portal.

• Creating the Azure OpenAI resource through the Azure portal is similar to deploying individual Azure AI Services resources; Azure OpenAI is part of the Azure AI Services suite.

1. Navigate to the Azure portal

2. Search for Azure OpenAI, select it, and click Create

3. Enter the appropriate values for the empty fields, and create the resource.

• The possible regions for Azure OpenAI are currently limited. Choose the region
closest to your physical location.

• Once the resource has been created, you'll have keys and an endpoint that
you can use in your app.
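The portal steps above have a CLI equivalent. As a rough sketch (the resource name, resource group, and region below are placeholder values), an Azure OpenAI resource is created as a Cognitive Services account of kind OpenAI:

```shell
# Sketch: create an Azure OpenAI resource with the Azure CLI.
# MyOpenAIResource, MyResourceGroup, and eastus are illustrative placeholders.
az cognitiveservices account create \
  --name MyOpenAIResource \
  --resource-group MyResourceGroup \
  --location eastus \
  --kind OpenAI \
  --sku S0
```

After creation, `az cognitiveservices account keys list` retrieves the keys the portal shows in the Keys & Endpoint section.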
Choose and deploy a model
• Each model family excels at different tasks, and the models within each family have different capabilities. The models break down into three main families:
• Generative Pre-trained Transformer (GPT) - Models that understand
and generate natural language and some code. These models are best at
general tasks, conversations, and chat formats.

• Code (gpt-3 and earlier) - Code models are built on top of GPT models,
and trained on millions of lines of code. These models can understand and
generate code, including interpreting comments or natural language to
generate code. gpt-35-turbo and later models have this code functionality
included without the need for a separate code model.

• Embeddings - These models can understand and use embeddings, which are a special format of data that can be used by machine learning models and algorithms.
• This module focuses on general GPT models, with other models being covered
in other modules.

• For older models, the model family and capability is indicated in the name of the base model, such as text-davinci-003, which specifies that it's a text model, with davinci level capability, and version 3.

• Details on models, capability levels, and naming conventions can be found on the Azure OpenAI Models documentation page.

• More recent models specify which GPT generation they belong to and whether they are the turbo version, such as gpt-35-turbo representing the GPT 3.5 Turbo model.

• To deploy a model for you to use, navigate to the Azure OpenAI Studio and go
to the Deployments page. The lab later in this module covers exactly how to
do that.
Authentication and specification of deployed model
• When you deploy a model in Azure OpenAI, you choose a deployment name to
give it.

• When configuring your app, you need to specify your resource endpoint, key, and deployment name to indicate which deployed model to send your request to.

• This enables you to deploy various models within the same resource, and
make requests to the appropriate model depending on the task.
Prompt engineering
• How the input prompt is written plays a large part in how the AI model will
respond.

• For example, if prompted with a simple request such as "What is Azure OpenAI", you often get a generic answer similar to using a search engine.

• However, if you give it more details about what you want in your response,
you get a more specific answer. For example, given this prompt:

Classify the following news headline into 1 of the following categories: Business, Tech, Politics,
Sport, Entertainment

Headline 1: Donna Steffensen Is Cooking Up a New Kind of Perfection. The Internet’s most
beloved cooking guru has a buzzy new book and a fresh new perspective
Category: Entertainment

Headline 2: Major Retailer Announces Plans to Close Over 100 Stores
Category:
• You'll likely get the "Category:" under the second headline filled out with "Business".

• Several examples similar to this one can be found in the Azure OpenAI Studio Playground, under the Examples dropdown.

• Try to be as specific as possible about what you want in the response from the model, and you may be surprised at how insightful it can be!
Available endpoints
• Azure OpenAI can be accessed via a REST API or an SDK currently available for
Python and C#.

• The endpoints available for interacting with a deployed model are used
differently, and certain endpoints can only use certain models. The available
endpoints are:

• Completion - model takes an input prompt and generates one or more predicted completions. You'll see this playground in the studio, but it isn't covered in depth in this module.

• ChatCompletion - model takes input in the form of a chat conversation (where roles are specified with the message they send), and the next chat completion is generated.

• Embeddings - model takes input and returns a vector representation of that input, which can be used by machine learning models and algorithms.

• For example, the ChatCompletion endpoint takes a conversation input such as the following:
{"role": "system", "content": "You are a helpful assistant, teaching people about
AI."},
{"role": "user", "content": "Does Azure OpenAI support multiple languages?"},
{"role": "assistant", "content": "Yes, Azure OpenAI supports several languages,
and can translate between them."},
{"role": "user", "content": "Do other Azure AI Services support translation too?"}
• When you give the AI model a real conversation, it can generate a better
response with more accurate tone, phrasing, and context.

• The ChatCompletion endpoint enables the ChatGPT model to have a more realistic conversation by sending the history of the chat with the next user message.

• ChatCompletion also allows for non-chat scenarios, such as summarization or entity extraction.

• This can be accomplished by providing a short conversation, specifying the system information and what you want, along with the user input.

• For example, if you want to generate a job description, provide ChatCompletion with something like the following conversation input.
{"role": "system", "content": "You are an assistant designed to write intriguing job
descriptions. "},
{"role": "user", "content": "Write a job description for the following job title:
'Business Intelligence Analyst'. It should include responsibilities, required
qualifications, and highlight benefits like time off and flexible hours."}
Use Azure OpenAI REST API
• Azure OpenAI offers a REST API for interacting with models and generating responses, which developers can use to add AI functionality to their applications.

• This unit covers example usage, input and output from the API.

• For each call to the REST API, you need the endpoint and a key from your
Azure OpenAI resource, and the name you gave for your deployed model.

• In the following examples, these placeholders are used:
YOUR_ENDPOINT_NAME: This base endpoint is found in the Keys & Endpoint section in the Azure portal. It's the base endpoint of your resource, such as https://sample.openai.azure.com/.

YOUR_API_KEY: Keys are found in the Keys & Endpoint section in the Azure portal. You can use either key for your resource.

YOUR_DEPLOYMENT_NAME: This deployment name is the name provided when you deployed your model in the Azure OpenAI Studio.
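As an illustrative sketch of how these three values combine into a request, the helper below assembles the URL, headers, and body of a chat-completions call. The function name `build_chat_request` and the placeholder values are hypothetical, not part of the service:

```python
# Sketch: assemble the pieces of an Azure OpenAI chat-completions REST call.
# build_chat_request and the placeholder values are illustrative only.

def build_chat_request(endpoint, deployment, api_key, messages,
                       api_version="2023-03-15-preview"):
    """Return the URL, headers, and JSON body for a chat-completions call."""
    url = (f"{endpoint.rstrip('/')}/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    body = {"messages": messages}
    return url, headers, body

url, headers, body = build_chat_request(
    "https://sample.openai.azure.com/", "my-gpt35-deployment", "YOUR_API_KEY",
    [{"role": "user", "content": "Does Azure OpenAI support multiple languages?"}])
print(url)
```

The same three values, in the same places, appear in the curl examples that follow.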
Chat completions
• Once you've deployed a model in your Azure OpenAI resource, you can send a prompt to the service using a POST request.

curl https://YOUR_ENDPOINT_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-03-15-preview \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{"messages":[{"role": "system", "content": "You are a helpful assistant, teaching people about AI."},
    {"role": "user", "content": "Does Azure OpenAI support multiple languages?"},
    {"role": "assistant", "content": "Yes, Azure OpenAI supports several languages, and can translate between them."},
    {"role": "user", "content": "Do other Azure AI Services support translation too?"}]}'
The response will be similar to the following JSON:
{
    "id": "chatcmpl-6v7mkQj980V1yBec6ETrKPRqFjNw9",
    "object": "chat.completion",
    "created": 1679001781,
    "model": "gpt-35-turbo",
    "usage": {
        "prompt_tokens": 95,
        "completion_tokens": 84,
        "total_tokens": 179
    },
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "Yes, other Azure AI Services also support translation. Azure AI Services offer translation between multiple languages for text, documents, or custom translation through Azure AI Services Translator."
            },
            "finish_reason": "stop",
            "index": 0
        }
    ]
}
• REST endpoints allow for specifying other optional input parameters, such
as temperature, max_tokens and more.

• If you'd like to include any of those parameters in your request, add them to the
input data with the request.
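For example, a request body that includes these optional parameters might look like the following (the parameter values here are illustrative, not recommendations):

```json
{
    "messages": [
        {"role": "user", "content": "What is Azure OpenAI?"}
    ],
    "temperature": 0.7,
    "max_tokens": 100
}
```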
Embeddings
• Embeddings represent input in a specific format of data that machine learning models can easily consume.

• To generate embeddings from the input text, POST a request to the embeddings endpoint.

curl https://YOUR_ENDPOINT_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-12-01 \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d "{\"input\": \"The food was delicious and the waiter...\"}"
• When generating embeddings, be sure to use a model in Azure OpenAI meant for embeddings.

• Those models start with text-embedding or text-similarity, depending on what functionality you're looking for.

• The response from the API will be similar to the following JSON:
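As an illustrative sketch of the response shape (the vector values, model name, and token counts shown here are made up, and a real embedding vector contains many more values than the few shown):

```json
{
    "object": "list",
    "data": [
        {
            "object": "embedding",
            "embedding": [0.0023, -0.0091, 0.0147],
            "index": 0
        }
    ],
    "model": "text-embedding-ada-002",
    "usage": {
        "prompt_tokens": 8,
        "total_tokens": 8
    }
}
```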
Use Azure OpenAI SDK
• In addition to REST APIs covered in the previous unit, users can also access
Azure OpenAI models through C# and Python SDKs.

• The same functionality is available through both REST and these SDKs.

• For both SDKs covered in this unit, you need the endpoint and a key from your
Azure OpenAI resource, and the name you gave for your deployed model.

• In the following code snippets, these placeholders are used:

YOUR_ENDPOINT_NAME: This base endpoint is found in the Keys & Endpoint section in the Azure portal. It's the base endpoint of your resource, such as https://sample.openai.azure.com/.

YOUR_API_KEY: Keys are found in the Keys & Endpoint section in the Azure portal. You can use either key for your resource.

YOUR_DEPLOYMENT_NAME: This deployment name is the name provided when you deployed your model in the Azure OpenAI Studio.
Install libraries

• First, install the client library for your preferred language.

• The C# SDK is a .NET adaptation of the REST APIs, built specifically for Azure OpenAI; however, it can be used to connect to both Azure OpenAI resources and non-Azure OpenAI endpoints.

• The Python SDK is built and maintained by OpenAI.

pip install openai
Configure app to access Azure OpenAI resource
• Configuration for each language varies slightly, but both require the same
parameters to be set.

• The necessary parameters are endpoint, key, and the name of your deployment, which is passed as the model parameter when sending your prompt to the model.

• Add the library to your app, and set the required parameters for your client.
Call Azure OpenAI resource
• Once you've configured your connection to Azure OpenAI, send your prompt to the model.

from openai import AzureOpenAI

# Configure the client with your resource's values (placeholders shown)
client = AzureOpenAI(
    azure_endpoint="YOUR_ENDPOINT_NAME",
    api_key="YOUR_API_KEY",
    api_version="2023-05-15"
)
deployment_name = "YOUR_DEPLOYMENT_NAME"

response = client.chat.completions.create(
    model=deployment_name,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Azure OpenAI?"}
    ]
)
generated_text = response.choices[0].message.content

# Print the response
print("Response: " + generated_text + "\n")
• The response object contains several values, such
as total_tokens and finish_reason. The completion from the response object
will be similar to the following completion:
"Azure OpenAI is a cloud-based artificial intelligence (AI) service that offers
a range of tools and services for developing and deploying AI applications.
Azure OpenAI provides a variety of services for training and deploying
machine learning models, including a managed service for training and
deploying deep learning models, a managed service for deploying machine
learning models, and a managed service for managing and deploying
machine learning models."
Exercise - Integrate Azure OpenAI into your app
Knowledge check
1. What resource values are required to make requests to your Azure OpenAI resource?
a) Chat, Embedding, and Completion
b) Key, Endpoint, and Deployment name
c) Summary, Deployment name, and Endpoint

2. What are the three available endpoints for interacting with a deployed Azure OpenAI model?
a) Completion, ChatCompletion, and Translation
b) Completion, ChatCompletion, and Embeddings
c) Deployment, Summary, and Similarity

3. What is the best available endpoint to model the next completion of a conversation in Azure OpenAI?
a) ChatCompletion
b) Embeddings
c) TranslateCompletion
