02 Build Natural Language Solutions With Azure OpenAI Service
• This module guides you through how to build Azure OpenAI into your own
application, giving you a starting point for developing solutions with
generative AI.
Integrate Azure OpenAI into your app
• Azure OpenAI offers both C# and Python SDKs and a REST API that developers
can use to add AI functionality to their applications.
• The models available in the Azure OpenAI service belong to different families,
each with their own focus.
• To use one of these models, you need to deploy through the Azure OpenAI
Service.
Create an Azure OpenAI resource
• An Azure OpenAI resource can be deployed through both the Azure command
line interface (CLI) and the Azure portal.
• Creating the Azure OpenAI resource through the Azure portal is similar to
deploying individual Azure AI Services resources; Azure OpenAI is part of
Azure AI Services.
• Enter the appropriate values for the empty fields, and create the resource.
• The possible regions for Azure OpenAI are currently limited. Choose the region
closest to your physical location.
• Once the resource has been created, you'll have keys and an endpoint that
you can use in your app.
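Once you have the key and endpoint, your app can assemble the request URL for a deployed model. A minimal sketch, assuming the endpoint and key are supplied via environment variables (the variable names here are hypothetical; use whatever your own configuration provides):

```python
import os

# Hypothetical environment variable names -- substitute whatever your
# deployment pipeline uses to supply the values from the Azure portal.
endpoint = os.environ.get("AZURE_OAI_ENDPOINT", "https://my-resource.openai.azure.com/")
api_key = os.environ.get("AZURE_OAI_KEY", "<your-key>")

def build_chat_url(endpoint: str, deployment: str,
                   api_version: str = "2023-03-15-preview") -> str:
    """Assemble the REST URL for a chat completions request."""
    return (f"{endpoint.rstrip('/')}/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

print(build_chat_url(endpoint, "my-gpt-35-deployment"))
```

The same URL shape appears in the curl example later in this module.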
Choose and deploy a model
• Each model family excels at different tasks, and the models within each
family differ in capability. The three main families are:
• Generative Pre-trained Transformer (GPT) - Models that understand
and generate natural language and some code. These models are best at
general tasks, conversations, and chat formats.
• Code (gpt-3 and earlier) - Code models are built on top of GPT models
and trained on millions of lines of code. They can understand and
generate code, including interpreting comments or natural language to
generate code. gpt-35-turbo and later models include this code functionality
without the need for a separate code model.
• Embeddings - Models that convert text into numeric vectors, a format
easily consumed by other machine learning models.
• For older models, the model family and capability are indicated in the name of
the base model, such as text-davinci-003, which specifies that it's a text
model, with davinci-level capability, version 3.
• More recent models specify which GPT generation they belong to, and whether
they are the turbo version, such as gpt-35-turbo representing the GPT 3.5 Turbo model.
• To deploy a model for you to use, navigate to the Azure OpenAI Studio and go
to the Deployments page. The lab later in this module covers exactly how to
do that.
Authentication and specification of deployed model
• When you deploy a model in Azure OpenAI, you choose a deployment name to
give it.
• When configuring your app, you specify your resource endpoint, key, and
deployment name to identify which deployed model to send your request to.
• This enables you to deploy various models within the same resource, and
make requests to the appropriate model depending on the task.
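One way to sketch that routing: keep a small mapping from task to deployment name, and look up the deployment to target per request. The deployment names below are hypothetical; substitute the names you chose in Azure OpenAI Studio.

```python
# Hypothetical deployment names within a single Azure OpenAI resource.
DEPLOYMENTS = {
    "chat": "my-gpt-35-turbo",
    "embeddings": "my-text-embedding-ada-002",
}

def deployment_for(task: str) -> str:
    """Pick the deployment to target based on the task at hand."""
    try:
        return DEPLOYMENTS[task]
    except KeyError:
        raise ValueError(f"No deployment configured for task: {task}")

print(deployment_for("chat"))  # my-gpt-35-turbo
```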
Prompt engineering
• How the input prompt is written plays a large part in how the AI model will
respond.
• The model will attempt to answer even a vague prompt, but if you give it more
details about what you want in your response, you get a more specific answer.
For example, given this prompt:
Classify the following news headline into 1 of the following categories: Business, Tech, Politics,
Sport, Entertainment
Headline 1: Donna Steffensen Is Cooking Up a New Kind of Perfection. The Internet’s most
beloved cooking guru has a buzzy new book and a fresh new perspective
Category: Entertainment
• Several examples similar to this one can be found in the Azure OpenAI Studio
Playground, under the Examples dropdown.
• Try to be as specific as possible about what you want in response from the
model, and you may be surprised at how insightful it can be!
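The classification prompt above can be packaged as chat messages from your app. A minimal sketch (the helper function and its instruction wording are illustrative, not part of the module):

```python
# Sketch: build chat messages for the headline-classification prompt.
# More specific instructions tend to produce more specific answers.
def build_classification_messages(headline: str) -> list[dict]:
    categories = "Business, Tech, Politics, Sport, Entertainment"
    return [
        {"role": "system",
         "content": f"Classify the following news headline into 1 of the "
                    f"following categories: {categories}. "
                    f"Reply with the category only."},
        {"role": "user", "content": f"Headline: {headline}"},
    ]

messages = build_classification_messages(
    "Donna Steffensen Is Cooking Up a New Kind of Perfection.")
print(messages[1]["content"])
```

The resulting list is what you would pass as the `messages` parameter in a ChatCompletion request.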
Available endpoints
• Azure OpenAI can be accessed via a REST API or an SDK currently available for
Python and C#.
• The endpoints available for interacting with a deployed model are used
differently, and certain endpoints can only use certain models. The available
endpoints are:
• Completion - the model takes an input prompt and generates one or more
predicted completions.
• ChatCompletion - the model takes input in the form of a chat conversation
(with roles specified for each message) and generates a completion.
• Embeddings - the model takes input text and returns a vector representation
of it.
• This unit covers example usage, input and output from the API.
• For each call to the REST API, you need the endpoint and a key from your
Azure OpenAI resource, and the name you gave for your deployed model.
curl https://YOUR_ENDPOINT_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-03-15-preview \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{"messages":[{"role": "system", "content": "You are a helpful assistant, teaching people about AI."},
    {"role": "user", "content": "Does Azure OpenAI support multiple languages?"},
    {"role": "assistant", "content": "Yes, Azure OpenAI supports several languages, and can translate between them."},
    {"role": "user", "content": "Do other Azure AI Services support translation too?"}]}'
Use Azure OpenAI REST API
Chat completions
{
  "id": "chatcmpl-6v7mkQj980V1yBec6ETrKPRqFjNw9",
  "object": "chat.completion",
  "created": 1679001781,
  "model": "gpt-35-turbo",
  "usage": {
    "prompt_tokens": 95,
    "completion_tokens": 84,
    "total_tokens": 179
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Yes, other Azure AI Services also support translation. Azure AI Services offer translation between multiple languages for text, documents, or custom translation through Azure AI Services Translator."
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
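Your app can pull the generated text and token usage out of a response shaped like the one above. A short sketch using a trimmed-down copy of that JSON:

```python
import json

# A trimmed stand-in for the chat completions response shown above.
raw = """{
  "model": "gpt-35-turbo",
  "usage": {"prompt_tokens": 95, "completion_tokens": 84, "total_tokens": 179},
  "choices": [
    {"message": {"role": "assistant",
                 "content": "Yes, other Azure AI Services also support translation."},
     "finish_reason": "stop",
     "index": 0}
  ]
}"""

response = json.loads(raw)
# The generated text lives under choices[0].message.content.
answer = response["choices"][0]["message"]["content"]
total_tokens = response["usage"]["total_tokens"]
print(answer)
print(total_tokens)
```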
• REST endpoints allow for specifying other optional input parameters, such
as temperature, max_tokens and more.
• If you'd like to include any of those parameters in your request, add them to the
input data with the request.
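For example, a request body extended with `temperature` and `max_tokens` might look like this. The parameter values are illustrative, not recommendations:

```python
import json

# The same chat request body as in the curl example, extended with
# optional parameters.
body = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support multiple languages?"},
    ],
    "temperature": 0.7,   # higher values make output more varied
    "max_tokens": 120,    # cap on tokens generated in the reply
}
payload = json.dumps(body)  # serialized body to send with the request
print(payload)
```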
Embeddings
• Embeddings convert text into numeric vectors, a format that is easily
consumed by machine learning models.
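Once you have embedding vectors back from the Embeddings endpoint, cosine similarity is a common way to compare them. A sketch with toy three-dimensional vectors standing in for real embeddings (which have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy vectors standing in for embedding output.
print(cosine_similarity([1.0, 0.0, 0.5], [1.0, 0.1, 0.4]))
```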
• The same functionality is available through both the REST API and the SDKs.
• For both SDKs covered in this unit, you need the endpoint and a key from your
Azure OpenAI resource, and the name you gave your deployed model:
• YOUR_API_KEY - found in the Keys & Endpoint section of your resource in the
Azure portal. You can use either key for your resource.
• YOUR_DEPLOYMENT_NAME - the name you provided when you deployed your model in
the Azure OpenAI Studio.
Use Azure OpenAI SDK
Install libraries
• The C# SDK is a .NET adaptation of the REST APIs, built specifically for Azure
OpenAI; however, it can also be used to connect to non-Azure OpenAI endpoints.
• The necessary parameters are the endpoint, key, and the name of your
deployment, which is passed as the model (called the engine in older SDK
versions) when sending your prompt.
• Add the library to your app, and set the required parameters for your client.
Call Azure OpenAI resource
• Once you've configured your connection to Azure OpenAI, send your prompt
to the model.
# Configure the client with your resource values, then send the prompt.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR_ENDPOINT_NAME.openai.azure.com/",
    api_key="YOUR_API_KEY",
    api_version="2023-03-15-preview",
)
deployment_name = "YOUR_DEPLOYMENT_NAME"

response = client.chat.completions.create(
    model=deployment_name,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Azure OpenAI?"}
    ]
)
generated_text = response.choices[0].message.content