Use Natural Language & Prompts with AI Models | Azure OpenAI Service
Published Dec 14, 2022

For your next application, leverage large-scale generative AI models with a deep understanding of language and code using the Azure OpenAI Service. Interact with models using natural language, prompts, and few-shot learning. Use the Azure OpenAI Studio to experiment with and test your models before bringing them into your code to deliver differentiated app experiences, all with Azure's enterprise-grade security built in.

 


 

Build new experiences with models:
• GPT-3 generates content based on natural language input 
• Codex translates natural language instructions directly into code 
• DALL-E 2 generates realistic images and art from natural language descriptions 

 

Pablo Castro, Distinguished Engineer and part of the Azure AI team, joins Jeremy Chapman for an in-depth look at Azure’s OpenAI service.

 

Realistic images from natural language descriptions with DALL-E 2. 


See how to build new experiences with the Microsoft Designer app and Azure OpenAI Service.

 

Use prompts and few-shot learning to interact with an app. 


Codex’s knowledge of the world can enrich the experience — watch the demo using Azure OpenAI Service...

 

Build, test, refine — then bring prompts into your custom apps. 


See how to experiment with The Playground in the Azure OpenAI Studio.

 

Watch our video here.

 

QUICK LINKS: 

00:00 — Introduction 

01:06 — Azure’s OpenAI service 

02:44 — Practical examples of OpenAI 

05:23 — Integrate OpenAI models into everyday apps 

09:32 — Building a custom app from scratch 

11:57 — The OpenAI Studio in Azure 

13:41 — The Playground 

16:02 — Wrap up

 

Link References: 

Check out Responsible AI principles at https://aka.ms/AIprinciples 

Start with Designer preview at https://designer.microsoft.com 

Sign up for Azure OpenAI at https://aka.ms/oai/access

 

Unfamiliar with Microsoft Mechanics? 

Microsoft Mechanics is Microsoft’s official video series for IT. Watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

 

To keep getting this insider knowledge, join us on social media.


Video Transcript:

Jeremy Chapman (00:02):
Up next, we take an inside look at Azure’s OpenAI Service that lets you leverage large-scale generative AI models based in Azure that have a deep understanding of language and code as you build new applications. Now we’ll unpack the core concepts for interacting with models using prompts, and demonstrate how you can use Azure’s OpenAI Studio to experiment with and test your models before bringing them into your code to deliver differentiated app experiences, all with Azure’s enterprise-grade security for your app’s foundation. And joining me today for an in-depth look at Azure’s OpenAI Service is Pablo Castro, Distinguished Engineer on Azure’s AI team. Welcome back to Mechanics.

 

Pablo Castro (00:39):
It’s great to be back, and in the studio this time.

 

Jeremy Chapman (00:41):
Thanks so much for joining us back on Mechanics. So OpenAI, it’s really started to gain momentum over the past couple of years, and it provides this really large, extremely rich set of base AI models that can accelerate the process of constructing natural language powered experiences that can, for example, greatly improve how you interact with your apps and extract knowledge from data. Now this is an exciting area that Microsoft has invested a lot in. So what are we solving for then with Azure’s OpenAI Service?

 

Pablo Castro (01:08):
There’s definitely a ton of potential, and we see lots of applications powered by Azure OpenAI today. From a developer’s perspective, the thing that excites me the most is how it helps build new experiences with several cutting-edge models: GPT-3, a large language model that generates content based on natural language input; Codex, which can translate natural language instructions directly into code; and DALL-E 2, a new model that generates realistic images and art from natural language descriptions. And when we combine OpenAI with Azure into the Azure OpenAI Service, you get the core building blocks you need for production-grade applications, from how we host OpenAI at scale for you in Azure to enterprise-level security. We’re also giving you a great environment to try things out and iterate. The Azure OpenAI Studio lets you experiment and test your ideas with OpenAI before bringing them into your code.

 

(02:02):
And when you’re done experimenting and you know what you want, you can call the service from your code just like any other REST API. And security’s built in, with capabilities spanning strong authentication, role-based access control, and the ability to configure virtual networks and private endpoints as you would for any Azure service. We’ve also incorporated tools for supporting responsible AI, with content filtering in the service. For example, if your input includes inappropriate content, the Azure OpenAI Service would catch it. And this works for any generated output too. You can use this to detect and mitigate harmful use following responsible AI principles, which you can learn more about at aka.ms/AIprinciples.
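To give a concrete sense of what calling the service as a REST API looks like, here is a minimal sketch in Python. The resource name, deployment name, prompt, and API version shown are placeholder assumptions, not values from the video:

```python
import os

import requests

# Hypothetical values; substitute your own Azure OpenAI resource, deployment, and key.
RESOURCE = "my-openai-resource"            # Azure OpenAI resource name
DEPLOYMENT = "my-gpt3-deployment"          # deployment created in the Azure OpenAI Studio
API_KEY = os.environ["AZURE_OPENAI_KEY"]   # keep the key out of source code

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/completions?api-version=2022-12-01"
)

response = requests.post(
    url,
    headers={"api-key": API_KEY},
    json={
        "prompt": "Write a one-sentence summary of recent renewable energy trends.",
        "max_tokens": 100,
    },
)
response.raise_for_status()

# The completion text comes back in the first choice of the response payload.
print(response.json()["choices"][0]["text"])
```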

 

Jeremy Chapman (02:44):
So can you give a few examples of how we’re using OpenAI in apps and services today?

 

Pablo Castro (02:48):
Sure. OpenAI actually powers a number of experiences at Microsoft today. For example, Copilot in the Power Platform enables users to author Power Fx commands or even create automation flows. Another great example is the Designer app we recently announced, which uses the DALL-E 2 model to interpret your text descriptions to generate images and even artwork. Let me show you. I’m here in the Designer preview. You can find this at designer.microsoft.com. I’m going to add a design headline, say DALL-E 2, and here I can generate an image from natural language. I’ll type grass field with trees, and you’ll see Designer gives me some nice recommendations to augment it, but I’ll keep things simple and just add cows. Now if I want to change the look, I can add as a cartoon, and that’s starting to look closer to what I want. But I’ll change the cartoon to a blocky video game, and that’s exactly what I wanted.

 

(03:43):
Now I just need to choose a design. I pick this one, and you’ll see our progression of images. And just to see a little bit more detail, I’ll select this second one, so I can zoom into the middle of the screen and now you can see more of a closeup view. These were all generated on demand. Designer interpreted what I typed, and it created new images each time I iterated. So these apps aren’t just putting an app frame around the API. OpenAI models like this can help you build more dynamic, interactive and differentiated experiences.

 

Jeremy Chapman (04:13):
And that’s really an important point because we’re not training models in the traditional AI machine learning sense, we’re just interacting with them using text. So can you explain a bit more behind that concept?

 

Pablo Castro (04:24):
Yes. These types of models use text for interaction. They have been trained on huge amounts of written text and can use their accumulated knowledge to perform a number of tasks directly. We call the text input to a model a prompt. The simplest form of this interaction pattern is to give a model a string of text, or a prompt, and ask it to complete it. It could be a partial statement, a question, or anything you want the model to add to. In cases where the model has enough context to generate good outputs, that might be all you need. This is sometimes called zero-shot because the model responds with no additional input or training. That said, sometimes you do want to guide the model a bit more. You can do this by giving it examples of how you want it to behave, such as question-answer pairs, and finishing the prompt with the one you want it to complete. This still requires no actual updates to the weights in the model. The examples are learned on the spot, so this is often referred to as few-shot learning.
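To make the distinction concrete, here are two illustrative prompts written as Python string constants; the examples themselves are made up for illustration, not taken from the video:

```python
# Zero-shot: the model completes the prompt with no examples to learn from.
zero_shot_prompt = "Translate to French: Where is the nearest train station?"

# Few-shot: a couple of example question-answer pairs show the model the pattern,
# and the prompt ends with the question we actually want it to complete.
few_shot_prompt = """Q: What is the capital of France?
A: Paris

Q: What is the capital of Japan?
A: Tokyo

Q: What is the capital of Canada?
A:"""
```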

 

Jeremy Chapman (05:23):
So how easy is it then, to integrate OpenAI as part of everyday apps?

 

Pablo Castro (05:27):
It’s super straightforward. OpenAI can be used to create many new experiences that were impossible before. My favorite example is how it can transform the entire surface area of an app into something you can converse with using natural language. Let’s use Minecraft, the game, as an example. And if you play Minecraft, you might be familiar with the text commands you can make during gameplay to manipulate the world around you. The challenge is that there are a lot of commands. I’ll run /help to see all the possible commands. Here you can see there are dozens of different commands that you would need to memorize, and this is in addition to the more than a thousand different materials and lots of different entities. One command is /summon, which is used to spawn a new entity into the world, from cows to sheep, and then /fill to build structures using blocks, and so on.

 

(06:15):
The nice thing about Minecraft is that it’s extensible, so you can write custom mods. What I’m going to show you is how we created a mod called conjure, which acts as a plug-in so that when you write a natural language message, it’ll pass that command to the OpenAI Service in Azure using the Codex model. This interprets the intent and finally passes back the generated command that Minecraft can natively understand. So let’s try this out for real in Minecraft. I’ll ask it to make a cow, and there’s our cow. It converted my request to create one. That was pretty easy though, because it just converted my command to /summon cow. Now, let’s be less precise and type make an animal that goes moo, and OpenAI knows about the world and it was able to translate my intent using the sound a cow makes to tell Minecraft to summon a cow.

 

(07:04):
And beyond the summon commands, it can also interpret other requests. I’ll ask it to make a transparent wall and there it is. Even though transparent isn’t part of a block name, Codex knows that glass is transparent, and it connected the dots for us. It executed the fill command and as you can see, it even came up with relative coordinates. So what we just saw is how we can use OpenAI not only to let you interact using natural language with an application, but also how the model’s knowledge of the world in general can really enrich the experience.

 

Jeremy Chapman (07:37):
And this is really a great example of the possibilities for interacting with apps using natural language. Now aside from building the mod itself, was there anything else you had to do?

 

Pablo Castro (07:46):
Remember how we discussed the importance of prompts before? For this, I needed a very simple prompt. It asks Codex to translate natural language into Minecraft commands. At the top, I made a statement in English about my intent, and then I gave it a few examples, such as building walls using relative coordinates and summoning animals. This is like a few-shot scenario. You’ll see that even though I defined that make a fish in my case translates to /summon tropical fish, it could take that tiny piece of information and generalize to all similar cases, even if the details vary a lot. So these prompts apply to all the animals in Minecraft, even the sounds they make or other indirect ways you might use to refer to them. This really shows the power of few-shot learning with the OpenAI Service. This whole thing was just a few hours of work with my kids.
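The video doesn’t show the actual prompt, but based on this description, a few-shot prompt for translating natural language into Minecraft commands might look roughly like the sketch below. The instruction wording, examples, and coordinates are a hypothetical reconstruction, not the mod’s real prompt:

```python
# Hypothetical reconstruction of a few-shot "natural language -> Minecraft command" prompt.
CONJURE_PROMPT = """Translate the player's request into a single Minecraft command.

Request: make a fish
Command: /summon tropical_fish

Request: build a stone wall five blocks wide and three blocks high in front of me
Command: /fill ~-2 ~0 ~2 ~2 ~2 ~2 stone

Request: {player_request}
Command:"""


def build_prompt(player_request: str) -> str:
    """Insert the player's natural-language request at the end of the few-shot prompt."""
    return CONJURE_PROMPT.format(player_request=player_request)
```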

 

Jeremy Chapman (08:34):
And it really shows how just a little bit of guidance actually gets you the outcomes that you want using your prompt.

 

Pablo Castro (08:39):
Exactly. And these models are already in use out there. For example, Codex powers the GitHub Copilot capability that helps developers write code faster. Let me show you. Here’s a portion of the code behind my Minecraft mod in Visual Studio Code, and I have the GitHub Copilot extension enabled. The mod is written in Java, and this is the part that takes the text that the player wrote and sends it to OpenAI. So here I have the HTTP request that calls the OpenAI Service in Azure. What’s missing is actually sending the request and then processing the response. But I can use Copilot to write that code. Here, Copilot is predicting this entire line of code. I just need to hit tab, and I can keep going. And just by hitting the tab key a few times, Copilot wrote all the code we needed, from login to parsing the JSON response and producing a result.

 

Jeremy Chapman (09:29):
And I’ve got to say, that’s super impressive just using the tab key in that case. So can you show us how you might put all this to work with an app that you might build from scratch?

 

Pablo Castro (09:36):
If you’re building a custom app, you just need a few lines of code to call the OpenAI Service. Let’s take the example of a green energy company. We need to keep up to date on the latest trends and insights, which requires us to build an understanding across lots of unstructured data sitting in documents from various sources. I’ll explain how we’ve architected our app first, and then I’ll show you how it works and what we did. In this example, the user can ask the app a question using natural language. The app then calls Azure Cognitive Search to discover text-based documents related to that question. Search ranks the documents and returns the top candidates, but to answer some types of questions, we’ll need data across several of these top documents. So it passes the top candidates to the Azure OpenAI Service, which reads and understands these top documents, leveraging the OpenAI GPT-3 model to generate an answer using information spread across the relevant documents, which is then passed back to our application.

 

(10:31):
So let me show you this in action. I’ll start in the browser, and you’re looking at the simple app, but the real beauty is in the power of information it can bring to you. I’ll do a quick search for renewable sources expected and actual for 2022. And it finds a few relevant documents each containing parts of the answer. And at the top you’ll see the generated answer that grabs elements from multiple documents like 22% from the first doc and first half of 2022 from the second doc. Notice there is a third unrelated doc that was not included in the summary, because GPT-3 realized it was not related to the question. Now I’ll show you how easy it is to call OpenAI and generate the cross-document answer which you saw. I’m in Visual Studio and here are a few lines of code that do all of the work.

 

(11:17):
First, I take the user query and send it to Azure Cognitive Search. Then I take the results from search, which are multiple documents and I construct a prompt from them using a template. Finally, I send this prompt to OpenAI to generate the summary answer. So with very little code we could leverage OpenAI. And the same process could be used to very quickly solve other problems like entity extraction or summarization that otherwise would’ve required you to pick, train, and deploy custom machine learning models. Here you more or less just build a prompt and you’re done.
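Put together, those three steps might look something like this sketch using plain REST calls. The endpoint, index, deployment, and field names, as well as the prompt template, are illustrative assumptions rather than the code shown in the demo:

```python
import os

import requests

# Hypothetical endpoints and names; substitute your own services.
SEARCH_ENDPOINT = "https://my-search-service.search.windows.net"
SEARCH_INDEX = "energy-docs"
OPENAI_ENDPOINT = "https://my-openai-resource.openai.azure.com"
OPENAI_DEPLOYMENT = "my-gpt3-deployment"

# Hypothetical prompt template: the retrieved passages and the user's question
# are substituted in before the prompt is sent to the model.
PROMPT_TEMPLATE = """Answer the question using only the statements below.

Statements:
{statements}

Question: {question}
Answer:"""


def answer_question(question: str) -> str:
    # 1. Send the user query to Azure Cognitive Search and keep the top candidates.
    search_resp = requests.post(
        f"{SEARCH_ENDPOINT}/indexes/{SEARCH_INDEX}/docs/search"
        "?api-version=2021-04-30-Preview",
        headers={"api-key": os.environ["SEARCH_KEY"]},
        json={"search": question, "top": 3},
    )
    search_resp.raise_for_status()
    # Assumes the index stores the document text in a field named "content".
    passages = [doc["content"] for doc in search_resp.json()["value"]]

    # 2. Construct a prompt from the top documents using the template.
    prompt = PROMPT_TEMPLATE.format(statements="\n\n".join(passages), question=question)

    # 3. Send the prompt to Azure OpenAI to generate the cross-document answer.
    openai_resp = requests.post(
        f"{OPENAI_ENDPOINT}/openai/deployments/{OPENAI_DEPLOYMENT}/completions"
        "?api-version=2022-12-01",
        headers={"api-key": os.environ["AZURE_OPENAI_KEY"]},
        json={"prompt": prompt, "max_tokens": 200, "temperature": 0},
    )
    openai_resp.raise_for_status()
    return openai_resp.json()["choices"][0]["text"].strip()
```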

 

Jeremy Chapman (11:49):
Right. I can really see how useful this would be across any number of research-based activities. That said, though, in this case, you actually knew what to do. So if someone’s watching and needed to start from scratch, where would you even begin?

 

Pablo Castro (12:01):
The Azure OpenAI Studio is a great place to start. It lets you experiment with prompts before you build them into your apps. To use the OpenAI Studio, first you’ll need an Azure subscription, and you’ll need to apply for access to Azure OpenAI using the form at aka.ms/oai/access. After you create an OpenAI resource, it’ll link you directly to the OpenAI Studio. In the studio, you’ll start by creating a deployment. You can think of a deployment as your own service endpoint for OpenAI that your application calls. In my case I’ll use an existing deployment, so we’ll skip that. You can see I already have two deployments, one using DaVinci2 for natural language generation, and another one using Codex for code generation. Going back to the home screen, you’ll see the studio offers a few examples to get started: summarizing text, classifying text, natural language to SQL, and generating new product names.

 

(12:55):
In the studio, we also have the Playground, so let me head over there. First I’ll choose one of the deployments I created. I’ll choose DaVinci2. Then I’ll select an example from the dropdown menu, and I’ll start with this example for generating a new product name. You can see the example uses a few-shot prompt with a product description and seed words to help it understand and generate results. In this example, we’re looking to name shoes that can fit any foot size, with seed words like adaptable, fit, and omni-fit. When I generate the results, it comes up with omni-fit shoes, perfect fit shoes, and all fit shoes. And so you can see how this can be used to help people with their creative process.

 

Jeremy Chapman (13:34):
How would you then go from these examples that you just showed to maybe the OpenAI search app that you showed before?

 

Pablo Castro (13:41):
Well, the Playground helps you test and iterate on your prompts before you incorporate them into your custom application. Let’s use the summarize text example as a starting point. This example prompt is about neutron stars. By looking at the structure of the prompt, I can see the pattern they follow and learn from it. We’re using the DaVinci model, which is the largest of the OpenAI GPT-3 models and is optimized for text understanding. Now you can see this long paragraph of text as input. The example adds TL;DR at the end to suggest to the model that this should be completed with a short summary. Let’s run it. And here, you see the generated output about a neutron star. So I’m giving OpenAI instructions for what to do in plain English. Coming back to the cross-document summarization example, let’s guide the model to read a set of statements so it can answer a question based on those statements, just like a human would.
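As an aside, the TL;DR pattern described above is just a prompt that ends with a cue for a short summary. A minimal sketch of the same idea, with a made-up passage standing in for the Playground example, might look like this:

```python
passage = (
    "Neutron stars are the collapsed cores of massive stars. They pack more mass "
    "than the Sun into a sphere roughly the size of a city, making them among the "
    "densest objects known."
)

# Ending the prompt with "Tl;dr:" cues the model to complete it with a short summary.
summarization_prompt = passage + "\n\nTl;dr:"
```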

 

(14:32):
If I go to Visual Studio and grab my prompt and paste it here, you’ll see our prompt, including the placeholders our code replaces at runtime in the app. This is real content, so it’s pretty long and hard to follow. So instead of running that one, let me just paste a shorter version that has the same structure, so we can more easily see what’s going on. I’ll ask it to complete the task, which is to answer the question about the percentage of homes with solar. And you’ll see a detailed answer with a few regions called out. The answer was accurate, but I’d like to see a shorter version of the answer. And this is where I can provide an example answer to further refine the results for the style that I’m looking for. So first, I’ll undo the last run to clear the answer. Now I’ll paste in the same statement from before, but you’ll see that this is a different question with a very concise answer.

 

(15:20):
From there, I can generate a new result for the first question, and you’ll see that it learned from our example and returned a very concise answer. So the Playground in the Azure OpenAI Studio helps you experiment and iterate. You can test your prompts for accuracy, refine them, and then bring them into your custom applications. And if that isn’t precise enough, one thing that I didn’t have time to demonstrate today is Azure OpenAI’s support for fine-tuning, where you can specialize one of the base models with additional data that you provide. So you have the full spectrum from zero-shot to creating your own fine-tuned models.

 

Jeremy Chapman (15:56):
And all of what you’ve shown today, to be clear, is completely grounded in security and responsible AI. So now that you’ve gone hands-on with Azure’s OpenAI Service, can you explain for everyone watching how they might get started and learn more?

 

Pablo Castro (16:09):
The best way to get started is by trying it out. We’ve made it easy to sign up for the service at aka.ms/oai/access. And from there you can start using Azure’s OpenAI Studio.

 

Jeremy Chapman (16:20):
Thanks so much, Pablo, for joining us today and showing us some highlights of what Azure OpenAI can do. Of course, keep checking back to Microsoft Mechanics for all the latest updates, subscribe to our channel if you haven’t already. And as always, thank you for watching.

 
