Bringing OpenAI into an Outlook add-in: moving to Azure OpenAI
Published Feb 17 2023 03:22 AM

In the previous post, we learned how to use AI to help users write better business mails. We developed an add-in for Outlook that, using the AI models and APIs provided by OpenAI, generates a business mail starting from one or two sentences and, using the Office SDK, inserts it into the body of the mail.


For this sample project, we directly leveraged the APIs provided by OpenAI: we created an account on their website and obtained the API key required to handle authentication.

But what if you're an enterprise customer already using Azure to host many of your applications and services? It would be great if there were a way to use the powerful OpenAI models while also leveraging the security, privacy, and billing features provided by Azure.


Well, this way actually exists: it's the Azure OpenAI service, which is part of the Cognitive Services family. Thanks to a partnership between OpenAI and Microsoft, the exact same models you can use with the OpenAI APIs are available on Azure. They work in a very similar way; the main difference is that the whole hosting infrastructure is provided by Azure, with all its benefits around scaling and security.


In this post, we're going to replace the OpenAI API implementation from last time with a new one based on Azure OpenAI. Let's start!


Getting access to Azure OpenAI

The starting point is, of course, an Azure subscription. However, wait before you rush into code, because there's a catch, and for a very good reason: Microsoft is pushing for a responsible and ethical approach to AI, to avoid many of the potential misuses and biases of this technology.


As such, before you can create a new Azure OpenAI service on your tenant, you must submit a nomination form with some important information: who you are, what you're trying to build, the company you work for, the ID of the subscription where you would like to host the service, and so on. Once you have submitted the form, you will get a response within 10 days. If the request is approved, you will be able to open the Azure portal and start creating new instances of the Azure OpenAI service.

Creating the service is quite simple, since there aren't many options: you choose the subscription, the resource group, the location, the pricing tier, and the name.




Once the service has been created, you can deploy one or more AI models, depending on the scenario you're building. Move to the Model deployments section and click Create. Give the deployment a name and, in the Model dropdown, you will find an extensive list of the available OpenAI models. Choose text-davinci-003, which is the same model we used in the previous post.


This is all we need to do to start using the model in our application.



Using the model from code

Using the model we have just deployed is quite simple, since we just need to perform an HTTP request against the endpoint of our Azure OpenAI instance. We're going to work on the task pane implementation, specifically the App.tsx file, which contains the whole logic. In the previous post, we created in the App component a function called generateText(), which passes the text provided by the user to the OpenAI APIs using the OpenAI JavaScript library.

Let's see the new implementation that, instead, connects to the Azure OpenAI service we have just deployed:


generateText = async () => {
  const current = this;

  const apiKey = "your-api-key";
  const endpoint = "your-url";
  const prompt = "Turn the following text into a professional business mail: " + this.state.startText;
  const deploymentName = "your-deployment-name";

  const url = endpoint + "openai/deployments/" + deploymentName + "/completions?api-version=2022-12-01";

  const payload = {
    prompt: prompt,
    max_tokens: 1000,
    temperature: 0.7,
  };

  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": apiKey,
    },
    body: JSON.stringify(payload),
  });

  const data = await response.json();
  current.setState({ generatedText: data.choices[0].text });
};

At the beginning of the function, we initialize all the information we need to connect to our instance of Azure OpenAI:

  • apiKey, which can be retrieved from the Azure portal, under the Keys and Endpoint section. Copy the value of KEY 1 and paste it inside the apiKey constant.


    Please note: as with the OpenAI APIs, we're using this approach just for testing purposes. It isn't recommended in production, since it would easily expose the API key. The suggested way is to build a middleware between Azure OpenAI and your add-in (like an Azure Function) and use Azure Key Vault to protect your API key.

  • endpoint, which is the URL of your Azure OpenAI instance. You can find it in the Keys and Endpoint section as well.

  • prompt, which is the text we want to pass to Azure OpenAI. In our case, it's the prompt that describes the outcome we want to achieve (Turn the following text into a professional business mail), followed by the text typed by the user, which is stored in the component's state.

  • deploymentName, which is the name we chose when we deployed the model.
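Instead of hardcoding these values in the component, one option is to inject them at build time from environment variables. The sketch below is a minimal, hypothetical helper (the variable names AZURE_OPENAI_KEY, AZURE_OPENAI_ENDPOINT, and AZURE_OPENAI_DEPLOYMENT are assumptions, not part of the original sample), assuming your bundler can substitute environment values into the bundle:

```typescript
// Hypothetical sketch: the environment variable names below are assumptions.
interface AzureOpenAIConfig {
  apiKey: string;
  endpoint: string;
  deploymentName: string;
}

// Reads the connection settings from an environment-like map and fails
// fast when one of them is missing, instead of sending a broken request.
function loadConfig(env: Record<string, string | undefined>): AzureOpenAIConfig {
  const apiKey = env["AZURE_OPENAI_KEY"];
  const endpoint = env["AZURE_OPENAI_ENDPOINT"];
  const deploymentName = env["AZURE_OPENAI_DEPLOYMENT"];
  if (!apiKey || !endpoint || !deploymentName) {
    throw new Error("Missing Azure OpenAI configuration");
  }
  return { apiKey, endpoint, deploymentName };
}
```

Keep in mind that, as noted above, anything bundled into the task pane is still visible to the client; for production the key should live behind a middleware.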



The next step is to build the full URL we need to communicate with Azure OpenAI, which has the following format:

{endpoint}/openai/deployments/{deployment-name}/completions?api-version=2022-12-01

The request is sent using an HTTP POST, so we need to create a JSON payload with the inputs to include in the body. The only required parameter is prompt, with the text we want to process; in our example, we also pass max_tokens, to specify the maximum length of the text we want back, and temperature (see the previous post for an explanation of this value). You can find the complete list of supported body parameters in the official documentation. In most cases, there's a 1:1 match between the OpenAI and Azure OpenAI parameter formats.
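If you want to experiment with more of the optional parameters, a small helper can keep the defaults in one place. This is a minimal sketch (the buildPayload helper is hypothetical, not part of the original sample); max_tokens, temperature, top_p, and stop are among the parameters the completions API supports:

```typescript
// Hypothetical helper: centralizes the completion payload with defaults
// matching the values used in generateText() above.
interface CompletionPayload {
  prompt: string;
  max_tokens?: number;
  temperature?: number;
  top_p?: number;
  stop?: string[];
}

function buildPayload(prompt: string, overrides: Partial<CompletionPayload> = {}): CompletionPayload {
  // Overrides win over the defaults, so callers can tweak a single value.
  return { prompt, max_tokens: 1000, temperature: 0.7, ...overrides };
}
```

For example, buildPayload(prompt, { temperature: 0 }) would request more deterministic output while keeping the other defaults.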

Finally, we can submit the HTTP request using the standard fetch() API. Since it's not a simple GET operation, we need to pass some extra parameters besides the URL:

  • method, which is POST.
  • headers, since we need to add an extra header called api-key with the API key we have retrieved from the Azure portal.
  • body, which is the payload we have previously created transformed into JSON with the JSON.stringify() function.
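These options can also be assembled in a small helper, which makes the one real difference from the OpenAI APIs easy to see: Azure OpenAI expects the key in an api-key header, while the OpenAI APIs use an Authorization: Bearer header. A minimal sketch (the buildRequestInit helper is hypothetical):

```typescript
// Hypothetical helper: builds the fetch options for an Azure OpenAI call.
// Note the "api-key" header, which replaces the "Authorization: Bearer"
// header used by the OpenAI APIs.
function buildRequestInit(
  apiKey: string,
  payload: object
): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": apiKey,
    },
    body: JSON.stringify(payload),
  };
}
```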

The API will return a response like the following one:


    "id": "cmpl-4kGh7iXtjW4lc9eGhff6Hp8C7btdQ",
    "object": "text_completion",
    "created": 1646932609,
    "model": "ada",
    "choices": [
            "text": ", a dark line crossed",
            "index": 0,
            "logprobs": null,
            "finish_reason": "length"

You can notice that this is the exact same format as the standard OpenAI APIs we used in the previous post. As such, we can retrieve the text generated by the model from the text property of the first item in the choices collection. The rest of the code is the same as what we saw in the previous post: we store the generated text in the component's state, so that it can be displayed in the second box. This way, the user has the chance to edit it and then copy it into the mail's body.
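Since the sample reaches into data.choices[0].text directly, a malformed or error response would surface as a confusing undefined further down. A minimal sketch of a safer extraction step (the extractGeneratedText helper is hypothetical, not part of the original sample):

```typescript
// Minimal shape of the parts of the completion response we actually read.
interface CompletionResponse {
  choices?: { text: string }[];
}

// Hypothetical helper: returns the generated text, trimmed, or throws a
// clear error when the response doesn't have the expected shape.
function extractGeneratedText(data: CompletionResponse): string {
  const text = data.choices?.[0]?.text;
  if (text === undefined) {
    throw new Error("Unexpected response from Azure OpenAI");
  }
  return text.trim();
}
```

In the add-in, the last line of generateText() would then become current.setState({ generatedText: extractGeneratedText(data) }).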


Wrapping up

In this post, we have learned how to leverage the power of the OpenAI models through Azure, which might be a better option for developers and companies who have already invested in this platform and already trust Microsoft. The work has been quite easy: thanks to the simplicity of the Azure OpenAI APIs and their compatibility with the OpenAI ecosystem, we were able to reuse most of the code and the concepts from the previous post.


You can find the updated sample in the same GitHub repository, but on a different branch called azure-openai.


Happy coding!

Version history
Last update: Feb 17 2023 10:49 AM