Apps on Azure Blog

Using OpenAI on Azure Web App

theringe
Nov 25, 2024

On Azure Web Apps, you can use Python's OpenAI package to conveniently and quickly call the official API, upload your training data, and utilize their algorithms for processing. This tutorial provides a step-by-step guide to help you deploy your OpenAI project on an Azure Web App, covering everything from resource setup to troubleshooting common issues.

TOC

  1. Introduction to OpenAI
  2. System Architecture
    • Architecture
    • Focus of This Tutorial
  3. Setup Azure Resources
    • File and Directory Structure
    • ARM Template
    • ARM Template From Azure Portal
  4. Running Locally
    • Training Models and Training Data
    • Predicting with the Model

  5. Publishing the Project to Azure
  6. Running on Azure Web App
    • Training the Model
    • Using the Model for Prediction
  7. Troubleshooting
    • Startup Command Issue
    • App Becomes Unresponsive After a Period
    • az cli commands for Linux WebJobs fail
    • Others
  8. Conclusion
  9. References

1. Introduction to OpenAI

OpenAI is a leading artificial intelligence research and deployment company founded in December 2015. Its mission is to ensure that artificial general intelligence (AGI)—highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. OpenAI focuses on developing safe and scalable AI technologies and ensuring equitable access to these innovations.

 

Known for its groundbreaking advancements in natural language processing, OpenAI has developed models like GPT (Generative Pre-trained Transformer), which powers applications for text generation, summarization, translation, and more. GPT models have revolutionized fields like conversational AI, creative writing, and programming assistance. OpenAI has also released models like Codex, designed to understand and generate computer code, and DALL·E, which creates images from textual descriptions.

 

OpenAI operates with a unique hybrid structure: a for-profit company governed by a nonprofit entity to balance the development of AI technology with ethical considerations. The organization emphasizes safety, research transparency, and alignment with human values. By providing access to its models through APIs and fostering partnerships, OpenAI empowers developers, businesses, and researchers to leverage AI for innovative solutions across diverse industries. Its long-term goal is to ensure AI advances benefit humanity as a whole.


2. System Architecture

Architecture

Development Environment

  • OS: Ubuntu 18.04 (Bionic Beaver)
  • Python Version: 3.7.3

Azure Resources

  • App Service Plan: SKU - Premium Plan 0 V3
  • App Service: Platform - Linux (Python 3.9, Version 3.9.19)
  • Storage Account: SKU - General Purpose V2
  • File Share: No backup plan

 

Focus of This Tutorial

This tutorial walks you through the following stages:

  1. Setting up Azure resources
  2. Running the project locally
  3. Publishing the project to Azure
  4. Running the application on Azure
  5. Troubleshooting common issues

Each of the mentioned aspects has numerous corresponding tools and solutions. The relevant information for this tutorial is listed in the table below.

| Aspect | Options | Used in this tutorial |
| --- | --- | --- |
| Local OS | Windows, Linux, Mac | Linux |
| How to setup Azure resources | Portal (i.e., REST API), ARM, Bicep, Terraform | Portal, ARM |
| How to deploy project to Azure | VSCode, CLI, Azure DevOps, GitHub Action | CLI |


3. Setup Azure Resources

File and Directory Structure

Please open a bash terminal and enter the following commands:

git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
bash ./openai/tools/add-venv.sh

If you are using a Windows platform, use the following alternative PowerShell commands instead:

git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
.\openai\tools\add-venv.cmd

 

After completing the execution, you should see the following directory structure:

 

| File and Path | Purpose |
| --- | --- |
| openai/tools/add-venv.* | The script executed in the previous step (cmd for Windows, sh for Linux/Mac) to create all Python virtual environments required for this tutorial. |
| .venv/openai-webjob/ | A virtual environment dedicated to training models (i.e., calculating embedding vectors). |
| openai/webjob/requirements.txt | The list of packages (with exact versions) required by the openai-webjob virtual environment. |
| .venv/openai/ | A virtual environment dedicated to the Flask application, exposing an API endpoint for querying predictions (i.e., suggestions). |
| openai/requirements.txt | The list of packages (with exact versions) required by the openai virtual environment. |
| openai/ | The main folder for this tutorial. |
| openai/tools/arm-template.json | The ARM template that sets up all the Azure resources related to this tutorial, including an App Service Plan, a Web App, and a Storage Account. |
| openai/tools/create-folder.* | A script that creates all directories required for this tutorial in the File Share, including train, model, and test. |
| openai/tools/download-sample-training-set.* | A script that downloads a sample training set from News-Headlines-Dataset-For-Sarcasm-Detection, containing headline data from TheOnion and HuffPost, into the train directory of the File Share. |
| openai/webjob/cal_embeddings.py | A script that calculates embedding vectors from headlines. It loads the training set, computes embeddings via the OpenAI API, and saves the vectors in the model directory of the File Share. |
| openai/App_Data/jobs/triggered/cal-embeddings/cal_embeddings.sh | A shell script for Azure App Service WebJobs. It activates the openai-webjob virtual environment and starts the cal_embeddings.py script. |
| openai/api/app.py | Code for the Flask application, including routes, port configuration, input parsing, vector loading, prediction, and output generation. |
| openai/start.sh | A script executed after deployment (as specified in the ARM template's Startup Command, introduced later). It sets up the virtual environment and starts the Flask application to handle web requests. |

 

ARM Template

We need to create the following resources or services:

 

| | Manual Creation Required | Resource/Service |
| --- | --- | --- |
| App Service Plan | No | Resource (plan) |
| App Service | Yes | Resource (app) |
| Storage Account | Yes | Resource (storageAccount) |
| File Share | Yes | Service |

Let’s take a look at the openai/tools/arm-template.json file. Refer to the configuration section for all the resources.

Since most of the configuration values don’t require changes, I’ve placed them in the variables section of the ARM template rather than the parameters section. This helps keep the configuration simpler. However, I’d still like to briefly explain some of the more critical settings.

 

As you can see, I’ve adopted a camelCase naming convention, which combines the [Resource Type] with [Setting Name and Hierarchy]. This makes it easier to understand where each setting will be used. The configurations in the diagram are sorted by resource name, but the following list is categorized by functionality for better clarity.

  • storageAccountFileShareName = data-and-model
    [Purpose 1: Link File Share to Web App] Use this fixed name for the File Share.

  • storageAccountFileShareShareQuota = 5120
    [Purpose 1: Link File Share to Web App] The value is in GB.

  • storageAccountFileShareEnabledProtocols = SMB
    [Purpose 1: Link File Share to Web App]

  • appSiteConfigAzureStorageAccountsType = AzureFiles
    [Purpose 1: Link File Share to Web App]

  • appSiteConfigAzureStorageAccountsProtocol = Smb
    [Purpose 1: Link File Share to Web App]

  • planKind = linux
    [Purpose 2: Specify platform and stack runtime] Select Linux (the default if the Python stack is chosen).

  • planSkuTier = Premium0V3
    [Purpose 2: Specify platform and stack runtime] Choose at least a Premium Plan to ensure enough memory for your AI workloads.

  • planSkuName = P0v3
    [Purpose 2: Specify platform and stack runtime] Same as above.

  • appKind = app,linux
    [Purpose 2: Specify platform and stack runtime] Same as above.

  • appSiteConfigLinuxFxVersion = PYTHON|3.9
    [Purpose 2: Specify platform and stack runtime] Select Python 3.9 to avoid dependency issues.

  • appSiteConfigAppSettingsWEBSITES_CONTAINER_START_TIME_LIMIT = 600
    [Purpose 3: Deploying] The value is in seconds, allowing the Startup Command to continue beyond the default timeout of 230 seconds. This tutorial's Startup Command typically takes around 300 seconds, so setting it to 600 seconds provides a safety margin and accommodates future project expansion (e.g., adding more packages).

  • appSiteConfigAppCommandLine = [ -f /home/site/wwwroot/start.sh ] && bash /home/site/wwwroot/start.sh || GUNICORN_CMD_ARGS=\"--timeout 600 --access-logfile '-' --error-logfile '-' -c /opt/startup/gunicorn.conf.py --chdir=/opt/defaultsite\" gunicorn application:app
    [Purpose 3: Deploying] This is the Startup Command, which can be broken down into 3 parts:
    1. First ([ -f /home/site/wwwroot/start.sh ]): checks whether start.sh exists. This determines whether the app is in its initial state (just created) or has already been deployed.
    2. Second (bash /home/site/wwwroot/start.sh): if the file exists, the app has already been deployed; the start.sh script is executed, installing the necessary packages and starting the Flask application.
    3. Third (GUNICORN_CMD_ARGS=\"--timeout 600 --access-logfile '-' --error-logfile '-' -c /opt/startup/gunicorn.conf.py --chdir=/opt/defaultsite\" gunicorn application:app): if the file does not exist, the command falls back to the default HTTP server (gunicorn) to start the web app.
    Since the command is enclosed in double quotes within the ARM template, replace \" with " during actual execution.

  • appSiteConfigAppSettingsSCM_DO_BUILD_DURING_DEPLOYMENT = false
    [Purpose 3: Deploying] Since start.sh already handles the different virtual environments, there is no need to initiate the Web App's default build process.

  • appSiteConfigAppSettingsWEBSITES_ENABLE_APP_SERVICE_STORAGE = true
    [Purpose 4: WebJobs] Required to enable the App Service storage feature, which is necessary for using WebJobs (e.g., for model training).

  • storageAccountPropertiesAllowSharedKeyAccess = true
    [Purpose 5: Troubleshooting] This setting is enabled by default. It is highlighted because certain enterprise IT policies may enforce changes to this configuration after a period, potentially causing a series of issues. For more details, please refer to the Troubleshooting section below.
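Before moving on, the check-and-fallback logic of the Startup Command (appSiteConfigAppCommandLine) can be reproduced locally. This is a minimal sketch only: a temporary directory stands in for /home/site/wwwroot, and an echo stands in for gunicorn.

```shell
# Recreate the Startup Command's "deploy marker" logic.
# A temp directory stands in for /home/site/wwwroot; echo stands in for gunicorn.
WWWROOT=$(mktemp -d)

# Case 1: start.sh does not exist yet (app just created) -> fallback branch runs
[ -f "$WWWROOT/start.sh" ] && bash "$WWWROOT/start.sh" || echo "fallback: starting default gunicorn"

# Case 2: start.sh has been deployed -> it is executed instead
printf '%s\n' 'echo "start.sh: installing packages and starting Flask"' > "$WWWROOT/start.sh"
[ -f "$WWWROOT/start.sh" ] && bash "$WWWROOT/start.sh" || echo "fallback: starting default gunicorn"

rm -rf "$WWWROOT"
```

One caveat of the `A && B || C` form: C also runs if B itself exits non-zero, not only when the file check fails, so start.sh should exit with status 0 on success.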

 

Return to the bash terminal and execute the following commands:

# Please change <ResourceGroupName> to your preferred name, for example: azure-appservice-ai
# Please change <RegionName> to your preferred region, for example: eastus2
# Please change <ResourcesPrefixName> to your preferred naming pattern, for example: openai-arm (it will create openai-arm-asp as the App Service Plan, openai-arm-app as the Web App, and openaiarmsa as the Storage Account)
az group create --name <ResourceGroupName> --location <RegionName>
az deployment group create --resource-group <ResourceGroupName> --template-file ./openai/tools/arm-template.json --parameters resourcePrefix=<ResourcesPrefixName>

If you are using a Windows platform, use the following alternative PowerShell commands instead:

# Please change <ResourceGroupName> to your preferred name, for example: azure-appservice-ai
# Please change <RegionName> to your preferred region, for example: eastus2
# Please change <ResourcesPrefixName> to your preferred naming pattern, for example: openai-arm (it will create openai-arm-asp as the App Service Plan, openai-arm-app as the Web App, and openaiarmsa as the Storage Account)
az group create --name <ResourceGroupName> --location <RegionName>
az deployment group create --resource-group <ResourceGroupName> --template-file .\openai\tools\arm-template.json --parameters resourcePrefix=<ResourcesPrefixName>

 

After execution, copy the 3 key-value pairs from the output section of the result.

 

Return to the bash terminal and execute the following commands:

# Please set up the 3 variables you got from the previous step
OUTPUT_STORAGE_NAME="<outputStorageName>"
OUTPUT_STORAGE_KEY="<outputStorageKey>"
OUTPUT_SHARE_NAME="<outputShareName>"
sudo mkdir -p /mnt/$OUTPUT_SHARE_NAME
if [ ! -d "/etc/smbcredentials" ]; then
    sudo mkdir /etc/smbcredentials
fi
CREDENTIALS_FILE="/etc/smbcredentials/$OUTPUT_STORAGE_NAME.cred"
if [ ! -f "$CREDENTIALS_FILE" ]; then
    sudo bash -c "echo \"username=$OUTPUT_STORAGE_NAME\" >> $CREDENTIALS_FILE"
    sudo bash -c "echo \"password=$OUTPUT_STORAGE_KEY\" >> $CREDENTIALS_FILE"
fi
sudo chmod 600 $CREDENTIALS_FILE
sudo bash -c "echo \"//$OUTPUT_STORAGE_NAME.file.core.windows.net/$OUTPUT_SHARE_NAME /mnt/$OUTPUT_SHARE_NAME cifs nofail,credentials=$CREDENTIALS_FILE,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30\" >> /etc/fstab"
sudo mount -t cifs //$OUTPUT_STORAGE_NAME.file.core.windows.net/$OUTPUT_SHARE_NAME /mnt/$OUTPUT_SHARE_NAME -o credentials=$CREDENTIALS_FILE,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30

Alternatively, you can go to the Azure Portal, navigate to the File Share you just created, and copy the required command as shown in the diagram below. Choose Windows or Mac if that is the OS of your dev environment.

 

After executing the command, the network drive will be mounted. You can verify this with df, as illustrated in the diagram.

 

 

ARM Template From Azure Portal

In addition to using az cli to invoke ARM Templates, if the JSON file is hosted on a public network URL, you can also load its configuration directly into the Azure Portal by following the method described in the article [Deploy to Azure button - Azure Resource Manager]. This is my example.

Click Me

 

After filling in all the required information, click Create.

 

Once the creation process is complete, click Outputs on the left menu to retrieve the connection information for the File Share.

 


4. Running Locally

Training Models and Training Data

In the next steps, you will need to use OpenAI services. Please ensure that you have registered as a member and added credits to your account (Billing overview - OpenAI API); for this example, adding $10 USD will be sufficient. Additionally, you will need to generate a new API key (API keys - OpenAI API); depending on your needs, you may also create a project for future organization (Projects - OpenAI API).

 

After getting the API key, create a text file named apikey.txt in the openai/tools/ folder. Paste the key you just copied into the file and save it.

 

Return to the bash terminal and execute the following commands (their purpose has been described earlier):

source .venv/openai-webjob/bin/activate
bash ./openai/tools/create-folder.sh
bash ./openai/tools/download-sample-training-set.sh
python ./openai/webjob/cal_embeddings.py --sampling_ratio 0.002

If you are using a Windows platform, use the following alternative PowerShell commands instead:

.\.venv\openai-webjob\Scripts\Activate.ps1
.\openai\tools\create-folder.cmd
.\openai\tools\download-sample-training-set.cmd
python .\openai\webjob\cal_embeddings.py --sampling_ratio 0.002

 

After execution, the File Share will now include the following directories and files.

 

Let’s take a brief detour to examine the structure of the training data downloaded from GitHub.

 

The right side of the image explains each field of the data. This dataset was originally used to detect whether news headlines contain sarcasm. However, I am repurposing it for another application. In this example, I will use the "headline" field to create embeddings. The left side displays the raw data, where each line is a standalone JSON string containing the necessary fields.

 

In the code, I first extract the "headline" field from each record and send it to OpenAI to compute the embedding vector for the text. This embedding represents the position of the text in a semantic space (akin to coordinates in a multi-dimensional space). After the computation, I obtain an embedding vector for each headline. Moving forward, I will refer to these simply as embeddings.

 

By the way, the sampling_ratio parameter in the command is something I configured to speed up the training process. The original dataset contains nearly 30,000 records, which would result in a training time of around 8 hours. To simplify the tutorial, you can specify a relatively low sampling_ratio value (ranging from 0 to 1, representing 0% to 100% sampling from the original records). For example, a value of 0.01 corresponds to a 1% sample, allowing you to accelerate the experiment.
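Conceptually, the load-and-sample step looks like the sketch below. The helper name and in-memory input are illustrative (only the "headline" field name comes from the dataset); in the real cal_embeddings.py, each sampled headline would then be sent to the OpenAI embeddings API.

```python
import json
import random

def load_headlines(jsonl_lines, sampling_ratio=1.0, seed=42):
    """Parse one JSON object per line and keep a random sample of the headlines."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    headlines = [json.loads(line)["headline"] for line in jsonl_lines if line.strip()]
    sample_size = max(1, int(len(headlines) * sampling_ratio))
    return rng.sample(headlines, sample_size)

# Each line of the training set is a standalone JSON string.
lines = [
    '{"headline": "scientists discover water", "is_sarcastic": 0}',
    '{"headline": "local man wins argument online", "is_sarcastic": 1}',
]
print(load_headlines(lines, sampling_ratio=0.5))  # a 50% sample: one headline
```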

 

In this semantic space, vectors that are closer to each other often have similar values, which corresponds to similar meanings. In this context, the distance between vectors will serve as our metric to evaluate the semantic similarity between pieces of text. For this, we will use a method called cosine similarity.

 

In the subsequent tutorial, we will construct some test texts. These test texts will also be converted into embeddings using the same method. Each test embedding will then be compared against the previously computed headline embeddings. The comparison will identify the nearest headline embeddings in the multi-dimensional vector space, and their original text will be returned.
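Both steps (cosine similarity and the nearest-neighbor lookup) can be sketched in plain Python. The function names and the toy 2-D vectors below are illustrative; real OpenAI embeddings have hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """cos(theta) = (a . b) / (|a| * |b|)"""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def nearest_headlines(query_emb, headline_embs, k=3):
    """Return the k headlines whose embeddings are most similar to the query."""
    ranked = sorted(headline_embs,
                    key=lambda h: cosine_similarity(query_emb, headline_embs[h]),
                    reverse=True)
    return ranked[:k]

# Toy 2-D "embeddings": the query points in nearly the same direction as headline A.
embs = {"headline A": [0.9, 0.1], "headline B": [0.0, 1.0]}
print(nearest_headlines([1.0, 0.0], embs, k=1))  # ['headline A']
```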

 

Additionally, we will leverage OpenAI's well-known generative AI capabilities to provide a textual explanation. This explanation will describe why the constructed test text is related to the recommended headline.
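That explanation step boils down to prompt construction. The wording below is my own assumption (the repository's actual prompt may differ), and the chat completion call itself is omitted since it requires an API key:

```python
def build_explanation_prompt(query_text, headline):
    """Compose a natural-language instruction asking the model to explain the match."""
    return (
        "You are a helpful assistant.\n"
        f"A user searched for: {query_text!r}\n"
        f"The most similar news headline found was: {headline!r}\n"
        "In one or two sentences, explain why the headline is related to the search text."
    )

prompt = build_explanation_prompt("education", "school budgets cut again")
print(prompt)
```

The returned completion can then be included alongside the recommended headline in the API response.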

Predicting with the Model

Return to the terminal and execute the following commands. First, deactivate the virtual environment used for calculating the embeddings; then activate the virtual environment for the Flask application; finally, start the Flask app.

 

Commands for Linux or Mac:

deactivate
source .venv/openai/bin/activate
python ./openai/api/app.py

Commands for Windows:

deactivate
.\.venv\openai\Scripts\Activate.ps1
python .\openai\api\app.py

 

When you see a screen similar to the following, it means the server has started successfully. Press Ctrl+C to stop the server if needed.

 

Before conducting the actual test, let’s construct some sample query data:

education

Next, open a terminal and use the following curl command to send a request to the app:

curl -X GET http://127.0.0.1:8000/api/detect?text=education

You should see the calculation results, confirming that the embeddings and the generative AI are working as expected.

 

PS: Your results may differ from mine due to variations in the sampling of the training dataset. Additionally, OpenAI's generative content can produce different outputs depending on the timing and context. Please keep this in mind.


5. Publishing the Project to Azure

Return to the terminal and execute the following commands.

 

Commands for Linux or Mac:

# Please change <resourcegroup_name> and <webapp_name> to your own
# Create the Zip file from project
zip -r openai/app.zip openai/*
# Deploy the App
az webapp deploy --resource-group <resourcegroup_name> --name <webapp_name> --src-path openai/app.zip --type zip
# Delete the Zip file
rm openai/app.zip

Commands for Windows:

# Please change <resourcegroup_name> and <webapp_name> to your own
# Create the Zip file from project
Compress-Archive -Path openai\* -DestinationPath openai\app.zip
# Deploy the App
az webapp deploy --resource-group <resourcegroup_name> --name <webapp_name> --src-path openai\app.zip --type zip
# Delete the Zip file
del openai\app.zip

 

PS: WebJobs follow the directory structure of App_Data/jobs/triggered/<webjob_name>/. As a result, once the Web App is deployed, the WebJob is automatically deployed along with it, requiring no additional configuration.


6. Running on Azure Web App

Training the Model

Return to the terminal and execute the following commands to invoke the WebJob.

Commands for Linux or Mac:

# Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; curl -X POST -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01"

Commands for Windows:

# Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own
$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"; "Content-type" = "application/json"} -Method POST -Body '{}'

 

You can check the training status by executing the following commands.

Commands for Linux or Mac:

# Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; response=$(curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01") ; echo "$response" | jq

Commands for Windows:

# Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own
$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv); $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"} -Method GET ; $response | ConvertTo-Json -Depth 10

While the WebJob is running, its status is shown as Processing; once it finishes, the status changes to Complete.

You can retrieve the latest detailed log by executing the following commands.

 

Commands for Linux or Mac:

# Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own
token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; history_id=$(az webapp webjob triggered log --resource-group <resourcegroup_name> --name <webapp_name> --webjob-name cal-embeddings --query "[0].id" -o tsv | sed 's|.*/history/||') ; response=$(curl -X GET -H "Authorization: Bearer $token" -H "Content-Type: application/json" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/history/$history_id/?api-version=2024-04-01") ; log_url=$(echo "$response" | jq -r '.properties.output_url') ; curl -X GET -H "Authorization: Bearer $token" "$log_url"

Commands for Windows:

# Please change <subscription_id> <resourcegroup_name> and <webapp_name> to your own
$token = az account get-access-token --resource https://management.azure.com --query accessToken -o tsv ; $history_id = az webapp webjob triggered log --resource-group <resourcegroup_name> --name <webapp_name> --webjob-name cal-embeddings --query "[0].id" -o tsv | ForEach-Object { ($_ -split "/history/")[-1] } ; $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/history/$history_id/?api-version=2024-04-01" -Headers @{ Authorization = "Bearer $token" } -Method GET ; $log_url = $response.properties.output_url ; Invoke-RestMethod -Uri $log_url -Headers @{ Authorization = "Bearer $token" } -Method GET

 

Once you see the report in the Logs, it indicates that the embeddings calculation is complete, and the Flask app is ready for predictions.

 

You can also find the newly calculated embeddings in the File Share mounted in your local environment.

 

 

Using the Model for Prediction

Just like in local testing, open a bash terminal and use the following curl command to send a request to the app:

# Please change <webapp_name> to your own
curl -X GET https://<webapp_name>.azurewebsites.net/api/detect?text=education

 

As with the local environment, you should see the expected results.
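Note that if the query text contains spaces or other special characters, it must be URL-encoded before being sent. A small sketch (the helper name is hypothetical) that builds a correctly encoded request URL:

```python
from urllib.parse import urlencode, urlunsplit

def detect_url(webapp_name, text):
    """Build the /api/detect request URL with a properly encoded query string."""
    return urlunsplit(
        ("https", f"{webapp_name}.azurewebsites.net", "/api/detect",
         urlencode({"text": text}), "")
    )

print(detect_url("openai-arm-app", "higher education"))
# https://openai-arm-app.azurewebsites.net/api/detect?text=higher+education
```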

 


7. Troubleshooting

Startup Command Issue

  • Symptom: The app was previously functioning; without any code changes, updating the Startup Command causes it to stop working.

    The related default_docker.log shows multiple attempts to run the container without errors in a short time, but the container does not respond on port 8000 as seen in docker.log.

  • Cause: Since Linux Web Apps actually run in containers, the final command in the Startup Command must function similarly to the CMD instruction in a Dockerfile.
    CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]

    This command must run in the foreground (i.e., not in daemon mode) and must not exit unless manually interrupted.

  • Resolution: Check the final command in the Startup Command to ensure it does not include a daemon execution mode. Alternatively, use the Web SSH interface to execute and verify these commands directly.

App Becomes Unresponsive After a Period

  • Symptom: An app that runs normally becomes unresponsive after some time. Both the front-end webpage and the Kudu page display an "Application Error," and the deployment log shows "Too many requests." Additionally, the local environment cannot connect to the associated File Share.
  • Cause: Clicking on "diagnostic resources" in the initial error screen provides more detailed error information.

    In this example, the issue is caused by internal enterprise Policies or Automations (e.g., enterprise applications) that periodically or randomly scan storage account settings created by employees. If the settings are deemed non-compliant with security standards, they are automatically adjusted.

    For instance, the allowSharedKeyAccess parameter may be forcibly set to false, preventing both the Web App and the local development environment from connecting to the File Share under the Storage Account. Modification history for such settings can be checked via the Activity Log of the Storage Account (note that only the last 90 days of data are retained).

  • Resolution: The proper approach is to work offline with the enterprise IT team to coordinate and request the necessary permissions.

    As a temporary workaround, modify the affected settings to Enable during testing periods and revert them to Disabled afterward. You can find the setting for allowSharedKeyAccess here.

    Note: Azure Storage Mount currently does not support access via Managed Identity.

az cli commands for Linux WebJobs fail

  • Symptom: The message "Operation returned an invalid status 'Unauthorized'" appears on different platforms, even in Azure Cloud Shell with the latest az version.
  • Cause: Adding "--debug --verbose" to the command reveals which REST API the error actually occurred on. For example, I'm using this command (az webapp webjob triggered):
    az webapp webjob triggered list --resource-group azure-appservice-ai --name openai-arm-app --debug --verbose

     This shows that the operation invokes the following API:

    /Microsoft.Web/sites/{app_name}/triggeredwebjobs (Web Apps - List Triggered Web Jobs)

    After testing that API directly from the official doc, I still got the same error, which means this preview feature is still under construction and cannot be used currently.

     

  • Resolution: I found a related API endpoint via Azure Portal:

    /Microsoft.Web/sites/{app_name}/webjobs (Web Apps - List Web Jobs)

    After testing that API directly from the official doc, I can now get the trigger list.

    So I modified the original command:

    az webapp webjob triggered list --resource-group azure-appservice-ai --name openai-arm-app

     To the following command (please note the differences between Linux/Mac and Windows commands).

    Make sure to replace <subscription_id>, <resourcegroup_name>, and <webapp_name> with your specific values.

    Commands for Linux or Mac:

    token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; response=$(curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01") ; echo "$response" | jq

    Commands for Windows:

    $token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv); $response = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/webjobs?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"} -Method GET ; $response | ConvertTo-Json -Depth 10

    The "run" command invokes the same problematic API, so I modified that operation as well.

    Commands for Linux or Mac:

    token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; curl -X POST -H "Authorization: Bearer $token" -H "Content-Type: application/json" -d '{}' "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01"

    Commands for Windows:

$token=$(az account get-access-token --resource https://management.azure.com --query accessToken -o tsv) ; Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Web/sites/<webapp_name>/triggeredwebjobs/cal-embeddings/run?api-version=2024-04-01" -Headers @{Authorization = "Bearer $token"; "Content-type" = "application/json"} -Method POST -Body '{}'

     

Others

Using Scikit-learn on Azure Web App


8. Conclusion

Beyond simple embedding vector calculations, OpenAI's most notable strength is generative AI. You can provide instructions to the GPT model through natural language (as a prompt), clearly specifying the format you need in the instruction. You can then parse the returned content easily. While PaaS products are not ideal for heavy vector calculations, they are well-suited for acting as intermediaries to forward commands to generative AI. These outputs can even be used for various applications, such as patent infringement detection, plagiarism detection in research papers, or trending news analysis.

 

I believe that in the future, we will see more similar applications on Azure Web Apps.


9. References

Overview - OpenAI API

News-Headlines-Dataset-For-Sarcasm-Detection

Quickstart: Deploy a Python (Django, Flask, or FastAPI) web app to Azure - Azure App Service

Configure a custom startup file for Python apps on Azure App Service on Linux - Python on Azure

Mount Azure Storage as a local share - Azure App Service

Deploy to Azure button - Azure Resource Manager

Using Scikit-learn on Azure Web App

Updated Nov 25, 2024
Version 4.0