Technical Overview
In this blog, I will guide you through deploying a Large Language Model (GPT-2) to the Azure platform and using Power Automate to consume its endpoint, make requests, and ingest responses. In addition, we will use the Power Apps template that I created to display the generated text and give you creative writing ideas.
Prerequisites:
An Azure subscription.
- If you don’t already have one, you can sign up for an Azure free account.
- For students, you can use the free Azure for Students offer, which doesn’t require a credit card, only your school email.
A Power Apps environment.
Solution File from GitHub.
Summary of the steps:
Step 1: Open your Azure Portal and Sign in
Step 2: Create an Azure Machine Learning Workspace
Step 3: Deploy a Machine Learning Model using templates
Step 4: Open Power Apps and Import the Solution
Step 5: Edit the Power Automate Flow
Step 6: Publish your Power App
Step 1: Open your Azure Portal and Sign in
Go to https://portal.azure.com and sign in.
Choose your preferred account and proceed
Now you are inside the Azure portal!
Step 2: Create an Azure Machine Learning Workspace
Search for Azure Machine Learning and select it.
Click on Create and choose New Workspace to create a new machine learning workspace.
What do you need to create it?
- Azure Subscription (All resources in an Azure subscription are billed together. Learn more: here)
- Azure Resource Group (A resource group is a collection of resources that share the same life cycle, permissions, and policies. Learn more: here)
- Workspace Name (Unique name that matches the constraints for naming on Azure)
- Region (Choose the region closest to you and your customers. Learn more: here)
- Storage Account (A storage account is used as the default datastore for the workspace. You may create a new Azure Storage resource or select an existing one in your subscription. Learn more: here)
- Key vault (A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription. Learn more: here)
- Application Insights (The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription. Learn more: here)
- Container Registry (A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription. Learn more: here)
For simplicity, we will click on Create new for the resource group, provide a name, and click on Ok. Then provide a name for the workspace; all the other options will be populated automatically for us. If you want to learn more about each option, you may look at the attached links above.
Then Click Review + Create.
Wait for the deployment to finish, then click on Go to resource.
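If you prefer to script this step instead of clicking through the portal, here is a minimal sketch using the Azure ML Python SDK v2 (the azure-ai-ml package). The subscription ID, resource group name, workspace name, and region below are placeholders I made up for this demo; replace them with your own values.

```python
# pip install azure-ai-ml azure-identity
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Workspace
from azure.identity import DefaultAzureCredential

# Placeholder values -- replace with your own subscription and names.
SUBSCRIPTION_ID = "<your-subscription-id>"
RESOURCE_GROUP = "rg-helpmewrite"      # assumed resource group name (must already exist)
WORKSPACE_NAME = "mlw-helpmewrite"     # must satisfy the Azure naming constraints

# Authenticates with whatever credential is available (az login, VS Code, managed identity, ...).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id=SUBSCRIPTION_ID,
    resource_group_name=RESOURCE_GROUP,
)

# Creates the workspace; the storage account, key vault, and Application Insights
# dependencies are provisioned for you, just like the portal's auto-populated defaults.
workspace = Workspace(name=WORKSPACE_NAME, location="eastus")
ml_client.workspaces.begin_create(workspace).result()
```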
Step 3: Deploy a Machine Learning Model using templates
Open the Azure Machine Learning Studio using the Studio web URL.
Click on All Workspaces to access the shared resources in your tenant.
Click on Models under the Shared Assets from the left side menu to choose the LLM template that we want to deploy.
Here you can find binary files for different machine learning models, provided by AzureML and Hugging Face, that were trained with specific algorithms on training datasets and can produce predictions and inferences on additional, larger datasets.
For this demo, we will choose GPT-2 for text generation; this model can also be used for different applications like sentiment analysis and many more.
You will see how easy it is to deploy and use any of these models. You may want to try other models later, but for this time let's stick to the instructions.
Click on gpt-2, choose Deploy, then Choose Real-time endpoint, as we need our inferences in real time, not as a batch job.
Choose your subscription and the workspace we created together earlier, then Click on Proceed to workspace.
What do you need to deploy a model?
- Virtual Machine (to host the model and run its code. Learn more: here)
- Instance Count (The number of instances to use for the deployment. Specify the value based on the workload you expect. For high availability, Microsoft recommends you set it to at least 3)
- Endpoint Name (An endpoint is an HTTPS path that provides an interface for clients to send requests (input data) and receive the inferencing (scoring) output of a trained model. It provides authentication, SSL termination, and a stable scoring URI. Learn more: here)
- Deployment Name (Deployments are hosted within an endpoint, and can receive data from clients and send responses back in real-time.)
For the sake of simplicity, leave all the default options and Click on Deploy; you may check and test it later.
The deployment will take a couple of minutes to be created, so move on to the next step in the meantime.
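If you would rather automate the deployment, the same flow can be sketched with the Azure ML Python SDK v2. Treat this as a hedged sketch, not the exact recipe the portal follows: the registry model name ("gpt2"), the endpoint and deployment names, and the instance type are assumptions for this demo, so confirm the model name and the recommended VM SKU on the model card in the registry.

```python
# pip install azure-ai-ml azure-identity
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Client scoped to the workspace created in Step 2 (placeholder values).
ml_client = MLClient(
    credential=credential,
    subscription_id="<your-subscription-id>",
    resource_group_name="rg-helpmewrite",
    workspace_name="mlw-helpmewrite",
)

# Client scoped to the shared "azureml" registry that hosts the curated models.
registry_client = MLClient(credential=credential, registry_name="azureml")

# "gpt2" is the assumed registry name of the model -- confirm it on the model card.
model = registry_client.models.get(name="gpt2", label="latest")

# Real-time (managed online) endpoint with key-based authentication.
endpoint = ManagedOnlineEndpoint(name="gpt2-text-generation", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# A single deployment behind the endpoint; instance size and count here are demo-sized,
# not the high-availability guidance mentioned above.
deployment = ManagedOnlineDeployment(
    name="default",
    endpoint_name="gpt2-text-generation",
    model=model.id,
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()

# Route all traffic to this deployment so the endpoint serves it.
endpoint.traffic = {"default": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```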
Step 4: Open Power Apps and Import the Solution
Fork this repo: https://github.com/microsoft/AzureML-PowerAppSolution. I created a simple App using Power Apps and a Cloud Flow using Power Automate, which are found in a Zip file inside a folder named HelpMeWrite Solution.
Then Clone the Repo to have the files available locally.
Open Power Apps using this link https://make.powerapps.com/.
Click on More from the left side window then Choose Solutions.
Click on Import Solution to load the Zip file that we cloned.
Click on Browse and Choose the Zip File that we cloned then Click on Open and Next.
After it finishes processing Click on Import and wait for a couple of minutes.
You can now find a Solution named AML PowerApps HelpMeWrite Sample in the solutions table ready for you to use.
Step 5: Edit the Power Automate Flow
Click on the Solution Display Name to open it, then Click on the Canvas App to Open the application in edit mode.
Now you can edit the application if you want to change colors, fonts, and backgrounds, or leave it as it is, and continue by clicking on Power Automate from the left side menu.
Hover over the flow name (HelpMeWriteFlow) and from the three dots Click on Edit.
Go back to your Azure Machine Learning Studio. In the endpoints tab you will find that the deployment has finished; it should look like the image below.
Click on Consume and copy the REST endpoint, which is the ML Endpoint we will use in our Power Automate Flow, and either of the two authentication keys, which we will use as our ML API Key in the Power Automate Flow.
Go back to Power Automate, add the values you copied in the places shown below, then Click on Save and Close the flow from the X button.
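For reference, the request that the Power Automate flow sends using the ML Endpoint and ML API Key looks roughly like this Python sketch. The scoring URI and key are the values copied from the Consume tab; the request body is an assumption based on the text-generation models in the registry, so check the endpoint's Test tab for the exact schema of your deployed model version.

```python
import requests

# Values copied from the endpoint's Consume tab (placeholders here).
SCORING_URI = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
API_KEY = "<primary-or-secondary-key>"

# Assumed request schema for the registry's text-generation models --
# verify it against the Test tab of your deployment.
payload = {
    "input_data": {
        "input_string": ["Write a short story about a robot who learns to paint"],
        "parameters": {"max_new_tokens": 50},
    }
}

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

response = requests.post(SCORING_URI, json=payload, headers=headers)
response.raise_for_status()
print(response.json())  # the generated text that the Power App displays
```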
Step 6: Publish your Power App
Now let's preview our application to test that everything works: Click on the Play button from the Navigation Menu, or press F5.
Enter any text in the Text Input box then Click on Generate and Wait for the magic to happen.
The last step is to save our changes and publish them by Clicking on the Save Icon from the Navigation Menu, waiting until it finishes saving, then Clicking on the Publish Icon from the same Menu.
Now you can find the application in your Microsoft 365 Online Apps section. https://www.office.com/apps
Congratulations! You can share your application with anyone inside your Organization.
Clean Up
To prevent additional charges from accruing to your account, do the following:
- Delete the model deployments from the endpoints tab on Azure Machine Learning Studio.
- Delete the Azure Machine Learning Workspace.
- Delete the Azure Resource Group.
- Delete the Power Apps Solution.
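If you created the Azure resources with the SDK sketches above, the first three items can also be scripted; this is a minimal sketch that reuses the same placeholder names. Deleting the resource group removes the workspace and the resources created with it, while the Power Apps solution still has to be deleted from the Power Apps portal.

```python
# pip install azure-ai-ml azure-identity azure-mgmt-resource
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()

ml_client = MLClient(
    credential=credential,
    subscription_id="<your-subscription-id>",
    resource_group_name="rg-helpmewrite",
    workspace_name="mlw-helpmewrite",
)

# Delete the endpoint (and its deployments) so the hosting compute stops billing.
ml_client.online_endpoints.begin_delete(name="gpt2-text-generation").result()

# Deleting the resource group removes the workspace, storage account,
# key vault, and Application Insights that were created alongside it.
resource_client = ResourceManagementClient(credential, "<your-subscription-id>")
resource_client.resource_groups.begin_delete("rg-helpmewrite").result()
```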
Conclusion
In conclusion, deploying your own version of a large language model is now possible using the Azure Machine Learning Registry. Anyone can provision their own deployment, fully managed by Azure and scalable according to their business needs.
Did you like this blog and want to learn about the different concepts behind it?
Check out this educational guide on how you can use other large language models (Llama 2) in your Website or Power App (Understanding the Difference in Using Different Large Language Models: Step-by-Step Guide (microsoft.com))
Found this useful? Share it with others and follow me to get updates on:
- Twitter (twitter.com/john00isaac)
- LinkedIn (linkedin.com/in/john0isaac)
You can learn more at:
- Explore the Azure Machine Learning workspace - Training | Microsoft Learn
- Create a canvas app in Power Apps - Training | Microsoft Learn
- Integrate Power Automate flows and Dataverse - Training | Microsoft Learn
Join us live on November the 2nd, 2023 to learn more about Empowering Tech Entrepreneurs: Harnessing a LLM in Power Platform
Register for this event!
2 November, 2023 | 3:30 PM - 4:30 PM GMT
Learn about the new capabilities of the Azure Machine Learning Registry that enable you to deploy Large Language Models like GPT and many more, and integrate them with Power Platform, with John Aziz, a Gold Microsoft Learn Student Ambassador from Egypt, and Lee Stott, Principal Cloud Advocate Manager.
In this session, you'll learn about Large Language Models and the steps you need to perform to deploy them to Azure Machine Learning, discover how to set up an Azure Machine Learning Workspace, deploy the machine learning model, learn about Power Apps and Power Automate, and integrate the deployed model with Power Apps. Follow along as we provide clear instructions, and make the process accessible to both beginners and experienced tech enthusiasts.
By the end, you'll have a fully functional application capable of generating text and creative writing ideas. Connect with John and Lee on Twitter and LinkedIn to explore more Microsoft tools and cool tech solutions.
Feel free to share your comments and/or inquiries in the comment section below.
See you in future demos!