
Educator Developer Blog

Operationalize your Prompt Engineering Skills with Azure Prompt Flow

Ifeoluwa_Oduwaiye
Nov 08, 2024

 

Prompt engineering is becoming more valuable to developers and professionals worldwide. Since the big AI boom, companies have intentionally begun seeking out employees with this skill, and some even go as far as organizing training programs to upskill their staff in prompt engineering.

Why so? We have moved from the period where we used to say, “AI is the future,” to one where we say, “AI is here.” We can all attest to the fact that AI is here to stay. The question now is: how can we leverage this technology to improve our company operations and make our lives better?

This article aims to cover key concepts in prompt engineering for operations and how to leverage LLMs (Large Language Models) to solve operational problems using Azure Prompt Flow. Before I go any further, permit me to introduce myself. I am Ifeoluwa Oduwaiye, a Beta MLSA at the Bells University of Technology, and a data scientist and technical writer.

Now that that’s done, let’s get into the article. So, what exactly is prompt engineering?

 

Prompt Engineering

The definition given by McKinsey & Company puts it very simply: prompt engineering is the practice of designing inputs for AI tools that will produce optimal outputs. You can think of it as the art of querying AI tools.

Prompt engineering has more to do with the quality of the words you put into your prompt than with the quantity. Most AI tools are GIGO (Garbage-In, Garbage-Out) systems; thus, the quality of your input strongly determines the quality of your output. Take the example below, for instance.

Generated using Microsoft Copilot

 

If you’re just getting started with prompt engineering, here are some key tips, with a short sketch of them in action after the list. Although I recommend you take a proper course in prompt engineering, these tips are enough for you to get started.

  1. Write clear instructions
  2. Provide reference text or an example the AI can follow
  3. Split complex tasks into simpler subtasks
  4. Tell the model what to do as opposed to telling the model what not to do
  5. Be concise in your prompts
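To make these tips concrete, here is a minimal sketch in Python contrasting a vague prompt with one that applies them. The prompts and the sales-data placeholder are purely illustrative examples of mine, not taken from any particular product.

```python
# Illustrative only: a vague prompt vs. one that applies the tips above.

vague_prompt = "Tell me about our sales."

# Applies the tips: clear instructions, a reference format to follow,
# the task split into subtasks, phrased as what TO do, and concise.
improved_prompt = (
    "You are a business analyst. Using the quarterly sales figures below, "
    "do the following:\n"
    "1. Summarize the overall trend in two sentences.\n"
    "2. List the top three products by revenue.\n"
    "3. Suggest one action for next quarter.\n\n"
    "Example output format:\n"
    "Trend: ...\nTop products: ...\nSuggested action: ...\n\n"
    "Sales data:\n{sales_data}"
)

print(improved_prompt.format(sales_data="Q1: $10k, Q2: $14k, Q3: $9k"))
```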

 

Now that a sturdy foundation in prompt engineering has been set, let’s move on to the tool we will be using in this article: Azure Prompt Flow.

 

Azure Prompt Flow

Azure Prompt Flow is a development tool designed to streamline the entire development cycle of AI applications powered by Large Language Models (LLMs). Prompt flow provides a comprehensive solution that simplifies the process of prototyping, experimenting, iterating, and deploying your AI applications. It provides a visual interface to create and manage workflows that combine LLMs, prompts, and Python code, making it easier to build complex AI applications.
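To give a feel for the Python side of a flow, here is a minimal sketch of a custom Python node, assuming the open-source promptflow package (pip install promptflow) and its @tool decorator; treat it as an illustration rather than a finished node.

```python
# A minimal sketch of a custom Python node in a flow, assuming the
# promptflow package is installed. The @tool decorator is what marks
# a plain function as a node the flow graph can wire into.
from promptflow.core import tool


@tool
def extract_keywords(question: str) -> str:
    # Toy preprocessing step: drop filler words so a downstream
    # search node receives a cleaner query.
    stopwords = {"what", "is", "the", "a", "an", "of", "please", "tell", "me"}
    words = [w for w in question.lower().split() if w not in stopwords]
    return " ".join(words)
```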

Prompt Flow can be used to build solutions across a wide range of use cases, such as:

  1. Developing chatbots and virtual assistants
  2. Content generation
  3. Code generation and completion
  4. Data summarization and analysis
  5. Data extraction and processing

 

Some of the key features of Azure Prompt Flow are:

  1. Visual Workflow Creation: Design and build your AI applications using a drag-and-drop interface, connecting various components like LLMs, prompts, and Python code blocks.
  2. Prompt Engineering: Easily create and refine prompts to tailor the behavior of your LLM models, improving the quality and relevance of their responses.
  3. Debugging and Iteration: Debug your workflows step-by-step, identify issues, and make necessary adjustments to optimize performance.
  4. Collaboration and Sharing: Share your workflows with team members for review, feedback, and collaborative development.
  5. Deployment and Integration: Deploy your workflows as APIs or integrate them into your existing applications, making your AI solutions accessible to a wider audience.

 

Building an LLM app using Azure Prompt Flow

To get started with building the LLM app, log in to the Azure portal. You will need a subscription for this. If you’re a student, check out the Azure for Students benefit (it gives you $100 worth of Azure credits to get started with the Azure cloud).

Once you’re successfully logged into the portal, you will want to create an Azure Machine Learning workspace.

 

Step 1: Create an Azure OpenAI Service resource

To do this, start by logging into the Azure portal. In the search bar, type in Azure OpenAI and click on the first option.

Navigate to Azure OpenAI resource

 

In the next tab, click on the Create button; it will take you to an interactive popup window. On the Create Azure OpenAI page, you will need to fill in the parameters for the subscription, resource group, name, region, and pricing tier.

Create an OpenAI resource

 

Stick with the defaults for the Network and Tags tabs and click on Create to create the resource. In less than a minute, your deployment should be complete, and we can proceed to deploying a model to an endpoint.

 

Step 2: Deploy the Azure OpenAI Service resource

Sign into Azure OpenAI Studio and navigate to the resource that was created in the previous step. 

Sign into Azure OpenAI Studio

 

In Azure OpenAI Studio, navigate to Deployments under the Shared Resources section and click on Deploy base model.

Deploy a base model

 

In the Deployment pop-up window, select your preferred LLM and click on Confirm to deploy the model.

Deploy GPT-4 model

 

In the next popup tab, select your preferred deployment settings and click on Deploy. For this tutorial, I stuck with the default settings, but you are free to change them to suit your business needs.

Deploy GPT-4 model

 

Once the model has been successfully deployed, you will see a more robust Deployments tab with more information, such as the endpoint, general deployment info, and the monitoring & safety settings. In the next step, we will be making use of the model’s endpoint information, so keep it somewhere secure.

Deployment Details Tab
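As a quick sanity check of the deployment, here is a minimal sketch using the openai Python package (version 1.0 or later). It assumes you have saved the endpoint and key from the Deployments tab in environment variables (the variable names here are my own convention) and that gpt-4 is the deployment name you chose.

```python
# A quick smoke test of the deployed model. Endpoint and key are read
# from environment variables rather than hard-coded into the script.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource-name>.openai.azure.com/
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4",  # use the deployment name you chose, not the base model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```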

 

Step 3: Create your LLM using Azure Prompt Flow

Now that our OpenAI resource has been successfully created and deployed, we can move on to the next step. To get started with creating the Prompt Flow app, log in to Azure ML Studio. Once you’re logged in, navigate to your workspace. If you haven’t created one previously, you can do so by clicking on the Create button and following the on-screen prompts, or you can follow the guide linked in this article if you need further help creating a workspace.

Log into Azure ML Studio

 

In the Azure ML Studio workspace window, navigate to the Prompt Flow tab under the Authoring section and click on the Create button in the Flows tab. This will lead you to an interactive popup window for creating a flow. In Azure Prompt Flow, a flow is a workflow that connects nodes to process data and perform tasks in an AI application; its main purpose is to streamline the development of LLM-based AI applications.

Create a new flow

 

For this tutorial, we will be creating a chatbot that answers questions using information available on Wikipedia. In the Chat flow section, click on the Clone button under the Chat with Wikipedia flow. This clones a flow that was already created by experts.

Chat with Wikipedia flow

 

The flow editing window is divided into three sections:

  1. The main section, which contains all inputs, outputs, and processes in the flow.
  2. The file section, which can be used to add files to the flow.
  3. The graph section, which contains a graphical representation of the flow pipeline.

 

Flow Editing Page

 

Most of the components in the main section do not need to be edited, as they have already been pre-programmed. All we need to do here is add a connection to the deployed OpenAI resource. To do this, navigate to the first LLM component, extract_query_from_question, click on the Add connection drop-down box, and select Azure OpenAI.

Create an OpenAI connection

 

In the popup window, fill in the connection name, your subscription, OpenAI account name, API key, and endpoint information. You can copy the API key and base endpoint from the Azure OpenAI resource deployed in the previous step. Once everything is filled in, click on Save.

Connection Creation Popup window

 

Now that the connection is created, you can use it for all LLM components in the flow. This can be done by simply filling in the connection information at each LLM component, as shown below.

Add the connection to the LLM component

 

Repeat the same step for the next LLM component, augmented_chat. Once all that is done, click on the Save button to save the changes you’ve made to the flow.

 

Step 4: Test the flow

To get started with testing the flow, click on the Start compute session button. Once the session has started successfully, click on the Chat button to test the LLM application in a chat environment.

Test the flow

 

The app is fully functional. We will now move on to the next step: deployment.

 

Deploying an LLM app using Azure Prompt Flow

Save all changes made to the app and click on the Deploy button to start the deployment process. In the Deployment popup window, select New endpoint if you don’t have an existing endpoint, and a suitable virtual machine will be created for you. If you already have an endpoint running, select Existing endpoint and specify its location.

For this tutorial, I created a new endpoint and stuck to the defaults. Once the endpoint information is specified, click on Review + Create, then Create.

Create a deployment endpoint

 

 

Navigate to the Endpoints tab under the Assets section and select the newly created deployment endpoint. It should take about 5 minutes for the endpoint to be completely provisioned.

Endpoints tab

 

 

Once provisioning is done, you should see a green checkmark on the Details page. If you come across a ResourceOperationFailure error during provisioning, try registering the following resource providers on your Azure subscription (Microsoft.ClassicNetwork, Microsoft.ClassicStorage, Microsoft.ClassicCompute, Microsoft.PolicyInsights, and Microsoft.VirtualMachineImages) and redeploying the app. If you don’t know how to register a resource provider, follow the steps in this tutorial; a short sketch of doing it from Python follows below.

Endpoints Details Tab
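For reference, here is a minimal sketch of registering those providers programmatically, assuming the azure-identity and azure-mgmt-resource packages and a subscription ID in an environment variable of my own naming; the portal or Azure CLI work just as well.

```python
# A sketch of registering the resource providers listed above from Python.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

for namespace in [
    "Microsoft.ClassicNetwork",
    "Microsoft.ClassicStorage",
    "Microsoft.ClassicCompute",
    "Microsoft.PolicyInsights",
    "Microsoft.VirtualMachineImages",
]:
    client.providers.register(namespace)  # kicks off registration; it completes asynchronously
    print(f"Registration requested for {namespace}")
```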

 

 

Now, the LLM app is ready to be consumed and embedded within an existing application. Feel free to either test the deployment on the Test tab or consume it from the Consume tab using C#, Python, or JavaScript, as in the sketch below.
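As a rough illustration of the consumption pattern, here is a minimal Python sketch using the requests package. The URL, key variable, and input field names are placeholders of mine; copy the real values and request schema from your endpoint’s Consume tab.

```python
# A minimal sketch of calling the deployed flow endpoint over REST.
import os

import requests

url = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"  # from the Consume tab
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ENDPOINT_API_KEY']}",  # key from the Consume tab
}
# Field names depend on your flow's declared inputs; check the Test tab.
payload = {"question": "Who founded Wikipedia?", "chat_history": []}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json())
```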

 

Concluding Remarks

In this tutorial, we’ve covered the essentials of prompt engineering, introduced Azure Prompt Flow, and guided you through building and deploying LLM applications with this powerful tool. But this is just the beginning! There’s a world of possibilities to explore in LLM development, and Azure Prompt Flow is your gateway to creating smarter, more impactful AI-driven applications.

I’d love to hear your thoughts—feedback, questions, and ideas are all welcome! Ready to dive deeper? Be sure to explore the resources linked below for even more insights and advanced techniques. Keep building, keep experimenting, and let’s push the boundaries of what’s possible with Azure Prompt Flow. 

Updated Nov 03, 2024
Version 1.0