Drug Details in Doctor’s Prescriptions: A Named Entity Recognition Approach using Prompt Flow
Published Jan 30 2024 08:15 AM



This article shows how to identify important information, such as drug details, in a doctor’s prescription using Named Entity Recognition (NER). Entities are words that represent specific concepts or objects in a text. The same approach works for any text from which you need to extract key information. Below is a sample doctor’s prescription from which we need to identify the drugs, e.g. ibuprofen and pseudoephedrine.




Large Language Models (LLMs) such as GPT-3.5 and GPT-4 can be used to perform NER. To create an application that takes a text as input and outputs entities, we can build a flow that uses an LLM node with prompt flow.

Before implementing the solution, we first need to learn the basic concepts of prompt flow.


Prompt Flow - Prompt flow, available in the Azure Machine Learning studio and the Azure AI Studio (preview), lets you create, evaluate, adjust, and run LLM applications. Using prompt flow we can develop, test, tune, and deploy LLM applications.

To learn how to use prompt flow, we need to first examine the stages of developing an application that uses a Large Language Model (LLM).


Below is the lifecycle of a large language model application:


  • Initialization: Identify the use case and plan the solution.
  • Experiment: Create a flow and run it with a small dataset.
  • Evaluation and Refinement: Check the flow with a larger dataset.
  • Production: Launch and track the flow and application.




Understand a flow - Prompt flow is a feature in the Azure AI Studio that lets you create flows. Flows are executable workflows that usually have three components:


  • Inputs: Represent data passed into the flow. They can be of different data types, such as string, integer, or Boolean. In the screenshot below you can see I have provided two inputs: drug’s name as the entity type and the doctor’s prescription as the text.
  • Nodes: Represent tools that perform data processing, task execution, or algorithmic operations. In the screenshot below I have two nodes, DRUGS_IDENTIFY_LLM and DATA_CLEANSING, where I pass entity_type and text as arguments to the LLM connection.
  • Outputs: Represent the data produced by the flow. In the screenshot below you can see the output, which shows the final result (e.g. the drug’s name). We can get the output in a desired format such as JSON or plain text.
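Behind the studio UI, a flow is stored as a DAG file. A hypothetical flow.dag.yaml for this solution, wiring the two inputs through the two nodes to the output, could look roughly like this (field names follow the prompt flow DAG schema; the file paths are illustrative placeholders):

```yaml
# Sketch of a flow.dag.yaml for the drug-recognition flow (illustrative only)
inputs:
  entity_type:
    type: string
    default: drug's name
  text:
    type: string
nodes:
- name: DRUGS_IDENTIFY_LLM
  type: llm
  source:
    type: code
    path: DRUGS_IDENTIFY_LLM.jinja2   # hypothetical prompt template file
  inputs:
    entity_type: ${inputs.entity_type}
    text: ${inputs.text}
- name: DATA_CLEANSING
  type: python
  source:
    type: code
    path: DATA_CLEANSING.py           # hypothetical Python tool file
  inputs:
    entities_str: ${DRUGS_IDENTIFY_LLM.output}
outputs:
  entities:
    type: string
    reference: ${DATA_CLEANSING.output}
```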







Tools available in prompt flow - Three common tools are:

  • LLM tool: Enables custom prompt creation utilizing Large Language Models.
  • Python tool: Allows the execution of custom Python scripts.
  • Prompt tool: Prepares prompts as strings for complex scenarios or integration with other tools.

Types of Flow - There are three different types of flows you can create with prompt flow:

  • Standard flow: Ideal for general LLM-based application development, offering a range of versatile tools.
  • Chat flow: Designed for conversational applications, with enhanced support for chat-related functionalities.
  • Evaluation flow: Focused on performance evaluation, allowing the analysis and improvement of models or applications through feedback on previous runs.




To access data from an external source, service, or API, your flow needs permission to interact with that external service. When you set up a connection, you establish a secure link between prompt flow and external services, enabling smooth and secure data exchange. In the screenshot below you can see the connection is Default_AzureOpenAI, which is created automatically by the system when you set up a deployment; you will learn more about it in the next steps.







You need to set up the connections your tools use and create your flow before you can run it. You also need compute, which prompt flow provides through runtimes. In the screenshot below you can see how the runtime is defined in my solution; you will learn more about it in the upcoming steps.






Prompt flow variants are different versions of a tool node with unique settings. Variants are currently available only in the LLM tool, where a variant can have different prompt content or connection settings. Variants let users tailor their approach to specific tasks, such as summarizing news articles. In the screenshot below you can see the default variant is variant_0; you can clone a new one, change its settings, and run them independently.





Deploy your flow to endpoint


You can deploy your flow to an online endpoint when you are happy with how it works. Endpoints are URLs that you can use from any application. When you use an online endpoint with an API call, you can get a (near) instant response.

When you put your flow on an online endpoint, prompt flow creates a URL and key for you to securely connect your flow with other applications or business processes. When you use the endpoint, a flow runs and the output comes back in real time. This way, you can use endpoints to create chat or copilot responses that you want to use in another application. In the screenshot below you can see a sample deployment setup for this solution.
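As a sketch of what an API call to such an endpoint involves, the request carries the flow’s inputs as a JSON body and the endpoint key in the Authorization header. The helper below only builds the request; the URL, key, and exact payload field names here are placeholders (build_request is a hypothetical helper), so check the Consume tab of your deployment for the real values:

```python
import json


def build_request(api_key: str, entity_type: str, text: str):
    """Assemble headers and a JSON body for invoking the deployed flow.

    The body's field names mirror this flow's inputs; verify the actual
    schema on your deployment's Consume tab in Azure AI Studio.
    """
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({"entity_type": entity_type, "text": text})
    return headers, body


headers, body = build_request(
    api_key="<your-endpoint-key>",  # placeholder
    entity_type="drug's name",
    text="Take ibuprofen (400mg) 3 times a day.",
)
# Send with any HTTP client, e.g.:
# requests.post("<your-endpoint-url>", headers=headers, data=body)
```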




Evaluation Metrics


Prompt flow requires tracking evaluation metrics to assess how well your LLM application performs, how well it matches real-world expectations, and how often it produces correct results. The main metrics used for evaluation in prompt flow are Groundedness, Relevance, Coherence, Fluency, and Similarity.
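These built-in metrics are GPT-assisted, meaning an LLM scores the outputs, so there is no simple closed-form formula. Purely as a lexical illustration of comparing a flow’s output against a reference answer (this is not how prompt flow’s metrics are actually computed), a token-overlap F1 might look like:

```python
def token_f1(prediction: str, reference: str) -> float:
    """Rough lexical similarity: F1 over shared word tokens.

    Illustrative only; prompt flow's Groundedness/Relevance/Coherence/
    Fluency/Similarity metrics are GPT-assisted, not computed this way.
    """
    pred_tokens = set(prediction.lower().split())
    ref_tokens = set(reference.lower().split())
    common = len(pred_tokens & ref_tokens)
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```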


Implementation of Drug identification Solution – Step by Step


In this exercise, you’ll use Azure AI Studio’s prompt flow to create an LLM application that expects an entity type (e.g. drug’s name) and a text (e.g. a doctor’s prescription) as input. It calls a GPT model from Azure OpenAI through an LLM node to extract the required entity from the given text, cleans the result, and outputs the extracted entities.




You first need to create a project in the Azure AI Studio to create the necessary Azure resources. Then, you can deploy a GPT model with the Azure OpenAI service. Once you have the necessary resources, you can create the flow. Finally you’ll run the flow to test it and view the sample output.

Create a project in the Azure AI Studio

You start by creating an Azure AI Studio project and the Azure AI resources to support it.

  1. In a web browser, open https://ai.azure.com and sign in using your Azure credentials.
  2. Select the Build page, then select + New project.
  3. In the Create a new project wizard, create a project with the following settings:
    • Project name: A unique name for your project (e.g. Prescribed_Drug_Identification)
    • Azure AI resource: Create a new resource with the following settings:
      • Resource name: A unique name
      • Subscription: Your Azure subscription
      • Resource group: A new resource group
      • Location: Choose a location near you
  4. Review your configuration and create your project.
  5. Wait 5-10 minutes for your project to be created.







Deploy a GPT model


To use an LLM in prompt flow, you need to deploy a model first. The Azure AI Studio allows you to deploy OpenAI models that you can use in your flows.

  1. In the navigation pane on the left, under Components, select the Deployments page.
  2. In Azure OpenAI Studio, navigate to the Deployments page.
  3. Create a new deployment of the gpt-35-turbo model with the following settings:
    • Model: gpt-35-turbo
    • Model version: Leave the default value
    • Deployment name: gpt-35-turbo
    • Advanced options: Use the default content filter and restrict the tokens-per-minute (TPM) to 5K.



Now that you have your LLM model deployed, you can create a flow in Azure AI Studio that calls the deployed model.


Create and run a flow in the Azure AI Studio

Now that you have all necessary resources provisioned, you can create a flow.


Create a new flow

To create a new flow with a template, you can select one of the types of flows you want to develop.

  1. In the navigation pane on the left, under Tools, select Prompt flow.
  2. Select + Create to create a new flow.
  3. Create a new Standard flow and enter drug-recognition as folder name.

A standard flow with one input, two nodes, and one output is created for you. You’ll update the flow to take two inputs, extract entities, clean up the output from the LLM node, and return the entities as output.






Start the automatic runtime


To test your flow, you need compute. The necessary compute is made available to you through the runtime.

  1. After creating the new flow that you named drug-recognition, the flow should open in the studio.
  2. Select the Select runtime field from the top bar.
  3. In the Automatic runtime list, select Start to start the automatic runtime.
  4. Wait for the runtime to start.



Configure the inputs


The flow you’ll create will take two inputs: a text and the type of entity you want to extract from the text.

  1. Under Inputs, one input named topic of type string is already configured. Update the existing input with the following settings:
    • Name: entity_type
    • Type: string
    • Value: drug’s name
  2. Select Add input.
  3. Configure the second input to have the following settings:
    • Name: text
    • Type: string
    • Value: “Patient Name: _________


  1. Take ibuprofen (400mg) 3 times a day for 7 days to reduce fever and muscle aches.
  2. Take pseudoephedrine (60mg) 3 times a day for 7 days as needed for congestion and sinus pressure.
  3. Use a saline nasal spray to help relieve congestion.
  4. Drink plenty of fluids and get plenty of rest.
  5. Avoid activities that may worsen symptoms.
  6. Follow up with your doctor if symptoms worsen or persist.


Dr. ___________”



Configure the LLM node


The standard flow already includes a node that uses the LLM tool. You can find the node in your flow overview. The default prompt asks for a joke. You’ll update the LLM node to extract entities based on the two inputs specified in the previous section.

  1. Navigate to the LLM node named joke.
  2. Replace the name with DRUGS_IDENTIFY_LLM
  3. For Connection, select the Default_AzureOpenAI connection.
  4. For deployment_name, select the gpt-35-turbo model you deployed.
  5. Replace the prompt field with the following code:



{% raw %}

Your task is to find entities of a certain type from the given text content.
If there're multiple entities, please return them all with comma separated, e.g. "entity1, entity2, entity3".
You should only return the entity list, nothing else.
If there's no such entity, please return "None".


Entity type: {{entity_type}}
Text content: {{text}}
{% endraw %}




  1. Select Validate and parse input.
  2. Within the LLM node, in the Inputs section, configure the following:
    • For entity_type, select the value ${inputs.entity_type}.
    • For text, select the value ${inputs.text}.

Your LLM node will now take the entity type and text as inputs, include it in the prompt you specified and send the request to your deployed model.
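Under the hood, prompt flow renders this prompt with Jinja2, substituting the node’s inputs into the {{...}} placeholders. A minimal stand-in for that rendering (simple string replacement instead of a real Jinja2 engine, shown purely to illustrate what the model receives) looks like this:

```python
PROMPT_TEMPLATE = """Your task is to find entities of a certain type from the given text content.
If there're multiple entities, please return them all with comma separated, e.g. "entity1, entity2, entity3".
You should only return the entity list, nothing else.
If there's no such entity, please return "None".

Entity type: {{entity_type}}
Text content: {{text}}"""


def render_prompt(template: str, **values: str) -> str:
    # Naive placeholder substitution; prompt flow itself uses Jinja2.
    for key, val in values.items():
        template = template.replace("{{" + key + "}}", val)
    return template


prompt = render_prompt(
    PROMPT_TEMPLATE,
    entity_type="drug's name",
    text="Take ibuprofen (400mg) 3 times a day.",
)
```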

Configure the Python node

To extract only the key information from the result of the model, you can use the Python tool to clean up the output of the LLM node.

  1. Navigate to the Python node named echo.
  2. Replace the name with DATA_CLEANSING.
  3. Replace the code with the following:



from typing import List

from promptflow import tool


@tool
def cleansing(entities_str: str) -> List[str]:
    # Split the comma-separated LLM output, then remove leading/trailing
    # spaces, tabs, dots, and quotes from each part.
    parts = entities_str.split(",")
    cleaned_parts = [part.strip(" \t.\"") for part in parts]
    # Drop any empty strings left after cleaning.
    entities = [part for part in cleaned_parts if len(part) > 0]
    return entities


  1. Select Validate and parse input.
  2. Within the Python node, in the Inputs section, set the value of entities_str to ${DRUGS_IDENTIFY_LLM.output}.
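To see what the cleansing step does, you can run the same logic standalone (the promptflow import and @tool decorator are dropped here so the snippet needs no promptflow install). Assuming the LLM returns a comma-separated string such as "ibuprofen, pseudoephedrine.", the node yields a clean list:

```python
from typing import List


def cleansing(entities_str: str) -> List[str]:
    # Same logic as the DATA_CLEANSING node, minus the @tool decorator.
    parts = entities_str.split(",")
    cleaned_parts = [part.strip(" \t.\"") for part in parts]
    return [part for part in cleaned_parts if len(part) > 0]


print(cleansing('ibuprofen, pseudoephedrine.'))  # → ['ibuprofen', 'pseudoephedrine']
print(cleansing('None'))                         # → ['None']
print(cleansing(' , '))                          # empty strings are dropped → []
```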




Configure the output


Finally, you can configure the output of the whole flow. You only want one output to your flow, which should be the extracted entities.

  1. Navigate to the flow’s Outputs.
  2. For Name, enter entities.
  3. For Value, select ${DATA_CLEANSING.output}.

Run the flow


Now that you’ve developed the flow, you can run it to test it. Since you’ve added default values to the inputs, you can easily test the flow in the studio.

  1. Select Run to test the flow.
  2. Wait until the run is completed.
  3. Select View outputs. A pop-up should appear showing you the output for the default inputs. Optionally, you can also inspect the logs.




Summary: Prompt flow lets you build flows: the sequence of actions or steps you take to accomplish a particular task or function. A flow represents the whole process or pipeline that uses interaction with the LLM to solve a specific use case, covering the entire journey from taking input to producing output or performing a desired action. In this article we covered:

  • The steps involved in building LLM applications.
  • What a flow means in prompt flow.
  • The main elements used when working with prompt flow.
  • A small drug identification solution developed with a standard flow.


Version history
Last update: Jan 30 2024 09:02 AM