In this tutorial we will look at how to create model instances with Azure OpenAI Service on Azure AI Foundry, and how to use LangGraph from LangChain.js to build a simple AI agent that performs price conversion for the MyTreat website. This tutorial is recommended for developers who are just getting into building with Azure AI services, or professionals looking for a quick introduction to tool calling with LLMs.
Introducing MyTreat
Our demo is a fictional website that shows customers their total bill in dollars, with the option of viewing the total in their local currency. The button sends a request to a Node.js service, and the response comes back from our agent based on the tool it chooses to call. Let's dive in and understand how this works from a broader perspective.
Prerequisites
- An active Azure subscription. You can sign up for a free trial, or get $100 worth of Azure credits every year if you are a student.
- A GitHub account (optional)
- Node.js 18 LTS or later
- VS Code installed (or your favorite IDE)
- Basic knowledge of HTML, CSS, JS
Creating an Azure OpenAI Resource
Go over to your browser and key in portal.azure.com to access the Microsoft Azure Portal. In the search bar, type Azure OpenAI, open the service, and click on + Create.
Fill in the input boxes with appropriate values, for example as shown below, then press Next until you reach Review + submit, and finally click Create.
After the deployment is done, go to the deployed resource and access the Azure AI Foundry portal using the button as shown below. You can also use the link, as demonstrated below.
In the Azure AI Foundry portal we need to create our model instance, so go over to Model Catalog on the left panel beneath Get Started. Select a desired model; in this case I used gpt-35-turbo for chat completion (in your case use gpt-4o). Below is one way of doing this.
- Choose a model (gpt-4o)
- Click on Deploy
- Give the deployment a name, e.g. myTreatmodel, then click Deploy and wait for it to finish
On the left panel go over to deployments and you will see the model you have created.
Access your Azure OpenAI Resource Key
Go back to the Azure portal, open the Azure OpenAI resource we created, and on the left panel select Resource Management, then Keys and Endpoint. Copy one of the keys as shown below and keep it very safe, as we will use it in our .env file.
Configuring your project
Create a new project folder on your local machine and add these variables to the .env file in the root folder.
AZURE_OPENAI_API_INSTANCE_NAME=
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_VERSION="2024-08-01-preview"
LANGCHAIN_TRACING_V2="false"
LANGCHAIN_CALLBACKS_BACKGROUND="false"
PORT=4556
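To quickly confirm these values are being picked up by Node.js, here is a minimal sketch that assumes the dotenv package is installed (npm install dotenv); the file name is only illustrative.

// checkEnv.js – minimal check that the .env values are readable (assumes the dotenv package)
import 'dotenv/config'

const required = [
    'AZURE_OPENAI_API_INSTANCE_NAME',
    'AZURE_OPENAI_API_DEPLOYMENT_NAME',
    'AZURE_OPENAI_API_KEY',
    'AZURE_OPENAI_API_VERSION'
]

// list anything that is missing before the agent starts
const missing = required.filter((name) => !process.env[name])
console.log(missing.length ? `Missing variables: ${missing.join(', ')}` : 'All Azure OpenAI variables are set.')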
Starting a new project
Go over to https://github.com/tiprock-network/mytreat.git and follow the instructions to set up the new project. If you do not have Git installed, click the Code button and choose Download ZIP; this gives you the same project folder, and you can follow the same setup procedure.
Creating a custom tool
In the utils folder the math tool was created. The code shown below uses tool from LangChain to build the tool, and the tool's schema is defined with Zod, a library that helps validate an object's property values. The price function takes in an array of prices and an exchange rate, adds the prices up, and converts the total using the exchange rate, as shown below.
import { tool } from '@langchain/core/tools'
import { z } from 'zod'

const priceConv = tool((input) => {
    //get the prices and add them up after turning each into a number
    let sum = 0
    input.prices.forEach((price) => {
        let price_check = parseFloat(price)
        sum += price_check
    })
    //now convert the total using the exchange rate
    let final_price = parseFloat(input.exchange_rate) * sum
    //return the converted total as the tool output
    return final_price.toString()
}, {
    name: 'add_prices_and_convert',
    description: 'Add prices and convert based on exchange rate.',
    schema: z.object({
        prices: z.number({
            required_error: 'Price should not be empty.',
            invalid_type_error: 'Price must be a number.'
        }).array().nonempty().describe('Prices of items listed.'),
        exchange_rate: z.string().describe('Current currency exchange rate.')
    })
})

export { priceConv }
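Before wiring the tool into the agent, you can sanity-check it on its own. Below is a minimal sketch; the import path and the sample values are only illustrative.

// quick standalone check of the custom tool (path and values are illustrative)
import { priceConv } from './utils/priceConv.js'

const converted = await priceConv.invoke({
    prices: [12.5, 7.25, 30],   // item prices in dollars
    exchange_rate: '129.3'      // exchange rate passed as a string, per the schema
})

console.log(converted) // the total of 49.75 multiplied by the rate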
Utilizing the tool
In the controllers folder we then bring the tool in by importing it, and pass it into our array of tools, as sketched below. Notice that we also have the Tavily search tool; you can learn how to implement it in the Additional Reads section, or simply remove it.
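A minimal sketch of that import, assuming the tool file lives in the utils folder (adjust the path to your project layout):

// controllers/agentController.js – bring in the custom tool (path is illustrative)
import { priceConv } from '../utils/priceConv.js'
// optional: the Tavily search tool from @langchain/community, if you decide to keep it
// import { TavilySearchResults } from '@langchain/community/tools/tavily_search'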
Agent Model and the Call Process
This code defines an AI agent using LangGraph and LangChain.js, powered by GPT-4o from Azure OpenAI. It initializes a ToolNode to manage tools like priceConv and binds them to the agent model. The StateGraph handles decision-making, determining whether the agent should call a tool or return a direct response. If a tool is needed, the workflow routes the request accordingly; otherwise, the agent responds to the user. The callModel function invokes the agent, processing messages and ensuring seamless tool integration.
The searchAgentController is a GET endpoint that accepts user queries (text_message). It processes input through the compiled LangGraph workflow, invoking the agent to generate a response. If a tool is required, the agent calls it before finalizing the output. The response is then sent back to the user, ensuring dynamic and efficient tool-assisted reasoning.
//imports needed for the agent (adjust the tool path to your project layout)
import { AzureChatOpenAI } from '@langchain/openai'
import { ToolNode } from '@langchain/langgraph/prebuilt'
import { StateGraph, MessagesAnnotation, START, END } from '@langchain/langgraph'
import { priceConv } from '../utils/priceConv.js'

//read the Azure OpenAI settings from the .env file
const {
    AZURE_OPENAI_API_KEY,
    AZURE_OPENAI_API_INSTANCE_NAME,
    AZURE_OPENAI_API_DEPLOYMENT_NAME,
    AZURE_OPENAI_API_VERSION
} = process.env

//create tools the agent will use
//const agentTools = [new TavilySearchResults({maxResults: 5}), priceConv]
const agentTools = [priceConv]
const toolNode = new ToolNode(agentTools)

const agentModel = new AzureChatOpenAI({
    model: 'gpt-4o',
    temperature: 0,
    azureOpenAIApiKey: AZURE_OPENAI_API_KEY,
    azureOpenAIApiInstanceName: AZURE_OPENAI_API_INSTANCE_NAME,
    azureOpenAIApiDeploymentName: AZURE_OPENAI_API_DEPLOYMENT_NAME,
    azureOpenAIApiVersion: AZURE_OPENAI_API_VERSION
}).bindTools(agentTools)

//make a decision to continue or not
const shouldContinue = (state) => {
    const { messages } = state
    const lastMessage = messages[messages.length - 1]
    //upon a tool call we go to the tools node
    if ("tool_calls" in lastMessage && Array.isArray(lastMessage.tool_calls) && lastMessage.tool_calls?.length) return "tools"
    //if no tool call is made we stop and return back to the user
    return END
}

//call the model with the current conversation state
const callModel = async (state) => {
    const response = await agentModel.invoke(state.messages)
    return {
        messages: [response]
    }
}

//define a new graph
const workflow = new StateGraph(MessagesAnnotation)
    .addNode("agent", callModel)
    .addNode("tools", toolNode)
    .addEdge(START, "agent")
    .addConditionalEdges("agent", shouldContinue, ["tools", END])
    .addEdge("tools", "agent")

const appAgent = workflow.compile()
The searchAgentController described above is implemented with the following code:

//controller for the GET endpoint that talks to the agent
import { HumanMessage } from '@langchain/core/messages'

const searchAgentController = async (req, res) => {
    //get the human text from the query string
    const { text_message } = req.query
    if (!text_message) return res.status(400).json({
        message: 'No text sent.'
    })
    //invoke the agent with the user's message
    const agentFinalState = await appAgent.invoke(
        {
            messages: [new HumanMessage(text_message)]
        },
        { streamMode: 'values' }
    )
    //send back the agent's final answer
    res.status(200).json({
        text: agentFinalState.messages[agentFinalState.messages.length - 1].content
    })
}

Frontend
The frontend is a simple HTML + CSS + JS stack that demonstrates how you can use an API to integrate this AI agent into your website. It sends a GET request and uses the response to display the right answer. Below is an illustration of how the fetch API can be used.
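The following is a minimal sketch of that request; the route path (/agent) is illustrative and the port comes from the .env file above, so adjust both to match how you mount the controller in Express.

// frontend sketch – calls the agent endpoint with the user's question (route path is illustrative)
const convertBill = async (question) => {
    const url = `http://localhost:4556/agent?text_message=${encodeURIComponent(question)}`
    const response = await fetch(url)
    if (!response.ok) throw new Error(`Request failed: ${response.status}`)
    const data = await response.json()
    return data.text // the agent's answer from the controller above
}

// example: ask for a conversion of the current bill
convertBill('Convert a bill of 12.5, 7.25 and 30 dollars to Kenyan shillings at the current rate.')
    .then((answer) => console.log(answer))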
There you go! We have successfully created a basic tool-calling agent using Azure OpenAI and LangChain; go ahead and expand the code base to your liking. If you have questions, you can comment below or reach out on my socials.
Additional Reads