How to build Tool-calling Agents with Azure OpenAI and LangGraph
Introducing MyTreat

Our demo is a fictional website that shows customers their total bill in dollars, with the option of converting the total to their local currency. The button sends a request to the Node.js service, and a response is returned by our agent based on the tool it chooses. Let's dive in and understand how this works from a broader perspective.

Prerequisites

An active Azure subscription. You can sign up for a free trial here, or get $100 worth of credits on Azure every year if you are a student. A GitHub account (optional). Node.js LTS 18+. VS Code installed (or your favorite IDE). Basic knowledge of HTML, CSS and JavaScript.

Creating an Azure OpenAI Resource

Go over to your browser and key in portal.azure.com to access the Microsoft Azure Portal. Navigate to the search bar and type Azure OpenAI, then click + Create. Fill in the input boxes with appropriate values, for example as shown below, press Next until you reach Review + submit, then click Create. After the deployment is done, go to the deployment and access the Azure AI Foundry portal using the button as shown below; you can also use the link as demonstrated below.

In the Azure AI Foundry portal, we have to create our model instance, so go over to Model Catalog on the left panel beneath Get Started. Select a desired model; in this case I used gpt-35-turbo for chat completion (in your case use gpt-4o). Below is a way of doing this: choose a model (gpt-4o), click Deploy, give the deployment a new name, e.g. myTreatmodel, then click Deploy and wait for it to finish. On the left panel, go over to Deployments and you will see the model you have created.

Access your Azure OpenAI Resource Key

Go back to the Azure portal, specifically to the deployment instance we created, and select Resource Management on the left panel. Click Keys and Endpoint. Copy any of the keys as shown below and keep it very safe, as we will use it in our .env file.
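Before wiring the key into the app, it can help to verify at startup that every setting the later code expects is actually present. The following is a minimal sketch, not part of the sample repo: the variable names match the .env file used in this article, but the check itself and its function names are illustrative.

```javascript
// Fail fast if an Azure OpenAI setting is missing, instead of hitting
// a cryptic 401 later. (Illustrative sketch; names match this article's .env.)
const requiredVars = [
  'AZURE_OPENAI_API_INSTANCE_NAME',
  'AZURE_OPENAI_API_DEPLOYMENT_NAME',
  'AZURE_OPENAI_API_KEY',
  'AZURE_OPENAI_API_VERSION'
]

// Return the names of any settings absent from the given environment object.
const missingVars = (env) => requiredVars.filter((name) => !env[name])

console.log(missingVars({ AZURE_OPENAI_API_KEY: 'xxx' }).length) // 3
```

In the real service you would call `missingVars(process.env)` once at startup and throw if the returned array is non-empty.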
Configuring your project

Create a new project folder on your local machine and add these variables to a .env file in the root folder:

```
AZURE_OPENAI_API_INSTANCE_NAME=
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_VERSION="2024-08-01-preview"
LANGCHAIN_TRACING_V2="false"
LANGCHAIN_CALLBACKS_BACKGROUND="false"
PORT=4556
```

Starting a new project

Go over to https://github.com/tiprock-network/mytreat.git and follow the instructions to set up the new project. If you do not have Git installed, click the Code button and press Download ZIP; this will get you the project folder, and you can follow the same setup procedure.

Creating a custom tool

The math tool lives in the utils folder. The code shown below uses `tool` from LangChain to build a tool, and the tool's schema is defined with Zod, a library that helps validate an object's property values. The price function takes in an array of prices and the exchange rate, adds the prices up, and converts the total using the exchange rate:

```javascript
import { tool } from '@langchain/core/tools'
import { z } from 'zod'

const priceConv = tool((input) => {
    // get the prices and add them up after turning each into a number
    let sum = 0
    input.prices.forEach((price) => {
        let price_check = parseFloat(price)
        sum += price_check
    })
    // now convert the total using the exchange rate
    let final_price = parseFloat(input.exchange_rate) * sum
    return final_price
}, {
    name: 'add_prices_and_convert',
    description: 'Add prices and convert based on exchange rate.',
    schema: z.object({
        prices: z.number({
            required_error: 'Price should not be empty.',
            invalid_type_error: 'Price must be a number.'
        }).array().nonempty().describe('Prices of items listed.'),
        exchange_rate: z.string().describe('Current currency exchange rate.')
    })
})

export { priceConv }
```

Utilizing the tool

In the controllers folder we bring the tool in by importing it, then pass it into our array of tools.
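To sanity-check the arithmetic the tool performs, the core computation can be exercised on its own. This is a plain-JavaScript sketch of the same logic, with no LangChain involved; the function name is illustrative.

```javascript
// The same math the add_prices_and_convert tool runs:
// sum the prices, then multiply the total by the exchange rate.
const addPricesAndConvert = (prices, exchangeRate) => {
  const sum = prices.reduce((acc, price) => acc + parseFloat(price), 0)
  return parseFloat(exchangeRate) * sum
}

console.log(addPricesAndConvert([12.5, 7.5], '129.0')) // 2580
```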
Notice that we also have the Tavily Search Tool; you can learn how to implement it in the Additional Reads section, or simply remove it.

Agent Model and the Call Process

This code defines an AI agent using LangGraph and LangChain.js, powered by GPT-4o from Azure OpenAI. It initializes a ToolNode to manage tools like priceConv and binds them to the agent model. The StateGraph handles decision-making, determining whether the agent should call a tool or return a direct response. If a tool is needed, the workflow routes the request accordingly; otherwise, the agent responds to the user. The callModel function invokes the agent, processing messages and ensuring seamless tool integration.

The searchAgentController is a GET endpoint that accepts user queries (text_message). It processes input through the compiled LangGraph workflow, invoking the agent to generate a response. If a tool is required, the agent calls it before finalizing the output. The response is then sent back to the user, ensuring dynamic and efficient tool-assisted reasoning.
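From the client's point of view, calling this endpoint is a single GET request with the query in the text_message parameter. The sketch below shows how such a request could be built; the route path and port are assumptions for illustration, so match them to your own Express routes.

```javascript
// Build the query URL the frontend sends to the agent endpoint.
// The '/api/agent' path and port 4556 are illustrative assumptions.
const buildAgentUrl = (textMessage) =>
  `http://localhost:4556/api/agent?text_message=${encodeURIComponent(textMessage)}`

console.log(buildAgentUrl('total in KES?'))
// http://localhost:4556/api/agent?text_message=total%20in%20KES%3F
```

A browser or Node 18+ client would then do `const res = await fetch(buildAgentUrl('total in KES?'))` and read `(await res.json()).text`.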
```javascript
// create the tools the agent will use
// const agentTools = [new TavilySearchResults({ maxResults: 5 }), priceConv]
const agentTools = [priceConv]
const toolNode = new ToolNode(agentTools)

const agentModel = new AzureChatOpenAI({
    model: 'gpt-4o',
    temperature: 0,
    azureOpenAIApiKey: AZURE_OPENAI_API_KEY,
    azureOpenAIApiInstanceName: AZURE_OPENAI_API_INSTANCE_NAME,
    azureOpenAIApiDeploymentName: AZURE_OPENAI_API_DEPLOYMENT_NAME,
    azureOpenAIApiVersion: AZURE_OPENAI_API_VERSION
}).bindTools(agentTools)

// make a decision to continue or not
const shouldContinue = (state) => {
    const { messages } = state
    const lastMessage = messages[messages.length - 1]
    // upon a tool call we go to tools
    if ('tool_calls' in lastMessage && Array.isArray(lastMessage.tool_calls) && lastMessage.tool_calls?.length) return 'tools'
    // if no tool call is made we stop and return to the user
    return END
}

const callModel = async (state) => {
    const response = await agentModel.invoke(state.messages)
    return { messages: [response] }
}

// define a new graph
const workflow = new StateGraph(MessagesAnnotation)
    .addNode('agent', callModel)
    .addNode('tools', toolNode)
    .addEdge(START, 'agent')
    .addConditionalEdges('agent', shouldContinue, ['tools', END])
    .addEdge('tools', 'agent')

const appAgent = workflow.compile()
```

Frontend

The frontend is a simple HTML+CSS+JS stack that demonstrates how you can use an API to integrate this AI agent into your website. It sends a GET request and uses the response to display the answer. The controller that serves it looks like this:

```javascript
const searchAgentController = async (req, res) => {
    // get the human text
    const { text_message } = req.query
    if (!text_message) return res.status(400).json({
        message: 'No text sent.'
    })
    // invoke the agent
    const agentFinalState = await appAgent.invoke(
        { messages: [new HumanMessage(text_message)] },
        { streamMode: 'values' }
    )
    res.status(200).json({
        text: agentFinalState.messages[agentFinalState.messages.length - 1].content
    })
}
```

There you go! We have successfully created a basic tool-calling agent using Azure OpenAI and LangChain. Go ahead and expand the code base to your liking. If you have questions, you can comment below or reach out on my socials.

Additional Reads

Azure OpenAI Service Models
Generative AI for Beginners
AI Agents for Beginners Course
LangGraph Tutorial
Develop Generative AI Apps in Azure AI Foundry Portal

Initiating a group chat
Hi,

We are trying to initiate a group chat with members in Teams. We are using Postman to call the Bot Framework REST API to start a group conversation. Here is the payload we are using:

```json
{
    "type": "message",
    "isGroup": true,
    "members": [
        { "id": "Some Id" },
        { "id": "Some Id" }
    ],
    "tenantId": "Some Id",
    "topicName": "test"
}
```

When we try to execute the request, we get the following error:

```json
{
    "error": {
        "code": "BadSyntax",
        "message": "Incorrect conversation creation parameters"
    }
}
```

If we try to initiate a conversation with only ONE member with the following payload, then everything works correctly:

```json
{
    "type": "message",
    "isGroup": false,
    "members": [
        { "id": "Some Id" }
    ],
    "tenantId": "Some Id",
    "topicName": "test"
}
```

Can anyone advise on the proper syntax for creating a group chat in Teams with the Bot Framework REST API?

Thank You

Step by Step Guide: Migrating v3 to v4 programming model for Azure Functions for Node.js Applications
In this article I will show you how to migrate from version 3 to version 4 of the programming model for Azure Functions for Node.js applications, using a real-world project: Contoso Real Estate.

Power Virtual Agent Bots as Skills for an Azure Bot
Hi,

We have an Azure Bot deployed and are exploring the possibility of using a PVA bot as a skill for the Azure Bot. From https://learn.microsoft.com/en-us/power-virtual-agents/advanced-use-pva-as-a-skill#add-your-bot-framework-bot-to-the-allowlist-for-your-power-virtual-agents-bot, we have read that we can add a PVA bot as a skill to a bot created via Bot Framework Composer. However, we did not find any documentation on how to accomplish the same using code. Our Azure Bot is written in Node.js. We have a skill dialog bot that is currently connected to the root bot using Node.js. Is it possible to have our Node.js root bot talk to a PVA bot?

Thank You

Error 554 5.2.0 while sending email
Hi,

I'm working on a personal project with Node.js and was testing an email template for password recovery. I created a new Hotmail account and started testing by sending emails from my test account to my personal email to check the email layout. But after some time, I started to get this error:

```
554 5.2.0 STOREDRV.Submission.Exception:OutboundSpamException; Failed to process message due to a permanent exception with message [BeginDiagnosticData]WASCL UserAction verdict is not None. Actual verdict is Suspend, ShowTierUpgrade. OutboundSpamException: WASCL UserAction verdict is not None. Actual verdict is Suspend, ShowTierUpgrade.[EndDiagnosticData] [Hostname=RO1P152MB2796.LAMP152.PROD.OUTLOOK.COM]
```

I already verified my account and still can't send any email. I did some research and found that it has to do with spam limitations, but I couldn't figure out how to remove the block. Can anybody help me?

Refused to execute script from 'myfiles.bundle.js.gz' in SharePoint Online Master page
I have created a bundle file with compression-webpack-plugin. But when I try to use it in a SharePoint Online master page, it throws an error:

```
Refused to execute script from 'myfiles.bundle.js.gz' because its MIME type ('application/x-gzip') is not executable, and strict MIME type checking is enabled.
```

How can I use a .gz file in a SharePoint Online master page?

Thanks,
Gaurav Goyal

Skill Dialog Bots and Update Activity
Hi,

In a dialog-root-bot and skill-dialog-bot implementation, if my skill-dialog-bot shows an adaptive card and the user has already acted on the presented card, I want to update the card with either a text message or another card, to prevent the user from interacting with the previous card again.

My dialog-root-bot's WaterfallDialog definition is as follows:

```javascript
.addDialog(new WaterfallDialog(WATERFALL_DIALOG, [
    this.actStep.bind(this),
    this.finalStep.bind(this)
]));
```

In finalStep, I check the results from the skill-dialog-bot and determine whether I need to update the activity and present the next card or message. In the following snippet, I replace the previous card with a new card:

```javascript
const details = stepContext.result;
const conversationId = details.conversationId;
const activityId = details.activityId;

const nextCard = stepContext.context.activity;
nextCard.value = null;
nextCard.text = null;
nextCard.type = ActivityTypes.Message;
nextCard.conversation.id = conversationId;
nextCard.id = activityId;
nextCard.attachments = [details.nextCard];

await stepContext.context.updateActivity(nextCard);
```

What happens now is that even though the skill-dialog-bot has ended the conversation, the line `stepContext.context.updateActivity(nextCard)` will again pass control to the skill-dialog-bot. Is there a way to prevent that?

Thank You

Skill Dialog: endDialog() does not work
Hi,

I am implementing a dialogRootBot that calls a skillDialogBot. My dialogRootBot is able to invoke the skillDialogBot, and the conversation progresses until the point where the skillDialogBot is supposed to return its results to the dialogRootBot.

The skillDialogBot has the following setup:

```javascript
this.addDialog(new TextPrompt(TEXT_PROMPT))
    .addDialog(new ConfirmPrompt(CONFIRM_PROMPT))
    .addDialog(new WaterfallDialog(WATERFALL_DIALOG, [
        this.processStep.bind(this)
    ]));
```

The processStep is laid out like this:

```javascript
async processStep(stepContext) {
    const details = stepContext.options;
    details.result = { status: 'success' };
    return await stepContext.endDialog(stepContext.options);
}
```

I was expecting the dialogRootBot to get the result from the skillDialogBot after processStep has called endDialog, but that never happens. Instead, the user is stuck with the skillDialogBot until the user manually types in "abort", which is the command the dialogRootBot monitors in its onContinueDialog() implementation to cancel all dialogs.

Here is how onContinueDialog() looks:

```javascript
async onContinueDialog(innerDc) {
    const activeSkill = await this.activeSkillProperty.get(innerDc.context, () => null);
    const activity = innerDc.context.activity;

    if (activeSkill != null && activity.type === ActivityTypes.Message && activity.text) {
        if (activity.text.toLocaleLowerCase() === 'abort') {
            // Cancel all dialogs when the user says abort.
            // The SkillDialog automatically sends an EndOfConversation message to the skill to let the
            // skill know that it needs to end its current dialogs, too.
            await innerDc.cancelAllDialogs();
            return await innerDc.replaceDialog(this.initialDialogId, { text: 'Request canceled!' });
        }
    }
    return await super.onContinueDialog(innerDc);
}
```

I modeled this after the botbuilder-samples\samples\javascript_nodejs\81.skills-skilldialog sample.
If I were to change the skillDialogBot and have it do a ConfirmPrompt() before the finalStep()'s endDialog(), then the conversation ends correctly, with the skillDialogBot posting the dialog's results to the rootDialogBot. For the sake of clarity, this is how the bookingDialog in the skills-skilldialog sample looks:

```javascript
/**
 * Confirm the information the user has provided.
 */
async confirmStep(stepContext) {
    const bookingDetails = stepContext.options;

    // Capture the results of the previous step.
    bookingDetails.travelDate = stepContext.result;
    const messageText = `Please confirm, I have you traveling to: ${ bookingDetails.destination } from: ${ bookingDetails.origin } on: ${ bookingDetails.travelDate }. Is this correct?`;
    const msg = MessageFactory.text(messageText, messageText, InputHints.ExpectingInput);

    // Offer a YES/NO prompt.
    return await stepContext.prompt(CONFIRM_PROMPT, { prompt: msg });
}

/**
 * Complete the interaction and end the dialog.
 */
async finalStep(stepContext) {
    if (stepContext.result === true) {
        const bookingDetails = stepContext.options;
        return await stepContext.endDialog(bookingDetails);
    }
    return await stepContext.endDialog();
}
```

Is it not possible to endDialog() without a prompt?

Thank You