Educator Developer Blog

Building AI Agents with Ease: Function Calling in VS Code AI Toolkit

shreyanfern
Aug 15, 2025

Function calling is a powerful technique that allows Large Language Models (LLMs) to go beyond their static training data and interact with real-world, dynamic information sources like databases and APIs. This capability turns a simple chat interface into a powerful tool for fetching real-time data, executing code, and much more.

The process of tool/function calling typically involves two main components: a client application and the LLM. A user's request, such as "Do I need to carry an umbrella in Bangalore today?", is sent from the client application to the LLM. Along with this message, a tool definition is provided. This definition gives the LLM the context it needs to understand what tools are available and how to use them.

The LLM analyses the user's request and the list of available tools. Based on this analysis, it identifies the most appropriate tool to use (e.g., a weather API) and determines the correct way to call it, including any necessary input parameters.

Once the LLM recommends a tool call, the client application is responsible for executing it. The output of this tool—for example, the weather API's response of "rainy"—is then sent back to the LLM. The LLM processes this new information and generates a final, conversational, human-friendly response for the user, such as "Yes, it's rainy in Bangalore, so you should definitely carry an umbrella!"
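
To make that flow concrete, here is a minimal illustrative sketch in Python. The names and shapes below are hypothetical, not a real SDK; they simply trace the round trip described above.

# Illustrative only: hypothetical shapes tracing the round trip above.

# 1. The client sends the user's message plus the tool definitions to the
#    LLM. The LLM executes nothing itself; it replies with a structured
#    tool call for the client to run:
llm_reply = {
    "tool_call": {
        "name": "get_weather",
        "arguments": {"location": "Bangalore"},
    }
}

# 2. The client application executes the tool and captures its output...
tool_output = "rainy"  # e.g. what a weather API returned

# 3. ...then sends the output back to the LLM, which turns it into a
#    conversational answer for the user:
final_answer = "Yes, it's rainy in Bangalore, so you should definitely carry an umbrella!"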

Tool definitions are what make this process work. A tool definition must include:

  • Name: A unique name for the tool.
  • Description: A clear explanation of what the tool does and when it should be used.
  • Input Parameters: A list of all required input values for the tool.

By carefully crafting these definitions, developers can enable LLMs to perform a wide variety of tasks with external data, significantly expanding their capabilities.
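
For example, a minimal definition for a hypothetical weather tool could look like the following Python dictionary (the exact shape varies by provider; this one mirrors the OpenAI-style schema used later in this post):

# A minimal, illustrative tool definition showing the three parts above.
get_weather_tool = {
    "name": "get_weather",  # unique name the LLM uses to request the tool
    "description": "Get the current weather for a city. Use this when the user asks about weather conditions.",
    "parameters": {  # required input values, expressed as a JSON Schema object
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA",
            }
        },
        "required": ["location"],
    },
}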

Agent Development with Function Calling:

Function calling is the technical mechanism that allows an agent to take action. Think of the agent as the intelligent system or "brain" that reasons and plans, and the function calls as the agent's hands—the specific actions it can perform in the real world.

The agent's workflow is powered by the following process:

  1. The agent (the LLM) receives a request from the user.
  2. It reasons about the request and decides what actions are needed to fulfil it.
  3. It then uses function calling to generate a structured call to one of its predefined tools (e.g., an API, a database query, or a piece of code).
  4. Your code executes this function call.
  5. The result of that action is fed back to the agent so it can decide on the next step or provide a final answer.

So, "developing an agent with function calling" is a precise way to describe what we are doing: giving the agent the ability to interact with external systems.

Agent Development with Function Calling
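
In code, that workflow reduces to a simple loop. The sketch below uses hypothetical stand-ins (llm_complete, run_tool, tool_message) for a model client and tool plumbing; the full runnable version generated by the AI Toolkit appears at the end of this post.

# Sketch of the agent loop (steps 1-5 above). The three callables are
# hypothetical stand-ins for your model client and tool plumbing.
def run_agent(llm_complete, run_tool, tool_message, messages, tools):
    while True:
        reply = llm_complete(messages, tools)  # steps 1-3: model reasons, may emit tool calls
        if not reply.tool_calls:               # step 5: no more actions needed
            return reply.content               # final answer for the user
        messages.append(reply)                 # keep the tool request in the history
        for call in reply.tool_calls:          # step 4: our code executes each call
            result = run_tool(call.name, call.arguments)
            messages.append(tool_message(result, call.id))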

Developing an Agent with the AI Toolkit:

An "agent" is a model that decides what actions to perform. As a first step, we augment the LLM call with the ability to take action via function calling (also called tool calling). This can be further extended with the Model Context Protocol (MCP). We will demonstrate this using the AI Toolkit.

To begin, make sure the AI Toolkit is installed on the machine. For detailed step-by-step installation and usage instructions, refer to the Blog or the AI Sparks series on YouTube.

Once we have the environment ready, navigate to the Agent Builder (earlier known as the Prompt Builder).

 

AI Toolkit: Agent Builder

Now click on the “+” icon and select “Custom Tool” from the dropdown.

AI Toolkit: Custom tool

After this, a popup window appears in which the tool configuration must be completed. There is an option to use an example tool or to upload an existing schema; in this beginner tutorial, the “get_weather” example will be used. Upon selection, it populates all the fields automatically, whereas for a custom tool upload, the schema must be defined by the user to match the tool call.

 

AI Toolkit: get_weather

“get_weather” is an imitation of a real API call. We will use a default value of “rainy”, but it can be modified to make a real API call to OpenWeather or any other weather API provider.

{
  "type": "object",
  "properties": {
    "location": {
      "type": "string",
      "description": "The city and state e.g. San Francisco, CA"
    },
    "unit": {
      "type": "string",
      "enum": [
        "c",
        "f"
      ]
    }
  },
  "additionalProperties": false,
  "required": [
    "location"
  ]
}
AI Toolkit: Schema
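
Since “get_weather” here only returns the mock value, the sketch below shows one way a real implementation could look. It assumes the requests package, OpenWeatherMap's current-weather endpoint, and an OPENWEATHER_API_KEY environment variable; all three are assumptions for illustration, not part of the toolkit.

import os

import requests  # assumed third-party dependency: pip install requests

def get_weather(location: str, unit: str = "c") -> str:
    # Replace the mock "rainy" default with a real lookup. The endpoint and
    # response shape below follow OpenWeatherMap; adapt them to your provider.
    response = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={
            "q": location,
            "units": "metric" if unit == "c" else "imperial",
            "appid": os.environ["OPENWEATHER_API_KEY"],  # your API key
        },
        timeout=10,
    )
    response.raise_for_status()
    # e.g. "Rain" -> "rain", similar in spirit to the mock's "rainy"
    return response.json()["weather"][0]["main"].lower()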

Finally, our Agent is just a click away! All we need to do is click the “Add” button, and the tool gets added! Congratulations on your first successful agent development!

It's now time to test the first agent application with the GPT model (hosted via GitHub). Notice that the tool is now added. Add the user prompt “Do I need to carry an umbrella today in Bangalore?”

AI Toolkit: Custom tool configuration

Let's first run it as it is and check whether the model recognizes that it needs external data. To do this, simply click RUN.

AI Toolkit: Run

As is clearly evident, the model now identifies that it needs external data: it returns the function name and the required parameters. The next step is to simulate the function by providing mock weather data. To do this, add “rainy” in the “Enter tool response” placeholder in the custom tools section.

AI Toolkit: Tool

  

Next, let's check whether the agent can pick up this tool response and provide the final answer. Let's click “RUN” again.

AI Toolkit: Model response

In the model response section, we can now see the agent responding with “Yes, you should carry an umbrella today in Bangalore, as it is rainy. Stay dry!”. The exchange can also be added to the conversation history by simply clicking “Add to Prompts”.

That's it! Our first Agent is ready to give us weather updates!

For further flexibility or deployment, the AI Toolkit also provides the code in various languages.

Upon clicking “View Code”, the toolkit prompts us to select a language. We'll use Python and select the Azure AI Inference SDK. Following is the code we received from the AI Toolkit.

 

"""Run this model in Python

> pip install azure-ai-inference
"""
import json  # needed to parse the tool-call arguments below
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage, ToolMessage
from azure.ai.inference.models import ImageContentItem, ImageUrl, TextContentItem
from azure.core.credentials import AzureKeyCredential

# To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
# Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
client = ChatCompletionsClient(
    endpoint = "https://models.github.ai/inference",
    credential = AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
    api_version = "2024-08-01-preview",
)

def get_weather(location: str = "", unit: str = "c") -> str:
    # Mock tool: always reports rain. The schema's arguments are accepted
    # so the model's tool call can be applied directly; swap in a real
    # weather API call here if desired.
    return "rainy"

messages = [
    SystemMessage(content = "You are a helpful AI Assistant."),
    UserMessage(content = [
        TextContentItem(text = "Do I need to carry an umbrella today in Bangalore?"),
    ]),
]

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Determine weather in my location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state e.g. San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": [
                            "c",
                            "f"
                        ]
                    }
                },
                "additionalProperties": False,
                "required": [
                    "location"
                ]
            }
        }
    }
]

response_format = "text"

# Agent loop: keep calling the model until it stops requesting tool calls
# and returns a final answer instead.
while True:
    response = client.complete(
        messages = messages,
        model = "openai/gpt-4o",
        tools = tools,
        response_format = response_format,
        temperature = 1,
        top_p = 1,
    )

    if response.choices[0].message.tool_calls:
        print(response.choices[0].message.tool_calls)
        # Echo the assistant's tool-call message into the history, then
        # answer each call with a ToolMessage carrying the tool's output.
        messages.append(response.choices[0].message)
        for tool_call in response.choices[0].message.tool_calls:
            arguments = json.loads(tool_call.function.arguments or "{}")
            messages.append(ToolMessage(
                content = globals()[tool_call.function.name](**arguments),
                tool_call_id = tool_call.id,
            ))
    else:
        print(f"[Model Response] {response.choices[0].message.content}")
        break
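
To try the script locally, install the azure-ai-inference package, export a GitHub personal access token as GITHUB_TOKEN (as noted in the comment at the top of the script), and run it with Python. With the mock get_weather in place, the printed tool call is followed by a final answer equivalent to the one we saw in the Agent Builder.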

 

In upcoming blogs, we will explore and implement more complex agentic systems with the help of the AI Toolkit. Stay tuned!
