Function calling (also known as “tool calling” or “tool use”) lets you connect LLMs to external data and systems. You can define a set of functions as tools that the LLM has access to, and it can use them when appropriate.

This guide will explain how to integrate function calls into your agent.

Example: Add a get_weather function

This section will walk you through the process of adding a get_weather function as a tool.

Here’s how we’ll do it:

  1. Define the get_weather function.
  2. Make the LLM aware of the tool.
  3. Make Jay aware of the tool.

1. Define the get_weather function

Open the file containing your agent (e.g. agent/main.py). Then, copy and paste the get_weather function below into it:

import aiohttp

async def get_weather(location: str) -> str:
    """Fetch the current conditions for a location from the wttr.in weather service."""
    # %C = weather condition, %t = temperature
    url = f"https://wttr.in/{location}?format=%C+%t"
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            if response.status == 200:
                weather_data = await response.text()
                return f"The weather in {location} is {weather_data}."
            else:
                return f"Failed to get weather data (status code: {response.status})."

2. Make the LLM aware of the tool

We’ll make the LLM aware of the get_weather function by adding the tools definition below to your LLM response handler:

import os

from openai import AsyncOpenAI

async def llm_response_handler(input: LLMResponseHandlerInput):
    client = AsyncOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    completion = await client.chat.completions.create(
        model="gpt-4o",
        messages=input["messages"],
        tools=[
            {
                "type": "function",
                "function": {
                    # The name and parameter names below must exactly match
                    # the get_weather function's signature (see the Important
                    # note below).
                    "name": "get_weather",
                    "description": "Get the current weather in a given location",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "location": {
                                "type": "string",
                                "description": "The city or location to retrieve the weather for",
                            }
                        },
                        "required": ["location"],
                        "additionalProperties": False,
                    },
                },
            }
        ],
        stream=True,
    )
    return completion

You can learn more about the fields in the tools definition above by following OpenAI’s Function Calling guide.

Important: The tool definition you include in your LLM response handler must exactly match the signature of the corresponding function. In other words, the function name (e.g. get_weather above) and the parameter name(s) (e.g. location) must be identical. Any mismatch can cause the tool call to fail at execution time or be ignored by the LLM.
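
If you want to guard against this kind of drift, you can verify the schema against the function at startup. The snippet below is an optional sketch (check_tool_schema is a hypothetical helper, not part of Jay or OpenAI's SDK); it uses Python's inspect module to compare the names:

import inspect

def check_tool_schema(func, tool):
    """Verify that a tool schema's name and parameter names match the function."""
    assert tool["function"]["name"] == func.__name__, "function name mismatch"
    expected = set(inspect.signature(func).parameters)
    declared = set(tool["function"]["parameters"]["properties"])
    assert declared == expected, f"schema params {declared} != function params {expected}"

# Example: pass get_weather and the tool dict from the tools list above, e.g.
# check_tool_schema(get_weather, tools[0])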

3. Make Jay aware of the tool

Add the get_weather function to the tools array in your Agent:

agent = Agent(
    tools=[get_weather],
    ...
)

Function Calling Lifecycle

Here’s what happens if your LLM decides to call a function during a session:

  1. The LLM returns a tool call message containing the function call (e.g. get_weather) and its arguments (e.g. "location": "New York City"); see the example messages after this list.
  2. Jay detects the tool call message returned by the LLM and calls the function with the specified arguments.
  3. Jay adds the function's result to the conversation history, then calls the LLM again with the full history so that the model can respond to the user.
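
For reference, here is roughly what that exchange looks like in OpenAI's chat completions format (the variable names, id, and content values here are illustrative):

tool_call_message = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_abc123",  # illustrative id
            "type": "function",
            "function": {
                "name": "get_weather",
                # Note: arguments arrive as a JSON string, not a dict.
                "arguments": '{"location": "New York City"}',
            },
        }
    ],
}

tool_result_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # matches the id of the tool call
    "content": "The weather in New York City is Sunny +25°C.",
}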

Function Event Handlers

You can define event handlers that respond to events emitted during the function calling lifecycle. These handlers are useful for sending data to external systems, such as your database or a third-party analytics service (a sketch follows the list below).

  • on_function_calls_collected (Reference): Called when Jay receives the complete set of functions to execute from the LLM.
  • on_function_calls_executed (Reference): Called after Jay has executed all the functions the model requested.
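
For example, you could forward tool activity to your logging or analytics pipeline. The sketch below is hypothetical: the handler names come from the list above, but the handler signatures and the way they're registered on the Agent are assumptions; see the linked references for the exact API.

async def on_function_calls_collected(input):
    # Hypothetical shape: log which tools the LLM requested, before they run.
    print("Function calls collected:", input)

async def on_function_calls_executed(input):
    # Hypothetical shape: record the results after all requested tools have run.
    print("Function calls executed:", input)

# Registering the handlers as Agent keyword arguments is an assumption.
agent = Agent(
    tools=[get_weather],
    on_function_calls_collected=on_function_calls_collected,
    on_function_calls_executed=on_function_calls_executed,
)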