Tool Calling

Tool calling lets the model invoke your Python functions during a conversation.

Define a Tool

You can define tools with a plain @tool decorator, which reads the docstring, or with explicit metadata passed to the decorator.

Option A: Docstring-based

from agentapi import tool


@tool
def get_weather(city: str) -> str:
    """Get current weather for a city in plain text."""
    return f"Weather in {city}: sunny"
Option B: Explicit metadata

from agentapi import tool


@tool(
    name="get_weather",
    description="Get current weather conditions for a city.",
    context="Use this for weather intent. Ask for city when missing.",
)
def get_weather(city: str) -> str:
    return f"Weather in {city}: sunny"

Metadata helps the model choose tools more reliably in ambiguous prompts.
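To illustrate how a decorator like this could capture both docstring-based and explicit metadata, here is a minimal self-contained sketch. It is not AgentAPI's actual implementation; the attribute names (`tool_name`, `tool_description`, `tool_context`) are hypothetical.

```python
def tool(func=None, *, name=None, description=None, context=None):
    """Sketch: attach tool metadata to a function, supporting both
    plain @tool and @tool(...) usage. Hypothetical, for illustration only."""
    def wrap(f):
        f.tool_name = name or f.__name__
        # Fall back to the docstring when no explicit description is given.
        f.tool_description = description or (f.__doc__ or "").strip()
        f.tool_context = context
        return f
    if func is not None:
        return wrap(func)  # used as plain @tool
    return wrap            # used as @tool(name=..., description=...)


@tool
def get_weather(city: str) -> str:
    """Get current weather for a city in plain text."""
    return f"Weather in {city}: sunny"
```

With the plain form, the docstring becomes the description; with the keyword form, the explicit values win.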

Attach Tools to an Agent

from agentapi import Agent
from tools import get_weather

agent = Agent(
    system_prompt="You are a weather assistant",
    provider="openai",
    tools=[get_weather],
)

How Tool Execution Works

  1. User message is sent to provider with tool schemas.
  2. Provider may return tool calls.
  3. Agent executes matching Python functions.
  4. Tool outputs are appended to conversation memory.
  5. Agent asks provider again for final response.
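Steps 2 through 4 can be sketched in plain Python. The `TOOLS` registry, the `run_turn` helper, and the shape of the provider response below are illustrative assumptions, not AgentAPI internals:

```python
import json

# Hypothetical registry mapping tool names to Python callables.
TOOLS = {"get_weather": lambda city: f"Weather in {city}: sunny"}


def run_turn(provider_response: dict, memory: list) -> list:
    """Execute any tool calls in a provider response and record the outputs."""
    for call in provider_response.get("tool_calls", []):   # step 2: provider returned calls
        func = TOOLS[call["name"]]                          # step 3: match the Python function
        args = json.loads(call["arguments"])                # arguments arrive as a JSON string
        output = func(**args)
        memory.append(                                      # step 4: append output to memory
            {"role": "tool", "name": call["name"], "content": output}
        )
    return memory  # step 5 would send the updated memory back to the provider


memory = run_turn(
    {"tool_calls": [{"name": "get_weather", "arguments": '{"city": "Paris"}'}]},
    [],
)
```

The real agent loop additionally serializes tool schemas for the provider and issues the follow-up request for the final response.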
Best Practices

  • Add explicit description and context for model-facing intent.
  • Use clear argument names and type hints.
  • Keep outputs deterministic and concise.
  • Return machine-friendly strings for downstream parsing when needed.
  • Handle exceptions inside the tool and return actionable failure text.

Tool Design Example

@tool(
    description="Lookup current stock quantity for a SKU.",
    context="Use before confirming order availability.",
)
def get_inventory(sku: str) -> str:
    try:
        qty = inventory_service.lookup(sku)
        return f"sku={sku}; quantity={qty}"
    except Exception:
        return f"sku={sku}; error=inventory_lookup_failed"

Parsing and Validation

AgentAPI safely parses tool arguments from model JSON payloads before execution.
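One way such parsing and validation might work, sketched with the standard library. The `parse_args` helper is hypothetical, not AgentAPI's API; it binds the decoded JSON against the function signature and checks simple type hints:

```python
import inspect
import json


def parse_args(func, payload: str) -> dict:
    """Sketch: decode a model's JSON arguments and validate them
    against the target function's signature. Illustration only."""
    args = json.loads(payload)
    sig = inspect.signature(func)
    sig.bind(**args)  # raises TypeError on missing or unexpected arguments
    for name, value in args.items():
        expected = sig.parameters[name].annotation
        if expected is not inspect.Parameter.empty and not isinstance(value, expected):
            raise TypeError(
                f"{name}: expected {expected.__name__}, got {type(value).__name__}"
            )
    return args


def get_weather(city: str) -> str:
    return f"Weather in {city}: sunny"


args = parse_args(get_weather, '{"city": "Paris"}')
```

A payload with a wrong type (e.g. `{"city": 3}`) or an unknown key fails before the tool ever runs, which keeps bad model output from reaching your code.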