# Tool Calling
Tool calling lets the model invoke your Python functions during a conversation.
## Define a Tool

You can define tools with the bare `@tool` decorator or with explicit metadata.
### Option A: Docstring-based

```python
from agentapi import tool

@tool
def get_weather(city: str) -> str:
    """Get current weather for a city in plain text."""
    return f"Weather in {city}: sunny"
```
### Option B: Explicit metadata (recommended)

```python
from agentapi import tool

@tool(
    name="get_weather",
    description="Get current weather conditions for a city.",
    context="Use this for weather intent. Ask for city when missing.",
)
def get_weather(city: str) -> str:
    return f"Weather in {city}: sunny"
```
Metadata helps the model choose tools more reliably in ambiguous prompts.
## Attach Tools to an Agent

```python
from agentapi import Agent
from tools import get_weather

agent = Agent(
    system_prompt="You are a weather assistant",
    provider="openai",
    tools=[get_weather],
)
```
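When tools are attached, each function's signature has to be translated into a provider-facing schema. The sketch below shows one way such a schema could be derived from type hints; `build_schema` and the JSON-Schema-style output shape are illustrative assumptions, not AgentAPI internals:

```python
import inspect
from typing import get_type_hints

# Assumed mapping from Python annotations to JSON Schema types.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_schema(func) -> dict:
    """Derive a JSON-Schema-like tool description from a function signature.

    Hypothetical helper: AgentAPI's real schema format may differ.
    """
    hints = get_type_hints(func)
    properties = {}
    required = []
    for name, param in inspect.signature(func).parameters.items():
        properties[name] = {"type": _JSON_TYPES.get(hints.get(name, str), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default means the model must supply it
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def get_weather(city: str) -> str:
    """Get current weather for a city in plain text."""
    return f"Weather in {city}: sunny"

schema = build_schema(get_weather)
```

This is why clear type hints matter: they are the only signal the schema generator has about each argument.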
## How Tool Execution Works

1. The user message is sent to the provider along with the tool schemas.
2. The provider may respond with one or more tool calls.
3. The agent executes the matching Python functions.
4. Tool outputs are appended to conversation memory.
5. The agent calls the provider again to produce the final response.
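The loop above can be sketched in plain Python. Here `provider_call` and the message/response dictionaries are simplified stand-ins for a real provider client, not AgentAPI's internal types:

```python
import json

def run_turn(provider_call, tools, messages):
    """One user turn: call the provider, execute any tool calls, ask again.

    Simplified sketch; `provider_call` stands in for a real provider client
    that returns {"tool_calls": [...]} and eventually {"content": ...}.
    """
    registry = {fn.__name__: fn for fn in tools}
    response = provider_call(messages)  # may contain tool calls
    while response.get("tool_calls"):
        for call in response["tool_calls"]:
            fn = registry[call["name"]]
            args = json.loads(call["arguments"])  # provider sends JSON arguments
            output = fn(**args)
            # Tool output is appended to conversation memory.
            messages.append({"role": "tool", "name": call["name"], "content": output})
        response = provider_call(messages)  # ask again for the final response
    return response["content"]
```

The loop continues until the provider stops requesting tools, so a single turn can involve several rounds of tool execution.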
## Recommended Tool Authoring

- Add an explicit `description` and `context` for model-facing intent.
- Use clear argument names and type hints.
- Keep outputs deterministic and concise.
- Return machine-friendly strings for downstream parsing when needed.
- Handle exceptions inside the tool and return actionable failure text.
## Tool Design Example

```python
@tool(
    description="Look up current stock quantity for a SKU.",
    context="Use before confirming order availability.",
)
def get_inventory(sku: str) -> str:
    try:
        qty = inventory_service.lookup(sku)
        return f"sku={sku}; quantity={qty}"
    except Exception:
        # Return actionable failure text instead of raising.
        return f"sku={sku}; error=inventory_lookup_failed"
```
## Parsing and Validation
AgentAPI safely parses tool arguments from model JSON payloads before execution.
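A simplified illustration of that kind of argument checking, using `json` plus the tool's own type hints (this is a sketch under assumed behavior, not AgentAPI's actual implementation, which may be stricter):

```python
import json
from typing import get_type_hints

def parse_args(func, payload: str) -> dict:
    """Parse a JSON argument payload and check it against `func`'s type hints.

    Sketch only: handles simple builtin annotations (str, int, float, bool),
    which is enough for the tools shown on this page.
    """
    args = json.loads(payload)
    hints = get_type_hints(func)
    for name, value in args.items():
        expected = hints.get(name)
        if expected is None:
            raise TypeError(f"unexpected argument: {name}")
        if not isinstance(value, expected):
            raise TypeError(
                f"{name}: expected {expected.__name__}, got {type(value).__name__}"
            )
    return args
```

Validating before the call means a malformed model payload produces a clear error instead of a `TypeError` deep inside your tool.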