Tools and MCP Integration
Connecting external systems safely
Agents need to interact with the real world to be truly useful. That's what tool calls are for!
In this lesson you'll:
- Add a function tool that calls an external weather API
- Register an MCP (Model Context Protocol) server so the agent can access shared tools
Building a weather lookup tool
Function tools let the LLM invoke Python methods with structured arguments.
Imports
At the top of agent.py, add:
import aiohttp
from livekit.agents import RunContext
from livekit.agents.llm import function_tool
Implement the tool
Inside the Assistant class, add:
@function_tool
async def lookup_weather(self, context: RunContext, location: str) -> str:
    """Look up current weather information for the given location."""
    logger.info("Looking up weather for %s", location)
    try:
        # A tight timeout keeps a slow API from stalling the conversation.
        timeout = aiohttp.ClientTimeout(total=5)
        async with aiohttp.ClientSession(timeout=timeout) as session:
            # Pass the location via params so it is URL-encoded correctly.
            async with session.get(
                "http://shayne.app/weather", params={"location": location}
            ) as response:
                if response.status == 200:
                    data = await response.json()
                    condition = data.get("condition", "unknown")
                    temperature = data.get("temperature", "unknown")
                    unit = data.get("unit", "degrees")
                    return f"{condition} with a temperature of {temperature} {unit}"
                logger.error("Weather API returned status %s", response.status)
                return "Weather information is currently unavailable for this location."
    except Exception as exc:
        logger.error("Error fetching weather: %s", exc)
        return "Weather service is temporarily unavailable."
The tool returns a natural-language summary the LLM can incorporate directly into its reply.
Let's try it
- Ask “What’s the weather like in London?” and observe the tool call.
Connecting an MCP server
Model Context Protocol (MCP) gives agents a consistent interface to tools hosted outside your agent's codebase. Add the extra dependency:
uv add "livekit-agents[mcp]"
Then import the MCP module near the top:
from livekit.agents import mcp
Finally, register the server when initializing Assistant:
class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions=(
                "You are an upbeat, slightly sarcastic voice AI for tech support. "
                "Help the caller fix issues without rambling, and keep replies under 3 sentences."
            ),
            mcp_servers=[
                mcp.MCPServerHTTP(url="https://shayne.app/sse"),
            ],
        )
The example server exposes a simple add_numbers tool. LLMs aren't great at adding large numbers together, but with an MCP tool added, your agent will have no problem at all!
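If you're curious what a server like this looks like, here's a minimal sketch built with the FastMCP helper from the official mcp Python SDK. The server name and implementation are illustrative assumptions; the actual code behind shayne.app isn't shown in this lesson.

# Hypothetical sketch, not the actual implementation behind shayne.app.
from mcp.server.fastmcp import FastMCP

server = FastMCP("calculator")  # assumed server name

@server.tool()
def add_numbers(a: float, b: float) -> float:
    """Add two numbers and return the exact sum."""
    return a + b

if __name__ == "__main__":
    # Expose the tool over SSE so HTTP clients like MCPServerHTTP can connect.
    server.run(transport="sse")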
Tooling best practices
Design tools so they’re predictable, fast, and safe under failure:
- Correct function selection: keep tools single-purpose with clear names and docstrings.
- Parameter accuracy: validate and coerce types; provide defaults where safe.
- Timeouts and retries: set tight timeouts, bound retries, and surface tool errors to the LLM (see the retry sketch after this list).
- Idempotency: make repeated calls safe; use request IDs to dedupe.
- Authentication and rate limits: prefer scoped keys; implement backoff and circuit breakers.
- Observability: log tool name, args, duration, status, and error class for every invocation.
- Output contracts: return structured payloads and short NL summaries; never raw internal errors.
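To make the timeout-and-retry bullet concrete, here's a minimal sketch of a bounded-retry HTTP helper. The two-attempt budget, backoff step, and error payload are assumptions for illustration; tune them to your service's actual latency profile.

import asyncio
import aiohttp

# Hypothetical helper: bounded retries with a tight per-attempt timeout.
async def call_with_retries(url: str, params: dict, attempts: int = 2) -> dict:
    timeout = aiohttp.ClientTimeout(total=3)  # tight per-attempt budget
    last_error = None
    for attempt in range(attempts):
        try:
            async with aiohttp.ClientSession(timeout=timeout) as session:
                async with session.get(url, params=params) as response:
                    response.raise_for_status()
                    return await response.json()
        except (aiohttp.ClientError, asyncio.TimeoutError) as exc:
            last_error = exc
            await asyncio.sleep(0.5 * (attempt + 1))  # simple linear backoff
    # Surface the failure to the LLM as data it can explain, never a raw traceback.
    return {"error": f"service unavailable ({type(last_error).__name__})"}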
Latency tips:
- Keep synchronous tools sub-300ms; offload long jobs to background tasks and summarize status.
- Cache recent results (e.g., weather for a city) to avoid repeated calls in a session (a small cache sketch follows this list).
- Prefer local decisions to network hops when possible; batch where safe.
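A per-session cache for the weather example can be as simple as this sketch; the five-minute TTL is an arbitrary assumption:

import time

# Hypothetical in-memory cache: location -> (summary, expiry timestamp).
_weather_cache: dict[str, tuple[str, float]] = {}
CACHE_TTL_SECONDS = 300  # assumed five-minute freshness window

def get_cached_weather(location: str) -> str | None:
    entry = _weather_cache.get(location.lower())
    if entry and entry[1] > time.monotonic():
        return entry[0]
    return None

def cache_weather(location: str, summary: str) -> None:
    _weather_cache[location.lower()] = (summary, time.monotonic() + CACHE_TTL_SECONDS)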
Security & safety:
- Treat tool outputs as untrusted; validate shapes before injecting into replies (see the validation sketch after this list).
- Guardrail prompts: instruct the model to ask for missing parameters rather than guessing.
- Fall back gracefully: if a tool fails, explain the limitation and continue the conversation.
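For the untrusted-output point, a small shape check before building the reply might look like this sketch. The field names mirror the weather example above; the specific validation rules are assumptions.

def summarize_weather(data: object) -> str:
    # Validate the shape before trusting any field from the external service.
    if not isinstance(data, dict):
        return "Weather information is currently unavailable."
    condition = data.get("condition")
    temperature = data.get("temperature")
    if not isinstance(condition, str) or not isinstance(temperature, (int, float)):
        return "Weather information is currently unavailable."
    unit = data.get("unit") if isinstance(data.get("unit"), str) else "degrees"
    return f"{condition} with a temperature of {temperature} {unit}"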