Custom Tools & Webhooks
Define your own tools using the OpenAI function calling format. Attach a webhook URL and the orchestrator will call your API when the agent invokes the tool — or return the tool call to the client for local execution.
Execution Backends
When the LLM decides to call a tool, the orchestrator needs to know where to execute it. There are two options:
- Webhook: The orchestrator POSTs the tool arguments to your webhook URL. Your server processes the request and returns a JSON response. The agent never pauses; everything happens server-side. Best for: APIs, databases, external services.
- Client-side: The orchestrator returns the tool_calls in the response, just like the OpenAI API. Your client code executes the tool and sends the result back in a follow-up request.
Defining a Custom Tool
Custom tools follow the OpenAI function calling format with optional webhook configuration.
```json
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "City name (e.g. 'London')"
            },
            "units": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"],
              "description": "Temperature units"
            }
          },
          "required": ["city"]
        }
      },
      "webhookUrl": "https://api.example.com/weather",
      "headers": {
        "Authorization": "Bearer YOUR_API_TOKEN"
      },
      "timeout": 10000
    }
  ]
}
```

- type (required): Always "function".
- function.name (required): Unique function name the LLM will use to call this tool.
- function.description (required): Description that helps the LLM understand when to use this tool.
- function.parameters (required): JSON Schema describing the tool's input parameters.
- webhookUrl (optional): URL to POST tool arguments to. If omitted, tool calls are returned to the client.
- headers (optional): Custom headers sent with webhook requests (e.g. auth tokens).
- timeout (optional): Timeout for webhook calls in milliseconds. Default: 30000.
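For contrast, a tool meant for client-side execution simply omits webhookUrl. The open_modal name and parameters below are made up for illustration, not part of the API:

```json
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "open_modal",
        "description": "Open a dialog in the user's browser",
        "parameters": {
          "type": "object",
          "properties": {
            "title": { "type": "string", "description": "Dialog title" }
          },
          "required": ["title"]
        }
      }
    }
  ]
}
```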
Webhook Request & Response
When a webhook tool is invoked, the orchestrator sends a POST request to your endpoint with the tool arguments as the body.
Request to your webhook:

```http
POST https://api.example.com/weather
Content-Type: application/json
Authorization: Bearer YOUR_API_TOKEN

{
  "city": "London",
  "units": "celsius"
}
```

Expected response:

```json
{
  "temperature": 18,
  "condition": "Partly cloudy",
  "humidity": 65
}
```

The response body is passed directly back to the LLM as the tool result. Return JSON for structured data, or plain text for simple responses.
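As a rough sketch, a matching webhook endpoint could look like this in Node.js with Express. The /weather route, the API_TOKEN environment variable, and the stubbed lookupWeather function are assumptions for illustration:

```js
import express from 'express';

const app = express();
app.use(express.json());

app.post('/weather', async (req, res) => {
  // Optionally verify the Authorization header you configured on the tool.
  if (req.get('authorization') !== `Bearer ${process.env.API_TOKEN}`) {
    return res.status(401).json({ error: 'unauthorized' });
  }

  // The orchestrator POSTs the tool arguments as the JSON body.
  const { city, units = 'celsius' } = req.body;

  // Stand-in for your real data source.
  const weather = await lookupWeather(city, units);

  // Whatever you return here is passed back to the LLM as the tool result.
  res.json(weather);
});

// Hypothetical stub; replace with your own lookup.
async function lookupWeather(city, units) {
  return { temperature: 18, condition: 'Partly cloudy', humidity: 65 };
}

app.listen(3000);
```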
Client-Side Tool Execution
If a tool doesn't have a webhookUrl, tool calls are returned in the API response for your client to execute — identical to how OpenAI function calling works.
```js
// 1. Agent responds with a tool_call
const messages = [{ role: 'user', content: 'What is the weather in London?' }];

const result = await client.chat.completions.create({
  model: 'agent:my-agent',
  messages,
});

// 2. Check for tool calls
const toolCall = result.choices[0].message.tool_calls?.[0];

if (toolCall) {
  // 3. Execute the tool locally
  const args = JSON.parse(toolCall.function.arguments);
  const weatherData = await myWeatherAPI(args.city);

  // 4. Send the result back, echoing the assistant message that requested the call
  const final = await client.chat.completions.create({
    model: 'agent:my-agent',
    messages: [
      ...messages,
      result.choices[0].message,
      {
        role: 'tool',
        tool_call_id: toolCall.id,
        content: JSON.stringify(weatherData),
      },
    ],
  });
}
```

Default tool backend
The agent setting defaultToolBackend controls what happens with tools that don't have an explicit webhookUrl. Set it to "client" (default) to return tool calls to your code, or "webhook" if most of your tools are server-side.
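As an illustrative sketch only, assuming the setting sits alongside tools in the agent configuration (the actual shape of your agent settings may differ; tools omitted here for brevity):

```json
{
  "defaultToolBackend": "client",
  "tools": []
}
```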