Agent API Reference
Execute agents via the OpenAI-compatible agent endpoint. Pass your agent ID as the `model` value in the request body, alongside your `messages`.
OpenAI SDK compatible
Agents have their own `/api/agents` endpoint. Point the OpenAI SDK's `baseURL` at it, set `model` to your agent slug, and everything just works — streaming, tool calling, and all.
- API Key — Generate from Dashboard → Settings → API Keys
- Agent ID — The slug shown in the agent editor (e.g. `my-research-agent`)
- Agent must be active — Disabled agents return a 403 error
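API keys are prefixed with `mp_`, so a quick sanity check before building a client turns a confusing Unauthorized error into an obvious configuration problem. A minimal sketch, assuming the key is stored in the `AGENTLIFY_API_KEY` environment variable used by the examples below:

```javascript
// Sketch: fail fast if the API key is missing or doesn't look like an mp_* key.
// AGENTLIFY_API_KEY is the variable name used in the JavaScript example below.
const apiKey = process.env.AGENTLIFY_API_KEY;
if (!apiKey || !apiKey.startsWith('mp_')) {
  throw new Error('Set AGENTLIFY_API_KEY to a key from Dashboard → Settings → API Keys');
}
```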
Base URL: https://agentlify.co/api/agents

Request Body

```json
{
"model": "your-agent-id",
"messages": [
{ "role": "user", "content": "Your message here" }
],
"stream": false
}
```

Headers

- `Authorization`: `Bearer mp_YOUR_API_KEY`
- `Content-Type`: `application/json`
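If you are calling the endpoint without an OpenAI SDK, the same two headers work with any HTTP client. A minimal sketch with `fetch` (the endpoint path and body follow the cURL and Request Body examples; `AGENTLIFY_API_KEY` is assumed to hold your key):

```javascript
// Sketch: calling the agent endpoint directly, without the OpenAI SDK.
const res = await fetch('https://agentlify.co/api/agents/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.AGENTLIFY_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'my-research-agent',
    messages: [{ role: 'user', content: 'Your message here' }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```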
JavaScript — OpenAI SDK

```javascript
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: process.env.AGENTLIFY_API_KEY, // mp_xxx key
baseURL: 'https://agentlify.co/api/agents',
});
const completion = await client.chat.completions.create({
model: 'my-research-agent',
messages: [{ role: 'user', content: 'Summarize the latest on AI agents' }],
});
console.log(completion.choices[0].message.content);
// Agent metadata is available on the raw response
// completion.agent_metadata
// { execution_id, agent_id, steps_executed, total_cost, ... }
```

Python — OpenAI SDK

```python
from openai import OpenAI
client = OpenAI(
api_key="mp_YOUR_API_KEY",
base_url="https://agentlify.co/api/agents",
)
completion = client.chat.completions.create(
model="my-research-agent",
messages=[{"role": "user", "content": "Summarize the latest on AI agents"}],
)
print(completion.choices[0].message.content)
```

cURL

```bash
curl -X POST "https://agentlify.co/api/agents/chat/completions" \
-H "Authorization: Bearer mp_YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "my-research-agent",
"messages": [{"role": "user", "content": "Summarize the latest on AI agents"}]
}'
```

Streaming

```javascript
const stream = await client.chat.completions.create({
model: 'my-research-agent',
messages: [{ role: 'user', content: 'Tell me about quantum computing' }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
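If you also need the complete text after the stream ends (for logging or storage), accumulate the deltas as you print them. A variant of the loop above, plain JavaScript with nothing endpoint-specific:

```javascript
// Sketch: same stream consumption as above, but also keeping the full text.
let fullText = '';
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content || '';
  fullText += delta;
  process.stdout.write(delta);
}
console.log('\nFull response length:', fullText.length);
```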
The response is the standard OpenAI chat completion format, with an extra `agent_metadata` field for execution details.

```json
{
"id": "agent-exec-uuid",
"object": "chat.completion",
"created": 1706000000,
"model": "agent:my-research-agent",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The agent's final response..."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 150,
"completion_tokens": 200,
"total_tokens": 350
},
"agent_metadata": {
"execution_id": "exec-uuid",
"agent_id": "agent:my-research-agent",
"agent_name": "my-research-agent",
"steps_executed": 2,
"total_latency": 1234,
"skills_invoked": 1
}
}
```
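Because `agent_metadata` rides on the same completion JSON, it can be read directly from the object the SDK returns. A small sketch, assuming the SDK passes the extra field through its response parsing unchanged (as the comment in the JavaScript example above suggests):

```javascript
// Sketch: reading the extra agent_metadata field from the completion object.
// Assumes the field survives the SDK's response parsing, as noted above.
const meta = completion.agent_metadata;
if (meta) {
  console.log(
    'execution', meta.execution_id,
    'steps', meta.steps_executed,
    'skills', meta.skills_invoked,
    'latency', meta.total_latency,
  );
}
```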
Both routers and agents use the OpenAI SDK — just change the `baseURL`:

```javascript
import OpenAI from 'openai';
// Router — auto-selects the best model for you
const router = new OpenAI({
apiKey: 'mp_YOUR_API_KEY',
baseURL: 'https://agentlify.co/api/router/YOUR_ROUTER_ID',
});
const routerRes = await router.chat.completions.create({
messages: [{ role: 'user', content: 'Hello' }],
});
// Agent — multi-step autonomous execution
const agent = new OpenAI({
apiKey: 'mp_YOUR_API_KEY',
baseURL: 'https://agentlify.co/api/agents',
});
const agentRes = await agent.chat.completions.create({
model: 'my-research-agent',
messages: [{ role: 'user', content: 'Research this topic' }],
});
```

Errors follow the OpenAI error format. The OpenAI SDK raises typed exceptions automatically; a handling sketch follows the list below.
- Bad Request — Missing `messages` array or invalid format.
- Unauthorized — Invalid or missing API key (`mp_*`).
- Insufficient Credits — Add credits in Dashboard → Billing.
- Agent Disabled — Enable the agent from the editor first.
- Agent Not Found — No agent with that ID, or no access.
- Timeout — Execution exceeded the configured timeout.
- Rate Limited — Too many requests. Back off and retry.
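A minimal sketch using the OpenAI SDK's built-in error classes (which specific class each case above maps to depends on the HTTP status the server returns; the catch-all `APIError` branch covers the rest):

```javascript
// Sketch: handling agent errors with the OpenAI SDK's typed exceptions.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.AGENTLIFY_API_KEY, // mp_xxx key
  baseURL: 'https://agentlify.co/api/agents',
});

try {
  const completion = await client.chat.completions.create({
    model: 'my-research-agent',
    messages: [{ role: 'user', content: 'Hello' }],
  });
  console.log(completion.choices[0].message.content);
} catch (err) {
  if (err instanceof OpenAI.RateLimitError) {
    // Rate Limited: back off and retry.
    console.error('Rate limited, retry with backoff');
  } else if (err instanceof OpenAI.AuthenticationError) {
    // Unauthorized: check the mp_* API key.
    console.error('Invalid or missing API key');
  } else if (err instanceof OpenAI.APIError) {
    // Other API errors: disabled agent, unknown agent, insufficient credits, timeout, ...
    console.error(`Agent call failed (${err.status}): ${err.message}`);
  } else {
    throw err; // network or programming error
  }
}
```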
The easiest way to test is through the Dashboard Playground:
- Go to Dashboard → Playground
- Switch to Agent mode using the toggle
- Select your agent from the dropdown
- Send a message and see the response in real time
Note: The Playground uses Firebase auth internally. For external API calls, use your API key (`mp_*`) in the `Authorization: Bearer` header.