Agents

Agents are autonomous AI workflows made of composable steps, skills, and tools. Build multi-step pipelines that plan, research, execute, and review — then call them with a single API request via the OpenAI-compatible endpoint.

One API call, multi-step execution

Set model: "agent:your-agent-id" in any OpenAI-compatible request, or call the dedicated /api/agents endpoint with the agent's slug alone (as in the example below). Agentlify orchestrates every step, tool call, and skill lookup server-side, then returns a single response — or streams it token-by-token.
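
For the OpenAI-compatible route, only the model name changes. Below is a minimal sketch using the official OpenAI SDK; the baseURL shown is an assumption, since this page only documents the /api/agents endpoint explicitly.

javascript
// Sketch only: the baseURL is assumed, not documented on this page.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.AGENTLIFY_API_KEY,
  baseURL: 'https://agentlify.co/api/v1', // assumed OpenAI-compatible endpoint
});

const completion = await client.chat.completions.create({
  model: 'agent:my-research-agent', // "agent:" + your agent's slug
  messages: [{ role: 'user', content: 'Compare React and Vue in 2025' }],
});

console.log(completion.choices[0].message.content);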

Quick Example

javascript
// Agents use a dedicated endpoint: /api/agents
const response = await fetch('https://agentlify.co/api/agents', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.AGENTLIFY_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'my-research-agent',   // agent ID = the slug
    messages: [{ role: 'user', content: 'Compare React and Vue in 2025' }],
    stream: true,
  }),
});

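// Read the streamed response incrementally; chunks arrive as they are generated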
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value));
}

How Agents Work

Every agent is a pipeline of steps executed in order. Each step can call an LLM with a prompt template, invoke a built-in tool, or trigger a webhook. Steps pass context forward via variables like {{previous_output}} and {{input}}.

1. User sends a message. Via API (model: "agent:slug") or the in-app test panel.

2. Orchestrator runs each step. LLM prompts are sent to your router; tool steps execute server-side. Each step's output feeds into the next.

3. Final response returned. Standard OpenAI completion format with an extra _meta field containing cost, latency, and step details.
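
When streaming is disabled, the whole pipeline result arrives as one completion. Here is a minimal sketch of inspecting it, reusing the /api/agents request from the Quick Example; the exact shape of _meta beyond cost, latency, and step details is not documented here, so treat it as illustrative.

javascript
const res = await fetch('https://agentlify.co/api/agents', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.AGENTLIFY_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'my-research-agent',
    messages: [{ role: 'user', content: 'Compare React and Vue in 2025' }],
    stream: false,
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content); // final agent output
console.log(data._meta);                      // cost, latency, and step details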

Anatomy of an Agent

json
{
  "model": "my-research-agent",       // Slug — also the Firestore doc ID
  "displayName": "Research Agent",       // Human-readable name (editable)
  "description": "Researches topics and produces structured summaries",
  "isActive": true,

  "steps": [                             // Executed in order
    {
      "id": "step-1",
      "name": "research",
      "displayName": "Research Phase",
      "runtime": "llm",                  // "llm" or "tool"
      "phase": "research",               // UI label: research | plan | execute | review
      "prompt": "Research the following topic: {{input}}",
      "order": 0,
      "enabled": true
    },
    {
      "id": "step-2",
      "name": "summarize",
      "displayName": "Summarize",
      "runtime": "llm",
      "phase": "execute",
      "prompt": "Summarize: {{previous_output}}",
      "order": 1,
      "enabled": true
    }
  ],

  "skills": ["code-review-guidelines"],  // Instructional skills (via get_skill)
  "builtinTools": ["builtin_web_search"], // Executable tools
  "tools": [],                            // Custom webhook tools

  "settings": {
    "planningMode": "initial_plan",       // off | initial_plan | per_step_plan
    "timeout": 120,                       // seconds
    "streamFinalOnly": true,
    "defaultToolBackend": "client"        // client | webhook
  }
}
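
For intuition, {{input}} and {{previous_output}} behave like plain string substitution between steps. The sketch below is a hypothetical illustration of that chaining, not Agentlify's orchestrator; renderPrompt and callLLM are stand-in helpers, and tool steps are omitted.

javascript
// Hypothetical illustration only: how LLM step prompts could chain together.
function renderPrompt(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => vars[name] ?? '');
}

async function runPipeline(steps, input, callLLM) {
  let previousOutput = '';
  const ordered = steps.filter((s) => s.enabled).sort((a, b) => a.order - b.order);
  for (const step of ordered) {
    const prompt = renderPrompt(step.prompt, { input, previous_output: previousOutput });
    previousOutput = await callLLM(prompt); // each step's output feeds the next
  }
  return previousOutput; // the last step's output becomes the agent's response
}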