# OpenAI SDK with ModelPilot
Use your existing OpenAI code with ModelPilot for intelligent routing and cost optimization.
## Why Use the OpenAI SDK?

- **Zero Code Changes:** Your existing OpenAI code works as-is.
- **Gradual Migration:** Test ModelPilot without rewriting anything.
- **Framework Support:** Works with LangChain, LlamaIndex, and other OpenAI-compatible tools.
- **Intelligent Routing:** Automatic model selection and cost optimization.
## Installation

```bash
npm install openai
```

## Configuration
### Basic Setup

```javascript
const { OpenAI } = require('openai');

const client = new OpenAI({
  apiKey: process.env.MODELPILOT_API_KEY, // Use your mp_xxx key
  baseURL: `https://modelpilot.co/api/router/${process.env.MODELPILOT_ROUTER_ID}`
});
```

### Critical Points
- Use your ModelPilot API key (starts with `mp_`), NOT your OpenAI key.
- The `baseURL` must include your router ID in the path.
- The SDK automatically appends `/chat/completions` to the URL (see the sketch below).
- Omit the `model` parameter - your router automatically selects the best model.
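To make the URL construction concrete, here is a minimal sketch of how the final endpoint is assembled, assuming a hypothetical router ID of `abc123`:

```javascript
// Hypothetical router ID for illustration; use your own from the dashboard.
const routerId = process.env.MODELPILOT_ROUTER_ID || 'abc123';
const baseURL = `https://modelpilot.co/api/router/${routerId}`;

// The OpenAI SDK appends the method path itself, so a chat request resolves to:
//   https://modelpilot.co/api/router/abc123/chat/completions
console.log(`${baseURL}/chat/completions`);
```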
## Migration Guide

### Before (Pure OpenAI)

```javascript
const { OpenAI } = require('openai');

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

const completion = await client.chat.completions.create({
  model: 'gpt-5',
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
  temperature: 0.7
});

console.log(completion.choices[0].message.content);
```

### After (ModelPilot)
```javascript
const { OpenAI } = require('openai');

const client = new OpenAI({
  apiKey: process.env.MODELPILOT_API_KEY, // ← Changed
  baseURL: `https://modelpilot.co/api/router/${process.env.MODELPILOT_ROUTER_ID}` // ← Added
});

const completion = await client.chat.completions.create({
  // NO model parameter - the router selects one! ← New
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
  temperature: 0.7
});
```

### Migration Checklist
1. Create a router in the dashboard with your preferred models.
2. Replace `OPENAI_API_KEY` with `MODELPILOT_API_KEY`.
3. Add a `baseURL` that includes your router ID.
4. Remove the `model` parameter - let the router select automatically.
5. Test with a simple request.
6. Monitor the model used and costs via the `_meta` field (see the sketch below).
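A minimal sketch of step 6 - note that the individual fields inside `_meta` are an assumption here; inspect the object or check your dashboard for the actual shape:

```javascript
// `client` is the ModelPilot-configured OpenAI instance from Basic Setup.
const completion = await client.chat.completions.create({
  messages: [{ role: 'user', content: 'Hello!' }]
});

// Standard OpenAI fields work unchanged.
console.log(completion.choices[0].message.content);

// ModelPilot adds a _meta object with routing details such as cost and latency.
// Log the whole object rather than guessing individual field names.
console.log(completion._meta);
```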
## Basic Usage

### Simple Completion

```javascript
const completion = await client.chat.completions.create({
  messages: [
    { role: 'user', content: 'What is the capital of France?' }
  ]
});

console.log(completion.choices[0].message.content);
// Output: "The capital of France is Paris."
```

### Streaming
```javascript
const stream = await client.chat.completions.create({
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```

## Framework Integration
### LangChain

```javascript
// Requires the current @langchain/openai package.
const { ChatOpenAI } = require('@langchain/openai');

const chat = new ChatOpenAI({
  apiKey: process.env.MODELPILOT_API_KEY,
  configuration: {
    baseURL: `https://modelpilot.co/api/router/${process.env.MODELPILOT_ROUTER_ID}`
  },
  temperature: 0.7
});

const response = await chat.invoke([
  { role: 'user', content: 'Hello!' }
]);
```

### Vercel AI SDK
```javascript
import { OpenAIStream, StreamingTextResponse } from 'ai';
import { Configuration, OpenAIApi } from 'openai-edge';
const config = new Configuration({
  apiKey: process.env.MODELPILOT_API_KEY,
  basePath: `https://modelpilot.co/api/router/${process.env.MODELPILOT_ROUTER_ID}`
});
const openai = new OpenAIApi(config);

export async function POST(req) {
  const { messages } = await req.json();

  const response = await openai.createChatCompletion({
    messages,
    stream: true
  });

  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
```

## Differences from Pure OpenAI
### What's the Same ✓

- All request parameters
- Response format
- Streaming behavior
- Function/tool calling (see the example below)
- Error structure
- SDK methods
### What's Different +

- Additional `_meta` field with cost and latency
- Router may select a different model
- Environmental impact tracking
- Automatic fallback handling
- Multi-provider access
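Because requests and responses keep the OpenAI shape, tool calling passes through the router untouched. Here is a sketch of the standard OpenAI tool-calling flow - the `get_weather` tool is a hypothetical example, not part of ModelPilot:

```javascript
const completion = await client.chat.completions.create({
  messages: [{ role: 'user', content: "What's the weather in Paris?" }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather', // hypothetical tool for illustration
        description: 'Get the current weather for a city',
        parameters: {
          type: 'object',
          properties: {
            city: { type: 'string', description: 'City name' }
          },
          required: ['city']
        }
      }
    }
  ]
});

// Whichever model the router selected, tool calls come back in the standard format.
const toolCall = completion.choices[0].message.tool_calls?.[0];
if (toolCall) {
  console.log(toolCall.function.name, toolCall.function.arguments);
}
```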