Tutorial · March 14, 2026 · 4 min read
Migrate from OpenAI to TokonLab in Under 5 Minutes
If you're already using the OpenAI SDK, switching to TokonLab takes two small edits: point the base URL at TokonLab and swap in your TokonLab API key. That's it. No new SDK, no new abstractions, no migration guide that takes hours to read.
Here's the complete migration for the three most common setups.
Python
from openai import OpenAI

# Before
client = OpenAI(api_key="sk-openai-your-key")

# After — add base_url, swap the key
client = OpenAI(
    base_url="https://api.tokonlab.com/v1",
    api_key="sk-tokon-your-key",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-r1",  # or any model alias
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)

Node.js / TypeScript
import OpenAI from 'openai';

// Before
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// After
const client = new OpenAI({
  baseURL: 'https://api.tokonlab.com/v1',
  apiKey: process.env.TOKONLAB_API_KEY,
});
const res = await client.chat.completions.create({
  model: 'qwen/qwen3-235b-a22b',
  messages: [{ role: 'user', content: 'Hello' }],
});
console.log(res.choices[0].message.content);

cURL
curl https://api.tokonlab.com/v1/chat/completions \
  -H "Authorization: Bearer sk-tokon-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-r1",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

Model Aliases
You can use full model IDs (like deepseek/deepseek-r1) or our convenience aliases:
cheap-model: routes to the best value model (currently DeepSeek R1)
fast-model: routes to the lowest latency model (currently GLM-5 Turbo)
best-model: routes to the highest quality model (currently Qwen3 235B)
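An alias drops in anywhere a full model ID is accepted, so the request body is identical except for the model field. A minimal sketch (payloads only, no request sent):

```python
import json

# Two identical requests except for "model": a full ID vs. an alias.
full_id = {
    "model": "deepseek/deepseek-r1",
    "messages": [{"role": "user", "content": "Hello"}],
}
aliased = {**full_id, "model": "cheap-model"}

print(json.dumps(aliased, indent=2))
```

Because the alias resolves server-side, you can repoint cheap-model at a new model later without touching client code.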
Streaming
Streaming works identically to OpenAI. Just pass stream: true and handle the response the same way you always have.
Get your API key at tokonlab.com/dashboard. Free tier included, no credit card required.
