
Migrate from OpenRouter

Switch to LLM Gateway for built-in analytics, self-hosting options, and a simpler API with a two-line code change.

LLM Gateway works just like OpenRouter—same API format, same model names—but with built-in analytics and the option to self-host. Migration takes two lines of code.

Quick Migration

Change your base URL and API key:

- const baseURL = "https://openrouter.ai/api/v1";
- const apiKey = process.env.OPENROUTER_API_KEY;
+ const baseURL = "https://api.llmgateway.io/v1";
+ const apiKey = process.env.LLM_GATEWAY_API_KEY;

Migration Steps

Get Your LLM Gateway API Key

Sign up at llmgateway.io/signup and create an API key from your dashboard.

Update Environment Variables

# Remove OpenRouter credentials
# OPENROUTER_API_KEY=sk-or-...

# Add LLM Gateway credentials
LLM_GATEWAY_API_KEY=llmgtwy_your_key_here
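After swapping the variables, it can help to fail fast at startup if the new key is missing. This is an optional sketch, not part of the gateway itself; adjust the variable names if yours differ:

```javascript
// Fail fast if the LLM Gateway key is missing, and warn if the old
// OpenRouter key is still lingering in the environment.
function requireApiKey(env = process.env) {
	const key = env.LLM_GATEWAY_API_KEY;
	if (!key) {
		throw new Error("LLM_GATEWAY_API_KEY is not set");
	}
	if (env.OPENROUTER_API_KEY) {
		console.warn("OPENROUTER_API_KEY is still set; remove it once the migration is done.");
	}
	return key;
}
```

Call `requireApiKey()` once at boot so a misconfigured deployment fails immediately instead of on the first request.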

Update Your Code

Using fetch/axios

// Before (OpenRouter)
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
	method: "POST",
	headers: {
		Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
		"Content-Type": "application/json",
	},
	body: JSON.stringify({
		model: "openai/gpt-5.2",
		messages: [{ role: "user", content: "Hello!" }],
	}),
});

// After (LLM Gateway)
const response = await fetch("https://api.llmgateway.io/v1/chat/completions", {
	method: "POST",
	headers: {
		Authorization: `Bearer ${process.env.LLM_GATEWAY_API_KEY}`,
		"Content-Type": "application/json",
	},
	body: JSON.stringify({
		model: "gpt-5.2",
		messages: [{ role: "user", content: "Hello!" }],
	}),
});

Using OpenAI SDK

import OpenAI from "openai";

// Before (OpenRouter)
const client = new OpenAI({
	baseURL: "https://openrouter.ai/api/v1",
	apiKey: process.env.OPENROUTER_API_KEY,
});

// After (LLM Gateway)
const client = new OpenAI({
	baseURL: "https://api.llmgateway.io/v1",
	apiKey: process.env.LLM_GATEWAY_API_KEY,
});

// Usage remains the same
const completion = await client.chat.completions.create({
	model: "anthropic/claude-3-5-sonnet-20241022",
	messages: [{ role: "user", content: "Hello!" }],
});

Using Vercel AI SDK

Both OpenRouter and LLM Gateway have native AI SDK providers, making migration straightforward:

import { generateText } from "ai";

// Before (OpenRouter AI SDK Provider)
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({
	apiKey: process.env.OPENROUTER_API_KEY,
});

const { text } = await generateText({
	model: openrouter("openai/gpt-5.2"),
	prompt: "Hello!",
});

// After (LLM Gateway AI SDK Provider)
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";

const llmgateway = createLLMGateway({
	apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text } = await generateText({
	model: llmgateway("gpt-5.2"),
	prompt: "Hello!",
});

Model Name Mapping

Most model names are compatible, but here are some common mappings:

| OpenRouter Model | LLM Gateway Model |
| --- | --- |
| openai/gpt-5.2 | gpt-5.2 or openai/gpt-5.2 |
| gemini/gemini-3-flash-preview | gemini-3-flash-preview or google-ai-studio/gemini-3-flash-preview |
| bedrock/claude-opus-4-5-20251101 | claude-opus-4-5-20251101 or aws-bedrock/claude-opus-4-5-20251101 |

Check the models page for the full list of available models.
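If OpenRouter model IDs are scattered through your configuration, a small lookup helper can normalize them in one place. This is a hypothetical sketch covering only the prefixes in the table above; extend the map for the models you actually use:

```javascript
// Hypothetical helper: strip OpenRouter provider prefixes that LLM Gateway
// does not require. Only prefixes from the mapping table are listed here.
const STRIPPABLE_PREFIXES = new Set(["openai", "gemini", "bedrock"]);

function toGatewayModel(openRouterModel) {
	const [prefix, ...rest] = openRouterModel.split("/");
	if (rest.length > 0 && STRIPPABLE_PREFIXES.has(prefix)) {
		// "openai/gpt-5.2" -> "gpt-5.2"; the bare name is accepted.
		return rest.join("/");
	}
	// Already compatible; pass through unchanged.
	return openRouterModel;
}
```

Passing the already-compatible name through unchanged means the helper is safe to apply to every model string during migration.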

Streaming Support

LLM Gateway supports streaming responses identically to OpenRouter:

const stream = await client.chat.completions.create({
	model: "anthropic/claude-3-5-sonnet-20241022",
	messages: [{ role: "user", content: "Write a story" }],
	stream: true,
});

for await (const chunk of stream) {
	process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
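If you collect the streamed text into a single string, the consumption loop can be factored into a helper that accepts any async iterable of chunks. A sketch assuming the OpenAI-SDK chunk shape shown above:

```javascript
// Collect streamed chat chunks (OpenAI SDK shape) into one string.
async function collectStream(stream) {
	let text = "";
	for await (const chunk of stream) {
		text += chunk.choices[0]?.delta?.content || "";
	}
	return text;
}
```

Because the chunk format is identical, the same helper works whether the stream came from OpenRouter or LLM Gateway.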

Full Comparison

Want to see a detailed breakdown of all features? Check out our LLM Gateway vs OpenRouter comparison page.
