LLM Gateway

Quickstart

Fastest way to start using LLM Gateway in any language or framework.

Welcome to LLM Gateway, a single drop-in endpoint that lets you call today's best large language models while keeping your existing code and development workflow intact.

TL;DR — Point your HTTP requests to https://api.llmgateway.io/v1/…, supply your LLM_GATEWAY_API_KEY, and you’re done.


1 · Get an API key

  1. Sign in to the dashboard.
  2. Create a new Project → Copy the key.
  3. Export it in your shell (or a .env file):
export LLM_GATEWAY_API_KEY="llmgtwy_XXXXXXXXXXXXXXXX"
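Before making any calls, it can help to confirm the key is actually visible to your process. A minimal sketch (the `maskKey` helper is hypothetical, and the 8-character `llmgtwy_` prefix length is assumed from the example key above):

```typescript
// Hypothetical helper: mask an API key for safe logging by keeping
// only the prefix and hiding the rest of the secret.
function maskKey(key: string | undefined): string {
	if (!key) {
		throw new Error("LLM_GATEWAY_API_KEY is not set; export it or add it to your .env");
	}
	return key.slice(0, 8) + "…";
}

// Prints e.g. "Gateway key loaded: llmgtwy_…" without exposing the secret.
if (process.env.LLM_GATEWAY_API_KEY) {
	console.log(`Gateway key loaded: ${maskKey(process.env.LLM_GATEWAY_API_KEY)}`);
}
```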

2 · Pick your language

curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'

3 · SDK integrations

vercel-ai-sdk.ts
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const llmgateway = createOpenAI({
	baseURL: "https://api.llmgateway.io/v1",
	apiKey: process.env.LLM_GATEWAY_API_KEY!,
});

const { text } = await generateText({
	model: llmgateway("gpt-4o"),
	messages: [{ role: "user", content: "Hello, how are you?" }],
});

console.log(text);

openai-sdk.ts
import OpenAI from "openai";

const openai = new OpenAI({
	baseURL: "https://api.llmgateway.io/v1",
	apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const completion = await openai.chat.completions.create({
	model: "gpt-4o",
	messages: [{ role: "user", content: "Hello, how are you?" }],
});

console.log(completion.choices[0].message.content);

4 · Going further

  • Streaming: pass stream: true to any request—Gateway will proxy the event stream unchanged.
  • Monitoring: Every call appears in the dashboard with latency, cost & provider breakdown.
  • Fail‑over: Specify fallback_models to auto‑retry on provider errors.
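The streaming bullet above can be sketched with the same OpenAI SDK setup from step 3 (the model name and prompt are illustrative; this requires a live key to actually run):

```typescript
import OpenAI from "openai";

const openai = new OpenAI({
	baseURL: "https://api.llmgateway.io/v1",
	apiKey: process.env.LLM_GATEWAY_API_KEY,
});

// With stream: true the SDK returns an async iterable of chunks;
// each chunk carries an incremental text delta rather than a full message.
const stream = await openai.chat.completions.create({
	model: "gpt-4o",
	messages: [{ role: "user", content: "Hello, how are you?" }],
	stream: true,
});

for await (const chunk of stream) {
	process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```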

5 · Next steps

Happy building! ✨
