Claude Code Integration
Use GPT-5, Gemini, or any model with Claude Code. Three environment variables, full cost tracking.
Claude Code is locked to Anthropic's API by default. With LLM Gateway, you can point it at any model—GPT-5, Gemini, Llama, or 180+ others—while keeping the same Anthropic API format Claude Code expects.
Three environment variables. No code changes. Full cost tracking in your dashboard.
Setup
Sign Up for LLM Gateway
Sign up free — no credit card required. Copy your API key from the dashboard.
Set Environment Variables
Configure Claude Code to use LLM Gateway:
```shell
export ANTHROPIC_BASE_URL=https://api.llmgateway.io
export ANTHROPIC_AUTH_TOKEN=llmgtwy_your_api_key_here
# optional: specify a model, otherwise it uses the default Claude model
export ANTHROPIC_MODEL=gpt-5 # or any model from our catalog
```
Why This Works
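If you would rather not export these in every shell session, Claude Code can also read environment variables from its settings file, typically `~/.claude/settings.json`, via an `env` map (a sketch; confirm your Claude Code version supports the `env` setting):

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://api.llmgateway.io",
    "ANTHROPIC_AUTH_TOKEN": "llmgtwy_your_api_key_here",
    "ANTHROPIC_MODEL": "gpt-5"
  }
}
```

With either approach, launching `claude` will route requests through LLM Gateway.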
LLM Gateway's /v1/messages endpoint speaks Anthropic's API format natively. We handle the translation to each provider behind the scenes. This means:
- Use any model — GPT-5, Gemini, Llama, or Claude itself
- Keep your workflow — Claude Code doesn't know the difference
- Track costs — Every request appears in your LLM Gateway dashboard
- Automatic caching — Repeated requests hit cache, saving money
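With the variables exported, a one-shot prompt in Claude Code's print mode makes a quick smoke test (assuming the `claude` CLI is on your PATH and a valid key is set):

```shell
# -p runs a single non-interactive prompt and prints the reply,
# served by whatever model ANTHROPIC_MODEL names
claude -p "Which model are you?"
```

The request should then appear in your LLM Gateway dashboard under the model you configured.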
Choosing Models
You can use any model from the models page.
Use OpenAI's Latest Models
```shell
# Use the latest GPT model
export ANTHROPIC_MODEL=gpt-5

# Use a cost-effective alternative
export ANTHROPIC_MODEL=gpt-5-mini
```
Use Google's Gemini
```shell
export ANTHROPIC_MODEL=gemini-2.5-pro
```
Use Anthropic's Claude Models
```shell
export ANTHROPIC_MODEL=anthropic/claude-3-5-sonnet-20241022
```
Environment Variables
ANTHROPIC_MODEL
Specifies the main model to use for primary requests.
```shell
export ANTHROPIC_MODEL=gpt-5
```
ANTHROPIC_SMALL_FAST_MODEL
Specifies the smaller model Claude Code uses for lightweight background tasks.
```shell
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-nano
```
Complete Configuration Example
```shell
export ANTHROPIC_BASE_URL=https://api.llmgateway.io
export ANTHROPIC_AUTH_TOKEN=llmgtwy_your_api_key_here
export ANTHROPIC_MODEL=gpt-5
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-nano
```
Making Manual API Requests
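To keep per-model setups handy, the complete configuration above can be wrapped in a small launcher script (a sketch; the script name, the `LLM_GATEWAY_API_KEY` variable, and the model choices are illustrative):

```shell
#!/bin/sh
# claude-gpt5: launch Claude Code routed through LLM Gateway
export ANTHROPIC_BASE_URL=https://api.llmgateway.io
export ANTHROPIC_AUTH_TOKEN="$LLM_GATEWAY_API_KEY"  # assumes your key lives in this variable
export ANTHROPIC_MODEL=gpt-5
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-nano
exec claude "$@"   # hand all arguments through to Claude Code
```

A sibling script per model (e.g. one for Gemini) lets you switch backends without touching your shell profile.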
If you want to test the endpoint directly, you can make manual requests:
```shell
curl -X POST "https://api.llmgateway.io/v1/messages" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "max_tokens": 100
  }'
```
Response Format
The endpoint returns responses in Anthropic's message format:
```json
{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "model": "gpt-5",
  "content": [
    {
      "type": "text",
      "text": "Hello! I'm doing well, thank you for asking. How can I help you today?"
    }
  ],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 13,
    "output_tokens": 20
  }
}
```
What You Get
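Since the response is plain JSON, you can pull out the reply text and billed token counts with standard tooling. A minimal sketch, assuming `jq` is installed and using the sample response above (the `response.json` filename is illustrative):

```shell
# Save the sample response to a file (in practice: curl ... -o response.json)
cat > response.json <<'EOF'
{"id": "msg_abc123", "type": "message", "role": "assistant", "model": "gpt-5",
 "content": [{"type": "text", "text": "Hello! I'm doing well, thank you for asking. How can I help you today?"}],
 "stop_reason": "end_turn", "stop_sequence": null,
 "usage": {"input_tokens": 13, "output_tokens": 20}}
EOF

# The assistant's reply is the first text block in "content"
jq -r '.content[0].text' response.json

# Token usage drives billing: total tokens for the request
jq '.usage.input_tokens + .usage.output_tokens' response.json
```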
- Any model in Claude Code — GPT-5 for heavy lifting, GPT-5 Mini for routine tasks
- Cost visibility — See exactly what each coding session costs
- One bill — Stop managing separate accounts for OpenAI, Anthropic, Google
- Response caching — Repeated requests (like linting the same file) hit cache
- Discounts — Check discounted models for savings up to 90%
View all available models on the models page.
Need help? Join our Discord community for support and troubleshooting assistance.