Quickstart
Fastest way to start using LLM Gateway in any language or framework.
Welcome to LLM Gateway—a single drop‑in endpoint that lets you call today’s best large‑language models while keeping your existing code and development workflow intact.
TL;DR: point your HTTP requests at `https://api.llmgateway.io/v1/…`, supply your `LLM_GATEWAY_API_KEY`, and you're done.
1 · Get an API key
- Sign in to the dashboard.
- Create a new Project → Copy the key.
- Export it in your shell (or a `.env` file):
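For example (the key value shown is a placeholder, not a real key format):

```shell
# Make the key available to your code for this shell session.
# For a .env file, put the same line (without "export") in .env instead.
export LLM_GATEWAY_API_KEY="your-key-from-the-dashboard"
```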
2 · Pick your language
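Whatever language you choose, the request shape is the same. A minimal sketch with curl, assuming the gateway exposes an OpenAI-compatible `/v1/chat/completions` route (the route and model name here are illustrative, not confirmed by this page):

```shell
# Send a single chat request through the gateway.
# Requires LLM_GATEWAY_API_KEY to be exported (see step 1).
curl https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```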
3 · SDK integrations
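Because the gateway is a drop-in endpoint, many OpenAI-compatible SDKs can be pointed at it without code changes. One possible approach, assuming your SDK honors the `OPENAI_BASE_URL` and `OPENAI_API_KEY` environment variables (recent OpenAI SDKs do; verify for yours):

```shell
# Redirect an OpenAI-compatible SDK to the gateway via environment
# variables, reusing the key exported in step 1.
export OPENAI_BASE_URL="https://api.llmgateway.io/v1"
export OPENAI_API_KEY="$LLM_GATEWAY_API_KEY"
```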
4 · Going further
- Streaming: pass `stream: true` with any request; the Gateway proxies the event stream unchanged.
- Monitoring: every call appears in the dashboard with latency, cost, and provider breakdown.
- Fail-over: specify `fallback_models` to auto-retry on provider errors.
5 · FAQ
6 · Next steps
- Read the self-hosting guide.
- Drop into our GitHub for help or feature requests.
Happy building! ✨