# LLM Gateway Documentation

> Documentation for LLM Gateway - a full-stack LLM API gateway

## Docs

- [Introduction](/): LLM Gateway is an open-source API gateway for Large Language Models. Route requests to multiple providers, manage API keys, track usage, and optimize costs.
- [Overview](/overview): Introduction to LLM Gateway, an open-source API gateway for LLMs.
- [Quickstart](/quick-start): Fastest way to start using LLM Gateway in any language or framework.
- [Self Host LLMGateway](/self-host): Simple guide to self-hosting LLMGateway using Docker.
- [Health check](/health)
- [Chat Completions](/v1_chat_completions)
- [Anthropic Messages](/v1_messages)
- [Models](/v1_models)
- [Agent Skills](/guides/agent-skills): Packaged instructions and guidelines for AI coding agents
- [Autohand Integration](/guides/autohand): Use GPT-5, Claude, Gemini, or any model with Autohand's autonomous coding agent. Simple config, full cost tracking.
- [Claude Code Integration](/guides/claude-code): Use GPT-5, Gemini, or any model with Claude Code. Three environment variables, full cost tracking.
- [LLM Gateway CLI](/guides/cli): Command-line tool for scaffolding and managing LLM Gateway projects
- [Cline Integration](/guides/cline): Use LLM Gateway with Cline for AI-powered coding assistance in VS Code
- [Codex CLI Integration](/guides/codex-cli): Use any model with OpenAI's Codex CLI through LLM Gateway. One config file, full cost tracking.
- [Cursor Integration](/guides/cursor): Use LLM Gateway with Cursor IDE for AI-powered code editing and chat
- [Model Context Protocol (MCP)](/guides/mcp): Use LLM Gateway as an MCP server for Claude Code, Cursor, and other MCP-compatible clients
- [n8n Integration](/guides/n8n): Connect n8n workflow automation to LLM Gateway for AI-powered workflows
- [OpenClaw Integration](/guides/openclaw): Use GPT-5.4, Claude Opus, Gemini, or any model with OpenClaw across Discord, WhatsApp, Telegram, and more
- [OpenCode Integration](/guides/opencode): Connect OpenCode to 180+ models through LLM Gateway. One config file, any model, full cost tracking.
- [Anthropic API Compatibility](/features/anthropic-endpoint): Use the Anthropic-compatible endpoint to access any LLM model through the familiar Anthropic API format.
- [API Keys & IAM Rules](/features/api-keys): Comprehensive guide to API key management and Identity Access Management (IAM) rules for fine-grained access control
- [Audit Logs](/features/audit-logs): Track all organization activity with comprehensive audit logs
- [Caching](/features/caching): Reduce costs and latency by caching identical requests.
- [Cost Breakdown](/features/cost-breakdown): Get real-time cost information for each API request directly in the response.
- [Custom Providers](/features/custom-providers): Learn how to integrate custom OpenAI-compatible providers with LLMGateway for enhanced flexibility and control.
- [Data Retention](/features/data-retention): Store and access your full request and response data for debugging, analytics, and compliance.
- [Guardrails](/features/guardrails): Protect your LLM usage with content guardrails that detect and block harmful content
- [Image Generation](/features/image-generation): Generate images using AI models through the OpenAI-compatible images API or chat completions API
- [Metadata](/features/metadata): Send additional context and metadata to LLM Gateway using custom headers.
- [Reasoning](/features/reasoning): Learn how to use reasoning-capable models that show their step-by-step thought process.
- [Response Healing](/features/response-healing): Automatically repair malformed JSON responses from AI models.
- [Routing](/features/routing): Learn how LLMGateway intelligently routes your requests to the best available models and providers.
- [Source Attribution](/features/source): Use the X-Source header to identify your domain for public usage statistics.
- [Vision Support](/features/vision): Learn how to send images to vision-enabled models using URLs or inline base64 data.
- [Native Web Search](/features/web-search): Enable real-time web search capabilities to get up-to-date information from the internet.
- [AWS Bedrock Integration](/integrations/aws-bedrock): Connect AWS Bedrock to LLM Gateway for access to foundation models
- [Azure Integration](/integrations/azure): Connect Azure to LLM Gateway for enterprise-grade OpenAI models
- [Activity](/learn/activity): View and inspect every API request made through LLM Gateway
- [API Keys](/learn/api-keys): Create and manage API keys for authenticating with LLM Gateway
- [Audit Logs](/learn/audit-logs): Track every action taken within your organization
- [Billing](/learn/billing): Manage your credits, subscription plan, and payment methods
- [Dashboard](/learn/dashboard): Your central hub for monitoring LLM usage, costs, and performance
- [Guardrails](/learn/guardrails): Configure content safety rules to protect your LLM usage
- [Introduction](/learn): Learn how to navigate and use the LLM Gateway dashboard
- [Model Usage](/learn/model-usage): Track usage breakdown by individual model
- [Org Preferences](/learn/org-preferences): Manage your organization's name and billing email
- [Group Chat](/learn/playground-group): Compare responses from multiple LLM models side by side
- [Image Studio](/learn/playground-image): Generate and edit images using AI models
- [Chat Playground](/learn/playground): Test LLM models interactively with a full-featured chat interface
- [Policies](/learn/policies): Configure data retention and other organization policies
- [Preferences](/learn/preferences): Configure project-level settings including caching and project mode
- [Provider Keys](/learn/provider-keys): Bring your own provider API keys to use without additional fees
- [Referrals](/learn/referrals): Earn credits by referring other users to LLM Gateway
- [Security Events](/learn/security-events): Monitor guardrail violations and content policy events
- [Team](/learn/team): Manage team members and their roles within your organization
- [Transactions](/learn/transactions): View your complete payment and credit history
- [Usage & Metrics](/learn/usage-metrics): Detailed analytics for requests, models, errors, caching, and costs
- [Migrate from LiteLLM](/migrations/litellm): Switch from self-hosted LiteLLM to managed LLM Gateway. Same API format, zero infrastructure to maintain.
- [Migrate from OpenRouter](/migrations/openrouter): Switch to LLM Gateway for built-in analytics, self-hosting options, and simpler API. Two-line code change.
- [Migrate from Vercel AI Gateway](/migrations/vercel-ai-gateway): Keep your Vercel AI SDK code, add response caching, detailed analytics, and smart routing. One provider for all models.
- [Rate Limits](/resources/rate-limits): Understanding rate limits for free and paid models on LLMGateway.