LLM Gateway

Cursor Integration

Use LLM Gateway with Cursor IDE for AI-powered code editing and chat

Cursor is an AI-powered code editor built on VS Code. You can configure Cursor to use LLM Gateway for enhanced AI capabilities, access to multiple models, and better cost control.


Prerequisites

  • An LLM Gateway account with an API key
  • Cursor IDE installed
  • Basic understanding of Cursor's AI features

Setup

Cursor supports OpenAI-compatible API endpoints, making it easy to integrate with LLM Gateway.
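Because the gateway is OpenAI-compatible, any OpenAI-style client can target it just by swapping the base URL — the same mechanism Cursor uses. A minimal sketch using only the Python standard library (the API key is a placeholder, and the actual send is left commented out so nothing is charged):

```python
import json
import urllib.request

BASE_URL = "https://api.llmgateway.io/v1"
API_KEY = "your-llmgateway-api-key"  # placeholder: use your real key

# A standard OpenAI-style chat-completions payload.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(req.full_url)  # https://api.llmgateway.io/v1/chat/completions
```

This is exactly the shape of request Cursor sends once you override the base URL in the steps below.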

Get Your API Key

  1. Log in to your LLM Gateway dashboard
  2. Navigate to API Keys section
  3. Create a new API key and copy the key


Configure Cursor Settings

  1. Open Cursor, go to Settings, then click "Cursor Settings"
  2. Click "Models"
  3. Click "Add OpenAI API Key"


  1. Scroll down to the OpenAI API Key section
  2. Click "Add OpenAI API Key"


  1. Enter your LLM Gateway API key

  2. In the same Models settings, find the Override OpenAI Base URL option

  3. Enable the override option

  4. Enter the LLM Gateway endpoint: https://api.llmgateway.io/v1

Select Models

  1. In the Models section, you can now select from the available models
  2. Choose any model supported by LLM Gateway:


  • For chat: use models like gpt-5, gpt-4o, or claude-sonnet-4-5
  • For provider-specific models: add the provider name before the model name (e.g. openai/gpt-5, anthropic/claude-sonnet-4-5, google/gemini-2.0-flash-exp)
  • For custom models: add the provider name before the model name (e.g. custom/my-model)
  • For discounted, free, and reasoning models: copy the ids from the models page

Test the Integration

  1. Open any code file in Cursor
  2. Try using the AI chat (Cmd/Ctrl + L)
  3. Or test the autocomplete feature while typing


All AI requests will now be routed through LLM Gateway.

Features

Once configured, you can use all of Cursor's AI features with LLM Gateway:

AI Chat (Cmd/Ctrl + L)

  • Ask questions about your code
  • Request code explanations
  • Get debugging help
  • Generate new code

Inline Edit (Cmd/Ctrl + K)

  • Edit code with natural language instructions
  • Refactor functions
  • Add features to existing code

Autocomplete

  • Get intelligent code suggestions as you type
  • Context-aware completions based on your codebase

Advanced Configuration

Using Different Models for Different Features

Cursor allows you to configure different models for different features:

  1. Chat Model: Use a powerful model like gpt-5 or claude-sonnet-4-5
  2. Autocomplete Model: Use a faster, cost-effective model like gpt-4o-mini
  3. Provider-Specific Model: Use a provider-specific model like openai/gpt-5, anthropic/claude-sonnet-4-5, or google/gemini-2.0-flash-exp
  4. Custom Model: Use a custom model like custom/my-model
  5. Discounted Model: Use a discounted model like routeway-discount/claude-sonnet-4-5
  6. Free Model: Use a free model like routeway/deepseek-r1t2-chimera-free
  7. Reasoning Model: Use a reasoning model like canopywave/kimi-k2-thinking (at a 75% discount)

This gives you the best balance of performance and cost.
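One way to keep these per-feature choices in view is a simple mapping; the assignments below are just the example ids from this guide, not recommendations:

```python
# Per-feature model choices, using the example ids from this guide
# (illustrative only -- substitute the ids you actually use).
CURSOR_MODELS = {
    "chat": "claude-sonnet-4-5",                          # powerful chat model
    "autocomplete": "gpt-4o-mini",                        # fast, cost-effective
    "provider_specific": "openai/gpt-5",                  # pins a specific provider
    "custom": "custom/my-model",                          # your own configured model
    "discounted": "routeway-discount/claude-sonnet-4-5",
    "free": "routeway/deepseek-r1t2-chimera-free",
    "reasoning": "canopywave/kimi-k2-thinking",
}

for feature, model in CURSOR_MODELS.items():
    print(f"{feature:>17}: {model}")
```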

Model Routing

With LLM Gateway's routing features, you can:

  • Use cost-effective models by default for an optimal price-to-performance ratio
  • Automatically scale to more powerful models based on your request's context size
  • Handle large contexts intelligently by routing to models with appropriate context windows

Troubleshooting

Authentication Errors

If you see authentication errors:

  • Verify your API key is correct
  • Check that the base URL is set to https://api.llmgateway.io/v1
  • Ensure your LLM Gateway account has sufficient credits

Model Not Found

If you see "model not found" errors:

  • Verify the model ID exists in the models page
  • Check that you're using the correct model name format
  • Some models may require specific provider configurations in your LLM Gateway dashboard
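Both authentication and model-id problems can be checked outside Cursor. Assuming the gateway follows the OpenAI convention of exposing GET /models (an assumption — check the LLM Gateway API docs), a short script that fetches the model list and looks for an id; the API key is a placeholder:

```python
import json
import urllib.request

BASE_URL = "https://api.llmgateway.io/v1"
API_KEY = "your-llmgateway-api-key"  # placeholder: use your real key

def model_exists(model_id: str, models: list[dict]) -> bool:
    """Check whether a model id appears in an OpenAI-style model list."""
    return any(m.get("id") == model_id for m in models)

def fetch_models() -> list[dict]:
    """Fetch the model list (assumes an OpenAI-style GET /models endpoint).

    A 401 response here points to a bad API key; a clean response in
    which your model is missing points to a wrong id or id format.
    """
    req = urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]

# Offline illustration with a stand-in list; replace with fetch_models():
sample = [{"id": "gpt-4o"}, {"id": "anthropic/claude-sonnet-4-5"}]
print(model_exists("gpt-4o", sample))       # True
print(model_exists("gpt-4o-mini", sample))  # False
```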

Slow Responses

If responses are slow:

  • Check your internet connection
  • Monitor your usage in the LLM Gateway dashboard
  • Consider using faster models for autocomplete features

Need help? Join our Discord community for support and troubleshooting assistance.

Benefits of Using LLM Gateway with Cursor

  • Multi-Provider Access: Use models from OpenAI, Anthropic, Google, open-source providers, and more
  • Cost Control: Track and limit your AI spending with detailed usage analytics
  • Caching: Reduce costs with response caching
  • Analytics: Monitor usage patterns and costs
