
Autohand Integration

Use GPT-5, Claude, Gemini, or any model with Autohand's autonomous coding agent. Simple config, full cost tracking.

Autohand is an autonomous AI coding agent that works in your terminal, IDE, and Slack. With LLM Gateway, you can route all Autohand requests through a single gateway—use any of 180+ models from 60+ providers, with full cost tracking and smart routing.

Setup

Sign Up for LLM Gateway

Sign up free — no credit card required. Copy your API key from the dashboard.

Set Environment Variables

Configure Autohand to use LLM Gateway:

export OPENAI_BASE_URL=https://api.llmgateway.io/v1
export OPENAI_API_KEY=llmgtwy_your_api_key_here
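As a quick sanity check before launching Autohand, you can confirm both variables are set in your current shell (a minimal sketch; the confirmation message is illustrative):

```shell
# Point OpenAI-compatible clients at LLM Gateway (placeholder key shown)
export OPENAI_BASE_URL=https://api.llmgateway.io/v1
export OPENAI_API_KEY=llmgtwy_your_api_key_here

# Confirm both variables are non-empty before starting Autohand
[ -n "$OPENAI_BASE_URL" ] && [ -n "$OPENAI_API_KEY" ] && echo "gateway configured"
```

Note that these exports only apply to the current shell session; add them to your shell profile to make them persistent.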

Run Autohand

autohand

All requests will now be routed through LLM Gateway.

Why Use LLM Gateway with Autohand

  • 180+ models — GPT-5, Claude Opus, Gemini, Llama, and more from 60+ providers
  • Smart routing — Automatically selects the best provider based on uptime, throughput, price, and latency
  • Cost tracking — Monitor exactly how much each autonomous session costs
  • Single bill — No need to manage multiple API provider accounts
  • Response caching — Repeated requests hit cache automatically
  • Automatic failover — If one provider is down, requests route to another

Configuration File

You can also configure LLM Gateway in Autohand's config file:

{
	"provider": {
		"llmgateway": {
			"baseUrl": "https://api.llmgateway.io/v1",
			"apiKey": "llmgtwy_your_api_key_here"
		}
	},
	"model": "gpt-5"
}
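A malformed config file is a common source of silent failures. The sketch below writes the config shown above to a local file and checks that it parses; the filename `autohand-config.json` is illustrative, as Autohand's actual config location may differ on your system:

```shell
# Write the example config to a local file (illustrative path)
cat > autohand-config.json <<'EOF'
{
  "provider": {
    "llmgateway": {
      "baseUrl": "https://api.llmgateway.io/v1",
      "apiKey": "llmgtwy_your_api_key_here"
    }
  },
  "model": "gpt-5"
}
EOF

# Verify the JSON parses and print the configured base URL
python3 -c 'import json; print(json.load(open("autohand-config.json"))["provider"]["llmgateway"]["baseUrl"])'
```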

Choosing Models

You can use any model listed on the models page.

Model | Best For
----- | --------
gpt-5 | Latest OpenAI flagship, highest quality
claude-opus-4-6 | Anthropic's most capable model
claude-sonnet-4-6 | Fast reasoning with extended thinking
gemini-2.5-pro | Google's latest flagship, 1M context window
o3 | Advanced reasoning tasks
gpt-5-mini | Cost-effective, quick responses
gemini-2.5-flash | Fast responses, good for high-volume
deepseek-v3.1 | Open-source with vision and tools
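Switching models is a one-field change in the config file. For example, to trade quality for speed on high-volume tasks (a sketch with the placeholder key from the setup steps):

```json
{
	"provider": {
		"llmgateway": {
			"baseUrl": "https://api.llmgateway.io/v1",
			"apiKey": "llmgtwy_your_api_key_here"
		}
	},
	"model": "gemini-2.5-flash"
}
```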

Autohand Features with LLM Gateway

Terminal (CLI)

Autohand CLI works seamlessly with LLM Gateway. Set the environment variables and use all Autohand commands as normal—multi-file editing, agentic search, and autonomous code generation all work out of the box.

IDE Integration

Autohand's VS Code and Zed extensions respect the same environment variables. Set them in your shell profile and the IDE integration will automatically route through LLM Gateway.
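For example, appending the variables to your shell profile makes them available to editors launched from that shell (a sketch assuming zsh; use `~/.bashrc` for bash):

```shell
# Persist the gateway variables so IDE extensions inherit them.
# ~/.zshrc is assumed here; substitute your own shell's profile file.
profile="$HOME/.zshrc"
grep -q 'OPENAI_BASE_URL' "$profile" 2>/dev/null || {
  echo 'export OPENAI_BASE_URL=https://api.llmgateway.io/v1' >> "$profile"
  echo 'export OPENAI_API_KEY=llmgtwy_your_api_key_here' >> "$profile"
}
echo "profile updated: $profile"
```

The `grep` guard keeps the snippet idempotent, so rerunning it won't duplicate the export lines.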

Slack Integration

When using Autohand through Slack, configure the LLM Gateway base URL in your Autohand server settings to route all Slack-triggered coding tasks through the gateway.

Monitoring Usage

Once configured, all Autohand requests appear in your LLM Gateway dashboard:

  • Request logs — See every prompt and response
  • Cost breakdown — Track spending by model and time period
  • Usage analytics — Understand your AI usage patterns

View all available models on the models page.

Need help? Join our Discord community for support and troubleshooting assistance.
