LLM Gateway CLI
Command-line tool for scaffolding and managing LLM Gateway projects
The LLM Gateway CLI (@llmgateway/cli) is a command-line utility for scaffolding projects, managing AI applications, and discovering models.
Installation
Run commands directly without installation:
```bash
npx @llmgateway/cli init
```

Install globally for faster access:

```bash
npm install -g @llmgateway/cli
```

Then run commands directly:

```bash
llmgateway init
```

Quick Start
Initialize a Project
Create a new project from a template:
```bash
npx @llmgateway/cli init
```

Or specify the template and name directly:

```bash
npx @llmgateway/cli init --template image-generation --name my-ai-app
```

Configure Authentication
Login to save your API key locally:
```bash
npx @llmgateway/cli auth login
```

This opens a browser window to authenticate with LLM Gateway. Your credentials are stored in `~/.llmgateway/config.json`.
Alternatively, set the `LLMGATEWAY_API_KEY` environment variable, which takes precedence over the config file.
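The documented resolution order (environment variable first, then the config file) can be sketched in shell. This is an illustration of the precedence rule, not the CLI's actual implementation; it assumes `jq` is available for reading the JSON config:

```bash
# Sketch of the documented key-resolution order: the LLMGATEWAY_API_KEY
# environment variable wins; otherwise fall back to the apiKey field in
# ~/.llmgateway/config.json (requires jq for JSON parsing).
resolve_api_key() {
  if [ -n "$LLMGATEWAY_API_KEY" ]; then
    echo "$LLMGATEWAY_API_KEY"
  else
    jq -r '.apiKey // empty' "$HOME/.llmgateway/config.json" 2>/dev/null
  fi
}
```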
Start Development
Navigate to your project and start the development server:
```bash
cd my-ai-app
npx @llmgateway/cli dev
```

Or specify a custom port:

```bash
npx @llmgateway/cli dev --port 3000
```

Commands
init
Initialize a new project from a template.
```bash
npx @llmgateway/cli init [options]
```

Options:

- `--template <name>` — Template to use (e.g., `image-generation`, `weather-agent`)
- `--name <name>` — Project name
Examples:
```bash
# Interactive mode
npx @llmgateway/cli init

# With options
npx @llmgateway/cli init --template image-generation --name my-app
```

list
Display available project templates.
```bash
npx @llmgateway/cli list
```

Options:

- `--json` — Output in JSON format
models
Browse and filter available AI models.
```bash
npx @llmgateway/cli models [options]
```

Options:

- `--capability <type>` — Filter by capability (e.g., `chat`, `image`, `embedding`)
- `--provider <name>` — Filter by provider (e.g., `openai`, `anthropic`, `google`)
- `--search <term>` — Search models by name
Examples:
```bash
# List all models
npx @llmgateway/cli models

# Filter by provider
npx @llmgateway/cli models --provider openai

# Search models
npx @llmgateway/cli models --search gpt
```

add
Add tools or API routes to an existing project.
```bash
npx @llmgateway/cli add
```

Tools available:

- `weather` — Weather lookup functionality
- `search` — Web search capability
- `calculator` — Mathematical operations
API routes available:
- `generate` — Text generation endpoint
- `chat` — Chat completion endpoint
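Conceptually, a scaffolded tool is just a named function the model can invoke with structured arguments. As a toy illustration of the idea only (the code the CLI actually generates is a TypeScript module and will look different), a `calculator`-style tool boils down to:

```bash
# Hypothetical illustration: a "calculator" tool as a named function
# taking an operand, an operator, and an operand, e.g. `calculator 2 + 3`.
# Not the code generated by `add` — purely a sketch of the concept.
calculator() {
  echo "$(( $1 $2 $3 ))"
}
```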
auth
Manage API authentication.
```bash
# Login via browser
npx @llmgateway/cli auth login

# Check authentication status
npx @llmgateway/cli auth status

# Logout
npx @llmgateway/cli auth logout
```

dev
Start the local development server.
```bash
npx @llmgateway/cli dev [options]
```

Options:

- `--port <number>` — Port to run on (default: 3000)
upgrade
Update LLM Gateway dependencies in your project.
```bash
npx @llmgateway/cli upgrade [options]
```

Options:

- `--dry-run` — Show what would be updated without making changes
docs
Open the documentation in your browser.
```bash
npx @llmgateway/cli docs
```

Available Templates
Image Generation
A full-stack application for AI image generation.
- Stack: Next.js 16, React 19, TypeScript
- Features: Multi-provider support (DALL-E, Stable Diffusion), unified API
- Use case: Image generation apps, creative tools
```bash
npx @llmgateway/cli init --template image-generation
```

Weather Agent
A CLI agent demonstrating tool calling capabilities.
- Stack: TypeScript, AI SDK, OpenAI
- Features: Tool calling, real-time data, natural language
- Use case: Learning tool usage, building CLI agents
```bash
npx @llmgateway/cli init --template weather-agent
```

Configuration
The CLI stores configuration in `~/.llmgateway/config.json`:

```json
{
  "apiKey": "llmgtwy_...",
  "defaultTemplate": "image-generation"
}
```

Environment Variables
The `LLMGATEWAY_API_KEY` environment variable takes precedence over the config file:

```bash
export LLMGATEWAY_API_KEY="llmgtwy_..."
```

More Resources
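In headless environments such as CI, where `auth login` cannot open a browser, you can either export the variable as above or write the documented config file directly. A sketch, using the `apiKey` field name and path shown in the Configuration section (the placeholder fallback exists only to keep the snippet self-contained):

```bash
# Write ~/.llmgateway/config.json without the interactive login flow.
# Assumes LLMGATEWAY_API_KEY holds your key; falls back to a placeholder.
API_KEY="${LLMGATEWAY_API_KEY:-llmgtwy_placeholder}"
mkdir -p "$HOME/.llmgateway"
printf '{\n  "apiKey": "%s"\n}\n' "$API_KEY" > "$HOME/.llmgateway/config.json"
```

Remember that if `LLMGATEWAY_API_KEY` is set at runtime, it overrides whatever the config file contains.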
- Agents — Pre-built AI agents
- Templates — Production-ready starter projects
- GitHub Repository — Source code and issues
Need help or want to request a feature? Open an issue on GitHub.