Create Video
Creates a new asynchronous video generation job using an OpenAI-compatible request format.
Authorization (header): Bearer token authentication using API keys.
Request Body

model — The video generation model to use. Supported values: veo-3.1-generate-preview, veo-3.1-fast-generate-preview, and their obsidian/, avalanche/, or google-vertex/ prefixed variants. Default: "veo-3.1-generate-preview".

prompt — Text prompt describing the video to generate. Constraint: 1 <= length.

size — Output resolution in OpenAI widthxheight format. Obsidian supports 1280x720 and 720x1280. Avalanche supports 1920x1080, 1080x1920, 3840x2160, and 2160x3840. Google Vertex supports 1280x720, 720x1280, 1920x1080, 1080x1920, 3840x2160, and 2160x3840.

webhook URL — LLMGateway extension. When set, a signed webhook is delivered after the job reaches a terminal state. Format: uri.

webhook secret — LLMGateway extension. Shared secret used to sign webhook deliveries with HMAC-SHA256.

seconds — Output duration in seconds. Veo 3.1 preview models support up to 10 seconds. Provider-specific routing may impose stricter limits; Obsidian and Avalanche currently only support 8-second outputs. Constraint: 1 <= value.

audio — Whether the generated video should include audio. Google Vertex Veo 3.1 supports both audio-enabled and silent output; other currently exposed Veo 3.1 mappings support audio-enabled output only. Default: true.

reference images — One to three reference images for provider-specific asset or material-guided video generation. Constraint: 1 <= items <= 3.

Response Body
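Webhook deliveries are signed with HMAC-SHA256 using the shared secret described above, so receivers can verify authenticity before trusting the payload. A minimal verification sketch, assuming the signature arrives as a hex-encoded digest of the raw request body (the exact header name and digest encoding are not specified here and are assumptions):

```python
import hashlib
import hmac


def verify_webhook(raw_body: bytes, received_signature: str, secret: str) -> bool:
    """Recompute HMAC-SHA256 over the raw body and compare in constant time.

    Assumes a hex-encoded digest of the unmodified request body; adjust if
    the actual delivery uses a different encoding or a prefixed payload.
    """
    expected = hmac.new(secret.encode("utf-8"), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the match position via timing.
    return hmac.compare_digest(expected, received_signature)
```

Verify against the raw bytes as received; re-serializing parsed JSON can change whitespace or key order and invalidate the signature.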
application/json
Example request (the Authorization header is required per the auth section; `$LLMGATEWAY_API_KEY` is a placeholder for your API key):

```shell
curl -X POST "https://api.llmgateway.io/v1/videos" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLMGATEWAY_API_KEY" \
  -d '{
    "prompt": "A cinematic drone shot flying through a neon-lit futuristic city at night",
    "seconds": 8
  }'
```

Example response:

```json
{
  "id": "string",
  "object": "video",
  "model": "string",
  "status": "queued",
  "progress": 100,
  "created_at": 0,
  "completed_at": 0,
  "expires_at": 0,
  "error": {
    "code": "string",
    "message": "string",
    "details": null
  },
  "content": [
    {
      "type": "video",
      "url": "http://example.com",
      "mime_type": "string"
    }
  ]
}
```
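Because creation is asynchronous, a job is returned immediately with status "queued" and clients poll it until it reaches a terminal state. A minimal polling sketch; the retrieval endpoint and the exact terminal status names ("completed", "failed") are assumptions based on the status and error fields shown above, and the fetcher is injected so the loop works with any HTTP client:

```python
import time
from typing import Callable, Dict

# Assumed terminal states; the schema above shows "queued" and an error object.
TERMINAL_STATUSES = {"completed", "failed"}


def poll_video_job(
    fetch_job: Callable[[], Dict],
    interval_s: float = 2.0,
    max_attempts: int = 30,
) -> Dict:
    """Poll fetch_job() until the job reports a terminal status.

    fetch_job stands in for a GET on the job's status URL; injecting it
    keeps the loop usable with any HTTP client and testable without one.
    """
    for _ in range(max_attempts):
        job = fetch_job()
        if job.get("status") in TERMINAL_STATUSES:
            return job
        time.sleep(interval_s)
    raise TimeoutError("video job did not reach a terminal state in time")
```

With `requests`, `fetch_job` could wrap a GET on the job's URL (e.g. a hypothetical `https://api.llmgateway.io/v1/videos/{id}`) with the same Authorization header as the create call, returning the decoded JSON.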