Group Chat
Compare responses from multiple LLM models side by side
The Group Chat page lets you send a single prompt to multiple models simultaneously and compare their responses side by side. This is useful for evaluating model quality, speed, and cost.

How It Works
- Select two or more models from the model picker
- Type your prompt in the input field
- All selected models receive the same prompt at once
- Responses stream in parallel, displayed in separate columns
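The fan-out pattern behind these steps can be sketched as follows. This is a minimal illustration, not the page's actual implementation: `query_model` is a hypothetical stub standing in for a real provider API call, and the model names are placeholders.

```python
import asyncio

async def query_model(model: str, prompt: str) -> str:
    # Hypothetical stub; a real implementation would call the
    # provider's API and stream tokens back.
    await asyncio.sleep(0)  # stand-in for network latency
    return f"[{model}] response to: {prompt}"

async def group_chat(models: list[str], prompt: str) -> dict[str, str]:
    # Send the same prompt to every selected model concurrently,
    # then collect responses keyed by model name.
    tasks = [query_model(m, prompt) for m in models]
    responses = await asyncio.gather(*tasks)
    return dict(zip(models, responses))

results = asyncio.run(group_chat(["model-a", "model-b"], "Hello"))
for model, text in results.items():
    print(f"{model}: {text}")
```

Because the requests run concurrently via `asyncio.gather`, total wait time is roughly that of the slowest model rather than the sum of all of them, which is what makes side-by-side latency comparison meaningful.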
Use Cases
- Model evaluation — Compare output quality across providers
- Cost optimization — See which models give the best results for the price
- Speed comparison — Observe latency differences between models
- Migration testing — Verify that a new model produces equivalent results