Azure Integration
Connect Azure to LLM Gateway for enterprise-grade OpenAI models
Azure provides access to OpenAI's powerful language models through Microsoft's enterprise cloud infrastructure. This guide shows how to create an Azure resource, deploy models, and integrate them with LLM Gateway.
Only OpenAI models are supported via Azure at this time. Open an issue to request support for other model types.
Prerequisites
- An Azure account with an active subscription
- LLM Gateway account (Pro plan required for provider keys) or self-hosted instance (free)
Overview
Azure provides enterprise-grade access to OpenAI models with enhanced security, compliance, and regional availability. LLM Gateway integrates seamlessly with Azure deployments.
Create Azure Resource
Create an Azure OpenAI Resource
- Log into the Azure Portal (https://portal.azure.com)
- Click Create a resource
- Search for Azure OpenAI and select it
- Click Create
- Configure the resource:
- Subscription: Select your Azure subscription
- Resource group: Create new or select existing
- Region: Choose a region (e.g., East US, West Europe)
- Name: Enter a unique resource name (this will be your `<resource-name>`)
- Pricing tier: Select Standard S0
- Click Review + create, then Create
- Wait for deployment to complete
Important: Note your resource name - it will be used in the base URL: `https://<resource-name>.openai.azure.com`
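If you prefer to script this step, the same resource can be created with the Azure CLI. A minimal sketch, assuming the `az` CLI is installed and logged in; the resource group, name, and region are placeholders:

```bash
# Create a resource group (skip if you already have one)
az group create --name my-rg --location eastus

# Create the Azure OpenAI resource on the Standard S0 tier.
# --custom-domain gives the resource the <resource-name>.openai.azure.com endpoint.
az cognitiveservices account create \
  --name my-openai-resource \
  --resource-group my-rg \
  --kind OpenAI \
  --sku S0 \
  --location eastus \
  --custom-domain my-openai-resource
```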
Deploy Models
- Navigate to your Azure resource in the Azure Portal
- Click Go to Azure OpenAI Studio or visit https://oai.azure.com
- In Azure OpenAI Studio, select Deployments from the left sidebar
- Click Create new deployment
- Configure your deployment:
- Model: Select a model (e.g., gpt-4o, gpt-4o-mini, gpt-4-turbo)
- Deployment name: Enter a name (this must match the model identifier you'll use; keeping the pre-filled name is the safest choice)
- Model version: Select the latest version
- Deployment type: Global Standard
- Click Create
- Repeat for additional models you want to use
Note: The deployment name must match the expected model name:
- For `gpt-4o-mini`, the deployment name should be `gpt-4o-mini`
- For `gpt-35-turbo`, the deployment name should be `gpt-35-turbo`, and so on
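Deployments can also be scripted. A sketch with the Azure CLI, reusing the placeholder names from the resource step; the model version shown is an example and available versions vary by region:

```bash
# Create a gpt-4o-mini deployment whose name matches the model identifier
az cognitiveservices account deployment create \
  --resource-group my-rg \
  --name my-openai-resource \
  --deployment-name gpt-4o-mini \
  --model-name gpt-4o-mini \
  --model-version "2024-07-18" \
  --model-format OpenAI \
  --sku-name GlobalStandard \
  --sku-capacity 1
```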
Get API Key
- In the Azure Portal, go to your Azure resource
- Click Keys and Endpoint in the left sidebar
- Copy Key 1 or Key 2
- Note your Endpoint URL (it should be `https://<resource-name>.openai.azure.com`)
Important: Keep your API key secure - it provides access to your Azure deployments.
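Before adding the key to LLM Gateway, you can sanity-check it directly against Azure. A sketch assuming a `gpt-4o-mini` deployment; the `api-version` shown is one GA version, and other recent versions work too:

```bash
curl "https://<resource-name>.openai.azure.com/openai/deployments/gpt-4o-mini/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_AZURE_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'
```

A JSON chat completion in the response confirms both the key and the deployment before the gateway is involved.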
Add to LLM Gateway
Navigate to Provider Keys
- Log into LLM Gateway Dashboard
- Select your organization and project
- Go to Provider Keys in the sidebar
Add Azure Provider Key
- Click Add for Azure
- Enter your API Key from Azure Portal
- Enter your Resource Name (the name from your Azure endpoint URL)
- Example: If your endpoint is `https://my-openai-resource.openai.azure.com`, enter `my-openai-resource`
- Select your preferred type (Azure OpenAI or AI Foundry)
- Set the Validation Model to a model you have already deployed and that is available. This is a one-time check to ensure the API key is valid and the model can be accessed.
- Click Add Key
The system will validate your key and confirm the connection.
Test the Integration
Test your integration with a simple API call:
```bash
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_LLMGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "azure/gpt-4o-mini",
    "messages": [
      {
        "role": "user",
        "content": "Hello from Azure!"
      }
    ]
  }'
```

Replace `YOUR_LLMGATEWAY_API_KEY` with your LLM Gateway API key.
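Streaming requests should work the same way, assuming LLM Gateway passes the OpenAI-compatible `stream` parameter through to Azure (an assumption here; check the gateway docs if in doubt):

```bash
# Same request with streaming enabled; the reply arrives as server-sent events
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_LLMGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "azure/gpt-4o-mini",
    "stream": true,
    "messages": [{"role": "user", "content": "Hello from Azure!"}]
  }'
```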
Available Models
Once configured, you can access your Azure deployments through LLM Gateway:
- GPT-4o: `azure/gpt-4o`
- GPT-4o Mini: `azure/gpt-4o-mini`
- GPT-3.5 Turbo: `azure/gpt-3.5-turbo` (note: use `gpt-3.5-turbo` as the LLM Gateway model name instead of `gpt-35-turbo`)
Note: Only models you have deployed in Azure Studio will be available. Ensure your deployment names match the expected model identifiers.
Browse all available models at llmgateway.io/models
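If the gateway exposes the OpenAI-compatible model listing endpoint (`/v1/models` is an assumption here, not confirmed by this guide), you can also check programmatically which models your key can reach:

```bash
# List models visible to your API key (assumes an OpenAI-compatible /v1/models route)
curl https://api.llmgateway.io/v1/models \
  -H "Authorization: Bearer YOUR_LLMGATEWAY_API_KEY"
```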
Troubleshooting
"Deployment not found" error
- Verify you've created a deployment in Azure Studio
- Ensure the deployment name exactly matches the model name you're requesting
- Check that the deployment is in the same resource as your API key
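To see exactly which deployments exist and what they are named, you can list them with the Azure CLI (placeholder names as before):

```bash
# Deployment names here must match the model identifiers you request
az cognitiveservices account deployment list \
  --resource-group my-rg \
  --name my-openai-resource \
  --output table
```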
"Resource not found" error
- Verify the resource name is correct (check your Azure Portal endpoint URL)
- Ensure your API key belongs to the correct Azure resource
- Confirm the resource is in an active state in Azure Portal
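The endpoint, and therefore the resource name LLM Gateway expects, can be confirmed from the CLI as well:

```bash
# Print the resource endpoint; the subdomain is your <resource-name>
az cognitiveservices account show \
  --resource-group my-rg \
  --name my-openai-resource \
  --query properties.endpoint \
  --output tsv
```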
Rate limiting
- Azure has Tokens Per Minute (TPM) quotas per deployment
- Monitor usage in Azure Studio under Quotas
- Request quota increases through Azure Portal if needed for high-volume workloads
Region availability
- Not all models are available in all Azure regions
- Check Azure model availability for your region
- Consider creating resources in multiple regions for better availability
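Recent versions of the Azure CLI can list which models a region offers, which is a quick way to check availability before creating a resource there:

```bash
# List models available to Cognitive Services accounts in a region
az cognitiveservices model list --location eastus --output table
```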