# Provider Errors

Errors from upstream AI providers.

## Error Codes
| Code | HTTP Status | Description |
|---|---|---|
| `provider_error` | 502 | General provider error |
| `provider_timeout` | 504 | Provider request timed out |
| `provider_unavailable` | 503 | Provider is temporarily unavailable |
| `model_not_found` | 404 | Requested model doesn't exist |
| `model_overloaded` | 503 | Model is overloaded |
| `context_length_exceeded` | 400 | Input exceeds the model's context window |
| `content_filter` | 400 | Content blocked by safety filter |
| `invalid_request` | 400 | Invalid request to provider |
## Error Format

```json
{
  "error": {
    "type": "provider_error",
    "code": "provider_error",
    "message": "The upstream provider returned an error.",
    "provider": "openai",
    "provider_error": {
      "type": "server_error",
      "message": "The server had an error processing your request."
    },
    "doc_url": "https://docs.gateflow.ai/api-reference/errors/provider-errors#provider_error"
  }
}
```

## Error Details
### provider_error

General error from the provider. When fallbacks are configured, the error body also records whether a fallback was attempted and how it resolved.

```json
{
  "error": {
    "type": "provider_error",
    "code": "provider_error",
    "message": "OpenAI returned an error.",
    "provider": "openai",
    "provider_error": {
      "type": "server_error",
      "message": "The server had an error processing your request.",
      "code": "internal_error"
    },
    "fallback_attempted": true,
    "fallback_provider": "anthropic",
    "fallback_result": "success"
  }
}
```

### provider_timeout
Request to provider timed out.
```json
{
  "error": {
    "type": "provider_error",
    "code": "provider_timeout",
    "message": "Request to OpenAI timed out after 30000ms.",
    "provider": "openai",
    "timeout_ms": 30000
  }
}
```

**Resolution:**
- Retry the request (see the sketch below)
- Consider using a faster model
- Reduce prompt size for faster processing
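A minimal client-side sketch of the first point: retry a couple of times with a short exponential backoff whenever the gateway reports `provider_timeout`. The attempt count and backoff values are illustrative choices, not GateFlow defaults.

```python
import time
import openai

client = openai.OpenAI(
    base_url="https://api.gateflow.ai/v1",
    api_key="gw_prod_...",
)

def create_with_retry(messages, model="gpt-5.2", attempts=3):
    """Retry on provider_timeout with a short exponential backoff."""
    for attempt in range(attempts):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except openai.APIStatusError as e:
            body = e.body if isinstance(e.body, dict) else {}
            code = body.get("error", {}).get("code")
            if code != "provider_timeout" or attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # 1 s, 2 s, ... between attempts
```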
### provider_unavailable

Provider service is temporarily down.

```json
{
  "error": {
    "type": "provider_error",
    "code": "provider_unavailable",
    "message": "OpenAI is temporarily unavailable.",
    "provider": "openai",
    "status_url": "https://status.openai.com",
    "retry_after": 60
  }
}
```

**Resolution:**
- Check provider status page
- Use fallback models
- Wait for the `retry_after` interval and retry (see the sketch below)
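A minimal sketch of the last point, assuming the error body carries the `retry_after` field shown above: sleep for that many seconds (falling back to a default) before a single retry. The 60-second default is an illustrative choice.

```python
import time
import openai

def retry_after_unavailable(client, **request):
    """Retry once after the wait the gateway suggests for provider_unavailable."""
    try:
        return client.chat.completions.create(**request)
    except openai.APIStatusError as e:
        body = e.body if isinstance(e.body, dict) else {}
        err = body.get("error", {})
        if err.get("code") != "provider_unavailable":
            raise
        time.sleep(err.get("retry_after", 60))  # assumed default if the field is absent
        return client.chat.completions.create(**request)
```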
### model_not_found

The requested model doesn't exist or isn't available.

```json
{
  "error": {
    "type": "provider_error",
    "code": "model_not_found",
    "message": "Model 'gpt-6' not found.",
    "model": "gpt-6",
    "provider": "openai",
    "available_models": ["gpt-5.2", "gpt-5", "gpt-5-mini"]
  }
}
```

**Common causes:**
- Model name typo (compare against the gateway's suggestions, as shown below)
- Model deprecated or removed
- Model not available in your region
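When this error comes back, the `available_models` field shown above can drive a clearer log message or a quick recovery. A minimal sketch, with a hypothetical helper name:

```python
import openai

def create_or_suggest(client, model, messages):
    """Surface the gateway's model suggestions when the requested model is unknown."""
    try:
        return client.chat.completions.create(model=model, messages=messages)
    except openai.APIStatusError as e:
        body = e.body if isinstance(e.body, dict) else {}
        err = body.get("error", {})
        if err.get("code") != "model_not_found":
            raise
        suggestions = ", ".join(err.get("available_models", [])) or "none listed"
        raise ValueError(f"Unknown model '{model}'. Available: {suggestions}") from e
```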
### model_overloaded

The model is experiencing high demand.

```json
{
  "error": {
    "type": "provider_error",
    "code": "model_overloaded",
    "message": "The gpt-5.2 model is currently overloaded.",
    "provider": "openai",
    "model": "gpt-5.2",
    "retry_after": 30
  }
}
```

**Resolution:**
- Wait and retry
- Use a different model (see the sketch below)
- Enable fallbacks
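A client-side version of the second point, distinct from GateFlow's built-in fallbacks shown later on this page: if the primary model reports `model_overloaded`, re-issue the same request against an alternate model. The model names are just the ones used elsewhere in these examples.

```python
import openai

def create_with_model_fallback(client, messages,
                               primary="gpt-5.2",
                               fallback="claude-sonnet-4-5-20250929"):
    """Manually switch models when the primary one reports model_overloaded."""
    try:
        return client.chat.completions.create(model=primary, messages=messages)
    except openai.APIStatusError as e:
        body = e.body if isinstance(e.body, dict) else {}
        if body.get("error", {}).get("code") != "model_overloaded":
            raise
        return client.chat.completions.create(model=fallback, messages=messages)
```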
### context_length_exceeded

Input exceeds the model's maximum context window.

```json
{
  "error": {
    "type": "provider_error",
    "code": "context_length_exceeded",
    "message": "This model's maximum context length is 128000 tokens. Your request resulted in 145000 tokens.",
    "provider": "openai",
    "model": "gpt-5.2",
    "max_tokens": 128000,
    "requested_tokens": 145000
  }
}
```

**Resolution:**
- Reduce prompt size (see the token-counting sketch below)
- Summarize or truncate context
- Use a model with larger context (Gemini 3 Pro: 2M tokens)
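One way to act on the first two points is to measure the prompt before sending it. The sketch below uses the `tiktoken` library with the `o200k_base` encoding as a rough proxy; exact token counts vary by model, so treat the budget as an approximation, and the 128000 limit simply mirrors the example above.

```python
import tiktoken

ENCODING = tiktoken.get_encoding("o200k_base")  # rough proxy; exact tokenization is model-specific

def trim_messages(messages, max_tokens=128000):
    """Drop the oldest non-system messages until the rough token count fits the budget."""
    def count(msgs):
        return sum(len(ENCODING.encode(m["content"])) for m in msgs)

    trimmed = list(messages)
    while count(trimmed) > max_tokens and len(trimmed) > 1:
        # Keep a leading system message if present; drop the oldest turn after it.
        drop_index = 1 if trimmed[0].get("role") == "system" else 0
        trimmed.pop(drop_index)
    return trimmed
```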
### content_filter

Request blocked by provider's safety filter.

```json
{
  "error": {
    "type": "provider_error",
    "code": "content_filter",
    "message": "Your request was rejected by the content filter.",
    "provider": "openai",
    "filter_reason": "Content may violate usage policies."
  }
}
```

**Resolution:**
- Review and modify your prompt
- Check provider's usage policies
### invalid_request

Malformed request to the provider.

```json
{
  "error": {
    "type": "provider_error",
    "code": "invalid_request",
    "message": "Invalid request: 'temperature' must be between 0 and 2.",
    "provider": "openai",
    "param": "temperature",
    "value": 3.0
  }
}
```

## Handling Provider Errors
### Python

```python
import openai

client = openai.OpenAI(
    base_url="https://api.gateflow.ai/v1",
    api_key="gw_prod_..."
)

try:
    response = client.chat.completions.create(
        model="gpt-5.2",
        messages=[{"role": "user", "content": "Hello"}]
    )
except openai.APIStatusError as e:
    error = e.body.get("error", {})
    code = error.get("code")

    if code == "provider_timeout":
        print("Request timed out, retrying...")
    elif code == "provider_unavailable":
        print(f"Provider down, check {error.get('status_url')}")
    elif code == "context_length_exceeded":
        max_tokens = error.get("max_tokens")
        print(f"Context too long, max is {max_tokens}")
    elif code == "model_not_found":
        available = error.get("available_models", [])
        print(f"Model not found, available: {available}")
    else:
        print(f"Provider error: {error.get('message')}")
```

#### With Automatic Fallbacks
```python
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={
        "gateflow": {
            "fallbacks": ["claude-sonnet-4-5-20250929", "gemini-3-pro"],
            "retry": {
                "max_attempts": 3,
                "retryable_errors": [
                    "provider_error",
                    "provider_timeout",
                    "provider_unavailable",
                    "model_overloaded"
                ]
            }
        }
    }
)

# Check which provider actually served the request
provider = response.model_extra.get("gateflow", {}).get("provider")
print(f"Served by: {provider}")
```

### JavaScript/TypeScript
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.gateflow.ai/v1',
  apiKey: 'gw_prod_...',
});

try {
  const response = await client.chat.completions.create({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: 'Hello' }],
  });
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    const errorBody = error.error as any;
    const code = errorBody?.code;

    switch (code) {
      case 'provider_timeout':
        console.log('Request timed out, consider retrying');
        break;
      case 'context_length_exceeded':
        console.log(`Max tokens: ${errorBody.max_tokens}`);
        break;
      case 'model_not_found':
        console.log(`Available: ${errorBody.available_models}`);
        break;
      default:
        console.log(`Provider error: ${errorBody.message}`);
    }
  }
}
```

## Provider Status
### Check Provider Health

```bash
curl https://api.gateflow.ai/v1/management/providers/health \
  -H "Authorization: Bearer gw_prod_..."
```

Response:
```json
{
  "providers": {
    "openai": {
      "status": "healthy",
      "latency_ms": 120,
      "success_rate": 0.998,
      "last_error": null
    },
    "anthropic": {
      "status": "healthy",
      "latency_ms": 95,
      "success_rate": 0.999,
      "last_error": null
    },
    "google": {
      "status": "degraded",
      "latency_ms": 450,
      "success_rate": 0.95,
      "last_error": {
        "time": "2026-02-16T10:00:00Z",
        "type": "timeout"
      }
    }
  }
}
```

## Best Practices
- **Always configure fallbacks** - Don't rely on a single provider
- **Handle context limits** - Check token counts before sending
- **Monitor provider health** - Set up alerts for degraded status (a polling sketch follows below)
- **Use appropriate timeouts** - Adjust based on model speed
- **Implement retry logic** - Automatically retry transient errors
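As a rough illustration of the health-monitoring point, the snippet below polls the health endpoint shown above and flags any provider that is not reporting `healthy`. The use of the `requests` library, the polling interval, and the alerting hook are all illustrative choices.

```python
import time
import requests

HEALTH_URL = "https://api.gateflow.ai/v1/management/providers/health"
API_KEY = "gw_prod_..."

def check_provider_health():
    """Return the providers that are not currently reporting a healthy status."""
    resp = requests.get(
        HEALTH_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    providers = resp.json().get("providers", {})
    return {name: p for name, p in providers.items() if p.get("status") != "healthy"}

if __name__ == "__main__":
    while True:
        for name, info in check_provider_health().items():
            # Replace this print with your alerting hook (Slack, PagerDuty, etc.).
            print(f"ALERT: {name} is {info.get('status')} (success rate {info.get('success_rate')})")
        time.sleep(60)  # illustrative polling interval
```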
## See Also
- Model Fallbacks - Fallback configuration
- Retry Logic - Retry strategies
- Provider Configuration - Provider setup
- Supported Models - Model catalog