
Provider Configuration

Connect your AI provider accounts to GateFlow. This guide covers supported providers and configuration options.

Supported Providers

| Provider   | Models                                               | Status |
|------------|------------------------------------------------------|--------|
| OpenAI     | GPT-5.2, GPT-5, o3, Whisper, Embeddings              | Stable |
| Anthropic  | Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5 | Stable |
| Google     | Gemini 3 Pro, Gemini 2.5 Pro, Gemini 2.5 Flash       | Stable |
| Mistral    | Large 3, Small 3, Voxtral, Embed                     | Stable |
| Cohere     | Command R+, Command R, Embed, Rerank                 | Stable |
| ElevenLabs | Multilingual v2, Turbo v2.5, Flash v2.5              | Stable |

Adding a Provider

Via Dashboard

  1. Go to Settings → Providers
  2. Click Add Provider
  3. Select the provider
  4. Enter your credentials
  5. Click Test Connection
  6. Click Save

Via API

bash
curl -X POST https://api.gateflow.ai/v1/management/providers \
  -H "Authorization: Bearer gw_prod_admin_key" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "openai",
    "credentials": {
      "api_key": "sk-..."
    },
    "settings": {
      "organization_id": "org-..."
    }
  }'
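The same registration request can be sketched in Python. This is an illustrative helper, not an official client: the endpoint path, header format, and field names are taken from the curl example above and are not otherwise verified.

```python
import json
import urllib.request


def build_provider_payload(provider, api_key, settings=None):
    """Assemble the JSON body for POST /v1/management/providers,
    mirroring the curl example above."""
    payload = {"provider": provider, "credentials": {"api_key": api_key}}
    if settings:
        payload["settings"] = settings
    return payload


def register_provider(admin_key, payload, base="https://api.gateflow.ai"):
    """Send the registration request with an admin key (sketch only)."""
    req = urllib.request.Request(
        f"{base}/v1/management/providers",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {admin_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Keeping payload construction separate from the HTTP call makes the body easy to validate before any credentials leave your machine.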

Provider-Specific Configuration

OpenAI

json
{
  "provider": "openai",
  "credentials": {
    "api_key": "sk-..."
  },
  "settings": {
    "organization_id": "org-...",
    "project_id": "proj_..."
  }
}

Available Models:

  • gpt-5.2, gpt-5.2-chat-latest, gpt-5.2-codex
  • gpt-5.1, gpt-5, gpt-5-mini, gpt-5-nano
  • o3, o4-mini (reasoning models)
  • text-embedding-3-large, text-embedding-3-small
  • whisper-1, tts-1, tts-1-hd

Anthropic

json
{
  "provider": "anthropic",
  "credentials": {
    "api_key": "sk-ant-..."
  }
}

Available Models:

  • claude-opus-4-5-20251107
  • claude-sonnet-4-5-20250929
  • claude-sonnet-4-20250514
  • claude-haiku-4-5-20251015

Google (Gemini)

json
{
  "provider": "google",
  "credentials": {
    "api_key": "AIza..."
  }
}

Available Models:

  • gemini-3-pro, gemini-3-flash
  • gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite
  • text-embedding-004

Mistral

json
{
  "provider": "mistral",
  "credentials": {
    "api_key": "..."
  }
}

Available Models:

  • mistral-large-3, mistral-large-latest
  • mistral-small-3, mistral-small-latest
  • ministral-3b, ministral-8b, ministral-14b
  • pixtral-large-latest
  • devstral-2, devstral-small-2
  • voxtral-mini-latest (speech-to-text)
  • mistral-embed
  • mistral-ocr-latest

Cohere

json
{
  "provider": "cohere",
  "credentials": {
    "api_key": "..."
  }
}

Available Models:

  • command-r-plus, command-r-plus-08-2024
  • command-r, command-r-03-2024
  • embed-english-v3.0, embed-multilingual-v3.0
  • rerank-english-v3.0, rerank-multilingual-v3.0

ElevenLabs

json
{
  "provider": "elevenlabs",
  "credentials": {
    "api_key": "..."
  }
}

Available Models:

  • eleven_multilingual_v2 - Highest quality, 29 languages
  • eleven_turbo_v2_5 - Low latency, optimized for real-time
  • eleven_flash_v2_5 - Ultra-fast, cost-effective
  • eleven_monolingual_v1 - English only, legacy

Voice IDs:

  • rachel (21m00Tcm4TlvDq8ikWAM)
  • josh (TxGEqnHWrfWFTfGW9XjX)
  • bella (EXAVITQu4vr4xnSDxMaL)
  • adam (pNInz6obpgDQGcFmaJgB)
  • domi (AZnzlk1XvdvUeBnXmlld)

Testing Connections

Via Dashboard

Click Test Connection to verify credentials work.

Via API

bash
curl -X POST https://api.gateflow.ai/v1/management/providers/openai/test \
  -H "Authorization: Bearer gw_prod_admin_key"

Response:

json
{
  "status": "ok",
  "latency_ms": 145,
  "models_available": ["gpt-5.2", "gpt-5", "..."]
}
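A script that automates this check only needs to inspect the response shape shown above. A minimal sketch; the latency threshold is our own choice, not part of the API:

```python
def connection_ok(resp, max_latency_ms=2000):
    """Interpret a test-connection response like the one above.

    Passes only when status is "ok" and latency is within a
    threshold (the threshold is illustrative, not an API contract).
    """
    return (
        resp.get("status") == "ok"
        and resp.get("latency_ms", 0) <= max_latency_ms
    )
```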

Provider Priority

When more than one configured provider can serve a requested model, GateFlow routes to the first available provider in the configured priority order:

bash
curl -X PATCH https://api.gateflow.ai/v1/management/providers/priority \
  -H "Authorization: Bearer gw_prod_admin_key" \
  -H "Content-Type: application/json" \
  -d '{
    "priority": ["openai", "anthropic", "google"]
  }'
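The routing rule reduces to a first-match lookup over the priority list. This sketch models that behavior with a plain dictionary; GateFlow's internal routing (availability checks, health-based skipping) may be more involved.

```python
def pick_provider(model, priority, catalog):
    """Return the first provider in priority order that serves `model`.

    `catalog` maps provider name -> set of model names it serves.
    Illustrative only; not GateFlow's actual routing code.
    """
    for provider in priority:
        if model in catalog.get(provider, set()):
            return provider
    return None
```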

Provider Health

GateFlow monitors provider health:

bash
curl https://api.gateflow.ai/v1/management/providers/health \
  -H "Authorization: Bearer gw_prod_admin_key"

Response:

json
{
  "providers": [
    {
      "provider": "openai",
      "status": "healthy",
      "latency_p50_ms": 230,
      "latency_p99_ms": 890,
      "error_rate_1h": 0.001,
      "last_check": "2026-01-15T10:30:00Z"
    },
    {
      "provider": "anthropic",
      "status": "degraded",
      "latency_p50_ms": 450,
      "error_rate_1h": 0.05,
      "issues": ["Elevated latency detected"]
    }
  ]
}
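If you poll this endpoint from your own monitoring, the response above is easy to turn into alerts. A sketch with an illustrative error-rate threshold (the threshold is not defined by GateFlow):

```python
def flag_providers(health, max_error_rate=0.01):
    """Collect provider names to alert on from a health response
    shaped like the example above.

    A provider is flagged when its reported status is not "healthy"
    or its 1-hour error rate exceeds the threshold.
    """
    flagged = []
    for p in health.get("providers", []):
        if p.get("status") != "healthy" or p.get("error_rate_1h", 0) > max_error_rate:
            flagged.append(p["provider"])
    return flagged
```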

Provider-Level Settings

Rate Limit Overrides

Set custom rate limits per provider:

json
{
  "provider": "openai",
  "settings": {
    "rate_limits": {
      "requests_per_minute": 500,
      "tokens_per_minute": 100000
    }
  }
}
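To make the semantics of a `requests_per_minute` cap concrete, here is a minimal fixed-window counter. This is purely illustrative: the guide does not specify which algorithm GateFlow uses, and a production gateway might prefer a sliding window or token bucket to avoid bursts at window boundaries.

```python
import time


class MinuteWindowLimiter:
    """Fixed-window sketch of a requests-per-minute cap (illustrative)."""

    def __init__(self, requests_per_minute, clock=time.monotonic):
        self.limit = requests_per_minute
        self.clock = clock  # injectable for testing
        self.window_start = clock()
        self.count = 0

    def allow(self):
        """Return True if a request fits in the current 60s window."""
        now = self.clock()
        if now - self.window_start >= 60:
            self.window_start = now  # start a fresh window
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False
```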

Timeout Settings

json
{
  "provider": "openai",
  "settings": {
    "timeout_seconds": 60,
    "connect_timeout_seconds": 10
  }
}

Retry Configuration

json
{
  "provider": "openai",
  "settings": {
    "retry": {
      "max_attempts": 3,
      "initial_delay_ms": 1000,
      "max_delay_ms": 10000
    }
  }
}
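With these settings, the retry delays form a capped exponential schedule. The sketch below assumes doubling between attempts, which is a common convention; the guide does not state GateFlow's exact growth factor.

```python
def backoff_delays_ms(max_attempts, initial_delay_ms, max_delay_ms):
    """Delays between attempts under capped exponential backoff.

    Returns one delay per retry (no delay follows the final attempt).
    Assumes a doubling growth factor, which is illustrative.
    """
    delays = []
    delay = initial_delay_ms
    for _ in range(max_attempts - 1):
        delays.append(min(delay, max_delay_ms))
        delay *= 2
    return delays
```

With the configuration above (`max_attempts: 3`, `initial_delay_ms: 1000`, `max_delay_ms: 10000`), the waits between attempts would be 1000 ms and then 2000 ms.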

Credential Security

Provider credentials are:

  • Encrypted at rest with AES-256
  • Never logged or exposed in responses
  • Access controlled by organization roles
  • Rotatable without downtime

Rotating Provider Credentials

  1. Update credentials in the provider portal
  2. Update in GateFlow:
bash
curl -X PATCH https://api.gateflow.ai/v1/management/providers/openai \
  -H "Authorization: Bearer gw_prod_admin_key" \
  -H "Content-Type: application/json" \
  -d '{
    "credentials": {
      "api_key": "sk-new-key-here"
    }
  }'
  3. Old credentials stop working immediately

Troubleshooting

"Provider Not Configured"

The model you requested requires a provider that isn't set up:

json
{
  "error": {
    "message": "Provider 'anthropic' not configured",
    "code": "provider_not_configured"
  }
}

Solution: Add the provider in Settings → Providers.

"Invalid Credentials"

Your provider API key is incorrect or expired:

json
{
  "error": {
    "message": "Provider authentication failed",
    "code": "provider_auth_error"
  }
}

Solution: Verify your API key in the provider's dashboard.

"Provider Rate Limit"

You've hit the provider's rate limit (not GateFlow's):

json
{
  "error": {
    "message": "Provider rate limit exceeded",
    "code": "provider_rate_limit",
    "retry_after_seconds": 60
  }
}

Solution: Wait and retry, or configure fallback models.
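Client code can honor `retry_after_seconds` directly when it is present. A small sketch against the error shape shown above; the 60-second default is our own fallback, not a documented value:

```python
def wait_time_seconds(error_body, default=60):
    """Pick a wait before retrying from a provider_rate_limit error
    like the one above; falls back to `default` when the server
    gives no retry_after_seconds. Returns 0 for other error codes."""
    err = error_body.get("error", {})
    if err.get("code") == "provider_rate_limit":
        return err.get("retry_after_seconds", default)
    return 0
```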
