
OpenAI SDK Integration

GateFlow is fully compatible with the OpenAI SDK. Just change the base URL and use your GateFlow API key.

Installation

Python:

```bash
pip install openai
```

Node.js:

```bash
npm install openai
```

Go:

```bash
go get github.com/sashabaranov/go-openai
```

Configuration

Python

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.gateflow.ai/v1",
    api_key="gw_prod_your_key_here"
)
```

TypeScript/Node.js

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.gateflow.ai/v1',
  apiKey: 'gw_prod_your_key_here',
});
```

Environment Variables

```bash
export OPENAI_BASE_URL="https://api.gateflow.ai/v1"
export OPENAI_API_KEY="gw_prod_your_key_here"
```

Then in code:

```python
from openai import OpenAI

client = OpenAI()  # Uses environment variables
```
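As a rough illustration of the precedence involved (explicit arguments win over environment variables, which win over defaults), here is a stdlib-only sketch; the function name and default are ours, not the SDK's:

```python
import os

def resolve_openai_config(default_base="https://api.openai.com/v1"):
    """Approximate how the SDK resolves configuration: environment
    variables when set, otherwise a built-in default base URL."""
    return {
        "base_url": os.environ.get("OPENAI_BASE_URL", default_base),
        "api_key": os.environ.get("OPENAI_API_KEY"),
    }

os.environ["OPENAI_BASE_URL"] = "https://api.gateflow.ai/v1"
os.environ["OPENAI_API_KEY"] = "gw_prod_your_key_here"
cfg = resolve_openai_config()
```

With both variables exported, `cfg["base_url"]` resolves to the GateFlow endpoint, which is why the bare `OpenAI()` constructor above needs no arguments.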

Usage Examples

Chat Completions

```python
response = client.chat.completions.create(
    model="gpt-4o",  # Or claude-3-5-sonnet, gemini-1.5-pro, etc.
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is GateFlow?"}
    ]
)

print(response.choices[0].message.content)
```

Streaming

```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a poem about AI."}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
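If you need the full text rather than incremental printing, the same loop can accumulate the delta fragments. A minimal sketch, with mocked chunk objects standing in for the SDK's stream (the helper names are ours):

```python
from types import SimpleNamespace

def collect_stream(stream):
    """Accumulate delta fragments from a chat-completion stream into one string."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # final chunks can carry a None delta
            parts.append(delta)
    return "".join(parts)

# Mocked chunks shaped like the SDK's stream objects (illustrative only).
def fake_chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

full = collect_stream([fake_chunk("Hel"), fake_chunk("lo"), fake_chunk(None)])
```

Skipping `None` deltas matters: chunks that only carry role or finish-reason information have no content.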

Embeddings

```python
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Hello world"
)

embedding = response.data[0].embedding
```
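Embeddings are typically compared with cosine similarity. A self-contained sketch of that computation (any embedding vectors work; the two-dimensional inputs are just for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: 1.0 for parallel
    vectors, 0.0 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

sim = cosine_similarity([1.0, 0.0], [1.0, 0.0])  # identical direction → 1.0
```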

Function Calling

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }]
)
```
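When the model decides to call the tool, the response carries tool calls whose arguments arrive as a JSON string. A sketch of the dispatch step, using a plain dict shaped like a tool-call entry (the stub weather function and registry are ours):

```python
import json

def get_weather(location):
    # Stand-in implementation; a real tool would query a weather service.
    return {"location": location, "temp_c": 18}

TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(tool_call):
    """Run the local function named by a tool call; arguments arrive as a
    JSON string and the result goes back to the model as JSON."""
    fn = TOOLS[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return json.dumps(fn(**args))

# Shaped like an entry of response.choices[0].message.tool_calls (illustrative).
result = dispatch_tool_call({
    "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}
})
```

In a real loop you would append the result as a `tool`-role message and call the API again so the model can compose its final answer.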

GateFlow Extensions

Pass GateFlow-specific options via the SDK's `extra_body` parameter:

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[...],
    extra_body={
        "gateflow": {
            "cache": "skip",           # Bypass cache
            "fallbacks": ["claude-3-5-sonnet"],  # Custom fallbacks
            "tags": {"team": "support"}  # Analytics tags
        }
    }
)
```
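To make the mechanism concrete: `extra_body` keys are merged into the top level of the JSON request body the SDK sends. A sketch of that merge under that assumption (the helper is ours, not part of either SDK):

```python
def build_request_body(model, messages, extra_body=None):
    """Sketch of how extra_body merges into the outgoing JSON request:
    a top-level merge alongside the standard fields."""
    body = {"model": model, "messages": messages}
    body.update(extra_body or {})
    return body

body = build_request_body(
    "gpt-4o",
    [{"role": "user", "content": "Hi"}],
    extra_body={"gateflow": {"cache": "skip", "tags": {"team": "support"}}},
)
```

The resulting body carries a `"gateflow"` object next to `"model"` and `"messages"`, which is what the gateway inspects.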

Response Metadata

GateFlow adds metadata to responses:

```python
response = client.chat.completions.create(...)

# Access GateFlow metadata (if using raw response)
gateflow_meta = response.model_extra.get("gateflow", {})
print(f"Provider: {gateflow_meta.get('provider')}")
print(f"Cost: ${gateflow_meta.get('cost', {}).get('total')}")
print(f"Cache hit: {gateflow_meta.get('cache', {}).get('hit')}")
```
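One common use of this metadata is tracking spend across requests. A small sketch that sums the `cost.total` fields from a batch of metadata dicts; the field names follow the example above and should be treated as assumptions about GateFlow's schema:

```python
def total_cost(responses_meta):
    """Sum the per-request cost reported in GateFlow metadata dicts,
    tolerating responses that report no cost field."""
    return sum((meta.get("cost") or {}).get("total", 0.0) for meta in responses_meta)

spend = total_cost([
    {"provider": "openai", "cost": {"total": 0.0021}},
    {"provider": "anthropic", "cost": {"total": 0.0034}},
    {"provider": "openai"},  # no cost reported for this one
])
```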

Error Handling

```python
from openai import OpenAI, APIError, RateLimitError

client = OpenAI(
    base_url="https://api.gateflow.ai/v1",
    api_key="gw_prod_..."
)

try:
    response = client.chat.completions.create(...)
except RateLimitError as e:
    print(f"Rate limited. Retry after: {e.response.headers.get('Retry-After')}")
except APIError as e:
    print(f"API error: {e.code} - {e.message}")
```
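For transient rate limits you usually want to retry rather than fail. A stdlib-only sketch of that pattern: honor a server-supplied retry-after hint when present, otherwise back off exponentially with jitter. The function and the `retry_after` attribute on the exception are our illustrative conventions, not SDK API:

```python
import random

def retry_with_backoff(call, max_retries=3, base_delay=1.0, sleep=lambda s: None):
    """Retry `call` on failure, sleeping between attempts. Uses a
    `retry_after` hint attached to the exception when available,
    otherwise exponential backoff with jitter. Re-raises after the
    final attempt."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception as exc:
            if attempt == max_retries:
                raise
            hint = getattr(exc, "retry_after", None)
            delay = hint if hint is not None else base_delay * (2 ** attempt) + random.random()
            sleep(delay)

# Demo: a call that fails twice with a zero-second hint, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        err = RuntimeError("rate limited")
        err.retry_after = 0.0
        raise err
    return "ok"

result = retry_with_backoff(flaky)
```

In real code, pass `sleep=time.sleep` and raise only on `RateLimitError` rather than every exception; the injectable `sleep` here just keeps the sketch testable.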

Migration from Direct OpenAI

Migrating existing code is simple:

```python
# Before (direct OpenAI)
from openai import OpenAI
client = OpenAI(api_key="sk-...")

# After (via GateFlow)
from openai import OpenAI
client = OpenAI(
    base_url="https://api.gateflow.ai/v1",
    api_key="gw_prod_..."
)

# All your existing code works unchanged!
```
