Quickstart
Get your first GateFlow API call working in under 5 minutes.
Prerequisites
- A GateFlow account (sign up here)
- An API key from at least one AI provider (OpenAI, Anthropic, etc.)
Step 1: Get Your API Key
- Log in to the GateFlow Dashboard
- Navigate to Settings → API Keys
- Click Create Key
- Choose a name and select `production` or `development` scope
- Copy the key (it starts with `gw_prod_` or `gw_dev_`)
Keep Your Key Safe
Your API key grants access to your GateFlow account. Never commit it to version control or expose it in client-side code.
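One way to keep the key out of your source code is to read it from an environment variable. The sketch below shows this pattern; the variable name `GATEFLOW_API_KEY` and the `load_gateflow_key` helper are our choices for illustration, not names GateFlow prescribes:

```python
import os

def load_gateflow_key() -> str:
    """Read the GateFlow API key from the environment instead of
    hard-coding it. GATEFLOW_API_KEY is an illustrative name."""
    key = os.environ.get("GATEFLOW_API_KEY")
    if not key:
        raise RuntimeError("GATEFLOW_API_KEY is not set")
    return key

# Pass load_gateflow_key() as api_key when constructing your client,
# so the key never appears in version control.
```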
Step 2: Configure a Provider
Before making requests, connect at least one AI provider:
- Go to Settings → Providers
- Click Add Provider
- Select OpenAI (or your preferred provider)
- Enter your provider API key
- Click Save
Step 3: Make Your First Request
Using Python (OpenAI SDK)
Install the OpenAI SDK if you haven't already:
```bash
pip install openai
```

Make a request through GateFlow:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.gateflow.ai/v1",
    api_key="gw_prod_your_key_here",
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "user", "content": "What is an AI gateway?"}
    ],
)

print(response.choices[0].message.content)
```

Using Sustain Mode
Enable carbon-optimized routing for sustainable AI:
```python
# Enable Sustain Mode for carbon-optimized routing
response = client.chat.completions.create(
    model="auto",  # Auto-select the most sustainable model
    routing_mode="sustain_optimized",
    minimum_quality_score=85,  # Maintain quality while optimizing for sustainability
    messages=[{"role": "user", "content": "What are the latest advances in green AI?"}],
)

print(response.choices[0].message.content)
print(f"Carbon footprint: {response.sustainability.carbon_gco2e} gCO₂e")
print(f"Carbon saved: {response.sustainability.carbon_saved_gco2e} gCO₂e")
```

Using TypeScript/Node.js
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.gateflow.ai/v1',
  apiKey: 'gw_prod_your_key_here',
});

const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [
    { role: 'user', content: 'What is an AI gateway?' }
  ],
});

console.log(response.choices[0].message.content);
```

Using cURL
```bash
curl https://api.gateflow.ai/v1/chat/completions \
  -H "Authorization: Bearer gw_prod_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "What is an AI gateway?"}]
  }'
```

Step 4: Verify in Dashboard
After your request completes, check the dashboard:
- Go to Analytics → Requests
- You should see your request with:
- Model used
- Token count
- Latency
- Cost
What Just Happened?
Your request flowed through GateFlow:
Your App → GateFlow → OpenAI → GateFlow → Your App

GateFlow:
- Authenticated your request
- Routed to OpenAI (based on your model choice)
- Logged the request for analytics
- Returned the response
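Concretely, each of those steps operates on an ordinary HTTP request. As a rough sketch mirroring the cURL example above (`build_gateflow_request` is a hypothetical helper, not part of any SDK), the payload GateFlow authenticates and routes looks like this:

```python
import json

def build_gateflow_request(api_key: str, model: str, prompt: str) -> dict:
    """Assemble the same HTTP request the SDKs send through GateFlow.
    Illustrative only — the SDKs do this for you."""
    return {
        "url": "https://api.gateflow.ai/v1/chat/completions",
        "headers": {
            # GateFlow authenticates the request with this header
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        # The model field drives GateFlow's routing decision
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_gateflow_request("gw_prod_your_key_here", "gpt-5.2", "What is an AI gateway?")
```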
Next Steps
Now that you've made your first request, explore:
- Add Fallback Models - Protect against provider outages
- Enable Caching - Reduce costs on repeated queries
- Try Different Models - Use Anthropic, Google, or Mistral
- Set Up Routing Rules - Route by task type
Next Steps - Sustainability Features
- Enable Sustain Mode - Automatic carbon-optimized routing
- Configure Time-Shifted Execution - Defer requests to low-carbon periods
- View Sustainability Dashboard - Track your carbon savings
- Try Cohere Models - Efficient chat and rerank capabilities
- Use ElevenLabs TTS - Low-carbon voice synthesis
Troubleshooting
"Invalid API Key" Error
Make sure your GateFlow API key:
- Starts with `gw_prod_` or `gw_dev_`
- Is copied completely (no extra spaces)
- Hasn't been revoked
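A quick local sanity check for the first two points might look like this (`check_gateflow_key` is a hypothetical helper; the prefixes come from Step 1):

```python
def check_gateflow_key(key: str) -> bool:
    """Return True if the key has a valid GateFlow prefix and no
    stray surrounding whitespace (e.g. from a sloppy copy-paste)."""
    return key == key.strip() and key.startswith(("gw_prod_", "gw_dev_"))
```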
"Provider Not Configured" Error
You need to add the provider in your dashboard:
- Go to Settings → Providers
- Add the provider for the model you're trying to use
- Enter valid provider credentials
Rate Limit Errors
GateFlow respects provider rate limits. If you hit limits:
- Add a fallback model from another provider
- Enable request queuing in your settings
- Upgrade your provider tier
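If you would rather handle rate limits client-side, a generic exponential-backoff retry wrapper is one option. This is a sketch, not a GateFlow feature; it uses `RuntimeError` as a stand-in for whatever rate-limit exception your SDK raises:

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=0.5):
    """Retry `call` with exponential backoff plus jitter.
    Substitute your SDK's rate-limit exception for RuntimeError."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts — surface the error
            # Wait 0.5s, 1s, 2s, ... plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```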
Need more help? Check the FAQ or contact support.