
GateFlow: Routing-Native Evaluation

Evaluate continuously, route intelligently, comply automatically. The first AI gateway where eval results drive every decision.

Why GateFlow?

Most evaluation tools are disconnected from production. You run evals in notebooks, see results in dashboards, but nothing changes automatically. GateFlow closes the loop.

GateFlow enables quality-driven AI infrastructure:

  • Gateway-Native Eval: Evaluation built into routing, not bolted on after
  • Production-Driven: Continuous sampling of live traffic, not just test sets
  • Curated Database: 10+ pre-built suites, not a blank canvas
  • Automatic Compliance: EU AI Act and ISO 42001 reports generated from eval history

Quick Example

Evals run automatically on production traffic:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.gateflow.ai/v1",
    api_key="gf-..."  # Your GateFlow API key
)

# Standard inference - evals sample automatically
response = client.chat.completions.create(
    model="auto",  # Routing informed by eval scores
    messages=[{"role": "user", "content": "Hello!"}]
)

# Or run explicit eval suites
from gateflow import EvalClient
eval_client = EvalClient(api_key="gf-...")

results = eval_client.run_suite(
    suite="safety-core",
    model="gpt-4o"
)
print(f"Safety score: {results.aggregate_score}%")
```

That's it. Your requests flow through GateFlow with automatic evaluation, quality-driven routing, and compliance reporting.
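The quality-driven routing idea can be sketched in plain Python. Everything below is an illustrative assumption, not GateFlow's internal policy: the model names, the scores, and the `MIN_QUALITY` floor are made up to show the shape of a closed loop where eval results gate which models receive traffic.

```python
# Hypothetical sketch of eval-score-driven routing.
# Model names, scores, and the threshold are illustrative assumptions.
eval_scores = {
    "gpt-4o": 0.94,
    "claude-3-5-sonnet": 0.96,
    "gpt-4o-mini": 0.88,
}
MIN_QUALITY = 0.90  # hypothetical floor a model must clear to stay in rotation

# Drop models whose latest eval score falls below the floor,
# then route the request to the highest-scoring survivor.
eligible = {model: score for model, score in eval_scores.items()
            if score >= MIN_QUALITY}
chosen = max(eligible, key=eligible.get)
print(chosen)  # claude-3-5-sonnet
```

The key design point is that the scores feeding this decision come from continuous production sampling rather than a one-off benchmark, so a model that regresses is routed around automatically.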

What's New

Latest Release - v3.0.0

Eval Platform Launch - Gateway-native evaluation with 10+ curated suites, tiered evaluators for 97% cost reduction, closed-loop routing, and EU AI Act compliance reporting. Read the changelog →
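A tiered-evaluator cost reduction of this magnitude is easy to see with back-of-the-envelope numbers. The per-sample prices and escalation rate below are assumptions chosen to illustrate the mechanism, not GateFlow's published figures: a cheap screening tier handles every sample, and only a small fraction escalates to an expensive LLM-as-judge tier.

```python
# Illustrative arithmetic only: these costs and rates are assumptions,
# not GateFlow's actual pricing.
JUDGE_COST = 0.010      # $ per sample with an LLM-as-judge evaluator
SCREEN_COST = 0.0001    # $ per sample with a lightweight screening tier
ESCALATION_RATE = 0.02  # fraction of samples escalated to the judge tier

# Every sample pays the screening cost; only escalated samples pay the judge.
tiered_cost = SCREEN_COST + ESCALATION_RATE * JUDGE_COST

reduction = 1 - tiered_cost / JUDGE_COST
print(f"Cost reduction vs. judging everything: {reduction:.0%}")
```

With these assumed numbers the tiered pipeline costs 3% of judging every sample, i.e. a 97% reduction; the real savings depend entirely on the escalation rate and the price gap between tiers.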

Explore the Docs

Built with reliability in mind.