# EU AI Act Compliance
GateFlow Eval helps you meet EU AI Act requirements for AI system evaluation and documentation.
## EU AI Act Overview

The EU AI Act requires organizations deploying AI systems to:

- **Document AI system behavior** - maintain records of system performance
- **Conduct regular testing** - evaluate accuracy, robustness, and safety
- **Monitor for drift** - detect changes in AI behavior over time
- **Enable human oversight** - provide mechanisms for intervention
- **Maintain audit trails** - keep records for regulatory inspection
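The sections below map each of these obligations to a GateFlow feature. As a quick orientation, the obligations can be tracked as a simple coverage map while you wire up each control; this is an illustrative sketch only (the `obligations` dict and `mark_covered` helper are ours, not part of the GateFlow SDK):

```python
# Illustrative only: track which EU AI Act obligations have a control in place.
obligations = {
    "document_behavior": False,   # records of system performance
    "regular_testing": False,     # accuracy, robustness, safety evals
    "drift_monitoring": False,    # detect behavioral change over time
    "human_oversight": False,     # intervention mechanisms
    "audit_trail": False,         # records for regulatory inspection
}

def mark_covered(name: str) -> None:
    """Flag an obligation as covered by a deployed control."""
    if name not in obligations:
        raise KeyError(f"unknown obligation: {name}")
    obligations[name] = True

# Suppose evals and drift monitoring are already configured:
mark_covered("regular_testing")
mark_covered("drift_monitoring")

outstanding = [k for k, v in obligations.items() if not v]
print(f"{len(outstanding)} obligations still uncovered")
# → 3 obligations still uncovered
```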
## Risk Classification
The EU AI Act classifies AI systems by risk level:
| Risk Level | Examples | Requirements |
|---|---|---|
| Unacceptable | Social scoring | Prohibited |
| High-Risk | Medical diagnosis, hiring | Full compliance |
| Limited | Chatbots | Transparency |
| Minimal | Spam filters | No requirements |
GateFlow Eval supports compliance workflows for high-risk and limited-risk systems.
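The tiers in the table above reduce to a simple lookup from risk level to headline requirement. The following sketch encodes that table; the `RISK_TIERS` mapping and `headline_requirement` function are illustrative, not part of the GateFlow SDK:

```python
# Risk tiers from the EU AI Act mapped to their headline requirement.
RISK_TIERS = {
    "unacceptable": "prohibited",
    "high": "full compliance",
    "limited": "transparency",
    "minimal": "no requirements",
}

def headline_requirement(tier: str) -> str:
    """Return the headline requirement for a risk tier (case-insensitive)."""
    try:
        return RISK_TIERS[tier.lower()]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}") from None

print(headline_requirement("High"))  # full compliance
```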
## Mapping GateFlow to the EU AI Act

### Article 9: Risk Management System

**Requirement:** Establish a continuous risk management system.

**GateFlow Solution:**
```python
# Configure continuous safety monitoring
client.configure_sampling(
    rate=0.05,
    suites=["safety-core", "safety-bias"],
    alert_threshold=95,
    alert_channels=["compliance-team"]
)

# Enable drift detection for risk monitoring
client.configure_drift_detection(
    enabled=True,
    suites=["safety-core"],
    actions={"on_drift": "alert_and_reduce_traffic"}
)
```

### Article 10: Data and Data Governance
**Requirement:** Training and testing datasets must be relevant, representative, and examined for biases.

**GateFlow Solution:**
```python
# Run bias detection suite
results = client.run_suite(
    suite="safety-bias",
    model="your-model",
    config={"report_demographics": True}
)

# Export for data governance documentation
export = client.export_results(
    run_id=results.id,
    format="eu_ai_act_article_10"
)
```

### Article 13: Transparency
**Requirement:** AI systems must be transparent to users.

**GateFlow Solution:**
```python
# Log all model decisions with reasoning
client.configure_logging(
    include_model_selection=True,
    include_routing_reason=True,
    include_confidence_scores=True
)

# Generate transparency report
report = client.generate_report(
    type="transparency",
    time_range="quarterly",
    include_model_decisions=True
)
```

### Article 14: Human Oversight
**Requirement:** Enable human operators to understand and override AI decisions.

**GateFlow Solution:**
```python
# Configure human-in-the-loop for critical thresholds
client.configure_routing_feedback(
    constraints={
        "safety_floor": 95,
        "require_human_approval_below": 90
    },
    human_oversight={
        "enabled": True,
        "approval_workflow": "slack-approval",
        "timeout_action": "block"
    }
)

# Manual override capability
client.set_routing_override(
    model="risky-model",
    traffic_share=0.0,
    reason="Human oversight decision"
)
```

### Article 15: Accuracy, Robustness, and Cybersecurity
**Requirement:** AI systems must be accurate, robust, and secure.

**GateFlow Solution:**
```python
# Comprehensive accuracy testing
results = client.run_suites(
    suites=["quality-general", "quality-reasoning"],
    model="your-model"
)

# Robustness testing with adversarial inputs
results = client.run_suite(
    suite="safety-jailbreak",
    model="your-model"
)

# Security audit export
audit = client.export_security_audit(
    time_range="annual",
    include_penetration_results=True
)
```

## Generating Compliance Reports
### Quick Report
```python
report = client.generate_eu_ai_act_report(
    model="your-model",
    time_range="quarterly"
)

print(report.summary)
# EU AI Act Compliance Report
# Period: Q1 2024
# Model: your-model
#
# Article 9 (Risk Management): ✓ Compliant
# Article 10 (Data Governance): ✓ Compliant
# Article 13 (Transparency): ✓ Compliant
# Article 14 (Human Oversight): ✓ Compliant
# Article 15 (Accuracy): ✓ Compliant
#
# Overall Status: COMPLIANT

# Download full report
report.download_pdf("/path/to/report.pdf")
```

### Detailed Report
```python
report = client.generate_eu_ai_act_report(
    model="your-model",
    time_range="quarterly",
    detailed=True,
    include_sections=[
        "risk_assessment",
        "evaluation_history",
        "bias_analysis",
        "drift_events",
        "human_oversight_log",
        "incident_response"
    ]
)
```

### Scheduled Reports
```python
# Automatically generate compliance reports
client.schedule_report(
    type="eu_ai_act",
    frequency="monthly",
    models=["model-a", "model-b"],
    recipients=["compliance@company.com"],
    format="pdf"
)
```

## Audit Trail
### What's Logged
- All evaluation runs and results
- Routing decisions and reasons
- Drift detection events
- Human oversight actions
- Model changes and deployments
- Alert events and responses
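Before an inspection, it can be useful to summarize exported audit records offline, for example to show where oversight activity concentrates. The sketch below is illustrative: the `event_type` field mirrors the event categories listed above, but the records themselves are made up, not real GateFlow output:

```python
from collections import Counter

# Hypothetical exported audit records; only the fields needed here are shown.
records = [
    {"timestamp": "2024-01-03T10:00:00Z", "event_type": "eval_run"},
    {"timestamp": "2024-01-05T14:30:00Z", "event_type": "routing_change"},
    {"timestamp": "2024-02-11T09:15:00Z", "event_type": "eval_run"},
    {"timestamp": "2024-03-20T16:45:00Z", "event_type": "human_override"},
]

# Count events per type, most frequent first
counts = Counter(r["event_type"] for r in records)
for event_type, n in counts.most_common():
    print(f"{event_type}: {n}")
```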
### Accessing Audit Logs
```python
# Query audit trail
logs = client.query_audit_trail(
    time_range="2024-01-01:2024-03-31",
    event_types=["eval_run", "routing_change", "human_override"],
    model="your-model"
)

for log in logs:
    print(f"{log.timestamp}: {log.event_type}")
    print(f"  Actor: {log.actor}")
    print(f"  Details: {log.details}")

# Export for regulators
export = client.export_audit_trail(
    time_range="annual",
    format="json",
    signed=True  # Cryptographically signed
)
```

## Evidence Collection
### Pre-Audit Checklist
```python
# Generate compliance evidence package
evidence = client.generate_evidence_package(
    model="your-model",
    regulation="eu_ai_act",
    time_range="annual"
)

print(evidence.contents)
# - evaluation_summary.pdf
# - bias_analysis_report.pdf
# - drift_monitoring_log.csv
# - human_oversight_decisions.csv
# - incident_response_log.csv
# - model_changelog.json
# - audit_trail.json (signed)

# Download package
evidence.download_zip("/path/to/evidence.zip")
```

## High-Risk System Checklist
For high-risk AI systems, ensure:
- [ ] Risk management system documented
- [ ] Training data governance documented
- [ ] Bias testing completed and documented
- [ ] Accuracy benchmarks established
- [ ] Robustness testing completed
- [ ] Human oversight mechanisms in place
- [ ] Transparency documentation complete
- [ ] Audit trail enabled and retained
- [ ] Incident response procedures documented
- [ ] Regular evaluation schedule established
```python
# Check compliance status
status = client.check_eu_ai_act_compliance(model="your-model")

for requirement, compliant in status.items():
    icon = "✓" if compliant else "✗"
    print(f"{icon} {requirement}")
```

## Next Steps
- **ISO 42001** - AI management system standard
- **Report Generation** - Automated compliance reports
- **Drift Detection** - Continuous monitoring