One specification, multiple runtimes.
Unified AI model interaction across Protocol, Rust, and Python.
Dual-licensed MIT/Apache-2.0 · Protocol-driven design · Vendor neutral
All logic lives in operators; all configuration is protocol. Declarative configuration decouples providers from application code.
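To illustrate the idea, here is a minimal sketch of provider selection driven purely by declarative configuration. The config schema, provider names, and adapter names are hypothetical, not the actual AI-Protocol specification:

```python
import json

# Hypothetical declarative config (illustrative only; the real
# AI-Protocol schema may differ).
CONFIG = json.loads("""
{
  "provider": "openai-compatible",
  "model": "example-model",
  "base_url": "https://api.example.com/v1"
}
""")

def build_client(config: dict) -> dict:
    """Resolve a provider adapter from configuration alone.

    Swapping providers means editing the config, not the code.
    """
    adapters = {
        "openai-compatible": "OpenAICompatibleAdapter",  # placeholder names
        "anthropic": "AnthropicAdapter",
    }
    return {
        "adapter": adapters[config["provider"]],
        "model": config["model"],
        "base_url": config["base_url"],
    }

client = build_client(CONFIG)
```

Changing `"provider"` in the JSON is all it takes to route traffic elsewhere; application code never names a vendor directly.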
Choose Rust for performance or Python for flexibility. Both share the same unified specification.
30+ AI providers, including global and China-region services, accessible through a unified interface.
Built-in retries, rate limiting, circuit breaking, and other enterprise-grade reliability features.
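As a sketch of what such a reliability layer does, here is a stdlib-only retry helper with exponential backoff and full jitter. This is an assumption-laden illustration of the concept, not the SDK's actual API:

```python
import random
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=0.1, max_delay=2.0):
    """Retry fn on exceptions, sleeping with exponential backoff + jitter.

    Simplified sketch of a retry policy; the real SDK may expose
    this declaratively in configuration instead.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))  # full jitter

# Demo: a flaky call that succeeds on the third try.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = retry_with_backoff(flaky)
```

Circuit breaking layers on top of the same shape: after N consecutive failures the breaker opens and calls fail fast instead of retrying.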
Rust compile-time type checks plus Pydantic v2 runtime validation in Python.
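The following stdlib-only sketch shows what runtime validation buys you; the real Python SDK uses Pydantic v2 for this, and the `ChatRequest` model here is hypothetical:

```python
from dataclasses import dataclass, fields

@dataclass
class ChatRequest:
    """Validated request object (stdlib stand-in for a Pydantic model)."""
    model: str
    prompt: str
    temperature: float = 1.0

    def __post_init__(self):
        # Check each field's value against its annotated type at runtime.
        for f in fields(self):
            if not isinstance(getattr(self, f.name), f.type):
                raise TypeError(f"{f.name} must be {f.type.__name__}")
        # Range check beyond what static typing can express.
        if not 0.0 <= self.temperature <= 2.0:
            raise ValueError("temperature must be in [0, 2]")

ok = ChatRequest(model="example-model", prompt="hi", temperature=0.5)
```

Rust gets the equivalent guarantees for free at compile time; Python enforces them at object-construction time.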
OpenTelemetry integration with comprehensive metrics and tracing.
Specification for unified AI model interaction with declarative configuration and operator-based processing.
High-performance Rust implementation with 14 architectural layers, type safety, and <1ms overhead.
Official Python runtime with 95% feature parity, Pydantic v2 type safety, and async support.
Choose the right runtime
| Feature | AI-Protocol | Rust SDK | Python SDK |
|---|---|---|---|
| Type System | YAML/JSON Schema | Compile-time check | Runtime type check |
| Performance | N/A | <1 ms overhead | ~10–50 ms overhead |
| Ecosystem | 30+ Providers | Crates.io | PyPI |
| Ideal For | Protocol implementers | Systems programming | ML/Data Science |
28 AI providers, global coverage, unified interface
Leading international AI providers
Leading China-region AI providers
Self-hosted and custom models
The following metrics cover SDK-layer overhead only (remote model latency excluded). See the repository README for the full methodology, and always benchmark with your own workload.
Note: real-world throughput is constrained by provider rate limits and network conditions. Figures are indicative, not guarantees.
Layered design: App → High-level API → Unified abstraction → Provider adapters → Transport (HTTP/stream + reliability) → Common types.
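The layering above can be sketched in a few lines: the application talks to a high-level API, which delegates to a unified abstraction that any provider adapter implements. Class names here are illustrative, not the SDK's real types:

```python
from abc import ABC, abstractmethod

class ProviderAdapter(ABC):
    """Unified abstraction: every provider implements one interface."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoAdapter(ProviderAdapter):
    """Stand-in for a real provider adapter (transport layer stubbed out)."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class Client:
    """High-level API: the app layer talks only to this class."""
    def __init__(self, adapter: ProviderAdapter):
        self.adapter = adapter

    def chat(self, prompt: str) -> str:
        # Delegate through the unified abstraction; the app never
        # depends on a concrete provider.
        return self.adapter.complete(prompt)

reply = Client(EchoAdapter()).chat("hello")
```

Because the app depends only on `Client` and the adapter interface, swapping providers touches a single constructor argument (or, in practice, the declarative config).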