One Protocol,
Every AI Provider.
AI-Lib is an open-source ecosystem that standardizes AI model interactions. V2 protocol with MCP, Computer Use, and multimodal capabilities — 37 providers, two runtimes, zero hardcoded logic.
Three Projects, One Ecosystem
A specification layer defines the rules. Two runtime implementations bring them to life.
AI-Protocol
The provider-agnostic specification. V2 three-ring manifests with MCP, Computer Use, and multimodal schemas — no hardcoded logic, ever.
- 37 providers (6 V2 + 36 V1 manifests)
- V2 three-layer pyramid architecture
- MCP / Computer Use / Multimodal schemas
- ProviderContract specification
- CLI tool for validation & inspection
- JSON Schema validated
- Hot-reloadable configuration
ai-lib-rust
High-performance Rust runtime. ProviderDriver abstraction, MCP tool bridge, Computer Use safety, extended multimodal, and operator-based streaming pipeline.
- ProviderDriver (OpenAI/Anthropic/Gemini)
- MCP tool bridge with namespace isolation
- Computer Use + SafetyPolicy
- Extended multimodal validation
- Capability Registry (feature-gated)
- Operator-based streaming pipeline
- 185+ tests, published on Crates.io
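The namespace isolation behind the MCP tool bridge is easy to picture: tools from each MCP server are registered under a server-specific prefix, so identically named tools from different servers never collide. A minimal conceptual sketch in Python (the `ToolRegistry` class and its methods are illustrative, not the runtime's actual API, which in Rust lives behind `McpToolBridge`):

```python
class ToolRegistry:
    """Illustrative registry: tools from each MCP server live under
    a server-specific namespace so identical names never collide."""

    def __init__(self):
        self._tools = {}

    def register_server(self, namespace, tools):
        # Qualify every tool name with its server's namespace.
        for name, fn in tools.items():
            self._tools[f"{namespace}.{name}"] = fn

    def call(self, qualified_name, *args):
        return self._tools[qualified_name](*args)

registry = ToolRegistry()
registry.register_server("filesystem", {"read": lambda p: f"contents of {p}"})
registry.register_server("web", {"read": lambda u: f"fetched {u}"})

# Same tool name, two servers, no collision:
fs_result = registry.call("filesystem.read", "/tmp/a.txt")
web_result = registry.call("web.read", "https://example.com")
```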
ai-lib-python
Developer-friendly Python runtime. ProviderDriver abstraction, MCP tool bridge, Computer Use safety, extended multimodal, and full async support.
- ProviderDriver (OpenAI/Anthropic/Gemini)
- MCP tool bridge with namespace isolation
- Computer Use + SafetyPolicy
- Extended multimodal validation
- Capability Registry (pip extras)
- Pydantic v2 + full async/await
- 75+ V2 tests, published on PyPI
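The driver abstraction shared by both runtimes can be pictured as an abstract base class: every provider implements one small interface, and provider-specific behavior comes from its manifest rather than from code. A hedged Python sketch (method and field names here are illustrative assumptions, not the published `ProviderDriver` API):

```python
from abc import ABC, abstractmethod

class ProviderDriver(ABC):
    """Illustrative driver contract: one interface, many providers.
    A real driver would take its endpoint and mappings from a manifest."""

    def __init__(self, manifest: dict):
        self.manifest = manifest

    @abstractmethod
    def build_request(self, messages: list) -> dict:
        """Map unified messages into the provider's wire format."""

class OpenAIStyleDriver(ProviderDriver):
    def build_request(self, messages):
        # Everything provider-specific is read from the manifest.
        return {
            "url": self.manifest["endpoint"],
            "body": {"model": self.manifest["model"], "messages": messages},
        }

driver = OpenAIStyleDriver(
    {"endpoint": "https://api.example.com/v1/chat", "model": "demo"}
)
req = driver.build_request([{"role": "user", "content": "hi"}])
```

Swapping providers then means swapping manifests, not rewriting call sites.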
Protocol-Driven Design
"All logic is operators, all configuration is protocol." Every provider behavior is declared in YAML — runtimes contain zero hardcoded provider logic.
Declarative Configuration
Provider endpoints, auth, parameter mappings, streaming decoders, and error handling — all declared in YAML manifests, validated by JSON Schema.
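A manifest in this style might look roughly like the following. The field names below are illustrative only; the actual shape is defined by AI-Protocol's JSON Schemas:

```yaml
# Illustrative manifest fragment, not the actual AI-Protocol schema
provider: example-ai
endpoint: https://api.example-ai.com/v1/chat
auth:
  type: bearer
  env: EXAMPLE_AI_API_KEY
parameters:
  temperature: { map_to: temperature, max: 2.0 }
streaming:
  decoder: sse
errors:
  429: { retry: true, backoff: exponential }
```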
Operator-Based Pipeline
Streaming responses flow through composable operators: Decoder, Selector, Accumulator, FanOut, EventMapper. Each operator is protocol-driven.
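The operator idea can be sketched with plain Python generators: each operator is a small transformation over a stream of events, and a pipeline is just their composition. The operator names below mirror the ones above, but the code is a conceptual sketch, not the runtimes' implementation:

```python
import json

def decoder(lines):
    """Decoder: parse raw NDJSON lines into event dicts."""
    for line in lines:
        yield json.loads(line)

def selector(events, key):
    """Selector: extract one field from each event."""
    for event in events:
        yield event[key]

def accumulator(chunks):
    """Accumulator: fold streamed chunks into a running string."""
    text = ""
    for chunk in chunks:
        text += chunk
        yield text

raw = ['{"delta": "Hel"}', '{"delta": "lo"}']
pipeline = accumulator(selector(decoder(raw), "delta"))
print(list(pipeline))  # → ['Hel', 'Hello']
```

Because each stage only consumes and yields events, operators can be rearranged or swapped per provider, which is what makes a protocol-driven pipeline possible.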
Hot-Reload Ready
Update provider configurations without restarting. Protocol changes propagate automatically to runtimes. Add new providers through configuration, not code.
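One simple way to approximate hot-reload is a modification-time check: re-parse the manifest only when the file on disk changes, otherwise serve the cached copy. A minimal stdlib-only sketch (not the runtimes' actual reload mechanism; a real JSON Schema validation step is omitted):

```python
import json
import os

class HotConfig:
    """Illustrative hot-reload: re-read the manifest whenever its
    modification time changes; otherwise serve the cached copy."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._config = None

    def get(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:
            with open(self.path) as f:
                self._config = json.load(f)
            self._mtime = mtime
        return self._config

# Usage: write a config, read it, update the file, read the new value.
with open("manifest.json", "w") as f:
    json.dump({"model": "v1"}, f)
os.utime("manifest.json", (1, 1))  # pin mtime so the next write differs
cfg = HotConfig("manifest.json")
first = cfg.get()["model"]   # "v1"
with open("manifest.json", "w") as f:
    json.dump({"model": "v2"}, f)
second = cfg.get()["model"]  # "v2", picked up without a restart
```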
Ecosystem Architecture
Three layers working together — specification defines the rules, runtimes execute them, applications consume unified interfaces.
How It Works
From user request to unified streaming events — every step is protocol-driven.
Choose Your Runtime
Same protocol, different strengths. Pick the runtime that fits your stack.
| Capability | AI-Protocol | Rust SDK | Python SDK |
|---|---|---|---|
| Type System | JSON Schema | Compile-time (Rust) | Runtime (Pydantic v2) |
| Streaming | SSE/NDJSON spec | tokio async streams | async generators |
| Resilience | Retry policy spec | Circuit breaker, rate limiter, backpressure | ResilientExecutor with all patterns |
| V2 Driver | ProviderContract spec | `Box<dyn ProviderDriver>` | `ProviderDriver` ABC |
| MCP | mcp.json schema | McpToolBridge | McpToolBridge |
| Computer Use | computer-use.json schema | ComputerAction + SafetyPolicy | ComputerAction + SafetyPolicy |
| Multimodal | multimodal.json schema | MultimodalCapabilities | MultimodalCapabilities |
| Embeddings | — | Vector operations, similarity | Vector operations, similarity |
| Distribution | GitHub / npm | Crates.io | PyPI |
| Best For | Specification & standards | Systems, performance-critical | ML, data science, prototyping |
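The resilience row above names several patterns; the simplest, retry with exponential backoff, can be sketched in a few lines. This is a conceptual illustration, not either SDK's `ResilientExecutor` (the function name and parameters are assumptions):

```python
import time

def with_retry(fn, attempts=3, base_delay=0.01):
    """Retry `fn` on exception, doubling the delay between attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    """Simulated transient failure: errors twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retry(flaky)  # succeeds on the third attempt
```

Circuit breakers and rate limiters layer on the same idea: wrap the call, track outcomes, and decide whether the next attempt is allowed.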
37 AI Providers Supported
All driven by protocol configuration — zero hardcoded logic for any provider. Six V2 manifests carry MCP, Computer Use, and multimodal declarations.
Global Providers
China Region
Ready to get started?
Read the documentation, pick your runtime, and start building with 37 AI providers today.