One Protocol,
Every AI Provider.
AI-Lib is an open-source ecosystem that standardizes AI model interactions: a V2 protocol with MCP, Computer Use, and multimodal capabilities, 37 providers, three runtimes, and zero hardcoded provider logic.
Five Projects, One Ecosystem
A specification layer defines the rules. Three runtime implementations (Rust, Python, TypeScript) bring them to life.
AI-Protocol
The provider-agnostic specification. V2 three-ring manifests with MCP, Computer Use, and multimodal schemas; no hardcoded logic, ever.
- 37 provider manifests (6 of them in V2 format)
- V2 three-layer pyramid architecture
- STT / TTS / Rerank in manifests (Jina, OpenAI, Cohere)
- MCP / Computer Use / Multimodal schemas
- ProviderContract + pricing schema
- CLI tool for validation & inspection
- JSON Schema validated
- Hot-reloadable configuration
ai-lib-rust
High-performance Rust runtime. ProviderDriver abstraction, MCP tool bridge, Computer Use safety, extended multimodal, and operator-based streaming pipeline.
- ProviderDriver (OpenAI/Anthropic/Gemini)
- MCP tool bridge with namespace isolation
- Computer Use + SafetyPolicy
- Extended multimodal validation
- Capability Registry (feature-gated)
- Operator-based streaming pipeline
- 185+ tests, published on Crates.io
ai-lib-python
Developer-friendly Python runtime. ProviderDriver abstraction, MCP tool bridge, Computer Use safety, extended multimodal, STT/TTS/Rerank extras.
- ProviderDriver (OpenAI/Anthropic/Gemini)
- MCP tool bridge with namespace isolation
- Computer Use + SafetyPolicy
- STT / TTS / Rerank extras
- Capability Registry (pip extras)
- Pydantic v2 + full async/await
- 75+ V2 tests, published on PyPI
ai-lib-ts
TypeScript/Node.js runtime for the npm ecosystem. Protocol-driven, streaming-first, with Resilience, Routing, MCP, and Multimodal support.
- V2 manifest parsing + standard error codes
- Resilience (Retry, CircuitBreaker, RateLimiter, Backpressure)
- ModelManager + CostBasedSelector + FallbackChain
- SttClient, TtsClient, RerankerClient
- McpToolBridge, EmbeddingClient, Plugins
- BatchExecutor + PreflightChecker
- Native fetch, published on npm
Showcase Projects
Reference applications built on the AI-Lib ecosystem; see the protocol and runtimes in action.
AI Debate
Multi-model AI debate arena. Pro vs Con across four rounds, then a Judge delivers the verdict. Built on ai-lib-rust and ai-protocol.
- 4-round debate flow (Opening → Rebuttal → Defense → Closing → Judgement)
- Web search tool calling via Tavily (optional)
- Multi-provider: DeepSeek, Zhipu, Groq, Mistral, OpenAI, Anthropic
- Auto fallback, real-time SSE streaming
- Axum + SQLite, modern dark UI
ZeroSpider
Protocol-driven autonomous AI agent runtime. Intelligent model selection, multi-model negotiation, and hardware integration.
- Protocol-driven providers via ai-lib-rust + ai-protocol
- Smart routing: cost, speed, quality, reliability
- Multi-model negotiation and parallel task execution
- Channels: Telegram, Discord, Matrix
- Remote deployment, hardware (GPIO, STM32)
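To make the smart-routing idea concrete, here is a minimal sketch of weighted scoring across cost, speed, quality, and reliability. The metric names, weights, and scoring scheme are illustrative assumptions, not ZeroSpider's actual routing algorithm:

```python
# Hypothetical weighted smart-routing sketch -- not ZeroSpider's real algorithm.
def score(model: dict, weights: dict) -> float:
    # Lower cost is better, so invert it; other metrics are already 0..1.
    return (weights["cost"] * (1.0 - model["cost"])
            + weights["speed"] * model["speed"]
            + weights["quality"] * model["quality"]
            + weights["reliability"] * model["reliability"])

models = [
    {"name": "fast-small", "cost": 0.1, "speed": 0.9, "quality": 0.6, "reliability": 0.9},
    {"name": "big-flagship", "cost": 0.8, "speed": 0.4, "quality": 0.95, "reliability": 0.95},
]
weights = {"cost": 0.4, "speed": 0.3, "quality": 0.2, "reliability": 0.1}

# Pick the model with the highest weighted score for this request profile.
best = max(models, key=lambda m: score(m, weights))
```

With cost weighted heavily, the cheap fast model wins; shifting weight toward quality would flip the choice.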
Protocol-Driven Design
"All logic is operators, all configuration is protocol." Every provider behavior is declared in YAML; runtimes contain zero hardcoded provider logic.
Declarative Configuration
Provider endpoints, auth, parameter mappings, streaming decoders, and error handling: all declared in YAML manifests, validated by JSON Schema.
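As a rough illustration, a manifest in this style might look like the sketch below. Every field name here is an assumption chosen for readability, not the actual ai-protocol schema:

```yaml
# Hypothetical provider manifest sketch -- field names are illustrative,
# not the real ai-protocol schema.
provider: example-openai
base_url: https://api.openai.com/v1
auth:
  type: bearer
  env_var: OPENAI_API_KEY
parameters:
  temperature: { map_to: temperature, max: 2.0 }
  max_tokens: { map_to: max_completion_tokens }
streaming:
  decoder: sse
  done_signal: "[DONE]"
errors:
  "429": rate_limited
  "401": auth_failed
```

The point is that everything a runtime needs, from endpoint to error mapping, lives in data rather than code.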
Operator-Based Pipeline
Streaming responses flow through composable operators: Decoder, Selector, Accumulator, FanOut, EventMapper. Each operator is protocol-driven.
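The operator idea can be sketched as a chain of generators, each consuming and producing events. The operator names follow the list above, but the event shapes and function signatures are assumptions for illustration, not the real ai-lib API:

```python
from typing import Iterable, Iterator

# Illustrative operator-pipeline sketch -- event shapes are assumptions,
# not the actual ai-lib streaming API.

def decoder(lines: Iterable[str]) -> Iterator[dict]:
    """Decode raw SSE-style lines into delta events."""
    for line in lines:
        if line.startswith("data: "):
            yield {"type": "delta", "text": line[len("data: "):]}

def selector(events: Iterable[dict]) -> Iterator[dict]:
    """Keep content deltas; drop terminator sentinels."""
    for ev in events:
        if ev.get("type") == "delta" and ev.get("text") != "[DONE]":
            yield ev

def accumulator(events: Iterable[dict]) -> Iterator[dict]:
    """Fold deltas into a growing partial transcript."""
    text = ""
    for ev in events:
        text += ev["text"]
        yield {"type": "partial", "text": text}

raw = ["data: Hel", ": keep-alive", "data: lo", "data: [DONE]"]
pipeline = accumulator(selector(decoder(raw)))
result = list(pipeline)[-1]["text"]
```

Because each stage is a plain iterator transform, operators compose freely and can be swapped per provider based on what the manifest declares.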
Hot-Reload Ready
Update provider configurations without restarting. Protocol changes propagate automatically to runtimes. Add new providers through configuration, not code.
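One simple way to realize hot-reload is to re-read a manifest whenever its file changes. The mtime-polling approach below is an assumption for illustration, not necessarily the mechanism the runtimes use:

```python
import json
import os
import tempfile

# Hypothetical hot-reload sketch: mtime-based polling of a manifest file.
class ManifestStore:
    def __init__(self, path: str):
        self.path = path
        self.mtime = 0.0
        self.manifest: dict = {}

    def get(self) -> dict:
        mtime = os.path.getmtime(self.path)
        if mtime != self.mtime:          # file changed: reload without restart
            with open(self.path) as f:
                self.manifest = json.load(f)
            self.mtime = mtime
        return self.manifest

# Write an initial manifest to a temp file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"provider": "openai"}, f)
    path = f.name

store = ManifestStore(path)
first = store.get()["provider"]

# Simulate an operator editing the manifest in place.
with open(path, "w") as f:
    json.dump({"provider": "anthropic"}, f)
os.utime(path, (0, 1))                    # force a distinct mtime
second = store.get()["provider"]
os.unlink(path)
```

The caller always goes through `get()`, so configuration edits take effect on the next request with no restart.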
Ecosystem Architecture
Three layers working together: the specification defines the rules, the runtimes execute them, and applications consume unified interfaces.
How It Works
From user request to unified streaming events, every step is protocol-driven.
Choose Your Runtime
Same protocol, different strengths. Pick the runtime that fits your stack.
| Capability | AI-Protocol | Rust SDK | Python SDK | TypeScript SDK |
|---|---|---|---|---|
| Type System | JSON Schema | Compile-time (Rust) | Runtime (Pydantic v2) | Compile-time (TypeScript) |
| Streaming | SSE/NDJSON spec | tokio async streams | async generators | AsyncIterator + fetch |
| Resilience | Retry policy spec | Circuit breaker, rate limiter, backpressure | ResilientExecutor with all patterns | RetryPolicy, CircuitBreaker, RateLimiter |
| V2 Driver | ProviderContract spec | Box<dyn ProviderDriver> | ProviderDriver ABC | ManifestV2 + HttpTransport |
| MCP | mcp.json schema | McpToolBridge | McpToolBridge | McpToolBridge |
| Computer Use | computer-use.json schema | ComputerAction + SafetyPolicy | ComputerAction + SafetyPolicy | Not supported |
| Multimodal | multimodal.json schema | MultimodalCapabilities | MultimodalCapabilities | SttClient, TtsClient, RerankerClient |
| Embeddings | Not specified | Vector operations, similarity | Vector operations, similarity | EmbeddingClient |
| Distribution | GitHub / npm | Crates.io | PyPI | npm |
| Best For | Specification & standards | Systems, performance-critical | ML, data science, prototyping | Node.js, npm ecosystem, full-stack |
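The resilience row above can be illustrated with a minimal retry-with-exponential-backoff sketch. The attempt count, delays, and function names are assumptions for illustration, not the defaults of any ai-lib runtime:

```python
import time

# Hypothetical retry sketch -- parameters are illustrative, not ai-lib defaults.
def with_retry(fn, attempts: int = 3, base_delay: float = 0.01):
    """Call fn, retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                     # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

calls = {"n": 0}

def flaky():
    """Fails twice, then succeeds -- stands in for a transient provider error."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retry(flaky)
```

A circuit breaker or rate limiter would wrap the same call site; the runtimes layer these patterns behind one executor so application code stays a single call.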
37 AI Providers Supported
All driven by protocol configuration, with zero hardcoded logic for any provider. Six ship V2 manifests with MCP, Computer Use, and Multimodal declarations.
Global Providers
China Region
Ready to get started?
Read the documentation, pick your runtime (Rust, Python, or TypeScript), and start building with 37 AI providers today.