AI-Lib
Open Source Ecosystem

One Protocol,
Every AI Provider.

AI-Lib is an open-source ecosystem that standardizes AI model interactions. One specification drives Rust and Python runtimes — supporting 30+ providers without a single line of hardcoded logic.

30+ AI Providers · 3 Projects · 0 Hardcoded Logic · 2 Runtimes

Protocol-Driven Design

"All logic is operators, all configuration is protocol." Every provider behavior is declared in YAML — runtimes contain zero hardcoded provider logic.

Declarative Configuration

Provider endpoints, auth, parameter mappings, streaming decoders, and error handling — all declared in YAML manifests, validated by JSON Schema.
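As a rough illustration of that workflow, the sketch below loads one provider manifest and validates it against a JSON Schema using PyYAML and the jsonschema package. The file names are assumptions, not the actual AI-Protocol repository layout.

```python
# Illustrative only: the manifest path and schema file name below are
# assumptions, not the actual AI-Protocol repository layout.
import json

import yaml                      # PyYAML
from jsonschema import validate  # jsonschema

# Load one provider manifest from the protocol repository.
with open("providers/openai.yaml") as f:
    manifest = yaml.safe_load(f)

# Load the protocol's JSON Schema and validate the manifest against it
# before a runtime consumes it.
with open("schemas/provider.schema.json") as f:
    schema = json.load(f)

validate(instance=manifest, schema=schema)
print("valid manifest for provider:", manifest.get("name", "<unknown>"))
```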

Operator-Based Pipeline

Streaming responses flow through composable operators: Decoder, Selector, Accumulator, FanOut, EventMapper. Each operator is protocol-driven.
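The sketch below is a deliberately simplified, hypothetical take on that idea: small generator operators composed into a stream-processing chain. It is not the actual ai-lib pipeline, and the frame shape and selector path are illustrative.

```python
# Hypothetical sketch of the operator idea, not the actual ai-lib pipeline.
# Each operator is a small generator; the pipeline is their composition.
import json

def decoder(byte_stream):
    """Decode SSE lines ('data: {...}') into JSON frames."""
    for line in byte_stream:
        line = line.strip()
        if line.startswith(b"data: ") and line != b"data: [DONE]":
            yield json.loads(line[len(b"data: "):])

def selector(frames, path=("choices", 0, "delta", "content")):
    """Select a value from each frame using a declared path."""
    for frame in frames:
        value = frame
        for key in path:
            value = value[key]
        if value:
            yield value

def event_mapper(deltas):
    """Map raw provider deltas onto unified streaming events."""
    for delta in deltas:
        yield {"type": "PartialContentDelta", "content": delta}

# Compose the operators; each stage only knows the shape of its input.
raw = [
    b'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    b'data: {"choices":[{"delta":{"content":"lo"}}]}',
    b"data: [DONE]",
]
for event in event_mapper(selector(decoder(raw))):
    print(event)
```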

Hot-Reload Ready

Update provider configurations without restarting. Protocol changes propagate automatically to runtimes. Add new providers through configuration, not code.
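One way such hot reloading can work is sketched below: a watcher polls a manifest file's modification time and re-parses it when it changes. The path and the polling approach are assumptions; the real runtimes may use a different mechanism.

```python
# Hypothetical hot-reload sketch: poll a manifest's modification time and
# re-parse it when it changes. Path and mechanism are assumptions.
import os
import time

import yaml  # PyYAML

MANIFEST = "providers/openai.yaml"  # assumed path

def watch(path, interval=2.0):
    """Yield a freshly parsed configuration whenever the file changes."""
    last_mtime = None
    while True:
        mtime = os.path.getmtime(path)
        if last_mtime is None or mtime > last_mtime:
            with open(path) as f:
                yield yaml.safe_load(f)
            last_mtime = mtime
        time.sleep(interval)

for config in watch(MANIFEST):
    # In a real runtime this is where the new configuration would be
    # swapped in without a restart.
    print("provider configuration (re)loaded:", config.get("name", "<unknown>"))
```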

Ecosystem Architecture

Three layers working together — specification defines the rules, runtimes execute them, applications consume unified interfaces.

AI-Lib Ecosystem Architecture (diagram)

Application layer: your application code, including web apps / API services, AI agents (multi-turn / tool calling), and CLI tools (batch / data pipelines).

Runtime layer: ai-lib-rust v0.6.6 (AiClient, Pipeline, Transport, Resilience, Embeddings, Cache / Batch; Crates.io · tokio + reqwest · <1ms overhead) and ai-lib-python v0.5.0 (AiClient, Pipeline, Transport, Resilience, Telemetry, Routing; PyPI · httpx + Pydantic v2 · async/await). Runtimes load manifests from the protocol layer.

Protocol layer: AI-Protocol v1.5, comprising spec.yaml (core specification), providers/*.yaml (30+ provider manifests), models/*.yaml (model registry), and schemas/ (JSON Schema). YAML definitions → JSON compilation → runtime consumption · vendor neutral.

How It Works

From user request to unified streaming events — every step is protocol-driven.

Request-Response Data Flow (diagram)

Request flow: the user calls chat() → AiClient builds a UnifiedRequest → the protocol's compile_request() maps it to the provider's format → Transport issues an HTTP POST → the AI provider (OpenAI, etc.).

Response flow (SSE/JSON): byte stream (SSE / NDJSON) → Decoder produces JSON frames → Pipeline (Select → Accumulate → Map) → EventMapper emits StreamingEvents → the application consumes unified events.

All parameter mapping and event normalization driven by protocol manifests — zero hardcoded logic. Unified events: PartialContentDelta · ToolCallStarted · PartialToolCall · StreamEnd.
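As a rough sketch of consuming that flow from the Python runtime, the example below streams unified events from a chat call. The module name, AiClient constructor arguments, chat_stream method, and event attribute access are assumptions for illustration, not the published ai-lib-python API.

```python
# Illustrative only: the module name, constructor arguments, streaming
# method, and event attributes are assumed for this sketch; consult the
# ai-lib-python documentation for the real API.
import asyncio

from ai_lib import AiClient  # module name assumed

async def main():
    client = AiClient(provider="openai", model="gpt-4o")  # assumed signature
    async for event in client.chat_stream("Explain SSE in one sentence."):
        if event.type == "PartialContentDelta":
            print(event.content, end="", flush=True)
        elif event.type == "StreamEnd":
            print()  # the unified stream ends with a StreamEnd event

asyncio.run(main())
```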

Choose Your Runtime

Same protocol, different strengths. Pick the runtime that fits your stack.

Capability | AI-Protocol | Rust SDK | Python SDK
Type System | JSON Schema | Compile-time (Rust) | Runtime (Pydantic v2)
Streaming | SSE/NDJSON spec | tokio async streams | async generators
Resilience | Retry policy spec | Circuit breaker, rate limiter, backpressure | ResilientExecutor with all patterns
Embeddings | — | Vector operations, similarity | Vector operations, similarity
Distribution | GitHub / npm | Crates.io | PyPI
Best For | Specification & standards | Systems, performance-critical | ML, data science, prototyping

30+ AI Providers Supported

All driven by protocol configuration — zero hardcoded logic for any provider
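In practice that means swapping providers is a configuration change rather than a code change. A minimal sketch, assuming a Python AiClient whose module name and constructor signature are illustrative:

```python
# Illustrative only: module name and constructor signature are assumed.
from ai_lib import AiClient

openai_client = AiClient(provider="openai")      # behavior from providers/openai.yaml
deepseek_client = AiClient(provider="deepseek")  # behavior from providers/deepseek.yaml
# Both clients expose the same unified interface; only the manifest differs.
```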

Global Providers

OpenAI
Anthropic
Google Gemini
Groq
Mistral
Cohere
Azure OpenAI
Together AI
Hugging Face
Grok
Ollama
DeepInfra
OpenRouter
NVIDIA
Fireworks
Replicate
AI21 Labs
Cerebras
Perplexity
Lepton AI

China Region

DeepSeek
Qwen
Doubao
Zhipu GLM
Baidu ERNIE
Tencent Hunyuan
iFlytek Spark
Moonshot
MiniMax
Baichuan
Yi / 01.AI
SiliconFlow
SenseNova
Tiangong

Ready to get started?

Read the documentation, pick your runtime, and start building with 30+ AI providers today.