# AiClient API (Rust)
## By Model Identifier
```rust
// Automatic protocol loading
let client = AiClient::from_model("anthropic/claude-3-5-sonnet").await?;
```
## Using the Builder

```rust
let client = AiClient::builder()
    .model("openai/gpt-4o")
    .protocol_dir("./ai-protocol")
    .timeout(Duration::from_secs(60))
    .build()
    .await?;
```
## ChatRequestBuilder

The builder pattern provides a fluent API:
```rust
let response = client.chat()
    // Messages
    .system("You are a helpful assistant")
    .user("Hello!")
    .messages(vec![Message::user("Follow-up")])
    // Parameters
    .temperature(0.7)
    .max_tokens(1000)
    .top_p(0.9)
    .stop(vec!["END".into()])
    // Tools
    .tools(vec![weather_tool])
    .tool_choice("auto")
    // Execution
    .execute()
    .await?;
```
## ChatResponse

```rust
pub struct ChatResponse {
    pub content: String,           // Response text
    pub tool_calls: Vec<ToolCall>, // Function calls (if any)
    pub finish_reason: String,     // Why the response ended
    pub usage: Usage,              // Token usage
}
```
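As a usage sketch, the fields above can be consumed like this; the prompt and the `Debug` print of `ToolCall` are illustrative assumptions, not part of the documented API:

```rust
// Minimal sketch: reading ChatResponse fields (names taken from the struct above).
let response = client.chat().user("What's the weather in Paris?").execute().await?;

if response.tool_calls.is_empty() {
    // Plain text answer
    println!("{}", response.content);
} else {
    // The model requested tool calls instead of (or in addition to) text.
    for call in &response.tool_calls {
        println!("tool call requested: {call:?}"); // assumes ToolCall implements Debug
    }
}
println!("finished because: {}", response.finish_reason);
```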
## StreamingEvent

```rust
pub enum StreamingEvent {
    ContentDelta { text: String, index: usize },
    ThinkingDelta { text: String },
    ToolCallStarted { id: String, name: String, index: usize },
    PartialToolCall { id: String, arguments: String, index: usize },
    ToolCallEnded { id: String, index: usize },
    StreamEnd { finish_reason: Option<String>, usage: Option<Usage> },
    Metadata { model: Option<String>, usage: Option<Usage> },
}
```
## CallStats

```rust
pub struct CallStats {
    pub total_tokens: u32,
    pub prompt_tokens: u32,
    pub completion_tokens: u32,
    pub latency_ms: u64,
    pub model: String,
    pub provider: String,
}
```
```rust
// Simple response
let response = client.chat().user("Hello").execute().await?;

// Response with statistics
let (response, stats) = client.chat().user("Hello").execute_with_stats().await?;
```
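As a small follow-up sketch, the returned `CallStats` can be logged directly; the field names come from the struct above:

```rust
// Sketch: logging the CallStats returned by execute_with_stats()
println!(
    "{} via {}: {} tokens ({} prompt / {} completion) in {} ms",
    stats.model,
    stats.provider,
    stats.total_tokens,
    stats.prompt_tokens,
    stats.completion_tokens,
    stats.latency_ms,
);
```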
```rust
let mut stream = client.chat()
    .user("Hello")
    .stream()
    .execute_stream()
    .await?;

while let Some(event) = stream.next().await {
    // Handle each StreamingEvent
}
```
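A minimal sketch of the loop body, matching on the `StreamingEvent` variants defined above; it assumes the stream yields plain `StreamingEvent` values and that `Usage` implements `Debug`, and only a few variants are handled:

```rust
use futures::StreamExt; // assumption: .next() comes from the futures StreamExt trait

while let Some(event) = stream.next().await {
    match event {
        StreamingEvent::ContentDelta { text, .. } => print!("{text}"),
        StreamingEvent::ThinkingDelta { text } => eprint!("[thinking] {text}"),
        StreamingEvent::StreamEnd { finish_reason, usage } => {
            println!("\nfinished: {finish_reason:?}, usage: {usage:?}");
        }
        _ => {} // ToolCall* and Metadata events are ignored in this sketch
    }
}
```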
```rust
let (mut stream, cancel_handle) = client.chat()
    .user("Long task...")
    .stream()
    .execute_stream_cancellable()
    .await?;

// Cancel from another task
tokio::spawn(async move {
    tokio::time::sleep(Duration::from_secs(5)).await;
    cancel_handle.cancel();
});
```
```rust
use ai_lib::{Error, ErrorContext};

match client.chat().user("Hello").execute().await {
    Ok(response) => println!("{}", response.content),
    Err(Error::Protocol(e)) => eprintln!("Protocol error: {e}"),
    Err(Error::Transport(e)) => eprintln!("HTTP error: {e}"),
    Err(Error::Remote(e)) => {
        eprintln!("Provider error: {}", e.error_type);
        // e.error_type is one of the 13 standard error classes
    }
    Err(e) => eprintln!("Other error: {e}"),
}
```

All errors carry the V2 standard error codes via `ErrorContext`. Use `error.context().standard_code` to access the `StandardErrorCode` enum (E1001–E9999) for programmatic handling.
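A minimal sketch of that programmatic handling, assuming `context()` and `standard_code` are available exactly as described above and that `StandardErrorCode` implements `Debug`; specific variant names are not shown here:

```rust
// Sketch only: branching on the V2 standard error code.
if let Err(err) = client.chat().user("Hello").execute().await {
    let code = err.context().standard_code;
    eprintln!("standard error code: {code:?}");
    // Match on `code` here to decide whether to retry, fall back, or surface the error.
}
```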
Execute multiple chat requests in parallel:
```rust
// Execute multiple chat requests in parallel
let results = client.chat_batch(requests, 5).await; // concurrency limit = 5

// Smart batching with automatic concurrency tuning
let results = client.chat_batch_smart(requests).await;
```
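A small follow-up sketch for consuming the batch results; it assumes each entry is a `Result` holding a `ChatResponse`, which is not spelled out above:

```rust
// Sketch only: iterating batch results (the per-item Result type is an assumption).
for (i, result) in results.into_iter().enumerate() {
    match result {
        Ok(response) => println!("request {i}: {}", response.content),
        Err(e) => eprintln!("request {i} failed: {e}"),
    }
}
```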
Validate a request against the protocol manifest before sending:

```rust
// Validate a request against the protocol manifest before sending
client.validate_request(&request)?;
```
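As a usage sketch, a failed validation can be handled locally before any network call is made; the error handling shown is illustrative:

```rust
// Sketch only: reject an invalid request instead of sending it.
if let Err(e) = client.validate_request(&request) {
    eprintln!("request rejected by protocol manifest: {e}");
    return Err(e.into()); // assumes the surrounding function returns a compatible error type
}
```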
## Feedback and Observability

Report feedback events for RLHF and monitoring, and inspect the current resilience state:
```rust
// Report feedback events for RLHF / monitoring
client.report_feedback(FeedbackEvent::Rating(RatingFeedback {
    request_id: "req-123".into(),
    rating: 5,
    max_rating: 5,
    category: None,
    comment: Some("Great response".into()),
    timestamp: chrono::Utc::now(),
})).await?;
```
```rust
// Get current resilience state
let signals = client.signals().await;
println!("Circuit: {:?}", signals.circuit_breaker);
```

Use `AiClientBuilder` for advanced configuration:
```rust
let client = AiClientBuilder::new()
    .protocol_path("path/to/protocols".into())
    .hot_reload(true)
    .with_fallbacks(vec!["openai/gpt-4o".into()])
    .feedback_sink(my_sink)
    .max_inflight(10)
    .circuit_breaker_default()
    .rate_limit_rps(5.0)
    .base_url_override("https://my-proxy.example.com")
    .build("anthropic/claude-3-5-sonnet")
    .await?;
```