# Chat Completions
Chat completions are the primary API for interacting with AI models. Both runtimes provide a unified interface that works across all 30+ providers.
## Basic Usage
```rust
let client = AiClient::from_model("openai/gpt-4o").await?;

let response = client.chat()
    .user("Hello, world!")
    .execute()
    .await?;

println!("{}", response.content);
```

### Python
```python
client = await AiClient.create("openai/gpt-4o")

response = await client.chat() \
    .user("Hello, world!") \
    .execute()

print(response.content)
```
## Messages

### System Messages

Set the model’s behavior:
```rust
// Rust
client.chat()
    .system("You are a helpful coding assistant. Always include code examples.")
    .user("Explain closures")
    .execute().await?;
```

```python
# Python
await client.chat() \
    .system("You are a helpful coding assistant.") \
    .user("Explain closures") \
    .execute()
```

### Multi-turn Conversations

Pass conversation history:
```rust
// Rust
use ai_lib::{Message, MessageRole};

let messages = vec![
    Message::system("You are a tutor."),
    Message::user("What is recursion?"),
    Message::assistant("Recursion is when a function calls itself..."),
    Message::user("Can you show an example?"),
];

client.chat().messages(messages).execute().await?;
```

```python
# Python
from ai_lib_python import Message

messages = [
    Message.system("You are a tutor."),
    Message.user("What is recursion?"),
    Message.assistant("Recursion is when a function calls itself..."),
    Message.user("Can you show an example?"),
]

await client.chat().messages(messages).execute()
```
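To continue the conversation, append each reply to the history before the next turn. A minimal Python sketch, using only the `Message` constructors and `response.content` shown above (the follow-up prompt is illustrative):

```python
# Continue the conversation: append the assistant's reply and the
# next user turn, then send the full history again.
response = await client.chat().messages(messages).execute()
messages.append(Message.assistant(response.content))
messages.append(Message.user("Can you explain the base case?"))
response = await client.chat().messages(messages).execute()
```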
Section titled “Parameters”| Parameter | Type | Description |
|---|---|---|
temperature | float | Randomness (0.0 = deterministic, 2.0 = creative) |
max_tokens | int | Maximum response length |
top_p | float | Nucleus sampling (alternative to temperature) |
stop | string[] | Sequences that stop generation |
```rust
// Rust
client.chat()
    .user("Write a poem")
    .temperature(0.9)
    .max_tokens(200)
    .top_p(0.95)
    .execute().await?;
```
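A Python sketch of the same request; the parameter setters (including `stop`, which has no example above) are assumed to mirror the Rust builder:

```python
# Assumes the Python builder exposes setters matching the parameter
# table above; the stop() call in particular is an assumption.
await client.chat() \
    .user("Write a poem") \
    .temperature(0.9) \
    .max_tokens(200) \
    .top_p(0.95) \
    .stop(["THE END"]) \
    .execute()
```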
## Streaming

For real-time output, use streaming:
```rust
// Rust
let mut stream = client.chat()
    .user("Tell me a story")
    .stream()
    .execute_stream()
    .await?;

while let Some(event) = stream.next().await {
    if let StreamingEvent::ContentDelta { text, .. } = event? {
        print!("{text}");
        std::io::stdout().flush()?;
    }
}
```

```python
# Python
async for event in client.chat() \
        .user("Tell me a story") \
        .stream():
    if event.is_content_delta:
        print(event.as_content_delta.text, end="", flush=True)
```
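If you also need the complete text after streaming, accumulate the deltas as they arrive; this sketch uses only the event accessors shown above:

```python
# Print each delta as it arrives while collecting the complete text.
full_text = ""
async for event in client.chat().user("Tell me a story").stream():
    if event.is_content_delta:
        chunk = event.as_content_delta.text
        full_text += chunk
        print(chunk, end="", flush=True)
```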
## Response Statistics

Track usage for cost management:
```rust
// Rust
let (response, stats) = client.chat()
    .user("Hello")
    .execute_with_stats()
    .await?;

println!("Prompt tokens: {}", stats.prompt_tokens);
println!("Completion tokens: {}", stats.completion_tokens);
println!("Latency: {}ms", stats.latency_ms);
```

```python
# Python
response, stats = await client.chat() \
    .user("Hello") \
    .execute_with_stats()

print(f"Tokens: {stats.total_tokens}")
print(f"Latency: {stats.latency_ms}ms")
```
## Provider Switching

The same code works across all providers:
```rust
// Just change the model identifier
let client = AiClient::from_model("anthropic/claude-3-5-sonnet").await?;
let client = AiClient::from_model("deepseek/deepseek-chat").await?;
let client = AiClient::from_model("gemini/gemini-2.0-flash").await?;
```

The protocol manifest handles endpoint URLs, authentication, parameter mapping, and streaming format differences automatically.
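The same switch works from Python, assuming `AiClient.create` accepts the same model identifiers:

```python
# Just change the model identifier
client = await AiClient.create("anthropic/claude-3-5-sonnet")
client = await AiClient.create("deepseek/deepseek-chat")
client = await AiClient.create("gemini/gemini-2.0-flash")
```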