
vex-llm

LLM provider integrations for the VEX Protocol.

Supported Providers

  • OpenAI - GPT-4, GPT-3.5, and other OpenAI models
  • Ollama - local LLM inference
  • DeepSeek - DeepSeek models
  • Mistral - Mistral AI models
  • Mock - deterministic provider for testing

Features

  • Secure WASM sandbox - isolates tool execution with wasmtime
  • OOM protection - strict 10 MB limit on tool output
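The 10 MB output cap mentioned above can be sketched as a bounded read: pull at most one byte more than the limit, and fail if that extra byte arrives. This is a minimal, self-contained illustration; the function name and error type are assumptions, not the crate's actual API.

```rust
use std::io::Read;

/// Illustrative cap matching the documented 10 MB limit.
const MAX_OUTPUT_BYTES: usize = 10 * 1024 * 1024;

/// Read at most `limit` bytes from a tool's output stream,
/// returning an error if the stream exceeds the limit.
fn read_capped<R: Read>(mut reader: R, limit: usize) -> Result<Vec<u8>, String> {
    let mut buf = Vec::new();
    // Taking limit + 1 bytes lets us detect overflow with a single extra byte
    // instead of buffering an unbounded stream.
    reader
        .by_ref()
        .take(limit as u64 + 1)
        .read_to_end(&mut buf)
        .map_err(|e| e.to_string())?;
    if buf.len() > limit {
        return Err(format!("tool output exceeded {} byte limit", limit));
    }
    Ok(buf)
}

fn main() {
    // A small output passes; an oversized one is rejected.
    assert!(read_capped(&[1u8, 2, 3][..], MAX_OUTPUT_BYTES).is_ok());
    assert!(read_capped(&vec![0u8; 16][..], 8).is_err());
    println!("ok");
}
```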

Installation

TOML
[dependencies]
vex-llm = "1.3.0"

# With OpenAI support
vex-llm = { version = "1.3.0", features = ["openai"] }

Quick Start

Rust
use vex_llm::{LlmProvider, LlmRequest, OllamaProvider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Point the provider at a local Ollama server.
    let provider = OllamaProvider::new("http://localhost:11434");
    let request = LlmRequest::new("Hello, world!");
    // Send the completion request and print the model's reply.
    let response = provider.complete(request).await?;
    println!("{}", response.content);
    Ok(())
}
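The provider abstraction in the quick start can be sketched without any network I/O by wiring the `LlmProvider` trait to the Mock provider listed above. This is a simplified synchronous sketch; the real crate is async, and the exact struct fields and error type here are illustrative assumptions.

```rust
// Minimal request/response shapes, assumed for illustration.
struct LlmRequest {
    prompt: String,
}

impl LlmRequest {
    fn new(prompt: &str) -> Self {
        Self { prompt: prompt.to_string() }
    }
}

struct LlmResponse {
    content: String,
}

// A provider turns a request into a response; each backend
// (OpenAI, Ollama, DeepSeek, Mistral, Mock) would implement this.
trait LlmProvider {
    fn complete(&self, request: LlmRequest) -> Result<LlmResponse, String>;
}

/// The testing provider: returns a canned reply without network I/O.
struct MockProvider;

impl LlmProvider for MockProvider {
    fn complete(&self, request: LlmRequest) -> Result<LlmResponse, String> {
        Ok(LlmResponse {
            content: format!("mock reply to: {}", request.prompt),
        })
    }
}

fn main() -> Result<(), String> {
    let provider = MockProvider;
    let response = provider.complete(LlmRequest::new("Hello, world!"))?;
    assert!(response.content.contains("Hello, world!"));
    println!("{}", response.content);
    Ok(())
}
```

Because calling code depends only on the trait, swapping the mock for a real backend is a one-line change at the construction site.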

License

Apache-2.0 License - see LICENSE for details.
