# 🧠 ProvnAI

### Refining Accountable Intelligence

An open research initiative dedicated to verifying the thoughts and actions of autonomous agents.
## 🛡️ The Context
Today's AI agents often feel like black boxes. They can hallucinate or drift, and they cannot easily prove how they reached a decision. ProvnAI is a community-driven initiative exploring the Trust Layers needed to make agentic decisions more verifiable and secure.
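To make the "prove how they reached a decision" idea concrete, here is a minimal, self-contained sketch of a tamper-evident decision log in Rust. It is an illustration only, not the VEX Protocol API: each record an agent appends is chained to the previous one by a hash, so verification re-derives the chain and any edited step breaks it. (`DecisionRecord`, `append`, and `verify` are hypothetical names; `DefaultHasher` is used for brevity and is not cryptographic — a real trust layer would use a cryptographic hash and signatures.)

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// One step an agent took, chained to the previous step by hash.
struct DecisionRecord {
    step: u64,
    action: String,
    prev_hash: u64,
    hash: u64,
}

// Hash a record's fields together with the previous record's hash.
// NOTE: DefaultHasher is NOT cryptographic; this is illustrative only.
fn hash_record(step: u64, action: &str, prev_hash: u64) -> u64 {
    let mut h = DefaultHasher::new();
    step.hash(&mut h);
    action.hash(&mut h);
    prev_hash.hash(&mut h);
    h.finish()
}

// Append an action, linking it to the tail of the log.
fn append(log: &mut Vec<DecisionRecord>, action: &str) {
    let step = log.len() as u64;
    let prev_hash = log.last().map_or(0, |r| r.hash);
    let hash = hash_record(step, action, prev_hash);
    log.push(DecisionRecord { step, action: action.to_string(), prev_hash, hash });
}

// Re-derive every hash in order; any edited record or broken link fails.
fn verify(log: &[DecisionRecord]) -> bool {
    let mut prev = 0;
    log.iter().all(|r| {
        let ok = r.prev_hash == prev && r.hash == hash_record(r.step, &r.action, prev);
        prev = r.hash;
        ok
    })
}

fn main() {
    let mut log = Vec::new();
    append(&mut log, "fetch source document");
    append(&mut log, "summarize findings");
    assert!(verify(&log));

    // Tampering with a past action invalidates the whole chain.
    log[0].action = "fetch a different document".to_string();
    assert!(!verify(&log));
    println!("tamper-evident log behaves as expected");
}
```

The point of the sketch is the shape of the problem: once decisions are chained and anchored, an auditor can verify the record independently of the agent that produced it.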
## 🧩 The Stack
| Layer | Project | Description | Status |
|---|---|---|---|
| Cognitive | VEX Protocol | Core Rust framework (17 crates) providing trust primitives for AI. | LIVE (v1.7.0) |
| Forensic | VEX Explorer | Client-side forensic tool for mathematical integrity of VEPs. | LIVE (v1.5.0) |
| SDK | VEX-SDK | Unified Python/TS interface for hardware-rooted cognitive routing. | LIVE (v1.5.0) |
| Identity | Provn-SDK | Sovereign cryptographic signing and permanent data anchoring. | LIVE |
| Interception | McpVanguard | Real-time security proxy and verification buffer for agent tooling. | LIVE |
| Environment | Attest | Hardware-sealed identity and provenance (Integrated into VEX). | |
| Demo | VexEvolve | Autonomous AI newsroom demonstrating evolutionary verification. | LIVE |
| Evaluation | VEX-HALT | Standardized benchmark tool for calibrating AI verification. | PAUSED |
| Studio | Agentic Studio | Visual development environment for building and monitoring agents. | RESEARCH |
| Compute | Verified Compute | Hybrid browser/native system exploring decentralized processing. | RESEARCH |
| Evolution | Agentic Evolution | Research into verifiable, recursive self-improving agent loops. | RESEARCH |
| Harmony | Collective Harmony | Rules-as-Code research for automated institutional alignment. | RESEARCH |
## 🤝 Get Involved
- Learn More: Read the Docs • Explore VEX • Star the repos.
- Support: Help keep this initiative independent via Patreon.
- Discord: Join the community for discussion.
*Infrastructure for Accountable Intelligence*