
Brave Software has introduced a new privacy feature for its AI assistant, Leo, enabling cryptographically verifiable privacy and transparency using Trusted Execution Environments (TEEs) backed by Nvidia hardware.
The capability is available now in the Brave Nightly build for users experimenting with the DeepSeek V3.1 model.
The new system marks a significant shift away from blind trust in AI providers toward a “trust but verify” model, aligning with Brave’s privacy-by-design philosophy. Users can now verify both the model responding to their queries and the privacy of the conversation itself, without relying on vendor promises alone.
Brave Leo is the privacy-focused AI assistant integrated directly into the Brave browser. Unlike most mainstream AI chatbots, Leo operates without logging IP addresses, storing chat histories, or using user data for model training. With the addition of TEEs via NEAR AI infrastructure on Nvidia GPUs, Leo now processes user queries in isolated, hardware-backed secure enclaves, enabling what Brave calls Confidential LLM Computing.

TEEs are specialized secure areas within processors that isolate data and code from the host operating system and other processes. This isolation provides strong guarantees of confidentiality and integrity, even if the underlying system is compromised. By leveraging Nvidia’s Hopper GPU architecture and NEAR AI’s open-source TEE stack, Brave ensures that each AI inference occurs within a verified environment that shields user data from outside access.
The confidentiality assurance works through cryptographic attestation: each execution environment produces verifiable proofs (hashes of the model and the code it runs) that Brave checks against expected values before delivering AI responses. In this first rollout phase, Brave performs this verification internally and communicates the outcome to users with a “Verifiably Private with NEAR AI TEE” label in the Leo interface.
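As a rough illustration of that check, the sketch below (in Python; the report structure, field names, and digests are hypothetical, not Brave’s or NEAR AI’s actual API) compares the measurements in an attestation report against published expected values:

```python
import hmac

# Hypothetical published measurements (placeholder digests, not real values).
EXPECTED_MODEL_HASH = "3f7a..."  # hash of the DeepSeek V3.1 weights
EXPECTED_CODE_HASH = "9b21..."   # hash of the inference/serving code

def verify_attestation(report: dict) -> bool:
    """Accept a response only if the enclave's attested measurements match
    the published values.

    A production verifier would first validate the report's signature
    against the hardware vendor's certificate chain; that step is elided
    here for brevity.
    """
    model_ok = hmac.compare_digest(report.get("model_hash", ""), EXPECTED_MODEL_HASH)
    code_ok = hmac.compare_digest(report.get("code_hash", ""), EXPECTED_CODE_HASH)
    return model_ok and code_ok
```

Constant-time comparison (`hmac.compare_digest`) is used here simply as good hygiene when comparing digests; the essential idea is that the hashes are produced inside the TEE, so they cannot be forged by the host.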
This system also directly addresses growing concerns about model substitution, where providers quietly replace premium AI models with cheaper, lower-quality alternatives to reduce costs. Brave’s transparency mechanism allows users to verify which model is actually serving their requests, helping counter this practice and support accountability.
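Conceptually, the substitution check reduces to pinning a published hash per model name and rejecting any mismatch. A minimal sketch, again with hypothetical names and placeholder digests:

```python
# Hypothetical registry of published weight hashes per model (placeholders).
KNOWN_MODEL_HASHES = {
    "deepseek-v3.1": "3f7a...",
}

def model_is_authentic(claimed_model: str, attested_hash: str) -> bool:
    """True only if the hash attested inside the TEE matches the published
    hash for the model the provider claims to be serving; an unknown model
    or a mismatch is treated as possible substitution."""
    expected = KNOWN_MODEL_HASHES.get(claimed_model)
    return expected is not None and attested_hash == expected
```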
Currently, these verifiable guarantees are offered only with DeepSeek V3.1 in the Brave Nightly development build. However, Brave intends to expand support to more models based on user feedback and performance evaluations.
For users, this advancement means that conversations with Brave Leo are no longer just private by policy but cryptographically verifiable by design. Looking ahead, Brave plans to fully open-source the verification pipeline and bring validation closer to users, giving them even greater control and assurance.