
The Tor Project has confirmed that it is actively removing all artificial intelligence integrations from its browser, marking a clear departure from the trend seen in mainstream browsers like Firefox, Chrome, and Edge.
In the latest alpha release of Tor Browser 15.0 (15.0a4), developers emphasize that AI-powered features pose unresolvable privacy and auditability concerns, which fundamentally conflict with Tor’s security model.
The decision was announced earlier today in the release notes for the final alpha build of the 15.0 series; the stable version is expected to launch later this month. One of the key changes in this release is the explicit removal of “various AI features” that Mozilla had recently added to Firefox. These features let users access popular generative AI tools such as ChatGPT, Claude, and Google Gemini directly from the browser’s sidebar.
Over the past year, Mozilla has integrated AI chatbots into Firefox, beginning with version 133. These tools offer capabilities such as summarizing articles, generating content, and assisting with brainstorming, all from a persistent sidebar interface. While marketed as productivity enhancers, these features also require users to engage with third-party AI platforms, each governed by its own opaque privacy policy and data-usage terms.
The Tor Project, however, has taken a different stance. According to the 15.0a4 release notes, “machine learning systems and platforms are inherently un-auditable from a security and privacy perspective.” The developers assert that including these tools would undermine the browser’s mission and risk implying endorsement of platforms that cannot meet Tor’s strict privacy standards. Therefore, AI integrations have been deliberately stripped out of the codebase.
The Tor Browser is a modified version of Firefox designed to protect users from tracking, surveillance, and censorship. It routes traffic through the Tor network, anonymizing a user’s location and preventing network-level monitoring. Beyond encryption and traffic obfuscation, the browser also removes identifying features, standardizes window sizes (letterboxing), and integrates extensions like NoScript to block potentially malicious content.
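To make the routing concrete, here is a minimal sketch (not part of Tor Browser or the release notes) that sends a request through a locally running Tor client’s SOCKS5 proxy from Python. The 127.0.0.1:9050 address, the requests[socks] dependency, and the check.torproject.org endpoint are assumptions based on Tor’s commonly documented defaults, used here purely for illustration.

```python
# Minimal sketch: routing an HTTP request through a local Tor SOCKS5 proxy.
# Assumes a Tor client is listening on 127.0.0.1:9050 (Tor Browser's bundled
# client typically listens on 9150) and that `requests[socks]` is installed.
import requests

# "socks5h" (rather than "socks5") makes DNS resolution happen inside Tor too,
# so hostname lookups do not leak to the local network.
TOR_SOCKS_PROXY = "socks5h://127.0.0.1:9050"

proxies = {
    "http": TOR_SOCKS_PROXY,
    "https": TOR_SOCKS_PROXY,
}

# check.torproject.org reports whether the request arrived via a Tor exit node.
resp = requests.get("https://check.torproject.org/api/ip", proxies=proxies, timeout=30)
print(resp.json())  # e.g. {"IsTor": true, "IP": "<exit node address>"}
```

The point of the example is simply that both name resolution and traffic leave the machine only through the Tor circuit, which is the guarantee Tor Browser enforces for every page it loads.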
Allowing AI features, even optional ones, would introduce multiple vectors for de-anonymization. Most AI chat services operate in the cloud, require user accounts, and may retain session data. Even passive signals, such as the timing, frequency, or nature of a user’s prompts, could be exploited to fingerprint or identify individuals. These risks are fundamentally incompatible with Tor’s threat model, which is built to protect people living under repressive regimes, whistleblowers, journalists, and others who depend on strong anonymity.
The Tor team has also removed Mozilla’s new Firefox Home experience, various sidebar features, and recent branding changes. These removals are not merely aesthetic; they reflect a deliberate effort to minimize the browser’s data surface and keep it streamlined for privacy-respecting use.