Anthropic is making major changes to the way it handles user information. According to a recent update from the company, Claude users now face a new decision, effective September 28, 2025: allow their conversations to be used to improve AI systems, or decline to participate. The move has drawn attention because it places user choice at the center of how the company develops its technology. At the heart of this change is the Anthropic user data opt-out option, which lets people decide whether their chats will be included in future training.
The updates affect users on Claude Free, Pro, and Max plans, including when they access Claude Code through those accounts. They do not cover services governed by the Commercial Terms, such as Claude for Work, Claude Gov, Claude for Education, or API usage through third parties like Amazon Bedrock and Google Cloud’s Vertex AI.
The most talked-about aspect of Anthropic’s privacy policy update is how long data will be stored. For users who opt in, the company says information may be retained for up to five years, commonly referred to as the Anthropic data retention five-year policy. For many, this raises questions about how long personal conversations should reasonably be held. While Anthropic says the purpose is to improve transparency and security, not everyone is comfortable with such a long timeline. Separately, in June 2025, Anthropic introduced ‘Claude Gov’, an AI model customized for U.S. national security customers to help with tasks such as intelligence analysis, strategic planning, and daily operations.
Another critical part of this update is how consent is handled. Anthropic’s approach to user consent for AI training makes it clear that users must actively agree if they want their data included in training efforts. This shift is being seen as a way of putting more control back in the hands of users, a step that many privacy advocates have been calling for across the tech industry. Still, some argue that not all users will fully understand what they are agreeing to, especially given complex terms of service.
Anthropic has also released new consumer terms, which outline in greater detail how data is used, why it is collected, and what rights users have when choosing to opt in or out. These terms are designed to be more transparent, but critics point out that legal language often remains hard for everyday users to follow. The challenge for Anthropic is making sure people can make informed decisions without being overwhelmed by lengthy documents.
For Anthropic, these changes represent an attempt to balance two competing needs: advancing AI models and respecting individual privacy. The Anthropic user data opt-out feature is clearly a response to growing public concern about how personal information is handled in the AI industry. With the privacy policy update, the five-year data retention clause, and the focus on user consent for AI training, the company is signaling that it wants to show greater accountability. In June, a U.S. judge also ruled in Anthropic’s favor on a key copyright question, finding that the company’s use of books to train an AI model was legal.
As the new Anthropic consumer terms roll out, the debate will likely continue. Supporters believe that giving users an opt-out option is a positive step, while critics remain cautious about how long data is stored and how clearly consent is explained. For users, the key takeaway is simple: they now have more power than before to decide how their data is used.