What To Know
- In a surprising twist for privacy-minded AI users, Anthropic—creator of the popular Claude chatbot—has unveiled a sweeping policy update requiring users to choose whether their conversations and coding sessions can be used to train future models.
- For many, this ultimatum serves as a reminder that in the AI arms race, your data is currency—and it is your responsibility to safeguard it.
AI News: Major Policy Shift on Claude Data Usage
In a surprising twist for privacy-minded AI users, Anthropic—creator of the popular Claude chatbot—has unveiled a sweeping policy update requiring users to choose whether their conversations and coding sessions can be used to train future models. The deadline is September 28, 2025. Until now, Anthropic operated under a privacy-first policy: consumer chat data was deleted within 30 days unless flagged or legally required to be retained longer. That changes now. Users who do not opt out will see their data retained and used for up to five full years.
Claude chatbot users are grappling with a tough choice: share personal data for AI training, or opt out to preserve privacy
Image Credit: AI-Generated
What’s Changing for Users
This AI News report peels back the curtain on the details: The new policy applies across all consumer subscriptions, including Claude Free, Pro, Max, and even Claude Code. Enterprise and government customers, such as those on Claude for Work, Claude Gov, or Claude for Education, as well as API users, are not affected. Users will encounter a pop-up titled “Updates to Consumer Terms and Policies” with a prominent “Accept” button (which opts them in) and a smaller toggle for opting out. If no choice is made by the September 28 deadline, users stand to lose access to Claude entirely.
Why the Change—and What It Means
Anthropic frames the update as an opportunity for users to directly contribute to improving Claude’s reasoning, coding, and safety systems. It claims that user data helps build better classifiers to detect abuse and enhance moderation tools. The extended five-year retention aligns with long AI development cycles, helping the company ensure consistency across model upgrades.
But not everyone is convinced. Critics argue this move shifts the default from privacy to data sharing, making consent the exception rather than the norm. The design—a bold “Accept” versus a tiny “opt-out” toggle—is seen as a classic “dark pattern” nudging users toward sharing without full awareness. Users on online forums have voiced concern, pointing out that five years of retention feels excessive and that the opt-out path is deliberately less visible. Regulatory authorities have long cautioned firms against quietly altering privacy policies without transparency, and this development arrives amid broader debates over how AI companies collect and use personal data. Anthropic is also facing legal challenges, with some platforms accusing the company of using their content without proper authorization to train Claude.
What Users Should Do Now
- Act before September 28—navigate your Claude app settings or sign-up flow and explicitly choose whether to opt in or opt out.
- Understand the consequences—if your data is used, it becomes part of model training and cannot be removed. If you opt out, your data remains private and limited to a 30-day retention window.
- Be vigilant about defaults—the interface is crafted to steer you toward acceptance; reaching for the opt-out toggle is essential if you want privacy.
The stakes are high. Users must weigh model improvement gains against long-term retention and privacy implications. For many, this ultimatum serves as a reminder that in the AI arms race, your data is currency—and it is your responsibility to safeguard it. The decision will define how much control users truly maintain over their own digital conversations, and whether they will prioritize convenience and progress or privacy and protection.
For the latest on the Claude chatbot, keep logging on to Thailand AI News