
If you have a Claude account, they're going to train on your data moving forward

Anthropic is shifting Claude's default data policy for personal accounts: user conversations will now be used for model training unless users explicitly opt out. The change has sparked a fiery debate over user privacy and data ownership in the AI era. While some lament the 'dark pattern,' others see it as a pragmatic, if overdue, move to improve the models.

Score: 284 · Comments: 130 · Highest rank: #6 · 9h on front page
First seen: Aug 29, 12:00 PM · Last seen: Aug 29, 8:00 PM

The Lowdown

The core of the story, as inferred from the Hacker News title and subsequent discussion, is that Anthropic, the developer of the Claude AI assistant, has updated its terms of service regarding user data. The change applies to personal account holders and shifts the default around consumer data privacy in AI.

  • Specifically, for users with personal Claude accounts, the default policy is now to use their conversations and interactions to train Anthropic's models. This reverses the prior arrangement, under which consumer data was not used for training by default.
  • While this new default is opt-out for personal accounts, discussions indicate that API users and those under commercial agreements are exempt from this change, with their data not being used for training.
  • An important nuance, clarified in comments, is that a new 5-year data retention period applies only to data from accounts that do allow their conversations to be used for training; opted-out data retains its shorter, standard retention period.

This policy shift has ignited considerable debate regarding user privacy, data ownership, and the evolving ethics of AI development.

The Gossip

The Data Déjà Vu

Many users reported little surprise, having already assumed their conversations with AI models were being used for training, whether explicitly stated or not. This perspective views the explicit policy change and opt-out as a move towards greater transparency, even if the underlying practice was anticipated. Some even welcomed the change, hoping it would lead to better model performance.

Opt-Out Outrage: Dark Patterns or Due Diligence?

A significant point of contention is Anthropic's decision to make data training opt-out by default for personal accounts. Some users appreciated being given control over their data and noted the clear in-app pop-up. Others strongly criticized the default as a 'dark pattern' and an ethical breach, contrasting it with previous policies and warning that users might miss the notification or not understand its implications.

Proprietary Predicament: Is My Genius Training Your AI?

The discussion often turned to the quality and sensitivity of the data being fed into the models. Users questioned whether casual conversations or 'dumb questions' truly improve the AI. More seriously, commenters worried that proprietary code, intellectual property, or highly sensitive personal information could inadvertently become part of the training data, where it might be regurgitated, exploited by competitors, or even used for targeted advertising. This fear underlies the accusation of 'stealing' data.

Trust in Tech: Shifting Sands of Privacy Policies

This theme captures broader concerns about how tech companies handle user data and revise privacy policies. Commenters debated whether companies like Anthropic can be trusted to honor opt-out requests, given past industry precedent. The shift reinforced a general skepticism toward corporate privacy claims and highlighted users' ongoing struggle to control their digital footprint, even when paying for a service. It also prompted some to consider more controlled alternatives, such as locally run models.