Is ChatGPT Wearing a Wire?
By Lincoln A. West, J.D., 8/04/2025
A recent court order in the New York Times v. OpenAI and Microsoft copyright lawsuit may fundamentally alter the AI landscape—and pose a serious risk to user privacy.

The court ruled that OpenAI must retain all user prompts and outputs indefinitely, including deleted chats and those submitted in temporary ("private") mode. This reportedly applies to free and paid consumer users alike, with exemptions reported only for ChatGPT Enterprise customers and API customers under zero-data-retention agreements. The stated reason? Preserving potential evidence of copyright infringement during the litigation.
But the implications extend far beyond the litigants and reach deep into the public sphere. This ruling effectively overrides the privacy commitments in OpenAI's terms of service as well as the deletion rights established by data privacy laws such as the GDPR and CCPA/CPRA. Users can no longer count on any meaningful privacy with these tools, whatever they have been promised by law or by contract. As critics have pointed out, it is akin to conducting a therapy session in public, with a permanent transcript available for scrutiny.
Beyond the privacy concerns (users now have no way to delete, change, or control the use of their own information), there are logistical considerations as well. OpenAI reportedly processes billions of prompts monthly, and requiring indefinite retention of that volume of data may cripple its digital infrastructure.
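For a sense of scale, consider the rough back-of-envelope sketch in Python below. Every figure in it is an illustrative assumption rather than a number from the litigation or from OpenAI; the point is simply that indefinite retention compounds month after month.

# Back-of-envelope estimate of the retention burden described above.
# Every figure here is an illustrative assumption, not a number from
# the court filings or from OpenAI.

PROMPTS_PER_MONTH = 3_000_000_000   # assumed: "billions" of prompts monthly
BYTES_PER_EXCHANGE = 10_000         # assumed: ~10 KB per prompt plus response
REPLICATION_FACTOR = 3              # assumed: redundant copies for durability

monthly_bytes = PROMPTS_PER_MONTH * BYTES_PER_EXCHANGE * REPLICATION_FACTOR
yearly_bytes = monthly_bytes * 12

print(f"~{monthly_bytes / 1e12:,.0f} TB accumulated per month")
print(f"~{yearly_bytes / 1e15:,.2f} PB accumulated per year, with no end date")

Even under these modest assumptions, the archive grows by roughly a petabyte a year, and because the order imposes no end date, that growth is open-ended.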
In addition to exposing users to an unprecedented level of surveillance, the ruling could stifle progress by raising significant barriers to innovation. First, free AI access may end as data storage costs skyrocket. Second, smaller developers may struggle to afford the infrastructure required to meet these new retention standards and may be unable to bring their products to market at all.
Unless and until this ruling is revisited, the safest approach for AI users is to treat every AI interaction as if it's on the record, because it is, and to assume that anything in your prompts can potentially be used against you.