
OpenAI Raises Privacy Concerns as NY Times Seeks ChatGPT Conversations

OpenAI said the New York Times “disregards long-standing privacy protections” by demanding that the company turn over 20 million private ChatGPT conversations in its lawsuit. But a spokesperson for the newspaper said the AI company is being misleading and that there's no threat to users' privacy.


The newspaper’s 2023 case against OpenAI and Microsoft claims that the companies engaged in large-scale copyright infringement to fuel the development of their AI models (see 2312270044).

The newspaper seeks the 20 million conversations -- randomly sampled between December 2022 and November 2024 -- because it claims that it “might find examples of you using ChatGPT to try to get around [the Times'] paywall,” OpenAI Chief Information Security Officer Dane Stuckey wrote to ChatGPT users on Wednesday. “This demand disregards long-standing privacy protections, breaks with common-sense security practices, and would force us to turn over … highly personal conversations from people who have no connection to the Times’ baseless lawsuit against OpenAI.”

“Journalism has historically played a critical role in defending people’s right to privacy throughout the world,” added Stuckey. “However, this demand from the New York Times does not live up to that legacy, and we’re asking the court to reject it. We will continue to explore every option available to protect our users' privacy.”

In a Q&A beneath Stuckey’s statement, OpenAI noted that the newspaper, if its request is granted, “would be legally obligated … to not make any data public outside the court process.” However, the AI company said, “If the Times continues to push to access it in any way that will make the conversations public, we will fight to protect your privacy at every step.”

In an emailed statement to Privacy Daily, the newspaper's spokesperson said, “The New York Times’s case against OpenAI and Microsoft is about holding these companies accountable for stealing millions of copyrighted works to create products that directly compete with The Times. In another attempt to cover up its illegal conduct, OpenAI’s blog post purposely misleads its users and omits the facts. No ChatGPT user’s privacy is at risk."

"The court ordered OpenAI to provide a sample of chats, anonymized by OpenAI itself, under a legal protective order," the Times spokesperson added. "This fear-mongering is all the more dishonest given that OpenAI’s own terms of service permit the company to train its models on users’ chats and turn over chats for litigation."