
ChatGPT Data Privacy: Key Insights on Security, Agents, and Litigation (2025 Update)


Last Updated: December 2, 2025

Executive Summary: The 2025 Privacy Landscape

  • Litigation Impact: A federal court order in October 2025 restored standard deletion rights for most users, but data from mid-2025 remains archived.
  • Agent Risks: The new “Operator” agent retains data, including screenshots of your browser, for 90 days, significantly longer than standard chats.
  • Regulation: As of August 2025, the EU AI Act imposes strict transparency obligations on General Purpose AI.

ChatGPT remains the dominant force in AI, particularly since the November 2025 release of GPT-5.1 and the autonomous “Operator” agent. However, as the technology shifts from a passive chatbot to an active agent that browses the web on your behalf, the privacy stakes have escalated.

Statista surveys from the early adoption phase showed that nearly half of respondents feared personal data collection. By late 2025, those fears had evolved from simple data collection to complex legal retention and autonomous surveillance. This guide provides the definitive answers on how OpenAI handles your data today.

What is ChatGPT in 2025?

ChatGPT is a Large Language Model (LLM) and, increasingly, an autonomous agent. It generates text, writes code, and executes tasks based on your input. Developed by OpenAI, it currently runs on the GPT-5.1 architecture (released November 12, 2025), which features “Instant” and “Thinking” modes for adaptive reasoning.

Unlike previous versions, the new Operator feature allows the AI to navigate the web, click buttons, and perform actions on your behalf, introducing an entirely new layer of data privacy considerations.

Does ChatGPT save my data? (The “Legal Singularity”)

Yes, ChatGPT saves your data, but the rules changed dramatically in 2025 due to federal litigation.

1. The standard policy

For standard chat interactions, OpenAI stores your prompts, conversation history, and account details (email, IP, location) to train its models and provide the service. Typically, if you delete a chat, it is removed from OpenAI’s systems within 30 days.

2. The New York Times litigation effect

In 2025, the New York Times v. OpenAI copyright lawsuit fundamentally disrupted the “Right to Erasure.”

  • The preservation order (May–Sept 2025): In May 2025, a US Magistrate Judge ordered OpenAI to preserve all ChatGPT conversation logs indefinitely for legal discovery. During this period, even if you clicked “Delete,” your data was archived in a separate legal hold.
  • The October modification: On October 9, 2025, Judge Ona T. Wang modified this order. OpenAI was allowed to resume its standard 30-day deletion policy for new data.
  • The “Zombie Data” risk: However, data captured between May and September 2025 remains preserved. Furthermore, accounts specifically “flagged” by the New York Times as relevant to the case are still subject to indefinite retention.

Key takeaway: While standard deletion has returned for most, legal retention orders proved that your “Right to delete” can be suspended by the courts at any time.

The new privacy frontier: “Operator” and Agents

The release of the Operator agent introduced a stricter privacy regime. Because Operator can take actions on the real web (like booking flights or filling forms), it requires aggressive abuse monitoring.

| Feature | Retention period | What is captured? |
| --- | --- | --- |
| Standard chat | 30 days (after deletion) | Text prompts, file uploads |
| Operator agent | 90 days (after deletion) | Text, screenshots, browsing history |

The screenshot risk:

To function, Operator takes continuous screenshots of the browser window it controls. If Operator navigates to a page displaying sensitive PII (Personally Identifiable Information) or proprietary dashboards, those screenshots are captured. Even if you delete the session immediately, these screenshots are retained for 90 days for safety review.2
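The difference between the two retention windows can be made concrete with a small sketch. The 30- and 90-day figures come from the comparison table above; the helper function itself is purely illustrative, not an OpenAI tool:

```python
from datetime import date, timedelta

# Retention windows after deletion, per the comparison table above.
RETENTION_DAYS = {
    "standard_chat": 30,  # standard text chats
    "operator": 90,       # Operator sessions, including screenshots
}

def purge_date(deleted_on: date, product: str) -> date:
    """Earliest date deleted data ages out of the abuse-monitoring window."""
    return deleted_on + timedelta(days=RETENTION_DAYS[product])

# Example: a chat and an Operator session both deleted on the same day.
deleted = date(2025, 11, 1)
print(purge_date(deleted, "standard_chat"))  # 2025-12-01
print(purge_date(deleted, "operator"))       # 2026-01-30
```

In other words, an Operator session deleted on 1 November is not out of the safety-review window until the end of the following January.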

Why is ChatGPT Data Privacy a concern?

Beyond the “black box” of AI training, privacy is a regulatory minefield.

GDPR and the EU AI Act

Europe continues to lead in regulation. The EU AI Act became fully operational in 2025:

  • February 2025: Prohibitions on “unacceptable risk” AI took effect.
  • August 2025: General Purpose AI (GPAI) rules came into force, requiring OpenAI to maintain detailed technical documentation and comply with EU copyright law.

The Dutch stance:

The Dutch data protection authority (Autoriteit Persoonsgegevens) remains vigilant. Following the €290 million fine it imposed on Uber in 2024 for improper transfers of driver data to the US, Dutch organizations must be hyper-aware of where their AI data resides. The AP has emphasized that AI literacy (Article 4 of the AI Act) is now a mandatory obligation for companies deploying these tools.


How to make ChatGPT more privacy-friendly?

1. Opt-out of model training (Consumer Accounts)

If you are on a Free, Plus, or Pro plan, your data is used to train OpenAI’s models by default. You can stop this:

  1. Go to Settings > Data Controls.
  2. Toggle off “Improve the model for everyone”.
  3. Note: This does not bypass the 30-day (or 90-day for Operator) retention for abuse monitoring, but it prevents your data from becoming part of the AI’s permanent knowledge base.

2. Use “Temporary Chat”

For sensitive one-off tasks, use Temporary Chat. These chats are not saved to your history and are not used for training, though the 30-day abuse retention window still applies.9

3. Switch to Enterprise or Team Plans

For businesses, the only safe path is the ChatGPT Enterprise or Team subscription.

  • No Training: OpenAI explicitly states they do not train on business data by default.10
  • Enterprise Key Management (EKM): Released in late 2025, this allows companies to manage their own encryption keys. If you revoke the key, OpenAI can no longer read your data, providing a technical “kill switch”.11
  • Zero Data Retention (ZDR): Eligible enterprise clients can use ZDR APIs where no data is written to disk/logs, though this may limit some advanced features.12

4. Deep Research & Cloud Connectors

The new Deep Research tool connects to Google Drive and Microsoft OneDrive. While the contents of your files are not used for training in Enterprise plans, the interaction logs (metadata of what you accessed and when) are generated. Always practice “Least Privilege” principles when granting the AI access to your cloud storage.10

Recommendations for ChatGPT Data Privacy 2025

  • Avoid PII in Operator: Do not let the Operator agent view screens containing banking info or credentials, as screenshots are retained for 3 months.
  • Monitor Litigation: Understand that “deleted” data may still be discoverable in legal cases like NYT v. OpenAI.
  • Consider data residency: Use OpenAI’s EU data residency options to mitigate GDPR transfer risks.
  • Keep up to date with recent developments: Make sure you and your team are trained to use ChatGPT safely and securely. If you want support with this you can check out our ChatGPT Workshop.
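The first recommendation can be partially automated on the client side. Below is a minimal, hypothetical pre-screen that flags obvious PII before a prompt (or a page Operator is about to work on) ever reaches ChatGPT. The patterns and the `find_pii` helper are illustrative assumptions, not an OpenAI feature, and real PII detection requires a dedicated DLP tool:

```python
import re

# Hypothetical client-side precaution: flag obvious PII before sending a
# prompt to ChatGPT. Patterns are illustrative, not exhaustive.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_pii(text: str) -> list[str]:
    """Return the names of PII categories detected in `text`."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this: contact j.doe@example.com, IBAN NL91ABNA0417164300"
print(find_pii(prompt))  # ['email', 'iban']
```

A wrapper like this can block or redact a prompt before submission, which matters most for Operator sessions given the 90-day screenshot retention.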

Need help navigating the 2025 AI Privacy landscape?

DataNorth offers specialized ChatGPT Assessments to ensure your organization complies with the new EU AI Act obligations while leveraging the power of GPT-5.1. Get in touch with us to secure your AI operations.

Frequently Asked Questions

I deleted my ChatGPT history in July 2025. Is it really gone forever?

Likely not. While OpenAI restored standard deletion rights in October 2025, data generated or deleted between May 2025 and September 2025 was subject to a federal preservation order in the New York Times v. OpenAI lawsuit. This “Zombie Data” is archived in a separate legal hold and cannot be purged until the litigation concludes, even if it disappears from your chat history sidebar.

Does the new ‘Operator’ agent see my screen when I’m not using it?

The Operator agent only “sees” the specific browser tab it is controlling, but it captures continuous screenshots of that tab to function. Crucially, these screenshots are retained for 90 days for abuse monitoring. This is three times longer than standard text chats. If you have a sensitive bank statement open in the same tab Operator is using, that visual data is recorded and stored for three months.

Can OpenAI read my Enterprise data if the government demands it?

Not if you use Enterprise Key Management (EKM). Released in late 2025, EKM allows your organization to hold the encryption keys for your data. If you revoke these keys, the data stored on OpenAI’s servers becomes unreadable cryptographic noise, even to OpenAI’s own engineers or legal team.

If I use the ‘Deep Research’ tool on my Google Drive, does it train on my files?

For Enterprise and Team users, the content of your files is not used for training. However, the interaction logs (metadata) are generated. OpenAI creates a record of which files were accessed and when to monitor for abuse. While your trade secrets remain private, the “paper trail” of your research is still logged.

Can I ask ChatGPT to ‘unlearn’ a specific fact about me?

Technically, no. As highlighted by the Dutch Data Protection Authority (AP) in 2025, LLMs cannot easily “forget” a specific fact without a complete retraining of the model. This creates a conflict with the GDPR’s “Right to Rectification.” If ChatGPT outputs incorrect info about you, your only reliable option is to delete the entire conversation history or opt-out of training for future data; you cannot surgically edit the model’s memory.