ChatGPT: Privacy, Training & Output Ownership
Tier-by-tier analysis of ChatGPT's data handling, training policies, and commercial output rights. Updated 2026-02-12.
Quick Answer
OpenAI's ChatGPT suite, as of February 2026, has expanded into a complex ecosystem ranging from ad-supported consumer tools (Free/Go) to highly regulated environments (Healthcare/Enterprise). While GPT-5 series models provide elite reasoning, the privacy protections vary drastically: consumer data is used for training by default, while business tiers offer zero-retention and non-training guarantees. For developers, the OpenAI API provides a separate compliance path with ZDR, BAA support, and access through Azure OpenAI Service.
Attorneys and healthcare professionals must strictly avoid consumer tiers (Free, Go, Plus, Pro) for confidential data. The Business tier (formerly 'Team,' renamed August 29, 2025) provides non-training guarantees but lacks BAA support. Only the Enterprise or specialized 'ChatGPT for Healthcare' tier is suitable for PHI. For developers building applications, the OpenAI API (direct or via Azure) provides the strongest compliance path, with Azure OpenAI as the gold standard for regulated industries.
Tier-by-Tier Analysis
Consumer (Free, Go, Plus, Pro)
Sensitive Data
No
Used for Training
Yes
Output Ownership
Conditional
Sensitive Data
Data is used for model training by default and, in the lower tiers, also feeds targeted advertising profiles.
Training
Model improvement (training) is enabled by default for all individual personal workspace users.
OpenAI uses conversations to train future models unless the user manually disables 'Improve the model for everyone' in Data Controls or via privacy.openai.com. Temporary Chat conversations are deleted after 30 days and are not used for training, but this protection lacks contractual backing. ChatGPT Pro ($200/mo) remains in the consumer tier despite its premium pricing: it provides advanced model access but identical privacy terms to Plus.
Output Ownership
User owns inputs, and OpenAI assigns its rights in outputs, but copyright remains legally uncertain for AI-only content.
Terms assign rights to the user, but specify 'to the extent permitted by law.' In 2026, courts still largely refuse to grant copyright to non-human-authored work, and the consumer license allows OpenAI broad rights to use content for safety and service maintenance.
Data Retention
Data is retained indefinitely to maintain chat history. If history is disabled, OpenAI still retains a copy for 30 days to monitor for abuse before deletion.
Security Measures
Standard AES-256 encryption at rest and TLS 1.2+ in transit. Consumer accounts lack the advanced auditing and administrative controls found in higher tiers. No SOC 2 coverage, no BAA, no DPA.
Your Rights & Control
Users can export their data, delete their accounts, and opt out of training via the settings menu. Rights are subject to the individual's local jurisdiction (e.g., GDPR/CCPA).
Special Considerations
The 'Go' ($8/mo) and 'Free' tiers now include 'Sponsored Recommendations' (ads). These tiers should never be used for professional work due to the risk of data leakage into the advertising profile system.
Business (formerly Team, renamed Aug 29 2025)
Sensitive Data
Limited
Used for Training
No
Output Ownership
User
Sensitive Data
Provides much higher security than consumer tiers with SOC 2 Type II and DPA, but lacks HIPAA BAA support.
Training
Inputs and outputs are excluded from model training by default for all business customers.
Data is isolated from the general training pool. Business data is never used to improve OpenAI's foundational models, ensuring that proprietary corporate logic remains within the organization's instance.
Output Ownership
Business terms provide stronger commercial ownership rights and clearer IP indemnification via 'Copyright Shield.'
OpenAI provides an explicit assignment of rights to the customer and offers 'Copyright Shield' (IP indemnification) to protect business users from third-party infringement claims arising from outputs.
Data Retention
Retention is controlled by the workspace administrator. Data is stored within the organization's silo and can be deleted on-demand by admins.
Security Measures
Includes SOC 2 Type II compliance, DPA availability, and administrative consoles for managing user access and monitoring usage patterns. However, the Business tier lacks SAML SSO, SCIM provisioning, Enterprise Key Management, and BAA support.
Your Rights & Control
Administrators have full control over data export, user seats, and workspace-wide privacy settings. Individual users' rights are managed by the employer.
Special Considerations
Ideal for small-to-medium professional firms (legal, consulting) where non-training is a prerequisite but extreme compliance (like ITAR or HIPAA) is not required. OpenAI explicitly states BAAs are unavailable for the Business tier. Priced at approximately $25–30/seat/month.
Enterprise / Healthcare
Sensitive Data
Yes
Used for Training
No
Output Ownership
User
Sensitive Data
Enterprise-grade security with available HIPAA BAA, Enterprise Key Management (EKM), and data residency across 10 regions.
Training
Strict non-training policy with zero-retention options available via API and Healthcare portals.
Enterprise data is logically isolated. For Healthcare customers, 'ChatGPT for Healthcare' uses dedicated, compartmentalized processing that never bleeds into other interactions or foundational training.
Output Ownership
Full commercial ownership with the highest level of IP protection and legal indemnification.
Enterprise customers enjoy full rights to outputs with specialized legal clauses that prioritize corporate ownership over default individual terms.
Data Retention
Fully customizable retention policies with admin-controlled data retention windows. For HIPAA-configured API endpoints, OpenAI offers a zero-retention mode where data is never stored on disk.
Security Measures
Top-tier security including SAML SSO, SCIM provisioning, Enterprise Key Management (EKM) with customer-controlled encryption keys, detailed audit logs, and data residency across 10 regions (US, Europe, UK, Japan, Canada, South Korea, Singapore, Australia, India, UAE). HIPAA compliance is supported for eligible entities using the Healthcare suite.
Your Rights & Control
Granular control over all data. Organizations can implement their own governance rules on top of the OpenAI platform.
Special Considerations
As of February 2026, only this tier can be considered safe for PHI within the ChatGPT product family when configured with a Business Associate Agreement. For developers building applications, see the API tiers below, which provide a distinct compliance path including ZDR, BAA support, and Azure OpenAI access.
Standard API (Direct)
Sensitive Data
Limited
Used for Training
No
Output Ownership
User
Sensitive Data
No training on API data by default (since March 2023), but standard 30-day retention applies for abuse monitoring.
Training
OpenAI does not train on data submitted through the API by default. This policy has been in effect since March 1, 2023.
OpenAI has not trained on API customer data by default since March 2023. This is a distinct policy from the ChatGPT consumer product (which trains by default). Developers may opt in to data sharing for model improvement, but this is never automatic. The Business Terms explicitly confirm this separation.
Output Ownership
Developers retain full ownership of inputs and outputs. OpenAI assigns its rights in output to the user.
OpenAI assigns all its right, title, and interest in outputs to the developer, 'to the extent permitted by applicable law.' The developer's end users can own the outputs per the developer's own terms. Copyright Shield (IP indemnification) is available for certain API usage tiers.
Data Retention
API inputs and outputs are retained for 30 days for abuse and misuse monitoring, then automatically deleted. This retention is for safety purposes only and data is not used for training during this period.
Security Measures
SOC 2 Type II certified. Data encrypted in transit and at rest. DPA available. API key and organization-level authentication. Usage dashboards and rate limiting.
Your Rights & Control
Developers can manage their data through the API platform. Standard GDPR/CCPA rights apply. OpenAI provides a subprocessor list and data processing documentation.
Special Considerations
The standard API is suitable for most commercial applications. For HIPAA workloads, request ZDR and a BAA through OpenAI's sales process. OpenAI's Under-18 API Guidance explicitly requires ZDR for any application processing children's data — making this the clearest policy of any major provider for COPPA compliance.
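The routing logic this guide implies (general work on the standard API; children's data on ZDR; PHI on ZDR plus a BAA, or Azure) can be sketched as a small policy helper. Everything here is illustrative: the classification labels, the `Route` type, and the endpoint names are assumptions for this sketch, not part of any OpenAI API.

```python
# Illustrative policy helper: map a data classification to the minimum
# compliant OpenAI configuration described in this guide. All names and
# the classification scheme are assumptions, not an OpenAI interface.

from dataclasses import dataclass

@dataclass(frozen=True)
class Route:
    endpoint: str        # which product/endpoint family to use
    requires_zdr: bool   # Zero Data Retention must be approved first
    requires_baa: bool   # HIPAA Business Associate Agreement needed

def route_for(classification: str) -> Route:
    """Return the minimum compliant route for a data classification."""
    if classification == "phi":            # HIPAA-regulated health data
        return Route("api-zdr-or-azure", requires_zdr=True, requires_baa=True)
    if classification == "minor":          # under-18 user data (COPPA)
        return Route("api-zdr", requires_zdr=True, requires_baa=False)
    # confidential client material and general business data
    return Route("api-standard", requires_zdr=False, requires_baa=False)

print(route_for("phi"))    # PHI needs ZDR plus a BAA
print(route_for("minor"))  # children's data requires ZDR per OpenAI guidance
```

Encoding the policy in code rather than a wiki page makes it enforceable at the point where requests are actually dispatched.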
API with Zero Data Retention (ZDR)
Sensitive Data
Yes
Used for Training
No
Output Ownership
User
Sensitive Data
ZDR eliminates the 30-day retention window. BAA available for HIPAA use cases.
Training
No training and no persistent storage of inputs or outputs.
ZDR removes the standard 30-day abuse monitoring retention. Prompts and completions are processed and immediately discarded. No data is written to persistent storage or used for any purpose beyond fulfilling the request.
Output Ownership
Full developer ownership with no OpenAI claims.
Identical to standard API — full ownership assigned to the developer with no OpenAI claims.
Data Retention
Zero. Data is processed in-memory and not written to disk. This is available on approval through OpenAI's sales/enterprise team.
Security Measures
All standard API security plus zero persistent storage. SOC 2 Type II. BAA available for HIPAA-regulated workloads. DPA available.
Your Rights & Control
Same as standard API, with minimal data footprint to manage.
Special Considerations
ZDR must be explicitly requested and approved — it is not self-service. Developers building healthcare, legal, or children's applications should request ZDR proactively. OpenAI launched 'OpenAI for Healthcare' in 2025 with purpose-built HIPAA-compliant API configurations.
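While a ZDR request is pending, one defensive pattern is to redact obvious identifiers client-side before any prompt leaves the application. A minimal sketch follows; the patterns are illustrative only and nowhere near the full HIPAA Safe Harbor identifier list, so treat this as a stopgap, not a de-identification solution.

```python
import re

# Illustrative client-side redaction before sending prompts to a non-ZDR
# endpoint. These patterns catch only the most obvious identifiers; real
# HIPAA de-identification requires a vetted process, not a regex list.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed type label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Patient jane.doe@example.com, SSN 123-45-6789, call 555-867-5309"))
# -> Patient [EMAIL], SSN [SSN], call [PHONE]
```

Redacting before transmission limits exposure during the 30-day abuse-monitoring window, but it does not substitute for ZDR or a BAA where those are required.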
Azure OpenAI Service
Sensitive Data
Yes
Used for Training
No
Output Ownership
User
Sensitive Data
The most comprehensive compliance posture for accessing OpenAI models — BAA included by default, broadest certification portfolio in the industry.
Training
Data is completely isolated from OpenAI's consumer services. Neither Microsoft nor OpenAI trains on customer data. OpenAI has zero access to Azure-processed data.
Azure OpenAI Service (now branded 'Azure Direct Models' within Microsoft Foundry) provides OpenAI model access within Azure infrastructure. Data never reaches OpenAI's servers or consumer services. Microsoft does not train on customer data. The isolation is architectural, not just contractual.
Output Ownership
Full developer ownership under Microsoft Product Terms.
Outputs belong entirely to the developer under Microsoft's standard commercial terms. No claims from either Microsoft or OpenAI.
Data Retention
Standard 30-day abuse monitoring retention applies by default, but customers can request opt-out. Customer-managed retention policies available. ZDR configurations available on approval.
Security Measures
Inherits Azure's full compliance portfolio — the broadest in the industry: SOC 2 Type II, ISO 27001, FedRAMP High, DoD IL2, HITRUST, HIPAA (BAA included by default through Microsoft Product Terms), PCI DSS, and dozens of additional certifications. Customer-managed encryption keys via Azure Key Vault. Azure Private Link for network isolation. Full regional data residency across all Azure regions. Comprehensive audit logging via Azure Monitor.
Your Rights & Control
Full Azure data management capabilities. Azure Policy for governance enforcement. Diagnostic logging and compliance tooling.
Special Considerations
Azure OpenAI is widely considered the gold standard for regulated industries accessing OpenAI models. The BAA is included by default through Microsoft Product Terms for eligible enterprise licensing — no separate negotiation required. Best for healthcare, financial services, government, and legal applications. Organizations already under Microsoft enterprise agreements can often activate Azure OpenAI with minimal additional contracting.
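The architectural isolation described above shows up concretely in where requests are sent and how they authenticate. The sketch below contrasts the two request shapes; the resource name, deployment name, and API version are placeholders, so verify current values against Microsoft's Azure OpenAI documentation before relying on them.

```python
# Illustrative comparison of request construction: direct OpenAI API vs.
# Azure OpenAI. Resource name, deployment name, and api-version below are
# placeholder assumptions; consult current Microsoft docs for real values.

def direct_openai_request(api_key: str) -> dict:
    # Direct API: requests go to OpenAI-operated infrastructure.
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }

def azure_openai_request(resource: str, deployment: str, api_key: str) -> dict:
    # Azure: requests go to the customer's own Azure resource endpoint,
    # so data is processed inside Azure rather than on OpenAI's servers.
    return {
        "url": (f"https://{resource}.openai.azure.com/openai/deployments/"
                f"{deployment}/chat/completions?api-version=2024-06-01"),
        "headers": {"api-key": api_key},  # Azure uses api-key, not Bearer
    }

print(direct_openai_request("sk-...")["url"])
print(azure_openai_request("contoso-eastus", "gpt-5-prod", "...")["url"])
```

The per-customer hostname is the visible artifact of the isolation: the traffic terminates at a resource the customer provisions and controls, which is also where Azure Private Link and customer-managed keys attach.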
FAQ: ChatGPT
Does ChatGPT train on my inputs?
ChatGPT has multiple tiers with different training policies. The Enterprise / Healthcare tier does not train on inputs: it enforces a strict non-training policy, with zero-retention options available via the API and Healthcare portals. Free and other consumer tiers train on data by default. See the full tier breakdown above.
Can I use ChatGPT with confidential or client data?
Only at the strongest tier. Enterprise / Healthcare provides enterprise-grade security with an available HIPAA BAA, Enterprise Key Management (EKM), and data residency across 10 regions. Consumer tiers should not be used with confidential material.
Who owns the output I generate with ChatGPT?
Output ownership for ChatGPT varies by tier. Enterprise / Healthcare customers get full commercial ownership with the highest level of IP protection and legal indemnification.
What is ChatGPT's data retention policy?
ChatGPT retention policies vary by tier. Enterprise / Healthcare offers fully customizable, admin-controlled retention windows, and HIPAA-configured API endpoints support a zero-retention mode where data is never stored on disk.
Which ChatGPT tier is safest for professional or regulated use?
The Enterprise / Healthcare tier of ChatGPT is the strongest option for professional or confidential use. As of February 2026, only this tier can be considered safe for PHI within the ChatGPT product family when configured with a Business Associate Agreement. For developers building applications, the API tiers covered above provide a distinct compliance path including ZDR, BAA support, and Azure OpenAI access.
Does ChatGPT meet ABA Model Rule 1.6 confidentiality for lawyers handling client data?
Yes, at the strongest tier. Use the Enterprise / Healthcare tier of ChatGPT. See the AI Privacy Guide at https://hoaglaw.ai/resources/ai-privacy-guide for the full comparison.
Need an AI-aware contract review or governance policy?
Hoag Law.ai builds AI-aware MSAs, DPAs, and internal governance frameworks for startups, flat-rate from $2,500/month. If you're evaluating ChatGPT for your team, let's talk.
Book a free call