Claude: Privacy, Training & Output Ownership
Tier-by-tier analysis of Claude's data handling, training policies, and commercial output rights. Updated 2026-02-12.
Quick Answer
Anthropic Claude is an AI assistant platform built on 'Constitutional AI' principles, prioritizing safety and alignment. As of February 2026, the service maintains a strict data privacy bifurcation between its consumer-facing plans and its commercial 'Claude for Work' ecosystem. New beta products — Claude CoWork, Claude in Excel, and Claude in PowerPoint — inherit the data handling terms of the user's underlying plan but are explicitly excluded from HIPAA BAA coverage and Zero Data Retention agreements. For developers, the Anthropic API provides a separate compliance path with ZDR, BAA support, and access through Amazon Bedrock and Azure AI Foundry.
To ensure professional confidentiality and compliance, organizations must use the Enterprise or API tiers; consumer plans (Free/Pro/Max) are unsuitable for PII or PHI because of five-year data retention and training that is enabled by default. Beta products (CoWork, Claude in Excel/PowerPoint) are excluded from BAA and ZDR coverage regardless of plan tier. The API (direct or via Bedrock/Azure) is the recommended path for developers building applications that handle sensitive, privileged, or regulated data.
Tier-by-Tier Analysis
Consumer (Free, Pro, Max)
Sensitive Data
No
Used for Training
Yes
Output Ownership
User
Sensitive Data
Default settings include long-term data retention and potential human review for safety monitoring. Incognito chats are excluded from training but lack contractual backing.
Training
As of the October 8, 2025 Consumer Terms update, data from consumer chats is used for model training by default unless the user manually opts out in Privacy Settings.
For all consumer subscription levels (Free at $0, Pro at $20/mo, Max at $100–200/mo), Anthropic utilizes chat transcripts and coding sessions to train future models. This behavior is enabled by default via the October 2025 policy update, requiring an active opt-out in Privacy Settings to disable. Incognito chats are excluded from training regardless of settings, but this is a product feature, not a contractual guarantee.
Output Ownership
Users generally own their inputs and outputs, subject to a license granted to Anthropic for service improvement.
Anthropic's terms specify that as between the user and Anthropic, the user owns the output generated by the service. However, the user must ensure they have the right to provide the input and adhere to usage policies.
Data Retention
If the user participates in model training, data is retained in a de-identified format for up to five years. If the user opts out of training, the standard retention period is 30 days, unless the content is flagged for a safety violation.
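The consumer retention rules above amount to a three-branch decision table. This sketch is illustrative only; the authoritative source is Anthropic's Consumer Terms, and the function name is hypothetical.

```python
def consumer_retention(training_enabled: bool, safety_flagged: bool) -> str:
    """Approximate the consumer-plan retention rules described above.
    Illustrative only -- the Consumer Terms are authoritative."""
    if training_enabled:
        # Training participants: de-identified retention for up to five years.
        return "up to 5 years (de-identified)"
    if safety_flagged:
        # Safety-flagged content is held beyond the standard window.
        return "extended (safety review)"
    # Opted out of training, not flagged: standard 30-day window.
    return "30 days"
```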
Security Measures
Standard encryption in transit and at rest is provided, but these tiers lack the advanced administrative controls and audit logs found in commercial versions. No SOC 2 coverage, no BAA, no DPA, and no ZDR are available for consumer plans.
Your Rights & Control
Users can delete individual conversations, which removes them from the interface immediately and from back-end systems within 30 days (provided training is disabled).
Special Considerations
A Consumer Health Data Privacy Policy was enacted in January 2026 for US users, governing the integration of third-party health applications with Claude. Claude CoWork (launched January 13, 2026), Claude in Excel (GA January 29, 2026), and Claude in PowerPoint (launched February 5, 2026) follow the consumer plan's data handling terms when accessed through Pro/Max accounts, meaning training opt-in/opt-out rules apply. Security researchers at PromptArmor demonstrated prompt injection vulnerabilities in the Excel add-in in November 2025. As of January 7, 2026, Anthropic became a Microsoft subprocessor for Microsoft 365 Copilot — when Claude models are accessed through Copilot Agent Mode, Microsoft's Product Terms and DPA govern, not Anthropic's.
Claude for Work (Team)
Sensitive Data
Limited
Used for Training
No
Output Ownership
User
Sensitive Data
Provides better data isolation than consumer tiers but does not include full HIPAA Business Associate Agreements.
Training
Commercial terms explicitly prohibit the use of customer data for training Anthropic's global models.
Under the 'Claude for Work' Commercial Terms, prompts and outputs are strictly isolated. Anthropic does not use data from Team accounts to train its large language models, ensuring that proprietary business logic and internal communications remain confidential.
Output Ownership
The organization typically owns all inputs and outputs generated by its members.
Outputs are legally owned by the customer organization. This tier is designed to provide clear intellectual property boundaries for small to mid-sized professional teams.
Data Retention
Data is retained for the duration of the account's active status, with administrative controls available to manage or delete history. Standard retention for deleted items is 30 days.
Security Measures
Includes administrative consoles for user management, basic audit trails, and SOC 2 Type II compliance artifacts available upon request. A DPA (with Standard Contractual Clauses) is automatically incorporated into the Commercial Terms of Service.
Your Rights & Control
Organization admins have the right to manage member data, export logs, and enforce deletion policies across the workspace.
Special Considerations
While safer than Pro accounts, the Team tier is notably excluded from Anthropic's HIPAA-ready service list, making it unsuitable for Protected Health Information (PHI). Claude CoWork, Claude in Excel, and Claude in PowerPoint used through Team accounts benefit from the no-training guarantee but remain excluded from BAA and ZDR coverage.
Enterprise / HIPAA-Ready
Sensitive Data
Yes
Used for Training
No
Output Ownership
User
Sensitive Data
Supports HIPAA compliance with signed BAAs and offers Zero Data Retention (ZDR) options for the Messages API and Token Counting API.
Training
Data is never used for training and is governed by strict enterprise-grade confidentiality agreements.
Enterprise accounts are entirely exempt from all model training activities. Anthropic treats all Enterprise data as highly confidential, processed only to provide the specific service to the client.
Output Ownership
Full organizational ownership of all data assets with indemnification protections.
Comprehensive ownership rights are established in the master service agreement (MSA), often including legal indemnification for intellectual property claims related to model outputs.
Data Retention
Retention is highly configurable by the organization. Enterprise administrators can set custom retention windows. For API-level ZDR details (Messages API, Batch API, Files API), see the API tiers below.
Security Measures
Includes SSO, SCIM provisioning, advanced audit logs, role-based access control (RBAC), and compliance with SOC 2 Type II.
Your Rights & Control
Administrators have full visibility and control over data residency, user activity, and compliance auditing tools.
Special Considerations
The HIPAA-ready offering requires a specific BAA and configuration. It is available only on the Enterprise tier and through first-party API usage. The BAA explicitly excludes 'other beta or chat products, features, or integrations' — meaning Claude CoWork, Claude in Excel, and Claude in PowerPoint are NOT covered even on Enterprise plans. For HIPAA-regulated workflows, use only the core Enterprise chat interface or the API directly.
Standard API (Direct)
Sensitive Data
Limited
Used for Training
No
Output Ownership
User
Sensitive Data
No training on API data by default, but standard 30-day retention applies for trust and safety monitoring.
Training
Anthropic does not train on API data by default. This has been the policy since the API launched.
Unlike the consumer chat product, the Anthropic API has never trained on customer data by default. The Commercial Terms of Service explicitly prohibit using API inputs or outputs for model training. A DPA with Standard Contractual Clauses is automatically incorporated into the Commercial Terms.
Output Ownership
Developers and their end users retain full ownership of inputs and outputs under the Commercial Terms of Service.
All outputs generated through the API belong to the developer (or their end users, depending on the developer's own terms). Anthropic claims no ownership or IP rights over API-generated content.
Data Retention
Standard API retention is 30 days for trust and safety purposes. The Batch API retains data for 29 days. The Files API retains data until explicit deletion by the developer. Anthropic reserves the right to retain data where required by law or to investigate usage policy violations (up to 2 years).
Security Measures
SOC 2 Type II certified. Data encrypted in transit (TLS 1.2+) and at rest (AES-256). DPA with SCCs auto-incorporated. API key authentication with usage monitoring.
Your Rights & Control
Developers can delete data through the API. Standard GDPR/CCPA rights apply. Anthropic provides a subprocessor list for compliance tracking.
Special Considerations
The standard API tier is suitable for most commercial applications handling non-regulated data. For HIPAA, privileged, or highly sensitive data, enable ZDR (see next tier). Claude Code is ZDR-eligible when used with commercial API credentials.
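For developers evaluating this path, a direct Messages API call looks like the sketch below. The endpoint URL and the `x-api-key` / `anthropic-version` headers are Anthropic's documented values; the model id is a placeholder, and nothing is actually sent here.

```python
import json
import os

# Sketch of a direct Anthropic Messages API request (constructed, not sent).
API_URL = "https://api.anthropic.com/v1/messages"

headers = {
    "x-api-key": os.environ.get("ANTHROPIC_API_KEY", "sk-..."),  # keep keys out of source
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}

payload = {
    "model": "claude-sonnet-4-5",  # placeholder model id; check current model list
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize the key obligations in this clause."},
    ],
}

# Ready to POST, e.g. requests.post(API_URL, headers=headers, data=body)
body = json.dumps(payload)
```

Requests sent this way fall under the Commercial Terms described above: no training by default, 30-day trust-and-safety retention unless ZDR is in place.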
API with Zero Data Retention (ZDR)
Sensitive Data
Yes
Used for Training
No
Output Ownership
User
Sensitive Data
ZDR eliminates persistent storage of prompts and completions. BAA available for HIPAA use cases.
Training
No training on data, and no persistent storage of inputs or outputs beyond the duration of the API request.
ZDR provides the strongest privacy guarantee available directly from Anthropic. Prompts and completions are processed in volatile memory and never written to persistent storage. No data is used for training, evaluation, or any purpose beyond fulfilling the immediate request.
Output Ownership
Full developer/end-user ownership with no Anthropic claims on inputs or outputs.
Identical to standard API — full ownership by the developer and their end users. ZDR adds the further assurance that Anthropic retains no copy of the data in the first place.
Data Retention
ZDR applies to the Messages API and Token Counting API endpoints only. Data exists only for the duration of the request. Even under ZDR, Anthropic reserves the right to retain data where required by law or to combat usage policy violations (up to 2 years), though this is a narrow exception. The Batch API (29-day retention) and Files API (until deletion) are NOT ZDR-eligible.
Security Measures
All standard API security measures plus zero persistent storage. SOC 2 Type II certified. BAA available for HIPAA-regulated workloads. DPA with SCCs auto-incorporated.
Your Rights & Control
Same as standard API, with the additional assurance that there is minimal data to manage since nothing is persisted.
Special Considerations
ZDR is the minimum viable standard for developers building applications that process attorney-client privileged information, PHI, or other regulated data. To enable ZDR, developers must contact Anthropic or configure it through their enterprise account. ZDR does NOT cover beta products, the Batch API, or the Files API.
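Because ZDR covers only some endpoints, an application handling regulated data should check coverage before routing a request. The helper below encodes the coverage described above; the names are illustrative, not an Anthropic API.

```python
# Per the coverage described above: ZDR applies to the Messages and
# Token Counting endpoints only. Names here are illustrative shorthand.
ZDR_ELIGIBLE = {"messages", "token_counting"}
NOT_ZDR_ELIGIBLE = {"batch", "files"}  # 29-day retention / retained until deletion

def zdr_covered(endpoint: str) -> bool:
    """Return True if the given endpoint falls under a ZDR agreement."""
    return endpoint in ZDR_ELIGIBLE
```

A practical consequence: a pipeline that batches privileged documents through the Batch API to save cost silently forfeits ZDR protection, so regulated workloads should use the Messages API even when batching would be cheaper.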
Claude via Amazon Bedrock
Sensitive Data
Yes
Used for Training
No
Output Ownership
User
Sensitive Data
Strongest default privacy posture of any Claude access path — AWS stores and logs nothing by default, and Anthropic has zero access to customer data.
Training
Neither AWS nor Anthropic trains on data submitted through Bedrock. This is the default, not an opt-in.
Amazon Bedrock provides Claude model access within the AWS infrastructure. AWS does not store prompts or completions by default, and Anthropic (the model provider) has zero access to customer data processed through Bedrock. No training occurs on any data submitted through this path.
Output Ownership
Full developer ownership under AWS and Anthropic commercial terms.
Outputs belong entirely to the developer. Both the AWS Customer Agreement and Anthropic's Bedrock Commercial Terms confirm no ownership claims by either party.
Data Retention
Zero retention is the default — AWS does not store or log prompts or completions unless the developer explicitly enables logging (e.g., to CloudWatch or S3). This is the strongest default posture of any major AI API.
Security Measures
Inherits the full AWS compliance stack: SOC 2 Type II, ISO 27001, FedRAMP High, HITRUST, HIPAA (BAA available through AWS self-service), PCI DSS, and dozens of additional certifications. Customer-managed encryption keys via AWS KMS. VPC endpoints for private network access. Full regional data residency across all AWS regions. IAM-based access control and CloudTrail audit logging.
Your Rights & Control
Full AWS data management capabilities including programmatic deletion, access controls, and compliance tooling.
Special Considerations
Bedrock is the recommended path for developers who need Claude's capabilities combined with the most comprehensive compliance infrastructure. Ideal for healthcare, legal, financial services, and government applications. The self-service BAA process through AWS is simpler than negotiating directly with Anthropic. Pricing is usage-based and may differ from direct API pricing.
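A Bedrock request to a Claude model uses the Converse API from the AWS SDK. The sketch below only constructs the request arguments (nothing is sent); the model id is a placeholder, and region/credentials come from your AWS configuration. Note that Bedrock logs nothing by default — model invocation logging is a separate, explicit opt-in.

```python
# Sketch of a Bedrock Converse request (arguments only; nothing is sent).
# The model id is a placeholder -- look up the current Claude model ids in
# the Bedrock console. Invocation logging is OFF unless you enable it.
converse_kwargs = {
    "modelId": "anthropic.claude-example-model-id",  # placeholder
    "messages": [
        {"role": "user", "content": [{"text": "Flag any unusual indemnity terms."}]},
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
}

# To send (requires AWS credentials and Bedrock model access):
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**converse_kwargs)
```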
Claude via Azure AI Foundry
Sensitive Data
Yes
Used for Training
No
Output Ownership
User
Sensitive Data
Anthropic models accessed through Azure inherit Microsoft's compliance infrastructure — the broadest certification portfolio in the industry.
Training
Data is completely isolated from Anthropic's consumer services and never used for training by either Microsoft or Anthropic.
When Claude models are accessed through Azure AI Foundry (formerly Azure AI Studio), Microsoft's Product Terms and DPA govern data handling — not Anthropic's consumer terms. Anthropic acts as a subprocessor under Microsoft's agreements. No data is used for training by either party.
Output Ownership
Full developer ownership under Microsoft Product Terms.
Outputs belong to the developer under Microsoft's standard commercial terms. No claims from either Microsoft or Anthropic.
Data Retention
Governed by Azure's data handling policies. Standard 30-day abuse monitoring retention applies unless the customer opts out. Customer-managed retention policies available through Azure portal.
Security Measures
Inherits Azure's full compliance portfolio: SOC 2 Type II, ISO 27001, FedRAMP High, DoD IL2, HITRUST, HIPAA (BAA included by default through Microsoft Product Terms for eligible licensing), PCI DSS, and many more. Customer-managed encryption keys. Full Azure regional data residency. Azure Private Link for network isolation.
Your Rights & Control
Full Azure data management capabilities including compliance tooling, diagnostic logging, and Azure Policy enforcement.
Special Considerations
Azure AI Foundry provides the broadest compliance certification portfolio for accessing Claude models. The BAA is included by default through Microsoft Product Terms — no separate signing required. Best for organizations already in the Microsoft ecosystem or requiring the most extensive regulatory compliance coverage. Note: as of January 7, 2026, Anthropic became a Microsoft subprocessor for Microsoft 365 Copilot as well, but Copilot access and direct Azure AI Foundry access are distinct products with different governance.
FAQ: Claude
Does Claude train on my inputs?
Claude has multiple tiers with different training policies. The Enterprise / HIPAA-Ready tier does not train on inputs: data is never used for training and is governed by strict enterprise-grade confidentiality agreements. Free and consumer tiers train on inputs by default unless the user opts out. See the full tier breakdown above.
Can I use Claude with confidential or client data?
Claude can be used with confidential or client data, but only at its strongest tiers. Enterprise / HIPAA-Ready: supports HIPAA compliance with signed BAAs and offers Zero Data Retention (ZDR) options for the Messages API and Token Counting API. Consumer tiers should not be used with confidential material.
Who owns the output I generate with Claude?
Output ownership for Claude varies by tier. Enterprise / HIPAA-Ready: Full organizational ownership of all data assets with indemnification protections.
What is Claude's data retention policy?
Claude retention policies vary by tier. Enterprise / HIPAA-Ready: retention is highly configurable, and Enterprise administrators can set custom retention windows. For API-level ZDR details (Messages API, Batch API, Files API), see the API tiers above.
Which Claude tier is safest for professional or regulated use?
The Enterprise / HIPAA-Ready tier of Claude is the strongest option for professional or confidential use. The HIPAA-ready offering requires a specific BAA and configuration. It is available only on the Enterprise tier and through first-party API usage. The BAA explicitly excludes 'other beta or chat products, features, or integrations' — meaning Claude CoWork, Claude in Excel, and Claude in PowerPoint are NOT covered even on Enterprise plans. For HIPAA-regulated workflows, use only the core Enterprise chat interface or the API directly.
Does Claude meet ABA Model Rule 1.6 confidentiality for lawyers handling client data?
Yes, at the strongest tier. Use the Enterprise / HIPAA-Ready tier of Claude. See the AI Privacy Guide at https://hoaglaw.ai/resources/ai-privacy-guide for the full comparison.
Need an AI-aware contract review or governance policy?
Hoag Law.ai builds AI-aware MSAs, DPAs, and internal governance frameworks for startups, flat-rate from $2,500/month. If you're evaluating Claude for your team, let's talk.
Book a free call