AI Disclosure | Transparency Requirements | User Notification | Automated Decision Rights | Accountability
This AI Disclosure & Transparency Policy for [Company / Service], effective [Date], establishes requirements for disclosing AI use to end users. Legal Framework: EU AI Act Art. 52 (Transparency), GDPR Art. 22 (Automated Decision-Making), US FTC AI Transparency Guidelines, FTC Act §5 (Deceptive Practices).
1.1 When Disclosure Is REQUIRED: Per EU AI Act Art. 52, disclose AI use when:
✓ Chatbot/virtual assistant: Providing customer service
✓ AI-generated content: Text, images, audio, video (not created by a human)
✓ Automated decision-making: Loan approval, hiring, content recommendation
✓ Biometric identification: Facial recognition, emotion detection
✓ Risk assessment: Credit scoring, fraud detection (per Art. 52(2))
1.2 Explicit Notification (MANDATORY): Clear disclosure that AI is involved, per FTC Act §5 (Clear & Conspicuous):
• Format: "This is powered by AI" OR an "AI-Generated" badge/notification
• Timing: Disclosed BEFORE the user engages with the AI system
• Prominence: Easily visible (not buried in small print, per GDPR Art. 12(1))
• Language: Clear, non-technical (user-friendly per FTC guidance)
1.3 Disclosure Content: Must include, per AI Act Art. 52(1):
✓ The fact that an AI system is operating (not a human)
✓ Capability/purpose: What the AI can and cannot do
✓ Limitations: Accuracy rate, error rate (if material)
✓ Human fallback: A "Request human review" option is available
2.1 Required Chatbot Notice (AT START): Per AI Act Art. 52:
EXAMPLE (MANDATORY LANGUAGE):
"You are chatting with an AI assistant. This is not a human. I can answer questions about [X], but cannot [Y]. You can request a human agent anytime." | Button: "Talk to Human"
• Placement: Message 1 (before the bot responds) per FTC guidelines
• Persistent: Visible throughout the conversation
• Switchable: One-click human transfer available (human-intervention right per GDPR Art. 22(3))
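The "message 1" requirement above can be sketched in code. This is a minimal, hypothetical illustration (the `ChatSession` class and disclosure text are assumptions, not a mandated implementation): the disclosure is prepended before the bot's first reply and is never duplicated.

```python
# Hypothetical sketch: ensure the mandatory AI disclosure is the first
# message in every chatbot session. Class and variable names are
# illustrative; the disclosure text mirrors section 2.1's example.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. This is not a human. "
    "I can answer questions about [X], but cannot [Y]. "
    "You can request a human agent anytime."
)

class ChatSession:
    def __init__(self):
        self.messages = []
        self.disclosed = False

    def bot_reply(self, text: str) -> list[str]:
        """Queue a bot reply, prepending the disclosure before message 1."""
        if not self.disclosed:
            # Disclosure is message 1, before the bot responds (section 2.1)
            self.messages.append(AI_DISCLOSURE)
            self.disclosed = True
        self.messages.append(text)
        return self.messages
```

A persistent UI banner and the "Talk to Human" button would sit outside this logic; the sketch only guarantees the disclosure's ordering within the transcript.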
2.2 Capability Disclosure: What the chatbot CAN do, per AI Act Art. 52(3):
✓ "Can help with [X] topics"
✓ "Cannot provide legal/medical advice"
✓ "Responses are AI-generated (not guaranteed accurate)"
2.3 Interaction Logging: Users must be informed, per GDPR Art. 13(1), that:
• Conversations are logged (for improvement/debugging)
• Data retention: [X days/months]
• Privacy policy link: Available in the chat footer
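One way to make the retention period in section 2.3 enforceable is to stamp each logged interaction with a deletion deadline. A minimal sketch, assuming a 90-day retention placeholder for the policy's "[X days/months]" and an in-memory list as the log store:

```python
# Hypothetical sketch: log chatbot interactions with a retention deadline
# so expired records can be purged per the stated retention period.
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # placeholder for the policy's "[X days/months]"

def log_interaction(log: list, user_msg: str, bot_msg: str) -> dict:
    """Append one user/bot exchange with timestamp and deletion deadline."""
    now = datetime.now(timezone.utc)
    record = {
        "timestamp": now.isoformat(),
        "user": user_msg,
        "bot": bot_msg,
        "delete_after": (now + timedelta(days=RETENTION_DAYS)).isoformat(),
    }
    log.append(record)
    return record

def purge_expired(log: list, now: datetime) -> list:
    """Keep only records whose retention deadline has not yet passed."""
    return [r for r in log if datetime.fromisoformat(r["delete_after"]) > now]
```

In production the same deadline field would drive a scheduled purge job against the real datastore rather than a list comprehension.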
3.1 Text Content: If generated by AI per AI Act Art. 52(1):
• Label: "[AI-Generated]" OR "[Generated with AI assistance]"
• Placement: At the beginning of the content (or near the title)
• Example: "[AI-Generated Article] This content was created using AI technology and reviewed by an editor."
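The labeling rule in section 3.1 reduces to a small formatting step. A hypothetical helper (function name and placement convention are assumptions) that prepends the correct label near the title:

```python
# Hypothetical sketch: prepend the section 3.1 label at the beginning
# of AI-produced content, choosing the "assistance" variant when a
# human substantially shaped the output.

def label_ai_content(title: str, body: str, assisted: bool = False) -> str:
    """Return the content with the AI label placed before the title."""
    label = "[Generated with AI assistance]" if assisted else "[AI-Generated]"
    return f"{label} {title}\n{body}"
```

For HTML or CMS pipelines the same choice of label would feed a template field instead of string concatenation.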
3.2 Image/Video Content: AI-generated visuals must be labeled per FTC guidance:
• Watermark: "AI Generated" text OR a visual badge (non-removable)
• Metadata: EXIF data includes an "AI-Generated" tag
• Platform: Social media captions state "AI-Generated Image"
3.3 Synthetic Media (Deepfakes): If content is AI-manipulated, per AI Act Art. 52(2):
• PROMINENT WARNING: "[AI-Synthesized/Manipulated Media]"
• Location: Video overlay + caption + description
• Example: "[SYNTHETIC] This video contains AI-generated or manipulated content depicting real people."
4.1 Decision Disclosure (GDPR MANDATORY): When AI makes a decision, per GDPR Art. 22(3) & Art. 13(2)(f):
REQUIRED LANGUAGE:
"This decision was made automatically by AI. You have the right to:
• Request human review
• Know the reasoning behind the decision
• Challenge the decision
Contact: [email/phone] to exercise these rights."
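A service returning automated decisions can attach the required notice programmatically. A minimal sketch, assuming a dict-shaped API response (the function name, field names, and contact value are illustrative, not prescribed by the policy):

```python
# Hypothetical sketch: attach the GDPR Art. 22(3) rights notice to every
# automated decision returned to a user. Field names are illustrative.

RIGHTS_NOTICE = (
    "This decision was made automatically by AI. You have the right to:\n"
    "- Request human review\n"
    "- Know the reasoning behind the decision\n"
    "- Challenge the decision\n"
    "Contact: {contact} to exercise these rights."
)

def automated_decision_response(outcome: str, reasoning: str, contact: str) -> dict:
    """Bundle outcome, reasoning, and the mandatory rights notice."""
    return {
        "outcome": outcome,
        "reasoning": reasoning,   # reasoning disclosure (see section 4.2, Art. 15)
        "automated": True,        # flags that Art. 22 rights apply
        "notice": RIGHTS_NOTICE.format(contact=contact),
    }
```

Keeping `reasoning` in the same payload makes the "know the reasoning" right satisfiable without a second request.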
4.2 User Rights (NON-WAIVABLE per Art. 22):
✓ Right to human review: Decision re-made by a human (on request)
✓ Right to explanation: Why the AI rejected/approved (reasoning disclosure per Art. 15)
✓ Right to object: Contest the decision + request reconsideration
✓ Right to data access: See the data used for the decision (per Art. 15)
✓ Response time: Reply within [30 days] (per GDPR Art. 12(3))
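Tracking the response deadline for these rights requests is a small piece of bookkeeping. A hypothetical sketch (request-type names and the 30-day figure mirror this section's placeholders; GDPR Art. 12(3) itself phrases the deadline as one month):

```python
# Hypothetical sketch: open a user rights request and compute the
# reply-by deadline from the policy's "[30 days]" placeholder.
from datetime import date, timedelta

RESPONSE_DAYS = 30  # placeholder from section 4.2

def open_rights_request(kind: str, received: date) -> dict:
    """Record a rights request of a known type with its reply deadline."""
    allowed = {"human_review", "explanation", "objection", "data_access"}
    if kind not in allowed:
        raise ValueError(f"unknown request type: {kind}")
    return {
        "type": kind,
        "received": received.isoformat(),
        "reply_by": (received + timedelta(days=RESPONSE_DAYS)).isoformat(),
    }
```

The `allowed` set simply mirrors the four rights enumerated above, so an unrecognized request fails loudly instead of being silently filed.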
4.3 High-Risk Automated Decisions: Additional disclosure per AI Act Art. 6 (High-Risk):
• Credit/loan decisions: Disclose AI involvement + appeal process
• Employment/hiring: Disclose AI screening + human review availability
• Criminal justice: Disclose AI use in risk assessment + review right
5.1 Accuracy Metrics (IF MATERIAL): Disclose performance data per FTC Act §5 (Substantiation):
• If claiming high accuracy: "[AI System X is 95% accurate]" must be substantiated
• Error rate: "[System has a 5% error rate]" must be disclosed if material
• Limitations: "[Not suitable for life-critical decisions]"
5.2 Bias & Fairness Disclosure: If the AI system may exhibit bias, per AI Act Art. 6(1) (High-Risk):
• Known limitations: "[System may perform differently across demographics]"
• Testing results: Documented fairness testing (if conducted)
• Recourse: Human review available for disputed decisions
5.3 Hallucination Warning (For LLMs): If a Large Language Model is used, per AI Act Art. 52(4):
• Warning: "[AI may generate false/misleading information]"
• Recommendation: "[Do not rely on responses without verification]"
• Placement: Visible in the UI + help section
6.1 Documentation (REQUIRED): Maintain the following per AI Act Art. 22 (Record-Keeping):
✓ AI system description (what it does, training data)
✓ Disclosure texts (proof of disclosure to users)
✓ Performance metrics (accuracy, error rates, test results)
✓ Complaints log (user complaints + resolutions)
✓ Impact assessment: DPIA if GDPR-relevant (Art. 35)
6.2 Audit Trail: Log all AI decisions per GDPR Art. 5(2):
• Decisions logged with: timestamp, inputs, reasoning, outcome
• Retention: [X years] (per regulatory requirements)
• Access: Users can request logs (transparency per Art. 15)
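The audit-trail fields listed above map naturally onto an append-only JSON Lines log. A minimal sketch under that assumption (file path, function names, and the JSONL format are illustrative choices, not requirements of the policy):

```python
# Hypothetical sketch: append-only audit log of AI decisions, recording
# timestamp, inputs, reasoning, and outcome as JSON lines (GDPR Art. 5(2)).
import json
from datetime import datetime, timezone

def log_decision(path: str, inputs: dict, reasoning: str, outcome: str) -> dict:
    """Append one decision record to the audit log file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "reasoning": reasoning,
        "outcome": outcome,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # one JSON object per line
    return entry

def read_decision_log(path: str) -> list[dict]:
    """Return all logged decisions, e.g. for an Art. 15 access request."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```

Opening in append mode keeps the trail tamper-resistant at the application level; real deployments would add access control and the retention purge on top.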
6.3 Complaint Handling: Process per AI Act Art. 20 (Right to Appeal):
• Complaint channel: Email/form available
• Response time: Within [30 days] (per GDPR standard)
• Resolution: Explain the decision, offer alternatives (escalation to a human if needed)
Law: ✓ EU AI Act + GDPR (if EU user) ✓ [US (FTC Act §5)] | Enforcement: Regulatory fines (GDPR: up to EUR 20M / 4% of global revenue; AI Act: EUR 30M / 6% of global revenue per Art. 71) + private claims (class actions possible for misleading AI)