PREAMBLE & LEGAL FRAMEWORK
This Agreement establishes compliance obligations for [AI System Name], classified as [Prohibited / High-Risk / General-Purpose / Low-Risk] under Regulation (EU) 2024/1689 (AI Act), which entered into force on August 1, 2024.
Provider ("you") warrants compliance with Article 5 (Prohibited Practices), Annex III (High-Risk Systems), and Chapter V, Articles 51-56 (General-Purpose AI), applicable as of: [February 2, 2025 / August 2, 2025 / August 2, 2026].
1.1 Absolute Prohibition. Provider certifies that the AI System does NOT employ:
(a) Subliminal Manipulation: techniques beyond a person's consciousness, or purposefully manipulative techniques, that materially distort behaviour (Article 5(1)(a));
(b) Exploitation of Vulnerability: exploiting vulnerabilities due to age, disability, or social or economic situation (Article 5(1)(b));
(c) Social Scoring: evaluating or classifying persons based on social behaviour or personal characteristics, leading to detrimental or disproportionate treatment (Article 5(1)(c));
(d) Crime Prediction Based on Profiling: assessing the risk that a person will commit a criminal offence based solely on profiling or personality traits (Article 5(1)(d));
(e) Facial Image Scraping: untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases (Article 5(1)(e));
(f) Emotion Recognition: inferring emotions in workplace or education settings, except for medical or safety reasons (Article 5(1)(f));
(g) Biometric Categorization: categorizing individuals by biometric data to deduce race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation (Article 5(1)(g));
(h) Real-Time Remote Biometric Identification: in publicly accessible spaces for law enforcement purposes, subject to narrow statutory exceptions (Article 5(1)(h)).
VIOLATION PENALTY: Up to EUR 35 million or 7% of total worldwide annual turnover (whichever is higher) per Article 99(3) AI Act. Immediate withdrawal from the EU market may be ordered.
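The "whichever is higher" rule above can be made concrete with a short illustrative calculation. This is a sketch only; the function name is ours, and the figures (EUR 35 million / 7% of worldwide annual turnover) are those stated in this clause.

```python
# Illustrative sketch of the "whichever is higher" penalty ceiling for
# prohibited-practice violations. Figures are taken from the clause above;
# the function name is ours, not from the Regulation.

def max_prohibited_practice_fine(global_annual_turnover_eur: float) -> float:
    """Return the statutory ceiling: the higher of EUR 35M or 7% of turnover."""
    FIXED_CEILING = 35_000_000.0
    TURNOVER_CEILING = 0.07 * global_annual_turnover_eur
    return max(FIXED_CEILING, TURNOVER_CEILING)

# A provider with EUR 1 billion turnover: 7% = EUR 70M, exceeding EUR 35M.
print(max_prohibited_practice_fine(1_000_000_000))  # 70000000.0
# A provider with EUR 100M turnover: 7% = EUR 7M, so the EUR 35M ceiling applies.
print(max_prohibited_practice_fine(100_000_000))    # 35000000.0
```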
2.1 System Classification. If this System is classified as High-Risk (Annex III), Provider shall implement the mandatory requirements of Articles 8-15 and the provider obligations of Articles 16-23:
☐ Biometric Identification / Emotion Recognition
☐ Critical Infrastructure (Energy, Water, Transport, Utilities)
☐ Education & Professional Training (Admission, Performance Assessment)
☐ Employment (Recruitment, Promotion, Termination, Performance Monitoring)
☐ Access to Public Services (Benefits, Housing, Education)
☐ Credit Scoring / Creditworthiness Assessment
☐ Law Enforcement / Crime Prediction
☐ Migration / Asylum / Visa Processing
2.2 Risk Management System (Article 9). Provider shall establish and maintain:
(a) Documented risk identification methodology;
(b) Mitigation measures for foreseeable risks (accuracy, robustness, cybersecurity);
(c) Post-market monitoring plan for at least [X years];
(d) Incident reporting to competent authorities within the Article 73 deadlines (see Section 6.2).
2.3 Data Governance (Article 10). Training, validation, and testing data must:
(a) Be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete;
(b) Include appropriate safeguards to detect and correct biases and data quality issues;
(c) Be retained with metadata for post-market surveillance per Article 12;
(d) Comply with GDPR Article 6 (lawful basis) for any personal data processing.
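One way a provider might evidence the representativeness point above is a simple comparison of observed group shares in a dataset against a reference population. The following is an illustrative sketch only; the tolerance value and group labels are our assumptions, not requirements of the Act.

```python
# Illustrative sketch: flag groups whose share in a dataset deviates from a
# reference population by more than a chosen tolerance. The 5% tolerance and
# group labels are assumptions for illustration.
from collections import Counter

def representation_gaps(samples, reference_shares, tolerance=0.05):
    """Return groups whose observed share differs from the reference share
    by more than `tolerance`, mapped to the (rounded) deviation."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Illustrative data: group "B" is under-represented against a 50/50 reference.
data = ["A"] * 80 + ["B"] * 20
print(representation_gaps(data, {"A": 0.5, "B": 0.5}))  # {'A': 0.3, 'B': -0.3}
```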
2.4 Technical Documentation (Article 11). Provider shall create and maintain:
(a) Detailed description of the AI System architecture and logic;
(b) Training/validation/testing dataset information;
(c) Performance metrics: accuracy, precision, recall, F1-score, fairness indicators;
(d) Limitations and specific contexts for safe use;
(e) Version control and change logs.
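The performance metrics named in clause 2.4(c) can all be derived from a binary confusion matrix. The sketch below shows the standard formulas; the example counts are illustrative, and fairness indicators (also named in the clause) would require additional group-level data.

```python
# Illustrative computation of the performance metrics listed in clause 2.4(c)
# from a binary confusion matrix. Example counts are hypothetical.

def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical evaluation run: 90 TP, 10 FP, 5 FN, 95 TN.
m = classification_metrics(tp=90, fp=10, fn=5, tn=95)
print({k: round(v, 3) for k, v in m.items()})
```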
2.5 Logging & Record Retention (Article 12). Automatic logging of:
(a) System inputs, outputs, and decisions;
(b) User interactions and modifications;
(c) Incidents or anomalies detected;
(d) Retention: minimum [3 years] (the Act itself requires at least six months per Article 19; longer periods may apply for safety-critical systems).
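A minimal sketch of the logging duty in clause 2.5: each decision is written as one structured, timestamped record. The field names and the in-memory sink are illustrative assumptions; a real deployment would use append-only storage with the contractual retention period.

```python
# Illustrative sketch of clause 2.5 logging: one structured JSON record per
# decision. Field names and the in-memory `log_sink` are assumptions; real
# systems would write to durable, append-only storage.
import json
import time
import uuid

def log_decision(inputs: dict, output, decision: str, log_sink: list) -> dict:
    """Append a timestamped record of one system decision to `log_sink`."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "inputs": inputs,
        "output": output,
        "decision": decision,
    }
    log_sink.append(json.dumps(record, sort_keys=True))
    return record

sink = []
log_decision({"applicant_id": "demo-001", "score_features": [0.2, 0.7]},
             output=0.81, decision="refer_to_human_review", log_sink=sink)
print(len(sink))  # 1
```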
2.6 Transparency & User Information (Article 13). Users must be clearly informed:
(a) That they are interacting with an AI System;
(b) The purpose, capabilities, and limitations of the System;
(c) Likelihood of significant consequences from System decisions;
(d) Available remedies and complaint procedures.
2.7 Human Oversight (Article 14). System shall enable human review:
(a) Human reviewer can understand System reasoning;
(b) Human reviewer can override decisions (where applicable);
(c) For decisions with legal/similarly significant effects, meaningful human review MANDATORY.
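The oversight rules in clause 2.7 can be expressed as a simple routing gate: legally significant decisions always go to a human reviewer, and (as an additional illustrative policy, not a requirement of the clause) low-confidence automated decisions are escalated too.

```python
# Illustrative routing gate for clause 2.7. The mandatory rule is that
# legally/similarly significant decisions get human review; the confidence
# threshold for other decisions is our own illustrative addition.

def route_decision(score: float, legally_significant: bool,
                   auto_threshold: float = 0.95) -> str:
    """Return 'human_review' or 'automated' for a model decision."""
    if legally_significant:
        return "human_review"   # mandatory meaningful human review
    if score >= auto_threshold:
        return "automated"
    return "human_review"       # low confidence: escalate (illustrative policy)

print(route_decision(0.99, legally_significant=True))   # human_review
print(route_decision(0.99, legally_significant=False))  # automated
print(route_decision(0.60, legally_significant=False))  # human_review
```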
2.8 Accuracy, Robustness & Cybersecurity (Article 15). Provider shall:
(a) Achieve documented accuracy thresholds: [X%] (per specification);
(b) Test resilience to adversarial attacks and data corruption;
(c) Implement cybersecurity measures against model extraction, poisoning, evasion attacks;
(d) Conduct red-teaming for systemic risks (if a General-Purpose AI model with systemic risk, per Article 55).
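As a minimal stand-in for the resilience testing in clause 2.8(b), one can measure how a model's accuracy degrades when inputs are perturbed. The toy threshold classifier and noise level below are illustrative assumptions; real robustness testing would use the actual model and domain-appropriate perturbations (including adversarial ones).

```python
# Illustrative resilience check for clause 2.8(b): accuracy under Gaussian
# input noise. The toy classifier and noise level are assumptions; real
# testing would target the actual model with adversarial perturbations.
import random

def predict(x: float) -> int:
    """Toy threshold classifier standing in for the AI System."""
    return 1 if x >= 0.5 else 0

def accuracy_under_noise(samples, labels, noise_std: float,
                         trials: int = 200, seed: int = 0) -> float:
    """Mean accuracy over `trials` noisy repetitions of the evaluation set."""
    rng = random.Random(seed)
    correct = total = 0
    for _ in range(trials):
        for x, y in zip(samples, labels):
            if predict(x + rng.gauss(0.0, noise_std)) == y:
                correct += 1
            total += 1
    return correct / total

xs = [0.1, 0.2, 0.8, 0.9]
ys = [0, 0, 1, 1]
clean = accuracy_under_noise(xs, ys, noise_std=0.0)
noisy = accuracy_under_noise(xs, ys, noise_std=0.3)
print(clean, noisy)  # clean is 1.0; noisy accuracy is lower
```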
3.1 Conformity Assessment. Before placing on the market, Provider shall:
(a) High-Risk Systems: undergo conformity assessment (internal control or notified-body assessment, per Article 43) against Articles 8-15;
(b) GPAI Models: document compliance with Articles 53-55 (technical documentation, copyright policy, training data summary, systemic-risk measures);
(c) High-Risk Systems: issue an EU Declaration of Conformity per Article 47 and Annex V.
3.2 CE Marking (Article 48). High-Risk Systems shall bear CE marking:
(a) Certificate of Conformity executed by: [Provider / Authorized Representative];
(b) Compliance affirmed with Articles 8-15 and Article 72 (post-market monitoring);
(c) CE marking maintains validity for [5 years] from issuance.
3.3 EU Database Registration (Article 49). Provider shall register High-Risk Systems:
(a) In the EU database (Article 71) before placing the System on the market or putting it into service;
(b) Providing: System name, intended purpose, provider contact, conformity assessment details;
(c) Updates required within 30 days of material changes.
4.1 GPAI Definition & Scope. This System is General Purpose AI if it:
(a) Is a foundation model (e.g., LLM) designed to perform a wide range of tasks;
(b) Exhibits significant generality, performing competently on tasks beyond those it was specifically trained for;
(c) Can be fine-tuned or prompted for various applications.
4.2 Technical Documentation (Article 53 and Annex XI). Provider shall document:
(a) Model architecture, training procedure, and intended use cases;
(b) Training data summary (domains, languages, size, provenance);
(c) Downstream use cases Provider tested or recommends avoiding;
(d) Known limitations, risks, and failure modes.
4.3 Copyright Compliance (Article 53(1)(c)). Provider certifies:
(a) A policy is in place to comply with the text and data mining provisions of Copyright Directive (EU) 2019/790;
(b) Rights holders' opt-out requests honored (where technically feasible);
(c) Summary of copyright measures included in technical documentation.
4.4 Systemic Risk Measures (Article 55). If the GPAI model presents systemic risk, Provider shall:
(a) Conduct red-teaming before release;
(b) Test for biological weapon design assistance, cyberattack tools, jailbreaks;
(c) Report serious incidents and possible corrective measures to the EU AI Office and, as relevant, national competent authorities.
5.1 AI Disclosure (Article 50). Users must be transparently informed:
(a) Chatbots/Conversational AI: Must disclose "This is AI-generated" at start of interaction;
(b) Deepfakes/Synthetic Media: Must bear watermark or metadata indicating synthetic origin;
(c) AI-Generated Content: Creator must label as AI-generated (e.g., "Image created by DALL-E");
(d) Emotion Recognition Systems: Must disclose in user interface.
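The labelling duties in 5.1(b)-(c) can be prototyped as a machine-readable sidecar label attached to generated content. This is an illustrative sketch with assumed field names; production systems would more likely embed provenance via an established standard such as C2PA rather than an ad-hoc JSON file.

```python
# Illustrative sketch of 5.1(b)-(c): a JSON sidecar label marking content as
# AI-generated. Field names are assumptions; real deployments would typically
# use a provenance standard such as C2PA embedded in the content itself.
import json

def label_synthetic(content_path: str, generator: str) -> str:
    """Return a JSON label declaring `content_path` to be AI-generated."""
    label = {
        "content": content_path,
        "synthetic": True,
        "generator": generator,
        "disclosure": "This content was generated by an AI system.",
    }
    return json.dumps(label, sort_keys=True)

print(label_synthetic("banner.png", generator="example-image-model"))
```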
5.2 Impact Assessment Transparency (GDPR Article 22 Link). For automated decisions with legal or similarly significant effects:
(a) Provider shall conduct a Data Protection Impact Assessment (DPIA) per GDPR Article 35;
(b) Individuals have the right to human intervention, to express their point of view, and to contest the decision per GDPR Article 22;
(c) Bias monitoring and audit trail maintained.
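One of the simplest indicators used for the bias monitoring mentioned in 5.2(c) is the demographic parity difference: the gap in positive-outcome rates between groups. The metric choice and example data below are illustrative assumptions, not requirements of this clause.

```python
# Illustrative bias-monitoring indicator for 5.2(c): demographic parity
# difference between two groups. Metric choice and data are assumptions.

def demographic_parity_difference(outcomes, groups) -> float:
    """Positive-outcome rate of the first group (sorted by label) minus
    that of the second; 0.0 means equal selection rates."""
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    a, b = sorted(rates)
    return rates[a] - rates[b]

# Group "A": 3/4 approved; group "B": 1/4 approved -> difference of 0.5.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```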
6.1 EU AI Office & Market Surveillance. Provider acknowledges oversight by the EU AI Office (for general-purpose AI models) and by the national market surveillance authorities (MSAs) designated under Article 70.
6.2 Incident Reporting (Article 73). Provider shall report serious incidents to the MSA immediately upon establishing a causal link (or its reasonable likelihood), and in any event within:
(a) 15 days of awareness for serious incidents generally;
(b) 10 days in the event of death;
(c) 2 days for widespread infringement or serious and irreversible disruption of critical infrastructure;
(d) Report to competent authority: [Authority Name / Contact].
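The reporting windows above translate into simple deadline arithmetic from the date of awareness. The sketch below is illustrative; edge cases (time zones, when the clock starts, holiday rules) are a matter for counsel, not this example.

```python
# Illustrative deadline calculator for the clause 6.2 reporting windows.
# Day counts mirror the clause; calendar edge cases are out of scope.
from datetime import date, timedelta

DEADLINE_DAYS = {"general": 15, "death": 10, "critical_infrastructure": 2}

def latest_report_date(awareness: date, category: str) -> date:
    """Latest permissible reporting date for an incident category."""
    return awareness + timedelta(days=DEADLINE_DAYS[category])

print(latest_report_date(date(2025, 3, 1), "death"))    # 2025-03-11
print(latest_report_date(date(2025, 3, 1), "general"))  # 2025-03-16
```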
6.3 Corrective Actions (Article 74). Upon MSA request, Provider shall:
(a) Modify or withdraw System from market;
(b) Notify end-users of risks and corrective measures;
(c) Issue public statements if safety concerns identified;
(d) Preserve evidence and provide inspection access.
| Violation Category | Maximum Penalty | Article |
| --- | --- | --- |
| Prohibited Practices (Art. 5) | EUR 35M or 7% global turnover | Article 99(3) |
| Non-Compliance with Other Obligations (incl. Arts. 8-15, 16-27, 50) | EUR 15M or 3% global turnover | Article 99(4) |
| Incorrect, Incomplete, or Misleading Information to Authorities | EUR 7.5M or 1% global turnover | Article 99(5) |
Note: "Turnover" = total worldwide annual turnover of the preceding financial year. In each case the higher of the two amounts applies (for SMEs and start-ups, the lower).
8.1 Effective Date: [Agreement Date]
8.2 Compliance Deadlines:
- Prohibited Practices: February 2, 2025
- GPAI Requirements: August 2, 2025
- High-Risk Systems (Annex III): August 2, 2026
- High-Risk Systems (Annex I): August 2, 2027
8.3 Provider Certification: By executing this Agreement, Provider certifies full compliance with all applicable provisions as of effective date.
AI PROVIDER:
Company: [Legal Entity Name]
By: _________________________ Date: ________________
Name (print): _________________________ Title: _________________
AUTHORIZED USER/CUSTOMER:
Company/Individual: [Name]
By: _________________________ Date: ________________
Name (print): _________________________ Title: _________________
LEGAL DISCLAIMER - CRITICAL: This agreement is binding. Non-compliance with EU AI Act requirements results in administrative fines up to EUR 35 million or 7% annual global revenue. Immediate market suspension may apply. Both parties should seek independent legal counsel before signing. This document does not constitute legal advice.