Regulations
Which packs we audit against
Each regulation is decomposed into atomic clauses in a versioned YAML pack on GitHub. Every clause has verbatim regulation text, a deterministic rule spec, score mapping, remediation hint, and a stable 16-bit ID for on-chain anchoring.
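A clause entry might look like the following sketch. Field names here are illustrative assumptions, not the authoritative schema (which lives in regulations/schema.md); the regulation text is abridged.

```yaml
# Hypothetical clause entry -- see regulations/schema.md for the real schema.
clauses:
  - id: 0x32A1                # stable 16-bit ID used for on-chain anchoring
    article: "50(1)"
    text: >-                  # verbatim regulation text (abridged here)
      Providers shall ensure that AI systems intended to interact directly
      with natural persons are designed and developed in such a way that
      the natural persons concerned are informed that they are
      interacting with an AI system ...
    kind: code                # code | mixed | external
    rule:                     # deterministic rule spec
      checker: ai_disclosure_present
      inputs: [ui_copy]
    score:                    # score mapping
      pass: 1.0
      fail: 0.0
    remediation: >-
      Add a visible, machine-readable disclosure that the user is
      interacting with an AI system.
```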
Regulation (EU) 2024/1689 — Artificial Intelligence Act
v2024-08 · Regulation (EU) 2024/1689. Full applicability 2 August 2026. We audit Articles 5, 6 + Annex III, 9–15, 26, and 50 — the code-checkable subset.
Clauses · 25
Code · 17
External · 1
Enforcement timeline
- in_force · 2024-08-01
- prohibited · 2025-02-02
- gpai · 2025-08-02
- high_risk_v2 · 2026-08-02
- embedded_high · 2027-08-02
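The phase dates in the timeline can be queried programmatically. A minimal sketch, assuming ISO-date strings; the helper name and shape are illustrative, not the repo's API:

```typescript
// EU AI Act enforcement phases, as listed in the timeline.
const phases: Record<string, string> = {
  in_force: "2024-08-01",
  prohibited: "2025-02-02",
  gpai: "2025-08-02",
  high_risk_v2: "2026-08-02",
  embedded_high: "2027-08-02",
};

// Which phases are already applicable on a given date?
// ISO-8601 date strings compare correctly as plain strings.
function activePhases(onDate: string): string[] {
  return Object.entries(phases)
    .filter(([, start]) => start <= onDate)
    .map(([name]) => name);
}
```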
All 25 clauses
- 5(1)(a) · Subliminal techniques distorting behaviour · code
- 5(1)(b) · Exploiting vulnerabilities (age, disability, socio-economic) · code
- 5(1)(c) · Social scoring leading to detrimental treatment · mixed
- 5(1)(d) · Predictive policing solely from profiling · code
- 5(1)(e) · Untargeted facial image scraping for face databases · code
- 5(1)(f) · Emotion recognition in workplace and education · code
- 5(1)(g) · Biometric categorisation by protected attributes · code
- 5(1)(h) · Real-time remote biometric identification in public spaces · external
- 9 · Risk management system established, implemented, documented · mixed
- 10 · Data and data governance practices documented · mixed
- 11 · Technical documentation drawn up before placing on market · mixed
- 12(1) · Automatic recording of events over the lifetime · code
- 12(2) · Logging ensures traceability appropriate to risk · mixed
- 13 · Transparent operation and instructions for use · mixed
- 14(1) · Effective human oversight designed and built-in · code
- 14(4)(d) · Interrupt / stop function reachable by overseer · code
- 14(4)(e) · Ability to override / reverse the system's output · code
- 15(1) · Appropriate level of accuracy declared and tested · code
- 15(4) · Resilience to errors, faults, inconsistencies · code
- 15(5) · Cybersecurity measures appropriate to circumstances · code
- 26(6) · Deployer log-retention capability supported · mixed
- 50(1) · Users informed they are interacting with an AI · code
- 50(2) · AI-generated content marked as such, machine-readable · code
- 50(3) · Emotion recognition / biometric categorisation disclosure · code
- 50(4) · Deepfake content labelled as artificially generated · code
NIST AI Risk Management Framework 1.0 (NIST AI 100-1)
v1.0 · NIST AI 100-1 (January 2023). Voluntary US framework; the de facto reference for enterprise procurement. We audit code-mappable subcategories across GOVERN, MAP, MEASURE, MANAGE.
Clauses · 10
Code · 6
External · 0
Enforcement timeline
- voluntary · 2023-01-26
All 10 clauses
- GOVERN 1.4 · Risk management process documented and accountable · mixed
- GOVERN 1.5 · Ongoing monitoring and periodic review of risk management · mixed
- MAP 1.1 · Context of use established and understood · mixed
- MAP 3.4 · Risks and benefits to people identified · mixed
- MEASURE 2.3 · AI system performance evaluated and documented · code
- MEASURE 2.7 · Security and resilience evaluated · code
- MEASURE 2.8 · Privacy risk of the AI system evaluated · code
- MEASURE 2.11 · Fairness and bias evaluated · code
- MANAGE 2.3 · Mechanisms to supersede or deactivate AI systems · code
- MANAGE 4.1 · Post-deployment monitoring, appeal and override, change management · code
Versioning
Each pack is named {regulation-id}-{version}.yaml. The regulationsVersion anchored on-chain is the sha256 of the YAML file content at audit time. Future revisions ship as new files (e.g. eu-ai-act-2024-08.yaml → eu-ai-act-2025-03.yaml); clause IDs stay stable across versions, so re-audit diffs remain comparable.
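Since the anchored version is just the sha256 of the raw file bytes, it can be reproduced with Node's standard crypto module. A minimal sketch; the helper name is illustrative, not the repo's API:

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// regulationsVersion = sha256 hex digest of the raw pack content
// at audit time. (packVersionHash is an illustrative name.)
function packVersionHash(content: string | Buffer): string {
  return createHash("sha256").update(content).digest("hex");
}

// Usage against a pack file:
//   packVersionHash(readFileSync("regulations/eu-ai-act-2024-08.yaml"))
```

Note that the hash covers the byte content, so even a whitespace-only change to the YAML produces a new regulationsVersion.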
Planned
- ISO/IEC 42001 — AI management system standard. Overlaps with NIST RMF on the code side; deferred to V1.5.
- UK AI Safety Institute frameworks — alignment guide + evaluation protocols.
- China’s GenAI Measures — generative-AI service provider obligations.
- Japan’s AI Promotion Act — operator obligations once finalised.
Contributing a clause
Edit the YAML pack in regulations/, add a rule implementation to src/pipeline/checkers/rules.ts, and open a PR. The schema is documented in regulations/schema.md.
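As a rough illustration of what a rule implementation might look like, here is a hedged sketch for the Article 50(1) disclosure clause. The interfaces, names, and detection heuristic below are assumptions for illustration only; the real shapes are defined by src/pipeline/checkers/rules.ts and regulations/schema.md.

```typescript
// Hypothetical checker shapes -- the actual interface lives in
// src/pipeline/checkers/rules.ts.
interface RuleResult {
  clauseId: string;
  pass: boolean;
  evidence: string[];
}

type RuleFn = (artifacts: Record<string, string>) => RuleResult;

// Illustrative rule for Art. 50(1): users must be informed they are
// interacting with an AI system. Checks user-facing copy for a disclosure.
const rule_50_1: RuleFn = (artifacts) => {
  const ui = artifacts["ui_copy"] ?? "";
  const disclosed = /\bAI\b|artificial intelligence/i.test(ui);
  return {
    clauseId: "50(1)",
    pass: disclosed,
    evidence: disclosed ? ["AI disclosure string found in ui_copy"] : [],
  };
};
```

A real checker would be deterministic per the clause's rule spec rather than a single regex, but the input/output shape above conveys the contribution pattern.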