ai-governance · eu-ai-act · eu-ai-act-article-12 · compliance

EU AI Act Article 12: Logging Requirements Decoded

Nikola Kovtun · 9 min read

When engineering teams hear “EU AI Act logging requirements,” most assume the answer is “turn on verbose logging.” Article 12 is more specific than that — and more demanding. It’s not a volume requirement. It’s a structure requirement.

The full text of Article 12 runs three paragraphs. The implementation implications run much further. Here is what the law actually requires, translated into engineering terms.

TL;DR

  • EU AI Act Article 12 requires automatic, accurate logging of events for high-risk AI systems
  • Logging must enable risk identification, post-market monitoring, and regulatory inspection
  • Specific data required: usage period, reference database, input data, and identity of oversight persons
  • Logs must be tamper-evident — not just retained, but verifiable
  • Article 12 does not specify format; it specifies function — your logs must enable reconstruction of AI decisions

What Article 12 Actually Says

Article 12 of the EU AI Act is titled “Logging Capabilities.” The operative text for high-risk AI systems reads:

“High-risk AI systems shall technically allow for the automatic recording of events (‘logs’) throughout the lifetime of the system… The logging capabilities shall allow for the monitoring of the operation of the high-risk AI system with a view to detecting situations that may result in the AI system presenting a risk…”

Three phrases define the requirement:

“Automatic recording” — Logs must be generated by the system, not manually compiled. An agent that requires a human to document its actions does not meet this standard. The logging must be built into the system architecture.

“Throughout the lifetime of the system” — Logging is not a go-live checklist item. It must operate from initial deployment through decommission. System updates must not create logging gaps.

“Detecting situations that may result in risk” — This defines the functional standard. Your logs must be structured such that an anomaly, a policy violation, or an escalating risk pattern is detectable from the log data. Logs that only record successful completions fail this standard.
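To make “automatic recording” concrete, here is a minimal sketch in Python of event recording wired directly into an agent's call path. The names (record_event, logged_action, EVENT_LOG_PATH) are illustrative assumptions, not anything the Act prescribes; the point is that every action produces a structured event without human involvement, and failures are captured alongside successes so anomalies are detectable from the log data alone.

```python
import functools
import json
import time
import uuid

EVENT_LOG_PATH = "agent_events.jsonl"  # illustrative sink; production would use append-only storage

def record_event(event: dict) -> None:
    """Append one structured event at the moment it happens, with no human involvement."""
    event.setdefault("event_id", str(uuid.uuid4()))
    event.setdefault("timestamp", time.time())
    with open(EVENT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def logged_action(action_name: str):
    """Decorator: every call to an agent action emits an event, including errors,
    so risk patterns are detectable from the log alone."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                result = fn(*args, **kwargs)
                record_event({"action": action_name, "outcome": "success"})
                return result
            except Exception as exc:
                record_event({"action": action_name, "outcome": "error", "error": repr(exc)})
                raise
        return wrapper
    return decorator

@logged_action("credit_decision")
def decide(application: dict) -> str:
    # Placeholder decision logic; the decorator records the event either way.
    return "refer_to_human" if application.get("amount", 0) > 50_000 else "approve"
```

Nothing here depends on a particular framework. What matters is that the recording happens in the code path itself, not in a runbook.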

For full text, see the official EU AI Act published in the Official Journal of the European Union.

Who Article 12 Applies To

Article 12 applies to providers of high-risk AI systems. High-risk systems are defined in Annex III of the EU AI Act and cover:

  • Biometric identification and categorization
  • Critical infrastructure management
  • Education and vocational training assessment
  • Employment decision-making
  • Essential private and public services (including credit scoring)
  • Law enforcement
  • Migration and border control
  • Administration of justice

AI agents deployed in fintech (credit decisions, fraud assessment), healthcare (patient triage, scheduling), insurance (claims processing, underwriting), and legaltech (document review with legal effect, evidence analysis) are likely to qualify as high-risk depending on their function.

Even where a system doesn’t meet the high-risk threshold, Article 12-level logging is becoming a de facto standard in enterprise procurement. Enterprise customers in regulated industries increasingly ask vendors for Article 12-level logging even for general-purpose AI (GPAI) systems that fall outside the high-risk categories.

Article 12 Logging Checklist for Engineering Teams

Structural requirements

□ Automatic logging is built into the system architecture. The logging subsystem must be a first-class component of the agent architecture, not a plugin or afterthought. It must operate without human initiation.

□ Logs are generated per use period. Each session or defined use period must produce a discrete log set with identifiable boundaries. You must be able to say: “During the period [start] to [end], here are all events.” (A sketch follows at the end of this subsection.)

□ Log generation continues throughout the system lifetime. Verify that updates, model changes, and infrastructure changes do not create logging gaps. Continuous coverage is required.
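The two items above translate into a simple structural pattern: each use period produces a discrete, bounded log set that also carries the deployment metadata needed to show continuity across updates. A minimal sketch, with illustrative names such as use_period and deployment_id:

```python
import json
import time
import uuid
from contextlib import contextmanager

@contextmanager
def use_period(model_version: str, deployment_id: str, path: str = "sessions.jsonl"):
    """One discrete log set per use period: explicit start and end boundaries,
    plus the deployment metadata needed to show continuity across updates."""
    session = {
        "session_id": str(uuid.uuid4()),
        "started_at": time.time(),
        "model_version": model_version,    # a model change must not interrupt logging
        "deployment_id": deployment_id,
        "events": [],
    }
    try:
        yield session
    finally:
        session["ended_at"] = time.time()
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(session) + "\n")

# Every event inside the block belongs to an identifiable period.
with use_period(model_version="model-v3.2", deployment_id="rel-42") as s:
    s["events"].append({"action": "lookup_customer", "t": time.time()})
```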

Content requirements

□ Reference database recorded. For each inference session: which knowledge base, model version, or reference data did the system access? This must be logged with version or hash references (a combined sketch follows at the end of this subsection).

□ Input data recorded. The inputs used in each AI decision must be logged. For an agent handling customer data, this includes: which customer record, which data fields, which query or prompt. Data minimization applies to retention, not to logging: the evidence of what data was used must exist.

□ Identity of human oversight persons recorded. Where Article 14 (human oversight) applies, the identity of the person exercising oversight and the nature of that oversight must be logged. Automated-only systems must log the absence of human review and the policy basis for that exemption.

□ Decisions and their reasoning are logged. Article 12 requires logging of events relevant to risk identification. A log that records “decision made” without the reasoning that produced it cannot satisfy the risk-detection standard. The decision rationale (the policy evaluation, the risk tier, the specific rule applied) must be included.
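Taken together, the four content items above can be captured as one evidence record per decision. The sketch below shows one possible shape; the field names (reference_db_hash, oversight_person, rationale, and so on) are illustrative assumptions, since the Act prescribes the content, not the schema.

```python
import hashlib
import json
import time
from dataclasses import asdict, dataclass, field
from typing import Optional

@dataclass
class EvidenceRecord:
    session_id: str
    reference_db: str                 # which knowledge base or reference data was consulted
    reference_db_hash: str            # version or content hash, so the exact state is reconstructable
    model_version: str
    input_fields: dict                # which record, fields, and query or prompt were used
    decision: str
    rationale: dict                   # policy evaluated, rule applied, risk tier
    oversight_person: Optional[str]   # None means no human review took place
    oversight_basis: str              # why oversight happened, or the policy basis for skipping it
    timestamp: float = field(default_factory=time.time)

def snapshot_hash(reference_rows: list) -> str:
    """Content hash of the reference data actually used in this decision."""
    return hashlib.sha256(json.dumps(reference_rows, sort_keys=True).encode()).hexdigest()

record = EvidenceRecord(
    session_id="a1b2c3",
    reference_db="kb-credit-policy",
    reference_db_hash=snapshot_hash([{"rule": "max_exposure", "value": 50000}]),
    model_version="model-v3.2",
    input_fields={"customer_id": "C-881", "fields": ["income", "existing_credit"]},
    decision="refer_to_human",
    rationale={"policy": "credit-policy-v7", "rule": "max_exposure", "risk_tier": "elevated"},
    oversight_person="analyst-042",
    oversight_basis="Article 14 escalation: elevated risk tier",
)
print(json.dumps(asdict(record), indent=2))
```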

Integrity requirements

□ Logs are tamper-evident. The EU AI Act does not specify a technical mechanism, but the functional requirement is clear: logs must be verifiable. If a log record can be modified without detection, it cannot serve as audit evidence. Cryptographic signing and hash chaining satisfy this requirement.
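One way to implement tamper-evidence, sketched here with only the Python standard library: chain each record to the previous one by hash and authenticate the result with a keyed MAC. Ed25519 signatures via a library such as PyNaCl or cryptography can replace the HMAC; the hash chain is what makes silent edits detectable.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-managed-secret"  # illustrative; use a real key management service

def append_chained(log: list, record: dict) -> dict:
    """Append a record whose hash covers its content plus the previous record's hash.
    Modifying any earlier record breaks every hash that follows it."""
    prev_hash = log[-1]["record_hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    record_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    entry = {
        "record": record,
        "prev_hash": prev_hash,
        "record_hash": record_hash,
        "signature": hmac.new(SIGNING_KEY, record_hash.encode(), hashlib.sha256).hexdigest(),
    }
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute the whole chain; a tampered, reordered, or removed interior entry fails verification."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        expected_sig = hmac.new(SIGNING_KEY, expected.encode(), hashlib.sha256).hexdigest()
        if expected != entry["record_hash"] or not hmac.compare_digest(expected_sig, entry["signature"]):
            return False
        prev_hash = expected
    return True

audit_log: list = []
append_chained(audit_log, {"action": "credit_decision", "outcome": "approve"})
append_chained(audit_log, {"action": "credit_decision", "outcome": "refer_to_human"})
assert verify_chain(audit_log)
```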

□ Logs are stored separately from the production system. Logs stored only on the production system can be modified or deleted if the system is compromised. Immutable log storage (append-only systems, write-once storage, or a secondary signing service) satisfies the integrity requirement.

□ Retention period meets legal requirements. The EU AI Act requires logs to be kept for a period appropriate to the intended purpose of the system, with a minimum of six months (the retention duties for providers and deployers sit in Articles 19 and 26). Regulated industries typically require longer periods. Map your specific regulatory requirements and configure retention accordingly.
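Retention is easier to audit when it is configuration rather than convention. A sketch with illustrative values: the six-month floor comes from the Act, while the longer periods are placeholders to be replaced by your own regulatory mapping, not legal advice.

```python
from datetime import timedelta

# Illustrative mapping. The six-month floor is the Act's minimum retention for logs
# (Articles 19 and 26); the longer figures are assumed placeholders.
RETENTION_POLICY = {
    "default": timedelta(days=183),               # at least six months
    "credit_decisions": timedelta(days=365 * 5),  # assumed sector requirement; replace with yours
    "healthcare_triage": timedelta(days=365 * 10),
}

def retention_for(record_type: str) -> timedelta:
    """Resolve the retention period for a record type, falling back to the legal minimum."""
    return RETENTION_POLICY.get(record_type, RETENTION_POLICY["default"])
```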

Operational requirements

□ Logs are accessible to regulatory authorities. The Act requires that logs be made available to national competent authorities on request. This means log access must be possible without system downtime, and the format must be interpretable by non-engineering reviewers.
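Accessibility for non-engineering reviewers is largely a rendering problem. A sketch, assuming evidence records are stored as JSON lines with the illustrative field names used earlier, of a plain-language export an authority could read without tooling:

```python
import json

def export_for_review(jsonl_path: str, out_path: str) -> None:
    """Render evidence records into a plain-text summary readable without engineering tools."""
    with open(jsonl_path, encoding="utf-8") as src, open(out_path, "w", encoding="utf-8") as out:
        for line in src:
            r = json.loads(line)
            out.write(
                f"{r['timestamp']}: decision '{r['decision']}' "
                f"(rule: {r['rationale']['rule']}, risk tier: {r['rationale']['risk_tier']}); "
                f"inputs: {r['input_fields']}; oversight: {r['oversight_person'] or 'none'}\n"
            )
```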

□ Logging covers system failures and anomalies. The risk-detection standard requires that anomalies, errors, and out-of-scope behaviors appear in the logs. A system that logs only successful completions fails this requirement.

Requirement | Common implementation | Gap check
----------- | --------------------- | ---------
Automatic logging | Built into governance layer | ✅ if governance-level, ❌ if application-level only
Input data logged | Evidence record includes query params | ✅ if params captured, ❌ if only status code
Tamper-evidence | Ed25519 signatures + hash chain | ✅ if signed, ❌ if plain text
Human oversight trace | Escalation queue records | ✅ if escalation logged, ❌ if informal only
Retained 6+ months | Storage policy defined | ✅ if policy exists, ❌ if ad hoc

Common Gaps Engineering Teams Miss

Mistake 1: Logging at the application layer, not the governance layer. Application logs capture what the application did. Governance logs capture whether the agent was authorized to do it. Article 12’s risk-detection standard requires the second. If your only logs are at the application layer, you’re missing the authorization evidence.
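The difference in a sketch (the names are illustrative, not any particular product's API): the application log records that a call happened, while the governance-layer record captures whether the agent was authorized to make it and under which policy.

```python
import json
import logging
import time

app_log = logging.getLogger("application")

def authorize(agent_id: str, action: str, policy: dict) -> dict:
    """Governance-layer check: the authorization decision itself is the evidence that matters."""
    allowed = action in policy.get("permitted_actions", [])
    governance_record = {
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "policy_id": policy["policy_id"],
        "authorized": allowed,
    }
    with open("governance.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(governance_record) + "\n")
    return governance_record

policy = {"policy_id": "agent-policy-v3", "permitted_actions": ["read_customer_record"]}
decision = authorize("support-agent-7", "issue_refund", policy)
app_log.info("issue_refund called")     # application layer: what the app did
assert decision["authorized"] is False  # governance layer: whether it was allowed to do it
```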

Mistake 2: Logging outputs, not inputs. Many teams focus on logging what the AI system produced. Article 12 requires logging of input data used in the decision. The input is what the auditor needs to reconstruct causality.

Mistake 3: Treating retention as the logging requirement. Retaining logs is necessary but not sufficient. Logs that are retained but not structured for risk detection, not tamper-evident, and not accessible in interpretable format don’t meet Article 12’s functional standard.

Mistake 4: Gaps during updates. Model version changes, infrastructure upgrades, and feature releases can create logging gaps if the logging subsystem isn’t part of the deployment contract. Article 12 requires continuous coverage. Build logging continuity into your deployment process.
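One way to make that continuity part of the deployment contract, as a sketch: a post-deploy check that fails the release if a canary event cannot be written and read back. The record_event parameter stands in for whatever event writer your system uses (such as the one sketched earlier); the helper names are hypothetical.

```python
import json

LOG_PATH = "agent_events.jsonl"  # same illustrative sink as the earlier sketch

def read_last_event(path: str = LOG_PATH) -> dict:
    """Read back the most recent event from the log sink."""
    with open(path, encoding="utf-8") as f:
        return json.loads(f.readlines()[-1])

def post_deploy_logging_check(record_event) -> None:
    """Fail the release if a canary event cannot be written and read back,
    so model or infrastructure updates cannot silently open a logging gap."""
    record_event({"action": "deploy_canary", "outcome": "success"})
    if read_last_event().get("action") != "deploy_canary":
        raise RuntimeError("Logging gap detected: canary event not persisted; abort the release")
```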

How Article 12 Relates to Other EU AI Act Articles

Article 12 doesn’t operate in isolation. It connects directly to:

  • Article 9 (Risk Management) — The risks that Article 9 requires you to manage are those Article 12 logs must be capable of detecting. The logging system is the operational implementation of the risk management system.
  • Article 14 (Human Oversight) — Human oversight events triggered under Article 14 must be logged under Article 12. The log is where human oversight becomes verifiable evidence.
  • Article 17 (Quality Management) — Quality management systems required by Article 17 must include logging as a component of ongoing monitoring.
  • Annex IV (Technical Documentation) — The logging system design must be documented in the technical documentation required by Annex IV.

FAQ

Q: Does Article 12 apply to AI agents used only internally, not customer-facing?

High-risk classification is based on function, not audience. An AI agent making employment decisions, handling credit applications, or managing critical infrastructure is high-risk regardless of whether it faces customers or internal staff. Review Annex III against your specific agent function.

Q: What constitutes adequate “input data” logging for an AI agent?

The standard is what would be necessary to reconstruct the decision for audit purposes. For a credit decision agent: the applicant data accessed, the time period of the query, the model version, the policy rules evaluated. For a customer service agent: the customer record, the product data, the prior interaction context. Log what a compliance auditor would need to trace causality from input to decision.

Q: Are there specific EU AI Act Article 12 format requirements?

No. Article 12 specifies function, not format. The logs must be capable of detecting risks, enabling monitoring, and supporting regulatory inspection. JSON, structured databases, and signed binary formats all satisfy the requirement if they meet the functional standard. Plain text logs without tamper-evidence do not.

Q: How long before enforcement actions under Article 12 begin?

The EU AI Act’s obligations for high-risk systems apply from August 2026. National supervisory authorities are building enforcement capacity. Early enforcement actions are likely to focus on egregious gaps: systems with no meaningful logging. Sound governance architecture now provides a defensible baseline.

Q: Can we use our existing SIEM system for Article 12 compliance?

SIEM systems designed for security event logging can satisfy some Article 12 requirements if configured correctly — particularly for event capture and retention. They typically don’t satisfy the tamper-evidence requirement without additional configuration (WORM storage, cryptographic signing). More importantly, SIEM systems don’t capture governance-layer data: policy evaluations, constitutional rule references, authorization chains. You likely need both.


By Nikola Kovtun, founder of Infracortex AI Studio. Cortex implements Article 12-compliant logging for AI agent deployments — automatic, tamper-evident, and structured for regulatory inspection. Book a call to audit your current logging against the Article 12 checklist.

See also: EU AI Act Article 9: Continuous Risk Management for AI Agents | EU AI Act Article 14: Building Practical Human Oversight | Why Your AI Agent Logs Won’t Pass an Audit

