EU AI Act Article 50 — 90 Days Since the Banner Shipped
Why this post exists
In early 2026 we shipped a short transparency banner on every page of our product where an AI-driven feature is exposed to users — bot triage suggestions, audit summaries, AI Magnet recommendations. The banner is our reading of Article 50 of Regulation (EU) 2024/1689 (the EU AI Act): if you're talking to an AI, you should know it; if you're being shown AI-generated output, that should be labelled.
Ninety days later, the banner is still there, the wording has changed twice, and we have some operational notes worth sharing for other small operators thinking about Article 50 in production.
This is not legal advice. We're a sole-trader (JDG) operator without errors-and-omissions cover, and we read Article 50 with the conservative posture that operator profile demands. Your counsel's reading may differ.
What Article 50 actually says
Article 50 lives in Chapter IV of the Act ("Transparency obligations for providers and deployers of certain AI systems") and entered into force on 1 August 2024. The operative obligations apply from 2 August 2026.
The provisions that matter for most product surfaces:
- Article 50(1) — providers must design AI systems intended to interact directly with natural persons so the natural person is informed they are interacting with an AI system, unless that fact is "obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect."
- Article 50(3) — deployers of an emotion-recognition or biometric-categorisation system must inform the natural persons exposed to it.
- Article 50(4) — output that is generated or manipulated by an AI system and constitutes a deep fake must be disclosed; AI-generated text published to inform the public on matters of public interest must likewise be disclosed (with a carve-out where the text has undergone human review and a person holds editorial responsibility).
The Act exempts disclosure only where the AI interaction is "obvious" to a reasonably well-informed user; everywhere else, disclosure must be explicit. Reasonable people can disagree on where that line falls when the AI is suggesting a triage label inside a dashboard. We chose explicit.
What we shipped
The banner is a single short sentence rendered as a tooltip-style affordance next to every AI-driven UI element. It says:
This output is generated by an AI assistant. Treat it as a suggestion, not a decision.
That phrasing took two iterations. The first version named the model family. We took the model name out for two reasons:
- The model can change; the disclosure outlives any specific model contract.
- Naming the vendor created a marketing impression we did not intend.
The current version doesn't claim accuracy or guarantee anything — it tells the user the output is non-authoritative and that human review is expected. That framing is consistent with the "advisory-only" posture we hold elsewhere in the product (and in our DPA and DPIA).
What we measured — and what we deliberately didn't
We measured three things, because they're cheap and because they let us reason about whether the banner is doing its job:
- Banner render rate — does the banner actually appear on every AI-affected surface? Pre-commit lint and a runtime registry check answer this. Today: every registered surface either renders the banner or is explicitly carved out (with reason).
- Surface registry coverage — every place we call an AI provider lives in ai-surfaces.ts. New code paths fail the lint until they register. This is the dull part of compliance and the part nobody notices when it works.
- Operator-side false-positive review — when an admin flags an AI-suggested label as wrong, that flag gets recorded. We watch the rate as a softer signal that the system is being calibrated, not as the regulatory target.
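As a sketch of the registry idea — types, names, and entries here are illustrative, not our actual ai-surfaces.ts — the coverage check reduces to one invariant: every surface either renders the banner or documents why it doesn't.

```typescript
// Hypothetical AI-surface registry. Each entry either renders the disclosure
// banner or carries an explicit carve-out reason.
type AiSurface = {
  id: string;
  rendersBanner: boolean;
  carveOutReason?: string; // required whenever rendersBanner is false
};

// Illustrative entries (surface ids assumed, not our real registry).
const surfaces: AiSurface[] = [
  { id: "bot-triage-suggestions", rendersBanner: true },
  { id: "audit-summaries", rendersBanner: true },
  { id: "batch-export", rendersBanner: false, carveOutReason: "no natural-person UI" },
];

// Coverage check: return every surface that neither renders the banner
// nor documents why it is carved out. A lint step fails on a non-empty result.
function uncoveredSurfaces(list: AiSurface[]): string[] {
  return list
    .filter((s) => !s.rendersBanner && !s.carveOutReason)
    .map((s) => s.id);
}
```

Wiring a check like this into pre-commit lint is what makes the "render rate" question answerable without manual auditing.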
We did not measure user perception, comprehension, or engagement with the banner. That would have required either a survey panel (costly) or per-user tracking with consent (a privacy trade we weren't willing to make for an internal-process question). The Article 50 standard is whether the user can know — not whether they always read.
What surprised us
The wording was harder than the engineering. Adding the banner component took an afternoon. Settling the wording took a month, two rewrites, and one reluctant counsel re-read of the Act. Most of the work was deciding what NOT to claim — every adjective ("accurate", "trained", "advanced") opened a small attack surface for someone to argue the disclosure itself was misleading.
The banner has to live somewhere users see it. Tooltips on hover are not a robust disclosure pattern for keyboard-only users or for the case where the AI output is consumed by a downstream automation. We replaced two hover-only banners with always-visible labels after the first 30 days.
Localization matters more than you'd think. Our Polish-language strings run about 30% longer than the English equivalents. The first version of the banner overflowed its container in Polish and was clipped on the second-most-important surface. A bug, but a compliance-relevant bug — clipped disclosure isn't disclosure. We now run a paired-locale lint that fails CI when an EN string ships without its PL counterpart.
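The paired-locale lint reduces to a set comparison over string catalogues. A minimal sketch, with assumed key names and an illustrative Polish translation (not our production strings):

```typescript
// English disclosure strings (key names assumed for illustration).
const en: Record<string, string> = {
  "banner.body":
    "This output is generated by an AI assistant. Treat it as a suggestion, not a decision.",
};

// Polish counterparts. This translation is illustrative, not our shipped copy.
const pl: Record<string, string> = {
  "banner.body":
    "Ten wynik został wygenerowany przez asystenta AI. Traktuj go jako sugestię, a nie decyzję.",
};

// CI gate: every EN key must have a non-empty PL counterpart.
// A non-empty result fails the build.
function missingPlKeys(
  enStrings: Record<string, string>,
  plStrings: Record<string, string>,
): string[] {
  return Object.keys(enStrings).filter(
    (k) => !(k in plStrings) || plStrings[k].trim() === "",
  );
}
```

The same check generalises to any locale pair; the point is that parity is enforced by CI, not by reviewer memory.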
"AI-generated" is a moving target. Our audit assistants summarize findings; the summary is AI-generated even when the underlying findings are deterministic rule output. We had to decide whether that counts as "AI-generated text on a matter of public interest" under Article 50(4). We took the conservative read: anything an LLM emits, even if it's a summary of rule output, gets the disclosure.
What we'd change
If we were starting over today:
- We'd write the wording first, in both languages, and only then build the component. The wording dictates the dimensions; the component is incidental.
- We'd put the disclosure in the same DOM region as the AI output, never as a separate tooltip. Adjacency makes the relationship unambiguous.
- We'd keep a single source of truth for the AI-surface list from day one. Spreading registrations across files cost us a week of grep-archaeology when we audited coverage.
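The adjacency point can be sketched as a single render helper — markup, class names, and the function itself are assumptions for illustration, not our actual component:

```typescript
// Render AI output and its disclosure in ONE container, so the relationship
// is unambiguous for sighted users, screen readers, and keyboard users alike.
// No tooltip, no hover dependency: the label is always in the DOM, always visible.
function renderAiOutput(outputHtml: string, disclosure: string): string {
  return (
    `<div class="ai-output">` +
    outputHtml +
    `<p class="ai-disclosure">${disclosure}</p>` +
    `</div>`
  );
}
```

Because the disclosure travels inside the same element as the output, any downstream consumer that copies or embeds the output region carries the label with it.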
A minimum set for other small operators
We are not a model for everyone — our operator profile (sole-trader, no insurance) makes us conservative by structural necessity. With that caveat, the disclosure elements we'd consider non-negotiable:
- Explicit AI-presence statement ("This output is generated by an AI assistant" or equivalent in the user's language)
- Authority disclaimer ("Treat as suggestion, not decision" — or your version)
- A registry of every surface where AI emits user-facing output, enforced by something more durable than a comment
- Locale parity for every language you serve
- A change-log on the disclosure wording itself, so when counsel reads the wording in 2027 they can see when and why it shifted
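The wording change-log in the last bullet needs very little structure. A minimal shape — dates and the first entry's text are hypothetical; the last entry is the wording quoted earlier in this post:

```typescript
// One record per disclosure-wording revision: when it shipped, the exact
// string, and why it changed.
type WordingRevision = {
  date: string;      // ISO date the wording shipped (dates here are hypothetical)
  text: string;      // the exact disclosure string
  rationale: string; // why it changed
};

const wordingLog: WordingRevision[] = [
  {
    date: "2026-01-05",
    // First-version text is hypothetical; it named the model family.
    text: "Generated by [model family]. Treat it as a suggestion.",
    rationale: "initial wording; named the model family",
  },
  {
    date: "2026-02-10",
    text: "This output is generated by an AI assistant. Treat it as a suggestion, not a decision.",
    rationale: "dropped the model name: models change, and naming the vendor implied an endorsement",
  },
];

// The banner always renders the newest entry.
function currentWording(log: WordingRevision[]): string {
  return log[log.length - 1].text;
}
```

Keeping this as data (rather than prose in a wiki) means the banner component and the audit trail can never drift apart.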
Note that none of these require a banner specifically. Article 50 is a disclosure obligation, not a UI specification. A footnote, an inline label, or a visible pre-action confirmation can all carry the disclosure — what matters is that the user has the information without effort.
What we're watching next
Two open questions for the rest of 2026:
- The Code of Practice on Transparency of AI-Generated Content, expected in final form by June 2026. We expect it to clarify the threshold for "matters of public interest" in Article 50(4) and to give some structure to how AI-generated text in published outputs should be marked. We'll re-read our wording against whatever it says.
- National enforcement posture. The Act is uniform; how member-state authorities prioritise enforcement is not. Polish PUODO has been active on the GDPR side and the AI Act sits adjacent — we're watching their early enforcement pattern as a leading indicator.
The banner stays. The wording will keep changing. That's the work.
This post reflects HumanKey's operational experience implementing Article 50 transparency obligations for the AI surfaces in our product. It is not legal advice. For binding interpretation of the EU AI Act, consult counsel qualified in your jurisdiction. Primary sources: Regulation (EU) 2024/1689 (EUR-Lex), specifically Articles 50 and 53 and Recital 132.