Disclosed data-handling figures stay aligned with implementation — by build-time construction
Why this post exists
This is an operator-side transparency post, not a sales pitch. EU B2B buyers (DPOs, Heads of Compliance, Legal Counsel) are right to ask whether the retention periods we promise in our privacy documents actually match what our service does in practice. The honest answer is: yes, because every surface that displays these figures consults the same source, and a mechanical check prevents the disclosed numbers from drifting away from the implementation.
This is a specific class of compliance hygiene that goes underappreciated until it surfaces during an audit. We close it by construction.
The problem: disclosed numbers can drift from real behavior
A typical SaaS company publishes data-handling figures in several places:
- Privacy Notice — "we retain visit logs for N days after collection"
- Data Processing Agreement (DPA) — same N referenced in the retention article
- Privacy Impact Assessment (DPIA) — the same N as part of the lawful-basis assessment
- Marketing pages ("free plan includes N days of retention", "business plan includes M days")
- Downloadable PDF agreements that customers sign before procurement
- The actual service — a scheduled job that purges data older than N days
Six surfaces. One set of numbers. Easy to keep consistent on day one; easy to forget when the retention policy changes six months later in a routine product update.
Under GDPR Article 13(2)(a), the data subject must be informed of the retention period at the time of collection. Under Article 6 of the EU Unfair Commercial Practices Directive (2005/29/EC), a material claim made to consumers, which includes data subjects reading a privacy notice as natural persons, must remain truthful for as long as it is displayed. A privacy notice that says "60 days" while the service actually purges at "30 days" is a factually inaccurate disclosure, even though the shorter real retention is more privacy-protective, because the data subject's expectation was set incorrectly.
The failure modes are asymmetric: if the disclosed period is shorter than the real one, we kept data past the window the data subject was told about; if the disclosed period is longer than the real one, the published agreement misrepresents what we deliver.
Neither outcome is acceptable.
The solution: a single source of truth + build-time interpolation
The retention periods, the per-plan request limits, and the maximum lengths of certain stored fields all live in one canonical configuration in our codebase. Each tier declares its values explicitly there. Nothing else in the codebase declares them.
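A minimal sketch of what such a canonical configuration could look like. The module shape, tier names, and retention values below are illustrative placeholders, not our real figures (only the free tier's 1,000 monthly verifications appears elsewhere in this post):

```typescript
// Illustrative single source of truth for data-handling figures.
// Field names and retention values are hypothetical placeholders.
const LIMITS = {
  free:     { retentionDays: 30, monthlyVerifications: 1_000 },
  business: { retentionDays: 90, monthlyVerifications: 50_000 },
} as const;

type Plan = keyof typeof LIMITS; // "free" | "business"
```

Everything that needs one of these numbers imports this one module; nothing else in the codebase declares them.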
Every other surface that needs to display these numbers consumes them through one of two patterns:
- Server-side render: the API endpoint that drives a dashboard widget or a server-rendered legal page imports the canonical configuration directly and reads the values at request time.
- Build-time emission: for surfaces that ship as static content (the marketing pages, the MDX-based blog index, the downloadable PDFs, the EN+PL legal documents that share content), a build-time script reads the canonical configuration once and substitutes the values into each generated artifact before the page or PDF is published.
Both patterns mean the disclosed number is the implementation number, not a copy of it. There is no "second source" to drift.
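As a concrete sketch of the build-time pattern: a substitution step reads the canonical configuration and replaces placeholders in a static template before the artifact is published. The placeholder syntax, names, and values here are assumptions for illustration, not our actual pipeline:

```typescript
// Hypothetical build-time interpolation: substitute canonical values
// into a static template before the page or PDF ships.
const LIMITS: Record<string, Record<string, number>> = {
  free: { retentionDays: 30 }, // illustrative value
};

function interpolate(template: string): string {
  return template.replace(/\{\{(\w+)\.(\w+)\}\}/g, (_match, plan, key) => {
    const value = LIMITS[plan]?.[key];
    if (value === undefined) {
      // An unknown placeholder fails the build, not the reader.
      throw new Error(`unknown placeholder: ${plan}.${key}`);
    }
    return String(value);
  });
}

const notice = interpolate(
  "We retain visit logs for {{free.retentionDays}} days after collection.",
);
// notice === "We retain visit logs for 30 days after collection."
```

Failing hard on unknown placeholders is the important design choice: a typo in a template breaks the build instead of silently shipping a wrong or empty figure.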
Mechanical defense: a commit-time check
A single source of truth helps only if engineers actually use it. So we run an automated check at commit time that flags any new attempt to hard-code one of these numeric values directly in a place that is not the canonical configuration. When the check finds a stray literal, it surfaces actionable feedback with the file and line that introduced the drift, plus a pointer to the canonical pattern.
The check is idempotent and fast. It runs alongside our other commit-time invariants and stays out of the way unless something is genuinely off.
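A drift check along these lines can be sketched as follows. The canonical path, the guarded patterns, and the hook wiring are all assumptions for illustration; a real pre-commit hook would feed in the staged files and print each finding with a pointer to the canonical pattern:

```typescript
// Hypothetical commit-time drift check: flag guarded numeric literals
// that appear anywhere other than the canonical configuration file.
const CANONICAL_PATH = "src/limits.ts"; // assumed location of the source of truth
const GUARDED_PATTERNS = [/\b30\s+days\b/i, /\b1[,_]?000\b/]; // illustrative literals

interface Finding {
  file: string;
  line: number;
  text: string;
}

function findDrift(staged: Record<string, string>): Finding[] {
  const findings: Finding[] = [];
  for (const [file, content] of Object.entries(staged)) {
    if (file === CANONICAL_PATH) continue; // only the canonical file may declare them
    content.split("\n").forEach((text, index) => {
      if (GUARDED_PATTERNS.some((pattern) => pattern.test(text))) {
        findings.push({ file, line: index + 1, text: text.trim() });
      }
    });
  }
  return findings;
}
```

Because the check only pattern-matches staged text, it is naturally idempotent and fast enough to run on every commit.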
Customer benefit: aligned disclosures, no surprise figures
The visible result for our customers and their compliance teams:
- Privacy Notice retention period equals real service retention. What we say is what we do.
- DPA Article 5 retention reference equals Privacy Notice equals service. A counsel reviewer who compares the three documents finds them coherent.
- Marketing-page tier limits equal DPA tier limits. The procurement team sees the same numbers in the public marketing material that they later sign in the agreement.
- Per-locale parity preserved. Our EN and PL legal documents render the same numeric values because they share the same build-time pipeline; we cannot ship a typo in one locale that creates a contradiction with the other.
What this is part of
Our engineering process leans on closure checklists, plan-of-record documents, and a maintained registry of lessons learned. Disclosure alignment is one of many similar invariants we maintain mechanically — across plan-feature gates, locale parity, accessibility wiring, and AI-surface disclosures, among others. We treat factual disclosure alignment as a structural property of the codebase, not a periodic audit task.
We publish this kind of post because EU B2B buyers in 2026 are right to ask "how do you actually run this?" before signing a Data Processing Agreement. The answer for us: deliberately, with mechanical defenses, and with a paper trail of decisions.
If you are evaluating HumanKey for your traffic intelligence stack, our Data Processing Agreement, Privacy Impact Assessment, Sub-Processor list, and DORA self-assessment are all public — and the figures they reference are the same figures our service actually uses, by construction. Start with whichever your compliance team needs first.
The free trial requires no credit card, and the snippet takes under two minutes to install.
Know Your AI Traffic
Start tracking AI crawlers visiting your website today. Free for up to 1,000 verifications per month.
Start Free Trial