Engineering rigor: How we eliminate plan-feature drift in growing SaaS codebases
Why this post exists
This is an operator-side transparency post, not a sales pitch. We want EU mid-market buyers — CTOs, Heads of Engineering, compliance reviewers — to see how we run our codebase, because that's a reasonable signal of how we'll handle their data.
Specifically: when a SaaS product has multiple pricing tiers and many feature flags, plan-feature drift is a real bug class that can affect what customers see and pay for. We close it mechanically. Here's how at a high level.
The problem: scattered plan-tier checks drift over time
Most growing SaaS codebases accumulate plan-tier checks in many places — pricing-gate render paths, billing UI, recommendation scoring, scheduled email jobs. Each check encodes the same intent: "does this customer's tier include capability X?".
A typical pattern looks like a hardcoded list of tier names compared against the user's tier. It's correct on the day it's written. Months later, a new tier is added. The new tier should also unlock the capability — but the engineer who adds the tier doesn't know how many parallel checks exist, scattered across components written by different people at different times.
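A minimal sketch of that anti-pattern (the tier names and function are illustrative, not our actual identifiers):

// Hardcoded tier list: correct on the day it's written, silently wrong once a
// new tier that should also unlock the capability is introduced elsewhere.
const TIERS_WITH_ADVANCED_REPORTS = ['midTier', 'topTier'];

function canSeeAdvancedReports(user: { tier: string }): boolean {
  return TIERS_WITH_ADVANCED_REPORTS.includes(user.tier);
}

Multiply that by every surface that gates the same capability, each with its own copy of the list.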
One check gets updated. Several don't. The customer on the new tier:
- Sees the capability on some pages
- Doesn't see it on others
- Gets a downgraded scheduled email because the cron job's eligibility check wasn't updated
This is silent. No exception is thrown. No log line fires. The customer just experiences inconsistent behavior — sometimes paying for a capability they can't fully use.
We audited for this drift class during this quarter's cleanup work and consolidated every such gate behind a single derivation function. The lint described below ensures it stays consolidated as new capabilities are added.
The solution: server-derived capability flags
Pricing logic lives in one canonical configuration file — a single source of truth where each tier declares which capabilities it includes. The shape, abstracted:
midTier: {
  features: {
    advancedReports: true,
    integrations: true,
    // each capability declared once, here
  },
}
Our authenticated user endpoint derives boolean capability flags from that registry, and every consuming surface reads the derived flag instead of doing its own tier comparison. Add a new tier? Update one file. The flag propagates everywhere automatically.
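A minimal sketch of that derivation, assuming a registry shaped like the snippet above (the registry constant, function name, and tier entries are illustrative):

// Illustrative plan registry, same shape as the snippet above.
const PLAN_FEATURES: Record<string, Record<string, boolean>> = {
  midTier: { advancedReports: true, integrations: true },
  topTier: { advancedReports: true, integrations: true },
};

// Derived once, server-side; every consuming surface reads the resulting
// flags instead of comparing tier names itself.
function deriveCapabilities(tier: string): Record<string, boolean> {
  return { ...(PLAN_FEATURES[tier] ?? {}) };
}

// Attached to the authenticated-user response, e.g.:
// { user: { id, tier }, capabilities: deriveCapabilities(user.tier) }

Consuming code then asks "capabilities.advancedReports?" rather than "is this tier in my local list?", so a new tier only ever touches the registry.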
Mechanical defense: pre-commit linting
A single source of truth helps only if engineers actually use it. So we wrote a pre-commit lint that scans the codebase for hardcoded plan-list patterns of several known shapes, and rejects commits that introduce them.
If any forbidden pattern is found, the commit is blocked. A closed allowlist covers the rare legitimate exception: each entry requires a known tag and a human-readable rationale, and unknown allowlist tags are rejected.
The lint is idempotent and fast — a few hundred files scanned in under a second. It runs alongside our other commit-time guardrails. When a commit fails, the engineer gets actionable feedback at the location of the issue.
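A compressed sketch of how a lint like this can work. The regexes, allowlist tags, and function names below are illustrative assumptions, not our actual rules:

import { readFileSync } from 'node:fs';

// Forbidden shapes: hardcoded tier-name lists compared against a user's tier.
const FORBIDDEN = [
  /\[\s*['"](?:midTier|topTier)['"]/,    // ['midTier', ...].includes(user.tier)
  /user\.tier\s*===\s*['"]\w+Tier['"]/,  // user.tier === 'midTier'
];

// Closed allowlist: an exception must carry a known tag plus a rationale.
const ALLOW_TAG = /plan-lint-allow\((billing-migration|legacy-import)\): \S+/;

export function lintFiles(paths: string[]): string[] {
  const failures: string[] = [];
  for (const path of paths) {
    const lines = readFileSync(path, 'utf8').split('\n');
    lines.forEach((line, i) => {
      if (FORBIDDEN.some((re) => re.test(line)) && !ALLOW_TAG.test(line)) {
        failures.push(`${path}:${i + 1}: hardcoded plan check; use derived capability flags`);
      }
    });
  }
  return failures; // non-empty result -> non-zero exit -> commit blocked
}

A pre-commit hook passes the staged file list in and aborts on a non-empty result, printing each failure at its file and line.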
Customer benefit: correct access, no surprise gates
The visible result for our customers:
- Correct capability access: every paying customer sees exactly the capabilities their tier includes — not fewer, not more.
- Consistent billing posture: centralized derivation prevents one part of the system from including a tier in a metered job that another part excludes.
- Predictable downgrades: our standard grace period after a downgrade applies uniformly across every capability, because the same effective-tier resolver drives every gate.
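As a minimal sketch of that last point, here is what an effective-tier resolver with a downgrade grace period can look like; the field names and the 14-day window are illustrative assumptions:

const GRACE_PERIOD_MS = 14 * 24 * 60 * 60 * 1000; // illustrative 14-day window

interface Subscription {
  tier: string;           // tier after the downgrade
  previousTier?: string;  // tier before the downgrade
  downgradedAt?: Date;
}

// Within the grace period the customer keeps the previous tier's capabilities.
// Because every gate reads this one resolver, the grace period applies uniformly.
function effectiveTier(sub: Subscription, now: Date = new Date()): string {
  if (
    sub.previousTier &&
    sub.downgradedAt &&
    now.getTime() - sub.downgradedAt.getTime() < GRACE_PERIOD_MS
  ) {
    return sub.previousTier;
  }
  return sub.tier;
}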
What this is part of
Our engineering process leans on closure checklists, plan-of-record documents, and a maintained registry of lessons learned. The plan-feature consistency work is one of many similar invariants we maintain mechanically — review gates on legal documents, locale parity, accessibility wiring, prompt discipline for AI surfaces, and so on.
We publish this kind of post because EU B2B buyers in 2026 are right to ask "how do you actually run this?" before signing a Data Processing Agreement. The answer for us: deliberately, with mechanical defenses, and with a paper trail of decisions.
If you're evaluating HumanKey for your traffic intelligence stack, our Data Processing Agreement, Privacy Impact Assessment, Sub-Processor list, and DORA self-assessment are all public. Start with whichever your compliance team needs first.
If you'd rather start with the product itself, the snippet takes under two minutes to install and the free trial requires no credit card.