If your pricing and packaging keeps getting more granular, there’s a good chance you’re accidentally building a future you’ll hate.
It starts innocently: one more usage meter here, one more limit there, one more “simple” add-on to match what Sales promised. Then you wake up with 100+ entitlements in a plan, multiple systems to support, and a product that ships slower because monetization logic is now baked into everything.
Customers don’t buy tomatoes, pickles, and lettuce. They buy sandwiches.
In our latest HTTP 402 AMA, Jeffrey Goldberg (Director of Product Management at Qlik) shared a painfully familiar story: consolidating ~10 entitlement systems, transitioning from seat-based to consumption/capacity pricing, and now facing the next wave (AI, agents, tool calls, and unpredictable usage), all while trying to keep the buying experience simple and the engineering architecture resilient.
The more individual items you entitle with usage limits, the more complex it becomes to price, explain, negotiate, and sell.
If you’re an engineering or product leader dealing with pricing complexity, this is the playbook (and the warning signs).
The trap: “more meters” don’t necessarily add value… they can easily become a tax
A single monetizable entitlement doesn’t travel alone. Once a capability becomes “entitled,” it drags behind it:
- contract language
- reporting expectations
- billing edge cases
- dashboards
- and the perennial question: “how do we explain this without terrifying people?”
There’s also a behavioral tax. Every new limit becomes a new question in a user’s head: “Am I allowed to do this? Will I get in trouble if I do?” That uncertainty suppresses experimentation and slows adoption, not because the product isn’t valuable, but because people don’t want to accidentally trigger a cost, a cap, or an awkward conversation. The result is a compounding effect: the more limits you add, the more you need governance, and the more governance you need, the harder it is for usage to grow naturally.
That’s how you end up with what Jeff calls the ingredients list problem: plans with dozens (or hundreds) of entitlements across booleans, configs, usage meters, overage rules, and hard limits.
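To make the ingredients list concrete, here’s a minimal sketch of the shape a plan definition tends to grow into once every capability becomes its own entitled, metered item (all field names here are illustrative, not any vendor’s actual model):

```typescript
// Hypothetical plan definition after a few years of "just one more meter".
// Every field below is something someone has to price, explain, and support.
interface PlanEntitlements {
  // Booleans: simple on/off capabilities
  ssoEnabled: boolean;
  advancedReporting: boolean;

  // Configs: numeric knobs that creep into contract language
  maxWorkspaces: number;
  dataRetentionDays: number;

  // Usage meters: each one needs a metering pipeline and a dashboard
  apiCallsPerMonth: number;
  rowsIndexedPerMonth: number;
  aiPromptsPerMonth: number;

  // Overage rules: what happens at the edge of every meter
  apiOverage: { allowed: boolean; pricePerThousand: number };

  // Hard limits: enforcement-era blocks that quietly suppress adoption
  maxConcurrentJobs: number;
}
```

Multiply that by every plan, every legacy bundle, and every custom enterprise contract, and you have the ingredients list.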
That’s the hidden punchline: “more meters” doesn’t just add complexity, it becomes a permanent tax on shipping.
Jeff goes on to say that when teams fall into entitlement sprawl, it usually isn’t a technical failure. It’s organizational. Every time you add a new capability, you’re forced into a high-friction decision: is it just a feature flag… or is it a monetizable entitlement?
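The two answers sound close, but they commit you to very different amounts of machinery. A rough sketch of the difference, with illustrative names rather than any specific platform’s API:

```typescript
// Answer 1: it's a feature flag. One boolean check, nothing follows it around.
function canUseFeature(flags: Record<string, boolean>, key: string): boolean {
  return flags[key] === true;
}

// Answer 2: it's a monetizable entitlement. The same capability now carries a
// limit, usage tracking, and an overage policy, plus the contract language,
// reporting, and billing edge cases that none of this code shows.
interface Entitlement {
  limit: number;
  used: number;
  overageAllowed: boolean;
}

function consume(e: Entitlement, amount: number): { allowed: boolean; overage: number } {
  const remaining = Math.max(e.limit - e.used, 0);
  if (amount <= remaining) {
    e.used += amount;
    return { allowed: true, overage: 0 };
  }
  if (e.overageAllowed) {
    e.used += amount;
    return { allowed: true, overage: amount - remaining };
  }
  return { allowed: false, overage: 0 };
}
```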
As entitlements get bundled together in different ways to entice customers to move from legacy, enforcement-driven seat-based plans to value-driven capacity, consumption, and usage-based plans, IT and product development teams end up in the messy middle: trying to support old models, new models, and hybrid bundles simultaneously, and then paying for it with multiple systems, middleware, and inconsistent enforcement across the platform.
So, the more often you answer “entitlement,” the more your packaging starts to look like a spreadsheet instead of a thoughtfully composed value-added solution for your customers.
Enforcement is legacy thinking. Consumption is the real goal.
Jeff makes a subtle but critical distinction that becomes make-or-break as you scale:
- In an on-prem world, “licensing” often means enforcement.
- In cloud, entitlement means capacity, consumption, or usage tied to value.
Legacy seat-based models tend to be coupled to old assumptions: license keys, CPU core limits, hard blocks, or protective checks built to prevent license abuse in on-prem environments you had no control over. When you move to cloud but keep similar models, you realize the protection mechanisms you built are actively fighting the evolution of your business model, pricing, and packaging.
So, to modernize monetization, “adding a new meter” isn’t enough; removing the old enforcement guardrails is imperative. This isn’t to say limits are bad, but stacking new limits on top of legacy controls just ties another knot in the codebase.
The painful mistake during transitions: letting old models renew forever
During seat → usage transitions, the most painful (and common) mistake Jeff observes is trying to keep everyone happy by letting old models live indefinitely.
It sounds customer-friendly (“move to cloud, keep your old entitlements”), but it creates long-term drag: you can’t unify packaging, and every new feature needs translation layers to behave correctly across legacy contracts.
One AMA participant shared a cleaner approach: trial the new model to teach customers the jump, then end-of-life the old one and don’t allow renewals once contracts roll off.
This can be a politically hard position, but it’s how you actually escape.
AI doesn’t just add a new meter. It breaks your unit of value.
AI monetization often starts with something that feels comfortably itemizable:
prompts, questions, pages indexed, token counts.
It works… until it doesn’t.
As Jeff described, once you move toward agentic workflows, MCP tool calls, and multi-step tasks, a single user action may trigger 2, 10, or 50 underlying operations. Suddenly, the “prompt = value” assumption collapses.
This is where granular entitlement design really starts to bite you commercially:
- If you entitle every underlying operation, you create an ingredients list no human wants to buy.
- If you entitle only the top-level action, you lose control over cost drivers.
- If you do both, you’ve built a pricing story no sales team can confidently repeat.
So, you need an abstraction customers can understand without forcing them to purchase each ingredient separately.
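One way to hold onto a sellable unit of value is to meter the top-level action the customer recognizes while tracking the underlying fan-out internally for cost and margin. A minimal sketch of that idea, assuming a credit-style unit like the one discussed in the next section (task names and credit values are illustrative):

```typescript
// A single user-visible task fans out into many underlying operations.
interface Operation { tool: string; costUsd: number }

interface TaskRun {
  taskType: string;        // the unit the customer recognizes and pays for
  operations: Operation[]; // the 2, 10, or 50 calls that actually ran
}

// Customer-facing: credits per task type, not per tool call.
const creditsPerTask: Record<string, number> = {
  answer_question: 1,
  run_agent_workflow: 10,
};

// Vendor-facing: keep the real fan-out and cost so the rate card can be
// recalibrated later without changing how customers think about value.
function settle(run: TaskRun) {
  const creditsCharged = creditsPerTask[run.taskType] ?? 0;
  const costUsd = run.operations.reduce((sum, op) => sum + op.costUsd, 0);
  return { creditsCharged, costUsd, fanOut: run.operations.length };
}
```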
Credits aren’t trendy. They’re a practical abstraction.
Jeff believes credits are an optimization lever for the vendor and a control lever for the buyer.
For the vendor, credits provide:
- a configurable rate card behind the scenes (so you can steer margin and cost-to-serve: high-cost actions consume more credits; see the sketch after these lists)
- a single currency customers buy (so packaging and procurement stay simple)
- a flexible wrapper that lets you add capabilities without SKU sprawl (folding more features into one unified model over time)
For the customer, credits offer:
- a stable prepaid pool of consumption that teams can draw down variably. It’s “pay for use” without unpredictable invoice spikes.
- a path to governed and accountable usage. As consumption grows, credits make it easier to control and allocate who can spend, and on what.
- an easy way for customers to experiment, adopt, and right-size their credit needs based on real usage, without contacting sales to renegotiate entitlements.
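Put together, here’s a minimal sketch of the mechanics, assuming a vendor-owned rate card and a prepaid pool (action names and credit values are illustrative):

```typescript
// Vendor-owned rate card: configuration you control, not contract terms.
const rateCard: Record<string, number> = {
  "report.export": 1,
  "ai.answer": 2,
  "agent.workflow": 10,
};

// The customer buys a prepaid pool of credits and draws it down as they go.
interface CreditPool {
  purchased: number;
  consumed: number;
}

function spend(pool: CreditPool, action: string): boolean {
  const credits = rateCard[action];
  if (credits === undefined) return false;                    // unknown action
  if (pool.consumed + credits > pool.purchased) return false; // pool exhausted: time to top up
  pool.consumed += credits;
  return true;
}
```

One currency for the buyer, one rate card for the vendor: new capabilities become new rows in the rate card, not new SKUs or new contract terms.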
The credit system warning: never negotiate the rate card per customer
This part hit hard. If AI pushes you into an à la carte world, the real risk isn’t customization. It’s custom rate cards per customer.
If every enterprise customer negotiates different rates per feature or per AI action, you’re signing up for:
- contract chaos
- margin surprises
- engineering overhead to honor a thousand versions of “truth”
- and a future where you can’t react to changing AI costs
The healthier pattern: standard currency (credits), vendor-owned rate card, and discount the credit volume, not the per-feature rates.
Why? Because AI costs (and infrastructure costs) aren’t stable. If energy costs spike, model pricing changes, or your architecture shifts, you need the ability to adjust rates without breaking every contract you’ve ever signed.
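A quick illustrative calculation of what “discount the volume, not the rates” looks like in practice (all numbers are made up):

```typescript
// Both customers run against the same rate card; negotiation only touches
// how many credits they buy and at what discount off the list credit price.
const listPricePerCredit = 0.10; // USD, illustrative

function annualCommit(creditsPurchased: number, discountPercent: number): number {
  return creditsPurchased * listPricePerCredit * (1 - discountPercent / 100);
}

annualCommit(250_000, 5);    // smaller customer: $23,750
annualCommit(2_000_000, 20); // larger customer: $160,000

// If model or energy costs spike, you adjust credits-per-action in the
// vendor-owned rate card; neither contract above needs to be reopened.
```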
Predictability still matters: governance becomes the product
Jeff continues, saying credits only work if customers trust the exchange rate. When conversion rules feel opaque or change without a clear rationale, credits stop being a simplifying currency and start feeling like a hidden price lever.
The reality is you will need to adjust rates as costs and product value evolve, but the difference between “healthy recalibration” and “pricing gotcha” is explaining the why in plain language, ahead of time, with concrete examples of what changes (and what won’t).
Customers want “pay for use,” and they appreciate stable and predictable spend as they adopt and grow usage of your products across their end users.
That’s why, as usage spreads (especially with AI), the question shifts from “how much did we spend?” to “who is allowed to spend it, and on what?”
Internal allocation and chargeback (by tenant, team, group, role) becomes critical. The winning systems won’t just meter usage; they’ll make allocation and control a first-class experience.
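A minimal sketch of what allocation as a first-class experience might look like underneath, assuming team-level budgets carved out of the org’s credit pool and attributed spend events (illustrative schema, not any vendor’s actual model):

```typescript
// Org-level credits carved into team budgets, with attributed spend for chargeback.
interface TeamBudget {
  team: string;
  allocated: number; // credits this team is allowed to consume
  consumed: number;
}

interface SpendEvent { team: string; user: string; action: string; credits: number }

const ledger: SpendEvent[] = [];

function spendForTeam(budget: TeamBudget, event: SpendEvent): boolean {
  if (budget.consumed + event.credits > budget.allocated) {
    return false; // over budget: surface it to an admin, not as a surprise invoice
  }
  budget.consumed += event.credits;
  ledger.push(event); // attribution makes "who spent what, on what" a query, not a forensic project
  return true;
}

// Chargeback view: credits consumed per team, straight from the ledger.
function chargebackByTeam(): Record<string, number> {
  return ledger.reduce<Record<string, number>>((acc, e) => {
    acc[e.team] = (acc[e.team] ?? 0) + e.credits;
    return acc;
  }, {});
}
```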
The takeaway
If you’re building monetization infrastructure going forward, your job is not to track more ingredients.
Your job is to:
- abstract complexity away from the customer
- standardize currency so the business can evolve
- decouple monetization from enforcement-era architecture
- design for change, because the goalposts will keep moving
Your job is not to track more ingredients. Your job is to abstract complexity away from the customer.
Credits won’t magically fix packaging. A new billing provider won’t magically fix entitlement sprawl. And “just add one more meter” is exactly how you end up with a system that blocks your roadmap.
Sell the sandwich. Track the ingredients. Don’t make customers (or your engineers) pay for the cheese.
Join the conversation at HTTP 402
If this post felt familiar, you are not alone.
Many engineering and product leaders are navigating entitlement sprawl, pricing models that slow teams down, and AI usage that breaks old assumptions about value. Most of those lessons are learned the hard way, inside production systems, with very few places to compare notes honestly.
HTTP 402 is a private Slack community for engineering and product leaders building and scaling monetization. We host regular AMAs, roundtables, and discussions focused on real-world pricing, usage metering, entitlements, billing infrastructure, and AI monetization. The conversations are practical, candid, and grounded in systems that actually ship.
Members include leaders from companies like HubSpot, Anthropic, Miro, Twilio, 1Password, Qlik, Cloudflare, and others, and we regularly host community AMAs with leaders from Grafana, Lovable, and Qlik.
If you want a place to trade notes with people who have actually built this stuff, take a look at the HTTP 402 community.




