A year ago, you could still talk about “the coming AI wave.” Today, it’s no longer coming. It’s here. OpenAI has gone from research lab to household name, powering products and entire industries. Clay has redefined what it means to build workflows on top of live data and AI. Tools like Windsurf are changing the way developers interact with code, offering an entirely new level of productivity.
These products aren’t fringe experiments anymore; they’ve already reset customer expectations. And with them, the assumptions we used to make about pricing no longer hold.
When Seats Don’t Map to Value
Seat-based pricing worked beautifully for collaboration tools like Slack or Zoom, where every new user added incremental value. But in AI-native products, that link between seats and value quickly breaks down.
Take Windsurf. Two developers each occupy one “seat,” but one might use it only occasionally to autocomplete a line here and there, while the other relies on it to generate entire code scaffolds. Same license, radically different value delivered. In that context, seat-based pricing starts to feel disconnected from reality.
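The mismatch is easy to see in numbers. A toy sketch, with entirely invented figures (the seat price, user names, and completion counts are all hypothetical), shows how a flat per-seat license implies wildly different effective prices per unit of value:

```python
# Toy illustration of the seat/value mismatch: two developers on
# identical licenses, with very different AI usage. All figures invented.

SEAT_PRICE = 30.0  # hypothetical flat monthly license per developer

usage = {
    "light_user": 40,     # occasional completions per month
    "heavy_user": 4_000,  # full scaffolds, daily
}

# Both seats cost the same, but the implied price per completion
# diverges by two orders of magnitude.
implied_price = {name: SEAT_PRICE / completions
                 for name, completions in usage.items()}
```

Here the light user effectively pays 100x more per completion than the heavy user, even though both invoices are identical.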
When Usage Feels Too Volatile
Usage-based pricing, as championed by pioneers like OpenAI, seems like a natural fit. Tokens and compute time are measurable and scalable, and they align neatly with infrastructure costs.
But here too, cracks appear. Customers experimenting with GPT-powered features often hesitate because they don’t know how much a burst of usage will cost. Finance leaders feel the same pressure when a single AI-heavy workflow causes bills to swing wildly. The model itself is sound, but when consumption is unpredictable, it can create anxiety instead of trust.
When Hybrids Need More Support
That’s why so many teams have embraced hybrid models. Combining the predictability of subscriptions with the flexibility of usage gives customers a more balanced experience. Clay, for example, has leaned into hybrid approaches to meet the reality of wildly varied customer workflows.
But in the AI era, even hybrids need reinforcement. It’s not always clear where to draw the line between the fixed part of the plan and the variable. If the mechanics aren’t designed carefully, billing can feel murky for customers, even if the underlying intent is right. Hybrids are the right direction, but AI exposes where they still need evolution.
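The fixed/variable split can at least be made mechanically transparent. A minimal sketch of one common hybrid structure, a base subscription with an included usage allowance and metered overage beyond it (the prices and credit counts are illustrative assumptions, not Clay's or anyone's real price book):

```python
# Minimal sketch of a hybrid plan: a fixed subscription that includes a
# usage allowance, with metered overage beyond it. Figures are invented.

def hybrid_invoice(base_fee: float, included_credits: int,
                   credits_used: int, overage_rate: float) -> float:
    """Base fee covers up to included_credits; extra credits are
    billed individually at overage_rate."""
    overage = max(0, credits_used - included_credits)
    return base_fee + overage * overage_rate

# A customer on a $99 plan with 10,000 included credits who uses 12,500:
# $99 base + 2,500 overage credits at $0.01 each = $124.00
print(hybrid_invoice(99.0, 10_000, 12_500, 0.01))  # 124.0
```

When the allowance, the overage rate, and the line between them are this explicit on the invoice, the "murky billing" problem largely disappears; the design challenge is choosing where that line sits.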
The New Cost Reality
There’s another layer that makes AI different: the cost base itself. Running AI isn’t like running traditional SaaS. Infrastructure spend is substantially higher. Inference costs stack quickly, GPUs aren’t cheap, and foundation model providers like OpenAI, Anthropic, and Perplexity frequently change their pricing.
That volatility doesn’t just affect customer bills; it directly impacts margins. To stay profitable, monetization infrastructure has to be just as dynamic, built to adapt to shifts in model pricing while ensuring AI revenue consistently outpaces AI spend. Without that flexibility, even well-designed pricing models can collapse under the weight of unpredictable costs.
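In practice, "monetization that moves as fast as model pricing" means something like a margin guardrail: recompute gross margin per feature whenever a provider changes its rates, and flag anything that falls below a floor. A sketch of that check, with hypothetical rates, revenue figures, and a 60% floor chosen purely for illustration:

```python
# Sketch of a margin guardrail: recompute gross margin whenever a model
# provider changes its rates, and flag features below a margin floor.
# All rates and figures here are placeholders, not real prices.

def gross_margin(revenue: float, inference_cost: float) -> float:
    """Fraction of revenue left after paying for inference."""
    return (revenue - inference_cost) / revenue

def clears_margin_floor(revenue: float, tokens: int,
                        provider_rate_per_1k: float,
                        margin_floor: float = 0.6) -> bool:
    """True if the feature still clears the floor at the given rate."""
    cost = tokens / 1000 * provider_rate_per_1k
    return gross_margin(revenue, cost) >= margin_floor

# A $10 workflow consuming 1M tokens clears a 60% floor at $0.003/1k
# (cost $3, margin 70%) but fails after a rate hike to $0.005/1k
# (cost $5, margin 50%) and needs repricing.
```

A check like this is what lets pricing react to a provider's rate change in days rather than surfacing as a margin surprise a quarter later.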
The Tension Point
We’re not waiting for the future anymore. We’re already living in it. AI has transformed how products are built, how they’re used, and how customers expect to pay for them. Traditional pricing models haven’t failed us; they simply weren’t built for the dynamics of AI.
Seats can’t always capture value. Usage can feel too volatile. Hybrids need the right infrastructure to stay clear and fair. And underlying it all, AI’s cost structure demands monetization systems that can move as fast as model providers change their terms.
The opportunity now is to create models that make customers feel empowered, not constrained. Pricing that invites experimentation instead of limiting it. Models that flex with value but still feel predictable.
We’ve entered a new chapter. And the industry is ready for the next step.