When Microsoft launched GitHub Copilot in 2022, they priced it at $10 per seat per month. By early 2023, they were reportedly losing an average of $20 per user per month running the service, and their heaviest users cost as much as $80. Adoption, meanwhile, was growing fast. Every new subscriber made the business economics worse. The more they sold, the more money Microsoft lost delivering it.
Now, Microsoft has a trillion-dollar business behind them. They absorbed those Copilot losses and kept going. But if you're a SaaS founder looking at your own product roadmap, the Copilot numbers should make you stop and think. Because the economics of running an AI feature work differently than the economics of running traditional software, and most teams don't realize how different the cost structure is until the production bills start arriving.
Here's what changed. In traditional SaaS, when more people use your product, your margins stay healthy because the cost of serving each additional user is close to nothing. You built the infrastructure once, and that same infrastructure serves ten users or ten thousand users at roughly the same cost. But when more people use an AI feature, the cost goes up with every single interaction, because each LLM inference call requires fresh computation. Unlike a static asset, the result of one user's request can rarely be cached and reused to serve another's. So the more popular your AI feature becomes, the more expensive your product becomes to operate.
This means popularity and profitability no longer move in the same direction for AI features the way they do for traditional SaaS features. In a lot of cases, they move in opposite directions. Your most engaged users generate the highest costs, and if you charge everyone the same flat monthly rate, your best customers are also your biggest financial liability.
We've been building products for SaaS founders for a while now, and over the last two years the AI feature conversation has come up in almost every engagement we run. The excitement about what the technology does is always genuine. The cost modeling is almost always incomplete. And we've had this problem ourselves. We scoped an AI component for an EdTech product last year. The prototype looked great, and the per-interaction cost at prototype scale seemed reasonable. But when we modeled what the costs would look like at 10,000 daily active users, the AI feature's operating cost erased the entire margin on the subscription tier. Not reduced the margin. Erased it entirely. If we had shipped that feature as designed, the product would have lost money on every subscriber.
We had done the math ahead of time. The problem was that the math at prototype scale involves fewer variables than the math at production scale. And this experience is common across the industry. When companies move from AI prototypes to AI production systems, the original cost estimates are typically off by 500% to 1,000%, because production introduces monitoring requirements, data pipeline costs, evaluation infrastructure, and engineering maintenance burdens that don't exist when you're running a prototype with a few hundred test users.
The costs that don't show up in your initial budget
The inference cost, meaning the raw cost of running the API call or the GPU computation, is the line item every team remembers to include in their AI feature budget. But inference is one layer in a stack of ongoing operational costs that most teams don't account for until they're already in production.
Your AI feature needs a data pipeline that cleans, formats, and routes information to the model before every request. Your production system needs monitoring tools that track response latency, error rates, and output quality around the clock. You need evaluation infrastructure that catches bad or inaccurate outputs before your users see them, because an AI feature that gives incorrect answers will damage your users' trust in the entire product, not the AI feature alone. You need to run fine-tuning cycles as your underlying data changes and the model's performance drifts over time. And you need logging systems for compliance, for debugging, and for the ongoing work of understanding how your AI feature behaves under real-world conditions. Each of these requirements adds headcount cost or tooling cost on top of the raw inference spend.
And then there's a cost category that never appears on any budget spreadsheet, which is the organizational cost of redirecting your team's attention. At SaaS companies between $20M and $80M in annual recurring revenue, we've seen the same pattern repeat across multiple engagements. The strongest product managers transfer to the AI feature team. Engineering resources shift away from core product stability and retention work toward new AI capabilities. Customer success representatives get pulled into conversations about upselling the AI pricing tier instead of focusing on churn risk signals in the existing customer base. Nobody approves these resource shifts in a formal meeting. They accumulate one sprint planning session at a time, one priority conversation at a time. And then six months later, customer churn starts ticking upward, and nobody on the team connects the churn increase to the fact that the people who used to focus on retention have been working on something else.
We've started asking a question early in our engagements that we didn't think to ask two years ago: "What will your team stop doing in order to build and maintain this AI feature?" The specificity of the answer tells us more about whether the AI feature will succeed than any technical architecture document does.
Why the entire SaaS industry changed its pricing in 2025
The top 500 SaaS companies made over 1,800 pricing changes in 2025. In a normal year, a SaaS company changes its pricing once, if at all. When 500 companies average 3.6 pricing changes each in a single year, the industry is not iterating. The industry is panicking.
The root cause is a structural mismatch between how SaaS companies price their products and how AI features generate costs. Traditional SaaS pricing, whether per-seat or per-month or flat-rate, was designed for products where the cost of serving one more user is close to zero. Those pricing models collapse when one power user consumes 100 times the compute resources of a casual user and both users pay the same $49 monthly fee.
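To see how fast this math breaks, here's a toy margin model for a flat-rate plan serving both casual and power users. Every number in it is an illustrative assumption, not data from any real product:

```python
# Toy model of flat-rate pricing vs. usage-driven cost.
# All numbers below are illustrative assumptions.

PRICE_PER_USER = 49.00            # flat monthly subscription
COST_PER_CALL = 0.02              # assumed fully-loaded cost per AI interaction
CASUAL_CALLS = 30                 # interactions per month for a casual user
POWER_CALLS = CASUAL_CALLS * 100  # a power user does 100x the usage

def monthly_margin(calls_per_month: int) -> float:
    """Gross margin on one subscriber under flat pricing."""
    return PRICE_PER_USER - calls_per_month * COST_PER_CALL

print(f"casual user margin: ${monthly_margin(CASUAL_CALLS):.2f}")
print(f"power user margin:  ${monthly_margin(POWER_CALLS):.2f}")
```

At these assumed numbers, the flat plan earns $48.40 a month on a casual user and loses $11.00 a month on a power user. The exact figures don't matter; the shape does. Under flat pricing, margin per subscriber falls linearly with usage, and the 100x user is always underwater.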
Replit is a clear example of how this mismatch plays out. Replit's AI-powered coding tools drove revenue from $2M to $144M in annual recurring revenue. By traditional SaaS standards, that growth rate is extraordinary. But Replit's gross margins dropped to single digits during that growth period, and during one stretch of heavy usage they went negative. The company was selling more subscriptions and losing more money at the same time. Replit's margins only stabilized in the 20-30% range after the company rebuilt its entire pricing structure around usage-based plans that tied revenue to actual consumption.
GitHub went through the same painful correction with Copilot. Copilot launched with unlimited AI-assisted coding for a flat monthly fee; it now offers a base usage allowance with per-request pricing for usage beyond that allowance. GitHub didn't want that pricing complexity. No company wants to make its pricing harder for customers to understand. But GitHub made the change because the alternative was continuing to lose money on the most popular feature in the product.
If your AI feature charges a flat monthly rate and your costs increase with every user interaction, your feature's growing popularity will make your margin problem worse, not better. Every company we've seen recover healthy margins from an AI feature has moved to some form of hybrid pricing where the revenue structure mirrors the cost structure.
Three questions we ask before any AI feature gets built
We used to start AI feature planning by asking what the model should do and how the user experience should work. We don't start with those questions anymore, because we learned that the answers to those questions don't tell you whether the feature will make money or lose money.
The first question we ask now is: what does one AI interaction cost you when you account for everything? The inference call, the data processing, the error handling, the monitoring overhead, and a proportional share of the engineering time required to keep the AI system running in production. If your cost estimate comes from an API pricing page alone, the real fully-loaded cost is likely five times higher. We've seen this gap between estimated and actual costs in enough engagements to state that with confidence.
The second question we ask is: does your pricing structure move in the same direction as your cost structure? When a customer uses your AI feature more heavily, does your revenue from that customer increase proportionally? If the answer is no, and if heavy users pay the same flat rate as light users, then every power user compresses your margin further. The SaaS companies recovering their AI feature margins have moved to hybrid pricing models where a base subscription covers fixed costs and light usage, and a credit system or consumption-based pricing covers heavier usage. That pricing structure is harder to explain on a marketing page than a clean per-seat number. That pricing structure also doesn't lose money when the feature works the way you designed it to work.
The third question we ask is: how tightly have you scoped this AI feature's purpose? The scope of an AI feature is the single biggest cost lever a founder controls directly. An AI feature built to automate one specific workflow that your customers perform every day will cost less to run, will deliver more measurable value per interaction, and will show up more clearly in your renewal and expansion data than a general-purpose AI assistant that tries to help with everything. We push hard on scope in every engagement now, because the difference in cost structure between "an AI feature that automates invoice reconciliation" and "an AI assistant for your finance team" is often the difference between a feature that generates positive unit economics and a feature that doesn't.
Where our thinking has landed, at least for now
We don't believe AI features are a bad idea. We believe the standard process that most SaaS teams use to decide whether to build an AI feature is a bad process.
The standard process works like this: a team sees what the AI technology does, the team gets excited about a prototype, the team ships the feature, and then the team tries to figure out the economics after the feature is in production. That sequence of decisions worked well in SaaS for twenty years, because the economics of traditional software features are forgiving enough to survive that order of operations. You ship a decent feature, you charge per seat, and the margins take care of themselves because the marginal cost of each new user is tiny.
AI features break that order of operations. The economics of an AI feature need to come first in the planning process. The cost per interaction, the pricing model, the feature scope, and the organizational cost of redirecting your team's focus all need answers before the first line of production code gets written.
We know inference costs are dropping every quarter. We know the cost to develop a competitive AI model fell from $100M to a range of $5M to $30M in the span of two years. The economics of running AI features will keep getting better over time. But the phrase "the economics will improve" has turned into a reason for SaaS teams to skip the hard planning work. Teams ship AI features with bad unit economics today and tell themselves the cost curve will fix the margin problem for them later. Sometimes the cost curve does cooperate. But in a lot of cases, user adoption grows faster than infrastructure costs decline, and the margin problem compounds instead of resolving.
We're still learning our way through this, and we think everyone in the industry is. But the one conviction we've developed over two years of building AI-enabled products is straightforward: popularity and profitability are no longer the same question for SaaS features that run on AI. The teams that recognize those as two separate questions, and that answer the profitability question before they answer the popularity question, will be the teams with a sustainable business on the other side of this transition.
Cameo Labs helps SaaS founders build products that work technically and economically. If you want to pressure-test the economics of an AI feature before you commit engineering resources, [start with a Blueprint Sprint](/blueprint-sprint).
