Every product team is being asked about AI right now. Here's how to decide whether it's time to build or time to hold.
There is a very specific kind of meeting happening inside every software company right now. The CEO walks in and says something like "we need an AI strategy" or "our competitors are adding AI features" or, my personal favorite, "can we put AI in the product?" Nobody in the room says no. And that's the beginning of a very expensive problem.
I don't say this because AI isn't valuable. I believe it is, and I've helped companies build AI features that genuinely changed their products. But I've also watched companies spend six figures building AI capabilities that their customers didn't want, that compressed their margins, and that became maintenance headaches the engineering team resented. The difference between those two outcomes almost always comes down to whether someone asked the right questions before writing the first line of code.
The pressure is real, and mostly wrong
Let me validate the anxiety first. There is genuine pressure to add AI features. Gartner put generative AI on the downslope of the hype cycle heading into what they call the "trough of disillusionment" in 2024. That sounds like bad news but it's actually useful information. It means the market is moving from hype to accountability. S&P Global found that 42% of companies abandoned most of their AI projects in 2025, up from 17% the year before. Companies are not pulling back because AI doesn't work. They're pulling back because they built the wrong things for the wrong reasons.
The pressure to add AI features usually comes from one of three places. Investors want to see that you're "AI-enabled." Competitors have shipped something that looks AI-powered. Or someone internally got excited about a demo and now there's momentum behind a feature nobody has actually validated.
None of those are good reasons to build. They're reasons to have a conversation, not reasons to commit engineering resources.
The cost structure nobody talks about
The single biggest mistake I see product teams make with AI features is treating them like regular software features. They are not.
Traditional SaaS has a beautiful economic model. You build the feature once, and every additional user costs you almost nothing. Your gross margins sit at 80-90% because the marginal cost of serving another customer is essentially zero. AI breaks this completely. And I think most product teams haven't fully internalized how much it breaks it.
a16z reported back in 2020 that AI SaaS companies were operating at 50-60% gross margins compared to 60-80% for traditional SaaS. And that gap has likely widened as companies pile on more inference-heavy features. The same firm found that inference costs can account for 60-80% of total operating expenses for AI-first companies. That is a staggering number. It means the more people use your AI feature, the more it costs you. Every single query, every completion, every generated output incurs compute cost.
I watched a company build an AI feature that worked beautifully in testing. Looked great in demos. Customers loved it in beta. Then it hit production with real traffic and the monthly inference bill was five figures. The feature that was supposed to be a competitive advantage was actually underwater. Every time a customer used it, the company lost money.
This is what happens when product teams think about AI features the way they think about traditional features. You have to model the unit economics before you build, not after.
When to build: the three conditions
After working with enough AI projects, I've landed on three conditions that all need to be true before it makes sense to build an AI feature. If any one of them is missing, you should wait.
The problem has to be expensive enough to justify the cost. This sounds obvious but it gets skipped constantly. If you're automating a task that takes your customer five minutes and happens twice a week, the value of automating that task is tiny. AI features carry ongoing compute costs, maintenance costs, and the engineering cost of keeping up with a field that changes every few months. The problem you're solving needs to be painful enough and frequent enough that the value clearly exceeds all of those costs. Not "kind of" exceeds them. Clearly exceeds them.
You need clean, accessible data to work with. Informatica surveyed Chief Data Officers and found that 43% pointed to data quality and readiness as the number one obstacle blocking their AI initiatives. Not compute. Not talent. Data. Your AI feature is only as good as what you feed it. And most companies dramatically overestimate the readiness of their data. If the data your feature needs lives in five different systems, hasn't been cleaned, and has no consistent schema, you're not ready to build AI features. You're ready to build a data pipeline. Those are very different projects with very different costs and timelines.
The feature has to create defensible value, not just novelty. Gartner warned that more than 40% of agentic AI projects will be scrapped by 2027 unless they're carefully scoped and validated. A lot of what's being built right now is what I'd call AI decoration. It doesn't fundamentally change the product or create value that competitors can't replicate in a few months. If your AI feature is basically "we put a chatbot on our dashboard," that's not defensible. Someone else can do exactly the same thing next quarter with the same API calls. Defensible AI features use your proprietary data, your unique customer workflows, or your domain expertise in ways that are genuinely hard to copy.
When to wait
There are specific situations where waiting is not only acceptable, it's the smart play.
Wait if you're adding AI because investors expect it. Building features to impress investors instead of serving customers is one of the fastest ways to burn money. I've seen this play out multiple times. The team ships an AI feature to hit a fundraising talking point, the feature gets minimal adoption because it wasn't built for a real user problem, and six months later the engineering team is maintaining something nobody uses.
Wait if your core product has unfixed problems. If your onboarding is broken, your churn is high, or your core workflows are clunky, AI won't save you. Bolting an AI feature onto a broken product is like putting a turbocharger on a car with flat tires. Fix the fundamentals first. Your customers will thank you more for a smooth onboarding experience than for a half-baked AI assistant.
Wait if you can't model the unit economics. If you can't answer the question "what will this feature cost us per user per month at 10x our current usage," you're not ready to build it. Replit reportedly saw gross margins dip to single digits and even negative during usage surges before they restructured their pricing. That kind of surprise is avoidable if you do the math beforehand.
Wait if the technology is still moving too fast in your space. Some AI capabilities are improving so rapidly that what costs you six months of engineering today might be available as an API call in twelve months. This is a real consideration. The foundation model providers are expanding their capabilities constantly. If the feature you're building is essentially wrapping an API that's going to get dramatically better and cheaper soon, you might be better off waiting and building on top of the improved version.
A practical way to decide
Here's the framework I walk clients through. It's not complicated but it forces you to be honest about what you actually know.
Start by defining the problem in dollars. Not "our customers would benefit from AI-powered insights." Rather: "our customers spend an average of 12 hours per month manually compiling reports. At their average hourly cost, that's $600 per customer per month in labor. If we can reduce that to 2 hours, we create $500 in monthly value per customer." Now you have a number to work with.
Then model the cost. How many API calls per customer per month? What model do you need, and what does inference cost per query? What's the engineering cost to build and maintain this? What's the cost at 10x usage? If the cost per customer exceeds the value per customer at any realistic scale, stop.
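The two steps above reduce to simple arithmetic you can sketch in a few lines. Here is a minimal model using the hypothetical report-compilation numbers from earlier; every figure (query volume, token counts, per-token price) is an illustrative assumption you'd replace with your own:

```python
# Back-of-the-envelope unit economics for a hypothetical AI feature.
# All numbers are illustrative assumptions, not benchmarks.

def monthly_value_per_customer(hours_saved: float, hourly_cost: float) -> float:
    """Dollar value the feature creates for one customer each month."""
    return hours_saved * hourly_cost

def monthly_cost_per_customer(queries: int, tokens_per_query: int,
                              cost_per_1k_tokens: float) -> float:
    """Inference spend for one customer each month."""
    return queries * tokens_per_query / 1000 * cost_per_1k_tokens

# Value side: reports drop from 12 hours to 2 at an assumed $50/hour.
value = monthly_value_per_customer(hours_saved=10, hourly_cost=50)      # $500

# Cost side at today's usage: assumed 400 queries, ~3k tokens each,
# at an assumed $0.01 per 1k tokens.
base_cost = monthly_cost_per_customer(queries=400, tokens_per_query=3000,
                                      cost_per_1k_tokens=0.01)          # $12

# Stress test: does the feature stay profitable at 10x usage?
surge_cost = monthly_cost_per_customer(queries=4000, tokens_per_query=3000,
                                       cost_per_1k_tokens=0.01)         # $120

print(f"value ${value:.0f}/customer/month, "
      f"cost ${base_cost:.0f} today, ${surge_cost:.0f} at 10x")
print("worth building" if value > surge_cost else "wait")
```

Note that the decision hinges on the 10x figure, not today's bill. A feature that clears the bar at current usage but goes underwater at 10x is exactly the Replit-style surprise described below; the model doesn't include engineering and maintenance cost, which only raises the bar further.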
Then ask whether you're the right team to build this. Do you have the data? Do you have engineers who understand ML operations, or do you need to hire or partner? Do you have the infrastructure to monitor and maintain an AI feature in production? A lot of companies discover that the real bottleneck isn't the model. It's everything around the model.
Finally, ask whether this creates lasting competitive advantage or temporary novelty. If a competitor could replicate your feature in three months using the same APIs, that's not a moat. If the feature gets better over time because it learns from your proprietary data or integrates deeply into workflows your customers depend on, that's defensible.
The boring answer that's usually right
This is the part where I lose people, but I'm going to say it anyway. The highest-ROI AI investments right now are not the flashy, customer-facing features that make good demo videos. They're operational. Automating internal workflows. Reducing manual data processing. Cutting down repetitive tasks that cost the company time and money every month.
MIT's research on enterprise AI backs this up. The fastest returns show up in back-office operations: compliance automation, document processing, reducing outsourcing costs. None of that makes a good conference talk. But it shows up on the P&L immediately. And it lets you learn how to build and maintain AI systems in production without betting the whole product on a customer-facing feature you haven't validated.
Start there. Learn the operational realities. Build the muscle. Then when you do add customer-facing AI features, you'll do it with a team that actually knows what they're doing.
Frequently Asked Questions
How do you decide whether to add AI features to your product?
Start with the business problem, not the technology. Define the problem in dollar terms and calculate the value of solving it. Then model the cost of the AI solution including inference costs, engineering time, and ongoing maintenance. If the value clearly exceeds the cost and you have the data infrastructure to support it, build. If any of those conditions aren't met, wait. The companies seeing real ROI from AI are the ones that picked specific, high-value problems and solved them well rather than adding AI features broadly.
What is the ROI of AI features in SaaS?
It varies enormously depending on what you build and how you build it. IBM's research found that enterprise AI initiatives returned just 5.9% on average, against a roughly 10% cost of capital. But that average hides a wide range. Companies that target specific, measurable problems see strong returns. Companies that build AI features for marketing purposes often see negative ROI once you factor in inference costs and maintenance. The a16z finding that AI SaaS companies operate at 50-60% gross margins compared to 60-80% for traditional SaaS tells you that AI features come with real ongoing costs that traditional features don't.
When should you wait before building AI features?
Wait if you're building to impress investors rather than serve customers. Wait if your core product has fundamental problems that AI won't fix. Wait if you can't model the unit economics at scale. And wait if the specific AI capability you need is improving so fast that building now means rebuilding in twelve months. The 42% of companies that abandoned AI projects in 2025 mostly failed because they built without clear business cases. Patience is underrated.
What makes AI features fail after launch?
The most common reason is that the feature solves a problem nobody was willing to pay to solve. Second is cost surprises when inference bills at production scale exceed what was budgeted. Third is data quality issues that cause the AI to perform worse in production than it did in testing. And fourth is maintenance burden. AI features require ongoing monitoring, model updates, and performance tuning that traditional features don't. Teams that don't plan for this ongoing cost end up either letting the feature degrade or pulling engineers off other work to maintain it.
At Cameo Labs, we help product teams figure out where AI creates real value and where it creates expensive distractions. If you're trying to decide whether to invest in AI features, we can help you model the economics and build a plan that actually makes sense. [Let's talk](/blueprint-sprint).
