Over the past year and a half, I’ve been working closely on AI-driven features and products. And while the excitement around AI is fully justified, there’s a pattern I’ve noticed that keeps cropping up and disrupting delivery: the misconception that AI behaves like traditional code.
With traditional code, if we set a rule, we get a predictable result. But AI, especially in the generative space, isn’t deterministic. And yet, many teams and stakeholders still expect it to behave that way.
This false equivalence leads to mismatched expectations, flawed delivery processes, and silos between engineering and AI teams. If we don’t address it, we risk turning back the clock on everything we’ve learned about agile, collaboration, and product-centric thinking.
The Silo Mistake: Treating AI teams as separate streams
In many organizations, AI engineers are still treated like a separate species. Different streams. Different backlogs. Different standups. Sometimes, even a different Product Owner.
This makes sense in exploratory POCs or R&D streams. But the moment we decide that a feature is production-worthy, the AI engineers must become part of the cross-functional team.
We’ve seen this play out before:
- Devs vs. QAs: Different workflows, then merged.
- Devs vs. DevOps: Different priorities, then merged.
- Now? AI vs. Software Engineering.
It’s time to fix this before the inefficiencies become the norm.
Why it matters: Delivery becomes fragile
Separate AI teams mean:
- Misaligned timelines: AI outputs land too early or too late relative to the UI/API work.
- Communication gaps: Bugs or model tweaks go unnoticed until late.
- Broken feedback loops: Product decisions are made without input from AI engineers or users.
But when the AI engineer is embedded in the team:
- Dependencies are transparent.
- Prioritization happens holistically.
- Everyone owns the feature, not just the model.
AI is not a service. It’s a capability. Build teams around capabilities, not silos.
Let’s talk accuracy (or the lack thereof)
Another problem I’ve seen is stakeholders expecting AI outputs to be perfect. AI is not deterministic: you won’t get the same answer every time. And in many use cases, accuracy may realistically cap out around 70%.
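The non-determinism is easy to demonstrate. Here is a minimal sketch using a toy next-token distribution with temperature-style sampling; the candidate words and weights are made up for illustration and stand in for a real model’s probabilities over a vocabulary:

```python
import random

def generate(prompt: str, temperature: float = 1.0) -> str:
    # Toy next-token distribution. A real generative model produces
    # probabilities over a large vocabulary in the same spirit.
    candidates = ["invoice", "receipt", "statement"]
    weights = [w ** (1.0 / temperature) for w in [0.5, 0.3, 0.2]]
    return random.choices(candidates, weights=weights, k=1)[0]

# The same prompt can yield different answers on different runs:
outputs = {generate("classify this document") for _ in range(50)}
print(outputs)
```

Even at low temperature the output remains a sample, not a lookup; this is why "run it again and check it matches" is the wrong mental model for testing generative features.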
So, how do we build trust?
The answer isn’t "improve the model until it’s perfect." The answer is UX.
- Give users the ability to review and select from multiple options.
- Provide confidence indicators and contextual clues.
- Allow corrections and feedback loops.
A thoughtful user experience can go a long way in increasing trust, even when the model isn’t flawless. Too many teams forget that AI output is part of the user experience. If the UX is confusing, even accurate models will feel broken.
What we’ve found works:
- Always show alternative outputs where feasible.
- Let users influence the result (e.g., re-rank, give thumbs up/down).
- Be transparent: show the source of content generation.
If users can’t trust the output, they won’t use the feature, no matter how technically advanced it is.
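These patterns can be sketched as a simple contract between the model layer and the UI: ranked alternatives instead of a single answer, a confidence hint, a source label, and a feedback hook. This is a minimal illustration; the names (`Suggestion`, `AIResponse`, `record_feedback`) are hypothetical, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    text: str
    confidence: float  # 0.0 to 1.0, surfaced to the user as a hint
    source: str        # e.g. "generated" vs. "retrieved from docs"

@dataclass
class AIResponse:
    # Ranked alternatives, never a single take-it-or-leave-it answer.
    suggestions: list
    feedback: dict = field(default_factory=dict)

    def record_feedback(self, index: int, vote: str) -> None:
        # Thumbs up/down per suggestion feeds the improvement loop.
        self.feedback[index] = vote

resp = AIResponse(suggestions=[
    Suggestion("Draft A", 0.82, "generated"),
    Suggestion("Draft B", 0.64, "generated"),
])
resp.record_feedback(0, "up")
```

The point of the sketch is that review, confidence, and feedback are part of the feature’s data model from day one, not UI polish bolted on later.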
Validate early — not after you’ve shipped
In classical product development, user testing often happens post-MVP. But with AI, that’s far too late.
Why? Because AI features often require weeks of data curation, model training, and tuning. If the outputs aren’t usable, all of that time was wasted.
Validate early:
- Include user testing in the POC phase.
- Treat AI prompts and outputs like you treat UI prototypes.
- Get real feedback before you scale.
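Treating prompts like UI prototypes can be as lightweight as running each prompt variant against a fixed set of sample inputs and collecting reviewer ratings before any tuning or scaling. A minimal sketch; `fake_model` and the `rate` stub stand in for your actual model call and a human reviewer’s score:

```python
def fake_model(prompt: str, sample: str) -> str:
    # Stand-in for a real model call; replace with your own.
    return f"{prompt}: {sample}"

def evaluate_variants(variants, samples, rate):
    # Score each prompt variant by averaging ratings over the samples.
    scores = {}
    for name, prompt in variants.items():
        ratings = [rate(fake_model(prompt, s)) for s in samples]
        scores[name] = sum(ratings) / len(ratings)
    return scores

variants = {"v1": "Summarize briefly", "v2": "Summarize for executives"}
samples = ["report text 1", "report text 2"]
# In practice `rate` is a human reviewer; here a throwaway stub.
scores = evaluate_variants(variants, samples, rate=lambda out: len(out) % 5)
best = max(scores, key=scores.get)
```

A few hours of this kind of comparison can kill a doomed approach before weeks of data curation are sunk into it.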
TL;DR — Key takeaways:
1. Embed AI engineers in your cross-functional teams. They’re not a support function but part of the product.
2. Use UX to compensate for imperfect accuracy. Design trust, not perfection.
3. Validate AI outputs as early as possible. Treat content like functionality. Test it before you build.
Final thoughts
As AI becomes embedded in more products, I believe it is imperative that we evolve the way we build. That doesn’t mean reinventing everything. It just means remembering what worked - and applying it again.
Cross-functional delivery. UX-first thinking. Early validation.
We’ve been here before. Let’s not make the same mistakes.

About the author: Ioanna has 7+ years of experience in product management and has evolved through roles ranging from Scrum Master to Technical Program Manager. She brings a deep understanding of what it truly takes to devise and implement strategies that meet and exceed customer expectations.