The 3 BIGGEST Misconceptions About AI (Debunked by Founders)
09 Jul 2025
AI is everywhere. In the news, in the tools we use, and in the strategy decks of every ambitious startup.
But while adoption is rising, so is confusion.
We asked founders from across the Faces of AI series to weigh in on the biggest myths they’re still encountering in conversations with clients, customers, and teams. Here’s what they told us.
1) “You can bolt AI onto existing processes and expect transformational results”
AI isn’t a magical solution to all business problems.
If you’re already in the space, you’ll know that. But as George Parry, CTO at Emma, points out, the idea that AI is a guaranteed golden ticket to perfect workflows still crops up time and time again:
People often think it’s about adding a chatbot or giving staff access to ChatGPT. And while that might help a little, the real gains only come when you rethink how a business operates with AI at its core.
You could hand someone the best AI tools in the world… but without people management, change management, and a clear explanation of why workflows need to evolve, it won’t have the impact you’re hoping for.
In other words, transformation doesn’t come from tooling alone. It comes from redesigning the way people and processes interact with those tools.
This challenge becomes even more complex when you look under the hood.
Eduardo Candela, co-founder of Maihem, sees the same disconnect but from a deeply technical perspective.
We see it across the board – people are very aware of AI’s power, but there’s still a lot of scepticism around it.
One of the key things many people don’t realise is that these new models are probabilistic. You can give them the same input twice and get two completely different outputs.
That unpredictability doesn’t just surprise users; it undermines their confidence in the tools entirely. Hallucinations (or confidently wrong outputs) are consistently flagged by founders as one of the biggest hurdles to meaningful adoption right now.
When trust is lost, engagement drops fast. Eduardo continues:
There’s a real gap between what people think AI should be capable of and how these technologies actually work. They’re still not perfect.
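Eduardo's point about probabilistic models can be made concrete. Most modern language models pick each next token by sampling from a probability distribution, and a "temperature" setting controls how random that sampling is. The following is a minimal, illustrative Python sketch; the tokens and probabilities are made up, but the mechanism is the same one that lets an identical prompt produce different outputs on different runs.

```python
import random

# Toy vocabulary with made-up next-token probabilities (illustrative only).
VOCAB = {"rises": 0.5, "falls": 0.3, "stalls": 0.2}

def sample_next_token(probs, temperature, rng):
    """Pick a next token: deterministic at temperature 0, random above it."""
    if temperature == 0:
        # Greedy decoding: always the single most likely token.
        return max(probs, key=probs.get)
    # Reshape the distribution by temperature, then sample from it.
    scaled = {tok: p ** (1 / temperature) for tok, p in probs.items()}
    total = sum(scaled.values())
    tokens, weights = zip(*((t, w / total) for t, w in scaled.items()))
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
# Same "input" three times, temperature 0: identical every run.
print([sample_next_token(VOCAB, 0.0, rng) for _ in range(3)])
# Same "input" three times, temperature 1: results can differ run to run.
print([sample_next_token(VOCAB, 1.0, rng) for _ in range(3)])
```

Real models do this over vocabularies of tens of thousands of tokens, compounding the randomness at every step of a response, which is why two identical prompts can diverge into completely different answers.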
2) “AI = ChatGPT”
Generative AI might have put artificial intelligence firmly into the public consciousness, but the field is far broader and older than many people realise. As Ivan Scattergood, CTO of OctaiPipe, puts it:
To people outside the space, AI often just means ‘ChatGPT’ now. But I studied neural networks back in the ’90s. The maths hasn’t changed – what’s changed is data and computation.
The synonymous use of "AI" and "ChatGPT" reflects just how narrow public understanding still is. The overwhelming focus on chatbot functions has obscured AI’s wider applications – in optimisation, automation, forecasting, recommendation systems, and more.
As adoption accelerates, education needs to catch up. The hype can obscure the real work – and it’s changed how startups present themselves, even when the underlying tech hasn’t.
We used to avoid saying ‘AI’ at all. We’d say ML, because that’s what we were actually doing. We even had a joke: if you’re talking to investors, it’s AI. If you’re hiring, it’s ML. And when you’re building the thing, it’s logistic regression.
In reality, most startups don’t need the flashiest, most complex models. They need ones that are accurate, stable, and fit for purpose.
That’s something Katya Lait, founder of nettle, sees regularly in conversations with customers:
Customers get really excited and ask, “Can it do this? Can it do that?” We’ll get a list of 10–20 things they want nettle to do. It’s great and exciting, but building the kind of features they want takes time, and we have to temper expectations a little bit in the short term.
For founders, managing expectations is now part of the job – especially when the hype outpaces the reality of what AI can (and should) do.
3) “It’s taking over the world… and our jobs”
Fear is still one of the biggest blockers to adoption – and not just among technophobes. For many users, it’s not the capabilities of AI that concern them most, but the uncertainty around how it works (and what it might be doing behind the scenes).
Katya Lait regularly sees scepticism surface in client conversations:
They may misunderstand what's happening, so they'll think that we’re reading all of their data, all of their personal information. They don’t trust the system and want us to list every single source we use.
That underlying mistrust isn’t always rooted in the tech itself. Often, it’s the result of a lack of transparency, or simply a fear of being left behind. George Hancock, CEO at OctaiPipe, notes a familiar sentiment:
We often hear hesitancy to the tune of "I don’t know it, I don’t trust it, I don’t want to use it."
And in some cases, those fears are generational. Roshan Tamil Sellvan, co-founder at AdvisoryAI, says this is particularly pronounced among more experienced professionals:
We’ve found that older professionals have more barriers than younger ones. They think: I’ve been doing this job for 20 years and AI is going to come in and take over my job.
But founders across the board agree: AI isn’t here to replace people. It’s here to support them. The real value lies in helping individuals do their jobs faster, better, and more creatively.
If you’re spending eight hours writing a report, AI can draft the bulk of it in ten minutes. That frees you up to focus on the parts that really matter. The real challenge is shifting perspectives away from the fear that AI is taking over the world.
That shift in perspective relies on one thing above all: trust. And trust is fragile – especially when AI systems behave unpredictably or rely on complex, opaque processes. Eduardo Candela puts it plainly:
If an AI gives a wrong answer, people often lose trust in the whole system.
Closing the Gap
For founders, one of the biggest ongoing challenges is bridging that gap. Helping users, clients, and even investors understand the reality – through transparency and education – is what will ultimately build trust in systems that often feel opaque.
Eduardo Candela, co-founder at Maihem, believes closing this gap starts with honesty and clarity about what AI can and can’t do:
We want people to confidently harness the power of AI, while also being aware of the limitations. Bridging that gap between expectations and reality is key.
That process of building trust is especially tricky in more complex AI systems. Even a well-meaning effort to explain how it all works can end up overwhelming users. As Katya Lait explains, that means walking a fine line between transparency and information overload:
Obviously, we want to create trust within the product, but multi-agent systems are very complicated and dense in their workflows.
We pull data from many sources and process them in complex ways, so we have to make tradeoffs between creating trust and not completely overwhelming them with what's going on in the background.
Ultimately, education, design, and communication all play a role. The AI systems that succeed won’t just be the ones that work – they’ll be the ones people understand and believe in.
Curious to learn more?
Our Opportunities Hub is packed full of resources to upskill and find community in the AI space.
Building with AI?
Generative partners with startups at every stage to help them scale smarter, move faster, and deliver real results. Find out how we work with AI startups.