
Why Trust, Not Tech, Is AI’s Biggest Challenge: Eduardo Candela on Founding Maihem

06 May 2025

While AI continues to evolve, Eduardo Candela is on a mission to ensure it’s used with care, clarity, and confidence.

As the co-founder and CTO of Maihem, his priority is clear: help enterprises embrace AI with confidence, bridging the gap between cutting-edge technology and real-world deployment.

We sat down with Eduardo to hear more about his mission, the challenges of building trustworthy AI, and how his PhD and industry experience laid the perfect foundation for his entrepreneurial journey.


Let’s talk about the origins of Maihem – how did the idea come about?

I started my career in the automotive industry, working in Silicon Valley at Tesla and later at the Bosch Centre for AI. That’s where I saw firsthand what we call mission-critical AI: systems deployed in cars, where safety and performance are essential. It became clear how important it is not just to build powerful AI, but to keep it safe and under control.

Before I worked in Silicon Valley, I did my Master’s at MIT, where I also conducted research on using AI in mission-critical decision-making. Five years ago, I moved to London to pursue my PhD at Imperial College London, researching safety in self-driving cars: specifically, how to measure risk and train models to be risk-aware and reliable using reinforcement learning.

Max, my co-founder, took a different path. He was at McKinsey before starting a PhD in Natural Language Processing, where his research focused on how to make generative AI safer – specifically, how to detect and prevent the spread of conspiracy theories.

We met while we were both still doing our PhDs, and we immediately connected over a shared passion and a shared question: how do we close the gap between AI research and real, responsible deployment? Maihem became our answer. It’s a way to take everything we learned in research and apply it to help companies deploy AI with confidence.

How do you think your background in academia has shaped your startup journey?

When I left my job in Silicon Valley to pursue a PhD, people thought I was crazy… walking away from the money and the title. But I knew I wanted to live in the future, to go deep on the questions I cared about. A PhD gave me that.

It’s still not common for people in academia to start startups. But for me, it makes perfect sense. You're using science to test ideas, you’re running experiments, you’re validating things, failing, refining, and spending years going very deep to solve a very specific problem that nobody has solved before. That mindset is exactly what you need to build something from scratch.

Both my co-founder and I completed PhDs, and we’ve found that background to be incredibly valuable. It’s a huge investment of time and energy, so you have to be really clear about what you want to get out of it. But if you are, it’s absolutely worth it.

So while it doesn’t have to be a PhD, I really encourage anyone who’s passionate about answering hard questions and uncovering truth to consider startups.

There's a lot of uncertainty around AI within the wider population. From your perspective, what are some of the most common misconceptions out there?

We see it across the board – people are very aware of AI’s power, but there’s still a lot of scepticism around it. One of the key things many people don’t realise is that these new models are probabilistic. You can give them the same input twice and get two completely different outputs.
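
To make that concrete, here’s a minimal, self-contained Python sketch of temperature-based sampling – the standard decoding mechanism behind that behaviour. The vocabulary and logits here are invented for illustration; a real model computes them from billions of parameters:

```python
import math
import random

# Toy next-token distribution for the prompt "The capital of France is".
# These logits are invented; a real model derives them from its weights.
logits = {"Paris": 5.0, "beautiful": 3.2, "a": 2.8, "known": 2.1}

def sample_next_token(logits, temperature=1.0):
    # Softmax with temperature: higher temperature flattens the
    # distribution, making less likely tokens more probable.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    max_v = max(scaled.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - max_v) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw one token at random according to those probabilities.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# The same input, sampled twice, can produce different continuations.
print(sample_next_token(logits))
print(sample_next_token(logits))
```

Unless the temperature is forced to zero, each call draws from the distribution rather than picking the single most likely token, which is why identical prompts can diverge.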

That unpredictability can be hard to wrap your head around when you’re expecting consistency. Hallucinations are one of the biggest blockers to adoption right now. When an AI gives users a wrong answer, people often lose trust in the entire system and immediately disengage.

Of course, there’s a more technical layer to this too, where models can be trained in the wrong direction. But for most users and businesses, the biggest challenge is just getting these systems to perform as expected.

There’s a real gap between what people think AI should be capable of and how these technologies actually work. They’re still not perfect. We want people to confidently harness the power of AI, while also being aware of the limitations. Bridging that gap between expectations and reality is key.

What’s the most exciting technical challenge that your team has solved so far?

One of the biggest challenges we’ve been tackling is how to bring user feedback and preferences directly into how AI systems are built and behave.

Right now, AI is developed by engineers and deployed, and then users interact with something that doesn’t always meet their expectations. So we’ve been asking: how do we close that loop? How do we make users feel in control, and actually get what they want from these systems?

It’s really a human–AI alignment problem, and solving it is key if we want people to trust and adopt these tools.

‘AI Safety’ can mean very different things to different people. What does that term mean to you, and what does it look like in practice?

That’s a great question. For me, one of the key realisations – especially during my PhD research on self-driving cars – is that safety is about performance under constraints.

When I used to run simulations, the safest car was the one that just parked and never moved. Technically, that’s “safe”, but it’s also useless. Whenever we take a plane, drive a car, or go out, we’re accepting some level of risk in order to achieve a goal.

The question then becomes: how do we achieve a goal without crossing certain boundaries? How do we maintain control? That’s what AI safety looks like to me. The real challenge comes in defining and measuring those limits, and ensuring users feel they are in control.
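
As a toy illustration of that idea, one common formulation in constrained RL is to reward progress towards the goal while heavily penalising any risk above an agreed budget. All names and numbers below are illustrative, not Maihem’s actual formulation:

```python
def constrained_reward(progress, risk, risk_budget=0.1, penalty=100.0):
    """Toy objective in the spirit of constrained RL: reward progress
    towards the goal, but charge a steep penalty whenever estimated
    risk exceeds the agreed budget."""
    violation = max(0.0, risk - risk_budget)
    return progress - penalty * violation

# A car that just parks is "safe" but earns nothing...
print(constrained_reward(progress=0.0, risk=0.0))   # 0.0
# ...one that makes progress within the risk budget scores well...
print(constrained_reward(progress=1.0, risk=0.05))  # 1.0
# ...and one that makes progress by taking excessive risk is punished.
print(constrained_reward(progress=1.0, risk=0.4))   # -29.0
```

The parked car maximises nothing, and the reckless one is punished; the useful policy is the one that performs inside the boundary.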

What is the biggest challenge in bringing AI products like yours to market?

The biggest challenge (and the biggest opportunity) is the gap between the development of cutting-edge AI and real-world adoption.

You look at the headlines and see OpenAI, new models, new agents, new startups. But when you talk to enterprises – and the everyday users within their companies – there’s often a huge disconnect. Most companies don’t have the talent, the understanding, or the confidence to use these tools effectively.

I call this the “chasm of adoption” and that’s where we come in to bridge the gap. It’s not just about technology, it’s about mindset. A big part of the work is education: helping people understand how these systems work, and how to use them safely and effectively.

How do you see your technology evolving over the next few years?

AI will continue to improve and become embedded in every part of life. Our mission is to make sure that progress benefits everyone by making AI safe and trustworthy.

In the short term, we’re seeing a flood of new models, agents, and workflows – both open- and closed-source. But most of what’s out there today is still what we’d call level one or two autonomy. These tools are co-pilots. They support humans and augment their work, but don’t operate independently.

At some point, there’s going to be a shift. We don’t know exactly when, but we’ll move into true autonomy, where these systems don’t just assist – they act on our behalf. That’s the big opportunity for us.

Once agents become autonomous, you need to know: are they safe? Are they in control? Can they interact with each other reliably? We’re not there yet, but the foundations are being built now. Our goal is to help organisations prepare – so when autonomy becomes the norm, they’re ready.

When it comes to AI trends, what are you most excited about?

I did my PhD in reinforcement learning, so I’ve been close to it for a while. It’s exciting to see it finally gaining more attention; I find it very promising. Everyone’s talking about it now because it mirrors how humans learn: through rewards and feedback.

With the rise of new models like DeepSeek and others, reinforcement learning is proving to be a really powerful and accessible way to improve agent behaviour – a trend that will only grow.
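
For readers unfamiliar with the mechanics, here’s a generic tabular Q-learning update – the textbook version of that reward-and-feedback loop, not anything specific to DeepSeek or Maihem:

```python
# Generic tabular Q-learning update: nudge the value estimate for
# (state, action) towards reward + discounted best future value.
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    target = reward + gamma * best_next
    q[state][action] += alpha * (target - q[state][action])

# Hypothetical two-state example: positive feedback gradually raises
# the value of the action that earned the reward.
q = {"s0": {"left": 0.0, "right": 0.0}, "s1": {"stay": 0.0}}
q_update(q, "s0", "right", reward=1.0, next_state="s1")
print(q["s0"])  # {'left': 0.0, 'right': 0.1}
```

The agent tries an action, receives feedback, and adjusts its estimate of that action’s value – the same reward-driven loop Eduardo describes.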

And what do you feel is overhyped?

We hear a lot of talk about AI being super dangerous or close to taking over the world. But that’s just not where we are. Especially with LLMs, people sometimes forget that these are just models trained to predict the next token – the next word.

We’re nowhere near Artificial General Intelligence (AGI). These systems are not self-improving, they use huge amounts of energy, and they’re still far away from replicating human intelligence in many domains. So while LLMs are powerful tools, we’ll likely need different, more powerful architectures to get anywhere close to AGI.

Maihem is currently a team of five. How has the talent attraction process been for you as a startup?

It’s really exciting, but challenging. Hiring for a startup is nothing like hiring for a big company. I’ve done both, and they’re completely different worlds. You talk to people coming out of Big Tech and, often, they just don’t have the mindset or the culture fit for a startup.

What you’re really looking for is your own SWAT team: strong generalists who are smart, driven, and deeply passionate about the mission. We’ve been really lucky with the team we’ve assembled. Most of them have been founders themselves, so they already understand what it takes to build something from scratch.

That founder mindset has made a huge difference. Instead of chasing the obvious CVs, we looked for people who genuinely want to prove themselves and make a difference. That’s where we’ve found the most success.

You’re obviously incredibly busy. How do you switch off and relax outside of work?

For me, work-life balance isn’t about separating work from life. Finding passion in what you do makes all the difference.

I’ve always loved diving deep into tough problems; staying up late solving something challenging never felt like a chore. I love reading, especially about startups and tech. It’s not work for me, it’s real curiosity.

I also love music; I play piano and guitar – it puts my brain in a different mode and often helps me solve problems in unexpected ways. And of course, I try to stay active. I play football every weekend and tennis when I can. That's how I keep my energy up!

Finally, if you weren't working on Maihem, what other problems would you be trying to solve with AI?

My passion and career have always been focused on one thing: how to build systems that make smart, automated decisions. Even before everyone was calling it “AI,” that’s what I was drawn to.

So if I weren’t focused on helping companies deploy AI safely and responsibly, I’d probably be building something similar myself – maybe a vertical agent tackling a specific, meaningful and safety-critical use case.

I love the idea of building AI that helps people with their mental health, or helps them invest their savings safely. But right now, my goal is to help all of these organisations first, to maximise my impact in the world.


📥 Eduardo is one of the 54 innovative technical founders recently featured in our 'Faces of AI 2025' report, which is available to download for free now.

At Generative, we work with AI-driven startups to scale innovation, refine strategy, and accelerate growth. If you're building in AI and looking for the right talent to take your company to the next level, let's talk.
