
What Does ‘AI Safety’ ACTUALLY Mean? Founders Weigh In.

03 Sep 2025

When it comes to AI, the word “safety” gets thrown around a lot. 

Policymakers reference it when drafting new regulations. Tech companies invoke it when launching products. And in headlines, it’s often tied to fears of misuse or unintended consequences.

But what does AI safety really mean in practice? Is it about compliance, trust, or simply making sure systems don’t break?

We asked founders from across the Faces of AI series to share their perspectives. From healthcare and finance to self-driving cars and simulation technology, here’s how they’re putting AI safety into action in their own work.

Roshan Tamil Sellvan, Advisory AI

For Roshan, operating in the finance sector leaves no room for error.

“In finance, responsibility starts with building safeguards into the architecture itself. We train our AI to filter inappropriate language, check outputs against FCA regulatory frameworks, and use multi-model verification pipelines to ensure accuracy and compliance. If a report doesn’t meet the threshold, it gets rewritten. These layers build trust in the system.”

By embedding checks and balances into every step, Advisory AI aims to make compliance a structural feature rather than an afterthought.
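Taken at face value, that layered gate is easy to sketch. The Python below is a minimal illustration of the pattern Roshan describes, sequential checks plus a verification threshold that triggers a rewrite; every check, phrase, and threshold in it is an invented stand-in, not Advisory AI’s actual code.

```python
# Hypothetical sketch of the layered-gate pattern; checks, phrases, and
# thresholds here are invented stand-ins, not Advisory AI's pipeline.
from dataclasses import dataclass
from typing import Callable

BLOCKED_PHRASES = {"guaranteed returns", "risk-free"}  # assumed flaggable wording

@dataclass
class Verdict:
    passed: bool
    reason: str = ""

def language_filter(report: str) -> Verdict:
    """Layer 1: reject inappropriate or misleading phrasing."""
    for phrase in BLOCKED_PHRASES:
        if phrase in report.lower():
            return Verdict(False, f"flagged phrase: {phrase!r}")
    return Verdict(True)

def regulatory_check(report: str) -> Verdict:
    """Layer 2: toy stand-in for a check against FCA-style rules."""
    if "capital at risk" not in report.lower():
        return Verdict(False, "missing required risk disclosure")
    return Verdict(True)

def verification_score(report: str) -> float:
    """Layer 3: stand-in for agreement aggregated across several models."""
    return 0.9 if len(report.split()) > 20 else 0.4

THRESHOLD = 0.8

def review(report: str, rewrite: Callable[[str, str], str], max_rounds: int = 3) -> str:
    """Run every layer; anything below the bar goes back for a rewrite."""
    for _ in range(max_rounds):
        for check in (language_filter, regulatory_check):
            verdict = check(report)
            if not verdict.passed:
                report = rewrite(report, verdict.reason)
                break  # re-run all checks on the rewritten report
        else:
            if verification_score(report) >= THRESHOLD:
                return report
            report = rewrite(report, "below verification threshold")
    raise RuntimeError("report failed verification after max rewrites")
```

A production pipeline would swap each stand-in for real model calls, but the gating shape is the quote’s point: nothing ships below the threshold.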

Eduardo Candela, Maihem

Eduardo’s background in self-driving cars shaped his view of safety as something more nuanced than “risk-free.”

“Safety is about performance under constraints. In self-driving car simulations, the ‘safest’ vehicle was the one that parked and never moved – but that’s useless. The question is: how do we achieve goals without crossing boundaries? The challenge is defining those limits and keeping users in control.”

In other words, safety isn’t about eliminating all risk. It’s about making trade-offs explicit and manageable.

George Hancock, Octaipipe

At Octaipipe, the emphasis is on trustworthiness.

“AI safety is broad – it means different things to practitioners, users, and policymakers. For us, it’s about building explainable and trustworthy systems. We’ve developed frameworks around transparency, interpretability, and user confidence, and carried those into real-world projects like our work with P&G.”

Clear principles, George notes, are key to building systems that people can actually rely on.

George Parry, Emma

Operating in healthcare adds its own layer of responsibility.

“For us, AI safety means clear guardrails and always keeping a human in the loop. Emma makes outputs transparent – you can see the exact sources behind a recommendation – but final decisions rest with trained professionals. In health and social care, safe AI means combining machine processing power with human empathy and judgment.”

For Emma, the division of labour is the point: the machine assembles the evidence, and a trained professional makes the call.
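That split maps naturally onto code. Below is a minimal human-in-the-loop sketch built around a hypothetical recommendation type; Emma’s real data model and review flow aren’t described here, so the names and fields are illustrative assumptions.

```python
# Human-in-the-loop sketch with a hypothetical recommendation type;
# Emma's real data model and review flow are assumptions here.
from dataclasses import dataclass

@dataclass
class Recommendation:
    text: str
    sources: list[str]              # the exact sources shown to the reviewer
    approved_by: str | None = None  # set only by a human reviewer

    def approve(self, reviewer: str) -> None:
        self.approved_by = reviewer

    def is_actionable(self) -> bool:
        # Nothing is acted on until a named professional signs off.
        return self.approved_by is not None

rec = Recommendation(
    text="Increase home-visit frequency to twice weekly.",
    sources=["care-plan-2024.pdf#p3", "gp-note-2025-06-12"],
)
assert not rec.is_actionable()  # transparent, but not yet authorised
rec.approve(reviewer="J. Smith, RN")
assert rec.is_actionable()
```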

Eric Marcuson, ClinBI

Eric sees safety as starting from customer needs.

“AI safety starts with deferring to the customer. Legal frameworks set the boundaries, but beyond that we respect individual preferences around data. Guardrails aren’t a barrier; they create space for thoughtful innovation. If you build with care and intent, safety and progress can align.”

By putting user intent first, ClinBI ensures safety doesn’t slow creativity – it enables it.

Scott Wilson, Covecta

Scott points to the policy landscape shaping how safety is defined.

“Vendors have a responsibility to ensure safe deployment – from bias to data protection. But regulation has to strike the right balance. Europe’s stringent approach offers safeguards but risks slowing innovation. Governance matters, but so does competitiveness.”

Patrick Sharpe, Artificial Societies

Patrick frames safety around strict data boundaries and scalable moderation.

“Safety for us means strict data use – only public content, never private – and clear moderation guardrails. Our system uses AI to assess audience requests, blocking harmful simulations before they can be created. It’s how we keep innovation responsible and scalable.”
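The key property here is ordering: the moderation gate runs before anything is created. A hedged sketch of that shape, with invented category names and a toy classifier standing in for the real moderation model:

```python
# Hedged sketch of a block-before-creation moderation gate; the category
# names and toy classifier are placeholders, not the real moderation stack.
BLOCKED_CATEGORIES = {"harassment", "deception", "targeted_manipulation"}

def classify_request(request: str) -> set[str]:
    """Stand-in for an AI moderation model returning detected policy categories."""
    flags = set()
    if "impersonate" in request.lower():
        flags.add("deception")
    return flags

def create_simulation(request: str) -> dict:
    # The gate runs first, so a blocked request never allocates a simulation.
    hits = classify_request(request) & BLOCKED_CATEGORIES
    if hits:
        raise PermissionError(f"request blocked by moderation: {sorted(hits)}")
    return {"status": "created", "request": request}
```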

So, What Does It All Add Up To?

Across industries, AI safety doesn’t have a single definition. For some founders, it’s about compliance and regulation. For others, it’s about explainability, user control, or ethical data use.

But the common thread is clear: AI safety is about building systems that can be trusted by regulators, businesses, and the people who use them.

As George Parry of Emma puts it: “With any new technology comes new risks. The key is introducing it with a clear-eyed view of both the known and unknown.”

AI safety, then, isn’t about playing it safe. It’s about innovating responsibly – and making accountability part of the design from day one.
