Cool Heads, Cooler Systems: The Startup Making AI Work for Energy Efficiency
11 Jun 2025
Data centres are under immense pressure, from power constraints to sustainability mandates. Cooling systems are one of their biggest energy drains – and OctaiPipe is using AI to make them significantly more efficient.
But this isn’t your typical AI startup story. Born from a spur-of-the-moment conversation on a train and shaped by years of deep R&D, OctaiPipe has evolved from consultancy roots into a product-first company focused on federated learning — an approach that keeps data local and private, while still enabling powerful AI training at the edge.
We caught up with co-founders George Hancock, Ivan Scattergood and Eric Topham to talk about privacy-preserving AI, federated learning, and building for the market before the market’s ready.
What specific problem is OctaiPipe solving, and why does it matter now?
George: As AI demand grows, so does the pressure on the infrastructure that supports it. Data centres are facing increasing constraints around power, energy use, and sustainability – and cooling systems are one of their biggest energy drains. That’s where we come in.
Our focus is on using AI to optimise the energy efficiency of data centre cooling systems, aiming to deliver energy savings of up to 30%.
What makes our approach different is the use of federated learning. Our platform allows AI to be trained and deployed securely at the edge, maintaining privacy. This design has attracted attention from both customers and investors, which has naturally drawn us deeper into the data centre space. We offer a timely solution to a growing challenge.
Tell us about the idea behind your startup. What’s the story there?
Ivan: It all started on a train, really. Eric met Paul Carver while geeking out over weather maps – Eric was racing professionally at the time, and Paul was about to set sail. One thing led to another, and Paul, excited by Eric’s background in data science, suggested they start a company.
That company turned into a successful data science consultancy. George came on board to lead commercial efforts, and I joined to help deliver projects. As part of our work, we ran an “innovation sandbox” where MSc students could complete their theses by working on real R&D challenges for our clients.
It was during one of these projects that we began exploring federated learning – driven by a use case involving factory-floor devices producing a terabyte of data per week each. Uploading that volume to the cloud was tough, so instead of taking the data to the code, we brought the code to the data. That’s when the concept behind OctaiPipe, our federated learning platform, was born.
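To make the "bring the code to the data" idea concrete, here is a minimal federated-averaging sketch. It is purely illustrative and not OctaiPipe's implementation: each simulated device trains a small model on data that never leaves it, and only the model weights are shared and averaged.

```python
# Minimal federated-averaging sketch (illustrative only, not OctaiPipe's code).
# Each "device" trains a small linear model on its own local data; only the
# model weights -- never the raw data -- are shared and averaged centrally.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(weights, X, y, lr=0.05, epochs=5):
    """Train a linear regression model on one device's local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Simulated local datasets on three edge devices (the data stays on each one).
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    devices.append((X, y))

# Federated averaging: broadcast global weights, train locally, average updates.
global_w = np.zeros(2)
for _ in range(20):
    local_weights = [local_sgd(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_weights, axis=0)

print("learned weights:", global_w)  # should approach [2.0, -1.0]
```

A production platform adds secure aggregation, weighting by dataset size and fleet orchestration, but the core loop is that simple: the data stays put and only the learning moves.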
The market wasn’t quite ready for a federated learning platform on its own. What it was ready for was a compelling application of that tech. Our work with edge devices in manufacturing pointed us toward data centres.
Several clients nudged us in that direction, and we realised we could simulate data centre energy use and train reinforcement learning agents to optimise cooling. That’s how OctaiPipe became what it is today: a platform quietly powered by federated learning, driving real-world energy efficiency.
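The combination Ivan describes, a simulated data centre plus a reinforcement learning agent, can be sketched in a few lines. The toy thermal model, states and reward below are invented for illustration; they are not OctaiPipe's simulator.

```python
# Toy sketch of RL-based cooling control (illustrative; the simulator, states
# and reward are invented). A tabular Q-learning agent learns how much cooling
# to apply so rack temperature stays in range while energy use is penalised.
import numpy as np

rng = np.random.default_rng(1)

N_TEMPS = 20          # discretised temperature states, roughly 18-38 C
ACTIONS = [0, 1, 2]   # cooling levels: off, low, high
Q = np.zeros((N_TEMPS, len(ACTIONS)))

def temp_to_state(t):
    return int(np.clip(t - 18, 0, N_TEMPS - 1))

def step(temp, action):
    """One time step of a crude thermal model: IT load heats, cooling removes heat."""
    heat_in = rng.uniform(0.5, 1.5)        # heat from the servers
    cooling = [0.0, 1.0, 2.5][action]      # heat removed at this cooling level
    new_temp = temp + heat_in - cooling
    energy_cost = [0.0, 0.3, 1.0][action]  # energy used by the cooling plant
    overheat = max(0.0, new_temp - 30.0)   # penalty for exceeding 30 C
    reward = -energy_cost - 5.0 * overheat
    return new_temp, reward

alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(2000):
    temp = rng.uniform(20, 28)
    for _ in range(50):
        s = temp_to_state(temp)
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        temp, r = step(temp, a)
        s2 = temp_to_state(temp)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])

# The learned policy: preferred cooling level for each temperature band.
print([int(np.argmax(Q[s])) for s in range(N_TEMPS)])
```

The real problem has far richer state (airflow, humidity, workload forecasts) and hard safety constraints, but the pattern is the same: learn a control policy in simulation, then deploy it against the live cooling plant.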
There’s a lot of uncertainty around AI in the wider population. What are some of the common myths or misconceptions you hear?
George: We often hear hesitancy to the tune of "I don’t know it, I don’t trust it, I don’t want to use it" – especially in manufacturing, where people have real concerns about how it fits into their work.
There’s also this idea that AI is something you just plug in and it works. But without upskilling and buy-in from the people using it, even the best solutions won’t stick. Our approach with clients is always: do you actually need an AI solution? If we can’t see a clear business case or ROI, we won’t build it. We’ve walked away from projects that didn’t stack up, and some of those clients have come back later with a more focused brief.
And honestly, AI still exists in a bit of a bubble. To people outside the space, it often just means "ChatGPT" now. But there’s a whole world beyond that. Ivan, for example, was working on ML way before it was cool.
Ivan: I studied cybernetics and control engineering back in 1994 and built a neural network to play the Prisoner’s Dilemma. The maths has been around for decades. What changed was the arrival of big data and greater computing power about ten years ago. That's why there's been this massive explosion.
Back when we were a consultancy, we used to avoid saying "AI" at all. We’d say ML, because that’s what we were actually doing. We even had a running joke: if you’re talking to investors, it’s AI. If you’re hiring, it’s ML. And when you’re actually building the thing, it’s logistic regression.
There's a lot of hype about it but, in most situations, you don't need anything particularly complicated to help you. You just need something that works.
‘AI safety’ can mean very different things depending on who you talk to. What does AI safety mean to you, and how does it show up in your day-to-day?
George: AI safety is a broad term, and its meaning really depends on who you’re talking to – practitioners, users, commercial teams, or policymakers will all define it differently. For us, it’s about building AI systems that are explainable and trustworthy.
Last year, we worked on an Innovate UK initiative to unpack this idea. We looked at how terms like “AI safety” and “AI ethics” are often used interchangeably, and we built a comprehensive framework around the principles that matter most to us – things like transparency, interpretability, and user confidence.
That thinking carried through into a project with P&G, where we focused on practical ways to make AI outputs understandable and reliable. Not just technically sound, but usable and trusted by the people interacting with them.
What’s been the biggest challenge in bringing an AI product like yours to market at scale?
George: Market maturity. We’ve always been slightly ahead, working on solutions 18 months before the market’s really ready for them. That edge has come from staying close to research and applying data science deeply, but it’s also meant we’ve had to shift focus to more immediate, tangible applications.
The challenge isn’t whether AI can help. It’s whether the market recognises the problem it’s solving, right now.
Eric: I’ll put it another way: 99% of people talking about AI don’t really know what they’re talking about. That makes scaling difficult, because most organisations don’t know what they’re buying or how to be ready for it.
It’s not just about technical integration. It’s about embedding AI into business processes. You can build a brilliant churn prediction model, but if the business doesn’t know what to do with that prediction, it’s useless. And too often, rather than saying “we weren’t ready,” people blame the application.
So, yes, it’s a maturity problem. Even now, as awareness grows, most people still struggle to frame the use case, or measure its impact in commercial terms.
Is it possible to have the traditional 3–5 year business plan in a sector like AI?
Eric: I’d say the idea that “everything changes too quickly for long-term planning” mainly applies to the world of LLMs, where there’s a well-funded arms race driving rapid iteration.
But outside of that bubble, there are slower-moving, longer-term trends – like edge computing and the ability to run machine learning on small, distributed devices. Those shifts take time because they rely on complex systems maturing and integrating together.
When George says we’ve been operating 18 months ahead of the market, I’d argue it’s often further. We’ve been building toward a distributed machine learning paradigm for years: models that don’t rely on massive, centralised datasets in the cloud, but instead train locally on devices like mobile phones. It’s a direction that supports data privacy, sovereignty, and trust. These are all issues that matter increasingly to individuals, businesses, and governments alike.
So yes, some areas of AI are evolving quickly. New LLMs are released every few weeks, but even those models are borrowing ideas like quantisation from edge AI, where constraints like limited memory have been long-standing design challenges.
What we’re seeing is the growing viability of scalable, distributed AI – something we’ve been ahead of the curve on for some time. And in that space, yes, you can plan for the long term because you're building on maturing trends, not chasing hype.
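For readers unfamiliar with the quantisation Eric mentions, here is a minimal sketch of the edge-AI version of the idea: post-training weight quantisation, where float32 weights are mapped to int8 to cut memory roughly fourfold at the cost of a small rounding error. The numbers and scheme are illustrative.

```python
# Minimal post-training weight quantisation sketch (illustrative only).
import numpy as np

weights = np.random.default_rng(2).normal(size=1000).astype(np.float32)

# Symmetric linear quantisation: scale so the largest weight maps to 127.
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)

# De-quantise when running inference on the device.
recovered = q_weights.astype(np.float32) * scale

print("memory: %d -> %d bytes" % (weights.nbytes, q_weights.nbytes))
print("max rounding error:", np.abs(weights - recovered).max())
```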
You’re now a team of 15. What has the talent attraction process been like, and how has it evolved?
Eric: We’ve shifted a lot as a company. In the early days, we were more service-based – building bespoke solutions for others – so we hired a lot of sharp, junior talent straight out of top universities. The recruitment focus then was all about raw technical ability.
Now, as we’ve transitioned into a deep-tech, product-led company, we look for experienced specialists with domain expertise. The process has become more about understanding how someone approaches design and problem-solving. Can they navigate complexity, not just code?
If someone shows up and can’t code, we’ll figure that out pretty quickly and they won’t be around for long. So it’s less about testing fundamentals and more about bringing in people who already know how to apply them at a high level.
Finally, you're obviously very busy people. What can we find you doing in your downtime?
Eric: Thinking about work.
George: Yeah, ringing each other to chat about what we need to do next!
Eric: But when I do switch off, I like exploring Paris with my wife or going climbing.
Ivan: I train in kung fu two or three times a week – I’ve been training with my current Shifu for eight years. Before that, it was Taekwondo, Wing Chun, boxing… martial arts have been a big part of my life since my twenties. I do a fair bit of mountain biking as well.
George: I used to be big on gym, swim, and exercise. But since my daughter arrived last July, most of my time off is spent on daddy daycare duty to give my wife a well-earned break!
OctaiPipe is one of the 54 innovative AI organisations featured in our 'Faces of AI 2025' report, which is available to download for free now.
At Generative, we work with AI-driven startups to scale innovation, refine strategy, and accelerate growth. If you're building in AI and looking for the right talent to take your company to the next level, let's talk.