(S3E8) Hello AI Summit 2026: AI is a co-pilot, not a pilot


1. Introduction

We’re at the Emirates Stadium, sitting pitchside with cameras, lights, and an absolutely massive LED screen looping AI content above our heads. This episode, the Impact of AI: Explored podcast is on tour, and we have landed at the Hello AI Collective’s AI Summit (https://www.helloaisummit.co.uk/).

In this episode of our “Hello AI” series, we sat down with our guest, Phil Sage, to talk about what AI really looks like when it hits the messy reality of business, people, and data.


Phil works at Sage in the accounting tech space and spends his time right at the intersection of finance, operations, and AI, helping build and roll out things like Sage Copilot to real customers.

Together we explored why AI is not a magic cure-all, why good data and governance matter more than ever, and how AI should act as a co‑pilot rather than a replacement for humans.
If you care about AI, accounting, or just trying to make your organisation a bit less chaotic and a bit more effective, this conversation will feel very close to home.


2. Meet the Guest

Phil Sage is a transformation and business technology leader focused on large‑scale change in finance and operations, with a particular emphasis on AI, ERP, and process automation.
Across his career he has worked on digitising finance processes, implementing new platforms, and now helping bring generative AI into products like Sage Accounting and the Sage Copilot assistant.

At Sage, Phil and his colleagues are building AI that is specifically trained on accounting concepts, workflows, and compliance, so it can actually understand ledgers, invoices, and month‑end close instead of just predicting generic text.
That means thinking deeply about how AI fits into accountants’ daily work, how it can orchestrate tasks, and how to keep humans firmly in control of key decisions.

Phil also brings a very human angle: he’s open about being dyslexic and uses AI to enhance his own writing, not to replace it, which gives him a grounded, practical perspective on what “co‑pilot” really means.


3. Setting the Stage

We’re at a point where AI has gone from hype to “everywhere” in the business stack: almost every piece of software you use now has an AI button somewhere.
But that hasn’t magically solved the old problems around bad data, siloed systems, or unclear processes—in many cases, it just exposes them faster.

In accounting, new regulations like Making Tax Digital for income tax in the UK now require SMEs over a certain turnover to keep digital records and report quarterly, which in theory should improve data quality.
In practice, adoption is lagging, businesses are rushing to comply, and the same old data issues risk being dragged straight into the AI era if we are not intentional about foundations.

This episode is about that tension: AI as a genuine game‑changer versus AI as yet another layer on top of shaky foundations, and what leaders need to do to land on the right side of that divide.


4. Episode Highlights

  • “AI is a fantastic tool, but it’s not going to solve every problem your business has.”
    Phil argues that the two biggest mistakes he sees are treating AI as a cure‑all and feeding it bad data, then being surprised when the outcomes disappoint.
  • “AI should be your co‑pilot, not your pilot.”
    We talk through Phil’s favourite analogy: you would never want ChatGPT to be the one actually flying your plane, but you absolutely want AI in the cockpit helping the pilot make better, safer, more efficient decisions.

5. Deep Dive – AI as Co‑Pilot, Not Replacement

A core theme in the episode is the idea that AI should enhance what humans do, not be the only thing they do.
Phil has seen companies rush to over-invest in each new wave of technology, whether cloud, the internet, or now AI, only to discover it doesn’t magically fix structural problems like poor data, weak governance, or the wrong strategy.

We explore the “co‑pilot” philosophy that Sage has adopted: AI embedded into workflows that surfaces insights, automates repetitive steps, and orchestrates tasks, but always with the accountant or business owner holding final approval.
AI can monitor data, flag anomalies, suggest actions, and help you get a handle on the chaos of modern work, but it should never be the sole authority, especially where compliance, risk, or human impact are involved.

Phil is very clear about the danger of building systems that aim to remove humans entirely from the loop.
He compares it to being told to get into a self‑driving car that has no accelerator and no brake: at some point, you want the emergency switch, the ability for a human to say “no, that’s not working” and intervene.


6. Real‑Life Stories & Examples

One of the most vivid parts of our conversation is Phil’s analogy of data as the foundations of a house.
If you build on solid ground with strong foundations, the house can last for decades; if you build quickly on a floodplain with a cheap builder, you should not be surprised when the house collapses in a few years—and that’s exactly what a lot of organisations are doing with AI.

We also get into his aviation analogy.
As he puts it, you’d never board a plane where a large language model casually announces it will be your captain, but having AI in the cockpit analysing routes, avoiding turbulence, and providing guidance to the human pilot is a powerful setup.

Phil shares how he uses AI personally as someone who is dyslexic: he writes his own reports first, then uses AI to enhance them—removing repetition, fixing structure, improving clarity.
The risk is that if you give AI too much freedom, it can “go rogue” and rewrite everything in a generic Californian tech‑bro voice, which is the exact opposite of authenticity; that’s why he is strict about using it as an enhancer, not an author.

We also talk about self‑driving cars and the chaos humans introduce into otherwise optimised systems.
Phil argues that if every car on the road were fully autonomous and coordinated, we’d probably never see a crash, but the moment you add one human driver with their own habits, distractions, and variances, the entire system becomes unpredictable again—and AI in business faces the same challenge.

Another powerful example comes from the labour market: a large accountancy firm announcing it will hire fewer graduates because of AI.
Phil flips that logic on its head: if anything, you should hire more graduates, because they understand the technology, know how to use it, and can help you get the best out of your AI investments, whereas the people who have been doing things the same way for 40 years can become the bottleneck.


7. Key Takeaways

  • AI is a game‑changing tool, but it is not a cure‑all for broken processes, weak strategy, or poor culture.
  • Good data is non‑negotiable: if you feed AI bad data, you get amplified bad outcomes, no matter how clever the model is.
  • AI should be a co‑pilot that augments human decision‑making, not a fully autonomous pilot with no human override.
  • Regulations like Making Tax Digital are pushing more businesses into digital record‑keeping, but rushed adoption can just recreate old problems in new systems if foundations are ignored.
  • Humans remain both the greatest strength and the biggest risk in AI systems; our unpredictability is exactly what makes fully autonomous setups so hard.
  • Governance, policies, and ethics matter, but people will still route around strict rules if AI gives them real competitive or time‑saving advantages, especially via personal devices and consumer tools.
  • The future value is not just in lots of separate AI features, but in the “AI above the AIs” that can connect tools like Teams, Monday.com, Salesforce, and LinkedIn into a coherent, orchestrated day for each person.
  • Younger, tech‑native employees can be an asset in making AI work effectively; cutting them because “AI will do their jobs” is likely to backfire.

8. Closing Thoughts

Spending time with Phil reminded us that AI is not some abstract future concept anymore—it’s already baked into the tools accountants and business leaders use every day, for better or worse.
Whether it actually delivers value now comes down to the unglamorous basics: data quality, governance, human judgement, and a clear sense of where AI should co‑pilot rather than take over.

We left this conversation energised by the possibilities but also very aware of how many open questions remain, from compliance and ethics to culture and skills.
In our next episode, we’ll keep building on this theme by looking at how AI agents might eventually tie all these siloed tools together and what that means for the way we plan and experience our workday.

If this episode sparked ideas or strong opinions we’d love to hear from you: how are you using AI as a co‑pilot in your own organisation, and where are you still keeping your hand firmly on the brake?

