(S3E11) Hello AI Summit 2026 – The Future of Agentic AI in Football and Beyond



1. Introduction

In this episode, we sat down with Joe Darkins, CTO of Burnley Football Club, live at the Hello AI Collective Summit at the Emirates Stadium in London.
We wanted to move beyond the hype and talk about what agentic AI really means for organisations today – from football clubs to enterprises – and how leaders can adopt AI without throwing basic IT and security fundamentals out the window.
What followed was a fast‑paced, honest conversation about grounded AI, governance, burnout, local models and what it will mean to manage teams of AI agents by 2027.


2. Meet the Guest

Joe Darkins is Chief Technology Officer at Burnley Football Club, where he leads technology strategy across both the football and business sides of the organisation.
He has a background in enterprise software and digital transformation, and is a regular speaker on topics like AI adoption, data, and agentic AI, including his recent talk “Agentic AI and the Future of Work” at the Hello AI Collective Summit.
At Burnley, Joe focuses on connecting technology to real business outcomes – from internal productivity pilots with Copilot and chatbots to early experiments in using AI on the football side for analytics and scouting.
What we love about Joe’s perspective is that he combines technical depth with a very pragmatic view of risk, governance and human responsibility in an era where AI can generate 95% of your code and rewrite your entire codebase in a single run.


3. Setting the Stage

We’re at a turning point where every organisation feels pressure to “do something with AI”, but many are leaping in without clarity on the problems they are actually trying to solve.
Basic principles like security, least privilege, governance and data protection – things we’ve all known for years – are suddenly being sidelined in the rush to experiment with agents and automation.
This episode is about slowing down just enough to ask better questions: What is the business problem? Where does AI actually add value? How do we keep humans in the loop? And how do we avoid burning out our people while the machines move at 100 miles an hour?
If you’re trying to move from AI curiosity to responsible adoption – especially in environments that don’t have giant dev teams – this conversation will feel very close to home.


4. Episode Highlights

  • “Map the technology to the business problem”
    Early on, Joe calls out the biggest issue he sees with AI projects today: organisations jumping straight to tools and models without clearly defining the problems they’re trying to solve.
    AI is transformative and powerful, but that doesn’t change the timeless rule of technology: if you don’t start from the business problem, you’re just adding complexity, not value.
  • Judgment, presence and accountability
    A central thread in Joe’s talk and our conversation is that humans still matter deeply in an agentic AI world – specifically in three areas: judgment, presence and accountability.
    AI can generate code, draft content and act autonomously, but humans still need to shape the context, be present with other humans, and ultimately take responsibility when things go wrong – whether that’s deleting a production database or letting an agent roam too freely in your infrastructure.

5. Deep Dive: Agentic AI, Groundedness and Human Responsibility

We spent much of the episode unpacking agentic AI – systems that don’t just respond to prompts but can take actions, call tools, and chain tasks together on their own.
Joe’s message was clear: this doesn’t have to be a doom‑and‑gloom story, but it does require that we rethink how people work, how we architect systems, and how we define responsibility.

A few key ideas from the deep dive:

  • AI is fast; humans are finite
    Agentic systems think and act at “social media speed”, yet the problems they tackle demand “reading a novel” levels of depth and attention.
    If we try to keep humans constantly in that high‑speed loop without the right boundaries, we risk cognitive overload and burnout well before lunch.
  • Judgment as a skill, not a checkbox
    Joe shared how he brought in an intern with little AI experience and spent three months shaping how he thought about prompts, context and evaluation.
    The lesson wasn’t about the tools; it was about teaching judgment: how to frame the right problem, give the right context to an AI system, and critically review outputs instead of blindly trusting them.
  • Accountability in an agentic world
    We talked about real‑world horror stories: agents deleting inboxes, rewriting entire codebases because they “didn’t like” the existing code, or having too much access to critical systems.
    Joe’s view is that accountability will become one of the most important human functions: someone has to own the agents they deploy, validate their behaviour, and make sure access controls, guardrails and policies are in place.
    That also means technology leaders need to design for accountability from day one – sandboxing agents, restricting permissions, and treating them like powerful but very junior team members who need supervision.

6. Real-Life Stories & Examples

The HR vs IT chatbot

At Burnley, Joe and his team have been running pragmatic pilots using Copilot and internal chatbots across different departments.
Two of the early experiments were an IT chatbot and an HR chatbot – both fairly “standard” use cases on paper.
The interesting twist: the IT chatbot saw relatively low adoption, while the HR chatbot quickly became far more valuable.
Joe’s hypothesis is simple and powerful: for IT questions, many people already lean on tools like ChatGPT or other public models for generic how‑to queries, so the internal bot doesn’t feel that unique.
But HR is different – the HR chatbot has access to internal, organisation‑specific information that you simply can’t get from the public internet, so it delivers unique value employees can’t find elsewhere.
It’s a great reminder that AI wins when it is grounded in your own data and context, not when it just replicates what’s already freely available.
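To make the “grounded in your own data” point concrete, here’s a toy sketch of what the HR chatbot pattern looks like under the hood: retrieve an organisation-specific passage and feed it to the model as context. The documents and the word-overlap scoring are invented for illustration; a real system would use proper retrieval over your actual HR knowledge base.

```python
# Toy illustration of "grounding": answers draw on internal documents,
# not on whatever a public model already knows.
# INTERNAL_DOCS and the scoring heuristic are hypothetical examples.

INTERNAL_DOCS = {
    "annual-leave": "Employees accrue 25 days of annual leave per year.",
    "laptop-policy": "Laptops are refreshed on a three-year cycle.",
}

def retrieve(question: str) -> str:
    """Pick the internal doc sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(
        INTERNAL_DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
    )

def grounded_prompt(question: str) -> str:
    # The retrieved passage gives the model organisation-specific context
    # that a public chatbot simply cannot have.
    return f"Context: {retrieve(question)}\nQuestion: {question}"
```

The value lives entirely in `INTERNAL_DOCS`: swap in generic IT FAQs and the bot adds nothing over ChatGPT, which is exactly the adoption gap Joe observed.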

Pilots on the football side

We also asked the obvious question: how is AI being used on the football side – scouting, analytics, performance data?
Joe confirmed they’re running pilots there too, with some results that are “expected” and others that are “slightly exciting”, although he couldn’t go into detail.
The takeaway: even in a high‑stakes, competitive environment like professional football, the same approach applies – start small, run targeted pilots, validate the results, and only then scale up investment.

Living with agents at home

Joe described how he runs an open‑source agent (OpenClaw) on a completely separate, sandboxed laptop that lives on a bedside table, safely isolated from his email, documents and core systems.
He brings files to the agent, or gives it read‑only access, rather than letting it roam across everything he owns.
It’s a simple mental model any organisation can adopt: treat agents like untrusted code running in a sandbox until you fully understand their behaviour and limitations.
Contrast that with stories of people giving an agent full access to their cloud storage, emails and keys on day one – and then being surprised when things go wrong.
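Joe’s “bring files to the agent, read‑only” mental model can be sketched in a few lines. This is a minimal illustration, not how OpenClaw itself works: the agent only ever sees an explicit allowlist of files, and writes are refused outright.

```python
from pathlib import Path

class SandboxedAgent:
    """Illustrative wrapper: the agent reads only an explicit allowlist
    of files instead of roaming the whole filesystem."""

    def __init__(self, allowed_files):
        # Resolve paths up front so symlinks can't widen the sandbox.
        self.allowed = {Path(p).resolve() for p in allowed_files}

    def read(self, path: str) -> str:
        target = Path(path).resolve()
        if target not in self.allowed:
            raise PermissionError(f"agent may not read {path}")
        return target.read_text()

    def write(self, path: str, content: str):
        # Read-only by design: every write is refused.
        raise PermissionError("agent has read-only access")
```

The same principle scales up: a separate machine, a container, or a restricted service account are all ways of saying “deny by default, grant explicitly”.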

Local models, cost and daisy‑chaining

We also explored the rise of local models, driven in part by skyrocketing token costs on cloud‑hosted models.
People are burning through their monthly allowances on large models like Claude Opus or GPT‑class systems just doing coding tasks, and local models are becoming an attractive alternative for certain workloads.
Joe runs 8B and 12B parameter models locally on a GPU with 16GB of VRAM, which works great for simpler tasks but clearly lags behind frontier models for complex reasoning.
His prediction is that we’ll see a future where specialised local models are daisy‑chained together – each one handling a specific capability – instead of a single monolithic model doing everything.
You pick the right model for the job: you don’t need an expensive, extended‑reasoning model to answer “What is the capital of France?”, but you might need one to reason deeply about strategy or architecture.
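A minimal sketch of what that routing could look like in practice. The model names (`local-8b`, `frontier`) and the keyword heuristic are invented for illustration; real routers use classifiers or cost/latency policies rather than word lists.

```python
# Hypothetical model router: send short factual questions to a cheap
# local model and reserve the frontier model for deep reasoning.
# Model names and the heuristic below are placeholders, not real APIs.

SIMPLE_KEYWORDS = {"what", "who", "when", "where", "define", "list"}

def pick_model(prompt: str) -> str:
    words = prompt.lower().split()
    # Crude heuristic: short questions opening with a factual keyword
    # go local; anything long or open-ended goes to the big model.
    if words and len(words) <= 12 and words[0].rstrip("?,.") in SIMPLE_KEYWORDS:
        return "local-8b"
    return "frontier"
```

Daisy‑chaining is the same idea taken further: each specialised model handles the step it is good at, and an orchestrator passes outputs along the chain.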

Open ports, OpenClaw and malicious models

We couldn’t ignore the security side.
We discussed the spike in exposed services visible on tools like Shodan when people started experimenting with agentic systems like OpenClaw without fully understanding what they were opening up to the internet.
Give an agent your API keys, credit card details or unrestricted access to your internal network, and you’ve effectively handed those over to anyone who can reach that endpoint.
On top of that, there’s the emerging risk of malicious models – systems trained with harmful intent baked in.
The conclusion is straightforward but critical: governance, security and least‑privilege principles are not optional extras in the age of agents; they’re the foundations that keep the whole thing from collapsing.


7. Key Takeaways

  • Start with the problem, not the model.
  • Don’t abandon IT fundamentals – security, least privilege, sandboxing and governance still apply.
  • Humans remain crucial for judgment, presence and accountability, especially with agentic AI.
  • Run small, well‑defined pilots; prove value before you scale.
  • Ground your AI in unique, internal data where it can add real value (like HR, not generic IT FAQ).
  • Treat agents like powerful but untrusted juniors: sandbox them, monitor them, and control access.
  • Use the right model for the job – combine cloud, local and specialised models instead of defaulting to the biggest one.
  • Think ahead to 2027: who in your organisation will be able to manage 10 agents effectively, and how will you identify and support those people?

8. Closing Thoughts

Talking with Joe at the Emirates really brought home how fast this space is moving – and how easy it is for organisations to chase the shiny thing and forget the basics. Agentic AI isn’t something to fear, but it is something we need to approach with discipline: clear problems, strong guardrails, and a deep respect for the humans in the loop. If you’re leading AI initiatives today, now is the time to ask: where can we run safe, meaningful pilots, what data truly differentiates us, and who are the people in our organisation who can grow into “agent managers” in the next one to two years?

We’d love to hear how you’re experimenting with agents, local models and governance in your own environment – drop us a message, share your stories, and let us know what you’d like us to explore in a future episode.

