(S3E10) Hello AI Summit 2026 – Amplify, Don’t Replace: The Human Side of AI Adoption

Introduction

In this episode of our “Impact of AI” podcast, we sat down at the Hello AI Collective Summit in London’s Emirates Stadium with coach and trainer Stephen Gresty to talk about the human side of AI adoption. Stephen comes from a background in applied positive psychology rather than traditional tech, which made for a very different kind of AI conversation. Together, we explored how leaders can bring their people with them on the AI journey, avoid fear and passive resistance, and keep purpose and humanity at the center of transformation.

Meet the Guest

Stephen Gresty is a coaching and training consultant with an MSc in Applied Positive Psychology, working at the intersection of people, performance, and change. He helps organisations and individuals navigate transformation by focusing on mindset, resilience, strengths, and empathy rather than just processes and tools. Over the years Stephen has worked with leaders and teams across industries, delivering workshops, coaching programmes, and training that translate positive psychology into everyday behaviour and culture. At the Hello AI Collective Summit, he joined as a speaker and facilitator, bringing a people-first lens to AI, change, and organisational purpose.

Setting the Stage

We recorded this episode live at the Hello AI Collective Summit, surrounded by leaders who are trying to make sense of AI while keeping their organisations stable and effective. The timing could not have been more intense: large-scale layoffs, massive infrastructure bets, and constant hype are creating a mix of excitement and anxiety around AI. Rather than talking about models and GPUs, we wanted to ask a different question: what does all of this feel like for the humans in the system, and how do you lead in a way that doesn’t leave people behind?

In this post, we walk through Stephen’s perspective on fear, purpose, and psychological safety in the age of AI. You’ll get a people-first toolkit for thinking about AI in your organisation, plus some very practical advice for leaders and teams on where to start and how not to get overwhelmed.

Episode Highlights

  • “AI doesn’t exist without HI, human intelligence.”
    Stephen’s core message is that AI should amplify what humans do, not replace it. When organisations treat AI as a pure efficiency play, they risk dumbing people down instead of building their cognitive and creative muscles. The real opportunity is to develop human intelligence alongside artificial intelligence so the two reinforce each other.
  • From fear and resistance to purpose and progress
    Stephen pointed out that one of the biggest mistakes organisations make is assuming “everyone will be fine” and racing ahead without bringing people on the journey. When people feel left behind, they become passive resisters who quietly find ways for things not to work. His antidote is deceptively simple: slow down enough to reconnect everyone to purpose – why the organisation exists, and what AI is there to support – so people feel they still matter.

Deep Dive: The MORSE Philosophy and AI

One of the most powerful parts of the conversation was Stephen’s MORSE philosophy, drawn from positive psychology: Mindset, Optimism, Resilience, Strengths, and Empathy. Rather than treating “growth mindset” as a buzzword or a club you either belong to or you don’t, Stephen breaks it down into something situational and practical. It’s not about labelling people as “negative” or “not growth-minded,” but about asking what a growth mindset looks like in this specific context of AI and change.

Optimism, in his framing, is not blind positivity; it is a realistic belief that the future can be better and that we have agency in shaping it. Resilience, similarly, is not about “bouncing back” to how things were but “bouncing forward” to something new after each challenge or learning experience. When you apply MORSE to AI, you get a very different implementation approach: you design rollouts that respect different learning styles, support people when they feel overwhelmed, and treat mistakes and confusion as part of the process, not as personal failings.

Stephen also emphasised psychological safety as a non-negotiable for AI adoption. If people don’t feel safe asking “basic” questions or admitting they don’t understand a new AI tool, they either disengage or pretend to keep up while silently resisting. Leadership’s role is to create an environment where it’s normal to say, “I don’t get this yet, can you walk me through it?” instead of being pointed to yet another tutorial and left alone.

Real-Life Stories & Examples

A theme throughout the conversation was the speed of change and how differently people experience it. Stephen shared the story of getting his first laptop and feeling completely lost until someone in IT reframed the file system as a simple filing cabinet with drawers. That metaphor stuck with him for years and became a symbol of what good, empathetic explanation looks like. When someone humanised the technology, the fear subsided and learning could finally happen.

We also talked about our own experiences navigating change. On the one hand, many of us in tech feel like we’re at the bleeding edge, yet we still get overwhelmed by the volume of AI news and tools and wonder if we’re falling behind. On the other hand, there are colleagues, friends, or even parents who only know “he works with computers” and are now hearing about AI everywhere. One of the practical points we landed on was this: the best way not to get left behind is simply to start using AI in small, low-risk ways that feel safe, and to do it at your own pace rather than trying to match LinkedIn’s highlight reel.

A particularly striking example Stephen shared was someone who took their personality profile, fed it into an AI tool, and then let the model tell them what they should be doing with their life. On the surface it sounded clever and efficient; underneath, it was a worrying sign of outsourcing agency and decision-making to a system that doesn’t know context, nuance, or personal values. This, for Stephen, is one of the biggest dangers: allowing AI to become a crutch that stops us from thinking deeply, reflecting, and making our own choices.

Key Takeaways

  • AI should amplify human intelligence, not replace it.
  • The biggest risk in AI projects is leaving people behind and creating passive resistance.
  • Purpose matters: people need to understand why AI is being introduced and how it connects to their role.
  • Start small with AI: pick specific use cases instead of trying to “implement AI everywhere.”
  • Psychological safety is critical so people feel comfortable asking questions and learning at their own pace.
  • The MORSE philosophy – Mindset, Optimism, Resilience, Strengths, Empathy – offers a practical people framework for AI change.
  • Resilience is about bouncing forward from challenges, not snapping back to the old normal.
  • Over-reliance on AI for decisions can “dumb us down”; we have to maintain our own cognitive process and critical thinking.

Closing Thoughts

Recording this episode at the Emirates Stadium, watching Dan and the Hello AI Collective team bring together leaders, speakers, and practitioners, really underscored how fast the AI narrative is evolving. We left the conversation with Stephen reminded that beneath the strategies, tools, and infrastructure, this is ultimately a human story about fear, hope, learning, and purpose. If we get the human side right, AI becomes a powerful collaborator; if we ignore it, even the best technology will struggle to land.

In our upcoming episodes, we’ll keep exploring that intersection of AI and people – from governance and skills to creativity and ethics – with voices who live this work every day. If this conversation sparked ideas or questions for you, we’d love to hear how your organisation is approaching the human side of AI and what’s working (or not) in your context.
