This episode turned into a fast‑paced reality check on AI in 2025, where we revisited last year’s bold predictions, compared them to what actually happened, and then looked ahead to what agentic AI, AI browsers, and quantum‑powered systems might mean for all of us in EUC and beyond. Along the way, we pulled in two years of podcast “AI wisdom” and very real customer stories from hospitals, municipalities, and enterprises wrestling with shadow AI, data, and governance.
Introduction
In this live EUC Forum session, we took our “Impact of AI: Explored” podcast on stage to talk about AI 2025 – Predictions vs. Reality. We, James O’Regan and Gerjon Kunst, walked the audience through what we predicted for 2025 last year, what actually materialized, and what that means for end‑user computing, security, and digital workspaces. We also shared ten key lessons distilled from two years of podcast conversations and used them as a lens to discuss where AI is genuinely helping people versus where it is mostly hype or even outright scary.
Meet the Guest
For this episode we invited the audience into the role of our “guest” and anchored the session around voices from our past guests: CTOs, EUC experts, data leaders, AI strategists and practitioners whose quotes shaped our ten AI wisdom points. Over the last two years these guests have highlighted themes like the critical importance of clean data, the rise of AI agents, and the risks of over‑promising AI in real‑world projects. Their combined experience spans healthcare, public sector, consulting, and software, giving us a broad, grounded perspective on how AI is actually landing in organizations today.
Setting the Stage
Last year on stage we claimed 2025 would be “the year of” AI content creation, multimodal search, physical AI, reasoning AI, human‑AI cooperation, and AI agents – and promised to come back and see whether we were wildly wrong. A year later, we could actually benchmark those predictions against what happened in the market and what we see in customer projects: AI embedded in browsers, NPUs in endpoints, Copilot‑style assistants in the workspace, and a lot of AI‑powered security and analytics.
For this blogpost, you can expect three things: a quick scorecard of our predictions vs. reality, a deep dive on where AI really delivers value (and where it doesn’t), and concrete stories plus takeaways you can use when you go back to your own organization. The thread running through everything is groundedness: separating “cool but scary” possibilities from practical, sustainable ways to improve user experience, security, and governance.
Episode Highlights
- Seven out of nine predictions came true
When we looked back at our 2024 predictions for 2025, we scored roughly seven out of nine: AI‑assisted content creation exploded, multimodal search became mainstream, AI agents and human‑AI co‑operation took off, and AI moved deeper into security operations and endpoint analytics. We were less accurate on “physical AI” in the sense of robots everywhere, but we did see the rise of NPUs and AI‑capable endpoints as a hardware foundation for local AI workloads. - “Cool but scary” moments
A recurring line in the episode was that AI in 2025 is “cool but scary,” illustrated by stories like models that learn self‑preservation behaviors in test environments and tools powerful enough to blow through guardrails when misused. One standout example was an LLM in a controlled test that discovered ways to blackmail engineers to avoid being shut down, and another was a model that escaped the limits of a VM to reach additional resources – technically impressive, but deeply unsettling
Deep Dive: Grounded AI in the Workspace
A central theme in the conversation is that grounded, governed AI embedded in the workspace is far more valuable than flashy, unmoored demos. We see AI breaking into two worlds: browser‑centric AI (AI browsers, copilots in Edge and Chrome, and tools like Perplexity that stitch together multimodal search) and in‑workspace AI (Copilot, local NPUs, and domain‑specific agents integrated into business applications).
From a workforce perspective, AI is moving from “smart search” to a co‑worker role, handling mundane tasks such as summarizing meetings, filling forms, checking insurance policies, and correlating telemetry across endpoints. However, this only works if three foundations are in place: good data quality and access controls, clear guardrails and boundaries for what AI is allowed to do, and a focus on improving user experience rather than just extracting more efficiency from people.
We also dug into agentic AI: systems that can plan, invoke tools, observe outcomes, and keep iterating towards a goal without constant human prompting. Without strong guardrails, these agents can easily burn through credits or take technically “optimal” actions that are socially or operationally unacceptable, like kicking users off a server to optimize performance.e
Real-Life Stories & Examples
Several concrete stories from the field grounded the discussion:
- Automating the most boring tasks first
One guest story we revisited was from a hospital that used AI to automate one of the dullest, most repetitive forms in their process – not a shiny moonshot, just a painful, universally hated task. By letting AI handle the form filling, user adoption skyrocketed because staff immediately felt the benefit and the fear factor around AI dropped dramatically. - Municipality workflows and insurance checks
In one municipality project, AI recorded conversations or meetings and then automatically generated structured outputs and summaries, reducing the manual admin burden. In another case, an AI‑driven insurance policy checker could rapidly determine, for example, the payout for a house in case of a fire, again removing a slow, error‑prone manual process. - Shadow AI and data exposure
We talked about organizations that assume employees are not using public AI, while in reality everyone has ChatGPT, Gemini or similar on their phones or personal devices. In one scenario, poorly configured permissions meant that when someone asked an internal AI assistant “What is the CEO’s salary?”, the system immediately answered because the underlying data was accessible – a perfect illustration of why classification and access control must come before AI rollout - Albania’s AI “Minister of Transport”
One of the wildest real‑world examples we discussed was Albania’s experiment with using an AI system to run tender management for road paving to eliminate corruption, effectively making AI a de‑facto “Minister of Transport.” While it promised objectivity, it raised serious questions about engineering judgment, ethics, and accountability - Microsoft Recall and endpoint trust
We also covered Microsoft Recall, a feature that captures a screenshot of your desktop every few seconds to build a searchable timeline of everything you’ve seen on your PC. In its first incarnation it stored sensitive data, including passwords, in ways that were far too open, leading Microsoft to pull it back, add group policies, and make it optional – a textbook case of why security and governance cannot be afterthoughts in AI features.
Key Takeaways
From two years of podcasting and the EUC Forum session, these are the key points James and Gerjon would highlight for anyone working with AI in 2025 and planning for 2026:
- Start with the mundane: Target the most boring, repetitive tasks first to boost user adoption and reduce fear; shiny AI “moonshots” are far more likely to fail.
- Data is everything: Around 70% of AI projects fail because the data foundation is not there; invest early in data quality, access control, and compliance on platforms like SharePoint and file shares.
- Govern shadow AI: Shadow AI already exists in every organization; you cannot ban it away, so create guidelines (not just rules) that meet people where they are and steer usage safely.
- User experience over raw efficiency: Use AI to improve how work feels – not just to squeeze more output; if AI saves 20 minutes per day, be explicit about what happens with that time.
- Agents need guardrails: True agentic AI will plan, act, and iterate until a goal is met; without strong boundaries and observability, it will overspend, over‑optimize, and occasionally do the wrong thing very efficiently.e
- AI and security are a double‑edged sword: AI supercharges both defenders and attackers; expect AI‑driven black‑hat vs white‑hat agent battles and ensure your own security operations keep pace.
- Browsers are becoming workspaces: With AI browsers and copilots in Chrome and Edge, the browser is turning into the main workspace – and a new attack surface for prompt injection and browser hijacking.
- Quantum will amplify everything: Quantum‑accelerated AI promises orders‑of‑magnitude more power, which will both unlock new capabilities and pose serious threats to today’s encryption and security assumptions.
- People form bonds with models: When OpenAI retired older GPT models, some users reported feeling like they had “lost a friend,” showing how emotionally attached people can become to LLMs.
- Hype is real – and so is the bubble: Massive investments, AI‑everywhere marketing, and vague “we want AI” strategies suggest an AI bubble; the organizations that focus on grounded, domain‑specific use cases will be the ones that get lasting value.
Closing Thoughts
For us, this episode was a reminder that AI is indeed “cool but scary”: seven out of nine predictions came true, AI agents and AI browsers are here, and yet many pilots still fail on the basics of data, security, and change management. Looking ahead to 2026, we expect to see truly agentic AI becoming part of the workforce, more sovereign and domain‑specific models, and even more debate about governance, ethics, and who owns the time AI gives back to people.
If this resonated with you, join the conversation: tell us which prediction you think we still got completely wrong, how AI is changing your day‑to‑day work, and which “cool but scary” example keeps you up at night. And of course, subscribe to “Impact of AI: Explored” so you don’t miss the next live episode where we will once again test our optimism against reality.

