(S3E1) Shadow AI, Agentic Browsers & Zero Trust: Why AI Can’t Get Fired (But You Can)

This episode of Impact of AI: Explored dives into shadow AI, AI browsers, and why security-first doesn’t have to mean saying “no” to innovation. Together with our guest, Josh Woodruff, we explore how organizations can safely embrace agentic AI without slowing the business down or putting it at risk.


1. Introduction

In this episode, we (James O’Regan and Gerjon Kunst) sit down with Josh Woodruff, CEO and Founder of MassiveScale.AI, to unpack the rise of shadow AI in organizations and what it really means for security, culture, and leadership.
We talk about everything from AI browsers and agentic AI to zero trust, culture change, and why “you won’t be replaced by AI, but by someone using AI” now applies to companies as much as to individuals.


2. Meet the Guest

Josh Woodruff is the Founder and CEO of MassiveScale.AI (Massive Scale Consulting), where he helps organizations accelerate AI adoption using a security-first, zero trust approach.
He is a seasoned cybersecurity and cloud leader, an IANS Faculty member and CSA research fellow, and author of Agentic AI + Zero Trust: A Guide for Business Leaders, a jargon-free book he co-wrote with his wife to help decision makers understand agentic AI and zero trust without needing a deep technical background.


3. Setting the Stage

Shadow AI has exploded because AI landed in people’s personal lives first—through tools like ChatGPT—before security and IT had any real chance to put guardrails in place, leading employees to use AI at work long before governance existed.
As organizations now scramble to respond, the real challenge is no longer whether people use AI, but how to turn that reality into secure, governed, and business-aligned usage instead of pretending it can be banned away.


4. Episode Highlights

  • Highlight 1 – “AI can’t get fired, but you can”
    Josh talks about accountability in a world of AI browsers and agentic systems: even when AI acts under your identity at machine speed, it’s still the human who is held responsible when things go wrong.
    As he puts it, AI won’t get fired, but you will—especially when organizations deploy powerful tools without proper governance, monitoring, and identity-based controls.
  • Highlight 2 – From “Department of No” to “Department of How”
    A recurring theme is how CISOs and IT leaders must shift from blocking innovation to enabling it, using zero trust and clear policies as accelerants rather than brakes.
    Josh describes how security teams can become heroes by saying, “Here’s how we are going to do this safely,” instead of “We don’t do that here,” aligning security with business objectives instead of opposing them.

5. Deep Dive: Shadow AI, AI Browsers, and Zero Trust

Shadow AI is what happens when people use AI tools—LLMs, agents, AI browsers, and SaaS AI features—outside corporate oversight, licensing, governance, or security controls, often because the official path is too slow or doesn’t exist yet.
Josh likens it to the early days of cloud: innovation ran ahead of security, and now AI is repeating the same pattern, with employees using these tools to get work done whether policies exist or not.

A big part of the discussion centers on AI browsers and agentic AI, which can act under a user’s identity, log into systems like M365, and take actions at machine speed across anything that identity can access—creating a risk profile that traditional controls and policies simply weren’t designed for.
Here, zero trust becomes the critical model: applying security to identity and authorization (who can do what, where, and when) instead of just networks or perimeters, and treating AI agents as first-class identities that get their own policies, guardrails, and kill switches.
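To make the "agents as first-class identities" idea concrete, here is a minimal sketch of what identity-scoped authorization with a kill switch could look like. All names and structures below are illustrative assumptions, not taken from any specific product or from Josh's book:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A hypothetical first-class identity for an AI agent, separate from its human owner."""
    name: str
    owner: str                       # the human who remains accountable
    allowed_actions: set = field(default_factory=set)
    killed: bool = False             # kill switch: revokes all access at once

    def authorize(self, action: str) -> bool:
        # Zero trust: every single action is checked; nothing is implicitly trusted.
        if self.killed:
            return False
        return action in self.allowed_actions

# An agent gets its own narrow policy instead of inheriting everything its owner can access.
agent = AgentIdentity(
    name="mail-drafting-agent",
    owner="j.woodruff",
    allowed_actions={"read_inbox", "draft_email"},
)

print(agent.authorize("draft_email"))   # True: explicitly granted
print(agent.authorize("send_payment"))  # False: never granted, even though the owner may have it

agent.killed = True                      # pull the kill switch
print(agent.authorize("draft_email"))   # False: all access revoked instantly
```

The point of the sketch is the asymmetry: the human identity may be broadly privileged, but the agent acting on its behalf only ever sees an explicit, revocable subset.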


6. Real-Life Stories & Examples

Josh shares that in almost every new customer engagement, even when organizations claim they have banned AI, his team still finds shadow usage—because people are using AI to write emails, review reports, and automate tasks simply to get their jobs done more efficiently.
He also notes that enterprises with behavioral monitoring can sometimes detect machine-driven browser activity by its velocity and volume compared with a user’s baseline, though this kind of capability is typically only available in larger, more mature organizations.
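The velocity check Josh describes can be sketched in a few lines: compare an identity's recent action rate against that user's historical baseline and flag machine-speed behavior. The function name and threshold here are illustrative assumptions, not the logic of any real monitoring product:

```python
def looks_machine_driven(actions_per_minute: float,
                         baseline_per_minute: float,
                         factor: float = 20.0) -> bool:
    """Flag activity whose velocity far exceeds the user's historical baseline.

    A human might average a handful of browser actions per minute; an agent
    acting under the same identity can issue hundreds. The multiplier is an
    illustrative threshold, not a tuning recommendation.
    """
    return actions_per_minute > baseline_per_minute * factor

print(looks_machine_driven(300, 5))  # True: ~60x the human baseline, likely agentic
print(looks_machine_driven(8, 5))    # False: within normal human variation
```

Real behavioral analytics would of course use richer baselines (time of day, action mix, volume), which is why, as Josh notes, this capability tends to live in larger, more mature security organizations.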

We discuss a healthcare example where automating a mundane, “TPS report”-style task with AI led to huge adoption because it removed a hated part of people’s daily work, showing that the fastest way to drive AI usage is to solve real pain rather than talk strategy in the abstract.
Josh connects this with why he wrote his book for business leaders: fear and jargon are slowing decision makers down, while concrete, relatable examples of value—like accelerating research or eliminating repetitive form-filling—unlock buy-in and cultural change.


7. Key Takeaways

  • Shadow AI is already inside every organization; banning AI doesn’t work, but clear, simple guidelines and open dialogue do.
  • AI browsers and agentic AI introduce a new risk class because they operate at machine speed under human identities, demanding zero trust and identity-based controls.
  • Security must evolve from the “department of no” to the “department of how,” using guardrails, audit trails, and kill switches to enable safe acceleration instead of blocking it.
  • Early adopters inside the business are invaluable partners: involve them, listen to their use cases, and then secure the paths they actually want to use, rather than inventing use cases from the top down.
  • Cultural and behavioral change are harder than the technology; organizations need a top-down willingness to innovate and a safe space for experimentation.
  • The path of least resistance should also be the most secure one: make the official way to use AI the easiest and most effective option, or people will keep finding their own tools.
  • Companies that delay AI adoption risk being outpaced by competitors, because AI value compounds over time—both for individuals and for organizations.

8. Closing Thoughts

For us, the core message of this episode is simple: organizations must stop fighting the AI wave and start learning how to surf it safely—because you won’t be replaced by AI, but by someone (or some company) that uses it well.
In upcoming episodes, we’ll continue exploring practical AI topics like groundedness, AI browsers, and real-world adoption stories, and we’d love to hear your experiences and questions about shadow AI, security, and culture change in your own organization. Share your thoughts with us so we can bring them into future conversations.
