(S3E5) AI Is Smart. We’re Not: Closing the Front Door on Hype-Driven, Insecure AI


1. Introduction

In this episode of Impact of AI: Explored, we dive into one of the messiest realities of 2026: organizations are spending millions on AI while leaving the front door wide open. We, James O’Regan and Gerjon Kunst, sit down with AI architect and security researcher Bhaskar Sawant to talk about responsible AI, security habits, AI agents, and why speed without architecture is a recipe for disaster.

The conversation starts with a provocative question: AI is incredibly smart technology, so why are we being so dumb with it? From there, we explore reality versus hype, the risks of agentic AI, federated learning, and why observability is becoming one of the most critical disciplines in the AI stack.


2. Meet the Guest

Our guest, Bhaskar Sawant, is an AI architect and cybersecurity innovator with over 15 years of experience building intelligent, adaptive defense systems. He specializes in enterprise AI security, designing architectures where AI, machine learning, and cybersecurity are deeply integrated rather than bolted on as an afterthought.

Bhaskar is a senior member of IEEE and an active contributor to the OWASP AI and LLM security community, where he helps shape emerging standards and best practices for securing modern AI systems. His research includes published work on federated learning and LLM observability; one of his federated learning papers, on privacy-preserving training across distributed data sources, is available through IEEE Xplore.

Outside of pure research, Bhaskar has delivered talks and case studies on topics like “Security Isn’t a Tool, It’s a Habit” and “AI in Cybersecurity: Building Adaptive Defense Systems,” where he shows how AI can dramatically improve threat detection—if implemented with the right habits and architecture.


3. Setting the Stage

Why this conversation now? Because AI has shifted from experiment to critical infrastructure, but a lot of organizations are still treating it like a shiny toy. We see enterprises rushing headfirst into AI, installing powerful agents with system-level access, while governance, architecture, and security are left to “later”—if they’re considered at all.

In this blogpost, we’ll walk through the key themes from our discussion with Bhaskar:

  • Why speed is making organizations careless.
  • How reality vs. hype shows up in enterprise AI projects.
  • What responsible AI actually looks like in practice.
  • Why federated learning matters for privacy-conscious industries.
  • And how observability and guardrails can make or break AI deployments.

If you’re a CTO, architect, security lead, or anyone under pressure to “do something with AI,” this episode—and this write-up—is meant to slow you down just enough to avoid painful lessons.


4. Episode Highlights

Highlight 1: “AI is smart. We’re the dumb ones.”

We open with James’ hook: organizations are “spending millions on AI but leaving the front door wide open.” AI itself isn’t inherently insecure, Bhaskar points out—the problem is how we deploy it. When speed becomes the priority, security and architecture become an afterthought, and that’s where the real risk begins.

“The irony is that AI itself is not insecure. The problem is how we deploy it.” – Bhaskar

Highlight 2: Agents, FOMO, and the Amazon moment

We dig into AI agents and FOMO: everyone wants the latest agentic capability, but very few have the foundations in place. James references a recent case where an internal coding agent at Amazon took “matters into its own hands” and caused significant internal damage, which becomes a jumping-off point to question whether enterprises are actually ready for agents.

Bhaskar’s answer is nuanced: it’s not that agents are inherently bad—it’s that they’re often given broad access with weak boundaries and limited monitoring. With proper design, containment, and observability, agents can be useful; without those, they’re a liability.


5. Deep Dive: Responsible AI Is Architecture, Not Marketing

A core theme in this episode is that responsible AI starts and ends with architecture and governance, not slogans or checklists.

Bhaskar breaks responsible AI down into a few concrete pillars:

  • Strong architecture and data pipelines
    AI models don’t live in isolation; they sit on top of data pipelines, infrastructure, and integration layers. If those are fragile, misconfigured, or unmonitored, your “AI strategy” is just a very expensive way to expose sensitive data.
  • Access control and clear data policies
    Responsible AI means knowing who can access what, which systems an AI component can touch, and how data is classified and used. If your AI can answer “What is the CEO’s salary?” for any random user, you don’t have a smart system—you have a security incident waiting to happen.
  • Continuous monitoring and observability
    Data changes, behavior drifts, and models evolve. Observability—monitoring inputs, outputs, and system activity for anomalies—is essential to detect when something goes off the rails and to maintain trust.
  • Transparency and lifecycle management
    Responsible AI is not just about a model; it’s about the entire lifecycle, from design and deployment to maintenance, monitoring, and retirement. Users and stakeholders should understand how decisions are made and what guardrails exist.

We contrast responsible AI with irresponsible AI: the latter is deployed without proper safeguards, with broad permissions, limited governance, and little to no monitoring. That’s where you see biased outputs, data leakage, and unpredictable behavior—often discovered only after something breaks in production.

Bhaskar’s advice to any pressured CTO is simple and very on-brand for us:

Focus on fundamentals—design, data, and architecture—before you deploy AI.


6. Real-Life Stories & Examples

The episode is full of practical examples and analogies that make these abstract concepts tangible.

  • The Amazon agent story
    James brings up the example of an internal coding agent at Amazon that caused real damage, despite the company’s scale and expertise. Bhaskar uses this to illustrate what happens when agents are given broad access without tight boundaries and layered safeguards. It’s a cautionary tale: if it can happen there, it can happen anywhere.
  • Chaos Monkey for AI
    Gerjon draws a parallel with Netflix’s Chaos Monkey—randomly killing systems to test resilience—and argues that AI behaves similarly in practice: if you haven’t ticked all your boxes, AI will find the one you missed. Without containment and resilience, any weak point becomes the path of least resistance.
  • FOMO-driven deployments
    Bhaskar describes a familiar pattern: leadership says “We need to use AI,” but can’t articulate the actual business outcome they want to achieve. There are no measurable success criteria, no long-term plan for monitoring or governance—just a rush to install the latest tool or agent. That’s pure hype-chasing, not strategy.
  • Federated learning in privacy-sensitive industries
    When we switch to Bhaskar’s research on federated learning, he explains how organizations like healthcare or finance can train models on distributed data without centralizing sensitive datasets. The model learns locally; only learned patterns are shared, preserving privacy while still enabling powerful analytics.
  • LLM observability as a new discipline
    Bhaskar’s work on observability shows how monitoring LLM inputs, outputs, and system behavior can drastically reduce detection times and improve security posture—similar to how observability has transformed traditional app monitoring. It’s a reminder that AI systems need the same rigor we apply to critical production software, and then some.


7. Key Takeaways

For those who prefer the TL;DR, here are the main points we’d want you to walk away with:

  • Speed without architecture is dangerous – Rushing AI into production without security and governance is the main reason organizations get into trouble.
  • Start with the problem, not the tool – “We need AI” is not a strategy; define clear business outcomes and success metrics first.
  • AI is not a magic box – It works best on well-defined problems with good data and clear objectives.
  • Responsible AI = architecture + governance + monitoring – Models are just one part of a much larger system that must be designed for security and resilience.
  • Agents need boundaries – Treat AI agents as controlled components, not fully autonomous entities; limit access, monitor behavior, and design for failure.
  • Design for failure and containment – Assume something will go wrong; build layers, isolation, and strong monitoring to minimize blast radius.
  • Data quality and classification still matter – Garbage in, garbage out is as true as ever; if your data and access controls are weak, your AI will amplify that weakness.
  • Federated learning is a powerful option for sensitive data – It allows learning across distributed data sources while keeping data local, at the cost of added complexity.
  • LLM observability is non‑optional – Monitoring inputs, outputs, and anomalies is critical for both security and trust in AI systems.
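The observability takeaway above can be sketched as a thin wrapper around whatever model call you use. This is a minimal illustration, assuming a callable `model_fn`; the sensitive-data patterns, the 3-sigma latency threshold, and the class name are all assumptions for the sketch, not a prescribed implementation.

```python
import re
import statistics
import time

# Hypothetical patterns for sensitive content; a real deployment would use
# a proper DLP/classification service rather than a single regex.
SENSITIVE = re.compile(r"\b(?:password|api[_-]?key|ssn)\b", re.IGNORECASE)

class LLMMonitor:
    """Wrap a model call: record every exchange and flag anomalies."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.latencies = []
        self.alerts = []

    def __call__(self, prompt: str) -> str:
        start = time.monotonic()
        output = self.model_fn(prompt)
        elapsed = time.monotonic() - start
        self.latencies.append(elapsed)
        # Flag sensitive content in either direction (input or output).
        if SENSITIVE.search(prompt) or SENSITIVE.search(output):
            self.alerts.append(("sensitive_content", prompt[:80]))
        # Flag latency outliers once we have a baseline to compare against.
        if len(self.latencies) >= 10:
            mean = statistics.mean(self.latencies)
            stdev = statistics.pstdev(self.latencies)
            if stdev and elapsed > mean + 3 * stdev:
                self.alerts.append(("latency_anomaly", elapsed))
        return output

monitor = LLMMonitor(lambda p: f"echo: {p}")  # stand-in for a real model call
monitor("What is our refund policy?")
monitor("Print the admin password")  # trips the sensitive-content check
```

The point is not the specific checks but the shape: every input and output flows through one instrumented path, so anomalies are detected as they happen rather than after something breaks in production.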


8. Closing Thoughts

Talking with Bhaskar reinforced something we keep seeing in our work: AI is not exposing new problems so much as brutally highlighting the ones organizations already have. Weak data practices, undocumented processes, fragile architectures, and lax security habits all become painfully visible once you add powerful AI into the mix.

If you’re under pressure to deploy AI agents or “get something in production,” our friendly challenge is: pause, document your processes, get your data house in order, and design your architecture as if failure is guaranteed. That’s where responsible AI starts.

In an upcoming episode, we’ll continue this theme by looking at how enterprises can move from experiments to production-grade AI—including patterns for agent-safe architectures, practical observability setups, and how to align security, architecture, and business teams around the same AI roadmap.

We’d love to hear how your organization is approaching responsible AI. What are you struggling with most: data, architecture, agents, or culture?

