1. Introduction
In this episode of the Impact of AI podcast, we (James O’Regan and Gerjon Kunst) sit down with our friend and former Microsoft colleague Seda Akdemir to talk about a topic that keeps coming back in our customer conversations: how to move from AI as a shiny toy to AI as a real, secure, scalable business capability.
We explore why generic AI strategies fail, what a secure AI factory looks like in practice, and why observability, governance, and business alignment matter far more than picking the “right” model or GPU.
2. Meet the Guest
Seda is an AI Strategy Lead for EMEA North at Cisco, based in Amsterdam, where she helps large enterprises turn AI ambitions into concrete, value-generating programs. She has a background in computer engineering and a master’s degree in artificial intelligence from Middle East Technical University, and has spent more than 15 years working with enterprises on data and AI initiatives.
Before joining Cisco, Seda spent 13 years at Microsoft in various data and AI–related roles, and earlier in her career she worked in engineering and delivery leadership at companies like Siemens and Accenture. Today, she focuses on AI-ready data centers, secure AI platforms, and helping organizations design AI strategies that are tailored to their own processes, data, and operating models—rather than copy-pasting generic playbooks.
3. Setting the Stage
We kicked off the episode with a simple but loaded question: in the next three to five years, where will AI’s biggest disruption hit—models, networking, or security? Seda’s answer: the disruption is already here and it will touch everything—networks, business processes, and even how we think about decision‑making.
What we wanted to unpack in this conversation is what it means to go beyond “AI as a feature” and start treating AI as a strategic asset and core part of your business process, including the implications for infrastructure, security, observability, and ROI. If you’re an enterprise leader who feels the pressure to “do something with AI” but struggles to show value, this episode (and this post) is for you.
4. Episode Highlights
- From AI toy to strategic asset
Seda compared today’s AI hype to the early days of cloud: real value only started once we moved from “cloud projects” to true digital transformation embedded in business processes. In her view, we’re at the same turning point with AI: it must move from add‑on assistants and chatbots to a core, governed capability that influences how decisions are made and how operations run.
- Why generic AI strategies don’t work
One of Seda’s strongest points: there is no one‑size‑fits‑all AI strategy. Even in the same industry and company size, data, processes, and operating models are different, so copying “off‑the‑shelf” use cases rarely creates meaningful value. As she put it, every organization is on an AI journey, but “the journey is different for everyone,” and your strategy has to reflect your own DNA.
5. Deep Dive: Inside the Secure AI Factory
A recurring theme in our discussion was Seda’s concept of the secure AI factory—moving beyond AI as a feature to an “industrial‑grade” foundation that can be repeated, scaled, and governed.
She sees clear parallels with the cloud transition 15 years ago: the conversation only became serious when organizations started talking about digital transformation, not just spinning up VMs. Similarly, AI only becomes truly valuable when it’s embedded in core business processes, not just bolted on as a chat interface.
From Cisco’s perspective, the secure AI factory is about providing an end‑to‑end, AI‑ready stack that is both flexible and tightly integrated: compute, network, storage, orchestration, observability, and security all working together, validated with partners such as NVIDIA and Red Hat (including Red Hat OpenShift). Customers can bring their own storage, virtualization, and security tools, but Cisco acts as the trusted partner that verifies and integrates the stack so it isn’t a black box, nor a DIY science experiment.
Crucially, Seda stressed that observability must be built‑in, not an afterthought: you need deep visibility into applications, containers, microservices, network, firewalls, costs, and GPU utilization to maintain zero trust, control economics, and ensure the AI you deploy remains safe, compliant, and cost‑effective over time.
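One concrete slice of the observability Seda describes, watching GPU utilization and cost together, can be sketched in a few lines. The metrics, thresholds, and figures below are hypothetical illustrations of the idea, not Cisco tooling:

```python
# Illustrative sketch: flag spend on underutilized GPUs so AI
# economics stay visible. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class GpuSample:
    node: str
    utilization_pct: float  # average GPU utilization over the window
    hourly_cost: float      # fully loaded cost of the GPU node

def wasted_spend(samples: list[GpuSample], min_utilization: float = 30.0) -> float:
    """Estimate hourly spend on GPUs running below the utilization floor."""
    return sum(s.hourly_cost for s in samples if s.utilization_pct < min_utilization)

fleet = [
    GpuSample("gpu-01", utilization_pct=85.0, hourly_cost=4.0),
    GpuSample("gpu-02", utilization_pct=12.0, hourly_cost=4.0),
]
print(f"Underutilized spend: ${wasted_spend(fleet):.2f}/hour")  # $4.00/hour
```

The point is not this particular metric but that cost and utilization signals are collected continuously, alongside application, network, and security telemetry, rather than reconstructed after the fact.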
6. Real-Life Stories & Examples
Throughout the episode, Seda anchored the strategy talk in real customer patterns we all recognize:
- ROI and the “10‑minute problem”
We talked about a classic scenario: an AI agent saves each user 10 minutes a day, but finance asks, “What’s that actually worth?” Seda argued that focusing solely on productivity (hours saved) is not enough; ROI must connect to financial impact, revenue, citizen value (in the public sector), or clear competitive advantage. Otherwise AI is just “employee productivity,” and the business quickly loses interest.
- Red flags in AI projects
One of Seda’s biggest red flags is when AI discussions happen only with IT and no business stakeholders in the room. In those cases, AI becomes an “innovation play” without clear value, acceptance criteria, or honest acknowledgement of operational and cost complexity. Another red flag is when the sole motivation is automation and headcount reduction instead of augmenting people and improving decisions; that usually leads to cultural resistance and failed adoption.
- Agents, identity, and “career suicide”
We joked that we could build a powerful AI agent in three hours, but deploying it directly into an enterprise would be “career suicide.” Seda highlighted emerging work—also within Cisco’s Outshift organization—on giving agents task‑based authorization and clear identities, so they only get the authority they need for specific tasks rather than blanket access to everything.
- Trust, security, and zero trust in an agentic world
When talking to CISOs, Seda actually sees risk aversion as a healthy starting point: “We should be afraid; ignoring the risk is worse.” Her focus is on protecting what organizations have today (data, IP, reputation), ensuring new AI solutions respect existing security principles, and making sure decisions made today are still defensible in two years in a world where everything changes weekly.
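To make the “10‑minute problem” concrete, here is a back‑of‑the‑envelope calculation. Every figure (user count, workdays, hourly cost) is a hypothetical assumption, not data from the episode:

```python
# Hypothetical ROI arithmetic for the "10-minute problem".
# All figures are illustrative assumptions.

def hours_saved_per_year(minutes_per_day: float, users: int, workdays: int = 220) -> float:
    """Total hours an AI assistant saves across all users in a year."""
    return minutes_per_day / 60 * users * workdays

def productivity_value(hours: float, loaded_hourly_cost: float) -> float:
    """The 'hours saved' framing that finance tends to challenge."""
    return hours * loaded_hourly_cost

hours = hours_saved_per_year(minutes_per_day=10, users=1_000)
print(f"Hours saved per year: {hours:,.0f}")                      # 36,667
print(f"Nominal value: ${productivity_value(hours, 60):,.0f}")    # $2,200,000
```

This is exactly the number finance pushes back on: the nominal value only becomes real ROI when it ties to revenue, measurable cost reduction, citizen value, or competitive outcomes, which is Seda’s point.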
7. Key Takeaways
- AI has to move from toy to strategy: from chatbots and features to a core, governed business capability embedded in processes.
- There is no generic AI strategy; your data, processes, and operating model demand a tailored approach and careful use case selection.
- A secure AI factory requires an end‑to‑end, AI‑ready stack—compute, network, storage, orchestration, observability, and security—validated and integrated, not stitched together ad hoc.
- Observability is non‑negotiable once AI touches core processes: you must see into models, applications, infrastructure, costs, and policies to sustain zero trust and control economics.
- ROI should be framed in business value (revenue, competitiveness, citizen value), not just hours saved; productivity alone rarely justifies sustained investment.
- If business stakeholders aren’t in the room, or if the main goal is replacing people, your AI initiative is already in trouble.
- Agents need identity and task‑based authorization; giving them broad, unchecked power is both unsafe and a fast track to “career suicide.”
- The soft skills—listening, communication, translation between business and tech—are becoming as critical as technical skills in AI teams.
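The task‑based authorization idea in the takeaways above can be sketched minimally. This is a hypothetical illustration of the pattern (agent identity plus per‑task scopes), not how Cisco Outshift implements it:

```python
# Minimal sketch of task-scoped authorization for an AI agent.
# The grant/broker model here is a hypothetical illustration.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TaskGrant:
    agent_id: str
    task: str                  # e.g. "summarize_tickets"
    scopes: frozenset          # resources this task may touch

@dataclass
class AuthorizationBroker:
    grants: dict = field(default_factory=dict)

    def grant(self, g: TaskGrant) -> None:
        self.grants[(g.agent_id, g.task)] = g

    def is_allowed(self, agent_id: str, task: str, scope: str) -> bool:
        g = self.grants.get((agent_id, task))
        return g is not None and scope in g.scopes

broker = AuthorizationBroker()
broker.grant(TaskGrant("agent-7", "summarize_tickets", frozenset({"tickets:read"})))

print(broker.is_allowed("agent-7", "summarize_tickets", "tickets:read"))   # True
print(broker.is_allowed("agent-7", "summarize_tickets", "tickets:write"))  # False
```

The design choice worth noting: authority attaches to the (agent, task) pair, not to the agent alone, so a compromised or misbehaving agent never holds blanket access, which is the zero‑trust posture the episode argues for.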
8. Closing Thoughts
We ended the episode with a simple but powerful piece of advice from Seda: listen first. In AI projects, we’re often so excited about our “wonderful solution” that we skip the basics—understanding the customer, their processes, and what value actually means in their context.
Seda reminded us (and herself) that going back to basics is exactly what AI is forcing us to do: connect technology to real business outcomes, with clear ROI, governance, and accountability. As we like to say on the podcast, it’s a great time to be alive in AI—but playtime is over, and now it’s about building secure, observable, and truly valuable AI factories that will still make sense a few years from now.
We’d love to hear how you’re approaching this in your own organization. How are you moving from AI experiments to real, strategic impact? Let us know in the comments or reach out to continue the conversation.

