(S3E3) Industrial AI vs GenAI: What Really Works on the Factory Floor?

1. Introduction

In this episode of Impact of AI: Explored, we sit down with Nikita Golovko, Portfolio AI Architect at Siemens, to talk about what really happens when AI leaves the lab and hits the factory floor. Together with James and Gerjon, Nikita explores how industrial AI differs from the generative AI most of us use in our browsers, why technical debt becomes dangerous when code meets physical machines, and what it takes to build AI systems that are predictable, explainable, and safe at scale.

You can watch the episode here: https://youtu.be/EkyuuZR-ooU

2. Meet the Guest

Nikita works as a Portfolio AI Architect at Siemens, where he focuses on bringing AI from the cloud into real industrial environments. His day-to-day involves designing the software architecture that runs AI models on edge devices and industrial computers, making sure those models can be deployed, retrained, monitored, and safely integrated into production lines. He has a strong background in industrial automation and PLC-based systems, giving him a rare combination of experience with both software and the physics of machines on the shop floor.

You can find his LinkedIn profile here: https://www.linkedin.com/in/dr-nikita-golovko/

In this episode, Nikita talks about:

  • How Siemens uses industrial PCs (IPCs), PLCs and edge devices with GPUs to run AI locally on the factory floor.
  • Why he sees generative AI as a tool for documentation and synthetic data, not as a decision-maker in production.
  • His philosophy of the "AI architect" as a problem solver who bridges data science, software engineering and domain expertise.

3. Setting the Stage

Most AI conversations today are about chatbots, copilots and agents that live in the browser, but industrial AI plays by very different rules. On the shop floor, decisions affect physical systems, safety, product quality and downtime, so "move fast and break things" is not an option.

In this blogpost, we walk through:

  • Why AI behaves differently in the lab versus in real production environments.
  • The hidden technical debt that accumulates when AI models are bolted onto existing systems.
  • How to think about AI as an advisor, not an autonomous actor, especially with the upcoming EU AI Act.

If you're building AI systems that interact with real-world processes, whether in manufacturing, IT operations or elsewhere, these lessons will feel uncomfortably familiar.

4. Episode Highlights

Highlight 1 – "Don't touch it" as the first red flag

One of the strongest signals of technical debt for Nikita is when a team is afraid to change a component: "don't touch it" becomes the unofficial policy. That fear usually comes from hard-coded dependencies, missing documentation and single points of failure, exactly the conditions you don't want in a live production environment where AI is influencing physical processes.

"If your team is afraid of changing some kind of component or piece of code, that's your first trigger that you're adding technical debt."

Highlight 2 – AI as advisor, not autonomous agent

We also dig into the hype around AI agents, both in IT and industrial settings. Nikita is clear: in industrial automation, fully autonomous agents making independent decisions on production lines are not acceptable, at least not in the next few years. Instead, AI should advise operators, highlight risks, and provide predictions, while humans stay in control of final decisions.

"I don't believe we will see independent agents in industrial automation in the next few years. They can be good advisors, but not working alone."
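The advisor-not-actor pattern Nikita describes can be sketched in a few lines. This is a minimal illustration, not Siemens code; the `Advisory` class and the operator callback are hypothetical names for the idea that no action happens without explicit human approval:

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    """A model recommendation that an operator must confirm or reject."""
    message: str
    confidence: float

def advise_operator(advisory: Advisory, operator_approves) -> bool:
    """The model only advises; the human makes the final call."""
    # Surface the prediction and its confidence to the operator.
    print(f"Model suggests: {advisory.message} (confidence {advisory.confidence:.0%})")
    # Nothing is acted on unless the operator explicitly approves.
    return operator_approves(advisory)

# Example: an operator policy that only accepts high-confidence advisories.
decision = advise_operator(
    Advisory("schedule maintenance on line 3", 0.92),
    operator_approves=lambda a: a.confidence >= 0.9,
)
```

The point is structural: the approval step is a required argument, so the system cannot be wired up without a human (or an explicit human-defined policy) in the loop.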

5. Deep Dive: Sustainable AI Architectures for the Real World

A central theme of our conversation is sustainability in system design: not in the environmental sense, but in the sense of building systems that can still be understood, maintained and retrained years from now.

For traditional software, sustainability means:

  • A new team member can understand the system in about a week.
  • You can debug it at 3 a.m. without a crisis.
  • The architecture still makes sense when teams change.

For AI systems, Nikita extends that idea:

  • You should be able to retrain the same model in two years with a different team and possibly a different data center.
  • The model and its pipeline must be transparent enough that others can understand how it was trained, what data it used, and how it is deployed on the shop floor.

He draws a sharp line between models and solutions. A team that only delivers a model and throws it over the wall to IT will end up in an endless loop of rejection: the model isn't production ready, the data pipeline is missing, or deployment constraints weren't considered. The cure is cross-functional, end-to-end teams where data scientists, ML engineers, DevOps and domain experts build a full solution together.

Equally important is keeping models decoupled from business logic. Hard-coding models into core logic creates "big balls of mud" that no one dares to touch, making retraining, swapping, or experimentation nearly impossible. Abstractions, adapter layers for external APIs and clear boundaries between model inference and application logic are essential if you want to stay agile instead of trapped.
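One way to picture that boundary is an inference interface that the business logic depends on, with concrete models plugged in behind it. A minimal sketch, with made-up names (`DefectDetector`, `ThresholdModel`) standing in for whatever the real model wrapper would be:

```python
from typing import Protocol, Sequence

class DefectDetector(Protocol):
    """The boundary between application logic and model inference."""
    def predict(self, features: Sequence[float]) -> bool: ...

class ThresholdModel:
    """One interchangeable implementation; could be replaced by an
    ONNX model, a cloud endpoint, or a retrained version without
    touching the business logic below."""
    def __init__(self, threshold: float) -> None:
        self.threshold = threshold

    def predict(self, features: Sequence[float]) -> bool:
        return sum(features) / len(features) > self.threshold

def inspect_part(detector: DefectDetector, sensor_readings: Sequence[float]) -> str:
    # Business logic depends only on the interface, not on any concrete model.
    return "reject" if detector.predict(sensor_readings) else "pass"

result = inspect_part(ThresholdModel(threshold=0.5), [0.2, 0.9, 0.7])
```

Swapping or retraining the model then means providing another object with the same `predict` signature, which is exactly the agility the "big ball of mud" destroys.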

6. Real-Life Stories & Examples

From the lab to the shop floor

In the lab, AI models tend to live in ideal conditions: stable lighting for computer vision, clean and labeled datasets, and controlled environments with minimal noise. Once those same models are deployed to the factory floor, reality hits: lighting changes, sensors are noisy, data drifts, and the way people use the system rarely matches the original design assumptions.

Nikita describes this as two different worlds:

  • The synthetic, controlled world of the lab.
  • The messy, unpredictable world of real production.
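The gap between those two worlds can at least be measured. A simple input-drift check compares production data against the statistics the model saw in the lab; this sketch uses invented brightness readings purely for illustration:

```python
import statistics

def drift_score(lab_values, production_values) -> float:
    """How far the production mean has shifted, in lab standard deviations."""
    lab_mean = statistics.mean(lab_values)
    lab_std = statistics.stdev(lab_values)
    return abs(statistics.mean(production_values) - lab_mean) / lab_std

# Lighting brightness: stable in the lab, shifted on the shop floor.
lab = [100, 101, 99, 100, 102, 98]
floor = [112, 115, 110, 118, 113, 116]

# Flag the model for review when the shift exceeds three sigma.
alert = drift_score(lab, floor) > 3.0
```

Real deployments would use richer tests per feature, but even a check this crude turns "it worked in the lab" from an assumption into something monitored.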

Bridging those worlds requires a "translator": someone who understands both model metrics like precision and recall and operational realities like defect rates and production delays. That translator often becomes the real "AI architect": a problem solver who sits between data science, software engineering and domain operations.

The magic of making things move

Nikita also shares a formative moment from his early career: writing his first PLC program as a student and watching it move physical parts. That direct link between code and physical behaviorโ€”seeing software affect the real worldโ€”is what pulled him into industrial automation and still motivates his work with AI on the factory floor.

Local AI at the edge

In Siemensโ€™ ecosystem, models are trained in the cloud and then deployed to edge devices such as industrial PCs with Linux-based edge OS and GPUs. These devices sit close to the production line, running models for tasks like:

  • Time series prediction (e.g., when an engine needs extra repair).
  • Classification and defect detection on production output.

This is not central, monolithic generative AI; it's distributed, predictable ML running locally, often in machine-to-machine communication scenarios, sometimes feeding into robotic arms or control systems.
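For the time series case, the kind of prediction running on such an edge device can be as simple as a rolling statistic over recent sensor readings. A hypothetical sketch (the function name, window and limit are illustrative, not Siemens APIs):

```python
def needs_maintenance(vibration_history, window=3, limit=5.0) -> bool:
    """Flag rising vibration locally on the edge device,
    with no cloud round-trip needed for the decision."""
    recent = vibration_history[-window:]
    return sum(recent) / len(recent) > limit

# Simulated vibration readings trending upward over time.
readings = [3.1, 3.3, 3.2, 5.4, 5.9, 6.2]
flag = needs_maintenance(readings)
```

Production systems would use trained models rather than a fixed threshold, but the deployment shape is the same: small, predictable inference close to the machine, with the cloud reserved for training and retraining.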

AI-assisted coding: powerful but risky

We also touch on AI-assisted coding. Nikita uses AI tools as a sparring partner, to prototype ideas quickly, check hypotheses, and get code examples, but he doesn't trust generated code for production without rigorous testing. The key point is responsibility: even if AI wrote the code, the engineer who commits it is still accountable for its behavior, including security and reliability.

7. Key Takeaways

  • Predictability beats hype on the shop floor: classical ML models with predictable behavior are still the backbone of industrial AI, while generative AI plays supporting roles like documentation and synthetic data generation.
  • Environment matters: AI that works in the lab will break in production if you ignore unstable conditions, noisy data and real-world user behavior.
  • Translators are critical: you need people who can bridge data science metrics and operational KPIs; this is the real role of an AI architect in industry.
  • Technical debt shows up as fear: if teams are saying "don't touch it" or one person is the only one who understands a component, you already have a technical debt problem.
  • Decouple models from business logic: avoid hard-coding models into application flows; use abstractions and adapter layers so you can retrain, swap and experiment safely.
  • Team structure shapes architecture: Conway's Law still applies; if you design microservices but keep a monolithic team, you won't get the benefits.
  • AI should advise, not act alone: especially with regulations like the EU AI Act, AI systems in critical environments should be transparent, explainable and used with a human in the loop.
  • Document decisions, not just code: sustainable systems require documented trade-offs, intentions and architectural choices so future teams can understand why things were built a certain way.

8. Closing Thoughts

Talking with Nikita reminded us that the most interesting AI work isn't always happening in shiny web UIs, but in the places where software quietly meets steel, sensors and real-world constraints. As AI gains more power, we don't just need better models; we need better architects, better teams and better habits around responsibility and documentation.

In our next episodes, we'll continue exploring where AI, automation and responsibility collide, from end user computing to autonomous agents and beyond. If you have questions or real-world challenges around AI in production, send them our way; we'd love to feature your question as the listener segment in a future episode.
