Data Analysis and Supply Chain Performance
How can companies improve performance when they already have large amounts of data available? Why do dashboards often fail to create real impact? And what does it take to turn data into better decisions across the supply chain?
In this podcast episode, we talk about how data analytics can move beyond reporting and become a real driver of operational performance.
Stephanie: Welcome to our Supply Chain & Friends podcast. Today we’re talking about data analytics and supply chain performance.
Joining me in the studio are Divyashree Bidarahalli, Data Analyst at STREMLER AG, and Hans Huber, CEO of STREMLER AG.
What goes wrong most often in companies today when it comes to data?
Divya: In most companies, the problem is not that data is missing. Usually, there is plenty of data. The problem is that it is not well structured. Different departments build separate reports for production, logistics, materials, or planning, but the real value only emerges when these data sets are connected.
When production is delayed, for example, the reason may not lie in production itself. It may be linked to material availability, planning priorities, or disruptions somewhere else in the process. These kinds of cause-and-effect relationships only become visible when data is connected and aligned.
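The kind of cause-and-effect linking Divya describes can be sketched in a few lines. This is only an illustration: the record fields, order IDs, and timestamps below are hypothetical, and real systems would export such records from production and materials-management databases.

```python
from datetime import datetime

# Hypothetical sample records from two separate systems.
production = [
    {"order": "A100", "planned_start": datetime(2024, 5, 6, 8, 0),
     "actual_start": datetime(2024, 5, 6, 10, 30)},
    {"order": "A101", "planned_start": datetime(2024, 5, 6, 9, 0),
     "actual_start": datetime(2024, 5, 6, 9, 0)},
]
materials = {
    "A100": {"required_by": datetime(2024, 5, 6, 8, 0),
             "arrived": datetime(2024, 5, 6, 10, 0)},
    "A101": {"required_by": datetime(2024, 5, 6, 9, 0),
             "arrived": datetime(2024, 5, 5, 16, 0)},
}

def delay_causes(production, materials):
    """Link each delayed order to late material arrivals, if any."""
    causes = {}
    for rec in production:
        delay = rec["actual_start"] - rec["planned_start"]
        if delay.total_seconds() <= 0:
            continue  # order started on time, nothing to explain
        mat = materials.get(rec["order"])
        late = mat is not None and mat["arrived"] > mat["required_by"]
        causes[rec["order"]] = "material" if late else "unknown"
    return causes
```

Looking at the production data alone, order A100 simply "started late"; only the join with the materials data reveals that the material itself arrived after it was required.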
Hans: I would add that data often lacks context. Even if data exists, it is difficult to interpret if it is not clear under which circumstances it was generated, when exactly it was recorded, and which metadata belongs to it. We often see information from MES systems, PDE systems, and other sources that exists in parallel but is not meaningfully connected.
Why do so many reports fail to lead to real performance improvement?
Divya: Many reports are designed mainly to present KPIs. But KPIs alone do not explain what really happened. A dashboard may show that output dropped from one month to the next, but it often does not explain why.
Behind a KPI, there may be delays, changeover losses, material waiting times, unstable priorities, or repeated interruptions. If those patterns are not visible, a dashboard remains descriptive, not actionable. That is why many KPI reports are not enough to support real business decisions.
Hans: Another point is comparability. High-level KPIs are often misleading if product mix, environmental conditions, or production circumstances have changed. If one week includes more complex products with longer cycle times than another, you cannot simply compare performance at headline level. You need context: product mix, process conditions, and sometimes even external parameters such as temperature, especially in food production.
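One common way to make weeks with different product mixes comparable, as Hans suggests, is to weight output by standard cycle times instead of comparing raw unit counts. The product names, cycle times, and volumes below are purely illustrative assumptions.

```python
# Hypothetical standard cycle times (minutes per unit) per product.
cycle_time = {"simple": 2.0, "complex": 6.0}

def mix_adjusted_efficiency(units_by_product, minutes_available):
    """Earned standard minutes divided by available minutes."""
    earned = sum(cycle_time[p] * n for p, n in units_by_product.items())
    return earned / minutes_available

week1 = {"simple": 300, "complex": 20}   # 320 units, 720 earned minutes
week2 = {"simple": 100, "complex": 80}   # 180 units, 680 earned minutes

e1 = mix_adjusted_efficiency(week1, 900)
e2 = mix_adjusted_efficiency(week2, 900)
```

At headline level, output drops from 320 units to 180, which looks like a collapse; mix-adjusted, the two weeks are nearly equivalent (about 0.80 versus 0.76), because week two carried far more complex products.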
What do we do differently at STREMLER when it comes to handling data?
Divya: We start with the business problem, not with the data itself. First we identify the issue that needs to be solved, and then we determine which data is relevant in that specific context. We do not look at each data source in isolation. We connect them and analyze the process as a whole.
Our goal is not simply to visualize information. We want to build a system that supports decision-making across the supply chain process.
Hans: Exactly. We are interested in how everything fits together. Many reports focus on a single dimension such as quality, OEE, or individual production parameters. At STREMLER, we look across the whole chain and ask: what are the real levers for better efficiency, better quality, shorter lead times, and better flow?
How do we use data to support better decisions, rather than just create better-looking dashboards?
Divya: In operations, decisions are being made continuously: what should be produced next, how capacity should be allocated, what risks are emerging, and how to react to disruptions. To support those decisions, data has to be aligned over time, consistent across functions, and directly linked to the decision problem.
Instead of just showing static KPIs, we structure data across departments around the actual issue. That helps companies understand what is happening now, what is likely to happen next, and what they should plan for going forward.
Hans: A report only becomes valuable when it triggers action. That means companies need clear definitions: what exactly is the KPI, what is the threshold, when do we escalate, and what countermeasures are triggered when performance moves out of range? The real value comes from connecting dashboards with processes, responsibilities, and action.
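The clear definitions Hans calls for, KPI, threshold, escalation, can be made explicit as a small rule. The band widths and countermeasure names here are hypothetical placeholders, not STREMLER's actual escalation scheme.

```python
def escalation(kpi_value, target, warn_band=0.05, critical_band=0.10):
    """Map a KPI reading to an action level relative to its target.

    warn_band and critical_band are relative shortfalls (assumed values).
    """
    shortfall = (target - kpi_value) / target
    if shortfall <= warn_band:
        return "ok"
    if shortfall <= critical_band:
        return "shift-lead review"          # hypothetical countermeasure
    return "escalate to plant manager"      # hypothetical countermeasure
```

The point is not the specific numbers but that the trigger logic is written down once, agreed upon, and tied to a responsibility, so a dashboard reading leads to a defined action rather than a discussion.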
What measurable effects do we typically see with clients?
Divya: One major effect is better decision-making. Once companies gain clarity on what is really happening in the process, they can reduce conflicting priorities, stabilize flow, and make more predictable decisions. Of course KPIs improve as well, but the real difference is that decisions become clearer, faster, and more grounded in the actual process.
Hans: Awareness is often the starting point. In one project, downtime was known to be an issue, but once it was measured properly, the actual extent came as a surprise. That created a new level of awareness and established a baseline for improvement.
From there, we often see strong performance gains. In some cases, OEE increases by 15%, sometimes even more, simply because measurement becomes more transparent and performance is made visible in a meaningful way. Transparency creates focus, and focus drives improvement.
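For readers less familiar with OEE: it is conventionally defined as the product of availability, performance, and quality, each expressed as a fraction between 0 and 1. The before/after figures below are invented for illustration only.

```python
def oee(availability, performance, quality):
    """Standard OEE: product of the three loss factors (each 0..1)."""
    return availability * performance * quality

# Hypothetical figures: better measurement exposes availability losses,
# which are then addressed.
before = oee(0.70, 0.85, 0.97)
after = oee(0.80, 0.90, 0.98)
```

With these assumed inputs, OEE rises from roughly 0.58 to roughly 0.71, the order of magnitude of improvement Hans describes, and the factored form shows which of the three losses moved.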
Can you give an example of where data analysis directly improved results and profitability?
Hans: In one project, we carried out a portfolio analysis for a customer with a broad product range, different volumes, and significantly different pricing structures across channels. Some products were profitable in one channel but unprofitable in another. By analyzing the portfolio in detail, we were able to reduce the number of items significantly while improving overall portfolio quality and profitability. In that case, the impact was a profitability increase of around 15% or more.
Divya: Another example comes from food production. A line was operating at full apparent capacity, but performance remained unstable. There were frequent stops, material waiting times, and changing priorities from shift to shift. At first glance, it looked like a capacity problem.
But once we combined production, material, and demand data and aligned them over time, the real issue became visible: materials were not arriving in sync, changeovers were misaligned, and there was unnecessary waiting between process steps. By making those interruptions transparent, the company was able to improve flow and increase output without adding capacity. The problem was not lack of capacity, but lack of visibility.
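Aligning step data over time, as in Divya's example, often comes down to computing the idle gaps between one process step ending and the next one starting. The step names and timestamps below are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical step timestamps for one order: (step, start, end).
steps = [
    ("mixing", datetime(2024, 5, 6, 8, 0), datetime(2024, 5, 6, 9, 0)),
    ("filling", datetime(2024, 5, 6, 9, 45), datetime(2024, 5, 6, 10, 30)),
    ("packing", datetime(2024, 5, 6, 10, 35), datetime(2024, 5, 6, 11, 0)),
]

def waiting_between_steps(steps):
    """Idle time between each step's end and the next step's start."""
    waits = {}
    for (name_a, _, end_a), (name_b, start_b, _) in zip(steps, steps[1:]):
        gap = start_b - end_a
        if gap > timedelta(0):
            waits[f"{name_a}->{name_b}"] = gap
    return waits
```

In this toy order, the line "looks busy" at every step, yet 45 minutes of waiting sit between mixing and filling, exactly the kind of hidden interruption that is invisible until the data streams are laid on one timeline.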
How quickly can clients see initial results, and what tends to surprise them most?
Divya: What often surprises clients is that the issue is not where they expected it to be. Better decisions do not necessarily require more data. They require a clearer interpretation of the data that already exists.
Hans: If data is available, first insights often emerge within two to four weeks. There is usually some work required around normalization and clarifying open questions, but awareness comes relatively quickly. After that, implementing measures and seeing KPI improvements usually takes another four to eight weeks. So within a quarter, a company can already achieve substantial progress.
A common bottleneck, however, is data availability itself. In many organizations, relevant data sits in different functions and systems rather than in one central place. That slows the process down considerably.
How do we deal with incomplete, fragmented, or erroneous data?
Divya: We start with what is available. We clean and connect the existing data, align it where possible, and then use it to create an initial view of the process. That first view often reveals further inconsistencies: missing values, timing mismatches, incorrect records, or process assumptions that do not match reality.
Some issues can be corrected through analytics, such as formatting problems or timestamp inconsistencies. But other errors only become visible once the data is used to support decisions. That is why improvement is iterative: reports are refined continuously as understanding deepens.
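A typical example of a correctable formatting issue is timestamps recorded in different formats by different systems. A minimal sketch, assuming two known source formats (the format list is an assumption, not a universal rule):

```python
from datetime import datetime

# Assumed source formats, e.g. an ISO-style export and a German-style one.
FORMATS = ["%Y-%m-%d %H:%M", "%d.%m.%Y %H:%M"]

def normalize_timestamp(raw):
    """Try each known format; return None for unparseable records."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(raw, fmt)
        except ValueError:
            continue
    return None  # surfaced as a data-quality issue, not silently dropped
```

Records that parse under neither format come back as None, which matches the iterative approach Divya describes: the unparseable records are surfaced as a data-quality finding rather than silently discarded.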
Hans: Today, AI-based tools can support data cleansing, outlier detection, and extrapolation. But AI cannot compensate for fundamentally wrong or missing data. We recently had a case where consultants observed multiple outages on the shop floor, but those outages did not appear in the production data the next day. In such a case, the data itself is unreliable, and there is no shortcut. You have to go one level deeper and improve the data collection and recording process first.
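Outlier detection of the kind Hans mentions does not always require AI; a simple statistical rule already catches gross errors. This z-score sketch (threshold and sample values are assumptions) flags readings far from the mean, but, as Hans notes, it cannot detect outages that were never recorded at all.

```python
import statistics

def flag_outliers(values, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > z_threshold * stdev]

# Hypothetical hourly output counts with one implausible reading.
readings = [10, 11, 9, 10, 12, 10, 50]
suspects = flag_outliers(readings, z_threshold=2.0)
```

A rule like this flags the implausible reading of 50 for review; whether it is a sensor glitch or a real event still needs a human judgment, and a missing record produces nothing to flag in the first place.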
There is also a cultural dimension. If a company wants to improve, it has to be honest about its own performance. Transparency and a constructive attitude toward errors are essential. Improvement starts when issues are surfaced, not hidden.
What role will AI play in the future: hype or real game changer?
Hans: It is both. AI is heavily hyped at the moment, but it is also a real enabler. In data analytics, AI is extremely powerful for transforming, structuring, and interpreting data. And beyond that, it will increasingly enable semi-autonomous systems that learn and optimize based on the underlying data and rules.
At the same time, people remain essential. They provide context, assess relevance, and interpret conclusions in a way that algorithms alone still cannot.
Divya: From an analytics perspective, one thing is critical: AI is only as good as the data it receives. If you feed AI poor or inconsistent data, it will amplify the problem. If you feed it structured, relevant, and accurate data, it can support faster decisions, better forecasts, and stronger process performance. That is why data quality remains the foundation.
If a company wants to get started tomorrow, what is the first concrete step?
Hans: The clearer a company is about the issue it wants to solve, the faster we can help. That said, the entry barrier is low. A company can approach us with a problem, and together we define what data is needed, why it is needed, and what kind of outcome can be expected.
From there, the process is iterative. Often it is also about improving data quality itself: both static master data and dynamic process data from machines or operators. Establishing a reliable data pool and a single source of truth is often one of the first key steps.
Divya: I would always recommend starting with one specific problem. Most businesses face many issues at once, but focusing on one major challenge first makes it possible to create clarity and momentum. Sometimes everything looks stable from the inside, yet performance still falls short. In those situations, an external perspective helps uncover what is really happening and where the actual levers for improvement are.
Closing thought
Data alone does not improve performance.
Real impact comes from connecting data, understanding context, and translating insights into decisions and action.
Questions by Stephanie Stremler, host of the STREMLER podcast.