How would we apply the new concepts of data platforms, data hubs, and data mesh to a specific domain such as Supply Chain? Below I will do just that.
My colleague Rajnesh Tangri shared some post-pandemic insight on how supply chains and supply chain analytics need to change, based on the valuable lessons we've just learned. In a nutshell: the post-pandemic supply chain needs to evolve the old SCOR model to serve a broader set of stakeholders. It needs not only to be responsive, reliable, flexible, cost-effective, and asset-effective; its capabilities also need to move beyond a narrow interpretation of Lean principles -- in which we over-rotated on minimizing inventory and on risk management -- and accommodate the requirements of a wider set of stakeholders. We need to add resiliency, sustainability, ethical behavior, and simplification to the equation. He also laid out the key new analytic capabilities needed to support these business imperatives:
- Continuous optimization
- Scenario planning
- Adaptive forecasting
- Exceptions alerting
- Autonomous operation
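To make one of these capabilities concrete, here is a minimal sketch of exceptions alerting: flagging lanes whose observed lead times drift beyond a z-score threshold from plan. The lane names, thresholds, and the rule itself are illustrative assumptions, not something prescribed by Rajnesh's post.

```python
from statistics import mean, stdev

def lead_time_exceptions(observed_days, planned_days, z_threshold=2.0):
    """Toy exceptions-alerting rule.

    observed_days: dict mapping lane -> list of recent lead times (days)
    planned_days:  dict mapping lane -> planned lead time (days)
    Returns a list of (lane, observed_mean, plan, z_score) alerts.
    """
    alerts = []
    for lane, samples in observed_days.items():
        if len(samples) < 2:
            continue  # not enough history to estimate variability
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:
            continue  # no variability, nothing to score against
        z = (mu - planned_days[lane]) / sigma
        if abs(z) >= z_threshold:
            alerts.append((lane, round(mu, 1), planned_days[lane], round(z, 2)))
    return alerts

alerts = lead_time_exceptions(
    {"shanghai-la": [14, 16, 18, 20], "rotterdam-ny": [9, 10, 9, 10]},
    {"shanghai-la": 10, "rotterdam-ny": 9},
)
```

In this hypothetical run, only the "shanghai-la" lane triggers an alert: its observed lead times have drifted well above plan, while "rotterdam-ny" stays within normal variability.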
I am no fortune-teller, but I can predict the spawning of a whole generation of new supply chain analytic applications. There will be new demands on the data organization to spin up new data pipelines and data products to support decisioning in the post-pandemic supply chain, and the pressure will be on to deploy these capabilities quickly. Opinions on how to manage the data behind such capabilities are widely available, but none have addressed supply chain in particular. So, as a supply chain analytics practitioner, and after reading a lot of these opinions -- including Martin Willcox's recent Thirteen Thoughts About The Data Mesh -- I wanted to drill down on one area he touched on and apply it specifically to the Supply Chain domain: Domain-Driven Design (DDD) principles and the idea of a bounded context.
. Ok, WHAT?
Let me explain.
A DOMAIN is an area of data we can draw a line around because its contents have a strong affinity to one another -- Sales or Support, for example (see the diagram from Thoughtworks below). Data affinity grouping is powerful because it lets the organization separate out a portion of its data and put it under the management and responsibility of a smaller group of subject matter experts, whose interest it is to curate and manage that data well.
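A small sketch may help make the bounded-context idea tangible. In DDD, the same business term can legitimately mean different things inside different bounded contexts. The classes and fields below are hypothetical, chosen only to contrast a "Product" as Sales sees it with a "Product" as Supply Chain sees it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SalesProduct:
    """'Product' inside the Sales bounded context."""
    sku: str
    list_price: float                    # what Sales cares about
    promotion_code: Optional[str] = None

@dataclass
class SupplyChainProduct:
    """'Product' inside the Supply Chain bounded context."""
    sku: str
    unit_weight_kg: float                # what Supply Chain cares about
    lead_time_days: int
    preferred_supplier: str

# The shared identifier (sku) is the agreed touchpoint between contexts;
# every other attribute is owned and curated by that domain's experts.
sales_view = SalesProduct(sku="SKU-1", list_price=19.99)
scm_view = SupplyChainProduct(sku="SKU-1", unit_weight_kg=2.5,
                              lead_time_days=14, preferred_supplier="Acme")
```

Each domain model stays small and curated by its own subject matter experts, with only the shared key linking the two contexts.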
With all the hype around the data mesh (and data lakes, data lakehouses, and such), let's not get too caught up in buzzy topics. Building on Rajnesh's "getting back to basics" blog post: how do we manage the journey to a cloud architecture while supporting enterprise, innovation, and domain-specific hubs? What do we mean by "the basics?" It means bringing the core principles of data integration and reuse to infrastructure that lets us implement a domain-specific set of data products, leveraging the shared infrastructure of a connected multi-cloud data and analytic platform like Teradata Vantage, in a way that encapsulates domain-specific knowledge and models for the benefit of data consumers. The blog post six critical capabilities to consider for a modern cloud platform goes into more detail. Integrated data management is one of those six capabilities, and it directly impacts agility, responsiveness, and security.
Now, back to the Supply Chain domain. Supply Chain is an area that can benefit from these recent developments, along with the decades of supply chain data experience we bring to the table. I propose that it can be implemented as a domain-specific digital hub. In doing so, we can also achieve one of the key desirable attributes of the digital supply chain of the future -- resiliency.
In Supply Chain, resiliency is about adaptive response to change and disruption. More than flexibility, it means you have options: you already have contractual and operational flexibility built in. You haven't over-rotated on the lowest possible short-term cost. You've planned for worst-case and best-case scenarios and quantified those risks. You've contractually arranged for not just plan A, but plans B, C, and more.
Your data and analytics platform needs to be resilient as well. It needs to accommodate your supply chain organization’s need to adapt through:
- Frequent re-configuration: the closing and opening of new lanes, new operational centers, new suppliers; the addition of new product lines, new distribution centers.
- Sensing of changes in behavior: the monitoring and changing of supply chain planning parameters, as supply chain lead times, variability, and service level commitments change.
- Awareness of new and changing costs and alternatives: the impact of disruption and the contractual commitments, such as fees, fines, and cost of service, and how that affects selection of alternatives.
- Policy and governance changes, as the organization learns what selections from alternatives worked best, least, and which decisions can be made autonomously.
Incorporating these adaptations must be automatic, without accruing technical debt: the data and analytics must be able to absorb these changes without dramatic alterations to the data structures, data pipelines, and analytic calculations. That is a resilient data and analytic platform for a supply chain organization.
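One way to picture this kind of resilience is to treat supply chain configuration as data rather than code: opening a new lane or changing a service level becomes a row update, not a schema migration or pipeline rewrite. The parameter table, lanes, and the toy safety-stock rule below are illustrative assumptions, not a prescribed design.

```python
# Hypothetical planning-parameter table: (lane, lead_time_days, service_level)
PLANNING_PARAMS = [
    ("shanghai-la",  12, 0.95),
    ("rotterdam-ny",  9, 0.98),
]

def safety_stock_days(lead_time_days, service_level, demand_cv=0.25):
    """Toy safety-stock rule driven entirely by the parameter table."""
    # crude z-value lookup for common service levels (illustrative only)
    z = {0.90: 1.28, 0.95: 1.65, 0.98: 2.05}.get(service_level, 1.65)
    return round(z * demand_cv * lead_time_days ** 0.5, 1)

# Re-configuration is appending a row -- no pipeline or schema change needed.
PLANNING_PARAMS.append(("veracruz-houston", 4, 0.90))

stock = {lane: safety_stock_days(lt, sl) for lane, lt, sl in PLANNING_PARAMS}
```

Because the calculation reads its parameters from data, the new lane flows through the same pipeline automatically -- the kind of adaptation-without-rework the points above call for.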
The notion of a Supply Chain Digital Hub combines the "back to basics" approach of data integration and reuse with a modern architecture for data ecosystems, applied specifically to the needs of the supply chain organization and its supporting data organization. This balanced approach brings swift time-to-market for supply chain data and analytics consumers while sparing the data operations team considerable technical debt in maintaining it. It builds in data and analytic resiliency, reducing disruption to the data and analytics themselves. My next blog post in this series will address this in more detail.