To The New Supply Chain.
Why Supply Chains, in their current form, cannot use AI, let alone GenAI, and what can be done about it.
Summary: It’s fine to write product descriptions with Generative AI. But taking any decision with business impact requires a bit more. GenAI needs data, and relationships across data sets, as input to make reliable business or operational decisions with confidence. Certain basic foundations are missing and need to be addressed first.
GenAI for Supply Chain does not exist and cannot exist in the current form, at least until a Foundational Model for discovering and reinforcing causality comes out. Even for simplistic use cases, e.g. ‘Where is my shipment?’, Generative AI / AGI (Artificial General Intelligence – not the same thing, by the way) or any AI cannot help unless the first step, the biggest hurdle, is crossed: ‘context’ – a correlation, or at least a stitching together, of relevant facts.
Context can be defined as relevant information stitched together from relevant underlying facts, sourced from different systems, for a certain purpose, e.g., an operational decision, with a cohesive, representative arrangement of relationships among the facts. It is important to map, merge and correlate different information entities (data sets) from disconnected systems at scale, with confidence, from a business / operational perspective.
If anyone said manual or existing methods of integration – wrong answer: they are not scalable, do not provide innate confidence in the sanctity of data relationships, and are expensive and time-consuming. They are also rigid – you need to break the integration to add a new data source.
Even worse, prevalent supply chain metrics and processes, defined decades ago, do not and cannot capture the richness of data from IoT, Automation, Computer Vision and Time Series (GPS, GIS), while also grossly aggregating the enterprise data – missing essential details.
So, it’s not just about adding the latest buzzword to existing ruins (even if they are the latest – as advertised! – e.g. Knowledge Graph). A ground-up, first-principles Supply Chain Tech view is required. Lora Cecere and Wolfgang Lehmacher would agree. Here is an attempt at diving deep with a structured approach:
Problem: a statement of why and what needs to be solved for.
Solution: a ‘potential’ solution approach – I am not God 😊.
Examples: a few thought experiments – supply chain practitioners who go by logic, numbers, and facts are no less than scientists!
Problem:
A wise person once said: spend good time on understanding and structuring the problem, and a solution will evolve from that. Here is a summary of the problem, with an expanded version here:
Part 1: Foundation:
Part 1 of the problem is the failure to capture all the data relevant for decisions. If the collected data is insufficient, everything built on it is ineffective.
Not all supply chain data is captured, so a lot of context is missed forever. This gap is more prominent with the advent of rich data systems like IoT. If you think in terms of metrics and KPIs, it is even worse: they are super-simplified, and most of them use fewer than 10 parameters, grossly aggregated at that. (Jokes on statistics, averages, outliers – supply chain specific – here).
Relationships among the data elements are not captured. Some relationships are captured technically within a given system or database, but the business and operational perspective of relationships across multiple entities is lost. Occasional big EAI projects do not help.
Changes in relationships are not captured. This concept does not exist in prevalent systems. In real life, business context changes all the time, but these changes are not captured, in time or otherwise, for meaningful next actions / decisions.
Part 2: Everything that depends on the Foundation:
Part 2 is every insight, inference, decision, or simply a dashboard, based on the foundational data. If the captured data is sparse or lacks sufficient granularity, even intelligent ML and GenAI models will not help.
KPIs: present KPIs are static, ex post facto. They need to be dynamic, streaming, preventive and predictive. A deviating metric highlighted in a nice report afterwards is useless from an operational action / decision perspective. If the same deviation can be deduced beforehand, along with the causal parameters affecting the KPI, that is more meaningful and leads to remedial actions.
Dashboards: the visibility buzzwords – Control Tower, end-to-end visibility etc. – need to be more contextual and prescriptive, with decision and action recommendations.
Business Intent: the narrow focus on supply chain processes, disconnected from the business / commercial context (i.e., impact on customer experience, revenue, cost, business longevity and customer service), is a well-documented problem that needs to be addressed.
Decision Intelligence: decisions made off sparse data are as ineffective as the data itself. Note that we are not talking about a one-time, fixed set of data but about constantly changing business scenarios, as in real life.
AI / GenAI / AGI: of course, the problem of preparing the required data frames with relevant data, and constantly updating them, cannot be overstated.
Point solutions that focus on niche, singular areas of the supply chain, even with strong ML, cannot address these challenges. Keep in mind that when we talk about resilience and responsiveness in supply chains, multiple supply chain areas spanning entire business processes come into scope, not just singular areas like inventory forecasting or demand planning.
Solution:
Here is a detailed version of the solution:
The proposed solution is a set of inter-related systems (1 through 5 below) that address the above challenges in a configurable, modular, transferable way (i.e. not just for one environment, customer or industry).
A system to capture data:
that captures data well:
The first step is to have a system that solves the core problem of capturing sufficient data (sufficient for the organization), in near real time (let’s call that action time – the time before the next action / decision), spanning a variety of sources – ERPs, SCEs, IoT, CV etc.
that captures data realistically, instantly, actionably:
Not caricatures or over-simplifications (lamented about enough here). Captured data needs to be close to reality, represent the physical supply chain, and have enough detail to reconstruct and trace back business scenarios. Capture data with confidence: provenance and traceability.
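To illustrate (a minimal sketch in which the field names and schema are my illustrative assumptions, not a prescribed design), a captured fact could carry its payload together with its source system, capture time and provenance trail:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class CapturedFact:
    """One raw fact, kept close to its physical reality (hypothetical schema)."""
    entity_id: str            # e.g. a shipment, forklift or dock-door ID
    payload: dict[str, Any]   # the observation itself, un-aggregated
    source_system: str        # ERP, WMS, TMS, IoT gateway, CV pipeline...
    captured_at: datetime     # when the fact was observed (action time matters)
    provenance: list[str] = field(default_factory=list)  # hops the fact travelled

fact = CapturedFact(
    entity_id="dockdoor-07",
    payload={"queue_length": 4, "trucks_waiting": 2},
    source_system="yard-lorawan",
    captured_at=datetime.now(timezone.utc),
    provenance=["sensor-113", "gateway-2"],
)
```

The point of the sketch: the raw observation is stored un-aggregated, and its origin is always traceable.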
A system to capture and manage relationships:
This system creates the context – meaningful, relevant relationships across the raw data captured above.
that rapidly captures relationships:
Records and conserves the relationships among the raw data captured. Relationships include, besides simple yes/no relationships: similarity, equivalence, substitutability, causality, weightage among causal parameters, and business criticality – relationships which are absolutely sacrosanct (must be respected). This must also capture exceptions or changes to data that may affect the relationships. This system just captures relationships. But where do these relationships come from? Addressed below.
that reflects the changing relationships:
The system must capture changes in the relationships instantly, in action time – i.e. before the next action / decision needs to be taken (close to reality – remember?). Real-time is a bonus.
that facilitates recording of the relationships in multiple ways - manual to AI:
The system must support creation of the context in the following ways (a minimal sketch follows this list):
Relationship Discovery – technical (using e.g. database constraints), rule-based, and machine-learning-based inference (NLP for affinity, similarity, path equivalence etc.)
Human Intelligence - uses expert users’ tribal knowledge to create new relationships, subject to validations.
External inputs - allows established external influences on the context, e.g. GDP, weather, regulations, consumer spending, unemployment rate etc., where relevant.
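Putting the above together, here is a minimal sketch of what a recorded relationship could look like; the type names, enum values and fields are illustrative assumptions. Each edge carries its type, its weight, how it was discovered, and its validation status:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class RelationType(Enum):
    SIMILARITY = "similarity"
    EQUIVALENCE = "equivalence"
    SUBSTITUTABILITY = "substitutability"
    CAUSALITY = "causality"
    BUSINESS_CRITICAL = "business_critical"  # sacrosanct, must be respected

class DiscoveredBy(Enum):
    TECHNICAL = "technical"   # e.g. database constraints
    RULE = "rule"             # rule-based inference
    ML = "ml"                 # ML / NLP inference
    HUMAN = "human"           # expert users' tribal knowledge
    EXTERNAL = "external"     # GDP, weather, regulations...

@dataclass
class Relationship:
    source: str               # entity or parameter ID
    target: str
    rel_type: RelationType
    weight: float             # e.g. causal weightage, 0..1
    discovered_by: DiscoveredBy
    validated: bool           # human / rule validation status
    last_revised: datetime

edge = Relationship(
    source="rainfall_mm", target="dockdoor_unload_minutes",
    rel_type=RelationType.CAUSALITY, weight=0.35,
    discovered_by=DiscoveredBy.HUMAN, validated=True,
    last_revised=datetime.now(timezone.utc),
)
```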
that revisits the relationships based on evidence:
The stated relationships, especially the causal factors, change in their weightage over time. The system must keep a balance between what is observed and what is recorded.
that learns continuously:
The relationships are established based on existing, recorded relationships (e.g. foreign keys), constraints and, importantly, manual inputs. This allows inputs from expert humans to start with, and intervention for later-stage corrections, subject to validation rules.
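One simple way to keep that balance between the observed and the recorded – the specific update rule below is an illustrative assumption, not a prescription – is to nudge a recorded causal weight toward each new observation:

```python
def revise_weight(recorded: float, observed: float, learning_rate: float = 0.1) -> float:
    """Nudge a recorded causal weight toward newly observed evidence.

    recorded: the weight currently stored in the Context Fabric (0..1)
    observed: the effect strength seen in the latest data (0..1)
    learning_rate: how quickly evidence overrides the record
    """
    revised = (1 - learning_rate) * recorded + learning_rate * observed
    return max(0.0, min(1.0, revised))  # keep the weight in bounds

# A rainy-day effect on unloading time is observed stronger than recorded:
print(revise_weight(recorded=0.35, observed=0.60))  # -> approximately 0.375
```

A low learning rate preserves expert-recorded weights against noise; a high one lets fresh evidence dominate. Either way, changes remain subject to the validation rules above.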
Systems 1 and 2 above form the substrate (Relationship Space or Context Fabric) that holds every relevant detail, captured and conserved in the form and shape useful to an organization, an industry or even a problem space. This includes historic data, current transactional data, and a placeholder for new data and relationships yet to be discovered or to emerge.
This is a living, breathing fabric of data and relationships, including causality, of a business environment.
A system to control, limit, manage the created Context:
Decisions are functions. Functions of parameters – variables. The strength of a decision depends on the breadth and depth of the parameters used. Not every business, and not every business decision, requires the deepest and broadest set of parameters; it depends on the cost-benefit / ROI of the decision. Hence the Decision Space must be configurable by the user organization. A system is required to define the decision space that forms the boundaries that make commercial sense. The result is a Context Fabric relevant to the organization’s scope.
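As a sketch of the idea (the parameter and function names are hypothetical), a decision space is a declared subset of the fabric’s parameters, chosen on cost-benefit grounds, and a decision is evaluated only against that subset:

```python
# All parameters the Context Fabric knows about for dock-door decisions.
fabric_parameters = {
    "forklifts_available": 10, "forklifts_planned": 12,
    "queue_length": 4, "rainfall_mm": 7.5, "energy_price": 0.21,
}

# A configurable decision space: only the parameters this decision
# is worth paying for (the cost-benefit boundary set by the organization).
decision_space = {"forklifts_available", "forklifts_planned", "queue_length"}

def scoped(params: dict, space: set) -> dict:
    """Limit a decision to its configured boundary."""
    return {k: v for k, v in params.items() if k in space}

def reassign_needed(p: dict) -> bool:
    """A decision is a function of its (scoped) parameters."""
    return p["forklifts_available"] < p["forklifts_planned"] and p["queue_length"] > 2

print(reassign_needed(scoped(fabric_parameters, decision_space)))  # True
```

Widening the decision space (say, adding rainfall) is a configuration change, not an integration project.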
A Context Fabric of an organization reflects, and continues to reflect, the business environment – data, relationships and their evolution. Anyone in the organization can access it, with the same view of inter-related, correlated facts.
A Context Fabric of an industry reflects the business environment it covers, encompassing rules, regulations, industry challenges, processes, common standards etc.
A Context Fabric of a business unit / department reflects its relevant environment. E.g. the Marketing department, with its evolving Context, will look the same to everyone, at any time.
A system to manage decisions:
Now that you have relevant enterprise facts mapped, correlated and ready for use, with inherent reliability of relationships, bring in all your ML models, insights, KPIs or dashboards here, or simulate as many scenarios as you want. Call them Control Towers or Co-Pilots or micro apps or anything else: the base structured above makes them more effective and the decisions made more reliable. While traditional decision systems have always been designed for non-operators (e.g. desk planners as opposed to field operators), this is a chance to bring decision intelligence, and decision recommendations, to the field. Make every field operator an intelligent, informed master of decision making, at scale – i.e. every user.
Importantly, this makes real, practical resilience and responsiveness possible. Remember the decision space in the Context Fabric that has relevant data and relationships captured? Now, when things go wrong, all the information needed for alternative decisions or actions is already present in the Context Fabric, ready for use as and when required, instead of running around at the last minute to find alternative paths. This also makes it possible to be systematically / programmatically responsive, which means responsiveness can be scaled up effortlessly.
This is not the same as a data lake or a lakehouse, which are complementary to this system. This system becomes the default source of data frames for training and deploying models in field operations. These can be flexible, composable, reusable data frames sitting on top of the Context Fabric.
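A minimal sketch of that idea, assuming a hypothetical flat export of the fabric (a real system would query the fabric directly): the same substrate composes different reusable frames for different models:

```python
import pandas as pd

# Hypothetical flat export of the Context Fabric: facts joined with
# their recorded relationships (in a real system this would be a query).
fabric_rows = [
    {"dock_door": "07", "queue_length": 4, "forklifts_available": 10, "rainfall_mm": 7.5},
    {"dock_door": "08", "queue_length": 1, "forklifts_available": 12, "rainfall_mm": 7.5},
]

def compose_frame(rows: list[dict], features: list[str]) -> pd.DataFrame:
    """Compose a reusable training / scoring frame on top of the fabric."""
    return pd.DataFrame(rows)[features]

# The same fabric serves different models with different frames:
queue_frame = compose_frame(fabric_rows, ["queue_length", "forklifts_available", "rainfall_mm"])
print(queue_frame)
```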
A system that facilitates feedback and closes the loop:
Tying all of this together, with a view to bridging the gap between plans and execution realities, is this system, which uses the above context to route the relevant feedback to the relevant origins of action / information / decision.
Most operational expenses are controlled by field activities, irrespective of how much planning is done at the desk. This ensures at least some control over field outcomes when things do not go as planned, which happens all the time.
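A minimal sketch of the loop-closing idea (the event shape and the routing table are illustrative assumptions): field feedback is routed back to whichever system originated the affected plan or decision:

```python
# Illustrative routing table: each feedback event names the origin of the
# decision it contradicts, so the fabric can route it back for revision.
routes = {
    "dock_schedule": "planning_system",
    "forklift_assignment": "wlm",
    "causal_weight": "context_fabric",
}

def route_feedback(event: dict) -> str:
    """Send field feedback to the origin of the affected decision."""
    destination = routes[event["origin"]]
    print(f"Routing '{event['observation']}' to {destination}")
    return destination

route_feedback({
    "origin": "dock_schedule",
    "observation": "unloading 30% slower on rainy days",
})
```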
Examples:
Here is a detailed version.
This presents two simple scenarios and how the proposed approach makes a difference to effective decision making, especially when there are several unpredictable, moving parts. The aim is to demonstrate the difference between the prevalent, extremely simplified approaches and a data-driven, responsive approach to the same scenario, where hundreds of data sources are correlated, interpreted and used for corrective actions. The point is: there are enough technologies to make this a reality at scale today, rather than living in a decades-old simplistic world that ignores the rich data lying around.
Scenario 1: This involves orchestrating interactions among workers, forklifts, dock-doors and vehicles. The solution identified and orchestrated corrective actions in response to changing operational conditions, consuming inputs from various systems: orders, load readiness and marshalling data from the WMS (Warehouse Management System); worker data from the WLM (Warehouse Labour Management) system and IoT wearables; location and utilization data from forklifts’ IoT sensors; and dock-door, vehicle and yard data from the TMS (Transportation Management System) and yard IoT sensors / LoRaWAN. For instance, if a queue started to build up at a dock-door, the causal factors were identified and corrective actions were triggered in near real time (action time). E.g., if 12 forklifts were planned for a certain load and day-of-week profile, but 2 drivers got delayed, leaving only 10 forklifts operating, the system could course-correct by directing nearby alternatives (forklifts or drivers), controlling the build-up of the queue and adjusting the dock-door schedule for any unsolved delay. Overall, this reduced vehicle unloading time, saving payables to transportation vendors, and reduced detention time of freight containers, saving detention charges. The Context could flexibly be updated with other causal factors as they were noticed (and validated) – e.g., the effect of weather as observed by an expert user, operating conditions at the dock-door on a rainy day, energy efficiency, etc.
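As an illustrative sketch of the course-correction logic in this scenario (the function name, thresholds and return shape are my assumptions): detect the causal shortfall, redirect nearby alternatives, and only then adjust the dock-door schedule for any unsolved delay:

```python
def course_correct(planned_forklifts: int, operating_forklifts: int,
                   nearby_idle_forklifts: list[str]) -> dict:
    """Corrective action in 'action time' for a dock-door queue build-up."""
    shortfall = planned_forklifts - operating_forklifts
    if shortfall <= 0:
        return {"action": "none"}
    reassigned = nearby_idle_forklifts[:shortfall]  # direct nearby alternatives
    unresolved = shortfall - len(reassigned)
    return {
        "action": "reassign",
        "forklifts": reassigned,
        "reschedule_dock_door": unresolved > 0,  # adjust schedule for unsolved delay
    }

# 12 planned, 10 operating, one idle forklift nearby:
print(course_correct(12, 10, ["FL-21"]))
# {'action': 'reassign', 'forklifts': ['FL-21'], 'reschedule_dock_door': True}
```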
Comparison: the above scenario has 60+ parameters. Conventional systems use just a handful, after laborious integration projects, only to re-integrate once new data sources are introduced. The Context Fabric based approach, as described above, takes in all parameters, relationships and changes in relationships, and if new data sources arrive at any time, adds them to the Context without integration projects. All intelligence and ML models deployed on the Context Fabric can retrain based on deviations, feedback etc.
Scenario 2: This scenario involved orchestrating interactions among self-guided vehicles (robots) and human workers in a co-habiting warehousing environment. Based on the workers available in a given shift, picking orders were distributed among robots and workers; workers in turn could accept a task, or delegate it to robots if the items were too heavy or too difficult to find, or based on their fatigue. New types of AGVs / SGVs or pick policies could be introduced without re-integration or a large system integration project.
This approach is important because the Context Fabric is portable and reusable across scenarios. It has the potential to bring a paradigm shift to supply chain responsiveness across industries.
A clarification for anyone thinking this can be done the traditional way, by integrating all the systems and building business logic on top: the whole point is about rapidly changing realities and how to respond to them intelligently and informedly. And the time horizon is not a few months but years – forever, actually. Even when a new system / data source is introduced, this will keep working without yet another long integration and System Integrator project.
Conclusion:
Supply Chains work in a physical world with realities that change every minute. The conventional approach of planning independently of those realities gives no opportunity to take feedback and respond. It does not capture rich data and the relationships among data, does not take many contextual elements into consideration during planning or even execution, and hence has no mechanism to respond in a timely manner. The Context Fabric is the substrate that establishes a decision space and feedback framework – an important pre-requisite for responsive supply chains – recognizing the fact that even the best planning requires corrections. The flexibility to enrich context continuously is also a key enabler.
Protected by Intellectual Property Rights. The Context Fabric concept builds on my earlier invention on applying business context onto IoT/RFID data, US Patent (US 7,394,379). Contact me to discuss more.