Preface|Why Are We Rethinking “Decision-Making”?
Last week, we shared an article on “How AI can support manufacturing transformation” (👉 here is the link). It covered topics such as automation, AI Agents, and MOM architecture. After publishing it, we began to reflect on a deeper question:
When we talk about “AI-assisted decision-making,” what kind of decisions are we actually referring to?
Traditionally, especially on the manufacturing floor, decisions often relied on the tacit knowledge and intuition of managers. But once AI enters the workflow, decision-making is no longer just a matter of individual judgment—it becomes a process that must be deconstructed, understood, and optimized.
This question led us to read Third Millennium Thinking: Creating Sense in a World of Nonsense, a book that opened up a more profound perspective. It’s not a guide for making choices; it’s a tool to help us understand how decisions are formed. The authors break down decision-making into three core dimensions, helping us realize that the value of AI lies not only in data processing but in its ability to support users across all three dimensions.
In this article, we’ll explore those three dimensions, examine how to avoid common decision-making pitfalls, and share how DigiHua’s APS system supports decision-making, guiding businesses toward smarter, evidence-based operations.
Summary
This article explores what constitutes a “real decision.” Drawing insights from Third Millennium Thinking, we learn that decision-making is not merely the outcome of data analysis. It is a process that must balance three elements: the credibility of information, the integration of values, and the legitimacy of decision procedures—what the book refers to as the “Decision Triangle.” When any side of this triangle is compromised, decisions often become distorted or counterproductive.
DigiHua’s APS system serves as a smart engine that bridges data and decision-making. It helps businesses operate in complex environments by building models that are stable, explainable, and actionable—paving the way for truly high-quality, AI-driven decisions.
Contents
- 1. What Makes a Good Decision?
- 2. When the Decision Triangle Falls Out of Balance: Three Common Pitfalls
- 3. Without Structured Data, There Can Be No Predictable Operations
- 4. APS: The Launchpad for AI Decision-Making and the Backbone of Smart Operations
- 5. How We Help Manufacturers Strengthen Data Reliability
- 6. Conclusion|When We Talk About Decisions, We’re Really Talking About Trust, Values, and Participation
1. What Makes a Good Decision?
In Third Millennium Thinking, authors Saul Perlmutter, John Campbell, and Robert MacCoun introduce a concept called the Decision Triangle. According to them, every sound decision must rest on three pillars:
- Epistemic Validity – the reliability of information,
- Normative Integration – the incorporation of values, and
- Procedural Justice – the legitimacy of the decision-making process.
These three elements form a stable structure. If any side is unbalanced, decisions can become biased, distorted, or even counterproductive.
While these concepts may sound abstract, they’re highly relevant in manufacturing settings—on the factory floor, in ERP systems, and in scheduling meetings. These issues are not theoretical; they’re what we face every day.
Epistemic Validity: Is the Data You Give AI Clear or Distorted?
The book points out that poor decisions often stem not from flawed logic, but from flawed data. It’s like navigating with a blurry map—you can’t reach the right destination if your inputs are wrong. This insight precisely reflects the pain points many manufacturers experience when implementing smart factory systems.
Common challenges include:
- Machine data comes from different brands with mismatched formats and frequencies.
- Yield and scrap records are manually filled in and cannot be verified.
- Systems are siloed—data exists but does not communicate across platforms.
- There are no standardized fields, no timestamps, and no data lineage.
These issues give the illusion of having data, when in reality the data is unreliable. AI cannot interpret or correct these errors on its own. As we see it: no matter how powerful your inference engine is, it cannot draw the right conclusion from a wrong premise.
That’s why, when we built DigiHua’s APS system, we didn’t just focus on data collection—we focused on ensuring that the source, structure, and relationships within the data are trustworthy.
Our APS integrates deeply with systems like MES and QMS, ensuring that every piece of data has context, time relevance, and standardized format. For example, when retrieving machine downtime records, we don’t just get “how long it stopped,” but also the responsible operator, the processed product, the production lot, and the corresponding quality anomaly—all in sync.
In other words, our goal is not to make AI “calculate more,” but to ensure it knows what it’s calculating.
Only when AI operates with a correct worldview—built from structured, meaningful data—can its suggestions be truly reliable.
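What follows is a minimal Python sketch of what such a context-rich downtime record might look like. The field names are hypothetical, not DigiHua’s actual schema; the point is simply that every event carries who, what, when, and why, rather than a bare duration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DowntimeRecord:
    """A downtime event that carries its own context (hypothetical schema)."""
    machine_id: str          # which machine stopped
    started_at: datetime     # timestamped start of the stop
    ended_at: datetime       # timestamped end of the stop
    operator_id: str         # responsible operator
    product_id: str          # product being processed
    work_order_id: str       # production lot / work order
    reason_code: str         # standardized downtime reason, not free text
    quality_event_id: str | None = None  # link to a related quality anomaly, if any

    @property
    def duration_minutes(self) -> float:
        return (self.ended_at - self.started_at).total_seconds() / 60

# Example: the record answers "how long", but also "who, what, and why".
record = DowntimeRecord(
    machine_id="CNC-07",
    started_at=datetime(2024, 5, 6, 9, 12),
    ended_at=datetime(2024, 5, 6, 9, 47),
    operator_id="OP-113",
    product_id="P-4512",
    work_order_id="WO-20240506-018",
    reason_code="TOOL_CHANGE",
    quality_event_id=None,
)
print(record.duration_minutes)  # 35.0
```

With records like this, “how long it stopped” is always tied to an operator, a work order, and a reason code that downstream analysis can trust.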
Normative Integration: Facts Alone Aren’t Enough—Values Matter Too
The second cornerstone of decision-making is Normative Integration—the legitimate incorporation of values into the process. This principle reminds us that facts alone are insufficient; decisions also require value-based judgment.
Data can tell us what happened, but it can’t tell us what should be done. For example, an APS system might identify the most efficient production line or the most optimal use of raw materials. But when it comes to choosing between lowest cost and most reliable delivery, AI can’t decide your priorities. This is the point where values must intervene.
In daily scheduling, dispatching, and order management, these value judgments are everywhere. For instance:
- Should you squeeze in a rush order for a VIP customer at the expense of delaying others?
- Should you authorize weekend overtime, or uphold work-hour limits?
- Should in-house parts be prioritized over outsourced processing—and how do you weigh internal versus external risks?
These are not questions a system can simply “calculate.” They require dialogue—among teams, roles, and priorities. Decisions must be negotiated, not dictated. It’s through multi-role collaboration, shifting condition weights, aligning business goals, and drawing on both moral instincts and operational realities that the most suitable answer emerges.
That’s why DigiHua’s APS system emphasizes configurable scheduling logic. We allow users to set weight priorities—lead time, capacity, changeovers, inventory, etc.—so the system can generate tailored recommendations based on the values relevant to each scenario, instead of applying a one-size-fits-all rule.
Going a step further, we visualize scheduling assumptions, helping users not only see the result but understand the underlying value-based logic. This makes it easy to fine-tune strategic direction when necessary.
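To make “configurable weights” concrete, here is a simplified sketch of how user-defined priorities might score candidate schedules. The metric names, numbers, and candidates are invented for illustration and do not represent DigiHua’s actual scheduling algorithm.

```python
# Hypothetical example: each candidate schedule is summarized by a few
# normalized metrics (0 = worst, 1 = best), and user-defined weights decide
# which candidate the system recommends.
weights = {            # set by the planner, not hard-coded by the vendor
    "on_time_delivery": 0.4,
    "capacity_utilization": 0.3,
    "changeover_minimization": 0.2,
    "inventory_level": 0.1,
}

candidates = {
    "A (rush the VIP order)": {"on_time_delivery": 0.95, "capacity_utilization": 0.70,
                               "changeover_minimization": 0.50, "inventory_level": 0.60},
    "B (lowest cost)":        {"on_time_delivery": 0.80, "capacity_utilization": 0.90,
                               "changeover_minimization": 0.85, "inventory_level": 0.70},
    "C (balanced)":           {"on_time_delivery": 0.88, "capacity_utilization": 0.82,
                               "changeover_minimization": 0.70, "inventory_level": 0.75},
}

def score(metrics: dict[str, float]) -> float:
    """Weighted sum of normalized metrics; higher is better."""
    return sum(weights[k] * metrics[k] for k in weights)

# Rank candidates under the current value settings.
for name, metrics in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(metrics):.3f}")
```

Raising or lowering a weight, say changeover_minimization, changes which candidate surfaces first; that is exactly the value-based trade-off the system should make visible rather than hide.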
Tacit knowledge embeds values without ever stating them. AI can help surface and analyze that knowledge, making those values visible and actionable.
In our view, truly intelligent decision-making means enabling the system to understand your preferences—and enabling people to understand the system’s reasoning. This is how we avoid being blindly ruled by algorithms.
Procedural Justice: Legitimate and Accepted Decision-Making
The third essential element is Procedural Justice—whether the decision-making process itself is legitimate and accepted. In other words, it’s not just about whether the result is right, but also who made the decision, how it was made, and whether it can be understood and supported. Even the best recommendations are meaningless if no one accepts or acts on them.
Have you encountered situations like these?
- The system produces an optimized schedule, but the frontline supervisors refuse to follow it.
- The data looks solid to top management, but workers on the ground say, “This doesn’t reflect reality.”
- A system update changes decision logic, but production staff weren’t informed—causing confusion or even resistance.
These aren’t technical issues—they’re breakdowns in decision legitimacy. People don’t understand why a schedule was created that way, who made the call, or why the process changed.
At DigiHua, our approach to APS system design is built around a key belief: decision-making is a collaborative process, not a top-down command. Our system doesn’t just provide AI-driven suggestions—it also helps users understand where those suggestions come from, how they were generated, and whether they can be adjusted. Crucially, it leaves room for human participation and review.
To enable this, we’ve implemented several core mechanisms:
- Decision Process Visualization
Every dispatch recommendation and changeover suggestion includes clearly labeled conditions and priority weights, so users can understand the logic behind each outcome.
- Version Tracking and Audit Trail
Users can review the full history of scheduling changes across different versions, ensuring that every change is traceable and backed by discussion.
- Collaborative Permissions and Review Flows
For high-impact adjustments—like last-minute orders or priority overrides—our system supports multi-role approval workflows and notifications. This prevents “black box” AI decisions and encourages shared accountability.
These designs aim to bring transparency and a sense of participation to the decision-making process. The goal is to turn AI recommendations from abstract suggestions into decisions that are actually accepted and executed by the team.
Because no matter how precise the algorithm, without consensus and trust, it’s just theory on paper.
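As a rough illustration of version tracking and multi-role review, the sketch below models a single audit-trail entry. The roles, statuses, and field names are assumptions made for the example, not a description of DigiHua’s actual workflow engine.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ScheduleChange:
    """One entry in a hypothetical schedule audit trail."""
    version: int
    changed_by: str                    # who proposed the change
    reason: str                        # why (e.g., rush order, machine down)
    requires_approval: bool            # high-impact changes need sign-off
    approvals: list[str] = field(default_factory=list)  # roles that signed off
    timestamp: datetime = field(default_factory=datetime.now)

    def approved(self, required_roles: set[str]) -> bool:
        """A change is effective only once every required role has reviewed it."""
        return not self.requires_approval or required_roles.issubset(self.approvals)

change = ScheduleChange(
    version=42,
    changed_by="planner_lee",
    reason="Priority override for a rush order",
    requires_approval=True,
)
change.approvals += ["production_supervisor", "sales_manager"]
print(change.approved({"production_supervisor", "sales_manager"}))  # True
```

Because each version records who changed what, why, and who signed off, a schedule stops being a mysterious output and becomes a decision the team can retrace.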
2. When the Decision Triangle Falls Out of Balance: Three Common Pitfalls
Like a tripod, sound decision-making needs all three legs of the Decision Triangle (reliable data, value integration, and procedural legitimacy) to be balanced and stable. When any one of them is compromised, faulty decision-making patterns emerge that are both common and frequently overlooked.
These patterns are:
- Epistocracy – an overreliance on knowledge gatekeepers
- Hyper-Autonomy – excessive individualism and lack of coordination
- Category Collapse – confusion between facts and values
These aren’t theoretical risks—they’re very real, and they tend to surface especially during the early stages of digital transformation. Their root cause isn’t usually poor system usage or operator error. Rather, they arise from deeper imbalances in organizational culture and decision-making structure. If left unchecked, they can seriously undermine the core value of data-driven and AI-assisted decision-making.
Epistocracy: When “Expert-Only” Decisions Undermine Trust
In many organizations, decision-making often rests in the hands of those labeled as “experts.” At first glance, this seems reasonable—after all, people with data, tools, and experience should be the best qualified to decide.
But here’s the problem: when decision-making is limited to a few, the majority are excluded from understanding and participating.
This is a common issue in many manufacturing settings:
- Scheduling logic is managed by a single person; others are left out.
- System parameters are so complex that only IT understands them; operators are kept in the dark.
- Data analysis is locked away in management reports with no link to what’s happening on the ground.
Over time, this leads to:
- A breakdown of trust
- Resistance from frontline teams
- The mistaken perception that “AI is here to control me”
What was meant to assist decision-making becomes a black box of centralized control and information monopoly.
To counter this “expert-only” model, DigiHua’s APS system, with its built-in AI Agent, is designed to promote transparency, explainability, and collaboration. Here’s how:
- Natural Language Query Interface
No need to know SQL or coding—users can ask simple questions like “Which production line had the highest scrap rate this week?” and receive visual charts and trend data instantly (a minimal sketch of this idea follows this list).
- Visualized Decision Logic
Every scheduling suggestion comes with clickable logic: why it was scheduled that way, and what constraints or goals it was based on. Users don’t just receive the result—they understand it.
- Multi-Role Permissions and Collaborative Interfaces
Engineers, QA, production, and sales teams can all participate in adjustments from their own perspectives. This helps build decision consensus instead of leaving choices to a privileged few.
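The toy sketch below shows the principle behind a natural-language query interface: a plain-language question is routed to a computation over standardized fields. It is deliberately simplistic; a real AI Agent would rely on a language model rather than keyword matching, and the data and field names here are invented.

```python
import pandas as pd

# Toy production data with standardized fields (invented for illustration).
scrap = pd.DataFrame({
    "week": ["2024-W18"] * 4,
    "line": ["L1", "L2", "L3", "L1"],
    "produced": [1000, 800, 1200, 950],
    "scrapped": [12, 40, 18, 9],
})

def answer(question: str) -> pd.Series:
    """Very small 'intent router': maps one known question type to a query."""
    if "highest scrap rate" in question.lower():
        totals = scrap.groupby("line")[["scrapped", "produced"]].sum()
        rates = (totals["scrapped"] / totals["produced"]).sort_values(ascending=False)
        return rates
    raise ValueError("Question type not recognized in this toy example.")

print(answer("Which production line had the highest scrap rate this week?"))
```

The answer is only trustworthy because the underlying fields (line, produced, scrapped) are standardized; the natural-language layer lowers the barrier to asking, it does not compensate for unreliable data.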
We fully acknowledge that knowledge is essential. But when knowledge becomes a tool for power consolidation, even the “right” decision can be questioned, rejected, or ignored.
Hyper-Autonomy: When Decentralization Turns into Disarray
When everyone claims to have the answer, decision-making often collapses into a lack of consensus. Over-decentralized processes can result in individuals acting independently based on their own judgment—not with greater flexibility, but with greater chaos.
This pattern typically emerges as a backlash against epistocracy. Employees begin to question experts and reject systems, insisting, “I know the situation best,” or “The system is just for reference—my experience is more reliable.” It may look like autonomy, but in reality, it leads to disorganized decisions, inefficiency, and fragmented information.
In manufacturing environments where tacit knowledge plays a big role, this is a familiar sight:
- Production Control (PC) departments create schedules their own way, ignoring system suggestions.
- Each unit logs downtime or scrap reasons using its own format and categories.
- Even after system implementation, you still hear: “This is how we’ve always done it—it’s faster.”
- Employees resolve production issues via LINE chats or verbal discussion, with no records left behind.
Sure, these workarounds might offer short-term speed—but over time, they produce serious issues:
- Data becomes unstructured and incompatible
- Judgments lack reliable foundations
- Decision-making processes leave no trace
And worst of all, AI can’t help, because it has no idea what decisions are based on.
To counter this, DigiHua’s APS system emphasizes three key principles when integrating AI functionality:
- Standardized Data Structures
Every entry—whether it’s a downtime log, changeover reason, or dispatch decision—follows a defined classification structure. This prevents “everyone speaking their own language” and allows for data integration.
- Knowledge Modeling and Semantic Engines
By developing industry-specific semantic models and knowledge graphs, we help AI understand shop-floor terminology. In doing so, we turn experiential knowledge into shareable rules and actionable cases.
- Natural Language Interfaces and Collaborative Decision Platforms
Different users can ask questions, provide input, and offer feedback in their own words—creating a foundation for cross-functional consensus and coordinated action.
We believe autonomy is not a problem, as long as it’s autonomy with shared understanding. True empowerment comes within a framework of clear rules and shared data logic. Without that, people drift apart—and AI will have no compass to guide anyone back on course.
Category Collapse: When Science Is Misused, Decisions Swing to Extremes
One of the most overlooked traps in “data-driven decision-making” is what we call Category Collapse—the confusion between facts and values. In this scenario, either science is burdened with making moral judgments, or value-based beliefs are used to reject all forms of data analysis.
Both extremes can—and often do—appear on the factory floor:
- One extreme is “Technology decides everything”:
The system calculates the optimal schedule with the highest efficiency and lowest cost, and that’s the final decision—no further discussion. This logic ignores human and operational concerns like employee burnout, unworkable shift patterns, or rising quality risks.
- The other extreme is “Reject all data—just go with gut feeling”:
No matter what the system analyzes or what the model suggests, some people insist, “I just feel this way is safer,” fully rejecting the system and driving a wedge between management and operations.
This kind of flawed decision-making is especially common during the early stages of AI adoption. The reason is simple: we mistake science for the answer, without realizing that science provides evidence, not values.
At DigiHua, we recognize the risks of this confusion. That’s why, in the design of our APS + AI logic, we’ve intentionally built a dual-layer framework that separates facts from values:
- Facts Layer – Data-Driven Recommendations
The system analyzes orders, capacity, resources, and historical data to generate optimized schedules and risk forecasts.
- Values Layer – User-Defined Preferences and Priorities
Users can define value-based conditions—such as sustainability goals, overtime policies, or customer tiering—so that AI recommendations operate within a value-aligned framework.
- Multi-Version Schedule Simulations
The system supports simultaneous simulation of multiple value sets, offering A, B, and C schedule versions. This allows managers to directly compare how different priorities impact the outcome (see the sketch after this list).
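Here is a simplified sketch of that facts/values separation and multi-version comparison. The plans, numbers, and penalty weights are invented for illustration; the point is that the same facts layer, evaluated under different value profiles, yields different recommended versions.

```python
# Facts layer: what the data says about each feasible schedule (invented numbers).
facts = {
    "plan_1": {"late_orders": 0, "overtime_hours": 16, "cost": 105_000},
    "plan_2": {"late_orders": 3, "overtime_hours": 0,  "cost": 99_000},
    "plan_3": {"late_orders": 1, "overtime_hours": 2,  "cost": 96_000},
}

# Values layer: what the organization chooses to care about (penalty per unit).
value_profiles = {
    "A (delivery first)": {"late_orders": 20_000, "overtime_hours": 100,   "cost": 1},
    "B (no overtime)":    {"late_orders": 2_000,  "overtime_hours": 5_000, "cost": 1},
    "C (lowest cost)":    {"late_orders": 500,    "overtime_hours": 50,    "cost": 3},
}

def penalty(plan: dict[str, float], profile: dict[str, float]) -> float:
    """Lower is better: each fact is weighted by how much the profile penalizes it."""
    return sum(profile[k] * plan[k] for k in profile)

# The facts never change; only the values layer does, and so does the recommendation.
for name, profile in value_profiles.items():
    best = min(facts, key=lambda p: penalty(facts[p], profile))
    print(f"Version {name}: recommend {best}")
```

Running this prints a different recommended plan for each profile, which is the whole purpose of multi-version simulation: managers compare what each set of priorities would cost, then choose.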
The goal of this design is not to let AI make all the decisions, but to empower managers to clearly distinguish between what the data says and what they choose to do.
Our belief is this: the problem is never science itself. The issue lies in not understanding the role science should play—or where its boundaries lie. When we treat data analysis as a substitute for value judgment, or use beliefs to reject all data, the imbalance isn’t in the system or the algorithm—it’s in how we blur the line between evidence and choice.
These three flawed modes of thinking help us see that implementing AI and intelligent systems is not just a technical upgrade—it’s a mental reset for the entire organization.
3. Without Structured Data, There Can Be No Predictable Operations
Earlier, we discussed the Decision Triangle and the three flawed decision-making patterns. While they may appear to stem from mindset or culture, they ultimately lead us back to a more fundamental question:
Can your data actually support decision-making?
No matter how advanced your AI system is, without clean, complete, and structured data, all your predictions are nothing more than speculation.
During our consultations at factory sites, DigiHua has repeatedly encountered companies that claim to be “digitally transformed”—yet still struggle with the following issues:
- Equipment data comes from multiple brands, with some updating every second and others only once per hour.
- Downtime and scrap records are handwritten and entered manually after shifts—often with time gaps of several hours.
- Yield statistics exist but lack critical context like work order numbers, raw material batches, or operator names—making root cause analysis impossible.
- Most critically, systems like MES, ERP, and QMS are not integrated, creating isolated “data islands.”
This kind of data isn’t just unusable for AI training—it’s barely usable for human decision-making.
That’s why, before implementing an APS system, we always conduct a pre-project assessment and ask one key question:
“Is your data analyzable?”
At DigiHua, we emphasize that smart manufacturing doesn’t start with AI—it starts with data. Structured data means every piece of information comes with context, fields, timestamps, and relational links. That’s the foundation of any predictive capability.
Consider the following examples:
- If machine downtime logs are immediately linked to production units, work order numbers, and scrap reasons, the system can accurately identify the source of anomalies.
- If changeover records are analyzed alongside product IDs, processing sequences, shift data, and utilization rates, the system can proactively suggest strategies to minimize changeovers.
- If quality issues are tagged with raw material batch numbers and suppliers, the company can quickly perform traceability and preventive actions.
When your data is well-collected, well-structured, and well-validated, it becomes trustworthy—and only then does predictive power become possible.
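As a toy illustration of why structured changeover records matter (this is not DigiHua’s actual optimization logic), the sketch below counts changeovers in a day’s processing sequence and shows how simply batching identical products reduces them. The product IDs and sequence are invented.

```python
# Hypothetical as-run processing sequence on one line.
sequence = ["P-4512", "P-4512", "P-7021", "P-4512", "P-7021", "P-7021", "P-9900"]

def count_changeovers(seq: list[str]) -> int:
    """A changeover happens whenever the product changes between consecutive jobs."""
    return sum(1 for prev, cur in zip(seq, seq[1:]) if prev != cur)

# Simple improvement heuristic: batch identical products together,
# preserving the order in which each product first appears.
grouped = [job for product in dict.fromkeys(sequence) for job in sequence if job == product]

print("as-run changeovers:", count_changeovers(sequence))   # 4
print("grouped changeovers:", count_changeovers(grouped))   # 2
```

This only works because each job in the sequence is tagged with a product ID; with free-text or missing records, the system cannot even count the changeovers, let alone suggest reducing them.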
4. APS: The Launchpad for AI Decision-Making and the Backbone of Smart Operations
When we talk about “AI supporting decision-making,” we’re not referring to a single tool—but to an entire process. This process spans from data collection and cleansing, to analysis, forecasting, and ultimately the generation of actionable insights. For this chain to work seamlessly, it must begin with a solid foundation: a clear, computable operational model—built by an APS (Advanced Planning and Scheduling) system.
We often describe APS as the “scheduling hub” of a smart factory—but more accurately, it is the engine that connects data to decisions. It is the only system that can consolidate data from disparate sources—ERP, MES, QMS—and turn it into decision-ready, actionable recommendations.
These recommendations aren’t generated out of thin air—they are built upon the business’s real-world constraints and strategic priorities, such as:
- Order deadlines vs. resource limitations
- Production costs vs. equipment utilization
- Workforce availability vs. customer tiers
- Process sequences vs. changeover frequency
DigiHua’s APS system goes beyond traditional scheduling. It features dynamic scheduling algorithms, simulation optimization, and a powerful condition-based configuration engine with multi-version simulation. This allows managers to rapidly switch between strategic setups depending on scenarios—be it rush orders, material shortages, or equipment maintenance.
More importantly, APS is not a standalone tool. It is a core node within the broader MOM (Manufacturing Operations Management) architecture. It ingests real-time data from MES and pushes scheduling outputs back to the shop floor—closing the loop and turning decisions into actions.
So, when we talk about AI supporting decisions, we don’t mean AI making calls in isolation. We mean AI operating from a clean, structured, and logically grounded data environment—and that starting line is provided by APS.
Without APS, even the smartest AI can only observe and describe.
With APS, AI can reason, assist, and truly participate in decision-making.
5. How We Help Manufacturers Strengthen Data Reliability
Bridging the gap between theoretical models and real-world implementation is always a challenge. In manufacturing, the biggest hurdle is turning data into something trustworthy—and turning decisions into something actionable. Through years of supporting digital transformation in the industry, DigiHua has gained deep insight into this very challenge.
Many of the companies we’ve worked with had some level of data infrastructure in place before implementing our systems. Yet once operations began, they often encountered issues like:
- Large volumes of data—but no structured logic to integrate it
- Data in every department—but stored in incompatible formats and classifications
- Unconnected systems—leading to the same issue being described differently across three separate reports
This isn’t just a data quality problem—it’s a sign that the organization’s information governance structure isn’t yet ready for AI-assisted decisions.
That’s why DigiHua never starts with “plug in the AI model.” Instead, we rebuild data reliability from the ground up. Our three-step strategy includes:
- Establishing Data Standards and Semantic Structures
From work order coding to process classification, we build a master data architecture that embeds context. This allows systems to “speak the same language” and understand the operational logic of the business.
- Integrating Real-Time Signals and System Data
We connect MES, IoT sensors, and production traceability systems, converting signals like machine utilization, downtime causes, scrap reasons, and quality inspection results into a format that can be computed and analyzed by APS.
- Enabling Semantic Queries and Multi-Role Decision Simulation
By implementing AI Agents and natural language query functions, people across departments can “ask the system in their own words.” Combined with multi-version schedule simulations, this builds a shared, verifiable foundation for cross-functional decision-making.
For example, we once helped a metalworking factory move beyond scrap logs that simply said “NG.” We redesigned the records into standardized classifications—such as “raw material dimensional error,” “tool failure,” or “fixture slippage.” We then linked those records to scheduling data and machine parameters. As a result, the AI could not only detect anomalies, but also pinpoint which step in the process the issue occurred, and suggest concrete corrective actions.
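A rough sketch of that kind of restructuring is shown below, with invented codes and field names rather than the customer’s actual data: free-text “NG” entries become standardized categories that can be joined with machine parameters and process steps.

```python
import pandas as pd

# Before: free-text scrap logs that only say "NG" (not analyzable).
raw_logs = pd.DataFrame({
    "work_order": ["WO-101", "WO-102", "WO-103"],
    "note": ["NG", "NG - tool?", "ng"],
})

# After: standardized scrap categories plus links to process context.
SCRAP_CODES = {
    "MAT_DIM": "raw material dimensional error",
    "TOOL_FAIL": "tool failure",
    "FIXTURE_SLIP": "fixture slippage",
}

structured_logs = pd.DataFrame({
    "work_order": ["WO-101", "WO-102", "WO-103"],
    "scrap_code": ["MAT_DIM", "TOOL_FAIL", "FIXTURE_SLIP"],
    "machine_id": ["CNC-07", "CNC-07", "MILL-02"],
    "process_step": [30, 40, 20],
})

machine_params = pd.DataFrame({
    "machine_id": ["CNC-07", "MILL-02"],
    "spindle_speed_rpm": [8000, 6000],
})

# Scrap reasons can now be joined with machine parameters and process steps,
# so anomalies trace back to a specific step instead of a vague "NG".
report = structured_logs.merge(machine_params, on="machine_id")
report["scrap_reason"] = report["scrap_code"].map(SCRAP_CODES)
print(report[["work_order", "process_step", "scrap_reason", "spindle_speed_rpm"]])
```

The “before” table cannot answer any root-cause question; the “after” tables can, which is the difference between having records and having analyzable data.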
6. Conclusion|When We Talk About Decisions, We’re Really Talking About Trust, Values, and Participation
When we began reading Third Millennium Thinking, we weren’t looking for a new framework—we were trying to reframe an old question:
When we make decisions—in the factory, in meetings, or behind the systems—what are those decisions truly based on?
The “Decision Triangle” introduced in the book—epistemic validity, normative integration, and procedural justice—isn’t just a theoretical model. It’s a mirror that reflects the often-overlooked foundations of smart manufacturing and AI-assisted decision-making.
We often ask: “Will AI replace humans?”
But the more important questions are:
- Can we trust the data we feed into AI?
- Are we including diverse values when we make decisions?
- Does our process allow people to understand and participate?
These three questions map directly to the book’s core insights:
- Data credibility answers: What can I trust?
- Value inclusion answers: What truly matters to me?
- Procedural justice ensures: This was a choice I was part of.
At DigiHua, we believe smart manufacturing is more than a technology upgrade. It’s a cultural shift toward transparency, logic, and human involvement. APS, MES, QMS—these systems are not end goals; they are necessary tools along the way. Likewise, AI Agents aren’t here to take over—they’re here to help us make better decisions in complex, uncertain environments.
To stay steady in this wave of digital transformation, we don’t just need to move faster—we need to move with clarity. We don’t just need automation—we need intentional thinking.
This article is not a conclusion, but an open invitation to reflect and collaborate. If you’re also thinking about how to improve decision quality and unlock the real value of AI, we’d be happy to continue the conversation—or help you design a smart decision-making logic tailored to your factory.
Because ultimately, it’s not the tools that drive the future—
It’s our understanding and choices.
If you’re ready to make AI a true member of your decision-making team, DigiHua offers more than just technology. We provide an intelligent decision-making environment built on trust, values, and participation—from data governance and process transparency to simulation and value-based configurations.
Let’s build the foundation where AI can genuinely make a difference.