XAI Requirements in Smart Production Processes: A Case Study

1. Introduction: Explainable AI (XAI)

The increasing prevalence of artificial intelligence (AI) systems has led to a growing consensus on the importance of making such systems explainable. This is often emphasized with respect to societal and developmental contexts, but it is just as crucial within business processes, including manufacturing and production. While this is widely recognized, practical examples that demonstrate how to take explainability into account in the latter contexts remain scarce. This paper presents a real-world use case in which we employed AI to optimize an Industry 4.0 production process without considering explainable AI (XAI) requirements. Building on previous work on models of the relationship between XAI methods and various associated expectations, as well as non-functional explainability requirements, we show how business-oriented XAI requirements can be formulated and prepared for integration into process design.

2. The Case Study

2.1 The Initial Situation

The factory of a major car producer serves as our use case. Manufacturing a car is a complex process involving multiple components that must be produced and prepared for assembly. While the bare bodies and some components are produced by the manufacturer itself, additional parts are produced by external suppliers and delivered to the factory. The assembly process consists of two main steps: preparing the base bodies (adding doors and painting) and final assembly (attaching all other parts). It is in the final assembly step that the optimization problem, and with it the question of explainability, arises.

During final assembly, the order in which workers assemble cars is crucial to the effectiveness of the production process. Car bodies move automatically on conveyors operating at a fixed cycle time, which compounds the challenge: if a station falls behind or a part is missing, production halts. Moreover, cars differ in their characteristics and therefore in their assembly difficulty; assembling several similarly complex cars consecutively can bring production to a standstill.

To allow the production order to be optimized, the factory employs a line buffer that connects the two stages. The buffer comprises 27 parallel lines, each holding up to 15 car bodies.
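
To make this structure concrete, the following minimal Python sketch models the buffer; the class and method names (CarBody, LineBuffer, push, pull) are our own illustrative assumptions, not taken from the factory’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class CarBody:
    body_id: str
    features: dict  # e.g. {"complex": True, "sunroof": False}; illustrative

@dataclass
class LineBuffer:
    """Decoupling buffer between body preparation and final assembly:
    27 parallel lines, each holding up to 15 bodies in FIFO order."""
    num_lines: int = 27
    capacity_per_line: int = 15

    def __post_init__(self):
        self.lines = [[] for _ in range(self.num_lines)]

    def push(self, line_idx: int, body: CarBody) -> bool:
        """Add a freshly prepared body to the back of a line; fails if full."""
        if len(self.lines[line_idx]) >= self.capacity_per_line:
            return False
        self.lines[line_idx].append(body)
        return True

    def pull(self, line_idx: int) -> CarBody:
        """Release the front body of a line into final assembly. Only the
        head of each line is accessible, which is what makes choosing the
        next body a non-trivial sequencing decision."""
        return self.lines[line_idx].pop(0)
```

Because only the head of each of the 27 lines can be pulled, the decision of which line to draw from next is the lever by which the production order can be shaped.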

2.2 AI-Driven Solution

Traditionally, a human supervisor manually decides which car body to pull next for final assembly, with experienced employees following a set of rules of thumb based on the features of the stored cars. The approach co-developed by some of this paper’s authors aims to optimize this decision-making process with the help of a learned AI agent.

The team trained the AI agent using reinforcement learning, relying on a simulation of the line buffer and the final assembly line. To generate appropriate rewards, they combined two approaches: one based on the previously used rules of thumb and one based on simulation. The first approach defines a large number of rules, hierarchically ordered by importance, and imposes negative rewards for rule violations, with more severe punishments for critical rules; the agent is trained to minimize these punishments. The second approach simulates production time, including events that can lead to a production stop, and trains the agent to maximize the number of cars produced, thereby minimizing production stoppage time.
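
A minimal sketch of how these two reward sources might be combined is shown below; the example rule, the severity weights, and the simulator interface (simulator.run, cars_produced, stoppage_minutes) are illustrative assumptions on our part, not the team’s actual implementation.

```python
# Example rule: two highly complex cars back to back risk a standstill.
class NoConsecutiveComplexRule:
    severity = 10.0  # critical rules would carry larger severities

    def is_violated(self, sequence):
        return any(a.features.get("complex") and b.features.get("complex")
                   for a, b in zip(sequence, sequence[1:]))

def rule_penalty(sequence, rules):
    """Rule-based reward: hierarchically ordered rules impose negative
    rewards on violation, weighted by each rule's importance."""
    return -sum(rule.severity for rule in rules if rule.is_violated(sequence))

def simulation_reward(simulator, sequence):
    """Simulation-based reward: run the line simulation on the chosen
    sequence, rewarding throughput and penalising stoppage time."""
    result = simulator.run(sequence)
    return result.cars_produced - result.stoppage_minutes

def combined_reward(sequence, rules, simulator, w_rules=1.0, w_sim=1.0):
    """Weighted combination of both signals; the agent is trained to
    maximise this, i.e. to minimise punishments and stoppages."""
    return (w_rules * rule_penalty(sequence, rules)
            + w_sim * simulation_reward(simulator, sequence))
```

Keeping the two signals separate and weighting them makes it possible to tune how strongly the learned policy should respect the legacy rules of thumb versus raw simulated throughput.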

3. Identifying XAI Requirements

3.1 Systematic Analysis

To derive empirically testable hypotheses about which requirements result from the use of our AI system, we draw on a model that identifies relevant stakeholder groups as crucial contextual factors. We focus on the specific practical needs of these stakeholder groups, particularly the supervisors, and consider the types of explanatory information that address these needs, so that the system’s decision-making becomes transparent and comprehensible to each group.

Stakeholders: The successful implementation of an AI system in an assembly line setting requires understanding and addressing the needs of three key stakeholder groups:

  • Supervisors: They decide which car bodies leave the line buffer and thus ultimately whether to follow the AI’s recommendations.
  • Assembly Line Workers: AI-supported decisions directly affect their work, potentially confronting them with unexpected car configurations.
  • Management: They need to comprehend the underlying reasons for incidents and production halts.

The deployment of the AI system significantly influences each of these roles and affects the system’s overall performance. By catering to their requirements and addressing their concerns, organizations can foster a more efficient, productive, and transparent AI-driven environment.

3.2 Derivation of Requirements and Conditions for Success

Upon closer examination, we group the needs of each stakeholder into three general categories: assessing trustworthiness, assuming responsibility, and driving improvement. Specifically, the supervisors expressed three relevant needs that could explain the lack of adoption of the AI system:

  1. Trustworthiness Assessment:

    A lack of trust in the AI system and its results could explain why supervisors often do not follow the AI’s recommendations. Contributing factors include genuinely poor system performance, which disqualifies the system as trustworthy, and miscalibrated trust, where users incorrectly assess a trustworthy system as untrustworthy.

  2. Responsibility:

    Supervisors need to feel that they maintain responsibility for their decisions. If they perceive that the AI system is making decisions without their input, they may resist following its recommendations.

  3. Improvement:

    Supervisors want to understand how the AI system can enhance their decision-making processes. They require explanations that clarify how the AI arrived at its recommendations, allowing them to learn and adapt their own strategies.

To address these needs, we propose that the AI system must provide clear, understandable explanations of its decision-making processes. This can be achieved by applying XAI methods tailored to the specific needs of each stakeholder group, as sketched below.
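
As one illustration of what such an explanation could look like for a supervisor, the sketch below uses a simple perturbation-based feature attribution; the agent interface (agent.score) and the mask_feature helper are hypothetical stand-ins, not parts of the deployed system.

```python
def explain_recommendation(agent, buffer_state, line_idx, feature_names):
    """Perturbation-based attribution: estimate how much each feature of the
    recommended body contributed to the agent's score by masking features
    one at a time and observing the score change. A minimal stand-in for
    more elaborate XAI methods (e.g. SHAP values) aimed at supervisors."""
    base_score = agent.score(buffer_state, line_idx)  # hypothetical interface
    attributions = {}
    for name in feature_names:
        masked_state = buffer_state.mask_feature(line_idx, name)  # hypothetical helper
        attributions[name] = base_score - agent.score(masked_state, line_idx)
    # Features with the largest positive attribution drove the recommendation.
    return sorted(attributions.items(), key=lambda kv: kv[1], reverse=True)
```

A supervisor-facing view could then render the top-ranked features as a short justification (for example, “recommended mainly because it breaks a run of high-complexity bodies”), while aggregated attributions over logged incidents could address management’s need to understand production halts after the fact.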

4. Conclusion

Integrating XAI into smart production processes fundamentally shifts how organizations make and understand decisions, rather than merely enhancing existing systems. By systematically identifying and addressing the needs of stakeholders, organizations can foster trust in and acceptance of AI systems.

In conclusion, incorporating XAI into business process modeling is essential for organizations seeking to realize the full potential of AI. By anticipating stakeholder needs and integrating explainability from the outset, organizations can enhance trust, improve decision-making quality, and facilitate compliance, resulting in more effective and ethically sound AI-driven business processes.
