Interpretable AI in predictive maintenance

Predictive maintenance plays a central role in modern industry. In the HEAD project, the Fraunhofer Institute for Production Technology IPT and the Fraunhofer Institute for Algorithms and Scientific Computing SCAI are developing interpretable AI models that predict tool wear in cutting processes from acoustic emission. The objective is to increase both the robustness of the production process and the availability of the machines while maintaining consistently high production quality. An interpretable AI model also aims to increase engineers' trust in Machine Learning by making model decisions comprehensible.

© pressmaster - adobe.stock.com
Ideally, predictive maintenance increases the uptime and reliability of machines and tools. Thanks to interpretable and explainable AI, engineers no longer need to question the reliability or trustworthiness of the employed ML model.

Machine Learning continues to gain popularity in industry. A prominent field of application is predictive maintenance, in which Machine Learning (ML) models predict a system's health and, consequently, the optimal point in time for maintenance. Ideally, predictive maintenance increases the uptime and reliability of machines and tools in production environments. Often, however, engineers question the reliability or trustworthiness of the employed ML model itself, because the model is a black box to them: they have no insight into its inner workings or decision process.
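
As a minimal sketch of this framing, the following example trains a classifier to map sensor-derived features to discrete wear states. All data, feature dimensions and wear classes here are synthetic placeholders, not the HEAD setup:

```python
# Minimal sketch: predictive maintenance framed as supervised classification.
# All data, feature names and wear classes are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for preprocessed sensor features (e.g. acoustic emission band energies).
X = rng.normal(size=(1000, 8))
# Toy wear state driven by one feature: 0 = new, 1 = worn, 2 = critical.
y = np.digitize(X[:, 0], bins=[-0.5, 0.5])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```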

Interpretable and explainable AI: A look into the black box of algorithms

Interpretable and explainable AI is currently a highly active field of research in ML and Artificial Intelligence (AI) in general. Large efforts are being made to open the black box and shed light on a model's decision process.

In the HEAD project, Fraunhofer IPT and Fraunhofer SCAI developed a complete pipeline for an interpretable predictive maintenance solution, from data acquisition to the integration of an interpretability method. The latter helps the engineer understand the ML model's decisions, potentially gain new insight into the underlying physical phenomenon of tool wear and, crucially, make informed decisions with the support of an ML model.

For data acquisition and goal-oriented preprocessing, Fraunhofer IPT uses hardware-based data reduction for the high-frequency data together with a set of further preprocessing steps. This ensures data quality, and the datasets are additionally labeled with the real tool wear state, allowing an ML model to predict the tool wear. To analyze the prediction process and peek into the black-box model, Fraunhofer SCAI applies interpretability methods such as saliency maps for neural networks and various feature importance measures for random forests. To increase the robustness of the resulting importance values, we verify the agreement between different methods and perform sanity checks.
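
The sketch below illustrates one way such an agreement check and sanity check can be set up with standard tooling; the data, model and label construction are illustrative assumptions, not the HEAD implementation:

```python
# Sketch: compare two feature importance measures for a random forest and
# check their agreement; data and model are illustrative, not from HEAD.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy wear label driven by features 0 and 3

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Measure 1: impurity-based importances (built into the forest).
imp_impurity = model.feature_importances_

# Measure 2: permutation importances (model-agnostic).
imp_perm = permutation_importance(model, X, y, n_repeats=10,
                                  random_state=0).importances_mean

# Agreement between the two rankings; low correlation would be a warning sign.
rho, _ = spearmanr(imp_impurity, imp_perm)
print("rank agreement (Spearman):", round(rho, 2))

# Simple sanity check: importances fitted on shuffled labels should carry no signal.
null_model = RandomForestClassifier(n_estimators=200, random_state=0)
null_model.fit(X, rng.permutation(y))
print("importances under shuffled labels:", null_model.feature_importances_.round(3))
```

In this toy setup, both measures should rank features 0 and 3 highest, while the importances under shuffled labels flatten out; that is exactly the kind of behavior an agreement and sanity check looks for.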


Experience the ML model in our demonstrator

An interactive demonstrator lets you experience how the Machine Learning model makes its predictions. You can play against the AI and determine tool wear visually or audibly: who is better? You can also enable an AI-assisted mode that helps you make your decisions. The demonstrator is available here.

With HEAD, Fraunhofer IPT and SCAI present an ML-based approach for online classification of tool wear in an ultra-precision diamond turning process, while supporting the engineer with insights into the model's decision process. The key to this insight is a feature importance analysis, which highlights the features most relevant to the model's decision. Additionally, with the help of the obtained feature importances, users can be trained to distinguish between different classes of tool wear on their own. The described approaches increase the trust in and transparency of the applied ML model.
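
For the neural-network side mentioned above, a gradient-based saliency map is one common way to highlight the inputs most relevant to a decision. The minimal sketch below (architecture, input size and number of classes are assumptions) computes the gradient of the winning class score with respect to the input:

```python
# Sketch of a gradient-based saliency map: the gradient of the predicted
# class score w.r.t. the input highlights influential inputs.
# Network and dimensions are illustrative, not the HEAD model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
model.eval()

x = torch.randn(1, 128, requires_grad=True)  # stand-in for one feature vector

scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()              # gradient of the winning class score

saliency = x.grad.abs().squeeze()            # large values = influential inputs
print("most influential inputs:", saliency.topk(5).indices.tolist())
```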

Project info

This project is funded by the Fraunhofer CCIT Technology Hub Machine Learning.