XMANAI | Explainable Manufacturing Artificial Intelligence

Summary

Despite the indisputable benefits of AI, humans typically have little visibility into how AI systems reach decisions or predictions, due to the so-called 'black-box effect': many machine learning and deep learning algorithms cannot be examined after execution to understand specifically how and why a decision was made. The inner workings of machine learning and deep learning are not transparent, and as algorithms grow more complicated, fears of undetected bias, mistakes, and miscomprehensions creeping into decision making naturally grow among manufacturers and practically every other stakeholder.

In this context, Explainable AI (XAI) is an emerging field that aims to address how black-box decisions of AI systems are made, inspecting and attempting to understand the steps and models involved in decision making in order to increase human trust.
XMANAI aims to place the power of Explainable AI at the service of manufacturing and human progress, carving out a 'human-centric', trustful approach that is respectful of European values and principles and adopts the mentality that 'our AI is only as good as we are'.

XMANAI, demonstrated in 4 real-life manufacturing cases, will help the manufacturing value chain shift towards the amplifying-AI era by coupling (hybrid and graph) AI 'glass box' models that are explainable to a 'human-in-the-loop' and produce value-based explanations, with complex AI asset (data and model) management, sharing, and security technologies that multiply latent data value in a trusted manner, and with targeted manufacturing apps that solve concrete manufacturing problems with high impact.

More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/957362
https://ai4manufacturing.eu/
Start date: 01-11-2020
End date: 30-04-2024
Total budget: 5 998 902,00 Euro / Public funding: 5 998 902,00 Euro
Twitter: @xmanai_project
Cordis data


Status: SIGNED
Call topic: ICT-38-2020
Update Date: 27-10-2022
Structured mapping
Smart Manufacturing (STAND4EU)