The challenge of AI landing: “Interpretable” is not “Understandable”

Building interpretable AI is crucial for deploying advanced systems in practice, especially in healthcare. Many studies have proposed ways to improve AI interpretability, explaining why and how a machine makes its decisions. However, none of them offers a practical solution. Most focus only on the machine's side of interpretability while ignoring the other side of communication: understandability. An ideal solution for landing AI systems should allow machines and humans to interact and communicate efficiently and understandably.

For example, the paper "Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction," published in Nature Machine Intelligence (Shan et al., 2019), proposed bringing physicians into the machine's processing loop, handing part of the decision over the deep learning model's predictions to humans. There is no doubt about the contributions of this work. However, while physicians may be very interested in joining the decision loop at the beginning, most will later come to resent being involved in the machine's work, because no one wants the additional workload and decision risk. At this point, we should ask why physicians should be involved in the machine's processing loop at all.

The answer is crystal clear: we want humans to trust our AI. Thus, to build interpretable AI, we must first build trustworthy AI that is reliable and understandable, like a physician trained in practice for years. We allow a trained physician to operate in our clinics because that physician has earned the trust of the healthcare system. Trust is the key to landing AI and takes priority over interpretability. But trust is just the beginning. The next question is how we can build trustworthy AI.

To answer this question, we first need to think about why we trust trained physicians. You may say they are knowledgeable and professional. However, neither of these is the core of that trust. Instead, the core is the understandability of the communication between physicians, and between physicians and patients. This is not just a matter of technique; more importantly, it means getting both physicians and patients involved in communication with AI. Effective and understandable communication can build trust, whether among humans, among machines, or between the two.

Landing AI remains challenging, not only in terms of techniques but also in terms of policies and strategies. More details will follow soon.