
The challenge of AI landing: “Interpretable” is not “Understandable”

Building interpretable AI is crucial for putting advanced systems into practice, especially in healthcare. Many studies have been proposed to improve AI interpretability, that is, to explain why and how a machine makes its decisions. However, few of them amount to a practical solution. Most focus only on the machine's own interpretability and ignore the other side of communication: understandability. An ideal solution for landing AI systems should allow machines and humans to interact and communicate efficiently and understandably.

For example, the paper "Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction", published in Nature Machine Intelligence by Hongming Shan et al. (2019), proposed including physicians in the machine processing loop, handing part of the decision about the deep learning model's predictions back to humans. There is no doubt about the contributions of this work. However, while physicians may be very interested in joining the decision loop at the beginning, most of them will later resent being pulled into the machine's work, because no one wants additional workload and decision risk. At this point, we need to ask why physicians should be involved in the machine processing loop at all.
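To make the idea of keeping physicians in the loop concrete, here is a minimal sketch of a confidence-based deferral gate: the model decides high-confidence cases and hands the rest, together with its ranked options, to a physician. This is my own illustration with a hypothetical threshold and function names, not the actual mechanism of Shan et al. (2019).

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.85  # hypothetical cut-off; in practice it would be tuned clinically

def route_prediction(probabilities: np.ndarray, labels: list) -> dict:
    """Return the model's decision, or defer the case to a physician when uncertain.

    `probabilities` is the model's softmax output for one case. This gate is an
    illustrative assumption, not the pipeline described in the paper.
    """
    top = int(np.argmax(probabilities))
    confidence = float(probabilities[top])
    if confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: the machine decides on its own.
        return {"decision": labels[top], "by": "model", "confidence": confidence}
    # Low confidence: the case and the model's ranked options go to a human.
    ranked = [labels[i] for i in np.argsort(probabilities)[::-1]]
    return {"decision": None, "by": "physician", "options": ranked, "confidence": confidence}

# Example: the model is unsure here, so the physician receives the ranked options.
print(route_prediction(np.array([0.55, 0.40, 0.05]), ["benign", "malignant", "artifact"]))
```

A gate like this also makes the workload concern explicit: the threshold directly controls how many cases physicians are asked to review, and therefore how much extra work and decision risk the loop puts on them.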

The answer is crystal clear: we want humans to trust our AI. Thus, to build interpretable AI, we first have to build trustworthy AI that is as reliable and understandable as a physician trained in practice for years. We allow a trained physician to operate in our clinics because that physician has earned the trust of the human healthcare system. Trust is the key to landing AI, and it comes before interpretability. However, trust is just the beginning. The next question is how we can build trustworthy AI.

To answer this question, we first need to think about why we trust trained physicians. You may say they are knowledgeable and professional. However, neither point is the core of that trust. Instead, the core is being understandable, a quality that lies in the communication between physicians and other physicians, and between physicians and patients. The term refers not just to techniques but, more importantly, to getting both physicians and patients involved in communicating with AI. Effective and understandable communication can build trust, whether among humans, among machines, or between the two.

Landing AI remains challenging, not only in terms of techniques but also of policies and strategies. More details will follow soon.

Recurrent feedback helps deep neural networks and our brains better identify objects (from the DiCarlo lab)

“The DiCarlo lab finds that a recurrent architecture helps both artificial intelligence and our brains to better identify objects.” (MIT News)

From an engineering perspective, we should understand why the brain needs recurrent architectures, when we need them, and how we can operationalize this procedure into our deep neural networks.
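As a rough illustration of what "operationalizing" recurrence can mean, the sketch below (PyTorch, my own toy example, not CORnet or the specific circuit identified in the paper) adds a feedback convolution to a feedforward block and unrolls it over a few time steps, so later passes can refine the representation produced by earlier ones.

```python
import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    """A convolutional block whose output is fed back into its own input.

    Toy illustration of feedback recurrence in a vision model; the channel
    count and number of time steps are arbitrary assumptions.
    """
    def __init__(self, channels: int = 64, steps: int = 4):
        super().__init__()
        self.steps = steps
        self.feedforward = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.feedback = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        state = torch.zeros_like(x)          # recurrent state starts empty
        for _ in range(self.steps):          # unroll the feedback loop in time
            # Each step sees the stimulus plus feedback from the previous step.
            state = self.act(self.norm(self.feedforward(x) + self.feedback(state)))
        return state

# Example: 8 images' worth of 64-channel, 32x32 feature maps, refined over 4 steps.
block = RecurrentConvBlock(channels=64, steps=4)
features = block(torch.randn(8, 64, 32, 32))
print(features.shape)  # torch.Size([8, 64, 32, 32])
```

Unrolling for a fixed number of steps keeps such a model trainable with ordinary backpropagation, and the number of steps plays the role of the extra processing time that the brain appears to spend on harder images.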

This work has definitely taken the first fundamental step toward these goals. However, as I mentioned, we still need a deeper understanding of this research, such as the precise procedures involved and how many neurons take part.

Object recognition in our brains does not work in isolation; it is linked with high-level cognition such as emotion and memory, which very likely cooperate with the visual cortex during recognition. Thus, to overcome the challenges of object recognition in artificial intelligence, we still have quite a lot of work to do.

Source: http://news.mit.edu/2019/improved-deep-neural-network-vision-systems-just-provide-feedback-loops-0429

Paper: Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior. Authors: Kar, K.; Kubilius, J.; Schmidt, K.; Issa, E. B.; DiCarlo, J. J. Nature Neuroscience.