AI has many applications in healthcare. Various AI-powered medical solutions can save doctors’ time on repetitive tasks, allowing them to focus on patient-facing care. Algorithms are also good at diagnosing many health conditions because they can be trained to spot subtle details that escape the human eye. However, when doctors cannot explain an outcome, they are hesitant to use the technology and act on its recommendations.
One example comes from Duke University Hospital, where a team of researchers deployed a machine learning application called Sepsis Watch that sent an alert when a patient was at risk of developing sepsis. The researchers discovered that doctors were skeptical of the algorithm and reluctant to act on its warnings because they did not understand how it arrived at them.
This lack of trust extends to patients, who can be hesitant to be examined by AI. Harvard Business Review described a study in which participants were invited to take a free assessment of their stress level. 40% of the participants registered for the test when they knew a human doctor would perform the evaluation; only 26% signed up when an algorithm would perform the diagnosis.
When it comes to diagnosis and treatment, the decisions made can be life-changing, so it is no surprise that doctors demand transparency. With explainable AI, this is becoming a reality. For example, Keith Collins, CIO of SAS, mentioned that his company is already developing such technology: “We’re presently working on a case where physicians use AI analytics to help detect cancerous lesions more accurately. The technology acts as the physician’s ‘virtual assistant,’ and it explains how each variable in an MRI image, for example, contributes to the technology identifying suspicious areas as probable for cancer while other suspicious areas are not.”
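For intuition, the kind of per-variable explanation Collins describes can be illustrated with a minimal Python sketch of feature attribution for a linear risk model. The variable names, weights, and baseline below are hypothetical illustrations, not the actual SAS system:

```python
# Minimal sketch of per-feature attribution for a linear model.
# All variable names and weights are hypothetical, for illustration only.

def explain_prediction(weights, baseline, features):
    """Return the overall risk score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = baseline + sum(contributions.values())
    return score, contributions

# Hypothetical MRI-derived variables and learned weights
weights = {"lesion_size_mm": 0.08, "edge_irregularity": 0.5, "contrast_uptake": 0.3}
features = {"lesion_size_mm": 12.0, "edge_irregularity": 0.7, "contrast_uptake": 0.4}

score, contributions = explain_prediction(weights, baseline=-1.0, features=features)

# Each entry shows how much one variable pushed the score up or down,
# analogous to the per-variable explanations described above.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"total score: {score:+.2f}")
```

A doctor reading such a breakdown can see which variables drove the prediction, which is exactly the kind of transparency that builds trust in an alert.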