As reported by AIMed earlier, the ongoing Covid-19 pandemic is accelerating technology adoption. Many medical institutions are opening up to new technologies such as artificial intelligence (AI) for the very first time, to help them manage the immense number of patients and shoulder the pressure the coronavirus has placed on healthcare workers. At the same time, some companies and research groups are putting their own projects aside and diverting resources to develop tools to combat the global health crisis, hoping they will yield lucrative returns.

Tools that were produced in haste

Either way, concerns that normally hold back AI deployment, such as data privacy, bias, and generalizability, seem to have been lifted overnight because everyone is forced to change. For example, Rizwan Malik, the lead radiologist at the Royal Bolton Hospital in the UK, noticed that some of his patients had to wait hours for a specialist to examine their x-rays. With an AI-based tool, that waiting time could fall dramatically. Malik initially planned to test qXR, an AI-driven chest x-ray system developed by the Mumbai-based company Qure.ai, but before he could begin, Covid-19 hit the country.

Early studies revealed that some severe Covid-19 cases exhibit distinctive lung abnormalities similar to those of viral pneumonia. Since medical institutions could not take in every individual with Covid-19 symptoms for a PCR test, examining chest x-rays became a triage shortcut. Malik saw the opportunity and proposed retooling qXR into a Covid-19 detection tool, with AI assisting in the initial reading of chest x-rays. Such a process would usually take months because the hospital needs to inspect the tool and its design in great detail, but not this time. Malik was given immediate approval to deploy the tool at the Royal Bolton.

Challenges that come with AI don’t go away

Nevertheless, medicine and healthcare are generally rigid fields with many factors that do not permit change easily. To retool qXR, the company had to work with experts to gather, from the existing literature, all the characteristics attributed to Covid-induced pneumonia. It then encoded what it had learned into the system before performing a validation study on more than 11,000 patients' images. This is not something every company can achieve within such a short and high-pressure period.

As Malik highlighted to MIT Technology Review, many AI companies approached him early in the Covid-19 pandemic. Of the 60 he corresponded with, he found that most of these so-called AI-driven coronavirus screening tools were nonsensical. “They were trying to capitalize on the panic and anxiety,” Malik said. The most concerning part is that, in a time of crisis, some hospitals may not have the energy or resources to validate these AI solutions.

Although we are witnessing wider use of AI now, the challenges that come with it have not disappeared. At the end of the day, these tools will only be kept if they prove to be safe and do not compromise patients' health and rights. Otherwise, those deploying them are likely doing a disservice.

*

Author Bio

Hazel Tang is a science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.