
When NHSx, the digital unit of the UK National Health Service (NHS), launched its in-house COVID-19 contact tracing application in England and Wales this September, suggestions were floated (including a free lottery ticket for every download and prime-time TV tutorials teaching people when to turn the app on and off) to address the “soft stuff” (i.e., human factors) that would subtly drive mass public buy-in. Indeed, sometimes something as small as not knowing how to turn a device on is enough to stop people from using a technology entirely, even one that has proven to be efficient and safe.
Intuitive and automatic
In the case of radiology, Ben Panter, Founder and Chief Executive Officer of Blackford Analysis, believes that to draw real value from artificial intelligence (AI) driven tools, integration is just as critical as creation. Speaking at the recent AIMed Radiology virtual conference, he cited the example of a head CT scan: when one comes in, the algorithm ought to automatically inform fellow clinicians what it is (i.e., a trauma, a suspected stroke, etc.) and why it has been taken.
Radiologists may be hesitant to use an AI-driven tool if it is not intuitive, requires them to switch between user interfaces, or takes longer than doing the work manually. More importantly, radiologists need to be able to get the findings into the report and cross-validate them when needed. Rather than asking, “I have an AI algorithm, what can we do with it?”, Panter advised, it is better to start from the challenges clinicians face at work and ask how AI can mitigate them.
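To make that concrete, here is a minimal sketch of what “getting the findings into the report” might look like in code, assuming a hypothetical finding structure and report text; the labels, field names and wording are illustrative, not any vendor’s actual output.

```python
# A minimal sketch: append clearly labelled, unsigned AI findings to the
# existing report draft so the radiologist can cross-validate them without
# switching to a separate interface. All names and text are hypothetical.
from dataclasses import dataclass


@dataclass
class AIFinding:
    description: str   # e.g. "Acute intracranial hemorrhage suspected"
    confidence: float  # model probability between 0 and 1


def append_preliminary_findings(report_draft: str, findings: list[AIFinding]) -> str:
    """Return the draft with an AI section flagged as pending radiologist review."""
    lines = ["", "PRELIMINARY AI FINDINGS (pending radiologist review):"]
    for f in findings:
        lines.append(f"- {f.description} (model confidence {f.confidence:.0%})")
    return report_draft + "\n".join(lines)


draft = "CT HEAD WITHOUT CONTRAST\nIndication: fall, on anticoagulation."
print(append_preliminary_findings(draft, [AIFinding("Acute subdural hematoma suspected", 0.91)]))
```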
Dr. Yvonne Lui, Associate Professor and Associate Chair of AI at New York University Grossman School of Medicine and NYU Langone Health, added that these algorithms should not be standalone features; they should also communicate with the machines around them. While it is impressive to see an AI arrive at a conclusion in a matter of seconds, the pre- and post-processing time often ends up slowing things down.
Streamlined complexity
On top of situational awareness, Dr. Woojin Kim, Musculoskeletal Radiologist and Imaging Informaticist at Palo Alto VA Hospital, thought there should be a filter deciding when clinicians can hand over parts of their responsibilities to an AI. Panter said that boils down to having the right protocols. These protocols need to assimilate different modalities and work alongside the PACS (Picture Archiving and Communication System), rather than being, in the words of Imon Banerjee, Assistant Professor of Biomedical Informatics at the Emory University School of Medicine, a stack of multiple AI models, which increases complexity and time.
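One way to picture such a filter is a simple routing step that reads the study’s metadata and dispatches it to a single relevant algorithm, rather than running every model on every study. The sketch below, assuming hypothetical model names and a small hand-written rule table, uses the open-source pydicom library to read standard DICOM header fields; it is an illustration of the idea, not the protocol any of the speakers described.

```python
# A minimal sketch of protocol-based routing alongside the PACS.
# Model names, the rule table and the file path are hypothetical.
import pydicom

# Map (modality, body part) pairs to the single most relevant algorithm.
ROUTING_RULES = {
    ("CT", "HEAD"): "detect_hemorrhage",
    ("CT", "CHEST"): "detect_pulmonary_embolism",
    ("MR", "KNEE"): "grade_cartilage",
}


def route_study(dicom_path: str):
    """Pick an algorithm for an incoming study from its DICOM header, or None."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    modality = str(getattr(ds, "Modality", "")).upper()
    body_part = str(getattr(ds, "BodyPartExamined", "")).upper()
    return ROUTING_RULES.get((modality, body_part))


if __name__ == "__main__":
    algorithm = route_study("incoming/study0001.dcm")
    print(algorithm or "No matching algorithm; study goes straight to the PACS worklist.")
```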
Nevertheless, Dr. Lui noted the need to integrate human expertise too. She observed that at her institution, meetings were required just to put up a simple workflow model. “It had nothing to do with diagnosis but was one that identifies patient no-shows, and the number of people involved – frontline staff, electronic health record (EHR) providers, lead MRI technologists and so on – was astonishing,” she says.
On the other hand, Dr. Neelan Das, Consultant Radiologist and AI Lead at East Kent Hospitals University NHS Foundation Trust, believes it can be challenging for healthcare systems that have little experience with AI, and are not keen to develop in-house tools, to start negotiating with vendors over the most appropriate AI tools for them. As such, regular meetings, particularly those concerning discrepancies, become a necessity to keep vendors informed about how they should improve their tools.
Haris Shuaib, AI Transformation Lead at the London Medical Imaging and AI Centre for Value Based Healthcare and Topol Digital Fellow at Guy’s & St. Thomas’ NHS Foundation Trust, said he and his team had created formal structures for deploying and evaluating AI tools. They are now partnering with NHSx to produce a buyer’s guide.
Sustainability
The guide will cover five main areas of AI deployment: technical integration (does the tool work within the healthcare system?); clinical integration (does the tool fit the way the healthcare system delivers care?); staff training (are staff prepared to use it, or is it producing data points they are not familiar with?); effectiveness (does the tool benefit the system and patients?); and sustainability (once the trial or research phase is over, does the tool still work as intended, or does it need continuous assessment?).
From a commercial perspective, Panter believes in the importance of generalizability and sustainability. He said an AI tool has to work on the patient populations it is targeting and be trained on controlled, diverse and validated datasets. Likewise, Banerjee added that patients may move from one healthcare system to another, so an AI model needs to account for these patients as well, not just those who remain in the pool. She warned against clinicians putting too much emphasis on the overall statistical performance of a model rather than the potential impact it has on a single patient.
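A simple way to surface that gap between overall statistics and local reality is to stratify validation results by site rather than reporting a single pooled figure. The sketch below assumes predictions have already been collected in a hypothetical CSV with columns “site”, “label” and “score”; the file name and column names are illustrative.

```python
# A minimal sketch of a site-stratified generalizability check.
# The CSV path and column names are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("external_validation.csv")

# A pooled AUC can hide poor performance at individual sites,
# so report per-site figures alongside the overall one.
print(f"Pooled AUC: {roc_auc_score(df['label'], df['score']):.3f}")

for site, group in df.groupby("site"):
    if group["label"].nunique() < 2:
        continue  # AUC is undefined when a site has only one outcome class
    site_auc = roc_auc_score(group["label"], group["score"])
    print(f"{site}: AUC {site_auc:.3f} (n={len(group)})")
```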
Overall, Dr. Das thought AI sustainability is tricky because a robust mechanism for reporting discrepancies is needed, but right now this is only being done on an ad-hoc basis and a proper structure has yet to materialize. Besides, if an ongoing audit of these algorithms is really required, how much cost saving or clinical benefit can they actually bring to the system, and who is going to cover the additional bill? None of this can be overlooked.
*
Author Bio
Hazel Tang A science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.