
I am a pediatric cardiologist and have cared for children with heart disease for the past three decades. In addition, I have an educational background in business, finance, healthcare administration, and global health; I earned a Master's degree in Public Health from UCLA and taught Global Health there after completing the program.
“Perfect is the enemy of good.”
Voltaire, French writer and philosopher
The recent expansion of artificial intelligence in clinical research demands greater transparency and reproducibility, just as traditional evidence-based care does. This review from the UK proposes a framework for studies evaluating AI interventions in healthcare, addressing the inadequate reporting of AI clinical studies documented in prior reports (fewer than 1% were deemed robust in design and reporting). In addition, very few studies performed external validation of their models, and studies rarely used multi-center or prospective data collection.
Because of these deficiencies and concerns, research guidelines adapted for AI were built upon the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network, as well as efforts by individual societies. The EQUATOR network published the landmark Consolidated Standards of Reporting Trials (CONSORT) statement, along with guidelines for other study designs, including those with patient-reported outcomes (PROs).
Studies using AI as an intervention have led to EQUATOR-related extensions that combine the robust methodology of the original EQUATOR guidelines with AI-specific elements. These guidelines matter both for journals and their editors and for institutional research teams. In addition, reporting guidelines for common study types used in radiological research can be adapted into AI-specific extensions (e.g., CONSORT-AI for randomized controlled trials). The Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines may also gain an AI extension in the near future. Other non-EQUATOR guidelines that address AI include CLAIM from the Radiological Society of North America and MINIMAR from the American Medical Informatics Association.
As the authors correctly point out in the conclusion of the manuscript, there is a dire need for researchers and reviewers with dual expertise in the technical aspects of AI and the principles of sound trial methodology. Overall, however, the guidelines are excessively detailed, and very few studies will fulfill every element listed.