I am a pediatric cardiologist and have cared for children with heart disease for the past three decades. In addition, I have an educational background in business and finance as well as healthcare administration and global health – I earned a Master's degree in Public Health from UCLA and taught Global Health there after completing the program.
Last week, we introduced the nascent domain at the intersection of three areas: ethics, artificial intelligence, and healthcare. This week, we continue with perhaps one of its most controversial aspects.
Artificial intelligence in medicine, especially in its autonomous form (as seen in fundus photograph diagnosis of diabetic retinopathy), can create very challenging new ethical dilemmas that incorporate the traditional ethical elements: fairness, beneficence, autonomy, justice, safety, and trust. These familiar elements of ethics sometimes take on an entirely new nuance when AI is added as a third dimension. For now, clinicians will need to hold veto power over AI-enabled automated decisions in clinical settings, so that they can avert decisions that would be absurd or unfair for a patient or harmful for a population. This veto privilege is in alignment with the aforementioned Laws of Robotics. In addition, because the healthcare data and databases utilized in AI work can harbor bias, this dimension needs particularly close scrutiny, especially with autonomous AI in clinical medicine and healthcare.
Other interesting issues in this nascent realm of ethics, artificial intelligence, and healthcare include:
- Libertarian paternalism – the concept of influencing behavior and decisions while still respecting freedom of choice can be leveraged by AI, creating an awkward situation for both the clinician and the patient with respect to their autonomies
- Distributive justice – this social principle, which holds that rewards and burdens should be shared fairly among members of society, becomes difficult to uphold when there is pre-existing bias in the healthcare data studied by AI
- Black box perception – because some AI tools lack sufficient explainability, and even interpretability, their output can become conflated with the clinical decision itself, creating confusion in the process of determining ethical standards for AI in healthcare
- Learned intermediary – a recognized common-law defense doctrine invoked by defendants in prescription drug and medical device cases, in which responsibility and accountability are transferred from the manufacturer to the clinician utilizing the drug or device
As the ethics of AI in healthcare is a largely uncharted domain with few precedent cases, it behooves us to convene a multidisciplinary group of ethicists, legal and policy experts, clinicians, AI experts, patient advocates, and others. At the foundation of these three intersecting domains lies, quite simply, the balance of benefit and risk to individual patients and to the population as a whole.
This fascinating topic of the ethics of AI in healthcare, along with others, will be discussed at the annual AIMed Global Summit 2023, scheduled for June 5-7, 2023 in San Diego.
Hope to see you there, if not before!
We at AIMed believe in changing healthcare one connection at a time. If you are interested in discussing the contents of this article or connecting, please drop me a line – [email protected]