Engineers and physicians model systems and solve problems differently. It takes years of study and practical experience to learn the ways each field views and navigates the world, and how these differ from the way untrained, inexperienced people think about problems and solutions.

Doctors know how they think about patients, diagnosis, and treatment in practice, and can remember how they thought about ill people before their training. A doctor knows the common errors in reasoning lay people make about disease and treatment, because patients are forever coming in with exotic self-diagnoses and treatments from the internet. We have centuries of experience with quackery and with discarded theories of humors and chi.

Engineers know the analogous things about their own field. Top software engineers tell stories of people with amazing business ideas who just need a nerd to build them. “I have an idea for a company that can predict from a script how much money the movie would make!” “Cool, how does it work?” “Uh, artificial intelligence? I dunno; you’re the nerd. Figure it out!”

Neither doc nor nerd necessarily understands how their own field’s way of thinking differs from the other’s. The doctor thinks of the engineer as a smart person who can make stuff but is as hopeless in their understanding of medicine as the gardener is. The engineer figures medicine must work pretty much like engineering, i.e., applied science: systematic measurement, testing, and debugging.

This is why every first-time medical software entrepreneur wannabe makes a bee-line for a diagnosis app. ‘You doctors collect the data; my miracle program will find the diagnosis!’

I’ve exhausted myself over the years explaining to a succession of starry-eyed kids why this can’t be done and isn’t really needed.

They don’t understand the sequential nature of medical data collection: a skilled clinician takes a history and narrows the differential diagnosis simultaneously, and therefore decides dynamically which data to collect next.

They don’t understand the outsized importance of the history relative to the physical and even relative to lab and radiology tests.

They don’t understand how difficult it is to take a good history; to tell the difference between a patient who is truly reporting significant abdominal pain and one who just wants to be very careful not to leave out the minor discomfort he may have felt last week…”just to be thorough”.

They don’t understand the non-specificity of most symptoms, even in combination, and how and why we must rule out common diagnoses before considering rare ones.

They don’t understand the skepticism we develop about screening tests. It is not intuitive to any untrained human how false positives can overwhelm true positives and how minor probabilities of adverse events can add up over large populations.
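The arithmetic behind that skepticism is easy to sketch. As a purely illustrative example (the prevalence, sensitivity, and specificity below are hypothetical numbers, not taken from any real screening program), consider how few positive results actually mean disease when the condition is rare:

```python
# Hypothetical screening test: how false positives swamp true positives.
population = 1_000_000
prevalence = 1 / 1_000        # 1 in 1,000 people have the disease
sensitivity = 0.99            # P(test positive | disease)
specificity = 0.95            # P(test negative | no disease)

sick = population * prevalence
healthy = population - sick

true_positives = sick * sensitivity            # 990 real cases caught
false_positives = healthy * (1 - specificity)  # 49,950 healthy people flagged

# Positive predictive value: the chance a positive result means disease.
ppv = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:,.0f}")
print(f"False positives: {false_positives:,.0f}")
print(f"PPV: {ppv:.1%}")  # roughly 2%: ~50 false alarms per real case
```

Even with a test this accurate, a positive result is far more likely to be a false alarm than a real case, and every one of those false alarms carries its own small probability of harm from follow-up workup.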

Years ago, during pathology residency, I learned renal cell carcinoma is surgically curable if caught early, and many (most) early catches are incidental findings on CT or MRI. Why not just do a screening exam every X years? I immediately assumed someone must have run the numbers, found it wasn’t worthwhile, and published the results in a medical journal. If that were not the case, surely we would be screening everyone for renal cell carcinoma. I didn’t bother to do a literature search.

That assumption was, and probably still is, correct. But the dismissive attitude behind it was and is a big problem, and a very common one among medical professionals.

With experience we become cynical. We transition from patiently explaining to newbies why something isn’t as easy as it seems, to assuming and explaining that everything is impossible. Everything! Impossible!

That attitude will give you the correct answer in almost all cases. Which is why it is seductive. But people don’t impress me by calling bullshit all the time. That particular stopped clock is right a thousand times a day. The impressive thing, and a required characteristic of an innovator, is to know when to stop calling bullshit and call bull…maybe…hmmm…

Perhaps we are in a period of slow transition from the old medical paradigm to a pure engineering paradigm. We currently rely on the existing medical paradigm for diagnosis and treatment selection, but over time more and more of that will be replaced by solid engineering reasoning.

Many AI enthusiasts implicitly assume that once we have enough sensors, blueprints, and service manuals, maintaining, fixing, and improving humans will be the same as for any engineered system.

It won’t. Humans and software/machines have irreducible differences which will require a new paradigm.

One difference between engineering and medicine is we create engineered systems and so we understand how they work. I mean we deeply understand, we “grok” how machines and software work.

For those who are not programmers, I recommend researching the origin of the term “grokking” and its use in computer science and engineering.

No other word quite captures it, and you don’t really get it until you do it. If you end up grokking an aspect of software engineering and then study biology and medicine, you will conclude we are not even close to grokking most of how human biology and medicine “work”.

If you can’t grok something, you can’t expect to use the engineering paradigms, at least not completely. Since we did not create the human body, most of it remains a black box.

My cousin-in-law’s father worked for Chrysler and in the 1960s he was able to purchase from them the parts of a competitor’s car. It had been completely disassembled as part of some kind of car autopsy but the parts were undamaged.

He had no assembly manual, but over a couple of years of weekends and after work sessions, he was able to reassemble them into a functioning car. He understood, to the most granular required level, how a car worked.

I don’t mean to imply our inability to reassemble a human from parts is due entirely to our lack of understanding. It isn’t possible to Frankenstein a functioning human together from a box full of parts because (among other things) all the proteins have irreversibly glopped together and will no longer function. We don’t see the same thing with the molecules and atoms of plastics and steel in the car parts.

There is no machine or software system that is the same order of magnitude of complexity as a human body. It’s not that humans are complex systems while software systems are simple. Chaos theory applies to both.

Software systems in particular are very complex these days. Software programs are cobbled together from custom code and libraries and examples from tutorials and have complex interactions with local operating systems and interconnected networks and unpredictable users. They are created, changed, debugged and hacked by different people with different coding styles.

The event-driven nature, the constantly changing underlying data, and the ever-updating versions of libraries and standards make modern software pretty gosh darn chaotic. Yet, painful though it can be, we are confident we can fix most any bug.

Why? Because we can look at the code, follow it most anywhere it goes (though with multithreaded programming and distributed systems, this can get hairy), try code changes, recompile, and try to reproduce the bug. We can poke and prod the code and rebuild it from scratch as many times as we like. We might complain, but the code will not.

You can’t debug a human this way. You can’t comment out the circulatory system and see what happens. You can’t remove a patient’s liver and swap in a gibbon kidney and rerun your unit tests. You can’t check the most recent functioning state of a patient out of version control. Everything you do to evaluate the patient, an MRI, a trial of a medication, exploratory surgery, changes the patient and those changes are likely, in part, damaging.

Much of our training in medical school centers on the consequences of not being able to debug patients. Before we can make use of all the facts we memorize, we need to learn to stop thinking like scientists or engineers. We need to learn the indirect ways of coming to answers that are necessitated by systems as complex, fragile, and litigious as the human body.

Even though software systems are complex, it is possible to simplify them temporarily as part of the debugging process. You can isolate a bit of code and test it with artificial inputs of your own construction. But a human patient is always a complete complex system.
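That kind of temporary simplification looks something like this. A minimal, hypothetical sketch: the `dose_for_weight` function, its rules, and its numbers are all invented for illustration, but the technique — isolate one pure piece of logic and feed it artificial inputs of your own construction — is exactly what the paragraph above describes:

```python
# Isolating one piece of logic and testing it with artificial inputs.
# (Hypothetical example: the function and its dosing rules are invented.)

def dose_for_weight(weight_kg: float, mg_per_kg: float, max_mg: float) -> float:
    """Pure function: trivially testable in isolation, no live system attached."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return min(weight_kg * mg_per_kg, max_mg)

# Artificial inputs we constructed ourselves -- run as often as we like.
assert dose_for_weight(70, 10, 1000) == 700    # under the cap
assert dose_for_weight(120, 10, 1000) == 1000  # cap applies
try:
    dose_for_weight(0, 10, 1000)
except ValueError:
    pass  # invalid input rejected, as expected
```

You can change the code, rerun the tests, and repeat indefinitely; no comparable isolation exists for a live patient, who arrives as a complete, inseparable complex system.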

Additionally, one of the main goals of medicine is longevity, and a standard debugging process can’t optimize for that. There is no accelerated test for longevity. There is only ‘wait 80 years and see how long the patient lived’. Unfortunately, by that time you can no longer debug the patient, because he, and let’s face it, probably you, are dead. You can’t get around this by optimizing an identical human for this parameter as you might during the development phase of a software or machine project.

This unresolvable fact means we cannot move from the current medical paradigm to the current engineering paradigm. Ergo (yes ergo!) we need a new paradigm.

I recognize we will initially work within an uncomfortable hybrid of the two. We will mostly work in the medical paradigm, and occasionally, for certain situations, there will be an awkward handoff to the engineering paradigm and then a return of data to the medical paradigm.

This will be useful both for learning and for patient care. It should not, cannot, be the end goal.

This article originally appeared as ‘Thinking like an engineer and thinking like a physician’ in AIMed Magazine issue 03.

By Erik Lickerman M.D.

eriklickerman.com

Erik studied biology as an undergraduate at M.I.T. Then medicine at the University of Illinois. After four years of anatomic and clinical pathology residency, he followed his lifelong passion for computer science by completing a fellowship in pathology informatics under Mike Becich at the University of Pittsburgh. 

Rather than practice pathology and do informatics as a side pursuit, he went into commercial medical informatics, working for a succession of companies from “tiny startup” to Fortune 500. From the beginning, his goal has been to transform the medical record from a handwritten notebook full of short stories we write about the patient into a set of discrete, well-modeled data with proper standard terminology, amenable to search, query, analysis, and machine learning. He remains disappointed that we are only a quarter of the way there.

Erik considers himself a programmer and engineer first and a physician second. His focus is data modeling, programming and leading teams to create practical medical informatics products. He currently works at Varian Medical Systems and focuses on oncology informatics and the incorporation of clinical data and genomic data into analytics.