I will make the case for a comprehensive consultation conducted by an AI-generated physician as a first layer of care. This is a speculative exploration of the impact that the development of Artificial General Intelligence (AGI) might have on healthcare.

This concept entails an AI entity that can interact with patients as seamlessly as a real doctor in telemedicine consultations. To elaborate, imagine an entity on the screen that not only talks but also exhibits movements and behaviors so lifelike that it becomes virtually indistinguishable from an actual human being.

For the past few months, I’ve engaged in intense discussions with colleagues about AI’s role and its potential to mitigate biases. These debates have often been met with an understandable mix of skepticism and resistance. “Humans can never be understood by binary machines,” a psychiatrist friend remarked. I followed up by asking, “Could we emulate human behavior using complex mathematical equations?” His response was a tentative “maybe”.

At the time of his first statement, he was not aware that the binary system is merely a different way of representing numbers; it operates on the same principles of mathematics and logic. Whether written in base ten or base two, the equations remain identical. I could go one step further with a provocation: neurons either fire or they don’t, 1 or 0.
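To make that concrete, here is a trivial Python sketch, purely my own illustration, showing that arithmetic is indifferent to the base in which numbers are written:

```python
# The same values, written in two notations:
a = 0b1010   # binary literal for ten
b = 0b0101   # binary literal for five

# Notation changes how a number is written, never what it is,
# so the arithmetic comes out identical in either base.
assert a + b == 10 + 5 == 15
print(bin(a + b))  # '0b1111': fifteen, written in base two
```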

Let’s examine a particularly sensitive scenario where the application of AI could prove advantageous.

The fictional setting

  • A bustling general ER in a teaching hospital.

  • Dr. Chuck and Dr. Travis, in their second and third year of residency, respectively.

  • They’re in the final 4 hours of a grueling 24-hour shift, sleep-deprived with less than 3 hours of rest. Tensions have been high between them for months, each harboring deep mistrust towards the other.

  • For those unfamiliar with medical training, the more experienced doctor usually takes the lead.

  • They have a senior doctor as their boss, an experienced Neurologist, available for complex dilemmas.

The art

There are certain warning signs in headaches, such as abrupt onset. This type of pain, which the patient can pinpoint to an exact moment of onset and which escalates quickly, is well known to both Chuck and Travis.

Yet, extracting this information from a patient is a complex task, embodying the very art of Medicine. The way individuals articulate their symptoms is significantly shaped by their cultural and personal backgrounds, further complicated when they are experiencing severe pain. This nuanced skill, cultivated by physicians over years of practice, hinges on adaptability and inherently involves a certain level of subjectivity.

AGI represents the only conceivable approach through which an artificial system could effectively navigate and manage this challenge.

The discussion

Chuck, the junior resident, evaluated a patient and firmly believed it was a clear-cut case of a sudden headache. He brought this up with Travis, who disagreed. “The patient’s neurological exam is clear, he’s improved with pain relief, and it’s unlikely his headache was abrupt. You’re green and misread his complaint. You’re again pushing for unnecessary tests, risking patient exposure to radiation and renal issues from the contrast. Besides, we’re swamped, and the CT scanner malfunctioned last week. We can’t afford to overuse it on cases that clearly don’t necessitate it, as it’s operating at its limit. Just discharge him.”

Chuck wanted to argue, “The patient explicitly mentioned a sudden onset of pain. Normal exam results and response to medication don’t rule out conditions like subarachnoid hemorrhage or venous thrombosis. True, such cases can be benign, but there’s a life-threatening risk we can’t ignore. The potential dangers of not diagnosing are far more significant than the risks you’ve mentioned. And this is an obvious situation in which we should employ the resources we have at our disposal.”

But that wasn’t Chuck’s actual response. Frustrated and exhausted, he curtly suggested, “Let’s consult the boss then.” Travis, annoyed and dismissive, retorted, “We’re not troubling him with something so evident, and you’re not undermining my authority.”

Instead of “It is absolutely not a question of undermining your authority, but he’s more experienced than both of us and would gladly help. He’s paid to do it, and in the process we could learn something new”, there was just silence.

The patient was discharged, suffering only from a migraine, with no adverse outcomes. Yet, it wasn’t the correct decision. Had Chuck articulated his reasoning, this mistake might have been averted. The primary point of contention centered around whether the patient experienced a sudden onset of headache.

Ethics

A physician’s duty is to make informed decisions based on current knowledge, not to diagnose accurately every time. Often, a diagnosis remains elusive and becomes clear only after observing the patient’s evolution; sometimes, it may never be definitively reached.

When symptoms are ambiguous yet elicited through precise questioning, and the risks of investigation are minimal compared with the dangers of a missed diagnosis, it is imperative to proceed with tests. This example illustrates that point. Travis’ judgment was clouded, while Chuck struggled to articulate his perspective. Both bear responsibility. Emotional detachment might have led to consensus and the correct course of action.
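To see the asymmetry in numbers, here is a back-of-the-envelope sketch with purely illustrative values of my own; they are assumptions, not clinical estimates:

```python
# Hypothetical, illustrative numbers only; not clinical data.
p_dangerous_cause = 0.05   # assumed chance the headache hides a hemorrhage
harm_if_missed    = 100.0  # assumed harm of discharging a real hemorrhage
harm_of_ct_scan   = 1.0    # assumed harm (radiation, contrast) of one CT

expected_harm_discharge = p_dangerous_cause * harm_if_missed  # 5.0
expected_harm_scan      = harm_of_ct_scan                     # 1.0

# Even a modest probability of a catastrophic miss outweighs
# the small, certain cost of investigating.
assert expected_harm_scan < expected_harm_discharge
```

The particular numbers are beside the point; what matters is that a small, certain cost is weighed against a rare but catastrophic one.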

Consider a novice doctor in a rural setting, without access to senior advice, deliberating whether to send a patient on a 400 km journey for a CT scan. The stakes are even higher, as is the potential for misjudgment.

I’ve seen variations of these scenarios countless times. Countless.

Pragmatism

AI doesn’t need rest, doesn’t get overwhelmed, and has all the time in the world. While it performs the first triage of patients, doctors can rest, study, devote more time to consultations, and care for the critical patients in the emergency room. It is a win-win situation. We’re confronted with numerous challenges, but with judicious use rather than blind reliance, we can successfully navigate them.

Just imagine:

“Based on the patterns observed and on 5 million similar consultations, there is an 83% probability that the patient has indeed suffered a sudden headache. The American College of Emergency Physicians guidelines recommend proceeding with investigation (Level B)”.

“Explain further”.

“Use the Ottawa Subarachnoid Hemorrhage Rule (age >40 years, complaint of neck pain or stiffness, witnessed loss of consciousness, onset with exertion, thunderclap headache, and limited neck flexion on examination) as a decision rule that has high sensitivity to rule out subarachnoid hemorrhage, but low specificity to rule in subarachnoid hemorrhage, for patients presenting to the emergency department with a normal neurologic examination result and peak headache severity within 1 hour of onset of pain symptoms.

Although the presence of neck pain and stiffness on physical examination in emergency department patients with an acute headache is strongly associated with subarachnoid hemorrhage, do not use a single physical sign and/or symptom to rule out subarachnoid hemorrhage.”
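The quoted rule is mechanical enough to be written down directly. Below is a minimal Python sketch of its yes/no logic; the function and parameter names are my own, and a real system would also have to check the rule’s applicability criteria (alert patients with a new, severe, non-traumatic headache peaking within an hour of onset):

```python
def ottawa_sah_high_risk(age_years: int,
                         neck_pain_or_stiffness: bool,
                         witnessed_loss_of_consciousness: bool,
                         onset_during_exertion: bool,
                         thunderclap_headache: bool,
                         limited_neck_flexion_on_exam: bool) -> bool:
    """True if any Ottawa Subarachnoid Hemorrhage Rule criterion is present.

    The rule is highly sensitive: if no criterion is present, subarachnoid
    hemorrhage is effectively ruled out. A True result does NOT confirm the
    diagnosis (low specificity); it only means investigation is warranted.
    """
    return (age_years >= 40
            or neck_pain_or_stiffness
            or witnessed_loss_of_consciousness
            or onset_during_exertion
            or thunderclap_headache
            or limited_neck_flexion_on_exam)

# Chuck's patient: normal exam, but the pain was abrupt (thunderclap).
if ottawa_sah_high_risk(age_years=35,
                        neck_pain_or_stiffness=False,
                        witnessed_loss_of_consciousness=False,
                        onset_during_exertion=False,
                        thunderclap_headache=True,
                        limited_neck_flexion_on_exam=False):
    print("Cannot rule out subarachnoid hemorrhage: investigate.")
```

On these inputs the rule flags the patient, which is exactly the argument Chuck failed to make out loud.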

When I visit the doctor, safeguarding my health is my primary concern, and if technology can enhance that, I see no reason to resist it. Should the advantages of this approach be scientifically substantiated in the future, it would be shortsighted, and potentially unethical, to dismiss its adoption. In the realm of medical practice, the duty to embrace strategies that could demonstrably improve patient outcomes becomes a significant factor. The hypothetical validation of such benefits would therefore mandate serious consideration of this approach, as it aligns with the ethical imperative to preserve life.

