Google AI has better bedside manner than human doctors — and makes better diagnoses
This article was written by Mariana Lenharo for Nature News.
An artificial intelligence (AI) system trained to conduct medical interviews matched, or even surpassed, human doctors’ performance at conversing with simulated patients and listing possible diagnoses on the basis of the patients’ medical history [1].
The chatbot, which is based on a large language model (LLM) developed by Google, was more accurate than board-certified primary-care physicians in diagnosing respiratory and cardiovascular conditions, among others. Compared with human doctors, it managed to acquire a similar amount of information during medical interviews and ranked higher on empathy.
“To our knowledge, this is the first time that a conversational AI system has ever been designed optimally for diagnostic dialogue and taking the clinical history,” says Alan Karthikesalingam, a clinical research scientist at Google Health in London and a co-author of the study, which was published on 11 January in the arXiv preprint repository. It has not yet been peer reviewed.
Dubbed Articulate Medical Intelligence Explorer (AMIE), the chatbot is still purely experimental. It hasn’t been tested on people with real health problems — only on actors trained to portray people with medical conditions. “We want the results to be interpreted with caution and humility,” says Karthikesalingam.
Even though the chatbot is far from use in clinical care, the authors argue that it could eventually play a part in democratizing health care. The tool could be helpful, but it shouldn’t replace interactions with physicians, says Adam Rodman, an internal medicine physician at Harvard Medical School in Boston, Massachusetts. “Medicine is just so much more than collecting information — it’s all about human relationships,” he says.
Image credit: Image by fullvector on Freepik