Lab results confusing? Some patients use AI to interpret them, for better or worse

People are turning to chatbots like Claude to get help interpreting their laboratory test results. Smith Collection/Gado/Archive Photos/Getty Images


When Judith Miller had routine blood work done in July, she got a phone alert the same day that her lab results were posted online. So, when her doctor messaged her the next day that overall her tests were fine, Miller wrote back to ask about the elevated carbon dioxide and something called "low anion gap" listed in the report.

While the 76-year-old Milwaukee resident waited to hear back, Miller did something patients increasingly do when they can't reach their health care team. She put her test results into Claude and asked the AI assistant to evaluate the data.

"Claude helped springiness maine a clear knowing of nan abnormalities," Miller said. The generative AI exemplary didn't study thing alarming, truthful she wasn't anxious while waiting to perceive backmost from her doctor, she said.


Patients have unprecedented access to their medical records, often through online patient portals such as MyChart, because federal law requires health organizations to immediately release electronic health information, such as notes on doctor visits and test results.

And many patients are using large language models, or LLMs, like OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini, to interpret their records. That help comes with some risk, though. Physicians and patient advocates warn that AI chatbots can produce incorrect answers and that sensitive medical information might not remain private.

But does AI know what it's talking about?

Yet, most adults are cautious about AI and health. Fifty-six percent of those who use or interact with AI are not confident that information provided by AI chatbots is accurate, according to a 2024 KFF poll. (KFF is a health information nonprofit that includes KFF Health News.)

That lack of confidence is borne out in research.

"LLMs are theoretically very powerful and they tin springiness awesome advice, but they tin besides springiness genuinely unspeakable proposal depending connected really they're prompted," said Adam Rodman, an internist astatine Beth Israel Deaconess Medical Center successful Massachusetts and chair of a steering group connected generative AI astatine Harvard Medical School.

Justin Honce, a neuroradiologist at UCHealth in Colorado, said it can be very difficult for patients who are not medically trained to know whether AI chatbots make mistakes.

"Ultimately, it's conscionable nan request for be aware wide pinch LLMs. With nan latest models, these concerns are continuing to get little and little of an rumor but person not been wholly resolved," Honce said.

Rodman has seen a surge in AI use among his patients in the past six months. In one case, a patient took a screenshot of his hospital lab results on MyChart, then uploaded them to ChatGPT to prepare questions ahead of his appointment. Rodman said he welcomes patients' showing him how they use AI, and that their research creates an opportunity for discussion.

Roughly 1 in 7 adults over 50 use AI to get health information, according to a recent poll from the University of Michigan, while 1 in 4 adults under age 30 do so, according to the KFF poll.


Using the internet to advocate for better care for oneself isn't new. Patients have traditionally used websites such as WebMD, PubMed, or Google to search for the latest research and have sought advice from other patients on social media platforms like Facebook or Reddit. But AI chatbots' ability to generate personalized recommendations or second opinions in seconds is novel.

What to know: Watch out for "hallucinations" and privacy issues

Liz Salmi, communications and patient initiatives director at OpenNotes, an academic lab at Beth Israel Deaconess that advocates for transparency in health care, had wondered how good AI is at interpretation, specifically for patients.

In a proof-of-concept study published this year, Salmi and colleagues analyzed the accuracy of ChatGPT, Claude, and Gemini responses to patients' questions about a clinical note. All three AI models performed well, but how patients framed their questions mattered, Salmi said. For example, telling the AI chatbot to take on the persona of a clinician and asking it one question at a time improved the accuracy of its responses.

Privacy is a concern, Salmi said, so it's critical to remove personal information like your name or Social Security number from prompts. Data goes directly to the tech companies that have developed AI models, Rodman said, adding that he is not aware of any that comply with federal privacy law or consider patient safety. Sam Altman, CEO of OpenAI, warned on a podcast last month about putting personal information into ChatGPT.

"Many group who are caller to utilizing ample connection models mightiness not cognize astir hallucinations," Salmi said, referring to a consequence that whitethorn look sensible but is inaccurate. For example, OpenAI's Whisper, an AI-assisted transcription instrumentality utilized successful hospitals, introduced an imaginary aesculapian curen into a transcript, according to a study by The Associated Press.

Using generative AI demands a new type of digital health literacy that includes asking questions in a particular way, verifying responses with other AI models, talking to your health care team, and protecting your privacy online, said Salmi and Dave deBronkart, a cancer survivor and patient advocate who writes a blog devoted to patients' use of AI.

Physicians must be cautious with AI too

Patients aren't the only ones using AI to explain test results. Stanford Health Care has launched an AI assistant that helps its physicians draft interpretations of clinical tests and lab results to send to patients.

Colorado researchers studied the accuracy of ChatGPT-generated summaries of 30 radiology reports, along with patients' satisfaction with them. Of the 118 valid responses from patients, 108 indicated the ChatGPT summaries clarified details about the original report.

But ChatGPT sometimes overemphasized or underemphasized findings, and a small but significant number of responses indicated patients were more confused after reading the summaries, said Honce, who participated in the preprint study.

Meanwhile, after four weeks and a couple of follow-up messages from Miller in MyChart, Miller's doctor ordered a repeat of her blood work and an additional test that Miller suggested. The results came back normal. Miller was relieved and said she was better informed because of her AI inquiries.

"It's a very important instrumentality successful that regard," Miller said. "It helps maine shape my questions and do my investigation and level nan playing field."

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF.