By Darius Tahir, KFF Health News
Preparing cancer patients for difficult decisions is an oncologist’s job. Doctors don’t always remember to do it, however. At the University of Pennsylvania Health System, doctors are nudged to talk about a patient’s treatment and end-of-life preferences by an artificially intelligent algorithm that predicts the chances of death.
But it’s far from being a set-it-and-forget-it tool. A routine tech checkup revealed the algorithm decayed during the covid-19 pandemic, getting 7 percentage points worse at predicting who would die, according to a 2022 study.
There were likely real-life impacts. Ravi Parikh, an Emory University oncologist who was the study’s lead author, told KFF Health News the tool failed hundreds of times to prompt doctors to initiate that important discussion — possibly heading off unnecessary chemotherapy — with patients who needed it.
He believes several algorithms designed to enhance medical care weakened during the pandemic, not just the one at Penn Medicine. “Many institutions are not routinely monitoring the performance” of their products, Parikh said.
Algorithm glitches are one facet of a dilemma that computer scientists and doctors have long acknowledged but that is starting to puzzle hospital executives and researchers: Artificial intelligence systems require consistent monitoring and staffing to put in place and keep working well.
In essence: You need people, and more machines, to make sure the new tools don’t mess up.
“Everybody thinks that AI will help us with our access and capacity and improve care and so on,” said Nigam Shah, chief data scientist at Stanford Health Care. “All of that is nice and good, but if it increases the cost of care by 20%, is that viable?”
Government officials worry hospitals lack the resources to put these technologies through their paces. “I have looked far and wide,” FDA Commissioner Robert Califf said at a recent agency panel on AI. “I do not…