A majority of Americans would feel “uncomfortable” with their doctor relying on AI in their medical care, according to recent polling, but despite those misgivings it is likely you have already encountered the results of artificial intelligence in your doctor’s office or local pharmacy.
The true extent of its use “is a bit dependent on how one defines AI,” said Lloyd B. Minor, dean of the Stanford University School of Medicine, but he said some uses have been around for years.
Most large health care providers already use automated systems that verify dosage amounts for medications and flag possible drug interactions for doctors, nurses and pharmacists.
“There’s no question that has reduced medication errors, because of the checking that goes on in the background through applications of AI and machine learning,” Minor said.
Hundreds of devices enabled with AI technologies have been approved by the FDA in recent years, mostly in the fields of radiology and cardiology, where these algorithms have shown promise at detecting abnormalities and early signs of disease in X-rays and diagnostic scans. But despite new applications for AI being touted every day, a science fiction future of robot practitioners taking your vitals and diagnosing you isn’t coming soon to your doctor’s office.
With the recent public launch of large language model chatbots like ChatGPT, the buzz around how the health care industry can ethically and safely use artificial intelligence is building to a crescendo, just as the public is starting to get familiar with how the technology works.
“It’s very obvious health and medicine is one of the key areas that AI can make a huge contribution to,” said Fei-Fei Li, co-director of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI). Her group has joined forces with the Stanford School of Medicine to launch RAISE-Health (Responsible AI for Safe and Equitable Health), a new initiative to guide the responsible…