For nurse Judy Schmidt, the beeping monitors hooked up to critically ill patients at the Community Medical Center in Toms River, New Jersey, were just a normal part of the whirlwind of activity in the intensive care unit.
But looking back on her work about a decade ago, Schmidt said she realizes those machines were using early versions of artificial intelligence to help analyze and track the patients’ health.
Artificial intelligence has been used in health care settings for years, even before the public became familiar with the technology, said Schmidt, CEO of the New Jersey State Nurses Association, a professional organization.
Today, some electronic health records are programmed to alert providers when patients could be having symptoms of a major illness. And in medical education, professors are depending more on simulations using mannequins, such as those programmed to mimic a birth, she said.
But the fast-paced development of these systems — to the point where robotics are being used in surgery — raises practical and ethical questions for the providers who work with that technology, Schmidt said.
Some experts say AI technology can improve the health care industry by automating administrative work, offering virtual nursing assistance and more. AI systems can predict whether a patient is likely to get sicker while in the hospital. Virtual assistant chatbots in telehealth services enable remote consultations. And more health care providers could start using robotics in the examination room.
But some nurses are concerned that few laws govern AI's use in hospitals and beyond, leaving little protection for individuals who could be harmed by the technology's mistakes.
“In the long run, whatever artificial intelligence we use, it’s still the human — the person — that has to take that data, and the interpretation of that data in some respects, and apply it to the real person that’s…