Issues and Challenges Facing the Future of AI in Healthcare

The integration of artificial intelligence (AI) into healthcare is happening faster than predicted. Machine learning, natural language processing and robotics are already mainstays within the field of medicine.

Clinical decision support (CDS) programs are designed to guide clinicians in the ordering of tests, and many are embedded in electronic medical record (EMR) systems. Evidence-based guidelines for physicians ordering imaging studies are one area of guidance. Another type of CDS acts as a “second set of eyes,” providing real-time guidance to radiologists as they interpret images. The Centers for Medicare and Medicaid Services (CMS), often criticized for its inability to keep pace with medical innovation, will soon mandate that radiologists use AI as a condition of reimbursement.
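
To make the first kind of CDS concrete, below is a minimal, purely illustrative sketch of a rule-based check at order entry. The guideline table, the 1–9 scoring, and the function names are assumptions invented for illustration, not a real appropriate-use-criteria dataset or any actual EMR interface.

```python
# Illustrative sketch of a rule-based CDS check for imaging orders.
# The guideline table and scores below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class ImagingOrder:
    study: str          # e.g., "ct_head"
    indication: str     # e.g., "minor_head_trauma"

# Hypothetical appropriateness scores (1 = rarely appropriate,
# 9 = usually appropriate), keyed by (study, indication).
GUIDELINES = {
    ("ct_head", "minor_head_trauma"): 3,
    ("ct_head", "suspected_stroke"): 9,
    ("mri_lumbar", "acute_low_back_pain"): 2,
}

def advise(order: ImagingOrder) -> str:
    """Return guidance the EMR could display at order entry."""
    score = GUIDELINES.get((order.study, order.indication))
    if score is None:
        return "No guideline on file; clinical judgment applies."
    if score >= 7:
        return "Usually appropriate: order proceeds."
    if score >= 4:
        return "May be appropriate: consider alternatives."
    return "Rarely appropriate: documented rationale required."

print(advise(ImagingOrder("ct_head", "minor_head_trauma")))
```

In a real system the guideline content would come from a maintained evidence base and the advice would appear inside the ordering workflow; the point here is only the shape of the check.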

The involvement of AI in medical care is even more complex in medical cyber-physical systems (MCPS). These systems feed data from patient monitoring devices through a “smart controller” (an algorithm) that adjusts treatment delivery devices as needed. When human intervention is required, an alarm is triggered to alert a caregiver.
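
The monitor-controller-actuator loop can be sketched in a few lines. The scenario, thresholds, device functions, and control rule below are all invented for illustration; they do not reflect any real device protocol.

```python
# Toy sketch of an MCPS control loop: monitor -> controller -> actuator,
# with an alarm path for human intervention. All values are hypothetical.

import random

SAFE_SPO2 = 92          # assumed alarm threshold (percent)
TARGET_SPO2 = 96        # assumed control target

def read_spo2() -> float:
    """Stand-in for a pulse-oximeter reading."""
    return random.uniform(88, 99)

def set_oxygen_flow(liters_per_min: float) -> None:
    """Stand-in for commanding the treatment delivery device."""
    print(f"oxygen flow set to {liters_per_min:.1f} L/min")

def raise_alarm(reading: float) -> None:
    """Escalate to a caregiver when human intervention is needed."""
    print(f"ALARM: SpO2 {reading:.0f}% - caregiver intervention needed")

def control_step(flow: float) -> float:
    """One pass of the 'smart controller': alarm if unsafe, then adjust."""
    spo2 = read_spo2()
    if spo2 < SAFE_SPO2:
        raise_alarm(spo2)   # never silently ride through a safety breach
    # simple proportional adjustment toward the target
    flow = max(0.0, flow + 0.2 * (TARGET_SPO2 - spo2))
    set_oxygen_flow(flow)
    return flow

flow = 2.0
for _ in range(5):
    flow = control_step(flow)
```

Even this toy version shows why MCPS raise the stakes: the controller acts on the patient directly, and the alarm logic is the only point where a human re-enters the loop.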

The CDS and MCPS examples above are just two uses of AI touted as improving patient care while reducing costs. For these reasons, private and public healthcare insurers are eager to see AI assume a bigger role in healthcare, as are those in the business of AI. However, as these innovations are increasingly embraced, a number of issues need to be addressed.

One of these issues is efficacy. Does AI really improve patient care and make it more affordable? Improved patient safety is a function of improved patient care, and machines may make mistakes that humans would not. An error in an algorithm has the potential to harm many people, not just the single patient typically involved in a medical malpractice case. Exploitation of security vulnerabilities could cause the same kind of widespread harm. What monitoring mechanisms will be in place to identify systematic errors that lead to patient harm? Who will be responsible?

This leads to the issue of medical liability. Under many state tort laws, it has typically been physicians and/or hospitals that are at risk when a poor medical outcome is caused by deviations from professional standards. When AI algorithms play so deep a role in medical decision-making, what will be the liability of those who create and sell these programs? In the case of MCPS, the many different manufacturers of the various linked devices add another layer of complexity. Some refer to this linkage of FDA-approved devices as a new “virtual medical device.” What should the FDA’s role be in evaluating this type of device?

Medicine has been referred to as both an art and a science. AI focuses only on the science, but patients are human beings. It is imperative that physicians, through their specialty organizations, continue working with artificial intelligence experts and relevant government agencies to find the appropriate role of AI in healthcare and to ensure that patients and the healthcare delivery system are protected from unintended consequences.