
Artificial Intelligence: A New Era in Medical Malpractice

On Behalf of | Dec 13, 2019 | Medical Malpractice


The use of artificial intelligence (AI) in healthcare settings is a hazardous development that could result in serious injuries or wrongful deaths. Improper coding or misinterpretation of the data provided by AI systems could cause physicians, surgeons, or other members of the medical team to make otherwise preventable mistakes.

The Benefits of AI in Healthcare

Proponents of expanded AI use in healthcare settings argue that the technology will deliver faster, more accurate diagnostics and more efficient health management. They also claim that AI will enable more personalized care, improve disease tracking, and streamline the flow of information within healthcare facilities.

Supporters say the increasing adoption of AI within the healthcare industry will improve patient outcomes and make it possible to better predict risks and prevent diseases before they develop. Nationwide, Google Home, Amazon’s Alexa, and similar systems are becoming increasingly common, and many healthcare facilities already use them for billing management, appointment scheduling, and other administrative tasks.

The Real Risks

The computers and algorithms that AI technology depends on are not flawless. Improper coding, equipment failure, and other factors can lead to a faulty interpretation of medical data, which in turn could result in misdiagnosis, failure to diagnose, or delayed or improper treatment.

Advances in artificial intelligence technology are occurring faster than their safety and efficacy can be evaluated. Many patients rely on symptom checkers and online portals built on opaque algorithms and unspecified data sources. If any single component in this chain of information is incorrect, patients could put themselves at considerable risk by ignoring symptoms or pursuing treatments that endanger their health.

Similarly, physicians who do not know what inputs an AI system relies on cannot verify the accuracy of the information it provides; much of that data could be corrupted or outdated without the physician’s knowledge. Many physicians remain leery of this emerging technology and its potential for patient harm. Should an injury or death occur, the physician, along with the programmers, software providers, and distributors, could be liable for the “decisions” made by machines.