Combining human intelligence with the power of artificial intelligence (AI) can help transform cancer diagnostics.[1] That is the mantra of Kheiron Medical Technologies, which uses AI technologies to help radiologists detect cancer earlier and more accurately. The New York Times recently reported on Kheiron's success in using AI to assist radiologists in reading mammograms.[2] In 2021, Kheiron's AI software was used in five health clinics in Hungary, where it identified 22 cases of cancer that radiologists had missed.[3] In 2022, Kheiron reported that its software matched the performance of human radiologists when acting as a second reader of mammography scans, and cut radiologists' workloads by about 30%.[4] It has been estimated that Kheiron's AI technology increased the cancer detection rate by 13%.[5]

The use of AI in the medical field is on the rise, given the potential benefits of better patient outcomes, earlier cancer detection, and reduced physician burnout. However, this evolving landscape raises a number of legal and ethical concerns that will play out in the ensuing years. This article will discuss a health care provider's use of AI to aid in the screening of breast cancer, what avenues could be pursued in the event a patient is harmed by a missed diagnosis of cancer attributable to AI, and what to expect if AI shifts from a second reader to the sole reader of imaging studies.

AI Explained    

It is well known that early cancer detection provides optimal outcomes for patients. Mammography is the most common method of breast cancer screening: a high-resolution image is obtained and read by a radiologist, whose role is to interpret the image and report abnormal findings.

A simple definition of AI is "the attempt to mimic human intelligence in machine form, allowing the machine to solve problems using a set of stipulated rules with which the machine is provided."[6] The "machine" in this definition is the AI program utilized. Sub-types of AI have been developed and are used in our everyday lives, for instance in real-time traffic light controls, email classification, and fraud detection for our bank accounts.
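
To make this rule-based definition concrete, the following is a minimal, purely illustrative sketch in Python; the keywords, threshold, and function name are invented for this example and are not drawn from any real product.

```python
# Toy illustration of rule-based "AI": the machine solves a problem by
# applying stipulated rules supplied by its programmers; it does not
# learn from data. All rules and thresholds here are hypothetical.

SPAM_KEYWORDS = {"winner", "free", "urgent", "prize"}

def classify_email(subject: str) -> str:
    """Classify an email subject as 'spam' or 'ham' using fixed rules."""
    words = set(subject.lower().split())
    # Stipulated rule: two or more trigger words means spam.
    if len(words & SPAM_KEYWORDS) >= 2:
        return "spam"
    return "ham"

print(classify_email("urgent claim your free prize"))  # -> spam
print(classify_email("meeting agenda for monday"))     # -> ham
```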

Machine learning is a sub-type of AI that learns from data sets, allowing the machine to reach conclusions or predictions without being specifically programmed for certain responses.[7] The machine receives raw data (input data), the data is analyzed through middle layers, and the machine produces a conclusion based on the input data (output data).[8] Machine learning draws on large stored datasets, which are used to train prediction models and make generalizations.[9]
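
To illustrate this input-to-output pipeline generically, the following minimal sketch assumes the open-source scikit-learn library and its built-in Wisconsin breast cancer dataset; it is a teaching example only, not Kheiron's method or any clinical system.

```python
# Minimal sketch of supervised machine learning: the model is never
# programmed with diagnostic rules; it infers them from labeled data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)   # input data + known labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)    # no hand-written rules
model.fit(X_train, y_train)                  # the "learning" step

# Output data: predictions for cases the model has never seen.
print("accuracy on held-out cases:", model.score(X_test, y_test))
```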

Deep learning is another sub-type of AI, which uses more (deeper) middle layers than conventional machine learning. While it requires more computing power, it produces more accurate and reliable results.[10] Deep learning is a newer branch of machine learning that works by establishing a system of artificial neural networks that can classify and recognize images.[11]
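
A schematic sketch of such stacked middle layers, assuming the open-source PyTorch library, might look like the following; the layer sizes and two-class output are invented for illustration, and real clinical networks are vastly deeper and trained on enormous image sets.

```python
# Schematic deep-learning model: stacked "middle" layers transform an
# image step by step into a prediction. Illustrative only.
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # middle layer 1
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # middle layer 2
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 2),  # output: e.g., benign vs. suspicious
        )

    def forward(self, x):
        return self.layers(x)

model = TinyImageClassifier()
fake_scan = torch.randn(1, 1, 64, 64)  # stand-in for a 64x64 grayscale image
print(model(fake_scan))                # two raw class scores
```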

Machine learning is able to modify itself when provided with new data. Successful AI medical diagnostic systems use machine learning and deep learning models to deliver the degree of accuracy necessary for quality medical care and treatment.[12]

Computer-aided detection (CAD) has been used as a clinical tool for the identification of breast lesions for years.[13] CAD is widely used as a second reader, following a human radiologist's read of the imaging study. Algorithms have also been developed and are used for identifying and reporting lesions in screening and diagnostic ultrasounds.[14] Other software allows automated screening and breast density assessment tools to be used in conjunction with automated result-reporting tools.[15] This provides more tailored risk assessments and supplemental screenings to better aid in the diagnosis of breast cancer.
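
A hypothetical sketch of such a second-reader workflow appears below; the threshold, field names, and escalation rule are invented for illustration and do not reflect any vendor's actual protocol.

```python
# Hypothetical "second reader" routing: the AI read is compared with the
# radiologist's first read, and any positive read triggers human review.
from dataclasses import dataclass

SUSPICION_THRESHOLD = 0.5  # invented cutoff for the AI suspicion score

@dataclass
class StudyResult:
    study_id: str
    radiologist_abnormal: bool  # human first read
    ai_score: float             # AI second read, 0.0 to 1.0

def route_study(result: StudyResult) -> str:
    ai_abnormal = result.ai_score >= SUSPICION_THRESHOLD
    if result.radiologist_abnormal or ai_abnormal:
        # A human radiologist always makes the final call.
        return "escalate for human review"
    return "routine follow-up"

print(route_study(StudyResult("MMG-001", False, 0.82)))  # AI flags a possible miss
```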

Black Box  

Our current legal system was developed without any regard to AI, since it is based largely on intent and human behavior. Questions of what is reasonable and foreseeable lie at the heart of our tort system when a patient claims to have been harmed. In a legal proceeding, human behavior and decision-making are analyzed by a trier of fact (jury or judge). The trier of fact determines whether certain acts or inactions were reasonable under the circumstances, and if not, whether those acts or inactions caused injury to the patient. But what happens if the decision-making process was based entirely on a non-human AI tool? And what if there was no way for a human to determine how the AI tool came to its conclusion?

The more advanced AI becomes, the more difficult it will be for its human programmers and users to retrace or follow the middle layers, or logic, behind the machine's conclusion. The more complex the middle layers are, the more hidden they become, and the more difficult it is to track the data being processed through to the ultimate conclusion. This is known as the "black box" problem.
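
The point can be made concrete with a toy example, again assuming PyTorch: every middle layer of even a tiny network can be inspected, yet what it exposes is only arrays of numbers, not human-readable reasoning.

```python
# Illustration of the "black box" problem: each middle layer is fully
# observable, but its output explains nothing by itself.
import torch
import torch.nn as nn

hidden = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
x = torch.randn(1, 4)  # some input data

activations = x
for layer in hidden:
    activations = layer(activations)
    # Prints e.g. "Linear [[ 0.12 -0.47 ...]]": numbers, not reasons.
    print(type(layer).__name__, activations.detach().numpy().round(2))
```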

Accountability and transparency are particularly relevant to the question of liability in the context of AI and breast cancer diagnosis. Issues inherent in AI, such as the black box dilemma, pose questions that are difficult to answer given the unprecedented nature of accountability in the context of technology. Who or what is accountable when a black box device is making decisions? Can traditional tort principles apply in the context of AI? How does the use of AI impact the physician-patient relationship? The answers to these questions are unclear, and likely will remain unclear until AI becomes more advanced and the various scenarios play out over time and in court. However, we may be able to predict the answers by analyzing how established case law could apply to situations involving AI.

Legal Concepts for AI Cases  

In medical malpractice cases, the physicians involved in the complained-of care are typically named as defendants, along with the respective hospital or the physicians' employer. In a case alleging a negligently read mammogram, typically the radiologist who read the mammogram and the radiologist's employer are sued. However, many questions arise once AI enters medical practice. With AI already performing human functions, we can envision a time in the not-too-distant future when AI software replaces the human radiologist and AI, not a human physician, makes the diagnosis. Precise case law does not yet exist for this hypothetical. However, predictions can be made as to how certain AI claims may be handled by extrapolating from current case law. In particular, cases involving robot-assisted surgery provide insight into how courts may tackle future AI-centered cases.

One of the pillars of the practice of medicine is the physician-patient relationship. Physicians interact with their patients through conversations and examinations, thereby developing a physician-patient relationship.[16] Technological advancements, even without AI, have already led to a shift toward greater patient autonomy. Patients now have access to their medical records on patient portals, and can access a wealth of information about their health (even if inaccurate) via the internet. In this internet-rich era, physicians remain the clinical experts, even as some patients independently research their medical conditions and treatment options.[17]

When a human radiologist interprets a mammogram, it is well established that a physician-patient relationship exists between the radiologist and the patient. Similarly, if AI serves as a second read, or provides a first read followed by a radiologist's second read, a physician-patient relationship still exists between the patient and the radiologist. Under both circumstances, the physician is liable in the event of a misdiagnosis that leads to patient harm.

However, what happens to the definition of the physician-patient relationship when AI fully replaces the physician in the diagnosis and interpretation of imaging studies? Has an AI-patient relationship been established to hold AI responsible for the negligent interpretation of a radiology study? 

Courts have not yet been confronted with the issue of an AI-patient relationship since AI is still being used to assist physicians. In the practice of radiology utilizing AI, there is still a physician-patient relationship between the radiologist and patient, despite the use of the technology. This is akin to surgeons performing robotic surgeries. Since robotic surgeries have been performed for many years, case law is a bit more defined in this area. In Hoke v. Miami Valley Hosp., the plaintiff filed a medical malpractice lawsuit after suffering a complication during a robot-assisted gynecological procedure. The expert physicians on both sides agreed that using the robot during the surgery was within the standard of care, but disagreed on whether the doctor properly operated the robot during the surgery.[18] The jury returned a verdict in favor of the defendants. In Hoke, there was a physician-patient relationship between the surgeon and patient despite the use of a robot, since the surgeon was responsible for the technical use of the robot during the surgery.

In the context of a radiology case, the radiologist utilizing AI to assist in reading a mammogram remains responsible for the proper interpretation of the mammogram. Even if there were a technical problem with the AI tool, the radiologist is ultimately responsible for interpreting the mammogram in accordance with the standards of radiology care, and he or she would not be shielded from liability if the AI failed.

Some experts are optimistic that AI taking on a larger role in patient care will improve medical care. Other experts believe the opposite, and fear that the medical relationship might soon be mediated by machines acting as conversational agent systems, thereby making physicians obsolete.[19] Even if AI is completely relied upon to make medical diagnoses, the ideal scenario is that a physician relays the diagnosis to the patient in a sensitive and meaningful way to further patient care. Some physicians may fear that if a diagnosis reached solely by AI is incorrect, the physician relaying that diagnosis may be sued for malpractice. However, case law supports not holding those relaying physicians responsible where their role is limited to receiving, reviewing, and relaying a diagnosis.

Courts have held that radiologists who simply review reports have not created a physician-patient relationship. In Tulloch v. St. Francis Hosp., the plaintiff brought a medical malpractice lawsuit claiming failure to diagnose breast cancer due to the negligent interpretation of mammograms.[20] The court granted summary judgment in favor of one defendant on the ground that his role in the plaintiff's care was limited to "reviewing the previously dictated report [by another radiologist] to ensure the absence of any obvious transcription errors and to permit the timely release of the report."[21] Based on Tulloch, we can reasonably predict that when AI interprets an imaging study, and then drafts and sends a report of its findings to the referring physician, the referring physician who relays the diagnosis to the patient should not be liable for any negligent diagnosis made by the AI.

The question then becomes: how does a patient who is injured by an AI misdiagnosis recover? Does the patient bring a medical malpractice lawsuit claiming that the AI assumed the role of a physician and thereby established an AI/physician-patient relationship? Some experts believe the answer is yes, if personhood is conferred upon AI.[22] In other words, the machine would need to be viewed as a person under the law, with duties owed by the AI to the patient. The AI tool would also be required to carry malpractice insurance, similar to physicians' medical liability insurance, and any malpractice claims against the AI machine would be paid out from its policy.

If the AI machine is considered a person under the law in a medical malpractice action, the AI machine would need to establish that its algorithm conformed to the medical standard of care. Expert testimony would be required to establish the standard of care for AI machines engaged in the practice of medicine. In the context of a negligently interpreted mammogram, would the expert be a breast radiologist, the human engineer who created the AI algorithm, or another AI machine capable of communicating the decision-making processes of the AI algorithm? Under our current tort principles, we would have to determine what a reasonable AI machine would have done under the circumstances, which is a difficult inquiry given the black box dilemma.

If an AI tool is not considered a person in the context of a medical malpractice action, then liability would fall on the medical practice or hospital that contracted with the AI company to use the AI algorithm in the practice of medical care, under a theory of vicarious liability. In this scenario, the standard of care would require medical facilities to exercise due care in "procedurally evaluating and implementing black-box algorithms."[23] The health care facility, arguably through its physicians, would be tasked with evaluating the AI algorithms utilized and confirming the results, and would be held liable if the AI tool led to negligent care.[24] Another possible avenue for recovery is an enterprise liability approach, which confers liability on all groups involved in the use and implementation of the AI system,[25] and would take the form of a combined lawsuit asserting claims of medical malpractice against the health care facility and products liability against the AI software company.

Conclusion

AI has begun integrating into almost every aspect of our lives, and has produced exciting and positive results in medicine, as was seen in Hungary with the use of Kheiron's technology to detect breast cancer as well as, if not better than, radiologists.[26] As the practice of medicine, and radiology care in particular, shifts from utilizing AI as an assistive tool to fully replacing physicians in making diagnoses, the physician-patient relationship forming the basis of medical malpractice lawsuits will be upended. When AI eventually assumes the role of physician, recourse for injured patients may require us to pursue novel and unprecedented legal theories.


References

[1] https://www.kheironmed.com/about-kheiron-medical/ (accessed June 27, 2023).

[2] Satariano, A. & Metz, C., Using A.I. to Detect Breast Cancer That Doctors Miss. The New York Times, 5 March 2023, https://www.nytimes.com/2023/03/05/technology/artificial-intelligence-breast-cancer-detection.html

[3] Id.

[4] Id., at p. 3.

[5] Id.

[6] Jorstad, K., Intersection of artificial intelligence and medicine: tort liability in the technological age. Journal of Medical Artificial Intelligence, 30 December 2020, http://jmai.amegroups.com/article/view/5938/html.

[7] Id.

[8] Id.

[9] Dileep G., Gianchandani Gyani S.G. (October 15, 2022) Artificial Intelligence in Breast Cancer Screening and Diagnosis. Cureus 14(10): e30318. DOI 10.7759/cureus.30318

[10] Jorstad, p. 3.

[11] Dileep, p. 2.

[12] Id., p. 4.

[13] Aminololama-Shakeri, S., Lopez, J.E. The Doctor-Patient Relationship with Artificial Intelligence. American Journal of Roentgenology. 2019. 212(2): 308-310. https://www.ajronline.org/doi/epdf/10.2214/AJR.18.20509

[14] Id.

[15] Id.

[16] Aminololama-Shakeri, p. 309.

[17] Id., p. 309.

[18] Hoke v. Miami Valley Hosp., at ¶¶ 1, 15, 23, 25, 27.

[19] Nagy, M., Sisk, B. How will Artificial Intelligence Affect Patient-Clinician Relationships? AMA Journal of Ethics. 2020. 22(5): 395-400. https://journalofethics.ama-assn.org/article/how-will-artificial-intelligence-affect-patient-clinician-relationships/2020-05

[20] Tulloch v. St. Francis Hosp., 38 Misc. 3d 1220(A), 960 N.Y.S.2d 289, 292 (Sup. Ct. 2013).

[21] Id.

[22] Sullivan, H.R. & Schweikart, S.J., Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI? AMA J Ethics. 2019;21(2):E160-166. https://journalofethics.ama-assn.org/sites/journalofethics.ama-assn.org/files/2019-01/hlaw1-1902_1.pdf

[23] Id.

[24] Id.

[25] Id.

[26] Satariano, A., et al.

Meet the Authors

Betsy Baydala
Partner
Kaufman Borgeest & Ryan, LLP
 
Janine Luckie
Senior Associate
Kaufman Borgeest & Ryan, LLP

Betsy is a Partner at Kaufman Borgeest & Ryan, LLP, and practices in the areas of Medical Malpractice and Cyber Liability. Janine is a Senior Associate at Kaufman Borgeest & Ryan, LLP, and practices in the areas of Medical Malpractice and Skilled Nursing, Assisted Living, Home Care & Hospice. Together they have extensive experience representing various health care providers, and they handle all aspects of litigation. Special thanks to Francesca Casalaspro for her research contributions to the article. Betsy can be reached at bbaydala@kbrlaw.com and Janine at jluckie@kbrlaw.com.
