Courts may soon expect doctors to use AI — or explain why they didn't when harm occurs
This article was produced in partnership with Gluckstein Personal Injury Lawyers
From reading scans and flagging early symptoms to predicting readmissions and even proposing (and, in some cases, performing) treatment, artificial intelligence is increasingly active in Canadian health systems. But as it moves from the fringes of medicine into the heart of clinical decision-making, a pressing legal question arises: when medical outcomes under AI fall short, who is responsible?
Jan Marin, a medical negligence lawyer at Gluckstein Personal Injury Lawyers, is watching this tension unfold in real time. She sees AI’s advancement as a herald of change, predicting significant impact on the practice of medical malpractice law.
“The integration of AI tools into clinical settings has the potential to shift not only how medicine is practised, but how legal responsibility is assessed when harm occurs,” she says. “We’re seeing new technologies outpace legal doctrine — and the consequences are beginning to surface in malpractice litigation.”
What do these AI tools aim to do?
In today’s hospitals and diagnostic centres, AI tools remain a far cry from replacing physicians. But they are actively shaping how clinical judgments are made, prioritized, and later defended. That, Marin notes, is the unsettled legal ground modern medicine must now confront.
Radiology platforms like Aidoc and Zebra Medical Vision interpret CTs and X-rays to flag urgent findings. Aidoc’s algorithm, which is FDA-cleared and used in hospitals across the globe, identifies acute abnormalities such as pulmonary embolism, intracranial hemorrhage, stroke, and cervical spine fractures. Processing scans in real time, it enables faster intervention and helps healthcare practitioners prioritize cases. Its platform also integrates into radiology and emergency workflows, helping clinicians make faster, more accurate decisions, especially in trauma and stroke care.
Zebra focuses on detecting a range of diseases — surfacing findings of coronary artery calcifications, osteoporosis, emphysema, and fatty liver disease — as well as acute issues like aortic aneurysms. The company’s goal is to aid in large-scale screening and proactive care by embedding its AI into radiology workflows.
US-based PathAI assists pathologists in identifying malignancies. It partners with biopharma companies, diagnostic labs, and academic institutions to improve diagnostic precision, support drug development, and reduce diagnostic error. In ophthalmology, DeepMind’s AI, developed with Moorfields Eye Hospital in the UK, focuses on diagnosing and prioritizing eye diseases using deep learning algorithms trained on tens of thousands of retinal OCT (optical coherence tomography) scans. Able to detect over 50 conditions, it also recommends the urgency of referral, helping clinicians triage patients more effectively. It has demonstrated performance on par with expert clinicians in detecting eye disease and can make a marked difference in settings with high demand and limited specialist resources.
Meanwhile, predictive analytics tools are seeing widespread adoption across hospital networks. Epic Systems, leveraging Microsoft Azure, can forecast patient deterioration, such as cardiac arrest, or readmission based on electronic health record (EHR) data. The cloud-based tools’ ability to identify at-risk patients earlier translates to faster, data-driven care decisions. Duke University’s Sepsis Watch uses machine learning to alert clinicians in real time when patients are trending toward sepsis, based on continuous analysis of vital signs and clinical data, often before symptoms become obvious. It too supports early intervention.
On the triage side, platforms like Babylon Health and Sensely expand access to early guidance and virtual care. The former, a digital health platform, uses AI-powered chatbots to provide symptom checking, health assessments, and virtual triage. It’s designed to expand access to basic medical guidance, especially in regions with limited healthcare infrastructure. The latter offers a virtual nurse avatar that uses AI and speech recognition to monitor chronic conditions, guide patients through symptom assessments, and recommend appropriate care options. Both platforms aim to reduce pressure on frontline providers and improve patient engagement through accessible, AI-driven interfaces.
Standard of care on the move
The most immediate legal consequence of widespread adoption of AI systems like those outlined above is their impact on the standard of care. Traditionally defined by what a reasonable physician would do under similar circumstances, that benchmark now faces a high-tech update. If AI tools outperform average clinicians in speed or accuracy, could failure to use them be deemed negligent?
“We may soon face cases where a physician is faulted not for making a mistake, but for choosing not to rely on AI, especially if that tool has become standard practice and is readily available. This may also affect hospital liability where institutions fail to make available key tools that will improve patient care,” Marin explains.
This becomes even more complex when AI does issue a warning that the physician overrides or ignores. Should the clinician have deferred to the machine? Should they have recorded their rationale?
Further complicating matters, AI systems are dynamic. They are designed to continuously improve, learning from new data and applying that knowledge almost instantaneously. This raises the possibility of a living standard of care, one that evolves faster than traditional clinical guidelines or consensus practices, with significant implications for medical malpractice law.
Liability in the age of machine judgment
For plaintiff-side medical malpractice lawyers, proving liability following harm is essential. But AI introduces complexity around who is at fault. If an algorithm fails to identify a tumour or miscalculates a risk score, for example, does the blame fall on the physician who used the tool, the hospital that deployed it, or the company that built it?
“We’re entering an era of hybrid liability,” Marin says. “It’s likely we’ll see claims naming multiple defendants — physicians, hospitals, and AI vendors — particularly when the roles aren’t clearly delineated.”
The potential for shared responsibility will force courts to weigh human and machine reasoning side by side, possibly for the first time in Canadian jurisprudence. It will also require specialized expertise, drawn from a very narrow pool of experts, on how AI programs function, how they are intended to be relied upon, and where the line between human and machine intelligence should be drawn. None of these questions is simple.
As of now, Canadian courts have not issued landmark rulings apportioning liability in any AI-influenced malpractice cases. But with Health Canada’s increasing approvals of “software as a medical device” (SaMD), the stage is set for litigation that blurs the line between product liability and professional negligence.
Ultimately, the legal system isn’t just grappling with new tools, Marin warns. It’s grappling with a new paradigm.
“As AI becomes embedded in frontline care, we’ll need to rethink not only who’s responsible when harm occurs, but how we define responsibility in the first place.”
Up next in Part Two
Marin explores how AI tools are altering litigation strategy for plaintiff-side medical malpractice lawyers, from discovery to expert evidence, and why hospitals and health systems may become the new focal point of institutional liability.