AI is changing how care is delivered — and how responsibility is assessed in malpractice cases
This article was produced in partnership with Gluckstein Lawyers
This is a two-part series. Read Part 1 here.
Medical negligence claims are being framed and pursued in rapidly changing ways as artificial intelligence infiltrates — and fundamentally changes — how healthcare is delivered. AI tools now involved in diagnostics, risk prediction, and treatment prioritization leave behind a new kind of evidentiary trail — one that may alter how plaintiffs build claims, how defendants respond, and how courts allocate fault.
For Jan Marin, medical malpractice lawyer at Gluckstein Lawyers, this next phase of AI’s evolution demands close attention from the medical malpractice bar.
“As the healthcare industry undergoes these fundamental shifts, we as lawyers must adapt,” she explains. “We’re going to be confronted with new ways of proving or contesting negligence — and that affects how litigation will unfold from start to finish.”
AI-specific errors and emerging risks
While AI offers potential to reduce diagnostic errors, particularly in radiology, dermatology, and pathology, it also introduces novel risks. Algorithms trained on incomplete or biased datasets can produce flawed outputs. On the human side, clinicians may become overly reliant on AI-generated suggestions, a phenomenon known as automation bias.
These types of failures are specific to the advent of AI tools, Marin notes, and therefore may not fit cleanly within traditional malpractice categories.
“Although AI can improve consistency, it can also mask underlying problems if the data isn’t representative or the tool isn’t transparent,” Marin says. “We’re likely going to see the emergence of what you could call ‘AI-specific errors’ — and they raise distinct legal questions.”
For example, if an AI tool misses a diagnosis that a competent physician might have made — or vice versa — determining whether the tool met or failed the standard of care becomes a highly technical matter. In some cases, plaintiffs may shift from alleging professional negligence against a clinician to alleging product liability against the AI developer, particularly where the algorithm is Health Canada-approved software as a medical device (SaMD). According to Marin, these situations may be more akin to product liability cases, but it will also be important to consider what level of reliance a physician can reasonably place on an AI tool.
Documentation and digital evidence
Looking ahead, Marin identifies AI’s digital auditability as one of its most significant effects on litigation. In many cases, determining what a physician knew or ought to have known comes down to usual practice and expert opinion. When technology is relied upon more heavily, there may be new and better evidence to make this determination. Many AI systems log their inputs, outputs, timestamps, and user interactions. That record can either bolster a defence — by showing a reasonable response to an AI alert, for example — or expose potential negligence if warnings were ignored or overruled.
By creating a new layer of documentation, “these tools may reduce defences based upon a reasonable ‘usual practice,’” Marin says, adding that by the same token they also expand the evidentiary burden.
Discovery may increasingly involve AI system logs, version histories, and even requests for algorithmic performance data. In some cases, lawyers may seek access to proprietary training datasets or performance metrics — raising complex questions around intellectual property, admissibility, and judicial understanding of technical evidence.
“Counsel will need to know what to ask for and how to interpret it,” she sums up.
Expert evidence and legal strategy
Viewing longstanding legal concepts in the medical malpractice space through the lens of AI underscores the need for litigators to expand their toolbox. Another arena facing disruption is expert evidence.
With technology increasingly central to care delivery, medical experts will still be critical, but data scientists, bioinformaticians, and AI engineers may also become essential to explain how algorithms function — or fail. This changes how both plaintiffs and defendants approach litigation strategy, and it behooves medical malpractice lawyers to keep up with the pace of technology.
“Lawyers handling these cases need a working knowledge of how AI is designed, how it performs, and where it can fall short,” Marin says. “The opinion of an expert may now involve probing the assumptions built into the algorithm and opining on the adequacy of the tool’s ‘training’ and functionality.”
This also has implications for informed consent. If a physician relies on an AI-supported diagnosis or decision tool, should the patient be informed that part of their care was influenced by machine learning? Should that disclosure be documented? These are questions that may soon be raised in court — particularly if an AI’s output contributed materially to a negative outcome.
Institutional responsibility expands
The rise of AI may also shift the responsibility of hospitals and health systems. Institutions are often the ones that procure and integrate AI tools into clinical workflows. If that process is flawed, liability may attach at the organizational level.
Hospitals may be held responsible for failing to monitor the performance of AI systems, for overlooking systemic bias in deployment, or for relying on vendors without sufficient internal validation. In essence, AI shifts part of the negligence analysis upstream to procurement, implementation, and governance.
“We may see more institutional claims — not just based on what a doctor did or didn’t do, but on whether the hospital chose the right tool, implemented it safely, and trained staff adequately,” Marin predicts.
Redefining responsibility
As AI takes on a greater role in clinical decision-making, the boundaries of medical malpractice law are being redrawn. This raises urgent questions about accountability, consent, and the standard of care. Core doctrines like negligence, causation, and standard of care will be tested in new contexts — where decisions are partly human, partly algorithmic, and often opaque.
The law, known for its thoughtful and deliberate evolution in the face of social change, is now feeling the pressure to keep up with unprecedented advancement. While it remains to be seen how the interplay between these factors will unfold, the courts are already being tasked with determining not only what went wrong, but how fault is to be distributed across a digital ecosystem.
“We’re still in the early stages of building that legal architecture,” Marin says. “But one thing is clear — as AI changes how care is delivered, it will also change how accountability is assigned. The medical malpractice bar must be ready to address this new reality.”