AI is creating more opportunities for insight — and more risk — in litigation

New systems help litigators see patterns faster but leave records that will fuel more scrutiny, writes Tim Wilbur

OPINION
By Tim Wilbur
Mar 30, 2026

Artificial intelligence is no longer a futuristic add‑on in Canadian litigation. It is becoming part of how litigators investigate facts, develop strategy and interact with courts. That creates new opportunities for insight – and equally significant avenues for regulatory scrutiny and litigation risk.

When I spoke recently with Montreal litigator Sebastian Pyzik at Woods LLP, he described AI as “a game changer” for how he works on complex files. Litigators are now using tools such as Copilot to reframe arguments, explore different ways of presenting a case, and pressure-test their reasoning before heading to court. 


But the same tools that give litigators more insight also leave digital fingerprints. Those fingerprints are going to be attractive targets in discovery and investigations. Pyzik expects parties to start asking for AI interaction logs where the subject matter is relevant to the dispute – for example, where a party has used an AI system to help decide on strategy or assess commercial options. Once those communications are on a third‑party platform, it will be difficult to apply a blanket rule to keep them out of discovery. That sets up a looming battle over privilege, confidentiality and how far courts will go in compelling access to those records.

The regulatory side is just as demanding. When I spoke with competition and class actions litigator Paul‑Erik Veel, he pointed out that algorithmic tools are now pervasive in pricing, marketing and customer analytics. In areas like drip pricing, we are already seeing private actions test where the Competition Act draws the line. But as AI makes it easier to collect, analyze and share massive volumes of data at speed, he expects to see more claims in what he calls the “classical” competition space: conspiracies, civil agreements between competitors and abuse of dominance.

In one sense, these practices are not new – airlines, for example, have used sophisticated pricing algorithms for decades. What has changed is scale and accessibility. Powerful systems that once required significant infrastructure can now be accessed by relatively small players, often through off‑the‑shelf tools. That means more market actors can experiment with aggressive data‑driven strategies, sometimes without pausing to consider the competition law implications.

At the same time, AI is changing what counts as persuasive evidence. In my conversation with Professor Maura Grossman, she emphasized that generative systems have made it easier than ever to produce realistic-looking images, video, and audio. For courts that rely heavily on documentary and digital evidence, that is deeply unsettling. Grossman is working on tools for the justice system that help judges, clerks, and self-represented litigants assess whether a piece of media is authentic.

Crucially, her focus is not just on binary answers, but on explainability. Rather than simply labelling a photo “real” or “fake,” the system is being designed to surface the indicia that suggest manipulation – shadows that do not match the lighting, inconsistencies in fabrics or metadata that conflicts with a party’s story. The goal is to show the reasoning behind a conclusion and the degree of certainty, so that courts can decide what weight to give the evidence.

Taken together, these developments point to a common theme. AI enhances our ability to see patterns, generate options and test hypotheses in litigation and in the marketplace. But it also ensures that more of our thinking and experimentation leaves a discoverable trail. For businesses, that trail may be examined through the lens of competition law, consumer protection or privacy. For litigators, it may become part of the record when courts are asked to rule on privilege or abuse of process. For judges, it raises the stakes on evidentiary gatekeeping in an environment where what appears genuine may not be.

None of this means that lawyers should shy away from AI. The interviews I have conducted suggest that those who engage thoughtfully with these tools are better positioned to serve their clients. The challenge is to pair that engagement with disciplined risk management: clear internal guidelines for AI use, careful vendor selection, and early advice on the regulatory regimes that may be triggered.

AI is expanding the horizon of what we can know about our cases and our markets. The question for the Canadian bar is whether we will harness those new insights while remaining candid with courts and regulators about how they are generated – and whether we are prepared for the litigation that will follow when we are not.
