An AI safety solution network is designing a framework to spot AI-generated evidence in litigation

Professor Maura Grossman speaks to how courts should evaluate manipulated videos, audio and documents

By Tim Wilbur
Mar 17, 2026

In Canadian courtrooms, audio, video and documents are becoming increasingly convincing even as their reliability erodes, and Professor Maura Grossman warns that judges are “in a battle unarmed” as AI-generated media becomes harder to spot.

Grossman does not issue that warning as a distant theorist; it draws on years of work at the junction of civil procedure, evidence and machine learning. She started in 2006 as a practising lawyer facing “millions and millions of emails” in US discovery and asking how to find the few that mattered, she says. Early on, she tested supervised machine learning and showed that these tools could locate relevant documents “more effectively and more efficiently” than lawyers manually reading every message, a result that convinced her software could sometimes surface critical records faster than humans working alone, she says.
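That approach is now commonly called technology-assisted review. The article does not name the specific software Grossman tested, so the snippet below is only a minimal sketch of the underlying idea, assuming a TF-IDF text classifier from scikit-learn and invented sample emails: train on a handful of lawyer-coded messages, then rank the unreviewed pile by predicted relevance.

```python
# Minimal sketch of technology-assisted review (TAR): train a classifier
# on a small set of lawyer-labelled emails, then rank the unreviewed pile
# by predicted relevance. Illustrative only; real e-discovery platforms
# are far more elaborate, and the sample data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A handful of emails a lawyer has already coded (1 = relevant).
labelled = [
    ("Re: Q3 pricing agreement with Acme, term sheet attached", 1),
    ("Lunch on Friday? The usual place at noon.", 0),
    ("Forwarding the draft indemnity clause for the Acme deal", 1),
    ("Office printer is down again, IT has been notified", 0),
]
unreviewed = [
    "Acme term sheet: revised pricing schedule attached",
    "Reminder: fire drill at 3pm today",
]

texts, labels = zip(*labelled)
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

# Score the unreviewed emails and surface the likeliest-relevant first.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for score, text in sorted(zip(scores, unreviewed), reverse=True):
    print(f"{score:.2f}  {text}")
```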

Now a research professor in the School of Computer Science at the University of Waterloo who also teaches at Osgoode Hall Law School, Grossman has turned that background toward a more basic question: when evidence itself can be generated or altered by machines, how should courts decide what to trust? She argues that judges must stop treating all AI-related material as a single undifferentiated problem and begin by dividing it into what she and former US judge Paul Grimm call “acknowledged” and “unacknowledged” AI-generated evidence. In an acknowledged scenario, everyone agrees that a system was used, and the dispute is about “was that tool valid, reliable, biased, things like that,” she says. The unacknowledged category is more unsettling: a party insists a video is genuine while the other side counters “that’s not me. I was never in that place. I never had that conversation,” she says.

That split drives very different legal questions. For acknowledged AI, existing rules for scientific and technical evidence still provide judges with a framework; courts can ask whether a system has been tested, what its error rates are, and whether it shows bias, applying standards like Daubert in the US or their Canadian equivalents, she says. With unacknowledged deepfakes, by contrast, the core issue becomes authenticity in a world where images and audio of a party can be cloned by “anybody” with a computer and access to online samples of their voice or face, she says.

Grossman says this is not a hypothetical problem. She and Grimm have tracked cases in which enhanced video, AI-generated avatars, and virtual reconstructions have been offered in US courts, forcing judges to confront questions about how these tools work, whether they meet scientific standards, and how much psychological impact synthetic content may have on findings and sentences, she says. Closer to home, she has heard of family disputes in which one parent offers a recording of a spouse “screaming at the kids” in a custody fight, and of criminal matters where body-camera or surveillance footage feeds facial-recognition systems, even if few Canadian written decisions have yet grappled directly with the technology, she says.

For Grossman, the most serious gap lies with judges and litigants who must now decide whether images, recordings and documents are authentic without access to technical expertise. “Most lay people who are not experts really can’t tell them apart,” she says of real and fake images, and the solution cannot be to require an expert every time someone wants to tender a video in family court or small claims. Instead, she argues, courts are facing highly sophisticated, low-cost manipulation tools while still relying on human intuition built for an analogue era.

That is the context for a new AI safety solution network at the Canadian Institute for Advanced Research, funded through the Canadian AI Safety Institute’s research program and co-directed by Grossman and University of Toronto professor Ebrahim Bagheri. The project aims to build a free, open-source framework that can flag potentially problematic AI-generated evidence for everyone from self-represented litigants to judicial officers and clerks, with a deliberate emphasis on transparency and accessibility. “We are trying to see if we can come up with some kind of tool” that might not stop the most sophisticated criminals but could help a family-court judge decide whether “a simple recording” has been fabricated or altered, she says.

The team is starting with the unglamorous work of assembling a training dataset that pairs real documents and images with closely matched fake versions, then using that data to train models that can distinguish the two across text, images and, eventually, audio, she says. They plan to run hackathons and contests inviting outsiders to “break it” by slipping forged material through the filters. They also want the system to provide confidence scores and concrete reasons, highlighting anomalies in pixels, shadows, clothing textures or metadata such as time stamps and device information, so that users can see “what has led to the decision making and how certain it is,” she says.
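The project’s code is not public, so the sketch below is purely illustrative: a toy logistic regression over invented feature names and data that shows the reporting pattern Grossman describes, a confidence score plus the concrete features that drove the call.

```python
# Hypothetical sketch of an evidence-screening detector that reports a
# confidence score plus the features behind it. The CIFAR project's actual
# design isn't public; the feature names and data here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["metadata_timestamp_ok", "device_tag_present",
            "shadow_consistency", "pixel_noise_uniformity"]

# Paired training data: rows of feature scores in [0, 1];
# label 1 = authentic, 0 = AI-generated or altered.
X_train = np.array([
    [0.90, 1.0, 0.80, 0.90],   # genuine photo
    [0.95, 1.0, 0.90, 0.85],   # genuine photo
    [0.20, 0.0, 0.30, 0.10],   # closely matched fake
    [0.10, 0.0, 0.40, 0.20],   # closely matched fake
])
y_train = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X_train, y_train)
feature_means = X_train.mean(axis=0)

def screen(features: np.ndarray) -> None:
    """Print an authenticity confidence and the most influential features."""
    p_real = model.predict_proba(features.reshape(1, -1))[0, 1]
    # Per-feature pull on the decision: weight times deviation from average.
    contrib = model.coef_[0] * (features - feature_means)
    top = sorted(zip(FEATURES, contrib), key=lambda t: abs(t[1]), reverse=True)
    print(f"confidence authentic: {p_real:.0%}")
    for name, c in top[:2]:
        print(f"  {name}: {'supports' if c > 0 else 'undercuts'} authenticity")

screen(np.array([0.15, 0.0, 0.35, 0.20]))  # a suspicious file
```

In practice the team’s models would operate on raw text, pixels and audio rather than hand-picked scores, but the output shape is the point: a stated confidence level with visible reasons, rather than a bare verdict.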

Alongside that technical build, Grossman is wary of oversold quick fixes. Watermarking and content-provenance standards can help in some contexts, but she notes that watermarks can be removed from fake content or added to genuine files to confuse viewers, and that any universal standard embedding tamper-evident markers in every phone and camera will take years to roll out.

In the meantime, she argues, legal professionals must change their mindset about what they see and hear. “We’re moving into a world where you really can’t assume. Most of us assume if we see it on audio video, it’s real. And you can’t at this point,” she says. Instead of blind trust, she urges lawyers and judges to cultivate what she calls a healthy “skepticism” that falls short of full-blown cynicism but still pushes them to verify media before accepting it as proof, and to scrutinize even material provided by their own clients, since “today, anybody can make up receipts and all kinds of things” that might once have looked unquestionably authentic, she says.

This article is based on an episode of CL Talk. The episode can be found on our CL Talk podcast homepage, which includes links to follow CL Talk on all the major podcast providers.

Related stories

AI has fundamentally changed my litigation practice

Deepfakes: GenAI making phoney and real evidence harder to discern, says Maura Grossman