AI on trial: 388 decisions track the rise of AI references across Canada’s courts - Provincial Courts

“AI On Trial” Overview: tracking the rise of AI across 77 Canadian courts and tribunals

By Kiernan Green
Mar 20, 2026

Review this report's federal count analysis, and its complete legal-industry overview, in Part 1.

Provincial Courts analysis: regional decisions lead national direction

After Canada’s federal courts, Ontario and Quebec courts saw the greatest number of AI-related cases per year. Their combined volume was dominant, averaging 47 percent of all AI-related decisions in Canada across the years examined. Ontario’s share of Canada’s AI-related cases was 23 percent in 2022, 27 percent in 2024, and 33 percent in the first half of 2025. Quebec’s rose from 15 percent in 2022 to 27 percent in 2024. Some British Columbia and Alberta courts also merit mention.

By 2024, provincial courts had eclipsed the federal courts’ earlier dominance of AI-related decisions. In 2022 and 2023, federal courts decided 42 percent and 46 percent of cases, respectively; Ontario, 23 percent and 19 percent; and Quebec, 15 percent and 19 percent. Then, in 2024 and the first half of 2025, federal courts decided just 22 percent and 16 percent, respectively; Ontario rose to 27 percent and 34 percent; and Quebec to 27 percent and 25 percent.

This is indicative of regional courts’ growing role in shaping the future of Canada’s AI governance in the absence of firm federal legislation. “Each level of government has a role to play,” said Scassa. Yet provinces “have been waiting for federal leadership … and are perhaps getting a bit impatient now.”

Ontario and Quebec Superior Courts: strongest AI appearance in the provinces

Ontario and Quebec’s Superior Courts are responsible for the most AI-related cases in their respective provinces. From 2021 to Q2 2025, Ontario’s Superior Court of Justice averaged eight AI-related cases per year, or 41 percent of all Ontario AI-related cases. In just the first half of 2025, it saw 12 AI-related cases – more than in any previous full year. Likewise, Quebec’s Superior Court averaged five cases per year, or 32 percent of all Quebec cases (50 percent in 2022).

The Ontario Superior Court’s most visible examples come from professional responsibility and criminal matters. In Ko v. Li, 2025 ONSC 2766, the Court was confronted with a factum containing “hallucinated” citations apparently generated by a large-language-model system. Justice Frederick Myers devoted significant reasoning to the duties of counsel when using generative AI, stressing that lawyers must verify every authority and not mislead the Court with fabricated precedents. The decision is one of the first in Ontario to tie ethical obligations directly to the use of AI in practice.

Criminal files have also brought AI into focus. In R. v. Ibrahim, 2025 ONSC 2773, the Court considered issues of identification under masks and police description. While ultimately decided on Charter grounds, the reasons pointedly compared human recognition to the speed and accuracy of “facial recognition software,” underscoring how the technology is shaping the vocabulary of credibility assessments even when not formally in use. Together, these cases illustrate that on Ontario’s docket, AI is both a practical problem in lawyering and a claimed force in the lived experience of litigants.

In February 2025, the Toronto District School Board brought forward a sweeping negligence and public nuisance claim against several major social media companies, including Meta, seeking over $1.6 billion in damages. The Board alleged that the defendants’ algorithmically driven platforms had been negligently designed to foster compulsive use among students, undermining learning environments and straining public education resources. TDSB argued that the defendants’ engagement-maximizing algorithms foreseeably disrupted its statutory mandate under the Education Act to promote student achievement and well-being, forcing it to divert substantial time and funds toward mitigation efforts.

Within the Ontario Superior Court’s growing body of AI-related litigation, the highly public case illustrates how public institutions are now pressing the courts to define the social duties of platforms whose machine-learning systems shape youth behaviour.



Picking up from the earlier Federal Court discussion of Doan v. Clearview, Quebec’s Superior Court took up the same controversy in Doan c. Clearview AI Inc., adopting the core allegations about Clearview’s worldwide facial-recognition service and web‑crawler “scraping” already outlined above. Procedurally, the Quebec case was introduced and initially suspended pending related applications before the Federal Court. At the authorization stage, Quebec’s Superior Court rejected Clearview’s declinatory exception and confirmed its territorial jurisdiction, authorizing a class limited to Quebec residents whose facial photographs were collected since Aug. 18, 2017. The Court grounded Quebec’s jurisdiction in its privacy framework and regulatory record, specifically the Commission d’accès à l’information du Québec’s (CAI) findings that Clearview collected and sold personal information about Quebec residents and used and disclosed it for commercial purposes in the province. Although the authorization records repeated allegations about the database’s scale and non‑consensual collection, the Quebec proceedings turned on the province’s legal interest in addressing local privacy harms under Quebec law.

Homsy c. Google charts a complementary line. At first instance in 2022, the Superior Court refused authorization on a record where the plaintiff alleged Google Photos “extracted, collected, captured, received, or otherwise obtained” residents’ “facial biometric identifiers,” and did so “without providing sufficient notice.” On Sept. 28, 2023, the Court of Appeal overturned that decision and remitted the matter to determine whether Google had obtained the plaintiff’s sufficiently informed consent for the use of facial biometric data, directing the Superior Court to dispose of the outstanding consent issues.

Across both superior courts, then, the 2023 to Q2 2025 period yielded a substantial body of case law: Ontario with at least a dozen AI-related cases in early 2025 alone, and Quebec averaging five per year. The jurisprudence is beginning to consolidate two themes: professional and evidentiary duties around generative AI and facial recognition, and structural questions of fairness where automated decision-making intersects with statutory rights.

One may anticipate a high volume of AI-related cases in the top courts of Canada’s two most litigious provinces, considering the lack of provincial legislation on the matter, said Scassa. “There have been a number of privacy commissioner decisions in those jurisdictions with decision-making power, that have touched on artificial intelligence technologies,” including facial recognition and automated decision-making, she said. “So, there’s been a bit of activity at the provincial level around AI, some of which has made it up to superior courts in those provinces.”



Information and Privacy Commissioner of Ontario: police and algorithmic accountability 

Below its superior court, Ontario’s Information and Privacy Commissioner stands out for deciding a high provincial share of AI-related cases over the years: 27 percent of Ontario’s AI-related cases in 2022, and 17 percent in 2024.

These include a series of FOI appeals focused on police deployment, procurement, and record‑keeping around facial recognition (often Clearview AI) across several services. Recent matters include Peel (Order MO‑4641) concerning the police’s use of Clearview AI facial recognition technology, York (Order MO‑4599) addressing “procurement and use” of facial recognition technology, and a Toronto Police Service appeal for records relating to its use of facial recognition technology (Appeal MA20-00299). In Niagara, the Commissioner scrutinized both search adequacy and exemptions, after an initial access decision on “documents … pertaining to the use of facial recognition technology” led to dispute over a section 6(1)(b) (closed meeting) claim. The inquiry then prompted a supplementary decision and partial disclosure of 104 pages of responsive records, with 60 pages released in full and 44 pages in part.

Two procedural themes recur across these files. First, the Commissioner tests claimed exemptions rigorously (York’s Order MO‑4599 expressly flags a three‑part test) while ensuring that operational sensitivities are weighed against the public’s right of access. Second, the Commissioner presses institutions on search reasonableness and the scope of responsive records; in Niagara, the adjudicator addressed the requester’s contention that “additional records … should exist,” leading to further searches and a revised disclosure package.

These files reflect a province‑wide operational interest among police services boards in face‑matching tools and their procurement trails. In Toronto’s appeal, the Commissioner’s summary highlighted practical access issues (“fee estimate” and “reformulated” requests) and an effort to calibrate transparency to cost and administrative burden, while preserving meaningful scrutiny of facial recognition deployments. Together, these decisions depict an adjudicative middle path: the ONIPC requires disclosure of substantive records about facial recognition use and procurement where warranted, insists on adequate searches and defensible exemptions, and recognizes Clearview AI by name in defining the scope of what the public is entitled to know.

Ontario Social Benefits Tribunal: disability, employment, and AI

The Ontario Social Benefits Tribunal, after deciding zero AI-related cases in 2021 and 2022, decided 16 percent of Ontario’s AI-related cases in 2023 and 8 percent in the first half of 2025.

In most of these, the references to artificial intelligence emerged indirectly in applicants’ work histories or self-reports rather than as the subject of the appeal itself. For example, in 2410-06324 (Re), 2025 ONSBT 941, the appellant explained that his former research analyst position had been eliminated and replaced by AI, framing his current unemployment as part of the factual background to his disability claim. In 2401-00180 (Re), 2024 ONSBT 3855, the appellant testified that he had worked briefly for an AI division of a telecommunications company, verifying the accuracy of machine-generated answers, and later suggested that with the help of AI tools, he might manage remote data-entry work despite his impairments.

These kinds of references show how AI is becoming part of the evidentiary fabric of social-benefits litigation, and part of benefit applicants’ lived economic reality. The Tribunal consistently treated such claims with caution, acknowledging the role of AI in shaping employment opportunities, while ultimately grounding its decisions in statutory disability tests. The result is a small but revealing sample of cases where AI appears as a contextual factor in assessing individual capacity and workplace exclusion.

Court of Quebec and Administrative Tribunal of Quebec: balancing privacy rights

The Court of Quebec recorded the most AI-related cases in the province after its Superior Court, deciding an average of 21 percent of Quebec’s AI-related cases across the years examined.

In Clearview AI Inc. c. Commission d’accès à l’information du Québec, the Court addressed an “application for confidentiality and sealing order” in litigation involving the CAI’s decision about Clearview’s facial recognition practices. The reasons describe Clearview’s service as a search engine “using facial recognition technology” built from an “image bank” of public web pages, and note CAI findings that Clearview acted “by not obtaining the consent of the individuals concerned.”

Other files situate AI in civil and criminal contexts beyond biometrics. In Santisteban Mondonedo c. AI‑Genetika (BioTwin) inc., the Court noted that the company’s health‑profiling “collects” biosamples and that “artificial intelligence is used in the analysis and processing of data,” including the creation of “digital twins using artificial intelligence.” In R. c. Larouche, sentencing reasons examined the production of child‑exploitation “deepfake” media and how such tools work, describing training steps where “the artificial intelligence has learned” facial features and warning of the public‑safety risks inherent in “deepfake technology.” And in a quasi‑criminal revenue case, the Court remarked on the broader digital‑risk context, noting that “artificial intelligence now makes available to novices” capabilities relevant to cyberattacks and data manipulation, reinforcing the duty to ensure reliability of electronic records throughout their lifecycle.

The Administrative Tribunal of Quebec stands out for having consistently decided a number of AI-related cases each year, averaging 13 percent of Quebec’s cases from 2021 to Q2 2025.

The references are most often found in disability and employment-related matters, where claimants describe the growing role of “intelligence artificielle” or “apprentissage automatique” in reshaping their work environments, similar to the cases heard by Ontario’s Social Benefits Tribunal. In several decisions, applicants argued that tasks they once performed had been displaced by automated decision-making or machine-learning tools, while in others they pointed to emerging reliance on facial recognition systems in government or workplace settings. The Tribunal consistently acknowledged these technologies as part of the factual background, but anchored its outcomes in the statutory tests before it, particularly around eligibility for benefits or workplace accommodation.

British Columbia Supreme Court, Civil Resolution Tribunal of British Columbia, and Court of King’s Bench of Alberta: divides on data scraping and free expression

Beyond the predominant Ontario and Quebec, British Columbia and Alberta deserve mention for a consistent flow of AI cases through a handful of courts.

In B.C., the British Columbia Supreme Court leads with a remarkable average of 42 percent of AI-related cases in the province, between three and five annually. Recent AI-adjacent disputes at the BC Supreme Court have centred on public‑space surveillance and biometrics. In a Charter challenge to Vancouver police’s use of a public‑safety trailer with cameras (Papenbrock-Ryan v. Vancouver (City), 2024 BCSC 2288 (CanLII)), the plaintiff argued the system could be reconfigured to “use facial recognition or artificial intelligence” to monitor crowds. The Court rejected that concern as speculative on the record, emphasizing that s.8 reasonableness turns on what the “existing technology” actually generated, not its theoretical capabilities. On the facts, police “used basic video technology,” conducted a limited recording of a public street, and then “deleted all of the information,” and the s.8 claim was dismissed. A separate publication‑ban proceeding underscored how modern tools raise operational risks for undercover officers, noting the “age of social media, search engines, and facial recognition software” in granting protection over identity.

A companion stream involved consumer and platform technologies. In a proposed class action over Google Photos (Situmorang v. Google LLC, 2022 BCSC 2052), the pleadings discussed “pattern recognition” and “face detection,” the “face grouping feature,” and references to “biometric data.” The Court ultimately held the claim failed to disclose a cause of action and dismissed certification. In family litigation, the Court also dealt with a party’s assertion that social media “facial recognition” and friend‑matching features explained an online posting, treating those functions as part of the factual context rather than determinative of outcome. Overall, BCSC reasons show a pragmatic approach: engage with AI‑related capabilities where they are in evidence, while grounding outcomes in existing technology, record facts, and established legal standards.

The Civil Resolution Tribunal of British Columbia, responsible for resolving small claims, strata disputes, motor vehicle injury claims, and certain housing and civil matters, contributed to the province’s 2025 rise with four AI-related cases in Q2 2025 – 35 percent of the province’s total for the year thus far, and more than all of the jurisdiction’s other quarters combined.

Finally, the Court of King’s Bench of Alberta (known as the Court of Queen’s Bench of Alberta until 2022), which is responsible for serious civil and criminal matters, as well as appeals from provincial courts and tribunals, has seen a sustained number of AI-related cases each quarter.

Alberta’s most consequential AI case in this period is Clearview AI Inc. v. Alberta (Information and Privacy Commissioner), 2025 ABKB 287. The Court described Clearview’s product as “facial recognition software and database,” built from “billions of images taken from the internet,” including material from social media belonging to Albertans. The Commissioner’s Order required Clearview to stop offering its service in Alberta, stop collecting/using/disclosing images and biometric templates, and delete those images/arrays. On jurisdiction, the Court found a “real and substantial connection” because Clearview “marketed its services to Alberta organizations” and provided trials, amounting to “carrying on business in the Province.” Interpreting PIPA and its regulation, the Court upheld the Commissioner’s view that the “publicly available” exception does not include social media (i.e., it “did not extend to social media”). Constitutionally, the Court held the scheme limits Clearview’s freedom of expression, and that parts of it are overbroad, granting a declaration and striking the words “including, but not limited to, a magazine, book or newspaper” from s. 7(e) of the regulation. However, it did not overturn the Commissioner’s Order, and Clearview’s application to quash was dismissed.

AI also surfaced at the margins of other ABKB matters. In R v. Nour, 2024 ABKB 523, the defence argued police could have used “facial recognition technology” when criticizing identification steps; the Court resolved identification on video and other evidence without relying on such tools. In Teng v. Alberta (Minister of Seniors Community and Social Services), 2024 ABKB 747, one of the applicant’s fairness grounds alleged the tribunal accepted misinformation about “facial recognition through WeChat,” but the judicial review was ultimately dismissed. Overall, the Court’s AI-related rulings emphasize statutory text and purpose, constitutional proportionality, and the evidentiary record (i.e., “text, context, and purpose”) rather than the mere availability of advanced technologies.

Diverging provincial court precedents around Clearview AI

Despite seeing comparatively few AI-related cases overall, the superior courts of both British Columbia and Alberta have recently issued landmark decisions regarding Clearview AI and the use of facial recognition technology and data scraping in the private sector.

In Clearview AI Inc. v. Information and Privacy Commissioner for British Columbia (2024), the BC Supreme Court ruled that personal data scraped from social media remains protected under provincial privacy law, requiring user consent even when publicly posted. Out-of-province companies can be subject to BC’s privacy regime when they have a “real and substantial connection” to residents. The ruling confirmed that publicly posted online content is not automatically “publicly available” for unrestricted use under the province’s law.

By contrast, in Clearview AI Inc. v. Alberta (Information and Privacy Commissioner) (2025), the Alberta Court of King’s Bench agreed that Clearview’s activities fell under Alberta’s Personal Information Protection Act, but diverged sharply by finding that the law’s narrow “publicly available” exception unconstitutionally restricted freedom of expression. The Court struck down part of the regulation, effectively opening the door to lawful data scraping and AI model training on publicly accessible online information without prior consent, provided the purpose is reasonable.

Some parallels exist in Ontario and Quebec. In Quebec, recent regulatory decisions by the Commission d’accès à l’information (CAI) have effectively set national precedents for the governance of facial recognition and biometric data in the private sector. The CAI has blocked projects such as Metro Inc.’s proposed facial recognition pilot, ruling that identifying individuals suspected of theft or fraud without their express consent to the collection and use of their biometric data, specifically their facial images and measurements, would violate privacy law. It has also interpreted Section 44 of Quebec’s Act to Establish a Legal Framework for Information Technology to require explicit consent before using biometric characteristics for identity verification. These rulings collectively reinforce Quebec’s uniquely stringent privacy regime. They establish that any private use of facial recognition or biometric surveillance must demonstrate necessity, proportionality, and full compliance with consent obligations, effectively setting a high legal threshold for AI-related data collection in the province.

In Ontario, there have been no comparable judicial or regulatory decisions on facial recognition or data scraping to date. The province’s privacy landscape remains largely governed by federal frameworks, such as PIPEDA, and policy guidance rather than case law. While Ontario courts have addressed broader access-to-information and confidentiality issues, such as in Ontario (Attorney General) v. Ontario (Information and Privacy Commissioner) (2024), no landmark ruling has yet tested how AI systems or data scraping practices intersect with privacy rights. This leaves Ontario trailing provinces such as Quebec, Alberta, and British Columbia in establishing clear judicial or regulatory limits on AI-driven data collection.

Private AI product development and training using unguarded, easily accessible resources – such as Clearview AI’s scraping of public photos from the internet, and OpenAI’s use of copyright-protected materials – create “really big questions” that should be answered by provincial privacy and federal copyright law, said Scassa. Largely, those questions have not been answered through legislation. “Both have been subject of debate and reform, and the legislators have been ducking... It’s the courts that are making the law around this based on existing legislation,” she said, notwithstanding Alberta’s partial striking of its privacy regulation.

“These cases are really interesting, showing the tension between some of the big public policy questions that underlie these issues and how, while the legislators are not dealing with these, the courts are being asked to deal with them.”

Provincial Courts analysis: closing remarks

Ontario and Quebec have emerged as the principal provincial theatres for AI-related litigation, with their superior courts setting the pace on professional ethics, evidentiary use of generative tools and facial recognition, while commissioners and tribunals fill in the operational detail through access-to-information and benefits decisions. British Columbia and Alberta add weight at the margins – most notably through divergent Clearview rulings – producing a patchwork in which privacy oversight, criminal procedure, and administrative law are all adapting to technologies that are now part of the factual fabric of disputes as much as their legal substance.

Without coordination, different provincial private sector AI regulations risk becoming an unintended domestic trade barrier, said Scassa. That diagnosis maps onto the case law: Quebec’s stringent privacy posture, Alberta’s constitutional trimming, British Columbia’s insistence on consent for scraped data, and Ontario’s emphasis on transparency through information-commissioner orders, together illustrate active but uneven governance in the absence of a settled federal framework.

The practical message for institutions and litigants is clear. Courts and commissioners will continue to decide concrete controversies on statutory text, evidence and proportionality, but the efficiency and predictability of those outcomes will increasingly depend on harmonized standards. Unless and until federal leadership delivers that alignment, the provinces are likely to keep shaping the law case by case. This approach is workable, but sub-optimal for a national economy in which data, services, and rights routinely cross borders.

Concluding remarks

Canada’s legal institutions are grappling with the promises and perils of AI while remaining faithful to their traditional disciplines of text, evidence, and fairness. From the Clearview AI litigation to the Ontario Superior Court’s response to fabricated citations, the courts have shown an instinct to absorb new technologies within the existing logic of law rather than build anew. Yet, the task ahead extends beyond adjudication.

“Hundreds of judges across Canada … are hungry to learn,” said D’Agostino. The next phase of adaptation may depend as much on education and professional competence as on statutory reform.

For practitioners and judges alike, education is becoming the first line of defence against both error and erosion of public confidence. “Continuing legal education is more important than ever,” said D’Agostino, recalling a national seminar in Halifax where jurists from all levels of court gathered to discuss AI’s implications. “Education is the first defence,” particularly in lower-level courts where early-career professionals confront AI-driven content daily. Salyzyn echoed that view, describing how “a number of courts are still probably struggling with some of the basics in terms of their IT infrastructure and having systems in place to be digital” and need to “get those fundamentals in place” before turning to sophisticated AI solutions.

Recent years’ judicial workload tells a unique story of pressure and adaptation. “The courts, which have always been already overwhelmed with a lot of volume, [are] getting even more overwhelmed and flooded with just a lot of AI-generated content,” said Dahan. Whether in criminal filings, administrative records, or briefs generated by large-language models, the influx of synthetic text threatens to slow proceedings beyond notable backlogs experienced already.

AI’s potential to clear backlogs depends on disciplined implementation: tools that verify rather than invent, and systems designed around human decision-making. The administrative law principle in Vavilov offers an instructive parallel: when governments establish clear directives, such as the federal “Directive on Automated Decision-Making,” courts may justifiably focus on the reasons given rather than the hidden code beneath them, said Scassa. But without such frameworks, “you’ve got probably a lot of legitimate and unanswered questions about how those reasons ended up on the page.” At present, Canada’s courts are being asked to review algorithmic decisions without an algorithmic statute to guide them.

Against this backdrop, lawyers themselves are looking for structure. “Even if a lawyer is not using AI in their practice,” said Salyzyn, “the technology is going to come to them, whether it's a client using it or as evidence in cases.” This inevitability demands a “baseline level of literacy” across the profession. Many counsel “may not know the full extent of limitations of that technology,” leaving them unable to assess whether “due process and procedural fairness were followed.” Law societies have begun to respond, from continuing professional development courses to the Ontario Bar Association’s AI Academy, which Salyzyn said is a “hands-on tool that lawyers can use to learn.” Still, she added, “lawyers are craving guidance,” which “will be helpful now that we have some of the basics out.”

If the Clearview decisions demonstrate anything, it is that Canada’s legal norms still privilege the human vantage point of consent, authorship, fairness, and accountability. These are the same values that should guide the legal profession’s response to AI, said D’Agostino. In her view, the danger lies in overlooking the individuals “actually creating and making and labouring” – the authors, clients, and citizens whose rights risk being sidelined as automation scales. “It can’t be a technocentric approach. It has to be a human-centered approach.”
