AI on trial: 388 decisions track the rise of AI references across Canada’s courts - Federal Courts


By Kiernan Green
Mar 20, 2026

Review this report's provincial court analysis, and overall concluding remarks, in Part 2.

Executive summary: 388 decisions define AI in Canadian law

Between early 2021 and mid-2025, 388 Canadian legal decisions explicitly referenced AI, spanning contexts as diverse as facial recognition, social benefits adjudication, professional discipline, intellectual property, and immigration processing. The data show that federal institutions have led the way, with the Immigration and Refugee Board (IRB) and the Federal Court together accounting for nearly half of all national AI-related cases in 2022 and 2023. At the provincial level, Ontario and Quebec are dominant. British Columbia and Alberta, meanwhile, have issued contrasting rulings against Clearview AI that have set the tone for Canada’s evolving approach to automated data collection, privacy, and consent.

Clearview AI Inc., the facial-recognition company that scraped billions of Canadians’ online images, is a recurring defendant in Canadian proceedings. Its withdrawal from Canada in 2020 established a reference point for courts and commissioners now wrestling with biometric identification and the limits of “publicly available” information. The Federal Court’s 2024 decision in Doan v. Clearview AI Inc. marked a turning point: it refused class-action certification while treating “artificial intelligence” as a legally relevant category that raises distinct privacy and copyright questions. Across the IRB’s docket, meanwhile, facial recognition surfaced in immigration identity disputes, where panels insisted on conventional proof and documentation rather than inference from alleged algorithmic matches.

AI is increasingly embedded in everyday administrative decision-making. Cases before social benefits and workers’ compensation tribunals show that automation and predictive analytics are already influencing how claims are processed, assessed, and reviewed. Ontario and Quebec, in particular, illustrate where algorithmic flagging or digital-record triage intersect with human discretion. Cases also explored professional accountability, including disciplinary responses to fabricated citations generated by large-language models; AI’s legal impact extends well beyond privacy and IP law into ethics, competence, and access to justice.

This report also positions the legal development of AI within Canada’s historic pattern of technological governance. Canada has long adopted an incremental approach to legislation. From early communications law to modern privacy reform, Parliament and the provinces have tended to extend familiar concepts such as consent, authorship, and reasonableness to new technologies. The proposed Artificial Intelligence and Data Act continued that lineage at unprecedented speed. However, the jurisprudence shows that, for now, individual courts and tribunals are primarily filling Canada’s legislative gap.

As Canada moves toward comprehensive regulation – laggardly and worryingly so, according to leading legal experts – the insights from these 388 decisions across immigration, privacy, benefits, and professional conduct courts offer an empirical foundation for future lawmaking, and for the education of a profession that must now learn to reason with the machine.

Experts consulted and quoted in this report include:

  • Dr. Teresa Scassa – Canada Research Chair in Information Law and Policy, Full Professor at the University of Ottawa (Common Law Section); expert in privacy, data governance, and AI regulation.
  • Dr. Giuseppina (Pina) D’Agostino – Associate Professor at Osgoode Hall Law School, York University; Founder and Director of IP Osgoode and Co-Director of the Centre for Artificial Intelligence & Society; recognized authority in copyright and innovation law.
  • Professor Amy Salyzyn – Associate Professor in the Faculty of Law, University of Ottawa; researcher in legal ethics, technology and the profession, and administrative fairness in AI-assisted decision-making.
  • Professor Samuel Dahan – Associate Professor at Queen’s University Faculty of Law; Founder and Director of OpenJustice and the Conflict Analytics Lab; specialist in legal technology, algorithmic dispute resolution, and applied AI systems.

Overview: tracking the rise of AI across 77 Canadian courts and tribunals

The rise in cases: a look at the data

The purpose of this report is to educate legal and policy professionals on how Canadian institutions are encountering, interpreting, and gradually systematizing artificial intelligence technology (AI) within established frameworks of administrative law, privacy, intellectual property, and professional regulation. The report is intended as both a reference and analytical guide for policymakers, practitioners, and scholars navigating the emerging body of Canadian AI jurisprudence.

This report’s “AI-related” decisions/cases are those whose document text, published to the Canadian Legal Information Institute (CanLII) public database, includes the terms “artificial intelligence,” “machine learning,” “automated decision-making,” “facial recognition,” or “autonomous vehicle” (also in French). From 2021 to Q2 2025 (the full survey period), 388 AI-related decisions occurred across 77 courts and tribunals spanning Canada’s federal, provincial, and territorial jurisdictions.
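The keyword filter described above can be sketched in a few lines. This is an illustrative assumption, not the report’s actual survey tooling, and the French-language equivalents used in the survey are omitted for brevity:

```python
# The five English survey terms listed in the report's methodology.
AI_TERMS = [
    "artificial intelligence",
    "machine learning",
    "automated decision-making",
    "facial recognition",
    "autonomous vehicle",
]

def is_ai_related(decision_text: str) -> bool:
    """Flag a decision as AI-related if its text mentions any survey term."""
    text = decision_text.lower()
    return any(term in text for term in AI_TERMS)

print(is_ai_related("The Minister denied that facial recognition was used."))
print(is_ai_related("The appeal turned on conventional photo comparison."))
```

A real survey would also need to handle hyphenation variants (e.g., “automated decision making”) and French equivalents before counting matches.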

As the past four years of data make evident, AI and AI-related technologies are increasingly appearing as evidence and legal considerations across Canada’s courts.

Between 2021 and 2024, the number of AI-related Canadian court cases more than doubled, from 47 in 2021 to 107 in 2024 – an average of 20 additional cases each year. The largest annual increase, 30 additional cases, occurred between 2021 and 2022. Notably, in November 2022, ChatGPT became publicly available as the first widely popular large language model (LLM) platform.

In only the first half of 2025, 66 AI references were made in Canadian court decisions – well over half (61 percent) of all cases recorded throughout 2024. If the pace of 2025’s first six months holds for the full year, 2025’s AI-involved decisions may total at least 132, another 25 cases over 2024’s count.
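The projection above is simple arithmetic on the report’s own figures, shown here as a minimal worked example (the 61 percent share is truncated rather than rounded, matching the text):

```python
# Figures taken from the report: 107 AI-related decisions in 2024,
# 66 in the first half of 2025.
cases_2024 = 107
cases_h1_2025 = 66

share_of_2024 = cases_h1_2025 * 100 // cases_2024   # percent of 2024's total, truncated
projected_2025 = cases_h1_2025 * 2                  # annualized, assuming the pace holds
increase = projected_2025 - cases_2024              # additional cases over 2024

print(share_of_2024, projected_2025, increase)  # 61 132 25
```

The doubling assumption is, of course, only a straight-line extrapolation; a seasonal slowdown or surge in filings would change the year-end total.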

Most AI-related cases surveyed in this report occurred in federal, Ontario, or Quebec courts, as detailed further down. From 2021 through 2024, these three jurisdictions together accounted for an average of 79 percent of all AI-related cases reported across Canada. In 2022 and 2023, federal courts saw near-majority representation of AI cases, with 33 (42 percent) and 42 (46 percent) cases, respectively. In 2024, federal court cases dropped sharply to only 24 (22 percent), while Ontario’s and Quebec’s shares of AI-related cases rose to 27 percent each.

Notably, in the first half of 2025, British Columbia stood out with 16 percent of all cases across Canada, well above its 10 percent average share over the previous four years. In the same period, Ontario recorded a remarkable 34 percent of AI-related cases, Quebec 25 percent, and federal courts 16 percent.

Legislation in the private and public sectors, and legal professionals’ outlook

The growth of Canada’s AI-related legal disputes – especially in 2022 and 2023, following ChatGPT’s debut – can be explained by Canadian jurisdictions’ established pattern of reacting to and adjudicating new technology.

“Canada is often cautious in responding [to new technology]. It does often take a wait-and-see approach,” said Dr. Teresa Scassa, Canada Research Chair in Information Law and Policy at the University of Ottawa. Canada tends to follow the reactions of others, namely the United States and the European Union. Considering Canada’s relatively small economic size, “we want whatever we [legislate] to be compatible with what’s happening in other jurisdictions, where we have trading relationships and where our businesses are involved.”

From the printing press to the CD-ROM, legal professions involving intellectual property have long had to adapt to new technologies. With AI technology – considering the volume of work accessible and vulnerable to infringement, virtually everything that has ever been produced – “the scale is supercharged,” said Dr. Giuseppina (Pina) D’Agostino, associate professor at Osgoode Hall Law School, York University; founder and director of IP Osgoode; and co-director of York’s Centre for Artificial Intelligence & Society.

Each time new technology is commercialized – and AI has been commercialized rapidly – courts become extremely busy as litigants assert their rights under ambiguity. “The courts are forced into making important decisions that would otherwise have been already anticipated or should have been anticipated by Parliament,” said D’Agostino.

D’Agostino long predicted that major media outlets would sue an AI company for using copyrighted journalism to train large language models, and this took place in December 2023 when The New York Times filed the first prominent lawsuit against OpenAI and Microsoft, alleging mass copyright infringement through unauthorized data scraping. “Having seen this movie before, having studied [copyright policy] ... It’s very predictive. You could see that there was this tidal wave coming of litigation.” Following the New York Times suit, there was cause for concern that Canada would fall behind in legislating new AI technologies, she explained. Canadian news publishers filed a similar copyright lawsuit against OpenAI in Ontario in November 2024.

In the 2000s, legal norms for these cases were set by others like New York Times Co. v. Tasini in the US (2001) or Robertson v. Thomson Corp. in Canada (2006). Then, freelance writers’ works were republished in electronic databases and digital archives without their consent, prompting courts to affirm that authors retain rights over secondary uses of their work beyond their original print publication.

Technological inputs and outputs were a focus of these cases, inserting computing language into legal vernacular and distracting from the ethical questions of infringement, party agreements, and bargains. “This is what we’re seeing now, supercharged with generative AI... But the stakes are that much higher,” said D’Agostino. “It’s veering into an unfortunate direction, because I don’t think we’re going to solve these problems, and it’s not going to leave the law in a good place.”

Despite its historic wait-and-see approach of learning from new-technology legislation in other countries, Canada signaled decisive AI legislation in June 2022 with the short-lived introduction of the Artificial Intelligence and Data Act (AIDA). The AIDA sought to scale private-sector AI developers’ legal obligations with their AI systems’ level of risk. “High-impact” AI systems – a category yet to be defined – would have carried the strongest obligations to monitor, keep records, and provide transparency during development.

Although the AIDA signaled an anomalously early move by Canada into AI legislation, the Act died when Parliament prorogued in January 2025. “The reality is, we’re going to end up waiting-and-seeing perhaps for quite some time” for firm AI-related federal legislation, notwithstanding international legislation, said Scassa.

Turning from the private sector towards Canada’s public sector, the Treasury Board of Canada Secretariat’s Directive on Automated Decision-Making (DADM) remains the most noteworthy government-wide AI governance policy. As of April 2020, Canada’s Digital Government Strategy, which implemented the DADM, requires federal agencies and departments to use automated decision systems responsibly and file publicly available algorithmic impact assessments regarding rights, safety, and economic interests.

In contrast to the Canadian government, law societies’ guidance on new technology-related questions has typically been swift and thorough across Canada’s legal industry, according to Amy Salyzyn, associate professor at the University of Ottawa’s Faculty of Law and researcher in legal ethics, technology and the profession, and administrative fairness in AI-assisted decision-making. In the early 2000s, when Canadian lawyers were confronted with the confidentiality risks of cloud storage, challenges moving digital hardware internationally, and new forms of evidence on social media, certain provincial law societies, namely those in Ontario and British Columbia, were quick to issue instructive professional guidance.

In more recent years with AI platforms, “we saw quite quickly – and almost in an unprecedented fashion in terms of regulators reacting almost universally – Canadian law societies, almost one by one, coming out with best practice guidance for lawyers about the risks and benefits of this technology,” said Salyzyn.

For example, virtually all provincial law societies have mandated basic technological competency within their codes of conduct: lawyers must understand the technology relevant to their practice, including AI. Similarly, in 2021, the Canadian Judicial Council updated its Ethical Principles for Judges to recognize judges’ requirement for technological competence. “Engaging with technology and having some [tech] literacy is not just for the people that strike themselves as innovators. It’s really the baseline, and that’s become quite clear,” said Salyzyn. This is especially true for Canada’s Federal Court, detailed further below.

Despite guidance at the professional level, stories abound of individual legal professionals’ crucial mistakes and oversights when attempting to employ AI in their work. For instance, in 2024, the US company DoNotPay was alleged to have falsely advertised the capabilities of its “robot lawyer,” and as a result paid a $193,000 settlement to the Federal Trade Commission. “A lot of these products are a façade; just a ChatGPT with some kind of legal packaging. It’s fine when you’re [an experienced] lawyer, [but] not so fine when you are an [inexperienced] litigant,” said Samuel Dahan, associate professor at Queen’s University Faculty of Law and founder and director of OpenJustice and the Conflict Analytics Lab.

Other jurisdictions have embraced AI in their practice, from small firms using AI to punch above their weight in email productivity to the French Supreme Court. In France, Dahan and his OpenJustice team are building an AI-based product that aims to evaluate the consistency of rulings across the country’s lower courts and their divergences from the top court’s rulings, with plans to expand the product to Canada and internationally.

In the meantime, as legislation matures across the private and public sectors and professionals improve their AI competencies, Canada’s courts and tribunals will have to address the new legal issues generated by AI.

“Those entities do their best with what they have before them, and will interpret and apply existing laws to technology,” said Scassa. “Sometimes, that shows we can already handle the issues that are coming up, and no new law is needed. In other cases, it may reveal gaps, problems, or some of the interpretations may be seen as counter to what’s needed. That might prompt legislative change.”

Federal Courts analysis: immigration, copyright, and privacy

Among Canada’s federal jurisdiction courts, which in 2022 and 2023 saw nearly a majority of the country’s AI-related cases (33 cases or 42 percent in 2022, and 42 cases or 46 percent in 2023), the most active were the Immigration and Refugee Board and the Federal Court. The federal Trademarks Opposition Board also saw several AI-related cases in 2023 and 2024.

Immigration and Refugee Board: Facial recognition, state surveillance, and proof of identity as evidence

Canada’s federal Immigration and Refugee Board saw the most AI-related cases in 2022 and 2023, numbering 23 in 2022 (69 percent of that year’s federal cases) and 16 in 2023 (38 percent).

The Immigration Board was most active from July 2022 to April 2023. During this period, it made 28 decisions in cases involving AI, or 51 percent of all such Immigration Board decisions made between 2021 and July 2025.

The most striking of the 28 cases arose from identity disputes where appellants alleged or resisted the use of facial recognition. In X (Re), 2021 CanLII 152060 (CA IRB), the respondents alleged that the CBSA had used facial-recognition software from the company Clearview AI to match their photographs. The Minister denied any such use, and the panel, noting Clearview’s July 6, 2020 withdrawal from Canada, found no evidence of facial recognition in the record, declined to admit the Clearview materials, and resolved identity on conventional photo comparison and immigration records, ultimately vacating status for misrepresentation.

A related line of argument appeared in X (Re), 2021 CanLII 153160 (CA IRB), where the applicant again challenged the possibility of algorithmic identity matching. Per the decision text, the Respondent, a Kenyan national, alleged that the Appellant/Minister improperly used Clearview AI facial recognition software, argued that the photographs submitted by the Appellant/Minister did not establish that she was the refugee claimant they claimed her to be, and contended that “balance of probabilities” was not the correct test to determine that she and the person they identified were one and the same. Here, the panel rejected the claim that AI (Clearview AI) had been determinative, but still engaged with the possibility at length before allowing the appeal on other grounds. These paired cases demonstrate how facial recognition has become contested as evidence.

Notably, Canada’s IRB has also debated India’s national application of machine-enabled surveillance and automated database systems. In X (Re), 2020 CanLII 126547 (CA IRB), the panel canvassed evidence on the use of Aadhaar, the CCTNS, and tenant verification apps in metropolitan India. The decision references these systems repeatedly, asking whether they amounted to automated decision-making tools capable of nationwide traceability relevant to refugee claims. Ultimately, the Board concluded that while these technologies exist, they were not yet operating at the scale alleged, and thus the claimant’s risk was overstated. A later case, X (Re), 2021 CanLII 153921 (CA IRB), revisited the same technologies but treated them as less decisive; credibility concerns proved determinative, and the appeal was dismissed. These two decisions show that the Board acknowledged AI-related tools in Indian policing while resisting claims that today they provide seamless, automated surveillance.

Finally, in X (Re), 2020 CanLII 126609 (CA IRB), the Board examined evidence about China’s “Golden Shield” project: the Communist Party of China’s integrated surveillance and data management system that combines AI, facial recognition, and automated decision-making tools to monitor citizens and control internal mobility. In this case, the claimant argued that this vast digital infrastructure made it impossible to leave China undetected, and that the state’s use of machine learning and biometric tracking proved the risk of persecution. The panel engaged at length with these assertions, acknowledging that the Golden Shield represented one of the world’s most sophisticated examples of state-driven AI surveillance. However, the panel found that even in the case of China, like in India, the available evidence did not support the conclusion that such technology operated with total precision or omnipresence. In assessing the credibility of the claimant’s escape narrative, the IRB distinguished between the theoretical capabilities of China’s AI systems and their proven operational reach, ultimately determining that earlier panels had overstated the power of algorithmic border control. The appeal was, therefore, allowed. This marked a significant moment in Canadian refugee jurisprudence, where the tribunal treated AI-based state surveillance as a matter requiring empirical proof rather than assumption.

Where appeals succeeded at the IRB, AI-adjacent technology was typically peripheral, and sometimes neutralized by the Board’s insistence on fit-for-purpose proof. Two patterns stood out in Canada’s IRB decisions involving AI: 1) In several Chinese claims, panels corrected earlier overreach about automated exit controls and database omniscience, restoring credibility and allowing the appeal. 2) In humanitarian and compassionate grounds files, digital traces were dwarfed by best-interests and hardship analyses. Rather than machine interference, these appeals turned on a more traditional assessment of family impact – typical of Canada’s slow approach to wide recognitions of new technological impact.

In 2024, the Immigration Board’s AI-related case count dropped remarkably to just one. Photo identification improvements are a likely reason for 2024’s decline in AI- and facial recognition-related cases; “I think that some [cases] we saw in 2023 may have resulted in adjustments or changes,” said Scassa. Photo comparisons that were being used and challenged may no longer be used. When they are used, they may be better supported and documented, or decision-makers are providing more thorough reasons for facial recognition’s inclusion, and therefore there’s less to contest, she said. “Not that there’s less AI [facial recognition] being used, but that the institution has since adjusted its practices.”

IRCC’s Chinook system

Immigration, Refugees and Citizenship Canada’s Chinook system is a processing tool that has been regularly scrutinized by third-party organizations and Canadian legal jurisdictions alike since its implementation in 2018, over its alleged automated decision-making in Canada’s immigrant and temporary resident application processes. According to a Government of Canada information webpage on the system, Chinook displays information from the IRCC’s Global Case Management System in a more user-friendly, Excel-based format, for productivity’s sake: It’s “a tool designed to simplify the visual representation of a client’s information. It does not utilize artificial intelligence (AI), nor advanced analytics for decision-making, and there are no built-in decision-making algorithms,” reads the website.

A considerable share of Canada’s 2023 AI-related immigration jurisprudence surrounds the Chinook system, said Scassa. For instance, despite the government’s characterization of the system as an innocuous “fancy Excel spreadsheet,” with human officers making the final decisions, Chinook may generate or draft immigration officers’ final decisions, said Scassa. “From an immigration lawyer’s point of view: [Chinook] changes how information is presented to the immigration officer, which may have an impact on [the officer’s] decision-making. Certainly, the drafting has an impact.”

Mostly, Chinook has raised the question of what is really being contested: its proximity to AI and automated decision-making, or fairness in Canada’s immigration process. The latter is what the courts act on, said Scassa. “The language of AI gets used in [related] decisions [and] the arguments get made, but it’s not clear... The federal court’s position [on Chinook-related cases] has been, ‘Let’s just look at fairness.’”

The Chinook question becomes more complex when framed within the Treasury Board of Canada Secretariat’s DADM. It should be asked whether the IRCC should or will comply with the Treasury Board’s directive to mitigate risk and publish public algorithmic impact assessments, said Scassa. “Once you start to legislate, or provide other mandatory legal requirements regarding AI, then it does become important if something is AI, or is not.”

Federal Court: Doan v. Clearview, and “artificial intelligence” as a legal term

After Canada’s Immigration and Refugee Board, Canada’s Federal Court saw the most sustained AI-related case activity between 2021 and July 2025. This court is responsible for resolving legal disputes involving the federal government and issues governed by federal law, and is virtually the home of intellectual property disputes. As the court of judicial review under the Immigration and Refugee Protection Act, the Federal Court likewise saw a number of Chinook-related cases during this period.

Further to judicial responses to new technology and AI, D’Agostino credits Federal Court Justice Roger Thomas Hughes (b. 1941– d. 2024) and his leadership for many of the precedents that determine trademark and technology law within the court. Justice Hughes was on the frontline of disputes regarding new technology pertinent in the early 2000s, and mandated judges’ competence in these areas.

As a result, “it is more of an expert court” regarding technology, said D’Agostino. “Given what we’re facing now with generative AI, if there’s any court that can pronounce themselves and show leadership, it is the Federal Court. It has asserted itself, pretty early on, in terms of giving guidelines to both litigants and then also within the court. It has line of sight and wants to respond in a responsible way,” she said.

In 2023, the number of AI-related cases at the Federal Court peaked at 16 (38 percent of all federal cases). In 2024, it numbered just 12 but represented half (50 percent) of all federal AI-related cases. The 28 cases during these two years were more than two-thirds (68 percent) of all AI-related cases examined by this court during the 4.5-year survey period.

The most significant is Doan v. Clearview AI Inc., a proposed national class action that originated in Quebec and was brought before the Federal Court because of its cross-jurisdictional scope and reliance on federal privacy statutes. Its plaintiffs alleged that Clearview AI had systematically scraped billions of publicly available images from the internet, including from Canadian social-media platforms, to build a commercial facial-recognition database marketed to law enforcement agencies. The claim asserted violations of both the Personal Information Protection and Electronic Documents Act (PIPEDA) and the Copyright Act, arguing that Clearview’s AI system created biometric “faceprints” that amounted to unauthorized reproductions of Canadians’ likenesses and works. The pleadings referred repeatedly to AI, describing the platform’s use of algorithmic matching and deep-learning techniques to identify individuals across photographs.

Although in 2024 the Federal Court ultimately refused to certify the class action, its reasons treated “artificial intelligence” as a significant legal concept. The Court recognized that AI-driven data scraping raised novel questions at the intersection of privacy and copyright, including whether digital likenesses can constitute protectable expression and how existing legislation governs automated extraction and analysis of personal data. Thus, Doan v. Clearview AI Inc. remains a landmark early case in Canadian law’s engagement with AI technology, signaling a judicial willingness to consider AI as an independent source of both capability and legal risk within emerging information-governance frameworks. Provincial jurisdictions, namely British Columbia and Alberta, demonstrate different legal outcomes regarding Clearview AI and its alleged victims, detailed in their respective chapters below.

The Federal Court also saw a number of Chinook system-related cases, where the court appeared to uphold the government’s assertion that the system is not AI or automated decision-making. In Haghshenas v. Canada (Citizenship and Immigration), 2023 FC 464, Justice Brown emphasized that decisions are made by a Visa Officer and not by software. In Espinosa Cotacachi v. Canada (Citizenship and Immigration), 2024 FC 2081, the Court reiterated that the use of Chinook on its own cannot ground a procedural-fairness breach without concrete proof, rejecting speculative assertions about the system’s operation. In the same vein, the Court has cited Raja v. Canada (Minister of Citizenship and Immigration), 2023 FC 719, to clarify that Chinook is not intended to process applications or make decisions, underscoring that applicants must show a specific link between the tool and an unfair outcome.

The Doan case anchors the intersection with intellectual property, while the Chinook line of cases reveals a judiciary that insists on transparency and reviewability whenever automated decision-making is alleged. The result is a body of law that both restrains overreach and very gradually legitimizes AI-adjacent technologies in Canadian courts.

In dealing with high-profile privacy and AI-related cases such as those against Clearview AI, the Federal Court benefits from a wealth of legal precedents set during other technological waves and cases considering similar ethics. While the scale is larger, strong laws are already in place toward the values of fairness, ownership rights, and accountability regarding technology, said D’Agostino.

“We need to take solace and stock in the fact that we do have the wealth of common law to help us decide these cases. The Federal Court is well poised to excavate these cases that are good law, and apply them in these new scenarios,” she said.

Trademarks Opposition Board: AI as a symbol and risk

Canada’s Trademarks Opposition Board recorded three AI-related cases in each of 2023 and 2024.

At the Trademarks Opposition Board – further to the question of AI technology and IP law – the most vivid AI-related example arose in Canada Lands Company Limited v. Compass Group Canada Ltd., 2024 TMOB 218. Here, the Board considered whether “COMPASS DIGITAL LABS & Design” had been genuinely used in Canada, with the record showing repeated reference to “machine learning algorithms,” “AI-powered frictionless markets,” and even “autonomous delivery robots” as part of the registrant’s service offerings. Far from incidental, these technologies were central to the description of the mark’s scope, and their repeated mention in marketing materials and client presentations persuaded the Board to maintain the registration. The case underscores how AI-based services are moving from speculative claims to legally cognizable evidence of trademark “use.”

Further toward litigants’ use of AI

At the Trademarks Opposition Board and across Canada’s courts generally, litigants and defendants increasingly use large language models for assistance. “It’s helping, but also creating a lot of problems,” said Dahan.

In British Columbia, a self-represented litigant before the Civil Resolution Tribunal admitted to using Microsoft Copilot to prepare legal submissions in a landlord-tenant dispute, later discovering that nine of the 10 “cases” Copilot produced were fabricated. A similar incident arose in a child-custody matter before the BC Supreme Court, where a lawyer was found to have cited non-existent judicial decisions generated by ChatGPT, prompting professional-conduct proceedings. As CBC and other outlets have reported, these episodes highlight both the rapid adoption of generative AI by non-experts and the emerging risks of “hallucinated” case law infiltrating real legal processes.

Federal Courts analysis: closing remarks

Canada’s federal courts have served as the country’s principal proving ground for AI-related litigation, handling nearly half of all national cases in 2022 and 2023. Despite the Federal Court’s jurisdictional weight, the Immigration and Refugee Board dominated this activity. The IRB’s panels confronted claims involving facial recognition and foreign surveillance technologies, bringing other countries’ uses of AI and approaches to privacy into the consideration of Canada’s court system. It also faced allegations of automated decision-making within immigration systems such as Chinook. Through these cases, the Board acknowledged the reach of AI tools like Clearview AI and China’s “Golden Shield” while insisting on empirical proof of their operation and legal relevance. Its jurisprudence demonstrates a cautious balance between recognizing the growing influence of AI and ensuring that findings remain grounded in verifiable fact and procedural fairness.

The Federal Court, meanwhile, has positioned itself as the national forum for resolving AI’s intersection with privacy, intellectual property, and administrative law. Landmark files such as Doan v. Clearview AI and Alexa Translations v. Amazon established AI as a legally meaningful category in both data governance and branding, while Chinook-related rulings clarified the judiciary’s expectation of transparency and human oversight in government decision-making. Together with the Trademarks Opposition Board’s early treatment of AI as both a service descriptor and a professional risk, these cases show the federal judiciary steadily building the foundations of Canada’s AI jurisprudence.

Review this report's provincial court analysis, and overall concluding remarks, in Part 2.