Immigration lawyers launch AIMICI to shed light on IRCC’s decision-making tools, which include AI
In 2021, Will Tao was helping a client with a study permit application when Immigration, Refugees and Citizenship Canada notified him that the agency was planning to file an affidavit in another case.
Like Tao’s client, the litigant in that case, Ocran v. The Minister of Citizenship and Immigration, had been refused a study permit by IRCC and was challenging the decision in court. The litigant had argued that the reasons the agency gave for refusing her application were contradictory and peculiar – even inhuman.
The affidavit by Andie DaPonte, IRCC’s then-director of international network optimization and modernization, came that July. In it, DaPonte revealed that since 2018, IRCC officers had been using a tool beyond the agency’s official case management system to process immigration cases. The tool, Chinook, extracted applicants’ data from the official system and presented it succinctly to officers, summarizing pages of documents in a few lines. It allowed officers to process applications far more quickly. It also meant that officers were not making decisions based on complete reviews of applications.
DaPonte confirmed that IRCC officers had relied on Chinook when the agency denied Abigail Ocran’s study permit application. For most immigration lawyers, including Tao, it was the first time they’d heard of Chinook at all.
Tao had co-founded his firm, Heron Law Offices, in Burnaby, British Columbia, earlier that year. When he received notice of DaPonte’s affidavit, he reached out to another immigration lawyer in Toronto, Zeynab Ziaie Moayyed, to ask if she’d received a similar message.
That conversation, and DaPonte’s subsequent disclosures, led the two lawyers “on this sort of journey… to try and uncover and figure out what the government’s use of tools were,” Tao says. After finding out about Chinook, Tao and Moayyed learned about a machine learning triage tool that IRCC was using to process temporary resident visas and study permits. IRCC then shared details about an “integrity trend analysis tool,” which the agency uses to identify “potentially high-risk applications.” The lawyers began looking into the agency’s longtime use of facial recognition tools. Recently, their inquiry broadened to include generative AI, “because we found that now they’re disclosing – or not disclosing – the use of generative AI in certain circumstances,” Tao says.
In his 2025 thesis for the Peter Allard School of Law’s Master of Laws program, Tao refers to such tools as automated (or algorithmic) decision-making tools, whose purpose is to support government decision-makers. They include, but are not limited to, AI technology.
“If we look a little bit deeper, it’s actually the technological tools that are doing a lot of the work of the human [IRCC] officer,” he adds. “That’s what got us investigating – filing access to information requests, teaching, consulting with government.”
Last October, these efforts coalesced in the launch of AIMICI, or the AI Monitor for Immigration in Canada and Internationally, a nonprofit that aims to monitor public-sector use of AI tools. While similar organizations exist in the US and the UK, Tao was not aware of any group in Canada that the government could consult or that could push back against government adoption of concerning AI tools.
“We didn’t have networks or folks who could be jumping in if there was a court case… or [educational] panels,” Tao says. “That was really our impetus to be like, hey, we need a monitor for these types of issues.”
There were several reasons to begin with a focus on immigration. The high volume of applications – IRCC reported 2.1 million pending applications at the end of 2025 – meant there was more data to work with. Immigration is also what Tao calls a “high volume, low rights environment” – i.e., one in which the individuals involved are considered to have fewer legal rights, giving IRCC more leeway when it comes to new tools.
“A lot of experimenting and experimentation takes place in the immigration context because oftentimes, as Canadians, we don’t think about immigrants as having the same level of rights as Canadians… they don’t necessarily benefit from Charter protections and so on,” Moayyed says. “So it’s much easier, both legally and also politically, to experiment in this space, compared to, let’s say, [the Canada Revenue Agency]. People would be up in arms and much more concerned.”
Because it serves as an arena for experimentation, immigration can also function as a preview of how AI tools will be used by the government in other contexts – including in ways that directly impact Canadians, Moayyed says.
As immigration lawyers, Tao and Moayyed have seen firsthand how IRCC’s decision-making tools have impacted applicants in the immigration and refugee system.
“What we end up seeing for many applicants – and I’ve represented several hundred of them over the past couple years – is that they will submit very complete applications,” Moayyed says. “But what they end up with is a refusal that is very generic and has nothing to do with the evidence that they presented in their, for example, study permit application.
“The reason that is happening is because of not just the technology, but there’s all this pressure on the officers to get through applications more quickly,” Moayyed says. However, many decisions fail to reflect “an actual analysis of the application,” she adds.
Tao says many of his clients are bewildered by their experiences with the system.
It’s “really changed the way we practice law, because in the past, it was very much done with lawyers saying that we can write a really strong submission for applicants, and that… we have access to human officers,” Tao says.
Clients are “really shocked when I tell them that a lot of decisions are just based straight off of certain entries into the forms – their biographical background, their marital status, their age, their country of nationality – that a lot of things are outside of their control,” he says.
Karina Juma, another co-founder of AIMICI, says the organization has been approached by university professors and conferences to share its perspectives on the use of AI tools in immigration. According to Tao, the group has also produced reports for courts and the Treasury Board Secretariat.
Recently, IRCC notified Tao and Moayyed that the group’s feedback about the agency’s new AI strategy was crucial to its development. The agency unveiled the strategy this month, saying it will ensure that its use of AI supports the federal government’s “commitment to responsible, transparent, and secure AI adoption across federal institutions.”
The strategy “comes after much advocacy, including by Will, Zeynab, Andrew [Koltun, another AIMICI co-founder], others in the private bar, academics, and other individuals… who are doing this on an ad hoc basis,” Juma says.
While Juma believes the strategy is a good start, she says it also fails to address some key questions. The strategy explains the types of tools IRCC decision-makers are using, for example, but not necessarily how they are being used.
“They mentioned somewhere [that] the ‘human in the loop’ [is] still involved – and this has been IRCC’s position since the beginning, that these tools or AI don’t make a decision or a recommendation for the officers,” Juma says. “But our position and our question has always been: if the information is triaged or presented in a certain way that will lead to a certain outcome, then that’s different than what the officer would have reached if these tools were not involved. It’s basically making a recommendation or a decision for that officer.
“I don’t necessarily think that it addresses that question of how much sway or how much influence does the tool have on the human decision maker,” she adds.
These types of lingering questions are what AIMICI aims to answer, Juma says.
The way IRCC has rolled out automated decision-making tools over the years “has caused a lot of confusion, which is why practitioners have maybe not touched this topic, or are themselves struggling with, how do we bring attention to this?” Juma says. She adds, “We ourselves don’t know everything.”