For Canadians living with disabilities, these tools are already having an adverse effect on hiring
Consider David. His story is a composite, but the experience it describes is not. He is a software engineer with ten years of experience, two graduate degrees, and a gap in his employment history. The gap exists because David has multiple sclerosis (MS): he took two years off during a relapse. He recovered fully. His skills are current. His references are excellent.
During an online application last spring, David disclosed his MS and requested a disability accommodation in the screening process. Eleven minutes later, he received an automated rejection. No human had reviewed his file. No one had read his accommodation request. An algorithm had decided he was not worth considering.
David cannot prove that AI screened him out. That is precisely the problem.
AI disclosure is not accountability
Ontario made history on January 1, 2026, when it became the first Canadian jurisdiction to require employers with 25 or more employees to disclose whether they use AI to screen applicants. The Employment Standards Act amendment is a meaningful step. It is also entirely insufficient.
The amendment does not require equity audits before deployment, testing for disparate impact under the Human Rights Code, or any mechanism for a rejected applicant to challenge an algorithmic outcome. It tells David that AI was used. It does not tell him why he was rejected eleven minutes after requesting an accommodation, whether the system was tested for bias, or who is accountable.
The case against algorithmic hiring is already built
The Human Rights Code does not exempt algorithmic decision-making. The Supreme Court of Canada established in Ontario (Human Rights Commission) v. Simpsons-Sears Ltd. that a facially neutral policy with discriminatory effects engages the Code. This principle applies with full force to algorithmic hiring. Machine learning systems trained on historical hiring data optimize for the statistical majority. Disabled people are outliers: candidates who move and communicate differently, or whose employment histories are non-linear, fall outside the norm the system rewards. This is not a bug. It is a feature. And it is precisely the kind of adverse-effect discrimination the Code was designed to catch.
For your clients with disabilities, the disclosure dilemma is acute. Disclose too early and risk being screened out before a human sees your file. Disclose too late and risk being told the window for accommodation has closed. David disclosed immediately. His request went into a system structurally incapable of honouring it. That is a failure of the duty to accommodate at the point of first contact, and it is actionable.
Let me be direct: buying a discriminatory tool does not absolve an employer of its legal obligation. It compounds it. If you are acting for an applicant who was screened out after disclosing a disability, the elements of a human rights complaint are straightforward: adverse impact, a prohibited ground, and a system without a mechanism to fulfil the duty to accommodate. If you are acting for the employer, the question is whether you can demonstrate due diligence. Right now, most employers cannot.
The vendor liability question has arrived
In Mobley v. Workday, Inc., a US federal lawsuit alleges that Workday’s AI screening tools discriminated against applicants based on race, age, and disability. The liability theory is what makes the case a landmark: Workday is alleged to be liable not as a neutral vendor but as a third-party agent and indirect employer. The case survived a motion to dismiss in 2024, was certified as a nationwide collective action in 2025, and in March 2026 the court rejected Workday’s argument that age discrimination law does not cover applicants. The vendor-liability theory has survived every attempt to kill it.
Canadian employment lawyers should be watching Mobley closely. If you advise a client who uses Workday or a comparable platform, the vendor-liability question is now yours. Canada’s federal AI framework, Bill C-27, died on the Order Paper in January 2025. In the absence of federal AI legislation, the Human Rights Code and the Canadian Human Rights Act are the primary instruments available. The regulatory vacuum is not a safe harbour. It is a liability.
A new standard — and how to use it
In December 2025, Accessibility Standards Canada published CAN-ASC-6.2:2025 — Accessible and Equitable Artificial Intelligence Systems — a National Standard under the Accessible Canada Act. It requires employers to validate AI hiring tools for equitable performance across disability groups, treat statistical discrimination as a procurement risk, and provide human alternatives to automated decisions.
This standard has immediate utility on both sides of the bar. Lawyers advising employers on AI procurement should treat CAN-ASC-6.2 as a due diligence benchmark. Lawyers representing applicants should treat non-compliance as evidence of a failure to accommodate. The questions are the same on either side: Has this tool been audited for disparate impact on the basis of disability? Is there genuine human review of accommodation requests? What recourse exists when the system gets it wrong?
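What a minimal disparate-impact audit actually computes can be shown in a few lines. The sketch below is illustrative only: the sample data, the group labels, and the 0.8 threshold (borrowed from the US EEOC “four-fifths” guideline) are assumptions for demonstration, not a test prescribed by the Human Rights Code or CAN-ASC-6.2.

```python
# Minimal adverse-impact check: compare selection rates across groups.
# Hypothetical data and threshold; the group labels and the 0.8 cutoff
# (the US "four-fifths" guideline) are illustrative assumptions, not a
# standard prescribed by the Human Rights Code or CAN-ASC-6.2.

from collections import Counter

applicants = [
    # (group, passed_screening)
    ("no_disclosed_disability", True), ("no_disclosed_disability", True),
    ("no_disclosed_disability", False), ("no_disclosed_disability", True),
    ("disclosed_disability", False), ("disclosed_disability", True),
    ("disclosed_disability", False), ("disclosed_disability", False),
]

totals, passes = Counter(), Counter()
for group, passed in applicants:
    totals[group] += 1
    passes[group] += passed  # bool counts as 0 or 1

# Selection rate per group, compared against the highest-rate group.
rates = {g: passes[g] / totals[g] for g in totals}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A flagged ratio is a signal for human review, not a legal conclusion. Canadian law imposes no four-fifths rule: under the Code, even a smaller disparity can ground a complaint where a prohibited ground is a factor in the adverse outcome.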
Your firm is not exempt
This is not only a client problem. Law firms are employers. Mid-size and large firms are already evaluating AI-powered platforms for articling recruitment and summer student screening — tools that rank, filter, and shortlist candidates before a hiring partner reads a single cover letter. If your firm uses these tools, the same legal obligations apply. A firm that deploys algorithmic screening without auditing for disability bias faces the same human rights complaints it may be advising clients to avoid.
If we cannot model equitable hiring within our own firms, we forfeit the credibility to demand it of others. Lawyers advising corporate clients on AI procurement should be having the same conversation with their own managing partners.
AI disclosure is where accountability begins
For eight million Canadians living with disabilities, algorithmic hiring is not an abstract policy question. Disclosure is where accountability begins. It cannot be where it ends. Whether the law catches up depends on the arguments we make now.