Head of tech and AI practice at the firm says generic policies fuel shadow tools and weaken safeguards
Canadian businesses are tying themselves in knots over AI risk while competitors move faster, even though most of the problems are solvable with focused governance, says tech lawyer Misha Benjamin.
Benjamin, a partner at BCF in Montreal, wants leadership teams to stop debating abstract dangers and start asking which concrete risks matter in their business and how those can be controlled without shutting down useful tools. That challenge cuts straight across the generic, copy-paste policies that have spread through boardrooms, policies that, as he puts it, “over-correct on risks that don’t apply to you” while still missing “certain risks that are really important to your business.”
He speaks from experience inside companies that were wrestling with AI long before the current wave of interest. At Ubisoft, he was working “at the very beginning of the use of AI,” and at Element AI, he helped “build out what contracting and risk management and IP around AI should look like” when there was barely a market and few legal frameworks, he says. He later supported large “AI transformation” projects at McKinsey and served as general counsel at Sama, which “annotates and selects data for most of the large AI model developers,” before co-founding the technology and AI practice at BCF. Benjamin is also a member of the Meritas intellectual property group, which works with companies across international jurisdictions on IP issues.
Seen from that vantage point, boilerplate governance looks less like caution and more like a refusal to understand where AI risk actually lives in a business. At Sama, customers handed over highly valuable datasets used to improve foundational models. If a third party had been allowed “to use an AI model that was trained on our data, that would be a huge issue for our customers,” Benjamin explains, because those datasets were a core competitive advantage for the model developer. “If they can’t trust us in that way, we’re not going to have a business,” he says.
Change the setting, and the risk picture shifts immediately. In a manufacturing plant, data generated “on the shop floor, by your own machines, on your own line” raises different concerns, he notes, because the central question is reliability; if generative tools are used for demand planning and get it wrong, the damage plays out in missed orders and stoppages, not ownership fights. In a creative studio, the calculus changes again because the value of the work product turns on copyright, yet with many tools, “you might not be able to own the output of those gen AI tools,” leaving room for competitors to “just take what you created and resell it,” he says.
For Benjamin, those contrasts demonstrate that there is no legitimate one-size-fits-all AI policy. “It’s really about figuring out for your company, what makes sense,” he says, starting from “what your customers and stakeholders expect of you” and then deciding “what do we really need to focus on” instead of chasing every hypothetical scenario.
That analysis also needs to examine how tools are implemented, not just which brand is on the contract. The same underlying model carries different risks depending on how it is deployed and governed. Benjamin points to the difference between more robust enterprise deployments that “allow you to do a lot of things that you wouldn’t be able to do” and more limited setups, and warns that simply letting people use public or lightly managed versions strips out the safeguards companies need around data residency, access management and revoking access for former employees. Even apparently minor choices can change the calculation: when vendors promise not to train on customer data “except if we give permission,” that consent can hide in a feedback button, so users who rate responses up or down may be silently authorizing their employer’s data to retrain the model, he says.
Against that background, blanket bans look naive. “Pretty much every company that we talk to, someone’s using these tools somewhere,” Benjamin says, describing generative AI as a rare case where adoption is driven from the bottom up as much as from the top. When boards declare that staff cannot use generative tools with “any confidential information” and then fail to define that term or offer compliant alternatives, they create the very behaviour they claim to fear, driving employees toward tools IT cannot track and what he describes as “shadow AI” that operates outside formal controls.
Benjamin argues that the answer is not to clamp down harder but to build governance that workers can actually live with. His most successful mandates pair a policy rollout with detailed mapping of “what people are doing or want to be doing,” identification of workloads where AI could deliver real value, and deployment of tools that employees can use with confidence. He tells clients that the policy is designed to enable employees to use the tools and understand when they can do so, with training to help staff derive real value from those systems.
The intellectual property questions remain more stubborn. “Owning output is actually really hard,” he notes, because current guidance often assumes that AI-generated outputs are not protected in the same way as human-created works. That has direct consequences for brand strategy, since companies that create logos or other core elements entirely with generative tools may end up with weaker protection and less distinctiveness. It also raises a tracking problem, because firms need to “trace back what was created with AI and what wasn’t” when they are challenged over provenance, he says.
Despite that catalogue of pitfalls, Benjamin rejects paralysis. Citing recent research on adoption, he warns that “Canadian companies are over-indexing on some of the risks” and “not adopting as fast as we could,” even though most of the legal and compliance problems are “very surmountable obstacles” for leaders prepared to understand the technology, ask hard questions and move methodically, he says. The real danger, he suggests, is not that risk‑averse organizations will avoid headlines today, but that they will wake up a few years from now without the data foundations, employee readiness or change‑management muscles they need to compete effectively.
This article is based on an episode of CL Talk. The episode can be found on the CL Talk podcast homepage, which includes links to follow CL Talk on all the major podcast providers.
