Doble says using single-tenant, private cloud environments is akin to shifting from Amazon to Shopify
Law firms that build their AI strategy on traditional multi-tenant cloud tools are putting their future in someone else’s hands, says Alexi CEO Mark Doble.
For two decades, legal tech vendors have encouraged firms to adopt shared cloud applications, where thousands of customers log into the same platform and rely on the provider to separate and secure their data. That structure brought scale and convenience, but in the age of powerful AI systems constantly ingesting and transforming information, Doble now calls it “completely untenable” for serious legal work.
He sketches the stakes with a comparison every managing partner would understand: the Amazon versus Shopify approach. In the multi-tenant world, he says, firms are effectively trying to build their practice on someone else’s marketplace, like a retailer selling on Amazon. The vendor controls the platform, sees every workflow pattern and document type that passes through it, and learns which use cases are most valuable. “You can never truly own your brand. You can never truly own your business if you're only ever selling on Amazon,” he says, and the same logic applies to firms that run their core work on a generic, shared AI platform.
Single-tenant, private cloud environments flip that logic. Instead of all customers sharing one system, each firm runs a copy of the platform in its own isolated environment, still hosted on major providers like AWS or Azure, but, as Doble describes it, “completely walled off from the open internet” and accessible only through the firm’s corporate network. That setup, more akin to a retailer opening its own shop on a platform like Shopify than selling through Amazon’s marketplace, keeps data and AI-derived work product inside a perimeter the firm controls while still drawing on hyperscale infrastructure and modern tooling.
Doble concedes that this model costs more than the old multi-tenant approach, but he argues the gap is narrowing as AI infrastructure becomes more affordable. More importantly, he frames the spend as an investment in proprietary capability. The work lawyers do inside these systems – and the inputs and outputs of the AI tools wrapped around that work – is “critical intellectual property of a law firm,” he says. Firms that fail to capture it are effectively training someone else’s platform to compete with them.
Once a firm has a private environment it trusts, Doble wants it to push harder on the kinds of AI use cases he has been arguing for in his work on professional AI alignment. His starting point is a strict divide between what he calls the objective and subjective domains of legal work. There is a huge bucket of tasks whose quality is “verifiable,” he says – research, document review, evidence analysis and routine drafting – and those should increasingly sit with AI, so long as humans can check the output against reliable sources.
On the other side is the subjective realm of strategy, client relationships, and judgment, where, in his view, even highly capable systems should not be allowed to operate alone, because the work is laden with human values and trade-offs. In that space, Doble insists, “humans need to be absolute gatekeepers to the legal system,” and lawyers must stay “the ones directly involved with achieving client outcomes,” because no model can reliably absorb the social, commercial and ethical nuance that runs through real disputes and deals.
To make the objective side safe, he argues, design choices matter. Consumer-facing chatbots may look impressive, but without careful constraints, they introduce failure modes that are hard to detect. At a minimum, Doble says, any serious legal AI tool should be barred from making legal claims without pointing back to source material. “We don't let AI say anything legally related without pointing to some authority for that proposition,” he says, adding that the system “should have no opinion” of its own and should instead surface authorities and evidence that a lawyer can test.
That sourcing discipline aligns with a broader emphasis on firms maintaining close relationships with their vendors, rather than treating AI as a black box. He says every sizable practice should have someone who meets with vendors regularly and can distinguish between tools genuinely built around legal workflows and those that simply bolt a thin interface onto general-purpose models. Without that internal expertise, firms risk locking themselves into platforms that are misaligned with their interests or that leave them vulnerable to privacy, security, and quality control issues.
On the risks of using AI uncritically, Doble shares the common definition of hallucination as the point “when AI [tools]... purport something to be true, with very high levels of confidence ... but it turns out to be false,” and he describes it as a particularly pernicious failure because it persuades users to believe something that is wrong. The profession has already seen what happens when those errors slip into court filings, and he points to “significant ramifications” for a lawyer’s reputation, their case and “your particular client, and the objectives you're trying to achieve.”
Beyond hallucinations, Doble flags the behavioural risk that comes when lawyers start treating AI output like the work product of a junior they do not need to check. He compares uncritical use of these tools to sending out work from “a brand new student, even, or brand new associate in your firm ... without reviewing it, without critically looking at it,” and he argues that the reputational and financial damage can be just as severe when that work undermines a case or a transaction.
Those warnings are no longer new to most law firm leaders. What is newer in Doble’s argument is the insistence that the real battleground is no longer whether to use AI at all, but who owns the infrastructure and data it runs on. The question, as he frames it, is whether firms are prepared to move off the shared marketplaces they have grown used to and build the private platforms that will determine who actually controls the value AI creates.
This article is based on an episode of CL Talk. The episode can be found on our CL Talk podcast homepage, which includes links to follow CL Talk on all the major podcast providers.
