The shift from AI tools to law firm architecture

AI productivity is no longer theoretical. Your lawyers are using AI. Some of your best partners have built workflows around it. And the ROI on these individual tools is real enough that shutting it down isn’t a serious option.

The problem is that your firm’s AI footprint has grown faster than its ability to govern it. And the gap between those two curves is where your professional liability lives.

So, how do you scale AI across the firm without scaling risk?

That question, not model accuracy or productivity benchmarks, is what will separate firms that lead from firms that manage risks reactively.

The governance gap is already costing firms

Governance is no longer a back-office risk management concern. It’s a business development imperative.

With the ABA’s Formal Opinion 512, supervisory responsibility for AI now rests squarely on lawyers. In response, major financial institutions and Fortune 500 legal departments are including specific AI governance requirements in their outside counsel guidelines, and are increasingly using compliance as a panel selection criterion.

Since governance is now a competitive differentiator, firms must rethink how governance actually works. The governance frameworks your firm built over decades — ethical walls, conflict screening logic, MNPI protocols, independence controls — were designed for a world where work happens inside supervised firm systems. But publicly available AI has introduced a parallel environment that often sits beyond the reach of traditional controls.

Consider the implications. A sensitive M&A session is conducted with an LLM that sits outside your information barriers. A conflicts search runs against your matter database, but not the AI’s context layer. A lateral partner’s institutional knowledge is captured in prompts, but never screened through governed intake. These are the scenarios your professional responsibility counsel is already fielding.

Many Am Law 100 firms now have AI steering committees and policies in place. That’s a start. But policy-layer governance depends on perfect human compliance. It requires lawyers to remember to apply it every time across every tool and every matter.

On the other hand, architecture-layer governance enforces compliance automatically, the same way ethical walls software screens conflicts in the background — without requiring lawyers to manually check conflict reports.

When speaking to leaders whose firms were moving from AI pilots to production, Harvey’s CEO Winston Weinberg found that they all had one thing in common: “The biggest blocker was governance by far. Everyone was self-policing.”

In an AI-enabled firm, governance cannot be optional, manual, or peripheral. It must be structural. The firms that recognize this will be able to win more business while minimizing risk.

Institutional intelligence is the competitive variable

Your firm’s competitive advantage isn’t a generic AI model. It’s your institutional knowledge. Proprietary client data. Matter history. Engagement playbooks. The insight accumulated across decades of work.

When that knowledge is embedded inside governed AI systems, it becomes durable and scalable. Teams can access it instantly, and institutional memory no longer walks out the door with every departure.

When every firm has the same AI models trained on the same public data, the firms that win will be the ones that put their own proprietary knowledge to work inside those models.

John Hall, CEO, Intapp

Agentic AI raises the stakes further. Unlike prompt-driven tools, agentic systems initiate workflows, execute tasks across applications, and route decisions — without requiring human input at each step.

An agent that can run a full conflicts clearance workflow is incredibly powerful. But that same agent, operating without governance embedded into the architecture, could just as easily surface privileged documents across matter boundaries.

The solution isn’t another policy layer. It’s a governed intelligence layer that connects the workflows where risk and opportunity actually live. And Intapp Celeste is built to do just that.

Introduced at Intapp Amplify, Celeste is an agentic AI platform built specifically for professional firms. With built-in ethics and compliance, the platform uses your firm’s unique playbooks to handle everything — from origination and conflicts clearance to engagement delivery and profitability insight.

Plus, through Intapp’s partnership with Harvey, the ethical walls and policies that your firm has already configured extend automatically into Harvey’s environment — across Assistant, Vault, and Workflows. As a result, governance follows the work rather than stopping at the application boundary.

That’s the difference between AI that summarizes, and AI that thinks like your firm.

Roger Smith, Vice President of Engineering for AI Platform, Intapp

Join the conversation

The governance gap is no longer theoretical. It is measurable. And it’s already surfacing in client conversations, lateral negotiations, and professional responsibility inquiries across the Am Law 200.

The good news? Governance no longer has to rely on reminders, training sessions, or policy acknowledgments. It can be embedded directly into your systems — enforced consistently across every agent interaction, matter boundary, and AI-assisted workflow — without adding friction for the lawyers who are already using these tools.

In a profession where reputation drives the next client, the next fund, and the next deal, capability without governance is not a durable strategy. Intelligence must be governed by design, or risk will scale at the same speed as innovation.

The question is no longer where your firm should engage with AI. It’s whether your governance infrastructure is ready for the AI your firm is already using.

Join our executive webinar, “Intelligence Applied: The next phase of AI in law,” to examine what governed AI architecture looks like in practice — and how leading firms are preparing for the next phase of client and regulatory scrutiny. 

Register to join the conversation.