Professional services firms aren’t built like corporations. Ownership is distributed across partners who control client relationships, manage delivery, and carry personal liability for the work they sign. Practice lines operate with meaningful autonomy. In a global network, member firms are legally separate entities that happen to share a brand. That structure creates the economics of professional services, and it creates a specific problem when a firm decides AI adoption is a priority.
Accountability doesn’t have a natural home. When AI is embedded in audit, advisory, or legal workflows, the question of who owns the outcome doesn’t resolve the way it does in a vertically managed organization. No single function is positioned to govern it. And that ambiguity is where most firms are losing ground.
The market has already moved
The firms setting the pace on AI aren’t treating it as a future investment. One of the world’s largest professional services firms has committed to embedding a multi-agent AI framework across 130,000 professionals and 160,000 audit engagements globally, with full end-to-end integration targeted by 2028. That isn’t a pilot. It’s a structural commitment to AI as the foundation of how audit work gets done. Firms are also tying compensation directly to AI adoption, making it a criterion alongside commercial growth and client satisfaction rather than an optional capability.
Across accounting and advisory firms, the conversation has shifted as a result. Firms are no longer asking which AI tools to deploy. They’re asking how to structure AI governance across complex partnership models: how ownership works across practice lines, how accountability flows in a network where member firms operate as separate legal entities, and what governance looks like when compensation depends on demonstrating AI impact. The technology question has been largely answered. The structural question is what firms are actively working through now.
When agents act, the old governance model doesn’t hold
Most governance conversations in professional services are still framed around AI-assisted work: a practitioner using a tool, reviewing an output, making a final call. That framing is already behind the deployment curve.
When agents act on behalf of a firm (drafting engagement correspondence, screening conflicts, surfacing client intelligence) without a human checkpoint at every decision, the governance model designed for AI-assisted work doesn’t hold. In professional services, where a single engagement decision can carry regulatory, reputational, and liability consequences, an ungoverned agent acting on firm data isn’t a technology risk. It’s a firm risk. The question isn’t whether your firm will encounter an agentic failure. It’s whether you’ve built the infrastructure to identify it, contain it, and explain it before it becomes a client conversation.
Governance has to begin where the work happens
The firms getting this right have made one key decision: governance has to be embedded at the engagement level, not bolted on at the firm level. That means Intapp Walls governs which AI agents can access which client data, so that the information boundaries that independence and confidentiality require aren’t overridden by a model that doesn’t know they exist. It means AI activity is visible by client, by engagement phase, by practice line. And it means when an agent acts, there’s a record of what it did, what data it touched, and who was accountable.
This isn’t a compliance posture. It’s a performance posture. The partners and firm leaders who build it aren’t just managing risk. They’re generating the engagement-level data that makes AI adoption visible, partner performance measurable, and client trust defensible.
See how Intapp Walls for AI gives professional firms the governance layer to deploy agentic AI with confidence. Learn more.