Ethics & Governance

When AI Stops Answering and Starts Acting: The Legal Governance Gap

Masters Media

Martin Tully argues that agentic AI — tools that pursue goals, take actions, and make decisions — demands governance frameworks most legal organizations have not yet built.

2 min read

AI Governance | MastersAI x TechnoCat Conference, Chicago, April 16, 2026.

Martin Tully opened his session with a callback to one of legal tech’s most memorable moments: the 2021 Zoom hearing where a lawyer pleaded with a judge that he was, in fact, not a cat. His point was sharper than the nostalgia. In 2026, the agent handling your discovery might not be human at all.

Tully, a partner at Redgrave LLP, used the moment to frame a session on agentic systems — tools that do not just answer questions but pursue goals, take actions, and make decisions. “Search and generative AI answer questions. Agents pursue goals.”

The distinction carries real consequences. Traditional AI tools find the document. Agentic AI decides which documents it needs, retrieves them, synthesizes conclusions, and, if granted the permissions, emails the result to opposing counsel at 2 a.m. “The cute interface is not the risk. The permissions are.”

His governance framework distilled to four verbs: see, infer, write, and send. Each carries a distinct risk profile, and each demands deliberate governance decisions that most organizations have not yet made.

For organizations considering a blanket AI ban, Tully offered a blunt warning: bans do not prevent the problem; they push it underground. The answer is structured governance: risk tiers, escalation paths, and controls built into workflows. His defensibility checklist: logs, sources, approvals, retention.
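To make the framework concrete, here is a minimal sketch of what a per-agent permission gate built around Tully's four verbs might look like. All names here (`AgentPolicy`, `attempt`, the approval flag) are hypothetical illustrations, not any product's API; the point is that each verb is an explicit grant, sensitive verbs require human sign-off, and every attempt is logged with its source and approval status, mirroring the defensibility checklist.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# The four verbs from Tully's framework: see, infer, write, send.
VERBS = {"see", "infer", "write", "send"}

@dataclass
class AgentPolicy:
    """Hypothetical permission gate for an agentic tool."""
    name: str
    allowed: set                  # verbs this agent may use at all
    requires_approval: set        # verbs that need a human sign-off
    audit_log: list = field(default_factory=list)

    def attempt(self, verb: str, source: str, approved: bool = False) -> bool:
        """Return True if the action may proceed; log every attempt either way."""
        if verb not in VERBS:
            raise ValueError(f"unknown verb: {verb}")
        permitted = verb in self.allowed and (
            verb not in self.requires_approval or approved
        )
        # Defensibility checklist: what happened, from what source,
        # who approved it, and when -- retained in the log.
        self.audit_log.append({
            "agent": self.name,
            "verb": verb,
            "source": source,
            "approved": approved,
            "permitted": permitted,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return permitted

# Example: a discovery agent may see, infer, and write freely,
# but sending anything externally requires explicit approval.
agent = AgentPolicy(
    name="discovery-agent",
    allowed={"see", "infer", "write", "send"},
    requires_approval={"send"},
)

agent.attempt("see", source="matter-1234/docs")                    # permitted
agent.attempt("send", source="draft-summary.pdf")                  # blocked
agent.attempt("send", source="draft-summary.pdf", approved=True)   # permitted
```

The design choice the sketch illustrates is Tully's core point: the risk lives in the permissions architecture, not the interface. The "send" verb is no harder to implement than "see," but it carries a categorically different risk profile, so it gets a categorically different control.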

Key Takeaways

  • Agentic AI takes action. Legal teams need governance frameworks addressing what agents can access, infer, produce, and send.
  • The permissions architecture matters more than the interface.
  • Privilege risk in agentic systems extends beyond individual data points — inferred patterns can expose strategy unintentionally.
  • Blanket AI bans push adoption underground and strip organizations of visibility.
  • Defensibility requires: logs, sources, approvals, and retention policies.
