The Real Reason Lawyers Do Not Trust AI Has Nothing to Do with the Technology
Cristin Traylor argued that when lawyers say they don't trust AI, they usually mean they don't trust the other party to use it fairly — a relationship problem, not a technology problem.
Conference session: MastersAI x TechnoCat, Chicago, April 16, 2026.
Cristin Traylor opened her session at the MastersAI x TechnoCat Conference in Chicago with a question that reframed everything that followed: when lawyers say they do not trust AI, what do they actually mean?
The surface answer is familiar. The technology is flawed. The output is unreliable. The model might hallucinate or miss something important. But Traylor argued that in most cases, that is not what distrust of AI is really about. “What it really means,” she said, “is ‘I do not trust you to use it fairly.’” And that is a very different problem.
Her framework: AI trust in litigation works like a funnel. At the wide end, low trust between the parties. At the narrow end, high trust. Where the parties sit in that funnel determines how much validation, disclosure, oversight, and friction is demanded around every AI-assisted workflow. The technology barely factors into it.
“The same workflow can look completely reasonable in one case and wildly suspicious in another. Not because the technology changed overnight, but because the level of trust between the parties changed.”
She illustrated the point with two cases. In the first, counsel have worked together before. They are candid. They meet and confer in good faith. When one side discloses it is using AI-assisted review, the other asks proportionate questions: explain the validation, walk me through the QC, tell me how human review was integrated. Reasonable. Now the second case. The parties do not trust each other. Discovery fights have already happened. Deadlines were missed. The exact same disclosure lands as an accusation. Same technology, very different reaction.
Traylor was direct about the industry’s tendency to misattribute that dynamic. We blame the technology for conflict that is actually caused by distrust. AI introduces opacity, and low-trust environments rush to fill opacity with demands for control. More motion practice, more protocol fights, more disclosure demands, more cost, more delay. The funnel widens not because AI is unpredictable, but because the relationship is broken.
Her prescription was not a technical one. Trust is built in discovery the way it has always been built: through consistency, candor, proportionality, and competence. Paradoxically, one of the fastest ways to destroy trust is to oversell the technology. The more credible language is the more honest kind: this tool is powerful, it has limits, here is how we used it, here is how we checked it, here is where humans were involved.
Key Takeaways
- When lawyers say they do not trust AI, they usually mean they do not trust the other party to use it fairly. That is a relationship problem, not a technology problem.
- AI trust works like a funnel. Low trust between parties widens the funnel and drives more validation demands, motion practice, and cost.
- Blaming the technology for discovery conflict caused by distrust is a category error.
- Overselling AI accuracy destroys trust faster than acknowledging its limits.
- As AI becomes more capable, trust will matter more, not less.