What It Actually Takes to Trust AI
May 7, 2026 · Carla Anderson

A new study of nearly 400 LLM users mapped what it actually takes to trust AI. The answer follows a familiar pattern: people, then process, then technology. Each layer is harder than it looks.
People First
Researchers identified 68 criteria users care about, including privacy, fairness, accountability, and a long list of finer-grained specifics. What topped the list wasn't explainability or model transparency. It was basic data protection: in effect, don't do anything weird with my information. That's a low bar, and a lot of products clear it only technically.
Process Is Where It Gets Personal
The study frames this layer as being an active consumer rather than a passive subject. That sounds obvious until you check your own habits. Do you double-check outputs? Avoid pasting sensitive data into prompts? Most people don't do either consistently. ChatGPT's "can make mistakes" disclaimer is the study's example of a system doing its part. Whether users actually slow down when they see it is something the researchers flag as a question for future work.
Technology Has a Ceiling
A meaningful share of what users care about can't be verified by them at all: bias, for one, and actual data handling versus stated data handling. Worth noting: one user in the study said a login screen "just" increases trust but doesn't actually prove anything. The ritual did the work the AI company couldn't.
What Closes the Gap?
The study's answer is independent audits and certification labels. In practice, they're largely voluntary and inconsistently applied, and that's the part worth pushing on: not whether the framework is right, but whether anyone moves toward it without being required to.
Source: Benk et al. (2025). Proceedings of the AAAI Conference on Artificial Intelligence, 39, 27197-27205.
