"AI and Trust," published in Communications of the ACM in 2024, is Schneier's most formally academic statement of the thesis he had been developing across schneier-on-security-blog posts and the work toward rewiring-democracy: that the central question raised by AI is not its technical capabilities but its implications for the trust structures that underpin social life.
The Core Argument
The essay applies the trust-framework of liars-and-outliers directly to AI systems. In liars-and-outliers, Schneier argued that societies function through layered trust mechanisms — moral norms, reputational systems, institutional rules, and technical systems — that enable cooperation among strangers at scale. The essay asks what happens to each of these layers when AI is deployed.
The core claim is that trust in the context of AI has a distinctive structure. When we trust a human actor, we are making predictions about their future behavior based on models of their interests, values, and constraints. AI systems have different properties: they may behave reliably in tested domains and unreliably in untested ones; they optimize for specified objectives in ways that may diverge from intended outcomes; and they can be deliberately misaligned by their developers or operators. The trust-framework must be extended to account for these distinctive features.
The essay distinguishes between trusting AI as a tool (trusting that it will perform a specified function reliably), trusting AI as an agent (trusting that it will act in accordance with specified values across unanticipated situations), and trusting AI as an institution (trusting the organizations that deploy AI to do so responsibly). These are different kinds of trust problems requiring different governance responses.
Relationship to the security-mindset
The essay brings the security-mindset to bear on AI: it asks not just what AI can do but how AI systems fail, what their attack surface looks like, and how adversaries will exploit them. This is a distinctive contribution to AI ethics discourse, which tends to focus on harms from AI working as intended (algorithmic discrimination, surveillance) rather than on harms from AI failures and adversarial exploitation.
The connection to hacking-as-systems-subversion is explicit: AI creates new opportunities for actors to hack social systems by exploiting the gaps between what AI systems are designed to do and what they actually do under adversarial pressure.
Significance
Publication in Communications of the ACM — the flagship magazine of the Association for Computing Machinery — marks Schneier's engagement with the computer science research community as distinct from his policy-oriented writing. The essay is more technically rigorous than his blog posts and more formally structured than his books, reflecting the venue's standards while remaining accessible to a broad technical audience. It is the academic foundation for the arguments developed at greater length in rewiring-democracy.