About

The Problem

As frontier AI systems become more capable, the safety and security safeguards applied to them are increasingly critical. These safeguards prevent AI systems from being misused to carry out terrorist attacks, protect the sensitive data used by AI agents, and ensure AI systems understand and follow human intent. Companies generally “check their own homework” on these safeguards, which is concerning because even well-intentioned teams are susceptible to blind spots, groupthink, and corner-cutting under competitive pressure.

There is broad agreement across industry, government, and academia that self-evaluation is not enough: rigorous, truly independent assessment is needed to surface risks that internal teams might otherwise miss, validate the claims that companies make to regulators and the public, help distill best practices from across the industry, and prevent a race to the bottom by applying a shared standard across organizations. But we are far from achieving that in practice.

Today’s external assessments of AI systems don’t involve the rigor and access that the term “audit” implies. Assessments are typically done in a black-box fashion, ignore key aspects of AI development such as platform-level safeguards and internal deployment, and provide only a “snapshot in time” rather than continuous verification of safety and security.

Our Vision

From Promises to Proof

AVERI aims to bring about a world in which the most powerful AI systems – and the companies that build them – are rigorously audited for safety and security by third parties. We believe this independent auditing layer is essential for enabling confident deployment of AI and managing critical risks from this increasingly powerful technology.

If we succeed, independent assessment of AI systems will shift from optional, time-limited, narrow studies to an expected, always-on verification layer, assuring all parties that severe risks are being addressed.

We see auditing as a complement to, not a substitute for, transparency. Much of what matters for safety and security is proprietary, technically complex, and requires expert judgment to interpret. Independent auditors can review sensitive, non-public information and publish trustworthy conclusions that outsiders can rely on — bridging the gap between what companies know and what the rest of us can verify.

Funding

Our funding comes from Halcyon Futures, Fathom, Coefficient Giving, Geoff Ralston, Craig Falls, Good Forever Foundation, AI Underwriting Company, Sympatico Ventures, and several non-executive employees and alums of frontier AI companies. No donor makes up a majority of our funding.

We’ve also been offered API credits from Amazon, Anthropic, Google DeepMind, Microsoft, OpenAI, and Thinking Machines Lab.

Learn more about our work

Explore our research, policy initiatives, and approach to frontier AI auditing.