Gold Standard Articulation

AI is rapidly becoming critical societal infrastructure, yet the pace of development has outpaced the institutions that ensure AI works safely and as advertised. Users, policymakers, investors, and insurers need reliable ways to verify that promised safeguards exist and are robust, but the technology is complex, fast-moving, and often proprietary.

Many industries address similar challenges through independent auditors who review sensitive, non-public information and publish trustworthy conclusions that outsiders can rely on. We believe a similar approach is needed for frontier AI.

The paper we wrote with dozens of coauthors across the non-profit, academic, and for-profit sectors – "Frontier AI Auditing" – sets out a comprehensive vision for addressing these challenges, drawing inspiration from more established fields. We define frontier AI auditing as rigorous third-party verification of frontier AI developers’ safety and security claims and evaluation of their systems and practices against relevant standards, drawing on deep, secure access to non-public information. We propose eight interlinked design principles, covering risk scope, assurance levels, auditor independence, access arrangements, organizational assessment, continuous monitoring, rigorous processes, and clear communication of results.

If implemented across organizations building and deploying frontier AI systems, this approach would improve safety and security outcomes and enable more confident investment and deployment, particularly in high-stakes sectors.

Our vision draws on lessons from domains with more mature third-party assessment regimes – including financial auditing, aviation safety, penetration testing, and consumer product certification – examining both what has worked well and what has failed. Historically, these industries often built rigorous oversight only after major incidents. With frontier AI, we have an opportunity and a responsibility to be more proactive.

Frontier AI Auditing: Toward Rigorous Third-Party Assessment of Safety and Security Practices at Leading AI Companies