Team

Portrait of Miles Brundage with glasses, a beard, and a plaid shirt, smiling against a plain white background.
  • Miles Brundage serves as Executive Director at AVERI, where he sets AVERI’s strategic direction, leads the team, and builds partnerships to advance AVERI’s mission of making third-party auditing of AI systems effective and universal.

    Previously, Miles led the Policy Research and AGI Readiness teams at OpenAI, and before that he was a Research Fellow at the University of Oxford's Future of Humanity Institute. 

    In addition to leading AVERI, Miles is a Non-Resident Senior Fellow at the Institute for Progress, a member of the AI Governance Forum at the Center for a New American Security, an advisor to Epoch AI and the RAND Corporation, and he writes regularly on Substack. 

    Miles received a Ph.D. in Human and Social Dimensions of Science and Technology from Arizona State University in 2019, and worked at the US Department of Energy's Advanced Research Projects Agency–Energy (ARPA-E) before beginning his graduate studies.

Portrait of George Balston with brown hair, glasses, and a beard, wearing a dark blazer and white shirt, against a plain white background.
  • George Balston is Policy and Advocacy Lead at AVERI and leads the organization's work on building demand for frontier AI auditing. 

    Previously, George served as Co-Director for Defence & National Security at The Alan Turing Institute, where he directed a cross-functional research portfolio and led a team of 60 staff. He also founded the Centre for Emerging Technology and Security (CETaS), a research centre advising the national security community on the risks and opportunities of emerging technology.

    Prior to this, George worked for the British Government as a Researcher, Data Scientist, and Policy Specialist, developing and delivering early AI capabilities for the national security community.

A headshot of Sean McGregor with short dark hair, a mustache, and a goatee, wearing a checkered dress shirt and a dark blazer.
  • Sean McGregor is a machine learning safety researcher serving as Lead Research Engineer at AVERI, where he leads several projects centered on making third-party auditing of AI systems effective and universal. His efforts include projects in meta-evaluation and benchmarking.

    Previously, Sean helped found the Digital Safety Research Institute at the UL Research Institutes, launched the AI Incident Database, and trained edge neural network models for the neural accelerator startup Syntiant.

    With an applications-centered research program spanning reinforcement learning for wildfire suppression and deep learning for heliophysics, Sean has worked across a wide range of safety-critical domains. Sean's open source development work has earned media attention in The Atlantic, Der Spiegel, Wired, VentureBeat, Vice, and O'Reilly, while his technical publications have appeared in a variety of machine learning, HCI, ethics, and application-centered proceedings.

Professional headshot of Grace Werner with long blonde hair, wearing a dark blazer and white top, against a plain white background.
  • Grace Werner serves as U.S. Policy Lead at AVERI, where she leads engagement with frontier AI labs, policymakers, and industry stakeholders on approaches for auditing and publishing research. She previously served as AVERI's Interim Chief of Staff, where she designed internal operational structures.

    Before joining AVERI, Grace researched the international and geopolitical dimensions of AI, with emphasis on U.S.–China competition; worked in global strategy at Visa; and managed projects on intelligence, surveillance, and reconnaissance at Sandia National Laboratories.

    Grace holds an M.Sc. in International Relations from the London School of Economics and a B.A. in Political Science from the University of New Mexico.

Portrait of A. Feder Cooper with dark brown hair, wearing a light blue button-down shirt, smiling subtly, against a white background.
  • A. Feder Cooper is a Research Scientist at AVERI, working to advance the state of the art for reliable, valid third-party evaluations of frontier system safety. Dr. Cooper's research contributions span privacy, security, scalable training algorithms, system evaluations, tech policy, and law. Much of this work has received awards at top AI/ML and other computing venues, and collaborations at the intersection of AI and law have been lauded as landmark work by AI/ML experts, technology law scholars, and the popular press. Cooper is also an incoming (tenure-track) assistant professor at Yale University, starting in summer 2026.

A portrait of Carly Tryens with long dark hair, blue eyes, and light skin, wearing a black top with button details and a gold necklace, smiling softly against a white background.
  • Carly Tryens serves as Executive Assistant to Miles Brundage. In addition to supporting executive and organizational operations, she builds effective systems and cultivates practices to ensure AVERI’s highest goals are met.

    Carly has more than eight years of experience in a range of operations roles including founding and scaling high-impact startups and nonprofits. Before joining AVERI, she was a founding member of Blueprint Biosecurity, a pandemic prevention nonprofit, where she supported the executive team and fostered organizational culture. Prior to that, she was Chief of Staff at Alvea, a vaccine development startup that set the record as the fastest biotech company to move a new drug from idea to a Phase 1 clinical trial.

Board

Portrait of Max Henderson, a smiling man in a navy suit, light blue shirt, and dark tie, against a white background.
  • Max is a board member at AVERI and the founder of Ergo Impact. A philanthropist and investor, Max has helped incubate, operate, and fund close to $1B of social good efforts across causes such as global health, science, nuclear security, and AI. Prior to his philanthropic work, Max was a tech founder and executive, holding product and GTM roles at CovidActNow, Firebase, Google, Oracle, and Compass, among others.

Close-up of Mike McCormick with short, light brown hair, wearing a button-up denim shirt, looking at the camera with a slight smile.
  • Mike is a board member at AVERI and the founder and CEO of Halcyon Futures, a nonprofit grant fund and venture capital fund dedicated to ensuring that AI is developed in ways that are safe, secure, and beneficial for humanity. Since founding Halcyon in 2023, Mike has helped establish more than sixteen new nonprofits and companies in the fields of AI alignment, cybersecurity, and global resilience, which have collectively raised more than a quarter-billion dollars in follow-on funding. Prior to Halcyon, Mike was a Partner at the San Francisco-based venture capital firm GreatPoint Ventures.

Advisors

Black and white portrait of Dean W. Ball wearing glasses, a suit, and a patterned tie, smiling, against a plain background.
  • Dean Woodley Ball is a Senior Fellow at the Foundation for American Innovation, a Policy Fellow at Fathom, and author of the AI-focused newsletter Hyperdimensional. He focuses on emerging technologies and the future of governance.

    Prior to this, he served as Senior Policy Advisor for Artificial Intelligence and Emerging Technology at the White House Office of Science and Technology Policy, where he was the primary staff drafter of America’s AI Action Plan. He was also a Strategic Advisor for AI at the National Science Foundation.

Portrait of David Duvenaud, a smiling man with shoulder-length red hair and a full beard, wearing a checkered button-up shirt, against a white background.
  • David Duvenaud is an Associate Professor in Computer Science and Statistics at the University of Toronto. He holds a Sloan Research Fellowship, a Canada Research Chair in Generative Models, and a CIFAR AI Chair, and has over 50 publications in machine learning and artificial intelligence. He completed his postdoc at Harvard University and his Ph.D. at the University of Cambridge, and is a Founding Member of the Vector Institute for Artificial Intelligence.

    He recently spent 1.5 years at Anthropic, leading their Alignment Evaluations team, contributing to their Responsible Scaling Policy, and leading research projects on jailbreaks and sabotage. He is also a co-director of the Schwartz Reisman Institute for Technology and Society and a director of the AI Safety Foundation, and has been an advisor to Cohere. He has received a Google Faculty Award and best paper awards at both the Neural Information Processing Systems (NeurIPS) conference and the International Conference on Machine Learning (ICML).

Portrait of Bri Treece with shoulder-length wavy auburn hair, wearing a navy blazer, white top, and gold necklace, against white background.
  • Bri Treece is Co-founder and Chief Impact Officer of Fathom. With two decades of experience in healthcare and AI, Bri is a recognized leader in driving innovation, building strategic partnerships, and ensuring operational excellence. Most recently, she served as the COO of the Center for AI Safety (CAIS) and founded the Center for AI Safety Action Fund, an organization connecting technical AI experts with global government leadership.

    Before her work in AI, Bri was a health tech executive. She played a critical role in leading her digital health company to a successful acquisition. She then launched the U.S. market for an at-home lab testing company during the early days of the COVID-19 pandemic, addressing urgent public health needs during a global crisis.

    Beyond her professional endeavors, Bri is dedicated to community service. She mentors first-generation youth preparing for college through Oakland Promise and volunteers with Friends of Oakland Parks & Recreation.

    Bri holds an MBA from the University of California, Berkeley’s Haas School of Business and a B.S. from Northwestern University.

Portrait of Gillian Hadfield with shoulder-length gray hair, smiling in front of a plain white background, wearing a light-colored blazer and a black-and-white polka-dot blouse.
  • Gillian K. Hadfield is an economist and legal scholar turned AI researcher thinking about how humans build the normative world and how to make sure AI plays by rules that make us all better off. She is the Bloomberg Distinguished Professor of AI Alignment and Governance at the Whiting School of Engineering and the School of Government and Policy at Johns Hopkins University. She is also professor of law (status-only) at the University of Toronto, a faculty member of the Vector Institute for Artificial Intelligence, and a Schmidt Sciences AI2050 Senior Fellow.

    Hadfield's research focuses on innovative design for legal, regulatory, and technical systems for AI, computational models of human normative systems, and building AI systems that understand and respond to human values and norms. She is a faculty affiliate at the Center for Human-Compatible AI at the University of California, Berkeley, and she previously served as the inaugural director and held the Schwartz Reisman Chair in Technology and Society at the University of Toronto.

    Her book Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy was published by Oxford University Press in 2017; a paperback edition with a new prologue on AI was published in 2020, and an audiobook version was released in 2021.

Headshot of Jose H. Orallo with dark hair and a beard, wearing a gray suit and a checkered shirt, smiling against a plain white background.
  • Jose H. Orallo is Director of Research at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK, and Professor (on partial leave) at TU Valencia, Spain. His academic and research activities have spanned several areas of artificial intelligence, machine learning, data science, and intelligence measurement, with a focus on a more insightful analysis of the capabilities, generality, progress, impact, and risks of artificial intelligence. His research on machine intelligence evaluation has been covered by several popular outlets, such as The Economist, the WSJ, the FT, New Scientist, and Nature.

    He continues to explore a more integrated view of the evaluation of natural and artificial intelligence, as advocated in his book "The Measure of All Minds" (Cambridge University Press, 2017, PROSE Award 2018). His contribution to the field of AI safety centers on new technical AI evaluation paradigms that are explanatory and predictive about the outcomes of AI ecosystems and could reliably inform scientists, policy-makers, and regulators. He is a founder of aievaluation.substack.com and ai-evaluation.org, a member of AAAI, CAIRNE, and ELLIS, and a EurAI Fellow.

Conflict of Interest Disclosures

Why this matters

Our work is ultimately about trust – trust in AI systems, in AI companies, and in the frontier AI auditing ecosystem we are trying to help bring about. Disclosing and managing AVERI’s own potential conflicts of interest is essential to earning stakeholders’ trust in the integrity of our work.

Examples of conflicts of interest

Potential conflicts of interest may arise when our team members or organization have relationships or interests that could influence or appear to influence the integrity of our work. These may include:

  • Financial interests or investments in organizations we might audit or that we might advocate for others to audit

  • Prior or ongoing consulting engagements with audited entities

  • Employment, advisory, familial, or personal relationships with individuals connected to audited entities

How we manage conflicts

We require all staff and contractors to disclose potential conflicts of interest to the leadership and administrative teams upon starting and biannually thereafter. When a conflict is identified, mitigation steps may include recusing personnel from projects or declining an engagement entirely. Where conflicts can be managed rather than requiring recusal, mitigation measures may include soliciting peer review of the work in question from other organizations, heightened oversight of projects involving conflicted personnel, and/or transparency statements in audit reports. We also require public disclosure of the most significant conflicts on our website.

AVERI COIs

Many AVERI employees have financial exposure to the AI sector as a whole as a result of owning shares in (non-AI-specific) stock indices such as the S&P 500, which includes companies like NVIDIA, Microsoft, Meta, and Alphabet. Some of these companies in turn own portions of non-publicly-traded AI companies such as OpenAI and Anthropic. Additionally, members of our team have individual investments in Microsoft, Google, NVIDIA, and Broadcom. Members of our staff have been employed at companies including OpenAI and Microsoft. Any ongoing, material financial conflict of interest, and any recent interpersonal conflict of interest, whether actual or perceived, is subject to mandatory recusal.

AVERI’s Executive Director, Miles Brundage, earned equity in OpenAI while employed at the company. He has since sold all shares that he is eligible to sell and will sell the remainder as soon as he is able to do so. Miles also has investments in venture capital funds with exposure to AI companies, though these represent a very small fraction of his net worth compared to his stock index holdings. Miles has personal relationships with employees at a range of frontier AI companies. Under AVERI's policies, Miles is recused from directly auditing OpenAI until at least two years after his departure (though he may help facilitate an audit principally led by others, with all of his involvement documented clearly in any related public outputs), with possible extension depending on the timing of his final direct equity sale.