Funders
As an independent nonprofit, we rely primarily on the ongoing support of donors to continue our research. In addition to the contributions listed below, Epoch AI has received smaller donations from individual supporters. Here, we list only donations of $70,000 USD or more.
We value every contribution as a vote of confidence in our work and vision, and we pledge to use it to further our mission of building a better shared understanding of AI. If you would like further information about specific funding needs, please don’t hesitate to contact us at donate@epoch.ai.
Coefficient Giving
General Support
April 2025
$8,500,000 USD
March 2025
$70,000 USD
FrontierMath Benchmark Improvements
April 2024
$4,132,488 USD
April 2023
$6,922,565 USD
February 2023
$188,558 USD
June 2022
$1,960,000 USD
Jaan Tallinn
General Support
January 2025
$600,000 USD DAF donation
Likith Govindaiah
General Support
January 2025
$400,000 USD DAF donation
Leopold Aschenbrenner
Pilot for New Benchmark
January 2025
$200,000 USD awarded via Manifund
Sentinel Bio
Tracking developments in biological AI models
September 2025
$85,000 USD
September 2024
$80,000 USD
Carl Shulman
General Support
March 2023
$100,000 USD DAF donation
Schmidt Sciences
FrontierMath: Open Problems pilot
December 2025
Collaborations and consultations
We work with leading organizations to produce impactful research that advances the public’s understanding of AI. For simplicity, we may omit engagements under $30,000 USD. Considering working with Epoch AI?
EU AI Office
2026
Technical Consultations
Google DeepMind
2025
Model Evaluations
METR (Model Evaluation and Threat Research)
2025
Building a software engineering benchmark
Blitzy Inc.
2025
SWE Evaluations
Bridgewater Associates AIA Labs
2025
Consultations
xAI
2025
Model Evaluations
Sequoia Capital Global Equities
2025
Research on RL scaling trends
Google DeepMind
2024/25
Joint report
UK Department for Science, Innovation and Technology
2024/25
Consultations
EPRI
2024/25
Joint report on current trends in the energy demand for AI training
OpenAI
2024/25
Developing the FrontierMath benchmark
Advanced Research + Invention Agency
2025
Data insights on GPU generations
Anthropic
2024
Small pilot for new benchmark
AI Index
2024/25
Joint report on the cost of AI development, and an investigation into AI applications
FAQ
Do you lobby or advocate for specific policies?
No.
Epoch does not advocate for any particular stance on AI policy, and we do not engage in lobbying. While our staff have their own (diverse) opinions on how AI should be handled, we see Epoch as providing a unique service of informing the public with trustworthy data and evidence about AI without pushing for a specific agenda.
We have partnered with government agencies worldwide, and we see this as an important part of our mission to inform governments of the state of the art in AI so they can enact wiser policies. We might point out the consequences of a policy, while adhering to our usual standards of rigour and transparency. For example, we might publish estimates on the number of developers affected by a compute threshold regulation, and point out that keeping the scope limited will require elevating the threshold. But Epoch AI does not make policy recommendations.
Are you for or against AI progress?
Our staff, like the AI community more broadly, is split on whether advancing AI will ultimately be good for the world. As an organization, we are decidedly neutral on this question. We work on projects that may, in different ways, advance or slow down AI development. These projects are chosen because their primary purpose is to advance public understanding of this technology.
Many of our research projects may help advance the state of the art in artificial intelligence. We partnered with OpenAI to create the leading AI benchmark in mathematics, we have gone to great lengths to study bottlenecks to AI scaling, and we have advanced research on AI scaling laws. However, our goal is not to contribute to AI progress per se. In choosing to work on these projects, we are prioritizing our mission of improving societal understanding of the trajectory of AI.
How do you decide who to work with?
Most of our funding comes from grants and donations. However, we also receive funding from, and maintain contractual relationships with, a number of organizations that either work on or are affected by AI. These include large AI companies (e.g. Google), government agencies (e.g. the UK Department for Science, Innovation and Technology), organizations in adjacent sectors (e.g. the Electric Power Research Institute), and companies that are affected by AI (including hardware, energy, and investment firms, consultancies, and others).
In choosing who to work with, we ask ourselves the following questions:
Will our partner be making important decisions that affect the trajectory of AI? We prefer to partner with organizations that are making important decisions in AI, especially in government. These partners provide us with useful feedback on what is most important to work on and directly advance our mission of informing high-stakes decisions on AI.
Will the project help us gain a deeper understanding of AI? We have an opinionated view on which questions are most important to work on, which has prompted us to investigate emerging trends early, such as inference-time scaling and data scarcity. In our partnerships, and our work more generally, we choose to work on questions that we believe are important for the future of AI.
Will the work be released publicly? We prioritize projects with public outputs that advance the understanding of AI. By publishing our work, we reach a wider audience and have greater impact, including among audiences we would not have anticipated. This is not always possible — for example, when consulting for governments on sensitive topics — but these are exceptions, and we strive to find acceptable compromises.
Will we have full editorial independence over publishable outputs? We care deeply about always being able to say what we believe to be true, transparently and freely, independently of who funded the research.
Will we be able to learn from our partner? We prioritize working with partners whom we can learn from, and who can provide good feedback on our work. For example, we partnered with EPRI to produce more accurate work on energy demand for AI.
Are we striking a good deal? We aim to charge prices at least on par with industry consultants so that we aren’t inadvertently subsidising the work of our partners. Any profits we make are fully reinvested into our mission, mainly subsidising our public research.
Throughout these partnerships, we maintain a high level of transparency, and we are committed to disclosing any research sponsorship and data access agreements with industry.
Do you invest in AI?
We invest part of our funds in semiconductor and AI stocks as part of a diversified portfolio. Any gains from such investments increase our capacity to advance our mission in scenarios when our work matters most.