The computational performance of machine learning hardware has doubled every 2.3 years
By Robi Rahman
Measured in 16-bit floating point operations, ML hardware performance has increased at a rate of 36% per year, doubling every 2.3 years. A similar trend exists in 32-bit performance. Optimized ML number formats and tensor cores provided additional improvements.
The improvement was driven by increasing transistor counts and other semiconductor manufacturing improvements, as well as specialized design for AI workloads. This improvement lowered cost per FLOP, increased energy efficiency, and enabled large-scale AI training.
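As a sanity check, the stated 2.3-year doubling time follows directly from the 36% annual growth rate via the standard doubling-time formula, which a few lines of Python can confirm:

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years required to double at a fixed compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# 36% per year compounds to a doubling roughly every 2.3 years
print(round(doubling_time(0.36), 1))  # -> 2.3
```

The computed value is about 2.25 years, which rounds to the 2.3-year figure reported above.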
Epoch AI's work is free to use, distribute, and reproduce provided the source and authors are credited under the Creative Commons Attribution license.
Explore this data
Machine Learning Hardware
Key data on over 170 AI accelerators, such as graphics processing units (GPUs) and tensor processing units (TPUs).
Citation
Robi Rahman (2024), "The computational performance of machine learning hardware has doubled every 2.3 years". Published online at epoch.ai. Retrieved from 'https://epoch.ai/data-insights/peak-performance-hardware-on-different-precisions' [online resource]. Accessed 2 Apr 2026.
BibTeX Citation
@misc{epoch2024peakperformancehardwareondifferentprecisions,
title={The computational performance of machine learning hardware has doubled every 2.3 years},
author={Robi Rahman},
year={2024},
url={https://epoch.ai/data-insights/peak-performance-hardware-on-different-precisions},
note={Accessed: 2026-04-02}}