THE ECONOMICS OF ENERGY EFFICIENCY: HUMAN COGNITION VS. AI LARGE LANGUAGE MODELS
Keywords:
Energy economics, efficiency, Large Language Models, neuromorphic computing, sustainable AI, human cognition
Abstract
The human brain and Large Language Models (LLMs) exemplify two profoundly different information-processing systems, with an imbalance in energy consumption that spans orders of magnitude. This article provides a comparative analysis of the energy requirements of biological cognition and machine-based models, linking these differences to their economic consequences. The operational costs of LLMs, driven by energy-intensive computation, pose financial and environmental sustainability challenges that constrain the scalability of AI adoption across industries. In contrast, the human brain, which operates on roughly 20 watts of power, highlights the economic advantages of biological intelligence, achieving remarkable efficiency with minimal resource investment. We explore the economic trade-offs of AI's growing energy demands, including cost implications for firms, policy responses such as carbon regulations, and the potential labor market disruptions arising from energy-efficient automation. By integrating insights from neuroscience, engineering, and economics, we argue that sustainable AI development hinges on a fusion of brain-inspired paradigms, eco-efficient hardware solutions, and strategic policy frameworks that balance innovation with economic feasibility.