How Edge Devices Can Help Mitigate the Global Environmental Cost of Generative AI
NORTHAMPTON, MA / ACCESSWIRE / May 8, 2024 / Qualcomm
Exploring the role of edge devices in reducing energy consumption and promoting sustainability in AI systems
Written by Angela Baker
The economic value of generative artificial intelligence (AI) to the world is immense. Research from McKinsey estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually.1
But the energy cost of AI and its environmental impact can also be extensive unless our technology approach evolves to effectively tackle these challenges.
Current projections vary, but there are startling analyses of generative AI's energy use and its impact on the environment. A peer-reviewed report in Joule projects AI energy use growing to over 85 terawatt-hours annually, more than the usage of many small countries (Ireland is the example given).2 Popular studies, like those from Gartner, paint dire pictures of the environmental impact and the expense of adapting our computing infrastructure to generative AI. Gartner's report predicts that by 2030, AI could consume up to 3.5% of the world's electricity.3
Additionally, data center processing requires cooling, and cooling consumes water. In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022, an increase that outside researchers tie to AI.
It is imperative to find ways to make AI processing more energy efficient and sustainable. But most reports focus almost entirely on the energy used by AI in the cloud and in data centers.
Increasing Efficiency by Running AI Models in Devices
Generative AI does not have to run exclusively in the cloud.
Currently, training a consumer-grade generative AI model requires a massive cluster of AI hardware and the power to run it. A researcher at the University of Washington estimated that training a model like GPT-3, the model behind the original ChatGPT, could use up to 10 gigawatt-hours, roughly equivalent to the annual energy consumption of 1,000 U.S. households.4
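The household comparison holds up as back-of-the-envelope arithmetic, assuming the commonly cited figure of roughly 10,000 kWh of electricity per U.S. household per year (the exact average varies by source and is not from the cited study):

```python
# Rough sanity check of the comparison above.
# The ~10,000 kWh/year household figure is an assumed round-number
# average, not a value taken from the University of Washington estimate.

training_energy_kwh = 10e6      # 10 gigawatt-hours expressed in kWh
household_annual_kwh = 10_000   # assumed average U.S. household usage per year

households = training_energy_kwh / household_annual_kwh
print(int(households))  # → 1000
```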
But once an AI model is trained, it can be reduced and optimized to run on a significantly less power-hungry piece of hardware, like a smartphone or battery-powered laptop.
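One common way a trained model is "reduced" for on-device use is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats. The sketch below is a minimal, illustrative example of symmetric int8 quantization with made-up weight values; it is not Qualcomm's method or any particular framework's implementation.

```python
# Minimal sketch of post-training weight quantization.
# All weight values below are illustrative, not from a real model.

def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.05, 0.64, -0.33]
quantized, scale = quantize_int8(weights)
approx = dequantize(quantized, scale)

# Each weight now takes 1 byte instead of 4 (float32): a 4x memory cut,
# which lowers memory bandwidth and energy per inference on the device.
print(quantized)                        # int8-range integers
print([round(w, 2) for w in approx])    # close to the original weights
```

In practice, frameworks pair quantization with other optimizations (pruning, distillation, operator fusion) before deploying to a phone or laptop NPU, but the memory and energy savings follow the same basic logic.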
For instance, research by analyst firm Creative Strategies5 concludes that Snapdragon 8 Gen 3, a flagship processor for smartphones, is 30 times more efficient than a data center at image generation tasks. For laptop PCs, the same report states that the Snapdragon X Elite Compute Platform is nearly 28 times more efficient than running the same AI task in a data center.