
The Environmental Impact of ChatGPT: A Call for Sustainable Practices In AI Development

by Sophie McLean · Global Commons · Apr 28th 2023 · 4 mins

ChatGPT, a large language model developed by OpenAI, has garnered widespread attention for its remarkable natural language processing capabilities. However, as with any large language model, training and running the AI system requires a tremendous amount of energy, resulting in significant environmental costs that are often overlooked. In this article, we take a look at what we already know about the environmental impact of ChatGPT.

Large language models are systems trained to predict the likelihood of a ‘token’, such as a character, word, or entire sequence, based on the preceding or surrounding context. While AI has brought about impressive results in areas such as workplace productivity, there is also cause for concern, typically centring on its propensity to amplify biases and the liability issues surrounding the “black box” nature of AI systems. 

As more companies and industries adopt AI technology, it is important not to marginalise discussions about the environmental consequences of large language models. These conversations should be taken seriously and considered alongside those about the societal impacts of new AI systems. 

You might also like: Can AI Help Achieve Environmental Sustainability?

The Environmental Impact of Data Centres

Data centres are facilities that house power-hungry servers required by AI models, and they consume a considerable amount of energy while generating a substantial carbon footprint. Cloud computing, which AI developers like OpenAI use, relies on the chips inside these data centres to train algorithms and analyse data. 

According to estimates, ChatGPT emits 8.4 tons of carbon dioxide per year, more than twice the roughly 4 tons emitted annually by the average individual. Of course, the type of power source used to run these data centres affects the amount of emissions produced – with coal or natural gas-fired plants resulting in much higher emissions than solar, wind, or hydroelectric power – making exact figures difficult to provide. 

A recent study by researchers at the University of California, Riverside, revealed the significant water footprint of AI models like GPT-3 and GPT-4. The study reports that Microsoft used approximately 700,000 litres of freshwater during GPT-3’s training in its data centres – equivalent to the amount of water needed to produce 370 BMW cars or 320 Tesla vehicles. This is primarily a result of the training process, in which large amounts of energy are used and converted into heat, requiring a staggering quantity of freshwater to keep temperatures under control and cool down machinery. Further, the model also consumes a significant amount of water during inference, which occurs when ChatGPT is used for tasks like answering questions or generating text. For a simple conversation of 20-50 questions, the water consumed is equivalent to a 500ml bottle, making the total water footprint for inference substantial given ChatGPT’s enormous user base. 
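To put the inference figure in perspective, here is a rough back-of-envelope sketch in Python that scales the study’s per-conversation estimate (roughly 500ml per 20-50 question conversation) up to a hypothetical user base. The user and usage numbers are illustrative assumptions for the sake of the calculation, not figures from the study.

```python
# Back-of-envelope estimate of ChatGPT's inference water footprint, based on
# the UC Riverside study's figure of ~500 ml per 20-50 question conversation.
# The user-base and usage numbers below are illustrative assumptions.

LITRES_PER_CONVERSATION = 0.5           # ~500 ml per 20-50 question exchange
ASSUMED_DAILY_USERS = 10_000_000        # hypothetical number of daily users
ASSUMED_CONVERSATIONS_PER_USER = 1      # hypothetical conversations per user per day

daily_litres = LITRES_PER_CONVERSATION * ASSUMED_DAILY_USERS * ASSUMED_CONVERSATIONS_PER_USER
annual_litres = daily_litres * 365

# For scale: GPT-3's training reportedly used ~700,000 litres of freshwater.
TRAINING_LITRES = 700_000

print(f"Estimated inference water use: {daily_litres:,.0f} L/day")
print(f"Estimated inference water use: {annual_litres:,.0f} L/year")
print(f"That is ~{annual_litres / TRAINING_LITRES:,.0f}x the reported training footprint")
```

Even under these modest assumptions, day-to-day use quickly dwarfs the one-off training footprint, which is why the researchers flag inference as a major contributor.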

As language models continue to grow in size, it is increasingly necessary to explore ways to mitigate their negative impact on the environment in order to pave a sustainable path forward.

Responding to an inquiry made by Bloomberg regarding sustainability concerns, OpenAI said that it takes its role in stopping and reversing climate change “very seriously” and that a lot of thought goes into finding the most efficient way to use its computing power.

“OpenAI runs on Azure, and we work closely with Microsoft’s team to improve efficiency and our footprint to run large language models.” 

Data centres are typically considered a black box compared to other industries that report their carbon footprints; thus, while researchers have estimated emissions, there is no official figure documenting the total power used by ChatGPT. The rapid growth of the AI sector, combined with limited transparency, means that the total electricity use and carbon emissions attributed to AI are unknown, and major cloud providers do not disclose the necessary information.

How Do We Reduce the Environmental Impact of AI?

One way to address this issue is to advocate for greater transparency in the development and operation of machine learning systems. Scholars have developed frameworks to assist researchers in reporting their energy and carbon usage, in the hope of promoting accountability and responsible practices in the field. To aid researchers in benchmarking their energy usage, some scholars have made public online tools that encourage teams to run experiments in regions with lower-carbon electricity, provide consistent updates on energy and carbon measurements, and actively assess the trade-offs between energy usage and performance before deploying energy-intensive models. 
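As an illustration of what this kind of reporting can look like in practice, the sketch below uses the open-source codecarbon Python package, one example of a publicly available tracking tool. The training function and project name are placeholders, and the exact API details should be checked against the package’s documentation.

```python
# Sketch: logging estimated energy/carbon for a training run with the
# open-source codecarbon package (pip install codecarbon). train_model() is
# a placeholder for whatever training loop a team actually runs.

from codecarbon import EmissionsTracker


def train_model():
    # Placeholder for an actual training loop (e.g. a PyTorch or JAX job).
    pass


tracker = EmissionsTracker(project_name="llm-finetune-demo")  # hypothetical project name
tracker.start()
try:
    train_model()
finally:
    # stop() returns an estimate of the CO2-equivalent emitted, in kilograms.
    emissions_kg = tracker.stop()

print(f"Estimated emissions for this run: {emissions_kg:.4f} kg CO2eq")
```

Publishing numbers like these alongside model results is one concrete way teams can make the energy-versus-performance trade-off visible before deploying larger models.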

Individuals also play a crucial role in promoting greater accountability in the sphere of AI. One way to do this is by reducing the hype surrounding new, flashy AI systems like ChatGPT and recognising the limitations of language models. By placing their achievements in proper context and acknowledging the trade-offs involved, we can actively encourage new avenues of research that do not solely depend on developing larger and more complex models. This approach not only promotes responsible practices in the field of AI but also paves the way for “greener” AI. 

Final Thoughts

As AI systems like ChatGPT continue to revolutionise various sectors and industries with their impressive capabilities, we must prioritise sustainable practices in AI development. Greater transparency and accountability in the development and operation of machine learning systems, as well as individual efforts to recognise the limitations of language models, can help reduce the environmental costs of AI. 

By promoting responsible practices in AI development and research, we can work towards creating a more sustainable and equitable future, where technological progress does not come at the cost of our planet.

You might also like: 7 Data-Based & Artificial Intelligence Projects To Help Fight Climate Change

About the Author

Sophie McLean

Sophie McLean, an international student with academic experience in Canada, the UK, and Hong Kong, has gained invaluable exposure to diverse perspectives on environmental issues. She is set to graduate this summer from McGill University with a BA in Philosophy and a minor in Political Science, having completed a semester abroad at the University of Bristol. Sophie's passion for environmental advocacy has led her to pursue a Law degree at the University of Cambridge this fall, where she hopes to gain the expertise necessary to create a real impact in the field. Sophie's interests lie in staying updated on new scientific developments, green technology, and environmental policies.
