https://aws.amazon.com/blogs/opensource/sustainability-with-rust/

Rust is a programming language implemented as a set of open source projects. It combines the performance and resource efficiency of systems programming languages like C with the memory safety of languages like Java. Rust started as a research project at Mozilla in 2010, and Rust 1.0 launched in 2015. In 2020, support for Rust moved from Mozilla to the Rust Foundation, a non-profit organization created as a partnership between Amazon Web Services, Inc. (AWS), Google, Huawei, Microsoft, and Mozilla. The Foundation’s mission is to support the growth and innovation of Rust, and its membership has grown from the five founding companies to 27 in the Foundation’s first year.
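To make the memory safety claim concrete, here is a minimal sketch (an illustration for this discussion, not code from the Rust project) of how the compiler’s ownership rules catch an entire class of memory bugs at build time, with no garbage collector at runtime:

```rust
fn main() {
    let data = vec![1, 2, 3]; // `data` owns this heap allocation
    let moved = data;         // ownership moves to `moved`; `data` is no longer valid

    // println!("{:?}", data); // uncommenting this is a compile error: the compiler
    //                         // rejects the use of the moved-out `data`, so the
    //                         // bug never reaches a running system

    println!("{:?}", moved);  // prints [1, 2, 3]
}                             // `moved` goes out of scope here and the memory is
                              // freed deterministically, without a garbage collector
```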

At AWS, Rust has quickly become critical to building infrastructure at scale. Firecracker is an open source virtualization technology that powers AWS Lambda and other serverless offerings. It launched publicly in 2018 as our first notable product implemented in Rust. We use Rust to deliver services such as Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), Amazon CloudFront, and more. In 2020, we launched Bottlerocket, a Linux-based container operating system written in Rust, and our Amazon EC2 team uses Rust as the language of choice for new AWS Nitro System components, including sensitive applications, such as Nitro Enclaves.

At AWS, we believe leaders create more than they consume and always leave things better than they found them. In 2019, AWS was proud to become a sponsor of the Rust project. In 2020, we started hiring Rust maintainers and contributors, and we partnered with Google, Huawei, Microsoft, and Mozilla to create the Rust Foundation with a mission to support Rust. AWS is investing in the sustainability of Rust, a language we believe should be used to build sustainable and secure solutions.

Energy Efficiency in the Cloud

[Figure: Global data centre energy demand by data centre type, 2010-2022]

Source: IEA (2021), Global data centre energy demand by data centre type, 2010-2022, https://www.iea.org/data-and-statistics/charts/global-data-centre-energy-demand-by-data-centre-type-2010-2022. All rights reserved.

Worldwide, data centers consume about 200 terawatt hours per year. That’s roughly 1% of all energy consumed on our planet. There are a couple of really interesting things about the details of that energy use. If you look at the graph of energy consumption, the top line is basically flat going back as far as 2010. That’s incredibly counter-intuitive given the tremendous growth in big data, machine learning, and edge devices our industry has experienced over that same period of time.

The second interesting detail is that while the top line of the graph is flat, inside the graph, the distribution over traditional, cloud, and hyperscale data centers has changed dramatically in the same period. Those cloud and hyperscale data centers have been implementing huge energy efficiency improvements, and the migration to that cloud infrastructure has been keeping the total energy use of data centers in balance despite massive growth in storage and compute for more than a decade.

There have been too many data center efficiency improvements to list, but here are a few examples. In compute, we’ve made efficiency improvements in hardware and implemented smarter utilization of resources to reduce idle time. We’ve slowed the growth of our server fleet with support for multi-instance and multi-tenant workloads, and we’ve improved drive density and efficiency for storage. We’ve also adopted more energy efficient building materials and cooling systems.

As incredible as that success story is, it raises two questions. First, is the status quo good enough? Is holding data center energy use at 1% of worldwide energy consumption adequate? Second, will innovations in energy efficiency continue to keep pace with growth in storage and compute in the future? Given the explosion we know is coming in autonomous drones, delivery robots, and vehicles, and the incredible amount of data consumption, processing, and machine learning training and inference required to support those technologies, it seems unlikely that energy efficiency innovations will be able to keep pace with demand.


The energy efficiency improvements we’ve talked about so far have been the responsibility of AWS, but just like security, sustainability is a shared responsibility. AWS customers are responsible for energy efficient choices in storage policies, software design, and compute utilization, while AWS owns efficiencies in hardware, utilization features, and cooling systems. We are also making huge investments in renewable energy.

AWS is on a path to have 100% of our data centers powered with renewable energy by 2025, but even renewables have an environmental impact. It would take about half a million acres of solar panels to generate the 200 terawatt hours of energy used by data centers today. The mining, manufacturing, and management of that many solar panels has substantial environmental impact. So, while we’re really proud of our success with renewable energy, as Peter DeSantis, SVP of AWS, said at re:Invent 2020, “The greenest energy is the energy we don’t use.”

Renewables should not replace energy efficiency as a design principle. In the same way that operational excellence, security, and reliability have been principles of traditional software design, sustainability must be a principle in modern software design. That’s why AWS announced the addition of a sixth pillar, sustainability, to the AWS Well-Architected Framework.

What that looks like in practice is making choices like relaxing SLAs for non-critical functions and prioritizing resource use efficiency. We can take advantage of virtualization and allow for longer device upgrade cycles. We can leverage caching and longer TTLs whenever possible. We can classify our data and implement automated lifecycle policies that delete data as soon as possible. When we choose algorithms for cryptography and compression, we can include efficiency in our decision criteria. Last, but not least, we can choose to implement our software in energy efficient programming languages.
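As one sketch of what “leverage caching and longer TTLs” can look like in application code (the cache type, the five-minute TTL, and the fetch closure below are illustrative assumptions, not part of any AWS API), a small in-process cache lets repeated reads skip expensive downstream work:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// A minimal in-process cache with a per-entry time-to-live (TTL).
/// A longer TTL trades freshness for fewer downstream calls, which
/// means less compute and network use for the same workload.
struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    /// Return the cached value if it is still fresh; otherwise run the
    /// expensive `fetch` once and remember the result.
    fn get_or_fetch(&mut self, key: &str, fetch: impl FnOnce() -> String) -> String {
        if let Some((stored_at, value)) = self.entries.get(key) {
            if stored_at.elapsed() < self.ttl {
                return value.clone();
            }
        }
        let value = fetch();
        self.entries.insert(key.to_string(), (Instant::now(), value.clone()));
        value
    }
}

fn main() {
    // Illustrative five-minute TTL; the right value depends on how much
    // staleness a given use case can tolerate.
    let mut cache = TtlCache::new(Duration::from_secs(300));

    // The second lookup is served from memory instead of repeating the work.
    let first = cache.get_or_fetch("config", || "value-from-origin".to_string());
    let second = cache.get_or_fetch("config", || "value-from-origin".to_string());
    println!("{first} / {second}");
}
```

The same trade-off applies at every layer: the longer a result can safely be reused, the less often the expensive path runs.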

Energy Efficient Programming Languages

There was a really interesting study a few years ago that looked at the correlation between energy consumption, performance, and memory use. This is a really common conversation in sustainability: given how little visibility we have into the energy or carbon use of our services, is there a metric that can serve as a proxy? Can I look at my existing service dashboards, with infrastructure costs, performance, memory, and so on, and use the trends I see to infer something about the trends in my service’s energy consumption?

The study implemented 10 benchmark problems in 27 different programming languages and measured execution time, energy consumption, and peak memory use. C and Rust significantly outperformed other languages in energy efficiency. In fact, they were roughly 50% more efficient than Java and 98% more efficient than Python.
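As a rough sketch of what that kind of measurement involves (this is an illustration, not the study’s actual harness; the fib kernel, its input, and the timing approach are assumptions for the example), timing a fixed CPU-bound workload is simple to do in-process, while the study’s energy and peak-memory figures required OS- and hardware-level measurement that this sketch does not attempt:

```rust
use std::time::Instant;

// A stand-in benchmark kernel: deliberately CPU-bound and trivial to
// reimplement identically in other languages for comparison.
fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

fn main() {
    let start = Instant::now();
    let result = fib(34);
    let elapsed = start.elapsed();

    // Execution time can be captured in-process like this; energy and
    // peak memory need external counters and OS accounting.
    println!("fib(34) = {result}, elapsed = {elapsed:?}");
}
```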