Background & Occupation
My name is Jay Dietrich. I was always interested in engineering, even from a young age – I decided I wanted to be an engineer when I was in middle school. The first quarter-century of my career was spent at a semiconductor manufacturing plant, where I worked as an environmental engineer and environmental manager in the facilities area. In my last couple of years at the plant, I was the second-line manager for both ‘environmental engineering and operations’ and ‘utility engineering and operations’. So, I’ve always been involved in the environmental area, in a very technical and very down-to-earth way.
I then moved into an assignment in the corporate environmental group, with responsibility for energy efficiency and climate change policy and programmes for the last 13 years of my career. Taking a broader role in the company was very exciting for me because I’m interested in solving problems that affect our environment and the way we live, and the corporate role gave me a broader impact on actions and policies in the energy and climate area. In that role, I worked with the operating units to set energy efficiency policies and practices, including data centre efficiency initiatives. I also worked on all of our climate change activities, including renewable energy procurement and managing our greenhouse gas emissions reporting. Energy efficiency has always been the central pillar of the climate change programme – that’s really where the focus and opportunity for meaningful action lies.
Energy Efficiency at Data Centres
The tech industry is a very interesting space, especially when you look at data centres. The expanding demand for server, network, and storage services at these centres, and the associated energy demand, is well documented in the media. There is another part of the story that is equally important: with each new generation of processor technology, you see anywhere from a doubling to a tripling of the work delivered per unit of energy consumed. The same technology advances increase the capacity per watt of storage products and the throughput per watt of network devices. So despite the expanding demand for data centre services, these gains in work and capacity per watt mean energy use will grow gradually rather than at the exponential rate predicted by some.
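To make that concrete, here is a small back-of-the-envelope sketch in Python. The figures are assumptions for illustration (roughly 25% annual growth in delivered work, and a doubling of work per watt at every three-year hardware refresh), not measured data from any particular data centre; the point is simply how compounding efficiency gains can hold energy use roughly flat even as demand grows several-fold.

```python
# Illustrative back-of-the-envelope model (assumed numbers, not measured data):
# compare compounding demand growth against per-generation efficiency gains.

def projected_energy(years, demand_growth=0.25, perf_per_watt_gain=2.0, refresh_years=3):
    """Relative data centre energy use, starting at 1.0 in year 0.

    demand_growth      -- assumed annual growth in delivered work (25%/yr)
    perf_per_watt_gain -- assumed work-per-watt multiplier per hardware generation (2x)
    refresh_years      -- assumed years between hardware generations
    """
    energy = []
    work = 1.0
    perf_per_watt = 1.0
    for year in range(years + 1):
        energy.append(work / perf_per_watt)
        work *= 1.0 + demand_growth
        if (year + 1) % refresh_years == 0:
            perf_per_watt *= perf_per_watt_gain  # new hardware generation deployed
    return energy

if __name__ == "__main__":
    for year, e in enumerate(projected_energy(9)):
        print(f"year {year}: relative energy {e:.2f}")
```

Over a nine-year horizon with these assumptions, delivered work grows roughly seven-fold while relative energy use stays within about 50% of its starting value, rising between refreshes and dropping with each new hardware generation.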
I think the real challenge and opportunity in managing data centre efficiency and energy consumption is maximising the utilisation of the available server, storage, and network hardware capacity as workload fluctuates over the day, month, or year. There is also a challenge in minimising the energy consumption of the operating hardware when no workload is present. Fortunately, various software workload and system management tools are available to optimise and maximise the use of that capacity and reduce energy use at lower workload levels.
Workload placement and management software is available to optimise the operation of whole systems of servers and storage products in either an enterprise or a cloud data centre. The software can move workload to maximise the utilisation of the servers and minimise the number of servers needed to support a given set of workloads. More sophisticated software can also move workload in real time, maximising the utilisation of active servers while allowing unneeded servers to drop into a low-power mode, delivering further reductions in energy consumption. Deployment of these tools can deliver hardware and energy consumption reductions of up to 60% in a data centre system.
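As a deliberately simplified illustration of the consolidation idea behind these tools, the Python sketch below packs a set of hypothetical CPU demands onto as few servers as possible with a greedy first-fit rule, so the remaining machines can drop into a low-power state. Real workload placement products weigh many more constraints (memory, I/O, affinity, redundancy, service levels), so treat this as a sketch of the principle rather than of any particular product.

```python
# A minimal sketch of the consolidation idea behind workload placement tools:
# first-fit packing of workloads onto servers so idle machines can be powered down.

def consolidate(workloads, server_capacity=1.0, target_utilisation=0.8):
    """Greedy first-fit: returns a list of per-server load totals."""
    limit = server_capacity * target_utilisation
    servers = []
    for load in sorted(workloads, reverse=True):
        for i, used in enumerate(servers):
            if used + load <= limit:
                servers[i] += load
                break
        else:
            servers.append(load)  # no fit found: bring another server online
    return servers

if __name__ == "__main__":
    demands = [0.10, 0.25, 0.05, 0.40, 0.15, 0.30, 0.20, 0.35]  # assumed CPU fractions
    placed = consolidate(demands)
    print(f"{len(placed)} active servers instead of {len(demands)}")
    for i, used in enumerate(placed):
        print(f"  server {i}: {used:.2f} utilised")
```

With the assumed demands, eight workloads land on three active servers held below an 80% utilisation cap, which is the kind of hardware reduction behind the consolidation savings described above.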
An equally important energy efficiency lever is for data centre operators to make use of the power management capabilities of server products. The idle power demand of a given server configuration will be between 20% and 60% of the maximum power demand, depending on the hardware components and the applied power management settings. To minimise the idle-to-maximum power ratio for a given server or data centre, there has to be a balance between lower-power settings and the performance and response times the clients want: the settings have to allow a server to return to a full-response state quickly and seamlessly. The difficulty is that optimised power management settings can save up to several hundred dollars per server per year, but a failure to meet service requirements could cost a million dollars a day or more, so you need confidence that you can meet service requirements when you integrate those settings into your system. Storage and network products have limited power management capabilities – there are opportunities to expand the capabilities of these products to deliver further energy consumption reductions in the data centre.
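The trade-off described above comes down to fairly simple arithmetic. The sketch below estimates the annual saving from cutting idle draw from 60% to 20% of maximum power; every figure in it (server power, idle hours, electricity price, facility overhead) is an assumption for illustration rather than a number from the interview, but it shows how a saving in the low hundreds of dollars per server per year arises, and why the far larger cost of missing service requirements dominates the decision.

```python
# Rough arithmetic behind the power-management trade-off described above.
# All figures are illustrative assumptions, not measured values.

MAX_POWER_W = 500           # assumed maximum power draw of one server
IDLE_UNMANAGED = 0.60       # idle draw at 60% of max with minimal power management
IDLE_MANAGED = 0.20         # idle draw at 20% of max with aggressive settings
IDLE_HOURS_PER_YEAR = 5000  # assumed hours per year the server sits near idle
PRICE_PER_KWH = 0.12        # assumed electricity price, $/kWh
PUE = 1.5                   # assumed facility overhead (cooling, power distribution)

def annual_idle_cost(idle_fraction):
    """Yearly electricity cost of idling at the given fraction of max power."""
    kwh = MAX_POWER_W * idle_fraction * IDLE_HOURS_PER_YEAR / 1000
    return kwh * PUE * PRICE_PER_KWH

saving = annual_idle_cost(IDLE_UNMANAGED) - annual_idle_cost(IDLE_MANAGED)
print(f"Estimated annual saving per server: ${saving:.0f}")
```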
SERT
There is now a standard measure for servers – the Server Efficiency Rating Tool (SERT™). The tool is a good representative test of the work per unit of energy a given server product can deliver, and it provides an effective metric for setting energy efficiency limits for server products. But as limits are set, you have to look at how those servers are used as a system in a data centre. And that’s where we’ve had lively discussions in the regulatory arena about how to set appropriate requirements that recognise how much a server’s performance varies with the processor, memory, storage, and network components you choose for your application. You then have to ask how to set an effective efficiency requirement that still allows the data centre system to be built and operated in a way that reduces energy consumption while meeting the service requirements. It is important to consider these many variables when setting efficiency limits, to avoid restricting the availability of server configurations that minimise energy consumption for a given group of workloads.
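To illustrate why configuration matters when setting limits, the sketch below computes a plain work-per-energy ratio for a few hypothetical server configurations. This is emphatically not the SERT methodology (SERT runs a suite of worklets and aggregates the results), and all the numbers are made up; the point is that measured efficiency shifts with the processor, memory, storage, and network options chosen, and that a configuration scoring lower on such a ratio may still minimise total energy for a given group of workloads because it allows more work to be consolidated onto fewer machines.

```python
# Simplified work-per-energy comparison across hypothetical server configurations.
# This is NOT the SERT methodology; it only illustrates that measured efficiency
# shifts with the processor, memory, and storage options chosen.

configs = {
    # assumed (transactions completed, kWh consumed) over the same test window
    "compute-heavy, minimal memory": (1_200_000, 10.0),
    "large memory, extra storage":   (1_050_000, 11.5),
    "balanced configuration":        (1_150_000, 10.5),
}

for name, (work, energy_kwh) in configs.items():
    print(f"{name}: {work / energy_kwh:,.0f} transactions per kWh")
```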
Workload Placement
One of the things that I advocated for when talking with some regulatory bodies about data centre metrics was to focus on making sure that the data centre is using workload placement software to optimise workload placement. The use of software tools will optimise hardware utilisation and minimise energy consumption. For some workloads or data centre types, the optimised average utilisation will be 20-40%; for others, you’ll get 60-70% utilisation because of the way the workload can be put together on the available hardware. Similarly, sensors and cooling management software tools can optimise cooling, further reducing overall energy consumption.
Renewable Energy
One other initiative that has received a lot of attention is the procurement of renewable energy for data centre operations. There have been a lot of good things happening in this space, but ultimately we need to pay attention to, and report, the generation sources and the percentage of renewable generation actually consumed at the data centre or any other commercial or manufacturing facility. Real progress towards effective integration of renewable energy into our electricity grid can only occur when capacity planning and generation portfolios account for the intermittency of renewable generation resources.
Strategies for Energy Efficiency & Climate Change
I think whoever owns the data centres has to take ownership of their energy efficiency. Within my company, I always had to work through the business unit owners to get agreement on goals and strategies for climate change and energy efficiency. It’s important to have a corporate-level person responsible for climate change and energy, and I think you’ve seen a lot of companies move to that. But programme ownership ultimately has to rest with the operating unit, because that’s where the business decisions are made. The tools and opportunities are there for companies and data centres to be more energy-efficient and reduce their carbon footprint; they just have to be applied.