Wednesday, April 29, 2009

Human Side of Higher Data Center Temperatures

With all the talk of hotter data center temperatures, one item that has often been overlooked is what happens to the poor soul tasked with going in and servicing equipment in that data center. Imagine having to work in a facility at 40°C (104°F) for several hours at a time--and that's at the equipment input. The exhaust temperature on the back side of the rack could easily be 55°C (131°F).

One approach is to adopt a "fail in place" model where technicians never go into a production facility, but even Google has technicians adding and replacing individual servers in their containerized data centers.

Other approaches to consider:
  • Localized spot cooling. A very small air conditioner could take the edge off the area in front of a rack.
  • Perform service operations at night or when it's reasonably cool.

This last suggestion may seem too simplistic at first, but it's actually quite practical. In a facility with enough redundancy to ensure high availability, a server replacement should be able to wait up to 24 hours. Operating a data center at consistently high temperatures increases power consumption in the IT equipment itself, so running a data center hotter only makes sense when economizers are used to eliminate or substantially reduce HVAC CapEx and OpEx.

If a data center is using economizers, the temperature inside should drop when the outside temperature drops. Even in relatively warm areas during the summer months, there are substantial periods each day when the temperature falls to levels at which technicians can work comfortably.
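
To make the "service when it's cool" idea concrete, here's a minimal Python sketch (the 30°C comfort threshold and the forecast data are my own assumptions, not anything a particular facility uses) that picks technician-friendly hours from an hourly outside-temperature forecast:

    # Hypothetical sketch: pick technician-friendly hours from an hourly
    # outside-temperature forecast. The 30 C comfort threshold is my own
    # assumption, not any standard.
    def service_windows(hourly_forecast_c, max_comfortable_c=30.0):
        """Return the hours (0-23) that are cool enough to work in."""
        return [hour for hour, temp_c in enumerate(hourly_forecast_c)
                if temp_c <= max_comfortable_c]

    # Example: a hot day that still cools off overnight.
    forecast = [24, 23, 22, 22, 21, 22, 24, 27, 30, 33, 36, 38,
                39, 40, 40, 39, 37, 34, 31, 29, 27, 26, 25, 24]
    print(service_windows(forecast))  # early-morning and late-evening hours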

--kb

Monday, April 13, 2009

NEBS vs. the Hottest Place on Earth

As mentioned in Higher Temperatures for Data Center and Processors for Higher Temps, various groups are pushing for higher and higher ambient temperatures in data centers. At Google's Efficient Data Center Summit last week, Amazon's James Hamilton brought up an interesting point in his slides and blog about ambient temperatures:
the hottest place on earth over recorded history was Al Aziziyah Libya in 1922 where 136F (58C) was indicated

James went on to note during his talk that telecommunications equipment designed to the NEBS (Network Equipment Building System) standards routinely has to handle temperatures up to 40°C.

Actually, the story is better than that. NEBS GR-63 (the key NEBS specification covering environmental conditions for equipment in telecommunications central offices) requires equipment to handle a 40°C long-term ambient temperature, but equipment certified at the shelf (chassis) level must also be able to operate at a 55°C ambient for up to 96 hours at a time and up to 360 hours per year [the 360-hour figure is used for reliability calculations]. This means that much of the NEBS-rated equipment suitable for data centers can operate at a temperature only 3°C below the highest natural temperature ever recorded on Earth, as noted by James.
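
To make those short-term allowances concrete, here's a rough Python sketch of how one might check a year of hourly ambient readings against the limits described above (the thresholds come from the text; the checker itself is my own illustration, not anything from the GR-63 document):

    # Rough illustration of the limits described above: ambient above the
    # 40 C long-term limit is allowed for up to 96 consecutive hours and
    # up to 360 hours per year, with a 55 C short-term ceiling. The
    # checker itself is just my own sketch, not text from GR-63.
    def within_short_term_allowance(hourly_ambient_c,
                                    long_term_limit_c=40.0,
                                    short_term_limit_c=55.0,
                                    max_consecutive_hours=96,
                                    max_hours_per_year=360):
        """Check a year of hourly ambient readings against the limits."""
        over = [t > long_term_limit_c for t in hourly_ambient_c]

        longest_run = run = 0
        for is_over in over:
            run = run + 1 if is_over else 0
            longest_run = max(longest_run, run)

        return (max(hourly_ambient_c) <= short_term_limit_c and
                sum(over) <= max_hours_per_year and
                longest_run <= max_consecutive_hours)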

Given the common engineering penchant for providing some guardband beyond the official specifications, even a 58°C ambient is not out of the question. This makes NEBS-rated equipment a good candidate for data centers operating at high temperatures.

But can you get decent performance from NEBS-rated servers? Yes! For example, vendors such as Radisys, Kontron, and Emerson have announced blade servers based on Intel's new Xeon 5500 (aka "Nehalem") processors, and these blades are commonly NEBS-certified to operate at 55°C. This allows the latest server technology to operate in the most demanding environments.

--kb

Thursday, April 9, 2009

More on Google's Battery-backed Servers

As noted in Evaluating Google's Battery-backed Server Approach, there are a number of benefits to Google's recently-disclosed practice of putting VRLA batteries on every server, but there are quite a few drawbacks as well.

One of the drawbacks not discussed in the prior post is a set of issues related to power transients and harmonics. In a conventional data center, there are multiple levels of power transformation and isolation between the individual server and the grid. Power usually comes in at high or medium voltage to a transformer and comes out at low voltage (<600 V) before going to a UPS and a PDU.

In an effort to improve efficiency and reduce capital costs, facility managers are looking at removing some of these isolation layers. This is fine to a certain extent. After all, there are a lot of small businesses that run one or two servers on their own, and there aren't major problems with them. In those cases, however, there are usually relatively few computers hooked together on the same side of the electrical transformer that provides power to the building. This transformer provides isolation from building to building (or zone to zone in some installations).

When you scale up to a large data center, however, you get thousands and thousands of servers in the same building. If you remove those extra layers of isolation, the burden of providing that extra isolation falls to the power supplies in the individual servers. If the servers use traditional AC power supplies, issues like phase balancing and power factor correction across all of the separate power supplies become interdependent.

The issues can be helped or hurt depending on what's nearby. Servers without isolation near an aluminum smelter, sawmill, subway, or steel mill may see wide fluctuations in power quality, which can result in unexplained errors.
I've seen cases with marginal power feeds where individual racks of servers seem to work fine, but the aggregate load when all servers are operating causes enough of a voltage sag that some servers occasionally don't work right. Let me tell you, those are a real pain to diagnose.

On the other hand, if you're somebody like Google or Microsoft who can locate data centers in places like The Dalles, Oregon or Quincy, Washington that are just a stone's throw from major hydroelectric dams or other sources of power, perhaps you can rely on nice clean power all the time.

External power factors may be the least of a data center manager's problems, however. The big concern with eliminating the intermediate isolation is that transients and other power line problems from one power supply can affect the operation of adjacent systems, and this can build up to significant levels if fault isolation and filtering is not supported.

Another issue that bedevils data center managers is phase balancing. In most AC-powered systems, power is delivered via three phases or legs (A, B, and C), each 120° out of phase with the others. At some point (usually the PDU), a neutral conductor is synthesized so that single-phase currents can run from one of these legs to neutral. In a properly balanced system, there will be equal loading on the A, B, and C legs. If the phases are not properly balanced, several bad things can occur, including the following (a quick calculation after this list shows why the neutral suffers):
  • The neutral point will shift towards the heaviest load, lowering the voltage to the equipment on that line, resulting in premature equipment failure and undervoltage-related errors
  • An imbalanced load may cause excess current to flow through specific conductors, overheating them
  • Breakers or other overcurrent mechanisms may trip
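
Here is a small Python sketch of that neutral-current effect: it just sums the three phase currents as phasors 120° apart (the ampere values are made up for illustration, and an identical power factor on each phase is assumed). With a perfectly balanced load the neutral carries essentially nothing; skew the same total load across the phases and the neutral starts carrying real current:

    # Sketch of why imbalance loads the neutral: sum the three phase
    # currents as phasors 120 degrees apart (identical power factor assumed).
    # The current values below are made up for illustration.
    import cmath
    import math

    def neutral_current(i_a, i_b, i_c):
        """Magnitude of the neutral current in a wye-connected system."""
        total = (i_a * cmath.exp(1j * math.radians(0)) +
                 i_b * cmath.exp(1j * math.radians(-120)) +
                 i_c * cmath.exp(1j * math.radians(120)))
        return abs(total)

    print(neutral_current(100, 100, 100))  # balanced load: ~0 A on the neutral
    print(neutral_current(140, 100, 60))   # same total load, imbalanced: ~69 A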

Phase imbalance can occur when network administrators do not follow a rigorous process of rotating servers across the three phases as they are plugged in. Additionally, shifting workloads can cause some servers to be more heavily utilized than others--and phase balance is almost certainly not a factor considered when allocating applications to specific servers. An even more pernicious issue can arise in systems with redundant power supplies, such as blade servers: in an attempt to maximize efficiency, management software may shut down certain power supplies to maximize the load on the remaining ones--all without considering the impact on phase balance when the load is no longer shared equally among all of the supplies.

Data centers that employ conventional PDUs don't generally have these issues (or have them at lesser severity), since the PDUs and their transformers are usually designed to handle significant phase imbalances without creating problems.

Additional considerations with the Google battery-backed server approach:

  • Acid risks from thousands of individual tiny batteries (i.e., cracked cases in thinner-walled batteries)
  • Shorting risks from batteries that can deliver thousands of amps of current for a short period
  • More items to monitor, or higher risks of silent failures (albeit with smaller failure domains) when you most need the batteries

This is a complex issue. I'm not convinced that Google has determined the optimal solution, but kudos to them for finally being willing to publicly discuss some of what they consider to be best practices. Collectively, we can learn bits and pieces from different sources that could end up delivering more efficient services.

--kb

Saturday, April 4, 2009

Evaluating Google's Battery-backed Server Approach

As noted previously, Google has disclosed that they put batteries on every server (see this picture of a Google rack), essentially powering their servers the way laptops have traditionally been powered. Laptops need batteries because they need to be mobile, which is not generally a consideration for servers.
Are batteries in servers a good idea?

There are some definite advantages in Google's approach:
  1. No need to pay for UPS systems (saves CapEx dollars)
  2. Eliminates two conversion stages found in a traditional AC double-conversion UPS
  3. Reduces dedicated floor space/real estate commonly devoted to UPS/battery rooms
  4. Localizes fault domains for a failed server to just one server
  5. Scales linearly with the number of servers deployed

All of these add up to a solution that works just as well for one server as it does for one thousand servers. Coupled with Google's efforts to increase energy efficiency through its founding of and support for the Climate Savers Computing Initiative (CSCI), with its target of 92% power supply efficiency, this solution appears to be very efficient.
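
As a back-of-the-envelope comparison, chaining conversion stages multiplies their losses, which is the crux of the claimed savings. The stage efficiencies in this Python sketch are illustrative assumptions (apart from the 92% CSCI target and the 99.9% figure Google cites), not measured numbers:

    # Back-of-the-envelope sketch; every stage efficiency below is an
    # assumed, illustrative number, not a measured or published figure.
    def chain_efficiency(stages):
        """Overall efficiency of power passing through stages in series."""
        eff = 1.0
        for stage_efficiency in stages:
            eff *= stage_efficiency
        return eff

    traditional = chain_efficiency([0.92,   # double-conversion UPS (assumed)
                                    0.98,   # PDU transformer (assumed)
                                    0.92])  # server PSU (CSCI target)
    battery_on_server = chain_efficiency([0.999,  # per-server battery "UPS" (claimed)
                                          0.92])  # server PSU (CSCI target)

    print(f"traditional chain:  {traditional:.1%}")        # ~82.9%
    print(f"battery-on-server:  {battery_on_server:.1%}")  # ~91.9%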

However, there are some down sides to Google's approach:

  1. A lot of batteries to wire up and monitor
  2. Increased air impedance from blocking airflow
  3. Lower battery reliability with increased ambient temperatures
  4. Higher environmental impact due to increased battery materials
  5. Individual server supplies are exposed to a higher level of power transients and harmonics
  6. Potential phase imbalances and stranded power in data centers

Issue #1 is self-evident. Issue #2 can be seen in this picture from Green Data Center Blog; the physical mass of the batteries blocks a good portion of the air space in front of the server, which increases airflow resistance and in turn requires more fan power to move the same amount of air.

Issues #3 and #4 are somewhat related. Google, Microsoft, and other leading internet companies have advocated running data centers at higher ambient temperatures, with some advising 35°C, 40°C, or even occasionally 50°C. There are clear savings to be had here, but higher temperatures may run counter to Google's battery approach. Assuming the Google batteries are conventional lead-acid batteries, a common rule of thumb is that useful battery life drops by ~50% for every 10°C above a 25°C ambient. Thus, a 4-year battery would only be good for ~2 years in a 35°C environment. In comparison, conventional UPS batteries are often rated for 10, 15, or 20 years, and when consolidated in a UPS battery cabinet they can be protected from the higher ambient temperatures through localized cooling (batteries dissipate almost no heat) for increased life.
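
That rule of thumb is easy to turn into a quick Python sketch (the halving-per-10°C rule is the common approximation cited above, not a manufacturer's datasheet curve):

    # Quick sketch of the rule of thumb cited above: VRLA battery life
    # roughly halves for every 10 C above a 25 C reference ambient.
    def expected_battery_life(rated_life_years, ambient_c, reference_c=25.0):
        derating = 0.5 ** max(0.0, (ambient_c - reference_c) / 10.0)
        return rated_life_years * derating

    for ambient in (25, 35, 40, 50):
        print(ambient, round(expected_battery_life(4.0, ambient), 1))
    # 25 C -> 4.0 years, 35 C -> 2.0, 40 C -> ~1.4, 50 C -> ~0.7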

Using lots of little batteries, as Google does, results in more material usage than using larger batteries. Couple that with reduced battery life at higher temperatures, and the result is not as good as it first seems. According to http://www.batterycouncil.org/LeadAcidBatteries/BatteryRecycling/tabid/71/Default.aspx, more than 97% of the lead from lead-acid batteries is recycled, but the same source states that only 60-80% of the lead and plastic in new batteries is recycled material. Looked at another way, 20-40% of the material in a new lead-acid battery is not recycled content. Thus, even if Google recycles 100% of its batteries, using lots of new batteries still consumes a lot of new material.

I'll address issues #5 and #6 in a future post.

--kb

Friday, April 3, 2009

Google's Server Power Supplies

This past Wednesday, Google finally provided a peek into their data centers. Green Data Center Blog has a great roundup of the various articles related to this workshop, including pictures from Google's container data centers.

One of the more interesting aspects revealed Wednesday was the fact that Google has batteries attached to each of their servers.

At first, this seems rather odd. Google's explanation for this is that they use this arrangement as a 99.9% efficient replacement for UPS (Uninterruptible Power Supply) systems. Wow...99.9% efficient!

This is definitely a different approach from what most data centers do today, and it seems really far out there--until you break it down into its component parts. A simplified block diagram looks like the following:

[Block diagram: utility/generator AC power → per-server power supply → ~12 Vdc rail → server, with a battery attached to the 12 Vdc rail]

Broken down this way, the arrangement really starts to look like a laptop. The Google server power system apparently operates just like a laptop:
  • External power supply provides ~12Vdc
  • Battery is included with every computer
  • When the external power supply fails, the battery provides power until the generator starts or power is switched to a different source (a quick sketch of this sequence follows below)
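
For what it's worth, the failover sequence this implies is simple enough to sketch in a few lines of Python (purely my own illustration of the behavior described above, not Google's actual controller):

    # Toy illustration of the sequence described above; not Google's
    # actual controller logic.
    def select_power_source(utility_ok, generator_ready, battery_charged):
        """Pick what feeds the ~12 Vdc rail at any given moment."""
        if utility_ok:
            return "external power supply (utility)"
        if generator_ready:
            return "external power supply (generator)"
        if battery_charged:
            return "battery (ride-through until the generator starts)"
        return "nothing -- the server loses power"

    print(select_power_source(utility_ok=False, generator_ready=False,
                              battery_charged=True))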

Graceful shutdown in power outages may or may not be an issue for Google's applications (likely not an issue).

Google certainly thinks they've got a winner with this approach, and goodness knows they've had experience deploying it at scale. In a future posting, I'll look at some of the pros and cons of this approach.
--kb

Wednesday, April 1, 2009

Deciphering Intel Code Names

There's been a lot of industry buzz lately about Intel's recent release of the Nehalem-EP processor, with many references to how Nehalem is x% better than a previous platform or processor like Bensley, Harpertown, or Clovertown.

Okay, but how can you find out what each one of these code names refers to? Well, it turns out that Intel has a web site that allows you to enter code names for released products and then look up the relevant information. Go to http://ark.intel.com/ and enter the code name (or official name) of a current Intel product, and chances are it will be listed.

One particularly useful feature of this site is the System Design capability. For example, if you enter a processor/chipset power budget and other criteria, the site will list all matching combinations. Try it out!

--kb