Tuesday, March 31, 2009

Suggestion for Energy Star Measurement of Blade Power Consumption

The US EPA is developing an Energy Star for Servers specification. Based on information in the latest draft, it looks like the EPA may be backing away from including blade servers in the first release ("Tier 1") of the specification. Given the increasing prevalence of blade servers in data centers, this would be unfortunate.

Ideally, a standardized benchmark like SPECpower_ssj2008 could measure power consumption on a per-blade basis, but the current benchmark has no provisions for handling a chassis.

As an alternative, here are suggestions for how the EPA could measure power consumption for Energy Star (until a chassis-friendly industry specification is developed by an industry group like SPEC):
  • Apply Energy Star to blades, not to chassis. Chassis are ineligible to meet Energy Star, but the blades that go in them can be Energy Star certified.
  • Configure a chassis with the minimum number of chassis management modules and external modules required for operation, but include all supported power supplies for a given chassis and all the fan/cooling modules typically used (don't remove redundant fans or power supplies).
  • Run a sample workload on all servers to keep them minimally active. Install the same server configuration in all server slots.

Measure total power consumption to all power feeds in the chassis under two conditions and with the following calculations:

  1. Condition 1: Determine power consumption P1 with all N server blade slots installed.
  2. Condition 2: Remove servers so that N/2 (round up) servers are evenly distributed in the chassis; call that number N'. Determine power consumption P2 at this level.
  3. P3 = P1 / N. This is the average power per server blade in a full chassis.
  4. P4 = P2 / N'. This is the average power per server blade in a half-full chassis.
  5. P5 = (P3 + P4) / 2. This is the overall average power per server blade (see the sketch below).
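
To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The measured values for P1, P2, and the slot count N are hypothetical placeholders, not numbers from any actual chassis:

    # Hypothetical blade power calculation following the steps above.
    import math

    N = 16          # total server blade slots in the chassis
    P1 = 4200.0     # measured power with all N blades installed (watts)
    P2 = 2500.0     # measured power with N' blades installed (watts)

    N_prime = math.ceil(N / 2)   # N/2 blades, rounded up, evenly distributed

    P3 = P1 / N                  # average power per blade, full chassis
    P4 = P2 / N_prime            # average power per blade, half-full chassis
    P5 = (P3 + P4) / 2           # overall average power per blade

    print(f"P3 = {P3:.1f} W/blade (full chassis)")
    print(f"P4 = {P4:.1f} W/blade (half-full chassis)")
    print(f"P5 = {P5:.1f} W/blade (reported value)")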

Notes:

  • This accounts for chassis overhead, including fans, power supplies, management modules, and network connectivity. There is a slight penalty to blades here, since rack-mount servers don't include any allocation for network switch power, but this represents the minimum configuration needed to use those blades. Additionally, many vendors offer low-energy networking elements (e.g., pass-through blades) that minimize this impact.
  • If the chassis contains power supplies to convert input voltages to a different voltage supplied on the backplane, the power supplies used in the chassis must meet the power supply qualification requirements outlined elsewhere in the Energy Star for Servers specification.
  • If a chassis contains redundant power supplies, the server blades are eligible for an allowance of 20W per redundant power supply, divided by the number of servers. For example, if a chassis has 2+2 power supplies (2 power supplies required for a fully loaded chassis plus 2 redundant supplies) and 10 blades, then each server gets a 4W allowance (2 * 20W / 10 servers), as sketched below.
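
To make the allowance arithmetic concrete, here is a minimal sketch using the hypothetical 2+2 supply, 10-blade example above:

    # Hypothetical redundant power supply allowance from the note above.
    ALLOWANCE_PER_REDUNDANT_PSU_W = 20.0

    redundant_psus = 2   # supplies beyond the minimum needed for a full chassis
    blades = 10          # server blades installed in the chassis

    allowance_per_blade = redundant_psus * ALLOWANCE_PER_REDUNDANT_PSU_W / blades
    print(f"Allowance: {allowance_per_blade:.1f} W per blade")   # 4.0 W per blade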

With all the notes above, this may look complicated, but it's actually a fairly simple configuration that provides a close analog to how standalone rack-mount servers are tested. The EPA could use this approach in the initial version ("Tier 1") of the Energy Star for Servers specification.

--kb

Thursday, March 12, 2009

Eliminating the UPS Efficiency Penalty with -48Vdc: Part II

In Eliminating the UPS Efficiency Penalty with -48Vdc, there's a discussion of how non-redundant AC and DC configurations can have nearly equivalent efficiency in facilities without a UPS. When redundancy is factored in, however, the advantages of DC power become more pronounced.

Let's start by looking at the power supply unit (PSU) component by itself. Based on the information in the quantitative analysis by The Green Grid, high-efficiency AC and DC power supplies look like this when compared to each other:



The graph shifts to the right when redundant power supplies are considered. Since a server contains numerous voltage converters (modern servers often have in excess of 25 internal voltage rails), it's impractical to duplicate every converter, at least at a reasonable price. However, servers with redundant power supplies provide three principal benefits:

  1. Connectivity to separate primary power sources (e.g., different utility feeds)
  2. Protection against failures in upstream power equipment (e.g., a failed PDU)
  3. Protection against cabling problems or service errors (e.g., accidentally unplugging the wrong server)

In an AC system, redundant feeds require separate power supplies, since each feed can be slightly out of phase with the other by the time it reaches the server (relative phasing can shift in different parts of the data center based on relative cable lengths). If a server has two power supplies equally sharing the load, as is commonly done, then each power supply runs at less than 50% of its rated load, in the lower-efficiency portion of its curve.

In contrast, a DC system has no phasing issues to deal with. DC-based equipment therefore has two main options: full duplicate power supplies (as with AC), or a technique called diode OR'ing (or FET OR'ing) that safely combines power from two separate DC sources as inputs to a single power supply. [Since there are numerous downstream power converters that are not redundant, there's no need for the power supply itself to be redundant; it just needs to be fed from multiple inputs.] Many DC power supplies do this today, since the approach is common in highly reliable -48Vdc telecommunications systems. The result is a wider gap between the net AC power supply efficiency and the DC power supply efficiency:

Taking this a step further, look at the typical operating point for servers versus their power supply ratings. In the various published SPECpower_ssj2008 reports, you'll notice numerous cases where the power supply shipped with the system is rated at 2-4 times the system's maximum power draw. If the power supply is rated at 2x the necessary power, the system would normally operate in the left half of the graph immediately above; and if the average power is considerably less than the maximum draw, the system could spend the bulk of its time at the 25% load level or below.
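
Here is a minimal back-of-the-envelope sketch of that operating point. The PSU rating and power draws below are made-up illustrative numbers; only the 2x oversizing and equal load sharing come from the discussion above:

    # Rough per-supply operating point for a server with oversized, load-sharing PSUs.
    max_draw_w = 400.0               # maximum system power draw (hypothetical)
    psu_rating_w = 2 * max_draw_w    # power supply shipped at 2x the maximum draw
    avg_draw_w = 0.5 * max_draw_w    # average power well below the maximum (hypothetical)
    sharing_psus = 2                 # two AC supplies equally sharing the load

    ac_load_fraction = avg_draw_w / (sharing_psus * psu_rating_w)
    dc_load_fraction = avg_draw_w / psu_rating_w   # single OR'ed DC supply carries the full load

    print(f"Each load-sharing AC PSU runs at about {ac_load_fraction:.0%} of its rating")   # ~12%
    print(f"A single OR'ed DC PSU runs at about {dc_load_fraction:.0%} of its rating")      # 25%

Both cases land toward the left of the efficiency curves discussed above, which is where the gap between AC and DC supplies is most pronounced.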

At these lower loads, the efficiency benefits of -48Vdc systems become more apparent, even when there's no UPS in the picture. If an installation uses UPSes, the efficiency gap widens further in favor of -48Vdc.

Wednesday, March 4, 2009

Eliminating the UPS Efficiency Penalty with -48Vdc

The Green Grid recently released Quantitative Efficiency Analysis Of Power Distribution Configurations For Data Centers, which shows how different power chains from 480Vac down to 12Vdc stack up in terms of efficiency. This showed -48Vdc to have the highest efficiency for systems at 60% of capacity and below--in an idealized world.

This is true when a UPS is required--but what happens if a UPS isn't needed?


Say what? Who would ever want to deploy servers without UPS backup?

There are certain circumstances where a UPS is not needed:


  • Services with sufficient geo-redundancy that a power failure at any one site doesn't have an appreciable impact on overall service availability

  • Lower-priority services for which an infrequent service outage would be acceptable

In situations like this, how does a -48Vdc system stack up? Let's look at the data in the report from The Green Grid mentioned above:



  • The best AC power supplies to go from 240Vac down to 12Vdc peak out at around 93% efficiency [Figure 31].

  • The best DC rectifiers (with batteries) to go from 240Vdc down to -48Vdc peak out around 96.5% efficiency [Figure 29].

  • The best DC power supplies to go from -48Vdc down to 12Vdc peak out at almost 95% efficiency [Figure 31].

Taken together, the 96.5% rectifier efficiency multiplied by the 95% power supply efficiency works out to roughly 91.7% end-to-end, slightly less than the 93% efficiency of a pure AC-to-12Vdc power supply solution.
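
As a quick check on that arithmetic, here is a minimal sketch using the peak efficiencies quoted above (real systems won't sit exactly at these peaks):

    # Peak end-to-end efficiency of the two chains described above.
    ac_psu = 0.93          # best 240Vac -> 12Vdc power supply [Figure 31]
    dc_rectifier = 0.965   # best rectifier down to -48Vdc [Figure 29]
    dc_psu = 0.95          # best -48Vdc -> 12Vdc power supply [Figure 31]

    dc_chain = dc_rectifier * dc_psu
    print(f"AC chain: {ac_psu:.1%}")     # 93.0%
    print(f"DC chain: {dc_chain:.1%}")   # ~91.7%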


However, this comparison uses rectifiers with tightly regulated -48Vdc outputs that are designed to work with batteries and wide-ranging inputs. That's a mismatch! It's understandable why rectifiers have traditionally been built this way (for applications needing battery backup), but it's overkill for applications that don't need it.


Since most -48Vdc power supplies can handle input voltages from -42Vdc to -56Vdc (or a wider range), consider what could happen with a DC rectifier whose output is loosely regulated within this range. If a DC rectifier were allowed to vary its output voltage between -44Vdc and -54Vdc, the net efficiency of the -48Vdc system could meet or beat the straight AC power supply approach.
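
To put a number on "meet or beat": with the 95% -48Vdc-to-12Vdc supply from above, here is a minimal sketch of the rectifier efficiency a loosely regulated design would need just to reach parity with the 93% AC supply (peak figures from the report, not measurements of any particular product):

    # Rectifier efficiency needed for the DC chain to match the AC chain.
    ac_psu = 0.93    # best 240Vac -> 12Vdc supply
    dc_psu = 0.95    # best -48Vdc -> 12Vdc supply

    required_rectifier = ac_psu / dc_psu
    print(f"Break-even rectifier efficiency: {required_rectifier:.1%}")   # ~97.9%

Anything above roughly 97.9% (versus the 96.5% of today's tightly regulated, battery-ready rectifiers) would tip the balance toward the DC chain.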


Without battery backup, a -48Vdc system could match an AC system; even with full-time battery backup, the -48Vdc system is within ~1.5% of the AC system without battery backup.


Next: the story gets even better when redundancy is considered...

Sunday, March 1, 2009

Sealed Containers: Reality or Myth?

One of the interesting debates for those looking at containerized data centers is whether or not containerized data centers need to be serviceable in the field. Different products on the market today take different approaches:
  • The Sun Modular Datacenter (née "Blackbox") provides front and rear access to each rack by mounting the racks sideways and using a special tool to slide racks into the center aisle for servicing.
  • The Rackable ICE Cube provides front access to servers, but the setup doesn't lend itself to rear access to the servers.
  • HP's Performance-Optimized Datacenter (POD) takes an alternative approach: there's a wide service aisle on the front, but you need to go outside the container to get to the back side of the racks via external doors.

Some industry notables have advocated even more drastic service changes: James Hamilton (formerly with Microsoft, now with Amazon) was one of the early proponents of containerized data centers, and he has suggested that containerized data centers could be sealed, without the need for end-users to service the hardware. The theory is that it's cheaper to leave the failed servers in the rack, up until the point that so many servers have failed that the entire container is shipped back to the vendor for replacement.

How reasonable is this?

Prior to the advent of containers, fully configured racks (cabinets) were the largest unit of integration typically used in data centers, and they remain the largest integrated unit used in most data centers today. How many data centers seal these cabinets and never open the door throughout the life of the equipment inside? That is perhaps the best indicator of whether a sealed container really matches existing practice.

We looked at the "fail in place" model at the company where I work, but it was difficult for managers to accept that it was okay to leave some number of failed servers in a rack. As long as fixing the hardware is cheaper than buying a new server (or the equipment is under warranty), most finance people and managers want to see every server in a rack functional.

What do you think? Do you see people keeping cabinets sealed in data centers today? Does fail in place make sense to you?