The data center temperature debate

Although never directly articulated by any data center authority, the prevailing practice around these critical facilities has long been "the colder the better." However, some leading server manufacturers and data center efficiency experts share the view that data centers can run much hotter than they do today without sacrificing uptime, with substantial savings in both cooling costs and CO2 emissions. One server manufacturer recently announced that its server rack can operate at inlet temperatures of 104 degrees F.

Why push further? The cooling infrastructure consumes a great deal of energy. Operating 24 hours a day, 7 days a week, 365 days a year, it draws significant electricity to maintain what is considered the optimal computing environment, often a setpoint between 55 and 65 degrees F. (ASHRAE's current "recommended" range is 18-27 degrees C, or 64.4 to 80.6 degrees F.)
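
As a quick check on those figures, the short sketch below (Python, illustrative only) converts ASHRAE's recommended 18-27 degree C band to Fahrenheit and compares it with the 55-65 degree F setpoints mentioned above.

    # Convert ASHRAE's recommended band (quoted above as 18-27 C) to Fahrenheit
    # and compare it with the typical 55-65 F "colder the better" setpoints.

    def c_to_f(celsius):
        """Convert degrees Celsius to degrees Fahrenheit."""
        return celsius * 9 / 5 + 32

    ashrae_low_f = c_to_f(18)    # 64.4 F
    ashrae_high_f = c_to_f(27)   # 80.6 F
    print(f"ASHRAE recommended range: {ashrae_low_f:.1f}-{ashrae_high_f:.1f} F")

    for setpoint_f in (55, 65):  # typical legacy setpoints quoted in the text
        status = "within" if ashrae_low_f <= setpoint_f <= ashrae_high_f else "below"
        print(f"A {setpoint_f} F setpoint is {status} the recommended range")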

To capture these efficiencies, several influential end users are running their data centers warmer and urging their peers to follow suit. But the process is not as simple as turning up a home thermostat. Here are some of the key arguments and considerations:

Position: Increasing the server inlet temperature will result in significant energy savings.

Arguments for:

o Sun Microsystems, a leading hardware manufacturer and data center operator, estimates a 4% savings in energy costs for every one degree (F) increase in server inlet temperature (Miller, 2007).

o A higher temperature setting can mean more hours of "free cooling" through air-side or water-side economizers. This is especially compelling for an area like San Jose, California, where outdoor (dry bulb) air temperatures are 70 degrees F or lower for 82% of the year. Depending on geography, annual savings from economization could exceed six figures. A rough sketch of both savings arguments follows this list.
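
The back-of-the-envelope sketch below puts numbers on both points. The 4%-per-degree figure is the Sun estimate cited above and the 82% figure is the San Jose number; the cooling bill and setpoints are hypothetical placeholders, not data from this article.

    # Rough, illustrative savings estimate. The 4%-per-degree rule of thumb and
    # the 82% free-cooling figure come from the text; the cooling bill and
    # setpoints below are hypothetical placeholders for a specific facility.

    SAVINGS_PER_DEGREE_F = 0.04        # Sun estimate: ~4% cooling energy per 1 F raise

    annual_cooling_cost = 500_000.0    # hypothetical yearly cooling bill, USD
    current_inlet_f = 62               # hypothetical current inlet setpoint
    proposed_inlet_f = 70              # hypothetical new inlet setpoint

    degrees_raised = proposed_inlet_f - current_inlet_f
    # Applying the rule of thumb linearly is itself a simplification.
    estimated_savings = annual_cooling_cost * SAVINGS_PER_DEGREE_F * degrees_raised
    print(f"Raising the inlet temperature {degrees_raised} F could save roughly "
          f"${estimated_savings:,.0f} per year on cooling")

    # Free-cooling angle: a warmer allowable inlet temperature means more hours
    # per year when outside air is cool enough for economizer operation.
    fraction_below_70f = 0.82          # San Jose figure cited above
    economizer_hours = fraction_below_70f * 8760
    print(f"In San Jose, outdoor air is 70 F or cooler for about "
          f"{economizer_hours:,.0f} hours per year")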

Arguments against:

o The cooling infrastructure has certain design set points. How do we know that increasing the server inlet temperature will not prove a false economy, driving additional and unnecessary consumption by other components such as server fans, pumps, or compressors?

o Free cooling, while great for new data centers, is an expensive proposition for existing ones. The entire cooling infrastructure would require reengineering, which could be cost-prohibitive and add unnecessary complexity.

o The costs of temperature-related equipment failures or downtime could outweigh the savings gained from a higher temperature set point.

Position: Increased server inlet temperature complicates equipment reliability, recovery, and warranties.

Arguments for:

o Inlet air and outlet air are often mixed in a data center. Temperatures are kept low to compensate for this mixing and to keep the server inlet temperature within the ASHRAE recommended range. Rising temperatures could exacerbate existing hot spots.

o Cold temperatures provide a cool air envelope in the room, an advantage in the event of a cooling system failure. Staff may have more time to diagnose and repair the problem and, if necessary, shut down the equipment safely.

o For the 104 degree F server, what is the likelihood that every piece of equipment, from storage to networking, will operate reliably? Would all warranties remain valid at 104 degrees F?

Arguments against:

o Increasing the data center temperature should be part of a broader efficiency program. Any temperature rise should follow airflow-management best practices: use blanking panels, seal cable cutouts, remove cable obstructions under the raised floor, and implement some form of air containment. These measures effectively reduce the mixing of hot and cold air and allow a practical, safe temperature increase.

o The 104 degree F server is an extreme case that encourages thoughtful discussion and critical inquiry among data center operators. After such analysis, a facility that once operated at 62 degrees F might operate at 70 degrees F. Changes of this kind can significantly improve energy efficiency without compromising equipment availability or warranties.

Position: Servers are not as fragile and sensitive as one might think. Studies conducted in 2008 underscore the resilience of modern hardware.

Arguments for:

o Microsoft ran servers in a tent in the damp Pacific Northwest from November 2007 to June 2008. They experienced no failures.

o Using an air-side economizer, Intel subjected 450 high-density servers to temperatures as high as 92 degrees F and relative humidity ranging from 4% to 90%. The server failure rate during this experiment was only marginally higher than that of Intel's enterprise installations.

o Data centers can operate with temperatures in the low 80s (F) and still be ASHRAE compliant. The upper limit of ASHRAE's recommended temperature range was raised to 80.6 degrees F (up from 77 degrees F).

Arguments against:

o Sustained high temperatures affect server components over time. Server fan speeds, for example, increase in response to higher temperatures, and the added wear can shorten the life of the equipment.

o Studies from data center giants like Microsoft and Intel may not be relevant to all companies:

o Their huge data center footprints are more tolerant of the occasional server failure that excessive heat can cause.

o They can leverage their purchasing power to receive gold-plated warranties that allow higher temperature settings.

o They most likely refresh their hardware faster than other companies. If a server dies outright after 3 years, it is not a big deal; a smaller business may need that server to last well beyond 3 years.

Position: Higher inlet temperatures can create uncomfortable working conditions for data center staff and visitors.

Arguments for:

o Consider the 104 degree F rack. The hot aisle could be anywhere from 130 to 150 degrees F. Even the upper end of the ASHRAE recommended range (80.6 degrees F) would result in hot aisle temperatures around 105-110 degrees F. Personnel servicing these racks would endure very uncomfortable working conditions (the arithmetic behind these figures is sketched after this list).

o In response to higher temperatures, server fan speeds will increase to move more air. Higher fan speeds raise the noise level in the data center, which can approach or exceed OSHA sound limits and require occupants to wear hearing protection.
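
The hot-aisle figures above are simple addition: hot-aisle temperature is roughly the server inlet temperature plus the rack's temperature rise (delta-T). The sketch below works backward from the numbers quoted in the first bullet to show the implied delta-T; actual delta-T varies with server load and airflow.

    # Hot-aisle temperature is roughly inlet temperature plus the rack's
    # temperature rise (delta-T). The code back-calculates the implied delta-T
    # from the hot-aisle figures quoted in the text above.

    quoted = {
        # name: (inlet F, quoted hot-aisle low F, quoted hot-aisle high F)
        "ASHRAE upper recommended limit (80.6 F)": (80.6, 105, 110),
        "104 F rack example": (104.0, 130, 150),
    }

    for name, (inlet_f, hot_low, hot_high) in quoted.items():
        dt_low, dt_high = hot_low - inlet_f, hot_high - inlet_f
        print(f"{name}: implied rack delta-T of roughly "
              f"{dt_low:.0f}-{dt_high:.0f} F (hot aisle {hot_low}-{hot_high} F)")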

Arguments against:

o It goes without saying that as the server inlet temperature increases, so does the hot aisle temperature. Companies must carefully balance worker comfort and energy efficiency efforts in the data center.

o Not all data center environments see heavy human traffic. Some high-performance/supercomputing installations run "lights out" with a homogeneous collection of hardware; these environments are well suited to higher temperature set points.

o The definition of a data center is more fluid than ever. A traditional brick-and-mortar facility can add instant computing power through a data center container without a costly construction project. The container, separated from the rest of the building, can operate at higher temperatures and achieve greater efficiency (some close-coupled cooling products work in a similar way).

Conclusions

The movement to raise data center temperatures is gaining ground, but it will face opposition until these concerns are addressed. Reliability and availability sit at the top of any IT professional's performance plan, so most operators to date have erred on the side of caution: keep it cool at all costs. However, higher temperatures and reliability are not mutually exclusive. There are ways to protect data center investments while increasing energy efficiency.

Temperature is inseparable from airflow management; data center professionals must understand how air circulates into, through, and out of their server racks. Computational fluid dynamics (CFD) can help by modeling and plotting projected airflow across the data center floor, but cooling equipment does not always perform to specification and the input data can miss key obstructions. On-site monitoring and adjustment are therefore critical to confirm that the CFD data and calculations are accurate.

Overcooled data centers are prime candidates for raising the temperature set point. Those with hot spots or insufficient cooling can start with inexpensive fixes such as blanking panels and grommets. Containment and close-coupled cooling strategies are especially relevant because server exhaust air, often the source of thermal challenges, is isolated and prevented from entering the cold aisle.

With airflow under control, users can focus on finding their "sweet spot," the ideal temperature setting that meets business requirements while improving energy efficiency. Finding it requires proactive measurement and analysis. But the rewards are well worth the effort: smaller energy bills, a reduced carbon footprint, and a corporate responsibility story to tell.
