What is the Ideal Temperature for a Data Center?

Many organizations have declined to raise data center temperatures despite recommendations from numerous industry experts. One of the main reasons is the risk of more hardware and equipment failures: heat is one of a dedicated server’s worst enemies. The decision becomes especially complicated when a data center runs mixed equipment, with newer servers that can tolerate higher temperatures alongside older servers that may not. Determining the ideal temperature for a data center is a complex process.

How high is too high?

The question may simply be: how high a temperature can equipment safely operate at? Raising it saves a company significant amounts in energy expenditure on cooling, and the money saved can go toward newer equipment, which in turn can operate at higher temperatures more safely.

The highly competitive web hosting market is eager to exploit any available advantage to refresh, update and promote new server offerings and to lower pricing.

There can also be minor legal issues to consider, such as warranties that dictate acceptable operating temperatures and support contracts (Dell ProSupport Plus, for example), with the latter being a particular downtime concern on older equipment. So are dedicated servers affected by an increase in data center temperature, and in what way? How do you determine how much to increase the overall temperature?

Slow and steady leads to the ideal temperature for a data center

Ken Koty of PDU Cables suggests raising the temperature 1 degree at a time and monitoring temperature changes at server racks, cabinets and equipment inlets. Readings must be recorded before and after any change so the two can be compared (a minimal monitoring sketch follows the lists below). Risk assessment is important as well. The potential risks involved are:

  1. Hardware failures.

  2. Impact on support contracts.

  3. Hardware warranties.

Compare this with the advantages:

  1. Energy savings on cooling.

  2. Newer hardware replacement opportunities.

  3. Energy savings on power consumption (with the newer equipment).
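
To make the before-and-after comparison concrete, here is a minimal Python sketch of that "1 degree at a time" monitoring routine. The read_inlet_temps() function is a hypothetical placeholder for however your facility actually exposes sensor data (IPMI, SNMP, a DCIM export), and the sample counts and alert threshold are illustrative assumptions, not recommendations.

```python
# Sketch of the "raise 1 degree, then compare readings" approach described above.
# read_inlet_temps() is a hypothetical stand-in for your real monitoring source.
import statistics
import time

def read_inlet_temps():
    """Hypothetical: return {"rack-A1": 72.4, "rack-A2": 73.1, ...} in deg F."""
    raise NotImplementedError("wire this up to IPMI/SNMP/DCIM data")

def snapshot(samples=12, interval_s=300):
    """Average several readings per sensor over roughly an hour."""
    history = {}
    for _ in range(samples):
        for sensor, temp in read_inlet_temps().items():
            history.setdefault(sensor, []).append(temp)
        time.sleep(interval_s)
    return {sensor: statistics.mean(vals) for sensor, vals in history.items()}

def compare(before, after, alert_delta=3.0):
    """Report per-sensor change; flag anything that rose more than expected."""
    for sensor in sorted(before):
        delta = after[sensor] - before[sensor]
        flag = "  <-- investigate" if delta > alert_delta else ""
        print(f"{sensor}: {before[sensor]:.1f}F -> {after[sensor]:.1f}F "
              f"({delta:+.1f}F){flag}")

# Usage: take a baseline, raise the cooling set point by 1 degree,
# let conditions settle, take another snapshot, then compare:
# baseline = snapshot()
# ... raise the set point and wait ...
# compare(baseline, snapshot())
```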

Risk-taking

It can be argued that increasing data center temperature is most beneficial to large organizations. A small data center may not see enough benefit to justify the risk and the possible contingencies involved. It is still considered “risk-taking” and is not suitable for every organization. The change can also have the reverse effect if the data center is already operating at a high temperature and may even need more cooling; frequent hardware failures can be a sign of this. However, an Intel white paper shows that hardware failure rates remain minimal with reasonable temperature increases, alongside up to 67% in energy savings (using an air economizer).

Sun Microsystems says data centers could save 4% in energy costs for every degree increase in the overall cooling set point. That fact alone will influence the ideal temperature for a data center.
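
As a rough back-of-envelope illustration of that claim, the short sketch below projects cooling spend for a few set-point increases. It assumes the 4% figure compounds per degree and applies only to the cooling portion of the bill, and the baseline dollar figure is a made-up example; treat the output as an estimate, not a promise.

```python
# Back-of-envelope projection of the "4% per degree" claim above.
# Assumes the savings compound per degree raised (an assumption, not a rule).

def projected_cooling_cost(annual_cooling_cost, degrees_raised, savings_per_degree=0.04):
    return annual_cooling_cost * (1 - savings_per_degree) ** degrees_raised

baseline = 120_000  # example annual cooling spend in dollars (hypothetical)
for raised in (1, 2, 4):
    cost = projected_cooling_cost(baseline, raised)
    print(f"+{raised} deg: ~${cost:,.0f} (${baseline - cost:,.0f} saved)")
```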

Airflow

There is also the issue of dedicated servers simply having to spin their fans faster; some may not tolerate higher temperatures at all. This is especially true of older equipment, where default BIOS fan settings treat these temperature increases as a problem and the servers work too hard at keeping themselves cool. Older, non-low-voltage CPUs in particular may prevent a dedicated server from surviving long in a less than optimal temperature environment. Faster-spinning fans also consume more energy, eating into the savings. This reiterates the case, made earlier, for newer server purchases when the opportunity and need arise. Maximizing airflow management is vital in this scenario.
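
To show why faster fans can eat into the cooling savings, here is a small sketch based on the fan affinity laws, under which fan power scales roughly with the cube of fan speed. The per-server baseline wattage and server count are assumed example figures, not measurements from any particular hardware.

```python
# Rough estimate of fan power at higher speeds (fan power ~ speed cubed).
# Baseline wattage and server count below are assumed example figures.

def fan_power_watts(baseline_watts, speed_ratio):
    """Estimate fan power when fans run at speed_ratio x their baseline RPM."""
    return baseline_watts * speed_ratio ** 3

baseline_per_server = 20.0  # watts of fan power at normal speed (assumed)
servers = 500
for ratio in (1.0, 1.2, 1.5):
    total = fan_power_watts(baseline_per_server, ratio) * servers
    print(f"fans at {ratio:.0%} speed: ~{total / 1000:.1f} kW across {servers} servers")
```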

Not maintaining optimal airflow results in hot spots throughout the data center, which in turn forces a reliance on the old-fashioned over-cooling method to ensure adequate cooling throughout. That is a crude and costly solution to data center cooling. Downtime in a larger data center can have hugely expensive consequences, and the risks are too great to take lightly.
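
A simple way to catch hot spots before resorting to over-cooling is to flag any rack inlet running above a chosen ceiling. The sketch below uses 80.6 F (27 C), the upper end of ASHRAE's recommended inlet range, as that ceiling; the readings are made-up examples and the threshold is a common choice rather than a mandate.

```python
# Quick hot-spot check: flag rack inlets above a chosen ceiling so airflow
# can be fixed locally instead of over-cooling the whole room.

def find_hot_spots(inlet_temps_f, ceiling_f=80.6):
    return {rack: t for rack, t in inlet_temps_f.items() if t > ceiling_f}

readings = {"rack-A1": 74.3, "rack-B4": 82.1, "rack-C2": 79.8, "rack-D7": 84.0}  # example data
for rack, temp in sorted(find_hot_spots(readings).items()):
    print(f"{rack}: {temp:.1f} F inlet - check airflow/containment before adding cooling")
```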

If equipment is already operating slightly above its recommended temperature, a cooling failure can also do more damage to components. Instead of simply shutting down, a component may be damaged beyond repair; automatic thermal shut-offs can only go so far.

If possible, consulting staff at other data centers that have raised their temperatures can provide valuable advice and insight. And if the IT techs walking around your data center are working in sweaters, that should say something about your cooling costs.
