
While we are at it, can anyone explain to me why so much power is spent on cooling datacenters to ~20C, when you could just run them "hot" at ~40C, with the hardware being just fine and cooling costs being many times lower (capex costs would be lower, too)? Are my assumptions wrong, or is there a good reason for this?


Huge disclaimer that I'm not an expert on this, but:

Googling suggests that you're trading off the electricity used to cool the data center against the electricity used to blow air over the electronics, and that ~25C is the temperature where the sum of the two is minimized:

https://i.dell.com/sites/content/business/solutions/whitepap...

On Oxide and Friends (a podcast by a startup building servers), they claim their choice of larger, quieter fans reduces fan power consumption from ~20-30% of what a rack consumes to ~2%, which might suggest that the ~25C number is more a result of poor hardware choices for moving air than anything else.

https://youtu.be/xNLxknaj72g?si=TWdQnHIRq_BsOOXi&t=3301
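
To make that trade-off concrete, here's a toy model in Python. The numbers are illustrative only (not from the Dell whitepaper); the constants are picked so the total bottoms out near ~25C, purely to show the shape of the curve: cooling work falls as you raise the setpoint, while fan power rises steeply because fan power scales roughly with the cube of airflow.

    # Toy sketch of the cooling-vs-fan-power trade-off. Illustrative numbers only
    # (not from the Dell whitepaper); constants chosen so the minimum lands near ~25C.

    def chiller_power_kw(setpoint_c, it_load_kw=500):
        """Cooling work grows with the 'lift' below an assumed ~35C outside air temperature."""
        return it_load_kw * 0.01 * max(35 - setpoint_c, 0)

    def fan_power_kw(setpoint_c, it_load_kw=500):
        """Hotter inlet air needs more airflow, and fan power scales roughly with airflow cubed."""
        airflow_factor = 1 + 0.06 * max(setpoint_c - 20, 0)
        return it_load_kw * 0.03 * airflow_factor ** 3

    for t in range(18, 42, 2):
        total = chiller_power_kw(t) + fan_power_kw(t)
        print(f"{t:2d}C  chiller={chiller_power_kw(t):5.1f} kW  fans={fan_power_kw(t):5.1f} kW  total={total:5.1f} kW")

With Oxide-style larger fans, that cubic fan term would be much flatter, which is exactly why the optimum setpoint would shift hotter.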


I guess that the usual range for datacenter temperature must account for any humans working in it.

The usual legally acceptable office temperature where I've worked in Europe was between 18°C and 26°C.

To allow someone to work outside this range, you'd need to reclassify the workers and provide accommodations for working under "hard conditions".

For example: workers in refrigerated warehouses have specific working-time regulations and specific clothing provided by the employer.


It's not even the occasional human; humans are in data centers all the time.

Temperatures of 40°C are easily high enough to kill someone doing manual labor, especially because that heat isn't coming from the sun: if someone starts suffering from heat exhaustion, going into the shade isn't going to help them.


One reason cooling is used is to protect the Li-ion or lead acid batteries in the uninterruptible power supplies that provide backup power to the servers. Both Li-ion and lead acid lose calendar life and reliability when operated at 40 C. So, you have extra battery replacement costs and reliability concerns at higher temperature.

I would also imagine that you need some type of cooling system even to keep the ambient temperature at 40C. Otherwise the servers are just continuously pumping out heat, and the temperature may get high enough that you have equipment issues. It could be the case that the equipment needed to cool to 25C is not that different or more expensive than the equipment needed to maintain a temperature of 40C.
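
On the battery point, a commonly cited rule of thumb is that lead-acid (VRLA) calendar life roughly halves for every ~10C above 25C. A quick back-of-the-envelope, where the 10-year rated life is just an assumed example figure:

    # Rule-of-thumb derating: calendar life halves per ~10C above the rated temperature.
    # The 10-year rated life at 25C is an assumed example, not a spec from this thread.

    def expected_life_years(rated_life_years, rated_temp_c, actual_temp_c, halving_step_c=10):
        return rated_life_years / 2 ** ((actual_temp_c - rated_temp_c) / halving_step_c)

    for room_temp_c in (25, 30, 35, 40):
        print(f"{room_temp_c}C -> ~{expected_life_years(10, 25, room_temp_c):.1f} years")
    # 25C -> ~10.0 years, 40C -> ~3.5 years: hence the replacement-cost concern.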


My impression is that these power supplies are usually in separate rooms or even buildings, so you can have different temperatures there.

> *It could be the case that the equipment needed to cool to 25C is not that different or more expensive than the equipment needed to maintain a temperature of 40C.*

From what I've heard, the cost difference is quite large.


I have a background in running 150,000+ GPUs in all sorts of environments.

I'm in the process of building a new GPU-based supercomputer, and we are going with an air-cooled datacenter with a 1.02 PUE, while everyone else is going with 1.4 PUE.

We confirmed with our vendors that air cooling is just fine. The machines will automatically shut down to protect themselves if they get too hot, but I doubt that will happen because we just expel the hot air immediately.


How substantial are the savings from going "hot" in your experience?


I thought I answered that. Nearly 100% of the energy is going into the servers instead of the facility.

https://www.sunbirddcim.com/blog/how-do-i-calculate-pue
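
In case it helps: PUE is just total facility energy divided by the energy that reaches the IT equipment, so 1.02 means ~2% overhead and 1.4 means ~40%. A quick sketch with an assumed 1 MW IT load (not a figure from this thread):

    # PUE = total facility energy / IT equipment energy.
    # The 1 MW IT load is an assumed example for illustration.

    def total_facility_power_mw(it_load_mw, pue):
        return it_load_mw * pue

    it_load_mw = 1.0
    for pue in (1.02, 1.4):
        total = total_facility_power_mw(it_load_mw, pue)
        overhead_kw = (total - it_load_mw) * 1000
        print(f"PUE {pue}: {total:.2f} MW from the grid, {overhead_kw:.0f} kW on cooling and power conversion")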


Your info about electricity costs is great; I should've clarified that my second question was about the capex side of things.


The capex to build a low PUE data center is a fraction of the cost of a traditional tier 3. Just think about all the infrastructure you don't have to build out when you don't have chillers. Essentially, all you need is a big box with fans on it.

I know HN hates Bitcorn, but the fact is that the people who mine it have developed some really innovative solutions around data centers and cooling designs. We're going to take advantage of that for our new (not crypto) supercomputer. The money saved there can go into buying more compute and growing the business itself.


Microsoft ran a pilot data center inside an unsealed tent[0]

Minor blurb:

> Inside the tent, we had five HP DL585s running Sandra from November 2007 to June 2008 and we had ZERO failures or 100% uptime.

> In the meantime, there have been a few anecdotal incidents:

> Water dripped from the tent onto the rack. The server continued to run without incident.

> A windstorm blew a section of the fence onto the rack. Again, the servers continued to run.

> An itinerant leaf was sucked onto the server fascia. The server still ran without incident (see picture).

[0] https://web.archive.org/web/20090219172931/http://blogs.msdn...



