What is the ‘best’ fire suppression system?

Published on 4th January 2015 by Ian Bitterlin

It occurred to me the other day that every operational upside has a technical downside – the engineer's version of 'every silver lining has a cloud', I guess.

The latest example was a conclusion reached after several occasions during 2014 when I was asked for opinions on fire suppression. Just like Groucho Marx, who said that he had principles 'but if you don't like them I have others', there is rarely a single answer that fits all circumstances. The attempt to make 'one size fits all' is always a compromise.

I know that in North America there is still a high incidence of water sprinklers (double-knock dry-pipe systems etc), but I have been led to believe that this is driven more by building insurance than by 'protecting' the IT load. Certainly I can't recall the last time I saw a wet system in Europe, where the suppression of choice appears to be one of the high-pressure gaseous types, with a minor sprinkling (pun intended) of water-mist/fog.

The problems with 'gas' are noise on discharge (elevated hard-disk failure rates caused by the acoustic shock-wave), the pressure relief needed to minimise structural damage (burst panels etc) and gas extraction after a discharge – and gaseous systems are certainly not ideal for direct-air systems (air-side economisation). There is also an urban myth (or is it?) that the gas contaminates ICT motherboards.

Also, let's not forget that many surveys over the past few years (e.g. by the Liebert Users Group and The Uptime Institute) have highlighted that false discharge, often caused by human error, is a common cause of IT interruption, and that the cost of gas replenishment is considerable.

Then you have to consider fresh-air make-up/pressurisation systems and how they interface with whichever fire suppression is applied. So, I hear you ask, where is this leading? Well, to a fire-suppression system that is certainly not 'new' but is not widely applied in data centres: 'oxygen-reduction', sometimes shortened to 'oxy-reduct' and also known as 'hypoxic'. Some of the most fragile historical documents and artefacts – the Magna Carta, for example – have been stored in oxygen-depleted environments for several years.

The theory is bullet-proof: reduce the oxygen content of the air from 21% to below 16% by volume and combustion cannot be sustained. It appears to be 'the' perfect solution for continuous computing, with no environmental impact, no hardware contamination and no risk of false activation, yet there are several issues that need the designer's care and attention:

• The room must be air-tight to levels above ‘normal’
• The internal pressure should be higher than the external pressure, so the 'fresh-air make-up' has to be well engineered – effectively it becomes a supply of nitrogen-enriched air that displaces the oxygen
• Some form of ‘air-lock’ is to be preferred for entry/exit
• The 16% O2 level is roughly equivalent, in oxygen partial pressure, to an altitude of around 2,200m ASL (see the sketch after this list) – so working conditions that involve physical exertion, especially for operatives with respiratory conditions such as asthma, have to be considered
• The redundancy plan for the oxy-reduct plant has to be considered, as does the time required to reduce the oxygen level after an outage – the room effectively has no protection during that period
• Low oxygen alarms should be installed for H&S reasons
• A small point, but the embedded energy in the supply of nitrogen gas must be included in the PUE calculation (see the illustration after this list).
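As a rough check on the altitude-equivalence point above, the oxygen partial pressure of 16% air at sea level can be matched against the International Standard Atmosphere. Here is a minimal sketch in Python, assuming the ISA troposphere model and standard constants (none of these figures come from the original article):

```python
# International Standard Atmosphere (ISA) troposphere constants
P0 = 101.325      # sea-level pressure, kPa
T0 = 288.15       # sea-level temperature, K
LAPSE = 0.0065    # temperature lapse rate, K/m
G = 9.80665       # gravitational acceleration, m/s^2
R = 287.053       # specific gas constant of dry air, J/(kg*K)

O2_NORMAL = 0.21   # normal oxygen fraction by volume
O2_REDUCED = 0.16  # oxy-reduct design level

def pressure_at_altitude(h: float) -> float:
    """ISA troposphere pressure (kPa) at altitude h metres."""
    return P0 * (1 - LAPSE * h / T0) ** (G / (R * LAPSE))

# Oxygen partial pressure of 16% air at sea level
target_po2 = O2_REDUCED * P0

# Walk upwards in 10 m steps until normal air is just as oxygen-poor
altitude = 0.0
while O2_NORMAL * pressure_at_altitude(altitude) > target_po2:
    altitude += 10.0

print(f"16% O2 at sea level is equivalent to about {altitude:.0f} m ASL")
# -> roughly 2,200 m
```

And on the PUE point, a trivial illustration of where an on-site nitrogen generator lands in the arithmetic – all the energy figures below are invented purely for illustration:

```python
# PUE = total facility energy / IT energy; the nitrogen plant counts as
# facility overhead. All figures are invented example numbers.
it_energy = 1000.0        # kWh consumed by the IT load
cooling_energy = 350.0    # kWh for CRACs/CRAHs, pumps, etc.
power_losses = 100.0      # kWh of UPS and distribution losses
nitrogen_energy = 15.0    # kWh for on-site nitrogen generation

pue_without = (it_energy + cooling_energy + power_losses) / it_energy
pue_with = (it_energy + cooling_energy + power_losses + nitrogen_energy) / it_energy

print(f"PUE excluding the nitrogen plant: {pue_without:.3f}")  # 1.450
print(f"PUE including the nitrogen plant: {pue_with:.3f}")     # 1.465
```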

So all of these points can be engineered 'out', and until just the other day that was my exhaustive list.

Then one more item struck me: air is made up of roughly 21% O2 and 78% N2 by volume, and their diatomic molecular masses are 32 and 28 respectively. My first instinct was that changing the ratio to 16% O2 and 83% N2 would make the air noticeably 'thinner', but the arithmetic says otherwise: because nitrogen is only slightly lighter than oxygen, the mass per cubic metre at the same temperature and pressure falls by only about 0.7%. Nitrogen and oxygen also have almost identical molar heat capacities, so the amount of heat the oxy-reduct air can carry is essentially unchanged, and the heat-exchanger coils in CRACs or CRAHs need no meaningful de-rating or extra surface area to maintain the same heat-transfer capacity.
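A quick sanity check of that arithmetic in Python – standard molar masses, with ideal-gas behaviour at the same temperature and pressure assumed (density is then proportional to mean molar mass):

```python
# Density change when nitrogen displaces oxygen, assuming ideal-gas
# behaviour at constant temperature and pressure (density is then
# proportional to the mean molar mass of the mixture).
M_O2, M_N2, M_AR = 31.999, 28.014, 39.948  # molar masses, g/mol

def mean_molar_mass(o2: float, n2: float, ar: float = 0.01) -> float:
    """Mean molar mass (g/mol) from volume fractions of O2, N2 and Ar."""
    return o2 * M_O2 + n2 * M_N2 + ar * M_AR

normal = mean_molar_mass(o2=0.21, n2=0.78)   # ~28.97 g/mol
reduced = mean_molar_mass(o2=0.16, n2=0.83)  # ~28.77 g/mol

print(f"mass per cubic metre changes by {(reduced / normal - 1) * 100:+.2f}%")
# -> about -0.7%
```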

That is one item that had not been on my radar, and it was a relief to find, on checking the numbers, that the impact on the cooling design is negligible.

There is one other issue which, I think, solves itself: the server fans will have to run fractionally faster to move the same mass of the slightly thinner air, but the increase – like the density change – is well under 1%, and the extra fan power drawn is negligible. Having said that, it has to be assumed that the server fans at full speed will be sufficient to provide the full cooling required at full compute load – and I am not sure that is something that can be checked with the ICT OEM.
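To see why, here is a back-of-envelope sketch using the standard fan affinity laws and the ~0.7% density reduction calculated above (illustrative only – real server fans are not ideal fans):

```python
# Fan affinity laws: volume flow scales with speed, and power scales
# with density times speed cubed. To keep mass flow (and hence heat
# removal) constant in slightly thinner air, speed rises by 1/density.
rho_ratio = 0.993              # oxy-reduct air density / normal air density
speed_ratio = 1.0 / rho_ratio  # speed-up needed for constant mass flow

power_ratio = rho_ratio * speed_ratio ** 3

print(f"fan speed increase: {(speed_ratio - 1) * 100:.2f}%")  # ~0.7%
print(f"fan power increase: {(power_ratio - 1) * 100:.2f}%")  # ~1.4%
```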

So, in conclusion, I still think that oxy-reduct offers huge operational and availability advantages but the application has to be engineered carefully, as with everything in data centres.
