Saturday, June 28, 2014

Green Datacenter

Note: This blog post was written with the help of my friend Rajanikanth Katikitala


Green IT
•       Use servers with lower power demands. The lower the PUE, the more efficient the data center infrastructure (a quick calculation is sketched after this list):

        Power Usage Effectiveness (PUE) = Total facility power / Total IT equipment power
•       Reduce floor space by moving to high-density blade servers
•       Use hot and cold aisle containment to avoid mixing hot air with cold air
•       Use rack or row cooling instead of traditional CRAC units
•       Use free cooling; retrofit an outside air supply where possible
•       Right-size the cooling capacity to the IT load to avoid oversizing
•       Concentrate high-density equipment in dedicated zones so cooling can be focused there
•       Alternatively, distribute high-density racks throughout the layout to mitigate hot spots; use spot rack cooling as necessary
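
As a quick illustration of the PUE formula above, the sketch below computes it from metered loads; the kW figures are hypothetical, not measurements from any particular facility.

    # Minimal sketch: computing PUE from metered loads (hypothetical numbers).
    # PUE = total facility power / total IT equipment power; 1.0 is the ideal.
    total_facility_kw = 1200.0   # utility feed: IT load + cooling + lighting + losses
    it_equipment_kw = 800.0      # servers, storage, and network gear
    pue = total_facility_kw / it_equipment_kw
    print(f"PUE = {pue:.2f}")    # 1.50 -> 0.5 W of overhead per watt of IT load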




The Uptime Institute “tiers”
The Uptime Institute classifies data centers into “tiers”. The following table is taken from the paper “Industry Tier Classifications Define Site Infrastructure Performance,” by W. Pitt Turner IV and Kenneth Brill, both of the Uptime Institute.

                                     Tier I      Tier II     Tier III              Tier IV
Delivery paths (power, A/C, etc.)    Only 1      Only 1      1 active + 1 passive  2 active
Redundant components                 N           N+1         N+1                   2(N+1) or S+S
Support space to raised floor ratio  20%         30%         80-90%                100%
Initial watts/sq ft                  20-30       40-50       40-60                 50-80
Ultimate watts/sq ft                 20-30       40-50       40-60                 150+
Raised floor height                  12"         18"         30-36"                30-36"
Floor loading (pounds/sq ft)         85          100         150                   150+
Utility voltage                      208, 480    208, 480    12-15 kV              12-15 kV
Months to implement                  3           3-6         15-20                 15-20
Year first deployed                  1965        1970        1985                  1995
Construction $/sq ft raised floor    $450        $600        $900                  $1,100
Annual IT downtime due to site       28.8 hrs    22.0 hrs    1.6 hrs               0.4 hrs
Site availability                    99.671%     99.749%     99.982%               99.995%
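
The site availability row follows directly from the annual downtime row; the only assumption in the sketch below is an 8,760-hour year.

    # Sketch: converting annual site downtime (hours) to availability.
    HOURS_PER_YEAR = 24 * 365
    for tier, downtime_hrs in [("I", 28.8), ("II", 22.0), ("III", 1.6), ("IV", 0.4)]:
        availability = 100 * (1 - downtime_hrs / HOURS_PER_YEAR)
        print(f"Tier {tier}: {availability:.3f}%")
    # Tier I: 99.671%  Tier II: 99.749%  Tier III: 99.982%  Tier IV: 99.995%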
“Even a fault-tolerant and concurrently maintainable Tier IV site will not satisfy an IT requirement of “Five Nines” (99.999%) uptime. The best a Tier IV site can deliver over time is 99.995%, and this assumes a site outage occurs only as a result of a fire alarm or EPO (Emergency Power Off), and that such an event occurs no more than once every five years. Only the top 10 percent of Tier IV sites will achieve this level of performance.  Unless human activity issues are continually and rigorously addressed, at least one additional failure is likely over five years.  While the site outage is assumed to be instantaneously restored (which requires 24 x “forever” staffing), it can still require up to four hours for IT to recover information availability.
Tier IV’s 99.995% uptime is an average over five years.  An alternative calculation using the same underlying data is 100% uptime for four years and 99.954% for the year in which the downtime event occurs.  Higher levels of site uptime can be achieved by protecting against accidental activation or the real need for fire protection and EPOs. Preventatives include high sensitivity smoke detection, limiting fire load, signage, extensive training, staff certification, limiting the number of “outsiders” in critical spaces, and treating people well to increase pride in their work. All of these measures, if taken, can reduce the risk of failures.  Other solutions include placing the redundant parts of the IT computing infrastructure in different site infrastructure compartments so that a site infrastructure event cannot simultaneously affect all IT systems.  Another alternative is focusing special effort on business-critical and mission-critical applications so they do not require four hours to restore.  These operational issues can improve the availability offered by any data center, and are particularly important in a “Four Nines” Tier IV data center housing IT equipment requiring “Five Nines” availability.”
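
The 99.954% event-year figure is consistent with the four-hour IT recovery time mentioned in the quote:

    # Sketch: a single 4-hour recovery event concentrated in one year.
    print(f"{100 * (1 - 4 / (24 * 365)):.3f}%")  # 99.954%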
Heating, Ventilation and Air Conditioning (HVAC) standards
The Uptime Institute recommends hot and cold aisles. Instead of having the fronts of all racks facing one way, racks in raised floor data centers should be installed with the fronts towards each other (the cold aisle) and the backs towards each other (the hot aisle).
ASHRAE "TG9 HDEC" is the industry guiding reference document for data center environments in the US. According to it the data center ambient temperature should be from 20o C (68o F) to 25o C (77o F) and measured in the cold aisle at 5 ft. above the raised floor at the air intake of the operating equipment.  The relative humidity should be from 40 to 55%.  The future TIA-942 standard will be comparable to the ASHARE standard. The TIA-942 standard will call for a maximum rate of change of 5o C (9o F) per hour.
“Operationally, err on the side of a slightly cooler (21-22 °C / 70-72 °F) room ambient, measured at an average 3-5' off the floor in the center of the aisles. Experience has shown that prolonged room temperatures above 75-78 °F (24-26 °C) reduce HDD life by up to 50% and cause serious internal issues, as interior processor temperatures can exceed 40 °C (104 °F).”
If you are involved in writing a proposal that calls for data center temperatures, it is strongly recommended that SLAs state that the HVAC system will keep equipment within the temperature ranges specified by the equipment OEMs, rather than the air temperatures mentioned above. If a loss of city water supply causes a cooling emergency, you will bless that bit of advice.
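
If you do need to monitor against the published envelope, the thresholds quoted above are easy to encode; the function below is an illustrative sketch (its name and inputs are invented for this example, not taken from any standard or library).

    # Sketch: validating a cold-aisle reading against the envelope above.
    def in_envelope(temp_c: float, rh_pct: float, delta_c_per_hr: float) -> bool:
        ok_temp = 20.0 <= temp_c <= 25.0       # 20-25 C (68-77 F) at equipment intake
        ok_rh = 40.0 <= rh_pct <= 55.0         # 40-55% relative humidity
        ok_rate = abs(delta_c_per_hr) <= 5.0   # max 5 C (9 F) change per hour
        return ok_temp and ok_rh and ok_rate

    print(in_envelope(22.5, 48.0, 1.2))  # True: comfortably inside the envelope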
Rack spacing
Based on studies that Jonathon Jew performed at 12 web hosting data centers, between 30% and 60% of the computer room space can be consumed by power and cooling equipment and circulation (compare with the Uptime table's entries for support space to raised floor ratio). He comments: “… a lot depended on the heat density of the computer room, the shape of the computer room, and the quantity and size of columns.”
That reference to circulation concerns rack spacing and other passageways. The ASHRAE and ANSI/TIA recommendations for cabinet/rack spacing are to provide a minimum of 4 feet of clearance in the cold aisles in front of cabinets and 3 feet of clearance in the hot aisles behind cabinets. The Americans with Disabilities Act dictates a minimum of 3 feet for any aisles that wheelchairs need to traverse. Jonathon says: “For cabinets/racks with equipment with a depth no greater than 2 feet, you may be able to get by with rows of cabinets/racks spaced on 5 feet centers. However, the recommended spacing based on ASHRAE and ANSI/TIA is 7 ft centers.”
If your customer has HP 42U racks (24" wide and close to 40" deep), with 4 feet of clearance for the cold aisle and 3 feet of clearance in the hot aisle, two racks plus their aisles occupy 164" x 24". That's 13.7 square feet per rack including the hot and cold aisles, but ignoring any other additional passageways (the arithmetic is sketched below).
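
Here is that footprint arithmetic spelled out, using the dimensions stated above:

    # Sketch: floor area per rack for back-to-back rows with hot/cold aisles.
    rack_width_in, rack_depth_in = 24, 40   # HP 42U rack, per the text
    cold_aisle_in, hot_aisle_in = 48, 36    # 4 ft cold aisle, 3 ft hot aisle
    # One repeating unit holds two racks: cold aisle + rack + hot aisle + rack.
    unit_depth_in = cold_aisle_in + rack_depth_in + hot_aisle_in + rack_depth_in
    sq_ft_per_rack = (unit_depth_in * rack_width_in) / 144 / 2
    print(f"{unit_depth_in} in deep, {sq_ft_per_rack:.1f} sq ft per rack")  # 164 in, 13.7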
