2015-04-02

At one time, data centre rack enclosures and related equipment were considered commodity products – simply a platform to stack equipment, with more enclosures purchased as servers and rackmount equipment were added to the IT inventory. Today, even though the sophistication and criticality of the data centre has soared, some may still assume that because a rack enclosure isn’t electronic, it’s a modest piece of furniture. In reality, rack enclosures are highly engineered equipment that can enhance the efficiency of supported equipment, and improve the productivity of data centre personnel.

Rack systems are strategic assets that play a key role in system uptime and data centre availability and reliability. They can be counted on to be flexible and adaptive to accommodate rapid change. They can accommodate monitoring systems that can enhance the management of the data centre ecosystem. They are, in short, a vital component of any data centre.

Today’s environment

The changing nature of the data centre environment has placed increased demand on the physical infrastructure. Today, data centres are 24/7/365 mission-critical systems. Growing business demands put pressure on data centres to provide ever more processing power and data storage.

New generations of high density servers and networking equipment have increased rack densities and overall facility power requirements. While power density per rack averaged 6 kW in 2006, it climbed to about 8 kW by 2012, and is expected to approach 12 kW per rack by 2014, according to data collected by the Data Center Users Group, sponsored by Emerson Network Power.

The need now exists for taller, wider and deeper racks to accommodate the changes in IT equipment and densities. As data centre managers strive to make the most of valuable floor space, racks are filled more completely than ever. While high density configurations can enhance energy efficiency, they also create a need for effective power delivery and thermal management.



Fig. 1: Locks on doors.

High density configurations also increase cabling density. As more power is delivered to more circuits within a rack, the additional cabling required can create obstructions that make heat removal more difficult and reduce access to equipment. Racks must therefore position and route cabling correctly while providing ready access to equipment.

Failures caused by high temperature or humidity in the rack are clearly unacceptable. The cost of downtime in critical operations demands data centre availability. According to an analysis conducted by the Ponemon Institute, a single minute of data centre downtime costs organisations approximately $5600. With the average reported downtime incident lasting 90 minutes, the average cost of a downtime event is about $504 000. Optimised airflow, organised cabling and a monitored sensor network in the rack provide a safe environment for heat-emitting equipment.
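
Those downtime figures multiply out directly. As a quick sanity check, a minimal sketch of the arithmetic (the per-minute cost and incident length are the Ponemon figures quoted above; the function itself is illustrative, not from any published tool):

```python
# Downtime cost arithmetic based on the Ponemon Institute figures quoted
# above; the function and names are illustrative, not from a published tool.

COST_PER_MINUTE = 5600  # approximate cost of one minute of downtime, in US$

def downtime_cost(minutes: float) -> float:
    """Estimated total cost of a downtime incident of the given length."""
    return minutes * COST_PER_MINUTE

# The average reported incident lasts 90 minutes:
print(downtime_cost(90))  # 504000 -- matches the ~$504 000 figure above
```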

Security is another concern. Given regulations such as Sarbanes-Oxley, many organisations must ensure that data is secured not only from online threats, but from physical threats to equipment as well. Racks with lockable side panels and/or doors can prevent unwarranted access or theft, and have become the norm in many data centres.

Rack enclosure trends

The need to improve IT system performance and ensure reliability has driven the evolution of rack technology. Today's racks offer features that improve equipment installation speed, while increased height, width and depth options better accommodate the larger equipment being introduced into the data centre.



Fig. 2: a) Velcro straps. b) Lobster claws.

Personnel who install or remove components are no longer strictly technicians or maintenance staff. Tool-less installation helps to save time during equipment deployment. The ability to add new equipment or change configurations at a moment's notice makes fast "slide in, slide out" component mounting a necessity in today's data centre.

Taller racks, beyond the common 47U (2200 mm), are also becoming more popular as data centres with room to expand vertically take advantage of the available headroom.

The depth of rack-mounted equipment is also increasing. For example, the greatest server depth requirements have pushed rack depths to 1200 mm.

The introduction of side-breathing equipment makes rack width a factor as well. Rack widths up to 1000 mm are becoming common to meet equipment manufacturers' specifications, which call for clearances of 150 – 280 mm. These clearances are required for proper airflow to the equipment and to provide ample cable management space. Airflow management accessories are often required to make side-breathing equipment compatible with hot-aisle/cold-aisle arrangements.

Racks arranged in a hot-aisle/cold-aisle configuration enhance equipment performance and life. This industry best practice arranges the data centre into cold aisles (two rows of cabinet fronts facing each other) and hot aisles for equipment exhaust (cabinet backs facing each other). The arrangement prevents hot air expelled from one rack from being drawn into equipment directly across the aisle, optimising cooling efficiency, extending equipment life and reducing potential damage from overheating. Racks have traditionally been used as a mounting platform for the equipment housed inside them, but they are also important for mounting power distribution equipment and overhead cable management, and as attachment points for aisle containment.



Fig. 3: Hot aisle/cold aisle configuration.

What to look for: flexibility, adaptability

The key to data centre rack planning is to think about flexibility and adaptability. Needs will continue to evolve, and racking solutions must be able to evolve with them. The data centre must be able to adapt to each individual rack environment and rack zone. Similarly, the rack must adapt to the room.

Planning with flexibility in mind allows a data centre to respond quickly and easily to changing business needs. Considerations include:

Weight capacity

Racks should be chosen to support the required loads. A 1U server weighs approximately 15 kg. Given the number of servers, cables, rack power distribution units, overhead cable mounts and containment support points in a typical rack application today, a rack with a 1000 kg capacity should be considered. Racks with capacities of up to 1500 kg are available for heavy-duty applications.
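
As a rough sanity check on these capacity figures, a minimal sketch (only the ~15 kg-per-server figure comes from the text above; the server count and accessory weight are illustrative assumptions):

```python
# Rough rack-load estimate against a 1000 kg capacity. Only the
# ~15 kg-per-1U-server figure comes from the article; the other
# values are illustrative assumptions.

SERVER_WEIGHT_KG = 15       # approximate weight of a 1U server
RACK_CAPACITY_KG = 1000     # capacity suggested above

servers = 42                # a fully populated 42U rack (assumption)
accessories_kg = 60         # cables, rack PDUs, mounts (assumption)

load_kg = servers * SERVER_WEIGHT_KG + accessories_kg
print(f"Estimated load: {load_kg} kg")               # 690 kg
print(f"Headroom: {RACK_CAPACITY_KG - load_kg} kg")  # 310 kg
```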

482,6 mm (19-inch) rails

Each of the four rails in a rack should be easy to adjust to the needs of the supported equipment. Alignment guides ensure that rails are properly positioned without the need to measure. Front rails should also be flexible enough to allow for cables in a networking application. Rails that use cage nuts eliminate the need for tapping, and for drilling out stripped screw holes: stripping a hole is easy, while repairing it is time consuming and therefore expensive. Cage nuts are available in a variety of sizes, attach to the rail wherever necessary, and provide an economical, fast and flexible method of mounting.

Doors

Most manufacturers provide doors with approximately 60% to more than 80% perforation for proper airflow (higher perforation levels improve airflow capability). Doors should also be easily reversible; lift-off hinges require no tools to change the door's configuration to open from either the right or the left. Doors that can be easily removed from their hinges simplify equipment loading, as do doors that can open to 170° or more. Handle options include key, combination, electronic and biometric locks, all built into the handle.

Side panels

Side panels have traditionally been held on with screws that were not simple to attach or remove. Newer side panels can be attached or removed quickly by means of quick-release fixings, and may be locked using the same security key as the rack's front door.

Roof

The roof should be ready for cabling and accept a minimum of 1500 Category 5 (Cat 5) cables; many racks today offer that capacity, and higher-capacity designs reach 2500. Custom cable entry can also be provided through the base of the rack if necessary, but the trend is primarily overhead: many data centres do not have the raised floor needed for base entry, and in other cases raised floors are not utilised because they impede rack airflow. Roof hole covers should be used to prevent debris from entering and to reduce airflow loss. Modular busway systems and cable management support systems should also fit easily on the roof.

Grounding

All components, including the roof, doors, rails, side panels and frame, should be grounded for safety. They should also be easily disconnected for convenience. For example, should a door need to be removed for reversal, a quick-disconnect grounding wire will speed the process. A central grounding point that connects to the building's central ground should also be used.

Securing the racks

All racks should be bayed (attached) together, bolted to the floor, or fitted with an anti-tip device for safety. Should a 482,6 mm (19-inch) piece of equipment be partially withdrawn from a rack for service, the rack can become top-heavy, unstable and a safety hazard if the weight is not balanced from below. Baying is the least expensive way to achieve stability; an unlimited number of racks may be bayed together. Anti-tip devices are installed on the front of the rack and provide a set of legs that can be pulled out when rack components are being serviced, essentially expanding the footprint of the rack's base. Additional securing may also be required in areas of high seismic activity; racks must be stable enough to survive potential earthquakes.

Accessories

Racks should feature a comprehensive range of accessories, including:

Cable management: Cable management peripherals help reduce signal crosstalk. They also reduce the potential for blocked equipment access and protect cables from damage, keeping them out of the way during equipment removal, and they help to maintain proper airflow paths. Tool-less accessories should utilise zero-U space outside the 482,6 mm (19-inch) mounting area, as it is important that airflow not be restricted or blocked between the rails of a rack. Cable management "fingers" require no tools to mount and align with the U markings on the rails, preserving zero-U space. Velcro straps and "D" rings may also be mounted throughout the enclosure.

Airflow management: Airflow management peripherals optimise efficiency. Tool-less blanking panels offer a useful option from an energy efficiency standpoint: they may be added to any unused U space to ensure that hot-aisle air is not drawn back into the cold aisle. Vertical airflow baffles should be considered for cabinets wider than 600 mm, to prevent short-cycling of air and maintain the hot-aisle/cold-aisle advantage.

IT equipment support

Tool-less shelves may be added to allow for variations in server, switch and router sizes (height, depth or width). Many such devices aren't configured for typical four-point mounting; an added shelf can alleviate the issue, or support rails can provide depth adjustment.

Conclusion

When well designed and properly selected, racks are a cornerstone of system uptime: a strategic asset and a key element in delivering data centre reliability.

When evaluating racking systems, data centre managers should look for racks that offer the most comprehensive support options, such as cable management, airflow management and IT equipment support.

Consideration should be given to the largest practical size for an application, with an understanding that equipment continues to become larger. Any purchase today should be an investment for tomorrow.

Not all racks are created equal. Rack selection should be based on flexibility and adaptability, to deliver reliability and lower the total cost of ownership over time. This best-practice approach ensures that users get the greatest value from their rack selection, and helps to ensure that the data centre layout will meet the needs of today and of the near future.

Contact Lynette Gordon, Emerson Network Power, 011 284-9639,
lynette.gordon@emerson.com

