IT professionals in data centers are looking for tool-less options to speed up the mounting process, and button mounts have become increasingly common on server racks from several well-known manufacturers. A button mount is a tool-less method for vertically mounting longer equipment, such as Power Distribution Units (PDUs) and vertical cable management solutions, to the side of a server rack. The “button” is usually located on the back of the vertically mounted equipment, while the matching mounting points are most commonly found along the sides of the rack’s uprights.
The increased demand for tool-less mounting has led many companies to create adapter brackets for button-mounted equipment. RackSolutions recently created a button mount adapter for its Rack 111 Open Frame Rack, which mounts directly to the rack’s cable management bars.
A white box server is a customized server that is either home built or built by a white box supplier known as an ODM (Original Design Manufacturer), such as Supermicro. The term “white box” simply means that the equipment is unbranded or generic. All parts are purchased separately, which cuts costs and gives hobbyists and data center professionals alike more room to customize a build to fit their needs. This ease of customization also means individual parts can be replaced when they fail, rather than replacing the entire server. White box servers are increasingly used in the Open Compute Project (OCP), which was started by Facebook.
The drawback of choosing a white box server over a standard OEM server is that it is less reliable and its components often lack redundancy. White box deployments lower the risk of downtime by using clustering techniques: a cluster provides high availability by grouping servers together to act as a single system, so the failure of one node does not take down the service. With this in mind, a company should carefully weigh the pros and cons of deploying white box servers to ensure that they are actually cost-effective.
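The clustering idea above can be sketched as a simple failover loop. This is a minimal illustration, not a real deployment; the node names and health map are hypothetical.

```python
# Minimal sketch of cluster failover: if one white box node fails,
# requests route to the next healthy node, so the group behaves
# like a single highly available system. Node names are hypothetical.

def route_request(nodes, is_healthy):
    """Return the first healthy node in the cluster, or raise if none remain."""
    for node in nodes:
        if is_healthy(node):
            return node
    raise RuntimeError("cluster down: no healthy nodes remain")

cluster = ["whitebox-1", "whitebox-2", "whitebox-3"]
health = {"whitebox-1": False, "whitebox-2": True, "whitebox-3": True}

# whitebox-1 has failed, but the cluster still serves traffic:
print(route_request(cluster, health.get))  # whitebox-2
```

Real clusters use heartbeats and load balancers rather than a simple loop, but the principle is the same: redundancy at the group level compensates for weaker reliability at the node level.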
Rating Data Centers:
Uptime Institute’s Data Center Tier Classification System
What is the Tier System?
The Tier Classification System is a benchmarking system from Uptime Institute that rates the availability, or uptime, of a data center. A number of factors determine which tier a data center falls under, including power, cooling, and ancillary data center systems. Each higher tier delivers more uptime and better data center performance, and requires more investment. Data centers range from Tier I to Tier IV, with each tier incorporating the requirements of the tiers below it: Tier I is the simplest and Tier IV the most resilient. Tier III is a common commercial solution for colocation and wholesale data center service providers, while Tier IV data centers are designed for risk-averse businesses with mission critical applications. The following criteria determine which tier a facility falls under:
• Tier I Basic Capacity data centers have a single path of power using an uninterruptible power supply (UPS) to handle short outages, dedicated cooling systems, and engine generators for extended outages. A Tier I data center must have its own dedicated space as well as dedicated site infrastructure for IT support outside of the office. Tier I data centers offer no redundancy.
• Tier II Redundant Component data centers, like Tier I data centers, have a single path for power and cooling distribution, but add redundant components, such as UPS modules, chillers or pumps, and engine generators, to protect IT processes from interruptions.
• Tier III Concurrently Maintainable data centers have redundant components and multiple distribution paths, so equipment can be maintained, repaired, or replaced without a shutdown. Tier III data centers have active power and cooling distribution paths serving dual-corded IT equipment.
• Tier IV Fault Tolerant data centers offer multiple power and cooling distribution paths with an autonomous response to failure. Tier IV data centers are self-healing in the case of faults, are compartmentalized to limit the impact of a single major fault, and have Continuous Cooling for the transition from utility power to engine generators. As in Tier III data centers, IT equipment is dual corded, so IT operations are not interrupted in the event of an equipment failure.
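The practical difference between tiers is easiest to see as allowed downtime per year. The percentages in this sketch are the availability figures commonly cited alongside each tier, used here purely for illustration; the Tier Standard itself is defined by topology, not by an uptime number.

```python
# Commonly cited availability figures associated with each tier
# (illustrative only; Uptime Institute defines tiers by topology,
# not by an uptime percentage).
TIER_AVAILABILITY = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

HOURS_PER_YEAR = 365.25 * 24  # 8766 hours

def downtime_hours_per_year(availability_pct):
    """Convert an availability percentage into allowed downtime per year."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

for tier, pct in TIER_AVAILABILITY.items():
    print(f"{tier}: {downtime_hours_per_year(pct):.1f} hours of downtime/year")
```

Run this and the gap becomes obvious: a Tier I figure allows roughly a day of downtime per year, while a Tier IV figure allows well under an hour.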
We asked industry experts, Keith Klesner, SVP, North America at Uptime Institute, and Jose Ruiz, VP of Operations at Compass Datacenters, about Uptime Institute’s Tier classification system and certification.
How Do I Choose Which Tier Fits My Business Model?
One tier is not better than another; the right tier for a facility depends on the needs of its data center. For example, a small law firm may only need a Tier I facility, while a large-scale e-commerce website may require a Tier III or Tier IV facility. “It really depends on the company’s needs,” says Jose Ruiz. “Because the tiers are based on the uptime of the facility, the company needs to ask itself if it is okay with going down from time to time,” Ruiz continued. “For example,” he says, “a cloud application provider that is already replicating its data elsewhere may not need a Tier III or IV facility, and a Tier II system will suffice.” Ruiz stated, “On the other hand, a large financial institution, where uptime is much more mission critical, would more than likely require a Tier III or Tier IV facility.”
The operations plan on how the facility will be managed also plays a huge role. “What Tier facility you plan to use largely depends on the criticality of the business and how you plan to operate long term,” said Klesner. “What operations model are you using and how do you plan to staff your facility? Well-staffed, minimal staff, or even a lights out data center? This will impact the topology choice in your data center design and construction.” Uptime Institute also offers certifications for a facility’s operations. Ruiz suggests, “If you follow what Uptime suggests for operations, then you will have a very well run data center.”
What are the Costs of Not Having a Properly “Tiered” Facility?
The costs of building a data center that meets Uptime’s requirements for a tiered facility vary based on the facility’s business model. The real question a business should be asking is: what are the costs of not meeting these specifications? “A company should always take into consideration the cost of an outage,” Keith Klesner told us, “and not just the direct costs, but also the indirect costs.” Klesner used JetBlue’s recent mishap as an example: “Hundreds of flights were delayed due to their outage, and that may result in future business loss.”
Not having a properly tiered facility can be costly because maintenance may be deferred for prolonged periods. “Non-tiered facilities may have their maintenance deferred, which can eventually cause catastrophic outages,” Klesner stated. “These outages can expose vulnerabilities in infrastructure, which can be very costly to the business.”
Not All “Tiered” Facilities are Actually Certified
While data centers may advertise themselves as a certain tier based on their design, the Tier Classification System refers exclusively to certifications from Uptime Institute. A large number of facilities claim to be Tier III or Tier IV, but far fewer are actually certified as such; many market themselves as tiered facilities, or have engineered to a certain standard, without holding any certification, using “tier” as shorthand for the level of redundancy in their systems. Klesner stated, “A self-certified data center is just that. As the IT industry transitions to data center service providers, the industry is demanding certification from independent experts like the Uptime Institute.”
Why Should a Company Get Certified?
Many companies feel that they do not need to have their facilities certified, or that it is too costly an endeavor. If a company can simply engineer its facility to a certain tier, what good would a certification do? Ruiz strongly suggests that facilities get certified for the peace of mind of unbiased third-party reassurance. “If a company requires a ‘tiered’ facility, they should also require a certification for that peace of mind,” he says. “People should carefully read the standards and reach out to Uptime Institute. People may think it is too expensive to justify the costs, but without the certification you lack the 3rd party validation.” He went on to explain, “Uptime Institute offers free resources about the certification process and standards.”
Ruiz also suggested that data centers should get both the on-paper design of the facility certified and the constructed facility itself. “Some providers might certify a design but find a way to reduce costs during the actual construction.” He used an instance at Compass as an example of why it is important to have both certifications. “We already had our design for the data center certified by Uptime and were having the actual construction of the facility certified. As we were performing a demonstration, some breakers went down. It turned out the breakers were incorrectly wired and labeled, and as a result the demonstration failed. You don’t have those issues vetted out in the paper design. Correcting the labels fixed the problem, but it was an instance where, if we had not certified the constructed facility, it could have resulted in a major outage.”
What Does the Industry Have to Say About the Standard?
Overall, the industry is very positive about Uptime Institute’s classification system. “The Tier system is a fantastic standard for our industry to classify Data Centers,” said Ruiz. “Data centers don’t all have the same needs.” Ruiz praised Uptime Institute’s classification system by saying, “Uptime Institute’s certification system is probably the only system out there that does a good job due to the amount of attention they put into it.” Ruiz said that the classification system has played a crucial role at Compass Datacenters. “The certifications they offer reassure us that our data centers are what we intended them to be and what we market to our customers.”
What is the Open Compute Project?
The Open Compute Project started in 2011 and stems from Facebook’s initiative to improve energy efficiency, reduce hardware costs, and speed up deployment by developing its own custom servers, power supplies, server racks, and battery backup systems. The rack and equipment have evolved from a standardized 19” EIA rack with specialized IT servers into a unique rack with wider equipment and centralized power.
The Open Compute Project revolves around the Open Rack, which provides a 539mm (21.22”) wide equipment space and a 48mm (1.89”) tall unit of space called the OpenU. Power for the rack is standardized on a 12 VDC bus bar that runs the entire height of the rack. The wider and taller server space promotes easier airflow through the equipment, and because power distribution and conversion are centralized in “power shelves,” fewer intermediate power conversions are needed, increasing the overall efficiency of the data center. This saves money twofold: less power is wasted in conversion, and there is less heat to remove, so cooling is more efficient.
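The difference between an OpenU and a standard EIA rack unit is simple arithmetic; this sketch compares the two using the dimensions above. The 42-unit rack height is a hypothetical example, not part of either specification.

```python
# Compare Open Rack "OpenU" spacing with standard EIA rack units,
# using the dimensions given above. The 42-unit height is a
# hypothetical example, not part of either specification.
OPENU_MM = 48.0    # Open Rack unit height (1.89 inches)
EIA_U_MM = 44.45   # standard EIA-310 rack unit (1.75 inches)

def rack_height_mm(units, unit_mm):
    """Total equipment-space height for a given number of units."""
    return units * unit_mm

open_rack = rack_height_mm(42, OPENU_MM)  # 2016.0 mm
eia_rack = rack_height_mm(42, EIA_U_MM)   # 1866.9 mm
print(f"42 OpenU provides {open_rack - eia_rack:.1f} mm more height than 42 EIA U")
```

That extra height per unit is part of what gives OCP servers their roomier airflow path.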
One of OCP’s signature principles is “vanity free”: if it doesn’t provide compute power or storage, you don’t need it. This philosophy keeps the equipment minimalist, utilitarian, and ultimately lower cost than better-known OEM solutions. Bezels and pretty faceplates are out; efficiency is in.
Open Rack v1.0 versus v1.1 & v1.2
The v1.1 and v1.2 standards are a lot more specific about the mounting holes and mounting hole spacing than v1.0, which makes it easier to interchange rails and accessories between different rack manufacturers.
To learn more, visit the Open Compute Project website.
Courtesy of Steve M.
I had been wanting to install network wiring throughout the house. A co-worker recommended an open frame wall mount rack for keeping equipment elevated, organized, and out of the way. I was directed toward a few networking equipment websites, including RackSolutions.com. The level of customization that RackSolutions.com offers on its open frame wall mount racks was instantly appealing. I settled on a size of 15U tall and 9U deep.
After unboxing everything, you will have four posts, a top and bottom rack piece, and some bags with screws and cage nuts:
The metal is solid and has a very durable paint coating across every surface and edge. Running a finger over the metal, especially near corners or punched/cut areas, reveals no sharp or jagged edges. Each post has several threaded connections installed, which are used to attach the post to the rack’s top and bottom sections.
The assembly couldn’t have been easier – only a single #2 Phillips screwdriver is needed.
The posts are designed so that they only fit in a specific alignment with the top and bottom of the rack, thanks to notches at the ends of the posts. Each post fits within the inner corner of the top and bottom of the rack.
Mounting the Rack
After marking and pre-drilling two holes for the top of the rack, two lag bolts and washers were installed. About one-half inch of space was left between the bolt heads and the studs.
Hanging the rack on the two lag bolts:
I first installed the patch panel, network switch, and a two-post shelf into the rack. To keep the free-standing components in place and remove any risk of tipping or falling, I ran Velcro cinch straps around the devices.
Here is the final picture of the completed rack:
This product is simply outstanding, and I am absolutely thrilled with how this project turned out. Once the wiring has been installed into the walls, I am completely confident with housing my network and household components in this rack. The rack is quite sturdy and well-built, and I have no doubt about its durability for years to come. RackSolutions offers plenty of accessories for customizing the look of the rack, which allowed me to select a combination that fit my current needs and plans for the future.
There are many ways to utilize 2-post racks. This customer demonstrates that one can even be used as a TV stand.
(Photos and article courtesy of Casey H.)
By using a common 2-post relay rack, adjustable shelves, and the lowest-profile TV wall mount I could find, I was able to create a very simple and clean replacement for an old, oversized glass-and-metal stand. The look may not be for everyone, but I like the spartan setup and complete customization of shelves. For my entertainment needs, I run only a fanless Intel Core i5 NUC and a Chromecast (plus a Synology NAS in another room).
Some neat features:
· Buy a rack-mount UPS and screw it right onto the stand
· Wires are held with black nylon cable clamps in the back
· The Monoprice wall mount has a locking security bar
· Plethora of 2-post rack shelves available on the market
In this segment of Ask Katrina, she answers FAQ about 19″ racks:
Question 1: What is a “U space”?
A “U” or “RU” is a unit of measurement commonly used when discussing rack-mount equipment. One U space is 1.75 inches tall and spans three mounting holes on the rack rail.
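The U-to-height relationship from the answer above is a one-line conversion; a quick sketch:

```python
# 1U = 1.75 inches (44.45 mm); a rack unit also spans three mounting
# holes on the rail. These conversions follow from the answer above.
U_INCHES = 1.75
U_MM = 44.45

def u_to_inches(u):
    """Height in inches of equipment occupying u rack units."""
    return u * U_INCHES

def u_to_mm(u):
    """Height in millimeters of equipment occupying u rack units."""
    return u * U_MM

print(u_to_inches(1))  # 1.75
print(u_to_inches(4))  # 7.0 -- a 4U server is 7 inches tall
```

So when a spec sheet says a chassis is 2U, that means 3.5 inches of vertical rack space.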
Question 2: What is the most commonly used rack?
We find that most RackSolutions customers typically have a 4-post square hole rack. It’s called a “4-post rack” because there are four uprights, and the holes of the rack are square.
Question 3: What is a 19 inch rack?
A 19 inch rack is the standard EIA-310 server rack. The term “19 inch rack” comes from the width of the equipment that mounts in the rack, measured across the front panel including its mounting flanges.
If you have any technical questions like the ones I answered today, please let us know. You can call, send us an email, or use the technical support chat at racksolutions.com.