Attending a standards meeting on computer packaging and backplanes is like sitting through an insurance seminar: lots of numbers get thrown around, and only a few people in the room understand their significance. The systems integrators and customers in attendance just want a drawing they can use; they also want to feel comfortable that all the boards will go into the chassis without binding or scraping, or, worse yet, flopping around loosely. The responsibility for getting the standard right falls to those mechanical warriors who get little credit for their complex and detailed efforts. But the world is slouching toward “cloud computing,” which requires an all-new model for computer packaging, power consumption, and cooling techniques. This is especially visible in the VITA committees developing the new Small Form Factor (SFF) specifications for critical applications, including VITA 73, VITA 74, and VITA 75.
Worldwide data centers
As the barrage of tolerances and dimensions flung across the room mercilessly numbs the mind and you struggle to remain conscious, it might be instructive to consider what the computer industry’s data centers are doing in this area. Data centers being built all over the world are wrestling with similar problems of mechanical design, power consumption, and cooling for computer boards and boxes.
Facebook is building its new data center in Prineville, Oregon (see http://personaldividends.com/money/jon-t-norwood/facebook-goes-green-and-saves-money) and has released the specifications for its packaging, power supplies, and cooling techniques. The company has formed the Open Compute Project, a hardware analog of the open source software model, to maintain and expand those specifications. Google has massive data centers around the world and actually shifts its search loads during the day to the centers with the lowest power cost (a simple sketch of the idea appears below); the power bills for running and cooling the servers are hurting the company financially. But Google releases its packaging, power, and cooling specifications only under Non-Disclosure Agreement (NDA) to its partners and suppliers. Apple has built a massive data center in Maiden, North Carolina, and two more are under construction in Research Triangle Park, North Carolina. [Editor’s note: as we went to press, Apple announced iCloud, the replacement for .Mac/MobileMe. Experts agree that iCloud requires a massive Apple data center build-out.]
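To give a flavor of how power-cost-aware load shifting might work, here is a minimal Python sketch. The site names, electricity prices, and per-batch energy figures are invented for illustration; Google has not published its actual placement logic.

```python
# Hypothetical sketch of power-cost-aware load placement. All site names,
# $/kWh prices, and per-batch energy figures below are illustrative
# assumptions, not published data from any operator.

# Illustrative current electricity prices ($/kWh) at fictional sites
power_cost_per_kwh = {
    "oregon": 0.045,
    "north_carolina": 0.060,
    "belgium": 0.110,
}

# Assumed energy (kWh) to serve one batch of queries at each site,
# differing with cooling overhead
energy_per_batch_kwh = {
    "oregon": 1.20,
    "north_carolina": 1.35,
    "belgium": 1.10,
}

def cheapest_site() -> str:
    """Pick the site where serving the next batch costs the least right now."""
    return min(
        power_cost_per_kwh,
        key=lambda site: power_cost_per_kwh[site] * energy_per_batch_kwh[site],
    )

if __name__ == "__main__":
    print(f"Route next batch to: {cheapest_site()}")
```

In practice the prices would be refreshed from utility or spot-market feeds throughout the day, which is what makes the shifting dynamic rather than a one-time siting decision.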
Presently, there are 22 massive data centers in China (see http://www.datacentermap.com/china/), but we do not know which chassis, cooling, and power specs they use. Rackspace has six data centers scattered across Hong Kong, the U.S., and the U.K. At some point in the distant past, Intel announced it would build a number of huge data centers around the world. When you look at what is happening at the data-center level, it’s clear that each of these companies has more people working on packaging, cooling, and power specs than our entire embedded industry employs. As the over-hyped “cloud computing” model takes hold, there will be more data centers in the world than fast food restaurants.
The specifications
It occurs to me, as another set of tolerance numbers flies over my head in the meeting, that maybe we should explore what these data center groups are doing and glean some knowledge from the thousands of educated and well-funded minds working on the data-center specifications. Yes, most of their work revolves around keeping cheap commodity motherboards from croaking under excessive heat. They have no appreciable shock and vibration to accommodate in their environment. None of their gear will fly in military aircraft or Unmanned Aerial Vehicles (UAVs), and none will ever endure the pounding dished out to computers in ground combat vehicles. None of these data center applications is burdened with Mean Time Between Failures (MTBF) requirements or with the debilitating effects of the Restriction of the use of certain Hazardous Substances (RoHS) directive, whose lead-free, tin-based solders create tin whiskers. Even the automobile industry has ignored the deleterious effects of RoHS on critical systems, as shown in the recent reports released by NASA and the National Highway Traffic Safety Administration (NHTSA) concerning the Toyota unintended acceleration problems (see www.nhtsa.gov/UA).
But we could learn some interesting things by looking at the problems the data centers face and the technologies they have developed to solve them. All these data centers are using 10 G and 40 G optical Network Interface Cards (NICs) to interconnect their servers, while we remain bogged down in the parasitic, capacitance-infested swamp of copper connections that require divine intervention to work at 10 G. Copper-based interconnects are rapidly becoming a faith-based proposition. While the data centers show a blatant disregard for RoHS consequences, MTBF, shock and vibration, and reliability in general, they have given up on copper interconnects. That point alone suggests the existence of intelligent life in that market segment, and we might learn from how they do power, packaging, and cooling.