It’s not surprising that engineers of high-performance applications such as radar, sonar, or communications are attracted to the prospect of finally being able to use 10 GbE as the high-speed data-plane fabric. Ethernet’s suitability for onboard, on-backplane, and box-to-box communications, along with its ubiquity and consequent advantages for easy connectivity and software portability, make it highly attractive. Still, until the recent advent of 10 GbE, Ethernet’s speed was maxed out at 1 Gbps – too slow to be used for the high-bandwidth data plane.
Previously, the only option at 10 Gbps was to use proprietary or less common interconnects such as RapidIO or PCI Express (PCIe) – interconnects with nowhere near the universality and ecosystem of Ethernet. But with 10 GbE still nascent when the newer VME-family and XMC module standards were being created, the focus was on defining RapidIO and PCIe for the emerging backplanes and XMC high-speed fabric connectors.
Now 10 GbE is rapidly emerging in the commercial space, and the tide is rising in embedded applications, too. One of the technologies contributing to 10 GbE’s popularity outside of the “cable” is XAUI (10 Gigabit Attachment Unit Interface), a widely accepted switching interconnect for 10 GbE components. Thanks to its low pin count and self-clocking serial bus, XAUI offers low-cost, low-power 10 GbE chip-to-chip communication both onboard and over the backplane. Wanting to standardize this technology for the XMC platform, a VITA Standards Organization (VSO) working group developed VITA 42.6, which defines an open standard for supporting 10 GbE over the XAUI or 10GBASE-KX4 switched interconnect protocol on the XMC form factor.
Our discussion focuses on how this higher throughput demand and the desire to use the universal Ethernet standard have left previous fabric standards behind. We’ll examine the XAUI advantage and how the VITA 42.6 specification paves the way for easier high-performance throughput on the XMC platform.
Established fabrics lose their edge
XMC, which raises aggregate data transfer bandwidth to roughly two to five times the PMC’s 1,064 MBps maximum data rate, has been ideal for high-performance systems deployed in rugged environments such as military ground vehicles, fighter jets, or ships, where vibration, intense heat, or humidity can prove problematic. Based on the PMC standard, XMC (defined in VITA 42.0) offers two high-speed serial connectors (J15 and J16) that can be used for switched fabrics as well as point-to-point connections between I/O modules and carrier cards.
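One plausible reading of those multiples, sketched as back-of-the-envelope Python (the fabric configurations here are illustrative assumptions, not figures from the VITA standards):

PMC_MAX_MBPS = 1064                     # 64-bit PCI-X at 133 MHz

def net_mbps(lanes, gbaud):
    """Net payload rate of an 8B/10B-encoded serial link, in MBps."""
    return lanes * gbaud * (8 / 10) * 1000 / 8

pcie_x8 = net_mbps(8, 2.5)              # ~2,000 MBps, about 1.9x the PMC maximum
dual_connector = 2 * 2 * net_mbps(4, 3.125)
                                        # two connectors, two 4-lane links each:
                                        # ~5,000 MBps, about 4.7x the PMC maximum
print(pcie_x8 / PMC_MAX_MBPS, dual_connector / PMC_MAX_MBPS)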
Before the advent of 10 GbE, various fabrics were competing for dominance. In 2006, the switch fabric interfaces of Serial RapidIO (defined by VITA 42.2) and PCIe (VITA 42.3) were mapped onto the XMC architecture. Table 1 summarizes the maximum signaling rate and throughput per XMC connector in each direction (all of these protocols support full-duplex transmission) and illustrates how the XAUI protocol matches the maximum throughput of its XMC predecessors.
In recent years, PCIe emerged as the most popular switch fabric for XMCs, and it enjoys the widest ecosystem support on XMC carriers. However, as the table shows, PCIe’s maximum throughput on a single XMC connector is 20 percent less than that of Serial RapidIO or XAUI. Because PCIe’s throughput initially exceeded application requirements, this deficit did not hamper its emergence as the most popular fabric.
However, increased throughput demand and Ethernet’s global dominance as an interconnect have contributed to a groundswell of interest in using 10 GbE in high-performance applications. An XMC equipped with two 10 GbE I/O connections exceeds the bandwidth of PCIe x8 by 25 percent. And, even when the sustained bandwidth of the application is within what PCIe x8 can handle, if the application chooses to use 10 GbE for backplane communication, there is still an inherent inefficiency in having to bridge PCIe to the Ethernet protocol of the backplane. This manifests in a need for board real estate – or possibly an entire slot – to accommodate the bridge silicon.
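That 25 percent figure follows directly from the net data rates. A minimal sketch, assuming first-generation PCIe signaling at 2.5 Gbaud and 8B/10B encoding on both fabrics (our assumptions, consistent with the 20 and 25 percent figures above):

# Net per-direction throughput in Gbps, assuming 8B/10B encoding
# (80 percent efficient) on both fabrics.
pcie_x8 = 8 * 2.5 * 0.8                    # PCIe x8 at 2.5 Gbaud: 16 Gbps
dual_10gbe = 2 * 4 * 3.125 * 0.8           # two 4-lane XAUI links: 20 Gbps
print(f"{dual_10gbe / pcie_x8 - 1:.0%}")   # prints 25%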
To move to 10 GbE solutions for box-to-box or sensor-to-box applications, though, developers needed a way to get 10 GbE on and off an XMC module when front-panel connections are unavailable. Connections at 1 GbE and similarly low rates used the PMC/XMC Pn4 connector, but 10 GbE signaling is too fast for that connector.
Enter XAUI
The natural candidate for implementing 10 GbE over the XMC connectors is, therefore, XAUI (see Sidebar 1). As a technical innovation, XAUI dramatically improves and simplifies the routing of high-speed electrical interconnections compared to earlier alternatives, making it an ideal baseline for an XMC-connector 10 GbE solution. Developed by an IEEE task force in 2002, XAUI delivers 10 Gbps of data throughput using four differential signal pairs in each direction. It is ideal for chip-to-chip, board-to-board, and chip-to-optics module applications. XAUI is currently the universal 10 Gbps board-level interface supported by the major 10 GbE switch chips.
XAUI was originally developed as a means of extending the reach of the 10 Gigabit Media Independent Interface (XGMII), the bus defined between the 10 GbE MAC and PHY. While XGMII has the requisite 10 Gbps full-duplex speed, it is a wide, parallel, single-ended bus whose electrical properties effectively limit it to distances of about 7 cm, making it very difficult to route on PCBs, let alone across a backplane. XAUI extends the routing reach to about 50 cm. Notably, XAUI is electrically similar to IEEE 10GBASE-KX4, one of the 10 GbE backplane standards, which leverages XAUI for its definition.
As mentioned, XAUI/10GBASE-KX4 is organized into four lanes, each lane comprising two pairs of signals – one pair for transmitting and the other for receiving (enabling full-duplex transmission). Each lane signal-pair runs at 3.125 Gbps with 20 percent overhead due to the widely used 8B/10B signal encoding, netting 2.5 Gbps per lane. Combining the four lanes, XAUI offers 4 x 2.5 Gbps, providing 10 Gbps of throughput in each direction.
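The same lane arithmetic, written out as a quick calculation:

LANES = 4
LINE_RATE_GBAUD = 3.125          # per lane, each direction
EFFICIENCY_8B10B = 8 / 10        # 10 line bits carry 8 payload bits

net_per_lane = LINE_RATE_GBAUD * EFFICIENCY_8B10B   # 2.5 Gbps per lane
link_rate = LANES * net_per_lane                    # 10.0 Gbps per direction
print(net_per_lane, link_rate)                      # 2.5 10.0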
Moving XAUI to XMC
To move the XAUI standard into the XMC domain, a standard way to organize the communications signals on the XMC connectors was required. Accordingly, VITA 42.6 defines the signaling for supporting 10 GbE via the XAUI or 10GBASE-KX4 switched interconnect protocol on the XMC form factor. Building on the XMC base standard, the spec defines how to use the high-speed J15 and J16 connectors to carry the additional signals necessary for communications between the mezzanine card and its carrier. The standard defines 16 differential pairs per connector – 8 for transmit and 8 for receive – which support up to two XAUI/10GBASE-KX4 links per XMC connector. The links are denoted Link 0 and Link 1 on the primary connector and Link 2 and Link 3 on the secondary connector. VITA 42.6 distinguishes between the two forms of Ethernet connectivity: XAUI links are referred to as Type I links, whereas 10GBASE-KX4 links are referred to as Type II.
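To make that organization concrete, here is a small illustrative model of the link layout. The names and structure are our own sketch for exposition – actual pin assignments come from the standard, and either link type can appear on either connector; the Type I/Type II split below is arbitrary:

from dataclasses import dataclass
from enum import Enum

class LinkType(Enum):
    TYPE_I = "XAUI"            # Type I links carry XAUI
    TYPE_II = "10GBASE-KX4"    # Type II links carry 10GBASE-KX4

@dataclass
class Vita426Link:             # hypothetical name, for illustration only
    link_id: int               # Links 0/1 on the primary connector, 2/3 on the secondary
    connector: str             # "J15" (primary) or "J16" (secondary)
    link_type: LinkType
    tx_pairs: int = 4          # transmit differential pairs per link
    rx_pairs: int = 4          # receive differential pairs per link

links = [
    Vita426Link(0, "J15", LinkType.TYPE_I),
    Vita426Link(1, "J15", LinkType.TYPE_I),
    Vita426Link(2, "J16", LinkType.TYPE_II),
    Vita426Link(3, "J16", LinkType.TYPE_II),
]

# Two links per connector: 8 Tx + 8 Rx = 16 differential pairs.
assert sum(l.tx_pairs + l.rx_pairs for l in links if l.connector == "J15") == 16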
The XMC connectors are designed from the ground up to handle high-speed signaling between mezzanine cards and carriers. In mapping the XAUI/10GBASE-KX4 signals onto these connectors, VITA 42.6 takes into consideration the requirements for successfully routing multi-gigahertz differential signals. Techniques used include the interleaving of ground and signal pairs to minimize noise coupling at these high frequencies. Figure 1 shows how this protocol layer standard builds on the XMC base standard within a system.
As the figure shows, the primary connector (J15) of the carrier board brings in data via XAUI, PCIe, Serial RapidIO, or other fabrics. This data could be from other mezzanine cards (for example, an analog-to-digital XMC module), or from a device on the carrier. The secondary connector (J16) is portrayed as heading to the backplane, but it could alternately be terminating at a switch or processing device on the carrier. A simpler VITA 42.6 application might only use the J15 connector for performing high-rate data transfers, over XAUI, between an I/O or processing mezzanine and a carrier card.
Because the standard permits different protocols on different connectors, developers can still bridge from another fabric on one connector to Ethernet on the other. This makes provisions for systems in which data input is still managed by more than a single type of fabric. Figure 2 offers an example of a 10 GbE XMC supporting VITA 42.6. The ADC module transfers data over PCIe (green) to AdvancedIO’s V1120, which bridges it into 10 GbE and sends it over the backplane (via VITA 42.6) to a switch card for distribution to multiple processing targets.
Applications that need to transfer the full 10 GbE bandwidth between the mezzanine and the carrier are accommodated by XAUI on the XMC connector. Where XMCs offer no front-panel access for conventional 10 GbE optics or copper cabling – as in conduction-cooled environments – XAUI signals can simply be routed to the backplane, where rear transition modules provide I/O ports.
Applications needing to connect 10 GbE from the mezzanine to the backplane for distribution to other cards in the chassis can now use 10GBASE-KX4 on the XMC connector. Since 10GBASE-KX4 is the language spoken on the 10 GbE backplane, there is no longer a need for a bridge between 10 GbE and another fabric such as PCIe. Sensor input can come down the fatter 10 GbE pipeline and be immediately distributed across the backplane.
The future of 10 GbE and XMC
VITA 42.6 completed an ANSI ballot in April 2009 and met the numerical requirements for passing. At present, comments from balloters are being reviewed and revisions to the standard will be made where appropriate. Final ratification will take place thereafter.
AdvancedIO Systems Inc. 604-331-1600 www.advancedio.com