The typical VMEbus SBC is evolving rapidly to keep pace with a seemingly insatiable demand for more performance, more I/O bandwidth, and better switched-fabric connectivity. Just as processor devices separate their instruction and data flows for maximum performance, embedded computing subsystems can realize the same benefits by using separate control, data, and I/O planes. In VME's infancy, the backplane served all three functions, and it kept pace with growing data-flow demands through the introduction of VME64 and, more recently, 2eSST. However, high-end, data-intensive applications such as DSP adopted many alternatives to VME, including PCI, Raceway, and StarFabric, as data paths, while retaining VMEbus as the embedded subsystem's control plane.
While VMEbus is still actively used as a backplane bus in many applications, much of its functionality can be migrated to other media to provide better all-around performance and future growth potential. Through standards such as PICMG 2.16 and its VMEbus equivalent, VITA 31.1, Ethernet has been used extensively as a reliable packet-switched backplane interconnect, providing control and data planes between multiple SBCs in embedded subsystems. A typical VITA 31.1 configuration requires a central Ethernet switch in one of the chassis's backplane slots. The switch also provides extra ports for connectivity with other chassis and subsystems, offering broad scope for Ethernet's use as a system- or platform-wide control plane. Ethernet allows the control plane to be extended much further still: between participating platforms on a battlefield and up through many echelons of command to the Pentagon and back again. This vision of Internet-like connectivity for voice, data, and video traffic on the digital battlefield is behind the DoD's decision to adopt IPv6 as its future communications standard.
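To make the control-plane role concrete, here is a minimal sketch of the kind of traffic involved: one SBC broadcasting a periodic health heartbeat over UDP to its peers on the switched backplane. The port number, subnet, and message layout are invented for illustration; VITA 31.1 defines the Ethernet backplane connectivity, not the messages that flow over it.

```c
/* Hypothetical control-plane heartbeat: one SBC announces its health
 * to peers on the switched Ethernet backplane once per second.
 * The port, subnet, and message layout are illustrative only. */
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define CTRL_PLANE_PORT 5200            /* assumed well-known control port */

struct heartbeat {
    uint32_t slot_id;                   /* backplane slot of the sender */
    uint32_t uptime_s;                  /* seconds since boot */
    uint32_t status;                    /* 0 = healthy, nonzero = fault code */
};

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

    struct sockaddr_in dst = { 0 };
    dst.sin_family      = AF_INET;
    dst.sin_port        = htons(CTRL_PLANE_PORT);
    dst.sin_addr.s_addr = inet_addr("10.0.0.255");   /* assumed subnet */

    struct heartbeat hb = { htonl(3), 0, 0 };        /* this board: slot 3 */
    for (uint32_t t = 0; ; t++) {
        hb.uptime_s = htonl(t);
        sendto(s, &hb, sizeof hb, 0, (struct sockaddr *)&dst, sizeof dst);
        sleep(1);
    }
}
```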
However, Ethernet is not the ideal data plane for every military application because of its nondeterministic nature and the protocol processing overhead of IP. Serial RapidIO has emerged as the data plane fabric of choice for interconnecting multiple processors within a chassis. It supports common fabric topologies such as mesh and star, providing high-bandwidth, high-integrity data paths between processors in a peer-to-peer network. Serial RapidIO is implemented in silicon on the latest Freescale 8641 PowerPC single- and dual-core processor devices. Eight-port, four-lane Serial RapidIO switches are also available, rounding out an ideal set of components for Serial RapidIO implementations on next-generation SBCs.
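To put numbers on the bandwidth claim, the short calculation below works out the payload rate of one four-lane Serial RapidIO port at the common 3.125 Gbaud lane rate; 8b/10b line coding leaves 80 percent of the baud rate for data, and packet overhead reduces the usable figure somewhat further.

```c
/* Back-of-envelope throughput of a four-lane Serial RapidIO port.
 * 8b/10b line coding means 10 baud carry 8 data bits (80% efficiency);
 * packet headers and acknowledgments reduce usable bandwidth further. */
#include <stdio.h>

int main(void)
{
    double lane_gbaud = 3.125;                       /* common SRIO lane rate */
    int    lanes      = 4;
    double raw_gbaud  = lane_gbaud * lanes;          /* 12.5 Gbaud raw */
    double data_gbps  = raw_gbaud * 8.0 / 10.0;      /* 10 Gbps per direction */

    printf("Raw: %.1f Gbaud, data: %.1f Gbps each way\n", raw_gbaud, data_gbps);
    return 0;
}
```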
Finally, PCI Express has become the industry standard for connection to I/O devices such as SCSI, SATA, and PMC/XMC-based I/O modules. Strong commercial demand for I/O devices that attach to PCs will ensure the availability of a broad range of functionality with a PCI Express interface well into the future. While PCI Express offers very high bandwidth, it is not a peer-to-peer fabric; it relies instead on a single master, the root complex, to enumerate and control the hierarchy. In addition to providing an interface to an SBC's many onboard I/O devices and PMC sites, PCI Express can be routed offboard for applications such as remote data acquisition or SDR. This basic configuration is illustrated in Figure 1.
[Figure 1]
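The single-master point is worth making concrete: at startup, the root complex alone walks the PCI Express hierarchy, probing every bus/device/function for a valid vendor ID; endpoints never discover one another. The sketch below illustrates that probing loop, with cfg_read16() stubbed in place of a real platform config-space accessor.

```c
/* Sketch: only the single PCI Express master (the root complex)
 * enumerates the hierarchy; endpoints cannot probe one another.
 * cfg_read16() stands in for a platform's config-space accessor and
 * is stubbed here so the sketch compiles; a real one would issue
 * configuration read transactions. */
#include <stdint.h>
#include <stdio.h>

static uint16_t cfg_read16(int bus, int dev, int fn, int off)
{
    (void)bus; (void)dev; (void)fn; (void)off;
    return 0xFFFF;                      /* stub: report "no device" */
}

int main(void)
{
    for (int bus = 0; bus < 256; bus++)
        for (int dev = 0; dev < 32; dev++)
            for (int fn = 0; fn < 8; fn++) {
                uint16_t vid = cfg_read16(bus, dev, fn, 0x00); /* vendor ID */
                if (vid != 0xFFFF)      /* 0xFFFF means nothing responded */
                    printf("%02x:%02x.%x vendor %04x\n", bus, dev, fn, vid);
            }
    return 0;
}
```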
In addition to core processing and I/O functions, next-generation SBCs support the concept of control, data, and I/O planes, using Ethernet, Serial RapidIO, and PCI Express respectively. They also offer two PMC/XMC mezzanine sites, adding further user-configurable I/O capability. This level of functionality on a single board would have been impossible prior to the introduction of the new VPX (VITA 46) standard, since it demands significantly more backplane connectivity than VME64 provides. VPX incorporates connectors with 3 GHz+ signaling rates. It also defines pinouts for VMEbus, PMC/XMC sites, user-defined I/O, and four ports of four-lane switched fabric such as Serial RapidIO. Figure 1 shows these four external Serial RapidIO ports connected to an onboard switch. Where multiple SBCs with Serial RapidIO ports exist within a chassis, these ports can be used to create a distributed switched fabric through the backplane in any required topology.
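One consequence of the four fabric ports is worth spelling out: in a switchless full mesh, every board needs a direct link to every other board, so four ports cap the mesh at five boards; larger systems need a switch card or a different topology. The small calculation below illustrates the limit.

```c
/* With P fabric ports per board, a switchless full mesh is limited to
 * P + 1 boards, since each board links directly to every other. */
#include <stdio.h>

int main(void)
{
    int ports_per_board = 4;                        /* VPX fabric ports */
    int max_boards = ports_per_board + 1;           /* 5-board full mesh */
    int links = max_boards * (max_boards - 1) / 2;  /* n(n-1)/2 = 10 links */

    printf("Full mesh: up to %d boards, %d backplane links\n",
           max_boards, links);
    return 0;
}
```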
Figure 2 illustrates how the onboard configuration can be extended to include two PMC/XMC sites. It also adds flexibility: each of the four fabric ports can be selected as either Serial RapidIO or PCI Express, enabling an external PCI Express-based I/O plane. This configuration has been adopted by Curtiss-Wright Controls Embedded Computing (CWCEC) in the newly released VPX6-185 SBC, available in extended-temperature and conduction-cooled versions for the harshest military environments.
[Figure 2]
VPX is the only military embedded computing standard able to fully exploit the capabilities, flexibility, and performance offered by the next generation of SBCs. However, the potential of these capabilities will not be realized without a supporting infrastructure of real-time operating systems, tools, protocol stacks, board support packages, and middleware, now required to manage the complexities of startup and to provide a consistent, long-lived interface to application software. Middleware is a key component: it provides a common set of data transfer functions that abstract an application from the details of its specific hardware implementation, which is particularly applicable to data plane fabrics and spiral development programs. Using middleware bridges the gap between generations of fabric and SBC architecture, easing the migration of applications from existing VME implementations with limited fabric and I/O support to new generations of highly capable and customizable VPX SBCs.
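As a sketch of what such middleware looks like from the application's side, the hypothetical C interface below reduces data transfer to opening a named channel and moving buffers; all names are invented for illustration and do not correspond to any particular vendor's product.

```c
/* Hypothetical fabric-agnostic middleware API: the application names a
 * logical peer and moves buffers; whether the bytes travel over Serial
 * RapidIO, Ethernet, or a future fabric is hidden inside the library. */
#include <stddef.h>

typedef struct mw_channel mw_channel;            /* opaque transport handle */

mw_channel *mw_open(const char *peer_name);      /* e.g. "dsp0", "io1" */
long        mw_send(mw_channel *ch, const void *buf, size_t len);
long        mw_recv(mw_channel *ch, void *buf, size_t len);
void        mw_close(mw_channel *ch);

/* Application code written against this interface is unchanged when the
 * back end migrates from a VME data path to a VPX Serial RapidIO fabric;
 * only the middleware library underneath is replaced. */
```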
For more information, e-mail John at [email protected].