Usually we talk about future trends in technology, but let’s take a look into the past to see if we can use history to predict what the next big breakthrough will be in embedded SBCs.
Twenty years ago
The typical VMEbus SBC was just becoming mainstream and by today’s standards had very meager functionality.
Processor: Most were based on Motorola 68K processors, as that was the heritage of VMEbus. A few suppliers were carving out their own niches with the Motorola 88K and Sun Microsystems SPARC, with an occasional Intel Pentium SBC. PowerPC had not yet been announced. Clock speed was all the rage (50 MHz was best-in-class), and multicore microprocessors were still a dream. Processor benchmarks were widely promoted in datasheets, with 150 MIPS being a very impressive performance level.
Memory: The upper memory limit was 32 MB of DRAM, with the technology changing in size and package configuration by the month, forcing frequent board redesigns. Who could afford to use Flash? Flash memory was in its infancy, very small in capacity, and very expensive to implement.
Disk I/O: SCSI bus was the high-performance disk I/O of choice.
Network connectivity: 10/100 Mb Ethernet was the standard network connectivity.
Graphics: VGA was on a few of the SBCs, though not many, because graphics technology was changing too fast, the chipsets were expensive, and they required a lot of valuable board real estate.
GPIO: Serial and parallel ports were required; USB was still three years away.
Mezzanine modules: Mezzanines were starting to settle in on PMC. Many proprietary options still existed.
User flexibility: Users could only make additions to board functionality via mezzanine modules; no programmable hardware was available. ASICs were not all that uncommon, but FPGAs were way too expensive.
Software: Many Real-Time Operating System (RTOS) choices were available. Unix or Solaris was used on some SBCs. Windows was used on the occasional Intel Pentium SBC. The biggest challenge of the day was getting a timely port of the operating system to the processor.
Ten years ago
Ten years ago VMEbus SBCs were reaching the limits of the parallel VMEbus. The concept of a data plane using serial fabric options was being introduced, at first over a P0 connector. VXS was announced while the idea for VPX was being hatched in the minds of those looking for more interconnection bandwidth for the data plane. By 2003, a typical SBC looked like Figure 1.
Processor: Power Architecture was still the leader but Intel Architecture was closing the gap. Some SBCs had two processors on the board. Multicore was just showing up on processor roadmaps. Processor power budgets were challenging even the most creative board designers.
Memory: Commonly, memory was limited to 2 to 4 GB of DRAM. Small banks of Flash memory were available on most designs. The biggest challenge was deciding whether to make the DRAM field-modifiable, either via DIMMs or via custom memory mezzanines.
Disk I/O: SCSI bus was on the way out, and ATA was the primary disk I/O choice.
Network connectivity: Dual 1 Gb Ethernet was the new standard network connectivity.
Graphics: VGA graphics was not uncommon, but the technology was still changing rapidly, making it a tough decision to add to an SBC.
GPIO: Serial and parallel ports were giving way to USB, with parallel ports gone from most new products by this time. USB was desirable, but many board designers were concerned about the ability of the available connectors to handle the demanding environments of many embedded applications.
Mezzanine modules: PMCs were on every product, and XMC was starting to show up on roadmaps.
User flexibility: Early adopters were including some FPGA capability, though user access was still limited. The tools for adding IP were complicated and did little to help customers integrate their own designs.
Software: VxWorks was taking the lead in RTOS solutions. Linux was gaining ground and on everyone’s development list, but not always making it to field deployment. Microsoft had stepped up its efforts to promote Windows into the embedded markets, making it an acceptable choice for the user interface side of many applications. The growing base of Intel Architecture-based SBCs drove even more use of Windows.
Observations from the real world
Gunther Gräbner, Product Management at MEN Mikro Elektronik GmbH (www.men.de), shared some of his thoughts on SBCs. His customers like to see SBCs that are Intel Architecture-based; occasionally they ask for AMD, and more frequently they are asking about ARM alternatives. However, most of the MEN products remain based on the Intel Architecture. The typical SBC I/O payload consists of USB (requests for USB 3.0 are increasing), Ethernet (as fast as possible), and 90 to 100 pins of Low Voltage Level (LVL) I/O. DisplayPort has replaced VGA in new products. Windows 7 is driving the need for larger and faster memory systems.
Important to many of his customers is the operational temperature range of the boards. He sees COM Express Computers-On-Module (COMs) as a very promising solution for building SBCs; nothing else on the horizon for new form factors impacts his current product roadmaps. Gunther points out that there is a definite trend for customers to request a complete system over board-level components, thus driving the move to more complete system packages. He believes that the weakest area is support for Linux and various RTOS options; customers still have to do a lot of software integration and development before they have a stable platform for the operation of their final application. Those staying in a Windows environment tend to have fewer of these challenges.
Mercury Systems focuses on higher-performance SBCs. They have an equal mix of Power Architecture and Intel Architecture processor-based SBCs in their product portfolio. In talking with Marc Couture, Director of Product Management at Mercury Systems, we discussed the emergence of ARM as a potential player in this level of SBC products. From his perspective, ARM is being driven by two influences. The first is OpenCL for parallel programming, which is very important now that multicore processors are in common use. The second is the adoption of ARM’s AMBA AXI version 4 interconnect specification in Xilinx FPGAs.
When asked about changes in GPIO, Marc reinforced the trend that USB 2.0 and 3.0 are replacing RS-232/422 I/O, but said that it will likely take a long time. The Intel south bridge is the controlling factor in what I/O goes on most SBCs. Since it is a required part of the Intel Architecture processor package, it only makes sense to implement whatever I/O is contained on the south bridge. That means SATA 2 and SATA 3 have replaced SCSI, and Fibre Channel is falling off. “I/O is being shifted to active interface modules in fabric architectures,” Marc commented. With switch fabric-based systems, I/O can be moved to these active interface modules and then connected back to the SBC via the appropriate serial fabric. In the long run, this takes most of the I/O out of the SBC design decision. “And in cases where I/O may still be on the board, FMCs make it easier to change the I/O while leaving the I/O implementation up to an FPGA,” Marc noted.
Marc feels that the fabric wars have picked back up. All of the options (Serial RapidIO, InfiniBand, 10 and 40 Gb Ethernet, and PCI Express) are rolling out generation 3 speed improvements, with more on their respective roadmaps. He noted that the time between speed increases has gotten shorter, making it more difficult to implement the advances. What concerns Mercury Systems engineers the most is the ability of today’s connector technology to handle the speed improvements.
Today
Since 1993, we have moved beyond 3 GHz, but clock speeds are not as important as the number of processor cores. The introduction of VPX has moved module interconnect speeds from the MHz realm to the GHz realm. The abundance of pipes is allowing much of the I/O to move to active interface modules instead of residing on the SBC. This is enabling the addition of more processing power and pushing the I/O processing closer to the inputs: to the sensors.
The original VMEbus specification included the parallel VMEbus, a set of serial lines for a simple control plane, and the VME Subsystem Bus (VSB) for a separate “data plane.” Today this has been replaced with multiple planes between modules, implemented through serial fabrics: a control plane, a data plane, an expansion plane, and an out-of-band management plane. Serial fabrics will allow this concept to grow and become more robust in the coming years.
The line between SBCs and dedicated processor blades has blurred over the years. Since so much of the functionality is embedded in the bridge chipsets, even dedicated processor blades have the basic SBC I/O, and the processing power on an SBC often makes it a pretty good dedicated processing blade.
Processor: 3rd Generation Intel Core processors dominate on new SBCs. Power Architecture is used by a handful of suppliers, primarily those targeting defense applications. ARM is getting a look, but it’s not yet gaining traction as the primary processor.
Memory: The memory starting point is 4 GB and goes up from there. Flash is a must and even replaces rotating disk media in many applications.
Disk I/O: Serial ATA has made SCSI all but obsolete, except in some of the most demanding high-reliability systems.
Network connectivity: Ethernet speeds of 1 Gb and 10 Gb are very common, with 40 Gb showing up for data plane uses.
Graphics: Execution Units (EUs) for graphics are commonly built into the processors, with many EUs available for processing, so the board area and cost impacts are all but eliminated from the equation. Graphics quality is near the top of the scale, at gaming level. Display options have significantly increased in performance, with HDMI and DisplayPort leading the charge and leaving VGA in the dust.
GPIO: USB has made serial and parallel ports obsolete. USB 3.0 is the new entry threshold.
Mezzanine modules: PMCs, while still very common, are giving way to XMCs. Most designs implement both during this period of transition.
User flexibility: FPGAs are very common on higher-end products, used both for interconnect management and user features. User tools are much easier to use, significantly reducing the effort required to add user IP.
Software: VxWorks leads the RTOS choices. Microsoft has made Windows more attractive to embedded computing applications. Linux has solidly moved in as a choice across all processor architectures, and the addition of real-time enhancements makes it an attractive choice. Software developers are now more comfortable using open source solutions. Hypervisors and virtualization are helping with the transition to multicore processors and are greatly expanding the capabilities of embedded platforms.
The end of single-thread performance scaling already poses major software challenges; the shift to symmetric parallelism, for example, has created perhaps the greatest software challenge in the history of computing.
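To make that challenge concrete, consider the difference between summing an array in a single loop and explicitly partitioning the same work across cores. The short C sketch below uses POSIX threads; the thread count, array size, and sum_worker helper are illustrative choices, not drawn from any particular vendor’s code.

/* Minimal sketch: partitioning a sequential sum across POSIX threads.
 * Illustrative only; NTHREADS, N, and sum_worker are hypothetical. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4          /* e.g., one thread per core on a quad-core SBC */
#define N        1000000

static double data[N];

struct slice { int begin, end; double partial; };

static void *sum_worker(void *arg)
{
    struct slice *s = arg;
    s->partial = 0.0;
    for (int i = s->begin; i < s->end; i++)
        s->partial += data[i];   /* each thread touches a disjoint range */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice sl[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    int chunk = N / NTHREADS;
    for (int t = 0; t < NTHREADS; t++) {
        sl[t].begin = t * chunk;
        sl[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, sum_worker, &sl[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += sl[t].partial;  /* combine partial results after join */
    }
    printf("sum = %f\n", total);
    return 0;
}

Even in this tiny example, the programmer must now reason about work partitioning and result combination, concerns that simply did not exist in the sequential version.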
The future
So what can this tell us about the future? Projecting out 10 years is a huge leap of faith, but there are areas where we can be confident that the trends will extend that far, barring any unpredicted breakthroughs in technology that are not reflected on today’s roadmaps.
Processor: Processor clock speeds will stabilize. Process geometries will continue to shrink, but at a slower rate, limited by the cost of new process technology. The growing transistor budget will continue to be spent on more cores and more processing functionality. The power budget is limited, so energy efficiency will drive many design decisions.
Helping to manage the power budget against the growing transistor count will be accelerators for specialized tasks such as graphics, security, and digital signal processing, and even the addition of FPGA capability. These accelerators will be dynamically managed to optimize energy efficiency in real time.
SoCs will become a preferred choice for many SBC designers as they continue to increase the functional density of the board itself.
Memory: Memory performance will continue to lag processor performance. It is still the bottleneck in most designs and has plenty of room for improvement. Cache memory will continue to grow, enabling DRAM to emphasize density and cost over speed. More sophisticated cache schemes at multiple levels will help span the growing speed gap. Capacity will not be a significant issue and may impact the role of disk storage.
Disk I/O: Setting aside capacity, which we all expect to grow quickly, the primary concern will be the connection to the storage media. More data will require higher bandwidth. The use of Solid State Drives (SSDs) will address some of the applications.
Traditional Hard Disk Drive (HDD) storage arrays will transition into hybrid Flash/SSD/HDD storage arrays. Flash will begin a transition to newer/better solid-state memory technology. Finally, HDDs will be entirely replaced by these new-generation SSDs.
Network Attached Storage (NAS) will likely eliminate the need for local storage and negate the need to put a dedicated storage interface on the SBC.
Network connectivity: Ethernet will likely find a way to keep getting faster. Active connector technology will help extend the performance. Optical cabling will be the standard for many products.
Graphics: SBC graphics will lag a bit behind dedicated graphics processor units, but, when practical, the technology will be integrated into the microprocessor as is done today. With the rising trend to use GPGPU architectures, we may see some optimization in on-chip graphics to support signal processing.
GPIO: New I/O will emerge, but this is the most difficult area to predict, given how quickly it has changed in the past. More performance is always needed, driving demand for smart, active connections that can be programmed to handle the particular data being managed by a specific connection. I/O will become “soft,” meaning that an FPGA will handle the I/O on a user-defined basis. As new I/O is introduced or existing methods improve, a change to the FPGA IP will eliminate the need to redesign the SBC; instead, an I/O module such as an FMC will be replaced. A good portion of I/O will move to active interface modules connected back to the SBC over a fabric connection. This has the advantage of making I/O evolution easier to manage, and it can bring the processing closer to the sensors or I/O devices.
Mezzanine modules: As board form factors continue to shrink and connectivity between modules becomes solely dependent on serial fabrics, the need for mezzanines to add functionality will diminish or even disappear within the next 10 years. FMCs are a good example of how the role of the mezzanine may be shifting: they provide the physical I/O interface to an FPGA controller on the carrier board. The primary reason for keeping mezzanines of any type will be to let board designers use the space to gain more board real estate and optimize functional density.
User flexibility: FPGAs will continue to play an increasingly important role, perhaps even reaching the point where most of the SBC “features” will be provided through FPGA IP. Boards will consist of little more than the microprocessor, its support chipset (if needed), the memory subsystem, and an FPGA for the remaining functionality.
Software: Sequential programming will give way to managed and parallel programming with languages such as OpenCL. The diversity of applications and the addition of more cores and accelerators to the processors will require applications and system software to participate more fully in power optimization and management.
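As a flavor of what that transition looks like, here is a minimal sketch of an OpenCL host program in C that scales a vector on whatever device the platform reports. The kernel name (scale), vector size, and scale factor are arbitrary illustrations, and error checking is stripped to the bare minimum; production code would check every return value.

/* Minimal OpenCL sketch: run a data-parallel kernel over a vector.
 * Illustrative only; real code needs thorough error checking. */
#include <CL/cl.h>
#include <stdio.h>

#define N 1024

static const char *src =
    "__kernel void scale(__global float *v, float k) {\n"
    "    size_t i = get_global_id(0);\n"
    "    v[i] = v[i] * k;\n"
    "}\n";

int main(void)
{
    float vec[N];
    for (int i = 0; i < N; i++) vec[i] = (float)i;

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof vec, vec, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof buf, &buf);
    clSetKernelArg(k, 1, sizeof factor, &factor);

    size_t global = N;               /* N work-items, one per element */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof vec, vec, 0, NULL, NULL);

    printf("vec[1] = %f\n", vec[1]);  /* expect 2.0 */
    return 0;
}

The point is the programming model: the kernel describes the work for one element, and the runtime maps the N work-items across however many cores or accelerator lanes the device provides.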
Applications
The technology used to solve the computing needs of various classes of applications tends to change over time, and VITA technologies like VMEbus and VPX are no exception. Within the spectrum of 3U and 6U, VMEbus started out with a focus on the industrial automation industry, but, as VME increased in performance, more demanding applications such as signal processing started to pick up on VMEbus.
VPX has accelerated this application migration because of the high-performance capability of its control and data planes. Its much higher levels of performance and I/O capability give VPX a sweet spot in applications that require High-Performance Embedded Computing (HPEC) platforms, primarily in defense, but also in other areas that need the small size and performance.
SBC technology of the future
Future SBCs will consist of the processor, the memory subsystem, and FPGAs to handle I/O and module interconnections. In essence, they will be “soft,” allowing the board to be programmed in any suitable fashion. Many of the hardware choices will become intellectual property choices to be programmed into an FPGA.
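To illustrate what a “soft” board means for the software team, here is a minimal sketch assuming a Linux host and a hypothetical FPGA register block; the physical base address and register offsets are invented for illustration, and a real design would obtain them from the device tree or a PCIe BAR. The access pattern stays the same no matter what IP the FPGA currently implements.

/* Minimal sketch of software talking to "soft" I/O: an FPGA register
 * block mapped into the processor's address space. The base address
 * and register offsets below are hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define FPGA_PHYS_BASE 0x43C00000UL  /* hypothetical AXI-mapped FPGA block */
#define MAP_SIZE       4096UL
#define REG_CTRL       0x00          /* hypothetical control register */
#define REG_STATUS     0x04          /* hypothetical status register */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, FPGA_PHYS_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    regs[REG_CTRL / 4] = 1;                      /* start the soft I/O block */
    printf("status = 0x%08x\n", regs[REG_STATUS / 4]);

    munmap((void *)regs, MAP_SIZE);
    close(fd);
    return 0;
}

Swap in new FPGA IP and the same handful of lines drives a completely different I/O personality; the board redesign becomes a bitstream update.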
It is very difficult to look more than a year or two into the future, let alone 10 years out. The past will certainly give some guidance, but innovation in any area of SBC technology can quickly change the direction of the industry, especially in a 10-year window. The wise engineering team maintains a multi-year technology roadmap that is reviewed on at least a one-year cycle. Watch for emerging trends and be ready to make corrections and strategy changes as breakthroughs are made. One thing is for certain: Change will happen.