Medical Electronics Manufacturing Spring 2000
System designers evaluating processing and input/output (I/O) alternatives for medical application environments might consider teaming the peripheral component interconnect (PCI) and Versa Module Eurocard (VME) buses. Their combination can bring the best of the PC and industrial workstation worlds to the embedded application: the speed, upgradability, and low cost of desktop technology, and the ruggedness, reliability, and I/O-intensive capability of the industrial bus. The two physically separate buses can be linked at the hardware level by means of bus-to-bus adapter products that create a virtual PCI and VME bus (Figure 1).
Figure 1. Linking at the hardware level creates a virtual VME and PCI bus.
The incorporation of standard PCs and workstations into embedded systems has been fueled by the desire for greater I/O bandwidth, economical processing, and efficient data manipulation; for access to off-the-shelf software and good software development tools; and for the ability to adopt new processors early. In the medical arena, imaging systems are tasked with efficiently transporting large amounts of raw, unreconstructed data from an embedded collection site to a display and processing machine. For example, high-performance computed tomography (CT) scanners and imaging systems offer sophisticated tumor, brain, and cardiac imaging for detection, analysis, and diagnosis. As application requirements evolve (exemplified by the trend from two- to three-dimensional viewing), the connection between the VME- or CompactPCI-based embedded system and the viewing workstation must bear up under the data load without sacrificing performance. Transfer across Ethernet at 5 Mbyte/sec was once sufficient, but ever more complex modeling continues to push the bandwidth envelope.
Bus-to-Bus Adapter Technology
Bus-to-bus adapters with operating system (OS) software drivers allow a standard PC or workstation to function as if it were a processor on the VME backplane. Traditional alternatives such as Ethernet or Fibre Channel involve milliseconds of latency. With bus-to-bus adapters, a write to an address space is performed as if it were local: it is not slowed by navigation through protocol layers, and the entire transaction takes only about 2 µs. The result is real-time deterministic access with an ease of programming that speeds time to market.
Bus-to-bus adapter technology incorporates a built-in direct memory access (DMA) controller for high-speed transfers of large data blocks. Controller-mode DMA can sustain very high memory-to-memory data transfer rates of 26 to 35 Mbyte/sec. The next-generation line of bus-to-bus adapters could soon boost transfer rates up to 90 Mbyte/sec. These rates of transfer via fiber-optic cable can extend for 500 m between systems. While many medical application environments will not involve such a great distance to the operator console, the technology does offer the flexibility to accommodate situations where analysis takes place in another room.
Efficiencies are greater than those of Ethernet-based connectivity methods. With Ethernet, a PCI processor is involved in moving the data from the PCI to a VME single-board computer (SBC) address space (Figure 2). On the VME side, the SBC must then transfer the data from the Ethernet to the VME bus and on to the designated address space. The built-in DMA controller simplifies the process by assuming responsibility for the data transfer at far higher sustained rates than 10baseT, 100baseT, or even gigabit Ethernet. The processors are thus freed to focus on other tasks. For systems in which other VME masters have low-latency arbitration requirements, the DMA engine provides a pause mode for more-frequent rearbitration.
Figure 2. Typical Ethernet connectivity.
Shattering Protocol Barriers
In a transparent interconnection, a single host computer controls one or more VME-bus systems, with the address space of the destination VME bus appearing to the host as additional local address space. The effect is that of attaching a PC or workstation directly to the VME chassis or, conversely, inserting a VME chassis into the PC or workstation. The transparency is achieved via memory mapping, which takes defined address ranges of unused memory on the PCI host bus and maps them to selected global memory address space and I/O on the VME destination bus. Memory-mapped transfers are handled by normal system virtual memory and system bus arbitration logic (Figure 3).
Figure 3. Memory mapping achieves transparent interconnectivity.
Other network-based communication devices such as Fibre Channel and Ethernet incorporate software-level protocol stacks that add latency and inhibit performance. A typical Ethernet stack, for example, consists of application, presentation, session, transport, network, data link, and physical layers. The seven layers are present at the sending and receiving points, and processing is required at both ends. In a protocol-based transfer, one processor will build messages and send them across the cable to the processor at the other side. The second processor disassembles the messages and stores them in data memory. The interference of the protocol stacks directly affects the achievable sustained data rate, often reducing it to as low as one-third the raw data rate.
By contrast, in a more efficient bus-to-bus adapter configuration, a processor initiates direct memory access. After the transmission is complete, an interrupt is generated to inform the processor of the fact. At the receiving end, the data are moved directly into physical memory. No processor is needed. Cycles are freed up for other tasks at both ends. No software protocol overhead is required; thus, delays associated with routing through protocol stacks are avoided. Sustained data rates are up to 90% of raw cable bandwidth. When a write function is executed on the PCI bus, the adapter technology converts that PCI function into a VME write function on the VME bus. It is as if the write had been to a local PCI resource. Because the latency of the transaction is less than a couple of microseconds, the data transfer cannot be distinguished from a local PCI transaction (Figure 4).
Figure 4. A bus-to-bus adapter configuration in which the processor initiates direct memory access.
Making Software a Nonissue
By working at the bus level, the adapters constitute a very simple solution from a software standpoint. System software is basically unaffected by the virtual dual-nature bus. For example, with the bus-to-bus adapter a VME card can be used as if it were a PCI card, and the system software will not treat it any differently than if it were an actual PCI card in the backplane. The adapter is not attempting to interface with a protocol stack; it is simply connecting as if it were plugged into both backplanes. Programmers are spared having to deal with the complexities of network topologies.
A feature known as scatter/gather can further enhance software independence. A block that is contiguous on the VME bus may, when mapped into the PCI host, be distributed across noncontiguous 4-Kbyte pages: a DMA move from VME might read one contiguous region, but on the PCI side each 4-Kbyte boundary can be assigned to a different physical address range. This matters because an OS typically allocates virtual memory that is not contiguous in physical memory. With scatter/gather, the PCI adapter is able to use the DMA engine to move data into or from noncontiguous physical memory space that the application sees as contiguous virtual memory. The driver that resides on the PCI host workstation handles all the scatter/gather mapping, transparent to the application software.
Building on the VME Foundation
Bus-to-bus adapters allow users who are committed to legacy VME systems to migrate to newer platforms while maintaining their VME compatibility and investment. By maintaining the VME side of the application, the combination system is afforded all of the proven advantages of the bus that has supported industrial applications for well over a decade. Its reputation has for many years made VME the bus of choice for large-scale, high-speed applications requiring frequent interrupts. The hundreds of firmly entrenched vendors offering supporting products are testimony to the success of the bus. VME offers access to far more specialty cards and more I/O functionality than PCI, in more-suitably rugged packaging. The cards tend to stay on the market far longer than components in the PC realm, which is driven by the transient desires of desktop users.
The combination of the two buses facilitates the physical separation of the CPU from I/O components, a consideration in environments that are harsh or hazardous. Devices in the medical imaging hardware backplane, regardless of their bus architecture, can be directly controlled from a remote workstation. Latencies are low enough that accessing the remote bus has the appearance of a local transaction. For example, the operator of a large piece of equipment that is in the room with a patient can control its backplane from a remote console on a Sun, SGI, or other workstation. Without the bus-to-bus adapter, some sort of SBC would have to be incorporated in the imaging equipment in order to enable communication with the operator console.
But with the bus-to-bus adapter, interrupts can be passed directly between VME and PCI. All seven VME interrupt levels can be monitored and acknowledged from the PCI host system. Consequently, the host system can be notified asynchronously whenever a VME-bus card requires servicing, without the need for polling. The VME can interrupt the PCI bus, and the PCI bus can send an interrupt to the VME bus. This is significant when a process running on the remote side of the cable (say, on the VME bus) completes and the host PCI bus requires notification: the interrupt passes transparently across the cable. Such a procedure would require traversing many OSI layers in a system utilizing Ethernet with a single-board computer. The bus-to-bus adapter approach simply generates a PCI interrupt on the PCI bus when a VME interrupt is asserted on the VME backplane.
The incorporation of economical standard PCs and workstations into embedded systems paves the way for cost savings and convenience on a variety of levels. For instance, a wealth of user-friendly, off-the-shelf software is available. Peripherals such as disk controllers and serial ports are more economical than their VME counterparts. The upgrading and swapping out of PCs can be achieved simply by removing the adapter card and installing it in the new system. System designers are also afforded greater OS flexibility in PCs and workstations; more OS choices and tools are available than with other approaches such as SBCs. And a friendly graphical user interface is available. The bottom-line benefits of PCI-to-VME connectivity via bus-to-bus adapter technology are a lower product development cost, faster time to market, and lower system costs as a result of building with off-the-shelf components.
Jay Swenson is product marketing manager for SBS Technologies Inc., Connectivity Products (St. Paul, MN).