Normally, I/O interface cards connect to a computer by attaching to a bus on one of the system's main boards. But improving the performance and the reliability, availability, and serviceability (RAS) of a bus architecture is difficult. InfiniBand departs from bus architectures in order to continue the computer industry's pursuit of better I/O solutions.

Before I explain what InfiniBand is, I think it's important to review the traditional, bus-based approach that it's replacing. Buses are shared segments of wires upon which multiple I/O interface cards transmit and receive data. A bus has one set of data wires plus a set of control wires. Signals on the control wires determine which interface card is currently transmitting or receiving data. Each interface card must therefore have a distinct address and must time-share its use of the data wires to perform I/O transfers. Performance improvements typically require increasing the clock rate and the width of the bus so that I/O cards can transmit more data in less time. Adding redundant buses improves the RAS features of the system as well as the aggregate bandwidth. And limiting how many cards attach to each bus reduces contention for the data wires, which also improves performance.
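To make the time-sharing idea concrete, here's a rough software sketch of a shared bus. The class names, addresses, and method names are made up purely for illustration; a real bus does this in hardware with arbitration logic, not Python objects.

    # Hypothetical model of a shared parallel bus: one set of data wires
    # time-shared by several interface cards, with the target address
    # deciding which card responds to a given transfer.  Illustration only,
    # not any real bus protocol.

    class InterfaceCard:
        def __init__(self, address, name):
            self.address = address   # every card on the bus needs a distinct address
            self.name = name

        def respond(self, data):
            return f"{self.name} handled {data!r}"

    class SharedBus:
        def __init__(self):
            self.cards = {}          # address -> card sharing the data wires

        def attach(self, card):
            if card.address in self.cards:
                raise ValueError("bus addresses must be unique")
            self.cards[card.address] = card

        def transfer(self, address, data):
            # Only one transfer can occupy the data wires at a time; every
            # other card has to wait its turn, which is the contention
            # described above.
            card = self.cards.get(address)
            if card is None:
                raise LookupError(f"no card at address {address:#x}")
            return card.respond(data)

    bus = SharedBus()
    bus.attach(InterfaceCard(0x10, "SCSI controller"))
    bus.attach(InterfaceCard(0x20, "Ethernet card"))
    print(bus.transfer(0x10, b"read block 42"))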

Improving a bus architecture gets expensive. Wider buses require more pins, wider connectors, and more PCB space. Higher clock rates can require shorter and straighter data paths, imposing harsh constraints on the PCB layout.

The InfiniBand architecture is not a bus. Where a system would traditionally have an interface to something like a PCI bus, an InfiniBand system would instead have a single port to an InfiniBand fabric. That port would be a serial connection consisting of only a few wires driven by a high clock rate. The system would then transfer data to and from I/O devices by exchanging IPv6-like data packets through this port, just as one would expect on a network interface. An entire fabric of hubs, switches, and Target Channel Adapters would be wired up outside of the system and attached to this port. I/O cards for specific devices that require specific interfaces (e.g. SCSI, Ethernet, Fibre Channel) would plug into these Target Channel Adapters on a 1-to-1 basis. An IPv6-like address would be assigned to each target and each port on a system, and the data packets would be routed across the fabric over multiple possible paths.
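Here's a rough sketch of that fabric idea in code. The Fabric class, the node names, and the breadth-first routing are all invented for illustration; this shows the shape of the architecture, not the actual InfiniBand wire protocol or its subnet management.

    # Toy sketch: a host port and several targets, each with its own address,
    # joined by switches; a "packet" is just routed hop by hop toward its
    # destination.  Purely illustrative.

    from collections import deque

    class Fabric:
        def __init__(self):
            self.links = {}  # node name -> set of directly connected neighbours

        def add_link(self, a, b):
            self.links.setdefault(a, set()).add(b)
            self.links.setdefault(b, set()).add(a)

        def remove_link(self, a, b):
            self.links.get(a, set()).discard(b)
            self.links.get(b, set()).discard(a)

        def route(self, src, dst):
            # Breadth-first search over the fabric.  A real fabric would use
            # forwarding tables, but this is enough to show that a packet can
            # reach its destination address over any of several paths.
            queue, seen = deque([[src]]), {src}
            while queue:
                path = queue.popleft()
                if path[-1] == dst:
                    return path
                for nxt in self.links.get(path[-1], ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(path + [nxt])
            return None  # destination unreachable

    fabric = Fabric()
    fabric.add_link("host-port", "switch-A")
    fabric.add_link("host-port", "switch-B")   # a second path for redundancy
    fabric.add_link("switch-A", "tca-scsi")
    fabric.add_link("switch-B", "tca-scsi")
    fabric.add_link("switch-A", "tca-ethernet")

    print(fabric.route("host-port", "tca-scsi"))
    # e.g. ['host-port', 'switch-A', 'tca-scsi']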

The effect of the InfiniBand approach is that you avoid the difficult problems of improving the performance and RAS features of a bus-based I/O architecture entirely, by changing your I/O architecture into a serial, switched packet network much like the ones TCP/IP runs over. The theory is that networking has already developed better and cheaper solutions to these performance and RAS problems, so the InfiniBand architecture can inherit those solutions thanks to its fundamentally network-like structure. To improve the performance and RAS features of your entire I/O subsystem, with InfiniBand you can simply add data links, ports, and switches to your fabric to spread out the I/O load and increase redundancy. And broken links, targets, ports, or switches can be handled automatically by rerouting data around the failures.
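Continuing the toy fabric sketch from above (and again, this is only an illustration of the idea), handling a failure amounts to removing a link and letting the routing find another path:

    fabric.remove_link("switch-A", "tca-scsi")   # pretend this link just failed
    print(fabric.route("host-port", "tca-scsi"))
    # ['host-port', 'switch-B', 'tca-scsi']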
