High performance computing
InfiniBand is rapidly becoming the interconnect protocol of choice in today's High Performance Computing (HPC) networks. It offers high throughput, low latency, quality of service and failover, and is designed to scale. Continued enhancements to InfiniBand's scalability and bandwidth capacity are now starting to attract the attention of mainstream enterprise sectors, and InfiniBand's popularity is already spreading to data centers, supercomputers and storage networks.
High speed but limited reach?
The Single Data Rate (SDR) serial connection's signaling rate is 2.5Gbit/s in each direction per lane; Double Data Rate (DDR) runs at 5Gbit/s and Quad Data Rate (QDR) at 10Gbit/s. Until now, mass adoption of InfiniBand has been held back by its inability to transport native traffic over large distances. Traditionally, InfiniBand copper cables have only been able to carry data across short distances, stretching less than 20m. To extend InfiniBand's reach, network staff have traditionally used gateways that convert the traffic to another protocol. Yet this is not an ideal solution, as it can lead to the loss of important InfiniBand features, an increase in latency and ultimately a degradation in performance.
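The signaling rates above can be related to usable bandwidth with a small sketch. SDR, DDR and QDR use 8b/10b line coding, so the data rate is 8/10 of the signaling rate, and links commonly aggregate 1, 4 or 12 lanes (the lane counts and helper below are illustrative, not from the original text):

```python
# Illustrative sketch: per-lane signaling rates for InfiniBand SDR/DDR/QDR.
# These generations use 8b/10b encoding, so the usable data rate is 8/10
# of the signaling rate, multiplied by the number of lanes in the link.

SIGNALING_GBITS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line-coding overhead

def data_rate_gbits(generation: str, lanes: int = 4) -> float:
    """Usable data rate in Gbit/s for a given generation and lane count."""
    return SIGNALING_GBITS[generation] * ENCODING_EFFICIENCY * lanes

if __name__ == "__main__":
    for gen in SIGNALING_GBITS:
        # A common 4x link: SDR -> 8, DDR -> 16, QDR -> 32 Gbit/s usable
        print(f"{gen} 4x: {data_rate_gbits(gen):g} Gbit/s usable")
```

For example, a 4x QDR link signals at 40Gbit/s but delivers 32Gbit/s of usable bandwidth after encoding overhead.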
... a complete InfiniBand-over-distance solution for HPC and enterprise applications
InfiniBand goes the extra mile
There is an answer to this problem, though: InfiniBand extension devices that provide an optical network port together with built-in buffer credits, which ensure full throughput over distance. Built on a foundation of WDM transport, enterprises can avoid the cost of multiple optical fiber pairs, develop a fully scalable design, transport data at the lowest available latency and support multiple protocols. To ensure our customers can drive InfiniBand over distance, we've built an entire ecosystem of partners around our ADVA FSP 3000 to deliver a complete solution for HPC and enterprise applications.
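The reason built-in buffer credits matter can be sketched with a back-of-the-envelope calculation (an illustration of credit-based flow control generally, not the FSP 3000's actual design): to keep a link full, the receiver must advertise enough buffer to cover the bandwidth-delay product of the fiber span, and the short credit budget of a standard InfiniBand port runs out long before metro distances.

```python
# Illustrative sketch: to sustain full rate on a credit-based link, the
# receive buffer must cover the bandwidth-delay product of the span.
# Light in silica fiber travels at roughly c/1.5, i.e. ~200,000 km/s.

SPEED_IN_FIBER_KM_PER_S = 200_000

def buffer_bytes_needed(distance_km: float, data_rate_gbits: float) -> float:
    """Minimum buffering (bytes) to keep the link full over the span."""
    rtt_s = 2 * distance_km / SPEED_IN_FIBER_KM_PER_S  # round-trip time
    bytes_per_s = data_rate_gbits * 1e9 / 8
    return bytes_per_s * rtt_s

# e.g. a 100 km span at 16 Gbit/s usable (4x DDR) needs about 2 MB of
# buffering -- far more than a short-reach port provides.
```

This is why extension devices add their own deep credit buffers at each end of the optical span rather than relying on the end nodes' ports.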