
HPC Network

An HPC network is more than switches and cables

NEC designed its own crossbar interconnect for its vector product line. This entailed maintaining an in-house MPI group with representatives on the leading standards committees. This know-how still exists and is applied to the optimization of the InfiniBand fabric of the LX-Series.

NEC was the first to bring InfiniBand to the automotive CAE market. Since then, NEC has been collaborating closely with Mellanox to expand the capabilities and performance of InfiniBand for HPC workloads in industry and academia.

Together with Mellanox, NEC is addressing the closer coupling of MPI and the underlying InfiniBand network. MPI collective operations are accelerated by using hardware features of the fabric, as sketched below.
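
The following is a minimal sketch of such a collective operation. Whether it is actually offloaded to the fabric hardware (for example via Mellanox SHARP) depends on the MPI library and fabric configuration, not on the application code; the program itself uses only the standard MPI API.

```c
/*
 * Minimal sketch: an MPI_Allreduce whose execution can be offloaded to
 * switch hardware when the MPI library and fabric support it. The
 * application code is unchanged; the acceleration is selected through
 * the MPI runtime configuration.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes its rank number; the collective computes
     * the global sum across all ranks. */
    int local = rank;
    int global = 0;
    MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, global);

    MPI_Finalize();
    return 0;
}
```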

Throughput and single-job performance are improved by expanding the flexibility of the InfiniBand configuration. Making the HPC network aware of the load situation allows it to react and optimize the flow of data and communication, as illustrated after this paragraph. Congestion control features can be implemented at the hardware level with fine-tuned threshold values. NEC has investigated these features and is an expert in optimizing the InfiniBand fabric.
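
As a rough illustration of load awareness, the sketch below samples one congestion-related port counter that the Linux InfiniBand stack exposes. The device name, port number, and any threshold applied to the value are assumptions for illustration only, not part of NEC's tooling; real fabric tuning is vendor- and site-specific.

```c
/*
 * Illustrative sketch only: read a congestion-related InfiniBand port
 * counter from sysfs. The device name (mlx5_0) and port number are
 * assumptions for this example.
 */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/class/infiniband/mlx5_0/ports/1/counters/port_xmit_wait";

    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    unsigned long long xmit_wait = 0;
    if (fscanf(f, "%llu", &xmit_wait) != 1) {
        fclose(f);
        fprintf(stderr, "could not parse counter\n");
        return 1;
    }
    fclose(f);

    /* A steadily growing port_xmit_wait value indicates that the port
     * had data to send but lacked credits, i.e. congestion pressure. */
    printf("port_xmit_wait = %llu\n", xmit_wait);
    return 0;
}
```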
