Nvidia has introduced its new HGX-2 cloud server platform for AI and high-performance workloads.
The company said the HGX-2 is its most powerful cloud server platform to date.
The server comprises two GPU baseboards, each carrying eight V100 32GB Tensor Core GPUs and six NVSwitches.
This layout connects a total of 16 Tensor Core GPUs, with each GPU able to communicate with every other GPU at 300GB/s.
Nvidia’s HGX-2 offers a total of 512GB of GPU memory and a bisection bandwidth of 2,400GB/s, making it over twice as fast as the previous HGX-1 platform.
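The headline totals follow directly from the per-board figures. As a quick sanity check (a minimal sketch; the constants come from the article, and the variable names are illustrative, not Nvidia's):

```python
# Back-of-the-envelope check of the HGX-2 figures reported above.
BASEBOARDS = 2
GPUS_PER_BASEBOARD = 8
MEMORY_PER_GPU_GB = 32            # V100 32GB Tensor Core GPU
NVSWITCHES_PER_BASEBOARD = 6

total_gpus = BASEBOARDS * GPUS_PER_BASEBOARD              # 16 GPUs
total_gpu_memory_gb = total_gpus * MEMORY_PER_GPU_GB      # 512 GB
total_nvswitches = BASEBOARDS * NVSWITCHES_PER_BASEBOARD  # 12 switches

print(total_gpus, total_gpu_memory_gb, total_nvswitches)  # → 16 512 12
```

The 16-GPU total and the 512GB memory pool line up with the two-baseboard layout; the NVSwitch fabric is what lets any pair of those 16 GPUs talk at the full 300GB/s.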
Nvidia said its server ecosystem partners will bring the HGX-2 server platform to the cloud later this year.