Recently, MLCommons™, a well-known open engineering consortium, released the results of MLPerf™ Inference V1.1, the leading AI benchmark suite. In the very competitive Closed Division, Inspur ranked first in 15 out of 30 tasks, making it the most successful vendor at the event.
Developed by Turing Award winner David Patterson and leading academic institutions, MLPerf™ is the leading industry benchmark suite for AI performance. MLCommons, founded in 2020 and built around the MLPerf™ benchmarks, is an open, non-profit engineering consortium dedicated to advancing standards and metrics for machine learning and AI performance. Inspur is a founding member of MLCommons™, along with over 50 other leading organizations and companies from across the AI landscape.
In the MLPerf™ Inference V1.1 benchmark test, the Closed Division included two categories – Data Center (16 tasks) and Edge (14 tasks). The Data Center category covered six models: Image Classification (ResNet50), Medical Image Segmentation (3D-UNet), Object Detection (SSD-ResNet34), Speech Recognition (RNN-T), Natural Language Processing (BERT), and Recommendation (DLRM). A high accuracy mode (99.9%) was set for BERT, DLRM, and 3D-UNet. Every model task was evaluated in both Server and Offline scenarios, with the exception of 3D-UNet, which was evaluated only in the Offline scenario. In the Edge category, the Recommendation (DLRM) model was removed and the Object Detection (SSD-MobileNet) model was added; a high accuracy mode (99.9%) was set for 3D-UNet. All Edge models were tested in both Offline and Single Stream scenarios.
In the extremely competitive Closed Division, in which mainstream vendors competed, all participants were required to use the same models and optimizers. This makes it straightforward to evaluate and compare the performance of AI computing systems from different vendors. Nineteen vendors, including NVIDIA, Intel, Inspur, Qualcomm, Alibaba, Dell, and HPE, participated in the Closed Division. A total of 1,130 results were submitted: 710 for the Data Center category and 420 for the Edge category.
Full-Stack AI Capabilities Ramp up Performance
Inspur achieved excellent results in this MLPerf™ competition with its three AI servers — NF5488A5, NF5688M6, and NE5260M5.
NF5488A5 is among the world’s first servers on the market with NVIDIA A100 GPUs. Within a 4U chassis, it accommodates 8 NVIDIA A100 GPUs interconnected via third-generation NVLink and 2 AMD Milan CPUs, cooled by a unique blend of liquid and air cooling technologies.
NF5688M6 is an AI server designed for large data centers due to its extraordinary scalability. It supports 8 NVIDIA A100 GPUs, 2 Intel Icelake CPUs, and up to 13 PCIe 4.0 add-in cards.
NE5260M5 comes with optimized signaling and power systems, and offers broad compatibility with high-performance CPUs and a wide range of AI accelerator cards. It features a shock-absorbing, noise-reducing design and has undergone rigorous reliability testing. With a chassis depth of 430 mm, nearly half the depth of traditional servers, it can be deployed even in space-constrained edge computing scenarios.
Inspur ranked first in 15 tasks spanning all AI models, including Medical Image Segmentation, Natural Language Processing, Image Classification, Speech Recognition, Recommendation, and Object Detection (SSD-ResNet34 and SSD-MobileNet). The results show that, from cloud to edge, Inspur leads the industry in nearly all aspects.
Inspur made significant performance gains across tasks in the Data Center category compared to previous MLPerf™ events, despite no changes to its server configurations. Its results in Image Classification (ResNet50) and Speech Recognition (RNN-T) improved by 4.75% and 3.83%, respectively, over the V1.0 round just six months earlier.
The outstanding performance of Inspur’s AI servers in the MLPerf™ benchmarks can be credited to Inspur’s exceptional system design and full-stack optimization of AI computing systems. Through precise calibration and optimization, CPU and GPU performance, as well as data communication between CPUs and GPUs, reached the highest levels for AI inference. Additionally, by enhancing round-robin scheduling across multiple GPUs based on GPU topology, performance scales nearly linearly from a single GPU to multiple GPUs.
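Topology-aware round-robin scheduling of this kind can be illustrated with a minimal sketch. The grouping and device IDs below are hypothetical (the article does not describe Inspur's actual implementation); the idea is simply to interleave GPUs from different NVLink domains so that consecutive inference batches land on different domains, spreading communication load across links. A real system would query the actual topology (e.g. via `nvidia-smi topo -m`) rather than hard-coding it.

```python
from itertools import cycle

def make_dispatcher(gpu_groups):
    """Build a round-robin dispatcher over GPUs, interleaving the
    given topology groups (e.g. NVLink domains) so that consecutive
    batches are sent to GPUs in different groups."""
    # zip(*groups) pairs the i-th GPU of each group; flattening the
    # pairs yields an interleaved order such as [0, 4, 1, 5, ...].
    interleaved = [gpu for pair in zip(*gpu_groups) for gpu in pair]
    return cycle(interleaved)

# Hypothetical 8-GPU server with two NVLink domains of four GPUs each.
dispatcher = make_dispatcher([[0, 1, 2, 3], [4, 5, 6, 7]])

# Assign the next eight inference batches to GPUs.
assignments = [next(dispatcher) for _ in range(8)]
print(assignments)  # [0, 4, 1, 5, 2, 6, 3, 7]
```

Because the dispatcher cycles indefinitely, every GPU receives the same share of batches over time, which is what allows throughput to scale close to linearly with GPU count when per-batch work is uniform.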
Inspur NF5488A5 was the only AI server in this MLPerf™ round to support eight 500W A100 GPUs with liquid cooling, which significantly boosted AI computing performance. Among mainstream high-end AI servers with 8 NVIDIA A100 SXM4 GPUs, Inspur’s servers came out on top in all 16 tasks in the Closed Division under the Data Center category.
As a leading AI computing company, Inspur is committed to the R&D and innovation of AI computing, including both resource-based and algorithm platforms. It also works with other leading AI enterprises to promote the industrialization of AI and the development of AI-driven industries through its “Meta-Brain” technology ecosystem.
To view the complete results of MLPerf™ Inference v1.1, please visit:
https://mlcommons.org/en/inference-datacenter-11/
https://mlcommons.org/en/inference-edge-11/