NVIDIA Breaks Record after Record for Performance in latest MLPerf AI Training Benchmarks

This Wednesday, MLPerf, the industry-standard artificial intelligence and machine learning benchmarking group, released its third round of training benchmark results, and NVIDIA obliterated its competition.

Each year, MLPerf, which is made up of 80 universities and companies around the world, creates and releases benchmarks to provide accurate comparisons for training and inference processing. This year, just like in the previous two editions, NVIDIA dominated its competition and went further by breaking 16 AI performance records.

Some may argue that NVIDIA has little to no competition, as only a few submissions are promising enough to challenge the leader, and they all come from companies such as Intel and Google. But truth be told, NVIDIA dominates the AI accelerator business, and its AI revenue is expected to overtake its gaming revenue over the next year.

What happened at the MLPerf AI Training Benchmarks?

For those of you who need a refresher: each year, MLPerf provides three system categories across multiple AI benchmarks, called available, preview, and research, which measure training times for both a maximum-scale system and a single server.

As of now, the results cover only training, with new inference benchmarks likely to be published in the coming months, and NVIDIA won all 16 benchmarks in the commercially available category, leaving its competitors far behind.

NVIDIA’s only competitors in the commercially available category were, interestingly enough, Google and Huawei, which only submitted results for image classification. NVIDIA, on the other hand, demonstrated its power and obtained stellar results in system design, AI software, and Mellanox networking.

The company ran tests using its NVIDIA Ampere and Volta architectures and, in addition to breaking performance records, its A100 processor, based on the Ampere architecture, hit the market faster than any of the company’s previous GPUs. Now, the A100 is used worldwide to solve some of the most complex AI and computer science challenges.

NVIDIA’s new A100 chip is just as good as the company promised, but that is not the only factor behind these impressive results. The company invested significantly in improving its AI software stacks and provided performance comparisons of its Volta V100 with and without the new software, proving the impressive work it has done.

What’s more, the company also built ecosystems for multiple application segments, including autonomous vehicles, health care, and conversational AI frameworks. All of these ecosystems are supported by NVIDIA’s Selene Supercomputer and other DGX systems.

What happened to the competition?

Google’s unannounced TPU V4 sounds very promising, but as of now, it is only slightly better than the NVIDIA A100, and only on three of eight benchmarks. Google submitted its V4 chip in the research category. However, its TPU V3, which competed against NVIDIA in the commercially available category, was beaten by 25-75% by the latter.

Graphcore and Cerebras, two very promising startups, did not publish any results, and Intel did not publish anything significant either.

However, NVIDIA can’t remain the only horse in the race for too long, and we can’t wait to see what its competitors are hiding behind their doors.