IBM and MLCommons show how ubiquitous machine learning has become

This week, IBM announced its latest Z-series mainframe and MLCommons released its latest benchmark series. The two announcements had something in common – the acceleration of machine learning (ML) – which is becoming ubiquitous, from detecting financial fraud on mainframes to detecting wake words in home appliances.

Granted, these two announcements weren’t directly related, but they’re part of a trend, showing how ubiquitous ML has become.

MLCommons brings standards to ML benchmarking

ML benchmarking is important because we often hear about ML performance in terms of TOPS – trillions of operations per second. Like MIPS (“Millions of Instructions Per Second” or “Meaningless Indication of Processor Speed,” depending on your perspective), TOPS is a theoretical number calculated from the architecture, not a measured rating based on running workloads. As such, TOPS can be a misleading number because it does not include the impact of the software stack. Software is the most critical aspect of implementing ML, and its effectiveness varies widely – as Nvidia has clearly demonstrated by improving the performance of its A100 platform by 50% in MLCommons benchmarks over the years.
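To see why TOPS is a theoretical ceiling rather than a measurement, consider how such a number is typically derived from an architecture's specifications. The sketch below uses illustrative figures, not those of any particular product:

```python
def peak_tops(mac_units: int, clock_ghz: float, ops_per_mac: int = 2) -> float:
    """Peak TOPS = MAC units x ops per MAC (multiply + add) x clock rate.

    This is a datasheet-style ceiling: it assumes every MAC unit fires on
    every cycle, ignoring memory bandwidth, utilization, and the software
    stack -- exactly the factors measured benchmarks like MLPerf capture.
    """
    ops_per_second = mac_units * ops_per_mac * clock_ghz * 1e9
    return ops_per_second / 1e12  # convert to trillions of ops/second

# Hypothetical accelerator: 4,096 MAC units at 1.5 GHz
print(peak_tops(4096, 1.5))  # 12.288 "peak" TOPS; real workloads achieve far less
```

The point of the sketch is that nothing in the calculation depends on a compiler, a runtime, or a model – which is precisely why two chips with identical TOPS ratings can deliver very different measured results.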

The MLCommons industry organization was created by a consortium of companies to build a standardized set of benchmarks, as well as a standardized testing methodology, for comparing different machine learning systems. MLCommons’ MLPerf benchmark suites include different benchmarks that cover many popular ML workloads and scenarios. MLPerf benchmarks cover everything from tiny microcontrollers used in consumer and IoT devices, to mobile devices like smartphones and PCs, to edge servers, to data-center-class server configurations. MLCommons supporters include Amazon, Arm, Baidu, Dell Technologies, Facebook, Google, Harvard, Intel, Lenovo, Microsoft, Nvidia, Stanford, and the University of Toronto.


MLCommons releases benchmark results in batches and has different release schedules for inference and for training. The latest announcement covered MLPerf Inference Suite version 2.0 for data centers and edge servers, MLPerf Mobile version 2.0, and MLPerf Tiny version 0.7 for IoT devices.

To date, the company that has had the most consistent set of submissions – delivering results in every iteration, in every benchmark, and across multiple partners – is Nvidia. Nvidia and its partners appear to have invested enormous resources in running and publishing all relevant MLCommons benchmarks. No other supplier can make this claim. The recent batch of inference benchmark submissions includes Nvidia Jetson Orin SoCs for edge servers and Ampere-based A100 GPUs for data centers. Nvidia’s ‘Hopper’ H100 data center GPU, which was announced at the Spring 2022 GTC, arrived too late to be included in MLCommons’ latest announcement, but we expect to see Nvidia H100 results in the next round.

Recently, Qualcomm and its partners released more data center MLPerf benchmarks for the company’s Cloud AI 100 platform and more mobile MLPerf benchmarks for Snapdragon processors. Qualcomm’s latest silicon has proven to be very power efficient in data center ML tests, which may give it an advantage in power-constrained edge server applications.

Most of the submitters are system vendors using processors and accelerators from silicon vendors like AMD, Andes, Ampere, Intel, Nvidia, Qualcomm, and Samsung. But many AI startups have been absent. As consulting firm Krai put it: “Potential submitters, especially ML hardware startups, are understandably wary of committing precious engineering resources to optimizing industry benchmarks instead of actual customer workloads.” But then Krai countered its own objection with “MLPerf is the Olympics of ML optimization and benchmarking.” Still, many startups have not invested in producing MLCommons results for a variety of reasons, and that is disappointing. There were also not enough FPGA vendors participating in this round.

The MLPerf Tiny benchmark is designed for very-low-power applications such as keyword spotting, visual wake words, image classification, and anomaly detection. In this case, we see results from a mix of smaller companies like Andes, Plumerai, and Syntiant, as well as established companies like Alibaba, Renesas, Silicon Labs, and STMicroelectronics.

IBM adds AI acceleration to every transaction

Although IBM did not participate in the MLCommons benchmarks, the company takes ML seriously. With its latest Z-series mainframe, the z16, IBM added accelerators for ML inference and quantum-safe secure boot and cryptography. Mainframe customers, however, have different requirements. With approximately 70% of banking transactions (on a value basis) running on IBM mainframes, the company anticipates the needs of financial institutions for extreme reliability and transaction-processing protection. Additionally, by adding ML acceleration into its processor, IBM can offer per-transaction ML intelligence to help detect fraudulent transactions.
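The per-transaction pattern is worth illustrating: the inference call sits directly in the transaction path, so every payment is scored before it commits rather than batch-audited after the fact. The sketch below is purely schematic – the function names, heuristic "model," and threshold are hypothetical and are not IBM's actual API:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float
    merchant_category: int  # hypothetical category code; 0 = unknown merchant

def fraud_score(txn: Transaction) -> float:
    """Stand-in for an on-chip ML inference call.

    A real deployment would run a trained model on the accelerator; this
    toy heuristic only shows where the call sits in the transaction path.
    """
    score = 0.0
    if txn.amount > 10_000:
        score += 0.6
    if txn.merchant_category == 0:
        score += 0.3
    return min(score, 1.0)

def process(txn: Transaction, threshold: float = 0.5) -> str:
    # Inference runs in-line, before the transaction commits, so a
    # suspicious payment can be flagged without a separate batch pass.
    return "flagged" if fraud_score(txn) >= threshold else "approved"

print(process(Transaction("acct-1", 25_000.0, 0)))  # flagged
print(process(Transaction("acct-2", 40.0, 5)))      # approved
```

The design point is latency: only an accelerator in the processor itself can score every transaction without slowing the transaction rate the mainframe is sized for.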


In an article I wrote in 2018, I said, “In fact, the future hybrid cloud computing model will likely include classical computing, AI processing, and quantum computing. When it comes to understanding these three technologies, few companies can match IBM’s level of commitment and expertise.” The latest developments in IBM’s quantum computing roadmap and the ML acceleration in the z16 show that IBM remains a leader in both.



Machine learning matters everywhere, from tiny devices to mainframes. Accelerating these workloads can be done on CPUs, GPUs, FPGAs, ASICs, and even MCUs, and acceleration is now part of virtually all future computing platforms. These two announcements are examples of how ML continues to evolve and improve over time.

Tirias Research follows and advises companies across the entire electronics ecosystem, from semiconductors to systems and from sensors to the cloud. Members of the Tirias Research team have consulted for IBM, Nvidia, Qualcomm, and other companies throughout the AI ecosystem.
