MLPerf Inference v0.5 provides a framework for assessing the performance and power efficiency of machine learning models across a variety of applications, including autonomous driving and natural language processing, and across a variety of form factors, such as smartphones, edge servers and cloud computing platforms. By tying its measurements to how these models perform in real-world uses, the benchmark suite is expected not only to standardize how AI performance is measured across organizations, but also to accelerate further innovation.
The suite was developed over the course of 11 months, with input from the consortium's many high-profile members. These include leaders from Google, Microsoft, Intel, Facebook, Cisco and HP, as well as Harvard University, Stanford University and the University of California, Berkeley.