A consortium of tech firms, including Facebook Inc. and Alphabet Inc.’s Google, has launched a set of benchmarks for evaluating the performance of artificial intelligence tools, aiming to help businesses navigate the fast-growing field.
The benchmarks—which cover image recognition, object detection, and voice translation—are meant to help companies compare various AI tools and see which work best for them as they pursue their own AI initiatives, said Peter Mattson, general chair of the consortium, MLPerf, which counts 40 companies as members.
“For CIOs, metrics make for better services and products they’ll then incorporate into their organization,” said Mr. Mattson, a Google engineer.
The MLPerf benchmarks might, for instance, evaluate the performance of an AI image-recognition model built with open-source machine-learning software from Google using neural-network architectures such as ResNet-50 or MobileNet, both of which specialize in image recognition. Companies can use the results as a starting point in implementing AI by seeing which combination of software, hardware, and model works best.
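As a rough illustration of the kind of measurement such a benchmark automates, the sketch below times inference on a pretrained ResNet-50 classifier in TensorFlow, the open-source machine-learning software from Google alluded to above. This is a minimal, assumed setup, not MLPerf's official test harness: the batch size, run count, and random stand-in images are illustrative choices only.

```python
# Illustrative sketch (not MLPerf's official harness): timing inference
# throughput on a pretrained image-recognition model with TensorFlow.
import time

import numpy as np
import tensorflow as tf

# Load a pretrained ResNet-50 image classifier (ImageNet weights).
model = tf.keras.applications.ResNet50(weights="imagenet")

# A batch of random images standing in for a real evaluation set
# (an actual benchmark would use a standardized dataset).
batch = np.random.rand(8, 224, 224, 3).astype("float32") * 255.0
batch = tf.keras.applications.resnet50.preprocess_input(batch)

# Warm up once so one-time setup cost isn't counted in the measurement.
model.predict(batch, verbose=0)

# Time repeated inference passes and report throughput.
runs = 20
start = time.perf_counter()
for _ in range(runs):
    model.predict(batch, verbose=0)
elapsed = time.perf_counter() - start
print(f"{runs * len(batch) / elapsed:.1f} images/sec")
```

A real benchmark run would also fix the dataset, accuracy targets, and measurement rules so results are comparable across vendors—the standardization MLPerf exists to provide.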
MLPerf was formed in 2018 to fill a void for standardized AI benchmarks. Its first measurement tool, launched in May 2018, focused on training models—the brains behind AI implementations that learn, for example, to recognize images or speech. The newer benchmarks measure how well trained models perform, a process known as inference. MLPerf’s membership includes representatives from Microsoft Corp., Intel Corp., and Landing AI, founded by AI pioneer Andrew Ng.
For the new set of benchmarks, MLPerf pursued metrics around popular, universally applicable applications such as voice recognition and computer vision, said Vijay Janapa Reddi, associate professor of electrical engineering at Harvard University and co-chair of MLPerf’s inference working group, which came up with the standards.