This repository contains the results and code for the MLPerf™ Inference v3.0 benchmark.
For benchmark code and rules, please see the main MLPerf Inference repository.
Additionally, each organization has written approximately 300 words explaining its submissions; these statements are collected in the Supplemental discussion.
This repository contains results from a prior MLPerf Inference round and has been archived. To request a change to this results repository, please raise an issue in the main MLPerf Inference repository or contact the MLPerf Inference chairs (inference-chairs@mlcommons.org).