Benchopt: Reproducible, efficient and collaborative optimization benchmarks
arXiv: 2206.13424
MaRDI QID: Q6403290
Author name not available
Publication date: 27 June 2022
Abstract: Numerical validation is at the core of machine learning research, as it allows researchers to assess the actual impact of new methods and to confirm the agreement between theory and practice. Yet the rapid development of the field poses several challenges: researchers are confronted with a profusion of methods to compare, limited transparency and consensus on best practices, and tedious re-implementation work. As a result, validation is often very partial, which can lead to wrong conclusions that slow down the progress of research. We propose Benchopt, a collaborative framework to automate, reproduce and publish optimization benchmarks in machine learning across programming languages and hardware architectures. Benchopt simplifies benchmarking for the community by providing an off-the-shelf tool for running, sharing and extending experiments. To demonstrate its broad usability, we showcase benchmarks on three standard learning tasks: ℓ2-regularized logistic regression, Lasso, and ResNet18 training for image classification. These benchmarks highlight key practical findings that give a more nuanced view of the state of the art for these problems, showing that for practical evaluation, the devil is in the details. We hope that Benchopt will foster collaborative work in the community, hence improving the reproducibility of research findings.
Has companion code repository: https://github.com/benchopt/benchopt
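To make the abstract's "off-the-shelf tool" concrete, the sketch below shows what a Benchopt solver can look like for the Lasso task mentioned above: a minimal ISTA (proximal gradient) solver. It follows the public Benchopt solver interface (a `Solver` class deriving from `BaseSolver` with `set_objective`, `run` and `get_result` methods), but the solver itself and its name are illustrative assumptions, not one of the paper's benchmarked implementations; it also assumes a Lasso benchmark whose Objective passes `X`, `y` and `lmbd` to solvers.

```python
import numpy as np
from benchopt import BaseSolver


class Solver(BaseSolver):
    """Toy ISTA (proximal gradient) solver for the Lasso (sketch)."""

    name = "ISTA-sketch"  # hypothetical solver name, for illustration

    def set_objective(self, X, y, lmbd):
        # Benchopt calls this with whatever the benchmark's Objective
        # exposes; a Lasso benchmark typically provides the design matrix
        # X, the targets y and the regularization strength lmbd.
        self.X, self.y, self.lmbd = X, y, lmbd

    def run(self, n_iter):
        # With the default iteration-based sampling strategy, Benchopt
        # calls run() with a growing iteration budget and times each call.
        X, y, lmbd = self.X, self.y, self.lmbd
        n_samples, n_features = X.shape
        # Lipschitz constant of the smooth part of the Lasso objective
        # (1 / (2 n)) * ||y - X w||^2 + lmbd * ||w||_1.
        L = np.linalg.norm(X, ord=2) ** 2 / n_samples
        w = np.zeros(n_features)
        for _ in range(n_iter):
            grad = X.T @ (X @ w - y) / n_samples
            w -= grad / L
            # Soft-thresholding: proximal operator of the l1 penalty.
            w = np.sign(w) * np.maximum(np.abs(w) - lmbd / L, 0.0)
        self.w = w

    def get_result(self):
        # Recent Benchopt versions expect a dict of named results.
        return dict(beta=self.w)
```

Such a file would sit in a benchmark's solvers folder, and the comparison would then be run from the command line with something like `benchopt run ./my_benchmark --solver ISTA-sketch` (benchmark path and solver name hypothetical).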