- Incomplete Disclosure: You read a paper that proposes Algorithm A, which is shown to outperform SVMs on two datasets. Great, but how does it do on other datasets? How sensitive is this result? And what about compute time: does the algorithm take two seconds on a laptop or two weeks on a 100-node cluster?
- Lack of Standardization: Algorithm A beats Algorithm B on one version of a dataset; Algorithm B beats Algorithm A on another version that uses slightly different preprocessing. A head-to-head comparison would be ideal, but it would also be tedious: the programs probably use different dataset formats and have a large array of options. And what if we wanted to compare more than two algorithms on more than one dataset?
- Incomplete View of State-of-the-Art: Basic question: What’s the best algorithm for your favorite dataset? To find out, you could simply plow through fifty papers, get code from every author willing to reply, and reimplement the rest. Easy, right? Well, maybe not…
Enter MLcomp, a website where any user can:
- Upload a program to our online repository. (A minimal sketch of such a program appears after this list.)
- Upload a dataset.
- Run any user’s program on any user’s dataset. (MLcomp provides the computation for free using Amazon’s EC2.)
- For any executed run, view the results (various error metrics and time/memory usage statistics).
- Download any dataset, program, or run for further use.
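To make the list above concrete, here is a minimal sketch, in Python, of the kind of classification program one might upload: a trivial majority-class baseline. The learn/predict command-line interface and the data format (one example per line, label in the first column) are assumptions made purely for illustration; the actual program interface and dataset formats that MLcomp expects for each task type are specified on the site.

```python
# Hypothetical sketch of an uploadable classification program.
# Interface and file format are illustrative assumptions, not MLcomp's spec.
import sys
from collections import Counter


def read_labels(path):
    # Assume one example per line, with the label in the first
    # whitespace-separated column.
    with open(path) as f:
        return [line.split()[0] for line in f if line.strip()]


def main():
    mode, data_path = sys.argv[1], sys.argv[2]
    if mode == "learn":
        # "Train" by memorizing the majority class: a trivial baseline.
        majority = Counter(read_labels(data_path)).most_common(1)[0][0]
        with open("model", "w") as f:
            f.write(majority)
    elif mode == "predict":
        with open("model") as f:
            majority = f.read().strip()
        # Emit one prediction per test example, one per line.
        for _ in read_labels(data_path):
            print(majority)


if __name__ == "__main__":
    main()
```

The value of a fixed contract like this is that any conforming program can be run on any dataset of the matching task type, which is exactly what makes large-scale head-to-head comparison feasible.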
An important aspect of the site is that it’s collaborative: by uploading just one program or dataset, a user taps into the entire network of existing programs and datasets for comparison. While data and code repositories do exist (e.g., UCI, mloss.org), MLcomp is unique in that data and code interact to produce analyzable results.
MLcomp is under active development. Currently, seven machine learning task types (classification, regression, collaborative filtering, sequence tagging, etc.) are supported, with hundreds of standard programs and datasets already online. We encourage you to browse the site and, we hope, contribute your own! Please send comments and feedback to mlcomp.support (AT) gmail.com.