MLcomp: a website for objectively comparing ML algorithms

Much of the success and popularity of machine learning has been driven by its practical impact. Of course, the evaluation of empirical work is an integral part of the field. But are the existing mechanisms for evaluating algorithms and comparing results good enough? We (Percy and Jake) believe there are currently a number of shortcomings:

  1. Incomplete Disclosure: You read a paper that proposes Algorithm A, which is shown to outperform SVMs on two datasets.  Great.  But what about on other datasets?  How sensitive is this result?  And what about compute time – does the algorithm take two seconds on a laptop or two weeks on a 100-node cluster?
  2. Lack of Standardization: Algorithm A beats Algorithm B on one version of a dataset, while Algorithm B beats Algorithm A on another version that uses slightly different preprocessing.  A head-to-head comparison would be ideal, but it would be tedious, since the programs probably use different dataset formats and have a large array of options.  And what if we wanted to compare on more than just one dataset and two algorithms?
  3. Incomplete View of the State of the Art: Basic question: What’s the best algorithm for your favorite dataset?  To find out, you could simply plow through fifty papers, get code from any author willing to reply, and reimplement the rest. Easy, right? Well, maybe not…

We’ve thought a lot about how to solve these problems. Today, we’re launching a new website, MLcomp.org, which we think is a good first step.

What is MLcomp? In short, it’s a collaborative website for objectively comparing machine learning programs across various datasets.  On the website, a user can do any combination of the following:

  1. Upload a program to our online repository (see the sketch after this list for a rough idea of what such a program might look like).
  2. Upload a dataset.
  3. Run any user’s program on any user’s dataset.  (MLcomp provides the computation for free using Amazon’s EC2.)
  4. For any executed run, view the results (various error metrics and time/memory usage statistics).
  5. Download any dataset, program, or run for further use.
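To make item 1 more concrete, here is a rough sketch of what an uploaded classification program could look like. The learn/predict command-line convention, file names, and data format below are illustrative assumptions only; the actual program interface is specified in the documentation on MLcomp.org.

    #!/usr/bin/env python
    # Hypothetical MLcomp-style program wrapper (interface assumed for
    # illustration, not taken from the actual MLcomp specification).
    # The idea: the site calls the program once to train and once to predict.
    import pickle
    import sys

    def load_examples(path):
        """Read lines of the form 'label feature1 feature2 ...' (assumed format)."""
        examples = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                if fields:
                    examples.append((fields[0], [float(x) for x in fields[1:]]))
        return examples

    def learn(train_path):
        # Toy learner: memorize the majority class (a stand-in for a real algorithm).
        labels = [label for label, _ in load_examples(train_path)]
        majority = max(set(labels), key=labels.count)
        with open('model.pkl', 'wb') as f:
            pickle.dump(majority, f)

    def predict(test_path, out_path):
        # Load the saved model and write one predicted label per test example.
        with open('model.pkl', 'rb') as f:
            majority = pickle.load(f)
        with open(out_path, 'w') as f:
            for _ in load_examples(test_path):
                f.write(majority + '\n')

    if __name__ == '__main__':
        command = sys.argv[1]
        if command == 'learn':
            learn(sys.argv[2])
        elif command == 'predict':
            predict(sys.argv[2], sys.argv[3])

Whatever the exact convention, the point of a small, fixed contract like this is that it makes programs and datasets interchangeable, which is what allows any user’s program to be run on any user’s dataset.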

An important aspect of the site is that it’s collaborative: by uploading just one program or dataset, a user taps into the entire network of existing programs and datasets for comparison.  While data and code repositories do exist (e.g., UCI, mloss.org), MLcomp is unique in that data and code interact to produce analyzable results.

MLcomp is under active development.  Currently, seven machine learning task types (classification, regression, collaborative filtering, sequence tagging, etc.) are supported, with hundreds of standard programs and datasets already online.  We encourage you to browse the site and hopefully contribute more!  Please send comments and feedback to mlcomp.support (AT) gmail.com.

7 Replies to “MLcomp: a website for objectively comparing ML algorithms”

  1. And what about metrics? Will it be possible to upload some new metric to be processed?

  2. Hi guys,
    Thanks for the info. Your site looks pretty well done.

    Have you seen the TunedIT website (http://tunedit.org)? It was started in 2009 with the same purpose: to make ML experiments reproducible and verifiable, and to facilitate collaboration between researchers through sharing of datasets, algorithms, and experimental results. So far, TunedIT has gathered over 150,000 results and lots of different datasets. See, for example:

    * http://tunedit.org/results?d=iris.arff&a=weka – results of Weka algorithms on Iris dataset
    * http://tunedit.org/search?q=ARFF – datasets in ARFF format

    Experimental results come from the TunedTester application, which automates tests and guarantees their reproducibility. Evaluation procedures (metrics) are pluggable, and new ones can be created by any user.

  3. This website is very cool. Do you guys have any plans for easier and/or fancier ways to compare algorithm effectiveness across all the datasets? One idea would be head-to-head comparisons like in the computer language shootout: http://shootout.alioth.debian.org/

  4. This is a great site. I’m gonna spread the news among my friends. I wish you good luck!

  5. Hi guys,

    Thanks for the feedback; we really do appreciate it. As for new metrics, we are thinking about how to support them. The concept of a “run” in MLcomp is very general, and alternative evaluation metrics can be swapped in with no trouble. The question is how much control we should give the user without things getting confusing. But expect to see more of this in the future, since we’ve heard this request a lot.
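
    To give a rough idea: a pluggable metric would essentially be a small program that reads the true labels and a run’s predictions and prints a score. The sketch below is purely hypothetical and is not the actual plug-in format:

        # Hypothetical pluggable metric (not MLcomp's actual plug-in format):
        # compare true labels with predicted labels and print the error rate.
        import sys

        def read_labels(path):
            # Assume one label per line, first whitespace-separated field.
            with open(path) as f:
                return [line.split()[0] for line in f if line.strip()]

        if __name__ == '__main__':
            truth = read_labels(sys.argv[1])        # ground-truth labels
            predictions = read_labels(sys.argv[2])  # a run's predictions
            errors = sum(t != p for t, p in zip(truth, predictions))
            print('errorRate: %f' % (errors / float(len(truth))))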

    Jake
