- Incomplete Disclosure: You read a paper that proposes Algorithm A, which is shown to outperform SVMs on two datasets. Great. But what about on other datasets? How sensitive is this result? What about compute time – does the algorithm take two seconds on a laptop or two weeks on a 100-node cluster?
- Lack of Standardization: Algorithm A beats Algorithm B on one version of a dataset; Algorithm B beats Algorithm A on another version that uses slightly different preprocessing. A head-to-head comparison would be ideal, but it would be tedious: the programs probably use different dataset formats and have a large array of options. And what if we wanted to compare on more than just one dataset and two algorithms?
- Incomplete View of State-of-the-Art: Basic question: What’s the best algorithm for your favorite dataset? To find out, you could simply plow through fifty papers, get code from any author willing to reply, and reimplement the rest. Easy, right? Well, maybe not…
- Upload a program to our online repository (see the sketch after this list for what a minimal program might look like).
- Upload a dataset.
- Run any user’s program on any user’s dataset. (MLcomp provides the computation for free using Amazon’s EC2.)
- For any executed run, view the results (various error metrics and time/memory usage statistics).
- Download any dataset, program, or run for further use.
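For concreteness, here is a minimal sketch (in Python) of the kind of classification program a user might wrap for upload. The learn/predict command-line interface and the one-label-per-line file formats below are assumptions made purely for illustration; the actual program and dataset specifications are documented on the site.

#!/usr/bin/env python
"""Illustrative sketch of a trivial classification program with a uniform
command-line interface. Not MLcomp's actual program spec."""
import json
import sys
from collections import Counter

MODEL_FILE = "model.json"  # hypothetical location for saved state

def learn(train_path):
    # Assumed format: one example per line, "<label> <feature> <feature> ...".
    # This majority-class baseline only needs the labels.
    labels = [line.split()[0] for line in open(train_path) if line.strip()]
    majority = Counter(labels).most_common(1)[0][0]
    with open(MODEL_FILE, "w") as f:
        json.dump({"majority": majority}, f)

def predict(test_path, out_path):
    with open(MODEL_FILE) as f:
        model = json.load(f)
    with open(out_path, "w") as out:
        for line in open(test_path):
            if line.strip():
                out.write(model["majority"] + "\n")

if __name__ == "__main__":
    command = sys.argv[1]
    if command == "learn":
        learn(sys.argv[2])
    elif command == "predict":
        predict(sys.argv[2], sys.argv[3])
    else:
        sys.exit("usage: run.py learn <train> | predict <test> <out>")

The point is simply that once a program exposes a uniform interface like this, it can be run automatically against every dataset of the matching task type.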
An important aspect of the site is that it’s collaborative: by uploading just one program or dataset, a user taps into the entire network of existing programs and datasets for comparison. While data and code repositories do exist (e.g., UCI, mloss.org), MLcomp is unique in that data and code interact to produce analyzable results.
MLcomp is under active development. Currently, seven machine learning task types (classification, regression, collaborative filtering, sequence tagging, etc.) are supported, with hundreds of standard programs and datasets already online. We encourage you to browse the site and hopefully contribute more! Please send comments and feedback to mlcomp.support (AT) gmail.com.
And what about metrics? Will it be possible to upload a new metric and have it used in evaluation?
It’s less suited to finding the BEST algorithm, but for comparing the capabilities of various algorithms on various types of separable data, this guide is very nice:
http://home.comcast.net/~tom.fawcett/public_html/ML-gallery/pages/index.html
Hi guys,
Thanks for the info. Your site looks pretty well done.
Have you seen the TunedIT website (http://tunedit.org)? It was started in 2009 with the same purpose: to make ML experiments reproducible and verifiable, and to facilitate collaboration between researchers through the sharing of datasets, algorithms, and experimental results. So far, TunedIT has gathered over 150,000 results and lots of different datasets. See, for example:
* http://tunedit.org/results?d=iris.arff&a=weka – results of Weka algorithms on the Iris dataset
* http://tunedit.org/search?q=ARFF – datasets in ARFF format
Experimental results come from the TunedTester application, which automates tests and guarantees their reproducibility. Evaluation procedures (metrics) are pluggable, and new ones can be created by any user.
This website is very cool. Do you guys have any plans for easier and/or fancier ways to compare algorithm effectiveness across all the datasets? One idea would be head-to-head comparisons like in the computer language shootout: http://shootout.alioth.debian.org/
This is a great site. I’m gonna spread the news among my friends. I wish you good luck!
Hi guys,
Thanks for the feedback; we really do appreciate it. As for using new metrics, we are thinking about how to do this. The concept of a “run” in MLcomp is very general, and alternative evaluation metrics can be swapped in with no trouble. The question is how much control we should give the user without things getting confusing. But expect to see more of this in the future, since we’ve heard this request a lot.
Jake
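To make the idea of a swappable evaluation metric concrete, here is a minimal sketch in Python. The function signature and one-label-per-line file format are assumptions for illustration only, not MLcomp’s actual run interface.

def error_rate(true_labels, predicted_labels):
    """Fraction of examples whose predicted label differs from the true one."""
    assert len(true_labels) == len(predicted_labels)
    wrong = sum(t != p for t, p in zip(true_labels, predicted_labels))
    return wrong / float(len(true_labels))

def evaluate(truth_path, predictions_path, metric=error_rate):
    """Score a finished run by comparing a truth file against a predictions
    file (one label per line, an assumed format). Swapping in a different
    metric is just a matter of passing another function."""
    truth = [line.strip() for line in open(truth_path) if line.strip()]
    preds = [line.strip() for line in open(predictions_path) if line.strip()]
    return metric(truth, preds)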