Summer Conferences

Here’s a handy table for the summer conferences.

| Conference | Deadline | Reviewer Targeting | Double Blind | Author Feedback | Location | Date |
|------------|----------|--------------------|--------------|-----------------|----------|------|
| ICML | January 26 | Yes | Yes | Yes | Montreal, Canada | June 14-17 |
| COLT | February 13 | No | No | Yes | Montreal | June 19-21 |
| UAI | March 13 | No | Yes | No | Montreal | June 19-21 |
| KDD | February 2/6 | No | No | No | Paris, France | June 28-July 1 |

Reviewer targeting is new this year. The idea is that many poor decisions happen because papers go to reviewers who are unqualified, and the hope is that allowing authors to point out who is qualified will result in better decisions. This seems like a reasonable idea to test.

Both UAI and COLT are experimenting this year as well with double blind and author feedback, respectively. Of the two, I believe author feedback is more important, as I’ve seen it make a difference. However, I still consider double blind reviewing a net win, as it’s a substantial public commitment to fairness.

ICML Reviewing Criteria

Michael Littman and Leon Bottou have decided to use a franchise program chair approach to reviewing at ICML this year. I’ll be one of the area chairs, so I wanted to mention a few things if you are thinking about naming me.

  1. I take reviewing seriously. That means papers to be reviewed are read, the implications are considered, and decisions are made only after that. I do my best to be fair, and there are zero subjects that I consider categorical rejects. There are also several arguments for rejection-not-on-the-merits that I do not consider reasonable.
  2. I am generally interested in papers that (a) analyze new models of machine learning, (b) provide new algorithms, and (c) show that they work empirically on plausibly real problems. If a paper has the trifecta, I’m particularly interested. With 2 out of 3, I might be interested. I often find papers with only one element harder to accept, including papers with just (a).
  3. I’m a bit tough. I rarely jump-up-and-down about a paper, because I believe that great progress is rarely made. I’m not very interested in new algorithms with the same theorems as older algorithms. I’m also cautious about new analysis for older algorithms, since I like to see analysis driving algorithms rather than vice versa. I prioritize a proof-of-possibility over a quantitative improvement. I consider quantitative improvements of small constant factors in sample complexity significant. For computational complexity, I generally want to see at least an order of magnitude improvement. I generally disregard experiments on toy data, because I’ve found that toy data and real data can too easily differ in their behavior.
  4. My personal interests are pretty well covered by existing papers, but this is perhaps not too important a criterion, compared to the above, as I easily believe other subjects are interesting.

A Healthy COLT

A while ago, we discussed the health of COLT. COLT 2008 substantially addressed my concerns. The papers were diverse and several were interesting. Attendance was up, which is particularly notable in Europe. In my opinion, the colocation with UAI and ICML was the best colocation since 1998.

And, perhaps best of all, registration ended up being free for all students due to various grants from the Academy of Finland, Google, IBM, and Yahoo.

A basic question is: what went right? There seem to be several answers.

  1. Cost-wise, COLT had sufficient grants to alleviate the high cost of the Euro, and the location at a university substantially reduced the cost compared to a hotel.
  2. Organization-wise, the Finns were great with hordes of volunteers helping set everything up. Having too many volunteers is a good failure mode.
  3. Organization-wise, it was clear that all 3 program chairs were cooperating in designing the program.
  4. Facilities-wise, proximity in time and space made the colocation much more real than many others have been in the past.
  5. Program-wise, COLT notably had two younger program chairs, Tong and Rocco, which seemed to work well.

NIPS 2008 workshop on Kernel Learning

We’d like to invite hunch.net readers to participate in the NIPS 2008 workshop on kernel learning. While the main focus is on automatically learning kernels from data, we are also looking at the broader questions of feature selection, multi-task learning, and multi-view learning. There are no restrictions on the learning problem being addressed (regression, classification, etc.), and both theoretical and applied work will be considered. The deadline for submissions is October 24.

More detail can be found on the workshop page.

Corinna Cortes, Arthur Gretton, Gert Lanckriet, Mehryar Mohri, Afshin Rostamizadeh