NIPS 2008 workshop on Kernel Learning

We’d like to invite hunch.net readers to participate in the NIPS 2008 workshop on kernel learning. While the main focus is on automatically learning kernels from data, we are also looking at the broader questions of feature selection, multi-task learning, and multi-view learning. There are no restrictions on the learning problem being addressed (regression, classification, etc.), and both theoretical and applied work will be considered. The deadline for submissions is October 24.

More detail can be found here.

Corinna Cortes, Arthur Gretton, Gert Lanckriet, Mehryar Mohri, Afshin Rostamizadeh

Fall ML Conferences

If you are in the New York area and interested in machine learning, consider submitting a 2-page abstract to the ML symposium by midnight tomorrow (Sept 5th). It’s a fun one-day affair on October 10 in an awesome location overlooking the World Trade Center site.

A bit further off (but a real conference) is AI and Stats, with a submission deadline of November 5; the conference will be held in Florida, April 16-19.

Interesting papers, ICML 2008

Here are some papers from ICML 2008 that I found interesting.

  1. Risi Kondor and Karsten Borgwardt, The Skew Spectrum of Graphs. This paper is about a new family of functions on graphs which are invariant under permutation of the node labels. They show that these quantities appear to yield good features for learning.
  2. Sanjoy Dasgupta and Daniel Hsu. Hierarchical sampling for active learning. This is the first published practical consistent active learning algorithm. The abstract is also pretty impressive.
  3. Lihong Li, Michael Littman, and Thomas Walsh, Knows What It Knows: A Framework For Self-Aware Learning. This is an attempt to create learning algorithms that know when they err (other work along these lines includes Vovk’s). It’s not yet clear to me what the right model for feature-dependent confidence intervals is.
  4. Novi Quadrianto, Alex Smola, Tiberio Caetano, and Quoc Viet Le, Estimating Labels from Label Proportions. This is an example of learning in a specialization of the offline contextual bandit setting.
  5. Filip Radlinski, Robert Kleinberg, and Thorsten Joachims, Learning Diverse Rankings with Multi-Armed Bandits. Learning should be used to solve the diversity problem, and doing it in an online bandit-like setting is quite natural (a toy sketch of this setting appears after the list). I believe the setting can be generalized to incorporate features without too much work.
  6. Rich Caruana, Nikos Karampatziakis, and Ainur Yessenalina, An Empirical Evaluation of Supervised Learning in High Dimensions. This paper doesn’t need an abstract given the title :). I hadn’t previously appreciated how well a random forest works in high dimensions.
  7. Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari, Efficient Bandit Algorithms for Online Multiclass Prediction. A paper about the online contextual bandit setting, specialized to a realizable multiclass but otherwise adversarial case, which yields a practical algorithm.
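
To make the bandit-like setting in item 5 concrete, here is a toy sketch, not the algorithm from the paper: one epsilon-greedy bandit per ranking slot, where a slot’s bandit is rewarded only when its chosen document receives the click. Lower slots then drift toward documents that satisfy users the higher slots miss, which is the diversity effect. All names here (`EpsilonGreedyBandit`, `rank_documents`, the click simulator) are illustrative.

```python
import random
from collections import defaultdict

class EpsilonGreedyBandit:
    """One bandit per ranking slot; arms are the candidate documents."""
    def __init__(self, docs, epsilon=0.1):
        self.docs = list(docs)
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # pulls per document
        self.values = defaultdict(float)  # running mean reward per document

    def select(self, exclude):
        candidates = [d for d in self.docs if d not in exclude]
        if random.random() < self.epsilon:
            return random.choice(candidates)
        return max(candidates, key=lambda d: self.values[d])

    def update(self, doc, reward):
        self.counts[doc] += 1
        self.values[doc] += (reward - self.values[doc]) / self.counts[doc]

def rank_documents(bandits):
    """Build a ranking by letting each slot's bandit pick an unused document."""
    ranking = []
    for bandit in bandits:
        ranking.append(bandit.select(exclude=set(ranking)))
    return ranking

# Hypothetical simulation: two user populations, each clicking the first
# document in the ranking that it finds relevant.
docs = ["d1", "d2", "d3", "d4", "d5"]
relevant = [{"d1", "d3"}, {"d2"}]
bandits = [EpsilonGreedyBandit(docs) for _ in range(3)]  # 3 ranking slots

for t in range(5000):
    ranking = rank_documents(bandits)
    user = random.choice(relevant)
    clicked = next((d for d in ranking if d in user), None)  # first satisfying doc
    for slot, doc in enumerate(ranking):
        bandits[slot].update(doc, 1.0 if doc == clicked else 0.0)

print(rank_documents(bandits))  # the learned top slots tend to cover both user types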

I’d like to add that I thought the conference organization (and the colocation with COLT and UAI) was particularly well done, the best I’ve seen. The key seems to be tight integration of the colocated conference programs and hordes of local volunteers making sure everything works. I was also happy to see a 10-year award for the best paper from 10 years ago.

COLT Open Problems

COLT has a call for open problems due March 21. I encourage anyone with a specifiable open problem to write it down and send it in. Just the effort of specifying an open problem precisely and concisely has been very helpful in finding my own solutions, and there is a substantial chance that others will solve it. To increase the chance someone takes it up, you can even put a bounty on the solution. (Perhaps I should raise the $500 bounty on the K-fold cross-validation problem, as it hasn’t yet been solved.)
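
For readers who haven’t seen the procedure behind that last open problem, here is a minimal sketch of the K-fold cross-validation error estimate; the open question concerns the statistical guarantees of this estimate, not its mechanics. The `train` and `error` callables and the toy majority-vote classifier are hypothetical placeholders for this sketch.

```python
import numpy as np

def k_fold_cv_error(X, y, train, error, k=10, seed=0):
    """Estimate generalization error by K-fold cross-validation.

    train(X_train, y_train) -> model and error(model, X_test, y_test) -> float
    are user-supplied callables.
    """
    n = len(y)
    idx = np.random.RandomState(seed).permutation(n)
    folds = np.array_split(idx, k)
    fold_errors = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train(X[train_idx], y[train_idx])
        fold_errors.append(error(model, X[test_idx], y[test_idx]))
    return float(np.mean(fold_errors))

# Toy usage with a majority-vote "classifier" to keep the sketch self-contained.
X = np.random.randn(100, 5)
y = (X[:, 0] > 0).astype(int)
train = lambda Xtr, ytr: int(round(ytr.mean()))        # predict the majority label
error = lambda m, Xte, yte: float(np.mean(yte != m))   # 0/1 loss
print(k_fold_cv_error(X, y, train, error, k=10))
```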