Interesting papers, ICML 2008

Here are some papers from ICML 2008 that I found interesting.

  1. Risi Kondor and Karsten Borgwardt, The Skew Spectrum of Graphs. This paper introduces a new family of functions on graphs that are invariant under node label permutation. These quantities appear to yield good features for learning.
  2. Sanjoy Dasgupta and Daniel Hsu. Hierarchical sampling for active learning. This is the first published practical consistent active learning algorithm. The abstract is also pretty impressive.
  3. Lihong Li, Michael Littman, and Thomas Walsh, Knows What It Knows: A Framework For Self-Aware Learning. This is an attempt to create learning algorithms that know when they err (other work in this direction includes Vovk’s). It’s not yet clear to me what the right model for feature-dependent confidence intervals is.
  4. Novi Quadrianto, Alex Smola, Tiberio Caetano, and Quoc Viet Le, Estimating Labels from Label Proportions. This is an example of learning in a specialization of the offline contextual bandit setting.
  5. Filip Radlinski, Robert Kleinberg, and Thorsten Joachims, Learning Diverse Rankings with Multi-Armed Bandits. Learning should be used to solve the diversity problem, and doing so in an online, bandit-like setting is quite natural. I believe the setting can be generalized to one with features without too much work.
  6. Rich Caruana, Nikos Karampatziakis, and Ainur Yessenalina, An Empirical Evaluation of Supervised Learning in High Dimensions. This paper doesn’t need an abstract given the title :). I hadn’t previously appreciated how well a random forest works in high dimensions.
  7. Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari, Efficient Bandit Algorithms for Online Multiclass Prediction. This paper studies an online contextual bandit setting specialized to multiclass prediction, realizable yet otherwise adversarial, and yields a practical algorithm.

I’d like to add that I thought the conference organization (and the colocation with COLT and UAI) was particularly well done, the best I’ve seen. The key seems to be tight integration of the colocated conference programs and hordes of local volunteers making sure everything is working. I was also happy to see a 10-year award for the best paper from 10 years ago.

ICML has a comment system

Mark Reid has stepped up and created a comment system for ICML papers which Greger Linden has tightly integrated.

My understanding is that Mark spent quite a bit of time on the details, and there are some cool features like a working LaTeX math mode. This is an excellent chance for the ICML community to experiment with making ICML year-round, so I hope it works out. Please do consider experimenting with it.

Reviewing Horror Stories

Essentially everyone who writes research papers suffers rejections. They always sting immediately, but upon further reflection many of these rejections come to seem reasonable. Maybe the equations had too many typos, or maybe the topic just isn’t as important as was originally thought. A few rejections do not come to seem acceptable, and these form the basis of reviewing horror stories, great material for conversations. I’ve decided to share three of mine, now all safely a bit distant in the past.

  1. Prediction Theory for Classification Tutorial. This is a tutorial about tight sample complexity bounds for classification that I submitted to JMLR. The first decision I heard was a rejection which appeared quite unjust to me: for example, one of the reviewers appeared to claim that all the content was in standard statistics books. Upon further inquiry, several citations were given, none of which actually covered the content. Later, I was shocked to hear the paper was accepted. Apparently, the paper accidentally went to two different action editors, who each chose distinct reviewers.
  2. Cover Tree. This paper was the first to give a data structure for nearest neighbor search under an arbitrary metric which both (a) achieves logarithmic query time under a dimensionality constraint and (b) always requires space competitive with brute force nearest neighbor search. Previous papers had achieved (a) or (b), but not both, and achieving both appears key to a practical algorithm, which we backed up with experiments and code.

    The cover tree paper suffered a triple rejection, the last of which seems particularly poor to me. We submitted the draft to SODA and got back 3 reviews. The first was blank. The second was a paragraph of positive but otherwise uninformative text. The third was blank. The decision was reject. We were rather confused, so we emailed the program chair asking if the decision was right and, if so, whether there was any more information we could get. We got back only a form letter providing no further information. The paper has since been accepted at ICML.

  3. Ranking Reduction. This paper shows that learning how to predict which of a pair of items is better strongly transfers to optimizing a ranking loss, in contrast to (for example) simply predicting a score and ordering according to the predicted score. (A toy sketch of this contrast appears after this list.)

    We submitted this paper to NIPS and it had the highest average review score of any learning theory paper. The decision was to reject. Based upon what we could make out from a statement by the program committee, the logic of this decision is most kindly describable as badly flawed: somehow they confused the algorithm, the problem, and the analysis into a mess. The paper was later accepted at COLT. (A bit of disclosure: I was on the program committee at NIPS that year, although obviously not involved in the decision on this paper.)
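
To make the contrast in the ranking reduction item concrete, here is a toy sketch of the two approaches: ordering items by a learned per-item score, versus ordering them by the number of pairwise comparisons each wins under a learned preference predictor. This is only an illustration of the general idea under assumed function names (score, prefer); it is not necessarily the construction analyzed in the paper.

```python
# Toy illustration: score-and-sort versus ranking from a learned pairwise
# preference predictor by counting pairwise "wins". A sketch of the general
# idea, not necessarily the paper's construction.
from typing import Callable, List, TypeVar

Item = TypeVar("Item")

def rank_by_score(items: List[Item], score: Callable[[Item], float]) -> List[Item]:
    """Score-and-sort: order items by a learned per-item score."""
    return sorted(items, key=score, reverse=True)

def rank_by_pairwise_wins(items: List[Item],
                          prefer: Callable[[Item, Item], bool]) -> List[Item]:
    """Order items by how many pairwise comparisons each one wins, where
    prefer(a, b) is a learned predictor of 'a is better than b'."""
    wins = [0] * len(items)
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and prefer(items[i], items[j]):
                wins[i] += 1
    order = sorted(range(len(items)), key=lambda i: wins[i], reverse=True)
    return [items[i] for i in order]
```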

When a rejection occurs, the default presumption is that the correct decision was made, because most of the time a good (or at least reasonable) decision is made. Consequently, it seems important to point out that there are objective signs that each of the above cases involved a poor decision.

  1. The tutorial paper is fairly widely cited (Google Scholar places it 8th amongst my papers), and I continue to find it useful material for a lecture when teaching a class.
  2. The cover tree is also fairly widely cited, and I know from various emails and download counts that it is used by several people. It also won an award from IBM. To this day, it seems odd that an algorithms paper was only publishable at a machine learning conference.
  3. It’s really too soon to tell with the ranking paper, but it was one of the few COLT papers invited to a journal special issue, and there has since been substantial additional work by Mehryar Mohri and Nir Ailon which broadens the claim to other ranking metrics and makes it more computationally tractable.

One of the reasons you hear for why a paper was rejected and then accepted is that the paper improved in the meantime. That’s often true, but in each of the above cases I don’t believe there were any substantial changes between submissions (and the tutorial’s accidental double review was a perfect natural experiment).

Normally, reviewing horror stories are the academic equivalent of war stories, but these have a bit more of a point: each has informed my thinking about how reviewing should be done. Relating these stories might make that thinking a bit more understandable.

  1. Reviewer Choice. The tutorial case brings home the impact of how reviewers are chosen. If a paper is to have three reviews, it seems like a good idea to choose the reviewers in diverse ways rather than in one way. For example, at a conference, selecting one reviewer by bidding preference, one by the area chair, and one by another area chair or the program chair might reduce variance.
  2. Uniform Author Feedback. Author feedback was the standard at NIPS when the ranking paper was submitted. In effect, the standard was not followed for the ranking paper, and it’s easy to imagine this making a substantial difference given how badly flawed the basis of rejection was. It is also easy to imagine that author feedback might have made a difference in the tutorial rejection, since the reviewer was wrong (author feedback was not the standard then).
  3. Decision Basis. It’s helpful for the program committee to relate the basis of its decision, especially when it is not summarized in the reviews. The cover tree case was one of the things which led me to add summaries to some of the NIPS papers when I was on the program committee, and I am committed to doing the same for SODA papers I’m reviewing this year. Not having a summary saves the program committee the embarrassment of accidentally admitting mistakes, but it is badly disrespectful of the authors and generally promotes misunderstanding.
  4. Fast decisions are bad. It’s not possible to reliably make good decisions about technical matters quickly. I suspect that the time crunch of the NIPS program committee meeting was a contributing factor in the ranking paper case.

As anyone educated in machine learning or statistics understands, drawing 4 conclusions from 3 data points is problematic, so the above should be understood as suggestions subject to further evidence.

Taking the next step

At the last ICML, Tom Dietterich asked me to look into systems for commenting on papers. I’ve been slow getting to this, but it’s relevant now.

The essential observation is that we now have many tools for online collaboration, but they are not yet much used in academic research. If we can find the right way to use them, then perhaps great things might happen, with extra kudos to the first conference that manages to really create an online community. Various conferences have been poking at this. For example, UAI has set up a wiki, COLT has started using Joomla with some dynamic content, and AAAI has been setting up a “student blog”. Similarly, Dinoj Surendran set up a TWiki for the Chicago Machine Learning Summer School, which was quite useful for coordinating events and other things.

I believe the most important thing is a willingness to experiment. A good place to start seems to be enhancing existing conference websites. For example, the ICML 2007 papers page is basically only useful via grep. A much more human-readable version of the page would organize the papers by topic; if the page were wiki-editable, this would almost happen automatically. Adding the ability for people to comment on the papers might make the website more useful beyond the time of the conference itself.
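
As a toy illustration of what “organize the papers by topic” could look like in automated form, here is a short sketch that groups paper titles by keyword. The file name (papers.txt) and the keyword list are purely illustrative assumptions, not part of any actual conference infrastructure.

```python
# Toy sketch: group paper titles by topic keyword so a papers listing is
# browsable rather than grep-only. The file name and keyword list are
# illustrative assumptions, not real conference tooling.
from collections import defaultdict

TOPIC_KEYWORDS = {
    "bandits": ["bandit"],
    "ranking": ["ranking", "rank"],
    "kernels": ["kernel"],
    "active learning": ["active learning"],
}

def group_titles_by_topic(titles):
    """Map each topic to the titles matching its keywords; unmatched titles go under 'other'."""
    groups = defaultdict(list)
    for title in titles:
        lowered = title.lower()
        matched = False
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(k in lowered for k in keywords):
                groups[topic].append(title)
                matched = True
        if not matched:
            groups["other"].append(title)
    return groups

if __name__ == "__main__":
    # Assumes one paper title per line in a plain-text file named papers.txt.
    with open("papers.txt") as f:
        titles = [line.strip() for line in f if line.strip()]
    for topic, grouped in sorted(group_titles_by_topic(titles).items()):
        print(f"== {topic} ==")
        for t in grouped:
            print("  " + t)
```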

There are several aspects of an experiment which seem intuitively important to me. I found the wikipatterns site a helpful distillation of many of these intuitions. Here are various concerns I have:

  1. Mandate. An official mandate is a must-have. Any such enhancement needs to be an official part of the website, or the hesitation to participate will probably be too much.
  2. Permissive Comments. Allowing anyone to comment on a website is somewhat scary to academics, because we are used to peer reviewing papers before publication. Nevertheless, it seems important not to strongly filter comments, because:
    1. The added (human) work of filtering is burdensome.
    2. The delay introduced acts as a barrier to participation.

    The policy I’ve followed on hunch.net is to allow comments from anyone exhibiting evidence of intelligence, i.e., filtering essentially only robots. This has worked as well as I hoped, and not as badly as I feared.

  3. Spam. Spam is a serious issue for dynamic websites, because it adds substantially to the maintenance load. There are basically two tacks to take here:
    1. Issue a userid/password to every conference registrant (and maybe others who request it), then just allow comments from them.
    2. Allow comments from anyone, but use automated filters. I’ve been using Akismet, but reCAPTCHA is also cool.

    I favor the second approach, because it’s more permissive and makes participation easier. However, it may increase the maintenance workload. (A rough sketch of this approach appears after this list.)

  4. Someone. Someone to shepherd the experiment is needed. I’m personally overloaded with other things at the moment (witness the slow post rate), so I don’t have significant time to devote. Nevertheless, I’m sure there are many people in the community with as good a familiarity with the internet and web applications as I have.
  5. Software Choice. I don’t have strong preferences for the precise choice of software, but some guidelines seem good.
    1. Open Source. I have a strong preference for open source solutions, of which there appear to be several reasonable choices. The reason is that open source applications leave you free (or at least freer) to switch and change things, which seems essential when experimenting.
    2. Large User Base. When going with an open source solution, something with a large user base is likely to have fewer rough edges.

    I have some preference for systems using flat files for data storage rather than a database, because they are easier to maintain or (if necessary) operate on. This is partly due to a bad experience I had with the TWiki setup for MLSS: an attempt to transfer the data to an upgraded MySQL installation failed because of schema issues I could not resolve.

    I’m sure there are many people with more experience using wiki and comment systems; perhaps they can comment on exact software choices. WikiMatrix seems to provide frighteningly detailed comparisons of different wiki software.
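
Since the two tacks under Spam above amount to a concrete design decision, here is a rough sketch of the second, more permissive one: accept comments from anyone, but run each comment through an automated check before publishing. The looks_like_spam heuristic is a deliberately crude stand-in for a real service such as Akismet, and every name in the sketch is an illustrative assumption rather than an actual API.

```python
# Rough sketch of the second tack: accept comments from anyone, but run each
# comment through an automated spam check before publishing. The heuristic
# below is a crude stand-in for a real service such as Akismet; every name
# here is illustrative, not an actual API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Comment:
    author: str
    body: str

def looks_like_spam(comment: Comment) -> bool:
    """Toy heuristic: flag comments that are mostly links or contain well-known spam phrases."""
    body = comment.body.lower()
    link_count = body.count("http://") + body.count("https://")
    spam_phrases = ("buy now", "casino", "cheap pills")
    return link_count > 3 or any(p in body for p in spam_phrases)

def handle_comment(comment: Comment,
                   publish: Callable[[Comment], None],
                   hold_for_review: Callable[[Comment], None]) -> None:
    """Publish immediately unless the filter flags the comment; flagged
    comments go to a moderation queue rather than being silently dropped."""
    if looks_like_spam(comment):
        hold_for_review(comment)  # the human maintenance load concentrates here
    else:
        publish(comment)

# Example use with trivial stand-ins for the publish / moderation hooks:
if __name__ == "__main__":
    handle_comment(Comment("alice", "Nice result; how does Theorem 2 extend?"),
                   publish=lambda c: print("published:", c.author),
                   hold_for_review=lambda c: print("held for review:", c.author))
```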

Complexity Illness

One of the enduring stereotypes of academia is that people spend a great deal of intelligence, time, and effort finding complexity rather than simplicity. This is at least anecdotally true in my experience.

  1. Math++. Several people have found that adding useless math makes their paper more publishable, as evidenced by a reject-add-accept sequence.
  2. 8 Page Minimum. Who submitted a paper to ICML violating the 8 page minimum? Every author fears that the reviewers won’t take their work seriously unless the allowed length is fully used. The best minimum violation I know is Adam’s paper at SODA on generating random factored numbers, but this is deeply exceptional. It’s a fair bet that 90% of papers submitted are exactly at the page limit. We could imagine that this is because papers naturally take more space, but few people seem to be clamoring for more space.
  3. Journalong. Has anyone been asked to review a 100-page journal paper? I have. Journal papers can be nice, because they give an author the opportunity to write without sharp deadlines or page limit constraints, but this can and does go awry.

Complexity illness is a burden on the community. It means authors spend more time filling out papers, reviewers spend more time reviewing, and (most importantly) effort is misplaced on complex solutions over simple solutions, ultimately slowing (sometimes crippling) the long term impact of an academic community.

It’s difficult to imagine an author-driven solution to complexity illness, because the incentives are simply wrong. Reviewing based on solution value rather than complexity is a good way for individual people to reduce the problem. More generally, it would be great to have a system which explicitly encourages research without excessive complexity. The best example of this seems to be education: it’s the great decomplexifier. The process of teaching something greatly encourages teaching the simple solution, because that is what can be understood. This seems to be true both of traditional education and of less conventional means such as Wikipedia articles. I’m not sure exactly how to use this observation. Is there some way we can shift conference formats towards the process of creating teachable material?