Reviewing Horror Stories

Essentially everyone who writes research papers suffers rejections. They always sting immediately, but upon further reflection many of these rejections come to seem reasonable. Maybe the equations had too many typos, or maybe the topic just isn't as important as originally thought. A few rejections never come to seem acceptable, and these form the basis of reviewing horror stories, great material for conversations. I've decided to share three of mine, now all safely a bit distant in the past.

  1. Prediction Theory for Classification Tutorial. This is a tutorial about tight sample complexity bounds for classification that I submitted to JMLR. The first decision I heard was a reject which appeared quite unjust to me—for example one of the reviewers appeared to claim that all the content was in standard statistics books. Upon further inquiry, several citations were given, none of which actually covered the content. Later, I was shocked to hear the paper was accepted. Apparently, the paper accidentally went to two different action editors, who each chose distinct reviewers.
  2. Cover Tree. This paper was the first to give a data structure for nearest neighbor search in an arbitrary metric which both (a) took logarithmic query time under a dimensionality constraint and (b) always required space competitive with brute force nearest neighbor search. Previous papers had done (a) or (b), but not both, and achieving both appears key to a practical algorithm, which we backed up with experiments and code.

    The cover tree paper suffered a triple rejection, the last of which seems particularly poor to me. We submitted the draft to SODA and got back 3 reviews. The first was blank. The second was a paragraph of positive but otherwise uninformative text. The third was blank. The decision was reject. We were rather confused, so we emailed the program chair asking whether the decision was right and, if so, whether there was any more information we could get. We got back only a form letter providing no further information. The paper has since been accepted at ICML.

  3. Ranking Reduction. This paper shows that learning to predict which of a pair of items is better transfers strongly to optimizing a ranking loss, in contrast to (for example) simply predicting a score and ordering according to the predicted score. (A simple instantiation of such a reduction is sketched after this list.)

    We submitted this paper to NIPS, and it had the highest average review score of any learning theory paper. The decision was to reject. Based upon what we could make out from a statement by the program committee, the logic of this decision is most kindly describable as badly flawed: somehow they confused the algorithm, the problem, and the analysis into a mess. The paper was later accepted at COLT. (A bit of disclosure: I was on the program committee at NIPS that year, although obviously not involved in the decision on this paper.)
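
For concreteness, here is a minimal sketch of one way such a pairwise-to-ranking reduction can be instantiated: learn a classifier that predicts which of two items is better, then order items by how many pairwise comparisons they win. The `prefer` predicate below is a hypothetical stand-in for any learned classifier, and the sketch illustrates the general idea rather than the exact procedure or analysis of the paper.

```python
# Sketch: reducing ranking to pairwise classification.
# `prefer(a, b)` stands in for a learned classifier returning True when
# item a is predicted to be better than item b (hypothetical predictor).

def rank_by_pairwise_wins(items, prefer):
    """Order items by how many pairwise comparisons they win.

    One simple way to turn a pairwise preference predictor into a full
    ranking; an illustration of the idea, not the paper's exact procedure.
    """
    wins = {item: 0 for item in items}
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if prefer(a, b):
                wins[a] += 1
            else:
                wins[b] += 1
    # Items with more predicted wins are ranked earlier.
    return sorted(items, key=lambda item: wins[item], reverse=True)


if __name__ == "__main__":
    # Toy usage: "prefer" larger numbers, standing in for a learned model.
    print(rank_by_pairwise_wins([3, 1, 4, 5, 2], prefer=lambda a, b: a > b))
```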

In all cases where a rejection occurs, the default presumption is that the correct decision was made, because most of the time a good (or at least reasonable) decision was made. Consequently, it seems important to point out that there are objective signs that each of the above cases involved a poor decision.

  1. The tutorial paper is fairly widely cited (Google Scholar places it 8th amongst my papers), and I continue to find it useful material for a lecture when teaching a class.
  2. The cover tree is also fairly widely cited, and I know from various emails and download counts that it is used by several people. It also won an award from IBM. To this day, it seems odd that an algorithms paper was only publishable at a machine learning conference.
  3. It’s really too soon to tell with the ranking paper, but it was one of the few COLT papers invited to a journal special issue, and there has since been substantial additional work by Mehryar Mohri and Nir Ailon which broadens the claim to other ranking metrics and makes it more computationally tractable.

One of the reasons you hear for why a paper was rejected and then accepted is that the paper improved in the meantime. That’s often true, but in each of the above cases I don’t believe there were any substantial changes between submissions (and for the tutorial it was a perfect accidental experiment).

Normally, reviewing horror stories are the academic equivalent of war stories, but these have a bit more of a point. They have each informed my thinking about how reviewing should be done. Relating these stories might make that thinking a bit more understandable.

  1. Reviewer Choice. The tutorial case brings home the impact of how reviewers are chosen. If a paper is to have 3 reviews, it seems like a good idea to choose the reviewers in diverse ways, rather than one way. For example, at a conference, choosing one reviewer by bidding preference, one by the area chair, and one by another area chair or the program chair might reduce variance.
  2. Uniform Author Feedback. Author feedback was the standard at NIPS when the ranking paper was submitted, but in effect the standard was not followed for it, and it's easy to imagine this making a substantial difference given how badly flawed the basis of rejection was. It is also easy to imagine that author feedback might have made a difference in the tutorial rejection, since the reviewer was simply wrong (author feedback was not the standard then).
  3. Decision Basis. It's helpful to relate the basis of the program committee's decision, especially when it is not summarized in the reviews. The cover tree case was one of the things that led me to add summaries to some of the NIPS papers when I was on the program committee, and I am committed to doing the same for SODA papers I'm reviewing this year. Not having a summary saves the program committee the embarrassment of accidentally admitting mistakes, but it is badly disrespectful of the authors and generally promotes misunderstanding.
  4. Fast decisions are bad. It’s not possible to reliably make good decisions about technical matters quickly. I suspect that the time crunch of the NIPS program committee meeting was a contributing factor in the ranking paper case.

As anyone educated in machine learning or statistics understands, drawing 4 conclusions from 3 data points is problematic, so the above should be understood as suggestions subject to further evidence.

Taking the next step

At the last ICML, Tom Dietterich asked me to look into systems for commenting on papers. I’ve been slow getting to this, but it’s relevant now.

The essential observation is that we now have many tools for online collaboration, but they are not yet much used in academic research. If we can find the right way to use them, then perhaps great things might happen, with extra kudos to the first conference that manages to really create an online community. Various conferences have been poking at this. For example, UAI has set up a wiki, COLT has started using Joomla with some dynamic content, and AAAI has been setting up a "student blog". Similarly, Dinoj Surendran set up a TWiki for the Chicago Machine Learning Summer School, which was quite useful for coordinating events and other things.

I believe the most important thing is a willingness to experiment. A good place to start seems to be enhancing existing conference websites. For example, the ICML 2007 papers page is basically only useful via grep. A much more human-readable version of the page would organize the papers by topic. If the page were wiki-editable, this would almost happen automatically. Adding the ability for people to comment on the papers might make the website useful beyond the time of the conference itself.
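
As a toy illustration of how little machinery "organize the papers by topic" really needs, here is a sketch that buckets paper titles by keyword. The keyword map and example titles are invented for this illustration; a wiki-editable page would of course let people refine the grouping by hand.

```python
# Toy sketch: grouping paper titles by topic keyword, the kind of
# organization a wiki-editable papers page would make easy.
# The keyword map and example titles are invented for illustration.
from collections import defaultdict

TOPIC_KEYWORDS = {
    "ranking": "Ranking",
    "kernel": "Kernels",
    "cluster": "Clustering",
    "reinforcement": "Reinforcement Learning",
}

def group_by_topic(titles):
    """Bucket titles under the first matching topic keyword, else 'Other'."""
    groups = defaultdict(list)
    for title in titles:
        lowered = title.lower()
        topic = next((name for key, name in TOPIC_KEYWORDS.items()
                      if key in lowered), "Other")
        groups[topic].append(title)
    return dict(groups)

if __name__ == "__main__":
    example_titles = [
        "A Reduction from Ranking to Classification",  # invented title
        "Spectral Clustering with Side Information",   # invented title
        "Kernel Methods for Structured Outputs",       # invented title
    ]
    for topic, papers in group_by_topic(example_titles).items():
        print(topic, papers)
```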

There are several aspects of an experiment which seem intuitively important to me. I found the wikipatterns site a helpful distillation of many of these intuitions. Here are various concerns I have:

  1. Mandate. An official mandate is a must-have. Any such enhancement needs to be an official part of the website, or the hesitation to participate will probably be too much.
  2. Permissive Comments. Allowing anyone to comment on a website is somewhat scary to academics, because we are used to peer-reviewing papers before publishing. Nevertheless, it seems important not to strongly filter comments, because:
    1. The added (human) work of filtering is burdensome.
    2. The delay introduced acts as a barrier to participation.

    The policy I've followed on hunch.net is allowing comments from anyone exhibiting evidence of intelligence, i.e., filtering essentially only robots. This has worked as well as I hoped, and not as badly as I feared.

  3. Spam. Spam is a serious issue for dynamic websites, because it adds substantially to the maintenance load. There are basically two tacks to take here:
    1. Issue a userid/password to every conference registrant (and maybe others who request it), then allow comments only from them.
    2. Allow comments from anyone, but use automated filters. I've been using Akismet, but reCAPTCHA is also cool.

    I favor the second approach because it's more permissive and it makes participation easier, although it may increase the maintenance workload. (A minimal sketch of the automated-filter approach appears after this list.)

  4. Someone. Someone to shepherd the experiment is needed. I'm personally overloaded with other things at the moment (witness the slow post rate), so I don't have significant time to devote. Nevertheless, I'm sure there are many people in the community with as good a familiarity with the internet and web applications as myself.
  5. Software Choice. I don't have strong preferences for the precise choice of software, but some guidelines seem good.
    1. Open Source. I have a strong preference for open source solutions, of which there appear to be several reasonable choices. The reason is that open source applications leave you free (or at least freer) to switch and change things, which seems essential when experimenting.
    2. Large User Base. When going with an open source solution, something with a large user base is likely to have fewer rough edges.

    I have some preference for systems using flat files for data storage rather than a database, because they are easier to maintain or (if necessary) operate on directly. This is partly due to a bad experience I had with the TWiki setup for MLSS: basically, an attempt to transfer data to an upgraded MySQL installation failed because of schema issues I failed to resolve.

    I'm sure there are many people with more experience using wiki and comment systems; perhaps they can comment on exact software choices. WikiMatrix seems to provide frighteningly detailed comparisons of different wiki software.
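
Here is the minimal sketch of the automated-filter approach mentioned in the spam item above: accept comments from anyone, but screen each submission before publishing it. It uses Akismet's comment-check call roughly as documented; the exact endpoint format, field names, and the placeholder key and blog URL are assumptions to verify against the current Akismet documentation rather than a tested integration.

```python
# Sketch of the automated-filter approach: accept comments from anyone,
# but screen each submission with a spam service before publishing it.
# The Akismet endpoint format, field names, key, and blog URL below are
# assumptions to verify against the current Akismet documentation.
import requests  # third-party: pip install requests

AKISMET_KEY = "your-akismet-key"             # placeholder
BLOG_URL = "https://example-conference.org"  # placeholder

def looks_like_spam(comment_text, author_ip, user_agent):
    """Return True if the filter flags the comment as spam."""
    response = requests.post(
        f"https://{AKISMET_KEY}.rest.akismet.com/1.1/comment-check",
        data={
            "blog": BLOG_URL,
            "user_ip": author_ip,
            "user_agent": user_agent,
            "comment_type": "comment",
            "comment_content": comment_text,
        },
        timeout=10,
    )
    # The service answers with the literal string "true" for spam.
    return response.text.strip() == "true"

# Usage idea: hold flagged comments for human moderation rather than
# silently dropping them, so a false positive only costs a delay.
```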

The Science 2.0 article

I found the article about science using modern tools interesting, especially the part about ‘blogophobia’, which in my experience is often a substantial issue: many potential guest posters aren’t quite ready, because of the fear of a permanent public mistake, because it is particularly hard to write about the unknown (the essence of research), and because the system for public credit doesn’t yet really handle blog posts.

So far, science has been relatively resistant to discussing research on blogs. Some things need to change to get there. Public tolerance of the occasional mistake is essential, as is a willingness to cite (and credit) blogs as freely as papers.

I've often run into another reason for holding back myself: I don't want to overtalk my own research. Nevertheless, I'm slowly coming around to the opinion that I'm holding back too much: the real power of a blog in research is that it can be used to confer with many people, and that just makes research work better.

It Doesn’t Stop

I’ve enjoyed the Terminator movies and show. Neglecting the whacky aspects (time travel and associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)?

In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy for the writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard.

The United States has been experimenting with trying to stop research on stem cells. It hasn't worked very well: the net effect has been slowing research programs a bit and exporting some research to other countries. A less recent example was encryption technology, for which the United States generally did not encourage early public research and whose export it even discouraged by classifying it as a munition. This slowed the development of encryption tools, but I now routinely use tools such as ssh and GPG.

Although the strategy of preventing research may be doomed, it does bring up a Bill Joy type of question: should we actively choose to do research in a field where knowledge can be used to great harm? As an example, the Terminator series illustrates the dark fears of AI gone bad. Many researchers avoid this question by not thinking about it, but it is a substantial question of concern to society at large, bearing on whether or not society supports a direction of research.

My answer is "yes, we should do research". The reason is simple: I believe that good AI is the best chance for the survival of civilization. This might seem like a leap, but consider the following.

  1. Civilization is not stable. Anyone who believes otherwise should consider the world of 1908. Just a lifetime ago, humans could barely fly and computers were people. These radical changes in the abilities of a civilization are strong evidence against stability. Further evidence of instability comes from long-term, world-changing trends such as greenhouse gas accumulation and population graphs.
  2. Instability is bad in the long run. There are quite a number of doomsday-for-civilization scenarios kicking around: nuclear, plague, grey goo, black holes, etc. Many people find doomsday scenarios triggered by malevolence or accident to be unconvincing, since doomsday claims are so commonly debunked (remember the Y2K computer bug armageddon?). I am naturally skeptical myself, but it only takes one. Over the next 10,000 years, the odds of something going wrong seem fair.
  3. … for a closed system. There is one really good caveat to instability, which is redundancy. Perhaps if we Earthlings screw up, our descendants on Alpha Centauri can come pick up the pieces. The fundamental driver here is light speed latency: if it takes years for two groups to communicate, then it is unlikely they'll manage to coordinate (through malevolence or accident) a simultaneous doomsday. (A back-of-envelope calculation follows this list.)
  4. But real space travel requires AI. Getting from one star system to another with known physics turns out to be very hard. The best approaches I know of involve giant lasers and multiple solar sails, or fusion-powered rockets, taking many years. Merely getting there, of course, is not enough: we need to get there with a kernel of civilization, capable of growing anew in the new system. An adjacent star system may not have an earth-like planet, implying the need to support a space-based civilization. Since travel between star systems is so prohibitively difficult, a basic question is: how small can we make a kernel of civilization? Many science fiction writers and readers think of generation ships, which would necessarily be enormous to support the air, food, and water requirements of a self-sustaining human population. A much simpler and easier solution comes with AI. A good design might mass 10^3 kilograms or so and be designed to land on an asteroid and mine it, first creating a large solar cell array and then replicas to seed other asteroids. Eventually, hollowed-out asteroids could support human life if the requisite materials (oxygen, carbon, hydrogen, etc.) are found. The fundamental observation here is that intelligence and knowledge require very little mass.
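
To put rough numbers on the latency and travel-time points above, here is a back-of-envelope calculation. The distance to Alpha Centauri (about 4.37 light years) is well established; the 10%-of-light-speed cruise figure is purely an illustrative assumption.

```python
# Back-of-envelope numbers for the latency and travel-time argument.
# The distance to Alpha Centauri is about 4.37 light years; the 10%-of-c
# cruise speed is an illustrative assumption, not a claim about any
# particular propulsion scheme.
DISTANCE_LY = 4.37           # Earth to Alpha Centauri, in light years
CRUISE_FRACTION_OF_C = 0.10  # assumed cruise speed as a fraction of c

one_way_message_years = DISTANCE_LY         # a light-speed signal
round_trip_message_years = 2 * DISTANCE_LY  # ask a question, hear back
travel_years = DISTANCE_LY / CRUISE_FRACTION_OF_C

print(f"One-way message: {one_way_message_years:.1f} years")
print(f"Round-trip conversation: {round_trip_message_years:.1f} years")
print(f"Travel at 10% of light speed: {travel_years:.0f} years")
```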

I hope we manage to crack AI, opening the door to real space travel, so that civilization doesn’t stop.

Complexity Illness

One of the enduring stereotypes of academia is that people spend a great deal of intelligence, time, and effort finding complexity rather than simplicity. This is at least anecdotally true in my experience.

  1. Math++. Several people have found that adding useless math makes their paper more publishable, as evidenced by a reject-add-accept sequence.
  2. 8 page minimum. Who submitted a paper to ICML violating the 8 page minimum? Every author fears that the reviewers won't take their work seriously unless the allowed length is fully used. The best minimum violation I know of is Adam's paper at SODA on generating random factored numbers, but this is deeply exceptional. It's a fair bet that 90% of submitted papers are exactly at the page limit. We could imagine that this is because papers naturally take more space, but few people seem to be clamoring for more space.
  3. Journalong. Has anyone been asked to review a 100 page journal paper? I have. Journal papers can be nice, because they give an author the opportunity to write without sharp deadlines or page limit constraints, but this can and does go awry.

Complexity illness is a burden on the community. It means authors spend more time filling out papers, reviewers spend more time reviewing, and (most importantly) effort is misplaced on complex solutions over simple solutions, ultimately slowing (sometimes crippling) the long term impact of an academic community.

It's difficult to imagine an author-driven solution to complexity illness, because the incentives are simply wrong. Reviewing based on solution value rather than complexity is a good way for individual people to reduce the problem. More generally, it would be great to have a system which explicitly encourages research without excessive complexity. The best example of this seems to be education: it's the great decomplexifier. The process of teaching something greatly encourages teaching the simple solution, because that is what can be understood. This seems to be true both of traditional education and of less conventional means such as Wikipedia articles. I'm not sure exactly how to use this observation. Is there some way we can shift conference formats towards the process of creating teachable material?