What to do with an unreasonable conditional accept

Last year about this time, we received a conditional accept for the Searn paper, which asked us to reference a paper that was not reasonable to cite, because strictly more relevant work by the same authors was already cited. We wrote a response explaining this and did not cite it in the final draft, which gave the SPC an excuse to reject the paper, leading to unhappiness for all.

Later, Sanjoy Dasgupta suggested that an alternative was to talk to the PC chair instead, as soon as you see that a conditional accept is unreasonable. William Cohen and I spoke about this by email, the relevant bit of which is:

If an SPC asks for a revision that is inappropriate, the correct
action is to contact the chairs as soon as the decision is made,
clearly explaining what the problem is, so we can decide whether or
not to over-rule the SPC. As you say, this is extra work for us
chairs, but that’s part of the job, and we’re willing to do that sort
of work to improve the overall quality of the reviewing process and
the conference.

In short, Sanjoy was right.

At the time, I operated under the belief that the PC chair’s job was simply too demanding to bother with something like this, but that was wrong. William invited me to post this, and I hope we all learn a little bit from it. Obviously, this should only be done when there is a real flaw in the conditions attached to a conditional accept.

11 Replies to “What to do with an unreasonable conditional accept”

  1. Interesting! But this year there is no conditional accept at ICML. Does the same argument of contacting the chair hold for a rejected paper? Can the SPC decision be reverted now? And in general, can the PC chair revert a decision made by the SPC (let’s say, even before the final decision is made)?

  2. I believe the short answer is “no”. Appealing to a PC chair to revert a decision on a paper opens a Pandora’s box where the PC chair is overwhelmed by emails and appeals. An appeal to a PC chair should only be made on the grounds of a procedural error. Asking for revisions which aren’t sensible is a procedural error. A final decision of “reject” is not a procedural error.

  3. I’m rather disenchanted of late with our reviewing process. I’ll ignore my own experiences with reviews and concentrate on the ones John has mentioned from his: the appalling publication delays of the Searn and Cover Tree papers. While papers of marginal utility and little impact are constantly accepted based on some combination of reviewers’ poor understanding of the existing literature and clever writing, these rejected papers are surprisingly well known and regarded. They manage to be high impact while still being unpublished, which strongly suggests a problem to me.

    At a high level, I think the peer review process is rather broken. (In more ways than simply poor paper selection.) Perhaps it is up to us to decide what can be done to replace or improve it. I read a quote recently (the reference escapes me) that academia is the “original open source community”; are there lessons for our publication process from the modern open source community?

    John: It would be informative to me if you could discuss the relevant material. To be fair, as a reviewer I’m often frustrated by the poor coverage of background and related work by authors, and many times I would like to see papers remain unpublished until they properly address the relevant literature. (It’s particularly galling in areas where machine learning and robotics meet.) Given what I know about SEARN, I find it hard to believe there is a compelling issue here, and I’d like to understand what the conditional acceptance was calling for in terms of references.

  4. I think the original post on Searn (linked to above) gives a reasonable description of things.

    The larger issue of brokenness in the peer review process is confounding. Part of the problem is that we never have the full picture: we are either the authors or the reviewers, but never both for the same paper, and we never communicate well enough to assemble that picture.

  5. I think it’s well worth discussing what can be done to improve the review process, which is clearly imperfect. But “broken” is a strong word. Last year there were 500+ papers submitted to ICML. While looking over 500+ decisions and 2000+ reviews, it was obvious that there were some reviewers that weren’t taking their job seriously, and some more that were just a few transistors short of a CPU. But the vast majority of them did a pretty solid job, and of the 500+ decisions I think the vast majority were reasonable – in the sense that you can imagine a reasonable person agreeing with the decision, even if you don’t actually agree yourself.

    That said, submissions that overlap with prior work are a little difficult to handle currently, and my guess is that this is mostly a growing pain. In the old days, papers from a non-archival conference or workshop could be republished – but that distinction doesn’t make a lot of sense anymore. Likewise, it used to be ok, sort of, to publish the same material in community B even if it had already been presented to community A – but again, that distinction has become fuzzier. I mean, it’s not as if those B guys are going to actually have to pay for the proceedings of the 17th conference on A, are they? As the old justifications become less valid, it’s harder to apply them consistently.

  6. This is not limited to ML. The problem is probably just as bad in the NLP community. One statistic that someone (who shall remain anonymous) pointed out to me bears serious consideration, I think. At ACL this year we got somewhere around 650 submissions. Each submission got >= 3 reviews and each reviewer covered, on average, 4-5 papers… say 5 to be conservative. This means about 400 reviewers (650 × 3 / 5 ≈ 390) were required to cover the conference. Maybe my local neighborhood of friends is too small, but I find it hard to believe that there actually are 400 qualified reviewers out there in this field, all of whom were generous enough to volunteer for the task.

    All I’m saying is that it may not be enough to simply say “reviewers need to work harder.” It may simply be impossible to accumulate sufficiently many reviewers to cover all the submitted papers, especially if submission counts keep increasing.

  7. My “broken” comment is in the spirit of Hal’s observation, and not at all a comment on ICML. I think there are just a huge number of papers to review, review results are binary (even if a paper has great material but does a terrible job of understanding, relating to, or explaining previous work, we can only accept or reject), and the material takes a long time to absorb. Is there a better way (John has discussed wiki-like ideas in the past) that takes better advantage of our community’s strengths and technology?

  8. This seems like a direct result of publish-or-perish in academia, which probably leads to a quantity-before-quality mentality. Are there other ways to measure success in academia than the number of papers published in reputable conferences/journals?

  9. Number and spread of citations. This might take a long time, though. And it’s much easier to pile up citations in a burst if one is working in a “hot topic”, even if it loses its significance in a few years.

  10. Perhaps the thing that’s broken is the over-publishing behavior. It’s totally unreasonable and unsustainable to expect every graduate student to have one or two publications a year. It’s frustrating to watch competitions between PhD advisors battling it out over who will have more of their students presenting at a conference. It’s frustrating having to censor the “need” of thousands to build up their CVs with dozens of papers for the bean counters.

    For that and similar reasons, I’ve personally burned out on reviewing, and on academic publishing too. It’s a waste of my time. I can perhaps manage one good publication in 18 months, and perhaps two tech reports that I wouldn’t really want to endorse. The signal is lost in the noise and cacophony. Everyone’s writing, nobody’s reading.

    The goal of any research community should be making progress, and not measuring spam.

    Sorry about the rant.

  11. Thinking a bit more about my previous comment, I would propose the following approach to prevent abuse of the peer-review system. Each and every submission would be published on the web. Reviewers could then endorse good papers and distinguish them, perhaps inviting the authors to give a longer presentation, or soliciting commentary on those papers. Moreover, reviewers would not be required to review papers they do not like (even if they might have liked the title).
