One viewpoint on academia is that it is inherently adversarial: there are finite research dollars, positions, and students to work with, implying a zero-sum game between different participants. This is not a viewpoint that I want to promote, as I consider it flawed. However, I know several people believe strongly in this viewpoint, and I have found it to have substantial explanatory power.
For example:
- It explains why your paper was rejected with poorly reasoned arguments: the reviewer wasn't concerned with research quality, but rather with rejecting a competitor.
- It explains why professors rarely work together. The goal of a non-tenured professor (at least) is to get tenure, and a case for tenure comes from a portfolio of work that is indisputably yours.
- It explains why new research programs are not quickly adopted. Adopting a competitor's program is impossible if your career is based on the competitor being wrong.
Different academic groups subscribe to the adversarial viewpoint to different degrees. In my experience, NIPS is the worst: it is bad enough that the probability of a paper being accepted at NIPS is monotonically decreasing in its quality. This is more than just my personal experience over a number of years; it is corroborated by others who have told me the same. ICML (run by IMLS) used to have less of a problem, but as it has become more like NIPS over time, it has inherited this problem. COLT has not suffered from this problem as much in my experience, although it had other problems related to the focus being defined too narrowly. I do not have enough experience with UAI or KDD to comment there.
There are substantial flaws in the adversarial viewpoint.
- The adversarial viewpoint makes you stupid. Viewed adversarially, any idea has crippling disadvantages and no advantages, and contorting your viewpoint enough to make this appear true damages your ability to conduct research. In short, it promotes poor mental hygiene.
- Many activities become impossible. Doing research is in general extremely hard, so there are many instances where working with other people can allow you to do things which are otherwise impossible.
- The previous two disadvantages apply even more strongly to a community: good ideas are more likely to be missed, and change comes slowly, often with steps backward.
- At its most basic level, the assumption that research is zero-sum is flawed, because research is not done in a closed system. If society at large discovers that research is valuable, the budget increases.
Despite these disadvantages, there is a substantial advantage as well: you can materially protect and aid your career by rejecting papers, preventing grants, and generally discriminating against key people doing interesting but competitive work.
The adversarial viewpoint has validity in proportion to the number of people subscribing to it. For those of us who would like to deemphasize the adversarial viewpoint, what's unclear is: how?
One concrete thing is: use arXiv. For a long time, physicists have followed an arXiv-first philosophy, which I've come to respect. arXiv functions as a universal timestamp that decreases the power of an adversarial reviewer: you avoid giving away the power to muddy the trail of invention. I'm expecting to use arXiv for essentially all my past-but-unpublished and future papers.
It is plausible that limiting the scope of bidding, as Andrew McCallum suggested at the last ICML, and as is effectively implemented at this ICML, will help. The system of review at journals might also help for the same reason. In my experience as an author, if an anonymous reviewer wants to kill a paper, they usually succeed. Most area chairs or program chairs are more interested in avoiding conflict with the reviewer (whom they picked and may consider a friend) than in reading the paper to determine the illogic of the review (a difficult task that simply cannot be done for all papers). NIPS experimented with a reputation system for reviewers last year, but I'm unclear on how well it worked, as an author's score for a review and a reviewer's score for the paper may be deeply correlated, revealing little additional information.
Public discussion of research can help with this, because very poor logic simply doesn’t stand up under public scrutiny. While I hope to nudge people in this direction, it’s clear that most people aren’t yet comfortable with public discussion.
Hi John. Very nice post. I think this is an extremely important issue that seems to have largely been ignored by academia. I wonder if there is a good solution that allows for anonymity yet prevents unethical behavior (on both the reviewers' and submitters' sides)?
I still don't understand how the arXiv can take the place of peer-reviewed work. Yes, physicists use it, but in conjunction with submission to a "real" journal, one which presumably does the peer review needed to validate and authenticate the work. In CS, which doesn't have a strong journal culture, how is this supposed to work?
The usual argument for the arxiv is that you publish the work, and let it speak for itself. But that’s an unrealistic viewpoint in this day and age, when we are drowning in published works. What will really happen is that the arxiv pubs of *certain people* will get read carefully, or that the papers will be lost in the chatter.
I recently read an interesting analysis (that I can't now trace) to the effect that negative reviews are essentially short positions on the future value of an idea (I reject the paper = I think this idea has no value, nor will it have any in the future) that need not be unwound. Thus, if you commit today to sell me 100 barrels of oil at $40/barrel on Dec 31, 2009, you lose $1000 if on Dec 31 oil is worth $50/barrel. But if you reject my paper X today (assign it a value lower than the threshold for publication), your scientific reputation suffers no loss whatsoever if at a later time a lot of people assign it a much larger value. The conjecture then is that requiring people to unwind their scientific positions would remove the asymmetry that makes it very easy for established reviewers to kill new ideas. I would guess that people like Robin Hanson have given this idea a thorough treatment.
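Spelled out with the numbers above (just restating the analogy, not proposing a formal model):

$$\text{futures loss} = (P_{\text{spot}} - P_{\text{contract}}) \times Q = (\$50 - \$40) \times 100 = \$1000, \qquad \text{reviewer loss} = 0,$$

since the reviewing "short" is never marked to market: no one forces the reviewer to buy back a rejected idea at its eventual value.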
The idea isn't to replace peer review with arXiv, but rather to publish first on arXiv and later at a peer-reviewed venue.
I think it's dangerous to look at non-peer-reviewed work in the same way as peer-reviewed work. In the medical field, there is a large amount of pseudoscientific crap, especially once the media gets hold of it. However, the one thing that prevents this poor research from taking a real foothold is peer review.
The real issue is that with only 2-3 reviewers, a single misguided reviewer can prevent good research from getting published. Of course, the solution is to resubmit to another conference or journal. However, this only works if different conferences and journals have different reviewers. Realistically, journals and conferences should arrange their reviewer pools so that any reviewer reviews for only one conference or journal at a time. There is too much overlap in reviewers.
Seriously? How often do you review a paper you feel is “competitive” with your own research?
I feel I review a few that are interesting to me and a ton that are rather dull, but only for about one in a hundred do I have to step back and consider how it impacts my own agenda.
In response to the previous anon: while it is perhaps true that one only occasionally reviews a directly competitive piece of work, I would contend that one is more often faced with 'philosophically divergent' viewpoints, in which case reviewers may respond by trying to eliminate the 'misconception'. For instance, a reviewer may strongly believe that the right approach to AI is option 1, while the paper under review adopts option 2, which the reviewer dislikes for personal reasons, and hence the reviewer tries unnecessarily hard to reject the paper. This is not scientifically ethical behavior, but it seems to happen often enough.
I believe you have the ethical duty *reversed*. The point of reviewing is to identify the papers most likely to carry our field forward. It is our duty as reviewers, as PCs, and as editors to accept the work we view as the highest value, and to discourage work we feel is 'philosophically incorrect'. If you don't do this, why bother reviewing? This role is even more important in today's environment, where any work can be made easily available on the web: the only roles for reviewers are to identify the best material and to improve and correct the submissions they read with their comments. Identifying the best fundamentally involves our own philosophical viewpoints; we are trying to remove the nonsense.
I have wondered: if you hold editorial positions and are frequently asked to review papers, does that give you an edge? That's where I see potential for advantage that some might call potential for abuse. You are working in parallel with others on knotty problems; a paper arrives that has a novel idea. You do not "steal" it, but you incorporate it as a missing piece in a puzzle of your own. Is that taking advantage of "the competition", which does not have the early roll-out of the helpful idea?
Also, seeing a novel thing might steer your own work away from a less productive direction: you see next year's Pontiac, so you don't buy the Chev.
Is this real: do those who hold more editorial posts and review more frequently have an edge?
JL's idea of putting it on arXiv first would level that playing field, yielding a dual bonus: a timestamp AND a leveling factor, with any further publishing via peer-reviewed outlets serving as confirmation of the work's quality rather than its first general public exposure (which otherwise comes only after perhaps months of lead-time advantage for the limited pool of reviewers).
Any thoughts? Does this bolster JL's original point? If it's "published" on arXiv, is that cause for a peer reviewer to reject it, since it's already published and the proposal amounts to publishing twice? Does it make the refereed journals less a leading-edge outlet and more of a "quality-and-politics bottleneck," or a "stamp of quality" (or "spam filter")?
Is it demeaning to those who referee to term them human spam filters? Any simplification demeans the complexity of most questions. No demeaning was intended, but isn't that a large part of peer review?
This anonymous comment has turned me from a lurker into a poster. I think the previous poster has touched on an enormously important and interesting point that I would love to see discussed in detail somewhere more public than the comments section of JL's blog.
I started writing a quick response here, but it quickly got too large, so I’ve posted it on my blog:
Thoughts about the role of authors and reviewers in academic research
What is the role of a reviewer? I think the answer differs from the viewpoint of the author or the program committee.
The short story is that many people see the role of peer review quite differently, and some of the expectations are very incompatible.
The solution of course (in my humble opinion) is for authors to submit fewer, stronger papers and expect less help and more rigorous, impartial review. Reviewers should strive to be less partial and more helpful. The overall quality of submissions and published works will go up, and we’ll all be happy. After that, we should turn our efforts to world peace.
In response to anon (2008-12-30 05:48:47): it is certainly the case that one should reject work that is incorrect. However, unlike in a communist regime, where a 'party line' is of paramount importance, science is supposed to nurture alternate points of view until one is unmistakably proved superior, something that happens much, much later than the conference papers under discussion, and after a lot of collective deliberation. Often, one sees a stalemate where multiple competing viewpoints identify respective niches and don't clearly 'win'. So it is rather presumptuous to classify everything as though there is a unique correct/incorrect (which, in some disturbing cases, is decided simply by how fashionable a community currently considers viewpoint X)!
IMHO, when reviewers have knee-jerk responses to anything that differs from their party line, they are missing this point about science. Unfortunately, given the composition of PCs in many conferences and their implicit biases and groupthink, this is exactly what sometimes happens. Is that a desirable thing?
In this sense, it is perhaps better to have the arXiv model of dissemination. Apart from the minor inconvenience of clutter, I find it hard to see a strong case for suppressing dissenting viewpoints – and for people who worry about clutter, it is not that hard to form mental spam filters!
I'll add this to your comment: as far as I know, no one has made a serious case that one approach is flawless.
It might be useful to note that the arXiv isn't completely without peer review. The arXiv is a community like any other, and as you get noticed for your contributions, you will develop a network of friends who will give you feedback. These are people with an implicit desire to make your work the best it can be. Reducing the problem to "this particular paper might have merit that will get missed in the chatter of the arXiv" is naive.
I’d like to point out that mathematicians also use arXiv extensively. Here’s how we do it: we work on a paper until we believe it is complete and correct, then submit it to the arXiv and to a journal simultaneously. It is not considered a publication but is a kind of ‘hard’ preprint or working paper. The arXiv has at least 4 benefits:
1) It establishes a timestamp. This works beautifully, and there is far less controversy over 'stealing' ideas and precedence: everyone can see who had what when. It makes the whole review process far less political and adversarial. It is impossible to hide or delay someone else's result, as sometimes happens in, e.g., the life sciences.
2) It ensures open access and is well indexed by search engines.
3) It makes the work available to everyone to build on while the review process grinds on (though one assumes there will be some errors and checks the results used).
4) One can easily subscribe to RSS feeds in one's areas of interest and thus see the breadth of new results as they are posted.
Thus we view it not as a replacement for peer review but as a complement that improves the publication process. The arXiv is for dissemination; the reviewed publication is a stamp of correctness and significance. For conference-focused fields like ML, I would imagine that the process would be essentially the same: post the paper on the arXiv the same day it is submitted to the conference. I would not consider it a publication (no review), so it shouldn't disqualify the paper from conference or other submission.
A (minor) problem with this idea (which is otherwise practiced in some of the theory conferences) is that many conferences are moving towards double-blind review, and posting a version on the arXiv would in general violate the spirit of double-blind clauses (the authors shall make no attempt to reveal their identity, yadda yadda).
Algorithms conferences are not double blind, so this is not an issue. But I’ve had this problem with other conferences: even placing the paper on my website is a no-go.
Any time two or more people get together to make a decision, the result is (by definition) politics. Here's how the Wikipedia:Politics article begins (no, I didn't edit it): "Politics is the process by which groups of people make decisions."
How does a whole field's research direction change? Through "biased" reviewing. No one "decided" that natural language processing should be statistical rather than logical. I'm sure that if I were still submitting papers like I did in the 1980s, I'd be experiencing a lot of "bias" against them. I heard a lot of grumbling back in the day about how hard it was to get stats papers accepted, and now I hear grumbling about just about any other kind of paper being hard to get accepted.
This sounds like a conspiracy theory against editors. Based on my direct and indirect experience with this job, I think editors are happy about the good papers and sorry about the (many) poor ones, but you would be surprised how rare it is that you handle a paper directly relevant to your own research. The majority of papers are marginally relevant, if relevant at all. The field we are working in is just huge.
I wonder if reviews being political is a real (and recurring) problem. I think many criteria (soundness, originality, clarity, ...) are objective by their nature. The only one that comes to mind with an inherently subjective character is significance. I decide about significance by thinking about two things: Was there something that we learned from this paper? Is the work going to inspire other people (not me or my friends, but other researchers working in the field)?
Further, if you are an editor, area chair, etc., it is your job to separate the objective and subjective elements in a review. In a few cases I have certainly seen reviews that were based on preconceptions (these typically appealed to common wisdom "as we know it").
I think it is very important to educate (not only new) reviewers about their job and give them feedback. This improves one part of the process, but will not eliminate all the bias. The arXiv model is definitely good for taking care of the rest, e.g., securing a timestamp.
Hi John, very interesting post. Actually, it reminded me of some thoughts I had a while ago. I think the basic fallacy is that we apply concepts from economics to science. For example, scientific output is usually measured in terms of citation count, which basically measures how many customers have bought your product ("customers" being other scientists and "product" being your papers). The problem is that science is not the same as an economy. For example, before others can cite your paper, you have to publish it in a journal, which means that your competitors have a say in whether your customers will see your product at all. I think that by becoming more aware of these differences, we might also get ideas for how to change the environment.
Anyway, I put my thoughts down in this post.
Most certainly not a zero-sum game.
Reference: Submit your papers where they are likely to be accepted, http://www.daniel-lemire.com/blog/archives/2008/08/12/submit-your-papers-where-they-are-likely-to-be-accepted/