Research in conferences

Conferences exist as part of the process of doing research. They serve many roles, including “announcing research”, “meeting people”, and providing a “point of reference”. Not all conferences are alike, so a basic question is: “to what extent do individual conferences attempt to aid research?” This question is very difficult to answer in any satisfying way. What we can do is compare details of the process across multiple conferences.

  1. Comments: The average quality of comments across conferences can vary dramatically. At one extreme, the tradition in CS theory conferences is to provide essentially zero feedback. At the other extreme, some conferences have a strong tradition of providing detailed constructive feedback. Detailed feedback can give authors significant guidance about how to improve their research. This is the most subjective entry.
  2. Blind: Virtually all conferences offer single blind review, where authors do not know reviewers. Some also provide double blind review, where reviewers do not know authors. The intention of double blind reviewing is to make the conference more approachable to first-time authors.
  3. Author Feedback: Author feedback is a mechanism by which authors can respond to reviewers (and, to some extent, complain). Providing an author feedback mechanism creates an opportunity for the worst reviewing errors to be corrected.
  4. Conditional Accepts: A conditional accept is some form of “we will accept this paper if conditions X, Y, and Z are met”. A conditional accept allows reviewers to demand additional experiments or other details they need in order to make a decision. This might speed up research significantly, because otherwise-good papers need not wait another year.
  5. Papers/PC member: How many papers can one person actually review well? When there is an incredible load of papers to review, it becomes very tempting to make snap decisions without a thorough attempt at understanding, and snap decisions are often wrong. The numbers in the table are based on the number of submissions, using the computer science standard of 3 reviews per paper.
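
To give a rough sense of the arithmetic behind the reviews-per-PC-member column in the table further down, here is a minimal sketch; the submission and committee counts are made up, not taken from any particular conference:

```python
# Minimal sketch: estimate the review load per program committee member.
# The counts below are hypothetical; the table uses the CS standard of
# 3 reviews per submitted paper.

def reviews_per_pc_member(num_submissions, num_pc_members, reviews_per_paper=3):
    """Total reviews needed, divided evenly across the program committee."""
    return num_submissions * reviews_per_paper / num_pc_members

# Example with made-up numbers: 600 submissions, 225 PC members.
print(reviews_per_pc_member(600, 225))  # -> 8.0 reviews per PC member
```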

Each of these “options” makes reviewing more difficult by requiring more reviewer work. There is a basic trade-off between the amount of time spent reviewing versus working on new research, as well as the speed of the review process itself. It is unclear where the optimal trade-off point lies, but the easy default is “not enough time spent reviewing”, because reviewing is generally an unrewarding job.

It seems reasonable to cross-reference these options with some measures of ‘conference impact’. For each of these measures, it’s important to realize that they are not goal metrics, so their meaning is unclear; the best that can be said is that it is not bad to do well on them. Also keep in mind that measurements of “impact” are inherently “trailing indicators”, which are not necessarily relevant to the way a conference is currently run.

  1. average citations: Citeseer has been used to estimate the average impact of a conference’s papers via the average number of citations per paper (a small sketch of this computation appears after the table).
  2. max citations: A number of people believe that the maximum number of citations given to any one paper is a strong indicator of the success of the conference. This can be measured by going to scholar.google.com and using ‘advanced search’ for the conference name.
| Conference | Comments | Blindness | Author feedback | Conditional accepts | Reviews/PC member | log(average citations per paper + 1) | Max citations |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ICML | Sometimes Helpful | Double | Yes | Yes | 8 | 2.12 | 1079 |
| AAAI | Sometimes Helpful | Double | Yes | No | 8 | 1.87 | 650 |
| COLT | Sometimes Helpful | Single | No | No | 15? | 1.49 | 710 |
| NIPS | Sometimes Helpful/Sometimes False | Single | Yes | No | 113 (*) | 1.06 | 891 |
| CCC | Sometimes Helpful | Single | No | No | 24 | 1.25 | 142 |
| STOC | Not Helpful | Single | No | No | 41 | 1.69 | 611 |
| SODA | Not Helpful | Single | No | No | 56 | 1.51 | 175 |

(*) To some extent this is a labeling problem. NIPS has an organized process of finding reviewers very similar to ICML. They are simply not called PC members.
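
For concreteness, here is a minimal sketch of how the last two columns can be computed from a list of per-paper citation counts. The counts below are made up, and treating the log as a natural log is an assumption:

```python
import math

# Hypothetical per-paper citation counts for one conference's proceedings
# (e.g., hand-collected from Citeseer or Google Scholar).
citation_counts = [0, 2, 3, 5, 7, 11, 19, 42]

average = sum(citation_counts) / len(citation_counts)

# "log(average citations per paper + 1)" column; natural log assumed.
log_avg_plus_one = math.log(average + 1)

# "max citations" column: the single most-cited paper.
max_citations = max(citation_counts)

print(f"log(avg+1) = {log_avg_plus_one:.2f}, max = {max_citations}")
```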

Keep in mind that the above is a very incomplete list (it only includes the conferences that I interacted with) and feel free to add details in the comments.

8 Replies to “Research in conferences”

  1. I can add one (same columns as the table above):

     | Conference | Comments | Blindness | Author feedback | Conditional accepts | Reviews/PC member | log(average citations per paper + 1) | Max citations |
     | --- | --- | --- | --- | --- | --- | --- | --- |
     | ACL | Sometimes Helpful | Double | No | No | 8ish | 1.44 | 441 |

  2. I’m curious about the “Sometimes False” entry for NIPS’s comments. Why do they get the dubious distinction of providing the worst feedback?

  3. While number of citations is a good way to compare impact within a community, I think it fails when you are comparing two communities. A smaller community which publishes fewer papers will have fewer high-citation papers in any given period of time.

    Also, citations usually move to the journal versions of the papers when they are published. So the varying propensity to publish journal versions across communities will also make the comparison fail. Of course, one could do an analysis that counts the total citations of papers first published in a given conference, but that’d probably require writing a small program (a rough sketch of such a program appears after these replies). Or does Google Scholar already do that?

  4. Keep in mind this entry is the most subjective as it just represents a small number of samples.

    My limited experience with NIPS reviewers suggests a failure rate near the Byzantine generals limit of 1/3. I am counting only reviews where the reviewers are simply wrong on basic facts.

    Keep in mind also that NIPS is trying to do something about it—they did introduce author feedback this year.

  5. A comment on the conference citations: Google Scholar is somewhat tricky with names; if you search for STOC or “theory of computing”, you get different results. In particular, the latter query returns a notable paper cited 1296 > 611 times:

    The complexity of theorem-proving procedures
    SA Cook – Proceedings of the third annual ACM symposium on Theory of computing, 1971 – portal.acm.org
    Cited by 1296

    Also, the above queries might not capture conference papers that have already appeared in journals.
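
As a rough illustration of the “small program” suggested in reply 3 (summing the citations of papers first published at each conference), here is a minimal sketch. It assumes a hand-collected file named citations.csv with columns conference, title, citations; the file name and columns are hypothetical, since Google Scholar offers no official bulk API for this.

```python
import csv
from collections import defaultdict

# Sketch of the "small program" from reply 3: sum the citation counts of
# papers by the conference where they first appeared. Assumes a
# hand-collected file citations.csv with columns:
#   conference,title,citations   (hypothetical format)

totals = defaultdict(int)
with open("citations.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["conference"]] += int(row["citations"])

# Print conferences in decreasing order of total citations.
for conference, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{conference}: {total} total citations")
```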
