Why COLT?

By Shie and Nati

Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. We would like to emphasize that we both respect and are active in ICML, as authors and as area chairs, and we are certainly not arguing that ICML is a bad place for your papers. For many papers, ICML is the best venue. But for many theory papers, COLT is a better and more appropriate place.

Why should you submit to COLT?

By and large, theory papers go to COLT. This is the tradition of the field: it is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. COLT is more focused than ICML, with a single-track session. Unlike at ICML, the norm at COLT is for people to sit through most sessions and hear most of the talks presented. There is also often a lively discussion following paper presentations. If you want theory people to know of your work, you should submit to COLT.

Additionally, this year COLT and ICML are tightly co-located, with joint plenary sessions (i.e., some COLT papers will be presented in a plenary session to the entire combined COLT/ICML audience, as will some ICML papers), and many other opportunities for exposure to the wider ICML audience. And so, by submitting to COLT, you have the potential to reach both the captive theory audience at COLT and the wider ML audience at ICML.

The advantages of sending to COLT:

  1. Rigorous review process.

    The COLT program committee is composed entirely of established, mostly fairly senior, researchers. Program committee members read and review papers themselves, or occasionally use a sub-reviewer whom they know personally and carefully select for the paper, but they still check the review and retain responsibility for it. Your paper will get reviewed by at least three program committee members, who will likely be experts on the topics covered by the paper. This is in contrast to ICML (and most other ML conferences), where area chairs (of similar seniority to the COLT program committee) only manage the review process, reviewers are assigned based on load-balancing considerations, and the primary reviewing is done by a very wide set of reviewers, frequently students, who are often not the most relevant experts.

    COLT reviews are typically detailed, and technical details are checked. The reviewing process is less rushed, and program committee members (and sub-reviewers, where appropriate) are expected to do a careful job on each and every paper.

    All papers are then discussed by the program committee, and there is generally significant and meaningful discussion of each paper. This also means the COLT reviewing process is far from having a “single point of failure,” as each paper will be carefully considered and argued for by multiple (senior) program committee members. We believe this yields a more consistently high-quality program, with much less randomness in the paper selection process, which in turn translates into high respect for accepted COLT papers.

  2. COLT is not double blind, but also not exactly single blind. Program committee members have access to the author identities (as do area chairs at ICML), as this is essential for selecting sub-reviewers. However, the author names do not appear on the papers, both to reduce the effect of first impressions and to allow program committee members to use reviewers who are truly blind to the authors’ identities.

    It should be noted that the COLT anonymization guidelines are a bit more relaxed, which we hope makes it easier to create an anonymized version for conference submission (authors are still allowed, and even encouraged, to post their papers online, with their names on them of course).

  3. COLT does not have a dedicated rebuttal phase. Frankly, with the higher-quality, less random reviews, we feel it is not needed, and the hassle to authors and program committee members is not worth it. However, the tradition at COLT, which we plan to follow, is to contact authors as needed during the review and discussion process to ask for clarification on issues that come up during review. In particular, if a concern is raised about the soundness or another technical aspect of a paper, the authors will be contacted to give them a chance to set things straight. But no, there is no generic author response where authors can argue and plead for acceptance.

6 Replies to “Why COLT?”

  1. I agree with several points that Shie & Nati make, but disagree with some others on the COLT/ICML overlap.

    Most pure learning theory papers do go to COLT. At a finer-grained level of distinction, though, there are sub-areas of theory in which most papers go to ICML. I tried to delineate my best understanding of this in the Why ICML? post. The core advice of sending a paper where it will be most appreciated is sound and covers most papers. For the papers where this is unclear, I believe it is appropriate to support the conference (and conference mechanism) that you prefer. ICML is also well past the point of being a conference where learning theory people ignore the learning theory at ICML, as typically something like half of COLT attendees also come to ICML.

    With respect to reviewing quality, I expect ICML’s to be superior. Expectations and beliefs differ along with tastes for quality, though, so I don’t expect agreement on this. Instead, the best we can do is compare the reviewing processes, which do differ, and let authors decide.

    It’s easiest to think of area chairs as mini program chairs. The two ICML area chairs for an ICML paper will each choose an appropriate program committee member, with the option to bring in people not on the program committee as deemed desirable. For learning theory area chairs, the people they choose are likely to be quite similar to COLT PC members or the sub-reviewers they choose. This is particularly true for the several area chairs who are also COLT PC members this year 🙂

    Given that the reviewers are similar, it should be unsurprising that the technical depth of reviews will also be similar. As a PC member at many past ICMLs, I carefully checked many proofs for correctness, and have some secret pride in those papers that passed. As an area chair, I oversaw careful checking of technical details and backstopped this process when it didn’t work properly. And as Program Chair now, I will certainly encourage careful checking. Fundamentally, I do believe in the use and value of theory.

    There is also no practical difference in anonymization guidelines. We wrote out a detailed double submission guide which makes this clear.

    The reason I expect ICML reviewing to be higher quality is the reviewing process, which is refined in several ways.

    First, the load on individual PC members is significantly lower, with perhaps half as many papers per PC member. This means each PC member can afford to spend more time per paper, which is essential for careful checking of details. I have seen plenty of times when theory reviewers did not check details carefully, because they simply didn’t have time. More insidious, perhaps, is when people who know they have a load of papers to go through are busy looking for an excuse not to go through _this_ paper in detail. I have observed a number of cases where well-respected people (including theory people) accidentally dismissed a paper they would not have dismissed upon further reflection and understanding.

    Second, we have carefully designed the reviewing process so that it is diverse. Each of the 3 primary reviewers for a paper is chosen through a different mechanism. The decision to do this was influenced by an accidental natural experiment that I performed: the bounds tutorial was simultaneously unanimously accepted (by 3 reviewers) and unanimously rejected (by 3 reviewers) at JMLR, due to an error where the paper was accidentally (and unknowingly) assigned to two editors. The key lesson here is that who chooses the reviewers makes an enormous difference. The remedy is diversity, which the ICML mechanism explicitly encourages through a combination of recommendations, bidding, and independent assignment by area chairs. The net result (we hope) is a less biased and higher-quality process. It is unclear to me whether this point of failure is addressed in the COLT process.

    The 1.5-blind anonymization process in COLT is innovative, and definitely a step in the right direction. It does not go as far as ICML, because for the very substantial fraction of reviews done directly by the program committee, it is not double blind.

    The author response phase at ICML is often frustrating for authors, because what they say does not make a difference in the final decision. And that will be true this year as well. Nevertheless, we believe it is fundamentally healthy for the community in several subtle ways.
    (a) A reviewer who knows an author response is coming is somewhat more careful to include (for example) the citation to the paper that they think this paper is just replicating. And where they aren’t careful, the author response makes this quite clear to an area chair. And where they are mistaken, this becomes extremely clear to the area chair.
    (b) The nature of people is that once they form an opinion, it’s hard for them to change it quickly. But good people do change their opinions as they see more evidence. So, even when a reviewer should change their mind and does not, it’s often the case that they will reconsider at a later date.
    (c) In my experience, quite a bit of paper writing is figuring out how to debug reviewers, who misconceive in a surprising number of ways. The author response process provides a further significantly useful mechanism for doing this. In the nightmare version of this, not having an author response would have made diagnosing the problem impossible.

    One last point is important: tastes really do differ. A paper on VC dimension/covering numbers/Rademacher complexity/Littlestone dimension/?? probably should go to COLT, because that is where it will be most appreciated. On the other hand, I believe we screwed up by submitting the ECT paper to COLT instead of ICML at the time. COLT reviewers were clear that they were just not interested in the question of robust log-time multiclass classification, nor in the method of analysis. It’s hard for me to believe the same would be true at ICML, simply because multiclass classification is fairly common in practice if not in theory, as is waiting around for a long time for your multiclass classifier. Send your paper where it will be appreciated.

  2. “COLT is not double blind, but also not exactly single blind. Program committee members have access to the author identities (as do area chairs at ICML), as this is essential for selecting sub-reviewers. However, the author names do not appear on the papers, both to reduce the effect of first impressions and to allow program committee members to use reviewers who are truly blind to the authors’ identities.”

    There seem to be no instructions of this nature on the current COLT website. Can you point us to the exact instructions about this? Last year’s COLT did not have this rule, IIRC.

  3. “the primary reviewing is done by a very wide set of reviewers, frequently students, who are often not the most relevant experts”

    I never rely on students for ICML reviews. I sometimes involve them in the review process as a learning experience, but I always read the paper myself and write the review myself. I think this should be the norm. Do we have any way of measuring what fraction of papers are being reviewed primarily by students?

    1. Students review ICML papers both as sub-reviewers and as reviewers. It should not be too difficult to check what percentage of the PC (i.e., of primary reviewers) at, e.g., the last ICML were students. I’d be curious about this statistic.
