Code submission should be encouraged but not compulsory

ICML, ICLR, and NeurIPS are all considering or experimenting with code and data submission as a part of the review or publication process, with the hypothesis that it aids reproducibility of results. Reproducibility has been a rising concern, with discussions in papers, workshops, and invited talks.

The fundamental driver is of course lack of reproducibility. Lack of reproducibility is an inherently serious and valid concern for any kind of publishing process where people rely on prior work to compare against and build upon. Lack of reproducibility (due to random initialization, for example) was one of the things that led to a period of unpopularity for neural networks when I was a graduate student. That unpopularity has proved nonviable (Surprise! Learning circuits is important!), but the reproducibility issue remains. Furthermore, there is always an opportunity and latent suspicion that authors ‘cheat’ in reporting results, a suspicion which could be allayed by a reproducible approach.

With the above said, I think the reproducibility proponents should understand that reproducibility is a value, but not an absolute value. As an example, I believe it’s quite worthwhile for the community to see AlphaGoZero published even if the results are not easily reproduced. There is real value for the community in showing what is possible, irrespective of whether or not anyone else can reproduce the same mastery of Go, and there is real value in having an algorithm like this be public even if the code is not. Treating reproducibility as an absolute value could exclude results like this.

An essential understanding here is that machine learning is (at least) 3 different kinds of research.

  • Algorithms: The goal is coming up with a better algorithm for solving some category of learning problems. This is the most typical viewpoint at these conferences.
  • Theory: The goal is generally understanding what is possible or not possible for learning algorithms. Although these papers may have algorithms, they are often not the point and demanding an implementation of them is a waste of time for author, reviewer, and reader.
  • Applications: The goal is solving some particular task. AlphaGoZero is a reasonable example of this—it was about beating the world champion in Go, with algorithmic development in service of that. For this kind of research, perfect programmatic reproducibility may be infeasible because the computation is too extreme, the data is proprietary, etc…

Using a one-size-fits-all approach that demands every paper come with a programmatically reproducible implementation is a mistake that would create a division and reduce our community. Keeping this three-fold focus fundamentally enriches the community, both literally and ontologically.

Another view here is provided by considering the argument at a wider scope. Would you prefer that health regulations/treatments be based on all scientific studies, including those where data is not fully released to the public (i.e., almost all of them, for privacy reasons)? Or would you prefer that health regulations/treatments be based only on studies whose data is fully released to the public? Preferring the latter is equivalent to ignoring most scientific studies in making decisions.

The alternative to a compulsory approach is to take an additive view. The additive approach has a good track record amongst reviewing process changes.

  • When I was a graduate student, papers were not double blind. The community switched to double blind because it adds an opportunity for reviewers to review fairly and it gives authors a chance to have their work reviewed fairly whether they are junior or senior. As a community we also do not restrict posting on arxiv or talks about a paper before publication, because that would subtract from what authors can do. Double blind reviewing could be divisive, but it is not when used in this fashion.
  • When I was a graduate student, there was also a hard limit on the number of pages in submissions. For theory papers this meant that proofs were not included. We changed the review process to allow (but not require) submission of an appendix which could optionally be used by reviewers. This again adds to the options available to authors/reviewers and is generally viewed as positive by everyone involved.

What can we add to the community in terms of reproducibility?

  1. Can reviewers do a better job of reviewing if they have access to the underlying code or data?
  2. Can authors benefit from releasing code?
  3. Can readers of a paper benefit from an accompanying code release?

The answer to each of these questions is a clear ‘yes’ if done right.

For reviewers, it’s important to not overburden them. They may lack the computational resources, platform, or personal time to do a full reproduction of results even if that is possible. Hence, we should view code (and data) submission in the same way as an appendix which reviewers may delve into and use if they so desire.

For authors, code release has two benefits—it provides an additional avenue for convincing reviewers who default to skepticism, and it makes followup work significantly more likely. My most cited paper was Isomap, which did indeed come with a code release. Of course, this is not possible or beneficial for authors in many cases. Maybe it’s a theory paper where the algorithm isn’t the point? Maybe the data or code can’t be fully released since it’s proprietary? There are a variety of reasons. From this viewpoint, we see that releasing code should be supported and encouraged but optional.

For readers, having code (and data) available obviously adds to the depth of value that a paper has. Not every reader will take advantage of that but some will and it enormously reduces the barrier to using a paper in many cases.

Let’s assume we do all of these additive and enabling things, which is about where Kamalika and Russ aimed the ICML policy this year.

Is there a need to go further towards compulsory code submission? I don’t yet see evidence that default-skeptical reviewers are incapable of weighing the value of reproducibility against other values in considering whether a paper should be published.

Should we do less than the additive and enabling things? I don’t see why—the additive approach provides pure improvements to the author/review/publish process. Not everyone is able to take advantage of this, but that seems like a poor reason to restrict others from taking advantage when they can.

One last thing to note is that this year’s code submission process is an experiment. We should all want program chairs to be able to experiment, because that is how improvements happen. We should do our best to work with such experiments, try to make a real assessment of success/failure, and expect adjustments for next year.

FAQ on ICML 2019 Code Submission Policy

ICML 2019 has an option for supplementary code submission that authors can use to provide additional evidence to bolster their experimental results. Since we have been getting a lot of questions about it, here is a set of Frequently Asked Questions for authors.

1. Is code submission mandatory?

No. Code submission is completely optional, and we anticipate that high quality papers whose results are judged by our reviewers to be credible will be accepted to ICML, even if code is not submitted.

2. Does submitted code need to be anonymized?

ICML is a double blind conference, and we expect authors to put in reasonable effort to anonymize the submitted code and institution. This means that author names and licenses that reveal the organization of the authors should be removed.

Please note that submitted code will not be made public — that is, only the reviewers, Area Chair and Senior Area Chair in charge will have access to it during the review period. If the paper gets accepted, we expect the authors to replace the submitted code with a non-anonymized version or a link to a public github repository.

3. Are anonymous github links allowed?

Yes. However, they have to be on a branch that will not be modified after the submission deadline. Please enter the github link in a standalone text file in a submitted zip file.

4. How will the submitted code be used for decision-making?

The submitted code will be used as additional evidence provided by the authors to add more credibility to their results. We anticipate that high quality papers whose results are judged by our reviewers to be credible will be accepted to ICML, even if code is not submitted. However, if something is unclear in the paper, then code, if submitted, will provide an extra chance to the authors to clarify the details. To encourage code submission, we will also provide increased visibility to papers that submit code.

5. If code is submitted, do you expect it to be published with the rest of the supplementary? Or, could it be withdrawn later?

We expect submitted code to be published with the rest of the supplementary. However, if the paper gets accepted, then the authors will get a chance to update the code before it is published by adding author names, licenses, etc.

6. Do you expect the code to be standalone? For example, what if it is part of a much bigger codebase?

We expect your code to be readable and helpful to reviewers in verifying the credibility of your results. It is possible to do this through code that is not standalone — for example, with proper documentation.

7. What about pseudocode instead of code? Does that count as code submission?

Yes, we will count detailed pseudocode as code submission as it is helpful to reviewers in validating your results.

8. Do you expect authors to submit data?

We understand that many of our authors work with highly sensitive datasets, and are not asking for private data submission. If the dataset used is publicly available, there is no need to provide it. If the dataset is private, then the authors can submit a toy or simulated dataset to illustrate how the code works.

9. Who has access to my code?

Only the reviewers, Area Chair and Senior Area Chair assigned to your paper will have access to your code. We will instruct reviewers, Area Chair and Senior Area Chair to keep the code submissions confidential (just like the paper submissions), and delete all code submissions from their machine at the end of the review cycle. Please note that code submission is also completely optional.

10. I would like to revise my code/add code during author feedback. Is this permitted?

Unfortunately, no. But please remember that code submission is entirely optional.

The detailed FAQ as well as other Author and Style instructions are available here.

Kamalika Chaudhuri and Ruslan Salakhutdinov
ICML 2019 Program Chairs

ICML 2019: Some Changes and Call for Papers

The ICML 2019 Conference will be held from June 10-15 in Long Beach, CA — about a month earlier than last year. To encourage reproducibility as well as high quality submissions, this year we have three major changes in place.

There is an abstract submission deadline on Jan 18, 2019. Only submissions with proper abstracts will be allowed to proceed to full paper submission, and placeholder abstracts will be removed. The full paper submission deadline is Jan 23, 2019.

This year, the author list at the paper submission deadline (Jan 23) is final. No changes will be permitted after this date for accepted papers.

Finally, to foster reproducibility, we highly encourage code submission with papers. Our submission form will have space for two optional supplementary files — a regular supplementary manuscript, and code. Reproducibility of results and easy accessibility of code will be taken into account in the decision-making process.

Our full Call for Papers is available here.

Kamalika Chaudhuri and Ruslan Salakhutdinov
ICML 2019 Program Chairs

When the bubble bursts…

Consider the following facts:

  1. NIPS submissions are up 50% this year to ~4800 papers.
  2. There is significant evidence that the process of reviewing papers in machine learning is creaking under several years of exponential growth.
  3. Public figures often overclaim the state of AI.
  4. Money rains from the sky on ambitious startups with a good story.
  5. Apparently, we now even have a fake conference website (https://nips.cc/ is the real one for NIPS).

We are clearly not in a steady-state situation. Is this a bubble or a revolution? The answer surely includes a bit of revolution—the fields of vision and speech recognition have been turned over by great empirical successes created by deep neural architectures, and more generally machine learning has found plentiful real-world uses.

At the same time, I find it hard to believe that we aren’t living in a bubble. There was an AI bubble in the 1980s (before my time), a tech bubble around 2000, and we seem to have a combined AI/tech bubble going on right now. This is great in some ways—many companies are handing out professional-sports-scale signing bonuses to researchers. It’s a little worrisome in other ways—can the field effectively handle the stress of the influx?

It’s always hard to say when and how a bubble bursts. It might happen today or in several years and it may be a coordinated failure or a series of uncoordinated failures.

As a field, we should consider the coordinated failure case a little bit. What fraction of the field is currently at companies, or in units at companies, which are very expensive without yet justifying that expense? It’s no longer a small fraction, so there is a chance of something traumatic for both the people and the field when/where there is a sudden cut-off. My experience is that cuts typically happen quite quickly when they come.

As an individual researcher, consider this an invitation to awareness and a small amount of caution. I’d like everyone to be fully aware that we are in a bit of a bubble right now and consider it in their decisions. Caution should not be overdone—I’d gladly repeat the experience of going to Yahoo! Research even knowing how it ended. There are two natural elements here:

  1. Where do you work as a researcher? The best place to be when a bubble bursts is on the sidelines.
    1. Is it in the middle of a costly venture? Companies are not good places for this in the long term, whether a startup or a business unit. Being a researcher at a place desperately trying to figure out how to make research valuable doesn’t sound pleasant.
    2. Is it in the middle of a clearly valuable venture? That could be a good place. If you are interested, we are hiring.
    3. Is it in academia? Academia has a real claim to stability over time, but at the same time opportunity may be lost. I’ve greatly enjoyed and benefited from the opportunity to work with highly capable colleagues on the most difficult problems. Assembling the capability to do that in an academic setting seems difficult since the typical maximum scale of research in academia is a professor+students.
  2. What do you work on as a researcher? Some approaches are more “bubbly” than others—they might look good, but do they really provide value?
    1. Are you working on intelligence imitation or intelligence creation? Intelligence creation ends up being more valuable in the long term.
    2. Are you solving synthetic or real-world problems? If you are solving real-world problems, you are almost certainly creating value. Synthetic problems can lead to real-world solutions, but the path is often fraught with unforeseen difficulties.
    3. Are you working on a solution to one problem or many problems? A wide applicability for foundational solutions clearly helps when a bubble bursts.

Researchers have a great ability to survive a bubble bursting—a built-up public record of their accomplishments. If you are in a good environment doing valuable things and that environment happens to implode one day, the strength of your publications is an immense aid in landing on your feet.