ICML, ICLR, and NeurIPS are all considering or experimenting with code and data submission as a part of the review or publication process, with the hypothesis that it aids reproducibility of results. Reproducibility has been a rising concern, with discussions in papers, workshops, and invited talks.
The fundamental driver is of course lack of reproducibility. Lack of reproducibility is an inherently serious and valid concern for any kind of publishing process where people rely on prior work to compare with and to do new things. Lack of reproducibility (due to random initialization, for example) was one of the things that led to a period of unpopularity for neural networks when I was a graduate student. That unpopularity has since proved untenable (Surprise! Learning circuits is important!), but the reproducibility issue remains. Furthermore, there is always both an opportunity for authors to ‘cheat’ in reporting results and a latent suspicion that some do, which a reproducible approach could allay.
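To make the random-initialization point concrete, here is a minimal sketch of the effect (a toy XOR network of my own construction, not anything from the post): unseeded runs start from different weights and typically end at different losses, while fixing the seed makes a run exactly repeatable.

```python
# Minimal sketch (toy example of my own, not from the post): a tiny XOR
# network trained from random initialization. Unseeded runs typically end
# at different losses; a fixed seed makes the run exactly repeatable.
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # fixed toy inputs
y = np.array([0., 1., 1., 0.])                           # XOR labels

def train(seed=None, hidden=4, steps=2000, lr=0.5):
    rng = np.random.default_rng(seed)
    # Random initialization -- the source of run-to-run variation.
    W1 = rng.normal(size=(2, hidden))
    W2 = rng.normal(size=hidden)
    for _ in range(steps):
        h = np.tanh(X @ W1)                  # hidden layer
        p = 1 / (1 + np.exp(-(h @ W2)))      # output probability
        d_out = p - y                        # gradient wrt pre-sigmoid output
        d_h = np.outer(d_out, W2) * (1 - h ** 2)
        W2 -= lr * h.T @ d_out / len(y)      # gradient descent updates
        W1 -= lr * X.T @ d_h / len(y)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()
    return round(loss, 4)

print(train(), train())    # unseeded: losses typically differ between runs
print(train(0), train(0))  # same seed: identical losses, i.e. reproducible
```

The same principle applies to real training pipelines, though there seeding alone rarely suffices (parallelism, hardware, and library versions all matter), which is part of why exact reproducibility can be genuinely hard.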
With the above said, I think the reproducibility proponents should understand that reproducibility is a value, but not an absolute value. As an example, I believe it’s quite worthwhile for the community to see AlphaGoZero published even if the results are not easily reproduced. There is real value for the community in showing what is possible irrespective of whether anyone else can reproduce a player with the same mastery of Go, and there is real value in having an algorithm like this be public even if the code is not. Treating reproducibility as an absolute value could exclude results like this.
An essential understanding here is that machine learning is (at least) 3 different kinds of research.
- Algorithms: The goal is coming up with a better algorithm for solving some category of learning problems. This is the most typical viewpoint at these conferences.
- Theory: The goal is generally understanding what is possible or not possible for learning algorithms. Although these papers may have algorithms, they are often not the point and demanding an implementation of them is a waste of time for author, reviewer, and reader.
- Applications: The goal is solving some particular task. AlphaGoZero is a reasonable example of this—it was about beating the world champion in Go, with algorithmic development in service of that. For this kind of research, perfect programmatic reproducibility may be infeasible because the computation is too extreme, the data is proprietary, etc…
Using a one-size-fits-all approach that demands every paper come with a programmatically reproducible implementation is a mistake that would divide and diminish our community. Keeping this three-fold focus fundamentally enriches the community, both in who participates and in what kinds of research it values.
Another view here is provided by considering the argument at a wider scope. Would you prefer that health regulations/treatments be based on all scientific studies, including those where the data is not fully released to the public (i.e., almost all of them, for privacy reasons)? Or would you prefer that health regulations/treatments be based only on studies whose data is fully released to the public? Preferring the latter is equivalent to ignoring most scientific studies when making decisions.
The alternative to a compulsory approach is to take an additive view. The additive approach has a good track record amongst reviewing process changes.
- When I was a graduate student, reviewing was not double-blind. The community switched to double-blind reviewing because it adds an opportunity for reviewers to review fairly and it gives authors a chance to have their work reviewed fairly, whether they are junior or senior. As a community, we also do not restrict posting on arXiv or giving talks about a paper before publication, because that would subtract from what authors can do. Double-blind reviewing could be divisive, but it is not when used in this fashion.
- When I was a graduate student, there was also a hard limit on the number of pages in submissions. For theory papers this meant that proofs were not included. We changed the review process to allow (but not require) submission of an appendix which could optionally be used by reviewers. This again adds to the options available to authors/reviewers and is generally viewed as positive by everyone involved.
What can we add to the community in terms of reproducibility?
- Can reviewers do a better job of reviewing if they have access to the underlying code or data?
- Can authors benefit from releasing code?
- Can readers of a paper benefit from an accompanying code release?
The answer to each of these questions is a clear ‘yes’ if done right.
For reviewers, it’s important to not overburden them. They may lack the computational resources, platform, or personal time to do a full reproduction of results even if that is possible. Hence, we should view code (and data) submission in the same way as an appendix which reviewers may delve into and use if they so desire.
For authors, code release has two benefits—it provides an additional avenue for convincing reviewers who default to skepticism, and it makes followup work significantly more likely. My most cited paper was Isomap, which did indeed come with a code release. Of course, code release is not possible or beneficial for authors in many cases. Maybe it’s a theory paper where the algorithm isn’t the point? Maybe the data or the code can’t be fully released because it’s proprietary? There are a variety of reasons. From this viewpoint, releasing code should be supported and encouraged but optional.
For readers, having code (and data) available obviously adds to the depth of value that a paper has. Not every reader will take advantage of that but some will and it enormously reduces the barrier to using a paper in many cases.
Let’s assume we do all of these additive and enabling things, which is about where Kamalika and Russ aimed the ICML policy this year.
Is there a need to go further, towards compulsory code submission? I don’t yet see evidence that default-skeptical reviewers aren’t capable of weighing the value of reproducibility against other values in considering whether a paper should be published.
Should we do less than the additive and enabling things? I don’t see why—the additive approach provides pure improvements to the author/review/publish process. Not everyone is able to take advantage of this, but that seems like a poor reason to restrict others from taking advantage when they can.
One last thing to note is that this year’s code submission process is an experiment. We should all want program chairs to be able to experiment, because that is how improvements happen. We should do our best to work with such experiments, try to make a real assessment of success/failure, and expect adjustments for next year.
I don’t think there’s such a thing as a purely “additive approach” when making changes to the review process.
No theorist is going to submit a theory paper without any proofs; that would omit the heart of the paper. Unfortunately, this causes theory papers to run into page-limit issues, where the page limits were put in place to limit reviewers’ workloads. Allowing proofs to be put into an appendix might seem to solve both issues, but it creates a real problem: the author is now effectively submitting two half-papers, *one of which (the proofs!) is not peer-reviewed*.
The same is true for artifacts. If the reviewers don’t have to review the artifacts, it’s important to be transparent about the fact that the artifacts are not peer-reviewed. And often that means the paper hasn’t been fully peer-reviewed either (where the artifacts are integral to it).
Top programming languages conferences now have separate artifact evaluation committees, which review the artifacts associated with papers. This is meant to address the concerns above but runs into several issues:
1. It duplicates efforts across two committees.
2. The AEC is not given the authority to reject a paper (it merely gives a badge to artifacts that meet certain standards) and hence doesn’t address the problem of the paper not being fully peer-reviewed.
3. The AEC tends not to have the expertise that the main committee has, nor access to external reviewers.
It’s worth noting that in PL, artifacts are often mechanized proofs of claims in the paper, so the questions of how we deal with proof appendices and artifacts becomes the same question.
Thanks.
One of the key differences between PL and ML at the moment is that the ML conferences are under severe reviewer stress due to the field’s recent exponential growth. Given that, it’s not easy to do anything (like separate artifact reviews) which requires more reviewer time.
In my experience as a reviewer, it’s not quite the case that the appendix is not reviewed. As a reviewer, I sometimes look into the appendix to answer specific questions I have about details. I expect the same to be true of code, with the added advantage that some code is even executable. Used in this way, the appendix/code reduce the burden on reviewers by providing a means for disambiguation.
At the same time, I fully agree that appendix/code in ML conferences are not reviewed in any formal sense. Improving that would be great, but getting around the scale problem is hard.