The prevailing wisdom in machine learning seems to be that motivating a paper is the responsibility of the author. I think this is a harmful view—instead, it’s healthier for the community to regard this as the responsibility of the reviewer.
There are lots of reasons to prefer a reviewer-responsibility approach.
- Authors are the most biased possible source of information about the motivation of the paper. Systems which rely upon very biased sources of information are inherently unreliable.
- Authors are highly variable in their ability and desire to express the motivation for their work. This adds greatly to the variance in whether an idea is accepted, and it can systematically hold back or accentuate careers. It’s great if your career is accentuated by awesome wording choices, but wise decision making by reviewers is what matters for the field.
- The motivation section in a paper doesn’t do anything in some sense—it’s there to get the paper in. Reading the motivation of a paper is of little use in helping the reader solve new problems.
- Many motivation sections are a waste of time. The 30th paper on a subject should not require the same motivation as the first, and requiring or expecting this of authors is an exercise in busy work by the research community.
Some caveats to make sure I’m understood:
- I’m not advocating the complete removal of a motivation section (motivectomy?), which would be absurd (and frankly harmful to your career). A paragraph describing common examples where the problem addressed comes up is desirable for readers who are not specialists. This paragraph should not be in the abstract, where it seems to often sneak in.
- I’m also not arguing against discussion of motivations. I regard discussion of motivations as quite important, and totally unsuited to the paper format. It’s hard to imagine any worse method for discussion than one with a year-size latency where quasi-anonymous people are quasi-randomly paired and each attempts to accomplish several different tasks one of which happens to be a one-sided discussion of motivation. A blog can work much better for this sort of thing, and I definitely invite discussion on motivational questions.
So, how do we change the prevailing wisdom? The answer is always “gradually”, but there are a number of steps we can take.
- As an author, one clever technique is to pass serious discussion of motivation by reference. “For a general discussion and motivation of this problem see [].” This would save space in the large number of papers which attempt to address an old problem better than previous approaches.
- Participate in public discussion of motivations. We need to encourage a real mechanism for discussion. Until these alternative (and far better) formats for discussion are developed the problem of “who motivates” will always exist.
- Have private discussions about motivation where you can. Random conversations at conferences are great for this, and the process often sharpens your appreciation.
- Learn to take responsibility for motivation as a reviewer. This might sound hard, but it’s actually somewhat easier than careful evaluation of technical content in my experience.
- The first step is to disbelieve all the motivational parts of a paper by default. As mentioned above, the authors are not a reliable source anyway. Skip it and move on.
- Make sure you understand the problem being addressed.
- Make sure you understand how well the problem is addressed, relative to previous work.
- Think about how important that increment is. This is not equivalent to asking “how many people will appreciate the increment?” which is a popularity question. Frankly, all of Machine Learning fails the popularity test in a wider sense, even though many people appreciate the fruits of machine learning on a daily basis. First, think about the problem.
- How many people might a solution to the problem help? 0 is fairly common amongst submitted papers.
- How much would it help them? If it’s “a lot”, then that should add a bit to the importance of the paper.
- How familiar are you with the problem? If not very, then it’s appropriate to give the benefit of the doubt to the authors.
Then, think about the solution.
1. This solution might be useful to some other researchers who come up with something useful. This is a warning sign, since the usefulness is doubly indirect.
2. This solution might be useful to me in coming up with a useful algorithm for solving problems.
3. This paper improves an algorithm. This is also fairly common. It should be improving an algorithm with a reasonable claim to being the best method for solving some problem.
4. This paper can provide improvements to many algorithms. Theory papers often fall here, but they can also fall under (1) or (2) easily.
Now, take these considerations into account in forming your own opinion about how motivated the paper is.
- Go multimodel. If you only know one model of what machine learning is, you don’t really know machine learning. Learn multiple views of what machine learning is, and actively consider their merits and downsides.