One of the enduring stereotypes of academia is that people spend a great deal of intelligence, time, and effort finding complexity rather than simplicity. This is at least anecdotally true in my experience.

**Math++** Several people have found that adding useless math makes their paper more publishable, as evidenced by a reject-add-accept sequence.

**8 page minimum** Who submitted a paper to ICML violating the 8 page minimum? Every author fears that the reviewers won't take their work seriously unless the allowed length is fully used. The best minimum violation I know is Adam's paper at SODA on generating random factored numbers, but this is deeply exceptional. It's a fair bet that 90% of papers submitted are exactly at the page limit. We could imagine that this is because papers naturally take more space, but few people seem to be clamoring for more space.

**Journalong** Has anyone been asked to review a 100 page journal paper? I have. Journal papers can be nice, because they give an author the opportunity to write without sharp deadlines or page limit constraints, but this can and does go awry.

Complexity illness is a burden on the community. It means authors spend more time filling out papers, reviewers spend more time reviewing, and (most importantly) effort is misplaced on complex solutions over simple solutions, ultimately slowing (sometimes crippling) the long term impact of an academic community.

It’s difficult to imagine an author-driven solution to complexity illness, because the incentives are simply wrong. Reviewing based on solution value rather than complexity is a good way for individual people to reduce the problem. More generally, it would be great to have a system which explicitly encourages research without excessive complexity. The best example of this seems to be education—it’s the great decomplexifier. The process of teaching something greatly encourages teaching the simple solution, because that is what can be understood. This seems to be true both of traditional education and less conventional means such as wikipedia articles. I’m not sure exactly how to use this observation—Is there some way we can shift conference formats towards the process of creating teachable material?

Make it easier for shorter papers to get in. They take up less space, so this has other justifications…

>Who submitted a paper to ICML violating the 8 page minimum?

There is a famous story about Levin, who once submitted a two-page paper to a journal. When the editor rejected the paper because it was shorter than the 8-page limit, Levin stapled four copies of the paper and sent it again!

Does that 1-page SODA paper have, as its figure, a poorly drawn face? Is that an actual diagram of something? Was that paper really accepted in the form you posted?

I have lately been writing rather long papers (with coauthors). One is more than 60 pages; the other two are around 30 pages long. I rationalize this by believing that the papers actually have educational value, because they teach people new techniques and ways of thinking. Sure, if you are adding just “one more trick” to a well-established field of knowledge, your paper ought to be as short as possible. But if you’re trying to explain a new way of thinking, it might be better to go slowly and provide more detail than strictly necessary.

I suspect that the answer may be multi-dimensional, in the following sense. If you look at the mathematics community, they have two distinct peaks in their style of papers – one peak around large papers that make a major contribution and establish some important idea, and one around very short papers that provide McNuggets of insight. Mathematicians are often evaluated by how many of those major papers they have – which can be few, perhaps just one every couple of years. One advantage of this mode is that there are fewer papers that serve as landmarks for any given topic.

In ML, and in CS in general, there are a lot more McNuggets being published, and the system seems to have reached an equilibrium around a certain quantum for these nuggets – which results in a crisp 6- or 8-page output. There are more of these, and so, occasionally, we have discussions about whether it is too much.

However, it seems to boil down to how the community functions: are there more people, doing slightly different things, contributing to a larger edifice, or are there fewer people building larger monoliths at a different time scale?

Start a new online journal — call it something like “open paper” — use peer review not to minimize the number of papers but to check that a paper is interesting and has no obvious flaws. Put it up with reasonable search capability (accept everything as a PDF so you don’t have to worry about styles, etc.), open it up to comments like a blog (maybe even use voting), and explicitly state the criteria that you are discussing in this post.

I have a similar post on this issue for computer vision conferences on my blog: http://vimsu99.blogspot.com/

I heard another Levin story, perhaps apocryphal, that goes in the form of a dialogue:

Levin: “A good journal publication ought to be of similar length to its Kolmogorov complexity.”

Student: “But then surely it would be indistinguishable from a random string?”

Levin: “Exactly!”
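For readers unfamiliar with the definition the punchline leans on: in algorithmic information theory, the Kolmogorov complexity of a string is the length of the shortest program that outputs it, and “random” means “incompressible”. A minimal sketch of the fact behind the joke (this gloss is mine, not part of the comment):

```latex
% K(x): length of the shortest program that outputs the string x.
% A string x is incompressible (algorithmically random) when no
% description of x is substantially shorter than x itself:
\[
  K(x) \ge |x| - c \quad \text{for a small constant } c.
\]
% A paper whose length matches its Kolmogorov complexity admits no
% shorter equivalent write-up -- which is precisely the defining
% property of a random string. Hence Levin's "Exactly!".
```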

In last year’s COLT I got the following comment from one reviewer:

“Nice paper; arguably technically a little light.”

So lack of complexity was explicitly perceived as a (slight) drawback!

Personally, I first base my reviews on the criterion “did this paper teach me something new/surprising/interesting?” Given a positive answer, I count simplicity of exposition as a significant additional positive point.

This is a back-link to Fernando’s post on the subject. I am sympathetic to the idea that research is a process, so we tend not to have the perfect paper at the time of writing. Nevertheless, I think authors also fear their paper being labeled ‘too simple’ and rejected, and I’ve even seen some signs of reviewers enforcing this.