Dear Fellow Machine Learners,
For the past year or so I have become increasingly frustrated with the peer review system in our field. I constantly get asked to review papers in which I have no interest. At the same time, as an action editor for JMLR, I constantly have to harass people to review papers. When I send papers to conferences and journals, I often get rejected with reviews that, at least in my mind, make no sense. Finally, I have a very hard time keeping up with the best new work, because I don’t know where to look for it…
I decided to try and do something to improve the situation. I started a new web site, which I decided to call “The Machine Learning Forum”; the URL is http://themachinelearningforum.org
The main idea behind this web site is to remove anonymity from the review process. On this site, all opinions are attributed to the actual person who expressed them. I expect that this will improve the quality of the reviews. Another obvious effect is that there will be fewer negative reviews; weak papers will tend not to get reviewed at all, but then again, is that such a bad thing?
If you have any interest in this endeavor, please register on the web site and submit a photo of yourself. Based on the information on your web site, I will decide whether to grant you “author” privileges that will allow you to write reviews and overviews. Anybody can submit pointers to publications that they would like somebody to review. Anybody can participate in the discussion forum, which is a fancy message board with threads, etc.
Right now, the main contribution I am looking for is “overviews”.
Overviews are pages written by somebody who is an authority in some area (for example, Kamalika Chaudhuri is an authority on mixture models) in which they list the main papers in the area and give a high-level description of how the papers relate. These overviews are intended to serve as an entry point for somebody who wants to learn about that subfield. Overviews *can* reference the work of the author of the overview. This is unlike reviews, in which the reviewer cannot be the author of the reviewed paper.
I hope you are interested enough to give this a try!
Comments are very welcome.
Cheers
Yoav Freund (yfreund@ucsd.edu)
Sounds like a great idea. I’ve signed up and look forward to seeing how things develop.
This is an excellent website idea. Folks in favor of open access have been talking about solutions for a better publishing process for a while now. This website embodies some of the ideas that have been proposed.
Are you going to open “overviews” to collaborative editing? I was reading the overview on learning mixture models and I found it mostly oriented towards algorithms. Nothing wrong with that. I am wondering, however, whether there is going to be an established channel to keep the overviews growing, possibly in a collaborative way. Here is a good reference, for example:
Stephens, M. (2000). Dealing with label-switching in mixture models. Journal of the Royal Statistical Society, Series B, 62, 795–809.
I don’t think that collaborative editing à la Wikipedia would work in this context. This is because the postings on this site are mainly about opinion and point of view, and are therefore inherently not objective or comprehensive. Wikipedia is supposed to have minimal opinion, which is why collaborative editing makes sense there. In the Machine Learning Forum, each piece reflects the point of view of one person: the author of the piece. The Forum, which is also part of the site, is intended for open-ended discussion threads. It would be nice to find a way to link discussion threads to the overviews; I’ll try to find a way to do that.
Yoav
I’m a little confused. Do you want people to submit original research to this website? Or will you only review already-published papers? If the latter, I’m not sure how this fixes the review process, which I agree is badly broken.
(And yes, I see the irony of posting this comment anonymously.)
Great idea! I’m also often completely dissatisfied with reviews of my papers, so this could be a great opportunity to improve research quality!
Glad to see new ideas. I can see how your website addresses, “Finally, I have a very hard time keeping up with the best new work, because I don’t know where to look for it.” How does it help with the review process? Are you imagining that we’ll eventually post papers for open review, and once they have enough positive reviews they’ll be published somewhere? Or will this simply help editors find reviewers who are interested in discussing certain topics in the field? I can see the latter working, but not the former. I think more clarity on how it should help would inspire participation.
Dear Russel,
Thank you for your comment. I am glad you like the idea.
I would like to see more open debate about different approaches to machine learning, statistical inference, etc.
The standard review process for conferences and for journals does not allow open debate because the process is secret and anonymous.
As for how papers get published… I am not sure publication is really a problem any more. Anybody can put their papers on their web page or on arXiv and in this way have them “published”. The publication of papers in conferences or journals is mostly a form of quality control. If you have been involved in editorial boards or program committees in recent years, you know that this quality control process is very far from perfect.
I am trying to develop an alternative. Not a replacement. Just an additional venue for peer review, feedback and communication.
What shape this effort will take depends very much on participation and is hard to predict. I am providing a platform; my hope is that a large number of people will contribute their opinions and views. My goal is to make the site interesting and engaging so that people will participate.
I hope this answers some of your questions. I realize it does not provide full clarity, but this is the best I can do at this point.
Best
Yoav
I dislike the rating feature, and I don’t think it is necessary to make the site useful.
I applaud Yoav for taking the lead on this. I am concerned that many reviewers are hiding behind anonymity. Too many papers get insufficient reviewing effort and reviews are often cosmetic or poorly argued. I’ve seen statements like:
“the paper was too dense, rewrite and resubmit next year”
“there was too much math which should have been put in an appendix”
“idea X was counter-intuitive and disagrees with my (omniscient) intuition”
“there is nothing new here (but I can’t provide citations for the previous work)”
Reviewers would have to be more thorough if their identities were revealed. They would spend more time reading and understanding submissions and providing precise arguments for or against publication.
I think your archive should collaborate with scholarpedia.org, which has excellent coverage in computational neuroscience, a close sister field. The articles there are slightly more in-depth (actually presenting material rather than simply overviewing it) but also largely function as overviews of various ideas or subfields, often referencing the author’s work and other seminal work in the area. The authors tend to be pre-eminent experts on the subfield in question, and submissions are peer-reviewed.
Wouldn’t it be better if it were possible simply to skip the review and preserve anonymity? That way, one reviews a paper only if one is interested in doing so; if not, one simply skips it. I think that what you describe as “reviews that don’t make sense” often simply means “I do not care about this.” Just make a sort of pool where you put the papers, and everybody has the freedom to choose which papers to review (consequently, the ones best suited to them).