Some of the “sister conference” presentations at AAAI have been great. Roughly speaking, the conference organizers asked the organizers of other conferences to come give a summary of their conference, and many different AI-related conferences accepted. The presenters typically discuss some of the background and goals of the conference, then mention results from a few papers they liked. This is great because it provides a mechanism to get a digested overview of the work of several thousand researchers, something which is simply not available anywhere else.
Based on these presentations, it looks like there is a significant component of (and opportunity for) applied machine learning in AIIDE, IUI, and ACL.
There was also some discussion of having a super-colocation event similar to FCRC, but centered on AI & Learning. This seems like a fine idea. The field is fractured across so many different conferences that the mixing a super-colocation provides seems likely to help research.
ACL is, more or less, the conference with which I most closely associate. Our community is becoming more and more ML oriented (indeed, a friend at MSResearch hypothesized that every paper at ACL this year would be “statistical” in nature; so far I don’t think anyone has proven him wrong). It’s a difficult balance, though, and one that I personally don’t know how best to strike. The problem is that there will often be a paper at ACL that really would have been more appropriate at a venue like ICML. I have to believe this happens at many conferences with more applied themes. It is probably easier to get a machine learning paper accepted at ACL than at a place like ICML, because the acceptance strategy at ACL seems to have become “if you improve on state-of-the-art results, your paper will (almost certainly) be published.” (In many cases, e.g. parsing, improvement is not even necessary; simply achieving comparable results is adequate.) Theoretical justification is simply not as important at a conference like ACL as at a place like ICML, UAI, or NIPS.
The situation is strange, though: there are many papers that use machine learning very badly (e.g., applying a completely unmotivated model, having no guarantees on a proposed learning algorithm, etc.) but get good performance and are accepted; similarly, there are many papers that use very good machine learning but get boring results (“my parser is exactly as good as the five others that are out there”; this was a big problem at ACL’2004).
What do other people think about how to balance this at more applied conferences? Should all learning-oriented papers go to something like ICML/UAI/NIPS and all application-oriented papers go to ACL/IUI/AIIDE? This seems to lose something, since many advances in learning for NLP are probably not that interesting to a general ML audience. But it seems that the current criteria for selecting ML-style papers at an applied conference like ACL are letting uninteresting stuff in. Unfortunately, I don’t know of a good utility function for this problem, but it is something it would be great to solve.
(That being said, I encourage ML people to submit well-motivated ML models/algorithms applied to *real* natural language problems to our conferences; exposure in our community to these things will likely result in better reviewer confidence and better choices of accepted papers!)
What Hal is getting at is a basic difficulty with conference multiplicity. Every conference has its own point of emphasis, but it is very common that many of the papers could have gone to a different conference. Since publication is a ‘once only’ thing, this means a good paper at one conference often goes unnoticed at another. As someone with finite time and not-very-finite interests, I would very much like to see the interesting papers from many other conferences.
I don’t know what the right answer to this dilemma is. Perhaps we can hope that technology will take care of the problem so that all talks become available on the web for nonattendees.