Let me kick things off by posing this question to ML researchers:
What do you think are some important holy grails of machine learning?
– “A classifier with SVM-level performance but much more scalable”
– “Practical confidence bounds (or learning bounds) for classification”
– “A reinforcement learning algorithm that can handle the ___ problem”
– “Understanding theoretically why ___ works so well in practice”
I pose this question because I believe that when goals are stated explicitly and well, rather than left implicit, they are likely to be achieved much more quickly: an explicit statement provides clarity and opens the problems up to more people. I would also like to know more about the internal goals of the various machine learning sub-areas (theory, kernel methods, graphical models, reinforcement learning, etc.) as stated by people in those areas. This could help people cross sub-areas.
All branches of machine learning seem to be united in the idea of using data to make predictions. However, people disagree to some extent about what this means. One way to categorize these different goals is on an axis, where one extreme is “tools to aid a human in using data to do prediction” and the other extreme is “tools to do prediction with no human intervention”. Here is my estimate of where various elements of machine learning fall on this spectrum.
Human necessary → Human unnecessary:
- Clustering, data visualization
- Bayesian Learning, Probabilistic Models, Graphical Models
- Kernel Learning (SVMs, etc.)
- Decision Trees, Boosted Decision Trees
- Reinforcement Learning
The exact position of each element is of course debatable. My reasoning is that clustering and data visualization are nearly useless for prediction without a human in the loop. Bayesian/probabilistic/graphical models generally require a human to sit and think about what a good prior/structure is. Kernel learning approaches have a few standard kernels which often work on simple problems, although sometimes significant kernel engineering is required. Lately, I have been impressed by how 'black box' decision trees and boosted decision trees are. The goal of reinforcement learning (rather than, perhaps, the reality) is designing completely automated agents.
The position in this spectrum gives some idea of the state of progress. Things at the 'human necessary' end have been successfully used by many people to solve many learning problems. At the 'human unnecessary' end, the systems are finicky and often just won't work well.
I am most interested in the ‘human unnecessary’ end.
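As an illustration of the 'human unnecessary' end of the spectrum, here is a minimal sketch (not from the post; the learner and toy data are made up for illustration) of a decision stump that chooses its own split from data, with no human-supplied prior, kernel, or structure:

```python
# A decision stump learner on a 1-D feature: it picks the threshold and sign
# minimizing training error, so no human tuning is in the loop.

def train_stump(xs, ys):
    """Return a +1/-1 predictor chosen purely from the training data."""
    best = None
    for t in sorted(set(xs)):
        for sign in (1, -1):
            preds = [sign if x >= t else -sign for x in xs]
            err = sum(p != y for p, y in zip(preds, ys))
            if best is None or err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x: sign if x >= t else -sign

# Toy data: the label is +1 exactly when the feature exceeds 5.
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [-1, -1, -1, -1, 1, 1, 1, 1]
predict = train_stump(xs, ys)
```

A boosted ensemble of such stumps is one reason decision tree methods feel comparatively 'black box': the entire fitting procedure is automatic.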
I have decided to run a weblog on machine learning and learning theory research. Here are some reasons:
1) Weblogs enable new functionality:
- Public comment on papers. No mechanism for this exists at conferences or most journals. I have encountered it once, for a science paper. Some communities have mailing lists supporting this, but not machine learning or learning theory. I have often read papers and found myself wishing there were some way to see others' questions and read the replies.
- Conference shortlists. One of the most common conversations at a conference is “what did you find interesting?” There is no explicit mechanism for sharing this information at conferences, and it’s easy to imagine that it would be handy to do so.
- Evaluation and comment on research directions. Papers are almost exclusively about new research, rather than evaluation (and consideration) of research directions. This last role is satisfied by funding agencies to some extent, but that is a private debate of a subset of the community. It’s easy to imagine that a public debate would be more thorough and thoughtful, producing better decisions.
- Public Collaboration. It may be feasible to use a weblog as a mechanism for public research at a scale smaller than a paper. Currently, most machine learning research is done by one or a few closely collaborating, privately communicating authors. Weblogs provide a natural generalization where anyone who is interested may be able to contribute.
- The things not thought of. Weblogs provide new capabilities, and it is natural to miss the impact of these capabilities until a number of people have thought about and used them.
I intend to experiment with these capabilities.
2) Weblogs have the potential to be revolutionary. Here is a comparison of the different mechanisms of communication in a table.
| Mechanism | Delay | Audience | Permanence |
|---|---|---|---|
| Journal | 6 months to years | Anyone with interest and access | |
| Conference | | Attendees (and often any with interest) | |
| Mailing list | A few days | Anyone subscribed (or reading archives) | Semipermanent (with archives) |
| Conversation | | Whoever is there then | |
| Weblog | | Anyone with interest | |
Weblogs achieve “best we can imagine” in every category except permanency and quality control. Furthermore, the weaknesses are not inherent to the medium, and are being actively addressed.
Permalinks are the equivalent of a citation, providing a semipermanent pointer to a piece of content. This is only 'semi' because the _author_ of the content can typically revise it at any moment, and the pointer is only as permanent as the website itself.
Trackback is an explicit method for creating the reverse lookup table of citations: who cites this?
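A trackback ping, per the published Trackback specification, is just an HTTP POST of form-encoded fields to the cited post's trackback URL. Here is a minimal sketch; the URLs, titles, and helper name are made-up placeholders:

```python
# Build the form-encoded body of a Trackback ping (the citing post announces
# itself to the cited post, creating the reverse-citation link).
from urllib.parse import urlencode

def trackback_body(url, title, excerpt, blog_name):
    """Return the application/x-www-form-urlencoded body of a ping."""
    return urlencode({
        "url": url,            # permalink of the post doing the citing
        "title": title,        # title of the citing post
        "excerpt": excerpt,    # short excerpt shown on the cited post
        "blog_name": blog_name,
    })

body = trackback_body(
    url="http://example.com/2005/01/reply-post",
    title="A reply",
    excerpt="Commenting on the cited post...",
    blog_name="Example Weblog",
)
# To send: POST `body` to the cited post's trackback URL with
# Content-Type: application/x-www-form-urlencoded; the server replies with
# XML containing <error>0</error> on success.
```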
In addition, there are several mechanisms for information filtering, such as reposting in another weblog and experimental moderation schemes.
The same forces that drive academia toward permanent, indelible records and very careful information filtering also apply to blogs. These forces may produce the 'missing pieces', making weblogs very compelling for academic purposes.
3) Lance Fortnow told me so.