Reopening RL->Classification

In research, it’s often the case that solving a problem helps you realize that it wasn’t the right problem to solve. This is the case for the “reduce RL to classification” problem with the solution hinted at here and turned into a paper here.

The essential difficulty is that the method of stating and analyzing reductions ends up being nonalgorithmic (unlike previous reductions) unless you work with learning from teleoperated robots, as Greg Grudic does. The difficulty here is that the reduction depends on the optimal policy (which a human teleoperator might simulate, but which is otherwise unavailable).
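For concreteness, here is a toy sketch of what the teleoperation setting buys you: with the optimal policy available as a teacher, the RL problem collapses into supervised classification on (state, teacher's action) pairs. All function names here are illustrative, not from the paper, and the "learner" is deliberately trivial.

```python
from collections import defaultdict, Counter

def collect_demonstrations(states, teacher_policy):
    """Label each observed state with the teacher's (near-optimal) action."""
    return [(s, teacher_policy(s)) for s in states]

def train_classifier(dataset):
    """Toy learner: memorize the majority action per state."""
    votes = defaultdict(Counter)
    for s, a in dataset:
        votes[s][a] += 1
    # Unseen states fall back to action 0; a real learner would generalize.
    return lambda s: votes[s].most_common(1)[0][0] if s in votes else 0

# Example: the teacher moves "right" (1) in states >= 0, else "left" (0).
teacher = lambda s: 1 if s >= 0 else 0
data = collect_demonstrations([-2, -1, 0, 1, 2], teacher)
policy = train_classifier(data)
print(policy(1))  # 1
```

Without the teacher, the labels (optimal actions) are exactly what's unavailable, which is why the reduction becomes nonalgorithmic in the general case.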

So, this problem is “open” again with the caveat that this time we want a more algorithmic solution.

Whether or not this is feasible at all is still unclear and evidence in either direction would greatly interest me. A positive answer might have many practical implications in the long run.

Wikis for Summer Schools and Workshops

Chicago ’05 ended a couple of weeks ago. This was the sixth Machine Learning Summer School, and the second one that used a wiki. (The first was Berder ’04, thanks to Gunnar Raetsch.) Wikis are relatively easy to set up, greatly aid social interaction, and should be used a lot more at summer schools and workshops. They can even be used as the meeting’s webpage, as a permanent record of its participants’ collaborations — see for example the wiki/website for last year’s NVO Summer School.

A basic wiki is a collection of editable webpages, maintained by software called a wiki engine. The engine used at both Berder and Chicago was TikiWiki — it is well documented and gets you something running fast. It uses PHP and MySQL, but doesn’t require you to know either. TikiWiki has far more features than most wikis, as it is really a full Content Management System. (My thanks to Sebastian Stark for pointing this out.) Here are the features we found most useful:

  • Bulletin boards, or forums. The most-used one was the one for social events, which allowed participants to find company for doing stuff without requiring organizer assistance. While conferences, by their inherently less interactive nature, don’t usually benefit from all aspects of wikis, this is one feature worth adding to every one. [Example]

    Other useful forums to set up are “Lost and Found”, and discussion lists for lectures — although the latter only work if the lecturer is willing to actively answer questions arising on the forum. You can set forums up so that all posts to them are immediately emailed to someone.

  • Editable pages. For example, we set up pages for each lecture that we were able to edit easily later as more information (e.g. slides) became available. Lecturers who wanted to modify their pages could do so without requiring organizer help or permission. (Not that most of them actually took advantage of this in practice… but this will happen in time, as the wiki meme infects academia.) [Example]
  • Sign-up sheets. Some tutorials or events were only open to a limited number of people. Having editable pages means that people can sign up themselves. [Example]
  • FAQs. You can set up general categories, add questions, and place the same question in multiple categories. We set most of this up before the summer school, with directions for getting there from the airport, what to bring, etc. We also had volunteers post answers to anticipated FAQs, like the location of local restaurants and blues clubs. [Example]
  • Menus. You can set up the overall layout of the webpage by specifying the locations and contents of menus to the left and right of a central ‘front page’. This is done via the use of ‘modules’, and makes it possible for your wiki pages to completely replace the webpages — if you are willing to make some aesthetic sacrifices.
  • Different levels of users. The utopian wiki model of having ‘all pages editable by everyone’ is … well, utopian. You can set up different groups of users with different permissions.

  • Calendars. Useful for scheduling, and for changes to schedules. (With the number of changes we had, we really needed this.) You can have multiple calendars e.g. one for lectures, another for practical sessions, and another for social events — and users can overlay them on each other. [Example]

A couple of other TikiWiki features that we didn’t get working at Chicago, but would have been nice to have, are these:

  • Image Galleries. Gunnar got this working at Berder, where it was a huge success. Photographs are great icebreakers, even the ones that don’t involve dancing on tables.
  • Surveys. These are easy to set up, and have an option for participants to see, or not see, the survey results — useful when asking people to rate lectures.

TikiWiki also has several features that we didn’t use, such as blogs and RSS feeds. It also has a couple of bugs (and features that are bad enough to be called bugs), such as permission issues and the inability to print calendars neatly. These will doubtless get cleaned up in due course.

Finally, owing to much prodding from John and some other MLSS participants, I’ve written up my experiences in using TikiWiki @ Chicago ’05 on my website, including installation instructions and a list of “Good Things to Do”. This documentation is meant to be a survival guide complementary to the existing TikiWiki documentation, which can sometimes be overwhelming.

Workshops are not Conferences

… and you should use that fact.

A workshop differs from a conference in that it is about a focused group of people worrying about a focused topic. It also differs in that a workshop is typically a “one-time affair” rather than a series. (The Snowbird learning workshop counts as a conference in this respect.)

A common failure mode of both organizers and speakers at a workshop is to treat it as a conference. This is “ok”, but it is not really taking advantage of the situation. Here are some things I’ve learned:

  1. For speakers: A smaller audience means it can be more interactive. Interactive means a better chance to avoid losing your audience and a more interesting presentation (because you can adapt to your audience). Greater focus amongst the participants means you can get to the heart of the matter more easily, and discuss tradeoffs more carefully. Unlike conferences, relevance is more valued than newness.
  2. For organizers: Not everything needs to be in a conference style presentation format (i.e. regularly spaced talks of 20-30 minute duration). Significant (and variable) question time, different talk durations, flexible rescheduling, and panel discussions can all work well.

Question: “When is the right time to insert the loss function?”

Hal asks a very good question: “When is the right time to insert the loss function?” In particular, should it be used at testing time or at training time?

When the world imposes a loss on us, the standard Bayesian recipe is to predict the (conditional) probability of each possibility and then choose the possibility which minimizes the expected loss. In contrast, as the confusion over “loss = money lost” or “loss = the thing you optimize” might indicate, many people ignore the Bayesian approach and simply optimize their loss (or a close proxy for their loss) over the representation on the training set.
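The Bayesian recipe above can be sketched in a few lines. This is a minimal illustration, with a made-up loss matrix: predict conditional probabilities first, then insert the loss only at decision time.

```python
def bayes_decision(probs, loss):
    """probs[y]: P(true label = y | x); loss[a][y]: cost of action a when truth is y.
    Returns the action minimizing expected loss under probs."""
    expected = [sum(loss[a][y] * probs[y] for y in range(len(probs)))
                for a in range(len(loss))]
    return min(range(len(loss)), key=lambda a: expected[a])

# Asymmetric loss: predicting 0 when the truth is 1 costs 5,
# while a false positive costs only 1.
loss = [[0, 5],   # action 0
        [1, 0]]   # action 1

# Even though class 0 is more probable (0.8), the expected losses are
# 1.0 (action 0) vs 0.8 (action 1), so the loss-aware choice is action 1.
print(bayes_decision([0.8, 0.2], loss))  # 1
```

The example shows why the question has teeth: with a symmetric loss the two approaches often coincide, but an asymmetric loss can flip the decision away from the most probable class.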

The best answer I can give is “it’s unclear, but I prefer optimizing the loss at training time”. My experience is that optimizing the loss in the most direct manner possible typically yields the best performance. This question is related to a basic principle which both Yann LeCun (applied) and Vladimir Vapnik (theoretical) advocate: “solve the simplest prediction problem that solves the problem”. (One difficulty with this principle is that ‘simplest’ is difficult to define in a satisfying way.)

One reason why it’s unclear is that optimizing an arbitrary loss is not an easy thing for a learning algorithm to cope with. Learning reductions (which I am a big fan of) give a mechanism for doing this, but they are new and relatively untried.

Drew Bagnell adds: Another approach to integrating loss functions into learning is to try to re-derive ideas about probability theory appropriate for other loss functions. For instance, Peter Grunwald and A.P. Dawid present a variant on maximum entropy learning. Unfortunately, it’s even less clear how often these approaches lead to efficient algorithms.

Exact Online Learning for Classification

Jacob Abernethy and I have found a computationally tractable method for computing an optimal (or, depending on the setting, near-optimal) master algorithm for combining expert predictions, addressing this open problem. A draft is here.

The effect of this improvement seems to be about a factor of 2 decrease in the regret (= error rate minus best possible error rate) for the low error rate situation. (At large error rates, there may be no significant difference.)
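For readers who want the baseline being improved upon: the standard exponentiated-weights master (in the style of weighted majority) is easy to sketch. To be clear, this is NOT the algorithm from our draft, just the familiar approximate scheme it is measured against; the learning rate and the 0/1 losses are illustrative choices.

```python
import math

def exp_weights(expert_preds, outcomes, eta=0.5):
    """Run a weighted-majority master over binary expert predictions.
    expert_preds: list of per-round tuples of 0/1 expert predictions.
    outcomes: list of 0/1 true labels. Returns the master's mistake count."""
    n = len(expert_preds[0])
    w = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        # Predict by weighted majority vote.
        vote = sum(wi for wi, p in zip(w, preds) if p == 1)
        yhat = 1 if vote >= sum(w) / 2 else 0
        mistakes += (yhat != y)
        # Exponentially penalize experts that erred this round.
        w = [wi * math.exp(-eta * (p != y)) for wi, p in zip(w, preds)]
    return mistakes

# Three experts: always-1, always-0, and one matching this short sequence.
preds = [(1, 0, 1), (1, 0, 0), (1, 0, 1)]
ys = [1, 0, 1]
print(exp_weights(preds, ys))  # 0 mistakes on this sequence
```

The approximation slack in schemes like this comes from the fixed learning rate and the worst-case analysis; removing that slack exactly is what the draft is about.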

There are some unfinished details still to consider:

  1. When we remove all of the approximation slack from online learning, is the result a satisfying learning algorithm in practice? I consider online learning to be one of the more compelling methods for analyzing and deriving algorithms, but whether this algorithm meets that expectation remains to be seen.
  2. Some extra details: The algorithm is optimal given a small amount of side information (k in the draft). What is the best way to remove this side information? Removing it is necessary for a practical algorithm. One mechanism may be the k->infinity limit.