Machine Learning (Theory)

6/6/2005

Exact Online Learning for Classification

Tags: Solutions | jl @ 9:43 am

Jacob Abernethy and I have found a computationally tractable method for computing an optimal (or, depending on the setting, near-optimal) master algorithm for combining expert predictions, addressing this open problem. A draft is here.

The effect of this improvement appears to be roughly a factor of 2 decrease in regret (the error rate minus the best achievable error rate) in the low-error-rate regime. (At large error rates, there may be no significant difference.)
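For concreteness, here is a minimal sketch of the classical exponentiated-weights master algorithm for the experts setting, with regret measured as defined above. This is a generic baseline for combining expert predictions, not the algorithm from the draft; the learning rate eta, the 0/1 loss, and the toy data are illustrative assumptions.

```python
# Sketch: a standard exponentiated-weights master algorithm for binary
# classification with expert advice. NOT the optimal algorithm from the
# draft; eta and the toy data below are illustrative assumptions.
import math

def master_predict(weights, expert_predictions):
    """Predict 1 if the weighted vote for label 1 is at least half the total weight."""
    vote = sum(w for w, p in zip(weights, expert_predictions) if p == 1)
    return 1 if vote >= sum(weights) / 2 else 0

def run_master(rounds, labels, eta=0.5):
    """Run the master over all rounds; return (master errors, best expert's errors)."""
    n = len(rounds[0])
    weights = [1.0] * n
    master_errors = 0
    expert_errors = [0] * n
    for preds, y in zip(rounds, labels):
        if master_predict(weights, preds) != y:
            master_errors += 1
        for i, p in enumerate(preds):
            if p != y:
                expert_errors[i] += 1
                weights[i] *= math.exp(-eta)  # downweight experts that erred
    return master_errors, min(expert_errors)

# Regret, as used above, is the master's error rate minus the best expert's.
rounds = [(0, 1, 1), (1, 1, 0), (0, 0, 1), (1, 0, 1)]
labels = [1, 1, 0, 1]
m, b = run_master(rounds, labels)
print("regret:", (m - b) / len(labels))
```

The factor-of-2 improvement claimed above is relative to the regret bounds achievable by master algorithms of this general form.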

There are some unfinished details still to consider:

  1. When we remove all of the approximation slack from online learning, is the result a satisfying learning algorithm in practice? I consider online learning one of the more compelling methods for analyzing and deriving algorithms, but whether this algorithm meets that expectation remains to be tested.
  2. Some extra details: the algorithm is optimal given a small amount of side information (k in the draft). What is the best way to remove this side information? Removing it is necessary for a practical algorithm. One mechanism may be the k -> infinity limit.
One Comment to “Exact Online Learning for Classification”
  1. [...] Is this a statement about learning or about preservation of learning? We did some experiments to analyze the new Binning algorithm which works in this setting. For several UCI datasets, we reprocessed them so that features could be used as predictors and then applied several master algorithms. The first graph confirms that Binning is indeed a better algorithm according to the tightness of the upper bound. [...]
