Machine Learning (Theory)

11/2/2005

Progress in Active Learning

Tags: Active,Solutions jl@ 11:08 pm

Several bits of progress have been made since Sanjoy pointed out the significant lack of theoretical understanding of active learning. This is an update on the progress I know of. As a refresher, active learning as meant here is:

  1. There is a source of unlabeled data.
  2. There is an oracle from which labels can be requested for unlabeled data produced by the source.
  3. The goal is to perform well with minimal use of the oracle.
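As a concrete illustration of this protocol (not any of the algorithms discussed below), here is a minimal pool-based sketch in Python; the uncertainty rule, the toy threshold oracle, and all function names are illustrative assumptions:

```python
def active_learn(pool, oracle, budget, fit, uncertainty):
    """Generic pool-based loop: draw from an unlabeled pool (1),
    request labels from an oracle (2), under a query budget (3)."""
    pool, labeled, model = list(pool), [], None
    for _ in range(budget):
        if not pool:
            break
        # query the example the current model is least certain about
        x = max(pool, key=lambda p: uncertainty(model, p))
        pool.remove(x)
        labeled.append((x, oracle(x)))
        model = fit(labeled)
    return model

# Toy instantiation: learn a threshold on [0, 1] (hidden value 0.37).
def fit(labeled):
    neg = [x for x, y in labeled if y == 0]
    pos = [x for x, y in labeled if y == 1]
    return (max(neg, default=0.0) + min(pos, default=1.0)) / 2

def uncertainty(model, x):
    return 0.0 if model is None else -abs(x - model)

model = active_learn(pool=[i / 1000 for i in range(1000)],
                     oracle=lambda x: 1 if x >= 0.37 else 0,
                     budget=20, fit=fit, uncertainty=uncertainty)
```

With this query rule the learner effectively binary-searches the pool toward the hidden threshold, so a budget of 20 queries pins it down far more tightly than 20 random labels would.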

Here is what I’ve learned:

  1. Sanjoy has developed sufficient and semi-necessary conditions for active learning given the assumptions of IID data and “realizability” (that one of the classifiers is a correct classifier).
  2. Nina, Alina, and I developed an algorithm for active learning relying on only the assumption of IID data. A draft is here.
  3. Nicolo, Claudio, and Luca showed that it is possible to do active learning in an entirely adversarial setting for linear threshold classifiers here. This was published a year or two ago and I recently learned about it.

All of these results are relatively ‘rough’: they don’t necessarily make good algorithms as stated (although the last one includes a few experiments). None of these results are directly comparable because the assumptions vary. Comparing the assumptions and the results leads to a number of remaining questions:

  1. Do the sufficient and semi-necessary conditions apply to the IID-only case? The adversarial case?
  2. Is there a generic algorithm for any hypothesis space that works in the fully adversarial setting?
  3. What are special cases of these algorithms which are computationally tractable and useful?

The Foundations of Active Learning workshop at NIPS should be a good place to discuss these questions.

3 Comments to “Progress in Active Learning”
  1. Anonymous says:

    Is the following result interesting? Assume your samples come from R^d, and you are allowed to generate new synthetic samples and ask the oracle for their labels; then any set of n [unlabeled] points in R^d can be labeled using O(log n) oracle queries. The constant depends badly on d, however…

  2. jl says:

    I don’t believe this holds, as written. I suspect there is some constraint such as “the labels obey a perceptron”.

    Given some constraint of this form, the result sounds cute, but maybe not useful. Can you think of a real-world scenario where you would use it?

  3. Anonymous says:

    Sure. Assume a linear classifier.

    And no, I don’t see any application, but heck, even in R^4, you might need to query all points’ labels to get a fully correct labeling, so really, you cannot hope for much better without further assumptions or allowing synthetic examples.
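The simplest instance of the claim in this thread, d = 1 with labels from an unknown threshold, can be made concrete: plain binary search over the sorted pool labels all n points with O(log n) queries (in one dimension, synthetic points are not even needed). This sketch is an editorial illustration, not the commenter's construction, and does not reconstruct the R^d argument:

```python
def label_by_binary_search(points, oracle):
    """Assume oracle(x) = 1 iff x >= t for some unknown threshold t.
    Binary search for the first positive point among the sorted pool,
    then infer every label; uses O(log n) oracle calls."""
    pts = sorted(points)
    lo, hi = 0, len(pts)  # invariant: pts[:lo] -> 0, pts[hi:] -> 1
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(pts[mid]) == 1:
            hi = mid
        else:
            lo = mid + 1
    return {x: (1 if i >= lo else 0) for i, x in enumerate(pts)}

# Count queries on a pool of 100 points with hidden threshold 0.5.
calls = 0
def oracle(x):
    global calls
    calls += 1
    return 1 if x >= 0.5 else 0

labels = label_by_binary_search([i / 100 for i in range(100)], oracle)
```

Here 100 points are labeled with at most 7 oracle calls, versus 100 calls for exhaustive querying; the realizability assumption (a threshold classifier fits the data) is what makes this possible.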

