Learning Theory, by assumption

One way to organize learning theory is by assumption (in the assumption = axiom sense), from no assumptions to many assumptions. As you travel down this list, the statements become stronger, but the scope of applicability decreases.

  1. No assumptions
    1. Online learning There exists a meta prediction algorithm which competes well with the best element of any set of prediction algorithms (see the sketch after this list).
    2. Universal Learning Using a “bias” of 2^(-description length of a Turing machine) in learning is equivalent to all other computable biases, up to a constant.
    3. Reductions The ability to predict well on classification problems is equivalent to the ability to predict well on many other learning problems.
  2. Independent and Identically Distributed (IID) Data
    1. Performance Prediction Based upon past performance, you can predict future performance.
    2. Uniform Convergence Performance prediction works even after choosing classifiers based on the data from large sets of classifiers.
  3. IID and partial constraints on the data source
    1. PAC Learning There exist fast algorithms for learning when all examples agree with some function in a function class (such as monomials, decision lists, etc.).
    2. Weak Bayes The Bayes law learning algorithm will eventually reach the right solution as long as the right solution has a positive prior.
  4. Strong Constraints on the Data Source
    1. Bayes Learning When the data source is drawn from the prior, using Bayes law is optimal.
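
To make the online learning claim concrete, here is a minimal sketch of an exponentiated-weights (Hedge-style) meta predictor over a fixed set of experts. The experts, learning rate, and log-loss setup are illustrative assumptions, not details from the list above.

```python
import math

def hedge(predictions, outcomes, eta=0.5):
    """Exponentiated-weights meta predictor over a fixed set of experts.

    predictions[t][i] is expert i's probability that outcome t is 1;
    outcomes[t] is the observed label in {0, 1}.  Returns the meta
    algorithm's total log loss and each expert's total log loss.
    """
    n = len(predictions[0])
    log_w = [0.0] * n                      # log weights, start uniform
    meta_loss = 0.0
    expert_loss = [0.0] * n
    for p_t, y in zip(predictions, outcomes):
        # Normalize weights and form the weighted-average prediction.
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]
        total = sum(w)
        p_meta = sum(wi * pi for wi, pi in zip(w, p_t)) / total
        # Log loss of the meta prediction and of each expert.
        meta_loss += -math.log(p_meta if y == 1 else 1 - p_meta)
        for i, pi in enumerate(p_t):
            loss_i = -math.log(pi if y == 1 else 1 - pi)
            expert_loss[i] += loss_i
            log_w[i] -= eta * loss_i       # multiplicative update
    return meta_loss, expert_loss
```

On any sequence, including an adversarially chosen one, the meta algorithm's total loss stays within an additive regret term of the best expert's total loss, which is the sense in which it “competes well” with the best element of the set.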

This doesn’t include all forms of learning theory, because I do not know them all. If there are other bits you know of, please comment.

Kolmogorov Complexity and Googling

Machine learning makes the New Scientist. From the article:

COMPUTERS can learn the meaning of words simply by plugging into Google. The finding could bring forward the day that true artificial intelligence is developed….
But Paul Vitanyi and Rudi Cilibrasi of the National Institute for Mathematics and Computer Science in Amsterdam, the Netherlands, realised that a Google search can be used to measure how closely two words relate to each other. For instance, imagine a computer needs to understand what a hat is.

You can read the paper at KC Google.
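
For a rough sense of the mechanics, here is a sketch of the normalized Google distance described in the paper. The hit counts and the total page count N below are made-up placeholders standing in for actual Google query results.

```python
import math

def ngd(fx, fy, fxy, N):
    """Normalized Google distance computed from raw hit counts.

    fx, fy : number of pages containing each term alone
    fxy    : number of pages containing both terms
    N      : total number of pages indexed (assumed / estimated)
    Smaller values mean the terms co-occur more than chance,
    i.e. they are semantically closer.
    """
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(N) - min(lx, ly))

# Hypothetical counts just to show the shape of the computation:
# two related words co-occur often, so their distance comes out small.
print(ngd(fx=2.0e7, fy=1.5e8, fxy=8.0e6, N=1.0e10))
```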

Hat tip: Kolmogorov Mailing List

Any thoughts on the paper?

NIPS: Online Bayes

One nice use for this blog is to consider and discuss papers that have appeared at recent conferences. I really enjoyed Andrew Ng and Sham Kakade’s paper Online Bounds for Bayesian Algorithms. From the paper:

The philosophy taken in the Bayesian methodology is often at odds with
that in the online learning community…. the online learning setting
makes rather minimal assumptions on the conditions under which the
data are being presented to the learner —usually, Nature could provide
examples in an adversarial manner. We study the performance of
Bayesian algorithms in a more adversarial setting… We provide
competitive bounds when the cost function is the log loss, and we
compare our performance to the best model in our model class (as in
the experts setting).

It’s a very nice analysis of some of my favorite algorithms that all hinges on a beautiful theorem:

Let Q be any distribution over parameters theta. Then for all sequences S:

L_{Bayes}(S) ≤ L_Q(S) + KL(Q||P)

where P is our prior, L_{Bayes} is the log loss of the Bayes algorithm run online, and L_Q is the expected log loss with respect to an arbitrary distribution Q over parameters.
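
To make the theorem concrete, here is a small sketch with a made-up finite class of Bernoulli experts and a made-up sequence. It runs the Bayes mixture online under log loss and checks L_{Bayes}(S) ≤ L_Q(S) + KL(Q||P) for an arbitrary Q over the same parameter set.

```python
import math

# A finite parameter class: each theta is the Bernoulli probability of a 1.
thetas = [0.1, 0.5, 0.9]
prior = [1/3, 1/3, 1/3]                 # P, the prior
S = [1, 1, 0, 1, 1, 1, 0, 1]            # an arbitrary (even adversarial) sequence

def log_loss(p, y):
    return -math.log(p if y == 1 else 1 - p)

# L_Bayes(S): run Bayes law online, predicting with the posterior mixture.
post = prior[:]
L_bayes = 0.0
for y in S:
    p_mix = sum(w * t for w, t in zip(post, thetas))
    L_bayes += log_loss(p_mix, y)
    post = [w * (t if y == 1 else 1 - t) for w, t in zip(post, thetas)]
    z = sum(post)
    post = [w / z for w in post]

# L_Q(S): expected log loss under a fixed arbitrary distribution Q over parameters.
Q = [0.0, 0.2, 0.8]
L_Q = sum(q * sum(log_loss(t, y) for y in S) for q, t in zip(Q, thetas))

# KL(Q || P)
KL = sum(q * math.log(q / p) for q, p in zip(Q, prior) if q > 0)

print(L_bayes, "<=", L_Q + KL, L_bayes <= L_Q + KL)
```

The bound holds for every sequence and every choice of Q, with no assumption that the data come from the prior, which is what makes it an online-style guarantee for a Bayesian algorithm.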

Any thoughts? Any other papers you think we should read?