Fast Physics for Learning

While everyone is silently working on ICML submissions, I found this discussion about a fast physics simulator chip interesting from a learning viewpoint. In many cases, learning attempts to predict the outcome of physical processes, so access to a fast simulator for those processes could be quite helpful. Bayesian learning in particular may benefit directly, while many other algorithms (such as support vector machines) might be greatly sped up.
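
To make the Bayesian connection concrete, here is a minimal sketch of one way a fast simulator could plug into learning: rejection-style approximate Bayesian computation, where each candidate parameter costs one simulator call, so a several-orders-of-magnitude speedup translates directly into more posterior samples. The simulate function below is a hypothetical stand-in for the chip, and the whole setup (Gaussian noise, mean-matching acceptance rule) is my own illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    # Hypothetical stand-in for a hardware-accelerated physics simulator:
    # here it just draws n noisy observations centered at theta.
    return theta + rng.normal(0.0, 1.0, size=n)

observed = simulate(2.5)  # pretend this came from the real physical process

def abc_posterior(observed, n_draws=20_000, tol=0.1):
    # Rejection ABC: propose parameters from a flat prior, keep those whose
    # simulated data matches the observed summary statistic. One simulator
    # call per proposal, so simulator speed is the bottleneck.
    target = observed.mean()
    thetas = rng.uniform(-10.0, 10.0, size=n_draws)
    accepted = [t for t in thetas if abs(simulate(t).mean() - target) < tol]
    return np.array(accepted)

posterior = abc_posterior(observed)
print(f"posterior mean {posterior.mean():.2f}, sd {posterior.std():.2f}")
```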

The biggest drawback is that writing software for these odd architectures is always difficult and time-consuming, but a several-orders-of-magnitude speedup might make it worthwhile.

Prior, “Prior” and Bias

Many different ways of reasoning about learning exist, and many of these suggest that some method of saying “I prefer this predictor to that predictor” is useful and necessary. Examples include Bayesian reasoning, prediction bounds, and online learning. One difficulty which arises is that the manner and meaning of saying “I prefer this predictor to that predictor” differ from one framework to the next.

  1. Prior (Bayesian) A prior is a probability distribution over a set of distributions which expresses the degree of belief that each distribution is the one generating the data.
  2. “Prior” (Prediction bounds & online learning) The “prior” is a measure over a set of classifiers which expresses the degree to which you hope each classifier will predict well.
  3. Bias (Regularization, early termination of neural network training, etc…) The bias is some way, often specified only implicitly by an algorithm, of preferring one predictor to another. The sketch after this list shows one case where a prior and a bias coincide.
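
These notions are not unrelated. A minimal numerical sketch (my own illustration, assuming a linear model with Gaussian noise and squared loss): a Gaussian prior on the weights makes the Bayesian MAP estimate identical to ridge regression, so the prior of viewpoint 1 induces exactly the bias of viewpoint 3.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear-regression data.
n, d = 100, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

sigma2 = 0.01        # noise variance
tau2 = 1.0           # prior variance on each weight
lam = sigma2 / tau2  # the regularization strength the prior induces

# Bayesian view: MAP estimate of w under a N(0, tau2 * I) prior.
# Bias view: ridge regression minimizing ||y - Xw||^2 + lam * ||w||^2.
# Both have the same closed-form solution.
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print(w)
```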

This only scratches the surface; there are yet more subtleties. For example, the meaning of probability (as mentioned in the post on the meaning of probability) shifts from one viewpoint to another.

NIPS: Online Bayes

One nice use for this blog is to consider and discuss papers that have appeared at recent conferences. I really enjoyed Andrew Ng and Sham Kakade’s paper Online Bounds for Bayesian Algorithms. From the paper:

The philosophy taken in the Bayesian methodology is often at odds with
that in the online learning community…. the online learning setting
makes rather minimal assumptions on the conditions under which the
data are being presented to the learner —usually, Nature could provide
examples in an adversarial manner. We study the performance of
Bayesian algorithms in a more adversarial setting… We provide
competitive bounds when the cost function is the log loss, and we
compare our performance to the best model in our model class (as in
the experts setting).

It’s a very nice analysis of some of my favorite algorithms, and it all hinges on a beautiful theorem:

Let Q be any distribution over parameters theta. Then for all sequences S:

L_{Bayes}(S) \leq L_Q(S) + KL(Q \| P)

where P is our prior, L_{Bayes}(S) is the cumulative log loss of the Bayesian algorithm run online on the sequence S, and L_Q(S) is the expected cumulative log loss when the parameters are drawn from the arbitrary distribution Q.
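
To see the theorem in action, here is a small numerical check (my own sketch, not the paper’s experiments), assuming a finite class of Bernoulli experts with a uniform prior P. The cumulative log loss of the online Bayes mixture telescopes to minus the log marginal likelihood of the whole sequence, which makes the bound easy to verify against randomly drawn comparison distributions Q.

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite model class: Bernoulli experts with fixed biases theta.
thetas = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
P = np.full(len(thetas), 1.0 / len(thetas))  # uniform prior

# An arbitrary binary sequence S (the bound holds for every sequence).
S = rng.integers(0, 2, size=200)

# Log-likelihood of the whole sequence under each expert.
log_lik = np.array([np.sum(S * np.log(t) + (1 - S) * np.log(1 - t))
                    for t in thetas])

# L_Bayes(S): the online Bayes mixture's cumulative log loss telescopes
# to -log of the marginal likelihood under the prior P.
L_bayes = -np.log(np.sum(P * np.exp(log_lik)))

# Check L_Bayes(S) <= L_Q(S) + KL(Q||P) for random choices of Q.
for _ in range(5):
    Q = rng.dirichlet(np.ones(len(thetas)))
    L_Q = np.sum(Q * -log_lik)      # expected log loss under Q
    KL = np.sum(Q * np.log(Q / P))  # KL(Q || P)
    print(f"{L_bayes:.2f} <= {L_Q:.2f} + {KL:.2f}")
    assert L_bayes <= L_Q + KL + 1e-9
```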

Any thoughts? Any other papers you think we should read?