Interesting Papers at NIPS 2006

Here are some papers that I found surprisingly interesting.

  1. Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, Greedy Layer-wise Training of Deep Networks. Empirically investigates some of the design choices behind deep belief networks.
  2. Long Zhu, Yuanhao Chen, Alan Yuille, Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing. An unsupervised method for detecting objects using simple feature filters that works remarkably well on the (supervised) Caltech-101 dataset.
  3. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira, Analysis of Representations for Domain Adaptation. This is the first analysis I’ve seen of learning when the training samples are drawn from a different distribution than the evaluation distribution that depends on reasonably measurable quantities (a rough sketch of the bound is below).

All of these papers turn out to have a common theme—the power of unlabeled data to do generically useful things.
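For the third paper, the flavor of the main result (my rough paraphrase from memory; see the paper for the precise statement and its conditions) is a bound on error under the evaluation (target) distribution in terms of quantities estimable from samples:

$$ \epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; d_{\mathcal{H}}(\tilde{\mathcal{D}}_S, \tilde{\mathcal{D}}_T) \;+\; \lambda $$

where $\epsilon_S(h)$ and $\epsilon_T(h)$ are the errors of hypothesis $h$ on the source (training) and target (evaluation) distributions, $d_{\mathcal{H}}$ is a divergence between the two distributions as seen through the chosen representation, and $\lambda$ collects the unavoidable error terms, small when the representation admits a good predictor on both domains. The point relevant to the theme above is that the divergence term can be estimated from finite unlabeled samples of both domains.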

5 Replies to “Interesting Papers at NIPS 2006”

  1. The first paper and its references are very interesting — my “hunch” is that auto-associative structures (“auto-encoders”) are necessary primitives for learning. Has anyone experimented with using a nearest neighbor algorithm over the hidden unit activations of an auto-associative memory?
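A minimal sketch of the experiment the first reply describes, assuming a modern scikit-learn setup (the dataset, hidden layer size, and every other choice here are illustrative assumptions, not from the papers or the comment): train an auto-associative network by regressing inputs onto themselves, then run nearest neighbor classification over the hidden-unit activations.

```python
# Sketch: nearest neighbor over the hidden-unit activations of an auto-encoder.
# All choices (digits dataset, 32 hidden units, 1-NN) are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values into [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Auto-associative ("auto-encoder") training: the labels are never used here.
ae = MLPRegressor(hidden_layer_sizes=(32,), activation="relu",
                  max_iter=2000, random_state=0)
ae.fit(X_tr, X_tr)

def hidden_codes(model, X):
    # Forward pass through the first (hidden) layer only.
    return np.maximum(0.0, X @ model.coefs_[0] + model.intercepts_[0])

# Nearest neighbor in the learned hidden representation.
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(hidden_codes(ae, X_tr), y_tr)
print("1-NN accuracy on hidden codes:", knn.score(hidden_codes(ae, X_te), y_te))
```

Comparing that score against 1-NN run directly on the raw pixels would be the natural control for the question asked in the reply.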

  2. It is good that Yuille’s almost exclusive collaboration with the Chinese is working out. So great is his China fetish that he will even work with students who have failed the qualifier (Long Zhu).