Machine Learning (Theory)

12/12/2006

Interesting Papers at NIPS 2006

Here are some papers that I found surprisingly interesting.

  1. Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, Greedy Layer-wise Training of Deep Networks. Empirically investigates some of the design choices behind deep belief networks.
  2. Long Zhu, Yuanhao Chen, Alan Yuille, Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing. An unsupervised method for detecting objects using simple feature filters that works remarkably well on the (supervised) Caltech-101 dataset.
  3. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira, Analysis of Representations for Domain Adaptation. This is the first analysis I’ve seen of learning when the samples are drawn from a distribution different from the evaluation distribution, and it depends on reasonable, measurable quantities.

All of these papers turn out to have a common theme—the power of unlabeled data to do generically useful things.
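To make that theme concrete, here is a minimal sketch of the greedy layer-wise idea behind the first paper: each layer is trained as an autoencoder on unlabeled data, and its codes become the input to the next layer. Everything here (layer sizes, learning rate, squared-error loss, tied weights) is an illustrative assumption rather than the paper's exact setup, which the authors study with restricted Boltzmann machines as well as autoencoder variants.

```python
# Minimal NumPy sketch of greedy layer-wise pretraining on unlabeled data.
# All hyperparameters below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=50):
    """Train a one-hidden-layer autoencoder (tied weights, squared-error loss) on X."""
    n_visible = X.shape[1]
    W = rng.normal(0, 0.1, (n_visible, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_visible)
    for _ in range(epochs):
        H = sigmoid(X @ W + b_h)          # encode
        R = sigmoid(H @ W.T + b_v)        # decode with tied weights
        err = R - X                       # reconstruction error
        d_R = err * R * (1 - R)           # gradient at decoder pre-activation
        d_H = (d_R @ W) * H * (1 - H)     # gradient at encoder pre-activation
        grad_W = X.T @ d_H + d_R.T @ H    # both paths through the tied weights
        W -= lr * grad_W / len(X)
        b_h -= lr * d_H.mean(axis=0)
        b_v -= lr * d_R.mean(axis=0)
    return W, b_h

def greedy_pretrain(X_unlabeled, layer_sizes):
    """Stack autoencoders: each layer is trained on the previous layer's codes."""
    layers, H = [], X_unlabeled
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(H, n_hidden)
        layers.append((W, b))
        H = sigmoid(H @ W + b)            # representation fed to the next layer
    return layers

def encode(X, layers):
    H = X
    for W, b in layers:
        H = sigmoid(H @ W + b)
    return H

# Toy usage: pretrain on unlabeled data, then hand the top-level features
# to any supervised learner (here we only print their shape).
X_unlabeled = rng.random((500, 64))
layers = greedy_pretrain(X_unlabeled, layer_sizes=[32, 16])
print(encode(X_unlabeled[:5], layers).shape)   # (5, 16)
```

The point of the greedy schedule is that no labels are needed until the very last step: the stacked encoders can be learned from unlabeled data alone and then fine-tuned or used as fixed features for a supervised task.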

5 Comments to “Interesting Papers at NIPS 2006”
  1. Anonymous says:

    Very useful post!

  2. Interesting! I mentioned two of your three papers in this blog post.

  3. Bill_Lang says:

    Useful information. Is learning from unlabeled data the trend in machine learning?

  4. Gordon Rios says:

    The first paper and its references are very interesting — my “hunch” is that auto-associative structures (“auto-encoders”) are necessary primitives for learning. Has anyone experimented with using a nearest neighbor algorithm over the hidden unit activations of an auto-associative memory? (A rough sketch of this idea appears below the comments.)

  5. anonymous says:

    It is good that Yuille’s almost exclusive collaborations with Chinese researchers are working out. So great is his China fetish that he will even work with students who have failed the qualifier (Long Zhu).
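
For what it’s worth, here is a rough sketch of the experiment Gordon asks about in comment 4: train a small auto-associative network, take its hidden unit activations as the representation, and compare a nearest neighbor classifier on those codes against one on the raw input. The dataset (scikit-learn’s digits), the network size, and k are assumptions for illustration only; this is not from any of the papers above.

```python
# Sketch: k-NN over the hidden unit activations of a trained autoencoder,
# compared against k-NN on the raw inputs.  All choices below are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0                                  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One-hidden-layer autoencoder: the network is trained to reproduce its input.
ae = MLPRegressor(hidden_layer_sizes=(32,), activation="logistic",
                  max_iter=2000, random_state=0)
ae.fit(X_tr, X_tr)

def hidden_codes(model, X):
    """Hidden-layer activations (logistic) for input X."""
    return 1.0 / (1.0 + np.exp(-(X @ model.coefs_[0] + model.intercepts_[0])))

# Nearest neighbor in the learned code space versus raw pixel space.
knn_codes = KNeighborsClassifier(n_neighbors=5).fit(hidden_codes(ae, X_tr), y_tr)
knn_raw = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("k-NN on hidden codes:", knn_codes.score(hidden_codes(ae, X_te), y_te))
print("k-NN on raw pixels:  ", knn_raw.score(X_te, y_te))
```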
