Some NIPS papers

Here is a set of papers that I found interesting (and why).

  1. A PAC-Bayes approach to the Set Covering Machine improves the set covering machine. The set covering machine approach is a new way to do classification, characterized by a very close connection between theory and algorithm. At this point, the approach seems to be competing well with SVMs along nearly all dimensions: similar computational speed, similar accuracy, stronger learning theory guarantees, a more general information source (a kernel has strictly more structure than a metric), and more sparsity. Developing a new classification algorithm is not easy, but the results so far are encouraging.
  2. Off-Road Obstacle Avoidance through End-to-End Learning and Learning Depth from Single Monocular Images both effectively showed that depth information can be predicted from camera images (using notably different techniques). This ability is strongly enabling because cameras are cheap, tiny, and light, and potentially provide longer-range distance information than the laser range finders people traditionally use.
  3. The Forgetron: A Kernel-Based Perceptron on a Fixed Budget proved that a bounded memory kernelized perceptron algorithm (which might be characterizable as “stochastic functional gradient descent with weight decay and truncation”) competes well with respect to an unbounded memory algorithm when the data contains a significant margin. Roughly speaking, this implies that the perceptron approach can learn arbitrary (via the kernel) reasonably simple concepts from unbounded quantities of data.
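To make the “truncation” idea in item 3 concrete, here is a minimal sketch of a budget kernel perceptron: on each mistake it adds the example as a support vector, and when the budget overflows it simply forgets the oldest one. This is a simplification for illustration only (the actual Forgetron also shrinks weights before removal, and the function and parameter names below are my own), but it shows the bounded-memory mechanism:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two vectors
    return np.exp(-gamma * np.sum((x - y) ** 2))

def budget_kernel_perceptron(stream, budget=50, kernel=rbf):
    """Online kernel perceptron keeping at most `budget` support vectors.

    On a mistake, the example joins the support set; on overflow, the
    oldest support vector is dropped (truncation).  Returns the final
    support set and the mistake count.
    """
    support = []   # list of (x, y) pairs defining the hypothesis
    mistakes = 0
    for x, y in stream:
        # prediction is the sign of the kernel expansion over stored examples
        score = sum(yi * kernel(xi, x) for xi, yi in support)
        if y * score <= 0:           # mistake: update the hypothesis
            mistakes += 1
            support.append((x, y))
            if len(support) > budget:
                support.pop(0)       # forget the oldest support vector
    return support, mistakes
```

When the stream has a large margin, the mistake bound stays small even though memory never grows past the budget, which is the essence of the paper's guarantee.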

In addition, Sebastian Thrun’s “How I won the Darpa Grand Challenge” and Sanjoy Dasgupta’s “Coarse Sample Complexity for Active Learning” talks were both quite interesting.

(Feel free to add any that you found interesting.)

8 Replies to “Some NIPS papers”

  1. Hi,
    I would like to thank you for sharing these papers with us. What follows is quite unrelated to the post, but is a request that may be of general interest for the readers.
    The ease of sharing and disseminating information is what I love about research blogging. I also think that tagging posts with machine_learning or keywords of the like would be of great help to people like me who subscribe to technorati tags. The idea being that there is a central source of great information that can be tapped into merely by looking for suitably tagged posts. I, for instance, subscribe to rss feeds of ‘neuroscience’ tags on technorati and, and regularly find great links through that. As I know from the number of subscribers to your blog on bloglines, yours is widely read within the machine learning circles and I request you to think about the idea of tagging posts with something specific and write about that idea if you think it worthwhile. Your word will definitely make a wider set of people think seriously about this.

  2. I had not realized that tagging in a context beyond the blog itself was appropriate.

    Most posts here are about machine learning, but some are about research in general, so such posts might be “machine_learning” or “research” in the wider context.

    Presumably, the tag set of a post should essentially obey a directory structure—including elements from least specific to most specific.


  3. Yes, tagging is great but even more so when it is managed by appropriate tools and within a focused context.
    May I suggest that you open an account on
    Much, much more there than in technorati or…

  4. The Off-Road paper got depth (well, something like depth) from a stereo pair. State-of-the-art computer vision algorithms can do this, with little or no machine learning required at all. It’s a very nice paper, but not because of the depth extraction per se.

    The paper on depth extraction from a monocular image was very nice. It seems that the system was “recognizing” a few features here and there (since it was a linear data likelihood, perhaps just mapping from spatial frequencies to depths), and using spatial MRF terms to fill in the missing areas. The results were very impressive. I asked the authors whether it would work if you turned the images upside-down, and they said they did not think so.

  5. I talked to Yann about this. He said that when the same monocular image was inserted into each of the stereo receptors, the system worked almost as well.

  6. Hi,
    Did I meet you guys at NIPS? It is interesting to see this discussion here. Here are some more papers in the field of depth learning from images that you may find interesting:
    High Speed Obstacle Avoidance using Monocular Vision and Reinforcement Learning, Jeff Michels, Ashutosh Saxena, Andrew Y. Ng. Proceedings of the Twenty-second International Conference on Machine Learning, ICML 2005
    which uses a single monocular image to drive a car (presented at ICML this past summer).
    And this one:
    Potetz, B., Lee, T.S. (2006) Scaling Laws in Natural Scenes and the Inference of 3D Shape. NIPS — Advances in Neural Information Processing Systems 19, MIT Press.
    about 3-D reconstruction using shape-from-shading-like techniques.
