Interesting things at NIPS 2009

Several papers at NIPS caught my attention.

  1. Elad Hazan and Satyen Kale, Online Submodular Minimization. They define an algorithm for online optimization of submodular functions with regret guarantees. This places submodular optimization roughly on par with online convex optimization as a tractable setting for online learning.
  2. Elad Hazan and Satyen Kale, On Stochastic and Worst-Case Models of Investing. At its core, this is yet another example of modifying worst-case online learning to deal with variance, but the application to financial models is particularly cool, and it seems plausibly superior to other common approaches to financial modeling.
  3. Mark Palatucci, Dean Pomerleau, Tom Mitchell, and Geoff Hinton, Zero-Shot Learning with Semantic Output Codes. The goal here is predicting a label in a multiclass supervised setting when the label never occurs in the training data. They have some basic analysis and a nice application to fMRI brain reading.
  4. Shobha Venkataraman, Avrim Blum, Dawn Song, Subhabrata Sen, and Oliver Spatscheck, Tracking Dynamic Sources of Malicious Activity at Internet Scales. This is a plausible combination of worst-case learning algorithms in a tree-like structure over IP space to track and predict bad IPs (a loose sketch of the flavor follows this list). Their empirical results look quite good to me, and there are many applications where this prediction problem needs to be solved.
  5. Kamalika Chaudhuri, Daniel Hsu, and Yoav Freund, A Parameter-free Hedging Algorithm. This paper is about eliminating the learning rate parameter from online learning algorithms. While that’s certainly useful, the approach taken involves a double-exponential rather than a single-exponential potential, which is strange and potentially useful in many other places (a sketch contrasting the two potentials also follows this list).
  6. Bing Bai, Jason Weston, David Grangier, Ronan Collobert, Kunihiko Sadamasa, Yanjun Qi, and Corinna Cortes, Polynomial Semantic Indexing. This is an empirically improved algorithm for learning ranking functions based on (query, document) content. The sexy Semantic name is justified because the approach is not based on syntactic matching of query to document.
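
To give the flavor of item 4, here is a loose sketch in hypothetical code of my own, not the authors' algorithm: it only shows multiplicative-weight experts over nested IP prefixes, whereas the paper maintains a dynamically adapted tree with formal guarantees. The class and parameter names are assumptions for illustration.

```python
import ipaddress
import math

class PrefixExperts:
    """Loose sketch: treat each enclosing IP prefix (/8, /16, /24) as an
    expert voting on whether addresses under it are malicious, with
    multiplicative weight updates from observed labels."""

    LEVELS = (8, 16, 24)

    def __init__(self, eta=0.5):
        self.eta = eta          # multiplicative update step
        self.weights = {}       # prefix -> weight, lazily initialized to 1.0

    def _prefixes(self, ip):
        addr = ipaddress.ip_address(ip)
        return [str(ipaddress.ip_network(f"{addr}/{bits}", strict=False))
                for bits in self.LEVELS]

    def score(self, ip):
        # Higher score = more past malicious activity in nearby prefixes.
        return sum(self.weights.get(p, 1.0) for p in self._prefixes(ip))

    def update(self, ip, malicious):
        # Reward prefixes containing observed bad IPs, penalize the rest.
        sign = 1.0 if malicious else -1.0
        for p in self._prefixes(ip):
            self.weights[p] = self.weights.get(p, 1.0) * math.exp(self.eta * sign)
```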

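For item 5, a minimal sketch of the contrast between the two potentials, again in my own hypothetical code. Standard Hedge weights experts through a single-exponential potential and needs a tuned learning rate; the parameter-free variant weights each expert by the derivative of a double-exponential potential exp(max(0, R)^2 / (2c)). Passing c as an argument is a simplification; in the paper it is solved for each round rather than tuned.

```python
import numpy as np

def hedge_weights(regrets, eta):
    """Standard Hedge: single-exponential potential exp(eta * R),
    requiring a learning rate eta."""
    z = eta * regrets
    w = np.exp(z - z.max())  # shift for numerical stability
    return w / w.sum()

def normalhedge_weights(regrets, c):
    """Parameter-free variant: weight each expert by the derivative of the
    double-exponential potential exp(max(0, R)^2 / (2c)), i.e.
    (R_+ / c) * exp(R_+^2 / (2c)) where R_+ = max(0, R)."""
    r = np.maximum(regrets, 0.0)
    log_pot = r ** 2 / (2.0 * c)
    w = (r / c) * np.exp(log_pot - log_pot.max())  # shift for stability
    if w.sum() == 0.0:  # no expert has positive regret yet: play uniform
        return np.full(len(regrets), 1.0 / len(regrets))
    return w / w.sum()
```

Note how the double-exponential potential puts zero weight on experts the learner is already beating (negative regret), a qualitative difference from Hedge.
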
I also found the future publication models discussion interesting. The follow-up post here has details and further discussion.

At the workshops, I was deeply confronted with the problem of too many interesting workshops and too little time to attend them. Two talks stood out for me:

  1. Carlos Guestrin gave a talk in the interactive machine learning workshop on Turning Down the Noise in the Blogosphere by Khalid El-Arini, Gaurav Veda, Dafna Shahaf, and Carlos Guestrin, which I missed at KDD this year. The paper discusses the use of exponential-weight online learning algorithms to rerank blog posts based on user-specific interests (a rough sketch of the mechanism follows this list). It comes with a demonstration website where you can test it out.
  2. Leslie Valiant gave a talk on representing and operating on concepts in a brain-like fashion. The style of representation and algorithm involves distributed representations on sparse graphs, an approach which is relatively unfamiliar, although experience with Bloom filters and, in machine learning, with learning through hashing functions has sharpened my intuition a bit. The talk seemed to cover Memorization and Association on a Realistic Neural Model in Neural Computation as well as A First Experimental Demonstration of Massive Knowledge Infusion at KR.
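
As a rough illustration of the reranking mechanism in the first talk, here is a minimal exponential-weights sketch under my own assumptions; the class name, feature setup, and update rule are hypothetical, and the actual system does more (in particular, as I understand it, it also selects posts to cover the day's stories).

```python
import numpy as np

class ExpWeightReranker:
    """Hypothetical sketch: one weight per content feature for a user;
    posts are scored by the weighted features, and weights are updated
    multiplicatively from like/dislike feedback."""

    def __init__(self, num_features, eta=0.1):
        self.eta = eta  # step size of the multiplicative update
        self.w = np.full(num_features, 1.0 / num_features)

    def rank(self, post_features):
        # post_features: (n_posts, num_features) array; returns post
        # indices ordered best-match-first for this user.
        return np.argsort(-(post_features @ self.w))

    def update(self, features, liked):
        # Exponential-weights step from a single feedback signal.
        sign = 1.0 if liked else -1.0
        self.w *= np.exp(self.eta * sign * features)
        self.w /= self.w.sum()  # renormalize to a distribution

# Example: two posts over three hypothetical topic features.
reranker = ExpWeightReranker(num_features=3)
posts = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, 0.0]])
order = reranker.rank(posts)
reranker.update(posts[order[0]], liked=True)  # user liked the top post
```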

3 Replies to “Interesting things at NIPS 2009”

  1. ..2010…wow, what’s your research on, a time machine maybe?
    Can you list some interesting papers from ICML 2020?

    1. Heh. I was late putting it up, and NIPS has a history of date confusion due to how they publish. I’ll tweak the title.

      But your suggestion is actually pretty good. Which papers do you expect to see in 10 years?
