Machine Learning (Theory)

1/4/2017

EWRL and NIPS 2016

I went to the European Workshop on Reinforcement Learning and NIPS last month and saw several interesting things.

At EWRL, I particularly liked the talks from:

  1. Remi Munos on off-policy evaluation
  2. Mohammad Ghavamzadeh on learning safe policies
  3. Emma Brunskill on optimizing biased-but-safe estimators (sense a theme?)
  4. Sergey Levine on low-sample-complexity applications of RL in robotics.

My talk is here. Overall, this was a well-organized workshop with diverse and interesting subjects, with the only caveat being that they had to limit registration :-)

At NIPS itself, I found the poster sessions fairly interesting.

  1. Allen-Zhu and Hazan had a new notion of a reduction (video).
  2. Zhao, Poupart, and Gordon had a new way to learn Sum-Product Networks.
  3. Ho, Littman, MacGlashan, Cushman, and Austerweil had a paper on how “Showing” is different from “Doing”.
  4. Toulis and Parkes had a paper on estimation of long term causal effects.
  5. Rae, Hunt, Danihelka, Harley, Senior, Wayne, Graves, and Lillicrap had a paper on large memories with neural networks.
  6. Hardt, Price, and Srebro had a paper on Equal Opportunity in ML.

Format-wise, I thought two sessions were better than one, but I really would have preferred more. The recorded spotlights are also pretty cool.

The NIPS workshops were great, although I was somewhat reminded of kindergarten soccer in terms of lopsided attendance. This may be inevitable given how hot the field is, but I think it’s important for individual researchers to remember that:

  1. There are many important directions of research.
  2. You personally have a much higher chance of doing something interesting if everyone else is not doing it also.

During the workshops, I learned about Adam (a momentum form of Adagrad), testing ML systems, and that even TensorFlow is finally looking into synchronous updates for parallel learning (allreduce is the way).
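For readers who haven't seen it, the "momentum form of Adagrad" description can be made concrete. Below is a minimal sketch of one Adam step; the function name, hyperparameter defaults, and variable names are my own illustration, not from any particular talk:

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: a momentum average of the gradient (m)
    combined with an Adagrad/RMSProp-style per-coordinate
    scaling from squared gradients (v). t is the 1-based step count."""
    m = b1 * m + (1 - b1) * g          # first moment (the momentum part)
    v = b2 * v + (1 - b2) * g * g      # second moment (the Adagrad-like part)
    m_hat = m / (1 - b1 ** t)          # bias corrections for zero init
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# One step on a single parameter: the weight moves against the gradient.
w, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
w, m, v = adam_step(w, np.array([0.5]), m, v, t=1)
```

The bias-correction terms matter early on: with `m` and `v` initialized to zero, the first few raw moment estimates are shrunk toward zero, and dividing by `1 - b**t` undoes that.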

(edit: added one)

4 Comments to “EWRL and NIPS 2016”

  2. Daniel Seita says:

    Would it be possible to elaborate on your comments regarding:

    1. There are many important directions of research.
    2. You personally have a much higher chance of doing something interesting if everyone else is not doing it also.

    For instance, do you suggest that the “important directions” for research might be other areas of machine learning which are not getting as much attention?

    • jl says:

      Yes, although I don’t mean that in any sort of jealous way.

      Research is unforgiving of attention bubbles in the long term for most of those involved. A few people who lead the bubble receive enormous credit. An example for me is Isomap—for a while manifold learning was all the rage after that paper. But I’m not sure all the attention received was well spent for most of the people. Fundamentally, you really need to do something different to do something interesting.

      So, when you walk into a room with 1000 people, I think it’s worth asking yourself “will attending this really help me do something interesting?”. The answer might be “yes” sometimes, but it probably should not be “yes” always.
