I went to the European Workshop on Reinforcement Learning and NIPS last month and saw several interesting things.
At EWRL, I particularly liked the talks from:
- Remi Munos on off-policy evaluation
- Mohammad Ghavamzadeh on learning safe policies
- Emma Brunskill on optimizing biased-but-safe estimators (sense a theme?)
- Sergey Levine on low sample complexity applications of RL in robotics.
My talk is here. Overall, this was a well-organized workshop with diverse and interesting subjects; the only caveat is that they had to limit registration 🙂
At NIPS itself, I found the poster sessions fairly interesting.
- Allen-Zhu and Hazan had a new notion of a reduction (video).
- Zhao, Poupart, and Gordon had a new way to learn Sum-Product Networks
- Ho, Littman, MacGlashan, Cushman, and Austerweil had a paper on how “Showing” is different from “Doing”.
- Toulis and Parkes had a paper on estimation of long term causal effects.
- Rae, Hunt, Danihelka, Harley, Senior, Wayne, Graves, and Lillicrap had a paper on large memories with neural networks.
- Hardt, Price, and Srebro had a paper on Equal Opportunity in ML.
Format-wise, I thought 2 sessions were better than 1, but I really would have preferred more. The recorded spotlights are also pretty cool.
The NIPS workshops were great, although I was somewhat reminded of kindergarten soccer in terms of lopsided attendance. This may be inevitable given how hot the field is, but I think it’s important for individual researchers to remember that:
- There are many important directions of research.
- You personally have a much higher chance of doing something interesting if everyone else is not doing it also.
During the workshops, I learned about Adam (a momentum form of Adagrad), testing ML systems, and that even TensorFlow is finally looking into synchronous updates for parallel learning (allreduce is the way).
(edit: added one)
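Since Adam only gets a parenthetical above, here is a minimal sketch of the update it performs, following the standard Kingma & Ba formulation: a momentum average of the gradient plus an Adagrad-like per-coordinate scale from the squared gradients. The toy quadratic objective and all names here are my own illustration, not anything from the workshop talk.

```python
import numpy as np

def adam_step(theta, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: m is the momentum (first moment) of the gradient,
    v is the Adagrad-like second moment giving a per-coordinate step size."""
    m = beta1 * m + (1 - beta1) * g        # exponential moving average of g
    v = beta2 * v + (1 - beta2) * g * g    # exponential moving average of g^2
    m_hat = m / (1 - beta1 ** t)           # bias correction for the zero init
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = x^2 (gradient 2x), a made-up example.
theta, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 1001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, alpha=0.1)
print(theta)  # ends near the minimum at 0
```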
Would it be possible to elaborate on your comments regarding:
1. There are many important directions of research.
2. You personally have a much higher chance of doing something interesting if everyone else is not doing it also.
For instance, do you suggest that the “important directions” for research might be other areas of machine learning which are not getting as much attention?
Yes, although I don’t mean that in any sort of jealous way.
Research is unforgiving of attention bubbles in the long term for most of those involved. A few people who lead the bubble receive enormous credit. An example for me is Isomap: for a while, manifold learning was the rage after that paper. But I’m not sure all the attention received was well spent for most of the people. Fundamentally, you really need to do something different to do something interesting.
So, when you walk into a room with 1000 people, I think it’s worth asking yourself “will attending this really help me do something interesting?”. The answer might be “yes” sometimes, but it probably should not be “yes” always.
Love the Kindergarten soccer reference. Reminds me of Peter Thiel’s monopoly versus competition theme in “Zero to One”. Every time I think of that book title, I think of VW too.
Very nice. I would also like to visit such events. Can you tell me some forums or groups where such events are organized or discussed?
Siddhesh-Codingplex-C programming
So much “computer vision” ML on social media, and it’s nearly all GPU-based. I see the clusters of $9,000 GPUs, and I always think about how VW does more with less.
Even Azure has added several new GPU options in the last 6 months. I can’t wait to see how VW and the MWTDS carve out their niche or compete in spaces adjacent to the GPU hype.
Aside, I have a Neural Style Transfer service stood up for an interactive demo here:
http://StyleMyImage.com
It’s a fun toy for making cool images. I still love using VW more than GPUs anytime it makes sense for me to do so.