I went to the European Workshop on Reinforcement Learning and NIPS last month and saw several interesting things.
At EWRL, I particularly liked the talks from:
- Remi Munos on off-policy evaluation
- Mohammad Ghavamzadeh on learning safe policies
- Emma Brunskill on optimizing biased-but-safe estimators (sense a theme?)
- Sergey Levine on low sample complexity applications of RL in robotics.
My talk is here. Overall, this was a well-organized workshop with diverse and interesting subjects, the only caveat being that they had to limit registration.
At NIPS itself, I found the poster sessions fairly interesting.
- Allen-Zhu and Hazan had a new notion of a reduction (video).
- Zhao, Poupart, and Gordon had a new way to learn Sum-Product Networks
- Ho, Littman, MacGlashan, Cushman, and Austerweil had a paper on how “Showing” is different from “Doing”.
- Toulis and Parkes had a paper on estimation of long term causal effects.
- Rae, Hunt, Danihelka, Harley, Senior, Wayne, Graves, and Lillicrap had a paper on large memories with neural networks.
- Hardt, Price, and Srebro had a paper on Equal Opportunity in ML.
Format-wise, I thought the 2 poster sessions were better than 1, but I really would have preferred more. The recorded spotlights are also pretty cool.
The NIPS workshops were great, although I was somewhat reminded of kindergarten soccer in terms of lopsided attendance. This may be inevitable given how hot the field is, but I think it’s important for individual researchers to remember that:
- There are many important directions of research.
- You personally have a much higher chance of doing something interesting if everyone else is not doing it also.
During the workshops, I learned about Adam (a momentum form of Adagrad), testing ML systems, and that even TensorFlow is finally looking into synchronous updates for parallel learning (allreduce is the way).
(edit: added one)
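For anyone who hasn’t seen it, here is a minimal sketch of the Adam update in NumPy, just to unpack the “momentum form of Adagrad” description above: a momentum-style first-moment estimate combined with Adagrad/RMSProp-style per-coordinate scaling. The function and variable names are my own, and the hyperparameter defaults are the conventional ones from the Adam paper rather than anything specific to the workshop talks.

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: a momentum-style first moment plus Adagrad/RMSProp-style
    per-coordinate scaling from the second moment, with bias correction."""
    m = beta1 * m + (1 - beta1) * grad            # moving average of gradients (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2       # moving average of squared gradients (Adagrad-like)
    m_hat = m / (1 - beta1 ** t)                  # bias correction; t starts at 1
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # per-coordinate scaled step
    return w, m, v

# Toy usage: minimize a quadratic centered at (1, -2, 0.5).
w, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 201):
    grad = 2 * (w - np.array([1.0, -2.0, 0.5]))
    w, m, v = adam_update(w, grad, m, v, t)
```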