ICML is changing its constitution

Andrew McCallum has been leading an initiative to update the bylaws of IMLS, the organization which runs ICML. I expect most people aren’t interested in such details. However, the bylaws change rarely and can have an impact over a long period of time, so they do have some real importance. I’d like to hear comments from anyone with a particular interest before this year’s ICML.

In my opinion, the most important aspect of the bylaws is the at-large election of board members, which is preserved. Most of the changes between the old and new versions are aimed at better defining roles, committees, etc. to leave IMLS/ICML better organized.

Anyway, please comment if you have concerns or thoughts.

Machine Learning the Future Class

This spring, I taught a class on Machine Learning the Future at Cornell Tech covering a number of advanced topics in machine learning, including online learning, joint (structured) prediction, active learning, contextual bandit learning, logarithmic time prediction, and parallel learning. Each class was recorded from the laptop via Zoom, and I just uploaded the recordings to YouTube.

In some ways, this class is a follow-up to the large scale learning class I taught with Yann LeCun 4 years ago. The videos for that class were taken down(*), so these lectures both update and replace the shared subjects as well as covering some new ones.

Much of this material is fairly close to research, so to assist other machine learning lecturers around the world in digesting it, I’ve made all the source available as well. Feel free to use and improve.

(*) The NYU policy changed so that students could not be shown in classroom videos.

EWRL and NIPS 2016

I went to the European Workshop on Reinforcement Learning and NIPS last month and saw several interesting things.

At EWRL, I particularly liked the talks from:

  1. Remi Munos on off-policy evaluation
  2. Mohammad Ghavamzadeh on learning safe policies
  3. Emma Brunskill on optimizing biased-but-safe estimators (sense a theme?)
  4. Sergey Levine on low sample complexity applications of RL in robotics.

My talk is here. Overall, this was a well-organized workshop with diverse and interesting subjects, with the only caveat being that they had to limit registration 🙂

At NIPS itself, I found the poster sessions fairly interesting.

  1. Allen-Zhu and Hazan had a new notion of a reduction (video).
  2. Zhao, Poupart, and Gordon had a new way to learn Sum-Product Networks.
  3. Ho, Littman, MacGlashan, Cushman, and Austerweil had a paper on how “Showing” is different from “Doing”.
  4. Toulis and Parkes had a paper on estimation of long term causal effects.
  5. Rae, Hunt, Danihelka, Harley, Senior, Wayne, Graves, and Lillicrap had a paper on large memories with neural networks.
  6. Hardt, Price, and Srebro had a paper on Equal Opportunity in ML.

Format-wise, I thought two poster sessions were better than one, but I really would have preferred more. The recorded spotlights are also pretty cool.

The NIPS workshops were great, although I was somewhat reminded of kindergarten soccer in terms of lopsided attendance. This may be inevitable given how hot the field is, but I think it’s important for individual researchers to remember that:

  1. There are many important directions of research.
  2. You personally have a much higher chance of doing something interesting if everyone else is not doing it also.

During the workshops, I learned about Adam (a momentum form of Adagrad), testing ML systems, and that even TensorFlow is finally looking into synchronous updates for parallel learning (allreduce is the way).
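
For readers who haven’t seen it, here is a minimal sketch of the Adam update illustrating the point above: a momentum (first-moment) estimate of the gradient combined with an Adagrad/RMSProp-style per-coordinate scaling. Variable names and default hyperparameters follow the original Adam paper; this is an illustrative NumPy sketch, not any particular library’s implementation.

    import numpy as np

    def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        # First moment: exponential moving average of the gradient (the momentum part).
        m = beta1 * m + (1 - beta1) * grad
        # Second moment: exponential moving average of the squared gradient
        # (the Adagrad/RMSProp-style per-coordinate scale).
        v = beta2 * v + (1 - beta2) * grad ** 2
        # Bias correction for the first few steps (t starts at 1).
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # Per-coordinate scaled gradient step.
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
        return w, m, v

Calling adam_step in a loop with t = 1, 2, … and m, v initialized to zeros recovers the standard optimizer; m carries the momentum, and the sqrt(v_hat) denominator supplies the adaptive per-coordinate step size.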

(edit: added one)