2011 ML symposium and the bears

The New York ML symposium was last Friday. Attendance was 268, significantly higher than last year's. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up.

The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones.

By coincidence, all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single-pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav's case, the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential improvement in computational complexity via oraclization of experts algorithms, though some serious thought still needs to go in that direction.
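For readers unfamiliar with averaged SGD: the idea is to run ordinary stochastic gradient descent and additionally keep a running average of the iterates (Polyak-Ruppert averaging), which is what makes a single pass over the data competitive. Below is a minimal sketch of that idea on a toy least-squares problem; it is my own illustration rather than code from the talk, and the squared loss, constant learning rate, and synthetic data are arbitrary choices for the example.

```python
import numpy as np

def averaged_sgd(X, y, lr=0.01):
    """Single pass of SGD on squared loss, returning the Polyak-Ruppert
    averaged iterate alongside the final iterate (toy illustration)."""
    n, d = X.shape
    w = np.zeros(d)       # current SGD iterate
    w_bar = np.zeros(d)   # running average of iterates
    for t in range(n):    # one pass over the data
        x_t, y_t = X[t], y[t]
        grad = (x_t @ w - y_t) * x_t            # gradient of 0.5*(x.w - y)^2
        w = w - lr * grad                       # plain SGD step
        w_bar = w_bar + (w - w_bar) / (t + 1)   # incremental average of iterates
    return w_bar, w

# Toy usage: the averaged iterate is typically a steadier estimate than the last one.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.arange(1.0, 6.0)
y = X @ true_w + 0.1 * rng.normal(size=1000)
w_avg, w_last = averaged_sgd(X, y)
```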

Unrelated, I found quite a bit of truth in Paul's talking bears, and Xtranormal always adds a dash of funny. My impression is that the ML job market has only become hotter than it was 4 years ago. Anyone who is well trained can find work, with the key limiting factor being "well trained". In this environment, efforts to make ML more automatic and more easily applied are greatly appreciated. And yes, Yahoo! is still hiring too 🙂

2 Replies to “2011 ML symposium and the bears”

  1. Sorry, just being curious. You speak of "well trained" individuals; could you kindly answer: what would you personally expect from a well trained person? I mean, something like minimal requirements: a recent graduate degree in some field of AI from MIT/Stanford/.., or, if not, what?

    1. In general, it depends greatly.

      1) A single good undergraduate class allows you to potentially do productive things.

      2) A master’s-worth of classes gives you the tools you need to tackle broadly new problems.

      3) A PhD puts you in a position to create new algorithms.

      I’d probably put “well trained” at (2). There is no inherent reason why this can’t be pushed down into an undergraduate degree—that just hasn’t happened yet as far as I know.
