Cool and interesting things seen at NIPS

I learned a number of things at NIPS.

  1. The financial people were there in greater force than previously. Two Sigma sponsored NIPS while DRW Trading had a booth.
  2. The adversarial machine learning workshop had a number of talks about interesting applications where an adversary really is out to mess up your learning algorithm. This is very different from the situation we often think of, where the world is oblivious to our learning. This may present new and convincing applications for the learning-against-an-adversary work common at COLT.
  3. There were several interesting papers.
    1. Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni had a paper on General Agnostic Active Learning. The basic idea is that active learning can be done via reduction to supervised learning (a sketch of the idea appears after this list). This is great, because we have many supervised learning algorithms from which the benefits of active learning may be derived.
    2. Joseph Bradley and Robert Schapire had a paper on FilterBoost. FilterBoost is an online boosting algorithm which I think of as the boost-by-filtration approach from the first boosting paper, updated with an AdaBoost-like structure (sketched below). These kinds of approaches are doubtless helpful for large-scale learning problems, which are becoming more common.
    3. Peter Bartlett, Elad Hazan, and Sasha Rakhlin had a paper on Adaptive Online Learning. This paper refines earlier results for online learning against an adversary via gradient descent (see the sketch after this list), which is plausibly of great use in practice.
  4. MLOSS was giving out free T-shirts, which were cool. I missed the workshop starting this effort at last year’s NIPS due to workshop overload, but open source machine learning is definitely of great interest to the community.
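
To make the reduction idea in the Dasgupta-Hsu-Monteleoni paper concrete, here is a minimal sketch: decide whether to query a point by solving two supervised problems, one per forced label. Everything here (the logistic regression base learner, the fixed slack threshold, the helper names) is an illustrative assumption, not the paper’s actual construction.

```python
# A hedged sketch of agnostic active learning via reduction to supervised
# learning. Assumption: the labeled set already contains both classes, and
# logistic regression stands in for an arbitrary supervised learner.
import numpy as np
from sklearn.linear_model import LogisticRegression

def training_error(X, y):
    """The reduction step: run a supervised learner, report empirical error."""
    clf = LogisticRegression().fit(X, y)
    return np.mean(clf.predict(X) != y)

def should_query(X_labeled, y_labeled, x_new, slack=0.05):
    """Try both labels for x_new; query only if neither is clearly better.
    The fixed slack is a stand-in for the paper's carefully derived threshold."""
    X_aug = np.vstack([X_labeled, x_new])
    err_pos = training_error(X_aug, np.append(y_labeled, 1))
    err_neg = training_error(X_aug, np.append(y_labeled, 0))
    if abs(err_pos - err_neg) > slack:
        return False, int(err_pos < err_neg)   # label inferred for free
    return True, None                          # genuinely uncertain: ask
```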
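
For FilterBoost, here is a minimal sketch of the boost-by-filtration idea: rather than reweighting a fixed training set, each round filters a stream of examples, accepting each with the current ensemble’s logistic weight on it. The round count, batch size, and weak-learner interface are made-up stand-ins, not the paper’s actual parameters.

```python
# A hedged sketch of boosting by filtering, assuming an example stream
# (an iterator of (x, y) pairs with y in {-1, +1}) and a weak_learn routine
# mapping a batch to a hypothesis h with h(x) in {-1, +1}.
import math
import random

def filter_batch(stream, ensemble, batch_size):
    """Rejection sampling: keep an example with probability given by the
    ensemble's logistic weight, concentrating on examples it gets wrong."""
    batch = []
    while len(batch) < batch_size:
        x, y = next(stream)
        margin = y * sum(alpha * h(x) for alpha, h in ensemble)
        if random.random() < 1.0 / (1.0 + math.exp(margin)):
            batch.append((x, y))
    return batch

def filterboost(stream, weak_learn, rounds=10, batch_size=200):
    ensemble = []  # list of (vote weight, weak hypothesis) pairs
    for _ in range(rounds):
        batch = filter_batch(stream, ensemble, batch_size)
        h = weak_learn(batch)
        # Edge over random guessing on the filtered batch; assumes 0 < gamma < 1/2.
        gamma = sum(y * h(x) for x, y in batch) / (2.0 * len(batch))
        alpha = 0.5 * math.log((0.5 + gamma) / (0.5 - gamma))  # AdaBoost-style weight
        ensemble.append((alpha, h))
    return ensemble
```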
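
And for the Bartlett-Hazan-Rakhlin paper, here is the base setting it refines: online gradient descent against an adversary. This sketch uses the classic 1/sqrt(t) step size, which gives O(sqrt(T)) regret for convex losses; the adaptive step-size and regularization choices that are the paper’s contribution are deliberately omitted.

```python
# A hedged sketch of online gradient descent in the adversarial setting.
import numpy as np

def online_gradient_descent(gradients, dim, radius=1.0):
    """gradients: one callable per round, returning the adversary's loss gradient at w."""
    w = np.zeros(dim)
    for t, grad in enumerate(gradients, start=1):
        w = w - grad(w) / np.sqrt(t)   # descend on the loss just revealed
        norm = np.linalg.norm(w)
        if norm > radius:              # project back onto the feasible ball
            w *= radius / norm
        yield w                        # the prediction played next round
```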

3 Replies to “Cool and interesting things seen at NIPS”

  1. The idea of reducing active learning to supervised learning reminded me of a long-standing question: how practical is active learning currently? I’m asking as a computer vision person. Active learning sounds potentially tremendously interesting, but when reading (a select few of) the papers you’ve mentioned over time, there is not much application going on, and my grasp of the theory is too weak to judge whether this is just writing for a different target community or a reflection of the early stage this approach is in.

  2. I suspect active learning happens in practice all the time using heuristic algorithms. It’s very natural to first label some data, see what the learning algorithm does, and then label some more data to improve its performance (one common version of this loop is sketched below).

    What’s missing from some of the heuristic methods is safety: you might end up focusing on tuning the learned predictor to do well on irrelevant examples. The safer theoretical algorithms are making substantial progress, so hopefully sometime soon we’ll see real experiments with them.
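
    For concreteness, here is a minimal sketch of that heuristic loop using uncertainty sampling with a logistic regression model; the pool, oracle, and batch sizes are all made-up stand-ins. The safety problem is visible in it: nothing prevents the loop from spending its label budget on noisy or irrelevant regions.

    ```python
    # A hedged sketch of heuristic active learning via uncertainty sampling.
    # Assumption: the initial random seed draw hits both classes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def uncertainty_sampling_loop(X_pool, oracle, n_seed=20, n_rounds=5, batch=10):
        """oracle(i) returns the true label of pool point i (a human in practice)."""
        rng = np.random.default_rng(0)
        labeled = list(rng.choice(len(X_pool), size=n_seed, replace=False))
        y = {i: oracle(i) for i in labeled}
        for _ in range(n_rounds):
            clf = LogisticRegression()
            clf.fit(X_pool[labeled], [y[i] for i in labeled])
            probs = clf.predict_proba(X_pool)[:, 1]
            # Query the unlabeled points closest to the decision boundary.
            candidates = [i for i in np.argsort(np.abs(probs - 0.5)) if i not in y]
            for i in candidates[:batch]:
                y[i] = oracle(i)
                labeled.append(i)
        return clf, labeled
    ```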
