{"id":3001617,"date":"2015-12-14T13:43:36","date_gmt":"2015-12-14T19:43:36","guid":{"rendered":"http:\/\/hunch.net\/?p=3001617"},"modified":"2015-12-14T13:43:36","modified_gmt":"2015-12-14T19:43:36","slug":"interesting-things-at-nips-2015","status":"publish","type":"post","link":"https:\/\/hunch.net\/?p=3001617","title":{"rendered":"Interesting things at NIPS 2015"},"content":{"rendered":"<p>NIPS is getting big.  If you think of each day as a conference crammed into a day, you get a good flavor of things.  Here are some of the interesting things I saw.<\/p>\n<ul>\n<li><a href=\"http:\/\/arxiv.org\/pdf\/1412.7449v3.pdf\">Grammar as a foreign language<\/a>.  Essentially, <a href=\"http:\/\/arxiv.org\/abs\/1409.0473\">attention model<\/a> + <a href=\"https:\/\/en.wikipedia.org\/wiki\/Long_short-term_memory\">LSTM<\/a> + a standard dataset = good parser.  <\/li>\n<li><a href=\"http:\/\/arxiv.org\/pdf\/1505.05310v1.pdf\">A New View of Predictive State Methods for Dynamical System Learning<\/a>.  Predicting future from past and future+1 from past allows you to form an estimate of system dynamics.  <\/li>\n<li><a href=\"http:\/\/arxiv.org\/abs\/1408.1387\">Double or Nothing: Multiplicative Incentive Mechanisms for Crowdsourcing<\/a> (Much) better labeling by better incentives.<\/li>\n<li><a href=\"http:\/\/papers.nips.cc\/paper\/5656-hidden-technical-debt-in-machine-learning-systems.pdf\">Hidden Technical Debt in Machine Learning Systems<\/a>.  A somewhat less vivid title than the <a href=\"http:\/\/static.googleusercontent.com\/media\/research.google.com\/en\/\/pubs\/archive\/43146.pdf\">earlier one<\/a> which is entirely worth reading if you worry about ML systems.<\/li>\n<li><a href=\"https:\/\/papers.nips.cc\/paper\/5692-bandits-with-unobserved-confounders-a-causal-approach\">Bandits with Unobserved Confounders: A Causal Approach<\/a>.  
In systems where a &#8216;default action&#8217; exists, the act of intervening is not so simple.<\/li>\n<li><a href=\"http:\/\/papers.nips.cc\/paper\/5748-the-self-normalized-estimator-for-counterfactual-learning.pdf\">The Self-normalized Estimator for Counterfactual Learning<\/a>.  A good idea for reducing variance in contextual bandit situations.<\/li>\n<li><a href=\"http:\/\/papers.nips.cc\/paper\/5782-character-level-convolutional-networks-for-text-classification.pdf\">Character-level Convolutional Networks for Text Classification<\/a>.  Extensive experiments showing that character alphabets can be effective for NLP tasks.<\/li>\n<li><a href=\"http:\/\/papers.nips.cc\/paper\/5827-sample-complexity-of-episodic-fixed-horizon-reinforcement-learning.pdf\">Sample Complexity of Episodic Fixed-Horizon Reinforcement Learning<\/a>.  Yet-tighter sample complexity bounds for learning in episodic MDPs.<\/li>\n<li><a href=\"https:\/\/papers.nips.cc\/paper\/5832-on-elicitation-complexity\">On Elicitation Complexity<\/a>. How many questions do you need to ask to get answers to questions about distributions?  This has strong implications for learning algorithm design.<\/li>\n<li><a href=\"http:\/\/arxiv.org\/abs\/1503.08895\">End-to-End Memory Networks<\/a>. There are not many algorithms for coherently forming pools of memory and using them to answer questions.<\/li>\n<li><a href=\"http:\/\/arxiv.org\/pdf\/1503.01007.pdf\">Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets<\/a>.  Another way to add memory to a learning system.<\/li>\n<li><a href=\"http:\/\/papers.nips.cc\/paper\/5942-scalable-semi-supervised-aggregation-of-classifiers.pdf\">Scalable Semi-Supervised Aggregation of Classifiers<\/a>.  Better results for classifier aggregation in transductive settings.<\/li>\n<\/ul>\n<p>Two other notable events happened during NIPS.  
<\/p>\n<ol>\n<li>The <a href=\"http:\/\/image-net.org\/challenges\/LSVRC\/2015\/results\">Imagenet challenge<\/a> and <a href=\"http:\/\/mscoco.org\/dataset\/#detections-challenge2015\">MS COCO<\/a> results came out.  The first represents a significant improvement over previous years (details <a href=\"http:\/\/arxiv.org\/abs\/1512.03385\">here<\/a>).<\/li>\n<li>The <a href=\"https:\/\/openai.com\/blog\/introducing-openai\/\">OpenAI<\/a> initiative started.  Concerned billionaires created a billion-dollar endowment to advance AI in a public (not private) way.  What will be done better than <a href=\"http:\/\/www.nsf.gov\/\">NSF<\/a> (which has a similar-ish goal)?  I can think of many possibilities.<\/li>\n<\/ol>\n<p>See also <a href=\"https:\/\/blogs.princeton.edu\/imabandit\/2015\/12\/13\/on-the-spirit-of-nips-2015-and-openai\/\">Seb&#8217;s post<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>NIPS is getting big. If you think of each day as an entire conference crammed into a single day, you get a good flavor of things. Here are some of the interesting things I saw. Grammar as a foreign language. Essentially, attention model + LSTM + a standard dataset = good parser. 
A New View of Predictive &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/hunch.net\/?p=3001617\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Interesting things at NIPS 2015&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[33,29],"tags":[],"class_list":["post-3001617","post","type-post","status-publish","format-standard","hentry","category-conferences","category-machine-learning"],"_links":{"self":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/3001617","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3001617"}],"version-history":[{"count":0,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/3001617\/revisions"}],"wp:attachment":[{"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3001617"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3001617"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3001617"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}