{"id":341,"date":"2008-07-15T04:22:22","date_gmt":"2008-07-15T10:22:22","guid":{"rendered":"http:\/\/hunch.net\/?p=341"},"modified":"2008-07-20T11:52:47","modified_gmt":"2008-07-20T17:52:47","slug":"interesting-papers-at-colt-and-a-bit-of-uai-workshops","status":"publish","type":"post","link":"https:\/\/hunch.net\/?p=341","title":{"rendered":"Interesting papers at COLT (and a bit of UAI &#038; workshops)"},"content":{"rendered":"<p>Here are a few papers from <a href=\"http:\/\/colt2008.cs.helsinki.fi\/\">COLT 2008<\/a> that I found interesting.<\/p>\n<ol>\n<li><a href=\"http:\/\/www.cs.cmu.edu\/~ninamf\">Maria-Florina Balcan<\/a>, <a href=\"http:\/\/www.cs.cmu.edu\/~shanneke\/\">Steve Hanneke<\/a>, and <a href=\"http:\/\/www.seas.upenn.edu\/~wortmanj\/\">Jenn Wortman<\/a>, <a href=\"http:\/\/www.cs.cmu.edu\/~ninamf\/papers\/true-active.pdf\">The True Sample Complexity of Active Learning<\/a>.  This paper shows that in an asymptotic setting, active learning is <em>always<\/em> better than supervised learning (although the gap may be small).  This is evidence that the only thing in the way of universal active learning is our knowing how to do it properly.<\/li>\n<li><a href=\"http:\/\/nailon.googlepages.com\/\">Nir Ailon<\/a> and <a href=\"http:\/\/www.cs.nyu.edu\/~mohri\/\">Mehryar Mohri<\/a>, <a href=\"http:\/\/www.cs.nyu.edu\/web\/Research\/TechReports\/TR2007-903\/TR2007-903.pdf\">An Efficient Reduction of Ranking to Classification<\/a>.  This paper shows how to robustly rank <em>n<\/em> objects with <em>n log(n)<\/em> classifications using a quicksort-based algorithm.  The result is applicable to many ranking loss functions and has implications for others.<\/li>\n<li><a href=\"http:\/\/www.cis.upenn.edu\/~mkearns\/\">Michael Kearns<\/a> and <a href=\"http:\/\/www.seas.upenn.edu\/~wortmanj\/\">Jennifer Wortman<\/a>, <a href=\"http:\/\/www.seas.upenn.edu\/~wortmanj\/papers\/collective.pdf\">Learning from Collective Behavior<\/a>.  
This is about learning in a new model, where the goal is to predict how a collection of interacting agents behaves.  One claim is that learning in this setting can be reduced to IID learning.<\/li>\n<\/ol>\n<p>Due to the relation with <a href=\"https:\/\/hunch.net\/~jl\/projects\/RL\/metric_e3\/icml_final.ps\">Metric-E<sup>3<\/sup><\/a>, I was particularly interested in a <a href=\"http:\/\/colt2008.cs.helsinki.fi\/papers\/11-Bernstein.pdf\">couple<\/a> of <a href=\"http:\/\/uai2008.cs.helsinki.fi\/UAI_camera_ready\/brunskill.pdf\">other<\/a> papers on reinforcement learning in navigation-like spaces.<br \/>\nI also particularly enjoyed <a href=\"http:\/\/colt2008.cs.helsinki.fi\/papers\/klein.pdf\">Dan Klein<\/a>&#8217;s talk, which was the most impressive application of <a href=\"http:\/\/en.wikipedia.org\/wiki\/Graphical_model\">graphical model<\/a> technology I&#8217;ve seen.<\/p>\n<p>I also attended the <a href=\"http:\/\/largescale.first.fraunhofer.de\/workshop\/\">large scale learning challenge workshop<\/a> and enjoyed Antoine Bordes&#8217; talk about a fast primal space algorithm that won by a hair over other methods in the wild track.  <a href=\"http:\/\/ronan.collobert.com\/\">Ronan Collobert<\/a>&#8217;s talk was also notable in that they are doing relatively featuritis-free NLP.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Here are a few papers from COLT 2008 that I found interesting. Maria-Florina Balcan, Steve Hanneke, and Jenn Wortman, The True Sample Complexity of Active Learning. This paper shows that in an asymptotic setting, active learning is always better than supervised learning (although the gap may be small). 
This is evidence that the only thing &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/hunch.net\/?p=341\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Interesting papers at COLT (and a bit of UAI &#038; workshops)&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[33,29,18],"tags":[],"class_list":["post-341","post","type-post","status-publish","format-standard","hentry","category-conferences","category-machine-learning","category-papers"],"_links":{"self":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/341","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=341"}],"version-history":[{"count":0,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/341\/revisions"}],"wp:attachment":[{"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=341"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=341"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=341"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}