{"id":272,"date":"2007-06-14T19:13:52","date_gmt":"2007-06-15T01:13:52","guid":{"rendered":"http:\/\/hunch.net\/?p=272"},"modified":"2007-06-14T19:13:52","modified_gmt":"2007-06-15T01:13:52","slug":"interesting-papers-at-colt-2007","status":"publish","type":"post","link":"https:\/\/hunch.net\/?p=272","title":{"rendered":"Interesting Papers at COLT 2007"},"content":{"rendered":"<p>Here are two papers that seem particularly interesting at this year&#8217;s COLT.<\/p>\n<ol>\n<li><a href=\"http:\/\/ida.first.fraunhofer.de\/~blanchard\/\">Gilles Blanchard<\/a> and <a href=\"http:\/\/cvlab.epfl.ch\/~fleuret\/\">Fran\u00e7ois Fleuret<\/a>, <a href=\"http:\/\/cvlab.epfl.ch\/~fleuret\/papers\/blanchard-fleuret-colt2007.pdf\">Occam&#8217;s Hammer<\/a>.  When we are interested in very tight bounds on the true error rate of a classifier, it is tempting to use a PAC-Bayes bound, which can (empirically) be <a href=\"https:\/\/hunch.net\/~jl\/projects\/prediction_bounds\/nn_bound\/not_bound_final.ps\">quite tight<\/a>.  A disadvantage of the PAC-Bayes bound is that it applies to a classifier which is randomized over a set of base classifiers rather than to a single classifier.  This paper shows that a similar bound can be proved which holds for a single classifier drawn from the set.  The ability to safely use a single classifier is very nice.  This technique applies generically to any base bound, so it has other applications covered in the paper.<\/li>\n<li><a href=\"http:\/\/www-static.cc.gatech.edu\/~atk\/\">Adam Tauman Kalai<\/a>. <a href=\"http:\/\/www-static.cc.gatech.edu\/~atk\/papers\/genhalf\/nested-long-3-22-07.pdf\">Learning Nested Halfspaces and Uphill Decision Trees<\/a>.  Classification PAC-learning, where you prove that any problem amongst some set is polytime learnable with respect to any distribution over the input <em>X<\/em>, is extraordinarily challenging, as judged by the lack of progress over a long period of time.  
This paper is about regression PAC-learning, and the results appear much more encouraging than those in classification PAC-learning.  Under the assumptions that:\n<ol>\n<li>The level sets of the correct regressed value are halfspaces.<\/li>\n<li>The level sets obey a <a href=\"http:\/\/en.wikipedia.org\/wiki\/Lipschitz_continuity\">Lipschitz condition<\/a>.<\/li>\n<\/ol>\n<p>this paper proves that a good regressor can be PAC-learned using a boosting algorithm.  (The &#8220;uphill decision trees&#8221; part of the paper is about one special case where you don&#8217;t need the Lipschitz condition.)\n<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Here are two papers that seem particularly interesting at this year&#8217;s COLT. Gilles Blanchard and Fran\u00e7ois Fleuret, Occam&#8217;s Hammer. When we are interested in very tight bounds on the true error rate of a classifier, it is tempting to use a PAC-Bayes bound which can (empirically) be quite tight. A disadvantage of the PAC-Bayes bound &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/hunch.net\/?p=272\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Interesting Papers at COLT 
2007&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[33,29,18,8],"tags":[],"class_list":["post-272","post","type-post","status-publish","format-standard","hentry","category-conferences","category-machine-learning","category-papers","category-prediction-theory"],"_links":{"self":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/272","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=272"}],"version-history":[{"count":0,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/272\/revisions"}],"wp:attachment":[{"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=272"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=272"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=272"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}