{"id":813,"date":"2009-06-24T02:36:42","date_gmt":"2009-06-24T08:36:42","guid":{"rendered":"http:\/\/hunch.net\/?p=813"},"modified":"2009-06-24T02:36:42","modified_gmt":"2009-06-24T08:36:42","slug":"interesting-papers-at-uaicmolt-2009","status":"publish","type":"post","link":"https:\/\/hunch.net\/?p=813","title":{"rendered":"Interesting papers at UAICMOLT 2009"},"content":{"rendered":"<p>Here&#8217;s a list of papers that I found interesting at <a href=\"http:\/\/www.cs.mcgill.ca\/~icml2009\/\">ICML<\/a>\/<a href=\"http:\/\/www.cs.mcgill.ca\/~colt2009\">COLT<\/a>\/<a href=\"http:\/\/www.cs.mcgill.ca\/~uai2009\">UAI<\/a> in 2009.<\/p>\n<ol>\n<li><a href=\"http:\/\/www.cs.princeton.edu\/~ehazan\/\">Elad Hazan<\/a> and <a href=\"http:\/\/www.cs.princeton.edu\/~csesha\/\">Comandur Seshadhri<\/a> <a href=\"http:\/\/www.conflate.net\/icml\/paper\/2009\/75\">Efficient learning algorithms for changing environments<\/a> at ICML.  This paper shows how to adapt learning algorithms that compete with fixed predictors to compete with changing policies.  The definition of regret they deal with seems particularly useful in many situations.<\/li>\n<li><a href=\"http:\/\/www.cs.utah.edu\/~hal\/\">Hal Daume<\/a>, <a href=\"http:\/\/conflate.net\/icml\/paper\/2009\/297\">Unsupervised Search-based Structured Prediction<\/a> at ICML.  This paper shows a technique for reducing unsupervised learning to supervised learning which (a) yields a fast unsupervised learning algorithm and (b) makes semisupervised learning both easy and highly effective. <\/li>\n<li>There were two papers with similar results on active learning in the KWIK framework for linear regression, both reducing the sample complexity to .  
One was <a href=\"http:\/\/homes.dsi.unimi.it\/~cesabian\/\">Nicolo Cesa-Bianchi<\/a>, <a href=\"http:\/\/www.dicom.uninsubria.it\/~cgentile\/\">Claudio Gentile<\/a>, and <a href=\"http:\/\/www.idiap.ch\/~forabona\/\">Francesco Orabona<\/a> <a href=\"http:\/\/conflate.net\/icml\/paper\/2009\/289\">Robust Bounds for Classification via Selective Sampling<\/a> at ICML and the other was <a href=\"http:\/\/paul.rutgers.edu\/~thomaswa\/\">Thomas Walsh<\/a>, <a href=\"http:\/\/szityu.web.eotvos.elte.hu\/\">Istvan Szita<\/a>, <a href=\"http:\/\/paul.rutgers.edu\/~cdiuk\/\">Carlos Diuk<\/a>, <a href=\"http:\/\/www.cs.duke.edu\/~mlittman\/\">Michael Littman<\/a> <a href=\"http:\/\/www.cs.mcgill.ca\/~uai2009\/papers\/UAI2009_0137_63d573cf0e7d454d9e3e3eb713845209.pdf\">Exploring compact reinforcement-learning representations with linear regression<\/a> at UAI.  The UAI paper covers application to RL as well.<\/li>\n<li><a href=\"http:\/\/www.stat.cornell.edu\/~li\/\">Ping Li<\/a>, <a href=\"http:\/\/www.cs.mcgill.ca\/~uai2009\/papers\/UAI2009_0130_3c7f00baafc699511c26697b0e87ecff.pdf\">Improving Compressed Counting<\/a> at UAI.  This paper talks about how to keep track of the moments of a data stream with very little space and computation.  I&#8217;m not sure I have a use for it yet, but it seems like a cool piece of basic technology.<\/li>\n<li><a href=\"http:\/\/mark.reid.name\/\">Mark Reid<\/a> and <a href=\"http:\/\/axiom.anu.edu.au\/~williams\/\">Bob Williamson<\/a> <a href=\"http:\/\/conflate.net\/icml\/paper\/2009\/400\">Surrogate Regret Bounds for Proper Losses<\/a> at ICML.  This paper points out that via the integral characterization of proper losses, proper scoring rules can be reduced to binary classification.  
The results unify and generalize the <a href=\"https:\/\/hunch.net\/~jl\/projects\/reductions\/probing\/aistat_real_final\/aistat_final.ps\">Probing<\/a> and <a href=\"https:\/\/hunch.net\/~jl\/projects\/reductions\/median\/median_uai.ps\">Quanting<\/a> reductions we worked on previously.  This paper is also related to <a href=\"http:\/\/ai.stanford.edu\/~nlambert\/\">Nicolas Lambert<\/a>&#8217;s <a href=\"http:\/\/ai.stanford.edu\/~nlambert\/papers\/EC2008-elicitability.pdf\">work<\/a>, which is quite thought-provoking in terms of specifying what is learnable.<\/li>\n<li><a href=\"http:\/\/cseweb.ucsd.edu\/~djhsu\/\">Daniel Hsu<\/a>, <a href=\"http:\/\/ttic.uchicago.edu\/~sham\/\">Sham M. Kakade<\/a> and <a href=\"http:\/\/stat.rutgers.edu\/~tzhang\/\">Tong Zhang<\/a>.  <a href=\"http:\/\/www.cs.mcgill.ca\/~colt2009\/papers\/011.pdf#page=1\">A Spectral Algorithm for Learning Hidden Markov Models<\/a> at COLT.  This paper shows that a subset of HMMs can be learned using an SVD-based algorithm.<\/li>\n<li><a href=\"http:\/\/cseweb.ucsd.edu\/~skpotufe\/\">Samory Kpotufe<\/a>, <a href=\"http:\/\/www.cs.mcgill.ca\/~colt2009\/papers\/019.pdf#page=1\">Escaping the curse of dimensionality with a tree-based regressor<\/a> at COLT.  This paper shows how to directly apply regression in high-dimensional vector spaces and have it succeed anyway because the data is naturally low-dimensional.<\/li>\n<li><a href=\"http:\/\/www.cs.uwaterloo.ca\/~shai\/\">Shai Ben-David<\/a>, <a href=\"http:\/\/david.palenica.com\/\">David Pal<\/a> and <a href=\"http:\/\/ttic.uchicago.edu\/~shai\/\">Shai Shalev-Shwartz<\/a>. <a href=\"http:\/\/www.cs.mcgill.ca\/~colt2009\/papers\/032.pdf#page=1\">Agnostic Online Learning<\/a> at COLT.  
This paper characterizes the ability to learn when an adversary is choosing features in the online setting as the &#8220;Littlestone dimension&#8221;.<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Here&#8217;s a list of papers that I found interesting at ICML\/COLT\/UAI in 2009. Elad Hazan and Comandur Seshadhri Efficient learning algorithms for changing environments at ICML. This paper shows how to adapt learning algorithms that compete with fixed predictors to compete with changing policies. The definition of regret they deal with seems particularly useful in &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/hunch.net\/?p=813\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Interesting papers at UAICMOLT 2009&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[33,29,18],"tags":[],"class_list":["post-813","post","type-post","status-publish","format-standard","hentry","category-conferences","category-machine-learning","category-papers"],"_links":{"self":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/813","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=813"}],"version-history":[{"count":0,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/813\/revisions"}],"wp:attachment":[{"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=813"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2F
v2%2Fcategories&post=813"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=813"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}