{"id":421,"date":"2008-09-12T09:53:34","date_gmt":"2008-09-12T15:53:34","guid":{"rendered":"http:\/\/hunch.net\/?p=421"},"modified":"2008-09-12T09:53:34","modified_gmt":"2008-09-12T15:53:34","slug":"how-do-we-get-weak-action-dependence-for-learning-with-partial-observations","status":"publish","type":"post","link":"https:\/\/hunch.net\/?p=421","title":{"rendered":"How do we get weak action dependence for learning with partial observations?"},"content":{"rendered":"<p>This post is about contextual bandit problems where, repeatedly:<\/p>\n<ol>\n<li>The world chooses features <em>x<\/em> and rewards for each action <em>r<sub>1<\/sub>,&#8230;,r<sub>k<\/sub><\/em>, then announces the features <em>x<\/em> (but not the rewards).<\/li>\n<li>A policy chooses an action <em>a<\/em>.<\/li>\n<li>The world announces the reward <em>r<sub>a<\/sub><\/em>.<\/li>\n<\/ol>\n<p>The goal in these situations is to efficiently learn a policy which maximizes <em>r<sub>a<\/sub><\/em> in expectation.  I&#8217;m thinking about all situations which fit the above setting, whether the examples are drawn IID or adversarially from round to round, and whether they involve past logged data or rapid learning via interaction.<\/p>\n<p>One common drawback of all algorithms for solving this setting is that they have a poor dependence on the number of actions.  For example, if <em>k<\/em> is the number of actions, <a href=\"http:\/\/www.cs.princeton.edu\/~schapire\/uncompress-papers.cgi\/AuerCeFrSc01.ps\">EXP4 (page 66)<\/a> has a dependence on <em>k<sup>0.5<\/sup><\/em>, <a href=\"https:\/\/hunch.net\/~jl\/projects\/interactive\/sidebandits\/bandit.pdf\">epoch-greedy<\/a> (and the simpler epsilon-greedy) have a dependence on <em>k<sup>1\/3<\/sup><\/em>, and the <a href=\"https:\/\/hunch.net\/~jl\/projects\/interactive\/offset_tree\/exploration.pdf\">offset tree<\/a> has a dependence on <em>k-1<\/em>.  These results aren&#8217;t directly comparable because different things are being analyzed.  
The fact that <em>all<\/em> analyses have poor dependence on <em>k<\/em> is troublesome.  The lower bounds in the EXP4 paper and the Offset Tree paper demonstrate that this isn&#8217;t a matter of lazy proof writing or a poor choice of algorithms: it&#8217;s essential to the nature of the problem.<\/p>\n<p>In supervised learning, it&#8217;s typical to get no dependence or very weak dependence on the number of actions\/choices\/labels.  For example, if we do empirical risk minimization over a finite hypothesis space <em>H<\/em>, the dependence is at most <em>ln |H|<\/em> using an <a href=\"https:\/\/hunch.net\/~jl\/projects\/prediction_bounds\/tutorial\/langford05a.ps\">Occam&#8217;s Razor<\/a> bound.  Similarly, the <a href=\"https:\/\/hunch.net\/~jl\/projects\/reductions\/tutorial\/paper\/chapter.pdf\">PECOC algorithm (page 12)<\/a> has dependence bounded by a constant.  This kind of dependence is great for the feasibility of machine learning: it means that we can hope to tackle seemingly difficult problems.<\/p>\n<p>Why is there such a large contrast between these settings?  At the level of this discussion, they differ only in step 3, where for supervised learning, all of the rewards are revealed instead of just one.<\/p>\n<p>One of the intuitions you develop after working with supervised learning is that holistic information is often better.  As an example, given a choice between labeling the same point multiple times (perhaps revealing and correcting noise) or labeling other points once, an algorithm which labels other points typically exists and typically yields as good or better performance in theory and in practice.  
This appears untrue when we have only partial observations.<\/p>\n<p>For example, consider the following problem(*): &#8220;Find an action with average reward greater than 0.5 with probability at least 0.99&#8221; and consider two algorithms:<\/p>\n<ol>\n<li>Sample actions at random until we can prove (via Hoeffding bounds) that one of them has large average reward.<\/li>\n<li>Pick an action at random, sample it 100 times, and if we can prove (via a Hoeffding bound) that it has large average reward, return it; otherwise, pick another action randomly and repeat.<\/li>\n<\/ol>\n<p>When there are <em>10<sup>10<\/sup><\/em> actions and <em>10<sup>9<\/sup><\/em> of them have average reward 0.6, it&#8217;s easy to prove that algorithm 2 is much better than algorithm 1: algorithm 2 tests one action at a time and finds a good one after about 10 random picks in expectation, while algorithm 1 must spread its samples over all <em>10<sup>10<\/sup><\/em> actions.<\/p>\n<p>Lower bounds for the partial observation settings imply that more tractable algorithms only exist under additional assumptions.  Two papers which do this without context features are:<\/p>\n<ol>\n<li><a href=\"http:\/\/www.cs.cornell.edu\/~rdk\/\">Robert Kleinberg<\/a>, <a href=\"http:\/\/research.microsoft.com\/users\/slivkins\/\">Aleksandrs Slivkins<\/a>, and <a href=\"http:\/\/www.cs.brown.edu\/~eli\/\">Eli Upfal<\/a>. <a href=\"http:\/\/www.cs.cornell.edu\/~rdk\/papers\/bandits-lip.pdf\">Multi-armed bandit problems in metric spaces<\/a>, <a href=\"http:\/\/webhome.csc.uvic.ca\/~stoc2008\/\">STOC 2008<\/a>.  Here the idea is that you have access to a covering oracle on the actions where actions with similar average rewards cover each other.<\/li>\n<li><a href=\"http:\/\/research.yahoo.com\/bouncer_user\/25\">Deepak Agarwal<\/a>, <a href=\"http:\/\/www.cs.cmu.edu\/~spandey\/\">Sandeep Pandey<\/a>, and <a href=\"http:\/\/www.cs.cmu.edu\/~deepay\/\">Deepayan Chakrabarti<\/a>, <a href=\"http:\/\/www.cs.cmu.edu\/~deepay\/mywww\/papers\/icml07-multiarmed.pdf\">Multi-armed Bandit Problems with Dependent Arms<\/a>, <a href=\"http:\/\/oregonstate.edu\/conferences\/icml2007\/\">ICML 2007<\/a>.  
Here the idea is that the values of actions are generated recursively, preserving structure through the recursion.<\/li>\n<\/ol>\n<p><strong>Basic questions<\/strong>: Are there other kinds of natural structure which allow a good dependence on the total number of actions?  Can these kinds of structures be extended to the setting with features? (This seems essential for real applications.)<\/p>\n<p>(*) Developed in discussion with <a href=\"http:\/\/www.yisongyue.com\/\">Yisong Yue<\/a> and <a href=\"http:\/\/www.cs.cornell.edu\/~rdk\/\">Bobby Kleinberg<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This post is about contextual bandit problems where, repeatedly: The world chooses features x and rewards for each action r1,&#8230;,rk then announces the features x (but not the rewards). A policy chooses an action a. The world announces the reward ra The goal in these situations is to learn a policy which maximizes ra in &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/hunch.net\/?p=421\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;How do we get weak action dependence for learning with partial 
observations?&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[29,18,16],"tags":[],"class_list":["post-421","post","type-post","status-publish","format-standard","hentry","category-machine-learning","category-papers","category-problems"],"_links":{"self":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/421","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=421"}],"version-history":[{"count":0,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/421\/revisions"}],"wp:attachment":[{"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=421"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=421"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=421"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}