{"id":12,"date":"2005-02-01T16:21:21","date_gmt":"2005-02-01T22:21:21","guid":{"rendered":"\/?p=12"},"modified":"2005-02-02T20:00:46","modified_gmt":"2005-02-03T02:00:46","slug":"nips-online-bayes","status":"publish","type":"post","link":"https:\/\/hunch.net\/?p=12","title":{"rendered":"NIPS: Online Bayes"},"content":{"rendered":"<p>One nice use for this blog is to consider and discuss papers that have appeared at recent conferences. I really enjoyed Andrew Ng and Sham Kakade&#8217;s paper <a href=\"http:\/\/books.nips.cc\/papers\/files\/nips17\/2004_0822.pdf\">Online Bounds for Bayesian Algorithms<\/a>. From the paper:<\/p>\n<blockquote><p>\nThe philosophy taken in the Bayesian methodology is often at odds with<br \/>\nthat in the online learning community&#8230;. the online learning setting<br \/>\nmakes rather minimal assumptions on the conditions under which the<br \/>\ndata are being presented to the learner \u2014 usually, Nature could provide<br \/>\nexamples in an adversarial manner. We study the performance of<br \/>\nBayesian algorithms in a more adversarial setting&#8230; We provide<br \/>\ncompetitive bounds when the cost function is the log loss, and we<br \/>\ncompare our performance to the best model in our model class (as in<br \/>\nthe experts setting).  <\/p><\/blockquote>\n<p>It&#8217;s a very nice analysis of some of my favorite algorithms that all hinges on a beautiful theorem:<\/p>\n<p>Let Q be any distribution over parameters theta. Then for all sequences S:<\/p>\n<p>L_{Bayes}(S) &le; L_Q(S) + KL(Q||P)<\/p>\n<p>where P is our prior, L_{Bayes}(S) is the cumulative log-loss of the Bayes algorithm run online on S, and L_Q(S) is the expected cumulative log-loss on S under an arbitrary distribution Q over parameters.<\/p>\n<p>Any thoughts? Any other papers you think we <b>have<\/b> to read?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>One nice use for this blog is to consider and discuss papers that have appeared at recent conferences. 
I really enjoyed Andrew Ng and Sham Kakade&#8217;s paper Online Bounds for Bayesian Algorithms. From the paper: The philosophy taken in the Bayesian methodology is often at odds with that in the online learning community&#8230;. the &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/hunch.net\/?p=12\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;NIPS: Online Bayes&#8221;<\/span><\/a><\/p>\n","protected":false},"author":9,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6,7],"tags":[],"class_list":["post-12","post","type-post","status-publish","format-standard","hentry","category-bayesian","category-online"],"_links":{"self":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/12","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=12"}],"version-history":[{"count":0,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/12\/revisions"}],"wp:attachment":[{"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=12"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=12"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=12"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}