{"id":111,"date":"2005-08-22T01:55:12","date_gmt":"2005-08-22T07:55:12","guid":{"rendered":"\/?p=111"},"modified":"2005-08-22T01:55:18","modified_gmt":"2005-08-22T07:55:18","slug":"do-you-believe-in-induction","status":"publish","type":"post","link":"https:\/\/hunch.net\/?p=111","title":{"rendered":"Do you believe in induction?"},"content":{"rendered":"<p><a href=\"http:\/\/pages.stern.nyu.edu\/~fprovost\/\">Foster Provost<\/a> gave a talk at the ICML <a href=\"http:\/\/www.tlc2.uh.edu\/icml2005-metalearning\">metalearning workshop<\/a> on &#8220;metalearning&#8221; and the &#8220;no free lunch theorem&#8221; which seems worth summarizing.<\/p>\n<p>As a review: the no free lunch theorem is the most complicated way we know of to say that a <a href=\"https:\/\/hunch.net\/index.php?p=10\">bias<\/a> is required in order to learn.  The simplest way to see this is in a nonprobabilistic setting.  If you are given examples of the form <em>(x,y)<\/em> and you wish to predict <em>y<\/em> from <em>x<\/em> then any prediction mechanism errs half the time in expectation over all sequences of examples.  The proof of this is very simple: on every example a predictor must make some prediction and by symmetry over the set of sequences it will be wrong half the time and right half the time.  The basic idea of this proof has been applied to many other settings.<\/p>\n<p>The simplistic interpretation of this theorem which many people jump to is &#8220;machine learning is dead&#8221; since there can be no single learning algorithm which can solve all learning problems.  This is the wrong way to think about it.  In the real world, we do not care about the expectation over all possible sequences, but perhaps instead about some (weighted) expectation over the set of problems we actually encounter.  It is enitrely possible that we can form a prediction algorithm with good performance over this set of problems.<\/p>\n<p>This is one of the fundamental reasons why experiments are done in machine learning.  If we want to access the set of problems we actually encounter, we must do this empirically.  Although we must work with the world to understand what a good general-purpose learning algorithm is, quantifying how good the algorithm is may be difficult.  In particular, performing well on the last 100 encountered learning problems may say nothing about performing well on the next encountered learning problem. <\/p>\n<p>This is where induction comes in.  It has been noted by <a href=\"http:\/\/en.wikipedia.org\/wiki\/David_Hume\">Hume<\/a> that there is no mathematical proof that the sun will rise tomorrow which does not rely on unverifiable assumptions about the world.  Nevertheless, the belief in sunrise tomorrow is essentially universal.  A good general purpose learning algorithm is similar to &#8216;sunrise&#8217;: we can&#8217;t prove that we will succeed on the next learning problem encountered, but nevertheless we might believe it for inductive reasons.  And we might be right.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Foster Provost gave a talk at the ICML metalearning workshop on &#8220;metalearning&#8221; and the &#8220;no free lunch theorem&#8221; which seems worth summarizing. As a review: the no free lunch theorem is the most complicated way we know of to say that a bias is required in order to learn. The simplest way to see this &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/hunch.net\/?p=111\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Do you believe in induction?&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-111","post","type-post","status-publish","format-standard","hentry","category-general"],"_links":{"self":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/111","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=111"}],"version-history":[{"count":0,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/111\/revisions"}],"wp:attachment":[{"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=111"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=111"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=111"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}