{"id":164,"date":"2006-02-02T11:41:09","date_gmt":"2006-02-02T17:41:09","guid":{"rendered":"http:\/\/hunch.net\/?p=164"},"modified":"2006-02-02T11:44:16","modified_gmt":"2006-02-02T17:44:16","slug":"introspectionism-as-a-disease","status":"publish","type":"post","link":"https:\/\/hunch.net\/?p=164","title":{"rendered":"Introspectionism as a Disease"},"content":{"rendered":"<p>In the AI-related parts of machine learning, it is often tempting to examine how <em>you<\/em> do things in order to imagine how a machine should do things.  This is introspection, and it can easily go awry.  I will call introspection gone awry introspectionism.<\/p>\n<p>Introspectionism is almost unique to AI (and the AI-related parts of machine learning) and it can lead to huge wasted effort in research.  It&#8217;s easiest to show how introspectionism arises by an example.<\/p>\n<p>Suppose we want to solve the problem of navigating a robot from point A to point B given a camera.  Then, the following research action plan might seem natural when you examine your own capabilities:<\/p>\n<ol>\n<li>Build an edge detector for still images.<\/li>\n<li>Build an object recognition system given the edge detector.<\/li>\n<li>Build a system to predict distance and orientation to objects given the object recognition system.<\/li>\n<li>Build a system to plan a path through the scene you construct from {object identification, distance, orientation} predictions.<\/li>\n<li>As you execute the above, constantly repeat the above steps.<\/li>\n<\/ol>\n<p>Introspectionism begins when you believe this <em>must<\/em> be the way that it is done.<\/p>\n<p>Introspectionism arguments are really argument by lack of imagination.  It is like saying &#8220;This is the only way I can imagine doing things, so it must be the way they should be done.&#8221;  This is a common weak argument style that can be very difficult to detect.   It is particularly difficult to detect here because it is easy to confuse <em>capability<\/em> with <em>reuse<\/em>.  Humans, via experimental tests, can be shown capable of executing each step above, but this does <em>not<\/em> imply they reuse these computations in the next step.<\/p>\n<p>There are reasonable evolution-based reasons to believe that brains minimize the amount of computation required to accomplish goals.  Computation costs energy, and since a human brain might consume <a href=\"http:\/\/www.pnas.org\/cgi\/content\/full\/99\/16\/10237\">20% of the energy budget<\/a>, we can be fairly sure that the evolutionary impetus to minimize computation is significant.  This suggests telling a different energy-conservative story.  <\/p>\n<p>An energy consevative version of the above example might look similar, but with very loose approximations.  <\/p>\n<ol>\n<li>The brain might (by default) use a pathetically weak edge detector that is lazily refined into something more effective using time-sequenced images (since edges in moving scenes tend to stand out more).  <\/li>\n<li>The puny edge detector might be used to fill a rough &#8220;obstacle-or-not&#8221; fill map that coarsens dramatically with distance. <\/li>\n<li>Given this, a decision about which direction to go next (rather than a full path) might be made.<\/li>\n<\/ol>\n<p>This strategy avoids the need to build a good edge detector for still scenes, avoids the need to recognize objects, avoids the need to place them with high precision in a scene, and avoids the need to make a full path plan.  
Note that we can't (and shouldn't) say that the energy-conservative path "must" be right, because that would also be introspectionism. However, it does exhibit an alternative, exposing the failure of imagination behind the first approach.

It is reasonable to take introspection-derived ideas as suggestions for how to go about building a (learning) system. But if the suggestions don't work, it's entirely reasonable to try something else.