{"id":302,"date":"2007-11-28T20:44:24","date_gmt":"2007-11-29T02:44:24","guid":{"rendered":"http:\/\/hunch.net\/?p=302"},"modified":"2007-11-28T20:44:24","modified_gmt":"2007-11-29T02:44:24","slug":"computational-consequences-of-classification","status":"publish","type":"post","link":"https:\/\/hunch.net\/?p=302","title":{"rendered":"Computational Consequences of Classification"},"content":{"rendered":"<p>In the <a href=\"https:\/\/hunch.net\/?p=211\">regression vs classification debate<\/a>, I&#8217;m adding a new &#8220;pro&#8221; to classification.  It seems there are computational shortcuts available for classification which simply aren&#8217;t available for regression.  This arises in several situations.<\/p>\n<ol>\n<li>In <a href=\"https:\/\/hunch.net\/?cat=22\">active learning<\/a> it is sometimes possible to find an <em>e<\/em> error classifier with just <em>log(e)<\/em> labeled samples.    Only much more modest improvements appear to be achievable for squared loss regression.  The essential reason is that the loss function on many examples is flat with respect to large variations in the parameter spaces of a learned classifier, which implies that many of these classifiers do not need to be considered.  In contrast, for squared loss regression, most substantial variations in the parameter space influence the loss at most points.<\/li>\n<li>In budgeted learning, where there is either a computational time constraint or a feature cost constraint, a classifier can sometimes be learned to very high accuracy under the constraints while a squared loss regressor could not.  For example, if there is one feature which determines whether a binary label has probability less than or greater than 0.5, a great classifier exists using just one feature.  Because squared loss is sensitive to the exact probability, many more features may be required to learn well with respect to squared loss.<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>In the regression vs classification debate, I&#8217;m adding a new &#8220;pro&#8221; to classification. It seems there are computational shortcuts available for classification which simply aren&#8217;t available for regression. This arises in several situations. In active learning it is sometimes possible to find an e error classifier with just log(e) labeled samples. 
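For point 2, a hedged sketch (again my own construction, not from the post): one binary feature x0 determines whether P(y=1|x) is above or below 0.5, while the exact probability also depends on three extra features. Thresholding on x0 alone is then Bayes-optimal for 0/1 loss, but a squared-loss predictor restricted to x0 pays an irreducible penalty for ignoring the other features.

```python
import random

def p_true(x):
    # sign(p - 0.5) depends only on x[0]; the magnitude also depends
    # on x[1:], so exact probabilities need all four features.
    return 0.5 + x[0] * (0.25 + 0.05 * sum(x[1:]))

random.seed(0)
n = 200_000
err_x0 = bayes_err = sq_x0 = sq_full = 0.0
for _ in range(n):
    x = [random.choice([-1, 1]) for _ in range(4)]
    p = p_true(x)
    y = 1 if random.random() < p else 0
    # classifier using only x0: predict 1 iff x0 = +1 (matches sign(p - 0.5))
    err_x0 += int((1 if x[0] == 1 else 0) != y)
    bayes_err += int((1 if p > 0.5 else 0) != y)
    # a regressor restricted to x0 can at best predict E[y|x0] = 0.5 + 0.25*x0
    sq_x0 += (0.5 + 0.25 * x[0] - y) ** 2
    sq_full += (p - y) ** 2                 # regressor using all features

print(f"0/1 error, x0-only classifier: {err_x0 / n:.4f}  (Bayes: {bayes_err / n:.4f})")
print(f"squared loss, x0-only regressor: {sq_x0 / n:.4f}  (all features: {sq_full / n:.4f})")
```

The one-feature classifier matches the Bayes error exactly, while the one-feature regressor's squared loss exceeds the all-feature regressor's by about 0.0075 (the variance of p within each x0 group), illustrating the budgeted-learning gap described above.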