{"id":170,"date":"2006-02-27T16:51:09","date_gmt":"2006-02-27T22:51:09","guid":{"rendered":"http:\/\/hunch.net\/?p=170"},"modified":"2006-02-27T16:51:57","modified_gmt":"2006-02-27T22:51:57","slug":"the-peekaboom-dataset","status":"publish","type":"post","link":"https:\/\/hunch.net\/?p=170","title":{"rendered":"The Peekaboom Dataset"},"content":{"rendered":"<p><a href=\"http:\/\/www.cs.cmu.edu\/~biglou\/\">Luis von Ahn<\/a>&#8217;s <a href=\"http:\/\/www.peekaboom.org\/\">Peekaboom project<\/a> has yielded <a href=\"https:\/\/hunch.net\/~learning\/peekaboom.tar.bz2\">data<\/a> (830MB).<\/p>\n<p>Peekaboom is the second attempt (after <a href=\"http:\/\/www.espgame.org\/\">Espgame<\/a>) to produce a dataset which is useful for learning to solve vision problems based on voluntary game play.  As a second attempt, it is meant to address the shortcomings of the first attempt.  In particular:<\/p>\n<ol>\n<li>The locations of specific objects are provided by the data.<\/li>\n<li>The data collection is far more complete and extensive.<\/li>\n<\/ol>\n<p>The data consists of:<\/p>\n<ol>\n<li>The source images. (1 file per image, just short of 60K images.)<\/li>\n<li>The in-game events. (1 file per image, in a lispy syntax.)<\/li>\n<li>A description of the event language.<\/li>\n<\/ol>\n<p>There is a great deal of very specific and relevant data here, so the hope that this will help solve vision problems seems quite reasonable.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Luis von Ahn&#8217;s Peekaboom project has yielded data (830MB). Peekaboom is the second attempt (after Espgame) to produce a dataset which is useful for learning to solve vision problems based on voluntary game play. As a second attempt, it is meant to address the shortcomings of the first attempt. 
In particular: The locations &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/hunch.net\/?p=170\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;The Peekaboom Dataset&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[29,17],"tags":[],"class_list":["post-170","post","type-post","status-publish","format-standard","hentry","category-machine-learning","category-vision"],"_links":{"self":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/170","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=170"}],"version-history":[{"count":0,"href":"https:\/\/hunch.net\/index.php?rest_route=\/wp\/v2\/posts\/170\/revisions"}],"wp:attachment":[{"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=170"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=170"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hunch.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=170"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}