The Peekaboom Dataset

Luis von Ahn’s Peekaboom project has yielded data (830MB).

Peekaboom is the second attempt (after Espgame) to produce a dataset which is useful for learning to solve vision problems based on voluntary game play. As a second attempt, it is meant to address all of the shortcomings of the first attempt. In particular:

  1. The locations of specific objects are provided by the data.
  2. The data collection is far more complete and extensive.

The data consists of:

  1. The source images. (1 file per image, just short of 60K images.)
  2. The in-game events. (1 file per image, in a lispy syntax.)
  3. A description of the event language.

There is a great deal of very specific and relevant data here, so the hope that this will help solve vision problems seems quite reasonable.

A Fundamentalist Organization of Machine Learning

There are several different flavors of Machine Learning classes. Many classes are of the ‘zoo’ sort: many different learning algorithms are presented. Others avoid the zoo by not covering the full scope of machine learning.

This is my view of what makes a good machine learning class, along with why. I’d like to specifically invite comment on whether things are missing, misemphasized, or misplaced.

Phase: Introduction

  1. What is a machine learning problem? A good understanding of the characteristics of machine learning problems seems essential. Characteristics include: a data source, some hope the data is predictive, and a need for generalization. This is probably best taught in a case study manner: lay out the specifics of some problem and then ask “Is this a machine learning problem?”
  2. Machine Learning Problem Identification. Identification and recognition of the type of learning problem is (obviously) a very important step in solving such problems. People need to be familiar with the concepts of ‘regression’, ‘classification’, ‘cost sensitive classification’, ‘reinforcement learning’, etc… A good organization of these things is possible, but not yet well done.
  3. Example algorithm 1. To really understand machine learning, a couple of learning algorithms must be understood in detail.
  4. Example algorithm 2. Ditto. The reason the number is “2” and not “1” or “3” is that 2 is the minimum number required to make people naturally aware of the degrees of freedom available in learning algorithm design.

Phase: Analysis

  1. Bias for Learning. The need for a good bias is one of the defining characteristics of learning. This includes discussing the means to specify bias (via Bayesian priors, choice of features, graphical models, etc…). This statement is generic, so it will always apply to one degree or another.
  2. Learning can be boosted. This is the boosting observation: that it is possible to bootstrap predictive ability to create a better overall system. This statement is similarly generic.
  3. Learning can be transformed. This is the reductions observation: that the ability to solve one kind of learning problem implies the ability to solve other kinds of learning problems. This statement is similarly generic.
  4. Learning can be preserved. This is the online learning with experts observation: that we can have a master algorithm which preserves the best learning performance of subalgorithms. This statement is again generic.
  5. Overfitting. Learning algorithms can easily overfit to existing training data. How to analyze this (with an IID assumption), and how to avoid it, are very important for success.
  6. Hardness of Learning. It turns out that there are several different ways in which machine learning can be hard, including computational and information theoretic hardness. Some of PAC learning is relevant here. An understanding of how and why learning algorithms can fail seems important to understanding the process.

Phase: Applications

  1. Vision. One example of how learning is applied to solve vision problems.
  2. Language. Ditto for language problems.
  3. Robotics. Ditto for robotics.
  4. Speech. Ditto for speech.
  5. Businesses. Ditto for businesses.

Where is machine learning going? Insert predictions of the future here. It should be understood that the field of machine learning is changing rapidly.

The emphasis here is on fundamentals: generally applicable mathematical statements and understandings of the learning problem. Given that emphasis, the ‘applications’ section could be cut without harming the integrity of the purpose.

Multiplication of Learned Probabilities is Dangerous

This is about a design flaw in several learning algorithms such as the Naive Bayes classifier and Hidden Markov Models. A number of people are aware of it, but it seems that not everyone is.

Several learning systems have the property that they estimate some conditional probabilities P(event | other events) either explicitly or implicitly. Then, at prediction time, these learned probabilities are multiplied together according to some formula to produce a final prediction. The Naive Bayes classifier for binary data is the simplest of these, so it seems like a good example.

When Naive Bayes is used, a set of probabilities of the form Pr'(feature i | label) are estimated via counting statistics and some prior. Predictions are made according to the label maximizing:

Pr'(label) * Product_{features i} Pr'(feature i | label)

(The Pr’ notation indicates these are estimated values.)
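To make the estimate-then-multiply structure concrete, here is a minimal sketch of a binary Naive Bayes learner and predictor. The function names, the Laplace pseudocounts, the toy data, and the log-space scoring are illustrative assumptions rather than anything specified in the text above.

```python
import math
from collections import defaultdict

def train_naive_bayes(examples):
    """Estimate Pr'(label) and Pr'(feature i = 1 | label) by counting.

    examples: list of (binary feature tuple, label) pairs. A small
    pseudocount (a weak prior) keeps estimates away from exactly 0 or 1.
    """
    label_counts = defaultdict(float)
    feature_counts = defaultdict(lambda: defaultdict(float))
    for features, label in examples:
        label_counts[label] += 1
        for i, value in enumerate(features):
            feature_counts[label][i] += value
    total = sum(label_counts.values())
    n_features = len(examples[0][0])
    priors = {y: count / total for y, count in label_counts.items()}
    conditionals = {
        y: {i: (feature_counts[y][i] + 1.0) / (label_counts[y] + 2.0)
            for i in range(n_features)}
        for y in label_counts
    }
    return priors, conditionals

def predict(priors, conditionals, features):
    """Return the label maximizing Pr'(label) * prod_i Pr'(feature i | label).

    Logs are used only for numerical stability; the estimation issue
    discussed in the text is unchanged by this.
    """
    best_label, best_score = None, -math.inf
    for y, prior in priors.items():
        score = math.log(prior)
        for i, value in enumerate(features):
            p = conditionals[y][i]
            score += math.log(p if value else 1.0 - p)
        if score > best_score:
            best_label, best_score = y, score
    return best_label

# Toy usage with made-up data.
priors, conditionals = train_naive_bayes([((1, 0, 1), "spam"), ((0, 0, 1), "ham")])
print(predict(priors, conditionals, (1, 0, 0)))
```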

There is nothing wrong with this method as long as (a) the prior for the sample counts is very strong and (b) the prior (on the conditional independences and the sample counts) is “correct”—the actual problem is drawn from it. However, (a) seems to never be true and (b) is often not true.

At this point, we can think a bit from an estimation perspective. When trying to estimate a coin with bias Pr(feature i | label), after observing m IID samples, the estimate is accurate to (at most) c/m for some constant c. (Actually, it’s c/m^0.5 in the general case and c/m for coins with bias near 0 or 1.) Given this observation, we should expect the estimates Pr’ to differ by c/m or more when the prior on the sample counts is weak.

The problem to notice is that errors of c/m can quickly accumulate. The final product in the Naive Bayes classifier is n-way linear in the error terms, where n is the number of features. If every feature’s true value happens to be v, and a fraction 1/2 + 1/n^0.5 of the features happens to be estimated too large by c/m while a fraction 1/2 - 1/n^0.5 is estimated too small by c/m (as might happen with a reasonable chance), the value of the product might be overestimated by:

(v - c/m)^(n/2 - n^0.5) * (v + c/m)^(n/2 + n^0.5) - v^n

When c/m is very small, this approximates as c n^0.5/m, which suggests problems must arise when the number of features n is greater than the number of samples squared, n > m^2. This can actually happen in the text classification settings where Naive Bayes is often applied.
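A quick numerical sketch of this accumulation follows; the particular values of v, c, m, and the choices of n are arbitrary and only for illustration.

```python
import math

# Arbitrary illustrative values: v is the true value of every
# Pr(feature i | label), c/m the per-feature estimation error,
# m the number of samples, n the number of features.
v, c, m = 0.5, 1.0, 1000

for n in (100, 10_000, 1_000_000):        # note m**2 == 1_000_000
    err = c / m
    high = n // 2 + int(math.sqrt(n))     # features overestimated by err
    low = n - high                        # features underestimated by err
    # Log of (estimated product / true product v**n), computed in log space
    # to avoid overflow. It grows roughly like c * n**0.5 / m and becomes
    # severe as n approaches m**2.
    log_ratio = high * math.log1p(err / v) + low * math.log1p(-err / v)
    print(f"n = {n:>9}: estimated product / true product = {math.exp(log_ratio):.2f}")
```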

All of the above is under the assumption that the conditional independences encoded in the Naive Bayes classifier are correct for the problem. When these aren’t correct, as is often true in practice, the estimation errors can be systematic rather than stochastic implying much more brittle behavior.

In all of the above, note that we used Naive Bayes as a simple example; this brittleness can be found in a number of other common prediction systems.

An important question is “What can you do about this brittleness?” There are several answers:

  1. Use a different system for prediction (there are many).
  2. Get much more serious about following Bayes’ law here. (a) The process of integrating over a posterior rather than taking the maximum likelihood element of a posterior tends to reduce the sampling effects; a small sketch of this appears below. (b) Realize that the conditional independence assumptions producing the multiplication are probably excessively strong and design softer priors which better fit reasonable beliefs.
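As a small illustration of point 2(a), for a single conditional probability under a Beta prior, integrating over the posterior amounts to predicting with the posterior mean rather than the raw count ratio. The uniform Beta(1,1) prior below is just one assumed choice, not a recommendation from the text.

```python
def mle_estimate(successes, trials):
    """Maximum likelihood estimate of a conditional probability: the raw count ratio."""
    return successes / trials

def posterior_mean(successes, trials, alpha=1.0, beta=1.0):
    """Mean of the Beta(alpha, beta) posterior after the observed counts.

    This is what integrating over the posterior gives for the probability of
    the next observation; alpha = beta = 1 (a uniform prior) is only one
    possible, assumed choice.
    """
    return (successes + alpha) / (trials + alpha + beta)

# With few samples the two estimates differ substantially, and the posterior
# mean never reaches exactly 0 or 1, so a product of many such estimates is
# harder to drive to an extreme value with a handful of lucky counts.
print(mle_estimate(0, 3), posterior_mean(0, 3))   # 0.0 vs 0.2
```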

Yahoo’s Learning Problems.

I just visited Yahoo Research which has several fundamental learning problems near to (or beyond) the set of problems we know how to solve well. Here are 3 of them.

  1. Ranking This is the canonical problem of all search engines. It is made extra difficult for several reasons.
    1. There is relatively little “good” supervised learning data and a great deal of data with some signal (such as click through rates).
    2. The learning must occur in a partially adversarial environment. Many people very actively attempt to place themselves at the top of rankings.
    3. It is not even quite clear whether the problem should be posed as ‘ranking’ or as ‘regression’ which is then used to produce a ranking.
  2. Collaborative filtering Yahoo has a large number of recommendation systems for music, movies, etc… In these sorts of systems, users specify how they liked a set of things, and then the system can (hopefully) find some more examples of things they might like by reasoning across multiple such sets.
  3. Exploration with Generalization The cash cow of search engines is displaying advertisements which are relevant to search along with search results. Better targeting these advertisements makes money (a small improvement might be worth $millions) and improves the value of the search engine for the user.

    It is natural to predict the set of advertisements which maximize the advertising payoff. This natural idea is stymied by both the extreme multiplicity of advertisements under contract (think millions) and a lack of ability to measure hypotheticals like “What would have happened if we had displayed a different set of advertisements for this (query,user) pair instead?” This is a combined exploration and generalization problem; a minimal sketch of one standard approach appears after this list.
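For the exploration with generalization problem, one standard family of approaches pairs a learned payoff estimator with occasional random exploration. Here is a minimal epsilon-greedy sketch; the interface, the epsilon value, and the toy payoff model are assumptions for illustration, not anything Yahoo actually uses.

```python
import random

def choose_ad(candidate_ads, predicted_payoff, epsilon=0.05):
    """Pick an advertisement to display for one (query, user) pair.

    candidate_ads: the advertisements eligible for this context.
    predicted_payoff: a function ad -> estimated payoff, learned from logged data.
    With probability epsilon a random ad is shown, so every ad keeps some chance
    of being displayed and counterfactual questions ("what if we had shown
    something else?") remain answerable from the logs; otherwise the current
    model's best guess is exploited.
    """
    if random.random() < epsilon:
        return random.choice(candidate_ads)            # explore
    return max(candidate_ads, key=predicted_payoff)    # exploit

# Toy usage with a made-up payoff model.
ads = ["ad_a", "ad_b", "ad_c"]
print(choose_ad(ads, predicted_payoff=lambda ad: {"ad_a": 0.01, "ad_b": 0.03, "ad_c": 0.02}[ad]))
```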

Good solutions to any of these problems would be extremely useful (and not just at Yahoo). Even small further improvements over the existing solutions may be very useful.

For those interested, Yahoo (as an organization) knows these are learning problems and is very actively interested in solving them. Yahoo Research is committed to a relatively open method of solving these problems. Dennis DeCoste is one contact point for machine learning research at Yahoo Research.

Introspectionism as a Disease

In the AI-related parts of machine learning, it is often tempting to examine how you do things in order to imagine how a machine should do things. This is introspection, and it can easily go awry. I will call introspection gone awry introspectionism.

Introspectionism is almost unique to AI (and the AI-related parts of machine learning) and it can lead to huge wasted effort in research. It’s easiest to show how introspectionism arises by an example.

Suppose we want to solve the problem of navigating a robot from point A to point B given a camera. Then, the following research action plan might seem natural when you examine your own capabilities:

  1. Build an edge detector for still images.
  2. Build an object recognition system given the edge detector.
  3. Build a system to predict distance and orientation to objects given the object recognition system.
  4. Build a system to plan a path through the scene you construct from {object identification, distance, orientation} predictions.
  5. As you execute the above, constantly repeat the above steps.

Introspectionism begins when you believe this must be the way that it is done.

Introspectionism arguments are really arguments by lack of imagination. It is like saying “This is the only way I can imagine doing things, so it must be the way they should be done.” This is a common weak argument style that can be very difficult to detect. It is particularly difficult to detect here because it is easy to confuse capability with reuse. Humans, via experimental tests, can be shown capable of executing each step above, but this does not imply they reuse these computations in the next step.

There are reasonable evolution-based reasons to believe that brains minimize the amount of computation required to accomplish goals. Computation costs energy, and since a human brain might consume 20% of the energy budget, we can be fairly sure that the evolutionary impetus to minimize computation is significant. This suggests telling a different energy-conservative story.

An energy conservative version of the above example might look similar, but with very loose approximations.

  1. The brain might (by default) use a pathetically weak edge detector that is lazily refined into something more effective using time-sequenced images (since edges in moving scenes tend to stand out more).
  2. The puny edge detector might be used to fill a rough “obstacle-or-not” fill map that coarsens dramatically with distance.
  3. Given this, a decision about which direction to go next (rather than a full path) might be made.

This strategy avoids the need to build a good edge detector for still scenes, avoids the need to recognize objects, avoids the need to place them with high precision in a scene, and avoids the need to make a full path plan. All of these avoidances might result in more tractable computation or learning problems. Note that we can’t (and shouldn’t) say that the energy conservative path “must” be right, because that would also be introspectionism. However, it does exhibit an alternative, exposing the failure of imagination behind introspectionism in the first approach.
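A toy sketch of the kind of computation the energy conservative story points at follows; the grid representation, the distance weighting, and the three-way choice are all invented here purely for illustration.

```python
def choose_direction(obstacle_map):
    """Pick a coarse heading from a rough obstacle-or-not map.

    obstacle_map: rows of 0 (free) / 1 (obstacle), nearest row first; rows
    farther away may have fewer cells, mimicking a map whose resolution
    coarsens with distance.
    """
    scores = {"left": 0.0, "straight": 0.0, "right": 0.0}
    for depth, row in enumerate(obstacle_map):
        weight = 1.0 / (depth + 1)            # nearby obstacles matter more
        third = max(1, len(row) // 3)
        scores["left"] -= weight * sum(row[:third])
        scores["straight"] -= weight * sum(row[third:2 * third])
        scores["right"] -= weight * sum(row[2 * third:])
    return max(scores, key=scores.get)

# A near row at higher resolution and a far row already coarsened to 3 cells.
print(choose_direction([[0, 0, 1, 1, 0, 0], [1, 0, 1]]))
```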

It is reasonable to take introspection-derived ideas as suggestions for how to go about building a (learning) system. But if the suggestions don’t work, it’s entirely reasonable to try something else.