Introspectionism as a Disease

In the AI-related parts of machine learning, it is often tempting to examine how you do things in order to imagine how a machine should do things. This is introspection, and it can easily go awry. I will call introspection gone awry introspectionism.

Introspectionism is almost unique to AI (and the AI-related parts of machine learning), and it can lead to huge wasted effort in research. It’s easiest to show how introspectionism arises with an example.

Suppose we want to solve the problem of navigating a robot from point A to point B given a camera. Then, the following research action plan might seem natural when you examine your own capabilities:

  1. Build an edge detector for still images.
  2. Build an object recognition system given the edge detector.
  3. Build a system to predict distance and orientation to objects given the object recognition system.
  4. Build a system to plan a path through the scene you construct from {object identification, distance, orientation} predictions.
  5. As you execute the plan, repeat steps 1-4 continuously on each new image.
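To make the modular structure concrete, here is a minimal sketch of this introspection-derived pipeline in Python. All of the function names and internals here are hypothetical stand-ins (the edge detector is just a gradient threshold and the later stages are placeholders); the point is the data flow, in which each stage consumes the full output of the previous one and the whole pipeline is re-run on every frame.

```python
import numpy as np

def detect_edges(image):
    # Stage 1: a still-image edge detector (here, a crude gradient-magnitude threshold).
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy) > 0.5

def recognize_objects(edges):
    # Stage 2: object recognition over the edge map (placeholder: any edges = one "object").
    return [{"label": "object", "mask": edges}] if edges.any() else []

def estimate_distance_orientation(objects):
    # Stage 3: predict distance and orientation for each recognized object (placeholder values).
    return [{"label": o["label"], "distance": 1.0, "bearing": 0.0} for o in objects]

def plan_path(scene, goal):
    # Stage 4: plan a full path through the reconstructed scene (placeholder plan).
    return ["forward"] if not scene else ["turn_left", "forward"]

def navigate_step(image, goal):
    # Stage 5: the entire pipeline is repeated on every new camera image.
    edges = detect_edges(image)
    objects = recognize_objects(edges)
    scene = estimate_distance_orientation(objects)
    return plan_path(scene, goal)

if __name__ == "__main__":
    frame = np.random.rand(64, 64)            # stand-in for a camera image
    print(navigate_step(frame, goal=(10, 10)))
```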

Introspectionism begins when you believe this must be the way that it is done.

Introspectionist arguments are really arguments by lack of imagination. It is like saying “This is the only way I can imagine doing things, so it must be the way they should be done.” This is a common weak argument style that can be very difficult to detect. It is particularly difficult to detect here because it is easy to confuse capability with reuse. Humans can be shown, via experimental tests, to be capable of executing each step above, but this does not imply that they reuse those computations in the next step.

There are plausible evolutionary reasons to believe that brains minimize the amount of computation required to accomplish goals. Computation costs energy, and since the human brain consumes roughly 20% of the body’s energy budget, we can be fairly sure that the evolutionary pressure to minimize computation is significant. This suggests telling a different, energy-conservative story.

An energy-conservative version of the above example might look similar, but with very loose approximations.

  1. The brain might (by default) use a pathetically weak edge detector that is lazily refined into something more effective using time-sequenced images (since edges in moving scenes tend to stand out more).
  2. The puny edge detector might be used to fill in a rough “obstacle-or-not” map that coarsens dramatically with distance.
  3. Given this, a decision about which direction to go next (rather than a full path) might be made.
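For contrast, here is an equally hypothetical sketch of the energy-conservative version, under the same caveats (toy internals, made-up thresholds). Notice what it never computes: no object identities, no precise distances, no full path, just a coarse obstacle map and a single next-direction decision per frame.

```python
import numpy as np

def cheap_edges(prev_frame, frame):
    # Step 1: a pathetically weak edge cue, sharpened by motion: frame
    # differencing makes edges in moving scenes stand out without a careful
    # still-image edge detector.
    return np.abs(frame - prev_frame) > 0.1

def coarse_obstacle_map(edges, n_sectors=5):
    # Step 2: collapse the edge cue into a rough "obstacle-or-not" map over a
    # handful of angular sectors; resolution is deliberately crude, and the
    # far (upper) half of the image is ignored entirely.
    near = edges[edges.shape[0] // 2:, :]                 # bottom half ~ nearby ground
    sectors = np.array_split(near, n_sectors, axis=1)
    return np.array([s.mean() > 0.05 for s in sectors])   # True = probably blocked

def choose_direction(blocked):
    # Step 3: decide only which direction to go next (not a full path),
    # steering toward the open sector closest to straight ahead.
    open_sectors = np.flatnonzero(~blocked)
    if open_sectors.size == 0:
        return "stop"
    center = len(blocked) // 2
    best = open_sectors[np.argmin(np.abs(open_sectors - center))]
    return ["hard_left", "left", "straight", "right", "hard_right"][best]  # assumes 5 sectors

if __name__ == "__main__":
    prev, cur = np.random.rand(64, 64), np.random.rand(64, 64)  # stand-in frames
    print(choose_direction(coarse_obstacle_map(cheap_edges(prev, cur))))
```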

This strategy avoids the need to build a good edge detector for still scenes, avoids the need to recognize objects, avoids the need to place them with high precision in a scene, and avoids the need to make a full path plan. All of these avoidances might result in more tractable computation or learning problems. Note that we can’t (and shouldn’t) say that the energy-conservative path “must” be right, because that would also be introspectionism. However, it does exhibit an alternative, exposing the failure of imagination behind the introspectionist commitment to the first approach.

It is reasonable to take introspection-derived ideas as suggestions for how to go about building a (learning) system. But if the suggestions don’t work, it’s entirely reasonable to try something else.

3 Replies to “Introspectionism as a Disease”

  1. I agree with you, although I think introspectionism and argument from lack of imagination are logically distinct. Introspectionism is a source of intuitions to follow whereas the argument from lack of imagination is a (not very convincing) justification for following some particular line of research (regardless of whether that line was inspired by introspection).

    In psychology, introspection was a very early legitimate research tool, but it was soon stomped on because it was realised that introspection does not provide guaranteed privileged access to the mechanisms of thought. The real problem is that we try to make sense of everything, so some of our introspection will be after-the-fact fiction that fits what actually happened into whatever else we believe, and there is no way to tell which bits of introspection are fiction.

    I have no problem with introspection as a source of intuition, provided it is labelled as such. Everybody has to get their research ideas from somewhere and you have to believe your hypotheses sufficiently to go to the bother of testing them before you have enough evidence to objectively justify your actions.

    Argument from lack of imagination is more of a problem and, I believe, very common. It is interesting to speculate why this may be so. Perhaps it is economics, in that people commit to particular lines of research and the switching cost grows over time. Perhaps it is the result of human cognitive biases (people are known to be over-confident in their judgments). Regardless of the reasons for the popularity of the argument from lack of imagination, I think its worst feature is that it effectively denies radical progress. Major (as opposed to incremental) scientific advances involve the introduction of new conceptual schemas. That is, major advances require us to think in ways that we could not have previously imagined. Historically, this has happened repeatedly. So to seriously advance the argument from lack of imagination, you are also making a strong claim that no further conceptual development is possible in that field.

    John gave an “energy conservation” counter-example to an introspection based design. Let me expand that a little further. The introspection based design is also a human engineering based design. It has modularity and neat hierarchy (for good reasons of re-usability and human cognitive limitations). However, an evolved natural mechanism tends not to look like human engineering. The “components” are much more likely to be multifunctional and merge into each other, so there is less modularity and hierarchy. Also, the mechanism is grown rather than assembled, which imposes new constraints (the mechanism has to be viable at all times rather than assembled inert then switched on). Consequently, natural systems tend to be very different from engineered systems and difficult (or impossible) for people to understand holistically.

    (Of course, a natural system may serve as an inspiration; principles may be discovered; and an engineered system may be constructed based on those principles.)

  2. Even if the introspection method and the method used by the brain are actually the same, it still does not mean it is the correct or best way to solve the problem on a computer. The architectures of the brain and the computer are completely different, and thus any method which works on the brain might be terrible (or impossible) on a computer.

  3. Minor note: I distinguish between “introspection” (which is a reasonable source of suggestions) and “introspectionism” which is where the argument by lack of imagination grows out of introspection.
