Not goal metrics

One of the confusing things about research is that progress is very hard to measure. A consequence of being in a hard-to-measure environment is that the wrong things are often measured.

  1. Lines of Code: The classical example of this phenomenon is the old lines-of-code-produced metric for programming. It is easy to imagine systems for producing many lines of code with very little work that accomplish very little.
  2. Paper count: In academia, a “paper count” is an analog of “lines of code”, and it suffers from the same failure modes. The obvious failure mode is that we end up with a large number of uninteresting papers, since people spend a lot of time optimizing this metric.
  3. Complexity: Another metric is the “complexity” (in the eye of a reviewer) of a paper. There is a common temptation to make a method appear more complex than it is so that reviewers judge it worthy of publication. The failure mode here is unclean thinking. Simple, effective methods are often overlooked in favor of complex, relatively ineffective methods. This is simply wrong for any field. (Discussion at Lance’s blog.)
  4. Acceptance Rate: The “acceptance rate” is the number of papers accepted divided by the number of papers submitted. A low acceptance rate is often considered desirable for a conference. But:
    1. It’s easy to skew an acceptance rate by adding (or inviting) many weak or bogus papers; a small worked example follows this list.
    2. It’s very difficult to judge what, exactly, is good work in the long term. Consequently, a low acceptance rate can retard progress by simply raising the bar too high for what turns out to be a good idea when it is more fully developed. (Consider the limit where only one paper is accepted per year…)
    3. Accept/reject decisions can become more “political” and less about judging the merits of a paper/idea. With a low acceptance ratio, a strong objection by any one of several reviewers might torpedo a paper. The consequence of this is that papers become noncontroversial with a tendency towards incremental improvements.
    4. A low acceptance rate tends to spawn a multiplicity of conferences in one area. There are already many learning-related conferences.

    (see also How to increase the acceptance ratios at top conferences?)

  5. Citation count: Counting citations is somewhat better than counting papers because it is some evidence that an idea is actually useful. This has been particularly aided by automated citation counting systems like scholar.google.com and http://citeseer.ist.psu.edu/. However, there are difficulties—citation counts can be optimized using self-citation and “societies of mutual admiration” (groups of people who agree, implicitly or explicitly, to cite each other). Citations are also sometimes negative, of the form “here we fix bad idea X”.
  6. See also the Academic Mechanism Design post for other ideas.
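
To make the skewing claim in point 4.1 concrete, here is a tiny worked example with hypothetical numbers (the counts are purely illustrative):

```python
# Hypothetical counts showing how bogus submissions skew an acceptance rate.
accepted = 100
legitimate_submissions = 400
invited_weak_submissions = 200          # weak or bogus papers added to the pool

rate_before = accepted / legitimate_submissions                              # 25%
rate_after = accepted / (legitimate_submissions + invited_weak_submissions)  # ~17%

print(f"before: {rate_before:.1%}, after: {rate_after:.1%}")
```

The conference accepts exactly the same papers in both cases, yet the headline acceptance rate drops sharply.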

These metrics do have some meaning. A programmer who writes no lines of code isn’t very good. An academic who produces no papers isn’t very good. A conference that doesn’t aid information filtration isn’t helpful. Hard problems often require complex solutions. Important papers are often cited.

Nevertheless, optimizing these metrics is not beneficial for a field of research. In thinking about this, we must clearly differentiate 1) what is good for a field of research (solving important problems) and 2) what is good for individual researchers (getting jobs). The essential point here is that there is a disparity.

Any individual in academia cannot avoid being judged by these metrics. Attempts by an individual or a small group of individuals to ignore these metrics are unlikely to change the system (and likely to result in the individual or small group being judged badly).

I don’t believe there is an easy fix to this problem. The best we can hope for is incremental progress which takes the form of the leadership in the academic community introducing new, saner metrics. This is a difficult thing, particularly because any academic leader must have succeeded in the old system. Nevertheless, it must happen if academic-style research is to flourish.

In the spirit of being constructive, I’ll make one proposal which may address the “complexity” problem: judge the importance of a piece of work independent of the method. For a conference paper this might be done by changing the review process to have one “technical reviewer” and several “importance reviewers” rather than 3 or 4 reviewers. The importance reviewer’s job is easier than the current standard: they must simply understand the problem being solved and rate how important that problem is. The technical reviewer’s job is harder than the current standard: they must verify that all claims of a solution to the problem are met. Overall, the amount of work by reviewers would stay constant, and perhaps we would avoid the preference for complex solutions.
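
As a rough sketch of how such a split review could be aggregated (the scores, scale, and threshold below are hypothetical choices of mine, not part of the proposal):

```python
# Hypothetical aggregation for the proposed review split: one technical
# reviewer verifies the claims, several importance reviewers rate the problem.

def decide(technically_sound: bool, importance_scores: list[float],
           importance_threshold: float = 3.0) -> bool:
    """Accept only if the claims check out and the problem is rated important.

    Importance scores are assumed to be on a 1-5 scale (my assumption)."""
    if not technically_sound:
        return False
    average_importance = sum(importance_scores) / len(importance_scores)
    return average_importance >= importance_threshold

print(decide(True, [4.0, 3.5, 2.5]))   # True: claims verified, average 3.33
print(decide(False, [5.0, 5.0, 5.0]))  # False: important problem, unverified claims
```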

Interesting papers at ACL

A recent discussion indicated that one goal of this blog might be to allow people to post comments about recent papers that they liked. I think this could potentially be very useful, especially for those with diverse interests but only finite time to read through conference proceedings. ACL 2005 recently concluded, and here are four papers from that conference that I thought were either good or perhaps of interest to a machine learning audience.

David Chiang, A Hierarchical Phrase-Based Model for Statistical Machine Translation. (Best paper award.) This paper takes the standard phrase-based MT model that is popular in our field (basically, translate a sentence by individually translating phrases and reordering them according to a complicated statistical model) and extends it to take into account hierarchy in phrases, so that you can learn things like “X ’s Y” -> “Y de X” in Chinese, where X and Y are arbitrary phrases. This takes a step toward linguistic syntax for MT, which our group is working hard on, but doesn’t require any linguists to sit down and write out grammars or parse sentences.
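
As a toy illustration of what applying one such hierarchical rule looks like (taking the rule exactly as quoted above; the sub-phrase translations are placeholders, and the real system scores many competing rules and derivations statistically):

```python
# Fill the gaps of a hierarchical rule's target side, e.g. "X 's Y" -> "Y de X":
# the nonterminals X and Y are replaced by translations of the sub-phrases.

def fill_rule(target_side: str, bindings: dict[str, str]) -> str:
    """Replace nonterminal tokens (X, Y) in the rule's target side."""
    return " ".join(bindings.get(token, token) for token in target_side.split())

# Placeholder sub-phrase translations plugged into the gaps.
bindings = {"X": "<translation-of-X>", "Y": "<translation-of-Y>"}
print(fill_rule("Y de X", bindings))   # -> "<translation-of-Y> de <translation-of-X>"
```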

Rie Kubota Ando and Tong Zhang, A High-Performance Semi-Supervised Learning Method for Text Chunking. This is more of a machine learning style paper, where they improve a sequence labeling task by augmenting it with models from related tasks for which data is free. I.e., I might train a model that, given a context with a missing word, will predict the word (e.g., “The ____ gave a speech” might want you to insert “president”.) By doing so, you can use these other models to give additional useful information to your main task.
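
A minimal sketch of how such auxiliary problems can be generated for free from unlabeled text (this shows only the data-generation step, not how the paper transfers structure from the auxiliary predictors to the chunking model):

```python
# Generate "predict the missing word" auxiliary examples from unlabeled text.
# Each example pairs a context with a blank and the word that fills it.

def auxiliary_examples(sentence: str):
    tokens = sentence.split()
    for i, target in enumerate(tokens):
        context = tokens[:i] + ["____"] + tokens[i + 1:]
        yield " ".join(context), target

for context, target in auxiliary_examples("The president gave a speech"):
    print(f"{context!r} -> {target!r}")
# One of the generated examples: 'The ____ gave a speech' -> 'president'
```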

Noah A. Smith and Jason Eisner, Contrastive Estimation: Training Log-Linear Models on Unlabeled Data. This paper talks about training sequence labeling models in an unsupervised fashion, basically by contrasting what the model does on the correct string with what the model does on a corrupted version of the string. They get significantly better results than just by using EM in an HMM, and the idea is pretty nice.
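
To make the contrast-with-a-corruption idea concrete, here is a minimal sketch of the objective for a log-linear model with a toy neighborhood of transposed strings (my simplification; the paper's neighborhoods, features, and optimization are more sophisticated):

```python
import math

# Contrastive-estimation-style objective: move probability mass within a small
# neighborhood of corrupted strings onto the observed string.  The score and
# neighborhood below are toy stand-ins for the paper's log-linear features and
# neighborhood functions.

def score(tokens, weights):
    """Unnormalized log-linear score: sum of weights for adjacent-token features."""
    return sum(weights.get((a, b), 0.0) for a, b in zip(tokens, tokens[1:]))

def neighborhood(tokens):
    """Toy neighborhood: the string itself plus every single adjacent transposition."""
    yield tuple(tokens)
    for i in range(len(tokens) - 1):
        swapped = list(tokens)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        yield tuple(swapped)

def contrastive_objective(tokens, weights):
    """log p(x | neighborhood(x)) = score(x) - log of the neighborhood's total mass."""
    log_z = math.log(sum(math.exp(score(n, weights)) for n in neighborhood(tokens)))
    return score(tuple(tokens), weights) - log_z

weights = {("the", "dog"): 1.0, ("dog", "barks"): 1.0}
print(contrastive_objective(["the", "dog", "barks"], weights))  # higher is better
```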

Patrick Pantel, Inducing Ontological Co-occurrence Vectors. This is a pretty neat idea (though I’m biased — Patrick is a friend) where one attempts to come up with feature vectors that describe nodes in a semantic hierarchy (ontology) that could enable you to figure out where to insert new words that are not in your ontology. The results are pretty good, and the method is fairly simple; I’d imagine that a more complex model/learning framework could improve the model even further.
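
A minimal sketch of the place-a-new-word-by-similarity step (generic co-occurrence counting and cosine similarity; the paper's actual feature construction and use of the ontology are more involved, and the corpus and node assignments here are made up):

```python
import math
from collections import Counter

# Represent each ontology node by a co-occurrence vector built from contexts of
# words already attached to it, then attach a new word to the most similar node.

def cooccurrence_vector(word, corpus, window=2):
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            if token == word:
                for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

corpus = ["the cat chased the mouse", "the dog chased the cat",
          "the car drove down the road", "the truck drove down the highway"]
node_vectors = {                                   # hypothetical ontology nodes
    "animal": cooccurrence_vector("cat", corpus),
    "vehicle": cooccurrence_vector("car", corpus),
}
new_word = cooccurrence_vector("dog", corpus)
print(max(node_vectors, key=lambda node: cosine(new_word, node_vectors[node])))  # animal
```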

Six Months

This is the 6 month point in the “run a research blog” experiment, so it seems like a good point to take stock and assess.

One fundamental question is: “Is it worth it?” The idea of running a research blog will never become widely popular and useful unless it actually aids research. On the negative side, composing ideas for a post and maintaining a blog takes a significant amount of time. On the positive side, the process might yield better research because there is an opportunity for better, faster feedback, and hence better, faster thinking.

My answer at the moment is a provisional “yes”. Running the blog has been incidentally helpful in several ways:

  1. It is sometimes educational (example).
  2. More often, the process of composing thoughts well enough to post simply aids thinking. This has resulted in a couple of solutions to problems of interest (and perhaps more over time). If you really want to solve a problem, letting the world know is helpful. This isn’t necessarily because the world will help you solve it, but it’s helpful nevertheless.
  3. In addition, posts by others have helped frame thinking about “What are important problems people care about?”, and why. In the end, working on the right problem is invaluable.

What Learning Theory might do

I wanted to expand on this post and some of the previous problems/research directions about where learning theory might make large strides.

  1. Why theory? The essential reason for theory is “intuition extension”. A very good applied learning person can master some particular application domain yielding the best computer algorithms for solving that problem. A very good theory can take the intuitions discovered by this and other applied learning people and extend them to new domains in a relatively automatic fashion. To do this, we take these basic intuitions and try to find a mathematical model that:
    1. Explains the basic intuitions.
    2. Makes new testable predictions about how to learn.
    3. Succeeds at learning in that way.

    This is “intuition extension”: taking what we have learned somewhere else and applying it in new domains. It is fundamentally useful to everyone because it increases the level of automation in solving problems.

  2. Where next for learning theory? I like the analogy with physics. Back before we-the-humans knew much, people would experiment occasionally and learn to design new things by slow evolution. At some point the physics model arose: you try to build mathematical models of what is happening and then make predictions based on the models. This was wildly successful for physics. For machine learning, it has only been moderately successful. We have some formalisms which are of some use in addressing novel learning problems, but the overall process of doing machine learning is not very close to “automatic”. The good news is that over the last 20 years a much richer set of positive examples of successful applied machine learning has developed. Thus, there are many good intuitions from which we can hope to generalize. In the physics analogy, the year is (perhaps) 1900. Here are a few specific issues:
    1. What is the “right” mathematical model of learning? (In analogy: what is the “right” mathematical model of physical phenomena?) The models we currently use have their compelling points but typically fail to capture all of the relevant details. This is a very hard question to address, but it should be actively considered and any progress may be very helpful. Examples of this include:
      1. What is the “right” model of active learning? We know almost nothing, except that there is great potential.
      2. What is the “right” model of reinforcement learning? Again, we know very little in comparison to what we want to know—a fully automatic general RL solver.

      The notion of “right” here is partially theoretical (can we derive efficient algorithms?) and partially empirical (do they actually work?).

    2. How do we refine the empirical observations and intuitions of applied learning?
      1. How should we think about “prior”? The Bayesian answer seems unconvincing. At a minimum, information used to create a Bayesian prior often does not come in the form of a Bayesian prior, and so some translation system must be developed.
      2. How can we develop big learning systems that solve big problems? Some form of structure seems necessary, but the right form is still unclear. What theory governs the design of such systems?
    3. How do we take existing theoretical insights and translate them into practical algorithms?
      1. The method of linear projection into low-dimensional spaces has been studied theoretically. Is it useful empirically? (A sketch illustrating this and the following question appears after this list.)
      2. The online learning setting seems theoretically compelling and, at least sometimes, empirically validated. What concerns remain to be addressed to make this a useful technology?
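
As a small illustration of the last two questions (generic numpy; a sketch of the general ideas rather than of any particular paper's method):

```python
import numpy as np

# Project high-dimensional data with a random linear map, then learn in the
# online setting with a simple perceptron on the projected features.  The data,
# dimensions, and learning rule are all illustrative choices.

rng = np.random.default_rng(0)
n, d, k = 500, 10_000, 100                       # examples, original dim, projected dim

true_w = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ true_w)                          # noiseless linearly separable labels

P = rng.normal(size=(d, k)) / np.sqrt(k)         # random linear projection
Xp = X @ P                                       # k-dimensional representation

w = np.zeros(k)                                  # online perceptron on projected data
mistakes = 0
for x, label in zip(Xp, y):                      # one example at a time, online
    if np.sign(x @ w) != label:
        w += label * x                           # update only on a mistake
        mistakes += 1

print(f"online mistakes: {mistakes} / {n}")
```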

We should keep in mind that there is a real chance the limits of machine learning are lower bounded by human learning. Getting from here to there of course will require a bit of work, some of which might be greatly aided by mathematical consideration.

Text Entailment at AAAI

Rajat Raina presented a paper on the technique they used for the PASCAL Recognizing Textual Entailment challenge.

“Text entailment” is the problem of deciding if one sentence implies another. For example, the previous sentence entails:

  1. Text entailment is a decision problem.
  2. One sentence can imply another.

The challenge was of the form: given an original sentence and another sentence, predict whether there was an entailment. All current techniques for predicting the correctness of an entailment are at the “flail” stage—accuracies of around 58% where humans could achieve near 100% accuracy, so there is much room to improve. Apparently, there may be another PASCAL challenge on this problem in the near future.
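
For a sense of the task's shape and difficulty, here is a naive word-overlap baseline of the kind that serious systems must beat (a strawman of my own, not the method Raina presented; the threshold and examples are made up):

```python
import string

# Naive entailment baseline: predict "entailed" when most of the hypothesis'
# words also appear in the text.

def naive_entails(text: str, hypothesis: str, threshold: float = 0.5) -> bool:
    strip = str.maketrans("", "", string.punctuation)
    text_words = set(text.lower().translate(strip).split())
    hyp_words = set(hypothesis.lower().translate(strip).split())
    return len(hyp_words & text_words) / len(hyp_words) >= threshold

text = "Text entailment is the problem of deciding if one sentence implies another."
print(naive_entails(text, "One sentence can imply another."))   # True
print(naive_entails(text, "The problem has been solved."))      # False
```

Such shallow overlap heuristics capture some entailments but also misfire often, which helps explain why current accuracies sit far below human performance.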