Loss Functions for Discriminative Training of Energy-Based Models

This is a paper by Yann LeCun and Fu Jie Huang published at AISTATS 2005. I found this paper very difficult to read, but it does make an interesting point about a computational shortcut.

This paper takes for granted that the method for solving a problem is gradient descent on parameters. Given this assumption, the question arises: do you want to do gradient descent on a probabilistic model or on something else?

All (conditional) probabilistic models have the form p(y|x) = f(x,y)/Z(x), where Z(x) = Σ_y f(x,y) (the paper calls −log f(x,y) an “energy”). If f is parameterized by some w, the gradient has a term for Z(x), and hence a term for every value of y. The paper claims that such models can be optimized for classification purposes using only the correct y and the incorrect y’ ≠ y which maximizes f(x,y’). This can even be done on unnormalizable models. The paper further claims that an approximate maximum suffices. These claims are plausible based on the experimental results and intuition.
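
To make the shortcut concrete, here is a minimal sketch (my own, not the paper's setup: the linear energy, the hinge-style condition, and all names are assumptions) contrasting the full log-loss gradient, which touches every y through Z(x), with an update that uses only the correct y and the most offending incorrect y’:

```python
import numpy as np

def log_loss_grad(w, x, y):
    """Gradient of -log p(y|x) = -w[y].x + log Z(x); Z(x) needs every class."""
    scores = w @ x                           # one score per class: log f(x,y)
    p = np.exp(scores - scores.max())
    p /= p.sum()                             # softmax = f(x,y)/Z(x)
    g = np.outer(p, x)                       # the Z(x) term touches every class
    g[y] -= x                                # the term for the correct class
    return g

def contrastive_grad(w, x, y, margin=1.0):
    """Shortcut: only the correct y and the best incorrect y' get updated."""
    scores = w @ x
    wrong = scores.copy()
    wrong[y] = -np.inf
    y_prime = int(np.argmax(wrong))          # most offending incorrect answer
    g = np.zeros_like(w)
    if scores[y] < wrong[y_prime] + margin:  # hinge-style update condition
        g[y] -= x                            # raise the correct score (lower its energy)
        g[y_prime] += x                      # lower the best wrong score
    return g

rng = np.random.default_rng(0)
w = rng.normal(size=(5, 3))                  # 5 classes, 3 features
x, y = rng.normal(size=3), 2
print(log_loss_grad(w, x, y))
print(contrastive_grad(w, x, y))
```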

It wouldn’t surprise me to learn that ignoring Z(x) (and renormalizing later) is common in fast implementations of some probabilistic model fitting algorithms, but I haven’t seen it discussed previously. The ability to use an approximate maximum y’ seems potentially very useful.

With that encouragement, I had significant difficulties with the paper, including the following:

  1. Lack of a theorem. A good theorem proving these things work would be quite helpful. It isn’t clear whether the claims are always true, just true on the examples encountered, or true with some small modification.
  2. Definition of Loss. For better or worse, the paper uses the second definition of loss, “Loss is part of the solution”, which I find unnatural.
  3. Claims I don’t understand or which aren’t technically true. None of these seem to be related to the main point of the paper, but they are very distracting. For example, there is a claim that log-loss is the “only well-justified loss function”. The meaning of well-justified is unclear, and I can think of several meanings where other losses (such as squared error) are well-justified.

With the above difficulties, this paper seems lucky to have been accepted. This isn’t a criticism of AISTATS, because it also seems plausible that this computational shortcut may eventually help many optimizations.

What it means to do research.

I want to try to describe what doing research means, especially from the point of view of an undergraduate. The shift from a class-taking mentality to a research mentality is very significant and not easy.

  1. Problem Posing Posing the right problems is often as important as solving them. Many people can get by in research by solving problems others have posed, but that’s not sufficient for really inspiring research. For learning in particular, there is a strong feeling that we just haven’t figured out which questions are the right ones to ask. You can see this because the answers we have do not seem convincing.
  2. Gambling your life When you do research, you think very hard about new ways of solving problems, new problems, and new solutions. Many conversations are of the form “I wonder what would happen if…” These processes can be short (days or weeks) or years-long endeavours. The worst part is that you’ll only know whether you were successful at the end of the process (and sometimes not even then, because it can take a long time for good research to be recognized). This is very risky compared to most forms of work (or just going to classes).
  3. Concentration This is not so different from solving problems in class, except that you may need to concentrate on a problem for much longer to solve it. This often means shutting yourself off from the world (no TV, no interruptions, no web browsing, etc…) and really thinking.
  4. Lack of feedback While doing research there is often a lack of feedback, or contradictory feedback. The process of writing a paper can take a month, you may not get reviews for several months, and the review process can be extremely (and sometimes systematically) noisy.
  5. Curiosity This is not merely idle curiosity, but a desire to understand things from different viewpoints, to understand that niggling detail which isn’t right, and to understand the global picture of the way things are. This often implies questioning the basics.
  6. Honesty Good researchers have to understand the way things are (at least with respect to research). Learning to admit when you are wrong can be very hard.
  7. Prioritization You have many things to do and not enough time to do them in. The need to prioritize becomes routine in research, but it’s not so common in undergrad life. This often means saying ‘no’ when you want to say ‘yes’.
  8. Memory Problems often aren’t solved in the first pass. Conversations from a year ago often contain the key to solving today’s problem. A good suite of problem-solving methods and a global understanding of how things fit together are often essential.
  9. Ephemeral Contact The people you know and work with may only be talked with for a few brief but intense hours a year at a conference.
  10. Opportunism Possibilities come up. They must be recognized (which is hard for conservative people) and seized (which is hard for people without enough confidence to gamble).

Not all of these traits are necessary to do good research—some of them can be compensated for and others can be learned. Many parts of academia can be understood as helping to reduce some of these difficulties. For example, teaching reduces the extreme variance of gambling on research output. Tenure provides people a stable base from which they can take greater gambles (… and often results in people doing nothing). Conferences are partly successful because they provide much more feedback than journals (which are generally slower). Weblogs might, in the future, provide even faster feedback. Many people are quite successful simply solving problems that others pose.

Learning Research Programs

This is an attempt to organize the broad research programs related to machine learning currently underway. This isn’t easy—this map is partial, the categories often overlap, and there are many details left out. Nevertheless, it is (perhaps) helpful to have some map of what is happening where. The word ‘typical’ should not be construed narrowly here.

  1. Learning Theory Focuses on analyzing mathematical models of learning, essentially no experiments. Typical conference: COLT.
  2. Bayesian Learning Bayes’ law is always used. Focus on methods of speeding up or approximating integration, new probabilistic models, and practical applications. Typical conferences: NIPS, UAI
  3. Structured Learning Predicting complex structured outputs, some applications. Typical conferences: NIPS, UAI, others
  4. Reinforcement Learning Focused on ‘agent-in-the-world’ learning problems where the goal is optimizing reward. Typical conference: ICML
  5. Unsupervised Learning/Clustering/Dimensionality Reduction Focused on simplifying data. Typical conferences: Many (each with a somewhat different viewpoint)
  6. Applied Learning Worries about cost-sensitive learning, what to do on very large datasets, applications, etc. Typical conference: KDD
  7. Supervised Learning Chief concern is making practical algorithms for simpler predictions. Many applications. Typical conference: ICML

Please comment on any missing pieces—it would be good to build up a better understanding of what the focuses are and where they are pursued.

ESPgame and image labeling

Luis von Ahn has been running the ESP game for a while now. The ESP game provides a picture to two randomly paired people across the web and asks them to agree on a label. It hasn’t managed to label the web yet, but it has produced a large dataset of (image, label) pairs. I organized the dataset so you could explore the implied bipartite graph (requires much bandwidth).

Relative to other image datasets, this one is quite large—67,000 images, 358,000 labels (an average of 5 per image, with variation from 1 to 19), and 22,000 unique labels (about one new label every 3 images). The dataset is also very ‘natural’, consisting of images spidered from the internet. The multiple-label characteristic is intriguing because ‘learning to learn’ and metalearning techniques may be applicable. The ‘natural’ quality means that this dataset varies greatly in difficulty, from easy (predicting “red”) to hard (predicting “funny”), and is potentially more rewarding to tackle.
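
For concreteness, here is a minimal sketch of how one might load such (image, label) pairs and recover the bipartite-graph statistics quoted above; the file name espgame_labels.tsv and its image<TAB>label format are assumptions, not the actual distribution format:

```python
from collections import defaultdict

labels_per_image = defaultdict(set)   # image -> set of labels
images_per_label = defaultdict(set)   # label -> set of images

# Assumed format: one "image<TAB>label" pair per line.
with open("espgame_labels.tsv") as f:
    for line in f:
        image, label = line.rstrip("\n").split("\t")
        labels_per_image[image].add(label)
        images_per_label[label].add(image)

n_images = len(labels_per_image)
n_pairs = sum(len(s) for s in labels_per_image.values())
print(f"{n_images} images, {n_pairs} (image, label) pairs, "
      f"{len(images_per_label)} unique labels, "
      f"{n_pairs / n_images:.1f} labels per image on average")
```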

The open problem here is, of course, to make an internet image labeling program. At a minimum this might be useful for blind people and image search. Solving this problem well seems likely to require new learning methods.

Clever Methods of Overfitting

“Overfitting” is traditionally defined as training some flexible representation so that it memorizes the data but fails to predict well in the future. For this post, I will define overfitting more generally as over-representing the performance of systems. There are two styles of general overfitting: over-representing performance on particular datasets and (implicitly) over-representing performance of a method on future datasets.

We should all be aware of these methods, avoid them where possible, and take them into account otherwise. I have used “reproblem” and “old datasets”, and may have participated in “overfitting by review”—some of these are very difficult to avoid.
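
Before the list, a minimal demonstration of overfitting in the traditional sense (the data and polynomial degrees here are made up for illustration): a flexible predictor fit to too few examples memorizes them but predicts badly on held-out data.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, 0.1, n)   # true signal plus noise
    return x, y

x_train, y_train = sample(10)     # too few examples
x_test, y_test = sample(1000)     # pristine held-out examples

for degree in (2, 9):
    # The degree-9 fit drives train MSE to ~0 by interpolating the 10 points;
    # compare its test MSE with the simpler degree-2 fit.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```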

Each entry below names a method of overfitting, explains it, and gives a remedy.

  1. Traditional overfitting Train a complex predictor on too-few examples. Remedies: hold out pristine examples for testing; use a simpler predictor; get more training examples; integrate over many predictors; reject papers which do this.
  2. Parameter tweak overfitting Use a learning algorithm with many parameters and choose the parameters based on test set performance. For example, choosing the features so as to optimize test set performance can achieve this. Remedy: same as above (a sketch of the hold-out remedy appears after this list).
  3. Brittle measure Use a measure of performance which is especially brittle to overfitting. “Entropy”, “mutual information”, and leave-one-out cross-validation are all surprisingly brittle; this is particularly severe when used in conjunction with another approach. Remedy: prefer less brittle measures of performance.
  4. Bad statistics Misuse statistics to overstate confidences. One common example is pretending that cross-validation performance is drawn from an i.i.d. Gaussian and then using standard confidence intervals; cross-validation errors are not independent. Another standard method is to make known-false assumptions about some system and then derive excessive confidence. Remedies: don’t do this; reject papers which do this.
  5. Choice of measure Choose the best of accuracy, error rate, (A)ROC, F1, percent improvement on the previous best, percent improvement of error rate, etc. for your method. For bonus points, use ambiguous graphs. This is fairly common and tempting. Remedy: use canonical performance measures, for example the performance measure directly motivated by the problem.
  6. Incomplete prediction Instead of (say) making a multiclass prediction, make a set of binary predictions and then compute the optimal multiclass prediction. Sometimes it’s tempting to leave a gap filled in by a human when you don’t otherwise succeed. Remedy: reject papers which do this.
  7. Human-loop overfitting Use a human as part of a learning algorithm and don’t take into account overfitting by the entire human/computer interaction. This is subtle and comes in many forms; one example is a human using a clustering algorithm (on training and test examples) to guide learning algorithm choice. Remedy: make sure test examples are not available to the human.
  8. Data set selection Choose to report results on some subset of datasets where your algorithm performs well. The reason we test on natural datasets is that we believe there is some structure captured by past problems that helps on future problems; data set selection subverts this and is very difficult to detect. Remedies: use comparisons on standard datasets; select datasets without using the test set; good contest performance can’t be faked this way.
  9. Reprobleming Alter the problem so that your performance improves. For example, take a time series dataset and use cross validation, or ignore asymmetric false positive/false negative costs. This can be completely unintentional, for example when someone uses an ill-specified UCI dataset. Remedies: discount papers which do this; make sure problem specifications are clear.
  10. Old datasets Create an algorithm for the purpose of improving performance on old datasets. After a dataset has been released, algorithms can be made to perform well on it using a process of feedback design, indicating better performance than we might expect in the future. Some conferences have canonical datasets that have been used for a decade… Remedies: prefer simplicity in algorithm design; weight newer datasets higher in consideration; making test examples not publicly available for datasets slows the feedback design process but does not eliminate it.
  11. Overfitting by review Ten people submit a paper to a conference, and the one with the best result is accepted. This is a systemic problem which is very difficult to detect or eliminate; we want to prefer presentation of good results, but doing so can result in overfitting. Remedies: be more pessimistic about confidence statements in papers at high-rejection-rate conferences; some people have advocated allowing the publication of methods with poor performance (I have doubts this would work).
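
As referenced in the parameter tweak entry above, here is a minimal sketch of the hold-out remedy (the ridge regression model, the split sizes, and the candidate lambda values are my own assumptions): parameters are chosen on a validation split, and the pristine test examples are touched exactly once.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 300, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(0, 1.0, n)   # synthetic regression data

# Pristine test examples are split off first and used exactly once, at the end.
train, valid, test = np.split(rng.permutation(n), [180, 240])

def fit_ridge(X, y, lam):
    """Ridge regression: solve (X'X + lam*I) w = X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(w, idx):
    return float(np.mean((X[idx] @ w - y[idx]) ** 2))

# Tune the regularization strength on the validation split only.
candidates = [0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = min(candidates,
               key=lambda lam: mse(fit_ridge(X[train], y[train], lam), valid))

# Refit on train+valid with the chosen parameter, then report test error once.
final = fit_ridge(X[np.r_[train, valid]], y[np.r_[train, valid]], best_lam)
print(f"chosen lambda: {best_lam}, test MSE: {mse(final, test):.3f}")
```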

I have personally observed all of these methods in action, and there are doubtless others.

Edit: a repost on kdnuggets.