Machine Learning (Theory)

4/14/2005

Families of Learning Theory Statements

Tags: Organization jl@ 4:41 pm

The table below summarizes a very broad viewpoint of learning theory.

  1. Past->Past. Typical statement: some prediction algorithm A does almost as well as any of a set of algorithms. Example: Weighted Majority (sketched below).
  2. Past->Future. Typical statement: assuming independent samples, past performance predicts future performance. Examples: PAC analysis, ERM analysis.
  3. Future->Future. Typical statement: future prediction performance on subproblems implies future prediction performance using algorithm A. Examples: ECOC, Probing.

A basic question is: Are there other varieties of statements of this type? Avrim noted that there are also “arrows between arrows”: generic methods for transforming between Past->Past statements and Past->Future statements. Are there others?
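For concreteness, here is a minimal sketch of the Weighted Majority algorithm, the Past->Past example in the table. The function name and the expert predictions in the usage lines are made up for illustration; the guarantee attached to this algorithm (the learner's mistake count is bounded in terms of the best expert's mistake count plus a log of the number of experts) is the kind of Past->Past statement meant.

```python
# Minimal sketch of the Weighted Majority algorithm (illustrative only).
def weighted_majority(expert_preds, labels, beta=0.5):
    """expert_preds: list of rounds, each a list with one 0/1 prediction per expert.
    labels: list of true 0/1 labels, one per round. Returns the learner's mistakes."""
    n_experts = len(expert_preds[0])
    weights = [1.0] * n_experts
    mistakes = 0
    for preds, y in zip(expert_preds, labels):
        # Predict by a weighted vote over the experts.
        vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_zero = sum(w for w, p in zip(weights, preds) if p == 0)
        y_hat = 1 if vote_one >= vote_zero else 0
        mistakes += int(y_hat != y)
        # Shrink the weight of every expert that predicted incorrectly.
        weights = [w * beta if p != y else w for w, p in zip(weights, preds)]
    return mistakes

# Hypothetical run: three experts over four rounds; the third expert is always right.
preds = [[0, 1, 1], [1, 0, 0], [0, 0, 1], [1, 1, 0]]
labels = [1, 0, 1, 0]
print(weighted_majority(preds, labels))  # prints the learner's total mistake count
```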

4/10/2005

Is the Goal Understanding or Prediction?

Tags: Organization jl@ 6:28 pm

Steve Smale and I have a debate about goals of learning theory.

Steve likes theorems with a dependence on unobservable quantities. For example, if D is a distribution over a space $X \times [0,1]$, you can state a theorem about the error rate that depends on the variance, $E_{(x,y)\sim D}\bigl(y - E_{y'\sim D|x}[y']\bigr)^2$.
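To see why such a variance term is natural in a theorem about the error rate, recall the standard bias/noise decomposition of squared error (a textbook fact, included here only for reference): for any predictor f,

```latex
% Squared error splits into excess error plus unremovable conditional variance.
% The second term is exactly the variance quantity above: no predictor f can
% reduce it, so it appears as an unavoidable term in error-rate theorems.
\[
  E_{(x,y)\sim D}\bigl[(f(x)-y)^2\bigr]
  = E_{x}\Bigl[\bigl(f(x)-E_{y'\sim D|x}[y']\bigr)^2\Bigr]
  + E_{(x,y)\sim D}\Bigl[\bigl(y-E_{y'\sim D|x}[y']\bigr)^2\Bigr].
\]
```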

I dislike this, because I want to use the theorems to produce code solving learning problems. Since I don’t know (and can’t measure) the variance, a theorem depending on the variance does not help me—I would not know what variance to plug into the learning algorithm.

Recast more broadly, this is a debate between “declarative” and “operative” mathematics. A strong example of “declarative” mathematics is “A New Kind of Science”. Roughly speaking, the goal of this kind of approach seems to be finding a way to explain the observations we make. Examples include “some things are unpredictable”, “a phase transition exists”, etc.

“Operative” mathematics helps you make predictions about the world. A strong example of operative mathematics is Newtonian mechanics in physics: it’s a great tool to help you predict what is going to happen in the world.

In addition to the “I want to do things” motivation for operative mathematics, I find it less arbitrary. In particular, two reasonable people can each be convinced they understand a topic in ways so different that they do not understand each other’s viewpoint. If these understandings are operative, the rest of us on the sidelines can better appreciate which understanding is “best”.

3/21/2005

Research Styles in Machine Learning

Tags: Organization jl@ 4:50 pm

Machine Learning is a field with an impressively diverse set of research styles. Understanding this may be important in appreciating what you see at a conference.

  1. Engineering. How can I solve this problem? People in the engineering research style try to solve hard problems directly by any means available and then describe how they did it. This is typical of problem-specific conferences and communities.
  2. Scientific. What are the principles for solving learning problems? People in this research style test techniques on many different problems. This is fairly common at ICML and NIPS.
  3. Mathematical. How can the learning problem be mathematically understood? People in this research style prove theorems with implications for learning but often do not implement (or test) algorithms. COLT is a typical conference for this style.

Many people manage to cross these styles, and that is often beneficial.

Whenever we list a set of alternatives, it becomes natural to think “which is best?” In the case of learning, it seems that each of these styles is useful and can lead to new useful discoveries. I sometimes see failures to appreciate the other approaches, which is a shame.

2/17/2005

Learning Research Programs

Tags: Organization jl@ 5:56 pm

This is an attempt to organize the broad research programs related to machine learning currently underway. This isn’t easy—this map is partial, the categories often overlap, and there are many details left out. Nevertheless, it is (perhaps) helpful to have some map of what is happening where. The word ‘typical’ should not be construed narrowly here.

  1. Learning Theory. Focuses on analyzing mathematical models of learning, with essentially no experiments. Typical conference: COLT.
  2. Bayesian Learning. Bayes’ law is always used. Focus on methods of speeding up or approximating integration, new probabilistic models, and practical applications. Typical conferences: NIPS, UAI.
  3. Structured Learning. Predicting complex structured outputs, some applications. Typical conferences: NIPS, UAI, others.
  4. Reinforcement Learning. Focused on ‘agent-in-the-world’ learning problems where the goal is optimizing reward. Typical conference: ICML.
  5. Unsupervised Learning/Clustering/Dimensionality Reduction. Focused on simplifying data. Typical conferences: many (each with a somewhat different viewpoint).
  6. Applied Learning. Worries about cost-sensitive learning, what to do on very large datasets, applications, etc. Typical conference: KDD.
  7. Supervised Learning. Chief concern is making practical algorithms for simpler predictions. Many applications. Typical conference: ICML.
Please comment on any missing pieces; it would be good to build up a better understanding of what the focuses are and where they are pursued.

2/14/2005

Clever Methods of Overfitting

Tags: Organization jl@ 10:56 am

“Overfitting” is traditionally defined as training some flexible representation so that it memorizes the data but fails to predict well in the future. For this post, I will define overfitting more generally as over-representing the performance of systems. There are two styles of general overfitting: overrepresenting performance on particular datasets and (implicitly) overrepresenting performance of a method on future datasets.

We should all be aware of these methods, avoid them where possible, and take them into account otherwise. I have used “reproblem” and “old datasets”, and may have participated in “overfitting by review”—some of these are very difficult to avoid.

Traditional overfitting
Method: Train a complex predictor on too few examples.
Remedy:
  1. Hold out pristine examples for testing.
  2. Use a simpler predictor.
  3. Get more training examples.
  4. Integrate over many predictors.
  5. Reject papers which do this.

Parameter tweak overfitting
Method: Use a learning algorithm with many parameters and choose the parameters based on test set performance. For example, choosing the features so as to optimize test set performance can achieve this. (A sketch of this failure mode and its remedy follows the list.)
Remedy: Same as above.

Brittle measure
Method: Use a measure of performance which is especially brittle to overfitting. “Entropy”, “mutual information”, and leave-one-out cross-validation are all surprisingly brittle. This is particularly severe when used in conjunction with another approach.
Remedy: Prefer less brittle measures of performance.

Bad statistics
Method: Misuse statistics to overstate confidences. One common example is pretending that cross-validation performance is drawn from an i.i.d. Gaussian and then using standard confidence intervals; cross-validation errors are not independent. Another standard method is to make known-false assumptions about some system and then derive excessive confidence. (A second sketch following the list illustrates the cross-validation case.)
Remedy: Don’t do this. Reject papers which do this.

Choice of measure
Method: Choose the best of accuracy, error rate, (A)ROC, F1, percent improvement on the previous best, percent improvement of error rate, etc. for your method. For bonus points, use ambiguous graphs. This is fairly common and tempting.
Remedy: Use canonical performance measures, for example the performance measure directly motivated by the problem.

Incomplete prediction
Method: Instead of (say) making a multiclass prediction, make a set of binary predictions and then compute the optimal multiclass prediction. Sometimes it’s tempting to leave a gap filled in by a human when you don’t otherwise succeed.
Remedy: Reject papers which do this.

Human-loop overfitting
Method: Use a human as part of a learning algorithm and don’t take into account overfitting by the entire human/computer interaction. This is subtle and comes in many forms. One example is a human using a clustering algorithm (on training and test examples) to guide learning algorithm choice.
Remedy: Make sure test examples are not available to the human.

Data set selection
Method: Choose to report results on some subset of datasets where your algorithm performs well. The reason we test on natural datasets is that we believe there is some structure captured by past problems which helps on future problems; data set selection subverts this and is very difficult to detect.
Remedy: Use comparisons on standard datasets. Select datasets without using the test set. Good contest performance can’t be faked this way.

Reprobleming
Method: Alter the problem so that your performance improves. For example, take a time series dataset and use cross-validation, or ignore asymmetric false positive/false negative costs. This can be completely unintentional, for example when someone uses an ill-specified UCI dataset.
Remedy: Discount papers which do this. Make sure problem specifications are clear.

Old datasets
Method: Create an algorithm for the purpose of improving performance on old datasets. After a dataset has been released, algorithms can be made to perform well on it through a process of feedback design, indicating better performance than we might expect in the future. Some conferences have canonical datasets that have been used for a decade…
Remedy: Prefer simplicity in algorithm design. Weight newer datasets higher in consideration. Making test examples not publicly available slows the feedback design process but does not eliminate it.

Overfitting by review
Method: 10 people submit a paper to a conference; the one with the best result is accepted. This is a systemic problem which is very difficult to detect or eliminate. We want to prefer presentation of good results, but doing so can result in overfitting.
Remedy:
  1. Be more pessimistic of confidence statements by papers at high rejection rate conferences.
  2. Some people have advocated allowing the publishing of methods with poor performance. (I have doubts this would work.)
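Two of the entries above are easy to make concrete in code. First, a minimal sketch of parameter tweak overfitting next to its remedy; the synthetic dataset, logistic regression model, and regularization grid below are arbitrary illustrative choices using scikit-learn, not part of any actual experiment.

```python
# Sketch of "parameter tweak overfitting" vs. its remedy (hypothetical setup).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=50, n_informative=5,
                           random_state=0)
# Three-way split: train / validation / pristine test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

grid = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]

def test_acc(c, X_eval, y_eval):
    """Fit on the training set with regularization c and score on (X_eval, y_eval)."""
    model = LogisticRegression(C=c, max_iter=1000).fit(X_train, y_train)
    return accuracy_score(y_eval, model.predict(X_eval))

# Parameter tweak overfitting: choose C by test set accuracy, then report it.
tweaked = max(test_acc(c, X_test, y_test) for c in grid)

# Remedy: choose C on the validation set, then touch the test set exactly once.
best_c = max(grid, key=lambda c: test_acc(c, X_val, y_val))
honest = test_acc(best_c, X_test, y_test)

print("reported accuracy when tuned on the test set:", tweaked)
print("reported accuracy when tuned on a validation set:", honest)
```

Second, a sketch of the bad statistics entry: a naive Gaussian confidence interval over cross-validation fold scores. The folds share training data and are not independent, so the interval below is typically too narrow; it is computed here only to make the mistake concrete. Again, the dataset and model are hypothetical.

```python
# Sketch of the "bad statistics" entry: a naive normal confidence interval
# over cross-validation fold scores. Folds are not independent, so this
# interval overstates confidence; it is the thing to avoid, not a recipe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=1)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)

mean = scores.mean()
stderr = scores.std(ddof=1) / np.sqrt(len(scores))  # pretends fold scores are i.i.d.
print("fold accuracies:", np.round(scores, 3))
print(f"naive 95% interval: {mean:.3f} +/- {1.96 * stderr:.3f}")
```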

I have personally observed all of these methods in action, and there are doubtless others.

Edit: a repost on kdnuggets.
