Google Predict

Slashdot points out Google Predict. I’m not privy to the details, but this has the potential to be extremely useful, as in many applications simply having an easy mechanism to apply existing learning algorithms can be extremely helpful. This differs in goal from MLcomp—instead of public comparisons for research purposes, it’s about private use of good existing algorithms. It also differs infrastructurally, since a system designed for this purpose is much less awkward than using Amazon’s cloud computing. The latter implies that datasets several orders of magnitude larger can be handled, up to limits imposed by network and storage.

Aggregation of estimators, sparsity in high dimension and computational feasibility

(I’m channeling Jean-Yves Audibert here, with some minor tweaking for clarity.)

Since Nemirovski’s Saint Flour lecture notes, numerous researchers have studied the following problem in least squares regression: predict as well as
(MS) the best of d given functions (like in prediction with expert advice; model = finite set of d functions)
(C) the best convex combination of these functions (i.e., model = convex hull of the d functions)
(L) the best linear combination of these functions (i.e., model = linear span of the d functions)
It is now well known (see, e.g., Sacha Tsybakov’s COLT’03 paper) that these tasks can be achieved, since there exist estimators having an excess risk of order (log d)/n for (MS), min( sqrt((log d)/n), d/n ) for (C), and d/n for (L), where n is the training set size. Here, the excess risk is the amount of extra loss per example which may be suffered due to the choice of a random sample.
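
For concreteness, here is the same statement in display form. The definition of excess risk below is the standard least squares one, which is not spelled out above:

    % F is the model: the finite set (MS), convex hull (C), or linear span (L)
    % of the d functions, and R(f) = E (f(X) - Y)^2 is the least squares risk.
    %   excess risk of an estimator \hat f  =  E R(\hat f) - \inf_{f \in F} R(f)
    % The achievable orders quoted above are
    \[
      \text{(MS)}:\ \frac{\log d}{n},
      \qquad
      \text{(C)}:\ \min\!\left(\sqrt{\frac{\log d}{n}},\ \frac{d}{n}\right),
      \qquad
      \text{(L)}:\ \frac{d}{n}.
    \]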

The practical use of these results seems rather limited to trivial statements like: do not use the OLS estimator when the dimension d of the input vector is larger than n (here the d functions are the projections on each of the d components). Nevertheless, these results provide a rather easy way to prove that there exists a learning algorithm having an excess risk of order s (log d)/n with respect to the best linear combination of s of the d functions (an s-sparse linear model). Indeed, it suffices to consider the algorithm which

  1. cuts the training set into two parts, say of equal size for simplicity,
  2. uses the first part to train linear estimators corresponding to every possible subset of s features. Here you can use your favorite linear estimator (the empirical risk minimizer on a compact set, or more robust but more involved estimators, rather than the OLS), as long as it solves (L) with minimal excess risk.
  3. uses the second part to predict as well as the “d choose s” linear estimators built on the first part. Here you choose your favorite aggregate solving (MS). The one I prefer is described on p.5 of my NIPS’07 paper, but you might prefer the progressive mixture rule or the algorithm of Guillaume Lecué and Shahar Mendelson. Note that empirical risk minimization and cross-validation completely fail for this task, with excess risk of order sqrt((log d)/n) instead of (log d)/n. (A toy sketch of the whole procedure follows this list.)
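
Here is a minimal sketch of the procedure, under simplifying assumptions: plain least squares plays the role of the (L) estimator in step 2, and a simple exponentially weighted average stands in for the (MS) aggregates of step 3 (it is not the progressive mixture rule or the NIPS’07 procedure, and the function and parameter names are made up for illustration):

    from itertools import combinations
    import numpy as np

    def sparse_aggregate(X, y, s, eta=1.0):
        n, d = X.shape
        X1, y1 = X[: n // 2], y[: n // 2]   # step 1: split the sample in two
        X2, y2 = X[n // 2 :], y[n // 2 :]

        subsets, coefs, losses = [], [], []
        # the binomial(d, s) loop below is exactly the computational
        # intractability discussed after this sketch
        for subset in combinations(range(d), s):
            cols = list(subset)
            # step 2: one linear estimator per size-s subset, trained on part one
            coef, *_ = np.linalg.lstsq(X1[:, cols], y1, rcond=None)
            subsets.append(cols)
            coefs.append(coef)
            # squared loss of this estimator on part two
            losses.append(np.mean((X2[:, cols] @ coef - y2) ** 2))

        # step 3: aggregate the "d choose s" estimators with exponential weights
        losses = np.array(losses)
        weights = np.exp(-eta * len(y2) * (losses - losses.min()))
        weights /= weights.sum()

        def predict(x_new):
            preds = np.array([x_new[cols] @ c for cols, c in zip(subsets, coefs)])
            return float(weights @ preds)

        return predict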

It is an easy exercise to combine the different excess risk bounds and obtain that the above procedure achieves an excess risk of order s (log d)/n. The nice thing compared to work on the Lasso, Dantzig selectors and their variants is that you do not need the assumptions those works make that your features are “not too much” correlated.
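
To spell out the easy exercise (a back-of-the-envelope sketch, ignoring constants and the factor of two lost by splitting the sample):

    % Step 2 pays the (L) rate on each size-s subset, and step 3 pays the
    % (MS) rate over the \binom{d}{s} resulting estimators:
    \[
      \underbrace{\frac{s}{n}}_{\text{(L), per subset}}
      \;+\;
      \underbrace{\frac{\log \binom{d}{s}}{n}}_{\text{(MS), over the subsets}}
      \;\le\;
      \frac{s}{n} + \frac{s \log d}{n}
      \;=\; O\!\left(\frac{s \log d}{n}\right),
    \]
    % using \log \binom{d}{s} \le s \log d.
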
Naturally, the important limitation of the above procedure, which is often encountered when using the classical model selection approach, is its computational intractability. So this leaves open the following fundamental problem: is it possible to design a computationally efficient algorithm with the s (log d)/n guarantee without assuming low correlation between the explanatory variables?

What’s the difference between gambling and rewarding good prediction?

After a major financial crisis, there is much discussion about how finance has become a casino: gambling with others’ money, keeping the winnings, and walking away when the money is lost.

When thinking about financial reform, all the many losers in the above scenario are apt to take the view that this activity should be completely, or nearly completely, curtailed. But a more thoughtful view is that sometimes there is a real sense in which there are right and wrong decisions, and we as a society would really prefer that the people most likely to make right decisions are making them. A crucial question then is: “What is the difference between gambling and rewarding good prediction?”

We discussed this before the financial crisis. The cheat-sheet sketch is that the problem, algorithms, and theorems of online learning against an adversary provide a good mathematical model for thinking about this question. What I would like to do here is map this onto various types of financial transactions. The basic mapping is between “wealth” and “weight”, with the essential idea that you can think of wealth as either money or degree of control over decision making. The core algorithms start with a “wealth” spread over many experts, each of which makes predictions and then has its wealth updated according to a soft exponential of the value of its prediction.
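
A minimal sketch of that soft-exponential update (the standard exponential weights form; the loss scale and learning rate here are illustrative choices, not taken from the discussion above):

    import numpy as np

    def hedge_update(wealth, losses, eta=0.5):
        """Reweight experts by a soft exponential of how badly they predicted."""
        wealth = wealth * np.exp(-eta * np.asarray(losses))
        return wealth / wealth.sum()   # renormalize so total "wealth" is fixed

    # usage: three experts, the second predicted best this round
    w = np.ones(3) / 3
    w = hedge_update(w, losses=[0.9, 0.1, 0.5])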

  1. Going Long. The basic strategy here is to buy low and sell high. This strategy is not inherently sound from a learning theory point of view, because a single purchased item can sometimes drop to zero value. Similarly, a single purchased item can sometimes grow radically in value. Neither of these properties is desirable from the viewpoint of a learning algorithm. In the zero value case, a good decision maker can be wiped out by one decision, while in the large value case, a lucky decision maker can randomly achieve overwhelming credit. Nevertheless, there is a sense in which this strategy is compatible with learning theory. If each item purchased either doubles or halves in value, the fluctuation in the wealth of a decision maker is analogous to the fluctuation in the relative weight of an expert in the online learning framework.
  2. … with diversification. Going long with diversification implies purchasing several items and selling them later. Adding diversification to the “Long” strategy helps it align substantially better with an optimal learning theory strategy. Single points of failure are avoided, while random fluctuations up in wealth are reduced.
  3. Going Short. The short strategy is borrowing an item (typically a stock), selling it high, then buying it back low to cover the debt. It’s a technique used to make money when a stock decreases in value. This technique was banned for a time during the crisis. From the perspective of learning theory, short selling is more dangerous than going long, because it’s possible to end up with negative wealth when a stock is sold short and then increases in value. To avoid this, it’s necessary to have sufficient collateral to cover the short at all times. If this collateral is at least twice the value when shorting occurs, it’s hard for participants to become wealthy by luck, because wealth at most doubles. Diversification is also a potentially useful helper strategy.
  4. Insurance. Credit Default Swaps are effectively a form of insurance where one party pays another small amounts unless something bad happens, in which case large amounts of money go the other direction. In the financial crisis, credit default swaps made the crisis viral, as the “pay up” clauses triggered, particularly wiping out AIG. Insurance has the same general problem as short selling—it can result in negative wealth unless there is sufficient collateral. It also has the same solution.
  5. Clawback. The basic idea of a “clawback” is that when someone fouls up really badly, you extract the loss from their past paychecks. As far as I can tell, this sort of clause exists in nearly no contracts, but it’s a popular proposal in retrospect, particularly for certain AIG employees who destroyed their company. The driving problem here is that the actual value of a decision is not known for some time, and it’s misestimated in the short term. Learning theory suggests that you should apply updates to estimated value as soon as possible to adjust wealth, which would correspond to a potential 100% clawback clause.

Two things strike me in considering the above.

The first is that for normal people interacting with the financial system, a set of financial rules plus good sense has developed such that wealth tends to grow and shrink in a manner similar to what learning theory would suggest is near optimal. For example, most people use the going long strategy by default, and most diversify. Most don’t use the short strategy, but those that do must have sufficient collateral. Normal people don’t have access to credit default swaps, and normal insurance has real collateral requirements. Clawbacks are automatic, as normal people bet with their own money and take their own losses.

The second is that larger actors have become quite skillful at avoiding the rules, with unsecured credit default swaps, unsecured shorts, and no clawback rules. But, learning theory is math, so it can’t really be avoided—instead what happens is inefficient decision making via inefficient learning algorithms on a societal scale.

My belief is that effective financial reform will impose limits on agents just as learning theory implies. This is also the answer to the title question—it’s gambling if the corresponding learning algorithm has high regret, and it’s rewarding good prediction if the corresponding learning algorithm has low regret. Since this is already done effectively for normal people, shifting all agents toward the limits normal people already operate under works. This means lower bounds on collateral (or equivalently upper bounds on leverage), and standardized markets where all agents can interact on an equal basis. Adding automatic clawback provisions for all performance-based pay would also probably be very effective.

A full dose of this medicine may upset many people directly affected by such legislation, as it limits their actions and imposes downsides. But this needn’t be so, because the math is straightforward, very robust, and designed precisely to pick out the good decision makers, giving them wealth as rapidly as is responsibly possible so they can make and control bigger decisions. If you are a good decision maker, then you should want this.

On the research front, there are substantial improvements we could hope for. Some basic questions are: How can we better structure marketplaces to allocate wealth according to the dynamics of an online learning algorithm? What are the holes in the mapping between online learning and markets that need repair? How do you repair them? And how do the repairs affect learning algorithms when backported? Good answers to these questions could be radically valuable. Yiling and Jenn have a paper mapping out connections between prediction markets and online learning this year at EC, which is of interest for this direction of research.

Compassionate Reviewing

Most long conversations between academics seem to converge on the topic of reviewing, where almost no one is happy. A basic question is: Should most people be happy?

The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By its nature, research is brutal: the second-best method is worthless, and the second person to discover something typically gets no credit. If you think about this for a moment, it’s very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc. can all manage quite well. If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy.

But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong, with a small fraction being simply malicious. And yet, I’m sure that most reviewers genuinely believe they can predict what will and will not be useful in the longer term. This disparity stems from a lack of communication. When academics have conversations about reviewing, the presumption of the participants in each conversation is that they all share roughly the same beliefs about what will be useful and what will take off. Such conversations rarely go into specifics, because the specifics are boring, particular, and technical, and because there is a real chance of disagreement on the specifics themselves.

When double blind reviewing was first being considered for ICML, I remember speaking about the experience in the Crypto community, where in my estimate the reviewing was both fairer and less happy. Many conferences in machine learning have shifted to double blind reviewing, and I think we have seen the same come to pass here as well. Without double blind reviewing, it is common to have an “in” crowd whom everyone respects and whose papers are virtually always accepted. These people are happy, and the rest have little voice. With double blind reviewing, everyone suffers substantial rejections.

We might say “fine, at least it’s fair”, but in my experience there is a real problem. From a viewpoint external to the community, when the reviewing is poor and the viewpoints of people in the community are highly contradictory, nothing good happens. Outsiders (i.e., most people) viewing the acrimony choose some other way to solve problems, proposals don’t get funded, and the community itself tends to fracture. For example, in cryptography, TCC (not double blind) has started, presumably because the top theory people got tired of having their papers rejected at Crypto (double blind). From a process-of-research standpoint, this seems suboptimal, as different groups using different methods to solve similar problems are exactly the people you would prefer to have talking to each other.

What seems to be lost with double blind reviewing is some amount of compassion, however unfairly it was previously allocated. In a double blind system, any given paper is plausibly from someone you don’t know, and since most papers go nowhere, plausibly not going anywhere. Consequently, the bias starts “against” for all work, a disadvantage which can be quite difficult to overcome. Some time ago, I discussed how I thought motivation should be the responsibility of the reviewer. Aaron Hertzman strongly disagreed on the grounds that this belief could dead-end your career as an author. I’ve come to appreciate his viewpoint to an extent. But it misses the point slightly—the question “What is good for the community?” differs from “What is good for the author?” In a healthy community, reviewers will actively work to understand why a piece of work is or is not important, filling in and extending the motivation as they consider the problem.

So, a question is: How can we get compassionate reviewing? (And in a fair way?) It might help somewhat for reviewers to actively consider, as part of their review, the level and mechanism of impact that a paper may have. Reducing reviewing load is certainly helpful, but it is not sufficient alone, because many people naturally interpret a reduced reviewing load as time to work on other things. And some mechanisms even seem to harm. For example, the two-phase reviewing process that ICML currently uses might save 0.5 reviews per paper, while guaranteeing that for half of the papers the deciding review is done hastily with no author feedback, a recipe for mistakes.

What creates a great deal of compassion? Public responsibility helps (witness workshops being more interesting than conferences). A natural conversation helps (the current method of a single round of author response tends to be very stilted). And time, of course, helps. What else?

MLcomp: a website for objectively comparing ML algorithms

Much of the success and popularity of machine learning has been driven by its practical impact. Of course, the evaluation of empirical work is an integral part of the field. But are the existing mechanisms for evaluating algorithms and comparing results good enough? We (Percy and Jake) believe there are currently a number of shortcomings:

  1. Incomplete Disclosure: You read a paper that proposes Algorithm A which is shown to outperform SVMs on two datasets.  Great.  But what about on other datasets?  How sensitive is this result?   What about compute time – does the algorithm take two seconds on a laptop or two weeks on a 100-node cluster?
  2. Lack of Standardization: Algorithm A beats Algorithm B on one version of a dataset.  Algorithm B beats Algorithm A on another version with slightly different preprocessing.  Though doing a head-to-head comparison would be ideal, it would be tedious, since the programs probably use different dataset formats and have a large array of options.  And what if we wanted to compare on more than just one dataset and two algorithms?
  3. Incomplete View of State-of-the-Art: Basic question: What’s the best algorithm for your favorite dataset?  To find out, you could simply plow through fifty papers, get code from any author willing to reply, and reimplement the rest. Easy, right? Well, maybe not…

We’ve thought a lot about how to solve these problems. Today, we’re launching a new website, MLcomp.org, which we think is a good first step.

What is MLcomp? In short, it’s a collaborative website for objectively comparing machine learning programs across various datasets.  On the website, a user can do any combination of the following:

  1. Upload a program to our online repository.
  2. Upload a dataset.
  3. Run any user’s program on any user’s dataset.  (MLcomp provides the computation for free using Amazon’s EC2.)
  4. For any executed run, view the results (various error metrics and time/memory usage statistics).
  5. Download any dataset, program, or run for further use.

An important aspect of the site is that it’s collaborative: by uploading just one program or dataset, a user taps into the entire network of existing programs and datasets for comparison.  While data and code repositories do exist (e.g., UCI, mloss.org), MLcomp is unique in that data and code interact to produce analyzable results.

MLcomp is under active development.  Currently, seven machine learning task types (classification, regression, collaborative filtering, sequence tagging, etc.) are supported, with hundreds of standard programs and datasets already online.  We encourage you to browse the site and hopefully contribute more!  Please send comments and feedback to mlcomp.support (AT) gmail.com.