Aggregation of estimators, sparsity in high dimension and computational feasibility

(I'm channeling Jean-Yves Audibert here, with some minor tweaking for clarity.)

Since Nemirovski’s Saint Flour lecture notes, numerous researchers have studied the following problem in least squares regression: predict as well as
(MS) the best of d given functions (like in prediction with expert advice; model = finite set of d functions)
(C) the best convex combination of these functions (i.e., model = convex hull of the d functions)
(L) the best linear combination of these functions (i.e., model = linear span of the d functions)
It is now well known (see, e.g., Sacha Tsybakov's COLT'03 paper) that these tasks can be achieved, since there exist estimators having an excess risk of order (log d)/n for (MS), min( sqrt((log d)/n), d/n ) for (C), and d/n for (L), where n is the training set size. Here, the excess risk is the amount of extra loss per example which may be suffered due to the choice of the random training sample.

The practical use of these results seems rather limited to trivial statements like: do not use the OLS estimator when the dimension d of the input vector is larger than n (here the d functions are the projections on each of the d components). Nevertheless, these results provide a rather easy way to prove that there exists a learning algorithm having an excess risk of order s (log d)/n with respect to the best linear combination of s of the d functions (the s-sparse linear model). Indeed, it suffices to consider the algorithm which

  1. cuts the training set into two parts, say of equal size for simplicity,
  2. uses the first part to train linear estimators corresponding to every possible subset of s features. Here you can use your favorite linear estimator (the empirical risk minimizer on a compact set, or more robust but more involved estimators, rather than OLS), as long as it solves (L) with the minimal excess risk.
  3. uses the second part to predict as well as the "d choose s" linear estimators built on the first part. Here you choose your favorite aggregate solving (MS); a toy sketch of the whole procedure follows this list. The one I prefer is described on p.5 of my NIPS'07 paper, but you might prefer the progressive mixture rule or the algorithm of Guillaume Lecué and Shahar Mendelson. Note that empirical risk minimization and cross-validation completely fail for this task, with excess risk of order sqrt((log d)/n) instead of (log d)/n.
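
To make the construction concrete, here is a minimal sketch in Python under stated assumptions: squared loss, a lightly ridge-regularized least squares fit standing in for the step-2 estimator solving (L), and a plain exponential-weights rule standing in for the step-3 aggregate solving (MS) (it is not the NIPS'07 rule, and does not attain the (log d)/n rate in general). All function and variable names are mine, for illustration only.

```python
from itertools import combinations
import numpy as np

def fit_subset(X, y, subset, reg=1e-6):
    """Least squares on the chosen features (tiny ridge term for numerical stability)."""
    Xs = X[:, subset]
    w = np.linalg.solve(Xs.T @ Xs + reg * np.eye(len(subset)), Xs.T @ y)
    return subset, w

def sparse_aggregate(X, y, s, eta=1.0):
    n, d = X.shape
    half = n // 2
    X1, y1, X2, y2 = X[:half], y[:half], X[half:], y[half:]

    # Step 2: one linear estimator per subset of s features, trained on the first half.
    estimators = [fit_subset(X1, y1, list(sub)) for sub in combinations(range(d), s)]

    # Step 3: aggregate the "d choose s" estimators on the second half.
    # Plain exponential weights on the empirical squared loss; a stand-in for
    # an aggregate that actually solves (MS).
    losses = np.array([np.mean((X2[:, sub] @ w - y2) ** 2) for sub, w in estimators])
    weights = np.exp(-eta * half * (losses - losses.min()))
    weights /= weights.sum()

    def predict(Xnew):
        preds = np.array([Xnew[:, sub] @ w for sub, w in estimators])  # one row per estimator
        return weights @ preds

    return predict

# Toy usage: d = 20 candidate features, only s = 2 of them matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = 2.0 * X[:, 3] - 1.0 * X[:, 7] + 0.1 * rng.normal(size=200)
predictor = sparse_aggregate(X, y, s=2)
print(np.mean((predictor(X) - y) ** 2))  # small mean squared error on this toy data
```

Even in this toy form the computational issue is visible: the training loop runs over all "d choose s" subsets, which is exactly the intractability discussed below.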

It is an easy exercise to combine the different excess risk bounds and obtain that the above procedure achieves an excess risk of order s (log d)/n; the combination is sketched below.
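
In slightly more detail (a sketch, ignoring constants, with n/2 examples in each half of the split): step 3 pays the (MS) price over the "d choose s" aggregated estimators, and step 2 pays the (L) price within the selected s-dimensional model.

```latex
% Sketch of the bound combination, ignoring constants; each half of the split has n/2 examples.
\[
\underbrace{\frac{\log\binom{d}{s}}{n/2}}_{\text{(MS) over the }\binom{d}{s}\text{ estimators}}
\;+\;
\underbrace{\frac{s}{n/2}}_{\text{(L) within one }s\text{-subset}}
\;\lesssim\;
\frac{s\log(ed/s) + s}{n}
\;=\;
O\!\left(\frac{s\log d}{n}\right),
\qquad\text{using } \log\binom{d}{s}\le s\log(ed/s).
\]
```
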
The nice thing compared to work on the Lasso, Dantzig selector, and their variants is that you do not need any of the assumptions requiring the features to be "not too much" correlated. Naturally, the important limitation of the above procedure, which is often encountered when using the classical model selection approach, is its computational intractability. So this leaves open the following fundamental problem:
is it possible to design a computationally efficient algorithm with the s (log d)/n guarantee without assuming low correlation between the explanatory variables?

What’s the difference between gambling and rewarding good prediction?

After a major financial crisis, there is much discussion about how finance has become a casino: gambling with others' money, keeping the winnings, and walking away when the money is lost.

When thinking about financial reform, the many losers in the above scenario are apt to take the view that this activity should be completely, or nearly completely, curtailed. But a more thoughtful view is that sometimes there is a real sense in which there are right and wrong decisions, and we as a society would really prefer that the people most likely to make right decisions are making them. A crucial question then is: "What is the difference between gambling and rewarding good prediction?"

We discussed this before the financial crisis. The cheat-sheet sketch is that the online learning against an adversary problem, algorithm, and theorems provide a good mathematical model for thinking about this question. What I would like to do here is map this onto various types of financial transactions. The basic mapping is between "wealth" and "weight", with the essential idea that you can think of wealth as either money or degree of control over decision making. The core algorithms start with wealth spread over many experts, each of which makes predictions and then has its wealth updated according to a soft exponential of the value of its prediction.
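
To make the analogy concrete, here is a minimal sketch of that core update in Python, assuming per-round losses in [0, 1] and a fixed learning rate; the names are mine and the details (learning rate, renormalization) are one standard choice among several.

```python
import numpy as np

def exponential_weights(loss_matrix, eta=0.5):
    """loss_matrix: (rounds, experts) array of per-round losses in [0, 1].
    Returns the sequence of relative "wealth" vectors over the experts."""
    n_rounds, n_experts = loss_matrix.shape
    wealth = np.ones(n_experts) / n_experts   # start with wealth spread evenly
    history = [wealth.copy()]
    for t in range(n_rounds):
        # Scale each expert's wealth by a soft exponential of its performance.
        wealth = wealth * np.exp(-eta * loss_matrix[t])
        wealth = wealth / wealth.sum()        # renormalize to relative wealth
        history.append(wealth.copy())
    return np.array(history)

# Toy usage: 3 experts over 100 rounds; expert 0 has the smallest losses on average.
rng = np.random.default_rng(1)
losses = rng.uniform(size=(100, 3)) * np.array([0.3, 0.6, 0.9])
print(exponential_weights(losses)[-1])  # most relative wealth ends up on expert 0
```
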

  1. Going Long. The basic strategy here is to buy low and sell high. This strategy is not inherently sound from a learning theory point of view, because a single purchased item can sometimes drop to zero value. Similarly, a single purchased item can sometimes grow radically in value. Neither of these properties is desirable from the viewpoint of a learning algorithm. In the zero value case, a good decision maker can be wiped out by one decision, while in the large value case, a lucky decision maker can randomly achieve overwhelming credit. Nevertheless, there is a sense in which this strategy is compatible: if each item purchased either doubles or halves in value, the fluctuation in the wealth of a decision maker is analogous to the fluctuation in the relative weight of an expert in the online learning framework (a toy check of this analogy follows the list).
  2. … with diversification. Going long with diversification implies purchasing several items and selling them later. Adding diversification to the “Long” strategy helps it align substantially better with an optimal learning theory strategy. Single points of failure are avoided, while random fluctuations up in wealth are reduced.
  3. Going Short. The short strategy is borrowing an item (typically a stock), selling it high, then buying it back low to cover the debt. It's a technique used to make money when a stock decreases in value, and it was banned for a time during the crisis. From the perspective of learning theory, short selling is more dangerous than going long, because it's possible to end up with negative wealth when a stock is sold short and then increases in value. To avoid this, it's necessary to have sufficient collateral to cover the short at all times. If this collateral is at least twice the value when shorting occurs, it's hard for participants to become wealthy by luck, because wealth at most doubles. Diversification is also a potentially useful helper strategy.
  4. Insurance. Credit Default Swaps are effectively a form of insurance where one party pays another small amounts unless something bad happens, in which case large amounts of money go in the other direction. In the financial crisis, credit default swaps made the crisis viral, as the "pay up" clauses triggered, particularly wiping out AIG. Insurance has the same general problem as short selling: it can result in negative wealth unless there is sufficient collateral. It also has the same solution.
  5. Clawback. The basic idea of a "clawback" is that when someone fouls up really badly, you claw money back from their past paychecks. As far as I can tell, this sort of clause exists in nearly no contracts, but it's a popular proposal in retrospect, particularly for certain AIG employees who destroyed their company. The driving problem here is that the actual value of a decision is not known for some time, and it's misestimated in the short term. Learning theory suggests that you should apply updates to the estimated value as soon as possible to adjust wealth, which would correspond to a potential 100% clawback clause.
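
The analogy in item 1 can be checked directly. Here is a toy sketch (my own construction, not from the post): a fully reinvested long position in an item that either doubles or halves each round traces exactly the same trajectory as an unnormalized expert weight updated by exp(eta * payoff) with payoff = +1 or -1 and eta = ln 2.

```python
import numpy as np

# Toy check of the doubling/halving analogy from item 1 (a sketch, my own construction).
rng = np.random.default_rng(2)
payoffs = rng.choice([+1, -1], size=50)               # +1: the item doubles, -1: it halves

wealth = np.cumprod(np.where(payoffs > 0, 2.0, 0.5))  # long position, fully reinvested
weight = np.cumprod(np.exp(np.log(2.0) * payoffs))    # unnormalized expert weight, eta = ln 2

print(np.allclose(wealth, weight))  # True: the two trajectories coincide
```
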

Two things strike me in considering the above.

The first is that for normal people interacting with the financial system, a set of financial rules plus good sense has developed such that wealth tends to grow and shrink in a manner similar to what learning theory would suggest is near optimal. For example, most people use the going long strategy by default, and most diversify. Most don't use the short strategy, but those that do must have sufficient collateral. Normal people don't have access to credit default swaps, and normal insurance has real collateral requirements. Clawbacks are automatic, as normal people bet with their own money and take their own losses.

The second is that larger actors have become quite skillful at avoiding the rules, with unsecured credit default swaps, unsecured shorts, and no clawback rules. But, learning theory is math, so it can’t really be avoided—instead what happens is inefficient decision making via inefficient learning algorithms on a societal scale.

My belief is that effective financial reform will impose limits on agents, just as learning theory implies. This is also the answer to the title question: it's gambling if the corresponding learning algorithm has high regret, and it's rewarding good prediction if the corresponding learning algorithm has low regret. Since this is already done effectively for normal people, shifting all agents towards the limits already imposed there should work. This means lower bounds on collateral (or equivalently, upper bounds on leverage), and standardized markets where all agents can interact on an equal basis. Adding in automatic clawback provisions for all performance-based pay would also probably be very effective.

A full dose of this medicine may upset many people directly affected by such legislation, as it limits their actions and imposes downsides. But this needn't be so, because the math is straightforward, very robust, and designed precisely to pick out the good decision makers, giving them wealth as rapidly as is responsibly possible so that they can make and control bigger decisions. If you are a good decision maker, then you should want this.

On the research front, there are substantial improvements we could hope for. Some basic questions are: How can we better structure marketplaces to allocate wealth according to the dynamics of an online learning algorithm? What are the holes in the mapping between online learning and markets that need repair? How do you repair them? And how do the repairs affect learning algorithms when backported? Good answers to these questions could be radically valuable. Yiling and Jenn have a paper at EC this year mapping out connections between prediction markets and online learning, which is of interest for this direction of research.

Compassionate Reviewing

Most long conversations between academics seem to converge on the topic of reviewing, where almost no one is happy. A basic question is: Should most people be happy?

The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By its nature, research is brutal: the second-best method is worthless, and the second person to discover something typically gets no credit. If you think about this for a moment, it's very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc. all can manage quite well. If a reviewer has even a vaguely predictive sense of what's important in the longer term, then most people submitting papers will be unhappy.

But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong, with a small fraction being simply malicious. And yet, I'm sure that most reviewers genuinely believe they can predict what will and will not be useful in the longer term. This disparity persists because of a lack of communication. When academics have conversations about reviewing, the participants presume that they all share roughly the same beliefs about what will be useful and what will take off. Such conversations rarely go into specifics, because the specifics are boring and technical, and because there is a real chance of disagreement on the specifics themselves.

When double blind reviewing was first being considered for ICML, I remember speaking about the experience in the Crypto community, where in my estimate the reviewing was both fairer and left people less happy. Many conferences in machine learning have since shifted to double blind reviewing, and I think we have seen the same thing come to pass here as well. Without double blind reviewing, it is common to have an "in" crowd whom everyone respects and whose papers are virtually always accepted. These people are happy, and the rest have little voice. With double blind reviewing, everyone suffers substantial rejections.

We might say "fine, at least it's fair", but in my experience there is a real problem. From a viewpoint external to the community, when the reviewing is poor and the viewpoints of people in the community are highly contradictory, nothing good happens. Outsiders (i.e., most people) viewing the acrimony choose some other way to solve their problems, proposals don't get funded, and the community itself tends to fracture. For example, in cryptography, TCC (not double blind) has started, presumably because the top theory people got tired of having their papers rejected at Crypto (double blind). From a process-of-research standpoint, this seems suboptimal, as different groups using different methods to solve similar problems are precisely the people you would prefer to have talking to each other.

What seems to be lost with double blind reviewing is some amount of compassion, unfairly allocated. In a double blind system, any given paper is plausibly from someone you don’t know, and since most papers go nowhere, plausibly not going anywhere. Consequently, the bias starts “against” for all work, a disadvantage which can be quite difficult to overcome. Some time ago, I discussed how I thought motivation should be the responsibility of the reviewer. Aaron Hertzman strongly disagreed on the grounds that this belief could dead end your career as an author. I’ve come to appreciate his viewpoint to an extent. But, it misses the point slightly—the question of “What is good for the community?” differs from “What is good for the author?” In a healthy community, reviewers will actively understand why a piece of work is or is not important, filling in and extending the motivation as they consider the problem.

So, a question is: How can we get compassionate reviewing? (And in a fair way?) It might help somewhat for reviewers to actively consider, as part of their review, the level and mechanism of impact that a paper may have. Reducing reviewing load is certainly helpful, but it is not sufficient alone, because many people naturally interpret a reduced reviewing load as time to work on other things. And some mechanisms seem to actively do harm. For example, the two-phase reviewing process that ICML currently uses might save 0.5 reviews per paper, while guaranteeing that for half of the papers the deciding review is done hastily and with no author feedback, a recipe for mistakes.

What creates a great deal of compassion? Public responsibility helps (witness workshops being more interesting than conferences). A natural conversation helps (the current method of a single round of response tends to be very stilted). And time, of course, helps. What else?

COLT Treasurer is now Phil Long

For about 5 years, I've been the treasurer of the Association for Computational Learning, otherwise known as COLT, having taken over from John Case. A transfer of duties to Phil Long is now about complete. This probably matters to almost no one, but I wanted to describe things a bit for those interested.

The immediate impetus for this decision was unhappiness over reviewing decisions at COLT 2009, one as an author and several as a member of the program committee. I seem to have disagreements fairly often about what is important work, partly because I'm focused on learning theory with practical implications, partly because I define learning theory more broadly than is typical amongst COLT members, and partly because COLT suffers a bit from insider-clique issues. The degree to which these issues come up varies substantially each year, so last year is not predictive of this one. And it's important to understand that COLT remains healthy, with these issues not nearly so bad as they have been. Nevertheless, I would like to see them taken more actively into account than I've been able to persuade people to do so far.

After thinking about it for a few days before acting, I decided to go ahead with the transfer for another reason: I’ve been suffering from multitask poisoning. Partly this is Ada, but partly it’s many other things, each of which takes a small bit of my time, in aggregate leaving me disappointing people, myself in particular. The effect of this has been quite obvious in terms of the posting rate on hunch.net.

Fortunately, Phil Long was ready to take up the duties, and he’s well positioned to do so.

Despite the above, I found being treasurer not particularly difficult. The functions of the treasury part of ACL have been

  1. Self-insurance for the conference each year. Prior to the formation of ACL-the-nonprofit (which Bob was instrumental in), COLT used to buy insurance against the possibility that some disaster would strike canceling the conference while leaving the local organizer on the hook for substantial expenses. When I came in, the treasury was a little bit low for this function, and when I left, somewhat too high.
  2. Budget fragmentation avoidance. Local organizers typically have a local account from which they spend for expenses and collect registration fees. Without the ACL, dealing with net positive or negative local accounts from year to year was awkward. With the ACL, it’s easy to square things up at the end of each year.
  3. A stable point of contact for funding related things. COLT is partly sponsored by several big CS-related companies including IBM, Microsoft, and Google. Providing a stable point of contact definitely helps ease this process. This also helps on the publishing side, where Omnipress is the current publisher of proceedings.
  4. Budget advice for local organizers. Somewhat to my surprise, the proper role of the treasurer was typically asking the local organizer to reduce registration fees rather than increase them. The essential observation is that local organizers, because they operate out of a local account, tend to be a bit conservative in budget estimates. On the other hand, because ACL has an adequate interest-bearing account, we should expect and desire to spend the interest in each typical year. In effect, ACL is naturally in a position to sponsor COLT to a small but nontrivial degree.

After having been treasurer for a little while, I'm convinced that having a nonprofit to back a conference is a good idea, easing many difficulties with relatively small effort.