Updates for the new decade

This blog has been quiet for the last year. I have quite a bit to write about but found myself often out of time between work at Microsoft, ICML duties, and family life. Nevertheless, I expect to get back to more substantive discussions as I adjust to the new load.

In the meantime, I’ve updated the site in various ways: SSL now works, and mail for people registering new accounts should work again.

I also set up a twitter account, as I’ve often had things left unsaid. I’m not a fan of blog-by-twitter (which seems artificially disjointed), so I expect to use twitter for shorter things and hunch.net for longer things.

ICML has 3(!) Real World Reinforcement Learning Workshops

The first is Sunday afternoon during the Industry Expo day. This one is meant to be quite practical, starting with an overview of Contextual Bandits and leading into how to apply the new Personalizer service, the first service in the world functionally supporting general contextual bandit learning.
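
For readers new to the setting, here is a minimal sketch of the contextual bandit loop: observe a context, choose an action, see a reward for only the chosen action, and update. This is an illustrative epsilon-greedy toy, not the Personalizer API—the features, rewards, and names below are all made up.

```python
# A toy epsilon-greedy contextual bandit loop (illustrative only; NOT the Personalizer API).
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_features, epsilon, lr = 3, 5, 0.1, 0.1
weights = np.zeros((n_actions, n_features))  # one linear reward estimator per action

def choose(context):
    """Explore with probability epsilon, otherwise pick the highest-scoring action."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(weights @ context))

def update(context, action, reward):
    """Squared-loss gradient step on the chosen action's estimator only."""
    prediction = weights[action] @ context
    weights[action] += lr * (reward - prediction) * context

for t in range(10000):
    context = rng.normal(size=n_features)    # e.g., user/session features
    action = choose(context)                 # e.g., which item to recommend
    best = int(context[0] > 0)               # hidden "right" answer in this toy world
    reward = 1.0 if action == best else 0.0  # only the chosen action's reward is observed
    update(context, action, reward)
```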

The second is Friday morning. This one is more academic with many topics. I’ll personally be discussing research questions for real world RL.

The third one is Friday afternoon with more emphasis on sequences of decisions. I expect to hear “imitation learning” multiple times 🙂

I’m planning to attend all 3. It’s great to see interest building in this direction, because Real World RL seems like the most promising direction for fruitfully expanding the scope of solvable machine learning problems.

Code submission should be encouraged but not compulsory

ICML, ICLR, and NeurIPS are all considering or experimenting with code and data submission as a part of the reviewing or publication process, with the hypothesis that it aids reproducibility of results. Reproducibility has been a rising concern, with discussions in papers, workshops, and invited talks.

The fundamental driver is of course lack of reproducibility. Lack of reproducibility is an inherently serious and valid concern for any kind of publishing process where people rely on prior work to compare against and build new things on. Lack of reproducibility (due to random initialization, for example) was one of the things leading to a period of unpopularity for neural networks when I was a graduate student. That dismissal has proved nonviable (Surprise! Learning circuits is important!), but the reproducibility issue remains. Furthermore, there is always the opportunity for, and a latent suspicion of, authors ‘cheating’ in reporting results, which a reproducible approach could allay.

With the above said, I think the reproducibility proponents should understand that reproducibility is a value but not an absolute value. As an example here, I believe it’s quite worthwhile for the community to see AlphaGoZero published even if the results are not easily reproduced. There is real value for the community in showing what is possible irrespective of whether or not others can feasibly replicate the same mastery of Go, and there is real value in having an algorithm like this be public even if the code is not. Treating reproducibility as an absolute value could exclude results like this.

An essential understanding here is that machine learning is (at least) 3 different kinds of research.

  • Algorithms: The goal is coming up with a better algorithm for solving some category of learning problems. This is the most typical viewpoint at these conferences.
  • Theory: The goal is generally understanding what is possible or not possible for learning algorithms. Although these papers may have algorithms, they are often not the point and demanding an implementation of them is a waste of time for author, reviewer, and reader.
  • Applications: The goal is solving some particular task. AlphaGoZero is a reasonable example of this—it was about beating the world champion in Go, with algorithmic development in service of that. For this kind of research, perfect programmatic reproducibility may be infeasible because the computation is too extreme, the data is proprietary, etc…

Using a one-size-fits-all approach where you demand that every paper “is” a programmatically reproducible implementation is a mistake that would create a division that reduces our community. Keeping this three-fold focus fundamentally enriches the community both literally and ontologically.

Another view here is provided by considering the argument at a wider scope. Would you prefer that health regulations/treatments be based on all scientific studies, including those where data is not fully released to the public (i.e., almost all of them, for privacy reasons)? Or would you prefer that health regulations/treatments be based only on studies whose data is fully released to the public? Preferring the latter is equivalent to ignoring most scientific studies in making decisions.

The alternative to a compulsory approach is to take an additive view. The additive approach has a good track record amongst reviewing process changes.

  • When I was a graduate student, papers were not double blind. The community switched to double blind because it adds an opportunity for reviewers to review fairly and it gives authors a chance to have their work reviewed fairly whether they are junior or senior. As a community we also do not restrict posting on arxiv or talks about a paper before publication, because that would subtract from what authors can do. Double blind reviewing could be divisive, but it is not when used in this fashion.
  • When I was a graduate student, there was also a hard limit on the number of pages in submissions. For theory papers this meant that proofs were not included. We changed the review process to allow (but not require) submission of an appendix which could optionally be used by reviewers. This again adds to the options available to authors/reviewers and is generally viewed as positive by everyone involved.

What can we add to the community in terms of reproducibility?

  1. Can reviewers do a better job of reviewing if they have access to the underlying code or data?
  2. Can authors benefit from releasing code?
  3. Can readers of a paper benefit from an accompanying code release?

The answer to each of these questions is a clear ‘yes’ if done right.

For reviewers, it’s important to not overburden them. They may lack the computational resources, platform, or personal time to do a full reproduction of results even if that is possible. Hence, we should view code (and data) submission in the same way as an appendix which reviewers may delve into and use if they so desire.

For authors, code release has two benefits—it provides an additional avenue for convincing reviewers who default to skepticism, and it makes followup work significantly more likely. My most cited paper was Isomap, which did indeed come with a code release. Of course, this is not possible or beneficial for authors in many cases. Maybe it’s a theory paper where the algorithm isn’t the point? Maybe the data or code can’t be fully released because it’s proprietary? There are a variety of reasons. From this viewpoint, we see that releasing code should be supported and encouraged but optional.

For readers, having code (and data) available obviously adds to the depth of value that a paper has. Not every reader will take advantage of that but some will and it enormously reduces the barrier to using a paper in many cases.

Let’s assume we do all of these additive and enabling things, which is about where Kamalika and Russ aimed the ICML policy this year.

Is there a need to go further towards compulsory code submission? I don’t yet see evidence that default-skeptical reviewers are incapable of weighing the value of reproducibility against other values when considering whether a paper should be published.

Should we do less than the additive and enabling things? I don’t see why—the additive approach provides pure improvements to the author/review/publish process. Not everyone is able to take advantage of this, but that seems like a poor reason to restrict others from taking advantage when they can.

One last thing to note is that this year’s code submission process is an experiment. We should all want program chairs to be able to experiment, because that is how improvements happen. We should do our best to work with such experiments, try to make a real assessment of success/failure, and expect adjustments for next year.

Please vote

This is not at all related to Machine Learning.

I lived in Squirrel Hill as a graduate student at Carnegie Mellon, so the massacre there feels particularly immediate. While the person who did it is obviously culpable, the pattern of events makes it clear that others bear responsibility as well. This pattern includes an attempted bombing of Democrats and Trump critics by a Trump fanboy. It also includes a more general cross section of Republicans and their leaders pushing anti-semitism and more general xenophobia about migrants.

I don’t believe that stochastic terrorism is the goal here. Instead, I have a rather pessimal view of politics in which politicians do pretty much anything to get re-elected, at least in aggregate. Donald Trump’s presidential campaign showed how to do this with a platform of populism, nostalgia, xenophobia, and anti-abortion voters.

The populist angle is looking fairly broken now between anti-populist tax cuts and widely publicized efforts to allow preexisting-condition discrimination by insurance companies via Obamacare repeal. About the only populist angle which works is the economy, which is doing fine. On the other hand, there is no obvious change in employment trends since 2011 and no change in wage trends since 2014, so the case for claiming responsibility is clearly tenuous.

Alliances in a two-party system tend to be fragile since winning with a smaller constituency enables better serving that constituency. Losing the populist angle leaves a double-down on the remaining agenda as the most plausible choice. Xenophobia is much older than democracy and psychologically potent so it has obvious value. It’s historically used by leaders who pick some characteristic to divide people and position themselves to thrive on the conflict or distraction that creates. Almost anything will do—if you take away religion, birthplace, skin color, and ethnicity, it would just change to hair color, nose size, or left-handedness. In a democracy, the goal with this approach is simply convincing people to vote according to their activated xenophobia.

For people embracing xenophobia to retain power, stochastic terrorism is just an unfortunate side effect. In this sense, inciting xenophobia about a caravan of refugee Guatemalans at the other end of Mexico is rather clever since most of them won’t even make it to the US border months after the election plausibly leaving only electoral consequences. Yet xenophobia is known to be hard to control. Given this, it’s difficult to imagine stochastic terrorism as anything other than deliberately accepted by the Republican party leadership as an observed consequence of this behavior. The Squirrel Hill massacre and the attempted bombing campaigns are precisely the sort of thing that can happen when you dial up the rhetoric just before an election.

This is part of a pattern of moral collapse across the Republican party. By any reasonable measure Donald Trump is a serial liar with Republican politicians now mimicking this behavior. A remarkable set of people around the Trump campaign are confessed or convicted criminals with members of the Republican party variously tolerating, condoning, and perhaps mimicking.

In this context, the upcoming midterm election seems particularly important. If politicians in aggregate behave as if they will do anything to get reelected, then voters must vote for the behavior they want at the ballot box rather than relying on or appealing to it at a later date. In most situations, this is about picking and choosing the better candidate. I’ve been registered as an independent for this reason—I want to decide for myself.

This is not most situations. Do voters rebuke the Republican party or not? If the answer is not (a 37% chance according to bettors at present) then the slide into corruption likely accelerates as confirmed control of the government erodes the remaining institutional checks on corruption. We are several steps away from a state of deep corruption and it takes time for the consequences of corruption to really seep into society. But every step on the path makes the situation worse and we are on the wrong path now as evidenced by bombing attempts, a xenophobic massacre, and the wider context creating them.

I want to particularly encourage those who are eligible to vote in the United States midterms November 6th.

A Real World Reinforcement Learning Research Program

We are hiring for reinforcement learning related research at all levels and all MSR labs. If you are interested, apply, talk to me at COLT or ICML, or email me.

More generally though, I wanted to lay out a philosophy of research which differs from (and plausibly improves on) the current prevailing mode.

Deepmind and OpenAI have popularized an empirical approach where researchers modify algorithms and test them against simulated environments, including in self-play. They’ve achieved significant success in these simulated environments, greatly expanding the repertoire of ‘games solved by reinforcement learning’, which consisted of the singleton backgammon when I was a graduate student. Given the ambitious goals of these organizations, the more general plan seems to be “first solve games, then solve real problems”. There are some weaknesses to this approach, which I want to lay out next.

  • Broken API One issue with this is that multi-step reinforcement learning is a broken API in the sense that it admits problem definitions which are unsolvable by currently popular algorithm families. In particular, you can create problems which are either ‘antishaped’, so local rewards mislead w.r.t. long-term rewards, or keylock problems, as are common in Markov Decision Process lower bounds (a sketch of a keylock environment appears after this list). I coded up simple versions of these problems a couple years ago and stuck them on github to be extra crisp. If you apply policy gradient or Q-learning style algorithms to these problems, they commonly run into sample complexity exponential in the number of states. As a general principle, APIs which create exponential sample complexity are bad—they imply that individual applications must take advantage of special structure in order to succeed.
  • Transference Another significant issue is the degree of transference between solutions in simulation and the real world. “Transference” here potentially happens at several levels.
    • Do the algorithms carry over? One of the persistent issues with simulation-based approaches is that you don’t care about sample complexity that much—optimal performance at acceptable computational complexities is the typical goal. In real world applications, this is somewhat absurd—you really care about immediately doing something reasonable and optimizing from there.
    • Do the simulators carry over? For every simulator, there is a fidelity question which comes into play when you want to transfer a policy learned in the simulator into action in the real world. Real-time ray tracing and simulator quality more generally are advancing, but I’m not ready yet to trust a self-driving car trained in a simulated reality. Whether the physics can be simulated accurately is unclear—friction, for example, is known to be difficult—and more generally the representative variety of exogenous events in an open world seems quite difficult to implement.
  • Solution generality When you test and discover that an algorithm works in a simulated world, you know that it works in the simulated world. If you try it in 30 simulated worlds and it works in all of them, it can still easily be the case that an algorithm fails on the 31st simulated world. How can you achieve confidence beyond the number of simulated worlds that you try and succeed on? There is some sense by which you can imagine generalization over an underlying process generating problems, but this seems like a shaky justification in practice, since the nature of the problems encountered seems to be a nonstationary development of an unknown future.
  • Value creation Solutions of a ‘first A, then B’ flavor naturally take time to get to the end state where most of the real value is set to be realized. In the years before reaching applications in the real world, does the funding run out? We certainly hope not for the field of research but a danger does exist. Some discussion here including the comments is relevant.
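
To make the keylock point above concrete, here is a hypothetical re-implementation of a combination-lock environment—not the code from the github repository mentioned in the post, just a sketch of the same construction and why undirected exploration fails on it.

```python
# A hypothetical keylock (combination-lock) environment: only one secret action per level
# advances; any other action resets to the start. The single unit of reward sits at depth H.
import numpy as np

class KeyLock:
    def __init__(self, horizon=10, n_actions=2, seed=0):
        rng = np.random.default_rng(seed)
        self.horizon, self.n_actions = horizon, n_actions
        self.keys = rng.integers(n_actions, size=horizon)  # the secret combination

    def reset(self):
        self.level = 0
        return self.level

    def step(self, action):
        """Return (state, reward, done)."""
        self.level = self.level + 1 if action == self.keys[self.level] else 0
        done = self.level == self.horizon
        return self.level, (1.0 if done else 0.0), done

# Reward-blind exploration (the implicit driver of epsilon-greedy Q-learning and vanilla
# policy gradient before any reward is seen) reaches the reward with probability
# (1/n_actions)**horizon per episode, so its sample complexity is exponential in the horizon.
env, successes, rng = KeyLock(horizon=10, n_actions=2), 0, np.random.default_rng(1)
for episode in range(1000):
    env.reset()
    for t in range(env.horizon):
        _, reward, done = env.step(int(rng.integers(env.n_actions)))
        if done:
            successes += 1
            break
print(f"random exploration found the reward in {successes} of 1000 episodes")  # ~1 expected
```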

What’s an alternative?

Each of the issues above is addressable.

  • Build fundamental theories of what are statistically and computationally tractable sub-problems of Reinforcement Learning. These tractable sub-problems form the ‘APIs’ of systems for solving these problems. Examples of this range from simpler (Contextual Bandits) through intermediate (learning to search) to more advanced (Contextual Decision Processes).
  • Work on real-world problems. The obvious antidote to simulation is reality, driving both the need to create systems that work in reality and a research agenda around reality-centered issues like performance at low sample complexity. There are some significant difficulties with this—reinforcement-style algorithms require interactive access to learn, which often pushes research towards companies with the necessary infrastructure. Nevertheless, offline evaluation on real-world data does exist (see the sketch after this list), and the choice of emphasis in research directions is open to everyone.
  • The combination of fundamental theories and a platform which distills learnings so they are not forgotten and always improved upon provides a stronger basis for expectation of generalization into the next problem.
  • The shortest path to creating valuable applications in the real world is to simply work on creating valuable applications in the real world. Doing this in a manner guided by other elements of the research program is just good sense.
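
As an illustration of the offline-evaluation point above, here is a minimal sketch of inverse propensity scoring, assuming logged tuples of (context, action, reward, propensity) from a randomized logging policy. The data below is simulated and purely illustrative; the point is that a new policy can be evaluated from logs, with no simulator and no further interaction.

```python
# Offline policy evaluation via inverse propensity scoring (IPS) on a hypothetical log.
import numpy as np

rng = np.random.default_rng(2)
n_actions, n_features = 3, 4

def simulate_log(n=20000):
    """Hypothetical exploration log: uniform-random logging policy over n_actions."""
    log = []
    for _ in range(n):
        x = rng.normal(size=n_features)
        a = int(rng.integers(n_actions))          # logged action
        r = 1.0 if a == int(x[0] > 0) else 0.0    # stand-in reward signal
        log.append((x, a, r, 1.0 / n_actions))    # propensity of the logged action
    return log

def ips_value(policy, log):
    """Unbiased estimate of the average reward `policy` would have obtained."""
    return float(np.mean([r * (policy(x) == a) / p for x, a, r, p in log]))

log = simulate_log()
candidate = lambda x: int(x[0] > 0)   # a policy we never ran online
constant = lambda x: 0                # always pick action 0
print("candidate policy value ~", round(ips_value(candidate, log), 3))  # close to 1.0
print("constant policy value  ~", round(ips_value(constant, log), 3))   # close to 0.5
```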

The above must be applied in moderation—some emphasis on theory, some emphasis on real world applications, some emphasis on platforms, and some emphasis on empirics. This has been my research approach for a little over 10 years, ever since I started working on contextual bandits.

Let’s call the first research program ‘empirical simulation’ and the second research program ‘real fundamentals’. The empirical simulation approach has a clear strong advantage in that it creates impressive demos, which creates funding, which creates more research. The threshold for contribution to the empirical simulation approach may also be lower simply because it requires mastery of fewer elements, implying people can more easily participate in it. At the same time, the real fundamentals approach has clear advantages in addressing the weaknesses of the empirical simulation approach. At a concrete level, this means we have managed to define and create fundamentals through research while creating real-world applications and value radically more efficiently than the empirical simulation approach has achieved.

The ‘real fundamentals’ concept is behind the open positions above. These positions have been designed to come with both the colleagues and mandate to address the most difficult research problems along with the organizational leverage to change the world. For people interested in fundamentals and making things happen in the real world these are prime positions—please consider joining us.