I wanted to expand on this post and some of the previous problems/research directions about where learning theory might make large strides.

**Why theory?** The essential reason for theory is “intuition extension”. A very good applied learning person can master some particular application domain, yielding the best computer algorithms for solving that problem. A very good theorist can take the intuitions discovered by this and other applied learning people and extend them to new domains in a relatively automatic fashion. To do this, we take these basic intuitions and try to find a mathematical model that:

- Explains the basic intuitions.
- Makes new testable predictions about how to learn.
- Succeeds in so learning.

This is “intuition extension”: taking what we have learned somewhere else and applying it in new domains. It is fundamentally useful to everyone because it increases the level of automation in solving problems.

**Where next for learning theory?** I like the analogy with physics. Back before we-the-humans knew much, people would experiment occasionally and learn to design new things by slow evolution. At some point the physics model arose: you try to build mathematical models of what is happening and then make predictions based on those models. This was wildly successful for physics. For machine learning, it has only been moderately successful. We have some formalisms which are of some use in addressing novel learning problems, but the overall process of doing machine learning is not very close to “automatic”. The good news is that over the last 20 years a *much* richer set of positive examples of successful applied machine learning has developed. Thus, there are many good intuitions from which we can hope to generalize. In the physics analogy, the year is (perhaps) 1900. Here are a few specific issues:

- What is the “right” mathematical model of learning? (In analogy: what is the “right” mathematical model of physical phenomena?) The models we currently use have their compelling points but typically fail to capture all of the relevant details. This is a very hard question to address, but it should be actively considered, and any progress may be very helpful. Examples of this include:
- What is the “right” model of active learning? We know almost nothing except there is great potential.
- What is the “right” model of reinforcement learning? Again, we know very little in comparison to what we want to know: a fully automatic, general RL solver.

The notion of “right” here is partially theoretical (can we derive efficient algorithms?) and partially empirical (do they actually work?).
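The “great potential” of active learning can be made concrete even in a toy setting. In a noiseless one-dimensional threshold problem, querying labels adaptively (here, binary search, which always queries the most uncertain point) locates the threshold with O(log n) labels where passive labeling needs O(n). This is a minimal sketch with invented data, not a proposal for the “right” model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pool-based active learning on a 1-D threshold problem:
# the label is 1 iff x > theta.  Binary search via uncertainty
# sampling finds theta with O(log n) labels instead of O(n).
theta = 0.37
pool = np.sort(rng.uniform(0, 1, 1000))
oracle = lambda x: int(x > theta)       # each call costs one label query

lo, hi = 0, len(pool) - 1
queries = 0
while lo < hi:
    mid = (lo + hi) // 2                # the most "uncertain" remaining point
    queries += 1
    if oracle(pool[mid]):
        hi = mid                        # threshold is at or below mid
    else:
        lo = mid + 1                    # threshold is above mid
print(f"estimated threshold ~ {pool[lo]:.3f} after {queries} label queries")
```

With 1000 pool points this uses about 10 labels; a passive learner would need on the order of the full pool to pin the threshold down as precisely. The open question is what the analogous guarantee looks like for realistic hypothesis classes and noisy labels.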

- How do we refine the empirical observations and intuitions of applied learning?
- How should we think about “prior”? The Bayesian answer seems unconvincing. At a minimum, information used to create a Bayesian prior often does not come in the form of a Bayesian prior, and so some translation system must be developed.
- How can we develop big learning systems that solve big problems? Some form of structure seems necessary, but the right form is still unclear. What theory governs the design of such systems?

- How do we take existing theoretical insights and translate them into practical algorithms?
- The method of linear projection into lower-dimensional spaces has been studied theoretically. Is it useful empirically?
- The online learning setting seems theoretically compelling and, at least sometimes, empirically validated. What concerns remain to be addressed to make this a useful technology?
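To make the last two items concrete, here is a minimal sketch combining them: a random linear projection to a lower-dimensional space, followed by one pass of online gradient descent on logistic loss. The data, dimensions, and learning rate are all invented for illustration; this is one simple instantiation, not the specific algorithms under discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stream: two well-separated Gaussian classes in d = 500 dimensions.
n, d, k = 1000, 500, 50                  # k = projected dimension
mu = 2.0 * np.ones(d) / np.sqrt(d)       # class mean direction, norm 2
y = rng.integers(0, 2, n).astype(float)  # labels in {0, 1}
X = (2 * y - 1)[:, None] * mu + 0.3 * rng.standard_normal((n, d))

# Random linear projection d -> k (Johnson-Lindenstrauss style).
P = rng.standard_normal((d, k)) / np.sqrt(k)
Z = X @ P

# One-pass online gradient descent on logistic loss in the projected space.
w = np.zeros(k)
lr = 0.5
mistakes = 0
for t in range(n):
    z, label = Z[t], y[t]
    score = z @ w
    mistakes += (score > 0) != label            # count online mistakes
    p = 1.0 / (1.0 + np.exp(-np.clip(score, -30, 30)))  # predicted P(label=1)
    w -= lr * (p - label) * z                   # per-example gradient step
print(f"online mistake rate over {n} examples: {mistakes / n:.3f}")
```

Each example is seen once, in order, and the mistake count is accumulated before the update, matching the online protocol. The empirical question is whether such projected online learners remain competitive on real (non-Gaussian, nonstationary) problems.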

We should keep in mind that there is a real chance that the limits of machine learning are *lower bounded* by human learning. Getting from here to there will, of course, require a bit of work, some of which might be greatly aided by mathematical consideration.

Around 1900, physicists thought they had discovered everything. I suspect (and hope!) ML is not there yet.