Thoughts regarding “Is machine learning different from statistics?”

Given John’s recent posts on CMU’s new machine learning department and “Deep Learning,” I asked for an opportunity to give a computational learning theory perspective on these issues.

To my mind, the answer to the question “Are the core problems of machine learning different from the core problems of statistics?” is a clear Yes. The point of this post is to describe a core problem in machine learning that is computational in nature and will appeal to statistical learning folk (as an extreme example, note that if P=NP, which for all we know is true, then we would suddenly find almost all of our favorite machine learning problems considerably more tractable).

If the central question of statistical learning theory were crudely summarized as “given a hypothesis with a certain loss bound over a test set, how well will it generalize?” then the central question of computational learning theory might be “how can we find such a hypothesis efficiently (e.g., in polynomial time)?”

With this in mind, let’s spend the rest of the post thinking about agnostic learning. The following definition is due to Kearns, Schapire, and Sellie:

Fix a function class C. Given a training set drawn from an arbitrary distribution D and *arbitrarily labeled* (that is, we are given samples from an arbitrary distribution D over some set X x Y), efficiently output a hypothesis that is competitive with the best function in C with respect to D. If there exists a provably efficient solution for the above problem we say C is agnostically learnable with respect to D. Notice the difference between agnostic and PAC learning: we *do not assume* that the data is labeled according to a fixed function (I find this to be a common criticism of the PAC model).

To make this a little more concrete, let C be the class of halfspaces, let X = R^n, and let Y = {-1,1}. Let opt = min_{h in C} (classification error of h with respect to D). Then the goal is, given an *arbitrarily* labeled data set of points in R^n, to find a hypothesis whose true error with respect to D is at most opt + eps. Furthermore, the solution should provably run in time polynomial in n and 1/eps. Note that the hypothesis we output *need not be a halfspace*; we only require that its error rate is close to the error rate of the optimal halfspace.
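
To see the objective in code: the following minimal sketch (my own, in Python) evaluates the 0-1 error of a candidate halfspace on hypothetical arrays X and y, which is the quantity whose minimum over all halfspaces defines opt. Evaluating a given halfspace is easy; the computational question is whether anything close to the minimizer can be found efficiently.

```python
import numpy as np

def halfspace_error(w, b, X, y):
    """Empirical 0-1 error of the halfspace sign(w.x + b).

    X: (m, n) array of points in R^n; y: (m,) array of labels in {-1, +1}.
    """
    preds = np.sign(X @ w + b)
    preds[preds == 0] = 1          # break ties toward +1
    return float(np.mean(preds != y))

# Agnostic goal: output *some* hypothesis (not necessarily a halfspace) whose
# true error under D is at most opt + eps, where
#     opt = min over halfspaces (w, b) of their error with respect to D,
# in time polynomial in n and 1/eps.  Brute-force search over halfspaces to
# compute opt is not efficient -- that is exactly the computational obstacle.
```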

Why choose C to be halfspaces? At the heart of many of our most popular learning tools, such as Support Vector Machines, is an algorithm for learning a halfspace over a set of features in many dimensions. Perhaps surprisingly, *it is still not known*, given samples from an arbitrary distribution on X x Y, how to efficiently ‘find the best fitting halfspace’ over a set of features. In fact, if we require our algorithm to output a halfspace as its hypothesis then the agnostic halfspace learning problem is NP-hard. Recent results have shown that even in the case where there is a halfspace with error rate .0001 with respect to D it is NP-hard to output a halfspace with error rate .49999 (achieving .5 is of course trivial).

The astute reader here may comment that I am ignoring the whole point of the ‘kernel trick,’ which is to provide an implicit representation of a halfspace over some feature set in many dimensions, thereby bypassing the above hardness result. While this is true, I am still not aware of any results in the statistical learning literature that give provably efficient algorithms for agnostically learning halfspaces (regardless of the choice of kernel).
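
For concreteness, here is what a kernelized SVM actually optimizes in practice: hinge loss plus regularization over the implicit feature space, a convex surrogate rather than the 0-1 loss appearing in the agnostic guarantee above. A minimal sketch on synthetic data (scikit-learn’s SVC is used purely for illustration):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # synthetic points in R^5
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=200))    # noisy halfspace labels

# Kernelized SVM: implicitly a halfspace over the RBF feature space, trained by
# minimizing hinge loss + regularization (a convex surrogate), not by
# minimizing 0-1 loss.  No guarantee of the form "error <= opt + eps" over
# arbitrary distributions is known for this procedure.
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```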

My larger point is that a fundamental approach from statistical learning theory (running an SVM) is deeply connected to strong negative results from computational complexity theory. Still, as I explain in the next paragraph, finding the ‘best fitting halfspace’ remains an open and important computational problem.

Negative results notwithstanding, it now seems natural to ask ‘well, can we find any efficient algorithms for agnostically learning halfspaces if we allow ourselves to output hypotheses that are not halfspaces?’ Briefly, the answer is ‘to some extent Yes,’ if we restrict the marginal distribution on X to be one of our favorite nice distributions (such as Gaussian over R^n). The solution runs in polynomial-time for any constant eps > 0 (for various reasons involving the ‘noisy XOR’ topic three posts ago, obtaining efficient algorithms for subconstant eps seems intractable).
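
As I understand it, the known positive result regresses the labels onto low-degree polynomials of the features and thresholds the result (an L1 polynomial regression, with the degree chosen as a function of eps). The sketch below is my own simplification of that flavor, using ordinary least squares; it conveys the shape of the algorithm, not the provably correct version.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_features(X, degree):
    """All monomials of the coordinates of X up to the given degree."""
    m, n = X.shape
    cols = [np.ones(m)]                                   # degree-0 term
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def poly_regression_learner(X, y, degree):
    """Fit labels in {-1,+1} by least squares over low-degree monomials, then
    threshold.  (The actual agnostic-learning result uses L1 regression and a
    degree chosen as a function of eps; this is only the flavor.)"""
    Phi = monomial_features(X, degree)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda Xnew: np.sign(monomial_features(Xnew, degree) @ coef)
```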

The complexity of agnostically learning halfspaces with respect to arbitrary distributions remains open. A solution (if one exists) will require a powerful new algorithmic idea and will have important consequences regarding our most commonly used machine learning tools.

To respond to Andrej Bauer’s comment three posts ago:

‘I guess I have a more general question. I understand that for the purposes of proving lower bounds in computational complexity it’s reasonable to show theorems like “can’t learn XOR with NOT, AND, OR.” But I always thought that machine learning was more “positive”, i.e., you really want to learn whenever possible. So, instead of despairing one should try to use a different bag of tricks.’

I think Andrej is right. Solving future machine learning problems will require a more sophisticated ‘bag of tricks’ than the one we have now, as most of our algorithms are based on learning halfspaces (in a noisy or agnostic setting). Computational learning theory gives us a methodology for how to proceed: find provably efficient solutions to unresolved computational problems; novel learning algorithms will no doubt follow.

Parallel Machine Learning Problems

Parallel machine learning is a subject rarely addressed at machine learning conferences. Nevertheless, it seems likely to increase in importance because:

  1. Data set sizes appear to be growing substantially faster than computation. Essentially, this happens because more and more sensors of various sorts are being hooked up to the internet.
  2. Serial speedups of processors seem to have relatively stalled. The new trend is to make processors more powerful by making them multicore.
    1. Both AMD and Intel are making dual core designs standard, with plans for more parallelism in the future.
    2. IBM’s Cell processor has (essentially) 9 cores.
    3. Modern graphics chips can have an order of magnitude more separate execution units.

    The meaning of ‘core’ varies a bit from processor to processor, but the overall trend seems quite clear.

So, how do we parallelize machine learning algorithms?

  1. The simplest and most common technique is to simply run the same learning algorithm with different parameters on different processors. Cluster management software like OpenMosix, Condor, or Torque is helpful here. This approach doesn’t speed up any individual run of a learning algorithm.
  2. The next simplest technique is to decompose a learning algorithm into an adaptive sequence of statistical queries and parallelize the queries over the sample (a toy sketch follows this list). This paper (updated from the term paper according to a comment) points out that statistical integration can be implemented via MapReduce, which Google popularized (the open source version is Hadoop). The general technique of parallel statistical integration is already used by many people, including IBM’s Parallel Machine Learning Toolbox. The disadvantage of this approach is that it only helps algorithms whose time is spent in statistical integration; fast sequential algorithms gain little. One example of a fast sequential algorithm is the perceptron, which works on a per-example basis, making individual updates extremely fast; it is explicitly not a statistical query algorithm.
  3. The most difficult method for speeding up an algorithm is fine-grained structural parallelism. The brain works in this way: each individual neuron operates on its own. The reason why this is difficult is that the programming is particularly tricky—you must carefully optimize to avoid latency in network communication. The full complexity of parallel programming is exposed.
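
As a toy version of approach (2), assuming the statistic of interest decomposes into a sum over examples (e.g., a gradient), the map step computes per-shard partial sums and the reduce step adds them. The sketch below uses Python’s multiprocessing rather than MapReduce/Hadoop, and the shard layout is hypothetical:

```python
import numpy as np
from multiprocessing import Pool

def partial_gradient(args):
    """Map step: squared-loss gradient contribution from one data shard."""
    w, X_shard, y_shard = args
    residual = X_shard @ w - y_shard
    return X_shard.T @ residual              # a statistic that sums across shards

def parallel_gradient(w, X, y, n_workers=4):
    """Reduce step: add the per-shard statistics computed in parallel."""
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    jobs = [(w, Xs, ys) for Xs, ys in shards]
    with Pool(n_workers) as pool:            # on some platforms the caller needs
        parts = pool.map(partial_gradient, jobs)  # a __main__ guard
    return sum(parts)
```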

A basic question is: are there other approaches to speeding up learning algorithms which don’t incur the full complexity of option (3)? One approach is discussed here.

The Machine Learning Department

Carnegie Mellon School of Computer Science has the first academic Machine Learning department. This department already existed as the Center for Automated Learning and Discovery, but recently changed its name.

The reason for changing the name is obvious: very few people think of themselves as “Automated Learners and Discoverers”, but there are a number of people who think of themselves as “Machine Learners”. Machine learning is both more succinct and more recognizable—good properties for a name.

A more interesting question is “Should there be a Machine Learning Department?”. Tom Mitchell has a relevant whitepaper claiming that machine learning is answering a different question than other fields or departments. The fundamental debate here is “Is machine learning different from statistics?”

At a cultural level, there is no real debate: they are different. Machine learning is characterized by several very active large peer reviewed conferences, operating in a computer science mode. Statistics tends to function with a greater emphasis on journals and a lesser emphasis on conferences, which often implies a much longer publishing cycle.

In terms of the basic questions driving the field, the answer seems less clear. It is true that the core problems of statistics in the past have typically differed from the core problems of machine learning today. Yet there has been substantial overlap, and there are a number of statisticians nowadays who are actively doing machine learning. It’s reasonably plausible that in the long term statistics departments will adopt the core problems of machine learning, removing the reasons for a separate machine learning department.

The parallel question for computer science comes up less often perhaps because computer science is a notoriously broad field.

The practical implication of a new department is the ability to create a more specific curriculum, admit more specific students, and hire faculty based upon more specific interests. Compared to a computer science program, classes on programming languages, computer architecture, or graphics might be dropped in favor of classes on learning theory, statistics, etc… Compared to a statistics program, classes on advanced parameter estimation and measure theory might be dropped in favor of algorithms and programming experience.

An alternative solution like “learn everything from computer science and statistics” is personally appealing to me, and I have benefited from and recommend a broad education. However, this is not practical for everyone. In my experience, a machine learning skill set is an effective specialization with which people can do important things in the world. Given this, having a department with a machine learning centered curriculum seems like a good idea. At Carnegie Mellon, this is the Machine Learning department. In the future and elsewhere it may have a different name, but the value of the machine learning skill set should grow with research, improving computers, and improving data sources.

A Deep Belief Net Learning Problem

“Deep learning” is used to describe learning architectures which have significant depth (as a circuit).

One claim is that shallow architectures (one or two layers) cannot concisely represent some functions, while a circuit with more depth can concisely represent these same functions. Proving lower bounds on the size of a circuit is substantially harder than proving upper bounds (which are constructive), but some results are known. Luca Trevisan’s class notes detail how XOR (parity) is not concisely representable in “AC0” (constant-depth circuits of unbounded fan-in AND, OR, NOT gates). This doesn’t quite prove that depth is necessary for the representations commonly used in learning (such as a thresholded weighted sum), but it is strongly suggestive that this is so.
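
The thresholded-weighted-sum case is easy to check directly for two bits: no single such unit computes XOR. A minimal brute-force sketch (my own) over a grid of weights and thresholds:

```python
import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])   # XOR of the two input bits

# Search a grid of weights/thresholds; no sign(w1*x1 + w2*x2 - b) matches XOR
# on all four points (the minimum number of mistakes stays at 1).
best_mistakes = 4
grid = np.linspace(-2, 2, 41)
for w1, w2, b in itertools.product(grid, repeat=3):
    preds = (X[:, 0] * w1 + X[:, 1] * w2 - b > 0).astype(int)
    best_mistakes = min(best_mistakes, int(np.sum(preds != y)))
print("fewest mistakes by any thresholded sum on this grid:", best_mistakes)  # 1
```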

Examples like this are a bit disheartening because existing algorithms for deep learning (deep belief nets, gradient descent on deep neural networks, and perhaps decision trees, depending on who you ask) can’t learn XOR very easily. The evidence so far suggests that learning a noisy version of XOR is hard; in fact, cryptosystems have been proposed based upon this hardness, and XOR-based deep learning problems appear to have no algorithm much better than “guess and check”.

It turns out that we can define deep learning problems which are solvable by deep belief net style algorithms. Some definitions:

  1. Learning Problem A learning problem is defined by a probability distribution D(x,y) over features x, which are a vector of bits, and a label y, which is either 0 or 1.
  2. Shallow Learning Problem A shallow learning problem is a learning problem where the label y can be predicted with error rate at most eps < 0.5 by a weighted linear combination of features, sign(sum_i w_i x_i).
  3. Deep Learning Problem A deep learning problem is a learning problem with a solution representable by a circuit of weighted linear sums with O(number of input features) gates.

These definitions are not necessarily the correct ones (and I’d like to hear from anyone who disagrees with the definition, and why), but they seem to capture the intuitions I know. Note that the definition of “deep learning problem” contains the definition of “shallow learning problem” as well as the XOR example. With high probability, it does not contain a random function. This definition is not captured by any existing complexity theory classes I know of, although some are close (TC0, for example).
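
For intuition on why the XOR example lands inside the “deep learning problem” definition while escaping the shallow one, two-bit XOR is computed exactly by a small circuit of thresholded weighted sums. This is a standard depth-2 construction, sketched here:

```python
def step(z):
    """Threshold gate: 1 if the weighted sum is positive, else 0."""
    return 1 if z > 0 else 0

def xor_circuit(x1, x2):
    """Depth-2 circuit of thresholded weighted sums computing XOR."""
    at_least_one = step(x1 + x2 - 0.5)    # OR gate as a thresholded sum
    not_both     = step(1.5 - x1 - x2)    # NAND gate as a thresholded sum
    return step(at_least_one + not_both - 1.5)   # AND of the two layer-1 gates

assert [xor_circuit(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```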

Theorem There exists a deep learning problem for which:

  1. A deep belief net (like) learning algorithm can achieve error rate 0 with probability 1 - delta, for any delta > 0, in the limit as the number of IID samples goes to infinity.
  2. The learning problem is not shallow. In particular, for all eps > 0, all weighted predictors have error rate at least 1/2 - eps.

The proof is actually a little bit stronger than the theorem statement. The definition of a ‘shallow learning problem’ can be broadened in several ways to include solution by representation of many common learning algorithms. Also, instead of an asymptotic analysis, a finite sample analysis could be made.

This theorem (roughly) says that “deep learning could be useful in practice”. This is a fairly weak statement. However, a stronger PAC-learning statement appears implausible because deep belief net (like) algorithms actively use the structure in x while PAC analysis holds for all distributions over x. Given the weakness of the theorem statement, empirical evidence for the effectiveness (or not) of deep learning is important.

Proof (This is a sketch only.) The first part of the proof is constructive: we simply specify a learning problem and then show that a deep belief net-like algorithm can solve it. The second part involves some probabilistic analysis.

The learning problem is essentially a ‘hidden bits problem’ which is best specified by defining an algorithm for drawing an example. The problem is parameterized by an integer k, where larger values of k give problems that are not shallow for smaller choices of eps. An example is drawn by first picking a uniform random bit y from {0,1}. After that, k hidden bits h_1, …, h_k are set so that a random subset of (k + y)/2 of them are 1 and the rest 0. For each hidden bit h_i, we have 4 output bits x_{i1}, x_{i2}, x_{i3}, x_{i4} (implying a total of 4k output bits). If h_i = 0, with 0.5 probability we set one of the output bits to 1 and the rest to 0, and with 0.5 probability we set all output bits to 0. If h_i = 1, with 0.5 probability we set one of the output bits to 0 and the rest to 1, and with 0.5 probability we set all output bits to 1.
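
A sampler for this generative process, as I read the definition (the only liberty taken is rounding (k + y)/2 down to an integer):

```python
import numpy as np

def draw_example(k, rng):
    """Draw one (x, y) from the hidden-bits problem with parameter k.

    y is a uniform bit; (k + y)/2 of the k hidden bits (rounded down) are 1;
    each hidden bit h_i generates 4 output bits x_{i1}, ..., x_{i4}."""
    y = int(rng.integers(0, 2))
    h = np.zeros(k, dtype=int)
    ones = rng.choice(k, size=(k + y) // 2, replace=False)
    h[ones] = 1
    x = np.empty((k, 4), dtype=int)
    for i in range(k):
        x[i, :] = h[i]                    # start with all 4 output bits equal to h_i
        if rng.random() < 0.5:            # with probability 0.5, flip exactly one
            x[i, rng.integers(0, 4)] = 1 - h[i]
    return x.ravel(), y                   # 4k output bits and the label
```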

This learning problem is solved by a two-level prediction process. Variations using recursive composition (redefine each “output bit” to be a hidden bit in a new layer, each of which has its own output bits) can make the “right” number of levels larger than 2.

The deep belief net like algorithm we consider is the algorithm which:

  1. Builds a thresholded weighted sum predictor for every feature x_{ij}, using as weights the probability of agreement between the features minus 0.5.
  2. Builds a thresholded weighted sum predictor for the label given the predicted values from the first step, with weights as before.

(The real algorithm uses something similar to gradient descent which is more powerful, but this is all we need.)
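
A sketch of this two-stage predictor in code (my reading of steps 1 and 2, using empirical agreement frequencies on a sample X, y drawn with the sampler above; it is not the gradient-descent variant):

```python
import numpy as np

def fit_two_layer(X, y):
    """Two layers of thresholded weighted sums with agreement-based weights.

    Stage 1: each output bit is predicted from the others, weighting each
    other bit by (empirical probability of agreement - 0.5).
    Stage 2: the label is predicted from the stage-1 outputs, weighting each
    by (empirical probability of agreement with the label - 0.5)."""
    m = X.shape[0]
    S = 2 * X - 1                       # work in {-1, +1}: E[s_a s_b] = 2 P(agree) - 1
    W1 = (S.T @ S) / (2 * m)            # = P(agree) - 0.5 for each pair of bits
    np.fill_diagonal(W1, 0)             # a bit is not used to predict itself
    H = np.where(S @ W1 > 0, 1, -1)     # stage-1 predictions (recovered hidden bits)
    w2 = (H.T @ (2 * y - 1)) / (2 * m)  # = P(stage-1 bit agrees with label) - 0.5
    return W1, w2

def predict(W1, w2, X):
    H = np.where((2 * X - 1) @ W1 > 0, 1, -1)
    return (H @ w2 > 0).astype(int)

# Example usage (hypothetical sizes):
# rng = np.random.default_rng(0)
# data = [draw_example(25, rng) for _ in range(4096)]
# X = np.array([x for x, _ in data]); y = np.array([lab for _, lab in data])
# W1, w2 = fit_two_layer(X, y)
```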

For each output feature x_{ij}, the values of output features corresponding to other hidden bits are uncorrelated, since by construction Pr(h_i = h_{i'}) = 0.5 for i != i'. For output features which share a hidden bit, the probability of agreement in value between two bits j, j' is 0.75. If we have n IID samples from the learning problem, then Chernoff bounds imply that, except with probability at most delta, empirical expectations deviate from their true values by at most (log((4k)^2/delta) / (2n))^{1/2} for all pairs of features simultaneously. For the prediction of each feature, when n = 512 k^4 log((4k)^2/delta), the sum of the weights on the 4(k-1) features corresponding to other hidden bits is bounded by 4(k-1) * 1/(32k^2) <= 1/(8k). On the other hand, the weights on the 3 other features sharing the same hidden bit are each 0.25 +/- 1/(32k^2), so each is individually larger than the sum of all the other weights. Consequently, the predicted value is the majority of the 3 other features, which is always the value of the hidden bit.

The above analysis (sketchily) shows that the predicted value for each output bit is the hidden bit used to generate it. The same style of analysis shows that, given the hidden bits, the label can be predicted perfectly. In this case, the value of each hidden bit provides a slight consistent edge in predicting the label, implying that the learning algorithm converges to uniform weighting over the predicted hidden bit values.

To prove the second part of the theorem, we can first show that a uniform weighting over all features is the optimal predictor, and then show that the error rate of this predictor converges to 1/2 as k -> infinity. The optimality of uniform weighting is a little bit tricky to prove, but it is obvious at a high level because of (1) the symmetry in the definition of the problem and (2) the fact that a nonuniform weighting increases the noise. The error rate convergence to 0.5 is a statement about Binomial probability distributions: essentially, the noise in the observed bits given the hidden bits kills prediction performance.
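
A quick numerical spot-check of this claim (my own; it only tests the uniform-weight predictor, which the argument above claims is optimal, thresholding the plain sum of the 4k output bits at the midpoint of its two conditional means, and reusing draw_example from the sampler sketch above):

```python
import numpy as np

def uniform_predictor_error(k, n_samples=5000, seed=0):
    """Monte Carlo error of thresholding the sum of all 4k output bits.

    For odd k, E[sum | y=0] = 2k - 1.5 and E[sum | y=1] = 2k + 1.5,
    so 2k is the midpoint threshold."""
    rng = np.random.default_rng(seed)
    mistakes = 0
    for _ in range(n_samples):
        x, y = draw_example(k, rng)       # sampler sketched earlier in this post
        mistakes += int(int(x.sum() > 2 * k) != y)
    return mistakes / n_samples

for k in (5, 25, 125):                    # the error creeps toward 0.5 as k grows
    print(k, uniform_predictor_error(k))
```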