Foster Provost and I discussed the merits of ROC curves vs. accuracy estimation. Here is a quick summary of our discussion.
The “Receiver Operating Characteristic” (ROC) curve is an alternative to accuracy for the evaluation of learning algorithms on natural datasets. The ROC curve is a curve and not a single-number statistic. In particular, this means that the comparison of two algorithms on a dataset does not always produce an obvious order.
Accuracy (= 1 – error rate) is a standard method used to evaluate learning algorithms. It is a single-number summary of performance.
AROC is the area under the ROC curve. It is a single-number summary of performance.
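To make the two summaries concrete, here is a minimal Python sketch (my addition, not part of the discussion) that computes accuracy from thresholded scores and AROC via the equivalent rank statistic. The labels, scores, and function names are illustrative assumptions.

```python
# Minimal sketch: accuracy vs. AROC on a toy binary problem.
# Labels are 0/1; scores are a classifier's real-valued outputs.
# (Illustrative data only; names here are assumptions, not from the post.)

def accuracy(labels, scores, threshold=0.5):
    """Fraction of examples whose thresholded score matches the label."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def aroc(labels, scores):
    """Area under the ROC curve, computed as the probability that a random
    positive example is scored above a random negative example (ties 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.45, 0.5, 0.3, 0.7, 0.2, 0.6]

print("accuracy:", accuracy(labels, scores))  # one number at one threshold
print("AROC:    ", aroc(labels, scores))      # one number over all thresholds
```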
The comparison of these metrics is a subtle affair, because in machine learning, they are compared on different natural datasets. This makes some sense if we accept the hypothesis “Performance on past learning problems (roughly) predicts performance on future learning problems.”
The ROC vs. accuracy discussion is often conflated with “is the goal classification or ranking?” because ROC curve construction requires a ranking be produced. Here, we assume the goal is classification rather than ranking. (There are several natural problems where ranking of instances is much preferred to classification. In addition, there are several natural problems where classification is the goal.)
| Arguments for ROC | Explanation |
| --- | --- |
| Ill-specification | The costs of choices are not well specified. The training examples are often not drawn from the same marginal distribution as the test examples. ROC curves allow for an effective comparison over a range of different choice costs and marginal distributions. |
| Ill-dominance | Standard classification algorithms do not have a dominance structure as the costs vary. We should not say “algorithm A is better than algorithm B” when we don’t know the choice costs well enough to be sure. |
| Just-in-Time use | Any system with a good ROC curve can easily be designed with a ‘knob’ that controls the rate of false positives vs. false negatives. (A minimal sketch of such a knob follows this table.) |
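As a concrete (hypothetical) illustration of the Just-in-Time ‘knob’, the sketch below sweeps a decision threshold over a scorer’s outputs; lowering the threshold trades false negatives for false positives. The data and helper name are assumptions for illustration.

```python
# Hypothetical illustration of the Just-in-Time 'knob': any scoring classifier
# yields a family of classifiers by sweeping a decision threshold, trading
# false positives against false negatives. (Toy data, assumed names.)

def confusion_counts(labels, scores, threshold):
    """Return (false_positives, false_negatives) at the given threshold."""
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    return fp, fn

labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.45, 0.5, 0.3, 0.7, 0.2, 0.6]

for threshold in (0.2, 0.4, 0.6, 0.8):
    fp, fn = confusion_counts(labels, scores, threshold)
    print(f"threshold={threshold:.1f}  false positives={fp}  false negatives={fn}")
```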
AROC inherits the arguments of ROC except for Ill-dominance.
| Arguments for AROC | Explanation |
| --- | --- |
| Summarization | Humans don’t have the time to understand the complexities of a conditional comparison, so having a single number instead of a curve is valuable. |
| Robustness | Algorithms with a large AROC are robust against a variation in costs. (A standard identity behind this claim is sketched after this table.) |
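One standard way to see the robustness claim (a well-known identity, not spelled out in the discussion above): the area under the ROC curve equals the probability that a uniformly random positive example is scored above a uniformly random negative example, so it involves no particular threshold or cost,

$$\mathrm{AROC}(f) \;=\; \Pr_{x^{+} \sim D^{+},\; x^{-} \sim D^{-}}\left[\, f(x^{+}) > f(x^{-}) \,\right]$$

(with ties counted at weight 1/2), where f is the scoring function and D⁺, D⁻ are the class-conditional distributions.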
Accuracy is the traditional approach.
| Arguments for Accuracy | Explanation |
| --- | --- |
| Summarization | As for AROC. |
| Intuitiveness | People understand immediately what accuracy means. Unlike (A)ROC, it’s obvious what happens when one additional example is classified wrong. |
| Statistical Stability | The basic test set bound shows that accuracy is stable subject to only the IID assumption. For AROC (and ROC) this is only true when the number of examples in each class is not near zero. (The bound is sketched after this table.) |
| Minimality | In the end, a classifier makes classification decisions. Accuracy directly measures this, while (A)ROC compromises this measure with hypothetical alternate choice costs. For the same reason, computing (A)ROC may require significantly more work than solving the problem. |
| Generality | Accuracy generalizes easily to multiclass accuracy, importance weighted accuracy, and general (per-example) cost sensitive classification. ROC curves become problematic when there are just 3 classes. |
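To make the stability row concrete, here is the basic test set bound in its usual Hoeffding form (the explicit statement is my addition, not part of the original discussion): for any fixed classifier c evaluated on m IID test examples, with probability at least 1 − δ,

$$ e(c) \;\le\; \hat{e}(c) + \sqrt{\frac{\ln(1/\delta)}{2m}} $$

where e(c) is the true error rate and ê(c) the observed test-set error rate. The bound depends only on m, while confidence intervals for AROC additionally degrade when the number of positive or negative examples is small.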
The Just-in-Time argument seems to be the strongest for (A)ROC. One way to rephrase this argument is “Lack of knowledge of relative costs means that classifiers should be rankers so false positive to false negative ratios can be easily altered.” In other words, this is an argument for “ranking instead of classification” rather than “(A)ROC instead of Accuracy”.