Submission Summary
Thank you. Here are the details of your submission. You will not be notified by email for paper confirmation, so please print this page and save it for your records. If you find that there were errors in your submission, select View/Edit paper to make changes.
Paper ID: 203
Title: Sensitive Error Correcting Output Codes
Uploaded File Size: 227810 bytes
Authors: John Langford (jl@hunch.net), Alina Beygelzimer (beygel@us.ibm.com)
Abstract:
We present a reduction from cost sensitive classification to binary classification based on (a modification of) error correcting output codes. The reduction satisfies the property that \epsilon regret for binary classification implies l_{2}-regret of at most 2\epsilon for cost estimation. This has several implications:
1) Any regret-minimizing online algorithm for 0/1 loss is (via the reduction) a regret-minimizing online cost sensitive algorithm. In particular, this means that online learning can be made to work for arbitrary (i.e. totally unstructured) loss functions.
2) The output of the reduction can be thresholded so that \epsilon regret for binary classification implies at most 4\sqrt{\epsilon} regret for cost sensitive classification.
3) Using the canonical embedding of multiclass classification into cost sensitive classification, this reduction shows that \epsilon binary regret implies at most 2\epsilon l_{2} error in the estimation of class probabilities. For a hard prediction, this implies at most 4\sqrt{\epsilon} multiclass regret.
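For quick reference, the bounds stated in the abstract can be collected into a single display (a minimal LaTeX sketch; the symbols \epsilon_{\mathrm{bin}} and \mathrm{reg}(\cdot) are our illustrative notation, not taken from the paper):

% \epsilon_{bin} denotes the regret of the underlying binary classifier;
% each line restates one guarantee claimed in the abstract.
\begin{align*}
\epsilon_{\mathrm{bin}} &\Rightarrow \mathrm{reg}_{\ell_2}(\text{cost estimation}) \le 2\,\epsilon_{\mathrm{bin}} \\
\epsilon_{\mathrm{bin}} &\Rightarrow \mathrm{reg}(\text{cost sensitive, thresholded}) \le 4\sqrt{\epsilon_{\mathrm{bin}}} \\
\epsilon_{\mathrm{bin}} &\Rightarrow \mathrm{reg}_{\ell_2}(\text{class probability estimation}) \le 2\,\epsilon_{\mathrm{bin}} \\
\epsilon_{\mathrm{bin}} &\Rightarrow \mathrm{reg}(\text{multiclass, hard prediction}) \le 4\sqrt{\epsilon_{\mathrm{bin}}}
\end{align*}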
You can check that your paper was delivered by clicking here to view what we have received. If you cannot view your paper, then you must resubmit in time to meet the paper deadline.