The purpose of this workshop is not merely progress assessment. The sample complexity bounds community has internal disagreements about what is (and is not) a useful bound, what is (and is not) a tight bound, how (and where) bounds might reasonably be used, and which bounds-related questions should be answered. One goal of this workshop is to debate these points of disagreement, in order to foster better understanding both within the community and beyond it.
It is not the purpose of the workshop to converge on the one right way to assess sample complexity or learning performance; rather, we seek to understand the relative merits of diverse approaches and how they relate, recognising that it is very unlikely there is one true and best solution.
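As a concrete illustration of the kind of object under debate (an illustrative sketch, not a result from any workshop talk), the classical Hoeffding test-set bound gives a deviation radius that shrinks like the square root of the sample size. The function name and the specific numbers below are our own choices for the example.

```python
import math

def hoeffding_radius(n, delta):
    """Two-sided Hoeffding deviation radius for the mean of n i.i.d.
    samples bounded in [0, 1].

    With probability at least 1 - delta, the true error lies within
    +/- this radius of the empirical (test-set) error.
    """
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# Example: a held-out test set of 1000 examples, confidence 95%
radius = hoeffding_radius(1000, 0.05)  # roughly 0.043
```

Whether such a bound is "useful" or "tight" in practice, as opposed to merely valid, is exactly the sort of question the workshop aims to debate.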
The workshop is generally focused on answers to the above questions. The specific topics covered appear in the schedule below:
| Time | Title | Speakers |
|------|-------|----------|
| 7:30 | Introduction & Opening | John Langford and Bob Williamson |
| 8:20 | Using Unlabeled Data in Generalization Error Bounds | Matti Kääriäinen |
| 8:55 | Learning the prior for the PAC-Bayes bound | Amiran Ambroladze, Emilio Parrado-Hernández, and John Shawe-Taylor |
| 9:25 | Improved Risk-Tail Bounds for On-line Algorithms | Nicolò Cesa-Bianchi and Claudio Gentile |
| 4:00 | Generalization Bounds for Clustering | Shai Ben-David |
| 4:10 | Generalization Bounds for Clustering - Discussion | |
| 4:40 | An Objective Evaluation Criterion for Clustering | Arindam Banerjee and John Langford |
| 5:15 | A Sample-Complexity Analysis of Learning from Labeled and Unlabeled Data | Maria Balcan and Avrim Blum |
| 5:40 | Error Bounds for Correlation Clustering | Thorsten Joachims and John Hopcroft |
| 6:05 | Bounds which exploit a-priori knowledge | Petra Philips |