The package semibound contains a suite of bound calculation programs.
Matti Kääriäinen and John Langford, A Comparison of Tight Generalization Bounds, ICML 2005. .ps.gz .pdf .tex  Tightens several prediction bounds and compares them on a variety of datasets. All of these bounds are "equally valid" in the sense that they hold given only the assumption of IID data. This contains a much tightened version of the progressive validation bound from "Beating the Holdout" below. 
Arindam Banerjee and John Langford An Objective Evaluation Criterion for Clustering KDD 2004 .ps.gz .pdf .tex  Proposes evaluating clustering algorithms based upon their empirical performance on the PACMDL bound. Some very tight bounds are calculated. 
Peter Grünwald and John Langford, Suboptimal Behavior of Bayes and MDL in Classification under Misspecification (MLJ version), COLT 2004 (shorter COLT version) .ps.gz, .pdf, .tex presentation  Shows that a common technique for using Bayes' law for classification is "asymptotically inconsistent". 
Avrim Blum and John Langford, PAC-MDL Bounds, COLT 2003 .ps.gz, .pdf, .tex  Unifies PAC-Bayes, VC dimension, Sample Compression, and Occam's Razor bounds. Shows that any description language for transductive classifiers can be turned into a bound. 
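The flavor of turning a description language into a bound can be sketched with the standard Occam's Razor bound: a union bound over all hypotheses describable in a prefix-free code, combined with Hoeffding's inequality. This is a simpler, looser form than the binomial-tail versions used in the papers above, shown here purely for illustration.

```python
from math import log, sqrt

def occam_bound(empirical_error, m, description_bits, delta=0.05):
    # With probability at least 1 - delta over m IID examples, every
    # hypothesis h satisfies:
    #   true_error(h) <= empirical_error(h)
    #                    + sqrt((|h| * ln 2 + ln(1/delta)) / (2m)),
    # where |h| = description_bits is the code length of h. Shorter
    # descriptions give tighter bounds, which is the Occam's Razor effect.
    deviation = sqrt((description_bits * log(2) + log(1.0 / delta)) / (2 * m))
    return empirical_error + deviation
```

Note how the bound degrades as the description length grows and tightens as the sample size m grows.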
John Langford and John Shawe-Taylor, PAC-Bayes and Margins, NIPS 2002 .pdf, .ps.gz, .tex  The PAC-Bayes bound can explain margin bounds. A small bug in the tutorial above is fixed. 
John Langford, Quantitatively Tight Sample Complexity Bounds, Thesis .ps.gz, .pdf, .lyx, .tex, mathml (viewable with Mozilla). Contains:

Microchoice bounds are an algorithmically practical variant of the Occam's Razor bound applicable to discrete decision trees. Shell bounds take advantage of the distribution of error rates on a set of classifiers to gain tightness. The PAC-Bayes bound is shown to be capable of practical use on neural networks. Arbitrary train set and test set bounds can be tightly combined, resulting in improvements over either alone. Margin bounds can be tightened by using the PAC-Bayes bound in the argument. 
Avrim Blum, Adam Kalai, and John Langford, Beating the Holdout: Bounds for K-Fold and Progressive Cross-Validation. COLT '99, pages 203-208. .pdf, .ps.gz, .tex presentation 
K-fold cross-validation on m examples is at least as good as a holdout set of size m/K. Progressive validation on m examples is about as tight as a holdout set of size m, while allowing an expected half of the examples to be used for training. 
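A minimal sketch of the progressive validation procedure itself: each example is predicted before it is added to the training set, so every prediction is made on unseen data, yet every example but the last also ends up as training data. The learner interface and the majority_learner demo below are hypothetical, chosen only for illustration.

```python
def progressive_validation_error(learner, data):
    # data: list of (x, y) pairs; learner: function mapping a training
    # list to a predictor. For each i, train on examples 0..i-1 and
    # test on example i, then average the resulting errors.
    errors = 0
    for i in range(len(data)):
        predictor = learner(data[:i])
        x, y = data[i]
        errors += int(predictor(x) != y)
    return errors / len(data)

def majority_learner(train):
    # Toy demo learner: always predict the majority label seen so far
    # (default 0 on an empty or tied training set).
    ones = sum(y for _, y in train)
    guess = int(ones * 2 > len(train))
    return lambda x: guess
```

On a stream of ten examples all labeled 1, only the very first prediction (made before any training data exists) is wrong, giving a progressive validation error of 0.1.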
Avrim Blum, Carl Burch, and John Langford, 1998. On Learning Monotone Boolean Functions, Proceedings of the 39th Annual Symposium on Foundations of Computer Science, FOCS '98. .pdf, .ps.gz, .tex  Even when the classifier you want to learn is known to be monotone, learning can still be very hard. A very simple algorithm achieves the small gains that monotonicity provides. 
John Langford, Lessons from Learning Theory for Benchmark Design, Invited talk at SPIE 2004  This discusses what we can glean from learning theory about benchmark design. Many of the conclusions are well-known, although perhaps the exact quantitative tradeoffs are helpful. 
John Langford, Staged Learning  Shows that Baxter's model of learning to learn can be extended into an arbitrary-depth hierarchy. 