Vowpal Wabbit, version 7.0

A new version of VW is out. The primary changes are:

  1. Learning Reductions: I’ve long wanted to get learning reductions working, and we’ve finally done it. Not everything is implemented yet, but VW now directly supports:
    1. Multiclass Classification: --oaa or --ect.
    2. Cost-Sensitive Multiclass Classification: --csoaa or --wap.
    3. Contextual Bandit Classification: --cb.
    4. Sequential Structured Prediction: --searn or --dagger.

    In addition, it is now easy to build your own custom learning reductions for various plausible uses: feature diddling, custom structured prediction problems, or alternate learning algorithms. This effort is far from done, but it is now in a generally useful state (a usage sketch appears after this list). Note that all learning reductions inherit the ability to do cluster-parallel learning.

  2. Library interface: VW now has a basic library interface. The library provides most of the functionality of VW, with the limitation that it is currently monolithic and nonreentrant. These limitations will be improved over time; a minimal calling sketch appears after this list.
  3. Windows port: The priority of a Windows port jumped way up once we moved to Microsoft. The only feature we know doesn’t work at present is automatic backgrounding in daemon mode.
  4. New update rule: Stephane visited us this summer, and we fixed the default online update rule so that it is unit invariant: rescaling an individual feature no longer changes what is learned.
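
As a quick illustration of the reductions from the command line, here is a hedged sketch of multiclass and cost-sensitive usage; the file and model names are hypothetical, and the test suite in the source tree remains the authoritative reference for exact formats:

    # Multiclass one-against-all over 3 classes; train.txt is a hypothetical file
    # whose lines look like:  2 | height:1.5 weight:0.1
    vw --oaa 3 train.txt -f multiclass.model

    # Cost-sensitive multiclass; each line lists per-class costs, e.g.
    #   1:0.0 2:1.0 3:1.0 | height:1.5 weight:0.1
    vw --csoaa 3 cs_train.txt -f cs.model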
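
The library interface can be exercised similarly. The following is a minimal sketch patterned on the library_example.cc shipped with the source; the exact form of the learn call has shifted between releases, so treat it as illustrative rather than definitive:

    #include "vw.h"  // from the vowpalwabbit source directory

    int main()
    {
      // Initialize exactly as if from the command line; the flags are illustrative.
      vw* model = VW::initialize("--quiet");

      // Parse one example from a string in the usual input format.
      example* ec = VW::read_example(*model, (char*)"1 | height:1.5 weight:0.1");

      // Learn from the example; this call's exact signature varies across 7.x releases.
      model->learn(model, ec);

      // The resulting prediction is stored on the example
      // (the field name also varies by version, e.g. ec->final_prediction).

      VW::finish_example(*model, ec);
      VW::finish(*model);
      return 0;
    }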

There are also many other small updates, including some contributed utilities that make VW easier to apply and use.

Plans for the near future involve improving the quality of various items above, and of course better documentation: several of the reductions are not yet well documented.

7 Replies to “Vowpal Wabbit, version 7.0”

  1. Where can we find documentation on the new --searn and --dagger options? It’s not obvious how the input format will change for structured prediction. (Even in a simple case like sequence tagging, the features for each atomic decision might depend on all the earlier atomic decisions on that example, and so should presumably be computed only on an as-needed basis. This seems to require callbacks from VW to the client. More complicated cases might also require a callback for a specialized oracle.) Thanks!

    1. The test suite has a few nontrivial examples for simple cases involving sequential structured prediction. More complex cases definitely need better documentation.
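
    Roughly, the sequence task reads multiline examples: one line per token carrying that token’s label and features, with a blank line ending each sequence. The flags and format below are from memory, so check the test suite for the authoritative versions:

        vw --searn 4 --searn_task sequence seq_train.txt

    where seq_train.txt (a hypothetical file) contains, for each sequence, lines like

        1 | w^the
        2 | w^dog
        3 | w^barks

    followed by a blank line.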

  2. I wish to know whether the default optimization algorithm behind vw 7.0 is just online gradient descent, or a combination of online gradient descent used to warm-start L-BFGS to reach the final optimum?
    Thank you in advance 🙂
