Machine Learning (Theory)


Vowpal Wabbit version 6.1 & the NIPS tutorial

I just released version 6.1 of Vowpal Wabbit. Relative to 6.0, there are few new features, but many refinements.

  1. The cluster parallel learning code better supports multiple simultaneous runs, and other forms of parallelism have been mostly removed. This incidentally significantly simplifies the learning core.
  2. The online learning algorithms are more general, with support for l1 (via a truncated gradient variant) and l2 regularization, and a generalized form of variable metric learning.
  3. There is a solid persistent server mode which can train online, as well as serve answers to many simultaneous queries, either in text or binary.
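The l1 support mentioned above is via a truncated gradient variant. Roughly, the idea is to run ordinary stochastic gradient descent, but every K steps shrink each weight toward zero by a small amount, truncating at zero, so that irrelevant weights become exactly zero and the model stays sparse. Here is a minimal sketch of that idea in Python (the function names, step schedule, and constants are illustrative assumptions, not VW's actual internals):

```python
def truncate(w, shrink, theta):
    # Soft-threshold a single weight toward zero by `shrink`,
    # but only if it lies within [-theta, theta]; larger weights
    # are left alone.
    if 0.0 <= w <= theta:
        return max(0.0, w - shrink)
    if -theta <= w < 0.0:
        return min(0.0, w + shrink)
    return w

def sgd_truncated_gradient(data, dim, eta=0.1, lam=0.01, K=10,
                           theta=float("inf"), epochs=20):
    # Plain SGD on squared loss, with a truncation (l1 shrinkage)
    # step applied every K examples.
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            p = sum(wi * xi for wi, xi in zip(w, x))
            g = p - y  # derivative of squared loss wrt the prediction
            w = [wi - eta * g * xi for wi, xi in zip(w, x)]
            if t % K == 0:
                # Shrink accumulated over the last K steps.
                w = [truncate(wi, K * eta * lam, theta) for wi in w]
    return w
```

With the shrinkage amortized over K steps, the algorithm keeps the efficiency of sparse updates while still driving useless weights to exactly zero, which a naive subgradient of the l1 penalty would not do.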

This should be a very good release if you are just getting started: it compiles more automatically out of the box, and we have several new examples and updated documentation.

As per tradition, we’re planning to do a tutorial at NIPS during the break at the parallel learning workshop at 2pm Spanish time Friday. I’ll cover the basics, leaving the fun stuff for others.

  1. Miro will cover the L-BFGS implementation, which he created from scratch. We have found this works quite well amongst batch learning algorithms.
  2. Alekh will cover how to do cluster parallel learning. If you have access to a large cluster, VW is orders of magnitude faster than any other public learning system at linear prediction. And if you are as impatient as I am, it is a real pleasure when the computers can keep up with you.

This will be recorded, so it will hopefully be available for viewing online before too long.

I hope to see you soon :)

8 Comments to “Vowpal Wabbit version 6.1 & the NIPS tutorial”
  1. Anonymous says:

    I hope you post on your blog when the videos are available. I might forget.

  2. [...] Vowpal Wabbit version 6.1 [...]

  3. Foster Boondoggle says:

    Elmer Fudd recites Jabberwocky?

  4. Han says:

    Hi John,

    First off, I want to thank you for releasing the vowpal wabbit software as open source to the ML community. When I tried to download and build the 6.1 on Ubuntu 11.10, after adding the boost library into the working directory and building, I got the following error:

    g++ -march=native -Wall -pedantic -O3 -fomit-frame-pointer -ffast-math -fno-strict-aliasing -D_FILE_OFFSET_BITS=64 -I /usr/include -c -o global_data.o
    In file included from parse_regressor.h:10:0,
    from global_data.h:12,
    boost/program_options.hpp:15:57: fatal error: boost/program_options/options_description.hpp: No such file or directory
    compilation terminated.
    make: *** [global_data.o] Error 1

    I was wondering if you could point me in the right direction in how to solve this build error.


    • jl says:

      Can you just install boost with apt-get? Then, everything should work perfectly. Otherwise, you'll need to change the command line with -I /path/to/your/installed/boost (and something similar to link the library).

  5. Feichao says:

    loss functions: classic, hinge, logistic, squared and quantile are supported.
    I don’t know what the classic loss function is. Would you help to explain? Thanks.

    • jl says:

      classic = vanilla squared loss, without the importance weight aware update. You’ll find that it’s typically worse than squared.
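The difference is easiest to see with a large importance weight h. A "classic" update just multiplies the gradient step by h, which can overshoot the label badly; an importance-weight-aware update instead behaves like h infinitesimal steps taken in sequence, so the prediction decays toward the label but never crosses it. For squared loss this has a closed form. A sketch of both updates, under the assumption of a plain linear model (function names are mine, not VW's):

```python
import math

def classic_sq_update(w, x, y, eta, h):
    # "Classic": one gradient step on squared loss, with the importance
    # weight h simply scaling the step size. Large h can overshoot y.
    p = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * h * (y - p) * xi for wi, xi in zip(w, x)]

def aware_sq_update(w, x, y, eta, h):
    # Importance-weight-aware update: the limit of taking h's worth of
    # infinitesimal gradient steps. The prediction moves toward y by a
    # factor (1 - exp(-eta * h * x.x)) and cannot pass it.
    p = sum(wi * xi for wi, xi in zip(w, x))
    xx = sum(xi * xi for xi in x)
    if xx == 0.0:
        return list(w)
    scale = (y - p) * (1.0 - math.exp(-eta * h * xx)) / xx
    return [wi + scale * xi for wi, xi in zip(w, x)]
```

For example, with w = [0, 0], x = [1, 1], y = 1, eta = 0.5 and h = 100, the classic update pushes the prediction to 100 (wildly past the label), while the aware update lands essentially exactly on 1. This safety under large or accumulated importance weights is why classic is typically worse than squared.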
