This is a list of improvements that we want to make to the code. Any help implementing them is of course welcome.
Cluster parallelism improvements:
- Change the io_buf structure to run in its own thread. Currently, reading bits into program space happens synchronously with parsing, so delays in the return of read() stall the parser. This should speed up all input forms (daemon, stdin, file).
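A minimal sketch of the idea, assuming a chunk-queue handoff between a dedicated reader thread and the consuming parser. The `threaded_buf` and `consume_all` names are made up for illustration and are not VW's actual io_buf interface:

```cpp
#include <cassert>
#include <condition_variable>
#include <istream>
#include <mutex>
#include <queue>
#include <sstream>
#include <string>
#include <thread>

// Reader thread pushes chunks; parser pops them. Blocking read() calls
// now overlap with parsing instead of serializing with it.
class threaded_buf {
  std::queue<std::string> chunks;
  std::mutex m;
  std::condition_variable cv;
  bool done = false;

public:
  void produce(std::istream& in) {
    char block[4096];
    while (in.read(block, sizeof block), in.gcount() > 0) {
      std::lock_guard<std::mutex> l(m);
      chunks.emplace(block, static_cast<size_t>(in.gcount()));
      cv.notify_one();
    }
    { std::lock_guard<std::mutex> l(m); done = true; }
    cv.notify_one();
  }

  // Blocks until a chunk is available or the reader finished.
  bool next(std::string& out) {
    std::unique_lock<std::mutex> l(m);
    cv.wait(l, [&] { return !chunks.empty() || done; });
    if (chunks.empty()) return false;
    out = std::move(chunks.front());
    chunks.pop();
    return true;
  }
};

size_t consume_all(std::istream& in) {
  threaded_buf buf;
  std::thread reader([&] { buf.produce(in); });
  size_t total = 0;
  std::string chunk;
  while (buf.next(chunk)) total += chunk.size();  // parsing would go here
  reader.join();
  return total;
}
```

A real version would recycle a small ring of fixed buffers instead of allocating a string per chunk, and bound the queue so a fast reader cannot outrun the parser's memory.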
- Change the text parser to work in a read-once fashion. Currently, input strings are read multiple times.
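One-pass parsing could look roughly like this simplified sketch, which recovers the label and counts features in a single forward scan of a VW-format line. It deliberately ignores namespace contents, feature values, and tags, and `parse_once` is a hypothetical name, not the parser's real entry point:

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

struct parsed {
  float label = 0.f;
  int num_features = 0;
};

// Single forward scan: no second pass over the buffer, no intermediate
// token vector. (Simplified relative to the real VW text format.)
parsed parse_once(const std::string& line) {
  parsed p;
  p.label = std::strtof(line.c_str(), nullptr);  // label is the first token
  size_t i = 0, n = line.size();
  while (i < n && line[i] != '|') ++i;  // advance to the feature section
  while (i < n) {
    if (line[i] == '|') {
      ++i;  // a namespace name may be glued to the bar; skip it
      while (i < n && line[i] != ' ' && line[i] != '|') ++i;
    } else if (line[i] == ' ') {
      ++i;
    } else {
      ++p.num_features;  // start of a feature token; consume it
      while (i < n && line[i] != ' ' && line[i] != '|') ++i;
    }
  }
  return p;
}
```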
- Change multisource to use epoll_wait() instead of select(). The amount of speedup is unclear, but it's the right thing to do.
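For reference, the select() call could be swapped for something like the following (Linux-only; `wait_readable` is an illustrative helper, not multisource's actual interface). A real multisource loop would create the epoll instance once and add or remove fds as connections come and go, rather than rebuilding it per call as this sketch does:

```cpp
#include <cassert>
#include <cstring>
#include <sys/epoll.h>
#include <unistd.h>

// Wait for readable fds with epoll instead of select: wakeups cost
// O(ready fds) rather than O(all fds), and there is no FD_SETSIZE cap.
// Returns the number of ready fds, written into `ready`.
int wait_readable(const int* fds, int nfds, int* ready, int timeout_ms) {
  int ep = epoll_create1(0);
  if (ep < 0) return -1;
  for (int i = 0; i < nfds; ++i) {
    epoll_event ev;
    std::memset(&ev, 0, sizeof ev);
    ev.events = EPOLLIN;
    ev.data.fd = fds[i];
    epoll_ctl(ep, EPOLL_CTL_ADD, fds[i], &ev);
  }
  epoll_event events[16];
  int n = epoll_wait(ep, events, 16, timeout_ms);
  for (int i = 0; i < n; ++i) ready[i] = events[i].data.fd;
  close(ep);
  return n;
}
```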
- Internal flag passing. Currently, many programs must be started by hand on many different machines. Instead, you should start VW once on the source machine and have it launch the other VW processes as necessary, passing along the necessary flags (think of rsync). This would be a huge improvement in usability.
- Delayed backprop. A variant of delayed backprop may work better. Along with this, we probably need to implement an example-reorder module to break up substructure in example sequences to avoid incoherent updates.
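One plausible shape for the example-reorder module is a bounded buffer that, on each arriving example, emits a uniformly random resident one: local runs of correlated examples get broken up at a fixed memory cost. `reorder_buffer` is a hypothetical name, not an existing VW component:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Holds up to `capacity` examples; once full, each push evicts a
// uniformly random resident example. This scatters adjacent examples
// across a window of size `capacity`.
template <typename T>
class reorder_buffer {
  std::vector<T> pool;
  size_t capacity;
  std::mt19937 rng;

public:
  explicit reorder_buffer(size_t cap, unsigned seed = 0)
      : capacity(cap), rng(seed) {}

  // Returns true and fills `out` once the pool is over capacity.
  bool push(T ex, T& out) {
    pool.push_back(std::move(ex));
    if (pool.size() <= capacity) return false;
    std::uniform_int_distribution<size_t> pick(0, pool.size() - 1);
    size_t i = pick(rng);
    out = std::move(pool[i]);
    pool[i] = std::move(pool.back());  // swap-remove keeps push O(1)
    pool.pop_back();
    return true;
  }

  // Flush remaining examples at end of stream.
  bool drain(T& out) {
    if (pool.empty()) return false;
    out = std::move(pool.back());
    pool.pop_back();
    return true;
  }
};
```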
- The core linear algorithm should have a learning rate specified semantically, as the amount (or at least the maximum amount) of change in prediction, instead of simply being a multiplier on the gradient. This should reduce the need to futz with the choice of learning rate.
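To make this concrete: for a linear predictor with the squared-loss update w += eta*(y - p)*x, the prediction on that same example moves by eta*(y - p)*||x||^2, so a "semantic" rate c (the desired prediction change) determines eta. A hedged sketch of the idea, not VW's actual update rule:

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Update w so the prediction on this example moves toward y by at most
// `c`, regardless of feature scaling. Returns the pre-update prediction.
float update_with_semantic_rate(std::vector<float>& w,
                                const std::vector<float>& x,
                                float y, float c /* max |delta prediction| */) {
  float p = std::inner_product(x.begin(), x.end(), w.begin(), 0.f);
  float norm2 = std::inner_product(x.begin(), x.end(), x.begin(), 0.f);
  float err = y - p;
  if (norm2 == 0.f || err == 0.f) return p;
  // Move the prediction by min(c, |err|): never overshoot the target.
  float step = std::min(c, std::fabs(err));
  float eta = step / (std::fabs(err) * norm2);  // derived, not hand-tuned
  for (size_t i = 0; i < w.size(); ++i) w[i] += eta * err * x[i];
  return p;
}
```

The point of the derivation is that the knob `c` is in prediction units, so its sensible range does not depend on how the features happen to be scaled.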
- Alternate learning algorithms. The next level up in complexity is confidence weighted updates or matrix factorization style algorithms. Beyond that, essentially anything trainable in an online fashion is doable.
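As an illustration of the matrix-factorization side, a single online SGD step on one (user, item, rating) observation fits the same stream-of-examples mold as the linear learner. `mf_update` is a hypothetical helper, not a VW interface:

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// One online SGD step for rank-k matrix factorization: predict the
// rating as u.v, then move both factor vectors along the error
// gradient, with optional L2 shrinkage `lambda`. Returns the
// pre-update prediction.
float mf_update(std::vector<float>& u, std::vector<float>& v,
                float r, float eta, float lambda) {
  float pred = std::inner_product(u.begin(), u.end(), v.begin(), 0.f);
  float err = r - pred;
  for (size_t i = 0; i < u.size(); ++i) {
    float ui = u[i];  // use the old value of u[i] when updating v[i]
    u[i] += eta * (err * v[i] - lambda * u[i]);
    v[i] += eta * (err * ui - lambda * v[i]);
  }
  return pred;
}
```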
- Learning reductions. Previously, we've implemented learning reductions against VW used as a library, but adding a layer of abstraction that lets reductions operate directly within the system should be doable, and probably desirable. Especially in a cluster parallel environment, directly supporting learning reductions appears superior to a library implementation.
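The abstraction could amount to a learner interface that a reduction both consumes and exposes. A sketch with made-up type names, showing one-against-all multiclass reduced to a binary/regression base learner that the reduction treats as a black box:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// The interface every learner in the stack exposes.
struct base_learner {
  virtual float predict(const std::vector<float>& x) = 0;
  virtual void learn(const std::vector<float>& x, float label) = 0;
  virtual ~base_learner() = default;
};

// A trivial least-squares-style base learner, just to make the sketch
// runnable; any base_learner would do.
struct linear_scorer : base_learner {
  std::vector<float> w;
  explicit linear_scorer(size_t d) : w(d, 0.f) {}
  float predict(const std::vector<float>& x) override {
    float s = 0.f;
    for (size_t i = 0; i < x.size(); ++i) s += w[i] * x[i];
    return s;
  }
  void learn(const std::vector<float>& x, float label) override {
    float p = predict(x);
    for (size_t i = 0; i < x.size(); ++i) w[i] += 0.5f * (label - p) * x[i];
  }
};

// The reduction: k-class classification via k binary subproblems. It
// never looks inside the base learners it drives.
class one_against_all {
  std::vector<base_learner*> scorers;  // one per class, not owned

public:
  explicit one_against_all(std::vector<base_learner*> s)
      : scorers(std::move(s)) {}

  size_t predict(const std::vector<float>& x) {
    size_t best = 0;
    float best_score = scorers[0]->predict(x);
    for (size_t k = 1; k < scorers.size(); ++k) {
      float s = scorers[k]->predict(x);
      if (s > best_score) { best_score = s; best = k; }
    }
    return best;
  }

  void learn(const std::vector<float>& x, size_t label) {
    for (size_t k = 0; k < scorers.size(); ++k)
      scorers[k]->learn(x, k == label ? 1.f : -1.f);
  }
};
```

In a cluster-parallel setting the appeal is that the reduction layer sits inside the example pipeline, so the per-node parsing, hashing, and synchronization machinery is shared rather than re-driven through a library boundary by each reduction.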