Machine Learning (Theory)

12/19/2018

FAQ on ICML 2019 Code Submission Policy

ICML 2019 has an option for supplementary code submission that authors can use to provide additional evidence to bolster their experimental results. Since we have been getting a lot of questions about it, here is a Frequently Asked Questions list for authors.

1. Is code submission mandatory?

No. Code submission is completely optional, and we anticipate that high quality papers whose results are judged by our reviewers to be credible will be accepted to ICML, even if code is not submitted.

2. Does submitted code need to be anonymized?

ICML is a double-blind conference, and we expect authors to put in reasonable effort to anonymize the submitted code and their institution. This means that author names and licenses that reveal the authors' organization should be removed.

Please note that submitted code will not be made public: only the reviewers, Area Chair, and Senior Area Chair in charge will have access to it during the review period. If the paper gets accepted, we expect the authors to replace the submitted code with a non-anonymized version or a link to a public GitHub repository.

3. Are anonymous GitHub links allowed?

Yes. However, they have to be on a branch that will not be modified after the submission deadline. Please enter the GitHub link in a standalone text file inside the submitted zip file.

4. How will the submitted code be used for decision-making?

The submitted code will be used as additional evidence provided by the authors to add more credibility to their results. We anticipate that high quality papers whose results are judged by our reviewers to be credible will be accepted to ICML, even if code is not submitted. However, if something in the paper is unclear, then submitted code gives the authors an extra chance to clarify the details. To encourage code submission, we will also provide increased visibility to papers that submit code.

5. If code is submitted, do you expect it to be published with the rest of the supplementary? Or, could it be withdrawn later?

We expect submitted code to be published with the rest of the supplementary. However, if the paper gets accepted, then the authors will get a chance to update the code before it is published by adding author names, licenses, etc.

6. Do you expect the code to be standalone? For example, what if it is part of a much bigger codebase?

We expect your code to be readable and helpful to reviewers in verifying the credibility of your results. It is possible to do this through code that is not standalone — for example, with proper documentation.

7. What about pseudocode instead of code? Does that count as code submission?

Yes, we will count detailed pseudocode as code submission as it is helpful to reviewers in validating your results.

8. Do you expect authors to submit data?

We understand that many of our authors work with highly sensitive datasets, and are not asking for private data submission. If the dataset used is publicly available, there is no need to provide it. If the dataset is private, then the authors can submit a toy or simulated dataset to illustrate how the code works.

9. Who has access to my code?

Only the reviewers, Area Chair, and Senior Area Chair assigned to your paper will have access to your code. We will instruct them to keep the code submissions confidential (just like the paper submissions) and to delete all code submissions from their machines at the end of the review cycle. Please note that code submission is also completely optional.

10. I would like to revise my code/add code during author feedback. Is this permitted?

Unfortunately, no. But please remember that code submission is entirely optional.

The detailed FAQ as well as other Author and Style instructions are available here.

Kamalika Chaudhuri and Ruslan Salakhutdinov
ICML 2019 Program Chairs

12/3/2017

Vowpal Wabbit 8.5.0 & NIPS tutorial

Yesterday, I tagged VW version 8.5.0 which has many interactive learning improvements (both contextual bandit and active learning), better support for sparse models, and a new baseline reduction which I’m considering making a part of the default update rule.
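
If you want to try the contextual bandit path quickly, here is a minimal sketch using the VW Python bindings (pip install vowpalwabbit). The --cb flag and the action:cost:probability example format are standard VW, but the binding class names have shifted across versions, so treat the exact spelling as approximate.

```python
from vowpalwabbit import pyvw

# --cb 4: contextual bandit learning over 4 actions from logged data.
vw = pyvw.vw("--cb 4 --quiet")

# Logged examples: chosen_action:observed_cost:logged_probability | features
vw.learn("1:2.0:0.5 | user_a time_morning")
vw.learn("3:0.5:0.25 | user_b time_evening")

# Predict the lowest-cost action for a fresh context.
print(vw.predict("| user_a time_evening"))
```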

If you want to know the details, we’ll be doing a mini-tutorial during the Friday lunch break at the Extreme Classification workshop at NIPS. Please join us if interested.

Edit: also announced at the Learning Systems workshop

7/11/2016

The Multiworld Testing Decision Service

We made a tool that you can use. It is the first general purpose reinforcement-based learning system :-)

Reinforcement learning is much discussed these days, with successes like AlphaGo. Wouldn’t it be great if Reinforcement Learning algorithms could easily be used to solve all reinforcement learning problems? But there is a well-known problem: it’s very easy to create natural RL problems for which all standard RL algorithms (epsilon-greedy Q-learning, SARSA, etc…) fail catastrophically. That’s a serious limitation which both inspires research and which, I suspect, many people need to learn the hard way.

Removing the credit assignment problem from reinforcement learning yields the Contextual Bandit setting which we know is generically solvable in the same manner as common supervised learning problems. I know of about a half-dozen real-world successful contextual bandit applications typically requiring the cooperation of engineers and deeply knowledgeable data scientists.

Can we make this dramatically easier? We need a system that explores over appropriate choices with logging of features, actions, probabilities of actions, and outcomes. These must then be fed into an appropriate learning algorithm which trains a policy and then deploys the policy at the point of decision. Naturally, this is what we’ve done and now it can be used by anyone. This drops the barrier to use down to: “Do you have permissions? And do you have a reasonable idea of what a good feature is?”
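
To make that loop concrete, here is a minimal sketch of the explore-and-log step in Python. The policy function and the record format are hypothetical illustrations for this post, not the actual Decision Service API.

```python
import random

def choose_action(policy, context, num_actions, epsilon=0.1):
    """Epsilon-greedy exploration that also returns the probability of the
    chosen action; logging that propensity is what makes later off-policy
    learning sound."""
    greedy = policy(context)
    if random.random() < epsilon:
        action = random.randrange(num_actions)
    else:
        action = greedy
    # Probability with which *this* action was chosen under epsilon-greedy.
    prob = epsilon / num_actions + (1.0 - epsilon if action == greedy else 0.0)
    return action, prob

def record(log, context, action, prob, reward):
    # Every decision is logged as (context, action, probability, outcome);
    # outcomes typically arrive later and are joined to the decision record.
    log.append({"x": context, "a": action, "p": prob, "r": reward})
```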

A key foundational idea is Multiworld Testing: the capability to evaluate large numbers of policies mapping features to actions in a manner exponentially more efficient than standard A/B testing. This is used pervasively in the Contextual Bandit literature, and you can see it in action in the system we’ve made at Microsoft Research (a minimal sketch of the estimator behind it appears after the list below). The key design principles are:

  1. Contextual Bandits. Many people have tried to create online learning systems that do not take into account the biasing effects of decisions. These fail near-universally. For example, they might be very good at predicting what was shown (and hence clicked on) rather than what should be shown to generate the most interest.
  2. Data Lifecycle support. This system supports the entire process of data collection, joining, learning, and deployment. Doing this eliminates many stupid-but-killer bugs that I’ve seen in practice.
  3. Modularity. The system decomposes into pieces: exploration library, client library, online learner, join server, etc… because I’ve seen too many cases where the pieces are useful but the system is not.
  4. Reproducibility. Everything is logged in a fashion which makes online behavior reproducible offline. Consequently, the system is debuggable and hence improvable.
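
To see why logged propensities buy you policy evaluation, here is the simplest version of the trick: an inverse propensity score estimator over log records like those in the earlier sketch. This is a deliberately simplified illustration, not the system’s actual estimator.

```python
def ips_value(policy, log):
    """Estimate the average reward a candidate policy would have earned on
    logged exploration data, reweighting by the logged propensities.
    Unbiased whenever every action had a nonzero chance of being chosen."""
    total = 0.0
    for rec in log:
        if policy(rec["x"]) == rec["a"]:  # would the candidate have agreed?
            total += rec["r"] / rec["p"]
    return total / len(log)
```

The same log can score any number of candidate policies, which is where the exponential advantage over running one A/B test per policy comes from.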

The system we’ve created is open source, with system components in mwt-ds and the core learning algorithms in Vowpal Wabbit. Used end to end, it enables a fully automatic, causally sound learning loop for contextual control of a small number of actions. This is strongly scalable; for example, a version of this is in use for personalized news on MSN. It can be either low-latency (with a client-side library) or cross-platform (with a JSON REST web interface). Advanced exploration algorithms are available to enable better exploration strategies than simple epsilon-greedy baselines. The system autodeploys into a chosen Azure account with a baseline cost of about $0.20/hour. The autodeployment takes a few minutes, after which you can test or use the system as desired.

This system is open source and there are many ways for people to help if they are interested. For example, support for the client-side library in more languages, support of other learning algorithms & systems, better documentation, etc… are all obviously useful.

Have fun.

11/29/2015

CNTK and Vowpal Wabbit tutorials at NIPS

Both CNTK and Vowpal Wabbit have pirate tutorials at NIPS. The CNTK tutorial is 1 hour during the lunch break of the Optimization workshop while the VW tutorial is 1 hour during the lunch break of the Extreme Multiclass workshop. Consider dropping by either if interested.

CNTK is a deep learning system begun by the speech people who started the deep learning craze, and it has grown into a more general, platform-independent deep learning system. It has various useful features, the most interesting of which is perhaps efficient scalable training. Using GPUs with allreduce and one-bit SGD, it achieves both high efficiency and scalability over many more GPUs than could ever fit into a single machine. This capability is unique amongst open deep learning codebases, so everything else looks nerfed in comparison. CNTK was released in April, so this is the first chance for many people to learn about it. See here for more details.
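
For a flavor of the one-bit SGD idea (due to Seide et al.), here is a toy numpy sketch of sign quantization with error feedback. CNTK’s real implementation is more refined and fused with the allreduce, so read this as an illustration of the principle only.

```python
import numpy as np

def one_bit_compress(grad, residual):
    """Compress a gradient tensor to one bit per entry (its sign, times a
    shared scale), carrying the quantization error forward so it gets
    corrected on later steps instead of being lost."""
    g = grad + residual               # fold in the error from the last step
    scale = np.mean(np.abs(g))        # one shared magnitude per tensor
    quantized = np.sign(g) * scale    # effectively 1 bit per entry
    return quantized, g - quantized   # compressed gradient, new residual
```

The compressed tensors are what the allreduce exchanges, cutting gradient communication roughly 32-fold relative to float32.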

The Vowpal Wabbit tutorial just focuses on what is new this year.

  1. The learning to search framework has greatly matured and is now easily used to solve ad hoc joint (structured) prediction problems. The ICML tutorial covers the algorithms and analysis, so this is about using the system; a toy sketch of the Python hook appears after this list.
  2. VW has also become the modeling element of a larger system (called the Decision Service) which gathers data and applies Contextual Bandit learning to it. This is now generally usable, and is the first general purpose system of this sort.
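
As a taste of learning to search from Python, here is a part-of-speech-style sequence labeler roughly following the demo in the VW repository; the pyvw hook API has changed across versions, so exact names and signatures here are approximate.

```python
from vowpalwabbit import pyvw

class SequenceLabeler(pyvw.SearchTask):
    """Tag each word in a sentence, conditioning on the previous prediction."""
    def __init__(self, vw, sch, num_actions):
        pyvw.SearchTask.__init__(self, vw, sch, num_actions)
        sch.set_options(sch.AUTO_HAMMING_LOSS | sch.AUTO_CONDITION_FEATURES)

    def _run(self, sentence):  # sentence is a list of (tag, word) pairs
        output = []
        for n, (tag, word) in enumerate(sentence):
            with self.vw.example({'w': [word]}) as ex:
                pred = self.sch.predict(examples=ex, my_tag=n + 1,
                                        oracle=tag, condition=[(n, 'p')])
                output.append(pred)
        return output

vw = pyvw.vw("--search 4 --search_task hook --quiet")
labeler = vw.init_search_task(SequenceLabeler)
labeler.learn([[(1, 'the'), (2, 'dog'), (3, 'ran')]])  # tags are 1-based
```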

12/6/2014

Vowpal Wabbit 7.8 at NIPS

I just created Vowpal Wabbit 7.8, and we are planning an increasingly less heretical follow-up tutorial during the non-“ski break” at the NIPS Optimization workshop. Please join us if interested.

I always feel like things are going slowly, but over the last year there have been many changes overall. Notes for 7.7 are here. Since then, there are several areas of improvement as well as generalized bug fixes and refactoring.

  1. Learning to Search: Hal completely rewrote the learning to search system, enough that the numbers here are looking obsolete. Kai-Wei has also created several advanced applications for entity-relation and dependency parsing which are promising.
  2. Languages: Hal also created a good Python library, which includes callbacks for learning to search; a toy example appears after this list. You can now develop advanced structured prediction solutions in a nice language. Jonathan Morra also contributed an initial Java interface.
  3. Exploration: The contextual bandit subsystem now allows evaluation of an arbitrary policy, and the exploration code is now factored out into an independent library (principally by Luong with help from Sid and Sarah). This is critical for real applications because randomization must happen at the point of decision.
  4. Reductions: The learning reductions subsystem has continued to mature, although the perfectionist in me is still dissatisfied. As a consequence, it’s now pretty easy to program new reductions, and the efficiency of these reductions has generally improved. Several new ones are cooking.
  5. Online Learning: Alekh added an online SVM implementation based on LaSVM. This is known to parallelize well via the para-active approach.
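
As a taste of the Python interface, here is a toy regression example; the package layout shown follows later releases of the vowpalwabbit bindings, so it may not match the 7.8-era module exactly.

```python
from vowpalwabbit import pyvw

model = pyvw.vw("--quiet")
# VW's native text format: label, then a bar, then named features.
model.learn("1.0 | price:0.23 sqft:0.25 age:0.05")
model.learn("0.0 | price:0.92 sqft:0.15 age:0.85")
print(model.predict("| price:0.46 sqft:0.40 age:0.10"))
```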

This project has grown quite a bit: about 30 different people have contributed to VW since the last release, and there is now a VW meetup (December 8th!) in the Bay Area that I wish I could attend.
