Vowpal Wabbit 8.5.0 & NIPS tutorial

Yesterday, I tagged VW version 8.5.0, which has many interactive learning improvements (both contextual bandit and active learning), better support for sparse models, and a new baseline reduction which I’m considering making a part of the default update rule.
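For a quick flavor of the new features, here is a minimal command-line sketch. The flag names below are my best reading of this release rather than verified documentation, so check vw --help before relying on them:

    vw --cb_explore 4 -d cb_train.txt -f cb.model   # contextual bandit learning with exploration over 4 actions
    vw --active -d train.txt                        # active learning
    vw --sparse_weights -d train.txt                # sparse model support
    vw --baseline -d train.txt                      # the new baseline reduction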

If you want to know the details, we’ll be doing a mini-tutorial during the Friday lunch break at the Extreme Classification workshop at NIPS. Please join us if interested.

Edit: also announced at the Learning Systems workshop

The Real World Interactive Learning Tutorial

Alekh and I have been polishing the Real World Interactive Learning tutorial for ICML 2017 on Sunday.

This tutorial should be of pretty wide interest. For data scientists, we are crossing a threshold into easy use of interactive learning, while for researchers interactive learning is plausibly the most important frontier of understanding. Great progress has been made on both the theory and especially on practical systems since an earlier NIPS 2013 tutorial.

Please join us if you are interested 🙂

ICML is changing its constitution

Andrew McCallum has been leading an initiative to update the bylaws of IMLS, the organization which runs ICML. I expect most people aren’t interested in such details. However, the bylaws change rarely and can have an impact over a long period of time, so they do have some real importance. I’d like to hear comments from anyone with a particular interest before this year’s ICML.

In my opinion, the most important aspect of the bylaws is the at-large election of board members, which is preserved. Most of the changes between the old and new versions are aimed at better defining roles, committees, etc., to leave IMLS/ICML better organized.

Anyways, please comment if you have a concern or thoughts.

Machine Learning the Future Class

This spring, I taught a class on Machine Learning the Future at Cornell Tech covering a number of advanced topics in machine learning, including online learning, joint (structured) prediction, active learning, contextual bandit learning, logarithmic time prediction, and parallel learning. Each of these classes was recorded from my laptop via Zoom, and I just uploaded the recordings to Youtube.

In some ways, this class is a followup to the large scale learning class I taught with Yann LeCun 4 years ago. The videos for that class were taken down(*), so these lectures both update and replace the shared subjects and also cover some new ones.

Much of this material is fairly close to research so to assist other machine learning lecturers around the world in digesting the material, I’ve made all the source available as well. Feel free to use and improve.

(*) The NYU policy changed so that students could not be shown in classroom videos.

Fact over Fiction

Politics is a distracting affair which I generally believe is best to stay out of if you want to be able to concentrate on research. Nevertheless, the US presidential election looks like something that directly politicizes the idea and process of research by damaging the association of scientists & students as well as funding for basic research, and by creating political censorship.

A core question here is: What to do? Today’s March for Science is a good step, but I’m not sure it will change many minds. Unlike most scientists, I grew up in a county (Linn) which voted overwhelmingly for Trump. As a consequence, I feel like I must translate the mindset a bit. For the median household left behind over my lifetime, a march by relatively affluent people protesting the government cutting expenses will not elicit much sympathy. Discussion about the overwhelming value of science may also fall on deaf ears simply because they have not seen the economic value personally. On the contrary, they have seen their economic situation stay flat or worsen for 4 decades with little prospect of things getting better. Similarly, I don’t expect history lessons on anti-intellectualism to make much of a dent. Fundamentally, scientists and science fans are a small fraction of the population.

What’s needed is a campaign that achieves broad agreement across the population and which will help. One of the roots of the March for Science is a belief in facts over fiction, which may have the requisite scope. In particular, there seems to be a good case that the right to engage in mass disinformation has been enormously costly to the United States and is now a significant threat to civil stability. Internally, disinformation is a preferred tool for starting wars or for wealthy companies to push a deadly business model. Externally, disinformation is now being actively used to sway elections and is self-funding.

The election outcome is actually less important than the endemic disagreement that disinformation creates. When people simply believe in different facts about the world, how can you expect them to agree? There probably are some good uses of mass disinformation somewhere, but I’m extremely skeptical the value exceeds the cost.

Is opposition to mass disinformation broad enough that it makes a good organizing principle? If mass disinformation were eliminated or greatly reduced, it would be of enormous value to society, particularly to the disinformed. It would not address the fundamental economic stagnation of the median household in the United States, but it would remove a significant threat to civil society, which may be necessary for such progress. Given a choice between the right to mass disinform and democracy, I choose democracy.

A real question is “how”? We are discussing an abridgment of freedom of speech, so from a legal perspective the basis must rest on the balance between freedom of speech and other constitutional rights. Many abridgements already exist, like censuring an unnecessary yell of “fire” in a crowded theater.

Voluntary efforts (as Facebook and Twitter have undertaken) are a start, but they seem unlikely to go far enough, as many other “news” organizations have made no such commitments. A system where companies commit to informing over disinforming and in return become both more trusted and simultaneously liable for disinformation damages (due to the disinformed) as assessed by civil law may make sense. Right now, organizations are mostly free to engage in disinformation as long as it is not directed at an individual, where libel laws apply. Penalizing an organization for individual mistakes seems absurd, but a pattern of errors backed by scientific surveys verifying an anomalously misinformed status of viewers/readers/listeners is cause for action. Getting this right is obviously tricky: we want a solution that a real news organization with an existing memetic immune system prefers to the status quo because it curbs competitors that disinform. At the same time, there must be enough teeth to make disinformation uneconomical or the problem only grows.

Should disinformation have criminal penalties? One existing approach here uses RICO laws to counter disinformation from tobacco companies. Reading the history, this took an amazing amount of time, enough that it was ineffective for a generation. It seems plausible that an act directly addressing disinformation may be helpful.

What about technical solutions? These seem necessary for success, perhaps with changes to law incentivizing them. It’s important to understand that something going before the courts is inherently slow, particularly because courts tend to be deeply overloaded. A significant increase in the number of cases going before courts makes an approach nonviable in practice.

Would we regret this? There is a long history of governments abusing laws to censor inconvenient news sources, so caution is warranted. Structuring new laws in a manner such that they cannot be abused is an important consideration. It is obviously important to leave satire fully intact, which seems entirely possible by making the fact that it is satire unmistakable. This entire discussion is also not relevant to individuals speaking to other individuals; that is not what creates the problem.

Is this possible? It might seem obvious that mass disinformation should be curbed, but there should be no doubt that powerful forces will work to preserve mass disinformation by subtle and unethical means.

Overall, I fundamentally believe that people in a position to inform or disinform have a responsibility to inform. If they don’t want that responsibility, then they should abdicate the position to someone who does, similar in effect to the proposed fiduciary rule for investments. I’m open to any approach towards achieving this.

Edit: also at CACM.