ICML registration is live

Registration is here. I recommend registering early because there is a difficult-to-estimate(*) chance you will not be able to register later.

The program is shaping up and should be of interest. The 9 Tutorials(**), 4 Invited Speakers, and 23 Workshops are all chosen, with paper decisions due out in a couple weeks.

          Early   Full (after May 7)
Student    510     640
Regular    840    1050

These numbers are as aggressively low as the local chairs and I could make them while still sleeping at night. The prices are higher than I’d like (New York is expensive), but a bit lower than last year’s, particularly for students(***).

(*) Relevant facts:

  1. ICML 2016: submissions up 30% to 1300.
  2. NIPS 2015 in Montreal: 3900 registrations (way up from last year).
  3. NIPS 2016 is in Barcelona.
  4. ICML 2015 in Lille: 1670 registrations.
  5. KDD 2014 in NYC: registration closed at 3000, one week before the conference.

I tried to figure out how to set up a prediction market to estimate what will happen this year, but didn’t find an easy enough way to do that.
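For reference, here is a minimal sketch of one standard mechanism, Hanson’s logarithmic market scoring rule (LMSR); the attendance buckets and the liquidity parameter b below are invented for illustration, not anything we actually ran.

```python
import math

# A minimal Hanson LMSR market maker. The outcome buckets and the
# liquidity parameter b are invented for illustration.
class LMSRMarket:
    def __init__(self, outcomes, b=100.0):
        self.b = b                            # liquidity: higher b = slower-moving prices
        self.q = {o: 0.0 for o in outcomes}   # outstanding shares per outcome

    def cost(self):
        # C(q) = b * log(sum_i exp(q_i / b)); traders pay cost differences.
        return self.b * math.log(sum(math.exp(v / self.b) for v in self.q.values()))

    def price(self, outcome):
        # Instantaneous price = the market's current probability estimate.
        z = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        before = self.cost()
        self.q[outcome] += shares
        return self.cost() - before           # amount the trader pays

market = LMSRMarket(["<2000", "2000-3000", ">3000"])
paid = market.buy("2000-3000", 50)            # a bullish trade moves the price up
print(f"paid {paid:.2f}, new price {market.price('2000-3000'):.3f}")
```

The nice property of the cost-function form is that the market maker’s worst-case loss is bounded (by b times the log of the number of outcomes), so running one only requires a modest subsidy.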

(**) I kind of wish we could make up the titles. How about: “Go is Too Easy” and “My Neural Network is Deeper than Yours”?

(***) Sponsors are very generous and are mostly giving to defray student costs. Approximately every dollar of the difference between Regular and Student registration is due to company donations. Students should also note that some scholarship opportunities to defray costs will be announced soon.

AlphaGo is not the solution to AI

Congratulations are in order for the folks at Google DeepMind who have mastered Go.

However, some of the discussion around this seems like giddy overstatement. Wired says “machines have conquered the last games” and Slashdot says “we know now that we don’t need any big new breakthroughs to get to true AI”. The truth is nowhere close.

For Go itself, it’s been well known for a decade that Monte Carlo tree search (i.e., valuation by random playouts) is unusually effective in Go. Given this, it’s unclear whether the AlphaGo algorithm extends to other board games where MCTS does not work so well. Maybe? It will be interesting to see.
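To make “valuation by random playouts” concrete, here is a minimal sketch on a toy game (one-pile Nim, invented here purely for illustration); full MCTS adds a search tree with UCB-style selection on top of this core.

```python
import random

# Toy game: one-pile Nim, take 1-3 stones per turn, taking the last stone wins.
# The game is invented for illustration; the point is the playout estimator.

def random_playout(stones, to_move):
    """Play uniformly random moves to the end; return the winner (0 or 1)."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return to_move       # this player just took the last stone
        to_move = 1 - to_move
    return 1 - to_move           # already terminal: the previous mover won

def rollout_value(stones, player, to_move, n_playouts=1000):
    """Monte Carlo estimate of `player`'s win probability from this position."""
    wins = sum(random_playout(stones, to_move) == player for _ in range(n_playouts))
    return wins / n_playouts

def best_move(stones, player):
    # Each candidate move is scored purely by random-playout valuation.
    moves = range(1, min(3, stones) + 1)
    return max(moves, key=lambda m: rollout_value(stones - m, player, 1 - player))

print(best_move(5, player=0))    # prefers taking 1, leaving a bad position for the opponent
```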

Delving into existing computer games, the Atari results (see figure 3) are very fun but obviously unimpressive on about ¼ of the games. My hypothesis for why is that their solution does only local (epsilon-greedy style) exploration rather than global exploration, so it can only learn policies addressing either very short credit assignment problems or greedily accessible policies. Global exploration strategies are known to yield exponentially more efficient solutions in general for deterministic decision processes (1993), Markov Decision Processes (1998), and MDPs without modeling (2006).
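For illustration, here is a minimal sketch of what epsilon-greedy (local) exploration looks like, assuming a plain tabular value estimate Q; the names are hypothetical.

```python
import random

def epsilon_greedy_action(Q, state, actions, epsilon=0.05):
    """Local exploration: with probability epsilon take a uniformly random
    action, otherwise act greedily on the current value estimate Q (here a
    plain dict keyed by (state, action)). A reward lying k steps off the
    greedy path is only reached with probability ~epsilon^k, so long credit
    assignment chains are effectively never discovered. Global strategies
    instead track what is unknown (e.g., via visit counts or optimism) and
    deliberately plan to reach it."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
```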

The reason these global strategies are not used is that they are based on tabular learning rather than function fitting. That’s why I shifted to Contextual Bandit research after the 2006 paper. We’ve learned quite a bit there, enough to start tackling a Contextual Deterministic Decision Process, but that solution is still far from practical. Addressing global exploration effectively is only one of the significant challenges between what is well known now and what needs to be addressed for what I would consider a real AI.
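For concreteness, here is a minimal sketch of one round of the contextual bandit setting, with epsilon-greedy exploration and an inverse-propensity-weighted update; `policy`, `learner`, and `get_reward` are hypothetical stand-ins rather than any particular library’s API.

```python
import random

def contextual_bandit_round(policy, learner, context, actions, get_reward,
                            epsilon=0.1):
    """One round of the contextual bandit setting: observe a context, choose
    an action with a little exploration, observe only that action's reward,
    and feed the learner an inverse-propensity-weighted example so the update
    is unbiased. `policy`, `learner`, and `get_reward` are stand-ins."""
    greedy = policy(context)
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = greedy
    # Propensity of the action actually chosen under this exploration rule.
    prob = epsilon / len(actions) + (1.0 - epsilon if action == greedy else 0.0)
    reward = get_reward(action)               # other actions' rewards stay unobserved
    learner.update(context, action, reward / prob)
    return action, reward
```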

This is generally understood by people working on these techniques, but it seems to be getting lost in translation to public news reports. That’s dangerous because it leads to disappointment. The field will be better off without an overpromise/bust cycle, so I would encourage people to maintain and communicate a balanced view of successes and their extent. Mastering Go is a great accomplishment, but it is quite far from everything.

Edit: Further discussion here, CACM, here, and KDNuggets.

Learning to avoid not making an AI

Building an AI is one of the most subtle things people have ever attempted, with strong evidence provided by the durability of the problem despite attempts by many intelligent people. In comparison, putting a man on the moon was a relatively straightforward technical problem, with little confusion about the nature of the solution.

Building an AI is almost surely a software problem, since the outer limit for the amount of computation in the human brain is only 10^17 ops/second (10^11 neurons with 10^4 connections each, operating at 10^2 Hz), which is within reach of known systems.

People tend to mysticize the complexity of unknown things, so the “real” amount of computation required for a human scale AI is likely far less—perhaps even within reach of a 10^13 flop GPU.
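Spelling out the arithmetic behind these estimates:

```python
neurons = 1e11      # ~number of neurons in a human brain
synapses = 1e4      # ~connections per neuron
rate_hz = 1e2       # ~upper-end firing rate in Hz
brain_ops = neurons * synapses * rate_hz
print(f"outer limit: {brain_ops:.0e} ops/second")                   # 1e+17

gpu_flops = 1e13    # a ~10 teraflop GPU
print(f"outer-limit gap to one GPU: {brain_ops / gpu_flops:.0e}x")  # 1e+04
```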

Since building an AI is a software problem, the problem is complexity in a much stronger sense than for most problems. The effective approach for dealing with complexity is to use modularity. But which modularity? Many kinds of modularity have been proposed, often mutually incompatible and obviously incomplete. The moment you try to decompose the problem into smaller problems is when the difficulty of the solution is confronted.

For guidance, we can consider what works and what does not. This is tricky, because the definition of AI is less than clear. In my mind, I qualify AI by degrees of intelligence: a human-level AI is one which can accomplish the range of tasks which a human can. This includes learning complex things (language, reasoning, etc.) from a much more basic state.

The definition seems natural, but it is not easily tested via the famous Turing Test. For example, you could imagine a Cyc-backed system passing a Turing Test. Would that be a human-level AI? I’d argue ‘no’, because the reliance on a human-crafted ontology indicates an inability to discover and use new things effectively. There is a good science fiction story to write here, where a Cyc-based system takes over civilization but then gradually falls apart as new relevant concepts simply cannot be grasped.

Instead of AI facsimiles, learning approaches seem to be the key to success. If a system learned from basic primitives how to pass the Turing Test, I would naturally consider it much closer to human-level AI.

We have seen the tension between facsimile design and learning play out many times in approaches to AI activities, with the facsimile design approach winning first, but not always last. Consider Game Playing, Driving, Vision, Speech, and Chat-bots. At this point the facsimile approach has been overwhelmed by learning in Vision and Speech, while in Game Playing, Driving, and Chat-bots the situation is less clear.

I expect facsimile approaches to be one of the greater sources of misplaced effort in AI, and that will continue to be an issue, because it’s such a natural effort trap: why not simply make the system do what you want it to do? Making a system that works by learning to do things seems a rather indirect route that surely takes longer and requires more effort. The answer, of course, is that a system which learns what might otherwise be designed can learn other things as needed, making it inherently more robust.

New York Machine Learning Deadlines

There are a number of machine learning related paper deadlines that may be of interest.

January 29 (abstract) for the March 4 New York ML Symposium. Register early because NYAS can only fit 300.
January 27 (abstract) / February 2 (paper) for IJCAI, July 9-15. The biggest AI conference.
February 5 (paper) for ICML, June 19-24. Nina and Kilian have 850 well-vetted reviewers. Marek and Peder have increased space to allow 3K people.
February 12 (paper) for COLT, June 23-26. Vitaly and Sasha are program chairs.
February 12 (proposal) for ICML workshops, June 23-24. Fei and Ruslan are the workshop chairs. I really like workshops.
February 19 (proposal) for ICML tutorials, June 19. Bernhard and Alina have invited a few tutorials already but are saving space for good proposals as well.
March 1 (paper) for UAI, June 25-29. Jersey City isn’t quite New York, but it’s close enough 🙂
May ~2 (paper) for ICML workshops, June 23-24. The exact deadline varies with the workshop.

Interesting things at NIPS 2015

NIPS is getting big. If you think of each day as a separate conference crammed into a single day, you get a good flavor of things. Here are some of the interesting things I saw.

Two other notable events happened during NIPS.

  1. The ImageNet challenge and MS COCO results came out. The first represents a significant improvement over previous years (details here).
  2. The OpenAI initiative started. Concerned billionaires created a billion-dollar endowment to advance AI in a public (not private) way. What will be done better than the NSF (which has a similar(ish) goal)? I can think of many possibilities.

See also Seb’s post.