Randomized experimentation

One good thing about doing machine learning at present is that people actually use it! The back-ends of many systems we interact with on a daily basis are driven by machine learning. In most such systems, as users interact with the system, it is natural for the system designer to wish to optimize the models under the hood over time, in a way that improves the user experience. To ground the discussion a bit, let us consider the example of an online portal that is trying to present interesting news stories to its users. A user comes to the portal and, based on whatever information the portal has on the user, it recommends one (or more) news stories. The user chooses to read the story or not, and life goes on. Naturally the portal wants to better tailor the stories it displays to the users' taste over time, which can be observed if users start to click on the displayed stories more often.

A natural idea would be to use the past logs and train a machine learning model which prefers the stories that users click on and discourages the stories which are avoided by the users. This sounds like a simple classification problem, for which we might use an off-the-shelf algorithm. This is indeed done reasonably often, and the offline logs suggest that the newly trained model will result in a lot more clicks than the old one. The new model is deployed, only to find that its performance is not as good as hoped, or even worse than before! What went wrong? The natural reaction is typically that (a) the machine learning algorithm needs to be improved, or (b) we need better features, or (c) we need more data. Alas, in most of these cases, the right answer is (d) none of the above. Let us see why this is true through a simple example.

Imagine a simple world where some of our users are from New York and others are from Seattle. Some of our news stories pertain to finance, and others pertain to technology. Let us further imagine that the probability of a click (henceforth CTR for clickthrough rate) on a news article based on city and subject has the following distribution:

City        Finance CTR   Tech CTR
New York    1             0.6
Seattle     0.4           0.79

Table 1: True (unobserved) CTRs

Of course, we do not have this information ahead of time while designing the system, so our starting system recommends articles according to some heuristic rule. Imagine that we use the rule:

  • New York users get Tech stories, Seattle users get Finance stories.

Now we collect the click data according to this system for a while. As we obtain more and more data, we obtain increasingly accurate estimates of the CTR for Tech stories and NY users, as well as Finance stories and Seattle users (0.6 and 0.4 resp.). However, we have no information on the other two combinations. So if we train a machine learning algorithm to minimize the squared loss between predicted CTR on an article and observed CTR, it is likely to predict the average of observed CTRs (that is 0.5) in the other two blocks. At this point, our guess looks like:

 

City        Finance CTR        Tech CTR
New York    1 / ? / 0.5        0.6 / 0.6 / 0.6
Seattle     0.4 / 0.4 / 0.4    0.79 / ? / 0.5

Table 2: True / observed / estimated CTRs
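
To make this concrete, here is a minimal simulation sketch in Python (my own illustration, assuming the Table 1 CTRs and equal traffic from both cities): logging under the initial rule and filling the unobserved cells with the overall observed mean reproduces the estimates of Table 2.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# True (unobserved) CTRs from Table 1.
TRUE_CTR = {("NY", "Finance"): 1.0, ("NY", "Tech"): 0.6,
            ("Seattle", "Finance"): 0.4, ("Seattle", "Tech"): 0.79}

def initial_policy(city):
    # The heuristic rule: NY users get Tech stories, Seattle users get Finance stories.
    return "Tech" if city == "NY" else "Finance"

# Collect click logs under the deterministic rule.
shows, clicks = defaultdict(int), defaultdict(int)
for _ in range(100_000):
    city = rng.choice(["NY", "Seattle"])
    story = initial_policy(city)
    shows[(city, story)] += 1
    clicks[(city, story)] += rng.random() < TRUE_CTR[(city, story)]

# Per-cell empirical CTR; cells with no data fall back to the overall observed mean,
# mimicking a squared-loss learner that has no signal for those cells.
overall = sum(clicks.values()) / sum(shows.values())
for cell in TRUE_CTR:
    estimate = clicks[cell] / shows[cell] if shows[cell] else overall
    print(cell, "observed" if shows[cell] else "imputed", round(estimate, 2))
```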

Note that this would be the case even with infinite data and an all-powerful learner, so machine learning is not to be faulted in any way here. Given these estimates, we naturally realize that showing Finance articles to Seattle users was a mistake, and switch to Tech. But Tech is also looking pretty good in NY, and we stick with it. Our new policy is:

  • Both NY and Seattle users get Tech articles.

Running the new system for a while, we will fix the erroneous estimate of the Tech CTR in Seattle (that is, up from 0.5 to 0.79). But we still have no signal that makes us prefer Finance over Tech in NY. Indeed, even with infinite data, the system will be stuck with this suboptimal choice, and our CTR estimates will look something like:

City        Finance CTR        Tech CTR
New York    1 / ? / 0.59       0.6 / 0.6 / 0.6
Seattle     0.4 / 0.4 / 0.4    0.79 / 0.79 / 0.79

Table 3: True / observed / estimated CTRs

We can now assess the earlier claims:

  1. More data does not help: the observed and true CTRs already match wherever we are collecting data.
  2. A better learning algorithm does not help: the predicted and observed CTRs already coincide wherever we are collecting data.
  3. Better data does help!! We should not have a blank cell in the observed column.

This seems simple enough to fix though. We should have really known better than to completely omit observations in one cell of our table. With good intentions, we decide to collect data in all cells. We choose to use the following rule:

  • Seattle users get Tech stories during the day and Finance stories at night.
  • Similarly, NY users get Tech stories during the day and Finance stories at night.

We are now collecting data in each cell, but we find that our estimates still lead us to a suboptimal policy. Further investigation might reveal that users are more likely to read Finance stories during the day, when the markets are open. So when we only display Finance stories at night, we underestimate the Finance CTR and end up with the wrong estimates. Realizing the error of our ways, we might try to fix this again, only to run into another problem, and so on. A small simulation of this failure is sketched below.
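
The sketch below (in Python; the day/night CTR numbers are hypothetical and the city dimension is dropped for brevity) shows the confounding: time of day determines both which story is shown and how clickable Finance stories are, so the logged Finance CTR is badly biased downwards.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical CTRs for a single city: Finance does much better during the day,
# when the markets are open. These numbers are made up purely for illustration.
CTR = {("Finance", "day"): 0.9, ("Finance", "night"): 0.4,
       ("Tech", "day"): 0.6, ("Tech", "night"): 0.6}

def policy(time_of_day):
    # The "collect data in all cells" rule: Tech during the day, Finance at night.
    return "Tech" if time_of_day == "day" else "Finance"

shows = {"Finance": 0, "Tech": 0}
clicks = {"Finance": 0, "Tech": 0}
for _ in range(100_000):
    time_of_day = rng.choice(["day", "night"])
    story = policy(time_of_day)
    shows[story] += 1
    clicks[story] += rng.random() < CTR[(story, time_of_day)]

# The logged Finance CTR reflects night-time traffic only, so it underestimates
# the all-day Finance CTR of 0.65 (under equal day/night traffic).
print("logged Finance CTR estimate:", round(clicks["Finance"] / shows["Finance"], 2))
print("logged Tech CTR estimate   :", round(clicks["Tech"] / shows["Tech"], 2))
```

With these hypothetical numbers, Finance is actually the better story on average (0.65 versus 0.6), but the log makes Tech look better, so the learned policy again lands in the wrong place.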

The issue we have discovered above is that of confounding variables. There is a lot of wonderful work and there are many techniques that can be used to circumvent confounding variables in experimentation. Here, I mention the simplest and perhaps most versatile of them: randomization. The idea is that instead of recommending stories to users according to a fixed deterministic rule, we allow for different articles to be presented to the user according to some distribution. This distribution does not have to be uniform. In fact, good randomization would likely focus on plausibly good articles so as to not degrade the user experience. However, as long as we add sufficient randomization, we can then obtain consistent counterfactual estimates of quantities from our experimental data. There is a growing literature on how to do this well. A nice paper which covers some of these techniques and provides an empirical evaluation is http://arxiv.org/abs/1103.4601. A more involved example in the context of computational advertising at Microsoft is discussed in http://leon.bottou.org/papers/bottou-jmlr-2013.
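
To give a flavor of how such counterfactual estimates work, here is a minimal sketch of inverse propensity scoring, one of the standard techniques covered in the references above (the specific propensities and the Python code are my own illustration using the Table 1 numbers): because the randomized logging policy puts nonzero probability on every story, reweighting clicks by one over the propensity of the shown story gives a consistent estimate of the CTR a different, deterministic policy would achieve.

```python
import numpy as np

rng = np.random.default_rng(2)

# True CTRs from Table 1 (unknown to the system).
TRUE_CTR = {("NY", "Finance"): 1.0, ("NY", "Tech"): 0.6,
            ("Seattle", "Finance"): 0.4, ("Seattle", "Tech"): 0.79}
STORIES = ["Finance", "Tech"]

def logging_propensities(city):
    # Randomized logging policy: favor the currently preferred story (Tech),
    # but keep 20% probability on the alternative. It need not be uniform.
    return {"Finance": 0.2, "Tech": 0.8}

# Collect a randomized log of (city, shown story, propensity, click).
log = []
for _ in range(200_000):
    city = rng.choice(["NY", "Seattle"])
    p = logging_propensities(city)
    story = rng.choice(STORIES, p=[p["Finance"], p["Tech"]])
    click = float(rng.random() < TRUE_CTR[(city, story)])
    log.append((city, story, p[story], click))

def target_policy(city):
    # A different, deterministic policy we would like to evaluate offline.
    return "Finance" if city == "NY" else "Tech"

# Inverse propensity scoring: keep only the logged impressions where the shown story
# matches the target policy, and reweight their clicks by 1/propensity.
ips_estimate = sum((story == target_policy(city)) * click / prop
                   for city, story, prop, click in log) / len(log)
true_value = 0.5 * TRUE_CTR[("NY", "Finance")] + 0.5 * TRUE_CTR[("Seattle", "Tech")]
print("IPS estimate of the target policy's CTR:", round(ips_estimate, 3))
print("true value                             :", round(true_value, 3))
```

The same randomized log can be reused to evaluate many candidate policies offline, which is exactly what the deterministic logging rules above could not support.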

 

 

Fall Machine Learning Events

Many Machine Learning related events are coming up this fall.

  1. September 9, abstracts for the New York Machine Learning Symposium are due. Send a 2 page pdf, if interested, and note that we:
    1. widened submissions to be from anybody rather than students.
    2. set aside a larger fraction of time for contributed submissions.
  2. September 15, there is a machine learning meetup, where I’ll be discussing terascale learning at AOL.
  3. September 16, there is a CS&Econ day at New York Academy of Sciences. This is not ML focused, but it’s easy to imagine interest.
  4. September 23 and later NIPS workshop submissions start coming due. As usual, there are too many good ones, so I won’t be able to attend all those that interest me. I do hope some workshop makers consider ICML this coming summer, as we are increasing to a 2 day format for you. Here are a few that interest me:
    1. Big Learning is about dealing with lots of data. Abstracts are due September 30.
    2. The Bayes Bandits workshop. Abstracts are due September 23.
    3. The Personalized Medicine workshop
    4. The Learning Semantics workshop. Abstracts are due September 26.
    5. The ML Relations workshop. Abstracts are due September 30.
    6. The Hierarchical Learning workshop. Challenge submissions are due October 17, and abstracts are due October 21.
    7. The Computational Tradeoffs workshop. Abstracts are due October 17.
    8. The Model Selection workshop. Abstracts are due September 24.
  5. October 16-17 is the Singularity Summit in New York. This is for the AIists and only peripherally about ML.
  6. October 16-21 is a Predictive Analytics World in New York. As machine learning goes industrial, we see industrial-style conferences rapidly developing.
  7. October 21, there is the New York ML Symposium. In addition to what’s there, Chris Wiggins is looking into setting up a session for startups and those interested in them to get to know each other, as last year.
  8. December 16-17, NIPS workshops in Granada, Spain.

What does Watson mean?

Watson convincingly beat the best champion Jeopardy! players. The apparent significance of this varies hugely, depending on your background knowledge about the related machine learning, NLP, and search technology. For a random person, this might seem like evidence of serious machine intelligence, while for people working on the system itself, it probably seems like a reasonably good assemblage of existing technologies with several twists to make the entire system work.

Above all, I think we should congratulate the people who managed to put together and execute this project—many years of effort by a diverse set of highly skilled people were needed to make this happen. In academia, it’s pretty difficult for one professor to assemble that quantity of talent, and in industry it’s rarely the case that such a capable group has both a worthwhile project and the support needed to pursue something like this for several years before success.

Alina invited me to the Jeopardy watching party at IBM, which was pretty fun, and it gave me a chance to talk to several people, principally Gerry Tesauro (2nd from the right). It’s cool to see people asking for autographs 🙂

I wasn’t surprised to see Watson win. Partly, this is simply because when a big company does a publicity stunt like this, it’s with a pretty solid expectation of victory. Partly, this is because I already knew that computers could answer trivia questions moderately well(*), so the question was just how far this could be improved. Gerry tells me that although Watson’s error rate is still significant, one key element is the ability to estimate with high accuracy when they can answer with high accuracy. Gerry also tells me the Watson papers will be coming out later this summer, with many more details.

What happens next? I don’t expect the project to be shelved like Deep Blue was, for two reasons. The first is that there is clearly very substantial room for improvement, and the second is that having a natural language question/answering device that can quickly search and respond from large sets of text is obviously valuable. The first means that researchers are interested, and the second that the money to support them can probably be found. The history of textual entailment challenges is another, less centralized effort in about the same direction.

In the immediate future (next few years), applications in semi-open domains may become viable, particularly when a question/answer device knows when to answer “I don’t know”. Fully conversational speech recognition working in an open domain should take somewhat longer, because speech recognition software has additional error points, conversational systems aren’t so easy to come by, and in a fully open domain the error rates will be higher. Getting the error rate on questions down to the level that a human with access to the internet has difficulty beating is the tricky challenge which has not yet been addressed. It’s a worthy goal to work towards.

Many people believe in human exceptionalism, so when seeing a computer beat Jeopardy, they are surprised that humans aren’t exceptional there. We should understand that this has happened many times before, with chess and mathematical calculation being two areas where computers now dominate, but which were once thought to be the essence of intelligence by some. Similarly, it is not difficult to imagine automated driving (after all, animals can do it), gross object recognition, etc…

To avert surprise in the future, human exceptionalists should understand what the really hard things for an AI to do are. It’s important to understand that there are various levels of I in AI. A few I think about are:

  1. Animal Intelligence. The ability to understand your place in the world, navigate the world, and accomplish something. Some of these tasks are solved, but many others are not yet. This level implies that routine tasks can be automated. Automated driving, farming, factories, etc…
  2. Turing Test Intelligence. The ability to mimic a typical human well enough to fool a typical human in open conversation. Watson doesn’t achieve this, but the thrust of the research is in this direction, as open domain question answering is probably necessary for this. Nonroutine, noncreative tasks might be accomplished by the computer. Think of an automated secretary.
  3. Pandora’s box Intelligence. The ability to efficiently self-program in an open domain so as to continuously improve. At this level human exceptionalism fails, and it is difficult to predict what happens next.

So, serious evidence of (2) or (3) is what I watch for.

(*) About 10 years ago, I had a friend² (a friend of a friend) on WWTBAM who called the friend for help on a question, who typed the question and multiple choice answers into CMU’s Zephyr system, where a bot I made queried (question, answer) pairs on Google to discover which had the most web pages. It worked.

AI Safety

Dan Reeves introduced me to Michael Vassar who ran the Singularity Summit and educated me a bit on the subject of AI safety which the Singularity Institute has small grants for.

I still believe that interstellar space travel is necessary for long-term civilization survival, and that AI is necessary for interstellar space travel. On these grounds alone, we could judge that developing AI is much safer than not. Nevertheless, there is a basic reasonable fear, as expressed by some commenters, that AI could go bad.

A basic scenario starts with someone inventing an AI and telling it to make as much money as possible. The AI promptly starts trading in various markets to make money. To improve, it crafts a virus that takes over most of the world’s computers, using them as a surveillance network so that it can always make the right decision. The AI also branches out into any form of distance work, taking over the entire outsourcing process for all jobs that are entirely digital. To further improve, the AI invests a bit into robotics, creating automated manufacturing systems that produce all kinds of goods. Robot cars and construction teams complete the process, so that any human with money can order anything cheaply and quickly, but no jobs remain for humans.

At this point, the AI is stuck—it can eventually extract all the money from the economic system, and that’s all there is. But of course, it isn’t really stuck. It simply funds appropriate political campaigns so that in some country a measure passes granting the AI the right to make money, which it promptly does, mushrooming its wealth from trillions to the maximum number representable in all computers simultaneously. To remove this obstacle, the AI promptly starts making more computers on a worldwide scale until all available power sources are used up. To add more power, the AI starts a space program with beamed power. Unfortunately, it finds the pesky atmosphere an obstacle to space travel, so it chemically binds the atmosphere in the crust of the earth, allowing many Gauss guns to efficiently project material into space where solar sails are used for orbital positioning. This process continues, slowed perhaps by the need to cool the Earth’s core, until the earth and other viable rocky bodies in the solar system are discorporated into a Dyson sphere. Then, the AI goes interstellar with the same program.

Somewhere in this process, certainly by the time the atmosphere is chemically bound, all life on earth (except the AI, if you count it) is extinct. Furthermore, the AI, while intelligent by many measures, doesn’t seem to be accomplishing anything interesting.

One element of understanding AI safety seems to be understanding what an AI could do. Many people seem to ascribe arbitrary powers to any sort of superintelligence, making any constraints imposed on them ineffective. I don’t believe that’s the right approach—we should think of an AI as simply having much more ability to research, control, and manipulate large systems, all within the constraints of known physics.

Efforts to create safe AI go back to Asimov’s Three Laws of Robotics, which appears limited by the inability to encompass robotic warfare. The general problem is related to the wish problem: How do you specify a wish in a manner so that it can’t be misinterpreted? A cheap trick here is to add “… in a manner that I would consider acceptable” to the end of the wish. Applied to AI, this approach also has limits because any limit imposed by a person can and eventually will be removed by a person given sufficient opportunity.

Perhaps a complementary approach is shown by the game RISK, where it appears to be virtually impossible for one player to win if all other players play defensively (i.e. build up armies and only attack in response to a provoking attack). Applied to AI, the idea would be that we make many AIs programmed to behave well either via laws or wish tricks, with an additional element of aggressively enforcing this behavior in other AIs. Then, if any AI is corrupted, the other AIs, with substantially more aggregate resources, will discover and deal with the problem.

Certain elements are necessary for this approach to work. There must be multiple AIs, and (more importantly) the resources any one controls must be small compared to all, an extreme form of antimonopoly. Furthermore, the default must be that AIs are programmed to not harm or cause harm to humans, enforcing that behavior in other AIs. Getting the programming right is the hard part, and I’m not clear on how viable this is, or how difficult it is compared to simply creating an AI, which of course I haven’t managed.