An AI Miracle Malcontent

The stark success of OpenAI’s GPT4 model surprised me, shifting my view from “really good autocomplete” (roughly in line with intuitions here) to a dialog agent exhibiting a significant scope of reasoning and intelligence. Some of the MSR folks did a fairly thorough study of capabilities which seems like a good reference. I think of GPT4 as an artificial savant: super-John capable in some language-centric tasks like style and summarization, with impressive yet more limited abilities in other domains like spatial and reasoning intelligence.

And yet, I’m unhappy with mere acceptance because there is a feeling that a miracle happened. How is this not a miracle, at least with hindsight? And given this, it’s not surprising to see folks thinking about more miracles. The difficulty with miracle thinking is that it provides no structure upon which to reason, so you can’t anticipate the future, prepare for it, and act rationally. Given that, I wanted to lay out my view in some detail and attempt to understand enough to de-miracle what’s happening and what may come next.

Deconstructing The Autocomplete to Dialog Miracle
One of the ironies of the current situation is that an organization called “OpenAI” created AI and isn’t really open about how they did it. That’s an interesting statement about economic incentives and focus. Nevertheless, back when they were publishing, the InstructGPT paper suggested something interesting: that reinforcement learning on a generative model substrate was remarkably effective, good for 2 to 3 orders of magnitude improvement in the quality of response with a tiny (in comparison to the language sources used for next word prediction) amount of reinforcement learning. My best guess is that this was the first combination of three vital ingredients.

  1. Learning to predict the next word based on vast amounts of language data from the internet. I have no idea how much, but wouldn’t be surprised if it’s a million lifetimes of reading generated by a billion people. That’s a vast amount of information with deeply intermixed details about the world and language.
    1. Why not other objectives? Well, they wanted something simple so they could maximize scaling. There may indeed be room for improvement in choice of objective.
    2. Why language? Language is fairly unique amongst information in that it’s the best expression of conscious thought. There is thought without language (yes, I believe animals think in various ways), but you can’t really do language without thought.
  2. The use of a large deep transformer model (pseudocode here) to absorb all of this information. Large here presumably implies training on many GPUs with both data and model parallelism. I’m sure there are many fine engineering tricks here. I’m unclear on the scale, but expect the answer is more than thousands and less than millions of GPUs.
    1. Why transformer models? At a functional level, they embed ‘soft attention’ (=the ability to look up a value with a key in a gradient-friendly way; see the sketch after this list). At an optimization level, they are GPU-friendly.
    2. Why deep? The drive to minimize word prediction error in the context of differentiable depth creates a pressure to develop useful internal abstractions.
  3. Reinforcement learning on a small amount of data which ‘awakens’ a dialog agent. With the right prompt (=prefix language) engineering, a vanilla large language model can address many tasks as the information is there, but it’s awkward and clearly not a general-purpose dialog agent. At the same time, the learned substrate is an excellent representation upon which to apply RL, creating a more active agent while curbing an inherited tendency to mimic internet flamebait.
    1. Why reinforcement learning? One of the oddities of language is that there is more than one way of saying things, so the supervised learning view that there is a right answer and everything else is wrong sets up inherent conflicts in the optimization. Hence, “reinforcement learning from human feedback” pairs inverse reinforcement learning (to discover a reward function) with basic reinforcement learning (to achieve better performance against it); a sketch of the reward-discovery step also follows this list. What’s remarkable about this is that the two-step approach is counter to the data processing inequality.
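
To make the ‘soft attention’ point in ingredient 2 concrete, here is a minimal sketch of scaled dot-product attention in NumPy. It is illustrative only (a single head, no masking, no learned projections), not a claim about how any particular production model implements it.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_attention(queries, keys, values):
    """Scaled dot-product attention: a differentiable key-value lookup.

    queries: (n_q, d), keys: (n_k, d), values: (n_k, d_v).
    Returns one weighted average of the values per query.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)  # similarity of each query to each key
    weights = softmax(scores, axis=-1)      # soft (differentiable) selection
    return weights @ values                 # weighted lookup of values

# Toy usage: 3 queries attending over 5 key/value pairs.
rng = np.random.default_rng(0)
q = rng.normal(size=(3, 8))
k = rng.normal(size=(5, 8))
v = rng.normal(size=(5, 4))
print(soft_attention(q, k, v).shape)  # (3, 4)
```

Because every step is differentiable, the lookup pattern itself can be shaped by gradient descent, which is the property a transformer stacks many layers of.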
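
And to make the reward-discovery half of RLHF in ingredient 3 concrete, here is a minimal sketch of a pairwise (Bradley-Terry style) preference loss on top of stand-in response embeddings. The embeddings, dimensions, and data are placeholder assumptions; only the shape of the objective is the point, and the subsequent policy-optimization step is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16            # placeholder: dimension of a (frozen) response embedding
w = np.zeros(d)   # linear reward model: r(x) = w . embed(x)

def pairwise_loss_grad(emb_preferred, emb_rejected):
    """Bradley-Terry style loss: -log sigmoid(r(preferred) - r(rejected))."""
    margin = emb_preferred @ w - emb_rejected @ w
    p = 1.0 / (1.0 + np.exp(-margin))                    # P(preference ranked correctly)
    loss = -np.log(p)
    grad = -(1.0 - p) * (emb_preferred - emb_rejected)   # d loss / d w
    return loss, grad

# Toy training loop over synthetic preference pairs.
lr = 0.1
for _ in range(200):
    better = rng.normal(size=d) + 0.5   # stand-in embedding of the preferred response
    worse = rng.normal(size=d) - 0.5    # stand-in embedding of the rejected response
    loss, grad = pairwise_loss_grad(better, worse)
    w -= lr * grad
print("loss on the last pair:", float(loss))
```

The learned reward would then drive the “basic reinforcement learning” half (e.g. a policy-gradient update of the language model), which is the part that actually changes the dialog behavior.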

The overall impression that I’m left with is something like the “ghost of the internet”. If you ask the internet for the answer to a question on the best forum available and get an answer, it might be in the ballpark of as useful and as correct as what GPT4 provides (notably, in seconds). Peter Lee’s book on the application to medicine is pretty convincing. There are pluses and minuses here: GPT4’s abilities at language tasks like summarization and style appear super-human, or at least better than I can manage. For commonly discussed content (e.g. medicine) it’s fairly solid, but for less commonly discussed content (say, Battletech fan designs) it becomes sketchy as the internet gives out. There are obviously times when it errs (often egregiously in a fully confident way), but that’s also true in internet forums. I specifically don’t trust GPT4 with math and often find its reasoning and abstraction abilities shaky, although it’s deeply impressive that they exist at all. And driving a car is out because it’s a task that you can’t really describe.

What about the future?
There’s been a great deal of discussion recently about the danger of AI, and quite a mess of mistaken expectations about where we are.

  1. Are GPT4 and future variants the answer to [insert intelligence-requiring problem here]? GPT4 seems most interesting as a language intelligence. It’s clearly useful as an advisor or a brainstormer. The meaning of “GPT5” isn’t clear, but I would expect substantial shifts in core algorithms/representations are necessary for mastering other forms of intelligence like memory, skill formation, information gathering, and optimized decision making.
  2. Are generative models the end of consensual reality? Human societies seem to have a systematic weakness in that people often prefer a consistent viewpoint even at the expense of fairly extreme rationalization. That behavior in large language models is just our collective behavior seen through a mirror. Generative model development (both language and video) does have a real potential to worsen this. I believe we should be making real efforts as a society to harden and defend objective reality in multiple ways. This is not specifically about AI, but it would address a class of AI-related concerns and improve society generally.
  3. Is AI about to kill everyone? Yudkowsky’s editorial gives the impression that a Terminator-style apocalypse is just around the corner. I’m skeptical about the short term (the next several years), but the longer term requires thought.
    1. In the short term, there are so many limitations of even GPT4 (even though it’s a giant advance) that I both lack the imagination to see a path to “everyone dies” and expect it would be suicidal for an AI as well. GPT4, as an AI, is using the borrowed intelligence of the internet. Without that source it’s just an amalgamation of parameters with no interesting capabilities.
    2. For the medium term, I think there’s a credible possibility that drone warfare becomes ultra-lethal, in line with this imagined future. You can already see drone warfare in the Ukraine-Russia war significantly increasing the lethality of the battlefield. This requires some significant advances, but nothing seems outlandish. Counter-drone technology development and limits on usage in line with other war machines seem prudent.
    3. For the longer term, Vinge’s classic singularity essay is telling here as he lays out the inevitability of developing intelligence for competitive reasons. Economists are fond of pointing out how job creation has accompanied previous mechanization-induced job losses, and yet my daughter points out how we keep increasing the amount of schooling children must absorb to be capable members of society. It’s not hard to imagine a desolation of jobs in a decade or two where AIs can simply handle almost all present-day jobs and most humans can’t skill up to be economically meaningful. Our society is not prepared for this situation; it seems like a quite serious and possibly inevitable outcome. Positive models for a nearly-fully-automated society are provided by Star Trek and Iain Banks, although science fiction is very far from a working proposal for a working society.
    4. I’m skeptical about a Lawnmower Man-like scenario where a superintelligence suddenly takes over the world. In essence, cryptographic barriers are plausibly real, even to a superintelligence. As long as that’s so, the thing to watch out for is excessive concentrations of power without oversight. We already have a functioning notion of super-human intelligence in organizational intelligence, and we are familiar with techniques for restraining organizational intelligence into useful-for-society channels. Starting with this and improving seems reasonable.

FAQ on ICML 2019 Code Submission Policy

ICML 2019 has an option for supplementary code submission that the authors can use to provide additional evidence to bolster their experimental results. Since we have been getting a lot of questions about it, here is a Frequently Asked Questions for authors.

1. Is code submission mandatory?

No. Code submission is completely optional, and we anticipate that high quality papers whose results are judged by our reviewers to be credible will be accepted to ICML, even if code is not submitted.

2. Does submitted code need to be anonymized?

ICML is a double-blind conference, and we expect authors to put in reasonable effort to anonymize the submitted code and their institution. This means that author names and licenses that reveal the organization of the authors should be removed.

Please note that submitted code will not be made public; that is, only the reviewers, Area Chair and Senior Area Chair in charge will have access to it during the review period. If the paper gets accepted, we expect the authors to replace the submitted code with a non-anonymized version or link to a public github repository.

3. Are anonymous github links allowed?

Yes. However, they have to be on a branch that will not be modified after the submission deadline. Please enter the github link in a standalone text file in a submitted zip file.

4. How will the submitted code be used for decision-making?

The submitted code will be used as additional evidence provided by the authors to add more credibility to their results. We anticipate that high quality papers whose results are judged by our reviewers to be credible will be accepted to ICML, even if code is not submitted. However, if something is unclear in the paper, then code, if submitted, will provide an extra chance to the authors to clarify the details. To encourage code submission, we will also provide increased visibility to papers that submit code.

5. If code is submitted, do you expect it to be published with the rest of the supplementary? Or, could it be withdrawn later?

We expect submitted code to be published with the rest of the supplementary. However, if the paper gets accepted, then the authors will get a chance to update the code before it is published by adding author names, licenses, etc.

6. Do you expect the code to be standalone? For example, what if it is part of a much bigger codebase?

We expect your code to be readable and helpful to reviewers in verifying the credibility of your results. It is possible to do this through code that is not standalone — for example, with proper documentation.

7. What about pseudocode instead of code? Does that count as code submission?

Yes, we will count detailed pseudocode as code submission as it is helpful to reviewers in validating your results.

8. Do you expect authors to submit data?

We understand that many of our authors work with highly sensitive datasets, and are not asking for private data submission. If the dataset used is publicly available, there is no need to provide it. If the dataset is private, then the authors can submit a toy or simulated dataset to illustrate how the code works.

9. Who has access to my code?

Only the reviewers, Area Chair and Senior Area Chair assigned to your paper will have access to your code. We will instruct reviewers, Area Chair and Senior Area Chair to keep the code submissions confidential (just like the paper submissions), and to delete all code submissions from their machines at the end of the review cycle. Please note that code submission is also completely optional.

10. I would like to revise my code/add code during author feedback. Is this permitted?

Unfortunately, no. But please remember that code submission is entirely optional.

The detailed FAQ as well as other Author and Style instructions are available here.

Kamalika Chaudhuri and Ruslan Salakhutdinov
ICML 2019 Program Chairs

When the bubble bursts…

Consider the following facts:

  1. NIPS submissions are up 50% this year to ~4800 papers.
  2. There is significant evidence that the process of reviewing papers in machine learning is creaking under several years of exponential growth.
  3. Public figures often overclaim the state of AI.
  4. Money rains from the sky on ambitious startups with a good story.
  5. Apparently, we now even have a fake conference website (https://nips.cc/ is the real one for NIPS).

We are clearly not in a steady-state situation. Is this a bubble or a revolution? The answer surely includes a bit of revolution: the fields of vision and speech recognition have been transformed by great empirical successes created by deep neural architectures, and more generally machine learning has found plentiful real-world uses.

At the same time, I find it hard to believe that we aren’t living in a bubble. There was an AI bubble in the 1980s (before my time), a tech bubble around 2000, and we seem to have a combined AI/tech bubble going on right now. This is great in some ways: many companies are handing out professional-sports-scale signing bonuses to researchers. It’s a little worrisome in other ways: can the field effectively handle the stress of the influx?

It’s always hard to say when and how a bubble bursts. It might happen today or in several years and it may be a coordinated failure or a series of uncoordinated failures.

As a field, we should consider the coordinated failure case a little bit. What fraction of the field is currently at companies, or in units within companies, which are very expensive without yet justifying that expense? It’s no longer a small fraction, so there is a chance of something traumatic for both the people and the field when/where there is a sudden cut-off. My experience is that cuts typically happen quite quickly when they come.

As an individual researcher, consider this an invitation to awareness and a small amount of caution. I’d like everyone to be fully aware that we are in a bit of a bubble right now and consider it in their decisions. Caution should not be overdone—I’d gladly repeat the experience of going to Yahoo! Research even knowing how it ended. There are two natural elements here:

  1. Where do you work as a researcher? The best place to be when a bubble bursts is on the sidelines.
    1. Is it in the middle of a costly venture? Companies are not good places for this in the long term, whether a startup or a business unit. Being a researcher at a place desperately trying to figure out how to make research valuable doesn’t sound pleasant.
    2. Is it in the middle of a clearly valuable venture? That could be a good place. If you are interested we are hiring.
    3. Is it in academia? Academia has a real claim to stability over time, but at the same time opportunity may be lost. I’ve greatly enjoyed and benefited from the opportunity to work with highly capable colleagues on the most difficult problems. Assembling the capability to do that in an academic setting seems difficult since the typical maximum scale of research in academia is a professor+students.
  2. What do you work on as a researcher? Some approaches are more “bubbly” than others—they might look good, but do they really provide value?
    1. Are you working on intelligence imitation or intelligence creation? Intelligence creation ends up being more valuable in the long term.
    2. Are you solving synthetic or real-world problems? If you are solving real-world problems, you are almost certainly creating value. Synthetic problems can lead to real-world solutions, but the path is often fraught with unforeseen difficulties.
    3. Are you working on a solution to one problem or many problems? A wide applicability for foundational solutions clearly helps when a bubble bursts.

Researchers have a great ability to survive a bubble bursting: a built-up public record of their accomplishments. If you are in a good environment doing valuable things and that environment happens to implode one day, the strength of your publications is an immense aid in landing on your feet.

AlphaGo is not the solution to AI

Congratulations are in order for the folks at Google DeepMind who have mastered Go.

However, some of the discussion around this seems like giddy overstatement. Wired says “machines have conquered the last games” and Slashdot says “we know now that we don’t need any big new breakthroughs to get to true AI.” The truth is nowhere close.

For Go itself, it’s been well-known for a decade that Monte Carlo tree search (i.e. valuation by assuming randomized playout) is unusually effective in Go. Given this, it’s unclear that the AlphaGo algorithm extends to other board games where MCTS does not work so well. Maybe? It will be interesting to see.
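
To illustrate the “valuation by assuming randomized playout” idea, here is a minimal sketch of Monte Carlo playout valuation (without the tree-building/UCT machinery of a full MCTS implementation). The game interface used here (`copy`, `is_over`, `legal_moves`, `play`, `winner`) is a hypothetical stand-in, not any real library’s API.

```python
import random

def playout_value(state, player, n_playouts=100):
    """Estimate the value of `state` for `player` by averaging random playouts.

    Assumes a hypothetical game interface with copy(), is_over(), legal_moves(),
    play(move), and winner() methods.
    """
    wins = 0
    for _ in range(n_playouts):
        sim = state.copy()
        while not sim.is_over():
            sim.play(random.choice(sim.legal_moves()))  # purely random playout
        wins += (sim.winner() == player)
    return wins / n_playouts

def best_move(state, player, n_playouts=100):
    # Greedy move choice by one-step lookahead over playout valuations.
    def value_after(move):
        nxt = state.copy()
        nxt.play(move)
        return playout_value(nxt, player, n_playouts)
    return max(state.legal_moves(), key=value_after)
```

In Go, positions valued this way happen to be unusually informative, which is part of why the game was amenable to this line of attack.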

Delving into existing computer games, the Atari results (see figure 3) are very fun but obviously unimpressive on about ¼ of the games. My hypothesis for why is that their solution does only local (epsilon-greedy style) exploration rather than global exploration, so they can only learn policies that either address very short credit assignment problems or are greedily accessible. Global exploration strategies are known to be exponentially more efficient in general for deterministic decision processes (1993), Markov Decision Processes (1998), and MDPs without modeling (2006).
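
For concreteness, this is roughly what local (epsilon-greedy style) exploration looks like; the Q-values and action set are placeholders. Reaching a reward that requires a long, specific sequence of non-greedy actions happens only with exponentially small probability, which is the limitation described above.

```python
import random

def epsilon_greedy(q_values, epsilon=0.05):
    """Local exploration: follow the current greedy action, except with probability epsilon.

    q_values: dict mapping action -> current value estimate (placeholder numbers).
    """
    if random.random() < epsilon:
        return random.choice(list(q_values))   # occasional random deviation
    return max(q_values, key=q_values.get)     # otherwise purely greedy

# Toy usage with made-up value estimates.
print(epsilon_greedy({"left": 0.1, "up": 0.7, "right": 0.3}))

# A policy needing k particular non-greedy actions in a row is found with
# probability roughly (epsilon / num_actions) ** k per episode, hence the
# restriction to short credit-assignment horizons or greedily accessible policies.
```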

The reason these strategies are not used is because they are based on tabular learning rather than function fitting. That’s why I shifted to Contextual Bandit research after the 2006 paper. We’ve learned quite a bit there, enough to start tackling a Contextual Deterministic Decision Process, but that solution is still far from practical. Addressing global exploration effectively is only one of the significant challenges between what is well known now and what needs to be addressed for what I would consider a real AI.

This is generally understood by people working on these techniques but seems to be getting lost in translation to public news reports. That’s dangerous because it leads to disappointment. The field will be better off without an overpromise/bust cycle, so I would encourage people to keep, and to inform others of, a balanced view of successes and their extent. Mastering Go is a great accomplishment, but it is quite far from everything.

Edit: Further discussion here, CACM, here, and KDNuggets.

Learning to avoid not making an AI

Building an AI is one of the most subtle things people have ever attempted, with strong evidence provided by the durability of the problem despite attempts by many intelligent people. In comparison, putting a man on the moon was a relatively straightforward technical problem with little confusion about the nature of the solution.

Building an AI is almost surely a software problem, since the outer limit for the amount of computation in the human brain is only 10^17 ops/second (10^11 neurons with 10^4 connections each, operating at 10^2 Hz), which is within reach of known systems.

People tend to mysticize the complexity of unknown things, so the “real” amount of computation required for a human-scale AI is likely far less, perhaps even within reach of a 10^13 flop GPU.
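
As a worked check of the arithmetic in the two estimates above:

```latex
\underbrace{10^{11}}_{\text{neurons}} \times
\underbrace{10^{4}}_{\text{connections/neuron}} \times
\underbrace{10^{2}\ \text{Hz}}_{\text{firing rate}}
= 10^{17}\ \text{ops/second},
\qquad
\frac{10^{17}\ \text{ops/second}}{10^{13}\ \text{flops (one GPU)}} = 10^{4}.
```

So the outer bound sits about four orders of magnitude above a single such GPU; the claim here is that the true requirement may be near the low end of that range.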

Since building an AI is a software problem, the problem is complexity in a much stronger sense than for most problems. The effective approach for dealing with complexity is to use modularity. But which modularity? There is a sprawl of proposed kinds of modularity, often incompatible and obviously incomplete. The moment when you try to decompose into smaller problems is when the difficulty of the solution is confronted.

For guidance, we can consider what works and what does not. This is tricky, because the definition of AI is less than clear. I qualify AI by degrees of intelligence in my mind: a human-level AI is one which can accomplish the range of tasks which a human can. This includes learning complex things (language, reasoning, etc…) from a much more basic state.

The definition seems natural but it is not easily tested via the famous Turing Test. For example, you could imagine a Cyc-backed system passing a Turing Test. Would that be a human-level AI? I’d argue ‘no’, because the reliance on a human-crafted ontology indicates an inability to discover and use new things effectively. There is a good science fiction story to write here where a Cyc-based system takes over civilization but then gradually falls apart as new relevant concepts simply cannot be grasped.

Instead of AI facsimiles, learning approaches seem to be the key to success. If a system learned from basic primitives how to pass the Turing Test, I would naturally consider it much closer to human-level AI.

We have seen the facsimile design vs. learn tension in approaches to AI activities play out many times, with the facsimile design approach winning first, but not always last. Consider Game Playing, Driving, Vision, Speech, and Chat-bots. At this point the facsimile approach has been overwhelmed by learning in Vision and Speech, while in Game Playing, Driving, and Chat-bots the situation is less clear.

I expect facsimile approaches are one of the greater sources of misplaced effort in AI and that will continue to be an issue, because it’s such a natural effort trap: Why not simply make the system do what you want it to do? Making a system that works by learning to do things seems a rather indirect route that surely takes longer and requires more effort. The answer of course is that the system which learns what might otherwise be designed can learn other things as needed, making it inherently more robust.