Learning to avoid not making an AI

Building an AI is one of the most subtle things people have ever attempted; strong evidence for this is how durable the problem has remained despite attempts by many intelligent people. In comparison, putting a man on the moon was a relatively straightforward technical problem, with little confusion about the nature of the solution.

Building an AI is almost surely a software problem, since the outer limit for the amount of computation in the human brain is only 10^17 ops/second (10^11 neurons with 10^4 connections operating at 10^2 Hz), which is within reach of known systems.

People tend to mystify the complexity of unknown things, so the “real” amount of computation required for a human-scale AI is likely far less, perhaps even within reach of a 10^13 flop/s GPU.
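As a sanity check, the arithmetic behind these estimates is easy to reproduce. A minimal sketch, using only the order-of-magnitude figures quoted above (the 10^13 flop/s GPU is the post’s speculative figure, not a measured benchmark):

```python
# Back-of-envelope check of the estimates above; all constants are the
# post's order-of-magnitude figures, not measurements.
neurons = 1e11                 # ~10^11 neurons in the human brain
connections_per_neuron = 1e4   # ~10^4 connections each
firing_rate_hz = 1e2           # ~10^2 Hz

brain_ops_per_sec = neurons * connections_per_neuron * firing_rate_hz
print(f"brain outer limit: {brain_ops_per_sec:.0e} ops/second")  # 1e+17

gpu_flops = 1e13               # the speculative 10^13 flop/s GPU
print(f"gap to one such GPU: {brain_ops_per_sec / gpu_flops:.0e}x")  # 1e+04
```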

Since building an AI is a software problem, the core difficulty is complexity, in a much stronger sense than for most problems. The effective approach for dealing with complexity is to use modularity. But which modularity? A sprawl of proposed kinds of modularity exists, often mutually incompatible and obviously incomplete. The moment you try to decompose the problem into smaller pieces is the moment the difficulty of the solution is confronted.

For guidance, we can consider what works and what does not. This is tricky, because the definition of AI is less than clear. In my mind, I qualify AI by degrees of intelligence: a human-level AI is one which can accomplish the range of tasks which a human can. This includes learning complex things (language, reasoning, etc.) from a much more basic state.

The definition seems natural, but it is not easily tested via the famous Turing Test. For example, you could imagine a Cyc-backed system passing a Turing Test. Would that be a human-level AI? I’d argue ‘no’, because the reliance on a human-crafted ontology indicates an inability to discover and use new concepts effectively. There is a good science fiction story to write here, in which a Cyc-based system takes over civilization but then gradually falls apart as new relevant concepts simply cannot be grasped.

Instead of AI facsimiles, learning approaches seem to be the key to success. If a system learned from basic primitives how to pass the Turing Test, I would naturally consider it much closer to human-level AI.

We have seen the facsimile-design vs. learning tension play out many times in AI activities, with the facsimile-design approach winning first, but not always last. Consider Game Playing, Driving, Vision, Speech, and Chat-bots. At this point the facsimile approach has been overwhelmed by learning in Vision and Speech, while in Game Playing, Driving, and Chat-bots the situation is less clear.

I expect facsimile approaches are one of the greater sources of misplaced effort in AI, and that this will continue to be an issue, because it’s such a natural effort trap: why not simply make the system do what you want it to do? Making a system that learns to do things seems a rather indirect route that surely takes longer and requires more effort. The answer, of course, is that a system which learns what might otherwise be designed can also learn other things as needed, making it inherently more robust.

15 Replies to “Learning to avoid not making an AI”

  1. “The effective approach for dealing with complexity is to use modularity.”

    Actually, this may rather hamper progress; cf. a comment by Nick Szabo:

    “The most important relevant distinction between the evolved brain and designed computers is abstraction layers. Human engineers need abstraction layers to understand their designs and communicate them to each other
    …/…
    Evolution has no needs for understanding, so there is no good reason to expect understandable abstraction layers in the human brain.”

    http://unenumerated.blogspot.fr/2011/01/singularity.html#c1692564240307342324

    1. There is an alternate view of why modularity is necessary: coding complexity. There are not many bits in the genome related to brain function. Expanding that budget of relatively few bits into so many neurons and connections implies that there must be an algorithm at work. The nature of this algorithm may be alien to conventional notions of modularity, but it must have some form of regular, repeated structure (a rough counting sketch of this argument appears after the replies).

  2. About the Turing test: a trained tester would never allow a system to pass unless it demonstrated learning capability. Forget the so-called annual Turing tests. Those systems would never have fooled Turing. The fact that they fooled so many of the testers says more about the quality of the testers than of the systems. I visited one of the annual winners. “I hear congratulations are in order.” Nothing. Clueless. Test them with a team of AI scientists.

  3. Supervised learning approaches are arguably also facsimile methods. We have very few examples of self-supervised/unsupervised/reinforcement learning methods that have been able to beat facsimile methods. The road to human-level AI does not pass through Amazon Mechanical Turk.

  4. Does it count as “Artificial Intelligence” when a machine plays the game of Go sublimely, yet cannot rationally explain why certain moves are good, inspection of the machine’s architecture and code reveals no rational clues, and the machine cannot rationally teach its cognitive skills?

    Peter Sterling’s and Simon Laughlin’s recent (and PROSE-winning) textbook Principles of Neural Design (2015) points toward a world of cognitive capabilities that are characterized by precisely these non-rational traits — traits that (when we reflect) are human indeed.

    1. You are asking for too much.
      Even *human* Go players cannot (fully) “reveal any rational clues”, nor “rationally teach their cognitive skills”.

  5. I’d challenge this statement:

    “The human brain is only 10^17 ops/second (10^11 neurons with 10^4 connections operating at 10^2 Hz)”

    This assumes that neurons are simple devices, but they aren’t. Each is a self-replicating chemical factory with ~1 gigabyte of DNA/RNA code and tens of thousands of molecular tools at its disposal. It is quite possible that this machinery is doing a good share of the total learning in the brain.
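The counting argument in reply 1.1 above is easy to make concrete. A rough sketch, using commonly cited orders of magnitude (the genome size and the 2-bits-per-base-pair encoding are illustrative assumptions, not claims from the post):

```python
# Rough counting sketch for the genome-vs-connectome argument (reply 1.1).
# All constants are coarse, commonly cited orders of magnitude.
genome_base_pairs = 3e9        # ~3 * 10^9 base pairs in the human genome
bits_per_base_pair = 2         # 4 letters -> 2 bits each
genome_bits = genome_base_pairs * bits_per_base_pair  # ~6 * 10^9 bits

connections = 1e11 * 1e4       # 10^11 neurons * 10^4 connections = 10^15

# Even one bit per connection needs ~10^15 bits, far beyond the genome's
# budget, so connections cannot be individually specified; some compact
# generative algorithm must expand the few bits into the full wiring.
print(f"genome budget: {genome_bits:.0e} bits")
print(f"connections:   {connections:.0e}")
print(f"shortfall:     {connections / genome_bits:.0e}x")  # ~2e+05
```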
