Sufficient Computation

Do we have computer hardware sufficient for AI? This question is difficult to answer, but here’s a try:

One way to achieve AI is by simulating a human brain. A human brain has about 10^15 synapses which operate at about 10^2 per second, implying about 10^17 bit ops per second.

A modern computer runs at 10^9 cycles/second and operates on 10^2 bits per cycle, implying 10^11 bits processed per second.

The gap here is only 6 orders of magnitude, which can plausibly be closed with clusters of machines. For example, the BlueGene/L operates 10^5 nodes (one order of magnitude short). Its peak recorded performance is about 0.5*10^15 FLOPS, which translates to about 10^16 bit ops per second, nearly the 10^17 required.
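
For concreteness, here is a minimal sketch of the arithmetic, using the assumed figures above (the FLOP-to-bit-op conversion factor is a rough assumption of the sketch, not a claim from the post):

```python
import math

brain_synapses = 1e15     # assumed synapse count
synapse_rate_hz = 1e2     # assumed operations per synapse per second
brain_bit_ops = brain_synapses * synapse_rate_hz   # ~1e17 bit ops/second

cpu_hz = 1e9              # assumed cycles per second
bits_per_cycle = 1e2      # assumed bits operated on per cycle
cpu_bit_ops = cpu_hz * bits_per_cycle              # ~1e11 bit ops/second

gap = math.log10(brain_bit_ops / cpu_bit_ops)
print(f"gap: ~{gap:.0f} orders of magnitude")      # -> 6

# BlueGene/L-scale cluster: ~0.5e15 peak FLOPS over ~1e5 nodes; treating
# one FLOP as roughly 20 bit ops (an assumption) gives ~1e16 bit ops/second.
cluster_bit_ops = 0.5e15 * 20
print(f"cluster: ~10^{math.log10(cluster_bit_ops):.0f} bit ops/second")
```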

There are many criticisms (both positive and negative) of this argument.

  1. Simulation of a human brain might require substantially more detail. Perhaps an additional factor of 10^2 is required per neuron (the sketch after this list works out the effect).
  2. We may not need to simulate a human brain to achieve AI. There are certainly many examples where we have been able to design systems that work much better than evolved systems.
  3. The internet can be viewed as a supercluster with 10^9 or so CPUs, easily satisfying the computational requirements.
  4. Satisfying the computational requirement is not enough—bandwidth and latency requirements must also be satisfied.
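
To make criticisms 1 and 3 concrete, here is the same style of sketch, taking the factors above at face value:

```python
brain_bit_ops = 1e17      # baseline estimate from the post

# Criticism 1: an extra ~1e2 of per-neuron detail raises the target.
detailed_target = brain_bit_ops * 1e2        # ~1e19 bit ops/second

# Criticism 3: the internet as ~1e9 CPUs at ~1e11 bit ops/second each.
internet_capacity = 1e9 * 1e11               # ~1e20 bit ops/second

print(internet_capacity >= detailed_target)  # True: raw capacity suffices,
# though criticism 4 stands: bandwidth and latency are not accounted for.
```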

These sorts of order-of-magnitude calculations appear sloppy, but they work out a remarkable number of times when tested elsewhere. I wouldn’t be surprised to see it work out here.

Even with sufficient hardware, we are missing a vital ingredient: knowing how to do things.

17 Replies to “Sufficient Computation”

  1. I don’t think a sufficient AI would even have to run in ‘real time’. If it were an order of magnitude or two off of human intellect speed-wise, it would still be an amazing achievement. So it is indeed a matter of knowing how to do things.

  2. It also depends on what “knowing how to do things” means…
    There seems (to me) to be a lack of deeper questioning about AI “things”.
    Research in AI has been, and still is, focused on the problem-solving side of intelligence to the detriment of the theory-building side, which is nevertheless an even more indispensable ingredient of intelligence: you only solve a problem within a theory.
    Adding yet another truckload of clever algorithms to the AI “bag of tricks” isn’t likely to bring much insight into the question of what it is we are doing when acting “intelligently”.
    And BTW the problem solving question is “solved” (supposedly):
    The Fastest and Shortest Algorithm for All Well-Defined Problems

  3. Supervised learning algorithms have a fitness function that can be evaluated by simple code, possibly from a limited database: e.g., recognition of letters and faces, prediction of the next step in a time series, minimizing path length in the travelling salesman problem, winning games of chess, etc.

    But I have never seen a fitness function that gives “degree of true intelligence” as output. There, you have to rely on human judgement, as in the Turing test, which is many orders of magnitude slower than the evaluation in any supervised learning algorithm. The lack of such a fitness function, which may itself have to be an AI program, would explain why there has been progress in specialized AI tasks like chess, speech, and image recognition, but not so much in “general intelligence”.
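
    A minimal sketch of the asymmetry described above; the function names and the model/agent interfaces are purely illustrative assumptions:

    ```python
    # A supervised fitness function is cheap, mechanical code to evaluate.
    def supervised_fitness(model, labeled_examples):
        """Fraction of labeled examples the model predicts correctly."""
        return sum(model(x) == y for x, y in labeled_examples) / len(labeled_examples)

    # No analogous simple code is known for "degree of true intelligence":
    # a Turing-test-style evaluation keeps a human judge in the loop and is
    # many orders of magnitude slower per evaluation.
    def intelligence_fitness(agent):
        raise NotImplementedError("requires human judgement, or itself an AI")
    ```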

  4. Do insects, rodents, or worms have the same op rates as humans?
    It would be amazing to make a machine function as well as a mosquito or worm.

  5. It would even be impressive if we got AI that matched the problem-solving abilities of crows. For an example, see the Science video.

    Apparently crow brains weigh about 12 grams, so about 100 times smaller than a human brain.

    Mouse intelligence might be a good intermediate step; mice have about 4 million cortical neurons, compared to 11.5 billion for humans.
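
    Taking these figures at face value, a crude sketch of the scaling (assuming, questionably, that required computation scales linearly with cortical neuron count):

    ```python
    import math

    human_bit_ops = 1e17              # the post's estimate for a human brain
    human_cortical_neurons = 11.5e9   # figure quoted above
    mouse_cortical_neurons = 4e6      # figure quoted above

    # Linear scaling by neuron count is a strong assumption of this sketch.
    mouse_bit_ops = human_bit_ops * mouse_cortical_neurons / human_cortical_neurons
    print(f"mouse-scale simulation: ~10^{math.log10(mouse_bit_ops):.1f} bit ops/second")
    # -> ~10^13.5, already within range of a single large machine
    ```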

  6. Interesting; as far as I’m concerned, you just summarized the first 150 pages of Ray Kurzweil’s “The Singularity Is Near”. As a machine learning student I was looking for some light reading on real (or strong) AI and I thought: why not start with Mr. Strong AI himself. The disappointment was big: the book goes on and on about how fast computers are, how many billions and trillions of operations per second we can and will be able to do in the near future, and how 3D, molecular, and quantum computers will take our FLOP-ability up a few orders of magnitude …

    I have to admit that after a while I started skipping large sections of the book; but nowhere did I read any insights on “knowing how to do things”. I’m a strong believer in machine learning: I think we can claim some exceptional achievements in Information Retrieval, Vision, Robotics, and Data Mining, and the future of our field doesn’t look too dim either; but one way or another, I feel like we have learned very little about how to build an intelligent system.

    I once read a complexity theorist’s comment that although he spends most of his time proving small but important complexity results, he likes to spend a couple of days a year thinking only about how to solve P vs. NP. Although it clearly hasn’t paid off big time, I think this might be an interesting idea: a couple of days a year we should actually try to cook up our best recipe for how to build an intelligent system.

    Whatever that means …

    Jurgen

    PS “The Singularity Is Near” on Bayesian networks: “originally devised by English mathematician Thomas Bayes and published posthumously in 1763 …”

  7. So here’s an order-of-magnitude problem: what if the *insert-favorite-complexity-measure-here* of “knowing how to do things” is of the same order of magnitude as that of “things”? I.e., what if we can’t get anywhere “complex” starting with “simple” algorithms?

    Is this what you were alluding to in asking the question?

  8. When we don’t know how to do things, we try to learn from the environment. That points to machine learning instead of AI, but the problem is that there is no evidence that any environment simpler than our physical universe can generate intelligent life. Machine learning is rapidly becoming a central technology in modern life, and that’s enough for me.

    As for Ray Kurzweil, I’m surprised at how little criticism I see of his work. Of course, John Horgan is always fun for this type of thing. To me, folks like Ray Kurzweil are living evidence that intelligence and wisdom are separate quantities.

  9. Each neuron has high connectivity: thousands to tens of thousands of synapses. There is STDP, spike-timing-dependent plasticity, i.e., organizational differences arising out of, and relative to, the timing of input pulses – most likely collective patterns of timing across input pulses.

    Firing proximity and physical proximity of synapses are variables. Distant synapses may fire together or in set sequences that make more of a difference than near neighbors. Sometimes.

    There is a range of time scales: electrical activity, chemical diffusion and concentration gradients, connective plasticity, and learning – a.k.a. childhood.

    And what about the microtubules that some worry over, or the exact role of glial cells? What did Einstein’s brain look like: more neurons, wired differently, or more glial cells?

    Chemical organization, for instance, is continuous, not discrete, but bounded.

    Then, there is a lot of hardwiring in the human brain that varies greatly from region to region, as evidenced by the neuroanatomical naming of regions and by the designation of region-to-region connectivity and interaction — the physiological study of interaction within and between regions, from macro to micro.

    Add a few orders of magnitude for sensor input and preprocessing, and actuator operation.

    A thought experiment – less frequent than in the past, but still used – electroshock treatment for deep depressive states, etc.

    The patient is zapped, the brain’s electrical organization is wholly disrupted, there is a period of unconsciousness, then the patient “returns” without massive amnesia or personality disruption – “better” the practitioners would say, the psychological distress lessened. Is that coincidental robustness, or innate and crucial?

    Because of the range of time frames of interaction, my guess is that if you disrupt only the fastest – the electrical pulse activity – there is robust recovery. Half a day of continuous electroshock, and you might just jigger something up badly and permanently. My understanding is that they use microsecond pulse lengths – certainly under a second in order of magnitude. I could be wrong there – it’s not my expertise.

    Put all that in your simulation. Or what of it is irrelevant?

    The other comments – lower levels of cognitive function in non-humans, is the other side of the coin.

    My dog has feelings, and ways of communicating. Bumping my arm when I am sitting reading and he wants to be fed. Whining sometimes, barking other times. Recognizing tone of voice and showing gladness, or cowering at my discovery of something done to the rug or a shoe. Getting excited when the doorbell rings or when I am seen reaching for both a jacket and a Frisbee. This is high-level cognition, plus there are the instinctive hunting behaviors that shepherds train to advantage. Yet dogs are stupid, relatively speaking. Songbirds with smaller-than-crow-size brains distinguish a blue jay flying in from a shrike, and a crow from a hawk.

    So, what level and kind of cognitive function do you seek? Something with a dollars-and-cents payoff? Or “feelings”?

    If you cannot measure or benchmark how well you simulate “feelings”, you will have difficulties if aiming there.

    It would not surprise me if the military has projects simulating shark AI. Certainly homing is studied, and there’s a literature on fly vision systems and motion perception. Those little buggers have to see things differently than you and I, but imagining the intricacies is difficult, for me at least. How does a fly “see”? However it is, it is reasonably effective at not being swatted.

    Eric Horvitz for a while had the “can we simulate human cognition?” question posed somewhere within the Microsoft Research web pages; you could email him your thoughts, he invited that, but it’s not now on his home page: http://research.microsoft.com/%7Ehorvitz/

  10. I think this is core to the discussion on applying ML to develop “intelligence” — the complexity of intelligence can’t be measured independently of the environment.

    As for problem solving (for example, at the “level of the crow”), animal life provides plenty of clues for devising clever solutions without requiring complete AI.

  11. I think your numbers are off. The brain has 10^12 neurons and 10^15 synapses. It is the neuron which spikes (and this spike travels to roughly 10^3 to 10^4 other neurons, hence the 10^15 synapses number). So I think it’s something more like 10^14 bit ops per second at best, and each bit op has access to about 10^3.5 inputs (though the complexity of what it can do with these inputs is fairly limited, at least IMO).

    -Sham
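
    Redoing the post’s arithmetic with this accounting (the neuron and synapse counts are the commenter’s figures; the ~10^2 Hz rate is carried over from the post):

    ```python
    neurons = 1e12              # commenter's figure for neuron count
    spike_rate_hz = 1e2         # rate assumed in the post
    synapses = 1e15             # commenter's figure for synapse count

    bit_ops = neurons * spike_rate_hz   # ~1e14 bit ops/second, counting spikes
    fan_out = synapses / neurons        # ~1e3 synapses per neuron on average
    print(bit_ops, fan_out)             # 1e14 and 1e3
    ```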

  12. We have too little knowledge about how the brain actually works to make any relevant estimate of the computing power needed to simulate it. We still largely do not know how we learn new things or how synapses are actually built.

    About Ray Kurzweil and the small amount of criticism against him: I have the feeling that he is now ignored by the scientific community. He was a scientist in the past; he is not a scientist anymore. His arguments look like those of a guru or an evangelist, not those of a scientist.

