Do we have computer hardware sufficient for AI? This question is difficult to answer, but here’s a try:
One way to achieve AI is by simulating a human brain. A human brain has about 10^15 synapses which operate at about 10^2 per second, implying about 10^17 bit operations per second.
A modern computer runs at 10^9 cycles/second and operates on 10^2 bits per cycle, implying about 10^11 bits processed per second.
The gap here is only 6 orders of magnitude, which can plausibly be closed with a cluster of machines. For example, BlueGene/L runs on 10^5 nodes (one order of magnitude short). Its peak recorded performance is about 0.5*10^15 FLOPS, which translates to roughly 10^16 bit operations per second, nearly the 10^17 target.
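To make the arithmetic explicit, here is a minimal sketch in Python. The brain and CPU figures come from the estimates above; the bit-ops-per-FLOP conversion factor is an assumption, labeled as such in the comments.

```python
import math

# Rough figures from the text above; all are order-of-magnitude estimates.
brain_synapses = 1e15        # ~10^15 synapses in a human brain
synapse_rate = 1e2           # ~10^2 operations per second per synapse
brain_ops = brain_synapses * synapse_rate    # ~10^17 bit ops/second

cpu_hz = 1e9                 # ~10^9 cycles/second for a modern CPU
bits_per_cycle = 1e2         # ~10^2 bits operated on per cycle
cpu_ops = cpu_hz * bits_per_cycle            # ~10^11 bit ops/second

gap = math.log10(brain_ops / cpu_ops)
print(f"gap: {gap:.0f} orders of magnitude")    # -> gap: 6 orders of magnitude

# Cluster check: BlueGene/L's ~0.5*10^15 FLOPS. The factor of 32 bit ops
# per FLOP is an assumed conversion, not a figure from the text.
bluegene_ops = 0.5e15 * 32                      # ~1.6*10^16 bit ops/second
print(f"BlueGene/L: ~10^{math.log10(bluegene_ops):.0f} bit ops/second")
```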
There are many criticisms of this argument, cutting in both directions:
- Simulation of a human brain might require substantially more detail. Perhaps an additional factor of 10^2 is required per neuron (see the sketch after this list).
- We may not need to simulate a human brain to achieve AI. There are certainly many examples where designed systems work much better than their evolved counterparts (airplanes versus birds, for example).
- The internet can be viewed as a supercluster of 10^9 or so CPUs, easily satisfying the computational requirement.
- Satisfying the computational requirement is not enough; bandwidth and latency requirements must also be met.
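As a sanity check on the first criticism, here is a small sketch of how a hypothetical extra 10^2 detail factor changes the required cluster size. The node throughput comes from the CPU estimate above; everything is order-of-magnitude only.

```python
import math

brain_ops = 1e17       # baseline bit ops/second estimate from above
detail_factor = 1e2    # hypothetical extra simulation detail per neuron
node_ops = 1e11        # ~10^11 bit ops/second per node, as computed above

nodes_needed = brain_ops * detail_factor / node_ops
print(f"nodes needed: ~10^{math.log10(nodes_needed):.0f}")   # -> ~10^8

# ~10^8 nodes is well beyond BlueGene/L's ~10^5, but within an
# internet-scale "supercluster" of ~10^9 CPUs, provided the bandwidth
# and latency requirements can also be met.
```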
These sorts of order-of-magnitude calculations appear sloppy, but they have worked out remarkably often when tested elsewhere. I wouldn't be surprised if they work out here too.
Even with sufficient hardware, we are missing a vital ingredient: knowing how to do things.