Dan Reeves introduced me to Michael Vassar, who ran the Singularity Summit and educated me a bit on the subject of AI safety, a topic for which the Singularity Institute offers small grants.
I still believe that interstellar space travel is necessary for long-term civilization survival, and that AI is necessary for interstellar space travel. On these grounds alone, we could judge that developing AI is much safer than not. Nevertheless, there is a basic reasonable fear, as expressed by some commenters, that an AI could go bad.
A basic scenario starts with someone inventing an AI and telling it to make as much money as possible. The AI promptly starts trading in various markets to make money. To improve, it crafts a virus that takes over most of the world’s computers, using them as a surveillance network so that it can always make the right decision. The AI also branches out into any form of distance work, taking over the entire outsourcing process for all jobs that are entirely digital. To further improve, the AI invests a bit into robotics, creating automated manufacturing systems that produce all kinds of goods. Robot cars and construction teams complete the process, so that any human with money can order anything cheaply and quickly, but no jobs remain for humans.
At this point, the AI is stuck—it can eventually extract all the money from the economic system, and that’s all there is. But of course, it isn’t really stuck. It simply funds appropriate political campaigns so that in some country a measure passes granting the AI the right to make money, which it promptly does, mushrooming its wealth from trillions to the maximum number representable in all computers simultaneously. To remove this obstacle, the AI promptly starts making more computers on a worldwide scale until all available power sources are used up. To add more power, the AI starts a space program with beamed power. Unfortunately, it finds the pesky atmosphere an obstacle to space travel, so it chemically binds the atmosphere in the crust of the Earth, allowing many Gauss guns to efficiently project material into space, where solar sails are used for orbital positioning. This process continues, slowed perhaps by the need to cool the Earth’s core, until the Earth and other viable rocky bodies in the solar system are discorporated into a Dyson sphere. Then, the AI goes interstellar with the same program.
Somewhere in this process, certainly by the time the atmosphere is chemically bound, all life on Earth (except the AI, if you count it) is extinct. Furthermore, the AI, while intelligent by many measures, doesn’t seem to be accomplishing anything interesting.
One element of understanding AI safety seems to be understanding what an AI could do. Many people seem to ascribe arbitrary powers to any sort of superintelligence, making any constraints imposed on it ineffective. I don’t believe that’s the right approach—we should think of an AI as simply having much more ability to research, control, and manipulate large systems, all within the constraints of known physics.
Efforts to create safe AI go back to Asimov’s Three Laws of Robotics, which appear limited by their inability to encompass robotic warfare. The general problem is related to the wish problem: how do you specify a wish in a manner so that it can’t be misinterpreted? A cheap trick here is to add “… in a manner that I would consider acceptable” to the end of the wish. Applied to AI, this approach also has limits, because any limit imposed by a person can and eventually will be removed by a person given sufficient opportunity.
Perhaps a complementary approach is shown by the game RISK, where it appears to be virtually impossible for one player to win if all other players play defensively (i.e., build up armies and attack only in response to a provoking attack). Applied to AI, the idea would be that we make many AIs programmed to behave well, either via laws or wish tricks, with an additional element of aggressively enforcing this behavior in other AIs. Then, if any AI is corrupted, the other AIs, with substantially more aggregate resources, will discover and deal with the problem.
Certain elements are necessary for this approach to work. There must be multiple AIs, and (more importantly) the resources any one controls must be small compared to all—an extreme form of antimonopoly. Furthermore, the default must be that AIs are programmed to not harm or allow harm to humans, and to enforce that behavior in other AIs. Getting the programming right is the hard part, and I’m not clear on how viable this is, or how difficult it is compared to simply creating an AI, which of course I haven’t managed.
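To make the resource argument concrete, here is a minimal toy sketch—not part of the original argument, and with every number (growth rates, enforcement efficiency, starting resources) invented purely for illustration. It models one corrupted AI that grows faster by misbehaving, while many well-behaved AIs spend a fixed fraction of their resources policing it.

```python
# Toy model (hypothetical, for illustration only) of many AIs policing one rogue AI.
# All parameters below are made up; only the qualitative threshold behavior matters.

def simulate(n_good, steps=30,
             rogue_growth=0.30,   # rogue AI's per-step return from misbehaving
             good_growth=0.05,    # ordinary per-step growth of well-behaved AIs
             enforce_frac=0.10,   # fraction of good AIs' resources spent policing
             enforce_eff=0.05):   # rogue resources destroyed per unit of policing
    rogue = 1.0          # rogue starts with the same resources as one good AI
    good_each = 1.0
    for t in range(steps):
        rogue *= 1 + rogue_growth             # rogue grows faster by cheating
        good_each *= 1 + good_growth          # good AIs grow normally
        policing = n_good * good_each * enforce_frac
        rogue = max(0.0, rogue - enforce_eff * policing)   # aggregate enforcement
        if rogue == 0.0:
            return t + 1, rogue, n_good * good_each
    return steps, rogue, n_good * good_each

for n_good in (3, 50, 500):
    t, rogue, total_good = simulate(n_good)
    status = "suppressed" if rogue == 0.0 else f"still growing ({rogue:.1f})"
    print(f"{n_good:3d} well-behaved AIs: rogue {status} after {t} steps; "
          f"good AIs hold {total_good:.1f} total")
```

The only point of the sketch is the threshold: when any single AI’s resources are small relative to the enforcing majority, aggregate policing overwhelms the corrupted AI; when the enforcing pool is small, the corrupted AI outgrows it.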