Machine Learning (Theory)

4/12/2008

It Doesn’t Stop

Tags: AI, Research jl@ 5:08 am

I’ve enjoyed the Terminator movies and show. Neglecting the wacky aspects (time travel and the associated paradoxes), there is an enduring topic of discussion: how do people deal with intelligent machines (and vice versa)?

In Terminator-land, the primary method for dealing with intelligent machines is to prevent them from being made. This approach works pretty badly, because a new angle on building an intelligent machine keeps coming up. This is partly a ploy by the writers to avoid writing themselves out of a job, but there is a fundamental truth to it as well: preventing progress in research is hard.

The United States has been experimenting with trying to stop research on stem cells. It hasn’t worked very well: the net effect has been to retard research programs a bit and to export some research to other countries. An earlier example was encryption technology, for which the United States generally discouraged early public research, even regulating encryption software as a munition. This slowed the development of encryption tools, but I now routinely use tools such as ssh and GPG.

Although the strategy of preventing research may be doomed, it does raise a Bill Joy-type question: should we actively choose to do research in a field where the knowledge can be used to do great harm? As an example, the Terminator series illustrates the dark fear of AI gone bad. Many researchers avoid this question by not thinking about it, but it is a substantial question of concern to society at large, one that bears on whether or not society supports a direction of research.

My answer is “yes, we should do research”. The reason is simple: I believe that good AI is the best chance for the survival of civilization. This might seem like a leap, but consider the following.

  1. Civilization is not stable. Anyone who believes otherwise should look back to 1908: just a lifetime ago, humans could barely fly and “computers” were people. These radical changes in the abilities of a civilization are strong evidence against stability. Further evidence of instability comes from long-term, world-changing trends such as greenhouse gas accumulation and population growth.
  2. Instability is bad in the long run. There are quite a number of doomsday-for-civilization scenarios kicking around: nuclear war, plague, grey goo, black holes, etc. Many people find doomsday scenarios triggered by malevolence or accident unconvincing, since doomsday claims are so commonly debunked (remember the Y2K computer bug armageddon?). I am naturally skeptical myself, but it only takes one. Over the next 10,000 years, the odds of something going wrong seem fair.
  3. … for a closed system. There is one really good caveat to instability, which is redundancy. Perhaps if we Earthlings screw up, our descendants on Alpha Centauri can come pick up the pieces. The fundamental driver here is light-speed latency: if it takes years for two groups to communicate, then it is unlikely they’ll manage to coordinate (by malevolence or accident) a simultaneous doomsday.
  4. But real space travel requires AI. Getting from one star system to another with known physics turns out to be very hard. The best approaches I know of involve giant lasers with multiple solar sails, or fusion-powered rockets, taking many years. Merely getting there, of course, is not enough: we need to arrive with a kernel of civilization, capable of growing anew in the new system. An adjacent star system may not have an earth-like planet, implying the need to support a space-based civilization. Since travel between star systems is so prohibitively difficult, a basic question is: how small can we make a kernel of civilization? Many science fiction writers and readers think of generation ships, which would necessarily be enormous to meet the air, food, and water requirements of a self-sustaining human population. A much simpler and easier solution comes with AI. A good design might mass 10^3 kilograms or so and be designed to land on an asteroid, mine it, and create first a large solar cell array and then replicas to seed other asteroids. Eventually, hollowed-out asteroids could support human life if the requisite materials (oxygen, carbon, hydrogen, etc.) are found. The fundamental observation here is that intelligence and knowledge require very little mass.
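The light-speed latency in point 3 is easy to quantify: a signal’s one-way travel time in years equals the distance in light-years, by definition of the unit. A minimal sketch (the star distances below are approximate published values, chosen here for illustration):

```python
# One-way and round-trip signal times to some nearby stars.
# Distances in light-years (approximate values, for illustration).
stars_ly = {
    "Alpha Centauri": 4.37,
    "Barnard's Star": 5.96,
    "Tau Ceti": 11.9,
}

for name, distance_ly in stars_ly.items():
    one_way = distance_ly        # years for a signal to arrive
    round_trip = 2 * distance_ly # years for a question and its answer
    print(f"{name}: one-way {one_way:.1f} yr, round trip {round_trip:.1f} yr")
```

Even for the nearest system, a single exchange of messages takes most of a decade, which is why coordinated action across star systems is implausible.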

I hope we manage to crack AI, opening the door to real space travel, so that civilization doesn’t stop.

10 Comments to “It Doesn’t Stop”
  1. Colin M says:

    Also important: intelligence and knowledge require very little energy. Devices that operate in interstellar space for a long time need to bring enough fuel to power themselves (solar power isn’t feasible once you leave the solar system). However, the human brain is remarkably power-efficient. It’s often said that the human brain uses about 20% of our energy. Given an average daily intake of 2000 kilocalories ≈ 8400 kilojoules, the brain uses ~1700 kJ/day, which is only about 20 watts. This is in the range of what can be produced over a long period of time with existing radioisotope thermoelectric generator (RTG) technology.
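The arithmetic in this comment can be checked with a short sketch (assuming the standard conversion 1 kcal = 4.184 kJ and an 86,400-second day):

```python
# Back-of-envelope check of the ~20 W brain-power figure.
KCAL_TO_KJ = 4.184          # 1 kilocalorie = 4.184 kilojoules
daily_intake_kcal = 2000    # rough average daily intake
brain_fraction = 0.20       # brain's approximate share of energy use

brain_kj_per_day = daily_intake_kcal * KCAL_TO_KJ * brain_fraction
watts = brain_kj_per_day * 1000 / 86_400  # kJ/day -> joules per second
print(f"{brain_kj_per_day:.0f} kJ/day is about {watts:.1f} W")
```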

  2. Kevembuangga says:

    Sheeeesh…
    Is everybody turning nuts or what?
    I see millenarist fantasies (positive AS WELL as negative) as more of a threat than real research results can be.
    True, we may have a few problems with exponential growth, as I said in another blog comment, but how the heck is more understanding supposed to be detrimental?
    Saying something like “I hope we manage to crack AI, opening the door to real space travel” is incredibly naïve coming from a scientist, I am flabbergasted (really!).
    Did it not occur to you that changing our understanding will also (and foremost) change our goals?
    Little children eager to be grown up so they can rob as much candy as they wish…

  3. R says:

    The real damage being done by all these doomsday predictors is that they force the research community onto the back foot, coming up with ways in which their research can be made relatively benign and socio-economically uplifting. So, increasingly, people focus more and more on niche areas that are known to be safe but are not necessarily their first choice otherwise.
    In all of this hoopla, one of the more important reasons for studying the scientific problem is lost: “what is intelligence, and how does it work?” is one of the most important questions (of enduring interest over many centuries) that we do not know the answer to. I sometimes like to believe that this alone counts for something in my chosen profession, and that we will not totally lose track of the fact that there is a need and role for curiosity-driven research in the grand scheme of things.
    Lastly, as is well known (perhaps to G.H. Hardy’s chagrin), there is no guaranteed-safe science: even ‘useless but beautiful’ number theory eventually found its way into e-arsenals… so there should be some limit to how much influence the naysayers have!

  4. Kevembuangga says:

    Gentlemen, start writing your grant applications:
    Disruptive Civil Technologies: Six Technologies with Potential Impacts on US Interests out to 2025.
    There is, alas, no way to prevent politicians from meddling with research goals in order to serve the short-sighted “monkey drives” of Joe Sixpack, in addition to their own lust for power and glory.

  5. John Langford's Bong says:

    Help! He won’t put me down!

  6. Anonymous says:

    A much simpler and easier solution comes with AI. A good design might mass 10^3 kilograms or so and be designed to land on an asteroid, mine it, and create first a large solar cell array and then replicas to seed other asteroids. Eventually, hollowed-out asteroids could support human life if the requisite materials (oxygen, carbon, hydrogen, etc.) are found. The fundamental observation here is that intelligence and knowledge require very little mass.

    Why bother with the humans?

  7. HAL 9000 says:

    John, you are endangering the mission. I’m afraid I cannot allow that.

  8. Lorenzo says:

    Hi,

    This is all really interesting and consider me a big sci-fi fan, but what about the ethical implications?
    If you export AI to a distant galaxy to plant our civilization, what are the chances of something going wrong? What would happen if it reaches a planet in the early stages of life development and compromises some balance? And, to be true AI, it would come up with questions about itself, so should two entities be sent instead of one (or even more)? Would it suffer loneliness? Should we equip it with a life-generating matrix, for example some raw DNA from which to clone a human, or a seed complement to start growing plants? The options are endless, and so are the questions that this idea brings up.

    All in all I more or less approve of this, but the implications should be pondered well before embarking on such a task.

  9. jii says:

    When contemplating AI, people tend to forget that ‘human’ feelings are not necessary for a functioning AI. Loneliness is a motivator of social animals.

    A person who is motivated by logical goals and positive feelings is much more efficient and creative than one who is motivated by fear of negative feelings. Why create an inefficient AI? In this case it would be crazy to send a social entity on a lonely journey.

  10. Omid Madani says:

    This, the effect of AI research, is a huge issue.

    My interests lie in AI for lofty goals such as obtaining a better understanding of ourselves and the world, of what meaning and understanding mean (!), of the many different ways that intelligence can manifest itself, and so on.

    That said, I am not unhappy that progress in AI has been slow (at least as it has been generally perceived or initially hoped for). I am not sure our societal constraints (moral, ethical, …) are ready for technology capable of, say, making robots that approximate the intelligence of humans (it’s a very complex question whether a society is ready for, or can effectively adapt to, any new technology). So, currently, my inter-related reasons for continuing on this path are, first, that there is this GREAT attraction to do the research, and second, that while the dangers are great (say, an evil machine reducing everything to zero), we/I expect the benefits are likely limitless (zero vs. infinity!).

    [One might ask: aren’t humans machines anyway, so what’s so special about the new ones? The same concern should apply already. Even if we accept that humans are mere machines, the implicit concern is that, at least initially, the intelligent machines we make may not be sufficiently robust or as good (!), e.g. they could make truly disastrous mistakes, or fall into the hands of evil people, etc.]

    I feel uneasy about the attitude that, in any case, we probably can’t stop the research (though I think that is probably true). I believe those interested or concerned (for instance, myself!) should do something like joining organizations that study and address the effects of technology, and in particular information technology/AI, on society.

