Re: Why would AI want to be friendly?

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Sep 25 2000 - 23:32:49 MDT


Franklin Wayne Poley wrote:
>
> On Mon, 25 Sep 2000, J. R. Molloy wrote:
>
> > The general public need not back or fund AI projects because the US military is
> > spending millions trying to develop AI.
>
> This could be a nightmare. What do you think? If the militaries are in an
> AI race, what does it mean? What if we are only a decade or so away from
> AI that surpasses human equivalency on all major measured outcomes of
> intelligence, including learning ability?

That is pretty unlikely despite some optimistic opinions here.

> What if the present
> state-of-the-art is that it 'only' takes x billions of dollars (available
> to the militaries) to build machines that are more "learned" than humans
> and can add to that learning better and faster than humans?

We are at least one jump in substrate away from machines as
computationally complex as the human brain, and that is just for the
hardware to become feasible. The software is another matter entirely; it
is not going to magically come together on its own.
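
A rough back-of-envelope of the gap, for what it's worth (the numbers
are order-of-magnitude assumptions on my part, not measurements):

    # ~1e11 neurons, ~1e4 synapses each, ~100 Hz activity -> ~1e17 synaptic events/sec
    brain_ops_per_sec = 1e11 * 1e4 * 1e2
    # fastest supercomputers circa 2000 are roughly 10 TFLOPS peak
    supercomputer_flops = 1e13
    print(brain_ops_per_sec / supercomputer_flops)  # ~1e4, four orders of magnitude short

Even if either estimate is off by a couple orders of magnitude, the
shortfall looks like a substrate jump, not an incremental upgrade.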
  
> They can then work on advancing military science 24 hours a
> day. It makes Roswell conspiracy theories pale by comparison. But then
> this is all just distant futuristic fantasy, right? And we the public can
> sit back and know that private and military AI research is many decades
> away from any such state of development.

I would say it is more like two decades away at minimum, not one. But
then I saw the consensus estimate for when we would have at least basic
nano-assembler systems drop from 20+ years in March to about 12 years in
September, so I should probably adjust my AI guesstimates as well.

- samantha
