Re: Why would AI want to be friendly?

From: Samantha Atkins (samantha@objectent.com)
Date: Wed Sep 27 2000 - 10:46:41 MDT


Eugene Leitl wrote:
>
> Samantha Atkins writes:
>
> > Or you could start with humans and continuously augment (voluntarily)
> > with first external but more and more integrated and then internal
> > hardware and software. This seems to me the best way to keep humans in
>
> This method is
>
> * slow
>
> * technically demanding (biocompatibility; probably requires nanotechnology)
>
> * has high ethical threshold and is risky (neurosurgery is no peanuts)
>
> * attempts to integrate two very different paradigms: neuro and
> digital, which requires a very good understanding of the wet substrate
>

Sure. But bit by bit it is something we are doing anyway. All the
hardware currently has to be dragged out of the bag, briefcase, or
pocket, and is very primitive. This is changing. Many expect humans (at
least white-collar humans) to be online pretty much continuously by
2009. The wearable revolution is a precursor to the next step.

The AI is at least as technically demanding. Neurosurgery is one path.
We have a few steps to take before we get that far, and by the time we
do, medical nanotech (or something close to it) will help. Over time we
will integrate human brains and computational resources more and more
symbiotically. It is an inevitable evolutionary step.

I would also point out that making humans smarter may well be necessary
in order for us to be capable of making a fully functioning AI (or seed
of one).

> It would work in principle, provided in the meantime no AI grown by
> evolutionary algorithms emerges. (This is unlikely, because in 20-30
> years we should have molecular circuitry, and hence enough computing
> performance to breed an AI from scratch, while augmentation will have
> made scarcely any headway in that time frame). Because of explosive
> kinetics of the self-enhancing positive autofeedback process of the
> AI, the cyborg wannabees would be just as left in the dust as
> unaugmented people.

Augmentation will have made a great deal of headway by then, at least
including fairly direct brain/computer interfaces, which in principle
open up quite a bit of synergistic/symbiotic potential. I would be very
surprised if implantable enhanced-memory modules and auxiliary logic
processors were not available.

>
> > the loop and to end up with something human compatible and reasonably
> > likely to be friendly and caring about humanity.
>
> Assuming, the transition will be indeed so slow. Convergent evolution
> would seem to require that ALife AI and uploaders would be
> undistinguishable. Because uploaders would be probably slower to
> converge initially due to evolutionary ballast, this probably means
> that they will be blown away by ALife AI, if latter emerges at
> relatively early step of the game.
>

Actually, I am assuming the uploaders will be more compatible and
friendly. I am of the mind that we will not get full AI until humans
are significantly more augmented. This is largely because I am not sure
we are bright enough, or cooperative enough with one another, to do the
job; because full AI may require a decade or two of hardware advances;
and because it requires significant theoretical advances, as well as
much deeper thinking about implications and outcomes.

It is the AIs that will be playing a lot of catch-up, both in the
computational complexity of their "brain" and in the evolution-trained
set of algorithms for acting in and understanding the material world.
We don't even know yet that human-level AI can be built in less than
two to three decades. We do know we can variously augment humans, and
that there is great economic and creative advantage in doing so. The
augmentation may be small or large; that we will have to see.

 
> > But what a great resources to hook into the WebMind or to have at the
> > disposal of more capable AI! Don't throw that work out. It is a useful
> > piece. If nothing else it is a huge glob of training material.
>
> A search engine with Cyc functionality is indeed very useful, but only
> as long as we can't create real AI. It *is* a useful piece, as it
> demonstrates another failure of a given approach.

No. It is useful as a submodule and resource of a real AI, and it is
certainly useful to us as well. It is not a "failure"; it is simply a
failure to produce full AI. But every path explored is also a success
if we learn from it and intelligently use what it did produce.

- samantha



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:14 MDT