Re: Why would AI want to be friendly?

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Tue Sep 26 2000 - 01:44:54 MDT


Samantha Atkins writes:
 
> We are at least a jump in substrate away from machines as
> computationally complex as human brains. That is just for the hardware

I don't think so. Unless they've started selling computronium at
Fry's, or at least WSI of hardware cellular automata machines. They've
barely started using embedded RAM technology yet, and the only
parallelism you can buy is either a bunch of DSPs, or hooking up lots
of off-the-shelf hardware with off-the-shelf networking (FastEthernet,
GBit Ethernet and maybe Myrinet). No one knows how much biological
crunch equivalent a PC has, but I doubt it is more than a primitive
insect's.

> to be feasible for the task. The software is another matter that is not
> just going to magically come together.
 
Do you know how most scientific codes work? A computational engine of
relatively modest complexity is plowing through a mountain of data,
over and over and over again. Overall, a few % code, the rest of it
data. Frequently, in an embarrassingly parallel fashion: many such
mills, loosely (only locally) coupled. Here you can speed things up by
using dedicated hardware and/or many nodes (thousands, millions,
billions) running in parallel.
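To make the pattern concrete, here is a minimal sketch (my own toy
example, not from any real code): a few lines of "engine", a 1D
diffusion stencil, swept again and again over a large array of state.
The kernel is tiny; the data dwarfs it. And because each cell only
touches its immediate neighbours, the sweep splits naturally into
loosely coupled chunks, i.e. it is embarrassingly parallel.

```python
def step(state, alpha=0.25):
    """One relaxation sweep: each cell moves toward its neighbours' mean."""
    new = state[:]
    for i in range(1, len(state) - 1):
        new[i] = state[i] + alpha * (state[i-1] - 2*state[i] + state[i+1])
    return new

# The "mountain" of data (10_000 cells here; a real run would be far larger),
# initialized with something carefully structured: a single spike.
state = [0.0] * 10_000
state[5_000] = 1.0

# Plow through it over and over again.
for _ in range(100):
    state = step(state)
```

The engine is a handful of lines; all the interesting structure lives
in the data it grinds on, which is the point made above.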

The code is easy enough to grok, though finding a magic set of
parameters can be tricky. The mountain of data is where most of the
magic is. You have to initialize the simulation at the start with
something very, very carefully structured.

You don't enter that huge pile of opaque data in a text editor. You
can write the computational engine in a text editor, provided you know
what it is supposed to look like.

The only realistic way to get at that initial set of data is either
growing it with an evolutionary algorithm (here the exact nature of
the computational engine is less crucial) or scanning a critter. In
the latter case you'll have to write a very, very precise yet
high-performance neuronal emulation engine.
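The "growing it" route can be sketched in a few lines. This is a bare
toy (my own stand-in, not anyone's real system): candidate parameter
vectors are mutated and the fitter ones kept. The fitness function
here just measures distance to an assumed hidden target; in the
scenario above it would instead score how well the seeded engine
behaves, which is the expensive part.

```python
import random

random.seed(0)
TARGET = [0.2, -0.5, 0.9, 0.1]  # assumed toy stand-in for the "magic" data

def fitness(candidate):
    # Higher is better: negative squared distance to the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

# Random initial population of candidate parameter vectors.
population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection: keep the best third
    # Refill with mutated copies of surviving parents.
    population = parents + [
        [g + random.gauss(0, 0.05) for g in random.choice(parents)]
        for _ in range(20)
    ]

best = max(population, key=fitness)
```

Note that nothing here needs to understand the engine's internals; the
loop only ever sees fitness scores, which is why the exact nature of
the engine matters less on this route.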

I'm not sure why many apparently smart people still want to do it the
Cyc way: codify everything explicitly, using an army of human
programmers, until the thing can limp off on its own. I just don't see
it working, because even groups of people are just not smart enough
for that.

> > They can then work on advancing military science 24 hours a
> > day. It makes Roswell conspiracy theories pale by comparison. But then
> > this is all just distant futuristic fantasy, right? And we the public can
> > sit back and know that private and military AI research is many decades
> > away from any such state of development.
>
> I would say more like 2 decades away minimum than 1. But then I saw
> average consensus for when we would have at least basic nano-assembler
> systems drop from an average of 20+ years in March to about 12 in
> September. So I probably should adjust AI guesstimates also.
 
Your average is not my average. Apart from the bootstrap problem,
which can take arbitrarily long, no one has yet validated whether
the mechanosynthetic reaction repertoire executed by a nanorobot is
sufficient for a full closure.

It's looking good, and looking better year after year, but as for
considering it graven in stone, and a particular flavour of
nanotechnology at that, I dunno. My crystal ball is currently in the
repair shop.



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:08 MDT