Re: Why would AI want to be friendly?

From: Samantha Atkins (samantha@objectent.com)
Date: Fri Oct 06 2000 - 14:40:32 MDT


Eugene Leitl wrote:
>
> Samantha Atkins writes:
>
> > We are at least a jump in substrate away from machines as
> > computationally complex as human brains. That is just for the hardware
>
> I don't think so. Unless they've started selling computronium at
> Fry's, or at least WSI of hardware cellular automata machines. They've
> barely started using embedded RAM technology yet, and the only
> parallelism you can buy is either a bunch of DSPs, or hooking up lots
> of off-the-shelf hardwarez with off-the-shelf networking (FastEthernet,
> GBit Ethernet and maybe Myrinet). No one knows how much biological
> crunch equivalent a PC has, but I doubt it is more than a primitive insect.

I think you misunderstood me. I agree with you. That's why I said "at
least" in the above. I wonder if anyone is using large-scale FPGAs to
get on-the-fly parallel [re-]configuration as/when needed.

>
> > to be feasible for the task. The software is another matter that is not
> > just going to magically come together.
>
> Do you know how most scientific codes work? A computational engine of
> relatively modest complexity is plowing through a mountain of data,
> over and over and over again. Overall, a few percent code, the rest of
> it data. Frequently, in an embarrassingly parallel fashion: many such
> mills, loosely (only locally) coupled. Here you can speed things up by
> using dedicated hardware and/or many nodes (thousands, millions,
> billions) running in a parallel fashion.
>

Since scientific computing was where I started out a couple of decades
ago, yeah, I do know about this. I worked a little on an old CDC
supercomputer.
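To make the "small engine, big data" shape concrete, here is a toy
sketch of such a mill (plain Python, all names invented; a real solver
would be far bigger and probably in Fortran or C):

    # A tiny computational engine applied independently to many chunks
    # of data, over and over -- the embarrassingly parallel case.
    from multiprocessing import Pool

    def relax(chunk):
        """One sweep of a toy smoothing kernel over a row of floats."""
        out = list(chunk)
        for i in range(1, len(chunk) - 1):
            out[i] = (chunk[i - 1] + chunk[i] + chunk[i + 1]) / 3.0
        return out

    if __name__ == "__main__":
        # The "mountain of data": many independent rows.
        data = [[float(i + j) for i in range(1000)] for j in range(64)]
        with Pool() as pool:              # one worker per core
            for sweep in range(100):      # plow through it repeatedly
                data = pool.map(relax, data)

A real mill would also swap boundary cells between neighboring chunks
each sweep; that exchange is the loose, local coupling you mention.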
 
> The code is easy enough to grok, though finding a magic set of
> parameters can be tricky. The mountain of data is where most of the
> magic is. You have to initialize the simulation at the start with
> something very, very carefully structured.
>
> You don't enter that huge a pile of opaque data in a text editor. You
> can write the computational engine in a text editor, provided you know
> what it is supposed to look like.
>
> The only realistic way to get at that initial set of data is either
> growing it with an evolutionary algorithm (here the exact nature of
> the computational engine is less crucial) or scanning a critter. In
> the latter case you'll have to write a very, very precise yet
> high-performance neuronal emulation engine.
>

Depending on the purpose you are attempting to achieve, of course. I was
working on simulating oil fields at the time, and the initial data came
from a lot of probes lowered down wells, seismic explosion data and
such. Far more data was put out by the simulation run, so much so that
the challenge was mastering that output to allow meaningful analysis.
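Your other route, growing the initial data with an evolutionary
algorithm, has a skeleton simple enough to sketch (toy Python; the
fitness function here is a placeholder, where a real one would run the
simulation engine itself and score the result):

    import random

    def fitness(params):
        # Placeholder score; a real run would evaluate the engine.
        return -sum((p - 0.5) ** 2 for p in params)

    def mutate(params, rate=0.1):
        return [p + random.gauss(0, rate) for p in params]

    # Start from random parameter sets and breed the best performers.
    population = [[random.random() for _ in range(20)] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]          # keep the best fifth
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]
    best = max(population, key=fitness)

The exact nature of the engine matters less here, as you say: the loop
only ever sees a fitness number.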

I question, though, how much initial state you would have to give a
really good neuronal simulation. If it has workable learning (especially
pattern matching / abstraction) capabilities, then why wouldn't simply
exposing it to a LOT of training information / interaction with the
world work? It is not clear to me that newborn biological beasties are
so massively precoded. Sure, they have quite a bit of instinctual stuff
and some amount of built-in learning algorithms.
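As a cartoon of what I mean, here is about the smallest learner I can
write down: random initial weights, one built-in learning rule, and
everything else soaked up from exposure (toy Python; the "world" is
invented for illustration):

    import random

    # Random initial state -- nothing precoded about the task itself.
    weights = [random.uniform(-0.1, 0.1) for _ in range(3)]

    def predict(x):
        return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

    def world():
        # A crude environment: the label is 1 when the first input
        # outweighs the second. The learner is never told this rule.
        x = [random.uniform(-1, 1), random.uniform(-1, 1), 1.0]
        return x, 1 if x[0] > x[1] else 0

    for _ in range(10000):            # lots of exposure, no precoding
        x, label = world()
        error = label - predict(x)    # the one built-in learning rule
        weights = [w + 0.01 * error * xi for w, xi in zip(weights, x)]

One neuron and one rule is obviously nothing like a brain, but it shows
the division of labor I have in mind: a little instinctual machinery, a
lot of training data.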

 
> I'm not sure why many apparently smart people still want to do it the
> Cyc way. Codify everything explicitly, using an army of human
> programmers, until the thing can limp off on its own. I just don't see
> it working, because

I don't want to do it that way, or at least not only that way. There are
aspects of the Cyc approach that result in useful stuff. But I don't
think it is the approach that will give truly intelligent (and
especially self-aware) AI.
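To be concrete about what "codify everything explicitly" means, a
caricature (toy Python; real Cyc uses its own CycL representation, and
these facts are invented):

    # Every fact entered by hand by a human programmer.
    facts = {
        ("isa", "Fido", "Dog"),
        ("isa", "Dog", "Mammal"),
        ("isa", "Mammal", "Animal"),
    }

    def isa(thing, kind):
        # Follow hand-coded isa links transitively.
        if ("isa", thing, kind) in facts:
            return True
        return any(isa(mid, kind)
                   for (rel, t, mid) in facts
                   if rel == "isa" and t == thing)

    assert isa("Fido", "Animal")  # works, but only for what was typed in

That kind of hand-entered structure is where the useful stuff comes
from, but the fact base never writes itself, which is exactly the
limp-off-on-its-own problem.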

- samantha


