Re: Keeping AI at bay (was: How to help create a singularity)

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Fri May 04 2001 - 05:26:40 MDT


On Fri, 4 May 2001, Damien Sullivan wrote:

> > Bungling Demiurgs are universally not well liked.
>
> So why aspire to be one?

How would I know? Ask Kurzweil, Eliezer, or that Goertzel person. I'm only
interested in incremental patches to the human condition. (Btw, if you
ever do find out the technical support contact for the First Cause, please
let me know, as I have a long list of bug reports to file myself.)

> At the moment. I'm not inclined to believe it's inherent, until we
> know a lot more about the human brain.

Well, it's a mess. Whatever it tells us, it doesn't tell us how to build
cold, hard, clean designs, to appeal to the inner anal-retentive in us. If
anything, it tells us that stupid design-by-mess can come a great long
way, and is still going strong, since it doesn't seem to have obvious
scaling limitations. It might not be the best way to do it in the long
run, but I'm content with hacking an existing system.

> > Godelian-flavoured intrinsic system limitation. As I said, I'm looking
>
> But what's the system? We're too limited to design Pentiums unaided.
> Fortunately we're aided.

If anything, x86s teach us that a large number of people using rational
means of design instead of noise-driven methods perform very badly. You
can only navigate in search space if you've got a very good map of said
space already. If you don't have that (at least nobody gave me a copy),
it's good old seat-of-the-pants navigation (a mix of brute force and
bias, while furiously sniffing for real or imagined gradients in the
breeze).

It would be interesting to know to what extent CPU designers are using
GAs for block layout optimization, and whether statistical optimization
methods are being used below the block level.
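For what it's worth, the block-layout half of the question is easy to sketch at toy scale. Here is a minimal GA, assuming a made-up 1-D placement problem (blocks on a row, fitness is total wire length); real floorplanners work in 2-D with area and timing constraints, so treat every size and weight below as invented for illustration:

```python
import random

# Toy GA for block placement: the genome is the placement order of N
# blocks on a 1-D row, and fitness is total wire length. All sizes and
# connection weights here are made up.
random.seed(0)

N = 8  # number of blocks
# wire[i][j] = how many wires run between blocks i and j (symmetric)
wire = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        wire[i][j] = wire[j][i] = random.randint(0, 3)

def cost(order):
    """Total wire length: connection weight times distance in the layout."""
    pos = {block: slot for slot, block in enumerate(order)}
    return sum(wire[i][j] * abs(pos[i] - pos[j])
               for i in range(N) for j in range(i + 1, N))

def mutate(order):
    """Swap two blocks -- the simplest placement mutation."""
    a, b = random.sample(range(N), 2)
    child = list(order)
    child[a], child[b] = child[b], child[a]
    return child

pop = [random.sample(range(N), N) for _ in range(30)]
init_best = min(cost(p) for p in pop)
for generation in range(200):
    pop.sort(key=cost)                                  # rank by wire length
    pop = pop[:10] + [mutate(random.choice(pop[:10]))   # keep elite, breed rest
                      for _ in range(20)]
best = min(pop, key=cost)
```

Because the top ten layouts survive each generation unchanged, the best cost can only go down, which is about the only guarantee this sort of seat-of-the-pants search gives you.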

It would be very interesting to use GA to directly mutate substrate
structure in the physics simulator, when breeding hardware CAs. Because
the cells are so simple, even the bitspray would probably find something
cool, utilizing dirt effects which human designers are not even aware of.
Doing waferscale hardware CA the VHDL way would seem to lose a lot of
performance. Heck, even a single Chuck can make a design go places people
never thought were possible in a given process, by tweaking the size of a
few (literally) hot spot transistors. Pentiums, shmentiums.
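The "even the bitspray would probably find something" intuition is easy to demonstrate on a software CA, minus the dirt effects. Here is a sketch where the GA mutates the raw 8-bit rule table of a 1-D binary CA directly; the fitness function (hit a target live-cell count) is invented for illustration, where a real hardware-CA experiment would score a physics simulation of the substrate instead:

```python
import random

# The genome is the raw 8-bit rule table of a 1-D binary CA, mutated
# bit by bit. WIDTH, STEPS, TARGET and the fitness criterion are all
# made-up toy choices.
random.seed(1)

WIDTH, STEPS, TARGET = 32, 16, 16

def step(cells, rule):
    """One synchronous update; neighborhood (left, self, right) indexes rule."""
    n = len(cells)
    return [rule[cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n]]
            for i in range(n)]

def fitness(rule):
    """Closer to TARGET live cells after STEPS updates is better."""
    cells = [i % 2 for i in range(WIDTH)]  # fixed initial tape
    for _ in range(STEPS):
        cells = step(cells, rule)
    return -abs(sum(cells) - TARGET)

def mutate(rule):
    child = list(rule)
    child[random.randrange(8)] ^= 1  # flip one bit of the rule table
    return child

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
init_fit = max(fitness(r) for r in pop)
for _ in range(100):
    pop.sort(key=fitness, reverse=True)      # best rules first
    pop = pop[:5] + [mutate(random.choice(pop[:5])) for _ in range(15)]
best = max(pop, key=fitness)
```

The point is how little structure the genome needs: eight bits of rule table, one-bit mutations, and selection does the rest.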

> I also doubt an evolved AI would lead to Singularity. Say we evolve
> one -- which may not be that easy, given how tortuous the path to us
> seems -- then what? We've got a cryptic mess of intelligent code.

We've got a cryptic mess of intelligent code hacking itself, and the
hardware it runs on. A large population of the darn things, actually.
They'll probably be terrific physicists, because a lot of physics is
symbol manipulation, and monkeys are lousy at symbol manipulation, since
we didn't evolve to do that well. They'll be awesome hardware hackers,
too, because coordination and control is expensive, and they're not
subject to the energy and footprint constraints of legacy biology
(another cubic mile of substrate? Here you go). If you want to manipulate
stuff at molecular resolution, you need lots of fingers, and damn fast
reflexes.

In other words, even rampant anthropocentrism can't prevent us from
glimpsing hints that we'd first become side players, and then rapidly
(very rapidly) diminishing points in the landscape, visibly shifting into
deep infrared and then microwave as the cruise picks up speed.

> It's more amenable to controlled experiments and eventually
> modification than we are, but still pretty abstruse as far as
> self-modification -- the core Singularity path -- goes. The fun stuff
> happens if the AI has coherent high level structure, so mutations have
> large effect, and design space gets explored quickly.
>
> And I'd avoid this fetishism of low-level evolutionary processes.

The world *is* made from nails. Look, if you disagree, I have this neat
little argument here, to pound my point home...

> It's all Darwinism ultimately, from gene mutations to high level

See, that's much better.

> thoughts. But a child learning chess may try to move anywhere, and be
> swatted away from illegal moves. A chess program only explores legal
> moves. A human grandmaster only explores good moves. If she gets

I have no idea what a human grandmaster explores, actually. (As opposed
to what he tells us he thinks he's exploring.) I would like to see an
fMRI of his brain while he's playing. Of course I expect a very good
parallel pattern matcher hybridized with a large position library, which
would not light up an awful lot in any one place, but pretty much
everywhere. I would not be surprised to see a little Edelman-style neural
darwinism providing some parallelism to drive it.

> stuck, then she can try relaxing constraints (although going down to
> genetic mutations to develop better chess players is kind of breaking
> the example.) But that's the last resort, not the first.

If I wanted to develop a good chess player, I would start with a
GA-evolved ANN (which I would first have to learn how to mutate, of
course), with an 8x8 chess board as retina input, and run many, many
coevolutionary rounds on a large population of virtual players playing
against each other. Occasionally plugging in algorithmic and human chess
opponents would be fun, too. Of course we have neither the hardware nor
the ability to do GA ANNs, so the question is moot.
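Moot at chess scale, but the loop itself is small. Here is a sketch of the coevolutionary setup, with chess swapped for 21-stone Nim (take 1-3 stones, taking the last one wins) so it actually runs; the network shape, mutation scheme, and every parameter below are invented stand-ins:

```python
import random

# Coevolutionary neuroevolution sketch. Chess is replaced by Nim so the
# loop stays runnable; the GA-over-network-weights structure is the
# point. "Retina" size, layer widths, and mutation rates are made up.
random.seed(2)

STONES, HIDDEN = 21, 8

def new_net():
    # one input (stones left, scaled) -> HIDDEN relu units -> 3 move scores
    return {"w1": [random.gauss(0, 1) for _ in range(HIDDEN)],
            "w2": [[random.gauss(0, 1) for _ in range(HIDDEN)]
                   for _ in range(3)]}

def move(net, stones):
    """Pick how many stones to take (1-3), clipped to what's legal."""
    x = stones / STONES
    h = [max(0.0, w * x) for w in net["w1"]]  # ReLU hidden layer
    scores = [sum(w * a for w, a in zip(row, h)) for row in net["w2"]]
    return min(scores.index(max(scores)) + 1, stones)

def play(a, b):
    """Return 0 if net a wins, 1 if net b wins."""
    stones, turn = STONES, 0
    while True:
        stones -= move(a if turn == 0 else b, stones)
        if stones == 0:
            return turn  # whoever took the last stone wins
        turn = 1 - turn

def mutate(net):
    child = {"w1": list(net["w1"]), "w2": [list(r) for r in net["w2"]]}
    child["w1"][random.randrange(HIDDEN)] += random.gauss(0, 0.5)
    child["w2"][random.randrange(3)][random.randrange(HIDDEN)] += \
        random.gauss(0, 0.5)
    return child

pop = [new_net() for _ in range(16)]
for _ in range(30):                      # coevolutionary rounds
    wins = [0] * len(pop)
    for i in range(len(pop)):            # round-robin: everyone plays everyone
        for j in range(len(pop)):
            if i != j:
                wins[i if play(pop[i], pop[j]) == 0 else j] += 1
    ranked = [n for _, n in sorted(zip(wins, pop), key=lambda t: -t[0])]
    pop = ranked[:4] + [mutate(random.choice(ranked[:4])) for _ in range(12)]
```

Fitness here is purely relative (wins against the current population), which is exactly the coevolutionary property described above, and also why such populations are prone to chasing their own tails without occasional external opponents plugged in.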



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:02 MDT