RE: Maximizing results of efforts Re: Mainstreaming

From: Ben Goertzel (
Date: Mon Apr 30 2001 - 08:49:47 MDT

> you can only spend your life on one
> impossibility, after all.

Well, no. I mean, I am married, after all ;D

Anyway, are two impossibilities more impossible than one? Isn't the
situation sort of like infinity+infinity=infinity?

But seriously...

> Besides which, your visualization of organic integration implies growth
> occurring on a timescale comparable to the rate of change in human
> civilizations, which is not realistic for an entity that can absorb
> arbitrarily large multiples of its initial hardware, that has a serial
> element speed millions of times faster than neurons, and that can
> recursively self-improve. But we've already been through that.

Yes. My guess is that organic integration will occur, and on a slower
time-scale than the development of nonhuman superintelligence. I am not so
certain as you that the development of superhuman superintelligence is going
to obsolete human life as we know it... just as we have not obsoleted ants
and cockroaches, though we've changed their lives in many ways.

> If this was a method that relied on the programmer being Mother Theresa,
> we might as well shoot ourselves now and be done with it, because humans
> are by nature imperfect.

Anyway, I've heard she programmed in LISP, and that's a terrible language...

> Actually, it's possible that FAI will work even if you use Saddam Hussein
> as the exclusive source of content, which would make FAI a plan of the
> third kind. But I can't prove that, and it's not the conservative
> assumption,...

I tend to think that once the system has gotten smart enough to rewrite its
own code in ways that we can't understand, it's likely to morph its initial
goal system into something rather different. So I'm just not as confident
as you that explicitly programming in Friendliness as a goal is the magic
solution. It's certainly worth doing, but I don't see it as being *as*
important as positive social integration of the young AI.

Anyway, we've debated this enough, I guess. I'll try to find time to write
something systematic on it...


This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:01 MDT