RE: Yudkowsky's AI (again)
Billy Brown (bbrown@conemsco.com)
Fri, 26 Mar 1999 07:49:49 -0600
Eliezer S. Yudkowsky wrote:
> I guess the question everyone else has to ask is whether the possibility
> that late-term Powers are sensitive to the initial conditions is
> outweighed by the possibility of some first-stage transhuman running
> amuck. It's the latter possibility that concerns me with den
> Otter and Bryan Moss, or for that matter with the question of whether the
> seed Power should be a human or an AI.
So, remind me again, why exactly are we so worried about a human upload?
The last time I looked, our best theory of the human brain had it as a
huge mass of interconnected neural nets, with (possibly) some more
procedural software running in an emulation layer. That being the case, a
lone uploaded human isn't likely to be capable of making any vast
improvements to his own mind. By the time he finishes his first primitive
neurohack, he's going to have lots of uploaded company.
I think a seed AI closely modeled on the human brain would face similar
problems. What gives Ellison-type architectures the potential for growth is
the presence of a coding domdule, coupled with the fact that the software
has a rational architecture that can be understood with a reasonable amount
of thought. Any system that doesn't have the same kind of internal
simplicity is going to have a much flatter enhancement curve (albeit still
exponential).
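To make that "flatter but still exponential" point concrete, here is a toy
model in Python. The per-cycle gain figures are invented purely for
illustration; nobody has measured anything like them:

  # Toy model: self-enhancement as compound growth. The 50% and 5% gains
  # are hypothetical numbers chosen only to show the shape of the curves.
  def cycles_to_double(gain_per_cycle):
      """Count enhancement cycles needed to double capability."""
      capability, cycles = 1.0, 0
      while capability < 2.0:
          capability *= 1.0 + gain_per_cycle
          cycles += 1
      return cycles

  transparent_design = 0.50  # hypothetical: code the mind can read and rewrite
  opaque_neural_mass = 0.05  # hypothetical: blind neurohacks on a neural-net mass
  print("transparent:", cycles_to_double(transparent_design), "cycles to double")
  print("opaque:     ", cycles_to_double(opaque_neural_mass), "cycles to double")

Both curves compound, but under these made-up numbers the transparent
architecture doubles in a couple of cycles while the opaque one takes around
fifteen; that is the sort of gap I have in mind between the two enhancement
curves.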
Billy Brown, MCSE+I
bbrown@conemsco.com