Re: Technology evolves, ergo automation evolves, until...

Eliezer S. Yudkowsky
Sat, 07 Nov 1998 22:35:40 -0600

Eugene Leitl wrote:
> Eliezer S. Yudkowsky writes:
> > I could build a transhuman intelligence for the same estimated price. In
> Uh, if you don't mind I would like to hear how you would like to
> accomplish this, especially at the low end of things (about 1 G$). A
> time schedule (MMM taken into account) would be nice as well.

I didn't hear any time schedules from the self-replicator people. Come to think of it, I didn't hear any time schedules from the people who said they'd remodel the basement in two weeks, either.

It's a Pure Intuitive figure, based on how many programmers and how much time I think it would take to do everything listed in "Coding a Transhuman AI". But as I once said, "This is the last program the human race ever needs to write. It makes sense that it would be the largest and the most complex."

> Also, the claim 'I could build it' would seem to become objectionable,
> since you'd wind up more a glorified administrator than an
> implementer. The superintelligence would then be the product of a
> superorganism (team) rather than a single individual (Eliezer).

Don't be too sure of that. It's harder for me to design organizations than to design computer architectures, and it takes a much larger and more secure power base to implement them - but the level of design necessary just to free myself up is a hell of a lot less than what it takes to design a seed AI.

The range of the estimate reflects the degree to which I can change the rules of the game. A seed AI is a quick kill. But if that quick kill isn't possible, one simply recurses on creating more and more sophisticated forms of intelligence enhancement, ranging from collaborative filtering to Algernic neurosurgery.

--         Eliezer S. Yudkowsky

Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.