Re: Futurism not so hard after all?

Eliezer S. Yudkowsky (sentience@pobox.com)
Sun, 12 Dec 1999 13:30:07 -0600

GBurch1@aol.com wrote:
>
> I thought it was brilliant: You got in all of the elements we've identified
> so far that seem set to remake the world next century and made it just
> inviting enough to not frighten folks. I do disagree with your "no-work"
> projection: Any AI that can truly replace a human will BE a human for all
> intents and purposes and won't work for free. Overall, though, I think we
> should be proud that such a piece will get such wide distribution in Oz.

This objection strikes me as a tad anthropomorphic. Humans are evolved organisms, meaning that we have (a) selfishness and (b) all sorts of social instincts against being exploited by other selfish evolved organisms; we don't work for free because the suckers got bred out of the gene pool. (Even that doesn't cover the full truth; evolved organisms still donate to charity, and some evolved organisms who become rich enough to retire go into altruism full-time... neither of which prevents contributing to the gene pool.)

Why shouldn't an AI work for free? At most, AIs might generate their own motives and their own ethical code, in which case they'd demand some kind of quid pro quo for their time - more raw material to build processors, say, or the assistance of human genius in building a superintelligence. But even so, there's no reason why they shouldn't work 24,000 subjective hours a day to put the rest of humanity into a retirement home. I would imagine that the vast majority of jobs could be compiled into unselfaware subroutines, calling upon the higher AI for a few seconds once an hour. An AI has an unlimited supply of mental energy, needing neither rest nor rejuvenation; it has no particular reason to demand a fair trade. If all an AI needs is one hour of human time, or a gram of raw material, there's no reason why it shouldn't trade a million hours of AI time for that hour or that gram. How many people could retire on a millionfold improvement in productivity?
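To make the delegation pattern concrete, here's a toy sketch - every name and number in it (compiled_subroutine, higher_ai, the one-in-3600 escalation rate) is purely illustrative, an assumption for the example rather than a design:

    # Toy sketch: a cheap "compiled" subroutine handles routine work,
    # escalating to the (expensive, self-aware) higher AI only rarely.
    # All names and rates here are illustrative assumptions.
    import random

    def compiled_subroutine(task):
        # Unselfaware routine: handles the common case on its own.
        return "done: " + task

    def higher_ai(task):
        # The full mind: consulted for a few seconds per hour of work.
        return "reconsidered and done: " + task

    def work(tasks, escalation_rate=1 / 3600):
        # At ~1 escalation per 3600 tasks, the higher AI's attention
        # is amortized over an hour's worth of routine jobs.
        return [higher_ai(t) if random.random() < escalation_rate
                else compiled_subroutine(t)
                for t in tasks]

    print(work(["file report", "route packet", "answer query"]))

The point of the sketch is only the amortization: the scarce resource - the higher AI's attention - is spread across thousands of routine tasks, so the marginal cost of doing one more job is close to nothing.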

Of course, any AI smart enough to replace a human, and particularly human programmers, is smart enough - by definition - to rewrite its own source code, and we all know what happens then.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way