Re: Geniebusters

Eliezer S. Yudkowsky (sentience@pobox.com)
Tue, 13 Apr 1999 22:24:56 -0500

Lyle Burkhead wrote:
>
> The "genie" hypothesis is simply that AI systems with at-least-human
> intelligence will work for us for free. Discussions of AI have not gone
> beyond it, because it is a general idea that has nothing to do with the
> details of how an AI is created.

Incorrect. See the section on goal systems in "Coding a Transhuman AI", or the summary in "Singularity Analysis". (Yes, you *can't* give AIs orders after a certain point - for technical, not logical, reasons.)

> Nanotechnologists assume that Genies will exist. That's what distinguishes
> Drexlerian Nanotechnology from ordinary technology. The nanotechnology meme
> isn't really about atoms. The key paragraphs are in the section called
> Accelerating the Technology Race on page 81:

Okay, now this is why everyone is shouting "Straw man!". Drextech assumes that the forms of matter (as opposed to matter itself) become duplicable resources, and that a few new manipulations (healing a cell, creating custom-formed diamonds, etc.) become possible.

Genie machines are unnecessary. Knocking them down does nothing to disprove the utopia envisioned in _Engines_.

> Either you have to program the robots, or you don't. If you do, then using
> them to build a skyscraper will not be free of labor costs -- far from it.

Yes, but you can build ten million skyscrapers as easily - or with as much difficulty - as one.

> On the other hand if you don't have to program them, then they have crossed
> the line that separates agents from automatons. They have crossed over to
> our side, and the work _they_ do amounts to the same thing as "human
> labor." Either the robots are automatons, in which case you have to program
> them, or they are agents, in which case you have to pay them. Skyscrapers
> will never be free, because we will always have to make an effort to focus
> our minds and make things happen -- or the robots will have to make the
> same effort to focus _their_ minds and make things happen.

It sounds to me like exactly the same argument would rule out the Industrial Revolution. You've just proved that the cost of everything is an eternal constant. Not only that, you've also proved that I can't have Netscape Navigator on my desk because it took a whole company to produce it.

Nanotechnology is a material technology that does not intrinsically require or imply intelligence enhancement; even though IE is discussed in _Engines_ as both consequence and tool, it is not *necessary*. Nanotech is simply an enormously powerful tool that can be developed like any other tool, and used to create a Utopia or to blast the planet (more likely the latter), with or without IE.

> Without Genies, the ability to make things out of atoms is just an
> extension of present-day technology, not dizzying at all. Molecular
> manufacturing without Genies is just agribusiness.

All material business suddenly has the same economics as information; write it once, use it forever. The world becomes the Internet. That's dizzying.
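The arithmetic behind "write it once, use it forever" can be sketched with a toy cost model (the numbers are illustrative assumptions of my own, not figures from _Engines_ or this post): once a design exists and replication is cheap, the average cost per copy collapses toward the marginal cost, exactly as it does for software.

```python
# Toy "write once, use forever" cost model. All numbers are
# hypothetical: a $100M one-time design effort and a $1,000
# per-unit assembly cost under mature molecular manufacturing.

def unit_cost(design_cost, marginal_cost, n_units):
    """Average cost per unit when one design is replicated n_units times."""
    return design_cost / n_units + marginal_cost

one = unit_cost(100e6, 1e3, 1)             # design cost dominates
ten_million = unit_cost(100e6, 1e3, 10_000_000)  # marginal cost dominates

print(f"1 unit:   ${one:,.0f}")
print(f"10M units: ${ten_million:,.0f}")
```

With ten million copies the amortized design cost is trivial, which is the sense in which material goods take on the economics of information.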

> Without Genies, there will be no sudden "assembler breakthrough." Instead
> of emerging in a sudden breakthrough, nanotechnology will emerge
> continuously from present-day technology, over a period of decades, step by
> laborious step, each step involving an effort of concentration in a human
> mind.

Why can't we get immediate AIs who then drive all the way to Singularity, whether we like it or not? And why couldn't the immediate effects of assembler technology, relatively simple apps like ultracomputers and the Ultimate Xerox, have such vast impacts - like $3 trillion of venture capital pouring into nanotech, and the ability to actually play around in the molecular world, and the ability to run evolutionary computer models of assemblers, and all the applications we couldn't anticipate in advance - compress decades into months?

> There are no Genies and never will be. This is a logical point, not a
> technical point.

If it really is a logical point, not a technical question of AI motivations, then you're just playing with tautologies.

> It's not a question of what can or can't be done with
> atoms, or what can or can't be done with computers. I'm not saying AI will
> never exist. What I'm saying is that it doesn't matter -- any entity with
> at-least-human intelligence (artificial or not) won't work for free. To the
> extent that a robot makes independent decisions, it will have to be dealt
> with as an entity that makes independent decisions. A group of robots that
> could build a skyscraper by themselves would be indistinguishable from a
> contractor, and would have to be dealt with as such.

You are simply wrong. Our selfishness is quite independent of our intelligence, and in fact interferes with it; this is a technical question of goal systems.

> I'm not saying that nanosystems will not exist, nor that they will not be
> able to create large structures such as pipelines or skyscrapers. I'm
> saying that a nanosystem (or any system) capable of creating a pipeline or
> skyscraper will contain many human-level intelligences, and they will not
> be at your command... unless you pay them, or find some other way to
> motivate them.

If you really want to know how AI motivations work, you're going to have to read "Coding a Transhuman AI".

Furthermore, your technical argument is simply wrong. As technology advances, it takes fewer and fewer humans to accomplish a given task, yes? From sewing to making cars, yes? I don't see why you couldn't have a nanosystem capable of making the pipe that one trained human could operate, or even, yes, one that was simply given directions and went; or, most likely of all, an Exxon that could build a pipe for everyone in the world. Because, the smaller the level on which you operate, the simpler things are; the world is composed of standardized parts.

Even a *one-man* pipemaker would require at most intermediate AI or advanced crystal intelligences, but nothing smart enough to make motivations a problem. Of course, I could be wrong about the AI required, but then I'm not a nanotechnologist.

Drextech doesn't require genies. Not in part, not in whole.

That is the fundamental root, as I said, of your argument, and it's the reason why we all use the term "straw-man". Drextech simply doesn't require human-equivalent AI. That is a purely technical argument; the only part of it which I am competent to argue is the computing problems, such as storing information or coordinating nanobots or applying generalized heuristics. But I don't see why the Ultimate Xerox Machine, working on anything that can be manufactured with modern bulk technology, isn't possible - and that's all we really need, isn't it? You might not be able to Xerox a rabbit, but you could Xerox a car; a few general repair scenarios would suffice for immortality and healing; and that's all you need for Utopia. And I could make molecular computers, which is all *I* need for a Singularity. If that's not Drextech, what is?

As for ad hominem, when you call Drextech a "belief system" rather than a technical argument, that's implicitly assuming that they believe because they're irrational, not because they're right. First disprove _Nanosystems_. Then talk about belief systems.

-- 
        sentience@pobox.com          Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/singul_arity.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.