Re: Paths to Uploading

Eliezer S. Yudkowsky (sentience@pobox.com)
Wed, 06 Jan 1999 13:20:28 -0600

Anders Sandberg wrote:
>
> OK, this is the standard SI apotheosis scenario. But note that it is
> based on a lot of unsaid assumptions: that it is just hardware
> resources that distinguish a human level AI from an SI (i.e, the
> software development is fairly trivial for the AI and can be done very

That's not what Billy Brown is assuming. He's assuming software development is _harder_ than hardware development, so that by the time we have a working seed AI the available processing power is considerably larger than human-equivalent. I happen to agree with this, by the way.

> fast, and adding more processor power will make the AI *smarter*),

I'll speak for this one. If we could figure out how to add neurons to the human brain and integrate them, people would get smarter. (I deduce this from the existence of Specialists; adding cognitive resources does produce an improvement.) Similarly, I expect that large parts of the AI will be working on sufficiently well-guided search trees that additional processing power can produce results of substantially higher quality, without requiring exponential amounts of power. Anyway, I think we did this during the Singularity debate.
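The claim that more processing power buys better results (short of exponential blowup) can be illustrated with a toy sketch. This is my illustration, not anything from the post: the objective function and the sampling scheme are stand-ins, and real "guided search trees" would be far more structured than blind sampling.

```python
import random

def quality(x):
    """Toy objective: closer to zero is better.  A stand-in for
    'solution quality' in some search problem (an assumption)."""
    return abs(x)

def best_found(budget, seed):
    """Search with a fixed budget of evaluations; return the best
    (lowest) quality value found."""
    rng = random.Random(seed)
    return min(quality(rng.uniform(-100, 100)) for _ in range(budget))

small = best_found(10, seed=0)      # small processing budget
large = best_found(10_000, seed=0)  # 1000x the budget, same seed

# With the same seed, the large run sees every sample the small run saw,
# so its best result can only be as good or better:
assert large <= small
```

The point of the sketch is only that result quality improves monotonically with budget here; how steeply it improves, and whether it saturates, depends entirely on the search landscape.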

> that this process has a time constant shorter than days (why just this
> figure? why not milliseconds or centuries?),

Actually, I think milliseconds. I just say "hours or days" because otherwise my argument gets tagged as hyperbole by anyone who lives on the human timescale.

Centuries is equally plausible. It's called a "bottleneck".

> that there will be no
> systems able to interfere with it - note that one of your original
> assumptions was the existence of human-level AI; if AI can get faster
> (not even smarter) by adding more processor power, "tame" AI could
> keep the growing AI under control

  1. This is an Asimov Law. That trick never works.
  2. I bet your tame AI has to be smarter than whatever it's keeping under control. Halting problem...
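The halting-problem point can be made concrete with the standard diagonalization trick (a toy sketch of my own, not from the post): hand any claimed "decider" a program built to do the opposite of whatever the decider predicts, and the decider is refuted. A watchdog that isn't strictly smarter than its charge can always be gamed this way.

```python
def make_contrarian(decider):
    """Given any claimed halting decider, build a program it gets wrong:
    the program consults the decider about itself and does the opposite."""
    def contrarian():
        if decider(contrarian):
            while True:       # decider said "halts" -> loop forever
                pass
        return "halted"       # decider said "loops" -> halt immediately
    return contrarian

def pessimist(program):
    """Candidate decider: claims every program loops forever."""
    return False

c = make_contrarian(pessimist)
# pessimist says c loops; but c halts immediately, refuting pessimist.
print(c())  # -> halted
```

The same construction defeats any other candidate decider (an "optimist" that always answers True gets a program that loops forever), which is the shape of the classic undecidability proof.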

> - that this SI is able to invent
> anything it needs to (where does it get the skills?)

If you dropped back a hundred thousand years, how long would it take you to out-invent hunters who had been using spears all their life? Skill is a poor substitute for smartness.

And if it's stumped, it can get the answers off the Internet, just like I do.

> and will have
> easy access to somebody's automated lab equipment (how many labs have
> their equipment online, accessible through the Net?

There's some lab with a Scanning Tunneling Microscope that's made "selectively" available to students on the Internet. I thought that was hysterical when I read it. All along, I'd been assuming the fast infrastructure would start with hacking automated DNA sequencers and protein synthesis machines. But this way is probably faster.

> why are you
> assuming the AI is able to hack any system,

There are bloody _humans_ who can hack any system.

> especially given the
> presence of other AI?).

In a supersaturated solution, the first crystal wins.

> And finally, we have the assumption that the
> SI will be able to outwit any human in all respects - which is based
> on the idea that intelligence is completely general and the same kind
> of mind that can design a better AI can fool a human into (say)
> connecting an experimental computer to the net or disable other
> security features.

If the intelligence is roughly human-equivalent, then there will be specialties at which it excels and gaping blind spots. If the intelligence is far transhuman, it will still have specialties and blind spots, but none that we can perceive. So yes, I make that assumption: An SI will be able to outwit any human in all respects. I'll go farther: An SI will correctly view humans as manipulable, deterministic processes rather than opponents.

> As you can tell, I don't quite buy this scenario. To me, it sounds
> more like a Hollywood meme.

Not really. Hollywood is assuming that conflicts occur on a humanly understandable level - not to mention that the hero always wins. Remember the Luring Lottery, when Hofstadter received a large collection of Vast numbers, all using different specification schemes? Determining which was largest would be a serious mathematical problem, but I feel quite confident in announcing that one of the numbers was far larger than all the competitors raised to the power of each other.

The forces involved in a Singularity are Vast. I don't know how the forces will work out, but I do predict that the result will be extreme (from our perspective), and not much influenced by our actions or by initial conditions. There won't be the kind of balance we evolved in. Too much positive feedback, not enough negative feedback.

Likewise, the first SI timescale that isn't too slow to notice will be too fast to notice.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.