RE: Paths to Uploading

Billy Brown (bbrown@conemsco.com)
Wed, 6 Jan 1999 13:06:52 -0600

Anders Sandberg wrote:
> As you can tell, I don't quite buy this scenario. To me, it sounds
> more like a Hollywood meme.

Yes, it does. I didn't buy it myself at first - but the assumptions that lead to that conclusion are the same ones that make a Singularity possible. Taking the objections one at a time:

> OK, this is the standard SI apotheosis scenario. But note that it is
> based on a lot of unsaid assumptions: that it is just hardware
> resources that distinguish a human level AI from an SI (i.e., the
> software development is fairly trivial for the AI and can be done very
> fast, and adding more processor power will make the AI *smarter*),

Actually, I assume there is a significant software problem that must be solved as well. That's what makes it all so fast - the first group that figures out how to make an AI sentient enough to do computer programming will be running their experiment on very fast hardware.

> that this process has a time constant shorter than days (why just this
> figure? why not milliseconds or centuries?),

The time constant for self-enhancement is a function of intelligence. Smarter AIs will improve faster than dumb ones, and the time scale of human activity is much harder to change than that of a software entity. In addition, an AI can have a very fast subjective time rate if it is running on fast hardware. Thus, the first smart AI will be able to implement major changes in days, rather than months. I would expect the time scale to shrink rapidly after that.
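The shrinking time scale described above can be sketched as a toy geometric model. All of the numbers here are my own illustrative assumptions, not figures from the argument: I assume each major redesign multiplies the AI's effective speed by some fixed gain, so each subsequent redesign takes proportionally less wall-clock time.

```python
# Toy model of recursive self-enhancement (illustrative assumptions only):
# each redesign multiplies effective speed by `gain`, so the next redesign
# finishes in proportionally less wall-clock time.
gain = 2.0           # assumed speed multiplier per generation
step = 3.0           # assumed duration of the first redesign, in days
total = 0.0
for generation in range(10):
    total += step
    step /= gain     # a faster AI completes the next redesign sooner

print(f"10 generations in {total:.2f} days")  # converges toward 6 days
```

The point of the sketch is that even a modest per-generation gain makes the total time converge: the first few redesigns dominate, and everything after happens in a rapidly shrinking tail.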

> that there will be no
> systems able to interfere with it - note that one of your original
> assumptions was the existence of human-level AI;

No, my assumption is that the first human-level AI will become an SI before the second one comes online.

> that this SI is able to invent
> anything it needs to (where does it get the skills?)

I presume it will already have a large database on programming, AI, and common-sense information. It will probably also have a net connection - I would expect a program that has access to the WWW to learn faster than one that doesn't, after all. By the 2010 - 2030 time frame that will be enough to get you just about any information you might want.

> and will have
> easy access to somebody's automated lab equipment (how many labs have
> their equipment online, accessible through the Net? why are you
> assuming the AI is able to hack any system, especially given the
> presence of other AI?).

Again, by this time frame I would expect most labs to be automated, and their net connections will frequently be on the same network as their robotics control software. You don't need to be able to hack everyone; you just need someone to be stupid.

Besides, the AI could expand several thousandfold just by cracking unsecured systems and stealing their unused CPU time. That speeds up its self-enhancement by a similar factor, which takes us down to a few minutes for a major redesign. I expect a few hours of progress at that rate would result in an entity capable of inventing all sorts of novel attacks that our systems aren't designed to resist.
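As a back-of-the-envelope check on the days-to-minutes claim: if one major redesign takes a few days and the available compute grows a few thousandfold, the per-redesign time does land in the minutes range. The specific figures below are assumptions I've picked for illustration, not numbers from the scenario itself.

```python
# Back-of-the-envelope check (illustrative assumptions, not measurements):
# a redesign that takes days, accelerated by a thousandfold compute grab,
# drops to minutes.
baseline_days = 3          # assumed time for one major redesign
speedup = 3000             # assumed expansion factor from stolen CPU time

minutes_per_redesign = baseline_days * 24 * 60 / speedup
print(f"{minutes_per_redesign:.1f} minutes per redesign")  # ~1.4 minutes
```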

> And finally, we have the assumption that the
> SI will be able to outwit any human in all respects - which is based
> on the idea that intelligence is completely general and the same kind
> of mind that can design a better AI can fool a human into (say)
> connecting an experimental computer to the net or disable other
> security features.

I don't think intelligence is entirely general - my own cognitive abilities are too lopsided to permit me that illusion. A merely transhuman AI, with an effective IQ of a few hundred, might not be any better at some tasks than your average human.

An SI is a different matter. With an effective IQ thousands of times beyond the human average, it should be able to invent any human cognitive skill with relative ease. Even its weakest abilities would rapidly surpass anything in human experience.

Billy Brown, MCSE+I
bbrown@conemsco.com