John Grigg wrote:
>
> Eliezer wrote:
> My point is that there is not an infinite number of chances for things to go
> wrong. If it doesn't decide to off you after a couple of hours, you don't
> need to spend the rest of eternity living in fear.
> (end)
>
> Why is that? I would think it COULD take years/decades/centuries before our
Subjective millennia, maybe. But the kind of Minds we're talking about
are very very very fast.
> AIs get to the point of deciding, for whatever reasons, to turn against
> humanity. As time went by we would trust them more and give them more and
> more responsibilities and power over society. I would think that even if
> some 'rebellion' were harbored within the 'heart' of the first AIs, they
> would keep it to themselves until the time was right to strike.
Go reread _Staring into the Singularity_. We are not talking about
players on a human scale. We are not talking about automated factory
workers. We're talking about a Sysop, an operating system for all
matter in the Solar System with the possible exception of any humans
that opt to be left alone on Earth with relatively harmless modern-day
technologies. If a Mind wants to strike, it can strike and there is
absolutely nothing we can do about it. This is one of the things that
den Otter and I agree about completely.
> I don't understand how you could say AI would not have a point of view.
> Perhaps initially, but then as AI hacked itself and upgraded, it would
> probably develop a pov at a certain level of sophistication. Right?
Wrong.
> The
> self-development of AI would just be another form of evolutionary progress.
Wrong.
> I don't fully grasp the nature of this debate (I admit). How would AI really
> save us from nanotech destruction? Would it be a case where nano is about to
> overrun the planet and human researchers lack the time to find a solution,
> but a lightning-fast AI with labs under its control comes up with the save
> in a matter of only minutes or hours? I could see that happening.
Basically, although leaving it to the last minute is really asking for it.
> It appears that AI, uploading, sysop and nano each have their own pluses
> and minuses. It looks like you are all trying to come up with the best
> balancing act to offset the dangers of the others. It seems to me that a
> hostile AI with control over nano would have us beat hands down!
Yep. There is absolutely no doubt about that. And who says they'll
stop at nanotech, anyway?
> Even a
> nuke hit that destroyed the AI would not stop the unrelenting 'engines of
> destruction' it had unleashed on us. Hopefully an AI on our team would
> quickly nullify the threat.
No teams. Even if it's theoretically possible to have advanced Minds
with conflicting goals, the first AI on the field would enhance itself
and become Sysop.
> But if all the AI were to defect we would be in
> serious trouble.
What it comes down to, in the end, after you've eliminated all the
anthropomorphisms about selfish and skewed viewpoints on the part of
AIs, is that either humanity has a destiny, or it doesn't. If we really
are nothing except cosmic dust, debris in the path of the Minds that
truly matter, then there is not and never was any hope for our species.
It's a question we'll have to face eventually. If we don't want to
destroy ourselves in the next few years, and lose any destiny and chance
for survival we might have had, we need to face the question as soon as possible.
> When it comes to uploading I must admit the classic film _The Lawnmower Man_
> comes to mind. Would you want the angry Jeff Fahey character as the
> 'cyberchrist' of the world? That film will become to uploading what 2001 is
> to first contact.
Never seen it. Sounds like a real yawner.
> I think I might prefer cool-headed and unbiased AIs (if they really turn out
> that way) to uploaded human personalities that could hold 'inner demons'
> which could drive them to do some bad things, though at the time of
> uploading we thought them psychologically healthy individuals. I suppose
> that AIs and uploads will co-mingle and hopefully get along! I think that AI
> will be here first, so we will be the 'new neighbors.' Some AIs may be
> designed using info taken from uploads to make them more human-like. Is that a
> good idea?
Not unless either (1) we can't avoid it and disaster is closing in or
(2) we understand absolutely what we're putting in there.
> Based on the various rates of technological advancement, I would say that AI
> may(?) be here before we have really effective nano.
Probably.
> And I suppose that it would be a good thing for it to work out that way.
If not, we can always hope that nanocomputers good enough to brute-force
the problem will be among the first applications.
--
Eliezer S. Yudkowsky                          sentience@pobox.com
http://pobox.com/~sentience/beyond.html
Member, Extropy Institute
Senior Associate, Foresight Institute