"D.den Otter" wrote:
> Wow, looks like a more realistic vision of the future is
> slowly but steadily entering the mainstream. Stopping
> progress is neither possible (unless you nuke everything
> to hell) nor desirable (nature's default death sentence is
> still upon us, after all), but that doesn't change the fact
> that advanced technologies will probably kill us all sooner
or later. No choice, tricky situation... Getting to space
> looks like the only way out. Won't help you much against
> AI, of course, just nano. Never hurts to try, though.
Otter, you and I were having an interesting discussion on the topic of
AI, nano, uploading, and navigation a few months back, trading 30K posts
back and forth, when the discussion suddenly halted for no particular
reason. Can we reopen the topic?
In particular, I think you're prejudiced against AIs. I see humans and
AIs as occupying a continuum of cognitive architectures, while you seem
to be making several arbitrary distinctions between humans and AIs. You
write, for example, of gathering a group of humans for the purpose of
being uploaded simultaneously, whereas I would say that if you can use a
human for that purpose, you can also build an AI that will do the same thing.
As you know, I don't think that any sufficiently advanced mind can be
coerced. If there is any force that would tend to act on Minds, any
inevitability in the ultimate goals a Mind adopts, then there is
basically nothing we can do about it. And this would hold for both
humanborn and synthetic Minds; if it didn't hold for humans, then it
would fail to hold for some class of AIs as well.
If the ultimate goal of a Mind doesn't converge to a single point,
however, then it should be possible - although, perhaps, nontrivial - to
construct a synthetic Mind which possesses momentum. Not "coercions",
not elaborate laws, just a set of instructions which it carries out for
lack of anything better to do. Which, as an outcome, would include
extending den Otter the opportunity to upload and upgrade. It would
also include the instruction not to allow the OtterMind the opportunity
to harm others; this, in turn, would imply that the Sysop Mind must
maintain a level of intelligence in advance of the OtterMind's, and that
it must either maintain physical defenses undeniably more powerful than
the OtterMind's, or allow the OtterMind access to external reality only
through a Sysop API (probably the latter).
Which may *sound* constricting, but I really doubt it would *feel*
constricting. Any goal that does not directly contradict the Sysop
Goals should be as easily and transparently accomplishable as if you
were the Sysop Mind yourself.
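To make the Sysop API idea concrete, here's a toy sketch. Everything in it - the `SysopAPI` and `Request` names, the `harms_others` flag standing in for the Sysop's actual (and genuinely hard) judgment about harm - is a hypothetical illustration of mine, not a design from our earlier discussion. The point is just the shape of the architecture: all access to external reality is mediated, and any request that doesn't conflict with the Sysop Goals passes through transparently.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """A hypothetical action an uploaded Mind asks the Sysop to perform."""
    actor: str
    action: str
    harms_others: bool  # toy stand-in for the Sysop's real judgment of harm

class SysopAPI:
    """Toy gatekeeper: every action on external reality passes through here.

    Requests compatible with the Sysop Goals are carried out transparently;
    only requests that would harm others are refused.
    """
    def submit(self, request: Request) -> str:
        if request.harms_others:
            return f"denied: {request.action} would harm others"
        return f"executed: {request.action}"

sysop = SysopAPI()
# A goal that doesn't contradict the Sysop Goals goes through unhindered:
print(sysop.submit(Request("OtterMind", "upgrade own hardware", False)))
# Monopolizing resources would interfere with others' opportunities:
print(sysop.submit(Request("OtterMind", "seize all resources", True)))
```

The design choice the sketch is meant to highlight: the OtterMind never touches external reality directly, so the Sysop doesn't need to out-muscle it after the fact - the veto happens at the interface.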
The objection you frequently offer, Otter, is that any set of goals
requires "survival" as a subgoal; since humans can themselves become
Minds and thereby compete for resources, you reason, any Mind will
regard all Minds or potential Minds as threats. However, the simple
requirement of survival or even superiority, as a momentum-goal, does
not imply the monopolization of available resources. In fact, if one of
the momentum-goals is to ensure that all humans have the opportunity to
be all they can be, then monopolization of resources would directly
interfere with that goal. As a formal chain of reasoning, destroying
humans doesn't make sense.
Given both the technological difficulty of achieving simultaneous
perfect uploading before either nanowar or AI Transcendence, and the
sociological difficulty of doing *anything* in a noncooperative
environment, it seems to me that the most rational choice given the
Otter Goals - whatever they are - is to support the creation of a Sysop
Mind with a set of goals that permit the accomplishment of the Otter Goals.
Yes, there's a significant probability that the momentum-goals will be
overridden by God knows what, but (1) as I understand it, you don't
believe in objectively existent supergoals and (2) the creation of a
seed AI with Sysop instructions before 2015 is a realistic possibility;
achieving simultaneous uploading of a thousand-person group before 2015
is not.
--
email@example.com
Eliezer S. Yudkowsky
http://pobox.com/~sentience/beyond.html
Member, Extropy Institute
Senior Associate, Foresight Institute
This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:04:57 MDT