> From: Eliezer S. Yudkowsky <sentience@pobox.com>
> Otter, you and I were having an interesting discussion on the topic of
> AI, nano, uploading, and navigation a few months back, trading 30K posts
> back and forth, when the discussion suddenly halted for no particular
> reason.
Stalemate? I dunno...
> Can we reopen the topic?
The game is afoot.
> In particular, I think you're prejudiced against AIs. I see humans and
> AIs as occupying a continuum of cognitive architectures, while you seem
> to be making several arbitrary distinctions between humans and AIs.
I don't have anything against AIs per se, and I agree that they are
part of the cognitive continuum. I wouldn't distrust AIs more than I
would distrust humans (>H uploads). The problem with >H AI is
that, unlike for example nanotech and upload technology in general,
it isn't just another tool to help us overcome the limitations of
our current condition, but lirerally has a "mind of its own". It's
unpredictable, unreliable and therefore *bad* tech from the
traditional transhuman perspective. An powerful genie that, once
released from its bottle, could grant a thousand wishes or send
you straight to Hell.
Russian roulette.
If you see your personal survival as a mere bonus, and the
Singularity as a goal in itself, then of course >H AI is a great
tool for the job, but if you care about your survival and freedom --
as, I believe, is one of the core tenets of Transhumanism/Extropianism --
then >H AI is only useful as a last resort in an utterly desperate
situation. Example: grey goo is eating the planet and there's no
way of stopping it, or a small group of survivors is in a space
station, life support is slowly failing and is beyond repair etc.
In cases like these it makes sense to say: "hey, nothing to
lose, let's activate the AI and hope for the best".
> You
> write, for example, of gathering a group of humans for the purpose of
> being uploaded simultaneously, where I would say that if you can use a
> human for that purpose, you can also build an AI that will do the same thing.
>
> As you know, I don't think that any sufficiently advanced mind can be
> coerced.
Yes, and as you may know I agree. That's why this isn't a viable
option (again, from the traditional >H perspective). What I suggest
is that after sufficient testing with rodents, dogs & monkeys you
use "dumb" AI (fancy computer) to run the opload sequence. The
participants might be uploaded into separate self-sufficient "escape
capsules" which are then shot into space, which means that if
a conflict follows it will be fought IRL, or they could be uploaded
into a single medium, and fight it out (or merge or whatever) in
VR. Though maybe not a very appealing choice to us individualists,
the hive mind could ultimately be a reasonable (necessary?)
compromise. 15-30 years down the road we'll all probably be
much more "connected" anyway, so presumably the threshold
for this sort of thing wouldn't be as high as it currently is. Mixed
scenarios would also be possible, of course.
> If there is any force that would tend to act on Minds, any
> inevitability in the ultimate goals a Mind adopts, then there is
> basically nothing we can do about it. And this would hold for both
> humanborn and synthetic Minds; if it didn't hold for humans, then it
> would not-hold for some class of AIs as well.
True...
> If the ultimate goal of a Mind doesn't converge to a single point,
> however, then it should be possible - although, perhaps, nontrivial - to
> construct a synthetic Mind which possesses momentum. Not "coercions",
> not elaborate laws, just a set of instructions which it carries out for
> lack of anything better to do. Which, as an outcome, would include
> extending den Otter the opportunity to upload and upgrade. It would
> also include the instruction not to allow the OtterMind the opportunity
> to harm others;
Self-defense excluded, I hope. Otherwise the OtterMind would
be a sitting duck.
> this, in turn, would imply that the Sysop Mind must
> maintain a level of intelligence in advance of OtterMind, and that it
> must either maintain physical defenses undeniably more powerful than
> that of the OtterMind, or that the OtterMind may only be allowed access
> to external reality through a Sysop API (probably the latter).
>
> Which may *sound* constricting, but I really doubt it would *feel*
> constricting. Any goal that does not directly contradict the Sysop
> Goals should be as easily and transparently accomplishable as if you
> were the Sysop Mind yourself.
Well, it's certainly better than nothing, but the fact remains that
the Sysop mind could, at any time and for any reason, decide
that it has better things to do than babysitting the OtterMind,
and terminate/adapt the latter. Being completely at someone's/
something's mercy is never a good idea. The Sysop mind, having
evolved from a human design, could easily have some flaws
that could eventually cause it to mutate into just about
anything, including a vicious death star. Who would stop
it then (surely not the relatively "dumb" and restricted
OtterMind)? Who monitors the Sysop?
Let's look at it this way: what if the government proposed a
system like this, i.e. everyone gets a chip implant that will
monitor his/her behaviour, and correct it if necessary so that
people can no longer (intentionally) harm each other. How
would the public react? How would the members of this list
react? Wild guess: most wouldn't be too happy about it
(to use a titanic understatement). Blatant infringement of
fundamental rights and all that. Well, right they are. Now,
what would make this system all of a sudden "acceptable"
in a SI future? Does an increase in intelligence justify
this kind of coercion?
And something else: you believe that an SI can do with
us as it pleases because of its massively superior
intelligence. Superior intelligence = superior morality,
correct? This would have some interesting implications
in the present (like it's cool to kill/torture animals,
infants, retards etc.), but that aside. Point is, by coercing
the "ex-human" SI (OtterMind in this case) by means
of a morally rigid Sysop, you'd implicitly assume that you,
a mere neurohack human, already know "what's right".
You'd apparently just "know" that harming others goes
against Objective Morality. But since, according to
your doctrine, it takes an unbound SI to find the true
meaning of life (if there is any), this would violate the
rules of your own game, and ruin your Singularity.
There is an obvious compromise, though (and perhaps
this is what you meant all along): the synthetic Minds
make sure that everyone uploads and reaches (approx.)
the same level of development (this means boosting
some Minds while slowing down others), and then they
shut themselves down, or simply merge with the
"human" Minds. The latter are then free to find the
true meaning of it all, and perhaps kill each other in
the process (or maybe not). It remains a tricky
situation, of course; the synth(s) could mutate before
shutdown, and kill/take over everyone. Well, no-one
said that surviving the Singularity is going to be *easy*...
> The objection you frequently offer, Otter, is that any set of goals
> requires "survival" as a subgoal; since humans can themselves become
> Minds and thereby compete for resources, you reason, any Mind will
> regard all Minds or potential Minds as threats. However, the simple
> requirement of survival or even superiority, as a momentum-goal, does
> not imply the monopolization of available resources.
Yes it does, assuming the Mind is fully rational and doesn't
like loose ends. The more control one has over one's surroundings,
the better one's chances of survival are. Also, we don't know
how many resources a Mind would actually need, or how many
it could actually use. But even if we assume that scarcity
will never be a problem, the security issue would still remain.
Minds could have conflicts for reasons other than control
of resources.
> In fact, if one of
> the momentum-goals is to ensure that all humans have the opportunity to
> be all they can be, then monopolization of resources would directly
> interfere with that goal. As a formal chain of reasoning, destroying
> humans doesn't make sense.
But where would this momentum goal come from? It's not a
logical goal (like "survival" or "fun") but an arbitrary goal,
a random preference (like one's favorite color, for example).
Sure, a Mind could decide that it wants to keep, or develop,
a sentimental attachment to humans, but I'd be very
surprised indeed if this turned out to be the rule.
> Given both the technological difficulty of achieving simultaneous
> perfect uploading before both nanowar and AI Transcendence, and the
> sociological difficulty of doing *anything* in a noncooperative
> environment, it seems to me that the most rational choice given the
> Otter Goals - whatever they are - is to support the creation of a Sysop
> Mind with a set of goals that permit the accomplishment of the Otter Goals.
Well, see above. This would only make sense in an *acutely*
desperate situation. By all means, go ahead with your research,
but I'd hold off on the final steps until we know for sure
that uploading/space escape isn't going to make it. In that
case I'd certainly support a (temporary!) Sysop arrangement.
> Yes, there's a significant probability that the momentum-goals will be
> overridden by God knows what, but (1) as I understand it, you don't
> believe in objectively existent supergoals
That's a tough one. Is "survival" an objective (super)goal? One
must be alive to have (other) goals, that's for sure, but this
makes it a super-subgoal rather than a supergoal. Survival
for its own sake is rather pointless. In the end it still comes
down to arbitrary, subjective choices IMO.
In any case, there's no need for "objectively existent
supergoals" to change the Sysop's mind; a simple
glitch in the system could have the same result, for
example.
> and (2) the creation of a
> seed AI with Sysop instructions before 2015 is a realistic possibility;
> achieving simultaneous uploading of a thousand-person group before 2015
> is not.
Well, that's your educated (and perhaps a wee bit biased)
guess, anyway. We'll see.
P.S.: do you watch _Angel_ too?