Re: Otter vs. Yudkowsky

From: sayke (
Date: Mon Mar 13 2000 - 05:40:14 MST

At 09:25 PM 3/12/00 -0600, someone with the password to the mailbox used said mailbox to write, to den otter, in
particular, the following:

>In particular, I think you're prejudiced against AIs. I see humans and
>AIs as occupying a continuum of cognitive architectures, while you seem
>to be making several arbitrary distinctions between humans and AIs. You
>write, for example, of gathering a group of humans for the purpose of
>being uploaded simultaneously, where I would say that if you can use a
>human for that purpose, you can also build an AI that will do the same thing.

        i find my conclusions to be similar to den otter's, but of course i do not
speak for him. that said, i agree with you that humans and AIs can in
theory occupy a continuum of cognitive architectures. but... i think that,
given an intelligence capable of hacking itself, the middle ground of the
continuum goes away. i see a few basic peaks in intelligence phase space,
if i can be so loose: a peak for those not capable of hacking anything, one
for those capable of hacking some things but not themselves, and one for
those capable of hacking themselves.
        granted, the lines between the categories are fuzzy, but that's beside the point.
        the crux of my point of view is that, if a self-tweaking intelligence
that is not me occurs anywhere near me, and i am running on my current
substrate, i will quite likely die. i think this is to be avoided; i can go
into why i think that if you really want me to, but i think you could
probably argue my pov quite well. therefore, i should strive to be one of
the first, if not the first, self-hacking intelligences.
        teehee... will i eat the solar system in my ruthless quest for
self-transparency? who knows! should it matter to me? true, the
2-people-in-lifeboat-in-ocean thought experiment might very well apply to a
multiple-Powers-ascending-in-close-proximity situation, but so what? why
should that affect my current course of action? i'll deal with the problems
involved in hitting people with oars when i have several orders of
magnitude more thinking power to work with. ;)
        but i'll play devil's advocate for a second. if the lifeboat thought
experiment possibly applies, then should i be attempting to remove/kill off
my potential competition? no. for one thing, i need my potential
competition, because most of them are badass specialists who i probably
couldn't ascend without. for that pragmatic reason, and others which i'm
sure you can divine, i think that attempting to kill off my potential
competition is a baaaaaad plan.
        if i recall correctly, you didn't dig den otter acknowledging that he
might, say, eat the planet, if he ascends. given that the really scary
kinds of nanowar are technically more difficult than uploading (i think it
was billy brown who persuasively argued this... or am i smoking crack?),
and that the inevitable first superintelligences will quite possibly be
ruthlessly selfish, should my first priority not be to become one of said
superintelligences?
>As you know, I don't think that any sufficiently advanced mind can be
>coerced. If there is any force that would tend to act on Minds, any
>inevitability in the ultimate goals a Mind adopts, then there is
>basically nothing we can do about it. And this would hold for both
>humanborn and synthetic Minds; if it didn't hold for humans, then it
>would not-hold for some class of AIs as well.

        i'm not concerned about committing suicide after eons of life as a
Power. i am concerned by any situation that involves me being consumed for
the yummy carbon i currently run on.
        and do you really think it would be easier to write an upload-managing ai
than to just upload? i really really doubt Minds can be given, by us, the
goal-system momentum you speak of. to do so seems way tougher than mere
moravecian uploading.

sayke, v2.3.05 #brought to you by the Coalition for the Removal of Capital
Letters ;)

This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:04:59 MDT