Re: Otter vs. Yudkowsky

From: sayke (sayke@gmx.net)
Date: Tue Mar 14 2000 - 06:47:23 MST


At 01:26 AM 3/14/00 -0600, sentience@pobox.com wrote:
>sayke wrote:
>>
>> do terms like "dumb" kinda lose meaning in the absence of personal
>> control? i think so.
>
>Oh, bull. You have no personal control over your quarks, your neurons,
>or your environment. There is not one tool you can use which has a 100%
>chance of working. You are at the mercy of the random factors and the
>hidden variables. "Maintaining control" consists of using the tool with
>the highest probability of working.

        maintaining "personal control" is not the same as "maintaining (generic)
control". as you can see by looking at what i wrote above, i was not
talking about generic control... aw well. argument about this serves no
purpose, methinks. it's beside the point. moving on...

>> how kind of the sysop. theocracy might sound nifty, but i don't think it
>> would be stable, let alone doable, from a monkey point of view.
>
>How fortunate that the Sysop is not a monkey.

        but monkeys will be making it, and operating under it, i presume. that was
my point.

>> an omniscient ai is pretty much inscrutable, right? i don't know how we
>> can evaluate the inscrutable's chances of becoming what we would call
>> "corrupt". i think the least inscrutable thing about an omniscient
>> intelligence would be its need for resources. other than that... i dunno.
>
>Yes, its need for resources in order to make humans happy. Munching on
>the humans to get the resources to make the humans happy is not valid
>logic even for SHRDLU. Inscrutability is one thing, stupidity another.

        shizat, man, we're talkin right past each other. let me rephrase: i don't
think you can make a sysop. i don't think any monkeys can. i doubt that
suitable momentum can be imparted to something inscrutable. complex systems
are not molded by shoving in one's fingers and stirring... and that's the
least of the difficulties, i think...

>> i fail to see how it could not get tangled up... even in a case like "in
>> order to maximize greenness, the resources over there should be used in this
>> manner" (which has no self-subject implied) a distinction must be made
>> between resources more directly controlled (what i would call "my stuff")
>> and resources more indirectly controlled (what i would call "other stuff"),
>> etc... and as soon as that distinction is made, degrees of
>> ownership/beingness/whatever are implied, and from there promptly get mixed
>> up in the goal system...
>
>Wrong.
>
>What else can I say? You, as a human, have whole symphonies of
>emotional tones that automatically bind to a cognitive structure with
>implications of ownership. Seeds don't. End of story.

        it is quite possible that my opinion is being corrupted by my evolutionary
programming. but i don't think things are nearly that simple... aw well.
for the sake of argument, and because i don't think either of us can really
lecture the other on the architecture of transcendent minds, i will concede
the point. let's say that Powers don't need a self-subject; they do not find
one useful. what does that change? it does not make the task of sysop
creation any easier.

>> necessary? in the sense that such an arrangement will increase my odds of
>> survival, etc? i doubt it, if only because the odds against my survival
>> must be dire indeed (understatement) to justify the massive amount of work
>> that would be required to make a sysop; effort that could rather be
>> invested towards, say, getting off this planet; where getting off the
>> planet would be a better stopgap anyway.
>
>Getting off the planet will protect you from China. It will not protect
>you from me. And you can't get off the planet before I get access to a
>nanocomputer, anyway.

        i don't think your last statement is supportable. i don't think either of
us knows nearly enough about future event sequences to have a say on that.

>> unless, of course, you come up with a well thought out essay on the order
>> of "coding a transhuman ai" discussing the creation of a specialized sysop
>> ai.
>
>If the problem is solvable, it should be comparatively trivial.
>Extremely hard, you understand, but not within an order of magnitude of
>the problem of intelligence itself.

        um. i beg to differ. in the first case, you create a
self-hacking intelligence, which is by nature incomprehensible to you once
it has emerged and tweaked itself for a bit. in the other case, your task
is to give the incomprehensible a specific, arbitrary form of motivational
momentum.
        best of luck to you. either that, or no luck at all.

>> i trend towards advocating a very dumb sysop, if it can be called that...
>> a "simple" upload manager...
>
>Probably not technologically possible. Even a mind as relatively
>"simple" as Eurisko was held together mostly by the fact of
>self-modification.

        the "simple" upload manager i was talking about is not nearly mind-level.

>> >You and a thousand other Mind-wannabes wish to
>> >ensure your safety and survival. One course of action is to upload,
>> >grow on independent hardware, and then fight it out in space.
>>
>> or just run the fuck away, and hopefully not fight it out for a very, very
>> long time, if ever. dibs on alpha centauri... ;)
>
>One of the things Otter and I agree on is that you can't run away from a
>Power. Nano, yes. Not a Power. Andromeda wouldn't be far enough. The
>only defense against a malevolent Power is to be a Power yourself.
>Otter got that part. The part Otter doesn't seem to get is that if a
>thousand people want to be Powers, then synchronization is probably
>physically impossible and fighting it out means your chance of winning
>is 0.1%; the only solution with a non-negligible probability of working
>is creating a trusted Sysop Mind. Maybe it only has a 30% chance of
>working, but that's better than 0.1%.

        well... being insignificant probably helps. i take that to be the case
with respect to our continued existence. given that powers probably exist,
somewhere, they must not be completely voracious, because we have not been
eaten yet...
        running away would be a stopgap. being that i think your sysop is not
doable, i'm left with stopgaps like running away. shrug. whaddya gonna do...

[snippage snipped because i don't think a sysop is doable]
>> mutually assured destruction seems more clever than a sysop.
>
>It won't work for nano and it sure won't work for Minds.

        nano, yea. the old lone-gunman-who-does-not-value-existence thing...
        but do you really think that Minds don't value their continued existence?

[minor snippage]
>> what if i want to *be* said Pact?
>
>I don't trust you. I can't see your source code, and if I could, I
>almost certainly wouldn't trust it. den Otter doesn't trust you either.
>You're an agent, not a tool.

        you seem to forget that you would be incapable of reading your sysop's
source code, and that it would be an agent in and of itself. you can trust
my self-interest, etc... den otter can too... and i thought you thought (as
i think) that "transcendent tool" is an oxymoron, but, that seems to be
what you want for a sysop.
        i'm not followin, man...

sayke, v2.3.05


