Re: Otter vs. Yudkowsky

From: Xiaoguang Li (xli03@emory.edu)
Date: Sun Mar 19 2000 - 09:15:35 MST


On Sat, 18 Mar 2000, D.den Otter wrote:

> So, what do you think will happen/should be done?
>

        frankly, this discussion is fast becoming intractably confusing to
me -- so this refocus on reality is very refreshing.

what will happen?

        i feel nanowar is not inevitable, because nanotech advanced enough
to make weapons would already have eliminated the economic incentive to wage
war by making more useful things. of course, human irrationality must not be
underestimated.

        in regard to a yudkowskyan transcendent mind (ytm), i do not believe
that objective morality exists. my guesses, in no particular order: (1)
un-enhanced human intelligence will not be able to build a ytm; (2) human error
will build a ytm that is essentially catastrophic; (3) a flawless yudkowskyan
mind will be built, but it will be indistinguishable from the rest of reality
-- i.e., quiescent. of course, i still have hope for what i cannot believe.

what should be done?

        i must admit that i lack prescriptive opinions. since i do not
believe in free will, in a sense i'm just along for the ride. however, i
do want to know what actions this discussion is debating. specifically:
what is the near-term goal to be achieved? what is the set of actions
whose execution is in debate? what are the hypothetical conditions on
which there is consensus? for example, _if_ nanowar and subsequent human
extinction are virtually inevitable, _if_ transcendent minds can pursue
momentum goals, and _if_ momentum goal x can be satisfactorily implemented
in a sysop mind, would uploading ourselves into the sysop mind further the
goal of surviving the next 50 objective years?

        btw, wouldn't it be helpful if you two implemented a tree of
arguments and sub-arguments to make the discussion more systematic?
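
        (a minimal sketch, not part of the original exchange: one hypothetical
way such an argument tree could be represented, here in python. the class and
field names are my own invention, not anything otter or yudkowsky proposed.)

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Argument:
        author: str        # who makes the claim, e.g. "Otter" or "Yudkowsky"
        claim: str         # the statement being argued
        rebuttals: List["Argument"] = field(default_factory=list)

        def add_rebuttal(self, author: str, claim: str) -> "Argument":
            # attach a sub-argument that responds to this node
            node = Argument(author, claim)
            self.rebuttals.append(node)
            return node

        def outline(self, depth: int = 0) -> str:
            # render the tree as an indented outline, one line per argument
            lines = ["  " * depth + self.author + ": " + self.claim]
            lines += [r.outline(depth + 1) for r in self.rebuttals]
            return "\n".join(lines)

    # usage (claims are placeholders taken from the hypotheticals above):
    # root = Argument("A", "nanowar is virtually inevitable")
    # root.add_rebuttal("B", "advanced nanotech removes the incentive for war")
    # print(root.outline())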

sincerely,
-x


