Re: Fwd: Earthweb from transadmin

From: Samantha Atkins (samantha@objectent.com)
Date: Tue Sep 19 2000 - 00:29:33 MDT


"Eliezer S. Yudkowsky" wrote:
>
> Eugene Leitl wrote:
> >
> > So, please tell me how you can predict the growth of the core
>
> I do not propose to predict the growth of the core.
>
> Commonsense arguments are enough. If you like, you can think of the
> commonsense arguments as referring to fuzzily-bordered probability volumes in
> the Hamiltonian space of possibilities, but I don't see how that would
> contribute materially to intelligent thinking.

I, and apparently many others, do not find the arguments made to date
particularly commonsensical, or fully convincing that the AI will be
friendly.

>
> I can predict the behavior of the core in terms of ternary logic: Either it's
> friendly, or it's not friendly, or I have failed to understand What's Going
> On.
>
> All else being equal, it should be friendly.
>

All else being equal, any of the possibilities above is equally likely,
with a combination of either of the first two and the third being very
likely. :-) Seriously, this is not remotely a valid argument that the
AI will be friendly.
 
> > Tell me how a piece of "code" during the bootstrap process and
> > afterwards can formally predict what another piece of "code"
>
> I do not propose to make formal predictions of any type. Intelligence
> exploits the regularities in reality; these regularities can be formalized as
> fuzzily-bordered volumes of phase space - say, the space of possible minds
> that can be described as "friendly" - but this formalization adds nothing.
> Build an AI right smack in the middle of "friendly space" and it doesn't
> matter what kind of sophistries you can raise around the edges.
>

But you have not yet shown a) that friendly space is well enough defined,
or b) that you can constrain your AI to it. You have gone to some pains
to clarify that "constrain" is precisely what cannot be done. That
leaves only the claim that the original design itself precludes the AI
being unfriendly. But for this to be believed more widely requires more
details of that design and of why they preclude unfriendly outcomes.
 
> I cannot formally predict the molecular behavior of a skyscraper; the
> definition of skyscraper is not formally definable around the edges; I can
> still tell the difference between a skyscraper and a hut.
>

Irrelevant.
 
> > Tell me how a team of human programmers is supposed to break through
> > the complexity barrier while building the seed AI without resorting to
> > evolutionary algorithms
>
> We've been through this.
>
> Evolution is the degenerate case of intelligent design in which intelligence
> equals zero. If I happen to have a seed AI lying around, why should it be
> testing millions of unintelligent mutations when it could be testing millions
> of intelligent mutations?
>

For tuning some of the fairly chaotic systems that will be part of its
makeup, especially its sensory modalities. Or did you think you and a
company of master hackers were going to program those capabilities in
from scratch? I suspect that hyper-fast GA-driven design and tuning
will be essential to many of the AI's subsystems, and will also be
employed by it directly for certain classes of problems.
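
To make concrete what I mean by GA-driven tuning, here is a minimal
sketch in Python. The "subsystem" being tuned, its parameter count, the
fitness measure, and all of the rates are purely illustrative
assumptions on my part, not anything drawn from the actual design:

# Minimal sketch of GA-driven parameter tuning (illustrative only).
# The subsystem being tuned and its fitness measure are hypothetical.
import random

POP_SIZE = 50          # candidate parameter sets per generation
N_PARAMS = 8           # e.g. gains/thresholds of some sensory subsystem
MUTATION_RATE = 0.1
GENERATIONS = 200

def fitness(params):
    """Stand-in for evaluating a subsystem configured with these parameters."""
    # Here: how close the parameters are to a target unknown to the GA.
    target = [0.5] * N_PARAMS
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params):
    return [p + random.gauss(0, 0.05) if random.random() < MUTATION_RATE else p
            for p in params]

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(N_PARAMS)]
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]      # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best parameters:", [round(p, 3) for p in best])

The same loop works whether the candidates are parameter vectors,
network weights, or program fragments; the point is that the designer
supplies only a fitness measure, not the solution itself.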
 
> > Tell me how a single distributed monode can arbitrate synchronous
> > events separated by light seconds, minutes, hours, years, megayears
> > distances without having to resort to relativistic signalling.
>
> You confuse computational architecture with cognitive coherence and
> motivational coherence.
>

Did that actually address the question fully?
 
> > If it's not single, tell me what other nodes will do with a node's
> > decision they consider not kosher, and how they enforce it.
>
> I do not expect motivational conflicts to arise due to distributed processing,
> any more than I expect different nodes to come up with different laws of
> arithmetic.
>

Because you believe that you, or perhaps the AI, have solved or will solve
the great problem of a universal objective morality, and that two
intelligent entities cannot disagree if they are both intelligent enough
to understand that morality? Do I need to point out that this is a
philosophically shaky position? Or do you believe something
significantly different from this?

 
>
> > Tell me how many operations the thing will need to sample all possible
> > trajectories on the behaviour of the society as a whole (sounds
> > NP-complete to me), to pick the best of all possible worlds. (And will
> > it mean that all of us will have to till our virtual gardens?)
>
> I don't understand why you think I'm proposing such a thing. I am not
> proposing to instruct the Sysop to create the best of all possible worlds; I
> am proposing that building a Sysop instructed to be friendly while preserving
> individual rights is the best possible world *I* can attempt to create.
>

How would you define individual rights? Do I have the right to do
something the Sysop may think is wacky? Or does the Sysop override my
decisions whenever it thinks doing so is significantly "for my own
good"? Is that actually "being friendly" to the kind of creatures human
beings are? For certain, you will find many human beings who will
consider such overrides the height of unbearable unfriendliness. When
they rebel (which some of them inevitably will), exactly how will the
Sysop choose to be friendly?

 
>
> > There's more, but I'm finished for now. If you can argue all of above
> > points convincingly (no handwaving please), I might start to consider
> > that there's something more to your proposal than just hot air. So
> > show us the money, instead of constantly pelting the list with many
> > redundant descriptions of how wonderful the sysop will be. Frankly,
> > I'm getting sick of it.
>
> Frankly, 'gene, I'm starting to get pretty sick of your attitude. Who are you
> to decide whether my proposal is hot air? I can't see that it makes the least
> bit of difference to the world what you think of my proposal, and frankly, you
> have now managed to tick me off. I may consider my AI work to be superior to
> yours, but I don't propose that you have a responsibility to convince me of
> one damn thing. I expect to be extended the same courtesy.
>

I, for one, don't think the proposal is hot air at all. I do think
there is too much hand-waving about the implications of such a creation
and about the likelihood of it being friendly.

Your getting ticked off, or other people being rude or not, is not
relevant to the real questions and concerns. You, of all people, know
this. So please take a deep breath. I realize this is true but utterly
irrelevant advice at the moment. :-)

- samantha


