Re: Why would AI want to be friendly?

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Mon Sep 25 2000 - 00:14:44 MDT


Eliezer S. Yudkowsky writes:

> That the pattern is wrong - that Mitchell Porter and I are both totally
> off-base and J.R. Molloy is right - is rather less likely. Imagine you toss
> back three physicists into the fourteenth century. Some alchemist spots them
> arguing and decides to propound his own theory about a world built on five
> mystical elements. The physicists may argue with each other, but they share a
> single pattern, and the alchemist doesn't have it - and either you see it or
> you don't.

Blah blah blahbLahblaHblAHblahblahbLAHblaHBlahBLAhbLaH.

I'm sorry, but instead of answers to technical criticism we've got
allusions to Eliezer's and his critics' mental abilities again (with
no further evidence than his alleged SAT scores), and lots of posts
essentially saying "You no good. How dare you question my vast mental
powers! I will show you, eventually. Etc."

You know, this also makes a pattern, and I don't like it. Yakking
about the Big Cool Thing(tm) instead of doing it is surely more fun
(I've been guilty of it myself), but it doesn't get the BCT done. It
kinda makes one wonder whether there is anything behind the BCT at
all, or whether it's Potemkin villages all the way down.

Show us the money. I would be really thrilled if you'd vanish for a
few months, or a year, and come back with something solid,
something which would make VCs want to write out fat checks
reflexively.

Meanwhile, I'll stop commenting on this thread, because you're not
willing to argue at the technical level, and hence I consider this
a waste of my time and other people's.



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:38:53 MDT