From: justin corwin (firstname.lastname@example.org)
Date: Sun Feb 17 2002 - 16:43:29 MST
---- "Eliezer S. Yudkowsky" <email@example.com> wrote:
> People are asking the same questions they were asking in 1999. I could
> understand if there'd been progress, just not progress toward Friendly AI,
> but this is just stasis. Why?
There are a lot of factors in this, Eli. Chiefly, there is the fact that
most people won't bother to bring Friendly AI into their worldview. This
is similar to the phenomenon of programmers who ignore new releases of
Another point is that these people may not have encountered The Low Beyond.
I happen to think that anyone who joins extropians should try to look
around the web presence of some of the more prominent members. Anders,
Eugene, Harvey Newstrom, and others have great information online that
many on extropians will never read, and may not even want to. I'm sure
that's frustrating to them as well.
Also, some people may not like to bring goal systems into thinking about
AI, conscious or not. They like their mind-experiments nice and fuzzy,
thank you very much.
Also, I've noticed a distressing tendency of extropians, casual or
otherwise, to treat the Sysop scenario and Friendly AI as intertwined, or
even as the same thing. Many people, including myself, have "philosophic
difficulties" with the Sysop scenario (read: issues with authority) and
may dismiss Friendly AI, and all associated terms, in one fell swoop.
On a personal note, I usually don't read these kinds of vague
ai-kinda-sorta-if-then discussions. They're more in the spirit of "Can a
Lookup Table Be Conscious?" than "What Is Necessary and Sufficient for
Use X of GIAI?"
-- justin corwin
firstname.lastname@example.org - email
(866) 841-9135 x4332 - voicemail/fax
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:39 MST