On 28 Apr 2001, at 13:51, Brian Atkins wrote:
> GBurch1@aol.com wrote:
> > In a message dated 4/28/01 12:22:28 AM Central Daylight Time,
> > firstname.lastname@example.org writes:
> > >
> > > #Couldn't agree more--yet when morality/ethics issues are raised
> > > here, they are ignored at best, dismissed or ridiculed at worst. 'Tis
> > > troubling, to say the least. The term "techno-cheerleader" (coined
> > > by..?) is not at all inappropriate.
> > > #There is ***in general*** no sense of balance, of restraint, of
> > > forethought, caution, or consideration of consequences.
> > I just don't see how you can reach this conclusion. Consider the lengthy,
> > recurrent discussion of social transparency and privacy which arises almost
> > every time someone mentions a new development in the technologies of
> > information gathering.
#Actually, this is what I saw most recently in that regard (I'm
assuming from context the comment is insincere):
Brian Atkins wrote:
"Clearly, a very critical issue for Extropians. There couldn't
possibly be a more appropriate place on the Net to discuss this."
in response to
John Marlow wrote:
> A controversial international treaty aimed at combatting online
> [...] has entered the home stretch before ratification. ...
> "I would say it's the worst process I've seen so far when it comes
> [to] transparency in government," said Gus Hosein, a senior fellow at
> Privacy International and a lecturer at the London School of
> Economics. "For the entire time, there's been complete resistance
> [to] make any changes to accommodate the interests of industry or
> [...]"
> > Consider the inevitable moral discussions which
> > accompany consideration of uploading and cognitive transformation
> > technologies. And the long-running discussion of "augmented" versus
> > "synthetic" minds is nothing if not a protracted exercise in "forethought,
> > caution and consideration of consequences."
> Not to mention the fact that SIAI has just spent the last 8 months creating
> our "Friendly AI" work, which was specifically created as an answer to the
> possible threat of unFriendly AI. I don't think John even bothered to read ...
#Well, I certainly haven't read _everything_ there as yet!
> but it heavily qualifies as concrete "balance, restraint, forethought-
> in-the-extreme, caution, and consideration of consequences". I would say
> that along with the Foresight nanotech guidelines, that it is one of the
> best examples of transhumanists doing /real work/ on the ethics of the future.
#Yes--but the work itself is not transhumanist, though it has TH
implications/applications. Therefore it's not identified with, or
considered representative of, TH research.
> Brian Atkins
> Director, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:00 MDT