> In a message dated 4/28/01 12:22:28 AM Central Daylight Time,
> email@example.com writes:
> > #Couldn't agree more--yet when morality/ethics issues are raised
> > here, they are ignored at best, dismissed or ridiculed at worst. 'Tis
> > troubling, to say the least. The term "techno-cheerleader" (coined
> > by..?) is not at all inappropriate.
> > #There is ***in general*** no sense of balance, of restraint, of
> > forethought, caution, or consideration of consequences.
> I just don't see how you can reach this conclusion. Consider the lengthy,
> recurrent discussion of social transparency and privacy which arises almost
> every time someone mentions a new development in the technologies of
> information gathering. Consider the inevitable moral discussions which
> accompany consideration of uploading and cognitive transformation
> technologies. And the long-running discussion of "augmented" versus
> "synthetic" minds is nothing if not a protracted exercise in "forethought,
> caution and consideration of consequences."
Not to mention the fact that SIAI has just spent the last 8 months creating
our "Friendly AI" work, which was written specifically as an answer to the
possible threat of unFriendly AI. I don't think John even bothered to read
it, but it certainly qualifies as concrete "balance, restraint, forethought-
in-the-extreme, caution, and consideration of consequences". I would say
that, along with the Foresight nanotech guidelines, it is one of the
best examples of transhumanists doing /real work/ on the ethics of the future.
-- 
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.singinst.org/
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:00 MDT