At 02:02 AM 5/8/01, Anders wrote:
>I consider Warwick's speculations, combined with his media savvy, a quite
>worrying threat to transhumanism. The risk is that he manages to do
>spectacular stunts, drawing attention to his ideas about the imminent
>cybercalypse, helping spread an anti-AI sentiment ("terrestrialism" in the
>terms of Hugo de Garis, who is doing nearly the same thing). The problem here
>is that he has not really given much thought to likely development paths and
>forms of human-AI interaction. This is why the E5 panels about this issue are
>so important: we had better do the analysis and start a more serious debate
>on how to make AI social before Professor Warwick does.
Anders, I agree completely on the disturbing message that Warwick is
pushing (without any hint of alternative possibilities that I have seen).
The Saturday morning Extro-5 discussions *will* be important in countering
this. Apart from the press present at the event, we need to put together a
statement following the event (on this and other issues) presenting our views.
The Extro-5 presenters will not all be in agreement, but any announcement
of our discussions can show the alternatives to the nightmare
human-annihilation scenario that Warwick, de Garis, and some others are
drawing as the only picture.
Max More, Ph.D.
firstname.lastname@example.org or email@example.com
President, Extropy Institute. http://www.extropy.org
Senior Content Architect, ManyWorlds Inc.: http://www.manyworlds.com
Chair, Extro-5: Shaping Things to Come, http://www.extropy.org/ex5/extro5.htm
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:03 MDT