From: "Michael M. Butler" <firstname.lastname@example.org>
> The working definition I use: _A_ singularity is when everything seems to be
> happening at once; the rate of change becomes so great as to be incalculable.
Eliezer's definition is when AI exceeds the intelligence of the smartest humans.
Recursively self-improving Artificial Intelligence tends to parallel the
recursively self-improving complex adaptive systems found in nature, IOW,
living organisms (at first glance, anyway). This new life form, this RSIAI,
friendly AI, technological singularity harbinger, or whatever it decides to
call itself, would be able to accurately identify incorrect human thinking,
would it not? I ask because a list member has expressed fear that a system
which identifies incorrect thinking might do so with extropians. Wouldn't that
actually be a friendly thing to do? I mean, if extropians think incorrectly, a
friendly AI would be doing all sentient beings a big favor by removing that
incorrect thinking, right? It's not that I want to think with absolute
correctitude. But in the end, it may be worthwhile to understand that thinking
may not be the best way to know reality. Much can be said for direct
experience, and actions speak louder than words.
Useless hypotheses, etc.:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, and ego.
Everything that can happen has already happened, not just once,
but an infinite number of times, and will continue to do so forever.
(Everything that can happen = more than anyone can imagine.)
We won't move into a better future until we debunk religiosity, the most
regressive force now operating in society.
This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:39:59 MDT