neosapient@geocities.com (den Otter) writes:
>In the case of SI, any head start, no matter how slight, can mean all
>the difference in the world. A SI can think, and thus strike, much
>faster than less intelligent, disorganized masses. Before others
>could transcend themselves or organize resistance, it would be too late.
I predict that the first SI will think more slowly than the best humans. Why would people wait until CPU power supports ultrafast minds to upload or create an artificial intelligence, if they can figure out how to do it earlier?
>That could be the case (roughly 33 %), but it wouldn't be the most
>rational approach. Simply put: more (outside) diversity means also
>a greater risk of being attacked, with possibly fatal consequences.
>If a SI is the only intelligent being in the universe, then it's
>presumably safe. In any case *safer* than with known others around.
>> Let me put it this way: I'm pretty sure your view is incorrect, because I
An unusual enough claim that I will assume it is way off unless
someone constructs a careful argument in favor of it.
>> expect to be one of the first superintelligences, and I intend to uplift
>> others.
>
>A bold claim indeed ;) However, becoming a SI will probably change
>your motivational system, making any views you hold at the beginning
>of transcension rapidly obsolete. Also, being one of the first may
>not be good enough. Once a SI is operational, mere hours, minutes and
>even seconds could be the difference between success and total failure.
>After all, a "malevolent" SI could (for example) easily sabotage most of
>the earth's computer systems, including those of the competition, and
>use the confusion to gain a decisive head start.
Sabotaging computer systems sounds like a good way of reducing the malevolent SI's power. Its power over others is likely to come from its ability to use those systems better. Driving people to reduce their dependence on computers would probably ensure they are more independent of the SI's area of expertise. Also, there will probably be enough secure OSes by then that sabotaging them wouldn't be as easy as you imply (i.e., the SI would probably need to knock out the power system).
--
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list