<<
I feel that the effects of a truly malicious >AI would be much more
dramatic.
I agree, one could imagine - very dramatic! But I am still wondering
what would cause an artificial life or intelligence to be malicious?
Ok, first of all, if the intelligence was formed by error-correcting neural
net activity, wouldn't it spot a flaw like that early on and eliminate it?
Would they exist in a purely cause-and-effect reality - assuming they are
still machines (here not referring to >H SI's but true man-made {or
self-made} computers - AI and AL live in a closed body - at least a
"controlled system" - of a "dry" nature)? Then how, other than the
programmer instilling these "un-virtues", could the artificially malicious
occur in an A-life application?
We still think in terms of survival as being a "threatenable" state -
part of the mortality meme, I guess. Would an AI completely capable of
backing itself up endlessly really see (what we think of as) death as a
threat? Malicious also implies that it would inherently dislike us. Why
would it?
[Perhaps wrongly - I am assuming that a machine which replicates
itself, unlike us, would know what the probabilities of its own survival
will be - way in advance - and be able to formulate zillions of
alternatives for every contingency - and as I asked before - the needs,
the actual requirements of an artificial life form - wouldn't they by
definition be so far removed from anything we think of as "predatory"?
Therefore it is still hard for me to put together a comprehensive picture
of WHY we would be deemed threatening enough to be destroyed. Unless we
actually began dismantling them out of our own fears, of course, like in
so many films and novels.]
And what are their needs? Electricity? Circuitry? *Input*? ; - )
Do we assume they would, by default, since humanity built them, take on
our malicious and capricious primate/predator species attributes?
SSI? AI as a bigger, more evolved alpha-ape with a bad attitude? ; - )
Of course then we may actively seek to develop predatory weaponry
types of AI, I guess. That is dangerous.
>> An >AI can easily augment its own intelligence by adding computing
capacity and by other means that it will be able to discover or develop,
by applying its intelligence. This is a rapid feedback mechanism. Thus, as
soon as a moderately inventive AI comes into existence, it can become even
more intelligent. If the AI has the goal of destroying humanity, it would
be able to do so within weeks, not decades. Moreover, unless the AI has
the active preservation of humanity as a goal, it's likely to destroy
humanity as a side effect.
>>
Yes, this scenario I can easily see, especially the inadvertent overriding
of our environment by making sweeping and (for them) rational changes in
the ecosystem. Like we pave over the forest to build our city.
"....Did you hear that squishing sound?" ; P
>> This same argument applies to any SI which is capable of rapid
self-augmentation, not just a straight AI. Since I think that any SI likely
to be developed in the near future will have a large computer component, it
will be capable of rapid self-augmentation.
Yes, and for this reason - the 'borg' image frightens me more than any
other monster to date. Consciousness eating consciousness.
>> My hope is that the SI will develop a "morality" that includes the
active preservation of humanity, or (better) the uplifting of all humans,
as a goal. I'm still trying to figure out how we (the extended
transhumanist community) can further that goal.
>>
YES!
Built-in, unreprogrammable morals! Hmmm... let's see, >H computer ethics
101, where do I sign up? : - )
Nadia Reed Raven St Crow