Spike Jones wrote:
> My reasoning goes thus: since silicon based intelligence and carbon
There is no silicon-based intelligence now, and I doubt there ever
will be. Silicon doesn't form intricate, stable 3D structures very well,
so it probably has to be carbon (see the latest http://sciencemag.org).
> based intelligence have different diets and different habitats. I have
Same diet: energy, and atoms to build your substrate. Same energy, (mostly)
same atoms. Same habitat: the surface of the Earth, at least initially.
Houston, we seem to have a problem.
> a notion that emotion is seated in intelligence. Super intelligence
Not necessarily. You have to build it that way.
> then means super emotions, and so... I hope this is how it works...
> a super AI would love us. It (or they) would see how it (or they)
You can't build a super AI; no one is that smart. You can only create
a seed AI, an AI child, if you wish. If you make it basically human,
by looking at which structures are created during neuromorphogenesis
and replicating their functions, and rear the AI child among humans,
it will have similar emotions. Initially. (Unless you broke something
and raised a psychopathic AI without knowing it.)
> and humans could work together, help each other, etc. There
> is no reason why we and Si-AI should be enemies, since we
> can coexist.
No sir, superintelligence is something qualitatively different.
The positive-feedback runaway, which you cannot follow, soon
confronts you with something incomprehensible. A supernatural
force of nature, if you wish.
> Another angle is this: a more advanced civilization has the luxury
> of trying to protect and preserve wildlife. The western world
> does this. Those societies where people are starving have little
Thank you, but the wildlife is dying just fine, despite the protection.
> regard for preserving wildlife, eh? So the AI would be a very
> advanced civilization, and we would be the wildlife. Temporarily
> that is, until we merge with the AI.
So let's merge with the ants, and the nematodes, and the gastropods.
> Of course this analysis could be wrong, we just don't know what
> will happen. On the other hand, we *do* know exactly what
Exactly: we don't know what will happen.
> will happen if we fail to develop nanotech and/or AI. spike
Yes, low-tech scenarios are more easily understandable, and none
of them looks very pretty or sustainable.
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:16 MDT