Damien R. Sullivan writes:
> Actually I'm not certain of the limits of insect sensors themselves,
> although I think they are fairly severe. But the combination of tiny
> sensors and teeny brains adds up to rather limited perceptions.
But insects perceive and represent infinitely more than viroids do, so doesn't it seem a bit presumptuous to assume the bipedal ape's position on the smartness scale can't be topped by similar increases? Our senses are limited as well, as is our perception. We augment our sensorimotorics _and_ our minds (Mathematica 3.0 for Linux is great!) all right, but as a whole we're already participating in upgrades towards godhead. So far these upgrades are gradual, neither invasive nor resulting in extracorporeal agents of complexity similar to ours (unless we consider the metaman level), but, after all, we've only begun our ascent.
> I really have trouble believing in SI we can't understand at all.
Strange, I really have trouble believing in an SI I can understand to
a meaningful extent. If I could, it wouldn't be an SI, or I would be
its peer. It could be governed by some simple laws (the degenerate
mindstuff I hinted at), but how the heck would we, as we are now, tell?
The whole argument about the Singularity was not whether it would be
magical (it might or might not be), but that we wouldn't be able to
tell which way it would turn out. The Singularity is a developmental
_prediction horizon_, after all.
> Things that move too fast, sure. Things that are Einstein+Feynman+Bach
> +Turing+Darwin rolled up, sure. But not the magical SI. In fact,
'Sufficiently advanced technology is indistinguishable from magic'? Sure, unwittingly we might be on the verge of a GUT/TOE, but that doesn't mean we could instantly transform such knowledge into applications.
> magical SI through AI seems a bit incoherent; you exploit the
> Church-Turing thesis, then assume the result does something beyond
I can't follow your reasoning here. Would you care to explain?
> -xx- Damien R. Sullivan X-)