Re: transitional thinking

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Dec 14 2000 - 01:16:21 MST


Justin Corwin wrote:
>
> JR sez:
> >Good question. Also, how do we keep the idea of a transcendent
> >machine from becoming a religious meme?
>
> i would argue, no offense intended at all towards the singularity folks,
> that it already is a religious meme. the singularity is supposed to solve
> every problem, from the time you burnt your toast to cats and dogs living
> together in harmony, which smacks of faith to me.

Improving things, even broadly, "smacks of faith"? How so? Technology
in general and human freedom have improved things on this earth beyond
many of the dreams of even the most hopeful pious folks. Does this mean
that science/technology and human freedom are religious memes?

Now, that said, there is an element of "we will fix everything and make
everything perfect" in some of our thinking some of the time. That is
probably unrealistic. We will be able to do an almost (no, an actually)
unimaginable number of things, good and ill, but I think expecting
utopia is going too far. I expect things to get a lot more interesting,
a lot less limited in a variety of ways, and a lot more joyous, but I
also expect a whole slew of ills and stupidities will still be present.

On the other hand, I have nothing against religious memes judiciously
applied with a good helping of common sense and plenty of real-world
science. It may be that such a mix is precisely what is required to
move enough of humanity quickly enough to make the singularity not only
real sooner but far less cataclysmic than it may otherwise be.

>
> not that i think greater than human intelligence won't solve problems we
> can't on our own, but perhaps the singularity will not be the incredible
> force you're expecting. what if self modifying humans with their patterned
> parallel processors are never overtaken by machine intelligences? it's just
> as likely, since a self modifying human has less far to go to become more
> intelligent than a human; all he would have to do is become ANY smarter. a
> machine intelligence doesn't exist yet. going from 60 to 61 is easier than 0
> to 61, even with our design flaws.
> just a theory.

Not sure I see where you're going with that in context. Humans will
eventually become quite different from humans as we think of them today.
At least the ones with enough flexibility will. I am more than a little
worried about our raw intelligence, strength and durability increasing
drastically without us changing some of our less desirable traits,
largely inherited from our merely-meat days. Some of those traits are
quite inappropriate and downright dangerous for post-humans.

>
> i do, however, believe that expert systems and ultratech/ultrasoftware will
> play a role in improving us, as we're unlikely to do it just through
> biological manipulation (gene therapy, bioware, etc). although you could just
> add a bunch of grey matter and see what happened.
>

Not much of anything if you don't add a lot of other stuff to support,
organize and integrate that additional grey matter.
 
>
> the bottom line is that transcendent AI is cool, and massively
> useful (theoretically). but it's also not here. so we need to be careful
> where we put our trust (faith?). as long as we're not sitting on our butts
> waiting for momma AI to come and fix our problems, the religious nature of
> "awaiting the singularity" won't be an issue.
>

Faith? Working our butts off to produce the future doesn't leave a lot
of time for faith, except maybe the kind that is simply the belief that
what one is doing is really important and, if it can be produced, will
be for the good of everyone (more or less).

> my big question is, when the singularity does occur, how do we keep people
> from treating it like a GOD, or a Demon? no matter how logical and well
> constructed it is, being treated like a god would hamper its
> situation, its flexibility, and its psychology (assuming a general
> intelligence would have a similar psychology to other general intelligences,
> such as humans; being given absolute worship and power has a tendency to
> mess with your head and hamper your effectiveness)

The singularity is not a thing per se, much less a candidate for God or
Devil. An SI might be seen as one. Humans really aren't that general
an intelligence compared to an SI. We are highly specialized and quite
inefficient at getting beyond our specialties except by building other
systems to do what we can't. We have very little idea what a true AI
psychology will be like. But I doubt seriously that the AI would bask
in the worship of masses of humans. What for? It might accidentally
come to believe it is infallible or that it can find an answer to any
question it can formulate. It is possible that it would go down a
worrisome path of considering humans a colossal bore and bother at best.

- samantha
