Re: Thinking about the future...

Eric Watt Forste (arkuat@factory.net)
Tue, 3 Sep 1996 17:39:15 -0700


At 4:47 PM 9/3/96, Dan Clemmensen wrote:
>My hope is that the SI will develop a "morality" that includes the
>active preservation of humanity, or (better) the uplifting of all
>humans, as a goal. I'm still trying to figure out how we (the
>extended transhumanist community) can further that goal.

I was going to write a long response to your assertion that "ab initio"
computation might serve the SI's purposes better than cooperating with
existing human institutions, but I'm feeling too lazy for that right now.
Instead, I'll just say that if I were serious about figuring out how to
further this goal, what I would do is carefully study Hayek's later works
(what I have in mind in particular are THE CONSTITUTION OF LIBERTY and the
three-volume LAW, LEGISLATION AND LIBERTY) and figure out how an SI would refute
them and show that it could actually accomplish its own goals more
effectively through "ab initio" computation than through using the widespread
and otherwise inaccessible local information about the resources it might need.
It's possible that an SI could see a flaw in Hayek's arguments that I can't
see, but if this is a danger you're seriously worried about, we might
benefit from finding the flaws in Hayek's arguments before building the SI.

I know the titles of these books make them sound like they are about
history, political philosophy, and law (and they are), but from a slightly
different perspective, they are also books about computation and
epistemology. And they contain almost all the arguments that I personally
would use to persuade an SI to be cooperative rather than indifferent.

Consider that an SI, as such, can only compute. If it wants to *do*
something (and presumably if all it wanted to do was sit and compute, it
would present no threat to human beings), it would need to build a vast
network of sensorics and motorics. But instead of wasting time and
resources on this intermediary means-project, it could work directly toward
its ends by using the five billion sophisticated supercomputers (with their
attached sophisticated sensorics and motorics) that we call human beings.
It could also use the vast and inaccessible (except through the market)
database of local information about resources that might be useful toward
achieving its ends, but this information is lodged in human brains, and
there is no effective way to get *all* of it out in usable form except
either (1) to use the market and its system of price signals or possibly
(2) to uplift all human beings.
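
(An aside for the programmers on the list: the point about price signals
can be made concrete with a toy simulation. The sketch below is entirely
my own construction, in Python, and every name and number in it is made
up for illustration; nothing here comes from Hayek or from Dan. Each
simulated agent holds a private reservation price that no central process
ever reads. The only datum that leaves an agent is a buy/no-buy decision
at the posted price, and yet a simple price-groping loop still discovers
the market-clearing price.)

import random

random.seed(0)

N_AGENTS = 5000          # toy stand-in for the "five billion supercomputers"
SUPPLY = N_AGENTS // 2   # fixed stock of some resource to be allocated

# Private local knowledge: each agent's reservation price, known only
# to that agent and never reported to any central planner.
reservation = [random.uniform(0.0, 100.0) for _ in range(N_AGENTS)]

def excess_demand(price):
    """Units demanded at `price` minus units supplied.

    The only information the price system extracts from an agent is a
    yes/no purchase decision at the posted price; the reservation
    values themselves stay locked inside the agents' heads.
    """
    demand = sum(1 for r in reservation if r >= price)
    return demand - SUPPLY

# Grope toward the market-clearing price. Excess demand falls as the
# price rises, so simple bisection stands in for Walrasian tatonnement.
lo, hi = 0.0, 100.0
for _ in range(40):
    price = (lo + hi) / 2
    if excess_demand(price) > 0:
        lo = price   # demand exceeds supply: bid the price up
    else:
        hi = price   # supply exceeds demand: let the price fall

print(f"discovered clearing price: {price:.2f}")
median = sorted(reservation)[N_AGENTS // 2]
print(f"median private reservation price: {median:.2f}")

Scale the toy up from five thousand agents to five billion, and replace
the random reservation prices with all the idiosyncratic local facts
people actually know, and you have the database I'm talking about: the
market reads it out one transaction at a time, and nothing short of
uplifting gets it out any faster.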

Since I'd like to see the SI uplift only *consenting* human beings, it
could use (2) on all the transhumanists, and use (1) (as Hayek would
recommend) on all the other, more conservative, human beings.

Eric Watt Forste <arkuat@pobox.com> http://www.c2.org/~arkuat/