Re: One Unity, Different Ideologies, all in the same universe (2)

From: Chen Yixiong, Eric (cyixiong@yahoo.com)
Date: Thu Dec 27 2001 - 12:42:02 MST


I think I missed some points in the earlier consolidated reply that I should address.

<< What you have stated seems to make solid sense. But it isn't our world today as it's currently constituted amongst nation states
and ethnic groups. I surmise that the format for such a civilization order, as you have listed in your post, suggests massive
changes in global human society preceding this development. >>

Yes, not just massive changes in policy but massive changes in the thinking of world leaders as well as of "common" citizens.

<< Your post is prefigured in Hans Moravec's Robot, in the paragraphs titled "The Death of Capitalism," in which Moravec predicts
the collapse of traditional capitalism through the application of advanced robot technology (what else?). Moravec envisions the
nations of the world being transfigured into tribal units, as the great cities decline and the populations are distributed, and
supplied by goods and services literally piped in, or produced in situ. >>

This sounds like a utopia; however, the Federation does not intend to set up a utopia. It might simply turn out that this scenario
follows the lines of the Federation proposal.

I would probably find the book interesting to read, though. Apologies to fiction lovers: I read far more non-fiction books than
fictional ones, even in my favorite genre, sci-fi. Only thanks to some of you here (on the Extropy list) did I read works like "A
Fire Upon the Deep"; otherwise I would have remained happily ignorant of cutting-edge sci-fi.

<< I respectfully maintain (not that anyone seems to care ;-) ) that this epoch is definitively NOT here yet, and it's dangerously
premature, and potentially damaging to our national/global economy. Not everyone will see things as you do, and even game theory is
not based on anything that excludes Reciprocity and Mutuality. Your goal and rule-set is admirable, but if Saddam decides to pee in
the soup while you employ your meta-goals, we all could get screwed. We need new technologies in hand for something as you have
described to work. >>

The original proposal I posted here does not aim to apply to all the states of this world. In fact, I started work on Project
Sociologistics and the Ascension Colony Project with the pessimistic (but probably realistic) assumption that only new civilizations
in space would consider this approach. It does not aim to revolutionize world politics in any case; that would be, at most, an
afterthought.

The Earth-based societies would most likely remain too entangled in their old thinking to consider such an approach. I also suspect
that only in such a colony would the Singularity develop; hence Project Sociologistics does intend to require sentient AI. Earth
nations would probably ban research into such sentient AI, viewing it as a great disturbance to their power structures as well as a
big risk, perhaps using the UN to force all member countries to do so.

The thinking behind "ideological communities" still remains alien to many, perhaps because quite a lot of people have no purpose and
no ideology in life. When I suggested this idea, one person even called it a "cult". Some who do not understand what I mean have no
qualms calling it "communist" either, and mistakenly see both projects as one single project. Others point me to some island states
without considering their relatively "closed-source", old thinking and, of course, their usual lack of ideology.

In fact, many find it difficult to accept that setting up a space colony could be easier than changing existing society. They
complain about the costs and technical difficulties and classify it as daydreaming. Yet the main difficulty lies with the
self-reinforcing structures our societies have built up over many years of operation to prevent radical change (including laws,
bureaucracy, police, military, propaganda and much more). One would sometimes find it easier to excavate a mountain than to get
one's government to change an inefficient law.

Without the evolutionary equivalent of a meteor strike, the dinosaurs would still be living happily on planet Earth. I agree with
you in this aspect.

I hereby again strongly suggest that we seriously consider building a space-based community rather than an Earth-based one, and
focusing design-ahead efforts on the social problems rather than the technological ones. If we can't stop fighting among ourselves,
inventing nanotech would have little use but to raise the stakes. If we don't even dare to venture into the unknown, or let others
do so, then daydreaming about technology blowing all obstacles away would have little use. If we still have such petty quarrels when
we live in outer space, I think very few aliens would like to befriend us.

Please consider this seriously, especially before writing your next sci-fi story or building another new society. Incidentally, if
anyone would like to write a sci-fi story on Project Sociologistics or the Ascension Colony, I would like to participate too.

<< I think you must be confusing "global optimum" and "perfection". Mistrusting the search for perfection is a common human
heuristic, which is often quite correct in an imperfect world where predictions of perfection are more often produced by wishful
thinking than by an unbiased model of a real opportunity for perfection. Of course, exporting this heuristic into the
superintelligent spaces is more often the product of anthropomorphism than of principled analysis. >>

I am starting to suspect that either a global optimum for human societies does not exist, or more than one such optimum exists.
However, the proof of this conjecture remains far from our grasp, because it requires computational resources, massive datasets and
new techniques that our puny human minds cannot hold, cannot compute, or have not yet invented.
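
To illustrate the "more than one optimum" case with a deliberately toy example (a hypothetical one-dimensional "welfare" function of
my own invention, nothing like a real society), consider:

    # Toy illustration (hypothetical): a one-dimensional "welfare" function
    # with two equally high peaks, so no unique global optimum exists.
    import numpy as np

    def welfare(x):
        # Two Gaussian bumps of equal height, centred at -2 and +2.
        return np.exp(-(x - 2.0) ** 2) + np.exp(-(x + 2.0) ** 2)

    xs = np.linspace(-5.0, 5.0, 10001)
    ys = welfare(xs)
    best = ys.max()

    # Every sampled x whose welfare is (numerically) at the maximum.
    optima = xs[np.isclose(ys, best, atol=1e-6)]
    print("maximum welfare:", round(best, 4))
    print("attained near x =", sorted(set(np.round(optima, 1))))  # roughly [-2.0, 2.0]

Even in this trivial case a single "best society" is not well defined; the real question, of course, involves an objective that no
one can agree on, let alone compute.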

I purposely mentioned "human societies" because even transhumans would have problems in their societies that they themselves cannot
solve (the smarter you get, the more complex the problems that arise). To get a rather simplified view of this, imagine trying to
understand how your own brain functions. Then imagine attempting to understand how these brains function and interact with each
other, in ways both observed and not observed (but possible).

Hence, Project Sociologistics does not concern itself exclusively with mathematical analysis and the highly simplified but
unrealistic assumptions used to solve problems (unlike the flawed approach taken by conventional economics). The problems
encountered have far too much complexity for this alone to handle.

It wishes to use an alternative approach based on systems theory, fuzzy experiments and heuristics, rather than proofs and
calculations, to tackle this highly complex problem. It does not view parts of the system as standing by themselves, but as parts of
a "whole" of interlinked connections.

More importantly, it challenges the old and traditional views of the past. It asks us to consider whether sentient beings and their
societies should have a higher purpose beyond merely surviving or the mere pursuit of self-interest. It asks us to consider the
prospect of tolerance implemented between communities rather than only within a community. It asks us to consider multiple solutions
to our problems instead of declaring that only one's own solution has validity. It tells us to uproot our thinking from the past and
plant it into the future, so that we do not go crashing into blind ravines so often because (as Marshall McLuhan wrote) we know the
future only by looking into the rear-view mirror.

<< Hm. Well, your Sandberg Federation must be an absolutist solution too, since it doesn't seem to allow for communities disobeying
the laws of physics. >>

Gödel's Theorem rears its head again: you cannot have complete knowledge together with a complete yet consistent solution. Even
Gödel's Theorem has its own incompleteness, because it does not apply to itself (or no such theorem can exist, due to mutual
contradiction).
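
For reference (my own paraphrase of the standard statement, not something from the quoted post), the first incompleteness theorem
says roughly:

    % First incompleteness theorem, informally: for any consistent,
    % effectively axiomatized theory T that interprets basic arithmetic,
    % there is a sentence G_T that T can neither prove nor refute.
    \exists\, G_T :\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T

Completeness and consistency cannot both hold for any sufficiently rich formal system; the analogy to social "solutions" is, of
course, only an analogy.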

In such a case, we have little choice but to forge ahead with this system instead of twisting logic to prove it. If we come up with
a better system, then we can consider that system; if not, then we should probably stick to this one (which is how Intellicracy
works).


