<< Are you defining "rational" behavior as being the opposite of or in some way inherently different than "self-oriented" behavior such as staying alive? If so, you might want to examine more closely whether that is true. >>
If one wishes to acquire more knowledge and also self-actualise, then he or she would need to stay alive to do so. We could term doing otherwise irrational, though it might still qualify as rational if the death gave others a chance to achieve the same goal of acquiring knowledge.
Yes, I understand that this definition may lack sufficient detail; I have not worked all of it out yet. In general, the desire to stay alive would arise from the desire to acquire knowledge, rather than stand by itself as some might say of the "right to life". We cannot understand this idea merely through the filter of "rights"; we need a new perspective to do so.
> > We cannot meet the objectives of such a society
> > without sufficiently advanced technology (such as
> > full-scale automation and advanced information
> > processing systems), which we still have not achieved today.
>
> You seem to be saying that a society must have vast amounts of automated equipment and computers in order to be
> "rational". Do you mean to say that any society which does not have such equipment is irrational?
Not exactly, but something close to that.
Let us imagine that a very rich king in the ancient past wanted to build a huge freezer in a desert mountain cavern. No matter how many artisans or engineers he solicited, none could do it; the technology simply did not exist in ancient times. Someone could dig deep into the ground and rig up some evaporative water-cooling system, but such a system cannot cool things below the freezing point of water (evaporation can bring air no lower than its wet-bulb temperature, which in a hot desert sits far above zero). Thus any such attempt would fail. Contrast this with modern air-conditioning technology powered by the abundant solar energy of the desert today.
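To make that physical limit concrete, here follows a rough sketch in Python. It uses a published empirical approximation of the wet-bulb temperature (Stull's formula, valid for roughly 5-99% relative humidity); the desert figures I plug in serve only as illustration, not as engineering data.

    import math

    def wet_bulb_c(temp_c, rel_humidity_pct):
        """Approximate wet-bulb temperature in degrees Celsius.

        Evaporative cooling cannot bring air below this value.
        """
        t, rh = temp_c, rel_humidity_pct
        return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
                + math.atan(t + rh)
                - math.atan(rh - 1.676331)
                + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
                - 4.686035)

    # A hot, dry desert afternoon: 40 C air at 15% relative humidity.
    print(round(wet_bulb_c(40.0, 15.0), 1))  # about 20.7 C -- far above freezing

So even a perfect evaporative rig in the king's desert bottoms out some twenty degrees above freezing; no amount of artisanship could have bridged that gap.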
Similarly, even if someone 500 years ago had thought up what I propose today as a Rational Society, they could not have built it, no matter the expense or the design they had in mind. They simply lacked the technology. To them, what I propose today might seem like a weird kind of utopia.
However, given how technology has accelerated over the past 500 years, I can confidently say that, unless some global cataclysm occurs, we will have sufficient technology to create the society I propose by the end of this century (though, more optimistically, I would say by 2050).
The society would have to rely on sufficiently advanced information exchange systems, artificial intelligences and other computing to function. To put it another way, what I propose represents a class of social systems designed for future technology, rather than a patched-up version of our current models extrapolated into the future.
It may sound funny that I propose something that cannot even work straight away, but it would seem much more serious to someone from, say, 2100.
<< I would propose an alternative definition. "Rational" in ordinary usage refers to the use of Reason. I would define someone as "rational" when they are consistently using reason to guide their actions. I would define a rational society as a group of people who use reason and interact in some mutually beneficial way with each other. >>
Well, my definition by no means excludes such behaviour. In fact, I would like to use your definition, except that I would get accused of intolerance for it, and that it expresses the concepts I wish to convey too vaguely (which I know would lead to too many unproductive debates). Perhaps I ought to use a different term, but I have not thought of one yet. Any suggestions?
<< This is why you are striving to find some non-capitalist economic system where your needs will be taken care of while you focus on research. >>
Yes, this expresses what I wish to state, but the economic system does not necessarily exclude capitalist ideas. As I mentioned, I currently call it the Hybrid Capitalist Intellicratic System (HCIS). It would combine different systems, since no single one would fit the problems well enough to suffice.
<< This is, I think, what you were hoping could be done by automated equipment, but without artificial intelligence I don't think it could be done. *With* artificial intelligence, you may end up having to pay as much attention to what your artificially intelligent robots want as you would have to pay to what your fellow humans want in order to get them to do things for you. >>
This would probably not pose a problem, in that one does not necessarily need to enslave sentient AI to perform such functions. For instance, we can build special-purpose AI that handles certain problems, like manufacturing food, without giving it sentient capabilities, simply because the problem does not require such complexity.
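As a toy illustration of what I mean by special-purpose, non-sentient automation (the device, names and thresholds here come purely from my imagination), a food-growing cell could run on a plain feedback rule with no goals of its own:

    # Hypothetical food-growing cell: holds temperature in range by a
    # fixed feedback rule. It has no goals beyond this rule -- no sentience.
    TARGET_C = 22.0    # illustrative setpoint
    TOLERANCE_C = 1.5  # illustrative dead band

    def control_step(current_temp_c):
        """Return the actuator command for one control cycle."""
        if current_temp_c < TARGET_C - TOLERANCE_C:
            return "heat"
        if current_temp_c > TARGET_C + TOLERANCE_C:
            return "cool"
        return "idle"

    for reading in (18.0, 21.9, 24.5):
        print(reading, "->", control_step(reading))  # heat, idle, cool

However elaborate we make such machinery, it stays in this category so long as it merely executes the rules we gave it.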
As computing technology progresses, we find that computers can now solve many problems they once could not. For instance, computers beat chess masters (at chess, not literally, of course). One could say that chess does not require intelligence, but I suspect that soon every mental activity we hold so dear will turn out not to require intelligence either.
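To show why one might say chess-playing "does not require intelligence": the core of a chess program amounts to mechanical look-ahead. Here follows a minimal minimax sketch over a made-up game tree (real programs add enormous engineering on top, but nothing resembling sentience):

    def minimax(node, maximizing):
        """Exhaustively score a game tree of nested lists; leaves hold
        numeric scores from the first player's point of view."""
        if isinstance(node, (int, float)):  # leaf position
            return node
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)

    # Toy position: two candidate moves, each met by two replies.
    tree = [[3, 5], [2, 9]]
    print(minimax(tree, True))  # 3 -- found by brute enumeration, not insight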
I speculate that only general-purpose problem solving would require sentient AI; we can solve the rest with logic-based, neural-net or other techniques. As technology progresses, we will see how things turn out.
<< Once an entity is intelligent, it will have its own priorities and goals. If you do not offer such an entity something which *it* values (such as money), why should you expect it to do anything for you? >>
Well then, I admit that I have insufficient knowledge to answer whether such an AI would share the goals of the proposed society it originated from.
I can give a short answer as to how such a sentient being would receive treatment: other members would treat it as one of them. In other words, it would have freedoms and liabilities similar to those of other members, whether or not we can explain how it achieved sentience.
In my view, money can only offer a means to an end, not an end in itself. To mistake the symbol for the actual object, or the motivator for the goal, would prove foolhardy for anyone who wishes to achieve a set of goals. When the time comes, the sentient AI will choose, and then we will see what it chooses. I would not want to impose any choices or values on it, nor would I want others to do so.
<< Rather than ignoring this problem and hoping for new technology to create what you call a "rational" society, I would suggest you find a way to make money from your research. That seems to me to be a way you could actually live the kind of life you seem to want. >>
For the former, I will continue to work on the design for such a society. I know the technologies will surely arise, and I ought to concentrate on the social structures more than the technological ones.
For the latter, I intend to implement your suggestion; I have thought about it for some time already. I see no apparent alternative if I wish to achieve my objectives. Unfortunately, I do not believe that I could live such a life in any known society.