Re: >H Haiku: Engineered CanOpener

christophe delriviere (darkmichet@bigfoot.com)
Sun, 03 Jan 1999 13:09:16 +0100

> All human persons generally want the same thing -- food, health, happiness, love,
> shelter. As technology increases the human race should be able to
> provide the material necessities to sustain life for all people
> regardless of race, creed, gender, or national origin.

Certainly... and sustaining the lives of other people and relieving suffering is also good for you,
since you will probably receive positive feedback worth more than your investment afterward,
even if you are unaware of it... I guess...
Of course, negative feedback is always possible, but I believe that, statistically, the positive feedback far outweighs it.

> It seems logic
> and reason point humanity in one direction and one direction only.

It is also a good principle, as far as possible, not to coerce any part of humanity in an arbitrary direction.

> However, many transhumanists, including myself, will want to belong to a
> superintelligence governed by some type of principles. I believe
> freedom to allow an individual to possess a unique sense of identity
> will be one such principle.

If it is their choice, fine; but it would be far better if they also had the ability to change their minds about it afterward.
As with flesh sculpting, it would be good to promote the reversibility of decisions, and information about it, whenever that is possible.

> I think the following list member discussion is highly apropos:
> Andrew Ducker wrote:
>
> > My main reasons for being pro-transhumanism are that it provides a high
> > likelihood of more pleasure, less pain, more cool toys, more happiness
> > world-wide.
>
> christophe delriviere <darkmichet@bigfoot.com> responds:
>
> >Well... it is certainly not necessarily true... I also *consider* myself a
> >transhumanist, but I certainly don't believe that optimizing only ourselves
> >according to our private purposes is sufficient to live in a "good" society.
> >The problem is very easy to understand... we also have to give some "great
> >purposes" to the society, or at least to some important parts of it. If we
> >are not able to define such "great purposes", such as, as a mere example,
> >space colonization, our society will be locked into sub-optimization.

> On our transhumanist road to a better universal society the famous
> philosophical interrogative of, "Does the end justify the means?" will
> be asked repeatedly. I hope, for our own sakes, mankind can figure it
> out.

Ha... perhaps I'm dumb and it's late for me, but I don't see the connection between my prior little argument and your somewhat anthropocentric ethical one...??

Anyway... sure... we will have to be as certain as possible that the actions we take will give the results we are hoping for...

This is an important principle for any intelligent system and any purpose ;)

Delriviere
Christophe