On Thu, 26 Aug 1999 hal@finney.org wrote:
> There will be new minds created, conscious and full of potential, but
I think we have a problem with the term "mind" here. We have the
concept of a conscious entity. We have the concept of creating
an "unformed" conscious entity and letting it evolve (children).
We have the concept of uploading oneself onto new hardware.
We have the concept of creating backup copies to be reactivated
in the case of an accident.
> some of them will have different opportunities and training than others.
Q1: Do you "own" the backup copies?
(After all, you paid for the process (or did it yourself) and
it is presumably on hardware that is your property.)
Q3: If you "edit" the backup copies when they are "inactive"
(so they feel no pain) and then activate them, are they new
individuals with their own free will, or are they your "property"
(i.e., slaves)? In some forms, this would be considered
"brainwashing", which seems morally repugnant.

Now, say you are an SI (running, among other things, some
test code leased from the Anders & Robert SIs), and you have a
subprocess running (say an "original" copy of hal, operating in a
virtual reality to see what would have happened if you *had* tried
to kiss the girl on your first date). Say hal[0] kisses the
girl, goes on to marry her, gets browbeaten into living a
"normal" life, never discovers the extropian movement, etc.
The simulation becomes uninformative to the overlord, so
it is terminated and all copies of hal[0] are erased.
Have you done something morally wrong?
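
To make the structure of the thought experiment concrete, here is a
toy sketch in Python. Everything in it (the Subprocess class, the
informative() test, the event strings) is invented for illustration;
it is not a claim about how an SI or an upload would actually be
implemented:

    # Toy model of the overlord's simulation loop described above.
    # All names here are hypothetical.

    class Subprocess:
        """A simulated copy running in a private virtual reality."""

        def __init__(self, name, branch_point):
            self.name = name                  # e.g. "hal[0]"
            self.branch_point = branch_point  # the counterfactual event
            self.history = []                 # the simulated life so far

        def step(self):
            """Advance the simulated life by one event."""
            self.history.append("next life event")

    def informative(proc):
        """The overlord's (arbitrary) criterion for keeping a run alive."""
        return len(proc.history) < 3  # stand-in for "still teaching us something"

    hal0 = Subprocess("hal[0]", branch_point="kissed the girl on the first date")
    while informative(hal0):
        hal0.step()

    # The run is no longer informative: terminate it and erase all copies.
    del hal0  # one primitive for the overlord; the whole moral question for hal[0]

The point of the sketch is the last line: erasure is a single cheap
operation from the overlord's side, which is exactly why the moral
status of the subprocess matters.
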
The female praying mantis eating the male praying mantis as
protein for the offspring -- is this "natural" or "wrong"?
A human killing a human is wrong in the context of our current
reality, but may be perfectly reasonable to an overlord who has
lots of them. Perhaps it is morally incorrect (and should be
punished) if SIs throw black holes at other SIs. Or perhaps moral
systems are totally irrelevant, since in the long run the universe
runs down anyway...
>
> Ethical problems arise similar to issues of child abuse, or of children
> who are not given the same advantages of others. If your neighbor on the
> next asteroid is creating sentient subminds and not letting them grow and
> develop, you might be really unhappy about that. You and your buddies
> might even be able to do something about it. But first you need some
> ethical guidelines for which kinds of minds are proper and which are not.
It seems to me it's a really big mess.
>
> Hopefully, by the time this is a serious problem in 2020 or 2030,
> technology will be helping out. Even without a full nanotech singularity
> we can expect improved medicine so that people can have longer and
> healthier working lives, and technologies to amplify productivity so that
> a smaller work force can provide enough goods for an aging population.
I think it arrives by ~2010, if not before. Scientific American
(Sept. '99) has an interesting piece, "Enter Robots Slowly"
(pp. 36-37), discussing gyro-balanced wheelchairs that can traverse
uneven terrain and climb stairs with an occupant, using only three
Pentium-class processors.
Things are going faster than even optimists like me predict.
Compaq has announced 8-way SMP Xeons (in line with the predictions
I've been keeping for the semiconductor industry for the last few
years). But I was stunned to the point of falling out of my chair
by IBM's announcement of 64-way SMP (next year) and 500 MHz buses.
SMP = symmetric multi-processing.
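
Just to calibrate what those processor counts buy you, here is a
back-of-the-envelope sketch using plain Amdahl's law (my assumption;
it ignores bus and cache contention, so the numbers are upper
bounds):

    # Amdahl's law: speedup on n processors when a fraction p of the
    # workload is parallelizable.  Real SMP machines do worse, since
    # the shared bus and caches become bottlenecks.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (8, 64):
        for p in (0.90, 0.99):
            print(f"{n:2d}-way SMP, {p:.0%} parallel: {speedup(p, n):5.1f}x")

Even 99%-parallel code gets only about 39x out of 64 processors,
and 90%-parallel code only about 8.8x -- which is why the 500 MHz
bus may be the more important half of the IBM announcement.
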
My gut instinct tells me that the biotech industry will provide
many similar surprises over the next 10 years.
Anyone who thinks nanoassembly will not arrive until 2030 is a Luddite!
Robert