I wrote:
>What if the first upload is a cryonics patient, who is legally "dead"?
>What about the apparent majority today who can't accept a machine
>being "conscious"? What if the first upload company keeps their uploads
>from talking with and interacting with ordinary people in "human" ways?
>(Software just does useful things; you can't see how.) And what if
>you can make *trillions* of dollars selling such software service?
>Won't that motivate them to do whatever paperwork/lobbying it takes?
Curt Adams commented:
>People can and do deny the obvious. Something like 40% of Americans believe
>the world was created in a big "poof" 6000 years ago. Slightly over a century
>ago most people thought that women were intellectually incapable and Africans
>were basically a different species, in spite of extensive evidence against
>both propositions. ... Many, many people will be profoundly motivated
>to believe
>that thinking machines aren't "really" thinking and that they do not deserve
>human-like rights. I predict they will believe what they want to believe; in
>my experience the readiness to accept disproof of cherished beliefs is a rare
>character trait.
Since I have the stat handy: in 1999 Gallup found that 47% of Americans
believe God created humans in basically their present form in the last
10,000 years or so.
Hal Finney responded:
>Paperwork and lobbying won't necessarily do the job if people are morally
>opposed to the project. Look at the genetically modified food issue
>we have been discussing.
Yes, but is it so clear what the directions of people's "moral" dispositions
are in this case? To march against violation of cryonics patients, people
would have to accept cryonics patients as being alive, and therefore that they
have been killing most of elderly humanity for many decades. To march
against upload working conditions, people would have to accept machines as
conscious, as well as think of upload VR palaces as miserable places. There
seems to be far more basis to let people think of uploads as inhuman than there
ever was to think of black slaves as inhuman.
>Using cryonic suspendees as the guinea pigs for uploading experiments
>would be the height of immorality. They can in no sense be considered to
>have given informed consent. I couldn't see it happening in a political
>climate like today's. Even for the illegal labs, suspendees would not
>make good subjects for the uploaders, as their brains are likely to be
>in worse condition than fresh brains, and the personalities involved
>are not only unknown but probably quirky at best.
Cryonics patients should continue to be frozen, and many could certainly
give consent. If by "informed" you mean given a detailed description of
specific experiments, no, they couldn't, but that applies to just about
any cryonics revival procedure. I agree that quirky personalities would
be a negative, but perhaps not an overwhelming one. I think even badly
damaged brains will be manageable - you would have already verified
your models of neurons in dishes and in animals. At this point you just
need to infer original structure through the cracks, which should be
easier with vitrification.
>Keeping the existence of the upload secret seems highly problematic.
>If you're using the upload to do intelligent work, that probably is going
>to require a certain amount of natural language I/O. You'd have to screen
>his outputs to make sure he's not saying, "Help! I'm stuck in this box!"
I wasn't talking about secrecy, just keeping it out of people's faces.
People seem to accept "taking advantage" of others in all sorts of ways as
long as it isn't vivid to them. People give to panhandlers and say no to
mailed charity letters. People happily buy products from foreigners working
in conditions they would be outraged to see their neighbors working in.
>Plus, what are the odds that the destructive upload is going to work,
>first time? Most things don't. A number of people are going to be
>killed before you get an upload that works. The moral opposition to
>such an effort would be overwhelming, especially when it comes from
>people who will be put out of work if it succeeds.
Again, to get upset about the destruction of a frozen head you have to
accept it as alive in the first place. And the risk of a destructive
scan failing to pick up the relevant info is different from the risk
of any one simulation not being tuned right.
Now opposition due to fears of being put out of work is a whole
different matter. But it isn't the same thing as these other "moral"
concerns you've mentioned.
Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030
703-993-2326 FAX: 703-993-2323