Re: MEDIA: Professor cyborg

Jim Fehlinger (Thu, 21 Oct 1999 22:21:31 -0400) wrote:
> There is no reason to expect that machines as such would have any
> greater tendency to villainy than people. In the Warwick scenario,
> why does he depict machines as being evil? Why not people?

Actually, Warwick doesn't expect that machines will have any **greater** tendency to villainy than people. But he doesn't expect them to have any greater tendency to benevolence (toward humans) either. From Chapter Twelve of _In The Mind of the Machine_ ("A Fantastic Future?"):

"Put yourself in the position of being a very intelligent machine. You are part of a new breed. A group called humans still exists and there are many of them. Having been the dominant life form on Earth for many years, they are not too keen to give in to the new breed, even though they originated it...

So what do you and the other members of the new breed do? You could try to be nice to the humans. Maybe in time they will be happy to be second-class citizens. But why should you? The humans are less intelligent than you are, and given half a chance they would most likely try and end your life. It is quite dangerous to give humans any power at all, as they would be likely to use it against the new breed...

In reality we can only speculate as to how members of the new breed would treat humans. After all, the new breed is more intelligent than we are, so it is difficult to judge. Perhaps the most fruitful approach is to look at what has happened in the past and to project this forward. Nietzsche gave an apt description when he said firstly, 'All creatures hitherto have created something beyond themselves.' He went on, 'What is the ape to man? A laughing-stock or a painful embarrassment? And just so shall man be to the superman: a laughing-stock or a painful embarrassment.' The superman in Nietzsche's words is likened here to the new breed of machine.

So, to get a best-guess picture of how the new breed machines would treat humans we should look at how humans have treated those from whom we have evolved. What do we do with chimpanzees or other animals? Do we treat them like brothers? Do we regard them as our equal and elect them to government? Apart from one or two exceptions [?!?] we certainly do not, but why should we? They are less intelligent than humans. It would be an embarrassment to have an orang-utan as prime minister or president. What we do to apes and our other distant evolutionary relatives is shoot them, remove their natural living environment, cage them and ourselves stand outside the cage and glare at them. We use other mammals to make our lives more pleasant, killing them for food and controlling their lives in every respect. In the UK, foxes are hunted and killed, even now, just for sport.

...If you were a new breed machine would you trust a human? ...We must expect that the machines would probably wish to dominate and that they would force home their domination in both mental and physical ways. This is the way we humans behave at present, and as the machines are more intelligent, surely they would learn from our experience."

I gather that Warwick thinks that humans might be able to escape this domination via uploading or cyber-enhancement. He is very dubious about the first possibility, and not much more sanguine about the second. Also from Chapter Twelve:

"Is there anything we can do to stem the tide? ... [H]ow about making ourselves more intelligent, or, as Moravec has suggested,... shoe-horning ourselves into machine carcasses where our lives can continue in a new body and with a new brain? ... [T]his, though, requires the ability to obtain a detailed plan of the way a human brain is arranged. Technically we are at present a long way from achieving this. Indeed, we are only just about able to do such a thing with lower insects...

In order to get a human brain's performance to increase, would it not be possible ... to connect extra memory or extra processing capabilities directly on to the brain, possibly in the form of silicon chips? This is, I believe, a much more realistic suggestion than the Moravec idea...

...But a major concern is that this would imbalance the present human brain set-up. Taking in other signals and processing or understanding them would require some of our present neurons to be redeployed... The new inclusions would effectively be special-purpose partitioned brain blocks, adding to the total brain size but taking up present neurons in order to be operative. We would have to lose something in order to gain something else.

So, realistically, there appears to be no human comeback to the rapid increase in the size and power of intelligent machines. On the assumption that machines will be, after some time, more intelligent than humans, then we have no apparent answer. Humans are severely restricted by our biological framework..."

And from Chapter Thirteen, "Mankind's Last Stand":

"The main points of this book can be summarised as follows:

  1. We humans are presently the dominant life form on Earth because of our overall intelligence.
  2. It is possible for machines to become more intelligent than humans in the reasonably near future.
  3. Machines will then become the dominant life form on Earth.

If asked what possibility we have of avoiding point 3, perhaps I can misquote boxing promoter Don King by saying that humans have two chances, slim and a dog's, and Slim is out of town..."

Oh, and by the way -- it was from the UK on-line bookstore WHSmith Online that I ordered this book, not Amazon UK. I was confusing this book with a subsequent purchase of the "millennium" edition of _The Lord of the Rings_ ;-)

Jim F.