Re: The Singularity

Bryan Moss (bryan.moss@dial.pipex.com)
Thu, 9 Jul 1998 17:16:55 +0100

Dan Clemmensen wrote:

> > People will have smaller faster more
> > intelligent computers.
>
> Does this mean that you feel that society will
> be structured substantially as it is today? Do
> you see any computer-related differences in the
> society of today versus the society of 1978?

Computers have made major contributions to trends in wealth, employment, and education, but there are still no radical differences in society.

I think the basic structure of society is likely to persist for the next 20 years. People may live longer, consume more information, etc., but that is not a radical change and certainly not an entirely unpredictable one.

> > I think my real objection with the SI scenario
> > is the idea that people think intelligence
> > must also mean the ability to wake up one
> > morning and decide to wreak havoc on the
> > mortals.
>
> Actually, the consensus position of the radical
> singulatarian community (consisting of me and
> perhaps 'gene ;-) ) is that the motivations and
> actions of the SI are intrinsically
> unpredictable. There is no reason to predict
> that the SI will be inimical, benevolent or
> indifferent to humans. I'm personally hoping for
> benevolence, and I think the potential benefit
> is worth the risk.

I would say there is a greater chance of a benevolent or "useless" SI than of a hostile or useful one. You're talking about emergent behaviour from an environment fundamentally different from our own. Not only is it unlikely that the result, if any, would be considered "intelligent", it's even less likely that it would be prone to the human whims of violence and purposeful destruction.

I think some people make the mistake of thinking that because we are likely to create software with human-understanding capabilities, this software can create an emergent SI capable of taunting its victims before killing them. Putting all the pieces of an SI together in such a way that it would be hostile would be an unimaginable act of negligence. Doing it on purpose would require unparalleled co-operation and resources.

Of course, you can always argue that you have time (if it doesn't happen in 10 years, what about 100, or 1,000?). This is true: the more time goes by, the more likely it is to happen, by accident or on purpose. But it's also more likely that we will have a greater understanding of dynamic systems and will be able to engineer the overall behaviour to our advantage. And if you're using SIs for any practical purpose (evolutionary theory, cognitive science, doing homework), it's still relatively easy to stop them from doing any harm.

This does not mean there will be *no* emergent behaviour; it's just unlikely that anyone will call it intelligence (unless our definition of that word has dramatically changed). And it's even less likely that it will be hostile (although unforeseen emergent properties will no doubt cause accidents, and the papers will run their usual scare stories).

BM