Re: The Singularity

Eugene Leitl (eugene@liposome.genebee.msu.su)
Fri, 17 Jul 1998 19:13:13 +0400 (MSD)

Robin Hanson writes:
[...]
> You probably can't think of a reasonable way to calculate the temperature
> of a black hole either, but that doesn't mean other people can't do it.
> Do you mean to claim more than that *you* *now* haven't thought of
> something you like?

Good point, but AFAIK there is no consistent theory of human actions either, and we're a good deal more predictable than an SI. Ergodic systems are intrinsically unpredictable, and you can't prove that the SI isn't occasionally ergodic.
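
As a toy illustration of that unpredictability (a minimal Python sketch, with the obvious caveat that an SI is not a logistic map): the map x -> 4x(1-x) is both ergodic and chaotic, so two trajectories started a hair apart diverge exponentially, and long-range prediction of the state is hopeless in practice.

  # Minimal sketch: sensitive dependence in the logistic map x -> 4x(1-x).
  # Two trajectories starting 1e-12 apart diverge exponentially (Lyapunov
  # exponent ln 2), so prediction horizons are short no matter how
  # precisely you measure the initial state.

  def logistic(x):
      return 4.0 * x * (1.0 - x)

  x, y = 0.3, 0.3 + 1e-12   # nearly identical initial conditions
  for step in range(1, 61):
      x, y = logistic(x), logistic(y)
      if step % 10 == 0:
          print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")

After roughly 40 iterations the separation has grown from 1e-12 to order one, i.e. the two runs are completely decorrelated.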

> >Since the SI will be vastly more intelligent than humans, IMO we may not
> >be able to comprehend its motivations, much less predict them. The SI will
> >be so smart that its actions are constrained only by the laws of physics,
> >and it will choose a course of action based on its motivations.
>
> Why do you assume such a strong association between intelligence and
> motivations? It seems to me that intelligence doesn't change one's
> primary purposes much at all, though it may change one's tactics as one
> better learns the connection between actions and consequences.

I could argue from evolution, but I have no idea whether evolutionary reasoning still applies to an SI. Analogies from areas of human enterprise are certainly inapplicable.

'gene