Re: Darwinian Extropy

Robin Hanson (hanson@hss.caltech.edu)
Sun, 22 Sep 96 16:36:02 PDT


Dan Clemmensen writes:
> >> ... to make your scenario plausible, you need a plausible process which
> >> creates this massive convergence to a preference with almost no weight
> >> on long-time-scale returns.
>
>The SI can think so fast that on its time-scale any possible
>extra-system return lies too far in the future to be worth the
>forgone computational capability represented by the extra-system
>probe's mass. I proposed that the SI would increase its speed by
>several orders of magnitude by converting its mass into a
>neutron star.

You seem to think that there is some natural discount rate, determined
by the computer hardware. I don't see why.
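
To make the disagreement concrete, here is a minimal sketch of the
discounting argument (the notation is mine, not anything Dan wrote).
Suppose a return R arrives at physical time T, the agent discounts at
rate \rho per subjective second, and the hardware runs k subjective
seconds per physical second. Then the present value is

  \[ \mathrm{PV} = R \, e^{-\rho k T} \]

Your argument treats \rho as fixed, so a large speedup k drives PV
toward zero for any extra-system T. But \rho is a preference
parameter, not a hardware constant: an agent whose \rho scales as 1/k
leaves PV unchanged, so a hardware speedup by itself implies nothing
about how much weight falls on long-time-scale returns.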

>Unfortunately, as you say, there seems to be little a human or
>corporation can do in the way of useful self-augmentation. I
>contend that an SI that includes a substantial computer component is
>very amenable to useful self-augmentation, while people and
>organizations are not. The reason: the SI can understand itself and
>can reprogram itself. I contend that this is fundamentally different
>from the process used by a human or a corporation attempting
>self-augmentation.

Why do you think an SI would understand itself any more than we
understand ourselves? And even if it did, that doesn't mean such
understanding would lead to much improvement.

Robin D. Hanson hanson@hss.caltech.edu http://hss.caltech.edu/~hanson/