Re: Darwinian Extropy

Robin Hanson (hanson@hss.caltech.edu)
Mon, 23 Sep 96 09:53:29 PDT


Dan Clemmensen writes:
>I'm not an economist, so I don't immediately convert the problem to
>one of discount rate. I'll try not to mangle the concept. Basically,
>I'm arguing that the discount rate is very high because the SI ability
>to employ the mass of the probe for computation is so large. the only
>things an extra-system probe can eventually contribute are 1) a retrun
>of extra-system mass, or 2) a return of extra-system information. I'm
>assuming that the SI will decide that it can use the computational
>power represented by the probe's mass to produce the information
>locally long before the information can be returned from an
>extra-system source. That is, I see that one side of the discount
>ratio is very large, and I don't see any equivalent large value on
>the other side of the ratio.

Not all information can be computed, if one doesn't have the right
inputs. Furthermore, even for stuff that can be computed, it's not
clear there is some maximum computational "depth" (compute cycles over
output + input length). It would be very interesting if you could prove
that a computer has some universal minimum discount rate. That would
be well worth publishing. However, it seems you are a long way from
showing this.
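The trade-off in Dan's argument can be put in present-value terms. A
minimal sketch, assuming a constant discount rate and round-trip time
(the rate, payoff, and travel time below are illustrative assumptions,
not figures from the thread):

```python
# Compare the value of using a probe's mass for local computation now
# against the discounted value of information a probe returns later.

def present_value(future_value, discount_rate, years):
    """Discount a payoff received `years` from now at a constant annual rate."""
    return future_value / (1 + discount_rate) ** years

# Assumed figures: the probe's mass used as local computronium is worth
# 1.0 today; a returned payload is worth 100.0, but only after a
# 1000-year round trip.
local_now = 1.0
returned = present_value(100.0, discount_rate=0.05, years=1000)

# At a 5% rate, even a 100x payoff after 1000 years discounts to
# essentially zero, so local computation wins.
print(local_now > returned)  # True
```

The sketch only restates Dan's side of the ratio; Robin's point is that
some returned information may not be computable locally at any price,
which no discount rate captures.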

> Why do you think an SI will understand itself any more than we
> understand ourselves? And even if it could, that doesn't mean such
> understanding will lead to much improvement.
>

>Basically, I don't believe that we understand the basics of human
>cognition. Therefore our attempts at self-augmentation have no firm
>basis. We do, however, understand the basics of machine computation:
>we can design and build more powerful computer hardware and software.
>Since we understand this basis already, I believe that an SI can also
>understand it. I believe that an SI with a computer component will be
>able to design and build ever more powerful hardware and software,
>thus increasing its own capabilities. I think that this is likely to
>lead not just to an improvement, but to a rapid feedback process.

Consider an analogy with the world economy. We understand the basics
of this, and we can change it for the better, but this doesn't imply
an explosive improvement. Good changes are hard to find, and each one
usually makes only a minor improvement. It seems that, in contrast,
you imagine there is a long series of relatively easy-to-find
"big wins". If it turns out that our minds are rather badly
designed, you may be right. But our minds may be better designed than
you think.

Robin D. Hanson hanson@hss.caltech.edu http://hss.caltech.edu/~hanson/