Re: >H RE: Present dangers to transhumanism

Doug Jones (random@qnet.com)
Tue, 31 Aug 1999 12:54:30 -0700

Robert J. Bradbury wrote:
> It isn't going to happen tomorrow. And if we don't have the
> designs pre-done (which seems very doubtful unless we get
> self-evolving AI that understands mechanical engineering),
> then even when the technology becomes available we are still
> going to have perhaps more time to adapt than I might have
> if Mt. Rainier suddenly did a Mt. St. Helens.

This is where Eliezer's doubled doubling breaks down: smart AIs might optimize their own code, but faster execution requires faster hardware, which is tied to a physical realm where Moore's law is difficult to shortcut. An AI singularity can't occur unless the time constant for hardware improvement and replication can also be shortened.
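
A toy model makes the point concrete (every constant below is a made-up
assumption for illustration, not a number from Eliezer or Robert): let each
self-optimization pass multiply the AI's software efficiency, but let the
wall-clock cost of a pass depend on hardware that only doubles on a fixed
Moore's-law time constant. Once the code-level gains cap out, growth falls
back to the hardware curve:

# Toy model: capability growth when software self-improvement runs on
# hardware that doubles on a fixed time constant.
# Every constant is an illustrative assumption, not a measured value.

HW_DOUBLING_MONTHS = 18.0   # assumed Moore's-law time constant
SW_GAIN_PER_PASS = 2.0      # assumed speedup per self-optimization pass
SW_GAIN_CEILING = 64.0      # assumed limit on code-level optimization
BASE_PASS_MONTHS = 6.0      # assumed cost of one pass at unit speed

def capability_at(months: float) -> float:
    """Capability = banked software gain * current hardware speed."""
    hw_now = 2.0 ** (months / HW_DOUBLING_MONTHS)
    t, sw = 0.0, 1.0
    while sw < SW_GAIN_CEILING:
        # Each pass runs on today's hardware with today's software gains,
        # so early passes finish quickly... until the gains cap out.
        speed = sw * 2.0 ** (t / HW_DOUBLING_MONTHS)
        t += BASE_PASS_MONTHS / speed
        if t > months:
            break
        sw = min(sw * SW_GAIN_PER_PASS, SW_GAIN_CEILING)
    return sw * hw_now

for m in (0, 12, 24, 48, 96):
    print(f"month {m:3d}: capability x{capability_at(m):,.0f}")

Under these assumptions the software gains are exhausted within about a
year, after which capability grows only on the 18-month hardware doubling.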

(Massively parallel processor systems grow in processing power only linearly with the number of processors added. Even if more chip fabs are brought on line, it's the rate of creation of chip fabs that limits progress.)
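
The same caveat applies to this sketch of the fab limit (the rates are
invented): if new fabs come online at a constant rate, the fab count grows
linearly and cumulative chip output grows only about quadratically; nothing
short of exponential fab construction yields exponential compute.

# Toy comparison: fab-limited vs exponential compute growth.
# Both rates are invented assumptions for illustration.

FABS_PER_YEAR = 2            # assumed constant fab-construction rate
CHIPS_PER_FAB_YEAR = 1e6     # assumed annual chip output of one fab

def installed_compute(years: int) -> float:
    """Cumulative chips when the fab count grows linearly: ~quadratic."""
    total, fabs = 0.0, 0
    for _ in range(years):
        fabs += FABS_PER_YEAR
        total += fabs * CHIPS_PER_FAB_YEAR
    return total

for y in (1, 5, 10, 20):
    print(f"year {y:2d}: fab-limited {installed_compute(y):.1e} chips "
          f"vs yearly doubling {1e6 * 2 ** y:.1e}")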

Is it just me, or does it seem ironic for Eliezer to be calling anyone else a naive technophile? ;)

--
Doug Jones, Freelance Rocket Plumber