Genius Dogs

John K Clark (johnkc@well.com)
Thu, 9 Oct 1997 10:37:30 -0700 (PDT)


On Wed, 8 Oct 1997 Hal Finney <hal@rain.org> Wrote:

>I seem to recall a claim that Eric Drexler had an idea for creating
>synthetic intelligence by means of an evolution simulation. [...] I
>haven't been able to find the exact description of Drexler's idea,
>but I think it was on this list sometime in the last few years.
>Does anybody remember this?

I found this old post of mine.

Date: Sat, 30 Dec 1995
From: John K Clark <johnkc@well.com>
Subject: Drexler's Timeline

I was only made aware of Drexler's thoughts on this subject second
hand, through Carl Feynman, and you're receiving it third hand from
me, so if I say anything stupid in this post it is my fault, not
Drexler's.

The idea that the Singularity could come in less than 20 years
makes me weak in the knees, just like everybody else, and I'm
not sure I really believe it could come that soon. But Drexler
didn't just pull this amazingly short timeline out of a hat; it's
based on calculations he made, even if they are informal and
unpublished.

Seeing no reason current trends could not be extrapolated and
using his considerable knowledge of the field, he expects to
see the first assembler able to reproduce itself sometime in
the first two decades of the next century. A full nanotech
computer could be made almost immediately after that, because
some people are working on the design already and it will be
finished by then. He figures that once we have nanocomputers
it will only take a couple of years to develop
superhuman artificial intelligence. At this point we have a mind
(or minds) far more intelligent than you or me, and one that
operates a billion times faster to boot. A few hours of that
and the universe will never be the same again.

I can already hear the howls of protest. Even if you have the
hardware, programming a nanocomputer to do anything useful
would be a monumental task, and developing AI, superhuman or
otherwise, would be an astronomically difficult process. I think
Drexler would agree with his critics that it will take many
years to develop AI; many millions of years, actually.

Drexler suggests we develop AI in the same way that nature
developed intelligence: by brute force. Nature didn't need any
experts with a deep understanding of intelligence or
consciousness; intelligence just evolved, using only mutation
and natural selection. We can do the same.

A recipe for intelligence: build a simulated world in your
computer and fill it with very simple creatures (programs).
Make sure they must solve problems in order to get "food". The
creatures that are better at solving problems leave more
descendants. Now do nothing; just step back and let it
evolve. After evolving for a few hundred million SIMULATED years
you have intelligence, high-order intelligence.
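
To make the recipe concrete, here is a minimal sketch of that
kind of evolution loop in Python. The bit-string "creatures",
the toy fitness function and the mutation rate are my own
illustrative choices, not anything Drexler specified:

    import random

    GENOME_LEN = 32     # each "creature" is just a bit string
    POP_SIZE   = 100    # creatures alive at any one time
    MUT_RATE   = 0.01   # chance each bit flips during reproduction

    def fitness(genome):
        # Toy stand-in for "solving problems to get food": here a
        # creature is fed for every 1 bit it carries (the OneMax
        # problem).
        return sum(genome)

    def mutate(genome):
        return [bit ^ (random.random() < MUT_RATE) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(200):
        # Better problem solvers leave more descendants: parents
        # are picked with probability proportional to fitness.
        weights = [fitness(g) + 1 for g in population]
        population = [mutate(random.choices(population, weights)[0])
                      for _ in range(POP_SIZE)]

    print("best fitness:", max(fitness(g) for g in population),
          "of", GENOME_LEN)

The point is that nobody ever tells the creatures how to solve
the problem; selection and mutation do all the work. Drexler's
version is just an unimaginably bigger loop, with a real
simulated world in place of a toy scoring function.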

How long would it take in real years? He calculated the amount
of computer power needed to simulate ALL the brains that have
ever existed before humanity, that is, all the brains since
brains were invented in the Cambrian Explosion 570 million years
ago. He concluded that 10^38 machine instructions would do the
trick. A nanotechnology computer the size of a large present-day
factory, and using no more power, could perform 10^38 machine
instructions in about 2 years.
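
For what it's worth, the arithmetic behind that figure is easy
to check; here is a back-of-envelope version in Python (the
10^38 is Drexler's number as I heard it, the rest is just
division):

    # What sustained speed does 10^38 instructions in 2 years imply?
    total_instructions = 1e38   # Drexler's all-pre-human-brains estimate
    seconds_per_year   = 365 * 24 * 3600
    rate = total_instructions / (2 * seconds_per_year)
    print(f"{rate:.1e} instructions per second")   # about 1.6e30

So the factory-sized machine has to sustain roughly 1.6 * 10^30
instructions per second, something like 10^22 times what a
present-day desktop manages, which is exactly the kind of gap
nanotechnology is supposed to close.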

Bottom line: you start with a nanocomputer but no software to
run on it except a few simple-minded programs, smaller than
many you are using now on your home PC. Two years later you've
got an AI running on the computer, an AI at least as
intelligent as a human and much, much faster.

As breathtaking as these changes are, it's really just
engineering; Drexler invokes no new laws of physics and assumes
no scientific breakthroughs. If there is one, things will become
even wilder. For example, if all the recent speculation about
quantum computers ever pans out and a practical machine is
possible, it would make even Drexler look like an old fuddy-duddy.

Somebody mentioned that safety concerns might slow things down.
I doubt that they will, but perhaps they should. We are about to
enter a period of gargantuan change happening in an
astonishingly short amount of time, and that is an inherently
dangerous situation; it would be foolish to deny it. The biggest
danger is probably something that we haven't imagined yet,
probably something we are incapable of imagining. In my darker
moments I wonder if that could be an explanation of the Fermi
paradox, the fact that we don't see any ETs and the fact that
the universe has not yet been engineered.

In spite of the dangers I admit I'm happy about the coming
changes; we might survive them, and the alternative, after all,
is certain death for all of us. If nothing else, things won't be
dull. The truth, however, is that it doesn't amount to a hill of
beans whether you or I think it's a good idea; somebody,
somewhere, will do it, and do it as soon as he thinks he can.
The best we can do is prepare ourselves as well as we can.

Speaking of preparation, I don't want to be accused of promoting
complacency as far as cryonics is concerned. Even a man as
brilliant as Drexler could be wrong, especially about something
like a timeline, which involves not just science and engineering
but economics and politics as well. It's safest if people plan
for the worst and hope for the best. This is even more important
for the leaders of the cryonics companies. They should operate
under the assumption that it will take 1000 years for
nanotechnology to develop. If events in the next 20 years prove
them wrong about that, I am certain nobody will be very upset
with them.

Regardless of when the Singularity happens, one thing is certain:
we are one year closer to it. HAPPY NEW YEAR!

John K Clark johnkc@well.com
