From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Thu Feb 27 2003 - 15:08:45 MST
On Thu, 27 Feb 2003, Wei Dai wrote:
> Maybe an interesting question to explore is, what if this is the highest
> level of intelligence that our universe is capable of supporting?
I am reasonably certain I have seen papers strongly correlating
certain aspects of brain architecture that may be involved in
intelligence (neurotransmitter levels, variants in receptor
polymorphisms, etc.) with pathologies (e.g. depression, suicide,
etc.). Anders probably knows this literature much better than I do.
But the consequence is that, with the current brain architecture,
we may be pressing up against the limits.
Now, *if* we equate "intelligence" with "OPS" (a stretch,
but perhaps not too unreasonable), then my work on Matrioshka
Brains, which draws in part on Anders' work on Jupiter (Zeus)
and Dyson (Uranos) brains, gives an estimate of the limit.
[We will ignore his Neutronium (Chronos) brains, which depend
in part on, I think, some comments by Moravec that go a bit
over the edge into "magic physics".] Anders isn't alone in
this area; it has been explored somewhat in work by Seth Lloyd
and Michael Franks. (This isn't a small body of work, but it
can be worked through within a couple of days -- refs provided
on request if those below are insufficient.)
All of this is well documented, see:
http://www.aeiveos.com/~bradbury/MatrioshkaBrains/
and
http://www.jetpress.org/volume5/Brains2.pdf
The number I cite for a Matrioshka Brain is
"a trillion trillion human minds".
One is talking roughly 10^42 instructions per second,
assuming consumption of the entire solar power
output at the most efficient (projected) nanocomputational
capacity that Drexler has proposed.
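As a rough sanity check on that figure, here is a back-of-envelope sketch. The solar luminosity and Drexler's projected ~10^16 instructions/sec per watt for nanomechanical computing are the assumed inputs; the ~10^18 ops/sec per human brain is a commonly used rough estimate, not a number from this post:

```python
# Back-of-envelope check of the Matrioshka Brain capacity figure.
# Assumptions (not established facts): Drexler's projected nanocomputer
# efficiency of ~1e16 instructions/sec per watt, and a rough upper
# estimate of ~1e18 ops/sec for one human brain.

SOLAR_LUMINOSITY_W = 3.8e26   # total power output of the Sun, watts
OPS_PER_WATT = 1e16           # Drexler's projected nanocomputing efficiency
HUMAN_BRAIN_OPS = 1e18        # rough estimate for one human mind

total_ops = SOLAR_LUMINOSITY_W * OPS_PER_WATT
human_mind_equivalents = total_ops / HUMAN_BRAIN_OPS

print(f"total capacity: {total_ops:.1e} ops/sec")               # ~3.8e42
print(f"human-mind equivalents: {human_mind_equivalents:.1e}")  # ~3.8e24
```

With those inputs the capacity comes out around 10^42 ops/sec, and dividing by the per-brain estimate gives a few times 10^24 minds -- i.e. "a trillion trillion".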
This isn't a hard limit, as it applies to Matrioshka Brains
of typical solar-system size. Provided they can harvest
more material as they migrate through the galaxy, they could
possibly grow to several light years in size. The additional
growth has diminishing returns, because it imposes increasing
time delays on thought processes. But it does have the property
of increasing their aggregate thought capacity (and therefore,
presumably, intelligence).
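The time-delay side of that trade-off is easy to quantify: any globally coherent "thought" requires signals to cross the structure, so the minimum cycle time grows linearly with radius. A minimal illustrative sketch (the radii chosen are hypothetical examples, not figures from this post):

```python
# Illustrative sketch of the diminishing-returns argument: the minimum
# time for a globally coherent thought cycle is bounded below by the
# light-crossing time of the structure's diameter, which grows linearly
# with radius even as harvested capacity grows.

YEAR_S = 3.156e7  # seconds in one year; light crosses 1 light year in 1 year

def min_global_cycle_time_s(radius_ly: float) -> float:
    """Light-crossing time of the structure's diameter, in seconds."""
    return 2 * radius_ly * YEAR_S

# Example radii, from roughly solar-system scale up to several light years
for r in (0.001, 0.1, 1.0, 3.0):
    print(f"radius {r:>5} ly -> min coherent cycle: "
          f"{min_global_cycle_time_s(r):.2e} s")
```

At several light years in radius, a single coherent "thought" takes years of wall-clock time, so the extra capacity buys parallel, not faster, thinking.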
Recent progress in configuring FPGAs for specific
applications (and their availability for general-purpose
computers) suggests that the general concept of "intelligence"
will need to be reexamined (see, for example, the Minsky/Loebner
debates over the last decade or so).
I believe that I have previously proposed (perhaps on the list
itself) MBrains optimized to think about certain topics. Such
entities would have only a special-purpose "intelligence" (not
unlike "Deep Blue"), and a fully optimized version would bypass
the cooling requirements and essentially melt itself at
the precise moment it computed the result it was designed for.
This raises a very interesting question with regard to the Fermi
Paradox. Does the universe seem empty because every
"intelligence" is eliminated once it has finished doing
what it was invented to do?
Robert
This archive was generated by hypermail 2.1.5 : Thu Feb 27 2003 - 15:11:43 MST