> This question is like
> Australopithecus expressing concern for his fellow hominid, Homo sapiens,
> "how can you guys live so far from the scavenging grounds, won't you starve?"
No, actually, this is like Australopithecus expressing concern for vis
future descendants: "There are only so many ways you can flake spears,
you know, and while our limited brains are so small that we can still have
fun doing it, won't these hypothetical homo saps figure out the absolute
optimal spear-flaking method in a couple of days and then spend the rest
of their lives totally bored?"
In a mathematically friendly universe, the lesson would be that a fixed
increment in intelligence opens up exponentially larger vistas; to put it
another way, if each fixed increment of intelligence doubles the "Time to
Boredom", then only thirty such increments are needed to stretch it from
one year to a billion years, and a
superintelligence with billions or trillions of times human computational
power will never even begin to touch All The Things There Are To Do at
that level of intelligence. A population of 1e30 ultraminds operating at
1e40 operations per second, running from the Big Bang to the Big Crunch,
will be in roughly the same position with respect to The Space of All
Possible Things as the first sentient mind alone on the savannah.
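The doubling arithmetic behind that paragraph can be sketched in a few lines of Python; the function name and the one-year baseline are just illustrative, but the exponent is the point, since 2**30 is about 1.07e9:

```python
# If each fixed increment of intelligence doubles the time it takes to
# exhaust the fun available at that level, then thirty increments
# stretch one year of novelty into roughly a billion years.

def time_to_boredom(base_years: float, increments: int) -> float:
    """Years of novelty, assuming each increment doubles the supply."""
    return base_years * 2 ** increments

print(time_to_boredom(1.0, 30))  # 1073741824.0 -- about a billion years
```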
A more worrisome question is whether the accumulation of useful memories
and experience requires, and so forces, a corresponding increase in
intelligence; it takes very few additional assumptions to get an
unavoidable exponential increase in mind size, with doubling occurring at
least every subjective millennium. Unless there's really unlimited
computational power, this could present a serious obstacle to the goal of
indefinite growth, life, and fun. If there is unlimited computational
power, of course, then I say GO FOR IT, GREAT!
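The exponential obstacle above can be put in back-of-the-envelope form; the initial mind size and the fixed compute budget below are illustrative assumptions, not figures from this post:

```python
# If mind size must double at least every subjective millennium, any
# fixed computational budget is outgrown after only a modest number of
# doublings -- exponentials overtake any constant bound quickly.

def millennia_until_limit(initial_ops: float, limit_ops: float) -> int:
    """Subjective millennia before a doubling-per-millennium mind
    outgrows a fixed computational budget (hypothetical numbers)."""
    millennia = 0
    size = initial_ops
    while size * 2 <= limit_ops:
        size *= 2
        millennia += 1
    return millennia

# Even a budget 23 orders of magnitude above the starting size lasts
# fewer than eighty subjective millennia:
print(millennia_until_limit(1e17, 1e40))  # 76
```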
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:59:41 MDT