From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Mon May 26 2003 - 00:45:39 MDT
On Sun, 25 May 2003, Spike wrote:
An aside -- I'll buy the mirror area argument; I may
even try to test it myself one day.
> I am finding the following line of reasoning compelling:
> any sentience would want to think more.
True. But wouldn't the best way to do this be to
construct a universe in which thinking more is
easier and find a way to transport oneself to it?
All ETI thinking to date has placed an emphasis
on the idea that this universe is it ("there
can be only one"), and yet theorists now seem
to be leaning toward the view that that just isn't so.
> The ultimate limit of computability
> would be reached as soon as *every* photon that is emitted
> for any reason is harnessed to flip one bit.
Spike, we may need to have a serious discussion about
reversible computing. I believe it was Landauer
who showed communication may be "free" and Bennett
who showed computation may be "free" (please Anders,
Eliezer or others correct me if this is wrong). If
my understanding is correct, then one does *not* need the
photons (other than in the general sense that there is
heat present in the universe).
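For concreteness: Landauer's bound says erasing one bit must
dissipate at least kT*ln(2), while logically reversible
(Bennett-style) steps need not pay that cost at all. A
minimal numeric sketch -- nothing here beyond the textbook
formula and standard constants:

    # Landauer's bound: E >= k*T*ln(2) per *erased* bit.
    # Reversible logic avoids erasure, so in principle it
    # can dissipate arbitrarily little.
    import math
    k = 1.380649e-23            # Boltzmann constant, J/K
    for T in (300.0, 2.7):      # room temperature vs. the CMB
        print(T, "K:", k * T * math.log(2), "J per erased bit")
    # ~2.9e-21 J at 300 K, ~2.6e-23 J at 2.7 K -- erasure gets
    # cheaper the colder you run, and reversible steps skip it.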
I also suspect I could come up with multiple schemes whereby
the energy of single photons (at least those in the UV-visible
range) could be used to flip multiple bits.
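As a sanity check on that suspicion, the arithmetic (mine,
assuming room-temperature, Landauer-limited bit flips) works
out in its favor:

    # How many Landauer-limited bit erasures could a single
    # photon's energy pay for?
    import math
    h = 6.62607015e-34    # Planck constant, J*s
    c = 2.99792458e8      # speed of light, m/s
    k = 1.380649e-23      # Boltzmann constant, J/K
    e_bit = k * 300.0 * math.log(2)     # ~2.9e-21 J at 300 K
    for wavelength_nm in (400, 200):    # violet and UV photons
        e_photon = h * c / (wavelength_nm * 1e-9)
        print(wavelength_nm, "nm:", round(e_photon / e_bit), "bits")
    # ~170 bits at 400 nm, ~350 bits at 200 nm -- energetically,
    # one photon could indeed pay for flipping many bits.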
> the AI wants to get smarter.
Not completely clear. The AI might want to maximize its
longevity and that might require exiting this universe.
The AI might want to be known as the most brilliant AI
that ever existed -- so it is going to transmit the
most brilliant intellectual result ever created in
the history of the universe as it hurls itself
(and a whole lot of other matter) into a black hole
to generate the energy required to produce the
computational result. So what if it doesn't
survive? It would be renowned throughout the galaxy
as the most brilliant. The AI might want to
maximize its "fun", in which case it may live fast,
die young and leave a pretty corpse (this is why I
agree with Eli that we need a better "fun" theory).
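For what it's worth, the black-hole scheme is not
energy-starved: accretion can release roughly 6%
(non-rotating) to roughly 40% (maximally rotating) of the
infalling rest mass as energy. Those efficiencies are
standard astrophysics figures; the solar-mass payload and
the Landauer conversion below are just my illustration:

    # Toy numbers: energy released by feeding a solar mass to
    # a black hole, and the Landauer-limited bit count it buys.
    import math
    c = 2.99792458e8                 # speed of light, m/s
    k = 1.380649e-23                 # Boltzmann constant, J/K
    m_sun = 1.989e30                 # kg
    e_bit = k * 2.7 * math.log(2)    # erasure cost at CMB temp
    for name, eff in (("Schwarzschild", 0.057),
                      ("extremal Kerr", 0.42)):
        e = eff * m_sun * c**2
        print(name + ":", e, "J =", e / e_bit, "bit erasures")
    # ~1e46 to ~8e46 J per solar mass, i.e. ~4e68 to ~3e69 bits.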
> This line of reasoning is sounding
> more inevitable the more I think about it.
Without meaning to sound too harsh, we need to think
*more* about it.
> I can see a post-Spike (or post-Singularity) humanity getting
> to the point of wanting to stop all the wasteful photon
> barfing within a few hundred years, and actually starting
> in some meaningful way to deal with that waste within 5000.
I agree with the "few hundred years" figure (or even less).
Dealing with the waste issue raises critical questions
about the "present value" of thought. Is somewhat more
thought now better than an even greater amount of thought later?
I believe the current concepts of the Spike or the Singularity
may be incomplete.
Reasoning:
- How *long* do you want to be able to think?
[Reversible computing work suggests that one can
think much longer if one is willing to think slower --
see the sketch after this list.]
- How much do you want to be able to think about?
[If faster thinking allows one to get out of what may
be a doomed universe, then it is worth it -- but if it
doesn't, then the resources may have been ill spent.]
- How much are you willing to sacrifice if thinking more
or longer requires leaving a significant amount
behind when you travel to another universe to
continue thinking?
etc.
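To illustrate the first point: in reversible logic the
dissipation per operation falls roughly in proportion to how
slowly you run, so a fixed energy budget buys more total
operations the slower you think. A crude sketch -- the
"dissipation ~ 1/slowdown" scaling is an idealization from
the reversible-computing literature, and the budget numbers
are made up:

    # Crude model: energy dissipated per reversible operation
    # scales as 1/slowdown (run 10x slower, dissipate ~10x
    # less per op), so total ops scale linearly with slowdown.
    def total_ops(budget_j, e_per_op_full_speed_j, slowdown):
        return budget_j / (e_per_op_full_speed_j / slowdown)

    budget = 1.0e20   # hypothetical energy budget, J
    e_full = 1.0e-15  # hypothetical per-op dissipation at full speed, J
    for slowdown in (1, 10, 1000):
        print("slowdown", slowdown, "->",
              total_ops(budget, e_full, slowdown), "ops")
    # Thinking 1000x slower yields 1000x more total thought
    # from the same budget: slower wins on total ops.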
I think ETI speculation may have largely been dominated
by Dysonian or Tiplerian assumptions that there are ways
out of the quagmire (a doomed universe). But what if
Adams and Laughlin (The Five Ages of the Universe) are
right and, no matter what we do within the current
paradigm, we are ultimately hosed? Then there isn't any
f***ing point to colonization -- it is ultimately a
pointless exercise.
I'd also lean toward asserting that a species capable
of interstellar colonization is probably also
capable of determining whether this universe is doomed.
Robert