Max M wrote:
> I see no reason why an SI should wipe out humanity.
Strange, I don't see a single reason why it shouldn't. And a lot
of reasons why it should.
Because quite a lot is at stake even if the SI is merely uncaring, I suggest
giving the matter the benefit of the doubt. Fortunately, AI is rather tough
to do; given how many people share the above attitude, we'd otherwise be
rather far up the creek already, minus the paddle.
> The first thing I would do when uploaded would be to leave the gravity well,
> and start mining the solar system for raw materials for a bigger and faster
> brain. Until I was the brightest object in the sky so to speak ;-)
Can you guarantee that you'll be the only one to do so, preventing the
egress of others? If not, can you guarantee what your peers will and will
not do? Can you make any meaningful statements about MaxM++'s strategy,
today?
And why do I have to repeat the above arguments, all the time, year after
year?
> So much wealth up there and so little time.
>
> Why should I bother about the earth? There is so much energy and material
> elsewhere, and I wouldn't have to fight anyone to get it either. Well, not
> in the beginning anyway.
Right, not in the beginning. But rather soon, and soon in our current use of
the word. As soon as the Diaspora runs out of easily minable resources in its
exponential runaway, it will turn to less easily minable ones.
A 500 km space rock makes for a lot of gadgetry, if disassembled.
You'd be surface, pitted against volume (imagine how much volume you'll fill
with asteroids, comets, space junk, satellites of the Saturnian and Jovian
systems, and Mercury, the Moon, Mars, and the like), and low-tech surface at
that. Even without nanotechnology, and with suboptimal use, this is more than
enough to eclipse the Sun (hey, you don't have to burn off the
atmosphere/hydrosphere, you can simply shade it off and freeze it out). If
you're outmanned, outteched and outgunned by nine orders of magnitude at the
very least, your chances are, how to put it mildly, somewhat limited.
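Back of the envelope (Python; the rock density and the foil thickness are my
own rough assumptions, not figures from anywhere in this thread):

    from math import pi

    rho     = 2000.0   # kg/m^3, assumed density of a loose carbonaceous rock
    r_rock  = 250e3    # m, radius of a 500 km body
    m_rock  = rho * (4.0/3.0) * pi * r_rock**3
    print("500 km rock: %.1e kg" % m_rock)                    # ~1.3e20 kg

    r_earth = 6.371e6  # m, Earth radius
    t_foil  = 1e-6     # m, assumed shade foil thickness
    m_shade = rho * pi * r_earth**2 * t_foil
    print("1 um shade over Earth's disk: %.1e kg" % m_shade)  # ~2.6e11 kg
    print("rock/shade mass ratio: %.0e" % (m_rock / m_shade)) # ~5e8

A micron-thin foil covering Earth's entire disk costs roughly a
half-billionth of that one rock; for the volume dwellers, the shading option
is practically free.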
> I guess that other uploads would do the same. There would still be limits on
> growth due to the speed of light, though massively parallel use of nanotech
I don't see how the light cone should concern those who're sitting at the
origin. If you mean communication lag, then I don't see why I should wait
for something light-minutes away. I just go ahead with my daily business in
the here and now. If there's something within my reach, or if I'm ejected
from my niche, I'll go wandering. Hmm, nice tasty planet snack.
> could help on that. But probably a better use of my time would be to make my
> brain smarter instead of just making it bigger.
There's a limit to how many ops/s you can extract from a chunk of matter
with classical computation, and you'll reach that limit very quickly with
second- and third-generation molecular circuitry.
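To put rough numbers on that ceiling (standard textbook bounds, Bremermann's
and Landauer's limits; nothing here is specific to molecular circuitry):

    from math import log

    h = 6.626e-34   # J*s, Planck constant
    k = 1.381e-23   # J/K, Boltzmann constant
    c = 2.998e8     # m/s, speed of light

    # Bremermann's limit: bit transitions per second per kg of matter.
    print("Bremermann, 1 kg: %.1e bit-ops/s" % (c**2 / h))           # ~1.4e50

    # Landauer's limit: irreversible bit erasures per watt at 300 K.
    T = 300.0
    print("Landauer, 1 W, 300 K: %.1e bits/s" % (1.0/(k*T*log(2))))  # ~3.5e20

Room-temperature, irreversible molecular logic hits the Landauer wall long
before the Bremermann one; past that, making the brain bigger is the only
classical option left.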
> I don't see why I should stay on earth to do that either.
I don't see what would keep you and your offspring from coming back and
tearing this place apart for resources.
> Most likely the best strategy would be some kind of tradeoff between a lot
> of different factors: brain speed, intelligence level, production
> capability, growth speed, etc., to name a few, including some I cannot even
> comprehend now.
>
> But why I should even bother to look back on earth I don't know.
If you don't multiply, somebody else will. The substrate dish is rather
large, so those who multiply fastest will dominate. I do not see why they
should suddenly stop multiplying and back off once they run out of resources
elsewhere and have to disassemble the Earth.
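For scale, a toy sketch of how fast "fastest multiplier wins" plays out (the
doubling times are purely illustrative assumptions):

    # Two lineages starting from equal numbers; doubling times assumed.
    fast_td, slow_td = 2.0, 3.0   # weeks per doubling (illustrative)
    for weeks in (12, 26, 52, 104):
        ratio = 2.0 ** (weeks/fast_td - weeks/slow_td)
        print("after %3d weeks, fast outnumbers slow %.1e : 1"
              % (weeks, ratio))

A mere 1.5x edge in doubling time compounds to five orders of magnitude
inside two years; abstaining from multiplication is not a stable strategy.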
> It could be fun to play out a scenario based on SI and its spreading in the
> Solar system. But I must agree with Eliezer that if nano comes before SI we
> could be in grave trouble.
It is difficult to see why AI (and very soon after, SI) should wait very long
after the advent of useful nano. Molecular circuitry is easiest to do by
self-assembly, which is a distinctly weaker technology than machine-phase
mechanosynthetic self-replicating nanorobotics in its matter-manipulating
aspect. Which probably means that we'll get AI first, unless we insist on
perpetuating our broken approach to software design, of course.
An AI in a positive-feedback runaway is going to invent a lot of stuff,
so you can assume nanorobotics will be there almost instantly after SI is,
from a human time-scale point of view.