Eliezer S. Yudkowsky writes:
> It seems to me that the problem is one of cooperation (or
> "enforcement") rather than raw intelligence. The transhumanists show
> up in the 1980s and have all these great ideas about how to defend the
> world from grey goo, then the Singularitarians show up in the 1990s
> and have this great idea about bypassing the whole problem via
> superintelligence.
Great idea: instead of death by a thousand cuts, you vanish in the
maw of the Behemoth. Gulp.
So, please tell me how you can predict the growth of the core over the
many orders of magnitude of its explosion in complexity, guaranteeing
that it follows the set of constraints XY at each step of the game,
even if you don't know what the final result will look like. Don't
tell me about nebulous goals & Co; prove it.
(this says you can't tell what it will do if you make it)
Tell me how a piece of "code", during the bootstrap process and
afterwards, can formally predict what another piece of "code" will
do, in all contexts, under all circumstances. (Bypassing Goedel, no
less. And the recent footnotes to Goedel.)
(this says the thing itself can't tell what it will do if it makes itself)
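For the formal core of this, here's the standard diagonalization as a
minimal Python sketch (the oracle is hypothetical by construction;
that's the point):

  # Toy version of Turing's halting argument, which Rice's theorem
  # generalises to every non-trivial behavioural property of code.
  # Assume, for contradiction, a perfect predictor exists:

  def halts(func, arg) -> bool:
      # Hypothetical oracle: True iff func(arg) terminates. The
      # construction below shows that no such total, always-correct
      # function can exist.
      raise NotImplementedError("cannot exist")

  def contrary(func):
      # Do the opposite of whatever the oracle predicts about
      # running func on itself.
      if halts(func, func):
          while True:       # predicted to halt -> loop forever
              pass
      return                # predicted to loop -> halt at once

  # contrary(contrary) halts iff it doesn't: contradiction. So no
  # verifier can decide an arbitrary program's future behaviour "in
  # all contexts, under all circumstances", and self-inspection
  # during a bootstrap buys you nothing here.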
Tell me how a team of human programmers is supposed to break through
the complexity barrier while building the seed AI without resorting
to evolutionary algorithms. Show me the evidence that it can be done,
or at least argue it from a theoretical position.
(this says you can't make it in the way you want to make it)
Tell me how a single distributed monode can arbitrate synchronous
events separated by light-seconds, light-minutes, light-hours,
light-years, light-megayears, without having to resort to
relativistic signalling.
(this says the thing is inefficient, or is not what you say it is)
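The arithmetic is unforgiving; a back-of-the-envelope sketch (Python,
distances rounded, purely illustrative):

  # Any arbitration between two nodes needs at least one round trip,
  # hard-limited by c. No protocol beats the speed of light.

  C = 299_792_458.0        # m/s
  AU = 1.495978707e11      # metres
  LIGHT_YEAR = 9.4607e15   # metres

  def round_trip_seconds(distance_m: float) -> float:
      return 2.0 * distance_m / C

  for label, d in [("Earth-Moon", 3.844e8),
                   ("Earth-Mars (closest)", 0.38 * AU),
                   ("Earth-Jupiter (closest)", 4.2 * AU),
                   ("1 light-year", LIGHT_YEAR)]:
      print(f"{label:24s} {round_trip_seconds(d):14.1f} s")

  # Earth-Moon ~2.6 s, Mars ~6 min, Jupiter ~70 min; a "synchronous"
  # decision spanning one light-year waits two years minimum.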
If it's not single, tell me what the other nodes will do with a
node's decision they consider not kosher, and how they enforce their
verdict.
(ditto)
Tell me how the thing is guarded against spontaneous emergence of
autoreplicators in its very fabric, and against invasion of alien
autoreplicators from the outside.
(this says you can't stay in the metastable regime forever, even if
you somehow magically get there)
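To see why "forever" is the operative word, a toy model (every
parameter below is invented for illustration, none is an estimate):

  # A replicator that arises with any probability mu > 0 per unit
  # time and then grows exponentially eventually dominates a
  # non-replicating substrate, no matter how small mu is.

  import math

  mu = 1e-12                 # per-unit-time chance of emergence
  g = 0.1                    # exponential growth rate once it exists
  initial_fraction = 1e-30   # replicator's share of the fabric at birth

  t_wait = 1.0 / mu                               # expected time to emergence
  t_grow = math.log(1.0 / initial_fraction) / g   # time to reach parity

  print(f"expected wait for emergence: ~{t_wait:.0e} time units")
  print(f"growth to parity afterwards: ~{t_grow:.0f} time units")
  # The wait may be long, but it is finite; "forever" loses.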
Tell me how many operations the thing will need to sample all
possible trajectories of the behaviour of society as a whole (sounds
NP-complete to me) in order to pick the best of all possible worlds.
(And will that mean all of us have to till our virtual gardens?)
(just how large is the damn thing, and how many resources will it
leave us?)
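For a feel of the numbers, a toy count (branching factor and
population are illustrative stand-ins):

  # Model society as N agents with k choices each per timestep,
  # simulated for t steps: k**(N*t) trajectories to sample.

  import math

  N = 10**10   # agents, roughly order-of-magnitude of humanity
  k = 2        # a laughably generous two choices per agent per step
  t = 1        # a single timestep

  log10_trajectories = N * t * math.log10(k)
  print(f"~10^{log10_trajectories:.2e} trajectories")  # ~10^(3e9)

  # For scale: ~10^80 atoms in the observable universe. One binary
  # choice per person per step already buries exhaustive search.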
What is the proposed temporal scope of the prediction horizon?
Minutes? Hours? Years?
(chaotic nonlinear systems are provably unpredictable beyond a short
horizon: prediction error grows exponentially with time)
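Concretely, sensitive dependence on initial conditions caps the
horizon. The logistic map at r = 4, a standard chaotic system, makes
the point in a few lines of Python:

  # Two trajectories of x' = r*x*(1-x), started 1e-12 apart,
  # decorrelate completely within ~40 iterations: the gap roughly
  # doubles each step (Lyapunov exponent ln 2), so the horizon is
  # log(tolerance / initial error) / ln 2 steps, full stop.

  r = 4.0
  x, y = 0.3, 0.3 + 1e-12

  for step in range(1, 61):
      x = r * x * (1.0 - x)
      y = r * y * (1.0 - y)
      if step % 10 == 0:
          print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")

  # Sharpening the initial measurement tenfold buys only ~3 more
  # steps. That is what "provably unpredictable" cashes out to.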
How can you decide what the long-term impact of an event in the here
and now is?
(this says you can't tell in advance how a particular event will turn
out, even assuming you could predict the dynamics, which you can't
(see above))
There's more, but I'm finished for now. If you can argue all of the
above points convincingly (no handwaving, please), I might start to
consider that there's something more to your proposal than hot air.
So show us the money, instead of constantly pelting the list with
redundant descriptions of how wonderful the Sysop will be. Frankly,
I'm getting sick of it.