Re: Keeping AI at bay (was: How to help create a singularity)

From: Eugene.Leitl@lrz.uni-muenchen.de
Date: Tue May 01 2001 - 10:15:21 MDT


Anders Sandberg wrote:

> You make a hidden assumption here: that the jump in software
> quality/capability/whatever is very sudden and not gradual. Is there any
> reason to believe that is true?

Of course I made a hidden assumption. If assumptions were recursive
macros that had to be expanded fully, posts would take a day to
write, and would bore the majority of their targets to tears. I'm a
bit worried that the sometimes highly compressed (depending on lots
of hidden state built up over the course of many years) communication
style the more seasoned participants here have come to adopt makes
communication appear like a sequence of non sequiturs, or, worse,
like semantic line noise to newcomers. If true, this would seem
rather sucky. So, natch, I forgot to expand. Your namespace seems
to have a different prefix, which is why you failed to expand it ;)

> I agree with your sketch of the hardware/software gap, but it is not
> clear to me that improvements in software must always be less than
> improvements in hardware until the fateful quantum leap. Right now it is

I'm extrapolating the persistence of the trend from past history
(the last four-five decades, with only two-three more to go),
and from the general lack of awareness of the problem set. The
field, though still rather young, has already settled into rather
heavy dogma. Holistic approaches are being actively deprecated,
since they break the usability of the abstractions. Lunatic-fringe
approaches, some of them extremely fruitful in the long run, do not
receive the attention they deserve, because they appear sterile in
the short run. I could go on, but I think that part of the problem
is real.

We do not have many datapoints as to the amplitude of the growing
performance gap between the average case and the best case,
but current early precursors of reconfigurable hardware (FPGAs)
seem to generate extremely compact, nonobvious solutions even
with current primitive evolutionary algorithms. The result is
a curiously stable tangle of coupled oscillators in negative and
positive autofeedback. The whole is rather opaque to analytical
scrutiny and sterile to human attempts at constructive modification
by manual means. We can only use the results as building blocks for
hybrid architectures (which also require man-made glue that is
immune to noise and nondeterminism -- we haven't even managed
that much yet), and as ingredients for other evolutionary
recipes.

Interestingly enough, a few days ago a news message on transhumantech
(by Brian, I think) heralded that we're to expect essentially strongly
CA-flavoured FPGA hardware (think 2d computronium implemented in
silicon photolitho) in about five years. Now, while I would
compensate for the Nippon bias (the usual pattern of good idea,
lousy first implementation, early termination of the project), the
hardware constraints which lead to that architecture are real,
and are eventually bound either to resurface in other product lines,
or to become accessible when we get our desktop nanolitho printers
for circuit prototyping and small-scale production, which should
be in about 15-20 years. We should see a big resurgence in innovation
then, since that vastly lowers the threshold by making small-scale
prototyping affordable to individuals, even hobbyists.
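
For the newcomers, a toy illustration of what "CA-flavoured"
computation means: every cell holds a small state and updates from
its immediate neighbours only, which is exactly the locality that
lets the whole fabric be laid out as a uniform silicon grid. A
minimal sketch -- the rule here is plain Conway Life, purely a
stand-in for whatever rule the real hardware would implement:

#include <string.h>

#define W 64
#define H 64

/* One synchronous update of a 2d cellular automaton. Each cell
 * reads only its 8 immediate neighbours; no global wiring needed,
 * which is why the thing maps onto a uniform silicon grid. */
static void ca_step(unsigned char grid[H][W])
{
    unsigned char next[H][W];

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int n = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0) continue;
                    /* toroidal wraparound at the edges */
                    n += grid[(y + dy + H) % H][(x + dx + W) % W];
                }
            /* Life rule: born on 3, survives on 2 or 3 */
            next[y][x] = (n == 3) || (grid[y][x] && n == 2);
        }
    }
    memcpy(grid, next, sizeof next);
}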

Since it's trivial to extend current compilers to generate
code for the above targets, and the performance of many algorithms
suddenly soars by several orders of magnitude on the same
raw acreage of transistors, this effectively hides the remaining
deficit, so there is no motivation to address it -- unless the
lunatic fringe manages to reach a high-output result regime,
causing everybody to drop whatever they've been using and
rush over there, babbling excitedly all along the way. Should
that happen, we of course won't see the explosive scenario above.

I see two major driving factors for the punctuated equilibrium
(the Jack-in-the-Box-Golem) script: the positive autofeedback of
the mutation function, and the hostile takeover of the computational
resources of the global network.

The reconfigurable hardware is not the product of a coevolution,
so it is very brittle when you spray bits at existing designs
in an attempt to improve them. So the early fitness function, which
hasn't yet learned to dance around the potholes of the substrate,
will produce good results only on small blocks, essentially
bruteforcing solutions by stepping through large areas of the
hardware configuration space. If you sift through a lot of bits
with a slight bias, you'll sooner or later find a sufficiently
good solution.
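
To make that "slight bias" concrete, here's a minimal sketch of the
dumbest search of this kind: flip random bits in a configuration
bitstring and keep the change only when a fitness function says it
didn't get worse. The eval_fitness() stub is hypothetical -- in
reality it means loading the bits into the device, stimulating it,
and scoring the response:

#include <stdlib.h>

#define CFG_BITS 1024   /* size of the configuration bitstring */

/* Hypothetical stand-in for the real measurement: load the bits
 * into the device and score its behaviour. */
extern double eval_fitness(const unsigned char *cfg, int nbits);

static void flip_bit(unsigned char *cfg, int i)
{
    cfg[i / 8] ^= (unsigned char)(1 << (i % 8));
}

/* Blind (1+1) search: mutate, keep if no worse. The only "bias"
 * is the accept/reject step, yet given enough cycles it walks
 * into surprisingly good corners of the configuration space. */
void evolve(unsigned char *cfg, long generations)
{
    double best = eval_fitness(cfg, CFG_BITS);

    for (long g = 0; g < generations; g++) {
        int i = rand() % CFG_BITS;
        flip_bit(cfg, i);
        double f = eval_fitness(cfg, CFG_BITS);
        if (f >= best)
            best = f;           /* keep the mutation */
        else
            flip_bit(cfg, i);   /* undo it */
    }
}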

However, real-world mutation is anything but blind, and as soon
as someone has burned enough computational cycles evolving the
mutation function itself, the search will suddenly start becoming
much better (a sketch of the idea follows below). If you then do
something stupid, such as building this evolved machinery into a
dumb Net worm, and release it into the wild, it will bring down
the Net, because it will discover most of the holes in the protocol
layers, which are legion. That's a rather radical global debugging
session, and it will bring most of the world's economy to its knees
(this is no hyperbole, since we're not talking about the current
situation), until the Net is sufficiently patched to become
operable again. The worm will be sufficiently subdued to become
stealthy (becoming a more adapted parasite, or even a symbiont),
and with time it can even be incorporated into the network
protocols, becoming a lot like an immune system, coevolving with
the informational ecology (worms, and viruses, oh my).
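
What "evolving the mutation function itself" can look like in the
simplest case is the self-adaptive mutation of evolution strategies:
the per-individual mutation rate rides along in the genome and is
itself mutated, so rates that produce good offspring automatically
proliferate. A sketch, all names illustrative, reusing the
hypothetical eval_fitness() setup from above:

#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define CFG_BITS 1024

struct individual {
    unsigned char cfg[CFG_BITS / 8];
    double rate;    /* per-bit mutation probability, part of the genome */
};

static double urand(void)   /* uniform in (0,1) */
{
    return (rand() + 1.0) / (RAND_MAX + 2.0);
}

static double nrand(void)   /* crude standard-normal deviate (Box-Muller) */
{
    return sqrt(-2.0 * log(urand())) * cos(2.0 * M_PI * urand());
}

/* Self-adaptive mutation: first perturb the rate log-normally,
 * then mutate the bits with the *new* rate. Offspring that inherit
 * a well-tuned rate outcompete the rest under selection, so the
 * search tunes its own mutation function as it runs. */
void mutate(struct individual *kid)
{
    const double tau = 0.2;   /* learning rate, illustrative */

    kid->rate *= exp(tau * nrand());
    if (kid->rate < 1.0 / CFG_BITS) kid->rate = 1.0 / CFG_BITS;
    if (kid->rate > 0.5)            kid->rate = 0.5;

    for (int i = 0; i < CFG_BITS; i++)
        if (urand() < kid->rate)
            kid->cfg[i / 8] ^= (unsigned char)(1 << (i % 8));
}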

This is a very favourable scenario, especially if it occurs
relatively early, because it greatly hardens the network layer
against future perversion attempts by virtue of establishing
a baseline of diversity and response adaptiveness.

Things only become bleak when you use your mutation function
to (ominous music, lots of hardware boxes standing in a room
in a mountain castle, coolant mists wafting, enter Igor in
a white lab coat) create a BRAIN! MUAHAHAH! I can make a
Brain!!!

I don't know how hard it is to create a nucleus, nor at which
stage an escape into the Net would still be safe (a chimp would
do a lot more damage than a rodent), but I would strongly
suggest not trying the experiment.

> cheaper to throw more hardware on slow algorithms and operating systems
> than to improve them, but that might not always hold. For example, if we
> assume nanotech is delayed a bit we will run into a period where Moore's
> law temporarily slacks off, and a big economic incentive to utilize the

This is possible. It is hard to predict what will happen, but given
that embedded-RAM designs -- never mind FPGA-flavoured architectures
-- haven't begun cropping up in the roadmaps yet, and that most
programmers don't get parallel programming (who here has heard of
active messages? hands up), it seems that people will build fat
CPUs with slow external RAM, then figure out how to squeeze the
last little bit of performance out of that by optimization, before
turning to the alternatives and realizing that they have to throw
away essentially all the tools they've grown to love and depend on.
Schadenfreude, what a wonderful word.
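
For the hands-up crowd: the active-message idea, in caricature, is
that a message carries not just data but the index of the handler
the receiver should run on arrival -- no receive-and-parse loop on
the far side, just dispatch. A toy rendition, all names made up:

#include <stdio.h>

/* An active message carries the *index of a handler* along with
 * its payload; on arrival the runtime dispatches straight into
 * that handler instead of parking the data in a receive queue. */

typedef void (*am_handler)(int src_node, const void *payload);

struct am_msg {
    int handler_id;
    int src_node;
    unsigned char payload[64];
};

static void h_put(int src, const void *p)  { /* store incoming data */ }
static void h_ack(int src, const void *p)  { printf("ack from %d\n", src); }

/* handler table -- both sides agree on the numbering */
static am_handler handlers[] = { h_put, h_ack };

/* what the network interrupt / polling loop does on arrival */
void am_deliver(const struct am_msg *m)
{
    handlers[m->handler_id](m->src_node, m->payload);
}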

Given human inertia, this will take a long time. My crystal
ball is in for service right now, but I do expect 2d molecular
circuitry by then, soon ramping up to multilayer, and then
going 3d molecular crystal in a more or less organic fashion.

> hardware better appears. Similarly, once we have full nanotech it is
> hard to do any better other than go for more parallelization, which
> anyway requires quite a bit of paradigm shifting in programming.

Yes, I recommend any programmer who's interested to take a good look
at MPI (there's MPI-1, and MPI-2, the most recent standard). It's
going to stick around with us for a long time, and is already very
useful if you're into high-availability and high-performance
clustering (a bunch of PCs running a *nix, connected by Fast Ethernet
or Myrinet, a rather expensive high-end networking standard). A core
subset of MPI should scale to the very large, relatively fine-grained
systems of the medium-term future, and to their emulations in
molecular circuitry. Check out "Parallel Programming with MPI" by
Peter S. Pacheco (Morgan Kaufmann publishers).
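
As a taste of the programming model, here's one minimal,
standard-conforming MPI program -- the classic SPMD pattern:
the same binary runs on every node, with behaviour switched on
the rank. (The token computation is a stand-in for real work.)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* who am I?           */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many of us?     */

    if (rank == 0) {
        printf("node 0 of %d up\n", size);
        for (int src = 1; src < size; src++) {
            int token;
            MPI_Status st;
            MPI_Recv(&token, 1, MPI_INT, src, 0, MPI_COMM_WORLD, &st);
            printf("got %d from node %d\n", token, src);
        }
    } else {
        int token = rank * rank;   /* stand-in for real work */
        MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Compiles with mpicc and runs under mpirun on your cluster of
choice; the same source scales from two PCs on Fast Ethernet to
however many nodes you can afford.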


