On Tuesday, 1 May 2001 at 18:15, Eugene.Leitl@lrz.uni-muenchen.de wrote:
> Anders Sandberg wrote:
> > You make a hidden assumption here: that the jump in software
> > quality/capability/whatever is very sudden and not gradual. Is there any
> > reason to believe that is true?
>
> Of course I made a hidden assumption. If assumptions were recursive
> macros you'd have to expand fully, posts would take a day to write,
> and bore the majority of targets to tears.
No problem with that. But I think some of the macros you use are non-trivial,
and in a discussion like this they ought to be made visible.
> > I agree with your sketch of the hardware/software gap, but it is not
> > clear to me that improvements in software must always be less than
> > improvements in hardware until the fateful quantum leap. Right now it is
>
> I'm extrapolating the persistence of the trend from past history
> (the last four-five decades, only two-three more yet to go),
> and from a lack of awareness of the problem set. The field, still
> being rather young, has already settled into rather heavy dogma.
> Holistic approaches are being actively deprecated, since they break
> the usability of abstractions. Lunatic fringe approaches, some of
> them extremely fruitful in the long run, do not receive the attention
> they deserve, because they appear sterile in the short run. I could
> go on, but I think that part of the problem is for real.
People are very aware of the problem, IMHO. It is just that so far many of
the solutions have not panned out, making people rather risk-averse when it
comes to new approaches. But given how quickly people latch on to new
computing trends once they become fashionable and gain enough mass, I don't
think a method for better software efficiency would be ignored if it could
demonstrate a measurable improvement.
> We do not have many datapoints as to the amplitude of the growing
> performance gap between the average case and the best case,
> but current early precursors of reconfigurable hardware (FPGAs)
> seem to generate extremely compact, nonobvious solutions even
> using current primitive evolutionary algorithms.
Evolutionary algorithms are great for specialised modules, but lousy at less
well-defined problems, or when the complexity of the problem makes the
evolutionary search space too nasty. I don't think we will get a dramatic
jump in software abilities by stringing together efficient small modules,
since, as you say, the gluing is the hard part and not easy to evolve itself.
On the other hand, it does seem to be a way to improve software and hardware
somewhat by making the hardware adaptable to the software, which is after
all a nice thing.
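
To make the "specialised modules" point concrete, here is a minimal sketch
in plain Python (hypothetical, not any production system) of a genetic
algorithm evolving a small lookup-table module. It converges quickly
precisely because the fitness is crisply specified, the kind of well-defined
subproblem where evolutionary methods shine:

import random

# Evolve a 16-entry lookup table implementing 4-bit even parity.
# The fitness is crisply defined: fraction of inputs answered correctly.
TARGET = [bin(x).count("1") % 2 == 0 for x in range(16)]

def fitness(table):
    # Fraction of the 16 inputs where the table matches the target.
    return sum(t == g for t, g in zip(table, TARGET)) / 16.0

def mutate(table, rate=0.05):
    # Flip each output bit independently with probability `rate`.
    return [not b if random.random() < rate else b for b in table]

population = [[random.random() < 0.5 for _ in range(16)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 1.0:
        break
    # Keep the best 10 and refill the rest with mutated copies of them.
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(40)]
print(generation, fitness(population[0]))

With a fully specified fitness like this the search converges in a few dozen
generations; the trouble starts when the module boundary, or the fitness
itself, is part of what you are searching for.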
> I see two major driving factors for the punctuated equilibrium
> (the Jack-in-the-Box-Golem) script: the positive autofeedback of
> the mutation function, and the hostile takeover of the computational
> resources of the global network.
My experience with evolving mutation functions and fitness landscapes suggests
that this is a very hard problem. Computing cycles help, but I am not that
optimistic about positive feedback scenarios. The problem is likely that the
mutation function is problem-dependent, and for arbitrary, ill-defined and
complex problems there are no good mutation functions. Life only had to
solve the problem of adapting to stay alive; the rest was largely random
experimentation (with sudden bangs like the Cambrian when occasional tricks
became available).
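
To illustrate what "evolving the mutation function" looks like in the easiest
case, here is a minimal sketch (plain Python, hypothetical names, in the
style of evolution-strategy self-adaptation) where each individual carries
its own mutation step size alongside its solution:

import math
import random

def fitness(x):
    # A fixed, well-defined objective: minimise the sphere function.
    return -sum(v * v for v in x)

def self_adaptive_mutate(x, sigma):
    # Evolution-strategy style: mutate the step size first, then
    # use the new step size to mutate the solution vector.
    sigma = sigma * math.exp(random.gauss(0, 0.2))
    return [v + random.gauss(0, sigma) for v in x], sigma

# Each individual is (solution_vector, its_own_mutation_step_size).
population = [([random.uniform(-5, 5) for _ in range(10)], 1.0)
              for _ in range(30)]
for generation in range(100):
    offspring = [self_adaptive_mutate(x, s) for (x, s) in population
                 for _ in range(3)]
    # Select the best; the step sizes ride along with their solutions.
    population = sorted(offspring, key=lambda p: fitness(p[0]),
                        reverse=True)[:30]
print(fitness(population[0][0]), population[0][1])

The catch, and the reason I am sceptical about strong positive feedback, is
that the step sizes that survive are tuned to this particular, stationary
landscape; change the problem and the "evolved" mutation parameters can
easily be worse than starting over.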
As for taking over computational resources, that implies that intelligence is
just a question of sufficient resources and not much of an algorithmic
problem. But so far all evidence seems to point towards algorithms being very
important; having more resources speeds up research, but I have not seen any
evidence (no, not even Moravec's _Robot_) suggesting that if we could just
use a lot of computing power we would get smarter behavior. Besides, hostile
takeovers of net resources are rather uncertain operations, highly dependent
on the security culture at the time, which is hard to predict.
> This is a very favourable scenario, especially if it occurs
> relatively early, because it greatly hardens the network layer
> against future perversion attempts by virtue of establishing
> a baseline diversity and response adaptiveness.
Sounds like a good defense in court :-)