Re: Eliezer S. Yudkowsky's whopper

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Wed Oct 04 2000 - 12:13:45 MDT


Andrew Lias writes:
> [very true stuff snipped]
> The simple fact is that the future is unknown and, beyond a very limited
> range, unknowable. Making predictions is a risky business at best. The
> strong AI community has repeatedly embarrassed itself by making
> over-confident estimates about when strong AI would show up. As a
> consequence, a lot of people outside of the AI community consider the whole
> venture to be a joke, regardless of the merits of any particular line of
> research.
 
Here's a (somewhat miffed-sounding, since written from the point of
view of the Lisp community) take on that, from:
     http://www.naggum.no/worse-is-better.html

( [...]
        Part of the problem stems from our very dear friends in the artificial
        intelligence (AI) business. AI has a number of good approaches to
        formalizing human knowledge and problem solving behavior. However, AI
        does not provide a panacea in any area of its applicability. Some
        early promoters of AI to the commercial world raised expectation
        levels too high. These expectations had to do with the effectiveness
        and deliverability of expert-system-based applications.

        When these expectations were not met, some looked for scapegoats,
        which frequently were the Lisp companies, particularly when it came to
        deliverability. Of course, if the AI companies had any notion about
        what the market would eventually expect from delivered AI software,
        they never shared it with any Lisp companies I know about. I believe
        the attitude of the AI companies was that the Lisp companies will do
        what they need to survive, so why share customer lists and information
        with them? [...]
)
 
> I think that we all agree that we are approaching a point where an explosion
> of intelligence will become a viable possibility. I think, however, that
> it's rather foolish to assert that it will happen before a given time. It
> *might* happen by 2020. It also *might* happen in 2520. It also *might*
> happen tomorrow, if some obscure line of research that nobody is aware of
> hits the jackpot. We simply don't know what factors may advance or retard
> the date.
 
This view is too harsh. We can surely assign probabilities. Assuming
nothing happens to global civilization (major war, global climate
flip-flopping (I just read a paper on the correlation of species
extinctions with volcanic activity; anthropogenic gases may also act
as a precipitant), asteroidal impact, the triffids, terminal boredom),
it's extremely unlikely to happen tomorrow, with the probability
rising rapidly after that, peaking slightly after 2020, but
*distinctly* before 2520. I would indeed be genuinely surprised if it
didn't happen before 2050 (you can call me at my home for the elderly
if this turns out wrong, or drop a notice by the dewar).

> My own concern is that there is a reasonable possibility that we'll hit the
> Singularity in my lifetime and that it may take a form that will exclude my
> interests. My primary interest is that the human race doesn't fall into (or

Ditto here.

> get dragged into) a non-continuitive extinction (i.e., an extinction where
> we simply cease, rather than transcending to something more intelligent and
> capable). My primary concern is that the only thing that we can control,
> when it comes to such a singularity is the initial conditions. I can think

The question is whether initial conditions have an impact on early
growth kinetics. I think they do. If they don't, the whole question is
moot anyway, and all we can do is lean back and watch the pretty
pyrotechnics.

> of all too many scenarios where I, personally (to say nothing of the species
> as a whole), either get left behind or destroyed. Unfortunately, there's a
> whole bunch of those.
>
> It is my hope that we will be able to see *just* far enough ahead that we
> don't just blunder into the damned thing (ask me about my monkey scenario!
> ;-). One thing that seems certain to me is that there seems to be a lot of
> unfounded speculations regarding the morality and rationality of
> post-organic hyperintelligence. It seems that the only sane position to

Guilty as charged. However, evolutionary biology is equally applicable
to sentients and nonsentients.

> hold in an arena that harbors such beings is to *be* such a being (and even
> that might be presumptive -- we are assuming that amplified intelligence is
> a good thing to have; it's possible that hyperintelligences are prone to
> fatal or adverse mental states that only manifest beyond a certain level of
> complexity; after all, we do know that certain pathological mental states,
> such as a desire for self-martyrdom, only show up at human levels of
> intelligence).
 
Yeah, but we have to decide today, in the face of limited data. We
have to work with current models, however faulty. The longer we wait,
the harder it will be to change course.
 
> Frankly, the notion that we are approaching a Singularity scares the hell
> out of me. If I thought that there were some viable way to prevent it or
> even to safely delay it, I'd probably lobby to do so. I'm not convinced
> that there are any such options. As such, my personal goal is to be an
> early adopter and hope that self-amplification isn't the mental equivalent
> of jumping off of a cliff.

Sure, but what would be the alternative? Assuming it happens at all,
we can't prevent it; we can (at best) delay it. My statistical
(assuming current numbers) life expectancy will be exceeded in about
60 years. If the Singularity turns out malignant, I can only die once.
It may be a bit premature, it may be quite nasty, but it will be
brief. If it's softer, it gives me a genuine chance to achieve
immortality the Woody Allen way.

Dunno, sounds good to me.


