Re: Is this safe/prudent ? (was Re: Perl AI Weblog)

From: Samantha Atkins (samantha@objectent.com)
Date: Wed Aug 13 2003 - 02:19:20 MDT


    On Tuesday 12 August 2003 11:05, nanowave wrote:
    > >So we already have examples of individuals getting *really*
    > >annoyed and doing something about it. It is probably not
    > >going to be a good thing if individuals get *really* annoyed
    > >and have an AI at their disposal to do something about it.
    > >
    > >Robert
    >
    > Yes, the possibility of unwittingly unleashing a kind of "berserker" upon
    > the earth would seem to be not statistically insignificant; nonetheless, I
    > remain confident (if only via gut feeling) that this kind of
    > Saberhagen-esque scenario will never come to pass. Is it completely
    > unreasonable to presume that 'human-level' intelligence implies human-level
    > ethics and values (two kinds of social intelligence) as well as pure
    > knowledge crunching/combinatorial power?
    >

    Existence proof, please. Could we please have an AI that even remotely
    approaches human-level intelligence in practice *before* we start talking
    about restricting the field in various, probably draconian, ways? At this
    point, based on actual AI in practice rather than unsubstantiated theory, I
    rank the chances of a "berserker" AI being released any time soon as most
    certainly statistically insignificant.

    As for your last sentence, though, there is a simple existence proof. There
    are many millions of beings on this planet with human-level intelligence who
    lack any degree of the ethics and values that would effectively check some of
    the worst possible actions. Of course, not much can be said for much of what
    generally passes for human ethics and values. Certainly normal human
    ethics and values are likely insufficient for a super-human intelligence.

    > A tendency toward malevolence and unnecessary destruction is still
    > considered entropic and SUB-human in these parts, is it not?
    >

    Well, with good AI-based "smart" retribution, collateral damage should be
    kept to a minimum. :-)

    - samantha


