Re: Who'd submit to the benevolent dictatorship of GAI anyway?

From: Adrian Tymes (wingcat@pacbell.net)
Date: Thu Sep 04 2003 - 16:41:50 MDT


    --- Brett Paatsch <bpaatsch@bigpond.net.au> wrote:
    > Samantha Atkins <samantha@objectent.com> writes:
    > > Who's talking about a "disembodied brain"? What do
    > > you mean by "disembodied"? An SI is embodied with a
    > > computational matrix and may have many extensions in
    > > the physical world in the way of devices it controls.

    In addition to what Brett said: when first created,
    why would an SI necessarily have any extensions in the
    physical world? (Though, if it did have some -
    enough that it could build more, possibly after
    acquiring, by theft if necessary, the tools to do so -
    then see the "unconstrained replicator" problem. Not
    quite gray goo, since it's macroscale, but it shares
    many similar features.)

    > I for one would have serious difficulties taking at
    > face value that such an entity should be submitted to
    > merely because it was of higher intelligence by most
    > people's reckoning. I'd have reservations as to who its
    > real masters and what its real goals might be.

    Not to mention, is it really a higher intelligence?
    There are many examples throughout history of
    purported higher intelligence that turned up empty.
    Let it prove itself by its actions...and then let the
    results of those actions provide reason for or
    against, probably against, servitude. One can easily
    imagine the first AI developing to the level of a
    retarded, dogmatic child and then failing to
    comprehend any way to, or justification for, progress
    from that status.

    One can postulate truly hyperintelligent AIs. But
    then one has to postulate the results of their
    actions.

    > Could be the only way a hyper intelligent AI can
    > kick start a rapid take off singularity is against
    > the wishes of a majority of voters, i.e. by brute
    > military and/or economic force and through human
    > proxies. That was my thought anyway.

    Or it could be that the only way it could do so is by
    manipulating the wishes of a majority of voters, i.e.
    by making self-improvement, and deep research into how
    to make people (including intelligences on the AI's
    architecture) more intelligent, seem super-cool and
    ultra-popular, thus causing most of humanity to
    become a self-improving group intelligence - kind of
    like today, but much, much faster.

    And if one compares today's rate of such improvement,
    at least among scientific/industrialized societies,
    to the rate one or two centuries back...



    This archive was generated by hypermail 2.1.5 : Thu Sep 04 2003 - 16:51:06 MDT