From: Brett Paatsch (bpaatsch@bigpond.net.au)
Date: Thu Sep 04 2003 - 02:41:14 MDT
Samantha Atkins <samantha@objectent.com> writes:
> On Wed. 03 Sept. 2003 14:34, Adrian Tymes wrote:
>
> >
> > He's wondering how an essentially disembodied brain,
> > no matter how hyper-intelligent, could even begin to
> > take over the world.
>
> Who's talking about a "disembodied brain"? What do
> you mean by "disembodied"? An SI is embodied with a
> computational matrix and may have many extensions in
> the physical world in the way of devices it controls. Is
> the question why the SI would want to? There are
> reasons it might form a subgoal of extending its power.
> This might look a bit like "taking over the world" if
> carried too far. So, was that the question?
I didn't use the term "disembodied brain", but that's a fair way
of putting it. I had in mind any artificially produced hyper general
intelligence: an intelligence that is non-biological and would not
be considered a legal person in any existing jurisdiction, and
therefore could not vote, run for elected office, or perhaps even
own property in its own right.
I cannot imagine any circumstance in which the first artificial
superintelligence would not either emerge as a distributed
phenomenon or grow from some sort of seed AI under the
guidance of some human person or group of persons.
My question, therefore, is this: if such a non-biological general
hyper intelligence suddenly appeared on the world stage, who
would submit to its authority (given that it is not a person and
has no right to run for office, etc.) or trust that it was benevolent
(even if it was), when it had either been developed by others or
had emerged on its own?
I, for one, would have serious difficulty accepting at face
value that such an entity should be submitted to merely
because it was of higher intelligence by most people's
reckoning. I'd have reservations about who its real masters
were and what its real goals might be.
I suspect the general AI would not be able to exert power
directly, but only through human proxies, and that these
proxies would themselves be distrusted by some and would
have difficulty reaching a point where they could operate as
figureheads for any benevolent dictatorship.
It could be that the only way a hyper-intelligent AI can kick-start
a rapid take-off singularity is against the wishes of a majority
of voters, i.e. by brute military and/or economic force, exercised
through human proxies. That was my thought, anyway.
Do others see a problem with this reasoning or conclusion?
Brett