Re: Hawking on AI dominance

From: Mike Lorrey (mlorrey@datamann.com)
Date: Sun Sep 09 2001 - 18:45:39 MDT


Russell Blackford wrote:
>
> JR said
>
> >From an evolutionary psychology standpoint, human values evolve from
> >human needs, which are tied to biological needs.
>
> Which is a pretty good answer to the question I asked. My own answer would
> probably be fairly similar, though I'd distinguish (as you might, too)
> various kinds of needs. For example, my needs as a particular biological
> organism might differ from what was needed by my ancestors for inclusive
> fitness in the evolutionary environment.

Yes, this is a good statement, and one can clearly see, by layers of
abstraction, that biological needs are dictated by the environment, which
is in turn dictated by the laws of nature.

On that basis, in order for technological intelligence to develop and
thrive in this universe, it must fit into a particular niche of evolved
behavior, at least part of which is constructed of those pesky 'values'.

This is obviously highly qualified by the anthropic principle and the
Fermi Paradox: the lack of any alternatives, so far as we can tell,
suggests that the factors by which the intelligences known as 'humans'
developed are necessary ones.

Until presented with any alternatives, we have to act from this default
state, and trying to start from scratch is more likely than not to result
in failure to achieve AI. Therefore, any AI we develop will likely act
and behave mighty human-like, with at least *some* human values. Since
such an AI would have a far greater ability to self-modify, whether it
retains those values will obviously depend on whether human values are as
objective as some think, as well as on whether we actually hard-wire them
in to build "A Friendly AI".

>
> JR added:
>
> >I suspect you have your own opinions about where values come from.
> >Regardless of where they come from, human values are not necessary for
> >pure intelligence, and my conjecture is that they interfere with pure
> >intelligence (which is the ability to solve problems autonomously).
>
> JR, I'm just trying to get a handle on your thinking. I *think* I now see
> why you say an AI with values would be (in a sense) "weak AI". Actually, I
> assumed you had something like this in mind but wasn't sure.
>
> You are, of course, redefining the terms "strong AI" and "weak AI", but I
> realise you are doing so deliberately for rhetorical effect, so that's okay.
>
> Can I take this a bit further, however? You seem to be putting a position
> that the only, or the overriding, value is the ability to solve problems.
> But isn't this a value judgment? I'm not trying to be cute, just trying to
> see how you get to this point.

Yes, as others have pointed out on other issues, this is called
'stealing the assumption'.


