From: James Rogers (firstname.lastname@example.org)
Date: Tue Feb 19 2002 - 00:48:40 MST
On 2/17/02 10:58 AM, "Damien R. Sullivan" <email@example.com> wrote:
> None of this, in my mind, recently enriched by reading PhD theses of
> Hofstadter's students, suggests any pressure toward independence and
> autonomy or having 'own ends'. All the complexity is geared toward
> fulfilling whatever command it received at the prompt.
In relation to "intelligence" in its purest and most universal form, goals
are essentially external bias sources that generate gradients for doing work
in the same way enthalpy gradients can drive work in the physical world.
Biological intelligence comes with a number of bias sources wired into the
intelligence engine by default. While a full analysis would be complex, I
think a cursory inspection indicates that all human goals are really
subgoals of the external biases that come with the hardware.
From this perspective, an unbiased AI wouldn't need to do anything at all,
and probably couldn't, beyond perhaps acting as an efficient recording device.
Externally biasing the AI structure (e.g. with a command from a prompt) would
be the primary way of extracting intelligent behavior, and you would get the
same level of intelligence from an unmotivated AI as you would from an
equivalent one with a constant source of bias. Where this could become
interesting is in
devising a permanent bias source for an AI structure that actually gives the
results the designer intends. There is a good argument to be made that a
constantly biased AI engine will do far more interesting things in the long
run (and do more "AI-ish" things, as most people imagine them), possibly at
a higher efficiency than an unbiased AI. However, designing high-quality
bias sources into an AI does not appear to be a trivial problem in practice,
and one can easily imagine all sorts of Faustian unintended consequences
resulting from "auto-biasing" intelligence. It certainly has
provided plenty of fodder for SF books.
I think it might be perfectly valid to classify AIs as either "externally
biased" or "internally biased", depending on how the biasing mechanisms are
wired into the general intelligence engine. I can also imagine the political
and legal fallout from a futile attempt to regulate specific capabilities
along these lines.
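The distinction I have in mind could be sketched like this (a purely hypothetical illustration; the classes and names are mine, invented for this sketch): the same engine does work only when a bias is applied, and the two categories differ only in where that bias comes from.

```python
# Hypothetical sketch of the "externally biased" vs. "internally
# biased" classification. All names are invented for illustration.

class Engine:
    """Stand-in for a general intelligence engine: it does work
    only when some bias source (a goal) is applied to it."""
    def run(self, bias):
        return f"working toward: {bias}" if bias else "idle"

class ExternallyBiasedAI:
    """Bias arrives with each command (e.g. from a prompt)."""
    def __init__(self):
        self.engine = Engine()

    def command(self, goal):
        return self.engine.run(goal)

class InternallyBiasedAI:
    """A permanent bias source is wired in by the designer."""
    def __init__(self, standing_goal):
        self.engine = Engine()
        self.standing_goal = standing_goal

    def tick(self):
        # Acts continuously; no external prompt is required.
        return self.engine.run(self.standing_goal)
```

The engine is identical in both classes; the wiring of the bias source is the only difference, which is what would make the classification meaningful (and what would make regulating one category but not the other so messy).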
Extremely tired from a weekend on the ski slopes,
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:40 MST