From: Ramez Naam (mez@apexnano.com)
Date: Thu Jun 19 2003 - 23:04:03 MDT
From: Robin Hanson [mailto:rhanson@gmu.edu]
> As Wei Dai said, the economic rationale for mind transparency
> goes away if it requires a far more expensive mind to see
> into a cheaper mind. So if these AIs are the most
> sophisticated things around, then the question I'm
> interested in is whether economic pressures encourage
> them to make themselves transparent to each other, and
> what mental constructs that includes.
I think this hinges upon factors we're not yet sure of. Two come to
mind:
1) Will future minds have clean, intentionally designed mental
architectures or messy evolved architectures?
2) Will the architectures of future minds be fairly similar across
individuals, or will the population be fairly diverse in mental
architecture?
The optimal world for transparency would seem to be one in which minds
have designed, top-down architectures, and in which those architectures
are fairly homogeneous from mind to mind. In this world, sharing
internal states is fairly easy, and transparency might be considered
the norm.
Here the question of the economic rationale for transparency comes
down to two factors:
1a) The cost of implementing a deception (of pretending to be
transparent while not actually being so)
1b) The economic payoff from deceiving others.
The payoff in 1b would depend partly on 1a and partly on situational
factors.
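To make the tradeoff concrete, here is a minimal sketch (in Python,
with purely hypothetical numbers and a hypothetical detection-risk
term that the post does not specify) of the expected-value comparison
a mind might face when deciding whether to fake transparency:

    # Minimal sketch of the deception decision described above.
    # All values are hypothetical placeholders, not figures from the post.

    def deception_is_rational(implementation_cost, expected_payoff,
                              detection_prob, penalty_if_caught):
        """Return True if faking transparency has positive expected value.

        implementation_cost: cost of building the deception (factor 1a)
        expected_payoff:     gross gain from deceiving others (factor 1b)
        detection_prob:      chance the deception is discovered (situational)
        penalty_if_caught:   loss (reputation, sanctions) if discovered
        """
        expected_value = ((1 - detection_prob) * expected_payoff
                          - detection_prob * penalty_if_caught
                          - implementation_cost)
        return expected_value > 0

    # Example: cheap deception, modest payoff, low odds of being caught.
    print(deception_is_rational(implementation_cost=2.0,
                                expected_payoff=10.0,
                                detection_prob=0.1,
                                penalty_if_caught=50.0))   # True

The point of the sketch is just that when architectures are homogeneous
and deception is cheap to implement, the decision turns almost entirely
on the situational payoff and the risk of being caught.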
Alternatively, in the opposite world, where mind designs are messy,
evolved things, and where mental architectures vary wildly from
individual to individual, transparency would be extremely costly to
achieve. In that world I would expect to see /less/ transparency than
we have today, since current humans at least share a mental
architecture across the population, which results in many giveaways
of dishonesty in communication.