From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jun 18 2003 - 14:52:36 MDT
Robin Hanson wrote:
>
> How leaky will our distant descendants be? How far will they want to
> go, and be able to go, in agreeing to reveal their secrets to each
> other, to avoid the social problems that secrets cause? It seems
> plausible that our descendants will be constructed so that they can
> allow outsiders to directly inspect the internal state of their minds,
> to verify the absence of certain harmful secrets. It also seems
> plausible that our descendants will feel a pressure to standardize the
> internal state of their mind to facilitate such inspection, just as
> corporations now feel pressure to standardize their accounting.
> [...]
> Nevertheless, as an overall long term trend, I'm leaning toward
> expecting not only a move toward a transparent society (a la Brin), but
> then toward transparent minds as well. And one disturbing implication
> of this is that we may well evolve to become even *more* self-deceived
> than we are now, as believing one thing and thinking another becomes
> even harder than now.
Accepting the scenario, for purposes of discussion... why would
rationalization be any harder to see than an outright lie, under direct
inspection? Rationalization brainware would be visible. The decision to
rationalize would be visible. Anomalous weights of evidence in remembered
belief-support networks could be detected, and the computing power required
for that scrutiny would decrease as the size of the shift increased.
It's possible there would be no advantage to rationalization over lying;
from a computational standpoint, rationalization might even be bigger and
more blatant.
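
To make the detection idea concrete, here is a minimal sketch (my own
illustration, not something proposed in the thread), assuming the inspector
can read both the weight a mind recorded for each remembered piece of
evidence and the weight recomputed from the evidence itself. BeliefNode,
detect_shift, fake_network, and the 0.05 tolerance are all hypothetical
names and numbers:

    import math
    import random
    from dataclasses import dataclass

    @dataclass
    class BeliefNode:
        recorded_weight: float    # weight the mind's memory assigns to this evidence
        recomputed_weight: float  # weight an inspector recomputes from the evidence itself

    def detect_shift(nodes, tolerance=0.05, z=3.0, min_samples=5):
        """Sequentially sample nodes, stopping once the mean discrepancy
        |recorded - recomputed| is confidently (z standard errors) above or
        below `tolerance`.  Blatant shifts cross that boundary after only a
        few samples; subtle ones force the inspector to keep looking."""
        count, total, total_sq = 0, 0.0, 0.0
        for node in random.sample(nodes, len(nodes)):
            d = abs(node.recorded_weight - node.recomputed_weight)
            count += 1
            total += d
            total_sq += d * d
            if count >= min_samples:
                mean = total / count
                var = max(total_sq / count - mean * mean, 1e-12)
                stderr = math.sqrt(var / count)
                if abs(mean - tolerance) > z * stderr:
                    return mean > tolerance, count   # confident verdict, stop early
        return (total / count) > tolerance, count    # had to inspect every node

    def fake_network(shift, n=500, noise=0.03):
        # Hypothetical record: true evidence weight 0.3, recorded weight
        # inflated by `shift` plus ordinary memory noise.
        return [BeliefNode(0.3 + shift + random.gauss(0, noise), 0.3) for _ in range(n)]

    print(detect_shift(fake_network(0.00)))  # honest record: verdict False
    print(detect_shift(fake_network(0.07)))  # subtle shift: True, typically needs many samples
    print(detect_shift(fake_network(0.40)))  # blatant shift: True after the minimum handful

The early stop is where the scaling shows up: a blatant 0.4 shift pushes the
running mean far past the tolerance within the first handful of samples,
while a subtle 0.07 shift keeps it near the boundary and forces many more
samples, so the computing power spent on scrutiny falls as the distortion
grows.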
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence