Joseph Sterlynne wrote:
>
> Now, I'm not necessarily challenging the
> idea---just exploring it---but many of us seem to think that an upload will
> have no trouble whatsoever with effecting any desired change, including
> substantial upgrades. What guarantee do we have that there is no inherent
> problem in a human-class mind perceiving all of itself? There is of course
> the question of self-observation; it would certainly help to know how much
> control our consciousness really has over the rest of the mind.
You don't modify your entire mind at once. You look at a subsection, then modify that.
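A toy sketch of what I mean, purely illustrative (the ModularMind class and the module names are invented for the example, not anyone's actual design): treat the mind as a set of named modules, inspect one of them in isolation, and swap in an improved version while leaving everything else alone.

    from typing import Callable, Dict

    class ModularMind:
        """Toy model: a 'mind' as named, separately replaceable modules."""

        def __init__(self) -> None:
            self.modules: Dict[str, Callable[[int], int]] = {
                "perception": lambda x: x + 1,
                "planning": lambda x: x * 2,
            }

        def inspect(self, name: str) -> Callable[[int], int]:
            # Look at one subsection in isolation.
            return self.modules[name]

        def modify(self, name: str, new_version: Callable[[int], int]) -> None:
            # Rewrite just that subsection; everything else stays as it was.
            self.modules[name] = new_version

    mind = ModularMind()
    old_planning = mind.inspect("planning")
    mind.modify("planning", lambda x: old_planning(x) + 3)  # incremental upgrade
    print(mind.inspect("planning")(5))  # 13: one module wrapped, nothing else rebuilt

The point is only that inspection and modification happen at the granularity of a subsection, never the whole system at once.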
If there's one thing that distinguishes the field of "transhuman AI" from the graveyard of dreams that marks previous AI, it's that in transhuman AI, the intelligence is complex enough to be divided into parts, or into architecture and content. To quote from yet another unpublished work:
==
One of the fundamental distinctions between transhuman AI and previous
AI is that, while previous AIs are sometimes divided into modular
knowledge bases and the central engine, the central engine itself is
usually - perhaps even always - such a simple algorithm that it can't be
divided into content and architecture. This is, in fact, the same
factor that distinguishes the Standard Social Sciences Model from
evolutionary psychology - one holds that any content-specificity reduces
flexibility by restraining general intelligence, the other that
flexibility is the result of having lots and lots of specificity for many
different kinds of content.
EURISKO, despite being crystalline, wasn't a "previous AI" but a transhuman
AI - the heuristics provided active problem-solving ability that could be
divided into general heuristics and domain-specific heuristics. Thus the
general intelligence could act to modify a single heuristic, and though the
heuristics were similar enough - in the end, it was a limited set of
heuristics modifying each other - that there was an eventual sterility
problem, it got a longer run-up than any other program before or since.
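To make the shape of that concrete - and this is only a toy sketch of the general idea, nothing resembling Lenat's actual Lisp implementation; the scoring target and the "guess" heuristic are invented for the example - a "general heuristic" proposes a change to a single domain heuristic, and the change is kept only if it scores better:

    import random
    from typing import Callable, Dict, List

    DomainHeuristic = Callable[[int], int]

    def score(h: DomainHeuristic, cases: List[int]) -> int:
        # Hypothetical fitness: how close the heuristic comes to computing x + 7.
        return -sum(abs(h(x) - (x + 7)) for x in cases)

    def perturb(h: DomainHeuristic) -> DomainHeuristic:
        # A "general heuristic": propose a small modification to one domain heuristic.
        delta = random.choice([-1, 1])
        return lambda x, _h=h, _d=delta: _h(x) + _d

    domain: Dict[str, DomainHeuristic] = {"guess": lambda x: x}  # starts out poor
    cases = list(range(10))

    for _ in range(200):
        name = random.choice(list(domain))    # pick a single heuristic...
        candidate = perturb(domain[name])     # ...and modify just that one
        if score(candidate, cases) > score(domain[name], cases):
            domain[name] = candidate          # keep the change only if it helps

    print(score(domain["guess"], cases))  # climbs toward 0, then stalls

It also shows the sterility problem in miniature: once no single-step change drawn from the fixed set of operators helps, improvement simply stops.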
-- 
sentience@pobox.com       Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way