Returning to CaTAI: "AI has an embarrassing tendency to predict
success where none materializes... and to assert that some
simpleminded pattern of suggestively-named LISP tokens completely
explains some incredibly high-level thought process... There are
several ways to avoid making this class of mistake... One is an
intuition of causal analysis... that says 'This cause does not
have sufficient complexity to explain this effect.' One is to be
instinctively wary of trying to implement cognition directly on
the token level". "Any form of cognition which can be
mathematically formalized, or which has a provably correct
implementation, is too simple to contribute materially to
intelligence". "Clearly [there] is vastly more mental material,
more cognitive "stuff", than classical-AI propositional logic
involves". And, "special-purpose low-level code that directly
implements a high-level case is usually a Bad Thing". On the
other hand, the title of Eliezer's article has an awfully
top-down ring due to the word "coding", and his notion of a
"codic cortex" or "codic sensory modality" seems to swamp the
usual notion of a sensory modality. I wonder how, using
Edelman's schema, one would select "values" for a codic sensory
modality without instantly facing all the difficulties of
traditional AI, and in an even more concentrated form than usual
(we don't want a robot that can just sweep the floor, we want a
robot that can write a computer program for a newer robot that
can program better than it does!). It seems like a bit of a leap
from "Blue is bad, red is good" to "COBOL is bad, Java is good"
;->.
Edelman has emphasized the intimate dependence of the
selectionist shaping of the brain on the richly-varied
repertoires made possible by its morphology and biochemistry, and
has warned that these cannot be easily brushed aside by those
looking to build artificial brain-like devices.  The economies in
mapping function to form that evolution achieves by exploiting the
fine-grained variation available to biological systems may have to
be sacrificed when such a selectionist system is simulated on a
computational substrate with a simpler, more regular structure.
Whether that is feasible will, once again, depend on an empirical
determination of the computational penalty for doing so, and on the
economics of the available hardware.  In CaTAI, Yudkowsky writes: "The
nanotechnology described in Nanosystems, which is basically the
nanotechnological equivalent of a vacuum tube - acoustic
computing, diamondoid rod logics (4) - describes a one-kilogram
computer, running on 100 kW of power, which performs 10^21
ops/sec using 10^12 CPUs running at 10^9 ops/sec. The human
brain is composed of approximately 100 billion neurons and 100
trillion synapses, firing 200 times per second, for approximately
10^17 ops/sec total". If hardware such as this becomes
available, it might be economically feasible to throw away a
factor of 10,000 in computational capacity just to simulate the
morphological granularity of the brain on a simpler substrate --
what might be called the selectional penalty.
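A quick sanity check on the arithmetic above: the following few
lines of Python (just a sketch; every constant is taken straight
from the CaTAI quote, not from any independent estimate) make that
factor of 10,000 explicit.

    # Sanity-checking the figures quoted from CaTAI.  The constants
    # below are simply the quoted numbers, not new estimates.

    nano_cpus   = 1e12     # CPUs in the 1-kg Nanosystems design
    ops_per_cpu = 1e9      # ops/sec per CPU
    nano_total  = nano_cpus * ops_per_cpu    # 1e21 ops/sec

    synapses    = 1e14     # ~100 trillion synapses
    firing_rate = 200.0    # firings per second
    brain_raw   = synapses * firing_rate     # 2e16 ops/sec
    brain_total = 1e17     # CaTAI's rounded figure

    # The "selectional penalty": capacity given up to reproduce the
    # brain's morphological granularity on a regular substrate.
    penalty = nano_total / brain_total       # ~1e4

    print("nanocomputer: %.0e ops/sec" % nano_total)
    print("brain:        %.0e ops/sec (raw: %.0e)" % (brain_total, brain_raw))
    print("penalty:      ~%.0e x" % penalty)

Note that 10^14 synapses at 200 Hz actually comes to 2 x 10^16
ops/sec; CaTAI rounds this up to ~10^17, which is where the round
factor of 10,000 comes from.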
In the meantime, and depending on the relative rate of
advancement of biotechnology compared to that of other fields,
it's conceivable that the first conscious artifacts may, in fact,
be based on biological tissue, perhaps genetically engineered to
suit the purpose. Having proto-neurons that can both
self-replicate and follow gradients of Cell Adhesion Molecules
and Substrate Adhesion Molecules to wire themselves together, as
a living nervous system does during embryogenesis, would solve the
sort of manufacturing difficulties alluded to by Gordon Moore in
the June 19, 2000 special issue of _Time_ magazine entitled "The
Future of Technology": "I'm a bit of a skeptic on molecular
chips. Maybe I'm getting old. It's hard for me to see how those
billions of transistors can be interconnected at that level"
(p. 99). Squishy, biologically-based AIs seem to be getting more
common in contemporary science fiction, as, for example, with the
bacterial AI "Roddy" in Greg Bear's _Slant_ (Tor Books, 1997, see
p. 476), or with the current television series _Star Trek:
Voyager_'s reference to "bio-neural gel packs" in a couple of
episodes (see _The Star Trek Encyclopedia_ [expanded edition], by
Denise and Michael Okuda, Simon & Schuster, 1999, p. 45). People
may be more squeamish about using squishy AI -- imagine a smart
car with a synthetic brain in a tank perfused by nutrients and
oxygen. Imagine leaving that car parked at the mall on a hot
summer day and coming back to find that the power went off
(due to a faulty fuel cell, perhaps), leading to the failure of
the squishy AI's artificial heart/lung/dialysis machine and its
air-conditioner. He's dead, Jim. Smells like it, too. What
would you tell the kids? People on this list do not fantasize
about being "uploaded" into giant squishy, gurgling, pulsating
biological brains; diamondoid processors are a much more
appealing idea. We want to ditch these squishy bodies, not be
transferred into even ickier ones.
I chose the title of this article partly in hopes that its sheer
lameness would attract attention, but I also intended it as a
humorous allusion to an illustration on p. 172 of UoC
(Fig. 13.3): a visual metaphor for the authors' dynamic core
hypothesis. In this illustration (which I like a lot), the
dynamic core of consciousness is depicted as a tangle of
tightly-wound springs under tension, conveying the capacity of the
reentrantly-connected neuronal groups of the core to propagate any
perturbation of one group rapidly through the entire core.  The
functionally-insulated parallel loops of the cortical
appendages and organs of succession (connected to the core by
what are called "ports in" and "ports out" in UoC [pp. 178-186])
are represented in the same picture by loose springs which can
propagate travelling waves in one direction only. I suppose the
title could also be interpreted as alluding to Edelman's
contention that only artifacts undergoing selection constrained
by value, characterized by stochastic variation interacting with
a stochastic world, can have a mainspring for self-organizing
behavior that would make them anything more than external
appendages of human brains. Edelman and Tononi give another
visual metaphor for the dynamic core hypothesis in UoC on p. 145
(Fig. 12.1): an astronomical photo of M83, a spiral galaxy in
Hydra, with the caption "No visual metaphor can capture the
properties of the dynamic core, and a galaxy with complicated,
fuzzy borders may be as good or as bad as any other".  Child of
'60s television that I am, my memory served up a
different visual metaphor -- the vacuum-cleaner monster from an
episode ("It Crawled Out of the Woodwork") of the original _The
Outer Limits_ television SF anthology series (reviewed at
Amazon.com:
http://www.video-department.com/video/0/28/7214.html).
Cheers.
Jim F.