RE: Constraints on the "singularity"

Ramez Naam (ramezn@EXCHANGE.MICROSOFT.com)
Sun, 12 Oct 1997 18:24:51 -0700


> From: Dan@Clemmensen.ShireNet.com:
> Ramez Naam (Exchange) wrote:
> > While a simple interpolation and future
> > projection of (for example) available computing power may show a
> > vertical asymptote, the underlying equations that govern the rise of
> > computing power are subject to constraints that may result in a
> > flattening of the curve.
>
> Oh, dear. Just because there is no singularity in the equations, there
> is no reason to postulate a flattening. My grossly-oversimplified
> curve is an exponential. My next-level approximation is an exponential
> with step rate increases over time.

My apologies for the misleading language.  By "level off" I actually
meant "continue on a /less than hyperbolic/ growth rate, i.e. one that
does not lead to a vertical asymptote in the graph of computational
power vs. time".

You seem to agree that this is the likely progression.  Thus my
discomfort with using the term "singularity" to describe such an event.

> Vinge's other reason to use the term "singularity" is in the sense of
> an event horizon: in his model we cannot predict beyond it because the
> superintelligences are incomprehensible. I agree with him in this.

But it's not really an event horizon.  It's more like an "event fog",
with our ability to predict getting dimmer and dimmer the deeper we
look into that cloud.  Thus my objection to the term "singularity".

Possibly you're right, and the superintelligences are incomprehensible.
However, it seems that we should at least be able to place some
quantitative bounds on their capabilities, given our deepest current
understanding of the laws of physics and mathematics.

E.g. (a rough numerical sketch follows these three questions):

Given the Planck scale, c, and the maximum density matter can achieve
before collapsing into a black hole, what is the maximum achievable
computational power per unit volume?

Given the likely mass, age, and size of the universe, and the
constraints listed above, what is the maximum achievable computational
power of the universe?

Given c, the age, size, and rate of expansion of the universe, how long
would it take an earth-spawned power to infest the galaxy?  1/10^6 of
the universe?  1% of the universe?  10% of the universe?
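
To give these questions some flavor, here is a back-of-the-envelope
sketch.  The physical constants are standard; the density, universe
mass, and travel speed are loosely chosen assumptions of mine, so
treat the outputs as order-of-magnitude illustrations only.

# Rough bounds sketch.  Assumptions: Bremermann's limit (c^2/h ops
# per second per kilogram) as the per-mass computing ceiling,
# neutron-star density as a stand-in for "just short of black-hole
# collapse", ~1e53 kg for the observable universe, and travel at 0.1c.

C = 3.0e8        # speed of light, m/s
H = 6.626e-34    # Planck's constant, J*s

ops_per_kg = C**2 / H                      # ~1.4e50 ops/s per kg

density = 1.0e17                           # kg/m^3, assumed maximum
ops_per_m3 = ops_per_kg * density          # ~1.4e67 ops/s per m^3

universe_mass = 1.0e53                     # kg, rough estimate
ops_universe = ops_per_kg * universe_mass  # ~1.4e103 ops/s

# Galactic spread at 0.1c; the universe-scale versions depend on the
# expansion model, so only the galactic case is computed here.
galaxy_diameter_ly = 1.0e5
crossing_years = galaxy_diameter_ly / 0.1  # ~1e6 years

print("ops/s per kg:    %.1e" % ops_per_kg)
print("ops/s per m^3:   %.1e" % ops_per_m3)
print("ops/s, universe: %.1e" % ops_universe)
print("galaxy crossing: %.1e years" % crossing_years)

Even under these crude assumptions the bounds come out finite, which
is the point: "incomprehensible" need not mean "unboundable".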

> > Chaotic Computability Constraints:  The most ambitious nanotech
> > scenarios posit universal assemblers that can be programmed or
> > designed to build specific structures.  IMHO this is a clearly
> > chaotic system.  The structure of any complex object created by
> > this sort of nanotech would seem to display an exquisite
> > sensitivity to tiny variations in design of the assembler (or the
> > assembler "software"), and possibly to the local environment
> > itself.  I question whether we'll be able to design assembler
> > protocols for very complex objects through any means other than
> > trial and error, which have their own pitfalls (e.g., grey goo).
>
> I don't understand. The idea behind nanotech is design to atomic
> precision. Assemblers and their output are digitally-described
> objects. A system built by assemblers will be essentially perfectly
> manufactured by comparison to today's systems, in the same way that
> playing a CD yields the same result every time since the recording
> is digital. Sure, I can conceive of an assembler process using
> stochastic techniques, but this is certainly not necessary.

My understanding may be off here, but let me put forth the reason I see
chaotic computability entering into the picture with nanotech:

LINEAR NANOTECH
"Linear Nanotech", as I'll call it, is the manipulation of objects at
the atomic scale, one atom at a time.  For example, use of an STM to
lay out a company logo on a piece of metal.  As an extension,
techniques that manipulate multiple atoms at a time, but only a fixed
number of them, and which are centrally controlled, are also Linear
Nanotech.  E.g., a multi-head STM.

PARALLEL NANOTECH
Let's use "Parallel Nanotech" to describe the use of a swarm of
non-reproducing assemblers to transform the source material into the
desired object(s).=A0 Possibly this can be handled without worrying =
about
chaos, though I'm not sure.=A0 The question here is what control system =
is
used to distribute instructions to the appropriate assembler at the
appropriate time.=A0 There are several options in this area, divisible
into two groups:

Group 1) Control systems requiring central planning and coordination
of the assemblers.  I.e., with electromagnetic signals of some sort I
could communicate with each nanite and instruct it in what to do.  Or
I could "draw" the shape of the object I wanted and have different
classes of nanites respond to different signals (which correspond to
different areas).  This requires external planning, analysis of the
source material, and either precise regulation of the source quality
and structure or on-the-fly recalibration of the output material.
This type of control seems to avoid the chaotic computability
problems, but it is a much less flexible nanotech than is popularly
conceived by nanotech proponents.

Group 2) Self-organizing control systems.  The nanites communicate
peer-to-peer to determine the structure and composition of the source
material and negotiate the precise design of the output object.  This
enables a much higher degree of flexibility with respect to source
material, the environment the nanites are used in, etc.  It also
reduces the burden of external intelligence & processing power needed
to direct the nanites.
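
Here is a toy sketch of the Group 1 style (everything in it is my own
hypothetical illustration, not a real nanotech protocol): a central
controller "draws" the target as a grid and broadcasts one instruction
per cell, so the output is a pure function of the plan.  A Group 2
sketch follows the next paragraph.

# Group 1 sketch: centrally planned, broadcast control.  The
# controller rasterizes the desired shape and issues a deposit
# instruction for each marked cell; the "nanites" do no peer-to-peer
# negotiation, so repeated runs give identical output.

TARGET = [
    "..XX..",
    ".XXXX.",
    "..XX..",
]

def broadcast_build(plan):
    built = set()
    for y, row in enumerate(plan):
        for x, cell in enumerate(row):
            if cell == "X":
                built.add((x, y))  # signal the nanite class at (x, y)
    return built

# Deterministic, like replaying a CD -- but only as flexible as the
# central planner's prior analysis of the source material.
assert broadcast_build(TARGET) == broadcast_build(TARGET)
print(sorted(broadcast_build(TARGET)))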

However, self-organizing control results in a complex system where
characteristics of the outputted object are sensitive to initial
conditions such as the exact composition and structure of the source,
temperature, pressure, atmospheric content, shape & amount of the
initial release of the nanites, etc.  This is a chaotic system.  Or
more precisely, if the nanite designer has done her job right, the web
of nanites forms a complex, self-organizing system that has a strong
tendency towards certain designs and behaviors.  To do her job
"right", though, the nanite designer had to understand the
relationship between a change to the nanite design (or instructions)
and the physical objects that the nanites would create.  This involves
chaos, or at a minimum complexity, because the "intelligence" or
"decision making power" of the nanite swarm is an emergent property of
that swarm, rather than of any individual nanite.
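
A toy demonstration of that sensitivity, using stick-on-contact
aggregation as a stand-in for peer-to-peer nanite negotiation (my
choice of stand-in, nothing from the nanotech literature): two runs
that differ only in the random seed, i.e. in the fine details of the
initial release, grow visibly different structures.

import random

def grow(seed, walkers=1000, span=40):
    # Self-organized growth by purely local rules: each particle
    # random-walks until it touches the structure, then sticks.
    rng = random.Random(seed)
    stuck = {(0, 0)}
    for _ in range(walkers):
        x, y = rng.randint(-span, span), rng.randint(-span, span)
        for _ in range(400):
            near = any((x + dx, y + dy) in stuck
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1))
            if near:
                stuck.add((x, y))
                break
            x += rng.choice((-1, 0, 1))
            y += rng.choice((-1, 0, 1))
    return stuck

a, b = grow(seed=1), grow(seed=2)
overlap = len(a & b) / float(max(len(a), len(b)))
print("sizes: %d vs %d, cell overlap: %.0f%%"
      % (len(a), len(b), overlap * 100))

The macro-structure each run settles into is an emergent property of
the local rules plus the accidents of release, which is exactly the
design burden described above.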

The design process itself suffers the burden of chaotic
incomputability for another reason, essentially the same reason that
we are unable to determine the precise effects of twiddling with a
particular gene: the characteristics of the phenotype are only
/partially/ based on the genotype, the rest of the influence being a
complex sensitivity to the environment.

BOOTSTRAP NANOTECH
Beyond Linear and Parallel Nanotech is something I'll dub "Bootstrap
Nanotech", where the assemblers actually use the source material to
reproduce, generating more assemblers which eventually get used to
construct the object you want.  Here the relationship between
assembler design and the characteristics of the created object is very
obviously a chaotic one, as may be the relationship between
environment (structure and composition of source material) and the
characteristics of the created object.
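
The scale of the Bootstrap regime falls out of trivial doubling
arithmetic (the assembler mass and doubling time below are invented
round numbers, not engineering estimates):

import math

assembler_mass = 1.0e-18  # kg per assembler, assumed
doubling_hours = 1.0      # replication time, assumed
target_mass = 1000.0      # kg of feedstock to convert

doublings = math.log(target_mass / assembler_mass, 2)  # ~70
print("%d doublings, roughly %d hours"
      % (round(doublings), round(doublings * doubling_hours)))

# ~70 generations take one assembler to a tonne of assemblers.  Each
# generation also compounds any sensitivity of behavior to design or
# environment, which is the chaotic relationship described above.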

Perhaps someone more versed in nanotech can explain to us the proposed
control systems, or at least the suggested architecture of such
systems.

mez