Re: SI: Singleton and programming

Eliezer S. Yudkowsky (sentience@pobox.com)
Sun, 22 Nov 1998 17:51:53 -0600

Eugene Leitl wrote:
>
> Eliezer S. Yudkowsky writes:
>
> > I can't say that I believe in the scenario of a Singularity as a collection of
> > individuals. Goals converge at sufficiently high intelligence levels, just
> > like pictures of the truth.
>
> Are you saying that evolution doesn't exist post Singularity, then? Can you
> back up that claim?

After reading through your whole post, I believe that we are operating on drastically different and mutually incompatible assumptions about post-Singularity environments. To try to identify both the ontological and the surface assumptions:

Eugene Leitl:
1. Singularity contains multiple competing entities, with conflicting goals; not only with actions intended to reverse another action, but possibly entities trying to destroy or consume another entity. AFUTD competition.
2. Evolution is the best way to create a robust, efficient process in the vast majority of cases; improvements on evolution (if any) will result in the same stereotypical properties being exhibited by the evolved processes.
3. Intelligent design is inherently fragile in that the failure (or exploitation) of any part destroys the whole; this is a universal problem at all levels of intelligence.

Eliezer Yudkowsky:
1. Morality is contained within an objective (physical? unalterable?) ontology. Goal values post-Singularity converge to the objective values. No competition.
2. Evolution is the best way to do something without intelligence, and consists of a moderately intelligent way to use blind search. SIs can walk all over evolution. (Please see below before responding.)
3. Human design is inherently fragile in that it tends toward linear chains, with no fault-tolerance and no local optimization. This is due to lack of attention, no parallelism (all attention focused on only one object), and low speed. SIs and seed AIs don't share the problem.

> Once you have your omega hardware (probably a kind of quantum dot
> array) design, which should be very early along the Singularity, the

See, I don't trust "quantum dot array". Before the '90s, we would have given a different answer. In ten years, we'll have another answer. Similarly, I don't buy the Singularity using nanotechnology; it's a wonderful method by our standards, but a few decades ago we would have seen Dyson-sized collections of miniaturized vacuum tubes. "Descriptor theory" a la Greg Bear or "wormhole computing" I might buy, on the theory that it won't sound obsolete until we actually know how to do it.

> > It's not until you start talking about time travel (which I think is at least
> > 80% probable post-Singularity) that you get real "inertialess" systems. I
>
> If you have to resort to Planck energies for spacetime engineering,
> you might have to build a particle accelerator around the galaxy. It
> takes a while before you can assemble such a structure, and before
> it can pick up enough steam to do something interesting, like blasting
> a designer hole into spacetime.

...and, given time travel, a 95% probability that it can be done (for millisecond durations) using micron-scale machinery.

> > I'm not at all sure that evolution and singletons are compatible. Evolution
>
> The worse for the singletons, then.

The worse for evolution. Would you agree with these statements? (An illustrative sketch follows the list.)

  1. Evolution can be improved by the injection of intelligent selection.
  2. Evolution can be improved by occasionally taking locally optimized competitors, redesigning them globally, and reinjecting them into the pool.
  3. Evolution as presently carried out does not involve intelligent design, either within the process, or as the origin of evolution. (Theists disagree.)
  4. Evolution consists of using a vast number of random tries and multiple generations of selection and recombination.
  5. Evolution is a blind search, or at least the basic units are blind.
  6. Insofar as evolution has no option but to be blind, there is no reason to think that blindness is optimal.
  7. As the search depth narrows, global redesign and intelligent mutation predominate over local optimization and random mutation.
  8. The evolution of evolution converges to an intelligent, global design with intelligent tweaks and intelligent tests. If massive randomness is absolutely necessary, it can be used selectively.
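
Purely as an illustration of the distinction these statements draw (the genome, the fitness function, and the "redesign" hook below are all invented for the sketch, not anything anyone has proposed), here is what "blind search plus an occasional injection of intelligent redesign" looks like in toy Python:

  import random

  TARGET = [1] * 100   # toy optimum; stands in for whatever the landscape rewards

  def fitness(genome):
      # The blind loop only ever sees this number, never the structure behind it (statement 5).
      return sum(g == t for g, t in zip(genome, TARGET))

  def random_genome():
      return [random.randint(0, 1) for _ in TARGET]

  def mutate(genome, rate=0.01):
      # Random, undirected variation (statements 4 and 5).
      return [1 - g if random.random() < rate else g for g in genome]

  def recombine(a, b):
      cut = random.randrange(len(a))
      return a[:cut] + b[cut:]

  def evolve(generations=30, pop_size=50, redesign=None):
      pop = [random_genome() for _ in range(pop_size)]
      for gen in range(generations):
          pop.sort(key=fitness, reverse=True)
          survivors = pop[:pop_size // 5]                   # selection
          if redesign is not None and gen % 10 == 0:
              # Statements 1 and 2: reinject a globally redesigned competitor
              # into a pool otherwise produced by blind local optimization.
              survivors[0] = redesign(survivors[0])
          pop = [mutate(recombine(random.choice(survivors),
                                  random.choice(survivors)))
                 for _ in range(pop_size)]
      return max(pop, key=fitness)

  def intelligent_redesign(genome):
      # Stand-in for intelligence: uses knowledge of the goal that the blind
      # loop, by definition, never sees.
      return list(TARGET)

  print("blind search:             ", fitness(evolve()))
  print("with intelligent redesign:", fitness(evolve(redesign=intelligent_redesign)))

The numbers themselves don't matter; the point is structural. The moment redesign() exists, it sits above the loop rather than inside it, which is statement 8 in miniature.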

> > "survival" might be determined. Even if one concedes internal differences,
> > the externally observed behavior of every Singularity might be exactly the
> > same - expand in all directions at lightspeed, dive into a black hole. So
> > there might not be room for differential inclusive reproductive success.
>
> So far we have not observed any Singularity or distant signatures
> thereof. What makes you think you can predict details of any such
> event?

What makes you think you can predict the internal software design? Sheer hubris, just like me. But that's not what I was saying.

Sometimes there's such a flagrant local and global optimum that all processes converge there, whether that optimum be "expand at C" or "dive into black hole". I don't have to know where the optimum is to hypothesize that the optimum exists. With respect to the Singularity, it is unlikely that the two best mutually exclusive methods of harvesting flops (the two above being a good example) will produce results of roughly the same order; therefore all Singularities choose the same option. You might disagree on the grounds that an immune-type competition might promote the value of "difference", as such, regardless of the actual change.

> > Internally, the evolution you propose has to occur in defiance of the
> > superintelligent way to do things, or act on properties the superintelligence
>
> Well, you are intelligent.

Incorrect. The personal processing power I control is insignificant compared to that necessary to run a simulation of Earth's evolutionary processes.

> Are you in control of other intelligences?
> Particularly these dumb, lowly things like bacteria, viruses, flies,
> dust mites, ants, cockroaches, silverfish, pets, cars, ships, sealing
> wax? Humankind have not spontaneously mutated into homogenous monocultured
> Eliezers set into nice rows, why should a virtual ecology do that?

You're right, particularly given that Eliezers are not intelligent.

> In
> case you build such a strange thing, due to a chance some part of some
> clone somewhere might grow frisky, and rushes over the civilization
> monoculture like a brush fire. It's unstable as hell, the thing.

Let me see if I understand. Translating to my assumptions: "Given any acceptance whatsoever of hardware or software error, stemming from meteors, eganite flux, or intelligent-efficient programming, and given a sufficiently large playing field such as 1e30 flops, pure error will eventually produce a voracious subprocess. There must therefore be internal defenses against that subprocess. The additional possibility of mutations in the defenses causes the field as a whole to converge to TIERRA."

I disagree on the grounds that the maximum complexity of mutated attack forms increases as the log of computing power, so the necessary defense also increases as the log of computing power; thus internal defense will not be a serious problem in the absence of deliberate attack (in which I do not believe).
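
For what the scaling claim amounts to arithmetically (the constant below is arbitrary and purely illustrative; the logarithmic scaling itself is the assumption being illustrated, not a derivation):

  import math

  K = 1e6   # hypothetical flops spent on defense per unit of attack complexity

  for flops in (1e10, 1e20, 1e30):
      attack_complexity = math.log10(flops)    # assumed: attack complexity grows as log(flops)
      defense_cost = K * attack_complexity     # so the defense budget grows as log(flops) too
      print(f"field of {flops:.0e} flops: attack complexity ~{attack_complexity:.0f}, "
            f"defense overhead ~{defense_cost / flops:.0e} of the total")

If the assumption holds, the defense's share of the field shrinks toward nothing as the field grows, which is the sense in which internal defense stops being a serious problem.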

> > [ AFUTD's skrodes ]
>
> You said a Power would have noticed, but skrodes were built either by
> the Blight or the Countermeasure (it's not clear by which), which both
> ate normal Powers for breakfast. A Blight from a distance didn't look
> particularly singular to a Power, and a lot of Powers meddled with the
> lower zones. The hidden functionality of the skrode/its rider complex
> may well have exceeded a casual scrutiny of a Power. All they'd see
> would be a yet another Power artefact.

It's an interesting question whether 1e70 flops can hide something from 1e40 flops in a design that takes up 1e9 bytes. My guess is no. The Blight has to hide the flaw, and then hide the fact that it's hiding the flaw. Maybe you can get a Power to "skip over" the problem section by exploiting common flaws in the nonconscious first-level parsers, but can you do it without a characteristic pattern that would trigger an alarm? Remember, any density or incomprehensibility over and above what the Power can understand - a part of the design that's opaque - would itself probably trigger an alarm. "Golly, this is the first one-gigabyte program I haven't been able to understand in twenty decillion subjective years!" One would have to convey the illusion of transparency, and I really don't think that's possible unless some parts don't get fully analyzed, i.e. unless the Power is willing to accept opaqueness as normal because it would take too much computing power to go beyond it.
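
The raw numbers, just to make the disparity concrete (these are only the figures quoted above):

  power_flops  = 1e40   # the inspecting Power
  blight_flops = 1e70   # the designer trying to hide the flaw
  design_bytes = 1e9    # the size of the skrode design

  print(f"inspector's budget per byte of design: {power_flops / design_bytes:.0e} flops")
  print(f"designer's raw advantage over the inspector: {blight_flops / power_flops:.0e}")

A 1e30 advantage in raw power buys a lot of obfuscation, but the inspector can still pour 1e31 flops into every single byte; the question is whether any obfuscation survives that without itself looking suspicious.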

On the other hand, there is absolutely no way I can prove it, and Vinge had Countermeasure hiding *inside* the Blight during the initial Transcendence, so it's obvious that in AFUTD's Universe the assumptions are different.

> > "A programmer with a codic cortex - by analogy to our current visual cortex -
> > would be at a vast advantage in writing code. Imagine trying to learn
> > geometry or mentally rotate a 3D object without a visual cortex; that's what
> > we do, when we write code without a module giving us an intuitive
> > understanding. An AI would no more need a "programming language" than we need
> > a conscious knowledge of geometry or pixel manipulation to represent spatial
> > objects; the sentences of assembly code would be perceived directly - during
> > writing and during execution."
>
> A task you have a knack of, and doing for a long time changes you. You
> sprout representational systems as you grow better and better. A
> tabula rasa AI which was not designed to do machine language would
> learn anything the hard way as well, exactly as a human. Of course if
> it was more intelligent than a human it would grow much better than
> that.

You've just proved that compilers don't exist. Oh, wait - "was not designed to do machine language". But how would you get an AI in the first place if it couldn't bootstrap itself? And presumably "more intelligent" includes a larger short-term memory or inherent searching capabilities, even if its actual creativity is nil? As far as I can tell, a "tabula rasa AI" nonexists.

> > Who needs a Power to get a skrode? The first programming AIs will likely be
> > that incomprehensible to us mere humans. You know how much trouble it is to
>
> Thank you, GP comes up with plenty of compact, efficient, opaque
> solutions humans have no idea of how they work. If you ask a person to
> write a word-recognition circuit, he will certainly not build a 100
> FPGA-cell large conundrum consisting of a mesh of autofeedbacked loops
> exploiting the undocumented analog effects of the bare silicon.

Yes, GP is another good example which shows the shortcomings of human "intelligent" design. Extending the analogy between visual cortex and codic cortex to a load-supporting arch, we might say that human designs are Legos - bright, monocolor, blocky structures with a global design but local fragility. GP creates an upside-down arch made from wood splinters packed tightly together with a swirling texture; it's as beautiful as a rose, and any given section is a lot stronger than the human Legos, but it's still upside-down. Actual intelligent design is nanotech; every atom is in a locally and globally optimized design.
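
For concreteness, here is a minimal genetic-programming loop (everything in it, the hidden target, the operator set, the parameters, is invented for the sketch; it is not the FPGA experiment, just the same flavor of blind search arriving at an opaque answer):

  import random, operator

  OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

  def random_tree(depth=3):
      if depth == 0 or random.random() < 0.3:
          return 'x' if random.random() < 0.7 else random.uniform(-2, 2)
      return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

  def evaluate(tree, x):
      if tree == 'x':
          return x
      if isinstance(tree, float):
          return tree
      op, left, right = tree
      return OPS[op](evaluate(left, x), evaluate(right, x))

  def error(tree):
      # Hidden target: x**2 + x. GP never sees the formula, only this score.
      return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

  def mutate(tree):
      # Crude subtree mutation: replace a random subtree with a fresh one.
      if not isinstance(tree, tuple) or random.random() < 0.3:
          return random_tree(depth=2)
      op, left, right = tree
      if random.random() < 0.5:
          return (op, mutate(left), right)
      return (op, left, mutate(right))

  population = [random_tree() for _ in range(500)]
  for _ in range(50):
      population.sort(key=error)
      keep = population[:100]
      population = keep + [mutate(random.choice(keep)) for _ in range(400)]

  best = min(population, key=error)
  print("error:", error(best))
  print("evolved expression (rarely the clean human form):", best)

The winning tree usually works well enough and is nearly unreadable, which is the point of the FPGA example above.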

> OO mirrors our physical world very much: independent objects interact
> with lots of others via asynchronous messages. Many simulations are
> made much more elegant this way. I think OOP is deeper than a mere
> human fad.

It is. It goes right down into human cognitive structures.
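
A toy sketch of that view (the Actor class and the messages are made up here; it is just the "independent objects, asynchronous messages" picture in a few lines of Python):

  import queue, threading, time

  class Actor:
      # Each object owns a mailbox and reacts to messages in its own time,
      # the way independent physical objects interact through events.
      def __init__(self, name):
          self.name = name
          self.mailbox = queue.Queue()
          threading.Thread(target=self._run, daemon=True).start()

      def send(self, message):
          # Asynchronous: the sender does not wait for the receiver.
          self.mailbox.put(message)

      def _run(self):
          while True:
              message = self.mailbox.get()
              print(f"{self.name} received: {message}")

  ant = Actor("ant")
  cockroach = Actor("cockroach")
  ant.send("crumb at (3, 4)")
  cockroach.send("lights on, scatter")
  time.sleep(0.1)   # let the daemon threads drain their mailboxes before exit

Nothing here is specific to any real actor framework; it is only the shape of the metaphor.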

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.