Eliezer S. Yudkowsky writes:
> I can't say that I believe in the scenario of a Singularity as a collection of
> individuals. Goals converge at sufficiently high intelligence levels, just
> like pictures of the truth.

Are you saying that evolution doesn't exist post-Singularity, then? Can you back up that claim?
> What I'm pointing out is that there won't be "bottleneck" legacy systems. If
> old code is central, it will be redesigned. Where does the vast excess of
Let's say there is a provably optimal, simple 'omega' hardware, which gives you very little choice about what your 'code' (information patterns, actually) will look like at the deepest level. The resulting substrate supports a virtual ecology, starting with the most primitive autoreplicators a la Langton loops, hardly more than analogues of our viroids (offspring of those designed, or emerged spontaneously), and ending with godlike intelligences. Population pressure causes this substrate to expand, gobbling up the material realm and spitting out molecular circuitry blocks, instantly claimed by the duplicated virtual beasties. The sources of diversity are both rational design and darwinian evolution. Both support some conservation (legacy) as well as novelty due to coevolutionary competition (a great deal of novelty, in a Red Queen fitness landscape), but why should that be a bottleneck?
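The darwinian half of that picture is easy to sketch. Here is a minimal toy, assuming nothing beyond bit-string replicators, truncation selection, and a target pattern that keeps drifting (the Red Queen part); all the numbers are arbitrary:

```python
import random

random.seed(0)
GENOME = 16  # bits per replicator

def fitness(genome, target):
    # Fitness = number of bits matching the current target pattern.
    return sum(g == t for g, t in zip(genome, target))

def step(pop, target, mut_rate=0.05):
    # Truncation selection: keep the fitter half, refill with mutated copies.
    pop.sort(key=lambda g: fitness(g, target), reverse=True)
    survivors = pop[:len(pop) // 2]
    children = [[b ^ (random.random() < mut_rate) for b in g]
                for g in survivors]
    return survivors + children

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(40)]
target = [random.randint(0, 1) for _ in range(GENOME)]

for gen in range(100):
    pop = step(pop, target)
    if gen % 10 == 0:
        # Red Queen: the target itself drifts, so there is no final optimum.
        target[random.randrange(GENOME)] ^= 1

best = max(fitness(g, target) for g in pop)
print(best, "of", GENOME, "bits matched")
```

The population never settles, because the fitness landscape moves under it; conservation (survivors copied verbatim) and novelty (mutation) coexist without any bottleneck.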
> computing power come from? I'm assuming a constantly expanding pool. In our
Once you have your omega hardware design (probably a kind of quantum dot array), which should come very early in the Singularity, the expansion rate of the pool is limited by the conversion rate of the material realm into circuitry, and by the amount of matter available. Once you've converted and rearranged things optimally, further influx of substrate stops, unless you can shrug off physics as we know it and make spacetime compute on its own. Signalling in a relativistic universe in a darwinian context limits you to relatively compact systems anyway: the maximal grain size will thus likely be limited. You can hardly afford to wait for tidings from locations light-minutes apart while the local clock ticks at nanosecond and picosecond scales. Frozen statues don't take well to erosion and bulldozers.
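To put rough numbers on that clock mismatch (assuming, purely for illustration, a 1 GHz local clock and one light-minute of separation):

```python
# Back-of-the-envelope: local nanosecond clocks vs light-minute signal lags.
tick_s = 1e-9     # one tick of a hypothetical 1 GHz local clock
delay_s = 60.0    # one-way signal delay across one light-minute

ticks_waited = delay_s / tick_s
print(f"{ticks_waited:.0e} local ticks pass before a reply can even start back")
```

Sixty billion local clock ticks per one-way message: subjectively, the neighbors might as well be geological formations.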
> days, there really is a lot of inertia that derives from human nature, even
> after all optimization is accounted for. I do think things will be faster.
> One interesting notion is that even if there's a completely unlimited amount
> of computing power, aleph-null flops, the Singularity still might not be
> autopotent. The extent to which current computing power could be extended,
> controlled, consolidated would still be finite. There would be software
> inertia even in the absence of hardware inertia, and there would still be hard
> choices for optimization. One would no longer conserve resources, or even
> intelligence, but choices.
I disagree. The way I see it, we will hit the computational limits set by physics very quickly, and after that will simply engage in virtual navel-gazing/open-ended evolution. I don't buy Moravec's recurring evolution shockwaves rewriting the laws of physics, as they are unfalsifiable given current knowledge. It may take a short while to evolve nanoautoreplicators capable of following hard at the edge of the light cone, but afterwards it's all plain sailing. Of course it may be that traversable wormholes are buildable, but the resources required are likely so large that the second graduation night might be a few Myr off.
> It's not until you start talking about time travel (which I think is at least
> 80% probable post-Singularity) that you get real "inertialess" systems. I
If you have to resort to Planck energies for spacetime engineering, you might have to build a particle accelerator around the galaxy. It takes a while before you can assemble such a structure, and before it can pick up enough steam to do something interesting, like blasting a designer hole into spacetime.
> cannot even begin to imagine what this might look like from the inside. It is
> incomprehensibility squared.
> I think this confuses the disadvantages of human design with the disadvantages
> of intelligent design in general. Remember, evolved systems fail too. Humans
> go insane. It's just a different kind of crash. And, even using your
Of course, a fair fraction of these failures is due to mutations, without which evolutionary optimization would be impossible. Another is the trade-off between optimizing forever and having to react as quickly as possible. Red in tooth and claw doesn't breed perfect systems, but robust ones. Any comparison between the products of human engineering and what evolution has come up with is ridiculous. I'm still wondering why so many people, programmers particularly, trust man-made brittle systems. In biological systems, almost every component is mission-critical (there goes that liver). A truly technological civilization based on current designs would go extinct overnight. Software glitches in a Brazil world? Urgh.
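The brittleness point is just arithmetic. A back-of-the-envelope comparison, where the 1% per-component failure rate and the component count are made-up numbers for illustration:

```python
# Chance that a system survives when every component must work (brittle),
# versus when each function has one redundant backup (robust).
p_fail = 0.01    # assumed per-component failure probability
n = 100          # assumed number of components

p_brittle = (1 - p_fail) ** n            # all 100 must work
p_redundant = (1 - p_fail ** 2) ** n     # each of 100 duplicated pairs

print(f"brittle:   {p_brittle:.3f}")     # ~0.366
print(f"redundant: {p_redundant:.4f}")   # ~0.9900
```

One layer of redundancy per function takes survival odds from roughly a coin flip to near-certainty; evolution discovered this long before engineers did.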
> assumptions, I'd rather suffer a general protection fault and be rebooted from
> backups by an exoself using up 1% of resources, than go insane while using 75%
And would you be able to ward off several quick (well, quicker than you) evolved systems BSOD'ing you with bug-exploiting nam-shubs? I'd say you'd lose, and the resources used up by you, your exoself, and your backups would be reclaimed by the victorious meanies. If you rely on shared blocks, attacking that location yields a vast payoff to the attacker. Why, suddenly it's not so crowded anymore?
> Are PSEs more likely to suffer from asteroid strikes or Y2K? If every date in
> the world ran through a single module, we wouldn't have this problem. Yes, I
> know we would have other problems, but the operative word is "we".
If there were a bug in that module, all of you would go extinct. Descendants of any systems with a local, non-faulty module would happily take your place. No more brittle resource sharing.
Biological beings use internal clocks, synchronized by external cycles. They only go awry when exposed to conditions (usually man-made) they were not meant for, and even then the results are usually not fatal. Better jetlagged than dead, thankyouverymuch.
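The same point in miniature: redundant independent clocks with a median vote shrug off a single insane module, where a shared clock module would take everyone down with it. A toy sketch, not anyone's actual design:

```python
def majority_time(clocks):
    # Take the median reading: one arbitrarily wrong clock cannot shift it,
    # whereas a single shared clock module is a single point of extinction.
    return sorted(clocks)[len(clocks) // 2]

readings = [1000, 1003, 999_999]   # third clock has gone insane (Y2K-style)
print(majority_time(readings))      # still a sane time
```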
> But they wouldn't be individuals - suppose Anders Sandburg has stripped from
> him every algorithm he has in common with any other member of the human race,
> and is given immersive perception and control of a fleem grobbler, one of the
> five major independent fleem-grobbling systems. Is he still Anders Sandburg?
Suppose Anders Sandberg has stripped from him every physical structure he has in common with any other member of the human race, and is given immersive perception and control of a Microsoft Visual Basic compiler, one of the five major independent Microsoft compilers, instead. Is he still Anders Sandberg?
> I'm not at all sure that evolution and singletons are compatible. Evolution
The worse for the singletons, then.
> relies on differential chances of survival, but with everything being
> determined by intelligence, the human-understandable attributes like
'everything being determined by intelligence'? Why that?
> "survival" might be determined. Even if one concedes internal differences,
> the externally observed behavior of every Singularity might be exactly the
> same - expand in all directions at lightspeed, dive into a black hole. So
> there might not be room for differential inclusive reproductive success.
So far we have not observed any Singularity or distant signatures thereof. What makes you think you can predict details of any such event?
> Internally, the evolution you propose has to occur in defiance of the
> superintelligent way to do things, or act on properties the superintelligence
Well, you are intelligent. Are you in control of other intelligences? Particularly those dumb, lowly things like bacteria, viruses, flies, dust mites, ants, cockroaches, silverfish, pets, cars, ships, sealing wax? Humankind has not spontaneously mutated into homogeneous, monocultured Eliezers set into nice rows, so why should a virtual ecology do that? If you did build such a strange thing, some part of some clone somewhere might by chance grow frisky and rush over the civilization monoculture like a brush fire. The thing is unstable as hell.
> [ AFUTD's skrodes ]
You said a Power would have noticed, but skrodes were built either by
the Blight or the Countermeasure (it's not clear by which), which both
ate normal Powers for breakfast. A Blight from a distance didn't look
particularly singular to a Power, and a lot of Powers meddled with the
lower zones. The hidden functionality of the skrode/its rider complex
may well have exceeded a casual scrutiny of a Power. All they'd see
would be yet another Power artefact.
> "A programmer with a codic cortex - by analogy to our current visual cortex -
> would be at a vast advantage in writing code. Imagine trying to learn
> geometry or mentally rotate a 3D object without a visual cortex; that's what
> we do, when we write code without a module giving us an intuitive
> understanding. An AI would no more need a "programming language" than we need
> a conscious knowledge of geometry or pixel manipulation to represent spatial
> objects; the sentences of assembly code would be perceived directly - during
> writing and during execution."
A task you have a knack for, and have been doing for a long time, changes you. You sprout representational systems as you grow better and better. A tabula rasa AI not designed for machine language would have to learn everything the hard way as well, exactly like a human. Of course, if it were more intelligent than a human, it would grow much better at it.
> Who needs a Power to get a skrode? The first programming AIs will likely be
> that incomprehensible to us mere humans. You know how much trouble it is to
Thank you, GP comes up with plenty of compact, efficient, opaque solutions that humans have no idea how they work. If you ask a person to write a word-recognition circuit, he will certainly not build a 100-FPGA-cell conundrum consisting of a mesh of feedback loops exploiting the undocumented analog effects of the bare silicon.
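For flavor, here is a toy version of that kind of evolved opacity: a hill climber (with neutral drift) rewiring a small NAND netlist until, with luck, it computes XOR. The gate counts and mutation scheme are invented for the example; the resulting mesh is correct but about as readable as the evolved silicon:

```python
import random
random.seed(1)

N_GATES = 6  # a few more gates than the minimal XOR needs

def run(circuit, a, b):
    # Wires 0 and 1 are the inputs; each gate NANDs two earlier wires.
    wires = [a, b]
    for i, j in circuit:
        wires.append(1 - (wires[i] & wires[j]))
    return wires[-1]

def score(circuit):
    # Number of truth-table rows on which the circuit agrees with XOR.
    return sum(run(circuit, a, b) == (a ^ b)
               for a in (0, 1) for b in (0, 1))

def random_gate(k):
    # Gate k may read any of the k + 2 wires that exist before it.
    return (random.randrange(k + 2), random.randrange(k + 2))

best = [random_gate(k) for k in range(N_GATES)]
for _ in range(200_000):
    if score(best) == 4:
        break
    child = list(best)
    g = random.randrange(N_GATES)
    child[g] = random_gate(g)       # mutate one gate's wiring
    if score(child) >= score(best):  # accept neutral drift too
        best = child

print(score(best), best)
```

Nobody "designed" the winning netlist; it is just whatever wiring happened to survive, which is exactly why such solutions resist human inspection.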
> get an AI to walk across a room? Well, that's how hard it is for an AI to
> teach a human to write code.
> OO programming is there for a reason, and that reason is transforming the raw
> environment of assembly language into "objects" and "behaviors" comprehensible
> to our cognition. But OO may make about as much sense to an AI, as repainting
> the Earth in regular patterns and basic shapes that combine to form computer
> programs would make to us. Different ontologies, different rules.
OO mirrors our physical world very closely: independent objects interact with many others via asynchronous messages. Many simulations are made much more elegant this way. I think OOP is deeper than a mere human fad.
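A minimal illustration of that claim: independent objects, a message queue, no direct calls between them. All the class and message names here are invented for the example:

```python
from collections import deque

class Ball:
    def __init__(self, target):
        self.target, self.hits = target, 0

    def receive(self, msg, world):
        if msg == "bounce":
            self.hits += 1
            # React by messaging another object, asynchronously.
            world.send(self.target, "thud")

class Floor:
    def __init__(self):
        self.thuds = 0

    def receive(self, msg, world):
        if msg == "thud":
            self.thuds += 1

class World:
    # Delivers messages from a queue instead of via direct method calls:
    # the "asynchronous" half of the physical-world analogy.
    def __init__(self):
        self.objects, self.queue = {}, deque()

    def add(self, key, obj):
        self.objects[key] = obj

    def send(self, to, msg):
        self.queue.append((to, msg))

    def run(self):
        while self.queue:
            to, msg = self.queue.popleft()
            self.objects[to].receive(msg, self)

world = World()
ball, floor_ = Ball(target="floor"), Floor()
world.add("ball", ball)
world.add("floor", floor_)
for _ in range(3):
    world.send("ball", "bounce")
world.run()
print(ball.hits, floor_.thuds)   # 3 3
```

Neither object knows the other's internals; they share only a message vocabulary, which is much closer to how physical things interact than a repainted-Earth ontology would be.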