RE: Singularity?

Eugene Leitl
Thu, 2 Sep 1999 11:15:19 -0700 (PDT)

Robert J. Bradbury writes:

> You may want to rethink this. I think yesterday's news (though
> don't ask me whether it was this list or something else since
> I'm in "overload mode"), discussed a critical gene in transgenic

I noticed ;) I wonder how long you can be so prolific without burning out.

> mice that *did* enhance their intelligence.

Meaning that if the technique were fully viable in humans, ethically approved, and widely practiced, it would start to have measurable impact 25-30 years from now. Each iterative improvement step would take two decades.

Genetic modifications are much too slow for the kind of progress I'm expecting over the next ~50 years. Genetic modifications are thus just a fallback option in case multiple technology branches fail dramatically. (That would currently seem to require a truly global screwup of Holocaust caliber, and is thus of no interest to us, because we'd be dead or close to it.)

> Also, while I'll grant the difficulty of enhancing the current
> "internal hardware", I'm optimistic about the development of interfaces
> to "external hardware". The brain implant chips clearly demonstrate
> this is feasible. What is required is a significant bandwidth
> increase.

I was dismissing recombinant DNA techniques a priori because they are much too slow, and do not help people already born at all (it is not exactly safe to mess around with quantitative transfection vectors which activate diverse morphogenesis boxes *in adults*). When I referred to difficulties in intelligence enhancement, I specifically meant those involved in building brain implants.

You need nano for really useful brain implants (have you ever tried sticking a million bioinert wires (with big computers attached at the other end) into a semifloating balloon made of jelly?). It is hard even with nano. Also, by the time you can begin to do that, you can churn out building-sized blocks of bucky circuitry. The disparity would seem pretty apparent. Somebody is surely going to do something with all that computing power.

> > Contrary to what Eliezer profusely professes, human-coded AI will
> > never become relevant.
> I would agree that a completely top-down approach looks questionable.
> However, a co-evolving human brain linked to an external "sub-program"
> generation system might be interesting. "No, not the 'libertarianism
> elimination algorithm' you silly computer, we need a 'dogmatic
> libertarianism elimination algorithm'".

Provided you can do nontrivial brain implants, you will need hyperplastic brains, which currently exist only in neonates/toddlers. It might be possible to rejuvenate brain tissue and throw it into hyperplastic mode with a very advanced drug cocktail, but I'm not counting on this too much. To wit: you already require two nontrivial ingredients: nano and the magic drugs.

> Since the "Intelligence" part of this may be context specific,
> I suspect that we will not get recognizable intelligence unless
> humans are involved in the feedback loop. For example, I can
> imagine a context (say "crystalline environments") in which
> the ability to form random patterns would be viewed as
> "higher intelligence". We, however, would not recognize it as
> such. "Intelligence" if it relates to "ability to survive"
> is going to be highly context dependent.

Yes, but certain contexts will turn out to be pretty expansive, at the cost of other contexts. (It would seem to be a very good strategy to try to identify such contexts early. If you can't beat 'em, try to join 'em.)

> > Uh, don't think so. The threshold for enhancing humans is terribly
> > high: essentially you'll need to be able to do uploads. Anything else
> > is comparatively insignificant.
> I have to disagree. While "exponential" enhancement may require
> uploads, I think you can get quite a bit of human enhancement
> with nanobots and high bandwidth links. The question is this --
> Which is harder, interfaces between the thought patterns of unique
> uploads and the external reality or the development of useful message
> passing protocols between your existing brain and external subroutines?

Both seem much harder than building a Golem.

> I'll invoke the catch-22 principle -- uploads don't exist, so people

Oh, but people are working on it. It's still early academic work, but it's definitely already there.

> are unlikely to work on reality interfaces; reality interfaces don't
> exist so people won't upload. On the other hand "brains" do exist,
> "nanotechnology" will exist (for uploading or Intelligence Augmentation),
> so the path of "external" brain enhancement seems to be more probable
> than uploading followed by exponential enhancement.
> > Maybe we need another Kaczynski...
> Pointer/Ref please?

Mr. Unabomber. (Advocatus diaboli, at your service ;)

> >
> > I think one of the best projects for funding is brain vitrification
> > which does not require fractal cooling channel plumbing in vivo.
> >
> Why? Unless you are remarkably young looking for your age
> (or you had a body substitute at the Extro4 conference :-)),
> brain vitrification is unlikely to benefit you as much as
> work on intelligence augmentation/interfaces. Unless
> you are taking a highly pessimistic/covering all bets
> approach.

Exactly. When personal survival is at stake, assuming a worst-case scenario is the only prudent approach, especially considering the cost/benefit ratio, and the "bus driver waiting for the bus" scenario. (It may sound silly, but really, nobody in the civilized world is doing it. Also, consider the current bottleneck in cryonics service providers: only CI and Alcor are currently available. Did I mention eggs/baskets already?)

Also, I happen to think there might very well be a coevolutionary race scenario between uploads and AIs. Even if the chances are slim, being there first can make a universe of a difference. The lowest-tech approach to uploading is currently FSS (freeze, slice, scan). While we more or less know how to tackle the two S's, with the resources at our disposal, focusing on F has the best cost/benefit. Currently.

(Disclaimer: any typos and goof-ups should be attributed entirely to chronic lack of sleep and Jolt O.D.'s. Deadlines, deadlines, deadlines.)