Re: Paths to Uploading

Anders Sandberg (asa@nada.kth.se)
07 Jan 1999 14:36:28 +0100

"Bryan Moss" <bryan.moss@dial.pipex.com> writes:

> >> Why do we have such small brains? To me it suggests that the level of
> >> complexity achievable is *very* close to the achieved level.
> >
> >I have the impression that you are not seriously proposing any upper
> >limit on intelligence.
>
> Friends and family might argue that I've never proposed anything serious in
> my life, but I'm in two minds about this. There's a beautiful, coherent, and
> hopefully quite brilliant concept in the making, but I can't get my brain
> around it at the moment.

It would be wonderfully ironic if there were some law that there is an upper limit to intelligence, and all intelligences are too weak to truly understand or prove the law. :-)

(Seriously, limits and hindrances like this may correspond to the "toposophical barriers" discussed in Lem's _Golem XIV_. I think they may exist, but they are not necessarily impermeable.)

> Later in your post you mention a Theory of
> Complexity, that's where I'm going with this, only I'm shying away from
> using the term 'complexity' because I'm worried I might abuse it.

Don't worry, everybody is abusing the poor term. :-)

> Let's say there is a parameter of the universe called 'complexity' and it
> defines how easy it is to make a complex structure of any kind. Now if this
> parameter is set too low (complex structures are unlikely), life would not
> have evolved. If this parameter is set too high (complex structures are
> highly likely) and complex structures formed very easily, then the
> generalisations (systems like DNA, protein, intelligence, etc.) that we call
> 'life' would not have evolved.

This assumes that the complexity parameter affects all levels. It sounds a bit like the "edge of chaos" idea: "interesting systems" (i.e. life) are possible only in the zone between too much order (low complexity in your terminology) and too much chaos (high complexity in your terminology). In the chaotic domain patterns are not stable because new patterns continually form, and in the ordered domain new patterns have a hard time emerging or surviving.

Note that your complexity parameter is system dependent: some systems (like gases) have low complexity - you can't build anything interesting from them - and others are too complex (turbulent plasmas?). And metasystems formed from other systems (like molecules from atoms, or clumps of matter from molecules) can have quite different complexity. The *BIG* question is whether there is some kind of "master complexity parameter" that affects all or most levels of the universe, or whether it is just physics that allows the different systems, with their complexity parameters (somehow) calculable from the interactions.
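
If you want to play with this, here is a little Python toy of my own (the rule construction, the "lam" parameter and the activity measure are all just illustrative choices, loosely in the spirit of Langton's lambda, not anything from the literature): a one-dimensional binary cellular automaton whose rule table is filled at random, with lam setting how "active" the rule is. Near the extremes the lattice freezes almost immediately, around lam = 0.5 it is essentially noise, and the rules with persistent-but-still-changing structure tend to show up on the shoulder in between.

  import random

  def random_rule(lam, k=3):
      # Rule table over all 2**k binary neighbourhoods; each entry is 1
      # with probability lam.
      return [1 if random.random() < lam else 0 for _ in range(2 ** k)]

  def step(cells, rule):
      # Synchronous update on a ring; the neighbourhood (left, self, right)
      # is read as a 3-bit index into the rule table.
      n = len(cells)
      return [rule[4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]]
              for i in range(n)]

  def mean_activity(lam, width=200, steps=200):
      # Average fraction of cells changing state per step - a crude
      # stand-in for "how much is happening" in the system.
      random.seed(0)
      rule = random_rule(lam)
      cells = [random.randint(0, 1) for _ in range(width)]
      changed = 0
      for _ in range(steps):
          nxt = step(cells, rule)
          changed += sum(a != b for a, b in zip(cells, nxt))
          cells = nxt
      return changed / float(width * steps)

  for lam in (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0):
      print("lam = %.2f   mean activity = %.3f" % (lam, mean_activity(lam)))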

> Now - and this is where it gets
> really sketchy - I imagine something similar for 'intelligence' but I have
> absolutely no idea how to explain it at the moment. Fundamentally I don't
> think it's wrong to suggest that there might be an upper limit to
> intelligence, the way I imagine it is a graph with 'generalisation' and
> 'specialisation' plotted against each other and a diagonal line travelling
> between them.
>
>
> Generalisation
> |\
> | \
> | o We are close to here
> | \
> | \
> ------------ Specialisation
>
>
> And this all corresponds to the limited complexity of the system.

Suppose you have a limited amount of mental resources to put into learning and applying your knowledge. Should you specialize or generalize? The answer depends on your environment and goals: on a primordial African savannah, being a generalist when it comes to survival is essential; in a pampered western world you get richly rewarded for specializing narrowly (at least I, as a graduate student, get rewarded for it). We might have biases left over from evolution, which has of course set some basic level of ability and interest suited to our evolutionary past.
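
To make the tradeoff concrete, here is a back-of-the-envelope sketch in Python (the square-root payoff, the budget and the task count are all just assumptions of mine, nothing empirical): a fixed budget of "mental resources" is spread over n possible tasks, skill has diminishing returns, and each round the environment poses one task - always the same one if the world is predictable, a random one if it is not.

  import math
  import random

  def expected_payoff(allocation, predictable, rounds=10000):
      # Average reward over many rounds; reward each round is the
      # (diminishing-returns) skill in whatever task came up.
      random.seed(1)
      n = len(allocation)
      total = 0.0
      for _ in range(rounds):
          task = 0 if predictable else random.randrange(n)
          total += math.sqrt(allocation[task])
      return total / rounds

  budget, n = 16.0, 8
  specialist = [budget] + [0.0] * (n - 1)   # everything on one skill
  generalist = [budget / n] * n             # spread evenly over all of them
  for predictable in (True, False):
      label = "predictable" if predictable else "unpredictable"
      print("%-13s specialist %.2f   generalist %.2f"
            % (label, expected_payoff(specialist, predictable),
               expected_payoff(generalist, predictable)))

With these numbers the specialist scores about 4.0 against the generalist's 1.4 when the same problem always comes up, and only about 0.5 against the same 1.4 when the problem is drawn at random - which is all the toy is meant to show: the optimum shifts with how predictable the environment is.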

> [I hate having to explain fragments of ideas that are probably just ill
> logic on my part.]

Know the feeling.

> >Neurotech is going to be the controversial thing. What does "human"
> >mean when you can alter it? Not just enhance memory or change sexual
> >preferences, but add *new* structures to the brain like the ability to
> >do something like episodic memory for muscle movements? Lots of issues
> >here, and biotech will contribute with problems.
>
> I'd be interested to hear what you thought of Dyson's 'radiotelepathy' in
> _Imagined Worlds_.

Yes, he has an interesting point, and I think something like that may become very useful if it can be built; we are already getting there with mobile phones. At the same time, when I read it I felt that good old Dyson is getting old: he never even mentioned nanotechnology, and seemed somehow stuck in a Stapledon world. Very odd.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y