Re: Paths to Uploading

Bryan Moss (bryan.moss@dial.pipex.com)
Wed, 6 Jan 1999 19:33:50 -0000

Anders Sandberg wrote:

> The final answer will of course be when you upload yourself and a certain
> digital mind begins to experience whatever it experiences.

The question remains how others will know it's me, and so on. I have a feeling no one here really wants to dredge up identity issues again, so I'll leave it alone.

>> Why do we have such small brains? To me it suggests that the level of
>> complexity achievable is *very* close to the achieved level.
>
>I have the impression that you are not seriously proposing any upper
>limit on intelligence.

Friends and family might argue that I've never proposed anything serious in my life, but I'm in two minds about this. There's a beautiful, coherent, and hopefully quite brilliant concept in the making, but I can't get my brain around it at the moment. Later in your post you mention a Theory of Complexity; that's where I'm going with this, only I'm shying away from using the term 'complexity' because I'm worried I might abuse it.

> The brain is definitely not small rather it is as large as it can be given
> our current biology. It consumes a significant percentage of our energy;
> most likely it is close to the maximum possible size given a hunter-
> gatherer diet (one factor affecting its evolutionary growth may have been
> more food availability). In addition we have all sorts of purely
> mechanical problems with our large brains.

Let's say there is a parameter of the universe called 'complexity' that defines how easy it is to form a complex structure of any kind. If this parameter is set too low (complex structures are unlikely), life would not have evolved. If it is set too high (complex structures are highly likely) and complex structures form very easily, then the generalisations (systems like DNA, proteins, intelligence, etc.) that we call 'life' would not have evolved either: when complexity comes cheap, a new solution can be found to every new problem, so the reusable 'building blocks' of (what we call) life never get produced.

Now - and this is where it gets really sketchy - I imagine something similar for 'intelligence', but I have absolutely no idea how to explain it at the moment. Fundamentally, I don't think it's wrong to suggest that there might be an upper limit to intelligence. The way I picture it is a graph with 'generalisation' and 'specialisation' plotted against each other and a diagonal line travelling between them:

Generalisation

          |\
          |  \
          |    o We are close to here
          |      \
          |        \
          ------------ Specialisation


And this all corresponds to the limited complexity of the system: the diagonal line is the complexity budget, and a system can trade generalisation for specialisation along it but not push past it.
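
To make the diagonal a little more concrete, here's a toy sketch in Python. The linear 'complexity budget' and the numbers in it are pure invention on my part, just one way of reading the graph:

    # Toy model of the trade-off in the graph above. Everything here is
    # invented for illustration: the linear 'complexity budget' and the
    # numbers are assumptions, not anything established.

    def max_generalisation(specialisation, budget=1.0):
        """With total complexity capped at `budget`, every unit spent
        on specialisation is a unit unavailable for generalisation,
        which gives the diagonal line: g = budget - s."""
        if not 0.0 <= specialisation <= budget:
            raise ValueError("specialisation must lie within the budget")
        return budget - specialisation

    # On this picture a human-like mind sits near (but not at) the
    # generalisation end of the line.
    print(max_generalisation(specialisation=0.2))   # prints 0.8

If there is an upper limit to intelligence, on this reading it's simply the budget itself, however you choose to spend it.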

[I hate having to explain fragments of ideas that are probably just ill logic on my part.]

>Neurotech is going to be the controversial thing. What does "human"
>mean when you can alter it? Not just enhance memory or change sexual
>preferences, but add *new* structures to the brain like the ability to
>do something like episodic memory for muscle movements? Lots of issues
>here, and biotech will contribute with problems.

I'd be interested to hear what you thought of Dyson's 'radiotelepathy' in _Imagined Worlds_.

BM