Re: Paths to Uploading

Samael (Samael@dial.pipex.com)
Thu, 7 Jan 1999 13:59:08 -0000

-----Original Message-----
From: Billy Brown <bbrown@conemsco.com>
To: extropians@extropy.com <extropians@extropy.com>
Date: 07 January 1999 13:50
Subject: RE: Paths to Uploading

>Samael wrote:
>> Humans, when they are first created, need spoon-feeding. They need very
>> high repetition of information to spot the basic patterns in it and
>> learn to recognise things around them. Later on, they can learn in
>> leaps and bounds, but at the beginning they learn slowly. It seems
>> likely (to me, anyway) that an AI would start off learning slowly and
>> then pick up speed as it went along. Certainly, early AIs would learn
>> slowly enough for this to be spotted.
>
>In order to make human-equivalent AI we will have to have a larger
>knowledge base than any one research group is likely to want to code.
>Sharing knowledge bases between projects is perfectly feasible, and I
>expect it to become more common as AIs become more complex. The key point
>is that once you have a body of information encoded in one AI program,
>you can export it to another one.

I'm not sure I agree with this. Every neural net is different. Beyond a very simple level, they will all learn information slightly differently, and the structures in their brains will be different. It may be very hard to export data from one AI to another unless backwards compatibility is one of the design requisites of the new AI, in which case you won't be able to put any funky new features in it (any more than putting monkey neurons in your brain would work - the structures are almost certainly far too different, and the arrangements caused by learning will also differ vastly).
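To make the point concrete, here is a minimal sketch (mine, not from the original exchange, using a toy two-layer sigmoid net trained on XOR): two nets given identical training data but different random starting points end up fitting the same function with very different internal weights, so the weights of one cannot simply be copied into the other.

```python
import numpy as np

def train_xor_net(seed, hidden=4, steps=10000, lr=1.0):
    """Train a tiny sigmoid net on XOR, starting from a given random seed."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(size=(2, hidden))    # input -> hidden weights
    W2 = rng.normal(size=(hidden, 1))    # hidden -> output weights
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(steps):
        h = sig(X @ W1)                  # hidden-layer activations
        out = sig(h @ W2)                # network output
        d_out = (out - y) * out * (1 - out)   # backprop squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_h
    return W1, W2, out

W1a, W2a, out_a = train_xor_net(seed=0)
W1b, W2b, out_b = train_xor_net(seed=1)

# Both nets saw exactly the same data, yet their internal weights
# diverge, so W1a cannot simply be transplanted into the second net.
print("max weight difference:", np.abs(W1a - W1b).max())
```

The same effect appears in real systems: the learned representation depends on initialisation and training history, which is why sharing knowledge between nets usually means re-training rather than copying weights.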

>> We also learn as fast as we can process data. Admittedly we don't force
>> ourselves to learn as fast as we possibly could (most people aren't
>> that enthusiastic), but we still spend months/years building those
>> first levels of our neural network into recognising basic objects and
>> the relationships between them, and it's only years later that we
>> become able to spot the fine detail of the relationships and can make
>> accurate theories about them.
>
>No, we don't. The human brain deals with a constant data stream of tens
>(maybe hundreds) of megabytes per second. You remember only a tiny
>fraction of that - maybe a few bytes per second if you're paying close
>attention, otherwise even less. An AI that can handle the same sensory
>data would be able to store a much larger fraction of it, because its
>memory operates in the same time scale as its CPU.

Certainly they will learn faster. They will be more efficient. But the first one to reach 'consciousness' will do so at a slow speed, because it will merely have had a head start over the next one to be created. We will almost certainly be running tests on a variety of AI types, and the ones that look like they are succeeding will be the ones that are run on the fastest machines. These are also the ones we will be watching the closest.

Remember also that it's a lot easier to learn whilst interacting - and while it's learning to interact, it will be noticeable to the people it is interacting with.

Samael