Head Jobs

From: Adrian Tymes (wingcat@pacbell.net)
Date: Wed Oct 10 2001 - 21:00:04 MDT


Miriam English wrote:
> Heck, I'd settle for speed of sound if I could directly transfer
> thoughts/images/concepts to another mind. I suspect the problem is going to
> be interfacing at each end rather than the transmitting between.

Agreed.

> The way I see it eventually happening is that we will each wear our own
> computer which constantly watches the electrical actions in our brains in
> microscopic detail. These computers will be AIs in their own right and will
> gradually come to learn what parts of our brains correspond to what
> thoughts/emotions/perceptions/movements.

Why full-fledged AIs? "Neural networks" are named that for a reason.

> They will quite literally read our
> minds. Unlike our wetware they will be able to transmit the mental info to
> other people's computers which will be able to map the data to their host
> brain's topology and stimulate them directly in the corresponding regions.

The hardware for the raw brainscan has been commercially available for
years. The challenge is pure software: translating the brainscan to
meaning on one end, and translating meaning to brain stimulus on the
other. The latter may be easier than the former, since our natural
ability to learn can give meaning to an initially random, but
patterned, series of stimuli. (Of course, there's also the practical
matter of getting the software to run on a computer light enough, and
with enough battery life, to carry around...but this seems similar in
nature to designing video games for 1980s-era personal computers, and
the same code-optimization tricks seem likely to pay off if the scales
are similar.)
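
To make the decoding step concrete, here's a toy sketch in Python
(using numpy). Everything in it is invented for illustration: the
"scans" are synthetic feature vectors standing in for preprocessed
sensor output, and the labels stand in for known thoughts shown during
a calibration phase. A single linear layer plus softmax is about the
simplest "neural network" that can learn which scan patterns
correspond to which concepts:

    import numpy as np

    rng = np.random.default_rng(0)
    n_features, n_concepts, n_samples = 64, 4, 400

    # Each concept gets its own noisy signature in feature space.
    signatures = rng.normal(size=(n_concepts, n_features))
    labels = rng.integers(0, n_concepts, size=n_samples)
    scans = signatures[labels] \
        + 0.5 * rng.normal(size=(n_samples, n_features))

    # Train a single softmax layer by gradient descent on
    # cross-entropy loss.
    W = np.zeros((n_features, n_concepts))
    for _ in range(200):
        logits = scans @ W
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(n_samples), labels] -= 1.0  # d(loss)/d(logits)
        W -= 0.1 * scans.T @ probs / n_samples

    predicted = (scans @ W).argmax(axis=1)
    print("calibration accuracy:", (predicted == labels).mean())

A real decoder would be far bigger, but nothing in the concept demands
a general intelligence: it's pattern matching, which is exactly what
such networks are for.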

> Even with this degree of communication things are not so cut and dried
> though. How does one computer communicate color info to someone who is
> color blind?

Mostly the same way one transmits sight info to someone who is entirely
blind. With color blindness, the problem is not in the brain but in
the eyes; the circuitry is still there, behind the defective layer.
Likewise, scientists have transmitted stimuli directly to the visual
neurons of a blind person, and the result (a crude matrix of white dots
against a black background), while far from perfect sight, is
definitely nonzero.
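
As a toy illustration of that crude dot matrix, here's how an image
might be reduced to the kind of coarse on/off grid such an implant
could drive (Python with numpy again; the grid size, threshold, and
input image are all made up for the example):

    import numpy as np

    def to_dot_matrix(image, rows=8, cols=8, threshold=0.5):
        """Average the image into rows x cols cells, then threshold
        each cell to a single on/off 'phosphene'."""
        h, w = image.shape
        trimmed = image[:h - h % rows, :w - w % cols]
        cells = trimmed.reshape(rows, h // rows,
                                cols, w // cols).mean(axis=(1, 3))
        return cells > threshold

    # A synthetic gradient; any 2-D grayscale array would do.
    image = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
    for row in to_dot_matrix(image):
        print("".join("*" if on else "." for on in row))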

> I am sure there would be some data that would not be able to
> be transferred properly because some people might have sufficiently
> dissimilar backgrounds that nothing quite matches up, however there may be
> some way of extrapolating such difficult experiences back to basic sensory
> data. For instance if someone wants to communicate their love of custard
> apples they could send the flavor of custard apple as basic taste data,
> then adding emotional overtones in -- where they were the first time they
> tasted it, how happy they were, and a lot of other subconscious emotional
> baggage that goes with the experience.

Indeed, such approximations may be necessary. What, exactly, gets
transmitted and how? Just as speech is built up from various basic
sounds, and computers communicate using structured series of voltages
and optic excitation levels, this communications method will need its
own basic units of communication...though these units can be a lot
closer to our internal representation of information, and thus better
able to express what we wish.
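
Here's one guess at what such a basic unit might look like on the
wire: a structured message built from primitive sensory channels plus
emotional overtones, as in the custard apple example above. All the
field names and scales below are invented for illustration (Python):

    import json

    message = {
        "kind": "taste",
        "primitives": {            # basic taste channels, 0.0 - 1.0
            "sweet": 0.8,
            "sour": 0.2,
            "bitter": 0.0,
            "salty": 0.1,
        },
        "overtones": [             # subjective context layered on top
            {"emotion": "nostalgia", "intensity": 0.7},
            {"emotion": "contentment", "intensity": 0.9},
        ],
    }

    encoded = json.dumps(message)  # what one computer would send
    decoded = json.loads(encoded)  # what the other would remap
    print(decoded["primitives"]["sweet"])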

> I think the most useful aspect would be teaching of concepts. These are
> often the most tricky to get across to others via language. However with
> that comes the risk that it also becomes wonderful indoctrination
> machinery. It would be very easy to impart pathological fears or hatred,
> though presumably the AIs would have safeguards... people will not likely
> be too happy about exposing their most valuable organ to that kind of risk.

I think the arts might get more immediate use out of it. Take, for
example, any television program or movie you've seen recently, and
alter it in some way. Say, put it in a different setting, or change
the dialog. Now, what if, merely by imagining it (and maybe pushing a
few buttons), you could get a computer to record a digital recording,
complete with sound, of your dream? You could review it as many times
as you like, and if you think of some favorable alteration, just think
it and re-record. To put it bluntly, the bottom would drop out of
production costs, especially if the machines to record could be
manufactured cheaply. (Good and cheap software to edit video streams
already exists, though it typically does need high-end hardware to
process in real time; it could easily be adapted to accept a new video
input.) Sturgeon's Law dictates that much of this would, of course, be
crap...but with greater quantity also come more diamonds, and the top
works could be all that much better for their ease of creation.

And once the arts gain experience using it to shape dreams, adapting
the software to control machines would be trivial. After that come
machines (say, nanite clouds) whose shape and utility are determined
and designed upon use. ("Do we make this portable bridge 4 meters or
6 meters long?" "Oh, let's wait 'til we're at the river and just make
it however long we need it to be that day.")

> I think it's a long way away... 50 years????

I suspect it might be one of the enablers of the Singularity, if hard
AI cannot be cracked before this tech becomes available.
