Re: Lanier's losing his edge?

From: Dan Fabulich (dan@darkforge.cc.yale.edu)
Date: Fri Sep 29 2000 - 17:37:28 MDT


This essay is a fantastically good piece of work, a really excellent
summary of the phenomenon he's talking about.

He's also right that what he calls "cybernetic totalism" has no formal
name; the name "cybernetic totalism" certainly won't catch on. (He's
using "cybernetic" to refer to the complex systems studied by Wiener,
though I agree with Lanier in thinking that Wiener would be
flabbergasted at what some people think can be done with his research.)

> Here is a partial roster of the component beliefs of cybernetic totalism:
>
> 1) That cybernetic patterns of information provide the ultimate and
> best way to understand reality.
>
> 2) That people are no more than cybernetic patterns.
>
> 3) That subjective experience either doesn't exist, or is unimportant
> because it is some sort of ambient or peripheral effect.
>
> 4) That what Darwin described in biology, or something like it, is in
> fact also the singular, superior description of all creativity and
> culture.
>
> 5) That qualitative as well as quantitative aspects of
> information systems will be accelerated by Moore's Law.
>
> And finally, the most dramatic:
>
> 6) That biology and physics will merge with computer science
> (becoming biotechnology and nanotechnology), resulting in life and
> the physical universe becoming mercurial; achieving the supposed
> nature of computer software. Furthermore, all of this will happen
> very soon! Since computers are improving so quickly, they will
> overwhelm all the other cybernetic processes, like people, and will
> fundamentally change the nature of what's going on in the familiar
> neighborhood of Earth at some moment when a new "criticality" is
> achieved- maybe in about the year 2020. To be a human after that
> moment will be either impossible or something very different than we
> now can know.

I found myself nodding to most of these, with some qualifications.

For example, I'd put more emphasis, in #3, on the idea that subjective
experience is "unimportant" than on the idea that such experience does
not exist. There are a variety of ways that subjective experience
(or, more to the point, the fact that we know basically nothing about
subjective experience or how it comes about) could turn out to be
unimportant: subjective experience might not exist, it might exist but
fail to act causally on the world in any way, or it might just happen
to follow along with anything that we'd call intelligence, as a sort
of mysterious package deal. It'd be "unimportant" because it would
cast aside the only significant philosophical obstacle to artificial
intelligence.

Similarly, I found myself marginally agreeing with #4, but only
because he put in the caveat "or something like it." I don't think
that memetic theory is a good way of describing most of our
cognitions, though it works pretty well with some of them. While
evolution certainly made me, and thereby generated all my good ideas,
I don't take evolution itself to be a very good way to approach most
problems, though some sort of cybernetic "spontaneous order" approach,
one with a whole ecology of "thermostats," IS what I'd take to be the
best descriptor of both creativity AND culture.
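
By a "thermostat" I just mean a negative-feedback loop: a unit that
nudges a system back toward a set point whenever it drifts. A toy
sketch in Python (the function name, gain, and constants are all my
own illustrative choices, not anybody's actual model):

```python
# Toy negative-feedback "thermostat": a corrective nudge proportional
# to the error between the current reading and a set point.
def thermostat(reading, set_point, gain=0.5):
    """Return a correction that pushes the reading toward the set point."""
    return gain * (set_point - reading)

temperature = 30.0
for _ in range(20):
    temperature += thermostat(temperature, set_point=20.0)

# After enough iterations the reading has settled near the set point.
print(round(temperature, 3))
```

An "ecology" of these would be many such loops coupled together, each
one's output perturbing the others' readings; spontaneous order is
what falls out of that coupling, not anything any single loop intends.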

#5 won't be true unless and until we can cram human-equivalent
intelligence into a computer, either via AI or uploading; this is to
say that #5 is not true today, but will be. (Of course, this can't
happen unless creativity can be digitized and experience can be
explained away.)

And #6, Singularity? #6 happens at some point after #5 turns true.

OK, so, let's take a look at where he goes with this.

> Cybernetic Totalist Belief #1: That cybernetic patterns of information
> provide the ultimate and best way to understand reality.
>
> [...]
>
> Belief #1 appeared on the stage almost immediately with the first
> computers. It was articulated by the first generation of computer
> scientists; Wiener, Shannon, Turing. It is so fundamental that it
> isn't even stated anymore within the inner circle. It is so well
> rooted that it is difficult for me to remove myself from my
> all-encompassing intellectual environment long enough to articulate an
> alternative to it.
>
> An alternative might be this: A cybernetic model of a phenomenon can
> never be the sole favored model, because we can't even build computers
> that conform to such models. Real computers are completely different
> from the ideal computers of theory. They break for reasons that are
> not always analyzable, and they seem to intrinsically resist many of
> our endeavors to improve them, in large part due to legacy and
> lock-in, among other problems. We imagine "pure" cybernetic systems
> but we can only prove we know how to build fairly dysfunctional
> ones. We kid ourselves when we think we understand something, even a
> computer, merely because we can model or digitize it.

The notion that cybernetics might be a good way to understand reality
relies inherently on the idea that the laws of physics, such as they
"really" are (or, at least, as close as we'll ever get to them), could
be simulated on a sufficiently powerful computer/Turing machine, or on
a computer equipped with a random number generator. This is certainly
true of our physics now. Some of us argue about whether we need the
random number generator.
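
To make the "computer equipped with a random number generator" point
concrete, here's a toy Monte Carlo sketch of a stochastic physical
law, exponential decay, done with nothing but arithmetic and an RNG
(the per-step decay probability and population size are made-up
constants of my own):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def decay_step(atoms, p_decay=0.1):
    """Each surviving atom independently decays with probability p_decay."""
    return sum(1 for _ in range(atoms) if random.random() > p_decay)

population = [1000]
for _ in range(10):
    population.append(decay_step(population[-1]))

# The sampled counts track the analytic curve N(t) = 1000 * 0.9**t.
print(population)
```

Nothing deep is claimed here, only that a law stated probabilistically
is still perfectly simulable once you hand the machine a source of
random numbers.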

Interestingly, barring a Kuhnian scientific revolution, no new
scientific discovery is forthcoming which will change this, on account
of our requirement that, for something to count as a scientific
discovery, it must make precise quantitative predictions as good as or
better than its predecessor(s). These quantitative predictions are
described in our math, tested by our symbolic logic, and they can all
be simulated on a computer of sufficient power. No discovery arguing
for non-computability could be considered scientific if presented
today; we'd reject it out of hand.

Because, for non-computability to turn out to be true, the laws of
physics would have to be totally undescribable by any logic with an
algorithmic syntax. For this to turn out to be right would require
something deeply mystical about the laws of physics, far beyond our
capacity to understand them or express them in any language.

The pragmatist in me says: "If THAT'S what the laws of physics are
like: uncomputable, undescribable, and unknowable, then I'LL never
have to worry about those!" Lanier professes to be a kind of
pragmatist; for his argument to hold, he'll have to argue from a
pragmatic stance that the REAL laws of physics, despite the fact that
we can never know about them or understand them, are non-computable in a
rather rigorous way. I can imagine a lot of different kinds of people
holding a view like that, but not a pragmatist.

So, as it turns out, the best physics we'll ever get a handle on
happens to be cybernetic in nature. That's part of what Lanier calls
the "elegance" of this view; it's also part of what worries him. This
concern is understandable. We *should* worry that we're getting too
proud of ourselves. But, with that in mind, he'll have to take
potshots at our current standard of what makes something "scientific"
in order to make his supposedly "pragmatist" argument go through. I
just can't imagine how this argument would go.

> There is also an epistemological problem that bothers me, even though
> my colleagues by and large are willing to ignore it. I don't think you
> can measure the function or even the existence of a computer without a
> cultural context for it. I don't think Martians would necessarily be
> able to distinguish a Macintosh from a space heater.

Why are we willing to ignore it? Simple: "cultural contexts" happen
within the laws of physics. Not reduced to the laws of physics, but
happening within them, failing to do anything which the laws of
physics forbid.

But the cultural point might have some deeper validity than I'm
currently giving it credit for. I'd be hard pressed to explain why a
theory that failed to make quantitative predictions would be worse
than a theory which did, except by appealing to the cultural value
itself. We think it's important that our scientific theories make
precise falsifiable predictions. Our culture has largely agreed on
this, and I take no exception to it. But I can provide no better
argument for it than that.

So, yes, our current ideas about what the laws of physics are, and
even what could "count" as a competitor, are culturally determined.
So are all of our other beliefs, to the pragmatist. This outcome is
uninteresting.

> Belief #2: That people are no more than cybernetic patterns
>
> [...]
>
> We have caused the Turing test to be passed. There is no
> epistemological difference between artificial intelligence and the
> acceptance of badly designed computer software.

I have NO idea what the man's talking about here.

The Turing test is a kind of cultural trial by fire: it is the trial
which a machine must pass before we'll accept it into our "circle of
empathy," as he calls it. As a descriptive claim, I think it's
basically right; once a computer can act like us in all of the
relevant ways, it will have passed our little hazing ritual and made
it into the Thinker's Club. Sure, there will be some bigots who don't
trust those "bots," but the bigotry will end to the extent that the
machines prove themselves as worthy members. (Or maybe bots will kick
us out.)

The Turing test is NOT the test to see when we'll "put up with" the
technology at hand. What the heck is he saying here?

> The AI belief system is a direct explanation for a lot of bad software
> in the world, such as the annoying features in Microsoft Word and
> PowerPoint that guess at what the user really wanted to type. Almost
> every person I have asked has hated these features, and I have never
> met an engineer at Microsoft who could successfully turn the features
> completely off on my computer (running Mac Office '98), even though
> that is supposed to be possible.

Yeah! And EVERYBODY hates Google! What morons they were, trying to
design a system that would find RELEVANT hits for you. Don't they
know that trying to mimic what a human can do only results in bad
software? Don't they REALIZE that there's no difference between
clicking on "I'm Feeling Lucky" and accepting google.com as a moral agent?

> Belief #3: That subjective experience either doesn't exist, or is
> unimportant because it is some sort of ambient or peripheral effect.
>
> [...]
>
> I propose to make use of a simple image to consider the alternative
> points of view. This image is of an imaginary circle that each person
> draws around him/herself. We shall call this "the circle of
> empathy". On the inside of the circle are those things that are
> considered deserving of empathy, and the corresponding respect,
> rights, and practical treatment as approximate equals. On the outside
> of the circle are those things that are considered less important,
> less alive, less deserving of rights. (This image is only a tool for
> thought, and should certainly not be taken as my complete model for
> human psychology or moral dilemmas.) Roughly speaking, liberals hope
> to expand the circle, while conservatives wish to contract it.
>
> [...]
>
> So let us pretend that the new Kant has already appeared and done
> his/her inevitable work. We can then say: The placement of one's
> circle of empathy is ultimately a matter of faith. We must accept the
> fact that we are forced to place the circle somewhere, and yet we
> cannot exclude extra-rational faith from our choice of where to place
> it.

Very well. Androids are the compelling argument for letting computers
into your circle of empathy. Androids that pass for humans. Androids
that don't know they're not born like regular people, who live lives,
go to school, then go to work; androids who make love, make sacrifices
for the people closest to them, follow their dreams, etc. If you
wish, cybernetic androids, with metal brains hooked up to bodies of
flesh.

Now I suppose your 21st century Kant will calmly explain to these
androids: "You THINK you're a human, but you're not. You've been
designed as an android. Therefore, you don't deserve moral
consideration or practical treatment as an equal. Sure, you're crying
now and begging me not to take you away from your family, but you're
only doing that because we PROGRAMMED you to do that! You have no
subjective feeling of terror at the thought of my enslaving you! In
fact, your words can't possibly MEAN anything, your thoughts can have
no content, because you have no Intentionality with which to make them
meaningful. Take it away."

This is no Java applet, Lanier. Androids like that DO get in my
circle of empathy, as a matter of personal faith. I'd say that
there's something deeply incommensurable between your personal faith
and mine if they don't get into yours. I might even go as far as to
say that your faith is so far from the norm as to be deranged and
wrong.

You can't deny the possibility of androids that act like this;
you don't. So how can you not let them into the circle? By what
right of birth do you assert bigotry like that?


I'm going to skip Belief 4, since it's levied mostly against people
who think memetics is a great way to model the whole brain, and
against evolutionary psychologists, whom he misinterprets as claiming
that rape is good because it's natural. (If anything, the correct
conclusion to draw here is that nature is bad because rape is
natural.) I don't believe either of these claims is true.

> So, while I love Darwin, I won't count on him to write code.

Me neither, though a more complex cybernetic algorithm (or
computer+RNG) will do nicely. I say that only because creativity
happens under the laws of physics, so SOME computer or other can do
the job.
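
"Darwin writing code" even has a literal reading: a genetic algorithm,
i.e., a computer plus an RNG doing variation and selection. A minimal
(1+1)-style sketch; the target string, mutation rate, and names are my
own illustrative choices:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

TARGET = "cybernetic"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    """Count positions that already match the target string."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Randomly rewrite each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Variation plus selection: keep the fitter of parent and mutant.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while fitness(parent) < len(TARGET):
    child = mutate(parent)
    if fitness(child) >= fitness(parent):
        parent = child
    generations += 1

print(parent)  # -> "cybernetic"
```

It's a dumb hill-climber, and that's rather the point: blind variation
and selection do find the target, just far less efficiently than a
programmer who could simply type it.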

> Belief #5: That qualitative as well as quantitative aspects of
> information systems will be accelerated by Moore's Law.

As I said, this isn't true now. It will be true later, when human
equivalent AI or uploads happen. Then, a quantitative increase in
computing power means a quantitative increase in the number/speed of
the people thinking about qualitative increases. I have every reason
to believe that this will result in rather rapid qualitative
increases.
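
The feedback I'm gesturing at can be made concrete with a toy model:
plain Moore's-Law growth doubles capacity every period, while the
feedback version lets current capacity buy a faster rate of
improvement. The coupling constant is pure invention, not a
prediction:

```python
def plain_growth(periods, capacity=1.0):
    """Ordinary Moore's-Law growth: double capacity every period."""
    for _ in range(periods):
        capacity *= 2.0
    return capacity

def feedback_growth(periods, capacity=1.0, coupling=0.01):
    """Growth where capacity feeds back into the rate of improvement."""
    for _ in range(periods):
        # More capacity -> more (or faster) thinkers -> bigger multiplier.
        capacity *= 2.0 + coupling * capacity
    return capacity

print(plain_growth(10), feedback_growth(10))
```

Run it out and the feedback curve pulls away from the plain doubling
curve superexponentially; that runaway divergence is the shape of the
Singularity claim, whatever one thinks of its premises.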

Then again, there might be something rather fundamentally hard about
qualitative increases in intelligence. If so, then #6, the
Singularity, won't happen any time soon. Immortality will come, but
gradually. The revolution won't be televised, it'll have to be
remarked upon by historians looking backwards and saying: "Look at how
far we've come in such a short time!" I hope that the Singularity
happens sooner rather than later. But I can't say for sure.

-Dan

       -unless you love someone-
     -nothing else makes any sense-
            e.e. cummings



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:27 MDT