Re: Hofstadter Symposium [was Re: it was all a gag]

From: Mike Linksvayer (ml@justintime.com)
Date: Tue Apr 04 2000 - 00:16:47 MDT


Ken Clements wrote:
> Hofstadter admitted that he had stacked the panel by not asking anyone
> from the anti-technology movement (Bill made up that whole side).

Hofstadter didn't invite anyone who believes that intelligence requires
a biological brain, which is quite different from believing that
technology is bad. Joy seems to believe some technology is bad, but he
doesn't seem to fall into the "intelligence requires biology" camp.
(Offtopic aside: Searle sounds like a very reasonable classical
liberal in a recent interview with Reason magazine
<http://www.reasonmag.com/0002/fe.ef.reality.html>. Just more proof,
not that any was needed, that even reasonable people often take dumb
arguments seriously.)

There were really two debates going on (though the atmosphere wasn't
contentious at all): rapid vs. slow development/evolution of
human-level or greater machine "intelligence" (in quotes because what
this means is nebulous and wasn't discussed), with Kurzweil and
Moravec arguing for rapid and Holland and Koza for slow, and "we must
relinquish dangerous technology now or face catastrophe" (Joy, with
support from Koza, vs. Merkle and Moravec, with support from Kurzweil).

Kurzweil and Moravec's initial talks were quite boring, though their
contributions to the discussion and Q&A periods were the most
insightful of the group. After droning about exponential and double
exponential increases in computational speed, Kurzweil did sneak in one
gem: he indicated that Moore's law, or something like it, also applies
to software, which runs very much contrary to most people's intuitions.
I was very eager to hear a rationale for this claim. Unfortunately
when Holland asked about it at one point, Kurzweil only mentioned
better development tools.

Joy seemed quite proud (in a very serious way) that the media is paying
attention to him and that he is well read (or at least can scour books
for emotional quotes supporting his argument, or at least pay someone
to do so). His argument basically boiled down to this: supervirulent
pathogens will be easily engineered and/or produced in crazy and/or
sick people's basements, and if only a few of the millions of
certifiably crazy, evil people in the world do this, we're all doomed.
We must not allow the democratization of KMDs (Knowledge of Mass
Destruction?). Oh yeah, and remember how bad the plague was in Greek
times or the Middle Ages? Why, they catapulted plague-infected bodies
over city walls, and people died horrible deaths and doctors couldn't
help at all. Clearly we have not evolved to the point where people can
be trusted with knowledge of biology sufficient to engineer pathogens.
And oh yeah, there are a bunch of famous people and books that agree
with Joy, and he can quote them all (I think Einstein was probably
most quoted).

Joy's solution is "relinquishment", though he didn't really give any
details of what this would involve; he seems to think that arms
control treaties and subsequent verification protocols point in the
right direction. He also mentioned, once, strict corporate liability
as a deterrent to corporations developing dangerous technologies. I
got a tiny chuckle out of that, as strict liability is one of those
libertarian catchall answers.

I believe Joy said that he thinks there is a 30-50% chance of human
extinction (presumably with no posthuman successor), not including all
the other horrible outcomes that are likely. I didn't get the
impression that the other panelists (I should have asked the question
directly), let alone readers of this mailing list, consider human
extinction impossible. I'd say that many of his concerns
are valid, though his scaremonger/authoritarian approach seems
contrived to create fame for himself.

If Joy was "wrong" and annoying, Merkle was "right" and extremely
annoying. I felt that Merkle came across as a (highly intelligent)
pompous ass with a really bad sense of humor. He didn't even attempt
to address Joy's points, not counting wisecracks ("Would those
nanomachines be using the broadcast architecture, or some other
architecture?" Ok, you had to be there. I cringed.) I got the same
impression of Merkle when I saw him on stage with Michio Kaku at a
"Next 20 Years" event. My tentative evaluation: brilliant researcher,
rotten public spokesperson.

I hadn't heard of the broadcast architecture before (I don't attempt
to keep current with nanotech research). Hardly anyone in the
audience raised their hands when Merkle asked if anyone had heard of
it, and I suspect many of them were imagining some networking or
distributed computing architecture, as I was when I considered
half-raising my hand. The idea seems to be that nanobots would
somehow be broadcast instructions, eliminating the need for them to act
completely independently (an analogy with DNA was made -- these
broadcast architecture nanobots wouldn't carry around a full complement
of DNA) and making them much cheaper and more controllable. The last
point was held forth as a promising means of preventing a runaway
self-replicator catastrophe.

My intuition (and that's all I have on this point) doesn't
find this one-sentence version of the broadcast architecture very
compelling in terms of cost or danger. Embedding instructions in a
nanobot seems really cheap, considering the capacity of nanotech
storage. Would an embedded communications device be cheaper? Well, it
may be in one sense at least: it would be much easier to program
nanobots to do some very limited function and await instructions than
it would be to program nanobots to do generalized tasks and to handle
general contingencies. But then it would be even simpler (not to
mention safer) to program nanobots to do one task, then "die" after
doing that task a desired number of times. On controllability: if
nanobots can receive broadcast instructions, then, given the
inevitability of security bugs, they can also receive maliciously
broadcast bad instructions.
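
To make the "do one task, then die" alternative concrete, here's a toy
sketch -- entirely my own invention, with hypothetical names, and of
course nothing like real nanotech -- of the two control schemes:

    class CounterBot:
        """Self-limiting bot: performs its single task a hard-wired
        number of times, then permanently disables itself."""
        def __init__(self, budget):
            self.budget = budget

        def act(self):
            if self.budget <= 0:
                return False      # "dead": no further action possible
            self.budget -= 1
            return True           # performed one unit of its one task

    class BroadcastBot:
        """Broadcast-architecture bot: inert until instructed, so it is
        exactly as trustworthy as the broadcast channel itself."""
        def act(self, instruction):
            # Any flaw in authenticating `instruction` lets an attacker
            # command every deployed bot at once.
            return instruction == "do-task"

    bot = CounterBot(budget=3)
    print([bot.act() for _ in range(5)])  # [True, True, True, False, False]

The counter bot fails closed; the broadcast bot fails exactly as open
as its weakest authentication check.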

John Holland's comments were all very brief and generally well spoken.
He was highly skeptical of Kurzweil and Moravec's predictions. Holland
said that we have a very slight understanding of intelligence, and
without much better theory we won't get very far. He drew an analogy
between machine intelligence and fusion power -- he believes that we
haven't gotten very far in five decades with the latter because we
don't have sufficiently good theory, despite spending billions trying
to make it work, and despite fusion power potentially being a really
good thing.

Throughout the afternoon there were several comments that alluded to
the need for better theory, or at least different approaches, in order
to make breakthroughs. Or, as Jeff Davis' Ray Charles signature quote
says, "Everything's hard till you know how to do it." Kurzweil and
Moravec were asked whether, if 100 years from now we knew how to
create machine intelligence, we couldn't run such an intelligence on
today's computers (this followed someone mentioning a tinkertoy
computer -- but it doesn't run Linux!). Both seemed to indicate that
today's computers simply don't have the storage or horsepower needed.
I can understand storage, but given an intelligent program and
glacially slow hardware, why can't it just be really slow?

Another comment in this vein from the audience mentioned that someone
(at Sandia?) had created a robot that could walk with only twelve
transistors, using an analog feedback system, whereas it has been
extremely hard to get many-MIPS digital-brained robots to walk.
Moravec seemed to say that because analog requires some bulk
technology, digital nanocomputers would probably be more cost effective
even if they must be really complex. Well, yeah, but we don't have
nanocomputers yet. There's lots of cool stuff remaining to be done
with old technology, and I bet it will sometimes be much more cost
effective from a development perspective.

Kevin Kelly's answer to the symposium's title "Will Spiritual Robots
Replace Humanity by 2100?" was "NO WAY". His argument, to the extent I
caught it (I kind of zoned out for a while due to extreme thirst) was
that machine intelligence will fill lots of specialized niches, some of
them niches previously filled by humans, but no machine will completely
replace humans. He used a calculator as a primitive example -- it's
much better than any human at arithmetic, but not good for much else.
I'm not making the point as eloquently as he did. Perhaps it was the
graph with lots of little dots on it, all representing little niches
for intelligent entities. At best, he seemed to say, intelligent
machines will free humans from having to work.

I also remember Kelly being the first to mention that communicating
with intelligent machines of our creation could be a very spiritual
thing, much like communicating with "ET" would be. Kurzweil made a
similar point several times.

Frank Drake came off as a mildly boring, mildly crackpot case. We'll
judge the aliens' intelligence by the size of their radio telescopes,
har, har, har.

John Koza said that in numerous attempts to have a genetic program
learn to model some tiny aspect of human intelligence or perception,
perhaps equivalent to one second of brain activity (I know this doesn't
really make sense; I'm fuzzy on the details and don't recall any of
the specific cases), he found he required 10^15 operations
(requiring months on standard PCs). So a "brain second" is 10^15
operations, and this huge number obviously poses a huge barrier to
machine intelligence. Or something like that. I'll have to watch the
webcast when it is available; it seemed like an interesting point.
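
As a rough sanity check on the "months" figure (the PC throughput here
is my assumption, not anything Koza stated):

    OPS_TOTAL = 1e15     # Koza's estimate for one "brain second"
    OPS_PER_SEC = 1e8    # assumed sustained ops/sec of a circa-2000 PC
    days = OPS_TOTAL / OPS_PER_SEC / 86400
    print(days)          # ~115.7 days, i.e. a few months -- consistent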

Even while listening, I was confused concerning Koza's argument
vis-a-vis the hardness of machine intelligence. It seems (as Kurzweil
later pointed out concerning his speech recognition software) that once
a genetic program "learns" a desired behavior, it can be copied
infinitely, so the operations required to get to a certain level of
functioning are mostly irrelevant.

There was lots of good stuff in the discussion and Q&A sessions, but
it's mostly a blur to me. I'll mention three things I remember:

Kurzweil said that he was using genetic programming to simulate stock
traders (presumably using historical data?). Successful trader programs
get to recombine with other successful trader programs. He didn't
mention whether they were making real trades and if so, how
successfully. I'm sure lots of people are doing similar research,
given the potential payoffs.
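
For the curious, here's a minimal sketch of the kind of setup I
imagine -- purely my guess, since Kurzweil gave no details, and really
a genetic algorithm over two threshold parameters rather than true
genetic programming over program trees -- showing the
select-and-recombine loop on synthetic price data:

    import random

    random.seed(0)
    prices = [100.0]
    for _ in range(499):                  # synthetic random-walk "history"
        prices.append(prices[-1] + random.uniform(-1, 1))

    def fitness(trader):
        """Final portfolio value: buy below `buy`, sell above `sell`."""
        buy, sell = trader
        cash, shares = 1000.0, 0.0
        for p in prices:
            if shares == 0 and p < buy:
                shares, cash = cash / p, 0.0    # go all-in
            elif shares > 0 and p > sell:
                cash, shares = shares * p, 0.0  # cash out
        return cash + shares * prices[-1]

    def recombine(a, b):
        """Uniform crossover of two traders, plus a little mutation."""
        child = [random.choice(pair) for pair in zip(a, b)]
        return [g + random.uniform(-0.5, 0.5) for g in child]

    population = [[random.uniform(90, 110), random.uniform(90, 110)]
                  for _ in range(50)]
    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]       # successful traders survive and
        population = survivors + [        # recombine with other successes
            recombine(random.choice(survivors), random.choice(survivors))
            for _ in range(40)]
    print(round(fitness(population[0]), 2))   # best evolved trader's payoff

Note that on a pure random walk the "success" found here is mostly
overfitting to the particular history -- which is exactly why I'd want
to know whether his evolved traders made real trades.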

A few people mentioned consciousness being a pattern that presumably
could be mapped to any substrate. An example, given by either Kurzweil
or Moravec, was that of a pattern in a river -- the water molecules
constantly change, but the pattern may remain for long periods of
time. Moravec went even further, saying that perhaps consciousness is
an interpretation of a pattern, so if you know what you're looking for,
you could perhaps find conscious patterns, say in rocks, to pick a
cliche. Sure, this is run-of-the-mill daydreaming for extropians, but
somehow it's pleasant to hear it in public.

In response to an audience question about spirituality, Joy said that
he had read a book (of course!) by E.O. Wilson in which Wilson had
hinted at explaining all beliefs, including spiritual beliefs, in
physical terms. Joy said, roughly paraphrased, "the game's changed,
they [religious people] just haven't been told yet." See, he has some
sense! Yeah, he wrote vi too.

After the event let out, I wandered around a bit and lay down under
the pleasant sun in the deserted engineering quad. The cirrus clouds
above were beautiful and the temperature perfect. The experience was
giddy. I rededicated myself to experiencing the wonder of life, even
as a mere human, and eagerly look forward to attaining ever giddier
heights, perhaps with some technological assistance in the future.

Later I wandered around Palo Alto while waiting for the next Caltrain.
I hadn't been there in a few years. On a Saturday night, it's like
fairyland. Healthy and obviously wealthy people literally spilling out
of every immaculate restaurant. Someone went out of their way to pick
up a pen I dropped in the bustle. Even the sole homeless man seemed to
be doing pretty well. Reminded me of Santa Barbara, except that
Stanford is where the ocean would be, and the workers aren't mostly
Mexican. Amazing what extraordinary wealth can do. Don't imagine too
many happy faces there today (NASDAQ selloff).

Mike


