From: steve (firstname.lastname@example.org)
Date: Sat Jan 12 2002 - 14:34:59 MST
----- Original Message -----
From: "Colin Hales" <email@example.com>
Sent: Friday, January 11, 2002 11:47 PM
Subject: Jaron Lanier Got Up My Shnoz on AI
This is a Dec 29 Guardian article on Jaron Lanier. Boy, I wish we had more
'eccentrics' like this around. We need 'em. He's well worth following. One of
the great 'sleeping bear pokers'.
The Guardian Profile - Jaron Lanier - The virtual visionary
However, I'm a bit mystified by his attitude on AI. Maybe my logic is down
the toilet. I can't tell, maybe you can.
My thoughts, written after I read the article, are dumped here.
Jaron Lanier is a regular at www.edge.org. He's always
very challenging and seems to make sense. However, I don't understand his
ideas about the limits of technology. We humans are technology. Just
because we're constructed with DNA, or by any other method you'd care to consider, in
no way invalidates another method for the creation of intelligence.
As I read the article, he isn't saying that we shouldn't create AI because
that would be impious (although he does seem to have leanings in that
direction). He also denies that he is taking the line taken by people like
Ray Tallis, i.e. that Dennett and Pinker are wrong and human consciousness
is ultimately mysterious and non-reproducible. He seems to say that we have
no way of measuring or defining intelligence and so if we created an AI we
wouldn't realise or know we had done it because we would have no way of
recognising it. I must say this does sound bizarre to me. I can understand
the notion that there might be forms of intelligence which we might not
recognise, but why assume this is the only kind we could create? Why could we
not create something analogous to our own intelligence, which we could therefore
recognise? The two positions he rejects both seem more coherent to me.

Steve
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:34 MST