From: Ramez Naam (mez@apexnano.com)
Date: Tue Jun 17 2003 - 11:13:51 MDT
From: Brett Paatsch [mailto:paatschb@optusnet.com.au]
> 1) What is the probability of General AI in the next 20 years
> of *either* friendly or unfriendly variety?
It seems rather low to me. There is no evidence of a fundamental
design breakthrough in the field of AI[*]. Without such a
breakthrough, it seems unlikely that humans will be able to design an
artificial general intelligence (AGI).
The alternative approach of uploading or simulating a human brain
will not be computationally viable in 20 years without a massive leap
beyond the Moore's Law projections for that time. There is no
evidence of such a leap either.
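For a rough sense of the gap, here is a back-of-the-envelope sketch
(Python, purely illustrative). The 2003 baseline, the 18-month
doubling time, and the brain-simulation requirements below are rough
assumptions for illustration only, not figures from this thread:

    # Back-of-the-envelope: does 20 years of Moore's Law-style doubling
    # reach brain-simulation scale?  Every number here is an assumption.
    DOUBLING_TIME_YEARS = 1.5   # assumed doubling time for compute
    YEARS = 20
    BASELINE_OPS = 1e10         # assumed ~2003 desktop machine, ops/sec

    projected = BASELINE_OPS * 2 ** (YEARS / DOUBLING_TIME_YEARS)
    print("projected machine in %d years: %.1e ops/sec" % (YEARS, projected))

    # Assumed brain-simulation requirements, ops/sec
    # (coarse functional model vs. fine-grained neural simulation)
    estimates = {"coarse model": 1e14, "fine-grained model": 1e18}
    for label, required in estimates.items():
        print("%s needs %.0e ops/sec -> projection covers %.2g of it"
              % (label, required, projected / required))

On those assumptions the projection (roughly 1e14 ops/sec) only just
reaches the optimistic coarse estimate and falls short of a
fine-grained simulation by about four orders of magnitude; that
shortfall is the "massive leap" in question.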
> 3) Can friendly AI be built that would be competitive with
> un-friendly AI or would the friendly AI be at the same sort
> of competitive/selective disadvantage as a lion that wastes
> time and sentiment (resources) making friends with zebras?
This is a fine question. I find the reports of evolution's demise to
be greatly exaggerated. So long as the universe contains replicators
and competition for resources, there will be evolution. And as you
point out, unfriendly AIs may be more competitive than friendly AIs.
mez
* - I have a great deal of respect for Eliezer, Ben Goertzel, and
others in this community who are working on AGI. However, until they
produce compelling experimental evidence, I'll put them in the
category of smart people working on a problem that has stumped every
other smart person who's worked on it.