From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Tue Jun 17 2003 - 18:05:45 MDT
Mez wrote:
> From: Kevin Freels [mailto:megaquark@hotmail.com]
>> Just a thought..... It doesn't seem likely that
>> massive breakthroughs are predictable 20 years in
>> advance. In 1949, no one expected to be on the moon in
>> 1969. In 1883, powered flight wasn't expected anytime
>> soon. In 1976, the current access to information was
>> unfathomable except to a few people speculating. There
>> was no real expectation of it happening. Such is the
>> nature of discovery.
>
> Certainly. There could be a breakthrough next year that changes
> everything in the field of AI. I'm not saying it's impossible, just
> that there's no hard evidence to suggest that it's going to happen.
> In the absence of that evidence, and in the context of decades of
> unmet expectations for AI, I think it makes sense to take a more
> conservative view of the field.
>
> Or, to appeal to the Cult of Bayes out there, what priors would lead a
> Bayesian to believe that AGI is anywhere near realization? Whatever
> they are, I must not have them.
### For all the decades of unmet expectations, AI ran on computing power on the order of an ant's; only recently, as Moravec writes, did it graduate to the computing power of a mouse. Since AI on ant-powered computers gave ant-level results, and AI on mouse-powered computers gives mouse-level capacities (such as target tracking, simple learning, and simple motor control), we may expect that AI on human-level computers will give human-level results. Human-level computing power should be available to SingInst in about 15 years, so we can expect the recursive self-enhancement of the FAI to take off around that time.
QED?
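The 15-year figure can be sketched as a plain Moore's-law extrapolation. The capacity numbers below are assumptions loosely in the spirit of Moravec's brain-power scale (roughly 10^11 ops/s for a mouse, 10^14 ops/s for a human), not figures from the post itself, and the 18-month doubling time is the usual rule of thumb:

```python
from math import log2

# Assumed capacity estimates (not stated in the post):
mouse_level = 1e11           # ops/sec, roughly mouse-scale compute today
human_level = 1e14           # ops/sec, roughly human-scale compute
doubling_time_years = 1.5    # Moore's-law rule of thumb

# Number of doublings needed, times the doubling period
doublings = log2(human_level / mouse_level)
years = doublings * doubling_time_years
print(f"{doublings:.1f} doublings -> ~{years:.0f} years")
# -> 10.0 doublings -> ~15 years
```

Under these assumed inputs the arithmetic does land near 15 years, though the conclusion is only as good as the capacity estimates and the assumption that raw compute is the binding constraint.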
Rafal
This archive was generated by hypermail 2.1.5 : Tue Jun 17 2003 - 15:15:11 MDT