John Clark writes:
>>... AI progress ... suggesting the importance of lots of little
>>insights which require years of reading and experience to accumulate.
>
>That's a good point but the question remains, even if a billion small
>insights are needed what happens if their rate of discovery increases
>astronomically? ... artificial intelligence is only one path toward the
>singularity, another is Nanotechnology, perhaps another is Quantum
>Computers, and you only need one path to go somewhere.
In the above I was responding to Eliezer Yudkowsky's analysis, which is based on his concept of a few big wins. To respond to your question, I'd have to hear your analysis of why we might see an astronomical increase in the rate of insights. Right now, though, I'd really rather draw folks' attention to Vinge's concept and analysis. So I haven't responded to Nick Bostrom's nanotech/upload analysis, since Vinge explicitly disavows it. And I guess I should stop responding to Eliezer. (I'd be happy to discuss your other singularity concepts in a few weeks.)
>>There has been *cultural* evolution, but cultural evolution is Lamarckian.
>
>Yes but the distinction between physical and cultural evolution would
>evaporate for an AI, it would all be Lamarckian and that's why it would
>change so fast.
Well, this suggests it would be *faster*, all else equal. But how fast? And is all else really equal? That's what's at issue.
Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614