John Clark writes:
>>... AI progress ... suggesting the importance of lots of little
>>insights which require years of reading and experience to accumulate.
>That's a good point but the question remains, even if a billion small
>insights are needed what happens if their rate of discovery increases
>astronomically? ... artificial intelligence is only one path toward the
>singularity, another is Nanotechnology, perhaps another is Quantum
>Computers, and you only need one path to go somewhere.
The basic problem with singularity discussions is that lots of people
see big fast change coming, but few seem to agree on what that is or
why they think that. Discussions quickly fragment into an enumeration
of possibilities, and no one view is subjected to enough critical
analysis to really make progress.
I've tried to deal with this by focusing everyone's attention on the
opinions of the one person most associated with the word "singularity."
But success has been limited, as many prefer to talk about their own
concepts of, and analyses in support of, "singularity".
In the above I was responding to Eliezer Yudkowsky's analysis, which is based on his concept of a few big wins. To respond to your question, I'd have to hear your analysis of why we might see an astronomical increase in the rate of insights. Right now, though, I'd really rather draw folks' attention to Vinge's concept and analysis. So I haven't responded to Nick Bostrom's nanotech/upload analysis, since Vinge explicitly disavows it. And I guess I should stop responding to Eliezer. (Be happy to discuss your other singularity concepts in a few weeks.)
>>There has been *cultural* evolution, but cultural evolution is Lamarckian.
>Yes but the distinction between physical and cultural evolution would
>evaporate for an AI, it would all be Lamarckian and that's why it would
>change so fast.
Well, this suggests it would be *faster*, all else equal. But how fast, and whether all else really is equal, is what's at issue.
email@example.com  http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health  510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360  FAX: 510-643-8614