Eliezer S. Yudkowsky writes:
>to know what can be done with nanotechnology, the best way to find out is to
>ask Drexler or go to work for Zyvex, not extrapolate from the rise of
>agriculture. Similarly, there are big wins in the area of AI because when I
>visualize the system, I see lots of opportunities for caching that an
>intelligent compiler could easily use to shrink the system to a tenth of its
>size, as well as adding a great deal of cognitive capacity. ...
Asking Drexler has some obvious problems with bias; better to evaluate the arguments he presents. And to repeat myself: to make a case for a rate of change, you have to introduce time into your argument. You have to say how fast these opportunities would arrive, and how fast they would be taken advantage of.
>In short, I think there are big wins because I've looked a little beyond the
>limits of my own mind ... these are ultimately the only reasons for believing
>in a Horizon or a Singularity, and neither can be argued except with someone
>who understands the technology. ...
>Anyway, the primary point that I learned from the Singularity Colloquium is
>that neither the skeptics nor the Singularitarians are capable of
>communicating with people outside their professions. (Or rather, I should say
>that two people in different professions with strong opinions can't change
>each other's minds; ...
Please don't attribute any disagreement to my failing to understand AI. I think you will find that I can match any credentials you have in AI (or physics or nanotech for that matter).
>> The question is *how fast* a nanotech enabled civilization would turn the
>> planet into a computer. You have to make an argument about *rates* of change,
>> not about eventual consequences.
>If they can, if they have the will and the technology, why on Earth would they
>go slowly just to obey some equation derived from agriculture?
Economists know about far more than agriculture. And you really need a stronger argument than "it'll be fast because it's not agriculture."
>I've tried to articulate why intelligence is power. It's your turn. What are
>the limits? And don't tell me that the burden of proof is on me; it's just
>your profession speaking. From my perspective, the burden of proof is on you
>to prove that analogies hold between intelligence and superintelligence; the
>default assumption, for me, is that no analogies hold - the null hypothesis.
If no analogies hold, then you have no basis for saying anything about it. You can't say it will be fast, slow, purple, sour, or anything else.
To me "superintelligent" means "more intelligent", and we have lots of
experience with relative intelligence. Humans get smarter as they live longer,
and as they learn more in school. Average intelligence levels have been
increasing dramatically over the last century. Humanity as a whole is smarter
in our ability to produce things, and in scientific progress. Companies get
smarter as they adapt to product niches and improve their products. AI
programs get smarter as individual researchers work on them, and the field
gets smarter with new research ideas. Biological creatures get smarter
as they develop adaptive innovations, and as they become better able to
adapt to changing environments.
All this seems relevant to estimating what makes things get more intelligent,
and what added intelligence brings.
email@example.com  http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health  510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360  FAX: 510-643-8614