Eugene Leitl writes:
> > I interpreted both of your initial statements as rejecting inquiry into
> > things post-singularity. It seems as if you both think we *can't*
> > possibly know anything, so we shouldn't try. ...
>In fact I even think the assumptions we can do useful future synoptics over
>decades of dramatic growth are _dangerous_, ....
> > 1) Humans have a vast amount of knowledge and insight, only the tiniest
> > fraction of which can be expressed as equations ...
>I think this is a dangerous position. ...

Yes, powerful ideas are dangerous. But that doesn't make them wrong.

> > 2) Even if we knew nothing about a subject, that wouldn't mean
> > we couldn't learn something if we put our minds to it. ...
>The bulk of past predictions now seems ludicrous. ...

This claim is independent of any concept of singularity. So are you
saying no one should ever attempt to envision the future decades ahead?

>If there is no banking nor art after the Singularity, extrapolating
>from past human insight is worth shit.
>you emerge a god from the other side ... which is perfectly
>incomprehensible for a nonparticipating mehum observer.
How can you be so sure of these things: that gods have no banking, and that gods are incomprehensible to us? Are these true by definition of "god"?
Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-2627