From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sat Mar 01 2003 - 14:26:10 MST
Joao, now this is indeed an interesting question, and I will offer
my comments without having reviewed the other responses.
> I've been wondering on why are transhumanists so confident that we will
> reach the singularity.
The question of the "singularity" (and how it is perceived) is tied up
in the "soft" vs. "hard" takeoff scenarios. My personal opinion is
that the "inertia" of humanity will tend to bias development towards
the "soft" side of things (i.e. a slow ramp-up to the singularity).
> In truth, I'm disappointed with what's being done and I want to know
> why are transhumanists so confident we will reach the singularity.
Some of the rest of us who have spent a great deal of our personal
resources are disappointed as well, *but* there are some interesting
developments that offset this (the Ellison Medical Foundation comes
to mind).
In truth, a real "singularity" requires one of two things:
(a) that a majority of humans accept that they have to increase
the rate of their self-evolution and act accordingly; or
(b) that an independent, self-evolving, unconstrained AI is developed.
I don't view (b) as a good alternative for humanity unless Eliezer's
efforts at a "friendly" AI are successful.
We have made progress in astrophysics but it is a very slow journey.
> It's true breakthroughs have been made in
> biology and medicine, such as the Human Genome Project, but, shit, we
> haven't even cured AIDS, how can we expect to cure aging anytime soon?
Two *very* different problems (I must stress this). HIV is a problem
involving a virus whose genome replication mechanism is designed to be
sloppy. Aging is a problem involving a genome replication mechanism
designed to be increasingly accurate (as the longevity of the species
increases). That is not to say that accurate genome replication
prevents aging (there are a host of other problems one has to solve
to prevent aging) -- but inaccurate genome replication cannot help
but contribute to aging. (Obviously, if one has evolved a genetic
program with "minimal" aging and that program becomes corrupted,
the impact is most likely to be detrimental.)
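To make the contrast concrete, here is a toy calculation (in Python;
purely illustrative, and the per-copy error rates are made-up round
numbers rather than measured values) of how replication fidelity
compounds over repeated copies:

# Toy sketch: probability of a lineage staying error-free after many
# genome replications, assuming independent errors on each copy.
# The rates below are hypothetical round numbers chosen only to
# contrast a "sloppy" replicator (HIV-like) with a high-fidelity one.

def fraction_error_free(per_copy_error_rate, n_copies):
    return (1.0 - per_copy_error_rate) ** n_copies

sloppy_rate   = 1e-2   # hypothetical: sloppy replicator
faithful_rate = 1e-8   # hypothetical: high-fidelity replicator

for n in (10, 100, 1000):
    print("after %4d copies: sloppy %.3g, faithful %.8f"
          % (n, fraction_error_free(sloppy_rate, n),
             fraction_error_free(faithful_rate, n)))

The sloppy replicator is almost guaranteed to drift within a few
hundred copies, while the high-fidelity one is essentially unchanged;
HIV exploits the former, long-lived species depend on the latter.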
> Also, I'm disappointed with the way science is made in the academia with
> personal egos rising above finding the mechanisms of aging. If we want to
> cure aging, we need to work together, but not many do that.
Not completely true. Since I'm not in "academia" I can function somewhat
independently of that framework. That has allowed me positions on both
the AGE and ExI boards of directors, so I can exert some influence and
have done so when the opportunity has been available. Aubrey de Grey
is performing a similar function with regard to the 10th IABG Congress.
But my observations (from a decade or so of involvement in aging
research) are that the slow progress is very much related to a lack
of belief that the problem can be solved, and therefore to a lack of
funding and/or of qualified scientists going into the area (these
are obviously related).
> In the end, I would say that the basis for the singularity is Moore's law,
> for it allows not only faster computers but also developments in DNA
> sequencing and a host of other possibilities.
Yes, obviously so.
> Yet I'm sure there are physical limits for Moore's law. When will we
> reach them? Can you be sure Moore's law will continue for long enough
> to develop a smarter-than-man artificial intelligence?
The hard limits were discussed in great detail in Drexler's
Nanosystems (Sections 12.8 and 12.9, and all of the discussion
leading up to them).
You can get ~10^21 OPS, roughly six orders of magnitude greater than
our best estimate for a brain at ~10^15 OPS. Moreover, the
nanocomputer probably occupies a volume of ~1 cm^3 compared to the
brain's > 1000 cm^3, so propagation delays are significantly shorter
and "intelligence" may be significantly greater.
Or in other words -- a nanocomputer is potentially a *lot* smarter
than humans (on the order of millions to billions of times).
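A quick back-of-the-envelope check of those numbers (a sketch only;
the figures are the rough estimates quoted above, not precise
measurements):

# Rough comparison of the Nanosystems nanocomputer limit with common
# estimates for the human brain (all figures are the rough estimates
# given above).

nanocomputer_ops = 1e21     # OPS limit from Nanosystems 12.8-12.9
brain_ops        = 1e15     # rough estimate for a human brain

nanocomputer_vol_cm3 = 1.0      # ~1 cm^3
brain_vol_cm3        = 1000.0   # > 1000 cm^3

print("raw speed ratio:   %.0e" % (nanocomputer_ops / brain_ops))    # ~1e6
print("volume ratio:      %.0f" % (brain_vol_cm3 / nanocomputer_vol_cm3))

# Signal paths scale roughly with linear size (cube root of volume),
# so ~1000x less volume means roughly 10x shorter propagation paths.
print("linear-size ratio: %.1f"
      % ((brain_vol_cm3 / nanocomputer_vol_cm3) ** (1.0 / 3.0)))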
When we will reach these limits depends to a large extent on economic
factors. I have received information that U.S. venture capital
investment declined from $90 billion to $19 billion between 2000 and
2002, so if you want to evaluate technology and business development,
you have to factor economics into the process. But I think it is safe
to assume that we *will* push this envelope (i.e. to the limits of
Moore's Law) within this century, perhaps within the next few decades.
(This is, in part, Ray Kurzweil's message, but I am adapting it a bit to
allow for my own perspectives.)
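For the timing question, a small sketch of the arithmetic (my own
assumptions only: a hypothetical ~10^10 OPS for a high-end 2003
machine, and a few plausible doubling periods):

# How many Moore's-Law doublings from a circa-2003 machine to the
# ~1e21 OPS limit discussed above, and what that implies in years.
import math

start_ops  = 1e10   # hypothetical: rough OPS of a high-end 2003 processor
target_ops = 1e21   # the Nanosystems-style limit

doublings = math.log2(target_ops / start_ops)   # ~36.5 doublings

for doubling_time_years in (1.5, 2.0, 3.0):     # assumed doubling periods
    years = doublings * doubling_time_years
    print("doubling every %.1f yr -> roughly %3.0f years (%.0f doublings)"
          % (doubling_time_years, years, doublings))

With those assumptions the limit lands somewhere between roughly 50
and 110 years out, which is what drives the "within this century,
perhaps within the next few decades" estimate above.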
> When I found transhumanism, already several years ago, I thought it set an
> optimistic but plausible scenario. Now, I'm starting to wonder if we're not
> just another cult willing to sacrifice reality towards a fairer image of
> the world.
We are *definitely* in the "optimistic/plausible" scenario.
I and others have, over the last decade, been willing to put
"money on the line". There is clearly visible progress
(where in the mid-'90s there were essentially two significant
companies involved in aging research, now there are more than
a dozen). I see progress on other transhumanist fronts as well
(see recent NY Times articles on the installation of solar
power systems at the White House).
Joao, do not lose hope -- it is just that progress sometimes takes
much longer than we would like (or expect).
*But* please do not count the singularity out. It depends in large
part on how we develop its foundations.
Robert