On Fri, 14 Jul 2000, Robin Hanson wrote:
> Robin wrote:
> > Hardware and upload compilation will become cheap enough to
> > profitably run uploads for lower than then-current wages when
> > human labor is still highly valued (i.e., before strong AI),
> > and substantially before most individuals can afford to
> > non-destructively upload themselves.
> >
>
> All true, but all largely beside the point of the above premise/conclusion.
> Either question the premise, question that the conclusion follows from the
> premise, or accept the conclusion.
Objection (1) is to the phrase "will become cheap enough".
Yes, we all know hardware will eventually get very cheap, but
this reminds me of the problem Neil Jacobstein pointed out
at the Foresight SA conference: people "compress" technology
curves. Everyone says "this" nanoweapon will be very bad,
while ignoring the probability that, if parity is maintained,
there will be a balanced nanodefense.
The question is *when* will the hardware and compilation become
cheap enough? Will the rate of advancement of those technologies
exceed the probable decline in augmented-human labor costs?
If not, then the argument fails.
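To make the race concrete, here is a toy crossover calculation
(a sketch only; the cost figures, decline rates, and the assumption
of clean exponential trends are all invented for illustration):

    # Toy crossover model; every parameter below is invented.
    # Hardware cost to run one upload for a year declines at rate h;
    # the augmented-human wage it competes with declines at rate w.
    # Uploads turn profitable when running cost drops below the wage.
    import math

    def crossover_years(cost0, wage0, h, w):
        """Years until upload running cost falls below the wage,
        assuming both decline exponentially; None if never."""
        if cost0 <= wage0:
            return 0.0
        if h <= w:
            return None  # hardware never catches the falling wage
        return math.log(cost0 / wage0) / (h - w)

    # Say $10M/yr to run an upload today against a $100K/yr wage,
    # hardware cost halving yearly (h = ln 2) vs wages falling 5%/yr:
    print(crossover_years(1e7, 1e5, math.log(2), 0.05))  # ~7.2 years

Slow the hardware curve or speed the wage decline and the crossover
recedes or never arrives; the conclusion is hostage to the rates.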
For that to occur, corporations are going to have to see a clear
payoff (your point), a clear path (without lots of potentially
sticky legal or investor hurdles, as Hal and I have mentioned),
and be convinced that it is doable for an amount of money, and
within a time frame, such that the market environment doesn't
shift so much that they would be better off allocating those
resources to something else. Iridium didn't fail on technical
problems so much as on misreading, and misplaying, its market.
All it takes is the greens convincing people that "services
provided by *real* people" are better, and you are in a situation
where the market demand for upload slave labor goes soft.
For both human augmentation and evoloading, you will need to develop
technologies like fine neuron-signal sensing and optical interconnects
between the brain and the computer. Then you need extensions of
current software agents, operating in the computer but under
the control of the human. For destructive copyloads you need
fine-scale molecular mapping, plus algorithms and hardware
that hopefully let the copyload run at human speed or faster.
These are different things and proceed along different paths.
So objection (2) is questioning the part "before most individuals
can afford to non-destructively upload themselves". My original
thoughts were not that we "couldn't" *afford* to upload ourselves,
but that most humans would probably be more comfortable with
a gradual evolutionary process.
If you attempt a copyload with pre-nanotech methods, you have to have
a high confidence level that you can actually make it work before human
labor gets cheap (because living gets cheap) in a post-nanotech era.
Then you have the problem that, if you want it to be really useful,
you are going to have to run it much faster than a human. What
good do 15,000 copies of one accountant do me? You want copies that
learn new skills *faster* than humans in a rapidly changing environment.
The most valuable people to copy would be those who know nanoengineering
or molecular biology, or perhaps great scriptwriters (since the value
of entertainment goes up). If you cruise the emergency rooms and manage
to get a couple of these, you potentially end up increasing the supply of
this type of labor to the point where it cannot remain profitable (after all,
you just make another copy, right?). Does the economy rapidly flip over to
the point where corporations are negotiating deals like "I'll trade you 4000
physicians and 3000 scriptwriters for 400 nanoengineers"? Then once you
have 3000 scriptwriters, aren't you going to need 3000 directors?
The economic bottlenecks that develop because your copies cannot be
utilized effectively are going to diminish their value.
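A back-of-the-envelope sketch of that wage collapse (the demand curve,
the elasticity, and every number here are assumptions, chosen only to
show the shape of the problem):

    # Toy model of copy-flooded labor; all numbers are invented.
    # Under constant-elasticity demand, the wage for a skill falls as
    #   wage(Q) = w0 * (Q / Q0) ** (-1 / elasticity)
    # You keep spawning copies while the wage exceeds the cost of
    # running one, so supply grows until wage ~= running cost and
    # the surplus per copy vanishes.

    def equilibrium_copies(w0, q0, elasticity, run_cost):
        """Workforce size at which the wage sinks to the running cost."""
        return q0 * (w0 / run_cost) ** elasticity

    # 10,000 scriptwriters earning $200K/yr, demand elasticity 1.5,
    # $20K/yr to run one copy:
    print(equilibrium_copies(2e5, 1e4, 1.5, 2e4))  # ~316,000 copies

Past that point each additional copy earns no more than it costs to
run, so the copies stop being a profit center.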
Given the paths I see the technologies on, it seems like the development
of mednanotech for evoloading would arrive close enough to the
development of an understanding of brain processes, and to the
compilation of efficient hardware to execute copyloads, that you cannot
reduce the corporate risks of copyloading projects sufficiently to
justify the expense. That said, I suspect there *may* be people
whose skill bases and narrow-field viewpoints let them justify
the benefits convincingly enough to like-minded VC folks that
at some point it will be attempted. (After all, the VC folks *expect*
most of their projects to fail.) So the question may really be one of
whether there are enough entrepreneurs who could sell this to get
sufficient funding to make your statement come true.
If you put $1B of VC into copyloading while $10B is going into evoloading
(just to pick numbers out of the air, since we probably need another
couple of volumes of Nanomedicine, and pricing & operating costs for
Blue Gene, before we can discuss it realistically) then I would say
your statement is false.
Interesting to think about though.
Regarding Eliezer's comments about AI breakout -- that's what firewalls
and encryption are for. Unless his AI figures out how to build a
quantum computer with more than a few dozen qubits, or someone
stupidly gives it the controls of an unconstrained nanoassembler,
I'm not going to start to worry.
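For a rough sense of scale (the qubits-per-key-bit multiple below is
an order-of-magnitude assumption for Shor-style factoring circuits,
not a precise count):

    # Why "a few dozen qubits" is no threat to encryption.
    # Circuits for Shor's algorithm need on the order of a few
    # logical qubits per bit of the RSA modulus being factored;
    # the multiple of 2 here is an assumed rough estimate.

    def shor_qubits(key_bits, qubits_per_bit=2):
        return qubits_per_bit * key_bits

    for n in (512, 1024, 2048):
        print(f"{n}-bit RSA: roughly {shor_qubits(n)} logical qubits")
    # A few dozen qubits falls orders of magnitude short of 1024-bit keys.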
Robert