Re: Otter vs. Yudkowsky: Both!

From: Technotranscendence (neptune@mars.superlink.net)
Date: Mon Mar 13 2000 - 08:11:42 MST


On Sunday, March 12, 2000 11:47 PM Eliezer S. Yudkowsky <sentience@pobox.com>
wrote:
> I'm totally on board with both. In fact, I've explicitly proposed doing
> both. What worries me is the part where Otter wants to outlaw AI. You
> know and I know that as long as humanity survives, the creation of AI is
> inevitable; that doesn't mean that the creation of AI can't be delayed
> until nanocatastrophe either wipes out most of Earth's population (which
> I'd view as a partial loss) or all life in the Solar System (a total
> loss).

Those are not the only two catastrophe scenarios I can think of. Another
is, of course, dying. If you die in a car accident today, before suspension
or uploading technology is available, then for you the Singularity (choose
your flavor and brand) might as well not have happened. :(

There are also more global catastrophes: asteroid impacts, nearby
supernovae, NBC (nuclear/biological/chemical) warfare or terrorism, and a
new ice age. On the last, see the July-August 1999 _American Scientist_
(not to be confused with _Scientific American_) on rapid climate change --
e.g., an ice age arriving within a few decades -- driven by shifting ocean
currents.

And other roadblocks besides outright legal proscription can slow the
process down -- as Eliezer already hinted. These can be anything from an
economic downturn to research going in a fruitless direction. I don't want
to sound paranoid... :)

Speaking of rapid climate change, the aforementioned article discusses a
model in which shifts in ocean currents can trigger rapid cooling or
warming. If we can find similar mechanisms -- assuming the model is a good
one -- on Earth or other planets, they might give us a means of controlling
global climates.
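
I can't reproduce the article's model here, but the classic toy for this
kind of mechanism is Stommel's 1961 two-box model of the thermohaline
circulation. Below is a quick Python sketch -- my own simplified reduction,
with purely illustrative numbers, not anything from the article -- showing
how the same forcing can support two very different stable circulation
states:

# A minimal sketch of a reduced Stommel (1961) two-box model of the
# thermohaline circulation. S is the nondimensional salinity difference
# between the polar and equatorial boxes; the temperature difference is
# pinned at 1 (fast thermal relaxation); flow strength is q = 1 - S;
# F is the freshwater forcing. All values are illustrative.

def dS_dt(S, F):
    # Forcing F builds up the salinity anomaly; the circulation flushes
    # it out at a rate proportional to |q| = |1 - S|.
    return F - abs(1.0 - S) * S

def equilibrate(S0, F, dt=0.01, steps=20000):
    # Forward-Euler integration to a steady state.
    S = S0
    for _ in range(steps):
        S += dt * dS_dt(S, F)
    return S

F = 0.20  # forcing in the bistable range 0 < F < 0.25
for S0 in (0.1, 1.5):
    S_eq = equilibrate(S0, F)
    print(f"start S={S0:.1f} -> steady S={S_eq:.3f}, flow q={1.0 - S_eq:+.3f}")

Two different starting states settle into two different stable equilibria
under the same forcing: a strong overturning (q > 0) and a weak, reversed
one (q < 0). Nudge the system across the threshold between them and the
circulation -- and with it the regional climate -- flips abruptly, which is
the flavor of rapid change the article describes.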

Long lives to you all!

Daniel Ust
http://mars.superlink.net/neptune/


