Robin Hanson wrote:
> Hal Finney wrote:
> >Also, Billy did some calculations a while back, and a low level neural
> >simulation (as you'd presumably require, working with raw brain slices)
> >requires more computing power than will be feasibly available in a
> >non-nanotech world.
> I'm also optimistic that we'll be able to simplify such simulations by
> many orders of magnitude. Yes, we'll have to simulate in great detail
> to make sure we understand how each local thing works. But once we
> understand that, and we can see what details matter and what don't, then
> we should be able to throw away most of the irrelevant detail and simulate
> at a higher level of abstraction.
I agree. However, that process of learning and incremental improvement will
almost certainly take several decades if it is done using extensions of
current techniques. That makes the resulting brain simulation a 2030 - 2050
era technology, vs 2010 - 2030 for basic nanotech manufacturing, sensing and
computing devices. You are correct that nanotech isn't an essential
prerequisite, in principle, but it seems very likely that simple nanotech will
be available before any sort of uploading.
It is also worth noting that technologies such as neural interfaces,
intelligence enhancement and sentient AI look quite plausible in that
time frame. The 2010-2050 era is a tangled snarl of possibilities, in which
nanotech, AI, IE, neural interfaces and uploading are all likely to appear in
some uncertain order, with the course of later developments being heavily
influenced by those that happen to come first.
IMO, a plausible scenario needs to explicitly posit a particular sequence and
timing for these developments, and then extrapolate their interactions and
effects on society from there. Looking at any one technology in isolation will
inevitably give misleading results, because the odds of any of them actually
existing in isolation for long are very low.
This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:34:36 MDT