On Wed, 8 Jul 1998, Dan Clemmensen wrote:
Dan, I hope you don't mind the rather delayed nature of my responses. Email is sometimes treated as something to be answered instantly. By contrast, I've found that I quite enjoy an extended discussion like this, often with a considerable break between rounds for reflection.
> Michael Nielsen wrote:
> >
> >
> > I may as well state one of my main interests in this: whether we'll ever
> > have enough computational power to set up a good "breeding ground"
> > for AIs -- an artificial environment optimized to produce artificial
> > intelligence by means of selective pressure. Using proctonumerology, I'd
> > guess a figure of about 10^40 operations ought to be enough.
> >
> I haven't a clue. What does Moravec say, and should we believe him? Are
> these operations per second?
No, I was speaking of the total number of operations necessary to evolve an intelligence. My thinking was that a population of 10^15 entities roughly as complex as a human brain, evolving for 10^10 years or so, with the same speed of operation as the brain (1 Hz, to a first approximation), ought to evolve into something quite interesting. After all, that's not a bad approximation to the human evolutionary path.
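For concreteness, here's the back-of-the-envelope arithmetic as a little Python sketch. The one extra assumption (mine, and only meant to set the order of magnitude) is that "roughly as complex as a human brain" means something like 10^11 components per entity, each ticking at ~1 Hz:

```python
# Back-of-envelope: total operations to evolve an intelligence.
# Figures are order-of-magnitude only; the components-per-entity
# number (~1e11, roughly the neuron count of a human brain) is an
# assumption layered on top of the numbers in the text.

SECONDS_PER_YEAR = 3.15e7

entities              = 1e15  # population size
components_per_entity = 1e11  # assumed brain-scale component count
rate_hz               = 1.0   # ~1 Hz per component, as above
years                 = 1e10  # evolutionary time

total_ops = entities * components_per_entity * rate_hz * years * SECONDS_PER_YEAR
print(f"total operations ~ {total_ops:.0e}")  # ~3e43
```

Within a few orders of magnitude of the 10^40 guess above, which is about all one can ask of proctonumerology.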
Moravec's argument doesn't seem especially sound to me. I forget the numbers -- they're much more modest than those I gave above -- but he seems to be implicitly assuming that the brain is just a big blob of hardware, in the sense that once we have the hardware necessary to support a brain-equivalent device, we'll be able to create one. This neglect of software and of the specific details of architecture seems highly dubious to me.
A point I quite like is that chemical reactions -- the basis for life as we know it -- typically proceed at rates on the order of Hz. There are some fast exceptions, but in any computational system the relevant timescales are usually the slow ones. Solid-state devices can work at gigahertz or more, so human beings have a factor of 10^9 or so over Nature in this regard. Which is a good thing, because Nature has been going at it for ~10^9 years. Nature's advantages due to size and parallelism are still considerable, though -- we need to work on those things.
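To spell out why that factor matters, here's the same point as arithmetic (the 10^9 figures are just the rough ones above; size and parallelism are deliberately left out):

```python
# Serial speed vs. Nature's head start, order-of-magnitude only.
chemistry_rate_hz   = 1e0   # chemical reactions: ~Hz
solid_state_rate_hz = 1e9   # solid-state devices: ~GHz
nature_years        = 1e9   # how long Nature has been at it

speedup = solid_state_rate_hz / chemistry_rate_hz   # ~1e9
print(f"serial speedup: {speedup:.0e}")
print(f"years to replay Nature's serial time: {nature_years / speedup:.0f}")
# Prints ~1 year -- but this ignores Nature's size and parallelism.
```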
> > I don't know how the proposed nanomechanical schemes work. In
> > commonly used electronic schemes, I believe that the depth is O(log N) for
> > random access, but the number of operations is O(N log N), at least in the
> > schemes I'm familiar with. Are you absolutely sure that the number of
> > operations in a nanomechanical addressing system is O(log N)?
>
> Nope, I made it up as I went along, mentally designing the system on the
> fly and probably messing it up severely. I now strongly suspect that between
> the two of us we are thinking about three different things: depth, with O(log N)
> complexity; the number of gates needed, which is O(N); and energy dissipation,
> which will depend strongly on implementation, with O(N) required for minimized
> depth (i.e., depth of 2, the fastest approach) and O(log N) required for a slow
> but energy-efficient approach with one address bit per clock.
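To keep those three quantities straight, here's a toy model of addressing via a binary-tree demultiplexer (my own sketch, not any particular electronic or nanomechanical design): depth grows like log N while gate count grows like N, and energy is a separate question entirely.

```python
import math

def decoder_costs(n_words: int):
    """Toy binary-tree demultiplexer addressing n_words memory words.

    Returns (depth, gates): depth is the number of demux stages an
    access passes through (~log2 N); gates is the number of 1-to-2
    demux elements in the tree (~N).  Energy isn't modelled -- it
    depends strongly on implementation, as discussed above.
    """
    depth = math.ceil(math.log2(n_words))
    gates = n_words - 1  # a full binary tree with N leaves has N-1 internal nodes
    return depth, gates

for n in (2**10, 2**20, 2**30):
    depth, gates = decoder_costs(n)
    print(f"N = 2^{int(math.log2(n))}: depth ~ {depth}, gates ~ {gates:,}")
```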
> >
> > can we drop the 2020 date?
> >
> Sure, it's more fun.
Great.
> > > By contrast, you raise the issue of error correction.
> > > However, even very powerful ECC schemes require less than doubling the amount
> > > of volume needed to store a word.
> >
> > That's not really true. To do fault-tolerant computation, the best known
> > overhead, so far as I know, goes polylogarithmically in the size of the
> > computation, with some (fairly large) constant factor in front of the
> > first term in the polylog factor.
> >
> Here we are again shifting back and forth between ECC for storage and ECC for
> computation.
Not really. Much the same issues come up in both cases, unless your storage is fantastically reliable. Why? Suppose your storage is not fantastically reliable. Then you will need to do a "little" bit of error correction. Unfortunately, that involves gates... which themselves need to be error corrected, and so on, down the line. For this reason, there's not much difference between memory and dynamics, unless the memory is so reliable that no error correction needs to be done at all. I don't consider that likely for the sort of applications I have in mind, so I'm just using the dynamical figure for all error correction.
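To make the "gates all the way down" point a bit more concrete, here's a toy model of concatenated (recursively applied) error correction. The physical error rate, the threshold, and the per-level overhead factor are all invented for illustration; the point is only that the number of levels grows very slowly with the size of the computation, so the total overhead is polylogarithmic, with a large constant out front.

```python
def concatenation_overhead(p_phys, p_th, gates, overhead_per_level=100):
    """Toy model of concatenated error correction.

    Each level of concatenation roughly maps the error rate to
    p_th * (p_phys / p_th) ** (2 ** level).  Levels are added until a
    computation with `gates` locations fails with probability well below 1.
    All parameters are illustrative, not from any specific scheme.
    """
    target = 1.0 / gates
    level = 0
    while p_th * (p_phys / p_th) ** (2 ** level) > target:
        level += 1
    return level, overhead_per_level ** level

levels, blowup = concatenation_overhead(p_phys=1e-4, p_th=1e-2, gates=10**15)
print(f"concatenation levels: {levels}, gate overhead factor: {blowup:.0e}")
# -> 3 levels, overhead ~1e6 for these made-up numbers
```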
> I was addressing storage, wherein we are not using reversible
> logic. For storage, as I recall, the Hamming distance goes up rapidly with a
> modest increase in ECC bits. In conjunction with caching, the bulk memory words
> subject to ECC will be long. The number of ECC bits goes up O(log N) with
> word length and linearly (?) with the number of erroneous bits that can be
> corrected.
I believe it is linear; I'm not sure.
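As a rough check on both scalings, here's a quick calculation of check-bit counts. The single-error case is the standard Hamming-code bound; the multi-error line uses the usual BCH rule of thumb of roughly t*log2(N) check bits to correct t errors in an N-bit word, which does back up the "linear in the number of correctable errors" guess.

```python
import math

def hamming_check_bits(data_bits):
    """Check bits r for a single-error-correcting Hamming code:
    the smallest r with 2**r >= data_bits + r + 1."""
    r = 1
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r

def bch_check_bits(word_bits, t):
    """BCH rule of thumb: about t * ceil(log2(n + 1)) check bits to
    correct t errors in an n-bit word (ignoring shortening details)."""
    return t * math.ceil(math.log2(word_bits + 1))

for n in (64, 256, 1024, 4096):
    print(f"{n:5d}-bit words: 1-error ECC needs {hamming_check_bits(n)} bits, "
          f"4-error ECC needs ~{bch_check_bits(n, 4)}")
```

So long, cache-line-sized words pay only a logarithmic price per extra correctable error, which fits the picture above.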
> > If I can find the time, I'll try to look up some of the papers analyzing
> > errors in reversible computation. As I recall, there were a few in the
> > early 80s, which I've never read.
> There was at least one paper delivered at the 1997 Foresight Conference:
> http://WWW.FORESIGHT.ORG/Conferences/MNT05/Abstracts/Frakabst.html
> It deals with reversible computation.
A very interesting article, thanks for the link. It doesn't seem to say much about errors or error correction, though. Some work has been done on these subjects in the context of reversible computing, but I don't recall the results.
Michael Nielsen
http://wwwcas.phys.unm.edu/~mnielsen/index.html