Robert Bradbury wrote:
> On Wed, 19 Apr 2000, Paul Hughes wrote:
> > Billy Brown wrote:
> > > If we know that some particular set of neurons performs a certain function,
> > > and we know we can duplicate said function with X MFLOPS, we have an
> > > accurate estimate of the computational demands of replacing said neurons.
> > > Period.
> > Huh? Now you are making no sense at all. What "If"?? That's the whole point,
> > no one has yet been able to duplicate any said function. You're caught in
> > some weird self-referential tautology! You might want to clear your head
> > and start reading this discussion from the beginning since the crux of this
> > argument has obviously lost you.
> As Anders pointed out, the degree of "accuracy" you get depends on
> the hardware and software you have available and how many "compromises"
> you have to make to get the results in something less than a month.
> If you are satisfied that the abstractions in your model are sufficiently
> accurate or spend the time to develop very specialized hardware then
> you can run things much faster.
> > So far both Robin Hanson and Anders Sandberg admit that a single transistor
> > isn't going to do the job. That much we all agree on. What exactly are you
> > arguing?
> True, because a single transistor is designed as a switch, while a neuron
> is a multiply-adder. So what you really need is arrays of DSP chips.
> These have been built, I believe Columbia has one that is pretty large.
> But even the largest (10K+ DSP chips?) are still small compared with
> the number of neurons (billions). They compensate for this by being
> somewhat faster (millions/sec vs. 100s-1000s/sec). In terms of #'s
> of transistors you can probably do this with 1K to 100K transistors
> depending on how many "bits" of accuracy you need (neurons are probably
> more than 1-bit devices but probably much less than 16-bit devices).
> *However*, neurons may be relatively "fuzzy" devices, meaning they
> can be more effectively implemented using analog rather than digital
> logic. I think the transistor count drops by 1-2 orders of magnitude
> when you implement multiply-adders in analog rather than digital logic.
> The real issue that I believe is being "poked" at here is whether or
> not various brain functions are well enough understood that you can
> implement hardware+software that produces the function for fewer "IPs"
> than the brain. I'd say there are a whole *host* of things: arithmetic,
> symbolic logic (to a degree), sorting, spell-checking, language
> recognition, language translation (coming slowly), message passing
> and routing, character "recognition", limited reading (OCR), most game
> playing (the only exception left is Go), driving automobiles,
> complex simulations of phenomena from quantum mechanics to weather,
> are among those "brain functions" where software and hardware have
> trumped the brain completely. The areas we have to wrestle with
> now are pattern recognition, common sense knowledge bases, language
> "comprehension" and a few more.
Somewhere along the line the point of my argument has been lost, since I have always
agreed with everything you said above. I am not disputing that we will eventually be
able to *compress* various functions of the brain and neurons such that we can
reduce the overall number of flops required to simulate them. What I am saying is
that Hans Moravec, in both books I've read, never considers that there is any
computation taking place within the neuron itself - which is an amazing oversight if
you ask me. Drug effects on receptor sites alone prove this in abundance.
So, my post began with a question: how much computational power is involved
within the neuron itself? I leave it up to the more knowledgeable people on this list to
determine what that is, and hopefully find a way to reduce the number of flops
required for future simulations. Billy Brown was arguing about carrots while I've
been pressing the point home about apples! :-)
This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:09:38 MDT