From: Anders Sandberg (asa@nada.kth.se)
Date: Thu Jun 19 2003 - 08:35:20 MDT
On Wed, Jun 18, 2003 at 07:23:15PM -0700, Rafal Smigrodzki wrote:
> >
> ### I understand that Bayesian networks can be used to approximate the
> behavior of both low-level motor/sensory neural networks, and high-level
> semantic networks.
This is roughly the theme of my thesis. I don't think the brain
actually is a Bayesian neural network (slightly different from
the Bayesian networks usually discussed), but the idea that it
approximates statistical inference seems very fruitful.
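To make "approximate statistical inference" a bit more concrete, here is a
toy Bayes update of the kind a sensory circuit might be implementing
implicitly. This is only an illustrative sketch; every number in it is
invented for the example, not taken from any model in my thesis:

  # Toy sketch: a single Bayesian update. All numbers are assumptions
  # picked for illustration.
  #
  # Hypothesis H: "an edge is present at this retinal location".
  # Evidence  E: "the local contrast detector fired".

  p_h = 0.1               # assumed prior probability of an edge
  p_e_given_h = 0.8       # assumed hit rate when an edge is present
  p_e_given_not_h = 0.05  # assumed false-alarm rate

  # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
  p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
  p_h_given_e = p_e_given_h * p_h / p_e

  print("posterior P(edge | spike) = %.2f" % p_h_given_e)  # prints 0.64

A neural population does not have to represent these probabilities
explicitly, of course; the point is only that its input-output behavior
can approximate the same computation.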
> Ben Goertzel is working on a system using probabilistic
> inference on "atoms" (simple nodes) and "maps" (mappings of
> ensembles of atoms) to achieve AGI. He believes that 3 to 5
> years might be sufficient to build a system capable of
> supervised learning with real-world input (e.g. scientific
> journals). This is a bit like evolution but more amenable to
> oversight.
I think the key here is oversight, not perfect control. My
problem with many AI proposals is that they are based on
absolutist visions of strict logic, planned safety and other
top-down concepts. Meanwhile, much neural network and evolution
work is far too undirected and hopes for a "then a miracle
happens" self-organized breakthrough once some unknown condition
is met. Neither approach works well on its own; top-down design
is good for creating overall architectures, but learning and
concept formation are very much bottom-up.
This is why I think we need multiple approaches both in designing
and rearing AI, and in preventing different failure modes.
> >> Human-level computing power is going to be available to
> >> SingInst in about 15 years, so we can expect the recursive
> >> self-enhancement of the FAI to take off around that time.
> >
> > I'm not convinced of this. You're basing this on Moravec's
> > extrapolation to the whole brain of the computing power needed to
> > replace the retina? I think that's a pretty rough model. The retina
> > lacks much of the complexity of the cortex.
>
> ### The estimate of the number of synapses and their firing rates is taken
> directly from histological and neurophysiological analysis of
> the cortex. The equivalence coefficient between computing power
> needed to emulate the retina and the number of retinal synaptic
> events is also a direct observation (subject to uncertainties
> of whether the retinal emulator is really emulating the
> retina). The main leap of faith is applying the FLOP/synapse
> equivalence coefficient from the retina to the cortex, but then
> the increased complexity of the cortex is accounted for by the
> enumeration of synapses. We have no reason to believe that the
> cortical synapses are more computationally efficient than the
> retinal ones (they have similar structures, similar
> evolutionary pressures for optimization). In fact, since
> retinas have been around much longer than the prefrontal
> cortex, the former might be better optimized than the latter.
The Moravec calculation is very rough, but many other
calculations do tend to cluster a few orders of magnitude away.
It doesn't matter much, since under the assumption of exponential
growth computer power will easily close that gap.
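To spell out the kind of arithmetic Rafal is describing, here is a
back-of-the-envelope sketch. Every figure in it is a round placeholder of
my own choosing rather than Moravec's exact numbers, so treat the outputs
as order-of-magnitude at best:

  import math

  # Rough scaling sketch; all numbers below are assumptions for
  # illustration, not measurements.
  synapses = 1e14          # assumed cortical synapse count
  firing_rate = 10.0       # assumed mean synaptic events per second (Hz)
  flops_per_event = 10.0   # assumed FLOP-per-synaptic-event coefficient,
                           # the factor one would read off a retina emulator

  brain_flops = synapses * firing_rate * flops_per_event
  print("estimated brain throughput ~ %.0e FLOPS" % brain_flops)  # ~1e16

  # How much a few orders of magnitude matter under exponential growth:
  current_flops = 1e12     # assumed computing power available today
  doubling_years = 1.5     # assumed doubling time for computer power
  doublings_needed = math.log(brain_flops / current_flops, 2)
  print("gap closed in ~ %.0f years" % (doublings_needed * doubling_years))

Shifting any of the assumed inputs by an order of magnitude only moves the
crossover date by about five years, which is why the exact coefficient
matters less than whether the exponential growth itself continues.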
It is a much more serious issue whether self-enhancing AI is
possible (a very interesting theoretical issue at the very least,
and quite likely of practical relevance).
Also, we might want to examine the behavior of Moore's law and
the computational architectures needed for AI. They might not go
in the same direction, or computer power increases might peter
out before they reach the right level.
We have some plans at my research group for a project that would
result in a neural network with mouse cortex computational power.
Very cool, but we still don't have any (clear) idea of how to
divide the network into parts and link them to get mousy
intelligence. It is not unlikely that we will still not know how
to do that even when we have far greater computing power.
> In any case, while I am not a totally gung-ho near-term
> Singularity apostle, I do think the UFAI should figure very
> high on Brett's and most other people's lists of future
> problems.
I think people overestimate the dangers of superintelligence and
underestimate the dangers of subintelligence.
Imagine software that can read scientific text, parse it and
produce stuff based on it. Already that little system would
revolutionize how much science is done - and flood the journals
with semi-automatically written papers on all subjects. Add the
ability to run simulations, and at least some disciplines might
get clogged up by academic goo - not using these papermats would
make your impact rating go down compared to those who do, useful
papers would be hard to find amid the piles of autogenerated
papers copying from each other and so on.
In the long run people would manage. Peer review would change
(automated reviewers as a first step?), impact ratings would be
viewed differently, and people would find ways of using this kind
of AI to read literature, do research and write more efficiently.
But the transient can be rather bad, and could mess up an
academic discipline a lot. Now imagine the same applied to
secretaries, call centers and spam.
This little example is hardly a world-ender. But it shows that
even a fairly low level of intelligence, when automated, can have
tremendous effects. It is a bit like replicating technology such
as computer viruses; it can quickly grow in power even if it is
basically a simple program with little smarts. Don't worry about
the world being taken over by Tetragrammaton 2.3, worry about the
Tamagochi Rebellion.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y