From: Anders Sandberg (asa@nada.kth.se)
Date: Thu Feb 27 2003 - 16:02:18 MST
On Thu, Feb 27, 2003 at 02:08:45PM -0800, Robert J. Bradbury wrote:
> On Thu, 27 Feb 2003, Wei Dai wrote:
>
> > Maybe an interesting question to explore is, what if this is the highest
> > level of intelligence that our universe is capable of supporting?
>
> I am reasonably certain I have seen papers strongly correlating
> certain aspects of the brain architecture that may involve
> intelligence (neurotransmitter levels, variants in receptor
> polymorphisms, etc) with pathologies (e.g. depression, suicide,
> etc). Anders probably knows this literature much better than I.
You mean that we couldn't become any smarter because
we would go mad/crash/lose out on some other important
cognitive ability? I don't think there is any really
good evidence for this. Sure, bipolar disorder seems
to be overrepresented among mathematicians and
Asperger's syndrome among computer people, but that is
likely just because people with certain cognitive
peculiarities seek out fields that fit them.
In my own field of memory research there might be some
limitations, due to upper bounds on the speed of
synaptic plasticity, the plasticity/stability dilemma
and the abstraction/representation dilemma. But that
is not really about intelligence per se.
General intelligence seems to correlate with brain size
and processing speed.
> But the consequence is that using current brain architecture
> we may be pressing up against the limits.
Perhaps. I'm not very convinced that the human brain as
it is right now scales well (especially not in its
current energy-hungry, fragile and slow wetware
substrate), but it could be that a similar basic
architecture could be scaled up quite a bit and result
in a "smarter" system. An architecture with a square
kilometer of association cortex, basal ganglia meters
across and fast optronic updates might have the
potential to be very smart - but I wouldn't want to
bet on whether it is by a hundred, a thousand or ten
thousand "IQ points" (or however one measures
intelligence for this kind of system).
> Now, *if* we equate "intelligence" with "OPS" (a stretch,
> but perhaps not too unreasonable)
One could always think in terms of communities. A
Matrioshka brain running a few trillion trillion human
minds in parallel could solve problems in the same way
as human societies or civilizations do: lots of
divergent solution attempts; local criticism and
resource constraints promote the initially promising
ones; these become more widely used and explored by
whole institutions; and so on upward to the highest
hierarchical levels. The advantage is that single minds
may have limited knowledge and resources, but through
their ant-like cooperation and clever institutional
means like markets these can be shared and amplified.
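
As a rough illustration of that hierarchical pattern -
many independent attempts, local selection, promotion
upward - here is a toy sketch in Python. Everything in
it (the bit-string "problem", the group size, the
function names) is an illustrative assumption of mine,
not a claim about how real minds or Matrioshka brains
would actually work:

import random

def fitness(candidate):
    # Toy objective: count of 1-bits, a stand-in for "how
    # good a solution attempt is".
    return sum(candidate)

def random_candidate(n_bits=32):
    return [random.randint(0, 1) for _ in range(n_bits)]

def local_search(candidate, steps=100):
    # One "mind" refining its own attempt by simple hill climbing.
    best = list(candidate)
    for _ in range(steps):
        trial = list(best)
        i = random.randrange(len(trial))
        trial[i] ^= 1  # flip one bit
        if fitness(trial) >= fitness(best):
            best = trial
    return best

def society_solve(n_minds=1000, group_size=10):
    # Level 0: every mind works independently on its own guess.
    attempts = [local_search(random_candidate()) for _ in range(n_minds)]
    # Higher levels: groups "criticize" locally and pass their
    # best attempt upward, until one survives at the top.
    while len(attempts) > 1:
        groups = [attempts[i:i + group_size]
                  for i in range(0, len(attempts), group_size)]
        attempts = [max(g, key=fitness) for g in groups]
    return attempts[0]

if __name__ == "__main__":
    solution = society_solve()
    print(fitness(solution), "bits out of", len(solution))

The point of the sketch is only the shape of the
process: no single solver needs to be very good, yet
the selection hierarchy amplifies whatever partial
successes turn up at the bottom.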
Still, I wonder if there is an upper limit to the
difficulty of the problems that can be solved by a
society of minds of a given complexity.
Sufficiently large problems will of course not fit, but
there could also be problems that cannot efficiently be
handled by groups. Hmm, it would be interesting to find
a way of characterizing the *problems*. Maybe the big
problem isn't how to build intelligence, but how to
characterize it...
> Raises a very interesting question with regard to the Fermi
> Paradox. Is the reason that the universe seems empty
> that all "intelligences" are eliminated once they have
> finished what they were invented to do?
You mean there are a lot of civilizations around, but they are
all busy contemplating printouts with the solution to chess,
the complete set of fundamental equations for physics and the
number 42?
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y