I am starting a new thread here, beginning with an acknowledgement of the
earlier messages that gave rise to this one. The main subject of this thread
is consciousness - human or otherwise - and the hardware needed for it.
Earlier, this exchange took place:
Dan Fabulich wrote:
Michael LaTorra mused:
> I doubt that our computational devices are capable of hosting human-level
> information processing yet. That being said, there is a part of me that
> hopes that somewhere, on a Net-connected machine, Sasha's mind still
> lives.
True... but isn't it the case that the Buddhists believe that one can
reincarnate into a lot of things that don't have the processing power
to host a human? (grasshoppers, etc?)
-Dan
Mike replies:
The Buddhist (and Hindu) belief is that incarnation as a human being is the
result of progressive karmic improvement from previous births as lesser
animals. Regression back to an animal state after having incarnated as a
human being is extremely rare. More common is reincarnation as a human being
at a lower social level (hence, the Hindu concept of castes).
It is important to understand, however, that what is incarnating from life
to life is NOT a unitary, monadic, indivisible entity. The Buddhist doctrine
is called "anatta," which translates as "no soul." Rather, as the Buddhists
have it, there is an energy that moves from one bodily lifetime to another,
in a manner not unlike the physics of one billiard ball impacting another:
the first ball transfers its momentum to the second, stops "dead," and the
second moves on.
Furthermore, this transfer is not limited to a single "target" ball.
Buddhists believe that the energy of a particularly advanced individual can
reincarnate as several simultaneous subsequent individuals, each of whom
embodies a part of the power (energy) of the incarnational predecessor. So,
for example, Jamgon Kongtrul, a Tibetan Lama who lived earlier in the 20th
century, was believed to have entered into 5 concurrent subsequent
reincarnations.
By Buddhist logic, then, human consciousness could depart one bodily vehicle
and migrate to another suitable vehicle (or vehicles) human or otherwise.
Therefore, as the Dalai Lama said (see my first posting) someone who worked
closely with computers might be able to reincarnate into one that was
sufficiently complex to host his or her consciousness.
Questions: How tightly coupled would the components of a computing system
need to be in order to host a human consciousness? Could a distributed
system do it? Could the Internet do it? If so, at what frequency would
thoughts occur in such a host system? If a human consciousness had migrated
to the Net, how would we know?
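To put rough numbers on that frequency question, here is a back-of-envelope
sketch; the latency and message-count figures below are assumptions chosen
purely for illustration, not claims about any real system. If each "thought"
of a Net-distributed mind required some number of serial message exchanges
between machines separated by typical Internet round-trip times, the thought
rate would be bounded roughly like this:

# Back-of-envelope estimate of how fast a "thought" could cycle in a mind
# spread across Net-connected machines.  All figures are illustrative
# assumptions, not measurements.

ROUND_TRIP_SECONDS = 0.1            # assumed wide-area round trip (~100 ms)
SERIAL_EXCHANGES_PER_THOUGHT = 50   # assumed serial exchanges per "thought"

seconds_per_thought = ROUND_TRIP_SECONDS * SERIAL_EXCHANGES_PER_THOUGHT
thoughts_per_second = 1.0 / seconds_per_thought
print(f"Seconds per thought: {seconds_per_thought:.1f}")
print(f"Thoughts per second: {thoughts_per_second:.2f} Hz")

# Rough comparison: a brain-wide cycle of neural activity is on the order
# of tens of milliseconds (roughly 10-100 Hz).
BRAIN_CYCLE_SECONDS = 0.05          # assumed ~50 ms biological "cycle"
print(f"Slowdown vs. a brain: ~{seconds_per_thought / BRAIN_CYCLE_SECONDS:.0f}x")

Under these made-up numbers such a mind would think on the order of once
every few seconds, perhaps a hundred times slower than a biological brain.
The point is only that the tightness of coupling (latency) directly sets the
clock rate of whatever consciousness such a system could host.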
...That's my reply to Dan.
Next, I'd like to respond to what Bonnie says below (excerpted from her
longer post):
From: "altamira" <altamira@ecpi.com>
. . .My own experiences lead me to suspect that the fragments of
consciousness we
think of as our minds are capable of operating at only a small fraction of
their actual potential while experienced within the human body.
I noticed some of your recent posts referring to the "Singularity" and
followed some links to discover which singularity this might be. I find
that after reading a couple of the essays my take on the idea of
greater-than-human intelligence is different from that of the authors of the
essays.
These authors seemed to view the coming of super-human intelligence as a
cause for alarm or sadness, rather in the same way they view the ending of
the life of a human body. I see another possibility: the opportunity for
the expression of a greater fraction of the "ultimate consciousness" than is
possible from within a human body.
I had not thought to bring up the possibility of human consciousness apart
from the body on this list, since most of the participants seem to see the
mind and body as inseparable and to think of physical immortality as a
prerequisite to mental immortality. However, since Sasha's death, there's
been more talk of the possibility that consciousness can exist apart from a
physical container for some length of time. . . .
Bonnie
Mike responds:
Your notion of "fragments of consciousness" is consonant with what I wrote
above regarding Buddhist beliefs.
The Singularity might be defined as the rise of super-human Artificial
Intelligence that thereafter grows ever more capable at an accelerating
rate. Whether this will be cause for "alarm and sadness" rather than an
"opportunity for the expression of a greater fraction of the 'ultimate
consciousness'" is surely an open question. Opinions vary widely.
Among those who are positively in favor of the Singularity are Eliezer and
his fellow Singularitarians on this list. They believe that the Singularity
will offer the opportunity to eliminate poverty, rationalize society, and
defeat death via uploading. (Eliezer, please step in to correct, or add to,
this summary description.)
In the neutral middle is Vernor Vinge, professor of computer science at San
Diego State University, science fiction author, and the originator of the AI
Singularity idea.
He believes that we cannot know what a superior intelligence would think or
how it would be motivated to act. (The AI god works in mysterious ways.)
Among those who are negatively disposed to super-human AI (as distinguished
from a controlled, or "leashed," AI that remains under human control) are K.
Eric Drexler, father of nanotechnology, and others. (Among those others is
fringe physician-scientist John Lilly, whom I consider to be a special case.
At some future time I would like to discuss his ideas at length.)
My own position is not fixed but varies over projected future time. If we
can harness and control AI systems, then I would place myself with Drexler.
I'd like to keep a lid on AIs until we can cyborgize with such systems, in
order to ensure that they ARE us rather than competitors to us.
But it may prove impossible to keep a leash on AIs. Then the Singularity
would unfold according to its own logic. In the near term after the
Singularity (if it happens at all), I would side with the optimists. But
over the long term of centuries, I would agree with the pessimists who
believe that the AIs would first cage and then eliminate Homo sapiens. After
all, isn't this what superior species do? Did not Homo sapiens do something
similar to Homo neanderthalensis?
Questions: Is the Singularity good? Is it bad? Is it inevitable either way?
And are we even qualified to pass judgment on these questions?
Regards,
Michael LaTorra