Re: Jaron Lanier Got Up My Shnoz on AI

From: J. R. Molloy
Date: Mon Jan 14 2002 - 10:44:03 MST

From: "Ken Clements" <>
> you do point out the basic undesirability of
> hiring human workers who waste all those neuro cycles by thinking on the job.

...and not only thinking, but plotting and scheming and dreaming on the job.
Present levels of robotically controlled manufacturing, process monitoring,
and system administration signify that just about all industrial and
management jobs can be performed more cost-effectively by autonomous machines.
Although we're still a long way from all-purpose-entirely-robotic-systems
(APE-RS), the immense wealth to be acquired with such machines continues to
motivate competition and basic research in this phase-transitional technology.

> Another problem with claims to immortality is the death through change.
> I have a step-son who is a teenager now, but I well remember playing with
> him when he was three years old. Although he is still alive, that three year
> old is gone. I cannot play with him, and I cannot take his picture. We do not
> call it death, but I have empirical data that that three year old was not
> immortal. If something exists in a few hundred years that comes from who and
> what I am now, it will not be like I am now. In that way I am no more
> immortal than was my three year old, all of technology notwithstanding.

That's partly why I think of cryonics as a neo-Luddite technology that seeks
to preserve (freeze) unenlightened biological entities instead of evolving to
higher capabilities. Once we understand the threshold level of computational
complexity that catalyzes hyper-cognitive enlightenment, the notion of
individual uniqueness disappears along with the notions of vitalism and
phlogiston. Unless humans reach hyper-cognitive sentience, they have failed to
attain their full potential. Once we attain total awareness, we understand
that this direct experience of reality is not specific to a particular brain
(whether carbon-based or otherwise).

Anyway, this is all inconsequential in the face of a global war (inspired by
religious fanaticism) that can destroy all life on Earth.

--- --- --- --- ---

We truly understand reality in proportion as the scientific method accurately
identifies incorrect thinking.

From: "Dossy" <>
> This begs the question of "what is creativity" -- is there money
> in creating machines that are as creative as humans? Would that
> require creating artificially sentient machines? Wouldn't there
> be money in that, then?

Creativity alone is not enough. Even a deaf, dumb, and insensitive system can
create. Nature creates geniuses, but it also creates morons and monsters.
Creativity combined with selectivity makes products worth money. Creativity
does not require sentience, as computer-generated poetry/humor demonstrates.
Artificial sentience has no monetary value for the same reasons that human
sentience has no monetary value. (It's priceless -- just like cat snot.) We
may appreciate and enjoy life, but our sentience has no market value. Money
may buy happiness, but sentience has no economic value. No one is going to pay
us to be hyper-cognitive, sitting quietly in totally aware contemplation. So,
sentience is the ultimate product for end users. Furthermore, we can attain
superlative sentience, while our creative ability may remain undeveloped.
Buddhas are not artists.

> Is there such a notion as a "naturally sentient machine"?

Yeah -- a human being. Duh!

> What
> would you consider a machine/robot that hosts an uploaded human?
> The sentience would be the human's, wouldn't it? Or would it ...

Sentience that is uncontaminated by any particular set and setting, i.e., pure
awareness, is just that -- pure awareness... or we could call it pure
sentience. I like to call it superlative sentience, because it's the ultimate
interface of reality recognizing itself. Superlative sentience is like going
back where you began, and seeing it as if for the first time... only without
the "you."

--- --- --- --- ---

As the scientific method accurately identifies incorrect thinking, inevitably
it displaces thought itself.

From: "Dossy" <>
> Thus, freeing up humans, who do enjoy abilities such as thinking
> and contemplating, to do more thinking and contemplating.

That reminds me of a story:
The Jug Sage of Delhi was an enlightened beggar whose only possessions were a
blanket and a jug. During the day, he would sit by the side of the road with
his jug. Passers-by could put whatever they wanted into the jug (or take
whatever they wanted out of the jug, since the Jug Sage would not prevent
them). The Jug Sage sat in quiet contemplation (with the occasional giggle)
all day long, and was supported by the rest of the population, who gave him
food and water. People treated the Jug Sage like a stray cat or a squirrel in
the park.

Of course this scenario would not play in most American cities, because the
robots (whoops! excuse me, I mean the citizens) have passed laws against
begging and vagrancy.

Anyway, the point of bringing up this story is to show that certain people
would readily accept lives of idyllic idleness (ignoring the uncomfortable
stigma of homeless vagrancy), and let robots take care of their physical
needs. Those who have learned the knack of meditation see this as the ultimate
way of life.

--- --- --- --- ---

As the scientific method accurately identifies incorrect thinking, it
inevitably identifies all thinking as incorrect. At that point, science means
the same as superlative sentience.

This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:34 MST