Fwd: [isml] Building Better Humans

From: Ziana Astralos (ziana@extrotech.net)
Date: Mon Jan 01 2001 - 17:31:54 MST


------- Start of forwarded message -------
From: "DS" <ds2000@mediaone.net>
Date: Mon, 1 Jan 2001 17:13:33 -0500

From MSNBC,
http://www.msnbc.com/news/508404.asp
-
Technology: Building Better Humans
The great decision ahead of us is philosophical: do we want our new machines
to be like us? Or should we be more like our machines? And does it matter?

By Peter McGrath
NEWSWEEK

      Jan. 2001 - In August 1998 Kevin Warwick put his body on the network.
He had a silicon chip surgically implanted in his left arm, enabling a
computer at the University of Reading, England, to track him throughout the
Department of Cybernetics, where he teaches. Over the next nine days, the
computer would recognize him as he arrived at the main entrance, and its
voice box would greet him. It opened his lab door for him. It turned on the
lights. The experiment had a danger: the glass tube containing the implant
could have shattered inside him.

         BUT IT WAS WORTH THE RISK to find out whether an implant could
communicate with a computer. Warwick's next experiment, probably sometime
next spring, will test an implant's ability to shuttle signals between his
nervous system and a computer - a radical step toward linking brain and
machine directly. And after that? Perhaps an implant that does internal
processing, if he can develop one small enough. "The potential for humans,
if we stick to our present physical form, is pretty limited," says Warwick.
"The opportunity for me to become a cyborg is extremely exciting. I can't
wait to get on with it."

        The future enters into us long before it happens, the German poet
Rainer Maria Rilke once said. This is no longer a metaphor. The future is
entering us. We eat genetically modified food. We submit to implanted
devices that go well beyond the familiar heart pacemaker. We tinker with
human tissue, developing artificial bone and skin for transplantation. We
are on the verge of "smart" prosthetics, such as retinal implants that
restore vision in damaged eyes. Such devices will ultimately be networked,
allowing, say, a subcutaneous chip to transmit a person's entire medical
history to a physician far away. Peter Cochrane, the former chief
technologist for British Telecom, envisions a world where chip implants are
commonplace and "as desirable as mobile phones." Rodney Brooks, the director
of the Artificial Intelligence Laboratory at the Massachusetts Institute of
Technology, goes even further. Over time, he says, "we will become our
machines."

HUMAN HYBRIDS
       When the word "cyborg" first appeared in the middle of the 20th
century, it was strictly the stuff of science fiction. Everybody knew you
couldn't put human physiology under mechanical or electronic control. You
couldn't stitch technology into tissue. The idea of, say, an implant of
neural circuits inside the skull - proposed by Brooks as a cure for cerebellum
damage - would have been at best distasteful. The notion of a hybrid human
would have seemed like sacrilege.

         That was then. Today some researchers believe that cyborgs will be
possible within 50 years, or at least that humans will have so many
manufactured parts as to be virtually indistinguishable from cyborgs.
Machines might be so assimilated to us - or we to them - as to raise the most
fundamental questions. As technology fills you up with synthetic parts, at
what point do you cease to be fully human? One quarter? One third? Which
part of us is irreplaceably human, such that if we augmented it with
technology we would become some other kind of being? The brain? Or is the
brain merely a conductive medium, our humanity defined more by the content
of our thought and the intensity of our emotions than by the neural
circuitry? At bottom lies one critical issue for a technological age: are
some kinds of knowledge so terrible they simply should not be pursued? If
there can be such a thing as a philosophical crisis, this will be it. These
questions, says Rushworth Kidder, president of the Institute for Global
Ethics in Camden, Maine, are especially vexing because they lie at "the
convergence of three domains - technology, politics and ethics - that are so far
hardly on speaking terms."
        There have always been dangerous technologies. The 20th century,
which might as well be called the age of industrialized murder, is only the
most obvious example. But technology is upping the ante by creating fields
where benign intentions could lead to brutal outcomes. This was the point of
an article in the April issue of Wired magazine by Bill Joy, the chief
scientist at Sun Microsystems. Under the title "Why the Future Doesn't Need Us,"
Joy described advances in three fields: genetic engineering, nanotechnology
and robotics. The first has created the possibility of gene therapy that
would at least bring diseases like cancer under control. The second is an
umbrella term for technologies that manipulate matter on the ultrasmall
scale of nanometers, or millionths of a millimeter. Nanotechnology would
enable the creation of novel plant species or new viruses. Finally, robotics
will eventually raise the possibility of intelligent and self-replicating
machines whose processes so closely mimic ours that we will wonder if the
only difference between them and us is that our life form is based on
carbon.

THE LIMITS OF SILICON
       All three of these technologies depend on a fourth: the continued
growth in computing power. It's expected that in about 10 years, engineers
will reach the limits of their ability to put circuitry onto silicon chips.
But alternatives to silicon are under development. One involves "biological
computation," in which DNA or another biological molecule serves as a
processing medium. Another derives from the counterintuitive principles of
quantum mechanics: working at the subatomic level, a quantum computer could
exist in multiple states at the same time, enabling multiple calculations in
parallel. So promising are these ideas that Joy expects the computers of
2040 to be a million times faster than today's machines. Putting it another
way, he says, a calculation that now would take a lifetime could be carried
out in half an hour.
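(Assuming a "lifetime" of roughly 70 years, that arithmetic holds up: 70 years
is about 70 x 365 x 24, or some 613,000 hours, and a millionfold speedup cuts
that to about 0.6 hours - a little over half an hour.)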
        At these speeds, technology acquires the inexorability of the ocean
tides. But is human civilization equipped to keep pace? Engineers tend to
associate history with progress. But what in our history inspires confidence
in our ability to channel technology away from destructive uses? "Technology
is evolving a thousand times faster than our ability to change our social
institutions," says Joy.
        But if bioengineering really can "turn off" cancer cells, what's
wrong with that? If nanotechnology can develop devices that extend our
physically active lives for decades, is that a problem? If robots can for
most purposes end our need to do physical labor, should we object? Joy's
answer was that "with each of these technologies, a sequence of small,
individually sensible advances leads to an accumulation of great power and,
concomitantly, great danger." Unlike 20th-century technologies like nuclear
weapons, which were self-limiting because they depended on scarce and
expensive raw materials, the new technologies could produce "accidents and
abuses [that] are widely within the reach of individuals or small groups...
Knowledge alone will enable the use of them." And, he added, "this
destructiveness [will] be hugely amplified by the power of
 self-replication." Nanotechnology could create viruses that reproduce
uncontrollably and blanket the planet. Intelligent robots could make copies
of themselves and eventually displace people. The extinction of the human
species, Joy wrote, is all too conceivable.
        The article was an instant sensation (except in Silicon Valley,
where, Joy says, "dot-bomb has displaced any other topic"). Theologian
Leonard Sweet of Drew University called it "the opening salvo of the 21st
century." Scientists took it seriously, especially those who had worked on
advanced weapons programs; in the famous words of J. Robert Oppenheimer,
they had "known sin." Besides, the writer was not some Luddite from the
Birkenstock-and-health-food set. As the developer of high-end computer
architectures going back more than 20 years, Joy had credibility. Still,
many in the technology sector thought he overstated his case. "The
science-fiction version of nanotechnology is very different from real
nanotechnology," Rodney Brooks said pointedly at the recent Camden
Technology Conference in Maine. Joy himself says that "if your concern is
that somebody will program Frankenstein's monster, your concern is probably
misplaced."

         One problem is that Joy was asking his colleagues to think 50 years
out. That is too far, says Sherry Turkle, a social psychologist at MIT and
the author of "Life on the Screen." Her own work is at ground level,
focusing now on robotic dolls, the first commercial version of which came
out last month from Hasbro. My Real Baby uses an array of sensors to detect
light and motion, as well as when her skin is being touched. According to
codeveloper iRobot of Somerville, Massachusetts, she "knows" when she's
being hugged, rocked and even burped. She can respond in an "emotionlike"
way, with one of hundreds of facial expressions, and can draw on billions of
different sound combinations. She seems to have moods. She requires
"nurture." In other words, she presents herself to a child as a being of
equal dignity and worth. My Real Baby is a seductive machine, says Turkle:
"We have to fear not so much the computers as our responses-not so much that
the computers are going to take over as that we'll become like the
computers... that we'll begin to experience ourselves as machines."

NO SOUL?
       "So what?" say many researchers in the fields of artificial
intelligence and robotics. Brooks, for one, foresees a gradual convergence
of humans and intelligent implants, to the point that the difference between
the two becomes both physiologically and philosophically meaningless. Even
with humans, he says, "there is nothing beyond physical principles going on.
There is no soul, no elixir of life, nothing beyond molecules working
together in the mindless, fixed ways that the physics of their constituent
particles dictates."
        Enter philosophy. The most common definitions of "human" proceed
from an assertion of an intelligence unique to us, but this is precisely
what technology is eroding. Is there a type of intelligence computers could
not acquire? Is, for example, intelligence the capacity to innovate? Is it
the ability to criticize your own projects and values - in computing terms,
the ability to override your instruction set? How about the ability to
create by accident? Great innovations can occur serendipitously: one winner of
this year's Nobel Prize in Chemistry was Hideki Shirakawa, whose laboratory
mistake led to the development of conductive polymers. If cyborgs are less
error-prone than humans, might they be less creative? And what about sheer
fancifulness? Albert Einstein always said that thinking like a child was
what enabled him to hit upon the theory of relativity.
        In the end, the measure of humanity is a philosophical matter.
Philosophy, however, has almost nothing to say about such things. Academic
philosophers spent much of the last century bankrupting their discipline.
With a few honorable exceptions, they preoccupied themselves with questions
of method and nomenclature, such as: under what linguistic conditions would
it be meaningful to ask about the definition of "the human"? As Bernard
Williams wrote in his 1972 book "Morality": "Contemporary moral philosophy
has found an original way of being boring, which is by not discussing moral
issues at all." Newsweek.MSNBC.com

        Who, then, can speak on moral issues? Certainly not the engineers.
Ellen Ullman, a former computer programmer and the author of the 1997 book
"Close to the Machine: Technology and Its Discontents," says that "the
problem is not the technology, which in any event can't be stopped. The
problem is that engineers are making decisions for the rest of us."
Programmers are hired guns, she says, and rarely understand in a nuanced way
their clients' actual work. They are, she says, the last people "to
understand what is an acceptable risk."
        This will be the great decision of the next decade. It goes well
beyond the mere commercial viability of new technologies, though many will
think that is all we need to know. It goes to who we think we are. One way:
every possibility is welcome, no matter how dangerous, because we are a
species that loves knowledge. The other: we don't want to be overcome by
technology.
        But that's what it means to be human. You have a choice. Take your
pick.

       © 2000 Newsweek, Inc.

--
Dan S

------- End of forwarded message -------

Aumentar! Onward,
-------------------------------------------------------
Ziana Astralos                 GCS/MC/IT/L/O d- s-:- a?
ziana@extrotech.net            C++++ U P+ L W+++ N+ w+
                               M-- PS+++ PE Y+ PGP-- t+
   T.E.C.H.                    5++ X R tv+ b+++ DI++++
http://www.extrotech.net       D+ G++ e- h!>++ !r y-
-------------------------------------------------------


