"Bryan Moss" <email@example.com> writes on Lanier's argument:
> Autonomy has no technological benefit. That is, we
> cannot *use* autonomous devices. But the approaches we are
> currently taking towards the creation of autonomous devices
> are technological. For example, *evolving* a brain is not
> science--it provides no understanding--and, if the goal is
> human equivalency, it's of no technological benefit either.
> (Imagine an automobile manufacturer that has not only fully
> automated their production and design process but only
> produces passenger-less self-driving vehicles.) Given that
> AI has no apparent merit either as science or technology
> there must be another reason for adopting it as a goal, and
> that reason is the quasi-religious "Cybernetic Totalism".
I agree with this. Why uplift animals or make alife? Besides the pure
ego gratification of "because they are/aren't there", there is likely
a kind of spiritual drive to be an agent of evolution. I certainly
feel that drive myself.
But Lanier is not just attacking the idea of making AI (which is much
broader than autonomy - a smart house or software agent may not be
totally autonomous but can still be *very* useful); he is attacking
the overall idea that Something Big is about to happen due to
technological change, and some of the assumptions about information
that many make in relation to this.
> I think Lanier's mistake, like so many critics of technology, is the
> failure to recognise that technology does not create new problems; it
> merely magnifies existing ones.
Well, I would think most of them actually imply that it does both.
What they fail to notice is that technology also solves many problems
and can be directed in useful ways.
> In the case of AI it's that old favourite
> "what are we and what are we doing here?" You can't
> question the purpose of fully autonomous systems without
> also questioning the purpose of our own society.
> It may also be that Lanier is using AI's questionable
> application as a user interface to challenge the idea that
> AI could become integral to society rather than simply be
> used to automate facets of society into a kind of
> disconnectedness (as with my example of the automobile
> manufacturer). If we want AI to form a part of society and
> do not simply accept AI as our mind children and "hand over
> the reins" we have to find a niche in society that involves
> interaction rather than automated isolation. By questioning
> this niche Lanier adds merit to his argument.
It might be interesting to explore what kinds of interfaces to AI
would be useful. Would an animistic interface (everything is aware and
sentient to some extent) be useful, for example?
> > [...] we should see how we can polish up transhumanist
> > thinking in order not to fall into the traps he describes.
> I think Lanier makes some good points that are difficult to
> find in what is essentially a very confused essay. The main
> thing we should take away from this is the questionable
> nature of AI as a goal, not because it is necessarily a bad
> goal but because, for me, it illuminates a bigger problem.
> After all, what is society but a fully autonomous system?
> And what external purpose does that system serve? For me
> Lanier's essay was an affirmation of my own doubts about
> transhumanism. Without a purpose we cannot architect our
> future, we need to discover the precise things we wish to
> preserve about ourselves and our society and only then can
> we go forward. In my mind it is not enough to say "I want
> to live forever"; "I" is simply shorthand, I want to know
> what it is about me that I should preserve and why I should
> preserve it. I think these problems run deep enough that
> we'll need more than polish.
My thought also. I would gladly see more discussion of questions like
this, the philosophical foundations of transhumanism, both on this
list and among transhumanists generally. Obviously there is more to it
than wanting to live forever and playing a bit with computers.
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
firstname.lastname@example.org             http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:17 MDT