From: paatschb@optusnet.com.au
Date: Sun Jul 20 2003 - 11:26:05 MDT
Bryan Moss writes:
> I say the following with complete confidence: there will
> be no Yudkowskian Singularity, the copy is not the
> original, the creation of the first assembler will not cause
> an immediate revolution in manufacturing.
I'm a singularity sceptic. What do you take to be the
Yudkowskian Singularity? I am curious whether it overlaps
with Eliezer's view of the Singularity.
> These
> are science fiction pipedreams. They're not even very
> good ones. Further, we need to "deconstruct" our
> relation to the computer revolution. We're on
> the other side now, I mean this in complete seriousness,
> the computer revolution is played out. All that is left is
> for computers to recede; not in the hip, ubiquitous
> technology "computer in my doorknob" sense but in the
> "everybody stopped caring" sense.
Well, leaving aside the impact of computing in genomics
and proteomics, I am in the middle of reading Kevin
Warwick's book 'I, CYBORG' (pub. 2002).
According to the blurb, Kevin Warwick (professor of
Cybernetics) has been using himself as a guinea pig,
receiving, by surgical operation, technological implants
connected to his central nervous system.
I haven't yet got to the part where he succeeds or fails
in sending a nerve signal over the internet to a partner,
but it is clear that what is known in detail about
the human nervous system is limited, and that there may
be exciting possibilities in medicine and in adding additional
sensors (perhaps as mitigators for those who have
disabilities) using electronics and new uses of computing.
It seems to me that the biological revolution will ensure
that people will "not stop caring" about computing in the
near future.
Then there is the matter of quantum computing and
quantum encryption. I think it unlikely that privacy or
the underpinnings of the global financial system will become
a matter of indifference, and so in another way people
will care about computing.
> This may mean they'll take different shapes. But that's
> it, that's your revolution. Now it's time to look back
> and ask ourselves what was real and what was hype.
> A lot of it was hype. But that's our origin and we need
> to pick it apart to understand where we came from.
> Artificial Intelligence, of the CS kind, of the kind that
> assumes we can design Minds (not brains) through
> some sort of hokey self-reflection, is the sort of hubris
> we must now only find humour in.
Well, I'm sceptical, but I'm not laughing. What's the joke?
Are you suggesting Artificial Intelligence is impossible?
> (Which is not to say computer simulation won't play a
> big role in the brain sciences or any other science, but
> it's a tool now, nothing more.)
Okay, this answers my points about biotech enabling.
But much is being said when a quantum computer or an
algorithm for determining how proteins fold is described
as "just a tool".
> Even if you bracket the three "ultratechnologies" I
> mentioned only as a thought exercise, it's interesting to
> see how the horizon changes. [With]out superintelligence,
> [with]out the technological Saviour-God, there is no wall
> over which we cannot see.
This criticism is good imo.
> Without uploading, we're going to die unless we
> fight for it.
I concede this but with the reservation that cryonics
*might* offer some chance. Though I've never yet heard
a satisfactory operational explanation for the reassembly
process.
> Curing aging is only a first (incredibly
> difficult) step, the way we value our lives will have
> to change, the medical practise will have to change.
> Nobody wants to live to 400 and slip in the bath,
> crack their head open on the faucet. It's an entirely
> different attitude towards death and we have to sell
> it to the world.
It is not that simple. There are important questions of
triage. The world does not make decisions, individuals
do, and individuals confront mortality personally.
A solution for everyone aged 40 or less now, if fixed
as an operational objective, would ostracise the over-40s.
There may be *no* effective political place to
draw the line for *any* organisation that starts with
the priority of confronting individual mortality first,
unless that organisation decides to pursue its goal
deliberately leaving some part of the human species
outside its considerations.
> Without drexlerian nanotechnology
> (and I speak more of the supposed time frame than
> the technology itself) there is no sudden "fix" for the
> poor, the starving. We need to engineer crops,
> educate people, provide clean water. None of this
> is going to be easy. We're not going to get off-world
> soon either, so, yes, we're stuck here amidst the war,
> the famine, those evil fundamentalists.
For now yes.
>
> > People who complain about our slow progress,
> > question whether things will work, point out flaws
> > in existing plans, etc., are the real heroes of tomorrow.
> > They are the engineers of the future. People who
> > don't know enough technology to see the flaws, or
> > who are so optimistic that they don't see any need
> > to address the flaws, are the people who are delaying
> > progress. Dynamic Optimism was never intended to be a
> > faith-based position.
Excellent point. Belief *itself* strongly suggests a form of
pathology. Belief and reason are incompatible: where one
is happening, the other is not.
> > We were supposed to be optimistic that everything
> > was possible so that we would continue working
> > toward a solution while others had long since given up.
> > Optimism should be an excuse to work harder for
> > the future, not an excuse to sit back and do nothing.
Optimism as pragmatism without abandoning hope.
> Yes, and as well as realising that this stuff requires
> hard work, we need a *critical* approach to technology.
> We need to take our heads out of the sand, lose the
> ridiculous "luddite" talk,
This "luddite" talk could have an important political
purpose. We should remember that it is up to those
who wish to change the status quo in democracies that
have the burden of making the case. The luddites are
the conservatives. They don't have to work so hard
to sell their message - its the default.
> and realise that, yes, technology does affect people's
> lives, and that, no, not all technology affects all lives
> in a positive way.
Good.
> Technophilia doesn't cut it.
True.
> Technology is ideological in the strongest sense.
No, this is a non sequitur.
> The telephone has something to say about personal
> space, personal time, about availability, about distance,
> it embodies certain attitudes towards these things.
Bulldust. The telephone is inanimate. Any attitude you
see in a telephone is your projection.
> Technology is not neutral. It meets the world in the
> form of products or government programmes;
Ah, you mean particular instances or embodiments
of technologies, not all technology per se - what is
developed or realised in particular, as opposed to
what is not.
> if
> science has a claim to neutrality, its realisation in
> technology has long since lost it.
This I don't follow.
> And we must always keep in mind that science only
> makes a *claim* to neutrality: universality is the target
> of science, not its immediate achievement. We can
> be critical of science, we must be.
Bollocks. Science has *no* aims. Scientists do. You're
anthropomorphising.
> We must be able to be critical of some research,
> some applications of science, some technologies in
> order to make a fair argument for others.
Yes.
> To take a specific example: with genetically modified
> foods our fight is to move the field of battle from the
> general, from the sweeping accusation, to the specific.
> We have to acknowledge that, yes, there are some
> negative uses here.
You make a good general point, but what do you mean
"we", white man? :-) What is the glue that you imagine
binds *us*? The ExI Principles? A shared desire to
avoid death?
> However, the mistake we don't want to make is to
> focus on the catastrophic. That's the mistake Foresight
> made. Nobody references Drexler because he's the
> grey goo guy.
Actually, Foresight is bigger than Drexler, and it is referenced.
Glenn Fishbine, author of "The Investor's Guide to
Nanotechnology and Micromachines", is no disciple of Drexler,
but he specifically acknowledges the influence of Foresight as a
player in setting the larger policy scene around nanotech.
> You can think on whatever timeframe you choose, that's
> your prerogative, but you can only act on a human
> timeframe.
Good point. And further, action, if it must be coordinated between
multiple agents (to be effective), requires planning, cohesion and
shared objectives. Prioritising these is not easy to agree
on in practice, because we all bring our own egos and timetables
to the table.
> What's funny is, a lot of fears could be alleviated if we just
> admitted how difficult this stuff is.
Some.
> Designer babies? Not likely!
Bad example. Choosing a child sex could be engaging in
designer babies and it has already happened at least once
in an IVF clinic. I understand dwarf couples have also been
alloud to have a dwarf child - which incurred a backlash as
it was considered that dwarfism is accompanies by other
health defects.
Clones as copies of the self - that is a good example of
a furphy.
Sorry for the forensic criticism. I thought this was
an excellent post.
Regards,
Brett