From: Brett Paatsch (bpaatsch@bigpond.net.au)
Date: Sat Sep 06 2003 - 01:04:28 MDT
Emlyn writes:
> I think you've mistaken me for someone who thinks
> friendliness is doable.
My apologies then. I'm beginning to wonder if anyone
thinks friendliness is doable, and if no one does, perhaps
we'd do well to recognize that.
[Emlyn]
> This is all individuals and small groups acting on their
> own cognisance; nobody else gets a vote at this stage.
Yep: incrementally developed AI, multiple locations,
no rapid take-off effect on the singularity. Similar to
Anders' view (as I understand it).
But the US govt (I'm not picking on it, it is just the big
one at present) can presumably do quite a comprehensive
survey of developing AI projects, and possibly classify
some private-sector work that seeks patents or that it
otherwise becomes aware of (all in the "national
interest" of course - not being facetious here - well,
not completely).
In such a way a well-resourced govt dept might be able
to come up with an AI that is a substantial increment
on whatever else is around, as it is *both* well resourced
and has access to all the public-domain stuff, and need
not share what it (the govt dept) knows.
I think there are some precedents for this in the
development of computers to crack Enigma, and, from
memory, in a sort of RSA-style encryption (see Simon
Singh's The Code Book).
<snip>
> > > 3 - The enslaved AI is simply so damned good
> > > at running a company that more and more decision
> > > making functions are delegated to it over time;
> > > management automation.
<snip>
> > > Note that in this last case, the management
> > > automation software need not even be self aware,
> > > just really good at what it does.
>
> AI might even emerge from cobbled together, ever more
> skilled expert systems and other AI-ish bits and pieces.
> Drexler talks about this, doesn't he?
He may have. I don't think of Drexler as the AI guy. To me
he's "the MNT guy", but he may have.
Regards,
Brett