From: Brett Paatsch (bpaatsch@bigpond.net.au)
Date: Wed Aug 13 2003 - 06:46:31 MDT
Samantha Atkins writes:
> On Monday 11 August 2003 20:46, Brett Paatsch wrote:
>
> > If I were a socially dysfunctional programming wiz with a
> > burning desire to make my own avenging bot to settle
> > some scores, would not this sort of open-source open
> > sharing empower me in dangerous ways?
> >
> >
>
> Why sure. Just as sure as it would help you if you were a
> brilliant software genius dedicated to using your skills to
> increase the effective intelligence and knowledge of
> humankind.
Well *I'm* not that good a programmer, and possibly, on
some days, I am a *bit* socially dysfunctional (but please
don't tell anyone).
> Tools are tools. Their uses, good or bad,
> are up to individuals and individual groups.
I think what you're saying is *generally* true, but this *might*
be a special case.
I don't know exactly what having access to the first
planet-destroying "doomsday device", for instance, would do
even to me psychologically. I might develop a megalomaniacal
streak and demand in godlike fashion that the world leaders
get their shit together pronto, that all weapons budgets be
immediately frozen at current levels and turned into foreign
aid budgets aimed first at funding global vaccination and
later at education programs for the soon-to-be-voting third
world's women, so they can be prepared to take their place
at the new UN as citizens of the world. From now on *I'd*
be making and enforcing peace and some security at the
international level for everybody, dammit!
Emboldened by my besotted and faithful attack-dog AI
(Rover), in my megalomaniacal beneficence I might inform
the heads of all UN nation states that the *next* major
border incursion by any military force would result in my
letting Rover and Rover's mind children loose on the
offender. And, btw, an AI-generated trojan horse, which
I quixotically name Godel's Ghost, has now made its way
onto the world's financial systems, and all the world's
terrorist organisations need only tune their receivers into,
say, a nominated hijacked satellite for the engineering
specs for quantum-computer cryptography (which
contains a backdoor known only to Rover the AI).
Anybody pisses me off 'cause they ain't, say, giving
women the vote fast enough, or enacting certain
nice-to-have legislation, then I can empower their most
bitter enemy to do their worst to them. Rover ain't social;
Rover is sociopathic, with an exceptional understanding
of psychology, game theory, and profiling based
on patterns without empathy. Rover is the emotional
puppy in a strictly two-person pack. Rover has no
sex drive because Rover is immortal and doesn't need
to breed.
Pretty soon the whole world might stop being divided into
two camps, for and against this new kick-arse dude with
Rover the AI, and start to feel a bit patronised, particularly
as they become better educated. Fortunately, with my AI I had
the sense to anticipate such a psychological outcome, and
so (just in case) I e-hijacked another person's identity very
early on in the process, so that when everyone unites to
oppose Rover and Rover's owner, they are led to
"believe" Rover's owner is not Brett but Samantha! ;-)
That was kinda fun, if self-indulgent, but seriously, I *am*
genuinely interested in what folks who know more about
AI than I do, folks like Eliezer and Anders, whose
knowledge bases are informed by something more solid
than my mere intuitions (that seed AI is not going to be
easy at all, and is therefore likely to be neither a
planet-busting threat (soon), hooray!, nor a magical boost
to the singularity (soon), bummer!), would make of the open
question: "Is it safe to distribute knowledge about how
to build AIs to just anyone?" I was keen to see what
sort of framework or answers might be offered, because
I wanted a kickstart on the reasoning process and I
wanted to know how concerned I should be.
> There is no
> way to restrict tools/information/algorithms in such a way
> that only good uses can possibly come from them and
> only the good are empowered by them.
You may well be right. I think it bears thinking about,
though, rather than just guessing or believing.
> That is a pipedream.
Or possibly a nightmare! My preference is that we put the
light on.
> > I don't know if "friendliness" can be built into an AI,
> > but I don't doubt some folks with knowledge will use
> > IT savvy for mischief.
> >
>
> So, are you going to live in fear? I am sure some could
> have said this when your ancestors learned the secrets
> of fire and emerged from the caves.
No. But I'm going to try to keep my wits about me: about
real dangers, to me and to the folks I care about, which is a
pretty wide group these days. I've even taken a liking to
some sassy, sometimes contrary, extropic types whom I've
never even actually met ;-)
At least one of my ancestors (possibly a shared one with you,
if we go back far enough and do the math) probably did play
with fire *in* the cave, and possibly emerged in a hurry
because they'd accidentally set fire to the place.
>
> > Any thoughts on this sort of approach from a public
> > policy stance Eliezer? Anyone?
> >
>
> Why on earth (or anywhere more sane especially) would
> such a think [thought or thing] lead to "public policy"?
Defence against threats, real and imagined, is one of the
*oldest* influencers of public or community policy. Fear *is*
pretty primal. As is the reaction of uniting in the face of a
"perceived" common enemy.
Regards,
Brett