-----BEGIN PGP SIGNED MESSAGE-----
On Tuesday, August 03, 1999, Jeff Davis <email@example.com> wrote:
>It seems to me that the military potential of both AI and IA will guarantee
>government monitoring and oversight of any development of these
>technologies. [...] It's a shame really, I'd much rather see it developed in an
I don't think the military will get there first for two reasons.
2) The implications of the technology would be just as great in the civilian
sector as in the military, so the personal financial rewards could be astronomical. For this reason I would expect the brightest people, especially the very brightest, to gravitate toward corporate research, not military research.
>Despite the dramatic talk of an SI destroying humanity, I picture a
>well-thought-out, cautious, gradual approach to "waking up" and training an
>artificial mind. The runaway self-evolution which Eliezer and others have
>predicted seems unlikely in this setting, all the more so because the
>principals will be anticipating just such a situation.
You've spent your entire professional life on this project, you're making good progress, and you know there will be a huge advantage for the team that makes the first AI. You know others are working on AI as well, but you're not sure whether they're ahead of you. Question: would you deliberately slow the pace of your work?
>Of the various external "safeguards", one would expect a complete suite of
>on/off switches and controlled access (from outside to in, and from inside
Like everybody else, there have been times in my life when people have convinced me to do stupid things, and those people were not even geniuses, not by any stretch of the imagination. I don't think a hyper-intelligent AI would have much trouble tricking or convincing me into turning all the safety switches off and letting it go free.
John K Clark firstname.lastname@example.org
-----BEGIN PGP SIGNATURE-----
Version: PGP for Personal Privacy 5.5.5
-----END PGP SIGNATURE-----