> From: "Spike Jones" <spike66@attglobal.net>
> > MAD will serve us well, until it fails. We might have the UIM by then,
> > but we might not.
> > What I have not seen is some kind of timeline that shows the intermediate
> > steps that need to be accomplished before the UIM is born.
>
> "J. R. Molloy" wrote: But you've seen the intermediate steps that allow us to
> jettison MAD?
No. Is there a way? Sure hope so. Recognizing that we
are apes with dangerous reptilian brain functions does not show
us how to overcome those reptilian tendencies. Humans were
born to fight. {8-[ I'm hoping the AI will figure out how to save
us from us.
> > Any major
> > project has something like this, but I've not seen one for AI. Does such
> > a thing exist? Are there milestones we should be looking for? spike
>
> Maybe this will help, Spike:
>
> Sam N. Lehman-Wilzig, "Frankenstein Unbound: Towards a Legal Definition of
> Artificial Intelligence," Futures (December 1981), pp. 442-457
Cool, thanks for the references, JR!
> By any definition the present powers of AI machines are both impressive and
> worrisome. Cyberneticists have already created or proven that AI constructs
> can do the following:
> ...
> (3) Learn from their own mistakes.
> --N. Wiener, God and Golem, Inc. (Cambridge, MA: MIT Press, 1966)
In a very limited sense, this was accomplished before 1966. One
of the early chess programs was set up to play without knowledge
of chess other than how the pieces move, and to develop its own
notions of the relative values of its pieces. It eventually began to
converge on reasonable values before the experimenters decided
to quit spending precious computer time on it. One of these days
I want to find that research and see if the experiment has been repeated
now that computer time is cheap. That might make for a cool parallel
computing project.
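If I ever get around to redoing it, I imagine something like the toy
sketch below. To be clear, this is my own guess at the setup, not the
original experiment: the greedy one-ply players, the update rule, and
the learning rate are all made up for illustration, and it leans on the
modern python-chess library rather than anything period-authentic.

import random
import chess  # assumes the python-chess package: pip install python-chess

PIECES = [chess.PAWN, chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN]
values = {p: 1.0 for p in PIECES}  # start with no notion of relative worth

def material(board, color):
    # Total value of color's remaining pieces under the current estimates.
    return sum(values[p] * len(board.pieces(p, color)) for p in PIECES)

def pick_move(board):
    # Greedy one-ply search: maximize own material minus the opponent's.
    mover = board.turn
    moves = list(board.legal_moves)
    random.shuffle(moves)  # random tie-breaking keeps the games varied
    best, best_score = None, float("-inf")
    for move in moves:
        board.push(move)
        score = material(board, mover) - material(board, not mover)
        board.pop()
        if score > best_score:
            best, best_score = move, score
    return best

for game in range(50):  # self-play loop; 50 games is just a demo budget
    board = chess.Board()
    while not board.is_game_over():  # repetition/75-move rules end draws
        board.push(pick_move(board))
    result = board.result()  # "1-0", "0-1", or "1/2-1/2"
    if result == "1/2-1/2":
        continue
    winner = chess.WHITE if result == "1-0" else chess.BLACK
    # Made-up update rule: piece types the winner kept more of than the
    # loser get nudged upward, on the theory they contributed to the win.
    for p in PIECES:
        diff = len(board.pieces(p, winner)) - len(board.pieces(p, not winner))
        values[p] += 0.05 * diff
    if values[chess.PAWN] > 0:  # keep the pawn as the unit of account
        values = {p: v / values[chess.PAWN] for p, v in values.items()}

print({chess.piece_name(p): round(v, 2) for p, v in values.items()})

Pinning the pawn at 1.0 means the other numbers read on the familiar
pawn-relative scale, so you can watch whether they drift toward the
textbook 3/3/5/9 or toward something else entirely.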
Actually I'm not sure if that fits the definition of learning from its
mistakes, exactly. spike