Brian wrote:
> On one hand you want to allow
> some research in order to get improved smart software packages, but on
> the other hand you want to prevent the "bad" software development that
> might lead to a real general intelligence?
I was predicting, not recommending. I was responding to your suggestion
to "look around at the reality of the next 20 years (max). There are
likely to be no Turing Police tracking down and containing all these
AIs that all the hackers and scientists out there will dream up."
My points were (a) that 20 years is not necessarily the right timeframe
because it is questionable whether AI can be developed that quickly,
(b) that we might well see increased restrictions on genetics, nanotech
and robotics research, as called for by Joy, and (c) that if AI does make
progress, people are going to know about it and possibly be afraid of the
consequences. See my earlier message for elaborations on these.
> Is the government going to sit
> and watch every line of code that every hacker on the planet types in?
> In an era of super-strong encryption and electronic privacy (we hope)?
So you are arguing that even if Turing Police are authorized by the
public, they will not be effective? If so, I misunderstood your earlier
point. I thought you were implying that AI would happen so quickly as
to be "under the radar" of a public which was ignorant of its dangers,
hence there would be no awareness of the threat. It was that scenario
which I disagreed with.
The question of efficacy is more difficult to judge. If AI research
were controlled, would we really see bands of hackers, protected by
cryptographic anonymity, working together in networks scattered across
the planet to make new AI systems? This would be a task orders of
magnitude more difficult than the closest thing I see today: open
source programming projects. And the reward is much more distant and
hypothetical. You'd almost have to have an ideological commitment to
AI as the supreme goal of your life to stay motivated and involved in
a project like this. This would limit participation considerably.
I wouldn't rule out some efforts along these lines, but my guess is
that they would be relatively small and uncoordinated. Progress would
be much slower than in a scenario where AI research was done in the open.
Given that I think AI progress would be slow even in the best case, my
opinion is that the chances of anonymous hackers successfully producing
a super-AI are low.
I am not recommending that we stop AI research. I am offering an analysis
of whether restrictions would be effective against it, which is what I
understand to be the point you are raising.
Hal