Brian Atkins writes:
> hal@finney.org wrote:
> >
> > That's not clear. First, it could easily take longer than 20 years to
> > get superhuman AI, for several reasons:
> >
> > - We may not have nanotech in 20 years
I would indeed be very surprised if we don't see (primitive,
self-assembling) molecular circuitry in 20 years. Possibly less.
> > - We may hit Moore's Wall before then as computer speeds turn out to be
> > on an S curve just like every other technology before them
We have to have 2D molecular circuitry if we want to keep the
integration density linear on the log plot beyond 2014 or so (I don't
have the data for a back of the envelope, so roll your own breakdown
point). Very soon after that we will have to have volume-integrated
molecular circuitry (probably introduced piecemeal, by layers, before
going to molecular crystal). After that, you can only scale up
volume. Sooner or later you'll run out of atoms, and if no new physics
is ready by then to pick up the ball, Moore will have run out of
steam. No surprise there.
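For illustration only (every number below is an assumption, not
measured data): take a ~130 nm process around 2000, let the feature
size halve every ~3 years, and see when you're down to a handful of
atoms.

# Back-of-envelope sketch; all figures are assumed, roll your own.
feature_nm = 130.0        # assumed process node, ca. 2000
year = 2000
atom_nm = 0.2             # rough diameter of a silicon atom

while feature_nm > 10 * atom_nm:   # stop at ~10 atoms across
    feature_nm /= 2.0              # one classical scaling step
    year += 3

print("atomic-scale wall around", year, "at ~%.1f nm" % feature_nm)
# with these assumptions: atomic-scale wall around 2021 at ~1.0 nm

Shift the starting node or the doubling period and the wall moves by
a few years, which is exactly why I say roll your own breakdown point.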
Of course, integration density does not automatically translate into
front-end performance. There's considerable architectural slack
present in semiconductor photolitho, exploitable via on-die
architectures (embedded RAM, ultrabroad SIMD-within-a-register
parallelism), shrinking system grain size (boosting good-die yield),
evolvable hardware (FPGAs & Co), and synergies thereof. Of course,
currently this is blocked by the state of the art in software.
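To illustrate what I mean by SIMD-within-a-register parallelism, a toy
sketch (pure illustration, nothing vendor-specific): treat one 32-bit
word as four independent 8-bit lanes and add them with a couple of
logical ops instead of looping over bytes.

# SWAR toy: add four packed 8-bit lanes at once; each lane wraps mod 256.
LO = 0x7F7F7F7F   # low 7 bits of every lane
HI = 0x80808080   # top bit of every lane

def add4x8(a, b):
    partial = (a & LO) + (b & LO)                     # lane-wise add of the low 7 bits
    return (partial ^ ((a ^ b) & HI)) & 0xFFFFFFFF    # fold top bits back in, carry-free

assert add4x8(0x01020304, 0x10101010) == 0x11121314

The point is not this particular trick, but that die area spent on
wider words and wider on-die datapaths can be turned directly into
parallelism.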
> > - Software may continue to improve as it has in the past (i.e. not very
> > fast)
There is considerable doubt that mainstream technologies improve at
all. I would rather have written this text on a modern 512-CPU Lisp
machine (Symbolics, where art thou?) or on a consumer version of a
CM-10 than on an x86 Linux box, which has a slower response time than
my vintage 1986 and 1988 computers. Bleh.
Though there are validated methodologies for writing low-defect
software, these do not scale to high complexity. (Of course, only a
tiny fraction of the industry uses them at all.) I do not see
anything beyond evolutionary algorithms that is going to deliver
adaptive, robust, complex systems. As long as we don't try to go near
human scale, we should be more or less safe.
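To make concrete what I mean by evolutionary algorithms, a minimal toy
(the fitness function is a stand-in I made up, not a recipe for
anything complex): a population of bit strings, mutation plus
selection, nobody writes the solution by hand.

# Minimal evolutionary toy; the target is an assumed stand-in for "the problem".
import random

TARGET = [random.randint(0, 1) for _ in range(32)]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))   # count matching bits

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                            # truncation selection
    population = [mutate(random.choice(parents)) for _ in range(50)]

print("best fitness:", fitness(max(population, key=fitness)), "of 32")

Scaling this from 32 bits to something adaptive, robust and complex
is, of course, the entire open problem.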
> > - AI researchers have a track record of over-optimism
>
> Perhaps so, in which case Eugene has nothing to worry about. These things
I wish I could *know* this. Too much is at stake.
> above are not what we want to discuss on this thread. We want to hear what
> you propose to do to have a good outcome in a world where the above things
> turn out to be wrong.
Though this is directed at hal, I'll make this post a portmanteau.
Unsurprisingly, I have no silver bullet to offer. Complex reality
demands solutions which are less than simple, and frequently
ambiguous.
To recapitulate, we have a case where memetic evolution driving
culture and technology has overwhelmed genetic adaptability. We have
primates with conserved firmware compelling them to pull a tail
before checking whether there is a tiger attached to the other end of
it. This made sense once, when explorative behaviour, though risky to
the individual, provided a potential payoff more than compensating the
genetically related group as a whole for a few dead hominids along the
way. This wouldn't matter if we didn't have some (very few, specific,
easily identifiable) technologies which can amplify microscopic
decisions to macroscale. Arguably, this already applies to weapons of
mass destruction, though the overhead necessary to soak a landscape in
VX or deliver a thermonuclear warhead to a city can hardly be called
microscopic. Though we're by no means through the arms-race bottleneck
(the Cuban missile crisis being a damn close shave), we might be
through the worst of it. The emerging Armageddon-scale technologies
are different.
This doesn't mean we're doomed.
This does mean that we have to try to compensate for the firmware
which makes us overly frisky during the risky passage. There is an
established methodology for managing high-risk projects. Let's use
it. Because our firmware also prevents us from succumbing to external
pressures voluntarily, existing ways of dealing with deviants need to
be applied. This will be an oppressive, nasty, violent and responsible
thing to do. Because the new threats are low-profile, there is clearly
an early point of diminishing returns, where an external enforcing
authority breaks more than it accomplishes and incites rebellious
behaviour by pissing people off. Clearly, we do not want to go there.
We can't contain even most of it, but we can reduce the number of
potential apocalyptic nuclei while we're passing through the tight
spot. (This meta-level action envelope is translatable into a set of
immediately applicable rules, but I'm no expert in this, and no one is
paying me to go through the whole thing anyway; so, hopefully, the
professionals in the emerging fields will come up with self-regulation
rules. Early kudos go to Foresight.)
> > Secondly, I suspect that in this time frame we are going to see
> > increased awareness of the dangers of future technology, with Joy's
> > trumpet blast just the beginning. Joy included "robotics" in his troika
> > of technological terrors (I guess calling it "AI" wouldn't have let him
> > keep to the magic three letters). If we do see an Index of Forbidden
> > Technology, it is entirely possible that AI research will be included.
>
> Don't you think this better happen soon, otherwise the governments will
> end up trying to regulate something that already is in widespread use? There
> are going to be "intelligent" programs out there soon- in fact you can already
> see commercials in the last year touting "intelligent" software packages.
This is what marketing understands by intelligence. The only robust,
adaptive, all-purpose intelligence -- traces of it -- is to be found
in academic ALife research labs. This technology is right now
absolutely innocuous, and it will take decades to reach the
marketplace as is.
> Do you really think it is likely our government would outlaw this area of
> software development once it becomes a huge market? Extremely unlikely...
As long as we don't see anything approaching the threat threshold, the
field should not be regulated. AI is already ailing as it is; we don't
need Turing pigs breathing down our necks as we code.
> > Third, realistically the AI scenario will take time to unfold. As I
> > have argued repeatedly, self-improvement can't really take off until
> > we can build super-human intelligence on our own (because IQ 100 is
> > self-evidently not smart enough to figure out how to do AI, or else
> > we'd have had it years ago). So the climb to human equivalence will
> > continue to be slow and frustrating. Progress will be incremental,
> > with a gradual expansion of capability.
>
> Up to a point...
It is not necessary to be extremely intelligent to create
intelligence: co-evolution is dirt stupid, yet it came up with
us. Using the same principles, we, barely intelligent primates, can
push it way further. As long as it doesn't end us all, we *should*
push it further. I don't know about you, but I'm tired of chipping
flint while sitting in front of the cave; it's cold, wet, drafty, and
the lice bite unmercifully.
> > I see the improved AI being put to work immediately because of the many
> > commercial opportunities, so the public will generally be well aware of
> > the state of the art. The many difficult ethical and practical dilemmas
>
> Public.. well aware of state of the art... bwhahahaha. No I don't think
> so. At any rate, I guess you are talking about a different scenario here;
> one without Turing Police preventing development?
As soon as the field starts producing results, you can expect people
to perk up.
> > that appear when you have intelligent machines will become part of the
> > public dialogue long before any super-human AI could appear on the scene.
> >
> > Therefore I don't think that super-intelligent AI will catch society by
> > surprise, but will appear in a social milieu which is well aware of the
> > possibility, the potential, and the peril. If society is more concerned
> > about the dangers than the opportunities, then we might well see Turing
> > Police enforcing restrictions on AI research.
>
> Well I'd love to see how that would work. On one hand you want to allow
> some research in order to get improved smart software packages, but on
> the other hand you want to prevent the "bad" software development that
Intelligence is not only a software problem. We're both hardware- and
software-bottlenecked. As long as your available hardware performance
(number of stored bits, speed with which those bits can be tweaked) is
below a certain threshold, even provably optimal software is not going
to go beyond human scale.
Even if the individual grains are small, we still have to secure the
networks against worms, since the Net as a whole is certainly adequate.
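To put rough numbers on that (every figure below is my own
order-of-magnitude guess, the brain numbers especially):

# Crude threshold arithmetic; all figures are assumed orders of magnitude.
brain_synapses  = 1e14          # rough guess at human synapse count
brain_ops_per_s = 1e14 * 1e2    # assume ~100 "updates" per synapse per second

box_bits        = 256e6 * 8     # a 256 MB box, ca. 2000
box_ops_per_s   = 1e9           # ~1 GHz, one coarse op per cycle

net_boxes       = 3e8           # assumed number of reachable machines

print("one box vs brain, storage: %.0e" % (box_bits / brain_synapses))
print("whole Net vs brain, ops/s: %.1f" % (net_boxes * box_ops_per_s / brain_ops_per_s))

With these assumptions a single box is several orders of magnitude
short, while the Net in aggregate is in the right ballpark -- which is
why the worms worry me.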
> might lead to a real general intelligence? Is the government going to sit
> and watch every line of code that every hacker on the planet types in?
How much hardware will the average hacker have by 2020? How much of it
will be reconfigurable? What will the state of evolvable hardware be
by 2020?
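A hedged extrapolation, just to frame the question (it assumes the
~18-month doubling in capability per dollar somehow survives to 2020,
which is exactly what is in doubt above):

# Toy extrapolation; assumes the 18-month doubling holds until 2020, which it may not.
doublings = (2020 - 2000) / 1.5
print("2020 hardware per dollar vs 2000: ~%.0fx" % (2 ** doublings))
# with these assumptions, prints roughly 10321x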
> In an era of super-strong encryption and electronic privacy (we hope)?
We hope.