J. R. Molloy writes:
> We don't really have any reason to presume that SIs would not emerge from
> genetic programming projects, do we? As AIs compete with each other, winners
We have every reason to presume that an SI won't come from explicit
programming by humans.
# make SI
make: *** No rule to make target `SI'. Stop.
Witness the software industry: certainly not something evolving along
a straight log plot. (Neither is the hardware industry; look at the
growth of nonlocal memory bandwidth.) The only currently visible
pathways towards robust AI are stealing from biology, reproducing the
development of the brain in machina, or breeding something that
exploits a given hardware architecture optimally with evolutionary
algorithms (a minimal sketch of that last option below).
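For concreteness, here is about the smallest possible sketch of such
breeding, in Python. Everything in it is illustrative: the bit-string
genome, the toy fitness function, and all the parameters are
stand-ins for whatever task or hardware benchmark a real system would
be scored against, not any particular project's setup.

import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 64, 50, 200, 0.02

def fitness(genome):
    # Toy objective: count of 1-bits. Stands in for scoring a
    # candidate on the actual task or hardware benchmark.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [b ^ (random.random() < MUT_RATE) for b in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]   # keep the fitter half: cull the herd
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

print("best fitness:", max(map(fitness, pop)))

Truncation selection is the crudest possible choice; tournament or
fitness-proportionate selection are the usual alternatives, but the
shape of the loop is the same.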
> would evolve with higher and higher IQs. Those intelligent agents (IAs) that
> display unfriendly tendencies could simply be terminated. This is a tremendous
1) Undecidability, dude. 2) Containment. If the thing is useful, it
is dangerous. If it is useful, and can be built and/or reverse
engineered easily enough, it will be used. If it is used in large
enough numbers for a long enough time, it will either express its
hidden malignant tendency or mutate into a potentially dangerous
shape. There is zilch you can do about it.
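On 1): this is Rice's theorem. No total procedure can decide any
non-trivial behavioral property of arbitrary programs, and "never
displays unfriendly tendencies" is about as non-trivial and
behavioral as properties get. A sketch of the standard reduction,
assuming a hypothetical oracle is_friendly() -- which is exactly the
thing that cannot exist:

def is_friendly(source: str) -> bool:
    # Hypothetical oracle: True iff running `source` never does
    # anything unfriendly. Rice's theorem says no total decider
    # like this exists for any non-trivial behavioral property.
    raise NotImplementedError("cannot exist")

def halts(prog: str) -> bool:
    # Reduction: if is_friendly existed, halting would be decidable.
    # Wrap prog so one unfriendly act (misbehave() is a stand-in for
    # any detectably hostile action) happens exactly when, and only
    # when, prog finishes. Then prog halts iff the wrapper is
    # unfriendly -- but halting is undecidable, so the oracle is
    # impossible.
    wrapper = "exec(%r)\nmisbehave()" % prog
    return not is_friendly(wrapper)

So "terminate the unfriendly ones" presupposes a detector that
provably cannot exist in the general case; in practice you only catch
the tendencies that happen to get expressed while you are watching.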
AIs are very useful, since actual human intellects are scarce,
expensive, and slow to produce; and evolutionary algorithms are by
far the simplest way to produce robust, open-ended intelligence.
Look into the mirror: evolution already built one that way.
> advantage of working with Mind Children, because you can't (legally or
> ethically) do this with biological children. As the evolution of AI moves toward
> SI, all you need to do to preclude unfriendliness is to cull the herd, so to
> speak.
[insert derisive laughter here]