> (H) Take a person X of normal intelligence who knows the basics of
> some standard programming language. Give him an arbitrarily powerful
> computer, complete with camera eyes, microphones, robot arms etc.
> Then it is possible to educate X in less than a week in such a way
> that he will be able to program his computer to achieve
This is an interesting question. I posed something similar on the list a
while ago:
> From hal Fri Oct 11 16:31:01 1996
> To: email@example.com
> Subject: Infinitely fast computer
> An idea I've been amusing myself with a bit, relating to the question
> of how hard it will be to generate AI using nanotech is this: suppose a
> genie gives you an infinitely fast computer. This is a computer just
> like today's, programmable in C or Lisp or some other language, which
> has the property that it runs infinitely fast. Any program you put on
> it completes instantly (unless it is of the type which never completes,
> in which case it runs forever until you halt it). All computation is
> done in zero time. We'll also throw in infinite memory while we're at
> it, although I'm not sure how big the C pointers have to be then :-).
> The question is, given such a miraculous device, how hard would it be
> for you, meaning the typical programmer reading this, to produce a program
> which could pass the Turing test, or better still one which is super-
> intelligent? Where would you start? How long would it take you to write
> the code? What research would you have to do? Could it even be done?
The few people who replied seemed to think that it would not take long,
maybe even just a few days. I am more skeptical. When I try to think
through an actual, concrete plan, it is hard to know where to start.
Tell me what the first program you would write is, what you would expect
to learn from it, and where you would go from there. How many lines of
code would the program be? Do you have the knowledge now to write it,
or would you have to do research first? How quickly could you write it?
Maybe you could get a copy of Thomas Ray's a-life program Tierra,
which lets various "program organisms" interact and reproduce under
certain rules. You fire it up and tell it to stop when the average
organism size reaches some large value, with the program set up so
that normally small organisms will have a reproductive advantage.
Hopefully the only way a large organism could grow and spread would be
if it were doing something smart.
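The stopping rule described above can be sketched in a few lines. This is a toy model, not Tierra itself: here an "organism" is reduced to nothing but its size, smaller organisms get a reproductive advantage via fitness weighting, and the run halts only if mean size grows past a cap anyway. The function name and all parameters are hypothetical, chosen just for illustration.

```python
import random

def run_until_large(pop_size=100, size_cap=64, generations=10_000, seed=0):
    """Toy sketch of the stopping rule: evolve a population of
    'organisms' (represented only by their size in instructions),
    give smaller ones a reproductive advantage, and halt if mean
    size nonetheless reaches size_cap."""
    rng = random.Random(seed)
    sizes = [rng.randint(1, 16) for _ in range(pop_size)]
    for gen in range(generations):
        mean = sum(sizes) / len(sizes)
        if mean >= size_cap:
            return gen, mean  # something overcame the size penalty
        # fitness favors small organisms: selection weight ~ 1/size
        weights = [1.0 / s for s in sizes]
        parents = rng.choices(sizes, weights=weights, k=pop_size)
        # reproduce with a small mutation in size (+/- 1 instruction)
        sizes = [max(1, p + rng.choice([-1, 0, 1])) for p in parents]
    return None, sum(sizes) / len(sizes)  # threshold never reached
```

In this stripped-down model there is nothing "smart" for a large organism to do, so the run never trips the size cap, which is consistent with the point: the interesting case is precisely when something does.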
Well, you'd probably end up with a real mess. Zillions of programs, all
interacting in some messy ways, various extreme flavors of meta-parasitism.
It would likely be monstrously complex, and you'd have to study it for
years to understand what was going on. There could even be human-level
intelligences in there, happily communicating in program fragments, while
you can't even see object boundaries (like looking at a Fourier transform
of our world).
On the other hand we don't know if any particular artificial world is
even capable of evolving intelligence. Tierra may be too simple to
allow complexity to evolve (too many predators, or too much chaos).
Some philosophers argue that our own world should be about as simple as
it can be such that it is still able to support intelligence. I don't
give much weight to this kind of reasoning, but it does weakly
suggest that simple toy worlds will not be able to do it; otherwise most
intelligent beings would live in such worlds. (That's assuming that we don't
have ultra-simple rules buried in the quantum foam, which we just haven't
figured out yet.)