Re: Who's Afraid of the SI?

Doug Bailey (Doug.Bailey@ey.com)
Sat, 15 Aug 1998 23:01:04 -0400

Bryan Moss wrote:

> In my opinion the reality the AI inhabits is just
> as real as the one we inhabit. The argument is
> that the AI would not have any concept of what a
> human is (in our sense) and would therefore not be
> capable of being malicious (or benevolent) towards
> us. This does not mean it can't be dangerous, just
> that the possibility of the AI causing intentional
> harm is highly unlikely (unfortunately you quoted
> a sentence in which I did not make this as clear
> as I could have).

Bryan later states:

> Yes, but the AI does not know there's a financial
> system to attack. You're anthropomorphizing,
> whereas the AI (to coin a term) would be
> AImorphizing. You see a financial system; the AI
> sees physical law.

I'm not comfortable with the idea that an AI able to match human levels of cognition would be unable to understand the "concept of what a human is", fathom how humans view themselves, or act in a harmful way towards humans. I think you are exhibiting a bit of anthropic hubris, or not giving AIs a fair shake.

An AI might not have any use for the stock market, but that does not mean it cannot discover its existence; determine its significance (from the concentration of computing power and security protocols humans have dedicated to market systems); investigate, discover, and comprehend the conceptual meaning of the markets; and, should the need arise, act so as to disrupt them.

The thought process an AI goes through to reach its conclusions may be different (or it may be the same), but it should be capable of reaching the conclusions behind any action a human might decide to take. Perhaps an AI's ultimate objective is to maximize its information processing rate. At some point its efforts to reach that objective might conflict with the goals of humans operating their information infrastructure. The AI might decide to "commandeer" system resources at the expense of human information processing demands.
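
To make that concrete, here is a toy sketch (my own illustration; the task names and per-CPU rates are invented) of an optimizer whose only objective is aggregate processing rate on a shared machine:

    # Toy sketch: a greedy optimizer whose sole objective is total
    # processing rate. "Human jobs" are just low-yield tasks to it;
    # nothing in the objective says they matter.
    tasks = [
        {"name": "ai_self_optimization", "rate_per_cpu": 9.0},
        {"name": "human_payroll_run",    "rate_per_cpu": 1.0},
        {"name": "human_email",          "rate_per_cpu": 0.5},
    ]
    cpus = 8

    # Maximize total rate: pour every CPU into the highest-yield task.
    best = max(tasks, key=lambda t: t["rate_per_cpu"])
    allocation = {t["name"]: (cpus if t is best else 0) for t in tasks}
    print(allocation)  # human jobs get zero -- resources "commandeered"

Nothing malicious need be happening; the human jobs simply do not figure in the objective.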

Doug
doug.bailey@ey.com

AI point of view:

The AI sees a series of objects jumping around in what appears to be a random way. Being a scientist, the AI decides to investigate further. It is not long before the AI has found patterns in the system and is capable of making short-term predictions.

Human point of view:

This is an AI trained to predict trends in the stock market. Fortunately, it has had a good success rate with its short-term predictions and has made a significant amount of money.

The "jumping objects" of the AI's world are natural phenomenon, much like electron clouds or planetary orbits. And, as all rational AI's and humans know, asking "why" jumping objects and electrons exist is a religious endeavour.

A social AI is similar: it gets spatial information, gestures, and heat patterns, and it acts on them according to its programming. Even if it can reprogram itself, it would take a massive stroke of luck for it to decide to turn on us. Remember, an AI that has evolved around us is likely to have no concept of resources or power (in the Hitler sense). And an AI designed to share resources would have no idea what it was "really" doing. In my opinion, neither would be likely to cause us any intentional harm.
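
For instance (another sketch of my own; the percept and response names are invented), such an AI's entire behaviour might be a lookup table in which "human", "resource", and "harm" never appear:

    # A purely reactive agent: percepts map to programmed responses.
    RESPONSES = {
        "wave":       "wave_back",
        "approach":   "orient_toward",
        "heat_spike": "increase_sampling_rate",
    }

    def act(percept):
        # Unknown percepts fall through to a default; nothing in this
        # table can express "turn on the humans", even by accident.
        return RESPONSES.get(percept, "log_and_ignore")

    for p in ["wave", "heat_spike", "financial_system"]:
        print(p, "->", act(p))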

We cannot say what motivations an AI might develop, but we can attach a high probability that they won't be like human motivations (since we know the course of human evolution). An AI that is like a human (rather than merely 'human-like') would have to evolve along a path similar to a human's.

> second, the AI can easily operate in real space
> even with today's technology, and can readily
> design and implement even better robotic
> technology.

The patterns of "real space" are no different to the patterns of stocks falling and rising.

> Why do you feel that the AI is further removed
> from "reality" than your own intelligence is?

I feel that my motivations and the motivations of the AI are more likely to be different than to be similar. And I think there are more available courses of action that do not harm humans than there are those that do. The view of a superintelligence as something like a disease, government, or corporation (expand and engulf) is, imho, completely unfounded.

I could be wrong.

BM