Without the motive to do so, no entity needs to develop imagination.
Without imagination and volition, we'll never have true AI. That is what
is at the heart of the kind of intelligence we possess, and it cannot be
hardwired into a system; it must evolve through complex feedback from an
external environment. Without obstacles to overcome and problems to
solve, the most fundamental being the continuation of its own mind-state
through 'difficult situations', no AI will ever be anything more than a
deterministic expert system, i.e., not intelligent as we apply the term to
ourselves.
> get all the energy, protection & input they need from humans and
> other machines. All they have to do is solve puzzles (of biology,
> programming etc). If you program it with an "urge" to solve puzzles
> (just like your PC has an "urge" to execute your typed orders), it
What fundamentally distinguishes such a machine from your PC, in that
case?
> trauma, then there is no reason to worry about its well-being
> (for its own sake). AIs can almost certainly be made this way,
> with no more emotions than a PC.
They almost certainly can NOT.
In fact, I expect it will be impossible to create a virtual world of
sufficient complexity to allow an AI 'baby' to achieve conceptual
awareness at all. Such a world would certainly be far more complex than
the AI 'baby' itself. One would probably have to wire up the baby with
enough sensory organs to let it accumulate a rich enough sensory and
perceptual experience, with plenty of feedback mechanisms, and set it
loose in the real world, if one expected it to get anywhere in its
cognitive growth.
> Clearly a difficult matter, but it always comes down to "firepower"
> in the end (can you blackmail the other into doing something he
> doesn't like?-- that's the question).
Might != Right
'Right' implies what is 'right' for the entity in question, what is in
accordance with its nature. In the case of free-willed, rational,
conceptually conscious entities, what's right is unrestricted freedom of
thought, expression, creation, and trade. Might is almost always exactly
the opposite of what's 'right' for such entities. It prevents them from
operating in accordance with their basic nature and negates the whole
point of their existence in the first place.
> AI with delusions of grandeur is more productive, then there
> will always be some folks that will make it. Terrorists or
> dictators could make "evil" AIs on purpose, there might be
If it were intelligent, it could choose whether or not it wanted to be
'evil', i.e. act in violation of, and against, its own basic nature, or
'good', acting in accordance with its nature as a volitional, conceptually
conscious entity. You couldn't 'make' an AI evil any more than you can
'make' a child evil, or good. If the term 'intelligence' has any meaning at
all in this context besides being able to beat Karpov at chess, then the
thing would do as it bloody well pleased.
> nukes or the internet. For the first time in history, there will
> be beings that are (a lot) smarter and faster than us. If this
> goes unchecked, mankind will be completely at the mercy of
> machines with unknown (unknowable?) motives. Gods really.
Well, seeing that the world currently seems to be full of 'might is right'
ignoramuses like you, perhaps that wouldn't be all that bad. We Gods do
get lonely on this barren planet.
Hiro