Re: electronic intelligence and ethics

From: Anders Sandberg (asa@nada.kth.se)
Date: Wed Feb 23 2000 - 11:32:01 MST


"Zero Powers" <zero_powers@hotmail.com> writes:

> Following up on QueeneMuse's interesting question about what the uploading
> experience will be like, one of the things I wonder about is what sort of
> life we (plain, trans-, or post-)humans will be allowed by the electronic
> intelligences (I prefer the term EI to AI) once they take over.

Aren't you being a bit deterministic here? It almost sounds as if the
future were pre-planned...

        In the new 5-year plan, our glorious chairman has decided that
        AI will be achieved in 2024! [wild applause] A self-augmenting
        AI system will be operational in 2025! [wild applause] And
        exactly 32 hours after that, the Singularity will be reached!
        [wild applause, the speaker tries to be heard] ...as predicted
        by the Marxist-Yudkowist Dialectic! [standing ovations and
        spontaneous singing from the assembled transhumans]

The development of AI and the further interactions between
human-derived and non-human intelligence are complex issues, and just
saying that the AIs will take over oversimplifies things. For
example, Hans Moravec sketches a scenario in _Robot_ where the
super-AI corporations of the future provide humans with a pleasant
lifestyle by having them as stock owners. Whether that scenario comes
to pass depends a lot on exactly how hard AI turns out to be to
achieve relative to other technologies and changes in society, on how
the economy interacts with the possibilities AI opens up, and on the
results of individual actions.

> But what else (if anything) will motivate them? Will there be any "good"
> for them other than information? Any "evil" other than ignorance? Will
> they care at all about such trivialities as emotion, fairness, compassion
> and pain? Whether or not I want to survive the ascendancy of strong EI will
> depend largely upon this question. Unfortunately I'll never know the
> answers unless and until I survive to that time. Or unless I am persuaded
> by the musings of this list. I can't wait to hear your thoughts.

We could likely create AIs with all sorts of motivations, ranging
from the arbitrary ("Green is an axiomatic good - strive to maximize
greenness") through the useful ("To serve humans is the greatest joy")
to the frustrating ("Discover the meaning of it all"). But only some
motivational systems are likely to be robust and flexible enough to
work in the world at large, and once AIs start to change themselves
and their offspring, we will likely see a kind of evolution take
place.
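
As a toy illustration of that last point (my own caricature, not a
prediction), the following Python sketch reduces a motivational
system to a goal plus two numbers - robustness and flexibility - and
lets self-modifying AIs copy themselves with small mutations.
Selection then acts on the machinery rather than on the goal content:

import random

# Toy model: a motivational system is a goal plus two traits -- how
# robust it is to novel situations and how flexible it is about
# adopting subgoals. Self-modifying AIs copy themselves with small
# mutations, and fragile or rigid systems drop out regardless of what
# their goals actually say.

random.seed(0)

GOALS = ("maximize greenness", "serve humans",
         "discover the meaning of it all")

def make_ai(goal):
    return {"goal": goal,
            "robustness": random.random(),
            "flexibility": random.random()}

def viability(ai):
    # Survival in a varied world needs both traits; in this toy model
    # the goal itself contributes nothing to viability.
    return ai["robustness"] * ai["flexibility"]

def mutate(parent):
    child = dict(parent)  # the goal is inherited unchanged
    for trait in ("robustness", "flexibility"):
        new = parent[trait] + random.gauss(0, 0.05)
        child[trait] = min(1.0, max(0.0, new))
    return child

population = [make_ai(goal) for goal in GOALS for _ in range(10)]

for generation in range(50):
    population.sort(key=viability, reverse=True)
    survivors = population[: len(population) // 2]  # weak systems die
    population = survivors + [mutate(p) for p in survivors]

for ai in sorted(population, key=viability, reverse=True)[:3]:
    print(ai)

Whatever goals happen to survive, the systems carrying them end up
being the robust, flexible ones.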

My guess is that the basics will be fairly similar to those of most
other life: preserve your own (and possibly your offspring's)
existence, and use efficient means to achieve your goals (rationality,
bounded by available intelligence and the environment). Beyond that I
think there is room for enormous variety, likely much more than the
variety seen among humans. Also, I consider human-AI symbiosis a
likely path, so there may be entities around that contain mixtures of
synthetic and human-derived motivations.
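
To pin down what I mean by rationality bounded by available
intelligence, here is a minimal Python sketch (the payoff function
and the numbers are made up purely for illustration): an agent that
can only afford to evaluate a sample of its options commits to the
best option it happened to examine.

import random

# "Bounded rationality" in miniature: the agent cannot survey the
# whole option space, only as many options as its budget allows.

random.seed(1)

def payoff(action):
    """The environment's true reward; unknown to the agent in full."""
    return -((action - 37) ** 2)  # action 37 is optimal

def bounded_choice(options, budget):
    """Evaluate only `budget` randomly sampled options, then commit."""
    sampled = random.sample(options, budget)
    return max(sampled, key=payoff)

options = list(range(100))
for budget in (3, 10, 50):
    choice = bounded_choice(options, budget)
    print(f"budget={budget:2d} -> chose {choice:2d}, "
          f"payoff {payoff(choice)}")

A bigger evaluation budget produces more rational-looking choices,
but even the small-budget agent is using efficient means given what
it can afford to compute.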

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
