> Emotions are important for us because they help us to survive. AIs
> don't need to fend for themselves in a difficult environment; they
> get all the energy, protection & input they need from humans and
> other machines. All they have to do is solve puzzles (of biology,
> programming etc).
Isn't solving puzzles a difficult environment? The space of scientific
research is a very weird place, with quite complex obstacles and even
some dangers (if the AI gets stuck in an infinite loop it will be
reset, if it doesn't produce anything considered useful by the humans
it will be discontinued, research money might run out, etc.).
> If you program it with an "urge" to solve puzzles
> (just like your PC has an "urge" to execute your typed orders), it
> will work just fine. No (other) "emotions" are needed, imo.
[Slight spoiler] I'm reminded of the AI in Greg Bear's _Moving Mars_
who becomes fascinated by a space of alternate truths while doing a
kind of hyperspace jump. The effects are nearly deadly to the
explorers and the AI, as the laws of physics in the ship begin to
shift, but they manage to return to normal space and survive. The
person linked to the AI assures the others that it won't happen again
since the AI now has realized that if it explores that kind of stuff
it will cease to exist, and that would be the end of its ability to
explore. Note, first, that the AI did not have self-preservation to
begin with, and that this was nearly deadly. Second, it had some kind
of motivation to solve more problems and learn more, which can be
generalized into new emotions. If it can deduce self-preservation from
this, why not other emotions we never intended, like territoriality
(if you remove resources from it, it will not be able to solve
problems as well, so it should prevent this)?
> >>Secondly, even if they have fewer feelings
> than humans, why does that mean we can treat them as we like?<<
>
> If it doesn't care about abuse, if it feels no pain or emotional
> trauma, then there is no reason to worry about its well-being
> (for its own sake). AIs can almost certainly be made this way,
> with no more emotions than a PC.
So then there would be no problem with using humans who lack any
sense of pain or emotion for medical experiments? Why do emotions
imply ethical rights?
> > > This would
> > > pretty much solve the whole "rights problem" (which is largely
> > > artificial anyway), since you don't grant rights to specific parts
> > > of your brain.
> >
> > Let me see. Overheard on the SubSpace network:
> >
> > Borg Hive 19117632: "What about the ethics of creating those
> > 'individuals' you created on Earth a few megayears ago?"
> >
> > Borg Hive 54874378: "No problem. I will assimilate them all in a
> > moment. Then there will be no ethical problem since they will be part
> > of me."
> >
> > I think you are again getting into the 'might is right' position you
> > had on the posthuman ethics thread on the transhumanist list. Am I
> > completely wrong?
>
> Eternal truth: unless 19117632 or some other being can and wants to
> stop 54874378, it will do what it bloody well pleases. Might is
> the *source* of right. Right = privilege, and always exists within
> a context of power. An SI may have many "autonomous" AIs in its
> "head". For example, it could decide to simulate life on earth,
> with "real" players to make it more interesting. Are these simulated
> beings, mere thoughts to the SI, entitled to rights and protection?
> If so, who or what could force the SI to grant its thoughts rights
> (a rather rude invasion of privacy)? How do you enforce such rules?
> Clearly a difficult matter, but it always comes down to "firepower"
> in the end (can you blackmail the other into doing something he
> doesn't like?-- that's the question).
>
> > > A failure to integrate with the AIs asap would
> > > undoubtedly result in AI domination, and human extinction.
> >
> > Again a highly doubtful assertion. As I argued in my essay about
> > posthuman ethics, even without integration (which I really think is a
> > great idea, it is just that I want to integrate with AI developed
> > specifically for that purpose and not just to get rid of unnecessary
> > ethical subjects) human extinction is not a rational consequence of
> > superintelligent AI under a very general set of assumptions.
>
> It is rational because in the development of AI, just like in any
> economic or arms race, people will sacrifice safety to get that
> extra edge.
Do they? Note that even risk-taking is subject to rational
analysis. Some risks are acceptable, others aren't, and you can
estimate this before taking them. Taking arbitrarily large risks
doesn't work in the long run, since the expected losses outweigh the
benefits, and we get a clustering around a rational level of risk
among the survivors and the people with experience.
> If it turns out that a highly "emotionally" unstable
> AI with delusions of grandeur is more productive, then there
> will always be some folks that will make it. Terrorists or
> dictators could make "evil" AIs on purpose, there might be
> a freak error that sends one or more AIs out of control, or
> they might simply decide that humans are just dangerous pests,
> and kill them. There is *plenty* of stuff that could go wrong,
> and in a time of nano-built basement nukes and the like,
> small mistakes can have big consequences. Intelligence is
> a powerful weapon indeed.
Exactly. But how large is the ratio between irrational people (the
dictators, terrorists or extreme risk takers) and rational people? It
is not that large really, so as long as there exist
counter-technologies or the possibility of retaliation, the irrational
people are at a distinct disadvantage. Things change if you introduce
technologies that cannot be defended against well or cannot be traced;
now the situation is unstable. However, one cannot just say that AI
will by necessity lead to this situation; one has to analyze it more
carefully. (In addition, irrational people seem to be much worse at
creating new technology than rational people.)
For example, in a world with a lot of human-level AIs, would an
above-human-level AI with nasty goals have a decisive advantage? Many
seem to assume this, but I have not seen any believable analysis that
really shows it (other than the fallacy that greater intelligence can
always outsmart lower intelligence; if that is so, how come many
brilliant generals have still been defeated by less brilliant
opponents?). In fact, it might turn out that being a lone
superintelligence is a disadvantage when faced with a multitude of
opponents; it might be better to be a larger but less smart group.
Unfortunately this kind of theoretical discussion is plagued by
thought experiments that gladly ignore the constraints of psychology,
economics, game theory and technology.
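Still, even a crude back-of-the-envelope model suggests why sheer
numbers matter. Here is a Lanchester-style attrition toy model (a
standard square-law setup; the combat framing and all the numbers are
my own assumptions, not anything established about AI): under the
square law a lone high-quality agent only beats a group of N lesser
ones if its per-unit effectiveness exceeds N^2.

# Toy Lanchester square-law attrition model: one high-quality agent
# versus a group of human-level agents (quality 1 each). All parameters
# are illustrative assumptions only.

def fight(lone_quality, group_size, dt=0.01):
    lone = 1.0                 # one superintelligent agent
    group = float(group_size)  # many human-level agents
    while lone > 0.0 and group > 0.0:
        new_lone = lone - dt * 1.0 * group            # group wears down the loner
        new_group = group - dt * lone_quality * lone  # loner wears down the group
        lone, group = max(new_lone, 0.0), max(new_group, 0.0)
    return "lone agent wins" if lone > 0.0 else "group wins"

if __name__ == "__main__":
    # Against 10 opponents the loner needs quality > 10**2 = 100 to win:
    print(fight(lone_quality=50, group_size=10))    # -> group wins
    print(fight(lone_quality=150, group_size=10))   # -> lone agent wins
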
> > Somehow I think it is our mammalian territoriality and xenophobia
> > speaking rather than a careful analysis of consequences that makes
> > people so fond of setting up AI as invincible conquerors.
>
> That territoriality and xenophobia helped to keep us alive, and
> still do. The world is a dangerous place, and it won't become
> all of a sudden peachy & rosy when AI arrives. Remember: this isn't
> comparable to the invention of gun powder, the industrial revolution,
> nukes or the internet. For the first time in history, there will
> be beings that are (a lot) smarter and faster than us. If this
> goes unchecked, mankind will be completely at the mercy of
> machines with unknown (unknowable?) motives. Gods really.
But they won't come into existence or develop in a vacuum. Most
discussions of this kind on this list tend to implicitly assume that
first there is nothing, and then there are Gods, with so little time
in between that it can safely be ignored. I am highly sceptical of
this, and feel justified in my scepticism by my knowledge of the
history of technological development, the economics of technology,
the complexity of scientific research and the trickiness of improving
very complex systems. AI will not develop overnight; it will emerge
over a period of time which will most likely be decades rather than
days (as some singularitarians like to think): a very fast
development, but not something instantaneous. During this time the
AIs and humans will interact and adapt to each other in various ways.
I would be very surprised if we ended up with just humans and
super-AIs. A more likely result would be a broad spectrum of entities
at different levels, each at least able to communicate with those
just above or below them.
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y