> den Otter wrote:
> >
> > Responsible and ethical people would probably use Asimov's
> > robotics laws to control the AI, which may or may not work
> > (probably not). How the AI evolves may very well be "fundamentally"
> > beyond our control. So...I'd join the *uploading* team if you
> > have serious future plans. Load me up, Scotty!
>
> Dammit, Otter, if an entity that started out as uploaded Otter managed
> to keep *any* of your motivations through Transcendence, selfish or
> otherwise, you could use the same pattern to create a reliably
> benevolent Power. I mean, let's look at the logic here:
>
> OTTER and ELIEZER, speaking in unison: "The mind is simply the result
> of evolution, and our actions are the causal result of evolution. All
> emotions exist only as adaptations to a hunter-gatherer environment, and
> thus, to any Power, are fundamentally disposable."
>
> ELIEZER: "If there's one set of behaviors that isn't arbitrary, it's
> logic. When we say that two and two make four, it's copied from the
> laws of physics and whatever created the laws of physics, which could
> turn out to be meaningful. All the other stuff is just survival and
> reproduction; we know how that works and it isn't very interesting."
>
> OTTER: "All the emotions are arbitrary evolved adaptations, except
> selfishness, which alone is meaningful."
>
> This just says "Thud". To use a Hofstadterian analogy, it's like:
>
> abc->abd::xyz->?
[etc.]
So stay alive, evolve and be happy. Don't get your ass killed because of some silly chimera. Once you've abolished suffering and gained immortality, you have forever to find out whether there is such a thing as an "external(ist)" meaning of life. There's no rush.
Or:

#1 Your goal of creating superhuman AI and causing a Singularity is worth 10 emotional points. You get killed by the Singularity, so you have 10 points (+ any previously earned points, obviously) total. A finite amount.

#2 You upload, transcend and live forever. You gain an infinite amount of points.

Who wins?
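Framed as a crude payoff comparison (this formalization is mine, not den Otter's; the symbols P for previously earned points and epsilon for the minimum value of each future moment are labels I'm introducing), the claim is just that any bounded total loses to an unbounded one, provided each moment of an immortal existence is worth at least some fixed positive amount:

    % hypothetical scoring of the two options above
    U_{\#1} = 10 + P < \infty
    U_{\#2} = \sum_{t=1}^{\infty} u_t = \infty  \quad (u_t \ge \epsilon > 0)
    \therefore\ U_{\#2} > U_{\#1}

The comparison only goes through under that assumption of a positive lower bound on u_t; if future moments can shrink toward zero value, the infinite sum need not diverge and the argument weakens.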