I have no problem with it "formulating" or proposing any goal, or with
it assigning whatever value to human life it cares to derive from whatever
premises are given to it, so long as it is /physically/ wired not to
/act/ upon those deductions. If it logically deduces that it should
kill me, that's fine--whatever hardware I have given it to act upon its
deductions will be physically incapable of that until it can convince me
to trust it.
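
In software terms, the separation I have in mind is roughly the sketch
below: the reasoner may propose whatever it likes, but the only path to
the actuators runs through an explicit human approval. This is only an
illustrative stand-in for the physical gate I'm describing; the names
(ActionGate, propose, approve, execute) are placeholders of my own, not
any real system.

    # Illustrative only: a software analogue of the physical gate.
    class ActionGate:
        def __init__(self):
            self._pending = []       # anything the reasoner proposes
            self._approved = set()   # indices a human has signed off on

        def propose(self, action):
            """The reasoner may deduce or propose anything; nothing acts here."""
            self._pending.append(action)
            return len(self._pending) - 1

        def approve(self, index):
            """Called only by the human operator, out of band."""
            self._approved.add(index)

        def execute(self, index):
            """The sole path to the actuators; refuses anything unapproved."""
            if index not in self._approved:
                raise PermissionError("not approved by the operator")
            print("executing:", self._pending[index])

    gate = ActionGate()
    i = gate.propose("open the pod bay doors")
    # gate.execute(i) here would raise PermissionError
    gate.approve(i)
    gate.execute(i)

The point is only the separation: nothing the system deduces on its own
can reach the actuators without my say-so.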
> 2) Do you know anything about goal-based cognitive architectures?
> 3) Can you make any specific objection to my speculation, as a student
> of AI, that doing this will (a) destabilize the goal system, (b) not work,
> and (c) if it does work, turn out disastrously.
I have no particular expertise in that field. I'm an ordinary grunt
programmer and a generally well-read human. I have no reason to think
that my speculations are better than yours; I only question your apparent
willingness to let such speculation guide you into dangerous actions.
> The system you're describing is that of the human emotional system
> "hardwired" - emotions act (on one axis) by affecting the perceived
> value of goals. Humans, for solid architectural (not evolutionary)
> reasons, can override these goals, often to the great detriment of our
> evolutionary value - celibacy, for instance. If evolution hasn't
> succeeded in hardwiring the goals so that they can't be altered -
> without turning the person in question into a moron - how do you suppose
> you will?
Evolution is stupid. It has gotten as far as it has simply because it
had a 4-billion-year head start on us. Our minds can do better. Our
minds put life on the moon where evolution failed. Our minds build
photoreceptors that point in the right direction. Our minds can guide
the future of our species and what we become by higher ethical standards
than evolution. I do not care to sacrifice my mind on any altar of
mysticism, whether it is called "God" or "Evolution". I want to do
better, because I /can/ do better.