"J. R. Molloy" wrote:
>
> "Max More"
>
> > ...or we can
> > alter your brain just enough to ensure that you don't want to initiate
> > aggression. Yes, more problems arise in that scenario, but at least the
> > person is given a chance. Since I don't believe in Evil People (but do
> > think that certain actions and behaviors can be called evil), I would
> > rather offer someone redemption even if they have done horrible things.
>
> As you imply, some people would consider it a "horrible thing" to alter human
> brains to fit social expectations. When >H AI emerges, all these questions will
> become moot, since evolution will have surpassed the human brain anyway.
> Hyper-compassionate SI will, by its own definition, have what it takes to solve
> all these problems.
>
Speaking from a purely materialistic viewpoint, exactly why would it be
any more horrible to fix an errant and dangerous human mind than to
debug an errant and dangerous AI module? Why would it be worse to
tamper with either than to debug an ordinary software module or fix a
broken hardware component?

I don't believe such questions can be safely ignored on the grounds that
the SI will solve them all, and more, if and when it arrives on the
scene. I think it is quite dangerous to ignore difficult questions and
problems on this or any other basis.
- samantha