Reason wrote:
>
> > I'm not sure you have a complete grasp of the Sysop Scenario. It is not
> > about "forcing" anything at all. No one is forced to modify their mind,
> > or live a libertarian life. The whole point is everyone is free to do
> > what they want, live the way they want, etc. It is volition-based. It
> > is about protecting what you want and how you want to live.
> >
> > I would not call it a libertarian or any other society. Some people may
> > form governments or societies inside of it, others may not participate at
> > all and focus on individual pursuits. Yet they all can still interact with
> > each other without any possibility of stepping on each other's toes (unless
> > they want their toes stepped on).
>
> I think you miss my point: how does the Sysop Scenario prevent a group of
> people/entities from using their "freedom" to dismantle the Sysop Scenario
> and set up their own form of managing things in which people/entities have
> no freedom at all?
Because the Sysop would prevent that. This is the old "setting up a Hell
sim world" issue.
>
> The only way that you can prevent this is through use of force and coercion
> by the Sysop.
The only thing causing the coercion is your attempt to coerce someone else.
If you want to live in a world where you can do that, you are going to be
out of luck. Are you trying to argue that the ability to coerce someone is
somehow important?
>
> (And my original point was that if you don't have the freedom to destroy the
> society, then it's not a truly libertarian society, which is why a truly
> libertarian society can't exist).
Well, you'll be glad to know that even in Sysop Space you can still get
rid of it. It would just require convincing the Sysop that this is the
most Friendly action.
>
> > I have yet to see a better solution to the issue. At some point the matter
> > (as in atoms) must fall under someone's control, and personally I don't
> > relish the idea of having to constantly protect myself from everyone else
> > who can't be trusted with nanotech and AI. All it takes is one Blight to
> > wipe us out. That kind of threat does not go away as humans progress to
> > transhumanity; rather, it increases in likelihood. What is the stable state
> > if not Sysop or total death? There may be some other possibilities, can
> > you name some?
>
> The most likely outcome is just as it is now -- you will constantly be
> having to protect yourself from entities that can't be trusted. It's just
> that the tech involved will be somewhat more sophisticated. It's dynamism in
> action -- things constantly changing. The Sysop Scenario sounds like wishful
> socialist central-planning type thinking...although there's nothing to say
> that central planning couldn't work if you have sufficient processing power.
> But it's not a free society.
>
I think your glasses are a bit too rose-colored. In a transhuman world, not
only is the tech changing but the minds are too. The only way to fully
protect yourself would be to be as smart as the smartest individual.
Otherwise, they can invent something you can't defend against. This is
an unstable situation unless everyone somehow achieves the same
permanent level of intelligence and tech capability. Even then you may
have issues like groups ganging up on one individual. It's just a huge
mess of a future, at least potentially.
A Sysop Scenario has ZERO to do with central planning. It is no more
centrally planned than our current USA system is centrally planned;
actually, it would be quite a bit better than the USA system, which
still has a bunch of central, one-size-fits-all laws, etc. Everyone
living in it is ABSOLUTELY FREE, except they can't mess with anyone
who doesn't want to be messed with. I still think you have some large
misconceptions about what I'm describing.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/