Re: objectivists hate libertarians

Eugene Leitl (
Thu, 26 Sep 1996 13:34:35 +0200 (MET DST)

On Mon, 23 Sep 1996, Eric Watt Forste wrote:

> On Mon, 23 Sep 1996, Eugene Leitl wrote:
> > I think this is bullshit. I am fairly Libertarian, and I am strongly pro
> > capitalism. Maximizing individual freedom has no intrinsic value. It is
> > the maximization of in toto (over population, over time) happiness which
> > has value. Why?
> Eugene, I appreciate your sentiments, but I really don't think this
> stance makes any more sense to me than the objectivists' stance.

That was my point. Trying to find rational reasons for why we do things
is cheap rationalization ;) . There are none. Ultimately, this is a
hardware function. It does what it was designed to do (by the Blind
Watchmaker), no more, no less. (Admitting some random factor for the GA,
of course.)

To assume anything more is just a case of anthropomorphism.

> Maximizing happiness sounds like maximizing complexity to me; an effort
> to maximize something that we have *no* clue how to *measure* is just a

This assumes that happiness exists at all. (Ok, it probably does ;) What
one can do is define a happiness metric. This would require e.g.
statistically evaluated polling, and clearly it gives us no a priori
clue as to which of the countless possible future trajectories to select
to land in an alternative world with the maximum integrated happiness.
That would require future models, which grow progressively inaccurate if
not altogether useless.

Another arbitrary assumption would be how to distribute happiness.
Equidistributed? Weighted? Does one integrate over population? Over time?
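
The arbitrariness is easy to see in a toy sketch (all numbers and names
hypothetical, nothing here is a real metric): the same polled happiness
scores rank two candidate futures differently depending on whether one
totals, averages, or looks at the worst-off individual.

```python
# Toy illustration: identical hypothetical poll data, three aggregation
# rules, two different "best" futures.

futures = {
    # future -> per-person happiness scores over the population
    "A": [9, 9, 9, 1],   # high total, one miserable person
    "B": [6, 6, 6, 6],   # moderate but equally distributed
}

def total(scores):     return sum(scores)
def average(scores):   return sum(scores) / len(scores)
def worst_off(scores): return min(scores)   # maximin weighting

for name, metric in [("total", total), ("average", average), ("maximin", worst_off)]:
    best = max(futures, key=lambda f: metric(futures[f]))
    print(f"{name:8s} metric prefers future {best}")
```

Total and average pick A; maximin picks B. The "metric" itself smuggles
in the distribution decision.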

> metaphor, and exactly as misleading as some of the metaphors that the
> Objectivists have recourse to.
> (If anyone figures out how to measure these things, or how to measure
> life or extropy for that matter, please let me know.)

One could define a life metric, at least a boolean one. Since life is a
complex phenomenon, it would be somewhat context- (observer-) dependent.

Extropy is not a formal concept. It was never intended to be one.
Information/entropy can be measured, though.
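
Measuring it is routine, in fact. A minimal sketch: the Shannon entropy
of an empirical symbol distribution, in bits per symbol.

```python
# Shannon entropy H = -sum(p * log2(p)) over observed symbol frequencies.
from collections import Counter
from math import log2

def shannon_entropy(data):
    counts = Counter(data)
    n = len(data)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy("aaaa"))  # 0.0 bits: no uncertainty
print(shannon_entropy("abab"))  # 1.0 bit per symbol
print(shannon_entropy("abcd"))  # 2.0 bits per symbol
```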

> > [ objective reality? ]
> Now, this I also find curious because Darwinian evolution is in itself
> our best argument for believing in objective reality. Darwinian evolution

If one wants to define objective reality as something having the same
impact on different individuals (strictly speaking it can't, since no two
of them can occupy the same voxel region of spacetime, but most such
regions are quite dull, anyway), yes.

> is the explanation that each of us has recourse to when accounting for
> how our individual minds came into existence. Of course there is an
> outside; if there weren't, then our own existence as individuals would be
> a puzzling lacuna in the explanatory structure of our thought. If there

I think we can quite safely assume the existence of the outside as an
axiom. Everybody who disagrees can now safely quit quarreling with
figments of his own overactive imagination ;)

> weren't an outside, we'd be able to explain all our percepts in terms of
> our own activity, but we would lack any explanation for our own activity.

Right. A bad model, resembling recursive homunculi as an explanation of
cognition. Infinite regress, with each stage having the same complexity.
Baad model.

> The definition that I am currently experimenting with: objective
> reality is that part of reality which is the way it is independently of
> the influence of any individual person's mind. By a strong construction
> of this definition, only offplanet things are "objective reality" by now.

Only those things currently outside the light cone beginning at the
spacetime point signifying your birth ;)

> Certainly most of the objects surrounding you, the monitor you are
> reading, etc, are not independent of the causal influence of mind and
> thought. One might say that technology is the admixture of subjective

Of course they are not. Theoretically, you can influence everything within
your light cone. If the system is sufficiently ergodic, it even _will_
be influenced, whether you wish it or not.

> reality (thought and feeling) with objective reality (that which is
> independent of thought and feeling). Some people seem to think that what
> we are all trying to do here is to bring subjective reality into thorough
> admixture with the rest of the poor dead "objective reality" universe.

This one went right beyond me. Huh?

> However, I must insist that I still do not think the objective/subjective
> dichotomy is a very useful one philosophically. It's a good thing to get

_Can_ philosophy be useful at all? ;)

> exhausted thinking about, if you need that kind of exercise, but it tells

Mental calisthenics is fun. At the very least it keeps you from turning
into a human turnip too soon. Medicine men say that, at least.

> us practically nothing about what we ought to be doing, and isn't that
> the point?

Yes. Moreover, it can keep us from thinking other, more constructive
thoughts, and from doing other, more constructive tasks. We always quietly
assume that thinking enhances our fitness. In some (most?) cases it does
not. A bummer, eh?

Thinking can slow your reaction in a dangerous situation. People with
good reflexes are certainly better equipped there than powerful ponderers.
Just think about the fertility vs. IQ distribution. Does anybody have
actual statistics at hand?

> So I suppose I'm actually agreeing with Eugene. He seems to think that
> there is no point to elaborating the objective/subjective boundary, and

Yep. Unless one wants to exercise the neurotransmitter vesicles ;)

> I'm inclined to agree with him. But to deny the existence of this
> boundary, or to deny the existence of that objective reality that
> explains our individual existences just closes up the minds of most
> people who might otherwise have been listening to you.

Sigh... I have a strong impression that you might be right. For what
it's worth, I believe (just a sentiment, of course) in the existence of
an external, let's call it objective, reality.

> > Knowledge: a bag of tricks. Ethics: cooperation emergence due to
> > evolution pressure. Deterministic as hell.
> Determinism/indeterminism is another one of these questions that I don't
> like to see my friends taking sides on. This is an open question! It may

I wasn't implying strict determinism! It's just that there is a trend
toward complexity, and toward increasingly benign cooperation strategies,
which crop up sooner or later. Predicting their exact occurrence is
awfully hard, if not impossible, of course.

> be decades, centuries, or thousands of years before we have an answer to
> this question (and I'd be willing to settle for a proof that an "answer"
> is impossible), but it is certainly an open question right now.

It certainly is.

> How could a computer fully predict the future course of its own
> computation in a manner that we could usefully distinguish from its mere
> carrying out of that computation? It could not. If I could prove that it

Agree absolutely.

> could not, then I would have proved that the determinism/indeterminism
> question is a permanently open one, at least to the satisfaction of those

No. A complex system can "understand" a drastically simpler one. It can
contain a primitive model of itself (in fact it probably must in order to
survive: it has to represent its surroundings, and itself within those
surroundings, for future planning. Some bipedal primates even go to great
lengths to build metamodels, etc.). We all do.

> few people who think that human beings are a kind of computer. I suspect

No, not a computer. I just think information is a basic property of all
material systems (compare the definitions of statistical entropy and
information-theoretic entropy); even extremely weird objects like
singularities obey it, just as they obey energy conservation. Consider
observer interference in a QM measurement process. Information has about
the same status as energy, imo.

I think there is some equivalency between systems. Consider the
trajectories of a system's statespace evolution. If I can track it with a
model of sufficient accuracy, the model and the system are equivalent. If
that weren't so, all modelling would be just an exercise in futility.

Whether the system is a box of gas or a human mind makes no difference.

Consider statespace velocity times trajectory (information-theoretic)
entropy as an abstract computation index. Offhand, I can't think of a
system I couldn't benchmark with it.
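
To make the idea concrete, here is a toy version of such an index (the
definitions are mine, purely illustrative, not any standard quantity):
discretize a 2D trajectory into statespace cells, take the Shannon entropy
of the cell-visit distribution, and multiply by the mean statespace speed.

```python
# Toy "computation index": mean step length times entropy of visited cells.
from collections import Counter
from math import log2, hypot
import random

def computation_index(traj, cell=1.0):
    # Bin each point into a statespace cell and get the visit distribution.
    cells = [(round(x / cell), round(y / cell)) for x, y in traj]
    counts = Counter(cells)
    n = len(cells)
    entropy = sum(-(c / n) * log2(c / n) for c in counts.values())
    # Mean statespace velocity: average step length per tick.
    speed = sum(hypot(x2 - x1, y2 - y1)
                for (x1, y1), (x2, y2) in zip(traj, traj[1:])) / (len(traj) - 1)
    return speed * entropy

random.seed(0)
frozen = [(0.0, 0.0)] * 100          # a system doing nothing
walk = [(0.0, 0.0)]                  # a random walk
for _ in range(99):
    x, y = walk[-1]
    walk.append((x + random.uniform(-1, 1), y + random.uniform(-1, 1)))

print(computation_index(frozen))  # 0.0: no motion, no entropy
print(computation_index(walk))    # positive: it "computes" something
```

A frozen system scores zero on both factors; anything that wanders through
its statespace scores positive, which is the whole (rough) point.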

> Eugene is one of them. Me, I like to keep an open mind, although it gets
> harder every year.

I can't know that the human mind is just computation, of course; I merely
suspect it. I might be wrong.

> In the meantime, I'll be looking into more Popper. He was obviously
> fascinated by this question, and I haven't read everything he had to say
> about it yet.

Alas, I haven't the time to dig into philosophy... It's great fun, but
this pesky reality always interferes :(

> > I dunno... Going for the right goals, but for wrong reasons? What's wrong
> > with that? These objectivists are certainly not utilitarists. But they
> > are certainly irrational.
> Utilitarianism is irrational too. Most of the people that "utilitarian

Not if utilitarianism tries to derive implementable future strategies from
guessing the shape of the Great & Mysterious Fitness Function. If there is
such a thing, that is.

> libertarians" look to for guidance in difficult political questions (I

I think the iterated prisoner's dilemma (IPD) might well give us handy
rules for dealing with our fellow humans. I mean: what can we lose? Merely
building a wrong strategy from a wrong model. So what? It rarely kills
you, at least not immediately. Why should I resist the evolutionary
process, once I am aware it exists? I may. I may not. Free will? Who
knows?

> have in mind David Friedman here) deny that they are utilitarians, and for
> damn good reason: David Friedman is *not* a utilitarian, and neither am I.
> Unless "utilitarian" suddenly means something utterly different from the
> very clear technical meaning that was established at the climax of
> utilitarian thought in late nineteenth-century Britain. Frankly, of late

Alas, I am a cave man. I have no idea what these distinguished gentlemen
thought and wrote. My (very fuzzy) understanding of utilitarianism is
based on some contribution in "The Mind's I", about God being a
utilitarian, and such.

> I like Hayek's thinking best, and he's certainly no utilitarian either.
> Nor is he an objectivist, though I think Rand approved of him. If
> objectivists want to criticize libertarians, that's okay with me (though I
> really wonder about their motivation), but if they seek to criticize
> libertarians on the grounds that libertarians are utilitarians, then they
> are attacking straw men... a time-honored Objectivist recreation, I'm
> afraid.

World's a funny place ;)

> I suspect what makes them *really* nervous is that so many libertarians
> are vague Taoists or Discordians who, while continuing to keep their
> intellectual tools for thinking about philosophy sharp, have utterly
> abandoned any and all philosophical *commitments*. And who feel happier
> that way. And whose valuable political commitments aren't even slightly
> harmed by such lack of philosophical commitment. But most seriously
> committed Objectivists I've known don't even know how to begin arguing
> against this stance by any means other than the ad hominem. I would be

Hey, that's a perfectly rational thing to do! Ad hominem works great in a
debate; that's why it was invented. We're still not ripe for rational
(whatever that might be...) discourse yet, it seems. For now, studying
rhetoric and having a Ph.D. in social engineering (wonderful term,
whoever invented it ;) helps a lot in the real world.

> very interested in seeing some clear, calm, rational arguments against
> this stance.
> (I'd also be interested in seeing a solid refutation of Hayek, too, but
> I'm not getting my hopes up or anything.)

| mailto: | transhumanism >H, cryonics, |
| mailto: | nanotechnology, etc. etc. |
| mailto: | "deus ex machina, v.0.0.alpha" |
| icbmto: N 48 10'07'' E 011 33'53'' | |