Re: `capitalist' character values

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jul 23 2001 - 10:06:28 MDT


"Eliezer S. Yudkowsky" wrote:
>
> (Originally sent as a private email to Damien; Damien asked that I forward
> it on to the list. Slightly edited/augmented.)
>
> ==
>
> My take on minimum guaranteed income (MGI):
>
> 1) To the extent that the disruption you postulate is *not* being
> produced by ultraefficient AIs, it may not be a good investment for me to
> form an opinion. Often there *are* no good answers to pre-Singularity
> moral questions, and I tend to view pre-Singularity capitalism as a means
> to an end. A lot of MGI debate seems to turn on whose "fault" it is. I
> don't *care* whose fault it is, and since I'm trying to solve the problem
> by means other than proposing new structures of social humans, I have no
> *need* to decide whose fault it is. This is a moral question that will
> make no sense post-Singularity, and my social contribution consists not of
> rendering a judgement on the moral question and helping to produce social
> conformity, but in working to turn the moral question into nonsense.
>

Fault is irrelevant. Creating a world we most wish to inhabit
is what is relevant. This includes taking care of the
inhabitants, including making the most room for them to take
care of themselves, but not necessarily limited to that.

Moral questions are difficult. That doesn't mean we can afford
to put them off until "come the Singularity" any more than we
can wait for all things to be resolved "in heaven".

> 2) To the extent that the disruption you postulate is being produced by
> efficient but not hard-takeoff AIs, a rapid-producing Friendly AI would
> not care about green pieces of paper, or even vis own welfare,
> except as means to an end, and would be expected to approach philanthropy
> rather differently than humans. Possibly one of the few plausible paths
> to a minimum guaranteed income without the need for government.
> This scenario is highly implausible due to the expected Singularity
> dynamics, and I mention it purely so that I can't be accused of "blowing
> off a possible problem" just because it's a Slow-Singularity scenario.
>

Well, sorry, but that is not the world we live in now. It
sounds like you are still "preachin' the sweet by and by" to
me. How will we get to the Singularity if this pre-Singularity
world rips itself apart? How will we ready ourselves for the
Singularity if we refuse to think about what it means and what
we wish to create, both now and within it?

 
> 3) To the extent that the disruption you postulate is being produced by a
> hard takeoff, I don't expect any problems as a result; quite the
> opposite. A nanotech-equipped Transition Guide is perfectly capable of
> giving poor people as well as rich people what they want, if in fact the
> Transition Guide would even notice the difference, which is unlikely.
> ("Notice" in the decisionmaking, not perceptual, sense). Under the Sysop
> Scenario, everyone gets an equal piece of the Solar System, again
> disregarding as utterly irrelevant any existing Earthbound wealth
> inequities.
>
> Basically, I don't see a minimum guaranteed income as being necessary or
> desirable at any point in the next decade if the Singularity occurs in
> 2010; we don't yet have enough ultraproductivity to blow off the
> production drop introduced by the incentive change of an MGI system, and I
> don't expect to see that pre-Singularity. I don't expect the Singularity
> to be delayed until 2030, but if it is, I'll be doing what I can on the
> near-human Friendly AI side of it in the meanwhile.
>
>

Do you know how much we spend on HEW now? It is quite huge. An
MGI wouldn't necessarily require money that we don't already
collect.

What exactly is going to lead humans to understand work in a
world of abundance they are not accustomed to? Just the SI
divvying up the solar system and telling us what will be? Do
you really think that can work?

- samantha


