Geniebusters (was: RE: Yudkowsky's AI (again))

Billy Brown (bbrown@conemsco.com)
Tue, 30 Mar 1999 09:26:34 -0600

Lyle Burkhead wrote:
> to which Billy Boy replies,
>
> > Maybe you should - although I hope you'd do
> > a better job of it than he did. Mr. Burkhead seems to be
> > fond of making sweeping statements about what is possible
> > in fields he obviously doesn't know anything about....
>
> This kind of thinking weakens you. This is not the way to see reality
> clearly. On a battlefield, in business, or anywhere, the one who sees
> clearly wins. Our way of thinking (“calibration”) is exemplified by the
> geniebusters site. It strengthens us. It does lead to clear perceptions.

Ah. So you do still read the list. I wondered.

Actually, my criticism of your site has nothing to do with the kind of nano-utopianism that you are trying to fight. I agree completely with your major thesis - that the mere invention of nanotech does not inevitably lead to genie machines and the end of economics. I am rather dismayed by the number of people who become so enthusiastic at the prospect of such technologies that they forget to think about what it would take to actually use them.

However, I think that you make the opposite mistake. Certain sections of your site make it seem as if your method is to assume that any revolutionary claim must be false, and then look for an excuse to back up your assumption. This is just as wrong-headed as assuming that all such claims are true.

You rely heavily (and, IMO, properly) on comparisons between hypothetical technologies and real ones when making projections. However, for these 'calibrations' to be meaningful they must compare systems that actually have similar properties. Determining whether a proposed comparison is accurate requires a certain level of understanding of both of the relevant fields, which of course means that no one person can do this for every possible proposal.

Now, I have no objection to most of your proposed calibrations. However, there are two big ones that IMO are inaccurate:

  1. The comparison of a truly general-purpose nanomanufacturing system to a modern industrial nation is such an oversimplification that it has no predictive value. There are too many differences between the behaviors of the two systems, and the implications of these differences need to be assessed individually and in detail if you want to make a meaningful prediction.
  2. Your assertion that any software capable of replacing a human would demand a salary is unsupported. AI research has had good success in the past at producing non-sentient programs that can do things once thought to require sentience. There is good reason to expect this trend to continue, especially in the areas relevant to automated manufacturing.

There are also several lesser points I disagree with, but these two are the important ones. If you are interested in discussing the matter, on or off the list, you have my e-mail address.

Billy Brown, MCSE+I
bbrown@conemsco.com