From: Adrian Tymes (wingcat@pacbell.net)
Date: Tue Sep 02 2003 - 16:32:46 MDT
--- Brett Paatsch <bpaatsch@bigpond.net.au> wrote:
> Personally, I don't see myself doing so *voluntarily*,
> especially when any benevolence, real or alleged, would
> be a matter still to be determined, at least so far as
> I was concerned.
>
> Or am I missing the point here? How *would* a single
> super general AI actually benefit? Would it have *no*
> political power but, say, instantly suggest optimal
> game-theoretical solutions to otherwise intractable
> problems, or is it the super inventor that cares
> nothing for intellectual property rights?
At first, of course, the AI would have to work through
human agents at some level, if for no other reason than
to be hooked up to things that can affect the real
world. The AI's "magic" is not in generating real-world
influence from nothing, but in leveraging even a tiny
bit of real-world influence.
Consider, for instance, an AI that could correctly
guess the next hour's trading on Wall Street with 99+%
accuracy, with an online daytrading account starting
at $1,000 as some researcher's experiment. With the
resulting string of "good days", this money would grow
exponentially, eventually allowing the AI to become
the majority stockholder in several corporations.
Granted, the stocks would all be in its host
researcher's name, but the AI would also presumably be
familiar enough with said researcher to gain
cooperation - voluntary or not, knowing or not - in
setting the AI itself up as the mouthpiece through
which orders are given. A bit more capital would
allow it to become the sole stockholder in at least
some of these cases, streamlining the process.
Purchase orders could then be used to acquire real
goods (likely among them: distributed hosts for the
AI).
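To get a rough sense of the compounding involved, here's a minimal sketch. The per-day return and the billion-dollar target are made-up illustrative figures of mine, not anything from the scenario above; the point is only that a steady edge compounds fast:

```python
# Illustrative compound-growth sketch. All parameters are
# assumptions for the sake of the example, not claims about
# real markets or about what such an AI could actually earn.

def days_to_target(start, target, daily_return):
    """Count trading days until `start` compounds past `target`."""
    balance = start
    days = 0
    while balance < target:
        balance *= 1 + daily_return
        days += 1
    return days

# Suppose the near-perfect hourly calls net, say, 5% per
# trading day (a made-up figure) on the $1,000 stake:
print(days_to_target(1_000, 1_000_000_000, 0.05))  # → 284
```

At that (assumed) rate, $1,000 passes a billion dollars in a bit over a year of trading days, which is the sense in which "a string of good days" grows exponentially.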
Or consider the super inventor you described above,
who works with its creators to develop product ideas
and optimal markets, forming the basis of a "miracle
works" corporation that acquires wealth in the normal
fashion but greatly accelerated. And then the AI,
having taken over financial management as well ("I am
a calculator, and I have downloaded all the applicable
laws; what savings or benefits would we accrue by
hiring an accountant?"), has a few ideas for how to
spend a portion of the wealth it has helped generate.
There are other methods as well. However it does it,
the AI continues the cycle until it has bought much of
what is both ownable and worth owning - specifically
excluding things that people will never sell. It
doesn't need to own your house if it owns the utility
feeds into your house, the media, et cetera.
Survivalists - setting themselves far apart from
society, generating their own food, water, and power,
and disbelieving what they read - would either not
interact with society, and thus not be an issue to the
AI, or interact with (and have to obey the laws of) a
society whose lawmakers take the same corporate
contributions seen today, except that those
corporations now share a common agenda on certain
topics. There is, of course, the problem of rebellion
against the law. But we're presupposing an AI more
than capable of learning from history - including the
classic failure mode of dictators who mismanage things
and then try to patch over the results with fear or
propaganda rather than fixing the true problem. Such
an AI would fix things so there's very little, if
anything, that people have any desire to rebel against
- and it would maintain enough humility to make sure
this really is the case, rather than just deluding
itself or having its agents delude it.
And that's just the economic approach. There are
others.
This archive was generated by hypermail 2.1.5 : Tue Sep 02 2003 - 16:46:34 MDT