Re: Is this safe/prudent ? (was Re: Perl AI Weblog)

From: Brett Paatsch (bpaatsch@bigpond.net.au)
Date: Tue Aug 12 2003 - 10:58:46 MDT

    Anders writes:

    > A bit of scenario analysis, based on some binary assumption
    > trees:

    Neat analysis, Anders. I started to add a couple of columns on the
    left: column 4, does this drive the singularity significantly, and
    column 5, would the govt/military want to classify or render
    patents secret in this instance. I thought that 4 and 5 might
    couple in an unfortunate way - that is, just when the singularity
    is likely to get the biggest boost, the govt is also most likely
    to cut in, perhaps cutting it off - but I haven't had the chance
    to run through it yet to check.

    I think this sort of binary tree analysis could be taken up a notch
    to produce tables identifying where the best extropic reward for
    effort can be achieved, factoring in some reasonable assumptions -
    something like the rough sketch below.
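
    Something along these lines (Python, purely illustrative - every
    payoff, effort figure and column-4/5 guess below is a made-up
    placeholder, not a claim about the real numbers):

        # Rough sketch: score Anders' eight scenarios for "extropic reward
        # for effort" under made-up assumptions, and guess at my proposed
        # columns 4 and 5. Placeholder numbers only.
        from itertools import product

        scenarios = product([False, True],        # autodevelopment can occur
                            [False, True],        # high complexity threshold
                            ["Small", "Large"])   # perceived effect

        def score(autodev, threshold, effect):
            payoff = 10.0 if effect == "Large" else 2.0  # guessed payoff
            if autodev:
                payoff *= 2.0                            # take-off pays off sooner
            effort = 5.0 if threshold else 1.0           # guessed effort
            col4 = autodev and effect == "Large"         # drives the singularity?
            col5 = effect == "Large"                     # govt wants to classify?
            return payoff / effort, col4, col5

        for n, (a, t, e) in enumerate(scenarios, start=1):
            reward, col4, col5 = score(a, t, e)
            print(f"{n}: autodev={a!s:5} threshold={t!s:5} effect={e:5} "
                  f"col4={col4!s:5} col5={col5!s:5} reward/effort={reward:4.1f}")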

    Sorry haven't the time presently to give this the attention it
    deserves.

    Brett

    >
    >
    > If the strong autodevelopment scenario of AI is true, then AI
    > development is a "first come, first win" situation that promotes
    > arms races. But there are two additional assumptions that affect
    > things: is there a high complexity threshold to the
    > autodevelopment or is it just about finding the right seed, and
    > how much would the autodeveloping AI change things - while we
    > tend to assume it is a phase transition, it could turn out that
    > the top of the development sigmoid is not that high, and the
    > resulting ultimate AIs are still far from omnipotent.
    >
    > If we make up scenarios based on these assumptions, we get eight
    > possibilities:
    >
    >       Autodevelopment   Threshold   Effect
    >       can occur
    >
    >   1   No                No          Small
    >   2   No                No          Large
    >   3   No                Yes         Small
    >   4   No                Yes         Large
    >   5   Yes               No          Small
    >   6   Yes               No          Large
    >   7   Yes               Yes         Small
    >   8   Yes               Yes         Large
    >
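
    (To make the "binary tree" structure explicit: the row number minus
    one, written as three bits, gives the three assumptions. A tiny
    Python aside - my illustration, not part of Anders' post:)

        # Row n of the table corresponds to (n - 1) in binary,
        # one bit per yes/no assumption.
        for n in range(1, 9):
            autodev, threshold, effect = (b == "1" for b in format(n - 1, "03b"))
            print(n, "Yes" if autodev else "No ",
                     "Yes" if threshold else "No ",
                     "Large" if effect else "Small")
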
    > The first four are the scenarios where rapid autodevelopment
    > cannot happen, because general intelligence turns out to be messy
    > and incompressible. 1 is the case where AI develops
    > incrementally, never taking off or becoming very smart. 2 allows
    > you to push to superintelligence, but it requires a broad
    > research base. 3 and 4 represent situations where it is very hard
    > to get anywhere, and a huge push would be needed - which would be
    > hard to motivate if people believe they are in 3. But let's
    > disregard these for the moment, even if I think we should
    > consider refined versions of them as real possibilities.
    >
    > The last four represent the take-off scenarios. 5 & 6 are the
    > "seed is easy" situations and 7 & 8 the "seed is hard"
    > situations. If the seeds have a low initial complexity, then even
    > groups with small resources can build them. Manhattan
    > projects have an advantage, but it is not total. In 7 & 8
    > amateurs are unlikely to get there, and Manhattans will win the
    > game.
    >
    > How large the perceived effect of the AI is will determine
    > policy. If AI is seen as "harmless" there will not be a strong
    > push to control it from many quarters, while if it is believed to
    > be world-domination-class stuff, people will clamor for
    > control. (I made the mistake above of looking at the objective
    > power of the AI; let's retroactively change the third column to
    > "perceived power" - that is what matters for policy).
    >
    > The Center for Responsible Nanotechnology has written a very
    > interesting series of papers on control of nanotechnology, which
    > they consider to be relatively easy to bootstrap ("seed yes")
    > once an initial large investment has been achieved ("threshold
    > yes") and then it will change the world (for good or bad). Given
    > these assumptions (and that it is likely to be developed *soon*)
    > they conclude that the best way to deal with it is a single
    > international Manhattan project aimed at getting nanotech first
    > and setting up the rules for it as a benign monopoly, giving
    > DRM-limited matter compilers to essentially everyone to forestall
    > the need for competing projects. (I'm writing some technology and
    > policy comments on the papers, which will appear later; I disagree
    > with them a lot, but it is a good kind of disagreement :-)
    >
    > Compare this to AI. CRN are in scenario 8, and presumably their
    > reasoning would run the same for seed AI: we had better start a
    > central major project to get it first, and competing projects should be
    > discouraged until success guarantees that they can be prevented.
    > Of course, getting such a project off the ground assumes
    > decisionmakers believe AI will be powerful. It is worth noting
    > that if such a project is started for technology X, it is likely
    > to be a template for a project dealing with technology Y or even
    > extend its domain to that - we get the Technology Authority
    > trying to get a monopoly, and nobody else should be allowed to
    > play.
    >
    > On the other hand, the nightmare for this scenario is that seeds
    > do not have high complexity thresholds but are just a matter of
    > getting the right template in order. To get a Technology Authority
    > going takes time, and if myriad amateurs, companies and states
    > start playing in the meantime there is a very real risk that
    > somebody launches something. Even if it later turns out that the
    > AI is not super (it just changes the world economy totally, but no
    > gods pop up), the perception that it is dangerous is going to
    > produce calls to end these bioweapons-like projects. It is
    > worth considering that if the belief that seed AI is possible,
    > has a not-too-high threshold and will be powerful - Eliezer's
    > position as I understand it - becomes widespread among
    > policymakers, then in the current anti-terror climate such AI
    > research would likely be viewed as just as unacceptable, and as
    > much in need of stopping, as homebrew bioweapons work.
    > Expect marines kicking in doors. Things are actually more relaxed
    > in the high threshold belief scenarios, because there the worry
    > would just be other Manhattan projects; amateurs are not seen as
    > risks.
    >
    > On the other hand, if AI is not generally perceived as powerful
    > or possible, then the field is clear. No Manhattan projects, no
    > Homeland defense raids. That might of course be a mistake in
    > scenarios 5 and 6. This is where we are right now; the
    > policymakers and the public are unaware, or think it is
    > unlikely, that seeds or powerful AI will be developed.
    >
    > So where does this put the "AI underground" that believes in seed
    > AI? The ordinary academic AI world mostly believes in non-seed AI
    > with or without complexity thresholds, so they are not overly
    > worried. But if you think seeds are possible then things become
    > more complex. If you believe that there are complexity
    > thresholds, then you need a Manhattan-like project (be it the
    > Singularity Institute, a popular open source initiative or
    > selling out to North Korea). Otherwise, enough brains or luck
    > is all that is needed.
    >
    > Should you try to convince people about your views? If you
    > believe in the low threshold situation, then you should only do
    > it if you think that antiterror raids on AI developers are a
    > good idea because AI development is too dangerous - if you are
    > a megalomaniac, think that you could do it right or that AI
    > will almost certainly be good/safe, then you had better hack away
    > in secrecy instead, hoping to be the first. In the almost certainly
    > good/safe situation, or if you think that AI will just shake up
    > the world a little, then spreading sources around to facilitate
    > faster development makes sense. If you believe in a high
    > threshold situation you should go public (or more public, since
    > such a project is visible anyway) if you think it is likely that you
    > would end up in the big centralised project (which you assume to
    > be good or at least better than alternatives), or that you have a
    > reasonable chance in a race between Manhattans. If you distrust
    > the big project and worry about competition, you should be quiet.
    > If the threshold is high, spreading sources won't matter much
    > except possibly by allowing the broad criticism/analysis from
    > others "in the know".
    >
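
    (Trying to compress that go-public-or-not paragraph into code, as I
    read it - the parameter names and the exact branch order are mine,
    not Anders', and every input is a subjective belief:)

        # A sketch of the decision logic in the quoted paragraph above.
        def strategy(low_threshold, raids_are_a_good_idea=False,
                     megalomaniac=False, could_do_it_right=False,
                     ai_almost_certainly_safe=False, effect_only_moderate=False,
                     likely_in_big_project=False,
                     decent_odds_in_manhattan_race=False):
            if low_threshold:
                if raids_are_a_good_idea:
                    return "go public and argue for control"
                if ai_almost_certainly_safe or effect_only_moderate:
                    return "spread sources to speed development"
                if megalomaniac or could_do_it_right:
                    return "hack away in secrecy, hope to be first"
                return "stay quiet"
            # high threshold: amateurs are out, only Manhattan-scale projects matter
            if likely_in_big_project or decent_odds_in_manhattan_race:
                return "go (more) public"
            return "stay quiet; spreading sources adds little beyond outside criticism"

        print(strategy(low_threshold=True, could_do_it_right=True))
        # -> hack away in secrecy, hope to be first
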
    > From this analysis it seems that the "AI underground" that
    > believes in seed AIs in general would be rather quiet about it,
    > and especially not seek to convince the world that the Godseed Is
    > Nigh unless they have plenty of connections in Washington. An
    > interesting corollary is that besides the usual suspects on or
    > around this list, there are likely many others who have thought
    > about these admittedly obvious issues and reached similar
    > conclusions. There are likely many (or at least some) people
    > hacking away in cellars at their seeds besides the publicly known
    > developers. If I believed in seeds of low threshold, I would be
    > seriously worried.
    >
    >
    >
    > --
    > -----------------------------------------------------------------------
    > Anders Sandberg Towards Ascension!
    > asa@nada.kth.se http://www.nada.kth.se/~asa/
    > GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
    >


