Re: Genetic Eng discussion on Slashdot

From: Hal Finney (hal@finney.org)
Date: Wed Apr 16 2003 - 15:42:44 MDT

    Surprisingly favorable discussions on Slashdot. Often those guys get
    pretty reactionary.

    I liked Christine Peterson's abstract at
    http://www.bu.edu/pardee/events/conferences/ABSTRACTS.htm, especially
    her opening paragraph:

    > It is possible to make potentially useful projections regarding
    > technological developments in the 50-to-250 year timeframe, but strong
    > discipline is needed to avoid our natural tendency to focus on nearer-term
    > issues. Organizers of the Conference on the Future of Human Nature are
    > hereby encouraged to continually redirect the group into discussing the
    > desired timeframe. This will be difficult given the senior level of
    > many participants -- not to mention their independent natures --
    > but it will be necessary in order to make any progress on the
    > challenge before us.

    See that line about how the "senior level" of many participants would
    keep them from looking out as far as 50 years? Ha! I love it when
    people tell it like it is. Scientists are so afraid to be speculative.

    She concludes with an intriguing proposal:

    > Given the seeming inevitability of a wide variety of entities in the
    > 50-to-250 year timeframe -- including traditional humans, augmented
    > humans, and machine-based intelligences -- an obvious goal is to work
    > for peaceful
    > coexistence. This would include ensuring that the use of augmentation
    > technologies is voluntary, and that the physical security and assets of
    > humans are protected against coercion.
    >
    > A subgoal would be that traditional human families and communities
    > continue to be able to live as they choose, without either physical
    > force or confiscatory taxation levels making it impossible for them to
    > live by their traditions.
    >
    > How can this be accomplished in a world with entities that are far more
    > intellectually (and, presumably, economically) powerful than traditional
    > humans? Our species already has some experience in handling such entities:
    > our governments. The best answer found to date seems to be the use
    > of checks and balances. Additional insight can be obtained from the
    > field of strategy known as game theory. Preliminary theoretical work
    > has been done on this issue by nanotechnology theorist K. Eric Drexler
    > and is now being written up for publication.

    It will be interesting to see what Eric Drexler comes up with in
    the way of constraining or controlling the power of future SIs.
    It sounds like he is working on some design guidelines, perhaps
    based on separating functionality into different units which would
    then limit each other. If we make one SI, we can't control it; but
    if we make three SIs, maybe any two of them can keep the third in
    check. That could be a provocative and novel approach, well worth
    exploring.
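
    To make the mechanism concrete, here is a toy sketch in Python of
    such a two-of-three veto scheme. Everything in it (the Agent class,
    the stand-in harm test) is invented for illustration; it is not
    taken from Drexler's unpublished work:

        # Toy 2-of-3 "checks and balances": an action proposed by one
        # agent takes effect only if both of the other agents approve it.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Agent:
            name: str
            judges_harmful: Callable[[str], bool]  # this agent's own harm test

            def approves(self, proposal: str) -> bool:
                return not self.judges_harmful(proposal)

        def execute(proposer: Agent, proposal: str, overseers: list) -> bool:
            """Run the proposal only if every overseer approves it."""
            if all(o.approves(proposal) for o in overseers):
                print(f"{proposer.name}: '{proposal}' -> executed")
                return True
            print(f"{proposer.name}: '{proposal}' -> vetoed")
            return False

        harmful = lambda p: "seize" in p              # stand-in harm test
        a, b, c = (Agent(n, harmful) for n in "ABC")
        execute(a, "publish a proof", [b, c])         # approved, runs
        execute(a, "seize all resources", [b, c])     # vetoed by B and C

    Of course the hard part is whether two of the three could collude
    against the third, and how each unit's notion of harm gets defined
    in the first place -- presumably that is where the game theory
    comes in.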

    Hal


