"Michael M. Butler" wrote:
> I wonder how many present-day Silicon Valley "engineers" would meet this challenge.
> From "The Titanic and Risk Management" by Roy Brander
> Software engineers aren't BSEs. Should they be? When?
> How about nanotechnology engineers?
This is a very interesting document, and not just for the tale of the
engineers who went down with the ship. It shows how earlier ships were
built with enormous defense-in-depth, and how safety standards eroded
over time, until finally the Titanic was built with an unanticipated
failure mode: the flooding of one compartment lowered the ship to the
point where water could spill over a bulkhead and flood the next
compartment. Earlier ships such as the Great Eastern, defended in depth
by double hulls and other powerful features, survived far worse
accidents without even coming close to sinking.
The most interesting analogy, from my perspective, is of course to
Friendly AI, though there are suggestive parallels to nanotechnology as
well. It suggests that initially, when the engineers make a large
improvement, they get enough slack to load on their favorite safety
features; eventually, as the technology spreads and small marginal costs
become the dominant factor in competition, the safety features disappear.
The implication would be that the initial nanotechnology will have all of
Foresight's suggested safety features and will work; later, over time,
safety standards will erode. The same would probably hold true of
Friendly AI, if not for the Singularity factor. One is tempted to make
the analogy between "Creating Friendly AI" and the Great Eastern, but with
a Singularity occurring immediately thereafter, obviating the opportunity
for a Titanic fifty years later.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:40:05 MDT