'Eliezer S. Yudkowsky?' 'There is no Eliezer S. Yudkowsky. Only Zuul':
> Actually, my current take calls for a comparative, rather than a
> quantitative, goal system. So under that system, you'd put in the
> statement "The humans want you to do X". All else being equal, this
> statement is the only consideration; all else not being equal, it can
> be ignored.
Well, at that point, it seems to me that you don't have to figure out
what, in particular, it ought to do with its simulations, since its
primary assignment is to figure out what it ought to do. Leave questions
like informed consent up to the expert(s).
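To make the comparative rule concrete (this sketch is mine, not Eliezer's, and every name in it is hypothetical): treat "The humans want you to do X" as the lowest-priority key in a lexicographic ordering, decisive only among options that tie on everything else.

    # Hypothetical sketch of a "comparative" goal system: the human
    # directive is a pure tiebreaker -- decisive when all else is
    # equal, ignored otherwise.

    def choose(options):
        """options: list of (other_considerations, humans_want_it)
        pairs, where other_considerations is anything orderable."""
        best = max(score for score, _ in options)
        tied = [opt for opt in options if opt[0] == best]
        # The directive only matters among otherwise-equal options.
        for score, wanted in tied:
            if wanted:
                return (score, wanted)
        return tied[0]

    # B and C tie on everything else; the humans prefer C, so C wins.
    assert choose([(1, False), (2, False), (2, True)]) == (2, True)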
> I don't think so. Richness doesn't have to be extracted from the
> environment; it can as easily be extracted from mathematics or
> subjunctive simulations.
Information like this CAN be extracted from simulations, but it's not
possible to simulate something unless you already know quite a lot about
how it operates in the wild, as well as (at least for our mundane
processors) in what ways it's reasonable to abstract away from the messy
details. There's a lot it can learn once it can start running
simulations, but it's a long way from knowing "I need a goal" to
knowing enough about humans to run a simulation of one.
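To make that concrete with an example of my own, not Dan's: even a toy simulator bakes its prior knowledge in up front. The dynamics, the constants, and the choice of what to abstract away are all inputs to the simulation, never outputs of it.

    # Hypothetical illustration: a toy pendulum simulator. Everything
    # that makes its output meaningful -- the equation of motion, the
    # value of g, the choice to ignore air resistance -- had to be
    # known before the first run.

    import math

    def simulate_pendulum(theta0, length, steps=1000, dt=0.001, g=9.81):
        """Free pendulum; drag abstracted away (a modeling choice)."""
        theta, omega = theta0, 0.0
        for _ in range(steps):
            omega -= (g / length) * math.sin(theta) * dt  # known dynamics
            theta += omega * dt
        return theta

    # Without already knowing g, the length, and that drag is
    # negligible, this number says nothing about any real pendulum.
    print(simulate_pendulum(theta0=0.3, length=1.0))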
> The general problem is that human beings are stupid, both passively and
> actively, and over the course of hundreds of thousands of years we've
> evolved personal and cultural heuristics for "How not to be stupid".
> The entire discipline of science is essentially a reaction to our
> tendency to believe statements for ideological reasons; if an AI doesn't
> have that tendency, will it evolve the rigorous methods of science?
It would be a mistake to assume that an AI will be smart across the board.
It will be smart eventually, but you must assume that it will be stupid,
though methodical, at first. Historically, the guys who burned heretics
were nothing if not rigorous and methodical.
I'm not familiar with your distinction between active and passive
stupidity; I take it you mean that a rock is passively stupid, whereas
the guys who condemned Galileo were actively stupid. (A rock never makes
mistakes?)
Frankly, I'm suspicious of the idea that an AI will not be actively
stupid. Granted, post-Singularity it won't be, but for most of the
period that YOU have to worry about, you need to anticipate that the AI
will be both passively and actively stupid.
> Anyway, it gets complicated.
It certainly does.
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings