Brian Atkins wrote:
>
> Hmm, well, you've obviously got Eliezer thinking about such stuff, that's
> one. He is working on Ben Goertzel of webmind.com, so hopefully Ben will
> start thinking about this more. And you definitely have people like Minsky
> thinking about these issues, since he was there at the recent Foresight
> Senior Associates Gathering and attended Eliezer's talk on Friendly AI.
Minsky may be thinking about these issues, but he hasn't gotten very far.
He's still at the subgoal-stomping-on-a-supergoal stage; his phrasing went
something along the lines of "If you ask the AI to solve the Goldbach Conjecture, it
might wipe us all out to prevent us from interfering with its solution of the
Goldbach Conjecture." If he picked up anything at all from my talk, it didn't
show, and he doesn't appear to be interested in discussing the matter with
anyone. I've written him off unless something new pops up.
Ben Goertzel has read _Coding a Transhuman AI 2.2_ and we've traded comments
on Friendly AI on the SL4 mailing list, but we're delaying a more complex
discussion until I can read the newly released docs on Webmind's design and he
can read the not-yet-published "Friendly AI" section of CaTAI. I will say
that Ben Goertzel currently seems to be thinking in terms of a transhuman
Friendly AI that increases in intelligence slowly and still operates within
the context of a human economy. I hope either to persuade him that the
very-rapid-transcendence scenario is more likely, or to get him to agree that
it makes sense to overdesign for that scenario.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence