RE: question for the singularity institute people

From: Reason
Date: Thu Jul 12 2001 - 03:32:19 MDT

---> Eliezer S. Yudkowsky
> > So, why no SourceForge project?
> We might try and open-source the development of tools, if they're tools
> that can be useful to others. In fact, that's currently the plan with
> respect to at least one of the tools we might need. But the AI itself
> doesn't seem like the kind of thing that could be open-sourced. I used to
> plan that way, but later learned, on reading my Eric S. Raymond, that open
> source is especially well suited to certain types of projects, and
> generally intelligent AI is not one of them. Characteristics that make AI
> wrong for open-source include: (1) Open-source works best when the
> creation to be implemented is well-understood (Linux was not the first
> Unix implementation);

Depends on how you want to define "well understood". I'd substitute "highest-level
design spec mostly completed" as a minimum. That gives you the all-important
end goal and a navigable pathway towards modularizing development. Then the
project is just partitioned and subpartitioned, and so on, until you end up
with pieces that can actually be coded.

But I don't think that this is an open source thing -- any software project
is pretty much doomed unless you know where you're going and have a map for
dividing up the work.

> (2) Open-source is very hard to get started unless
> there is running code that does something cool, and there may be a heckuva
> lot of coding required before a true general intelligence can say "Hello
> world" much less do cool stuff;

Tools will do in this case; a partially finished tool is enough to attract
developers -- especially if it has cachet by being associated with a high-profile project.

Demonstration models will do as well. You don't need a hello world -- you
just need a conceptual framework in code form and guidelines on how the
components will flesh out later. Frameworks in code form attract developers
like wax attracts bees (bad analogy). They just beg to have components
coded for them.

> (3) Open-source works best when code can
> be developed by a loosely distributed team working in rough but not
> perfect synchronization; an AI, *especially* in the initial phases, would
> probably need to be developed by a very tightly knit team.

Initial blocking out of the design by a tightly knit team, I'd agree, but I've
yet to meet a project in which at least 80% of the programming time wasn't
spent simply completing and fleshing out things that had already been
spec'ed and prototyped. How would an AI -- and early prototype code -- be
any different?

So, OK, after some rambling, my suggestion and 2c worth: I don't think it
would hurt you guys at all to codify your initial arguments on the form and
nature of AI into a framework for components -- defining the APIs, putting the
framework together; that's all the tightly-knit-team stuff. Then open-source
it, announce it far and wide, and see who comes out of the woodwork to start
building components to fit into the framework.
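To make the suggestion concrete, here's a minimal sketch of what "framework plus
pluggable components" could look like, in Python. All of the names here
(Component, Framework, Tokenizer) are hypothetical illustrations, not anything
from SIAI's actual design -- the point is just that the core team defines the
API contract, and outside contributors fill in well-specified slots:

```python
from abc import ABC, abstractmethod


class Component(ABC):
    """Hypothetical base API that every contributed component implements."""

    name: str  # unique identifier the framework registers the component under

    @abstractmethod
    def process(self, data: dict) -> dict:
        """Transform the working data; the framework chains components together."""


class Framework:
    """The part the tightly knit team ships: wiring, APIs, lifecycle."""

    def __init__(self):
        self._components = {}

    def register(self, component: Component) -> None:
        self._components[component.name] = component

    def run(self, data: dict) -> dict:
        # Naive sequential pipeline; a real framework would define much
        # richer contracts (dependencies, feedback loops, scheduling).
        for component in self._components.values():
            data = component.process(data)
        return data


# An outside contributor fills in one well-specified slot:
class Tokenizer(Component):
    name = "tokenizer"

    def process(self, data: dict) -> dict:
        data["tokens"] = data.get("text", "").split()
        return data


fw = Framework()
fw.register(Tokenizer())
result = fw.run({"text": "hello world"})
print(result["tokens"])  # ['hello', 'world']
```

The design choice doing the work here is that contributors never touch the
framework itself -- they only implement the published `Component` interface,
which is exactly the "80% fleshing out things already spec'ed" labor that
distributes well.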

Now, I'm speaking from a position of comparative ignorance, having only read
your literature and seen no real model implementations or discussions in the
AI/human-simulation terms I'm familiar with from the gaming side of things,
but I don't see any reason for an AI not to be componentizable...


This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:39:44 MDT