> On Mon, 30 Apr 2001, Aleks Jakulin wrote:
> Sure, current languages are broken, but that doesn't mean much: the
> hardware itself is broken, too. It's a vicious circle of the hardware
> albatross hanging upon software's neck, and the other way round. And the
> crew has started dropping, already.
I agree. But progress is usually made with baby steps, not jumps. When the needs
are demonstrated, the solutions will follow. Hardware is very conservative ---
many have been burned by providing something that was eventually not supported.
Practically nobody will throw a technology on the market without knowing how it
will be used. Even fascinating technologies such as FPGAs are marginalized,
because they're too radical. And in any case, before you implement something in
hardware, it has to be simulated and demonstrated in software.
An interesting challenge is how to work well with architectures such as CAs.
Right now it seems that they're used either as a machine-code equivalent for
logic expressed as "traditional" serial code, or for evolved computational
circuits. It seems to me that a new generation of significantly more
sophisticated tools is required to take advantage of notions like heuristics.
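To make the local-update programming model concrete, here is a minimal sketch
of a 1-D cellular automaton step (rule 110, which is known to be
Turing-complete); every cell computes its next state from its immediate
neighbours only, which is exactly what makes CAs hard to target with
"traditional" serial tools:

```python
# Minimal 1-D cellular automaton: rule 110, wrapped at the edges.
# Each cell's next state depends only on its 3-cell neighbourhood.

RULE = 110  # the rule number encodes the 8-entry lookup table in its bits

def step(cells):
    """Advance one generation; cells is a list of 0/1 values."""
    n = len(cells)
    out = []
    for i in range(n):
        left, mid, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (mid << 1) | right  # 3-bit neighbourhood index
        out.append((RULE >> idx) & 1)
    return out

# Start from a single live cell and iterate a few generations.
row = [0] * 15
row[7] = 1
for _ in range(5):
    row = step(row)
```

The point of the sketch is that there is no global control flow at all: the
"program" is nothing but the lookup table, and everything else is local
signalling.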
> No. We need something which can twiddle huge amounts of little integers
> very quickly. Because of relativistic latency and fanout limitations you
> will have strong signalling locality. You can't program in an environment
> like this, despite it being provably optimal. Human mind can't handle
> literally billions simultaneous streams of control flow.
> Luckily, you don't have to. You'll define boundary conditions,
> functionality metrics, either formally (we're probably smart enough to do
> that, it will be rather tedious, though), or informally (in an interactive
> teaching session, we're certainly smart enough to do that, in a pinch we
> can hire even a few chimps), and let the system figure out the pattern of
> these little integers, that defines the mapping solving that problem.
I generally agree. The main dilemma is how much of the heuristics people should
provide -- on one hand there are genetic algorithms, on the other hand-coded
"intelligence". My position is somewhere in between. I believe Minsky had a
nice way of saying it, something along these lines: if natural evolution of
intelligence took 200 million years, common sense and reason might take far
less. The quest is for the most efficient approach (thus realistically
feasible), not the simplest theoretically feasible one.
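The spectrum between the two poles can be illustrated with a toy search over
bit strings: define a functionality metric, let mutation do the work, and
observe what a "hand-coded heuristic" buys you as a starting point. The
fitness function and all names below are invented purely for illustration:

```python
# Toy (1+1)-style evolutionary hill climb over bit strings.
# "blind" starts from scratch; "seeded" starts from a heuristic half-answer.
import random

TARGET = [1, 0] * 8  # the pattern the "functionality metric" rewards

def fitness(genome):
    """Count positions that agree with the target pattern."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit independently with the given probability."""
    return [(1 - g) if random.random() < rate else g for g in genome]

def evolve(seed, generations=200):
    """Keep the child whenever it is at least as fit as the parent."""
    best = seed
    for _ in range(generations):
        child = mutate(best)
        if fitness(child) >= fitness(best):
            best = child
    return best

random.seed(0)
blind = evolve([0] * 16)               # pure evolution from scratch
seeded = evolve(TARGET[:8] + [0] * 8)  # evolution from a heuristic guess
```

Because a child is only accepted when it is at least as fit, fitness never
decreases; the seeded run simply begins higher up the hill, which is the whole
argument for providing some heuristics by hand.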
> > code is too inefficient to be interpreted. Given a task, the system
> > compiles the important tasks into code that executes quickly. And the code
> > that runs especially often, enough to justify the expenditure of time, can
> > be recompiled into FPGAs. And every now and then, the system might provide
> > designs for custom chips and ask the human guardians for them.
> A truly smart system designs itself in the bootstrap, and it doesn't need
> slow and stupid "guardians", at least once it is significantly into the
> bootstrap phase. You do not want anything like this roaming the current
> landscape. It would be literally the end of the world. It might make sense
> at some later point of the game, where there are no monkeys, nor a
> landscape to roam.
Again, it's a question of efficiency, and of the trade-off. We will see who
gets there first. But read carefully at what point and for what purpose I was
referring to human guardians.
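The tiered scheme quoted above -- interpret, then compile what is hot, then
push the hottest paths into FPGAs -- can be sketched as a simple
count-and-promote dispatcher. The threshold and both implementations are
invented stand-ins; promotion here just swaps a Python function, where the
real system would invoke a compiler or FPGA synthesis:

```python
# Count invocations of a slow implementation and swap in a faster one
# once the call count justifies the "expenditure of time".

HOT_THRESHOLD = 100  # invented cut-off for illustration

def make_promotable(slow_fn, fast_fn):
    """Wrap slow_fn; after HOT_THRESHOLD calls, dispatch to fast_fn."""
    state = {"calls": 0, "impl": slow_fn}
    def wrapper(*args):
        state["calls"] += 1
        if state["calls"] == HOT_THRESHOLD:
            state["impl"] = fast_fn  # "recompile" the hot path
        return state["impl"](*args)
    return wrapper

def slow_square(x):   # stands in for interpreted code
    return sum(x for _ in range(x)) if x > 0 else 0

def fast_square(x):   # stands in for the compiled version
    return x * x

square = make_promotable(slow_square, fast_square)
results = [square(i % 10) for i in range(200)]
```

Callers never notice the promotion, which is the property that lets the system
restructure itself underneath running code.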
> > * Organization
> > that would allow many people to contribute, without breaking the system.
> > Deductive thought is just one subsystem, other subsystems are spatial
> > thought, classification, clustering, self-analysis, etc.
> I hope you like it in there, where you're sitting. I've been there myself,
> briefly, but thankfully, have gotten better since.
I do not quite understand what you mean here. Could you explain?
> We don't need no compilation//we don't need no flow control.
Even we monkeys have multiple layers of behavior control: some are reflexive
and fast, others reflective and flexible.
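That layering can be sketched as a subsumption-style controller in which a
fast reflex may veto a slower deliberative layer. The sensor keys and actions
are invented for illustration:

```python
# Two-layer behavior control: the reflex layer wins whenever it fires.

def reflex_layer(sensors):
    """Fast and hard-wired: fires only on immediate danger."""
    if sensors.get("obstacle_cm", 1000) < 10:
        return "stop"
    return None  # no opinion; defer to the slower layer

def deliberative_layer(sensors):
    """Slow and flexible: plans toward a goal (trivially, here)."""
    return "advance" if sensors.get("goal_visible") else "search"

def control(sensors):
    # Subsumption-style priority: reflex output suppresses deliberation.
    return reflex_layer(sensors) or deliberative_layer(sensors)
```

The reflex path involves no planning at all, which is what makes it cheap
enough to run at full speed all the time.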
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:01 MDT