Robert J. Bradbury writes:
> could develop (affordable) AI development machines. The real
> trick is letting someone like IBM develop the instruction set
> and software. The limited instruction set they have has to be
It is less a problem of the instruction set than of the architecture. PIM (processor-in-memory) means small memory grains, which don't allow you to use lavish data structures (such as voxels). On the other hand, you can hardwire the forcefield engine in silicon: finding nearest neighbours, accelerating DPMTA (or using voxels for electrostatic maps), integrating, etc.
I doubt IBM has something as advanced in mind, though. The machine would be too special-purpose then.
> highly specific for molecular modeling. Have you got a
> limited instruction set highly specific for neurohacking?
> Things like SetSynapseWeight, CopyAllWeights, CopyNeuroPattern,
> etc. come to mind but this really isn't my field.
I've looked at a massively parallel (kBit-wide bus) PIM architecture for neural applications a few years back; there are orders of magnitude of acceleration to be had there.
I'm glad such architectures are finally moving into the mainstream (see the Playstation 2 for the most prominent example).