Spike Jones wrote:
> Hmmm. I didn't make this clear at all. I don't mean make a 3D
> brain outta silicon. I meant a silicon microprocessor running a
> simulation of carbon-based brain cells. I suppose the first time
Sorry, stuff like http://neuron.duke.edu/ doesn't scale. The neural
hardware does a lot of computation, so you'd be sorely pressed
to abstract it all into software. The best way to emulate a given
physical system is to tessellate it into voxels, with one set of laws
applied to all of them simultaneously. Mapping a problem to dedicated
cellular automata hardware is far more efficient than a software
abstraction.
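The voxel idea can be shown with a toy automaton: one scalar state per
voxel, one local rule applied to every voxel synchronously. This is a
minimal sketch, not any particular neural model; the lattice size, the
diffusion-style rule, and the 0.25 coupling are all illustrative
assumptions.

```python
# Toy 1D voxel automaton: the same local rule is applied to every
# voxel simultaneously (synchronous update), as a CA machine would.

def step(grid):
    """One synchronous update: each voxel relaxes toward its neighbours' mean."""
    n = len(grid)
    new = [0.0] * n
    for i in range(n):
        left = grid[(i - 1) % n]    # periodic boundary
        right = grid[(i + 1) % n]
        new[i] = grid[i] + 0.25 * (left + right - 2.0 * grid[i])
    return new

# A single "hot" voxel diffuses outward over the lattice.
grid = [0.0] * 16
grid[8] = 1.0
for _ in range(10):
    grid = step(grid)

print(round(sum(grid), 6))  # this rule conserves the total: still 1.0
```

The point of the sketch is the update discipline, not the rule itself:
because every voxel runs the same small rule, the whole lattice maps
directly onto parallel CA hardware instead of a serial software loop.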
> we make a real silicon based sim of a brain, it would run much
> slower than realtime, perhaps one millionth as fast. It would be
> not much of a conversationalist. At first.
Indeed, if 1 ms lasts for 20 minutes. Such a critter would not
be of much use for anything behavioural. It has to run in realtime,
or close to it, to be useful.
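The slowdown arithmetic is easy to check; a minimal sketch, using the
one-millionth factor quoted above (the "20 minutes" in the thread is a
round-up of the exact figure):

```python
# At one millionth of realtime, 1 ms of simulated brain time costs
# 1 ms * 1,000,000 = 1000 s of wall-clock time.
slowdown = 1e6           # factor quoted in the thread
simulated_seconds = 1e-3 # one millisecond of simulated time
wall_seconds = simulated_seconds * slowdown
print(wall_seconds / 60) # minutes of wall-clock per simulated millisecond
```

That works out to about 16.7 minutes per simulated millisecond, so a
one-second utterance would take roughly eleven and a half days.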
> But also a solution. We can run AI in geosynchronous orbit
> with restricted downlink.
If it's smart, it can't be contained. If it's useful, it can't
be kept behind a restricted downlink for long.
> 'Gene we have then two alternatives: one unknown and one
> unambiguously bad. I choooooose... the unknown.
Sure, it's more fun that way, and since we're neither typical
nor powerful (as far as I know), as well as few, the decision
is taken out of our hands anyway (with high probability, though
I'd welcome being proven wrong).
> Actually, this whole thread has been tremendously insightful.
> Given the two choices, I personally would rather take
> my chances with a possibly hostile AI than simply die the
> old-fashioned way. Yet if I had heirs, I see the point
I think the decision space is not boolean. There are a number
of future trajectories, some of them more high-risk (and,
unfortunately, also higher-payoff) and some of them less so.
For instance, this suggests that making a "natural" AI,
by figuring out embryoneuromorphogenesis and the function
of the early hardware, and raising the resulting "machine child"
in a human environment, should create a more human-like AI,
which 1) has empathy and can relate to us primates, and 2) has
trouble figuring out how it works, since it's only human and its
hardware layer is nontrivial. This is different from an ALife
AI that emerged/co-evolved at the molecular hardware level. Such
things will be very fast, and probably rather nonhuman. If
people panic and try to force them off the global network,
this will be interpreted as a hostile act.
Lack of empathy, access to world information and instrumentation,
superrealtime speed, stealthy ubiquity, these things all spell
out *DANGER* in my book.
> of view of the Luddites, who would prefer to relinquish
> technology and hand a low-tech world to their children
> and grandchildren. I myself see the entire exercise as
> pointless if they too are to grow old and die. Yet I can
> see where some may logically disagree. spike
What is considered "natural" and "proper" is subject to constant
erosion, even though the age shift in mature industrial societies
results in more conservative opinions and policies, stifling change.
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:17 MDT