J. Maxwell Legg wrote:
>> How is the existing ruler's Achilles heel exposed by m-w???
>Ask not "How" but "What". Is their reliance on figures and their notion that
>effects always follow causes the limited creation of a future m-w magician?
Many-worlds is deterministic, and accepts cause-and-effect. The scientist opens the box containing Schroedinger's cat and that *causes* the worlds to split, then and there.
Here are the relevant quotes from the FAQ you sent me:
Q8 When does Schrodinger's cat split?
The cat splits when the device is triggered, irreversibly. The investigator splits when they open the box. The alive cat has no idea that the investigator has split, any more than it is aware that there is a dead cat in the neighbouring split-off world. The investigator can deduce, after the event, by examining the cyanide mechanism, or the cat's memory, that the cat split prior to opening the box.
Q19 Do worlds differentiate or split?
    AAAAAAAAAAAAAAABBBBBBBBBBBBBBB
                                      --------------> time
    AAAAAAAAAAAAAAACCCCCCCCCCCCCCC    (Worlds differentiate)

occurs, rather than:

                   BBBBBBBBBBBBBBB
                  B
    AAAAAAAAAAAAAA                     (Worlds split)
                  C
                   CCCCCCCCCCCCCCC

according to many-worlds.
This false differentiation model, at the mental level, seems favoured by adherents of many-minds. (See "What is many-minds?")
Q20 What is many-minds?
In many-minds the role of the conscious observer is accorded special status, with its fundamental axiom about infinities of pre-existing minds, and as such is philosophically opposed to many-worlds, which seeks to remove the observer from any privileged role in physics. (Many-minds was co-invented by David Albert, who has, apparently, since abandoned it. See Scientific American July 1992 page 80 and contrast with Albert's April '94 Scientific American article.)
The two theories must not be confused.
What's more, unless you're proposing massively improbable violations of thermodynamic laws, m-w doesn't allow for magicians: the worlds will remain so far split that there is no chance that they will fuse in any way a magician could use reliably. Similarly, no magician in the future can influence events in the past using many-worlds.
>I read accusations that S&B operates in secret and assume this causes
>loss designed to stymie AI development.
I'm actually beginning to understand you here, but not quite. Does an AI need *all* of our information in order to run a neuronomy? Or just lots? Why isn't lots sufficient?
>Why do I feel that your agenda is S&B's?
The Skull and Bones building is about a block away from where I live here at Yale. It is closed to outsiders. Though you have no reason to believe me, I happen to be fundamentally opposed to secret societies and their ilk, for reasons that don't relate to this conversation.
>> Your view on how and why AI will be implemented will be my chance to have a
>> say in the making of new global politics? Or will the AI itself and its
>> implementation be my chance to have a say in the making of global politics?
>> You're not making any sense! Please, I beg of you, for your sake and
>> mine, clearly identify your position in a way that doesn't allude to
>> another idea that you haven't already explained!
>Answering a question with a question is your style not mine.
To begin with, I didn't assert that you answered questions with questions, so I don't understand why you're raising this point. Moreover, this is an abuse of the phrase. I have replied to almost all of your questions with a very particular question: "I didn't understand your question. Would you please explain it?"
>Take the simpler
>proposition and see if it happens. Any AI that's better than what I
>foresee as a
>global realization of a parallel implementation of Ingrid
>get my support so either way you'll have your chance. BTW, do you know how
>implementations has S&B scuttled?
I don't know about *any* AI implementations that S&B has scuttled. They're a secret society, for crying out loud! They make it their prime objective to prevent me (and you) from finding out about stuff like this. :)
>Why is Zapata making its move on the net?
Cursed if I know. A search for "zapata AND conspiracy" turned up nothing. ;)
Anyway, if I were to take a wild flaming guess: it's because they think that's where the money is.
>Try "George Kelly Ingrid" or better still search for a course on how to
>(Sorry, but you're asking for trouble by not doing enough for yourself.)
I do *plenty* for myself. There's a limited amount of effort I'm willing to exert in order to understand what you happen to think, however. :)
Anyway, if I follow you correctly, Ingrid is an attempt to automate and coordinate the distribution of personal construct grids, yes? If so, a simple program could coordinate the whole of human behavior, provided it had access to enough grids, or the right ones. Am I on the right track?
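For concreteness, here is how I picture one of these grids. This is only a sketch of a repertory grid in the style of Kelly's personal construct theory: elements rated along bipolar constructs, stored as a constructs-by-elements matrix of ratings. The element names, construct poles, and the crude similarity measure below are all my own illustrative assumptions, not anything taken from an actual Ingrid implementation.

```python
# Illustrative repertory grid: each row is a bipolar construct,
# each column an element, each cell a rating from 1 (left pole)
# to 5 (right pole).  All names here are hypothetical examples.

elements = ["self", "ideal self", "mother", "boss"]
constructs = [("kind", "cruel"), ("practical", "idealistic")]

ratings = [
    [1, 1, 2, 4],   # kind .. cruel
    [3, 5, 2, 1],   # practical .. idealistic
]

def construct_match(ratings, i, j):
    """Crude similarity between two constructs: mean absolute
    difference of their rating rows, inverted to a 0..1 score."""
    diffs = [abs(a - b) for a, b in zip(ratings[i], ratings[j])]
    max_diff = 4 * len(diffs)          # ratings span 1..5
    return 1 - sum(diffs) / max_diff

print(construct_match(ratings, 0, 1))
```

A program coordinating many such grids would presumably be comparing and merging matrices like this one; the actual Ingrid software, as I understand it, does something far more sophisticated (principal-components-style analysis), which this toy score doesn't attempt.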
>Since I first used Ingrid, I can't express how hard it is to put my
>english. My brain now rapidly construes in grids and as yet has no user
>semantic interface, but don't worry, I'm getting closer to finding a way
>this high dimensional dilemma.
I can see how this would pose problems. :) Might I recommend Strunk & White's _Elements of Style_? The rules laid out there are a little too difficult for most of today's AI, but aren't too difficult for most people I know.
>> Here's my essential problem: I have no idea what sort of system you're
>> proposing. The impression I get is that it involves neural nets
>> intimately, but you haven't yet explained how, beyond the idea that the net
>> itself would be running the show.
>I didn't ever say the net would be running the show; - you did.
And you tentatively affirmed what I said.
>In practice the
>adopted set of decentralized plans will be running the show and continuos
>will adjust those plans without today's delays and incompatibilities. Only
>plotting that keeps destroying the messengers can stop the development of
>governance based on artificial intelligence.
You continue to assert that an AI needs all the information, or as much as possible, in order to work. We humans function all right (if suboptimally) with incomplete information; it seems to me that an AI that required complete information before it would act would be effectively crippled, spending its time on a quixotic quest for every last fact it needs to solve the problem.
Indeed, I'm now beginning to see some of those parallels between capitalism and thermodynamics that Clark was alluding to: the mere act of trying to find out everything costs money. Trying to improve the efficiency of the whole economy by observing the whole economy and dictating its behavior yields less efficiency than simply letting people organize themselves and remain rationally ignorant.
>Let's look at something else for a change. Caesar, Lincoln and Kennedy
>to what I know were assassinated for wanting to print debt free money; - a
>mistake. Will a simple treaty proposal calling for synaptic links to be
>in all new software releases usher in a new round of assassinations? I am
>such a proposal right now.
The link to assassination politics on this page is broken... Do you know of any mirrors?
As to the point about assassinations, I'd not heard this particular theory before. From whom/where did you hear this?
-TODAY IS A GOOD DAY TO LIVE-