Re: Beg your pardon? (Was: Teach the hungry)

Dan Fabulich (daniel.fabulich@yale.edu)
Thu, 03 Sep 1998 08:35:33 -0400

J. Maxwell Legg wrote:
>Quite right, it doesn't. It was an aside to show my confidence that the
>existing ruler's Achilles heel is exposed by m-w, as AI might allow a
>revolution of power.

How is the existing ruler's Achilles heel exposed by m-w???

>> Are you referring to the secret society "Skull & Bones" of which former
>> president George Bush was a member? How does Skull & Bones "slay a
>> perceived dragon," as you put it, and how does that result in information
>> loss? What does the activity of Skull & Bones and/or the loss of certain
>> information have to do with AI? I presume that it relates to m-w in that
>> the loss of information represents an irreversible process. Moreover,
>> beyond the fact that S&B has an agenda, what does any of THAT have to do
>> with accounting?
>
>It has to do with knowledge discovery by being able to have software parse
>the actual email content. Just look at the actual format above as an example
>of email that we humans can barely decipher, and then think how hard it will
>be to extract meaning by getting an AI to read this thread. My comment was a
>lament that software agents are not available during the actual writing
>process in order to structure the knowledge on the fly. Secretly I hope that
>'knowledge extractable' email might be the nicest way to kick-start my AI
>concept. Hey, I'm a dreamer.

This didn't answer any of my questions.
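
(As an aside, the "format above" that you say humans can barely decipher is
not the hard part. A few throwaway lines of Python can recover the quoting
structure; this is my own sketch, not anybody's "software agent":)

    # Sketch: recover the quoting depth of each line of a ">"-quoted reply.
    def quote_depth(line):
        """Count leading '>' markers, skipping the spaces between them."""
        depth = 0
        for ch in line:
            if ch == ">":
                depth += 1
            elif ch in " \t":
                continue
            else:
                break
        return depth

    def split_by_depth(message):
        """Group consecutive lines that share a quoting depth."""
        blocks = []
        for line in message.splitlines():
            depth = quote_depth(line)
            text = line.lstrip("> \t")
            if blocks and blocks[-1][0] == depth:
                blocks[-1][1].append(text)
            else:
                blocks.append((depth, [text]))
        return [(d, " ".join(t)) for d, t in blocks]

    reply = '>> What will? What is "this?"\n>\n>"this" is my view.\n\nYour view...'
    for depth, text in split_by_depth(reply):
        print(depth, "|", text)

Extracting *meaning* is another matter entirely, which is rather my point:
the mechanical part is easy, and the part you keep waving at is the part you
haven't specified.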

>> I didn't know that, nor do I know what sort of feedback loops you had in
>> mind. Continuously adjusted summaries of all encompassing global
>> activities? I have no clear idea as to what this is; I have even less of
>> an idea as to why it would be useful.
>
>I'm sorry I can't help you see the usefulness of my view of how and why AI
>should best be implemented at a global level.

<sigh> I was asking about "continuously adjusted summaries of all encompassing global activities," a string of words that makes no sense to me. I asked you to explain this. I see little use in such summaries at the moment, because I have no idea what you mean by them. Did you miss that part?

>> >Don't like what's going on here? Need a soapbox? Have a beef about
>> >something or other? Then this will be your chance to have a say in the
>> >making of new global politics.
>>
>> What will? What's "this?"
>
>"this" is my view of how and why AI should be implemented at the global
level.

Your view on how and why AI will be implemented will be my chance to have a say in the making of new global politics? Or will the AI itself and its implementation be my chance to have a say in the making of global politics? You're not making any sense! Please, I beg of you, for your sake and mine, clearly identify your position in a way that doesn't allude to another idea that you haven't already explained!

>> What exactly do you mean by a "qualitative statistical mediator?" What's a
>> "super ordinate construction?"
>
>"super ordinate constructions" is another term for "core constructs" from the
>specialty field of Personal Construct Psychology where the Ingrid software
was
>developed. See George Kelly's theories.

A web search for George Kelly turned up, as usual, nothing relevant to this conversation. Fine dining and sports, mostly. Care to give me a little more of a hint?

>I foresee grids (neurons) dynamically activating the levels of knowledge
>extracted according to Kelly's theories. Qualitative or synthetic data
>includes enough independent componentry to make it applicable to this
>imagery type of processing. I would hope that an open structure would allow
>enough variables to be processed that the contributors of the data would
>trust the inferences displayed. Therefore a "qualitative statistical
>mediator" is a software system capable of, say, presenting solutions to
>large problems. I wouldn't be surprised if such a tool was used to design
>the Northern Ireland peace proposal. (I know, for example, that Tony Blair
>was greatly influenced by Kelly's work.)

Dare I ask that you summarize what you mean here?
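
The closest I can get on my own is that a repertory grid is a matrix of
"elements" rated against bipolar "constructs." Here is my best-guess sketch,
with elements, constructs and ratings I invented myself; I am not claiming
this is what you, or the Ingrid software, actually do:

    # A toy repertory grid: rows are bipolar constructs, columns are elements.
    elements = ["email", "usenet", "the web", "television"]
    constructs = [("interactive", "passive"),
                  ("structured", "free-form"),
                  ("personal", "broadcast")]

    # grid[c][e] = rating (1 = left pole .. 5 = right pole); numbers invented.
    grid = [
        [1, 1, 2, 5],   # interactive .. passive
        [3, 4, 4, 2],   # structured .. free-form
        [1, 2, 3, 5],   # personal .. broadcast
    ]

    # One crude signal for a "core" construct: the constructs whose ratings
    # vary most across the elements discriminate between them the most.
    def spread(row):
        mean = sum(row) / float(len(row))
        return sum((r - mean) ** 2 for r in row) / len(row)

    for (left, right), row in sorted(zip(constructs, grid),
                                     key=lambda cr: spread(cr[1]),
                                     reverse=True):
        print(left, "--", right, ":", round(spread(row), 2))

If that is even in the right ballpark, I still don't see how it gets you from
rating grids to a "qualitative statistical mediator" that settles global
politics.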

>> In short, super-AI will read everything, know everything we can know,
>> process all the information, and deal with it accordingly. Yes?
>
>To what point in the life cycle of the AI does your irrelevant question
>refer?

My irrelevant question is trying to pin down your position to something I can understand. Your apparent enmity toward clarity isn't helping. I don't know what "point in the life cycle" I was referring to: I was trying to clarify YOUR position.

<snip>

>AI reporting structures will hopefully evolve to suit the needs, and before
>it's built, like you, I really don't know what to expect or what will be the
>motivation for doing so. (Ask me again when I get unlimited computing power
>and interfaces to all the world's knowledge bases. :-) The differing sorts
>of drivers (factors) in each network node are derived using Kelly's
>repertory grid technique from the actual elements making up the node itself.

Again, I feel like you're answering the wrong question. I don't understand what kind of "reporting" you think would be going on. You talk about forms of abstraction and delving into bionomic areas, and then reporting bionomic areas. These are words I know, but the phrases you use are nonetheless organized in a way that makes them completely opaque to me. As far as I can tell, it's a misuse of the term "report" to say that you report an area. Maybe you mean that they would report ON an area, but that still leaves open the question as to what they would be reporting. Temperature? Weather? Life signs?
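
As for deriving "drivers (factors)" from a grid: if you mean something like
principal components, which is roughly what the Ingrid software computes as
far as I can tell, then that step at least is mechanical. A sketch, assuming
numpy and the toy grid I made up above; again, my guess at your meaning, not
your design:

    import numpy as np

    # Constructs (rows) x elements (columns); the same invented grid as above.
    grid = np.array([
        [1, 1, 2, 5],
        [3, 4, 4, 2],
        [1, 2, 3, 5],
    ], dtype=float)

    # Center each construct's ratings, then factor the grid.
    centered = grid - grid.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)

    # s**2 says how much of the variation each factor ("driver") carries.
    explained = (s ** 2) / (s ** 2).sum()
    for i, share in enumerate(explained, start=1):
        print("factor", i, ":", round(100 * share, 1), "% of the variation")

But that still doesn't tell me what a "network node" built from such factors
would actually report, which was my question.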

>> Here, let's try a thought experiment. Suppose you've got a two-person
>> economy, Alice and Bob. Alice has an apple which she happens to value less
>> than Bob values the apple.
>>
>> Now, in capitalism, Bob will pay Alice for the apple, thus maximizing value
>> and resulting in economic efficiency in the Marshallian sense. This
>> process is scalable to all kinds of economic activity.
>>
>> Now, suppose Alice and Bob had chosen to reject capitalism and instead
>> operate under a neuronomy. What would happen? What would they do? How
>> would it work?
>>
>
>When thinking about a neuronomy it might be easier to consider that it
>couldn't work with only two people and would need a highly developed,
>interdependent global economy where no element was independent. Furthermore,
>it would have to come into existence in a similar way to the internet, in
>that it couldn't be designed without building it. Your whole question is
>simply unanswerable because I refuse to answer at that level of detail. This
>is a pity, because you're also not going to find anyone else to answer your
>questions, as they are too afraid of upsetting their position in capitalism.

Here's my essential problem: I have no idea what sort of system you're proposing. The impression I get is that it involves neural nets intimately, but you haven't yet explained how, beyond the idea that the net itself would be running the show.

Would this be better than what we have today? Worse? Do we have any choice in the matter? You might have answered these questions for yourself, but I can't answer them for myself until you express more clearly what form your system would take. So far your explanations have been mostly useless to me.
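
To show the level of concreteness I'm after: the apple example above can be
specified completely in a few lines. The dollar figures are invented, but
the mechanics are not:

    # Alice values the apple less than Bob does (numbers invented).
    alice_value = 1.00   # the least Alice would accept for the apple
    bob_value = 3.00     # the most Bob would pay for it

    price = 2.00         # any price strictly between the two values works
    assert alice_value < price < bob_value

    alice_gain = price - alice_value        # seller's surplus
    bob_gain = bob_value - price            # buyer's surplus
    total_surplus = alice_gain + bob_gain   # value created by the trade

    print("Alice gains", alice_gain, "; Bob gains", bob_gain,
          "; total surplus", total_surplus)

I can't write down even that much for a neuronomy, because you haven't told
me what its rules are.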

>> What I'm looking for is fewer buzzwords and more content. ;) You're using
>> terms I know in ways that don't seem to make sense in this context. What
>> do you mean here?
>
>I don't think you want help to connect the dots, but if I'm wrong you could
>do well by taking my buzzwords and using a search engine to determine how
>I'm using them.

I have. As I have noted, you're using words I know in contexts that make no sense. Search engines don't help there. And though I would like for you to connect the dots for me, I must admit that your apparent refusal to do so is leading me to conclude that if you can't say it clearly then you probably don't have all that much worth saying. :(

-Dan

-TODAY IS A GOOD DAY TO LIVE-