AI and jurisprudence, FLAIR Project

From: J. R. Molloy (jr@shasta.com)
Date: Sat Dec 23 2000 - 12:47:25 MST


From: <GBurch1@aol.com>
> ...there's a developing
> field of study of the interaction of "AI" and jurisprudence. See, for
> instance:
>
> http://www.dur.ac.uk/~dla0www/centre/web_ai_a.html
> [Artificial Intelligence and Law Publications]
> and
>
> http://www.flair.law.ubc.ca/jcsmith/logos/noos/machine.html
> [An Introduction to Artificial Intelligence and Law: or,
> Can Machines Be Made to Think Like Lawyers?]

Thank you for posting these URLs, Greg. I find this topic fascinating.

J. C. Smith writes,
<<The theoretical issues which are swiftly developing in this new field of
artificial intelligence and law, will, in the final analysis, be about the
nature and structure of language and its many functions, and how humans
actually do reason.>>

That's a good point. Roboticists can't begin to build a machine that thinks
like a lawyer until we first discover how humans actually reason. Expert
systems can augment the abilities of attorneys, but they're far from
replacing lawyers, right?
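
(For what it's worth, here's a minimal sketch, in Python, of the sort of
rule-based expert system I have in mind -- a toy forward-chaining engine.
The rules and fact names are invented for illustration; real legal expert
systems are vastly more elaborate.)

    # A toy forward-chaining rule engine: IF all premises hold,
    # THEN add the conclusion. Facts and rules are invented examples.

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if set(premises) <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    rules = [
        (["offer", "acceptance", "consideration"], "contract"),
        (["contract", "breach"], "damages_available"),
    ]

    case_facts = ["offer", "acceptance", "consideration", "breach"]
    print(forward_chain(case_facts, rules))
    # Derives 'contract', then 'damages_available', from the given facts.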

<<No one as yet has transcended mind/body dualism, nor has bridged the
phenomenological gap between the two.>>

How would we know if they did? It might not be evident to anyone but the
transcending individual, rather like a headache.

<<Legal theory, reflecting as it does the more broad ontological,
epistemological, and moral, disputes of general philosophy, is not
ideologically neutral. It is as much prescriptive as it is descriptive.>>

That, I think, presents the greatest challenge for automating jurisprudence,
because AI *is* ideologically neutral until it is programmed to conform to
the rules of a particular ideology. So, unless ideology is removed from
jurisprudence, machines can't be made to think like lawyers in general; at
best, the ideology of each lawyer can be specified for a corresponding
machine. IOW, which lawyer a machine is made to think like determines the
ratio of prescriptive to descriptive law involved.
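
(To make the point concrete, here's the same toy idea again, reusing the
forward_chain function from the sketch above, with two invented rule sets
standing in for two "ideologies." The facts, rule names, and labels are
hypothetical, of course.)

    # Identical facts, two invented rule sets ("ideologies").
    # Uses forward_chain from the earlier sketch in this post.

    facts = ["statute_text_ambiguous", "harsh_outcome_for_defendant"]

    textualist_rules = [
        (["statute_text_ambiguous"], "apply_plain_meaning"),
    ]
    purposivist_rules = [
        (["statute_text_ambiguous", "harsh_outcome_for_defendant"],
         "construe_in_favor_of_defendant"),
    ]

    print(forward_chain(facts, textualist_rules))
    print(forward_chain(facts, purposivist_rules))
    # Same facts, different conclusions -- the "ideology" is just
    # whichever rule set the machine was handed.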

<<If mathematical/logical reasoning is best, and a logically deductive
system is ideal, then humans should reason more like computers. Artificial
intelligence would be the ideal model for real intelligence. If, on the
other hand, practical everyday reasoning is the optimum and paradigm form of
human cogitation, and deductive, mathematical, and logical reasoning is
something we only need to do, and should do on particular occasions for
particular purposes, then machines will become more intelligent as they
become better able to simulate human reasoning.>>

So, we'll have to dumb down Robo-judges to make them more like humans. I
think it could be practical to make humans more rational instead.

Stay hungry,

--J. R.
3M TA3

=====================
Useless hypotheses: consciousness, phlogiston, philosophy, vitalism, mind,
free will


