
Logic and representation. (English) Zbl 0826.68120

CSLI Lecture Notes. 39. Stanford, CA: CSLI - Center for the Study of Language and Information. xiv, 196 p. (1995).
This book is a collection of essays by one of the leading specialists in knowledge representation in artificial intelligence (AI) and the inventor of autoepistemic logic, one of the main AI formalisms. The essays were originally written by Moore over a number of years and have been re-edited and updated for this book.
The book starts with the methodological motivations for the author’s work (Part I). Although from a purely mathematical viewpoint we can describe both human and computer behavior in purely behavioral terms (stimulus and reaction), from a computational viewpoint it is often more efficient to describe the corresponding input-output relation in terms of internal states and knowledge representation. Historically, the first method of representing knowledge in precise terms was logic. There are two radical views on the role of logic in knowledge representation: that logic is all we need, and that logic has proven practically useless. The author dismisses both: the first as hype, the second as a natural reaction to that hype. Logic is necessary, but the existing logical formalisms are clearly not sufficient, so new formalisms are needed. Designing new logical formalisms for knowledge representation and knowledge processing is what the rest of the book is about.
One thing missing from traditional logic, but absolutely necessary in knowledge representation, is the ability to represent knowledge itself: statements about what agents know. Several formalisms exist for this, but they are either oversimplified or unnecessarily complicated.
We need to describe what we know, what we know about the knowledge of others, what others know about what we know, etc. If we start describing all this literally, we end up with an infinitely long description even in the simplest cases of two or three intelligent agents. The author believes this complication is caused by the fact that we try to build our logic on top of the traditional one rather than in its place. A much clearer formalism can be obtained if we modify the notion of a possible world accordingly and take as a basis the relation \(K(A, W, W')\), meaning that in the world \(W\), the knowledge of an agent \(A\) is such that, based on this knowledge, \(W'\) seems possible to that agent. In other words, the notion of a possible world must include not only a description of what is true, but also a description of what everyone knows; thus we avoid the infinite regress. The same formalism enables us to describe knowledge, actions, and their interaction (knowledge leads to actions, and the results of actions bring new knowledge) in a single language, something that was difficult to achieve before.
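The semantics based on the relation \(K(A, W, W')\) can be sketched in a few lines of code. The following is a minimal illustrative model (the worlds, agents, and proposition names are invented for this example, not taken from the book): an agent knows a proposition in a world exactly when the proposition holds in every world that \(K\) makes accessible for that agent.

```python
# A toy Kripke-style model of knowledge: K[agent] holds pairs (W, W')
# meaning "in world W, the agent considers W' possible".
from itertools import product

# Two hypothetical worlds, differing in whether it is raining.
worlds = {"w1": {"raining": True}, "w2": {"raining": False}}

K = {
    "alice": {("w1", "w1"), ("w2", "w2")},  # Alice can tell the worlds apart
    "bob": set(product(worlds, worlds)),    # Bob considers every world possible
}

def knows(agent, world, prop):
    """agent knows prop in world iff prop holds in every accessible world."""
    return all(worlds[w2][prop]
               for (w1, w2) in K[agent] if w1 == world)

print(knows("alice", "w1", "raining"))  # True: only w1 is accessible to Alice
print(knows("bob", "w1", "raining"))    # False: Bob also considers w2 possible
```

Because what each agent knows is encoded in the accessibility relation itself, nested knowledge ("Alice knows that Bob does not know ...") can be evaluated by the same `knows` test without any infinite regress of explicit descriptions.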
An even more complicated problem is describing beliefs (see paper 4), because beliefs may turn out to be false. An idea for a general formalism that avoids infinite regress and complication is given in paper 5 under the name of Russellian propositions. The idea is that logical connectives, quantifiers, and modal statements like “knows” can be described as relations between their component statements. This description is too general to be efficient, but its clarity makes it useful for understanding, and understanding is the necessary first step before we can start computing efficiently.
Part III describes Moore’s main contribution to AI: autoepistemic logic. It turns out that many nonmonotonic formalisms for knowledge representation and logic programming can be clarified if we interpret some negations \(\neg A\) not as “not \(A\)” but as “I do not believe \(A\)”. This interpretation helped to clarify (and in some cases correct) many ad hoc heuristic formalisms designed to explain the behavior of successful Prolog programs. For this logic, a possible-worlds interpretation is also very helpful (paper 7). In the final paper of this part (No. 8), Moore gives a brief survey of recent achievements and formulates open problems and research directions.
Finally, in Part IV, the author shows that the knowledge representation and knowledge processing techniques developed in AI not only help computers process knowledge, but also help us clarify the structure of natural language. For example, the notion of unification, which is used in automatic deduction and in Prolog, helps in understanding English sentences that traditional linguistics finds difficult to handle.
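Unification itself is a standard, compact algorithm; the sketch below (the term encoding is invented for this example, and the occurs check is omitted for brevity) finds the most general substitution making two terms equal, which is the operation Prolog and unification-based grammars share.

```python
# Minimal first-order unification. Variables are strings starting with '?',
# compound terms are tuples (functor, arg1, ...), constants are other strings.

def walk(term, subst):
    """Follow variable bindings to the term's current representative."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return the most general unifier extending subst, or None on failure."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        subst[a] = b
        return subst
    if isinstance(b, str) and b.startswith("?"):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# loves(?x, mary) unified with loves(john, ?y):
print(unify(("loves", "?x", "mary"), ("loves", "john", "?y")))
# {'?x': 'john', '?y': 'mary'}
```

In unification-based grammar formalisms, the same operation merges the feature structures of sentence constituents, which is what makes it useful for the linguistic analyses the review mentions.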

MSC:

68T30 Knowledge representation
68T27 Logic in artificial intelligence
68-02 Research exposition (monographs, survey articles) pertaining to computer science
03-02 Research exposition (monographs, survey articles) pertaining to mathematical logic and foundations
03B60 Other nonclassical logic
00B60 Collections of reprinted articles
68T50 Natural language processing
03B65 Logic of natural languages