--- a/doc-src/Logics/intro.tex Fri Apr 15 14:09:12 1994 +0200
+++ b/doc-src/Logics/intro.tex Fri Apr 15 16:08:31 1994 +0200
@@ -5,35 +5,44 @@
starting points for defining new logics. Each logic is distributed with
sample proofs, some of which are described in this document.
-\begin{quote}
-{\ttindexbold{FOL}} is many-sorted first-order logic with natural
+\begin{ttdescription}
+\item[\thydx{FOL}] is many-sorted first-order logic with natural
deduction. It comes in both constructive and classical versions.
-{\ttindexbold{ZF}} is axiomatic set theory, using the Zermelo-Fraenkel
+\item[\thydx{ZF}] is axiomatic set theory, using the Zermelo-Fraenkel
axioms~\cite{suppes72}. It is built upon classical~\FOL{}.
+
+\item[\thydx{CCL}] is Martin Coen's Classical Computational Logic,
+ which is the basis of a preliminary method for deriving programs from
+ proofs~\cite{coen92}. It is built upon classical~\FOL{}.
-{\ttindexbold{HOL}} is the higher-order logic of Church~\cite{church40},
+\item[\thydx{LCF}] is a version of Scott's Logic for Computable
+ Functions, which is also implemented by the~{\sc lcf}
+ system~\cite{paulson87}. It is built upon classical~\FOL{}.
+
+\item[\thydx{HOL}] is the higher-order logic of Church~\cite{church40},
which is also implemented by Gordon's~{\sc hol} system~\cite{mgordon88a}. This
object-logic should not be confused with Isabelle's meta-logic, which is
also a form of higher-order logic.
+
+\item[\thydx{HOLCF}] is an alternative version of {\sc lcf}, defined
+ as an extension of {\tt HOL}\@.
-{\ttindexbold{CTT}} is a version of Martin-L\"of's Constructive Type
+\item[\thydx{CTT}] is a version of Martin-L\"of's Constructive Type
Theory~\cite{nordstrom90}, with extensional equality. Universes are not
included.
-{\ttindexbold{LK}} is another version of first-order logic, a classical
+\item[\thydx{LK}] is another version of first-order logic, a classical
sequent calculus. Sequents have the form $A@1,\ldots,A@m\turn
B@1,\ldots,B@n$; rules are applied using associative matching.
-{\ttindexbold{Modal}} implements the modal logics $T$, $S4$, and~$S43$. It
-is built upon~\LK{}.
-
-{\ttindexbold{Cube}} is Barendregt's $\lambda$-cube.
+\item[\thydx{Modal}] implements the modal logics $T$, $S4$,
+ and~$S43$. It is built upon~\LK{}.
-{\ttindexbold{LCF}} is a version of Scott's Logic for Computable Functions,
-which is also implemented by the~{\sc lcf} system~\cite{paulson87}.
-\end{quote}
-The logics {\tt Modal}, {\tt Cube} and {\tt LCF} are currently undocumented.
+\item[\thydx{Cube}] is Barendregt's $\lambda$-cube.
+\end{ttdescription}
+The logics {\tt CCL}, {\tt LCF}, {\tt HOLCF}, {\tt Modal} and {\tt Cube}
+are currently undocumented.
You should not read this before reading {\em Introduction to Isabelle\/}
and performing some Isabelle proofs. Consult the {\em Reference Manual}
@@ -41,7 +50,7 @@
\section{Syntax definitions}
-The syntax of each logic syntax is defined using a context-free grammar.
+The syntax of each logic is presented using a context-free grammar.
These grammars obey the following conventions:
\begin{itemize}
\item identifiers denote nonterminal symbols
@@ -50,20 +59,20 @@
\item constructs followed by a Kleene star, such as $id^*$ and $(\ldots)^*$,
can be repeated~0 or more times
\item alternatives are separated by a vertical bar,~$|$
-\item the symbol for alphanumeric identifier is~{\it id\/}
+\item the symbol for alphanumeric identifiers is~{\it id\/}
\item the symbol for scheme variables is~{\it var}
\end{itemize}
To reduce the number of nonterminals and grammar rules required, Isabelle's
-syntax module employs {\bf precedences}. Each grammar rule is given by a
-mixfix declaration, which has a precedence, and each argument place has a
-precedence. This general approach handles infix operators that associate
-either to the left or to the right, as well as prefix and binding
-operators.
+syntax module employs {\bf priorities},\index{priorities} or precedences.
+Each grammar rule is given by a mixfix declaration, which has a priority,
+and each argument place has a priority. This general approach handles
+infix operators that associate either to the left or to the right, as well
+as prefix and binding operators.
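+For instance, conjunction and disjunction could be declared in a theory
+file by mixfix annotations of roughly the following form (the layout and
+the priority values shown here are merely illustrative; the distributed
+{\tt FOL} theory should be consulted for the actual declarations):
+\begin{ttbox}
+consts
+  (*formula connectives, with explicit priorities*)
+  "&"   :: "[o, o] => o"    (infixr 35)
+  "|"   :: "[o, o] => o"    (infixr 30)
+\end{ttbox}
+A larger number denotes a higher priority, so {\tt\&} binds more tightly
+than~{\tt |}.
+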
In a syntactically valid expression, an operator's arguments never involve
-an operator of lower precedence unless brackets are used. Consider
-first-order logic, where $\exists$ has lower precedence than $\disj$,
-which has lower precedence than $\conj$. There, $P\conj Q \disj R$
+an operator of lower priority unless brackets are used. Consider
+first-order logic, where $\exists$ has lower priority than $\disj$,
+which has lower priority than $\conj$. There, $P\conj Q \disj R$
abbreviates $(P\conj Q) \disj R$ rather than $P\conj (Q\disj R)$. Also,
$\exists x.P\disj Q$ abbreviates $\exists x.(P\disj Q)$ rather than
$(\exists x.P)\disj Q$. Note especially that $P\disj(\exists x.Q)$
@@ -72,7 +81,7 @@
A {\bf binder} is a symbol associated with a constant of type
$(\sigma\To\tau)\To\tau'$. For instance, we may declare~$\forall$ as a
binder for the constant~$All$, which has type $(\alpha\To o)\To o$. This
-defines the syntax $\forall x.t$ to mean $\forall(\lambda x.t)$. We can
+defines the syntax $\forall x.t$ to mean $All(\lambda x.t)$. We can
also write $\forall x@1\ldots x@m.t$ to abbreviate $\forall x@1. \ldots
\forall x@m.t$; this is possible for any constant provided that $\tau$ and
$\tau'$ are the same type. \HOL's description operator $\epsilon x.P(x)$
@@ -80,47 +89,42 @@
when $\alpha$ is $bool$. \ZF's bounded quantifier $\forall x\in A.P(x)$
cannot be declared as a binder because it has type $[i, i\To o]\To o$. The
syntax for binders allows type constraints on bound variables, as in
-\[ \forall (x{::}\alpha) \; y{::}\beta. R(x,y) \]
+\[ \forall (x{::}\alpha) \; (y{::}\beta). R(x,y) \]
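+By the translation just given, this display abbreviates (ignoring the type
+constraints) the term
+\[ All(\lambda x. All(\lambda y. R(x,y))). \]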
To avoid excess detail, the logic descriptions adopt a semi-formal style.
Infix operators and binding operators are listed in separate tables, which
-include their precedences. Grammars do not give numeric precedences;
-instead, the rules appear in order of decreasing precedence. This should
-suffice for most purposes; for detailed precedence information, consult the
-syntax definitions in the {\tt.thy} files. Chapter~\ref{Defining-Logics}
-describes Isabelle's syntax module, including its use of precedences.
+include their priorities. Grammar descriptions do not include numeric
+priorities; instead, the rules appear in order of decreasing priority.
+This should suffice for most purposes; for full details, please consult the
+actual syntax definitions in the {\tt.thy} files.
Each nonterminal symbol is associated with some Isabelle type. For
example, the {\bf formulae} of first-order logic have type~$o$. Every
Isabelle expression of type~$o$ is therefore a formula. These include
atomic formulae such as $P$, where $P$ is a variable of type~$o$, and more
generally expressions such as $P(t,u)$, where $P$, $t$ and~$u$ have
-suitable types. Therefore, ``expression of type~$o$'' is listed as a
+suitable types. Therefore, `expression of type~$o$' is listed as a
separate possibility in the grammar for formulae.
-Infix operators are represented internally by constants with the prefix
-\hbox{\tt"op "}. For instance, implication is the constant
-\hbox{\tt"op~-->"}. This permits infixes to be used in non-infix contexts
-(just as with \ML's~{\tt op} keyword). You also need to know the name of
-the internal constant if you are writing code that inspects terms.
-
-\section{Proof procedures}
+\section{Proof procedures}\label{sec:safe}
Most object-logics come with simple proof procedures. These are reasonably
powerful for interactive use, though often simplistic and incomplete. You
can do single-step proofs using \verb|resolve_tac| and
\verb|assume_tac|, referring to the inference rules of the logic by {\sc
ml} identifiers.
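+
+For example, assuming the usual names {\tt impI} and {\tt conjI} for the
+implication and conjunction introduction rules of {\tt FOL}, a session
+proving the trivial theorem $P\imp P\conj P$ might run as follows:
+\begin{ttbox}
+goal FOL.thy "P --> P & P";
+by (resolve_tac [impI] 1);    (*apply implication introduction*)
+by (resolve_tac [conjI] 1);   (*split the conjunction into two subgoals*)
+by (assume_tac 1);            (*each subgoal follows by assumption*)
+by (assume_tac 1);
+\end{ttbox}
+Each {\tt by} command applies one rule by resolution, while
+\verb|assume_tac| proves a subgoal whose conclusion matches one of its
+assumptions.
+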
-Call a rule {\em safe\/} if applying it to a provable goal always yields
-provable subgoals. If a rule is safe then it can be applied automatically
-to a goal without destroying our chances of finding a proof. For instance,
-all the rules of the classical sequent calculus {\sc lk} are safe.
-Intuitionistic logic includes some unsafe rules, like disjunction
-introduction ($P\disj Q$ can be true when $P$ is false) and existential
-introduction ($\ex{x}P(x)$ can be true when $P(a)$ is false for certain
-$a$). Universal elimination is unsafe if the formula $\all{x}P(x)$ is
-deleted after use.
+For theorem proving, rules can be classified as {\bf safe} or {\bf unsafe}.
+A rule is safe if applying it to a provable goal always yields provable
+subgoals. If a rule is safe then it can be applied automatically to a goal
+without destroying our chances of finding a proof. For instance, all the
+rules of the classical sequent calculus {\sc lk} are safe. Universal
+elimination is unsafe if the formula $\all{x}P(x)$ is deleted after use.
+Other unsafe rules include the following:
+\[ \infer[({\disj}I1)]{P\disj Q}{P} \qquad
+ \infer[({\imp}E)]{Q}{P\imp Q & P} \qquad
+ \infer[({\exists}I)]{\exists x.P}{P[t/x]}
+\]
Proof procedures use safe rules whenever possible, delaying the application
of unsafe rules. Those safe rules are preferred that generate the fewest
@@ -131,7 +135,7 @@
Many of the proof procedures use backtracking. Typically they attempt to
solve subgoal~$i$ by repeatedly applying a certain tactic to it. This
tactic, which is known as a {\it step tactic}, resolves a selection of
-rules with subgoal~$i$. This may replace one subgoal by many; but the
+rules with subgoal~$i$. This may replace one subgoal by many; the
search persists until there are fewer subgoals in total than at the start.
Backtracking happens when the search reaches a dead end: when the step
tactic fails. Alternative outcomes are then searched by a depth-first or