%% $Id$
\chapter{Introduction}
Several logics come with Isabelle.  Many of them are sufficiently developed
to serve as comfortable reasoning environments.  They are also good
starting points for defining new logics.  Each logic is distributed with
sample proofs, some of which are described in this document.

\begin{quote}
{\ttindexbold{FOL}} is many-sorted first-order logic with natural
deduction.  It comes in both constructive and classical versions.

{\ttindexbold{ZF}} is axiomatic set theory, using the Zermelo-Fraenkel
axioms~\cite{suppes72}.  It is built upon classical~\FOL{}.

{\ttindexbold{HOL}} is the higher-order logic of Church~\cite{church40},
which is also implemented by Gordon's~{\sc hol} system~\cite{gordon88a}.  This
object-logic should not be confused with Isabelle's meta-logic, which is
also a form of higher-order logic.

{\ttindexbold{CTT}} is a version of Martin-L\"of's Constructive Type
Theory~\cite{nordstrom90}, with extensional equality.  Universes are not
included.

{\ttindexbold{LK}} is another version of first-order logic, a classical
sequent calculus.  Sequents have the form $A@1,\ldots,A@m\turn
B@1,\ldots,B@n$; rules are applied using associative matching.

{\ttindexbold{Modal}} implements the modal logics $T$, $S4$, and~$S43$.  It
is built upon~\LK{}.

{\ttindexbold{Cube}} is Barendregt's $\lambda$-cube.

{\ttindexbold{LCF}} is a version of Scott's Logic for Computable Functions,
which is also implemented by the~{\sc lcf} system~\cite{paulson87}.
\end{quote}
The logics {\tt Modal}, {\tt Cube} and {\tt LCF} are currently undocumented.

This manual assumes that you have read the {\em Introduction to Isabelle\/}
and have some experience of using Isabelle to perform interactive proofs.
It refers to packages defined in the {\em Reference Manual}, which you
should keep at hand.


\section{Syntax definitions}
This manual defines each logic's syntax using a context-free grammar.
These grammars obey the following conventions:
\begin{itemize}
\item identifiers denote nonterminal symbols
\item {\tt typewriter} font denotes terminal symbols
\item parentheses $(\ldots)$ express grouping
\item constructs followed by a Kleene star, such as $id^*$ and $(\ldots)^*$,
can be repeated~0 or more times
\item alternatives are separated by a vertical bar,~$|$
\item the symbol for alphanumeric identifiers is~{\it id\/}
\item the symbol for scheme variables is~{\it var}
\end{itemize}
To reduce the number of nonterminals and grammar rules required, Isabelle's
syntax module employs {\bf precedences}.  Each grammar rule is given by a
mixfix declaration, which has a precedence, and each argument place has a
precedence.  This general approach handles infix operators that associate
either to the left or to the right, as well as prefix and binding
operators.

In a syntactically valid expression, an operator's arguments never involve
an operator of lower precedence unless brackets are used.  Consider
first-order logic, where $\exists$ has lower precedence than $\disj$,
which has lower precedence than $\conj$.  There, $P\conj Q \disj R$
abbreviates $(P\conj Q) \disj R$ rather than $P\conj (Q\disj R)$.  Also,
$\exists x.P\disj Q$ abbreviates $\exists x.(P\disj Q)$ rather than
$(\exists x.P)\disj Q$.  Note especially that $P\disj(\exists x.Q)$
becomes syntactically invalid if the brackets are removed.
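
The effect of precedences on parsing can be sketched with a toy
precedence-climbing parser.  This is an illustration only, not Isabelle's
syntax module: the token forms and precedence values below are invented,
and the toy treats operators of equal precedence as left-associative.

```python
# Toy precedence-climbing parser showing why "P & Q | R" parses as
# "(P & Q) | R" and why a quantifier's body extends maximally rightward.
# Precedence values are invented for illustration.

PREC = {"|": 30, "&": 35}            # conjunction binds tighter than disjunction

def parse(tokens):
    expr, rest = parse_expr(tokens, 0)
    assert not rest, "trailing tokens"
    return expr

def parse_expr(tokens, min_prec):
    if tokens[0] == "EX":
        # A quantifier has the lowest precedence, so it is only legal
        # where any expression is: as an operator argument it needs brackets.
        if min_prec > 0:
            raise SyntaxError("quantifier needs brackets here")
        var, rest = tokens[1], tokens[3:]        # skip "EX x ."
        body, rest = parse_expr(rest, 0)
        return ("EX", var, body), rest
    left, rest = tokens[0], tokens[1:]
    while rest and rest[0] in PREC and PREC[rest[0]] >= min_prec:
        op = rest[0]
        # Parse the right argument at a higher minimum precedence, so a
        # lower-precedence operator terminates the argument.
        right, rest = parse_expr(rest[1:], PREC[op] + 1)
        left = (op, left, right)
    return left, rest

print(parse(["P", "&", "Q", "|", "R"]))        # ('|', ('&', 'P', 'Q'), 'R')
print(parse(["EX", "x", ".", "P", "|", "Q"]))  # ('EX', 'x', ('|', 'P', 'Q'))
```

On this toy grammar, {\tt P | EX x . Q} is rejected outright, mirroring the
remark above that $P\disj(\exists x.Q)$ becomes invalid without brackets.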

A {\bf binder} is a symbol associated with a constant of type
$(\sigma\To\tau)\To\tau'$.  For instance, we may declare~$\forall$ as a
binder for the constant~$All$, which has type $(\alpha\To o)\To o$.  This
defines the syntax $\forall x.t$ to mean $\forall(\lambda x.t)$.  We can
also write $\forall x@1\ldots x@m.t$ to abbreviate $\forall x@1. \ldots
\forall x@m.t$; this is possible for any constant provided that $\tau$ and
$\tau'$ are the same type.  \HOL's description operator $\epsilon x.P(x)$
has type $(\alpha\To bool)\To\alpha$ and can bind only one variable, except
when $\alpha$ is $bool$.  \ZF's bounded quantifier $\forall x\in A.P(x)$
cannot be declared as a binder because it has type $[i, i\To o]\To o$.  The
syntax for binders allows type constraints on bound variables, as in
\[ \forall (x{::}\alpha) \; y{::}\beta. R(x,y) \]
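
For instance, writing $All$ for the underlying constant, the iterated form
expands in two stages:
\[ \forall x@1\,x@2.\,t \;\equiv\; \forall x@1.\,\forall x@2.\,t
   \;\equiv\; All(\lambda x@1.\, All(\lambda x@2.\, t)) \]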

To avoid excess detail, the logic descriptions adopt a semi-formal style.
Infix operators and binding operators are listed in separate tables, which
include their precedences.  Grammars do not give numeric precedences;
instead, the rules appear in order of decreasing precedence.  This should
suffice for most purposes; for detailed precedence information, consult the
syntax definitions in the {\tt.thy} files.  Chapter~\ref{Defining-Logics}
describes Isabelle's syntax module, including its use of precedences.

Each nonterminal symbol is associated with some Isabelle type.  For
example, the {\bf formulae} of first-order logic have type~$o$.  Every
Isabelle expression of type~$o$ is therefore a formula.  These include
atomic formulae such as $P$, where $P$ is a variable of type~$o$, and more
generally expressions such as $P(t,u)$, where $P$, $t$ and~$u$ have
suitable types.  Therefore, ``expression of type~$o$'' is listed as a
separate possibility in the grammar for formulae.

Infix operators are represented internally by constants with the prefix
\hbox{\tt"op "}.  For instance, implication is the constant
\hbox{\tt"op~-->"}.  This permits infixes to be used in non-infix contexts
(just as with \ML's~{\tt op} keyword).  You also need to know the name of
the internal constant if you are writing code that inspects terms.
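
The point about inspecting terms can be illustrated with a miniature term
representation, loosely modelled on Isabelle's.  The class names and the
helper function below are invented for this sketch; they are not
Isabelle's actual interface.

```python
# A miniature curried-application term representation.  Destructuring an
# implication must match on the internal constant name "op -->", not on
# the concrete syntax "-->".  (Illustrative only, not Isabelle's API.)

from dataclasses import dataclass

@dataclass
class Const:
    name: str

@dataclass
class App:
    fun: object
    arg: object

def dest_implies(t):
    """Return (P, Q) if t is the application ((op --> P) Q), else None."""
    if isinstance(t, App) and isinstance(t.fun, App) \
            and isinstance(t.fun.fun, Const) and t.fun.fun.name == "op -->":
        return (t.fun.arg, t.arg)
    return None

# P --> Q is internally the constant "op -->" applied to P and then Q.
t = App(App(Const("op -->"), Const("P")), Const("Q"))
print(dest_implies(t))   # (Const(name='P'), Const(name='Q'))
```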


\section{Proof procedures}
Most object-logics come with simple proof procedures.  These are reasonably
powerful for interactive use, though often simplistic and incomplete.  You
can do single-step proofs using \verb|resolve_tac| and
\verb|assume_tac|, referring to the inference rules of the logic by {\sc
ml} identifiers.

Call a rule {\em safe\/} if applying it to a provable goal always yields
provable subgoals.  If a rule is safe then it can be applied automatically
to a goal without destroying our chances of finding a proof.  For instance,
all the rules of the classical sequent calculus {\sc lk} are safe.
Intuitionistic logic includes some unsafe rules, like disjunction
introduction ($P\disj Q$ can be true when $P$ is false) and existential
introduction ($\ex{x}P(x)$ can be true when $P(a)$ is false for certain
$a$).  Universal elimination is unsafe if the formula $\all{x}P(x)$ is
deleted after use.
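
As a concrete illustration (an invented example, not drawn from any
distributed proof): for an atomic~$P$, the goal $P\disj\neg P$ is
classically provable, yet disjunction introduction commits to one disjunct,
\[ P\disj\neg P \quad\hbox{reduces to}\quad P \]
and neither subgoal $P$ nor (after backtracking) $\neg P$ is provable on
its own; the step must be undone if the search is to succeed.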

Proof procedures use safe rules whenever possible, delaying the application
of unsafe rules.  Safe rules that generate the fewest subgoals are
preferred.  Safe rules are (by definition) deterministic, while the unsafe
rules require search.  The design of a suitable set of rules can be as
important as the strategy for applying them.

Many of the proof procedures use backtracking.  Typically they attempt to
solve subgoal~$i$ by repeatedly applying a certain tactic to it.  This
tactic, known as a {\it step tactic}, resolves a selection of rules with
subgoal~$i$.  This may replace one subgoal by many; but the search persists
until there are fewer subgoals in total than at the start.  Backtracking
happens when the search reaches a dead end: when the step tactic fails.
The alternative outcomes are then explored by a depth-first or best-first
strategy.
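
The scheme just described can be sketched as a toy depth-first search over
abstract goal states.  Everything here is invented for illustration: the
{\tt step} generator stands in for a step tactic, goals are plain tuples,
and the depth bound is arbitrary.

```python
# Toy depth-first proof search: a "step tactic" maps a goal state to zero
# or more successor states; the search backtracks when a branch yields no
# successors.  Goal representation and rules are invented for illustration.

def depth_first(step, solved, state, depth=10):
    """Return a solved state reachable from `state`, or None."""
    if solved(state):
        return state
    if depth == 0:
        return None                      # give up on this branch
    for next_state in step(state):       # try each outcome of the step tactic
        result = depth_first(step, solved, next_state, depth - 1)
        if result is not None:
            return result                # success on this branch
    return None                          # dead end: backtrack

# Goals are tuples of formulas; a rule acts on the first goal.
def step(goals):
    g, rest = goals[0], goals[1:]
    if g == "True":
        yield rest                       # axiom: discharge the subgoal
    elif isinstance(g, tuple) and g[0] == "conj":
        yield (g[1], g[2]) + rest        # safe: conjunction gives two subgoals
    elif isinstance(g, tuple) and g[0] == "disj":
        yield (g[1],) + rest             # unsafe: try the left disjunct...
        yield (g[2],) + rest             # ...then backtrack to the right one

solved = lambda goals: goals == ()
print(depth_first(step, solved, (("disj", "False", ("conj", "True", "True")),)))
```

On this example the left disjunct {\tt "False"} is a dead end, so the search
backtracks and succeeds on the right disjunct, returning the empty goal state.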