%% $Id$
\chapter{Theorems and Forward Proof}
\index{theorems|(}
Theorems, which represent the axioms, theorems and rules of
object-logics, have type \mltydx{thm}. This chapter begins by
describing operations that print theorems and that join them in
forward proof. Most theorem operations are intended for advanced
applications, such as programming new proof procedures. Many of these
operations refer to signatures, certified terms and certified types,
which have the \ML{} types {\tt Sign.sg}, {\tt cterm} and {\tt ctyp}
and are discussed in Chapter~\ref{theories}. Beginning users should
ignore such complexities --- and skip all but the first section of
this chapter.
The theorem operations do not print error messages. Instead, they raise
exception~\xdx{THM}\@. Use \ttindex{print_exn} to display
exceptions nicely:
\begin{ttbox}
allI RS mp handle e => print_exn e;
{\out Exception THM raised:}
{\out RSN: no unifiers -- premise 1}
{\out (!!x. ?P(x)) ==> ALL x. ?P(x)}
{\out [| ?P --> ?Q; ?P |] ==> ?Q}
{\out}
{\out uncaught exception THM}
\end{ttbox}
\section{Basic operations on theorems}
\subsection{Pretty-printing a theorem}
\index{theorems!printing of}
\begin{ttbox}
prth : thm -> thm
prths : thm list -> thm list
prthq : thm Seq.seq -> thm Seq.seq
print_thm : thm -> unit
print_goals : int -> thm -> unit
string_of_thm : thm -> string
\end{ttbox}
The first three commands are for interactive use. They are identity
functions that display, then return, their argument. The \ML{} identifier
{\tt it} will refer to the value just displayed.
The others are for use in programs. Functions with result type {\tt unit}
are convenient for imperative programming.
\begin{ttdescription}
\item[\ttindexbold{prth} {\it thm}]
prints {\it thm\/} at the terminal.
\item[\ttindexbold{prths} {\it thms}]
prints {\it thms}, a list of theorems.
\item[\ttindexbold{prthq} {\it thmq}]
prints {\it thmq}, a sequence of theorems. It is useful for inspecting
the output of a tactic.
\item[\ttindexbold{print_thm} {\it thm}]
prints {\it thm\/} at the terminal.
\item[\ttindexbold{print_goals} {\it limit\/} {\it thm}]
prints {\it thm\/} in goal style, with the premises as subgoals. It prints
at most {\it limit\/} subgoals. The subgoal module calls {\tt print_goals}
to display proof states.
\item[\ttindexbold{string_of_thm} {\it thm}]
converts {\it thm\/} to a string.
\end{ttdescription}
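For example, in a session of Isabelle/HOL the conjunction introduction rule
\texttt{conjI} can be inspected as follows; the output shown here is
indicative of a typical session:
\begin{ttbox}
prth conjI;
{\out [| ?P; ?Q |] ==> ?P & ?Q}
{\out val it = "[| ?P; ?Q |] ==> ?P & ?Q" : thm}
\end{ttbox}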
\subsection{Forward proof: joining rules by resolution}
\index{theorems!joining by resolution}
\index{resolution}\index{forward proof}
\begin{ttbox}
RSN : thm * (int * thm) -> thm \hfill\textbf{infix}
RS : thm * thm -> thm \hfill\textbf{infix}
MRS : thm list * thm -> thm \hfill\textbf{infix}
OF : thm * thm list -> thm \hfill\textbf{infix}
RLN : thm list * (int * thm list) -> thm list \hfill\textbf{infix}
RL : thm list * thm list -> thm list \hfill\textbf{infix}
MRL : thm list list * thm list -> thm list \hfill\textbf{infix}
\end{ttbox}
Joining rules together is a simple way of deriving new rules. These
functions are especially useful with destruction rules. To store
the result in the theorem database, use \ttindex{bind_thm}
(\S\ref{ExtractingAndStoringTheProvedTheorem}).
\begin{ttdescription}
\item[\tt$thm@1$ RSN $(i,thm@2)$] \indexbold{*RSN}
resolves the conclusion of $thm@1$ with the $i$th premise of~$thm@2$.
Unless there is precisely one resolvent it raises exception
\xdx{THM}; in that case, use {\tt RLN}.
\item[\tt$thm@1$ RS $thm@2$] \indexbold{*RS}
abbreviates \hbox{\tt$thm@1$ RSN $(1,thm@2)$}. Thus, it resolves the
conclusion of $thm@1$ with the first premise of~$thm@2$.
\item[\tt {$[thm@1,\ldots,thm@n]$} MRS $thm$] \indexbold{*MRS}
uses {\tt RSN} to resolve $thm@i$ against premise~$i$ of $thm$, for
$i=n$, \ldots,~1. This applies $thm@n$, \ldots, $thm@1$ to the first $n$
premises of $thm$. Because the theorems are used from right to left, it
does not matter if the $thm@i$ create new premises. {\tt MRS} is useful
for expressing proof trees.
\item[\tt {$thm$ OF $[thm@1,\ldots,thm@n]$}] \indexbold{*OF} is the same as
\texttt{$[thm@1,\ldots,thm@n]$ MRS $thm$}, but with a slightly more readable
argument order.
\item[\tt$thms@1$ RLN $(i,thms@2)$] \indexbold{*RLN}
joins lists of theorems. For every $thm@1$ in $thms@1$ and $thm@2$ in
$thms@2$, it resolves the conclusion of $thm@1$ with the $i$th premise
of~$thm@2$, accumulating the results.
\item[\tt$thms@1$ RL $thms@2$] \indexbold{*RL}
abbreviates \hbox{\tt$thms@1$ RLN $(1,thms@2)$}.
\item[\tt {$[thms@1,\ldots,thms@n]$} MRL $thms$] \indexbold{*MRL}
is analogous to {\tt MRS}, but combines theorem lists rather than theorems.
It too is useful for expressing proof trees.
\end{ttdescription}
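For example, in Isabelle/HOL the destruction rules \texttt{conjunct1} and
\texttt{mp} can be joined by \texttt{RS}; the output below is indicative,
since the schematic variables may be renamed differently:
\begin{ttbox}
prth (conjunct1 RS mp);
{\out [| (?P --> ?Q) & ?Q1; ?P |] ==> ?Q}
\end{ttbox}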
\subsection{Expanding definitions in theorems}
\index{meta-rewriting!in theorems}
\begin{ttbox}
rewrite_rule : thm list -> thm -> thm
rewrite_goals_rule : thm list -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{rewrite_rule} {\it defs} {\it thm}]
unfolds the {\it defs} throughout the theorem~{\it thm}.
\item[\ttindexbold{rewrite_goals_rule} {\it defs} {\it thm}]
unfolds the {\it defs} in the premises of~{\it thm}, but it leaves the
conclusion unchanged. This rule is the basis for \ttindex{rewrite_goals_tac},
but it serves little purpose in forward proof.
\end{ttdescription}
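For example, suppose a theory contains the (hypothetical) definition
\texttt{double_def}, stating \texttt{double(n) == n + n}, and that
\texttt{thm} is a theorem mentioning \texttt{double}. Then
\begin{ttbox}
rewrite_rule [double_def] thm;
\end{ttbox}
returns a copy of \texttt{thm} in which every occurrence of \texttt{double}
has been unfolded according to the definition.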
\subsection{Instantiating unknowns in a theorem} \label{sec:instantiate}
\index{instantiation}
\begin{alltt}\footnotesize
read_instantiate : (string*string) list -> thm -> thm
read_instantiate_sg : Sign.sg -> (string*string) list -> thm -> thm
cterm_instantiate : (cterm*cterm) list -> thm -> thm
instantiate' : ctyp option list -> cterm option list -> thm -> thm
\end{alltt}
These meta-rules instantiate type and term unknowns in a theorem. They are
occasionally useful: they can prevent difficulties with higher-order
unification, and they can define specialized versions of rules.
\begin{ttdescription}
\item[\ttindexbold{read_instantiate} {\it insts} {\it thm}]
processes the instantiations {\it insts} and instantiates the rule~{\it
thm}. The processing of instantiations is described
in \S\ref{res_inst_tac}, under {\tt res_inst_tac}.
Use {\tt res_inst_tac}, not {\tt read_instantiate}, to instantiate a rule
and refine a particular subgoal. The tactic allows instantiation by the
subgoal's parameters, and reads the instantiations using the signature
associated with the proof state.
Use {\tt read_instantiate_sg} below if {\it insts\/} appears to be treated
incorrectly.
\item[\ttindexbold{read_instantiate_sg} {\it sg} {\it insts} {\it thm}]
is like \texttt{read_instantiate {\it insts}~{\it thm}}, but it reads
the instantiations under signature~{\it sg}. This is necessary to
instantiate a rule from a general theory, such as first-order logic,
using the notation of some specialized theory. Use the function {\tt
sign_of} to get a theory's signature.
\item[\ttindexbold{cterm_instantiate} {\it ctpairs} {\it thm}]
is similar to {\tt read_instantiate}, but the instantiations are provided
as pairs of certified terms, not as strings to be read.
\item[\ttindexbold{instantiate'} {\it ctyps} {\it cterms} {\it thm}]
instantiates {\it thm} according to the positional arguments {\it
ctyps} and {\it cterms}. Counting from left to right, schematic
variables $?x$ are either replaced by $t$ for any argument
\texttt{Some\(\;t\)}, or left unchanged in case of \texttt{None} or
if the end of the argument list is encountered. Types are
instantiated before terms.
\end{ttdescription}
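For example, a specialized version of the Isabelle/HOL rule \texttt{spec}
can be obtained by instantiating its term unknown \texttt{?x}; here
\texttt{a} is read as a free variable and the output shown is indicative:
\begin{ttbox}
prth (read_instantiate [("x", "a")] spec);
{\out ALL x. ?P x ==> ?P a}
\end{ttbox}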
\subsection{Miscellaneous forward rules}\label{MiscellaneousForwardRules}
\index{theorems!standardizing}
\begin{ttbox}
standard : thm -> thm
zero_var_indexes : thm -> thm
make_elim : thm -> thm
rule_by_tactic : tactic -> thm -> thm
rotate_prems : int -> thm -> thm
permute_prems : int -> int -> thm -> thm
rearrange_prems : int list -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{standard} $thm$] puts $thm$ into the standard form
of object-rules. It discharges all meta-assumptions, replaces free
variables by schematic variables, renames schematic variables to
have subscript zero, strips outer (meta) quantifiers, and removes
dangling sort hypotheses.
\item[\ttindexbold{zero_var_indexes} $thm$]
makes all schematic variables have subscript zero, renaming them to avoid
clashes.
\item[\ttindexbold{make_elim} $thm$]
\index{rules!converting destruction to elimination}
converts $thm$, which should be a destruction rule of the form
$\List{P@1;\ldots;P@m}\Imp
Q$, to the elimination rule $\List{P@1; \ldots; P@m; Q\Imp R}\Imp R$. This
is the basis for destruct-resolution: {\tt dresolve_tac}, etc.
\item[\ttindexbold{rule_by_tactic} {\it tac} {\it thm}]
applies {\it tac\/} to the {\it thm}, freezing its variables first, then
yields the proof state returned by the tactic. In typical usage, the
{\it thm\/} represents an instance of a rule with several premises, some
with contradictory assumptions (because of the instantiation). The
tactic proves those subgoals and does whatever else it can, and returns
whatever is left.
\item[\ttindexbold{rotate_prems} $k$ $thm$] rotates the premises of $thm$ to
the left by~$k$ positions (to the right if $k<0$). It simply calls
\texttt{permute_prems}, below, with $j=0$. Used with
\texttt{eresolve_tac}\index{*eresolve_tac!on other than first premise}, it
gives the effect of applying the tactic to some other premise of $thm$ than
the first.
\item[\ttindexbold{permute_prems} $j$ $k$ $thm$] rotates the premises of $thm$
leaving the first $j$ premises unchanged. It
requires $0\leq j\leq n$, where $n$ is the number of premises. If $k$ is
positive then it rotates the remaining $n-j$ premises to the left; if $k$ is
negative then it rotates the premises to the right.
\item[\ttindexbold{rearrange_prems} $ps$ $thm$] permutes the premises of $thm$:
the value at position $i$ (counting from~0) in the list $ps$ gives the position
within the original theorem of the premise that is moved to position~$i$.
Any remaining trailing premises are left unchanged.
\end{ttdescription}
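For example, applying \texttt{make_elim} to the Isabelle/HOL destruction
rule \texttt{conjunct1} yields an elimination rule suitable for
\texttt{eresolve_tac} (the schematic variable names may differ):
\begin{ttbox}
prth (make_elim conjunct1);
{\out [| ?P & ?Q; ?P ==> ?R |] ==> ?R}
\end{ttbox}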
\subsection{Taking a theorem apart}
\index{theorems!taking apart}
\index{flex-flex constraints}
\begin{ttbox}
cprop_of : thm -> cterm
concl_of : thm -> term
prems_of : thm -> term list
cprems_of : thm -> cterm list
nprems_of : thm -> int
tpairs_of : thm -> (term*term) list
sign_of_thm : thm -> Sign.sg
theory_of_thm : thm -> theory
dest_state : thm * int -> (term*term) list * term list * term * term
rep_thm : thm -> \{sign_ref: Sign.sg_ref, der: bool * deriv, maxidx: int,
                   shyps: sort list, hyps: term list, prop: term\}
crep_thm : thm -> \{sign_ref: Sign.sg_ref, der: bool * deriv, maxidx: int,
                    shyps: sort list, hyps: cterm list, prop:{\ts}cterm\}
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{cprop_of} $thm$] returns the statement of $thm$ as
a certified term.
\item[\ttindexbold{concl_of} $thm$] returns the conclusion of $thm$ as
a term.
\item[\ttindexbold{prems_of} $thm$] returns the premises of $thm$ as a
list of terms.
\item[\ttindexbold{cprems_of} $thm$] returns the premises of $thm$ as
a list of certified terms.
\item[\ttindexbold{nprems_of} $thm$]
returns the number of premises in $thm$, and is equivalent to {\tt
length~(prems_of~$thm$)}.
\item[\ttindexbold{tpairs_of} $thm$] returns the flex-flex constraints
of $thm$.
\item[\ttindexbold{sign_of_thm} $thm$] returns the signature
associated with $thm$.
\item[\ttindexbold{theory_of_thm} $thm$] returns the theory associated
with $thm$. Note that this does a lookup in Isabelle's global
database of loaded theories.
\item[\ttindexbold{dest_state} $(thm,i)$]
decomposes $thm$ as a tuple containing a list of flex-flex constraints, a
list of the subgoals~1 to~$i-1$, subgoal~$i$, and the rest of the theorem
(this will be an implication if there are more than $i$ subgoals).
\item[\ttindexbold{rep_thm} $thm$] decomposes $thm$ as a record
containing the statement of~$thm$ ({\tt prop}), its list of
meta-assumptions ({\tt hyps}), its derivation ({\tt der}), a bound
on the maximum subscript of its unknowns ({\tt maxidx}), and a
reference to its signature ({\tt sign_ref}). The {\tt shyps} field
is discussed below.
\item[\ttindexbold{crep_thm} $thm$] like \texttt{rep_thm}, but returns
the hypotheses and statement as certified terms.
\end{ttdescription}
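For example, the following calls inspect the Isabelle/HOL rule \texttt{mp},
whose statement is \texttt{[| ?P --> ?Q; ?P |] ==> ?Q}; the printed form of
the resulting term is indicative:
\begin{ttbox}
nprems_of mp;
{\out val it = 2 : int}
concl_of mp;
{\out val it = Const ("Trueprop", \dots) $ Var (("Q", 0), \dots) : Term.term}
\end{ttbox}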
\subsection{*Sort hypotheses} \label{sec:sort-hyps}
\index{sort hypotheses}
\begin{ttbox}
strip_shyps : thm -> thm
strip_shyps_warning : thm -> thm
\end{ttbox}
Isabelle's type variables are decorated with sorts, constraining them to
certain ranges of types. This has little impact when sorts only serve for
syntactic classification of types --- for example, FOL distinguishes between
terms and other types. But when type classes are introduced through axioms,
this may result in some sorts becoming {\em empty\/}: no type belongs to such a
sort because the corresponding set of axioms is unsatisfiable.
If a theorem contains a type variable that is constrained by an empty
sort, then that theorem has no instances. It is basically an instance
of {\em ex falso quodlibet}. But what if it is used to prove another
theorem that no longer involves that sort? The latter theorem holds
only under an additional non-emptiness assumption.
Therefore, Isabelle's theorems carry around sort hypotheses. The {\tt
shyps} field is a list of sorts occurring in type variables in the current
{\tt prop} and {\tt hyps} fields. It may also include sorts used in the
theorem's proof that no longer appear in the {\tt prop} or {\tt hyps}
fields --- so-called {\em dangling\/} sort constraints. These are the
critical ones, asserting non-emptiness of the corresponding sorts.
Isabelle automatically removes extraneous sorts from the {\tt shyps} field at
the end of a proof, provided that non-emptiness can be established by looking
at the theorem's signature: from the {\tt classes} and {\tt arities}
information. This operation is performed by \texttt{strip_shyps} and
\texttt{strip_shyps_warning}.
\begin{ttdescription}
\item[\ttindexbold{strip_shyps} $thm$] removes any extraneous sort hypotheses
that can be witnessed from the type signature.
\item[\ttindexbold{strip_shyps_warning} $thm$] is like \texttt{strip_shyps}, but
issues a warning message about any pending sort hypotheses that do not have a
(syntactic) witness.
\end{ttdescription}
\subsection{Tracing flags for unification}
\index{tracing!of unification}
\begin{ttbox}
Unify.trace_simp : bool ref \hfill\textbf{initially false}
Unify.trace_types : bool ref \hfill\textbf{initially false}
Unify.trace_bound : int ref \hfill\textbf{initially 10}
Unify.search_bound : int ref \hfill\textbf{initially 20}
\end{ttbox}
Tracing the search may be useful when higher-order unification behaves
unexpectedly, though it is usually easier to circumvent the problem
using {\tt res_inst_tac}.
\begin{ttdescription}
\item[set Unify.trace_simp;]
causes tracing of the simplification phase.
\item[set Unify.trace_types;]
generates warnings of incompleteness, when unification is not considering
all possible instantiations of type unknowns.
\item[Unify.trace_bound := $n$;]
causes unification to print tracing information once it reaches depth~$n$.
Use $n=0$ for full tracing. At the default value of~10, tracing
information is almost never printed.
\item[Unify.search_bound := $n$;] prevents unification from
searching past the depth~$n$. Because of this bound, higher-order
unification cannot return an infinite sequence, though it can return
an exponentially long one. The search rarely approaches the default value
of~20. If the search is cut off, unification prints a warning
\texttt{Unification bound exceeded}.
\end{ttdescription}
\section{*Primitive meta-level inference rules}
\index{meta-rules|(}
These implement the meta-logic in the style of the {\sc lcf} system,
as functions from theorems to theorems. They are only rarely useful for
deriving results in the pure theory; mainly, they are included for
completeness, and most users should not bother with them. The
meta-rules raise exception \xdx{THM} to signal malformed premises,
incompatible signatures and similar errors.
\index{meta-assumptions}
The meta-logic uses natural deduction. Each theorem may depend on
meta-level assumptions. Certain rules, such as $({\Imp}I)$,
discharge assumptions; in most other rules, the conclusion depends on all
of the assumptions of the premises. Formally, the system works with
assertions of the form
\[ \phi \quad [\phi@1,\ldots,\phi@n], \]
where $\phi@1$,~\ldots,~$\phi@n$ are the assumptions. This can also be
read as a single-conclusion sequent $\phi@1,\ldots,\phi@n \vdash
\phi$. Do not confuse meta-level assumptions with the object-level
assumptions in a subgoal, which are represented in the meta-logic
using~$\Imp$.
Each theorem has a signature. Certified terms have a signature. When a
rule takes several premises and certified terms, it merges the signatures
to make a signature for the conclusion. This fails if the signatures are
incompatible.
\medskip
The following presentation of primitive rules ignores sort
hypotheses\index{sort hypotheses} (see also \S\ref{sec:sort-hyps}). These are
handled transparently by the logic implementation.
\bigskip
\index{meta-implication}
The \textbf{implication} rules are $({\Imp}I)$
and $({\Imp}E)$:
\[ \infer[({\Imp}I)]{\phi\Imp \psi}{\infer*{\psi}{[\phi]}} \qquad
\infer[({\Imp}E)]{\psi}{\phi\Imp \psi & \phi} \]
\index{meta-equality}
Equality of truth values means logical equivalence:
\[ \infer[({\equiv}I)]{\phi\equiv\psi}{\phi\Imp\psi &
\psi\Imp\phi}
\qquad
\infer[({\equiv}E)]{\psi}{\phi\equiv \psi & \phi} \]
The \textbf{equality} rules are reflexivity, symmetry, and transitivity:
\[ {a\equiv a}\,(refl) \qquad
\infer[(sym)]{b\equiv a}{a\equiv b} \qquad
\infer[(trans)]{a\equiv c}{a\equiv b & b\equiv c} \]
\index{lambda calc@$\lambda$-calculus}
The $\lambda$-conversions are $\alpha$-conversion, $\beta$-conversion, and
extensionality:\footnote{$\alpha$-conversion holds if $y$ is not free
in~$a$; $(ext)$ holds if $x$ is not free in the assumptions, $f$, or~$g$.}
\[ {(\lambda x.a) \equiv (\lambda y.a[y/x])} \qquad
{((\lambda x.a)(b)) \equiv a[b/x]} \qquad
\infer[(ext)]{f\equiv g}{f(x) \equiv g(x)} \]
The \textbf{abstraction} and \textbf{combination} rules let conversions be
applied to subterms:\footnote{Abstraction holds if $x$ is not free in the
assumptions.}
\[ \infer[(abs)]{(\lambda x.a) \equiv (\lambda x.b)}{a\equiv b} \qquad
\infer[(comb)]{f(a)\equiv g(b)}{f\equiv g & a\equiv b} \]
\index{meta-quantifiers}
The \textbf{universal quantification} rules are $(\Forall I)$ and $(\Forall
E)$:\footnote{$(\Forall I)$ holds if $x$ is not free in the assumptions.}
\[ \infer[(\Forall I)]{\Forall x.\phi}{\phi} \qquad
\infer[(\Forall E)]{\phi[b/x]}{\Forall x.\phi} \]
\subsection{Assumption rule}
\index{meta-assumptions}
\begin{ttbox}
assume: cterm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{assume} $ct$]
makes the theorem \(\phi \;[\phi]\), where $\phi$ is the value of~$ct$.
The rule checks that $ct$ has type $prop$ and contains no unknowns, which
are not allowed in assumptions.
\end{ttdescription}
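For example (in Isabelle/HOL), an assumption can be made from a string by
first certifying it with \texttt{read_cterm}, which coerces the object-level
formula to type $prop$. The output shown is indicative:
\begin{ttbox}
val ct = read_cterm (sign_of Main.thy) ("A --> B", propT);
{\out val ct = "A --> B" : cterm}
assume ct;
{\out val it = "A --> B  [A --> B]" : thm}
\end{ttbox}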
\subsection{Implication rules}
\index{meta-implication}
\begin{ttbox}
implies_intr : cterm -> thm -> thm
implies_intr_list : cterm list -> thm -> thm
implies_intr_hyps : thm -> thm
implies_elim : thm -> thm -> thm
implies_elim_list : thm -> thm list -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{implies_intr} $ct$ $thm$]
is $({\Imp}I)$, where $ct$ is the assumption to discharge, say~$\phi$. It
maps the premise~$\psi$ to the conclusion $\phi\Imp\psi$, removing all
occurrences of~$\phi$ from the assumptions. The rule checks that $ct$ has
type $prop$.
\item[\ttindexbold{implies_intr_list} $cts$ $thm$]
applies $({\Imp}I)$ repeatedly, on every element of the list~$cts$.
\item[\ttindexbold{implies_intr_hyps} $thm$]
applies $({\Imp}I)$ to discharge all the hypotheses (assumptions) of~$thm$.
It maps the premise $\phi \; [\phi@1,\ldots,\phi@n]$ to the conclusion
$\List{\phi@1,\ldots,\phi@n}\Imp\phi$.
\item[\ttindexbold{implies_elim} $thm@1$ $thm@2$]
applies $({\Imp}E)$ to $thm@1$ and~$thm@2$. It maps the premises $\phi\Imp
\psi$ and $\phi$ to the conclusion~$\psi$.
\item[\ttindexbold{implies_elim_list} $thm$ $thms$]
applies $({\Imp}E)$ repeatedly to $thm$, using each element of~$thms$ in
turn. It maps the premises $\List{\phi@1,\ldots,\phi@n}\Imp\psi$ and
$\phi@1$,\ldots,$\phi@n$ to the conclusion~$\psi$.
\end{ttdescription}
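Continuing the example of the previous subsection, where \texttt{ct} is the
certified term \texttt{A --> B}, rule $({\Imp}I)$ discharges the assumption
(the output is indicative):
\begin{ttbox}
implies_intr ct (assume ct);
{\out val it = "A --> B ==> A --> B" : thm}
\end{ttbox}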
\subsection{Logical equivalence rules}
\index{meta-equality}
\begin{ttbox}
equal_intr : thm -> thm -> thm
equal_elim : thm -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{equal_intr} $thm@1$ $thm@2$]
applies $({\equiv}I)$ to $thm@1$ and~$thm@2$. It maps the premises~$\psi$
and~$\phi$ to the conclusion~$\phi\equiv\psi$; the assumptions are those of
the first premise with~$\phi$ removed, plus those of
the second premise with~$\psi$ removed.
\item[\ttindexbold{equal_elim} $thm@1$ $thm@2$]
applies $({\equiv}E)$ to $thm@1$ and~$thm@2$. It maps the premises
$\phi\equiv\psi$ and $\phi$ to the conclusion~$\psi$.
\end{ttdescription}
\subsection{Equality rules}
\index{meta-equality}
\begin{ttbox}
reflexive : cterm -> thm
symmetric : thm -> thm
transitive : thm -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{reflexive} $ct$]
makes the theorem \(ct\equiv ct\).
\item[\ttindexbold{symmetric} $thm$]
maps the premise $a\equiv b$ to the conclusion $b\equiv a$.
\item[\ttindexbold{transitive} $thm@1$ $thm@2$]
maps the premises $a\equiv b$ and $b\equiv c$ to the conclusion~${a\equiv c}$.
\end{ttdescription}
\subsection{The $\lambda$-conversion rules}
\index{lambda calc@$\lambda$-calculus}
\begin{ttbox}
beta_conversion : cterm -> thm
extensional : thm -> thm
abstract_rule : string -> cterm -> thm -> thm
combination : thm -> thm -> thm
\end{ttbox}
There is no rule for $\alpha$-conversion because Isabelle regards
$\alpha$-convertible theorems as equal.
\begin{ttdescription}
\item[\ttindexbold{beta_conversion} $ct$]
makes the theorem $((\lambda x.a)(b)) \equiv a[b/x]$, where $ct$ is the
term $(\lambda x.a)(b)$.
\item[\ttindexbold{extensional} $thm$]
maps the premise $f(x) \equiv g(x)$ to the conclusion $f\equiv g$.
Parameter~$x$ is taken from the premise. It may be an unknown or a free
variable (provided it does not occur in the assumptions); it must not occur
in $f$ or~$g$.
\item[\ttindexbold{abstract_rule} $v$ $x$ $thm$]
maps the premise $a\equiv b$ to the conclusion $(\lambda x.a) \equiv
(\lambda x.b)$, abstracting over all occurrences (if any!) of~$x$.
Parameter~$x$ is supplied as a cterm. It may be an unknown or a free
variable (provided it does not occur in the assumptions). In the
conclusion, the bound variable is named~$v$.
\item[\ttindexbold{combination} $thm@1$ $thm@2$]
maps the premises $f\equiv g$ and $a\equiv b$ to the conclusion~$f(a)\equiv
g(b)$.
\end{ttdescription}
\subsection{Forall introduction rules}
\index{meta-quantifiers}
\begin{ttbox}
forall_intr : cterm -> thm -> thm
forall_intr_list : cterm list -> thm -> thm
forall_intr_frees : thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{forall_intr} $x$ $thm$]
applies $({\Forall}I)$, abstracting over all occurrences (if any!) of~$x$.
The rule maps the premise $\phi$ to the conclusion $\Forall x.\phi$.
Parameter~$x$ is supplied as a cterm. It may be an unknown or a free
variable (provided it does not occur in the assumptions).
\item[\ttindexbold{forall_intr_list} $xs$ $thm$]
applies $({\Forall}I)$ repeatedly, on every element of the list~$xs$.
\item[\ttindexbold{forall_intr_frees} $thm$]
applies $({\Forall}I)$ repeatedly, generalizing over all the free variables
of the premise.
\end{ttdescription}
\subsection{Forall elimination rules}
\begin{ttbox}
forall_elim : cterm -> thm -> thm
forall_elim_list : cterm list -> thm -> thm
forall_elim_var : int -> thm -> thm
forall_elim_vars : int -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{forall_elim} $ct$ $thm$]
applies $({\Forall}E)$, mapping the premise $\Forall x.\phi$ to the conclusion
$\phi[ct/x]$. The rule checks that $ct$ and $x$ have the same type.
\item[\ttindexbold{forall_elim_list} $cts$ $thm$]
applies $({\Forall}E)$ repeatedly, on every element of the list~$cts$.
\item[\ttindexbold{forall_elim_var} $k$ $thm$]
applies $({\Forall}E)$, mapping the premise $\Forall x.\phi$ to the conclusion
$\phi[\Var{x@k}/x]$. Thus, it replaces the outermost $\Forall$-bound
variable by an unknown having subscript~$k$.
\item[\ttindexbold{forall_elim_vars} $k$ $thm$]
applies {\tt forall_elim_var}~$k$ repeatedly until the theorem no longer has
the form $\Forall x.\phi$.
\end{ttdescription}
\subsection{Instantiation of unknowns}
\index{instantiation}
\begin{alltt}\footnotesize
instantiate: (indexname * ctyp){\thinspace}list * (cterm * cterm){\thinspace}list -> thm -> thm
\end{alltt}
There are two versions of this rule. The primitive one is
\ttindexbold{Thm.instantiate}, which merely performs the instantiation and can
produce a conclusion not in normal form. A derived version is
\ttindexbold{Drule.instantiate}, which normalizes its conclusion.
\begin{ttdescription}
\item[\ttindexbold{instantiate} ($tyinsts$,$insts$) $thm$]
simultaneously substitutes types for type unknowns (the
$tyinsts$) and terms for term unknowns (the $insts$). Instantiations are
given as $(v,t)$ pairs, where $v$ is an unknown and $t$ is a term (of the
same type as $v$) or a type (of the same sort as~$v$). All the unknowns
must be distinct.
In some cases, \ttindex{instantiate'} (see \S\ref{sec:instantiate})
provides a more convenient interface to this rule.
\end{ttdescription}
\subsection{Freezing/thawing type unknowns}
\index{type unknowns!freezing/thawing of}
\begin{ttbox}
freezeT: thm -> thm
varifyT: thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{freezeT} $thm$]
converts all the type unknowns in $thm$ to free type variables.
\item[\ttindexbold{varifyT} $thm$]
converts all the free type variables in $thm$ to type unknowns.
\end{ttdescription}
\section{Derived rules for goal-directed proof}
Most of these rules have the sole purpose of implementing particular
tactics. There are few occasions for applying them directly to a theorem.
\subsection{Proof by assumption}
\index{meta-assumptions}
\begin{ttbox}
assumption : int -> thm -> thm Seq.seq
eq_assumption : int -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{assumption} {\it i} $thm$]
attempts to solve premise~$i$ of~$thm$ by assumption.
\item[\ttindexbold{eq_assumption}]
is like {\tt assumption} but does not use unification.
\end{ttdescription}
\subsection{Resolution}
\index{resolution}
\begin{ttbox}
biresolution : bool -> (bool*thm)list -> int -> thm
               -> thm Seq.seq
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{biresolution} $match$ $rules$ $i$ $state$]
performs bi-resolution on subgoal~$i$ of $state$, using the list of $\it
(flag,rule)$ pairs. For each pair, it applies resolution if the flag
is~{\tt false} and elim-resolution if the flag is~{\tt true}. If $match$
is~{\tt true}, the $state$ is not instantiated.
\end{ttdescription}
\subsection{Composition: resolution without lifting}
\index{resolution!without lifting}
\begin{ttbox}
compose : thm * int * thm -> thm list
COMP : thm * thm -> thm
bicompose : bool -> bool * thm * int -> int -> thm
            -> thm Seq.seq
\end{ttbox}
In forward proof, a typical use of composition is to regard an assertion of
the form $\phi\Imp\psi$ as atomic. Schematic variables are not renamed, so
beware of clashes!
\begin{ttdescription}
\item[\ttindexbold{compose} ($thm@1$, $i$, $thm@2$)]
uses $thm@1$, regarded as an atomic formula, to solve premise~$i$
of~$thm@2$. Let $thm@1$ and $thm@2$ be $\psi$ and $\List{\phi@1; \ldots;
\phi@n} \Imp \phi$. For each $s$ that unifies~$\psi$ and $\phi@i$, the
result list contains the theorem
\[ (\List{\phi@1; \ldots; \phi@{i-1}; \phi@{i+1}; \ldots; \phi@n} \Imp \phi)s.
\]
\item[$thm@1$ \ttindexbold{COMP} $thm@2$]
calls \hbox{\tt compose ($thm@1$, 1, $thm@2$)} and returns the result, if
unique; otherwise, it raises exception~\xdx{THM}\@. It is
analogous to {\tt RS}\@.
For example, suppose that $thm@1$ is $a=b\Imp b=a$, a symmetry rule, and
that $thm@2$ is $\List{P\Imp Q; \neg Q} \Imp\neg P$, which is the
principle of contrapositives. Then the result would be the
derived rule $\neg(b=a)\Imp\neg(a=b)$.
\item[\ttindexbold{bicompose} $match$ ($flag$, $rule$, $m$) $i$ $state$]
refines subgoal~$i$ of $state$ using $rule$, without lifting. The $rule$
is taken to have the form $\List{\psi@1; \ldots; \psi@m} \Imp \psi$, where
$\psi$ need not be atomic; thus $m$ determines the number of new
subgoals. If $flag$ is {\tt true} then it performs elim-resolution --- it
solves the first premise of~$rule$ by assumption and deletes that
assumption. If $match$ is~{\tt true}, the $state$ is not instantiated.
\end{ttdescription}
\subsection{Other meta-rules}
\begin{ttbox}
trivial : cterm -> thm
lift_rule : (thm * int) -> thm -> thm
rename_params_rule : string list * int -> thm -> thm
flexflex_rule : thm -> thm Seq.seq
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{trivial} $ct$]
makes the theorem \(\phi\Imp\phi\), where $\phi$ is the value of~$ct$.
This is the initial state for a goal-directed proof of~$\phi$. The rule
checks that $ct$ has type~$prop$.
\item[\ttindexbold{lift_rule} ($state$, $i$) $rule$] \index{lifting}
prepares $rule$ for resolution by lifting it over the parameters and
assumptions of subgoal~$i$ of~$state$.
\item[\ttindexbold{rename_params_rule} ({\it names}, {\it i}) $thm$]
uses the $names$ to rename the parameters of premise~$i$ of $thm$. The
names must be distinct. If there are fewer names than parameters, then the
rule renames the innermost parameters and may modify the remaining ones to
ensure that all the parameters are distinct.
\index{parameters!renaming}
\item[\ttindexbold{flexflex_rule} $thm$] \index{flex-flex constraints}
removes all flex-flex pairs from $thm$ using the trivial unifier.
\end{ttdescription}
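For example (in Isabelle/HOL), \texttt{trivial} produces the initial state
for a goal-directed proof:
\begin{ttbox}
trivial (read_cterm (sign_of Main.thy) ("A", propT));
{\out val it = "A ==> A" : thm}
\end{ttbox}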
\index{meta-rules|)}
\section{Proof terms}\label{sec:proofObjects}
\index{proof terms|(} Isabelle can record the full meta-level proof of each
theorem. The proof term contains all logical inferences in detail.
Resolution and rewriting steps are broken down to primitive rules of the
meta-logic. The proof term can be inspected by a separate proof-checker,
for example.
According to the well-known {\em Curry-Howard isomorphism}, a proof can
be viewed as a $\lambda$-term. Following this idea, proofs
in Isabelle are internally represented by a datatype similar to the one for
terms described in \S\ref{sec:terms}.
\begin{ttbox}
infix 8 % %%;
datatype proof =
   PBound of int
 | Abst of string * typ option * proof
 | AbsP of string * term option * proof
 | op % of proof * term option
 | op %% of proof * proof
 | Hyp of term
 | PThm of (string * (string * string list) list) *
           proof * term * typ list option
 | PAxm of string * term * typ list option
 | Oracle of string * term * typ list option
 | MinProof of proof list;
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{Abst} ($a$, $\tau$, $prf$)] is the abstraction over
a {\it term variable} of type $\tau$ in the body $prf$. Logically, this
corresponds to $\bigwedge$ introduction. The name $a$ is used only for
parsing and printing.
\item[\ttindexbold{AbsP} ($a$, $\varphi$, $prf$)] is the abstraction
over a {\it proof variable} standing for a proof of proposition $\varphi$
in the body $prf$. This corresponds to $\Longrightarrow$ introduction.
\item[$prf$ \% $t$] \index{\%@{\tt\%}|bold}
is the application of proof $prf$ to term $t$
which corresponds to $\bigwedge$ elimination.
\item[$prf@1$ \%\% $prf@2$] \index{\%\%@{\tt\%\%}|bold}
is the application of proof $prf@1$ to
proof $prf@2$ which corresponds to $\Longrightarrow$ elimination.
\item[\ttindexbold{PBound} $i$] is a {\em proof variable} with de Bruijn
\cite{debruijn72} index $i$.
\item[\ttindexbold{Hyp} $\varphi$] corresponds to the use of a meta level
hypothesis $\varphi$.
\item[\ttindexbold{PThm} (($name$, $tags$), $prf$, $\varphi$, $\overline{\tau}$)]
stands for a pre-proved theorem, where $name$ is the name of the theorem,
$prf$ is its actual proof, $\varphi$ is the proven proposition,
and $\overline{\tau}$ is
a type assignment for the type variables occurring in the proposition.
\item[\ttindexbold{PAxm} ($name$, $\varphi$, $\overline{\tau}$)]
corresponds to the use of an axiom with name $name$ and proposition
$\varphi$, where $\overline{\tau}$ is a type assignment for the type
variables occurring in the proposition.
\item[\ttindexbold{Oracle} ($name$, $\varphi$, $\overline{\tau}$)]
denotes the invocation of an oracle with name $name$ which produced
a proposition $\varphi$, where $\overline{\tau}$ is a type assignment
for the type variables occurring in the proposition.
\item[\ttindexbold{MinProof} $prfs$]
represents a {\em minimal proof} where $prfs$ is a list of theorems,
axioms or oracles.
\end{ttdescription}
Note that there are no separate constructors
for abstraction and application on the level of {\em types}, since
instantiation of type variables is accomplished via the type assignments
attached to {\tt PThm}, {\tt PAxm} and {\tt Oracle}.
Each theorem's derivation is stored as the {\tt der} field of its internal
record:
\begin{ttbox}
#2 (#der (rep_thm conjI));
{\out PThm (("HOL.conjI", []),}
{\out AbsP ("H", None, AbsP ("H", None, \dots)), \dots, None) %}
{\out None % None : Proofterm.proof}
\end{ttbox}
This proof term identifies a labelled theorem, {\tt conjI} of theory
\texttt{HOL}, whose underlying proof is
{\tt AbsP ("H", None, AbsP ("H", None, $\dots$))}.
The theorem is applied to two (implicit) term arguments, which correspond
to the two variables occurring in its proposition.
Isabelle's inference kernel can produce proof objects with different
levels of detail. This is controlled via the global reference variable
\ttindexbold{proofs}:
\begin{ttdescription}
\item[proofs := 0;] only record uses of oracles
\item[proofs := 1;] record uses of oracles as well as dependencies
on other theorems and axioms
\item[proofs := 2;] record inferences in full detail
\end{ttdescription}
Reconstruction and checking of proofs as described in \S\ref{sec:reconstruct_proofs}
will not work for proofs constructed with {\tt proofs} set to
{\tt 0} or {\tt 1}.
Theorems involving oracles will be printed with a
suffixed \verb|[!]| to indicate the reduced level of confidence.
\medskip
The dependencies of theorems can be viewed using the function
\ttindexbold{thm_deps}\index{theorems!dependencies}:
\begin{ttbox}
thm_deps [\(thm@1\), \(\ldots\), \(thm@n\)];
\end{ttbox}
generates the dependency graph of the theorems $thm@1$, $\ldots$, $thm@n$ and
displays it using Isabelle's graph browser. For this to work properly,
the theorems in question have to be proved with {\tt proofs} set to a value
greater than {\tt 0}. You can use
\begin{ttbox}
ThmDeps.enable : unit -> unit
ThmDeps.disable : unit -> unit
\end{ttbox}
to set \texttt{proofs} appropriately.
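For example, the following sketch of a session enables the recording of
dependencies, loads a hypothetical theory \texttt{MyTheory}, and displays
the dependency graph of two theorems proved in it (here \texttt{my_lemma}
and \texttt{my_thm} stand for ML bindings of those theorems):
\begin{ttbox}
ThmDeps.enable ();
use_thy "MyTheory";
thm_deps [my_lemma, my_thm];
\end{ttbox}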
\subsection{Reconstructing and checking proof terms}\label{sec:reconstruct_proofs}
\index{proof terms!reconstructing}
\index{proof terms!checking}
When looking at the above datatype of proofs more closely, one notices that
some arguments of constructors are {\it optional}. The reason for this is that
keeping a full proof term for each theorem would result in enormous memory
requirements. Fortunately, typical proof terms usually contain quite a lot of
redundant information that can be reconstructed from the context. Therefore,
Isabelle's inference kernel creates only {\em partial} (or {\em implicit})
\index{proof terms!partial} proof terms, in which
all typing information in terms, all term and type labels of abstractions
{\tt AbsP} and {\tt Abst}, and (if possible) some argument terms of
\verb!%! are omitted. The following functions are available for
reconstructing and checking proof terms:
\begin{ttbox}
Reconstruct.reconstruct_proof :
  Sign.sg -> term -> Proofterm.proof -> Proofterm.proof
Reconstruct.expand_proof :
  Sign.sg -> string list -> Proofterm.proof -> Proofterm.proof
ProofChecker.thm_of_proof : theory -> Proofterm.proof -> thm
\end{ttbox}
\begin{ttdescription}
\item[Reconstruct.reconstruct_proof $sg$ $t$ $prf$]
turns the partial proof $prf$ into a full proof of the
proposition denoted by $t$, with respect to signature $sg$.
Reconstruction will fail with an error message if $prf$
is not a proof of $t$, is ill-formed, or does not contain
sufficient information for reconstruction by
{\em higher order pattern unification}
\cite{nipkow-patterns, Berghofer-Nipkow:2000:TPHOL}.
The latter can only happen for proofs
that are built up ``by hand'', not for those produced automatically
by Isabelle's inference kernel.
\item[Reconstruct.expand_proof $sg$
\ttlbrack$name@1$, $\ldots$, $name@n${\ttrbrack} $prf$]
expands and reconstructs the proofs of all theorems with names
$name@1$, $\ldots$, $name@n$ in the (full) proof $prf$.
\item[ProofChecker.thm_of_proof $thy$ $prf$] turns the (full) proof
$prf$ into a theorem with respect to theory $thy$ by replaying
it using only primitive rules from Isabelle's inference kernel.
\end{ttdescription}
\subsection{Parsing and printing proof terms}
\index{proof terms!parsing}
\index{proof terms!printing}
Isabelle offers several functions for parsing and printing
proof terms. The concrete syntax for proof terms is described
in Fig.\ts\ref{fig:proof_gram}.
Implicit term arguments in partial proofs are indicated
by ``{\tt _}''.
Type arguments for theorems and axioms may be specified using
\verb!%! or ``$\cdot$'' with an argument of the form {\tt TYPE($type$)}
(see \S\ref{sec:basic_syntax}).
They must appear before any other term argument of a theorem
or axiom. In contrast to term arguments, type arguments may
be completely omitted.
\begin{ttbox}
ProofSyntax.read_proof : theory -> bool -> string -> Proofterm.proof
ProofSyntax.pretty_proof : Sign.sg -> Proofterm.proof -> Pretty.T
ProofSyntax.pretty_proof_of : bool -> thm -> Pretty.T
ProofSyntax.print_proof_of : bool -> thm -> unit
\end{ttbox}
\begin{figure}
\begin{center}
\begin{tabular}{rcl}
$proof$ & $=$ & {\tt Lam} $params${\tt .} $proof$ ~~$|$~~
$\Lambda params${\tt .} $proof$ \\
& $|$ & $proof$ \verb!%! $any$ ~~$|$~~
$proof$ $\cdot$ $any$ \\
& $|$ & $proof$ \verb!%%! $proof$ ~~$|$~~
$proof$ {\boldmath$\cdot$} $proof$ \\
& $|$ & $id$ ~~$|$~~ $longid$ \\\\
$param$ & $=$ & $idt$ ~~$|$~~ $idt$ {\tt :} $prop$ ~~$|$~~
{\tt (} $param$ {\tt )} \\\\
$params$ & $=$ & $param$ ~~$|$~~ $param$ $params$
\end{tabular}
\end{center}
\caption{Proof term syntax}\label{fig:proof_gram}
\end{figure}
The function {\tt read_proof} reads in a proof term with
respect to a given theory. The boolean flag indicates whether
the proof term to be parsed contains explicit typing information
to be taken into account.
Usually, typing information is left implicit and
is inferred during proof reconstruction. The pretty printing
functions operating on theorems take a boolean flag as an
argument which indicates whether the proof term should
be reconstructed before printing.
The following example (based on Isabelle/HOL) illustrates how
to parse and check proof terms. We start by parsing a partial
proof term
\begin{ttbox}
val prf = ProofSyntax.read_proof Main.thy false
"impI % _ % _ %% (Lam H : _. conjE % _ % _ % _ %% H %%
(Lam (H1 : _) H2 : _. conjI % _ % _ %% H2 %% H1))";
{\out val prf = PThm (("HOL.impI", []), \dots, \dots, None) % None % None %%}
{\out AbsP ("H", None, PThm (("HOL.conjE", []), \dots, \dots, None) %}
{\out None % None % None %% PBound 0 %%}
{\out AbsP ("H1", None, AbsP ("H2", None, \dots))) : Proofterm.proof}
\end{ttbox}
The statement to be established by this proof is
\begin{ttbox}
val t = term_of
(read_cterm (sign_of Main.thy) ("A & B --> B & A", propT));
{\out val t = Const ("Trueprop", "bool => prop") $}
{\out (Const ("op -->", "[bool, bool] => bool") $}
{\out \dots $ \dots : Term.term}
\end{ttbox}
Using {\tt t} we can reconstruct the full proof
\begin{ttbox}
val prf' = Reconstruct.reconstruct_proof (sign_of Main.thy) t prf;
{\out val prf' = PThm (("HOL.impI", []), \dots, \dots, Some []) %}
{\out Some (Const ("op &", \dots) $ Free ("A", \dots) $ Free ("B", \dots)) %}
{\out Some (Const ("op &", \dots) $ Free ("B", \dots) $ Free ("A", \dots)) %%}
{\out AbsP ("H", Some (Const ("Trueprop", \dots) $ \dots), \dots)}
{\out : Proofterm.proof}
\end{ttbox}
This proof can finally be turned into a theorem
\begin{ttbox}
val thm = ProofChecker.thm_of_proof Main.thy prf';
{\out val thm = "A & B --> B & A" : Thm.thm}
\end{ttbox}
\index{proof terms|)}
\index{theorems|)}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "ref"
%%% End: