%% $Id$
\chapter{Theorems and Forward Proof}
\index{theorems|(}

Theorems, which represent the axioms, theorems and rules of
object-logics, have type \mltydx{thm}.  This chapter begins by
describing operations that print theorems and that join them in
forward proof.  Most theorem operations are intended for advanced
applications, such as programming new proof procedures.  Many of these
operations refer to signatures, certified terms and certified types,
which have the \ML{} types {\tt Sign.sg}, {\tt cterm} and {\tt ctyp}
and are discussed in Chapter~\ref{theories}.  Beginning users should
ignore such complexities --- and skip all but the first section of
this chapter.

The theorem operations do not print error messages.  Instead, they raise
exception~\xdx{THM}\@.  Use \ttindex{print_exn} to display
exceptions nicely:
\begin{ttbox} 
allI RS mp  handle e => print_exn e;
{\out Exception THM raised:}
{\out RSN: no unifiers -- premise 1}
{\out (!!x. ?P(x)) ==> ALL x. ?P(x)}
{\out [| ?P --> ?Q; ?P |] ==> ?Q}
{\out}
{\out uncaught exception THM}
\end{ttbox}


\section{Basic operations on theorems}
\subsection{Pretty-printing a theorem}
\index{theorems!printing of}
\begin{ttbox} 
prth          : thm -> thm
prths         : thm list -> thm list
prthq         : thm Sequence.seq -> thm Sequence.seq
print_thm     : thm -> unit
print_goals   : int -> thm -> unit
string_of_thm : thm -> string
\end{ttbox}
The first three commands are for interactive use.  They are identity
functions that display, then return, their argument.  The \ML{} identifier
{\tt it} will refer to the value just displayed.

The others are for use in programs.  Functions with result type {\tt unit}
are convenient for imperative programming.

\begin{ttdescription}
\item[\ttindexbold{prth} {\it thm}]  
prints {\it thm\/} at the terminal.

\item[\ttindexbold{prths} {\it thms}]  
prints {\it thms}, a list of theorems.

\item[\ttindexbold{prthq} {\it thmq}]  
prints {\it thmq}, a sequence of theorems.  It is useful for inspecting
the output of a tactic.

\item[\ttindexbold{print_thm} {\it thm}]  
prints {\it thm\/} at the terminal.

\item[\ttindexbold{print_goals} {\it limit\/} {\it thm}]  
prints {\it thm\/} in goal style, with the premises as subgoals.  It prints
at most {\it limit\/} subgoals.  The subgoal module calls {\tt print_goals}
to display proof states.

\item[\ttindexbold{string_of_thm} {\it thm}]  
converts {\it thm\/} to a string.
\end{ttdescription}
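
For example, in a first-order logic session where the rule {\tt mp} is
available (as in the {\tt THM} example above), the output of {\tt prth}
resembles the following:
\begin{ttbox} 
prth mp;
{\out [| ?P --> ?Q; ?P |] ==> ?Q}
\end{ttbox}
Because {\tt prth} returns its argument, {\tt it} is now bound to the
rule~{\tt mp} and can be used in the next command.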


\subsection{Forward proof: joining rules by resolution}
\index{theorems!joining by resolution}
\index{resolution}\index{forward proof}
\begin{ttbox} 
RSN : thm * (int * thm) -> thm                 \hfill{\bf infix}
RS  : thm * thm -> thm                         \hfill{\bf infix}
MRS : thm list * thm -> thm                    \hfill{\bf infix}
RLN : thm list * (int * thm list) -> thm list  \hfill{\bf infix}
RL  : thm list * thm list -> thm list          \hfill{\bf infix}
MRL : thm list list * thm list -> thm list     \hfill{\bf infix}
\end{ttbox}
Joining rules together is a simple way of deriving new rules.  These
functions are especially useful with destruction rules.  To store
the result in the theorem database, use \ttindex{bind_thm}
(\S\ref{ExtractingAndStoringTheProvedTheorem}). 
\begin{ttdescription}
\item[\tt$thm@1$ RSN $(i,thm@2)$] \indexbold{*RSN} 
  resolves the conclusion of $thm@1$ with the $i$th premise of~$thm@2$.
  Unless there is precisely one resolvent it raises exception
  \xdx{THM}; in that case, use {\tt RLN}.

\item[\tt$thm@1$ RS $thm@2$] \indexbold{*RS} 
abbreviates \hbox{\tt$thm@1$ RSN $(1,thm@2)$}.  Thus, it resolves the
conclusion of $thm@1$ with the first premise of~$thm@2$.

\item[\tt {$[thm@1,\ldots,thm@n]$} MRS $thm$] \indexbold{*MRS} 
  uses {\tt RSN} to resolve $thm@i$ against premise~$i$ of $thm$, for
  $i=n$, \ldots,~1.  This applies $thm@n$, \ldots, $thm@1$ to the first $n$
  premises of $thm$.  Because the theorems are used from right to left, it
  does not matter if the $thm@i$ create new premises.  {\tt MRS} is useful
  for expressing proof trees.

\item[\tt$thms@1$ RLN $(i,thms@2)$] \indexbold{*RLN} 
  joins lists of theorems.  For every $thm@1$ in $thms@1$ and $thm@2$ in
  $thms@2$, it resolves the conclusion of $thm@1$ with the $i$th premise
  of~$thm@2$, accumulating the results. 

\item[\tt$thms@1$ RL $thms@2$] \indexbold{*RL} 
abbreviates \hbox{\tt$thms@1$ RLN $(1,thms@2)$}. 

\item[\tt {$[thms@1,\ldots,thms@n]$} MRL $thms$] \indexbold{*MRL} 
is analogous to {\tt MRS}, but combines theorem lists rather than theorems.
It too is useful for expressing proof trees.
\end{ttdescription}
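
For example, in first-order logic the symmetry rule {\tt sym} ($a=b\Imp
b=a$) can be joined with the substitution rule {\tt subst} to give
substitution from right to left, which is essentially FOL's rule {\tt
ssubst}.  The output should resemble the following, although the exact
names of the unknowns may differ:
\begin{ttbox} 
prth (sym RS subst);
{\out [| ?a = ?b; ?P(?b) |] ==> ?P(?a)}
\end{ttbox}
To store such a derived rule in the theorem database, one might write
\hbox{\tt bind_thm ("ssubst", sym RS subst)}.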


\subsection{Expanding definitions in theorems}
\index{meta-rewriting!in theorems}
\begin{ttbox} 
rewrite_rule       : thm list -> thm -> thm
rewrite_goals_rule : thm list -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{rewrite_rule} {\it defs} {\it thm}]  
unfolds the {\it defs} throughout the theorem~{\it thm}.

\item[\ttindexbold{rewrite_goals_rule} {\it defs} {\it thm}]  
unfolds the {\it defs} in the premises of~{\it thm}, but leaves the
conclusion unchanged.  This rule underlies \ttindex{rewrite_goals_tac}, but 
serves little purpose in forward proof.
\end{ttdescription}
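
For example, assuming first-order logic's definition {\tt not_def}, which
defines negation in terms of implication and {\tt False}, the rule {\tt
notI} can be unfolded as follows.  This is only a sketch; the printed form
may differ:
\begin{ttbox} 
prth (rewrite_rule [not_def] notI);
{\out (?P ==> False) ==> ?P --> False}
\end{ttbox}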


\subsection{Instantiating a theorem}
\index{instantiation}
\begin{ttbox}
read_instantiate    :            (string * string) list -> thm -> thm
read_instantiate_sg : Sign.sg -> (string * string) list -> thm -> thm
cterm_instantiate   :              (cterm * cterm) list -> thm -> thm
\end{ttbox}
These meta-rules instantiate type and term unknowns in a theorem.  They are
occasionally useful, for example to avoid difficulties with higher-order
unification or to define specialized versions of rules.
\begin{ttdescription}
\item[\ttindexbold{read_instantiate} {\it insts} {\it thm}] 
processes the instantiations {\it insts} and instantiates the rule~{\it
thm}.  The processing of instantiations is described
in \S\ref{res_inst_tac}, under {\tt res_inst_tac}.  

Use {\tt res_inst_tac}, not {\tt read_instantiate}, to instantiate a rule
and refine a particular subgoal.  The tactic allows instantiation by the
subgoal's parameters, and reads the instantiations using the signature
associated with the proof state.

Use {\tt read_instantiate_sg} below if {\it insts\/} appears to be treated
incorrectly.

\item[\ttindexbold{read_instantiate_sg} {\it sg} {\it insts} {\it thm}]
  resembles \hbox{\tt read_instantiate {\it insts} {\it thm}}, but reads
  the instantiations under signature~{\it sg}.  This is necessary to
  instantiate a rule from a general theory, such as first-order logic,
  using the notation of some specialized theory.  Use the function {\tt
    sign_of} to get a theory's signature.

\item[\ttindexbold{cterm_instantiate} {\it ctpairs} {\it thm}] 
is similar to {\tt read_instantiate}, but the instantiations are provided
as pairs of certified terms, not as strings to be read.
\end{ttdescription}
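
For example, a specialized version of {\tt mp} can be obtained by
instantiating its unknown {\tt ?Q}.  This sketch assumes first-order logic
syntax; the printed form may differ slightly:
\begin{ttbox} 
prth (read_instantiate [("Q", "False")] mp);
{\out [| ?P --> False; ?P |] ==> False}
\end{ttbox}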


\subsection{Miscellaneous forward rules}\label{MiscellaneousForwardRules}
\index{theorems!standardizing}
\begin{ttbox} 
standard         :           thm -> thm
zero_var_indexes :           thm -> thm
make_elim        :           thm -> thm
rule_by_tactic   : tactic -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{standard} $thm$] puts $thm$ into the standard form
  of object-rules.  It discharges all meta-assumptions, replaces free
  variables by schematic variables, renames schematic variables to
  have subscript zero, strips outer (meta) quantifiers, and removes
  dangling sort hypotheses.

\item[\ttindexbold{zero_var_indexes} $thm$] 
makes all schematic variables have subscript zero, renaming them to avoid
clashes. 

\item[\ttindexbold{make_elim} $thm$] 
\index{rules!converting destruction to elimination}
converts $thm$, a destruction rule of the form $\List{P@1;\ldots;P@m}\Imp
Q$, to the elimination rule $\List{P@1; \ldots; P@m; Q\Imp R}\Imp R$.  This
is the basis for destruct-resolution: {\tt dresolve_tac}, etc.

\item[\ttindexbold{rule_by_tactic} {\it tac} {\it thm}] 
  applies {\it tac\/} to the {\it thm}, freezing its variables first, then
  yields the proof state returned by the tactic.  In typical usage, the
  {\it thm\/} represents an instance of a rule with several premises, some
  with contradictory assumptions (because of the instantiation).  The
  tactic proves those subgoals and does whatever else it can, and returns
  whatever is left.
\end{ttdescription}
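
For example, applying {\tt make_elim} to the destruction rule {\tt mp}
yields the corresponding elimination rule; the output should resemble the
following:
\begin{ttbox} 
prth (make_elim mp);
{\out [| ?P --> ?Q; ?P; ?Q ==> PROP ?W |] ==> PROP ?W}
\end{ttbox}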


\subsection{Taking a theorem apart}
\index{theorems!taking apart}
\index{flex-flex constraints}
\begin{ttbox} 
concl_of      : thm -> term
prems_of      : thm -> term list
nprems_of     : thm -> int
tpairs_of     : thm -> (term*term)list
stamps_of_thm : thm -> string ref list
theory_of_thm : thm -> theory
dest_state    : thm*int -> (term*term)list*term list*term*term
rep_thm       : thm -> {\ttlbrace}prop: term, hyps: term list, der: deriv, 
                        maxidx: int, sign: Sign.sg, shyps: sort list\ttrbrace
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{concl_of} $thm$] 
returns the conclusion of $thm$ as a term.

\item[\ttindexbold{prems_of} $thm$] 
returns the premises of $thm$ as a list of terms.

\item[\ttindexbold{nprems_of} $thm$] 
returns the number of premises in $thm$, and is equivalent to {\tt
  length(prems_of~$thm$)}.

\item[\ttindexbold{tpairs_of} $thm$] 
returns the flex-flex constraints of $thm$.

\item[\ttindexbold{stamps_of_thm} $thm$] 
returns the \rmindex{stamps} of the signature associated with~$thm$.

\item[\ttindexbold{theory_of_thm} $thm$]
returns the theory associated with $thm$.

\item[\ttindexbold{dest_state} $(thm,i)$] 
decomposes $thm$ as a tuple containing a list of flex-flex constraints, a
list of the subgoals~1 to~$i-1$, subgoal~$i$, and the rest of the theorem
(this will be an implication if there are more than $i$ subgoals).

\item[\ttindexbold{rep_thm} $thm$] decomposes $thm$ as a record containing the
  statement of~$thm$ ({\tt prop}), its list of meta-assumptions ({\tt hyps}),
  its derivation ({\tt der}), a bound on the maximum subscript of its
  unknowns ({\tt maxidx}), and its signature ({\tt sign}).  The {\tt shyps}
  field is discussed below.
\end{ttdescription}
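
For example, taking apart {\tt mp}, whose statement is
\hbox{\tt [| ?P --> ?Q; ?P |] ==> ?Q}, gives output resembling the
following:
\begin{ttbox} 
nprems_of mp;
{\out 2 : int}
tpairs_of mp;
{\out [] : (term * term) list}
\end{ttbox}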


\subsection{*Sort hypotheses} 
\index{sort hypotheses}
\begin{ttbox} 
force_strip_shyps : bool ref \hfill{\bf initially true}
\end{ttbox}

\begin{ttdescription}
\item[\ttindexbold{force_strip_shyps}]
causes sort hypotheses to be deleted, printing a warning.
\end{ttdescription}

Isabelle's type variables are decorated with sorts, constraining them to
certain ranges of types.  This has little impact when sorts only serve for
syntactic classification of types --- for example, FOL distinguishes between
terms and other types.  But when type classes are introduced through axioms,
some sorts may become {\em empty\/}: one cannot exhibit a type belonging to
such a sort because the corresponding axioms are unsatisfiable.

If a theorem contains a type variable that is constrained by an empty
sort, then that theorem has no instances. It is basically an instance
of {\em ex falso quodlibet}.  But what if it is used to prove another
theorem that no longer involves that sort?  The latter theorem holds
only under an additional non-emptiness assumption.

Therefore, Isabelle's theorems carry around sort hypotheses. The {\tt
shyps} field is a list of sorts occurring in type variables in the current
{\tt prop} and {\tt hyps} fields.  It may also include sorts used in the
theorem's proof that no longer appear in the {\tt prop} or {\tt hyps}
fields --- so-called {\em dangling\/} sort constraints. These are the
critical ones, asserting non-emptiness of the corresponding sorts.
 
Isabelle tries to remove extraneous sorts from the {\tt shyps} field whenever
non-emptiness can be established by looking at the theorem's signature: from
the {\tt arities} information, etc.  Because the current implementation of
this check is highly incomplete, the flag shown above is provided.  Setting
it to {\tt true} (the default) allows existing proofs to run.


\subsection{Tracing flags for unification}
\index{tracing!of unification}
\begin{ttbox} 
Unify.trace_simp   : bool ref \hfill{\bf initially false}
Unify.trace_types  : bool ref \hfill{\bf initially false}
Unify.trace_bound  : int ref \hfill{\bf initially 10}
Unify.search_bound : int ref \hfill{\bf initially 20}
\end{ttbox}
Tracing the search may be useful when higher-order unification behaves
unexpectedly.  It is usually easier, though, to circumvent the problem
using {\tt res_inst_tac}.
\begin{ttdescription}
\item[Unify.trace_simp := true;] 
causes tracing of the simplification phase.

\item[Unify.trace_types := true;] 
generates warnings of incompleteness, when unification is not considering
all possible instantiations of type unknowns.

\item[Unify.trace_bound := $n$;] 
causes unification to print tracing information once it reaches depth~$n$.
Use $n=0$ for full tracing.  At the default value of~10, tracing
information is almost never printed.

\item[Unify.search_bound := $n$;] 
causes unification to limit its search to depth~$n$.  Because of this
bound, higher-order unification cannot return an infinite sequence, though
it can return a very long one.  The search rarely approaches the default
value of~20.  If the search is cut off, unification prints {\tt
***Unification bound exceeded}.
\end{ttdescription}
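
A typical pattern is to switch on tracing just before attempting the
problematic resolution step and to restore the defaults afterwards.  This
is only a sketch; the tactic and rule shown depend on the proof at hand:
\begin{ttbox} 
Unify.trace_bound := 0;
Unify.trace_types := true;
by (resolve_tac [spec] 1);
Unify.trace_bound := 10;
Unify.trace_types := false;
\end{ttbox}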


\section{Primitive meta-level inference rules}
\index{meta-rules|(}
These implement the meta-logic in {\sc lcf} style, as functions from theorems
to theorems.  They are rarely useful for deriving results in the pure
theory; mainly they are included for completeness, and most users should
not bother with them.  The meta-rules raise exception \xdx{THM} to signal
malformed premises, incompatible signatures and similar errors.

\index{meta-assumptions}
The meta-logic uses natural deduction.  Each theorem may depend on
meta-level assumptions.  Certain rules, such as $({\Imp}I)$,
discharge assumptions; in most other rules, the conclusion depends on all
of the assumptions of the premises.  Formally, the system works with
assertions of the form
\[ \phi \quad [\phi@1,\ldots,\phi@n], \]
where $\phi@1$,~\ldots,~$\phi@n$ are the assumptions.  This can also be
read as a single-conclusion sequent $\phi@1,\ldots,\phi@n \vdash
\phi$. Do not confuse meta-level assumptions with the object-level
assumptions in a subgoal, which are represented in the meta-logic
using~$\Imp$.

Each theorem has a signature.  Certified terms have a signature.  When a
rule takes several premises and certified terms, it merges the signatures
to make a signature for the conclusion.  This fails if the signatures are
incompatible. 

\index{meta-implication}
The {\bf implication} rules are $({\Imp}I)$
and $({\Imp}E)$:
\[ \infer[({\Imp}I)]{\phi\Imp \psi}{\infer*{\psi}{[\phi]}}  \qquad
   \infer[({\Imp}E)]{\psi}{\phi\Imp \psi & \phi}  \]

\index{meta-equality}
Equality of truth values means logical equivalence:
\[ \infer[({\equiv}I)]{\phi\equiv\psi}{\infer*{\psi}{[\phi]} &
                                       \infer*{\phi}{[\psi]}}  
   \qquad
   \infer[({\equiv}E)]{\psi}{\phi\equiv \psi & \phi}   \]

The {\bf equality} rules are reflexivity, symmetry, and transitivity:
\[ {a\equiv a}\,(refl)  \qquad
   \infer[(sym)]{b\equiv a}{a\equiv b}  \qquad
   \infer[(trans)]{a\equiv c}{a\equiv b & b\equiv c}   \]

\index{lambda calc@$\lambda$-calculus}
The $\lambda$-conversions are $\alpha$-conversion, $\beta$-conversion, and
extensionality:\footnote{$\alpha$-conversion holds if $y$ is not free
in~$a$; $(ext)$ holds if $x$ is not free in the assumptions, $f$, or~$g$.}
\[ {(\lambda x.a) \equiv (\lambda y.a[y/x])}    \qquad
   {((\lambda x.a)(b)) \equiv a[b/x]}           \qquad
   \infer[(ext)]{f\equiv g}{f(x) \equiv g(x)}   \]

The {\bf abstraction} and {\bf combination} rules let conversions be
applied to subterms:\footnote{Abstraction holds if $x$ is not free in the
assumptions.}
\[  \infer[(abs)]{(\lambda x.a) \equiv (\lambda x.b)}{a\equiv b}   \qquad
    \infer[(comb)]{f(a)\equiv g(b)}{f\equiv g & a\equiv b}   \]

\index{meta-quantifiers}
The {\bf universal quantification} rules are $(\Forall I)$ and $(\Forall
E)$:\footnote{$(\Forall I)$ holds if $x$ is not free in the assumptions.}
\[ \infer[(\Forall I)]{\Forall x.\phi}{\phi}        \qquad
   \infer[(\Forall E)]{\phi[b/x]}{\Forall x.\phi}   \]


\subsection{Assumption rule}
\index{meta-assumptions}
\begin{ttbox} 
assume: cterm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{assume} $ct$] 
makes the theorem \(\phi \;[\phi]\), where $\phi$ is the value of~$ct$.
The rule checks that $ct$ has type $prop$ and contains no unknowns, which
are not allowed in assumptions.
\end{ttdescription}

\subsection{Implication rules}
\index{meta-implication}
\begin{ttbox} 
implies_intr      : cterm -> thm -> thm
implies_intr_list : cterm list -> thm -> thm
implies_intr_hyps : thm -> thm
implies_elim      : thm -> thm -> thm
implies_elim_list : thm -> thm list -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{implies_intr} $ct$ $thm$] 
is $({\Imp}I)$, where $ct$ is the assumption to discharge, say~$\phi$.  It
maps the premise~$\psi$ to the conclusion $\phi\Imp\psi$, removing all
occurrences of~$\phi$ from the assumptions.  The rule checks that $ct$ has
type $prop$. 

\item[\ttindexbold{implies_intr_list} $cts$ $thm$] 
applies $({\Imp}I)$ repeatedly, on every element of the list~$cts$.

\item[\ttindexbold{implies_intr_hyps} $thm$] 
applies $({\Imp}I)$ to discharge all the hypotheses (assumptions) of~$thm$.
It maps the premise $\phi \; [\phi@1,\ldots,\phi@n]$ to the conclusion
$\List{\phi@1,\ldots,\phi@n}\Imp\phi$.

\item[\ttindexbold{implies_elim} $thm@1$ $thm@2$] 
applies $({\Imp}E)$ to $thm@1$ and~$thm@2$.  It maps the premises $\phi\Imp
\psi$ and $\phi$ to the conclusion~$\psi$.

\item[\ttindexbold{implies_elim_list} $thm$ $thms$] 
applies $({\Imp}E)$ repeatedly to $thm$, using each element of~$thms$ in
turn.  It maps the premises $\List{\phi@1,\ldots,\phi@n}\Imp\psi$ and
$\phi@1$,\ldots,$\phi@n$ to the conclusion~$\psi$.
\end{ttdescription}
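
As a small example of forward proof with these primitives, here is a sketch
of deriving the trivial theorem $P\Imp P$ for a free variable~$P$.  It
assumes that the theory {\tt FOL.thy} is loaded and uses {\tt read_cterm}
(see Chapter~\ref{theories}) to certify the assumption:
\begin{ttbox} 
val ct  = read_cterm (sign_of FOL.thy) ("P", propT);
val ath = assume ct;
val th  = implies_intr ct ath;
\end{ttbox}
Here {\tt assume} yields the theorem $P\;[P]$ and {\tt implies_intr}
discharges the assumption, leaving $P\Imp P$ with no assumptions.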

\subsection{Logical equivalence rules}
\index{meta-equality}
\begin{ttbox} 
equal_intr : thm -> thm -> thm 
equal_elim : thm -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{equal_intr} $thm@1$ $thm@2$] 
applies $({\equiv}I)$ to $thm@1$ and~$thm@2$.  It maps the premises~$\psi$
and~$\phi$ to the conclusion~$\phi\equiv\psi$; the assumptions are those of
the first premise with~$\phi$ removed, plus those of
the second premise with~$\psi$ removed.

\item[\ttindexbold{equal_elim} $thm@1$ $thm@2$] 
applies $({\equiv}E)$ to $thm@1$ and~$thm@2$.  It maps the premises
$\phi\equiv\psi$ and $\phi$ to the conclusion~$\psi$.
\end{ttdescription}


\subsection{Equality rules}
\index{meta-equality}
\begin{ttbox} 
reflexive  : cterm -> thm
symmetric  : thm -> thm
transitive : thm -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{reflexive} $ct$] 
makes the theorem \(ct\equiv ct\). 

\item[\ttindexbold{symmetric} $thm$] 
maps the premise $a\equiv b$ to the conclusion $b\equiv a$.

\item[\ttindexbold{transitive} $thm@1$ $thm@2$] 
maps the premises $a\equiv b$ and $b\equiv c$ to the conclusion~${a\equiv c}$.
\end{ttdescription}


\subsection{The $\lambda$-conversion rules}
\index{lambda calc@$\lambda$-calculus}
\begin{ttbox} 
beta_conversion : cterm -> thm
extensional     : thm -> thm
abstract_rule   : string -> cterm -> thm -> thm
combination     : thm -> thm -> thm
\end{ttbox} 
There is no rule for $\alpha$-conversion because Isabelle regards
$\alpha$-convertible theorems as equal.
\begin{ttdescription}
\item[\ttindexbold{beta_conversion} $ct$] 
makes the theorem $((\lambda x.a)(b)) \equiv a[b/x]$, where $ct$ is the
term $(\lambda x.a)(b)$.

\item[\ttindexbold{extensional} $thm$] 
maps the premise $f(x) \equiv g(x)$ to the conclusion $f\equiv g$.
Parameter~$x$ is taken from the premise.  It may be an unknown or a free
variable (provided it does not occur in the assumptions); it must not occur
in $f$ or~$g$.

\item[\ttindexbold{abstract_rule} $v$ $x$ $thm$] 
maps the premise $a\equiv b$ to the conclusion $(\lambda x.a) \equiv
(\lambda x.b)$, abstracting over all occurrences (if any!) of~$x$.
Parameter~$x$ is supplied as a cterm.  It may be an unknown or a free
variable (provided it does not occur in the assumptions).  In the
conclusion, the bound variable is named~$v$.

\item[\ttindexbold{combination} $thm@1$ $thm@2$] 
maps the premises $f\equiv g$ and $a\equiv b$ to the conclusion~$f(a)\equiv
g(b)$.
\end{ttdescription}


\subsection{Forall introduction rules}
\index{meta-quantifiers}
\begin{ttbox} 
forall_intr       : cterm      -> thm -> thm
forall_intr_list  : cterm list -> thm -> thm
forall_intr_frees :               thm -> thm
\end{ttbox}

\begin{ttdescription}
\item[\ttindexbold{forall_intr} $x$ $thm$] 
applies $({\Forall}I)$, abstracting over all occurrences (if any!) of~$x$.
The rule maps the premise $\phi$ to the conclusion $\Forall x.\phi$.
Parameter~$x$ is supplied as a cterm.  It may be an unknown or a free
variable (provided it does not occur in the assumptions).

\item[\ttindexbold{forall_intr_list} $xs$ $thm$] 
applies $({\Forall}I)$ repeatedly, on every element of the list~$xs$.

\item[\ttindexbold{forall_intr_frees} $thm$] 
applies $({\Forall}I)$ repeatedly, generalizing over all the free variables
of the premise.
\end{ttdescription}
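
Continuing the sketch given after the implication rules, the free
variable~$P$ of the theorem {\tt th}, which states $P\Imp P$, can be
generalized; the printed form depends on the object-logic:
\begin{ttbox} 
prth (forall_intr_frees th);
{\out !!P. P ==> P}
\end{ttbox}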


\subsection{Forall elimination rules}
\begin{ttbox} 
forall_elim       : cterm      -> thm -> thm
forall_elim_list  : cterm list -> thm -> thm
forall_elim_var   :        int -> thm -> thm
forall_elim_vars  :        int -> thm -> thm
\end{ttbox}

\begin{ttdescription}
\item[\ttindexbold{forall_elim} $ct$ $thm$] 
applies $({\Forall}E)$, mapping the premise $\Forall x.\phi$ to the conclusion
$\phi[ct/x]$.  The rule checks that $ct$ and $x$ have the same type.

\item[\ttindexbold{forall_elim_list} $cts$ $thm$] 
applies $({\Forall}E)$ repeatedly, on every element of the list~$cts$.

\item[\ttindexbold{forall_elim_var} $k$ $thm$] 
applies $({\Forall}E)$, mapping the premise $\Forall x.\phi$ to the conclusion
$\phi[\Var{x@k}/x]$.  Thus, it replaces the outermost $\Forall$-bound
variable by an unknown having subscript~$k$.

\item[\ttindexbold{forall_elim_vars} $ks$ $thm$] 
applies {\tt forall_elim_var} repeatedly, for every element of the list~$ks$.
\end{ttdescription}

\subsection{Instantiation of unknowns}
\index{instantiation}
\begin{ttbox} 
instantiate: (indexname * ctyp){\thinspace}list * (cterm * cterm){\thinspace}list -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{instantiate} ($tyinsts$, $insts$) $thm$] 
simultaneously substitutes types for type unknowns (the
$tyinsts$) and terms for term unknowns (the $insts$).  Instantiations are
given as $(v,t)$ pairs, where $v$ is an unknown and $t$ is a term (of the
same type as $v$) or a type (of the same sort as~$v$).  All the unknowns
must be distinct.  The rule normalizes its conclusion.
\end{ttdescription}


\subsection{Freezing/thawing type unknowns}
\index{type unknowns!freezing/thawing of}
\begin{ttbox} 
freezeT: thm -> thm
varifyT: thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{freezeT} $thm$] 
converts all the type unknowns in $thm$ to free type variables.

\item[\ttindexbold{varifyT} $thm$] 
converts all the free type variables in $thm$ to type unknowns.
\end{ttdescription}


\section{Derived rules for goal-directed proof}
Most of these rules have the sole purpose of implementing particular
tactics.  There are few occasions for applying them directly to a theorem.

\subsection{Proof by assumption}
\index{meta-assumptions}
\begin{ttbox} 
assumption    : int -> thm -> thm Sequence.seq
eq_assumption : int -> thm -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{assumption} {\it i} $thm$] 
attempts to solve premise~$i$ of~$thm$ by assumption.

\item[\ttindexbold{eq_assumption}] 
is like {\tt assumption} but does not use unification.
\end{ttdescription}


\subsection{Resolution}
\index{resolution}
\begin{ttbox} 
biresolution : bool -> (bool*thm)list -> int -> thm
               -> thm Sequence.seq
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{biresolution} $match$ $rules$ $i$ $state$] 
performs bi-resolution on subgoal~$i$ of $state$, using the list of $\it
(flag,rule)$ pairs.  For each pair, it applies resolution if the flag
is~{\tt false} and elim-resolution if the flag is~{\tt true}.  If $match$
is~{\tt true}, the $state$ is not instantiated.
\end{ttdescription}


\subsection{Composition: resolution without lifting}
\index{resolution!without lifting}
\begin{ttbox}
compose   : thm * int * thm -> thm list
COMP      : thm * thm -> thm
bicompose : bool -> bool * thm * int -> int -> thm
            -> thm Sequence.seq
\end{ttbox}
In forward proof, a typical use of composition is to regard an assertion of
the form $\phi\Imp\psi$ as atomic.  Schematic variables are not renamed, so
beware of clashes!
\begin{ttdescription}
\item[\ttindexbold{compose} ($thm@1$, $i$, $thm@2$)] 
uses $thm@1$, regarded as an atomic formula, to solve premise~$i$
of~$thm@2$.  Let $thm@1$ and $thm@2$ be $\psi$ and $\List{\phi@1; \ldots;
\phi@n} \Imp \phi$.  For each $s$ that unifies~$\psi$ and $\phi@i$, the
result list contains the theorem
\[ (\List{\phi@1; \ldots; \phi@{i-1}; \phi@{i+1}; \ldots; \phi@n} \Imp \phi)s.
\]

\item[$thm@1$ \ttindexbold{COMP} $thm@2$] 
calls \hbox{\tt compose ($thm@1$, 1, $thm@2$)} and returns the result, if
unique; otherwise, it raises exception~\xdx{THM}\@.  It is
analogous to {\tt RS}\@.  

For example, suppose that $thm@1$ is $a=b\Imp b=a$, a symmetry rule, and
that $thm@2$ is $\List{P\Imp Q; \neg Q} \Imp\neg P$, which is the
principle of contrapositives.  Then the result would be the
derived rule $\neg(b=a)\Imp\neg(a=b)$.

\item[\ttindexbold{bicompose} $match$ ($flag$, $rule$, $m$) $i$ $state$]
refines subgoal~$i$ of $state$ using $rule$, without lifting.  The $rule$
is taken to have the form $\List{\psi@1; \ldots; \psi@m} \Imp \psi$, where
$\psi$ need not be atomic; thus $m$ determines the number of new
subgoals.  If $flag$ is {\tt true} then it performs elim-resolution --- it
solves the first premise of~$rule$ by assumption and deletes that
assumption.  If $match$ is~{\tt true}, the $state$ is not instantiated.
\end{ttdescription}
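
For instance, the example above might be run with the two rules bound to
the hypothetical \ML{} identifiers {\tt sym_rule} and {\tt contrapos_rule}:
\begin{ttbox} 
sym_rule COMP contrapos_rule;
\end{ttbox}
The result would be the derived rule $\neg(b=a)\Imp\neg(a=b)$ mentioned
above.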


\subsection{Other meta-rules}
\begin{ttbox} 
trivial            : cterm -> thm
lift_rule          : (thm * int) -> thm -> thm
rename_params_rule : string list * int -> thm -> thm
rewrite_cterm      : thm list -> cterm -> thm
flexflex_rule      : thm -> thm Sequence.seq
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{trivial} $ct$] 
makes the theorem \(\phi\Imp\phi\), where $\phi$ is the value of~$ct$.
This is the initial state for a goal-directed proof of~$\phi$.  The rule
checks that $ct$ has type~$prop$.

\item[\ttindexbold{lift_rule} ($state$, $i$) $rule$] \index{lifting}
prepares $rule$ for resolution by lifting it over the parameters and
assumptions of subgoal~$i$ of~$state$.

\item[\ttindexbold{rename_params_rule} ({\it names}, {\it i}) $thm$] 
uses the $names$ to rename the parameters of premise~$i$ of $thm$.  The
names must be distinct.  If there are fewer names than parameters, then the
rule renames the innermost parameters and may modify the remaining ones to
ensure that all the parameters are distinct.
\index{parameters!renaming}

\item[\ttindexbold{rewrite_cterm} $defs$ $ct$]
transforms $ct$ to $ct'$ by repeatedly applying $defs$ as rewrite rules; it
returns the conclusion~$ct\equiv ct'$.  This underlies the meta-rewriting
tactics and rules.
\index{meta-rewriting!in terms}

\item[\ttindexbold{flexflex_rule} $thm$]  \index{flex-flex constraints}
removes all flex-flex pairs from $thm$ using the trivial unifier.
\end{ttdescription}
\index{meta-rules|)}


\section{Proof objects}\label{sec:proofObjects}
\index{proof objects|(} Isabelle can record the full meta-level proof of each
theorem.  The proof object contains all logical inferences in detail, while
omitting bookkeeping steps that have no logical meaning to an outside
observer.  Rewriting steps are recorded in the same level of detail as the output of
simplifier tracing.  The proof object can be inspected by a separate
proof-checker, or used to generate human-readable proof digests.

Full proof objects are large.  They multiply storage requirements by a
factor of about seven; attempts to build large logics (such as {\sc zf} and
{\sc hol}) may
fail.  Isabelle normally builds minimal proof objects, which include only uses
of oracles.  You can also request an intermediate level of detail, containing
uses of oracles, axioms and theorems.  These smaller proof objects indicate a
theorem's dependencies.

Isabelle provides proof objects for the sake of transparency.  Their aim is to
increase your confidence in Isabelle.  They let you inspect proofs constructed
by the classical reasoner or simplifier, and inform you of all uses of
oracles.  Seldom will proof objects be given whole to an automatic
proof-checker: none has been written.  It is up to you to examine and
interpret them sensibly.  For example, when scrutinizing a theorem's
derivation for dependence upon some oracle or axiom, remember to scrutinize
all of its lemmas.  Their proofs are included in the main derivation, through
the {\tt Theorem} constructor.

Proof objects are expressed using a polymorphic type of variable-branching
trees.  A proof object (formally known as a {\em derivation\/}) is a tree
labelled by rules, where {\tt rule} is a complicated datatype declared in the
file {\tt Pure/thm.ML}.
\begin{ttbox} 
datatype 'a mtree = Join of 'a * 'a mtree list;
datatype rule     = \(\ldots\);
type deriv        = rule mtree;
\end{ttbox}
%
Each theorem's derivation is stored as the {\tt der} field of its internal
record: 
\begin{ttbox} 
#der (rep_thm conjI);
{\out Join (Theorem "conjI", [Join (MinProof,[])]) : deriv}
\end{ttbox}
This proof object identifies a labelled theorem, {\tt conjI}, whose underlying
proof has not been recorded; all we have is {\tt MinProof}.

Nontrivial proof objects are unreadably large and complex.  Isabelle provides
several functions to help you inspect them informally.  These functions omit
the more obscure inferences and attempt to restructure the others into natural
formats, linear or tree-structured.

\begin{ttbox} 
keep_derivs  : deriv_kind ref
Deriv.size   : deriv -> int
Deriv.drop   : 'a mtree * int -> 'a mtree
Deriv.linear : deriv -> deriv list
Deriv.tree   : deriv -> Deriv.orule mtree
\end{ttbox}

\begin{ttdescription}
\item[\ttindexbold{keep_derivs} := MinDeriv $|$ ThmDeriv $|$ FullDeriv;] 
specifies one of the three options for keeping derivations.  They can be
minimal (oracles only), include theorems and axioms, or be full.

\item[\ttindexbold{Deriv.size} $der$] yields the size of a derivation,
  excluding lemmas.

\item[\ttindexbold{Deriv.drop} ($tree$,$n$)] returns the subtree $n$ levels
  down, always following the first child.  It is useful for stripping off
  outer-level inferences that are used to put a theorem into standard form.

\item[\ttindexbold{Deriv.linear} $der$] converts a derivation into a linear
  format, replacing the deep nesting by a list of rules.  Intuitively, this
  reveals the single-step Isabelle proof that is constructed internally by
  tactics.  

\item[\ttindexbold{Deriv.tree} $der$] converts a derivation into an
  object-level proof tree.  A resolution by an object-rule is converted to a
  tree node labelled by that rule.  Complications arise if the object-rule is
  itself derived in some way.  Nested resolutions are unravelled, but other
  operations on rules (such as rewriting) are left as-is.  
\end{ttdescription}

Functions {\tt Deriv.linear} and {\tt Deriv.tree} omit the proof of any named
theorems (constructor {\tt Theorem}) they encounter in a derivation.  Applying
them directly to the derivation of a named theorem is therefore pointless.
Use {\tt Deriv.drop} with argument~1 to skip over the initial {\tt Theorem}
constructor.
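
For example, after setting {\tt keep_derivs := FullDeriv} and proving and
naming a theorem {\tt th}, its derivation could be inspected as follows (a
sketch):
\begin{ttbox} 
val der = #der (rep_thm th);
Deriv.size (Deriv.drop (der, 1));
Deriv.linear (Deriv.drop (der, 1));
\end{ttbox}
The calls to {\tt Deriv.drop} skip the outer {\tt Theorem} constructor, as
recommended above.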


\index{proof objects|)}
\index{theorems|)}