
%% $Id$
\chapter{Higher-Order Logic}
\index{higher-order logic|(}
\index{HOL system@{\sc hol} system}

The theory~\thydx{HOL} implements higher-order logic.  It is based on
Gordon's~{\sc hol} system~\cite{mgordon-hol}, which itself is based on
Church's original paper~\cite{church40}.  Andrews's
book~\cite{andrews86} is a full description of the original
Church-style higher-order logic.  Experience with the {\sc hol} system
has demonstrated that higher-order logic is widely applicable in many
areas of mathematics and computer science, not just hardware
verification, {\sc hol}'s original \textit{raison d'\^etre\/}.  It is
weaker than {\ZF} set theory but for most applications this does not
matter.  If you prefer {\ML} to Lisp, you will probably prefer \HOL\ to~{\ZF}.

The syntax of \HOL\footnote{Earlier versions of Isabelle's \HOL\ used a
different syntax.  Ancient releases of Isabelle included still another version
of~\HOL, with explicit type inference rules~\cite{paulson-COLOG}.  This
version no longer exists, but \thydx{ZF} supports a similar style of
reasoning.} follows $\lambda$-calculus and functional programming.  Function
application is curried.  To apply the function~$f$ of type
$\tau@1\To\tau@2\To\tau@3$ to the arguments~$a$ and~$b$ in \HOL, you simply
write $f\,a\,b$.  There is no `apply' operator as in \thydx{ZF}.  Note that
$f(a,b)$ means ``$f$ applied to the pair $(a,b)$'' in \HOL.  We write ordered
pairs as $(a,b)$, not $\langle a,b\rangle$ as in {\ZF}.
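For illustration, currying means that partial application is meaningful: applying $f$ to its first argument alone yields a function awaiting the second argument.
\[ f\,a\,b \;=\; (f\,a)\,b, \qquad f\,a \;:\; \tau@2\To\tau@3. \]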

\HOL\ has a distinct feel, compared with {\ZF} and {\CTT}.  It
identifies object-level types with meta-level types, taking advantage of
Isabelle's built-in type checker.  It identifies object-level functions
with meta-level functions, so it uses Isabelle's operations for abstraction
and application.

These identifications allow Isabelle to support \HOL\ particularly
nicely, but they also mean that \HOL\ requires more sophistication
from the user --- in particular, an understanding of Isabelle's type
system.  Beginners should work with \texttt{show_types} (or even
\texttt{show_sorts}) set to \texttt{true}.
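For example, these flags may be set at the {\ML} top level before starting a proof (a sketch; both are ordinary {\ML} boolean references):
\begin{ttbox}
set show_types;
set show_sorts;
\end{ttbox}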
%  Gain experience by
%working in first-order logic before attempting to use higher-order logic.
%This chapter assumes familiarity with~{\FOL{}}.

\begin{figure}
\begin{constants}
  \it name      &\it meta-type  & \it description \\
  \cdx{Trueprop}& $bool\To prop$                & coercion to $prop$\\
  \cdx{Not}     & $bool\To bool$                & negation ($\neg$) \\
  \cdx{True}    & $bool$                        & tautology ($\top$) \\
  \cdx{False}   & $bool$                        & absurdity ($\bot$) \\
  \cdx{If}      & $[bool,\alpha,\alpha]\To\alpha$ & conditional \\
  \cdx{Let}     & $[\alpha,\alpha\To\beta]\To\beta$ & let binder
\end{constants}
\subcaption{Constants}

\begin{constants}
\index{"@@{\tt\at} symbol}
\index{*"! symbol}\index{*"? symbol}
\index{*"?"! symbol}\index{*"E"X"! symbol}
  \it symbol &\it name     &\it meta-type & \it description \\
  \tt\at & \cdx{Eps}  & $(\alpha\To bool)\To\alpha$ & 
        Hilbert description ($\varepsilon$) \\
  {\tt!~} or \sdx{ALL}  & \cdx{All}  & $(\alpha\To bool)\To bool$ & 
        universal quantifier ($\forall$) \\
  {\tt?~} or \sdx{EX}   & \cdx{Ex}   & $(\alpha\To bool)\To bool$ & 
        existential quantifier ($\exists$) \\
  {\tt?!} or \texttt{EX!}  & \cdx{Ex1}  & $(\alpha\To bool)\To bool$ & 
        unique existence ($\exists!$)\\
  \texttt{LEAST}  & \cdx{Least}  & $(\alpha::ord \To bool)\To\alpha$ & 
        least element
\end{constants}
\subcaption{Binders}

\begin{constants}
\index{*"= symbol}
\index{&@{\tt\&} symbol}
\index{*"| symbol}
\index{*"-"-"> symbol}
  \it symbol    & \it meta-type & \it priority & \it description \\ 
  \sdx{o}       & $[\beta\To\gamma,\alpha\To\beta]\To (\alpha\To\gamma)$ & 
        Left 55 & composition ($\circ$) \\
  \tt =         & $[\alpha,\alpha]\To bool$ & Left 50 & equality ($=$) \\
  \tt <         & $[\alpha::ord,\alpha]\To bool$ & Left 50 & less than ($<$) \\
  \tt <=        & $[\alpha::ord,\alpha]\To bool$ & Left 50 & 
                less than or equals ($\leq$)\\
  \tt \&        & $[bool,bool]\To bool$ & Right 35 & conjunction ($\conj$) \\
  \tt |         & $[bool,bool]\To bool$ & Right 30 & disjunction ($\disj$) \\
  \tt -->       & $[bool,bool]\To bool$ & Right 25 & implication ($\imp$)
\end{constants}
\subcaption{Infixes}
\caption{Syntax of \texttt{HOL}} \label{hol-constants}
\end{figure}

\begin{figure}
\index{*let symbol}
\index{*in symbol}
\dquotes
\[\begin{array}{rcl}
    term & = & \hbox{expression of class~$term$} \\
         & | & "\at~" id " . " formula \\
         & | & 
    \multicolumn{3}{l}{"let"~id~"="~term";"\dots";"~id~"="~term~"in"~term} \\
         & | & 
    \multicolumn{3}{l}{"if"~formula~"then"~term~"else"~term} \\
         & | & "LEAST"~ id " . " formula \\[2ex]
 formula & = & \hbox{expression of type~$bool$} \\
         & | & term " = " term \\
         & | & term " \ttilde= " term \\
         & | & term " < " term \\
         & | & term " <= " term \\
         & | & "\ttilde\ " formula \\
         & | & formula " \& " formula \\
         & | & formula " | " formula \\
         & | & formula " --> " formula \\
         & | & "!~~~" id~id^* " . " formula \\
         & | & "ALL~" id~id^* " . " formula \\
         & | & "?~~~" id~id^* " . " formula \\
         & | & "EX~~" id~id^* " . " formula \\
         & | & "?!~~" id~id^* " . " formula \\
         & | & "EX!~" id~id^* " . " formula
\end{array}
\]
\caption{Full grammar for \HOL} \label{hol-grammar}
\end{figure}


Figure~\ref{hol-constants} lists the constants (including infixes and
binders), while Fig.\ts\ref{hol-grammar} presents the grammar of
higher-order logic.  Note that $a$\verb|~=|$b$ is translated to $\neg(a=b)$.

  \HOL\ has no if-and-only-if connective; logical equivalence is expressed
  using equality.  But equality has a high priority, as befits a
  relation, while if-and-only-if typically has the lowest priority.  Thus,
  $\neg\neg P=P$ abbreviates $\neg\neg (P=P)$ and not $(\neg\neg P)=P$.
  When using $=$ to mean logical equivalence, enclose both operands in
  parentheses.

\subsection{Types and classes}
The universal type class of higher-order terms is called~\cldx{term}.
By default, explicit type variables have class \cldx{term}.  In
particular the equality symbol and quantifiers are polymorphic over
class \texttt{term}.

The type of formulae, \tydx{bool}, belongs to class \cldx{term}; thus,
formulae are terms.  The built-in type~\tydx{fun}, which constructs
function types, is overloaded with arity {\tt(term,\thinspace
  term)\thinspace term}.  Thus, $\sigma\To\tau$ belongs to class~{\tt
  term} if $\sigma$ and~$\tau$ do, allowing quantification over functions.

\HOL\ offers various methods for introducing new types.
See~\S\ref{sec:HOL:Types} and~\S\ref{sec:HOL:datatype}.

Theory \thydx{Ord} defines the syntactic class \cldx{ord} of order
signatures; the relations $<$ and $\leq$ are polymorphic over this
class, as are the functions \cdx{mono}, \cdx{min} and \cdx{max}, and
the \cdx{LEAST} operator. \thydx{Ord} also defines a subclass
\cldx{order} of \cldx{ord} which axiomatizes partially ordered types
(w.r.t.\ $\le$).

Three other syntactic type classes --- \cldx{plus}, \cldx{minus} and
\cldx{times} --- permit overloading of the operators {\tt+},\index{*"+
  symbol} {\tt-}\index{*"- symbol} and {\tt*}.\index{*"* symbol} In
particular, {\tt-} is instantiated for set difference and subtraction
on natural numbers.

If you state a goal containing overloaded functions, you may need to include
type constraints.  Type inference may otherwise make the goal more
polymorphic than you intended, with confusing results.  For example, the
variables $i$, $j$ and $k$ in the goal $i \le j \Imp i \le j+k$ have type
$\alpha::\{ord,plus\}$, although you may have expected them to have some
numeric type, e.g. $nat$.  Instead you should have stated the goal as
$(i::nat) \le j \Imp i \le j+k$, which causes all three variables to have
type $nat$.
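A goal with such a constraint could be stated thus (a sketch; \texttt{thy} stands for your current theory value at the {\ML} level):
\begin{ttbox}
goal thy "(i::nat) <= j ==> i <= j+k";
\end{ttbox}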

  If resolution fails for no obvious reason, try setting
  \ttindex{show_types} to \texttt{true}, causing Isabelle to display
  types of terms.  Possibly set \ttindex{show_sorts} to \texttt{true} as
  well, causing Isabelle to display type classes and sorts.

  \index{unification!incompleteness of}
  Where function types are involved, Isabelle's unification code does not
  guarantee to find instantiations for type variables automatically.  Be
  prepared to use \ttindex{res_inst_tac} instead of \texttt{resolve_tac},
  possibly instantiating type variables.  Setting
  \ttindex{Unify.trace_types} to \texttt{true} causes Isabelle to report
  omitted search paths during unification.\index{tracing!of unification}


Hilbert's {\bf description} operator~$\varepsilon x.P[x]$ stands for
some~$x$ satisfying~$P$, if such exists.  Since all terms in \HOL\ 
denote something, a description is always meaningful, but we do not
know its value unless $P$ defines it uniquely.  We may write
descriptions as \cdx{Eps}($\lambda x.P[x]$) or use the syntax
\hbox{\tt \at $x$.$P[x]$}.

Existential quantification is defined by
\[ \exists x.P~x \;\equiv\; P(\varepsilon x.P~x). \]
The unique existence quantifier, $\exists!x.P$, is defined in terms
of~$\exists$ and~$\forall$.  An Isabelle binder, it admits nested
quantifications.  For instance, $\exists!x\,y.P\,x\,y$ abbreviates
$\exists!x. \exists!y.P\,x\,y$; note that this does not mean that there
exists a unique pair $(x,y)$ satisfying~$P\,x\,y$.

\index{*"! symbol}\index{*"? symbol}\index{HOL system@{\sc hol} system}
Quantifiers have two notations.  As in Gordon's {\sc hol} system, \HOL\
uses~{\tt!}\ and~{\tt?}\ to stand for $\forall$ and $\exists$.  The
existential quantifier must be followed by a space; thus {\tt?x} is an
unknown, while \verb'? x.f x=y' is a quantification.  Isabelle's usual
notation for quantifiers, \sdx{ALL} and \sdx{EX}, is also
available.  Both notations are accepted for input.  The {\ML} reference
\ttindexbold{HOL_quantifiers} governs the output notation.  If set to {\tt
true}, then~{\tt!}\ and~{\tt?}\ are displayed; this is the default.  If set
to \texttt{false}, then~{\tt ALL} and~{\tt EX} are displayed.

If $\tau$ is a type of class \cldx{ord}, $P$ a formula and $x$ a
variable of type $\tau$, then the term \cdx{LEAST}~$x.P[x]$ is defined
to be the least (w.r.t.\ $\le$) $x$ such that $P~x$ holds (see
Fig.~\ref{hol-defs}).  The definition uses Hilbert's $\varepsilon$
choice operator, so \texttt{Least} is always meaningful, but may yield
nothing useful in case there is not a unique least element satisfying
$P$.\footnote{Class $ord$ does not require much of its instances, so
  $\le$ need not be a well-ordering, not even an order at all!}

\medskip All these binders have priority 10.

The low priority of binders means that they must be enclosed in
parentheses when they occur in the context of other operations.  For example,
instead of $P \land \forall x.Q$ you need to write $P \land (\forall x.Q)$.

\subsection{The \sdx{let} and \sdx{case} constructions}
Local abbreviations can be introduced by a \texttt{let} construct whose
syntax appears in Fig.\ts\ref{hol-grammar}.  Internally it is translated into
the constant~\cdx{Let}.  It can be expanded by rewriting with its
definition, \tdx{Let_def}.
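For instance, an occurrence of \texttt{let} can be removed by rewriting with \tdx{Let_def} (a sketch; the theory value \texttt{thy} and the simpset \texttt{HOL_ss} are schematic):
\begin{ttbox}
goal thy "(let x = t in f x) = f t";
by (simp_tac (HOL_ss addsimps [Let_def]) 1);
\end{ttbox}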

\HOL\ also defines the basic syntax
\[\dquotes"case"~e~"of"~c@1~"=>"~e@1~"|" \dots "|"~c@n~"=>"~e@n\] 
as a uniform means of expressing \texttt{case} constructs.  Therefore \texttt{case}
and \sdx{of} are reserved words.  Initially, this is mere syntax and has no
logical meaning.  By declaring translations, you can cause instances of the
{\tt case} construct to denote applications of particular case operators.
This is what happens automatically for each \texttt{datatype} definition
(see~\S\ref{sec:HOL:datatype}).

Both \texttt{if} and \texttt{case} constructs have as low a priority as
quantifiers, which requires additional enclosing parentheses in the context
of most other operations.  For example, instead of $f~x = if \dots then \dots
else \dots$ you need to write $f~x = (if \dots then \dots else \dots)$.

\section{Rules of inference}

\begin{figure}
\begin{ttbox}
\tdx{refl}           t = (t::'a)
\tdx{subst}          [| s = t; P s |] ==> P (t::'a)
\tdx{ext}            (!!x::'a. (f x :: 'b) = g x) ==> (\%x.f x) = (\%x.g x)
\tdx{impI}           (P ==> Q) ==> P-->Q
\tdx{mp}             [| P-->Q;  P |] ==> Q
\tdx{iff}            (P-->Q) --> (Q-->P) --> (P=Q)
\tdx{selectI}        P(x::'a) ==> P(@x.P x)
\tdx{True_or_False}  (P=True) | (P=False)
\end{ttbox}
\caption{The \texttt{HOL} rules} \label{hol-rules}
\end{figure}

Figure~\ref{hol-rules} shows the primitive inference rules of~\HOL{},
with their~{\ML} names.  Some of the rules deserve additional
comments:
\begin{ttdescription}
\item[\tdx{ext}] expresses extensionality of functions.
\item[\tdx{iff}] asserts that logically equivalent formulae are
  equal.
\item[\tdx{selectI}] gives the defining property of the Hilbert
  $\varepsilon$-operator.  It is a form of the Axiom of Choice.  The derived rule
  \tdx{select_equality} (see below) is often easier to use.
\item[\tdx{True_or_False}] makes the logic classical.\footnote{In
    fact, the $\varepsilon$-operator already makes the logic classical, as
    shown by Diaconescu; see Paulson~\cite{paulson-COLOG} for details.}
\end{ttdescription}

\begin{figure}\hfuzz=4pt%suppress "Overfull \hbox" message
\begin{ttbox}
\tdx{True_def}   True     == ((\%x::bool.x)=(\%x.x))
\tdx{All_def}    All      == (\%P. P = (\%x.True))
\tdx{Ex_def}     Ex       == (\%P. P(@x.P x))
\tdx{False_def}  False    == (!P.P)
\tdx{not_def}    not      == (\%P. P-->False)
\tdx{and_def}    op &     == (\%P Q. !R. (P-->Q-->R) --> R)
\tdx{or_def}     op |     == (\%P Q. !R. (P-->R) --> (Q-->R) --> R)
\tdx{Ex1_def}    Ex1      == (\%P. ? x. P x & (! y. P y --> y=x))

\tdx{o_def}      op o     == (\%(f::'b=>'c) g x::'a. f(g x))
\tdx{if_def}     If P x y ==
              (@z::'a. (P=True --> z=x) & (P=False --> z=y))
\tdx{Let_def}    Let s f  == f s
\tdx{Least_def}  Least P  == @x. P(x) & (ALL y. P(y) --> x <= y)
\end{ttbox}
\caption{The \texttt{HOL} definitions} \label{hol-defs}
\end{figure}

\HOL{} follows standard practice in higher-order logic: only a few
connectives are taken as primitive, with the remainder defined obscurely
(Fig.\ts\ref{hol-defs}).  Gordon's {\sc hol} system expresses the
corresponding definitions \cite[page~270]{mgordon-hol} using
object-equality~({\tt=}), which is possible because equality in
higher-order logic may equate formulae and even functions over formulae.
But theory~\HOL{}, like all other Isabelle theories, uses
meta-equality~({\tt==}) for definitions.
The definitions above should never be expanded and are shown for completeness
only.  Instead users should reason in terms of the derived rules shown below
or, better still, using high-level tactics.

Some of the rules mention type variables; for example, \texttt{refl}
mentions the type variable~{\tt'a}.  This allows you to instantiate
type variables explicitly by calling \texttt{res_inst_tac}.
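For example, one might resolve with \texttt{refl} at a specific type (a sketch of such a call):
\begin{ttbox}
by (res_inst_tac [("'a","nat")] refl 1);
\end{ttbox}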

\begin{figure}
\begin{ttbox}
\tdx{sym}         s=t ==> t=s
\tdx{trans}       [| r=s; s=t |] ==> r=t
\tdx{ssubst}      [| t=s; P s |] ==> P t
\tdx{box_equals}  [| a=b;  a=c;  b=d |] ==> c=d  
\tdx{arg_cong}    x = y ==> f x = f y
\tdx{fun_cong}    f = g ==> f x = g x
\tdx{cong}        [| f = g; x = y |] ==> f x = g y
\tdx{not_sym}     t ~= s ==> s ~= t

\tdx{TrueI}       True 
\tdx{FalseE}      False ==> P

\tdx{conjI}       [| P; Q |] ==> P&Q
\tdx{conjunct1}   [| P&Q |] ==> P
\tdx{conjunct2}   [| P&Q |] ==> Q 
\tdx{conjE}       [| P&Q;  [| P; Q |] ==> R |] ==> R

\tdx{disjI1}      P ==> P|Q
\tdx{disjI2}      Q ==> P|Q
\tdx{disjE}       [| P | Q; P ==> R; Q ==> R |] ==> R

\tdx{notI}        (P ==> False) ==> ~ P
\tdx{notE}        [| ~ P;  P |] ==> R
\tdx{impE}        [| P-->Q;  P;  Q ==> R |] ==> R
\end{ttbox}
\subcaption{Propositional logic}

\begin{ttbox}
\tdx{iffI}        [| P ==> Q;  Q ==> P |] ==> P=Q
\tdx{iffD1}       [| P=Q; P |] ==> Q
\tdx{iffD2}       [| P=Q; Q |] ==> P
\tdx{iffE}        [| P=Q; [| P --> Q; Q --> P |] ==> R |] ==> R
\end{ttbox}
%\tdx{eqTrueI}     P ==> P=True 
%\tdx{eqTrueE}     P=True ==> P 
\subcaption{Logical equivalence}

\caption{Derived rules for \HOL} \label{hol-lemmas1}
\end{figure}

\begin{figure}
\begin{ttbox}
\tdx{allI}      (!!x. P x) ==> !x. P x
\tdx{spec}      !x.P x ==> P x
\tdx{allE}      [| !x.P x;  P x ==> R |] ==> R
\tdx{all_dupE}  [| !x.P x;  [| P x; !x.P x |] ==> R |] ==> R

\tdx{exI}       P x ==> ? x. P x
\tdx{exE}       [| ? x. P x; !!x. P x ==> Q |] ==> Q

\tdx{ex1I}      [| P a;  !!x. P x ==> x=a |] ==> ?! x. P x
\tdx{ex1E}      [| ?! x.P x;  !!x. [| P x;  ! y. P y --> y=x |] ==> R 
          |] ==> R

\tdx{select_equality} [| P a;  !!x. P x ==> x=a |] ==> (@x.P x) = a
\end{ttbox}
\subcaption{Quantifiers and descriptions}

\begin{ttbox}
\tdx{ccontr}          (~P ==> False) ==> P
\tdx{classical}       (~P ==> P) ==> P
\tdx{excluded_middle} ~P | P

\tdx{disjCI}          (~Q ==> P) ==> P|Q
\tdx{exCI}            (! x. ~ P x ==> P a) ==> ? x.P x
\tdx{impCE}           [| P-->Q; ~ P ==> R; Q ==> R |] ==> R
\tdx{iffCE}           [| P=Q;  [| P;Q |] ==> R;  [| ~P; ~Q |] ==> R |] ==> R
\tdx{notnotD}         ~~P ==> P
\tdx{swap}            ~P ==> (~Q ==> P) ==> Q
\end{ttbox}
\subcaption{Classical logic}

%\tdx{if_True}         (if True then x else y) = x
%\tdx{if_False}        (if False then x else y) = y
\begin{ttbox}
\tdx{if_P}            P ==> (if P then x else y) = x
\tdx{if_not_P}        ~ P ==> (if P then x else y) = y
\tdx{expand_if}       P(if Q then x else y) = ((Q --> P x) & (~Q --> P y))
\end{ttbox}
\subcaption{Conditionals}
\caption{More derived rules} \label{hol-lemmas2}
\end{figure}

Some derived rules are shown in Figures~\ref{hol-lemmas1}
and~\ref{hol-lemmas2}, with their {\ML} names.  These include natural rules
for the logical connectives, as well as sequent-style elimination rules for
conjunctions, implications, and universal quantifiers.  

Note the equality rules: \tdx{ssubst} performs substitution in
backward proofs, while \tdx{box_equals} supports reasoning by
simplifying both sides of an equation.

The following simple tactics are occasionally useful:
\begin{ttdescription}
\item[\ttindexbold{strip_tac} $i$] applies \texttt{allI} and \texttt{impI}
  repeatedly to remove all outermost universal quantifiers and implications
  from subgoal $i$.
\item[\ttindexbold{case_tac} {\tt"}$P${\tt"} $i$] performs case distinction
  on $P$ for subgoal $i$: the latter is replaced by two identical subgoals
  with the added assumptions $P$ and $\neg P$, respectively.
\end{ttdescription}
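For instance, a subgoal of the form $\forall x. P\,x \imp Q\,x$ might be attacked thus (a sketch):
\begin{ttbox}
by (strip_tac 1);
by (case_tac "P x" 1);
\end{ttbox}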

\begin{figure}
\begin{constants}
  \it name      &\it meta-type  & \it description \\ 
\index{{}@\verb'{}' symbol}
  \verb|{}|     & $\alpha\,set$         & the empty set \\
  \cdx{insert}  & $[\alpha,\alpha\,set]\To \alpha\,set$
        & insertion of element \\
  \cdx{Collect} & $(\alpha\To bool)\To\alpha\,set$
        & comprehension \\
  \cdx{Compl}   & $\alpha\,set\To\alpha\,set$
        & complement \\
  \cdx{INTER} & $[\alpha\,set,\alpha\To\beta\,set]\To\beta\,set$
        & intersection over a set\\
  \cdx{UNION} & $[\alpha\,set,\alpha\To\beta\,set]\To\beta\,set$
        & union over a set\\
  \cdx{Inter} & $(\alpha\,set)set\To\alpha\,set$
        &set of sets intersection \\
  \cdx{Union} & $(\alpha\,set)set\To\alpha\,set$
        &set of sets union \\
  \cdx{Pow}   & $\alpha\,set \To (\alpha\,set)set$
        & powerset \\[1ex]
  \cdx{range}   & $(\alpha\To\beta )\To\beta\,set$
        & range of a function \\[1ex]
  \cdx{Ball}~~\cdx{Bex} & $[\alpha\,set,\alpha\To bool]\To bool$
        & bounded quantifiers
\end{constants}
\subcaption{Constants}

\begin{constants}
  \it symbol &\it name     &\it meta-type & \it priority & \it description \\
  \sdx{INT}  & \cdx{INTER1}  & $(\alpha\To\beta\,set)\To\beta\,set$ & 10 & 
        intersection over a type\\
  \sdx{UN}  & \cdx{UNION1}  & $(\alpha\To\beta\,set)\To\beta\,set$ & 10 & 
        union over a type
\end{constants}
\subcaption{Binders}

\begin{constants}
\index{*"`"` symbol}
\index{*": symbol}
\index{*"<"= symbol}
  \it symbol    & \it meta-type & \it priority & \it description \\ 
  \tt ``        & $[\alpha\To\beta ,\alpha\,set]\To  \beta\,set$
        & Left 90 & image \\
  \sdx{Int}     & $[\alpha\,set,\alpha\,set]\To\alpha\,set$
        & Left 70 & intersection ($\int$) \\
  \sdx{Un}      & $[\alpha\,set,\alpha\,set]\To\alpha\,set$
        & Left 65 & union ($\un$) \\
  \tt:          & $[\alpha ,\alpha\,set]\To bool$       
        & Left 50 & membership ($\in$) \\
  \tt <=        & $[\alpha\,set,\alpha\,set]\To bool$
        & Left 50 & subset ($\subseteq$) 
\end{constants}
\subcaption{Infixes}
\caption{Syntax of the theory \texttt{Set}} \label{hol-set-syntax}
\end{figure}

\begin{figure}
\begin{center} \tt\frenchspacing
\index{*"! symbol}
\begin{tabular}{rrr} 
  \it external          & \it internal  & \it description \\ 
  $a$ \ttilde: $b$      & \ttilde($a$ : $b$)    & \rm non-membership\\
  {\ttlbrace}$a@1$, $\ldots${\ttrbrace}  &  insert $a@1$ $\ldots$ {\ttlbrace}{\ttrbrace} & \rm finite set \\
  {\ttlbrace}$x$.$P[x]${\ttrbrace}        &  Collect($\lambda x.P[x]$) &
        \rm comprehension \\
  \sdx{INT} $x$:$A$.$B[x]$      & INTER $A$ $\lambda x.B[x]$ &
        \rm intersection \\
  \sdx{UN}{\tt\ }  $x$:$A$.$B[x]$      & UNION $A$ $\lambda x.B[x]$ &
        \rm union \\
  \tt ! $x$:$A$.$P[x]$ or \sdx{ALL} $x$:$A$.$P[x]$ & 
        Ball $A$ $\lambda x.P[x]$ & 
        \rm bounded $\forall$ \\
  \sdx{?} $x$:$A$.$P[x]$ or \sdx{EX}{\tt\ } $x$:$A$.$P[x]$ & 
        Bex $A$ $\lambda x.P[x]$ & \rm bounded $\exists$
\end{tabular}
\end{center}
\subcaption{Translations}

\dquotes
\[\begin{array}{rcl}
    term & = & \hbox{other terms\ldots} \\
         & | & "{\ttlbrace}{\ttrbrace}" \\
         & | & "{\ttlbrace} " term\; ("," term)^* " {\ttrbrace}" \\
         & | & "{\ttlbrace} " id " . " formula " {\ttrbrace}" \\
         & | & term " `` " term \\
         & | & term " Int " term \\
         & | & term " Un " term \\
         & | & "INT~~"  id ":" term " . " term \\
         & | & "UN~~~"  id ":" term " . " term \\
         & | & "INT~~"  id~id^* " . " term \\
         & | & "UN~~~"  id~id^* " . " term \\[2ex]
 formula & = & \hbox{other formulae\ldots} \\
         & | & term " : " term \\
         & | & term " \ttilde: " term \\
         & | & term " <= " term \\
         & | & "!~" id ":" term " . " formula \\
         & | & "ALL " id ":" term " . " formula \\
         & | & "?~" id ":" term " . " formula \\
         & | & "EX~~" id ":" term " . " formula
\end{array}
\]
\subcaption{Full Grammar}
\caption{Syntax of the theory \texttt{Set} (continued)} \label{hol-set-syntax2}
\end{figure}

\section{A formulation of set theory}
Historically, higher-order logic gives a foundation for Russell and
Whitehead's theory of classes.  Let us use modern terminology and call them
{\bf sets}, but note that these sets are distinct from those of {\ZF} set
theory, and behave more like {\ZF} classes.
Sets are given by predicates over some type~$\sigma$.  Types serve to
define universes for sets, but type checking is still significant.
There is a universal set (for each type).  Thus, sets have complements, and
may be defined by absolute comprehension.
Although sets may contain other sets as elements, the containing set must
have a more complex type.
Finite unions and intersections have the same behaviour in \HOL\ as they
do in~{\ZF}.  In \HOL\ the intersection of the empty set is well-defined,
denoting the universal set for the given type.
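For instance, in the notation of theory \texttt{Set} introduced below, this identity can be written thus (an illustrative equation, not an official rule name):
\begin{ttbox}
Inter {\ttlbrace}{\ttrbrace} = {\ttlbrace}x. True{\ttrbrace}
\end{ttbox}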

\subsection{Syntax of set theory}\index{*set type}
\HOL's set theory is called \thydx{Set}.  The type $\alpha\,set$ is
essentially the same as $\alpha\To bool$.  The new type is defined for
clarity and to avoid complications involving function types in unification.
The isomorphisms between the two types are declared explicitly.  They are
very natural: \texttt{Collect} maps $\alpha\To bool$ to $\alpha\,set$, while
\hbox{\tt op :} maps in the other direction (ignoring argument order).

Figure~\ref{hol-set-syntax} lists the constants, infixes, and syntax
translations.  Figure~\ref{hol-set-syntax2} presents the grammar of the new
constructs.  Infix operators include union and intersection ($A\un B$
and $A\int B$), the subset and membership relations, and the image
operator~{\tt``}\@.  Note that $a$\verb|~:|$b$ is translated to
$\neg(a\in b)$.  

The $\{a@1,\ldots\}$ notation abbreviates finite sets constructed in
the obvious manner using~{\tt insert} and~$\{\}$:
\begin{eqnarray*}
  \{a, b, c\} & \equiv &
  \texttt{insert} \, a \, ({\tt insert} \, b \, ({\tt insert} \, c \, \{\}))
\end{eqnarray*}

The set \hbox{\tt{\ttlbrace}$x$.$P[x]${\ttrbrace}} consists of all $x$ (of suitable type)
that satisfy~$P[x]$, where $P[x]$ is a formula that may contain free
occurrences of~$x$.  This syntax expands to \cdx{Collect}$(\lambda
x.P[x])$.  It defines sets by absolute comprehension, which is impossible
in~{\ZF}; the type of~$x$ implicitly restricts the comprehension.
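For example, membership in a comprehension reduces to the defining predicate, so a trivial instance can be proved directly (a sketch; \texttt{Set.thy} denotes the theory value at the {\ML} level):
\begin{ttbox}
goal Set.thy "a : {\ttlbrace}x. x=a{\ttrbrace}";
by (rtac CollectI 1);
by (rtac refl 1);
\end{ttbox}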

The set theory defines two {\bf bounded quantifiers}:
\begin{eqnarray*}
   \forall x\in A.P[x] &\hbox{abbreviates}& \forall x. x\in A\imp P[x] \\
   \exists x\in A.P[x] &\hbox{abbreviates}& \exists x. x\in A\conj P[x]
\end{eqnarray*}
The constants~\cdx{Ball} and~\cdx{Bex} are defined
accordingly.  Instead of \texttt{Ball $A$ $P$} and \texttt{Bex $A$ $P$} we may
write\index{*"! symbol}\index{*"? symbol}
\index{*ALL symbol}\index{*EX symbol} 
\hbox{\tt !~$x$:$A$.$P[x]$} and \hbox{\tt ?~$x$:$A$.$P[x]$}.  Isabelle's
usual quantifier symbols, \sdx{ALL} and \sdx{EX}, are also accepted
for input.  As with the primitive quantifiers, the {\ML} reference
\ttindex{HOL_quantifiers} specifies which notation to use for output.

Unions and intersections over sets, namely $\bigcup@{x\in A}B[x]$ and
$\bigcap@{x\in A}B[x]$, are written 
\sdx{UN}~\hbox{\tt$x$:$A$.$B[x]$} and \sdx{INT}~\hbox{\tt$x$:$A$.$B[x]$}.

Unions and intersections over types, namely $\bigcup@x B[x]$ and $\bigcap@x
B[x]$, are written \sdx{UN}~\hbox{\tt$x$.$B[x]$} and
\sdx{INT}~\hbox{\tt$x$.$B[x]$}.  They are equivalent to the previous
union and intersection operators when $A$ is the universal set.
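For instance, intersection over a type simply recovers universal quantification in set form (an illustrative equivalence following the definitions in Fig.\ts\ref{hol-set-rules}):
\[ b \in \bigcap@x B[x] \quad\hbox{iff}\quad \forall x.\; b\in B[x]. \]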

The operators $\bigcup A$ and $\bigcap A$ act upon sets of sets.  They are
not binders, but are equal to $\bigcup@{x\in A}x$ and $\bigcap@{x\in A}x$,
respectively.

\begin{figure} \underscoreon
\begin{ttbox}
\tdx{mem_Collect_eq}    (a : {\ttlbrace}x.P x{\ttrbrace}) = P a
\tdx{Collect_mem_eq}    {\ttlbrace}x.x:A{\ttrbrace} = A

\tdx{empty_def}         {\ttlbrace}{\ttrbrace}          == {\ttlbrace}x.False{\ttrbrace}
\tdx{insert_def}        insert a B  == {\ttlbrace}x.x=a{\ttrbrace} Un B
\tdx{Ball_def}          Ball A P    == ! x. x:A --> P x
\tdx{Bex_def}           Bex A P     == ? x. x:A & P x
\tdx{subset_def}        A <= B      == ! x:A. x:B
\tdx{Un_def}            A Un B      == {\ttlbrace}x.x:A | x:B{\ttrbrace}
\tdx{Int_def}           A Int B     == {\ttlbrace}x.x:A & x:B{\ttrbrace}
\tdx{set_diff_def}      A - B       == {\ttlbrace}x.x:A & x~:B{\ttrbrace}
\tdx{Compl_def}         Compl A     == {\ttlbrace}x. ~ x:A{\ttrbrace}
\tdx{INTER_def}         INTER A B   == {\ttlbrace}y. ! x:A. y: B x{\ttrbrace}
\tdx{UNION_def}         UNION A B   == {\ttlbrace}y. ? x:A. y: B x{\ttrbrace}
\tdx{INTER1_def}        INTER1 B    == INTER {\ttlbrace}x.True{\ttrbrace} B 
\tdx{UNION1_def}        UNION1 B    == UNION {\ttlbrace}x.True{\ttrbrace} B 
\tdx{Inter_def}         Inter S     == (INT x:S. x)
\tdx{Union_def}         Union S     == (UN  x:S. x)
\tdx{Pow_def}           Pow A       == {\ttlbrace}B. B <= A{\ttrbrace}
\tdx{image_def}         f``A        == {\ttlbrace}y. ? x:A. y=f x{\ttrbrace}
\tdx{range_def}         range f     == {\ttlbrace}y. ? x. y=f x{\ttrbrace}
\end{ttbox}
\caption{Rules of the theory \texttt{Set}} \label{hol-set-rules}
\end{figure}

\begin{figure} \underscoreon
\begin{ttbox}
\tdx{CollectI}        [| P a |] ==> a : {\ttlbrace}x.P x{\ttrbrace}
\tdx{CollectD}        [| a : {\ttlbrace}x.P x{\ttrbrace} |] ==> P a
\tdx{CollectE}        [| a : {\ttlbrace}x.P x{\ttrbrace};  P a ==> W |] ==> W

\tdx{ballI}           [| !!x. x:A ==> P x |] ==> ! x:A. P x
\tdx{bspec}           [| ! x:A. P x;  x:A |] ==> P x
\tdx{ballE}           [| ! x:A. P x;  P x ==> Q;  ~ x:A ==> Q |] ==> Q

\tdx{bexI}            [| P x;  x:A |] ==> ? x:A. P x
\tdx{bexCI}           [| ! x:A. ~ P x ==> P a;  a:A |] ==> ? x:A.P x
\tdx{bexE}            [| ? x:A. P x;  !!x. [| x:A; P x |] ==> Q  |] ==> Q
\end{ttbox}
\subcaption{Comprehension and Bounded quantifiers}

\begin{ttbox}
\tdx{subsetI}         (!!x.x:A ==> x:B) ==> A <= B
\tdx{subsetD}         [| A <= B;  c:A |] ==> c:B
\tdx{subsetCE}        [| A <= B;  ~ (c:A) ==> P;  c:B ==> P |] ==> P

\tdx{subset_refl}     A <= A
\tdx{subset_trans}    [| A<=B;  B<=C |] ==> A<=C

\tdx{equalityI}       [| A <= B;  B <= A |] ==> A = B
\tdx{equalityD1}      A = B ==> A<=B
\tdx{equalityD2}      A = B ==> B<=A
\tdx{equalityE}       [| A = B;  [| A<=B; B<=A |] ==> P |]  ==>  P

\tdx{equalityCE}      [| A = B;  [| c:A; c:B |] ==> P;  
                           [| ~ c:A; ~ c:B |] ==> P 
                |]  ==>  P
\end{ttbox}
\subcaption{The subset and equality relations}
\caption{Derived rules for set theory} \label{hol-set1}
\end{figure}

\begin{figure} \underscoreon
\begin{ttbox}
\tdx{emptyE}   a : {\ttlbrace}{\ttrbrace} ==> P

\tdx{insertI1} a : insert a B
\tdx{insertI2} a : B ==> a : insert b B
\tdx{insertE}  [| a : insert b A;  a=b ==> P;  a:A ==> P |] ==> P

\tdx{ComplI}   [| c:A ==> False |] ==> c : Compl A
\tdx{ComplD}   [| c : Compl A |] ==> ~ c:A

\tdx{UnI1}     c:A ==> c : A Un B
\tdx{UnI2}     c:B ==> c : A Un B
\tdx{UnCI}     (~c:B ==> c:A) ==> c : A Un B
\tdx{UnE}      [| c : A Un B;  c:A ==> P;  c:B ==> P |] ==> P

\tdx{IntI}     [| c:A;  c:B |] ==> c : A Int B
\tdx{IntD1}    c : A Int B ==> c:A
\tdx{IntD2}    c : A Int B ==> c:B
\tdx{IntE}     [| c : A Int B;  [| c:A; c:B |] ==> P |] ==> P

\tdx{UN_I}     [| a:A;  b: B a |] ==> b: (UN x:A. B x)
\tdx{UN_E}     [| b: (UN x:A. B x);  !!x.[| x:A;  b:B x |] ==> R |] ==> R

\tdx{INT_I}    (!!x. x:A ==> b: B x) ==> b : (INT x:A. B x)
\tdx{INT_D}    [| b: (INT x:A. B x);  a:A |] ==> b: B a
\tdx{INT_E}    [| b: (INT x:A. B x);  b: B a ==> R;  ~ a:A ==> R |] ==> R

\tdx{UnionI}   [| X:C;  A:X |] ==> A : Union C
\tdx{UnionE}   [| A : Union C;  !!X.[| A:X;  X:C |] ==> R |] ==> R

\tdx{InterI}   [| !!X. X:C ==> A:X |] ==> A : Inter C
\tdx{InterD}   [| A : Inter C;  X:C |] ==> A:X
\tdx{InterE}   [| A : Inter C;  A:X ==> R;  ~ X:C ==> R |] ==> R

\tdx{PowI}     A<=B ==> A: Pow B
\tdx{PowD}     A: Pow B ==> A<=B

\tdx{imageI}   [| x:A |] ==> f x : f``A
\tdx{imageE}   [| b : f``A;  !!x.[| b=f x;  x:A |] ==> P |] ==> P

\tdx{rangeI}   f x : range f
\tdx{rangeE}   [| b : range f;  !!x.[| b=f x |] ==> P |] ==> P
\end{ttbox}
\caption{Further derived rules for set theory} \label{hol-set2}
\end{figure}

\subsection{Axioms and rules of set theory}
Figure~\ref{hol-set-rules} presents the rules of theory \thydx{Set}.  The
axioms \tdx{mem_Collect_eq} and \tdx{Collect_mem_eq} assert
that the functions \texttt{Collect} and \hbox{\tt op :} are isomorphisms.  Of
course, \hbox{\tt op :} also serves as the membership relation.

All the other axioms are definitions.  They include the empty set, bounded
quantifiers, unions, intersections, complements and the subset relation.
They also include straightforward constructions on functions: image~({\tt``})
and \texttt{range}.
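For example, the derived rule \tdx{rangeI} of Fig.\ts\ref{hol-set2} proves membership in a range directly (a sketch; \texttt{Set.thy} denotes the theory value at the {\ML} level):
\begin{ttbox}
goal Set.thy "f x : range f";
by (rtac rangeI 1);
\end{ttbox}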

%The predicate \cdx{inj_onto} is used for simulating type definitions.
%The statement ${\tt inj_onto}~f~A$ asserts that $f$ is injective on the
%set~$A$, which specifies a subset of its domain type.  In a type
%definition, $f$ is the abstraction function and $A$ is the set of valid
%representations; we should not expect $f$ to be injective outside of~$A$.

%\begin{figure} \underscoreon
%\tdx{Inv_f_f}    inj f ==> Inv f (f x) = x
%\tdx{f_Inv_f}    y : range f ==> f(Inv f y) = y
%    [| Inv f x=Inv f y; x: range f;  y: range f |] ==> x=y
%\tdx{monoI}      [| !!A B. A <= B ==> f A <= f B |] ==> mono f
%\tdx{monoD}      [| mono f;  A <= B |] ==> f A <= f B
%\tdx{injI}       [| !! x y. f x = f y ==> x=y |] ==> inj f
%\tdx{inj_inverseI}              (!!x. g(f x) = x) ==> inj f
%\tdx{injD}       [| inj f; f x = f y |] ==> x=y
%\tdx{inj_ontoI}  (!!x y. [| f x=f y; x:A; y:A |] ==> x=y) ==> inj_onto f A
%\tdx{inj_ontoD}  [| inj_onto f A;  f x=f y;  x:A;  y:A |] ==> x=y
%    (!!x. x:A ==> g(f x) = x) ==> inj_onto f A
%    [| inj_onto f A;  x~=y;  x:A;  y:A |] ==> ~ f x=f y
%\caption{Derived rules involving functions} \label{hol-fun}

\begin{figure} \underscoreon
\begin{ttbox}
\tdx{Union_upper}     B:A ==> B <= Union A
\tdx{Union_least}     [| !!X. X:A ==> X<=C |] ==> Union A <= C

\tdx{Inter_lower}     B:A ==> Inter A <= B
\tdx{Inter_greatest}  [| !!X. X:A ==> C<=X |] ==> C <= Inter A

\tdx{Un_upper1}       A <= A Un B
\tdx{Un_upper2}       B <= A Un B
\tdx{Un_least}        [| A<=C;  B<=C |] ==> A Un B <= C

\tdx{Int_lower1}      A Int B <= A
\tdx{Int_lower2}      A Int B <= B
\tdx{Int_greatest}    [| C<=A;  C<=B |] ==> C <= A Int B
\end{ttbox}
\caption{Derived rules involving subsets} \label{hol-subset}
\end{figure}

\begin{figure} \underscoreon   \hfuzz=4pt%suppress "Overfull \hbox" message
\begin{ttbox}
\tdx{Int_absorb}        A Int A = A
\tdx{Int_commute}       A Int B = B Int A
\tdx{Int_assoc}         (A Int B) Int C  =  A Int (B Int C)
\tdx{Int_Un_distrib}    (A Un B)  Int C  =  (A Int C) Un (B Int C)

\tdx{Un_absorb}         A Un A = A
\tdx{Un_commute}        A Un B = B Un A
\tdx{Un_assoc}          (A Un B)  Un C  =  A Un (B Un C)
\tdx{Un_Int_distrib}    (A Int B) Un C  =  (A Un C) Int (B Un C)

\tdx{Compl_disjoint}    A Int (Compl A) = {\ttlbrace}x.False{\ttrbrace}
\tdx{Compl_partition}   A Un  (Compl A) = {\ttlbrace}x.True{\ttrbrace}
\tdx{double_complement} Compl(Compl A) = A
\tdx{Compl_Un}          Compl(A Un B)  = (Compl A) Int (Compl B)
\tdx{Compl_Int}         Compl(A Int B) = (Compl A) Un (Compl B)

\tdx{Union_Un_distrib}  Union(A Un B) = (Union A) Un (Union B)
\tdx{Int_Union}         A Int (Union B) = (UN C:B. A Int C)
\tdx{Un_Union_image}    (UN x:C.(A x) Un (B x)) = Union(A``C) Un Union(B``C)

\tdx{Inter_Un_distrib}  Inter(A Un B) = (Inter A) Int (Inter B)
\tdx{Un_Inter}          A Un (Inter B) = (INT C:B. A Un C)
\tdx{Int_Inter_image}   (INT x:C.(A x) Int (B x)) = Inter(A``C) Int Inter(B``C)
\end{ttbox}
\caption{Set equalities} \label{hol-equalities}
\end{figure}

Figures~\ref{hol-set1} and~\ref{hol-set2} present derived rules.  Most are
obvious and resemble rules of Isabelle's {\ZF} set theory.  Certain rules,
such as \tdx{subsetCE}, \tdx{bexCI} and \tdx{UnCI},
are designed for classical reasoning; the rules \tdx{subsetD},
\tdx{bexI}, \tdx{Un1} and~\tdx{Un2} are not
strictly necessary but yield more natural proofs.  Similarly,
\tdx{equalityCE} supports classical reasoning about extensionality,
after the fashion of \tdx{iffCE}.  See the file \texttt{HOL/Set.ML} for
proofs pertaining to set theory.

Figure~\ref{hol-subset} presents lattice properties of the subset relation.
Unions form least upper bounds; non-empty intersections form greatest lower
bounds.  Reasoning directly about subsets often yields clearer proofs than
reasoning about the membership relation.  See the file \texttt{HOL/subset.ML}.

Figure~\ref{hol-equalities} presents many common set equalities.  They
include commutative, associative and distributive laws involving unions,
intersections and complements.  For a complete listing see the file
\texttt{HOL/equalities.ML}.

\texttt{Blast_tac} proves many set-theoretic theorems automatically.
Hence you seldom need to refer to the theorems above.

\begin{figure}
\begin{center}
\begin{tabular}{lll}
  \it name      &\it meta-type  & \it description \\ 
  \cdx{inj}~~\cdx{surj}& $(\alpha\To\beta )\To bool$
        & injective/surjective \\
  \cdx{inj_onto}        & $[\alpha\To\beta ,\alpha\,set]\To bool$
        & injective over subset\\
  \cdx{inv} & $(\alpha\To\beta)\To(\beta\To\alpha)$ & inverse function
\end{tabular}
\end{center}
\subcaption{Constants}

\begin{ttbox}
\tdx{inj_def}           inj f        == ! x y. f x=f y --> x=y
\tdx{surj_def}          surj f       == ! y. ? x. y=f x
\tdx{inj_onto_def}      inj_onto f A == !x:A. !y:A. f x=f y --> x=y
\tdx{inv_def}           inv f        == (\%y. @x. f(x)=y)
\end{ttbox}
\subcaption{Definitions}
\caption{Theory \thydx{Fun}} \label{fig:HOL:Fun}
\end{figure}

\subsection{Properties of functions}\nopagebreak
Figure~\ref{fig:HOL:Fun} presents a theory of simple properties of functions.
Note that ${\tt inv}~f$ uses Hilbert's $\varepsilon$ to yield an inverse
of~$f$.  See the file \texttt{HOL/Fun.ML} for a complete listing of the derived
rules.  Reasoning about function composition (the operator~\sdx{o}) and the
predicate~\cdx{surj} is done simply by expanding the definitions.
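Since \texttt{inj_def} and \texttt{surj_def} unfold to plain first-order
formulas, their content is easy to illustrate outside Isabelle.  The following
Python sketch (purely illustrative; in \HOL\ these are predicates on total
functions, not executable tests over finite domains) checks the two
definitions on finite domains:

```python
# Illustrative finite-domain versions of the HOL definitions
#   inj f  == ! x y. f x = f y --> x = y
#   surj f == ! y. ? x. y = f x

def is_inj(f, dom):
    """f x = f y implies x = y, checked over a finite domain."""
    xs = list(dom)
    return all(f(x) != f(y)
               for i, x in enumerate(xs) for y in xs[i + 1:])

def is_surj(f, dom, cod):
    """Every y in the codomain is f x for some x in the domain."""
    return all(any(f(x) == y for x in dom) for y in cod)
```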

There is also a large collection of monotonicity theorems for constructions
on sets in the file \texttt{HOL/mono.ML}.

\section{Generic packages}

\HOL\ instantiates most of Isabelle's generic packages, making available the
simplifier and the classical reasoner.

\subsection{Simplification and substitution}

The simplifier is available in \HOL.  Tactics such as {\tt
  Asm_simp_tac} and \texttt{Full_simp_tac} use the default simpset
({\tt!simpset}), which works for most purposes.  A quite minimal
simplification set for higher-order logic is~\ttindexbold{HOL_ss};
an even more frugal one is \ttindexbold{HOL_basic_ss}.  Equality~($=$), which
also expresses logical equivalence, may be used for rewriting.  See
the file \texttt{HOL/simpdata.ML} for a complete listing of the basic
simplification rules.

See \iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
{Chaps.\ts\ref{substitution} and~\ref{simp-chap}} for details of substitution
and simplification.

\begin{warn}\index{simplification!of conjunctions}%
  Reducing $a=b\conj P(a)$ to $a=b\conj P(b)$ is sometimes advantageous.  The
  left part of a conjunction helps in simplifying the right part.  This effect
  is not enabled by default because it can be slow.  It can be obtained by
  including \ttindex{conj_cong} in a simpset: \verb$addcongs [conj_cong]$.
\end{warn}

If the simplifier cannot use a certain rewrite rule --- either because
of nontermination or because its left-hand side is too flexible ---
then you might try \texttt{stac}:
\begin{description}
\item[\ttindexbold{stac} $thm$ $i$,] where $thm$ is of the form $lhs = rhs$,
  replaces in subgoal $i$ instances of $lhs$ by corresponding instances of
  $rhs$.  In case of multiple instances of $lhs$ in subgoal $i$, backtracking
  may be necessary to select the desired ones.

If $thm$ is a conditional equality, the instantiated condition becomes an
additional (first) subgoal.
\end{description}

 \HOL{} provides the tactic \ttindex{hyp_subst_tac}, which substitutes
  for an equality throughout a subgoal and its hypotheses.  This tactic uses
  \HOL's general substitution rule.

\subsection{Classical reasoning}

\HOL\ derives classical introduction rules for $\disj$ and~$\exists$, as
well as classical elimination rules for~$\imp$ and~$\bimp$, and the swap
rule; recall Fig.\ts\ref{hol-lemmas2} above.

The classical reasoner is installed.  Tactics such as \texttt{Blast_tac} and {\tt
Best_tac} use the default claset ({\tt!claset}), which works for most
purposes.  Named clasets include \ttindexbold{prop_cs}, which includes the
propositional rules, and \ttindexbold{HOL_cs}, which also includes quantifier
rules.  See the file \texttt{HOL/cladata.ML} for lists of the classical rules,
and \iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
{Chap.\ts\ref{chap:classical}} for more discussion of classical proof methods.

\section{Types}

This section describes \HOL's basic predefined types ($\alpha \times
\beta$, $\alpha + \beta$, $nat$ and $\alpha \; list$) and ways for
introducing new types in general.  The most important type
construction, the \texttt{datatype}, is treated separately in
\S\ref{sec:HOL:datatype}.

\subsection{Product and sum types}\index{*"* type}\index{*"+ type}

\begin{figure}
\begin{center}
\begin{tabular}{llll}
  \it symbol    & \it meta-type &           & \it description \\ 
  \cdx{Pair}    & $[\alpha,\beta]\To \alpha\times\beta$
        & & ordered pairs $(a,b)$ \\
  \cdx{fst}     & $\alpha\times\beta \To \alpha$        & & first projection\\
  \cdx{snd}     & $\alpha\times\beta \To \beta$         & & second projection\\
  \cdx{split}   & $[[\alpha,\beta]\To\gamma, \alpha\times\beta] \To \gamma$ 
        & & generalized projection\\
  \cdx{Sigma}  & 
        $[\alpha\,set, \alpha\To\beta\,set]\To(\alpha\times\beta)set$ &
        & general sum of sets
\end{tabular}
\end{center}
\subcaption{Constants}
%\tdx{fst_def}      fst p     == @a. ? b. p = (a,b)
%\tdx{snd_def}      snd p     == @b. ? a. p = (a,b)
%\tdx{split_def}    split c p == c (fst p) (snd p)
\begin{ttbox}
\tdx{Sigma_def}    Sigma A B == UN x:A. UN y:B x. {\ttlbrace}(x,y){\ttrbrace}

\tdx{Pair_eq}      ((a,b) = (a',b')) = (a=a' & b=b')
\tdx{Pair_inject}  [| (a, b) = (a',b');  [| a=a';  b=b' |] ==> R |] ==> R
\tdx{PairE}        [| !!x y. p = (x,y) ==> Q |] ==> Q

\tdx{fst_conv}     fst (a,b) = a
\tdx{snd_conv}     snd (a,b) = b
\tdx{surjective_pairing}  p = (fst p,snd p)

\tdx{split}        split c (a,b) = c a b
\tdx{expand_split} R(split c p) = (! x y. p = (x,y) --> R(c x y))

\tdx{SigmaI}    [| a:A;  b:B a |] ==> (a,b) : Sigma A B
\tdx{SigmaE}    [| c:Sigma A B; !!x y.[| x:A; y:B x; c=(x,y) |] ==> P |] ==> P
\end{ttbox}
\subcaption{Definitions and derived rules}
\caption{Type $\alpha\times\beta$}\label{hol-prod}
\end{figure}

Theory \thydx{Prod} (Fig.\ts\ref{hol-prod}) defines the product type
$\alpha\times\beta$, with the ordered pair syntax $(a, b)$.  General
tuples are simulated by pairs nested to the right:
\begin{center}
\begin{tabular}{ll}
\it external & \it internal \\
$\tau@1 \times \dots \times \tau@n$ & $\tau@1 \times (\dots (\tau@{n-1} \times \tau@n)\dots)$ \\
$(t@1,\dots,t@n)$ & $(t@1,(\dots,(t@{n-1},t@n)\dots))$ \\
\end{tabular}
\end{center}
In addition, it is possible to use tuples
as patterns in abstractions:
{\tt\%($x$,$y$).$t$} \quad stands for\quad \texttt{split(\%$x$\thinspace$y$.$t$)} 
Nested patterns are also supported.  They are translated stepwise:
{\tt\%($x$,$y$,$z$).$t$} $\leadsto$ {\tt\%($x$,($y$,$z$)).$t$} $\leadsto$
{\tt split(\%$x$.\%($y$,$z$).$t$)} $\leadsto$ \texttt{split(\%$x$.split(\%$y$
  $z$.$t$))}.  The reverse translation is performed upon printing.
  The translation between patterns and \texttt{split} is performed automatically
  by the parser and printer.  Thus the internal and external form of a term
  may differ, which can affect proofs.  For example, the term {\tt
  (\%(x,y).(y,x))(a,b)} requires the theorem \texttt{split} (which is in the
  default simpset) to rewrite to {\tt(b,a)}.
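The role of \texttt{split} can be mimicked in any functional language.  A
small Python sketch (illustrative only; the names are ours, not Isabelle's)
shows how a pattern abstraction reduces to \texttt{split} applied to a curried
function:

```python
# split converts a curried function into a function on pairs:
#   split c (a, b) = c a b
def split(c):
    return lambda p: c(p[0], p[1])

# The pattern abstraction %(x,y).(y,x) corresponds to split(%x y.(y,x)):
swap = split(lambda x, y: (y, x))
```

Applying \texttt{swap} to a pair reverses it, just as {\tt(\%(x,y).(y,x))(a,b)}
rewrites to {\tt(b,a)}.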
In addition to explicit $\lambda$-abstractions, patterns can be used in any
variable binding construct which is internally described by a
$\lambda$-abstraction.  Some important examples are
\begin{description}
\item[Let:] \texttt{let {\it pattern} = $t$ in $u$}
\item[Quantifiers:] \texttt{!~{\it pattern}:$A$.~$P$}
\item[Choice:] {\underscoreon \tt @~{\it pattern}~.~$P$}
\item[Set operations:] \texttt{UN~{\it pattern}:$A$.~$B$}
\item[Sets:] \texttt{{\ttlbrace}~{\it pattern}~.~$P$~{\ttrbrace}}
\end{description}

There is a simple tactic which supports reasoning about patterns:
\begin{description}
\item[\ttindexbold{split_all_tac} $i$] replaces in subgoal $i$ all
  {\tt!!}-quantified variables of product type by individual variables for
  each component.
\end{description}
A simple example:
\begin{ttbox}
{\out 1. !!p. (\%(x,y,z). (x, y, z)) p = p}
by (split_all_tac 1);
{\out 1. !!x xa ya. (\%(x,y,z). (x, y, z)) (x, xa, ya) = (x, xa, ya)}
\end{ttbox}

Theory \texttt{Prod} also introduces the degenerate product type \texttt{unit}
which contains only a single element named {\tt()} with the property
\begin{ttbox}
\tdx{unit_eq}       u = ()
\end{ttbox}

Theory \thydx{Sum} (Fig.~\ref{hol-sum}) defines the sum type $\alpha+\beta$
which associates to the right and has a lower priority than $*$: $\tau@1 +
\tau@2 + \tau@3*\tau@4$ means $\tau@1 + (\tau@2 + (\tau@3*\tau@4))$.

The definition of products and sums in terms of existing types is not
shown.  The constructions are fairly standard and can be found in the
respective theory files.
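The sum type behaves like a tagged union, with \texttt{sum_case} as its
conditional.  A Python model (illustrative only; the tag representation is our
own choice, not \HOL's construction) mirrors the \texttt{sum_case} rules:

```python
# Tagged-union model of alpha + beta with the rules
#   sum_case f g (Inl x) = f x     sum_case f g (Inr y) = g y
def Inl(x):
    return ('Inl', x)

def Inr(y):
    return ('Inr', y)

def sum_case(f, g, s):
    tag, v = s
    return f(v) if tag == 'Inl' else g(v)
```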

\begin{figure}
\begin{center}
\begin{tabular}{llll}
  \it symbol    & \it meta-type &           & \it description \\ 
  \cdx{Inl}     & $\alpha \To \alpha+\beta$    & & first injection\\
  \cdx{Inr}     & $\beta \To \alpha+\beta$     & & second injection\\
  \cdx{sum_case} & $[\alpha\To\gamma, \beta\To\gamma, \alpha+\beta] \To\gamma$
        & & conditional
\end{tabular}
\end{center}
\subcaption{Constants}
%\tdx{sum_case_def}   sum_case == (\%f g p. @z. (!x. p=Inl x --> z=f x) &
%                                        (!y. p=Inr y --> z=g y))
\begin{ttbox}
\tdx{Inl_not_Inr}    Inl a ~= Inr b

\tdx{inj_Inl}        inj Inl
\tdx{inj_Inr}        inj Inr

\tdx{sumE}           [| !!x. P(Inl x);  !!y. P(Inr y) |] ==> P s

\tdx{sum_case_Inl}   sum_case f g (Inl x) = f x
\tdx{sum_case_Inr}   sum_case f g (Inr x) = g x

\tdx{surjective_sum} sum_case (\%x. f(Inl x)) (\%y. f(Inr y)) s = f s
\tdx{expand_sum_case} R(sum_case f g s) = ((! x. s = Inl(x) --> R(f(x))) &
                                     (! y. s = Inr(y) --> R(g(y))))
\end{ttbox}
\subcaption{Derived rules}
\caption{Type $\alpha+\beta$}\label{hol-sum}
\end{figure}

\index{*"< symbol}
\index{*"* symbol}
\index{*div symbol}
\index{*mod symbol}
\index{*"+ symbol}
\index{*"- symbol}
\begin{figure}
\begin{center}
\begin{tabular}{llll}
  \it symbol    & \it meta-type & \it priority & \it description \\ 
  \cdx{0}       & $nat$         & & zero \\
  \cdx{Suc}     & $nat \To nat$ & & successor function\\
% \cdx{nat_case} & $[\alpha, nat\To\alpha, nat] \To\alpha$ & & conditional\\
% \cdx{nat_rec} & $[nat, \alpha, [nat, \alpha]\To\alpha] \To \alpha$
%        & & primitive recursor\\
  \tt *         & $[nat,nat]\To nat$    &  Left 70      & multiplication \\
  \tt div       & $[nat,nat]\To nat$    &  Left 70      & division\\
  \tt mod       & $[nat,nat]\To nat$    &  Left 70      & modulus\\
  \tt +         & $[nat,nat]\To nat$    &  Left 65      & addition\\
  \tt -         & $[nat,nat]\To nat$    &  Left 65      & subtraction
\end{tabular}
\end{center}
\subcaption{Constants and infixes}

\begin{ttbox}
\tdx{nat_induct}     [| P 0; !!n. P n ==> P(Suc n) |]  ==> P n

\tdx{Suc_not_Zero}   Suc m ~= 0
\tdx{inj_Suc}        inj Suc
\tdx{n_not_Suc_n}    n~=Suc n
\end{ttbox}
\subcaption{Basic properties}
\caption{The type of natural numbers, \tydx{nat}} \label{hol-nat1}
\end{figure}

\begin{figure}
\begin{ttbox}
              0+n           = n
              (Suc m)+n     = Suc(m+n)

              m-0           = m
              0-n           = 0
              Suc(m)-Suc(n) = m-n

              0*n           = 0
              Suc(m)*n      = n + m*n

\tdx{mod_less}      m<n ==> m mod n = m
\tdx{mod_geq}       [| 0<n;  ~m<n |] ==> m mod n = (m-n) mod n

\tdx{div_less}      m<n ==> m div n = 0
\tdx{div_geq}       [| 0<n;  ~m<n |] ==> m div n = Suc((m-n) div n)
\end{ttbox}
\caption{Recursion equations for the arithmetic operators} \label{hol-nat2}
\end{figure}

\subsection{The type of natural numbers, \textit{nat}}
\index{nat@{\textit{nat}} type|(}

The theory \thydx{NatDef} defines the natural numbers in a roundabout but
traditional way.  The axiom of infinity postulates a type~\tydx{ind} of
individuals, which is non-empty and closed under an injective operation.  The
natural numbers are inductively generated by choosing an arbitrary individual
for~0 and using the injective operation to take successors.  This is a least
fixedpoint construction.  For details see the file \texttt{NatDef.thy}.

Type~\tydx{nat} is an instance of class~\cldx{ord}, which makes the
overloaded functions of this class (esp.\ \cdx{<} and \cdx{<=}, but also
\cdx{min}, \cdx{max} and \cdx{LEAST}) available on \tydx{nat}.  Theory
\thydx{Nat} builds on \texttt{NatDef} and shows that {\tt<=} is a partial order,
so \tydx{nat} is also an instance of class \cldx{order}.

Theory \thydx{Arith} develops arithmetic on the natural numbers.  It defines
addition, multiplication and subtraction.  Theory \thydx{Divides} defines
division, remainder and the ``divides'' relation.  The numerous theorems
proved include commutative, associative, distributive, identity and
cancellation laws.  See Figs.\ts\ref{hol-nat1} and~\ref{hol-nat2}.  The
recursion equations for the operators \texttt{+}, \texttt{-} and \texttt{*} on
\texttt{nat} are part of the default simpset.
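The recursion equations of Fig.\ts\ref{hol-nat2} can be transcribed directly
into an executable model.  The following Python sketch (illustrative only;
\HOL's \texttt{nat} is modelled here by non-negative Python integers) mirrors
the defining equations for \texttt{+}, \texttt{-}, \texttt{mod} and
\texttt{div}; note that subtraction is truncated at zero:

```python
# Illustrative Python model of the nat recursion equations.

def add(m, n):          # 0+n = n;  Suc m + n = Suc(m+n)
    return n if m == 0 else 1 + add(m - 1, n)

def sub(m, n):          # m-0 = m;  0-n = 0;  Suc m - Suc n = m-n
    if n == 0:
        return m
    if m == 0:
        return 0        # truncated ("monus") subtraction
    return sub(m - 1, n - 1)

def mod(m, n):          # m<n ==> m mod n = m;  otherwise (m-n) mod n
    assert n > 0
    return m if m < n else mod(sub(m, n), n)

def div(m, n):          # m<n ==> m div n = 0;  otherwise Suc((m-n) div n)
    assert n > 0
    return 0 if m < n else 1 + div(sub(m, n), n)
```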

Functions on \tydx{nat} can be defined by primitive or well-founded recursion;
see \S\ref{sec:HOL:recursive}.  A simple example is addition.
Here, \texttt{op +} is the name of the infix operator~\texttt{+}, following
the standard convention.
\begin{ttbox}
\sdx{primrec} "op +" nat 
  "    0 + n = n"
  "Suc m + n = Suc(m + n)"
\end{ttbox}
There is also a \sdx{case}-construct
of the form
\begin{ttbox}
case \(e\) of 0 => \(a\) | Suc \(m\) => \(b\)
\end{ttbox}
Note that Isabelle insists on precisely this format; you may not even change
the order of the two cases.
Both \texttt{primrec} and \texttt{case} are realized by a recursion operator
\cdx{nat_rec}, the details of which can be found in theory \texttt{NatDef}.
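The recursion operator can itself be modelled outside Isabelle.  A Python
sketch (illustrative only; the equations for \texttt{nat_rec} are the standard
ones, but the rendering is ours) shows how \texttt{primrec} definitions reduce
to it:

```python
# Model of the recursion operator:
#   nat_rec z s 0       = z
#   nat_rec z s (Suc n) = s n (nat_rec z s n)
def nat_rec(z, s, n):
    return z if n == 0 else s(n - 1, nat_rec(z, s, n - 1))

# Addition as a primitive recursion, as in the primrec example above:
def plus(m, n):
    return nat_rec(n, lambda _, r: 1 + r, m)
```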

%The predecessor relation, \cdx{pred_nat}, is shown to be well-founded.
%Recursion along this relation resembles primitive recursion, but is
%stronger because we are in higher-order logic; using primitive recursion to
%define a higher-order function, we can easily define Ackermann's function, which
%is not primitive recursive \cite[page~104]{thompson91}.
%The transitive closure of \cdx{pred_nat} is~$<$.  Many functions on the
%natural numbers are most easily expressed using recursion along~$<$.

Tactic {\tt\ttindex{induct_tac} "$n$" $i$} performs induction on variable~$n$
in subgoal~$i$ using theorem \texttt{nat_induct}.  There is also the derived
theorem \tdx{less_induct}:
\begin{ttbox}
[| !!n. [| ! m. m<n --> P m |] ==> P n |]  ==>  P n
\end{ttbox}

Reasoning about arithmetic inequalities can be tedious.  A minimal amount of
automation is provided by the tactic \ttindex{trans_tac} of type \texttt{int ->
tactic} that deals with simple inequalities.  Note that it only knows about
{\tt 0}, \texttt{Suc}, {\tt<} and {\tt<=}.  The following goals are all solved by
{\tt trans_tac 1}:
\begin{ttbox}
{\out  1. \dots ==> m <= Suc(Suc m)}
{\out  1. [| \dots i <= j \dots Suc j <= k \dots |] ==> i < k}
{\out  1. [| \dots Suc m <= n \dots ~ m < n \dots |] ==> \dots}
\end{ttbox}
For a complete description of the limitations of the tactic and how to avoid
some of them, see the comments at the start of the file that implements it.

If \texttt{trans_tac} fails you, try to find relevant arithmetic results in
the library.  The theory \texttt{NatDef} contains theorems about {\tt<} and
{\tt<=}, the theory \texttt{Arith} contains theorems about \texttt{+},
\texttt{-} and \texttt{*}, and theory \texttt{Divides} contains theorems about
\texttt{div} and \texttt{mod}.  Use the \texttt{find}-functions to locate them
(see the {\em Reference Manual\/}).

\index{#@{\tt[]} symbol}
\index{#@{\tt\#} symbol}
\index{"@@{\tt\at} symbol}
\begin{figure}
\begin{center}
\begin{tabular}{llll}
  \it symbol & \it meta-type & \it priority & \it description \\
  \tt[]    & $\alpha\,list$ & & empty list\\
  \tt \#   & $[\alpha,\alpha\,list]\To \alpha\,list$ & Right 65 & 
        list constructor \\
  \cdx{null}    & $\alpha\,list \To bool$ & & emptiness test\\
  \cdx{hd}      & $\alpha\,list \To \alpha$ & & head \\
  \cdx{tl}      & $\alpha\,list \To \alpha\,list$ & & tail \\
  \cdx{last}    & $\alpha\,list \To \alpha$ & & last element \\
  \cdx{butlast} & $\alpha\,list \To \alpha\,list$ & & drop last element \\
  \tt\at  & $[\alpha\,list,\alpha\,list]\To \alpha\,list$ & Left 65 & append \\
  \cdx{map}     & $(\alpha\To\beta) \To (\alpha\,list \To \beta\,list)$
        & & apply to all\\
  \cdx{filter}  & $(\alpha \To bool) \To (\alpha\,list \To \alpha\,list)$
        & & filter functional\\
  \cdx{set}& $\alpha\,list \To \alpha\,set$ & & elements\\
  \sdx{mem}  & $[\alpha,\alpha\,list]\To bool$    &  Left 55   & membership\\
  \cdx{foldl}   & $(\beta\To\alpha\To\beta) \To \beta \To \alpha\,list \To \beta$ &
  & iteration \\
  \cdx{concat}   & $(\alpha\,list)list\To \alpha\,list$ & & concatenation \\
  \cdx{rev}     & $\alpha\,list \To \alpha\,list$ & & reverse \\
  \cdx{length}  & $\alpha\,list \To nat$ & & length \\
  \cdx{nth}  & $nat \To \alpha\,list \To \alpha$ & & indexing \\
  \cdx{take}, \cdx{drop} & $nat \To \alpha\,list \To \alpha\,list$ &&
    take or drop a prefix \\
  \cdx{takeWhile}, \cdx{dropWhile} &
    $(\alpha \To bool) \To \alpha\,list \To \alpha\,list$ &&
    take or drop a prefix
\end{tabular}
\end{center}
\subcaption{Constants and infixes}

\begin{center} \tt\frenchspacing
\begin{tabular}{rrl}
  \it external        & \it internal  & \it description \\{}
  [$x@1$, $\dots$, $x@n$]  &  $x@1$ \# $\cdots$ \# $x@n$ \# [] &
        \rm finite list \\{}
  [$x$:$l$. $P$]  & filter ($\lambda x{.}P$) $l$ & 
        \rm list comprehension
\end{tabular}
\end{center}
\subcaption{Translations}
\caption{The theory \thydx{List}} \label{hol-list}
\end{figure}

\begin{figure}
\begin{ttbox}
null [] = True
null (x#xs) = False

hd (x#xs) = x
tl (x#xs) = xs
tl [] = []

[] @ ys = ys
(x#xs) @ ys = x # xs @ ys

map f [] = []
map f (x#xs) = f x # map f xs

filter P [] = []
filter P (x#xs) = (if P x then x#filter P xs else filter P xs)

set [] = \ttlbrace\ttrbrace
set (x#xs) = insert x (set xs)

x mem [] = False
x mem (y#ys) = (if y=x then True else x mem ys)

foldl f a [] = a
foldl f a (x#xs) = foldl f (f a x) xs

concat([]) = []
concat(x#xs) = x @ concat(xs)

rev([]) = []
rev(x#xs) = rev(xs) @ [x]

length([]) = 0
length(x#xs) = Suc(length(xs))

nth 0 xs = hd xs
nth (Suc n) xs = nth n (tl xs)

take n [] = []
take n (x#xs) = (case n of 0 => [] | Suc(m) => x # take m xs)

drop n [] = []
drop n (x#xs) = (case n of 0 => x#xs | Suc(m) => drop m xs)

takeWhile P [] = []
takeWhile P (x#xs) = (if P x then x#takeWhile P xs else [])

dropWhile P [] = []
dropWhile P (x#xs) = (if P x then dropWhile P xs else xs)
\end{ttbox}
\caption{Recursion equations for list processing functions}
\label{fig:HOL:list-simps}
\end{figure}
\index{nat@{\textit{nat}} type|)}

\subsection{The type constructor for lists, \textit{list}}
\index{list@{\textit{list}} type|(}

Figure~\ref{hol-list} presents the theory \thydx{List}: the basic list
operations with their types and syntax.  Type $\alpha \; list$ is
defined as a \texttt{datatype} with the constructors {\tt[]} and {\tt\#}.
As a result the generic structural induction and case analysis tactics
\texttt{induct\_tac} and \texttt{exhaust\_tac} also become available for
lists.  A \sdx{case} construct of the form
\begin{ttbox}
case \(e\) of [] => \(a\)  |  \(x\)\#\(xs\) => \(b\)
\end{ttbox}
is defined by translation.  For details see~\S\ref{sec:HOL:datatype}.

{\tt List} provides a basic library of list processing functions defined by
primitive recursion (see~\S\ref{sec:HOL:primrec}).  The recursion equations
are shown in Fig.\ts\ref{fig:HOL:list-simps}.
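Because every function in the library is defined by primitive recursion over
{\tt[]} and {\tt\#}, the equations transcribe directly into any functional
language.  A Python sketch of three representatives (illustrative only; the
function names are ours, and Python lists stand in for \HOL\ lists):

```python
# Direct transcriptions of three recursion equations from theory List.

def foldl(f, a, xs):
    # foldl f a [] = a;  foldl f a (x#xs) = foldl f (f a x) xs
    return a if not xs else foldl(f, f(a, xs[0]), xs[1:])

def take_while(p, xs):
    # takeWhile P (x#xs) = (if P x then x#takeWhile P xs else [])
    return [] if not xs or not p(xs[0]) else [xs[0]] + take_while(p, xs[1:])

def drop_while(p, xs):
    # dropWhile P (x#xs) = (if P x then dropWhile P xs else xs)
    return xs if not xs or not p(xs[0]) else drop_while(p, xs[1:])
```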

\index{list@{\textit{list}} type|)}

\subsection{Introducing new types} \label{sec:typedef}

The \HOL\ methodology dictates that all extensions to a theory should
be \textbf{definitional}.  The type definition mechanism that
meets this criterion is \ttindex{typedef}.  Note that \emph{type synonyms},
which are inherited from {\Pure} and described elsewhere, are just
syntactic abbreviations that have no logical meaning.

\begin{warn}
  Types in \HOL\ must be non-empty; otherwise the quantifier rules would be
  unsound, because $\exists x. x=x$ is a theorem \cite[\S7]{paulson-COLOG}.
\end{warn}
A \bfindex{type definition} identifies the new type with a subset of
an existing type.  More precisely, the new type is defined by
exhibiting an existing type~$\tau$, a set~$A::\tau\,set$, and a
theorem of the form $x:A$.  Thus~$A$ is a non-empty subset of~$\tau$,
and the new type denotes this subset.  New functions are defined that
establish an isomorphism between the new type and the subset.  If
type~$\tau$ involves type variables $\alpha@1$, \ldots, $\alpha@n$,
then the type definition creates a type constructor
$(\alpha@1,\ldots,\alpha@n)ty$ rather than a particular type.

\begin{figure}
\begin{ttbox}
typedef  : 'typedef' ( () | '(' name ')') type '=' set witness;
type    : typevarlist name ( () | '(' infix ')' );
set     : string;
witness : () | '(' id ')';
\end{ttbox}
\caption{Syntax of type definitions}
\label{fig:HOL:typedef}
\end{figure}

The syntax for type definitions is shown in Fig.~\ref{fig:HOL:typedef}.  For
the definition of `typevarlist' and `infix' see
\iflabelundefined{app:TheorySyntax}{the appendix of the {\em Reference Manual\/}}%
{Appendix~\ref{app:TheorySyntax}}.  The remaining nonterminals have the
following meaning:
\begin{description}
\item[\it type:] the new type constructor $(\alpha@1,\dots,\alpha@n)ty$ with
  optional infix annotation.
\item[\it name:] an alphanumeric name $T$ for the type constructor
  $ty$, in case $ty$ is a symbolic name.  Defaults to $ty$.
\item[\it set:] the representing subset $A$.
\item[\it witness:] name of a theorem of the form $a:A$ proving
  non-emptiness.  It can be omitted if Isabelle can prove non-emptiness
  automatically.
\end{description}
If all context conditions are met (no duplicate type variables in
`typevarlist', no extra type variables in `set', and no free term variables
in `set'), the following components are added to the theory:
\begin{itemize}
\item a type $ty :: (term,\dots,term)term$
\item constants
\begin{eqnarray*}
T &::& \tau\;set \\
Rep_T &::& (\alpha@1,\dots,\alpha@n)ty \To \tau \\
Abs_T &::& \tau \To (\alpha@1,\dots,\alpha@n)ty
\end{eqnarray*}
\item a definition and three axioms
\[\begin{array}{ll}
T{\tt_def} & T \equiv A \\
{\tt Rep_}T & Rep_T\,x \in T \\
{\tt Rep_}T{\tt_inverse} & Abs_T\,(Rep_T\,x) = x \\
{\tt Abs_}T{\tt_inverse} & y \in T \Imp Rep_T\,(Abs_T\,y) = y
\end{array}\]
stating that $(\alpha@1,\dots,\alpha@n)ty$ is isomorphic to $A$ by $Rep_T$
and its inverse $Abs_T$.
\end{itemize}
Below are two simple examples of \HOL\ type definitions.  Non-emptiness
is proved automatically here.
\begin{ttbox}
typedef unit = "{\ttlbrace}True{\ttrbrace}"

typedef (prod)
  ('a, 'b) "*"    (infixr 20)
      = "{\ttlbrace}f . EX (a::'a) (b::'b). f = (\%x y. x = a & y = b){\ttrbrace}"
\end{ttbox}

Type definitions permit the introduction of abstract data types in a safe
way, namely by providing models based on already existing types.  Given some
abstract axiomatic description $P$ of a type, this involves two steps:
\begin{enumerate}
\item Find an appropriate type $\tau$ and subset $A$ which has the desired
  properties $P$, and make a type definition based on this representation.
\item Prove that $P$ holds for $ty$ by lifting $P$ from the representation.
\end{enumerate}
You can now forget about the representation and work solely in terms of the
abstract properties $P$.

If you introduce a new type (constructor) $ty$ axiomatically, i.e.\ by
declaring the type and its operations and by stating the desired axioms, you
should make sure the type has a non-empty model.  You must also have a clause
\begin{ttbox}
arities \(ty\) :: (term,\thinspace\(\dots\),{\thinspace}term){\thinspace}term
\end{ttbox}
in your theory file to tell Isabelle that $ty$ is in class \texttt{term}, the
class of all \HOL\ types.

\section{Datatype declarations}

Inductive datatypes, similar to those of \ML, frequently appear in
non-trivial applications of \HOL.  In principle, such types could be
defined by hand via \texttt{typedef} (see \S\ref{sec:typedef}), but
this would be far too tedious.  The \ttindex{datatype} definition
package of \HOL\ automates such chores.  It generates freeness theorems
and induction rules from a very simple description of the new type
provided by the user.


The general \HOL\ \texttt{datatype} definition is of the following form:
\[
\mathtt{datatype}~(\alpha@1, \dots, \alpha@n) \, t ~=~
C@1~\tau@{11}~\dots~\tau@{1k@1} ~\mid~ \dots ~\mid~
C@m~\tau@{m1}~\dots~\tau@{mk@m}
\]
where $\alpha@i$ are type variables, $C@i$ are distinct constructor
names and $\tau@{ij}$ are types.  The latter may be one of the
following:
\begin{itemize}
\item type variables $\alpha@1, \dots, \alpha@n$,

\item types $(\beta@1, \dots, \beta@l) \, t'$ where $t'$ is a
  previously declared type constructor or type synonym and $\{\beta@1,
  \dots, \beta@l\} \subseteq \{\alpha@1, \dots, \alpha@n\}$,

\item the newly defined type $(\alpha@1, \dots, \alpha@n) \, t$.
\end{itemize}
Recursive occurrences of $(\alpha@1, \dots, \alpha@n) \, t$ are quite
restricted.  To ensure that the new type is non-empty, at least one
constructor must consist of only non-recursive type components.  If
you would like one of the $\tau@{ij}$ to be a complex type expression
$\tau$ you need to declare a new type synonym $syn = \tau$ first and
use $syn$ in place of $\tau$.  Of course this does not work if $\tau$
mentions the recursive type itself, thus ruling out problematic cases
like $\mathtt{datatype}~ t ~=~ C \, (t \To t)$, but also unproblematic
ones like $\mathtt{datatype}~ t ~=~ C \, (t~list)$.

The constructors are automatically defined as functions of their respective
argument types:
\[ C@j :: [\tau@{j1},\dots,\tau@{jk@j}] \To (\alpha@1,\dots,\alpha@n)t \]
These functions have certain {\em freeness} properties --- they are
distinct:
\[
C@i~x@1~\dots~x@{k@i} \neq C@j~y@1~\dots~y@{k@j} \qquad
\mbox{for all}~ i \neq j,
\]
and they are injective:
\[
(C@j~x@1~\dots~x@{k@j} = C@j~y@1~\dots~y@{k@j}) =
(x@1 = y@1 \land \dots \land x@{k@j} = y@{k@j})
\]
Because the number of inequalities is quadratic in the number of
constructors, a different representation is used if there are $7$ or
more of them.  In that case every constructor term is mapped to a
natural number:
\[ t_ord \, (C@i \, x@1 \, \dots \, x@{k@i}) = i - 1 \]
Then distinctness of constructor terms is expressed by:
\[ t_ord \, x \neq t_ord \, y \Imp x \neq y. \]

\medskip Generally, the following structural induction rule is derived:
\[
\infer{P \, x}
{\begin{array}{lcl}
\Forall x@1\dots x@{k@1}.
  \List{P~x@{r@{11}}; \dots; P~x@{r@{1l@1}}} &
  \Imp  & P \, (C@1~x@1~\dots~x@{k@1}) \\
 & \vdots & \\
\Forall x@1\dots x@{k@m}.
  \List{P~x@{r@{m1}}; \dots; P~x@{r@{ml@m}}} &
  \Imp & P \, (C@m~x@1~\dots~x@{k@m})
\end{array}}
\]
where $\{r@{j1},\dots,r@{jl@j}\} = \{i \in \{1,\dots k@j\} ~\mid~ \tau@{ji}
= (\alpha@1,\dots,\alpha@n)t \} =: Rec@j$, i.e.\ the property $P$ can be
assumed for all arguments of the recursive type.

For convenience, the following additional constructions are predefined for
each datatype.

\subsubsection{The \sdx{case} construct}

The type comes with an \ML-like \texttt{case}-construct:
\[
\begin{array}{rrcl}
\mbox{\tt case}~e~\mbox{\tt of} & C@1~x@{11}~\dots~x@{1k@1} & \To & e@1 \\
 & & \vdots & \\
 \mid & C@m~x@{m1}~\dots~x@{mk@m} & \To & e@m
\end{array}
\]
where the $x@{ij}$ are either identifiers or nested tuple patterns as for the
product type above.
In contrast to \ML, {\em all} constructors must be present, their order is
fixed, and nested patterns are not supported (with the exception of tuples).
Violating this restriction results in strange error messages.

\subsubsection{The function \cdx{size}}\label{sec:HOL:size}

Theory \texttt{Arith} declares an overloaded function \texttt{size} of type
$\alpha\To nat$.  Each datatype defines a particular instance of \texttt{size}
according to the following scheme:
\[
size(C@j~x@{j1}~\dots~x@{jk@j}) = \left\{
\begin{array}{ll}
0 & \mbox{if $Rec@j = \emptyset$} \\
size(x@{r@{j1}}) + \cdots + size(x@{r@{jl@j}}) + 1 &
 \mbox{if $Rec@j = \{r@{j1},\dots,r@{jl@j}\}$}
\end{array} \right.
\]
where $Rec@j$ is defined above.  Viewing datatypes as generalized trees, the
size of a leaf is 0 and the size of a node is the sum of the sizes of its
subtrees $+1$.
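The scheme is easy to see on a concrete datatype.  Assuming a hypothetical
binary tree declaration {\tt datatype 'a tree = Tip | Node ('a tree) 'a ('a
tree)} (our example, not part of \HOL's library), the generated instance of
\texttt{size} corresponds to this Python sketch:

```python
# Model of the size instance for  datatype 'a tree = Tip | Node ('a tree) 'a ('a tree):
#   size(Tip) = 0  (no recursive arguments)
#   size(Node l x r) = size(l) + size(r) + 1
Tip = ('Tip',)

def Node(l, x, r):
    return ('Node', l, x, r)

def size(t):
    if t[0] == 'Tip':
        return 0
    _, l, _, r = t
    return size(l) + size(r) + 1
```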

\subsection{Defining datatypes}

A datatype is defined in a theory definition file using the keyword
{\tt datatype}.  The definition following this must conform to the
syntax of {\em typedecl} specified in Fig.~\ref{datatype-grammar} and
must obey the rules in the previous section.  As a result the theory is
extended with the new type, the constructors, and the theorems listed
in the previous section.

\begin{figure}
\begin{ttbox}
typedecl : typevarlist id '=' (cons + '|')
cons     : name (typ *) ( () | mixfix )
typ      : id | tid | ('(' typevarlist id ')')
\end{ttbox}
\caption{Syntax of datatype declarations}
\label{datatype-grammar}
\end{figure}

\begin{warn}
  Every theory containing a datatype declaration must be based, directly or
  indirectly, on the theory \texttt{Arith}, if necessary by including it
  explicitly as a parent.
\end{warn}

Most of the theorems about the datatype become part of the default simpset
and you never need to see them again because the simplifier applies them
automatically.  Only induction is invoked by hand:
\begin{description}
\item[\ttindexbold{induct_tac} {\tt"}$x${\tt"} $i$]
 applies structural induction on variable $x$ to subgoal $i$, provided the
 type of $x$ is a datatype or type \tydx{nat}.
\end{description}
In some cases, induction is overkill and a case distinction over all
constructors of the datatype suffices:
\begin{description}
\item[\ttindexbold{exhaust_tac} {\tt"}$u${\tt"} $i$]
 performs an exhaustive case analysis for the term $u$ whose type
 must be a datatype or type \tydx{nat}.  If the datatype has $n$ constructors
 $C@1$, \dots $C@n$, subgoal $i$ is replaced by $n$ new subgoals which
 contain the additional assumption $u = C@j~x@1~\dots~x@{k@j}$ for
 $j=1$, $\dots$,~$n$.
\end{description}
\begin{warn}
  Induction is only allowed on a free variable that should not occur among
  the premises of the subgoal.  Exhaustion works for arbitrary terms.
\end{warn}

For the technically minded, we give a more detailed description.
Reading the theory file produces an \ML\ structure which, in addition to the
usual components, contains a structure named $t$ for each datatype $t$
defined in the file.  Each structure $t$ contains the following components:
\begin{ttbox}
val distinct : thm list
val inject : thm list
val induct : thm
val cases : thm list
val simps : thm list
val induct_tac : string -> int -> tactic
\end{ttbox}
{\tt distinct}, \texttt{inject} and \texttt{induct} contain the theorems
described above.  For user convenience, \texttt{distinct} contains
inequalities in both directions.  The reduction rules of the {\tt
  case}-construct are in \texttt{cases}.  All theorems from {\tt
  distinct}, \texttt{inject} and \texttt{cases} are combined in \texttt{simps}.


\subsubsection{The datatype $\alpha~mylist$}

We want to define the type $\alpha~mylist$.\footnote{This is just an
  example; there is already a list type in \HOL, of course.} To do
this we have to build a new theory that contains the type definition.
We start from the basic \texttt{HOL} theory.
\begin{ttbox}
MyList = HOL +
  datatype 'a mylist = Nil | Cons 'a ('a mylist)
\end{ttbox}
After loading the theory (with \verb$use_thy "MyList"$), we can prove
$Cons~x~xs\neq xs$.  To ease the induction applied below, we state the
goal with $x$ quantified at the object-level.  This will be stripped
later using \ttindex{qed_spec_mp}.
\begin{ttbox}
goal MyList.thy "!x. Cons x xs ~= xs";
{\out Level 0}
{\out ! x. Cons x xs ~= xs}
{\out  1. ! x. Cons x xs ~= xs}
\end{ttbox}
This can be proved by the structural induction tactic:
\begin{ttbox}
by (induct_tac "xs" 1);
{\out Level 1}
{\out ! x. Cons x xs ~= xs}
{\out  1. ! x. Cons x Nil ~= Nil}
{\out  2. !!a mylist.}
{\out        ! x. Cons x mylist ~= mylist ==>}
{\out        ! x. Cons x (Cons a mylist) ~= Cons a mylist}
\end{ttbox}
The first subgoal can be proved using the simplifier.
Isabelle has already added the freeness properties of lists to the 
default simplification set.
\begin{ttbox}
by (Simp_tac 1);
{\out Level 2}
{\out ! x. Cons x xs ~= xs}
{\out  1. !!a mylist.}
{\out        ! x. Cons x mylist ~= mylist ==>}
{\out        ! x. Cons x (Cons a mylist) ~= Cons a mylist}
\end{ttbox}
Similarly, we prove the remaining goal.
\begin{ttbox}
by (Asm_simp_tac 1);
{\out Level 3}
{\out ! x. Cons x xs ~= xs}
{\out No subgoals!}
qed_spec_mp "not_Cons_self";
{\out val not_Cons_self = "Cons x xs ~= xs";}
\end{ttbox}
Because both subgoals could have been proved by \texttt{Asm_simp_tac},
we could have done that in one step:
\begin{ttbox}
by (ALLGOALS Asm_simp_tac);
\end{ttbox}

\subsubsection{The datatype $\alpha~mylist$ with mixfix syntax}

In this example we define the type $\alpha~mylist$ again but this time
we want to write \texttt{[]} for \texttt{Nil} and we want to use infix
notation \verb|#| for \texttt{Cons}.  To do this we simply add mixfix
annotations after the constructor declarations as follows:
\begin{ttbox}
MyList = HOL +
  datatype 'a mylist =
    Nil ("[]")  |
    Cons 'a ('a mylist)  (infixr "#" 70)
\end{ttbox}
Now the theorem in the previous example can be written \verb|x#xs ~= xs|.  The
proof is the same.

\subsubsection{A datatype for weekdays}

This example shows a datatype that consists of 7 constructors:
\begin{ttbox}
Days = Arith +
  datatype days = Mon | Tue | Wed | Thu | Fri | Sat | Sun
\end{ttbox}
Because there are more than 6 constructors, the theory must be based
on \texttt{Arith}.  Inequality is expressed via a function
\verb|days_ord|.  The theorem \verb|Mon ~= Tue| is not directly
contained among the distinctness theorems, but the simplifier can
prove it thanks to rewrite rules inherited from theory \texttt{Arith}:
\begin{ttbox}
goal Days.thy "Mon ~= Tue";
by (Simp_tac 1);
\end{ttbox}
You need not derive such inequalities explicitly: the simplifier will dispose
of them automatically.

\section{Recursive function definitions}\label{sec:HOL:recursive}
\index{recursive functions|see{recursion}}

Isabelle/HOL provides two means of declaring recursive functions.
\begin{itemize}
\item \textbf{Primitive recursion} is available only for datatypes, and it is
  highly restrictive.  Recursive calls are only allowed on the argument's
  immediate constituents.  On the other hand, it is the form of recursion most
  often wanted, and it is easy to use.
\item \textbf{Well-founded recursion} requires that you supply a well-founded
  relation that governs the recursion.  Recursive calls are only allowed if
  they make the argument decrease under the relation.  Complicated recursion
  forms, such as nested recursion, can be dealt with.  Termination can even be
  proved at a later time, though having unsolved termination conditions around
  can make work difficult.%
  \footnote{This facility is based on Konrad Slind's TFL
    package~\cite{slind-tfl}.  Thanks are due to Konrad for implementing TFL
    and assisting with its installation.}
\end{itemize}

A theory file may contain any number of recursive function definitions, which
may be intermixed with other declarations.  Every recursive function must
already have been declared as a constant.

These declarations do not assert new axioms.  Instead, they define the
function using a recursion operator.  Both HOL and ZF derive the theory of
well-founded recursion from first principles~\cite{paulson-set-II}.  Primitive
recursion over some datatype relies on the recursion operator provided by the
datatype package.  With either form of function definition, Isabelle proves
the desired recursion equations as theorems.
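The role of the recursion operator can be made concrete with a small Python sketch.  The names \texttt{mylist_rec} and \texttt{app}, and the encoding of $\alpha~mylist$ as Python lists, are inventions for this illustration only:

```python
# Hypothetical sketch of a list recursion operator, in the style of the
# operator that a datatype package provides.
def mylist_rec(xs, nil_case, cons_case):
    # Primitive recursion: cons_case sees only the head, the tail, and
    # the recursive result on the tail.
    if not xs:
        return nil_case
    return cons_case(xs[0], xs[1:], mylist_rec(xs[1:], nil_case, cons_case))

# A function such as append is then a definition in terms of the operator,
# not a new axiom; its recursion equations follow by computation.
def app(xs, ys):
    return mylist_rec(xs, ys, lambda x, _tail, rec: [x] + rec)
```

Defining functions this way cannot introduce inconsistency, since every equation is a consequence of the operator's definition.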

\subsection{Primitive recursive functions}

Datatypes come with a uniform way of defining functions, {\bf
  primitive recursion}.  In principle, one can define
primitive recursive functions by asserting their reduction rules as
new axioms.  Here is an example:
\begin{ttbox}
Append = MyList +
consts app :: ['a mylist, 'a mylist] => 'a mylist
rules
   app_Nil   "app [] ys = ys"
   app_Cons  "app (x#xs) ys = x#app xs ys"
\end{ttbox}
But asserting axioms brings the danger of accidentally asserting an
inconsistency, as in \verb$app [] ys = us$.

The \ttindex{primrec} declaration is a safe means of defining primitive
recursive functions on datatypes:
\begin{ttbox}
Append = MyList +
consts app :: ['a mylist, 'a mylist] => 'a mylist
primrec app MyList.mylist
   "app [] ys = ys"
   "app (x#xs) ys = x#app xs ys"
\end{ttbox}
Isabelle will now check that the two rules do indeed form a primitive
recursive definition, preserving consistency.  For example
\begin{ttbox}
primrec app MyList.mylist
    "app [] ys = us"
\end{ttbox}
is rejected with an error message \texttt{Extra variables on rhs}.


The general form of a primitive recursive definition is
\begin{ttbox}
primrec {\it function} {\it type}
    {\it reduction rules}
\end{ttbox}
where
\begin{itemize}
\item \textit{function} is the name of the function, either as an \textit{id}
  or a \textit{string}.
\item \textit{type} is the name of the datatype, either as an \textit{id} or
  in the long form \texttt{$T$.$t$} ($T$ is the name of the theory
  where the datatype has been declared, $t$ the name of the datatype).
  The long form is required if the \texttt{datatype} and the {\tt
    primrec} sections are in different theories.
\item \textit{reduction rules} specify one or more equations of the form
  \[ f \, x@1 \, \dots \, x@m \, (C \, y@1 \, \dots \, y@k) \, z@1 \,
  \dots \, z@n = r \] such that $C$ is a constructor of the datatype,
  $r$ contains only the free variables on the left-hand side, and all
  recursive calls in $r$ are of the form $f \, \dots \, y@i \, \dots$
  for some $i$.  There must be exactly one reduction rule for each
  constructor.  The order is immaterial.  Also note that all reduction
  rules are added to the default simpset!
  If you would like to refer to some rule by name, then you must prefix
  \emph{each} rule with an identifier.  These identifiers, like those in the
  \texttt{rules} section of a theory, will be visible at the \ML\ level.
\end{itemize}

The primitive recursive function can have infix or mixfix syntax:
\begin{ttbox}
Append = MyList +
consts "@"  :: ['a mylist, 'a mylist] => 'a mylist  (infixr 60)
primrec "op @" MyList.mylist
   "[] @ ys = ys"
   "(x#xs) @ ys = x#(xs @ ys)"
\end{ttbox}

The reduction rules for {\tt\at} become part of the default simpset, which
leads to short proofs:
\begin{ttbox}
goal Append.thy "(xs @ ys) @ zs = xs @ (ys @ zs)";
by (induct\_tac "xs" 1);
by (ALLGOALS Asm\_simp\_tac);
\end{ttbox}


\subsection{Well-founded recursive functions}
\index{primitive recursion|(}

Well-founded recursion can express any function whose termination can be
proved by showing that each recursive call makes the argument smaller in a
suitable sense.  The recursion need not involve datatypes and there are few
syntactic restrictions.  Nested recursion and pattern-matching are allowed.

Here is a simple example, the Fibonacci function.  The first line declares
\texttt{fib} to be a constant.  The well-founded relation is simply~$<$ (on
the natural numbers).  Pattern-matching is used here: \texttt{1} is a
macro for \texttt{Suc~0}.
\begin{ttbox}
consts fib  :: "nat => nat"
recdef fib "less_than"
    "fib 0 = 0"
    "fib 1 = 1"
    "fib (Suc(Suc x)) = (fib x + fib (Suc x))"
\end{ttbox}
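As a purely illustrative rendering in Python (not Isabelle syntax), the recursion pattern of \texttt{fib} can be written with the termination argument recorded as assertions:

```python
def fib(n):
    # fib 0 and fib 1 are the base cases of the pattern-match.
    if n == 0:
        return 0
    if n == 1:
        return 1
    x = n - 2                    # here n = Suc (Suc x)
    assert x < n and x + 1 < n   # both recursive calls decrease under less_than
    return fib(x) + fib(x + 1)
```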

The well-founded relation defines a notion of ``smaller'' for the function's
argument type.  The relation $\prec$ is \textbf{well-founded} provided it
admits no infinitely decreasing chains
\[ \cdots\prec x@n\prec\cdots\prec x@1. \]
If the function's argument has type~$\tau$, then $\prec$ should be a relation
over~$\tau$: it must have type $(\tau\times\tau)set$.

Proving well-foundedness can be tricky, so {\HOL} provides a collection of
operators for building well-founded relations.  The package recognizes these
operators and automatically proves that the constructed relation is
well-founded.  Here are those operators, in order of importance:
\begin{itemize}
\item \texttt{less_than} is ``less than'' on the natural numbers.
  (It has type $(nat\times nat)set$, while $<$ has type $[nat,nat]\To bool$.)
\item $\mathop{\mathtt{measure}} f$, where $f$ has type $\tau\To nat$, is the
  relation~$\prec$ on type~$\tau$ such that $x\prec y$ iff $f(x)<f(y)$.
  Typically, $f$ takes the recursive function's arguments (as a tuple) and
  returns a result expressed in terms of the function \texttt{size}.  It is
  called a \textbf{measure function}.  Recall that \texttt{size} is overloaded
  and is defined on all datatypes (see \S\ref{sec:HOL:size}).
\item $\mathop{\mathtt{inv_image}} f\;R$ is a generalization of
  \texttt{measure}.  It specifies a relation such that $x\prec y$ iff $f(x)$
  is less than $f(y)$ according to~$R$, which must itself be a well-founded
  relation.
\item $R@1\texttt{**}R@2$ is the lexicographic product of two relations.  It
  is a relation on pairs and satisfies $(x@1,x@2)\prec(y@1,y@2)$ iff $x@1$
  is less than $y@1$ according to~$R@1$ or $x@1=y@1$ and $x@2$
  is less than $y@2$ according to~$R@2$.
\item \texttt{finite_psubset} is the proper subset relation on finite sets.
\end{itemize}
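These builders can be sketched in Python as predicates, where \texttt{R(x, y)} means ``$x$ is smaller than $y$''; the encoding as boolean functions is an invention of this illustration, not the HOL representation:

```python
def measure(f):
    # x is smaller than y iff f maps x strictly below y.
    return lambda x, y: f(x) < f(y)

def inv_image(f, R):
    # Compare x and y by comparing their images under f according to R.
    return lambda x, y: R(f(x), f(y))

def lex_prod(R1, R2):
    # Lexicographic product (HOL's R1 ** R2) on pairs.
    return lambda x, y: R1(x[0], y[0]) or (x[0] == y[0] and R2(x[1], y[1]))
```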

We can use \texttt{measure} to declare Euclid's algorithm for the greatest
common divisor.  The measure function, $\lambda(m,n).n$, specifies that the
recursion terminates because argument~$n$ decreases.
\begin{ttbox}
recdef gcd "measure ((\%(m,n).n) ::nat*nat=>nat)"
    "gcd (m, n) = (if n=0 then m else gcd(n, m mod n))"
\end{ttbox}
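In Python the same recursion looks as follows; the assertion records the termination condition that the measure $\lambda(m,n).n$ decreases at the recursive call (a sketch for illustration, not Isabelle syntax):

```python
def gcd(m, n):
    if n == 0:
        return m
    # Termination condition: under the measure %(m,n).n, the argument of the
    # recursive call is smaller, since m mod n < n whenever n != 0.
    assert m % n < n
    return gcd(n, m % n)
```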

The general form of a well-founded recursive definition is
\begin{ttbox}
recdef {\it function} {\it rel}
    congs   {\it congruence rules}      {\bf(optional)}
    simpset {\it simplification set}      {\bf(optional)}
   {\it reduction rules}
\end{ttbox}
where
\begin{itemize}
\item \textit{function} is the name of the function, either as an \textit{id}
  or a \textit{string}.  
\item \textit{rel} is a {\HOL} expression for the well-founded termination
  relation.
\item \textit{congruence rules} are required only in highly exceptional
  circumstances.
\item the \textit{simplification set} is used to prove that the supplied
  relation is well-founded.  It is also used to prove the \textbf{termination
    conditions}: assertions that arguments of recursive calls decrease under
  \textit{rel}.  By default, simplification uses \texttt{!simpset}, which
  is sufficient to prove well-foundedness for the built-in relations listed
  above.
\item \textit{reduction rules} specify one or more recursion equations.  Each
  left-hand side must have the form $f\,t$, where $f$ is the function and $t$
  is a tuple of distinct variables.  If more than one equation is present then
  $f$ is defined by pattern-matching on components of its argument whose type
  is a \texttt{datatype}.  The patterns must be exhaustive and
  non-overlapping.
  Unlike with \texttt{primrec}, the reduction rules are not added to the
  default simpset, and individual rules may not be labelled with identifiers.
  However, the identifier $f$\texttt{.rules} is visible at the \ML\ level
  as a list of theorems.
\end{itemize}

With the definition of \texttt{gcd} shown above, Isabelle is unable to prove
one termination condition.  It remains as a precondition of the recursion
equations:
\begin{ttbox}
{\out ["! m n. n ~= 0 --> m mod n < n}
{\out   ==> gcd (?m, ?n) = (if ?n = 0 then ?m else gcd (?n, ?m mod ?n))"] }
{\out : thm list}
\end{ttbox}
The theory \texttt{Primes} (on the examples directory \texttt{HOL/ex})
illustrates how to prove termination conditions afterwards.  The function
\texttt{Tfl.tgoalw} is like the standard function \texttt{goalw}, which sets
up a goal to prove, but its argument should be the identifier
$f$\texttt{.rules} and its effect is to set up a proof of the termination
conditions:
\begin{ttbox}
Tfl.tgoalw thy [] gcd.rules;
{\out Level 0}
{\out ! m n. n ~= 0 --> m mod n < n}
{\out  1. ! m n. n ~= 0 --> m mod n < n}
\end{ttbox}
This subgoal has a one-step proof using \texttt{simp_tac}.  Once the theorem
is proved, it can be used to eliminate the termination conditions from
elements of \texttt{gcd.rules}.  Theory \texttt{Unify} on directory
\texttt{HOL/Subst} is a much more complicated example of this process, where
the termination conditions can only be proved by complicated reasoning
involving the recursive function itself.

Isabelle can prove the \texttt{gcd} function's termination condition
automatically if supplied with the right simpset.
\begin{ttbox}
recdef gcd "measure ((\%(m,n).n) ::nat*nat=>nat)"
  simpset "!simpset addsimps [mod_less_divisor, zero_less_eq]"
    "gcd (m, n) = (if n=0 then m else gcd(n, m mod n))"
\end{ttbox}

A \texttt{recdef} definition also returns an induction rule specialized for
the recursive function.  For the \texttt{gcd} function above, the induction
rule is
\begin{ttbox}
{\out "(!!m n. n ~= 0 --> ?P n (m mod n) ==> ?P m n) ==> ?P ?u ?v" : thm}
\end{ttbox}
This rule should be used to reason inductively about the \texttt{gcd}
function.  It usually makes the induction hypothesis available at all
recursive calls, leading to very direct proofs.  If any termination
conditions remain unproved, they will be additional premises of this rule.


\section{Inductive and coinductive definitions}

An {\bf inductive definition} specifies the least set~$R$ closed under given
rules.  (Applying a rule to elements of~$R$ yields a result within~$R$.)  For
example, a structural operational semantics is an inductive definition of an
evaluation relation.  Dually, a {\bf coinductive definition} specifies the
greatest set~$R$ consistent with given rules.  (Every element of~$R$ can be
seen as arising by applying a rule to elements of~$R$.)  An important example
is using bisimulation relations to formalize equivalence of processes and
infinite data structures.
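The least-fixedpoint reading of an inductive definition can be sketched in Python by iterating the rules until nothing new is added.  The finite universe (needed for executability) and the rule set for even numbers are inventions of this illustration:

```python
def lfp(step, universe):
    # Iterate a monotone rule-application function until a fixed point.
    s = set()
    while True:
        new = step(s) & universe   # apply all rules, truncated to the universe
        if new <= s:
            return s
        s |= new

def even_step(s):
    # Rules: 0 is even; if n is even then n + 2 is even.
    return {0} | {n + 2 for n in s}

evens = lfp(even_step, set(range(20)))
```

The least set closed under the rules is exactly what the iteration converges to; a coinductive definition corresponds dually to a greatest fixed point.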

A theory file may contain any number of inductive and coinductive
definitions.  They may be intermixed with other declarations; in
particular, the (co)inductive sets {\bf must} be declared separately as
constants, and may have mixfix syntax or be subject to syntax translations.

Each (co)inductive definition adds definitions to the theory and also
proves some theorems.  Each definition creates an \ML\ structure, which is a
substructure of the main theory structure.

This package is derived from the \ZF\ one, described in a separate
paper\footnote{It appeared in CADE~\cite{paulson-CADE}; a longer version is
  distributed with Isabelle.}  %
which you should refer to in case of difficulties.  The package is simpler
than \ZF's thanks to \HOL's automatic type-checking.  The type of the
(co)inductive sets determines the domain of the fixedpoint definition, and the
package does not use inference rules for type-checking.

\subsection{The result structure}
Many of the result structure's components have been discussed in the paper;
others are self-explanatory.
\begin{description}
\item[\tt thy] is the new theory containing the recursive sets.

\item[\tt defs] is the list of definitions of the recursive sets.

\item[\tt mono] is a monotonicity theorem for the fixedpoint operator.

\item[\tt unfold] is a fixedpoint equation for the recursive set (the union of
the recursive sets, in the case of mutual recursion).

\item[\tt intrs] is the list of introduction rules, now proved as theorems, for
the recursive sets.  The rules are also available individually, using the
names given them in the theory file. 

\item[\tt elim] is the elimination rule.

\item[\tt mk_cases] is a function to create simplified instances of {\tt
elim}, using freeness reasoning on some underlying datatype.
\end{description}

For an inductive definition, the result structure contains two induction
rules, \texttt{induct} and \verb|mutual_induct|.  (To save storage, the latter
rule is just \texttt{True} unless more than one set is being defined.)  For a
coinductive definition, it contains the rule \verb|coinduct|.

Figure~\ref{def-result-fig} summarizes the two result signatures,
specifying the types of all these components.

\begin{figure}
\begin{ttbox}
val thy          : theory
val defs         : thm list
val mono         : thm
val unfold       : thm
val intrs        : thm list
val elim         : thm
val mk_cases     : thm list -> string -> thm
{\it(Inductive definitions only)} 
val induct       : thm
val mutual_induct: thm
{\it(Coinductive definitions only)}
val coinduct    : thm
\end{ttbox}
\caption{The result of a (co)inductive definition} \label{def-result-fig}
\end{figure}

\subsection{The syntax of a (co)inductive definition}
An inductive definition has the form
\begin{ttbox}
inductive    \textit{inductive sets}
  intrs      \textit{introduction rules}
  monos      \textit{monotonicity theorems}
  con_defs   \textit{constructor definitions}
\end{ttbox}
A coinductive definition is identical, except that it starts with the keyword
{\tt coinductive}.  

The \texttt{monos} and \texttt{con_defs} sections are optional.  If present,
each is specified as a string, which must be a valid \ML{} expression
of type \texttt{thm list}.  It is simply inserted into the \ML{} file
generated from the theory definition; if it is ill-formed, it will
trigger \ML{} error messages.  You can then inspect the file in your
directory.

\begin{itemize}
\item The \textit{inductive sets} are specified by one or more strings.

\item The \textit{introduction rules} specify one or more introduction rules in
  the form \textit{ident\/}~\textit{string}, where the identifier gives the name of
  the rule in the result structure.

\item The \textit{monotonicity theorems} are required for each operator
  applied to a recursive set in the introduction rules.  There {\bf must}
  be a theorem of the form $A\subseteq B\Imp M(A)\subseteq M(B)$, for each
  premise $t\in M(R@i)$ in an introduction rule!

\item The \textit{constructor definitions} contain definitions of constants
  appearing in the introduction rules.  In most cases it can be omitted.
\end{itemize}

The package has a few notable restrictions:
\begin{itemize}
\item The theory must separately declare the recursive sets as
  constants.

\item The names of the recursive sets must be alphanumeric
  identifiers.

\item Side-conditions must not be conjunctions.  However, an introduction rule
may contain any number of side-conditions.

\item Side-conditions of the form $x=t$, where the variable~$x$ does not
  occur in~$t$, will be substituted through the rule \verb|mutual_induct|.
\end{itemize}

\subsection{Example of an inductive definition}
Two declarations, included in a theory file, define the finite powerset
operator.  First we declare the constant~{\tt Fin}.  Then we declare it
inductively, with two introduction rules:
\begin{ttbox}
consts Fin :: 'a set => 'a set set
inductive "Fin A"
  intrs
    emptyI  "{\ttlbrace}{\ttrbrace} : Fin A"
    insertI "[| a: A;  b: Fin A |] ==> insert a b : Fin A"
\end{ttbox}
The resulting theory structure contains a substructure, called~{\tt Fin}.
It contains the \texttt{Fin}$~A$ introduction rules as the list \texttt{Fin.intrs},
and also individually as \texttt{Fin.emptyI} and \texttt{Fin.insertI}.  The induction
rule is \texttt{Fin.induct}.
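Purely as an illustration of what the two rules generate (not of how the package works internally), one can compute \texttt{Fin}~$A$ for a finite~$A$ in Python by iterating the introduction rules to a fixed point:

```python
def fin(A):
    fin_A = {frozenset()}                          # rule emptyI
    while True:
        # Rule insertI: from a : A and b : Fin A, derive insert a b : Fin A.
        new = {b | {a} for a in A for b in fin_A}
        if new <= fin_A:
            return fin_A
        fin_A |= new
```

For a finite $A$ the iteration yields exactly the set of all finite subsets of~$A$, i.e.\ its full powerset.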

For another example, here is a theory file defining the accessible
part of a relation.  The main thing to note is the use of~{\tt Pow} in
the sole introduction rule, and the corresponding mention of the rule
\verb|Pow_mono| in the \texttt{monos} list.  The paper
\cite{paulson-CADE} discusses a \ZF\ version of this example in more
detail.
\begin{ttbox}
Acc = WF + 
consts pred :: "['b, ('a * 'b)set] => 'a set"   (*Set of predecessors*)
       acc  :: "('a * 'a)set => 'a set"         (*Accessible part*)
defs   pred_def  "pred x r == {y. (y,x):r}"
inductive "acc r"
  intrs
     pred "pred a r: Pow(acc r) ==> a: acc r"
  monos   "[Pow_mono]"
\end{ttbox}
The \HOL{} distribution contains many other inductive definitions.
Simple examples are collected on subdirectory \texttt{Induct}.  The
theory \texttt{HOL/Induct/LList.thy} contains coinductive definitions.
Larger examples may be found on other subdirectories, such as {\tt
  IMP}, \texttt{Lambda} and \texttt{Auth}.

\index{*coinductive|)} \index{*inductive|)}

\section{The examples directories}

Directory \texttt{HOL/Auth} contains theories for proving the correctness of 
cryptographic protocols.  The approach is based upon operational 
semantics~\cite{paulson-security} rather than the more usual belief logics.
On the same directory are proofs for some standard examples, such as the 
Needham-Schroeder public-key authentication protocol~\cite{paulson-ns} 
and the Otway-Rees protocol.

Directory \texttt{HOL/IMP} contains a formalization of various denotational,
operational and axiomatic semantics of a simple while-language, the necessary
equivalence proofs, soundness and completeness of the Hoare rules with respect
to the 
denotational semantics, and soundness and completeness of a verification
condition generator.  Much of the development is taken from
Winskel~\cite{winskel93}.  For details see~\cite{nipkow-IMP}.

Directory \texttt{HOL/Hoare} contains a user-friendly surface syntax for Hoare
logic, including a tactic for generating verification conditions.

Directory \texttt{HOL/MiniML} contains a formalization of the type system of the
core functional language Mini-ML and a correctness proof for its type
inference algorithm $\cal W$~\cite{milner78,nazareth-nipkow}.

Directory \texttt{HOL/Lambda} contains a formalization of untyped
$\lambda$-calculus in de~Bruijn notation and Church-Rosser proofs for $\beta$
and $\eta$ reduction~\cite{Nipkow-CR}.

Directory \texttt{HOL/Subst} contains Martin Coen's mechanization of a theory of
substitutions and unifiers.  It is based on Paulson's previous
mechanisation in {\LCF}~\cite{paulson85} of Manna and Waldinger's
theory~\cite{mw81}.  It demonstrates a complicated use of \texttt{recdef},
with nested recursion.

Directory \texttt{HOL/Induct} presents simple examples of (co)inductive
definitions:
\begin{itemize}
\item Theory \texttt{PropLog} proves the soundness and completeness of
  classical propositional logic, given a truth table semantics.  The only
  connective is $\imp$.  A Hilbert-style axiom system is specified, and its
  set of theorems defined inductively.  A similar proof in \ZF{} is
  described elsewhere~\cite{paulson-set-II}.

\item Theory \texttt{Term} develops an experimental recursive type definition;
  the recursion goes through the type constructor~\tydx{list}.

\item Theory \texttt{Simult} constructs mutually recursive sets of trees and
  forests, including induction and recursion rules.

\item The definition of lazy lists demonstrates methods for handling
  infinite data structures and coinduction in higher-order
  logic.\footnote{To be precise, these lists are \emph{potentially infinite} rather
  than lazy.  Lazy implies a particular operational semantics.}
  Theory \thydx{LList} defines an operator for
  corecursion on lazy lists, which is used to define a few simple functions
  such as map and append.   A coinduction principle is defined
  for proving equations on lazy lists.

\item Theory \thydx{LFilter} defines the filter functional for lazy lists.
  This functional is notoriously difficult to define because finding the next
  element meeting the predicate requires possibly unlimited search.  It is not
  computable, but can be expressed using a combination of induction and
  corecursion.

\item Theory \thydx{Exp} illustrates the use of iterated inductive definitions
  to express a programming language semantics that appears to require mutual
  induction.  Iterated induction allows greater modularity.
\end{itemize}

Directory \texttt{HOL/ex} contains other examples and experimental proofs in
\HOL:
\begin{itemize}
\item Theory \texttt{Recdef} presents many examples of using \texttt{recdef}
  to define recursive functions.  Another example is \texttt{Fib}, which
  defines the Fibonacci function.

\item Theory \texttt{Primes} defines the Greatest Common Divisor of two
  natural numbers and proves a key lemma of the Fundamental Theorem of
  Arithmetic: if $p$ is prime and $p$ divides $m\times n$ then $p$ divides~$m$
  or $p$ divides~$n$.

\item Theory \texttt{Primrec} develops some computation theory.  It
  inductively defines the set of primitive recursive functions and presents a
  proof that Ackermann's function is not primitive recursive.

\item File \texttt{cla.ML} demonstrates the classical reasoner on over sixty
  predicate calculus theorems, ranging from simple tautologies to
  moderately difficult problems involving equality and quantifiers.

\item File \texttt{meson.ML} contains an experimental implementation of the {\sc
    meson} proof procedure, inspired by Plaisted~\cite{plaisted90}.  It is
  much more powerful than Isabelle's classical reasoner.  But it is less
  useful in practice because it works only for pure logic; it does not
  accept derived rules for the set theory primitives, for example.

\item File \texttt{mesontest.ML} contains test data for the {\sc meson} proof
  procedure.  These are mostly taken from Pelletier \cite{pelletier86}.

\item File \texttt{set.ML} proves Cantor's Theorem, which is presented in
  \S\ref{sec:hol-cantor} below, and the Schr\"oder-Bernstein Theorem.

\item Theory \texttt{MT} contains Jacob Frost's formalization~\cite{frost93} of
  Milner and Tofte's coinduction example~\cite{milner-coind}.  This
  substantial proof concerns the soundness of a type system for a simple
  functional language.  The semantics of recursion is given by a cyclic
  environment, which makes a coinductive argument appropriate.
\end{itemize}

\section{Example: Cantor's Theorem}\label{sec:hol-cantor}
Cantor's Theorem states that every set has more subsets than it has
elements.  It has become a favourite example in higher-order logic since
it is so easily expressed:
\[  \forall f::\alpha \To \alpha \To bool. \exists S::\alpha\To bool.
    \forall x::\alpha. f~x \not= S
\]
Viewing types as sets, $\alpha\To bool$ represents the powerset
of~$\alpha$.  This version states that for every function from $\alpha$ to
its powerset, some subset is outside its range.  

The Isabelle proof uses \HOL's set theory, with the type $\alpha\,set$ and
the operator \cdx{range}.  The set~$S$ is given as an unknown instead of a
quantified variable so that we may inspect the subset found by the proof.
\begin{ttbox}
goal Set.thy "?S ~: range\thinspace(f :: 'a=>'a set)";
{\out Level 0}
{\out ?S ~: range f}
{\out  1. ?S ~: range f}
\end{ttbox}
The first two steps are routine.  The rule \tdx{rangeE} replaces
$\Var{S}\in \texttt{range} \, f$ by $\Var{S}=f~x$ for some~$x$.
\begin{ttbox}
by (resolve_tac [notI] 1);
{\out Level 1}
{\out ?S ~: range f}
{\out  1. ?S : range f ==> False}
by (eresolve_tac [rangeE] 1);
{\out Level 2}
{\out ?S ~: range f}
{\out  1. !!x. ?S = f x ==> False}
\end{ttbox}
Next, we apply \tdx{equalityCE}, reasoning that since $\Var{S}=f~x$,
we have $\Var{c}\in \Var{S}$ if and only if $\Var{c}\in f~x$ for
any~$\Var{c}$.
\begin{ttbox}
by (eresolve_tac [equalityCE] 1);
{\out Level 3}
{\out ?S ~: range f}
{\out  1. !!x. [| ?c3 x : ?S; ?c3 x : f x |] ==> False}
{\out  2. !!x. [| ?c3 x ~: ?S; ?c3 x ~: f x |] ==> False}
\end{ttbox}
Now we use a bit of creativity.  Suppose that~$\Var{S}$ has the form of a
comprehension.  Then $\Var{c}\in\{x.\Var{P}~x\}$ implies
$\Var{P}~\Var{c}$.   Destruct-resolution using \tdx{CollectD}
instantiates~$\Var{S}$ and creates the new assumption.
\begin{ttbox}
by (dresolve_tac [CollectD] 1);
{\out Level 4}
{\out {\ttlbrace}x. ?P7 x{\ttrbrace} ~: range f}
{\out  1. !!x. [| ?c3 x : f x; ?P7(?c3 x) |] ==> False}
{\out  2. !!x. [| ?c3 x ~: {\ttlbrace}x. ?P7 x{\ttrbrace}; ?c3 x ~: f x |] ==> False}
\end{ttbox}
Forcing a contradiction between the two assumptions of subgoal~1
completes the instantiation of~$S$.  It is now the set $\{x. x\not\in
f~x\}$, which is the standard diagonal construction.
\begin{ttbox}
by (contr_tac 1);
{\out Level 5}
{\out {\ttlbrace}x. x ~: f x{\ttrbrace} ~: range f}
{\out  1. !!x. [| x ~: {\ttlbrace}x. x ~: f x{\ttrbrace}; x ~: f x |] ==> False}
\end{ttbox}
The rest should be easy.  To apply \tdx{CollectI} to the negated
assumption, we employ \ttindex{swap_res_tac}:
\begin{ttbox}
by (swap_res_tac [CollectI] 1);
{\out Level 6}
{\out {\ttlbrace}x. x ~: f x{\ttrbrace} ~: range f}
{\out  1. !!x. [| x ~: f x; ~ False |] ==> x ~: f x}
by (assume_tac 1);
{\out Level 7}
{\out {\ttlbrace}x. x ~: f x{\ttrbrace} ~: range f}
{\out No subgoals!}
\end{ttbox}
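The diagonal argument itself is easy to check computationally.  Here is a Python sketch on a finite universe; the function \texttt{diagonal} is an invention of this illustration:

```python
def diagonal(universe, f):
    # The diagonal set S = {x. x ~: f x}.
    S = {x for x in universe if x not in f(x)}
    # S disagrees with every f x at the point x itself,
    # so S is not in the range of f.
    for x in universe:
        assert S != f(x)
    return S
```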
How much creativity is required?  As it happens, Isabelle can prove this
theorem automatically.  The default classical set {\tt!claset} contains rules
for most of the constructs of \HOL's set theory.  We must augment it with
\tdx{equalityCE} to break up set equalities, and then apply best-first
search.  Depth-first search would diverge, but best-first search
successfully navigates through the large search space.
\begin{ttbox}
choplev 0;
{\out Level 0}
{\out ?S ~: range f}
{\out  1. ?S ~: range f}
by (best_tac (!claset addSEs [equalityCE]) 1);
{\out Level 1}
{\out {\ttlbrace}x. x ~: f x{\ttrbrace} ~: range f}
{\out No subgoals!}
\end{ttbox}
If you run this example interactively, make sure your current theory contains
theory \texttt{Set}, for example by executing
\ttindex{set_current_thy}~{\tt"Set"}.  Otherwise the default claset may not
contain the rules for set theory.
\index{higher-order logic|)}