converted HOL.tex to CHOL.tex; replaced HOL.tex by CHOL.tex
author clasohm
Tue, 09 May 1995 10:43:19 +0200
changeset 1113 dd7284573601
parent 1112 737a1a0df754
child 1114 c8dfb56a7e95
converted HOL.tex to CHOL.tex; replaced HOL.tex by CHOL.tex
doc-src/Logics/CHOL.tex
doc-src/Logics/HOL.tex
doc-src/Logics/logics.tex
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/Logics/CHOL.tex	Tue May 09 10:43:19 1995 +0200
@@ -0,0 +1,1898 @@
+%% $Id$
+\chapter{Higher-Order Logic}
+\index{higher-order logic|(}
+\index{CHOL system@{\sc chol} system}
+
+The theory~\thydx{CHOL} implements higher-order logic with curried
+function application.  It is based on Gordon's~{\sc hol}
+system~\cite{mgordon-hol}, which itself is based on Church's original
+paper~\cite{church40}.  Andrews's book~\cite{andrews86} is a full
+description of higher-order logic.  Experience with the {\sc hol}
+system has demonstrated that higher-order logic is useful for hardware
+verification; beyond this, it is widely applicable in many areas of
+mathematics.  It is weaker than {\ZF} set theory but for most
+applications this does not matter.  If you prefer {\ML} to Lisp, you
+will probably prefer \CHOL\ to~{\ZF}.
+
+\CHOL\ is a modified version of Isabelle's \HOL\ and uses curried function
+application.  Therefore the expression $f(a,b)$, which in \HOL\ means
+``$f$ applied to the two arguments $a$ and $b$'', means ``$f$ applied to
+the pair $(a,b)$'' in \CHOL.  Note that ordered pairs in \HOL\ are written
+as $<a,b>$, while in \CHOL\ the syntax $(a,b)$ is used.  Previous
+releases of Isabelle also included a different version of~\HOL, with
+explicit type inference rules~\cite{paulson-COLOG}.  This version no
+longer exists, but \thydx{ZF} supports a similar style of reasoning.
+
+\CHOL\ has a distinct feel, compared with {\ZF} and {\CTT}.  It
+identifies object-level types with meta-level types, taking advantage of
+Isabelle's built-in type checker.  It identifies object-level functions
+with meta-level functions, so it uses Isabelle's operations for abstraction
+and application.  There is no `apply' operator: function applications are
+written simply as~$f~a$ rather than $f{\tt`}a$.
+
+These identifications allow Isabelle to support \CHOL\ particularly nicely,
+but they also mean that \CHOL\ requires more sophistication from the user
+--- in particular, an understanding of Isabelle's type system.  Beginners
+should work with {\tt show_types} set to {\tt true}.  Gain experience by
+working in first-order logic before attempting to use higher-order logic.
+This chapter assumes familiarity with~{\FOL{}}.
+
+
+\begin{figure} 
+\begin{center}
+\begin{tabular}{rrr} 
+  \it name      &\it meta-type  & \it description \\ 
+  \cdx{Trueprop}& $bool\To prop$                & coercion to $prop$\\
+  \cdx{not}     & $bool\To bool$                & negation ($\neg$) \\
+  \cdx{True}    & $bool$                        & tautology ($\top$) \\
+  \cdx{False}   & $bool$                        & absurdity ($\bot$) \\
+  \cdx{If}      & $[bool,\alpha,\alpha]\To\alpha::term$ & conditional \\
+  \cdx{Inv}     & $(\alpha\To\beta)\To(\beta\To\alpha)$ & function inversion\\
+  \cdx{Let}     & $[\alpha,\alpha\To\beta]\To\beta$ & let binder
+\end{tabular}
+\end{center}
+\subcaption{Constants}
+
+\begin{center}
+\index{"@@{\tt\at} symbol}
+\index{*"! symbol}\index{*"? symbol}
+\index{*"?"! symbol}\index{*"E"X"! symbol}
+\begin{tabular}{llrrr} 
+  \it symbol &\it name     &\it meta-type & \it description \\
+  \tt\at & \cdx{Eps}  & $(\alpha\To bool)\To\alpha::term$ & 
+        Hilbert description ($\epsilon$) \\
+  {\tt!~} or \sdx{ALL}  & \cdx{All}  & $(\alpha::term\To bool)\To bool$ & 
+        universal quantifier ($\forall$) \\
+  {\tt?~} or \sdx{EX}   & \cdx{Ex}   & $(\alpha::term\To bool)\To bool$ & 
+        existential quantifier ($\exists$) \\
+  {\tt?!} or {\tt EX!}  & \cdx{Ex1}  & $(\alpha::term\To bool)\To bool$ & 
+        unique existence ($\exists!$)
+\end{tabular}
+\end{center}
+\subcaption{Binders} 
+
+\begin{center}
+\index{*"= symbol}
+\index{&@{\tt\&} symbol}
+\index{*"| symbol}
+\index{*"-"-"> symbol}
+\begin{tabular}{rrrr} 
+  \it symbol    & \it meta-type & \it priority & \it description \\ 
+  \sdx{o}       & $[\beta\To\gamma,\alpha\To\beta]\To (\alpha\To\gamma)$ & 
+        Right 50 & composition ($\circ$) \\
+  \tt =         & $[\alpha::term,\alpha]\To bool$ & Left 50 & equality ($=$) \\
+  \tt <         & $[\alpha::ord,\alpha]\To bool$ & Left 50 & less than ($<$) \\
+  \tt <=        & $[\alpha::ord,\alpha]\To bool$ & Left 50 & 
+                less than or equals ($\leq$)\\
+  \tt \&        & $[bool,bool]\To bool$ & Right 35 & conjunction ($\conj$) \\
+  \tt |         & $[bool,bool]\To bool$ & Right 30 & disjunction ($\disj$) \\
+  \tt -->       & $[bool,bool]\To bool$ & Right 25 & implication ($\imp$)
+\end{tabular}
+\end{center}
+\subcaption{Infixes}
+\caption{Syntax of {\tt CHOL}} \label{chol-constants}
+\end{figure}
+
+
+\begin{figure}
+\index{*let symbol}
+\index{*in symbol}
+\dquotes
+\[\begin{array}{rclcl}
+    term & = & \hbox{expression of class~$term$} \\
+         & | & "\at~" id~id^* " . " formula \\
+         & | & 
+    \multicolumn{3}{l}{"let"~id~"="~term";"\dots";"~id~"="~term~"in"~term} \\
+         & | & 
+    \multicolumn{3}{l}{"if"~formula~"then"~term~"else"~term} \\[2ex]
+ formula & = & \hbox{expression of type~$bool$} \\
+         & | & term " = " term \\
+         & | & term " \ttilde= " term \\
+         & | & term " < " term \\
+         & | & term " <= " term \\
+         & | & "\ttilde\ " formula \\
+         & | & formula " \& " formula \\
+         & | & formula " | " formula \\
+         & | & formula " --> " formula \\
+         & | & "!~~~" id~id^* " . " formula 
+         & | & "ALL~" id~id^* " . " formula \\
+         & | & "?~~~" id~id^* " . " formula 
+         & | & "EX~~" id~id^* " . " formula \\
+         & | & "?!~~" id~id^* " . " formula 
+         & | & "EX!~" id~id^* " . " formula
+  \end{array}
+\]
+\caption{Full grammar for \CHOL} \label{chol-grammar}
+\end{figure} 
+
+
+\section{Syntax}
+The type class of higher-order terms is called~\cldx{term}.  Type variables
+range over this class by default.  The equality symbol and quantifiers are
+polymorphic over class {\tt term}.
+
+Class \cldx{ord} consists of all ordered types; the relations $<$ and
+$\leq$ are polymorphic over this class, as are the functions
+\cdx{mono}, \cdx{min} and \cdx{max}.  Three other
+type classes --- \cldx{plus}, \cldx{minus} and \cldx{times} --- permit
+overloading of the operators {\tt+}, {\tt-} and {\tt*}.  In particular,
+{\tt-} is overloaded for set difference and subtraction.
+\index{*"+ symbol}
+\index{*"- symbol}
+\index{*"* symbol}
+
+Figure~\ref{chol-constants} lists the constants (including infixes and
+binders), while Fig.\ts\ref{chol-grammar} presents the grammar of
+higher-order logic.  Note that $a$\verb|~=|$b$ is translated to
+$\neg(a=b)$.
+
+\begin{warn}
+  \CHOL\ has no if-and-only-if connective; logical equivalence is expressed
+  using equality.  But equality has a high priority, as befits a
+  relation, while if-and-only-if typically has the lowest priority.  Thus,
+  $\neg\neg P=P$ abbreviates $\neg\neg (P=P)$ and not $(\neg\neg P)=P$.
+  When using $=$ to mean logical equivalence, enclose both operands in
+  parentheses.
+\end{warn}
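+
+For example, to state the commutativity of conjunction as a logical
+equivalence, both operands must be parenthesized.  The following is a
+sketch only, using the classical rule set {\tt prop_cs} described later in
+this chapter:
+\begin{ttbox}
+goal CHOL.thy "(P & Q) = (Q & P)";
+by (fast_tac prop_cs 1);
+\end{ttbox}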
+
+\subsection{Types}\label{CHOL-types}
+The type of formulae, \tydx{bool}, belongs to class \cldx{term}; thus,
+formulae are terms.  The built-in type~\tydx{fun}, which constructs function
+types, is overloaded with arity {\tt(term,term)term}.  Thus, $\sigma\To\tau$
+belongs to class~{\tt term} if $\sigma$ and~$\tau$ do, allowing quantification
+over functions.
+
+Types in \CHOL\ must be non-empty; otherwise the quantifier rules would be
+unsound.  I have commented on this elsewhere~\cite[\S7]{paulson-COLOG}.
+
+\index{type definitions}
+Gordon's {\sc hol} system supports {\bf type definitions}.  A type is
+defined by exhibiting an existing type~$\sigma$, a predicate~$P::\sigma\To
+bool$, and a theorem of the form $\exists x::\sigma.P~x$.  Thus~$P$
+specifies a non-empty subset of~$\sigma$, and the new type denotes this
+subset.  New function constants are generated to establish an isomorphism
+between the new type and the subset.  If type~$\sigma$ involves type
+variables $\alpha@1$, \ldots, $\alpha@n$, then the type definition creates
+a type constructor $(\alpha@1,\ldots,\alpha@n)ty$ rather than a particular
+type.  Melham~\cite{melham89} discusses type definitions at length, with
+examples. 
+
+Isabelle does not support type definitions at present.  Instead, they are
+mimicked by explicit definitions of isomorphism functions.  The definitions
+should be supported by theorems of the form $\exists x::\sigma.P~x$, but
+Isabelle cannot enforce this.
+
+
+\subsection{Binders}
+Hilbert's {\bf description} operator~$\epsilon x.P[x]$ stands for some~$a$
+satisfying~$P[a]$, if such exists.  Since all terms in \CHOL\ denote
+something, a description is always meaningful, but we do not know its value
+unless $P[x]$ defines it uniquely.  We may write descriptions as
+\cdx{Eps}($P$) or use the syntax
+\hbox{\tt \at $x$.$P[x]$}.
+
+Existential quantification is defined by
+\[ \exists x.P~x \;\equiv\; P(\epsilon x.P~x). \]
+The unique existence quantifier, $\exists!x.P[x]$, is defined in terms
+of~$\exists$ and~$\forall$.  An Isabelle binder, it admits nested
+quantifications.  For instance, $\exists!x y.P~x~y$ abbreviates
+$\exists!x. \exists!y.P~x~y$; note that this does not mean that there
+exists a unique pair $(x,y)$ satisfying~$P~x~y$.
+
+\index{*"! symbol}\index{*"? symbol}\index{CHOL system@{\sc chol} system}
+Quantifiers have two notations.  As in Gordon's {\sc hol} system, \CHOL\
+uses~{\tt!}\ and~{\tt?}\ to stand for $\forall$ and $\exists$.  The
+existential quantifier must be followed by a space; thus {\tt?x} is an
+unknown, while \verb'? x.f x=y' is a quantification.  Isabelle's usual
+notation for quantifiers, \sdx{ALL} and \sdx{EX}, is also
+available.  Both notations are accepted for input.  The {\ML} reference
+\ttindexbold{HOL_quantifiers} governs the output notation.  If set to {\tt
+true}, then~{\tt!}\ and~{\tt?}\ are displayed; this is the default.  If set
+to {\tt false}, then~{\tt ALL} and~{\tt EX} are displayed.
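+
+For example, assuming that \ttindex{HOL_quantifiers} is an ordinary boolean
+reference, the output notation can be switched to {\tt ALL} and~{\tt EX} by
+typing the following at the {\ML} level:
+\begin{ttbox}
+HOL_quantifiers := false;
+\end{ttbox}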
+
+All these binders have priority 10. 
+
+
+\subsection{The \sdx{let} and \sdx{case} constructions}
+Local abbreviations can be introduced by a {\tt let} construct whose
+syntax appears in Fig.\ts\ref{chol-grammar}.  Internally it is translated into
+the constant~\cdx{Let}.  It can be expanded by rewriting with its
+definition, \tdx{Let_def}.
+
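+A small example (a sketch only; {\tt HOL_ss} is the default simpset
+described later in this chapter): the following goal contains a {\tt let}
+abbreviation, which disappears once {\tt Let_def} is supplied to the
+simplifier.
+\begin{ttbox}
+goal CHOL.thy "(let n = True in n & n) = True";
+by (simp_tac (HOL_ss addsimps [Let_def]) 1);
+\end{ttbox}
+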
+\CHOL\ also defines the basic syntax
+\[\dquotes"case"~e~"of"~c@1~"=>"~e@1~"|" \dots "|"~c@n~"=>"~e@n\] 
+as a uniform means of expressing {\tt case} constructs.  Therefore {\tt
+  case} and \sdx{of} are reserved words.  However, so far this is mere
+syntax and has no logical meaning.  By declaring translations, you can
+cause instances of the {\tt case} construct to denote applications of
+particular case operators.  The patterns supplied for $c@1$,~\ldots,~$c@n$
+distinguish among the different case operators.  For an example, see the
+case construct for lists on page~\pageref{chol-list} below.
+
+\begin{figure}
+\begin{ttbox}\makeatother
+\tdx{refl}           t = (t::'a)
+\tdx{subst}          [| s=t; P s |] ==> P(t::'a)
+\tdx{ext}            (!!x::'a. (f x::'b) = g x) ==> (\%x.f x) = (\%x.g x)
+\tdx{impI}           (P ==> Q) ==> P-->Q
+\tdx{mp}             [| P-->Q;  P |] ==> Q
+\tdx{iff}            (P-->Q) --> (Q-->P) --> (P=Q)
+\tdx{selectI}        P(x::'a) ==> P(@x.P x)
+\tdx{True_or_False}  (P=True) | (P=False)
+\end{ttbox}
+\caption{The {\tt CHOL} rules} \label{chol-rules}
+\end{figure}
+
+
+\begin{figure}\hfuzz=4pt%suppress "Overfull \hbox" message
+\begin{ttbox}\makeatother
+\tdx{True_def}   True     == ((\%x::bool.x)=(\%x.x))
+\tdx{All_def}    All      == (\%P. P = (\%x.True))
+\tdx{Ex_def}     Ex       == (\%P. P(@x.P x))
+\tdx{False_def}  False    == (!P.P)
+\tdx{not_def}    not      == (\%P. P-->False)
+\tdx{and_def}    op &     == (\%P Q. !R. (P-->Q-->R) --> R)
+\tdx{or_def}     op |     == (\%P Q. !R. (P-->R) --> (Q-->R) --> R)
+\tdx{Ex1_def}    Ex1      == (\%P. ? x. P x & (! y. P y --> y=x))
+
+\tdx{Inv_def}    Inv      == (\%(f::'a=>'b) y. @x. f x=y)
+\tdx{o_def}      op o     == (\%(f::'b=>'c) g (x::'a). f(g x))
+\tdx{if_def}     If P x y == @z::'a.(P=True --> z=x) & (P=False --> z=y)
+\tdx{Let_def}    Let s f  == f s
+\end{ttbox}
+\caption{The {\tt CHOL} definitions} \label{chol-defs}
+\end{figure}
+
+
+\section{Rules of inference}
+Figure~\ref{chol-rules} shows the inference rules of~\CHOL{}, with
+their~{\ML} names.  Some of the rules deserve additional comments:
+\begin{ttdescription}
+\item[\tdx{ext}] expresses extensionality of functions.
+\item[\tdx{iff}] asserts that logically equivalent formulae are
+  equal.
+\item[\tdx{selectI}] gives the defining property of the Hilbert
+  $\epsilon$-operator.  It is a form of the Axiom of Choice.  The derived rule
+  \tdx{select_equality} (see below) is often easier to use.
+\item[\tdx{True_or_False}] makes the logic classical.\footnote{In
+    fact, the $\epsilon$-operator already makes the logic classical, as
+    shown by Diaconescu; see Paulson~\cite{paulson-COLOG} for details.}
+\end{ttdescription}
+
+\CHOL{} follows standard practice in higher-order logic: only a few
+connectives are taken as primitive, with the remainder defined obscurely
+(Fig.\ts\ref{chol-defs}).  Gordon's {\sc hol} system expresses the
+corresponding definitions \cite[page~270]{mgordon-hol} using
+object-equality~({\tt=}), which is possible because equality in
+higher-order logic may equate formulae and even functions over formulae.
+But theory~\CHOL{}, like all other Isabelle theories, uses
+meta-equality~({\tt==}) for definitions.
+
+Some of the rules mention type variables; for example, {\tt refl}
+mentions the type variable~{\tt'a}.  This allows you to instantiate
+type variables explicitly by calling {\tt res_inst_tac}.  By default,
+explicit type variables have class \cldx{term}.
+
+Include type constraints whenever you state a polymorphic goal.  Type
+inference may otherwise make the goal more polymorphic than you intended,
+with confusing results.
+
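+For instance, when stating a simple polymorphic fact about sets, one of the
+variables might be constrained explicitly (a hypothetical example; any
+constraint that fixes the intended type will do):
+\begin{ttbox}
+goal Set.thy "(A::'a set) <= A Un B";
+\end{ttbox}
+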
+\begin{warn}
+  If resolution fails for no obvious reason, try setting
+  \ttindex{show_types} to {\tt true}, causing Isabelle to display types of
+  terms.  Possibly set \ttindex{show_sorts} to {\tt true} as well, causing
+  Isabelle to display sorts.
+
+  \index{unification!incompleteness of}
+  Where function types are involved, Isabelle's unification code does not
+  guarantee to find instantiations for type variables automatically.  Be
+  prepared to use \ttindex{res_inst_tac} instead of {\tt resolve_tac},
+  possibly instantiating type variables.  Setting
+  \ttindex{Unify.trace_types} to {\tt true} causes Isabelle to report
+  omitted search paths during unification.\index{tracing!of unification}
+\end{warn}
+
+
+\begin{figure}
+\begin{ttbox}
+\tdx{sym}         s=t ==> t=s
+\tdx{trans}       [| r=s; s=t |] ==> r=t
+\tdx{ssubst}      [| t=s; P s |] ==> P(t::'a)
+\tdx{box_equals}  [| a=b;  a=c;  b=d |] ==> c=d  
+\tdx{arg_cong}    x=y ==> f x=f y
+\tdx{fun_cong}    f=g ==> f x=g x
+\subcaption{Equality}
+
+\tdx{TrueI}       True 
+\tdx{FalseE}      False ==> P
+
+\tdx{conjI}       [| P; Q |] ==> P&Q
+\tdx{conjunct1}   [| P&Q |] ==> P
+\tdx{conjunct2}   [| P&Q |] ==> Q 
+\tdx{conjE}       [| P&Q;  [| P; Q |] ==> R |] ==> R
+
+\tdx{disjI1}      P ==> P|Q
+\tdx{disjI2}      Q ==> P|Q
+\tdx{disjE}       [| P | Q; P ==> R; Q ==> R |] ==> R
+
+\tdx{notI}        (P ==> False) ==> ~ P
+\tdx{notE}        [| ~ P;  P |] ==> R
+\tdx{impE}        [| P-->Q;  P;  Q ==> R |] ==> R
+\subcaption{Propositional logic}
+
+\tdx{iffI}        [| P ==> Q;  Q ==> P |] ==> P=Q
+\tdx{iffD1}       [| P=Q; P |] ==> Q
+\tdx{iffD2}       [| P=Q; Q |] ==> P
+\tdx{iffE}        [| P=Q; [| P --> Q; Q --> P |] ==> R |] ==> R
+
+\tdx{eqTrueI}     P ==> P=True 
+\tdx{eqTrueE}     P=True ==> P 
+\subcaption{Logical equivalence}
+
+\end{ttbox}
+\caption{Derived rules for \CHOL} \label{chol-lemmas1}
+\end{figure}
+
+
+\begin{figure}
+\begin{ttbox}\makeatother
+\tdx{allI}      (!!x::'a. P x) ==> !x. P x
+\tdx{spec}      !x::'a.P x ==> P x
+\tdx{allE}      [| !x.P x;  P x ==> R |] ==> R
+\tdx{all_dupE}  [| !x.P x;  [| P x; !x.P x |] ==> R |] ==> R
+
+\tdx{exI}       P x ==> ? x::'a.P x
+\tdx{exE}       [| ? x::'a.P x; !!x. P x ==> Q |] ==> Q
+
+\tdx{ex1I}      [| P a;  !!x. P x ==> x=a |] ==> ?! x. P x
+\tdx{ex1E}      [| ?! x.P x;  !!x. [| P x;  ! y. P y --> y=x |] ==> R 
+          |] ==> R
+
+\tdx{select_equality} [| P a;  !!x. P x ==> x=a |] ==> (@x.P x) = a
+\subcaption{Quantifiers and descriptions}
+
+\tdx{ccontr}          (~P ==> False) ==> P
+\tdx{classical}       (~P ==> P) ==> P
+\tdx{excluded_middle} ~P | P
+
+\tdx{disjCI}          (~Q ==> P) ==> P|Q
+\tdx{exCI}            (! x. ~ P x ==> P a) ==> ? x.P x
+\tdx{impCE}           [| P-->Q; ~ P ==> R; Q ==> R |] ==> R
+\tdx{iffCE}           [| P=Q;  [| P;Q |] ==> R;  [| ~P; ~Q |] ==> R |] ==> R
+\tdx{notnotD}         ~~P ==> P
+\tdx{swap}            ~P ==> (~Q ==> P) ==> Q
+\subcaption{Classical logic}
+
+\tdx{if_True}         (if True then x else y) = x
+\tdx{if_False}        (if False then x else y) = y
+\tdx{if_P}            P ==> (if P then x else y) = x
+\tdx{if_not_P}        ~ P ==> (if P then x else y) = y
+\tdx{expand_if}       P(if Q then x else y) = ((Q --> P x) & (~Q --> P y))
+\subcaption{Conditionals}
+\end{ttbox}
+\caption{More derived rules} \label{chol-lemmas2}
+\end{figure}
+
+
+Some derived rules are shown in Figures~\ref{chol-lemmas1}
+and~\ref{chol-lemmas2}, with their {\ML} names.  These include natural rules
+for the logical connectives, as well as sequent-style elimination rules for
+conjunctions, implications, and universal quantifiers.  
+
+Note the equality rules: \tdx{ssubst} performs substitution in
+backward proofs, while \tdx{box_equals} supports reasoning by
+simplifying both sides of an equation.
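+As a trivial illustration (a sketch), the congruence rule \tdx{arg_cong}
+can be re-derived by a single elim-resolution step:
+\begin{ttbox}
+goal CHOL.thy "!!a b. a = b ==> f a = f b";
+by (eresolve_tac [arg_cong] 1);
+\end{ttbox}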
+
+
+\begin{figure} 
+\begin{center}
+\begin{tabular}{rrr} 
+  \it name      &\it meta-type  & \it description \\ 
+\index{{}@\verb'{}' symbol}
+  \verb|{}|     & $\alpha\,set$         & the empty set \\
+  \cdx{insert}  & $[\alpha,\alpha\,set]\To \alpha\,set$
+        & insertion of element \\
+  \cdx{Collect} & $(\alpha\To bool)\To\alpha\,set$
+        & comprehension \\
+  \cdx{Compl}   & $(\alpha\,set)\To\alpha\,set$
+        & complement \\
+  \cdx{INTER} & $[\alpha\,set,\alpha\To\beta\,set]\To\beta\,set$
+        & intersection over a set\\
+  \cdx{UNION} & $[\alpha\,set,\alpha\To\beta\,set]\To\beta\,set$
+        & union over a set\\
+  \cdx{Inter} & $(\alpha\,set)set\To\alpha\,set$
+        &set of sets intersection \\
+  \cdx{Union} & $(\alpha\,set)set\To\alpha\,set$
+        &set of sets union \\
+  \cdx{Pow}   & $\alpha\,set \To (\alpha\,set)set$
+        & powerset \\[1ex]
+  \cdx{range}   & $(\alpha\To\beta )\To\beta\,set$
+        & range of a function \\[1ex]
+  \cdx{Ball}~~\cdx{Bex} & $[\alpha\,set,\alpha\To bool]\To bool$
+        & bounded quantifiers \\
+  \cdx{mono}    & $(\alpha\,set\To\beta\,set)\To bool$
+        & monotonicity \\
+  \cdx{inj}~~\cdx{surj}& $(\alpha\To\beta )\To bool$
+        & injective/surjective \\
+  \cdx{inj_onto}        & $[\alpha\To\beta ,\alpha\,set]\To bool$
+        & injective over subset
+\end{tabular}
+\end{center}
+\subcaption{Constants}
+
+\begin{center}
+\begin{tabular}{llrrr} 
+  \it symbol &\it name     &\it meta-type & \it priority & \it description \\
+  \sdx{INT}  & \cdx{INTER1}  & $(\alpha\To\beta\,set)\To\beta\,set$ & 10 & 
+        intersection over a type\\
+  \sdx{UN}  & \cdx{UNION1}  & $(\alpha\To\beta\,set)\To\beta\,set$ & 10 & 
+        union over a type
+\end{tabular}
+\end{center}
+\subcaption{Binders} 
+
+\begin{center}
+\index{*"`"` symbol}
+\index{*": symbol}
+\index{*"<"= symbol}
+\begin{tabular}{rrrr} 
+  \it symbol    & \it meta-type & \it priority & \it description \\ 
+  \tt ``        & $[\alpha\To\beta ,\alpha\,set]\To  (\beta\,set)$
+        & Left 90 & image \\
+  \sdx{Int}     & $[\alpha\,set,\alpha\,set]\To\alpha\,set$
+        & Left 70 & intersection ($\inter$) \\
+  \sdx{Un}      & $[\alpha\,set,\alpha\,set]\To\alpha\,set$
+        & Left 65 & union ($\union$) \\
+  \tt:          & $[\alpha ,\alpha\,set]\To bool$       
+        & Left 50 & membership ($\in$) \\
+  \tt <=        & $[\alpha\,set,\alpha\,set]\To bool$
+        & Left 50 & subset ($\subseteq$) 
+\end{tabular}
+\end{center}
+\subcaption{Infixes}
+\caption{Syntax of the theory {\tt Set}} \label{chol-set-syntax}
+\end{figure} 
+
+
+\begin{figure} 
+\begin{center} \tt\frenchspacing
+\index{*"! symbol}
+\begin{tabular}{rrr} 
+  \it external          & \it internal  & \it description \\ 
+  $a$ \ttilde: $b$      & \ttilde($a$ : $b$)    & \rm non-membership\\
+  \{$a@1$, $\ldots$\}  &  insert $a@1$ $\ldots$ \{\} & \rm finite set \\
+  \{$x$.$P[x]$\}        &  Collect($\lambda x.P[x]$) &
+        \rm comprehension \\
+  \sdx{INT} $x$:$A$.$B[x]$      & INTER $A$ $\lambda x.B[x]$ &
+        \rm intersection \\
+  \sdx{UN}{\tt\ }  $x$:$A$.$B[x]$      & UNION $A$ $\lambda x.B[x]$ &
+        \rm union \\
+  \tt ! $x$:$A$.$P[x]$ or \sdx{ALL} $x$:$A$.$P[x]$ & 
+        Ball $A$ $\lambda x.P[x]$ & 
+        \rm bounded $\forall$ \\
+  \sdx{?} $x$:$A$.$P[x]$ or \sdx{EX}{\tt\ } $x$:$A$.$P[x]$ & 
+        Bex $A$ $\lambda x.P[x]$ & \rm bounded $\exists$
+\end{tabular}
+\end{center}
+\subcaption{Translations}
+
+\dquotes
+\[\begin{array}{rclcl}
+    term & = & \hbox{other terms\ldots} \\
+         & | & "\{\}" \\
+         & | & "\{ " term\; ("," term)^* " \}" \\
+         & | & "\{ " id " . " formula " \}" \\
+         & | & term " `` " term \\
+         & | & term " Int " term \\
+         & | & term " Un " term \\
+         & | & "INT~~"  id ":" term " . " term \\
+         & | & "UN~~~"  id ":" term " . " term \\
+         & | & "INT~~"  id~id^* " . " term \\
+         & | & "UN~~~"  id~id^* " . " term \\[2ex]
+ formula & = & \hbox{other formulae\ldots} \\
+         & | & term " : " term \\
+         & | & term " \ttilde: " term \\
+         & | & term " <= " term \\
+         & | & "!~" id ":" term " . " formula 
+         & | & "ALL " id ":" term " . " formula \\
+         & | & "?~" id ":" term " . " formula 
+         & | & "EX~~" id ":" term " . " formula
+  \end{array}
+\]
+\subcaption{Full Grammar}
+\caption{Syntax of the theory {\tt Set} (continued)} \label{chol-set-syntax2}
+\end{figure} 
+
+
+\section{A formulation of set theory}
+Historically, higher-order logic gives a foundation for Russell and
+Whitehead's theory of classes.  Let us use modern terminology and call them
+{\bf sets}, but note that these sets are distinct from those of {\ZF} set
+theory, and behave more like {\ZF} classes.
+\begin{itemize}
+\item
+Sets are given by predicates over some type~$\sigma$.  Types serve to
+define universes for sets, but type checking is still significant.
+\item
+There is a universal set (for each type).  Thus, sets have complements, and
+may be defined by absolute comprehension.
+\item
+Although sets may contain other sets as elements, the containing set must
+have a more complex type.
+\end{itemize}
+Finite unions and intersections have the same behaviour in \CHOL\ as they
+do in~{\ZF}.  In \CHOL\ the intersection of the empty set is well-defined,
+denoting the universal set for the given type.
+
+
+\subsection{Syntax of set theory}\index{*set type}
+\CHOL's set theory is called \thydx{Set}.  The type $\alpha\,set$ is
+essentially the same as $\alpha\To bool$.  The new type is defined for
+clarity and to avoid complications involving function types in unification.
+Since Isabelle does not support type definitions (as mentioned in
+\S\ref{CHOL-types}), the isomorphisms between the two types are declared
+explicitly.  Here they are natural: {\tt Collect} maps $\alpha\To bool$ to
+$\alpha\,set$, while \hbox{\tt op :} maps in the other direction (ignoring
+argument order).
+
+Figure~\ref{chol-set-syntax} lists the constants, infixes, and syntax
+translations.  Figure~\ref{chol-set-syntax2} presents the grammar of the new
+constructs.  Infix operators include union and intersection ($A\union B$
+and $A\inter B$), the subset and membership relations, and the image
+operator~{\tt``}\@.  Note that $a$\verb|~:|$b$ is translated to
+$\neg(a\in b)$.  
+
+The {\tt\{\ldots\}} notation abbreviates finite sets constructed in the
+obvious manner using~{\tt insert} and~$\{\}$:
+\begin{eqnarray*}
+  \{a@1, \ldots, a@n\}  & \equiv &  
+  {\tt insert}~a@1~({\tt insert}\ldots({\tt insert}~a@n~\{\})\ldots)
+\end{eqnarray*}
+
+The set \hbox{\tt\{$x$.$P[x]$\}} consists of all $x$ (of suitable type)
+that satisfy~$P[x]$, where $P[x]$ is a formula that may contain free
+occurrences of~$x$.  This syntax expands to \cdx{Collect}$(\lambda
+x.P[x])$.  It defines sets by absolute comprehension, which is impossible
+in~{\ZF}; the type of~$x$ implicitly restricts the comprehension.
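+For example (a sketch, using the classical rule set {\tt set_cs} described
+later in this chapter), membership in a comprehension can be proved thus:
+\begin{ttbox}
+goal Set.thy "a : \{x. x=a\}";
+by (fast_tac set_cs 1);
+\end{ttbox}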
+
+The set theory defines two {\bf bounded quantifiers}:
+\begin{eqnarray*}
+   \forall x\in A.P[x] &\hbox{abbreviates}& \forall x. x\in A\imp P[x] \\
+   \exists x\in A.P[x] &\hbox{abbreviates}& \exists x. x\in A\conj P[x]
+\end{eqnarray*}
+The constants~\cdx{Ball} and~\cdx{Bex} are defined
+accordingly.  Instead of {\tt Ball $A$ $P$} and {\tt Bex $A$ $P$} we may
+write\index{*"! symbol}\index{*"? symbol}
+\index{*ALL symbol}\index{*EX symbol} 
+%
+\hbox{\tt !~$x$:$A$.$P[x]$} and \hbox{\tt ?~$x$:$A$.$P[x]$}.  Isabelle's
+usual quantifier symbols, \sdx{ALL} and \sdx{EX}, are also accepted
+for input.  As with the primitive quantifiers, the {\ML} reference
+\ttindex{HOL_quantifiers} specifies which notation to use for output.
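+For example (a sketch), here is a goal stated with the {\tt!} notation for
+a bounded universal quantifier and proved by the classical reasoner:
+\begin{ttbox}
+goal Set.thy "! x:A. x : A Un B";
+by (fast_tac set_cs 1);
+\end{ttbox}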
+
+Unions and intersections over sets, namely $\bigcup@{x\in A}B[x]$ and
+$\bigcap@{x\in A}B[x]$, are written 
+\sdx{UN}~\hbox{\tt$x$:$A$.$B[x]$} and
+\sdx{INT}~\hbox{\tt$x$:$A$.$B[x]$}.  
+
+Unions and intersections over types, namely $\bigcup@x B[x]$ and $\bigcap@x
+B[x]$, are written \sdx{UN}~\hbox{\tt$x$.$B[x]$} and
+\sdx{INT}~\hbox{\tt$x$.$B[x]$}.  They are equivalent to the previous
+union and intersection operators when $A$ is the universal set.
+
+The operators $\bigcup A$ and $\bigcap A$ act upon sets of sets.  They are
+not binders, but are equal to $\bigcup@{x\in A}x$ and $\bigcap@{x\in A}x$,
+respectively.
+
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{mem_Collect_eq}    (a : \{x.P x\}) = P a
+\tdx{Collect_mem_eq}    \{x.x:A\} = A
+
+\tdx{empty_def}         \{\}          == \{x.False\}
+\tdx{insert_def}        insert a B  == \{x.x=a\} Un B
+\tdx{Ball_def}          Ball A P    == ! x. x:A --> P x
+\tdx{Bex_def}           Bex A P     == ? x. x:A & P x
+\tdx{subset_def}        A <= B      == ! x:A. x:B
+\tdx{Un_def}            A Un B      == \{x.x:A | x:B\}
+\tdx{Int_def}           A Int B     == \{x.x:A & x:B\}
+\tdx{set_diff_def}      A - B       == \{x.x:A & x~:B\}
+\tdx{Compl_def}         Compl A     == \{x. ~ x:A\}
+\tdx{INTER_def}         INTER A B   == \{y. ! x:A. y: B x\}
+\tdx{UNION_def}         UNION A B   == \{y. ? x:A. y: B x\}
+\tdx{INTER1_def}        INTER1 B    == INTER \{x.True\} B 
+\tdx{UNION1_def}        UNION1 B    == UNION \{x.True\} B 
+\tdx{Inter_def}         Inter S     == (INT x:S. x)
+\tdx{Union_def}         Union S     == (UN  x:S. x)
+\tdx{Pow_def}           Pow A       == \{B. B <= A\}
+\tdx{image_def}         f``A        == \{y. ? x:A. y=f x\}
+\tdx{range_def}         range f     == \{y. ? x. y=f x\}
+\tdx{mono_def}          mono f      == !A B. A <= B --> f A <= f B
+\tdx{inj_def}           inj f       == ! x y. f x=f y --> x=y
+\tdx{surj_def}          surj f      == ! y. ? x. y=f x
+\tdx{inj_onto_def}      inj_onto f A == !x:A. !y:A. f x=f y --> x=y
+\end{ttbox}
+\caption{Rules of the theory {\tt Set}} \label{chol-set-rules}
+\end{figure}
+
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{CollectI}        [| P a |] ==> a : \{x.P x\}
+\tdx{CollectD}        [| a : \{x.P x\} |] ==> P a
+\tdx{CollectE}        [| a : \{x.P x\};  P a ==> W |] ==> W
+
+\tdx{ballI}           [| !!x. x:A ==> P x |] ==> ! x:A. P x
+\tdx{bspec}           [| ! x:A. P x;  x:A |] ==> P x
+\tdx{ballE}           [| ! x:A. P x;  P x ==> Q;  ~ x:A ==> Q |] ==> Q
+
+\tdx{bexI}            [| P x;  x:A |] ==> ? x:A. P x
+\tdx{bexCI}           [| ! x:A. ~ P x ==> P a;  a:A |] ==> ? x:A.P x
+\tdx{bexE}            [| ? x:A. P x;  !!x. [| x:A; P x |] ==> Q  |] ==> Q
+\subcaption{Comprehension and Bounded quantifiers}
+
+\tdx{subsetI}         (!!x.x:A ==> x:B) ==> A <= B
+\tdx{subsetD}         [| A <= B;  c:A |] ==> c:B
+\tdx{subsetCE}        [| A <= B;  ~ (c:A) ==> P;  c:B ==> P |] ==> P
+
+\tdx{subset_refl}     A <= A
+\tdx{subset_trans}    [| A<=B;  B<=C |] ==> A<=C
+
+\tdx{equalityI}       [| A <= B;  B <= A |] ==> A = B
+\tdx{equalityD1}      A = B ==> A<=B
+\tdx{equalityD2}      A = B ==> B<=A
+\tdx{equalityE}       [| A = B;  [| A<=B; B<=A |] ==> P |]  ==>  P
+
+\tdx{equalityCE}      [| A = B;  [| c:A; c:B |] ==> P;  
+                           [| ~ c:A; ~ c:B |] ==> P 
+                |]  ==>  P
+\subcaption{The subset and equality relations}
+\end{ttbox}
+\caption{Derived rules for set theory} \label{chol-set1}
+\end{figure}
+
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{emptyE}   a : \{\} ==> P
+
+\tdx{insertI1} a : insert a B
+\tdx{insertI2} a : B ==> a : insert b B
+\tdx{insertE}  [| a : insert b A;  a=b ==> P;  a:A ==> P |] ==> P
+
+\tdx{ComplI}   [| c:A ==> False |] ==> c : Compl A
+\tdx{ComplD}   [| c : Compl A |] ==> ~ c:A
+
+\tdx{UnI1}     c:A ==> c : A Un B
+\tdx{UnI2}     c:B ==> c : A Un B
+\tdx{UnCI}     (~c:B ==> c:A) ==> c : A Un B
+\tdx{UnE}      [| c : A Un B;  c:A ==> P;  c:B ==> P |] ==> P
+
+\tdx{IntI}     [| c:A;  c:B |] ==> c : A Int B
+\tdx{IntD1}    c : A Int B ==> c:A
+\tdx{IntD2}    c : A Int B ==> c:B
+\tdx{IntE}     [| c : A Int B;  [| c:A; c:B |] ==> P |] ==> P
+
+\tdx{UN_I}     [| a:A;  b: B a |] ==> b: (UN x:A. B x)
+\tdx{UN_E}     [| b: (UN x:A. B x);  !!x.[| x:A;  b:B x |] ==> R |] ==> R
+
+\tdx{INT_I}    (!!x. x:A ==> b: B x) ==> b : (INT x:A. B x)
+\tdx{INT_D}    [| b: (INT x:A. B x);  a:A |] ==> b: B a
+\tdx{INT_E}    [| b: (INT x:A. B x);  b: B a ==> R;  ~ a:A ==> R |] ==> R
+
+\tdx{UnionI}   [| X:C;  A:X |] ==> A : Union C
+\tdx{UnionE}   [| A : Union C;  !!X.[| A:X;  X:C |] ==> R |] ==> R
+
+\tdx{InterI}   [| !!X. X:C ==> A:X |] ==> A : Inter C
+\tdx{InterD}   [| A : Inter C;  X:C |] ==> A:X
+\tdx{InterE}   [| A : Inter C;  A:X ==> R;  ~ X:C ==> R |] ==> R
+
+\tdx{PowI}     A<=B ==> A: Pow B
+\tdx{PowD}     A: Pow B ==> A<=B
+\end{ttbox}
+\caption{Further derived rules for set theory} \label{chol-set2}
+\end{figure}
+
+
+\subsection{Axioms and rules of set theory}
+Figure~\ref{chol-set-rules} presents the rules of theory \thydx{Set}.  The
+axioms \tdx{mem_Collect_eq} and \tdx{Collect_mem_eq} assert
+that the functions {\tt Collect} and \hbox{\tt op :} are isomorphisms.  Of
+course, \hbox{\tt op :} also serves as the membership relation.
+
+All the other axioms are definitions.  They include the empty set, bounded
+quantifiers, unions, intersections, complements and the subset relation.
+They also include straightforward properties of functions: image~({\tt``}) and
+{\tt range}, and predicates concerning monotonicity, injectiveness and
+surjectiveness.  
+
+The predicate \cdx{inj_onto} is used for simulating type definitions.
+The statement ${\tt inj_onto}~f~A$ asserts that $f$ is injective on the
+set~$A$, which specifies a subset of its domain type.  In a type
+definition, $f$ is the abstraction function and $A$ is the set of valid
+representations; we should not expect $f$ to be injective outside of~$A$.
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{Inv_f_f}    inj f ==> Inv f (f x) = x
+\tdx{f_Inv_f}    y : range f ==> f(Inv f y) = y
+
+%\tdx{Inv_injective}
+%    [| Inv f x=Inv f y; x: range f;  y: range f |] ==> x=y
+%
+\tdx{imageI}     [| x:A |] ==> f x : f``A
+\tdx{imageE}     [| b : f``A;  !!x.[| b=f x;  x:A |] ==> P |] ==> P
+
+\tdx{rangeI}     f x : range f
+\tdx{rangeE}     [| b : range f;  !!x.[| b=f x |] ==> P |] ==> P
+
+\tdx{monoI}      [| !!A B. A <= B ==> f A <= f B |] ==> mono f
+\tdx{monoD}      [| mono f;  A <= B |] ==> f A <= f B
+
+\tdx{injI}       [| !! x y. f x = f y ==> x=y |] ==> inj f
+\tdx{inj_inverseI}              (!!x. g(f x) = x) ==> inj f
+\tdx{injD}       [| inj f; f x = f y |] ==> x=y
+
+\tdx{inj_ontoI}  (!!x y. [| f x=f y; x:A; y:A |] ==> x=y) ==> inj_onto f A
+\tdx{inj_ontoD}  [| inj_onto f A;  f x=f y;  x:A;  y:A |] ==> x=y
+
+\tdx{inj_onto_inverseI}
+    (!!x. x:A ==> g(f x) = x) ==> inj_onto f A
+\tdx{inj_onto_contraD}
+    [| inj_onto f A;  x~=y;  x:A;  y:A |] ==> ~ f x=f y
+\end{ttbox}
+\caption{Derived rules involving functions} \label{chol-fun}
+\end{figure}
+
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{Union_upper}     B:A ==> B <= Union A
+\tdx{Union_least}     [| !!X. X:A ==> X<=C |] ==> Union A <= C
+
+\tdx{Inter_lower}     B:A ==> Inter A <= B
+\tdx{Inter_greatest}  [| !!X. X:A ==> C<=X |] ==> C <= Inter A
+
+\tdx{Un_upper1}       A <= A Un B
+\tdx{Un_upper2}       B <= A Un B
+\tdx{Un_least}        [| A<=C;  B<=C |] ==> A Un B <= C
+
+\tdx{Int_lower1}      A Int B <= A
+\tdx{Int_lower2}      A Int B <= B
+\tdx{Int_greatest}    [| C<=A;  C<=B |] ==> C <= A Int B
+\end{ttbox}
+\caption{Derived rules involving subsets} \label{chol-subset}
+\end{figure}
+
+
+\begin{figure} \underscoreon   \hfuzz=4pt%suppress "Overfull \hbox" message
+\begin{ttbox}
+\tdx{Int_absorb}        A Int A = A
+\tdx{Int_commute}       A Int B = B Int A
+\tdx{Int_assoc}         (A Int B) Int C  =  A Int (B Int C)
+\tdx{Int_Un_distrib}    (A Un B)  Int C  =  (A Int C) Un (B Int C)
+
+\tdx{Un_absorb}         A Un A = A
+\tdx{Un_commute}        A Un B = B Un A
+\tdx{Un_assoc}          (A Un B)  Un C  =  A Un (B Un C)
+\tdx{Un_Int_distrib}    (A Int B) Un C  =  (A Un C) Int (B Un C)
+
+\tdx{Compl_disjoint}    A Int (Compl A) = \{x.False\}
+\tdx{Compl_partition}   A Un  (Compl A) = \{x.True\}
+\tdx{double_complement} Compl(Compl A) = A
+\tdx{Compl_Un}          Compl(A Un B)  = (Compl A) Int (Compl B)
+\tdx{Compl_Int}         Compl(A Int B) = (Compl A) Un (Compl B)
+
+\tdx{Union_Un_distrib}  Union(A Un B) = (Union A) Un (Union B)
+\tdx{Int_Union}         A Int (Union B) = (UN C:B. A Int C)
+\tdx{Un_Union_image}    (UN x:C.(A x) Un (B x)) = Union(A``C) Un Union(B``C)
+
+\tdx{Inter_Un_distrib}  Inter(A Un B) = (Inter A) Int (Inter B)
+\tdx{Un_Inter}          A Un (Inter B) = (INT C:B. A Un C)
+\tdx{Int_Inter_image}   (INT x:C.(A x) Int (B x)) = Inter(A``C) Int Inter(B``C)
+\end{ttbox}
+\caption{Set equalities} \label{chol-equalities}
+\end{figure}
+
+
+Figures~\ref{chol-set1} and~\ref{chol-set2} present derived rules.  Most are
+obvious and resemble rules of Isabelle's {\ZF} set theory.  Certain rules,
+such as \tdx{subsetCE}, \tdx{bexCI} and \tdx{UnCI},
+are designed for classical reasoning; the rules \tdx{subsetD},
+\tdx{bexI}, \tdx{UnI1} and~\tdx{UnI2} are not
+strictly necessary but yield more natural proofs.  Similarly,
+\tdx{equalityCE} supports classical reasoning about extensionality,
+after the fashion of \tdx{iffCE}.  See the file {\tt CHOL/Set.ML} for
+proofs pertaining to set theory.
+
+Figure~\ref{chol-fun} presents derived inference rules involving functions.
+They include rules for \cdx{Inv}, which is defined in theory~{\tt
+  CHOL}; note that ${\tt Inv}~f$ applies the Axiom of Choice to yield an
+inverse of~$f$.  They also include natural deduction rules for the image
+and range operators, and for the predicates {\tt inj} and {\tt inj_onto}.
+Reasoning about function composition (the operator~\sdx{o}) and the
+predicate~\cdx{surj} is done simply by expanding the definitions.  See
+the file {\tt CHOL/fun.ML} for a complete listing of the derived rules.
+
+Figure~\ref{chol-subset} presents lattice properties of the subset relation.
+Unions form least upper bounds; non-empty intersections form greatest lower
+bounds.  Reasoning directly about subsets often yields clearer proofs than
+reasoning about the membership relation.  See the file {\tt CHOL/subset.ML}.
+
+Figure~\ref{chol-equalities} presents many common set equalities.  They
+include commutative, associative and distributive laws involving unions,
+intersections and complements.  The proofs are mostly trivial, using the
+classical reasoner; see file {\tt CHOL/equalities.ML}.
+
+
+\begin{figure}
+\begin{constants}
+  \it symbol    & \it meta-type &           & \it description \\ 
+  \cdx{Pair}    & $[\alpha,\beta]\To \alpha\times\beta$
+        & & ordered pairs $(a,b)$ \\
+  \cdx{fst}     & $\alpha\times\beta \To \alpha$        & & first projection\\
+  \cdx{snd}     & $\alpha\times\beta \To \beta$         & & second projection\\
+  \cdx{split}   & $[[\alpha,\beta]\To\gamma, \alpha\times\beta] \To \gamma$ 
+        & & generalized projection\\
+  \cdx{Sigma}  & 
+        $[\alpha\,set, \alpha\To\beta\,set]\To(\alpha\times\beta)set$ &
+        & general sum of sets
+\end{constants}
+\begin{ttbox}\makeatletter
+\tdx{fst_def}      fst p     == @a. ? b. p = (a,b)
+\tdx{snd_def}      snd p     == @b. ? a. p = (a,b)
+\tdx{split_def}    split c p == c (fst p) (snd p)
+\tdx{Sigma_def}    Sigma A B == UN x:A. UN y:B x. \{(x,y)\}
+
+
+\tdx{Pair_inject}  [| (a, b) = (a',b');  [| a=a';  b=b' |] ==> R |] ==> R
+\tdx{fst_conv}     fst (a,b) = a
+\tdx{snd_conv}     snd (a,b) = b
+\tdx{split}        split c (a,b) = c a b
+
+\tdx{surjective_pairing}  p = (fst p,snd p)
+
+\tdx{SigmaI}       [| a:A;  b:B a |] ==> (a,b) : Sigma A B
+
+\tdx{SigmaE}       [| c: Sigma A B;  
+                !!x y.[| x:A; y:B x; c=(x,y) |] ==> P |] ==> P
+\end{ttbox}
+\caption{Type $\alpha\times\beta$}\label{chol-prod}
+\end{figure} 
+
+
+\begin{figure}
+\begin{constants}
+  \it symbol    & \it meta-type &           & \it description \\ 
+  \cdx{Inl}     & $\alpha \To \alpha+\beta$    & & first injection\\
+  \cdx{Inr}     & $\beta \To \alpha+\beta$     & & second injection\\
+  \cdx{sum_case} & $[\alpha\To\gamma, \beta\To\gamma, \alpha+\beta] \To\gamma$
+        & & conditional
+\end{constants}
+\begin{ttbox}\makeatletter
+\tdx{sum_case_def}   sum_case == (\%f g p. @z. (!x. p=Inl x --> z=f x) &
+                                        (!y. p=Inr y --> z=g y))
+
+\tdx{Inl_not_Inr}    ~ Inl a=Inr b
+
+\tdx{inj_Inl}        inj Inl
+\tdx{inj_Inr}        inj Inr
+
+\tdx{sumE}           [| !!x::'a. P(Inl x);  !!y::'b. P(Inr y) |] ==> P s
+
+\tdx{sum_case_Inl}   sum_case f g (Inl x) = f x
+\tdx{sum_case_Inr}   sum_case f g (Inr x) = g x
+
+\tdx{surjective_sum} sum_case (\%x::'a. f(Inl x)) (\%y::'b. f(Inr y)) s = f s
+\end{ttbox}
+\caption{Type $\alpha+\beta$}\label{chol-sum}
+\end{figure}
+
+
+\section{Generic packages and classical reasoning}
+\CHOL\ instantiates most of Isabelle's generic packages;
+see {\tt CHOL/ROOT.ML} for details.
+\begin{itemize}
+\item 
+Because it includes a general substitution rule, \CHOL\ instantiates the
+tactic {\tt hyp_subst_tac}, which substitutes for an equality
+throughout a subgoal and its hypotheses.
+\item 
+It instantiates the simplifier, defining~\ttindexbold{HOL_ss} as the
+simplification set for higher-order logic.  Equality~($=$), which also
+expresses logical equivalence, may be used for rewriting.  See the file
+{\tt CHOL/simpdata.ML} for a complete listing of the simplification
+rules. 
+\item 
+It instantiates the classical reasoner, as described below. 
+\end{itemize}
+\CHOL\ derives classical introduction rules for $\disj$ and~$\exists$, as
+well as classical elimination rules for~$\imp$ and~$\bimp$, and the swap
+rule; recall Fig.\ts\ref{chol-lemmas2} above.
+
+The classical reasoner is set up as the structure
+{\tt Classical}.  This structure is open, so {\ML} identifiers such
+as {\tt step_tac}, {\tt fast_tac}, {\tt best_tac}, etc., refer to it.
+\CHOL\ defines the following classical rule sets:
+\begin{ttbox} 
+prop_cs    : claset
+HOL_cs     : claset
+set_cs     : claset
+\end{ttbox}
+\begin{ttdescription}
+\item[\ttindexbold{prop_cs}] contains the propositional rules, namely
+those for~$\top$, $\bot$, $\conj$, $\disj$, $\neg$, $\imp$ and~$\bimp$,
+along with the rule~{\tt refl}.
+
+\item[\ttindexbold{HOL_cs}] extends {\tt prop_cs} with the safe rules
+  {\tt allI} and~{\tt exE} and the unsafe rules {\tt allE}
+  and~{\tt exI}, as well as rules for unique existence.  Search using
+  this classical set is incomplete: quantified formulae are used at most
+  once.
+
+\item[\ttindexbold{set_cs}] extends {\tt HOL_cs} with rules for the bounded
+  quantifiers, subsets, comprehensions, unions and intersections,
+  complements, finite sets, images and ranges.
+\end{ttdescription}
+\noindent
+See \iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
+        {Chap.\ts\ref{chap:classical}} 
+for more discussion of classical proof methods.
+
+
+\section{Types}
+The basic higher-order logic is augmented with a tremendous amount of
+material, including support for recursive function and type definitions.  A
+detailed discussion appears elsewhere~\cite{paulson-coind}.  The simpler
+definitions are the same as those used by the {\sc hol} system, but my
+treatment of recursive types differs from Melham's~\cite{melham89}.  The
+present section describes product, sum, natural number and list types.
+
+\subsection{Product and sum types}\index{*"* type}\index{*"+ type}
+Theory \thydx{Prod} defines the product type $\alpha\times\beta$, with
+the ordered pair syntax {\tt($a$,$b$)}.  Theory \thydx{Sum} defines the
+sum type $\alpha+\beta$.  These use fairly standard constructions; see
+Figs.\ts\ref{chol-prod} and~\ref{chol-sum}.  Because Isabelle does not
+support abstract type definitions, the isomorphisms between these types and
+their representations are declared explicitly.
+
+Most of the definitions are suppressed, but observe that the projections
+and conditionals are defined as descriptions.  Their properties are easily
+proved using \tdx{select_equality}.  
+
+\begin{figure} 
+\index{*"< symbol}
+\index{*"* symbol}
+\index{*div symbol}
+\index{*mod symbol}
+\index{*"+ symbol}
+\index{*"- symbol}
+\begin{constants}
+  \it symbol    & \it meta-type & \it priority & \it description \\ 
+  \cdx{0}       & $nat$         & & zero \\
+  \cdx{Suc}     & $nat \To nat$ & & successor function\\
+  \cdx{nat_case} & $[\alpha, nat\To\alpha, nat] \To\alpha$
+        & & conditional\\
+  \cdx{nat_rec} & $[nat, \alpha, [nat, \alpha]\To\alpha] \To \alpha$
+        & & primitive recursor\\
+  \cdx{pred_nat} & $(nat\times nat) set$ & & predecessor relation\\
+  \tt *         & $[nat,nat]\To nat$    &  Left 70      & multiplication \\
+  \tt div       & $[nat,nat]\To nat$    &  Left 70      & division\\
+  \tt mod       & $[nat,nat]\To nat$    &  Left 70      & modulus\\
+  \tt +         & $[nat,nat]\To nat$    &  Left 65      & addition\\
+  \tt -         & $[nat,nat]\To nat$    &  Left 65      & subtraction
+\end{constants}
+\subcaption{Constants and infixes}
+
+\begin{ttbox}\makeatother
+\tdx{nat_case_def}  nat_case == (\%a f n. @z. (n=0 --> z=a) & 
+                                       (!x. n=Suc x --> z=f x))
+\tdx{pred_nat_def}  pred_nat == \{p. ? n. p = (n, Suc n)\} 
+\tdx{less_def}      m<n      == (m,n):pred_nat^+
+\tdx{nat_rec_def}   nat_rec n c d == 
+               wfrec pred_nat n (nat_case (\%g.c) (\%m g. d m (g m)))
+
+\tdx{add_def}   m+n     == nat_rec m n (\%u v. Suc v)
+\tdx{diff_def}  m-n     == nat_rec n m (\%u v. nat_rec v 0 (\%x y.x))
+\tdx{mult_def}  m*n     == nat_rec m 0 (\%u v. n + v)
+\tdx{mod_def}   m mod n == wfrec (trancl pred_nat)
+                        m (\%j f. if j<n then j else f (j-n))
+\tdx{quo_def}   m div n == wfrec (trancl pred_nat)
+                        m (\%j f. if j<n then 0 else Suc (f (j-n)))
+\subcaption{Definitions}
+\end{ttbox}
+\caption{Defining {\tt nat}, the type of natural numbers} \label{chol-nat1}
+\end{figure}
+
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{nat_induct}     [| P 0; !!k. [| P k |] ==> P(Suc k) |]  ==> P n
+
+\tdx{Suc_not_Zero}   Suc m ~= 0
+\tdx{inj_Suc}        inj Suc
+\tdx{n_not_Suc_n}    n~=Suc n
+\subcaption{Basic properties}
+
+\tdx{pred_natI}      (n, Suc n) : pred_nat
+\tdx{pred_natE}
+    [| p : pred_nat;  !!x n. [| p = (n, Suc n) |] ==> R |] ==> R
+
+\tdx{nat_case_0}     nat_case a f 0 = a
+\tdx{nat_case_Suc}   nat_case a f (Suc k) = f k
+
+\tdx{wf_pred_nat}    wf pred_nat
+\tdx{nat_rec_0}      nat_rec 0 c h = c
+\tdx{nat_rec_Suc}    nat_rec (Suc n) c h = h n (nat_rec n c h)
+\subcaption{Case analysis and primitive recursion}
+
+\tdx{less_trans}     [| i<j;  j<k |] ==> i<k
+\tdx{lessI}          n < Suc n
+\tdx{zero_less_Suc}  0 < Suc n
+
+\tdx{less_not_sym}   n<m --> ~ m<n 
+\tdx{less_not_refl}  ~ n<n
+\tdx{not_less0}      ~ n<0
+
+\tdx{Suc_less_eq}    (Suc m < Suc n) = (m<n)
+\tdx{less_induct}    [| !!n. [| ! m. m<n --> P m |] ==> P n |]  ==>  P n
+
+\tdx{less_linear}    m<n | m=n | n<m
+\subcaption{The less-than relation}
+\end{ttbox}
+\caption{Derived rules for {\tt nat}} \label{chol-nat2}
+\end{figure}
+
+
+\subsection{The type of natural numbers, {\tt nat}}
+The theory \thydx{Nat} defines the natural numbers in a roundabout but
+traditional way.  The axiom of infinity postulates a type~\tydx{ind} of
+individuals, which is non-empty and closed under an injective operation.
+The natural numbers are inductively generated by choosing an arbitrary
+individual for~0 and using the injective operation to take successors.  As
+usual, the isomorphisms between~\tydx{nat} and its representation are declared
+explicitly.
+
+The definition makes use of a least fixed point operator \cdx{lfp},
+defined using the Knaster-Tarski theorem.  This is used to define the
+operator \cdx{trancl}, for taking the transitive closure of a relation.
+Primitive recursion makes use of \cdx{wfrec}, an operator for recursion
+along arbitrary well-founded relations.  The corresponding theories are
+called {\tt Lfp}, {\tt Trancl} and {\tt WF}\@.  Elsewhere I have described
+similar constructions in the context of set theory~\cite{paulson-set-II}.
+
+Type~\tydx{nat} is postulated to belong to class~\cldx{ord}, which
+overloads $<$ and $\leq$ on the natural numbers.  As of this writing,
+Isabelle provides no means of verifying that such overloading is sensible;
+there is no means of specifying the operators' properties and verifying
+that instances of the operators satisfy those properties.  To be safe, the
+\CHOL\ theory includes no polymorphic axioms asserting general properties of
+$<$ and~$\leq$.
+
+Theory \thydx{Arith} develops arithmetic on the natural numbers.  It
+defines addition, multiplication, subtraction, division, and remainder.
+Many of their properties are proved: commutative, associative and
+distributive laws, identity and cancellation laws, etc.  The most
+interesting result is perhaps the theorem $a \bmod b + (a/b)\times b = a$.
+Division and remainder are defined by repeated subtraction, which requires
+well-founded rather than primitive recursion.  See Figs.\ts\ref{chol-nat1}
+and~\ref{chol-nat2}.
+
+The predecessor relation, \cdx{pred_nat}, is shown to be well-founded.
+Recursion along this relation resembles primitive recursion, but is
+stronger because we are in higher-order logic; using primitive recursion to
+define a higher-order function, we can easily express Ackermann's function, which
+is not primitive recursive \cite[page~104]{thompson91}.
+The transitive closure of \cdx{pred_nat} is~$<$.  Many functions on the
+natural numbers are most easily expressed using recursion along~$<$.
+
+The tactic {\tt\ttindex{nat_ind_tac} "$n$" $i$} performs induction over the
+variable~$n$ in subgoal~$i$.
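+For example (a sketch only), the following goal can be attacked by
+induction on~$m$; the resulting base and successor cases are then typically
+finished off by the simplifier.
+\begin{ttbox}
+goal Arith.thy "m + 0 = m";
+by (nat_ind_tac "m" 1);
+\end{ttbox}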
+
+\begin{figure}
+\index{#@{\tt\#} symbol}
+\index{"@@{\tt\at} symbol}
+\begin{constants}
+  \it symbol & \it meta-type & \it priority & \it description \\
+  \cdx{Nil}     & $\alpha list$ & & empty list\\
+  \tt \#   & $[\alpha,\alpha list]\To \alpha list$ & Right 65 & 
+        list constructor \\
+  \cdx{null}    & $\alpha list \To bool$ & & emptiness test\\
+  \cdx{hd}      & $\alpha list \To \alpha$ & & head \\
+  \cdx{tl}      & $\alpha list \To \alpha list$ & & tail \\
+  \cdx{ttl}     & $\alpha list \To \alpha list$ & & total tail \\
+  \tt\at  & $[\alpha list,\alpha list]\To \alpha list$ & Left 65 & append \\
+  \sdx{mem}  & $[\alpha,\alpha list]\To bool$    &  Left 55   & membership\\
+  \cdx{map}     & $(\alpha\To\beta) \To (\alpha list \To \beta list)$
+        & & mapping functional\\
+  \cdx{filter}  & $(\alpha \To bool) \To (\alpha list \To \alpha list)$
+        & & filter functional\\
+  \cdx{list_all}& $(\alpha \To bool) \To (\alpha list \To bool)$
+        & & forall functional\\
+  \cdx{list_rec}        & $[\alpha list, \beta, [\alpha ,\alpha list,
+\beta]\To\beta] \To \beta$
+        & & list recursor
+\end{constants}
+\subcaption{Constants and infixes}
+
+\begin{center} \tt\frenchspacing
+\begin{tabular}{rrr} 
+  \it external        & \it internal  & \it description \\{}
+  \sdx{[]}            & Nil           & \rm empty list \\{}
+  [$x@1$, $\dots$, $x@n$]  &  $x@1$ \# $\cdots$ \# $x@n$ \# [] &
+        \rm finite list \\{}
+  [$x$:$l$. $P$]  & filter ($\lambda x{.}P$) $l$ & 
+        \rm list comprehension
+\end{tabular}
+\end{center}
+\subcaption{Translations}
+
+\begin{ttbox}
+\tdx{list_induct}    [| P [];  !!x xs. [| P xs |] ==> P(x#xs) |]  ==> P l
+
+\tdx{Cons_not_Nil}   (x # xs) ~= []
+\tdx{Cons_Cons_eq}   ((x # xs) = (y # ys)) = (x=y & xs=ys)
+\subcaption{Induction and freeness}
+\end{ttbox}
+\caption{The theory \thydx{List}} \label{chol-list}
+\end{figure}
+
+\begin{figure}
+\begin{ttbox}\makeatother
+\tdx{list_rec_Nil}    list_rec [] c h = c  
+\tdx{list_rec_Cons}   list_rec (a#l) c h = h a l (list_rec l c h)
+
+\tdx{list_case_Nil}   list_case c h [] = c 
+\tdx{list_case_Cons}  list_case c h (x#xs) = h x xs
+
+\tdx{map_Nil}         map f [] = []
+\tdx{map_Cons}        map f (x \# xs) = f x \# map f xs
+
+\tdx{null_Nil}        null [] = True
+\tdx{null_Cons}       null (x#xs) = False
+
+\tdx{hd_Cons}         hd (x#xs) = x
+\tdx{tl_Cons}         tl (x#xs) = xs
+
+\tdx{ttl_Nil}         ttl [] = []
+\tdx{ttl_Cons}        ttl (x#xs) = xs
+
+\tdx{append_Nil}      [] @ ys = ys
+\tdx{append_Cons}     (x#xs) \at ys = x # xs \at ys
+
+\tdx{mem_Nil}         x mem [] = False
+\tdx{mem_Cons}        x mem (y#ys) = (if y=x then True else x mem ys)
+
+\tdx{filter_Nil}      filter P [] = []
+\tdx{filter_Cons}     filter P (x#xs) = (if P x then x#filter P xs else filter P xs)
+
+\tdx{list_all_Nil}    list_all P [] = True
+\tdx{list_all_Cons}   list_all P (x#xs) = (P x & list_all P xs)
+\end{ttbox}
+\caption{Rewrite rules for lists} \label{chol-list-simps}
+\end{figure}
+
+
+\subsection{The type constructor for lists, {\tt list}}
+\index{*list type}
+
+\CHOL's definition of lists is an example of an experimental method for
+handling recursive data types.  Figure~\ref{chol-list} presents the theory
+\thydx{List}: the basic list operations with their types and properties.
+
+The \sdx{case} construct is defined by the following translation:
+{\dquotes
+\begin{eqnarray*}
+  \begin{array}{r@{\;}l@{}l}
+  "case " e " of" & "[]"    & " => " a\\
+              "|" & x"\#"xs & " => " b
+  \end{array} 
+  & \equiv &
+  "list_case"~a~(\lambda x\;xs.b)~e
+\end{eqnarray*}}%
+The theory includes \cdx{list_rec}, a primitive recursion operator
+for lists.  It is derived from well-founded recursion, a general principle
+that can express arbitrary total recursive functions.
+
+The simpset \ttindex{list_ss} contains, along with additional useful lemmas,
+the basic rewrite rules that appear in Fig.\ts\ref{chol-list-simps}.
+
+The tactic {\tt\ttindex{list_ind_tac} "$xs$" $i$} performs induction over the
+variable~$xs$ in subgoal~$i$.
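+For example (a sketch only), the fact that appending the empty list on the
+right leaves a list unchanged can be proved by induction on~$xs$ followed
+by simplification with {\tt list_ss}.
+\begin{ttbox}\makeatother
+goal List.thy "xs @ [] = xs";
+by (list_ind_tac "xs" 1);
+by (ALLGOALS (asm_simp_tac list_ss));
+\end{ttbox}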
+
+
+\section{Datatype declarations}
+\index{*datatype|(}
+
+\underscoreon
+
+It is often necessary to extend a theory with \ML-like datatypes.  This
+extension consists of the new type, declarations of its constructors and
+rules that describe the new type. The theory definition section {\tt
+  datatype} represents a compact way of doing this.
+
+
+\subsection{Foundations}
+
+A datatype declaration has the following general structure:
+\[ \mbox{\tt datatype}~ (\alpha_1,\dots,\alpha_n)t ~=~
+      C_1~\tau_{11}~\dots~\tau_{1k_1} ~\mid~ \dots ~\mid~
+      C_m~\tau_{m1}~\dots~\tau_{mk_m} 
+\]
+where $\alpha_i$ are type variables, $C_i$ are distinct constructor names and
+$\tau_{ij}$ are one of the following:
+\begin{itemize}
+\item type variables $\alpha_1,\dots,\alpha_n$,
+\item types $(\beta_1,\dots,\beta_l)s$ where $s$ is a previously declared
+  type or type synonym and $\{\beta_1,\dots,\beta_l\} \subseteq
+  \{\alpha_1,\dots,\alpha_n\}$,
+\item the newly defined type $(\alpha_1,\dots,\alpha_n)t$ \footnote{This
+    makes it a recursive type. To ensure that the new type is not empty at
+    least one constructor must consist of only non-recursive type
+    components.}
+\end{itemize}
+If you would like one of the $\tau_{ij}$ to be a complex type expression
+$\tau$ you need to declare a new type synonym $syn = \tau$ first and use
+$syn$ in place of $\tau$. Of course this does not work if $\tau$ mentions the
+recursive type itself, thus ruling out problematic cases like \[ \mbox{\tt
+  datatype}~ t ~=~ C(t \To t) \] together with unproblematic ones like \[
+\mbox{\tt datatype}~ t ~=~ C(t~list). \]
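+
+A minimal sketch of this workaround follows.  The exact concrete syntax of
+type synonym declarations is described in the {\em Reference Manual\/}, and
+the theory {\tt T} with its constructors {\tt Leaf} and {\tt Branch} is
+purely hypothetical.  To use the argument type $\alpha\times\alpha$ in a
+constructor, first introduce a synonym for it:
+\begin{ttbox}
+T = CHOL +
+  types    'a pair = "'a * 'a"
+  datatype 'a t = Leaf 'a | Branch ('a pair)
+end
+\end{ttbox}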
+
+The constructors are automatically defined as functions of their respective
+type:
+\[ C_j : [\tau_{j1},\dots,\tau_{jk_j}] \To (\alpha_1,\dots,\alpha_n)t \]
+These functions have certain {\em freeness} properties:
+\begin{description}
+\item[\tt distinct] They are distinct:
+\[ C_i~x_1~\dots~x_{k_i} \neq C_j~y_1~\dots~y_{k_j} \qquad
+   \mbox{for all}~ i \neq j.
+\]
+\item[\tt inject] They are injective:
+\[ (C_j~x_1~\dots~x_{k_j} = C_j~y_1~\dots~y_{k_j}) =
+   (x_1 = y_1 \land \dots \land x_{k_j} = y_{k_j})
+\]
+\end{description}
+Because the number of inequalities is quadratic in the number of
+constructors, a different method is used if their number exceeds
+a certain value, currently 4. In that case every constructor is mapped to a
+natural number
+\[
+\begin{array}{lcl}
+\mbox{\it t\_ord}(C_1~x_1~\dots~x_{k_1}) & = & 0 \\
+& \vdots & \\
+\mbox{\it t\_ord}(C_m~x_1~\dots~x_{k_m}) & = & m-1
+\end{array}
+\]
+and distinctness of constructors is expressed by:
+\[
+\mbox{\it t\_ord}~x \neq \mbox{\it t\_ord}~y \Imp x \neq y.
+\]
+In addition a structural induction axiom {\tt induct} is provided: 
+\[
+\infer{P x}
+{\begin{array}{lcl}
+\Forall x_1\dots x_{k_1}.
+  \List{P~x_{r_{11}}; \dots; P~x_{r_{1l_1}}} &
+  \Imp  & P(C_1~x_1~\dots~x_{k_1}) \\
+ & \vdots & \\
+\Forall x_1\dots x_{k_m}.
+  \List{P~x_{r_{m1}}; \dots; P~x_{r_{ml_m}}} &
+  \Imp & P(C_m~x_1~\dots~x_{k_m})
+\end{array}}
+\]
+where $\{r_{j1},\dots,r_{jl_j}\} = \{i \in \{1,\dots k_j\} ~\mid~ \tau_{ji}
+= (\alpha_1,\dots,\alpha_n)t \}$, i.e.\ the property $P$ can be assumed for
+all arguments of the recursive type.
+
+The type also comes with an \ML-like \sdx{case}-construct:
+\[
+\begin{array}{rrcl}
+\mbox{\tt case}~e~\mbox{\tt of} & C_1~x_{11}~\dots~x_{1k_1} & \To & e_1 \\
+                           \vdots \\
+                           \mid & C_m~x_{m1}~\dots~x_{mk_m} & \To & e_m
+\end{array}
+\]
+In contrast to \ML, {\em all} constructors must be present, their order is
+fixed, and nested patterns are not supported.
+
+
+\subsection{Defining datatypes}
+
+A datatype is defined in a theory definition file using the keyword {\tt
+  datatype}. The definition following {\tt datatype} must conform to the
+syntax of {\em typedecl} specified in Fig.~\ref{datatype-grammar} and must
+obey the rules in the previous section. As a result the theory is extended
+with the new type, the constructors, and the theorems listed in the previous
+section.
+
+\begin{figure}
+\begin{rail}
+typedecl : typevarlist id '=' (cons + '|')
+         ;
+cons     : (id | string) ( () | '(' (typ + ',') ')' ) ( () | mixfix )
+         ;
+typ      : typevarlist id
+           | tid
+         ;
+typevarlist : () | tid | '(' (tid + ',') ')'
+         ;
+\end{rail}
+\caption{Syntax of datatype declarations}
+\label{datatype-grammar}
+\end{figure}
+
+Reading the theory file produces a structure which, in addition to the usual
+components, contains a structure named $t$ for each datatype $t$ defined in
+the file.\footnote{Otherwise multiple datatypes in the same theory file would
+  lead to name clashes.} Each structure $t$ contains the following elements:
+\begin{ttbox}
+val distinct : thm list
+val inject : thm list
+val induct : thm
+val cases : thm list
+val simps : thm list
+val induct_tac : string -> int -> tactic
+\end{ttbox}
+{\tt distinct}, {\tt inject} and {\tt induct} contain the theorems described
+above. For convenience {\tt distinct} contains inequalities in both
+directions.
+\begin{warn}
+  If there are five or more constructors, the {\em t\_ord} scheme is used for
+  {\tt distinct}.  In this case the current theory must be based on the
+  theory {\tt Arith}; if necessary, include it explicitly as a parent.
+\end{warn}
+The reduction rules of the {\tt case}-construct are in {\tt cases}.  All
+theorems from {\tt distinct}, {\tt inject} and {\tt cases} are combined in
+{\tt simps} for use with the simplifier. The tactic {\verb$induct_tac$~{\em
+    var i}\/} applies structural induction over variable {\em var} to
+subgoal {\em i}.
+
+
+\subsection{Examples}
+
+\subsubsection{The datatype $\alpha~list$}
+
+We want to define the type $\alpha~list$.\footnote{Of course there is a list
+  type in CHOL already. This is only an example.} To do this we have to build
+a new theory that contains the type definition. We start from {\tt CHOL}.
+\begin{ttbox}
+MyList = CHOL +
+  datatype 'a list = Nil | Cons 'a ('a list)
+end
+\end{ttbox}
+After loading the theory (\verb$use_thy "MyList"$), we can prove
+$Cons~x~xs\neq xs$.  First we build a suitable simpset for the simplifier:
+\begin{ttbox}
+val mylist_ss = HOL_ss addsimps MyList.list.simps;
+goal MyList.thy "!x. Cons x xs ~= xs";
+{\out Level 0}
+{\out ! x. Cons x xs ~= xs}
+{\out  1. ! x. Cons x xs ~= xs}
+\end{ttbox}
+This can be proved by the structural induction tactic:
+\begin{ttbox}
+by (MyList.list.induct_tac "xs" 1);
+{\out Level 1}
+{\out ! x. Cons x xs ~= xs}
+{\out  1. ! x. Cons x Nil ~= Nil}
+{\out  2. !!a list.}
+{\out        ! x. Cons x list ~= list ==>}
+{\out        ! x. Cons x (Cons a list) ~= Cons a list}
+\end{ttbox}
+The first subgoal can be proved with the simplifier and the distinctness
+axioms which are part of \verb$mylist_ss$.
+\begin{ttbox}
+by (simp_tac mylist_ss 1);
+{\out Level 2}
+{\out ! x. Cons x xs ~= xs}
+{\out  1. !!a list.}
+{\out        ! x. Cons x list ~= list ==>}
+{\out        ! x. Cons x (Cons a list) ~= Cons a list}
+\end{ttbox}
+Using the freeness axioms we can quickly prove the remaining goal.
+\begin{ttbox}
+by (asm_simp_tac mylist_ss 1);
+{\out Level 3}
+{\out ! x. Cons x xs ~= xs}
+{\out No subgoals!}
+\end{ttbox}
+Because both subgoals were proved by almost the same tactic, we could have
+proved them in a single step using
+\begin{ttbox}
+by (ALLGOALS (asm_simp_tac mylist_ss));
+\end{ttbox}
+
+
+\subsubsection{The datatype $\alpha~list$ with mixfix syntax}
+
+In this example we define the type $\alpha~list$ again but this time we want
+to write {\tt []} instead of {\tt Nil} and we want to use the infix operator
+\verb|#| instead of {\tt Cons}. To do this we simply add mixfix annotations
+after the constructor declarations as follows:
+\begin{ttbox}
+MyList = CHOL +
+  datatype 'a list = "[]" ("[]") 
+                   | "#" 'a ('a list) (infixr 70)
+end
+\end{ttbox}
+Now the theorem in the previous example can be written \verb|x#xs ~= xs|. The
+proof is the same.
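+Spelled out, the script of the previous example carries over unchanged except
+for the goal:
+\begin{ttbox}
+val mylist_ss = HOL_ss addsimps MyList.list.simps;
+goal MyList.thy "!x. x#xs ~= xs";
+by (MyList.list.induct_tac "xs" 1);
+by (ALLGOALS (asm_simp_tac mylist_ss));
+\end{ttbox}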
+
+
+\subsubsection{A datatype for weekdays}
+
+This example shows a datatype that consists of more than four constructors:
+\begin{ttbox}
+Days = Arith +
+  datatype days = Mo | Tu | We | Th | Fr | Sa | So
+end
+\end{ttbox}
+Because there are more than four constructors, the theory must be based on
+{\tt Arith}. Distinctness of the constructors is expressed via the function
+\verb|days_ord|. Although the expression \verb|Mo ~= Tu| is not directly
+contained in {\tt distinct}, it can be proved by the simplifier if
+\verb$arith_ss$ is used:
+\begin{ttbox}
+val days_ss = arith_ss addsimps Days.days.simps;
+
+goal Days.thy "Mo ~= Tu";
+by (simp_tac days_ss 1);
+\end{ttbox}
+Note that usually it is not necessary to derive these inequalities explicitly
+because the simplifier will dispose of them automatically.
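+In effect the reasoning runs via the {\em t\_ord} scheme: by the distinctness
+rule it suffices to show
+\[ \mbox{\it days\_ord}~Mo \neq \mbox{\it days\_ord}~Tu, \]
+the defining equations reduce the two sides to $0$ and $1$, and the
+arithmetic facts in \verb$arith_ss$ establish $0 \neq 1$.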
+
+\subsection{Primitive recursive functions}
+\index{primitive recursion|(}
+\index{*primrec|(}
+
+Datatypes come with a uniform way of defining functions, {\bf primitive
+  recursion}. Although it is possible to define primitive recursive functions
+by asserting their reduction rules as new axioms, e.g.\
+\begin{ttbox}
+Append = MyList +
+consts app :: "['a list,'a list] => 'a list"
+rules 
+   app_Nil   "app [] ys = ys"
+   app_Cons  "app (x#xs) ys = x#(app xs ys)"
+end
+\end{ttbox}
+this carries with it the danger of accidentally asserting an inconsistency,
+as in \verb$app [] ys = us$. Therefore primitive recursive functions on
+datatypes can be defined with a special syntax:
+\begin{ttbox}
+Append = MyList +
+consts app :: "['a list,'a list] => 'a list"
+primrec app MyList.list
+   app_Nil   "app [] ys = ys"
+   app_Cons  "app (x#xs) ys = x#(app xs ys)"
+end
+\end{ttbox}
+The system will now check that the two rules \verb$app_Nil$ and
+\verb$app_Cons$ do indeed form a primitive recursive definition, thus
+ensuring that consistency is maintained. For example
+\begin{ttbox}
+primrec app MyList.list
+    app_Nil   "app [] ys = us"
+\end{ttbox}
+is rejected:
+\begin{ttbox}
+Extra variables on rhs
+\end{ttbox}
+\bigskip
+
+The general form of a primitive recursive definition is
+\begin{ttbox}
+primrec {\it function} {\it type}
+    {\it reduction rules}
+\end{ttbox}
+where
+\begin{itemize}
+\item {\it function} is the name of the function, either as an {\it id} or a
+  {\it string}. The function must already have been declared.
+\item {\it type} is the name of the datatype, either as an {\it id} or in the
+  long form {\it Thy.t}, where {\it Thy} is the name of the parent theory the
+  datatype was declared in, and $t$ the name of the datatype. The long form
+  is required if the {\tt datatype} and the {\tt primrec} sections are in
+  different theories.
+\item {\it reduction rules} specify one or more named equations of the form
+  {\it id\/}~{\it string}, where the identifier gives the name of the rule in
+  the result structure, and {\it string} is a reduction rule of the form \[
+  f~x_1~\dots~x_m~(C~y_1~\dots~y_k)~z_1~\dots~z_n = r \] such that $C$ is a
+  constructor of the datatype, $r$ contains only the free variables on the
+  left-hand side, and all recursive calls in $r$ are of the form
+  $f~\dots~y_i~\dots$ for some $i$. There must be exactly one reduction
+  rule for each constructor.
+\end{itemize}
+A theory file may contain any number of {\tt primrec} sections, which may be
+intermixed with other declarations.
+
+For the consistency-sensitive user it may be reassuring to know that {\tt
+  primrec} does not assert the reduction rules as new axioms but derives them
+as theorems from an explicit definition of the recursive function in terms of
+a recursion operator on the datatype.
+
+The primitive recursive function can also use infix or mixfix syntax:
+\begin{ttbox}
+Append = MyList +
+consts "@"  :: "['a list,'a list] => 'a list"  (infixr 60)
+primrec "op @" MyList.list
+   app_Nil   "[] @ ys = ys"
+   app_Cons  "(x#xs) @ ys = x#(xs @ ys)"
+end
+\end{ttbox}
+
+The reduction rules become part of the ML structure \verb$Append$ and can
+be used to prove theorems about the function:
+\begin{ttbox}
+val append_ss = HOL_ss addsimps [Append.app_Nil,Append.app_Cons];
+
+goal Append.thy "(xs @ ys) @ zs = xs @ (ys @ zs)";
+by (MyList.list.induct_tac "xs" 1);
+by (ALLGOALS(asm_simp_tac append_ss));
+\end{ttbox}
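+A similar script should prove, for instance, the right identity law:
+\begin{ttbox}
+goal Append.thy "xs @ [] = xs";
+by (MyList.list.induct_tac "xs" 1);
+by (ALLGOALS (asm_simp_tac append_ss));
+\end{ttbox}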
+
+%Note that underdefined primitive recursive functions are allowed:
+%\begin{ttbox}
+%Tl = MyList +
+%consts tl  :: "'a list => 'a list"
+%primrec tl MyList.list
+%   tl_Cons "tl(x#xs) = xs"
+%end
+%\end{ttbox}
+%Nevertheless {\tt tl} is total, although we do not know what the result of
+%\verb$tl([])$ is.
+
+\index{primitive recursion|)}
+\index{*primrec|)}
+\index{*datatype|)}
+
+
+\section{Inductive and coinductive definitions}
+\index{*inductive|(}
+\index{*coinductive|(}
+
+An {\bf inductive definition} specifies the least set closed under given
+rules.  For example, a structural operational semantics is an inductive
+definition of an evaluation relation.  Dually, a {\bf coinductive
+  definition} specifies the greatest set closed under given rules.  An
+important example is using bisimulation relations to formalize equivalence
+of processes and infinite data structures.
+
+A theory file may contain any number of inductive and coinductive
+definitions.  They may be intermixed with other declarations; in
+particular, the (co)inductive sets {\bf must} be declared separately as
+constants, and may have mixfix syntax or be subject to syntax translations.
+
+Each (co)inductive definition adds definitions to the theory and also
+proves some theorems.  Each definition creates an ML structure, which is a
+substructure of the main theory structure.
+
+This package is derived from the ZF one, described in a
+separate paper,\footnote{It appeared in CADE~\cite{paulson-CADE} and a
+  longer version is distributed with Isabelle.} which you should refer to
+in case of difficulties.  The package is simpler than ZF's, thanks to \CHOL's
+automatic type-checking.  The type of the (co)inductive set determines the
+domain of the fixedpoint definition, and the package does not use inference
+rules for type-checking.
+
+
+\subsection{The result structure}
+Many of the result structure's components have been discussed in the paper;
+others are self-explanatory.
+\begin{description}
+\item[\tt thy] is the new theory containing the recursive sets.
+
+\item[\tt defs] is the list of definitions of the recursive sets.
+
+\item[\tt mono] is a monotonicity theorem for the fixedpoint operator.
+
+\item[\tt unfold] is a fixedpoint equation for the recursive set (the union of
+the recursive sets, in the case of mutual recursion).
+
+\item[\tt intrs] is the list of introduction rules, now proved as theorems, for
+the recursive sets.  The rules are also available individually, using the
+names given them in the theory file. 
+
+\item[\tt elim] is the elimination rule.
+
+\item[\tt mk\_cases] is a function to create simplified instances of {\tt
+elim}, using freeness reasoning on some underlying datatype.
+\end{description}
+
+For an inductive definition, the result structure contains two induction rules,
+{\tt induct} and \verb|mutual_induct|.  For a coinductive definition, it
+contains the rule \verb|coinduct|.
+
+Figure~\ref{def-result-fig} summarizes the two result signatures,
+specifying the types of all these components.
+
+\begin{figure}
+\begin{ttbox}
+sig
+val thy          : theory
+val defs         : thm list
+val mono         : thm
+val unfold       : thm
+val intrs        : thm list
+val elim         : thm
+val mk_cases     : thm list -> string -> thm
+{\it(Inductive definitions only)} 
+val induct       : thm
+val mutual_induct: thm
+{\it(Coinductive definitions only)}
+val coinduct     : thm
+end
+\end{ttbox}
+\hrule
+\caption{The result of a (co)inductive definition} \label{def-result-fig}
+\end{figure}
+
+\subsection{The syntax of a (co)inductive definition}
+An inductive definition has the form
+\begin{ttbox}
+inductive    {\it inductive sets}
+  intrs      {\it introduction rules}
+  monos      {\it monotonicity theorems}
+  con_defs   {\it constructor definitions}
+\end{ttbox}
+A coinductive definition is identical, except that it starts with the keyword
+{\tt coinductive}.  
+
+The {\tt monos} and {\tt con\_defs} sections are optional.  If present,
+each is specified as a string, which must be a valid ML expression of type
+{\tt thm list}.  It is simply inserted into the {\tt .thy.ML} file; if it
+is ill-formed, it will trigger ML error messages.  You can then inspect the
+file in your directory.
+
+\begin{itemize}
+\item The {\it inductive sets} are specified by one or more strings.
+
+\item The {\it introduction rules} specify one or more introduction rules in
+  the form {\it ident\/}~{\it string}, where the identifier gives the name of
+  the rule in the result structure.
+
+\item The {\it monotonicity theorems} are required for each operator
+  applied to a recursive set in the introduction rules.  There {\bf must}
+  be a theorem of the form $A\subseteq B\Imp M(A)\subseteq M(B)$, for each
+  premise $t\in M(R_i)$ in an introduction rule!
+
+\item The {\it constructor definitions} contain definitions of constants
+  appearing in the introduction rules.  In most cases it can be omitted.
+\end{itemize}
+
+The package has a few notable restrictions:
+\begin{itemize}
+\item The theory must separately declare the recursive sets as
+  constants.
+
+\item The names of the recursive sets must be identifiers, not infix
+operators.  
+
+\item Side-conditions must not be conjunctions.  However, an introduction rule
+may contain any number of side-conditions.
+
+\item Side-conditions of the form $x=t$, where the variable~$x$ does not
+  occur in~$t$, will be substituted through the rule \verb|mutual_induct|.
+\end{itemize}
+
+
+\subsection{Example of an inductive definition}
+Two declarations, included in a theory file, define the finite powerset
+operator.  First we declare the constant~{\tt Fin}.  Then we declare it
+inductively, with two introduction rules:
+\begin{ttbox}
+consts Fin :: "'a set => 'a set set"
+inductive "Fin A"
+  intrs
+    emptyI  "{} : Fin A"
+    insertI "[| a: A;  b: Fin A |] ==> insert a b : Fin A"
+\end{ttbox}
+The resulting theory structure contains a substructure, called~{\tt Fin}.
+It contains the {\tt Fin}$~A$ introduction rules as the list {\tt Fin.intrs},
+and also individually as {\tt Fin.emptyI} and {\tt Fin.insertI}.  The induction
+rule is {\tt Fin.induct}.
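+As a small illustration (assuming the usual {\tt goal}/{\tt prems} idiom),
+the introduction rules alone should suffice for simple membership goals:
+\begin{ttbox}
+val prems = goal Fin.thy "a : A ==> insert a {} : Fin A";
+by (REPEAT (ares_tac (prems @ Fin.intrs) 1));
+\end{ttbox}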
+
+For another example, here is a theory file defining the accessible part of a
+relation.  The main thing to note is the use of~{\tt Pow} in the sole
+introduction rule, and the corresponding mention of the rule
+\verb|Pow_mono| in the {\tt monos} list.  The paper discusses a ZF version
+of this example in more detail.
+\begin{ttbox}
+Acc = WF + 
+consts pred :: "['b, ('a * 'b)set] => 'a set"   (*Set of predecessors*)
+       acc  :: "('a * 'a)set => 'a set"         (*Accessible part*)
+defs   pred_def  "pred x r == {y. (y,x):r}"
+inductive "acc r"
+  intrs
+     pred "pred a r: Pow(acc r) ==> a: acc r"
+  monos   "[Pow_mono]"
+end
+\end{ttbox}
+The CHOL distribution contains many other inductive definitions, such as the
+theory {\tt CHOL/ex/PropLog.thy} and the directory {\tt CHOL/IMP}.  The
+theory {\tt CHOL/ex/LList.thy} contains coinductive definitions.
+
+\index{*coinductive|)} \index{*inductive|)} \underscoreoff
+
+
+\section{The examples directories}
+Directory {\tt CHOL/Subst} contains Martin Coen's mechanisation of a theory of
+substitutions and unifiers.  It is based on Paulson's previous
+mechanisation in {\LCF}~\cite{paulson85} of Manna and Waldinger's
+theory~\cite{mw81}. 
+
+Directory {\tt CHOL/IMP} contains a mechanised version of a semantic
+equivalence proof taken from Winskel~\cite{winskel93}.  It formalises the
+denotational and operational semantics of a simple while-language, then
+proves the two equivalent.  It contains several datatype and inductive
+definitions, and demonstrates their use.
+
+Directory {\tt CHOL/ex} contains other examples and experimental proofs in
+{\CHOL}.  Here is an overview of the more interesting files.
+\begin{itemize}
+\item File {\tt cla.ML} demonstrates the classical reasoner on over sixty
+  predicate calculus theorems, ranging from simple tautologies to
+  moderately difficult problems involving equality and quantifiers.
+
+\item File {\tt meson.ML} contains an experimental implementation of the {\sc
+    meson} proof procedure, inspired by Plaisted~\cite{plaisted90}.  It is
+  much more powerful than Isabelle's classical reasoner.  But it is less
+  useful in practice because it works only for pure logic; it does not
+  accept derived rules for the set theory primitives, for example.
+
+\item File {\tt mesontest.ML} contains test data for the {\sc meson} proof
+  procedure.  These are mostly taken from Pelletier \cite{pelletier86}.
+
+\item File {\tt set.ML} proves Cantor's Theorem, which is presented in
+  \S\ref{sec:chol-cantor} below, and the Schr\"oder-Bernstein Theorem.
+
+\item Theories {\tt InSort} and {\tt Qsort} prove correctness properties of
+  insertion sort and quick sort.
+
+\item The definition of lazy lists demonstrates methods for handling
+  infinite data structures and coinduction in higher-order
+  logic~\cite{paulson-coind}.  Theory \thydx{LList} defines an operator for
+  corecursion on lazy lists, which is used to define a few simple functions
+  such as map and append.  Corecursion cannot easily define operations such
+  as filter, which can compute indefinitely before yielding the next
+  element (if any!) of the lazy list.  A coinduction principle is defined
+  for proving equations on lazy lists.
+
+\item Theory {\tt PropLog} proves the soundness and completeness of
+  classical propositional logic, given a truth table semantics.  The only
+  connective is $\imp$.  A Hilbert-style axiom system is specified, and its
+  set of theorems defined inductively.  A similar proof in \ZF{} is
+  described elsewhere~\cite{paulson-set-II}.
+
+\item Theory {\tt Term} develops an experimental recursive type definition;
+  the recursion goes through the type constructor~\tydx{list}.
+
+\item Theory {\tt Simult} constructs mutually recursive sets of trees and
+  forests, including induction and recursion rules.
+
+\item Theory {\tt MT} contains Jacob Frost's formalization~\cite{frost93} of
+  Milner and Tofte's coinduction example~\cite{milner-coind}.  This
+  substantial proof concerns the soundness of a type system for a simple
+  functional language.  The semantics of recursion is given by a cyclic
+  environment, which makes a coinductive argument appropriate.
+\end{itemize}
+
+
+\goodbreak
+\section{Example: Cantor's Theorem}\label{sec:chol-cantor}
+Cantor's Theorem states that every set has more subsets than it has
+elements.  It has become a favourite example in higher-order logic since
+it is so easily expressed:
+\[  \forall f::[\alpha,\alpha]\To bool. \exists S::\alpha\To bool.
+    \forall x::\alpha. f~x \not= S 
+\] 
+%
+Viewing types as sets, $\alpha\To bool$ represents the powerset
+of~$\alpha$.  This version states that for every function from $\alpha$ to
+its powerset, some subset is outside its range.  
+
+The Isabelle proof uses \CHOL's set theory, with the type $\alpha\,set$ and
+the operator \cdx{range}.  The set~$S$ is given as an unknown instead of a
+quantified variable so that we may inspect the subset found by the proof.
+\begin{ttbox}
+goal Set.thy "~ ?S : range(f :: 'a=>'a set)";
+{\out Level 0}
+{\out ~ ?S : range f}
+{\out  1. ~ ?S : range f}
+\end{ttbox}
+The first two steps are routine.  The rule \tdx{rangeE} replaces
+$\Var{S}\in {\tt range}~f$ by $\Var{S}=f~x$ for some~$x$.
+\begin{ttbox}
+by (resolve_tac [notI] 1);
+{\out Level 1}
+{\out ~ ?S : range f}
+{\out  1. ?S : range f ==> False}
+\ttbreak
+by (eresolve_tac [rangeE] 1);
+{\out Level 2}
+{\out ~ ?S : range f}
+{\out  1. !!x. ?S = f x ==> False}
+\end{ttbox}
+Next, we apply \tdx{equalityCE}, reasoning that since $\Var{S}=f~x$,
+we have $\Var{c}\in \Var{S}$ if and only if $\Var{c}\in f~x$ for
+any~$\Var{c}$.
+\begin{ttbox}
+by (eresolve_tac [equalityCE] 1);
+{\out Level 3}
+{\out ~ ?S : range f}
+{\out  1. !!x. [| ?c3 x : ?S; ?c3 x : f x |] ==> False}
+{\out  2. !!x. [| ~ ?c3 x : ?S; ~ ?c3 x : f x |] ==> False}
+\end{ttbox}
+Now we use a bit of creativity.  Suppose that~$\Var{S}$ has the form of a
+comprehension.  Then $\Var{c}\in\{x.\Var{P}~x\}$ implies
+$\Var{P}~\Var{c}$.   Destruct-resolution using \tdx{CollectD}
+instantiates~$\Var{S}$ and creates the new assumption.
+\begin{ttbox}
+by (dresolve_tac [CollectD] 1);
+{\out Level 4}
+{\out ~ \{x. ?P7 x\} : range f}
+{\out  1. !!x. [| ?c3 x : f x; ?P7(?c3 x) |] ==> False}
+{\out  2. !!x. [| ~ ?c3 x : \{x. ?P7 x\}; ~ ?c3 x : f x |] ==> False}
+\end{ttbox}
+Forcing a contradiction between the two assumptions of subgoal~1 completes
+the instantiation of~$S$.  It is now the set $\{x. x\not\in f~x\}$, which
+is the standard diagonal construction.
+\begin{ttbox}
+by (contr_tac 1);
+{\out Level 5}
+{\out ~ \{x. ~ x : f x\} : range f}
+{\out  1. !!x. [| ~ x : \{x. ~ x : f x\}; ~ x : f x |] ==> False}
+\end{ttbox}
+The rest should be easy.  To apply \tdx{CollectI} to the negated
+assumption, we employ \ttindex{swap_res_tac}:
+\begin{ttbox}
+by (swap_res_tac [CollectI] 1);
+{\out Level 6}
+{\out ~ \{x. ~ x : f x\} : range f}
+{\out  1. !!x. [| ~ x : f x; ~ False |] ==> ~ x : f x}
+\ttbreak
+by (assume_tac 1);
+{\out Level 7}
+{\out ~ \{x. ~ x : f x\} : range f}
+{\out No subgoals!}
+\end{ttbox}
+How much creativity is required?  As it happens, Isabelle can prove this
+theorem automatically.  The classical set \ttindex{set_cs} contains rules
+for most of the constructs of \CHOL's set theory.  We must augment it with
+\tdx{equalityCE} to break up set equalities, and then apply best-first
+search.  Depth-first search would diverge, but best-first search
+successfully navigates through the large search space.
+\index{search!best-first}
+\begin{ttbox}
+choplev 0;
+{\out Level 0}
+{\out ~ ?S : range f}
+{\out  1. ~ ?S : range f}
+\ttbreak
+by (best_tac (set_cs addSEs [equalityCE]) 1);
+{\out Level 1}
+{\out ~ \{x. ~ x : f x\} : range f}
+{\out No subgoals!}
+\end{ttbox}
+
+\index{higher-order logic|)}
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/Logics/HOL.tex	Tue May 09 10:43:19 1995 +0200
@@ -0,0 +1,1898 @@
+%% $Id$
+\chapter{Higher-Order Logic}
+\index{higher-order logic|(}
+\index{CHOL system@{\sc chol} system}
+
+The theory~\thydx{CHOL} implements higher-order logic with curried
+function application.  It is based on Gordon's~{\sc hol}
+system~\cite{mgordon-hol}, which itself is based on Church's original
+paper~\cite{church40}.  Andrews's book~\cite{andrews86} is a full
+description of higher-order logic.  Experience with the {\sc hol}
+system has demonstrated that higher-order logic is useful for hardware
+verification; beyond this, it is widely applicable in many areas of
+mathematics.  It is weaker than {\ZF} set theory but for most
+applications this does not matter.  If you prefer {\ML} to Lisp, you
+will probably prefer \CHOL\ to~{\ZF}.
+
+\CHOL\ is a modified version of Isabelle's \HOL\ and uses curried function
+application. Therefore the expression $f(a,b)$ (which in \HOL\ means
+``f applied to the two arguments $a$ and $b$'') means ``f applied to
+the pair $(a,b)$'' in \CHOL. N.B. that ordered pairs in \HOL\ are written as
+$<a,b>$ while in \CHOL\ the syntax $(a,b)$ is used.  Previous
+releases of Isabelle also included a different version of~\HOL, with
+explicit type inference rules~\cite{paulson-COLOG}.  This version no
+longer exists, but \thydx{ZF} supports a similar style of reasoning.
+
+\CHOL\ has a distinct feel, compared with {\ZF} and {\CTT}.  It
+identifies object-level types with meta-level types, taking advantage of
+Isabelle's built-in type checker.  It identifies object-level functions
+with meta-level functions, so it uses Isabelle's operations for abstraction
+and application.  There is no `apply' operator: function applications are
+written as simply~$f~a$ rather than $f{\tt`}a$.
+
+These identifications allow Isabelle to support \CHOL\ particularly nicely,
+but they also mean that \CHOL\ requires more sophistication from the user
+--- in particular, an understanding of Isabelle's type system.  Beginners
+should work with {\tt show_types} set to {\tt true}.  Gain experience by
+working in first-order logic before attempting to use higher-order logic.
+This chapter assumes familiarity with~{\FOL{}}.
+
+
+\begin{figure} 
+\begin{center}
+\begin{tabular}{rrr} 
+  \it name      &\it meta-type  & \it description \\ 
+  \cdx{Trueprop}& $bool\To prop$                & coercion to $prop$\\
+  \cdx{not}     & $bool\To bool$                & negation ($\neg$) \\
+  \cdx{True}    & $bool$                        & tautology ($\top$) \\
+  \cdx{False}   & $bool$                        & absurdity ($\bot$) \\
+  \cdx{If}      & $[bool,\alpha,\alpha]\To\alpha::term$ & conditional \\
+  \cdx{Inv}     & $(\alpha\To\beta)\To(\beta\To\alpha)$ & function inversion\\
+  \cdx{Let}     & $[\alpha,\alpha\To\beta]\To\beta$ & let binder
+\end{tabular}
+\end{center}
+\subcaption{Constants}
+
+\begin{center}
+\index{"@@{\tt\at} symbol}
+\index{*"! symbol}\index{*"? symbol}
+\index{*"?"! symbol}\index{*"E"X"! symbol}
+\begin{tabular}{llrrr} 
+  \it symbol &\it name     &\it meta-type & \it description \\
+  \tt\at & \cdx{Eps}  & $(\alpha\To bool)\To\alpha::term$ & 
+        Hilbert description ($\epsilon$) \\
+  {\tt!~} or \sdx{ALL}  & \cdx{All}  & $(\alpha::term\To bool)\To bool$ & 
+        universal quantifier ($\forall$) \\
+  {\tt?~} or \sdx{EX}   & \cdx{Ex}   & $(\alpha::term\To bool)\To bool$ & 
+        existential quantifier ($\exists$) \\
+  {\tt?!} or {\tt EX!}  & \cdx{Ex1}  & $(\alpha::term\To bool)\To bool$ & 
+        unique existence ($\exists!$)
+\end{tabular}
+\end{center}
+\subcaption{Binders} 
+
+\begin{center}
+\index{*"= symbol}
+\index{&@{\tt\&} symbol}
+\index{*"| symbol}
+\index{*"-"-"> symbol}
+\begin{tabular}{rrrr} 
+  \it symbol    & \it meta-type & \it priority & \it description \\ 
+  \sdx{o}       & $[\beta\To\gamma,\alpha\To\beta]\To (\alpha\To\gamma)$ & 
+        Right 50 & composition ($\circ$) \\
+  \tt =         & $[\alpha::term,\alpha]\To bool$ & Left 50 & equality ($=$) \\
+  \tt <         & $[\alpha::ord,\alpha]\To bool$ & Left 50 & less than ($<$) \\
+  \tt <=        & $[\alpha::ord,\alpha]\To bool$ & Left 50 & 
+                less than or equals ($\leq$)\\
+  \tt \&        & $[bool,bool]\To bool$ & Right 35 & conjunction ($\conj$) \\
+  \tt |         & $[bool,bool]\To bool$ & Right 30 & disjunction ($\disj$) \\
+  \tt -->       & $[bool,bool]\To bool$ & Right 25 & implication ($\imp$)
+\end{tabular}
+\end{center}
+\subcaption{Infixes}
+\caption{Syntax of {\tt CHOL}} \label{chol-constants}
+\end{figure}
+
+
+\begin{figure}
+\index{*let symbol}
+\index{*in symbol}
+\dquotes
+\[\begin{array}{rclcl}
+    term & = & \hbox{expression of class~$term$} \\
+         & | & "\at~" id~id^* " . " formula \\
+         & | & 
+    \multicolumn{3}{l}{"let"~id~"="~term";"\dots";"~id~"="~term~"in"~term} \\
+         & | & 
+    \multicolumn{3}{l}{"if"~formula~"then"~term~"else"~term} \\[2ex]
+ formula & = & \hbox{expression of type~$bool$} \\
+         & | & term " = " term \\
+         & | & term " \ttilde= " term \\
+         & | & term " < " term \\
+         & | & term " <= " term \\
+         & | & "\ttilde\ " formula \\
+         & | & formula " \& " formula \\
+         & | & formula " | " formula \\
+         & | & formula " --> " formula \\
+         & | & "!~~~" id~id^* " . " formula 
+         & | & "ALL~" id~id^* " . " formula \\
+         & | & "?~~~" id~id^* " . " formula 
+         & | & "EX~~" id~id^* " . " formula \\
+         & | & "?!~~" id~id^* " . " formula 
+         & | & "EX!~" id~id^* " . " formula
+  \end{array}
+\]
+\caption{Full grammar for \CHOL} \label{chol-grammar}
+\end{figure} 
+
+
+\section{Syntax}
+The type class of higher-order terms is called~\cldx{term}.  Type variables
+range over this class by default.  The equality symbol and quantifiers are
+polymorphic over class {\tt term}.
+
+Class \cldx{ord} consists of all ordered types; the relations $<$ and
+$\leq$ are polymorphic over this class, as are the functions
+\cdx{mono}, \cdx{min} and \cdx{max}.  Three other
+type classes --- \cldx{plus}, \cldx{minus} and \cldx{times} --- permit
+overloading of the operators {\tt+}, {\tt-} and {\tt*}.  In particular,
+{\tt-} is overloaded for set difference and subtraction.
+\index{*"+ symbol}
+\index{*"- symbol}
+\index{*"* symbol}
+
+Figure~\ref{chol-constants} lists the constants (including infixes and
+binders), while Fig.\ts\ref{chol-grammar} presents the grammar of
+higher-order logic.  Note that $a$\verb|~=|$b$ is translated to
+$\neg(a=b)$.
+
+\begin{warn}
+  \CHOL\ has no if-and-only-if connective; logical equivalence is expressed
+  using equality.  But equality has a high priority, as befitting a
+  relation, while if-and-only-if typically has the lowest priority.  Thus,
+  $\neg\neg P=P$ abbreviates $\neg\neg (P=P)$ and not $(\neg\neg P)=P$.
+  When using $=$ to mean logical equivalence, enclose both operands in
+  parentheses.
+\end{warn}
+
+\subsection{Types}\label{CHOL-types}
+The type of formulae, \tydx{bool}, belongs to class \cldx{term}; thus,
+formulae are terms.  The built-in type~\tydx{fun}, which constructs function
+types, is overloaded with arity {\tt(term,term)term}.  Thus, $\sigma\To\tau$
+belongs to class~{\tt term} if $\sigma$ and~$\tau$ do, allowing quantification
+over functions.
+
+Types in \CHOL\ must be non-empty; otherwise the quantifier rules would be
+unsound.  I have commented on this elsewhere~\cite[\S7]{paulson-COLOG}.
+
+\index{type definitions}
+Gordon's {\sc hol} system supports {\bf type definitions}.  A type is
+defined by exhibiting an existing type~$\sigma$, a predicate~$P::\sigma\To
+bool$, and a theorem of the form $\exists x::\sigma.P~x$.  Thus~$P$
+specifies a non-empty subset of~$\sigma$, and the new type denotes this
+subset.  New function constants are generated to establish an isomorphism
+between the new type and the subset.  If type~$\sigma$ involves type
+variables $\alpha@1$, \ldots, $\alpha@n$, then the type definition creates
+a type constructor $(\alpha@1,\ldots,\alpha@n)ty$ rather than a particular
+type.  Melham~\cite{melham89} discusses type definitions at length, with
+examples. 
+
+Isabelle does not support type definitions at present.  Instead, they are
+mimicked by explicit definitions of isomorphism functions.  The definitions
+should be supported by theorems of the form $\exists x::\sigma.P~x$, but
+Isabelle cannot enforce this.
+
+
+\subsection{Binders}
+Hilbert's {\bf description} operator~$\epsilon x.P[x]$ stands for some~$a$
+satisfying~$P[a]$, if such exists.  Since all terms in \CHOL\ denote
+something, a description is always meaningful, but we do not know its value
+unless $P[x]$ defines it uniquely.  We may write descriptions as
+\cdx{Eps}($P$) or use the syntax
+\hbox{\tt \at $x$.$P[x]$}.
+
+Existential quantification is defined by
+\[ \exists x.P~x \;\equiv\; P(\epsilon x.P~x). \]
+The unique existence quantifier, $\exists!x.P[x]$, is defined in terms
+of~$\exists$ and~$\forall$.  An Isabelle binder, it admits nested
+quantifications.  For instance, $\exists!x y.P~x~y$ abbreviates
+$\exists!x. \exists!y.P~x~y$; note that this does not mean that there
+exists a unique pair $(x,y)$ satisfying~$P~x~y$.
+
+\index{*"! symbol}\index{*"? symbol}\index{CHOL system@{\sc hol} system}
+Quantifiers have two notations.  As in Gordon's {\sc hol} system, \CHOL\
+uses~{\tt!}\ and~{\tt?}\ to stand for $\forall$ and $\exists$.  The
+existential quantifier must be followed by a space; thus {\tt?x} is an
+unknown, while \verb'? x.f x=y' is a quantification.  Isabelle's usual
+notation for quantifiers, \sdx{ALL} and \sdx{EX}, is also
+available.  Both notations are accepted for input.  The {\ML} reference
+\ttindexbold{HOL_quantifiers} governs the output notation.  If set to {\tt
+true}, then~{\tt!}\ and~{\tt?}\ are displayed; this is the default.  If set
+to {\tt false}, then~{\tt ALL} and~{\tt EX} are displayed.
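+For example, to have quantifiers printed as {\tt ALL} and {\tt EX}, one would
+presumably reset the flag:
+\begin{ttbox}
+HOL_quantifiers := false;
+\end{ttbox}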
+
+All these binders have priority 10. 
+
+
+\subsection{The \sdx{let} and \sdx{case} constructions}
+Local abbreviations can be introduced by a {\tt let} construct whose
+syntax appears in Fig.\ts\ref{chol-grammar}.  Internally it is translated into
+the constant~\cdx{Let}.  It can be expanded by rewriting with its
+definition, \tdx{Let_def}.
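+For instance, the term \hbox{\tt let x = s in u} abbreviates
+${\tt Let}~s~(\lambda x.u)$, and rewriting with \tdx{Let_def} reduces it to
+$u$ with $s$ substituted for~$x$.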
+
+\CHOL\ also defines the basic syntax
+\[\dquotes"case"~e~"of"~c@1~"=>"~e@1~"|" \dots "|"~c@n~"=>"~e@n\] 
+as a uniform means of expressing {\tt case} constructs.  Therefore {\tt
+  case} and \sdx{of} are reserved words.  However, so far this is mere
+syntax and has no logical meaning.  By declaring translations, you can
+cause instances of the {\tt case} construct to denote applications of
+particular case operators.  The patterns supplied for $c@1$,~\ldots,~$c@n$
+distinguish among the different case operators.  For an example, see the
+case construct for lists on page~\pageref{chol-list} below.
+
+\begin{figure}
+\begin{ttbox}\makeatother
+\tdx{refl}           t = (t::'a)
+\tdx{subst}          [| s=t; P s |] ==> P(t::'a)
+\tdx{ext}            (!!x::'a. (f x::'b) = g x) ==> (\%x.f x) = (\%x.g x)
+\tdx{impI}           (P ==> Q) ==> P-->Q
+\tdx{mp}             [| P-->Q;  P |] ==> Q
+\tdx{iff}            (P-->Q) --> (Q-->P) --> (P=Q)
+\tdx{selectI}        P(x::'a) ==> P(@x.P x)
+\tdx{True_or_False}  (P=True) | (P=False)
+\end{ttbox}
+\caption{The {\tt CHOL} rules} \label{chol-rules}
+\end{figure}
+
+
+\begin{figure}\hfuzz=4pt%suppress "Overfull \hbox" message
+\begin{ttbox}\makeatother
+\tdx{True_def}   True     == ((\%x::bool.x)=(\%x.x))
+\tdx{All_def}    All      == (\%P. P = (\%x.True))
+\tdx{Ex_def}     Ex       == (\%P. P(@x.P x))
+\tdx{False_def}  False    == (!P.P)
+\tdx{not_def}    not      == (\%P. P-->False)
+\tdx{and_def}    op &     == (\%P Q. !R. (P-->Q-->R) --> R)
+\tdx{or_def}     op |     == (\%P Q. !R. (P-->R) --> (Q-->R) --> R)
+\tdx{Ex1_def}    Ex1      == (\%P. ? x. P x & (! y. P y --> y=x))
+
+\tdx{Inv_def}    Inv      == (\%(f::'a=>'b) y. @x. f x=y)
+\tdx{o_def}      op o     == (\%(f::'b=>'c) g (x::'a). f(g x))
+\tdx{if_def}     If P x y == (@z::'a.(P=True --> z=x) & (P=False --> z=y))
+\tdx{Let_def}    Let s f  == f s
+\end{ttbox}
+\caption{The {\tt CHOL} definitions} \label{chol-defs}
+\end{figure}
+
+
+\section{Rules of inference}
+Figure~\ref{chol-rules} shows the inference rules of~\CHOL{}, with
+their~{\ML} names.  Some of the rules deserve additional comments:
+\begin{ttdescription}
+\item[\tdx{ext}] expresses extensionality of functions.
+\item[\tdx{iff}] asserts that logically equivalent formulae are
+  equal.
+\item[\tdx{selectI}] gives the defining property of the Hilbert
+  $\epsilon$-operator.  It is a form of the Axiom of Choice.  The derived rule
+  \tdx{select_equality} (see below) is often easier to use.
+\item[\tdx{True_or_False}] makes the logic classical.\footnote{In
+    fact, the $\epsilon$-operator already makes the logic classical, as
+    shown by Diaconescu; see Paulson~\cite{paulson-COLOG} for details.}
+\end{ttdescription}
+
+\CHOL{} follows standard practice in higher-order logic: only a few
+connectives are taken as primitive, with the remainder defined obscurely
+(Fig.\ts\ref{chol-defs}).  Gordon's {\sc hol} system expresses the
+corresponding definitions \cite[page~270]{mgordon-hol} using
+object-equality~({\tt=}), which is possible because equality in
+higher-order logic may equate formulae and even functions over formulae.
+But theory~\CHOL{}, like all other Isabelle theories, uses
+meta-equality~({\tt==}) for definitions.
+
+Some of the rules mention type variables; for example, {\tt refl}
+mentions the type variable~{\tt'a}.  This allows you to instantiate
+type variables explicitly by calling {\tt res_inst_tac}.  By default,
+explicit type variables have class \cldx{term}.
+
+Include type constraints whenever you state a polymorphic goal.  Type
+inference may otherwise make the goal more polymorphic than you intended,
+with confusing results.
+
+\begin{warn}
+  If resolution fails for no obvious reason, try setting
+  \ttindex{show_types} to {\tt true}, causing Isabelle to display types of
+  terms.  Possibly set \ttindex{show_sorts} to {\tt true} as well, causing
+  Isabelle to display sorts.
+
+  \index{unification!incompleteness of}
+  Where function types are involved, Isabelle's unification code does not
+  guarantee to find instantiations for type variables automatically.  Be
+  prepared to use \ttindex{res_inst_tac} instead of {\tt resolve_tac},
+  possibly instantiating type variables.  Setting
+  \ttindex{Unify.trace_types} to {\tt true} causes Isabelle to report
+  omitted search paths during unification.\index{tracing!of unification}
+\end{warn}
+
+
+\begin{figure}
+\begin{ttbox}
+\tdx{sym}         s=t ==> t=s
+\tdx{trans}       [| r=s; s=t |] ==> r=t
+\tdx{ssubst}      [| t=s; P s |] ==> P(t::'a)
+\tdx{box_equals}  [| a=b;  a=c;  b=d |] ==> c=d  
+\tdx{arg_cong}    x=y ==> f x=f y
+\tdx{fun_cong}    f=g ==> f x=g x
+\subcaption{Equality}
+
+\tdx{TrueI}       True 
+\tdx{FalseE}      False ==> P
+
+\tdx{conjI}       [| P; Q |] ==> P&Q
+\tdx{conjunct1}   [| P&Q |] ==> P
+\tdx{conjunct2}   [| P&Q |] ==> Q 
+\tdx{conjE}       [| P&Q;  [| P; Q |] ==> R |] ==> R
+
+\tdx{disjI1}      P ==> P|Q
+\tdx{disjI2}      Q ==> P|Q
+\tdx{disjE}       [| P | Q; P ==> R; Q ==> R |] ==> R
+
+\tdx{notI}        (P ==> False) ==> ~ P
+\tdx{notE}        [| ~ P;  P |] ==> R
+\tdx{impE}        [| P-->Q;  P;  Q ==> R |] ==> R
+\subcaption{Propositional logic}
+
+\tdx{iffI}        [| P ==> Q;  Q ==> P |] ==> P=Q
+\tdx{iffD1}       [| P=Q; P |] ==> Q
+\tdx{iffD2}       [| P=Q; Q |] ==> P
+\tdx{iffE}        [| P=Q; [| P --> Q; Q --> P |] ==> R |] ==> R
+
+\tdx{eqTrueI}     P ==> P=True 
+\tdx{eqTrueE}     P=True ==> P 
+\subcaption{Logical equivalence}
+
+\end{ttbox}
+\caption{Derived rules for \CHOL} \label{chol-lemmas1}
+\end{figure}
+
+
+\begin{figure}
+\begin{ttbox}\makeatother
+\tdx{allI}      (!!x::'a. P x) ==> !x. P x
+\tdx{spec}      !x::'a.P x ==> P x
+\tdx{allE}      [| !x.P x;  P x ==> R |] ==> R
+\tdx{all_dupE}  [| !x.P x;  [| P x; !x.P x |] ==> R |] ==> R
+
+\tdx{exI}       P x ==> ? x::'a.P x
+\tdx{exE}       [| ? x::'a.P x; !!x. P x ==> Q |] ==> Q
+
+\tdx{ex1I}      [| P a;  !!x. P x ==> x=a |] ==> ?! x. P x
+\tdx{ex1E}      [| ?! x.P x;  !!x. [| P x;  ! y. P y --> y=x |] ==> R 
+          |] ==> R
+
+\tdx{select_equality} [| P a;  !!x. P x ==> x=a |] ==> (@x.P x) = a
+\subcaption{Quantifiers and descriptions}
+
+\tdx{ccontr}          (~P ==> False) ==> P
+\tdx{classical}       (~P ==> P) ==> P
+\tdx{excluded_middle} ~P | P
+
+\tdx{disjCI}          (~Q ==> P) ==> P|Q
+\tdx{exCI}            (! x. ~ P x ==> P a) ==> ? x.P x
+\tdx{impCE}           [| P-->Q; ~ P ==> R; Q ==> R |] ==> R
+\tdx{iffCE}           [| P=Q;  [| P;Q |] ==> R;  [| ~P; ~Q |] ==> R |] ==> R
+\tdx{notnotD}         ~~P ==> P
+\tdx{swap}            ~P ==> (~Q ==> P) ==> Q
+\subcaption{Classical logic}
+
+\tdx{if_True}         if True then x else y = x
+\tdx{if_False}        if False then x else y = y
+\tdx{if_P}            P ==> if P then x else y = x
+\tdx{if_not_P}        ~ P ==> if P then x else y = y
+\tdx{expand_if}       P(if Q then x else y) = ((Q --> P x) & (~Q --> P y))
+\subcaption{Conditionals}
+\end{ttbox}
+\caption{More derived rules} \label{chol-lemmas2}
+\end{figure}
+
+
+Some derived rules are shown in Figures~\ref{chol-lemmas1}
+and~\ref{chol-lemmas2}, with their {\ML} names.  These include natural rules
+for the logical connectives, as well as sequent-style elimination rules for
+conjunctions, implications, and universal quantifiers.  
+
+Note the equality rules: \tdx{ssubst} performs substitution in
+backward proofs, while \tdx{box_equals} supports reasoning by
+simplifying both sides of an equation.
+
+
+\begin{figure} 
+\begin{center}
+\begin{tabular}{rrr} 
+  \it name      &\it meta-type  & \it description \\ 
+\index{{}@\verb'{}' symbol}
+  \verb|{}|     & $\alpha\,set$         & the empty set \\
+  \cdx{insert}  & $[\alpha,\alpha\,set]\To \alpha\,set$
+        & insertion of element \\
+  \cdx{Collect} & $(\alpha\To bool)\To\alpha\,set$
+        & comprehension \\
+  \cdx{Compl}   & $(\alpha\,set)\To\alpha\,set$
+        & complement \\
+  \cdx{INTER} & $[\alpha\,set,\alpha\To\beta\,set]\To\beta\,set$
+        & intersection over a set\\
+  \cdx{UNION} & $[\alpha\,set,\alpha\To\beta\,set]\To\beta\,set$
+        & union over a set\\
+  \cdx{Inter} & $(\alpha\,set)set\To\alpha\,set$
+        &set of sets intersection \\
+  \cdx{Union} & $(\alpha\,set)set\To\alpha\,set$
+        &set of sets union \\
+  \cdx{Pow}   & $\alpha\,set \To (\alpha\,set)set$
+        & powerset \\[1ex]
+  \cdx{range}   & $(\alpha\To\beta )\To\beta\,set$
+        & range of a function \\[1ex]
+  \cdx{Ball}~~\cdx{Bex} & $[\alpha\,set,\alpha\To bool]\To bool$
+        & bounded quantifiers \\
+  \cdx{mono}    & $(\alpha\,set\To\beta\,set)\To bool$
+        & monotonicity \\
+  \cdx{inj}~~\cdx{surj}& $(\alpha\To\beta )\To bool$
+        & injective/surjective \\
+  \cdx{inj_onto}        & $[\alpha\To\beta ,\alpha\,set]\To bool$
+        & injective over subset
+\end{tabular}
+\end{center}
+\subcaption{Constants}
+
+\begin{center}
+\begin{tabular}{llrrr} 
+  \it symbol &\it name     &\it meta-type & \it priority & \it description \\
+  \sdx{INT}  & \cdx{INTER1}  & $(\alpha\To\beta\,set)\To\beta\,set$ & 10 & 
+        intersection over a type\\
+  \sdx{UN}  & \cdx{UNION1}  & $(\alpha\To\beta\,set)\To\beta\,set$ & 10 & 
+        union over a type
+\end{tabular}
+\end{center}
+\subcaption{Binders} 
+
+\begin{center}
+\index{*"`"` symbol}
+\index{*": symbol}
+\index{*"<"= symbol}
+\begin{tabular}{rrrr} 
+  \it symbol    & \it meta-type & \it priority & \it description \\ 
+  \tt ``        & $[\alpha\To\beta ,\alpha\,set]\To  (\beta\,set)$
+        & Left 90 & image \\
+  \sdx{Int}     & $[\alpha\,set,\alpha\,set]\To\alpha\,set$
+        & Left 70 & intersection ($\inter$) \\
+  \sdx{Un}      & $[\alpha\,set,\alpha\,set]\To\alpha\,set$
+        & Left 65 & union ($\union$) \\
+  \tt:          & $[\alpha ,\alpha\,set]\To bool$       
+        & Left 50 & membership ($\in$) \\
+  \tt <=        & $[\alpha\,set,\alpha\,set]\To bool$
+        & Left 50 & subset ($\subseteq$) 
+\end{tabular}
+\end{center}
+\subcaption{Infixes}
+\caption{Syntax of the theory {\tt Set}} \label{chol-set-syntax}
+\end{figure} 
+
+
+\begin{figure} 
+\begin{center} \tt\frenchspacing
+\index{*"! symbol}
+\begin{tabular}{rrr} 
+  \it external          & \it internal  & \it description \\ 
+  $a$ \ttilde: $b$      & \ttilde($a$ : $b$)    & \rm non-membership\\
+  \{$a@1$, $\ldots$\}  &  insert $a@1$ $\ldots$ \{\} & \rm finite set \\
+  \{$x$.$P[x]$\}        &  Collect($\lambda x.P[x]$) &
+        \rm comprehension \\
+  \sdx{INT} $x$:$A$.$B[x]$      & INTER $A$ $\lambda x.B[x]$ &
+        \rm intersection \\
+  \sdx{UN}{\tt\ }  $x$:$A$.$B[x]$      & UNION $A$ $\lambda x.B[x]$ &
+        \rm union \\
+  \tt ! $x$:$A$.$P[x]$ or \sdx{ALL} $x$:$A$.$P[x]$ & 
+        Ball $A$ $\lambda x.P[x]$ & 
+        \rm bounded $\forall$ \\
+  \sdx{?} $x$:$A$.$P[x]$ or \sdx{EX}{\tt\ } $x$:$A$.$P[x]$ & 
+        Bex $A$ $\lambda x.P[x]$ & \rm bounded $\exists$
+\end{tabular}
+\end{center}
+\subcaption{Translations}
+
+\dquotes
+\[\begin{array}{rclcl}
+    term & = & \hbox{other terms\ldots} \\
+         & | & "\{\}" \\
+         & | & "\{ " term\; ("," term)^* " \}" \\
+         & | & "\{ " id " . " formula " \}" \\
+         & | & term " `` " term \\
+         & | & term " Int " term \\
+         & | & term " Un " term \\
+         & | & "INT~~"  id ":" term " . " term \\
+         & | & "UN~~~"  id ":" term " . " term \\
+         & | & "INT~~"  id~id^* " . " term \\
+         & | & "UN~~~"  id~id^* " . " term \\[2ex]
+ formula & = & \hbox{other formulae\ldots} \\
+         & | & term " : " term \\
+         & | & term " \ttilde: " term \\
+         & | & term " <= " term \\
+         & | & "!~" id ":" term " . " formula 
+         & | & "ALL " id ":" term " . " formula \\
+         & | & "?~" id ":" term " . " formula 
+         & | & "EX~~" id ":" term " . " formula
+  \end{array}
+\]
+\subcaption{Full Grammar}
+\caption{Syntax of the theory {\tt Set} (continued)} \label{chol-set-syntax2}
+\end{figure} 
+
+
+\section{A formulation of set theory}
+Historically, higher-order logic gives a foundation for Russell and
+Whitehead's theory of classes.  Let us use modern terminology and call them
+{\bf sets}, but note that these sets are distinct from those of {\ZF} set
+theory, and behave more like {\ZF} classes.
+\begin{itemize}
+\item
+Sets are given by predicates over some type~$\sigma$.  Types serve to
+define universes for sets, but type checking is still significant.
+\item
+There is a universal set (for each type).  Thus, sets have complements, and
+may be defined by absolute comprehension.
+\item
+Although sets may contain other sets as elements, the containing set must
+have a more complex type.
+\end{itemize}
+Finite unions and intersections have the same behaviour in \CHOL\ as they
+do in~{\ZF}.  In \CHOL\ the intersection of the empty set is well-defined,
+denoting the universal set for the given type.
+
+
+\subsection{Syntax of set theory}\index{*set type}
+\CHOL's set theory is called \thydx{Set}.  The type $\alpha\,set$ is
+essentially the same as $\alpha\To bool$.  The new type is defined for
+clarity and to avoid complications involving function types in unification.
+Since Isabelle does not support type definitions (as mentioned in
+\S\ref{CHOL-types}), the isomorphisms between the two types are declared
+explicitly.  Here they are natural: {\tt Collect} maps $\alpha\To bool$ to
+$\alpha\,set$, while \hbox{\tt op :} maps in the other direction (ignoring
+argument order).
+
+Figure~\ref{chol-set-syntax} lists the constants, infixes, and syntax
+translations.  Figure~\ref{chol-set-syntax2} presents the grammar of the new
+constructs.  Infix operators include union and intersection ($A\union B$
+and $A\inter B$), the subset and membership relations, and the image
+operator~{\tt``}\@.  Note that $a$\verb|~:|$b$ is translated to
+$\neg(a\in b)$.  
+
+The {\tt\{\ldots\}} notation abbreviates finite sets constructed in the
+obvious manner using~{\tt insert} and~$\{\}$:
+\begin{eqnarray*}
+  \{a@1, \ldots, a@n\}  & \equiv &  
+  {\tt insert}~a@1~({\tt insert}\ldots({\tt insert}~a@n~\{\})\ldots)
+\end{eqnarray*}
+
+The set \hbox{\tt\{$x$.$P[x]$\}} consists of all $x$ (of suitable type)
+that satisfy~$P[x]$, where $P[x]$ is a formula that may contain free
+occurrences of~$x$.  This syntax expands to \cdx{Collect}$(\lambda
+x.P[x])$.  It defines sets by absolute comprehension, which is impossible
+in~{\ZF}; the type of~$x$ implicitly restricts the comprehension.
+
+The set theory defines two {\bf bounded quantifiers}:
+\begin{eqnarray*}
+   \forall x\in A.P[x] &\hbox{abbreviates}& \forall x. x\in A\imp P[x] \\
+   \exists x\in A.P[x] &\hbox{abbreviates}& \exists x. x\in A\conj P[x]
+\end{eqnarray*}
+The constants~\cdx{Ball} and~\cdx{Bex} are defined
+accordingly.  Instead of {\tt Ball $A$ $P$} and {\tt Bex $A$ $P$} we may
+write\index{*"! symbol}\index{*"? symbol}
+\index{*ALL symbol}\index{*EX symbol} 
+%
+\hbox{\tt !~$x$:$A$.$P[x]$} and \hbox{\tt ?~$x$:$A$.$P[x]$}.  Isabelle's
+usual quantifier symbols, \sdx{ALL} and \sdx{EX}, are also accepted
+for input.  As with the primitive quantifiers, the {\ML} reference
+\ttindex{HOL_quantifiers} specifies which notation to use for output.
+
+Unions and intersections over sets, namely $\bigcup@{x\in A}B[x]$ and
+$\bigcap@{x\in A}B[x]$, are written 
+\sdx{UN}~\hbox{\tt$x$:$A$.$B[x]$} and
+\sdx{INT}~\hbox{\tt$x$:$A$.$B[x]$}.  
+
+Unions and intersections over types, namely $\bigcup@x B[x]$ and $\bigcap@x
+B[x]$, are written \sdx{UN}~\hbox{\tt$x$.$B[x]$} and
+\sdx{INT}~\hbox{\tt$x$.$B[x]$}.  They are equivalent to the previous
+union and intersection operators when $A$ is the universal set.
+
+The operators $\bigcup A$ and $\bigcap A$ act upon sets of sets.  They are
+not binders, but are equal to $\bigcup@{x\in A}x$ and $\bigcap@{x\in A}x$,
+respectively.
+
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{mem_Collect_eq}    (a : \{x.P x\}) = P a
+\tdx{Collect_mem_eq}    \{x.x:A\} = A
+
+\tdx{empty_def}         \{\}          == \{x.False\}
+\tdx{insert_def}        insert a B  == \{x.x=a\} Un B
+\tdx{Ball_def}          Ball A P    == ! x. x:A --> P x
+\tdx{Bex_def}           Bex A P     == ? x. x:A & P x
+\tdx{subset_def}        A <= B      == ! x:A. x:B
+\tdx{Un_def}            A Un B      == \{x.x:A | x:B\}
+\tdx{Int_def}           A Int B     == \{x.x:A & x:B\}
+\tdx{set_diff_def}      A - B       == \{x.x:A & x~:B\}
+\tdx{Compl_def}         Compl A     == \{x. ~ x:A\}
+\tdx{INTER_def}         INTER A B   == \{y. ! x:A. y: B x\}
+\tdx{UNION_def}         UNION A B   == \{y. ? x:A. y: B x\}
+\tdx{INTER1_def}        INTER1 B    == INTER \{x.True\} B 
+\tdx{UNION1_def}        UNION1 B    == UNION \{x.True\} B 
+\tdx{Inter_def}         Inter S     == (INT x:S. x)
+\tdx{Union_def}         Union S     == (UN  x:S. x)
+\tdx{Pow_def}           Pow A       == \{B. B <= A\}
+\tdx{image_def}         f``A        == \{y. ? x:A. y=f x\}
+\tdx{range_def}         range f     == \{y. ? x. y=f x\}
+\tdx{mono_def}          mono f      == !A B. A <= B --> f A <= f B
+\tdx{inj_def}           inj f       == ! x y. f x=f y --> x=y
+\tdx{surj_def}          surj f      == ! y. ? x. y=f x
+\tdx{inj_onto_def}      inj_onto f A == !x:A. !y:A. f x=f y --> x=y
+\end{ttbox}
+\caption{Rules of the theory {\tt Set}} \label{chol-set-rules}
+\end{figure}
+
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{CollectI}        [| P a |] ==> a : \{x.P x\}
+\tdx{CollectD}        [| a : \{x.P x\} |] ==> P a
+\tdx{CollectE}        [| a : \{x.P x\};  P a ==> W |] ==> W
+
+\tdx{ballI}           [| !!x. x:A ==> P x |] ==> ! x:A. P x
+\tdx{bspec}           [| ! x:A. P x;  x:A |] ==> P x
+\tdx{ballE}           [| ! x:A. P x;  P x ==> Q;  ~ x:A ==> Q |] ==> Q
+
+\tdx{bexI}            [| P x;  x:A |] ==> ? x:A. P x
+\tdx{bexCI}           [| ! x:A. ~ P x ==> P a;  a:A |] ==> ? x:A.P x
+\tdx{bexE}            [| ? x:A. P x;  !!x. [| x:A; P x |] ==> Q  |] ==> Q
+\subcaption{Comprehension and Bounded quantifiers}
+
+\tdx{subsetI}         (!!x.x:A ==> x:B) ==> A <= B
+\tdx{subsetD}         [| A <= B;  c:A |] ==> c:B
+\tdx{subsetCE}        [| A <= B;  ~ (c:A) ==> P;  c:B ==> P |] ==> P
+
+\tdx{subset_refl}     A <= A
+\tdx{subset_trans}    [| A<=B;  B<=C |] ==> A<=C
+
+\tdx{equalityI}       [| A <= B;  B <= A |] ==> A = B
+\tdx{equalityD1}      A = B ==> A<=B
+\tdx{equalityD2}      A = B ==> B<=A
+\tdx{equalityE}       [| A = B;  [| A<=B; B<=A |] ==> P |]  ==>  P
+
+\tdx{equalityCE}      [| A = B;  [| c:A; c:B |] ==> P;  
+                           [| ~ c:A; ~ c:B |] ==> P 
+                |]  ==>  P
+\subcaption{The subset and equality relations}
+\end{ttbox}
+\caption{Derived rules for set theory} \label{chol-set1}
+\end{figure}
+
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{emptyE}   a : \{\} ==> P
+
+\tdx{insertI1} a : insert a B
+\tdx{insertI2} a : B ==> a : insert b B
+\tdx{insertE}  [| a : insert b A;  a=b ==> P;  a:A ==> P |] ==> P
+
+\tdx{ComplI}   [| c:A ==> False |] ==> c : Compl A
+\tdx{ComplD}   [| c : Compl A |] ==> ~ c:A
+
+\tdx{UnI1}     c:A ==> c : A Un B
+\tdx{UnI2}     c:B ==> c : A Un B
+\tdx{UnCI}     (~c:B ==> c:A) ==> c : A Un B
+\tdx{UnE}      [| c : A Un B;  c:A ==> P;  c:B ==> P |] ==> P
+
+\tdx{IntI}     [| c:A;  c:B |] ==> c : A Int B
+\tdx{IntD1}    c : A Int B ==> c:A
+\tdx{IntD2}    c : A Int B ==> c:B
+\tdx{IntE}     [| c : A Int B;  [| c:A; c:B |] ==> P |] ==> P
+
+\tdx{UN_I}     [| a:A;  b: B a |] ==> b: (UN x:A. B x)
+\tdx{UN_E}     [| b: (UN x:A. B x);  !!x.[| x:A;  b:B x |] ==> R |] ==> R
+
+\tdx{INT_I}    (!!x. x:A ==> b: B x) ==> b : (INT x:A. B x)
+\tdx{INT_D}    [| b: (INT x:A. B x);  a:A |] ==> b: B a
+\tdx{INT_E}    [| b: (INT x:A. B x);  b: B a ==> R;  ~ a:A ==> R |] ==> R
+
+\tdx{UnionI}   [| X:C;  A:X |] ==> A : Union C
+\tdx{UnionE}   [| A : Union C;  !!X.[| A:X;  X:C |] ==> R |] ==> R
+
+\tdx{InterI}   [| !!X. X:C ==> A:X |] ==> A : Inter C
+\tdx{InterD}   [| A : Inter C;  X:C |] ==> A:X
+\tdx{InterE}   [| A : Inter C;  A:X ==> R;  ~ X:C ==> R |] ==> R
+
+\tdx{PowI}     A<=B ==> A: Pow B
+\tdx{PowD}     A: Pow B ==> A<=B
+\end{ttbox}
+\caption{Further derived rules for set theory} \label{chol-set2}
+\end{figure}
+
+
+\subsection{Axioms and rules of set theory}
+Figure~\ref{chol-set-rules} presents the rules of theory \thydx{Set}.  The
+axioms \tdx{mem_Collect_eq} and \tdx{Collect_mem_eq} assert
+that the functions {\tt Collect} and \hbox{\tt op :} are isomorphisms.  Of
+course, \hbox{\tt op :} also serves as the membership relation.
+
+All the other axioms are definitions.  They include the empty set, bounded
+quantifiers, unions, intersections, complements and the subset relation.
+They also include straightforward properties of functions: image~({\tt``}) and
+{\tt range}, and predicates concerning monotonicity, injectiveness and
+surjectiveness.  
+
+The predicate \cdx{inj_onto} is used for simulating type definitions.
+The statement ${\tt inj_onto}~f~A$ asserts that $f$ is injective on the
+set~$A$, which specifies a subset of its domain type.  In a type
+definition, $f$ is the abstraction function and $A$ is the set of valid
+representations; we should not expect $f$ to be injective outside of~$A$.
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{Inv_f_f}    inj f ==> Inv f (f x) = x
+\tdx{f_Inv_f}    y : range f ==> f(Inv f y) = y
+
+%\tdx{Inv_injective}
+%    [| Inv f x=Inv f y; x: range f;  y: range f |] ==> x=y
+%
+\tdx{imageI}     [| x:A |] ==> f x : f``A
+\tdx{imageE}     [| b : f``A;  !!x.[| b=f x;  x:A |] ==> P |] ==> P
+
+\tdx{rangeI}     f x : range f
+\tdx{rangeE}     [| b : range f;  !!x.[| b=f x |] ==> P |] ==> P
+
+\tdx{monoI}      [| !!A B. A <= B ==> f A <= f B |] ==> mono f
+\tdx{monoD}      [| mono f;  A <= B |] ==> f A <= f B
+
+\tdx{injI}       [| !! x y. f x = f y ==> x=y |] ==> inj f
+\tdx{inj_inverseI}              (!!x. g(f x) = x) ==> inj f
+\tdx{injD}       [| inj f; f x = f y |] ==> x=y
+
+\tdx{inj_ontoI}  (!!x y. [| f x=f y; x:A; y:A |] ==> x=y) ==> inj_onto f A
+\tdx{inj_ontoD}  [| inj_onto f A;  f x=f y;  x:A;  y:A |] ==> x=y
+
+\tdx{inj_onto_inverseI}
+    (!!x. x:A ==> g(f x) = x) ==> inj_onto f A
+\tdx{inj_onto_contraD}
+    [| inj_onto f A;  x~=y;  x:A;  y:A |] ==> ~ f x=f y
+\end{ttbox}
+\caption{Derived rules involving functions} \label{chol-fun}
+\end{figure}
+
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{Union_upper}     B:A ==> B <= Union A
+\tdx{Union_least}     [| !!X. X:A ==> X<=C |] ==> Union A <= C
+
+\tdx{Inter_lower}     B:A ==> Inter A <= B
+\tdx{Inter_greatest}  [| !!X. X:A ==> C<=X |] ==> C <= Inter A
+
+\tdx{Un_upper1}       A <= A Un B
+\tdx{Un_upper2}       B <= A Un B
+\tdx{Un_least}        [| A<=C;  B<=C |] ==> A Un B <= C
+
+\tdx{Int_lower1}      A Int B <= A
+\tdx{Int_lower2}      A Int B <= B
+\tdx{Int_greatest}    [| C<=A;  C<=B |] ==> C <= A Int B
+\end{ttbox}
+\caption{Derived rules involving subsets} \label{chol-subset}
+\end{figure}
+
+
+\begin{figure} \underscoreon   \hfuzz=4pt%suppress "Overfull \hbox" message
+\begin{ttbox}
+\tdx{Int_absorb}        A Int A = A
+\tdx{Int_commute}       A Int B = B Int A
+\tdx{Int_assoc}         (A Int B) Int C  =  A Int (B Int C)
+\tdx{Int_Un_distrib}    (A Un B)  Int C  =  (A Int C) Un (B Int C)
+
+\tdx{Un_absorb}         A Un A = A
+\tdx{Un_commute}        A Un B = B Un A
+\tdx{Un_assoc}          (A Un B)  Un C  =  A Un (B Un C)
+\tdx{Un_Int_distrib}    (A Int B) Un C  =  (A Un C) Int (B Un C)
+
+\tdx{Compl_disjoint}    A Int (Compl A) = \{x.False\}
+\tdx{Compl_partition}   A Un  (Compl A) = \{x.True\}
+\tdx{double_complement} Compl(Compl A) = A
+\tdx{Compl_Un}          Compl(A Un B)  = (Compl A) Int (Compl B)
+\tdx{Compl_Int}         Compl(A Int B) = (Compl A) Un (Compl B)
+
+\tdx{Union_Un_distrib}  Union(A Un B) = (Union A) Un (Union B)
+\tdx{Int_Union}         A Int (Union B) = (UN C:B. A Int C)
+\tdx{Un_Union_image}    (UN x:C.(A x) Un (B x)) = Union(A``C) Un Union(B``C)
+
+\tdx{Inter_Un_distrib}  Inter(A Un B) = (Inter A) Int (Inter B)
+\tdx{Un_Inter}          A Un (Inter B) = (INT C:B. A Un C)
+\tdx{Int_Inter_image}   (INT x:C.(A x) Int (B x)) = Inter(A``C) Int Inter(B``C)
+\end{ttbox}
+\caption{Set equalities} \label{chol-equalities}
+\end{figure}
+
+
+Figures~\ref{chol-set1} and~\ref{chol-set2} present derived rules.  Most are
+obvious and resemble rules of Isabelle's {\ZF} set theory.  Certain rules,
+such as \tdx{subsetCE}, \tdx{bexCI} and \tdx{UnCI},
+are designed for classical reasoning; the rules \tdx{subsetD},
+\tdx{bexI}, \tdx{UnI1} and~\tdx{UnI2} are not
+strictly necessary but yield more natural proofs.  Similarly,
+\tdx{equalityCE} supports classical reasoning about extensionality,
+after the fashion of \tdx{iffCE}.  See the file {\tt CHOL/Set.ML} for
+proofs pertaining to set theory.
+
+Figure~\ref{chol-fun} presents derived inference rules involving functions.
+They also include rules for \cdx{Inv}, which is defined in theory~{\tt
+  CHOL}; note that ${\tt Inv}~f$ applies the Axiom of Choice to yield an
+inverse of~$f$.  They also include natural deduction rules for the image
+and range operators, and for the predicates {\tt inj} and {\tt inj_onto}.
+Reasoning about function composition (the operator~\sdx{o}) and the
+predicate~\cdx{surj} is done simply by expanding the definitions.  See
+the file {\tt CHOL/fun.ML} for a complete listing of the derived rules.
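+For instance, a typical use of these rules is to supply \tdx{injD} as a
+destruction rule to the classical reasoner (described below).  The
+following goal is not part of the distribution; it merely sketches the
+idea:
+\begin{ttbox}
+goal CHOL.thy "inj f --> f x = f y --> x = y";    (*illustration only*)
+by (fast_tac (HOL_cs addDs [injD]) 1);
+\end{ttbox}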
+
+Figure~\ref{chol-subset} presents lattice properties of the subset relation.
+Unions form least upper bounds; non-empty intersections form greatest lower
+bounds.  Reasoning directly about subsets often yields clearer proofs than
+reasoning about the membership relation.  See the file {\tt CHOL/subset.ML}.
+
+Figure~\ref{chol-equalities} presents many common set equalities.  They
+include commutative, associative and distributive laws involving unions,
+intersections and complements.  The proofs are mostly trivial, using the
+classical reasoner; see file {\tt CHOL/equalities.ML}.
+
+
+\begin{figure}
+\begin{constants}
+  \it symbol    & \it meta-type &           & \it description \\ 
+  \cdx{Pair}    & $[\alpha,\beta]\To \alpha\times\beta$
+        & & ordered pairs $(a,b)$ \\
+  \cdx{fst}     & $\alpha\times\beta \To \alpha$        & & first projection\\
+  \cdx{snd}     & $\alpha\times\beta \To \beta$         & & second projection\\
+  \cdx{split}   & $[[\alpha,\beta]\To\gamma, \alpha\times\beta] \To \gamma$ 
+        & & generalized projection\\
+  \cdx{Sigma}  & 
+        $[\alpha\,set, \alpha\To\beta\,set]\To(\alpha\times\beta)set$ &
+        & general sum of sets
+\end{constants}
+\begin{ttbox}\makeatletter
+\tdx{fst_def}      fst p     == @a. ? b. p = (a,b)
+\tdx{snd_def}      snd p     == @b. ? a. p = (a,b)
+\tdx{split_def}    split c p == c (fst p) (snd p)
+\tdx{Sigma_def}    Sigma A B == UN x:A. UN y:B x. \{(x,y)\}
+
+
+\tdx{Pair_inject}  [| (a, b) = (a',b');  [| a=a';  b=b' |] ==> R |] ==> R
+\tdx{fst_conv}     fst (a,b) = a
+\tdx{snd_conv}     snd (a,b) = b
+\tdx{split}        split c (a,b) = c a b
+
+\tdx{surjective_pairing}  p = (fst p,snd p)
+
+\tdx{SigmaI}       [| a:A;  b:B a |] ==> (a,b) : Sigma A B
+
+\tdx{SigmaE}       [| c: Sigma A B;  
+                !!x y.[| x:A; y:B x; c=(x,y) |] ==> P |] ==> P
+\end{ttbox}
+\caption{Type $\alpha\times\beta$}\label{chol-prod}
+\end{figure} 
+
+
+\begin{figure}
+\begin{constants}
+  \it symbol    & \it meta-type &           & \it description \\ 
+  \cdx{Inl}     & $\alpha \To \alpha+\beta$    & & first injection\\
+  \cdx{Inr}     & $\beta \To \alpha+\beta$     & & second injection\\
+  \cdx{sum_case} & $[\alpha\To\gamma, \beta\To\gamma, \alpha+\beta] \To\gamma$
+        & & conditional
+\end{constants}
+\begin{ttbox}\makeatletter
+\tdx{sum_case_def}   sum_case == (\%f g p. @z. (!x. p=Inl x --> z=f x) &
+                                        (!y. p=Inr y --> z=g y))
+
+\tdx{Inl_not_Inr}    ~ Inl a=Inr b
+
+\tdx{inj_Inl}        inj Inl
+\tdx{inj_Inr}        inj Inr
+
+\tdx{sumE}           [| !!x::'a. P(Inl x);  !!y::'b. P(Inr y) |] ==> P s
+
+\tdx{sum_case_Inl}   sum_case f g (Inl x) = f x
+\tdx{sum_case_Inr}   sum_case f g (Inr x) = g x
+
+\tdx{surjective_sum} sum_case (\%x::'a. f(Inl x)) (\%y::'b. f(Inr y)) s = f s
+\end{ttbox}
+\caption{Type $\alpha+\beta$}\label{chol-sum}
+\end{figure}
+
+
+\section{Generic packages and classical reasoning}
+\CHOL\ instantiates most of Isabelle's generic packages;
+see {\tt CHOL/ROOT.ML} for details.
+\begin{itemize}
+\item 
+Because it includes a general substitution rule, \CHOL\ instantiates the
+tactic {\tt hyp_subst_tac}, which substitutes for an equality
+throughout a subgoal and its hypotheses.
+\item 
+It instantiates the simplifier, defining~\ttindexbold{HOL_ss} as the
+simplification set for higher-order logic.  Equality~($=$), which also
+expresses logical equivalence, may be used for rewriting.  See the file
+{\tt CHOL/simpdata.ML} for a complete listing of the simplification
+rules; a small example appears just after this list.
+\item 
+It instantiates the classical reasoner, as described below. 
+\end{itemize}
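+As a small illustration of the simplifier, the following goal should be
+provable by rewriting alone, assuming that the usual propositional
+rewrites belong to {\tt HOL_ss}:
+\begin{ttbox}
+goal CHOL.thy "(P & True) = P";       (*illustration only*)
+by (simp_tac HOL_ss 1);
+\end{ttbox}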
+\CHOL\ derives classical introduction rules for $\disj$ and~$\exists$, as
+well as classical elimination rules for~$\imp$ and~$\bimp$, and the swap
+rule; recall Fig.\ts\ref{chol-lemmas2} above.
+
+The classical reasoner is set up as the structure
+{\tt Classical}.  This structure is open, so {\ML} identifiers such
+as {\tt step_tac}, {\tt fast_tac}, {\tt best_tac}, etc., refer to it.
+\HOL\ defines the following classical rule sets:
+\begin{ttbox} 
+prop_cs    : claset
+HOL_cs     : claset
+set_cs     : claset
+\end{ttbox}
+\begin{ttdescription}
+\item[\ttindexbold{prop_cs}] contains the propositional rules, namely
+those for~$\top$, $\bot$, $\conj$, $\disj$, $\neg$, $\imp$ and~$\bimp$,
+along with the rule~{\tt refl}.
+
+\item[\ttindexbold{HOL_cs}] extends {\tt prop_cs} with the safe rules
+  {\tt allI} and~{\tt exE} and the unsafe rules {\tt allE}
+  and~{\tt exI}, as well as rules for unique existence.  Search using
+  this classical set is incomplete: quantified formulae are used at most
+  once.
+
+\item[\ttindexbold{set_cs}] extends {\tt HOL_cs} with rules for the bounded
+  quantifiers, subsets, comprehensions, unions and intersections,
+  complements, finite sets, images and ranges.
+\end{ttdescription}
+\noindent
+See \iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
+        {Chap.\ts\ref{chap:classical}} 
+for more discussion of classical proof methods.
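+
+As a quick example of the classical reasoner, a simple quantified formula
+is proved outright by {\tt fast_tac} with {\tt HOL_cs}:
+\begin{ttbox}
+goal CHOL.thy "(! x. P x --> Q x) & P a --> Q a";
+by (fast_tac HOL_cs 1);
+\end{ttbox}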
+
+
+\section{Types}
+The basic higher-order logic is augmented with a tremendous amount of
+material, including support for recursive function and type definitions.  A
+detailed discussion appears elsewhere~\cite{paulson-coind}.  The simpler
+definitions are the same as those used by the {\sc hol} system, but my
+treatment of recursive types differs from Melham's~\cite{melham89}.  The
+present section describes product, sum, natural number and list types.
+
+\subsection{Product and sum types}\index{*"* type}\index{*"+ type}
+Theory \thydx{Prod} defines the product type $\alpha\times\beta$, with
+the ordered pair syntax {\tt($a$,$b$)}.  Theory \thydx{Sum} defines the
+sum type $\alpha+\beta$.  These use fairly standard constructions; see
+Figs.\ts\ref{chol-prod} and~\ref{chol-sum}.  Because Isabelle does not
+support abstract type definitions, the isomorphisms between these types and
+their representations are made explicit.
+
+Most of the definitions are suppressed, but observe that the projections
+and conditionals are defined as descriptions.  Their properties are easily
+proved using \tdx{select_equality}.  
+
+\begin{figure} 
+\index{*"< symbol}
+\index{*"* symbol}
+\index{*div symbol}
+\index{*mod symbol}
+\index{*"+ symbol}
+\index{*"- symbol}
+\begin{constants}
+  \it symbol    & \it meta-type & \it priority & \it description \\ 
+  \cdx{0}       & $nat$         & & zero \\
+  \cdx{Suc}     & $nat \To nat$ & & successor function\\
+  \cdx{nat_case} & $[\alpha, nat\To\alpha, nat] \To\alpha$
+        & & conditional\\
+  \cdx{nat_rec} & $[nat, \alpha, [nat, \alpha]\To\alpha] \To \alpha$
+        & & primitive recursor\\
+  \cdx{pred_nat} & $(nat\times nat) set$ & & predecessor relation\\
+  \tt *         & $[nat,nat]\To nat$    &  Left 70      & multiplication \\
+  \tt div       & $[nat,nat]\To nat$    &  Left 70      & division\\
+  \tt mod       & $[nat,nat]\To nat$    &  Left 70      & modulus\\
+  \tt +         & $[nat,nat]\To nat$    &  Left 65      & addition\\
+  \tt -         & $[nat,nat]\To nat$    &  Left 65      & subtraction
+\end{constants}
+\subcaption{Constants and infixes}
+
+\begin{ttbox}\makeatother
+\tdx{nat_case_def}  nat_case == (\%a f n. @z. (n=0 --> z=a) & 
+                                       (!x. n=Suc x --> z=f x))
+\tdx{pred_nat_def}  pred_nat == \{p. ? n. p = (n, Suc n)\} 
+\tdx{less_def}      m<n      == (m,n):pred_nat^+
+\tdx{nat_rec_def}   nat_rec n c d == 
+               wfrec pred_nat n (nat_case (\%g.c) (\%m g. d m (g m)))
+
+\tdx{add_def}   m+n     == nat_rec m n (\%u v. Suc v)
+\tdx{diff_def}  m-n     == nat_rec n m (\%u v. nat_rec v 0 (\%x y.x))
+\tdx{mult_def}  m*n     == nat_rec m 0 (\%u v. n + v)
+\tdx{mod_def}   m mod n == wfrec (trancl pred_nat)
+                        m (\%j f. if j<n then j else f (j-n))
+\tdx{quo_def}   m div n == wfrec (trancl pred_nat)
+                        m (\%j f. if j<n then 0 else Suc (f (j-n)))
+\subcaption{Definitions}
+\end{ttbox}
+\caption{Defining {\tt nat}, the type of natural numbers} \label{chol-nat1}
+\end{figure}
+
+
+\begin{figure} \underscoreon
+\begin{ttbox}
+\tdx{nat_induct}     [| P 0; !!k. [| P k |] ==> P(Suc k) |]  ==> P n
+
+\tdx{Suc_not_Zero}   Suc m ~= 0
+\tdx{inj_Suc}        inj Suc
+\tdx{n_not_Suc_n}    n~=Suc n
+\subcaption{Basic properties}
+
+\tdx{pred_natI}      (n, Suc n) : pred_nat
+\tdx{pred_natE}
+    [| p : pred_nat;  !!x n. [| p = (n, Suc n) |] ==> R |] ==> R
+
+\tdx{nat_case_0}     nat_case a f 0 = a
+\tdx{nat_case_Suc}   nat_case a f (Suc k) = f k
+
+\tdx{wf_pred_nat}    wf pred_nat
+\tdx{nat_rec_0}      nat_rec 0 c h = c
+\tdx{nat_rec_Suc}    nat_rec (Suc n) c h = h n (nat_rec n c h)
+\subcaption{Case analysis and primitive recursion}
+
+\tdx{less_trans}     [| i<j;  j<k |] ==> i<k
+\tdx{lessI}          n < Suc n
+\tdx{zero_less_Suc}  0 < Suc n
+
+\tdx{less_not_sym}   n<m --> ~ m<n 
+\tdx{less_not_refl}  ~ n<n
+\tdx{not_less0}      ~ n<0
+
+\tdx{Suc_less_eq}    (Suc m < Suc n) = (m<n)
+\tdx{less_induct}    [| !!n. [| ! m. m<n --> P m |] ==> P n |]  ==>  P n
+
+\tdx{less_linear}    m<n | m=n | n<m
+\subcaption{The less-than relation}
+\end{ttbox}
+\caption{Derived rules for {\tt nat}} \label{chol-nat2}
+\end{figure}
+
+
+\subsection{The type of natural numbers, {\tt nat}}
+The theory \thydx{Nat} defines the natural numbers in a roundabout but
+traditional way.  The axiom of infinity postulates a type~\tydx{ind} of
+individuals, which is non-empty and closed under an injective operation.
+The natural numbers are inductively generated by choosing an arbitrary
+individual for~0 and using the injective operation to take successors.  As
+usual, the isomorphisms between~\tydx{nat} and its representation are made
+explicit.
+
+The definition makes use of a least fixed point operator \cdx{lfp},
+defined using the Knaster-Tarski theorem.  This is used to define the
+operator \cdx{trancl}, for taking the transitive closure of a relation.
+Primitive recursion makes use of \cdx{wfrec}, an operator for recursion
+along arbitrary well-founded relations.  The corresponding theories are
+called {\tt Lfp}, {\tt Trancl} and {\tt WF}\@.  Elsewhere I have described
+similar constructions in the context of set theory~\cite{paulson-set-II}.
+
+Type~\tydx{nat} is postulated to belong to class~\cldx{ord}, which
+overloads $<$ and $\leq$ on the natural numbers.  As of this writing,
+Isabelle provides no means of verifying that such overloading is sensible;
+there is no means of specifying the operators' properties and verifying
+that instances of the operators satisfy those properties.  To be safe, the
+\CHOL\ theory includes no polymorphic axioms asserting general properties of
+$<$ and~$\leq$.
+
+Theory \thydx{Arith} develops arithmetic on the natural numbers.  It
+defines addition, multiplication, subtraction, division, and remainder.
+Many of their properties are proved: commutative, associative and
+distributive laws, identity and cancellation laws, etc.  The most
+interesting result is perhaps the theorem $a \bmod b + (a/b)\times b = a$.
+Division and remainder are defined by repeated subtraction, which requires
+well-founded rather than primitive recursion.  See Figs.\ts\ref{chol-nat1}
+and~\ref{chol-nat2}.
+
+The predecessor relation, \cdx{pred_nat}, is shown to be well-founded.
+Recursion along this relation resembles primitive recursion, but is
+stronger because we are in higher-order logic; using primitive recursion to
+define a higher-order function, we can easily define Ackermann's function, which
+is not primitive recursive \cite[page~104]{thompson91}.
+The transitive closure of \cdx{pred_nat} is~$<$.  Many functions on the
+natural numbers are most easily expressed using recursion along~$<$.
+
+The tactic {\tt\ttindex{nat_ind_tac} "$n$" $i$} performs induction over the
+variable~$n$ in subgoal~$i$.
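+For example, the right identity law for addition can be proved by
+induction followed by simplification.  The lemma, or a variant of it, is
+already proved in theory {\tt Arith}; the script below merely illustrates
+the tactic, using the standard simpset {\tt arith_ss}:
+\begin{ttbox}
+goal Arith.thy "m + 0 = m";           (*illustration only*)
+by (nat_ind_tac "m" 1);
+by (ALLGOALS (asm_simp_tac arith_ss));
+\end{ttbox}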
+
+\begin{figure}
+\index{#@{\tt\#} symbol}
+\index{"@@{\tt\at} symbol}
+\begin{constants}
+  \it symbol & \it meta-type & \it priority & \it description \\
+  \cdx{Nil}     & $\alpha list$ & & empty list\\
+  \tt \#   & $[\alpha,\alpha list]\To \alpha list$ & Right 65 & 
+        list constructor \\
+  \cdx{null}    & $\alpha list \To bool$ & & emptiness test\\
+  \cdx{hd}      & $\alpha list \To \alpha$ & & head \\
+  \cdx{tl}      & $\alpha list \To \alpha list$ & & tail \\
+  \cdx{ttl}     & $\alpha list \To \alpha list$ & & total tail \\
+  \tt\at  & $[\alpha list,\alpha list]\To \alpha list$ & Left 65 & append \\
+  \sdx{mem}  & $[\alpha,\alpha list]\To bool$    &  Left 55   & membership\\
+  \cdx{map}     & $(\alpha\To\beta) \To (\alpha list \To \beta list)$
+        & & mapping functional\\
+  \cdx{filter}  & $(\alpha \To bool) \To (\alpha list \To \alpha list)$
+        & & filter functional\\
+  \cdx{list_all}& $(\alpha \To bool) \To (\alpha list \To bool)$
+        & & forall functional\\
+  \cdx{list_rec}        & $[\alpha list, \beta, [\alpha ,\alpha list,
+\beta]\To\beta] \To \beta$
+        & & list recursor
+\end{constants}
+\subcaption{Constants and infixes}
+
+\begin{center} \tt\frenchspacing
+\begin{tabular}{rrr} 
+  \it external        & \it internal  & \it description \\{}
+  \sdx{[]}            & Nil           & \rm empty list \\{}
+  [$x@1$, $\dots$, $x@n$]  &  $x@1$ \# $\cdots$ \# $x@n$ \# [] &
+        \rm finite list \\{}
+  [$x$:$l$. $P$]  & filter ($\lambda x{.}P$) $l$ & 
+        \rm list comprehension
+\end{tabular}
+\end{center}
+\subcaption{Translations}
+
+\begin{ttbox}
+\tdx{list_induct}    [| P [];  !!x xs. [| P xs |] ==> P(x#xs) |]  ==> P l
+
+\tdx{Cons_not_Nil}   (x # xs) ~= []
+\tdx{Cons_Cons_eq}   ((x # xs) = (y # ys)) = (x=y & xs=ys)
+\subcaption{Induction and freeness}
+\end{ttbox}
+\caption{The theory \thydx{List}} \label{chol-list}
+\end{figure}
+
+\begin{figure}
+\begin{ttbox}\makeatother
+\tdx{list_rec_Nil}    list_rec [] c h = c  
+\tdx{list_rec_Cons}   list_rec a#l c h = h a l (list_rec l c h)
+
+\tdx{list_case_Nil}   list_case c h [] = c 
+\tdx{list_case_Cons}  list_case c h x#xs = h x xs
+
+\tdx{map_Nil}         map f [] = []
+\tdx{map_Cons}        map f x \# xs = f x \# map f xs
+
+\tdx{null_Nil}        null [] = True
+\tdx{null_Cons}       null x#xs = False
+
+\tdx{hd_Cons}         hd x#xs = x
+\tdx{tl_Cons}         tl x#xs = xs
+
+\tdx{ttl_Nil}         ttl [] = []
+\tdx{ttl_Cons}        ttl x#xs = xs
+
+\tdx{append_Nil}      [] @ ys = ys
+\tdx{append_Cons}     (x#xs) \at ys = x # xs \at ys
+
+\tdx{mem_Nil}         x mem [] = False
+\tdx{mem_Cons}        x mem (y#ys) = if y=x then True else x mem ys
+
+\tdx{filter_Nil}      filter P [] = []
+\tdx{filter_Cons}     filter P x#xs = if P x then x#filter P xs else filter P xs
+
+\tdx{list_all_Nil}    list_all P [] = True
+\tdx{list_all_Cons}   list_all P x#xs = (P x & list_all P xs)
+\end{ttbox}
+\caption{Rewrite rules for lists} \label{chol-list-simps}
+\end{figure}
+
+
+\subsection{The type constructor for lists, {\tt list}}
+\index{*list type}
+
+\CHOL's definition of lists is an example of an experimental method for
+handling recursive data types.  Figure~\ref{chol-list} presents the theory
+\thydx{List}: the basic list operations with their types and properties.
+
+The \sdx{case} construct is defined by the following translation:
+{\dquotes
+\begin{eqnarray*}
+  \begin{array}{r@{\;}l@{}l}
+  "case " e " of" & "[]"    & " => " a\\
+              "|" & x"\#"xs & " => " b
+  \end{array} 
+  & \equiv &
+  "list_case"~a~(\lambda x\;xs.b)~e
+\end{eqnarray*}}%
+The theory includes \cdx{list_rec}, a primitive recursion operator
+for lists.  It is derived from well-founded recursion, a general principle
+that can express arbitrary total recursive functions.
+
+The simpset \ttindex{list_ss} contains, along with additional useful lemmas,
+the basic rewrite rules that appear in Fig.\ts\ref{chol-list-simps}.
+
+The tactic {\tt\ttindex{list_ind_tac} "$xs$" $i$} performs induction over the
+variable~$xs$ in subgoal~$i$.
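+For example, the following lemma about append is proved by induction on
+{\tt xs} followed by simplification with {\tt list_ss}.  The lemma, or a
+variant of it, is part of the distribution; the script merely illustrates
+the tactic:
+\begin{ttbox}
+goal List.thy "xs @ [] = xs";         (*illustration only*)
+by (list_ind_tac "xs" 1);
+by (ALLGOALS (asm_simp_tac list_ss));
+\end{ttbox}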
+
+
+\section{Datatype declarations}
+\index{*datatype|(}
+
+\underscoreon
+
+It is often necessary to extend a theory with \ML-like datatypes.  This
+extension consists of the new type, declarations of its constructors and
+rules that describe the new type.  The theory definition section {\tt
+  datatype} provides a compact way of doing this.
+
+
+\subsection{Foundations}
+
+A datatype declaration has the following general structure:
+\[ \mbox{\tt datatype}~ (\alpha_1,\dots,\alpha_n)t ~=~
+      C_1~\tau_{11}~\dots~\tau_{1k_1} ~\mid~ \dots ~\mid~
+      C_m~\tau_{m1}~\dots~\tau_{mk_m} 
+\]
+where $\alpha_i$ are type variables, $C_i$ are distinct constructor names and
+$\tau_{ij}$ are one of the following:
+\begin{itemize}
+\item type variables $\alpha_1,\dots,\alpha_n$,
+\item types $(\beta_1,\dots,\beta_l)s$ where $s$ is a previously declared
+  type or type synonym and $\{\beta_1,\dots,\beta_l\} \subseteq
+  \{\alpha_1,\dots,\alpha_n\}$,
+\item the newly defined type $(\alpha_1,\dots,\alpha_n)t$ \footnote{This
+    makes it a recursive type. To ensure that the new type is not empty, at
+    least one constructor must consist only of non-recursive type
+    components.}
+\end{itemize}
+If you would like one of the $\tau_{ij}$ to be a complex type expression
+$\tau$ you need to declare a new type synonym $syn = \tau$ first and use
+$syn$ in place of $\tau$. Of course this does not work if $\tau$ mentions the
+recursive type itself, thus ruling out problematic cases like \[ \mbox{\tt
+  datatype}~ t ~=~ C(t \To t) \] together with unproblematic ones like \[
+\mbox{\tt datatype}~ t ~=~ C(t~list). \]
+
+The constructors are automatically defined as functions of their respective
+type:
+\[ C_j : [\tau_{j1},\dots,\tau_{jk_j}] \To (\alpha_1,\dots,\alpha_n)t \]
+These functions have certain {\em freeness} properties:
+\begin{description}
+\item[\tt distinct] They are distinct:
+\[ C_i~x_1~\dots~x_{k_i} \neq C_j~y_1~\dots~y_{k_j} \qquad
+   \mbox{for all}~ i \neq j.
+\]
+\item[\tt inject] They are injective:
+\[ (C_j~x_1~\dots~x_{k_j} = C_j~y_1~\dots~y_{k_j}) =
+   (x_1 = y_1 \land \dots \land x_{k_j} = y_{k_j})
+\]
+\end{description}
+Because the number of inequalities is quadratic in the number of
+constructors, a different method is used if their number exceeds
+a certain value, currently 4. In that case every constructor is mapped to a
+natural number
+\[
+\begin{array}{lcl}
+\mbox{\it t\_ord}(C_1~x_1~\dots~x_{k_1}) & = & 0 \\
+& \vdots & \\
+\mbox{\it t\_ord}(C_m~x_1~\dots~x_{k_m}) & = & m-1
+\end{array}
+\]
+and distinctness of constructors is expressed by:
+\[
+\mbox{\it t\_ord}~x \neq \mbox{\it t\_ord}~y \Imp x \neq y.
+\]
+In addition a structural induction axiom {\tt induct} is provided: 
+\[
+\infer{P x}
+{\begin{array}{lcl}
+\Forall x_1\dots x_{k_1}.
+  \List{P~x_{r_{11}}; \dots; P~x_{r_{1l_1}}} &
+  \Imp  & P(C_1~x_1~\dots~x_{k_1}) \\
+ & \vdots & \\
+\Forall x_1\dots x_{k_m}.
+  \List{P~x_{r_{m1}}; \dots; P~x_{r_{ml_m}}} &
+  \Imp & P(C_m~x_1~\dots~x_{k_m})
+\end{array}}
+\]
+where $\{r_{j1},\dots,r_{jl_j}\} = \{i \in \{1,\dots,k_j\} ~\mid~ \tau_{ji}
+= (\alpha_1,\dots,\alpha_n)t \}$, i.e.\ the property $P$ can be assumed for
+all arguments of the recursive type.
+
+The type also comes with an \ML-like \sdx{case}-construct:
+\[
+\begin{array}{rrcl}
+\mbox{\tt case}~e~\mbox{\tt of} & C_1~x_{11}~\dots~x_{1k_1} & \To & e_1 \\
+                           \vdots \\
+                           \mid & C_m~x_{m1}~\dots~x_{mk_m} & \To & e_m
+\end{array}
+\]
+In contrast to \ML, {\em all} constructors must be present, their order is
+fixed, and nested patterns are not supported.
+
+
+\subsection{Defining datatypes}
+
+A datatype is defined in a theory definition file using the keyword {\tt
+  datatype}. The definition following {\tt datatype} must conform to the
+syntax of {\em typedecl} specified in Fig.~\ref{datatype-grammar} and must
+obey the rules in the previous section. As a result the theory is extended
+with the new type, the constructors, and the theorems listed in the previous
+section.
+
+\begin{figure}
+\begin{rail}
+typedecl : typevarlist id '=' (cons + '|')
+         ;
+cons     : (id | string) ( () | '(' (typ + ',') ')' ) ( () | mixfix )
+         ;
+typ      : typevarlist id
+           | tid
+         ;
+typevarlist : () | tid | '(' (tid + ',') ')'
+         ;
+\end{rail}
+\caption{Syntax of datatype declarations}
+\label{datatype-grammar}
+\end{figure}
+
+Reading the theory file produces a structure which, in addition to the usual
+components, contains a structure named $t$ for each datatype $t$ defined in
+the file.\footnote{Otherwise multiple datatypes in the same theory file would
+  lead to name clashes.} Each structure $t$ contains the following elements:
+\begin{ttbox}
+val distinct : thm list
+val inject : thm list
+val induct : thm
+val cases : thm list
+val simps : thm list
+val induct_tac : string -> int -> tactic
+\end{ttbox}
+{\tt distinct}, {\tt inject} and {\tt induct} contain the theorems described
+above. For convenience {\tt distinct} contains inequalities in both
+directions.
+\begin{warn}
+  If there are five or more constructors, the {\em t\_ord} scheme is used for
+  {\tt distinct}.  In this case the current theory must be based on
+  {\tt Arith}, if necessary by including it explicitly.
+\end{warn}
+The reduction rules of the {\tt case}-construct are in {\tt cases}.  All
+theorems from {\tt distinct}, {\tt inject} and {\tt cases} are combined in
+{\tt simps} for use with the simplifier. The tactic {\verb$induct_tac$~{\em
+    var i}\/} applies structural induction over variable {\em var} to
+subgoal {\em i}.
+
+
+\subsection{Examples}
+
+\subsubsection{The datatype $\alpha~list$}
+
+We want to define the type $\alpha~list$.\footnote{Of course there is a list
+  type in CHOL already. This is only an example.} To do this we have to build
+a new theory that contains the type definition. We start from {\tt CHOL}.
+\begin{ttbox}
+MyList = CHOL +
+  datatype 'a list = Nil | Cons 'a ('a list)
+end
+\end{ttbox}
+After loading the theory (\verb$use_thy "MyList"$), we can prove
+$Cons~x~xs\neq xs$.  First we build a suitable simpset for the simplifier:
+\begin{ttbox}
+val mylist_ss = HOL_ss addsimps MyList.list.simps;
+goal MyList.thy "!x. Cons x xs ~= xs";
+{\out Level 0}
+{\out ! x. Cons x xs ~= xs}
+{\out  1. ! x. Cons x xs ~= xs}
+\end{ttbox}
+This can be proved by the structural induction tactic:
+\begin{ttbox}
+by (MyList.list.induct_tac "xs" 1);
+{\out Level 1}
+{\out ! x. Cons x xs ~= xs}
+{\out  1. ! x. Cons x Nil ~= Nil}
+{\out  2. !!a list.}
+{\out        ! x. Cons x list ~= list ==>}
+{\out        ! x. Cons x (Cons a list) ~= Cons a list}
+\end{ttbox}
+The first subgoal can be proved with the simplifier and the distinctness
+axioms which are part of \verb$mylist_ss$.
+\begin{ttbox}
+by (simp_tac mylist_ss 1);
+{\out Level 2}
+{\out ! x. Cons x xs ~= xs}
+{\out  1. !!a list.}
+{\out        ! x. Cons x list ~= list ==>}
+{\out        ! x. Cons x (Cons a list) ~= Cons a list}
+\end{ttbox}
+Using the freeness axioms we can quickly prove the remaining goal.
+\begin{ttbox}
+by (asm_simp_tac mylist_ss 1);
+{\out Level 3}
+{\out ! x. Cons x xs ~= xs}
+{\out No subgoals!}
+\end{ttbox}
+Because both subgoals were proved by almost the same tactic we could have
+done that in one step using
+\begin{ttbox}
+by (ALLGOALS (asm_simp_tac mylist_ss));
+\end{ttbox}
+
+
+\subsubsection{The datatype $\alpha~list$ with mixfix syntax}
+
+In this example we define the type $\alpha~list$ again but this time we want
+to write {\tt []} instead of {\tt Nil} and we want to use the infix operator
+\verb|#| instead of {\tt Cons}. To do this we simply add mixfix annotations
+after the constructor declarations as follows:
+\begin{ttbox}
+MyList = CHOL +
+  datatype 'a list = "[]" ("[]") 
+                   | "#" 'a ('a list) (infixr 70)
+end
+\end{ttbox}
+Now the theorem in the previous example can be written \verb|x#xs ~= xs|. The
+proof is the same.
+
+
+\subsubsection{A datatype for weekdays}
+
+This example shows a datatype that consists of more than four constructors:
+\begin{ttbox}
+Days = Arith +
+  datatype days = Mo | Tu | We | Th | Fr | Sa | So
+end
+\end{ttbox}
+Because there are more than four constructors, the theory must be based on
+{\tt Arith}. Inequality is defined via a function \verb|days_ord|. Although
+the expression \verb|Mo ~= Tu| is not directly contained in {\tt distinct},
+it can be proved by the simplifier if \verb$arith_ss$ is used:
+\begin{ttbox}
+val days_ss = arith_ss addsimps Days.days.simps;
+
+goal Days.thy "Mo ~= Tu";
+by (simp_tac days_ss 1);
+\end{ttbox}
+Note that usually it is not necessary to derive these inequalities explicitly
+because the simplifier will dispose of them automatically.
+
+\subsection{Primitive recursive functions}
+\index{primitive recursion|(}
+\index{*primrec|(}
+
+Datatypes come with a uniform way of defining functions, {\bf primitive
+  recursion}. Although it is possible to define primitive recursive functions
+by asserting their reduction rules as new axioms, e.g.\
+\begin{ttbox}
+Append = MyList +
+consts app :: "['a list,'a list] => 'a list"
+rules 
+   app_Nil   "app [] ys = ys"
+   app_Cons  "app (x#xs) ys = x#app xs ys"
+end
+\end{ttbox}
+this carries with it the danger of accidentally asserting an inconsistency,
+as in \verb$app [] ys = us$. Therefore primitive recursive functions on
+datatypes can be defined with a special syntax:
+\begin{ttbox}
+Append = MyList +
+consts app :: "['a list,'a list] => 'a list"
+primrec app MyList.list
+   app_Nil   "app [] ys = ys"
+   app_Cons  "app (x#xs) ys = x#app xs ys"
+end
+\end{ttbox}
+The system will now check that the two rules \verb$app_Nil$ and
+\verb$app_Cons$ do indeed form a primitive recursive definition, thus
+ensuring that consistency is maintained. For example
+\begin{ttbox}
+primrec app MyList.list
+    app_Nil   "app [] ys = us"
+\end{ttbox}
+is rejected:
+\begin{ttbox}
+Extra variables on rhs
+\end{ttbox}
+\bigskip
+
+The general form of a primitive recursive definition is
+\begin{ttbox}
+primrec {\it function} {\it type}
+    {\it reduction rules}
+\end{ttbox}
+where
+\begin{itemize}
+\item {\it function} is the name of the function, either as an {\it id} or a
+  {\it string}. The function must already have been declared.
+\item {\it type} is the name of the datatype, either as an {\it id} or in the
+  long form {\it Thy.t}, where {\it Thy} is the name of the parent theory the
+  datatype was declared in, and $t$ the name of the datatype. The long form
+  is required if the {\tt datatype} and the {\tt primrec} sections are in
+  different theories.
+\item {\it reduction rules} specify one or more named equations of the form
+  {\it id\/}~{\it string}, where the identifier gives the name of the rule in
+  the result structure, and {\it string} is a reduction rule of the form \[
+  f~x_1~\dots~x_m~(C~y_1~\dots~y_k)~z_1~\dots~z_n = r \] such that $C$ is a
+  constructor of the datatype, $r$ contains only the free variables on the
+  left-hand side, and all recursive calls in $r$ are of the form
+  $f~\dots~y_i~\dots$ for some $i$. There must be exactly one reduction
+  rule for each constructor.
+\end{itemize}
+A theory file may contain any number of {\tt primrec} sections which may be
+intermixed with other declarations.
+
+For the consistency-sensitive user it may be reassuring to know that {\tt
+  primrec} does not assert the reduction rules as new axioms but derives them
+as theorems from an explicit definition of the recursive function in terms of
+a recursion operator on the datatype.
+
+The primitive recursive function can also use infix or mixfix syntax:
+\begin{ttbox}
+Append = MyList +
+consts "@"  :: "['a list,'a list] => 'a list"  (infixr 60)
+primrec "op @" MyList.list
+   app_Nil   "[] @ ys = ys"
+   app_Cons  "(x#xs) @ ys = x#(xs @ ys)"
+end
+\end{ttbox}
+
+The reduction rules become part of the ML structure \verb$Append$ and can
+be used to prove theorems about the function:
+\begin{ttbox}
+val append_ss = HOL_ss addsimps [Append.app_Nil,Append.app_Cons];
+
+goal Append.thy "(xs @ ys) @ zs = xs @ (ys @ zs)";
+by (MyList.list.induct_tac "xs" 1);
+by (ALLGOALS(asm_simp_tac append_ss));
+\end{ttbox}
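+
+Further primitive recursive functions over the same datatype can be added
+in later theories.  For instance, a naive list reversal could be defined
+in terms of the append function above.  The theory {\tt Rev} and the
+constant {\tt rev} are purely illustrative and not part of the
+distribution:
+\begin{ttbox}
+Rev = Append +                        (*illustrative theory only*)
+consts rev :: "'a list => 'a list"
+primrec rev MyList.list
+   rev_Nil   "rev [] = []"
+   rev_Cons  "rev (x#xs) = (rev xs) @ (x#[])"
+end
+\end{ttbox}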
+
+%Note that underdefined primitive recursive functions are allowed:
+%\begin{ttbox}
+%Tl = MyList +
+%consts tl  :: "'a list => 'a list"
+%primrec tl MyList.list
+%   tl_Cons "tl(x#xs) = xs"
+%end
+%\end{ttbox}
+%Nevertheless {\tt tl} is total, although we do not know what the result of
+%\verb$tl([])$ is.
+
+\index{primitive recursion|)}
+\index{*primrec|)}
+\index{*datatype|)}
+
+
+\section{Inductive and coinductive definitions}
+\index{*inductive|(}
+\index{*coinductive|(}
+
+An {\bf inductive definition} specifies the least set closed under given
+rules.  For example, a structural operational semantics is an inductive
+definition of an evaluation relation.  Dually, a {\bf coinductive
+  definition} specifies the greatest set closed under given rules.  An
+important example is using bisimulation relations to formalize equivalence
+of processes and infinite data structures.
+
+A theory file may contain any number of inductive and coinductive
+definitions.  They may be intermixed with other declarations; in
+particular, the (co)inductive sets {\bf must} be declared separately as
+constants, and may have mixfix syntax or be subject to syntax translations.
+
+Each (co)inductive definition adds definitions to the theory and also
+proves some theorems.  Each definition creates an ML structure, which is a
+substructure of the main theory structure.
+
+This package is derived from the ZF one, described in a
+separate paper,\footnote{It appeared in CADE~\cite{paulson-CADE} and a
+  longer version is distributed with Isabelle.} which you should refer to
+in case of difficulties.  The package is simpler than ZF's, thanks to CHOL's
+automatic type-checking.  The type of the (co)inductive set determines the
+domain of the fixedpoint definition, and the package does not use inference
+rules for type-checking.
+
+
+\subsection{The result structure}
+Many of the result structure's components have been discussed in the paper;
+others are self-explanatory.
+\begin{description}
+\item[\tt thy] is the new theory containing the recursive sets.
+
+\item[\tt defs] is the list of definitions of the recursive sets.
+
+\item[\tt mono] is a monotonicity theorem for the fixedpoint operator.
+
+\item[\tt unfold] is a fixedpoint equation for the recursive set (the union of
+the recursive sets, in the case of mutual recursion).
+
+\item[\tt intrs] is the list of introduction rules, now proved as theorems, for
+the recursive sets.  The rules are also available individually, using the
+names given them in the theory file. 
+
+\item[\tt elim] is the elimination rule.
+
+\item[\tt mk\_cases] is a function to create simplified instances of {\tt
+elim}, using freeness reasoning on some underlying datatype.
+\end{description}
+
+For an inductive definition, the result structure contains two induction rules,
+{\tt induct} and \verb|mutual_induct|.  For a coinductive definition, it
+contains the rule \verb|coinduct|.
+
+Figure~\ref{def-result-fig} summarizes the two result signatures,
+specifying the types of all these components.
+
+\begin{figure}
+\begin{ttbox}
+sig
+val thy          : theory
+val defs         : thm list
+val mono         : thm
+val unfold       : thm
+val intrs        : thm list
+val elim         : thm
+val mk_cases     : thm list -> string -> thm
+{\it(Inductive definitions only)} 
+val induct       : thm
+val mutual_induct: thm
+{\it(Coinductive definitions only)}
+val coinduct    : thm
+end
+\end{ttbox}
+\hrule
+\caption{The result of a (co)inductive definition} \label{def-result-fig}
+\end{figure}
+
+\subsection{The syntax of a (co)inductive definition}
+An inductive definition has the form
+\begin{ttbox}
+inductive    {\it inductive sets}
+  intrs      {\it introduction rules}
+  monos      {\it monotonicity theorems}
+  con_defs   {\it constructor definitions}
+\end{ttbox}
+A coinductive definition is identical, except that it starts with the keyword
+{\tt coinductive}.  
+
+The {\tt monos} and {\tt con\_defs} sections are optional.  If present,
+each is specified as a string, which must be a valid ML expression of type
+{\tt thm list}.  It is simply inserted into the {\tt .thy.ML} file; if it
+is ill-formed, it will trigger ML error messages.  You can then inspect the
+file in your directory.
+
+\begin{itemize}
+\item The {\it inductive sets} are specified by one or more strings.
+
+\item The {\it introduction rules} specify one or more introduction rules in
+  the form {\it ident\/}~{\it string}, where the identifier gives the name of
+  the rule in the result structure.
+
+\item The {\it monotonicity theorems} are required for each operator
+  applied to a recursive set in the introduction rules.  There {\bf must}
+  be a theorem of the form $A\subseteq B\Imp M(A)\subseteq M(B)$, for each
+  premise $t\in M(R_i)$ in an introduction rule!
+
+\item The {\it constructor definitions} contain definitions of constants
+  appearing in the introduction rules.  In most cases it can be omitted.
+\end{itemize}
+
+The package has a few notable restrictions:
+\begin{itemize}
+\item The theory must separately declare the recursive sets as
+  constants.
+
+\item The names of the recursive sets must be identifiers, not infix
+operators.  
+
+\item Side-conditions must not be conjunctions.  However, an introduction rule
+may contain any number of side-conditions.
+
+\item Side-conditions of the form $x=t$, where the variable~$x$ does not
+  occur in~$t$, will be substituted through the rule \verb|mutual_induct|.
+\end{itemize}
+
+
+\subsection{Example of an inductive definition}
+Two declarations, included in a theory file, define the finite powerset
+operator.  First we declare the constant~{\tt Fin}.  Then we declare it
+inductively, with two introduction rules:
+\begin{ttbox}
+consts Fin :: "'a set => 'a set set"
+inductive "Fin A"
+  intrs
+    emptyI  "{} : Fin A"
+    insertI "[| a: A;  b: Fin A |] ==> insert a b : Fin A"
+\end{ttbox}
+The resulting theory structure contains a substructure, called~{\tt Fin}.
+It contains the {\tt Fin}$~A$ introduction rules as the list {\tt Fin.intrs},
+and also individually as {\tt Fin.emptyI} and {\tt Fin.insertI}.  The induction
+rule is {\tt Fin.induct}.
+
+For another example, here is a theory file defining the accessible part of a
+relation.  The main thing to note is the use of~{\tt Pow} in the sole
+introduction rule, and the corresponding mention of the rule
+\verb|Pow_mono| in the {\tt monos} list.  The paper discusses a ZF version
+of this example in more detail.
+\begin{ttbox}
+Acc = WF + 
+consts pred :: "['b, ('a * 'b)set] => 'a set"   (*Set of predecessors*)
+       acc  :: "('a * 'a)set => 'a set"         (*Accessible part*)
+defs   pred_def  "pred x r == {y. (y,x):r}"
+inductive "acc r"
+  intrs
+     pred "pred a r: Pow(acc r) ==> a: acc r"
+  monos   "[Pow_mono]"
+end
+\end{ttbox}
+The CHOL distribution contains many other inductive definitions, such as the
+theory {\tt CHOL/ex/PropLog.thy} and the directory {\tt CHOL/IMP}.  The
+theory {\tt CHOL/ex/LList.thy} contains coinductive definitions.
+
+\index{*coinductive|)} \index{*inductive|)} \underscoreoff
+
+
+\section{The examples directories}
+Directory {\tt CHOL/Subst} contains Martin Coen's mechanisation of a theory of
+substitutions and unifiers.  It is based on Paulson's previous
+mechanisation in {\LCF}~\cite{paulson85} of Manna and Waldinger's
+theory~\cite{mw81}. 
+
+Directory {\tt CHOL/IMP} contains a mechanised version of a semantic
+equivalence proof taken from Winskel~\cite{winskel93}.  It formalises the
+denotational and operational semantics of a simple while-language, then
+proves the two equivalent.  It contains several datatype and inductive
+definitions, and demonstrates their use.
+
+Directory {\tt CHOL/ex} contains other examples and experimental proofs in
+{\CHOL}.  Here is an overview of the more interesting files.
+\begin{itemize}
+\item File {\tt cla.ML} demonstrates the classical reasoner on over sixty
+  predicate calculus theorems, ranging from simple tautologies to
+  moderately difficult problems involving equality and quantifiers.
+
+\item File {\tt meson.ML} contains an experimental implementation of the {\sc
+    meson} proof procedure, inspired by Plaisted~\cite{plaisted90}.  It is
+  much more powerful than Isabelle's classical reasoner.  But it is less
+  useful in practice because it works only for pure logic; it does not
+  accept derived rules for the set theory primitives, for example.
+
+\item File {\tt mesontest.ML} contains test data for the {\sc meson} proof
+  procedure.  These are mostly taken from Pelletier \cite{pelletier86}.
+
+\item File {\tt set.ML} proves Cantor's Theorem, which is presented in
+  \S\ref{sec:chol-cantor} below, and the Schr\"oder-Bernstein Theorem.
+
+\item Theories {\tt InSort} and {\tt Qsort} prove correctness properties of
+  insertion sort and quick sort.
+
+\item The definition of lazy lists demonstrates methods for handling
+  infinite data structures and coinduction in higher-order
+  logic~\cite{paulson-coind}.  Theory \thydx{LList} defines an operator for
+  corecursion on lazy lists, which is used to define a few simple functions
+  such as map and append.  Corecursion cannot easily define operations such
+  as filter, which can compute indefinitely before yielding the next
+  element (if any!) of the lazy list.  A coinduction principle is defined
+  for proving equations on lazy lists.
+
+\item Theory {\tt PropLog} proves the soundness and completeness of
+  classical propositional logic, given a truth table semantics.  The only
+  connective is $\imp$.  A Hilbert-style axiom system is specified, and its
+  set of theorems defined inductively.  A similar proof in \ZF{} is
+  described elsewhere~\cite{paulson-set-II}.
+
+\item Theory {\tt Term} develops an experimental recursive type definition;
+  the recursion goes through the type constructor~\tydx{list}.
+
+\item Theory {\tt Simult} constructs mutually recursive sets of trees and
+  forests, including induction and recursion rules.
+
+\item Theory {\tt MT} contains Jacob Frost's formalization~\cite{frost93} of
+  Milner and Tofte's coinduction example~\cite{milner-coind}.  This
+  substantial proof concerns the soundness of a type system for a simple
+  functional language.  The semantics of recursion is given by a cyclic
+  environment, which makes a coinductive argument appropriate.
+\end{itemize}
+
+
+\goodbreak
+\section{Example: Cantor's Theorem}\label{sec:chol-cantor}
+Cantor's Theorem states that every set has more subsets than it has
+elements.  It has become a favourite example in higher-order logic since
+it is so easily expressed:
+\[  \forall f::[\alpha,\alpha]\To bool. \exists S::\alpha\To bool.
+    \forall x::\alpha. f~x \not= S 
+\] 
+%
+Viewing types as sets, $\alpha\To bool$ represents the powerset
+of~$\alpha$.  This version states that for every function from $\alpha$ to
+its powerset, some subset is outside its range.  
+
+The Isabelle proof uses \CHOL's set theory, with the type $\alpha\,set$ and
+the operator \cdx{range}.  The set~$S$ is given as an unknown instead of a
+quantified variable so that we may inspect the subset found by the proof.
+\begin{ttbox}
+goal Set.thy "~ ?S : range(f :: 'a=>'a set)";
+{\out Level 0}
+{\out ~ ?S : range f}
+{\out  1. ~ ?S : range f}
+\end{ttbox}
+The first two steps are routine.  The rule \tdx{rangeE} replaces
+$\Var{S}\in {\tt range} f$ by $\Var{S}=f~x$ for some~$x$.
+\begin{ttbox}
+by (resolve_tac [notI] 1);
+{\out Level 1}
+{\out ~ ?S : range f}
+{\out  1. ?S : range f ==> False}
+\ttbreak
+by (eresolve_tac [rangeE] 1);
+{\out Level 2}
+{\out ~ ?S : range f}
+{\out  1. !!x. ?S = f x ==> False}
+\end{ttbox}
+Next, we apply \tdx{equalityCE}, reasoning that since $\Var{S}=f~x$,
+we have $\Var{c}\in \Var{S}$ if and only if $\Var{c}\in f~x$ for
+any~$\Var{c}$.
+\begin{ttbox}
+by (eresolve_tac [equalityCE] 1);
+{\out Level 3}
+{\out ~ ?S : range f}
+{\out  1. !!x. [| ?c3 x : ?S; ?c3 x : f x |] ==> False}
+{\out  2. !!x. [| ~ ?c3 x : ?S; ~ ?c3 x : f x |] ==> False}
+\end{ttbox}
+Now we use a bit of creativity.  Suppose that~$\Var{S}$ has the form of a
+comprehension.  Then $\Var{c}\in\{x.\Var{P}~x\}$ implies
+$\Var{P}~\Var{c}$.   Destruct-resolution using \tdx{CollectD}
+instantiates~$\Var{S}$ and creates the new assumption.
+\begin{ttbox}
+by (dresolve_tac [CollectD] 1);
+{\out Level 4}
+{\out ~ \{x. ?P7 x\} : range f}
+{\out  1. !!x. [| ?c3 x : f x; ?P7(?c3 x) |] ==> False}
+{\out  2. !!x. [| ~ ?c3 x : \{x. ?P7 x\}; ~ ?c3 x : f x |] ==> False}
+\end{ttbox}
+Forcing a contradiction between the two assumptions of subgoal~1 completes
+the instantiation of~$S$.  It is now the set $\{x. x\not\in f~x\}$, which
+is the standard diagonal construction.
+\begin{ttbox}
+by (contr_tac 1);
+{\out Level 5}
+{\out ~ \{x. ~ x : f x\} : range f}
+{\out  1. !!x. [| ~ x : \{x. ~ x : f x\}; ~ x : f x |] ==> False}
+\end{ttbox}
+The rest should be easy.  To apply \tdx{CollectI} to the negated
+assumption, we employ \ttindex{swap_res_tac}:
+\begin{ttbox}
+by (swap_res_tac [CollectI] 1);
+{\out Level 6}
+{\out ~ \{x. ~ x : f x\} : range f}
+{\out  1. !!x. [| ~ x : f x; ~ False |] ==> ~ x : f x}
+\ttbreak
+by (assume_tac 1);
+{\out Level 7}
+{\out ~ \{x. ~ x : f x\} : range f}
+{\out No subgoals!}
+\end{ttbox}
+How much creativity is required?  As it happens, Isabelle can prove this
+theorem automatically.  The classical set \ttindex{set_cs} contains rules
+for most of the constructs of \CHOL's set theory.  We must augment it with
+\tdx{equalityCE} to break up set equalities, and then apply best-first
+search.  Depth-first search would diverge, but best-first search
+successfully navigates through the large search space.
+\index{search!best-first}
+\begin{ttbox}
+choplev 0;
+{\out Level 0}
+{\out ~ ?S : range f}
+{\out  1. ~ ?S : range f}
+\ttbreak
+by (best_tac (set_cs addSEs [equalityCE]) 1);
+{\out Level 1}
+{\out ~ \{x. ~ x : f x\} : range f}
+{\out No subgoals!}
+\end{ttbox}
+
+\index{higher-order logic|)}
--- a/doc-src/Logics/logics.tex	Tue May 09 10:42:23 1995 +0200
+++ b/doc-src/Logics/logics.tex	Tue May 09 10:43:19 1995 +0200
@@ -47,7 +47,7 @@
 \include{intro}
 \include{FOL}
 \include{ZF}
-\include{HOL}
+\include{CHOL}
 \include{LK}
 %%\include{Modal}
 \include{CTT}