First draft of Springer book
author lcp
Sat, 19 Mar 1994 03:01:25 +0100
changeset 284 1072b18b2caa
parent 283 76caebd18756
child 285 fd4a6585e5bf
doc-src/Intro/advanced.tex
doc-src/Logics/CTT.tex
--- a/doc-src/Intro/advanced.tex	Thu Mar 17 17:48:37 1994 +0100
+++ b/doc-src/Intro/advanced.tex	Sat Mar 19 03:01:25 1994 +0100
@@ -1,17 +1,8 @@
 %% $Id$
-\part{Advanced methods}
+\part{Advanced Methods}
 Before continuing, it might be wise to try some of your own examples in
 Isabelle, reinforcing your knowledge of the basic functions.
-This paper is merely an introduction to Isabelle.  Two other documents
-exist:
-\begin{itemize}
-  \item {\em The Isabelle Reference Manual\/} contains information about
-most of the facilities of Isabelle, apart from particular object-logics.
 
-  \item {\em Isabelle's Object-Logics\/} describes the various logics
-distributed with Isabelle.  It also explains how to define new logics in
-Isabelle.
-\end{itemize}
 Look through {\em Isabelle's Object-Logics\/} and try proving some simple
 theorems.  You probably should begin with first-order logic ({\tt FOL}
 or~{\tt LK}).  Try working some of the examples provided, and others from
@@ -225,11 +216,13 @@
 
 
 \subsubsection{Deriving the elimination rule}
-Let us derive $(\neg E)$.  The proof follows that of~{\tt conjE}
+Let us derive the rule $(\neg E)$.  The proof follows that of~{\tt conjE}
 (\S\ref{deriving-example}), with an additional step to unfold negation in
 the major premise.  Although the {\tt goalw} command is best for this, let
-us try~\ttindex{goal}.  As usual, we bind the premises to \ML\ identifiers.
-We then apply \ttindex{FalseE}, which stands for~$(\bot E)$:
+us try~\ttindex{goal} and examine another way of unfolding definitions.
+
+As usual, we bind the premises to \ML\ identifiers.  We then apply
+\ttindex{FalseE}, which stands for~$(\bot E)$:
 \begin{ttbox}
 val [major,minor] = goal FOL.thy "[| ~P;  P |] ==> R";
 {\out Level 0}
@@ -242,7 +235,10 @@
 {\out Level 1}
 {\out R}
 {\out  1. False}
-\ttbreak
+\end{ttbox}
+Everything follows from falsity.  And we can prove falsity using the
+premises and Modus Ponens:
+\begin{ttbox}
 by (resolve_tac [mp] 1);
 {\out Level 2}
 {\out R}
@@ -261,7 +257,8 @@
 {\out R}
 {\out  1. P}
 \end{ttbox}
-Now {\tt?P1} has changed to~{\tt P}; we need only use the minor premise:
+The unknown {\tt?P1} has been instantiated to~{\tt P}, which we can prove
+using the minor premise:
 \begin{ttbox}
 by (resolve_tac [minor] 1);
 {\out Level 4}
@@ -273,9 +270,9 @@
 \indexbold{*notE}
 
 \medskip
-Again, there is a simpler way of conducting this proof.  The
-\ttindex{goalw} command unfolds definitions in the premises as well
-as the conclusion:
+Again, there is a simpler way of conducting this proof.  Recall that
+the \ttindex{goalw} command unfolds definitions in the conclusion; it also
+unfolds definitions in the premises:
 \begin{ttbox}
 val [major,minor] = goalw FOL.thy [not_def]
     "[| ~P;  P |] ==> R";
@@ -285,7 +282,8 @@
 Observe the difference in {\tt major}; the premises are now {\bf unfolded}
 and we need not call~\ttindex{rewrite_rule}.  Incidentally, the four calls
 to \ttindex{resolve_tac} above can be collapsed to one, with the help
-of~\ttindex{RS}\@:
+of~\ttindex{RS}; this is a typical example of forward reasoning from a
+complex premise.
 \begin{ttbox}
 minor RS (major RS mp RS FalseE);
 {\out val it = "?P  [P, ~P]" : thm}
@@ -295,11 +293,12 @@
 {\out No subgoals!}
 \end{ttbox}
 
-
-\medskip Finally, here is a trick that is sometimes useful.  If the goal
+\goodbreak\medskip
+Finally, here is a trick that is sometimes useful.  If the goal
 has an outermost meta-quantifier, then \ttindex{goal} and \ttindex{goalw}
-do not return the rule's premises in the list of theorems.  Instead, the
-premises become assumptions in subgoal~1:
+do not return the rule's premises in the list of theorems;  instead, the
+premises become assumptions in subgoal~1.  
+%%%It does not matter which variables are quantified over.
 \begin{ttbox}
 goalw FOL.thy [not_def] "!!P R. [| ~P;  P |] ==> R";
 {\out Level 0}
@@ -338,7 +337,7 @@
 for deriving the introduction rule~$(\neg I)$.
 
 
-\section{Defining theories}
+\section{Defining theories}\label{sec:defining-theories}
 \index{theories!defining|(}
 Isabelle makes no distinction between simple extensions of a logic --- like
 defining a type~$bool$ with constants~$true$ and~$false$ --- and defining
@@ -362,8 +361,7 @@
 associated concrete syntax.  The translations section specifies rewrite
 rules on abstract syntax trees, for defining notations and abbreviations.
 The {\ML} section contains code to perform arbitrary syntactic
-transformations.  The main declaration forms are discussed below; see {\em
-  Isabelle's Object-Logics} for full details and examples.
+transformations.  The main declaration forms are discussed below.
 
 All the declaration parts can be omitted.  In the simplest case, $T$ is
 just the union of $S@1$,~\ldots,~$S@n$.  New theories always extend one
@@ -396,7 +394,7 @@
 bindings to the old rule may persist.  Isabelle ensures that the old and
 new versions of~$T$ are not involved in the same proof.  Attempting to
 combine different versions of~$T$ yields the fatal error
-\begin{ttbox} 
+\begin{ttbox}
 Attempt to merge different versions of theory: \(T\)
 \end{ttbox}
 
@@ -424,13 +422,20 @@
         \(id@n\) "\(rule@n\)"
 \end{ttbox}
 where $id@1$, \ldots, $id@n$ are \ML{} identifiers and $rule@1$, \ldots,
-$rule@n$ are expressions of type~$prop$.  {\bf Definitions} are rules of
-the form $t\equiv u$.  Each rule {\em must\/} be enclosed in quotation marks.
+$rule@n$ are expressions of type~$prop$.  Each rule {\em must\/} be
+enclosed in quotation marks.
+
+{\bf Definitions} are rules of the form $t\equiv u$.  Normally definitions
+should be conservative, serving only as abbreviations.  As of this writing,
+Isabelle does not provide a separate declaration part for definitions; it
+is your responsibility to ensure that your definitions are conservative.
+However, Isabelle's rewriting primitives will reject $t\equiv u$ unless all
+variables free in~$u$ are also free in~$t$.
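+
+As a hypothetical illustration (the constants {\tt c} and {\tt f} below do
+not come from any distributed theory), a rule such as
+\begin{ttbox}
+rules   c_def   "c == f(x)"
+\end{ttbox}
+is not a conservative definition: the variable~$x$ is free in the
+right-hand side but not in the left, so the rewriting primitives would
+reject it.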
 
 \index{examples!of theories}
 This theory extends first-order logic with two constants {\em nand} and
-{\em xor}, and two rules defining them:
-\begin{ttbox} 
+{\em xor}, and declares rules to define them:
+\begin{ttbox}
 Gate = FOL +
 consts  nand,xor :: "[o,o] => o"
 rules   nand_def "nand(P,Q) == ~(P & Q)"
@@ -440,21 +445,28 @@
 
 
 \subsection{Declaring type constructors}
-\indexbold{type constructors!declaring}\indexbold{arities!declaring}
+\indexbold{types!declaring}\indexbold{arities!declaring} 
+%
 Types are composed of type variables and {\bf type constructors}.  Each
-type constructor has a fixed number of argument places.  For example,
-$list$ is a 1-place type constructor and $nat$ is a 0-place type
-constructor.
-
-The {\bf type declaration part} has the form
+type constructor takes a fixed number of arguments.  They are declared
+with an \ML-like syntax.  If $list$ takes one type argument, $tree$ takes
+two arguments and $nat$ takes no arguments, then these type constructors
+can be declared by
 \begin{ttbox}
-types   \(id@1\) \(k@1\)
-        \vdots
-        \(id@n\) \(k@n\)
+types 'a list
+      ('a,'b) tree
+      nat
 \end{ttbox}
-where $id@1$, \ldots, $id@n$ are identifiers and $k@1$, \ldots, $k@n$ are
-natural numbers.  It declares each $id@i$ as a type constructor with $k@i$
-argument places.
+
+The {\bf type declaration part} has the general form
+\begin{ttbox}
+types   \(tids@1\) \(id@1\)
+        \vdots
+        \(tids@n\) \(id@n\)
+\end{ttbox}
+where $id@1$, \ldots, $id@n$ are identifiers and $tids@1$, \ldots, $tids@n$
+are type argument lists as shown in the example above.  It declares each
+$id@i$ as a type constructor with the specified number of argument places.
 
 The {\bf arity declaration part} has the form
 \begin{ttbox}
@@ -465,28 +477,26 @@
 where $tycon@1$, \ldots, $tycon@n$ are identifiers and $arity@1$, \ldots,
 $arity@n$ are arities.  Arity declarations add arities to existing
 types; they complement type declarations.
-
 In the simplest case, for an 0-place type constructor, an arity is simply
 the type's class.  Let us declare a type~$bool$ of class $term$, with
-constants $tt$ and~$ff$:\footnote{In first-order logic, booleans are
-distinct from formulae, which have type $o::logic$.}
+constants $tt$ and~$ff$.  (In first-order logic, booleans are
+distinct from formulae, which have type $o::logic$.)
 \index{examples!of theories}
-\begin{ttbox} 
+\begin{ttbox}
 Bool = FOL +
-types   bool 0
+types   bool
 arities bool    :: term
 consts  tt,ff   :: "bool"
 end
 \end{ttbox}
-In the general case, type constructors take arguments.  Each type
-constructor has an {\bf arity} with respect to
-classes~(\S\ref{polymorphic}).  A $k$-place type constructor may have
-arities of the form $(s@1,\ldots,s@k)c$, where $s@1,\ldots,s@n$ are sorts
-and $c$ is a class.  Each sort specifies a type argument; it has the form
-$\{c@1,\ldots,c@m\}$, where $c@1$, \dots,~$c@m$ are classes.  Mostly we
-deal with singleton sorts, and may abbreviate them by dropping the braces.
-The arity declaration $list{::}(term)term$ is short for
-$list{::}(\{term\})term$.
+Type constructors can take arguments.  Each type constructor has an {\bf
+  arity} with respect to classes~(\S\ref{polymorphic}).  A $k$-place type
+constructor may have arities of the form $(s@1,\ldots,s@k)c$, where
+$s@1,\ldots,s@k$ are sorts and $c$ is a class.  Each sort specifies a type
+argument; it has the form $\{c@1,\ldots,c@m\}$, where $c@1$, \dots,~$c@m$
+are classes.  Mostly we deal with singleton sorts, and may abbreviate them
+by dropping the braces.  The arity $(term)term$ is short for
+$(\{term\})term$.
 
 A type constructor may be overloaded (subject to certain conditions) by
 appearing in several arity declarations.  For instance, the built-in type
@@ -494,20 +504,19 @@
 logic, it is declared also to have arity $(term,term)term$.
 
 Theory {\tt List} declares the 1-place type constructor $list$, gives
-it arity $list{::}(term)term$, and declares constants $Nil$ and $Cons$ with
+it arity $(term)term$, and declares constants $Nil$ and $Cons$ with
 polymorphic types:
 \index{examples!of theories}
-\begin{ttbox} 
+\begin{ttbox}
 List = FOL +
-types   list 1
+types   'a list
 arities list    :: (term)term
 consts  Nil     :: "'a list"
         Cons    :: "['a, 'a list] => 'a list" 
 end
 \end{ttbox}
-Multiple type and arity declarations may be abbreviated to a single line:
+Multiple arity declarations may be abbreviated to a single line:
 \begin{ttbox}
-types   \(id@1\), \ldots, \(id@n\) \(k\)
 arities \(tycon@1\), \ldots, \(tycon@n\) :: \(arity\)
 \end{ttbox}
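+For instance, assuming the 0-place type constructors $bool$ and $nat$ have
+already been declared, the single line
+\begin{ttbox}
+arities bool,nat    :: term
+\end{ttbox}
+declares both of them to be of class~$term$.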
 
@@ -520,7 +529,7 @@
 \subsection{Infixes and Mixfixes}
 \indexbold{infix operators}\index{examples!of theories}
 The constant declaration part of the theory
-\begin{ttbox} 
+\begin{ttbox}
 Gate2 = FOL +
 consts  "~&"     :: "[o,o] => o"         (infixl 35)
         "#"      :: "[o,o] => o"         (infixl 30)
@@ -540,19 +549,19 @@
 
 \indexbold{mixfix operators}
 {\bf Mixfix} operators may have arbitrary context-free syntaxes.  For example
-\begin{ttbox} 
-    If :: "[o,o,o] => o"       ("if _ then _ else _")
+\begin{ttbox}
+        If :: "[o,o,o] => o"       ("if _ then _ else _")
 \end{ttbox}
-declares a constant $If$ of type $[o,o,o] \To o$ with concrete syntax
-$if~P~then~Q~else~R$ instead of $If(P,Q,R)$.  Underscores denote argument
-positions.  Pretty-printing information can be specified in order to
-improve the layout of formulae with mixfix operations.  For details, see
-{\em Isabelle's Object-Logics}.
+declares a constant $If$ of type $[o,o,o] \To o$ with concrete syntax {\tt
+  if~$P$ then~$Q$ else~$R$} instead of {\tt If($P$,$Q$,$R$)}.  Underscores
+denote argument positions.  Pretty-printing information can be specified in
+order to improve the layout of formulae with mixfix operations.  For
+details, see {\em Isabelle's Object-Logics}.
 
 Mixfix declarations can be annotated with precedences, just like
 infixes.  The example above is just a shorthand for
-\begin{ttbox} 
-    If :: "[o,o,o] => o"       ("if _ then _ else _" [0,0,0] 1000)
+\begin{ttbox}
+        If :: "[o,o,o] => o"       ("if _ then _ else _" [0,0,0] 1000)
 \end{ttbox}
 The numeric components determine precedences.  The list of integers
 defines, for each argument position, the minimal precedence an expression
@@ -561,15 +570,15 @@
 acceptable because precedences are non-negative, and conditionals may
 appear everywhere because 1000 is the highest precedence.  On the other
 hand,
-\begin{ttbox} 
-    If :: "[o,o,o] => o"       ("if _ then _ else _" [100,0,0] 99)
+\begin{ttbox}
+        If :: "[o,o,o] => o"       ("if _ then _ else _" [100,0,0] 99)
 \end{ttbox}
-defines concrete syntax for a
-conditional whose first argument cannot have the form $if~P~then~Q~else~R$
-because it must have a precedence of at least~100.  Since expressions put in
-parentheses have maximal precedence, we may of course write 
-\begin{quote}
-\it  if (if P then Q else R) then S else T
+defines concrete syntax for a conditional whose first argument cannot have
+the form {\tt if~$P$ then~$Q$ else~$R$} because it must have a precedence
+of at least~100.  Since expressions put in parentheses have maximal
+precedence, we may of course write
+\begin{quote}\tt
+if (if $P$ then $Q$ else $R$) then $S$ else $T$
 \end{quote}
 Conditional expressions can also be written using the constant {\tt If}.
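+For example, the parenthesized conditional above could equally well be
+entered in the plain function-application syntax as
+\begin{ttbox}
+If(If(P,Q,R), S, T)
+\end{ttbox}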
 
@@ -580,7 +589,7 @@
 \index{examples!of theories}
 \begin{ttbox}
 Prod = FOL +
-types   "*" 2                                 (infixl 20)
+types   ('a,'b) "*"                           (infixl 20)
 arities "*"     :: (term,term)term
 consts  fst     :: "'a * 'b => 'a"
         snd     :: "'a * 'b => 'b"
@@ -637,9 +646,10 @@
 \index{examples!of theories}
 \begin{ttbox}
 BoolNat = Arith +
-types   bool,nat    0
+types   bool,nat
 arities bool,nat    :: arith
 consts  Suc         :: "nat=>nat"
+\ttbreak
 rules   add0        "0 + n = n::nat"
         addS        "Suc(m)+n = Suc(m+n)"
         nat1        "1 = Suc(0)"
@@ -653,7 +663,7 @@
 either type.  The type constraints in the axioms are vital.  Without
 constraints, the $x$ in $1+x = x$ would have type $\alpha{::}arith$
 and the axiom would hold for any type of class $arith$.  This would
-collapse $nat$:
+collapse $nat$ to a trivial type:
 \[ Suc(1) = Suc(0+1) = Suc(0)+1 = 1+1 = 1! \]
 The class $arith$ as defined above is more specific than necessary.  Many
 types come with a binary operation and identity~(0).  On lists,
@@ -670,10 +680,10 @@
 \subsection{Extending first-order logic with the natural numbers}
 \index{examples!of theories}
 
-The early part of this paper defines a first-order logic, including a
-type~$nat$ and the constants $0::nat$ and $Suc::nat\To nat$.  Let us
-introduce the Peano axioms for mathematical induction and the freeness of
-$0$ and~$Suc$:
+Section\ts\ref{sec:logical-syntax} has formalized a first-order logic,
+including a type~$nat$ and the constants $0::nat$ and $Suc::nat\To nat$.
+Let us introduce the Peano axioms for mathematical induction and the
+freeness of $0$ and~$Suc$:
 \[ \vcenter{\infer[(induct)*]{P[n/x]}{P[0/x] & \infer*{P[Suc(x)/x]}{[P]}}}
  \qquad \parbox{4.5cm}{provided $x$ is not free in any assumption except~$P$}
 \]
@@ -730,7 +740,7 @@
 the 0-place type constructor $nat$ and the constants $rec$ and~$Suc$:
 \begin{ttbox}
 Nat = FOL +
-types   nat 0
+types   nat
 arities nat         :: term
 consts  "0"         :: "nat"    ("0")
         Suc         :: "nat=>nat"
@@ -753,31 +763,31 @@
 File {\tt FOL/ex/nat.ML} contains proofs involving this theory of the
 natural numbers.  As a trivial example, let us derive recursion equations
 for \verb$+$.  Here is the zero case:
-\begin{ttbox} 
+\begin{ttbox}
 goalw Nat.thy [add_def] "0+n = n";
 {\out Level 0}
 {\out 0 + n = n}
-{\out  1. rec(0,n,%x y. Suc(y)) = n}
+{\out  1. rec(0,n,\%x y. Suc(y)) = n}
 \ttbreak
 by (resolve_tac [rec_0] 1);
 {\out Level 1}
 {\out 0 + n = n}
 {\out No subgoals!}
 val add_0 = result();
-\end{ttbox} 
+\end{ttbox}
 And here is the successor case:
-\begin{ttbox} 
+\begin{ttbox}
 goalw Nat.thy [add_def] "Suc(m)+n = Suc(m+n)";
 {\out Level 0}
 {\out Suc(m) + n = Suc(m + n)}
-{\out  1. rec(Suc(m),n,%x y. Suc(y)) = Suc(rec(m,n,%x y. Suc(y)))}
+{\out  1. rec(Suc(m),n,\%x y. Suc(y)) = Suc(rec(m,n,\%x y. Suc(y)))}
 \ttbreak
 by (resolve_tac [rec_Suc] 1);
 {\out Level 1}
 {\out Suc(m) + n = Suc(m + n)}
 {\out No subgoals!}
 val add_Suc = result();
-\end{ttbox} 
+\end{ttbox}
 The induction rule raises some complications, which are discussed next.
 \index{theories!defining|)}
 
@@ -820,7 +830,7 @@
 Let us prove that no natural number~$k$ equals its own successor.  To
 use~$(induct)$, we instantiate~$\Var{n}$ to~$k$; Isabelle finds a good
 instantiation for~$\Var{P}$.
-\begin{ttbox} 
+\begin{ttbox}
 goal Nat.thy "~ (Suc(k) = k)";
 {\out Level 0}
 {\out ~Suc(k) = k}
@@ -831,13 +841,13 @@
 {\out ~Suc(k) = k}
 {\out  1. ~Suc(0) = 0}
 {\out  2. !!x. ~Suc(x) = x ==> ~Suc(Suc(x)) = Suc(x)}
-\end{ttbox} 
+\end{ttbox}
 We should check that Isabelle has correctly applied induction.  Subgoal~1
 is the base case, with $k$ replaced by~0.  Subgoal~2 is the inductive step,
 with $k$ replaced by~$Suc(x)$ and with an induction hypothesis for~$x$.
 The rest of the proof demonstrates~\ttindex{notI}, \ttindex{notE} and the
 other rules of~\ttindex{Nat.thy}.  The base case holds by~\ttindex{Suc_neq_0}:
-\begin{ttbox} 
+\begin{ttbox}
 by (resolve_tac [notI] 1);
 {\out Level 2}
 {\out ~Suc(k) = k}
@@ -848,10 +858,11 @@
 {\out Level 3}
 {\out ~Suc(k) = k}
 {\out  1. !!x. ~Suc(x) = x ==> ~Suc(Suc(x)) = Suc(x)}
-\end{ttbox} 
+\end{ttbox}
 The inductive step holds by the contrapositive of~\ttindex{Suc_inject}.
-Using the negation rule, we assume $Suc(Suc(x)) = Suc(x)$ and prove $Suc(x)=x$:
-\begin{ttbox} 
+Negation rules transform the subgoal into that of proving $Suc(x)=x$ from
+$Suc(Suc(x)) = Suc(x)$:
+\begin{ttbox}
 by (resolve_tac [notI] 1);
 {\out Level 4}
 {\out ~Suc(k) = k}
@@ -866,7 +877,7 @@
 {\out Level 6}
 {\out ~Suc(k) = k}
 {\out No subgoals!}
-\end{ttbox} 
+\end{ttbox}
 
 
 \subsection{An example of ambiguity in {\tt resolve_tac}}
@@ -875,7 +886,7 @@
 not actually needed.  Almost by chance, \ttindex{resolve_tac} finds the right
 instantiation for~$(induct)$ to yield the desired next state.  With more
 complex formulae, our luck fails.  
-\begin{ttbox} 
+\begin{ttbox}
 goal Nat.thy "(k+m)+n = k+(m+n)";
 {\out Level 0}
 {\out k + m + n = k + (m + n)}
@@ -886,59 +897,63 @@
 {\out k + m + n = k + (m + n)}
 {\out  1. k + m + n = 0}
 {\out  2. !!x. k + m + n = x ==> k + m + n = Suc(x)}
-\end{ttbox} 
-This proof requires induction on~$k$.  But the 0 in subgoal~1 indicates
-that induction has been applied to the term~$k+(m+n)$.  The
-\ttindex{back} command causes backtracking to an alternative
-outcome of the tactic.  
-\begin{ttbox} 
+\end{ttbox}
+This proof requires induction on~$k$.  The occurrence of~0 in subgoal~1
+indicates that induction has been applied to the term~$k+(m+n)$; this
+application is sound but will not lead to a proof here.  Fortunately,
+Isabelle can (lazily!) generate all the valid applications of induction.
+The \ttindex{back} command causes backtracking to an alternative outcome of
+the tactic.
+\begin{ttbox}
 back();
 {\out Level 1}
 {\out k + m + n = k + (m + n)}
 {\out  1. k + m + n = k + 0}
 {\out  2. !!x. k + m + n = k + x ==> k + m + n = k + Suc(x)}
-\end{ttbox} 
-Now induction has been applied to~$m+n$.  Let us call \ttindex{back}
-again.
-\begin{ttbox} 
+\end{ttbox}
+Now induction has been applied to~$m+n$.  This is equally useless.  Let us
+call \ttindex{back} again.
+\begin{ttbox}
 back();
 {\out Level 1}
 {\out k + m + n = k + (m + n)}
 {\out  1. k + m + 0 = k + (m + 0)}
-{\out  2. !!x. k + m + x = k + (m + x) ==> k + m + Suc(x) = k + (m + Suc(x))}
-\end{ttbox} 
+{\out  2. !!x. k + m + x = k + (m + x) ==>}
+{\out          k + m + Suc(x) = k + (m + Suc(x))}
+\end{ttbox}
 Now induction has been applied to~$n$.  What is the next alternative?
-\begin{ttbox} 
+\begin{ttbox}
 back();
 {\out Level 1}
 {\out k + m + n = k + (m + n)}
 {\out  1. k + m + n = k + (m + 0)}
 {\out  2. !!x. k + m + n = k + (m + x) ==> k + m + n = k + (m + Suc(x))}
-\end{ttbox} 
+\end{ttbox}
 Inspecting subgoal~1 reveals that induction has been applied to just the
 second occurrence of~$n$.  This perfectly legitimate induction is useless
 here.  The main goal admits fourteen different applications of induction.
 The number is exponential in the size of the formula.
 
 \subsection{Proving that addition is associative}
-\index{associativity of addition}
-\index{examples!of rewriting}
-Let us do the proof properly, using~\ttindex{res_inst_tac}.  At the same
-time, we shall have a glimpse at Isabelle's rewriting tactics, which are
-described in the {\em Reference Manual}.
+Let us invoke the induction rule properly,
+using~\ttindex{res_inst_tac}.  At the same time, we shall have a glimpse at
+Isabelle's rewriting tactics, which are described in the {\em Reference
+  Manual}.
 
-\index{rewriting!object-level} 
+\index{rewriting!object-level}\index{examples!of rewriting} 
+
 Isabelle's rewriting tactics repeatedly applies equations to a subgoal,
 simplifying or proving it.  For efficiency, the rewriting rules must be
-packaged into a \bfindex{simplification set}.  Let us include the equations
+packaged into a \bfindex{simplification set}, or {\bf simpset}.  We take
+the standard simpset for first-order logic and insert the equations
 for~{\tt add} proved in the previous section, namely $0+n=n$ and ${\tt
-  Suc}(m)+n={\tt Suc}(m+n)$: 
-\begin{ttbox} 
+  Suc}(m)+n={\tt Suc}(m+n)$:
+\begin{ttbox}
 val add_ss = FOL_ss addrews [add_0, add_Suc];
-\end{ttbox} 
+\end{ttbox}
 We state the goal for associativity of addition, and
 use \ttindex{res_inst_tac} to invoke induction on~$k$:
-\begin{ttbox} 
+\begin{ttbox}
 goal Nat.thy "(k+m)+n = k+(m+n)";
 {\out Level 0}
 {\out k + m + n = k + (m + n)}
@@ -948,34 +963,36 @@
 {\out Level 1}
 {\out k + m + n = k + (m + n)}
 {\out  1. 0 + m + n = 0 + (m + n)}
-{\out  2. !!x. x + m + n = x + (m + n) ==> Suc(x) + m + n = Suc(x) + (m + n)}
-\end{ttbox} 
+{\out  2. !!x. x + m + n = x + (m + n) ==>}
+{\out          Suc(x) + m + n = Suc(x) + (m + n)}
+\end{ttbox}
 The base case holds easily; both sides reduce to $m+n$.  The
 tactic~\ttindex{simp_tac} rewrites with respect to the given simplification
 set, applying the rewrite rules for~{\tt +}:
-\begin{ttbox} 
+\begin{ttbox}
 by (simp_tac add_ss 1);
 {\out Level 2}
 {\out k + m + n = k + (m + n)}
-{\out  1. !!x. x + m + n = x + (m + n) ==> Suc(x) + m + n = Suc(x) + (m + n)}
-\end{ttbox} 
+{\out  1. !!x. x + m + n = x + (m + n) ==>}
+{\out          Suc(x) + m + n = Suc(x) + (m + n)}
+\end{ttbox}
 The inductive step requires rewriting by the equations for~{\tt add}
 together the induction hypothesis, which is also an equation.  The
 tactic~\ttindex{asm_simp_tac} rewrites using a simplification set and any
 useful assumptions:
-\begin{ttbox} 
+\begin{ttbox}
 by (asm_simp_tac add_ss 1);
 {\out Level 3}
 {\out k + m + n = k + (m + n)}
 {\out No subgoals!}
-\end{ttbox} 
+\end{ttbox}
 
 
-\section{A {\sc Prolog} interpreter}
+\section{A Prolog interpreter}
 \index{Prolog interpreter|bold}
-To demonstrate the power of tacticals, let us construct a {\sc Prolog}
+To demonstrate the power of tacticals, let us construct a Prolog
 interpreter and execute programs involving lists.\footnote{To run these
-examples, see the file {\tt FOL/ex/prolog.ML}.} The {\sc Prolog} program
+examples, see the file {\tt FOL/ex/prolog.ML}.} The Prolog program
 consists of a theory.  We declare a type constructor for lists, with an
 arity declaration to say that $(\tau)list$ is of class~$term$
 provided~$\tau$ is:
@@ -984,7 +1001,7 @@
 \end{eqnarray*}
 We declare four constants: the empty list~$Nil$; the infix list
 constructor~{:}; the list concatenation predicate~$app$; the list reverse
-predicate~$rev$.  (In {\sc Prolog}, functions on lists are expressed as
+predicate~$rev$.  (In Prolog, functions on lists are expressed as
 predicates.)
 \begin{eqnarray*}
     Nil         & :: & \alpha list \\
@@ -992,7 +1009,7 @@
     app & :: & [\alpha list,\alpha list,\alpha list] \To o \\
     rev & :: & [\alpha list,\alpha list] \To o 
 \end{eqnarray*}
-The predicate $app$ should satisfy the {\sc Prolog}-style rules
+The predicate $app$ should satisfy the Prolog-style rules
 \[ {app(Nil,ys,ys)} \qquad
    {app(xs,ys,zs) \over app(x:xs, ys, x:zs)} \]
 We define the naive version of $rev$, which calls~$app$:
@@ -1020,7 +1037,7 @@
 end
 \end{ttbox}
 \subsection{Simple executions}
-Repeated application of the rules solves {\sc Prolog} goals.  Let us
+Repeated application of the rules solves Prolog goals.  Let us
 append the lists $[a,b,c]$ and~$[d,e]$.  As the rules are applied, the
 answer builds up in~{\tt ?x}.
 \begin{ttbox}
@@ -1052,7 +1069,7 @@
 {\out No subgoals!}
 \end{ttbox}
 
-{\sc Prolog} can run functions backwards.  Which list can be appended
+Prolog can run functions backwards.  Which list can be appended
 with $[c,d]$ to produce $[a,b,c,d]$?
 Using \ttindex{REPEAT}, we find the answer at once, $[a,b]$:
 \begin{ttbox}
@@ -1068,11 +1085,11 @@
 \end{ttbox}
 
 
-\subsection{Backtracking}
-\index{backtracking}
+\subsection{Backtracking}\index{backtracking}
+Prolog backtracking can handle questions that have multiple solutions.
 Which lists $x$ and $y$ can be appended to form the list $[a,b,c,d]$?
-Using \ttindex{REPEAT} to apply the rules, we quickly find the solution
-$x=[]$ and $y=[a,b,c,d]$:
+Using \ttindex{REPEAT} to apply the rules, we quickly find the first
+solution, namely $x=[]$ and $y=[a,b,c,d]$:
 \begin{ttbox}
 goal Prolog.thy "app(?x, ?y, a:b:c:d:Nil)";
 {\out Level 0}
@@ -1084,8 +1101,8 @@
 {\out app(Nil, a : b : c : d : Nil, a : b : c : d : Nil)}
 {\out No subgoals!}
 \end{ttbox}
-The \ttindex{back} command returns the tactic's next outcome,
-$x=[a]$ and $y=[b,c,d]$:
+Isabelle can lazily generate all the possibilities.  The \ttindex{back}
+command returns the tactic's next outcome, namely $x=[a]$ and $y=[b,c,d]$:
 \begin{ttbox}
 back();
 {\out Level 1}
@@ -1121,7 +1138,9 @@
 goal Prolog.thy "rev(a:b:c:d:e:f:g:h:i:j:k:l:m:n:Nil, ?w)";
 {\out Level 0}
 {\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil, ?w)}
-{\out  1. rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil, ?w)}
+{\out  1. rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
+{\out         ?w)}
+\ttbreak
 val rules = [appNil,appCons,revNil,revCons];
 \ttbreak
 by (REPEAT (resolve_tac rules 1));
@@ -1168,8 +1187,8 @@
 val prolog_tac = DEPTH_FIRST (has_fewer_prems 1) 
                              (resolve_tac rules 1);
 \end{ttbox}
-Since {\sc Prolog} uses depth-first search, this tactic is a (slow!) {\sc
-Prolog} interpreter.  We return to the start of the proof (using
+Since Prolog uses depth-first search, this tactic is a (slow!) 
+Prolog interpreter.  We return to the start of the proof (using
 \ttindex{choplev}), and apply {\tt prolog_tac}:
 \begin{ttbox}
 choplev 0;
@@ -1194,6 +1213,6 @@
 {\out rev(a : b : c : d : Nil, d : c : b : a : Nil)}
 {\out No subgoals!}
 \end{ttbox}
-Although Isabelle is much slower than a {\sc Prolog} system, Isabelle
+Although Isabelle is much slower than a Prolog system, Isabelle
 tactics can exploit logic programming techniques.  
 
--- a/doc-src/Logics/CTT.tex	Thu Mar 17 17:48:37 1994 +0100
+++ b/doc-src/Logics/CTT.tex	Sat Mar 19 03:01:25 1994 +0100
@@ -159,7 +159,7 @@
 
 
 \section{Syntax}
-The constants are shown in Figure~\ref{ctt-constants}.  The infixes include
+The constants are shown in Fig.\ts\ref{ctt-constants}.  The infixes include
 the function application operator (sometimes called `apply'), and the
 2-place type operators.  Note that meta-level abstraction and application,
 $\lambda x.b$ and $f(a)$, differ from object-level abstraction and
@@ -170,10 +170,10 @@
 \indexbold{*F}\indexbold{*T}\indexbold{*SUM}\indexbold{*PROD}
 The empty type is called $F$ and the one-element type is $T$; other finite
 sets are built as $T+T+T$, etc.  The notation for~{\CTT}
-(Figure~\ref{ctt-syntax}) is based on that of Nordstr\"om et
+(Fig.\ts\ref{ctt-syntax}) is based on that of Nordstr\"om et
 al.~\cite{nordstrom90}.  We can write
 \begin{ttbox}
-SUM y:B. PROD x:A. C(x,y)   {\rm for}   Sum(B, %y. Prod(A, %x. C(x,y)))
+SUM y:B. PROD x:A. C(x,y)   {\rm for}   Sum(B, \%y. Prod(A, \%x. C(x,y)))
 \end{ttbox}
 The special cases as \hbox{\tt$A$*$B$} and \hbox{\tt$A$-->$B$} abbreviate
 general sums and products over a constant family.\footnote{Unlike normal
@@ -222,20 +222,20 @@
 
 \idx{NE}        [| p: N;  a: C(0);  
              !!u v. [| u: N; v: C(u) |] ==> b(u,v): C(succ(u)) 
-          |] ==> rec(p, a, %u v.b(u,v)) : C(p)
+          |] ==> rec(p, a, \%u v.b(u,v)) : C(p)
 
 \idx{NEL}       [| p = q : N;  a = c : C(0);  
              !!u v. [| u: N; v: C(u) |] ==> b(u,v)=d(u,v): C(succ(u))
-          |] ==> rec(p, a, %u v.b(u,v)) = rec(q,c,d) : C(p)
+          |] ==> rec(p, a, \%u v.b(u,v)) = rec(q,c,d) : C(p)
 
 \idx{NC0}       [| a: C(0);  
              !!u v. [| u: N; v: C(u) |] ==> b(u,v): C(succ(u))
-          |] ==> rec(0, a, %u v.b(u,v)) = a : C(0)
+          |] ==> rec(0, a, \%u v.b(u,v)) = a : C(0)
 
 \idx{NC_succ}   [| p: N;  a: C(0);  
              !!u v. [| u: N; v: C(u) |] ==> b(u,v): C(succ(u)) 
-          |] ==> rec(succ(p), a, %u v.b(u,v)) =
-                 b(p, rec(p, a, %u v.b(u,v))) : C(succ(p))
+          |] ==> rec(succ(p), a, \%u v.b(u,v)) =
+                 b(p, rec(p, a, \%u v.b(u,v))) : C(succ(p))
 
 \idx{zero_ne_succ}      [| a: N;  0 = succ(a) : N |] ==> 0: F
 \end{ttbox}
@@ -277,18 +277,18 @@
 
 \idx{SumE}      [| p: SUM x:A.B(x);  
              !!x y. [| x:A; y:B(x) |] ==> c(x,y): C(<x,y>) 
-          |] ==> split(p, %x y.c(x,y)) : C(p)
+          |] ==> split(p, \%x y.c(x,y)) : C(p)
 
 \idx{SumEL}     [| p=q : SUM x:A.B(x); 
              !!x y. [| x:A; y:B(x) |] ==> c(x,y)=d(x,y): C(<x,y>)
-          |] ==> split(p, %x y.c(x,y)) = split(q, %x y.d(x,y)) : C(p)
+          |] ==> split(p, \%x y.c(x,y)) = split(q, \%x y.d(x,y)) : C(p)
 
 \idx{SumC}      [| a: A;  b: B(a);
              !!x y. [| x:A; y:B(x) |] ==> c(x,y): C(<x,y>)
-          |] ==> split(<a,b>, %x y.c(x,y)) = c(a,b) : C(<a,b>)
+          |] ==> split(<a,b>, \%x y.c(x,y)) = c(a,b) : C(<a,b>)
 
-\idx{fst_def}   fst(a) == split(a, %x y.x)
-\idx{snd_def}   snd(a) == split(a, %x y.y)
+\idx{fst_def}   fst(a) == split(a, \%x y.x)
+\idx{snd_def}   snd(a) == split(a, \%x y.y)
 \end{ttbox}
 \caption{Rules for the sum type $\sum@{x\in A}B[x]$} \label{ctt-sum}
 \end{figure}
@@ -308,23 +308,23 @@
 \idx{PlusE}     [| p: A+B;
              !!x. x:A ==> c(x): C(inl(x));  
              !!y. y:B ==> d(y): C(inr(y))
-          |] ==> when(p, %x.c(x), %y.d(y)) : C(p)
+          |] ==> when(p, \%x.c(x), \%y.d(y)) : C(p)
 
 \idx{PlusEL}    [| p = q : A+B;
              !!x. x: A ==> c(x) = e(x) : C(inl(x));   
              !!y. y: B ==> d(y) = f(y) : C(inr(y))
-          |] ==> when(p, %x.c(x), %y.d(y)) = 
-                 when(q, %x.e(x), %y.f(y)) : C(p)
+          |] ==> when(p, \%x.c(x), \%y.d(y)) = 
+                 when(q, \%x.e(x), \%y.f(y)) : C(p)
 
 \idx{PlusC_inl} [| a: A;
              !!x. x:A ==> c(x): C(inl(x));  
              !!y. y:B ==> d(y): C(inr(y))
-          |] ==> when(inl(a), %x.c(x), %y.d(y)) = c(a) : C(inl(a))
+          |] ==> when(inl(a), \%x.c(x), \%y.d(y)) = c(a) : C(inl(a))
 
 \idx{PlusC_inr} [| b: B;
              !!x. x:A ==> c(x): C(inl(x));  
              !!y. y:B ==> d(y): C(inr(y))
-          |] ==> when(inr(b), %x.c(x), %y.d(y)) = d(b) : C(inr(b))
+          |] ==> when(inr(b), \%x.c(x), \%y.d(y)) = d(b) : C(inr(b))
 \end{ttbox}
 \caption{Rules for the binary sum type $A+B$} \label{ctt-plus}
 \end{figure}
@@ -406,7 +406,7 @@
 note that the only rule that makes use of {\tt Reduce} is \ttindex{trans_red},
 whose first premise ensures that $a$ and $b$ (and thus $c$) are well-typed.
 
-Derived rules are shown in Figure~\ref{ctt-derived}.  The rule
+Derived rules are shown in Fig.\ts\ref{ctt-derived}.  The rule
 \ttindex{subst_prodE} is derived from \ttindex{prodE}, and is easier to
 use in backwards proof.  The rules \ttindex{SumE_fst} and
 \ttindex{SumE_snd} express the typing of~\ttindex{fst} and~\ttindex{snd};
@@ -589,16 +589,16 @@
 
 \begin{figure} 
 \begin{ttbox}
-\idx{add_def}           a#+b  == rec(a, b, %u v.succ(v))  
-\idx{diff_def}          a-b   == rec(b, a, %u v.rec(v, 0, %x y.x))  
+\idx{add_def}           a#+b  == rec(a, b, \%u v.succ(v))  
+\idx{diff_def}          a-b   == rec(b, a, \%u v.rec(v, 0, \%x y.x))  
 \idx{absdiff_def}       a|-|b == (a-b) #+ (b-a)  
-\idx{mult_def}          a#*b  == rec(a, 0, %u v. b #+ v)  
+\idx{mult_def}          a#*b  == rec(a, 0, \%u v. b #+ v)  
 
 \idx{mod_def}   a mod b == rec(a, 0,
-                      %u v. rec(succ(v) |-| b, 0, %x y.succ(v)))  
+                      \%u v. rec(succ(v) |-| b, 0, \%x y.succ(v)))  
 
 \idx{div_def}   a div b == rec(a, 0,
-                      %u v. rec(succ(u) mod b, succ(v), %x y.v))
+                      \%u v. rec(succ(u) mod b, succ(v), \%x y.v))
 \end{ttbox}
 \subcaption{Definitions of the operators}
 
@@ -643,7 +643,7 @@
 Figure~\ref{ctt-arith} presents the definitions and some of the key
 theorems, including commutative, distributive, and associative laws.  The
 theory has the {\ML} identifier \ttindexbold{arith.thy}.  All proofs are on
-the file \ttindexbold{CTT/arith.ML}.
+the file {\tt CTT/arith.ML}.
 
 The operators~\verb'#+', \verb'-', \verb'|-|', \verb'#*', \verb'mod'
 and~\verb'div' stand for sum, difference, absolute difference, product,
@@ -665,17 +665,17 @@
 \section{The examples directory}
 This directory contains examples and experimental proofs in {\CTT}.
 \begin{description}
-\item[\ttindexbold{CTT/ex/typechk.ML}]
+\item[{\tt CTT/ex/typechk.ML}]
 contains simple examples of type checking and type deduction.
 
-\item[\ttindexbold{CTT/ex/elim.ML}]
+\item[{\tt CTT/ex/elim.ML}]
 contains some examples from Martin-L\"of~\cite{martinlof84}, proved using 
 {\tt pc_tac}.
 
-\item[\ttindexbold{CTT/ex/equal.ML}]
+\item[{\tt CTT/ex/equal.ML}]
 contains simple examples of rewriting.
 
-\item[\ttindexbold{CTT/ex/synth.ML}]
+\item[{\tt CTT/ex/synth.ML}]
 demonstrates the use of unknowns with some trivial examples of program
 synthesis. 
 \end{description}
@@ -688,10 +688,10 @@
 unknown, takes shape in the course of the proof.  Our example is the
 predecessor function on the natural numbers.
 \begin{ttbox}
-goal CTT.thy "lam n. rec(n, 0, %x y.x) : ?A";
+goal CTT.thy "lam n. rec(n, 0, \%x y.x) : ?A";
 {\out Level 0}
-{\out lam n. rec(n,0,%x y. x) : ?A}
-{\out  1. lam n. rec(n,0,%x y. x) : ?A}
+{\out lam n. rec(n,0,\%x y. x) : ?A}
+{\out  1. lam n. rec(n,0,\%x y. x) : ?A}
 \end{ttbox}
 Since the term is a Constructive Type Theory $\lambda$-abstraction (not to
 be confused with a meta-level abstraction), we apply the rule
@@ -700,47 +700,58 @@
 \begin{ttbox}
 by (resolve_tac [ProdI] 1);
 {\out Level 1}
-{\out lam n. rec(n,0,%x y. x) : PROD x:?A1. ?B1(x)}
+{\out lam n. rec(n,0,\%x y. x) : PROD x:?A1. ?B1(x)}
 {\out  1. ?A1 type}
-{\out  2. !!n. n : ?A1 ==> rec(n,0,%x y. x) : ?B1(n)}
+{\out  2. !!n. n : ?A1 ==> rec(n,0,\%x y. x) : ?B1(n)}
 \end{ttbox}
-Subgoal~1 can be solved by instantiating~$\Var{A@1}$ to any type, but this
-could invalidate subgoal~2.  We therefore tackle the latter subgoal.  It
-asks the type of a term beginning with {\tt rec}, which can be found by
-$N$-elimination.\index{*NE}
+Subgoal~1 is too flexible.  It can be solved by instantiating~$\Var{A@1}$
+to any type, but most instantiations will invalidate subgoal~2.  We
+therefore tackle the latter subgoal.  It asks for the type of a term beginning
+with {\tt rec}, which can be found by $N$-elimination.\index{*NE}
 \begin{ttbox}
 by (eresolve_tac [NE] 2);
 {\out Level 2}
-{\out lam n. rec(n,0,%x y. x) : PROD x:N. ?C2(x,x)}
+{\out lam n. rec(n,0,\%x y. x) : PROD x:N. ?C2(x,x)}
 {\out  1. N type}
 {\out  2. !!n. 0 : ?C2(n,0)}
 {\out  3. !!n x y. [| x : N; y : ?C2(n,x) |] ==> x : ?C2(n,succ(x))}
 \end{ttbox}
-We now know~$\Var{A@1}$ is the type of natural numbers.  However, let us
-continue with subgoal~2.  What is the type of~0?\index{*NIO}
+Subgoal~1 is no longer flexible: we now know~$\Var{A@1}$ is the type of
+natural numbers.  However, let us continue proving nontrivial subgoals.
+Subgoal~2 asks, what is the type of~0?\index{*NI0}
 \begin{ttbox}
 by (resolve_tac [NI0] 2);
 {\out Level 3}
-{\out lam n. rec(n,0,%x y. x) : N --> N}
+{\out lam n. rec(n,0,\%x y. x) : N --> N}
 {\out  1. N type}
 {\out  2. !!n x y. [| x : N; y : N |] ==> x : N}
 \end{ttbox}
-The type~$\Var{A}$ is now determined.  It is $\prod@{n\in N}N$, which is
-equivalent to $N\to N$.  But we must prove all the subgoals to show that
-the original term is validly typed.  Subgoal~2 is provable by assumption
-and the remaining subgoal falls by $N$-formation.\index{*NF}
+The type~$\Var{A}$ is now fully determined.  It is the product type
+$\prod@{x\in N}N$, which is equivalent to the function type $N\to N$ because
+there is no dependence on~$x$.  But we must prove all the subgoals to show
+that the original term is validly typed.  Subgoal~2 is provable by
+assumption and the remaining subgoal falls by $N$-formation.\index{*NF}
 \begin{ttbox}
 by (assume_tac 2);
 {\out Level 4}
-{\out lam n. rec(n,0,%x y. x) : N --> N}
+{\out lam n. rec(n,0,\%x y. x) : N --> N}
 {\out  1. N type}
+\ttbreak
 by (resolve_tac [NF] 1);
 {\out Level 5}
-{\out lam n. rec(n,0,%x y. x) : N --> N}
+{\out lam n. rec(n,0,\%x y. x) : N --> N}
 {\out No subgoals!}
 \end{ttbox}
 Calling \ttindex{typechk_tac} can prove this theorem in one step.
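+For instance, after returning to the start of the proof, something along
+these lines should succeed (here {\tt typechk_tac} is given an empty list
+of additional rules):
+\begin{ttbox}
+choplev 0;
+by (typechk_tac []);
+\end{ttbox}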
 
+Even if the original term is ill-typed, the same tactics will still propose a
+type for it, but unprovable subgoals will be left.  As an exercise, try to prove the
+following invalid goal:
+\begin{ttbox}
+goal CTT.thy "lam n. rec(n, 0, \%x y.tt) : ?A";
+\end{ttbox}
+
+
 
 \section{An example of logical reasoning}
 Logical reasoning in Type Theory involves proving a goal of the form
@@ -786,8 +797,9 @@
 rather than the assumption of a goal, it cannot be found by
 \ttindex{eresolve_tac}.  We could insert it by calling
 \hbox{\tt \ttindex{cut_facts_tac} prems 1}.   Instead, let us resolve the
-$\Sigma$-elimination rule with the premises; this yields one result, which
-we supply to \ttindex{resolve_tac}.\index{*SumE}\index{*RL}
+$\Sigma$-elimination rule with the premises using~{\tt RL}; this forward
+inference yields one result, which we supply to
+\ttindex{resolve_tac}.\index{*SumE}\index{*RL} 
 \begin{ttbox}
 by (resolve_tac (prems RL [SumE]) 1);
 {\out Level 1}
@@ -796,15 +808,15 @@
 {\out        [| x : A; y : B(x) + C(x) |] ==>}
 {\out        ?c1(x,y) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
 \end{ttbox}
-The subgoal has two new parameters.  In the main goal, $\Var{a}$ has been
-instantiated with a \ttindex{split} term.  The assumption $y\in B(x) + C(x)$ is
-eliminated next, causing a case split and a new parameter.  The main goal
-now contains~\ttindex{when}.
+The subgoal has two new parameters, $x$ and~$y$.  In the main goal,
+$\Var{a}$ has been instantiated with a \ttindex{split} term.  The
+assumption $y\in B(x) + C(x)$ is eliminated next, causing a case split and
+a further parameter,~$xa$.  This step also inserts~\ttindex{when} into the main goal.
 \index{*PlusE}
 \begin{ttbox}
 by (eresolve_tac [PlusE] 1);
 {\out Level 2}
-{\out split(p,%x y. when(y,?c2(x,y),?d2(x,y)))}
+{\out split(p,\%x y. when(y,?c2(x,y),?d2(x,y)))}
 {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
 {\out  1. !!x y xa.}
 {\out        [| x : A; xa : B(x) |] ==>}
@@ -822,7 +834,7 @@
 \begin{ttbox}
 by (resolve_tac [PlusI_inl] 1);
 {\out Level 3}
-{\out split(p,%x y. when(y,%xa. inl(?a3(x,y,xa)),?d2(x,y)))}
+{\out split(p,\%x y. when(y,\%xa. inl(?a3(x,y,xa)),?d2(x,y)))}
 {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
 {\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> ?a3(x,y,xa) : SUM x:A. B(x)}
 {\out  2. !!x y xa. [| x : A; xa : B(x) |] ==> SUM x:A. C(x) type}
@@ -838,7 +850,7 @@
 \begin{ttbox}
 by (resolve_tac [SumI] 1);
 {\out Level 4}
-{\out split(p,%x y. when(y,%xa. inl(<?a4(x,y,xa),?b4(x,y,xa)>),?d2(x,y)))}
+{\out split(p,\%x y. when(y,\%xa. inl(<?a4(x,y,xa),?b4(x,y,xa)>),?d2(x,y)))}
 {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
 {\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> ?a4(x,y,xa) : A}
 {\out  2. !!x y xa. [| x : A; xa : B(x) |] ==> ?b4(x,y,xa) : B(?a4(x,y,xa))}
@@ -852,16 +864,18 @@
 \begin{ttbox}
 by (assume_tac 1);
 {\out Level 5}
-{\out split(p,%x y. when(y,%xa. inl(<x,?b4(x,y,xa)>),?d2(x,y)))}
+{\out split(p,\%x y. when(y,\%xa. inl(<x,?b4(x,y,xa)>),?d2(x,y)))}
 {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
+\ttbreak
 {\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> ?b4(x,y,xa) : B(x)}
 {\out  2. !!x y xa. [| x : A; xa : B(x) |] ==> SUM x:A. C(x) type}
 {\out  3. !!x y ya.}
 {\out        [| x : A; ya : C(x) |] ==>}
 {\out        ?d2(x,y,ya) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
+\ttbreak
 by (assume_tac 1);
 {\out Level 6}
-{\out split(p,%x y. when(y,%xa. inl(<x,xa>),?d2(x,y)))}
+{\out split(p,\%x y. when(y,\%xa. inl(<x,xa>),?d2(x,y)))}
 {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
 {\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> SUM x:A. C(x) type}
 {\out  2. !!x y ya.}
@@ -873,7 +887,7 @@
 \begin{ttbox}
 by (typechk_tac prems);
 {\out Level 7}
-{\out split(p,%x y. when(y,%xa. inl(<x,xa>),?d2(x,y)))}
+{\out split(p,\%x y. when(y,\%xa. inl(<x,xa>),?d2(x,y)))}
 {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
 {\out  1. !!x y ya.}
 {\out        [| x : A; ya : C(x) |] ==>}
@@ -884,7 +898,7 @@
 \begin{ttbox}
 by (pc_tac prems 1);
 {\out Level 8}
-{\out split(p,%x y. when(y,%xa. inl(<x,xa>),%y. inr(<x,y>)))}
+{\out split(p,\%x y. when(y,\%xa. inl(<x,xa>),\%y. inr(<x,y>)))}
 {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
 {\out No subgoals!}
 \end{ttbox}
@@ -895,18 +909,27 @@
 \section{Example: deriving a currying functional}
 In simply-typed languages such as {\ML}, a currying functional has the type 
 \[ (A\times B \to C) \to (A\to (B\to C)). \]
-Let us generalize this to~$\Sigma$ and~$\Pi$.  The argument of the
-functional is a function that maps $z:\Sigma(A,B)$ to~$C(z)$; the resulting
-function maps $x\in A$ and $y\in B(x)$ to $C(\langle x,y\rangle)$.  Here
-$B$ is a family over~$A$, while $C$ is a family over $\Sigma(A,B)$.
+Let us generalize this to~$\Sigma$ and~$\Pi$.  
+The functional takes a function~$f$ that maps $z:\Sigma(A,B)$
+to~$C(z)$; the resulting function maps $x\in A$ and $y\in B(x)$ to
+$C(\langle x,y\rangle)$.
+
+Formally, there are three typing premises.  $A$ is a type; $B$ is an
+$A$-indexed family of types; $C$ is a family of types indexed by
+$\Sigma(A,B)$.  The goal is expressed using \hbox{\tt PROD f} to ensure
+that the parameter corresponding to the functional's argument is really
+called~$f$; Isabelle echoes the type using \verb|-->| because there is no
+explicit dependence upon~$f$.
 \begin{ttbox}
 val prems = goal CTT.thy
-    "[| A type; !!x. x:A ==> B(x) type;                    \ttback
-\ttback               !!z. z: (SUM x:A. B(x)) ==> C(z) type |]   \ttback
-\ttback    ==> ?a : (PROD z : (SUM x:A . B(x)) . C(z))           \ttback
-\ttback         --> (PROD x:A . PROD y:B(x) . C(<x,y>))";
+    "[| A type; !!x. x:A ==> B(x) type;                           \ttback
+\ttback               !!z. z: (SUM x:A. B(x)) ==> C(z) type             \ttback
+\ttback    |] ==> ?a : PROD f: (PROD z : (SUM x:A . B(x)) . C(z)).      \ttback
+\ttback                     (PROD x:A . PROD y:B(x) . C(<x,y>))";
+\ttbreak
 {\out Level 0}
-{\out ?a : (PROD z:SUM x:A. B(x). C(z)) --> (PROD x:A. PROD y:B(x). C(<x,y>))}
+{\out ?a : (PROD z:SUM x:A. B(x). C(z)) -->}
+{\out      (PROD x:A. PROD y:B(x). C(<x,y>))}
 {\out  1. ?a : (PROD z:SUM x:A. B(x). C(z)) -->}
 {\out          (PROD x:A. PROD y:B(x). C(<x,y>))}
 \ttbreak
@@ -915,27 +938,32 @@
 {\out              "?z : SUM x:A. B(x) ==> C(?z) type}
 {\out               [!!z. z : SUM x:A. B(x) ==> C(z) type]"] : thm list}
 \end{ttbox}
-This is an opportunity to demonstrate \ttindex{intr_tac}.  Here, the tactic
-repeatedly applies $\Pi$-introduction, automatically proving the rather
-tiresome typing conditions.  Note that $\Var{a}$ becomes instantiated to
-three nested $\lambda$-abstractions.
+This is a chance to demonstrate \ttindex{intr_tac}.  Here, the tactic
+repeatedly applies $\Pi$-introduction, proving the rather
+tiresome typing conditions.  
+
+Note that $\Var{a}$ becomes instantiated to three nested
+$\lambda$-abstractions.  It would be easier to read if the bound variable
+names agreed with the parameters in the subgoal.  Isabelle attempts to give
+parameters the same names as corresponding bound variables in the goal, but
+this does not always work.  In any event, the goal is logically correct.
 \begin{ttbox}
 by (intr_tac prems);
 {\out Level 1}
 {\out lam x xa xb. ?b7(x,xa,xb)}
 {\out : (PROD z:SUM x:A. B(x). C(z)) --> (PROD x:A. PROD y:B(x). C(<x,y>))}
-{\out  1. !!uu x y.}
-{\out        [| uu : PROD z:SUM x:A. B(x). C(z); x : A; y : B(x) |] ==>}
-{\out        ?b7(uu,x,y) : C(<x,y>)}
+{\out  1. !!f x y.}
+{\out        [| f : PROD z:SUM x:A. B(x). C(z); x : A; y : B(x) |] ==>}
+{\out        ?b7(f,x,y) : C(<x,y>)}
 \end{ttbox}
-Using $\Pi$-elimination, we solve subgoal~1 by applying the function~$uu$.
+Using $\Pi$-elimination, we solve subgoal~1 by applying the function~$f$.
 \index{*ProdE}
 \begin{ttbox}
 by (eresolve_tac [ProdE] 1);
 {\out Level 2}
 {\out lam x xa xb. x ` <xa,xb>}
 {\out : (PROD z:SUM x:A. B(x). C(z)) --> (PROD x:A. PROD y:B(x). C(<x,y>))}
-{\out  1. !!uu x y. [| x : A; y : B(x) |] ==> <x,y> : SUM x:A. B(x)}
+{\out  1. !!f x y. [| x : A; y : B(x) |] ==> <x,y> : SUM x:A. B(x)}
 \end{ttbox}
 Finally, we exhibit a suitable argument for the function application.  This
 is straightforward using introduction rules.
@@ -970,14 +998,14 @@
 But a completely formal proof is hard to find.  Many of the rules can be
 applied in a multiplicity of ways, yielding a large number of higher-order
 unifiers.  The proof can get bogged down in the details.  But with a
-careful selection of derived rules (recall Figure~\ref{ctt-derived}) and
+careful selection of derived rules (recall Fig.\ts\ref{ctt-derived}) and
 the type checking tactics, we can prove the theorem in nine steps.
 \begin{ttbox}
 val prems = goal CTT.thy
-    "[| A type;  !!x. x:A ==> B(x) type;              \ttback
-\ttback       !!x y.[| x:A;  y:B(x) |] ==> C(x,y) type      \ttback
-\ttback    |] ==> ?a :    (PROD x:A. SUM y:B(x). C(x,y))    \ttback
-\ttback               --> (SUM f: (PROD x:A. B(x)). PROD x:A. C(x, f`x))";
+    "[| A type;  !!x. x:A ==> B(x) type;                    \ttback
+\ttback       !!x y.[| x:A;  y:B(x) |] ==> C(x,y) type            \ttback
+\ttback    |] ==> ?a : PROD h: (PROD x:A. SUM y:B(x). C(x,y)).    \ttback
+\ttback                     (SUM f: (PROD x:A. B(x)). PROD x:A. C(x, f`x))";
 {\out Level 0}
 {\out ?a : (PROD x:A. SUM y:B(x). C(x,y)) -->}
 {\out      (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
@@ -1000,18 +1028,19 @@
 {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
 {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
 \ttbreak
-{\out  1. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
-{\out        ?b7(uu,x) : B(x)}
+{\out  1. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        ?b7(h,x) : B(x)}
 \ttbreak
-{\out  2. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
-{\out        ?b8(uu,x) : C(x,(lam x. ?b7(uu,x)) ` x)}
+{\out  2. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        ?b8(h,x) : C(x,(lam x. ?b7(h,x)) ` x)}
 \end{ttbox}
 Subgoal~1 asks to find the choice function itself, taking $x\in A$ to some
-$\Var{b@7}(uu,x)\in B(x)$.  Subgoal~2 asks, given $x\in A$, for a proof
-object $\Var{b@8}(uu,x)$ to witness that the choice function's argument
-and result lie in the relation~$C$.  
+$\Var{b@7}(h,x)\in B(x)$.  Subgoal~2 asks, given $x\in A$, for a proof
+object $\Var{b@8}(h,x)$ to witness that the choice function's argument and
+result lie in the relation~$C$.  This latter task will take up most of the
+proof.
 \index{*ProdE}\index{*SumE_fst}\index{*RS}
 \begin{ttbox}
 by (eresolve_tac [ProdE RS SumE_fst] 1);
@@ -1020,27 +1049,26 @@
 {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
 {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
 \ttbreak
-{\out  1. !!uu x. x : A ==> x : A}
-{\out  2. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
-{\out        ?b8(uu,x) : C(x,(lam x. fst(uu ` x)) ` x)}
+{\out  1. !!h x. x : A ==> x : A}
+{\out  2. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        ?b8(h,x) : C(x,(lam x. fst(h ` x)) ` x)}
 \end{ttbox}
-Above, we have composed \ttindex{fst} with the function~$h$ (named~$uu$ in
-the assumptions).  Unification has deduced that the function must be
-applied to $x\in A$.
+Above, we have composed \ttindex{fst} with the function~$h$.  Unification
+has deduced that the function must be applied to $x\in A$.
 \begin{ttbox}
 by (assume_tac 1);
 {\out Level 3}
 {\out lam x. <lam xa. fst(x ` xa),lam xa. ?b8(x,xa)>}
 {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
 {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
-{\out  1. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
-{\out        ?b8(uu,x) : C(x,(lam x. fst(uu ` x)) ` x)}
+{\out  1. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        ?b8(h,x) : C(x,(lam x. fst(h ` x)) ` x)}
 \end{ttbox}
 Before we can compose \ttindex{snd} with~$h$, the arguments of $C$ must be
 simplified.  The derived rule \ttindex{replace_type} lets us replace a type
-by any equivalent type:
+by any equivalent type, shown below as the schematic term $\Var{A@{13}}(h,x)$:
 \begin{ttbox}
 by (resolve_tac [replace_type] 1);
 {\out Level 4}
@@ -1048,13 +1076,13 @@
 {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
 {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
 \ttbreak
-{\out  1. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
-{\out        C(x,(lam x. fst(uu ` x)) ` x) = ?A13(uu,x)}
+{\out  1. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        C(x,(lam x. fst(h ` x)) ` x) = ?A13(h,x)}
 \ttbreak
-{\out  2. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
-{\out        ?b8(uu,x) : ?A13(uu,x)}
+{\out  2. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        ?b8(h,x) : ?A13(h,x)}
 \end{ttbox}
 The derived rule \ttindex{subst_eqtyparg} lets us simplify a type's
 argument (by currying, $C(x)$ is a unary type operator):
@@ -1065,21 +1093,22 @@
 {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
 {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
 \ttbreak
-{\out  1. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
-{\out        (lam x. fst(uu ` x)) ` x = ?c14(uu,x) : ?A14(uu,x)}
+{\out  1. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        (lam x. fst(h ` x)) ` x = ?c14(h,x) : ?A14(h,x)}
 \ttbreak
-{\out  2. !!uu x z.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A;}
-{\out           z : ?A14(uu,x) |] ==>}
+{\out  2. !!h x z.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A;}
+{\out           z : ?A14(h,x) |] ==>}
 {\out        C(x,z) type}
 \ttbreak
-{\out  3. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
-{\out        ?b8(uu,x) : C(x,?c14(uu,x))}
+{\out  3. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        ?b8(h,x) : C(x,?c14(h,x))}
 \end{ttbox}
-The rule \ttindex{ProdC} is simply $\beta$-reduction.  The term
-$\Var{c@{14}}(uu,x)$ receives the simplified form, $f{\tt`}x$.
+Subgoal~1 simply requires $\beta$-contraction, which is the rule
+\ttindex{ProdC}.  The term $\Var{c@{14}}(h,x)$ in the last subgoal
+receives the contracted result.
 \begin{ttbox}
 by (resolve_tac [ProdC] 1);
 {\out Level 6}
@@ -1087,35 +1116,37 @@
 {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
 {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
 \ttbreak
-{\out  1. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==> x : ?A15(uu,x)}
+{\out  1. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        x : ?A15(h,x)}
 \ttbreak
-{\out  2. !!uu x xa.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A;}
-{\out           xa : ?A15(uu,x) |] ==>}
-{\out        fst(uu ` xa) : ?B15(uu,x,xa)}
+{\out  2. !!h x xa.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A;}
+{\out           xa : ?A15(h,x) |] ==>}
+{\out        fst(h ` xa) : ?B15(h,x,xa)}
 \ttbreak
-{\out  3. !!uu x z.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A;}
-{\out           z : ?B15(uu,x,x) |] ==>}
+{\out  3. !!h x z.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A;}
+{\out           z : ?B15(h,x,x) |] ==>}
 {\out        C(x,z) type}
 \ttbreak
-{\out  4. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
-{\out        ?b8(uu,x) : C(x,fst(uu ` x))}
+{\out  4. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        ?b8(h,x) : C(x,fst(h ` x))}
 \end{ttbox}
 Routine type checking goals proliferate in Constructive Type Theory, but
 \ttindex{typechk_tac} quickly solves them.  Note the inclusion of
-\ttindex{SumE_fst}.
+\ttindex{SumE_fst} along with the premises.
 \begin{ttbox}
 by (typechk_tac (SumE_fst::prems));
 {\out Level 7}
 {\out lam x. <lam xa. fst(x ` xa),lam xa. ?b8(x,xa)>}
 {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
 {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
-{\out  1. !!uu x.}
-{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
-{\out        ?b8(uu,x) : C(x,fst(uu ` x))}
+\ttbreak
+{\out  1. !!h x.}
+{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
+{\out        ?b8(h,x) : C(x,fst(h ` x))}
 \end{ttbox}
 We are finally ready to compose \ttindex{snd} with~$h$.
 \index{*ProdE}\index{*SumE_snd}\index{*RS}
@@ -1125,9 +1156,10 @@
 {\out lam x. <lam xa. fst(x ` xa),lam xa. snd(x ` xa)>}
 {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
 {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
-{\out  1. !!uu x. x : A ==> x : A}
-{\out  2. !!uu x. x : A ==> B(x) type}
-{\out  3. !!uu x xa. [| x : A; xa : B(x) |] ==> C(x,xa) type}
+\ttbreak
+{\out  1. !!h x. x : A ==> x : A}
+{\out  2. !!h x. x : A ==> B(x) type}
+{\out  3. !!h x xa. [| x : A; xa : B(x) |] ==> C(x,xa) type}
 \end{ttbox}
 The proof object has reached its final form.  We call \ttindex{typechk_tac}
 to finish the type checking.