lcp's pass over the book, chapters 1-8
author paulson
Fri, 12 Jan 2001 16:32:01 +0100
changeset 10885 90695f46440b
parent 10884 2995639c6a09
child 10886 f6b16554720d
lcp's pass over the book, chapters 1-8
doc-src/TutorialI/Advanced/Partial.thy
doc-src/TutorialI/Advanced/WFrec.thy
doc-src/TutorialI/Advanced/advanced.tex
doc-src/TutorialI/Advanced/simp.thy
doc-src/TutorialI/CTL/CTL.thy
doc-src/TutorialI/CTL/CTLind.thy
doc-src/TutorialI/CodeGen/CodeGen.thy
doc-src/TutorialI/Ifexpr/Ifexpr.thy
doc-src/TutorialI/Misc/AdvancedInd.thy
doc-src/TutorialI/Misc/Itrev.thy
doc-src/TutorialI/Misc/case_exprs.thy
doc-src/TutorialI/Misc/simp.thy
doc-src/TutorialI/Misc/types.thy
doc-src/TutorialI/Recdef/Nested0.thy
doc-src/TutorialI/Recdef/Nested1.thy
doc-src/TutorialI/Recdef/Nested2.thy
doc-src/TutorialI/Recdef/simplification.thy
doc-src/TutorialI/ToyList/ToyList.thy
doc-src/TutorialI/Types/Axioms.thy
doc-src/TutorialI/Types/Overloading0.thy
doc-src/TutorialI/Types/Overloading1.thy
doc-src/TutorialI/Types/Overloading2.thy
doc-src/TutorialI/Types/Pairs.thy
doc-src/TutorialI/Types/Typedef.thy
doc-src/TutorialI/Types/types.tex
doc-src/TutorialI/basics.tex
doc-src/TutorialI/fp.tex
doc-src/TutorialI/tutorial.tex
--- a/doc-src/TutorialI/Advanced/Partial.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Advanced/Partial.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -34,7 +34,7 @@
 matching.
 *}
 
-subsubsection{*Guarded recursion*}
+subsubsection{*Guarded Recursion*}
 
 text{* Neither \isacommand{primrec} nor \isacommand{recdef} allows us to
 prefix an equation with a condition in the way ordinary definitions do
@@ -58,8 +58,8 @@
 text{*\noindent Of course we could also have defined
 @{term"divi(m,0)"} to be some specific number, for example 0. The
 latter option is chosen for the predefined @{text div} function, which
-simplifies proofs at the expense of moving further away from the
-standard mathematical divison function.
+simplifies proofs at the expense of deviating from the
+standard mathematical division function.
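+
+Schematically, the guarded definition just discussed has this shape (a
+sketch; @{text arbitrary} stands for the unspecified case):
+\begin{quote}
+@{text"divi(m,n) = (if n = 0 then arbitrary"}\\
+@{text"             else if m < n then 0 else divi(m-n,n)+1)"}
+\end{quote}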
 
 As a more substantial example we consider the problem of searching a graph.
 For simplicity our graph is given by a function (@{term f}) of
@@ -130,11 +130,11 @@
 apply simp
 done
 
-subsubsection{*The {\tt\slshape while} combinator*}
+subsubsection{*The {\tt\slshape while} Combinator*}
 
 text{*If the recursive function happens to be tail recursive, its
 definition becomes a triviality if based on the predefined \isaindexbold{while}
-combinator.  The latter lives in the library theory
+combinator.  The latter lives in the Library theory
 \isa{While_Combinator}, which is not part of @{text Main} but needs to
 be included explicitly among the ancestor theories.
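+
+To fix notation: @{term while} has (roughly) the type
+@{text"('a \<Rightarrow> bool) \<Rightarrow> ('a \<Rightarrow> 'a) \<Rightarrow> 'a \<Rightarrow> 'a"},
+and @{text"while b c s"} applies the body @{text c} repeatedly to the
+state @{text s} as long as the test @{text b} holds.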
 
@@ -166,9 +166,9 @@
 correctness of loops expressed with @{term while}:
 @{thm[display,margin=50]while_rule[no_vars]} @{term P} needs to be
 true of the initial state @{term s} and invariant under @{term c}
-(premises 1 and 2).The post-condition @{term Q} must become true when
-leaving the loop (premise 3). And each loop iteration must descend
-along a well-founded relation @{term r} (premises 4 and 5).
+(premises 1 and~2). The post-condition @{term Q} must become true when
+leaving the loop (premise~3). And each loop iteration must descend
+along a well-founded relation @{term r} (premises 4 and~5).
 
 Let us now prove that @{term find2} does indeed find a fixed point. Instead
 of induction we apply the above while rule, suitably instantiated.
@@ -176,9 +176,9 @@
 by @{text auto} but falls to @{text simp}:
 *}
 
-lemma lem: "\<lbrakk> wf(step1 f); x' = f x \<rbrakk> \<Longrightarrow> \<exists>y y'.
-   while (\<lambda>(x,x'). x' \<noteq> x) (\<lambda>(x,x'). (x',f x')) (x,x') = (y,y') \<and>
-   y' = y \<and> f y = y"
+lemma lem: "\<lbrakk> wf(step1 f); x' = f x \<rbrakk> \<Longrightarrow> 
+   \<exists>y. while (\<lambda>(x,x'). x' \<noteq> x) (\<lambda>(x,x'). (x',f x')) (x,x') = (y,y) \<and>
+       f y = y"
 apply(rule_tac P = "\<lambda>(x,x'). x' = f x" and
                r = "inv_image (step1 f) fst" in while_rule);
 apply auto
@@ -205,8 +205,8 @@
 program. This is in stark contrast to guarded recursion as introduced
 above, which requires an explicit test @{prop"x \<in> dom f"} in the
 function body.  Unless @{term dom} is trivial, this leads to a
-definition which is either not at all executable or prohibitively
-expensive. Thus, if you are aiming for an efficiently executable definition
+definition that is impossible to execute (or prohibitively slow).
+Thus, if you are aiming for an efficiently executable definition
 of a partial function, you are likely to need @{term while}.
 *}
 
--- a/doc-src/TutorialI/Advanced/WFrec.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Advanced/WFrec.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -28,11 +28,11 @@
 left-hand side of an equation and $r$ the argument of some recursive call on
 the corresponding right-hand side, induces a well-founded relation.  For a
 systematic account of termination proofs via well-founded relations see, for
-example, \cite{Baader-Nipkow}.
+example, Baader and Nipkow~\cite{Baader-Nipkow}.
 
 Each \isacommand{recdef} definition should be accompanied (after the name of
 the function) by a well-founded relation on the argument type of the
-function.  The HOL library formalizes some of the most important
+function.  HOL formalizes some of the most important
 constructions of well-founded relations (see \S\ref{sec:Well-founded}). For
 example, @{term"measure f"} is always well-founded, and the lexicographic
 product of two well-founded relations is again well-founded, which we relied
@@ -50,7 +50,7 @@
 
 text{*
 Lexicographic products of measure functions already go a long
-way. Furthermore you may embed some type in an
+way. Furthermore, you may embed a type in an
 existing well-founded relation via the inverse image construction @{term
 inv_image}. All these constructions are known to \isacommand{recdef}. Thus you
 will never have to prove well-foundedness of any relation composed
--- a/doc-src/TutorialI/Advanced/advanced.tex	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Advanced/advanced.tex	Fri Jan 12 16:32:01 2001 +0100
@@ -2,8 +2,7 @@
 
 Although we have already learned a lot about simplification, recursion and
 induction, there are some advanced proof techniques that we have not covered
-yet and which are worth knowing about if you intend to beome a serious
-(human) theorem prover. The three sections of this chapter are almost
+yet and which are worth learning. The three sections of this chapter are almost
 independent of each other and can be read in any order. Only the notion of
 \emph{congruence rules}, introduced in the section on simplification, is
 required for parts of the section on recursion.
@@ -19,12 +18,12 @@
 and how to deal with partial functions.
 
 If, after reading this section, you feel that the definition of recursive
-functions is overly and maybe unnecessarily complicated by the requirement of
+functions is overly complicated by the requirement of
 totality, you should ponder the alternative, a logic of partial functions,
 where recursive definitions are always well-formed. For a start, there are many
 such logics, and no clear winner has emerged. And in all of these logics you
 are (more or less frequently) required to reason about the definedness of
-terms explicitly. Thus one shifts definedness arguments from definition to
+terms explicitly. Thus one shifts definedness arguments from definition time to
 proof time. In HOL you may have to work hard to define a function, but proofs
 can then proceed unencumbered by worries about undefinedness.
 
--- a/doc-src/TutorialI/Advanced/simp.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Advanced/simp.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -11,9 +11,9 @@
 situation.
 *}
 
-subsection{*Advanced features*}
+subsection{*Advanced Features*}
 
-subsubsection{*Congruence rules*}
+subsubsection{*Congruence Rules*}
 
 text{*\label{sec:simp-cong}
 It is hardwired into the simplifier that while simplifying the conclusion $Q$
@@ -45,7 +45,7 @@
 congruence rule for @{text if}. Analogous rules control the evaluation of
 @{text case} expressions.
 
-You can delare your own congruence rules with the attribute @{text cong},
+You can declare your own congruence rules with the attribute @{text cong},
 either globally, in the usual manner,
 \begin{quote}
 \isacommand{declare} \textit{theorem-name} @{text"[cong]"}
@@ -59,11 +59,12 @@
 \begin{warn}
 The congruence rule @{thm[source]conj_cong}
 @{thm[display]conj_cong[no_vars]}
-is occasionally useful but not a default rule; you have to use it explicitly.
+\par\noindent
+is occasionally useful but is not a default rule; you have to declare it explicitly.
 \end{warn}
 *}
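+
+text{*\noindent
+To see its effect, here is a small example of our own (a sketch): with
+@{thm[source]conj_cong} the simplifier may use the first conjunct while
+simplifying the second, which proves
+*}
+
+lemma "(xs = [] \<and> (xs = [] \<longrightarrow> ys = [])) = (xs = [] \<and> ys = [])"
+by(simp cong: conj_cong)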
 
-subsubsection{*Permutative rewrite rules*}
+subsubsection{*Permutative Rewrite Rules*}
 
 text{*
 \index{rewrite rule!permutative|bold}
@@ -105,11 +106,11 @@
  f(f(b,c),a) \maps{A} f(b,f(c,a)) \maps{C} f(b,f(a,c)) \maps{LC} f(a,f(b,c)) \]
 
 Note that ordered rewriting for @{text"+"} and @{text"*"} on numbers is rarely
-necessary because the builtin arithmetic capabilities often take care of
-this.
+necessary because the built-in arithmetic prover often succeeds without
+such hints.
 *}
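+
+text{*\noindent
+For illustration, here is ordered rewriting at work on addition, with the
+permutative lemmas supplied explicitly (a sketch, assuming the standard
+lemma names):
+*}
+
+lemma "b + (c + a) = a + (b + (c::nat))"
+by(simp add: add_assoc add_commute add_left_commute)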
 
-subsection{*How it works*}
+subsection{*How It Works*}
 
 text{*\label{sec:SimpHow}
 Roughly speaking, the simplifier proceeds bottom-up (subterms are simplified
@@ -117,7 +118,7 @@
 proved (again by simplification). Below we explain some special features of the rewriting process.
 *}
 
-subsubsection{*Higher-order patterns*}
+subsubsection{*Higher-Order Patterns*}
 
 text{*\index{simplification rule|(}
 So far we have pretended the simplifier can deal with arbitrary
@@ -139,7 +140,7 @@
 
 If the left-hand side is not a higher-order pattern, not all is lost
 and the simplifier will still try to apply the rule, but only if it
-matches ``directly'', i.e.\ without much $\lambda$-calculus hocus
+matches \emph{directly}, i.e.\ without much $\lambda$-calculus hocus
 pocus. For example, @{text"?f ?x \<in> range ?f = True"} rewrites
 @{term"g a \<in> range g"} to @{term True}, but will fail to match
 @{text"g(h b) \<in> range(\<lambda>x. g(h x))"}.  However, you can
@@ -148,24 +149,24 @@
 ?f ?x \<Longrightarrow> ?y \<in> range ?f = True"} is fine
 as a conditional rewrite rule since conditions can be arbitrary
 terms. However, this trick is not a panacea because the newly
-introduced conditions may be hard to prove, which has to take place
+introduced conditions may be hard to prove, and they must be discharged
 before the rule can actually be applied.
   
-There is basically no restriction on the form of the right-hand
+There is no restriction on the form of the right-hand
 sides.  They may not contain extraneous term or type variables, though.
 *}
 
-subsubsection{*The preprocessor*}
+subsubsection{*The Preprocessor*}
 
 text{*\label{sec:simp-preprocessor}
-When some theorem is declared a simplification rule, it need not be a
+When a theorem is declared a simplification rule, it need not be a
 conditional equation already.  The simplifier will turn it into a set of
-conditional equations automatically.  For example, given @{prop"f x =
-g x & h x = k x"} the simplifier will turn this into the two separate
-simplifiction rules @{prop"f x = g x"} and @{prop"h x = k x"}. In
+conditional equations automatically.  For example, @{prop"f x =
+g x & h x = k x"} becomes the two separate
+simplification rules @{prop"f x = g x"} and @{prop"h x = k x"}. In
 general, the input theorem is converted as follows:
 \begin{eqnarray}
-\neg P &\mapsto& P = False \nonumber\\
+\neg P &\mapsto& P = \hbox{\isa{False}} \nonumber\\
 P \longrightarrow Q &\mapsto& P \Longrightarrow Q \nonumber\\
 P \land Q &\mapsto& P,\ Q \nonumber\\
 \forall x.~P~x &\mapsto& P~\Var{x}\nonumber\\
@@ -174,11 +175,12 @@
  P \Longrightarrow Q,\ \neg P \Longrightarrow R \nonumber
 \end{eqnarray}
 Once this conversion process is finished, all remaining non-equations
-$P$ are turned into trivial equations $P = True$.
-For example, the formula \begin{center}@{prop"(p \<longrightarrow> q \<and> r) \<and> s"}\end{center}
+$P$ are turned into trivial equations $P =\isa{True}$.
+For example, the formula 
+\begin{center}@{prop"(p \<longrightarrow> t=u \<and> ~r) \<and> s"}\end{center}
 is converted into the three rules
 \begin{center}
-@{prop"p \<Longrightarrow> q = True"},\quad  @{prop"p \<Longrightarrow> r = True"},\quad  @{prop"s = True"}.
+@{prop"p \<Longrightarrow> t = u"},\quad  @{prop"p \<Longrightarrow> r = False"},\quad  @{prop"s = True"}.
 \end{center}
 \index{simplification rule|)}
 \index{simplification|)}
--- a/doc-src/TutorialI/CTL/CTL.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/CTL/CTL.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -456,10 +456,10 @@
 Let us close this section with a few words about the executability of our model checkers.
 It is clear that if all sets are finite, they can be represented as lists and the usual
 set operations are easily implemented. Only @{term lfp} requires a little thought.
-\REMARK{you had `in the library' but I assume you meant it was a 
-built in lemma.  Do we give its name?}
-Fortunately, Isabelle/HOL provides a theorem stating that 
-in the case of finite sets and a monotone @{term F},
+Fortunately, the HOL Library%
+\footnote{See theory \isa{While_Combinator_Example}.}
+provides a theorem stating that 
+in the case of finite sets and a monotone function~@{term F},
 the value of @{term"lfp F"} can be computed by iterated application of @{term F} to~@{term"{}"} until
 a fixed point is reached. It is actually possible to generate executable functional programs
 from HOL definitions, but that is beyond the scope of the tutorial.
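+
+In symbols: by monotonicity the iterates form an ascending chain
+$\{\} \subseteq F\,\{\} \subseteq F(F\,\{\}) \subseteq \cdots$, which must
+become stationary after finitely many steps if all sets are finite, and the
+first iterate $F^k\,\{\}$ with $F^{k+1}\,\{\} = F^k\,\{\}$ is @{term"lfp F"}.
+(This paraphrases the standard argument rather than the exact library
+statement.)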
--- a/doc-src/TutorialI/CTL/CTLind.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/CTL/CTLind.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -1,6 +1,6 @@
 (*<*)theory CTLind = CTL:(*>*)
 
-subsection{*CTL revisited*}
+subsection{*CTL Revisited*}
 
 text{*\label{sec:CTL-revisited}
 The purpose of this section is twofold: we want to demonstrate
@@ -55,9 +55,8 @@
 starting from @{term u}, a successor of @{term t}. Now we simply instantiate
 the @{text"\<forall>f\<in>Paths t"} in the induction hypothesis by the path starting with
 @{term t} and continuing with @{term f}. That is what the above $\lambda$-term
-expresses. That fact that this is a path starting with @{term t} and that
-the instantiated induction hypothesis implies the conclusion is shown by
-simplification.
+expresses.  Simplification shows that this is a path starting with @{term t} 
+and that the instantiated induction hypothesis implies the conclusion.
 
 Now we come to the key lemma. It says that if @{term t} can be reached by a
 finite @{term A}-avoiding path from @{term s}, then @{prop"t \<in> lfp(af A)"},
@@ -68,11 +67,12 @@
 lemma Avoid_in_lfp[rule_format(no_asm)]:
   "\<forall>p\<in>Paths s. \<exists>i. p i \<in> A \<Longrightarrow> t \<in> Avoid s A \<longrightarrow> t \<in> lfp(af A)";
 txt{*\noindent
-The trick is not to induct on @{prop"t \<in> Avoid s A"}, as already the base
-case would be a problem, but to proceed by well-founded induction @{term
-t}. Hence @{prop"t \<in> Avoid s A"} needs to be brought into the conclusion as
+The trick is not to induct on @{prop"t \<in> Avoid s A"}, as even the base
+case would be a problem, but to proceed by well-founded induction on~@{term
+t}. Hence \hbox{@{prop"t \<in> Avoid s A"}} must be brought into the conclusion as
 well, which the directive @{text rule_format} undoes at the end (see below).
-But induction with respect to which well-founded relation? The restriction
+But induction with respect to which well-founded relation? The
+one we want is the restriction
 of @{term M} to @{term"Avoid s A"}:
 @{term[display]"{(y,x). (x,y) \<in> M \<and> x \<in> Avoid s A \<and> y \<in> Avoid s A \<and> x \<notin> A}"}
 As we shall see in a moment, the absence of infinite @{term A}-avoiding paths
@@ -86,7 +86,11 @@
  apply(clarsimp);
 
 txt{*\noindent
-Now can assume additionally (induction hypothesis) that if @{prop"t \<notin> A"}
+@{subgoals[display,indent=0,margin=65]}
+\REMARK{I put in this proof state but I wonder if readers will be able to
+follow this proof. You could prove that your relation is WF in a 
+lemma beforehand, maybe omitting that proof.}
+Now the induction hypothesis states that if @{prop"t \<notin> A"}
 then all successors of @{term t} that are in @{term"Avoid s A"} are in
 @{term"lfp (af A)"}. To prove the actual goal we unfold @{term lfp} once. Now
 we have to prove that @{term t} is in @{term A} or all successors of @{term
@@ -103,14 +107,13 @@
 
 txt{*
 Having proved the main goal we return to the proof obligation that the above
-relation is indeed well-founded. This is proved by contraposition: we assume
-the relation is not well-founded. Thus there exists an infinite @{term
+relation is indeed well-founded. This is proved by contradiction: if
+the relation is not well-founded then there exists an infinite @{term
 A}-avoiding path all in @{term"Avoid s A"}, by theorem
 @{thm[source]wf_iff_no_infinite_down_chain}:
 @{thm[display]wf_iff_no_infinite_down_chain[no_vars]}
 From lemma @{thm[source]ex_infinite_path} the existence of an infinite
-@{term A}-avoiding path starting in @{term s} follows, just as required for
-the contraposition.
+@{term A}-avoiding path starting in @{term s} follows, contradiction.
 *}
 
 apply(erule contrapos_pp);
--- a/doc-src/TutorialI/CodeGen/CodeGen.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/CodeGen/CodeGen.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -2,7 +2,7 @@
 theory CodeGen = Main:
 (*>*)
 
-section{*Case study: compiling expressions*}
+section{*Case Study: Compiling Expressions*}
 
 text{*\label{sec:ExprCompiler}
 The task is to develop a compiler from a generic type of expressions (built
--- a/doc-src/TutorialI/Ifexpr/Ifexpr.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Ifexpr/Ifexpr.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -2,7 +2,7 @@
 theory Ifexpr = Main:;
 (*>*)
 
-subsection{*Case study: boolean expressions*}
+subsection{*Case Study: Boolean Expressions*}
 
 text{*\label{sec:boolex}
 The aim of this case study is twofold: it shows how to model boolean
@@ -10,7 +10,7 @@
 the constructs introduced above.
 *}
 
-subsubsection{*How can we model boolean expressions?*}
+subsubsection{*How Can We Model Boolean Expressions?*}
 
 text{*
 We want to represent boolean expressions built up from variables and
@@ -28,7 +28,7 @@
 For example, the formula $P@0 \land \neg P@1$ is represented by the term
 @{term"And (Var 0) (Neg(Var 1))"}.
 
-\subsubsection{What is the value of a boolean expression?}
+\subsubsection{What Is the Value of a Boolean Expression?}
 
 The value of a boolean expression depends on the value of its variables.
 Hence the function @{text"value"} takes an additional parameter, an
@@ -44,7 +44,7 @@
 "value (And b c) env = (value b env \<and> value c env)";
 
 text{*\noindent
-\subsubsection{If-expressions}
+\subsubsection{If-Expressions}
 
 An alternative and often more efficient (because in a certain sense
 canonical) representation is provided by so-called \emph{If-expressions} built up
@@ -66,8 +66,9 @@
                                         else valif e env)";
 
 text{*
-\subsubsection{Transformation into and of If-expressions}
+\subsubsection{Transformation Into and of If-Expressions}
 
+\REMARK{is this the title you wanted?}
 The type @{typ"boolex"} is close to the customary representation of logical
 formulae, whereas @{typ"ifex"} is designed for efficiency. It is easy to
 translate from @{typ"boolex"} into @{typ"ifex"}:
--- a/doc-src/TutorialI/Misc/AdvancedInd.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Misc/AdvancedInd.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -11,104 +11,83 @@
 with an extended example of induction (\S\ref{sec:CTL-revisited}).
 *};
 
-subsection{*Massaging the proposition*};
+subsection{*Massaging the Proposition*};
 
 text{*\label{sec:ind-var-in-prems}
-So far we have assumed that the theorem we want to prove is already in a form
-that is amenable to induction, but this is not always the case:
+So far we have assumed that the theorem we want to prove is already in a form
+that is amenable to induction, but sometimes it isn't.
+Here is an example.
+Since @{term"hd"} and @{term"last"} return the first and last element of a
+non-empty list, this lemma looks easy to prove:
 *};
 
-lemma "xs \<noteq> [] \<Longrightarrow> hd(rev xs) = last xs";
-apply(induct_tac xs);
+lemma "xs \<noteq> [] \<Longrightarrow> hd(rev xs) = last xs"
+apply(induct_tac xs)
 
 txt{*\noindent
-(where @{term"hd"} and @{term"last"} return the first and last element of a
-non-empty list)
-produces the warning
+But induction produces the warning
 \begin{quote}\tt
 Induction variable occurs also among premises!
 \end{quote}
 and leads to the base case
 @{subgoals[display,indent=0,goals_limit=1]}
-which, after simplification, becomes
+After simplification, it becomes
 \begin{isabelle}
 \ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
 \end{isabelle}
 We cannot prove this equality because we do not know what @{term hd} and
 @{term last} return when applied to @{term"[]"}.
 
-The point is that we have violated the above warning. Because the induction
-formula is only the conclusion, the occurrence of @{term xs} in the premises is
-not modified by induction. Thus the case that should have been trivial
+We should not have ignored the warning. Because the induction
+formula is only the conclusion, induction does not affect the occurrence of @{term xs} in the premises.  
+Thus the case that should have been trivial
 becomes unprovable. Fortunately, the solution is easy:\footnote{A very similar
 heuristic applies to rule inductions; see \S\ref{sec:rtc}.}
 \begin{quote}
 \emph{Pull all occurrences of the induction variable into the conclusion
 using @{text"\<longrightarrow>"}.}
 \end{quote}
-This means we should prove
+Thus we should state the lemma as an ordinary 
+implication~(@{text"\<longrightarrow>"}), letting
+@{text rule_format} (\S\ref{sec:forward}) convert the
+result to the usual @{text"\<Longrightarrow>"} form:
 *};
 (*<*)oops;(*>*)
-lemma hd_rev: "xs \<noteq> [] \<longrightarrow> hd(rev xs) = last xs";
+lemma hd_rev [rule_format]: "xs \<noteq> [] \<longrightarrow> hd(rev xs) = last xs";
 (*<*)
 apply(induct_tac xs);
 (*>*)
 
 txt{*\noindent
-This time, induction leaves us with the following base case
+This time, induction leaves us with a trivial base case:
 @{subgoals[display,indent=0,goals_limit=1]}
-which is trivial, and @{text"auto"} finishes the whole proof.
+And @{text"auto"} completes the proof.
+*}
+(*<*)
+by auto;
+(*>*)
 
-If @{text hd_rev} is meant to be a simplification rule, you are
-done. But if you really need the @{text"\<Longrightarrow>"}-version of
-@{text hd_rev}, for example because you want to apply it as an
-introduction rule, you need to derive it separately, by combining it with
-modus ponens:
-*}(*<*)by(auto);(*>*)
-lemmas hd_revI = hd_rev[THEN mp];
- 
-text{*\noindent
-which yields the lemma we originally set out to prove.
 
-In case there are multiple premises $A@1$, \dots, $A@n$ containing the
+text{*
+If there are multiple premises $A@1$, \dots, $A@n$ containing the
 induction variable, you should turn the conclusion $C$ into
 \[ A@1 \longrightarrow \cdots \longrightarrow A@n \longrightarrow C \]
 (see the remark?? in \S\ref{??}).
 Additionally, you may also have to universally quantify some other variables,
-which can yield a fairly complex conclusion.
+which can yield a fairly complex conclusion.  However, @{text"rule_format"} 
+can remove any number of occurrences of @{text"\<forall>"} and
+@{text"\<longrightarrow>"}.
+
 Here is a simple example (which is proved by @{text"blast"}):
 *};
 
-lemma simple: "\<forall>y. A y \<longrightarrow> B y \<longrightarrow> B y \<and> A y";
-(*<*)by blast;(*>*)
-
-text{*\noindent
-You can get the desired lemma by explicit
-application of modus ponens and @{thm[source]spec}:
-*};
-
-lemmas myrule = simple[THEN spec, THEN mp, THEN mp];
-
-text{*\noindent
-or the wholesale stripping of @{text"\<forall>"} and
-@{text"\<longrightarrow>"} in the conclusion via @{text"rule_format"} 
-*};
-
-lemmas myrule = simple[rule_format];
-
-text{*\noindent
-yielding @{thm"myrule"[no_vars]}.
-You can go one step further and include these derivations already in the
-statement of your original lemma, thus avoiding the intermediate step:
-*};
-
-lemma myrule[rule_format]:  "\<forall>y. A y \<longrightarrow> B y \<longrightarrow> B y \<and> A y";
+lemma simple[rule_format]:  "\<forall>y. A y \<longrightarrow> B y \<longrightarrow> B y \<and> A y";
 (*<*)
 by blast;
 (*>*)
 
 text{*
-\bigskip
+\medskip
 
 A second reason why your proposition may not be amenable to induction is that
 you want to induct on a whole term, rather than an individual variable. In
@@ -131,19 +110,19 @@
 the conclusion as well, under the @{text"\<forall>"}, again using @{text"\<longrightarrow>"} as shown above.
 *};
 
-subsection{*Beyond structural and recursion induction*};
+subsection{*Beyond Structural and Recursion Induction*};
 
 text{*\label{sec:complete-ind}
-So far, inductive proofs where by structural induction for
+So far, inductive proofs were by structural induction for
 primitive recursive functions and recursion induction for total recursive
 functions. But sometimes structural induction is awkward and there is no
-recursive function in sight either that could furnish a more appropriate
-induction schema. In such cases some existing standard induction schema can
+recursive function that could furnish a more appropriate
+induction schema. In such cases a general-purpose induction schema can
 be helpful. We show how to apply such induction schemas by an example.
 
 Structural induction on @{typ"nat"} is
-usually known as ``mathematical induction''. There is also ``complete
-induction'', where you must prove $P(n)$ under the assumption that $P(m)$
+usually known as mathematical induction. There is also \emph{complete}
+induction, where you must prove $P(n)$ under the assumption that $P(m)$
 holds for all $m<n$. In Isabelle, this is the theorem @{thm[source]nat_less_induct}:
 @{thm[display]"nat_less_induct"[no_vars]}
 Here is an example of its application.
@@ -153,11 +132,7 @@
 axioms f_ax: "f(f(n)) < f(Suc(n))";
 
 text{*\noindent
-From the above axiom\footnote{In general, the use of axioms is strongly
-discouraged, because of the danger of inconsistencies. The above axiom does
-not introduce an inconsistency because, for example, the identity function
-satisfies it.}
-for @{term"f"} it follows that @{prop"n <= f n"}, which can
+The axiom for @{term"f"} implies @{prop"n <= f n"}, which can
 be proved by induction on @{term"f n"}. Following the recipe outlined
 above, we have to phrase the proposition as follows to allow induction:
 *};
@@ -191,14 +166,23 @@
 by(blast intro!: f_ax Suc_leI intro: le_less_trans);
 
 text{*\noindent
-It is not surprising if you find the last step puzzling.
+If you find the last step puzzling, here are the 
+two other lemmas used above:
+\begin{isabelle}
+@{thm Suc_leI[no_vars]}
+\rulename{Suc_leI}\isanewline
+@{thm le_less_trans[no_vars]}
+\rulename{le_less_trans}
+\end{isabelle}
+%
 The proof goes like this (writing @{term"j"} instead of @{typ"nat"}).
 Since @{prop"i = Suc j"} it suffices to show
-@{prop"j < f(Suc j)"} (by @{thm[source]Suc_leI}: @{thm"Suc_leI"[no_vars]}). This is
+\hbox{@{prop"j < f(Suc j)"}},
+by @{thm[source]Suc_leI}\@.  This is
 proved as follows. From @{thm[source]f_ax} we have @{prop"f (f j) < f (Suc j)"}
-(1) which implies @{prop"f j <= f (f j)"} (by the induction hypothesis).
-Using (1) once more we obtain @{prop"f j < f(Suc j)"} (2) by transitivity
-(@{thm[source]le_less_trans}: @{thm"le_less_trans"[no_vars]}).
+(1) which implies @{prop"f j <= f (f j)"} by the induction hypothesis.
+Using (1) once more we obtain @{prop"f j < f(Suc j)"} (2) by the transitivity
+rule @{thm[source]le_less_trans}.
 Using the induction hypothesis once more we obtain @{prop"j <= f j"}
 which, together with (2) yields @{prop"j < f (Suc j)"} (again by
 @{thm[source]le_less_trans}).
@@ -206,7 +190,7 @@
 This last step shows both the power and the danger of automatic proofs: they
 will usually not tell you how the proof goes, because it can be very hard to
 translate the internal proof into a human-readable format. Therefore
-\S\ref{sec:part2?} introduces a language for writing readable yet concise
+Chapter~\ref{sec:part2?} introduces a language for writing readable
 proofs.
 
 We can now derive the desired @{prop"i <= f i"} from @{text"f_incr"}:
@@ -215,19 +199,25 @@
 lemmas f_incr = f_incr_lem[rule_format, OF refl];
 
 text{*\noindent
-The final @{thm[source]refl} gets rid of the premise @{text"?k = f ?i"}. Again,
-we could have included this derivation in the original statement of the lemma:
+The final @{thm[source]refl} gets rid of the premise @{text"?k = f ?i"}. 
+We could have included this derivation in the original statement of the lemma:
 *};
 
 lemma f_incr[rule_format, OF refl]: "\<forall>i. k = f i \<longrightarrow> i \<le> f i";
 (*<*)oops;(*>*)
 
 text{*
-\begin{exercise}
-From the above axiom and lemma for @{term"f"} show that @{term"f"} is the
-identity.
-\end{exercise}
+\begin{warn}
+We discourage the use of axioms because of the danger of
+inconsistencies.  Axiom @{text f_ax} does
+not introduce an inconsistency because, for example, the identity function
+satisfies it.  Axioms can be useful in exploratory developments, say when 
+you assume some well-known theorems so that you can quickly demonstrate some
+point about methodology.  If your example turns into a substantial proof
+development, you should replace the axioms by proofs.
+\end{warn}
 
+\bigskip
 In general, @{text induct_tac} can be applied with any rule $r$
 whose conclusion is of the form ${?}P~?x@1 \dots ?x@n$, in which case the
 format is
@@ -245,9 +235,14 @@
 \begin{quote}
 \isacommand{apply}@{text"(induct_tac"} $y@1 \dots y@n$ @{text"and"} \dots\ @{text"and"} $z@1 \dots z@m$ @{text"rule:"} $r$@{text")"}
 \end{quote}
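+
+For instance, complete induction (@{thm[source]nat_less_induct} above) fits
+this scheme and is invoked as
+\begin{quote}
+\isacommand{apply}@{text"(induct_tac n rule: nat_less_induct)"}
+\end{quote}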
+
+\begin{exercise}
+From the axiom and lemma for @{term"f"}, show that @{term"f"} is the
+identity function.
+\end{exercise}
 *};
 
-subsection{*Derivation of new induction schemas*};
+subsection{*Derivation of New Induction Schemas*};
 
 text{*\label{sec:derive-ind}
 Induction schemas are ordinary theorems and you can derive new ones
@@ -270,7 +265,8 @@
 by(blast elim:less_SucE)
 
 text{*\noindent
-The elimination rule @{thm[source]less_SucE} expresses the case distinction:
+The elimination rule @{thm[source]less_SucE} expresses the case distinction
+mentioned above:
 @{thm[display]"less_SucE"[no_vars]}
 
 Now it is straightforward to derive the original version of
@@ -287,8 +283,8 @@
 text{*
 Finally we should remind the reader that HOL already provides the mother of
 all inductions, well-founded induction (see \S\ref{sec:Well-founded}).  For
-example theorem @{thm[source]nat_less_induct} can be viewed (and derived) as
+example, theorem @{thm[source]nat_less_induct} is
 a special case of @{thm[source]wf_induct} where @{term r} is @{text"<"} on
-@{typ nat}. The details can be found in the HOL library.
+@{typ nat}. The details can be found in theory \isa{Wellfounded_Recursion}.
 *}
 (*<*)end(*>*)
--- a/doc-src/TutorialI/Misc/Itrev.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Misc/Itrev.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -2,7 +2,7 @@
 theory Itrev = Main:;
 (*>*)
 
-section{*Induction heuristics*}
+section{*Induction Heuristics*}
 
 text{*\label{sec:InductionHeuristics}
 The purpose of this section is to illustrate some simple heuristics for
--- a/doc-src/TutorialI/Misc/case_exprs.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Misc/case_exprs.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -2,7 +2,7 @@
 theory case_exprs = Main:
 (*>*)
 
-subsection{*Case expressions*}
+subsection{*Case Expressions*}
 
 text{*\label{sec:case-expressions}
 HOL also features \isaindexbold{case}-expressions for analyzing
@@ -40,7 +40,7 @@
 indicate their scope
 *}
 
-subsection{*Structural induction and case distinction*}
+subsection{*Structural Induction and Case Distinction*}
 
 text{*\label{sec:struct-ind-case}
 \indexbold{structural induction}
--- a/doc-src/TutorialI/Misc/simp.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Misc/simp.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -2,7 +2,7 @@
 theory simp = Main:
 (*>*)
 
-subsubsection{*Simplification rules*}
+subsubsection{*Simplification Rules*}
 
 text{*\indexbold{simplification rule}
 To facilitate simplification, theorems can be declared to be simplification
@@ -39,7 +39,7 @@
 \end{warn}
 *}
 
-subsubsection{*The simplification method*}
+subsubsection{*The Simplification Method*}
 
 text{*\index{*simp (method)|bold}
 The general format of the simplification method is
@@ -54,7 +54,7 @@
 Note that @{text simp} fails if nothing changes.
 *}
 
-subsubsection{*Adding and deleting simplification rules*}
+subsubsection{*Adding and Deleting Simplification Rules*}
 
 text{*
 If a certain theorem is merely needed in a few proofs by simplification,
@@ -123,7 +123,7 @@
 other arguments.
 *}
 
-subsubsection{*Rewriting with definitions*}
+subsubsection{*Rewriting with Definitions*}
 
 text{*\index{simplification!with definitions}
 Constant definitions (\S\ref{sec:ConstDefinitions}) can
@@ -171,7 +171,7 @@
 \end{warn}
 *}
 
-subsubsection{*Simplifying let-expressions*}
+subsubsection{*Simplifying Let-Expressions*}
 
 text{*\index{simplification!of let-expressions}
 Proving a goal containing \isaindex{let}-expressions almost invariably
@@ -192,7 +192,7 @@
 *}
 declare Let_def [simp]
 
-subsubsection{*Conditional equations*}
+subsubsection{*Conditional Equations*}
 
 text{*
 So far all examples of rewrite rules were equations. The simplifier also
@@ -224,7 +224,7 @@
 *}
 
 
-subsubsection{*Automatic case splits*}
+subsubsection{*Automatic Case Splits*}
 
 text{*\indexbold{case splits}\index{*split (method, attr.)|(}
 Goals containing @{text"if"}-expressions are usually proved by case
--- a/doc-src/TutorialI/Misc/types.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Misc/types.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -13,7 +13,7 @@
 consts nand :: gate
        xor  :: gate;
 
-subsection{*Constant definitions*}
+subsection{*Constant Definitions*}
 
 text{*\label{sec:ConstDefinitions}\indexbold{definition}%
 The above constants @{term"nand"} and @{term"xor"} are non-recursive and can
--- a/doc-src/TutorialI/Recdef/Nested0.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Recdef/Nested0.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -12,8 +12,9 @@
 and closed with the observation that the associated schema for the definition
 of primitive recursive functions leads to overly verbose definitions. Moreover,
 if you have worked exercise~\ref{ex:trev-trev} you will have noticed that
-you needed to reprove many lemmas reminiscent of similar lemmas about
-@{term"rev"}. We will now show you how \isacommand{recdef} can simplify
+you needed to declare essentially the same function as @{term"rev"}
+and prove many standard properties of list reverse all over again. 
+We will now show you how \isacommand{recdef} can simplify
 definitions and proofs about nested recursive datatypes. As an example we
 choose exercise~\ref{ex:trev-trev}:
 *}
--- a/doc-src/TutorialI/Recdef/Nested1.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Recdef/Nested1.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -4,7 +4,7 @@
 
 text{*\noindent
 Although the definition of @{term trev} is quite natural, we will have
-overcome a minor difficulty in convincing Isabelle of is termination.
+to overcome a minor difficulty in convincing Isabelle of its termination.
 It is precisely this difficulty that is the \textit{raison d'\^etre} of
 this subsection.
 
@@ -25,13 +25,13 @@
 where @{term set} returns the set of elements of a list
 and @{text"term_list_size :: term list \<Rightarrow> nat"} is an auxiliary
 function automatically defined by Isabelle
-(when @{text term} was defined).  First we have to understand why the
+(while processing the declaration of @{text term}).  First we have to understand why the
 recursive call of @{term trev} underneath @{term map} leads to the above
 condition. The reason is that \isacommand{recdef} ``knows'' that @{term map}
-will apply @{term trev} only to elements of @{term ts}. Thus the above
+will apply @{term trev} only to elements of @{term ts}. Thus the 
 condition expresses that the size of the argument @{prop"t : set ts"} of any
-recursive call of @{term trev} is strictly less than @{prop"size(App f ts) =
-Suc(term_list_size ts)"}.  We will now prove the termination condition and
+recursive call of @{term trev} is strictly less than @{term"size(App f ts)"},
+which equals @{term"Suc(term_list_size ts)"}.  We will now prove the termination condition and
 continue with our definition.  Below we return to the question of how
 \isacommand{recdef} ``knows'' about @{term map}.
 *};
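+
+text{*\noindent
+Concretely, we will need a simplification rule of roughly this shape (a
+sketch of what the next step provides):
+\begin{quote}
+@{text"t \<in> set ts \<longrightarrow> size t < Suc(term_list_size ts)"}
+\end{quote}
+*}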
--- a/doc-src/TutorialI/Recdef/Nested2.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Recdef/Nested2.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -15,10 +15,10 @@
 (*>*)
 text{*\noindent
 By making this theorem a simplification rule, \isacommand{recdef}
-applies it automatically and the above definition of @{term"trev"}
+applies it automatically and the definition of @{term"trev"}
 succeeds now. As a reward for our effort, we can now prove the desired
-lemma directly. The key is the fact that we no longer need the verbose
-induction schema for type @{text"term"} but the simpler one arising from
+lemma directly.  We no longer need the verbose
+induction schema for type @{text"term"} and can use the simpler one arising from
 @{term"trev"}:
 *};
 
@@ -33,7 +33,7 @@
 by(simp_all add:rev_map sym[OF map_compose] cong:map_cong);
 
 text{*\noindent
-If the proof of the induction step mystifies you, we recommend to go through
+If the proof of the induction step mystifies you, we recommend that you go through
 the chain of simplification steps in detail; you will probably need the help of
 @{text"trace_simp"}. Theorem @{thm[source]map_cong} is discussed below.
 %\begin{quote}
@@ -46,9 +46,9 @@
 %{term[display]"App f ts"}
 %\end{quote}
 
-The above definition of @{term"trev"} is superior to the one in
-\S\ref{sec:nested-datatype} because it brings @{term"rev"} into play, about
-which already know a lot, in particular @{prop"rev(rev xs) = xs"}.
+The definition of @{term"trev"} above is superior to the one in
+\S\ref{sec:nested-datatype} because it uses @{term"rev"}
+and lets us use existing facts such as \hbox{@{prop"rev(rev xs) = xs"}}.
 Thus this proof is a good example of an important principle:
 \begin{quote}
 \emph{Choose your definitions carefully\\
--- a/doc-src/TutorialI/Recdef/simplification.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Recdef/simplification.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -19,8 +19,8 @@
 According to the measure function, the second argument should decrease with
 each recursive call. The resulting termination condition
 @{term[display]"n ~= 0 ==> m mod n < n"}
-is provded automatically because it is already present as a lemma in the
-arithmetic library. Thus the recursion equation becomes a simplification
+is proved automatically because it is already present as a lemma in
+HOL\@.  Thus the recursion equation becomes a simplification
 rule. Of course the equation is nonterminating if we are allowed to unfold
 the recursive call inside the @{text else} branch, which is why programming
 languages and our simplifier don't do that. Unfortunately the simplifier does
--- a/doc-src/TutorialI/ToyList/ToyList.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/ToyList/ToyList.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -107,7 +107,7 @@
 the \bfindex{inner syntax} and the enclosing theory language as the \bfindex{outer syntax}.
 
 
-\section{An introductory proof}
+\section{An Introductory Proof}
 \label{sec:intro-proof}
 
 Assuming you have input the declarations and definitions of \texttt{ToyList}
@@ -220,7 +220,7 @@
 oops
 (*>*)
 
-subsubsection{*First lemma: @{text"rev(xs @ ys) = (rev ys) @ (rev xs)"}*}
+subsubsection{*First Lemma: @{text"rev(xs @ ys) = (rev ys) @ (rev xs)"}*}
 
 text{*
 After abandoning the above proof attempt\indexbold{abandon
@@ -259,7 +259,7 @@
 oops
 (*>*)
 
-subsubsection{*Second lemma: @{text"xs @ [] = xs"}*}
+subsubsection{*Second Lemma: @{text"xs @ [] = xs"}*}
 
 text{*
 This time the canonical proof procedure
@@ -312,7 +312,7 @@
 *}
 (*<*)oops(*>*)
 
-subsubsection{*Third lemma: @{text"(xs @ ys) @ zs = xs @ (ys @ zs)"}*}
+subsubsection{*Third Lemma: @{text"(xs @ ys) @ zs = xs @ (ys @ zs)"}*}
 
 text{*
 Abandoning the previous proof, the canonical proof procedure
--- a/doc-src/TutorialI/Types/Axioms.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Types/Axioms.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -9,7 +9,7 @@
 our above ordering relations.
 *}
 
-subsubsection{*Partial orders*}
+subsubsection{*Partial Orders*}
 
 text{*
 A \emph{partial order} is a subclass of @{text ordrel}
@@ -27,7 +27,7 @@
 requires that @{text"<<"} and @{text"<<="} are related as expected.
 Note that behind the scenes, Isabelle has restricted the axioms to class
 @{text parord}. For example, this is what @{thm[source]refl} really looks like:
-@{thm[show_types]refl}.
+@{thm[show_sorts]refl}.
 
 We have not made @{thm[source]less_le} a global definition because it would
 fix once and for all that @{text"<<"} is defined in terms of @{text"<<="}.
@@ -84,14 +84,14 @@
 by simp;
 
 text{*\noindent
-The effect is not stunning but demonstrates the principle.  It also shows
+The effect is not stunning, but it demonstrates the principle.  It also shows
 that tools like the simplifier can deal with generic rules as well. Moreover,
 it should be clear that the main advantage of the axiomatic method is that
 theorems can be proved in the abstract and one does not need to repeat the
 proof for each instance.
 *}
 
-subsubsection{*Linear orders*}
+subsubsection{*Linear Orders*}
 
 text{* If any two elements of a partial order are comparable it is a
 \emph{linear} or \emph{total} order: *}
@@ -118,7 +118,7 @@
 paragraph.
 *}
 
-subsubsection{*Strict orders*}
+subsubsection{*Strict Orders*}
 
 text{*
 An alternative axiomatization of partial orders takes @{text"<<"} rather than
@@ -162,7 +162,7 @@
 done
 *)
 
-subsubsection{*Multiple inheritance and sorts*}
+subsubsection{*Multiple Inheritance and Sorts*}
 
 text{*
 A class may inherit from more than one direct superclass. This is called
@@ -196,7 +196,7 @@
 & & \isa{wellord}
 \end{array}
 \]
-\caption{Subclass diagramm}
+\caption{Subclass Diagram}
 \label{fig:subclass}
 \end{figure}
 
--- a/doc-src/TutorialI/Types/Overloading0.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Types/Overloading0.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -4,7 +4,7 @@
 useful on its own: \emph{overloading}. Isabelle allows overloading: a
 constant may have multiple definitions at non-overlapping types. *}
 
-subsubsection{*An initial example*}
+subsubsection{*An Initial Example*}
 
 text{*
 If we want to introduce the notion of an \emph{inverse} for arbitrary types we
--- a/doc-src/TutorialI/Types/Overloading1.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Types/Overloading1.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -1,6 +1,6 @@
 (*<*)theory Overloading1 = Main:(*>*)
 
-subsubsection{*Controlled overloading with type classes*}
+subsubsection{*Controlled Overloading with Type Classes*}
 
 text{*
 We now start with the theory of ordering relations, which we want to phrase
--- a/doc-src/TutorialI/Types/Overloading2.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Types/Overloading2.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -18,7 +18,7 @@
 The infix function @{text"!"} yields the nth element of a list.
 *}
 
-subsubsection{*Predefined overloading*}
+subsubsection{*Predefined Overloading*}
 
 text{*
 HOL comes with a number of overloaded constants and corresponding classes.
@@ -45,7 +45,7 @@
 @{term Least} & @{text "('a::ord \<Rightarrow> bool) \<Rightarrow> 'a"} &
 @{text LEAST}$~x.~P$
 \end{tabular}
-\caption{Overloaded constants in HOL}
+\caption{Overloaded Constants in HOL}
 \label{tab:overloading}
 \end{center}
 \end{table}
--- a/doc-src/TutorialI/Types/Pairs.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Types/Pairs.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -11,26 +11,27 @@
 problem: pattern matching with tuples.
 *}
 
-subsection{*Pattern matching with tuples*}
+subsection{*Pattern Matching with Tuples*}
 
 text{*
-It is possible to use (nested) tuples as patterns in $\lambda$-abstractions,
+Tuples may be used as patterns in $\lambda$-abstractions,
 for example @{text"\<lambda>(x,y,z).x+y+z"} and @{text"\<lambda>((x,y),z).x+y+z"}. In fact,
-tuple patterns can be used in most variable binding constructs. Here are
+tuple patterns can be used in most variable binding constructs,
+and they can be nested. Here are
 some typical examples:
 \begin{quote}
 @{term"let (x,y) = f z in (y,x)"}\\
 @{term"case xs of [] => 0 | (x,y)#zs => x+y"}\\
 @{text"\<forall>(x,y)\<in>A. x=y"}\\
-@{text"{(x,y). x=y}"}\\
+@{text"{(x,y,z). x=z}"}\\
 @{term"\<Union>(x,y)\<in>A. {x+y}"}
 \end{quote}
 *}
 
 text{*
-The intuitive meaning of this notations should be pretty obvious.
+The intuitive meanings of these expressions should be obvious.
 Unfortunately, we need to know in more detail what the notation really stands
-for once we have to reason about it. The fact of the matter is that abstraction
+for once we have to reason about it.  Abstraction
 over pairs and tuples is merely a convenient shorthand for a more complex
 internal representation.  Thus the internal and external form of a term may
 differ, which can affect proofs. If you want to avoid this complication,
@@ -51,7 +52,7 @@
 understand how to reason about such constructs.
 *}
 
-subsection{*Theorem proving*}
+subsection{*Theorem Proving*}
 
 text{*
 The most obvious approach is the brute force expansion of @{term split}:
@@ -61,7 +62,7 @@
 by(simp add:split_def)
 
 text{* This works well if rewriting with @{thm[source]split_def} finishes the
-proof, as in the above lemma. But if it doesn't, you end up with exactly what
+proof, as it does above.  But if it doesn't, you end up with exactly what
 we are trying to avoid: nests of @{term fst} and @{term snd}. Thus this
 approach is neither elegant nor very practical in large examples, although it
 can be effective in small ones.
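+
+For instance (a small sketch): the equation @{text"(\<lambda>(x,y). y) p = snd p"}
+falls to @{text"simp add: split_def"}, because unfolding
+@{thm[source]split_def} turns the left-hand side into @{text"snd p"}
+directly.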
--- a/doc-src/TutorialI/Types/Typedef.thy	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Types/Typedef.thy	Fri Jan 12 16:32:01 2001 +0100
@@ -1,13 +1,13 @@
 (*<*)theory Typedef = Main:(*>*)
 
-section{*Introducing new types*}
+section{*Introducing New Types*}
 
 text{*\label{sec:adv-typedef}
 By now we have seen a number of ways for introducing new types, for example
 type synonyms, recursive datatypes and records. For most applications, this
 repertoire should be quite sufficient. Very occasionally you may feel the
-need for a more advanced type. If you cannot avoid that type, and you are
-quite certain that it is not definable by any of the standard means,
+need for a more advanced type. If you cannot do without that type, and you are
+certain that it is not definable by any of the standard means,
 then read on.
 \begin{warn}
   Types in HOL must be non-empty; otherwise the quantifier rules would be
@@ -15,7 +15,7 @@
 \end{warn}
 *}
 
-subsection{*Declaring new types*}
+subsection{*Declaring New Types*}
 
 text{*\label{sec:typedecl}
 The most trivial way of introducing a new type is by a \bfindex{type
@@ -49,11 +49,11 @@
 However, we strongly discourage this approach, except at explorative stages
 of your development. It is extremely easy to write down contradictory sets of
 axioms, in which case you will be able to prove everything but it will mean
-nothing.  In the above case it also turns out that the axiomatic approach is
+nothing.  Here the axiomatic approach is
 unnecessary: a one-element type called @{typ unit} is already defined in HOL.
 *}
 
-subsection{*Defining new types*}
+subsection{*Defining New Types*}
 
 text{*\label{sec:typedef}
 Now we come to the most general method of safely introducing a new type, the
@@ -124,16 +124,16 @@
 @{thm Abs_three_inverse[no_vars]} &~~ (@{thm[source]Abs_three_inverse})
 \end{tabular}
 \end{center}
-
-From the above example it should be clear what \isacommand{typedef} does
-in general: simply replace the name @{text three} and the set
-@{term"{n. n\<le>2}"} by the respective arguments.
+%
+From this example it should be clear what \isacommand{typedef} does
+in general given a name (here @{text three}) and a set
+(here @{term"{n. n\<le>2}"}).
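+
+In its simplest form, then, the command reads
+\begin{quote}
+\isacommand{typedef} \textit{ty} = \textit{set}
+\end{quote}
+followed by a proof that \textit{set} is non-empty (recall that HOL types
+must be non-empty).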
 
 Our next step is to define the basic functions expected on the new type.
 Although this depends on the type at hand, the following strategy works well:
 \begin{itemize}
-\item define a small kernel of basic functions such that all further
-functions you anticipate can be defined on top of that kernel.
+\item define a small kernel of basic functions that can express all other
+functions you anticipate.
 \item define the kernel in terms of corresponding functions on the
 representing type using @{term Abs} and @{term Rep} to convert between the
 two levels.
@@ -168,10 +168,12 @@
  "\<lbrakk> x \<in> three; y \<in> three \<rbrakk> \<Longrightarrow> (Abs_three x = Abs_three y) = (x=y)";
 
 txt{*\noindent
-We prove both directions separately. From @{prop"Abs_three x = Abs_three y"}
-we derive @{prop"Rep_three(Abs_three x) = Rep_three(Abs_three y)"} (via
-@{thm[source]arg_cong}: @{thm arg_cong}), and thus the required @{prop"x =
-y"} by simplification with @{thm[source]Abs_three_inverse}. The other direction
+We prove each direction separately. From @{prop"Abs_three x = Abs_three y"}
+we use @{thm[source]arg_cong} to apply @{term Rep_three} to both sides,
+deriving @{prop[display]"Rep_three(Abs_three x) = Rep_three(Abs_three y)"}
+from which the required @{prop"x = y"} follows by simplification
+with @{thm[source]Abs_three_inverse}. 
+The other direction
 is trivial by simplification:
 *}
 
@@ -229,7 +231,7 @@
 txt{*\noindent
 This substitution step worked nicely because there was just a single
 occurrence of a term of type @{typ three}, namely @{term x}.
-When we now apply the above lemma, @{term Q} becomes @{term"\<lambda>n. P(Abs_three
+When we now apply the lemma, @{term Q} becomes @{term"\<lambda>n. P(Abs_three
 n)"} because @{term"Rep_three x"} is the only term of type @{typ nat}:
 *}
 
@@ -262,7 +264,7 @@
 example to demonstrate \isacommand{typedef} because its simplicity makes the
 key concepts particularly easy to grasp. If you would like to see a
 nontrivial example that cannot be defined more directly, we recommend the
-definition of \emph{finite multisets} in the HOL library.
+definition of \emph{finite multisets} in the HOL Library.
 
 Let us conclude by summarizing the above procedure for defining a new type.
 Given some abstract axiomatic description $P$ of a type $ty$ in terms of a
--- a/doc-src/TutorialI/Types/types.tex	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/Types/types.tex	Fri Jan 12 16:32:01 2001 +0100
@@ -3,13 +3,13 @@
 
 So far we have learned about a few basic types (for example \isa{bool} and
 \isa{nat}), type abbreviations (\isacommand{types}) and recursive datatypes
-(\isacommand{datatype}). This chapter will introduce the following more
+(\isacommand{datatype}). This chapter will introduce more
 advanced material:
 \begin{itemize}
 \item More about basic types: numbers ({\S}\ref{sec:numbers}), pairs
   ({\S}\ref{sec:products}) and records ({\S}\ref{sec:records}), and how to
   reason about them.
-\item Introducing your own types: how to introduce your own new types that
+\item Introducing your own types: how to introduce new types that
   cannot be constructed with any of the basic methods
   ({\S}\ref{sec:adv-typedef}).
 \item Type classes: how to specify and reason about axiomatic collections of
--- a/doc-src/TutorialI/basics.tex	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/basics.tex	Fri Jan 12 16:32:01 2001 +0100
@@ -61,24 +61,24 @@
     proofs}} in the above theory template.  A complete grammar of the basic
 constructs is found in the Isabelle/Isar Reference Manual.
 
-HOL's theory library is available online at
+HOL's theory collection is available online at
 \begin{center}\small
     \url{http://isabelle.in.tum.de/library/}
 \end{center}
-and is recommended browsing. Note that most of the theories in the library
+and is recommended browsing. Note that most of the theories 
 are based on classical Isabelle without the Isar extension. This means that
 they look slightly different from the theories in this tutorial, and that all
 proofs are in separate ML files.
 
 \begin{warn}
   HOL contains a theory \isaindexbold{Main}, the union of all the basic
-  predefined theories like arithmetic, lists, sets, etc.\ (see the online
-  library).  Unless you know what you are doing, always include \isa{Main}
+  predefined theories like arithmetic, lists, sets, etc.  
+  Unless you know what you are doing, always include \isa{Main}
   as a direct or indirect parent theory of all your theories.
 \end{warn}
 
 
-\section{Types, terms and formulae}
+\section{Types, Terms and Formulae}
 \label{sec:TypesTermsForms}
 \indexbold{type}
 
@@ -264,7 +264,7 @@
   interpreted as a schematic variable.
 \end{warn}
 
-\section{Interaction and interfaces}
+\section{Interaction and Interfaces}
 
 Interaction with Isabelle can either occur at the shell level or through more
 advanced interfaces. To keep the tutorial independent of the interface we
@@ -288,7 +288,7 @@
 before it knows that the command is complete.
 
 
-\section{Getting started}
+\section{Getting Started}
 
 Assuming you have installed Isabelle, you start it by typing \texttt{isabelle
   -I HOL} in a shell window.\footnote{Simply executing \texttt{isabelle -I}
--- a/doc-src/TutorialI/fp.tex	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/fp.tex	Fri Jan 12 16:32:01 2001 +0100
@@ -12,7 +12,7 @@
 goes well beyond what can be expressed as a program. However, for the time
 being we concentrate on the computable.
 
-\section{An introductory theory}
+\section{An Introductory Theory}
 \label{sec:intro-theory}
 
 Functional programming needs datatypes and functions. Both of them can be
@@ -55,7 +55,7 @@
 \end{itemize}
 
 
-\section{Some helpful commands}
+\section{Some Helpful Commands}
 \label{sec:commands-and-hints}
 
 This section discusses a few basic commands for manipulating the proof state
@@ -167,7 +167,7 @@
 always use HOL's predefined lists.
 
 
-\subsection{The general format}
+\subsection{The General Format}
 \label{sec:general-datatype}
 
 The general HOL \isacommand{datatype} definition is of the form
@@ -198,7 +198,7 @@
 Isabelle will always show \isa{size} on lists as \isa{length}.
 
 
-\subsection{Primitive recursion}
+\subsection{Primitive Recursion}
 
 Functions on datatypes are usually defined by recursion. In fact, most of the
 time they are defined by what is called \bfindex{primitive recursion}.
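+
+For example, the append function \isa{\at} of {\S}\ref{sec:intro-theory} is
+primitive recursive in its first argument, its defining equations being
+\begin{quote}
+\isa{[] \at\ ys = ys}\\
+\isa{(x \# xs) \at\ ys = x \# (xs \at\ ys)}
+\end{quote}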
@@ -221,10 +221,10 @@
 
 \input{Ifexpr/document/Ifexpr.tex}
 
-\section{Some basic types}
+\section{Some Basic Types}
 
 
-\subsection{Natural numbers}
+\subsection{Natural Numbers}
 \label{sec:nat}
 \index{arithmetic|(}
 
@@ -251,7 +251,7 @@
 definitions.
 
 
-\subsection{Type synonyms}
+\subsection{Type Synonyms}
 \indexbold{type synonym}
 
 Type synonyms are similar to those found in ML\@. Their syntax is fairly self
@@ -302,7 +302,7 @@
 section as well, in particular in order to understand what happens if things
 do not simplify as expected.
 
-\subsubsection{What is simplification}
+\subsubsection{What Is Simplification?}
 
 In its most basic form, simplification means repeated application of
 equations from left to right. For example, taking the rules for \isa{\at}
@@ -329,7 +329,7 @@
 \input{CodeGen/document/CodeGen.tex}
 
 
-\section{Advanced datatypes}
+\section{Advanced Datatypes}
 \label{sec:advanced-datatypes}
 \index{*datatype|(}
 \index{*primrec|(}
@@ -337,18 +337,18 @@
 
 This section presents advanced forms of \isacommand{datatype}s.
 
-\subsection{Mutual recursion}
+\subsection{Mutual Recursion}
 \label{sec:datatype-mut-rec}
 
 \input{Datatype/document/ABexpr.tex}
 
-\subsection{Nested recursion}
+\subsection{Nested Recursion}
 \label{sec:nested-datatype}
 
 {\makeatother\input{Datatype/document/Nested.tex}}
 
 
-\subsection{The limits of nested recursion}
+\subsection{The Limits of Nested Recursion}
 
 How far can we push nested recursion? By the unfolding argument above, we can
 reduce nested to mutual recursion provided the nested recursion only involves
@@ -387,7 +387,7 @@
 \index{*primrec|)}
 \index{*datatype|)}
 
-\subsection{Case study: Tries}
+\subsection{Case Study: Tries}
 \label{sec:Trie}
 
 Tries are a classic search tree data structure~\cite{Knuth3-75} for fast
@@ -455,7 +455,7 @@
   deletion, if possible.
 \end{exercise}
 
-\section{Total recursive functions}
+\section{Total Recursive Functions}
 \label{sec:recdef}
 \index{*recdef|(}
 
@@ -467,15 +467,15 @@
 supplied) sense. In this section we restrict ourselves to measure functions;
 more advanced termination proofs are discussed in {\S}\ref{sec:beyond-measure}.
 
-\subsection{Defining recursive functions}
+\subsection{Defining Recursive Functions}
 \label{sec:recdef-examples}
 \input{Recdef/document/examples.tex}
 
-\subsection{Proving termination}
+\subsection{Proving Termination}
 
 \input{Recdef/document/termination.tex}
 
-\subsection{Simplification with recdef}
+\subsection{Simplification with Recdef}
 \label{sec:recdef-simplification}
 
 \input{Recdef/document/simplification.tex}
--- a/doc-src/TutorialI/tutorial.tex	Fri Jan 12 16:28:14 2001 +0100
+++ b/doc-src/TutorialI/tutorial.tex	Fri Jan 12 16:32:01 2001 +0100
@@ -21,6 +21,7 @@
 \newcommand{\isasymImp}{\isasymLongrightarrow}
 \newcommand{\isasymFun}{\isasymRightarrow}
 \newcommand{\isasymuniqex}{\isamath{\exists!\,}}
+\renewcommand{\S}{Sect.\ts}
 
 \renewenvironment{isamarkuptxt}{\begin{isamarkuptext}}{\end{isamarkuptext}}
 
@@ -88,7 +89,7 @@
 \input{Types/types}
 \input{Advanced/advanced}
 \chapter{Theory Presentation}
-\chapter{Case Study: The Needhamd-Schroeder Protocol}
+\chapter{Case Study: Verifying a Cryptographic Protocol}
 \chapter{Structured Proofs}
 \chapter{Case Study: UNIX File-System Security}
 %\chapter{The Tricks of the Trade}