revisions and indexing
author paulson
Thu, 09 Aug 2001 18:12:15 +0200
changeset 11494 23a118849801
parent 11493 f3ff2549cdc8
child 11495 3621dea6113e
doc-src/TutorialI/Advanced/Partial.thy
doc-src/TutorialI/Advanced/WFrec.thy
doc-src/TutorialI/Advanced/advanced.tex
doc-src/TutorialI/Advanced/document/Partial.tex
doc-src/TutorialI/Advanced/document/WFrec.tex
doc-src/TutorialI/Advanced/document/simp.tex
doc-src/TutorialI/Advanced/simp.thy
doc-src/TutorialI/CTL/CTL.thy
doc-src/TutorialI/CTL/CTLind.thy
doc-src/TutorialI/CTL/document/CTL.tex
doc-src/TutorialI/CTL/document/CTLind.tex
doc-src/TutorialI/Inductive/AB.thy
doc-src/TutorialI/Inductive/Mutual.thy
doc-src/TutorialI/Inductive/Star.thy
doc-src/TutorialI/Inductive/document/AB.tex
doc-src/TutorialI/Inductive/document/Mutual.tex
doc-src/TutorialI/Inductive/document/Star.tex
doc-src/TutorialI/Inductive/even-example.tex
doc-src/TutorialI/Misc/AdvancedInd.thy
doc-src/TutorialI/Misc/document/AdvancedInd.tex
doc-src/TutorialI/Misc/document/simp.tex
doc-src/TutorialI/Misc/simp.thy
doc-src/TutorialI/Recdef/Nested0.thy
doc-src/TutorialI/Recdef/Nested1.thy
doc-src/TutorialI/Recdef/Nested2.thy
doc-src/TutorialI/Recdef/document/Nested0.tex
doc-src/TutorialI/Recdef/document/Nested1.tex
doc-src/TutorialI/Recdef/document/Nested2.tex
doc-src/TutorialI/Rules/rules.tex
doc-src/TutorialI/Sets/sets.tex
doc-src/TutorialI/Types/Axioms.thy
doc-src/TutorialI/Types/Overloading.thy
doc-src/TutorialI/Types/Overloading0.thy
doc-src/TutorialI/Types/Overloading1.thy
doc-src/TutorialI/Types/Overloading2.thy
doc-src/TutorialI/Types/Pairs.thy
doc-src/TutorialI/Types/Typedef.thy
doc-src/TutorialI/Types/document/Axioms.tex
doc-src/TutorialI/Types/document/Overloading.tex
doc-src/TutorialI/Types/document/Overloading0.tex
doc-src/TutorialI/Types/document/Overloading1.tex
doc-src/TutorialI/Types/document/Overloading2.tex
doc-src/TutorialI/Types/document/Pairs.tex
doc-src/TutorialI/Types/document/Typedef.tex
doc-src/TutorialI/Types/numerics.tex
doc-src/TutorialI/Types/types.tex
doc-src/TutorialI/tutorial.ind
--- a/doc-src/TutorialI/Advanced/Partial.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Advanced/Partial.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -1,7 +1,7 @@
 (*<*)theory Partial = While_Combinator:(*>*)
 
-text{*\noindent Throughout the tutorial we have emphasized the fact
-that all functions in HOL are total. Hence we cannot hope to define
+text{*\noindent Throughout this tutorial, we have emphasized
+that all functions in HOL are total.  We cannot hope to define
 truly partial functions, but must make them total.  A straightforward
 method is to lift the result type of the function from $\tau$ to
 $\tau$~@{text option} (see \ref{sec:option}), where @{term None} is
@@ -45,7 +45,9 @@
 
 subsubsection{*Guarded Recursion*}
 
-text{* Neither \isacommand{primrec} nor \isacommand{recdef} allow to
+text{* 
+\index{recursion!guarded}%
+Neither \isacommand{primrec} nor \isacommand{recdef} allows us to
 prefix an equation with a condition in the way ordinary definitions do
 (see @{term minus} above). Instead we have to move the condition over
 to the right-hand side of the equation. Given a partial function $f$
@@ -73,7 +75,7 @@
 As a more substantial example we consider the problem of searching a graph.
 For simplicity our graph is given by a function @{term f} of
 type @{typ"'a \<Rightarrow> 'a"} which
-maps each node to its successor, i.e.\ the graph is really a tree.
+maps each node to its successor; the graph is really a tree.
 The task is to find the end of a chain, modelled by a node pointing to
 itself. Here is a first attempt:
 @{prop[display]"find(f,x) = (if f x = x then x else find(f, f x))"}
@@ -118,21 +120,21 @@
 f"}. The proof requires unfolding the definition of @{term step1},
 as specified in the \isacommand{hints} above.
 
-Normally you will then derive the following conditional variant of and from
-the recursion equation
+Normally you will then derive the following conditional variant from
+the recursion equation:
 *}
 
 lemma [simp]:
   "wf(step1 f) \<Longrightarrow> find(f,x) = (if f x = x then x else find(f, f x))"
 by simp
 
-text{*\noindent and then disable the original recursion equation:*}
+text{*\noindent Then you should disable the original recursion equation:*}
 
 declare find.simps[simp del]
 
 text{*
-We can reason about such underdefined functions just like about any other
-recursive function. Here is a simple example of recursion induction:
+Reasoning about such underdefined functions is like that for other
+recursive functions.  Here is a simple example of recursion induction:
 *}
 
 lemma "wf(step1 f) \<longrightarrow> f(find(f,x)) = find(f,x)"
@@ -154,8 +156,8 @@
 \begin{verbatim}
      x := s; while b(x) do x := c(x); return x
 \end{verbatim}
-In general, @{term s} will be a tuple (better still: a record). As an example
-consider the following definition of function @{term find} above:
+In general, @{term s} will be a tuple or record.  As an example
+consider the following definition of function @{term find}:
 *}
 
 constdefs find2 :: "('a \<Rightarrow> 'a) \<Rightarrow> 'a \<Rightarrow> 'a"
@@ -171,7 +173,7 @@
 Although the definition of tail recursive functions via @{term while} avoids
 termination proofs, there is no free lunch. When proving properties of
 functions defined by @{term while}, termination rears its ugly head
-again. Here is @{thm[source]while_rule}, the well known proof rule for total
+again. Here is \tdx{while_rule}, the well known proof rule for total
 correctness of loops expressed with @{term while}:
 @{thm[display,margin=50]while_rule[no_vars]} @{term P} needs to be true of
 the initial state @{term s} and invariant under @{term c} (premises 1
@@ -205,7 +207,7 @@
 
 text{* Let us conclude this section on partial functions by a
 discussion of the merits of the @{term while} combinator. We have
-already seen that the advantage (if it is one) of not having to
+already seen that the advantage of not having to
 provide a termination argument when defining a function via @{term
 while} merely puts off the evil hour. On top of that, tail recursive
 functions tend to be more complicated to reason about. So why use
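
To illustrate the guarded-recursion idiom discussed in Partial.thy above, a definition along the following lines moves the guard into the right-hand side. This is only a sketch: the name divi, the use of arbitrary for the unspecified case and the chosen measure are illustrative assumptions rather than text from the theory, and recdef must still discharge the arithmetic termination condition m - n < m.

consts divi :: "nat \<times> nat \<Rightarrow> nat"
recdef divi "measure(\<lambda>(m,n). m)"
  (* the guard n ~= 0 is moved to the right-hand side; the otherwise
     unspecified case n = 0 returns the underdefined value arbitrary *)
  "divi(m,n) = (if n = 0 then arbitrary
                else if m < n then 0 else divi(m-n,n) + 1)"
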
--- a/doc-src/TutorialI/Advanced/WFrec.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Advanced/WFrec.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -2,7 +2,7 @@
 
 text{*\noindent
 So far, all recursive definitions were shown to terminate via measure
-functions. Sometimes this can be quite inconvenient or even
+functions. Sometimes this can be inconvenient or
 impossible. Fortunately, \isacommand{recdef} supports much more
 general definitions. For example, termination of Ackermann's function
 can be shown by means of the \rmindex{lexicographic product} @{text"<*lex*>"}:
@@ -23,18 +23,19 @@
 In general, \isacommand{recdef} supports termination proofs based on
 arbitrary well-founded relations as introduced in \S\ref{sec:Well-founded}.
 This is called \textbf{well-founded
-recursion}\indexbold{recursion!well-founded}. Clearly, a function definition
-is total iff the set of all pairs $(r,l)$, where $l$ is the argument on the
+recursion}\indexbold{recursion!well-founded}.  A function definition
+is total if and only if the set of 
+all pairs $(r,l)$, where $l$ is the argument on the
 left-hand side of an equation and $r$ the argument of some recursive call on
 the corresponding right-hand side, induces a well-founded relation.  For a
 systematic account of termination proofs via well-founded relations see, for
 example, Baader and Nipkow~\cite{Baader-Nipkow}.
 
-Each \isacommand{recdef} definition should be accompanied (after the name of
-the function) by a well-founded relation on the argument type of the
-function.  Isabelle/HOL formalizes some of the most important
+Each \isacommand{recdef} definition should be accompanied (after the function's
+name) by a well-founded relation on the function's argument type.  
+Isabelle/HOL formalizes some of the most important
 constructions of well-founded relations (see \S\ref{sec:Well-founded}). For
-example, @{term"measure f"} is always well-founded, and the lexicographic
+example, @{term"measure f"} is always well-founded.   The lexicographic
 product of two well-founded relations is again well-founded, which we relied
 on when defining Ackermann's function above.
 Of course the lexicographic product can also be iterated:
@@ -55,8 +56,8 @@
 inv_image}. All these constructions are known to \isacommand{recdef}. Thus you
 will never have to prove well-foundedness of any relation composed
 solely of these building blocks. But of course the proof of
-termination of your function definition, i.e.\ that the arguments
-decrease with every recursive call, may still require you to provide
+termination of your function definition --- that the arguments
+decrease with every recursive call --- may still require you to provide
 additional lemmas.
 
 It is also possible to use your own well-founded relations with
@@ -79,7 +80,7 @@
 
 txt{*\noindent
 The proof is by showing that our relation is a subset of another well-founded
-relation: one given by a measure function. 
+relation: one given by a measure function.\index{*wf_subset (theorem)}
 *};
 
 apply (rule wf_subset [of "measure (\<lambda>k::nat. N-k)"], blast);
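
The lexicographic product mentioned at the top of WFrec.thy is what lets Ackermann's function go through recdef. The following sketch is a reconstruction, so the exact form in the theory may differ; the point is that the well-founded relation in the recdef header is built with <*lex*>, and every recursive call decreases the argument pair in that relation.

consts ack :: "nat \<times> nat \<Rightarrow> nat"
recdef ack "measure(\<lambda>m. m) <*lex*> measure(\<lambda>n. n)"
  (* in each call either the first argument decreases, or it stays
     the same while the second argument decreases *)
  "ack(0,n)         = Suc n"
  "ack(Suc m,0)     = ack(m, 1)"
  "ack(Suc m,Suc n) = ack(m, ack(Suc m,n))"
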
--- a/doc-src/TutorialI/Advanced/advanced.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Advanced/advanced.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -12,15 +12,15 @@
 \section{Advanced Forms of Recursion}
 \index{recdef@\isacommand {recdef} (command)|(}
 
-The purpose of this section is to introduce advanced forms of
+This section introduces advanced forms of
 \isacommand{recdef}: how to establish termination by means other than measure
-functions, how to define recursive function over nested recursive datatypes,
+functions, how to define recursive functions over nested recursive datatypes
 and how to deal with partial functions.
 
 If, after reading this section, you feel that the definition of recursive
 functions is overly complicated by the requirement of
-totality, you should ponder the alternative, a logic of partial functions,
-where recursive definitions are always wellformed. For a start, there are many
+totality, you should ponder the alternatives.  In a logic of partial functions,
+recursive definitions are always accepted.  But there are many
 such logics, and no clear winner has emerged. And in all of these logics you
 are (more or less frequently) required to reason about the definedness of
 terms explicitly. Thus one shifts definedness arguments from definition time to
@@ -38,7 +38,7 @@
 \input{Recdef/document/Nested2.tex}
 
 \subsection{Partial Functions}
-\index{partial function}
+\index{functions!partial}
 \input{Advanced/document/Partial.tex}
 
 \index{recdef@\isacommand {recdef} (command)|)}
--- a/doc-src/TutorialI/Advanced/document/Partial.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Advanced/document/Partial.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -3,8 +3,8 @@
 \def\isabellecontext{Partial}%
 %
 \begin{isamarkuptext}%
-\noindent Throughout the tutorial we have emphasized the fact
-that all functions in HOL are total. Hence we cannot hope to define
+\noindent Throughout this tutorial, we have emphasized
+that all functions in HOL are total.  We cannot hope to define
 truly partial functions, but must make them total.  A straightforward
 method is to lift the result type of the function from $\tau$ to
 $\tau$~\isa{option} (see \ref{sec:option}), where \isa{None} is
@@ -47,6 +47,7 @@
 }
 %
 \begin{isamarkuptext}%
+\index{recursion!guarded}%
+Neither \isacommand{primrec} nor \isacommand{recdef} allows us to
 prefix an equation with a condition in the way ordinary definitions do
 (see \isa{minus} above). Instead we have to move the condition over
@@ -76,7 +77,7 @@
 As a more substantial example we consider the problem of searching a graph.
 For simplicity our graph is given by a function \isa{f} of
 type \isa{{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a} which
-maps each node to its successor, i.e.\ the graph is really a tree.
+maps each node to its successor; the graph is really a tree.
 The task is to find the end of a chain, modelled by a node pointing to
 itself. Here is a first attempt:
 \begin{isabelle}%
@@ -126,19 +127,19 @@
 argument stays unchanged and the second one descends along \isa{step{\isadigit{1}}\ f}. The proof requires unfolding the definition of \isa{step{\isadigit{1}}},
 as specified in the \isacommand{hints} above.
 
-Normally you will then derive the following conditional variant of and from
-the recursion equation%
+Normally you will then derive the following conditional variant from
+the recursion equation:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\isanewline
 \ \ {\isachardoublequote}wf{\isacharparenleft}step{\isadigit{1}}\ f{\isacharparenright}\ {\isasymLongrightarrow}\ find{\isacharparenleft}f{\isacharcomma}x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}if\ f\ x\ {\isacharequal}\ x\ then\ x\ else\ find{\isacharparenleft}f{\isacharcomma}\ f\ x{\isacharparenright}{\isacharparenright}{\isachardoublequote}\isanewline
 \isacommand{by}\ simp%
 \begin{isamarkuptext}%
-\noindent and then disable the original recursion equation:%
+\noindent Then you should disable the original recursion equation:%
 \end{isamarkuptext}%
 \isacommand{declare}\ find{\isachardot}simps{\isacharbrackleft}simp\ del{\isacharbrackright}%
 \begin{isamarkuptext}%
-We can reason about such underdefined functions just like about any other
-recursive function. Here is a simple example of recursion induction:%
+Reasoning about such underdefined functions is like that for other
+recursive functions.  Here is a simple example of recursion induction:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}wf{\isacharparenleft}step{\isadigit{1}}\ f{\isacharparenright}\ {\isasymlongrightarrow}\ f{\isacharparenleft}find{\isacharparenleft}f{\isacharcomma}x{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ find{\isacharparenleft}f{\isacharcomma}x{\isacharparenright}{\isachardoublequote}\isanewline
 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ f\ x\ rule{\isacharcolon}find{\isachardot}induct{\isacharparenright}\isanewline
@@ -162,8 +163,8 @@
 \begin{verbatim}
      x := s; while b(x) do x := c(x); return x
 \end{verbatim}
-In general, \isa{s} will be a tuple (better still: a record). As an example
-consider the following definition of function \isa{find} above:%
+In general, \isa{s} will be a tuple or record.  As an example
+consider the following definition of function \isa{find}:%
 \end{isamarkuptext}%
 \isacommand{constdefs}\ find{\isadigit{2}}\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a{\isacharparenright}\ {\isasymRightarrow}\ {\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a{\isachardoublequote}\isanewline
 \ \ {\isachardoublequote}find{\isadigit{2}}\ f\ x\ {\isasymequiv}\isanewline
@@ -178,7 +179,7 @@
 Although the definition of tail recursive functions via \isa{while} avoids
 termination proofs, there is no free lunch. When proving properties of
 functions defined by \isa{while}, termination rears its ugly head
-again. Here is \isa{while{\isacharunderscore}rule}, the well known proof rule for total
+again. Here is \tdx{while_rule}, the well known proof rule for total
 correctness of loops expressed with \isa{while}:
 \begin{isabelle}%
 \ \ \ \ \ {\isasymlbrakk}P\ s{\isacharsemicolon}\ {\isasymAnd}s{\isachardot}\ {\isasymlbrakk}P\ s{\isacharsemicolon}\ b\ s{\isasymrbrakk}\ {\isasymLongrightarrow}\ P\ {\isacharparenleft}c\ s{\isacharparenright}{\isacharsemicolon}\isanewline
@@ -214,7 +215,7 @@
 \begin{isamarkuptext}%
 Let us conclude this section on partial functions by a
 discussion of the merits of the \isa{while} combinator. We have
-already seen that the advantage (if it is one) of not having to
+already seen that the advantage of not having to
 provide a termination argument when defining a function via \isa{while} merely puts off the evil hour. On top of that, tail recursive
 functions tend to be more complicated to reason about. So why use
 \isa{while} at all? The only reason is executability: the recursion
--- a/doc-src/TutorialI/Advanced/document/WFrec.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Advanced/document/WFrec.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -5,7 +5,7 @@
 \begin{isamarkuptext}%
 \noindent
 So far, all recursive definitions were shown to terminate via measure
-functions. Sometimes this can be quite inconvenient or even
+functions. Sometimes this can be inconvenient or
 impossible. Fortunately, \isacommand{recdef} supports much more
 general definitions. For example, termination of Ackermann's function
 can be shown by means of the \rmindex{lexicographic product} \isa{{\isacharless}{\isacharasterisk}lex{\isacharasterisk}{\isachargreater}}:%
@@ -25,18 +25,19 @@
 In general, \isacommand{recdef} supports termination proofs based on
 arbitrary well-founded relations as introduced in \S\ref{sec:Well-founded}.
 This is called \textbf{well-founded
-recursion}\indexbold{recursion!well-founded}. Clearly, a function definition
-is total iff the set of all pairs $(r,l)$, where $l$ is the argument on the
+recursion}\indexbold{recursion!well-founded}.  A function definition
+is total if and only if the set of 
+all pairs $(r,l)$, where $l$ is the argument on the
 left-hand side of an equation and $r$ the argument of some recursive call on
 the corresponding right-hand side, induces a well-founded relation.  For a
 systematic account of termination proofs via well-founded relations see, for
 example, Baader and Nipkow~\cite{Baader-Nipkow}.
 
-Each \isacommand{recdef} definition should be accompanied (after the name of
-the function) by a well-founded relation on the argument type of the
-function.  Isabelle/HOL formalizes some of the most important
+Each \isacommand{recdef} definition should be accompanied (after the function's
+name) by a well-founded relation on the function's argument type.  
+Isabelle/HOL formalizes some of the most important
 constructions of well-founded relations (see \S\ref{sec:Well-founded}). For
-example, \isa{measure\ f} is always well-founded, and the lexicographic
+example, \isa{measure\ f} is always well-founded.   The lexicographic
 product of two well-founded relations is again well-founded, which we relied
 on when defining Ackermann's function above.
 Of course the lexicographic product can also be iterated:%
@@ -54,8 +55,8 @@
 existing well-founded relation via the inverse image construction \isa{inv{\isacharunderscore}image}. All these constructions are known to \isacommand{recdef}. Thus you
 will never have to prove well-foundedness of any relation composed
 solely of these building blocks. But of course the proof of
-termination of your function definition, i.e.\ that the arguments
-decrease with every recursive call, may still require you to provide
+termination of your function definition --- that the arguments
+decrease with every recursive call --- may still require you to provide
 additional lemmas.
 
 It is also possible to use your own well-founded relations with
@@ -76,7 +77,7 @@
 \begin{isamarkuptxt}%
 \noindent
 The proof is by showing that our relation is a subset of another well-founded
-relation: one given by a measure function.%
+relation: one given by a measure function.\index{*wf_subset (theorem)}%
 \end{isamarkuptxt}%
 \isacommand{apply}\ {\isacharparenleft}rule\ wf{\isacharunderscore}subset\ {\isacharbrackleft}of\ {\isachardoublequote}measure\ {\isacharparenleft}{\isasymlambda}k{\isacharcolon}{\isacharcolon}nat{\isachardot}\ N{\isacharminus}k{\isacharparenright}{\isachardoublequote}{\isacharbrackright}{\isacharcomma}\ blast{\isacharparenright}%
 \begin{isamarkuptxt}%
--- a/doc-src/TutorialI/Advanced/document/simp.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Advanced/document/simp.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -7,10 +7,9 @@
 %
 \begin{isamarkuptext}%
 \label{sec:simplification-II}\index{simplification|(}
-This section discusses some additional nifty features not covered so far and
-gives a short introduction to the simplification process itself. The latter
-is helpful to understand why a particular rule does or does not apply in some
-situation.%
+This section describes features not covered until now.  It also
+outlines the simplification process itself, which can be helpful
+when the simplifier does not do what you expect of it.%
 \end{isamarkuptext}%
 %
 \isamarkupsubsection{Advanced Features%
@@ -21,9 +20,10 @@
 %
 \begin{isamarkuptext}%
 \label{sec:simp-cong}
-It is hardwired into the simplifier that while simplifying the conclusion $Q$
-of $P \Imp Q$ it is legal to make uses of the assumption $P$. This
-kind of contextual information can also be made available for other
+While simplifying the conclusion $Q$
+of $P \Imp Q$, it is legal to use the assumption $P$.
+For $\Imp$ this policy is hardwired, but 
+contextual information can also be made available for other
 operators. For example, \isa{xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymlongrightarrow}\ xs\ {\isacharat}\ xs\ {\isacharequal}\ xs} simplifies to \isa{True} because we may use \isa{xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}} when simplifying \isa{xs\ {\isacharat}\ xs\ {\isacharequal}\ xs}. The generation of contextual information during simplification is
 controlled by so-called \bfindex{congruence rules}. This is the one for
 \isa{{\isasymlongrightarrow}}:
@@ -41,14 +41,14 @@
 \ \ \ \ \ {\isasymlbrakk}A\ {\isacharequal}\ B{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ x\ {\isasymin}\ B\ {\isasymLongrightarrow}\ P\ x\ {\isacharequal}\ Q\ x{\isasymrbrakk}\isanewline
 \isaindent{\ \ \ \ \ }{\isasymLongrightarrow}\ {\isacharparenleft}{\isasymforall}x{\isasymin}A{\isachardot}\ P\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}{\isasymforall}x{\isasymin}B{\isachardot}\ Q\ x{\isacharparenright}%
 \end{isabelle}
-The congruence rule for conditional expressions supplies contextual
+One congruence rule for conditional expressions supplies contextual
 information for simplifying the \isa{then} and \isa{else} cases:
 \begin{isabelle}%
 \ \ \ \ \ {\isasymlbrakk}b\ {\isacharequal}\ c{\isacharsemicolon}\ c\ {\isasymLongrightarrow}\ x\ {\isacharequal}\ u{\isacharsemicolon}\ {\isasymnot}\ c\ {\isasymLongrightarrow}\ y\ {\isacharequal}\ v{\isasymrbrakk}\isanewline
 \isaindent{\ \ \ \ \ }{\isasymLongrightarrow}\ {\isacharparenleft}if\ b\ then\ x\ else\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}if\ c\ then\ u\ else\ v{\isacharparenright}%
 \end{isabelle}
-A congruence rule can also \emph{prevent} simplification of some arguments.
-Here is an alternative congruence rule for conditional expressions:
+An alternative congruence rule for conditional expressions
+actually \emph{prevents} simplification of some arguments:
 \begin{isabelle}%
 \ \ \ \ \ b\ {\isacharequal}\ c\ {\isasymLongrightarrow}\ {\isacharparenleft}if\ b\ then\ x\ else\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}if\ c\ then\ x\ else\ y{\isacharparenright}%
 \end{isabelle}
@@ -83,11 +83,8 @@
 }
 %
 \begin{isamarkuptext}%
-\index{rewrite rule!permutative|bold}
-\index{rewriting!ordered|bold}
-\index{ordered rewriting|bold}
-\index{simplification!ordered|bold}
-An equation is a \bfindex{permutative rewrite rule} if the left-hand
+\index{rewrite rules!permutative|bold}%
+An equation is a \textbf{permutative rewrite rule} if the left-hand
 side and right-hand side are the same up to renaming of variables.  The most
 common permutative rule is commutativity: \isa{x\ {\isacharplus}\ y\ {\isacharequal}\ y\ {\isacharplus}\ x}.  Other examples
 include \isa{x\ {\isacharminus}\ y\ {\isacharminus}\ z\ {\isacharequal}\ x\ {\isacharminus}\ z\ {\isacharminus}\ y} in arithmetic and \isa{insert\ x\ {\isacharparenleft}insert\ y\ A{\isacharparenright}\ {\isacharequal}\ insert\ y\ {\isacharparenleft}insert\ x\ A{\isacharparenright}} for sets. Such rules are problematic because
@@ -125,14 +122,15 @@
 such tricks.%
 \end{isamarkuptext}%
 %
-\isamarkupsubsection{How it Works%
+\isamarkupsubsection{How the Simplifier Works%
 }
 %
 \begin{isamarkuptext}%
 \label{sec:SimpHow}
-Roughly speaking, the simplifier proceeds bottom-up (subterms are simplified
-first) and a conditional equation is only applied if its condition could be
-proved (again by simplification). Below we explain some special features of the rewriting process.%
+Roughly speaking, the simplifier proceeds bottom-up: subterms are simplified
+first.  A conditional equation is only applied if its condition can be
+proved, again by simplification.  Below we explain some special features of
+the rewriting process.%
 \end{isamarkuptext}%
 %
 \isamarkupsubsubsection{Higher-Order Patterns%
@@ -141,32 +139,34 @@
 \begin{isamarkuptext}%
 \index{simplification rule|(}
 So far we have pretended the simplifier can deal with arbitrary
-rewrite rules. This is not quite true.  Due to efficiency (and
-potentially also computability) reasons, the simplifier expects the
+rewrite rules. This is not quite true.  For reasons of feasibility,
+the simplifier expects the
 left-hand side of each rule to be a so-called \emph{higher-order
-pattern}~\cite{nipkow-patterns}\indexbold{higher-order
-pattern}\indexbold{pattern, higher-order}. This restricts where
+pattern}~\cite{nipkow-patterns}\indexbold{patterns!higher-order}. 
+This restricts where
 unknowns may occur.  Higher-order patterns are terms in $\beta$-normal
-form (this will always be the case unless you have done something
-strange) where each occurrence of an unknown is of the form
+form.  (This means there are no subterms of the form $(\lambda x. M)(N)$.)  
+Each occurrence of an unknown is of the form
 $\Var{f}~x@1~\dots~x@n$, where the $x@i$ are distinct bound
 variables. Thus all ordinary rewrite rules, where all unknowns are
-of base type, for example \isa{{\isacharquery}m\ {\isacharplus}\ {\isacharquery}n\ {\isacharplus}\ {\isacharquery}k\ {\isacharequal}\ {\isacharquery}m\ {\isacharplus}\ {\isacharparenleft}{\isacharquery}n\ {\isacharplus}\ {\isacharquery}k{\isacharparenright}}, are OK: if an unknown is
+of base type, for example \isa{{\isacharquery}m\ {\isacharplus}\ {\isacharquery}n\ {\isacharplus}\ {\isacharquery}k\ {\isacharequal}\ {\isacharquery}m\ {\isacharplus}\ {\isacharparenleft}{\isacharquery}n\ {\isacharplus}\ {\isacharquery}k{\isacharparenright}}, are acceptable: if an unknown is
 of base type, it cannot have any arguments. Additionally, the rule
-\isa{{\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}P\ x\ {\isasymand}\ {\isacharquery}Q\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}{\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}P\ x{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}Q\ x{\isacharparenright}{\isacharparenright}} is also OK, in
+\isa{{\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}P\ x\ {\isasymand}\ {\isacharquery}Q\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}{\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}P\ x{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}Q\ x{\isacharparenright}{\isacharparenright}} is also acceptable, in
 both directions: all arguments of the unknowns \isa{{\isacharquery}P} and
 \isa{{\isacharquery}Q} are distinct bound variables.
 
-If the left-hand side is not a higher-order pattern, not all is lost
-and the simplifier will still try to apply the rule, but only if it
-matches \emph{directly}, i.e.\ without much $\lambda$-calculus hocus
-pocus. For example, \isa{{\isacharquery}f\ {\isacharquery}x\ {\isasymin}\ range\ {\isacharquery}f\ {\isacharequal}\ True} rewrites
+If the left-hand side is not a higher-order pattern, all is not lost.
+The simplifier will still try to apply the rule provided it
+matches directly: without much $\lambda$-calculus hocus
+pocus.  For example, \isa{{\isacharparenleft}{\isacharquery}f\ {\isacharquery}x\ {\isasymin}\ range\ {\isacharquery}f{\isacharparenright}\ {\isacharequal}\ True} rewrites
 \isa{g\ a\ {\isasymin}\ range\ g} to \isa{True}, but will fail to match
 \isa{g{\isacharparenleft}h\ b{\isacharparenright}\ {\isasymin}\ range{\isacharparenleft}{\isasymlambda}x{\isachardot}\ g{\isacharparenleft}h\ x{\isacharparenright}{\isacharparenright}}.  However, you can
-replace the offending subterms (in our case \isa{{\isacharquery}f\ {\isacharquery}x}, which
-is not a pattern) by adding new variables and conditions: \isa{{\isacharquery}y\ {\isacharequal}\ {\isacharquery}f\ {\isacharquery}x\ {\isasymLongrightarrow}\ {\isacharquery}y\ {\isasymin}\ range\ {\isacharquery}f\ {\isacharequal}\ True} is fine
+eliminate the offending subterms --- those that are not patterns ---
+by adding new variables and conditions.
+In our example, we eliminate \isa{{\isacharquery}f\ {\isacharquery}x} and obtain
+ \isa{{\isacharquery}y\ {\isacharequal}\ {\isacharquery}f\ {\isacharquery}x\ {\isasymLongrightarrow}\ {\isacharparenleft}{\isacharquery}y\ {\isasymin}\ range\ {\isacharquery}f{\isacharparenright}\ {\isacharequal}\ True}, which is fine
 as a conditional rewrite rule since conditions can be arbitrary
-terms. However, this trick is not a panacea because the newly
+terms.  However, this trick is not a panacea because the newly
 introduced conditions may be hard to solve.
   
 There is no restriction on the form of the right-hand
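
Installing one of the congruence rules for if discussed above is a one-line declaration. The sketch below assumes the rule names if_cong and if_weak_cong from the text; which of the two the simplifier uses by default depends on the HOL setup.

(* make the strong congruence rule for if available to the simplifier,
   so the then and else branches are simplified under the assumptions
   c and ~c respectively *)
declare if_cong[cong]
(* to switch back to the weak rule, which only simplifies the condition:
   declare if_weak_cong[cong] *)
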
--- a/doc-src/TutorialI/Advanced/simp.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Advanced/simp.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -5,10 +5,9 @@
 section{*Simplification*}
 
 text{*\label{sec:simplification-II}\index{simplification|(}
-This section discusses some additional nifty features not covered so far and
-gives a short introduction to the simplification process itself. The latter
-is helpful to understand why a particular rule does or does not apply in some
-situation.
+This section describes features not covered until now.  It also
+outlines the simplification process itself, which can be helpful
+when the simplifier does not do what you expect of it.
 *}
 
 subsection{*Advanced Features*}
@@ -16,9 +15,10 @@
 subsubsection{*Congruence Rules*}
 
 text{*\label{sec:simp-cong}
-It is hardwired into the simplifier that while simplifying the conclusion $Q$
-of $P \Imp Q$ it is legal to make uses of the assumption $P$. This
-kind of contextual information can also be made available for other
+While simplifying the conclusion $Q$
+of $P \Imp Q$, it is legal to use the assumption $P$.
+For $\Imp$ this policy is hardwired, but 
+contextual information can also be made available for other
 operators. For example, @{prop"xs = [] --> xs@xs = xs"} simplifies to @{term
 True} because we may use @{prop"xs = []"} when simplifying @{prop"xs@xs =
 xs"}. The generation of contextual information during simplification is
@@ -33,11 +33,11 @@
 Here are some more examples.  The congruence rules for bounded
 quantifiers supply contextual information about the bound variable:
 @{thm[display,eta_contract=false,margin=60]ball_cong[no_vars]}
-The congruence rule for conditional expressions supplies contextual
+One congruence rule for conditional expressions supplies contextual
 information for simplifying the @{text then} and @{text else} cases:
 @{thm[display]if_cong[no_vars]}
-A congruence rule can also \emph{prevent} simplification of some arguments.
-Here is an alternative congruence rule for conditional expressions:
+An alternative congruence rule for conditional expressions
+actually \emph{prevents} simplification of some arguments:
 @{thm[display]if_weak_cong[no_vars]}
 Only the first argument is simplified; the others remain unchanged.
 This makes simplification much faster and is faithful to the evaluation
@@ -67,11 +67,8 @@
 subsubsection{*Permutative Rewrite Rules*}
 
 text{*
-\index{rewrite rule!permutative|bold}
-\index{rewriting!ordered|bold}
-\index{ordered rewriting|bold}
-\index{simplification!ordered|bold}
-An equation is a \bfindex{permutative rewrite rule} if the left-hand
+\index{rewrite rules!permutative|bold}%
+An equation is a \textbf{permutative rewrite rule} if the left-hand
 side and right-hand side are the same up to renaming of variables.  The most
 common permutative rule is commutativity: @{prop"x+y = y+x"}.  Other examples
 include @{prop"(x-y)-z = (x-z)-y"} in arithmetic and @{prop"insert x (insert
@@ -110,45 +107,48 @@
 such tricks.
 *}
 
-subsection{*How it Works*}
+subsection{*How the Simplifier Works*}
 
 text{*\label{sec:SimpHow}
-Roughly speaking, the simplifier proceeds bottom-up (subterms are simplified
-first) and a conditional equation is only applied if its condition could be
-proved (again by simplification). Below we explain some special features of the rewriting process.
+Roughly speaking, the simplifier proceeds bottom-up: subterms are simplified
+first.  A conditional equation is only applied if its condition can be
+proved, again by simplification.  Below we explain some special features of
+the rewriting process. 
 *}
 
 subsubsection{*Higher-Order Patterns*}
 
 text{*\index{simplification rule|(}
 So far we have pretended the simplifier can deal with arbitrary
-rewrite rules. This is not quite true.  Due to efficiency (and
-potentially also computability) reasons, the simplifier expects the
+rewrite rules. This is not quite true.  For reasons of feasibility,
+the simplifier expects the
 left-hand side of each rule to be a so-called \emph{higher-order
-pattern}~\cite{nipkow-patterns}\indexbold{higher-order
-pattern}\indexbold{pattern, higher-order}. This restricts where
+pattern}~\cite{nipkow-patterns}\indexbold{patterns!higher-order}. 
+This restricts where
 unknowns may occur.  Higher-order patterns are terms in $\beta$-normal
-form (this will always be the case unless you have done something
-strange) where each occurrence of an unknown is of the form
+form.  (This means there are no subterms of the form $(\lambda x. M)(N)$.)  
+Each occurrence of an unknown is of the form
 $\Var{f}~x@1~\dots~x@n$, where the $x@i$ are distinct bound
 variables. Thus all ordinary rewrite rules, where all unknowns are
-of base type, for example @{thm add_assoc}, are OK: if an unknown is
+of base type, for example @{thm add_assoc}, are acceptable: if an unknown is
 of base type, it cannot have any arguments. Additionally, the rule
-@{text"(\<forall>x. ?P x \<and> ?Q x) = ((\<forall>x. ?P x) \<and> (\<forall>x. ?Q x))"} is also OK, in
+@{text"(\<forall>x. ?P x \<and> ?Q x) = ((\<forall>x. ?P x) \<and> (\<forall>x. ?Q x))"} is also acceptable, in
 both directions: all arguments of the unknowns @{text"?P"} and
 @{text"?Q"} are distinct bound variables.
 
-If the left-hand side is not a higher-order pattern, not all is lost
-and the simplifier will still try to apply the rule, but only if it
-matches \emph{directly}, i.e.\ without much $\lambda$-calculus hocus
-pocus. For example, @{text"?f ?x \<in> range ?f = True"} rewrites
+If the left-hand side is not a higher-order pattern, all is not lost.
+The simplifier will still try to apply the rule provided it
+matches directly: without much $\lambda$-calculus hocus
+pocus.  For example, @{text"(?f ?x \<in> range ?f) = True"} rewrites
 @{term"g a \<in> range g"} to @{term True}, but will fail to match
 @{text"g(h b) \<in> range(\<lambda>x. g(h x))"}.  However, you can
-replace the offending subterms (in our case @{text"?f ?x"}, which
-is not a pattern) by adding new variables and conditions: @{text"?y =
-?f ?x \<Longrightarrow> ?y \<in> range ?f = True"} is fine
+eliminate the offending subterms --- those that are not patterns ---
+by adding new variables and conditions.
+In our example, we eliminate @{text"?f ?x"} and obtain
+ @{text"?y =
+?f ?x \<Longrightarrow> (?y \<in> range ?f) = True"}, which is fine
 as a conditional rewrite rule since conditions can be arbitrary
-terms. However, this trick is not a panacea because the newly
+terms.  However, this trick is not a panacea because the newly
 introduced conditions may be hard to solve.
   
 There is no restriction on the form of the right-hand
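
The workaround for non-patterns described above can be stated directly as a conditional simp rule. The lemma name range_cond and the closing proof method are assumptions made for this sketch; any of the usual automatic methods should prove it.

(* the non-pattern ?f ?x on the left-hand side is replaced by the fresh
   variable y together with the condition y = f x *)
lemma range_cond [simp]: "y = f x \<Longrightarrow> (y \<in> range f) = True"
by auto
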
--- a/doc-src/TutorialI/CTL/CTL.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/CTL/CTL.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -3,6 +3,7 @@
 subsection{*Computation Tree Logic --- CTL*};
 
 text{*\label{sec:CTL}
+\index{CTL|(}%
 The semantics of PDL only needs reflexive transitive closure.
 Let us be adventurous and introduce a more expressive temporal operator.
 We extend the datatype
@@ -27,7 +28,7 @@
          "Paths s \<equiv> {p. s = p 0 \<and> (\<forall>i. (p i, p(i+1)) \<in> M)}";
 
 text{*\noindent
-This definition allows a very succinct statement of the semantics of @{term AF}:
+This definition allows a succinct statement of the semantics of @{term AF}:
 \footnote{Do not be misled: neither datatypes nor recursive functions can be
 extended by new constructors or equations. This is just a trick of the
 presentation. In reality one has to define a new datatype and a new function.}
@@ -108,9 +109,9 @@
   "lfp(af A) \<subseteq> {s. \<forall> p \<in> Paths s. \<exists> i. p i \<in> A}";
 
 txt{*\noindent
-In contrast to the analogous property for @{term EF}, and just
-for a change, we do not use fixed point induction but a weaker theorem,
-also known in the literature (after David Park) as \emph{Park-induction}:
+In contrast to the analogous proof for @{term EF}, and just
+for a change, we do not use fixed point induction.  Park-induction,
+named after David Park, is weaker but sufficient for this proof:
 \begin{center}
 @{thm lfp_lowerbound[of _ "S",no_vars]} \hfill (@{thm[source]lfp_lowerbound})
 \end{center}
@@ -137,20 +138,19 @@
 
 txt{*
 @{subgoals[display,indent=0,margin=70,goals_limit=1]}
-It merely remains to set @{term pa} to @{term"\<lambda>i. p(i+1)"}, i.e.\ @{term p} without its
-first element. The rest is practically automatic:
+It merely remains to set @{term pa} to @{term"\<lambda>i. p(i+1)"}, that is, 
+@{term p} without its first element.  The rest is automatic:
 *};
 
 apply(erule_tac x = "\<lambda>i. p(i+1)" in allE);
-apply simp;
-apply blast;
+apply force;
 done;
 
 
 text{*
 The opposite inclusion is proved by contradiction: if some state
 @{term s} is not in @{term"lfp(af A)"}, then we can construct an
-infinite @{term A}-avoiding path starting from @{term s}. The reason is
+infinite @{term A}-avoiding path starting from~@{term s}. The reason is
 that by unfolding @{term lfp} we find that if @{term s} is not in
 @{term"lfp(af A)"}, then @{term s} is not in @{term A} and there is a
 direct successor of @{term s} that is again not in \mbox{@{term"lfp(af
@@ -219,9 +219,9 @@
 apply(clarsimp);
 
 txt{*\noindent
-After simplification and clarification, the subgoal has the following form
+After simplification and clarification, the subgoal has the following form:
 @{subgoals[display,indent=0,margin=70,goals_limit=1]}
-and invites a proof by induction on @{term i}:
+It invites a proof by induction on @{term i}:
 *};
 
 apply(induct_tac i);
@@ -244,7 +244,7 @@
  apply(fast intro:someI2_ex);
 
 txt{*\noindent
-What is worth noting here is that we have used @{text fast} rather than
+What is worth noting here is that we have used \methdx{fast} rather than
 @{text blast}.  The reason is that @{text blast} would fail because it cannot
 cope with @{thm[source]someI2_ex}: unifying its conclusion with the current
 subgoal is non-trivial because of the nested schematic variables. For
@@ -349,8 +349,8 @@
 
 The language defined above is not quite CTL\@. The latter also includes an
 until-operator @{term"EU f g"} with semantics ``there \emph{E}xists a path
-where @{term f} is true \emph{U}ntil @{term g} becomes true''. With the help
-of an auxiliary function
+where @{term f} is true \emph{U}ntil @{term g} becomes true''.  We need
+an auxiliary function:
 *}
 
 consts until:: "state set \<Rightarrow> state set \<Rightarrow> state \<Rightarrow> state list \<Rightarrow> bool"
@@ -362,7 +362,7 @@
 "eusem A B \<equiv> {s. \<exists>p. until A B s p}"(*>*)
 
 text{*\noindent
-the semantics of @{term EU} is straightforward:
+Expressing the semantics of @{term EU} is now straightforward:
 @{prop[display]"s \<Turnstile> EU f g = (\<exists>p. until {t. t \<Turnstile> f} {t. t \<Turnstile> g} s p)"}
 Note that @{term EU} is not definable in terms of the other operators!
 
@@ -465,6 +465,7 @@
 in the case of finite sets and a monotone function~@{term F},
 the value of \mbox{@{term"lfp F"}} can be computed by iterated application of @{term F} to~@{term"{}"} until
 a fixed point is reached. It is actually possible to generate executable functional programs
-from HOL definitions, but that is beyond the scope of the tutorial.
+from HOL definitions, but that is beyond the scope of the tutorial.%
+\index{CTL|)}
 *}
 (*<*)end(*>*)
--- a/doc-src/TutorialI/CTL/CTLind.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/CTL/CTLind.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -3,13 +3,14 @@
 subsection{*CTL Revisited*}
 
 text{*\label{sec:CTL-revisited}
-The purpose of this section is twofold: we want to demonstrate
-some of the induction principles and heuristics discussed above and we want to
+\index{CTL|(}%
+The purpose of this section is twofold: to demonstrate
+some of the induction principles and heuristics discussed above and to
 show how inductive definitions can simplify proofs.
 In \S\ref{sec:CTL} we gave a fairly involved proof of the correctness of a
 model checker for CTL\@. In particular the proof of the
 @{thm[source]infinity_lemma} on the way to @{thm[source]AF_lemma2} is not as
-simple as one might intuitively expect, due to the @{text SOME} operator
+simple as one might expect, due to the @{text SOME} operator
 involved. Below we give a simpler proof of @{thm[source]AF_lemma2}
 based on an auxiliary inductive definition.
 
@@ -50,7 +51,7 @@
 done
 
 text{*\noindent
-The base case (@{prop"t = s"}) is trivial (@{text blast}).
+The base case (@{prop"t = s"}) is trivial and proved by @{text blast}.
 In the induction step, we have an infinite @{term A}-avoiding path @{term f}
 starting from @{term u}, a successor of @{term t}. Now we simply instantiate
 the @{text"\<forall>f\<in>Paths t"} in the induction hypothesis by the path starting with
@@ -61,7 +62,7 @@
 Now we come to the key lemma. Assuming that no infinite @{term A}-avoiding
 path starts from @{term s}, we want to show @{prop"s \<in> lfp(af A)"}. For the
 inductive proof this must be generalized to the statement that every point @{term t}
-``between'' @{term s} and @{term A}, i.e.\ all of @{term"Avoid s A"},
+``between'' @{term s} and @{term A}, in other words all of @{term"Avoid s A"},
 is contained in @{term"lfp(af A)"}:
 *}
 
@@ -77,7 +78,7 @@
 again trivial.
 
 The formal counterpart of this proof sketch is a well-founded induction
-w.r.t.\ @{term M} restricted to (roughly speaking) @{term"Avoid s A - A"}:
+on~@{term M} restricted to @{term"Avoid s A - A"}, roughly speaking:
 @{term[display]"{(y,x). (x,y) \<in> M \<and> x \<in> Avoid s A \<and> x \<notin> A}"}
 As we shall see presently, the absence of infinite @{term A}-avoiding paths
 starting from @{term s} implies well-foundedness of this relation. For the
@@ -95,11 +96,11 @@
 then all successors of @{term t} that are in @{term"Avoid s A"} are in
 @{term"lfp (af A)"}. Unfolding @{term lfp} in the conclusion of the first
 subgoal once, we have to prove that @{term t} is in @{term A} or all successors
-of @{term t} are in @{term"lfp (af A)"}: if @{term t} is not in @{term A},
+of @{term t} are in @{term"lfp (af A)"}.  But if @{term t} is not in @{term A},
 the second 
 @{term Avoid}-rule implies that all successors of @{term t} are in
-@{term"Avoid s A"} (because we also assume @{prop"t \<in> Avoid s A"}), and
-hence, by the induction hypothesis, all successors of @{term t} are indeed in
+@{term"Avoid s A"}, because we also assume @{prop"t \<in> Avoid s A"}.
+Hence, by the induction hypothesis, all successors of @{term t} are indeed in
 @{term"lfp(af A)"}. Mechanically:
 *}
 
@@ -108,8 +109,8 @@
  apply(blast intro:Avoid.intros);
 
 txt{*
-Having proved the main goal we return to the proof obligation that the above
-relation is indeed well-founded. This is proved by contradiction: if
+Having proved the main goal, we return to the proof obligation that the 
+relation used above is indeed well-founded. This is proved by contradiction: if
 the relation is not well-founded then there exists an infinite @{term
 A}-avoiding path all in @{term"Avoid s A"}, by theorem
 @{thm[source]wf_iff_no_infinite_down_chain}:
@@ -128,14 +129,15 @@
 text{*
 The @{text"(no_asm)"} modifier of the @{text"rule_format"} directive in the
 statement of the lemma means
-that the assumption is left unchanged --- otherwise the @{text"\<forall>p"} is turned
+that the assumption is left unchanged; otherwise the @{text"\<forall>p"} 
+would be turned
 into a @{text"\<And>p"}, which would complicate matters below. As it is,
 @{thm[source]Avoid_in_lfp} is now
 @{thm[display]Avoid_in_lfp[no_vars]}
 The main theorem is simply the corollary where @{prop"t = s"},
-in which case the assumption @{prop"t \<in> Avoid s A"} is trivially true
-by the first @{term Avoid}-rule. Isabelle confirms this:
-*}
+when the assumption @{prop"t \<in> Avoid s A"} is trivially true
+by the first @{term Avoid}-rule. Isabelle confirms this:%
+\index{CTL|)}*}
 
 theorem AF_lemma2:  "{s. \<forall>p \<in> Paths s. \<exists> i. p i \<in> A} \<subseteq> lfp(af A)";
 by(auto elim:Avoid_in_lfp intro:Avoid.intros);
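
The two Avoid-rules referred to throughout this proof are roughly the following. This is a reconstruction from the surrounding prose in the old inductive-definition syntax, not a quotation of CTLind.thy, so the exact form there may differ.

consts Avoid :: "state \<Rightarrow> state set \<Rightarrow> state set"
inductive "Avoid s A"
intros
  (* first rule: the start state s is always in Avoid s A *)
  "s \<in> Avoid s A"
  (* second rule: M-successors of members not yet in A are added *)
  "\<lbrakk> t \<in> Avoid s A; t \<notin> A; (t,u) \<in> M \<rbrakk> \<Longrightarrow> u \<in> Avoid s A"
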
--- a/doc-src/TutorialI/CTL/document/CTL.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/CTL/document/CTL.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -7,6 +7,7 @@
 %
 \begin{isamarkuptext}%
 \label{sec:CTL}
+\index{CTL|(}%
 The semantics of PDL only needs reflexive transitive closure.
 Let us be adventurous and introduce a more expressive temporal operator.
 We extend the datatype
@@ -24,7 +25,7 @@
 \ \ \ \ \ \ \ \ \ {\isachardoublequote}Paths\ s\ {\isasymequiv}\ {\isacharbraceleft}p{\isachardot}\ s\ {\isacharequal}\ p\ {\isadigit{0}}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}i{\isachardot}\ {\isacharparenleft}p\ i{\isacharcomma}\ p{\isacharparenleft}i{\isacharplus}{\isadigit{1}}{\isacharparenright}{\isacharparenright}\ {\isasymin}\ M{\isacharparenright}{\isacharbraceright}{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-This definition allows a very succinct statement of the semantics of \isa{AF}:
+This definition allows a succinct statement of the semantics of \isa{AF}:
 \footnote{Do not be misled: neither datatypes nor recursive functions can be
 extended by new constructors or equations. This is just a trick of the
 presentation. In reality one has to define a new datatype and a new function.}%
@@ -62,9 +63,9 @@
 \ \ {\isachardoublequote}lfp{\isacharparenleft}af\ A{\isacharparenright}\ {\isasymsubseteq}\ {\isacharbraceleft}s{\isachardot}\ {\isasymforall}\ p\ {\isasymin}\ Paths\ s{\isachardot}\ {\isasymexists}\ i{\isachardot}\ p\ i\ {\isasymin}\ A{\isacharbraceright}{\isachardoublequote}%
 \begin{isamarkuptxt}%
 \noindent
-In contrast to the analogous property for \isa{EF}, and just
-for a change, we do not use fixed point induction but a weaker theorem,
-also known in the literature (after David Park) as \emph{Park-induction}:
+In contrast to the analogous proof for \isa{EF}, and just
+for a change, we do not use fixed point induction.  Park-induction,
+named after David Park, is weaker but sufficient for this proof:
 \begin{center}
 \isa{f\ S\ {\isasymsubseteq}\ S\ {\isasymLongrightarrow}\ lfp\ f\ {\isasymsubseteq}\ S} \hfill (\isa{lfp{\isacharunderscore}lowerbound})
 \end{center}
@@ -95,21 +96,20 @@
 \begin{isamarkuptxt}%
 \begin{isabelle}%
 \ {\isadigit{1}}{\isachardot}\ {\isasymAnd}p{\isachardot}\ {\isasymlbrakk}{\isasymforall}i{\isachardot}\ {\isacharparenleft}p\ i{\isacharcomma}\ p\ {\isacharparenleft}Suc\ i{\isacharparenright}{\isacharparenright}\ {\isasymin}\ M{\isacharsemicolon}\isanewline
-\isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}p{\isachardot}\ \ \ \ }{\isasymforall}pa{\isachardot}\ p\ {\isadigit{1}}\ {\isacharequal}\ pa\ {\isadigit{0}}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}i{\isachardot}\ {\isacharparenleft}pa\ i{\isacharcomma}\ pa\ {\isacharparenleft}Suc\ i{\isacharparenright}{\isacharparenright}\ {\isasymin}\ M{\isacharparenright}\ {\isasymlongrightarrow}\isanewline
+\isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}p{\isachardot}\ \ \ \ }{\isasymforall}pa{\isachardot}\ p\ {\isadigit{1}}{\isacharprime}\ {\isacharequal}\ pa\ {\isadigit{0}}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}i{\isachardot}\ {\isacharparenleft}pa\ i{\isacharcomma}\ pa\ {\isacharparenleft}Suc\ i{\isacharparenright}{\isacharparenright}\ {\isasymin}\ M{\isacharparenright}\ {\isasymlongrightarrow}\isanewline
 \isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}p{\isachardot}\ \ \ \ {\isasymforall}pa{\isachardot}\ }{\isacharparenleft}{\isasymexists}i{\isachardot}\ pa\ i\ {\isasymin}\ A{\isacharparenright}{\isasymrbrakk}\isanewline
 \isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}p{\isachardot}\ }{\isasymLongrightarrow}\ {\isasymexists}i{\isachardot}\ p\ i\ {\isasymin}\ A%
 \end{isabelle}
-It merely remains to set \isa{pa} to \isa{{\isasymlambda}i{\isachardot}\ p\ {\isacharparenleft}i\ {\isacharplus}\ {\isadigit{1}}{\isacharparenright}}, i.e.\ \isa{p} without its
-first element. The rest is practically automatic:%
+It merely remains to set \isa{pa} to \isa{{\isasymlambda}i{\isachardot}\ p\ {\isacharparenleft}i\ {\isacharplus}\ {\isadigit{1}}{\isacharparenright}}, that is, 
+\isa{p} without its first element.  The rest is automatic:%
 \end{isamarkuptxt}%
 \isacommand{apply}{\isacharparenleft}erule{\isacharunderscore}tac\ x\ {\isacharequal}\ {\isachardoublequote}{\isasymlambda}i{\isachardot}\ p{\isacharparenleft}i{\isacharplus}{\isadigit{1}}{\isacharparenright}{\isachardoublequote}\ \isakeyword{in}\ allE{\isacharparenright}\isanewline
-\isacommand{apply}\ simp\isanewline
-\isacommand{apply}\ blast\isanewline
+\isacommand{apply}\ force\isanewline
 \isacommand{done}%
 \begin{isamarkuptext}%
 The opposite inclusion is proved by contradiction: if some state
 \isa{s} is not in \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}, then we can construct an
-infinite \isa{A}-avoiding path starting from \isa{s}. The reason is
+infinite \isa{A}-avoiding path starting from~\isa{s}. The reason is
 that by unfolding \isa{lfp} we find that if \isa{s} is not in
 \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}, then \isa{s} is not in \isa{A} and there is a
 direct successor of \isa{s} that is again not in \mbox{\isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}}. Iterating this argument yields the promised infinite
@@ -171,13 +171,13 @@
 \isacommand{apply}{\isacharparenleft}clarsimp{\isacharparenright}%
 \begin{isamarkuptxt}%
 \noindent
-After simplification and clarification, the subgoal has the following form
+After simplification and clarification, the subgoal has the following form:
 \begin{isabelle}%
 \ {\isadigit{1}}{\isachardot}\ {\isasymAnd}i{\isachardot}\ {\isasymlbrakk}Q\ s{\isacharsemicolon}\ {\isasymforall}s{\isachardot}\ Q\ s\ {\isasymlongrightarrow}\ {\isacharparenleft}{\isasymexists}t{\isachardot}\ {\isacharparenleft}s{\isacharcomma}\ t{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\ Q\ t{\isacharparenright}{\isasymrbrakk}\isanewline
 \isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}i{\isachardot}\ }{\isasymLongrightarrow}\ {\isacharparenleft}path\ s\ Q\ i{\isacharcomma}\ SOME\ t{\isachardot}\ {\isacharparenleft}path\ s\ Q\ i{\isacharcomma}\ t{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\ Q\ t{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\isanewline
 \isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}i{\isachardot}\ {\isasymLongrightarrow}\ }Q\ {\isacharparenleft}path\ s\ Q\ i{\isacharparenright}%
 \end{isabelle}
-and invites a proof by induction on \isa{i}:%
+It invites a proof by induction on \isa{i}:%
 \end{isamarkuptxt}%
 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ i{\isacharparenright}\isanewline
 \ \isacommand{apply}{\isacharparenleft}simp{\isacharparenright}%
@@ -203,7 +203,7 @@
 \ \isacommand{apply}{\isacharparenleft}fast\ intro{\isacharcolon}someI{\isadigit{2}}{\isacharunderscore}ex{\isacharparenright}%
 \begin{isamarkuptxt}%
 \noindent
-What is worth noting here is that we have used \isa{fast} rather than
+What is worth noting here is that we have used \methdx{fast} rather than
 \isa{blast}.  The reason is that \isa{blast} would fail because it cannot
 cope with \isa{someI{\isadigit{2}}{\isacharunderscore}ex}: unifying its conclusion with the current
 subgoal is non-trivial because of the nested schematic variables. For
@@ -283,8 +283,8 @@
 \begin{isamarkuptext}%
 The language defined above is not quite CTL\@. The latter also includes an
 until-operator \isa{EU\ f\ g} with semantics ``there \emph{E}xists a path
-where \isa{f} is true \emph{U}ntil \isa{g} becomes true''. With the help
-of an auxiliary function%
+where \isa{f} is true \emph{U}ntil \isa{g} becomes true''.  We need
+an auxiliary function:%
 \end{isamarkuptext}%
 \isacommand{consts}\ until{\isacharcolon}{\isacharcolon}\ {\isachardoublequote}state\ set\ {\isasymRightarrow}\ state\ set\ {\isasymRightarrow}\ state\ {\isasymRightarrow}\ state\ list\ {\isasymRightarrow}\ bool{\isachardoublequote}\isanewline
 \isacommand{primrec}\isanewline
@@ -292,7 +292,7 @@
 {\isachardoublequote}until\ A\ B\ s\ {\isacharparenleft}t{\isacharhash}p{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}s\ {\isasymin}\ A\ {\isasymand}\ {\isacharparenleft}s{\isacharcomma}t{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\ until\ A\ B\ t\ p{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-the semantics of \isa{EU} is straightforward:
+Expressing the semantics of \isa{EU} is now straightforward:
 \begin{isabelle}%
 \ \ \ \ \ s\ {\isasymTurnstile}\ EU\ f\ g\ {\isacharequal}\ {\isacharparenleft}{\isasymexists}p{\isachardot}\ until\ {\isacharbraceleft}t{\isachardot}\ t\ {\isasymTurnstile}\ f{\isacharbraceright}\ {\isacharbraceleft}t{\isachardot}\ t\ {\isasymTurnstile}\ g{\isacharbraceright}\ s\ p{\isacharparenright}%
 \end{isabelle}
@@ -327,6 +327,7 @@
 the value of \mbox{\isa{lfp\ F}} can be computed by iterated application of \isa{F} to~\isa{{\isacharbraceleft}{\isacharbraceright}} until
 a fixed point is reached. It is actually possible to generate executable functional programs
 from HOL definitions, but that is beyond the scope of the tutorial.%
+\index{CTL|)}%
 \end{isamarkuptext}%
 \end{isabellebody}%
 %%% Local Variables:
--- a/doc-src/TutorialI/CTL/document/CTLind.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/CTL/document/CTLind.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -7,13 +7,14 @@
 %
 \begin{isamarkuptext}%
 \label{sec:CTL-revisited}
-The purpose of this section is twofold: we want to demonstrate
-some of the induction principles and heuristics discussed above and we want to
+\index{CTL|(}%
+The purpose of this section is twofold: to demonstrate
+some of the induction principles and heuristics discussed above and to
 show how inductive definitions can simplify proofs.
 In \S\ref{sec:CTL} we gave a fairly involved proof of the correctness of a
 model checker for CTL\@. In particular the proof of the
 \isa{infinity{\isacharunderscore}lemma} on the way to \isa{AF{\isacharunderscore}lemma{\isadigit{2}}} is not as
-simple as one might intuitively expect, due to the \isa{SOME} operator
+simple as one might expect, due to the \isa{SOME} operator
 involved. Below we give a simpler proof of \isa{AF{\isacharunderscore}lemma{\isadigit{2}}}
 based on an auxiliary inductive definition.
 
@@ -50,7 +51,7 @@
 \isacommand{done}%
 \begin{isamarkuptext}%
 \noindent
-The base case (\isa{t\ {\isacharequal}\ s}) is trivial (\isa{blast}).
+The base case (\isa{t\ {\isacharequal}\ s}) is trivial and proved by \isa{blast}.
 In the induction step, we have an infinite \isa{A}-avoiding path \isa{f}
 starting from \isa{u}, a successor of \isa{t}. Now we simply instantiate
 the \isa{{\isasymforall}f{\isasymin}Paths\ t} in the induction hypothesis by the path starting with
@@ -61,7 +62,7 @@
 Now we come to the key lemma. Assuming that no infinite \isa{A}-avoiding
 path starts from \isa{s}, we want to show \isa{s\ {\isasymin}\ lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. For the
 inductive proof this must be generalized to the statement that every point \isa{t}
-``between'' \isa{s} and \isa{A}, i.e.\ all of \isa{Avoid\ s\ A},
+``between'' \isa{s} and \isa{A}, in other words all of \isa{Avoid\ s\ A},
 is contained in \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ Avoid{\isacharunderscore}in{\isacharunderscore}lfp{\isacharbrackleft}rule{\isacharunderscore}format{\isacharparenleft}no{\isacharunderscore}asm{\isacharparenright}{\isacharbrackright}{\isacharcolon}\isanewline
@@ -75,7 +76,7 @@
 again trivial.
 
 The formal counterpart of this proof sketch is a well-founded induction
-w.r.t.\ \isa{M} restricted to (roughly speaking) \isa{Avoid\ s\ A\ {\isacharminus}\ A}:
+on~\isa{M} restricted to \isa{Avoid\ s\ A\ {\isacharminus}\ A}, roughly speaking:
 \begin{isabelle}%
 \ \ \ \ \ {\isacharbraceleft}{\isacharparenleft}y{\isacharcomma}\ x{\isacharparenright}{\isachardot}\ {\isacharparenleft}x{\isacharcomma}\ y{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\ x\ {\isasymin}\ Avoid\ s\ A\ {\isasymand}\ x\ {\isasymnotin}\ A{\isacharbraceright}%
 \end{isabelle}
@@ -100,19 +101,19 @@
 then all successors of \isa{t} that are in \isa{Avoid\ s\ A} are in
 \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. Unfolding \isa{lfp} in the conclusion of the first
 subgoal once, we have to prove that \isa{t} is in \isa{A} or all successors
-of \isa{t} are in \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}: if \isa{t} is not in \isa{A},
+of \isa{t} are in \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}.  But if \isa{t} is not in \isa{A},
 the second 
 \isa{Avoid}-rule implies that all successors of \isa{t} are in
-\isa{Avoid\ s\ A} (because we also assume \isa{t\ {\isasymin}\ Avoid\ s\ A}), and
-hence, by the induction hypothesis, all successors of \isa{t} are indeed in
+\isa{Avoid\ s\ A}, because we also assume \isa{t\ {\isasymin}\ Avoid\ s\ A}.
+Hence, by the induction hypothesis, all successors of \isa{t} are indeed in
 \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. Mechanically:%
 \end{isamarkuptxt}%
 \ \isacommand{apply}{\isacharparenleft}subst\ lfp{\isacharunderscore}unfold{\isacharbrackleft}OF\ mono{\isacharunderscore}af{\isacharbrackright}{\isacharparenright}\isanewline
 \ \isacommand{apply}{\isacharparenleft}simp\ {\isacharparenleft}no{\isacharunderscore}asm{\isacharparenright}\ add{\isacharcolon}\ af{\isacharunderscore}def{\isacharparenright}\isanewline
 \ \isacommand{apply}{\isacharparenleft}blast\ intro{\isacharcolon}Avoid{\isachardot}intros{\isacharparenright}%
 \begin{isamarkuptxt}%
-Having proved the main goal we return to the proof obligation that the above
-relation is indeed well-founded. This is proved by contradiction: if
+Having proved the main goal, we return to the proof obligation that the 
+relation used above is indeed well-founded. This is proved by contradiction: if
 the relation is not well-founded then there exists an infinite \isa{A}-avoiding path all in \isa{Avoid\ s\ A}, by theorem
 \isa{wf{\isacharunderscore}iff{\isacharunderscore}no{\isacharunderscore}infinite{\isacharunderscore}down{\isacharunderscore}chain}:
 \begin{isabelle}%
@@ -130,15 +131,17 @@
 \begin{isamarkuptext}%
 The \isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharparenright}} modifier of the \isa{rule{\isacharunderscore}format} directive in the
 statement of the lemma means
-that the assumption is left unchanged --- otherwise the \isa{{\isasymforall}p} is turned
+that the assumption is left unchanged; otherwise the \isa{{\isasymforall}p} 
+would be turned
 into a \isa{{\isasymAnd}p}, which would complicate matters below. As it is,
 \isa{Avoid{\isacharunderscore}in{\isacharunderscore}lfp} is now
 \begin{isabelle}%
 \ \ \ \ \ {\isasymlbrakk}{\isasymforall}p{\isasymin}Paths\ s{\isachardot}\ {\isasymexists}i{\isachardot}\ p\ i\ {\isasymin}\ A{\isacharsemicolon}\ t\ {\isasymin}\ Avoid\ s\ A{\isasymrbrakk}\ {\isasymLongrightarrow}\ t\ {\isasymin}\ lfp\ {\isacharparenleft}af\ A{\isacharparenright}%
 \end{isabelle}
 The main theorem is simply the corollary where \isa{t\ {\isacharequal}\ s},
-in which case the assumption \isa{t\ {\isasymin}\ Avoid\ s\ A} is trivially true
+when the assumption \isa{t\ {\isasymin}\ Avoid\ s\ A} is trivially true
 by the first \isa{Avoid}-rule. Isabelle confirms this:%
+\index{CTL|)}%
 \end{isamarkuptext}%
 \isacommand{theorem}\ AF{\isacharunderscore}lemma{\isadigit{2}}{\isacharcolon}\ \ {\isachardoublequote}{\isacharbraceleft}s{\isachardot}\ {\isasymforall}p\ {\isasymin}\ Paths\ s{\isachardot}\ {\isasymexists}\ i{\isachardot}\ p\ i\ {\isasymin}\ A{\isacharbraceright}\ {\isasymsubseteq}\ lfp{\isacharparenleft}af\ A{\isacharparenright}{\isachardoublequote}\isanewline
 \isacommand{by}{\isacharparenleft}auto\ elim{\isacharcolon}Avoid{\isacharunderscore}in{\isacharunderscore}lfp\ intro{\isacharcolon}Avoid{\isachardot}intros{\isacharparenright}\isanewline
--- a/doc-src/TutorialI/Inductive/AB.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Inductive/AB.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -3,6 +3,7 @@
 section{*Case Study: A Context Free Grammar*}
 
 text{*\label{sec:CFG}
+\index{grammars!defining inductively|(}%
 Grammars are nothing but shorthands for inductive definitions of nonterminals
 which represent sets of strings. For example, the production
 $A \to B c$ is short for
@@ -136,8 +137,7 @@
 by(force simp add:zabs_def take_Cons split:nat.split if_splits); 
 
 text{*
-Finally we come to the above mentioned lemma about cutting in half a word with two
-more elements of one sort than of the other sort:
+Finally we come to the above-mentioned lemma about cutting in half a word with two more elements of one sort than of the other sort:
 *}
 
 lemma part1:
@@ -212,7 +212,7 @@
 holds for all words shorter than @{term w}.
 
 The proof continues with a case distinction on @{term w},
-i.e.\ if @{term w} is empty or not.
+on whether @{term w} is empty or not.
 *}
 
 apply(case_tac w);
@@ -285,7 +285,9 @@
 
 text{*
 We conclude this section with a comparison of our proof with 
-Hopcroft and Ullman's \cite[p.\ts81]{HopcroftUllman}. For a start, the textbook
+Hopcroft\index{Hopcroft, J. E.} and Ullman's\index{Ullman, J. D.}
+\cite[p.\ts81]{HopcroftUllman}.
+For a start, the textbook
 grammar, for no good reason, excludes the empty word, thus complicating
 matters just a little bit: they have 8 instead of our 7 productions.
 
@@ -302,7 +304,9 @@
 thus is not at all similar to the other cases (which are automatic in
 Isabelle). The authors are at least cavalier about this point and may
 even have overlooked the slight difficulty lurking in the omitted
-cases. This is not atypical for pen-and-paper proofs, once analysed in
-detail.  *}
+cases.  Such errors are found in many pen-and-paper proofs when they
+are scrutinized formally.%
+\index{grammars!defining inductively|)}
+*}
 
 (*<*)end(*>*)
--- a/doc-src/TutorialI/Inductive/Mutual.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Inductive/Mutual.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -25,7 +25,7 @@
 (simply concatenate the names of the sets involved) and has the conclusion
 @{text[display]"(?x \<in> even \<longrightarrow> ?P ?x) \<and> (?y \<in> odd \<longrightarrow> ?Q ?y)"}
 
-If we want to prove that all even numbers are divisible by 2, we have to
+If we want to prove that all even numbers are divisible by two, we have to
 generalize the statement as follows:
 *}
 
--- a/doc-src/TutorialI/Inductive/Star.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Inductive/Star.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -3,6 +3,7 @@
 section{*The Reflexive Transitive Closure*}
 
 text{*\label{sec:rtc}
+\index{reflexive transitive closure!defining inductively|(}%
 An inductive definition may accept parameters, so it can express 
 functions that yield sets.
 Relations too can be defined inductively, since they are just sets of pairs.
@@ -21,7 +22,7 @@
 
 text{*\noindent
 The function @{term rtc} is annotated with concrete syntax: instead of
-@{text"rtc r"} we can read and write @{term"r*"}. The actual definition
+@{text"rtc r"} we can write @{term"r*"}. The actual definition
 consists of two rules. Reflexivity is obvious and is immediately given the
 @{text iff} attribute to increase automation. The
 second rule, @{thm[source]rtc_step}, says that we can always add one more
@@ -65,9 +66,9 @@
 apply(erule rtc.induct)
 
 txt{*\noindent
-Unfortunately, even the resulting base case is a problem
+Unfortunately, even the base case is a problem:
 @{subgoals[display,indent=0,goals_limit=1]}
-and maybe not what you had expected. We have to abandon this proof attempt.
+We have to abandon this proof attempt.
 To understand what is going on, let us look again at @{thm[source]rtc.induct}.
 In the above application of @{text erule}, the first premise of
 @{thm[source]rtc.induct} is unified with the first suitable assumption, which
@@ -150,6 +151,7 @@
 @{thm[source]rtc2.induct}. Since inductive proofs are hard enough
 anyway, we should always pick the simplest induction schema available.
 Hence @{term rtc} is the definition of choice.
+\index{reflexive transitive closure!defining inductively|)}
 
 \begin{exercise}\label{ex:converse-rtc-step}
 Show that the converse of @{thm[source]rtc_step} also holds:
--- a/doc-src/TutorialI/Inductive/document/AB.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Inductive/document/AB.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -7,6 +7,7 @@
 %
 \begin{isamarkuptext}%
 \label{sec:CFG}
+\index{grammars!defining inductively|(}%
 Grammars are nothing but shorthands for inductive definitions of nonterminals
 which represent sets of strings. For example, the production
 $A \to B c$ is short for
@@ -129,8 +130,7 @@
 \ \isacommand{apply}{\isacharparenleft}simp{\isacharparenright}\isanewline
 \isacommand{by}{\isacharparenleft}force\ simp\ add{\isacharcolon}zabs{\isacharunderscore}def\ take{\isacharunderscore}Cons\ split{\isacharcolon}nat{\isachardot}split\ if{\isacharunderscore}splits{\isacharparenright}%
 \begin{isamarkuptext}%
-Finally we come to the above mentioned lemma about cutting in half a word with two
-more elements of one sort than of the other sort:%
+Finally we come to the above-mentioned lemma about cutting in half a word with two more elements of one sort than of the other sort:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ part{\isadigit{1}}{\isacharcolon}\isanewline
 \ {\isachardoublequote}size{\isacharbrackleft}x{\isasymin}w{\isachardot}\ P\ x{\isacharbrackright}\ {\isacharequal}\ size{\isacharbrackleft}x{\isasymin}w{\isachardot}\ {\isasymnot}P\ x{\isacharbrackright}{\isacharplus}{\isadigit{2}}\ {\isasymLongrightarrow}\isanewline
@@ -199,7 +199,7 @@
 holds for all words shorter than \isa{w}.
 
 The proof continues with a case distinction on \isa{w},
-i.e.\ if \isa{w} is empty or not.%
+on whether \isa{w} is empty or not.%
 \end{isamarkuptxt}%
 \isacommand{apply}{\isacharparenleft}case{\isacharunderscore}tac\ w{\isacharparenright}\isanewline
 \ \isacommand{apply}{\isacharparenleft}simp{\isacharunderscore}all{\isacharparenright}%
@@ -266,7 +266,9 @@
 \isacommand{by}{\isacharparenleft}force\ simp\ add{\isacharcolon}min{\isacharunderscore}less{\isacharunderscore}iff{\isacharunderscore}disj\ split\ add{\isacharcolon}\ nat{\isacharunderscore}diff{\isacharunderscore}split{\isacharparenright}%
 \begin{isamarkuptext}%
 We conclude this section with a comparison of our proof with 
-Hopcroft and Ullman's \cite[p.\ts81]{HopcroftUllman}. For a start, the textbook
+Hopcroft\index{Hopcroft, J. E.} and Ullman's\index{Ullman, J. D.}
+\cite[p.\ts81]{HopcroftUllman}.
+For a start, the textbook
 grammar, for no good reason, excludes the empty word, thus complicating
 matters just a little bit: they have 8 instead of our 7 productions.
 
@@ -283,8 +285,9 @@
 thus is not at all similar to the other cases (which are automatic in
 Isabelle). The authors are at least cavalier about this point and may
 even have overlooked the slight difficulty lurking in the omitted
-cases. This is not atypical for pen-and-paper proofs, once analysed in
-detail.%
+cases.  Such errors are found in many pen-and-paper proofs when they
+are scrutinized formally.%
+\index{grammars!defining inductively|)}%
 \end{isamarkuptext}%
 \end{isabellebody}%
 %%% Local Variables:
--- a/doc-src/TutorialI/Inductive/document/Mutual.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Inductive/document/Mutual.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -29,7 +29,7 @@
 \ \ \ \ \ {\isacharparenleft}{\isacharquery}x\ {\isasymin}\ even\ {\isasymlongrightarrow}\ {\isacharquery}P\ {\isacharquery}x{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}{\isacharquery}y\ {\isasymin}\ odd\ {\isasymlongrightarrow}\ {\isacharquery}Q\ {\isacharquery}y{\isacharparenright}%
 \end{isabelle}
 
-If we want to prove that all even numbers are divisible by 2, we have to
+If we want to prove that all even numbers are divisible by two, we have to
 generalize the statement as follows:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}{\isacharparenleft}m\ {\isasymin}\ even\ {\isasymlongrightarrow}\ {\isadigit{2}}\ dvd\ m{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}n\ {\isasymin}\ odd\ {\isasymlongrightarrow}\ {\isadigit{2}}\ dvd\ {\isacharparenleft}Suc\ n{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
--- a/doc-src/TutorialI/Inductive/document/Star.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Inductive/document/Star.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -7,6 +7,7 @@
 %
 \begin{isamarkuptext}%
 \label{sec:rtc}
+\index{reflexive transitive closure!defining inductively|(}%
 An inductive definition may accept parameters, so it can express 
 functions that yield sets.
 Relations too can be defined inductively, since they are just sets of pairs.
@@ -24,7 +25,7 @@
 \begin{isamarkuptext}%
 \noindent
 The function \isa{rtc} is annotated with concrete syntax: instead of
-\isa{rtc\ r} we can read and write \isa{r{\isacharasterisk}}. The actual definition
+\isa{rtc\ r} we can write \isa{r{\isacharasterisk}}. The actual definition
 consists of two rules. Reflexivity is obvious and is immediately given the
 \isa{iff} attribute to increase automation. The
 second rule, \isa{rtc{\isacharunderscore}step}, says that we can always add one more
@@ -68,11 +69,11 @@
 \isacommand{apply}{\isacharparenleft}erule\ rtc{\isachardot}induct{\isacharparenright}%
 \begin{isamarkuptxt}%
 \noindent
-Unfortunately, even the resulting base case is a problem
+Unfortunately, even the base case is a problem:
 \begin{isabelle}%
 \ {\isadigit{1}}{\isachardot}\ {\isasymAnd}x{\isachardot}\ {\isacharparenleft}y{\isacharcomma}\ z{\isacharparenright}\ {\isasymin}\ r{\isacharasterisk}\ {\isasymLongrightarrow}\ {\isacharparenleft}x{\isacharcomma}\ z{\isacharparenright}\ {\isasymin}\ r{\isacharasterisk}%
 \end{isabelle}
-and maybe not what you had expected. We have to abandon this proof attempt.
+We have to abandon this proof attempt.
 To understand what is going on, let us look again at \isa{rtc{\isachardot}induct}.
 In the above application of \isa{erule}, the first premise of
 \isa{rtc{\isachardot}induct} is unified with the first suitable assumption, which
@@ -154,6 +155,7 @@
 \isa{rtc{\isadigit{2}}{\isachardot}induct}. Since inductive proofs are hard enough
 anyway, we should always pick the simplest induction schema available.
 Hence \isa{rtc} is the definition of choice.
+\index{reflexive transitive closure!defining inductively|)}
 
 \begin{exercise}\label{ex:converse-rtc-step}
 Show that the converse of \isa{rtc{\isacharunderscore}step} also holds:
--- a/doc-src/TutorialI/Inductive/even-example.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Inductive/even-example.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -27,7 +27,7 @@
 above states that 0 is even; the second states that if $n$ is even, then so
 is~$n+2$.  Given this declaration, Isabelle generates a fixed point
 definition for \isa{even} and proves theorems about it,
-thus following the definitional approach (see \S\ref{sec:definitional}).
+thus following the definitional approach (see {\S}\ref{sec:definitional}).
 These theorems
 include the introduction rules specified in the declaration, an elimination
 rule for case analysis and an induction rule.  We can refer to these
@@ -300,9 +300,10 @@
 dispensing with the lemma \isa{even_imp_even_minus_2}. 
 This example also justifies the terminology
 \textbf{rule inversion}: the new rule inverts the introduction rule
-\isa{even.step}.
+\isa{even.step}.  In general, a rule can be inverted when the set of elements
+it introduces is disjoint from those of the other introduction rules.
 
-For one-off applications of rule inversion, use the \isa{ind_cases} method. 
+For one-off applications of rule inversion, use the \methdx{ind_cases} method. 
 Here is an example:
 \begin{isabelle}
 \isacommand{apply}\ (ind_cases\ "Suc(Suc\ n)\ \isasymin \ even")
--- a/doc-src/TutorialI/Misc/AdvancedInd.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Misc/AdvancedInd.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -4,7 +4,7 @@
 
 text{*\noindent
 Now that we have learned about rules and logic, we take another look at the
-finer points of induction. The two questions we answer are: what to do if the
+finer points of induction.  We consider two questions: what to do if the
 proposition to be proved is not directly amenable to induction
 (\S\ref{sec:ind-var-in-prems}), and how to utilize (\S\ref{sec:complete-ind})
 and even derive (\S\ref{sec:derive-ind}) new induction schemas. We conclude
@@ -14,7 +14,7 @@
 subsection{*Massaging the Proposition*};
 
 text{*\label{sec:ind-var-in-prems}
-Often we have assumed that the theorem we want to prove is already in a form
+Often we have assumed that the theorem to be proved is already in a form
 that is amenable to induction, but sometimes it isn't.
 Here is an example.
 Since @{term"hd"} and @{term"last"} return the first and last element of a
@@ -31,7 +31,7 @@
 \end{quote}
 and leads to the base case
 @{subgoals[display,indent=0,goals_limit=1]}
-After simplification, it becomes
+Simplification reduces the base case to this:
 \begin{isabelle}
 \ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
 \end{isabelle}
@@ -41,7 +41,7 @@
 We should not have ignored the warning. Because the induction
 formula is only the conclusion, induction does not affect the occurrence of @{term xs} in the premises.  
 Thus the case that should have been trivial
-becomes unprovable. Fortunately, the solution is easy:\footnote{A very similar
+becomes unprovable. Fortunately, the solution is easy:\footnote{A similar
 heuristic applies to rule inductions; see \S\ref{sec:rtc}.}
 \begin{quote}
 \emph{Pull all occurrences of the induction variable into the conclusion
@@ -49,7 +49,7 @@
 \end{quote}
 Thus we should state the lemma as an ordinary 
 implication~(@{text"\<longrightarrow>"}), letting
-@{text rule_format} (\S\ref{sec:forward}) convert the
+\attrdx{rule_format} (\S\ref{sec:forward}) convert the
 result to the usual @{text"\<Longrightarrow>"} form:
 *};
 (*<*)oops;(*>*)
@@ -65,7 +65,7 @@
 
 If there are multiple premises $A@1$, \dots, $A@n$ containing the
 induction variable, you should turn the conclusion $C$ into
-\[ A@1 \longrightarrow \cdots A@n \longrightarrow C \]
+\[ A@1 \longrightarrow \cdots A@n \longrightarrow C. \]
 Additionally, you may also have to universally quantify some other variables,
 which can yield a fairly complex conclusion.  However, @{text rule_format} 
 can remove any number of occurrences of @{text"\<forall>"} and
@@ -82,15 +82,17 @@
 *)
 
 text{*
+\index{induction!on a term}%
 A second reason why your proposition may not be amenable to induction is that
-you want to induct on a whole term, rather than an individual variable. In
-general, when inducting on some term $t$ you must rephrase the conclusion $C$
+you want to induct on a complex term, rather than a variable. In
+general, induction on a term~$t$ requires rephrasing the conclusion~$C$
 as
 \begin{equation}\label{eqn:ind-over-term}
-\forall y@1 \dots y@n.~ x = t \longrightarrow C
+\forall y@1 \dots y@n.~ x = t \longrightarrow C.
 \end{equation}
-where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is new, and
-perform induction on $x$ afterwards. An example appears in
+where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is a new variable.
+Now you can perform
+induction on~$x$. An example appears in
 \S\ref{sec:complete-ind} below.
 
 The very same problem may occur in connection with rule induction. Remember
@@ -98,7 +100,7 @@
 some inductively defined set and the $x@i$ are variables.  If instead we have
 a premise $t \in R$, where $t$ is not just an $n$-tuple of variables, we
 replace it with $(x@1,\dots,x@k) \in R$, and rephrase the conclusion $C$ as
-\[ \forall y@1 \dots y@n.~ (x@1,\dots,x@k) = t \longrightarrow C \]
+\[ \forall y@1 \dots y@n.~ (x@1,\dots,x@k) = t \longrightarrow C. \]
 For an example see \S\ref{sec:CTL-revisited} below.
 
 Of course, all premises that share free variables with $t$ need to be pulled into
@@ -132,11 +134,12 @@
 be helpful. We show how to apply such induction schemas by an example.
 
 Structural induction on @{typ"nat"} is
-usually known as mathematical induction. There is also \emph{complete}
-induction, where you must prove $P(n)$ under the assumption that $P(m)$
-holds for all $m<n$. In Isabelle, this is the theorem @{thm[source]nat_less_induct}:
+usually known as mathematical induction. There is also \textbf{complete}
+\index{induction!complete}%
+induction, where you prove $P(n)$ under the assumption that $P(m)$
+holds for all $m<n$. In Isabelle, this is the theorem \tdx{nat_less_induct}:
 @{thm[display]"nat_less_induct"[no_vars]}
-As an example of its application we prove a property of the following
+As an application, we prove a property of the following
 function:
 *};
 
@@ -169,7 +172,7 @@
 apply(induct_tac k rule: nat_less_induct);
 
 txt{*\noindent
-which leaves us with the following proof state:
+We get the following proof state:
 @{subgoals[display,indent=0,margin=65]}
 After stripping the @{text"\<forall>i"}, the proof continues with a case
 distinction on @{term"i"}. The case @{prop"i = 0"} is trivial and we focus on
@@ -205,13 +208,12 @@
 which, together with (2) yields @{prop"j < f (Suc j)"} (again by
 @{thm[source]le_less_trans}).
 
-This last step shows both the power and the danger of automatic proofs: they
-will usually not tell you how the proof goes, because it can be very hard to
-translate the internal proof into a human-readable format. Therefore
-Chapter~\ref{sec:part2?} introduces a language for writing readable
-proofs.
+This last step shows both the power and the danger of automatic proofs.  They
+will usually not tell you how the proof goes, because it can be hard to
+translate the internal proof into a human-readable format.  Automatic
+proofs are easy to write but hard to read and understand.
 
-We can now derive the desired @{prop"i <= f i"} from @{thm[source]f_incr_lem}:
+The desired result, @{prop"i <= f i"}, follows from @{thm[source]f_incr_lem}:
 *};
 
 lemmas f_incr = f_incr_lem[rule_format, OF refl];
@@ -254,11 +256,12 @@
 subsection{*Derivation of New Induction Schemas*};
 
 text{*\label{sec:derive-ind}
+\index{induction!deriving new schemas}%
 Induction schemas are ordinary theorems and you can derive new ones
-whenever you wish.  This section shows you how to, using the example
+whenever you wish.  This section shows you how, using the example
 of @{thm[source]nat_less_induct}. Assume we only have structural induction
-available for @{typ"nat"} and want to derive complete induction. This
-requires us to generalize the statement first:
+available for @{typ"nat"} and want to derive complete induction.  We
+must generalize the statement as shown:
 *};
 
 lemma induct_lem: "(\<And>n::nat. \<forall>m<n. P m \<Longrightarrow> P n) \<Longrightarrow> \<forall>m<n. P m";
@@ -289,7 +292,7 @@
 by(insert induct_lem, blast);
 
 text{*
-Finally we should remind the reader that HOL already provides the mother of
+HOL already provides the mother of
 all inductions, well-founded induction (see \S\ref{sec:Well-founded}).  For
 example theorem @{thm[source]nat_less_induct} is
 a special case of @{thm[source]wf_induct} where @{term r} is @{text"<"} on
--- a/doc-src/TutorialI/Misc/document/AdvancedInd.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Misc/document/AdvancedInd.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -5,7 +5,7 @@
 \begin{isamarkuptext}%
 \noindent
 Now that we have learned about rules and logic, we take another look at the
-finer points of induction. The two questions we answer are: what to do if the
+finer points of induction.  We consider two questions: what to do if the
 proposition to be proved is not directly amenable to induction
 (\S\ref{sec:ind-var-in-prems}), and how to utilize (\S\ref{sec:complete-ind})
 and even derive (\S\ref{sec:derive-ind}) new induction schemas. We conclude
@@ -17,7 +17,7 @@
 %
 \begin{isamarkuptext}%
 \label{sec:ind-var-in-prems}
-Often we have assumed that the theorem we want to prove is already in a form
+Often we have assumed that the theorem to be proved is already in a form
 that is amenable to induction, but sometimes it isn't.
 Here is an example.
 Since \isa{hd} and \isa{last} return the first and last element of a
@@ -35,7 +35,7 @@
 \begin{isabelle}%
 \ {\isadigit{1}}{\isachardot}\ xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymLongrightarrow}\ hd\ {\isacharparenleft}rev\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}\ {\isacharequal}\ last\ {\isacharbrackleft}{\isacharbrackright}%
 \end{isabelle}
-After simplification, it becomes
+Simplification reduces the base case to this:
 \begin{isabelle}
 \ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
 \end{isabelle}
@@ -45,7 +45,7 @@
 We should not have ignored the warning. Because the induction
 formula is only the conclusion, induction does not affect the occurrence of \isa{xs} in the premises.  
 Thus the case that should have been trivial
-becomes unprovable. Fortunately, the solution is easy:\footnote{A very similar
+becomes unprovable. Fortunately, the solution is easy:\footnote{A similar
 heuristic applies to rule inductions; see \S\ref{sec:rtc}.}
 \begin{quote}
 \emph{Pull all occurrences of the induction variable into the conclusion
@@ -53,7 +53,7 @@
 \end{quote}
 Thus we should state the lemma as an ordinary 
 implication~(\isa{{\isasymlongrightarrow}}), letting
-\isa{rule{\isacharunderscore}format} (\S\ref{sec:forward}) convert the
+\attrdx{rule_format} (\S\ref{sec:forward}) convert the
 result to the usual \isa{{\isasymLongrightarrow}} form:%
 \end{isamarkuptxt}%
 \isacommand{lemma}\ hd{\isacharunderscore}rev\ {\isacharbrackleft}rule{\isacharunderscore}format{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymlongrightarrow}\ hd{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ last\ xs{\isachardoublequote}%
@@ -67,7 +67,7 @@
 
 If there are multiple premises $A@1$, \dots, $A@n$ containing the
 induction variable, you should turn the conclusion $C$ into
-\[ A@1 \longrightarrow \cdots A@n \longrightarrow C \]
+\[ A@1 \longrightarrow \cdots A@n \longrightarrow C. \]
 Additionally, you may also have to universally quantify some other variables,
 which can yield a fairly complex conclusion.  However, \isa{rule{\isacharunderscore}format} 
 can remove any number of occurrences of \isa{{\isasymforall}} and
@@ -75,15 +75,17 @@
 \end{isamarkuptxt}%
 %
 \begin{isamarkuptext}%
+\index{induction!on a term}%
 A second reason why your proposition may not be amenable to induction is that
-you want to induct on a whole term, rather than an individual variable. In
-general, when inducting on some term $t$ you must rephrase the conclusion $C$
+you want to induct on a complex term, rather than a variable. In
+general, induction on a term~$t$ requires rephrasing the conclusion~$C$
 as
 \begin{equation}\label{eqn:ind-over-term}
-\forall y@1 \dots y@n.~ x = t \longrightarrow C
+\forall y@1 \dots y@n.~ x = t \longrightarrow C.
 \end{equation}
-where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is new, and
-perform induction on $x$ afterwards. An example appears in
+where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is a new variable.
+Now you can perform
+induction on~$x$. An example appears in
 \S\ref{sec:complete-ind} below.
 
 The very same problem may occur in connection with rule induction. Remember
@@ -91,7 +93,7 @@
 some inductively defined set and the $x@i$ are variables.  If instead we have
 a premise $t \in R$, where $t$ is not just an $n$-tuple of variables, we
 replace it with $(x@1,\dots,x@k) \in R$, and rephrase the conclusion $C$ as
-\[ \forall y@1 \dots y@n.~ (x@1,\dots,x@k) = t \longrightarrow C \]
+\[ \forall y@1 \dots y@n.~ (x@1,\dots,x@k) = t \longrightarrow C. \]
 For an example see \S\ref{sec:CTL-revisited} below.
 
 Of course, all premises that share free variables with $t$ need to be pulled into
@@ -127,13 +129,14 @@
 be helpful. We show how to apply such induction schemas by an example.
 
 Structural induction on \isa{nat} is
-usually known as mathematical induction. There is also \emph{complete}
-induction, where you must prove $P(n)$ under the assumption that $P(m)$
-holds for all $m<n$. In Isabelle, this is the theorem \isa{nat{\isacharunderscore}less{\isacharunderscore}induct}:
+usually known as mathematical induction. There is also \textbf{complete}
+\index{induction!complete}%
+induction, where you prove $P(n)$ under the assumption that $P(m)$
+holds for all $m<n$. In Isabelle, this is the theorem \tdx{nat_less_induct}:
 \begin{isabelle}%
 \ \ \ \ \ {\isacharparenleft}{\isasymAnd}n{\isachardot}\ {\isasymforall}m{\isachardot}\ m\ {\isacharless}\ n\ {\isasymlongrightarrow}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ P\ n%
 \end{isabelle}
-As an example of its application we prove a property of the following
+As an application, we prove a property of the following
 function:%
 \end{isamarkuptext}%
 \isacommand{consts}\ f\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}nat\ {\isasymRightarrow}\ nat{\isachardoublequote}\isanewline
@@ -162,7 +165,7 @@
 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ k\ rule{\isacharcolon}\ nat{\isacharunderscore}less{\isacharunderscore}induct{\isacharparenright}%
 \begin{isamarkuptxt}%
 \noindent
-which leaves us with the following proof state:
+We get the following proof state:
 \begin{isabelle}%
 \ {\isadigit{1}}{\isachardot}\ {\isasymAnd}n{\isachardot}\ {\isasymforall}m{\isachardot}\ m\ {\isacharless}\ n\ {\isasymlongrightarrow}\ {\isacharparenleft}{\isasymforall}i{\isachardot}\ m\ {\isacharequal}\ f\ i\ {\isasymlongrightarrow}\ i\ {\isasymle}\ f\ i{\isacharparenright}\ {\isasymLongrightarrow}\isanewline
 \isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}n{\isachardot}\ }{\isasymforall}i{\isachardot}\ n\ {\isacharequal}\ f\ i\ {\isasymlongrightarrow}\ i\ {\isasymle}\ f\ i%
@@ -204,13 +207,12 @@
 which, together with (2) yields \isa{j\ {\isacharless}\ f\ {\isacharparenleft}Suc\ j{\isacharparenright}} (again by
 \isa{le{\isacharunderscore}less{\isacharunderscore}trans}).
 
-This last step shows both the power and the danger of automatic proofs: they
-will usually not tell you how the proof goes, because it can be very hard to
-translate the internal proof into a human-readable format. Therefore
-Chapter~\ref{sec:part2?} introduces a language for writing readable
-proofs.
+This last step shows both the power and the danger of automatic proofs.  They
+will usually not tell you how the proof goes, because it can be hard to
+translate the internal proof into a human-readable format.  Automatic
+proofs are easy to write but hard to read and understand.
 
-We can now derive the desired \isa{i\ {\isasymle}\ f\ i} from \isa{f{\isacharunderscore}incr{\isacharunderscore}lem}:%
+The desired result, \isa{i\ {\isasymle}\ f\ i}, follows from \isa{f{\isacharunderscore}incr{\isacharunderscore}lem}:%
 \end{isamarkuptext}%
 \isacommand{lemmas}\ f{\isacharunderscore}incr\ {\isacharequal}\ f{\isacharunderscore}incr{\isacharunderscore}lem{\isacharbrackleft}rule{\isacharunderscore}format{\isacharcomma}\ OF\ refl{\isacharbrackright}%
 \begin{isamarkuptext}%
@@ -255,11 +257,12 @@
 %
 \begin{isamarkuptext}%
 \label{sec:derive-ind}
+\index{induction!deriving new schemas}%
 Induction schemas are ordinary theorems and you can derive new ones
-whenever you wish.  This section shows you how to, using the example
+whenever you wish.  This section shows you how, using the example
 of \isa{nat{\isacharunderscore}less{\isacharunderscore}induct}. Assume we only have structural induction
-available for \isa{nat} and want to derive complete induction. This
-requires us to generalize the statement first:%
+available for \isa{nat} and want to derive complete induction.  We
+must generalize the statement as shown:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ induct{\isacharunderscore}lem{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m{\isachardoublequote}\isanewline
 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ n{\isacharparenright}%
@@ -288,7 +291,7 @@
 \isacommand{theorem}\ nat{\isacharunderscore}less{\isacharunderscore}induct{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ P\ n{\isachardoublequote}\isanewline
 \isacommand{by}{\isacharparenleft}insert\ induct{\isacharunderscore}lem{\isacharcomma}\ blast{\isacharparenright}%
 \begin{isamarkuptext}%
-Finally we should remind the reader that HOL already provides the mother of
+HOL already provides the mother of
 all inductions, well-founded induction (see \S\ref{sec:Well-founded}).  For
 example theorem \isa{nat{\isacharunderscore}less{\isacharunderscore}induct} is
 a special case of \isa{wf{\isacharunderscore}induct} where \isa{r} is \isa{{\isacharless}} on
--- a/doc-src/TutorialI/Misc/document/simp.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Misc/document/simp.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -287,7 +287,7 @@
 \isa{case}-expressions, as it does \isa{if}-expressions, 
 because with recursive datatypes it could lead to nontermination.
 Instead, the simplifier has a modifier
-\isa{split}\index{*split (modified)} 
+\isa{split}\index{*split (modifier)} 
 for adding splitting rules explicitly.  The
 lemma above can be proved in one step by%
 \end{isamarkuptxt}%
--- a/doc-src/TutorialI/Misc/simp.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Misc/simp.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -281,7 +281,7 @@
 @{text"case"}-expressions, as it does @{text"if"}-expressions, 
 because with recursive datatypes it could lead to nontermination.
 Instead, the simplifier has a modifier
-@{text split}\index{*split (modified)} 
+@{text split}\index{*split (modifier)} 
 for adding splitting rules explicitly.  The
 lemma above can be proved in one step by
 *}
--- a/doc-src/TutorialI/Recdef/Nested0.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Recdef/Nested0.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -3,6 +3,7 @@
 (*>*)
 
 text{*
+\index{datatypes!nested}%
 In \S\ref{sec:nested-datatype} we defined the datatype of terms
 *}
 
--- a/doc-src/TutorialI/Recdef/Nested1.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Recdef/Nested1.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -25,15 +25,15 @@
 where @{term set} returns the set of elements of a list
 and @{text"term_list_size :: term list \<Rightarrow> nat"} is an auxiliary
 function automatically defined by Isabelle
-(while processing the declaration of @{text term}).  First we have to understand why the
-recursive call of @{term trev} underneath @{term map} leads to the above
-condition. The reason is that \isacommand{recdef} ``knows'' that @{term map}
+(while processing the declaration of @{text term}).  Why does the
+recursive call of @{term trev} lead to this
+condition?  Because \isacommand{recdef} knows that @{term map}
 will apply @{term trev} only to elements of @{term ts}. Thus the 
 condition expresses that the size of the argument @{prop"t : set ts"} of any
 recursive call of @{term trev} is strictly less than @{term"size(App f ts)"},
 which equals @{term"Suc(term_list_size ts)"}.  We will now prove the termination condition and
 continue with our definition.  Below we return to the question of how
-\isacommand{recdef} ``knows'' about @{term map}.
+\isacommand{recdef} knows about @{term map}.
 *};
 
 (*<*)
--- a/doc-src/TutorialI/Recdef/Nested2.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Recdef/Nested2.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -54,19 +54,19 @@
 
 Let us now return to the question of how \isacommand{recdef} can come up with
 sensible termination conditions in the presence of higher-order functions
-like @{term"map"}. For a start, if nothing were known about @{term"map"},
+like @{term"map"}. For a start, if nothing were known about @{term"map"}, then
 @{term"map trev ts"} might apply @{term"trev"} to arbitrary terms, and thus
 \isacommand{recdef} would try to prove the unprovable @{term"size t < Suc
 (term_list_size ts)"}, without any assumption about @{term"t"}.  Therefore
 \isacommand{recdef} has been supplied with the congruence theorem
 @{thm[source]map_cong}:
 @{thm[display,margin=50]"map_cong"[no_vars]}
-Its second premise expresses (indirectly) that the first argument of
-@{term"map"} is only applied to elements of its second argument. Congruence
-rules for other higher-order functions on lists look very similar. If you get
+Its second premise expresses that in @{term"map f xs"},
+function @{term"f"} is only applied to elements of list @{term"xs"}.  Congruence
+rules for other higher-order functions on lists are similar.  If you get
 into a situation where you need to supply \isacommand{recdef} with new
-congruence rules, you can either append a hint after the end of
-the recursion equations
+congruence rules, you can append a hint after the end of
+the recursion equations:
 *}
 (*<*)
 consts dummy :: "nat => nat"
@@ -76,14 +76,14 @@
 (hints recdef_cong: map_cong)
 
 text{*\noindent
-or declare them globally
-by giving them the \attrdx{recdef_cong} attribute as in
+Or you can declare them globally
+by giving them the \attrdx{recdef_cong} attribute:
 *}
 
 declare map_cong[recdef_cong]
 
 text{*
-Note that the @{text cong} and @{text recdef_cong} attributes are
+The @{text cong} and @{text recdef_cong} attributes are
 intentionally kept apart because they control different activities, namely
 simplification and making recursive definitions.
 % The local @{text cong} in
--- a/doc-src/TutorialI/Recdef/document/Nested0.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Recdef/document/Nested0.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -3,6 +3,7 @@
 \def\isabellecontext{Nested{\isadigit{0}}}%
 %
 \begin{isamarkuptext}%
+\index{datatypes!nested}%
 In \S\ref{sec:nested-datatype} we defined the datatype of terms%
 \end{isamarkuptext}%
 \isacommand{datatype}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}b{\isacharparenright}{\isachardoublequote}term{\isachardoublequote}\ {\isacharequal}\ Var\ {\isacharprime}a\ {\isacharbar}\ App\ {\isacharprime}b\ {\isachardoublequote}{\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}b{\isacharparenright}term\ list{\isachardoublequote}%
--- a/doc-src/TutorialI/Recdef/document/Nested1.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Recdef/document/Nested1.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -27,15 +27,15 @@
 where \isa{set} returns the set of elements of a list
 and \isa{term{\isacharunderscore}list{\isacharunderscore}size\ {\isacharcolon}{\isacharcolon}\ term\ list\ {\isasymRightarrow}\ nat} is an auxiliary
 function automatically defined by Isabelle
-(while processing the declaration of \isa{term}).  First we have to understand why the
-recursive call of \isa{trev} underneath \isa{map} leads to the above
-condition. The reason is that \isacommand{recdef} ``knows'' that \isa{map}
+(while processing the declaration of \isa{term}).  Why does the
+recursive call of \isa{trev} lead to this
+condition?  Because \isacommand{recdef} knows that \isa{map}
 will apply \isa{trev} only to elements of \isa{ts}. Thus the 
 condition expresses that the size of the argument \isa{t\ {\isasymin}\ set\ ts} of any
 recursive call of \isa{trev} is strictly less than \isa{size\ {\isacharparenleft}App\ f\ ts{\isacharparenright}},
 which equals \isa{Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}}.  We will now prove the termination condition and
 continue with our definition.  Below we return to the question of how
-\isacommand{recdef} ``knows'' about \isa{map}.%
+\isacommand{recdef} knows about \isa{map}.%
 \end{isamarkuptext}%
 \end{isabellebody}%
 %%% Local Variables:
--- a/doc-src/TutorialI/Recdef/document/Nested2.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Recdef/document/Nested2.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -54,7 +54,7 @@
 
 Let us now return to the question of how \isacommand{recdef} can come up with
 sensible termination conditions in the presence of higher-order functions
-like \isa{map}. For a start, if nothing were known about \isa{map},
+like \isa{map}. For a start, if nothing were known about \isa{map}, then
 \isa{map\ trev\ ts} might apply \isa{trev} to arbitrary terms, and thus
 \isacommand{recdef} would try to prove the unprovable \isa{size\ t\ {\isacharless}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}}, without any assumption about \isa{t}.  Therefore
 \isacommand{recdef} has been supplied with the congruence theorem
@@ -63,22 +63,22 @@
 \ \ \ \ \ {\isasymlbrakk}xs\ {\isacharequal}\ ys{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ x\ {\isasymin}\ set\ ys\ {\isasymLongrightarrow}\ f\ x\ {\isacharequal}\ g\ x{\isasymrbrakk}\isanewline
 \isaindent{\ \ \ \ \ }{\isasymLongrightarrow}\ map\ f\ xs\ {\isacharequal}\ map\ g\ ys%
 \end{isabelle}
-Its second premise expresses (indirectly) that the first argument of
-\isa{map} is only applied to elements of its second argument. Congruence
-rules for other higher-order functions on lists look very similar. If you get
+Its second premise expresses that in \isa{map\ f\ xs},
+function \isa{f} is only applied to elements of list \isa{xs}.  Congruence
+rules for other higher-order functions on lists are similar.  If you get
 into a situation where you need to supply \isacommand{recdef} with new
-congruence rules, you can either append a hint after the end of
-the recursion equations%
+congruence rules, you can append a hint after the end of
+the recursion equations:%
 \end{isamarkuptext}%
 {\isacharparenleft}\isakeyword{hints}\ recdef{\isacharunderscore}cong{\isacharcolon}\ map{\isacharunderscore}cong{\isacharparenright}%
 \begin{isamarkuptext}%
 \noindent
-or declare them globally
-by giving them the \attrdx{recdef_cong} attribute as in%
+Or you can declare them globally
+by giving them the \attrdx{recdef_cong} attribute:%
 \end{isamarkuptext}%
 \isacommand{declare}\ map{\isacharunderscore}cong{\isacharbrackleft}recdef{\isacharunderscore}cong{\isacharbrackright}%
 \begin{isamarkuptext}%
-Note that the \isa{cong} and \isa{recdef{\isacharunderscore}cong} attributes are
+The \isa{cong} and \isa{recdef{\isacharunderscore}cong} attributes are
 intentionally kept apart because they control different activities, namely
 simplification and making recursive definitions.
 % The local \isa{cong} in
--- a/doc-src/TutorialI/Rules/rules.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Rules/rules.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -1311,7 +1311,6 @@
 \isacommand{by}\ (auto\ intro:\ order_antisym)
 \end{isabelle}
 
-\REMARK{refer to \isa{Main_wo_AC} if we introduce it}
 
 \subsection{Indefinite Descriptions}
 
--- a/doc-src/TutorialI/Sets/sets.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Sets/sets.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -913,7 +913,7 @@
 ((x,y)\ \isasymin\ less_than)\ =\ (x\ <\ y)
 \rulenamedx{less_than_iff}\isanewline
 wf\ less_than
-\rulename{wf_less_than}
+\rulenamedx{wf_less_than}
 \end{isabelle}
 
 The notion of measure generalizes to the 
@@ -928,7 +928,7 @@
 \isasymin\ r\isacharbraceright
 \rulenamedx{inv_image_def}\isanewline
 wf\ r\ \isasymLongrightarrow\ wf\ (inv_image\ r\ f)
-\rulename{wf_inv_image}
+\rulenamedx{wf_inv_image}
 \end{isabelle}
 
 A measure function involves the natural numbers.  The relation \isa{measure
@@ -939,7 +939,7 @@
 measure\ \isasymequiv\ inv_image\ less_than%
 \rulenamedx{measure_def}\isanewline
 wf\ (measure\ f)
-\rulename{wf_measure}
+\rulenamedx{wf_measure}
 \end{isabelle}
 
 Of the other constructions, the most important is the
@@ -957,7 +957,7 @@
 \rulenamedx{lex_prod_def}%
 \par\smallskip
 \isasymlbrakk wf\ ra;\ wf\ rb\isasymrbrakk \ \isasymLongrightarrow \ wf\ (ra\ <*lex*>\ rb)
-\rulename{wf_lex_prod}
+\rulenamedx{wf_lex_prod}
 \end{isabelle}
 
 These constructions can be used in a
--- a/doc-src/TutorialI/Types/Axioms.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/Axioms.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -3,10 +3,10 @@
 subsection{*Axioms*}
 
 
-text{*Now we want to attach axioms to our classes. Then we can reason on the
-level of classes and the results will be applicable to all types in a class,
-just as in axiomatic mathematics. These ideas are demonstrated by means of
-our above ordering relations.
+text{*Attaching axioms to our classes lets us reason on the
+level of classes.  The results will be applicable to all types in a class,
+just as in axiomatic mathematics.  These ideas are demonstrated by means of
+our ordering relations.
 *}
 
 subsubsection{*Partial Orders*}
@@ -26,13 +26,13 @@
 The first three axioms are the familiar ones, and the final one
 requires that @{text"<<"} and @{text"<<="} are related as expected.
 Note that behind the scenes, Isabelle has restricted the axioms to class
-@{text parord}. For example, this is what @{thm[source]refl} really looks like:
+@{text parord}. For example, the axiom @{thm[source]refl} really is
 @{thm[show_sorts]refl}.
 
 We have not made @{thm[source]less_le} a global definition because it would
 fix once and for all that @{text"<<"} is defined in terms of @{text"<<="} and
 never the other way around. Below you will see why we want to avoid this
-asymmetry. The drawback of the above class is that
+asymmetry. The drawback of our choice is that
 we need to define both @{text"<<="} and @{text"<<"} for each instance.
 
 We can now prove simple theorems in this abstract setting, for example
@@ -42,14 +42,16 @@
 lemma [simp]: "(x::'a::parord) << y \<Longrightarrow> (\<not> y << x) = True";
 
 txt{*\noindent
-The conclusion is not simply @{prop"\<not> y << x"} because the preprocessor
-of the simplifier (see \S\ref{sec:simp-preprocessor})
-would turn this into @{prop"y << x = False"}, thus yielding
-a nonterminating rewrite rule. In the above form it is a generally useful
-rule.
+The conclusion is not just @{prop"\<not> y << x"} because the 
+simplifier's preprocessor (see \S\ref{sec:simp-preprocessor})
+would turn it into @{prop"y << x = False"}, yielding
+a nonterminating rewrite rule.  
+(It would be used to try to prove its own precondition \emph{ad
+    infinitum}.)
+In the form above, the rule is useful.
 The type constraint is necessary because otherwise Isabelle would only assume
-@{text"'a::ordrel"} (as required in the type of @{text"<<"}), in
-which case the proposition is not a theorem.  The proof is easy:
+@{text"'a::ordrel"} (as required in the type of @{text"<<"}), 
+and then the proposition is not a theorem.  The proof is easy:
 *}
 
 by(simp add:less_le antisym);
@@ -67,8 +69,9 @@
 This time @{text intro_classes} leaves us with the four axioms,
 specialized to type @{typ bool}, as subgoals:
 @{subgoals[display,show_types,indent=0]}
-Fortunately, the proof is easy for blast, once we have unfolded the definitions
-of @{text"<<"} and @{text"<<="} at @{typ bool}:
+Fortunately, the proof is easy for \isa{blast}
+once we have unfolded the definitions
+of @{text"<<"} and @{text"<<="} at type @{typ bool}:
 *}
 
 apply(simp_all (no_asm_use) only: le_bool_def less_bool_def);
@@ -85,16 +88,15 @@
 
 text{*\noindent
 The effect is not stunning, but it demonstrates the principle.  It also shows
-that tools like the simplifier can deal with generic rules as well.
-It should be clear that the main advantage of the axiomatic method is that
-theorems can be proved in the abstract and one does not need to repeat the
-proof for each instance.
+that tools like the simplifier can deal with generic rules.
+The main advantage of the axiomatic method is that
+theorems can be proved in the abstract and freely reused for each instance.
 *}
 
 subsubsection{*Linear Orders*}
 
 text{* If any two elements of a partial order are comparable it is a
-\emph{linear} or \emph{total} order: *}
+\textbf{linear} or \textbf{total} order: *}
 
 axclass linord < parord
 linear: "x <<= y \<or> y <<= x"
@@ -112,9 +114,10 @@
 done
 
 text{*
-Linear orders are an example of subclassing by construction, which is the most
-common case. It is also possible to prove additional subclass relationships
-later on, i.e.\ subclassing by proof. This is the topic of the following
+Linear orders are an example of subclassing\index{subclasses}
+by construction, which is the most
+common case.  Subclass relationships can also be proved.  
+This is the topic of the following
 paragraph.
 *}
 
@@ -122,7 +125,7 @@
 
 text{*
 An alternative axiomatization of partial orders takes @{text"<<"} rather than
-@{text"<<="} as the primary concept. The result is a \emph{strict} order:
+@{text"<<="} as the primary concept. The result is a \textbf{strict} order:
 *}
 
 axclass strord < ordrel
@@ -173,7 +176,7 @@
 
 text{*
 A class may inherit from more than one direct superclass. This is called
-multiple inheritance and is certainly permitted. For example we could define
+\bfindex{multiple inheritance}.  For example, we could define
 the classes of well-founded orderings and well-orderings:
 *}
 
@@ -220,13 +223,12 @@
 This concludes our demonstration of type classes based on orderings.  We
 remind our readers that \isa{Main} contains a theory of
 orderings phrased in terms of the usual @{text"\<le>"} and @{text"<"}.
-It is recommended that, if possible,
-you base your own ordering relations on this theory.
+If possible, base your own ordering relations on this theory.
 *}
 
 subsubsection{*Inconsistencies*}
 
-text{* The reader may be wondering what happens if we, maybe accidentally,
+text{* The reader may be wondering what happens if we
 attach an inconsistent set of axioms to a class. So far we have always
 avoided to add new axioms to HOL for fear of inconsistencies and suddenly it
 seems that we are throwing all caution to the wind. So why is there no
--- a/doc-src/TutorialI/Types/Overloading.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/Overloading.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -4,8 +4,9 @@
 
 text{*\noindent
 This \isacommand{instance} declaration can be read like the declaration of
-a function on types: the constructor @{text list} maps types of class @{text
-term}, i.e.\ all HOL types, to types of class @{text ordrel}, i.e.\
+a function on types.  The constructor @{text list} maps types of class @{text
+term} (all HOL types) to types of class @{text ordrel};
+in other words,
 if @{text"ty :: term"} then @{text"ty list :: ordrel"}.
 Of course we should also define the meaning of @{text <<=} and
 @{text <<} on lists:
--- a/doc-src/TutorialI/Types/Overloading0.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/Overloading0.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -34,10 +34,8 @@
 
 However, there is nothing to prevent the user from forming terms such as
 @{text"inverse []"} and proving theorems such as @{text"inverse []
-= inverse []"}, although we never defined inverse on lists. We hasten to say
-that there is nothing wrong with such terms and theorems. But it would be
-nice if we could prevent their formation, simply because it is very likely
-that the user did not mean to write what he did. Thus she had better not waste
-her time pursuing it further. This requires the use of type classes.
+= inverse []"} when inverse is not defined on lists.  Proving theorems about
+undefined constants does not endanger soundness, but it is pointless.
+To prevent such terms from even being formed requires the use of type classes.
 *}
 (*<*)end(*>*)
--- a/doc-src/TutorialI/Types/Overloading1.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/Overloading1.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -3,7 +3,7 @@
 subsubsection{*Controlled Overloading with Type Classes*}
 
 text{*
-We now start with the theory of ordering relations, which we want to phrase
+We now start with the theory of ordering relations, which we shall phrase
 in terms of the two binary symbols @{text"<<"} and @{text"<<="}
 to avoid clashes with @{text"<"} and @{text"\<le>"} in theory @{text
 Main}. To restrict the application of @{text"<<"} and @{text"<<="} we
@@ -14,9 +14,9 @@
 
 text{*\noindent
 This introduces a new class @{text ordrel} and makes it a subclass of
-the predefined class @{text term}\footnote{The quotes around @{text term}
-simply avoid the clash with the command \isacommand{term}.} --- @{text term}
-is the class of all HOL types, like ``Object'' in Java.
+the predefined class @{text term}, which
+is the class of all HOL types.\footnote{The quotes around @{text term}
+simply avoid the clash with the command \isacommand{term}.}
 This is a degenerate form of axiomatic type class without any axioms.
 Its sole purpose is to restrict the use of overloaded constants to meaningful
 instances:
@@ -30,7 +30,7 @@
 constrained with a class; the constraint is propagated to the other
 occurrences automatically.
 
-So far there is not a single type of class @{text ordrel}. To breathe life
+So far there are no types of class @{text ordrel}. To breathe life
 into @{text ordrel} we need to declare a type to be an \bfindex{instance} of
 @{text ordrel}:
 *}
@@ -41,7 +41,7 @@
 Command \isacommand{instance} actually starts a proof, namely that
 @{typ bool} satisfies all axioms of @{text ordrel}.
 There are none, but we still need to finish that proof, which we do
-by invoking a fixed predefined method:
+by invoking the \methdx{intro_classes} method:
 *}
 
 by intro_classes
@@ -59,12 +59,13 @@
 less_bool_def: "P << Q \<equiv> \<not>P \<and> Q"
 
 text{*\noindent
-Now @{prop"False <<= P"} is provable
+Now @{prop"False <<= P"} is provable:
 *}
 
 lemma "False <<= P"
 by(simp add: le_bool_def)
 
 text{*\noindent
-whereas @{text"[] <<= []"} is not even well-typed. In order to make it well-typed
+At this point, @{text"[] <<= []"} is not even well-typed.
+To make it well-typed,
 we need to make lists a type of class @{text ordrel}:*}(*<*)end(*>*)
--- a/doc-src/TutorialI/Types/Overloading2.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/Overloading2.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -16,6 +16,12 @@
 
 text{*\noindent
 The infix function @{text"!"} yields the nth element of a list.
+
+\begin{warn}
+A type constructor can be instantiated in only one way to
+a given type class.  For example, our two instantiations of \isa{list} must
+reside in separate theories with disjoint scopes.\REMARK{Tobias, please check}
+\end{warn}
 *}
 
 subsubsection{*Predefined Overloading*}
@@ -24,7 +30,7 @@
 HOL comes with a number of overloaded constants and corresponding classes.
 The most important ones are listed in Table~\ref{tab:overloading} in the appendix. They are
 defined on all numeric types and sometimes on other types as well, for example
-@{text"-"}, @{text"\<le>"} and @{text"<"} on sets.
+$-$ and @{text"\<le>"} on sets.
 
 In addition there is a special input syntax for bounded quantifiers:
 \begin{center}
--- a/doc-src/TutorialI/Types/Pairs.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/Pairs.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -6,8 +6,8 @@
 Ordered pairs were already introduced in \S\ref{sec:pairs}, but only with a minimal
 repertoire of operations: pairing and the two projections @{term fst} and
 @{term snd}. In any non-trivial application of pairs you will find that this
-quickly leads to unreadable formulae involving nests of projections. This
-section is concerned with introducing some syntactic sugar to overcome this
+quickly leads to unreadable nests of projections. This
+section introduces syntactic sugar to overcome this
 problem: pattern matching with tuples.
 *}
 
@@ -33,12 +33,11 @@
 internal representation.  Thus the internal and external form of a term may
 differ, which can affect proofs. If you want to avoid this complication,
 stick to @{term fst} and @{term snd} and write @{term"%p. fst p + snd p"}
-instead of @{text"\<lambda>(x,y). x+y"} (which denote the same function but are quite
-different terms).
+instead of @{text"\<lambda>(x,y). x+y"}.  These terms are distinct even though they
+denote the same function.
 
 Internally, @{term"%(x,y). t"} becomes @{text"split (\<lambda>x y. t)"}, where
-@{term split}\indexbold{*split (constant)}
-is the uncurrying function of type @{text"('a \<Rightarrow> 'b
+\cdx{split} is the uncurrying function of type @{text"('a \<Rightarrow> 'b
 \<Rightarrow> 'c) \<Rightarrow> 'a \<times> 'b \<Rightarrow> 'c"} defined as
 \begin{center}
 @{thm split_def}
@@ -64,16 +63,17 @@
 approach is neither elegant nor very practical in large examples, although it
 can be effective in small ones.
 
-If we step back and ponder why the above lemma presented a problem in the
-first place, we quickly realize that what we would like is to replace @{term
-p} with some concrete pair @{term"(a,b)"}, in which case both sides of the
-equation would simplify to @{term a} because of the simplification rules
-@{thm split_conv[no_vars]} and @{thm fst_conv[no_vars]}.  This is the
-key problem one faces when reasoning about pattern matching with pairs: how to
-convert some atomic term into a pair.
+If we consider why this lemma presents a problem, 
+we quickly realize that we need to replace the variable~@{term
+p} by some pair @{term"(a,b)"}.  Then both sides of the
+equation would simplify to @{term a} by the simplification rules
+@{thm split_conv[no_vars]} and @{thm fst_conv[no_vars]}.  
+Reasoning about tuple patterns requires some way of
+converting a variable of product type into a pair.
 
 In case of a subterm of the form @{term"split f p"} this is easy: the split
-rule @{thm[source]split_split} replaces @{term p} by a pair:
+rule @{thm[source]split_split} replaces @{term p} by a pair:%
+\index{*split (method)}
 *}
 
 lemma "(\<lambda>(x,y).y) p = snd p"
@@ -147,7 +147,7 @@
 
 txt{*\noindent
 @{subgoals[display,indent=0]}
-Again, @{text case_tac} is applicable because @{text"\<times>"} is a datatype.
+Again, \methdx{case_tac} is applicable because @{text"\<times>"} is a datatype.
 The subgoal is easily proved by @{text simp}.
 
 Splitting by @{text case_tac} also solves the previous examples and may thus
@@ -172,10 +172,13 @@
 
 text{*\noindent
 Note that we have intentionally included only @{thm[source]split_paired_all}
-in the first simplification step. This time the reason was not merely
+in the first simplification step, and then we simplify again. 
+This time the reason was not merely
 pedagogical:
-@{thm[source]split_paired_all} may interfere with certain congruence
-rules of the simplifier, i.e.
+@{thm[source]split_paired_all} may interfere with other parts
+of the simplifier, such as its congruence rules.
+The following command could fail (here it does not)
+where two separate \isa{simp} applications succeed.
 *}
 
 (*<*)
@@ -184,18 +187,15 @@
 apply(simp add:split_paired_all)
 (*<*)done(*>*)
 text{*\noindent
-may fail (here it does not) where the above two stages succeed.
-
-Finally, all @{text"\<forall>"} and @{text"\<exists>"}-quantified variables are split
-automatically by the simplifier:
+Finally, the simplifier automatically splits all @{text"\<forall>"} and
+@{text"\<exists>"}-quantified variables:
 *}
 
 lemma "\<forall>p. \<exists>q. swap p = swap q"
-apply simp;
-done
+by simp;
 
 text{*\noindent
-In case you would like to turn off this automatic splitting, just disable the
+To turn off this automatic splitting, just disable the
 responsible simplification rules:
 \begin{center}
 @{thm split_paired_All[no_vars]}
--- a/doc-src/TutorialI/Types/Typedef.thy	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/Typedef.thy	Thu Aug 09 18:12:15 2001 +0200
@@ -5,8 +5,8 @@
 text{*\label{sec:adv-typedef}
 For most applications, a combination of predefined types like @{typ bool} and
 @{text"\<Rightarrow>"} with recursive datatypes and records is quite sufficient. Very
-occasionally you may feel the need for a more advanced type. If you cannot do
-without that type, and you are certain that it is not definable by any of the
+occasionally you may feel the need for a more advanced type.  If you
+are certain that your type is not definable by any of the
 standard means, then read on.
 \begin{warn}
   Types in HOL must be non-empty; otherwise the quantifier rules would be
@@ -164,37 +164,25 @@
 we merely need to prove that @{term A}, @{term B} and @{term C} are distinct
 and that they exhaust the type.
 
-We start with a helpful version of injectivity of @{term Abs_three} on the
-representing subset:
-*}
-
-lemma [simp]:
- "\<lbrakk> x \<in> three; y \<in> three \<rbrakk> \<Longrightarrow> (Abs_three x = Abs_three y) = (x=y)";
-
-txt{*\noindent
-We prove each direction separately. From @{prop"Abs_three x = Abs_three y"}
-we use @{thm[source]arg_cong} to apply @{term Rep_three} to both sides,
-deriving @{prop[display]"Rep_three(Abs_three x) = Rep_three(Abs_three y)"}
-Thus we get the required @{prop"x =
-y"} by simplification with @{thm[source]Abs_three_inverse}. 
-The other direction
-is trivial by simplification:
-*}
-
-apply(rule iffI);
- apply(drule_tac f = Rep_three in arg_cong);
- apply(simp add:Abs_three_inverse);
-by simp;
-
-text{*\noindent
-Analogous lemmas can be proved in the same way for arbitrary type definitions.
-
+In processing our \isacommand{typedef} declaration, 
+Isabelle helpfully proves several lemmas.
+One, @{thm[source]Abs_three_inject},
+expresses that @{term Abs_three} is injective on the representing subset:
+\begin{center}
+@{thm Abs_three_inject[no_vars]}
+\end{center}
+Another, @{thm[source]Rep_three_inject}, expresses that the representation
+function is also injective:
+\begin{center}
+@{thm Rep_three_inject[no_vars]}
+\end{center}
 Distinctness of @{term A}, @{term B} and @{term C} follows immediately
-if we expand their definitions and rewrite with the above simplification rule:
+if we expand their definitions and rewrite with the injectivity
+of @{term Abs_three}:
 *}
 
 lemma "A \<noteq> B \<and> B \<noteq> A \<and> A \<noteq> C \<and> C \<noteq> A \<and> B \<noteq> C \<and> C \<noteq> B";
-by(simp add:A_def B_def C_def three_def);
+by(simp add: Abs_three_inject A_def B_def C_def three_def);
 
 text{*\noindent
 Of course we rely on the simplifier to solve goals like @{prop"0 \<noteq> 1"}.
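The lemma Rep_three_inject, mentioned above but not used in the proof, can be exercised in the same style; an illustrative sketch (not part of this changeset):

    (* sketch: injectivity of the representation function *)
    lemma "Rep_three x = Rep_three y \<Longrightarrow> x = y"
    by(simp add: Rep_three_inject)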
@@ -221,7 +209,8 @@
 done
 
 text{*
-Now the case distinction lemma on type @{typ three} is easy to derive if you know how to:
+Now the case distinction lemma on type @{typ three} is easy to derive if you 
+know how:
 *}
 
 lemma three_cases: "\<lbrakk> P A; P B; P C \<rbrakk> \<Longrightarrow> P x";
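Once derived, three_cases can be applied by instantiating P explicitly; an illustrative sketch (not part of this changeset), using rule_tac in the style of the drule_tac steps seen elsewhere in the tutorial:

    (* sketch: a case distinction over the three elements *)
    lemma "\<lbrakk> g A = u; g B = u; g C = u \<rbrakk> \<Longrightarrow> g x = u"
    apply(rule_tac P = "\<lambda>z. g z = u" in three_cases)
    apply assumption+
    done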
--- a/doc-src/TutorialI/Types/document/Axioms.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Axioms.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -6,10 +6,10 @@
 }
 %
 \begin{isamarkuptext}%
-Now we want to attach axioms to our classes. Then we can reason on the
-level of classes and the results will be applicable to all types in a class,
-just as in axiomatic mathematics. These ideas are demonstrated by means of
-our above ordering relations.%
+Attaching axioms to our classes lets us reason on the
+level of classes.  The results will be applicable to all types in a class,
+just as in axiomatic mathematics.  These ideas are demonstrated by means of
+our ordering relations.%
 \end{isamarkuptext}%
 %
 \isamarkupsubsubsection{Partial Orders%
@@ -29,13 +29,13 @@
 The first three axioms are the familiar ones, and the final one
 requires that \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} are related as expected.
 Note that behind the scenes, Isabelle has restricted the axioms to class
-\isa{parord}. For example, this is what \isa{refl} really looks like:
+\isa{parord}. For example, the axiom \isa{refl} really is
 \isa{{\isacharparenleft}{\isacharquery}x{\isasymColon}{\isacharquery}{\isacharprime}a{\isasymColon}parord{\isacharparenright}\ {\isacharless}{\isacharless}{\isacharequal}\ {\isacharquery}x}.
 
 We have not made \isa{less{\isacharunderscore}le} a global definition because it would
 fix once and for all that \isa{{\isacharless}{\isacharless}} is defined in terms of \isa{{\isacharless}{\isacharless}{\isacharequal}} and
 never the other way around. Below you will see why we want to avoid this
-asymmetry. The drawback of the above class is that
+asymmetry. The drawback of our choice is that
 we need to define both \isa{{\isacharless}{\isacharless}{\isacharequal}} and \isa{{\isacharless}{\isacharless}} for each instance.
 
 We can now prove simple theorems in this abstract setting, for example
@@ -44,14 +44,16 @@
 \isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}x{\isacharcolon}{\isacharcolon}{\isacharprime}a{\isacharcolon}{\isacharcolon}parord{\isacharparenright}\ {\isacharless}{\isacharless}\ y\ {\isasymLongrightarrow}\ {\isacharparenleft}{\isasymnot}\ y\ {\isacharless}{\isacharless}\ x{\isacharparenright}\ {\isacharequal}\ True{\isachardoublequote}%
 \begin{isamarkuptxt}%
 \noindent
-The conclusion is not simply \isa{{\isasymnot}\ y\ {\isacharless}{\isacharless}\ x} because the preprocessor
-of the simplifier (see \S\ref{sec:simp-preprocessor})
-would turn this into \isa{{\isacharparenleft}y\ {\isacharless}{\isacharless}\ x{\isacharparenright}\ {\isacharequal}\ False}, thus yielding
-a nonterminating rewrite rule. In the above form it is a generally useful
-rule.
+The conclusion is not just \isa{{\isasymnot}\ y\ {\isacharless}{\isacharless}\ x} because the 
+simplifier's preprocessor (see \S\ref{sec:simp-preprocessor})
+would turn it into \isa{{\isacharparenleft}y\ {\isacharless}{\isacharless}\ x{\isacharparenright}\ {\isacharequal}\ False}, yielding
+a nonterminating rewrite rule.  
+(It would be used to try to prove its own precondition \emph{ad
+    infinitum}.)
+In the form above, the rule is useful.
 The type constraint is necessary because otherwise Isabelle would only assume
-\isa{{\isacharprime}a{\isacharcolon}{\isacharcolon}ordrel} (as required in the type of \isa{{\isacharless}{\isacharless}}), in
-which case the proposition is not a theorem.  The proof is easy:%
+\isa{{\isacharprime}a{\isacharcolon}{\isacharcolon}ordrel} (as required in the type of \isa{{\isacharless}{\isacharless}}), 
+and then the proposition would not be a theorem.  The proof is easy:%
 \end{isamarkuptxt}%
 \isacommand{by}{\isacharparenleft}simp\ add{\isacharcolon}less{\isacharunderscore}le\ antisym{\isacharparenright}%
 \begin{isamarkuptext}%
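Another theorem provable in this abstract setting, sketched here in theory-source notation rather than the generated LaTeX (not part of this changeset); it relies only on the class axiom less_le:

    (* sketch: irreflexivity of << follows from less_le *)
    lemma "\<not> (x::'a::parord) << x"
    by(simp add: less_le)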
@@ -73,8 +75,9 @@
 \ {\isadigit{3}}{\isachardot}\ {\isasymAnd}{\isacharparenleft}x{\isasymColon}bool{\isacharparenright}\ y{\isasymColon}bool{\isachardot}\ {\isasymlbrakk}x\ {\isacharless}{\isacharless}{\isacharequal}\ y{\isacharsemicolon}\ y\ {\isacharless}{\isacharless}{\isacharequal}\ x{\isasymrbrakk}\ {\isasymLongrightarrow}\ x\ {\isacharequal}\ y\isanewline
 \ {\isadigit{4}}{\isachardot}\ {\isasymAnd}{\isacharparenleft}x{\isasymColon}bool{\isacharparenright}\ y{\isasymColon}bool{\isachardot}\ {\isacharparenleft}x\ {\isacharless}{\isacharless}\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}x\ {\isacharless}{\isacharless}{\isacharequal}\ y\ {\isasymand}\ x\ {\isasymnoteq}\ y{\isacharparenright}%
 \end{isabelle}
-Fortunately, the proof is easy for blast, once we have unfolded the definitions
-of \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} at \isa{bool}:%
+Fortunately, the proof is easy for \isa{blast}
+once we have unfolded the definitions
+of \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} at type \isa{bool}:%
 \end{isamarkuptxt}%
 \isacommand{apply}{\isacharparenleft}simp{\isacharunderscore}all\ {\isacharparenleft}no{\isacharunderscore}asm{\isacharunderscore}use{\isacharparenright}\ only{\isacharcolon}\ le{\isacharunderscore}bool{\isacharunderscore}def\ less{\isacharunderscore}bool{\isacharunderscore}def{\isacharparenright}\isanewline
 \isacommand{by}{\isacharparenleft}blast{\isacharcomma}\ blast{\isacharcomma}\ blast{\isacharcomma}\ blast{\isacharparenright}%
@@ -89,10 +92,9 @@
 \begin{isamarkuptext}%
 \noindent
 The effect is not stunning, but it demonstrates the principle.  It also shows
-that tools like the simplifier can deal with generic rules as well.
-It should be clear that the main advantage of the axiomatic method is that
-theorems can be proved in the abstract and one does not need to repeat the
-proof for each instance.%
+that tools like the simplifier can deal with generic rules.
+The main advantage of the axiomatic method is that
+theorems can be proved in the abstract and freely reused for each instance.%
 \end{isamarkuptext}%
 %
 \isamarkupsubsubsection{Linear Orders%
@@ -100,7 +102,7 @@
 %
 \begin{isamarkuptext}%
 If any two elements of a partial order are comparable it is a
-\emph{linear} or \emph{total} order:%
+\textbf{linear} or \textbf{total} order:%
 \end{isamarkuptext}%
 \isacommand{axclass}\ linord\ {\isacharless}\ parord\isanewline
 linear{\isacharcolon}\ {\isachardoublequote}x\ {\isacharless}{\isacharless}{\isacharequal}\ y\ {\isasymor}\ y\ {\isacharless}{\isacharless}{\isacharequal}\ x{\isachardoublequote}%
@@ -116,9 +118,10 @@
 \isacommand{apply}\ blast\isanewline
 \isacommand{done}%
 \begin{isamarkuptext}%
-Linear orders are an example of subclassing by construction, which is the most
-common case. It is also possible to prove additional subclass relationships
-later on, i.e.\ subclassing by proof. This is the topic of the following
+Linear orders are an example of subclassing\index{subclasses}
+by construction, which is the most
+common case.  Subclass relationships can also be proved.  
+This is the topic of the following
 paragraph.%
 \end{isamarkuptext}%
 %
@@ -127,7 +130,7 @@
 %
 \begin{isamarkuptext}%
 An alternative axiomatization of partial orders takes \isa{{\isacharless}{\isacharless}} rather than
-\isa{{\isacharless}{\isacharless}{\isacharequal}} as the primary concept. The result is a \emph{strict} order:%
+\isa{{\isacharless}{\isacharless}{\isacharequal}} as the primary concept. The result is a \textbf{strict} order:%
 \end{isamarkuptext}%
 \isacommand{axclass}\ strord\ {\isacharless}\ ordrel\isanewline
 irrefl{\isacharcolon}\ \ \ \ \ {\isachardoublequote}{\isasymnot}\ x\ {\isacharless}{\isacharless}\ x{\isachardoublequote}\isanewline
@@ -167,7 +170,7 @@
 %
 \begin{isamarkuptext}%
 A class may inherit from more than one direct superclass. This is called
-multiple inheritance and is certainly permitted. For example we could define
+\bfindex{multiple inheritance}.  For example, we could define
 the classes of well-founded orderings and well-orderings:%
 \end{isamarkuptext}%
 \isacommand{axclass}\ wford\ {\isacharless}\ parord\isanewline
@@ -212,15 +215,14 @@
 This concludes our demonstration of type classes based on orderings.  We
 remind our readers that \isa{Main} contains a theory of
 orderings phrased in terms of the usual \isa{{\isasymle}} and \isa{{\isacharless}}.
-It is recommended that, if possible,
-you base your own ordering relations on this theory.%
+If possible, base your own ordering relations on this theory.%
 \end{isamarkuptext}%
 %
 \isamarkupsubsubsection{Inconsistencies%
 }
 %
 \begin{isamarkuptext}%
-The reader may be wondering what happens if we, maybe accidentally,
+The reader may be wondering what happens if we
 attach an inconsistent set of axioms to a class. So far we have always
 avoided to add new axioms to HOL for fear of inconsistencies and suddenly it
 seems that we are throwing all caution to the wind. So why is there no
--- a/doc-src/TutorialI/Types/document/Overloading.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Overloading.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -6,7 +6,8 @@
 \begin{isamarkuptext}%
 \noindent
 This \isacommand{instance} declaration can be read like the declaration of
-a function on types: the constructor \isa{list} maps types of class \isa{term}, i.e.\ all HOL types, to types of class \isa{ordrel}, i.e.\
+a function on types.  The constructor \isa{list} maps types of class \isa{term} (all HOL types) to types of class \isa{ordrel};
+in other words,
 if \isa{ty\ {\isacharcolon}{\isacharcolon}\ term} then \isa{ty\ list\ {\isacharcolon}{\isacharcolon}\ ordrel}.
 Of course we should also define the meaning of \isa{{\isacharless}{\isacharless}{\isacharequal}} and
 \isa{{\isacharless}{\isacharless}} on lists:%
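One possible shape of these overloaded definitions, sketched in theory-source notation (not part of this changeset; the names le_list_def and less_list_def and the prefix ordering are illustrative only and need not match the tutorial's actual definitions):

    (* sketch: hypothetical names, prefix ordering chosen for illustration *)
    defs (overloaded)
    le_list_def:   "xs <<= (ys::'a list) \<equiv> \<exists>zs. ys = xs @ zs"
    less_list_def: "xs << (ys::'a list) \<equiv> xs <<= ys \<and> xs \<noteq> ys"

Note that this prefix ordering never compares list elements, which fits the instance declaration above: the element type only needs to be of class term.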
--- a/doc-src/TutorialI/Types/document/Overloading0.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Overloading0.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -37,11 +37,9 @@
 warnings to that effect.
 
 However, there is nothing to prevent the user from forming terms such as
-\isa{inverse\ {\isacharbrackleft}{\isacharbrackright}} and proving theorems such as \isa{inverse\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ inverse\ {\isacharbrackleft}{\isacharbrackright}}, although we never defined inverse on lists. We hasten to say
-that there is nothing wrong with such terms and theorems. But it would be
-nice if we could prevent their formation, simply because it is very likely
-that the user did not mean to write what he did. Thus she had better not waste
-her time pursuing it further. This requires the use of type classes.%
+\isa{inverse\ {\isacharbrackleft}{\isacharbrackright}} and proving theorems such as \isa{inverse\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ inverse\ {\isacharbrackleft}{\isacharbrackright}} even though inverse is not defined on lists.  Proving theorems about
+undefined constants does not endanger soundness, but it is pointless.
+Preventing the formation of such terms requires the use of type classes.%
 \end{isamarkuptext}%
 \end{isabellebody}%
 %%% Local Variables:
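A sketch of such a harmless but pointless theorem, in theory-source notation (not part of this changeset):

    (* sketch: well-typed and provable, yet says nothing useful *)
    lemma "inverse [] = inverse []"
    by(rule refl)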
--- a/doc-src/TutorialI/Types/document/Overloading1.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Overloading1.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -6,7 +6,7 @@
 }
 %
 \begin{isamarkuptext}%
-We now start with the theory of ordering relations, which we want to phrase
+We now start with the theory of ordering relations, which we shall phrase
 in terms of the two binary symbols \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}}
 to avoid clashes with \isa{{\isacharless}} and \isa{{\isasymle}} in theory \isa{Main}. To restrict the application of \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} we
 introduce the class \isa{ordrel}:%
@@ -15,9 +15,9 @@
 \begin{isamarkuptext}%
 \noindent
 This introduces a new class \isa{ordrel} and makes it a subclass of
-the predefined class \isa{term}\footnote{The quotes around \isa{term}
-simply avoid the clash with the command \isacommand{term}.} --- \isa{term}
-is the class of all HOL types, like ``Object'' in Java.
+the predefined class \isa{term}, which
+is the class of all HOL types.\footnote{The quotes around \isa{term}
+simply avoid the clash with the command \isacommand{term}.}
 This is a degenerate form of axiomatic type class without any axioms.
 Its sole purpose is to restrict the use of overloaded constants to meaningful
 instances:%
@@ -30,7 +30,7 @@
 constrained with a class; the constraint is propagated to the other
 occurrences automatically.
 
-So far there is not a single type of class \isa{ordrel}. To breathe life
+So far there are no types of class \isa{ordrel}. To breathe life
 into \isa{ordrel} we need to declare a type to be an \bfindex{instance} of
 \isa{ordrel}:%
 \end{isamarkuptext}%
@@ -40,7 +40,7 @@
 Command \isacommand{instance} actually starts a proof, namely that
 \isa{bool} satisfies all axioms of \isa{ordrel}.
 There are none, but we still need to finish that proof, which we do
-by invoking a fixed predefined method:%
+by invoking the \methdx{intro_classes} method:%
 \end{isamarkuptxt}%
 \isacommand{by}\ intro{\isacharunderscore}classes%
 \begin{isamarkuptext}%
@@ -56,13 +56,14 @@
 less{\isacharunderscore}bool{\isacharunderscore}def{\isacharcolon}\ {\isachardoublequote}P\ {\isacharless}{\isacharless}\ Q\ {\isasymequiv}\ {\isasymnot}P\ {\isasymand}\ Q{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-Now \isa{False\ {\isacharless}{\isacharless}{\isacharequal}\ P} is provable%
+Now \isa{False\ {\isacharless}{\isacharless}{\isacharequal}\ P} is provable:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}False\ {\isacharless}{\isacharless}{\isacharequal}\ P{\isachardoublequote}\isanewline
 \isacommand{by}{\isacharparenleft}simp\ add{\isacharcolon}\ le{\isacharunderscore}bool{\isacharunderscore}def{\isacharparenright}%
 \begin{isamarkuptext}%
 \noindent
-whereas \isa{{\isacharbrackleft}{\isacharbrackright}\ {\isacharless}{\isacharless}{\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}} is not even well-typed. In order to make it well-typed
+At this point, \isa{{\isacharbrackleft}{\isacharbrackright}\ {\isacharless}{\isacharless}{\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}} is not even well-typed.
+To make it well-typed,
 we need to make lists a type of class \isa{ordrel}:%
 \end{isamarkuptext}%
 \end{isabellebody}%
--- a/doc-src/TutorialI/Types/document/Overloading2.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Overloading2.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -16,7 +16,13 @@
 \ \ \ \ \ \ \ \ \ \ \ \ \ \ size\ xs\ {\isacharequal}\ size\ ys\ {\isasymand}\ {\isacharparenleft}{\isasymforall}i{\isacharless}size\ xs{\isachardot}\ xs{\isacharbang}i\ {\isacharless}{\isacharless}{\isacharequal}\ ys{\isacharbang}i{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-The infix function \isa{{\isacharbang}} yields the nth element of a list.%
+The infix function \isa{{\isacharbang}} yields the nth element of a list.
+
+\begin{warn}
+A type constructor can be instantiated in only one way to
+a given type class.  For example, our two instantiations of \isa{list} must
+reside in separate theories with disjoint scopes.\REMARK{Tobias, please check}
+\end{warn}%
 \end{isamarkuptext}%
 %
 \isamarkupsubsubsection{Predefined Overloading%
@@ -26,7 +32,7 @@
 HOL comes with a number of overloaded constants and corresponding classes.
 The most important ones are listed in Table~\ref{tab:overloading} in the appendix. They are
 defined on all numeric types and sometimes on other types as well, for example
-\isa{{\isacharminus}}, \isa{{\isasymle}} and \isa{{\isacharless}} on sets.
+$-$ and \isa{{\isasymle}} on sets.
 
 In addition there is a special input syntax for bounded quantifiers:
 \begin{center}
--- a/doc-src/TutorialI/Types/document/Pairs.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Pairs.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -10,8 +10,8 @@
 Ordered pairs were already introduced in \S\ref{sec:pairs}, but only with a minimal
 repertoire of operations: pairing and the two projections \isa{fst} and
 \isa{snd}. In any non-trivial application of pairs you will find that this
-quickly leads to unreadable formulae involving nests of projections. This
-section is concerned with introducing some syntactic sugar to overcome this
+quickly leads to unreadable nests of projections. This
+section introduces syntactic sugar to overcome this
 problem: pattern matching with tuples.%
 \end{isamarkuptext}%
 %
@@ -38,12 +38,11 @@
 internal representation.  Thus the internal and external form of a term may
 differ, which can affect proofs. If you want to avoid this complication,
 stick to \isa{fst} and \isa{snd} and write \isa{{\isasymlambda}p{\isachardot}\ fst\ p\ {\isacharplus}\ snd\ p}
-instead of \isa{{\isasymlambda}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}\ x{\isacharplus}y} (which denote the same function but are quite
-different terms).
+instead of \isa{{\isasymlambda}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}\ x{\isacharplus}y}.  These terms are distinct even though they
+denote the same function.
 
 Internally, \isa{{\isasymlambda}{\isacharparenleft}x{\isacharcomma}\ y{\isacharparenright}{\isachardot}\ t} becomes \isa{split\ {\isacharparenleft}{\isasymlambda}x\ y{\isachardot}\ t{\isacharparenright}}, where
-\isa{split}\indexbold{*split (constant)}
-is the uncurrying function of type \isa{{\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}b\ {\isasymRightarrow}\ {\isacharprime}c{\isacharparenright}\ {\isasymRightarrow}\ {\isacharprime}a\ {\isasymtimes}\ {\isacharprime}b\ {\isasymRightarrow}\ {\isacharprime}c} defined as
+\cdx{split} is the uncurrying function of type \isa{{\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}b\ {\isasymRightarrow}\ {\isacharprime}c{\isacharparenright}\ {\isasymRightarrow}\ {\isacharprime}a\ {\isasymtimes}\ {\isacharprime}b\ {\isasymRightarrow}\ {\isacharprime}c} defined as
 \begin{center}
 \isa{split\ {\isasymequiv}\ {\isasymlambda}c\ p{\isachardot}\ c\ {\isacharparenleft}fst\ p{\isacharparenright}\ {\isacharparenleft}snd\ p{\isacharparenright}}
 \hfill(\isa{split{\isacharunderscore}def})
@@ -68,15 +67,16 @@
 approach is neither elegant nor very practical in large examples, although it
 can be effective in small ones.
 
-If we step back and ponder why the above lemma presented a problem in the
-first place, we quickly realize that what we would like is to replace \isa{p} with some concrete pair \isa{{\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}}, in which case both sides of the
-equation would simplify to \isa{a} because of the simplification rules
-\isa{split\ c\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}\ {\isacharequal}\ c\ a\ b} and \isa{fst\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}\ {\isacharequal}\ a}.  This is the
-key problem one faces when reasoning about pattern matching with pairs: how to
-convert some atomic term into a pair.
+If we consider why this lemma presents a problem, 
+we quickly realize that we need to replace the variable~\isa{p} by some pair \isa{{\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}}.  Then both sides of the
+equation would simplify to \isa{a} by the simplification rules
+\isa{split\ c\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}\ {\isacharequal}\ c\ a\ b} and \isa{fst\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}\ {\isacharequal}\ a}.  
+Reasoning about tuple patterns requires some way of
+converting a variable of product type into a pair.
 
 In case of a subterm of the form \isa{split\ f\ p} this is easy: the split
 rule \isa{split{\isacharunderscore}split} replaces \isa{p} by a pair:%
+\index{*split (method)}%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}{\isacharparenleft}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}y{\isacharparenright}\ p\ {\isacharequal}\ snd\ p{\isachardoublequote}\isanewline
 \isacommand{apply}{\isacharparenleft}split\ split{\isacharunderscore}split{\isacharparenright}%
@@ -142,7 +142,7 @@
 \begin{isabelle}%
 \ {\isadigit{1}}{\isachardot}\ {\isasymAnd}a\ b{\isachardot}\ p\ {\isacharequal}\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}\ {\isasymLongrightarrow}\ swap\ {\isacharparenleft}swap\ p{\isacharparenright}\ {\isacharequal}\ p%
 \end{isabelle}
-Again, \isa{case{\isacharunderscore}tac} is applicable because \isa{{\isasymtimes}} is a datatype.
+Again, \methdx{case_tac} is applicable because \isa{{\isasymtimes}} is a datatype.
 The subgoal is easily proved by \isa{simp}.
 
 Splitting by \isa{case{\isacharunderscore}tac} also solves the previous examples and may thus
@@ -167,25 +167,25 @@
 \begin{isamarkuptext}%
 \noindent
 Note that we have intentionally included only \isa{split{\isacharunderscore}paired{\isacharunderscore}all}
-in the first simplification step. This time the reason was not merely
+in the first simplification step, and then we simplify again. 
+This time the reason was not merely
 pedagogical:
-\isa{split{\isacharunderscore}paired{\isacharunderscore}all} may interfere with certain congruence
-rules of the simplifier, i.e.%
+\isa{split{\isacharunderscore}paired{\isacharunderscore}all} may interfere with other parts
+of the simplifier, such as its congruence rules.
+The following command could fail (here it does not)
+where two separate \isa{simp} applications succeed.%
 \end{isamarkuptext}%
 \isacommand{apply}{\isacharparenleft}simp\ add{\isacharcolon}split{\isacharunderscore}paired{\isacharunderscore}all{\isacharparenright}%
 \begin{isamarkuptext}%
 \noindent
-may fail (here it does not) where the above two stages succeed.
-
-Finally, all \isa{{\isasymforall}} and \isa{{\isasymexists}}-quantified variables are split
-automatically by the simplifier:%
+Finally, the simplifier automatically splits all \isa{{\isasymforall}} and
+\isa{{\isasymexists}}-quantified variables:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}{\isasymforall}p{\isachardot}\ {\isasymexists}q{\isachardot}\ swap\ p\ {\isacharequal}\ swap\ q{\isachardoublequote}\isanewline
-\isacommand{apply}\ simp\isanewline
-\isacommand{done}%
+\isacommand{by}\ simp%
 \begin{isamarkuptext}%
 \noindent
-In case you would like to turn off this automatic splitting, just disable the
+To turn off this automatic splitting, just disable the
 responsible simplification rules:
 \begin{center}
 \isa{{\isacharparenleft}{\isasymforall}x{\isachardot}\ P\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}{\isasymforall}a\ b{\isachardot}\ P\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}{\isacharparenright}}
--- a/doc-src/TutorialI/Types/document/Typedef.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/document/Typedef.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -9,8 +9,8 @@
 \label{sec:adv-typedef}
 For most applications, a combination of predefined types like \isa{bool} and
 \isa{{\isasymRightarrow}} with recursive datatypes and records is quite sufficient. Very
-occasionally you may feel the need for a more advanced type. If you cannot do
-without that type, and you are certain that it is not definable by any of the
+occasionally you may feel the need for a more advanced type.  If you
+are certain that your type is not definable by any of the
 standard means, then read on.
 \begin{warn}
   Types in HOL must be non-empty; otherwise the quantifier rules would be
@@ -165,35 +165,24 @@
 we merely need to prove that \isa{A}, \isa{B} and \isa{C} are distinct
 and that they exhaust the type.
 
-We start with a helpful version of injectivity of \isa{Abs{\isacharunderscore}three} on the
-representing subset:%
-\end{isamarkuptext}%
-\isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\isanewline
-\ {\isachardoublequote}{\isasymlbrakk}\ x\ {\isasymin}\ three{\isacharsemicolon}\ y\ {\isasymin}\ three\ {\isasymrbrakk}\ {\isasymLongrightarrow}\ {\isacharparenleft}Abs{\isacharunderscore}three\ x\ {\isacharequal}\ Abs{\isacharunderscore}three\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}x{\isacharequal}y{\isacharparenright}{\isachardoublequote}%
-\begin{isamarkuptxt}%
-\noindent
-We prove each direction separately. From \isa{Abs{\isacharunderscore}three\ x\ {\isacharequal}\ Abs{\isacharunderscore}three\ y}
-we use \isa{arg{\isacharunderscore}cong} to apply \isa{Rep{\isacharunderscore}three} to both sides,
-deriving \begin{isabelle}%
-Rep{\isacharunderscore}three\ {\isacharparenleft}Abs{\isacharunderscore}three\ x{\isacharparenright}\ {\isacharequal}\ Rep{\isacharunderscore}three\ {\isacharparenleft}Abs{\isacharunderscore}three\ y{\isacharparenright}%
-\end{isabelle}
-Thus we get the required \isa{x\ {\isacharequal}\ y} by simplification with \isa{Abs{\isacharunderscore}three{\isacharunderscore}inverse}. 
-The other direction
-is trivial by simplification:%
-\end{isamarkuptxt}%
-\isacommand{apply}{\isacharparenleft}rule\ iffI{\isacharparenright}\isanewline
-\ \isacommand{apply}{\isacharparenleft}drule{\isacharunderscore}tac\ f\ {\isacharequal}\ Rep{\isacharunderscore}three\ \isakeyword{in}\ arg{\isacharunderscore}cong{\isacharparenright}\isanewline
-\ \isacommand{apply}{\isacharparenleft}simp\ add{\isacharcolon}Abs{\isacharunderscore}three{\isacharunderscore}inverse{\isacharparenright}\isanewline
-\isacommand{by}\ simp%
-\begin{isamarkuptext}%
-\noindent
-Analogous lemmas can be proved in the same way for arbitrary type definitions.
-
+In processing our \isacommand{typedef} declaration, 
+Isabelle helpfully proves several lemmas.
+One, \isa{Abs{\isacharunderscore}three{\isacharunderscore}inject},
+expresses that \isa{Abs{\isacharunderscore}three} is injective on the representing subset:
+\begin{center}
+\isa{{\isasymlbrakk}x\ {\isasymin}\ three{\isacharsemicolon}\ y\ {\isasymin}\ three{\isasymrbrakk}\ {\isasymLongrightarrow}\ {\isacharparenleft}Abs{\isacharunderscore}three\ x\ {\isacharequal}\ Abs{\isacharunderscore}three\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}x\ {\isacharequal}\ y{\isacharparenright}}
+\end{center}
+Another, \isa{Rep{\isacharunderscore}three{\isacharunderscore}inject}, expresses that the representation
+function is also injective:
+\begin{center}
+\isa{{\isacharparenleft}Rep{\isacharunderscore}three\ x\ {\isacharequal}\ Rep{\isacharunderscore}three\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}x\ {\isacharequal}\ y{\isacharparenright}}
+\end{center}
 Distinctness of \isa{A}, \isa{B} and \isa{C} follows immediately
-if we expand their definitions and rewrite with the above simplification rule:%
+if we expand their definitions and rewrite with the injectivity
+of \isa{Abs{\isacharunderscore}three}:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}A\ {\isasymnoteq}\ B\ {\isasymand}\ B\ {\isasymnoteq}\ A\ {\isasymand}\ A\ {\isasymnoteq}\ C\ {\isasymand}\ C\ {\isasymnoteq}\ A\ {\isasymand}\ B\ {\isasymnoteq}\ C\ {\isasymand}\ C\ {\isasymnoteq}\ B{\isachardoublequote}\isanewline
-\isacommand{by}{\isacharparenleft}simp\ add{\isacharcolon}A{\isacharunderscore}def\ B{\isacharunderscore}def\ C{\isacharunderscore}def\ three{\isacharunderscore}def{\isacharparenright}%
+\isacommand{by}{\isacharparenleft}simp\ add{\isacharcolon}\ Abs{\isacharunderscore}three{\isacharunderscore}inject\ A{\isacharunderscore}def\ B{\isacharunderscore}def\ C{\isacharunderscore}def\ three{\isacharunderscore}def{\isacharparenright}%
 \begin{isamarkuptext}%
 \noindent
 Of course we rely on the simplifier to solve goals like \isa{{\isadigit{0}}\ {\isasymnoteq}\ {\isadigit{1}}}.
@@ -220,7 +209,8 @@
 \isacommand{apply}\ simp{\isacharunderscore}all\isanewline
 \isacommand{done}%
 \begin{isamarkuptext}%
-Now the case distinction lemma on type \isa{three} is easy to derive if you know how to:%
+Now the case distinction lemma on type \isa{three} is easy to derive if you 
+know how:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ three{\isacharunderscore}cases{\isacharcolon}\ {\isachardoublequote}{\isasymlbrakk}\ P\ A{\isacharsemicolon}\ P\ B{\isacharsemicolon}\ P\ C\ {\isasymrbrakk}\ {\isasymLongrightarrow}\ P\ x{\isachardoublequote}%
 \begin{isamarkuptxt}%
--- a/doc-src/TutorialI/Types/numerics.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/numerics.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -3,6 +3,7 @@
 \section{Numbers}
 \label{sec:numbers}
 
+\index{numbers|(}%
 Until now, our numerical examples have used the type of \textbf{natural
 numbers},
 \isa{nat}.  This is a recursive datatype generated by the constructors
@@ -45,7 +46,7 @@
 They begin 
 with a number sign (\isa{\#}), have an optional minus sign (\isa{-}) and 
 then one or more decimal digits. Examples are \isa{\#0}, \isa{\#-3} 
-and \isa{\#441223334678}.
+and \isa{\#441223334678}.\REMARK{will need updating?}
 
 Literals look like constants, but they abbreviate 
 terms, representing the number in a two's complement binary notation. 
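An illustrative sketch of literal arithmetic (not part of this changeset), assuming the \#-literal syntax current in this version and that the default simplifier setup evaluates integer literals; if it does not, a method such as arith may be needed instead:

    (* sketch: assumes simp evaluates integer literal arithmetic *)
    lemma "#300 + #100 = (#400::int)"
    by simp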
@@ -110,7 +111,7 @@
 in a variety of additive types. The symbols \sdx{1} and \sdx{2} are 
 not constants but abbreviations for \isa{Suc 0} and \isa{Suc(Suc 0)},
 respectively. The literals \isa{\#0}, \isa{\#1} and \isa{\#2}  are
-syntactically different from \isa{0}, \isa{1} and \isa{2}. You  will
+syntactically different from \isa{0}, \isa{1} and \isa{2}. You  will\REMARK{will need updating?}
 sometimes prefer one notation to the other. Literals are  obviously
 necessary to express large values, while \isa{0} and \isa{Suc}  are needed
 in order to match many theorems, including the rewrite  rules for primitive
@@ -224,7 +225,7 @@
 d))
 \rulename{nat_diff_split}
 \end{isabelle}
-For example, it proves the following fact, which lies outside the scope of
+For example, this splitting proves the following fact, which lies outside the scope of
 linear arithmetic:\REMARK{replace by new example!}
 \begin{isabelle}
 \isacommand{lemma}\ "(n-1)*(n+1)\ =\ n*n\ -\ 1"\isanewline
@@ -245,8 +246,10 @@
 +\ z)
 \rulename{add_left_commute}
 \end{isabelle}
-The name \isa{add_ac} refers to the list of all three theorems, similarly
-there is \isa{mult_ac}.  Here is an example of the sorting effect.  Start
+The name \isa{add_ac}\index{*add_ac (theorems)} 
+refers to the list of all three theorems; similarly
+there is \isa{mult_ac}.\index{*mult_ac (theorems)} 
+Here is an example of the sorting effect.  Start
 with this goal:
 \begin{isabelle}
 \ 1.\ Suc\ (i\ +\ j\ *\ l\ *\ k\ +\ m\ *\ n)\ =\
@@ -297,9 +300,11 @@
 
 Concerning simplifier tricks, we have no need to eliminate subtraction: it
 is well-behaved.  As with the natural numbers, the simplifier can sort the
-operands of sums and products.  The name \isa{zadd_ac} refers to the
+operands of sums and products.  The name \isa{zadd_ac}\index{*zadd_ac (theorems)}
+refers to the
 associativity and commutativity theorems for integer addition, while
-\isa{zmult_ac} has the analogous theorems for multiplication.  The
+\isa{zmult_ac}\index{*zmult_ac (theorems)}
+has the analogous theorems for multiplication.  The
 prefix~\isa{z} in many theorem names recalls the use of $\mathbb{Z}$ to
 denote the set of integers.
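An illustrative sketch of the sorting effect at type int (not part of this changeset), assuming zadd_ac and zmult_ac behave like their natural-number counterparts:

    (* sketch: ordered AC rewriting normalises both sides to the same form *)
    lemma "(i::int) + j*k + m = m + k*j + i"
    by(simp add: zadd_ac zmult_ac)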
 
@@ -450,4 +455,5 @@
 about limits, differentiation and integration~\cite{fleuriot-jcm}.  The
 development defines an infinitely large number, \isa{omega} and an
 infinitely small positive number, \isa{epsilon}.  The 
-relation $x\approx y$ means ``$x$ is infinitely close to~$y$''.
+relation $x\approx y$ means ``$x$ is infinitely close to~$y$''.%
+\index{numbers|)}
\ No newline at end of file
--- a/doc-src/TutorialI/Types/types.tex	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/Types/types.tex	Thu Aug 09 18:12:15 2001 +0200
@@ -18,7 +18,8 @@
 
 The material in this section goes beyond the needs of most novices.  Serious
 users should at least skim the sections on basic types and on type classes.
-The latter is fairly advanced: read the beginning to understand what it is
+The latter material is fairly advanced; read the beginning to understand what
+it is 
 about, but consult the rest only when necessary.
 
 \input{Types/numerics}
@@ -44,8 +45,9 @@
 an axiomatic specification of a class of types. Thus we can talk about a type
 $\tau$ being in a class $C$, which is written $\tau :: C$.  This is the case if
 $\tau$ satisfies the axioms of $C$. Furthermore, type classes can be
-organized in a hierarchy. Thus there is the notion of a class $D$ being a
-\textbf{subclass} of a class $C$, written $D < C$. This is the case if all
+organized in a hierarchy.  Thus there is the notion of a class $D$ being a
+\textbf{subclass}\index{subclasses}
+of a class $C$, written $D < C$. This is the case if all
 axioms of $C$ are also provable in $D$. We introduce these concepts
 by means of a running example, ordering relations.
 
--- a/doc-src/TutorialI/tutorial.ind	Thu Aug 09 10:17:45 2001 +0200
+++ b/doc-src/TutorialI/tutorial.ind	Thu Aug 09 18:12:15 2001 +0200
@@ -49,6 +49,7 @@
   \item \texttt {abs}, \bold{189}
   \item absolute value, 135
   \item \isa {add} (modifier), 27
+  \item \isa {add_ac} (theorems), 134
   \item \isa {add_assoc} (theorem), \bold{134}
   \item \isa {add_commute} (theorem), \bold{134}
   \item \isa {add_mult_distrib} (theorem), \bold{133}
@@ -133,13 +134,12 @@
   \item converse
     \subitem of a relation, \bold{96}
   \item \isa {converse_iff} (theorem), \bold{96}
-  \item CTL, 100--110
 
   \indexspace
 
   \item \isacommand {datatype} (command), 7, 36--41
   \item datatypes, 15--20
-    \subitem and nested recursion, 38
+    \subitem and nested recursion, 38, 42
     \subitem mutually recursive, 36
   \item \isacommand {defer} (command), 14, 84
   \item Definitional Approach, 24
@@ -204,7 +204,7 @@
   \indexspace
 
   \item \isa {False} (constant), 3
-  \item \isa {fast} (method), 76
+  \item \isa {fast} (method), 76, 108
   \item Fibonacci function, 44
   \item \isa {finite} (symbol), 93
   \item \isa {Finites} (constant), 93
@@ -229,6 +229,8 @@
   \item generalizing induction formulae, 32
   \item Girard, Jean-Yves, \fnote{55}
   \item Gordon, Mike, 1
+  \item grammars
+    \subitem defining inductively, 124--129
   \item ground terms example, 119--124
 
   \indexspace
@@ -237,6 +239,7 @@
   \item higher-order pattern, \bold{159}
   \item Hilbert's $\varepsilon$-operator, 70
   \item HOLCF, 41
+  \item Hopcroft, J. E., 129
   \item \isa {hypreal} (type), 137
 
   \indexspace
@@ -249,7 +252,7 @@
   \item identity relation, \bold{96}
   \item \isa {if} expressions, 3, 4
     \subitem simplification of, 31
-    \subitem splitting of, 29
+    \subitem splitting of, 29, 47
   \item if-and-only-if, 3
   \item \isa {iff} (attribute), 74, 86, 114
   \item \isa {iffD1} (theorem), \bold{78}
@@ -261,6 +264,7 @@
   \item \isa {Image_iff} (theorem), \bold{96}
   \item \isa {impI} (theorem), \bold{56}
   \item implication, 56--57
+  \item \isa {ind_cases} (method), 115
   \item \isa {induct_tac} (method), 10, 17, 50, 172
   \item induction, 168--175
     \subitem recursion, 49--50
@@ -339,7 +343,7 @@
   \item \isa {Main} (theory), 2
   \item major premise, \bold{59}
   \item \isa {max} (constant), 21, 22
-  \item measure function, \bold{45}, \bold{98}
+  \item measure functions, 45, 98
   \item \isa {measure_def} (theorem), \bold{99}
   \item meta-logic, \bold{64}
   \item methods, \bold{14}
@@ -347,12 +351,14 @@
   \item \isa {mod} (symbol), 21
   \item \isa {mod_div_equality} (theorem), \bold{133}
   \item \isa {mod_mult_distrib} (theorem), \bold{133}
+  \item model checking example, 100--110
   \item \emph{modus ponens}, 51, 56
   \item \isa {mono_def} (theorem), \bold{100}
   \item monotone functions, \bold{100}, 123
     \subitem and inductive definitions, 121--122
   \item \isa {more} (constant), 140--142
   \item \isa {mp} (theorem), \bold{56}
+  \item \isa {mult_ac} (theorems), 134
   \item multiset ordering, \bold{99}
 
   \indexspace
@@ -370,6 +376,7 @@
   \item \isa {None} (constant), \bold{22}
   \item \isa {notE} (theorem), \bold{57}
   \item \isa {notI} (theorem), \bold{57}
+  \item numbers, 131--137
   \item numeric literals, 132
     \subitem for type \protect\isa{nat}, 133
     \subitem for type \protect\isa{real}, 136
@@ -393,8 +400,10 @@
   \item pairs and tuples, 22, 137--140
   \item parent theories, \bold{2}
   \item partial function, 164
+  \item pattern matching
+    \subitem and \isacommand{recdef}, 45
   \item pattern, higher-order, \bold{159}
-  \item PDL, 102--105
+  \item PDL, 102--104
   \item permutative rewrite rule, \bold{158}
   \item \isacommand {pr} (command), 14, 84
   \item \isacommand {prefer} (command), 14, 84
@@ -445,6 +454,8 @@
   \item recursion induction, 49--50
   \item \isacommand {redo} (command), 14
   \item reflexive and transitive closure, 96--98
+  \item reflexive transitive closure
+    \subitem defining inductively, 116--119
   \item relations, 95--98
     \subitem well-founded, 98--99
   \item \isa {rename_tac} (method), 66--67
@@ -500,15 +511,15 @@
   \item sorts, 150
   \item \isa {spec} (theorem), \bold{64}
   \item \isa {split} (attribute), 30
-  \item \isa {split} (constant), \bold{137}
-  \item \isa {split} (method), 29
-  \item \isa {split} (method, attr.), 29--31
-  \item \isa {split} (modified), 30
+  \item \isa {split} (constant), 137
+  \item \isa {split} (method), 29, 138
+  \item \isa {split} (modifier), 30
   \item split rule, \bold{30}
   \item \isa {split_if} (theorem), 30
   \item \isa {split_if_asm} (theorem), 30
   \item \isa {ssubst} (theorem), \bold{61}
   \item structural induction, \see{induction, structural}{1}
+  \item subgoal numbering, 44
   \item \isa {subgoal_tac} (method), 82
   \item subgoals, 10
   \item subset relation, \bold{90}
@@ -545,6 +556,7 @@
   \item \isa {trace_simp} (flag), 31
   \item tracing the simplifier, \bold{31}
   \item \isa {trancl_trans} (theorem), \bold{97}
+  \item transition systems, 101
   \item \isacommand {translations} (command), 24
   \item tries, 41--44
   \item \isa {True} (constant), 3
@@ -555,7 +567,7 @@
   \item type inference, \bold{3}
   \item type synonyms, 23
   \item type variables, 3
-  \item \isacommand {typedecl} (command), 150
+  \item \isacommand {typedecl} (command), 101, 150
   \item \isacommand {typedef} (command), 151--155
   \item types, 2--3
     \subitem declaring, 150--151
@@ -564,6 +576,7 @@
 
   \indexspace
 
+  \item Ullman, J. D., 129
   \item \texttt {UN}, \bold{189}
   \item \texttt {Un}, \bold{189}
   \item \isa {UN_E} (theorem), \bold{92}
@@ -593,7 +606,16 @@
 
   \item Wenzel, Markus, v
   \item \isa {wf_induct} (theorem), \bold{99}
+  \item \isa {wf_inv_image} (theorem), \bold{99}
+  \item \isa {wf_less_than} (theorem), \bold{98}
+  \item \isa {wf_lex_prod} (theorem), \bold{99}
+  \item \isa {wf_measure} (theorem), \bold{99}
   \item \isa {while} (constant), 167
   \item \isa {While_Combinator} (theory), 167
 
+  \indexspace
+
+  \item \isa {zadd_ac} (theorems), 135
+  \item \isa {zmult_ac} (theorems), 135
+
 \end{theindex}