doc update
author nipkow
Tue, 24 Apr 2012 09:09:55 +0200
changeset 47711 c1cca2a052e4
parent 47710 4ced56100757
child 47712 81c3c4e01263
doc update
doc-src/ProgProve/Thys/Bool_nat_list.thy
doc-src/ProgProve/Thys/Isar.thy
doc-src/ProgProve/Thys/Logic.thy
doc-src/ProgProve/Thys/Types_and_funs.thy
doc-src/ProgProve/Thys/document/Bool_nat_list.tex
doc-src/ProgProve/Thys/document/Isar.tex
doc-src/ProgProve/Thys/document/Logic.tex
doc-src/ProgProve/Thys/document/Types_and_funs.tex
--- a/doc-src/ProgProve/Thys/Bool_nat_list.thy	Mon Apr 23 23:55:06 2012 +0200
+++ b/doc-src/ProgProve/Thys/Bool_nat_list.thy	Tue Apr 24 09:09:55 2012 +0200
@@ -138,11 +138,11 @@
 Throughout this book, \concept{IH} will stand for ``induction hypothesis''.
 
 We have now seen three proofs of @{prop"add m 0 = 0"}: the Isabelle one, the
-terse 4 lines explaining the base case and the induction step, and just now a
+terse four lines explaining the base case and the induction step, and just now a
 model of a traditional inductive proof. The three proofs differ in the level
 of detail given and the intended reader: the Isabelle proof is for the
 machine, the informal proofs are for humans. Although this book concentrates
-of Isabelle proofs, it is important to be able to rephrase those proofs
+on Isabelle proofs, it is important to be able to rephrase those proofs
 as informal text comprehensible to a reader familiar with traditional
 mathematical proofs. Later on we will introduce an Isabelle proof language
 that is closer to traditional informal mathematical language and is often
@@ -162,7 +162,7 @@
 
 text{*
 \begin{itemize}
-\item Type @{typ "'a list"} is the type of list over elements of type @{typ 'a}. Because @{typ 'a} is a type variable, lists are in fact \concept{polymorphic}: the elements of a list can be of arbitrary type (but must all be of the same type).
+\item Type @{typ "'a list"} is the type of lists over elements of type @{typ 'a}. Because @{typ 'a} is a type variable, lists are in fact \concept{polymorphic}: the elements of a list can be of arbitrary type (but must all be of the same type).
 \item Lists have two constructors: @{const Nil}, the empty list, and @{const Cons}, which puts an element (of type @{typ 'a}) in front of a list (of type @{typ "'a list"}).
 Hence all lists are of the form @{const Nil}, or @{term"Cons x Nil"},
 or @{term"Cons x (Cons y Nil)"} etc.
@@ -218,7 +218,7 @@
 you need to prove
 \begin{enumerate}
 \item the base case @{prop"P(Nil)"} and
-\item the inductive case @{prop"P(Cons x xs)"} under the assumption @{prop"P(xs)"}, for some arbitrary but fixed @{text xs}.
+\item the inductive case @{prop"P(Cons x xs)"} under the assumption @{prop"P(xs)"}, for some arbitrary but fixed @{text x} and @{text xs}.
 \end{enumerate}
 This is often called \concept{structural induction}.
 
@@ -302,7 +302,7 @@
 We find that this time @{text"auto"} solves the base case, but the
 induction step merely simplifies to
 @{subgoals[display,indent=0,goals_limit=1]}
-The the missing lemma is associativity of @{const app},
+The missing lemma is associativity of @{const app},
 which we insert in front of the failed lemma @{text rev_app}.
 
 \subsubsection{Associativity of @{const app}}
--- a/doc-src/ProgProve/Thys/Isar.thy	Mon Apr 23 23:55:06 2012 +0200
+++ b/doc-src/ProgProve/Thys/Isar.thy	Tue Apr 24 09:09:55 2012 +0200
@@ -172,7 +172,7 @@
 \end{tabular}
 \medskip
 
-\noindent The \isacom{using} idiom de-emphasises the used facts by moving them
+\noindent The \isacom{using} idiom de-emphasizes the used facts by moving them
 behind the proposition.
 
 \subsection{Structured lemma statements: \isacom{fixes}, \isacom{assumes}, \isacom{shows}}
@@ -228,7 +228,7 @@
 default. The patterns are phrased in terms of \isacom{show} but work for
 \isacom{have} and \isacom{lemma}, too.
 
-We start with two forms of \concept{case distinction}:
+We start with two forms of \concept{case analysis}:
 starting from a formula @{text P} we have the two cases @{text P} and
 @{prop"~P"}, and starting from a fact @{prop"P \<or> Q"}
 we have the two cases @{text P} and @{text Q}:
@@ -548,7 +548,7 @@
 
 Sometimes one would like to prove some lemma locally within a proof.
 A lemma that shares the current context of assumptions but that
-has its own assumptions and is generalised over its locally fixed
+has its own assumptions and is generalized over its locally fixed
 variables at the end. This is what a \concept{raw proof block} does:
 \begin{quote}
 @{text"{"} \isacom{fix} @{text"x\<^isub>1 \<dots> x\<^isub>n"}\\
@@ -590,14 +590,14 @@
 the fact just proved, in this case the preceding block. In general,
 \isacom{note} introduces a new name for one or more facts.
 
-\section{Case distinction and induction}
+\section{Case analysis and induction}
 
-\subsection{Datatype case distinction}
+\subsection{Datatype case analysis}
 
-We have seen case distinction on formulas. Now we want to distinguish
+We have seen case analysis on formulas. Now we want to distinguish
 which form some term takes: is it @{text 0} or of the form @{term"Suc n"},
 is it @{term"[]"} or of the form @{term"x#xs"}, etc. Here is a typical example
-proof by case distinction on the form of @{text xs}:
+proof by case analysis on the form of @{text xs}:
 *}
 
 lemma "length(tl xs) = length xs - 1"
@@ -650,12 +650,12 @@
 Never mind the details, just focus on the pattern:
 *}
 
-lemma "\<Sum>{0..n::nat} = n*(n+1) div 2" (is "?P n")
+lemma "\<Sum>{0..n::nat} = n*(n+1) div 2"
 proof (induction n)
   show "\<Sum>{0..0::nat} = 0*(0+1) div 2" by simp
 next
   fix n assume "\<Sum>{0..n::nat} = n*(n+1) div 2"
-  thus "\<Sum>{0..Suc n::nat} = Suc n*(Suc n+1) div 2" by simp
+  thus "\<Sum>{0..Suc n} = Suc n*(Suc n+1) div 2" by simp
 qed
 
 text{* Except for the rewrite steps, everything is explicitly given. This
@@ -920,13 +920,13 @@
 
 \subsection{Rule inversion}
 
-Rule inversion is case distinction on which rule could have been used to
+Rule inversion is case analysis of which rule could have been used to
 derive some fact. The name \concept{rule inversion} emphasizes that we are
 reasoning backwards: by which rules could some given fact have been proved?
 For the inductive definition of @{const ev}, rule inversion can be summarized
 like this:
 @{prop[display]"ev n \<Longrightarrow> n = 0 \<or> (EX k. n = Suc(Suc k) \<and> ev k)"}
-The realisation in Isabelle is a case distinction.
+The realization in Isabelle is a case analysis.
 A simple example is the proof that @{prop"ev n \<Longrightarrow> ev (n - 2)"}. We
 already went through the details informally in \autoref{sec:Logic:even}. This
 is the Isar proof:
@@ -946,7 +946,7 @@
 end
 (*>*)
 
-text{* The key point here is that a case distinction over some inductively
+text{* The key point here is that a case analysis over some inductively
 defined predicate is triggered by piping the given fact
 (here: \isacom{from}~@{text this}) into a proof by @{text cases}.
 Let us examine the assumptions available in each case. In case @{text ev0}
--- a/doc-src/ProgProve/Thys/Logic.thy	Mon Apr 23 23:55:06 2012 +0200
+++ b/doc-src/ProgProve/Thys/Logic.thy	Tue Apr 24 09:09:55 2012 +0200
@@ -5,12 +5,12 @@
 (*>*)
 text{*
 \vspace{-5ex}
-\section{Logic and Proof Beyond Equality}
+\section{Logic and proof beyond equality}
 \label{sec:Logic}
 
 \subsection{Formulas}
 
-The basic syntax of formulas (\textit{form} below)
+The core syntax of formulas (\textit{form} below)
 provides the standard logical constructs, in decreasing precedence:
 \[
 \begin{array}{rcl}
@@ -27,14 +27,14 @@
  &\mid& @{prop"\<forall>x. form"} ~\mid~  @{prop"\<exists>x. form"}
 \end{array}
 \]
-Terms are the ones we have seen all along, built from constant, variables,
+Terms are the ones we have seen all along, built from constants, variables,
 function application and @{text"\<lambda>"}-abstraction, including all the syntactic
 sugar like infix symbols, @{text "if"}, @{text "case"} etc.
 \begin{warn}
 Remember that formulas are simply terms of type @{text bool}. Hence
 @{text "="} also works for formulas. Beware that @{text"="} has a higher
 precedence than the other logical operators. Hence @{prop"s = t \<and> A"} means
-@{text"(s = t) \<and> A"}, and @{prop"A\<and>B = B\<and>A"} means @{text"A \<and> (B = A) \<and> B"}.
+@{text"(s = t) \<and> A"}, and @{prop"A\<and>B = B\<and>A"} means @{text"A \<and> (B = B) \<and> A"}.
 Logical equivalence can also be written with
 @{text "\<longleftrightarrow>"} instead of @{text"="}, where @{text"\<longleftrightarrow>"} has the same low
 precedence as @{text"\<longrightarrow>"}. Hence @{text"A \<and> B \<longleftrightarrow> B \<and> A"} really means
@@ -51,7 +51,7 @@
 @{text "\<exists>"} & \xsymbol{exists} & \texttt{EX}\\
 @{text "\<lambda>"} & \xsymbol{lambda} & \texttt{\%}\\
 @{text "\<longrightarrow>"} & \texttt{-{}->}\\
-@{text "\<longleftrightarrow>"} & \texttt{<-{}->}\\
+@{text "\<longleftrightarrow>"} & \texttt{<->}\\
 @{text "\<and>"} & \texttt{/\char`\\} & \texttt{\&}\\
 @{text "\<or>"} & \texttt{\char`\\/} & \texttt{|}\\
 @{text "\<not>"} & \xsymbol{not} & \texttt{\char`~}\\
@@ -63,7 +63,7 @@
 and the third column shows ASCII representations that stay fixed.
 \begin{warn}
 The implication @{text"\<Longrightarrow>"} is part of the Isabelle framework. It structures
-theorems and proof states, separating assumptions from conclusion.
+theorems and proof states, separating assumptions from conclusions.
 The implication @{text"\<longrightarrow>"} is part of the logic HOL and can occur inside the
 formulas that make up the assumptions and conclusion.
 Theorems should be of the form @{text"\<lbrakk> A\<^isub>1; \<dots>; A\<^isub>n \<rbrakk> \<Longrightarrow> A"},
@@ -74,7 +74,7 @@
 \subsection{Sets}
 
 Sets of elements of type @{typ 'a} have type @{typ"'a set"}.
-They can be finite or infinite. Sets come with the usual notations:
+They can be finite or infinite. Sets come with the usual notation:
 \begin{itemize}
 \item @{term"{}"},\quad @{text"{e\<^isub>1,\<dots>,e\<^isub>n}"}
 \item @{prop"e \<in> A"},\quad @{prop"A \<subseteq> B"}
@@ -87,7 +87,7 @@
 \begin{warn}
 In @{term"{x. P}"} the @{text x} must be a variable. Set comprehension
 involving a proper term @{text t} must be written
-@{term[source]"{t |x y z. P}"},
+@{term[source]"{t | x y z. P}"},
 where @{text "x y z"} are the free variables in @{text t}.
 This is just a shorthand for @{term"{v. EX x y z. v = t \<and> P}"}, where
 @{text v} is a new variable.
@@ -153,8 +153,8 @@
 succeeds where @{text fastforce} fails.
 
 The method of choice for complex logical goals is @{text blast}. In the
-following example, @{text T} and @{text A} are two binary predicates, and it
-is shown that @{text T} is total, @{text A} is antisymmetric and @{text T} is
+following example, @{text T} and @{text A} are two binary predicates. It
+is shown that if @{text T} is total, @{text A} is antisymmetric and @{text T} is
 a subset of @{text A}, then @{text A} is a subset of @{text T}:
 *}
 
@@ -344,12 +344,12 @@
 automatically selects the appropriate rule for the current subgoal.
 
 You can also turn your own theorems into introduction rules by giving them
-them @{text"intro"} attribute, analogous to the @{text simp} attribute.  In
+the @{text"intro"} attribute, analogous to the @{text simp} attribute.  In
 that case @{text blast}, @{text fastforce} and (to a limited extent) @{text
 auto} will automatically backchain with those theorems. The @{text intro}
 attribute should be used with care because it increases the search space and
-can lead to nontermination.  Sometimes it is better to use it only in a
-particular calls of @{text blast} and friends. For example,
+can lead to nontermination.  Sometimes it is better to use it only in
+specific calls of @{text blast} and friends. For example,
 @{thm[source] le_trans}, transitivity of @{text"\<le>"} on type @{typ nat},
 is not an introduction rule by default because of the disastrous effect
 on the search space, but can be useful in specific situations:
@@ -419,8 +419,11 @@
 \label{sec:inductive-defs}
 
 Inductive definitions are the third important definition facility, after
-datatypes and recursive function. In fact, they are the key construct in the
+datatypes and recursive functions.
+\sem
+In fact, they are the key construct in the
 definition of operational semantics in the second part of the book.
+\endsem
 
 \subsection{An example: even numbers}
 \label{sec:Logic:even}
@@ -506,9 +509,9 @@
 case @{thm[source]evSS} we have \mbox{@{prop"m = n+(2::nat)"}} and may assume
 @{prop"ev n"}, which implies @{prop"ev (m - 2)"} because @{text"m - 2 = (n +
 2) - 2 = n"}. We did not need the induction hypothesis at all for this proof,
-it is just a case distinction on which rule was used, but having @{prop"ev
+it is just a case analysis of which rule was used, but having @{prop"ev
 n"} at our disposal in case @{thm[source]evSS} was essential.
-This case distinction over rules is also called ``rule inversion''
+This case analysis of rules is also called ``rule inversion''
 and is discussed in more detail in \autoref{ch:Isar}.
 
 \subsubsection{In Isabelle}
@@ -589,7 +592,7 @@
 definition only expresses the positive information directly. The negative
 information, for example, that @{text 1} is not even, has to be proved from
 it (by induction or rule inversion). On the other hand, rule induction is
-Taylor made for proving \mbox{@{prop"ev n \<Longrightarrow> P n"}} because it only asks you
+tailor-made for proving \mbox{@{prop"ev n \<Longrightarrow> P n"}} because it only asks you
 to prove the positive cases. In the proof of @{prop"even n \<Longrightarrow> P n"} by
 computation induction via @{thm[source]even.induct}, we are also presented
 with the trivial negative cases. If you want the convenience of both
@@ -599,8 +602,8 @@
 
 But many concepts do not admit a recursive definition at all because there is
 no datatype for the recursion (for example, the transitive closure of a
-relation), or the recursion would not terminate (for example, the operational
-semantics in the second part of this book). Even if there is a recursive
+relation), or the recursion would not terminate (for example,
+an interpreter for a programming language). Even if there is a recursive
 definition, if we are only interested in the positive information, the
 inductive definition may be much simpler.
 
@@ -609,8 +612,11 @@
 
 Evenness is really more conveniently expressed recursively than inductively.
 As a second and very typical example of an inductive definition we define the
-reflexive transitive closure. It will also be an important building block for
+reflexive transitive closure.
+\sem
+It will also be an important building block for
 some of the semantics considered in the second part of the book.
+\endsem
 
 The reflexive transitive closure, called @{text star} below, is a function
 that maps a binary predicate to another binary predicate: if @{text r} is of
@@ -628,8 +634,8 @@
 
 text{* The base case @{thm[source] refl} is reflexivity: @{term "x=y"}. The
 step case @{thm[source]step} combines an @{text r} step (from @{text x} to
-@{text y}) and a @{const star} step (from @{text y} to @{text z}) into a
-@{const star} step (from @{text x} to @{text z}).
+@{text y}) and a @{term"star r"} step (from @{text y} to @{text z}) into a
+@{term"star r"} step (from @{text x} to @{text z}).
 The ``\isacom{for}~@{text r}'' in the header is merely a hint to Isabelle
 that @{text r} is a fixed parameter of @{const star}, in contrast to the
 further parameters of @{const star}, which change. As a result, Isabelle
--- a/doc-src/ProgProve/Thys/Types_and_funs.thy	Mon Apr 23 23:55:06 2012 +0200
+++ b/doc-src/ProgProve/Thys/Types_and_funs.thy	Tue Apr 24 09:09:55 2012 +0200
@@ -132,12 +132,12 @@
 function @{prop"f n = f n + (1::nat)"} and conclude \mbox{@{prop"(0::nat) = 1"}} by
 subtracting @{term"f n"} on both sides.
 
-Isabelle automatic termination checker requires that the arguments of
+Isabelle's automatic termination checker requires that the arguments of
 recursive calls on the right-hand side must be strictly smaller than the
 arguments on the left-hand side. In the simplest case, this means that one
 fixed argument position decreases in size with each recursive call. The size
 is measured as the number of constructors (excluding 0-ary ones, e.g.\ @{text
-Nil}). Lexicographic combinations are also recognised. In more complicated
+Nil}). Lexicographic combinations are also recognized. In more complicated
 situations, the user may have to prove termination by hand. For details
 see~\cite{Krauss}.
 
@@ -152,7 +152,7 @@
 "div2 (Suc(Suc n)) = Suc(div2 n)"
 
 text{* does not just define @{const div2} but also proves a
-customised induction rule:
+customized induction rule:
 \[
 \inferrule{
 \mbox{@{thm (prem 1) div2.induct}}\\
@@ -160,7 +160,7 @@
 \mbox{@{thm (prem 3) div2.induct}}}
 {\mbox{@{thm (concl) div2.induct[of _ "m"]}}}
 \]
-This customised induction rule can simplify inductive proofs. For example,
+This customized induction rule can simplify inductive proofs. For example,
 *}
 
 lemma "div2(n+n) = n"
@@ -171,7 +171,7 @@
 An application of @{text auto} finishes the proof.
 Had we used ordinary structural induction on @{text n},
 the proof would have needed an additional
-case distinction in the induction step.
+case analysis in the induction step.
 
 The general case is often called \concept{computation induction},
 because the induction follows the (terminating!) computation.
@@ -202,7 +202,7 @@
  if the function is defined by recursion on argument number $i$.}
 \end{quote}
 The key heuristic, and the main point of this section, is to
-\emph{generalise the goal before induction}.
+\emph{generalize the goal before induction}.
 The reason is simple: if the goal is
 too specific, the induction hypothesis is too weak to allow the induction
 step to go through. Let us illustrate the idea with an example.
@@ -218,12 +218,12 @@
 done
 (*>*)
 fun itrev :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
-"itrev []     ys = ys" |
+"itrev []        ys = ys" |
 "itrev (x#xs) ys = itrev xs (x#ys)"
 
 text{* The behaviour of @{const itrev} is simple: it reverses
 its first argument by stacking its elements onto the second argument,
-and returning that second argument when the first one becomes
+and it returns that second argument when the first one becomes
 empty. Note that @{const itrev} is tail-recursive: it can be
 compiled into a loop, no stack is necessary for executing it.
 
@@ -247,10 +247,10 @@
 argument,~@{term"[]"}, prevents it from rewriting the conclusion.  
 This example suggests a heuristic:
 \begin{quote}
-\emph{Generalise goals for induction by replacing constants by variables.}
+\emph{Generalize goals for induction by replacing constants by variables.}
 \end{quote}
 Of course one cannot do this na\"{\i}vely: @{prop"itrev xs ys = rev xs"} is
-just not true.  The correct generalisation is
+just not true.  The correct generalization is
 *};
 (*<*)oops;(*>*)
 lemma "itrev xs ys = rev xs @ ys"
@@ -258,7 +258,7 @@
 txt{*
 If @{text ys} is replaced by @{term"[]"}, the right-hand side simplifies to
 @{term"rev xs"}, as required.
-In this instance it was easy to guess the right generalisation.
+In this instance it was easy to guess the right generalization.
 Other situations can require a good deal of creativity.  
 
 Although we now have two variables, only @{text xs} is suitable for
@@ -266,12 +266,12 @@
 not there:
 @{subgoals[display,margin=65]}
 The induction hypothesis is still too weak, but this time it takes no
-intuition to generalise: the problem is that the @{text ys}
+intuition to generalize: the problem is that the @{text ys}
 in the induction hypothesis is fixed,
 but the induction hypothesis needs to be applied with
 @{term"a # ys"} instead of @{text ys}. Hence we prove the theorem
 for all @{text ys} instead of a fixed one. We can instruct induction
-to perform this generalisation for us by adding @{text "arbitrary: ys"}.
+to perform this generalization for us by adding @{text "arbitrary: ys"}.
 *}
 (*<*)oops
 lemma "itrev xs ys = rev xs @ ys"
@@ -287,12 +287,12 @@
 done
 
 text{*
-This leads to another heuristic for generalisation:
+This leads to another heuristic for generalization:
 \begin{quote}
-\emph{Generalise induction by generalising all free
+\emph{Generalize induction by generalizing all free
 variables\\ {\em(except the induction variable itself)}.}
 \end{quote}
-Generalisation is best performed with @{text"arbitrary: y\<^isub>1 \<dots> y\<^isub>k"}. 
+Generalization is best performed with @{text"arbitrary: y\<^isub>1 \<dots> y\<^isub>k"}. 
 This heuristic prevents trivial failures like the one above.
 However, it should not be applied blindly.
 It is not always required, and the additional quantifiers can complicate
@@ -306,7 +306,7 @@
 \item using equations $l = r$ from left to right (only),
 \item as long as possible.
 \end{itemize}
-To emphasise the directionality, equations that have been given the
+To emphasize the directionality, equations that have been given the
 @{text"simp"} attribute are called \concept{simplification}
 rules. Logically, they are still symmetric, but proofs by
 simplification use them only in the left-to-right direction.  The proof tool
--- a/doc-src/ProgProve/Thys/document/Bool_nat_list.tex	Mon Apr 23 23:55:06 2012 +0200
+++ b/doc-src/ProgProve/Thys/document/Bool_nat_list.tex	Tue Apr 24 09:09:55 2012 +0200
@@ -192,11 +192,11 @@
 Throughout this book, \concept{IH} will stand for ``induction hypothesis''.
 
 We have now seen three proofs of \isa{add\ m\ {\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}}: the Isabelle one, the
-terse 4 lines explaining the base case and the induction step, and just now a
+terse four lines explaining the base case and the induction step, and just now a
 model of a traditional inductive proof. The three proofs differ in the level
 of detail given and the intended reader: the Isabelle proof is for the
 machine, the informal proofs are for humans. Although this book concentrates
-of Isabelle proofs, it is important to be able to rephrase those proofs
+on Isabelle proofs, it is important to be able to rephrase those proofs
 as informal text comprehensible to a reader familiar with traditional
 mathematical proofs. Later on we will introduce an Isabelle proof language
 that is closer to traditional informal mathematical language and is often
@@ -219,7 +219,7 @@
 \ {\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{3D}{\isacharequal}}\ Nil\ {\isaliteral{7C}{\isacharbar}}\ Cons\ {\isaliteral{27}{\isacharprime}}a\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{27}{\isacharprime}}a\ list{\isaliteral{22}{\isachardoublequoteclose}}%
 \begin{isamarkuptext}%
 \begin{itemize}
-\item Type \isa{{\isaliteral{27}{\isacharprime}}a\ list} is the type of list over elements of type \isa{{\isaliteral{27}{\isacharprime}}a}. Because \isa{{\isaliteral{27}{\isacharprime}}a} is a type variable, lists are in fact \concept{polymorphic}: the elements of a list can be of arbitrary type (but must all be of the same type).
+\item Type \isa{{\isaliteral{27}{\isacharprime}}a\ list} is the type of lists over elements of type \isa{{\isaliteral{27}{\isacharprime}}a}. Because \isa{{\isaliteral{27}{\isacharprime}}a} is a type variable, lists are in fact \concept{polymorphic}: the elements of a list can be of arbitrary type (but must all be of the same type).
 \item Lists have two constructors: \isa{Nil}, the empty list, and \isa{Cons}, which puts an element (of type \isa{{\isaliteral{27}{\isacharprime}}a}) in front of a list (of type \isa{{\isaliteral{27}{\isacharprime}}a\ list}).
 Hence all lists are of the form \isa{Nil}, or \isa{Cons\ x\ Nil},
 or \isa{Cons\ x\ {\isaliteral{28}{\isacharparenleft}}Cons\ y\ Nil{\isaliteral{29}{\isacharparenright}}} etc.
@@ -282,7 +282,7 @@
 you need to prove
 \begin{enumerate}
 \item the base case \isa{P\ Nil} and
-\item the inductive case \isa{P\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ xs{\isaliteral{29}{\isacharparenright}}} under the assumption \isa{P\ xs}, for some arbitrary but fixed \isa{xs}.
+\item the inductive case \isa{P\ {\isaliteral{28}{\isacharparenleft}}Cons\ x\ xs{\isaliteral{29}{\isacharparenright}}} under the assumption \isa{P\ xs}, for some arbitrary but fixed \isa{x} and \isa{xs}.
 \end{enumerate}
 This is often called \concept{structural induction}.
 
@@ -435,7 +435,7 @@
 \isaindent{\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ \ \ \ }app\ {\isaliteral{28}{\isacharparenleft}}app\ {\isaliteral{28}{\isacharparenleft}}rev\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}Cons\ a\ Nil{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\isanewline
 \isaindent{\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ \ \ \ }app\ {\isaliteral{28}{\isacharparenleft}}rev\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}app\ {\isaliteral{28}{\isacharparenleft}}rev\ xs{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{28}{\isacharparenleft}}Cons\ a\ Nil{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}%
 \end{isabelle}
-The the missing lemma is associativity of \isa{app},
+The missing lemma is associativity of \isa{app},
 which we insert in front of the failed lemma \isa{rev{\isaliteral{5F}{\isacharunderscore}}app}.
 
 \subsubsection{Associativity of \isa{app}}
--- a/doc-src/ProgProve/Thys/document/Isar.tex	Mon Apr 23 23:55:06 2012 +0200
+++ b/doc-src/ProgProve/Thys/document/Isar.tex	Tue Apr 24 09:09:55 2012 +0200
@@ -263,7 +263,7 @@
 \end{tabular}
 \medskip
 
-\noindent The \isacom{using} idiom de-emphasises the used facts by moving them
+\noindent The \isacom{using} idiom de-emphasizes the used facts by moving them
 behind the proposition.
 
 \subsection{Structured lemma statements: \isacom{fixes}, \isacom{assumes}, \isacom{shows}}
@@ -339,7 +339,7 @@
 default. The patterns are phrased in terms of \isacom{show} but work for
 \isacom{have} and \isacom{lemma}, too.
 
-We start with two forms of \concept{case distinction}:
+We start with two forms of \concept{case analysis}:
 starting from a formula \isa{P} we have the two cases \isa{P} and
 \isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}\ P}, and starting from a fact \isa{P\ {\isaliteral{5C3C6F723E}{\isasymor}}\ Q}
 we have the two cases \isa{P} and \isa{Q}:%
@@ -946,7 +946,7 @@
 
 Sometimes one would like to prove some lemma locally within a proof.
 A lemma that shares the current context of assumptions but that
-has its own assumptions and is generalised over its locally fixed
+has its own assumptions and is generalized over its locally fixed
 variables at the end. This is what a \concept{raw proof block} does:
 \begin{quote}
 \isa{{\isaliteral{7B}{\isacharbraceleft}}} \isacom{fix} \isa{x\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ x\isaliteral{5C3C5E697375623E}{}\isactrlisub n}\\
@@ -1016,14 +1016,14 @@
 the fact just proved, in this case the preceding block. In general,
 \isacom{note} introduces a new name for one or more facts.
 
-\section{Case distinction and induction}
+\section{Case analysis and induction}
 
-\subsection{Datatype case distinction}
+\subsection{Datatype case analysis}
 
-We have seen case distinction on formulas. Now we want to distinguish
+We have seen case analysis on formulas. Now we want to distinguish
 which form some term takes: is it \isa{{\isadigit{0}}} or of the form \isa{Suc\ n},
 is it \isa{{\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}} or of the form \isa{x\ {\isaliteral{23}{\isacharhash}}\ xs}, etc. Here is a typical example
-proof by case distinction on the form of \isa{xs}:%
+proof by case analysis on the form of \isa{xs}:%
 \end{isamarkuptext}%
 \isamarkuptrue%
 \isacommand{lemma}\isamarkupfalse%
@@ -1123,7 +1123,7 @@
 \end{isamarkuptext}%
 \isamarkuptrue%
 \isacommand{lemma}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}n{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ n{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{2B}{\isacharplus}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{28}{\isacharparenleft}}\isakeyword{is}\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{3F}{\isacharquery}}P\ n{\isaliteral{22}{\isachardoublequoteclose}}{\isaliteral{29}{\isacharparenright}}\isanewline
+\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}n{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ n{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{2B}{\isacharplus}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
 %
 \isadelimproof
 %
@@ -1141,7 +1141,7 @@
 \ n\ \isacommand{assume}\isamarkupfalse%
 \ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}n{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ n{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}n{\isaliteral{2B}{\isacharplus}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
 \ \ \isacommand{thus}\isamarkupfalse%
-\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}Suc\ n{\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}nat{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ Suc\ n{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{2B}{\isacharplus}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
+\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{5C3C53756D3E}{\isasymSum}}{\isaliteral{7B}{\isacharbraceleft}}{\isadigit{0}}{\isaliteral{2E}{\isachardot}}{\isaliteral{2E}{\isachardot}}Suc\ n{\isaliteral{7D}{\isacharbraceright}}\ {\isaliteral{3D}{\isacharequal}}\ Suc\ n{\isaliteral{2A}{\isacharasterisk}}{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{2B}{\isacharplus}}{\isadigit{1}}{\isaliteral{29}{\isacharparenright}}\ div\ {\isadigit{2}}{\isaliteral{22}{\isachardoublequoteclose}}\ \isacommand{by}\isamarkupfalse%
 \ simp\isanewline
 \isacommand{qed}\isamarkupfalse%
 %
@@ -1548,7 +1548,7 @@
 
 \subsection{Rule inversion}
 
-Rule inversion is case distinction on which rule could have been used to
+Rule inversion is case analysis of which rule could have been used to
 derive some fact. The name \concept{rule inversion} emphasizes that we are
 reasoning backwards: by which rules could some given fact have been proved?
 For the inductive definition of \isa{ev}, rule inversion can be summarized
@@ -1556,7 +1556,7 @@
 \begin{isabelle}%
 ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ n\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}\ {\isaliteral{5C3C6F723E}{\isasymor}}\ {\isaliteral{28}{\isacharparenleft}}{\isaliteral{5C3C6578697374733E}{\isasymexists}}k{\isaliteral{2E}{\isachardot}}\ n\ {\isaliteral{3D}{\isacharequal}}\ Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ k{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C616E643E}{\isasymand}}\ ev\ k{\isaliteral{29}{\isacharparenright}}%
 \end{isabelle}
-The realisation in Isabelle is a case distinction.
+The realization in Isabelle is a case analysis.
 A simple example is the proof that \isa{ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ ev\ {\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}}. We
 already went through the details informally in \autoref{sec:Logic:even}. This
 is the Isar proof:%
@@ -1595,7 +1595,7 @@
 \endisadelimproof
 %
 \begin{isamarkuptext}%
-The key point here is that a case distinction over some inductively
+The key point here is that a case analysis over some inductively
 defined predicate is triggered by piping the given fact
 (here: \isacom{from}~\isa{this}) into a proof by \isa{cases}.
 Let us examine the assumptions available in each case. In case \isa{ev{\isadigit{0}}}
--- a/doc-src/ProgProve/Thys/document/Logic.tex	Mon Apr 23 23:55:06 2012 +0200
+++ b/doc-src/ProgProve/Thys/document/Logic.tex	Tue Apr 24 09:09:55 2012 +0200
@@ -17,12 +17,12 @@
 %
 \begin{isamarkuptext}%
 \vspace{-5ex}
-\section{Logic and Proof Beyond Equality}
+\section{Logic and proof beyond equality}
 \label{sec:Logic}
 
 \subsection{Formulas}
 
-The basic syntax of formulas (\textit{form} below)
+The core syntax of formulas (\textit{form} below)
 provides the standard logical constructs, in decreasing precedence:
 \[
 \begin{array}{rcl}
@@ -39,14 +39,14 @@
  &\mid& \isa{{\isaliteral{5C3C666F72616C6C3E}{\isasymforall}}x{\isaliteral{2E}{\isachardot}}\ form} ~\mid~  \isa{{\isaliteral{5C3C6578697374733E}{\isasymexists}}x{\isaliteral{2E}{\isachardot}}\ form}
 \end{array}
 \]
-Terms are the ones we have seen all along, built from constant, variables,
+Terms are the ones we have seen all along, built from constants, variables,
 function application and \isa{{\isaliteral{5C3C6C616D6264613E}{\isasymlambda}}}-abstraction, including all the syntactic
 sugar like infix symbols, \isa{if}, \isa{case} etc.
 \begin{warn}
 Remember that formulas are simply terms of type \isa{bool}. Hence
 \isa{{\isaliteral{3D}{\isacharequal}}} also works for formulas. Beware that \isa{{\isaliteral{3D}{\isacharequal}}} has a higher
 precedence than the other logical operators. Hence \isa{s\ {\isaliteral{3D}{\isacharequal}}\ t\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A} means
-\isa{{\isaliteral{28}{\isacharparenleft}}s\ {\isaliteral{3D}{\isacharequal}}\ t{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A}, and \isa{A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ B\ {\isaliteral{3D}{\isacharequal}}\ B\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A} means \isa{A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ {\isaliteral{28}{\isacharparenleft}}B\ {\isaliteral{3D}{\isacharequal}}\ A{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C616E643E}{\isasymand}}\ B}.
+\isa{{\isaliteral{28}{\isacharparenleft}}s\ {\isaliteral{3D}{\isacharequal}}\ t{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A}, and \isa{A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ B\ {\isaliteral{3D}{\isacharequal}}\ B\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A} means \isa{A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ {\isaliteral{28}{\isacharparenleft}}B\ {\isaliteral{3D}{\isacharequal}}\ B{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A}.
 Logical equivalence can also be written with
 \isa{{\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}} instead of \isa{{\isaliteral{3D}{\isacharequal}}}, where \isa{{\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}} has the same low
 precedence as \isa{{\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}}. Hence \isa{A\ {\isaliteral{5C3C616E643E}{\isasymand}}\ B\ {\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}\ B\ {\isaliteral{5C3C616E643E}{\isasymand}}\ A} really means
@@ -63,7 +63,7 @@
 \isa{{\isaliteral{5C3C6578697374733E}{\isasymexists}}} & \xsymbol{exists} & \texttt{EX}\\
 \isa{{\isaliteral{5C3C6C616D6264613E}{\isasymlambda}}} & \xsymbol{lambda} & \texttt{\%}\\
 \isa{{\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}} & \texttt{-{}->}\\
-\isa{{\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}} & \texttt{<-{}->}\\
+\isa{{\isaliteral{5C3C6C6F6E676C65667472696768746172726F773E}{\isasymlongleftrightarrow}}} & \texttt{<->}\\
 \isa{{\isaliteral{5C3C616E643E}{\isasymand}}} & \texttt{/\char`\\} & \texttt{\&}\\
 \isa{{\isaliteral{5C3C6F723E}{\isasymor}}} & \texttt{\char`\\/} & \texttt{|}\\
 \isa{{\isaliteral{5C3C6E6F743E}{\isasymnot}}} & \xsymbol{not} & \texttt{\char`~}\\
@@ -75,7 +75,7 @@
 and the third column shows ASCII representations that stay fixed.
 \begin{warn}
 The implication \isa{{\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}} is part of the Isabelle framework. It structures
-theorems and proof states, separating assumptions from conclusion.
+theorems and proof states, separating assumptions from conclusions.
 The implication \isa{{\isaliteral{5C3C6C6F6E6772696768746172726F773E}{\isasymlongrightarrow}}} is part of the logic HOL and can occur inside the
 formulas that make up the assumptions and conclusion.
 Theorems should be of the form \isa{{\isaliteral{5C3C6C6272616B6B3E}{\isasymlbrakk}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{3B}{\isacharsemicolon}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}{\isaliteral{3B}{\isacharsemicolon}}\ A\isaliteral{5C3C5E697375623E}{}\isactrlisub n\ {\isaliteral{5C3C726272616B6B3E}{\isasymrbrakk}}\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ A},
@@ -86,7 +86,7 @@
 \subsection{Sets}
 
 Sets of elements of type \isa{{\isaliteral{27}{\isacharprime}}a} have type \isa{{\isaliteral{27}{\isacharprime}}a\ set}.
-They can be finite or infinite. Sets come with the usual notations:
+They can be finite or infinite. Sets come with the usual notation:
 \begin{itemize}
 \item \isa{{\isaliteral{7B}{\isacharbraceleft}}{\isaliteral{7D}{\isacharbraceright}}},\quad \isa{{\isaliteral{7B}{\isacharbraceleft}}e\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}{\isaliteral{2C}{\isacharcomma}}{\isaliteral{5C3C646F74733E}{\isasymdots}}{\isaliteral{2C}{\isacharcomma}}e\isaliteral{5C3C5E697375623E}{}\isactrlisub n{\isaliteral{7D}{\isacharbraceright}}}
 \item \isa{e\ {\isaliteral{5C3C696E3E}{\isasymin}}\ A},\quad \isa{A\ {\isaliteral{5C3C73756273657465713E}{\isasymsubseteq}}\ B}
@@ -99,7 +99,7 @@
 \begin{warn}
 In \isa{{\isaliteral{7B}{\isacharbraceleft}}x{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{7D}{\isacharbraceright}}} the \isa{x} must be a variable. Set comprehension
 involving a proper term \isa{t} must be written
-\isa{{\isaliteral{22}{\isachardoublequote}}{\isaliteral{7B}{\isacharbraceleft}}t\ {\isaliteral{7C}{\isacharbar}}x\ y\ z{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{7D}{\isacharbraceright}}{\isaliteral{22}{\isachardoublequote}}},
+\isa{{\isaliteral{22}{\isachardoublequote}}{\isaliteral{7B}{\isacharbraceleft}}t\ {\isaliteral{7C}{\isacharbar}}\ x\ y\ z{\isaliteral{2E}{\isachardot}}\ P{\isaliteral{7D}{\isacharbraceright}}{\isaliteral{22}{\isachardoublequote}}},
 where \isa{x\ y\ z} are the free variables in \isa{t}.
 This is just a shorthand for \isa{{\isaliteral{7B}{\isacharbraceleft}}v{\isaliteral{2E}{\isachardot}}\ {\isaliteral{5C3C6578697374733E}{\isasymexists}}x\ y\ z{\isaliteral{2E}{\isachardot}}\ v\ {\isaliteral{3D}{\isacharequal}}\ t\ {\isaliteral{5C3C616E643E}{\isasymand}}\ P{\isaliteral{7D}{\isacharbraceright}}}, where
 \isa{v} is a new variable.
@@ -210,8 +210,8 @@
 succeeds where \isa{fastforce} fails.
 
 The method of choice for complex logical goals is \isa{blast}. In the
-following example, \isa{T} and \isa{A} are two binary predicates, and it
-is shown that \isa{T} is total, \isa{A} is antisymmetric and \isa{T} is
+following example, \isa{T} and \isa{A} are two binary predicates. It
+is shown that if \isa{T} is total, \isa{A} is antisymmetric and \isa{T} is
 a subset of \isa{A}, then \isa{A} is a subset of \isa{T}:%
 \end{isamarkuptext}%
 \isamarkuptrue%
@@ -449,11 +449,11 @@
 automatically selects the appropriate rule for the current subgoal.
 
 You can also turn your own theorems into introduction rules by giving them
-them \isa{intro} attribute, analogous to the \isa{simp} attribute.  In
+the \isa{intro} attribute, analogous to the \isa{simp} attribute.  In
 that case \isa{blast}, \isa{fastforce} and (to a limited extent) \isa{auto} will automatically backchain with those theorems. The \isa{intro}
 attribute should be used with care because it increases the search space and
-can lead to nontermination.  Sometimes it is better to use it only in a
-particular calls of \isa{blast} and friends. For example,
+can lead to nontermination.  Sometimes it is better to use it only in
+specific calls of \isa{blast} and friends. For example,
 \isa{le{\isaliteral{5F}{\isacharunderscore}}trans}, transitivity of \isa{{\isaliteral{5C3C6C653E}{\isasymle}}} on type \isa{nat},
 is not an introduction rule by default because of the disastrous effect
 on the search space, but can be useful in specific situations:%
@@ -550,8 +550,11 @@
 \label{sec:inductive-defs}
 
 Inductive definitions are the third important definition facility, after
-datatypes and recursive function. In fact, they are the key construct in the
+datatypes and recursive functions.
+\sem
+In fact, they are the key construct in the
 definition of operational semantics in the second part of the book.
+\endsem
 
 \subsection{An example: even numbers}
 \label{sec:Logic:even}
@@ -639,8 +642,8 @@
 from \isa{ev\ {\isadigit{0}}} because \isa{{\isadigit{0}}\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{0}}} on type \isa{nat}. In
 case \isa{evSS} we have \mbox{\isa{m\ {\isaliteral{3D}{\isacharequal}}\ n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}}} and may assume
 \isa{ev\ n}, which implies \isa{ev\ {\isaliteral{28}{\isacharparenleft}}m\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}} because \isa{m\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}\ {\isaliteral{3D}{\isacharequal}}\ {\isaliteral{28}{\isacharparenleft}}n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{2}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{2D}{\isacharminus}}\ {\isadigit{2}}\ {\isaliteral{3D}{\isacharequal}}\ n}. We did not need the induction hypothesis at all for this proof,
-it is just a case distinction on which rule was used, but having \isa{ev\ n} at our disposal in case \isa{evSS} was essential.
-This case distinction over rules is also called ``rule inversion''
+it is just a case analysis of which rule was used, but having \isa{ev\ n} at our disposal in case \isa{evSS} was essential.
+This case analysis of rules is also called ``rule inversion''
 and is discussed in more detail in \autoref{ch:Isar}.
 
 \subsubsection{In Isabelle}
@@ -789,7 +792,7 @@
 definition only expresses the positive information directly. The negative
 information, for example, that \isa{{\isadigit{1}}} is not even, has to be proved from
 it (by induction or rule inversion). On the other hand, rule induction is
-Taylor made for proving \mbox{\isa{ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P\ n}} because it only asks you
+tailor-made for proving \mbox{\isa{ev\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P\ n}} because it only asks you
 to prove the positive cases. In the proof of \isa{even\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P\ n} by
 computation induction via \isa{even{\isaliteral{2E}{\isachardot}}induct}, we are also presented
 with the trivial negative cases. If you want the convenience of both
@@ -799,8 +802,8 @@
 
 But many concepts do not admit a recursive definition at all because there is
 no datatype for the recursion (for example, the transitive closure of a
-relation), or the recursion would not terminate (for example, the operational
-semantics in the second part of this book). Even if there is a recursive
+relation), or the recursion would not terminate (for example,
+an interpreter for a programming language). Even if there is a recursive
 definition, if we are only interested in the positive information, the
 inductive definition may be much simpler.
 
@@ -809,8 +812,11 @@
 
 Evenness is really more conveniently expressed recursively than inductively.
 As a second and very typical example of an inductive definition we define the
-reflexive transitive closure. It will also be an important building block for
+reflexive transitive closure.
+\sem
+It will also be an important building block for
 some of the semantics considered in the second part of the book.
+\endsem
 
 The reflexive transitive closure, called \isa{star} below, is a function
 that maps a binary predicate to another binary predicate: if \isa{r} is of
@@ -828,8 +834,8 @@
 \begin{isamarkuptext}%
 The base case \isa{refl} is reflexivity: \isa{x\ {\isaliteral{3D}{\isacharequal}}\ y}. The
 step case \isa{step} combines an \isa{r} step (from \isa{x} to
-\isa{y}) and a \isa{star} step (from \isa{y} to \isa{z}) into a
-\isa{star} step (from \isa{x} to \isa{z}).
+\isa{y}) and a \isa{star\ r} step (from \isa{y} to \isa{z}) into a
+\isa{star\ r} step (from \isa{x} to \isa{z}).
 The ``\isacom{for}~\isa{r}'' in the header is merely a hint to Isabelle
 that \isa{r} is a fixed parameter of \isa{star}, in contrast to the
 further parameters of \isa{star}, which change. As a result, Isabelle
--- a/doc-src/ProgProve/Thys/document/Types_and_funs.tex	Mon Apr 23 23:55:06 2012 +0200
+++ b/doc-src/ProgProve/Thys/document/Types_and_funs.tex	Tue Apr 24 09:09:55 2012 +0200
@@ -169,11 +169,11 @@
 function \isa{f\ n\ {\isaliteral{3D}{\isacharequal}}\ f\ n\ {\isaliteral{2B}{\isacharplus}}\ {\isadigit{1}}} and conclude \mbox{\isa{{\isadigit{0}}\ {\isaliteral{3D}{\isacharequal}}\ {\isadigit{1}}}} by
 subtracting \isa{f\ n} on both sides.
 
-Isabelle automatic termination checker requires that the arguments of
+Isabelle's automatic termination checker requires that the arguments of
 recursive calls on the right-hand side must be strictly smaller than the
 arguments on the left-hand side. In the simplest case, this means that one
 fixed argument position decreases in size with each recursive call. The size
-is measured as the number of constructors (excluding 0-ary ones, e.g.\ \isa{Nil}). Lexicographic combinations are also recognised. In more complicated
+is measured as the number of constructors (excluding 0-ary ones, e.g.\ \isa{Nil}). Lexicographic combinations are also recognized. In more complicated
 situations, the user may have to prove termination by hand. For details
 see~\cite{Krauss}.
 
@@ -189,7 +189,7 @@
 {\isaliteral{22}{\isachardoublequoteopen}}div{\isadigit{2}}\ {\isaliteral{28}{\isacharparenleft}}Suc{\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ Suc{\isaliteral{28}{\isacharparenleft}}div{\isadigit{2}}\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}%
 \begin{isamarkuptext}%
 does not just define \isa{div{\isadigit{2}}} but also proves a
-customised induction rule:
+customized induction rule:
 \[
 \inferrule{
 \mbox{\isa{P\ {\isadigit{0}}}}\\
@@ -197,7 +197,7 @@
 \mbox{\isa{{\isaliteral{5C3C416E643E}{\isasymAnd}}n{\isaliteral{2E}{\isachardot}}\ P\ n\ {\isaliteral{5C3C4C6F6E6772696768746172726F773E}{\isasymLongrightarrow}}\ P\ {\isaliteral{28}{\isacharparenleft}}Suc\ {\isaliteral{28}{\isacharparenleft}}Suc\ n{\isaliteral{29}{\isacharparenright}}{\isaliteral{29}{\isacharparenright}}}}}
 {\mbox{\isa{P\ m}}}
 \]
-This customised induction rule can simplify inductive proofs. For example,%
+This customized induction rule can simplify inductive proofs. For example,%
 \end{isamarkuptext}%
 \isamarkuptrue%
 \isacommand{lemma}\isamarkupfalse%
@@ -221,7 +221,7 @@
 An application of \isa{auto} finishes the proof.
 Had we used ordinary structural induction on \isa{n},
 the proof would have needed an additional
-case distinction in the induction step.
+case analysis in the induction step.
 
 The general case is often called \concept{computation induction},
 because the induction follows the (terminating!) computation.
@@ -252,7 +252,7 @@
  if the function is defined by recursion on argument number $i$.}
 \end{quote}
 The key heuristic, and the main point of this section, is to
-\emph{generalise the goal before induction}.
+\emph{generalize the goal before induction}.
 The reason is simple: if the goal is
 too specific, the induction hypothesis is too weak to allow the induction
 step to go through. Let us illustrate the idea with an example.
@@ -273,12 +273,12 @@
 \endisadelimproof
 \isacommand{fun}\isamarkupfalse%
 \ itrev\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\ {\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ list\ {\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\ {\isaliteral{27}{\isacharprime}}a\ list{\isaliteral{22}{\isachardoublequoteclose}}\ \isakeyword{where}\isanewline
-{\isaliteral{22}{\isachardoublequoteopen}}itrev\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}\ \ \ \ \ ys\ {\isaliteral{3D}{\isacharequal}}\ ys{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
+{\isaliteral{22}{\isachardoublequoteopen}}itrev\ {\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}\ \ \ \ \ \ \ \ ys\ {\isaliteral{3D}{\isacharequal}}\ ys{\isaliteral{22}{\isachardoublequoteclose}}\ {\isaliteral{7C}{\isacharbar}}\isanewline
 {\isaliteral{22}{\isachardoublequoteopen}}itrev\ {\isaliteral{28}{\isacharparenleft}}x{\isaliteral{23}{\isacharhash}}xs{\isaliteral{29}{\isacharparenright}}\ ys\ {\isaliteral{3D}{\isacharequal}}\ itrev\ xs\ {\isaliteral{28}{\isacharparenleft}}x{\isaliteral{23}{\isacharhash}}ys{\isaliteral{29}{\isacharparenright}}{\isaliteral{22}{\isachardoublequoteclose}}%
 \begin{isamarkuptext}%
 The behaviour of \isa{itrev} is simple: it reverses
 its first argument by stacking its elements onto the second argument,
-and returning that second argument when the first one becomes
+and it returns that second argument when the first one becomes
 empty. Note that \isa{itrev} is tail-recursive: it can be
 compiled into a loop, no stack is necessary for executing it.
 
@@ -312,10 +312,10 @@
 argument,~\isa{{\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}}, prevents it from rewriting the conclusion.  
 This example suggests a heuristic:
 \begin{quote}
-\emph{Generalise goals for induction by replacing constants by variables.}
+\emph{Generalize goals for induction by replacing constants by variables.}
 \end{quote}
 Of course one cannot do this na\"{\i}vely: \isa{itrev\ xs\ ys\ {\isaliteral{3D}{\isacharequal}}\ rev\ xs} is
-just not true.  The correct generalisation is%
+just not true.  The correct generalization is%
 \end{isamarkuptxt}%
 \isamarkuptrue%
 %
@@ -336,7 +336,7 @@
 \begin{isamarkuptxt}%
 If \isa{ys} is replaced by \isa{{\isaliteral{5B}{\isacharbrackleft}}{\isaliteral{5D}{\isacharbrackright}}}, the right-hand side simplifies to
 \isa{rev\ xs}, as required.
-In this instance it was easy to guess the right generalisation.
+In this instance it was easy to guess the right generalization.
 Other situations can require a good deal of creativity.  
 
 Although we now have two variables, only \isa{xs} is suitable for
@@ -348,12 +348,12 @@
 \isaindent{\ {\isadigit{1}}{\isaliteral{2E}{\isachardot}}\ \ \ \ }itrev\ xs\ {\isaliteral{28}{\isacharparenleft}}a\ {\isaliteral{23}{\isacharhash}}\ ys{\isaliteral{29}{\isacharparenright}}\ {\isaliteral{3D}{\isacharequal}}\ rev\ xs\ {\isaliteral{40}{\isacharat}}\ a\ {\isaliteral{23}{\isacharhash}}\ ys%
 \end{isabelle}
 The induction hypothesis is still too weak, but this time it takes no
-intuition to generalise: the problem is that the \isa{ys}
+intuition to generalize: the problem is that the \isa{ys}
 in the induction hypothesis is fixed,
 but the induction hypothesis needs to be applied with
 \isa{a\ {\isaliteral{23}{\isacharhash}}\ ys} instead of \isa{ys}. Hence we prove the theorem
 for all \isa{ys} instead of a fixed one. We can instruct induction
-to perform this generalisation for us by adding \isa{arbitrary{\isaliteral{3A}{\isacharcolon}}\ ys}.%
+to perform this generalization for us by adding \isa{arbitrary{\isaliteral{3A}{\isacharcolon}}\ ys}.%
 \end{isamarkuptxt}%
 \isamarkuptrue%
 %
@@ -394,12 +394,12 @@
 \endisadelimproof
 %
 \begin{isamarkuptext}%
-This leads to another heuristic for generalisation:
+This leads to another heuristic for generalization:
 \begin{quote}
-\emph{Generalise induction by generalising all free
+\emph{Generalize induction by generalizing all free
 variables\\ {\em(except the induction variable itself)}.}
 \end{quote}
-Generalisation is best performed with \isa{arbitrary{\isaliteral{3A}{\isacharcolon}}\ y\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ y\isaliteral{5C3C5E697375623E}{}\isactrlisub k}. 
+Generalization is best performed with \isa{arbitrary{\isaliteral{3A}{\isacharcolon}}\ y\isaliteral{5C3C5E697375623E}{}\isactrlisub {\isadigit{1}}\ {\isaliteral{5C3C646F74733E}{\isasymdots}}\ y\isaliteral{5C3C5E697375623E}{}\isactrlisub k}. 
 This heuristic prevents trivial failures like the one above.
 However, it should not be applied blindly.
 It is not always required, and the additional quantifiers can complicate
@@ -413,7 +413,7 @@
 \item using equations $l = r$ from left to right (only),
 \item as long as possible.
 \end{itemize}
-To emphasise the directionality, equations that have been given the
+To emphasize the directionality, equations that have been given the
 \isa{simp} attribute are called \concept{simplification}
 rules. Logically, they are still symmetric, but proofs by
 simplification use them only in the left-to-right direction.  The proof tool