improved presentation;
author wenzelm
Sat Oct 30 20:20:48 1999 +0200 (1999-10-30)
changeset 7982 d534b897ce39
parent 7981 5120a2a15d06
child 7983 d823fdcc0645
src/HOL/Isar_examples/BasicLogic.thy
src/HOL/Isar_examples/Cantor.thy
src/HOL/Isar_examples/ExprCompiler.thy
src/HOL/Isar_examples/Group.thy
src/HOL/Isar_examples/KnasterTarski.thy
src/HOL/Isar_examples/MultisetOrder.thy
src/HOL/Isar_examples/Peirce.thy
src/HOL/Isar_examples/Summation.thy
src/HOL/Isar_examples/W_correct.thy
src/HOL/Isar_examples/document/style.tex
     1.1 --- a/src/HOL/Isar_examples/BasicLogic.thy	Sat Oct 30 20:13:16 1999 +0200
     1.2 +++ b/src/HOL/Isar_examples/BasicLogic.thy	Sat Oct 30 20:20:48 1999 +0200
     1.3 @@ -5,7 +5,7 @@
     1.4  Basic propositional and quantifier reasoning.
     1.5  *)
     1.6  
     1.7 -header {* Basic reasoning *};
     1.8 +header {* Basic logical reasoning *};
     1.9  
    1.10  theory BasicLogic = Main:;
    1.11  
    1.12 @@ -70,9 +70,9 @@
    1.13  text {*
    1.14   In fact, concluding any (sub-)proof already involves solving any
    1.15   remaining goals by assumption\footnote{This is not a completely
    1.16 - trivial operation, as proof by assumption involves full higher-order
    1.17 - unification.}.  Thus we may skip the rather vacuous body of the above
    1.18 - proof as well.
    1.19 + trivial operation, as proof by assumption may involve full
    1.20 + higher-order unification.}.  Thus we may skip the rather vacuous body
    1.21 + of the above proof as well.
    1.22  *};
    1.23  
    1.24  lemma "A --> A";
    1.25 @@ -99,7 +99,7 @@
    1.26  text {*
    1.27   Thus we have arrived at an adequate representation of the proof of a
    1.28   tautology that holds by a single standard rule.\footnote{Apparently,
    1.29 - the rule is implication introduction.}
    1.30 + the rule here is implication introduction.}
    1.31  *};
    1.32  
    1.33  text {*
    1.34 @@ -129,7 +129,7 @@
    1.35   Just like $\idt{rule}$, the $\idt{intro}$ and $\idt{elim}$ proof
    1.36   methods pick standard structural rules, in case no explicit arguments
    1.37   are given.  While implicit rules are usually just fine for single
    1.38 - rule application, this may go too far in iteration.  Thus in
    1.39 + rule application, this may go too far with iteration.  Thus in
    1.40   practice, $\idt{intro}$ and $\idt{elim}$ would be typically
    1.41   restricted to certain structures by giving a few rules only, e.g.\
    1.42   \isacommand{proof}~($\idt{intro}$~\name{impI}~\name{allI}) to strip
    1.43 @@ -168,11 +168,12 @@
    1.44  
    1.45  text {*
    1.46   Above, the $\idt{conjunct}_{1/2}$ projection rules had to be named
    1.47 - explicitly, since the goals did not provide any structural clue.
    1.48 - This may be avoided using \isacommand{from} to focus on $\idt{prems}$
    1.49 - (i.e.\ the $A \conj B$ assumption) as the current facts, enabling the
    1.50 - use of double-dot proofs.  Note that \isacommand{from} already
    1.51 - does forward-chaining, involving the \name{conjE} rule.
    1.52 + explicitly, since the goals $B$ and $A$ did not provide any
    1.53 + structural clue.  This may be avoided using \isacommand{from} to
    1.54 + focus on $\idt{prems}$ (i.e.\ the $A \conj B$ assumption) as the
    1.55 + current facts, enabling the use of double-dot proofs.  Note that
    1.56 + \isacommand{from} already does forward-chaining, involving the
    1.57 + \name{conjE} rule here.
    1.58  *};
    1.59  
    1.60  lemma "A & B --> B & A";
    1.61 @@ -222,7 +223,7 @@
    1.62  text {*
    1.63   We can still push forward reasoning a bit further, even at the risk
    1.64   of getting ridiculous.  Note that we force the initial proof step to
    1.65 - do nothing, by referring to the ``-'' proof method.
    1.66 + do nothing here, by referring to the ``-'' proof method.
    1.67  *};
    1.68  
    1.69  lemma "A & B --> B & A";
    1.70 @@ -245,7 +246,7 @@
    1.71  
    1.72   The general lesson learned here is that good proof style would
    1.73   achieve just the \emph{right} balance of top-down backward
    1.74 - decomposition, and bottom-up forward composition.  In practice, there
    1.75 + decomposition, and bottom-up forward composition.  In general, there
    1.76   is no single best way to arrange some pieces of formal reasoning, of
    1.77   course.  Depending on the actual applications, the intended audience
    1.78   etc., rules (and methods) on the one hand vs.\ facts on the other
    1.79 @@ -278,7 +279,7 @@
    1.80  
    1.81  text {*
    1.82   We rephrase some of the basic reasoning examples of
    1.83 - \cite{isabelle-intro} (using HOL rather than FOL).
    1.84 + \cite{isabelle-intro}, using HOL rather than FOL.
    1.85  *};
    1.86  
    1.87  subsubsection {* A propositional proof *};
    1.88 @@ -315,8 +316,8 @@
    1.89   In order to avoid too much explicit parentheses, the Isar system
    1.90   implicitly opens an additional block for any new goal, the
    1.91   \isacommand{next} statement then closes one block level, opening a
    1.92 - new one.  The resulting behavior is what one might expect from
    1.93 - separating cases, only that it is more flexible.  E.g. an induction
    1.94 + new one.  The resulting behavior is what one would expect from
    1.95 + separating cases, only that it is more flexible.  E.g.\ an induction
    1.96   base case (which does not introduce local assumptions) would
    1.97   \emph{not} require \isacommand{next} to separate the subsequent step
    1.98   case.
    1.99 @@ -381,8 +382,8 @@
   1.100  qed;
   1.101  
   1.102  text {*
   1.103 - While explicit rule instantiation may occasionally help to improve
   1.104 - the readability of certain aspects of reasoning, it is usually quite
   1.105 + While explicit rule instantiation may occasionally improve
   1.106 + readability of certain aspects of reasoning, it is usually quite
   1.107   redundant.  Above, the basic proof outline gives already enough
   1.108   structural clues for the system to infer both the rules and their
   1.109   instances (by higher-order unification).  Thus we may as well prune
   1.110 @@ -404,17 +405,18 @@
   1.111  subsubsection {* Deriving rules in Isabelle *};
   1.112  
   1.113  text {*
   1.114 - We derive the conjunction elimination rule from the projections.  The
   1.115 - proof is quite straight-forward, since Isabelle/Isar supports
   1.116 - non-atomic goals and assumptions fully transparently.
   1.117 + We derive the conjunction elimination rule from the corresponding
   1.118 + projections.  The proof is quite straight-forward, since
   1.119 + Isabelle/Isar supports non-atomic goals and assumptions fully
   1.120 + transparently.
   1.121  *};
   1.122  
   1.123  theorem conjE: "A & B ==> (A ==> B ==> C) ==> C";
   1.124  proof -;
   1.125    assume "A & B";
   1.126 -  assume ab_c: "A ==> B ==> C";
   1.127 +  assume r: "A ==> B ==> C";
   1.128    show C;
   1.129 -  proof (rule ab_c);
   1.130 +  proof (rule r);
   1.131      show A; by (rule conjunct1);
   1.132      show B; by (rule conjunct2);
   1.133    qed;
   1.134 @@ -425,7 +427,7 @@
   1.135   different way.  The tactic script as given in \cite{isabelle-intro}
   1.136   for the same example of \name{conjE} depends on the primitive
   1.137   \texttt{goal} command to decompose the rule into premises and
   1.138 - conclusion.  The proper result would then emerge by discharging of
   1.139 + conclusion.  The actual result would then emerge by discharging of
   1.140   the context at \texttt{qed} time.
   1.141  *};
   1.142  
     2.1 --- a/src/HOL/Isar_examples/Cantor.thy	Sat Oct 30 20:13:16 1999 +0200
     2.2 +++ b/src/HOL/Isar_examples/Cantor.thy	Sat Oct 30 20:20:48 1999 +0200
     2.3 @@ -30,7 +30,7 @@
     2.4   with the innermost reasoning expressed quite naively.
     2.5  *};
     2.6  
     2.7 -theorem "EX S. S ~: range(f :: 'a => 'a set)";
     2.8 +theorem "EX S. S ~: range (f :: 'a => 'a set)";
     2.9  proof;
    2.10    let ?S = "{x. x ~: f x}";
    2.11    show "?S ~: range f";
    2.12 @@ -69,7 +69,7 @@
    2.13   introduced \emph{before} its corresponding \isacommand{show}.}
    2.14  *};
    2.15  
    2.16 -theorem "EX S. S ~: range(f :: 'a => 'a set)";
    2.17 +theorem "EX S. S ~: range (f :: 'a => 'a set)";
    2.18  proof;
    2.19    let ?S = "{x. x ~: f x}";
    2.20    show "?S ~: range f";
    2.21 @@ -95,22 +95,22 @@
    2.22  
    2.23  text {*
    2.24   How much creativity is required?  As it happens, Isabelle can prove
    2.25 - this theorem automatically.  The default context of the Isabelle's
    2.26 - classical prover contains rules for most of the constructs of HOL's
    2.27 - set theory.  We must augment it with \name{equalityCE} to break up
    2.28 - set equalities, and then apply best-first search.  Depth-first search
    2.29 - would diverge, but best-first search successfully navigates through
    2.30 - the large search space.
    2.31 + this theorem automatically.  The context of Isabelle's classical
    2.32 + prover contains rules for most of the constructs of HOL's set theory.
    2.33 + We must augment it with \name{equalityCE} to break up set equalities,
    2.34 + and then apply best-first search.  Depth-first search would diverge,
    2.35 + but best-first search successfully navigates through the large search
    2.36 + space.
    2.37  *};
    2.38  
    2.39 -theorem "EX S. S ~: range(f :: 'a => 'a set)";
    2.40 +theorem "EX S. S ~: range (f :: 'a => 'a set)";
    2.41    by (best elim: equalityCE);
    2.42  
    2.43  text {*
    2.44   While this establishes the same theorem internally, we do not get any
    2.45   idea of how the proof actually works.  There is currently no way to
    2.46   transform internal system-level representations of Isabelle proofs
    2.47 - back into Isar documents.  Writing intelligible proof documents
    2.48 + back into Isar text.  Writing intelligible proof documents
    2.49   really is a creative process, after all.
    2.50  *};
    2.51  
     3.1 --- a/src/HOL/Isar_examples/ExprCompiler.thy	Sat Oct 30 20:13:16 1999 +0200
     3.2 +++ b/src/HOL/Isar_examples/ExprCompiler.thy	Sat Oct 30 20:20:48 1999 +0200
     3.3 @@ -20,7 +20,7 @@
     3.4  
     3.5  text {*
     3.6   Binary operations are just functions over some type of values.  This
     3.7 - is both for syntax and semantics, i.e.\ we use a ``shallow
     3.8 + is both for abstract syntax and semantics, i.e.\ we use a ``shallow
     3.9   embedding'' here.
    3.10  *};
    3.11  
     4.1 --- a/src/HOL/Isar_examples/Group.thy	Sat Oct 30 20:13:16 1999 +0200
     4.2 +++ b/src/HOL/Isar_examples/Group.thy	Sat Oct 30 20:20:48 1999 +0200
     4.3 @@ -53,8 +53,9 @@
     4.4  qed;
     4.5  
     4.6  text {*
     4.7 - With \name{group-right-inverse} already at our disposal,
     4.8 - \name{group-right-unit} is now obtained much easier.
     4.9 + With \name{group-right-inverse} already available,
    4.10 + \name{group-right-unit}\label{thm:group-right-unit} is now
     4.11 + established much more easily.
    4.12  *};
    4.13  
    4.14  theorem group_right_unit: "x * one = (x::'a::group)";
    4.15 @@ -75,14 +76,14 @@
    4.16   presentations given in any introductory course on algebra.  The basic
    4.17   technique is to form a transitive chain of equations, which in turn
    4.18   are established by simplifying with appropriate rules.  The low-level
    4.19 - logical parts of equational reasoning are left implicit.
    4.20 + logical details of equational reasoning are left implicit.
    4.21  
    4.22 - Note that ``$\dots$'' is just a special term variable that happens to
    4.23 - be bound automatically to the argument\footnote{The argument of a
    4.24 - curried infix expression happens to be its right-hand side.} of the
    4.25 - last fact achieved by any local assumption or proven statement.  In
    4.26 - contrast to $\var{thesis}$, the ``$\dots$'' variable is bound
    4.27 - \emph{after} the proof is finished.
    4.28 + Note that ``$\dots$'' is just a special term variable that is bound
    4.29 + automatically to the argument\footnote{The argument of a curried
    4.30 + infix expression happens to be its right-hand side.} of the last fact
    4.31 + achieved by any local assumption or proven statement.  In contrast to
    4.32 + $\var{thesis}$, the ``$\dots$'' variable is bound \emph{after} the
    4.33 + proof is finished, though.
    4.34  
    4.35   There are only two separate Isar language elements for calculational
    4.36   proofs: ``\isakeyword{also}'' for initial or intermediate
    4.37 @@ -90,8 +91,8 @@
    4.38   result of a calculation.  These constructs are not hardwired into
    4.39   Isabelle/Isar, but defined on top of the basic Isar/VM interpreter.
    4.40   Expanding the \isakeyword{also} and \isakeyword{finally} derived
    4.41 - language elements, calculations may be simulated as demonstrated
    4.42 - below.
    4.43 + language elements, calculations may be simulated by hand as
    4.44 + demonstrated below.
    4.45  *};
    4.46  
    4.47  theorem "x * one = (x::'a::group)";
    4.48 @@ -128,10 +129,10 @@
    4.49  text {*
    4.50   Note that this scheme of calculations is not restricted to plain
    4.51   transitivity.  Rules like anti-symmetry, or even forward and backward
    4.52 - substitution work as well.  For the actual \isacommand{also} and
    4.53 - \isacommand{finally} commands, Isabelle/Isar maintains separate
    4.54 - context information of ``transitivity'' rules.  Rule selection takes
    4.55 - place automatically by higher-order unification.
    4.56 + substitution work as well.  For the actual implementation of
    4.57 + \isacommand{also} and \isacommand{finally}, Isabelle/Isar maintains
    4.58 + separate context information of ``transitivity'' rules.  Rule
    4.59 + selection takes place automatically by higher-order unification.
    4.60  *};
    4.61  
    4.62  
    4.63 @@ -150,10 +151,11 @@
    4.64  text {*
    4.65   Groups are \emph{not} yet monoids directly from the definition.  For
    4.66   monoids, \name{right-unit} had to be included as an axiom, but for
    4.67 - groups both \name{right-unit} and \name{right-inverse} are
    4.68 - derivable from the other axioms.  With \name{group-right-unit}
    4.69 - derived as a theorem of group theory (see above), we may still
    4.70 - instantiate $\idt{group} \subset \idt{monoid}$ properly as follows.
    4.71 + groups both \name{right-unit} and \name{right-inverse} are derivable
    4.72 + from the other axioms.  With \name{group-right-unit} derived as a
    4.73 + theorem of group theory (see page~\pageref{thm:group-right-unit}), we
    4.74 + may still instantiate $\idt{group} \subset \idt{monoid}$ properly as
    4.75 + follows.
    4.76  *};
    4.77  
    4.78  instance group < monoid;
    4.79 @@ -167,7 +169,7 @@
    4.80   \isacommand{theorem}, setting up a goal that reflects the intended
    4.81   class relation (or type constructor arity).  Thus any Isar proof
    4.82   language element may be involved to establish this statement.  When
    4.83 - concluding the proof, the result is transformed into the original
    4.84 + concluding the proof, the result is transformed into the intended
    4.85   type signature extension behind the scenes.
    4.86  *};
    4.87  
     5.1 --- a/src/HOL/Isar_examples/KnasterTarski.thy	Sat Oct 30 20:13:16 1999 +0200
     5.2 +++ b/src/HOL/Isar_examples/KnasterTarski.thy	Sat Oct 30 20:20:48 1999 +0200
     5.3 @@ -37,7 +37,8 @@
     5.4   The Isar proof below closely follows the original presentation.
     5.5   Virtually all of the prose narration has been rephrased in terms of
     5.6   formal Isar language elements.  Just as many textbook-style proofs,
     5.7 - there is a strong bias towards forward reasoning.
     5.8 + there is a strong bias towards forward proof, and several bends
     5.9 + in the course of reasoning.
    5.10  *};
    5.11  
    5.12  theorem KnasterTarski: "mono f ==> EX a::'a set. f a = a";
    5.13 @@ -72,11 +73,11 @@
    5.14   explicit block structure and weak assumptions.  Thus we have mimicked
    5.15   the particular way of reasoning of the original text.
    5.16  
    5.17 - In the subsequent version of the proof the order of reasoning is
    5.18 - changed to achieve structured top-down decomposition of the problem
    5.19 - at the outer level, while the small inner steps of reasoning or done
    5.20 - in a forward manner.  Note that this requires only the most basic
    5.21 - features of the Isar language, we are certainly more at ease here.
    5.22 + In the subsequent version the order of reasoning is changed to
    5.23 + achieve structured top-down decomposition of the problem at the outer
    5.24 + level, while only the inner steps of reasoning are done in a forward
    5.25 + manner.  We are certainly more at ease here, requiring only the most
    5.26 + basic features of the Isar language.
    5.27  *};
    5.28  
    5.29  theorem KnasterTarski': "mono f ==> EX a::'a set. f a = a";
     6.1 --- a/src/HOL/Isar_examples/MultisetOrder.thy	Sat Oct 30 20:13:16 1999 +0200
     6.2 +++ b/src/HOL/Isar_examples/MultisetOrder.thy	Sat Oct 30 20:20:48 1999 +0200
     6.3 @@ -10,7 +10,7 @@
     6.4  theory MultisetOrder = Multiset:;
     6.5  
     6.6  text_raw {*
     6.7 - \footnote{Original tactic script by Tobias Nipkow (see also
     6.8 + \footnote{Original tactic script by Tobias Nipkow (see
     6.9   \url{http://isabelle.in.tum.de/library/HOL/Induct/Multiset.html}),
    6.10   based on a pen-and-paper proof due to Wilfried Buchholz.}
    6.11  *};
    6.12 @@ -22,8 +22,8 @@
    6.13      (EX K. (ALL b. elem K b --> (b, a) : r) & N = M0 + K)"
    6.14    (concl is "?case1 (mult1 r) | ?case2");
    6.15  proof (unfold mult1_def);
    6.16 -  let ?r = "%K a. ALL b. elem K b --> (b, a) : r";
    6.17 -  let ?R = "%N M. EX a M0 K. M = M0 + {#a#} & N = M0 + K & ?r K a";
    6.18 +  let ?r = "\<lambda>K a. ALL b. elem K b --> (b, a) : r";
    6.19 +  let ?R = "\<lambda>N M. EX a M0 K. M = M0 + {#a#} & N = M0 + K & ?r K a";
    6.20    let ?case1 = "?case1 {(N, M). ?R N M}";
    6.21  
    6.22    assume "(N, M0 + {#a#}) : {(N, M). ?R N M}";
    6.23 @@ -61,7 +61,6 @@
    6.24  proof;
    6.25    let ?R = "mult1 r";
    6.26    let ?W = "acc ?R";
    6.27 -
    6.28    {{;
    6.29      fix M M0 a;
    6.30      assume M0: "M0 : ?W"
     7.1 --- a/src/HOL/Isar_examples/Peirce.thy	Sat Oct 30 20:13:16 1999 +0200
     7.2 +++ b/src/HOL/Isar_examples/Peirce.thy	Sat Oct 30 20:20:48 1999 +0200
     7.3 @@ -75,15 +75,15 @@
     7.4   individual parts of the proof configuration.
     7.5  
     7.6   Nevertheless, the ``strong'' mode of plain assumptions is quite
     7.7 - important in practice to achieve robustness of proof document
     7.8 + important in practice to achieve robustness of proof text
     7.9   interpretation.  By forcing both the conclusion \emph{and} the
    7.10   assumptions to unify with the pending goal to be solved, goal
    7.11   selection becomes quite deterministic.  For example, decomposition
    7.12 - with ``case-analysis'' type rules usually give rise to several goals
    7.13 - that only differ in there local contexts.  With strong assumptions
    7.14 - these may be still solved in any order in a predictable way, while
    7.15 - weak ones would quickly lead to great confusion, eventually demanding
    7.16 - even some backtracking.
    7.17 + with rules of the ``case-analysis'' type usually gives rise to
     7.18 + several goals that only differ in their local contexts.  With strong
    7.19 + assumptions these may be still solved in any order in a predictable
    7.20 + way, while weak ones would quickly lead to great confusion,
    7.21 + eventually demanding even some backtracking.
    7.22  *};
    7.23  
    7.24  end;
     8.1 --- a/src/HOL/Isar_examples/Summation.thy	Sat Oct 30 20:13:16 1999 +0200
     8.2 +++ b/src/HOL/Isar_examples/Summation.thy	Sat Oct 30 20:20:48 1999 +0200
     8.3 @@ -16,7 +16,7 @@
     8.4  
     8.5  text {*
     8.6   Subsequently, we prove some summation laws of natural numbers
     8.7 - (including odds, squares and cubes).  These examples demonstrate how
     8.8 + (including odds, squares, and cubes).  These examples demonstrate how
     8.9   plain natural deduction (including induction) may be combined with
    8.10   calculational proof.
    8.11  *};
    8.12 @@ -26,25 +26,25 @@
    8.13  
    8.14  text {*
    8.15    The binder operator $\idt{sum} :: (\idt{nat} \To \idt{nat}) \To
    8.16 - \idt{nat} \To \idt{nat}$ formalizes summation from $0$ up to $k$
    8.17 - (excluding the bound).
    8.18 + \idt{nat} \To \idt{nat}$ formalizes summation of natural numbers
    8.19 + indexed from $0$ up to $k$ (excluding the bound):
    8.20   \[
    8.21   \sum\limits_{i < k} f(i) = \idt{sum} \ap (\lam i f \ap i) \ap k
    8.22   \]
    8.23  *};
    8.24  
    8.25  consts
    8.26 -  sum   :: "[nat => nat, nat] => nat";
    8.27 +  sum :: "[nat => nat, nat] => nat";
    8.28  
    8.29  primrec
    8.30    "sum f 0 = 0"
    8.31    "sum f (Suc n) = f n + sum f n";
    8.32  
    8.33  syntax
    8.34 -  "_SUM" :: "idt => nat => nat => nat"
    8.35 +  "_SUM" :: "[idt, nat, nat] => nat"
    8.36      ("SUM _ < _. _" [0, 0, 10] 10);
    8.37  translations
    8.38 -  "SUM i < k. b" == "sum (%i. b) k";
    8.39 +  "SUM i < k. b" == "sum (\<lambda>i. b) k";
    8.40  
    8.41  
    8.42  subsection {* Summation laws *};
    8.43 @@ -69,8 +69,8 @@
    8.44  
    8.45  text {*
    8.46   The sum of natural numbers $0 + \cdots + n$ equals $n \times (n +
    8.47 - 1)/2$.  In order to avoid formal reasoning about division, we just
    8.48 - show $2 \times \Sigma_{i < n} i = n \times (n + 1)$.
    8.49 + 1)/2$.  Avoiding formal reasoning about division we prove this
    8.50 + equation multiplied by $2$.
    8.51  *};
    8.52  
    8.53  theorem sum_of_naturals:
    8.54 @@ -90,23 +90,22 @@
    8.55   The above proof is a typical instance of mathematical induction.  The
    8.56   main statement is viewed as some $\var{P} \ap n$ that is split by the
    8.57   induction method into base case $\var{P} \ap 0$, and step case
    8.58 - $\var{P} \ap n \Impl \var{P} \ap (\idt{Suc} \ap n)$ for any $n$.
    8.59 + $\var{P} \ap n \Impl \var{P} \ap (\idt{Suc} \ap n)$ for arbitrary $n$.
    8.60  
    8.61   The step case is established by a short calculation in forward
    8.62   manner.  Starting from the left-hand side $\var{S} \ap (n + 1)$ of
    8.63 - the thesis, the final result is achieved by basic transformations
    8.64 - involving arithmetic reasoning (using the Simplifier).  The main
    8.65 - point is where the induction hypothesis $\var{S} \ap n = n \times (n
    8.66 - + 1)$ is introduced in order to replace a certain subterm.  So the
    8.67 + the thesis, the final result is achieved by transformations involving
    8.68 + basic arithmetic reasoning (using the Simplifier).  The main point is
    8.69 + where the induction hypothesis $\var{S} \ap n = n \times (n + 1)$ is
    8.70 + introduced in order to replace a certain subterm.  So the
    8.71   ``transitivity'' rule involved here is actual \emph{substitution}.
    8.72   Also note how the occurrence of ``\dots'' in the subsequent step
    8.73 - documents the position where the right-hand side of the hypotheses
    8.74 + documents the position where the right-hand side of the hypothesis
    8.75   got filled in.
    8.76  
    8.77   \medskip A further notable point here is integration of calculations
    8.78 - with plain natural deduction.  This works works quite well in Isar
    8.79 - for two reasons.
    8.80 -
    8.81 + with plain natural deduction.  This works so well in Isar for two
    8.82 + reasons.
    8.83   \begin{enumerate}
    8.84  
    8.85   \item Facts involved in \isakeyword{also}~/ \isakeyword{finally}
    8.86 @@ -116,19 +115,18 @@
    8.87  
    8.88   \item There are two \emph{separate} primitives for building natural
    8.89   deduction contexts: \isakeyword{fix}~$x$ and \isakeyword{assume}~$A$.
    8.90 - Thus it is possible to start reasoning with new ``arbitrary, but
    8.91 - fixed'' elements before bringing in the actual assumptions.
    8.92 - Occasionally, natural deduction is formalized with basic context
    8.93 - elements of the form $x:A$; this would rule out mixing with
    8.94 - calculations as done here.
    8.95 + Thus it is possible to start reasoning with some new ``arbitrary, but
    8.96 + fixed'' elements before bringing in the actual assumption.  In
    8.97 + contrast, natural deduction is occasionally formalized with basic
    8.98 + context elements of the form $x:A$ instead.
    8.99  
   8.100   \end{enumerate}
   8.101  *};
   8.102  
   8.103  text {*
   8.104 - \medskip We derive further summation laws for odds, squares, cubes as
   8.105 - follows.  The basic technique of induction plus calculation is the
   8.106 - same.
   8.107 + \medskip We derive further summation laws for odds, squares, and
   8.108 + cubes as follows.  The basic technique of induction plus calculation
   8.109 + is the same as before.
   8.110  *};
   8.111  
   8.112  theorem sum_of_odds:
   8.113 @@ -175,20 +173,20 @@
   8.114  text {*
   8.115   Comparing these examples with the tactic script version
   8.116   \url{http://isabelle.in.tum.de/library/HOL/ex/NatSum.html}, we note
   8.117 - an important difference how of induction vs.\ simplification is
   8.118 + an important difference of how induction vs.\ simplification is
   8.119   applied.  While \cite[\S10]{isabelle-ref} advises for these examples
   8.120   that ``induction should not be applied until the goal is in the
   8.121   simplest form'' this would be a very bad idea in our setting.
   8.122  
   8.123   Simplification normalizes all arithmetic expressions involved,
   8.124 - producing huge intermediate goals.  Applying induction afterwards,
   8.125 - the Isar document would then have to reflect the emerging
   8.126 - configuration by appropriate the sub-proofs.  This would result in
   8.127 - badly structured, low-level technical reasoning, without any good
   8.128 - idea of the actual point.
    8.129 + producing huge intermediate goals.  When applying induction
   8.130 + afterwards, the Isar proof text would have to reflect the emerging
   8.131 + configuration by appropriate sub-proofs.  This would result in badly
   8.132 + structured, low-level technical reasoning, without any good idea of
   8.133 + the actual point.
   8.134  
   8.135   \medskip As a general rule of good proof style, automatic methods
   8.136 - such as $\idt{simp}$ or $\idt{auto}$ should normally never used as
    8.137 + such as $\idt{simp}$ or $\idt{auto}$ should normally never be used as
   8.138   initial proof methods, but only as terminal ones, solving certain
   8.139   goals completely.
   8.140  *};
     9.1 --- a/src/HOL/Isar_examples/W_correct.thy	Sat Oct 30 20:13:16 1999 +0200
     9.2 +++ b/src/HOL/Isar_examples/W_correct.thy	Sat Oct 30 20:20:48 1999 +0200
     9.3 @@ -39,7 +39,7 @@
     9.4      AppI: "[| a |- e1 :: t2 -> t1; a |- e2 :: t2 |]
     9.5                ==> a |- App e1 e2 :: t1";
     9.6  
     9.7 -text {* Type assigment is close wrt.\ substitution. *};
     9.8 +text {* Type assignment is closed wrt.\ substitution. *};
     9.9  
    9.10  lemma has_type_subst_closed: "a |- e :: t ==> $s a |- e :: $s t";
    9.11  proof -;
    9.12 @@ -79,10 +79,10 @@
    9.13  
    9.14  primrec
    9.15    "W (Var i) a n =
    9.16 -      (if i < length a then Ok(id_subst, a ! i, n) else Fail)"
    9.17 +      (if i < length a then Ok (id_subst, a ! i, n) else Fail)"
    9.18    "W (Abs e) a n =
    9.19        ((s, t, m) := W e (TVar n # a) (Suc n);
    9.20 -       Ok(s, (s n) -> t, m))"
    9.21 +       Ok (s, (s n) -> t, m))"
    9.22    "W (App e1 e2) a n =
    9.23        ((s1, t1, m1) := W e1 a n;
    9.24         (s2, t2, m2) := W e2 ($s1 a) m1;
    9.25 @@ -92,9 +92,13 @@
    9.26  
    9.27  subsection {* Correctness theorem *};
    9.28  
    9.29 +text_raw {* \begin{comment} *};
    9.30 +
    9.31  (* FIXME proper split att/mod *)
    9.32  ML_setup {* Addsplits [split_bind]; *};
    9.33  
    9.34 +text_raw {* \end{comment} *};
    9.35 +
    9.36  theorem W_correct: "W e a n = Ok (s, t, m) ==> $ s a |- e :: t";
    9.37  proof -;
    9.38    assume W_ok: "W e a n = Ok (s, t, m)";
    10.1 --- a/src/HOL/Isar_examples/document/style.tex	Sat Oct 30 20:13:16 1999 +0200
    10.2 +++ b/src/HOL/Isar_examples/document/style.tex	Sat Oct 30 20:20:48 1999 +0200
    10.3 @@ -2,7 +2,7 @@
    10.4  %% $Id$
    10.5  
    10.6  \documentclass[11pt,a4paper]{article}
    10.7 -\usepackage{comment,proof,isabelle,pdfsetup}
    10.8 +\usepackage{comment,proof,isabelle,isabellesym,pdfsetup}
    10.9  
   10.10  \renewcommand{\isamarkupheader}[1]{\section{#1}}
   10.11