misc tuning and modernization;
author wenzelm
Thu Jul 01 18:31:46 2010 +0200 (2010-07-01)
changeset 37671 fa53d267dab3
parent 37670 0ce594837524
child 37672 645eb9fec794
src/HOL/Isar_Examples/Basic_Logic.thy
src/HOL/Isar_Examples/Cantor.thy
src/HOL/Isar_Examples/Drinker.thy
src/HOL/Isar_Examples/Expr_Compiler.thy
src/HOL/Isar_Examples/Fibonacci.thy
src/HOL/Isar_Examples/Group.thy
src/HOL/Isar_Examples/Hoare.thy
src/HOL/Isar_Examples/Hoare_Ex.thy
src/HOL/Isar_Examples/Knaster_Tarski.thy
src/HOL/Isar_Examples/Mutilated_Checkerboard.thy
src/HOL/Isar_Examples/Nested_Datatype.thy
src/HOL/Isar_Examples/Peirce.thy
src/HOL/Isar_Examples/Puzzle.thy
src/HOL/Isar_Examples/Summation.thy
     1.1 --- a/src/HOL/Isar_Examples/Basic_Logic.thy	Thu Jul 01 14:32:57 2010 +0200
     1.2 +++ b/src/HOL/Isar_Examples/Basic_Logic.thy	Thu Jul 01 18:31:46 2010 +0200
     1.3 @@ -13,12 +13,10 @@
     1.4  
     1.5  subsection {* Pure backward reasoning *}
     1.6  
     1.7 -text {*
     1.8 -  In order to get a first idea of how Isabelle/Isar proof documents
     1.9 -  may look like, we consider the propositions @{text I}, @{text K},
    1.10 -  and @{text S}.  The following (rather explicit) proofs should
    1.11 -  require little extra explanations.
    1.12 -*}
    1.13 +text {* In order to get a first idea of how Isabelle/Isar proof
    1.14 +  documents may look like, we consider the propositions @{text I},
    1.15 +  @{text K}, and @{text S}.  The following (rather explicit) proofs
    1.16 +  should require little extra explanations. *}
    1.17  
    1.18  lemma I: "A --> A"
    1.19  proof
    1.20 @@ -53,15 +51,14 @@
    1.21    qed
    1.22  qed
    1.23  
    1.24 -text {*
    1.25 -  Isar provides several ways to fine-tune the reasoning, avoiding
    1.26 -  excessive detail.  Several abbreviated language elements are
    1.27 -  available, enabling the writer to express proofs in a more concise
    1.28 -  way, even without referring to any automated proof tools yet.
    1.29 +text {* Isar provides several ways to fine-tune the reasoning,
    1.30 +  avoiding excessive detail.  Several abbreviated language elements
    1.31 +  are available, enabling the writer to express proofs in a more
    1.32 +  concise way, even without referring to any automated proof tools
    1.33 +  yet.
    1.34  
    1.35    First of all, proof by assumption may be abbreviated as a single
    1.36 -  dot.
    1.37 -*}
    1.38 +  dot. *}
    1.39  
    1.40  lemma "A --> A"
    1.41  proof
    1.42 @@ -69,51 +66,42 @@
    1.43    show A by fact+
    1.44  qed
    1.45  
    1.46 -text {*
    1.47 -  In fact, concluding any (sub-)proof already involves solving any
    1.48 -  remaining goals by assumption\footnote{This is not a completely
    1.49 +text {* In fact, concluding any (sub-)proof already involves solving
    1.50 +  any remaining goals by assumption\footnote{This is not a completely
    1.51    trivial operation, as proof by assumption may involve full
    1.52    higher-order unification.}.  Thus we may skip the rather vacuous
    1.53 -  body of the above proof as well.
    1.54 -*}
    1.55 +  body of the above proof as well. *}
    1.56  
    1.57  lemma "A --> A"
    1.58  proof
    1.59  qed
    1.60  
    1.61 -text {*
    1.62 -  Note that the \isacommand{proof} command refers to the @{text rule}
    1.63 -  method (without arguments) by default.  Thus it implicitly applies a
    1.64 -  single rule, as determined from the syntactic form of the statements
    1.65 -  involved.  The \isacommand{by} command abbreviates any proof with
    1.66 -  empty body, so the proof may be further pruned.
    1.67 -*}
    1.68 +text {* Note that the \isacommand{proof} command refers to the @{text
    1.69 +  rule} method (without arguments) by default.  Thus it implicitly
    1.70 +  applies a single rule, as determined from the syntactic form of the
    1.71 +  statements involved.  The \isacommand{by} command abbreviates any
    1.72 +  proof with empty body, so the proof may be further pruned. *}
    1.73  
    1.74  lemma "A --> A"
    1.75    by rule
    1.76  
    1.77 -text {*
    1.78 -  Proof by a single rule may be abbreviated as double-dot.
    1.79 -*}
    1.80 +text {* Proof by a single rule may be abbreviated as double-dot. *}
    1.81  
    1.82  lemma "A --> A" ..
    1.83  
    1.84 -text {*
    1.85 -  Thus we have arrived at an adequate representation of the proof of a
    1.86 -  tautology that holds by a single standard rule.\footnote{Apparently,
    1.87 -  the rule here is implication introduction.}
    1.88 -*}
    1.89 +text {* Thus we have arrived at an adequate representation of the
    1.90 +  proof of a tautology that holds by a single standard
    1.91 +  rule.\footnote{Apparently, the rule here is implication
    1.92 +  introduction.} *}
    1.93  
    1.94 -text {*
    1.95 -  Let us also reconsider @{text K}.  Its statement is composed of
    1.96 -  iterated connectives.  Basic decomposition is by a single rule at a
    1.97 -  time, which is why our first version above was by nesting two
    1.98 +text {* Let us also reconsider @{text K}.  Its statement is composed
    1.99 +  of iterated connectives.  Basic decomposition is by a single rule at
   1.100 +  a time, which is why our first version above was by nesting two
   1.101    proofs.
   1.102  
   1.103    The @{text intro} proof method repeatedly decomposes a goal's
   1.104    conclusion.\footnote{The dual method is @{text elim}, acting on a
   1.105 -  goal's premises.}
   1.106 -*}
   1.107 +  goal's premises.} *}
   1.108  
   1.109  lemma "A --> B --> A"
   1.110  proof (intro impI)
   1.111 @@ -121,16 +109,13 @@
   1.112    show A by fact
   1.113  qed
   1.114  
   1.115 -text {*
   1.116 -  Again, the body may be collapsed.
   1.117 -*}
   1.118 +text {* Again, the body may be collapsed. *}
   1.119  
   1.120  lemma "A --> B --> A"
   1.121    by (intro impI)
   1.122  
   1.123 -text {*
   1.124 -  Just like @{text rule}, the @{text intro} and @{text elim} proof
   1.125 -  methods pick standard structural rules, in case no explicit
   1.126 +text {* Just like @{text rule}, the @{text intro} and @{text elim}
   1.127 +  proof methods pick standard structural rules, in case no explicit
   1.128    arguments are given.  While implicit rules are usually just fine for
   1.129    single rule application, this may go too far with iteration.  Thus
   1.130    in practice, @{text intro} and @{text elim} would be typically
   1.131 @@ -142,21 +127,18 @@
   1.132    prime application of @{text intro} and @{text elim}.  In contrast,
   1.133    terminal steps that solve a goal completely are usually performed by
   1.134    actual automated proof methods (such as \isacommand{by}~@{text
   1.135 -  blast}.
   1.136 -*}
   1.137 +  blast}. *}
   1.138  
   1.139  
   1.140  subsection {* Variations of backward vs.\ forward reasoning *}
   1.141  
   1.142 -text {*
   1.143 -  Certainly, any proof may be performed in backward-style only.  On
   1.144 -  the other hand, small steps of reasoning are often more naturally
   1.145 +text {* Certainly, any proof may be performed in backward-style only.
   1.146 +  On the other hand, small steps of reasoning are often more naturally
   1.147    expressed in forward-style.  Isar supports both backward and forward
   1.148    reasoning as a first-class concept.  In order to demonstrate the
   1.149    difference, we consider several proofs of @{text "A \<and> B \<longrightarrow> B \<and> A"}.
   1.150  
   1.151 -  The first version is purely backward.
   1.152 -*}
   1.153 +  The first version is purely backward. *}
   1.154  
   1.155  lemma "A & B --> B & A"
   1.156  proof
   1.157 @@ -168,14 +150,13 @@
   1.158    qed
   1.159  qed
   1.160  
   1.161 -text {*
   1.162 -  Above, the @{text "conjunct_1/2"} projection rules had to be named
   1.163 -  explicitly, since the goals @{text B} and @{text A} did not provide
   1.164 -  any structural clue.  This may be avoided using \isacommand{from} to
   1.165 -  focus on the @{text "A \<and> B"} assumption as the current facts,
   1.166 -  enabling the use of double-dot proofs.  Note that \isacommand{from}
   1.167 -  already does forward-chaining, involving the \name{conjE} rule here.
   1.168 -*}
   1.169 +text {* Above, the @{text "conjunct_1/2"} projection rules had to be
   1.170 +  named explicitly, since the goals @{text B} and @{text A} did not
   1.171 +  provide any structural clue.  This may be avoided using
   1.172 +  \isacommand{from} to focus on the @{text "A \<and> B"} assumption as the
   1.173 +  current facts, enabling the use of double-dot proofs.  Note that
   1.174 +  \isacommand{from} already does forward-chaining, involving the
   1.175 +  @{text conjE} rule here. *}
   1.176  
   1.177  lemma "A & B --> B & A"
   1.178  proof
   1.179 @@ -187,15 +168,13 @@
   1.180    qed
   1.181  qed
   1.182  
   1.183 -text {*
   1.184 -  In the next version, we move the forward step one level upwards.
   1.185 -  Forward-chaining from the most recent facts is indicated by the
   1.186 -  \isacommand{then} command.  Thus the proof of @{text "B \<and> A"} from
   1.187 -  @{text "A \<and> B"} actually becomes an elimination, rather than an
   1.188 +text {* In the next version, we move the forward step one level
   1.189 +  upwards.  Forward-chaining from the most recent facts is indicated
   1.190 +  by the \isacommand{then} command.  Thus the proof of @{text "B \<and> A"}
   1.191 +  from @{text "A \<and> B"} actually becomes an elimination, rather than an
   1.192    introduction.  The resulting proof structure directly corresponds to
   1.193    that of the @{text conjE} rule, including the repeated goal
   1.194 -  proposition that is abbreviated as @{text ?thesis} below.
   1.195 -*}
   1.196 +  proposition that is abbreviated as @{text ?thesis} below. *}
   1.197  
   1.198  lemma "A & B --> B & A"
   1.199  proof
   1.200 @@ -207,11 +186,9 @@
   1.201    qed
   1.202  qed
   1.203  
   1.204 -text {*
   1.205 -  In the subsequent version we flatten the structure of the main body
   1.206 -  by doing forward reasoning all the time.  Only the outermost
   1.207 -  decomposition step is left as backward.
   1.208 -*}
   1.209 +text {* In the subsequent version we flatten the structure of the main
   1.210 +  body by doing forward reasoning all the time.  Only the outermost
   1.211 +  decomposition step is left as backward. *}
   1.212  
   1.213  lemma "A & B --> B & A"
   1.214  proof
   1.215 @@ -221,11 +198,9 @@
   1.216    from `B` `A` show "B & A" ..
   1.217  qed
   1.218  
   1.219 -text {*
   1.220 -  We can still push forward-reasoning a bit further, even at the risk
   1.221 -  of getting ridiculous.  Note that we force the initial proof step to
   1.222 -  do nothing here, by referring to the ``-'' proof method.
   1.223 -*}
   1.224 +text {* We can still push forward-reasoning a bit further, even at the
   1.225 +  risk of getting ridiculous.  Note that we force the initial proof
   1.226 +  step to do nothing here, by referring to the ``-'' proof method. *}
   1.227  
   1.228  lemma "A & B --> B & A"
   1.229  proof -
   1.230 @@ -235,15 +210,14 @@
   1.231      from `A & B` have B ..
   1.232      from `B` `A` have "B & A" ..
   1.233    }
   1.234 -  then show ?thesis ..         -- {* rule \name{impI} *}
   1.235 +  then show ?thesis ..         -- {* rule @{text impI} *}
   1.236  qed
   1.237  
   1.238 -text {*
   1.239 -  \medskip With these examples we have shifted through a whole range
   1.240 -  from purely backward to purely forward reasoning.  Apparently, in
   1.241 -  the extreme ends we get slightly ill-structured proofs, which also
   1.242 -  require much explicit naming of either rules (backward) or local
   1.243 -  facts (forward).
   1.244 +text {* \medskip With these examples we have shifted through a whole
   1.245 +  range from purely backward to purely forward reasoning.  Apparently,
   1.246 +  in the extreme ends we get slightly ill-structured proofs, which
   1.247 +  also require much explicit naming of either rules (backward) or
   1.248 +  local facts (forward).
   1.249  
   1.250    The general lesson learned here is that good proof style would
   1.251    achieve just the \emph{right} balance of top-down backward
   1.252 @@ -252,14 +226,11 @@
   1.253    course.  Depending on the actual applications, the intended audience
   1.254    etc., rules (and methods) on the one hand vs.\ facts on the other
   1.255    hand have to be emphasized in an appropriate way.  This requires the
   1.256 -  proof writer to develop good taste, and some practice, of course.
   1.257 -*}
   1.258 +  proof writer to develop good taste, and some practice, of course. *}
   1.259  
   1.260 -text {*
   1.261 -  For our example the most appropriate way of reasoning is probably
   1.262 -  the middle one, with conjunction introduction done after
   1.263 -  elimination.
   1.264 -*}
   1.265 +text {* For our example the most appropriate way of reasoning is
   1.266 +  probably the middle one, with conjunction introduction done after
   1.267 +  elimination. *}
   1.268  
   1.269  lemma "A & B --> B & A"
   1.270  proof
   1.271 @@ -275,18 +246,15 @@
   1.272  
   1.273  subsection {* A few examples from ``Introduction to Isabelle'' *}
   1.274  
   1.275 -text {*
   1.276 -  We rephrase some of the basic reasoning examples of
   1.277 -  \cite{isabelle-intro}, using HOL rather than FOL.
   1.278 -*}
   1.279 +text {* We rephrase some of the basic reasoning examples of
   1.280 +  \cite{isabelle-intro}, using HOL rather than FOL. *}
   1.281 +
   1.282  
   1.283  subsubsection {* A propositional proof *}
   1.284  
   1.285 -text {*
   1.286 -  We consider the proposition @{text "P \<or> P \<longrightarrow> P"}.  The proof below
   1.287 -  involves forward-chaining from @{text "P \<or> P"}, followed by an
   1.288 -  explicit case-analysis on the two \emph{identical} cases.
   1.289 -*}
   1.290 +text {* We consider the proposition @{text "P \<or> P \<longrightarrow> P"}.  The proof
   1.291 +  below involves forward-chaining from @{text "P \<or> P"}, followed by an
   1.292 +  explicit case-analysis on the two \emph{identical} cases. *}
   1.293  
   1.294  lemma "P | P --> P"
   1.295  proof
   1.296 @@ -301,10 +269,9 @@
   1.297    qed
   1.298  qed
   1.299  
   1.300 -text {*
   1.301 -  Case splits are \emph{not} hardwired into the Isar language as a
   1.302 -  special feature.  The \isacommand{next} command used to separate the
   1.303 -  cases above is just a short form of managing block structure.
   1.304 +text {* Case splits are \emph{not} hardwired into the Isar language as
   1.305 +  a special feature.  The \isacommand{next} command used to separate
   1.306 +  the cases above is just a short form of managing block structure.
   1.307  
   1.308    \medskip In general, applying proof methods may split up a goal into
   1.309    separate ``cases'', i.e.\ new subgoals with individual local
   1.310 @@ -322,8 +289,7 @@
   1.311  
   1.312    \medskip In our example the situation is even simpler, since the two
   1.313    cases actually coincide.  Consequently the proof may be rephrased as
   1.314 -  follows.
   1.315 -*}
   1.316 +  follows. *}
   1.317  
   1.318  lemma "P | P --> P"
   1.319  proof
   1.320 @@ -336,12 +302,10 @@
   1.321    qed
   1.322  qed
   1.323  
   1.324 -text {*
   1.325 -  Again, the rather vacuous body of the proof may be collapsed.  Thus
   1.326 -  the case analysis degenerates into two assumption steps, which are
   1.327 -  implicitly performed when concluding the single rule step of the
   1.328 -  double-dot proof as follows.
   1.329 -*}
   1.330 +text {* Again, the rather vacuous body of the proof may be collapsed.
   1.331 +  Thus the case analysis degenerates into two assumption steps, which
   1.332 +  are implicitly performed when concluding the single rule step of the
   1.333 +  double-dot proof as follows. *}
   1.334  
   1.335  lemma "P | P --> P"
   1.336  proof
   1.337 @@ -352,26 +316,24 @@
   1.338  
   1.339  subsubsection {* A quantifier proof *}
   1.340  
   1.341 -text {*
   1.342 -  To illustrate quantifier reasoning, let us prove @{text "(\<exists>x. P (f
   1.343 -  x)) \<longrightarrow> (\<exists>y. P y)"}.  Informally, this holds because any @{text a}
   1.344 -  with @{text "P (f a)"} may be taken as a witness for the second
   1.345 -  existential statement.
   1.346 +text {* To illustrate quantifier reasoning, let us prove @{text
   1.347 +  "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"}.  Informally, this holds because any
   1.348 +  @{text a} with @{text "P (f a)"} may be taken as a witness for the
   1.349 +  second existential statement.
   1.350  
   1.351    The first proof is rather verbose, exhibiting quite a lot of
   1.352    (redundant) detail.  It gives explicit rules, even with some
   1.353    instantiation.  Furthermore, we encounter two new language elements:
   1.354    the \isacommand{fix} command augments the context by some new
   1.355    ``arbitrary, but fixed'' element; the \isacommand{is} annotation
   1.356 -  binds term abbreviations by higher-order pattern matching.
   1.357 -*}
   1.358 +  binds term abbreviations by higher-order pattern matching. *}
   1.359  
   1.360  lemma "(EX x. P (f x)) --> (EX y. P y)"
   1.361  proof
   1.362    assume "EX x. P (f x)"
   1.363    then show "EX y. P y"
   1.364    proof (rule exE)             -- {*
   1.365 -    rule \name{exE}: \smash{$\infer{B}{\ex x A(x) & \infer*{B}{[A(x)]_x}}$}
   1.366 +    rule @{text exE}: \smash{$\infer{B}{\ex x A(x) & \infer*{B}{[A(x)]_x}}$}
   1.367    *}
   1.368      fix a
   1.369      assume "P (f a)" (is "P ?witness")
   1.370 @@ -379,14 +341,12 @@
   1.371    qed
   1.372  qed
   1.373  
   1.374 -text {*
   1.375 -  While explicit rule instantiation may occasionally improve
   1.376 +text {* While explicit rule instantiation may occasionally improve
   1.377    readability of certain aspects of reasoning, it is usually quite
   1.378    redundant.  Above, the basic proof outline gives already enough
   1.379    structural clues for the system to infer both the rules and their
   1.380    instances (by higher-order unification).  Thus we may as well prune
   1.381 -  the text as follows.
   1.382 -*}
   1.383 +  the text as follows. *}
   1.384  
   1.385  lemma "(EX x. P (f x)) --> (EX y. P y)"
   1.386  proof
   1.387 @@ -399,12 +359,10 @@
   1.388    qed
   1.389  qed
   1.390  
   1.391 -text {*
   1.392 -  Explicit @{text \<exists>}-elimination as seen above can become quite
   1.393 +text {* Explicit @{text \<exists>}-elimination as seen above can become quite
   1.394    cumbersome in practice.  The derived Isar language element
   1.395    ``\isakeyword{obtain}'' provides a more handsome way to do
   1.396 -  generalized existence reasoning.
   1.397 -*}
   1.398 +  generalized existence reasoning. *}
   1.399  
   1.400  lemma "(EX x. P (f x)) --> (EX y. P y)"
   1.401  proof
   1.402 @@ -413,26 +371,21 @@
   1.403    then show "EX y. P y" ..
   1.404  qed
   1.405  
   1.406 -text {*
   1.407 -  Technically, \isakeyword{obtain} is similar to \isakeyword{fix} and
   1.408 -  \isakeyword{assume} together with a soundness proof of the
   1.409 -  elimination involved.  Thus it behaves similar to any other forward
   1.410 -  proof element.  Also note that due to the nature of general
   1.411 -  existence reasoning involved here, any result exported from the
   1.412 -  context of an \isakeyword{obtain} statement may \emph{not} refer to
   1.413 -  the parameters introduced there.
   1.414 -*}
   1.415 -
   1.416 +text {* Technically, \isakeyword{obtain} is similar to
   1.417 +  \isakeyword{fix} and \isakeyword{assume} together with a soundness
   1.418 +  proof of the elimination involved.  Thus it behaves similar to any
   1.419 +  other forward proof element.  Also note that due to the nature of
   1.420 +  general existence reasoning involved here, any result exported from
   1.421 +  the context of an \isakeyword{obtain} statement may \emph{not} refer
   1.422 +  to the parameters introduced there. *}
   1.423  
   1.424  
   1.425  subsubsection {* Deriving rules in Isabelle *}
   1.426  
   1.427 -text {*
   1.428 -  We derive the conjunction elimination rule from the corresponding
   1.429 -  projections.  The proof is quite straight-forward, since
   1.430 -  Isabelle/Isar supports non-atomic goals and assumptions fully
   1.431 -  transparently.
   1.432 -*}
   1.433 +text {* We derive the conjunction elimination rule from the
   1.434 +  corresponding projections.  The proof is quite straight-forward,
   1.435 +  since Isabelle/Isar supports non-atomic goals and assumptions fully
   1.436 +  transparently. *}
   1.437  
   1.438  theorem conjE: "A & B ==> (A ==> B ==> C) ==> C"
   1.439  proof -
     2.1 --- a/src/HOL/Isar_Examples/Cantor.thy	Thu Jul 01 14:32:57 2010 +0200
     2.2 +++ b/src/HOL/Isar_Examples/Cantor.thy	Thu Jul 01 18:31:46 2010 +0200
     2.3 @@ -8,14 +8,11 @@
     2.4  imports Main
     2.5  begin
     2.6  
     2.7 -text_raw {*
     2.8 -  \footnote{This is an Isar version of the final example of the
     2.9 -  Isabelle/HOL manual \cite{isabelle-HOL}.}
    2.10 -*}
    2.11 +text_raw {* \footnote{This is an Isar version of the final example of
    2.12 +  the Isabelle/HOL manual \cite{isabelle-HOL}.}  *}
    2.13  
    2.14 -text {*
    2.15 -  Cantor's Theorem states that every set has more subsets than it has
    2.16 -  elements.  It has become a favorite basic example in pure
    2.17 +text {* Cantor's Theorem states that every set has more subsets than
    2.18 +  it has elements.  It has become a favorite basic example in pure
    2.19    higher-order logic since it is so easily expressed: \[\all{f::\alpha
    2.20    \To \alpha \To \idt{bool}} \ex{S::\alpha \To \idt{bool}}
    2.21    \all{x::\alpha} f \ap x \not= S\]
    2.22 @@ -25,8 +22,7 @@
    2.23    every function from $\alpha$ to its powerset, some subset is outside
    2.24    its range.  The Isabelle/Isar proofs below uses HOL's set theory,
    2.25    with the type $\alpha \ap \idt{set}$ and the operator
    2.26 -  $\idt{range}::(\alpha \To \beta) \To \beta \ap \idt{set}$.
    2.27 -*}
    2.28 +  $\idt{range}::(\alpha \To \beta) \To \beta \ap \idt{set}$. *}
    2.29  
    2.30  theorem "EX S. S ~: range (f :: 'a => 'a set)"
    2.31  proof
    2.32 @@ -48,24 +44,20 @@
    2.33    qed
    2.34  qed
    2.35  
    2.36 -text {*
    2.37 -  How much creativity is required?  As it happens, Isabelle can prove
    2.38 -  this theorem automatically using best-first search.  Depth-first
    2.39 -  search would diverge, but best-first search successfully navigates
    2.40 -  through the large search space.  The context of Isabelle's classical
    2.41 -  prover contains rules for the relevant constructs of HOL's set
    2.42 -  theory.
    2.43 -*}
    2.44 +text {* How much creativity is required?  As it happens, Isabelle can
    2.45 +  prove this theorem automatically using best-first search.
    2.46 +  Depth-first search would diverge, but best-first search successfully
    2.47 +  navigates through the large search space.  The context of Isabelle's
    2.48 +  classical prover contains rules for the relevant constructs of HOL's
    2.49 +  set theory.  *}
    2.50  
    2.51  theorem "EX S. S ~: range (f :: 'a => 'a set)"
    2.52    by best
    2.53  
    2.54 -text {*
    2.55 -  While this establishes the same theorem internally, we do not get
    2.56 -  any idea of how the proof actually works.  There is currently no way
    2.57 -  to transform internal system-level representations of Isabelle
    2.58 +text {* While this establishes the same theorem internally, we do not
    2.59 +  get any idea of how the proof actually works.  There is currently no
    2.60 +  way to transform internal system-level representations of Isabelle
    2.61    proofs back into Isar text.  Writing intelligible proof documents
    2.62 -  really is a creative process, after all.
    2.63 -*}
    2.64 +  really is a creative process, after all. *}
    2.65  
    2.66  end
     3.1 --- a/src/HOL/Isar_Examples/Drinker.thy	Thu Jul 01 14:32:57 2010 +0200
     3.2 +++ b/src/HOL/Isar_Examples/Drinker.thy	Thu Jul 01 18:31:46 2010 +0200
     3.3 @@ -8,24 +8,21 @@
     3.4  imports Main
     3.5  begin
     3.6  
     3.7 -text {*
     3.8 -  Here is another example of classical reasoning: the Drinker's
     3.9 +text {* Here is another example of classical reasoning: the Drinker's
    3.10    Principle says that for some person, if he is drunk, everybody else
    3.11    is drunk!
    3.12  
    3.13 -  We first prove a classical part of de-Morgan's law.
    3.14 -*}
    3.15 +  We first prove a classical part of de-Morgan's law. *}
    3.16  
    3.17 -lemma deMorgan:
    3.18 +lemma de_Morgan:
    3.19    assumes "\<not> (\<forall>x. P x)"
    3.20    shows "\<exists>x. \<not> P x"
    3.21 -  using prems
    3.22 +  using assms
    3.23  proof (rule contrapos_np)
    3.24    assume a: "\<not> (\<exists>x. \<not> P x)"
    3.25    show "\<forall>x. P x"
    3.26    proof
    3.27 -    fix x
    3.28 -    show "P x"
    3.29 +    fix x show "P x"
    3.30      proof (rule classical)
    3.31        assume "\<not> P x"
    3.32        then have "\<exists>x. \<not> P x" ..
    3.33 @@ -41,12 +38,12 @@
    3.34    then show ?thesis ..
    3.35  next
    3.36    assume "\<not> (\<forall>x. drunk x)"
    3.37 -  then have "\<exists>x. \<not> drunk x" by (rule deMorgan)
    3.38 +  then have "\<exists>x. \<not> drunk x" by (rule de_Morgan)
    3.39    then obtain a where a: "\<not> drunk a" ..
    3.40    have "drunk a \<longrightarrow> (\<forall>x. drunk x)"
    3.41    proof
    3.42      assume "drunk a"
    3.43 -    with a show "\<forall>x. drunk x" by (contradiction)
    3.44 +    with a show "\<forall>x. drunk x" by contradiction
    3.45    qed
    3.46    then show ?thesis ..
    3.47  qed
     4.1 --- a/src/HOL/Isar_Examples/Expr_Compiler.thy	Thu Jul 01 14:32:57 2010 +0200
     4.2 +++ b/src/HOL/Isar_Examples/Expr_Compiler.thy	Thu Jul 01 18:31:46 2010 +0200
     4.3 @@ -10,20 +10,17 @@
     4.4  imports Main
     4.5  begin
     4.6  
     4.7 -text {*
     4.8 - This is a (rather trivial) example of program verification.  We model
     4.9 - a compiler for translating expressions to stack machine instructions,
    4.10 - and prove its correctness wrt.\ some evaluation semantics.
    4.11 -*}
    4.12 +text {* This is a (rather trivial) example of program verification.
    4.13 +  We model a compiler for translating expressions to stack machine
    4.14 +  instructions, and prove its correctness wrt.\ some evaluation
    4.15 +  semantics. *}
    4.16  
    4.17  
    4.18  subsection {* Binary operations *}
    4.19  
    4.20 -text {*
    4.21 - Binary operations are just functions over some type of values.  This
    4.22 - is both for abstract syntax and semantics, i.e.\ we use a ``shallow
    4.23 - embedding'' here.
    4.24 -*}
    4.25 +text {* Binary operations are just functions over some type of values.
    4.26 +  This is both for abstract syntax and semantics, i.e.\ we use a
    4.27 +  ``shallow embedding'' here. *}
    4.28  
    4.29  types
    4.30    'val binop = "'val => 'val => 'val"
    4.31 @@ -31,85 +28,71 @@
    4.32  
    4.33  subsection {* Expressions *}
    4.34  
    4.35 -text {*
    4.36 - The language of expressions is defined as an inductive type,
    4.37 - consisting of variables, constants, and binary operations on
    4.38 - expressions.
    4.39 -*}
    4.40 +text {* The language of expressions is defined as an inductive type,
    4.41 +  consisting of variables, constants, and binary operations on
    4.42 +  expressions. *}
    4.43  
    4.44  datatype ('adr, 'val) expr =
    4.45 -  Variable 'adr |
    4.46 -  Constant 'val |
    4.47 -  Binop "'val binop" "('adr, 'val) expr" "('adr, 'val) expr"
    4.48 +    Variable 'adr
    4.49 +  | Constant 'val
    4.50 +  | Binop "'val binop" "('adr, 'val) expr" "('adr, 'val) expr"
    4.51  
    4.52 -text {*
    4.53 - Evaluation (wrt.\ some environment of variable assignments) is
    4.54 - defined by primitive recursion over the structure of expressions.
    4.55 -*}
    4.56 +text {* Evaluation (wrt.\ some environment of variable assignments) is
    4.57 +  defined by primitive recursion over the structure of expressions. *}
    4.58  
    4.59 -consts
    4.60 -  eval :: "('adr, 'val) expr => ('adr => 'val) => 'val"
    4.61 -
    4.62 -primrec
    4.63 +primrec eval :: "('adr, 'val) expr => ('adr => 'val) => 'val"
    4.64 +where
    4.65    "eval (Variable x) env = env x"
    4.66 -  "eval (Constant c) env = c"
    4.67 -  "eval (Binop f e1 e2) env = f (eval e1 env) (eval e2 env)"
    4.68 +| "eval (Constant c) env = c"
    4.69 +| "eval (Binop f e1 e2) env = f (eval e1 env) (eval e2 env)"
    4.70  
    4.71  
    4.72  subsection {* Machine *}
    4.73  
    4.74 -text {*
    4.75 - Next we model a simple stack machine, with three instructions.
    4.76 -*}
    4.77 +text {* Next we model a simple stack machine, with three
    4.78 +  instructions. *}
    4.79  
    4.80  datatype ('adr, 'val) instr =
    4.81 -  Const 'val |
    4.82 -  Load 'adr |
    4.83 -  Apply "'val binop"
    4.84 +    Const 'val
    4.85 +  | Load 'adr
    4.86 +  | Apply "'val binop"
    4.87  
    4.88 -text {*
    4.89 - Execution of a list of stack machine instructions is easily defined
    4.90 - as follows.
    4.91 -*}
    4.92 +text {* Execution of a list of stack machine instructions is easily
    4.93 +  defined as follows. *}
    4.94  
    4.95 -consts
    4.96 +primrec
    4.97    exec :: "(('adr, 'val) instr) list
    4.98      => 'val list => ('adr => 'val) => 'val list"
    4.99 -
   4.100 -primrec
   4.101 +where
   4.102    "exec [] stack env = stack"
   4.103 -  "exec (instr # instrs) stack env =
   4.104 +| "exec (instr # instrs) stack env =
   4.105      (case instr of
   4.106        Const c => exec instrs (c # stack) env
   4.107      | Load x => exec instrs (env x # stack) env
   4.108      | Apply f => exec instrs (f (hd stack) (hd (tl stack))
   4.109                     # (tl (tl stack))) env)"
   4.110  
   4.111 -definition execute :: "(('adr, 'val) instr) list => ('adr => 'val) => 'val" where
   4.112 -  "execute instrs env == hd (exec instrs [] env)"
   4.113 +definition
   4.114 +  execute :: "(('adr, 'val) instr) list => ('adr => 'val) => 'val"
   4.115 +  where "execute instrs env = hd (exec instrs [] env)"
   4.116  
   4.117  
   4.118  subsection {* Compiler *}
   4.119  
   4.120 -text {*
   4.121 - We are ready to define the compilation function of expressions to
   4.122 - lists of stack machine instructions.
   4.123 -*}
   4.124 -
   4.125 -consts
   4.126 -  compile :: "('adr, 'val) expr => (('adr, 'val) instr) list"
   4.127 +text {* We are ready to define the compilation function of expressions
   4.128 +  to lists of stack machine instructions. *}
   4.129  
   4.130  primrec
   4.131 +  compile :: "('adr, 'val) expr => (('adr, 'val) instr) list"
   4.132 +where
   4.133    "compile (Variable x) = [Load x]"
   4.134 -  "compile (Constant c) = [Const c]"
   4.135 -  "compile (Binop f e1 e2) = compile e2 @ compile e1 @ [Apply f]"
   4.136 +| "compile (Constant c) = [Const c]"
   4.137 +| "compile (Binop f e1 e2) = compile e2 @ compile e1 @ [Apply f]"
   4.138  
   4.139  
   4.140 -text {*
   4.141 - The main result of this development is the correctness theorem for
   4.142 - $\idt{compile}$.  We first establish a lemma about $\idt{exec}$ and
   4.143 - list append.
   4.144 -*}
   4.145 +text {* The main result of this development is the correctness theorem
   4.146 +  for @{text compile}.  We first establish a lemma about @{text exec}
   4.147 +  and list append. *}
   4.148  
   4.149  lemma exec_append:
   4.150    "exec (xs @ ys) stack env =
   4.151 @@ -146,13 +129,11 @@
   4.152  qed
   4.153  
   4.154  
   4.155 -text {*
   4.156 - \bigskip In the proofs above, the \name{simp} method does quite a lot
   4.157 - of work behind the scenes (mostly ``functional program execution'').
   4.158 - Subsequently, the same reasoning is elaborated in detail --- at most
   4.159 - one recursive function definition is used at a time.  Thus we get a
   4.160 - better idea of what is actually going on.
   4.161 -*}
   4.162 +text {* \bigskip In the proofs above, the @{text simp} method does
   4.163 +  quite a lot of work behind the scenes (mostly ``functional program
   4.164 +  execution'').  Subsequently, the same reasoning is elaborated in
   4.165 +  detail --- at most one recursive function definition is used at a
   4.166 +  time.  Thus we get a better idea of what is actually going on. *}
   4.167  
   4.168  lemma exec_append':
   4.169    "exec (xs @ ys) stack env = exec ys (exec xs stack env) env"
   4.170 @@ -221,8 +202,7 @@
   4.171  
   4.172    have "execute (compile e) env = hd (exec (compile e) [] env)"
   4.173      by (simp add: execute_def)
   4.174 -  also from exec_compile
   4.175 -    have "exec (compile e) [] env = [eval e env]" .
   4.176 +  also from exec_compile have "exec (compile e) [] env = [eval e env]" .
   4.177    also have "hd ... = eval e env" by simp
   4.178    finally show ?thesis .
   4.179  qed
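As an aside, the stack-machine development in this hunk (`eval`, `exec`, `compile`, `execute` and the correctness statement behind `exec_append`) can be mirrored by a small Python sketch. The tuple encoding and the names `eval_expr`, `exec_instrs`, `compile_expr`, `execute` are ours, chosen for illustration; they are not part of the theory.

```python
def eval_expr(e, env):
    # mirrors eval: Variable, Constant, Binop f e1 e2
    kind = e[0]
    if kind == "Variable":
        return env[e[1]]
    if kind == "Constant":
        return e[1]
    _, f, e1, e2 = e
    return f(eval_expr(e1, env), eval_expr(e2, env))

def exec_instrs(instrs, stack, env):
    # mirrors exec: Const pushes, Load reads env, Apply consumes two values
    for instr in instrs:
        kind = instr[0]
        if kind == "Const":
            stack = [instr[1]] + stack
        elif kind == "Load":
            stack = [env[instr[1]]] + stack
        else:  # ("Apply", f): f (hd stack) (hd (tl stack)) on top of tl (tl stack)
            f = instr[1]
            stack = [f(stack[0], stack[1])] + stack[2:]
    return stack

def compile_expr(e):
    # mirrors compile: note e2 is compiled first, then e1, then Apply f
    kind = e[0]
    if kind == "Variable":
        return [("Load", e[1])]
    if kind == "Constant":
        return [("Const", e[1])]
    _, f, e1, e2 = e
    return compile_expr(e2) + compile_expr(e1) + [("Apply", f)]

def execute(instrs, env):
    # mirrors execute: run on the empty stack, take the head
    return exec_instrs(instrs, [], env)[0]
```

Because `exec_instrs` folds over the instruction list, the `exec_append` lemma — running `xs @ ys` equals running `ys` on the result of `xs` — holds by construction, and `execute (compile e) env` agrees with `eval_expr` on concrete inputs.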
     5.1 --- a/src/HOL/Isar_Examples/Fibonacci.thy	Thu Jul 01 14:32:57 2010 +0200
     5.2 +++ b/src/HOL/Isar_Examples/Fibonacci.thy	Thu Jul 01 18:31:46 2010 +0200
     5.3 @@ -15,22 +15,20 @@
     5.4  header {* Fib and Gcd commute *}
     5.5  
     5.6  theory Fibonacci
     5.7 -imports Primes
     5.8 +imports "../Old_Number_Theory/Primes"
     5.9  begin
    5.10  
    5.11 -text_raw {*
    5.12 - \footnote{Isar version by Gertrud Bauer.  Original tactic script by
    5.13 - Larry Paulson.  A few proofs of laws taken from
    5.14 - \cite{Concrete-Math}.}
    5.15 -*}
    5.16 +text_raw {* \footnote{Isar version by Gertrud Bauer.  Original tactic
    5.17 +  script by Larry Paulson.  A few proofs of laws taken from
    5.18 +  \cite{Concrete-Math}.} *}
    5.19  
    5.20  
    5.21  subsection {* Fibonacci numbers *}
    5.22  
    5.23  fun fib :: "nat \<Rightarrow> nat" where
    5.24    "fib 0 = 0"
    5.25 -  | "fib (Suc 0) = 1"
    5.26 -  | "fib (Suc (Suc x)) = fib x + fib (Suc x)"
    5.27 +| "fib (Suc 0) = 1"
    5.28 +| "fib (Suc (Suc x)) = fib x + fib (Suc x)"
    5.29  
    5.30  lemma [simp]: "0 < fib (Suc n)"
    5.31    by (induct n rule: fib.induct) simp_all
    5.32 @@ -102,7 +100,7 @@
    5.33    then have "gcd (fib m) (fib (n + m)) = gcd (fib (n + k + 1)) (fib (k + 1))"
    5.34      by (simp add: gcd_commute)
    5.35    also have "fib (n + k + 1)
    5.36 -    = fib (k + 1) * fib (n + 1) + fib k * fib n"
    5.37 +      = fib (k + 1) * fib (n + 1) + fib k * fib n"
    5.38      by (rule fib_add)
    5.39    also have "gcd ... (fib (k + 1)) = gcd (fib k * fib n) (fib (k + 1))"
    5.40      by (simp add: gcd_mult_add)
    5.41 @@ -139,18 +137,17 @@
    5.42        case False then have "m <= n" by simp
    5.43        from `0 < m` and False have "n - m < n" by simp
    5.44        with hyp have "gcd (fib m) (fib ((n - m) mod m))
    5.45 -        = gcd (fib m) (fib (n - m))" by simp
    5.46 +          = gcd (fib m) (fib (n - m))" by simp
    5.47        also have "... = gcd (fib m) (fib n)"
    5.48          using `m <= n` by (rule gcd_fib_diff)
    5.49        finally have "gcd (fib m) (fib ((n - m) mod m)) =
    5.50 -        gcd (fib m) (fib n)" .
    5.51 +          gcd (fib m) (fib n)" .
    5.52        with False show ?thesis by simp
    5.53      qed
    5.54      finally show ?thesis .
    5.55    qed
    5.56  qed
    5.57  
    5.58 -
    5.59  theorem fib_gcd: "fib (gcd m n) = gcd (fib m) (fib n)" (is "?P m n")
    5.60  proof (induct m n rule: gcd_induct)
    5.61    fix m show "fib (gcd m 0) = gcd (fib m) (fib 0)" by simp
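The headline theorem of this file, `fib_gcd: fib (gcd m n) = gcd (fib m) (fib n)`, is easy to check pointwise. The following Python sketch (an illustration only; the iterative `fib` replaces the theory's recursive definition) evaluates both sides for small arguments.

```python
from math import gcd

def fib(n):
    # fib 0 = 0, fib (Suc 0) = 1, fib (Suc (Suc x)) = fib x + fib (Suc x)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_gcd_holds(m, n):
    # pointwise instance of theorem fib_gcd
    return fib(gcd(m, n)) == gcd(fib(m), fib(n))
```

This is of course no substitute for the inductive proof via `gcd_fib_mod`, but it shows what the statement asserts on concrete numbers.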
     6.1 --- a/src/HOL/Isar_Examples/Group.thy	Thu Jul 01 14:32:57 2010 +0200
     6.2 +++ b/src/HOL/Isar_Examples/Group.thy	Thu Jul 01 18:31:46 2010 +0200
     6.3 @@ -10,92 +10,83 @@
     6.4  
     6.5  subsection {* Groups and calculational reasoning *} 
     6.6  
     6.7 -text {*
     6.8 - Groups over signature $({\times} :: \alpha \To \alpha \To \alpha,
     6.9 - \idt{one} :: \alpha, \idt{inverse} :: \alpha \To \alpha)$ are defined
    6.10 - as an axiomatic type class as follows.  Note that the parent class
    6.11 - $\idt{times}$ is provided by the basic HOL theory.
    6.12 -*}
    6.13 -
    6.14 -notation Groups.one ("one")
    6.15 +text {* Groups over signature $({\times} :: \alpha \To \alpha \To
    6.16 +  \alpha, \idt{one} :: \alpha, \idt{inverse} :: \alpha \To \alpha)$
    6.17 +  are defined as an axiomatic type class as follows.  Note that the
    6.18 +  parent class $\idt{times}$ is provided by the basic HOL theory. *}
    6.19  
    6.20  class group = times + one + inverse +
    6.21 -  assumes group_assoc:         "(x * y) * z = x * (y * z)"
    6.22 -  assumes group_left_one:      "one * x = x"
    6.23 -  assumes group_left_inverse:  "inverse x * x = one"
    6.24 +  assumes group_assoc: "(x * y) * z = x * (y * z)"
    6.25 +    and group_left_one: "1 * x = x"
    6.26 +    and group_left_inverse: "inverse x * x = 1"
    6.27  
    6.28 -text {*
    6.29 - The group axioms only state the properties of left one and inverse,
    6.30 - the right versions may be derived as follows.
    6.31 -*}
    6.32 +text {* The group axioms only state the properties of left one and
     6.33 +  inverse; the right versions may be derived as follows. *}
    6.34  
    6.35 -theorem group_right_inverse: "x * inverse x = (one::'a::group)"
    6.36 +theorem (in group) group_right_inverse: "x * inverse x = 1"
    6.37  proof -
    6.38 -  have "x * inverse x = one * (x * inverse x)"
    6.39 +  have "x * inverse x = 1 * (x * inverse x)"
    6.40      by (simp only: group_left_one)
    6.41 -  also have "... = one * x * inverse x"
    6.42 +  also have "... = 1 * x * inverse x"
    6.43      by (simp only: group_assoc)
    6.44    also have "... = inverse (inverse x) * inverse x * x * inverse x"
    6.45      by (simp only: group_left_inverse)
    6.46    also have "... = inverse (inverse x) * (inverse x * x) * inverse x"
    6.47      by (simp only: group_assoc)
    6.48 -  also have "... = inverse (inverse x) * one * inverse x"
    6.49 +  also have "... = inverse (inverse x) * 1 * inverse x"
    6.50      by (simp only: group_left_inverse)
    6.51 -  also have "... = inverse (inverse x) * (one * inverse x)"
    6.52 +  also have "... = inverse (inverse x) * (1 * inverse x)"
    6.53      by (simp only: group_assoc)
    6.54    also have "... = inverse (inverse x) * inverse x"
    6.55      by (simp only: group_left_one)
    6.56 -  also have "... = one"
    6.57 +  also have "... = 1"
    6.58      by (simp only: group_left_inverse)
    6.59    finally show ?thesis .
    6.60  qed
    6.61  
    6.62 -text {*
    6.63 - With \name{group-right-inverse} already available,
    6.64 - \name{group-right-one}\label{thm:group-right-one} is now established
    6.65 - much easier.
    6.66 -*}
    6.67 +text {* With \name{group-right-inverse} already available,
    6.68 +  \name{group-right-one}\label{thm:group-right-one} is now established
     6.69 +  much more easily. *}
    6.70  
    6.71 -theorem group_right_one: "x * one = (x::'a::group)"
    6.72 +theorem (in group) group_right_one: "x * 1 = x"
    6.73  proof -
    6.74 -  have "x * one = x * (inverse x * x)"
    6.75 +  have "x * 1 = x * (inverse x * x)"
    6.76      by (simp only: group_left_inverse)
    6.77    also have "... = x * inverse x * x"
    6.78      by (simp only: group_assoc)
    6.79 -  also have "... = one * x"
    6.80 +  also have "... = 1 * x"
    6.81      by (simp only: group_right_inverse)
    6.82    also have "... = x"
    6.83      by (simp only: group_left_one)
    6.84    finally show ?thesis .
    6.85  qed
    6.86  
    6.87 -text {*
    6.88 - \medskip The calculational proof style above follows typical
    6.89 - presentations given in any introductory course on algebra.  The basic
    6.90 - technique is to form a transitive chain of equations, which in turn
    6.91 - are established by simplifying with appropriate rules.  The low-level
    6.92 - logical details of equational reasoning are left implicit.
    6.93 +text {* \medskip The calculational proof style above follows typical
    6.94 +  presentations given in any introductory course on algebra.  The
    6.95 +  basic technique is to form a transitive chain of equations, which in
    6.96 +  turn are established by simplifying with appropriate rules.  The
    6.97 +  low-level logical details of equational reasoning are left implicit.
    6.98  
    6.99 - Note that ``$\dots$'' is just a special term variable that is bound
   6.100 - automatically to the argument\footnote{The argument of a curried
   6.101 - infix expression happens to be its right-hand side.} of the last fact
   6.102 - achieved by any local assumption or proven statement.  In contrast to
   6.103 - $\var{thesis}$, the ``$\dots$'' variable is bound \emph{after} the
   6.104 - proof is finished, though.
   6.105 +  Note that ``$\dots$'' is just a special term variable that is bound
   6.106 +  automatically to the argument\footnote{The argument of a curried
   6.107 +  infix expression happens to be its right-hand side.} of the last
   6.108 +  fact achieved by any local assumption or proven statement.  In
   6.109 +  contrast to $\var{thesis}$, the ``$\dots$'' variable is bound
   6.110 +  \emph{after} the proof is finished, though.
   6.111  
   6.112 - There are only two separate Isar language elements for calculational
   6.113 - proofs: ``\isakeyword{also}'' for initial or intermediate
   6.114 - calculational steps, and ``\isakeyword{finally}'' for exhibiting the
   6.115 - result of a calculation.  These constructs are not hardwired into
   6.116 - Isabelle/Isar, but defined on top of the basic Isar/VM interpreter.
   6.117 - Expanding the \isakeyword{also} and \isakeyword{finally} derived
   6.118 - language elements, calculations may be simulated by hand as
   6.119 - demonstrated below.
   6.120 +  There are only two separate Isar language elements for calculational
   6.121 +  proofs: ``\isakeyword{also}'' for initial or intermediate
   6.122 +  calculational steps, and ``\isakeyword{finally}'' for exhibiting the
   6.123 +  result of a calculation.  These constructs are not hardwired into
   6.124 +  Isabelle/Isar, but defined on top of the basic Isar/VM interpreter.
   6.125 +  Expanding the \isakeyword{also} and \isakeyword{finally} derived
   6.126 +  language elements, calculations may be simulated by hand as
   6.127 +  demonstrated below.
   6.128  *}
   6.129  
   6.130 -theorem "x * one = (x::'a::group)"
   6.131 +theorem (in group) "x * 1 = x"
   6.132  proof -
   6.133 -  have "x * one = x * (inverse x * x)"
   6.134 +  have "x * 1 = x * (inverse x * x)"
   6.135      by (simp only: group_left_inverse)
   6.136  
   6.137    note calculation = this
   6.138 @@ -107,7 +98,7 @@
   6.139    note calculation = trans [OF calculation this]
   6.140      -- {* general calculational step: compose with transitivity rule *}
   6.141  
   6.142 -  have "... = one * x"
   6.143 +  have "... = 1 * x"
   6.144      by (simp only: group_right_inverse)
   6.145  
   6.146    note calculation = trans [OF calculation this]
   6.147 @@ -124,129 +115,119 @@
   6.148    show ?thesis .
   6.149  qed
   6.150  
   6.151 -text {*
   6.152 - Note that this scheme of calculations is not restricted to plain
   6.153 - transitivity.  Rules like anti-symmetry, or even forward and backward
   6.154 - substitution work as well.  For the actual implementation of
   6.155 - \isacommand{also} and \isacommand{finally}, Isabelle/Isar maintains
   6.156 - separate context information of ``transitivity'' rules.  Rule
   6.157 - selection takes place automatically by higher-order unification.
   6.158 -*}
   6.159 +text {* Note that this scheme of calculations is not restricted to
   6.160 +  plain transitivity.  Rules like anti-symmetry, or even forward and
   6.161 +  backward substitution work as well.  For the actual implementation
   6.162 +  of \isacommand{also} and \isacommand{finally}, Isabelle/Isar
   6.163 +  maintains separate context information of ``transitivity'' rules.
   6.164 +  Rule selection takes place automatically by higher-order
   6.165 +  unification. *}
   6.166  
   6.167  
   6.168  subsection {* Groups as monoids *}
   6.169  
   6.170 -text {*
   6.171 - Monoids over signature $({\times} :: \alpha \To \alpha \To \alpha,
   6.172 - \idt{one} :: \alpha)$ are defined like this.
   6.173 +text {* Monoids over signature $({\times} :: \alpha \To \alpha \To
   6.174 +  \alpha, \idt{one} :: \alpha)$ are defined like this.
   6.175  *}
   6.176  
   6.177  class monoid = times + one +
   6.178 -  assumes monoid_assoc:       "(x * y) * z = x * (y * z)"
   6.179 -  assumes monoid_left_one:   "one * x = x"
   6.180 -  assumes monoid_right_one:  "x * one = x"
   6.181 +  assumes monoid_assoc: "(x * y) * z = x * (y * z)"
   6.182 +    and monoid_left_one: "1 * x = x"
   6.183 +    and monoid_right_one: "x * 1 = x"
   6.184  
   6.185 -text {*
   6.186 - Groups are \emph{not} yet monoids directly from the definition.  For
   6.187 - monoids, \name{right-one} had to be included as an axiom, but for
   6.188 - groups both \name{right-one} and \name{right-inverse} are derivable
   6.189 - from the other axioms.  With \name{group-right-one} derived as a
   6.190 - theorem of group theory (see page~\pageref{thm:group-right-one}), we
   6.191 - may still instantiate $\idt{group} \subseteq \idt{monoid}$ properly
   6.192 - as follows.
   6.193 -*}
   6.194 +text {* Groups are \emph{not} yet monoids directly from the
   6.195 +  definition.  For monoids, \name{right-one} had to be included as an
   6.196 +  axiom, but for groups both \name{right-one} and \name{right-inverse}
   6.197 +  are derivable from the other axioms.  With \name{group-right-one}
   6.198 +  derived as a theorem of group theory (see
   6.199 +  page~\pageref{thm:group-right-one}), we may still instantiate
   6.200 +  $\idt{group} \subseteq \idt{monoid}$ properly as follows. *}
   6.201  
   6.202  instance group < monoid
   6.203 -  by (intro_classes,
   6.204 -       rule group_assoc,
   6.205 -       rule group_left_one,
   6.206 -       rule group_right_one)
   6.207 +  by intro_classes
   6.208 +    (rule group_assoc,
   6.209 +      rule group_left_one,
   6.210 +      rule group_right_one)
   6.211  
   6.212 -text {*
   6.213 - The \isacommand{instance} command actually is a version of
   6.214 - \isacommand{theorem}, setting up a goal that reflects the intended
   6.215 - class relation (or type constructor arity).  Thus any Isar proof
   6.216 - language element may be involved to establish this statement.  When
   6.217 - concluding the proof, the result is transformed into the intended
   6.218 - type signature extension behind the scenes.
   6.219 -*}
   6.220 +text {* The \isacommand{instance} command actually is a version of
   6.221 +  \isacommand{theorem}, setting up a goal that reflects the intended
   6.222 +  class relation (or type constructor arity).  Thus any Isar proof
   6.223 +  language element may be involved to establish this statement.  When
   6.224 +  concluding the proof, the result is transformed into the intended
   6.225 +  type signature extension behind the scenes. *}
   6.226 +
   6.227  
   6.228  subsection {* More theorems of group theory *}
   6.229  
   6.230 -text {*
   6.231 - The one element is already uniquely determined by preserving an
   6.232 - \emph{arbitrary} group element.
   6.233 -*}
   6.234 +text {* The one element is already uniquely determined by preserving
   6.235 +  an \emph{arbitrary} group element. *}
   6.236  
   6.237 -theorem group_one_equality: "e * x = x ==> one = (e::'a::group)"
   6.238 +theorem (in group) group_one_equality:
   6.239 +  assumes eq: "e * x = x"
   6.240 +  shows "1 = e"
   6.241  proof -
   6.242 -  assume eq: "e * x = x"
   6.243 -  have "one = x * inverse x"
   6.244 +  have "1 = x * inverse x"
   6.245      by (simp only: group_right_inverse)
   6.246    also have "... = (e * x) * inverse x"
   6.247      by (simp only: eq)
   6.248    also have "... = e * (x * inverse x)"
   6.249      by (simp only: group_assoc)
   6.250 -  also have "... = e * one"
   6.251 +  also have "... = e * 1"
   6.252      by (simp only: group_right_inverse)
   6.253    also have "... = e"
   6.254      by (simp only: group_right_one)
   6.255    finally show ?thesis .
   6.256  qed
   6.257  
   6.258 -text {*
   6.259 - Likewise, the inverse is already determined by the cancel property.
   6.260 -*}
   6.261 +text {* Likewise, the inverse is already determined by the cancel property. *}
   6.262  
   6.263 -theorem group_inverse_equality:
   6.264 -  "x' * x = one ==> inverse x = (x'::'a::group)"
   6.265 +theorem (in group) group_inverse_equality:
   6.266 +  assumes eq: "x' * x = 1"
   6.267 +  shows "inverse x = x'"
   6.268  proof -
   6.269 -  assume eq: "x' * x = one"
   6.270 -  have "inverse x = one * inverse x"
   6.271 +  have "inverse x = 1 * inverse x"
   6.272      by (simp only: group_left_one)
   6.273    also have "... = (x' * x) * inverse x"
   6.274      by (simp only: eq)
   6.275    also have "... = x' * (x * inverse x)"
   6.276      by (simp only: group_assoc)
   6.277 -  also have "... = x' * one"
   6.278 +  also have "... = x' * 1"
   6.279      by (simp only: group_right_inverse)
   6.280    also have "... = x'"
   6.281      by (simp only: group_right_one)
   6.282    finally show ?thesis .
   6.283  qed
   6.284  
   6.285 -text {*
   6.286 - The inverse operation has some further characteristic properties.
   6.287 -*}
   6.288 +text {* The inverse operation has some further characteristic properties. *}
   6.289  
   6.290 -theorem group_inverse_times:
   6.291 -  "inverse (x * y) = inverse y * inverse (x::'a::group)"
   6.292 +theorem (in group) group_inverse_times: "inverse (x * y) = inverse y * inverse x"
   6.293  proof (rule group_inverse_equality)
   6.294 -  show "(inverse y * inverse x) * (x * y) = one"
   6.295 +  show "(inverse y * inverse x) * (x * y) = 1"
   6.296    proof -
   6.297      have "(inverse y * inverse x) * (x * y) =
   6.298          (inverse y * (inverse x * x)) * y"
   6.299        by (simp only: group_assoc)
   6.300 -    also have "... = (inverse y * one) * y"
   6.301 +    also have "... = (inverse y * 1) * y"
   6.302        by (simp only: group_left_inverse)
   6.303      also have "... = inverse y * y"
   6.304        by (simp only: group_right_one)
   6.305 -    also have "... = one"
   6.306 +    also have "... = 1"
   6.307        by (simp only: group_left_inverse)
   6.308      finally show ?thesis .
   6.309    qed
   6.310  qed
   6.311  
   6.312 -theorem inverse_inverse: "inverse (inverse x) = (x::'a::group)"
   6.313 +theorem (in group) inverse_inverse: "inverse (inverse x) = x"
   6.314  proof (rule group_inverse_equality)
    6.315    show "x * inverse x = 1"
   6.316      by (simp only: group_right_inverse)
   6.317  qed
   6.318  
   6.319 -theorem inverse_inject: "inverse x = inverse y ==> x = (y::'a::group)"
   6.320 +theorem (in group) inverse_inject:
   6.321 +  assumes eq: "inverse x = inverse y"
   6.322 +  shows "x = y"
   6.323  proof -
   6.324 -  assume eq: "inverse x = inverse y"
   6.325 -  have "x = x * one"
   6.326 +  have "x = x * 1"
   6.327      by (simp only: group_right_one)
   6.328    also have "... = x * (inverse y * y)"
   6.329      by (simp only: group_left_inverse)
   6.330 @@ -254,7 +235,7 @@
   6.331      by (simp only: eq)
   6.332    also have "... = (x * inverse x) * y"
   6.333      by (simp only: group_assoc)
   6.334 -  also have "... = one * y"
   6.335 +  also have "... = 1 * y"
   6.336      by (simp only: group_right_inverse)
   6.337    also have "... = y"
   6.338      by (simp only: group_left_one)
     7.1 --- a/src/HOL/Isar_Examples/Hoare.thy	Thu Jul 01 14:32:57 2010 +0200
     7.2 +++ b/src/HOL/Isar_Examples/Hoare.thy	Thu Jul 01 18:31:46 2010 +0200
     7.3 @@ -13,14 +13,12 @@
     7.4  
     7.5  subsection {* Abstract syntax and semantics *}
     7.6  
     7.7 -text {*
     7.8 - The following abstract syntax and semantics of Hoare Logic over
     7.9 - \texttt{WHILE} programs closely follows the existing tradition in
    7.10 - Isabelle/HOL of formalizing the presentation given in
    7.11 - \cite[\S6]{Winskel:1993}.  See also
    7.12 - \url{http://isabelle.in.tum.de/library/Hoare/} and
    7.13 - \cite{Nipkow:1998:Winskel}.
    7.14 -*}
    7.15 +text {* The following abstract syntax and semantics of Hoare Logic
    7.16 +  over \texttt{WHILE} programs closely follows the existing tradition
    7.17 +  in Isabelle/HOL of formalizing the presentation given in
    7.18 +  \cite[\S6]{Winskel:1993}.  See also
    7.19 +  \url{http://isabelle.in.tum.de/library/Hoare/} and
    7.20 +  \cite{Nipkow:1998:Winskel}. *}
    7.21  
    7.22  types
    7.23    'a bexp = "'a set"
    7.24 @@ -32,33 +30,31 @@
    7.25    | Cond "'a bexp" "'a com" "'a com"
    7.26    | While "'a bexp" "'a assn" "'a com"
    7.27  
    7.28 -abbreviation
    7.29 -  Skip  ("SKIP") where
    7.30 -  "SKIP == Basic id"
    7.31 +abbreviation Skip  ("SKIP")
    7.32 +  where "SKIP == Basic id"
    7.33  
    7.34  types
    7.35    'a sem = "'a => 'a => bool"
    7.36  
    7.37 -consts
    7.38 -  iter :: "nat => 'a bexp => 'a sem => 'a sem"
    7.39 -primrec
    7.40 +primrec iter :: "nat => 'a bexp => 'a sem => 'a sem"
    7.41 +where
    7.42    "iter 0 b S s s' = (s ~: b & s = s')"
    7.43 -  "iter (Suc n) b S s s' =
    7.44 -    (s : b & (EX s''. S s s'' & iter n b S s'' s'))"
    7.45 +| "iter (Suc n) b S s s' = (s : b & (EX s''. S s s'' & iter n b S s'' s'))"
    7.46  
    7.47 -consts
    7.48 -  Sem :: "'a com => 'a sem"
    7.49 -primrec
    7.50 +primrec Sem :: "'a com => 'a sem"
    7.51 +where
    7.52    "Sem (Basic f) s s' = (s' = f s)"
    7.53 -  "Sem (c1; c2) s s' = (EX s''. Sem c1 s s'' & Sem c2 s'' s')"
    7.54 -  "Sem (Cond b c1 c2) s s' =
    7.55 +| "Sem (c1; c2) s s' = (EX s''. Sem c1 s s'' & Sem c2 s'' s')"
    7.56 +| "Sem (Cond b c1 c2) s s' =
    7.57      (if s : b then Sem c1 s s' else Sem c2 s s')"
    7.58 -  "Sem (While b x c) s s' = (EX n. iter n b (Sem c) s s')"
    7.59 +| "Sem (While b x c) s s' = (EX n. iter n b (Sem c) s s')"
    7.60  
    7.61 -definition Valid :: "'a bexp => 'a com => 'a bexp => bool" ("(3|- _/ (2_)/ _)" [100, 55, 100] 50) where
    7.62 -  "|- P c Q == ALL s s'. Sem c s s' --> s : P --> s' : Q"
    7.63 +definition
    7.64 +  Valid :: "'a bexp => 'a com => 'a bexp => bool"
    7.65 +    ("(3|- _/ (2_)/ _)" [100, 55, 100] 50)
    7.66 +  where "|- P c Q \<longleftrightarrow> (\<forall>s s'. Sem c s s' --> s : P --> s' : Q)"
    7.67  
    7.68 -notation (xsymbols) Valid ("(3\<turnstile> _/ (2_)/ _)" [100, 55, 100] 50)
    7.69 +notation (xsymbols) Valid  ("(3\<turnstile> _/ (2_)/ _)" [100, 55, 100] 50)
    7.70  
    7.71  lemma ValidI [intro?]:
    7.72      "(!!s s'. Sem c s s' ==> s : P ==> s' : Q) ==> |- P c Q"
    7.73 @@ -71,22 +67,20 @@
    7.74  
    7.75  subsection {* Primitive Hoare rules *}
    7.76  
    7.77 -text {*
    7.78 - From the semantics defined above, we derive the standard set of
    7.79 - primitive Hoare rules; e.g.\ see \cite[\S6]{Winskel:1993}.  Usually,
    7.80 - variant forms of these rules are applied in actual proof, see also
    7.81 - \S\ref{sec:hoare-isar} and \S\ref{sec:hoare-vcg}.
    7.82 +text {* From the semantics defined above, we derive the standard set
    7.83 +  of primitive Hoare rules; e.g.\ see \cite[\S6]{Winskel:1993}.
    7.84 +  Usually, variant forms of these rules are applied in actual proof,
    7.85 +  see also \S\ref{sec:hoare-isar} and \S\ref{sec:hoare-vcg}.
    7.86  
    7.87 - \medskip The \name{basic} rule represents any kind of atomic access
    7.88 - to the state space.  This subsumes the common rules of \name{skip}
    7.89 - and \name{assign}, as formulated in \S\ref{sec:hoare-isar}.
    7.90 -*}
    7.91 +  \medskip The \name{basic} rule represents any kind of atomic access
    7.92 +  to the state space.  This subsumes the common rules of \name{skip}
    7.93 +  and \name{assign}, as formulated in \S\ref{sec:hoare-isar}. *}
    7.94  
    7.95  theorem basic: "|- {s. f s : P} (Basic f) P"
    7.96  proof
    7.97    fix s s' assume s: "s : {s. f s : P}"
    7.98    assume "Sem (Basic f) s s'"
    7.99 -  hence "s' = f s" by simp
   7.100 +  then have "s' = f s" by simp
   7.101    with s show "s' : P" by simp
   7.102  qed
   7.103  
   7.104 @@ -117,11 +111,9 @@
   7.105    with QQ' show "s' : Q'" ..
   7.106  qed
   7.107  
   7.108 -text {*
   7.109 - The rule for conditional commands is directly reflected by the
   7.110 - corresponding semantics; in the proof we just have to look closely
   7.111 - which cases apply.
   7.112 -*}
   7.113 +text {* The rule for conditional commands is directly reflected by the
   7.114 +  corresponding semantics; in the proof we just have to look closely
   7.115 +  which cases apply. *}
   7.116  
   7.117  theorem cond:
   7.118    "|- (P Int b) c1 Q ==> |- (P Int -b) c2 Q ==> |- P (Cond b c1 c2) Q"
   7.119 @@ -147,14 +139,12 @@
   7.120    qed
   7.121  qed
   7.122  
   7.123 -text {*
   7.124 - The \name{while} rule is slightly less trivial --- it is the only one
   7.125 - based on recursion, which is expressed in the semantics by a
   7.126 - Kleene-style least fixed-point construction.  The auxiliary statement
   7.127 - below, which is by induction on the number of iterations is the main
   7.128 - point to be proven; the rest is by routine application of the
   7.129 - semantics of \texttt{WHILE}.
   7.130 -*}
   7.131 +text {* The @{text while} rule is slightly less trivial --- it is the
   7.132 +  only one based on recursion, which is expressed in the semantics by
   7.133 +  a Kleene-style least fixed-point construction.  The auxiliary
    7.134 +  statement below, which is by induction on the number of iterations,
   7.135 +  is the main point to be proven; the rest is by routine application
   7.136 +  of the semantics of \texttt{WHILE}. *}
   7.137  
   7.138  theorem while:
   7.139    assumes body: "|- (P Int b) c P"
   7.140 @@ -166,12 +156,11 @@
   7.141    from this and s show "s' : P Int -b"
   7.142    proof (induct n arbitrary: s)
   7.143      case 0
   7.144 -    thus ?case by auto
   7.145 +    then show ?case by auto
   7.146    next
   7.147      case (Suc n)
   7.148      then obtain s'' where b: "s : b" and sem: "Sem c s s''"
   7.149 -      and iter: "iter n b (Sem c) s'' s'"
   7.150 -      by auto
   7.151 +      and iter: "iter n b (Sem c) s'' s'" by auto
   7.152      from Suc and b have "s : P Int b" by simp
   7.153      with body sem have "s'' : P" ..
   7.154      with iter show ?case by (rule Suc)
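The semantics in this file (`Sem`, the bounded `iter`, and the validity judgment `|- P c Q`) can be sketched in Python as a small interpreter plus a finite validity check. The encoding and the names `sem`, `valid`, `countdown` are ours; the theory's `Sem` is a relation, which collapses to a partial function here because `Basic` steps are deterministic.

```python
def sem(c, s):
    # Big-step evaluation of commands: Basic, Seq, Cond, While.
    kind = c[0]
    if kind == "Basic":            # Basic f: one atomic state update
        return c[1](s)
    if kind == "Seq":              # c1; c2
        return sem(c[2], sem(c[1], s))
    if kind == "Cond":             # Cond b c1 c2
        _, b, c1, c2 = c
        return sem(c1, s) if b(s) else sem(c2, s)
    _, b, _inv, body = c           # While b inv c: some iter n, may diverge
    while b(s):
        s = sem(body, s)
    return s

def valid(P, c, Q, states):
    # |- P c Q, checked over a finite sample of start states
    return all(Q(sem(c, s)) for s in states if P(s))

# Example: a countdown loop; the annotation slot (the theory's 'a assn)
# is unused by the interpreter, matching how Sem ignores it.
countdown = ("While", lambda s: s != 0, None, ("Basic", lambda s: s - 1))
```

On the sample `{0, ..., 7}` with precondition `s >= 0`, the loop satisfies the postcondition `s == 0`, the finite analogue of what the `while` rule establishes by induction on the number of iterations.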
   7.155 @@ -181,30 +170,26 @@
   7.156  
   7.157  subsection {* Concrete syntax for assertions *}
   7.158  
   7.159 -text {*
   7.160 - We now introduce concrete syntax for describing commands (with
   7.161 - embedded expressions) and assertions. The basic technique is that of
   7.162 - semantic ``quote-antiquote''.  A \emph{quotation} is a syntactic
   7.163 - entity delimited by an implicit abstraction, say over the state
   7.164 - space.  An \emph{antiquotation} is a marked expression within a
   7.165 - quotation that refers the implicit argument; a typical antiquotation
   7.166 - would select (or even update) components from the state.
   7.167 +text {* We now introduce concrete syntax for describing commands (with
   7.168 +  embedded expressions) and assertions. The basic technique is that of
   7.169 +  semantic ``quote-antiquote''.  A \emph{quotation} is a syntactic
   7.170 +  entity delimited by an implicit abstraction, say over the state
   7.171 +  space.  An \emph{antiquotation} is a marked expression within a
    7.172 +  quotation that refers to the implicit argument; a typical antiquotation
   7.173 +  would select (or even update) components from the state.
   7.174  
   7.175 - We will see some examples later in the concrete rules and
   7.176 - applications.
   7.177 -*}
   7.178 +  We will see some examples later in the concrete rules and
   7.179 +  applications. *}
   7.180  
   7.181 -text {*
   7.182 - The following specification of syntax and translations is for
   7.183 - Isabelle experts only; feel free to ignore it.
   7.184 +text {* The following specification of syntax and translations is for
   7.185 +  Isabelle experts only; feel free to ignore it.
   7.186  
   7.187 - While the first part is still a somewhat intelligible specification
   7.188 - of the concrete syntactic representation of our Hoare language, the
   7.189 - actual ``ML drivers'' is quite involved.  Just note that the we
   7.190 - re-use the basic quote/antiquote translations as already defined in
   7.191 - Isabelle/Pure (see \verb,Syntax.quote_tr, and
   7.192 - \verb,Syntax.quote_tr',).
   7.193 -*}
   7.194 +  While the first part is still a somewhat intelligible specification
   7.195 +  of the concrete syntactic representation of our Hoare language, the
    7.196 +  actual ``ML drivers'' are quite involved.  Just note that we
   7.197 +  re-use the basic quote/antiquote translations as already defined in
   7.198 +  Isabelle/Pure (see \verb,Syntax.quote_tr, and
   7.199 +  \verb,Syntax.quote_tr',). *}
   7.200  
   7.201  syntax
   7.202    "_quote"       :: "'b => ('a => 'b)"       ("(.'(_').)" [0] 1000)
   7.203 @@ -238,11 +223,10 @@
   7.204    in [(@{syntax_const "_quote"}, quote_tr)] end
   7.205  *}
   7.206  
   7.207 -text {*
   7.208 - As usual in Isabelle syntax translations, the part for printing is
   7.209 - more complicated --- we cannot express parts as macro rules as above.
   7.210 - Don't look here, unless you have to do similar things for yourself.
   7.211 -*}
   7.212 +text {* As usual in Isabelle syntax translations, the part for
   7.213 +  printing is more complicated --- we cannot express parts as macro
   7.214 +  rules as above.  Don't look here, unless you have to do similar
   7.215 +  things for yourself. *}
   7.216  
   7.217  print_translation {*
   7.218    let
   7.219 @@ -277,15 +261,13 @@
   7.220  
   7.221  subsection {* Rules for single-step proof \label{sec:hoare-isar} *}
   7.222  
   7.223 -text {*
   7.224 - We are now ready to introduce a set of Hoare rules to be used in
   7.225 - single-step structured proofs in Isabelle/Isar.  We refer to the
   7.226 - concrete syntax introduce above.
   7.227 +text {* We are now ready to introduce a set of Hoare rules to be used
   7.228 +  in single-step structured proofs in Isabelle/Isar.  We refer to the
    7.229 +  concrete syntax introduced above.
   7.230  
   7.231 - \medskip Assertions of Hoare Logic may be manipulated in
   7.232 - calculational proofs, with the inclusion expressed in terms of sets
   7.233 - or predicates.  Reversed order is supported as well.
   7.234 -*}
   7.235 +  \medskip Assertions of Hoare Logic may be manipulated in
   7.236 +  calculational proofs, with the inclusion expressed in terms of sets
   7.237 +  or predicates.  Reversed order is supported as well. *}
   7.238  
   7.239  lemma [trans]: "|- P c Q ==> P' <= P ==> |- P' c Q"
   7.240    by (unfold Valid_def) blast
   7.241 @@ -312,49 +294,41 @@
   7.242    by (simp add: Valid_def)
   7.243  
   7.244  
   7.245 -text {*
   7.246 - Identity and basic assignments.\footnote{The $\idt{hoare}$ method
   7.247 - introduced in \S\ref{sec:hoare-vcg} is able to provide proper
   7.248 - instances for any number of basic assignments, without producing
   7.249 - additional verification conditions.}
   7.250 -*}
   7.251 +text {* Identity and basic assignments.\footnote{The $\idt{hoare}$
   7.252 +  method introduced in \S\ref{sec:hoare-vcg} is able to provide proper
   7.253 +  instances for any number of basic assignments, without producing
   7.254 +  additional verification conditions.} *}
   7.255  
   7.256  lemma skip [intro?]: "|- P SKIP P"
   7.257  proof -
   7.258    have "|- {s. id s : P} SKIP P" by (rule basic)
   7.259 -  thus ?thesis by simp
   7.260 +  then show ?thesis by simp
   7.261  qed
   7.262  
   7.263  lemma assign: "|- P [\<acute>a/\<acute>x] \<acute>x := \<acute>a P"
   7.264    by (rule basic)
   7.265  
   7.266 -text {*
   7.267 - Note that above formulation of assignment corresponds to our
   7.268 - preferred way to model state spaces, using (extensible) record types
   7.269 - in HOL \cite{Naraschewski-Wenzel:1998:HOOL}.  For any record field
   7.270 - $x$, Isabelle/HOL provides a functions $x$ (selector) and
   7.271 - $\idt{x{\dsh}update}$ (update).  Above, there is only a place-holder
   7.272 - appearing for the latter kind of function: due to concrete syntax
   7.273 - \isa{\'x := \'a} also contains \isa{x\_update}.\footnote{Note that due
   7.274 - to the external nature of HOL record fields, we could not even state
   7.275 - a general theorem relating selector and update functions (if this
   7.276 - were required here); this would only work for any particular instance
   7.277 - of record fields introduced so far.}
   7.278 -*}
    7.279 +text {* Note that the above formulation of assignment corresponds to
    7.280 +  our preferred way to model state spaces, using (extensible) record
    7.281 +  types in HOL \cite{Naraschewski-Wenzel:1998:HOOL}.  For any record
    7.282 +  field $x$, Isabelle/HOL provides a function $x$ (selector) and
    7.283 +  $\idt{x{\dsh}update}$ (update).  Above, only a place-holder appears
    7.284 +  for the latter kind of function: via the concrete syntax,
    7.285 +  \isa{\'x := \'a} also contains \isa{x\_update}.\footnote{Note that
    7.286 +  due to the external nature of HOL record fields, we could not even
    7.287 +  state a general theorem relating selector and update functions (if
    7.288 +  this were required here); this would only work for each particular
    7.289 +  instance of record fields introduced so far.} *}
   7.290  
   7.291 -text {*
   7.292 - Sequential composition --- normalizing with associativity achieves
   7.293 - proper of chunks of code verified separately.
   7.294 -*}
    7.295 +text {* Sequential composition --- normalizing with associativity
    7.296 +  achieves proper composition of chunks of code verified separately. *}
   7.297  
   7.298  lemmas [trans, intro?] = seq
   7.299  
   7.300  lemma seq_assoc [simp]: "( |- P c1;(c2;c3) Q) = ( |- P (c1;c2);c3 Q)"
   7.301    by (auto simp add: Valid_def)
   7.302  
   7.303 -text {*
   7.304 - Conditional statements.
   7.305 -*}
   7.306 +text {* Conditional statements. *}
   7.307  
   7.308  lemmas [trans, intro?] = cond
   7.309  
   7.310 @@ -364,9 +338,7 @@
   7.311        ==> |- .{\<acute>P}. IF \<acute>b THEN c1 ELSE c2 FI Q"
   7.312      by (rule cond) (simp_all add: Valid_def)
   7.313  
   7.314 -text {*
   7.315 - While statements --- with optional invariant.
   7.316 -*}
   7.317 +text {* While statements --- with optional invariant. *}
   7.318  
   7.319  lemma [intro?]:
   7.320      "|- (P Int b) c P ==> |- P (While b P c) (P Int -b)"
   7.321 @@ -390,18 +362,16 @@
   7.322  
   7.323  subsection {* Verification conditions \label{sec:hoare-vcg} *}
   7.324  
   7.325 -text {*
   7.326 - We now load the \emph{original} ML file for proof scripts and tactic
   7.327 - definition for the Hoare Verification Condition Generator (see
   7.328 - \url{http://isabelle.in.tum.de/library/Hoare/}).  As far as we are
   7.329 - concerned here, the result is a proof method \name{hoare}, which may
   7.330 - be applied to a Hoare Logic assertion to extract purely logical
   7.331 - verification conditions.  It is important to note that the method
   7.332 - requires \texttt{WHILE} loops to be fully annotated with invariants
   7.333 - beforehand.  Furthermore, only \emph{concrete} pieces of code are
   7.334 - handled --- the underlying tactic fails ungracefully if supplied with
   7.335 - meta-variables or parameters, for example.
   7.336 -*}
   7.337 +text {* We now load the \emph{original} ML file for proof scripts and
   7.338 +  tactic definition for the Hoare Verification Condition Generator
   7.339 +  (see \url{http://isabelle.in.tum.de/library/Hoare/}).  As far as we
   7.340 +  are concerned here, the result is a proof method \name{hoare}, which
   7.341 +  may be applied to a Hoare Logic assertion to extract purely logical
   7.342 +  verification conditions.  It is important to note that the method
   7.343 +  requires \texttt{WHILE} loops to be fully annotated with invariants
   7.344 +  beforehand.  Furthermore, only \emph{concrete} pieces of code are
   7.345 +  handled --- the underlying tactic fails ungracefully if supplied
   7.346 +  with meta-variables or parameters, for example. *}
   7.347  
   7.348  lemma SkipRule: "p \<subseteq> q \<Longrightarrow> Valid p (Basic id) q"
   7.349    by (auto simp add: Valid_def)
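An aside, not part of the theory sources: the assignment rule @{text "|- P [\<acute>a/\<acute>x] \<acute>x := \<acute>a P"} and the validity notion behind it can be made concrete with a small brute-force checker. The Python sketch below is purely illustrative; the names `assign`, `seq`, and `valid` are hypothetical helpers, and validity is only tested over a finite state space rather than proved.

```python
# Illustrative analogue (not Isabelle) of Hoare-triple validity and the
# assignment rule {P[a/x]} x := a {P}, checked over a finite state space.

def assign(x, a):
    """Command: the state update x := a(state)."""
    return lambda s: {**s, x: a(s)}

def seq(c1, c2):
    """Sequential composition c1; c2."""
    return lambda s: c2(c1(s))

def valid(P, c, Q, states):
    """|- P c Q over `states`: every P-state runs to a Q-state."""
    return all(Q(c(s)) for s in states if P(s))

# {2 * N = 10} N := 2 * N {N = 10}, mirroring a lemma provable by `rule assign`:
states = [{"N": n} for n in range(20)]
pre = lambda s: 2 * s["N"] == 10       # the substituted precondition P[a/x]
post = lambda s: s["N"] == 10          # the postcondition P
prog = assign("N", lambda s: 2 * s["N"])
assert valid(pre, prog, post, states)
```

The brute-force check succeeds exactly because the precondition is the postcondition with the assigned expression substituted in, which is the content of the `assign` rule.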
     8.1 --- a/src/HOL/Isar_Examples/Hoare_Ex.thy	Thu Jul 01 14:32:57 2010 +0200
     8.2 +++ b/src/HOL/Isar_Examples/Hoare_Ex.thy	Thu Jul 01 18:31:46 2010 +0200
     8.3 @@ -6,11 +6,9 @@
     8.4  
     8.5  subsection {* State spaces *}
     8.6  
     8.7 -text {*
     8.8 - First of all we provide a store of program variables that
     8.9 - occur in any of the programs considered later.  Slightly unexpected
    8.10 - things may happen when attempting to work with undeclared variables.
    8.11 -*}
    8.12 +text {* First of all we provide a store of program variables that
    8.13 +  occur in any of the programs considered later.  Slightly unexpected
    8.14 +  things may happen when attempting to work with undeclared variables. *}
    8.15  
    8.16  record vars =
    8.17    I :: nat
    8.18 @@ -18,37 +16,29 @@
    8.19    N :: nat
    8.20    S :: nat
    8.21  
    8.22 -text {*
    8.23 - While all of our variables happen to have the same type, nothing
    8.24 - would prevent us from working with many-sorted programs as well, or
    8.25 - even polymorphic ones.  Also note that Isabelle/HOL's extensible
    8.26 - record types even provides simple means to extend the state space
    8.27 - later.
    8.28 -*}
    8.29 +text {* While all of our variables happen to have the same type,
    8.30 +  nothing would prevent us from working with many-sorted programs as
    8.31 +  well, or even polymorphic ones.  Also note that Isabelle/HOL's
    8.32 +  extensible record types even provide simple means to extend the
    8.33 +  state space later. *}
    8.34  
    8.35  
    8.36  subsection {* Basic examples *}
    8.37  
    8.38 -text {*
    8.39 - We look at few trivialities involving assignment and sequential
    8.40 - composition, in order to get an idea of how to work with our
    8.41 - formulation of Hoare Logic.
    8.42 -*}
    8.43 +text {* We look at a few trivialities involving assignment and
    8.44 +  sequential composition, in order to get an idea of how to work with
    8.45 +  our formulation of Hoare Logic. *}
    8.46  
    8.47 -text {*
    8.48 - Using the basic \name{assign} rule directly is a bit cumbersome.
    8.49 -*}
    8.50 +text {* Using the basic @{text assign} rule directly is a bit
    8.51 +  cumbersome. *}
    8.52  
    8.53 -lemma
    8.54 -  "|- .{\<acute>(N_update (\<lambda>_. (2 * \<acute>N))) : .{\<acute>N = 10}.}. \<acute>N := 2 * \<acute>N .{\<acute>N = 10}."
    8.55 +lemma "|- .{\<acute>(N_update (\<lambda>_. (2 * \<acute>N))) : .{\<acute>N = 10}.}. \<acute>N := 2 * \<acute>N .{\<acute>N = 10}."
    8.56    by (rule assign)
    8.57  
    8.58 -text {*
    8.59 - Certainly we want the state modification already done, e.g.\ by
    8.60 - simplification.  The \name{hoare} method performs the basic state
    8.61 - update for us; we may apply the Simplifier afterwards to achieve
    8.62 - ``obvious'' consequences as well.
    8.63 -*}
    8.64 +text {* Certainly we want the state modification already done, e.g.\
    8.65 +  by simplification.  The \name{hoare} method performs the basic state
    8.66 +  update for us; we may apply the Simplifier afterwards to achieve
    8.67 +  ``obvious'' consequences as well. *}
    8.68  
    8.69  lemma "|- .{True}. \<acute>N := 10 .{\<acute>N = 10}."
    8.70    by hoare
    8.71 @@ -77,12 +67,10 @@
    8.72      .{\<acute>M = b & \<acute>N = a}."
    8.73    by hoare simp
    8.74  
    8.75 -text {*
    8.76 - It is important to note that statements like the following one can
    8.77 - only be proven for each individual program variable.  Due to the
    8.78 - extra-logical nature of record fields, we cannot formulate a theorem
    8.79 - relating record selectors and updates schematically.
    8.80 -*}
    8.81 +text {* It is important to note that statements like the following one
    8.82 +  can only be proven for each individual program variable.  Due to the
    8.83 +  extra-logical nature of record fields, we cannot formulate a theorem
    8.84 +  relating record selectors and updates schematically. *}
    8.85  
    8.86  lemma "|- .{\<acute>N = a}. \<acute>N := \<acute>N .{\<acute>N = a}."
    8.87    by hoare
    8.88 @@ -96,11 +84,9 @@
    8.89    oops
    8.90  
    8.91  
    8.92 -text {*
    8.93 - In the following assignments we make use of the consequence rule in
    8.94 - order to achieve the intended precondition.  Certainly, the
    8.95 - \name{hoare} method is able to handle this case, too.
    8.96 -*}
    8.97 +text {* In the following assignments we make use of the consequence
    8.98 +  rule in order to achieve the intended precondition.  Certainly, the
    8.99 +  \name{hoare} method is able to handle this case, too. *}
   8.100  
   8.101  lemma "|- .{\<acute>M = \<acute>N}. \<acute>M := \<acute>M + 1 .{\<acute>M ~= \<acute>N}."
   8.102  proof -
   8.103 @@ -128,12 +114,10 @@
   8.104  
   8.105  subsection {* Multiplication by addition *}
   8.106  
   8.107 -text {*
   8.108 - We now do some basic examples of actual \texttt{WHILE} programs.
   8.109 - This one is a loop for calculating the product of two natural
   8.110 - numbers, by iterated addition.  We first give detailed structured
   8.111 - proof based on single-step Hoare rules.
   8.112 -*}
   8.113 +text {* We now do some basic examples of actual \texttt{WHILE}
   8.114 +  programs.  This one is a loop for calculating the product of two
    8.115 +  natural numbers, by iterated addition.  We first give a detailed
    8.116 +  structured proof based on single-step Hoare rules. *}
   8.117  
   8.118  lemma
   8.119    "|- .{\<acute>M = 0 & \<acute>S = 0}.
   8.120 @@ -157,12 +141,10 @@
   8.121    finally show ?thesis .
   8.122  qed
   8.123  
   8.124 -text {*
   8.125 - The subsequent version of the proof applies the \name{hoare} method
   8.126 - to reduce the Hoare statement to a purely logical problem that can be
   8.127 - solved fully automatically.  Note that we have to specify the
   8.128 - \texttt{WHILE} loop invariant in the original statement.
   8.129 -*}
   8.130 +text {* The subsequent version of the proof applies the @{text hoare}
   8.131 +  method to reduce the Hoare statement to a purely logical problem
   8.132 +  that can be solved fully automatically.  Note that we have to
   8.133 +  specify the \texttt{WHILE} loop invariant in the original statement. *}
   8.134  
   8.135  lemma
   8.136    "|- .{\<acute>M = 0 & \<acute>S = 0}.
   8.137 @@ -175,21 +157,15 @@
   8.138  
   8.139  subsection {* Summing natural numbers *}
   8.140  
   8.141 -text {*
   8.142 - We verify an imperative program to sum natural numbers up to a given
   8.143 - limit.  First some functional definition for proper specification of
   8.144 - the problem.
   8.145 -*}
   8.146 +text {* We verify an imperative program to sum natural numbers up to a
    8.147 +  given limit.  First, a functional definition for a proper
    8.148 +  specification of the problem. *}
   8.149  
   8.150 -text {*
   8.151 - The following proof is quite explicit in the individual steps taken,
   8.152 - with the \name{hoare} method only applied locally to take care of
   8.153 - assignment and sequential composition.  Note that we express
   8.154 - intermediate proof obligation in pure logic, without referring to the
   8.155 - state space.
   8.156 -*}
   8.157 -
   8.158 -declare atLeast0LessThan[symmetric,simp]
   8.159 +text {* The following proof is quite explicit in the individual steps
   8.160 +  taken, with the \name{hoare} method only applied locally to take
   8.161 +  care of assignment and sequential composition.  Note that we express
    8.162 +  intermediate proof obligations in pure logic, without referring to
   8.163 +  the state space. *}
   8.164  
   8.165  theorem
   8.166    "|- .{True}.
   8.167 @@ -227,10 +203,9 @@
   8.168    finally show ?thesis .
   8.169  qed
   8.170  
   8.171 -text {*
   8.172 - The next version uses the \name{hoare} method, while still explaining
   8.173 - the resulting proof obligations in an abstract, structured manner.
   8.174 -*}
   8.175 +text {* The next version uses the @{text hoare} method, while still
   8.176 +  explaining the resulting proof obligations in an abstract,
   8.177 +  structured manner. *}
   8.178  
   8.179  theorem
   8.180    "|- .{True}.
   8.181 @@ -251,17 +226,15 @@
   8.182      show "?inv 0 1" by simp
   8.183    next
   8.184      fix s i assume "?inv s i & i ~= n"
   8.185 -    thus "?inv (s + i) (i + 1)" by simp
   8.186 +    then show "?inv (s + i) (i + 1)" by simp
   8.187    next
   8.188      fix s i assume "?inv s i & ~ i ~= n"
   8.189 -    thus "s = ?sum n" by simp
   8.190 +    then show "s = ?sum n" by simp
   8.191    qed
   8.192  qed
   8.193  
   8.194 -text {*
   8.195 - Certainly, this proof may be done fully automatic as well, provided
   8.196 - that the invariant is given beforehand.
   8.197 -*}
    8.198 +text {* Certainly, this proof may be done fully automatically as well,
   8.199 +  provided that the invariant is given beforehand. *}
   8.200  
   8.201  theorem
   8.202    "|- .{True}.
   8.203 @@ -278,21 +251,19 @@
   8.204  
   8.205  subsection{* Time *}
   8.206  
   8.207 -text{*
   8.208 -  A simple embedding of time in Hoare logic: function @{text timeit}
   8.209 -  inserts an extra variable to keep track of the elapsed time.
   8.210 -*}
   8.211 +text{* A simple embedding of time in Hoare logic: function @{text
   8.212 +  timeit} inserts an extra variable to keep track of the elapsed time. *}
   8.213  
   8.214  record tstate = time :: nat
   8.215  
   8.216  types 'a time = "\<lparr>time :: nat, \<dots> :: 'a\<rparr>"
   8.217  
   8.218 -consts timeit :: "'a time com \<Rightarrow> 'a time com"
   8.219 -primrec
   8.220 +primrec timeit :: "'a time com \<Rightarrow> 'a time com"
   8.221 +where
   8.222    "timeit (Basic f) = (Basic f; Basic(\<lambda>s. s\<lparr>time := Suc (time s)\<rparr>))"
   8.223 -  "timeit (c1; c2) = (timeit c1; timeit c2)"
   8.224 -  "timeit (Cond b c1 c2) = Cond b (timeit c1) (timeit c2)"
   8.225 -  "timeit (While b iv c) = While b iv (timeit c)"
   8.226 +| "timeit (c1; c2) = (timeit c1; timeit c2)"
   8.227 +| "timeit (Cond b c1 c2) = Cond b (timeit c1) (timeit c2)"
   8.228 +| "timeit (While b iv c) = While b iv (timeit c)"
   8.229  
   8.230  record tvars = tstate +
   8.231    I :: nat
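As an aside outside the theory sources: the multiplication-by-addition loop verified above can also be run concretely, asserting the \texttt{WHILE} invariant @{text "S = M * b"} at every iteration. The Python sketch below (the name `mult_by_add` is invented for this illustration) mirrors the structure of the Isar proof, though it is of course no substitute for it.

```python
# Concrete run of the multiplication-by-addition loop, with the loop
# invariant S = M * b checked dynamically at each iteration.

def mult_by_add(a, b):
    """Compute a * b by iterated addition, asserting the invariant."""
    M, S = 0, 0
    while M != a:
        assert S == M * b          # WHILE invariant from the proof above
        S, M = S + b, M + 1
    assert S == M * b and M == a   # invariant + exit condition give S = a * b
    return S

assert mult_by_add(7, 6) == 42
```

On exit, the invariant together with the negated loop condition yields the postcondition, exactly as in the single-step Hoare proof.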
     9.1 --- a/src/HOL/Isar_Examples/Knaster_Tarski.thy	Thu Jul 01 14:32:57 2010 +0200
     9.2 +++ b/src/HOL/Isar_Examples/Knaster_Tarski.thy	Thu Jul 01 18:31:46 2010 +0200
     9.3 @@ -13,10 +13,10 @@
     9.4  
     9.5  subsection {* Prose version *}
     9.6  
     9.7 -text {*
     9.8 -  According to the textbook \cite[pages 93--94]{davey-priestley}, the
     9.9 -  Knaster-Tarski fixpoint theorem is as follows.\footnote{We have
    9.10 -  dualized the argument, and tuned the notation a little bit.}
    9.11 +text {* According to the textbook \cite[pages
    9.12 +  93--94]{davey-priestley}, the Knaster-Tarski fixpoint theorem is as
    9.13 +  follows.\footnote{We have dualized the argument, and tuned the
    9.14 +  notation a little bit.}
    9.15  
    9.16    \textbf{The Knaster-Tarski Fixpoint Theorem.}  Let @{text L} be a
    9.17    complete lattice and @{text "f: L \<rightarrow> L"} an order-preserving map.
    9.18 @@ -28,19 +28,16 @@
    9.19    H}, whence @{text "f(a) \<le> a"}.  We now use this inequality to prove
    9.20    the reverse one (!) and thereby complete the proof that @{text a} is
    9.21    a fixpoint.  Since @{text f} is order-preserving, @{text "f(f(a)) \<le>
    9.22 -  f(a)"}.  This says @{text "f(a) \<in> H"}, so @{text "a \<le> f(a)"}.
    9.23 -*}
    9.24 +  f(a)"}.  This says @{text "f(a) \<in> H"}, so @{text "a \<le> f(a)"}. *}
    9.25  
    9.26  
    9.27  subsection {* Formal versions *}
    9.28  
    9.29 -text {*
    9.30 -  The Isar proof below closely follows the original presentation.
    9.31 -  Virtually all of the prose narration has been rephrased in terms of
    9.32 -  formal Isar language elements.  Just as many textbook-style proofs,
    9.33 -  there is a strong bias towards forward proof, and several bends in
    9.34 -  the course of reasoning.
    9.35 -*}
    9.36 +text {* The Isar proof below closely follows the original
    9.37 +  presentation.  Virtually all of the prose narration has been
    9.38 +  rephrased in terms of formal Isar language elements.  As in many
    9.39 +  textbook-style proofs, there is a strong bias towards forward proof,
    9.40 +  and several bends in the course of reasoning. *}
    9.41  
    9.42  theorem Knaster_Tarski:
    9.43    fixes f :: "'a::complete_lattice \<Rightarrow> 'a"
    9.44 @@ -70,17 +67,15 @@
    9.45    qed
    9.46  qed
    9.47  
    9.48 -text {*
    9.49 -  Above we have used several advanced Isar language elements, such as
    9.50 -  explicit block structure and weak assumptions.  Thus we have
    9.51 +text {* Above we have used several advanced Isar language elements,
    9.52 +  such as explicit block structure and weak assumptions.  Thus we have
    9.53    mimicked the particular way of reasoning of the original text.
    9.54  
    9.55    In the subsequent version the order of reasoning is changed to
    9.56    achieve structured top-down decomposition of the problem at the
    9.57    outer level, while only the inner steps of reasoning are done in a
    9.58    forward manner.  We are certainly more at ease here, requiring only
    9.59 -  the most basic features of the Isar language.
    9.60 -*}
    9.61 +  the most basic features of the Isar language. *}
    9.62  
    9.63  theorem Knaster_Tarski':
    9.64    fixes f :: "'a::complete_lattice \<Rightarrow> 'a"
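An illustrative aside: on the powerset of a finite set, ordered by inclusion, the Knaster-Tarski construction @{text "a = Inf {x. f x \<le> x}"} from the prose proof can be computed directly, with infimum realized as intersection. The sketch below uses invented helper names and a small, assumed-monotone example function; it merely checks the fixpoint property on one finite lattice.

```python
# Finite-lattice sketch of the Knaster-Tarski construction: the infimum
# of H = {x | f x <= x} is a fixpoint of any monotone f.
from itertools import combinations

def subsets(U):
    xs = list(U)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def tarski_fixpoint(f, U):
    """Inf {x <= U | f x <= x}; infimum = intersection on the powerset."""
    H = [x for x in subsets(U) if f(x) <= x]   # <= is subset on frozensets
    a = frozenset(U)
    for x in H:
        a = a & x
    return a

U = frozenset(range(5))
f = lambda x: x | {0} | {i + 1 for i in x if i + 1 in U}  # monotone on P(U)
a = tarski_fixpoint(f, U)
assert f(a) == a   # a is indeed a fixpoint, as the theorem states
```

Enumerating all subsets is exponential, so this only serves to make the lattice-theoretic argument tangible on a tiny example.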
    10.1 --- a/src/HOL/Isar_Examples/Mutilated_Checkerboard.thy	Thu Jul 01 14:32:57 2010 +0200
    10.2 +++ b/src/HOL/Isar_Examples/Mutilated_Checkerboard.thy	Thu Jul 01 18:31:46 2010 +0200
    10.3 @@ -9,27 +9,26 @@
    10.4  imports Main
    10.5  begin
    10.6  
    10.7 -text {*
    10.8 - The Mutilated Checker Board Problem, formalized inductively.  See
    10.9 - \cite{paulson-mutilated-board} and
   10.10 - \url{http://isabelle.in.tum.de/library/HOL/Induct/Mutil.html} for the
   10.11 - original tactic script version.
   10.12 -*}
   10.13 +text {* The Mutilated Checker Board Problem, formalized inductively.
   10.14 +  See \cite{paulson-mutilated-board} and
   10.15 +  \url{http://isabelle.in.tum.de/library/HOL/Induct/Mutil.html} for
   10.16 +  the original tactic script version. *}
   10.17  
   10.18  subsection {* Tilings *}
   10.19  
   10.20 -inductive_set
   10.21 -  tiling :: "'a set set => 'a set set"
   10.22 +inductive_set tiling :: "'a set set => 'a set set"
   10.23    for A :: "'a set set"
   10.24 -  where
   10.25 -    empty: "{} : tiling A"
   10.26 -  | Un: "a : A ==> t : tiling A ==> a <= - t ==> a Un t : tiling A"
   10.27 +where
   10.28 +  empty: "{} : tiling A"
   10.29 +| Un: "a : A ==> t : tiling A ==> a <= - t ==> a Un t : tiling A"
   10.30  
   10.31  
   10.32  text "The union of two disjoint tilings is a tiling."
   10.33  
   10.34  lemma tiling_Un:
   10.35 -  assumes "t : tiling A" and "u : tiling A" and "t Int u = {}"
   10.36 +  assumes "t : tiling A"
   10.37 +    and "u : tiling A"
   10.38 +    and "t Int u = {}"
   10.39    shows "t Un u : tiling A"
   10.40  proof -
   10.41    let ?T = "tiling A"
   10.42 @@ -60,8 +59,8 @@
   10.43  
   10.44  subsection {* Basic properties of ``below'' *}
   10.45  
   10.46 -definition below :: "nat => nat set" where
   10.47 -  "below n == {i. i < n}"
   10.48 +definition below :: "nat => nat set"
   10.49 +  where "below n = {i. i < n}"
   10.50  
   10.51  lemma below_less_iff [iff]: "(i: below k) = (i < k)"
   10.52    by (simp add: below_def)
   10.53 @@ -74,8 +73,8 @@
   10.54    by (simp add: below_def less_Suc_eq) blast
   10.55  
   10.56  lemma Sigma_Suc2:
   10.57 -    "m = n + 2 ==> A <*> below m =
   10.58 -      (A <*> {n}) Un (A <*> {n + 1}) Un (A <*> below n)"
   10.59 +  "m = n + 2 ==> A <*> below m =
   10.60 +    (A <*> {n}) Un (A <*> {n + 1}) Un (A <*> below n)"
   10.61    by (auto simp add: below_def)
   10.62  
   10.63  lemmas Sigma_Suc = Sigma_Suc1 Sigma_Suc2
   10.64 @@ -83,27 +82,26 @@
   10.65  
   10.66  subsection {* Basic properties of ``evnodd'' *}
   10.67  
   10.68 -definition evnodd :: "(nat * nat) set => nat => (nat * nat) set" where
   10.69 -  "evnodd A b == A Int {(i, j). (i + j) mod 2 = b}"
   10.70 +definition evnodd :: "(nat * nat) set => nat => (nat * nat) set"
   10.71 +  where "evnodd A b = A Int {(i, j). (i + j) mod 2 = b}"
   10.72  
   10.73 -lemma evnodd_iff:
   10.74 -    "(i, j): evnodd A b = ((i, j): A  & (i + j) mod 2 = b)"
   10.75 +lemma evnodd_iff: "(i, j): evnodd A b = ((i, j): A  & (i + j) mod 2 = b)"
   10.76    by (simp add: evnodd_def)
   10.77  
   10.78  lemma evnodd_subset: "evnodd A b <= A"
   10.79 -  by (unfold evnodd_def, rule Int_lower1)
   10.80 +  unfolding evnodd_def by (rule Int_lower1)
   10.81  
   10.82  lemma evnoddD: "x : evnodd A b ==> x : A"
   10.83 -  by (rule subsetD, rule evnodd_subset)
   10.84 +  by (rule subsetD) (rule evnodd_subset)
   10.85  
   10.86  lemma evnodd_finite: "finite A ==> finite (evnodd A b)"
   10.87 -  by (rule finite_subset, rule evnodd_subset)
   10.88 +  by (rule finite_subset) (rule evnodd_subset)
   10.89  
   10.90  lemma evnodd_Un: "evnodd (A Un B) b = evnodd A b Un evnodd B b"
   10.91 -  by (unfold evnodd_def) blast
   10.92 +  unfolding evnodd_def by blast
   10.93  
   10.94  lemma evnodd_Diff: "evnodd (A - B) b = evnodd A b - evnodd B b"
   10.95 -  by (unfold evnodd_def) blast
   10.96 +  unfolding evnodd_def by blast
   10.97  
   10.98  lemma evnodd_empty: "evnodd {} b = {}"
   10.99    by (simp add: evnodd_def)
  10.100 @@ -116,11 +114,10 @@
  10.101  
  10.102  subsection {* Dominoes *}
  10.103  
  10.104 -inductive_set
  10.105 -  domino :: "(nat * nat) set set"
  10.106 -  where
  10.107 -    horiz: "{(i, j), (i, j + 1)} : domino"
  10.108 -  | vertl: "{(i, j), (i + 1, j)} : domino"
  10.109 +inductive_set domino :: "(nat * nat) set set"
  10.110 +where
  10.111 +  horiz: "{(i, j), (i, j + 1)} : domino"
  10.112 +| vertl: "{(i, j), (i + 1, j)} : domino"
  10.113  
  10.114  lemma dominoes_tile_row:
  10.115    "{i} <*> below (2 * n) : tiling domino"
  10.116 @@ -165,9 +162,10 @@
  10.117  qed
  10.118  
  10.119  lemma domino_singleton:
  10.120 -  assumes d: "d : domino" and "b < 2"
  10.121 +  assumes "d : domino"
  10.122 +    and "b < 2"
  10.123    shows "EX i j. evnodd d b = {(i, j)}"  (is "?P d")
  10.124 -  using d
  10.125 +  using assms
  10.126  proof induct
  10.127    from `b < 2` have b_cases: "b = 0 | b = 1" by arith
  10.128    fix i j
  10.129 @@ -177,9 +175,9 @@
  10.130  qed
  10.131  
  10.132  lemma domino_finite:
  10.133 -  assumes d: "d: domino"
  10.134 +  assumes "d: domino"
  10.135    shows "finite d"
  10.136 -  using d
  10.137 +  using assms
  10.138  proof induct
  10.139    fix i j :: nat
  10.140    show "finite {(i, j), (i, j + 1)}" by (intro finite.intros)
  10.141 @@ -245,8 +243,9 @@
  10.142  
  10.143  subsection {* Main theorem *}
  10.144  
  10.145 -definition mutilated_board :: "nat => nat => (nat * nat) set" where
  10.146 -  "mutilated_board m n ==
  10.147 +definition mutilated_board :: "nat => nat => (nat * nat) set"
  10.148 +where
  10.149 +  "mutilated_board m n =
  10.150      below (2 * (m + 1)) <*> below (2 * (n + 1))
  10.151        - {(0, 0)} - {(2 * m + 1, 2 * n + 1)}"
  10.152  
  10.153 @@ -256,7 +255,7 @@
  10.154    let ?t = "below (2 * (m + 1)) <*> below (2 * (n + 1))"
  10.155    let ?t' = "?t - {(0, 0)}"
  10.156    let ?t'' = "?t' - {(2 * m + 1, 2 * n + 1)}"
  10.157 -
  10.158 +  
  10.159    show "?t'' ~: ?T"
  10.160    proof
  10.161      have t: "?t : ?T" by (rule dominoes_tile_matrix)
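As an aside, the parity argument behind the formalization above can be run concretely: every domino covers one "even" and one "odd" square (colour @{text "(i + j) mod 2"}), while the mutilated board loses two squares of the same colour, so its colour counts differ. The Python sketch below restates `evnodd` and `mutilated_board` as plain set operations; it illustrates the proof idea only.

```python
# The colour-counting argument of the Mutilated Checker Board Problem,
# checked concretely on an 8 x 8 board.

def evnodd(A, b):
    """The squares of A with colour b, where colour is (i + j) mod 2."""
    return {(i, j) for (i, j) in A if (i + j) % 2 == b}

def mutilated_board(m, n):
    board = {(i, j) for i in range(2 * (m + 1)) for j in range(2 * (n + 1))}
    return board - {(0, 0)} - {(2 * m + 1, 2 * n + 1)}

# A domino always balances the two colours ...
horiz = {(3, 4), (3, 5)}
assert len(evnodd(horiz, 0)) == len(evnodd(horiz, 1)) == 1

# ... but the mutilated board does not, so no domino tiling can exist.
mb = mutilated_board(3, 3)
assert len(evnodd(mb, 0)) != len(evnodd(mb, 1))
```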
    11.1 --- a/src/HOL/Isar_Examples/Nested_Datatype.thy	Thu Jul 01 14:32:57 2010 +0200
    11.2 +++ b/src/HOL/Isar_Examples/Nested_Datatype.thy	Thu Jul 01 18:31:46 2010 +0200
    11.3 @@ -10,8 +10,10 @@
    11.4      Var 'a
    11.5    | App 'b "('a, 'b) term list"
    11.6  
    11.7 -primrec subst_term :: "('a => ('a, 'b) term) => ('a, 'b) term => ('a, 'b) term" and
    11.8 -  subst_term_list :: "('a => ('a, 'b) term) => ('a, 'b) term list => ('a, 'b) term list" where
    11.9 +primrec
   11.10 +  subst_term :: "('a => ('a, 'b) term) => ('a, 'b) term => ('a, 'b) term" and
   11.11 +  subst_term_list :: "('a => ('a, 'b) term) => ('a, 'b) term list => ('a, 'b) term list"
   11.12 +where
   11.13    "subst_term f (Var a) = f a"
   11.14  | "subst_term f (App b ts) = App b (subst_term_list f ts)"
   11.15  | "subst_term_list f [] = []"
   11.16 @@ -19,18 +21,18 @@
   11.17  
   11.18  lemmas subst_simps = subst_term_subst_term_list.simps
   11.19  
   11.20 -text {*
   11.21 - \medskip A simple lemma about composition of substitutions.
   11.22 -*}
   11.23 +text {* \medskip A simple lemma about composition of substitutions. *}
   11.24  
   11.25 -lemma "subst_term (subst_term f1 o f2) t =
   11.26 -      subst_term f1 (subst_term f2 t)"
   11.27 -  and "subst_term_list (subst_term f1 o f2) ts =
   11.28 -      subst_term_list f1 (subst_term_list f2 ts)"
   11.29 +lemma
   11.30 +  "subst_term (subst_term f1 o f2) t =
   11.31 +    subst_term f1 (subst_term f2 t)"
   11.32 +  and
   11.33 +  "subst_term_list (subst_term f1 o f2) ts =
   11.34 +    subst_term_list f1 (subst_term_list f2 ts)"
   11.35    by (induct t and ts) simp_all
   11.36  
   11.37  lemma "subst_term (subst_term f1 o f2) t =
   11.38 -  subst_term f1 (subst_term f2 t)"
   11.39 +    subst_term f1 (subst_term f2 t)"
   11.40  proof -
   11.41    let "?P t" = ?thesis
   11.42    let ?Q = "\<lambda>ts. subst_term_list (subst_term f1 o f2) ts =
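An executable aside: the composition lemma for the nested `term` datatype, @{text "subst_term (subst_term f1 o f2) t = subst_term f1 (subst_term f2 t)"}, can be spot-checked by transcribing the mutually recursive definitions into Python (tuples standing in for `Var`/`App`; all names below are for illustration only).

```python
# Mutually recursive substitution over a nested term datatype, mirroring
# subst_term / subst_term_list, plus a spot-check of the composition lemma.

def subst_term(f, t):
    if t[0] == "Var":
        return f(t[1])
    return ("App", t[1], subst_term_list(f, t[2]))   # mutual recursion

def subst_term_list(f, ts):
    return [subst_term(f, t) for t in ts]

f1 = lambda a: ("Var", a + "'")
f2 = lambda a: ("App", "g", [("Var", a)])
t = ("App", "h", [("Var", "x"), ("App", "k", [("Var", "y")])])

# subst_term (subst_term f1 o f2) t  =  subst_term f1 (subst_term f2 t)
lhs = subst_term(lambda a: subst_term(f1, f2(a)), t)
rhs = subst_term(f1, subst_term(f2, t))
assert lhs == rhs
```

A single test case proves nothing, of course; the Isar proof establishes the law for all terms by the simultaneous induction shown above.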
    12.1 --- a/src/HOL/Isar_Examples/Peirce.thy	Thu Jul 01 14:32:57 2010 +0200
    12.2 +++ b/src/HOL/Isar_Examples/Peirce.thy	Thu Jul 01 18:31:46 2010 +0200
    12.3 @@ -8,17 +8,16 @@
    12.4  imports Main
    12.5  begin
    12.6  
    12.7 -text {*
    12.8 - We consider Peirce's Law: $((A \impl B) \impl A) \impl A$.  This is
    12.9 - an inherently non-intuitionistic statement, so its proof will
   12.10 - certainly involve some form of classical contradiction.
   12.11 +text {* We consider Peirce's Law: $((A \impl B) \impl A) \impl A$.
   12.12 +  This is an inherently non-intuitionistic statement, so its proof
   12.13 +  will certainly involve some form of classical contradiction.
   12.14  
   12.15 - The first proof is again a well-balanced combination of plain
   12.16 - backward and forward reasoning.  The actual classical step is where
   12.17 - the negated goal may be introduced as additional assumption.  This
   12.18 - eventually leads to a contradiction.\footnote{The rule involved there
   12.19 - is negation elimination; it holds in intuitionistic logic as well.}
   12.20 -*}
   12.21 +  The first proof is again a well-balanced combination of plain
   12.22 +  backward and forward reasoning.  The actual classical step is where
   12.23 +  the negated goal may be introduced as additional assumption.  This
   12.24 +  eventually leads to a contradiction.\footnote{The rule involved
   12.25 +  there is negation elimination; it holds in intuitionistic logic as
   12.26 +  well.} *}
   12.27  
   12.28  theorem "((A --> B) --> A) --> A"
   12.29  proof
   12.30 @@ -35,21 +34,19 @@
   12.31    qed
   12.32  qed
   12.33  
   12.34 -text {*
   12.35 - In the subsequent version the reasoning is rearranged by means of
   12.36 - ``weak assumptions'' (as introduced by \isacommand{presume}).  Before
   12.37 - assuming the negated goal $\neg A$, its intended consequence $A \impl
   12.38 - B$ is put into place in order to solve the main problem.
   12.39 - Nevertheless, we do not get anything for free, but have to establish
   12.40 - $A \impl B$ later on.  The overall effect is that of a logical
   12.41 - \emph{cut}.
   12.42 +text {* In the subsequent version the reasoning is rearranged by means
   12.43 +  of ``weak assumptions'' (as introduced by \isacommand{presume}).
   12.44 +  Before assuming the negated goal $\neg A$, its intended consequence
   12.45 +  $A \impl B$ is put into place in order to solve the main problem.
   12.46 +  Nevertheless, we do not get anything for free, but have to establish
   12.47 +  $A \impl B$ later on.  The overall effect is that of a logical
   12.48 +  \emph{cut}.
   12.49  
   12.50 - Technically speaking, whenever some goal is solved by
   12.51 - \isacommand{show} in the context of weak assumptions then the latter
   12.52 - give rise to new subgoals, which may be established separately.  In
   12.53 - contrast, strong assumptions (as introduced by \isacommand{assume})
   12.54 - are solved immediately.
   12.55 -*}
   12.56 +  Technically speaking, whenever some goal is solved by
   12.57 +  \isacommand{show} in the context of weak assumptions then the latter
   12.58 +  give rise to new subgoals, which may be established separately.  In
   12.59 +  contrast, strong assumptions (as introduced by \isacommand{assume})
   12.60 +  are solved immediately. *}
   12.61  
   12.62  theorem "((A --> B) --> A) --> A"
   12.63  proof
   12.64 @@ -68,23 +65,21 @@
   12.65    qed
   12.66  qed
   12.67  
   12.68 -text {*
   12.69 - Note that the goals stemming from weak assumptions may be even left
   12.70 - until qed time, where they get eventually solved ``by assumption'' as
   12.71 - well.  In that case there is really no fundamental difference between
   12.72 - the two kinds of assumptions, apart from the order of reducing the
   12.73 - individual parts of the proof configuration.
   12.74 +text {* Note that the goals stemming from weak assumptions may even
   12.75 +  be left until qed time, where they are eventually solved ``by
   12.76 +  assumption'' as well.  In that case there is really no fundamental
   12.77 +  difference between the two kinds of assumptions, apart from the
   12.78 +  order of reducing the individual parts of the proof configuration.
   12.79  
   12.80 - Nevertheless, the ``strong'' mode of plain assumptions is quite
   12.81 - important in practice to achieve robustness of proof text
   12.82 - interpretation.  By forcing both the conclusion \emph{and} the
   12.83 - assumptions to unify with the pending goal to be solved, goal
   12.84 - selection becomes quite deterministic.  For example, decomposition
   12.85 - with rules of the ``case-analysis'' type usually gives rise to
   12.86 - several goals that only differ in there local contexts.  With strong
   12.87 - assumptions these may be still solved in any order in a predictable
   12.88 - way, while weak ones would quickly lead to great confusion,
   12.89 - eventually demanding even some backtracking.
   12.90 -*}
   12.91 +  Nevertheless, the ``strong'' mode of plain assumptions is quite
   12.92 +  important in practice to achieve robustness of proof text
   12.93 +  interpretation.  By forcing both the conclusion \emph{and} the
   12.94 +  assumptions to unify with the pending goal to be solved, goal
   12.95 +  selection becomes quite deterministic.  For example, decomposition
   12.96 +  with rules of the ``case-analysis'' type usually gives rise to
   12.97 +  several goals that only differ in their local contexts.  With strong
   12.98 +  assumptions these may still be solved in any order in a predictable
   12.99 +  way, while weak ones would quickly lead to great confusion,
  12.100 +  eventually demanding even some backtracking. *}
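The theorem manipulated in this file, $((A \impl B) \impl A) \impl A$, is Peirce's law, a classical tautology. As a quick sanity check outside of Isabelle (purely illustrative, not part of the theory), one can confirm by brute force that it holds under every Boolean valuation:

```python
from itertools import product

def implies(p, q):
    # material implication: p --> q is False only when p is True and q is False
    return (not p) or q

# Peirce's law: ((A --> B) --> A) --> A
# Enumerate all four valuations of (A, B) and check the formula on each.
assert all(
    implies(implies(implies(a, b), a), a)
    for a, b in product([False, True], repeat=2)
)
print("Peirce's law holds under all Boolean valuations")
```

This only checks propositional validity, of course; the point of the Isar text above is the structure of the natural deduction proof, not the truth table.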
  12.101  
  12.102  end
    13.1 --- a/src/HOL/Isar_Examples/Puzzle.thy	Thu Jul 01 14:32:57 2010 +0200
    13.2 +++ b/src/HOL/Isar_Examples/Puzzle.thy	Thu Jul 01 18:31:46 2010 +0200
    13.3 @@ -4,17 +4,13 @@
    13.4  imports Main
    13.5  begin
    13.6  
    13.7 -text_raw {*
    13.8 -  \footnote{A question from ``Bundeswettbewerb Mathematik''.  Original
    13.9 -  pen-and-paper proof due to Herbert Ehler; Isabelle tactic script by
   13.10 -  Tobias Nipkow.}
   13.11 -*}
   13.12 +text_raw {* \footnote{A question from ``Bundeswettbewerb Mathematik''.
   13.13 +  Original pen-and-paper proof due to Herbert Ehler; Isabelle tactic
   13.14 +  script by Tobias Nipkow.} *}
   13.15  
   13.16 -text {*
   13.17 -  \textbf{Problem.}  Given some function $f\colon \Nat \to \Nat$ such
   13.18 -  that $f \ap (f \ap n) < f \ap (\idt{Suc} \ap n)$ for all $n$.
   13.19 -  Demonstrate that $f$ is the identity.
   13.20 -*}
   13.21 +text {* \textbf{Problem.}  Given some function $f\colon \Nat \to \Nat$
   13.22 +  such that $f \ap (f \ap n) < f \ap (\idt{Suc} \ap n)$ for all $n$.
   13.23 +  Demonstrate that $f$ is the identity. *}
   13.24  
   13.25  theorem
   13.26    assumes f_ax: "\<And>n. f (f n) < f (Suc n)"
    14.1 --- a/src/HOL/Isar_Examples/Summation.thy	Thu Jul 01 14:32:57 2010 +0200
    14.2 +++ b/src/HOL/Isar_Examples/Summation.thy	Thu Jul 01 18:31:46 2010 +0200
    14.3 @@ -8,28 +8,22 @@
    14.4  imports Main
    14.5  begin
    14.6  
    14.7 -text_raw {*
    14.8 - \footnote{This example is somewhat reminiscent of the
    14.9 - \url{http://isabelle.in.tum.de/library/HOL/ex/NatSum.html}, which is
   14.10 - discussed in \cite{isabelle-ref} in the context of permutative
   14.11 - rewrite rules and ordered rewriting.}
   14.12 -*}
   14.13 +text_raw {* \footnote{This example is somewhat reminiscent of the
   14.14 +  \url{http://isabelle.in.tum.de/library/HOL/ex/NatSum.html}, which is
   14.15 +  discussed in \cite{isabelle-ref} in the context of permutative
   14.16 +  rewrite rules and ordered rewriting.} *}
   14.17  
   14.18 -text {*
   14.19 - Subsequently, we prove some summation laws of natural numbers
   14.20 - (including odds, squares, and cubes).  These examples demonstrate how
   14.21 - plain natural deduction (including induction) may be combined with
   14.22 - calculational proof.
   14.23 -*}
   14.24 +text {* Subsequently, we prove some summation laws of natural numbers
   14.25 +  (including odds, squares, and cubes).  These examples demonstrate
   14.26 +  how plain natural deduction (including induction) may be combined
   14.27 +  with calculational proof. *}
   14.28  
   14.29  
   14.30  subsection {* Summation laws *}
   14.31  
   14.32 -text {*
   14.33 - The sum of natural numbers $0 + \cdots + n$ equals $n \times (n +
   14.34 - 1)/2$.  Avoiding formal reasoning about division we prove this
   14.35 - equation multiplied by $2$.
   14.36 -*}
   14.37 +text {* The sum of natural numbers $0 + \cdots + n$ equals $n \times
   14.38 +  (n + 1)/2$.  Avoiding formal reasoning about division we prove this
   14.39 +  equation multiplied by $2$. *}
   14.40  
   14.41  theorem sum_of_naturals:
   14.42    "2 * (\<Sum>i::nat=0..n. i) = n * (n + 1)"
   14.43 @@ -43,48 +37,47 @@
   14.44    finally show "?P (Suc n)" by simp
   14.45  qed
   14.46  
   14.47 -text {*
   14.48 - The above proof is a typical instance of mathematical induction.  The
   14.49 - main statement is viewed as some $\var{P} \ap n$ that is split by the
   14.50 - induction method into base case $\var{P} \ap 0$, and step case
   14.51 - $\var{P} \ap n \Impl \var{P} \ap (\idt{Suc} \ap n)$ for arbitrary $n$.
   14.52 +text {* The above proof is a typical instance of mathematical
   14.53 +  induction.  The main statement is viewed as some $\var{P} \ap n$
   14.54 +  that is split by the induction method into base case $\var{P} \ap
   14.55 +  0$, and step case $\var{P} \ap n \Impl \var{P} \ap (\idt{Suc} \ap
   14.56 +  n)$ for arbitrary $n$.
   14.57  
   14.58 - The step case is established by a short calculation in forward
   14.59 - manner.  Starting from the left-hand side $\var{S} \ap (n + 1)$ of
   14.60 - the thesis, the final result is achieved by transformations involving
   14.61 - basic arithmetic reasoning (using the Simplifier).  The main point is
   14.62 - where the induction hypothesis $\var{S} \ap n = n \times (n + 1)$ is
   14.63 - introduced in order to replace a certain subterm.  So the
   14.64 - ``transitivity'' rule involved here is actual \emph{substitution}.
   14.65 - Also note how the occurrence of ``\dots'' in the subsequent step
   14.66 - documents the position where the right-hand side of the hypothesis
   14.67 - got filled in.
   14.68 +  The step case is established by a short calculation in forward
   14.69 +  manner.  Starting from the left-hand side $\var{S} \ap (n + 1)$ of
   14.70 +  the thesis, the final result is achieved by transformations
   14.71 +  involving basic arithmetic reasoning (using the Simplifier).  The
   14.72 +  main point is where the induction hypothesis $\var{S} \ap n = n
   14.73 +  \times (n + 1)$ is introduced in order to replace a certain subterm.
   14.74 +  So the ``transitivity'' rule involved here is actually
   14.75 +  \emph{substitution}.  Also note how the occurrence of ``\dots'' in
   14.76 +  the subsequent step documents the position where the right-hand side
   14.77 +  of the hypothesis got filled in.
   14.78  
   14.79 - \medskip A further notable point here is integration of calculations
   14.80 - with plain natural deduction.  This works so well in Isar for two
   14.81 - reasons.
   14.82 - \begin{enumerate}
   14.83 +  \medskip A further notable point here is integration of calculations
   14.84 +  with plain natural deduction.  This works so well in Isar for two
   14.85 +  reasons.
   14.86 +  \begin{enumerate}
   14.87 +
   14.88 +  \item Facts involved in \isakeyword{also}~/ \isakeyword{finally}
   14.89 +  calculational chains may be just anything.  There is nothing special
   14.90 +  about \isakeyword{have}, so the natural deduction element
   14.91 +  \isakeyword{assume} works just as well.
   14.92  
   14.93 - \item Facts involved in \isakeyword{also}~/ \isakeyword{finally}
   14.94 - calculational chains may be just anything.  There is nothing special
   14.95 - about \isakeyword{have}, so the natural deduction element
   14.96 - \isakeyword{assume} works just as well.
   14.97 +  \item There are two \emph{separate} primitives for building natural
   14.98 +  deduction contexts: \isakeyword{fix}~$x$ and
   14.99 +  \isakeyword{assume}~$A$.  Thus it is possible to start reasoning
  14.100 +  with some new ``arbitrary, but fixed'' elements before bringing in
  14.101 +  the actual assumption.  In contrast, natural deduction is
  14.102 +  occasionally formalized with basic context elements of the form
  14.103 +  $x:A$ instead.
  14.104  
  14.105 - \item There are two \emph{separate} primitives for building natural
  14.106 - deduction contexts: \isakeyword{fix}~$x$ and \isakeyword{assume}~$A$.
  14.107 - Thus it is possible to start reasoning with some new ``arbitrary, but
  14.108 - fixed'' elements before bringing in the actual assumption.  In
  14.109 - contrast, natural deduction is occasionally formalized with basic
  14.110 - context elements of the form $x:A$ instead.
  14.111 -
  14.112 - \end{enumerate}
  14.113 +  \end{enumerate}
  14.114  *}
  14.115  
  14.116 -text {*
  14.117 - \medskip We derive further summation laws for odds, squares, and
  14.118 - cubes as follows.  The basic technique of induction plus calculation
  14.119 - is the same as before.
  14.120 -*}
  14.121 +text {* \medskip We derive further summation laws for odds, squares,
  14.122 +  and cubes as follows.  The basic technique of induction plus
  14.123 +  calculation is the same as before. *}
  14.124  
  14.125  theorem sum_of_odds:
  14.126    "(\<Sum>i::nat=0..<n. 2 * i + 1) = n^Suc (Suc 0)"
  14.127 @@ -98,11 +91,9 @@
  14.128    finally show "?P (Suc n)" by simp
  14.129  qed
  14.130  
  14.131 -text {*
  14.132 - Subsequently we require some additional tweaking of Isabelle built-in
  14.133 - arithmetic simplifications, such as bringing in distributivity by
  14.134 - hand.
  14.135 -*}
  14.136 +text {* Subsequently we require some additional tweaking of Isabelle
  14.137 +  built-in arithmetic simplifications, such as bringing in
  14.138 +  distributivity by hand. *}
  14.139  
  14.140  lemmas distrib = add_mult_distrib add_mult_distrib2
  14.141  
  14.142 @@ -116,7 +107,7 @@
  14.143      by (simp add: distrib)
  14.144    also assume "?S n = n * (n + 1) * (2 * n + 1)"
  14.145    also have "... + 6 * (n + 1)^Suc (Suc 0) =
  14.146 -    (n + 1) * (n + 2) * (2 * (n + 1) + 1)" by (simp add: distrib)
  14.147 +      (n + 1) * (n + 2) * (2 * (n + 1) + 1)" by (simp add: distrib)
  14.148    finally show "?P (Suc n)" by simp
  14.149  qed
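The proof above manipulates the squares law in its multiplied-out form, $6 \times \var{S}\,n = n \times (n + 1) \times (2n + 1)$, as the occurrences of the induction hypothesis in the calculation show. A numeric spot-check (a Python sketch outside the theory, for illustration only):

```python
# Check 6 * (sum of i^2 for 0 <= i <= n) = n * (n + 1) * (2n + 1) for small n.
for n in range(100):
    assert 6 * sum(i ** 2 for i in range(n + 1)) == n * (n + 1) * (2 * n + 1)
print("sum_of_squares identity holds for n < 100")
```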
  14.150  
  14.151 @@ -134,25 +125,23 @@
  14.152    finally show "?P (Suc n)" by simp
  14.153  qed
  14.154  
  14.155 -text {*
  14.156 - Comparing these examples with the tactic script version
  14.157 - \url{http://isabelle.in.tum.de/library/HOL/ex/NatSum.html}, we note
  14.158 - an important difference of how induction vs.\ simplification is
  14.159 - applied.  While \cite[\S10]{isabelle-ref} advises for these examples
  14.160 - that ``induction should not be applied until the goal is in the
  14.161 - simplest form'' this would be a very bad idea in our setting.
  14.162 +text {* Comparing these examples with the tactic script version
  14.163 +  \url{http://isabelle.in.tum.de/library/HOL/ex/NatSum.html}, we note
  14.164 +  an important difference in how induction vs.\ simplification is
  14.165 +  applied.  While \cite[\S10]{isabelle-ref} advises for these examples
  14.166 +  that ``induction should not be applied until the goal is in the
  14.167 +  simplest form,'' this would be a very bad idea in our setting.
  14.168  
  14.169 - Simplification normalizes all arithmetic expressions involved,
  14.170 - producing huge intermediate goals.  With applying induction
  14.171 - afterwards, the Isar proof text would have to reflect the emerging
  14.172 - configuration by appropriate sub-proofs.  This would result in badly
  14.173 - structured, low-level technical reasoning, without any good idea of
  14.174 - the actual point.
  14.175 +  Simplification normalizes all arithmetic expressions involved,
  14.176 +  producing huge intermediate goals.  If induction is applied
  14.177 +  afterwards, the Isar proof text would have to reflect the emerging
  14.178 +  configuration by appropriate sub-proofs.  This would result in badly
  14.179 +  structured, low-level technical reasoning, without any good idea of
  14.180 +  the actual point.
  14.181  
  14.182 - \medskip As a general rule of good proof style, automatic methods
  14.183 - such as $\idt{simp}$ or $\idt{auto}$ should normally be never used as
  14.184 - initial proof methods, but only as terminal ones, solving certain
  14.185 - goals completely.
  14.186 -*}
  14.187 +  \medskip As a general rule of good proof style, automatic methods
  14.188 +  such as $\idt{simp}$ or $\idt{auto}$ should normally never be used
  14.189 +  as initial proof methods, but only as terminal ones, solving certain
  14.190 +  goals completely. *}
  14.191  
  14.192  end