src/HOL/Isar_examples/BasicLogic.thy
changeset 18193 54419506df9e
parent 16417 9bc16273c2d4
child 23373 ead82c82da9e
     1.1 --- a/src/HOL/Isar_examples/BasicLogic.thy	Wed Nov 16 17:50:35 2005 +0100
     1.2 +++ b/src/HOL/Isar_examples/BasicLogic.thy	Wed Nov 16 19:34:19 2005 +0100
     1.3 @@ -13,10 +13,10 @@
     1.4  subsection {* Pure backward reasoning *}
     1.5  
     1.6  text {*
     1.7 - In order to get a first idea of how Isabelle/Isar proof documents may
     1.8 - look like, we consider the propositions $I$, $K$, and $S$.  The
     1.9 - following (rather explicit) proofs should require little extra
    1.10 - explanations.
    1.11 +  In order to get a first idea of what Isabelle/Isar proof documents
    1.12 +  may look like, we consider the propositions @{text I}, @{text K},
    1.13 +  and @{text S}.  The following (rather explicit) proofs should
    1.14 +  require little extra explanation.
    1.15  *}
    1.16  
    1.17  lemma I: "A --> A"
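(For orientation: besides I above, the propositions K and S treated by the following proofs are the usual combinator-style tautologies, which in HOL syntax read roughly as follows.)

  lemma K: "A --> B --> A"
  lemma S: "(A --> B --> C) --> (A --> B) --> A --> C"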
    1.18 @@ -45,7 +45,7 @@
    1.19        assume A
    1.20        show C
    1.21        proof (rule mp)
    1.22 -	show "B --> C" by (rule mp)
    1.23 +        show "B --> C" by (rule mp)
    1.24          show B by (rule mp)
    1.25        qed
    1.26      qed
    1.27 @@ -53,12 +53,13 @@
    1.28  qed
    1.29  
    1.30  text {*
    1.31 - Isar provides several ways to fine-tune the reasoning, avoiding
    1.32 - excessive detail.  Several abbreviated language elements are
    1.33 - available, enabling the writer to express proofs in a more concise
    1.34 - way, even without referring to any automated proof tools yet.
    1.35 +  Isar provides several ways to fine-tune the reasoning, avoiding
    1.36 +  excessive detail.  Several abbreviated language elements are
    1.37 +  available, enabling the writer to express proofs in a more concise
    1.38 +  way, even without referring to any automated proof tools yet.
    1.39  
    1.40 - First of all, proof by assumption may be abbreviated as a single dot.
    1.41 +  First of all, proof by assumption may be abbreviated as a single
    1.42 +  dot.
    1.43  *}
    1.44  
    1.45  lemma "A --> A"
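Spelled out, such a single-dot proof might read along these lines (an illustrative sketch in the style of the theory):

  lemma "A --> A"
  proof
    assume A
    show A .
  qed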
    1.46 @@ -68,11 +69,11 @@
    1.47  qed
    1.48  
    1.49  text {*
    1.50 - In fact, concluding any (sub-)proof already involves solving any
    1.51 - remaining goals by assumption\footnote{This is not a completely
    1.52 - trivial operation, as proof by assumption may involve full
    1.53 - higher-order unification.}.  Thus we may skip the rather vacuous body
    1.54 - of the above proof as well.
    1.55 +  In fact, concluding any (sub-)proof already involves solving any
    1.56 +  remaining goals by assumption\footnote{This is not a completely
    1.57 +  trivial operation, as proof by assumption may involve full
    1.58 +  higher-order unification.}.  Thus we may skip the rather vacuous
    1.59 +  body of the above proof as well.
    1.60  *}
    1.61  
    1.62  lemma "A --> A"
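That is, the body may be omitted entirely, leaving just the outline (sketch):

  lemma "A --> A"
  proof
  qed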
    1.63 @@ -80,36 +81,37 @@
    1.64  qed
    1.65  
    1.66  text {*
    1.67 - Note that the \isacommand{proof} command refers to the $\idt{rule}$
    1.68 - method (without arguments) by default.  Thus it implicitly applies a
    1.69 - single rule, as determined from the syntactic form of the statements
    1.70 - involved.  The \isacommand{by} command abbreviates any proof with
    1.71 - empty body, so the proof may be further pruned.
    1.72 +  Note that the \isacommand{proof} command refers to the @{text rule}
    1.73 +  method (without arguments) by default.  Thus it implicitly applies a
    1.74 +  single rule, as determined from the syntactic form of the statements
    1.75 +  involved.  The \isacommand{by} command abbreviates any proof with
    1.76 +  an empty body, so the proof may be further pruned.
    1.77  *}
    1.78  
    1.79  lemma "A --> A"
    1.80    by rule
    1.81  
    1.82  text {*
    1.83 - Proof by a single rule may be abbreviated as double-dot.
    1.84 +  Proof by a single rule may be abbreviated as double-dot.
    1.85  *}
    1.86  
    1.87  lemma "A --> A" ..
    1.88  
    1.89  text {*
    1.90 - Thus we have arrived at an adequate representation of the proof of a
    1.91 - tautology that holds by a single standard rule.\footnote{Apparently,
    1.92 - the rule here is implication introduction.}
    1.93 +  Thus we have arrived at an adequate representation of the proof of a
    1.94 +  tautology that holds by a single standard rule.\footnote{Apparently,
    1.95 +  the rule here is implication introduction.}
    1.96  *}
    1.97  
    1.98  text {*
    1.99 - Let us also reconsider $K$.  Its statement is composed of iterated
   1.100 - connectives.  Basic decomposition is by a single rule at a time,
   1.101 - which is why our first version above was by nesting two proofs.
   1.102 +  Let us also reconsider @{text K}.  Its statement is composed of
   1.103 +  iterated connectives.  Basic decomposition is by a single rule at a
   1.104 +  time, which is why our first version above proceeded by nesting two
   1.105 +  proofs.
   1.106  
   1.107 - The $\idt{intro}$ proof method repeatedly decomposes a goal's
   1.108 - conclusion.\footnote{The dual method is $\idt{elim}$, acting on a
   1.109 - goal's premises.}
   1.110 +  The @{text intro} proof method repeatedly decomposes a goal's
   1.111 +  conclusion.\footnote{The dual method is @{text elim}, acting on a
   1.112 +  goal's premises.}
   1.113  *}
   1.114  
   1.115  lemma "A --> B --> A"
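One way such a proof of K could look (sketch):

  lemma "A --> B --> A"
  proof (intro impI)
    assume A
    show A .
  qed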
   1.116 @@ -119,41 +121,40 @@
   1.117  qed
   1.118  
   1.119  text {*
   1.120 - Again, the body may be collapsed.
   1.121 +  Again, the body may be collapsed.
   1.122  *}
   1.123  
   1.124  lemma "A --> B --> A"
   1.125    by (intro impI)
   1.126  
   1.127  text {*
   1.128 - Just like $\idt{rule}$, the $\idt{intro}$ and $\idt{elim}$ proof
   1.129 - methods pick standard structural rules, in case no explicit arguments
   1.130 - are given.  While implicit rules are usually just fine for single
   1.131 - rule application, this may go too far with iteration.  Thus in
   1.132 - practice, $\idt{intro}$ and $\idt{elim}$ would be typically
   1.133 - restricted to certain structures by giving a few rules only, e.g.\
   1.134 - \isacommand{proof}~($\idt{intro}$~\name{impI}~\name{allI}) to strip
   1.135 - implications and universal quantifiers.
   1.136 +  Just like @{text rule}, the @{text intro} and @{text elim} proof
   1.137 +  methods pick standard structural rules if no explicit arguments
   1.138 +  are given.  While implicit rules are usually just fine for
   1.139 +  single rule application, this may go too far with iteration.  Thus
   1.140 +  in practice, @{text intro} and @{text elim} would typically be
   1.141 +  restricted to certain structures by giving a few rules only, e.g.\
   1.142 +  \isacommand{proof}~@{text "(intro impI allI)"} to strip implications
   1.143 +  and universal quantifiers.
   1.144  
   1.145 - Such well-tuned iterated decomposition of certain structures is the
   1.146 - prime application of $\idt{intro}$ and $\idt{elim}$.  In contrast,
   1.147 - terminal steps that solve a goal completely are usually performed by
   1.148 - actual automated proof methods (such as
   1.149 - \isacommand{by}~$\idt{blast}$).
   1.150 +  Such well-tuned iterated decomposition of certain structures is the
   1.151 +  prime application of @{text intro} and @{text elim}.  In contrast,
   1.152 +  terminal steps that solve a goal completely are usually performed by
   1.153 +  actual automated proof methods (such as \isacommand{by}~@{text
   1.154 +  blast}).
   1.155  *}
   1.156  
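As an illustration of the last two points, consider the following sketches; the quantified statement is a made-up example, not part of the theory:

  lemma "A --> (ALL x. B x --> A)"
  proof (intro impI allI)
    fix x
    assume A
    show A .
  qed

A terminal automated step, by contrast, would dispose of such a goal outright, e.g. lemma "A & B --> B & A" by blast.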
   1.157  
   1.158  subsection {* Variations of backward vs.\ forward reasoning *}
   1.159  
   1.160  text {*
   1.161 - Certainly, any proof may be performed in backward-style only.  On the
   1.162 - other hand, small steps of reasoning are often more naturally
   1.163 - expressed in forward-style.  Isar supports both backward and forward
   1.164 - reasoning as a first-class concept.  In order to demonstrate the
   1.165 - difference, we consider several proofs of $A \conj B \impl B \conj
   1.166 - A$.
   1.167 +  Certainly, any proof may be performed in backward-style only.  On
   1.168 +  the other hand, small steps of reasoning are often more naturally
   1.169 +  expressed in forward-style.  Isar supports both backward and forward
   1.170 +  reasoning as a first-class concept.  In order to demonstrate the
   1.171 +  difference, we consider several proofs of @{text "A \<and> B \<longrightarrow> B \<and> A"}.
   1.172  
   1.173 - The first version is purely backward.
   1.174 +  The first version is purely backward.
   1.175  *}
   1.176  
   1.177  lemma "A & B --> B & A"
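For reference, the purely backward version might look like this (a sketch; conjunct1 and conjunct2 are the HOL projection rules):

  lemma "A & B --> B & A"
  proof
    assume "A & B"
    show "B & A"
    proof                    -- {* rule conjI *}
      show B by (rule conjunct2)
      show A by (rule conjunct1)
    qed
  qed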
   1.178 @@ -167,13 +168,13 @@
   1.179  qed
   1.180  
   1.181  text {*
   1.182 - Above, the $\idt{conjunct}_{1/2}$ projection rules had to be named
   1.183 - explicitly, since the goals $B$ and $A$ did not provide any
   1.184 - structural clue.  This may be avoided using \isacommand{from} to
   1.185 - focus on $\idt{prems}$ (i.e.\ the $A \conj B$ assumption) as the
   1.186 - current facts, enabling the use of double-dot proofs.  Note that
   1.187 - \isacommand{from} already does forward-chaining, involving the
   1.188 - \name{conjE} rule here.
   1.189 +  Above, the @{text "conjunct_1/2"} projection rules had to be named
   1.190 +  explicitly, since the goals @{text B} and @{text A} did not provide
   1.191 +  any structural clue.  This may be avoided using \isacommand{from} to
   1.192 +  focus on @{text prems} (i.e.\ the @{text "A \<and> B"} assumption) as the
   1.193 +  current facts, enabling the use of double-dot proofs.  Note that
   1.194 +  \isacommand{from} already does forward-chaining, involving the
   1.195 +  @{text conjE} rule here.
   1.196  *}
   1.197  
   1.198  lemma "A & B --> B & A"
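A corresponding rendering with from prems might read (sketch):

  lemma "A & B --> B & A"
  proof
    assume "A & B"
    show "B & A"
    proof
      from prems show B ..
      from prems show A ..
    qed
  qed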
   1.199 @@ -187,29 +188,29 @@
   1.200  qed
   1.201  
   1.202  text {*
   1.203 - In the next version, we move the forward step one level upwards.
   1.204 - Forward-chaining from the most recent facts is indicated by the
   1.205 - \isacommand{then} command.  Thus the proof of $B \conj A$ from $A
   1.206 - \conj B$ actually becomes an elimination, rather than an
   1.207 - introduction.  The resulting proof structure directly corresponds to
   1.208 - that of the $\name{conjE}$ rule, including the repeated goal
   1.209 - proposition that is abbreviated as $\var{thesis}$ below.
   1.210 +  In the next version, we move the forward step one level upwards.
   1.211 +  Forward-chaining from the most recent facts is indicated by the
   1.212 +  \isacommand{then} command.  Thus the proof of @{text "B \<and> A"} from
   1.213 +  @{text "A \<and> B"} actually becomes an elimination, rather than an
   1.214 +  introduction.  The resulting proof structure directly corresponds to
   1.215 +  that of the @{text conjE} rule, including the repeated goal
   1.216 +  proposition that is abbreviated as @{text ?thesis} below.
   1.217  *}
   1.218  
   1.219  lemma "A & B --> B & A"
   1.220  proof
   1.221    assume "A & B"
   1.222    then show "B & A"
   1.223 -  proof                    -- {* rule \name{conjE} of $A \conj B$ *}
   1.224 +  proof                    -- {* rule @{text conjE} of @{text "A \<and> B"} *}
   1.225      assume A B
   1.226 -    show ?thesis ..       -- {* rule \name{conjI} of $B \conj A$ *}
   1.227 +    show ?thesis ..       -- {* rule @{text conjI} of @{text "B \<and> A"} *}
   1.228    qed
   1.229  qed
   1.230  
   1.231  text {*
   1.232 - In the subsequent version we flatten the structure of the main body
   1.233 - by doing forward reasoning all the time.  Only the outermost
   1.234 - decomposition step is left as backward.
   1.235 +  In the subsequent version we flatten the structure of the main body
   1.236 +  by doing forward reasoning all the time.  Only the outermost
   1.237 +  decomposition step is left as backward.
   1.238  *}
   1.239  
   1.240  lemma "A & B --> B & A"
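Such a flattened, mostly forward version might look like this (a sketch; the fact names ab, a, b are chosen only for illustration):

  lemma "A & B --> B & A"
  proof
    assume ab: "A & B"
    from ab have a: A ..
    from ab have b: B ..
    from b a show "B & A" ..    -- {* rule conjI *}
  qed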
   1.241 @@ -221,9 +222,9 @@
   1.242  qed
   1.243  
   1.244  text {*
   1.245 - We can still push forward-reasoning a bit further, even at the risk
   1.246 - of getting ridiculous.  Note that we force the initial proof step to
   1.247 - do nothing here, by referring to the ``-'' proof method.
   1.248 +  We can still push forward reasoning a bit further, even at the risk
   1.249 +  of getting ridiculous.  Note that we force the initial proof step to
   1.250 +  do nothing here, by referring to the ``-'' proof method.
   1.251  *}
   1.252  
   1.253  lemma "A & B --> B & A"
   1.254 @@ -238,29 +239,29 @@
   1.255  qed
   1.256  
   1.257  text {*
   1.258 - \medskip With these examples we have shifted through a whole range
   1.259 - from purely backward to purely forward reasoning.  Apparently, in the
   1.260 - extreme ends we get slightly ill-structured proofs, which also
   1.261 - require much explicit naming of either rules (backward) or local
   1.262 - facts (forward).
   1.263 +  \medskip With these examples we have shifted through a whole range
   1.264 +  from purely backward to purely forward reasoning.  Apparently, at
   1.265 +  the extreme ends we get slightly ill-structured proofs, which also
   1.266 +  require much explicit naming of either rules (backward) or local
   1.267 +  facts (forward).
   1.268  
   1.269 - The general lesson learned here is that good proof style would
   1.270 - achieve just the \emph{right} balance of top-down backward
   1.271 - decomposition, and bottom-up forward composition.  In general, there
   1.272 - is no single best way to arrange some pieces of formal reasoning, of
   1.273 - course.  Depending on the actual applications, the intended audience
   1.274 - etc., rules (and methods) on the one hand vs.\ facts on the other
   1.275 - hand have to be emphasized in an appropriate way.  This requires the
   1.276 - proof writer to develop good taste, and some practice, of course.
   1.277 +  The general lesson learned here is that good proof style would
   1.278 +  achieve just the \emph{right} balance of top-down backward
   1.279 +  decomposition, and bottom-up forward composition.  In general, there
   1.280 +  is no single best way to arrange some pieces of formal reasoning, of
   1.281 +  course.  Depending on the actual applications, the intended audience
   1.282 +  etc., rules (and methods) on the one hand vs.\ facts on the other
   1.283 +  hand have to be emphasized in an appropriate way.  This requires the
   1.284 +  proof writer to develop good taste, and some practice, of course.
   1.285  *}
   1.286  
   1.287  text {*
   1.288 - For our example the most appropriate way of reasoning is probably the
   1.289 - middle one, with conjunction introduction done after elimination.
   1.290 - This reads even more concisely using \isacommand{thus}, which
   1.291 - abbreviates \isacommand{then}~\isacommand{show}.\footnote{In the same
   1.292 - vein, \isacommand{hence} abbreviates
   1.293 - \isacommand{then}~\isacommand{have}.}
   1.294 +  For our example the most appropriate way of reasoning is probably
   1.295 +  the middle one, with conjunction introduction done after
   1.296 +  elimination.  This reads even more concisely using
   1.297 +  \isacommand{thus}, which abbreviates
   1.298 +  \isacommand{then}~\isacommand{show}.\footnote{In the same vein,
   1.299 +  \isacommand{hence} abbreviates \isacommand{then}~\isacommand{have}.}
   1.300  *}
   1.301  
   1.302  lemma "A & B --> B & A"
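Using thus, the middle version from above might be condensed along these lines (sketch):

  lemma "A & B --> B & A"
  proof
    assume "A & B"
    thus "B & A"
    proof                    -- {* rule conjE *}
      assume A B
      show ?thesis ..        -- {* rule conjI *}
    qed
  qed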
   1.303 @@ -278,16 +279,16 @@
   1.304  subsection {* A few examples from ``Introduction to Isabelle'' *}
   1.305  
   1.306  text {*
   1.307 - We rephrase some of the basic reasoning examples of
   1.308 - \cite{isabelle-intro}, using HOL rather than FOL.
   1.309 +  We rephrase some of the basic reasoning examples of
   1.310 +  \cite{isabelle-intro}, using HOL rather than FOL.
   1.311  *}
   1.312  
   1.313  subsubsection {* A propositional proof *}
   1.314  
   1.315  text {*
   1.316 - We consider the proposition $P \disj P \impl P$.  The proof below
   1.317 - involves forward-chaining from $P \disj P$, followed by an explicit
   1.318 - case-analysis on the two \emph{identical} cases.
   1.319 +  We consider the proposition @{text "P \<or> P \<longrightarrow> P"}.  The proof below
   1.320 +  involves forward-chaining from @{text "P \<or> P"}, followed by an
   1.321 +  explicit case-analysis on the two \emph{identical} cases.
   1.322  *}
   1.323  
   1.324  lemma "P | P --> P"
   1.325 @@ -295,7 +296,7 @@
   1.326    assume "P | P"
   1.327    thus P
   1.328    proof                    -- {*
   1.329 -    rule \name{disjE}: \smash{$\infer{C}{A \disj B & \infer*{C}{[A]} & \infer*{C}{[B]}}$}
   1.330 +    rule @{text disjE}: \smash{$\infer{C}{A \disj B & \infer*{C}{[A]} & \infer*{C}{[B]}}$}
   1.331    *}
   1.332      assume P show P .
   1.333    next
   1.334 @@ -304,27 +305,27 @@
   1.335  qed
   1.336  
   1.337  text {*
   1.338 - Case splits are \emph{not} hardwired into the Isar language as a
   1.339 - special feature.  The \isacommand{next} command used to separate the
   1.340 - cases above is just a short form of managing block structure.
   1.341 +  Case splits are \emph{not} hardwired into the Isar language as a
   1.342 +  special feature.  The \isacommand{next} command used to separate the
   1.343 +  cases above is just a short form of managing block structure.
   1.344  
   1.345 - \medskip In general, applying proof methods may split up a goal into
   1.346 - separate ``cases'', i.e.\ new subgoals with individual local
   1.347 - assumptions.  The corresponding proof text typically mimics this by
   1.348 - establishing results in appropriate contexts, separated by blocks.
   1.349 +  \medskip In general, applying proof methods may split up a goal into
   1.350 +  separate ``cases'', i.e.\ new subgoals with individual local
   1.351 +  assumptions.  The corresponding proof text typically mimics this by
   1.352 +  establishing results in appropriate contexts, separated by blocks.
   1.353  
   1.354 - In order to avoid too much explicit parentheses, the Isar system
   1.355 - implicitly opens an additional block for any new goal, the
   1.356 - \isacommand{next} statement then closes one block level, opening a
   1.357 - new one.  The resulting behavior is what one would expect from
   1.358 - separating cases, only that it is more flexible.  E.g.\ an induction
   1.359 - base case (which does not introduce local assumptions) would
   1.360 - \emph{not} require \isacommand{next} to separate the subsequent step
   1.361 - case.
   1.362 +  In order to avoid too many explicit parentheses, the Isar system
   1.363 +  implicitly opens an additional block for any new goal; the
   1.364 +  \isacommand{next} statement then closes one block level, opening a
   1.365 +  new one.  The resulting behavior is what one would expect from
   1.366 +  separating cases, except that it is more flexible.  E.g.\ an induction
   1.367 +  base case (which does not introduce local assumptions) would
   1.368 +  \emph{not} require \isacommand{next} to separate the subsequent step
   1.369 +  case.
   1.370  
   1.371 - \medskip In our example the situation is even simpler, since the two
   1.372 - cases actually coincide.  Consequently the proof may be rephrased as
   1.373 - follows.
   1.374 +  \medskip In our example the situation is even simpler, since the two
   1.375 +  cases actually coincide.  Consequently the proof may be rephrased as
   1.376 +  follows.
   1.377  *}
   1.378  
   1.379  lemma "P | P --> P"
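The rephrased proof might read as follows (sketch); since the two cases coincide, a single block suffices, and the second case is closed by assumption when concluding:

  lemma "P | P --> P"
  proof
    assume "P | P"
    thus P
    proof                    -- {* rule disjE *}
      assume P
      show P .
    qed
  qed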
   1.380 @@ -339,10 +340,10 @@
   1.381  qed
   1.382  
   1.383  text {*
   1.384 - Again, the rather vacuous body of the proof may be collapsed.  Thus
   1.385 - the case analysis degenerates into two assumption steps, which are
   1.386 - implicitly performed when concluding the single rule step of the
   1.387 - double-dot proof as follows.
   1.388 +  Again, the rather vacuous body of the proof may be collapsed.  Thus
   1.389 +  the case analysis degenerates into two assumption steps, which are
   1.390 +  implicitly performed when concluding the single rule step of the
   1.391 +  double-dot proof as follows.
   1.392  *}
   1.393  
   1.394  lemma "P | P --> P"
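That is, the proof might collapse to something like:

  lemma "P | P --> P"
  proof
    assume "P | P"
    thus P ..
  qed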
   1.395 @@ -355,17 +356,17 @@
   1.396  subsubsection {* A quantifier proof *}
   1.397  
   1.398  text {*
   1.399 - To illustrate quantifier reasoning, let us prove $(\ex x P \ap (f \ap
   1.400 - x)) \impl (\ex x P \ap x)$.  Informally, this holds because any $a$
   1.401 - with $P \ap (f \ap a)$ may be taken as a witness for the second
   1.402 - existential statement.
   1.403 +  To illustrate quantifier reasoning, let us prove @{text "(\<exists>x. P (f
   1.404 +  x)) \<longrightarrow> (\<exists>y. P y)"}.  Informally, this holds because any @{text a}
   1.405 +  with @{text "P (f a)"} may be taken as a witness for the second
   1.406 +  existential statement.
   1.407  
   1.408 - The first proof is rather verbose, exhibiting quite a lot of
   1.409 - (redundant) detail.  It gives explicit rules, even with some
   1.410 - instantiation.  Furthermore, we encounter two new language elements:
   1.411 - the \isacommand{fix} command augments the context by some new
   1.412 - ``arbitrary, but fixed'' element; the \isacommand{is} annotation
   1.413 - binds term abbreviations by higher-order pattern matching.
   1.414 +  The first proof is rather verbose, exhibiting quite a lot of
   1.415 +  (redundant) detail.  It gives explicit rules, even with some
   1.416 +  instantiation.  Furthermore, we encounter two new language elements:
   1.417 +  the \isacommand{fix} command augments the context with some new
   1.418 +  ``arbitrary, but fixed'' element; the \isacommand{is} annotation
   1.419 +  binds term abbreviations by higher-order pattern matching.
   1.420  *}
   1.421  
   1.422  lemma "(EX x. P (f x)) --> (EX y. P y)"
   1.423 @@ -382,12 +383,12 @@
   1.424  qed
   1.425  
   1.426  text {*
   1.427 - While explicit rule instantiation may occasionally improve
   1.428 - readability of certain aspects of reasoning, it is usually quite
   1.429 - redundant.  Above, the basic proof outline gives already enough
   1.430 - structural clues for the system to infer both the rules and their
   1.431 - instances (by higher-order unification).  Thus we may as well prune
   1.432 - the text as follows.
   1.433 +  While explicit rule instantiation may occasionally improve
   1.434 +  readability of certain aspects of reasoning, it is usually quite
   1.435 +  redundant.  Above, the basic proof outline already gives enough
   1.436 +  structural clues for the system to infer both the rules and their
   1.437 +  instances (by higher-order unification).  Thus we may as well prune
   1.438 +  the text as follows.
   1.439  *}
   1.440  
   1.441  lemma "(EX x. P (f x)) --> (EX y. P y)"
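The pruned text might read roughly as follows (sketch; the fixed variable a is arbitrary):

  lemma "(EX x. P (f x)) --> (EX y. P y)"
  proof
    assume "EX x. P (f x)"
    thus "EX y. P y"
    proof                    -- {* rule exE *}
      fix a
      assume "P (f a)"
      show ?thesis ..        -- {* rule exI *}
    qed
  qed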
   1.442 @@ -402,10 +403,10 @@
   1.443  qed
   1.444  
   1.445  text {*
   1.446 - Explicit $\exists$-elimination as seen above can become quite
   1.447 - cumbersome in practice.  The derived Isar language element
   1.448 - ``\isakeyword{obtain}'' provides a more handsome way to do
   1.449 - generalized existence reasoning.
   1.450 +  Explicit @{text \<exists>}-elimination as seen above can become quite
   1.451 +  cumbersome in practice.  The derived Isar language element
   1.452 +  ``\isakeyword{obtain}'' provides a more handsome way to do
   1.453 +  generalized existence reasoning.
   1.454  *}
   1.455  
   1.456  lemma "(EX x. P (f x)) --> (EX y. P y)"
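With obtain, the proof might be condensed along these lines (sketch):

  lemma "(EX x. P (f x)) --> (EX y. P y)"
  proof
    assume "EX x. P (f x)"
    then obtain a where "P (f a)" ..    -- {* via rule exE *}
    thus "EX y. P y" ..                 -- {* via rule exI *}
  qed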
   1.457 @@ -416,13 +417,13 @@
   1.458  qed
   1.459  
   1.460  text {*
   1.461 - Technically, \isakeyword{obtain} is similar to \isakeyword{fix} and
   1.462 - \isakeyword{assume} together with a soundness proof of the
   1.463 - elimination involved.  Thus it behaves similar to any other forward
   1.464 - proof element.  Also note that due to the nature of general existence
   1.465 - reasoning involved here, any result exported from the context of an
   1.466 - \isakeyword{obtain} statement may \emph{not} refer to the parameters
   1.467 - introduced there.
   1.468 +  Technically, \isakeyword{obtain} is similar to \isakeyword{fix} and
   1.469 +  \isakeyword{assume} together with a soundness proof of the
   1.470 +  elimination involved.  Thus it behaves similarly to any other forward
   1.471 +  proof element.  Also note that due to the nature of general
   1.472 +  existence reasoning involved here, any result exported from the
   1.473 +  context of an \isakeyword{obtain} statement may \emph{not} refer to
   1.474 +  the parameters introduced there.
   1.475  *}
   1.476  
   1.477  
   1.478 @@ -430,10 +431,10 @@
   1.479  subsubsection {* Deriving rules in Isabelle *}
   1.480  
   1.481  text {*
   1.482 - We derive the conjunction elimination rule from the corresponding
   1.483 - projections.  The proof is quite straight-forward, since
   1.484 - Isabelle/Isar supports non-atomic goals and assumptions fully
   1.485 - transparently.
   1.486 +  We derive the conjunction elimination rule from the corresponding
   1.488 +  projections.  The proof is quite straightforward, since
   1.488 +  Isabelle/Isar supports non-atomic goals and assumptions fully
   1.489 +  transparently.
   1.490  *}
   1.491  
   1.492  theorem conjE: "A & B ==> (A ==> B ==> C) ==> C"
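A straightforward derivation might look like this (sketch; the label r is chosen only for illustration):

  theorem conjE: "A & B ==> (A ==> B ==> C) ==> C"
  proof -
    assume "A & B"
    assume r: "A ==> B ==> C"
    show C
    proof (rule r)           -- {* back-chain through the local rule r *}
      show A by (rule conjunct1)
      show B by (rule conjunct2)
    qed
  qed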
   1.493 @@ -448,12 +449,12 @@
   1.494  qed
   1.495  
   1.496  text {*
   1.497 - Note that classic Isabelle handles higher rules in a slightly
   1.498 - different way.  The tactic script as given in \cite{isabelle-intro}
   1.499 - for the same example of \name{conjE} depends on the primitive
   1.500 - \texttt{goal} command to decompose the rule into premises and
   1.501 - conclusion.  The actual result would then emerge by discharging of
   1.502 - the context at \texttt{qed} time.
   1.503 +  Note that classic Isabelle handles higher rules in a slightly
   1.504 +  different way.  The tactic script as given in \cite{isabelle-intro}
   1.505 +  for the same example of @{text conjE} depends on the primitive
   1.506 +  \texttt{goal} command to decompose the rule into premises and
   1.507 +  conclusion.  The actual result would then emerge by discharging
   1.508 +  the context at \texttt{qed} time.
   1.509  *}
   1.510  
   1.511  end