src/HOL/Isar_Examples/Basic_Logic.thy
changeset 61932 2e48182cc82c
parent 61799 4cf66f21b764
child 63585 f4a308fdf664
--- a/src/HOL/Isar_Examples/Basic_Logic.thy	Sat Dec 26 15:03:41 2015 +0100
+++ b/src/HOL/Isar_Examples/Basic_Logic.thy	Sat Dec 26 15:44:14 2015 +0100
@@ -1,5 +1,5 @@
 (*  Title:      HOL/Isar_Examples/Basic_Logic.thy
-    Author:     Markus Wenzel, TU Muenchen
+    Author:     Makarius
 
 Basic propositional and quantifier reasoning.
 *)
@@ -13,9 +13,11 @@
 
 subsection \<open>Pure backward reasoning\<close>
 
-text \<open>In order to get a first idea of how Isabelle/Isar proof documents may
-  look like, we consider the propositions \<open>I\<close>, \<open>K\<close>, and \<open>S\<close>. The following
-  (rather explicit) proofs should require little extra explanations.\<close>
+text \<open>
+  In order to get a first idea of what Isabelle/Isar proof documents may look
+  like, we consider the propositions \<open>I\<close>, \<open>K\<close>, and \<open>S\<close>. The following (rather
+  explicit) proofs should require little extra explanation.
+\<close>
 
 lemma I: "A \<longrightarrow> A"
 proof
@@ -50,54 +52,52 @@
   qed
 qed
 
-text \<open>Isar provides several ways to fine-tune the reasoning, avoiding
-  excessive detail. Several abbreviated language elements are available,
-  enabling the writer to express proofs in a more concise way, even without
-  referring to any automated proof tools yet.
-
-  First of all, proof by assumption may be abbreviated as a single dot.\<close>
+text \<open>
+  Isar provides several ways to fine-tune the reasoning, avoiding excessive
+  detail. Several abbreviated language elements are available, enabling the
+  writer to express proofs in a more concise way, even without referring to
+  any automated proof tools yet.
 
-lemma "A \<longrightarrow> A"
-proof
-  assume A
-  show A by fact+
-qed
-
-text \<open>In fact, concluding any (sub-)proof already involves solving any
-  remaining goals by assumption\footnote{This is not a completely trivial
-  operation, as proof by assumption may involve full higher-order
-  unification.}. Thus we may skip the rather vacuous body of the above proof
-  as well.\<close>
+  Concluding any (sub-)proof already involves solving any remaining goals by
+  assumption\<^footnote>\<open>This is not a completely trivial operation, as proof by
+  assumption may involve full higher-order unification.\<close>. Thus we may skip the
+  rather vacuous body of the above proof.
+\<close>
 
 lemma "A \<longrightarrow> A"
 proof
 qed
 
-text \<open>Note that the \isacommand{proof} command refers to the \<open>rule\<close> method
-  (without arguments) by default. Thus it implicitly applies a single rule,
-  as determined from the syntactic form of the statements involved. The
-  \isacommand{by} command abbreviates any proof with empty body, so the
-  proof may be further pruned.\<close>
+text \<open>
+  Note that the \<^theory_text>\<open>proof\<close> command refers to the \<open>rule\<close> method (without
+  arguments) by default. Thus it implicitly applies a single rule, as
+  determined from the syntactic form of the statements involved. The \<^theory_text>\<open>by\<close>
+  command abbreviates any proof with empty body, so the proof may be further
+  pruned.
+\<close>
 
 lemma "A \<longrightarrow> A"
   by rule
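
As a side remark (an illustrative sketch, not part of this changeset): the single rule that plain \<^theory_text>\<open>by rule\<close> selects here can equally be named explicitly, using the standard HOL introduction rule \<open>impI\<close>; the remaining trivial goal is again closed by assumption.

lemma "A \<longrightarrow> A"
  by (rule impI)
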
 
-text \<open>Proof by a single rule may be abbreviated as double-dot.\<close>
+text \<open>
+  Proof by a single rule may be abbreviated as double-dot.
+\<close>
 
 lemma "A \<longrightarrow> A" ..
 
-text \<open>Thus we have arrived at an adequate representation of the proof of a
-  tautology that holds by a single standard rule.\footnote{Apparently, the
-  rule here is implication introduction.}
+text \<open>
+  Thus we have arrived at an adequate representation of the proof of a
+  tautology that holds by a single standard rule.\<^footnote>\<open>Apparently, the
+  rule here is implication introduction.\<close>
 
   \<^medskip>
   Let us also reconsider \<open>K\<close>. Its statement is composed of iterated
-  connectives. Basic decomposition is by a single rule at a time, which is
-  why our first version above was by nesting two proofs.
+  connectives. Basic decomposition is by a single rule at a time, which is why
+  our first version above was by nesting two proofs.
 
-  The \<open>intro\<close> proof method repeatedly decomposes a goal's
-  conclusion.\footnote{The dual method is \<open>elim\<close>, acting on a goal's
-  premises.}\<close>
+  The \<open>intro\<close> proof method repeatedly decomposes a goal's conclusion.\<^footnote>\<open>The
+  dual method is \<open>elim\<close>, acting on a goal's premises.\<close>
+\<close>
 
 lemma "A \<longrightarrow> B \<longrightarrow> A"
 proof (intro impI)
@@ -110,29 +110,32 @@
 lemma "A \<longrightarrow> B \<longrightarrow> A"
   by (intro impI)
 
-text \<open>Just like \<open>rule\<close>, the \<open>intro\<close> and \<open>elim\<close> proof methods pick standard
+text \<open>
+  Just like \<open>rule\<close>, the \<open>intro\<close> and \<open>elim\<close> proof methods pick standard
   structural rules, in case no explicit arguments are given. While implicit
-  rules are usually just fine for single rule application, this may go too
-  far with iteration. Thus in practice, \<open>intro\<close> and \<open>elim\<close> would be
-  typically restricted to certain structures by giving a few rules only,
-  e.g.\ \isacommand{proof}~\<open>(intro impI allI)\<close> to strip implications and
-  universal quantifiers.
+  rules are usually just fine for single rule application, this may go too far
+  with iteration. Thus in practice, \<open>intro\<close> and \<open>elim\<close> would typically be
+  restricted to certain structures by giving a few rules only, e.g.\ \<^theory_text>\<open>proof
+  (intro impI allI)\<close> to strip implications and universal quantifiers.
 
   Such well-tuned iterated decomposition of certain structures is the prime
-  application of \<open>intro\<close> and \<open>elim\<close>. In contrast, terminal steps that solve
-  a goal completely are usually performed by actual automated proof methods
-  (such as \isacommand{by}~\<open>blast\<close>.\<close>
+  application of \<open>intro\<close> and \<open>elim\<close>. In contrast, terminal steps that solve a
+  goal completely are usually performed by actual automated proof methods
+  (such as \<^theory_text>\<open>by blast\<close>).
+\<close>
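
The combined form \<^theory_text>\<open>proof (intro impI allI)\<close> mentioned above is not exercised in this theory. As an illustrative sketch (not part of this changeset, with \<open>A\<close> and \<open>B\<close> arbitrary free variables), a goal mixing universal quantifiers and implications could be decomposed in one go:

lemma "\<forall>x. A \<longrightarrow> B x \<longrightarrow> A"
proof (intro impI allI)
  fix x
  assume A and "B x"
  show A by fact
qed

The terminal form \<^theory_text>\<open>by (intro impI allI)\<close> would work just as well, since concluding the proof solves the remaining goal by assumption.
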
 
 
 subsection \<open>Variations of backward vs.\ forward reasoning\<close>
 
-text \<open>Certainly, any proof may be performed in backward-style only. On the
-  other hand, small steps of reasoning are often more naturally expressed in
+text \<open>
+  Certainly, any proof may be performed in backward-style only. On the other
+  hand, small steps of reasoning are often more naturally expressed in
   forward-style. Isar supports both backward and forward reasoning as a
   first-class concept. In order to demonstrate the difference, we consider
   several proofs of \<open>A \<and> B \<longrightarrow> B \<and> A\<close>.
 
-  The first version is purely backward.\<close>
+  The first version is purely backward.
+\<close>
 
 lemma "A \<and> B \<longrightarrow> B \<and> A"
 proof
@@ -144,12 +147,13 @@
   qed
 qed
 
-text \<open>Above, the projection rules \<open>conjunct1\<close> / \<open>conjunct2\<close> had to be named
-  explicitly, since the goals \<open>B\<close> and \<open>A\<close> did not provide any structural
-  clue. This may be avoided using \isacommand{from} to focus on the \<open>A \<and> B\<close>
-  assumption as the current facts, enabling the use of double-dot proofs.
-  Note that \isacommand{from} already does forward-chaining, involving the
-  \<open>conjE\<close> rule here.\<close>
+text \<open>
+  Above, the projection rules \<open>conjunct1\<close> / \<open>conjunct2\<close> had to be named
+  explicitly, since the goals \<open>B\<close> and \<open>A\<close> did not provide any structural clue.
+  This may be avoided using \<^theory_text>\<open>from\<close> to focus on the \<open>A \<and> B\<close> assumption as the
+  current facts, enabling the use of double-dot proofs. Note that \<^theory_text>\<open>from\<close>
+  already does forward-chaining, involving the \<open>conjE\<close> rule here.
+\<close>
 
 lemma "A \<and> B \<longrightarrow> B \<and> A"
 proof
@@ -161,12 +165,14 @@
   qed
 qed
 
-text \<open>In the next version, we move the forward step one level upwards.
-  Forward-chaining from the most recent facts is indicated by the
-  \isacommand{then} command. Thus the proof of \<open>B \<and> A\<close> from \<open>A \<and> B\<close> actually
-  becomes an elimination, rather than an introduction. The resulting proof
-  structure directly corresponds to that of the \<open>conjE\<close> rule, including the
-  repeated goal proposition that is abbreviated as \<open>?thesis\<close> below.\<close>
+text \<open>
+  In the next version, we move the forward step one level upwards.
+  Forward-chaining from the most recent facts is indicated by the \<^theory_text>\<open>then\<close>
+  command. Thus the proof of \<open>B \<and> A\<close> from \<open>A \<and> B\<close> actually becomes an
+  elimination, rather than an introduction. The resulting proof structure
+  directly corresponds to that of the \<open>conjE\<close> rule, including the repeated
+  goal proposition that is abbreviated as \<open>?thesis\<close> below.
+\<close>
 
 lemma "A \<and> B \<longrightarrow> B \<and> A"
 proof
@@ -178,9 +184,11 @@
   qed
 qed
 
-text \<open>In the subsequent version we flatten the structure of the main body by
-  doing forward reasoning all the time. Only the outermost decomposition
-  step is left as backward.\<close>
+text \<open>
+  In the subsequent version we flatten the structure of the main body by doing
+  forward reasoning all the time. Only the outermost decomposition step is
+  left as backward.
+\<close>
 
 lemma "A \<and> B \<longrightarrow> B \<and> A"
 proof
@@ -190,9 +198,11 @@
   from \<open>B\<close> \<open>A\<close> show "B \<and> A" ..
 qed
 
-text \<open>We can still push forward-reasoning a bit further, even at the risk of
-  getting ridiculous. Note that we force the initial proof step to do
-  nothing here, by referring to the \<open>-\<close> proof method.\<close>
+text \<open>
+  We can still push forward-reasoning a bit further, even at the risk of
+  getting ridiculous. Note that we force the initial proof step to do nothing
+  here, by referring to the \<open>-\<close> proof method.
+\<close>
 
 lemma "A \<and> B \<longrightarrow> B \<and> A"
 proof -
@@ -208,22 +218,22 @@
 text \<open>
   \<^medskip>
   With these examples we have shifted through a whole range from purely
-  backward to purely forward reasoning. Apparently, in the extreme ends we
-  get slightly ill-structured proofs, which also require much explicit
-  naming of either rules (backward) or local facts (forward).
+  backward to purely forward reasoning. Apparently, at the extreme ends we get
+  slightly ill-structured proofs, which also require much explicit naming of
+  either rules (backward) or local facts (forward).
 
-  The general lesson learned here is that good proof style would achieve
-  just the \<^emph>\<open>right\<close> balance of top-down backward decomposition, and
-  bottom-up forward composition. In general, there is no single best way to
-  arrange some pieces of formal reasoning, of course. Depending on the
-  actual applications, the intended audience etc., rules (and methods) on
-  the one hand vs.\ facts on the other hand have to be emphasized in an
-  appropriate way. This requires the proof writer to develop good taste, and
-  some practice, of course.
+  The general lesson learned here is that good proof style would achieve just
+  the \<^emph>\<open>right\<close> balance of top-down backward decomposition and bottom-up
+  forward composition. In general, there is no single best way to arrange some
+  pieces of formal reasoning, of course. Depending on the actual applications,
+  the intended audience etc., rules (and methods) on the one hand vs.\ facts
+  on the other hand have to be emphasized in an appropriate way. This requires
+  the proof writer to develop good taste, and some practice, of course.
 
   \<^medskip>
-  For our example the most appropriate way of reasoning is probably the
-  middle one, with conjunction introduction done after elimination.\<close>
+  For our example the most appropriate way of reasoning is probably the middle
+  one, with conjunction introduction done after elimination.
+\<close>
 
 lemma "A \<and> B \<longrightarrow> B \<and> A"
 proof
@@ -239,15 +249,19 @@
 
 subsection \<open>A few examples from ``Introduction to Isabelle''\<close>
 
-text \<open>We rephrase some of the basic reasoning examples of @{cite
-  "isabelle-intro"}, using HOL rather than FOL.\<close>
+text \<open>
+  We rephrase some of the basic reasoning examples of @{cite
+  "isabelle-intro"}, using HOL rather than FOL.
+\<close>
 
 
 subsubsection \<open>A propositional proof\<close>
 
-text \<open>We consider the proposition \<open>P \<or> P \<longrightarrow> P\<close>. The proof below involves
-  forward-chaining from \<open>P \<or> P\<close>, followed by an explicit case-analysis on
-  the two \<^emph>\<open>identical\<close> cases.\<close>
+text \<open>
+  We consider the proposition \<open>P \<or> P \<longrightarrow> P\<close>. The proof below involves
+  forward-chaining from \<open>P \<or> P\<close>, followed by an explicit case-analysis on the
+  two \<^emph>\<open>identical\<close> cases.
+\<close>
 
 lemma "P \<or> P \<longrightarrow> P"
 proof
@@ -260,9 +274,10 @@
   qed
 qed
 
-text \<open>Case splits are \<^emph>\<open>not\<close> hardwired into the Isar language as a
-  special feature. The \isacommand{next} command used to separate the cases
-  above is just a short form of managing block structure.
+text \<open>
+  Case splits are \<^emph>\<open>not\<close> hardwired into the Isar language as a special
+  feature. The \<^theory_text>\<open>next\<close> command used to separate the cases above is just a
+  short form of managing block structure.
 
   \<^medskip>
   In general, applying proof methods may split up a goal into separate
@@ -270,17 +285,17 @@
   corresponding proof text typically mimics this by establishing results in
   appropriate contexts, separated by blocks.
 
-  In order to avoid too much explicit parentheses, the Isar system
-  implicitly opens an additional block for any new goal, the
-  \isacommand{next} statement then closes one block level, opening a new
-  one. The resulting behaviour is what one would expect from separating
-  cases, only that it is more flexible. E.g.\ an induction base case (which
-  does not introduce local assumptions) would \<^emph>\<open>not\<close> require
-  \isacommand{next} to separate the subsequent step case.
+  In order to avoid too many explicit parentheses, the Isar system implicitly
+  opens an additional block for any new goal; the \<^theory_text>\<open>next\<close> statement then
+  closes one block level, opening a new one. The resulting behaviour is what
+  one would expect from separating cases, except that it is more flexible. E.g.\
+  an induction base case (which does not introduce local assumptions) would
+  \<^emph>\<open>not\<close> require \<^theory_text>\<open>next\<close> to separate the subsequent step case.
 
   \<^medskip>
   In our example the situation is even simpler, since the two cases actually
-  coincide. Consequently the proof may be rephrased as follows.\<close>
+  coincide. Consequently the proof may be rephrased as follows.
+\<close>
 
 lemma "P \<or> P \<longrightarrow> P"
 proof
@@ -307,16 +322,17 @@
 
 subsubsection \<open>A quantifier proof\<close>
 
-text \<open>To illustrate quantifier reasoning, let us prove
+text \<open>
+  To illustrate quantifier reasoning, let us prove
   \<open>(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)\<close>. Informally, this holds because any \<open>a\<close> with
   \<open>P (f a)\<close> may be taken as a witness for the second existential statement.
 
   The first proof is rather verbose, exhibiting quite a lot of (redundant)
-  detail. It gives explicit rules, even with some instantiation.
-  Furthermore, we encounter two new language elements: the \isacommand{fix}
-  command augments the context by some new ``arbitrary, but fixed'' element;
-  the \isacommand{is} annotation binds term abbreviations by higher-order
-  pattern matching.\<close>
+  detail. It gives explicit rules, even with some instantiation. Furthermore,
+  we encounter two new language elements: the \<^theory_text>\<open>fix\<close> command augments the
+  context by some new ``arbitrary, but fixed'' element; the \<^theory_text>\<open>is\<close> annotation
+  binds term abbreviations by higher-order pattern matching.
+\<close>
 
 lemma "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"
 proof
@@ -329,11 +345,13 @@
   qed
 qed
 
-text \<open>While explicit rule instantiation may occasionally improve readability
-  of certain aspects of reasoning, it is usually quite redundant. Above, the
-  basic proof outline gives already enough structural clues for the system
-  to infer both the rules and their instances (by higher-order unification).
-  Thus we may as well prune the text as follows.\<close>
+text \<open>
+  While explicit rule instantiation may occasionally improve readability of
+  certain aspects of reasoning, it is usually quite redundant. Above, the
+  basic proof outline already gives enough structural clues for the system to
+  infer both the rules and their instances (by higher-order unification). Thus
+  we may as well prune the text as follows.
+\<close>
 
 lemma "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"
 proof
@@ -346,9 +364,11 @@
   qed
 qed
 
-text \<open>Explicit \<open>\<exists>\<close>-elimination as seen above can become quite cumbersome in
-  practice. The derived Isar language element ``\isakeyword{obtain}''
-  provides a more handsome way to do generalized existence reasoning.\<close>
+text \<open>
+  Explicit \<open>\<exists>\<close>-elimination as seen above can become quite cumbersome in
+  practice. The derived Isar language element ``\<^theory_text>\<open>obtain\<close>'' provides a more
+  handsome way to do generalized existence reasoning.
+\<close>
 
 lemma "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"
 proof
@@ -357,19 +377,22 @@
   then show "\<exists>y. P y" ..
 qed
 
-text \<open>Technically, \isakeyword{obtain} is similar to \isakeyword{fix} and
-  \isakeyword{assume} together with a soundness proof of the elimination
-  involved. Thus it behaves similar to any other forward proof element. Also
-  note that due to the nature of general existence reasoning involved here,
-  any result exported from the context of an \isakeyword{obtain} statement
-  may \<^emph>\<open>not\<close> refer to the parameters introduced there.\<close>
+text \<open>
+  Technically, \<^theory_text>\<open>obtain\<close> is similar to \<^theory_text>\<open>fix\<close> and \<^theory_text>\<open>assume\<close> together with a
+  soundness proof of the elimination involved. Thus it behaves similarly to any
+  other forward proof element. Also note that due to the nature of general
+  existence reasoning involved here, any result exported from the context of
+  an \<^theory_text>\<open>obtain\<close> statement may \<^emph>\<open>not\<close> refer to the parameters introduced there.
+\<close>
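
As a concluding illustration (an editorial sketch, not part of this changeset, with \<open>P\<close> and \<open>Q\<close> arbitrary predicates and using the standard HOL rule \<open>disjI1\<close>): the parameter \<open>a\<close> introduced by \<^theory_text>\<open>obtain\<close> may occur in intermediate facts, while the result that is finally exported, an existential statement, does not mention it.

lemma "(\<exists>x. P x) \<longrightarrow> (\<exists>x. P x \<or> Q x)"
proof
  assume "\<exists>x. P x"
  then obtain a where "P a" ..
  then have "P a \<or> Q a" by (rule disjI1)
  then show "\<exists>x. P x \<or> Q x" ..
qed
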
 
 
 subsubsection \<open>Deriving rules in Isabelle\<close>
 
-text \<open>We derive the conjunction elimination rule from the corresponding
+text \<open>
+  We derive the conjunction elimination rule from the corresponding
   projections. The proof is quite straight-forward, since Isabelle/Isar
-  supports non-atomic goals and assumptions fully transparently.\<close>
+  supports non-atomic goals and assumptions fully transparently.
+\<close>
 
 theorem conjE: "A \<and> B \<Longrightarrow> (A \<Longrightarrow> B \<Longrightarrow> C) \<Longrightarrow> C"
 proof -