(*  Title:      HOL/Isar_Examples/Basic_Logic.thy
    Author:     Markus Wenzel, TU Muenchen

Basic propositional and quantifier reasoning.
*)

header {* Basic logical reasoning *}

theory Basic_Logic
imports Main
begin


subsection {* Pure backward reasoning *}

text {*
  In order to get a first idea of what Isabelle/Isar proof documents
  may look like, we consider the propositions @{text I}, @{text K},
  and @{text S}.  The following (rather explicit) proofs should
  require little extra explanation.
*}

lemma I: "A --> A"
proof
  assume A
  show A by fact
qed

lemma K: "A --> B --> A"
proof
  assume A
  show "B --> A"
  proof
    show A by fact
  qed
qed

lemma S: "(A --> B --> C) --> (A --> B) --> A --> C"
proof
  assume "A --> B --> C"
  show "(A --> B) --> A --> C"
  proof
    assume "A --> B"
    show "A --> C"
    proof
      assume A
      show C
      proof (rule mp)
        show "B --> C" by (rule mp) fact+
        show B by (rule mp) fact+
      qed
    qed
  qed
qed

text {*
  Isar provides several ways to fine-tune the reasoning, avoiding
  excessive detail.  Several abbreviated language elements are
  available, enabling the writer to express proofs in a more concise
  way, even without referring to any automated proof tools yet.

  First of all, proof by assumption may be abbreviated as a single
  dot.
*}

lemma "A --> A"
proof
  assume A
  show A .
qed

text {*
  In fact, concluding any (sub-)proof already involves solving any
  remaining goals by assumption\footnote{This is not a completely
  trivial operation, as proof by assumption may involve full
  higher-order unification.}.  Thus we may skip the rather vacuous
  body of the above proof as well.
*}

lemma "A --> A"
proof
qed

text {*
  Note that the \isacommand{proof} command refers to the @{text rule}
  method (without arguments) by default.  Thus it implicitly applies a
  single rule, as determined from the syntactic form of the statements
  involved.  The \isacommand{by} command abbreviates any proof with
  empty body, so the proof may be further pruned.
*}

lemma "A --> A"
  by rule

text {*
  Proof by a single rule may be abbreviated as double-dot.
*}

lemma "A --> A" ..

text {*
  Thus we have arrived at an adequate representation of the proof of a
  tautology that holds by a single standard rule.\footnote{Apparently,
  the rule here is implication introduction.}
*}

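text {*
  As a small illustration, the standard rule may also be given
  explicitly; the remaining trivial goal is then solved by assumption
  when concluding.
*}

lemma "A --> A"
  by (rule impI)
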
text {*
  Let us also reconsider @{text K}.  Its statement is composed of
  iterated connectives.  Basic decomposition is by a single rule at a
  time, which is why our first version above used two nested proofs.

  The @{text intro} proof method repeatedly decomposes a goal's
  conclusion.\footnote{The dual method is @{text elim}, acting on a
  goal's premises.}
*}


lemma "A --> B --> A"
proof (intro impI)
  assume A
  show A by fact
qed

text {*
  Again, the body may be collapsed.
*}

lemma "A --> B --> A"
  by (intro impI)

text {*
  Just like @{text rule}, the @{text intro} and @{text elim} proof
  methods pick standard structural rules if no explicit arguments are
  given.  While implicit rules are usually just fine for single rule
  application, this may go too far with iteration.  Thus in practice,
  @{text intro} and @{text elim} would typically be restricted to
  certain structures by giving a few rules only, e.g.\
  \isacommand{proof}~@{text "(intro impI allI)"} to strip implications
  and universal quantifiers.

  Such well-tuned iterated decomposition of certain structures is the
  prime application of @{text intro} and @{text elim}.  In contrast,
  terminal steps that solve a goal completely are usually performed by
  actual automated proof methods (such as \isacommand{by}~@{text
  blast}).
*}

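text {*
  As a small sketch of this usage, a statement built from universal
  quantifiers and implications may be decomposed in one stroke by the
  restricted variant suggested above; the remaining trivial goal is
  again finished by assumption when concluding.
*}

lemma "ALL x. A x --> B x --> A x"
  by (intro allI impI)
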

subsection {* Variations of backward vs.\ forward reasoning *}

text {*
  Certainly, any proof may be performed in backward-style only.  On
  the other hand, small steps of reasoning are often more naturally
  expressed in forward-style.  Isar supports both backward and forward
  reasoning as a first-class concept.  In order to demonstrate the
  difference, we consider several proofs of @{text "A \<and> B \<longrightarrow> B \<and> A"}.

  The first version is purely backward.
*}

lemma "A & B --> B & A"
proof
  assume "A & B"
  show "B & A"
  proof
    show B by (rule conjunct2) fact
    show A by (rule conjunct1) fact
  qed
qed

text {*
  Above, the @{text conjunct1}/@{text conjunct2} projection rules had
  to be named explicitly, since the goals @{text B} and @{text A} did
  not provide any structural clue.  This may be avoided using
  \isacommand{from} to focus on the @{text "A \<and> B"} assumption as the
  current facts, enabling the use of double-dot proofs.  Note that
  \isacommand{from} already does forward-chaining, involving the
  \name{conjE} rule here.
*}

lemma "A & B --> B & A"
proof
  assume "A & B"
  show "B & A"
  proof
    from `A & B` show B ..
    from `A & B` show A ..
  qed
qed

text {*
  In the next version, we move the forward step one level upwards.
  Forward-chaining from the most recent facts is indicated by the
  \isacommand{then} command.  Thus the proof of @{text "B \<and> A"} from
  @{text "A \<and> B"} actually becomes an elimination, rather than an
  introduction.  The resulting proof structure directly corresponds to
  that of the @{text conjE} rule, including the repeated goal
  proposition that is abbreviated as @{text ?thesis} below.
*}

lemma "A & B --> B & A"
proof
  assume "A & B"
  then show "B & A"
  proof                    -- {* rule @{text conjE} of @{text "A \<and> B"} *}
    assume B A
    then show ?thesis ..   -- {* rule @{text conjI} of @{text "B \<and> A"} *}
  qed
qed

text {*
  In the subsequent version we flatten the structure of the main body
  by doing forward reasoning all the time.  Only the outermost
  decomposition step is left as backward.
*}

lemma "A & B --> B & A"
proof
  assume "A & B"
  from `A & B` have A ..
  from `A & B` have B ..
  from `B` `A` show "B & A" ..
qed

text {*
  We can still push forward-reasoning a bit further, even at the risk
  of getting ridiculous.  Note that we force the initial proof step to
  do nothing here, by referring to the ``-'' proof method.
*}

lemma "A & B --> B & A"
proof -
  {
    assume "A & B"
    from `A & B` have A ..
    from `A & B` have B ..
    from `B` `A` have "B & A" ..
  }
  then show ?thesis ..         -- {* rule \name{impI} *}
qed

text {*
  \medskip With these examples we have shifted through a whole range
  from purely backward to purely forward reasoning.  Apparently, at
  the extreme ends we get slightly ill-structured proofs, which also
  require much explicit naming of either rules (backward) or local
  facts (forward).

  The general lesson learned here is that good proof style would
  achieve just the \emph{right} balance of top-down backward
  decomposition, and bottom-up forward composition.  In general, there
  is no single best way to arrange some pieces of formal reasoning, of
  course.  Depending on the actual applications, the intended audience
  etc., rules (and methods) on the one hand vs.\ facts on the other
  hand have to be emphasized in an appropriate way.  This requires the
  proof writer to develop good taste, and some practice, of course.
*}

text {*
  For our example the most appropriate way of reasoning is probably
  the middle one, with conjunction introduction done after
  elimination.
*}

lemma "A & B --> B & A"
proof
  assume "A & B"
  then show "B & A"
  proof
    assume B A
    then show ?thesis ..
  qed
qed



subsection {* A few examples from ``Introduction to Isabelle'' *}

text {*
  We rephrase some of the basic reasoning examples of
  \cite{isabelle-intro}, using HOL rather than FOL.
*}

subsubsection {* A propositional proof *}

text {*
  We consider the proposition @{text "P \<or> P \<longrightarrow> P"}.  The proof below
  involves forward-chaining from @{text "P \<or> P"}, followed by an
  explicit case-analysis on the two \emph{identical} cases.
*}

lemma "P | P --> P"
proof
  assume "P | P"
  then show P
  proof                    -- {*
    rule @{text disjE}: \smash{$\infer{C}{A \disj B & \infer*{C}{[A]} & \infer*{C}{[B]}}$}
  *}
    assume P show P by fact
  next
    assume P show P by fact
  qed
qed

text {*
  Case splits are \emph{not} hardwired into the Isar language as a
  special feature.  The \isacommand{next} command used to separate the
  cases above is just a short form of managing block structure.

  \medskip In general, applying proof methods may split up a goal into
  separate ``cases'', i.e.\ new subgoals with individual local
  assumptions.  The corresponding proof text typically mimics this by
  establishing results in appropriate contexts, separated by blocks.

  In order to avoid too many explicit parentheses, the Isar system
  implicitly opens an additional block for any new goal; the
  \isacommand{next} statement then closes one block level, opening a
  new one.  The resulting behavior is what one would expect from
  separating cases, only that it is more flexible.  E.g.\ an induction
  base case (which does not introduce local assumptions) would
  \emph{not} require \isacommand{next} to separate the subsequent step
  case.

  \medskip In our example the situation is even simpler, since the two
  cases actually coincide.  Consequently the proof may be rephrased as
  follows.
*}

lemma "P | P --> P"
proof
  assume "P | P"
  then show P
  proof
    assume P
    show P by fact
    show P by fact
  qed
qed

text {*
  Again, the rather vacuous body of the proof may be collapsed.  Thus
  the case analysis degenerates into two assumption steps, which are
  implicitly performed when concluding the single rule step of the
  double-dot proof as follows.
*}

lemma "P | P --> P"
proof
  assume "P | P"
  then show P ..
qed


subsubsection {* A quantifier proof *}

text {*
  To illustrate quantifier reasoning, let us prove @{text "(\<exists>x. P (f
  x)) \<longrightarrow> (\<exists>y. P y)"}.  Informally, this holds because any @{text a}
  with @{text "P (f a)"} may be taken as a witness for the second
  existential statement.

  The first proof is rather verbose, exhibiting quite a lot of
  (redundant) detail.  It gives explicit rules, even with some
  instantiation.  Furthermore, we encounter two new language elements:
  the \isacommand{fix} command augments the context with some new
  ``arbitrary, but fixed'' element; the \isacommand{is} annotation
  binds term abbreviations by higher-order pattern matching.
*}

lemma "(EX x. P (f x)) --> (EX y. P y)"
proof
  assume "EX x. P (f x)"
  then show "EX y. P y"
  proof (rule exE)             -- {*
    rule \name{exE}: \smash{$\infer{B}{\ex x A(x) & \infer*{B}{[A(x)]_x}}$}
  *}
    fix a
    assume "P (f a)" (is "P ?witness")
    then show ?thesis by (rule exI [of P ?witness])
  qed
qed

text {*
  While explicit rule instantiation may occasionally improve
  readability of certain aspects of reasoning, it is usually quite
  redundant.  Above, the basic proof outline already gives enough
  structural clues for the system to infer both the rules and their
  instances (by higher-order unification).  Thus we may as well prune
  the text as follows.
*}

lemma "(EX x. P (f x)) --> (EX y. P y)"
proof
  assume "EX x. P (f x)"
  then show "EX y. P y"
  proof
    fix a
    assume "P (f a)"
    then show ?thesis ..
  qed
qed

text {*
  Explicit @{text \<exists>}-elimination as seen above can become quite
  cumbersome in practice.  The derived Isar language element
  ``\isakeyword{obtain}'' provides a more handsome way to do
  generalized existence reasoning.
*}

lemma "(EX x. P (f x)) --> (EX y. P y)"
proof
  assume "EX x. P (f x)"
  then obtain a where "P (f a)" ..
  then show "EX y. P y" ..
qed

text {*
  Technically, \isakeyword{obtain} is similar to \isakeyword{fix} and
  \isakeyword{assume} together with a soundness proof of the
  elimination involved.  Thus it behaves similarly to any other
  forward proof element.  Also note that due to the nature of general
  existence reasoning involved here, any result exported from the
  context of an \isakeyword{obtain} statement may \emph{not} refer to
  the parameters introduced there.
*}

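text {*
  As a brief sketch combining the elements seen so far,
  \isakeyword{obtain} may be chained with further forward steps just
  like \isacommand{fix} and \isacommand{assume}.
*}

lemma "(EX x. P x & Q x) --> (EX x. Q x)"
proof
  assume "EX x. P x & Q x"
  then obtain a where "P a & Q a" ..
  then have "Q a" ..
  then show "EX x. Q x" ..
qed
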


subsubsection {* Deriving rules in Isabelle *}

text {*
  We derive the conjunction elimination rule from the corresponding
  projections.  The proof is quite straightforward, since
  Isabelle/Isar supports non-atomic goals and assumptions fully
  transparently.
*}

theorem conjE: "A & B ==> (A ==> B ==> C) ==> C"
proof -
  assume "A & B"
  assume r: "A ==> B ==> C"
  show C
  proof (rule r)
    show A by (rule conjunct1) fact
    show B by (rule conjunct2) fact
  qed
qed

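text {*
  As a short sketch of how such a derived rule may be put to work, it
  can be used like any other elimination rule in a forward step; the
  remaining goal is solved by assumption when concluding.
*}

lemma "A & B --> B"
proof
  assume "A & B"
  then show B by (rule conjE)
qed
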
end