(*  Title:      HOL/Isar_Examples/Basic_Logic.thy
    Author:     Markus Wenzel, TU Muenchen

Basic propositional and quantifier reasoning.
*)

header {* Basic logical reasoning *}

theory Basic_Logic
imports Main
begin


subsection {* Pure backward reasoning *}

text {* In order to get a first idea of what Isabelle/Isar proof
  documents may look like, we consider the propositions @{text I},
  @{text K}, and @{text S}.  The following (rather explicit) proofs
  should require little extra explanation. *}

lemma I: "A --> A"
proof
  assume A
  show A by fact
qed

lemma K: "A --> B --> A"
proof
  assume A
  show "B --> A"
  proof
    show A by fact
  qed
qed

lemma S: "(A --> B --> C) --> (A --> B) --> A --> C"
proof
  assume "A --> B --> C"
  show "(A --> B) --> A --> C"
  proof
    assume "A --> B"
    show "A --> C"
    proof
      assume A
      show C
      proof (rule mp)
        show "B --> C" by (rule mp) fact+
        show B by (rule mp) fact+
      qed
    qed
  qed
qed

text {* Isar provides several ways to fine-tune the reasoning,
  avoiding excessive detail.  Several abbreviated language elements
  are available, enabling the writer to express proofs in a more
  concise way, even without referring to any automated proof tools
  yet.

  First of all, proof by assumption may be abbreviated as a single
  dot. *}

lemma "A --> A"
proof
  assume A
  show A .
qed

text {* In fact, concluding any (sub-)proof already involves solving
  any remaining goals by assumption\footnote{This is not a completely
  trivial operation, as proof by assumption may involve full
  higher-order unification.}.  Thus we may skip the rather vacuous
  body of the above proof as well. *}

lemma "A --> A"
proof
qed

text {* Note that the \isacommand{proof} command refers to the @{text
  rule} method (without arguments) by default.  Thus it implicitly
  applies a single rule, as determined from the syntactic form of the
  statements involved.  The \isacommand{by} command abbreviates any
  proof with empty body, so the proof may be further pruned. *}

lemma "A --> A"
  by rule

text {* Proof by a single rule may be abbreviated as double-dot. *}

lemma "A --> A" ..

text {* Thus we have arrived at an adequate representation of the
  proof of a tautology that holds by a single standard
  rule.\footnote{Apparently, the rule here is implication
  introduction.} *}

text {* Let us also reconsider @{text K}.  Its statement is composed
  of iterated connectives.  Basic decomposition is by a single rule at
  a time, which is why our first version above proceeded by nesting
  two proofs.

  The @{text intro} proof method repeatedly decomposes a goal's
  conclusion.\footnote{The dual method is @{text elim}, acting on a
  goal's premises.} *}

lemma "A --> B --> A"
proof (intro impI)
  assume A
  show A by fact
qed

text {* Again, the body may be collapsed. *}

lemma "A --> B --> A"
  by (intro impI)

text {* Just like @{text rule}, the @{text intro} and @{text elim}
  proof methods pick standard structural rules if no explicit
  arguments are given.  While implicit rules are usually just fine for
  single rule application, this may go too far with iteration.  Thus
  in practice, @{text intro} and @{text elim} would typically be
  restricted to certain structures by giving a few rules only, e.g.\
  \isacommand{proof}~@{text "(intro impI allI)"} to strip implications
  and universal quantifiers.

  Such well-tuned iterated decomposition of certain structures is the
  prime application of @{text intro} and @{text elim}.  In contrast,
  terminal steps that solve a goal completely are usually performed by
  actual automated proof methods (such as
  \isacommand{by}~@{text blast}). *}
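
text {* For illustration, two further small statements (about
  arbitrary @{text A}, @{text "B x"}, and @{text C}) show such
  restricted decomposition at work: @{text "(intro impI allI)"} strips
  the implications and the universal quantifier in one step, while
  @{text "(elim conjE)"} decomposes a conjunctive assumption, leaving
  a goal that is solved by assumption when the proof is concluded. *}

lemma "A --> (ALL x. B x --> A)"
proof (intro impI allI)
  fix x
  assume A
  assume "B x"
  show A by fact
qed

lemma "A & B & C --> C"
proof
  assume "A & B & C"
  then show C by (elim conjE)
qed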


subsection {* Variations of backward vs.\ forward reasoning *}

text {* Certainly, any proof may be performed in backward-style only.
  On the other hand, small steps of reasoning are often more naturally
  expressed in forward-style.  Isar supports both backward and forward
  reasoning as a first-class concept.  In order to demonstrate the
  difference, we consider several proofs of @{text "A \<and> B \<longrightarrow> B \<and> A"}.

  The first version is purely backward. *}

lemma "A & B --> B & A"
proof
  assume "A & B"
  show "B & A"
  proof
    show B by (rule conjunct2) fact
    show A by (rule conjunct1) fact
  qed
qed

text {* Above, the @{text "conjunct_1/2"} projection rules had to be
  named explicitly, since the goals @{text B} and @{text A} did not
  provide any structural clue.  This may be avoided using
  \isacommand{from} to focus on the @{text "A \<and> B"} assumption as the
  current facts, enabling the use of double-dot proofs.  Note that
  \isacommand{from} already does forward-chaining, involving the
  @{text conjE} rule here. *}

lemma "A & B --> B & A"
proof
  assume "A & B"
  show "B & A"
  proof
    from `A & B` show B ..
    from `A & B` show A ..
  qed
qed

text {* In the next version, we move the forward step one level
  upwards.  Forward-chaining from the most recent facts is indicated
  by the \isacommand{then} command.  Thus the proof of @{text "B \<and> A"}
  from @{text "A \<and> B"} actually becomes an elimination, rather than an
  introduction.  The resulting proof structure directly corresponds to
  that of the @{text conjE} rule, including the repeated goal
  proposition that is abbreviated as @{text ?thesis} below. *}

lemma "A & B --> B & A"
proof
  assume "A & B"
  then show "B & A"
  proof                    -- {* rule @{text conjE} of @{text "A \<and> B"} *}
    assume B A
    then show ?thesis ..   -- {* rule @{text conjI} of @{text "B \<and> A"} *}
  qed
qed

text {* In the subsequent version we flatten the structure of the main
  body by doing forward reasoning all the time.  Only the outermost
  decomposition step is left as backward. *}

lemma "A & B --> B & A"
proof
  assume "A & B"
  from `A & B` have A ..
  from `A & B` have B ..
  from `B` `A` show "B & A" ..
qed

text {* We can still push forward reasoning a bit further, even at the
  risk of becoming ridiculous.  Note that we force the initial proof
  step to do nothing here, by referring to the ``-'' proof method. *}

lemma "A & B --> B & A"
proof -
  {
    assume "A & B"
    from `A & B` have A ..
    from `A & B` have B ..
    from `B` `A` have "B & A" ..
  }
  then show ?thesis ..         -- {* rule @{text impI} *}
qed

text {* \medskip With these examples we have moved through the whole
  range from purely backward to purely forward reasoning.  Apparently,
  at the extreme ends we get slightly ill-structured proofs, which
  also require considerable explicit naming of either rules (backward)
  or local facts (forward).

  The general lesson learned here is that good proof style achieves
  just the \emph{right} balance of top-down backward decomposition and
  bottom-up forward composition.  In general, there is no single best
  way to arrange some pieces of formal reasoning, of course.
  Depending on the actual applications, the intended audience etc.,
  rules (and methods) on the one hand vs.\ facts on the other hand
  have to be emphasized in an appropriate way.  This requires the
  proof writer to develop good taste, and some practice, of course. *}

text {* For our example the most appropriate way of reasoning is
  probably the middle one, with conjunction introduction done after
  elimination. *}

lemma "A & B --> B & A"
proof
  assume "A & B"
  then show "B & A"
  proof
    assume B A
    then show ?thesis ..
  qed
qed


subsection {* A few examples from ``Introduction to Isabelle'' *}

text {* We rephrase some of the basic reasoning examples of
  \cite{isabelle-intro}, using HOL rather than FOL. *}


subsubsection {* A propositional proof *}

text {* We consider the proposition @{text "P \<or> P \<longrightarrow> P"}.  The proof
  below involves forward-chaining from @{text "P \<or> P"}, followed by an
  explicit case-analysis on the two \emph{identical} cases. *}

lemma "P | P --> P"
proof
  assume "P | P"
  then show P
  proof                    -- {*
    rule @{text disjE}: \smash{$\infer{C}{A \disj B & \infer*{C}{[A]} & \infer*{C}{[B]}}$}
  *}
    assume P show P by fact
  next
    assume P show P by fact
  qed
qed

text {* Case splits are \emph{not} hardwired into the Isar language as
  a special feature.  The \isacommand{next} command used to separate
  the cases above is just a short form of managing block structure.

  \medskip In general, applying proof methods may split up a goal into
  separate ``cases'', i.e.\ new subgoals with individual local
  assumptions.  The corresponding proof text typically mimics this by
  establishing results in appropriate contexts, separated by blocks.

  In order to avoid too many explicit parentheses, the Isar system
  implicitly opens an additional block for any new goal; the
  \isacommand{next} statement then closes one block level and opens a
  new one.  The resulting behavior is what one would expect from
  separating cases, only that it is more flexible.  E.g.\ an induction
  base case (which does not introduce local assumptions) would
  \emph{not} require \isacommand{next} to separate the subsequent step
  case.  A case split with two genuinely different cases is shown at
  the end of this subsubsection.

  \medskip In our example the situation is even simpler, since the two
  cases actually coincide.  Consequently the proof may be rephrased as
  follows. *}

lemma "P | P --> P"
proof
  assume "P | P"
  then show P
  proof
    assume P
    show P by fact
    show P by fact
  qed
qed

text {* Again, the rather vacuous body of the proof may be collapsed.
  Thus the case analysis degenerates into two assumption steps, which
  are implicitly performed when concluding the single rule step of the
  double-dot proof as follows. *}

lemma "P | P --> P"
proof
  assume "P | P"
  then show P ..
qed
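
text {* For the sake of illustration, here is one further example
  where the two cases genuinely differ, so that \isacommand{next} is
  needed to reset the local context between them. *}

lemma "A | B --> B | A"
proof
  assume "A | B"
  then show "B | A"
  proof
    assume A
    then show ?thesis ..
  next
    assume B
    then show ?thesis ..
  qed
qed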


subsubsection {* A quantifier proof *}

text {* To illustrate quantifier reasoning, let us prove @{text
  "(\<exists>x. P (f x)) \<longrightarrow> (\<exists>y. P y)"}.  Informally, this holds because any
  @{text a} with @{text "P (f a)"} may be taken as a witness for the
  second existential statement.

  The first proof is rather verbose, exhibiting quite a lot of
  (redundant) detail.  It gives explicit rules, even with some
  instantiation.  Furthermore, we encounter two new language elements:
  the \isacommand{fix} command augments the context with a new
  ``arbitrary, but fixed'' element; the \isakeyword{is} annotation
  binds term abbreviations by higher-order pattern matching. *}

lemma "(EX x. P (f x)) --> (EX y. P y)"
proof
  assume "EX x. P (f x)"
  then show "EX y. P y"
  proof (rule exE)             -- {*
    rule @{text exE}: \smash{$\infer{B}{\ex x A(x) & \infer*{B}{[A(x)]_x}}$}
  *}
    fix a
    assume "P (f a)" (is "P ?witness")
    then show ?thesis by (rule exI [of P ?witness])
  qed
qed

text {* While explicit rule instantiation may occasionally improve
  readability of certain aspects of reasoning, it is usually quite
  redundant.  Above, the basic proof outline already gives enough
  structural clues for the system to infer both the rules and their
  instances (by higher-order unification).  Thus we may as well prune
  the text as follows. *}

lemma "(EX x. P (f x)) --> (EX y. P y)"
proof
  assume "EX x. P (f x)"
  then show "EX y. P y"
  proof
    fix a
    assume "P (f a)"
    then show ?thesis ..
  qed
qed

text {* Explicit @{text \<exists>}-elimination as seen above can become quite
  cumbersome in practice.  The derived Isar language element
  ``\isakeyword{obtain}'' provides a more convenient way to do
  generalized existence reasoning. *}

lemma "(EX x. P (f x)) --> (EX y. P y)"
proof
  assume "EX x. P (f x)"
  then obtain a where "P (f a)" ..
  then show "EX y. P y" ..
qed

text {* Technically, \isakeyword{obtain} is similar to
  \isakeyword{fix} and \isakeyword{assume} together with a soundness
  proof of the elimination involved.  Thus it behaves similarly to any
  other forward proof element.  Also note that due to the nature of
  general existence reasoning involved here, any result exported from
  the context of an \isakeyword{obtain} statement may \emph{not} refer
  to the parameters introduced there. *}
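
text {* For instance, the following additional example (with an
  arbitrary binary predicate @{text R}) swaps two existential
  quantifiers.  The intermediate facts mention the obtained parameters
  @{text a} and @{text b}, while the exported result does not. *}

lemma "(EX x y. R x y) --> (EX y x. R x y)"
proof
  assume "EX x y. R x y"
  then obtain a where "EX y. R a y" ..
  then obtain b where "R a b" ..
  then have "EX x. R x b" ..
  then show "EX y x. R x y" ..
qed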


subsubsection {* Deriving rules in Isabelle *}

text {* We derive the conjunction elimination rule from the
  corresponding projections.  The proof is quite straightforward,
  since Isabelle/Isar supports non-atomic goals and assumptions fully
  transparently. *}

theorem conjE: "A & B ==> (A ==> B ==> C) ==> C"
proof -
  assume "A & B"
  assume r: "A ==> B ==> C"
  show C
  proof (rule r)
    show A by (rule conjunct1) fact
    show B by (rule conjunct2) fact
  qed
qed

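text {* As a brief additional illustration, the derived rule can now
  be used like any other elimination rule, e.g.\ in place of the
  implicit @{text conjE} step seen before. *}

lemma "A & B --> B & A"
proof
  assume "A & B"
  then show "B & A"
  proof (rule conjE)
    assume B A
    then show ?thesis ..
  qed
qed
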
end