src/HOL/Isar_examples/BasicLogic.thy
author wenzelm
Sun May 21 14:49:28 2000 +0200 (2000-05-21)
changeset 8902 a705822f4e2a
parent 7982 d534b897ce39
child 9477 9506127f6fbb
permissions -rw-r--r--
replaced {{ }} by { };
(*  Title:      HOL/Isar_examples/BasicLogic.thy
    ID:         $Id$
    Author:     Markus Wenzel, TU Muenchen

Basic propositional and quantifier reasoning.
*)

header {* Basic logical reasoning *};

theory BasicLogic = Main:;


subsection {* Pure backward reasoning *};

text {*
 In order to get a first idea of what Isabelle/Isar proof documents
 may look like, we consider the propositions $I$, $K$, and $S$.  The
 following (rather explicit) proofs should require little extra
 explanation.
*};

lemma I: "A --> A";
proof;
  assume A;
  show A; by assumption;
qed;

lemma K: "A --> B --> A";
proof;
  assume A;
  show "B --> A";
  proof;
    show A; by assumption;
  qed;
qed;

lemma S: "(A --> B --> C) --> (A --> B) --> A --> C";
proof;
  assume "A --> B --> C";
  show "(A --> B) --> A --> C";
  proof;
    assume "A --> B";
    show "A --> C";
    proof;
      assume A;
      show C;
      proof (rule mp);
        show "B --> C"; by (rule mp);
        show B; by (rule mp);
      qed;
    qed;
  qed;
qed;

text {*
 Isar provides several ways to fine-tune the reasoning, avoiding
 excessive detail.  Several abbreviated language elements are
 available, enabling the writer to express proofs in a more concise
 way, even without referring to any automated proof tools yet.

 First of all, proof by assumption may be abbreviated as a single dot.
*};

lemma "A --> A";
proof;
  assume A;
  show A; .;
qed;

text {*
 In fact, concluding any (sub-)proof already involves solving any
 remaining goals by assumption\footnote{This is not a completely
 trivial operation, as proof by assumption may involve full
 higher-order unification.}.  Thus we may skip the rather vacuous body
 of the above proof as well.
*};

lemma "A --> A";
proof;
qed;

text {*
 Note that the \isacommand{proof} command refers to the $\idt{rule}$
 method (without arguments) by default.  Thus it implicitly applies a
 single rule, as determined from the syntactic form of the statements
 involved.  The \isacommand{by} command abbreviates any proof with
 empty body, so the proof may be further pruned.
*};

lemma "A --> A";
  by rule;

text {*
 Proof by a single rule may be abbreviated as double-dot.
*};

lemma "A --> A"; ..;

text {*
 Thus we have arrived at an adequate representation of the proof of a
 tautology that holds by a single standard rule.\footnote{Apparently,
 the rule here is implication introduction.}
*};

text {*
 Let us also reconsider $K$.  Its statement is composed of iterated
 connectives.  Basic decomposition is by a single rule at a time,
 which is why our first version above required nesting two proofs.

 The $\idt{intro}$ proof method repeatedly decomposes a goal's
 conclusion.\footnote{The dual method is $\idt{elim}$, acting on a
 goal's premises.}
*};

lemma "A --> B --> A";
proof intro;
  assume A;
  show A; .;
qed;

text {*
 Again, the body may be collapsed.
*};

lemma "A --> B --> A";
  by intro;

text {*
 Just like $\idt{rule}$, the $\idt{intro}$ and $\idt{elim}$ proof
 methods pick standard structural rules, in case no explicit arguments
 are given.  While implicit rules are usually just fine for single
 rule application, this may go too far with iteration.  Thus in
 practice, $\idt{intro}$ and $\idt{elim}$ would typically be
 restricted to certain structures by giving a few rules only, e.g.\
 \isacommand{proof}~($\idt{intro}$~\name{impI}~\name{allI}) to strip
 implications and universal quantifiers.

 Such well-tuned iterated decomposition of certain structures is the
 prime application of $\idt{intro}$ and $\idt{elim}$.  In contrast,
 terminal steps that solve a goal completely are usually performed by
 actual automated proof methods (such as
 \isacommand{by}~$\idt{blast}$).
*};
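
text {*
 As a small illustration of the point just made (a hypothetical
 example, not part of the original text), restricting $\idt{intro}$
 to \name{impI} and \name{allI} strips an implication nested under a
 universal quantifier in a single initial step.
*};

lemma "A --> (ALL x. B --> A)";
proof (intro impI allI);
  fix x;
  assume A;
  show A; .;
qed;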


subsection {* Variations of backward vs.\ forward reasoning *};

text {*
 Certainly, any proof may be performed in backward-style only.  On the
 other hand, small steps of reasoning are often more naturally
 expressed in forward-style.  Isar supports both backward and forward
 reasoning as a first-class concept.  In order to demonstrate the
 difference, we consider several proofs of $A \conj B \impl B \conj
 A$.

 The first version is purely backward.
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  show "B & A";
  proof;
    show B; by (rule conjunct2);
    show A; by (rule conjunct1);
  qed;
qed;

text {*
 Above, the $\idt{conjunct}_{1/2}$ projection rules had to be named
 explicitly, since the goals $B$ and $A$ did not provide any
 structural clue.  This may be avoided using \isacommand{from} to
 focus on $\idt{prems}$ (i.e.\ the $A \conj B$ assumption) as the
 current facts, enabling the use of double-dot proofs.  Note that
 \isacommand{from} already does forward-chaining, involving the
 \name{conjE} rule here.
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  show "B & A";
  proof;
    from prems; show B; ..;
    from prems; show A; ..;
  qed;
qed;

text {*
 In the next version, we move the forward step one level upwards.
 Forward-chaining from the most recent facts is indicated by the
 \isacommand{then} command.  Thus the proof of $B \conj A$ from $A
 \conj B$ actually becomes an elimination, rather than an
 introduction.  The resulting proof structure directly corresponds to
 that of the $\name{conjE}$ rule, including the repeated goal
 proposition that is abbreviated as $\var{thesis}$ below.
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  then; show "B & A";
  proof                    -- {* rule \name{conjE} of $A \conj B$ *};
    assume A B;
    show ?thesis; ..       -- {* rule \name{conjI} of $B \conj A$ *};
  qed;
qed;

text {*
 In the subsequent version we flatten the structure of the main body
 by doing forward reasoning all the time.  Only the outermost
 decomposition step is left as backward.
*};

lemma "A & B --> B & A";
proof;
  assume ab: "A & B";
  from ab; have a: A; ..;
  from ab; have b: B; ..;
  from b a; show "B & A"; ..;
qed;

text {*
 We can still push forward reasoning a bit further, even at the risk
 of getting ridiculous.  Note that we force the initial proof step to
 do nothing here, by referring to the ``-'' proof method.
*};

lemma "A & B --> B & A";
proof -;
  {;
    assume ab: "A & B";
    from ab; have a: A; ..;
    from ab; have b: B; ..;
    from b a; have "B & A"; ..;
  };
  thus ?thesis; ..         -- {* rule \name{impI} *};
qed;

text {*
 \medskip With these examples we have shifted through a whole range
 from purely backward to purely forward reasoning.  Apparently, at the
 extreme ends we get slightly ill-structured proofs, which also
 require much explicit naming of either rules (backward) or local
 facts (forward).

 The general lesson learned here is that good proof style would
 achieve just the \emph{right} balance of top-down backward
 decomposition, and bottom-up forward composition.  In general, there
 is no single best way to arrange some pieces of formal reasoning, of
 course.  Depending on the actual applications, the intended audience
 etc., rules (and methods) on the one hand vs.\ facts on the other
 hand have to be emphasized in an appropriate way.  This requires the
 proof writer to develop good taste, and some practice, of course.
*};

text {*
 For our example the most appropriate way of reasoning is probably the
 middle one, with conjunction introduction done after elimination.
 This reads even more concisely using \isacommand{thus}, which
 abbreviates \isacommand{then}~\isacommand{show}.\footnote{In the same
 vein, \isacommand{hence} abbreviates
 \isacommand{then}~\isacommand{have}.}
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  thus "B & A";
  proof;
    assume A B;
    show ?thesis; ..;
  qed;
qed;

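text {*
 As a side remark, the \isacommand{hence} abbreviation mentioned in
 the footnote above fits the flattened style as well.  The following
 hypothetical variant (not from the original text) replaces the first
 \isacommand{from}~\isacommand{have} step by \isacommand{hence},
 chaining directly from the assumption just made.
*};

lemma "A & B --> B & A";
proof;
  assume ab: "A & B";
  hence b: B; ..;
  from ab; have a: A; ..;
  from b a; show "B & A"; ..;
qed;
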


subsection {* A few examples from ``Introduction to Isabelle'' *};

text {*
 We rephrase some of the basic reasoning examples of
 \cite{isabelle-intro}, using HOL rather than FOL.
*};

subsubsection {* A propositional proof *};

text {*
 We consider the proposition $P \disj P \impl P$.  The proof below
 involves forward-chaining from $P \disj P$, followed by an explicit
 case-analysis on the two \emph{identical} cases.
*};

lemma "P | P --> P";
proof;
  assume "P | P";
  thus P;
  proof                    -- {*
    rule \name{disjE}: \smash{$\infer{C}{A \disj B & \infer*{C}{[A]} & \infer*{C}{[B]}}$}
  *};
    assume P; show P; .;
  next;
    assume P; show P; .;
  qed;
qed;

text {*
 Case splits are \emph{not} hardwired into the Isar language as a
 special feature.  The \isacommand{next} command used to separate the
 cases above is just a short form of managing block structure.

 \medskip In general, applying proof methods may split up a goal into
 separate ``cases'', i.e.\ new subgoals with individual local
 assumptions.  The corresponding proof text typically mimics this by
 establishing results in appropriate contexts, separated by blocks.

 In order to avoid too many explicit parentheses, the Isar system
 implicitly opens an additional block for any new goal, the
 \isacommand{next} statement then closes one block level, opening a
 new one.  The resulting behavior is what one would expect from
 separating cases, only that it is more flexible.  E.g.\ an induction
 base case (which does not introduce local assumptions) would
 \emph{not} require \isacommand{next} to separate the subsequent step
 case.

 \medskip In our example the situation is even simpler, since the two
 cases actually coincide.  Consequently the proof may be rephrased as
 follows.
*};

lemma "P | P --> P";
proof;
  assume "P | P";
  thus P;
  proof;
    assume P;
    show P; .;
    show P; .;
  qed;
qed;

text {*
 Again, the rather vacuous body of the proof may be collapsed.  Thus
 the case analysis degenerates into two assumption steps, which are
 implicitly performed when concluding the single rule step of the
 double-dot proof as follows.
*};

lemma "P | P --> P";
proof;
  assume "P | P";
  thus P; ..;
qed;


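text {*
 As an aside, the earlier remark about induction base cases may be
 illustrated by a small hypothetical example (not from the original
 text): the base case introduces no local assumptions, so no
 \isacommand{next} is needed before the step case.
*};

lemma "n + 0 = (n::nat)";
proof (induct n);
  show "0 + 0 = (0::nat)"; by simp;
  fix m;
  assume "m + 0 = (m::nat)";
  thus "Suc m + 0 = Suc m"; by simp;
qed;
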
subsubsection {* A quantifier proof *};

text {*
 To illustrate quantifier reasoning, let us prove $(\ex x P \ap (f \ap
 x)) \impl (\ex x P \ap x)$.  Informally, this holds because any $a$
 with $P \ap (f \ap a)$ may be taken as a witness for the second
 existential statement.

 The first proof is rather verbose, exhibiting quite a lot of
 (redundant) detail.  It gives explicit rules, even with some
 instantiation.  Furthermore, we encounter two new language elements:
 the \isacommand{fix} command augments the context by some new
 ``arbitrary, but fixed'' element; the \isacommand{is} annotation
 binds term abbreviations by higher-order pattern matching.
*};

lemma "(EX x. P (f x)) --> (EX x. P x)";
proof;
  assume "EX x. P (f x)";
  thus "EX x. P x";
  proof (rule exE)             -- {*
    rule \name{exE}: \smash{$\infer{B}{\ex x A(x) & \infer*{B}{[A(x)]_x}}$}
  *};
    fix a;
    assume "P (f a)" (is "P ?witness");
    show ?thesis; by (rule exI [of P ?witness]);
  qed;
qed;

text {*
 While explicit rule instantiation may occasionally improve
 readability of certain aspects of reasoning, it is usually quite
 redundant.  Above, the basic proof outline already gives enough
 structural clues for the system to infer both the rules and their
 instances (by higher-order unification).  Thus we may as well prune
 the text as follows.
*};

lemma "(EX x. P (f x)) --> (EX x. P x)";
proof;
  assume "EX x. P (f x)";
  thus "EX x. P x";
  proof;
    fix a;
    assume "P (f a)";
    show ?thesis; ..;
  qed;
qed;


subsubsection {* Deriving rules in Isabelle *};

text {*
 We derive the conjunction elimination rule from the corresponding
 projections.  The proof is quite straightforward, since
 Isabelle/Isar supports non-atomic goals and assumptions fully
 transparently.
*};

theorem conjE: "A & B ==> (A ==> B ==> C) ==> C";
proof -;
  assume "A & B";
  assume r: "A ==> B ==> C";
  show C;
  proof (rule r);
    show A; by (rule conjunct1);
    show B; by (rule conjunct2);
  qed;
qed;

text {*
 Note that classic Isabelle handles higher rules in a slightly
 different way.  The tactic script as given in \cite{isabelle-intro}
 for the same example of \name{conjE} depends on the primitive
 \texttt{goal} command to decompose the rule into premises and
 conclusion.  The actual result would then emerge by discharging the
 context at \texttt{qed} time.
*};

end;