src/HOL/Isar_examples/BasicLogic.thy
author wenzelm
Thu Oct 14 01:07:24 1999 +0200 (1999-10-14)
changeset 7860 7819547df4d8
parent 7833 f5288e4b95d1
child 7874 180364256231
permissions -rw-r--r--
improved presentation;
     1 (*  Title:      HOL/Isar_examples/BasicLogic.thy
     2     ID:         $Id$
     3     Author:     Markus Wenzel, TU Muenchen
     4 
     5 Basic propositional and quantifier reasoning.
     6 *)
     7 
     8 header {* Basic reasoning *};
     9 
    10 theory BasicLogic = Main:;
    11 
    12 
    13 subsection {* Pure backward reasoning *};
    14 
    15 text {*
    16  In order to get a first idea of what Isabelle/Isar proof documents
    17  may look like, we consider the propositions $I$, $K$, and $S$.  The
    18  following (rather explicit) proofs should require little extra
    19  explanation.
    20 *};
    21 
    22 lemma I: "A --> A";
    23 proof;
    24   assume A;
    25   show A; by assumption;
    26 qed;
    27 
    28 lemma K: "A --> B --> A";
    29 proof;
    30   assume A;
    31   show "B --> A";
    32   proof;
    33     show A; by assumption;
    34   qed;
    35 qed;
    36 
    37 lemma S: "(A --> B --> C) --> (A --> B) --> A --> C";
    38 proof;
    39   assume "A --> B --> C";
    40   show "(A --> B) --> A --> C";
    41   proof;
    42     assume "A --> B";
    43     show "A --> C";
    44     proof;
    45       assume A;
    46       show C;
    47       proof (rule mp);
    48         show "B --> C"; by (rule mp);
    49         show B; by (rule mp);
    50       qed;
    51     qed;
    52   qed;
    53 qed;
    54 
    55 text {*
    56  Isar provides several ways to fine-tune the reasoning, avoiding
    57  irrelevant detail.  Various abbreviated language elements are
    58  available, enabling the writer to express proofs in a more concise
    59  way, even without referring to any automated proof tools yet.
    60 
    61  First of all, proof by assumption may be abbreviated as a single dot.
    62 *};
    63 
    64 lemma "A --> A";
    65 proof;
    66   assume A;
    67   show A; .;
    68 qed;
    69 
    70 text {*
    71  In fact, concluding any (sub-)proof already involves solving any
    72  remaining goals by assumption\footnote{This is not a completely
    73  trivial operation.  As usual in Isabelle, proof by assumption
    74  involves full higher-order unification.}.  Thus we may skip the
    75  rather vacuous body of the above proof as well.
    76 *};
    77 
    78 lemma "A --> A";
    79 proof;
    80 qed;
    81 
    82 text {*
    83  Note that the \isacommand{proof} command refers to the $\idt{rule}$
    84  method (without arguments) by default.  Thus it implicitly applies a
    85  single rule, as determined from the syntactic form of the statements
    86  involved.  The \isacommand{by} command abbreviates any proof with
    87  an empty body, so the proof may be further pruned.
    88 *};
    89 
    90 lemma "A --> A";
    91   by rule;
    92 
    93 text {*
    94  Proof by a single rule may be abbreviated as a double dot.
    95 *};
    96 
    97 lemma "A --> A"; ..;
    98 
    99 text {*
   100  Thus we have arrived at an adequate representation of the proof of a
   101  tautology that holds by a single standard rule.\footnote{Apparently,
   102  the rule is implication introduction, i.e.\ \name{impI}.}
   103 *};
   104 
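text {*
 For the record, the rule may also be given explicitly; this is
 merely a more verbose spelling of the double-dot proof above.
*};

lemma "A --> A";
  by (rule impI);
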
   105 text {*
   106  Let us also reconsider $K$.  Its statement is composed of iterated
   107  connectives.  Basic decomposition is by a single rule at a time,
   108  which is why our first version above required two nested proofs.
   109 
   110  The $\idt{intro}$ proof method repeatedly decomposes a goal's
   111  conclusion.\footnote{The dual method is $\idt{elim}$, acting on a
   112  goal's premises.}
   113 *};
   114 
   115 lemma "A --> B --> A";
   116 proof intro;
   117   assume A;
   118   show A; .;
   119 qed;
   120 
   121 text {*
   122  Again, the body may be collapsed.
   123 *};
   124 
   125 lemma "A --> B --> A";
   126   by intro;
   127 
   128 text {*
   129  Just like $\idt{rule}$, the $\idt{intro}$ and $\idt{elim}$ proof
   130  methods pick standard structural rules if no explicit arguments
   131  are given.  While implicit rules are usually just fine for a single
   132  rule application, iterated application may go too far.  Thus in
   133  practice, $\idt{intro}$ and $\idt{elim}$ would typically be
   134  restricted to certain structures by giving a few rules only, e.g.\
   135  \isacommand{proof}~($\idt{intro}$~\name{impI}~\name{allI}) to strip
   136  implications and universal quantifiers.
   137 
   138  Such well-tuned iterated decomposition of certain structure is the
   139  prime application of $\idt{intro}$ and $\idt{elim}$.  In general,
   140  terminal steps that solve a goal completely are typically performed
   141  by actual automated proof methods (such as
   142  \isacommand{by}~$\idt{blast}$).
   143 *};
   144 
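text {*
 For example, a (contrived) statement involving both universal
 quantifiers and implications may be decomposed in a single step by
 \isacommand{proof}~($\idt{intro}$~\name{impI}~\name{allI}); the
 remaining trivial goal is then established as before.  Alternatively,
 an automated method such as $\idt{blast}$ should dispose of the whole
 statement at once.
*};

lemma "ALL x. P x --> Q --> P x";
proof (intro impI allI);
  fix x;
  assume "P x";
  show "P x"; .;
qed;

lemma "ALL x. P x --> Q --> P x";
  by blast;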
   145 
   146 subsection {* Variations of backward vs.\ forward reasoning *};
   147 
   148 text {*
   149  Certainly, any proof may be performed in backward style only.  On the
   150  other hand, small steps of reasoning are often more naturally
   151  expressed in forward style.  Isar supports both backward and forward
   152  reasoning as a first-class concept.  In order to demonstrate the
   153  difference, we consider several proofs of $A \conj B \impl B \conj
   154  A$.
   155 
   156  The first version is purely backward.
   157 *};
   158 
   159 lemma "A & B --> B & A";
   160 proof;
   161   assume "A & B";
   162   show "B & A";
   163   proof;
   164     show B; by (rule conjunct2);
   165     show A; by (rule conjunct1);
   166   qed;
   167 qed;
   168 
   169 text {*
   170  Above, the $\idt{conjunct}_{1/2}$ projection rules had to be named
   171  explicitly, since the goals did not provide any structural clue.
   172  This may be avoided by using \isacommand{from} to focus on $\idt{prems}$
   173  (i.e.\ the $A \conj B$ assumption) as the current facts, enabling the
   174  use of double-dot proofs.  Note that \isacommand{from} already
   175  does forward-chaining, involving the \name{conjE} rule.
   176 *};
   177 
   178 lemma "A & B --> B & A";
   179 proof;
   180   assume "A & B";
   181   show "B & A";
   182   proof;
   183     from prems; show B; ..;
   184     from prems; show A; ..;
   185   qed;
   186 qed;
   187 
   188 text {*
   189  In the next version, we move the forward step one level upwards.
   190  Forward-chaining from the most recent facts is indicated by the
   191  \isacommand{then} command.  Thus the proof of $B \conj A$ from $A
   192  \conj B$ actually becomes an elimination, rather than an
   193  introduction.  The resulting proof structure directly corresponds to
   194  that of the $\name{conjE}$ rule, including the repeated goal
   195  proposition that is abbreviated as $\var{thesis}$ below.
   196 *};
   197 
   198 lemma "A & B --> B & A";
   199 proof;
   200   assume "A & B";
   201   then; show "B & A";
   202   proof                    -- {* rule \name{conjE} of $A \conj B$ *};
   203     assume A B;
   204     show ?thesis; ..       -- {* rule \name{conjI} of $B \conj A$ *};
   205   qed;
   206 qed;
   207 
   208 text {*
   209  Subsequently, only the outermost decomposition step is left backward;
   210  all the rest is forward.
   211 *};
   212 
   213 lemma "A & B --> B & A";
   214 proof;
   215   assume ab: "A & B";
   216   from ab; have a: A; ..;
   217   from ab; have b: B; ..;
   218   from b a; show "B & A"; ..;
   219 qed;
   220 
   221 text {*
   222  We can still push forward reasoning a bit further, even at the risk
   223  of becoming ridiculous.  Note that we force the initial proof step to
   224  do nothing here, by using the ``-'' proof method.
   225 *};
   226 
   227 lemma "A & B --> B & A";
   228 proof -;
   229   {{;
   230     assume ab: "A & B";
   231     from ab; have a: A; ..;
   232     from ab; have b: B; ..;
   233     from b a; have "B & A"; ..;
   234   }};
   235   thus ?thesis; ..         -- {* rule \name{impI} *};
   236 qed;
   237 
   238 text {*
   239  \medskip With these examples we have moved through a whole range
   240  from purely backward to purely forward reasoning.  Apparently, at the
   241  extreme ends we get slightly ill-structured proofs, which also
   242  require much explicit naming of either rules (backward) or local
   243  facts (forward).
   244 
   245  The general lesson learned here is that good proof style would
   246  achieve just the \emph{right} balance of top-down backward
   247  decomposition and bottom-up forward composition.  In practice, there
   248  is no single best way to arrange some pieces of formal reasoning, of
   249  course.  Depending on the actual applications, the intended audience
   250  etc., certain aspects such as rules~/~methods vs.\ facts have to be
   251  emphasized in an appropriate way.  This requires the proof writer to
   252  develop good taste, and some practice, of course.
   253 *};
   254 
   255 text {*
   256  For our example the most appropriate way of reasoning is probably the
   257  middle one, with conjunction introduction done after elimination.
   258  This reads even more concisely using \isacommand{thus}, which
   259  abbreviates \isacommand{then}~\isacommand{show}.\footnote{In the same
   260  vein, \isacommand{hence} abbreviates
   261  \isacommand{then}~\isacommand{have}.}
   262 *};
   263 
   264 lemma "A & B --> B & A";
   265 proof;
   266   assume "A & B";
   267   thus "B & A";
   268   proof;
   269     assume A B;
   270     show ?thesis; ..;
   271   qed;
   272 qed;
   273 
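text {*
 In the same vein, the first forward step of the mostly-forward
 version above might be written using \isacommand{hence}; this only
 changes the notation, not the reasoning.
*};

lemma "A & B --> B & A";
proof;
  assume ab: "A & B";
  hence b: B; ..;
  from ab; have a: A; ..;
  from b a; show "B & A"; ..;
qed;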
   274 
   275 
   276 subsection {* A few examples from ``Introduction to Isabelle'' *};
   277 
   278 text {*
   279  We rephrase some of the basic reasoning examples of
   280  \cite{isabelle-intro} (using HOL rather than FOL).
   281 *};
   282 
   283 subsubsection {* A propositional proof *};
   284 
   285 text {*
   286  We consider the proposition $P \disj P \impl P$.  The proof below
   287  involves forward-chaining from $P \disj P$, followed by an explicit
   288  case analysis on the two \emph{identical} cases.
   289 *};
   290 
   291 lemma "P | P --> P";
   292 proof;
   293   assume "P | P";
   294   thus P;
   295   proof                    -- {*
   296     rule \name{disjE}: \smash{$\infer{C}{A \disj B & \infer*{C}{[A]} & \infer*{C}{[B]}}$}
   297   *};
   298     assume P; show P; .;
   299   next;
   300     assume P; show P; .;
   301   qed;
   302 qed;
   303 
   304 text {*
   305  Case splits are \emph{not} hardwired into the Isar language as a
   306  special feature.  The \isacommand{next} command used to separate the
   307  cases above is just a short form of managing block structure.
   308 
   309  \medskip In general, applying proof methods may split up a goal into
   310  separate ``cases'', i.e.\ new subgoals with individual local
   311  assumptions.  The corresponding proof text typically mimics this by
   312  establishing results in appropriate contexts, separated by blocks.
   313 
   314  In order to avoid too many explicit parentheses, the Isar system
   315  implicitly opens an additional block for any new goal; the
   316  \isacommand{next} statement then closes one block level, opening a
   317  new one.  The resulting behavior is what one might expect from
   318  separating cases, except that it is more flexible.  E.g.\ an induction
   319  base case (which does not introduce local assumptions) would
   320  \emph{not} require \isacommand{next} to separate the subsequent step
   321  case.
   322 
   323  \medskip In our example the situation is even simpler, since the two
   324  ``cases'' actually coincide.  Consequently the proof may be rephrased
   325  as follows.
   326 *};
   327 
   328 lemma "P | P --> P";
   329 proof;
   330   assume "P | P";
   331   thus P;
   332   proof;
   333     assume P;
   334     show P; .;
   335     show P; .;
   336   qed;
   337 qed;
   338 
   339 text {*
   340  Again, the rather vacuous body of the proof may be collapsed.  Thus
   341  the case analysis degenerates into two assumption steps, which are
   342  implicitly performed when concluding the single rule step of the
   343  double-dot proof as follows.
   344 *};
   345 
   346 lemma "P | P --> P";
   347 proof;
   348   assume "P | P";
   349   thus P; ..;
   350 qed;
   351 
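text {*
 Coming back to the block structure discussed above: if the cases
 emerging from a rule do not introduce any local assumptions, then no
 \isacommand{next} is required at all.  A trivial conjunction of two
 implications may serve as an illustration of this point.
*};

lemma "(A --> A) & (B --> B)";
proof (rule conjI);
  show "A --> A"; ..;
  show "B --> B"; ..;
qed;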
   352 
   353 subsubsection {* A quantifier proof *};
   354 
   355 text {*
   356  To illustrate quantifier reasoning, let us prove $(\ex x P \ap (f \ap
   357  x)) \impl (\ex x P \ap x)$.  Informally, this holds because any $a$
   358  with $P \ap (f \ap a)$ may be taken as a witness for the second
   359  existential statement.
   360 
   361  The first proof is rather verbose, exhibiting quite a lot of
   362  (redundant) detail.  It gives explicit rules, even with some
   363  instantiation.  Furthermore, we encounter two new language elements:
   364  the \isacommand{fix} command augments the context with some new
   365  ``arbitrary, but fixed'' element; the \isacommand{is} annotation
   366  binds term abbreviations by higher-order pattern matching.
   367 *};
   368 
   369 lemma "(EX x. P (f x)) --> (EX x. P x)";
   370 proof;
   371   assume "EX x. P (f x)";
   372   thus "EX x. P x";
   373   proof (rule exE)             -- {*
   374     rule \name{exE}: \smash{$\infer{B}{\ex x A(x) & \infer*{B}{[A(x)]_x}}$}
   375   *};
   376     fix a;
   377     assume "P (f a)" (is "P ?witness");
   378     show ?thesis; by (rule exI [of P ?witness]);
   379   qed;
   380 qed;
   381 
   382 text {*
   383  While explicit rule instantiation may occasionally help to improve
   384  the readability of certain aspects of reasoning, it is usually quite
   385  redundant.  Above, the basic proof outline already gives enough
   386  structural clues for the system to infer both the rules and their
   387  instances (by higher-order unification).  Thus we may as well prune
   388  the text as follows.
   389 *};
   390 
   391 lemma "(EX x. P (f x)) --> (EX x. P x)";
   392 proof;
   393   assume "EX x. P (f x)";
   394   thus "EX x. P x";
   395   proof;
   396     fix a;
   397     assume "P (f a)";
   398     show ?thesis; ..;
   399   qed;
   400 qed;
   401 
   402 
   403 subsubsection {* Deriving rules in Isabelle *};
   404 
   405 text {*
   406  We derive the conjunction elimination rule from the projections.  The
   407  proof is quite straightforward, since Isabelle/Isar supports
   408  non-atomic goals and assumptions fully transparently.
   409 *};
   410 
   411 theorem conjE: "A & B ==> (A ==> B ==> C) ==> C";
   412 proof -;
   413   assume "A & B";
   414   assume ab_c: "A ==> B ==> C";
   415   show C;
   416   proof (rule ab_c);
   417     show A; by (rule conjunct1);
   418     show B; by (rule conjunct2);
   419   qed;
   420 qed;
   421 
   422 text {*
   423  Note that classic Isabelle handles such nested rules in a slightly
   424  different way.  The tactic script as given in \cite{isabelle-intro}
   425  for the same example of \name{conjE} depends on the primitive
   426  \texttt{goal} command to decompose the rule into premises and
   427  conclusion.  The proper result would then emerge by discharging the
   428  context at \texttt{qed} time.
   429 *};
   430 
   431 end;