src/HOL/Isar_examples/BasicLogic.thy
changeset 7874 by wenzelm, Fri Oct 15 16:44:37 1999 +0200 ("improved presentation")
(*  Title:      HOL/Isar_examples/BasicLogic.thy
    ID:         $Id$
    Author:     Markus Wenzel, TU Muenchen

Basic propositional and quantifier reasoning.
*)

header {* Basic reasoning *};

theory BasicLogic = Main:;


subsection {* Pure backward reasoning *};

text {*
 In order to get a first idea of what Isabelle/Isar proof documents
 may look like, we consider the propositions $I$, $K$, and $S$.  The
 following (rather explicit) proofs should require little extra
 explanation.
*};

lemma I: "A --> A";
proof;
  assume A;
  show A; by assumption;
qed;

lemma K: "A --> B --> A";
proof;
  assume A;
  show "B --> A";
  proof;
    show A; by assumption;
  qed;
qed;

lemma S: "(A --> B --> C) --> (A --> B) --> A --> C";
proof;
  assume "A --> B --> C";
  show "(A --> B) --> A --> C";
  proof;
    assume "A --> B";
    show "A --> C";
    proof;
      assume A;
      show C;
      proof (rule mp);
        show "B --> C"; by (rule mp);
        show B; by (rule mp);
      qed;
    qed;
  qed;
qed;

text {*
 Isar provides several ways to fine-tune the reasoning, avoiding
 excessive detail.  Several abbreviated language elements are
 available, enabling the writer to express proofs in a more concise
 way, even without referring to any automated proof tools yet.

 First of all, proof by assumption may be abbreviated as a single dot.
*};

lemma "A --> A";
proof;
  assume A;
  show A; .;
qed;

text {*
 In fact, concluding any (sub-)proof already involves solving any
 remaining goals by assumption\footnote{This is not a completely
 trivial operation, as proof by assumption involves full higher-order
 unification.}.  Thus we may skip the rather vacuous body of the above
 proof as well.
*};

lemma "A --> A";
proof;
qed;

text {*
 Note that the \isacommand{proof} command refers to the $\idt{rule}$
 method (without arguments) by default.  Thus it implicitly applies a
 single rule, as determined from the syntactic form of the statements
 involved.  The \isacommand{by} command abbreviates any proof with
 empty body, so the proof may be further pruned.
*};

lemma "A --> A";
  by rule;

text {*
 Proof by a single rule may be abbreviated as double-dot.
*};

lemma "A --> A"; ..;

text {*
 Thus we have arrived at an adequate representation of the proof of a
 tautology that holds by a single standard rule.\footnote{Apparently,
 the rule is implication introduction.}
*};

text {*
 Let us also reconsider $K$.  Its statement is composed of iterated
 connectives.  Basic decomposition is by a single rule at a time,
 which is why our first version above was by nesting two proofs.

 The $\idt{intro}$ proof method repeatedly decomposes a goal's
 conclusion.\footnote{The dual method is $\idt{elim}$, acting on a
 goal's premises.}
*};

lemma "A --> B --> A";
proof intro;
  assume A;
  show A; .;
qed;

text {*
 Again, the body may be collapsed.
*};

lemma "A --> B --> A";
  by intro;

text {*
 Just like $\idt{rule}$, the $\idt{intro}$ and $\idt{elim}$ proof
 methods pick standard structural rules, in case no explicit arguments
 are given.  While implicit rules are usually just fine for single
 rule application, this may go too far in iteration.  Thus in
 practice, $\idt{intro}$ and $\idt{elim}$ would typically be
 restricted to certain structures by giving a few rules only, e.g.\
 \isacommand{proof}~($\idt{intro}$~\name{impI}~\name{allI}) to strip
 implications and universal quantifiers.

 Such well-tuned iterated decomposition of certain structures is the
 prime application of $\idt{intro}$ and $\idt{elim}$.  In contrast,
 terminal steps that solve a goal completely are usually performed by
 actual automated proof methods (such as
 \isacommand{by}~$\idt{blast}$).
*};
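text {*
 For instance, a sketch of such a restricted decomposition might read
 as follows; the concrete lemma below is merely illustrative and not
 part of the original development.
*};

lemma "A --> (ALL x. B x --> A)";
proof (intro impI allI);
  fix x;
  assume A;
  show A; .;
qed;
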

subsection {* Variations of backward vs.\ forward reasoning *};

text {*
 Certainly, any proof may be performed in backward-style only.  On the
 other hand, small steps of reasoning are often more naturally
 expressed in forward-style.  Isar supports both backward and forward
 reasoning as a first-class concept.  In order to demonstrate the
 difference, we consider several proofs of $A \conj B \impl B \conj
 A$.

 The first version is purely backward.
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  show "B & A";
  proof;
    show B; by (rule conjunct2);
    show A; by (rule conjunct1);
  qed;
qed;

text {*
 Above, the $\idt{conjunct}_{1/2}$ projection rules had to be named
 explicitly, since the goals did not provide any structural clue.
 This may be avoided using \isacommand{from} to focus on $\idt{prems}$
 (i.e.\ the $A \conj B$ assumption) as the current facts, enabling the
 use of double-dot proofs.  Note that \isacommand{from} already
 does forward-chaining, involving the \name{conjE} rule.
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  show "B & A";
  proof;
    from prems; show B; ..;
    from prems; show A; ..;
  qed;
qed;

text {*
 In the next version, we move the forward step one level upwards.
 Forward-chaining from the most recent facts is indicated by the
 \isacommand{then} command.  Thus the proof of $B \conj A$ from $A
 \conj B$ actually becomes an elimination, rather than an
 introduction.  The resulting proof structure directly corresponds to
 that of the $\name{conjE}$ rule, including the repeated goal
 proposition that is abbreviated as $\var{thesis}$ below.
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  then; show "B & A";
  proof                    -- {* rule \name{conjE} of $A \conj B$ *};
    assume A B;
    show ?thesis; ..       -- {* rule \name{conjI} of $B \conj A$ *};
  qed;
qed;

text {*
 In the subsequent version we flatten the structure of the main body
 by doing forward reasoning all the time.  Only the outermost
 decomposition step is left as backward.
*};

lemma "A & B --> B & A";
proof;
  assume ab: "A & B";
  from ab; have a: A; ..;
  from ab; have b: B; ..;
  from b a; show "B & A"; ..;
qed;

text {*
 We can still push forward reasoning a bit further, even at the risk
 of getting ridiculous.  Note that we force the initial proof step to
 do nothing, by referring to the ``-'' proof method.
*};

lemma "A & B --> B & A";
proof -;
  {{;
    assume ab: "A & B";
    from ab; have a: A; ..;
    from ab; have b: B; ..;
    from b a; have "B & A"; ..;
  }};
  thus ?thesis; ..         -- {* rule \name{impI} *};
qed;

text {*
 \medskip With these examples we have shifted through a whole range
 from purely backward to purely forward reasoning.  Apparently, at the
 extreme ends we get slightly ill-structured proofs, which also
 require much explicit naming of either rules (backward) or local
 facts (forward).

 The general lesson learned here is that good proof style would
 achieve just the \emph{right} balance of top-down backward
 decomposition, and bottom-up forward composition.  In practice, there
 is no single best way to arrange some pieces of formal reasoning, of
 course.  Depending on the actual applications, the intended audience
 etc., rules (and methods) on the one hand vs.\ facts on the other
 hand have to be emphasized in an appropriate way.  This requires the
 proof writer to develop good taste, and some practice, of course.
*};

text {*
 For our example the most appropriate way of reasoning is probably the
 middle one, with conjunction introduction done after elimination.
 This reads even more concisely using \isacommand{thus}, which
 abbreviates \isacommand{then}~\isacommand{show}.\footnote{In the same
 vein, \isacommand{hence} abbreviates
 \isacommand{then}~\isacommand{have}.}
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  thus "B & A";
  proof;
    assume A B;
    show ?thesis; ..;
  qed;
qed;
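text {*
 In the same vein, the first forward step of the flattened version
 seen earlier may use \isacommand{hence}; the remaining steps are
 unchanged.  This variant is only an illustrative sketch.
*};

lemma "A & B --> B & A";
proof;
  assume ab: "A & B";
  hence b: B; ..;
  from ab; have a: A; ..;
  from b a; show "B & A"; ..;
qed;
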

subsection {* A few examples from ``Introduction to Isabelle'' *};

text {*
 We rephrase some of the basic reasoning examples of
 \cite{isabelle-intro} (using HOL rather than FOL).
*};

subsubsection {* A propositional proof *};

text {*
 We consider the proposition $P \disj P \impl P$.  The proof below
 involves forward-chaining from $P \disj P$, followed by an explicit
 case-analysis on the two \emph{identical} cases.
*};

lemma "P | P --> P";
proof;
  assume "P | P";
  thus P;
  proof                    -- {*
    rule \name{disjE}: \smash{$\infer{C}{A \disj B & \infer*{C}{[A]} & \infer*{C}{[B]}}$}
  *};
    assume P; show P; .;
  next;
    assume P; show P; .;
  qed;
qed;

text {*
 Case splits are \emph{not} hardwired into the Isar language as a
 special feature.  The \isacommand{next} command used to separate the
 cases above is just a short form of managing block structure.

 \medskip In general, applying proof methods may split up a goal into
 separate ``cases'', i.e.\ new subgoals with individual local
 assumptions.  The corresponding proof text typically mimics this by
 establishing results in appropriate contexts, separated by blocks.

 In order to avoid too many explicit parentheses, the Isar system
 implicitly opens an additional block for any new goal, the
 \isacommand{next} statement then closes one block level, opening a
 new one.  The resulting behavior is what one might expect from
 separating cases, only that it is more flexible.  E.g.\ an induction
 base case (which does not introduce local assumptions) would
 \emph{not} require \isacommand{next} to separate the subsequent step
 case.

 \medskip In our example the situation is even simpler, since the two
 cases actually coincide.  Consequently the proof may be rephrased as
 follows.
*};

lemma "P | P --> P";
proof;
  assume "P | P";
  thus P;
  proof;
    assume P;
    show P; .;
    show P; .;
  qed;
qed;

text {*
 Again, the rather vacuous body of the proof may be collapsed.  Thus
 the case analysis degenerates into two assumption steps, which are
 implicitly performed when concluding the single rule step of the
 double-dot proof as follows.
*};

lemma "P | P --> P";
proof;
  assume "P | P";
  thus P; ..;
qed;

subsubsection {* A quantifier proof *};

text {*
 To illustrate quantifier reasoning, let us prove $(\ex x P \ap (f \ap
 x)) \impl (\ex x P \ap x)$.  Informally, this holds because any $a$
 with $P \ap (f \ap a)$ may be taken as a witness for the second
 existential statement.

 The first proof is rather verbose, exhibiting quite a lot of
 (redundant) detail.  It gives explicit rules, even with some
 instantiation.  Furthermore, we encounter two new language elements:
 the \isacommand{fix} command augments the context by some new
 ``arbitrary, but fixed'' element; the \isacommand{is} annotation
 binds term abbreviations by higher-order pattern matching.
*};

lemma "(EX x. P (f x)) --> (EX x. P x)";
proof;
  assume "EX x. P (f x)";
  thus "EX x. P x";
  proof (rule exE)             -- {*
    rule \name{exE}: \smash{$\infer{B}{\ex x A(x) & \infer*{B}{[A(x)]_x}}$}
  *};
    fix a;
    assume "P (f a)" (is "P ?witness");
    show ?thesis; by (rule exI [of P ?witness]);
  qed;
qed;

text {*
 While explicit rule instantiation may occasionally help to improve
 the readability of certain aspects of reasoning, it is usually quite
 redundant.  Above, the basic proof outline already gives enough
 structural clues for the system to infer both the rules and their
 instances (by higher-order unification).  Thus we may as well prune
 the text as follows.
*};

lemma "(EX x. P (f x)) --> (EX x. P x)";
proof;
  assume "EX x. P (f x)";
  thus "EX x. P x";
  proof;
    fix a;
    assume "P (f a)";
    show ?thesis; ..;
  qed;
qed;

subsubsection {* Deriving rules in Isabelle *};

text {*
 We derive the conjunction elimination rule from the projections.  The
 proof is quite straightforward, since Isabelle/Isar supports
 non-atomic goals and assumptions fully transparently.
*};

theorem conjE: "A & B ==> (A ==> B ==> C) ==> C";
proof -;
  assume "A & B";
  assume ab_c: "A ==> B ==> C";
  show C;
  proof (rule ab_c);
    show A; by (rule conjunct1);
    show B; by (rule conjunct2);
  qed;
qed;

text {*
 Note that classic Isabelle handles higher rules in a slightly
 different way.  The tactic script as given in \cite{isabelle-intro}
 for the same example of \name{conjE} depends on the primitive
 \texttt{goal} command to decompose the rule into premises and
 conclusion.  The proper result would then emerge by discharging the
 context at \texttt{qed} time.
*};

end;