(*  Title:      HOL/Isar_examples/BasicLogic.thy
    ID:         $Id$
    Author:     Markus Wenzel, TU Muenchen

Basic propositional and quantifier reasoning.
*)

header {* Basic reasoning *};

theory BasicLogic = Main:;


subsection {* Pure backward reasoning *};

text {*
 In order to get a first idea of what Isabelle/Isar proof documents
 may look like, we consider the propositions $I$, $K$, and $S$.  The
 following (rather explicit) proofs should require little extra
 explanation.
*};

lemma I: "A --> A";
proof;
  assume A;
  show A; by assumption;
qed;

lemma K: "A --> B --> A";
proof;
  assume A;
  show "B --> A";
  proof;
    show A; by assumption;
  qed;
qed;

lemma S: "(A --> B --> C) --> (A --> B) --> A --> C";
proof;
  assume "A --> B --> C";
  show "(A --> B) --> A --> C";
  proof;
    assume "A --> B";
    show "A --> C";
    proof;
      assume A;
      show C;
      proof (rule mp);
        show "B --> C"; by (rule mp);
        show B; by (rule mp);
      qed;
    qed;
  qed;
qed;

text {*
 Isar provides several ways to fine-tune the reasoning, avoiding
 irrelevant detail.  Several abbreviated language elements are
 available, enabling the writer to express proofs in a more concise
 way, even without referring to any automated proof tools yet.

 First of all, proof by assumption may be abbreviated as a single dot.
*};

lemma "A --> A";
proof;
  assume A;
  show A; .;
qed;

text {*
 In fact, concluding any (sub-)proof already involves solving any
 remaining goals by assumption.  Thus we may skip the rather vacuous
 body of the above proof as well.
*};

lemma "A --> A";
proof;
qed;

text {*
 Note that the \isacommand{proof} command refers to the $\idt{rule}$
 method (without arguments) by default.  Thus it implicitly applies a
 single rule, as determined from the syntactic form of the statements
 involved.  The \isacommand{by} command abbreviates any proof with
 empty body, so the proof may be further pruned.
*};

lemma "A --> A";
  by rule;

text {*
 Proof by a single rule may be abbreviated as a double dot.
*};

lemma "A --> A"; ..;

text {*
 Thus we have arrived at an adequate representation of the proof of a
 tautology that holds by a single standard rule.\footnote{Here the
 rule is implication introduction.}
*};
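
text {*
 The implicit rule may also be named explicitly.  The following
 variant is an additional sketch (not part of the original text); it
 spells out the implication introduction rule, with the remaining goal
 closed by assumption as before.
*};

lemma "A --> A";
  by (rule impI);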

text {*
 Let us also reconsider $K$.  Its statement is composed of iterated
 connectives.  Basic decomposition is by a single rule at a time,
 which is why our first version above was by nesting two proofs.

 The $\idt{intro}$ proof method repeatedly decomposes a goal's
 conclusion.\footnote{The dual method is $\idt{elim}$, acting on a
 goal's premises.}
*};

lemma "A --> B --> A";
proof intro;
  assume A;
  show A; .;
qed;

text {*
 Again, the body may be collapsed.
*};

lemma "A --> B --> A";
  by intro;

text {*
 Just like $\idt{rule}$, the $\idt{intro}$ and $\idt{elim}$ proof
 methods pick standard structural rules, in case no explicit arguments
 are given.  While implicit rules are usually just fine for single
 rule application, this may go too far in iteration.  Thus in
 practice, $\idt{intro}$ and $\idt{elim}$ would typically be
 restricted to certain structures by giving a few rules only, e.g.\
 $(\idt{intro}~\name{impI}~\name{allI})$ to strip implications and
 universal quantifiers.

 Such well-tuned iterated decomposition of certain structure is the
 prime application of $\idt{intro}$~/ $\idt{elim}$.  In general,
 terminal steps that solve a goal completely are typically performed
 by actual automated proof methods (e.g.\
 \isacommand{by}~$\idt{blast}$).
*};
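
text {*
 As a small illustration of such restricted decomposition (an extra
 sketch, not part of the original text):
 $(\idt{intro}~\name{impI}~\name{allI})$ strips the outer implication
 and quantifier below, and \isacommand{by} then closes the remaining
 goal by assumption.
*};

lemma "A --> (ALL x. B x --> A)";
  by (intro impI allI);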



subsection {* Variations of backward vs.\ forward reasoning *};

text {*
 Certainly, any proof may be performed in backward-style only.  On the
 other hand, small steps of reasoning are often more naturally
 expressed in forward-style.  Isar supports both backward and forward
 reasoning as a first-class concept.  In order to demonstrate the
 difference, we consider several proofs of $A \conj B \impl B \conj
 A$.

 The first version is purely backward.
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  show "B & A";
  proof;
    show B; by (rule conjunct2);
    show A; by (rule conjunct1);
  qed;
qed;

text {*
 Above, the $\idt{conjunct}_{1/2}$ projection rules had to be named
 explicitly, since the goals did not provide any structural clue.
 This may be avoided using \isacommand{from} to focus on $\idt{prems}$
 (i.e.\ the $A \conj B$ assumption) as the current facts, enabling the
 use of double-dot proofs.  Note that \isacommand{from} already
 involves forward-chaining.
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  show "B & A";
  proof;
    from prems; show B; ..;
    from prems; show A; ..;
  qed;
qed;

text {*
 In the next version, we move the forward step one level upwards.
 Forward-chaining from the most recent facts is indicated by the
 \isacommand{then} command.  Thus the proof of $B \conj A$ from $A
 \conj B$ actually becomes an elimination, rather than an
 introduction.  The resulting proof structure directly corresponds to
 that of the $\name{conjE}$ rule, including the repeated goal
 proposition that is abbreviated as $\var{thesis}$ below.
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  then; show "B & A";
  proof                    -- {* rule \name{conjE} of $A \conj B$ *};
    assume A B;
    show ?thesis; ..       -- {* rule \name{conjI} of $B \conj A$ *};
  qed;
qed;

text {*
 Subsequently, only the outermost decomposition step is left backward,
 all the rest is forward.
*};

lemma "A & B --> B & A";
proof;
  assume ab: "A & B";
  from ab; have a: A; ..;
  from ab; have b: B; ..;
  from b a; show "B & A"; ..;
qed;

text {*
 We can still push forward reasoning a bit further, even at the danger
 of getting ridiculous.  Note that we force the initial proof step to
 do nothing, by referring to the ``-'' proof method.
*};

lemma "A & B --> B & A";
proof -;
  {{;
    assume ab: "A & B";
    from ab; have a: A; ..;
    from ab; have b: B; ..;
    from b a; have "B & A"; ..;
  }};
  thus ?thesis; ..         -- {* rule \name{impI} *};
qed;

text {*
 \medskip With these examples we have shifted through a whole range
 from purely backward to purely forward reasoning.  Apparently, at the
 extreme ends we get slightly ill-structured proofs, which also
 require much explicit naming of either rules (backward) or local
 facts (forward).

 The general lesson learned here is that good proof style would
 achieve just the \emph{right} balance of top-down backward
 decomposition, and bottom-up forward composition.  In practice, there
 is no single best way to arrange some pieces of formal reasoning, of
 course.  Depending on the actual applications, the intended audience
 etc., certain aspects such as rules~/ methods vs.\ facts have to be
 emphasised in an appropriate way.  This requires the proof writer to
 develop good taste, and some practice, of course.
*};

text {*
 For our example the most appropriate way of reasoning is probably the
 middle one, with conjunction introduction done after elimination.
 This reads even more concisely using \isacommand{thus}, which
 abbreviates \isacommand{then}~\isacommand{show}.\footnote{In the same
 vein, \isacommand{hence} abbreviates
 \isacommand{then}~\isacommand{have}.}
*};

lemma "A & B --> B & A";
proof;
  assume "A & B";
  thus "B & A";
  proof;
    assume A B;
    show ?thesis; ..;
  qed;
qed;
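
text {*
 In the same vein (an extra sketch, not part of the original text),
 the fully forward version given before may use \isacommand{hence} for
 its first step, chaining from the most recently assumed fact; the
 second step still refers to the named assumption explicitly.
*};

lemma "A & B --> B & A";
proof;
  assume ab: "A & B";
  hence b: B; ..;
  from ab; have a: A; ..;
  from b a; show "B & A"; ..;
qed;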



subsection {* A few examples from ``Introduction to Isabelle'' *};

text {*
 We rephrase some of the basic reasoning examples of
 \cite{isabelle-intro}.
*};

subsubsection {* Propositional proof *};

lemma "P | P --> P";
proof;
  assume "P | P";
  then; show P;
  proof;
    assume P;
    show P; .;
    show P; .;
  qed;
qed;

lemma "P | P --> P";
proof;
  assume "P | P";
  then; show P; ..;
qed;


subsubsection {* Quantifier proof *};

lemma "(EX x. P(f(x))) --> (EX x. P(x))";
proof;
  assume "EX x. P(f(x))";
  then; show "EX x. P(x)";
  proof (rule exE);
    fix a;
    assume "P(f(a))" (is "P(?witness)");
    show ?thesis; by (rule exI [of P ?witness]);
  qed;
qed;

lemma "(EX x. P(f(x))) --> (EX x. P(x))";
proof;
  assume "EX x. P(f(x))";
  then; show "EX x. P(x)";
  proof;
    fix a;
    assume "P(f(a))";
    show ?thesis; ..;
  qed;
qed;

lemma "(EX x. P(f(x))) --> (EX x. P(x))";
  by blast;


subsubsection {* Deriving rules in Isabelle *};

text {*
 We derive the conjunction elimination rule from the projections.  The
 proof below closely follows the basic reasoning of the script given
 in the Isabelle Intro Manual, although the actual underlying
 operations on rules and proof states are quite different:
 Isabelle/Isar supports non-atomic goals and assumptions fully
 transparently, while the original Isabelle classic script depends on
 the primitive goal command to decompose the rule into premises and
 conclusion, with the result emerging by discharging the context again
 later.
*};

theorem conjE: "A & B ==> (A ==> B ==> C) ==> C";
proof -;
  assume ab: "A & B";
  assume ab_c: "A ==> B ==> C";
  show C;
  proof (rule ab_c);
    from ab; show A; ..;
    from ab; show B; ..;
  qed;
qed;

end;