(*<*)theory Logic = Main:(*>*)
text{* We begin by looking at a simplified grammar for Isar proofs:
\begin{center}
\begin{tabular}{lrl}
\emph{proof} & ::= & \isakeyword{proof} \emph{method}$^?$
                     \emph{statement}$^*$
                     \isakeyword{qed} \\
             &$\mid$& \isakeyword{by} \emph{method}\\[1ex]
\emph{statement} &::=& \isakeyword{fix} \emph{variables} \\
             &$\mid$& \isakeyword{assume} \emph{proposition}$^*$ \\
             &$\mid$& (\isakeyword{from} \emph{label}$^*$ $\mid$ \isakeyword{then})$^?$ \\
             &&       (\isakeyword{show} $\mid$ \isakeyword{have}) \emph{proposition} \emph{proof} \\[1ex]
\emph{proposition} &::=& (\emph{label}{\bf:})$^?$ \emph{string}
\end{tabular}
\end{center}
where parentheses are used for grouping, $^*$ indicates repetition and $^?$ indicates an optional item.

This is a typical proof skeleton:

\begin{center}
\begin{tabular}{@ {}ll}
\isakeyword{proof}\\
\hspace*{3ex}\isakeyword{assume} @{text"\""}\emph{the-assm}@{text"\""}\\
\hspace*{3ex}\isakeyword{have} @{text"\""}\dots@{text"\""} & -- "intermediate result"\\
\hspace*{3ex}\vdots\\
\hspace*{3ex}\isakeyword{have} @{text"\""}\dots@{text"\""} & -- "intermediate result"\\
\hspace*{3ex}\isakeyword{show} @{text"\""}\emph{the-concl}@{text"\""}\\
\isakeyword{qed}
\end{tabular}
\end{center}
It proves \emph{the-assm}~@{text"\<Longrightarrow>"}~\emph{the-concl}.*}

section{*Logic*}

subsection{*Propositional logic*}

subsubsection{*Introduction rules*}

lemma "A \<longrightarrow> A"
proof (rule impI)
  assume a: "A"
  show "A" by(rule a)
qed

text{*\noindent
The operational reading: the assume-show block proves @{prop"A \<Longrightarrow> A"},
which rule @{thm[source]impI} turns into the desired @{prop"A \<longrightarrow> A"}.
However, this text is much too detailed for comfort. Therefore Isar
implements the following principle:
\begin{quote}\em
Command \isakeyword{proof} automatically tries to select an introduction rule
based on the goal and a predefined list of rules.
\end{quote}
Here @{thm[source]impI} is applied automatically:
*}

lemma "A \<longrightarrow> A"
proof
  assume a: "A"
  show "A" by(rule a)
qed

text{* Trivial proofs, in particular those by assumption, should be trivial
to perform. Method ``.'' does just that (and a bit more --- see later). Thus
naming of assumptions is often superfluous: *}

lemma "A \<longrightarrow> A"
proof
  assume "A"
  show "A" .
qed

text{* To hide proofs by assumption further, \isakeyword{by}@{text"(method)"}
first applies @{text method} and then tries to solve all remaining subgoals
by assumption: *}

lemma "A \<longrightarrow> A \<and> A"
proof
  assume A
  show "A \<and> A" by(rule conjI)
qed

text{*\noindent A drawback of these implicit proofs by assumption is that it
is no longer obvious where an assumption is used.

Proofs of the form \isakeyword{by}@{text"(rule <name>)"} can be abbreviated
to ``..'' if @{text"<name>"} is one of the predefined introduction rules:
*}

lemma "A \<longrightarrow> A \<and> A"
proof
  assume A
  show "A \<and> A" ..
qed

text{*\noindent
What happens: first the matching introduction rule @{thm[source]conjI}
is applied (first ``.''), then the two subgoals are solved by assumption
(second ``.''). *}
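
text{* The same mechanism handles other connectives. In the following
sketch (an extra example, along the same lines as above) ``..'' selects
@{thm[source]disjI1} and the remaining subgoal is again solved by
assumption: *}

lemma "A \<longrightarrow> A \<or> B"
proof
  assume A
  show "A \<or> B" ..
qed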

subsubsection{*Elimination rules*}

text{*A typical elimination rule is @{thm[source]conjE}, $\land$-elimination:
@{thm[display,indent=5]conjE[no_vars]} In the following proof it is applied
by hand, after its first (``\emph{major}'') premise has been eliminated via
@{text"[OF AB]"}: *}

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume AB: "A \<and> B"
  show "B \<and> A"
  proof (rule conjE[OF AB])
    assume A and B
    show ?thesis ..
  qed
qed

text{*\noindent Note that the term @{text"?thesis"} always stands for the
``current goal'', i.e.\ the enclosing \isakeyword{show} (or
\isakeyword{have}).

This is too much proof text. Elimination rules should be selected
automatically based on their major premise, the formula or rather connective
to be eliminated. In Isar they are triggered by propositions being fed
\emph{into} a proof block. Syntax:
\begin{center}
\isakeyword{from} \emph{fact} \isakeyword{show} \emph{proposition}
\end{center}
where \emph{fact} stands for the name of a previously proved
proposition, e.g.\ an assumption, an intermediate result or some global
theorem. The fact may also be modified with @{text of}, @{text OF} etc.
This command applies an elimination rule (from a predefined list)
whose first premise is solved by the fact. Thus: *}

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume AB: "A \<and> B"
  from AB show "B \<and> A"
  proof
    assume A and B
    show ?thesis ..
  qed
qed

text{* Now we come to a second important principle:
\begin{quote}\em
Try to arrange the sequence of propositions in a UNIX-like pipe,
such that the proof of each proposition builds on the previous proposition.
\end{quote}
The previous proposition can be referred to via the fact @{text this}.
This greatly reduces the need for explicit naming of propositions:
*}

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume "A \<and> B"
  from this show "B \<and> A"
  proof
    assume A and B
    show ?thesis ..
  qed
qed

text{*\noindent
A final simplification: \isakeyword{from}~@{text this}~\isakeyword{show} can be
abbreviated to \isakeyword{thus}.
\bigskip

Here is an alternative proof that operates purely by forward reasoning: *}

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume ab: "A \<and> B"
  from ab have a: A ..
  from ab have b: B ..
  from b a show "B \<and> A" ..
qed

text{*\noindent
It is worth examining this text in detail because it exhibits a number of new features.

For a start, this is the first time we have proved intermediate propositions
(\isakeyword{have}) on the way to the final \isakeyword{show}. This is the
norm in any nontrivial proof where one cannot bridge the gap between the
assumptions and the conclusion in one step. Both \isakeyword{have}s above are
proved automatically via @{thm[source]conjE} triggered by
\isakeyword{from}~@{text ab}.

The \isakeyword{show} command illustrates two things:
\begin{itemize}
\item \isakeyword{from} can be followed by any number of facts.
Given \isakeyword{from}~@{text f}$_1$~\dots~@{text f}$_n$, \isakeyword{show}
tries to find an elimination rule whose first $n$ premises can be proved
by the given facts in the given order.
\item If there is no matching elimination rule, introduction rules are tried,
again using the facts to prove the premises.
\end{itemize}
In this case, the proof succeeds with @{thm[source]conjI}. Note that the
proof would fail if we had written \isakeyword{from}~@{text"a b"}
instead of \isakeyword{from}~@{text"b a"}.

This treatment of facts fed into a proof is not restricted to implicit
application of introduction and elimination rules but applies to explicit
application of rules as well. Thus you could replace the final ``..'' above
with \isakeyword{by}@{text"(rule conjI)"}.

Note that Isar's predefined introduction and elimination rules include all the
usual natural deduction rules for propositional logic. We conclude this
section with an extended example --- which rules are used implicitly where?
Rule @{thm[source]ccontr} is @{thm ccontr[no_vars]}.
*}

lemma "\<not>(A \<and> B) \<longrightarrow> \<not>A \<or> \<not>B"
proof
  assume n: "\<not>(A \<and> B)"
  show "\<not>A \<or> \<not>B"
  proof (rule ccontr)
    assume nn: "\<not>(\<not>A \<or> \<not>B)"
    from n show False
    proof
      show "A \<and> B"
      proof
        show A
        proof (rule ccontr)
          assume "\<not>A"
          have "\<not>A \<or> \<not>B" ..
          from nn this show False ..
        qed
      next
        show B
        proof (rule ccontr)
          assume "\<not>B"
          have "\<not>A \<or> \<not>B" ..
          from nn this show False ..
        qed
      qed
    qed
  qed
qed

text{*\noindent Apart from demonstrating the strangeness of classical
arguments by contradiction, this example also introduces a new language
feature to deal with multiple subgoals: \isakeyword{next}. When showing
@{term"A \<and> B"} we need to show both @{term A} and @{term B}. Each subgoal
is proved separately, in \emph{any} order. The individual proofs are
separated by \isakeyword{next}. *}

subsection{*Becoming more concise*}

text{* So far our examples have been a bit unnatural: normally we want to
prove rules expressed with @{text"\<Longrightarrow>"}, not @{text"\<longrightarrow>"}. Here is an example:
*}

lemma "\<lbrakk> A \<Longrightarrow> False \<rbrakk> \<Longrightarrow> \<not> A"
proof
  assume "A \<Longrightarrow> False" "A"
  thus False .
qed

text{*\noindent The \isakeyword{proof} always works on the conclusion,
@{prop"\<not>A"} in our case, thus selecting $\neg$-introduction. Hence we can
additionally assume @{prop"A"}.

This is all very well as long as formulae are small. Let us now look at some
devices to avoid repeating (possibly large) formulae. A very general method
is pattern matching: *}

lemma "(large_formula \<Longrightarrow> False) \<Longrightarrow> \<not> large_formula"
      (is "(?P \<Longrightarrow> _) \<Longrightarrow> _")
proof
  assume "?P \<Longrightarrow> False" ?P
  thus False .
qed

text{*\noindent Any formula may be followed by
@{text"("}\isakeyword{is}~\emph{pattern}@{text")"} which causes the pattern
to be matched against the formula, instantiating the ?-variables in the
pattern. Subsequent uses of these variables in other terms simply cause them
to be replaced by the terms they stand for.

We can simplify things even more by stating the theorem by means of the
\isakeyword{assumes} and \isakeyword{shows} primitives which allow direct
naming of assumptions: *}

lemma assumes A: "large_formula \<Longrightarrow> False"
      shows "\<not> large_formula" (is "\<not> ?P")
proof
  assume ?P
  with A show False .
qed

text{*\noindent Here we have used the abbreviation
\begin{center}
\isakeyword{with}~\emph{facts} \quad = \quad
\isakeyword{from}~\emph{facts} \isakeyword{and} @{text this}
\end{center}

Sometimes it is necessary to suppress the implicit application of rules in a
\isakeyword{proof}. For example \isakeyword{show}~@{prop"A \<or> B"} would
trigger $\lor$-introduction, requiring us to prove @{prop A}. A simple
``@{text"-"}'' prevents this \emph{faux pas}: *}

lemma assumes AB: "A \<or> B" shows "B \<or> A"
proof -
  from AB show ?thesis
  proof
    assume A show ?thesis ..
  next
    assume B show ?thesis ..
  qed
qed

subsection{*Predicate calculus*}

text{* Command \isakeyword{fix} introduces new local variables into a
proof. It corresponds to @{text"\<And>"} (the universal quantifier at the
meta-level) just like \isakeyword{assume}-\isakeyword{show} corresponds to
@{text"\<Longrightarrow>"}. Here is a sample proof, annotated with the rules that are
applied implicitly: *}

lemma assumes P: "\<forall>x. P x" shows "\<forall>x. P(f x)"
proof -- "@{thm[source]allI}: @{thm allI[no_vars]}"
  fix a
  from P show "P(f a)" .. -- "@{thm[source]allE}: @{thm allE[no_vars]}"
qed

text{*\noindent Note that in the proof we have chosen to call the bound
variable @{term a} instead of @{term x} merely to show that the choice of
names is irrelevant.

Next we look at @{text"\<exists>"}, which is a bit more tricky.
*}

lemma assumes Pf: "\<exists>x. P(f x)" shows "\<exists>y. P y"
proof -
  from Pf show ?thesis
  proof -- "@{thm[source]exE}: @{thm exE[no_vars]}"
    fix a
    assume "P(f a)"
    show ?thesis .. -- "@{thm[source]exI}: @{thm exI[no_vars]}"
  qed
qed

text {*\noindent
Explicit $\exists$-elimination as seen above can become
cumbersome in practice. The derived Isar language element
``\isakeyword{obtain}'' provides a more convenient way to perform
generalized existence reasoning:
*}

lemma assumes Pf: "\<exists>x. P(f x)" shows "\<exists>y. P y"
proof -
  from Pf obtain x where "P(f x)" ..
  thus "\<exists>y. P y" ..
qed

text {*\noindent Note how the proof text follows the usual mathematical style
of concluding $P(x)$ from $\exists x. P(x)$, while carefully introducing $x$
as a new local variable.
Technically, \isakeyword{obtain} is similar to \isakeyword{fix} and
\isakeyword{assume} together with a soundness proof of the
elimination involved. Thus it behaves similarly to any other forward
proof element.

Here is a proof of a well-known tautology which employs another useful
abbreviation: \isakeyword{hence} abbreviates \isakeyword{from}~@{text
this}~\isakeyword{have}. Figure out which rule is used where! *}

lemma assumes ex: "\<exists>x. \<forall>y. P x y" shows "\<forall>y. \<exists>x. P x y"
proof
  fix y
  from ex obtain x where "\<forall>y. P x y" ..
  hence "P x y" ..
  thus "\<exists>x. P x y" ..
qed
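
text{*\noindent To check your answer: here is the same proof again, annotated
in the style of the earlier examples with the rules we believe are applied at
each step: *}

lemma assumes ex: "\<exists>x. \<forall>y. P x y" shows "\<forall>y. \<exists>x. P x y"
proof -- "@{thm[source]allI}"
  fix y
  from ex obtain x where "\<forall>y. P x y" .. -- "@{thm[source]exE}"
  hence "P x y" .. -- "@{thm[source]allE}"
  thus "\<exists>x. P x y" .. -- "@{thm[source]exI}"
qed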

text{* So far we have confined ourselves to single-step proofs. Of course
powerful automatic methods can be used just as well. Here is an example,
Cantor's theorem that there is no surjective function from a set to its
powerset: *}

theorem "\<exists>S. S \<notin> range (f :: 'a \<Rightarrow> 'a set)"
proof
  let ?S = "{x. x \<notin> f x}"
  show "?S \<notin> range f"
  proof
    assume "?S \<in> range f"
    then obtain y where fy: "?S = f y" ..
    show False
    proof (cases)
      assume "y \<in> ?S"
      with fy show False by blast
    next
      assume "y \<notin> ?S"
      with fy show False by blast
    qed
  qed
qed

text{*\noindent
For a start, the example demonstrates two new language elements:
\begin{itemize}
\item \isakeyword{let} introduces an abbreviation for a term, in our case
the witness for the claim, because the term occurs multiple times in the proof.
\item Proof by @{text"(cases)"} starts a proof by cases. Note that it remains
implicit what the two cases are: it is merely expected that the two subproofs
prove @{prop"P \<Longrightarrow> Q"} and @{prop"\<not>P \<Longrightarrow> Q"} for suitable @{term P} and
@{term Q}.
\end{itemize}
If you wonder how to \isakeyword{obtain} @{term y}:
via the predefined elimination rule @{thm rangeE[no_vars]}.

Method @{text blast} is used because the contradiction does not follow easily
by just a single rule. If you find the proof too cryptic for human
consumption, here is a more detailed version; the beginning up to
\isakeyword{obtain} stays unchanged. *}

theorem "\<exists>S. S \<notin> range (f :: 'a \<Rightarrow> 'a set)"
proof
  let ?S = "{x. x \<notin> f x}"
  show "?S \<notin> range f"
  proof
    assume "?S \<in> range f"
    then obtain y where fy: "?S = f y" ..
    show False
    proof (cases)
      assume A: "y \<in> ?S"
      hence isin: "y \<in> f y" by(simp add:fy)
      from A have "y \<notin> f y" by simp
      with isin show False by contradiction
    next
      assume A: "y \<notin> ?S"
      hence notin: "y \<notin> f y" by(simp add:fy)
      from A have "y \<in> f y" by simp
      with notin show False by contradiction
    qed
  qed
qed

text {*\noindent Method @{text contradiction} succeeds if both $P$ and
$\neg P$ are among the assumptions and the facts fed into that step.

As it happens, Cantor's theorem can be proved automatically by best-first
search. Depth-first search would diverge, but best-first search successfully
navigates through the large search space:
*}

theorem "\<exists>S. S \<notin> range (f :: 'a \<Rightarrow> 'a set)"
by best

text{*\noindent Of course this only works in the context of HOL's carefully
constructed introduction and elimination rules for set theory.

Finally, whole scripts may appear in the leaves of the proof tree, although
this is best avoided. Here is a contrived example: *}

lemma "A \<longrightarrow> (A \<longrightarrow> B) \<longrightarrow> B"
proof
  assume a: A
  show "(A \<longrightarrow> B) \<longrightarrow> B"
    apply(rule impI)
    apply(erule impE)
     apply(rule a)
    apply assumption
    done
qed

text{*\noindent You may need to resort to this technique if an automatic step
fails to prove the desired proposition. *}

section{*Case distinction and induction*}

subsection{*Case distinction*}

text{* We have already met the @{text cases} method for performing
binary case splits. Here is another example: *}

lemma "P \<or> \<not> P"
proof cases
  assume "P" thus ?thesis ..
next
  assume "\<not> P" thus ?thesis ..
qed

text{*\noindent As always, the cases can be tackled in any order.

This form is appropriate if @{term P} is textually small. However, if
@{term P} is large, we don't want to repeat it. This can be avoided by
the following idiom: *}

lemma "P \<or> \<not> P"
proof (cases "P")
  case True thus ?thesis ..
next
  case False thus ?thesis ..
qed

text{*\noindent which is simply a shorthand for the previous
proof. More precisely, the phrases \isakeyword{case}~@{prop
True}/@{prop False} abbreviate the corresponding assumptions @{prop P}
and @{prop"\<not>P"}.

The same game can be played with other datatypes, for example lists:
*}
(*<*)declare length_tl[simp del](*>*)
lemma "length(tl xs) = length xs - 1"
proof (cases xs)
  case Nil thus ?thesis by simp
next
  case Cons thus ?thesis by simp
qed

text{*\noindent Here \isakeyword{case}~@{text Nil} abbreviates
\isakeyword{assume}~@{prop"xs = []"} and \isakeyword{case}~@{text Cons}
abbreviates \isakeyword{assume}~@{text"xs = a # list"}. However, we
cannot refer to @{term a} and @{term list} because this would lead to
brittle proofs: these names have been chosen by the system and have
not been introduced via \isakeyword{fix}. Luckily, the proof is simple
enough that we do not need to refer to @{term a} and @{term list}. However,
in general one may have to. Hence Isar offers a simple scheme for
naming those variables: replace the anonymous @{text Cons} by, for
example, @{text"(Cons y ys)"}, which abbreviates
\isakeyword{fix}~@{text"y ys"} \isakeyword{assume}~@{text"xs = Cons y
ys"}, i.e.\ @{prop"xs = y # ys"}. Here is a (somewhat contrived)
example: *}

lemma "length(tl xs) = length xs - 1"
proof (cases xs)
  case Nil thus ?thesis by simp
next
  case (Cons y ys)
  hence "length xs = length ys + 1 \<and> length(tl xs) = length ys" by simp
  thus ?thesis by simp
qed

text{* New case distinction rules can be declared any time, even with
named cases. *}
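
text{*\noindent For instance, one can attach case names to an exhaustion
theorem with the @{text case_names} attribute and then use it via the
@{text"rule:"} modifier of @{text cases}. The following is only a sketch of
the idea; the names @{text zero} and @{text succ} are our own choice: *}

lemmas nat_exhaust_cases = nat.exhaust[case_names zero succ]

lemma "n = (0::nat) \<or> (\<exists>m. n = Suc m)"
proof (cases n rule: nat_exhaust_cases)
  case zero thus ?thesis by simp
next
  case succ thus ?thesis by auto
qed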

subsection{*Induction*}

subsubsection{*Structural induction*}

text{* We start with a simple inductive proof where both cases are proved automatically: *}

lemma "2 * (\<Sum>i<n+1. i) = n*(n+1)"
by (induct n, simp_all)

text{*\noindent If we want to expose more of the structure of the
proof, we can use pattern matching to avoid having to repeat the goal
statement: *}

lemma "2 * (\<Sum>i<n+1. i) = n*(n+1)" (is "?P n")
proof (induct n)
  show "?P 0" by simp
next
  fix n assume "?P n"
  thus "?P(Suc n)" by simp
qed

text{* \noindent We could refine this further to show more of the equational
proof. Instead we explore the same avenue as for case distinctions:
introducing context via the \isakeyword{case} command: *}

lemma "2 * (\<Sum>i<n+1. i) = n*(n+1)"
proof (induct n)
  case 0 show ?case by simp
next
  case Suc thus ?case by simp
qed

text{* \noindent The implicitly defined @{text ?case} refers to the
corresponding case to be proved, i.e.\ @{text"?P 0"} in the first case
and @{text"?P(Suc n)"} in the second case. Context
\isakeyword{case}~@{text 0} is empty whereas \isakeyword{case}~@{text
Suc} assumes @{text"?P n"}. Again we have the same problem as with
case distinctions: we cannot refer to @{term n} in the induction step
because it has not been introduced via \isakeyword{fix} (in contrast
to the previous proof). The solution is the same as above: @{text"(Suc
i)"} instead of just @{term Suc}: *}

lemma fixes n::nat shows "n < n*n + 1"
proof (induct n)
  case 0 show ?case by simp
next
  case (Suc i) thus "Suc i < Suc i * Suc i + 1" by simp
qed

text{* \noindent Of course we could again have written
\isakeyword{thus}~@{text ?case} instead of giving the term explicitly
but we wanted to use @{term i} somewhere.

Let us now tackle a more ambitious lemma: proving complete induction
@{prop[display,indent=5]"(\<And>n. (\<And>m. m < n \<Longrightarrow> P m) \<Longrightarrow> P n) \<Longrightarrow> P n"}
via structural induction. It is well known that one needs to prove
something more general first: *}

lemma cind_lemma:
  assumes A: "(\<And>n. (\<And>m. m < n \<Longrightarrow> P m) \<Longrightarrow> P n)"
  shows "\<And>m. m < n \<Longrightarrow> P(m::nat)"
proof (induct n)
  case 0 thus ?case by simp
next
  case (Suc n m)
  show ?case
  proof (cases)
    assume eq: "m = n"
    from Suc A have "P n" by blast
    with eq show "P m" by simp
  next
    assume neq: "m \<noteq> n"
    with Suc have "m < n" by simp
    with Suc show "P m" by blast
  qed
qed

text{* \noindent In contrast to the style advertised in the
Tutorial~\cite{LNCS2283}, structured Isar proofs do not need to
introduce @{text"\<forall>"} or @{text"\<longrightarrow>"} into a theorem to strengthen it
for induction --- we use @{text"\<And>"} and @{text"\<Longrightarrow>"} instead. This
simplifies the proof and means we don't have to convert between the
two kinds of connectives. As usual, at the end of the proof
@{text"\<And>"} disappears and the bound variable is turned into a
@{text"?"}-variable. Thus @{thm[source]cind_lemma} becomes
@{thm[display,indent=5] cind_lemma} Complete induction is an easy
corollary: instantiation followed by
simplification @{thm[source] cind_lemma[of _ n "Suc n", simplified]}
yields @{thm[display,indent=5] cind_lemma[of _ n "Suc n", simplified]}

So what is this funny case @{text"(Suc n m)"} in the proof of
@{thm[source] cind_lemma}? It looks confusing at first and reveals
that the very suggestive @{text"(Suc n)"} used above is not the whole
truth. The variable names after the case name (here: @{term Suc}) are
the names of the parameters of that subgoal. So far the only
parameters were the arguments to the constructor, but now we have an
additional parameter coming from the @{text"\<And>m"} in the
\isakeyword{shows} clause. Thus @{text"(Suc n m)"} does not mean that
constructor @{term Suc} is applied to two arguments but that the two
parameters in the @{term Suc} case are to be named @{text n} and
@{text m}. *}

subsubsection{*Rule induction*}

text{* We define our own version of the reflexive transitive closure of a
relation: *}

consts rtc :: "('a \<times> 'a)set \<Rightarrow> ('a \<times> 'a)set"  ("_*" [1000] 999)
inductive "r*"
intros
refl: "(x,x) \<in> r*"
step: "\<lbrakk> (x,y) \<in> r; (y,z) \<in> r* \<rbrakk> \<Longrightarrow> (x,z) \<in> r*"

text{* \noindent and prove that @{term"r*"} is indeed transitive: *}

lemma assumes A: "(x,y) \<in> r*"
  shows "(y,z) \<in> r* \<Longrightarrow> (x,z) \<in> r*" (is "PROP ?P x y")
proof -
  from A show "PROP ?P x y"
  proof induct
    fix x show "PROP ?P x x" .
  next
    fix x' x y
    assume "(x',x) \<in> r" and
           "PROP ?P x y" -- "The induction hypothesis"
    thus "PROP ?P x' y" by(blast intro: rtc.step)
  qed
qed

text{*\noindent Rule induction is triggered by a fact $(x_1,\dots,x_n) \in R$
piped into the proof, here \isakeyword{from}~@{text A}. The proof
itself follows the two rules of the inductive definition very closely.
The only surprising thing should be the keyword @{text PROP}: because
of certain syntactic subtleties it is required in front of terms of
type @{typ prop} (the type of meta-level propositions) which are not
obviously of type @{typ prop} because they do not contain a tell-tale
constant such as @{text"\<And>"} or @{text"\<Longrightarrow>"}.

However, the proof is rather terse. Here is a more detailed version:
*}

lemma assumes A: "(x,y) \<in> r*" and B: "(y,z) \<in> r*"
  shows "(x,z) \<in> r*"
proof -
  from A B show ?thesis
  proof induct
    fix x assume "(x,z) \<in> r*" -- "B[x := z]"
    thus "(x,z) \<in> r*" .
  next
    fix x' x y
    assume step: "(x',x) \<in> r" and
           IH: "(y,z) \<in> r* \<Longrightarrow> (x,z) \<in> r*" and
           B: "(y,z) \<in> r*"
    from step IH[OF B] show "(x',z) \<in> r*" by(rule rtc.step)
  qed
qed

text{*\noindent We start the proof with \isakeyword{from}~@{text"A
B"}. Only @{text A} is ``consumed'' by the first proof step, here
induction. Since @{text B} is left over we don't just prove @{text
?thesis} but in fact @{text"B \<Longrightarrow> ?thesis"}, just as in the previous
proof, only more elegantly. The base case is trivial. In the
assumptions for the induction step we can see very clearly how things
fit together and permit ourselves the obvious forward step
@{text"IH[OF B]"}. *}
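
text{*\noindent As a small exercise in the same spirit (a sketch, using only
the two rules above): a single @{term r}-step already lies in @{term"r*"},
by @{thm[source]rtc.step} followed by @{thm[source]rtc.refl}: *}

lemma "(x,y) \<in> r \<Longrightarrow> (x,y) \<in> r*"
by(blast intro: rtc.step rtc.refl)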

(*<*)end(*>*)
|