(*<*)theory Logic = Main:(*>*)
text{* We begin by looking at a simplified grammar for Isar proofs
where parentheses are used for grouping and $^?$ indicates an optional item:
\begin{center}
\begin{tabular}{lrl}
\emph{proof} & ::= & \isakeyword{proof} \emph{method}$^?$
                     \emph{statement}*
                     \isakeyword{qed} \\
             &$\mid$& \isakeyword{by} \emph{method}\\[1ex]
\emph{statement} &::= & \isakeyword{fix} \emph{variables} \\
             &$\mid$& \isakeyword{assume} \emph{propositions} \\
             &$\mid$& (\isakeyword{from} \emph{facts})$^?$
                      (\isakeyword{show} $\mid$ \isakeyword{have})
                      \emph{propositions} \emph{proof} \\[1ex]
\emph{proposition} &::=& (\emph{label}{\bf:})$^?$ \emph{string} \\[1ex]
\emph{fact} &::=& \emph{label}
\end{tabular}
\end{center}

This is a typical proof skeleton:
\begin{center}
\begin{tabular}{@ {}ll}
\isakeyword{proof}\\
\hspace*{3ex}\isakeyword{assume} @{text"\""}\emph{the-assm}@{text"\""}\\
\hspace*{3ex}\isakeyword{have} @{text"\""}\dots@{text"\""} & -- intermediate result\\
\hspace*{3ex}\vdots\\
\hspace*{3ex}\isakeyword{have} @{text"\""}\dots@{text"\""} & -- intermediate result\\
\hspace*{3ex}\isakeyword{show} @{text"\""}\emph{the-concl}@{text"\""}\\
\isakeyword{qed}
\end{tabular}
\end{center}
It proves \emph{the-assm}~@{text"\<Longrightarrow>"}~\emph{the-concl}.
Text starting with ``--'' is a comment.

Note that propositions in \isakeyword{assume} may but need not be
separated by \isakeyword{and}, whose purpose is to structure the
assumptions into possibly named blocks. For example in
\begin{center}
\isakeyword{assume} @{text"A:"} $A_1$ $A_2$ \isakeyword{and} @{text"B:"} $A_3$
\isakeyword{and} $A_4$
\end{center}
label @{text A} refers to the list of propositions $A_1$ $A_2$ and
label @{text B} to $A_3$.
*}

section{*Logic*}

subsection{*Propositional logic*}

subsubsection{*Introduction rules*}

text{* We start with a really trivial toy proof to introduce the basic
features of structured proofs. *}
lemma "A \<longrightarrow> A"
proof (rule impI)
  assume a: "A"
  show "A" by(rule a)
qed
text{*\noindent
The operational reading: the \isakeyword{assume}-\isakeyword{show} block
proves @{prop"A \<Longrightarrow> A"},
which rule @{thm[source]impI} turns into the desired @{prop"A \<longrightarrow> A"}.
However, this text is much too detailed for comfort. Therefore Isar
implements the following principle:
\begin{quote}\em
Command \isakeyword{proof} automatically tries to select an introduction rule
based on the goal and a predefined list of rules.
\end{quote}
Here @{thm[source]impI} is applied automatically:
*}

lemma "A \<longrightarrow> A"
proof
  assume a: "A"
  show "A" by(rule a)
qed

text{* Trivial proofs, in particular those by assumption, should be trivial
to perform. Proof ``.'' does just that (and a bit more). Thus
naming of assumptions is often superfluous: *}
lemma "A \<longrightarrow> A"
proof
  assume "A"
  show "A" .
qed
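text{* Combining both points, the following small sketch (not part of the
original development) lets \isakeyword{proof} select @{thm[source]impI} at
each of the two levels and closes the innermost goal with ``.'': *}
lemma "A \<longrightarrow> B \<longrightarrow> B"
proof
  assume "A"
  show "B \<longrightarrow> B"
  proof
    assume "B"
    show "B" .
  qed
qed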

text{* To hide proofs by assumption further, \isakeyword{by}@{text"(method)"}
first applies @{text method} and then tries to solve all remaining subgoals
by assumption: *}
lemma "A \<longrightarrow> A \<and> A"
proof
  assume A
  show "A \<and> A" by(rule conjI)
qed
text{*\noindent A drawback of these implicit proofs by assumption is that it
is no longer obvious where an assumption is used.

Proofs of the form \isakeyword{by}@{text"(rule"}~\emph{name}@{text")"} can be
abbreviated to ``..''
if \emph{name} refers to one of the predefined introduction rules:
*}

lemma "A \<longrightarrow> A \<and> A"
proof
  assume A
  show "A \<and> A" ..
qed
text{*\noindent
This is what happens: first the matching introduction rule @{thm[source]conjI}
is applied (first ``.''), then the two subgoals are solved by assumption
(second ``.''). *}
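text{* The ``..'' abbreviation is not limited to conjunction. As a small
illustrative sketch (not part of the original development), here it picks a
predefined $\lor$-introduction rule and again finishes by assumption: *}
lemma "A \<longrightarrow> A \<or> B"
proof
  assume "A"
  show "A \<or> B" ..
qed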

subsubsection{*Elimination rules*}

text{*A typical elimination rule is @{thm[source]conjE}, $\land$-elimination:
@{thm[display,indent=5]conjE[no_vars]} In the following proof it is applied
by hand, after its first (``\emph{major}'') premise has been eliminated via
@{text"[OF AB]"}: *}
lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume AB: "A \<and> B"
  show "B \<and> A"
  proof (rule conjE[OF AB]) -- {*@{prop"(A \<Longrightarrow> B \<Longrightarrow> R) \<Longrightarrow> R"}*}
    assume A and B
    show ?thesis ..
  qed
qed
text{*\noindent Note that the term @{text"?thesis"} always stands for the
``current goal'', i.e.\ the enclosing \isakeyword{show} (or
\isakeyword{have}) statement.

This is too much proof text. Elimination rules should be selected
automatically based on their major premise, the formula or rather connective
to be eliminated. In Isar they are triggered by propositions being fed
\emph{into} a proof block. Syntax:
\begin{center}
\isakeyword{from} \emph{fact} \isakeyword{show} \emph{proposition}
\end{center}
where \emph{fact} stands for the name of a previously proved
proposition, e.g.\ an assumption, an intermediate result or some global
theorem. The fact may also be modified with @{text of}, @{text OF} etc.
This command applies an elimination rule (from a predefined list)
whose first premise is solved by the fact. Thus: *}

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume AB: "A \<and> B"
  from AB show "B \<and> A"
  proof
    assume A and B
    show ?thesis ..
  qed
qed

text{* Now we come to a second important principle:
\begin{quote}\em
Try to arrange the sequence of propositions in a UNIX-like pipe,
such that the proof of each proposition builds on the previous proposition.
\end{quote}
The previous proposition can be referred to via the fact @{text this}.
This greatly reduces the need for explicit naming of propositions:
*}
lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume "A \<and> B"
  from this show "B \<and> A"
  proof
    assume A and B
    show ?thesis ..
  qed
qed

text{*\noindent Because of the frequency of \isakeyword{from}~@{text
this} Isar provides two abbreviations:
\begin{center}
\begin{tabular}{rcl}
\isakeyword{then} &=& \isakeyword{from} @{text this} \\
\isakeyword{thus} &=& \isakeyword{then} \isakeyword{show}
\end{tabular}
\end{center}
\medskip

Here is an alternative proof that operates purely by forward reasoning: *}
lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume ab: "A \<and> B"
  from ab have a: A ..
  from ab have b: B ..
  from b a show "B \<and> A" ..
qed
text{*\noindent
It is worth examining this text in detail because it exhibits a number of new concepts.

For a start, this is the first time we have proved intermediate propositions
(\isakeyword{have}) on the way to the final \isakeyword{show}. This is the
norm in any nontrivial proof where one cannot bridge the gap between the
assumptions and the conclusion in one step. Both \isakeyword{have}s above are
proved automatically via @{thm[source]conjE} triggered by
\isakeyword{from}~@{text ab}.

The \isakeyword{show} command illustrates two things:
\begin{itemize}
\item \isakeyword{from} can be followed by any number of facts.
Given \isakeyword{from}~@{text f}$_1$~\dots~@{text f}$_n$, the
\isakeyword{proof} step after \isakeyword{show}
tries to find an elimination rule whose first $n$ premises can be proved
by the given facts in the given order.
\item If there is no matching elimination rule, introduction rules are tried,
again using the facts to prove the premises.
\end{itemize}
In this case, the proof succeeds with @{thm[source]conjI}. Note that the proof
would fail if we had written \isakeyword{from}~@{text"a b"}
instead of \isakeyword{from}~@{text"b a"}.

This treatment of facts fed into a proof is not restricted to implicit
application of introduction and elimination rules but applies to explicit
application of rules as well. Thus you could replace the final ``..'' above
with \isakeyword{by}@{text"(rule conjI)"}.

Note that Isar's predefined introduction and elimination rules include all the
usual natural deduction rules. We conclude this
section with an extended example --- which rules are used implicitly where?
Rule @{thm[source]ccontr} is @{thm ccontr[no_vars]}.
*}

lemma "\<not> (A \<and> B) \<longrightarrow> \<not> A \<or> \<not> B"
proof
  assume n: "\<not> (A \<and> B)"
  show "\<not> A \<or> \<not> B"
  proof (rule ccontr)
    assume nn: "\<not> (\<not> A \<or> \<not> B)"
    have "\<not> A"
    proof
      assume A
      have "\<not> B"
      proof
        assume B
        have "A \<and> B" ..
        with n show False ..
      qed
      hence "\<not> A \<or> \<not> B" ..
      with nn show False ..
    qed
    hence "\<not> A \<or> \<not> B" ..
    with nn show False ..
  qed
qed
text{*\noindent Apart from demonstrating the strangeness of classical
arguments by contradiction, this example also introduces two new
abbreviations:
\begin{center}
\begin{tabular}{lcl}
\isakeyword{hence} &=& \isakeyword{then} \isakeyword{have} \\
\isakeyword{with}~\emph{facts} &=&
\isakeyword{from}~\emph{facts} \isakeyword{and} @{text this}
\end{tabular}
\end{center}
*}
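text{* As a small illustrative sketch (not part of the original development),
\isakeyword{hence} and \isakeyword{thus} compress the explicit
\isakeyword{from}~@{text this} chaining seen earlier: *}
lemma "A \<and> B \<longrightarrow> B \<or> A"
proof
  assume "A \<and> B"
  hence "B" ..
  thus "B \<or> A" ..
qed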

subsection{*Avoiding duplication*}

text{* So far our examples have been a bit unnatural: normally we want to
prove rules expressed with @{text"\<Longrightarrow>"}, not @{text"\<longrightarrow>"}. Here is an example:
*}
lemma "A \<and> B \<Longrightarrow> B \<and> A"
proof
  assume "A \<and> B" thus "B" ..
next
  assume "A \<and> B" thus "A" ..
qed
text{*\noindent The \isakeyword{proof} always works on the conclusion,
@{prop"B \<and> A"} in our case, thus selecting $\land$-introduction. Hence
we must show @{prop B} and @{prop A}; both are proved by
$\land$-elimination and the proofs are separated by \isakeyword{next}:
\begin{description}
\item[\isakeyword{next}] deals with multiple subgoals. For example,
when showing @{term"A \<and> B"} we need to show both @{term A} and @{term
B}. Each subgoal is proved separately, in \emph{any} order. The
individual proofs are separated by \isakeyword{next}.
\end{description}

This is all very well as long as formulae are small. Let us now look at some
devices to avoid repeating (possibly large) formulae. A very general method
is pattern matching: *}

lemma "large_A \<and> large_B \<Longrightarrow> large_B \<and> large_A"
      (is "?AB \<Longrightarrow> ?B \<and> ?A")
proof
  assume ?AB thus ?B ..
next
  assume ?AB thus ?A ..
qed
text{*\noindent Any formula may be followed by
@{text"("}\isakeyword{is}~\emph{pattern}@{text")"} which causes the pattern
to be matched against the formula, instantiating the @{text"?"}-variables in
the pattern. Subsequent uses of these variables in other terms simply cause
them to be replaced by the terms they stand for.

We can simplify things even more by stating the theorem by means of the
\isakeyword{assumes} and \isakeyword{shows} elements which allow direct
naming of assumptions: *}

lemma assumes AB: "large_A \<and> large_B"
      shows "large_B \<and> large_A" (is "?B \<and> ?A")
proof
  from AB show ?B ..
next
  from AB show ?A ..
qed
text{*\noindent Note the difference between @{text ?AB}, a term, and
@{text AB}, a fact.

Finally we want to start the proof with $\land$-elimination so we
don't have to perform it twice, as above. Here is a slick way to
achieve this: *}

lemma assumes AB: "large_A \<and> large_B"
      shows "large_B \<and> large_A" (is "?B \<and> ?A")
using AB
proof
  assume ?A and ?B show ?thesis ..
qed
text{*\noindent Command \isakeyword{using} can appear before a proof
and adds further facts to those piped into the proof. Here @{text AB}
is the only such fact and it triggers $\land$-elimination. Another
frequent usage is as follows:
\begin{center}
\isakeyword{from} \emph{important facts}
\isakeyword{show} \emph{something}
\isakeyword{using} \emph{minor facts}
\end{center}
\medskip

Sometimes it is necessary to suppress the implicit application of rules in a
\isakeyword{proof}. For example \isakeyword{show}~@{prop"A \<or> B"} would
trigger $\lor$-introduction, requiring us to prove @{prop A}. A simple
``@{text"-"}'' prevents this \emph{faux pas}: *}

lemma assumes AB: "A \<or> B" shows "B \<or> A"
proof -
  from AB show ?thesis
  proof
    assume A show ?thesis ..
  next
    assume B show ?thesis ..
  qed
qed
text{*\noindent Could \isakeyword{using} help to eliminate this ``@{text"-"}''? *}

subsection{*Predicate calculus*}

text{* Command \isakeyword{fix} introduces new local variables into a
proof. The pair \isakeyword{fix}-\isakeyword{show} corresponds to @{text"\<And>"}
(the universal quantifier at the
meta-level) just like \isakeyword{assume}-\isakeyword{show} corresponds to
@{text"\<Longrightarrow>"}. Here is a sample proof, annotated with the rules that are
applied implicitly: *}
lemma assumes P: "\<forall>x. P x" shows "\<forall>x. P(f x)"
proof -- "@{thm[source]allI}: @{thm allI[no_vars]}"
  fix a
  from P show "P(f a)" .. --"@{thm[source]allE}: @{thm allE[no_vars]}"
qed
text{*\noindent Note that in the proof we have chosen to call the bound
variable @{term a} instead of @{term x} merely to show that the choice of
local names is irrelevant.

Next we look at @{text"\<exists>"} which is a bit more tricky.
*}

lemma assumes Pf: "\<exists>x. P(f x)" shows "\<exists>y. P y"
proof -
  from Pf show ?thesis
  proof -- "@{thm[source]exE}: @{thm exE[no_vars]}"
    fix a
    assume "P(f a)"
    show ?thesis .. --"@{thm[source]exI}: @{thm exI[no_vars]}"
  qed
qed
text{*\noindent Explicit $\exists$-elimination as seen above can become
cumbersome in practice. The derived Isar language element
\isakeyword{obtain} provides a more appealing form of generalized
existence reasoning: *}

lemma assumes Pf: "\<exists>x. P(f x)" shows "\<exists>y. P y"
proof -
  from Pf obtain x where "P(f x)" ..
  thus "\<exists>y. P y" ..
qed
text{*\noindent Note how the proof text follows the usual mathematical style
of concluding $P(x)$ from $\exists x. P(x)$, while carefully introducing $x$
as a new local variable. Technically, \isakeyword{obtain} is similar to
\isakeyword{fix} and \isakeyword{assume} together with a soundness proof of
the elimination involved.

Here is a proof of a well-known tautology.
Figure out which rule is used where! *}

lemma assumes ex: "\<exists>x. \<forall>y. P x y" shows "\<forall>y. \<exists>x. P x y"
proof
  fix y
  from ex obtain x where "\<forall>y. P x y" ..
  hence "P x y" ..
  thus "\<exists>x. P x y" ..
qed

text{* So far we have confined ourselves to single step proofs. Of course
powerful automatic methods can be used just as well. Here is an example,
Cantor's theorem that there is no surjective function from a set to its
powerset: *}
theorem "\<exists>S. S \<notin> range (f :: 'a \<Rightarrow> 'a set)"
proof
  let ?S = "{x. x \<notin> f x}"
  show "?S \<notin> range f"
  proof
    assume "?S \<in> range f"
    then obtain y where fy: "?S = f y" ..
    show False
    proof cases
      assume "y \<in> ?S"
      with fy show False by blast
    next
      assume "y \<notin> ?S"
      with fy show False by blast
    qed
  qed
qed
text{*\noindent
For a start, the example demonstrates two new constructs:
\begin{itemize}
\item \isakeyword{let} introduces an abbreviation for a term, in our case
the witness for the claim, because the term occurs multiple times in the proof.
\item Proof by @{text"cases"} starts a proof by cases. Note that it remains
implicit what the two cases are: it is merely expected that the two subproofs
prove @{prop"P \<Longrightarrow> Q"} and @{prop"\<not>P \<Longrightarrow> Q"} for suitable @{term P} and
@{term Q}.
\end{itemize}
If you wonder how to \isakeyword{obtain} @{term y}:
via the predefined elimination rule @{thm rangeE[no_vars]}.

Method @{text blast} is used because the contradiction does not follow easily
by just a single rule. If you find the proof too cryptic for human
consumption, here is a more detailed version; the beginning up to
\isakeyword{obtain} stays unchanged. *}

theorem "\<exists>S. S \<notin> range (f :: 'a \<Rightarrow> 'a set)"
proof
  let ?S = "{x. x \<notin> f x}"
  show "?S \<notin> range f"
  proof
    assume "?S \<in> range f"
    then obtain y where fy: "?S = f y" ..
    show False
    proof cases
      assume A: "y \<in> ?S"
      hence isin: "y \<in> f y" by(simp add:fy)
      from A have "y \<notin> f y" by simp
      with isin show False by contradiction
    next
      assume A: "y \<notin> ?S"
      hence notin: "y \<notin> f y" by(simp add:fy)
      from A have "y \<in> f y" by simp
      with notin show False by contradiction
    qed
  qed
qed
text{*\noindent Method @{text contradiction} succeeds if both $P$ and
$\neg P$ are among the assumptions and the facts fed into that step, in any order.

As it happens, Cantor's theorem can be proved automatically by best-first
search. Depth-first search would diverge, but best-first search successfully
navigates through the large search space:
*}

theorem "\<exists>S. S \<notin> range (f :: 'a \<Rightarrow> 'a set)"
by best
text{*\noindent Of course this only works in the context of HOL's carefully
constructed introduction and elimination rules for set theory.

Finally, whole scripts may appear in the leaves of the proof tree, although
this is best avoided. Here is a contrived example: *}

lemma "A \<longrightarrow> (A \<longrightarrow> B) \<longrightarrow> B"
proof
  assume a: A
  show "(A \<longrightarrow> B) \<longrightarrow> B"
    apply(rule impI)
    apply(erule impE)
     apply(rule a)
    apply assumption
    done
qed
text{*\noindent You may need to resort to this technique if an automatic step
fails to prove the desired proposition. *}

section{*Case distinction and induction*}


subsection{*Case distinction*}

text{* We have already met the @{text cases} method for performing
binary case splits. Here is another example: *}
lemma "P \<or> \<not> P"
proof cases
  assume "P" thus ?thesis ..
next
  assume "\<not> P" thus ?thesis ..
qed
text{*\noindent Note that the two cases must come in this order.

This form is appropriate if @{term P} is textually small. However, if
@{term P} is large, we don't want to repeat it. This can be avoided by
the following idiom: *}

lemma "P \<or> \<not> P"
proof (cases "P")
  case True thus ?thesis ..
next
  case False thus ?thesis ..
qed
text{*\noindent which is simply a shorthand for the previous
proof. More precisely, the phrases \isakeyword{case}~@{prop
True}/@{prop False} abbreviate the corresponding assumptions @{prop P}
and @{prop"\<not>P"}. In contrast to the previous proof we can prove the cases
in arbitrary order.

The same game can be played with other datatypes, for example lists:
*}
(*<*)declare length_tl[simp del](*>*)
lemma "length(tl xs) = length xs - 1"
proof (cases xs)
  case Nil thus ?thesis by simp
next
  case Cons thus ?thesis by simp
qed
text{*\noindent Here \isakeyword{case}~@{text Nil} abbreviates
\isakeyword{assume}~@{prop"xs = []"} and \isakeyword{case}~@{text Cons}
abbreviates \isakeyword{assume}~@{text"xs = _ # _"}. The names of the head
and tail of @{text xs} have been chosen by the system. Therefore we cannot
refer to them (this would lead to obscure proofs) and have not even shown
them. Luckily, this proof is simple enough that we do not need to refer to them.
However, in general one may have to. Hence Isar offers a simple scheme for
naming those variables: replace the anonymous @{text Cons} by, for example,
@{text"(Cons y ys)"}, which abbreviates \isakeyword{fix}~@{text"y ys"}
\isakeyword{assume}~@{text"xs = Cons y ys"}, i.e.\ @{prop"xs = y # ys"}. Here
is a (somewhat contrived) example: *}

lemma "length(tl xs) = length xs - 1"
proof (cases xs)
  case Nil thus ?thesis by simp
next
  case (Cons y ys)
  hence "length(tl xs) = length ys \<and> length xs = length ys + 1"
    by simp
  thus ?thesis by simp
qed
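text{* The same \isakeyword{case} syntax works for other datatypes as well;
as a small sketch (not part of the original development), here it is on
natural numbers: *}
lemma "n = (0::nat) \<or> (\<exists>m. n = Suc m)"
proof (cases n)
  case 0 thus ?thesis by simp
next
  case (Suc m) thus ?thesis by simp
qed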


subsection{*Induction*}


subsubsection{*Structural induction*}

text{* We start with a simple inductive proof where both cases are proved automatically: *}
lemma "2 * (\<Sum>i<n+1. i) = n*(n+1)"
by (induct n, simp_all)

text{*\noindent If we want to expose more of the structure of the
proof, we can use pattern matching to avoid having to repeat the goal
statement: *}
lemma "2 * (\<Sum>i<n+1. i) = n*(n+1)" (is "?P n")
proof (induct n)
  show "?P 0" by simp
next
  fix n assume "?P n"
  thus "?P(Suc n)" by simp
qed

text{* \noindent We could refine this further to show more of the equational
proof. Instead we explore the same avenue as for case distinctions:
introducing context via the \isakeyword{case} command: *}
lemma "2 * (\<Sum>i<n+1. i) = n*(n+1)"
proof (induct n)
  case 0 show ?case by simp
next
  case Suc thus ?case by simp
qed

text{* \noindent The implicitly defined @{text ?case} refers to the
corresponding case to be proved, i.e.\ @{text"?P 0"} in the first case and
@{text"?P(Suc n)"} in the second case. Context \isakeyword{case}~@{text 0} is
empty whereas \isakeyword{case}~@{text Suc} assumes @{text"?P n"}. Again we
have the same problem as with case distinctions: we cannot refer to an anonymous @{term n}
in the induction step because it has not been introduced via \isakeyword{fix}
(in contrast to the previous proof). The solution is the same as above:
replace @{term Suc} by @{text"(Suc i)"}: *}
lemma fixes n::nat shows "n < n*n + 1"
proof (induct n)
  case 0 show ?case by simp
next
  case (Suc i) thus "Suc i < Suc i * Suc i + 1" by simp
qed

text{* \noindent Of course we could again have written
\isakeyword{thus}~@{text ?case} instead of giving the term explicitly
but we wanted to use @{term i} somewhere.

Let us now tackle a more ambitious lemma: proving complete induction
@{prop[display,indent=5]"(\<And>n. (\<And>m. m < n \<Longrightarrow> P m) \<Longrightarrow> P n) \<Longrightarrow> P n"}
via structural induction. It is well known that one needs to prove
something more general first: *}
lemma cind_lemma:
  assumes A: "(\<And>n. (\<And>m. m < n \<Longrightarrow> P m) \<Longrightarrow> P n)"
  shows "\<And>m. m < n \<Longrightarrow> P(m::nat)"
proof (induct n)
  case 0 thus ?case by simp
next
  case (Suc n m)
  show ?case
  proof cases
    assume eq: "m = n"
    from Suc A have "P n" by blast
    with eq show "P m" by simp
  next
    assume neq: "m \<noteq> n"
    with Suc have "m < n" by simp
    with Suc show "P m" by blast
  qed
qed
text{* \noindent Let us first examine the statement of the lemma.
In contrast to the style advertised in the
Tutorial~\cite{LNCS2283}, structured Isar proofs do not need to
introduce @{text"\<forall>"} or @{text"\<longrightarrow>"} into a theorem to strengthen it
for induction --- we use @{text"\<And>"} and @{text"\<Longrightarrow>"} instead. This
simplifies the proof and means we don't have to convert between the
two kinds of connectives. As usual, at the end of the proof
@{text"\<And>"} disappears and the bound variable is turned into a
@{text"?"}-variable. Thus @{thm[source]cind_lemma} becomes
@{thm[display,indent=5] cind_lemma} Complete induction is an easy
corollary: instantiation followed by
simplification, @{thm[source] cind_lemma[of _ n "Suc n", simplified]},
yields @{thm[display,indent=5] cind_lemma[of _ n "Suc n", simplified]}

Now we examine the proof. So what is this funny case @{text"(Suc n m)"}? It
looks confusing at first and reveals that the very suggestive @{text"(Suc
n)"} used above is not the whole truth. The variable names after the case
name (here: @{term Suc}) are the names of the parameters of that subgoal. So
far the only parameters were the arguments to the constructor, but now we
have an additional parameter coming from the @{text"\<And>m"} in the main
\isakeyword{shows} clause. Thus @{text"(Suc n m)"} does not mean that
constructor @{term Suc} is applied to two arguments but that the two
parameters in the @{term Suc} case are to be named @{text n} and @{text
m}. *}

subsubsection{*Rule induction*}

text{* We define our own version of reflexive transitive closure of a
relation: *}
consts rtc :: "('a \<times> 'a)set \<Rightarrow> ('a \<times> 'a)set" ("_*" [1000] 999)
inductive "r*"
intros
  refl: "(x,x) \<in> r*"
  step: "\<lbrakk> (x,y) \<in> r; (y,z) \<in> r* \<rbrakk> \<Longrightarrow> (x,z) \<in> r*"

text{* \noindent and prove that @{term"r*"} is indeed transitive: *}
lemma assumes A: "(x,y) \<in> r*" shows "(y,z) \<in> r* \<Longrightarrow> (x,z) \<in> r*"
using A
proof induct
  case refl thus ?case .
next
  case step thus ?case by(blast intro: rtc.step)
qed
text{*\noindent Rule induction is triggered by a fact $(x_1,\dots,x_n)
\in R$ piped into the proof, here \isakeyword{using}~@{text A}. The
proof itself follows the inductive definition very
closely: there is one case for each rule, and it has the same name as
the rule, analogous to structural induction.

However, this proof is rather terse. Here is a more readable version:
*}

lemma assumes A: "(x,y) \<in> r*" and B: "(y,z) \<in> r*"
  shows "(x,z) \<in> r*"
proof -
  from A B show ?thesis
  proof induct
    fix x assume "(x,z) \<in> r*" -- {*@{text B}[@{text y} := @{text x}]*}
    thus "(x,z) \<in> r*" .
  next
    fix x' x y
    assume 1: "(x',x) \<in> r" and
           IH: "(y,z) \<in> r* \<Longrightarrow> (x,z) \<in> r*" and
           B: "(y,z) \<in> r*"
    from 1 IH[OF B] show "(x',z) \<in> r*" by(rule rtc.step)
  qed
qed
text{*\noindent We start the proof with \isakeyword{from}~@{text"A
B"}. Only @{text A} is ``consumed'' by the induction step.
Since @{text B} is left over we don't just prove @{text
?thesis} but @{text"B \<Longrightarrow> ?thesis"}, just as in the previous proof. The
base case is trivial. In the assumptions for the induction step we can
see very clearly how things fit together and permit ourselves the
obvious forward step @{text"IH[OF B]"}. *}
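text{* As a quick sketch (not part of the original development), single
@{text r}-steps embed into the closure by combining @{thm[source]rtc.refl}
and @{thm[source]rtc.step}: *}
lemma assumes xy: "(x,y) \<in> r" shows "(x,y) \<in> r*"
proof -
  from xy rtc.refl show ?thesis by(rule rtc.step)
qed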

subsection{*More induction*}

text{* We close the section by demonstrating how arbitrary induction rules
are applied. As a simple example we have chosen recursion induction, i.e.\
induction based on a recursive function definition. The example is an unusual
definition of rotation of a list: *}

consts rot :: "'a list \<Rightarrow> 'a list"
recdef rot "measure length"
"rot [] = []"
"rot [x] = [x]"
"rot (x#y#zs) = y # rot(x#zs)"
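text{* As a quick sanity check, here is a small sketch (not part of the
original development); it assumes the @{text rot} equations are default
simplification rules, as \isakeyword{recdef} sets up: *}
lemma "rot [a,b,c] = [b,c,a]" by simp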

text{* In the first proof we set up each case explicitly, merely using
pattern matching to abbreviate the statement: *}

lemma "length(rot xs) = length xs" (is "?P xs")
proof (induct xs rule: rot.induct)
  show "?P []" by simp
next
  fix x show "?P [x]" by simp
next
  fix x y zs assume "?P (x#zs)"
  thus "?P (x#y#zs)" by simp
qed
text{* This proof skeleton works for arbitrary induction rules, not just
@{thm[source]rot.induct}.

In the following proof we rely on a default naming scheme for cases: they are
called 1, 2, etc., unless they have been named explicitly. The latter happens
only with datatypes and inductively defined sets, but not with recursive
functions. *}

lemma "xs \<noteq> [] \<Longrightarrow> rot xs = tl xs @ [hd xs]"
proof (induct xs rule: rot.induct)
  case 1 thus ?case by simp
next
  case 2 show ?case by simp
next
  case 3 thus ?case by simp
qed

text{*Of course both proofs are so simple that they can be condensed to*}
(*<*)lemma "xs \<noteq> [] \<Longrightarrow> rot xs = tl xs @ [hd xs]"(*>*)
by (induct xs rule: rot.induct, simp_all)

(*<*)end(*>*)
|