(*  Title:      Doc/IsarAdvanced/Functions/Thy/Fundefs.thy
    ID:         $Id$
    Author:     Alexander Krauss, TU Muenchen

Tutorial for function definitions with the new "function" package.
*)

theory Functions
imports Main
begin

section {* Function Definition for Dummies *}

text {*
  In most cases, defining a recursive function is just as simple as other definitions:

  Like in functional programming, a function definition consists of a
  name, a type, and a set of defining recursive equations:
*}

fun fib :: "nat \<Rightarrow> nat"
where
  "fib 0 = 1"
| "fib (Suc 0) = 1"
| "fib (Suc (Suc n)) = fib n + fib (Suc n)"

text {*
  The syntax is rather self-explanatory: We introduce a function by
  giving its name, its type and a set of defining recursive
  equations.
*}

text {*
  The function always terminates, since its argument gets smaller in
  every recursive call. Termination is an important requirement, since
  it prevents inconsistencies: From the "definition" @{text "f(n) =
  f(n) + 1"} we could prove @{text "0 = 1"} by subtracting @{text
  "f(n)"} on both sides.

  Isabelle tries to prove termination automatically when a function is
  defined. We will later look at cases where this fails and see what to
  do then.
*}

subsection {* Pattern matching *}

text {* \label{patmatch}
  Like in functional programming, we can use pattern matching to
  define functions. At the moment we will only consider \emph{constructor
  patterns}, which only consist of datatype constructors and
  variables.

  If patterns overlap, the order of the equations is taken into
  account. The following function inserts a fixed element between any
  two elements of a list:
*}

fun sep :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list"
where
  "sep a (x#y#xs) = x # a # sep a (y # xs)"
| "sep a xs = xs"

text {*
  Overlapping patterns are interpreted as "increments" to what is
  already there: The second equation is only meant for the cases where
  the first one does not match. Consequently, Isabelle replaces it
  internally by the remaining cases, making the patterns disjoint:
*}

thm sep.simps

text {* @{thm [display] sep.simps[no_vars]} *}

text {*
  The equations from function definitions are automatically used in
  simplification:
*}

lemma "sep (0::nat) [1, 2, 3] = [1, 0, 2, 0, 3]"
by simp

subsection {* Induction *}

text {*
  Isabelle provides customized induction rules for recursive functions.
  See \cite[\S3.5.4]{isa-tutorial}.
*}
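
text {*
  For example, the rule @{text "sep.induct"} follows the recursion
  structure of @{const sep}. A typical use is a proof by induction
  along that structure; the following small example is solved by
  simplification once the induction is set up:
*}

lemma "map f (sep x ys) = sep (f x) (map f ys)"
by (induct x ys rule: sep.induct) simp_all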

section {* Full form definitions *}

text {*
  Up to now, we were using the \cmd{fun} command, which provides a
  convenient shorthand notation for simple function definitions. In
  this mode, Isabelle tries to solve all the necessary proof obligations
  automatically. If a proof does not go through, the definition is
  rejected. This can either mean that the definition is indeed faulty,
  or that the default proof procedures are just not smart enough (or
  rather: not designed) to handle the definition.

  By expanding the abbreviated \cmd{fun} to the full \cmd{function}
  command, the proof obligations become visible and can be analyzed or
  solved manually.

\end{isamarkuptext}

\fbox{\parbox{\textwidth}{
\noindent\cmd{fun} @{text "f :: \<tau>"}\\%
\cmd{where}\isanewline%
\ \ {\it equations}\isanewline%
\ \ \quad\vdots
}}

\begin{isamarkuptext}
\vspace*{1em}
\noindent abbreviates
\end{isamarkuptext}

\fbox{\parbox{\textwidth}{
\noindent\cmd{function} @{text "("}\cmd{sequential}@{text ") f :: \<tau>"}\\%
\cmd{where}\isanewline%
\ \ {\it equations}\isanewline%
\ \ \quad\vdots\\%
\cmd{by} @{text "pat_completeness auto"}\\%
\cmd{termination by} @{text "lexicographic_order"}
}}

\begin{isamarkuptext}
\vspace*{1em}
\noindent Some declarations and proofs have now become explicit:

  \begin{enumerate}
  \item The \cmd{sequential} option enables the preprocessing of
  pattern overlaps we already saw. Without this option, the equations
  must already be disjoint and complete. The automatic completion only
  works with datatype patterns.

  \item A function definition now produces a proof obligation which
  expresses completeness and compatibility of patterns (we will talk
  about this later). The combination of the methods @{text "pat_completeness"} and
  @{text "auto"} is used to solve this proof obligation.

  \item A termination proof follows the definition, started by the
  \cmd{termination} command, which sets up the goal. The @{text
  "lexicographic_order"} method can prove termination of a certain
  class of functions by searching for a suitable lexicographic
  combination of size measures.
  \end{enumerate}

  Whenever a \cmd{fun} command fails, it is usually a good idea to
  expand the syntax to the more verbose \cmd{function} form, to see
  what is actually going on.
*}
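
text {*
  As a concrete illustration, the @{const fib} function from above
  could have been defined in the expanded form like this (the name
  @{text "fib2"} is used here only to avoid a clash with the existing
  definition):
*}

function (sequential) fib2 :: "nat \<Rightarrow> nat"
where
  "fib2 0 = 1"
| "fib2 (Suc 0) = 1"
| "fib2 (Suc (Suc n)) = fib2 n + fib2 (Suc n)"
by pat_completeness auto

termination by lexicographic_order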

section {* Proving termination *}

text {*
  Consider the following function, which sums up natural numbers up to
  @{text "N"}, using a counter @{text "i"}:
*}

function sum :: "nat \<Rightarrow> nat \<Rightarrow> nat"
where
  "sum i N = (if i > N then 0 else i + sum (Suc i) N)"
by pat_completeness auto

text {*
  \noindent The @{text "lexicographic_order"} method fails on this example, because none of the
  arguments decreases in the recursive call.

  A more general method for termination proofs is to supply a wellfounded
  relation on the argument type, and to show that the argument
  decreases in every recursive call.

  The termination argument for @{text "sum"} is based on the fact that
  the \emph{difference} between @{text "i"} and @{text "N"} gets
  smaller in every step, and that the recursion stops when @{text "i"}
  is greater than @{text "N"}. Phrased differently, the expression
  @{text "N + 1 - i"} decreases in every recursive call.

  We can use this expression as a measure function suitable to prove termination.
*}

termination sum
by (relation "measure (\<lambda>(i,N). N + 1 - i)") auto

text {*
  The \cmd{termination} command sets up the termination goal for the
  specified function @{text "sum"}. If the function name is omitted, it
  implicitly refers to the last function definition.

  The @{text relation} method takes a relation of
  type @{typ "('a \<times> 'a) set"}, where @{typ "'a"} is the argument type of
  the function. If the function has multiple curried arguments, then
  these are packed together into a tuple, as happened in the above
  example.

  The predefined function @{term_type "measure"} is a very common way of
  specifying termination relations in terms of a mapping into the
  natural numbers.

  After the invocation of @{text "relation"}, we must prove that (a)
  the relation we supplied is wellfounded, and (b) that the arguments
  of recursive calls indeed decrease with respect to the
  relation. These goals are all solved by the subsequent call to
  @{text "auto"}.

  Let us complicate the function a little, by adding some more
  recursive calls:
*}

function foo :: "nat \<Rightarrow> nat \<Rightarrow> nat"
where
  "foo i N = (if i > N
                then (if N = 0 then 0 else foo 0 (N - 1))
                else i + foo (Suc i) N)"
by pat_completeness auto

text {*
  When @{text "i"} has reached @{text "N"}, it starts at zero again
  and @{text "N"} is decremented.
  This corresponds to a nested
  loop where one index counts up and the other down. Termination can
  be proved using a lexicographic combination of two measures, namely
  the value of @{text "N"} and the above difference. The @{const
  "measures"} combinator generalizes @{text "measure"} by taking a
  list of measure functions.
*}

termination
by (relation "measures [\<lambda>(i, N). N, \<lambda>(i,N). N + 1 - i]") auto

subsection {* Manual Termination Proofs *}

text {*
  The @{text relation} method is often useful, but not
  necessary. Since termination proofs are just normal Isabelle proofs,
  they can also be carried out manually:
*}

function id :: "nat \<Rightarrow> nat"
where
  "id 0 = 0"
| "id (Suc n) = Suc (id n)"
by pat_completeness auto

termination
proof
  show "wf less_than" ..
next
  fix n show "(n, Suc n) \<in> less_than" by simp
qed

text {*
  Of course this is just a trivial example, but manual proofs can
  sometimes be the only choice if faced with very hard termination problems.
*}

section {* Mutual Recursion *}

text {*
  If two or more functions call one another mutually, they have to be defined
  in one step. The simplest examples are probably @{text "even"} and @{text "odd"}:
*}

function even :: "nat \<Rightarrow> bool"
  and odd :: "nat \<Rightarrow> bool"
where
  "even 0 = True"
| "odd 0 = False"
| "even (Suc n) = odd n"
| "odd (Suc n) = even n"
by pat_completeness auto

text {*
  To solve the problem of mutual dependencies, Isabelle internally
  creates a single function operating on the sum
  type. Then the original functions are defined as
  projections. Consequently, termination has to be proved
  simultaneously for both functions, by specifying a measure on the
  sum type:
*}

termination
by (relation "measure (\<lambda>x. case x of Inl n \<Rightarrow> n | Inr n \<Rightarrow> n)") auto

subsection {* Induction for mutual recursion *}

text {*
  When functions are mutually recursive, proving properties about them
  generally requires simultaneous induction. The induction rules
  generated from the definitions reflect this.

  Let us prove something about @{const even} and @{const odd}:
*}

lemma
  "even n = (n mod 2 = 0)"
  "odd n = (n mod 2 = 1)"

txt {*
  We apply simultaneous induction, specifying the induction variable
  for both goals, separated by \cmd{and}: *}

apply (induct n and n rule: even_odd.induct)

txt {*
  We get four subgoals, which correspond to the clauses in the
  definition of @{const even} and @{const odd}:
  @{subgoals[display,indent=0]}
  Simplification solves the first two goals, leaving us with two
  statements about the @{text "mod"} operation to prove:
*}

apply simp_all

txt {*
  @{subgoals[display,indent=0]}

  \noindent These can be handled by the decision procedure for
  Presburger arithmetic.
*}

apply presburger
apply presburger
done

text {*
  Even if we were just interested in one of the statements proved by
  simultaneous induction, the other ones may be necessary to
  strengthen the induction hypothesis. If we had left out the statement
  about @{const odd} (by substituting it with @{term "True"}), our
  proof would have failed:
*}

lemma
  "even n = (n mod 2 = 0)"
  "True"
apply (induct n rule: even_odd.induct)

txt {*
  \noindent Now the third subgoal is a dead end, since we have no
  useful induction hypothesis:

  @{subgoals[display,indent=0]}
*}

oops

section {* More general patterns *}

subsection {* Avoiding pattern splitting *}

text {*
  Up to now, we used pattern matching only on datatypes, and the
  patterns were always disjoint and complete; if they were not,
  they were made disjoint automatically, as in the definition of
  @{const "sep"} in \S\ref{patmatch}.

  This splitting can significantly increase the number of equations
  involved, and is not always necessary. The following simple example
  shows the problem:

  Suppose we are modelling incomplete knowledge about the world by a
  three-valued datatype, which has values @{term "T"}, @{term "F"}
  and @{term "X"} for true, false and uncertain propositions, respectively.
*}

datatype P3 = T | F | X

text {* Then the conjunction of such values can be defined as follows: *}

fun And :: "P3 \<Rightarrow> P3 \<Rightarrow> P3"
where
  "And T p = p"
| "And p T = p"
| "And p F = F"
| "And F p = F"
| "And X X = X"

text {*
  This definition is useful, because the equations can directly be used
  as rules to simplify expressions. But the patterns overlap, e.g.~the
  expression @{term "And T T"} is matched by the first two
  equations. By default, Isabelle makes the patterns disjoint by
  splitting them up, producing instances:
*}

thm And.simps

text {*
  @{thm[indent=4] And.simps}

  \vspace*{1em}
  \noindent There are several problems with this:

  \begin{enumerate}
  \item When datatypes have many constructors, there can be an
  explosion of equations. For @{const "And"}, we get seven instead of
  five equations, which can be tolerated, but this is just a small
  example.

  \item Since splitting makes the equations "less general", they
  do not always match in rewriting. While the term @{term "And x F"}
  can be simplified to @{term "F"} by the original specification, a
  (manual) case split on @{term "x"} is now necessary.

  \item The splitting also concerns the induction rule @{text
  "And.induct"}. Instead of five premises it now has seven, which
  means that our induction proofs will have more cases.

  \item In general, it increases clarity if we get the same definition
  back which we put in.
  \end{enumerate}

  On the other hand, a definition needs to be consistent and defining
  both @{term "f x = True"} and @{term "f x = False"} is a bad
  idea. So if we don't want Isabelle to mangle our definitions, we
  will have to prove that this is not necessary. By using the full
  definition form without the \cmd{sequential} option, we get this
  behaviour:
*}

function And2 :: "P3 \<Rightarrow> P3 \<Rightarrow> P3"
where
  "And2 T p = p"
| "And2 p T = p"
| "And2 p F = F"
| "And2 F p = F"
| "And2 X X = X"

txt {*
  Now it is also time to look at the subgoals generated by a
  function definition. In this case, they are:

  @{subgoals[display,indent=0]}

  The first subgoal expresses the completeness of the patterns. It has
  the form of an elimination rule and states that every @{term x} of
  the function's input type must match one of the patterns. It could
  be equivalently stated as a disjunction of existential statements:
  @{term "(\<exists>p. x = (T, p)) \<or> (\<exists>p. x = (p, T)) \<or> (\<exists>p. x = (p, F)) \<or>
  (\<exists>p. x = (F, p)) \<or> (x = (X, X))"} If the patterns just involve
  datatypes, we can solve it with the @{text "pat_completeness"} method:
*}

apply pat_completeness

txt {*
  The remaining subgoals express \emph{pattern compatibility}. We do
  allow that a value is matched by more than one pattern, but in this
  case, the result (i.e.~the right hand sides of the equations) must
  also be equal. For each pair of patterns, there is one such
  subgoal. Usually this needs injectivity of the constructors, which
  is used automatically by @{text "auto"}.
*}

by auto
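
text {*
  \noindent At this point we get back the five equations exactly as we
  wrote them, only guarded by a domain condition (the @{text psimps}
  form, which is explained in the section on partiality below):
*}

thm And2.psimps

text {* @{thm [display] And2.psimps[no_vars]} *}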

subsection {* Non-constructor patterns *}

text {* FIXME *}


section {* Partiality *}

text {*
  In HOL, all functions are total. A function @{term "f"} applied to
  @{term "x"} always has a value @{term "f x"}, and there is no notion
  of undefinedness.

  This property of HOL is the reason why we have to do termination
  proofs when defining functions: The termination proof justifies the
  definition of the function by wellfounded recursion.

  However, the \cmd{function} package still supports partiality. Let's
  look at the following function which searches for a zero in the
  function f.
*}

function (*<*)(domintros, tailrec)(*>*)findzero :: "(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat"
where
  "findzero f n = (if f n = 0 then n else findzero f (Suc n))"
by pat_completeness auto
(*<*)declare findzero.simps[simp del](*>*)

text {*
  Clearly, any attempt at a termination proof must fail. And without
  that, we do not get the usual rules @{text "findzero.simps"} and
  @{text "findzero.induct"}. So what was the definition good for at all?
*}

subsection {* Domain predicates *}

text {*
  The trick is that Isabelle has not only defined the function @{const findzero}, but also
  a predicate @{term "findzero_dom"} that characterizes the values where the function
  terminates: the \emph{domain} of the function. In Isabelle/HOL, a
  partial function is just a total function with an additional domain
  predicate. Like with total functions, we get simplification and
  induction rules, but they are guarded by the domain conditions and
  called @{text psimps} and @{text pinduct}:
*}

thm findzero.psimps

text {*
  @{thm[display] findzero.psimps}
*}

thm findzero.pinduct

text {*
  @{thm[display] findzero.pinduct}
*}

text {*
  As already mentioned, HOL does not support true partiality. All we
  are doing here is using some tricks to make a total function appear
  as if it were partial. We can still write the term @{term "findzero
  (\<lambda>x. 1) 0"} and, like any other term of type @{typ nat}, it is equal
  to some natural number, although we might not be able to find out
  which one (we will discuss this further in \S\ref{default}). The
  function is \emph{underdefined}.

  But it is defined enough to prove something about it. We can prove
  that if @{term "findzero f n"}
  terminates, it indeed returns a zero of @{term f}:
*}

lemma findzero_zero: "findzero_dom (f, n) \<Longrightarrow> f (findzero f n) = 0"

txt {* We apply induction as usual, but using the partial induction
  rule: *}

apply (induct f n rule: findzero.pinduct)

txt {* This gives the following subgoals:

  @{subgoals[display,indent=0]}

  The premise in our lemma was used to satisfy the first premise in
  the induction rule. However, now we can also use @{term
  "findzero_dom (f, n)"} as an assumption in the induction step. This
  allows us to unfold @{term "findzero f n"} using the @{text psimps}
  rule, and the rest is trivial. Since @{text psimps} rules carry the
  @{text "[simp]"} attribute by default, we just need a single step:
*}
apply simp
done

text {*
  Proofs about partial functions are often not harder than for total
  functions. Fig.~\ref{findzero_isar} shows a slightly more
  complicated proof written in Isar. It is verbose enough to show how
  partiality comes into play: From the partial induction, we get an
  additional domain condition hypothesis. Observe how this condition
  is applied when calls to @{term findzero} are unfolded.
*}

text_raw {*
\begin{figure}
\begin{center}
\begin{minipage}{0.8\textwidth}
\isabellestyle{it}
\isastyle\isamarkuptrue
*}
lemma "\<lbrakk>findzero_dom (f, n); x \<in> {n ..< findzero f n}\<rbrakk> \<Longrightarrow> f x \<noteq> 0"
proof (induct rule: findzero.pinduct)
  fix f n assume dom: "findzero_dom (f, n)"
    and IH: "\<lbrakk>f n \<noteq> 0; x \<in> {Suc n..<findzero f (Suc n)}\<rbrakk>
             \<Longrightarrow> f x \<noteq> 0"
    and x_range: "x \<in> {n..<findzero f n}"

  have "f n \<noteq> 0"
  proof
    assume "f n = 0"
    with dom have "findzero f n = n" by simp
    with x_range show False by auto
  qed

  from x_range have "x = n \<or> x \<in> {Suc n ..< findzero f n}" by auto
  thus "f x \<noteq> 0"
  proof
    assume "x = n"
    with `f n \<noteq> 0` show ?thesis by simp
  next
    assume "x \<in> {Suc n..<findzero f n}"
    with dom and `f n \<noteq> 0` have "x \<in> {Suc n ..< findzero f (Suc n)}"
      by simp
    with IH and `f n \<noteq> 0`
    show ?thesis by simp
  qed
qed
text_raw {*
\isamarkupfalse\isabellestyle{tt}
\end{minipage}\end{center}
\caption{A proof about a partial function}\label{findzero_isar}
\end{figure}
*}

subsection {* Partial termination proofs *}

text {*
  Now that we have proved some interesting properties about our
  function, we should turn to the domain predicate and see if it is
  actually true for some values. Otherwise we would have just proved
  lemmas with @{term False} as a premise.

  Essentially, we need some introduction rules for @{text
  findzero_dom}. The function package can prove such domain
  introduction rules automatically. But since they are not used very
  often (they are almost never needed if the function is total), they
  are disabled by default for efficiency reasons. So we have to go
  back and ask for them explicitly by passing the @{text
  "(domintros)"} option to the function package:

  \noindent\cmd{function} @{text "(domintros) findzero :: \"(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat\""}\\%
  \cmd{where}\isanewline%
  \ \ \ldots\\
  \cmd{by} @{text "pat_completeness auto"}\\%

  Now the package has proved an introduction rule for @{text findzero_dom}:
*}

thm findzero.domintros

text {*
  @{thm[display] findzero.domintros}

  Domain introduction rules allow us to show that a given value lies in the
  domain of a function, if the arguments of all recursive calls
  are in the domain as well. They allow us to do a \qt{single step} in a
  termination proof. Usually, you want to combine them with a suitable
  induction principle.

  Since our function increases its argument at recursive calls, we
  need an induction principle which works \qt{backwards}. We will use
  @{text inc_induct}, which allows us to do induction from a fixed number
  \qt{downwards}:

  @{thm[display] inc_induct}

  Fig.~\ref{findzero_term} gives a detailed Isar proof of the fact
  that @{text findzero} terminates if there is a zero which is greater
  than or equal to @{term n}. First we derive two useful rules which will
  solve the base case and the step case of the induction. The
  induction is then straightforward, except for the unusual induction
  principle.
*}

text_raw {*
\begin{figure}
\begin{center}
\begin{minipage}{0.8\textwidth}
\isabellestyle{it}
\isastyle\isamarkuptrue
*}
lemma findzero_termination:
  assumes "x >= n"
  assumes "f x = 0"
  shows "findzero_dom (f, n)"
proof -
  have base: "findzero_dom (f, x)"
    by (rule findzero.domintros) (simp add:`f x = 0`)

  have step: "\<And>i. findzero_dom (f, Suc i)
    \<Longrightarrow> findzero_dom (f, i)"
    by (rule findzero.domintros) simp

  from `x \<ge> n`
  show ?thesis
  proof (induct rule:inc_induct)
    show "findzero_dom (f, x)"
      by (rule base)
  next
    fix i assume "findzero_dom (f, Suc i)"
    thus "findzero_dom (f, i)"
      by (rule step)
  qed
qed
text_raw {*
\isamarkupfalse\isabellestyle{tt}
\end{minipage}\end{center}
\caption{Termination proof for @{text findzero}}\label{findzero_term}
\end{figure}
*}

text {*
  Again, the proof given in Fig.~\ref{findzero_term} has a lot of
  detail in order to explain the principles. Using more automation, we
  can also have a short proof:
*}

lemma findzero_termination_short:
  assumes zero: "x >= n"
  assumes [simp]: "f x = 0"
  shows "findzero_dom (f, n)"
using zero
by (induct rule:inc_induct) (auto intro: findzero.domintros)

text {*
  It is simple to combine the partial correctness result with the
  termination lemma:
*}

lemma findzero_total_correctness:
  "f x = 0 \<Longrightarrow> f (findzero f 0) = 0"
by (blast intro: findzero_zero findzero_termination)

subsection {* Definition of the domain predicate *}

text {*
  Sometimes it is useful to know what the definition of the domain
  predicate actually is. Actually, @{text findzero_dom} is just an
  abbreviation:

  @{abbrev[display] findzero_dom}

  The domain predicate is the accessible part of a relation @{const
  findzero_rel}, which was also created internally by the function
  package. @{const findzero_rel} is just a normal
  inductively defined predicate, so we can inspect its definition by
  looking at the introduction rules @{text findzero_rel.intros}.
  In our case there is just a single rule:

  @{thm[display] findzero_rel.intros}

  The relation @{const findzero_rel}, expressed as a binary predicate,
  describes the \emph{recursion relation} of the function
  definition. The recursion relation is a binary relation on
  the arguments of the function that relates each argument to its
  recursive calls. In general, there is one introduction rule for each
  recursive call.

  The predicate @{term "acc findzero_rel"} is the \emph{accessible part} of
  that relation. An argument belongs to the accessible part, if it can
  be reached in a finite number of steps.

  Since the domain predicate is just an abbreviation, you can use
  lemmas for @{const acc} and @{const findzero_rel} directly. Some
  lemmas which are occasionally useful are @{text accI}, @{text
  acc_downward}, and of course the introduction and elimination rules
  for the recursion relation @{text "findzero_rel.intros"} and @{text "findzero_rel.cases"}.
*}
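
text {*
  For instance, the two @{const acc} lemmas just mentioned can be
  inspected directly:
*}

thm accI acc_downward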

(*lemma findzero_nicer_domintros:
  "f x = 0 \<Longrightarrow> findzero_dom (f, x)"
  "findzero_dom (f, Suc x) \<Longrightarrow> findzero_dom (f, x)"
by (rule accI, erule findzero_rel.cases, auto)+
*)

subsection {* A Useful Special Case: Tail recursion *}

text {*
  The domain predicate is our trick that allows us to model partiality
  in a world of total functions. The downside of this is that we have
  to carry it around all the time. The termination proof above allowed
  us to replace the abstract @{term "findzero_dom (f, n)"} by the more
  concrete @{term "(x \<ge> n \<and> f x = 0)"}, but the condition is still
  there and it won't go away soon.

  In particular, the domain predicate guards the unfolding of our
  function, since it is there as a condition in the @{text psimp}
  rules.

  On the other hand, we should be grateful for the domain predicate,
  since it guarantees that all this is possible at all without losing
  consistency.

  Now there is an important special case: We can actually get rid
  of the condition in the simplification rules, \emph{if the function
  is tail-recursive}. The reason is that for all tail-recursive
  equations there is a total function satisfying them, even if they
  are non-terminating.

  The function package internally does the right construction and can
  derive the unconditional simp rules, if we ask it to do so. Luckily,
  our @{const "findzero"} function is tail-recursive, so we can just go
  back and add another option to the \cmd{function} command:

  \noindent\cmd{function} @{text "(domintros, tailrec) findzero :: \"(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat\""}\\%
  \cmd{where}\isanewline%
  \ \ \ldots\\%

  Now, we actually get the unconditional simplification rules, even
  though the function is partial:
*}

thm findzero.simps

text {*
  @{thm[display] findzero.simps}

  Of course these would make the simplifier loop, so we had better remove
  them from the simpset:
*}

declare findzero.simps[simp del]

text {* \fixme{Code generation ???} *}

section {* Nested recursion *}

text {*
  Recursive calls which are nested in one another frequently cause
  complications, since their termination proof can depend on a partial
  correctness property of the function itself.

  As a small example, we define the \qt{nested zero} function:
*}

function nz :: "nat \<Rightarrow> nat"
where
  "nz 0 = 0"
| "nz (Suc n) = nz (nz n)"
by pat_completeness auto

text {*
  If we attempt to prove termination using the identity measure on
  naturals, this fails:
*}

termination
  apply (relation "measure (\<lambda>n. n)")
  apply auto

txt {*
  We get stuck with the subgoal

  @{subgoals[display]}

  Of course this statement is true, since we know that @{const nz} is
  the zero function. And in fact we have no problem proving this
  property by induction.
*}
oops

lemma nz_is_zero: "nz_dom n \<Longrightarrow> nz n = 0"
by (induct rule:nz.pinduct) auto

text {*
  We formulate this as a partial correctness lemma with the condition
  @{term "nz_dom n"}. This allows us to prove it with the @{text
  pinduct} rule before we have proved termination. With this lemma,
  the termination proof works as expected:
*}

termination
by (relation "measure (\<lambda>n. n)") (auto simp: nz_is_zero)

text {*
  As a general strategy, one should prove the statements needed for
  termination as a partial property first. Then they can be used to do
  the termination proof. This also works for less trivial
  examples. Figure \ref{f91} defines the well-known 91-function by
  McCarthy \cite{?} and proves its termination.
*}

text_raw {*
\begin{figure}
\begin{center}
\begin{minipage}{0.8\textwidth}
\isabellestyle{it}
\isastyle\isamarkuptrue
*}

function f91 :: "nat => nat"
where
  "f91 n = (if 100 < n then n - 10 else f91 (f91 (n + 11)))"
by pat_completeness auto

lemma f91_estimate:
  assumes trm: "f91_dom n"
  shows "n < f91 n + 11"
using trm by induct auto

termination
proof
  let ?R = "measure (\<lambda>x. 101 - x)"
  show "wf ?R" ..

  fix n :: nat assume "\<not> 100 < n" -- "Assumptions for both calls"

  thus "(n + 11, n) \<in> ?R" by simp -- "Inner call"

  assume inner_trm: "f91_dom (n + 11)" -- "Outer call"
  with f91_estimate have "n + 11 < f91 (n + 11) + 11" .
  with `\<not> 100 < n` show "(f91 (n + 11), n) \<in> ?R" by simp
qed

text_raw {*
\isamarkupfalse\isabellestyle{tt}
\end{minipage}\end{center}
\caption{McCarthy's 91-function}\label{f91}
\end{figure}
*}

section {* Higher-Order Recursion *}

text {*
  Higher-order recursion occurs when recursive calls
  are passed as arguments to higher-order combinators such as @{term
  map}, @{term filter} etc.
  As an example, imagine a data type of n-ary trees:
*}

datatype 'a tree =
  Leaf 'a
| Branch "'a tree list"

text {* \noindent We can define a map function for trees, using the predefined
  map function for lists. *}

function treemap :: "('a \<Rightarrow> 'a) \<Rightarrow> 'a tree \<Rightarrow> 'a tree"
where
  "treemap f (Leaf n) = Leaf (f n)"
| "treemap f (Branch l) = Branch (map (treemap f) l)"
by pat_completeness auto

text {*
  We do the termination proof manually, to point out what happens
  here:
*}

termination proof
  txt {*

    As usual, we have to give a wellfounded relation, such that the
    arguments of the recursive calls get smaller. But what exactly are
    the arguments of the recursive calls? Isabelle gives us the
    subgoals

    @{subgoals[display,indent=0]}

    So Isabelle seems to know that @{const map} behaves nicely and only
    applies the recursive call @{term "treemap f"} to elements
    of @{term "l"}. Before we discuss where this knowledge comes from,
    let us finish the termination proof:
  *}

  show "wf (measure (size o snd))" by simp
next
  fix f l and x :: "'a tree"
  assume "x \<in> set l"
  thus "((f, x), (f, Branch l)) \<in> measure (size o snd)"
    apply simp

    txt {*
      Simplification returns the following subgoal:

      @{subgoals[display,indent=0]}

      We are lacking a property about the function @{const
      tree_list_size}, which was generated automatically at the
      definition of the @{text tree} type. We should go back and prove
      it, by induction.
    *}
    oops

lemma tree_list_size[simp]: "x \<in> set l \<Longrightarrow> size x < Suc (tree_list_size l)"
by (induct l) auto

text {*
  Now the whole termination proof is automatic:
*}

termination
by lexicographic_order

subsection {* Congruence Rules *}

text {*
  Let's come back to the question of how Isabelle knows about @{const
  map}.

  The knowledge about map is encoded in so-called congruence rules,
  which are special theorems known to the \cmd{function} command. The
  rule for map is

  @{thm[display] map_cong}

  You can read this in the following way: Two applications of @{const
  map} are equal, if the list arguments are equal and the functions
  coincide on the elements of the list. This means that for the value
  @{term "map f l"} we only have to know how @{term f} behaves on
  @{term l}.

  Usually, one such congruence rule is
  needed for each higher-order construct that is used when defining
  new functions. In fact, even basic functions like @{const
  If} and @{const Let} are handled by this mechanism. The congruence
  rule for @{const If} states that the @{text then} branch is only
  relevant if the condition is true, and the @{text else} branch only if it
  is false:

  @{thm[display] if_cong}

  Congruence rules can be added to the
  function package by giving them the @{term fundef_cong} attribute.

  Isabelle comes with predefined congruence rules for most of the
  definitions.
  But if you define your own higher-order constructs, you will have to
  come up with the congruence rules yourself, if you want to use your
  functions in recursive definitions. Since the structure of
  congruence rules is a little unintuitive, here are some exercises:
*}
(*<*)
fun mapeven :: "(nat \<Rightarrow> nat) \<Rightarrow> nat list \<Rightarrow> nat list"
where
  "mapeven f [] = []"
| "mapeven f (x#xs) = (if x mod 2 = 0 then f x # mapeven f xs else x #
  mapeven f xs)"
(*>*)
text {*

  \begin{exercise}
    Find a suitable congruence rule for the following function which
    maps only over the even numbers in a list:

    @{thm[display] mapeven.simps}
  \end{exercise}

  \begin{exercise}
    What happens if the congruence rule for @{const If} is
    disabled by declaring @{text "if_cong[fundef_cong del]"}?
  \end{exercise}

  Note that in some cases there is no \qt{best} congruence rule.
  \fixme

*}
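
text {*
  To give a flavour of the general format, here is a small,
  hypothetical example of a user-defined combinator together with a
  congruence rule for it (both are meant purely as an illustration):
*}

definition apply_twice :: "('a \<Rightarrow> 'a) \<Rightarrow> 'a \<Rightarrow> 'a"
where "apply_twice f x = f (f x)"

lemma apply_twice_cong [fundef_cong]:
  "x = y \<Longrightarrow> (\<And>z. f z = g z) \<Longrightarrow> apply_twice f x = apply_twice g y"
by (simp add: apply_twice_def)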

section {* Appendix: Predefined Congruence Rules *}

(*<*)
syntax (Rule output)
  "==>" :: "prop \<Rightarrow> prop \<Rightarrow> prop"
  ("\<^raw:\mbox{}\inferrule{\mbox{>_\<^raw:}}>\<^raw:{\mbox{>_\<^raw:}}>")

  "_bigimpl" :: "asms \<Rightarrow> prop \<Rightarrow> prop"
  ("\<^raw:\mbox{}\inferrule{>_\<^raw:}>\<^raw:{\mbox{>_\<^raw:}}>")

  "_asms" :: "prop \<Rightarrow> asms \<Rightarrow> asms"
  ("\<^raw:\mbox{>_\<^raw:}\\>/ _")

  "_asm" :: "prop \<Rightarrow> asms" ("\<^raw:\mbox{>_\<^raw:}>")

definition
  FixImp :: "prop \<Rightarrow> prop \<Rightarrow> prop"
where
  "FixImp (PROP A) (PROP B) \<equiv> (PROP A \<Longrightarrow> PROP B)"
notation (output)
  FixImp (infixr "\<Longrightarrow>" 1)

setup {*
let
  val fix_imps = map_aterms (fn Const ("==>", T) => Const ("Functions.FixImp", T) | t => t)
  fun transform t = Logic.list_implies (map fix_imps (Logic.strip_imp_prems t), Logic.strip_imp_concl t)
in
  TermStyle.add_style "topl" (K transform)
end
*}
(*>*)

subsection {* Basic Control Structures *}

text {*

  @{thm_style[mode=Rule] topl if_cong}

  @{thm_style [mode=Rule] topl let_cong}

*}

subsection {* Data Types *}

text {*

  For each \cmd{datatype} definition, a congruence rule for the case
  combinator is registered automatically. Here are the rules for
  @{text "nat"} and @{text "list"}:

  \begin{center}@{thm_style[mode=Rule] topl nat.case_cong}\end{center}

  \begin{center}@{thm_style[mode=Rule] topl list.case_cong}\end{center}

*}

subsection {* List combinators *}

text {*

  @{thm_style[mode=Rule] topl map_cong}

  @{thm_style[mode=Rule] topl filter_cong}

  @{thm_style[mode=Rule] topl foldr_cong}

  @{thm_style[mode=Rule] topl foldl_cong}

  Similar rules exist for @{text takeWhile} and @{text dropWhile}.

*}

subsection {* Sets *}

text {*

  @{thm_style[mode=Rule] topl ball_cong}

  @{thm_style[mode=Rule] topl bex_cong}

  @{thm_style[mode=Rule] topl UN_cong}

  @{thm_style[mode=Rule] topl INT_cong}

  @{thm_style[mode=Rule] topl image_cong}

*}

end