(*  Title:      Doc/IsarAdvanced/Functions/Thy/Fundefs.thy
    ID:         $Id$
    Author:     Alexander Krauss, TU Muenchen

Tutorial for function definitions with the new "function" package.
*)

theory Functions
imports Main
begin

section {* Function Definitions for Dummies *}

text {*
  In most cases, defining a recursive function is just as simple as other definitions:
*}

fun fib :: "nat \<Rightarrow> nat"
where
  "fib 0 = 1"
| "fib (Suc 0) = 1"
| "fib (Suc (Suc n)) = fib n + fib (Suc n)"

text {*
  The syntax is rather self-explanatory: We introduce a function by
  giving its name, its type, and a set of defining recursive equations.
  If we leave out the type, the most general type will be
  inferred, which can sometimes lead to surprises: Since both @{term
  "1::nat"} and @{text "+"} are overloaded, we would end up
  with @{text "fib :: nat \<Rightarrow> 'a::{one,plus}"}.
*}
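
(* A sketch of the surprise described above (the name fib_gen is hypothetical and
   not part of the original text): leaving out the type annotation, as in

     fun fib_gen where
       "fib_gen 0 = 1"
     | "fib_gen (Suc 0) = 1"
     | "fib_gen (Suc (Suc n)) = fib_gen n + fib_gen (Suc n)"

   would presumably be accepted, but with the overly general inferred type
   "fib_gen :: nat \<Rightarrow> 'a::{one,plus}" rather than "nat \<Rightarrow> nat". *)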

text {*
  The function always terminates, since its argument gets smaller in
  every recursive call.
  Since HOL is a logic of total functions, termination is a
  fundamental requirement to prevent inconsistencies\footnote{From the
  \qt{definition} @{text "f(n) = f(n) + 1"} we could prove
  @{text "0 = 1"} by subtracting @{text "f(n)"} on both sides.}.
  Isabelle tries to prove termination automatically when a definition
  is made. In \S\ref{termination}, we will look at cases where this
  fails and see what to do then.
*}

subsection {* Pattern matching *}

text {* \label{patmatch}
  Like in functional programming, we can use pattern matching to
  define functions. At the moment we will only consider \emph{constructor
  patterns}, which only consist of datatype constructors and
  variables. Furthermore, patterns must be linear, i.e.\ all variables
  on the left hand side of an equation must be distinct. In
  \S\ref{genpats} we discuss more general pattern matching.

  If patterns overlap, the order of the equations is taken into
  account. The following function inserts a fixed element between any
  two elements of a list:
*}

fun sep :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list"
where
  "sep a (x#y#xs) = x # a # sep a (y # xs)"
| "sep a xs       = xs"

text {*
  Overlapping patterns are interpreted as \qt{increments} to what is
  already there: The second equation is only meant for the cases where
  the first one does not match. Consequently, Isabelle replaces it
  internally by the remaining cases, making the patterns disjoint:
*}

thm sep.simps

text {* @{thm [display] sep.simps[no_vars]} *}

text {*
  \noindent The equations from function definitions are automatically used in
  simplification:
*}

lemma "sep 0 [1, 2, 3] = [1, 0, 2, 0, 3]"
by simp

subsection {* Induction *}

text {*
  Isabelle provides customized induction rules for recursive
  functions. These rules follow the recursive structure of the
  definition. Here is the rule @{text sep.induct} arising from the
  above definition of @{const sep}:

  @{thm [display] sep.induct}

  We have a step case for lists with at least two elements, and two
  base cases for the zero- and the one-element list. Here is a simple
  proof about @{const sep} and @{const map}:
*}
lemma "map f (sep x ys) = sep (f x) (map f ys)" |
102 |
apply (induct x ys rule: sep.induct) |
|
103 |
||
104 |
txt {* |
|
105 |
We get three cases, like in the definition. |
|
106 |
||
107 |
@{subgoals [display]} |
|
108 |
*} |
|
109 |
||
110 |
apply auto |
|
111 |
done |
|
112 |
text {* |
|
23188 | 113 |
|
114 |
With the \cmd{fun} command, you can define about 80\% of the |
|
115 |
functions that occur in practice. The rest of this tutorial explains |
|
116 |
the remaining 20\%. |
|
22065 | 117 |
*} |
21212 | 118 |
|
119 |
||
23188 | 120 |

section {* fun vs.\ function *}

text {*
  The \cmd{fun} command provides a
  convenient shorthand notation for simple function definitions. In
  this mode, Isabelle tries to solve all the necessary proof obligations
  automatically. If any proof fails, the definition is
  rejected. This can either mean that the definition is indeed faulty,
  or that the default proof procedures are just not smart enough (or
  rather: not designed) to handle the definition.

  By expanding the abbreviation to the more verbose \cmd{function} command, these proof
  obligations become visible and can be analyzed or solved manually. The expansion
  from \cmd{fun} to \cmd{function} is as follows:

\end{isamarkuptext}


\[\left[\;\begin{minipage}{0.25\textwidth}\vspace{6pt}
\cmd{fun} @{text "f :: \<tau>"}\\%
\cmd{where}\\%
\hspace*{2ex}{\it equations}\\%
\hspace*{2ex}\vdots\vspace*{6pt}
\end{minipage}\right]
\quad\equiv\quad
\left[\;\begin{minipage}{0.48\textwidth}\vspace{6pt}
\cmd{function} @{text "("}\cmd{sequential}@{text ") f :: \<tau>"}\\%
\cmd{where}\\%
\hspace*{2ex}{\it equations}\\%
\hspace*{2ex}\vdots\\%
\cmd{by} @{text "pat_completeness auto"}\\%
\cmd{termination by} @{text "lexicographic_order"}\vspace{6pt}
\end{minipage}
\right]\]

\begin{isamarkuptext}
  \vspace*{1em}
  \noindent Some details have now become explicit:

  \begin{enumerate}
  \item The \cmd{sequential} option enables the preprocessing of
  pattern overlaps which we already saw. Without this option, the equations
  must already be disjoint and complete. The automatic completion only
  works with constructor patterns.

  \item A function definition produces a proof obligation which
  expresses completeness and compatibility of patterns (we talk about
  this later). The combination of the methods @{text "pat_completeness"} and
  @{text "auto"} is used to solve this proof obligation.

  \item A termination proof follows the definition, started by the
  \cmd{termination} command. This will be explained in \S\ref{termination}.
  \end{enumerate}
  Whenever a \cmd{fun} command fails, it is usually a good idea to
  expand the syntax to the more verbose \cmd{function} form, to see
  what is actually going on.
*}
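
(* For instance, the fib definition from above could be written in the expanded
   form shown in the scheme above (fib3 is a hypothetical name, not part of the
   original text):

     function (sequential) fib3 :: "nat \<Rightarrow> nat"
     where
       "fib3 0 = 1"
     | "fib3 (Suc 0) = 1"
     | "fib3 (Suc (Suc n)) = fib3 n + fib3 (Suc n)"
     by pat_completeness auto
     termination by lexicographic_order
*)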

section {* Termination *}

text {*\label{termination}
  The method @{text "lexicographic_order"} is the default method for
  termination proofs. It can prove termination of a
  certain class of functions by searching for a suitable lexicographic
  combination of size measures. Of course, not all functions have such
  a simple termination argument. For them, we can specify the termination
  relation manually.
*}

subsection {* The {\tt relation} method *}
text{*
  Consider the following function, which sums up natural numbers up to
  @{text "N"}, using a counter @{text "i"}:
*}

function sum :: "nat \<Rightarrow> nat \<Rightarrow> nat"
where
  "sum i N = (if i > N then 0 else i + sum (Suc i) N)"
by pat_completeness auto

text {*
  \noindent The @{text "lexicographic_order"} method fails on this example, because none of the
  arguments decreases in the recursive call, with respect to the standard size ordering.
  To prove termination manually, we must provide a custom wellfounded relation.

  The termination argument for @{text "sum"} is based on the fact that
  the \emph{difference} between @{text "i"} and @{text "N"} gets
  smaller in every step, and that the recursion stops when @{text "i"}
  is greater than @{text "N"}. Phrased differently, the expression
  @{text "N + 1 - i"} always decreases.

  We can use this expression as a measure function suitable to prove termination.
*}

termination sum
apply (relation "measure (\<lambda>(i,N). N + 1 - i)")

txt {*
  The \cmd{termination} command sets up the termination goal for the
  specified function @{text "sum"}. If the function name is omitted, it
  implicitly refers to the last function definition.

  The @{text relation} method takes a relation of
  type @{typ "('a \<times> 'a) set"}, where @{typ "'a"} is the argument type of
  the function. If the function has multiple curried arguments, then
  these are packed together into a tuple, as happened in the above
  example.

  The predefined function @{term[source] "measure :: ('a \<Rightarrow> nat) \<Rightarrow> ('a \<times> 'a) set"} constructs a
  wellfounded relation from a mapping into the natural numbers (a
  \emph{measure function}).

  After the invocation of @{text "relation"}, we must prove that (a)
  the relation we supplied is wellfounded, and (b) that the arguments
  of recursive calls indeed decrease with respect to the
  relation:

  @{subgoals[display,indent=0]}

  These goals are all solved by @{text "auto"}:
*}

apply auto
done

text {*
  Let us complicate the function a little, by adding some more
  recursive calls:
*}

function foo :: "nat \<Rightarrow> nat \<Rightarrow> nat"
where
  "foo i N = (if i > N
              then (if N = 0 then 0 else foo 0 (N - 1))
              else i + foo (Suc i) N)"
by pat_completeness auto

text {*
  When @{text "i"} has reached @{text "N"}, it starts at zero again
  and @{text "N"} is decremented.
  This corresponds to a nested
  loop where one index counts up and the other down. Termination can
  be proved using a lexicographic combination of two measures, namely
  the value of @{text "N"} and the above difference. The @{const
  "measures"} combinator generalizes @{text "measure"} by taking a
  list of measure functions.
*}

termination
by (relation "measures [\<lambda>(i, N). N, \<lambda>(i,N). N + 1 - i]") auto

subsection {* How @{text "lexicographic_order"} works *}

(*fun fails :: "nat \<Rightarrow> nat list \<Rightarrow> nat"
where
  "fails a [] = a"
| "fails a (x#xs) = fails (x + a) (x # xs)"
*)

text {*
  To see how the automatic termination proofs work, let's look at an
  example where it fails\footnote{For a detailed discussion of the
  termination prover, see \cite{bulwahnKN07}}:

\end{isamarkuptext}
\cmd{fun} @{text "fails :: \"nat \<Rightarrow> nat list \<Rightarrow> nat\""}\\%
\cmd{where}\\%
\hspace*{2ex}@{text "\"fails a [] = a\""}\\%
|\hspace*{1.5ex}@{text "\"fails a (x#xs) = fails (x + a) (x#xs)\""}\\
\begin{isamarkuptext}

\noindent Isabelle responds with the following error:

\begin{isabelle}
*** Unfinished subgoals:\newline
*** (a, 1, <):\newline
*** \ 1.~@{text "\<And>x. x = 0"}\newline
*** (a, 1, <=):\newline
*** \ 1.~False\newline
*** (a, 2, <):\newline
*** \ 1.~False\newline
*** Calls:\newline
*** a) @{text "(a, x # xs) -->> (x + a, x # xs)"}\newline
*** Measures:\newline
*** 1) @{text "\<lambda>x. size (fst x)"}\newline
*** 2) @{text "\<lambda>x. size (snd x)"}\newline
*** Result matrix:\newline
*** \ \ \ \ 1\ \ 2\newline
*** a:  ?   <=\newline
*** Could not find lexicographic termination order.\newline
*** At command "fun".\newline
\end{isabelle}
*}

text {*
  The key to this error message is the matrix at the bottom. The rows
  of that matrix correspond to the different recursive calls (in our
  case, there is just one). The columns are the function's arguments
  (expressed through different measure functions, which map the
  argument tuple to a natural number).

  The contents of the matrix summarize what is known about argument
  descents: The second argument has a weak descent (@{text "<="}) at the
  recursive call, and for the first argument nothing could be proved,
  which is expressed by @{text "?"}. In general, there are the values
  @{text "<"}, @{text "<="} and @{text "?"}.

  For the failed proof attempts, the unfinished subgoals are also
  printed. Looking at these will often point to a missing lemma.

%  As a more realistic example, here is quicksort:
*}
(*
function qs :: "nat list \<Rightarrow> nat list"
where
  "qs [] = []"
| "qs (x#xs) = qs [y\<in>xs. y < x] @ x # qs [y\<in>xs. y \<ge> x]"
by pat_completeness auto

termination apply lexicographic_order

text {* If we try the @{text "lexicographic_order"} method, we get the
  following error *}
termination by (lexicographic_order simp:l2)

lemma l: "x \<le> y \<Longrightarrow> x < Suc y" by arith

function
*)

section {* Mutual Recursion *}

text {*
  If two or more functions call one another mutually, they have to be defined
  in one step. Here are @{text "even"} and @{text "odd"}:
*}

function even :: "nat \<Rightarrow> bool"
    and odd  :: "nat \<Rightarrow> bool"
where
  "even 0 = True"
| "odd 0 = False"
| "even (Suc n) = odd n"
| "odd (Suc n) = even n"
by pat_completeness auto

text {*
  To eliminate the mutual dependencies, Isabelle internally
  creates a single function operating on the sum
  type @{typ "nat + nat"}. Then, @{const even} and @{const odd} are
  defined as projections. Consequently, termination has to be proved
  simultaneously for both functions, by specifying a measure on the
  sum type:
*}

termination
by (relation "measure (\<lambda>x. case x of Inl n \<Rightarrow> n | Inr n \<Rightarrow> n)") auto

text {*
  We could also have used @{text lexicographic_order}, which
  supports termination proofs for mutually recursive functions to a certain extent.
*}
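
(* For illustration only: with that method, the manual relation above would be
   replaced by the single line

     termination by lexicographic_order

   Whether this succeeds here depends on the measures the method tries; the manual
   proof above is the one actually checked in this tutorial. *)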
|

subsection {* Induction for mutual recursion *}

text {*
  When functions are mutually recursive, proving properties about them
  generally requires simultaneous induction. The induction rule @{text "even_odd.induct"}
  generated from the above definition reflects this.

  Let us prove something about @{const even} and @{const odd}:
*}

lemma even_odd_mod2:
  "even n = (n mod 2 = 0)"
  "odd n = (n mod 2 = 1)"

txt {*
  We apply simultaneous induction, specifying the induction variable
  for both goals, separated by \cmd{and}: *}

apply (induct n and n rule: even_odd.induct)

txt {*
  We get four subgoals, which correspond to the clauses in the
  definition of @{const even} and @{const odd}:
  @{subgoals[display,indent=0]}
  Simplification solves the first two goals, leaving us with two
  statements about the @{text "mod"} operation to prove:
*}

apply simp_all

txt {*
  @{subgoals[display,indent=0]}

  \noindent These can be handled by Isabelle's arithmetic decision procedures.
*}

apply arith
apply arith
done

text {*
  In proofs like this, the simultaneous induction is really essential:
  Even if we are just interested in one of the results, the other
  one is necessary to strengthen the induction hypothesis. If we leave
  out the statement about @{const odd} and just write @{term True} instead,
  the same proof fails:
*}

lemma failed_attempt:
  "even n = (n mod 2 = 0)"
  "True"
apply (induct n rule: even_odd.induct)

txt {*
  \noindent Now the third subgoal is a dead end, since we have no
  useful induction hypothesis available:

  @{subgoals[display,indent=0]}
*}

oops

section {* General pattern matching *}
text{*\label{genpats} *}

subsection {* Avoiding automatic pattern splitting *}

text {*
  Up to now, we used pattern matching only on datatypes, and the
  patterns were always disjoint and complete, and if they weren't,
  they were made disjoint automatically like in the definition of
  @{const "sep"} in \S\ref{patmatch}.

  This automatic splitting can significantly increase the number of
  equations involved, and this is not always desirable. The following
  example shows the problem:

  Suppose we are modeling incomplete knowledge about the world by a
  three-valued datatype, which has values @{term "T"}, @{term "F"}
  and @{term "X"} for true, false and uncertain propositions, respectively.
*}

datatype P3 = T | F | X

text {* \noindent Then the conjunction of such values can be defined as follows: *}

fun And :: "P3 \<Rightarrow> P3 \<Rightarrow> P3"
where
  "And T p = p"
| "And p T = p"
| "And p F = F"
| "And F p = F"
| "And X X = X"

text {*
  This definition is useful, because the equations can directly be used
  as simplification rules. But the patterns overlap: For example,
  the expression @{term "And T T"} is matched by both the first and
  the second equation. By default, Isabelle makes the patterns disjoint by
  splitting them up, producing instances:
*}

thm And.simps

text {*
  @{thm[indent=4] And.simps}

  \vspace*{1em}
  \noindent There are several problems with this:

  \begin{enumerate}
  \item If the datatype has many constructors, there can be an
  explosion of equations. For @{const "And"}, we get seven instead of
  five equations, which can be tolerated, but this is just a small
  example.

  \item Since splitting makes the equations \qt{less general}, they
  do not always match in rewriting. While the term @{term "And x F"}
  can be simplified to @{term "F"} with the original equations, a
  (manual) case split on @{term "x"} is now necessary.

  \item The splitting also concerns the induction rule @{text
  "And.induct"}. Instead of five premises it now has seven, which
  means that our induction proofs will have more cases.

  \item In general, it increases clarity if we get the same definition
  back which we put in.
  \end{enumerate}

  If we do not want the automatic splitting, we can switch it off by
  leaving out the \cmd{sequential} option. However, we will have to
  prove that our pattern matching is consistent\footnote{This prevents
  us from defining something like @{term "f x = True"} and @{term "f x
  = False"} simultaneously.}:
*}

function And2 :: "P3 \<Rightarrow> P3 \<Rightarrow> P3"
where
  "And2 T p = p"
| "And2 p T = p"
| "And2 p F = F"
| "And2 F p = F"
| "And2 X X = X"

txt {*
  \noindent Now let's look at the proof obligations generated by a
  function definition. In this case, they are:

  @{subgoals[display,indent=0]}\vspace{-1.2em}\hspace{3cm}\vdots\vspace{1.2em}

  The first subgoal expresses the completeness of the patterns. It has
  the form of an elimination rule and states that every @{term x} of
  the function's input type must match at least one of the patterns\footnote{Completeness could
  be equivalently stated as a disjunction of existential statements:
  @{term "(\<exists>p. x = (T, p)) \<or> (\<exists>p. x = (p, T)) \<or> (\<exists>p. x = (p, F)) \<or>
  (\<exists>p. x = (F, p)) \<or> (x = (X, X))"}, and you can use the method @{text atomize_elim} to get that form instead.}. If the patterns just involve
  datatypes, we can solve it with the @{text "pat_completeness"}
  method:
*}

apply pat_completeness

txt {*
  The remaining subgoals express \emph{pattern compatibility}. We do
  allow that an input value matches multiple patterns, but in this
  case, the result (i.e.~the right hand sides of the equations) must
  also be equal. For each pair of two patterns, there is one such
  subgoal. Usually this needs injectivity of the constructors, which
  is used automatically by @{text "auto"}.
*}

by auto

subsection {* Non-constructor patterns *}

text {*
  Most of Isabelle's basic types take the form of inductive datatypes,
  and usually pattern matching works on the constructors of such types.
  However, this need not always be the case, and the \cmd{function}
  command handles other kinds of patterns, too.

  One well-known instance of non-constructor patterns are
  so-called \emph{$n+k$-patterns}, which are a little controversial in
  the functional programming world. Here is the initial fibonacci
  example with $n+k$-patterns:
*}

function fib2 :: "nat \<Rightarrow> nat"
where
  "fib2 0 = 1"
| "fib2 1 = 1"
| "fib2 (n + 2) = fib2 n + fib2 (Suc n)"

(*<*)ML_val "goals_limit := 1"(*>*)
txt {*
  This kind of matching is again justified by the proof of pattern
  completeness and compatibility.
  The proof obligation for pattern completeness states that every natural number is
  either @{term "0::nat"}, @{term "1::nat"} or @{term "n +
  (2::nat)"}:

  @{subgoals[display,indent=0]}

  This is an arithmetic triviality, but unfortunately the
  @{text arith} method cannot handle this specific form of an
  elimination rule. However, we can use the method @{text
  "atomize_elim"} to do an ad-hoc conversion to a disjunction of
  existentials, which can then be solved by the arithmetic decision procedure.
  Pattern compatibility and termination are automatic as usual.
*}
(*<*)ML_val "goals_limit := 10"(*>*)
apply atomize_elim
apply arith
apply auto
done

termination by lexicographic_order

text {*
  We can stretch the notion of pattern matching even more. The
  following function is not a sensible functional program, but a
  perfectly valid mathematical definition:
*}

function ev :: "nat \<Rightarrow> bool"
where
  "ev (2 * n) = True"
| "ev (2 * n + 1) = False"
apply atomize_elim
by arith+
termination by (relation "{}") simp

text {*
  This general notion of pattern matching gives you a certain freedom
  in writing down specifications. However, as always, such freedom should
  be used with care:

  If we leave the area of constructor
  patterns, we have effectively departed from the world of functional
  programming. This means that it is no longer possible to use the
  code generator, and expect it to generate ML code for our
  definitions. Also, such a specification might not work very well together with
  simplification. Your mileage may vary.
*}


subsection {* Conditional equations *}

text {*
  The function package also supports conditional equations, which are
  similar to guards in a language like Haskell. Here is Euclid's
  algorithm written with conditional patterns\footnote{Note that the
  patterns are also overlapping in the base case}:
*}

function gcd :: "nat \<Rightarrow> nat \<Rightarrow> nat"
where
  "gcd x 0 = x"
| "gcd 0 y = y"
| "x < y \<Longrightarrow> gcd (Suc x) (Suc y) = gcd (Suc x) (y - x)"
| "\<not> x < y \<Longrightarrow> gcd (Suc x) (Suc y) = gcd (x - y) (Suc y)"
by (atomize_elim, auto, arith)
termination by lexicographic_order

text {*
  By now, you can probably guess what the proof obligations for the
  pattern completeness and compatibility look like.

  Again, functions with conditional patterns are not supported by the
  code generator.
*}

subsection {* Pattern matching on strings *}

text {*
  Since strings are lists of characters, and thus ordinary datatypes, pattern
  matching on them is possible, but somewhat problematic. Consider the
  following definition:

\end{isamarkuptext}
\noindent\cmd{fun} @{text "check :: \"string \<Rightarrow> bool\""}\\%
\cmd{where}\\%
\hspace*{2ex}@{text "\"check (''good'') = True\""}\\%
@{text "| \"check s = False\""}
\begin{isamarkuptext}

  \noindent An invocation of the above \cmd{fun} command does not
  terminate. What is the problem? Strings are lists of characters, and
  characters are a datatype with a lot of constructors. Splitting the
  catch-all pattern thus leads to an explosion of cases, which cannot
  be handled by Isabelle.

  There are two things we can do here. Either we write an explicit
  @{text "if"} on the right hand side, or we can use conditional patterns:
*}

function check :: "string \<Rightarrow> bool"
where
  "check (''good'') = True"
| "s \<noteq> ''good'' \<Longrightarrow> check s = False"
by auto
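
(* A sketch of the first alternative mentioned above, using an explicit "if" on the
   right hand side (the name check2 is hypothetical, not part of the original text):

     fun check2 :: "string \<Rightarrow> bool"
     where
       "check2 s = (if s = ''good'' then True else False)"
*)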
|

section {* Partiality *}

text {*
  In HOL, all functions are total. A function @{term "f"} applied to
  @{term "x"} always has the value @{term "f x"}, and there is no notion
  of undefinedness.
  This is why we have to do termination
  proofs when defining functions: The proof justifies that the
  function can be defined by wellfounded recursion.

  However, the \cmd{function} package does support partiality to a
  certain extent. Let's look at the following function, which looks
  for a zero of a given function @{text "f"}.
*}

function (*<*)(domintros, tailrec)(*>*)findzero :: "(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat"
where
  "findzero f n = (if f n = 0 then n else findzero f (Suc n))"
by pat_completeness auto
(*<*)declare findzero.simps[simp del](*>*)

text {*
  \noindent Clearly, any attempt at a termination proof must fail. And without
  that, we do not get the usual rules @{text "findzero.simps"} and
  @{text "findzero.induct"}. So what was the definition good for at all?
*}

subsection {* Domain predicates *}

text {*
  The trick is that Isabelle has not only defined the function @{const findzero}, but also
  a predicate @{term "findzero_dom"} that characterizes the values where the function
  terminates: the \emph{domain} of the function. If we treat a
  partial function just as a total function with an additional domain
  predicate, we can derive simplification and
  induction rules as we do for total functions. They are guarded
  by domain conditions and are called @{text psimps} and @{text
  pinduct}:
*}

text {*
  \noindent\begin{minipage}{0.79\textwidth}@{thm[display,margin=85] findzero.psimps}\end{minipage}
  \hfill(@{text "findzero.psimps"})
  \vspace{1em}

  \noindent\begin{minipage}{0.79\textwidth}@{thm[display,margin=85] findzero.pinduct}\end{minipage}
  \hfill(@{text "findzero.pinduct"})
*}

text {*
  Remember that all we
  are doing here is use some tricks to make a total function appear
  as if it was partial. We can still write the term @{term "findzero
  (\<lambda>x. 1) 0"} and like any other term of type @{typ nat} it is equal
  to some natural number, although we might not be able to find out
  which one. The function is \emph{underdefined}.

  But it is defined enough to prove something interesting about it. We
  can prove that if @{term "findzero f n"}
  terminates, it indeed returns a zero of @{term f}:
*}

lemma findzero_zero: "findzero_dom (f, n) \<Longrightarrow> f (findzero f n) = 0"

txt {* \noindent We apply induction as usual, but using the partial induction
  rule: *}

apply (induct f n rule: findzero.pinduct)

txt {* \noindent This gives the following subgoals:

  @{subgoals[display,indent=0]}

  \noindent The hypothesis in our lemma was used to satisfy the first premise in
  the induction rule. However, we also get @{term
  "findzero_dom (f, n)"} as a local assumption in the induction step. This
  allows us to unfold @{term "findzero f n"} using the @{text psimps}
  rule, and the rest is trivial. Since the @{text psimps} rules carry the
  @{text "[simp]"} attribute by default, we just need a single step:
*}
apply simp
done

text {*
  Proofs about partial functions are often not harder than for total
  functions. Fig.~\ref{findzero_isar} shows a slightly more
  complicated proof written in Isar. It is verbose enough to show how
  partiality comes into play: From the partial induction, we get an
  additional domain condition hypothesis. Observe how this condition
  is applied when calls to @{term findzero} are unfolded.
*}

text_raw {*
\begin{figure}
\hrule\vspace{6pt}
\begin{minipage}{0.8\textwidth}
\isabellestyle{it}
\isastyle\isamarkuptrue
*}
lemma "\<lbrakk>findzero_dom (f, n); x \<in> {n ..< findzero f n}\<rbrakk> \<Longrightarrow> f x \<noteq> 0"
proof (induct rule: findzero.pinduct)
  fix f n assume dom: "findzero_dom (f, n)"
    and IH: "\<lbrakk>f n \<noteq> 0; x \<in> {Suc n ..< findzero f (Suc n)}\<rbrakk> \<Longrightarrow> f x \<noteq> 0"
    and x_range: "x \<in> {n ..< findzero f n}"
  have "f n \<noteq> 0"
  proof
    assume "f n = 0"
    with dom have "findzero f n = n" by simp
    with x_range show False by auto
  qed

  from x_range have "x = n \<or> x \<in> {Suc n ..< findzero f n}" by auto
  thus "f x \<noteq> 0"
  proof
    assume "x = n"
    with `f n \<noteq> 0` show ?thesis by simp
  next
    assume "x \<in> {Suc n ..< findzero f n}"
    with dom and `f n \<noteq> 0` have "x \<in> {Suc n ..< findzero f (Suc n)}" by simp
    with IH and `f n \<noteq> 0`
    show ?thesis by simp
  qed
qed
text_raw {*
\isamarkupfalse\isabellestyle{tt}
\end{minipage}\vspace{6pt}\hrule
\caption{A proof about a partial function}\label{findzero_isar}
\end{figure}
*}

subsection {* Partial termination proofs *}

text {*
  Now that we have proved some interesting properties about our
  function, we should turn to the domain predicate and see if it is
  actually true for some values. Otherwise we would have just proved
  lemmas with @{term False} as a premise.

  Essentially, we need some introduction rules for @{text
  findzero_dom}. The function package can prove such domain
  introduction rules automatically. But since they are not used very
  often (they are almost never needed if the function is total), this
  functionality is disabled by default for efficiency reasons. So we have to go
  back and ask for them explicitly by passing the @{text
  "(domintros)"} option to the function package:

\vspace{1ex}
\noindent\cmd{function} @{text "(domintros) findzero :: \"(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat\""}\\%
\cmd{where}\isanewline%
\ \ \ldots\\

  \noindent Now the package has proved an introduction rule for @{text findzero_dom}:
*}

thm findzero.domintros

text {*
  @{thm[display] findzero.domintros}

  Domain introduction rules allow us to show that a given value lies in the
  domain of a function, if the arguments of all recursive calls
  are in the domain as well. They allow us to do a \qt{single step} in a
  termination proof. Usually, you want to combine them with a suitable
  induction principle.

  Since our function increases its argument at recursive calls, we
  need an induction principle which works \qt{backwards}. We will use
  @{text inc_induct}, which allows us to do induction from a fixed number
  \qt{downwards}:

  \begin{center}@{thm inc_induct}\hfill(@{text "inc_induct"})\end{center}

  Figure \ref{findzero_term} gives a detailed Isar proof of the fact
  that @{text findzero} terminates if there is a zero which is greater
  than or equal to @{term n}. First we derive two useful rules which will
  solve the base case and the step case of the induction. The
  induction is then straightforward, except for the unusual induction
  principle.
*}

text_raw {*
\begin{figure}
\hrule\vspace{6pt}
\begin{minipage}{0.8\textwidth}
\isabellestyle{it}
\isastyle\isamarkuptrue
*}
lemma findzero_termination:
  assumes "x \<ge> n" and "f x = 0"
  shows "findzero_dom (f, n)"
proof -
  have base: "findzero_dom (f, x)"
    by (rule findzero.domintros) (simp add:`f x = 0`)

  have step: "\<And>i. findzero_dom (f, Suc i)
    \<Longrightarrow> findzero_dom (f, i)"
    by (rule findzero.domintros) simp

  from `x \<ge> n` show ?thesis
  proof (induct rule:inc_induct)
    show "findzero_dom (f, x)" by (rule base)
  next
    fix i assume "findzero_dom (f, Suc i)"
    thus "findzero_dom (f, i)" by (rule step)
  qed
qed
text_raw {*
\isamarkupfalse\isabellestyle{tt}
\end{minipage}\vspace{6pt}\hrule
\caption{Termination proof for @{text findzero}}\label{findzero_term}
\end{figure}
*}

text {*
  Again, the proof given in Fig.~\ref{findzero_term} has a lot of
  detail in order to explain the principles. Using more automation, we
  can also have a short proof:
*}

lemma findzero_termination_short:
  assumes zero: "x >= n"
  assumes [simp]: "f x = 0"
  shows "findzero_dom (f, n)"
using zero
by (induct rule:inc_induct) (auto intro: findzero.domintros)

text {*
  \noindent It is simple to combine the partial correctness result with the
  termination lemma:
*}

lemma findzero_total_correctness:
  "f x = 0 \<Longrightarrow> f (findzero f 0) = 0"
by (blast intro: findzero_zero findzero_termination)

subsection {* Definition of the domain predicate *}

text {*
  Sometimes it is useful to know what the definition of the domain
  predicate looks like. Actually, @{text findzero_dom} is just an
  abbreviation:

  @{abbrev[display] findzero_dom}

  The domain predicate is the \emph{accessible part} of a relation @{const
  findzero_rel}, which was also created internally by the function
  package. @{const findzero_rel} is just a normal
  inductive predicate, so we can inspect its definition by
  looking at the introduction rules @{text findzero_rel.intros}.
  In our case there is just a single rule:

  @{thm[display] findzero_rel.intros}

  The predicate @{const findzero_rel}
  describes the \emph{recursion relation} of the function
  definition. The recursion relation is a binary relation on
  the arguments of the function that relates each argument to its
  recursive calls. In general, there is one introduction rule for each
  recursive call.

  The predicate @{term "accp findzero_rel"} is the accessible part of
  that relation. An argument belongs to the accessible part, if it can
  be reached in a finite number of steps (cf.~its definition in @{text
  "Accessible_Part.thy"}).

  Since the domain predicate is just an abbreviation, you can use
  lemmas for @{const accp} and @{const findzero_rel} directly. Some
  lemmas which are occasionally useful are @{text accpI}, @{text
  accp_downward}, and of course the introduction and elimination rules
  for the recursion relation @{text "findzero.intros"} and @{text "findzero.cases"}.
*}

(*lemma findzero_nicer_domintros:
  "f x = 0 \<Longrightarrow> findzero_dom (f, x)"
  "findzero_dom (f, Suc x) \<Longrightarrow> findzero_dom (f, x)"
by (rule accpI, erule findzero_rel.cases, auto)+
*)

subsection {* A Useful Special Case: Tail recursion *}

text {*
  The domain predicate is our trick that allows us to model partiality
  in a world of total functions. The downside of this is that we have
  to carry it around all the time. The termination proof above allowed
  us to replace the abstract @{term "findzero_dom (f, n)"} by the more
  concrete @{term "(x \<ge> n \<and> f x = (0::nat))"}, but the condition is still
  there and can only be discharged for special cases.
  In particular, the domain predicate guards the unfolding of our
  function, since it is there as a condition in the @{text psimp}
  rules.

  Now there is an important special case: We can actually get rid
  of the condition in the simplification rules, \emph{if the function
  is tail-recursive}. The reason is that for all tail-recursive
  equations there is a total function satisfying them, even if they
  are non-terminating.

%  A function is tail recursive, if each call to the function is either
%  equal
%
%  So the outer form of the
%
%if it can be written in the following
%  form:
%  {term[display] "f x = (if COND x then BASE x else f (LOOP x))"}


  The function package internally does the right construction and can
  derive the unconditional simp rules, if we ask it to do so. Luckily,
  our @{const "findzero"} function is tail-recursive, so we can just go
  back and add another option to the \cmd{function} command:

\vspace{1ex}
\noindent\cmd{function} @{text "(domintros, tailrec) findzero :: \"(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat\""}\\%
\cmd{where}\isanewline%
\ \ \ldots\\%

  \noindent Now, we actually get unconditional simplification rules, even
  though the function is partial:
*}

thm findzero.simps

text {*
  @{thm[display] findzero.simps}

  \noindent Of course these would make the simplifier loop, so we better remove
  them from the simpset:
*}

declare findzero.simps[simp del]

text {*
  Getting rid of the domain conditions in the simplification rules is
  not only useful because it simplifies proofs. It is also required in
  order to use Isabelle's code generator to generate ML code
  from a function definition.
  Since the code generator only works with equations, it cannot be
  used with @{text "psimp"} rules. Thus, in order to generate code for
  partial functions, they must be defined as a tail recursion.
  Luckily, many functions have a relatively natural tail recursive
  definition.
*}

section {* Nested recursion *}

text {*
  Recursive calls which are nested in one another frequently cause
  complications, since their termination proof can depend on a partial
  correctness property of the function itself.

  As a small example, we define the \qt{nested zero} function:
*}

function nz :: "nat \<Rightarrow> nat"
where
  "nz 0 = 0"
| "nz (Suc n) = nz (nz n)"
by pat_completeness auto

text {*
  If we attempt to prove termination using the identity measure on
  naturals, this fails:
*}

termination
  apply (relation "measure (\<lambda>n. n)")
  apply auto

txt {*
  We get stuck with the subgoal

  @{subgoals[display]}

  Of course this statement is true, since we know that @{const nz} is
  the zero function. And in fact we have no problem proving this
  property by induction.
*}
(*<*)oops(*>*)
lemma nz_is_zero: "nz_dom n \<Longrightarrow> nz n = 0"
  by (induct rule:nz.pinduct) auto

text {*
  We formulate this as a partial correctness lemma with the condition
  @{term "nz_dom n"}. This allows us to prove it with the @{text
  pinduct} rule before we have proved termination. With this lemma,
  the termination proof works as expected:
*}

termination
  by (relation "measure (\<lambda>n. n)") (auto simp: nz_is_zero)

text {*
  As a general strategy, one should prove the statements needed for
  termination as a partial property first. Then they can be used to do
  the termination proof. This also works for less trivial
  examples. Figure \ref{f91} defines the 91-function, a well-known
  challenge problem due to John McCarthy, and proves its termination.
*}

text_raw {*
\begin{figure}
\hrule\vspace{6pt}
\begin{minipage}{0.8\textwidth}
\isabellestyle{it}
\isastyle\isamarkuptrue
*}

function f91 :: "nat \<Rightarrow> nat"
where
  "f91 n = (if 100 < n then n - 10 else f91 (f91 (n + 11)))"
by pat_completeness auto

lemma f91_estimate:
  assumes trm: "f91_dom n"
  shows "n < f91 n + 11"
using trm by induct auto

termination
proof
  let ?R = "measure (\<lambda>x. 101 - x)"
  show "wf ?R" ..

  fix n :: nat assume "\<not> 100 < n" -- "Assumptions for both calls"

  thus "(n + 11, n) \<in> ?R" by simp -- "Inner call"

  assume inner_trm: "f91_dom (n + 11)" -- "Outer call"
  with f91_estimate have "n + 11 < f91 (n + 11) + 11" .
  with `\<not> 100 < n` show "(f91 (n + 11), n) \<in> ?R" by simp
qed

text_raw {*
\isamarkupfalse\isabellestyle{tt}
\end{minipage}
\vspace{6pt}\hrule
\caption{McCarthy's 91-function}\label{f91}
\end{figure}
*}

section {* Higher-Order Recursion *}

text {*
  Higher-order recursion occurs when recursive calls
  are passed as arguments to higher-order combinators such as @{term
  map}, @{term filter} etc.
  As an example, imagine a datatype of n-ary trees:
*}

datatype 'a tree =
  Leaf 'a
| Branch "'a tree list"


text {* \noindent We can define a function which swaps the left and right subtrees recursively, using the
  list functions @{const rev} and @{const map}: *}

fun mirror :: "'a tree \<Rightarrow> 'a tree"
where
  "mirror (Leaf n) = Leaf n"
| "mirror (Branch l) = Branch (rev (map mirror l))"

text {*
  Although the definition is accepted without problems, let us look at the termination proof:
*}

termination proof
  txt {*

  As usual, we have to give a wellfounded relation, such that the
  arguments of the recursive calls get smaller. But what exactly are
  the arguments of the recursive calls when @{const mirror} is given as an
  argument to @{const map}? Isabelle gives us the
  subgoals

  @{subgoals[display,indent=0]}

  So the system seems to know that @{const map} only
  applies the recursive call @{term "mirror"} to elements
  of @{term "l"}, which is essential for the termination proof.

  This knowledge about @{const map} is encoded in so-called congruence rules,
  which are special theorems known to the \cmd{function} command. The
  rule for @{const map} is

  @{thm[display] map_cong}

  You can read this in the following way: Two applications of @{const
  map} are equal, if the list arguments are equal and the functions
  coincide on the elements of the list. This means that for the value
  @{term "map f l"} we only have to know how @{term f} behaves on
  the elements of @{term l}.

  Usually, one such congruence rule is
  needed for each higher-order construct that is used when defining
  new functions. In fact, even basic functions like @{const
  If} and @{const Let} are handled by this mechanism. The congruence
  rule for @{const If} states that the @{text then} branch is only
  relevant if the condition is true, and the @{text else} branch only if it
  is false:

  @{thm[display] if_cong}

  Congruence rules can be added to the
  function package by giving them the @{term fundef_cong} attribute.

  The constructs that are predefined in Isabelle usually
  come with the respective congruence rules.
  But if you define your own higher-order functions, you may have to
  state and prove the required congruence rules yourself, if you want to use your
  functions in recursive definitions.
  *}
(*<*)oops(*>*)
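
(* A sketch of what such a user-stated congruence rule might look like, for a
   hypothetical higher-order combinator maptree on the 'a tree type above
   (maptree, elems and the lemma name are illustrative only, not part of the
   original text):

     lemma maptree_cong [fundef_cong]:
       "t = t' \<Longrightarrow> (\<And>x. x \<in> elems t' \<Longrightarrow> f x = g x) \<Longrightarrow> maptree f t = maptree g t'"

   Declaring the rule with the fundef_cong attribute makes it available to the
   function package, as described above. *)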

subsection {* Congruence Rules and Evaluation Order *}

text {*
  Higher order logic differs from functional programming languages in
  that it has no built-in notion of evaluation order. A program is
  just a set of equations, and it is not specified how they must be
  evaluated.

  However, for the purpose of function definition, we must talk about
  evaluation order implicitly, when we reason about termination.
  Congruence rules express that a certain evaluation order is
  consistent with the logical definition.

  Consider the following function.
*}

function f :: "nat \<Rightarrow> bool"
where
  "f n = (n = 0 \<or> f (n - 1))"
(*<*)by pat_completeness auto(*>*)

text {*
  For this definition, the termination proof fails. The default configuration
  specifies no congruence rule for disjunction. We have to add a
  congruence rule that specifies left-to-right evaluation order:

  \vspace{1ex}
  \noindent @{thm disj_cong}\hfill(@{text "disj_cong"})
  \vspace{1ex}

  Now the definition works without problems. Note how the termination
  proof depends on the extra condition that we get from the congruence
  rule.

  However, as evaluation is not a hard-wired concept, we
  could just turn everything around by declaring a different
  congruence rule. Then we can make the reverse definition:
*}

lemma disj_cong2[fundef_cong]:
  "(\<not> Q' \<Longrightarrow> P = P') \<Longrightarrow> (Q = Q') \<Longrightarrow> (P \<or> Q) = (P' \<or> Q')"
  by blast

fun f' :: "nat \<Rightarrow> bool"
where
  "f' n = (f' (n - 1) \<or> n = 0)"

text {*
  \noindent These examples show that, in general, there is no \qt{best} set of
  congruence rules.

  However, such tweaking should rarely be necessary in
  practice, as most of the time, the default set of congruence rules
  works well.
*}

end