(*<*)
theory Types_and_funs
imports Main
begin
(*>*)
text{*
\vspace{-5ex}
\section{Type and function definitions}

Type synonyms are abbreviations for existing types, for example
*}

type_synonym string = "char list"

text{*
Type synonyms are expanded after parsing and are not present in the internal representation or in the output. They are mere conveniences for the reader.

\subsection{Datatypes}
\label{sec:datatypes}

The general form of a datatype definition looks like this:
\begin{quote}
\begin{tabular}{@ {}rclcll}
\isacom{datatype} @{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"}
& = & $C_1 \ @{text"\""}\tau_{1,1}@{text"\""} \dots @{text"\""}\tau_{1,n_1}@{text"\""}$ \\
& $|$ & \dots \\
& $|$ & $C_k \ @{text"\""}\tau_{k,1}@{text"\""} \dots @{text"\""}\tau_{k,n_k}@{text"\""}$
\end{tabular}
\end{quote}
It introduces the constructors \
$C_i :: \tau_{i,1}\Rightarrow \cdots \Rightarrow \tau_{i,n_i} \Rightarrow$~@{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"} \ and expresses that any value of this type is built from these constructors in a unique manner. Uniqueness is implied by the following
properties of the constructors:
\begin{itemize}
\item \emph{Distinctness:} $C_i\ \ldots \neq C_j\ \dots$ \quad if $i \neq j$
\item \emph{Injectivity:}
\begin{tabular}[t]{l}
$(C_i \ x_1 \dots x_{n_i} = C_i \ y_1 \dots y_{n_i}) =$\\
$(x_1 = y_1 \land \dots \land x_{n_i} = y_{n_i})$
\end{tabular}
\end{itemize}
The fact that any value of the datatype is built from the constructors implies
the structural induction rule: to show
$P~x$ for all $x$ of type @{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"},
one needs to show $P(C_i\ x_1 \dots x_{n_i})$ (for each $i$) assuming
$P(x_j)$ for all $j$ where $\tau_{i,j} =$~@{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"}.
Distinctness and injectivity are applied automatically by @{text auto}
and other proof methods. Induction must be applied explicitly.

Datatype values can be taken apart with case-expressions, for example
\begin{quote}
\noquotes{@{term[source] "(case xs of [] \<Rightarrow> 0 | x # _ \<Rightarrow> Suc x)"}}
\end{quote}
just like in functional programming languages. Case expressions
must be enclosed in parentheses.
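*}

text{* For instance, the case expression above reduces as soon as the
outermost constructor of the list is known. A small sketch (this lemma is
merely illustrative): *}

lemma "(case [a,b] of [] \<Rightarrow> 0 | x # _ \<Rightarrow> Suc x) = Suc a"
apply(auto)
done

text{*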

As an example, consider binary trees:
*}

datatype 'a tree = Tip | Node "'a tree" 'a "'a tree"

text{* with a mirror function: *}

fun mirror :: "'a tree \<Rightarrow> 'a tree" where
"mirror Tip = Tip" |
"mirror (Node l a r) = Node (mirror r) a (mirror l)"

text{* The following lemma illustrates induction: *}

lemma "mirror(mirror t) = t"
apply(induction t)

txt{* yields
@{subgoals[display]}
The induction step contains two induction hypotheses, one for each subtree.
An application of @{text auto} finishes the proof.

A very simple but also very useful datatype is the predefined
@{datatype[display] option}
Its sole purpose is to add a new element @{const None} to an existing
type @{typ 'a}. To make sure that @{const None} is distinct from all the
elements of @{typ 'a}, you wrap them up in @{const Some} and call
the new type @{typ"'a option"}. A typical application is a lookup function
on a list of key-value pairs, often called an association list:
*}
(*<*)
apply auto
done
(*>*)
fun lookup :: "('a * 'b) list \<Rightarrow> 'a \<Rightarrow> 'b option" where
"lookup [] x = None" |
"lookup ((a,b) # ps) x = (if a = x then Some b else lookup ps x)"

text{*
Note that @{text"\<tau>\<^isub>1 * \<tau>\<^isub>2"} is the type of pairs, also written @{text"\<tau>\<^isub>1 \<times> \<tau>\<^isub>2"}.
Pairs can be taken apart either by pattern matching (as above) or with the
projection functions @{const fst} and @{const snd}: @{thm fst_conv} and @{thm snd_conv}. Tuples are simulated by pairs nested to the right: @{term"(a,b,c)"}
abbreviates @{text"(a, (b, c))"} and @{text "\<tau>\<^isub>1 \<times> \<tau>\<^isub>2 \<times> \<tau>\<^isub>3"} abbreviates
@{text "\<tau>\<^isub>1 \<times> (\<tau>\<^isub>2 \<times> \<tau>\<^isub>3)"}.

\subsection{Definitions}

Non-recursive functions can be defined as in the following example:
*}

definition sq :: "nat \<Rightarrow> nat" where
"sq n = n * n"

text{* Such definitions do not allow pattern matching but only
@{text"f x\<^isub>1 \<dots> x\<^isub>n = t"}, where @{text f} does not occur in @{text t}.

\subsection{Abbreviations}

Abbreviations are similar to definitions:
*}

abbreviation sq' :: "nat \<Rightarrow> nat" where
"sq' n == n * n"

text{* The key difference is that @{const sq'} is only syntactic sugar:
@{term"sq' t"} is replaced by \mbox{@{term"t*t"}} after parsing, and every
occurrence of a term @{term"u*u"} is replaced by \mbox{@{term"sq' u"}} before
printing. Internally, @{const sq'} does not exist. This is also the
advantage of abbreviations over definitions: definitions need to be expanded
explicitly (see \autoref{sec:rewr-defs}) whereas abbreviations are already
expanded upon parsing. However, abbreviations should be introduced sparingly:
if abused, they can lead to a confusing discrepancy between the internal and
external view of a term.

\subsection{Recursive functions}
\label{sec:recursive-funs}

Recursive functions are defined with \isacom{fun} by pattern matching
over datatype constructors. The order of equations matters, just as in
functional programming languages. However, all HOL functions must be
total. This simplifies the logic---terms are always defined---but means
that recursive functions must terminate. Otherwise one could define a
function @{prop"f n = f n + (1::nat)"} and conclude \mbox{@{prop"(0::nat) = 1"}} by
subtracting @{term"f n"} on both sides.

Isabelle's automatic termination checker requires that the arguments of
recursive calls on the right-hand side must be strictly smaller than the
arguments on the left-hand side. In the simplest case, this means that one
fixed argument position decreases in size with each recursive call. The size
is measured as the number of constructors (excluding 0-ary ones, e.g.\ @{text
Nil}). Lexicographic combinations are also recognized. In more complicated
situations, the user may have to prove termination by hand. For details
see~\cite{Krauss}.
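As an illustration of a lexicographic combination, here is a sketch of the
classic Ackermann function: in each recursive call either the first argument
decreases, or it stays the same and the second argument decreases, and the
termination checker recognizes this automatically.
*}

fun ack :: "nat \<Rightarrow> nat \<Rightarrow> nat" where
"ack 0 n = Suc n" |
"ack (Suc m) 0 = ack m (Suc 0)" |
"ack (Suc m) (Suc n) = ack m (ack (Suc m) n)"

text{*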

Functions defined with \isacom{fun} come with their own induction schema
that mirrors the recursion schema and is derived from the termination
order. For example,
*}

fun div2 :: "nat \<Rightarrow> nat" where
"div2 0 = 0" |
"div2 (Suc 0) = Suc 0" |
"div2 (Suc(Suc n)) = Suc(div2 n)"

text{* does not just define @{const div2} but also proves a
customized induction rule:
\[
\inferrule{
\mbox{@{thm (prem 1) div2.induct}}\\
\mbox{@{thm (prem 2) div2.induct}}\\
\mbox{@{thm (prem 3) div2.induct}}}
{\mbox{@{thm (concl) div2.induct[of _ "m"]}}}
\]
This customized induction rule can simplify inductive proofs. For example,
*}

lemma "div2(n+n) = n"
apply(induction n rule: div2.induct)

txt{* yields the 3 subgoals
@{subgoals[display,margin=65]}
An application of @{text auto} finishes the proof.
Had we used ordinary structural induction on @{text n},
the proof would have needed an additional
case analysis in the induction step.

The general case is often called \concept{computation induction},
because the induction follows the (terminating!) computation.
For every defining equation
\begin{quote}
@{text "f(e) = \<dots> f(r\<^isub>1) \<dots> f(r\<^isub>k) \<dots>"}
\end{quote}
where @{text"f(r\<^isub>i)"}, @{text"i=1\<dots>k"}, are all the recursive calls,
the induction rule @{text"f.induct"} contains one premise of the form
\begin{quote}
@{text"P(r\<^isub>1) \<Longrightarrow> \<dots> \<Longrightarrow> P(r\<^isub>k) \<Longrightarrow> P(e)"}
\end{quote}
If @{text "f :: \<tau>\<^isub>1 \<Rightarrow> \<dots> \<Rightarrow> \<tau>\<^isub>n \<Rightarrow> \<tau>"} then @{text"f.induct"} is applied like this:
\begin{quote}
\isacom{apply}@{text"(induction x\<^isub>1 \<dots> x\<^isub>n rule: f.induct)"}
\end{quote}
where typically there is a call @{text"f x\<^isub>1 \<dots> x\<^isub>n"} in the goal.
But note that the induction rule does not mention @{text f} at all,
except in its name, and is applicable independently of @{text f}.

\section{Induction heuristics}

We have already noted that theorems about recursive functions are proved by
induction. If the function has more than one argument, we have followed
this heuristic in the proofs about the append function:
\begin{quote}
\emph{Perform induction on argument number $i$\\
if the function is defined by recursion on argument number $i$.}
\end{quote}
The key heuristic, and the main point of this section, is to
\emph{generalize the goal before induction}.
The reason is simple: if the goal is
too specific, the induction hypothesis is too weak to allow the induction
step to go through. Let us illustrate the idea with an example.

Function @{const rev} has quadratic worst-case running time
because it calls append for each element of the list and
append is linear in its first argument. A linear time version of
@{const rev} requires an extra argument where the result is accumulated
gradually, using only~@{text"#"}:
*}
(*<*)
apply auto
done
(*>*)
fun itrev :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
"itrev [] ys = ys" |
"itrev (x#xs) ys = itrev xs (x#ys)"

text{* The behaviour of @{const itrev} is simple: it reverses
its first argument by stacking its elements onto the second argument,
and it returns that second argument when the first one becomes
empty. Note that @{const itrev} is tail-recursive: it can be
compiled into a loop; no stack is necessary for executing it.

Naturally, we would like to show that @{const itrev} does indeed reverse
its first argument provided the second one is empty:
*}

lemma "itrev xs [] = rev xs"

txt{* There is no choice as to the induction variable:
*}

apply(induction xs)
apply(auto)

txt{*
Unfortunately, this attempt does not prove
the induction step:
@{subgoals[display,margin=70]}
The induction hypothesis is too weak. The fixed
argument,~@{term"[]"}, prevents it from rewriting the conclusion.
This example suggests a heuristic:
\begin{quote}
\emph{Generalize goals for induction by replacing constants by variables.}
\end{quote}
Of course one cannot do this na\"{\i}vely: @{prop"itrev xs ys = rev xs"} is
just not true. The correct generalization is
*};
(*<*)oops;(*>*)
lemma "itrev xs ys = rev xs @ ys"
(*<*)apply(induction xs, auto)(*>*)
txt{*
If @{text ys} is replaced by @{term"[]"}, the right-hand side simplifies to
@{term"rev xs"}, as required.
In this instance it was easy to guess the right generalization.
Other situations can require a good deal of creativity.

Although we now have two variables, only @{text xs} is suitable for
induction, and we repeat our proof attempt. Unfortunately, we are still
not there:
@{subgoals[display,margin=65]}
The induction hypothesis is still too weak, but this time it takes no
intuition to generalize: the problem is that the @{text ys}
in the induction hypothesis is fixed,
but the induction hypothesis needs to be applied with
@{term"a # ys"} instead of @{text ys}. Hence we prove the theorem
for all @{text ys} instead of a fixed one. We can instruct induction
to perform this generalization for us by adding @{text "arbitrary: ys"}.
*}
(*<*)oops
lemma "itrev xs ys = rev xs @ ys"
(*>*)
apply(induction xs arbitrary: ys)

txt{* The induction hypothesis in the induction step is now universally quantified over @{text ys}:
@{subgoals[display,margin=65]}
Thus the proof succeeds:
*}

apply auto
done

text{*
This leads to another heuristic for generalization:
\begin{quote}
\emph{Generalize induction by generalizing all free
variables\\ {\em(except the induction variable itself)}.}
\end{quote}
Generalization is best performed with @{text"arbitrary: y\<^isub>1 \<dots> y\<^isub>k"}.
This heuristic prevents trivial failures like the one above.
However, it should not be applied blindly.
It is not always required, and the additional quantifiers can complicate
matters in some cases. The variables that need to be quantified are typically
those that change in recursive calls.
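As a second sketch, consider a hypothetical tail-recursive variant of
addition. The second argument changes in the recursive call, so it is the
one that needs to be generalized:
*}

fun itadd :: "nat \<Rightarrow> nat \<Rightarrow> nat" where
"itadd 0 n = n" |
"itadd (Suc m) n = itadd m (Suc n)"

lemma "itadd m n = m + n"
apply(induction m arbitrary: n)
apply(auto)
done

text{*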

\section{Simplification}

So far we have talked a lot about simplifying terms without explaining the concept. \concept{Simplification} means
\begin{itemize}
\item using equations $l = r$ from left to right (only),
\item as long as possible.
\end{itemize}
To emphasize the directionality, equations that have been given the
@{text"simp"} attribute are called \concept{simplification}
rules. Logically, they are still symmetric, but proofs by
simplification use them only in the left-to-right direction. The proof tool
that performs simplifications is called the \concept{simplifier}. It is the
basis of @{text auto} and other related proof methods.

The idea of simplification is best explained by an example. Given the
simplification rules
\[
\begin{array}{rcl@ {\quad}l}
@{term"0 + n::nat"} &@{text"="}& @{text n} & (1) \\
@{term"(Suc m) + n"} &@{text"="}& @{term"Suc (m + n)"} & (2) \\
@{text"(Suc m \<le> Suc n)"} &@{text"="}& @{text"(m \<le> n)"} & (3)\\
@{text"(0 \<le> m)"} &@{text"="}& @{const True} & (4)
\end{array}
\]
the formula @{prop"0 + Suc 0 \<le> Suc 0 + x"} is simplified to @{const True}
as follows:
\[
\begin{array}{r@ {}c@ {}l@ {\quad}l}
@{text"(0 + Suc 0"} & \leq & @{text"Suc 0 + x)"} & \stackrel{(1)}{=} \\
@{text"(Suc 0"} & \leq & @{text"Suc 0 + x)"} & \stackrel{(2)}{=} \\
@{text"(Suc 0"} & \leq & @{text"Suc (0 + x))"} & \stackrel{(3)}{=} \\
@{text"(0"} & \leq & @{text"0 + x)"} & \stackrel{(4)}{=} \\[1ex]
 & @{const True}
\end{array}
\]
Simplification is often also called \concept{rewriting}
and simplification rules \concept{rewrite rules}.

\subsection{Simplification rules}

The attribute @{text"simp"} declares theorems to be simplification rules,
which the simplifier will use automatically. In addition,
\isacom{datatype} and \isacom{fun} commands implicitly declare some
simplification rules: \isacom{datatype} the distinctness and injectivity
rules, \isacom{fun} the defining equations. Definitions are not declared
as simplification rules automatically! Nearly any theorem can become a
simplification rule. The simplifier will try to transform it into an
equation. For example, the theorem @{prop"\<not> P"} is turned into @{prop"P = False"}.

Only equations that really simplify, like @{prop"rev (rev xs) = xs"} and
@{prop"xs @ [] = xs"}, should be declared as simplification
rules. Equations that may be counterproductive as simplification rules
should only be used in specific proof steps (see \S\ref{sec:simp} below).
Distributivity laws, for example, alter the structure of terms
and can produce an exponential blow-up.
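The attribute can be attached when the lemma is stated. As a small sketch,
reusing @{const sq} from above:
*}

lemma sq_0 [simp]: "sq 0 = 0"
apply(simp add: sq_def)
done

text{* From now on the simplifier rewrites @{term"sq 0"} to @{term"0::nat"}
automatically.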

\subsection{Conditional simplification rules}

Simplification rules can be conditional. Before applying such a rule, the
simplifier will first try to prove the preconditions, again by
simplification. For example, given the simplification rules
\begin{quote}
@{prop"p(0::nat) = True"}\\
@{prop"p(x) \<Longrightarrow> f(x) = g(x)"},
\end{quote}
the term @{term"f(0::nat)"} simplifies to @{term"g(0::nat)"}
but @{term"f(1::nat)"} does not simplify because @{prop"p(1::nat)"}
is not provable.

\subsection{Termination}

Simplification can run forever, for example if both @{prop"f x = g x"} and
@{prop"g(x) = f(x)"} are simplification rules. It is the user's
responsibility not to include simplification rules that can lead to
nontermination, either on their own or in combination with other
simplification rules. The right-hand side of a simplification rule should
always be ``simpler'' than the left-hand side---in some sense. But since
termination is undecidable, such a check cannot be automated completely
and Isabelle makes little attempt to detect nontermination.

When conditional simplification rules are applied, their preconditions are
proved first. Hence all preconditions need to be
simpler than the left-hand side of the conclusion. For example
\begin{quote}
@{prop "n < m \<Longrightarrow> (n < Suc m) = True"}
\end{quote}
is suitable as a simplification rule: both @{prop"n<m"} and @{const True}
are simpler than \mbox{@{prop"n < Suc m"}}. But
\begin{quote}
@{prop "Suc n < m \<Longrightarrow> (n < m) = True"}
\end{quote}
leads to nontermination: when trying to rewrite @{prop"n<m"} to @{const True}
one first has to prove \mbox{@{prop"Suc n < m"}}, which can be rewritten to @{const True} provided @{prop"Suc(Suc n) < m"}, \emph{ad infinitum}.

\subsection{The @{text simp} proof method}
\label{sec:simp}

So far we have only used the proof method @{text auto}. Method @{text simp}
is the key component of @{text auto}, but @{text auto} can do much more. In
some cases, @{text auto} is overeager and modifies the proof state too much.
In such cases the more predictable @{text simp} method should be used.
Given a goal
\begin{quote}
@{text"1. \<lbrakk> P\<^isub>1; \<dots>; P\<^isub>m \<rbrakk> \<Longrightarrow> C"}
\end{quote}
the command
\begin{quote}
\isacom{apply}@{text"(simp add: th\<^isub>1 \<dots> th\<^isub>n)"}
\end{quote}
simplifies the assumptions @{text "P\<^isub>i"} and the conclusion @{text C} using
\begin{itemize}
\item all simplification rules, including the ones coming from \isacom{datatype} and \isacom{fun},
\item the additional lemmas @{text"th\<^isub>1 \<dots> th\<^isub>n"}, and
\item the assumptions.
\end{itemize}
In addition to or instead of @{text add} there is also @{text del} for removing
simplification rules temporarily. Both are optional. Method @{text auto}
can be modified similarly:
\begin{quote}
\isacom{apply}@{text"(auto simp add: \<dots> simp del: \<dots>)"}
\end{quote}
Here the modifiers are @{text"simp add"} and @{text"simp del"}
instead of just @{text add} and @{text del} because @{text auto}
does not just perform simplification.

Note that @{text simp} acts only on subgoal 1, whereas @{text auto} acts on
all subgoals. There is also @{text simp_all}, which applies @{text simp} to
all subgoals.

\subsection{Rewriting with definitions}
\label{sec:rewr-defs}

Definitions introduced by the command \isacom{definition}
can also be used as simplification rules,
but by default they are not: the simplifier does not expand them
automatically. Definitions are intended for introducing abstract concepts and
not merely as abbreviations. Of course, we need to expand the definition
initially, but once we have proved enough abstract properties of the new
constant, we can forget its original definition. This style makes proofs more
robust: if the definition has to be changed, only the proofs of the abstract
properties will be affected.

The definition of a function @{text f} is a theorem named @{text f_def}
and can be added to a call of @{text simp} just like any other theorem:
\begin{quote}
\isacom{apply}@{text"(simp add: f_def)"}
\end{quote}
In particular, let-expressions can be unfolded by
making @{thm[source] Let_def} a simplification rule.
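As a small sketch:
*}

lemma "(let x = sq n in x + x) = sq n + sq n"
apply(simp add: Let_def)
done

text{*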

\subsection{Case splitting with @{text simp}}

Goals containing if-expressions are automatically split into two cases by
@{text simp} using the rule
\begin{quote}
@{prop"P(if A then s else t) = ((A \<longrightarrow> P(s)) \<and> (~A \<longrightarrow> P(t)))"}
\end{quote}
For example, @{text simp} can prove
\begin{quote}
@{prop"(A \<and> B) = (if A then B else False)"}
\end{quote}
because both @{prop"A \<longrightarrow> (A & B) = B"} and @{prop"~A \<longrightarrow> (A & B) = False"}
simplify to @{const True}.
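In Isabelle this looks as follows:
*}

lemma "(A \<and> B) = (if A then B else False)"
apply(simp)
done

text{*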

We can split case-expressions similarly. For @{text nat} the rule looks like this:
@{prop[display,margin=65,indent=4]"P(case e of 0 \<Rightarrow> a | Suc n \<Rightarrow> b n) = ((e = 0 \<longrightarrow> P a) & (ALL n. e = Suc n \<longrightarrow> P(b n)))"}
Case expressions are not split automatically by @{text simp}, but @{text simp}
can be instructed to do so:
\begin{quote}
\isacom{apply}@{text"(simp split: nat.split)"}
\end{quote}
splits all case-expressions over natural numbers. For an arbitrary
datatype @{text t} it is @{text "t.split"} instead of @{thm[source] nat.split}.
Method @{text auto} can be modified in exactly the same way.
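For example, a small sketch using the split rule shown above:
*}

lemma "(case n of 0 \<Rightarrow> 0 | Suc m \<Rightarrow> n) = n"
apply(simp split: nat.split)
done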

(*<*)
end
(*>*)