|
(*<*)
theory Types_and_funs
imports Main
begin
(*>*)
text{*
\vspace{-5ex}
\section{Type and function definitions}

Type synonyms are abbreviations for existing types, for example
*}

type_synonym string = "char list"

text{*
Type synonyms are expanded after parsing and are not present in the internal representation or in output. They are mere conveniences for the reader.

\subsection{Datatypes}

The general form of a datatype definition looks like this:
\begin{quote}
\begin{tabular}{@ {}rclcll}
\isacom{datatype} @{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"}
& = & $C_1 \ @{text"\""}\tau_{1,1}@{text"\""} \dots @{text"\""}\tau_{1,n_1}@{text"\""}$ \\
& $|$ & \dots \\
& $|$ & $C_k \ @{text"\""}\tau_{k,1}@{text"\""} \dots @{text"\""}\tau_{k,n_k}@{text"\""}$
\end{tabular}
\end{quote}
It introduces the constructors \
$C_i :: \tau_{i,1}\Rightarrow \cdots \Rightarrow \tau_{i,n_i} \Rightarrow$~@{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"} \ and expresses that any value of this type is built from these constructors in a unique manner. Uniqueness is implied by the following
properties of the constructors:
\begin{itemize}
\item \emph{Distinctness:} $C_i\ \dots \neq C_j\ \dots$ \quad if $i \neq j$
\item \emph{Injectivity:}
\begin{tabular}[t]{l}
$(C_i \ x_1 \dots x_{n_i} = C_i \ y_1 \dots y_{n_i}) =$\\
$(x_1 = y_1 \land \dots \land x_{n_i} = y_{n_i})$
\end{tabular}
\end{itemize}
The fact that any value of the datatype is built from the constructors implies
the structural induction rule: to show
$P~x$ for all $x$ of type @{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"},
one needs to show $P(C_i\ x_1 \dots x_{n_i})$ (for each $i$) assuming
$P(x_j)$ for all $j$ where $\tau_{i,j} =$~@{text "('a\<^isub>1,\<dots>,'a\<^isub>n)t"}.
Distinctness and injectivity are applied automatically by @{text auto}
and other proof methods. Induction must be applied explicitly.

Datatype values can be taken apart with case-expressions, for example
\begin{quote}
\noquotes{@{term[source] "(case xs of [] \<Rightarrow> 0 | x # _ \<Rightarrow> Suc x)"}}
\end{quote}
just like in functional programming languages. Case expressions
must be enclosed in parentheses.

As an example, consider binary trees:
*}

datatype 'a tree = Tip | Node "'a tree" 'a "'a tree"

text{* with a mirror function: *}

fun mirror :: "'a tree \<Rightarrow> 'a tree" where
"mirror Tip = Tip" |
"mirror (Node l a r) = Node (mirror r) a (mirror l)"

text{* The following lemma illustrates induction: *}

lemma "mirror(mirror t) = t"
apply(induction t)

txt{* yields
@{subgoals[display]}
The induction step contains two induction hypotheses, one for each subtree.
An application of @{text auto} finishes the proof.

A very simple but also very useful datatype is the predefined
@{datatype[display] option}
Its sole purpose is to add a new element @{const None} to an existing
type @{typ 'a}. To make sure that @{const None} is distinct from all the
elements of @{typ 'a}, you wrap them up in @{const Some} and call
the new type @{typ"'a option"}. A typical application is a lookup function
on a list of key-value pairs, often called an association list:
*}
(*<*)
apply auto
done
(*>*)
fun lookup :: "('a * 'b) list \<Rightarrow> 'a \<Rightarrow> 'b option" where
"lookup [] x = None" |
"lookup ((a,b) # ps) x = (if a = x then Some b else lookup ps x)"
|

text{*
Note that @{text"\<tau>\<^isub>1 * \<tau>\<^isub>2"} is the type of pairs, also written @{text"\<tau>\<^isub>1 \<times> \<tau>\<^isub>2"}.

\subsection{Definitions}

Non-recursive functions can be defined as in the following example:
*}

definition sq :: "nat \<Rightarrow> nat" where
"sq n = n * n"

text{* Such definitions do not allow pattern matching but only
@{text"f x\<^isub>1 \<dots> x\<^isub>n = t"}, where @{text f} does not occur in @{text t}.

\subsection{Abbreviations}

Abbreviations are similar to definitions:
*}

abbreviation sq' :: "nat \<Rightarrow> nat" where
"sq' n == n * n"

text{* The key difference is that @{const sq'} is only syntactic sugar:
@{term"sq' t"} is replaced by \mbox{@{term"t*t"}} after parsing, and every
occurrence of a term @{term"u*u"} is replaced by \mbox{@{term"sq' u"}} before
printing. Internally, @{const sq'} does not exist. This is also the
advantage of abbreviations over definitions: definitions need to be expanded
explicitly (see \autoref{sec:rewr-defs}) whereas abbreviations are already
expanded upon parsing. However, abbreviations should be introduced sparingly:
if abused, they can lead to a confusing discrepancy between the internal and
external view of a term.
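For example, @{prop"sq' n = sq n"} is really @{prop"n*n = sq n"} in disguise:
@{const sq'} vanishes after parsing, whereas @{const sq} still needs to be
unfolded explicitly. A small demonstration:
*}

lemma "sq' n = sq n"
apply(simp add: sq_def)
done

text{*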
|

\subsection{Recursive functions}
\label{sec:recursive-funs}

Recursive functions are defined with \isacom{fun} by pattern matching
over datatype constructors. The order of equations matters, just as in
functional programming languages. However, all HOL functions must be
total. This simplifies the logic---terms are always defined---but means
that recursive functions must terminate. Otherwise one could define a
function @{prop"f n = f n + (1::nat)"} and conclude \mbox{@{prop"(0::nat) = 1"}} by
subtracting @{term"f n"} on both sides.

Isabelle's automatic termination checker requires that the arguments of
recursive calls on the right-hand side be strictly smaller than the
arguments on the left-hand side. In the simplest case, this means that one
fixed argument position decreases in size with each recursive call. The size
is measured as the number of constructors (excluding 0-ary ones, e.g.\ @{text
Nil}). Lexicographic combinations are also recognized. In more complicated
situations, the user may have to prove termination by hand. For details
see~\cite{Krauss}.
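For example, the Ackermann function terminates because \isacom{fun} finds a
lexicographic combination of its two arguments, even though the second
argument grows in the inner recursive call (a classic example, shown here
only to illustrate the termination checker):
*}

fun ack :: "nat \<Rightarrow> nat \<Rightarrow> nat" where
"ack 0 n = Suc n" |
"ack (Suc m) 0 = ack m (Suc 0)" |
"ack (Suc m) (Suc n) = ack m (ack (Suc m) n)"

text{*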
|

Functions defined with \isacom{fun} come with their own induction schema
that mirrors the recursion schema and is derived from the termination
order. For example,
*}

fun div2 :: "nat \<Rightarrow> nat" where
"div2 0 = 0" |
"div2 (Suc 0) = Suc 0" |
"div2 (Suc(Suc n)) = Suc(div2 n)"

text{* does not just define @{const div2} but also proves a
customized induction rule:
\[
\inferrule{
\mbox{@{thm (prem 1) div2.induct}}\\
\mbox{@{thm (prem 2) div2.induct}}\\
\mbox{@{thm (prem 3) div2.induct}}}
{\mbox{@{thm (concl) div2.induct[of _ "m"]}}}
\]
This customized induction rule can simplify inductive proofs. For example,
*}

lemma "div2(n+n) = n"
apply(induction n rule: div2.induct)

txt{* yields the 3 subgoals
@{subgoals[display,margin=65]}
An application of @{text auto} finishes the proof.
Had we used ordinary structural induction on @{text n},
the proof would have needed an additional
case analysis in the induction step.

The general case is often called \concept{computation induction},
because the induction follows the (terminating!) computation.
For every defining equation
\begin{quote}
@{text "f(e) = \<dots> f(r\<^isub>1) \<dots> f(r\<^isub>k) \<dots>"}
\end{quote}
where @{text"f(r\<^isub>i)"}, @{text"i=1\<dots>k"}, are all the recursive calls,
the induction rule @{text"f.induct"} contains one premise of the form
\begin{quote}
@{text"P(r\<^isub>1) \<Longrightarrow> \<dots> \<Longrightarrow> P(r\<^isub>k) \<Longrightarrow> P(e)"}
\end{quote}
If @{text "f :: \<tau>\<^isub>1 \<Rightarrow> \<dots> \<Rightarrow> \<tau>\<^isub>n \<Rightarrow> \<tau>"} then @{text"f.induct"} is applied like this:
\begin{quote}
\isacom{apply}@{text"(induction x\<^isub>1 \<dots> x\<^isub>n rule: f.induct)"}
\end{quote}
where typically there is a call @{text"f x\<^isub>1 \<dots> x\<^isub>n"} in the goal.
But note that the induction rule does not mention @{text f} at all,
except in its name, and is applicable independently of @{text f}.

|
\section{Induction heuristics}

We have already noted that theorems about recursive functions are proved by
induction. In case the function has more than one argument, we have followed
this heuristic in the proofs about the append function:
\begin{quote}
\emph{Perform induction on argument number $i$\\
if the function is defined by recursion on argument number $i$.}
\end{quote}
The key heuristic, and the main point of this section, is to
\emph{generalize the goal before induction}.
The reason is simple: if the goal is
too specific, the induction hypothesis is too weak to allow the induction
step to go through. Let us illustrate the idea with an example.
|

Function @{const rev} has quadratic worst-case running time
because it calls append for each element of the list and
append is linear in its first argument. A linear time version of
@{const rev} requires an extra argument where the result is accumulated
gradually, using only~@{text"#"}:
*}
(*<*)
apply auto
done
(*>*)
fun itrev :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
"itrev [] ys = ys" |
"itrev (x#xs) ys = itrev xs (x#ys)"
|

text{* The behaviour of @{const itrev} is simple: it reverses
its first argument by stacking its elements onto the second argument,
and it returns that second argument when the first one becomes
empty. Note that @{const itrev} is tail-recursive: it can be
compiled into a loop; no stack is needed to execute it.

Naturally, we would like to show that @{const itrev} does indeed reverse
its first argument provided the second one is empty:
*}
|

lemma "itrev xs [] = rev xs"

txt{* There is no choice as to the induction variable:
*}

apply(induction xs)
apply(auto)

txt{*
Unfortunately, this attempt does not prove
the induction step:
@{subgoals[display,margin=70]}
The induction hypothesis is too weak. The fixed
argument,~@{term"[]"}, prevents it from rewriting the conclusion.
This example suggests a heuristic:
\begin{quote}
\emph{Generalize goals for induction by replacing constants by variables.}
\end{quote}
Of course one cannot do this na\"{\i}vely: @{prop"itrev xs ys = rev xs"} is
just not true. The correct generalization is
|
*}
(*<*)oops(*>*)
|
lemma "itrev xs ys = rev xs @ ys"
(*<*)apply(induction xs, auto)(*>*)
txt{*
If @{text ys} is replaced by @{term"[]"}, the right-hand side simplifies to
@{term"rev xs"}, as required.
In this instance it was easy to guess the right generalization.
Other situations can require a good deal of creativity.

Although we now have two variables, only @{text xs} is suitable for
induction, and we repeat our proof attempt. Unfortunately, we are still
not there:
@{subgoals[display,margin=65]}
The induction hypothesis is still too weak, but this time it takes no
intuition to generalize: the problem is that the @{text ys}
in the induction hypothesis is fixed,
but the induction hypothesis needs to be applied with
@{term"a # ys"} instead of @{text ys}. Hence we prove the theorem
for all @{text ys} instead of a fixed one. We can instruct induction
to perform this generalization for us by adding @{text "arbitrary: ys"}.
*}
(*<*)oops
lemma "itrev xs ys = rev xs @ ys"
(*>*)
apply(induction xs arbitrary: ys)

txt{* The induction hypothesis in the induction step is now universally quantified over @{text ys}:
@{subgoals[display,margin=65]}
Thus the proof succeeds:
*}

apply auto
done

text{*
This leads to another heuristic for generalization:
\begin{quote}
\emph{Generalize induction by generalizing all free
variables\\ {\em(except the induction variable itself)}.}
\end{quote}
Generalization is best performed with @{text"arbitrary: y\<^isub>1 \<dots> y\<^isub>k"}.
This heuristic prevents trivial failures like the one above.
However, it should not be applied blindly.
It is not always required, and the additional quantifiers can complicate
matters in some cases. The variables that need to be quantified are typically
those that change in recursive calls.
|

\section{Simplification}

So far we have talked a lot about simplifying terms without explaining the concept. \concept{Simplification} means
\begin{itemize}
\item using equations $l = r$ from left to right (only),
\item as long as possible.
\end{itemize}
To emphasize the directionality, equations that have been given the
@{text"simp"} attribute are called \concept{simplification rules}.
Logically, they are still symmetric, but proofs by
simplification use them only in the left-to-right direction. The proof tool
that performs simplifications is called the \concept{simplifier}. It is the
basis of @{text auto} and other related proof methods.

The idea of simplification is best explained by an example. Given the
simplification rules
\[
\begin{array}{rcl@ {\quad}l}
@{term"0 + n::nat"} &@{text"="}& @{text n} & (1) \\
@{term"(Suc m) + n"} &@{text"="}& @{term"Suc (m + n)"} & (2) \\
@{text"(Suc m \<le> Suc n)"} &@{text"="}& @{text"(m \<le> n)"} & (3)\\
@{text"(0 \<le> m)"} &@{text"="}& @{const True} & (4)
\end{array}
\]
the formula @{prop"0 + Suc 0 \<le> Suc 0 + x"} is simplified to @{const True}
as follows:
\[
\begin{array}{r@ {}c@ {}l@ {\quad}l}
@{text"(0 + Suc 0"} & \leq & @{text"Suc 0 + x)"} & \stackrel{(1)}{=} \\
@{text"(Suc 0"} & \leq & @{text"Suc 0 + x)"} & \stackrel{(2)}{=} \\
@{text"(Suc 0"} & \leq & @{text"Suc (0 + x)"} & \stackrel{(3)}{=} \\
@{text"(0"} & \leq & @{text"0 + x)"} & \stackrel{(4)}{=} \\[1ex]
 & @{const True}
\end{array}
\]
Simplification is often also called \concept{rewriting}
and simplification rules \concept{rewrite rules}.
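In Isabelle, the whole chain is carried out by a single invocation of the
simplifier; variants of all four rules above are among the default
simplification rules for @{typ nat}:
*}

lemma "0 + Suc 0 \<le> Suc 0 + x"
apply(simp)
done

text{*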
|

\subsection{Simplification rules}

The attribute @{text"simp"} declares theorems to be simplification rules,
which the simplifier will use automatically. In addition,
\isacom{datatype} and \isacom{fun} commands implicitly declare some
simplification rules: \isacom{datatype} the distinctness and injectivity
rules, \isacom{fun} the defining equations. Definitions are not declared
as simplification rules automatically! Nearly any theorem can become a
simplification rule. The simplifier will try to transform it into an
equation. For example, the theorem @{prop"\<not> P"} is turned into @{prop"P = False"}.

Only equations that really simplify, like @{prop"rev (rev xs) = xs"} and
@{prop"xs @ [] = xs"}, should be declared as simplification
rules. Equations that may be counterproductive as simplification rules
should only be used in specific proof steps (see \S\ref{sec:simp} below).
Distributivity laws, for example, alter the structure of terms
and can produce an exponential blow-up.
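The equation for @{const itrev} proved in the previous section is a good
example: oriented left to right, it replaces @{const itrev} by the simpler
combination of @{const rev} and append. Declaring it a simplification rule
with the @{text"[simp]"} attribute (we name it @{text itrev_rev}) makes
later proofs about @{const itrev} automatic:
*}

lemma itrev_rev[simp]: "itrev xs ys = rev xs @ ys"
apply(induction xs arbitrary: ys)
apply(auto)
done

lemma "itrev xs [] = rev xs"
apply(simp)
done

text{*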
|

\subsection{Conditional simplification rules}

Simplification rules can be conditional. Before applying such a rule, the
simplifier will first try to prove the preconditions, again by
simplification. For example, given the simplification rules
\begin{quote}
@{prop"p(0::nat) = True"}\\
@{prop"p(x) \<Longrightarrow> f(x) = g(x)"},
\end{quote}
the term @{term"f(0::nat)"} simplifies to @{term"g(0::nat)"}
but @{term"f(1::nat)"} does not simplify because @{prop"p(1::nat)"}
is not provable.
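To see this in action, here is a contrived instance with constants of our
own (@{text pp}, @{text ff} and @{text gg} play the roles of @{text p},
@{text f} and @{text g} above):
*}

definition pp :: "nat \<Rightarrow> bool" where "pp n = (n = 0)"
definition ff :: "nat \<Rightarrow> nat" where "ff n = n + n"
definition gg :: "nat \<Rightarrow> nat" where "gg n = 2 * n"

lemma pp0[simp]: "pp 0"
apply(simp add: pp_def)
done

lemma ffgg[simp]: "pp n \<Longrightarrow> ff n = gg n"
apply(simp add: pp_def ff_def gg_def)
done

text{* Now @{prop"ff 0 = gg 0"} is proved by @{text simp} alone: the
precondition @{prop"pp 0"} is discharged by rule @{thm[source] pp0}. *}

lemma "ff 0 = gg 0"
apply(simp)
done

text{*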
|

\subsection{Termination}

Simplification can run forever, for example if both @{prop"f x = g x"} and
@{prop"g x = f x"} are simplification rules. It is the user's
responsibility not to include simplification rules that can lead to
nontermination, either on their own or in combination with other
simplification rules. The right-hand side of a simplification rule should
always be ``simpler'' than the left-hand side---in some sense. But since
termination is undecidable, such a check cannot be automated completely
and Isabelle makes little attempt to detect nontermination.

When conditional simplification rules are applied, their preconditions are
proved first. Hence all preconditions need to be
simpler than the left-hand side of the conclusion. For example
\begin{quote}
@{prop "n < m \<Longrightarrow> (n < Suc m) = True"}
\end{quote}
is suitable as a simplification rule: both @{prop"n<m"} and @{const True}
are simpler than \mbox{@{prop"n < Suc m"}}. But
\begin{quote}
@{prop "Suc n < m \<Longrightarrow> (n < m) = True"}
\end{quote}
leads to nontermination: when trying to rewrite @{prop"n<m"} to @{const True}
one first has to prove \mbox{@{prop"Suc n < m"}}, which can be rewritten to @{const True} provided @{prop"Suc(Suc n) < m"}, \emph{ad infinitum}.
|

\subsection{The @{text simp} proof method}
\label{sec:simp}

So far we have only used the proof method @{text auto}. Method @{text simp}
is the key component of @{text auto}, but @{text auto} can do much more. In
some cases, @{text auto} is overeager and modifies the proof state too much.
In such cases the more predictable @{text simp} method should be used.
Given a goal
\begin{quote}
@{text"1. \<lbrakk> P\<^isub>1; \<dots>; P\<^isub>m \<rbrakk> \<Longrightarrow> C"}
\end{quote}
the command
\begin{quote}
\isacom{apply}@{text"(simp add: th\<^isub>1 \<dots> th\<^isub>n)"}
\end{quote}
simplifies the assumptions @{text "P\<^isub>i"} and the conclusion @{text C} using
\begin{itemize}
\item all simplification rules, including the ones coming from \isacom{datatype} and \isacom{fun},
\item the additional lemmas @{text"th\<^isub>1 \<dots> th\<^isub>n"}, and
\item the assumptions.
\end{itemize}
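For example, the assumptions can carry most of the proof. In the goal below,
the second assumption simplifies to @{prop"xs = []"}, which in turn
simplifies the first assumption to @{prop"zs = ys"}, from which the
conclusion follows:
*}

lemma "\<lbrakk> xs @ zs = ys @ xs; [] @ xs = [] @ [] \<rbrakk> \<Longrightarrow> ys = zs"
apply(simp)
done

text{*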
|
In addition to or instead of @{text add} there is also @{text del} for removing
simplification rules temporarily. Both are optional. Method @{text auto}
can be modified similarly:
\begin{quote}
\isacom{apply}@{text"(auto simp add: \<dots> simp del: \<dots>)"}
\end{quote}
Here the modifiers are @{text"simp add"} and @{text"simp del"}
instead of just @{text add} and @{text del} because @{text auto}
does not just perform simplification.

Note that @{text simp} acts only on subgoal 1, whereas @{text auto} acts on
all subgoals. There is also @{text simp_all}, which applies @{text simp} to
all subgoals.
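For example, the @{const mirror} lemma from above can also be proved with
@{text simp_all}: after induction, each of the two subgoals is closed by
simplification, using the induction hypotheses in the @{const Node} case:
*}

lemma "mirror(mirror t) = t"
apply(induction t)
apply(simp_all)
done

text{*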
|

\subsection{Rewriting with definitions}
\label{sec:rewr-defs}

Definitions introduced by the command \isacom{definition}
can also be used as simplification rules,
but by default they are not: the simplifier does not expand them
automatically. Definitions are intended for introducing abstract concepts and
not merely as abbreviations. Of course, we need to expand the definition
initially, but once we have proved enough abstract properties of the new
constant, we can forget its original definition. This style makes proofs more
robust: if the definition has to be changed, only the proofs of the abstract
properties will be affected.

The definition of a function @{text f} is a theorem named @{text f_def}
and can be added to a call of @{text simp} just like any other theorem:
\begin{quote}
\isacom{apply}@{text"(simp add: f_def)"}
\end{quote}
In particular, let-expressions can be unfolded by
making @{thm[source] Let_def} a simplification rule.
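For example, with both @{thm[source] sq_def} and @{thm[source] Let_def}
added, a let-expression around @{const sq} evaporates; a small check:
*}

lemma "(let n = 2 in sq n) = 4"
apply(simp add: Let_def sq_def)
done

text{*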
|

\subsection{Case splitting with @{text simp}}

Goals containing if-expressions are automatically split into two cases by
@{text simp} using the rule
\begin{quote}
@{prop"P(if A then s else t) = ((A \<longrightarrow> P(s)) \<and> (~A \<longrightarrow> P(t)))"}
\end{quote}
For example, @{text simp} can prove
\begin{quote}
@{prop"(A \<and> B) = (if A then B else False)"}
\end{quote}
because both @{prop"A \<longrightarrow> (A & B) = B"} and @{prop"~A \<longrightarrow> (A & B) = False"}
simplify to @{const True}.

We can split case-expressions similarly. For @{text nat} the rule looks like this:
@{prop[display,margin=65,indent=4]"P(case e of 0 \<Rightarrow> a | Suc n \<Rightarrow> b n) = ((e = 0 \<longrightarrow> P a) & (ALL n. e = Suc n \<longrightarrow> P(b n)))"}
Case expressions are not split automatically by @{text simp}, but @{text simp}
can be instructed to do so:
\begin{quote}
\isacom{apply}@{text"(simp split: nat.split)"}
\end{quote}
splits all case-expressions over natural numbers. For an arbitrary
datatype @{text t} it is @{text "t.split"} instead of @{thm[source] nat.split}.
Method @{text auto} can be modified in exactly the same way.
*}
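
text{* For example, @{text"split: nat.split"} reduces the goal below to
the two implications @{prop"n = 0 \<longrightarrow> n = (n::nat)"} and
@{text"\<forall>m. n = Suc m \<longrightarrow> m + 1 = n"}, both of which @{text simp} proves: *}

lemma "(case n of 0 \<Rightarrow> n | Suc m \<Rightarrow> m + 1) = n"
apply(simp split: nat.split)
done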
|
(*<*)
end
(*>*)