(*<*)
theory simp = Main:
(*>*)

subsubsection{*Simplification rules*}

text{*\indexbold{simplification rule}
To facilitate simplification, theorems can be declared to be simplification
rules (with the help of the attribute @{text"[simp]"}\index{*simp
(attribute)}), in which case proofs by simplification make use of these
rules automatically. In addition the constructs \isacommand{datatype} and
\isacommand{primrec} (and a few others) invisibly declare useful
simplification rules. Explicit definitions are \emph{not} declared
simplification rules automatically!

Not merely equations but pretty much any theorem can become a simplification
rule. The simplifier will try to make sense of it. For example, a theorem
@{prop"~P"} is automatically turned into @{prop"P = False"}. The details
are explained in \S\ref{sec:SimpHow}.

The simplification attribute of theorems can be turned on and off as follows:
\begin{quote}
\isacommand{declare} \textit{theorem-name}@{text"[simp]"}\\
\isacommand{declare} \textit{theorem-name}@{text"[simp del]"}
\end{quote}
As a rule of thumb, equations that really simplify (like @{prop"rev(rev xs) =
xs"} and @{prop"xs @ [] = xs"}) should be made simplification
rules. Those of a more specific nature (e.g.\ distributivity laws, which
alter the structure of terms considerably) should only be used selectively,
i.e.\ they should not be default simplification rules. Conversely, it may
also happen that a simplification rule needs to be disabled in certain
proofs. Frequent changes in the simplification status of a theorem may
indicate a badly designed theory.
\begin{warn}
Simplification may not terminate, for example if both $f(x) = g(x)$ and
$g(x) = f(x)$ are simplification rules. It is the user's responsibility not
to include simplification rules that can lead to nontermination, either on
their own or in combination with other simplification rules.
\end{warn}
*}

subsubsection{*The simplification method*}

text{*\index{*simp (method)|bold}
The general format of the simplification method is
\begin{quote}
@{text simp} \textit{list of modifiers}
\end{quote}
where the list of \emph{modifiers} helps to fine-tune the behaviour and may
be empty. Most if not all of the proofs seen so far could have been performed
with @{text simp} instead of \isa{auto}, except that @{text simp} attacks
only the first subgoal and may thus need to be repeated---use
\isaindex{simp_all} to simplify all subgoals.
Note that @{text simp} fails if nothing changes.
*}

subsubsection{*Adding and deleting simplification rules*}

text{*
If a certain theorem is merely needed in a few proofs by simplification,
we do not need to make it a global simplification rule. Instead we can modify
the set of simplification rules used in a simplification step by adding rules
to it and/or deleting rules from it. The two modifiers for this are
\begin{quote}
@{text"add:"} \textit{list of theorem names}\\
@{text"del:"} \textit{list of theorem names}
\end{quote}
In case you want to use only a specific list of theorems and ignore all
others:
\begin{quote}
@{text"only:"} \textit{list of theorem names}
\end{quote}
*}

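text{*\noindent
For example, assuming the usual list lemma @{text append_Nil}
(@{text"[] @ xs = xs"}) is available under that name, the following goal
can be proved with that single rule and nothing else:
*}

lemma "ys @ [] @ ys = ys @ ys";
by(simp only: append_Nil);
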
subsubsection{*Assumptions*}

text{*\index{simplification!with/of assumptions}
By default, assumptions are part of the simplification process: they are used
as simplification rules and are simplified themselves. For example:
*}

lemma "\<lbrakk> xs @ zs = ys @ xs; [] @ xs = [] @ [] \<rbrakk> \<Longrightarrow> ys = zs";
by simp;

text{*\noindent
The second assumption simplifies to @{term"xs = []"}, which in turn
simplifies the first assumption to @{term"zs = ys"}, thus reducing the
conclusion to @{term"ys = ys"} and hence to @{term"True"}.

In some cases this may be too much of a good thing and may lead to
nontermination:
*}

lemma "\<forall>x. f x = g (f (g x)) \<Longrightarrow> f [] = f [] @ []";

txt{*\noindent
cannot be solved by an unmodified application of @{text"simp"} because the
simplification rule @{term"f x = g (f (g x))"} extracted from the assumption
does not terminate. Isabelle notices certain simple forms of
nontermination but not this one. The problem can be circumvented by
explicitly telling the simplifier to ignore the assumptions:
*}

by(simp (no_asm));

text{*\noindent
There are three options that influence the treatment of assumptions:
\begin{description}
\item[@{text"(no_asm)"}]\indexbold{*no_asm}
means that assumptions are completely ignored.
\item[@{text"(no_asm_simp)"}]\indexbold{*no_asm_simp}
means that the assumptions are not simplified but
are used in the simplification of the conclusion.
\item[@{text"(no_asm_use)"}]\indexbold{*no_asm_use}
means that the assumptions are simplified but are not
used in the simplification of each other or the conclusion.
\end{description}
Neither @{text"(no_asm_simp)"} nor @{text"(no_asm_use)"} suffices to simplify
the above problematic subgoal.

Note that only one of the above options is allowed, and it must precede all
other arguments.
*}

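text{*\noindent
For instance, an option can be combined with other modifiers like this
(using @{thm[source]Let_def} merely as an arbitrary example of a rule to add):
\begin{quote}
\isacommand{apply}@{text"(simp (no_asm_simp) add: Let_def)"}
\end{quote}
*}
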

subsubsection{*Rewriting with definitions*}

text{*\index{simplification!with definitions}
Constant definitions (\S\ref{sec:ConstDefinitions}) can
be used as simplification rules, but by default they are not. Hence the
simplifier does not expand them automatically, just as it should be:
definitions are introduced for the purpose of abbreviating complex
concepts. Of course we need to expand the definitions initially to derive
enough lemmas that characterize the concept sufficiently for us to forget the
original definition. For example, given
*}

constdefs exor :: "bool \<Rightarrow> bool \<Rightarrow> bool"
"exor A B \<equiv> (A \<and> \<not>B) \<or> (\<not>A \<and> B)";

text{*\noindent
we may want to prove
*}

lemma "exor A (\<not>A)";

txt{*\noindent
Typically, the opening move consists of \emph{unfolding} the definitions
that we need to get started, but nothing
else:\indexbold{*unfold}\indexbold{definition!unfolding}
*}

apply(simp only:exor_def);

txt{*\noindent
In this particular case, the resulting goal
\begin{isabelle}
~1.~A~{\isasymand}~{\isasymnot}~{\isasymnot}~A~{\isasymor}~{\isasymnot}~A~{\isasymand}~{\isasymnot}~A%
\end{isabelle}
can be proved by simplification. Thus we could have proved the lemma outright
*}(*<*)oops;lemma "exor A (\<not>A)";(*>*)
by(simp add: exor_def)

text{*\noindent
Of course we can also unfold definitions in the middle of a proof.

You should normally not turn a definition permanently into a simplification
rule because this defeats the whole purpose of an abbreviation.

\begin{warn}
If you have defined $f\,x\,y~\isasymequiv~t$ then you can only expand
occurrences of $f$ with at least two arguments. Thus it is safer to define
$f$~\isasymequiv~\isasymlambda$x\,y.\;t$.
\end{warn}
*}
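
text{*\noindent
To illustrate the warning, here is a hypothetical constant @{text nand2}
(the name and definition are invented for this example) defined in the
recommended $\lambda$-style. Because the arguments live on the right-hand
side of the definition, even the partial application @{text"nand2 A"} can
be unfolded:
*}

constdefs nand2 :: "bool \<Rightarrow> bool \<Rightarrow> bool"
"nand2 \<equiv> \<lambda>A B. \<not>(A \<and> B)";

lemma "nand2 A = (\<lambda>B. \<not>(A \<and> B))";
by(simp add: nand2_def);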

subsubsection{*Simplifying let-expressions*}

text{*\index{simplification!of let-expressions}
Proving a goal containing \isaindex{let}-expressions almost invariably
requires the @{text"let"}-con\-structs to be expanded at some point. Since
@{text"let"}-@{text"in"} is just syntactic sugar for a predefined constant
(called @{term"Let"}), expanding @{text"let"}-constructs means rewriting with
@{thm[source]Let_def}:
*}

lemma "(let xs = [] in xs@ys@xs) = ys";
by(simp add: Let_def);

text{*
If, in a particular context, there is no danger of a combinatorial explosion
of nested @{text"let"}s one could even simplify with @{thm[source]Let_def} by
default:
*}
declare Let_def [simp]

subsubsection{*Conditional equations*}

text{*
So far all examples of rewrite rules were equations. The simplifier also
accepts \emph{conditional} equations, for example
*}

lemma hd_Cons_tl[simp]: "xs \<noteq> [] \<Longrightarrow> hd xs # tl xs = xs";
by(case_tac xs, simp, simp);

text{*\noindent
Note the use of ``\ttindexboldpos{,}{$Isar}'' to string together a
sequence of methods. Assuming that the simplification rule
@{term"(rev xs = []) = (xs = [])"}
is present as well,
*}

lemma "xs \<noteq> [] \<Longrightarrow> hd(rev xs) # tl(rev xs) = rev xs";
(*<*)
by(simp);
(*>*)
text{*\noindent
is proved by plain simplification:
the conditional equation @{thm[source]hd_Cons_tl} above
can simplify @{term"hd(rev xs) # tl(rev xs)"} to @{term"rev xs"}
because the corresponding precondition @{term"rev xs ~= []"}
simplifies to @{term"xs ~= []"}, which is exactly the local
assumption of the subgoal.
*}

subsubsection{*Automatic case splits*}

text{*\indexbold{case splits}\index{*split|(}
Goals containing @{text"if"}-expressions are usually proved by case
distinction on the condition of the @{text"if"}. For example the goal
*}

lemma "\<forall>xs. if xs = [] then rev xs = [] else rev xs \<noteq> []";

txt{*\noindent
can be split into
\begin{isabelle}
~1.~{\isasymforall}xs.~(xs~=~[]~{\isasymlongrightarrow}~rev~xs~=~[])~{\isasymand}~(xs~{\isasymnoteq}~[]~{\isasymlongrightarrow}~rev~xs~{\isasymnoteq}~[])
\end{isabelle}
by a degenerate form of simplification
*}

apply(simp only: split: split_if);
(*<*)oops;(*>*)

text{*\noindent
where no simplification rules are included (@{text"only:"} is followed by the
empty list of theorems) but the rule \isaindexbold{split_if} for
splitting @{text"if"}s is added (via the modifier @{text"split:"}). Because
case-splitting on @{text"if"}s is almost always the right proof strategy, the
simplifier performs it automatically. Try \isacommand{apply}@{text"(simp)"}
on the initial goal above.

This splitting idea generalizes from @{text"if"} to \isaindex{case}:
*}

lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs";
txt{*\noindent
becomes
\begin{isabelle}\makeatother
~1.~(xs~=~[]~{\isasymlongrightarrow}~zs~=~xs~@~zs)~{\isasymand}\isanewline
~~~~({\isasymforall}a~list.~xs~=~a~\#~list~{\isasymlongrightarrow}~a~\#~list~@~zs~=~xs~@~zs)
\end{isabelle}
by typing
*}

apply(simp only: split: list.split);
(*<*)oops;(*>*)

text{*\noindent
In contrast to @{text"if"}-expressions, the simplifier does not split
@{text"case"}-expressions by default because this can lead to nontermination
in case of recursive datatypes. Again, if the @{text"only:"} modifier is
dropped, the above goal is solved,
*}
(*<*)
lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs";
(*>*)
by(simp split: list.split);

text{*\noindent%
which \isacommand{apply}@{text"(simp)"} alone will not do.

In general, every datatype $t$ comes with a theorem
$t$@{text".split"} which can be declared to be a \bfindex{split rule} either
locally as above, or by giving it the @{text"split"} attribute globally:
*}

declare list.split [split]

text{*\noindent
The @{text"split"} attribute can be removed with the @{text"del"} modifier,
either locally
*}
(*<*)
lemma "dummy=dummy";
(*>*)
apply(simp split del: split_if);
(*<*)
oops;
(*>*)
text{*\noindent
or globally:
*}
declare list.split [split del]

text{*
The above split rules intentionally only affect the conclusion of a
subgoal. If you want to split an @{text"if"} or @{text"case"}-expression in
the assumptions, you have to apply @{thm[source]split_if_asm} or
$t$@{text".split_asm"}:
*}

lemma "if xs = [] then ys ~= [] else ys = [] ==> xs @ ys ~= []"
apply(simp only: split: split_if_asm);
(*<*)
by(simp_all)
(*>*)

text{*\noindent
In contrast to splitting the conclusion, this actually creates two
separate subgoals (which are solved by @{text"simp_all"}):
\begin{isabelle}
\ \isadigit{1}{\isachardot}\ {\isasymlbrakk}\mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharsemicolon}\ \mbox{ys}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isasymrbrakk}\ {\isasymLongrightarrow}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharat}\ \mbox{ys}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\isanewline
\ \isadigit{2}{\isachardot}\ {\isasymlbrakk}\mbox{xs}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isacharsemicolon}\ \mbox{ys}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{xs}\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}
\end{isabelle}
If you need to split both in the assumptions and the conclusion,
use $t$@{text".splits"} which subsumes $t$@{text".split"} and
$t$@{text".split_asm"}. Analogously, there is @{thm[source]if_splits}.

\begin{warn}
The simplifier merely simplifies the condition of an \isa{if} but not the
\isa{then} or \isa{else} parts. The latter are simplified only after the
condition reduces to \isa{True} or \isa{False}, or after splitting. The
same is true for \isaindex{case}-expressions: only the selector is
simplified at first, until either the expression reduces to one of the
cases or it is split.
\end{warn}

\index{*split|)}
*}

subsubsection{*Arithmetic*}

text{*\index{arithmetic}
The simplifier routinely solves a small class of linear arithmetic formulae
(over type \isa{nat} and other numeric types): it only takes into account
assumptions and conclusions that are (possibly negated) (in)equalities
(@{text"="}, \isasymle, @{text"<"}) and it only knows about addition. Thus
*}

lemma "\<lbrakk> \<not> m < n; m < n+1 \<rbrakk> \<Longrightarrow> m = n"
(*<*)by(auto)(*>*)

text{*\noindent
is proved by simplification, whereas the only slightly more complex
*}

lemma "\<not> m < n \<and> m < n+1 \<Longrightarrow> m = n";
(*<*)by(arith)(*>*)

text{*\noindent
is not proved by simplification and requires @{text arith}.
*}


subsubsection{*Tracing*}
text{*\indexbold{tracing the simplifier}
Using the simplifier effectively may take a bit of experimentation. Set the
\isaindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
on:
*}

ML "set trace_simp";
lemma "rev [a] = []";
apply(simp);
(*<*)oops(*>*)

text{*\noindent
produces the trace

\begin{ttbox}\makeatother
Applying instance of rewrite rule:
rev (?x1 \# ?xs1) == rev ?xs1 @ [?x1]
Rewriting:
rev [x] == rev [] @ [x]
Applying instance of rewrite rule:
rev [] == []
Rewriting:
rev [] == []
Applying instance of rewrite rule:
[] @ ?y == ?y
Rewriting:
[] @ [x] == [x]
Applying instance of rewrite rule:
?x3 \# ?t3 = ?t3 == False
Rewriting:
[x] = [] == False
\end{ttbox}

In more complicated cases, the trace can be quite lengthy, especially since
invocations of the simplifier are often nested (e.g.\ when solving conditions
of rewrite rules). Thus it is advisable to reset it:
*}

ML "reset trace_simp";

(*<*)
end
(*>*)