(*
  doc-src/TutorialI/Misc/simp.thy
  Author: nipkow
  Wed Dec 13 09:39:53 2000 +0100 (2000-12-13), changeset 10654
*)
(*<*)
theory simp = Main:
(*>*)

subsubsection{*Simplification rules*}

text{*\indexbold{simplification rule}
To facilitate simplification, theorems can be declared to be simplification
rules (with the help of the attribute @{text"[simp]"}\index{*simp
  (attribute)}), in which case proofs by simplification make use of these
rules automatically. In addition the constructs \isacommand{datatype} and
\isacommand{primrec} (and a few others) invisibly declare useful
simplification rules. Explicit definitions are \emph{not} declared
simplification rules automatically!

Not merely equations but pretty much any theorem can become a simplification
rule. The simplifier will try to make sense of it.  For example, a theorem
@{prop"~P"} is automatically turned into @{prop"P = False"}. The details
are explained in \S\ref{sec:SimpHow}.

The simplification attribute of theorems can be turned on and off as follows:
\begin{quote}
\isacommand{declare} \textit{theorem-name}@{text"[simp]"}\\
\isacommand{declare} \textit{theorem-name}@{text"[simp del]"}
\end{quote}
As a rule of thumb, equations that really simplify (like @{prop"rev(rev xs) =
 xs"} and @{prop"xs @ [] = xs"}) should be made simplification
rules.  Those of a more specific nature (e.g.\ distributivity laws, which
alter the structure of terms considerably) should only be used selectively,
i.e.\ they should not be default simplification rules.  Conversely, it may
also happen that a simplification rule needs to be disabled in certain
proofs.  Frequent changes in the simplification status of a theorem may
indicate a badly designed theory.
\begin{warn}
  Simplification may not terminate, for example if both $f(x) = g(x)$ and
  $g(x) = f(x)$ are simplification rules. It is the user's responsibility not
  to include simplification rules that can lead to nontermination, either on
  their own or in combination with other simplification rules.
\end{warn}
*}
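text{*\noindent
As a small illustration of the \isacommand{declare} syntax above (assuming
the standard list lemma @{thm[source]rev_rev_ident}, of the form
@{prop"rev(rev xs) = xs"}; it is already a default simplification rule, so
this merely toggles its status):
*}

declare rev_rev_ident [simp del]
declare rev_rev_ident [simp]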

subsubsection{*The simplification method*}

text{*\index{*simp (method)|bold}
The general format of the simplification method is
\begin{quote}
@{text simp} \textit{list of modifiers}
\end{quote}
where the list of \emph{modifiers} helps to fine-tune the behaviour and may
be empty. Most if not all of the proofs seen so far could have been performed
with @{text simp} instead of \isa{auto}, except that @{text simp} attacks
only the first subgoal and may thus need to be repeated---use
\isaindex{simp_all} to simplify all subgoals.
Note that @{text simp} fails if nothing changes.
*}
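text{*\noindent
For example, a conjunction split into two subgoals by the rule
@{thm[source]conjI} can be finished with a single @{text simp_all}, where
plain @{text simp} would have to be applied to each subgoal in turn (a small
illustrative lemma, not taken from the text above):
*}

lemma "xs @ [] = xs \<and> rev [x] = [x]";
apply(rule conjI);
apply(simp_all);
done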

subsubsection{*Adding and deleting simplification rules*}

text{*
If a certain theorem is merely needed in a few proofs by simplification,
we do not need to make it a global simplification rule. Instead we can modify
the set of simplification rules used in a simplification step by adding rules
to it and/or deleting rules from it. The two modifiers for this are
\begin{quote}
@{text"add:"} \textit{list of theorem names}\\
@{text"del:"} \textit{list of theorem names}
\end{quote}
In case you want to use only a specific list of theorems and ignore all
others:
\begin{quote}
@{text"only:"} \textit{list of theorem names}
\end{quote}
*}
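text{*\noindent
For instance, assuming the standard list lemma @{thm[source]rev_rev_ident}
(of the form @{prop"rev(rev xs) = xs"}), the @{text"only:"} modifier
restricts the simplifier to exactly that rule (plus trivialities such as
reflexivity):
*}

lemma "rev(rev(rev xs)) = rev xs";
apply(simp only: rev_rev_ident);
done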

subsubsection{*Assumptions*}

text{*\index{simplification!with/of assumptions}
By default, assumptions are part of the simplification process: they are used
as simplification rules and are simplified themselves. For example:
*}

lemma "\<lbrakk> xs @ zs = ys @ xs; [] @ xs = [] @ [] \<rbrakk> \<Longrightarrow> ys = zs";
apply simp;
done

text{*\noindent
The second assumption simplifies to @{term"xs = []"}, which in turn
simplifies the first assumption to @{term"zs = ys"}, thus reducing the
conclusion to @{term"ys = ys"} and hence to @{term"True"}.

In some cases this may be too much of a good thing and may lead to
nontermination:
*}

lemma "\<forall>x. f x = g (f (g x)) \<Longrightarrow> f [] = f [] @ []";

txt{*\noindent
cannot be solved by an unmodified application of @{text"simp"} because the
simplification rule @{term"f x = g (f (g x))"} extracted from the assumption
does not terminate. Isabelle notices certain simple forms of
nontermination but not this one. The problem can be circumvented by
explicitly telling the simplifier to ignore the assumptions:
*}

apply(simp (no_asm));
done

text{*\noindent
There are three options that influence the treatment of assumptions:
\begin{description}
\item[@{text"(no_asm)"}]\indexbold{*no_asm}
 means that assumptions are completely ignored.
\item[@{text"(no_asm_simp)"}]\indexbold{*no_asm_simp}
 means that the assumptions are not simplified but
  are used in the simplification of the conclusion.
\item[@{text"(no_asm_use)"}]\indexbold{*no_asm_use}
 means that the assumptions are simplified but are not
  used in the simplification of each other or the conclusion.
\end{description}
Neither @{text"(no_asm_simp)"} nor @{text"(no_asm_use)"} allows us to
simplify the above problematic subgoal.

Note that only one of the above options is allowed, and it must precede all
other arguments.
*}
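text{*\noindent
As a sketch (not taken from the text above), @{text"(no_asm_simp)"} still
uses the assumptions to simplify the conclusion. Here the assumption
@{prop"xs = []"} rewrites the conclusion to @{prop"ys = ys"}:
*}

lemma "xs = [] \<Longrightarrow> xs @ ys = ys";
apply(simp (no_asm_simp));
done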

subsubsection{*Rewriting with definitions*}

text{*\index{simplification!with definitions}
Constant definitions (\S\ref{sec:ConstDefinitions}) can
be used as simplification rules, but by default they are not.  Hence the
simplifier does not expand them automatically, and rightly so:
definitions are introduced for the purpose of abbreviating complex
concepts. Of course we need to expand the definitions initially to derive
enough lemmas that characterize the concept sufficiently for us to forget the
original definition. For example, given
*}

constdefs exor :: "bool \<Rightarrow> bool \<Rightarrow> bool"
         "exor A B \<equiv> (A \<and> \<not>B) \<or> (\<not>A \<and> B)";

text{*\noindent
we may want to prove
*}

lemma "exor A (\<not>A)";
txt{*\noindent
Typically, the opening move is to \emph{unfold} the definition(s), which we
need to get started, but nothing
else:\indexbold{*unfold}\indexbold{definition!unfolding}
*}

apply(simp only: exor_def);

txt{*\noindent
In this particular case, the resulting goal
@{subgoals[display,indent=0]}
can be proved by simplification. Thus we could have proved the lemma outright by
*}(*<*)oops;lemma "exor A (\<not>A)";(*>*)
apply(simp add: exor_def)
(*<*)done(*>*)
text{*\noindent
Of course we can also unfold definitions in the middle of a proof.

You should normally not turn a definition permanently into a simplification
rule because this defeats the whole purpose of an abbreviation.

\begin{warn}
  If you have defined $f\,x\,y~\isasymequiv~t$ then you can only expand
  occurrences of $f$ with at least two arguments. Thus it is safer to define
  $f$~\isasymequiv~\isasymlambda$x\,y.\;t$.
\end{warn}
*}
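text{*\noindent
Following the advice in the warning, a hypothetical curried constant (the
name @{text nand} is purely illustrative) can be defined in the safer,
fully @{text \<lambda>}-abstracted form so that it unfolds even when applied to
fewer than two arguments:
*}

constdefs nand :: "bool \<Rightarrow> bool \<Rightarrow> bool"
         "nand \<equiv> \<lambda>A B. \<not>(A \<and> B)";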

subsubsection{*Simplifying let-expressions*}

text{*\index{simplification!of let-expressions}
Proving a goal containing \isaindex{let}-expressions almost invariably
requires the @{text"let"}-con\-structs to be expanded at some point. Since
@{text"let"}-@{text"in"} is just syntactic sugar for a predefined constant
(called @{term"Let"}), expanding @{text"let"}-constructs means rewriting with
@{thm[source]Let_def}:
*}

lemma "(let xs = [] in xs@ys@xs) = ys";
apply(simp add: Let_def);
done
text{*
If, in a particular context, there is no danger of a combinatorial explosion
of nested @{text"let"}s, one could even simplify with @{thm[source]Let_def} by
default:
*}
declare Let_def [simp]

subsubsection{*Conditional equations*}

text{*
So far all examples of rewrite rules were equations. The simplifier also
accepts \emph{conditional} equations, for example
*}

lemma hd_Cons_tl[simp]: "xs \<noteq> []  \<Longrightarrow>  hd xs # tl xs = xs";
apply(case_tac xs, simp, simp);
done

text{*\noindent
Note the use of ``\ttindexboldpos{,}{$Isar}'' to string together a
sequence of methods. Assuming that the simplification rule
@{term"(rev xs = []) = (xs = [])"}
is present as well,
*}

lemma "xs \<noteq> [] \<Longrightarrow> hd(rev xs) # tl(rev xs) = rev xs";
(*<*)
by(simp);
(*>*)
text{*\noindent
is proved by plain simplification:
the conditional equation @{thm[source]hd_Cons_tl} above
can simplify @{term"hd(rev xs) # tl(rev xs)"} to @{term"rev xs"}
because the corresponding precondition @{term"rev xs ~= []"}
simplifies to @{term"xs ~= []"}, which is exactly the local
assumption of the subgoal.
*}


subsubsection{*Automatic case splits*}

text{*\indexbold{case splits}\index{*split (method, attr.)|(}
Goals containing @{text"if"}-expressions are usually proved by case
distinction on the condition of the @{text"if"}. For example the goal
*}

lemma "\<forall>xs. if xs = [] then rev xs = [] else rev xs \<noteq> []";

txt{*\noindent
can be split by a special method @{text split}:
*}

apply(split split_if)

txt{*\noindent
@{subgoals[display,indent=0]}
where \isaindexbold{split_if} is a theorem that expresses splitting of
@{text"if"}s. Because
case-splitting on @{text"if"}s is almost always the right proof strategy, the
simplifier performs it automatically. Try \isacommand{apply}@{text"(simp)"}
on the initial goal above.

This splitting idea generalizes from @{text"if"} to \isaindex{case}:
*}(*<*)by simp(*>*)
lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs";
apply(split list.split);

txt{*
@{subgoals[display,indent=0]}
In contrast to @{text"if"}-expressions, the simplifier does not split
@{text"case"}-expressions by default because this can lead to nontermination
in case of recursive datatypes. Therefore the simplifier has a modifier
@{text split} for adding further splitting rules explicitly. This means the
above lemma can be proved in one step by
*}
(*<*)oops;
lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs";
(*>*)
apply(simp split: list.split);
(*<*)done(*>*)
text{*\noindent
whereas \isacommand{apply}@{text"(simp)"} alone will not succeed.

In general, every datatype $t$ comes with a theorem
$t$@{text".split"} which can be declared to be a \bfindex{split rule} either
locally as above, or by giving it the @{text"split"} attribute globally:
*}

declare list.split [split]

text{*\noindent
The @{text"split"} attribute can be removed with the @{text"del"} modifier,
either locally
*}
(*<*)
lemma "dummy=dummy";
(*>*)
apply(simp split del: split_if);
(*<*)
oops;
(*>*)
text{*\noindent
or globally:
*}
declare list.split [split del]

text{*
In polished proofs the @{text split} method is rarely used on its own
but always as part of the simplifier. However, if a goal contains
multiple splittable constructs, the @{text split} method can be
helpful in selectively exploring the effects of splitting.

The above split rules intentionally only affect the conclusion of a
subgoal.  If you want to split an @{text"if"} or @{text"case"}-expression in
the assumptions, you have to apply @{thm[source]split_if_asm} or
$t$@{text".split_asm"}:
*}

lemma "if xs = [] then ys \<noteq> [] else ys = [] \<Longrightarrow> xs @ ys \<noteq> []"
apply(split split_if_asm)

txt{*\noindent
In contrast to splitting the conclusion, this actually creates two
separate subgoals (which are solved by @{text"simp_all"}):
@{subgoals[display,indent=0]}
If you need to split both in the assumptions and the conclusion,
use $t$@{text".splits"} which subsumes $t$@{text".split"} and
$t$@{text".split_asm"}. Analogously, there is @{thm[source]if_splits}.

\begin{warn}
  The simplifier merely simplifies the condition of an \isa{if} but not the
  \isa{then} or \isa{else} parts. The latter are simplified only after the
  condition reduces to \isa{True} or \isa{False}, or after splitting. The
  same is true for \isaindex{case}-expressions: only the selector is
  simplified at first, until either the expression reduces to one of the
  cases or it is split.
\end{warn}\index{*split (method, attr.)|)}
*}
(*<*)
by(simp_all)
(*>*)

subsubsection{*Arithmetic*}

text{*\index{arithmetic}
The simplifier routinely solves a small class of linear arithmetic formulae
(over type \isa{nat} and other numeric types): it only takes into account
assumptions and conclusions that are (possibly negated) (in)equalities
(@{text"="}, \isasymle, @{text"<"}) and it only knows about addition. Thus
*}

lemma "\<lbrakk> \<not> m < n; m < n+1 \<rbrakk> \<Longrightarrow> m = n"
(*<*)by(auto)(*>*)

text{*\noindent
is proved by simplification, whereas the only slightly more complex
*}

lemma "\<not> m < n \<and> m < n+1 \<Longrightarrow> m = n";
(*<*)by(arith)(*>*)

text{*\noindent
is not proved by simplification and requires @{text arith}.
*}

subsubsection{*Tracing*}
text{*\indexbold{tracing the simplifier}
Using the simplifier effectively may take a bit of experimentation.  Set the
\isaindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
on:
*}

ML "set trace_simp";
lemma "rev [a] = []";
apply(simp);
(*<*)oops(*>*)

text{*\noindent
produces the trace

\begin{ttbox}\makeatother
Applying instance of rewrite rule:
rev (?x1 \# ?xs1) == rev ?xs1 @ [?x1]
Rewriting:
rev [x] == rev [] @ [x]
Applying instance of rewrite rule:
rev [] == []
Rewriting:
rev [] == []
Applying instance of rewrite rule:
[] @ ?y == ?y
Rewriting:
[] @ [x] == [x]
Applying instance of rewrite rule:
?x3 \# ?t3 = ?t3 == False
Rewriting:
[x] = [] == False
\end{ttbox}

In more complicated cases, the trace can be quite lengthy, especially since
invocations of the simplifier are often nested (e.g.\ when solving conditions
of rewrite rules). Thus it is advisable to reset the flag afterwards:
*}

ML "reset trace_simp";

(*<*)
end
(*>*)