%% $Id$
\chapter{Simplification} \label{simpchap}
\index{simplification|(}

This chapter describes Isabelle's generic simplification package, which
provides a suite of simplification tactics.  It performs conditional and
unconditional rewriting and uses contextual information (`local
assumptions').  It provides a few general hooks, which can provide
automatic case splits during rewriting, for example.  The simplifier is set
up for many of Isabelle's logics: {\tt FOL}, {\tt ZF}, {\tt LCF} and {\tt
HOL}.


\section{Simplification sets}\index{simplification sets}
The simplification tactics are controlled by {\bf simpsets}.  These consist
of five components: rewrite rules, congruence rules, the subgoaler, the
solver and the looper.  The simplifier should be set up with sensible
defaults so that most simplifier calls specify only rewrite rules.
Experienced users can exploit the other components to streamline proofs.


\subsection{Rewrite rules}\index{rewrite rules}
Rewrite rules are theorems expressing some form of equality:
\begin{eqnarray*}
  Suc(\Var{m}) + \Var{n} &=&      \Var{m} + Suc(\Var{n}) \\
  \Var{P}\conj\Var{P}    &\bimp&  \Var{P} \\
  \Var{A} \union \Var{B} &\equiv& \{x.x\in \Var{A} \disj x\in \Var{B}\}
\end{eqnarray*}
{\bf Conditional} rewrites such as $\Var{m}<\Var{n} \Imp \Var{m}/\Var{n} =
0$ are permitted; the conditions can be arbitrary terms.  The infix
operation \ttindex{addsimps} adds new rewrite rules, while
\ttindex{delsimps} deletes rewrite rules from a simpset.
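For example, given two hypothetical rewrite rules {\tt rule1} and {\tt
rule2}, a simpset can be extended and later pruned as follows (a sketch
only):
\begin{ttbox}
val ss1 = empty_ss addsimps [rule1, rule2];
val ss2 = ss1 delsimps [rule2];
\end{ttbox}
Here {\tt empty_ss} is the empty simpset; in practice one usually extends a
logic's default simpset instead.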

Internally, all rewrite rules are translated into meta-equalities, theorems
with conclusion $lhs \equiv rhs$.  Each simpset contains a function for
extracting equalities from arbitrary theorems.  For example,
$\neg(\Var{x}\in \{\})$ could be turned into $\Var{x}\in \{\} \equiv
False$.  This function can be installed using \ttindex{setmksimps} but only
the definer of a logic should need to do this; see \S\ref{sec:setmksimps}.
The function processes theorems added by \ttindex{addsimps} as well as
local assumptions.


\begin{warn}\index{rewrite rules}
The left-hand side of a rewrite rule must look like a first-order term:
after eta-contraction, none of its unknowns should have arguments.  Hence
${\Var{i}+(\Var{j}+\Var{k})} = {(\Var{i}+\Var{j})+\Var{k}}$ and $\neg(\forall
x.\Var{P}(x)) \bimp (\exists x.\neg\Var{P}(x))$ are acceptable, whereas
$\Var{f}(\Var{x})\in {\tt range}(\Var{f}) = True$ is not.  However, you can
replace the offending subterms by new variables and conditions: $\Var{y} =
\Var{f}(\Var{x}) \Imp \Var{y}\in {\tt range}(\Var{f}) = True$ is again
acceptable.
\end{warn}

\subsection{Congruence rules}\index{congruence rules}
Congruence rules are meta-equalities of the form
\[ \List{\dots} \Imp
   f(\Var{x@1},\ldots,\Var{x@n}) \equiv f(\Var{y@1},\ldots,\Var{y@n}).
\]
This governs the simplification of the arguments of~$f$.  For
example, some arguments can be simplified under additional assumptions:
\[ \List{\Var{P@1} \bimp \Var{Q@1};\; \Var{Q@1} \Imp \Var{P@2} \bimp \Var{Q@2}}
   \Imp (\Var{P@1} \imp \Var{P@2}) \equiv (\Var{Q@1} \imp \Var{Q@2})
\]
Given this rule, the simplifier assumes $Q@1$ and extracts rewrite rules
from it when simplifying~$P@2$.  Such local assumptions are effective for
rewriting formulae such as $x=0\imp y+x=y$.  The congruence rule for bounded
quantifiers can also supply contextual information:
\begin{eqnarray*}
  &&\List{\Var{A}=\Var{B};\;
          \Forall x. x\in \Var{B} \Imp \Var{P}(x) = \Var{Q}(x)} \Imp{} \\
 &&\qquad\qquad
    (\forall x\in \Var{A}.\Var{P}(x)) = (\forall x\in \Var{B}.\Var{Q}(x))
\end{eqnarray*}
The congruence rule for conditional expressions can supply contextual
information for simplifying the arms:
\[ \List{\Var{p}=\Var{q};~ \Var{q} \Imp \Var{a}=\Var{c};~
         \neg\Var{q} \Imp \Var{b}=\Var{d}} \Imp
   if(\Var{p},\Var{a},\Var{b}) \equiv if(\Var{q},\Var{c},\Var{d})
\]

A congruence rule can also suppress simplification of certain arguments.
Here is an alternative congruence rule for conditional expressions:
\[ \Var{p}=\Var{q} \Imp
   if(\Var{p},\Var{a},\Var{b}) \equiv if(\Var{q},\Var{a},\Var{b})
\]
Only the first argument is simplified; the others remain unchanged.
This can make simplification much faster, but may require an extra case split
to prove the goal.

Congruence rules are added using \ttindexbold{addeqcongs}.  Their conclusion
must be a meta-equality, as in the examples above.  It is more
natural to derive the rules with object-logic equality, for example
\[ \List{\Var{P@1} \bimp \Var{Q@1};\; \Var{Q@1} \Imp \Var{P@2} \bimp \Var{Q@2}}
   \Imp (\Var{P@1} \imp \Var{P@2}) \bimp (\Var{Q@1} \imp \Var{Q@2}).
\]
Each object-logic should define an operator called \ttindex{addcongs} that
expects object-equalities and translates them into meta-equalities.

\subsection{*The subgoaler}
The subgoaler is the tactic used to solve subgoals arising out of
conditional rewrite rules or congruence rules.  The default should be
simplification itself.  Occasionally this strategy needs to be changed.  For
example, if the premise of a conditional rule is an instance of its
conclusion, as in $Suc(\Var{m}) < \Var{n} \Imp \Var{m} < \Var{n}$, the
default strategy could loop.

The subgoaler can be set explicitly with \ttindex{setsubgoaler}.  For
example, the subgoaler
\begin{ttbox}
fun subgoal_tac ss = resolve_tac (prems_of_ss ss) ORELSE'
                     asm_simp_tac ss;
\end{ttbox}
tries to solve the subgoal with one of the premises and calls
simplification only if that fails; here {\tt prems_of_ss} extracts the
current premises from a simpset.

\subsection{*The solver}
The solver is a tactic that attempts to solve a subgoal after
simplification.  Typically it just proves trivial subgoals such as {\tt
True} and $t=t$.  It could use sophisticated means such as {\tt
fast_tac}, though that could make simplification expensive.  The solver
is set using \ttindex{setsolver}.
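For example, a rudimentary solver that first tries the local premises and
then proof by assumption could be installed as follows (a sketch only, not
the solver of any distributed logic):
\begin{ttbox}
val my_ss = ss setsolver (fn prems => resolve_tac prems ORELSE'
                                      assume_tac);
\end{ttbox}
The supplied function has type {\tt thm list -> int -> tactic}, as required
by {\tt setsolver}; the theorem list it receives consists of the local
assumptions.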

Rewriting does not instantiate unknowns.  For example, rewriting cannot
prove $a\in \Var{A}$ since this requires instantiating~$\Var{A}$.  The
solver, however, is an arbitrary tactic and may instantiate unknowns as it
pleases.  This is the only way the simplifier can handle a conditional
rewrite rule whose condition contains extra variables.

\index{assumptions!in simplification}
The tactic is presented with the full goal, including the assumptions.
Hence it can use those assumptions (say by calling {\tt assume_tac}) even
inside {\tt simp_tac}, which otherwise does not use assumptions.  The
solver is also supplied a list of theorems, namely assumptions that hold in
the local context.

The subgoaler is also used to solve the premises of congruence rules, which
are usually of the form $s = \Var{x}$, where $s$ needs to be simplified and
$\Var{x}$ needs to be instantiated with the result.  Hence the subgoaler
should call the simplifier at some point.  The simplifier will then call the
solver, which must therefore be prepared to solve goals of the form $t =
\Var{x}$, usually by reflexivity.  In particular, reflexivity should be
tried before any of the fancy tactics like {\tt fast_tac}.

It may even happen that, due to simplification, the subgoal is no longer an
equality.  For example $False \bimp \Var{Q}$ could be rewritten to
$\neg\Var{Q}$.  To cover this case, the solver could try resolving with the
theorem $\neg False$.

\begin{warn}
If the simplifier aborts with the message {\tt Failed congruence proof!},
then the subgoaler or solver has failed to prove a premise of a
congruence rule.  This should never occur and indicates that the
subgoaler or solver is faulty.
\end{warn}


\subsection{*The looper}
The looper is a tactic that is applied after simplification, in case the
solver failed to solve the simplified goal.  If the looper succeeds, the
simplification process is started all over again.  Each of the subgoals
generated by the looper is attacked in turn, in reverse order.  A
typical looper is case splitting: the expansion of a conditional.  Another
possibility is to apply an elimination rule on the assumptions.  More
adventurous loopers could start an induction.  The looper is set with
\ttindex{setloop}.
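For example, case splitting on conditional expressions can be installed as
the looper by means of the splitter tactic (loaded from {\tt
Provers/splitter.ML}).  This is only a sketch, where {\tt expand_if} stands
for the logic's case-splitting theorem for conditionals:
\begin{ttbox}
val if_ss = ss setloop (split_tac [expand_if]);
\end{ttbox}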


\begin{figure}
\index{*simpset ML type}
\index{*simp_tac}
\index{*asm_simp_tac}
\index{*asm_full_simp_tac}
\index{*addeqcongs}
\index{*addsimps}
\index{*delsimps}
\index{*empty_ss}
\index{*merge_ss}
\index{*setsubgoaler}
\index{*setsolver}
\index{*setloop}
\index{*setmksimps}
\index{*prems_of_ss}
\index{*rep_ss}
\begin{ttbox}
infix addsimps addeqcongs delsimps
      setsubgoaler setsolver setloop setmksimps;

signature SIMPLIFIER =
  sig
  type simpset
  val simp_tac:          simpset -> int -> tactic
  val asm_simp_tac:      simpset -> int -> tactic
  val asm_full_simp_tac: simpset -> int -> tactic\smallskip
  val addeqcongs:   simpset * thm list -> simpset
  val addsimps:     simpset * thm list -> simpset
  val delsimps:     simpset * thm list -> simpset
  val empty_ss:     simpset
  val merge_ss:     simpset * simpset -> simpset
  val setsubgoaler: simpset * (simpset -> int -> tactic) -> simpset
  val setsolver:    simpset * (thm list -> int -> tactic) -> simpset
  val setloop:      simpset * (int -> tactic) -> simpset
  val setmksimps:   simpset * (thm -> thm list) -> simpset
  val prems_of_ss:  simpset -> thm list
  val rep_ss:       simpset -> \{simps: thm list, congs: thm list\}
  end;
\end{ttbox}
\caption{The simplifier primitives} \label{SIMPLIFIER}
\end{figure}


\section{The simplification tactics} \label{simptactics}
\index{simplification!tactics}
\index{tactics!simplification}

The actual simplification work is performed by the following tactics.  The
rewriting strategy is strictly bottom-up, except for congruence rules, which
are applied while descending into a term.  Conditions in conditional rewrite
rules are solved recursively before the rewrite rule is applied.

There are three basic simplification tactics:
\begin{ttdescription}
\item[\ttindexbold{simp_tac} $ss$ $i$] simplifies subgoal~$i$ using the rules
  in~$ss$.  It may solve the subgoal completely if it has become trivial,
  using the solver.

\item[\ttindexbold{asm_simp_tac}]\index{assumptions!in simplification}
  is like \verb$simp_tac$, but extracts additional rewrite rules from the
  assumptions.

\item[\ttindexbold{asm_full_simp_tac}] is like \verb$asm_simp_tac$, but also
  simplifies the assumptions one by one, using each assumption in the
  simplification of the following ones.
\end{ttdescription}
Using the simplifier effectively may take a bit of experimentation.  The
tactics can be traced with the ML command \verb$trace_simp := true$.  To
remind yourself of what is in a simpset, use the function \verb$rep_ss$ to
return its simplification and congruence rules.
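For example, the contents of any simpset~{\tt ss} can be inspected by
pattern matching on the record that {\tt rep_ss} returns (see
Figure~\ref{SIMPLIFIER} for its type):
\begin{ttbox}
val \{simps, congs\} = rep_ss ss;
\end{ttbox}
This binds the rewrite rules to {\tt simps} and the congruence rules to
{\tt congs}.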


\section{Examples using the simplifier}
\index{examples!of simplification}
Assume we are working within {\tt FOL} and that
\begin{ttdescription}
\item[Nat.thy]
  is a theory including the constants $0$, $Suc$ and $+$,
\item[add_0]
  is the rewrite rule $0+\Var{n} = \Var{n}$,
\item[add_Suc]
  is the rewrite rule $Suc(\Var{m})+\Var{n} = Suc(\Var{m}+\Var{n})$,
\item[induct]
  is the induction rule $\List{\Var{P}(0);\; \Forall x. \Var{P}(x)\Imp
  \Var{P}(Suc(x))} \Imp \Var{P}(\Var{n})$,
\item[FOL_ss]
  is a basic simpset for {\tt FOL}.%
\footnote{These examples reside in the file {\tt FOL/ex/nat.ML}.}
\end{ttdescription}

We create a simpset for natural numbers by extending~{\tt FOL_ss}:
\begin{ttbox}
val add_ss = FOL_ss addsimps [add_0, add_Suc];
\end{ttbox}

\subsection{A trivial example}
Proofs by induction typically involve simplification.  Here is a proof
that~0 is a right identity:
\begin{ttbox}
goal Nat.thy "m+0 = m";
{\out Level 0}
{\out m + 0 = m}
{\out  1. m + 0 = m}
\end{ttbox}
The first step is to perform induction on the variable~$m$.  This returns a
base case and inductive step as two subgoals:
\begin{ttbox}
by (res_inst_tac [("n","m")] induct 1);
{\out Level 1}
{\out m + 0 = m}
{\out  1. 0 + 0 = 0}
{\out  2. !!x. x + 0 = x ==> Suc(x) + 0 = Suc(x)}
\end{ttbox}
Simplification solves the first subgoal trivially:
\begin{ttbox}
by (simp_tac add_ss 1);
{\out Level 2}
{\out m + 0 = m}
{\out  1. !!x. x + 0 = x ==> Suc(x) + 0 = Suc(x)}
\end{ttbox}
The remaining subgoal requires \ttindex{asm_simp_tac} in order to use the
induction hypothesis as a rewrite rule:
\begin{ttbox}
by (asm_simp_tac add_ss 1);
{\out Level 3}
{\out m + 0 = m}
{\out No subgoals!}
\end{ttbox}

\subsection{An example of tracing}
Let us prove a similar result involving more complex terms.  The two
equations together can be used to prove that addition is commutative.
\begin{ttbox}
goal Nat.thy "m+Suc(n) = Suc(m+n)";
{\out Level 0}
{\out m + Suc(n) = Suc(m + n)}
{\out  1. m + Suc(n) = Suc(m + n)}
\end{ttbox}
We again perform induction on~$m$ and get two subgoals:
\begin{ttbox}
by (res_inst_tac [("n","m")] induct 1);
{\out Level 1}
{\out m + Suc(n) = Suc(m + n)}
{\out  1. 0 + Suc(n) = Suc(0 + n)}
{\out  2. !!x. x + Suc(n) = Suc(x + n) ==>}
{\out          Suc(x) + Suc(n) = Suc(Suc(x) + n)}
\end{ttbox}
Simplification solves the first subgoal, this time rewriting two
occurrences of~0:
\begin{ttbox}
by (simp_tac add_ss 1);
{\out Level 2}
{\out m + Suc(n) = Suc(m + n)}
{\out  1. !!x. x + Suc(n) = Suc(x + n) ==>}
{\out          Suc(x) + Suc(n) = Suc(Suc(x) + n)}
\end{ttbox}
Switching tracing on illustrates how the simplifier solves the remaining
subgoal:
\begin{ttbox}
trace_simp := true;
by (asm_simp_tac add_ss 1);
\ttbreak
{\out Rewriting:}
{\out Suc(x) + Suc(n) == Suc(x + Suc(n))}
\ttbreak
{\out Rewriting:}
{\out x + Suc(n) == Suc(x + n)}
\ttbreak
{\out Rewriting:}
{\out Suc(x) + n == Suc(x + n)}
\ttbreak
{\out Rewriting:}
{\out Suc(Suc(x + n)) = Suc(Suc(x + n)) == True}
\ttbreak
{\out Level 3}
{\out m + Suc(n) = Suc(m + n)}
{\out No subgoals!}
\end{ttbox}
Many variations are possible.  At Level~1 (in either example) we could have
solved both subgoals at once using the tactical \ttindex{ALLGOALS}:
\begin{ttbox}
by (ALLGOALS (asm_simp_tac add_ss));
{\out Level 2}
{\out m + Suc(n) = Suc(m + n)}
{\out No subgoals!}
\end{ttbox}

\subsection{Free variables and simplification}
Here is a conjecture to be proved for an arbitrary function~$f$ satisfying
the law $f(Suc(\Var{n})) = Suc(f(\Var{n}))$:
\begin{ttbox}
val [prem] = goal Nat.thy
    "(!!n. f(Suc(n)) = Suc(f(n))) ==> f(i+j) = i+f(j)";
{\out Level 0}
{\out f(i + j) = i + f(j)}
{\out  1. f(i + j) = i + f(j)}
\ttbreak
{\out val prem = "f(Suc(?n)) = Suc(f(?n))}
{\out             [!!n. f(Suc(n)) = Suc(f(n))]" : thm}
\end{ttbox}
In the theorem~{\tt prem}, note that $f$ is a free variable while
$\Var{n}$ is a schematic variable.
\begin{ttbox}
by (res_inst_tac [("n","i")] induct 1);
{\out Level 1}
{\out f(i + j) = i + f(j)}
{\out  1. f(0 + j) = 0 + f(j)}
{\out  2. !!x. f(x + j) = x + f(j) ==> f(Suc(x) + j) = Suc(x) + f(j)}
\end{ttbox}
We simplify each subgoal in turn.  The first one is trivial:
\begin{ttbox}
by (simp_tac add_ss 1);
{\out Level 2}
{\out f(i + j) = i + f(j)}
{\out  1. !!x. f(x + j) = x + f(j) ==> f(Suc(x) + j) = Suc(x) + f(j)}
\end{ttbox}
The remaining subgoal requires rewriting by the premise, so we add it to
{\tt add_ss}:%
\footnote{The previous simplifier required congruence rules for function
  variables like~$f$ in order to simplify their arguments.  It was more
  general than the current simplifier, but harder to use and slower.  The
  present simplifier can be given congruence rules to realize non-standard
  simplification of a function's arguments, but this is seldom necessary.}
\begin{ttbox}
by (asm_simp_tac (add_ss addsimps [prem]) 1);
{\out Level 3}
{\out f(i + j) = i + f(j)}
{\out No subgoals!}
\end{ttbox}


\section{*Permutative rewrite rules}
\index{rewrite rules!permutative|(}

A rewrite rule is {\bf permutative} if the left-hand side and right-hand
side are the same up to renaming of variables.  The most common permutative
rule is commutativity: $x+y = y+x$.  Other examples include $(x-y)-z =
(x-z)-y$ in arithmetic and $insert(x,insert(y,A)) = insert(y,insert(x,A))$
for sets.  Such rules are common enough to merit special attention.

Because ordinary rewriting loops given such rules, the simplifier employs a
special strategy, called {\bf ordered rewriting}\index{rewriting!ordered}.
There is a built-in lexicographic ordering on terms.  A permutative rewrite
rule is applied only if it decreases the given term with respect to this
ordering.  For example, commutativity rewrites~$b+a$ to $a+b$, but then
stops because $a+b$ is strictly less than $b+a$.  The Boyer-Moore theorem
prover~\cite{bm88book} also employs ordered rewriting.

Permutative rewrite rules are added to simpsets just like other rewrite
rules; the simplifier recognizes their special status automatically.  They
are most effective in the case of associative-commutative operators.
(Associativity by itself is not permutative.)  When dealing with an
AC-operator~$f$, keep the following points in mind:
\begin{itemize}\index{associative-commutative operators}
\item The associative law must always be oriented from left to right, namely
  $f(f(x,y),z) = f(x,f(y,z))$.  The opposite orientation, if used with
  commutativity, leads to looping!  Future versions of Isabelle may remove
  this restriction.

\item To complete your set of rewrite rules, you must add not just
  associativity~(A) and commutativity~(C) but also a derived rule, {\bf
  left-commutativity} (LC): $f(x,f(y,z)) = f(y,f(x,z))$.
\end{itemize}
Ordered rewriting with the combination of A, C, and~LC sorts a term
lexicographically:
\[\def\maps#1{\stackrel{#1}{\longmapsto}}
 (b+c)+a \maps{A} b+(c+a) \maps{C} b+(a+c) \maps{LC} a+(b+c) \]
Martin and Nipkow~\cite{martin-nipkow} discuss the theory and give many
examples; other algebraic structures are amenable to ordered rewriting,
such as boolean rings.
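For an AC-operator such as~$+$, the three laws are conveniently collected
in a single list; the theorem names in this sketch are hypothetical:
\begin{ttbox}
val add_ac = [add_assoc, add_commute, add_left_commute];
\end{ttbox}
A list of this kind, called {\tt add_ac}, appears in the following example.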


\subsection{Example: sums of integers}
This example is set in Isabelle's higher-order logic.  Theory
\thydx{Arith} contains the theory of arithmetic.  The simpset {\tt
arith_ss} contains many arithmetic laws including distributivity
of~$\times$ over~$+$, while {\tt add_ac} is a list consisting of the A, C
and LC laws for~$+$.  Let us prove the theorem
\[ \sum@{i=1}^n i = n\times(n+1)/2. \]
%
A functional~{\tt sum} represents the summation operator under the
interpretation ${\tt sum}(f,n+1) = \sum@{i=0}^n f(i)$.  We extend {\tt
Arith} using a theory file:
\begin{ttbox}
NatSum = Arith +
consts sum     :: "[nat=>nat, nat] => nat"
rules  sum_0      "sum(f,0) = 0"
       sum_Suc    "sum(f,Suc(n)) = f(n) + sum(f,n)"
end
\end{ttbox}
After declaring {\tt open NatSum}, we make the required simpset by adding
the AC-rules for~$+$ and the axioms for~{\tt sum}:
\begin{ttbox}
val natsum_ss = arith_ss addsimps ([sum_0,sum_Suc] \at add_ac);
{\out val natsum_ss = SS \{\ldots\} : simpset}
\end{ttbox}
Our desired theorem now reads ${\tt sum}(\lambda i.i,n+1) =
n\times(n+1)/2$.  The Isabelle goal has both sides multiplied by~$2$:
\begin{ttbox}
goal NatSum.thy "Suc(Suc(0))*sum(\%i.i,Suc(n)) = n*Suc(n)";
{\out Level 0}
{\out Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)}
{\out  1. Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)}
\end{ttbox}
Induction should not be applied until the goal is in the simplest form:
\begin{ttbox}
by (simp_tac natsum_ss 1);
{\out Level 1}
{\out Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)}
{\out  1. n + (n + (sum(\%i. i, n) + sum(\%i. i, n))) = n + n * n}
\end{ttbox}
Ordered rewriting has sorted the terms in the left-hand side.
The subgoal is now ready for induction:
\begin{ttbox}
by (nat_ind_tac "n" 1);
{\out Level 2}
{\out Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)}
{\out  1. 0 + (0 + (sum(\%i. i, 0) + sum(\%i. i, 0))) = 0 + 0 * 0}
\ttbreak
{\out  2. !!n1. n1 + (n1 + (sum(\%i. i, n1) + sum(\%i. i, n1))) =}
{\out           n1 + n1 * n1 ==>}
{\out           Suc(n1) +}
{\out           (Suc(n1) + (sum(\%i. i, Suc(n1)) + sum(\%i. i, Suc(n1)))) =}
{\out           Suc(n1) + Suc(n1) * Suc(n1)}
\end{ttbox}
Simplification proves both subgoals immediately:\index{*ALLGOALS}
\begin{ttbox}
by (ALLGOALS (asm_simp_tac natsum_ss));
{\out Level 3}
{\out Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)}
{\out No subgoals!}
\end{ttbox}
If we had omitted {\tt add_ac} from the simpset, simplification would have
failed to prove the induction step:
\begin{ttbox}
Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)
 1. !!n1. n1 + (n1 + (sum(\%i. i, n1) + sum(\%i. i, n1))) =
          n1 + n1 * n1 ==>
          n1 + (n1 + (n1 + sum(\%i. i, n1) + (n1 + sum(\%i. i, n1)))) =
          n1 + (n1 + (n1 + n1 * n1))
\end{ttbox}
Ordered rewriting proves this by sorting the left-hand side.  Proving
arithmetic theorems without ordered rewriting requires explicit use of
commutativity.  This is tedious; try it and see!

Ordered rewriting is equally successful in proving
$\sum@{i=1}^n i^3 = n^2\times(n+1)^2/4$.


\subsection{Re-orienting equalities}
Ordered rewriting with the derived rule {\tt symmetry} can reverse equality
signs:
\begin{ttbox}
val symmetry = prove_goal HOL.thy "(x=y) = (y=x)"
                 (fn _ => [fast_tac HOL_cs 1]);
\end{ttbox}
This is frequently useful.  Assumptions of the form $s=t$, where $t$ occurs
in the conclusion but not~$s$, can often be brought into the right form.
For example, ordered rewriting with {\tt symmetry} can prove the goal
\[ f(a)=b \conj f(a)=c \imp b=c. \]
Here {\tt symmetry} reverses both $f(a)=b$ and $f(a)=c$
because $f(a)$ is lexicographically greater than $b$ and~$c$.  These
re-oriented equations, as rewrite rules, replace $b$ and~$c$ in the
conclusion by~$f(a)$.

Another example is the goal $\neg(t=u) \imp \neg(u=t)$.
The differing orientations make this appear difficult to prove.  Ordered
rewriting with {\tt symmetry} makes the equalities agree.  (Without
knowing more about~$t$ and~$u$ we cannot say whether they both go to $t=u$
or~$u=t$.)  Then the simplifier can prove the goal outright.

\index{rewrite rules!permutative|)}


\section{*Setting up the simplifier}\label{sec:settingupsimp}
\index{simplification!setting up}

Setting up the simplifier for new logics is complicated.  This section
describes how the simplifier is installed for intuitionistic first-order
logic; the code is largely taken from {\tt FOL/simpdata.ML}.

The simplifier and the case splitting tactic, which reside on separate
files, are not part of Pure Isabelle.  They must be loaded explicitly:
\begin{ttbox}
use "../Provers/simplifier.ML";
use "../Provers/splitter.ML";
\end{ttbox}

Simplification works by reducing various object-equalities to
meta-equality.  It requires rules stating that equal terms and equivalent
formulae are also equal at the meta-level.  The rule declaration part of
the file {\tt FOL/ifol.thy} contains the two lines
\begin{ttbox}\index{*eq_reflection theorem}\index{*iff_reflection theorem}
eq_reflection   "(x=y)   ==> (x==y)"
iff_reflection  "(P<->Q) ==> (P==Q)"
\end{ttbox}
Of course, you should only assert such rules if they are true for your
particular logic.  In Constructive Type Theory, equality is a ternary
relation of the form $a=b\in A$; the type~$A$ determines the meaning of the
equality, effectively as a partial equivalence relation.  The present
simplifier cannot be used.  Rewriting in {\tt CTT} uses another simplifier,
which resides in the file {\tt typedsimp.ML} and is not documented.  Even
this does not work for later variants of Constructive Type Theory that use
intensional equality~\cite{nordstrom90}.


\subsection{A collection of standard rewrite rules}
The file begins by proving lots of standard rewrite rules about the logical
connectives.  These include cancellation and associative laws.  To prove
them easily, it defines a function that echoes the desired law and then
supplies it to the theorem prover for intuitionistic \FOL{}:
\begin{ttbox}
fun int_prove_fun s = 
 (writeln s;  
  prove_goal IFOL.thy s
   (fn prems => [ (cut_facts_tac prems 1), 
                  (Int.fast_tac 1) ]));
\end{ttbox}
The following rewrite rules about conjunction are a selection of those
proved on {\tt FOL/simpdata.ML}.  Later, these will be supplied to the
standard simpset.
\begin{ttbox}
val conj_rews = map int_prove_fun
 ["P & True <-> P",      "True & P <-> P",
  "P & False <-> False", "False & P <-> False",
  "P & P <-> P",
  "P & ~P <-> False",    "~P & P <-> False",
  "(P & Q) & R <-> P & (Q & R)"];
\end{ttbox}
The file also proves some distributive laws.  As they can cause exponential
blowup, they will not be included in the standard simpset.  Instead they
are merely bound to an \ML{} identifier, for user reference.
\begin{ttbox}
val distrib_rews  = map int_prove_fun
 ["P & (Q | R) <-> P&Q | P&R", 
  "(Q | R) & P <-> Q&P | R&P",
  "(P | Q --> R) <-> (P --> R) & (Q --> R)"];
\end{ttbox}


\subsection{Functions for preprocessing the rewrite rules}
\label{sec:setmksimps}
%
The next step is to define the function for preprocessing rewrite rules.
This will be installed by calling {\tt setmksimps} below.  Preprocessing
occurs whenever rewrite rules are added, whether by user command or
automatically.  Preprocessing involves extracting atomic rewrites at the
object-level, then reflecting them to the meta-level.

To start, the function {\tt gen_all} strips any meta-level
quantifiers from the front of the given theorem.  Usually there are none
anyway.
\begin{ttbox}
fun gen_all th = forall_elim_vars (#maxidx(rep_thm th)+1) th;
\end{ttbox}
The function {\tt atomize} analyses a theorem in order to extract
atomic rewrite rules.  The head of all the patterns, matched by the
wildcard~{\tt _}, is the coercion function {\tt Trueprop}.
\begin{ttbox}
fun atomize th = case concl_of th of 
    _ $ (Const("op &",_) $ _ $ _)   => atomize(th RS conjunct1) \at
                                       atomize(th RS conjunct2)
  | _ $ (Const("op -->",_) $ _ $ _) => atomize(th RS mp)
  | _ $ (Const("All",_) $ _)        => atomize(th RS spec)
  | _ $ (Const("True",_))           => []
  | _ $ (Const("False",_))          => []
  | _                               => [th];
\end{ttbox}
There are several cases, depending upon the form of the conclusion:
\begin{itemize}
\item Conjunction: extract rewrites from both conjuncts.

\item Implication: convert $P\imp Q$ to the meta-implication $P\Imp Q$ and
  extract rewrites from~$Q$; these will be conditional rewrites with the
  condition~$P$.

\item Universal quantification: remove the quantifier, replacing the bound
  variable by a schematic variable, and extract rewrites from the body.

\item {\tt True} and {\tt False} contain no useful rewrites.

\item Anything else: return the theorem in a singleton list.
\end{itemize}
The resulting theorems are not literally atomic -- they could be
disjunctive, for example -- but are broken down as much as possible.  See
the file {\tt ZF/simpdata.ML} for a sophisticated translation of
set-theoretic formulae into rewrite rules.

The simplified rewrites must now be converted into meta-equalities.  The
rule {\tt eq_reflection} converts equality rewrites, while {\tt
iff_reflection} converts if-and-only-if rewrites.  The latter possibility
can arise in two other ways: the negative theorem~$\neg P$ is converted to
$P\equiv{\tt False}$, and any other theorem~$P$ is converted to
$P\equiv{\tt True}$.  The rules {\tt iff_reflection_F} and {\tt
iff_reflection_T} accomplish this conversion.
\begin{ttbox}
val P_iff_F = int_prove_fun "~P ==> (P <-> False)";
val iff_reflection_F = P_iff_F RS iff_reflection;
\ttbreak
val P_iff_T = int_prove_fun "P ==> (P <-> True)";
val iff_reflection_T = P_iff_T RS iff_reflection;
\end{ttbox}
The function {\tt mk_meta_eq} converts a theorem to a meta-equality
using the case analysis described above.
\begin{ttbox}
fun mk_meta_eq th = case concl_of th of
    _ $ (Const("op =",_)$_$_)   => th RS eq_reflection
  | _ $ (Const("op <->",_)$_$_) => th RS iff_reflection
  | _ $ (Const("Not",_)$_)      => th RS iff_reflection_F
  | _                           => th RS iff_reflection_T;
\end{ttbox}
The three functions {\tt gen_all}, {\tt atomize} and {\tt mk_meta_eq} will
be composed together and supplied below to {\tt setmksimps}.


\subsection{Making the initial simpset}
It is time to assemble these items.  We open module {\tt Simplifier} to
gain access to its components.  We define the infix operator
\ttindexbold{addcongs} to insert congruence rules; given a list of theorems,
it converts their conclusions into meta-equalities and passes them to
\ttindex{addeqcongs}.
\begin{ttbox}
open Simplifier;
\ttbreak
infix addcongs;
fun ss addcongs congs =
    ss addeqcongs (congs RL [eq_reflection,iff_reflection]);
\end{ttbox}


The list {\tt IFOL_rews} contains the default rewrite rules for first-order
logic.  The first of these is the reflexive law expressed as the
equivalence $(a=a)\bimp{\tt True}$; the rewrite rule $a=a$ is clearly useless.
\begin{ttbox}
val IFOL_rews =
   [refl RS P_iff_T] \at conj_rews \at disj_rews \at not_rews \at
    imp_rews \at iff_rews \at quant_rews;
\end{ttbox}
The list {\tt triv_rls} contains trivial theorems for the solver.  Any
subgoal that is simplified to one of these will be removed.
\begin{ttbox}
val notFalseI = int_prove_fun "~False";
val triv_rls = [TrueI,refl,iff_refl,notFalseI];
\end{ttbox}

%
The basic simpset for intuitionistic \FOL{} starts with \ttindex{empty_ss}.
It preprocesses rewrites using {\tt gen_all}, {\tt atomize} and {\tt
mk_meta_eq}.  It solves simplified subgoals using {\tt triv_rls} and
assumptions.  It uses \ttindex{asm_simp_tac} to tackle subgoals of
conditional rewrites.  It takes {\tt IFOL_rews} as rewrite rules.
Other simpsets built from {\tt IFOL_ss} will inherit these items.
In particular, {\tt FOL_ss} extends {\tt IFOL_ss} with classical rewrite
rules such as $\neg\neg P\bimp P$.
\index{*setmksimps}\index{*setsolver}\index{*setsubgoaler}
\index{*addsimps}\index{*addcongs}
\begin{ttbox}
val IFOL_ss =
  empty_ss
  setmksimps (map mk_meta_eq o atomize o gen_all)
  setsolver  (fn prems => resolve_tac (triv_rls \at prems) ORELSE'
              assume_tac)
  setsubgoaler asm_simp_tac
  addsimps IFOL_rews
  addcongs [imp_cong];
\end{ttbox}


This simpset takes {\tt imp_cong} as a congruence rule in order to use
contextual information to simplify the conclusions of implications:
\[ \List{\Var{P}\bimp\Var{P'};\; \Var{P'} \Imp \Var{Q}\bimp\Var{Q'}} \Imp
   (\Var{P}\imp\Var{Q}) \bimp (\Var{P'}\imp\Var{Q'})
\]
By adding the congruence rule {\tt conj_cong}, we could obtain a similar
effect for conjunctions.
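By analogy with {\tt imp_cong}, such a rule would let the first conjunct be
assumed while simplifying the second.  The display below is a
reconstruction by analogy, not quoted from the {\tt FOL} sources:
\[ \List{\Var{P}\bimp\Var{P'};\; \Var{P'} \Imp \Var{Q}\bimp\Var{Q'}} \Imp
   (\Var{P}\conj\Var{Q}) \bimp (\Var{P'}\conj\Var{Q'})
\]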


\subsection{Case splitting}
To set up case splitting, we must prove the theorem below and pass it to
\ttindexbold{mk_case_split_tac}.  The tactic \ttindexbold{split_tac} uses
{\tt mk_meta_eq}, defined above, to convert the splitting rules to
meta-equalities.
\begin{ttbox}
val meta_iffD =
    prove_goal FOL.thy "[| P==Q; Q |] ==> P"
        (fn [prem1,prem2] => [rewtac prem1, rtac prem2 1])
\ttbreak
fun split_tac splits =
    mk_case_split_tac meta_iffD (map mk_meta_eq splits);
\end{ttbox}
%
The splitter replaces applications of a given function; the right-hand side
of the replacement can be anything.  For example, here is a splitting rule
for conditional expressions:
\[ \Var{P}(if(\Var{Q},\Var{x},\Var{y})) \bimp (\Var{Q} \imp \Var{P}(\Var{x}))
\conj (\lnot\Var{Q} \imp \Var{P}(\Var{y}))
\]
Another example is the elimination operator (which happens to be
called~$split$) for Cartesian products:
\[ \Var{P}(split(\Var{f},\Var{p})) \bimp (\forall a~b. \Var{p} =
\langle a,b\rangle \imp \Var{P}(\Var{f}(a,b)))
\]
Case splits should be allowed only when necessary; they are expensive
and hard to control.  Here is a typical example of use, where {\tt
expand_if} is the first rule above:
\begin{ttbox}
by (simp_tac (prop_rec_ss setloop (split_tac [expand_if])) 1);
\end{ttbox}


\index{simplification|)}