%% $Id$

\chapter{Simplification} \label{simpchap}
\index{simplification|(}



This chapter describes Isabelle's generic simplification package, which
provides a suite of simplification tactics.  This rewriting package is less
general than its predecessor, since it works only for the equality relation
rather than arbitrary preorders, but it is fast and flexible.  It performs
conditional and unconditional rewriting and uses contextual information
(``local assumptions'').  It provides a few general hooks, which can
provide automatic case splits during rewriting, for example.  The
simplifier is set up for many of Isabelle's logics: {\tt FOL}, {\tt ZF},
{\tt LCF} and {\tt HOL}.

\section{Simplification sets}\index{simplification sets}

The simplification tactics are controlled by {\bf simpsets}.  These consist
of five components: rewrite rules, congruence rules, the subgoaler, the
solver and the looper.  Normally, the simplifier is set up with sensible
defaults, so that most simplifier calls specify only rewrite rules.
Sophisticated usage of the other components can be highly effective, but
most users should never worry about them.

\subsection{Rewrite rules}\index{rewrite rules}


Rewrite rules are theorems like $Suc(\Var{m}) + \Var{n} = \Var{m} +
Suc(\Var{n})$, $\Var{P}\conj\Var{P} \bimp \Var{P}$, or $\Var{A} \union
\Var{B} \equiv \{x.x\in \Var{A} \disj x\in \Var{B}\}$.  {\bf Conditional}
rewrites such as $\Var{m}<\Var{n} \Imp \Var{m}/\Var{n} = 0$ are permitted;
the conditions can be arbitrary terms.  The infix operation
\ttindex{addsimps} adds new rewrite rules, while \ttindex{delsimps}
deletes rewrite rules from a simpset.
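For instance, using the rules {\tt add_0} and {\tt add_Suc} from the
examples later in this chapter, a simpset can be extended and then pruned
like this (a sketch; the simpset {\tt FOL_ss} is described below):
\begin{ttbox}
val ss1 = FOL_ss addsimps [add_0, add_Suc];
val ss2 = ss1 delsimps [add_Suc];
\end{ttbox}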



Theorems added via \ttindex{addsimps} need not be equalities to start with.
Each simpset contains a (user-definable) function for extracting equalities
from arbitrary theorems.  For example $\neg(x\in \{\})$ could be turned
into $x\in \{\} \equiv False$.  This function can be set with
\ttindex{setmksimps}, but only the definer of a logic should need to do
this.  Exceptionally, one may want to install a selective version of
\ttindex{mksimps} in order to filter out looping rewrite rules arising from
local assumptions (see below).



Internally, all rewrite rules are translated into meta-equalities:
theorems with conclusion $lhs \equiv rhs$.  To this end, every simpset
contains a function of type \verb$thm -> thm list$ to extract a list
of meta-equalities from a given theorem.



\begin{warn}\index{rewrite rules}
The left-hand side of a rewrite rule must look like a first-order term:
after eta-contraction, none of its unknowns should have arguments.  Hence
${\Var{i}+(\Var{j}+\Var{k})} = {(\Var{i}+\Var{j})+\Var{k}}$ and $\neg(\forall
x.\Var{P}(x)) \bimp (\exists x.\neg\Var{P}(x))$ are acceptable, whereas
$\Var{f}(\Var{x})\in {\tt range}(\Var{f}) = True$ is not.  However, you can
replace the offending subterms by new variables and conditions: $\Var{y} =
\Var{f}(\Var{x}) \Imp \Var{y}\in {\tt range}(\Var{f}) = True$ is again
acceptable.
\end{warn}



\subsection{Congruence rules}\index{congruence rules}
Congruence rules are meta-equalities of the form
\[ \List{\dots} \Imp
   f(\Var{x@1},\ldots,\Var{x@n}) \equiv f(\Var{y@1},\ldots,\Var{y@n}).
\]
They control the simplification of the arguments of certain constants.  For
example, some arguments can be simplified under additional assumptions:
\[ \List{\Var{P@1} \bimp \Var{Q@1};\; \Var{Q@1} \Imp \Var{P@2} \bimp \Var{Q@2}}
   \Imp (\Var{P@1} \imp \Var{P@2}) \equiv (\Var{Q@1} \imp \Var{Q@2})
\]
This rule assumes $Q@1$, and any rewrite rules it implies, while
simplifying~$P@2$.  Such ``local'' assumptions are effective for rewriting
formulae such as $x=0\imp y+x=y$.  The next example makes similar use of
such contextual information in bounded quantifiers:
\begin{eqnarray*}
  &&\List{\Var{A}=\Var{B};\;
          \Forall x. x\in \Var{B} \Imp \Var{P}(x) = \Var{Q}(x)} \Imp{} \\
  &&\qquad\qquad
    (\forall x\in \Var{A}.\Var{P}(x)) = (\forall x\in \Var{B}.\Var{Q}(x))
\end{eqnarray*}
This congruence rule supplies contextual information for simplifying the
arms of a conditional expression:
\[ \List{\Var{p}=\Var{q};~ \Var{q} \Imp \Var{a}=\Var{c};~
         \neg\Var{q} \Imp \Var{b}=\Var{d}} \Imp
   if(\Var{p},\Var{a},\Var{b}) \equiv if(\Var{q},\Var{c},\Var{d})
\]

A congruence rule can also suppress simplification of certain arguments.
Here is an alternative congruence rule for conditional expressions:
\[ \Var{p}=\Var{q} \Imp
   if(\Var{p},\Var{a},\Var{b}) \equiv if(\Var{q},\Var{a},\Var{b})
\]
Only the first argument is simplified; the others remain unchanged.
This can make simplification much faster, but it may require an extra
case split to prove the goal.


Congruence rules are added using \ttindexbold{addeqcongs}.  Their
conclusion must be a meta-equality, as in the examples above.  It is more
natural to derive the rules with object-logic equality, for example
\[ \List{\Var{P@1} \bimp \Var{Q@1};\; \Var{Q@1} \Imp \Var{P@2} \bimp \Var{Q@2}}
   \Imp (\Var{P@1} \imp \Var{P@2}) \bimp (\Var{Q@1} \imp \Var{Q@2}).
\]
Each object-logic should define an operator called \ttindex{addcongs} that
expects object-equalities and translates them into meta-equalities.
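The intended usage can be sketched as follows, assuming an object-level
congruence theorem such as the {\tt imp_cong} rule shown at the end of
this chapter:
\begin{ttbox}
val ss' = FOL_ss addcongs [imp_cong];
\end{ttbox}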


\subsection{The subgoaler}
The subgoaler is the tactic used to solve subgoals arising out of
conditional rewrite rules or congruence rules.  The default should be
simplification itself.  Occasionally this strategy needs to be changed.  For
example, if the premise of a conditional rule is an instance of its
conclusion, as in $Suc(\Var{m}) < \Var{n} \Imp \Var{m} < \Var{n}$, the
default strategy could loop.



The subgoaler can be set explicitly with \ttindex{setsubgoaler}.  For
example, the subgoaler
\begin{ttbox}
fun subgoal_tac ss = resolve_tac (prems_of_ss ss) ORELSE' 
                     asm_simp_tac ss;
\end{ttbox}
tries to solve the subgoal with one of the premises and calls
simplification only if that fails; here {\tt prems_of_ss} extracts the
current premises from a simpset.
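Installing this subgoaler can be sketched as follows (assuming some base
simpset such as {\tt FOL_ss}):
\begin{ttbox}
val my_ss = FOL_ss setsubgoaler subgoal_tac;
\end{ttbox}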



\subsection{The solver}
The solver is a tactic that attempts to solve a subgoal after
simplification.  Typically it just proves trivial subgoals such as {\tt
True} and $t=t$; it could use sophisticated means such as
\verb$fast_tac$.  The solver is set using \ttindex{setsolver}.

The tactic is presented with the full goal, including the assumptions.
Hence it can use those assumptions (say by calling {\tt assume_tac}) even
inside {\tt simp_tac}, which otherwise does not use assumptions.  The
solver is also supplied a list of theorems, namely assumptions that hold in
the local context.
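A typical solver resolves with a list of trivially true theorems or with
the supplied premises, then falls back on the assumptions.  As a sketch
(assuming a list {\tt triv_rls} of trivial theorems like the one defined
in the set-up section below):
\begin{ttbox}
val my_ss = FOL_ss setsolver
    (fn prems => resolve_tac (triv_rls \at prems) ORELSE' assume_tac);
\end{ttbox}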


\begin{warn}
Rewriting does not instantiate unknowns.  Trying to rewrite $a\in
\Var{A}$ with the rule $\Var{x}\in \{\Var{x}\}$ leads nowhere.  The
solver, however, is an arbitrary tactic and may instantiate unknowns as
it pleases.  This is the only way the simplifier can handle a conditional
rewrite rule whose condition contains extra variables.
\end{warn}



\begin{warn}
If you want to supply your own subgoaler or solver, read on.  The subgoaler
is also used to solve the premises of congruence rules, which are usually
of the form $s = \Var{x}$, where $s$ needs to be simplified and $\Var{x}$
needs to be instantiated with the result.  Hence the subgoaler should call
the simplifier at some point.  The simplifier will then call the solver,
which must therefore be prepared to solve goals of the form $t = \Var{x}$,
usually by reflexivity.  In particular, reflexivity should be tried before
any of the fancy tactics like {\tt fast_tac}.  It may even happen that, due
to simplification, the subgoal is no longer an equality.  For example $False
\bimp \Var{Q}$ could be rewritten to $\neg\Var{Q}$, in which case the
solver must also try resolving with the theorem $\neg False$.

If the simplifier aborts with the message {\tt Failed congruence proof!},
then the subgoaler or solver has failed to prove a premise of a
congruence rule.
\end{warn}



\subsection{The looper}
The looper is a tactic that is applied after simplification, in case the
solver failed to solve the simplified goal.  If the looper succeeds, the
simplification process is started all over again.  Each of the subgoals
generated by the looper is attacked in turn, in reverse order.  A
typical looper is case splitting: the expansion of a conditional.  Another
possibility is to apply an elimination rule on the assumptions.  More
adventurous loopers could start an induction.  The looper is set with
\ttindex{setloop}.
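For example, case splitting on conditionals can be installed as the
looper; this sketch assumes the {\tt split_tac} and {\tt expand_if}
described in the set-up section at the end of this chapter:
\begin{ttbox}
val if_ss = FOL_ss setloop (split_tac [expand_if]);
\end{ttbox}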




\begin{figure}
\indexbold{*SIMPLIFIER}
\indexbold{*simpset}
\indexbold{*simp_tac}
\indexbold{*asm_simp_tac}
\indexbold{*asm_full_simp_tac}
\indexbold{*addeqcongs}
\indexbold{*addsimps}
\indexbold{*delsimps}
\indexbold{*empty_ss}
\indexbold{*merge_ss}
\indexbold{*setsubgoaler}
\indexbold{*setsolver}
\indexbold{*setloop}
\indexbold{*setmksimps}
\indexbold{*prems_of_ss}
\indexbold{*rep_ss}
\begin{ttbox}
infix addsimps addeqcongs delsimps
      setsubgoaler setsolver setloop setmksimps;

signature SIMPLIFIER =
sig
  type simpset
  val simp_tac:          simpset -> int -> tactic
  val asm_simp_tac:      simpset -> int -> tactic
  val asm_full_simp_tac: simpset -> int -> tactic\smallskip
  val addeqcongs:   simpset * thm list -> simpset
  val addsimps:     simpset * thm list -> simpset
  val delsimps:     simpset * thm list -> simpset
  val empty_ss:     simpset
  val merge_ss:     simpset * simpset -> simpset
  val setsubgoaler: simpset * (simpset -> int -> tactic) -> simpset
  val setsolver:    simpset * (thm list -> int -> tactic) -> simpset
  val setloop:      simpset * (int -> tactic) -> simpset
  val setmksimps:   simpset * (thm -> thm list) -> simpset
  val prems_of_ss:  simpset -> thm list
  val rep_ss:       simpset -> \{simps: thm list, congs: thm list\}
end;
\end{ttbox}
\caption{The signature \ttindex{SIMPLIFIER}} \label{SIMPLIFIER}
\end{figure}




\section{The simplification tactics} \label{simptactics}
\index{simplification!tactics|bold}
\index{tactics!simplification|bold}

The actual simplification work is performed by the following tactics.  The
rewriting strategy is strictly bottom-up, except for congruence rules,
which are applied while descending into a term.  Conditions in conditional
rewrite rules are solved recursively before the rewrite rule is applied.

There are three basic simplification tactics:
\begin{description}
\item[\ttindexbold{simp_tac} $ss$ $i$] simplifies subgoal~$i$ using the
  rules in~$ss$.  It may solve the subgoal completely if it has become
  trivial, using the solver.

\item[\ttindexbold{asm_simp_tac}] is like \verb$simp_tac$, but also uses
  assumptions as additional rewrite rules.

\item[\ttindexbold{asm_full_simp_tac}] is like \verb$asm_simp_tac$, but
  also simplifies the assumptions one by one, using each assumption in the
  simplification of the following ones.
\end{description}


Using the simplifier effectively may take a bit of experimentation.  The
tactics can be traced with the ML command \verb$trace_simp := true$.  To
remind yourself of what is in a simpset, use the function \verb$rep_ss$ to
return its simplification and congruence rules.
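For example, the rules of a simpset can be inspected at top level like this
(a sketch; {\tt FOL_ss} is the basic simpset used in the examples below):
\begin{ttbox}
val \{simps, congs\} = rep_ss FOL_ss;
\end{ttbox}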



\section{Examples using the simplifier}
\index{simplification!example}
Assume we are working within {\tt FOL} and that
\begin{description}
\item[\tt Nat.thy] is a theory including the constants $0$, $Suc$ and $+$,
\item[\tt add_0] is the rewrite rule $0+n = n$,
\item[\tt add_Suc] is the rewrite rule $Suc(m)+n = Suc(m+n)$,
\item[\tt induct] is the induction rule
  $\List{P(0);\; \Forall x. P(x)\Imp P(Suc(x))} \Imp P(n)$,
\item[\tt FOL_ss] is a basic simpset for {\tt FOL}.\footnote
  {These examples reside in the file {\tt FOL/ex/nat.ML}.}
\end{description}



We create a simpset for natural numbers by extending~{\tt FOL_ss}:
\begin{ttbox}
val add_ss = FOL_ss addsimps [add_0, add_Suc];
\end{ttbox}

Proofs by induction typically involve simplification.  Here is a proof
that~0 is a right identity:
\begin{ttbox}
goal Nat.thy "m+0 = m";
{\out Level 0}
{\out m + 0 = m}
{\out 1. m + 0 = m}
\end{ttbox}
The first step is to perform induction on the variable~$m$.  This returns a
base case and inductive step as two subgoals:
\begin{ttbox}
by (res_inst_tac [("n","m")] induct 1);
{\out Level 1}
{\out m + 0 = m}
{\out 1. 0 + 0 = 0}
{\out 2. !!x. x + 0 = x ==> Suc(x) + 0 = Suc(x)}
\end{ttbox}
Simplification solves the first subgoal trivially:
\begin{ttbox}
by (simp_tac add_ss 1);
{\out Level 2}
{\out m + 0 = m}
{\out 1. !!x. x + 0 = x ==> Suc(x) + 0 = Suc(x)}
\end{ttbox}
The remaining subgoal requires \ttindex{asm_simp_tac} in order to use the
induction hypothesis as a rewrite rule:
\begin{ttbox}
by (asm_simp_tac add_ss 1);
{\out Level 3}
{\out m + 0 = m}
{\out No subgoals!}
\end{ttbox}



The next proof is similar.
\begin{ttbox}
goal Nat.thy "m+Suc(n) = Suc(m+n)";
{\out Level 0}
{\out m + Suc(n) = Suc(m + n)}
{\out 1. m + Suc(n) = Suc(m + n)}
\end{ttbox}
We again perform induction on~$m$ and get two subgoals:
\begin{ttbox}
by (res_inst_tac [("n","m")] induct 1);
{\out Level 1}
{\out m + Suc(n) = Suc(m + n)}
{\out 1. 0 + Suc(n) = Suc(0 + n)}
{\out 2. !!x. x + Suc(n) = Suc(x + n) ==>}
{\out         Suc(x) + Suc(n) = Suc(Suc(x) + n)}
\end{ttbox}
Simplification solves the first subgoal, this time rewriting two
occurrences of~0:
\begin{ttbox}
by (simp_tac add_ss 1);
{\out Level 2}
{\out m + Suc(n) = Suc(m + n)}
{\out 1. !!x. x + Suc(n) = Suc(x + n) ==>}
{\out         Suc(x) + Suc(n) = Suc(Suc(x) + n)}
\end{ttbox}
Switching tracing on illustrates how the simplifier solves the remaining
subgoal:
\begin{ttbox}
trace_simp := true;
by (asm_simp_tac add_ss 1);
{\out Rewriting:}
{\out Suc(x) + Suc(n) == Suc(x + Suc(n))}
{\out Rewriting:}
{\out x + Suc(n) == Suc(x + n)}
{\out Rewriting:}
{\out Suc(x) + n == Suc(x + n)}
{\out Rewriting:}
{\out Suc(Suc(x + n)) = Suc(Suc(x + n)) == True}
{\out Level 3}
{\out m + Suc(n) = Suc(m + n)}
{\out No subgoals!}
\end{ttbox}
Many variations are possible.  At Level~1 (in either example) we could have
solved both subgoals at once using the tactical \ttindex{ALLGOALS}:
\begin{ttbox}
by (ALLGOALS (asm_simp_tac add_ss));
{\out Level 2}
{\out m + Suc(n) = Suc(m + n)}
{\out No subgoals!}
\end{ttbox}



\medskip
Here is a conjecture to be proved for an arbitrary function~$f$ satisfying
the law $f(Suc(n)) = Suc(f(n))$:
\begin{ttbox}
val [prem] = goal Nat.thy
    "(!!n. f(Suc(n)) = Suc(f(n))) ==> f(i+j) = i+f(j)";
{\out Level 0}
{\out f(i + j) = i + f(j)}
{\out 1. f(i + j) = i + f(j)}
{\out val prem = "f(Suc(?n)) = Suc(f(?n))}
{\out             [!!n. f(Suc(n)) = Suc(f(n))]" : thm}
\ttbreak
by (res_inst_tac [("n","i")] induct 1);
{\out Level 1}
{\out f(i + j) = i + f(j)}
{\out 1. f(0 + j) = 0 + f(j)}
{\out 2. !!x. f(x + j) = x + f(j) ==> f(Suc(x) + j) = Suc(x) + f(j)}
\end{ttbox}
We simplify each subgoal in turn.  The first one is trivial:
\begin{ttbox}
by (simp_tac add_ss 1);
{\out Level 2}
{\out f(i + j) = i + f(j)}
{\out 1. !!x. f(x + j) = x + f(j) ==> f(Suc(x) + j) = Suc(x) + f(j)}
\end{ttbox}
The remaining subgoal requires rewriting by the premise, so we add it to
{\tt add_ss}:\footnote{The previous
  simplifier required congruence rules for function variables like~$f$ in
  order to simplify their arguments.  The present simplifier can be given
  congruence rules to realize non-standard simplification of a function's
  arguments, but this is seldom necessary.}
\begin{ttbox}
by (asm_simp_tac (add_ss addsimps [prem]) 1);
{\out Level 3}
{\out f(i + j) = i + f(j)}
{\out No subgoals!}
\end{ttbox}




\section{Setting up the simplifier}
\index{simplification!setting up|bold}

Setting up the simplifier for new logics is complicated.  This section
describes how the simplifier is installed for first-order logic; the code
is largely taken from {\tt FOL/simpdata.ML}.

The simplifier and the case splitting tactic, which resides in a separate
file, are not part of Pure Isabelle.  They must be loaded explicitly:
\begin{ttbox}
use "../Provers/simplifier.ML";
use "../Provers/splitter.ML";
\end{ttbox}

Simplification works by reducing various object-equalities to
meta-equality.  It requires axioms stating that equal terms and equivalent
formulae are also equal at the meta-level.  The file {\tt FOL/ifol.thy}
contains the two lines
\begin{ttbox}\indexbold{*eq_reflection}\indexbold{*iff_reflection}
eq_reflection   "(x=y)   ==> (x==y)"
iff_reflection  "(P<->Q) ==> (P==Q)"
\end{ttbox}
Of course, you should only assert such axioms if they are true for your
particular logic.  In Constructive Type Theory, equality is a ternary
relation of the form $a=b\in A$; the type~$A$ determines the meaning of the
equality, effectively as a partial equivalence relation.




\subsection{A collection of standard rewrite rules}
The file begins by proving lots of standard rewrite rules about the logical
connectives.  These include cancellation laws and associative laws but
certainly not commutative laws, which would cause looping.  To prove the
laws easily, it defines a function that echoes the desired law and then
supplies it to the theorem prover for intuitionistic \FOL:
\begin{ttbox}
fun int_prove_fun s = 
 (writeln s;  
  prove_goal IFOL.thy s
   (fn prems => [ (cut_facts_tac prems 1), 
                  (Int.fast_tac 1) ]));
\end{ttbox}


The following rewrite rules about conjunction are a selection of those
proved in {\tt FOL/simpdata.ML}.  Later, these will be supplied to the
standard simpset.
\begin{ttbox}
val conj_rews = map int_prove_fun
 ["P & True <-> P",      "True & P <-> P",
  "P & False <-> False", "False & P <-> False",
  "P & P <-> P",
  "P & ~P <-> False",    "~P & P <-> False",
  "(P & Q) & R <-> P & (Q & R)"];
\end{ttbox}
The file also proves some distributive laws.  As they can cause exponential
blowup, they will not be included in the standard simpset.  Instead they
are merely bound to an \ML{} identifier.
\begin{ttbox}
val distrib_rews = map int_prove_fun
 ["P & (Q | R) <-> P&Q | P&R",
  "(Q | R) & P <-> Q&P | R&P",
  "(P | Q --> R) <-> (P --> R) & (Q --> R)"];
\end{ttbox}




\subsection{Functions for preprocessing the rewrite rules}
The next step is to define the function for preprocessing rewrite rules.
This will be installed by calling {\tt setmksimps} below.  Preprocessing
occurs whenever rewrite rules are added, whether by user command or
automatically.  Preprocessing involves extracting atomic rewrites at the
object-level, then reflecting them to the meta-level.

To start, the function {\tt gen_all} strips any meta-level
quantifiers from the front of the given theorem.  Usually there are none
anyway.
\begin{ttbox}
fun gen_all th = forall_elim_vars (#maxidx(rep_thm th)+1) th;
\end{ttbox}
The function {\tt atomize} analyses a theorem in order to extract
atomic rewrite rules.  The head of all the patterns, matched by the
wildcard~{\tt _}, is the coercion function {\tt Trueprop}.
\begin{ttbox}
fun atomize th = case concl_of th of 
    _ $ (Const("op &",_) $ _ $ _)   => atomize(th RS conjunct1) \at
                                       atomize(th RS conjunct2)
  | _ $ (Const("op -->",_) $ _ $ _) => atomize(th RS mp)
  | _ $ (Const("All",_) $ _)        => atomize(th RS spec)
  | _ $ (Const("True",_))           => []
  | _ $ (Const("False",_))          => []
  | _                               => [th];
\end{ttbox}


There are several cases, depending upon the form of the conclusion:
\begin{itemize}
\item Conjunction: extract rewrites from both conjuncts.

\item Implication: convert $P\imp Q$ to the meta-implication $P\Imp Q$ and
  extract rewrites from~$Q$; these will be conditional rewrites with the
  condition~$P$.

\item Universal quantification: remove the quantifier, replacing the bound
  variable by a schematic variable, and extract rewrites from the body.

\item {\tt True} and {\tt False} contain no useful rewrites.

\item Anything else: return the theorem in a singleton list.
\end{itemize}
The resulting theorems are not literally atomic (they could be
disjunctive, for example) but are broken down as much as possible.  See
the file {\tt ZF/simpdata.ML} for a sophisticated translation of
set-theoretic formulae into rewrite rules.


The simplified rewrites must now be converted into meta-equalities.  The
axiom {\tt eq_reflection} converts equality rewrites, while {\tt
iff_reflection} converts if-and-only-if rewrites.  The latter possibility
can arise in two other ways: the negative theorem~$\neg P$ is converted to
$P\equiv{\tt False}$, and any other theorem~$P$ is converted to
$P\equiv{\tt True}$.  The rules {\tt iff_reflection_F} and {\tt
iff_reflection_T} accomplish this conversion.
\begin{ttbox}
val P_iff_F = int_prove_fun "~P ==> (P <-> False)";
val iff_reflection_F = P_iff_F RS iff_reflection;
\ttbreak
val P_iff_T = int_prove_fun "P ==> (P <-> True)";
val iff_reflection_T = P_iff_T RS iff_reflection;
\end{ttbox}
The function {\tt mk_meta_eq} converts a theorem to a meta-equality
using the case analysis described above.
\begin{ttbox}
fun mk_meta_eq th = case concl_of th of
    _ $ (Const("op =",_)$_$_)   => th RS eq_reflection
  | _ $ (Const("op <->",_)$_$_) => th RS iff_reflection
  | _ $ (Const("Not",_)$_)      => th RS iff_reflection_F
  | _                           => th RS iff_reflection_T;
\end{ttbox}


The three functions {\tt gen_all}, {\tt atomize} and {\tt mk_meta_eq} will
be composed together and supplied below to {\tt setmksimps}.




\subsection{Making the initial simpset}
It is time to assemble these items.  We open module {\tt Simplifier} to
gain access to its components.  The infix operator \ttindexbold{addcongs}
handles congruence rules; given a list of theorems, it converts their
conclusions into meta-equalities and passes them to \ttindex{addeqcongs}.
\begin{ttbox}
open Simplifier;
\ttbreak
infix addcongs;
fun ss addcongs congs =
    ss addeqcongs (congs RL [eq_reflection,iff_reflection]);
\end{ttbox}


The list {\tt IFOL_rews} contains the default rewrite rules for first-order
logic.  The first of these is the reflexive law expressed as the
equivalence $(a=a)\bimp{\tt True}$; if we provided it as $a=a$ it would
cause looping.
\begin{ttbox}
val IFOL_rews =
   [refl RS P_iff_T] \at conj_rews \at disj_rews \at not_rews \at 
    imp_rews \at iff_rews \at quant_rews;
\end{ttbox}
The list {\tt triv_rls} contains trivial theorems for the solver.  Any
subgoal that is simplified to one of these will be removed.
\begin{ttbox}
val notFalseI = int_prove_fun "~False";
val triv_rls = [TrueI,refl,iff_refl,notFalseI];
\end{ttbox}



The basic simpset for intuitionistic \FOL{} starts with \ttindex{empty_ss}.
It preprocesses rewrites using {\tt gen_all}, {\tt atomize} and {\tt
mk_meta_eq}.  It solves simplified subgoals using {\tt triv_rls} and
assumptions.  It uses \ttindex{asm_simp_tac} to tackle subgoals of
conditional rewrites.  It takes {\tt IFOL_rews} as rewrite rules.
Other simpsets built from {\tt IFOL_ss} will inherit these items.
\index{*setmksimps}\index{*setsolver}\index{*setsubgoaler}
\index{*addsimps}\index{*addcongs}
\begin{ttbox}
val IFOL_ss = 
  empty_ss 
  setmksimps (map mk_meta_eq o atomize o gen_all)
  setsolver  (fn prems => resolve_tac (triv_rls \at prems) ORELSE'
                          assume_tac)
  setsubgoaler asm_simp_tac
  addsimps IFOL_rews
  addcongs [imp_cong];
\end{ttbox}


This simpset takes {\tt imp_cong} as a congruence rule in order to use
contextual information to simplify the conclusions of implications:
\[ \List{\Var{P}\bimp\Var{P'};\; \Var{P'} \Imp \Var{Q}\bimp\Var{Q'}} \Imp
   (\Var{P}\imp\Var{Q}) \bimp (\Var{P'}\imp\Var{Q'})
\]
By adding the congruence rule {\tt conj_cong}, we could obtain a similar
effect for conjunctions.
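Such an extension might be sketched as follows, assuming {\tt conj_cong}
has been proved for intuitionistic \FOL:
\begin{ttbox}
val IFOL_cong_ss = IFOL_ss addcongs [conj_cong];
\end{ttbox}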




\subsection{Case splitting}
To set up case splitting, we must prove a theorem of the form shown below
and pass it to \ttindexbold{mk_case_split_tac}.  The tactic
\ttindexbold{split_tac} uses {\tt mk_meta_eq} to convert the splitting
rules to meta-equalities.
\begin{ttbox}
val meta_iffD = 
    prove_goal FOL.thy "[| P==Q; Q |] ==> P"
        (fn [prem1,prem2] => [rewtac prem1, rtac prem2 1])
\ttbreak
fun split_tac splits =
    mk_case_split_tac meta_iffD (map mk_meta_eq splits);
\end{ttbox}
%
The splitter is designed for rules roughly of the form
\[ \Var{P}(if(\Var{Q},\Var{x},\Var{y})) \bimp (\Var{Q} \imp \Var{P}(\Var{x}))
\conj (\lnot\Var{Q} \imp \Var{P}(\Var{y})) 
\]
where the right-hand side can be anything.  Another example is the
elimination operator (which happens to be called~$split$) for Cartesian
products:
\[ \Var{P}(split(\Var{f},\Var{p})) \bimp (\forall a~b. \Var{p} =
\langle a,b\rangle \imp \Var{P}(\Var{f}(a,b))) 
\]
Case splits should be allowed only when necessary; they are expensive
and hard to control.  Here is a typical example of use, where {\tt
expand_if} is the first rule above:
\begin{ttbox}
by (simp_tac (prop_rec_ss setloop (split_tac [expand_if])) 1);
\end{ttbox}



\index{simplification|)}