# HG changeset patch # User lcp # Date 766422983 -7200 # Node ID 361a7171317689cb6393d1e3e873562ef5f5b9f5 # Parent bacfaeeea0073f6513dacf5512f7fef3d7190f9d penultimate Springer draft diff -r bacfaeeea007 -r 361a71713176 doc-src/Ref/ref.tex --- a/doc-src/Ref/ref.tex Fri Apr 15 16:53:01 1994 +0200 +++ b/doc-src/Ref/ref.tex Fri Apr 15 17:16:23 1994 +0200 @@ -32,7 +32,7 @@ \begin{document} \index{definitions|see{rewriting, meta-level}} \index{rewriting!object-level|see{simplification}} -\index{rules!meta-level|see{meta-rules}} +\index{meta-rules|see{meta-rules}} \maketitle \pagenumbering{roman} \tableofcontents \clearfirst @@ -43,12 +43,14 @@ \include{tctical} \include{thm} \include{theories} +\include{defining} +\include{syntax} \include{substitution} \include{simplifier} \include{classical} %%seealso's must be last so that they appear last in the index entries -\index{rewriting!meta-level|seealso{tactics, theorems}} +\index{meta-rewriting|seealso{tactics, theorems}} \begingroup \bibliographystyle{plain} \small\raggedright\frenchspacing diff -r bacfaeeea007 -r 361a71713176 doc-src/Ref/simp.tex --- a/doc-src/Ref/simp.tex Fri Apr 15 16:53:01 1994 +0200 +++ b/doc-src/Ref/simp.tex Fri Apr 15 17:16:23 1994 +0200 @@ -110,29 +110,29 @@ Simpsets are values of the abstract type \ttindexbold{simpset}. They are manipulated by the following functions: \index{simplification sets|bold} -\begin{description} +\begin{ttdescription} \item[\ttindexbold{empty_ss}] is the empty simpset. It has no congruence or rewrite rules and its auto-tactic always fails. -\item[\tt $ss$ \ttindexbold{addcongs} $thms$] +\item[$ss$ \ttindexbold{addcongs} $thms$] is the simpset~$ss$ plus the congruence rules~$thms$. -\item[\tt $ss$ \ttindexbold{delcongs} $thms$] +\item[$ss$ \ttindexbold{delcongs} $thms$] is the simpset~$ss$ minus the congruence rules~$thms$. -\item[\tt $ss$ \ttindexbold{addrews} $thms$] +\item[$ss$ \ttindexbold{addrews} $thms$] is the simpset~$ss$ plus the rewrite rules~$thms$. -\item[\tt $ss$ \ttindexbold{delrews} $thms$] +\item[$ss$ \ttindexbold{delrews} $thms$] is the simpset~$ss$ minus the rewrite rules~$thms$. -\item[\tt $ss$ \ttindexbold{setauto} $tacf$] +\item[$ss$ \ttindexbold{setauto} $tacf$] is the simpset~$ss$ with $tacf$ for its auto-tactic. \item[\ttindexbold{print_ss} $ss$] prints all the congruence and rewrite rules in the simpset~$ss$. -\end{description} +\end{ttdescription} Adding a rule to a simpset already containing it, or deleting one from a simpset not containing it, generates a warning message. @@ -171,7 +171,7 @@ rules are solved recursively before the rewrite rule is applied. There are two basic simplification tactics: -\begin{description} +\begin{ttdescription} \item[\ttindexbold{SIMP_TAC} $ss$ $i$] simplifies subgoal~$i$ using the rules in~$ss$. It may solve the subgoal completely if it has become trivial, using the auto-tactic @@ -180,12 +180,12 @@ \item[\ttindexbold{ASM_SIMP_TAC}] is like \verb$SIMP_TAC$, but also uses assumptions as additional rewrite rules. -\end{description} +\end{ttdescription} Many logics have conditional operators like {\it if-then-else}. If the simplifier has been set up with such case splits (see~\ttindex{case_splits} in \S\ref{SimpFun-input}), there are tactics which automatically alternate between simplification and case splitting: -\begin{description} +\begin{ttdescription} \item[\ttindexbold{SIMP_CASE_TAC}] is like {\tt SIMP_TAC} but also performs automatic case splits. 
More precisely, after each simplification phase the tactic tries to apply a @@ -206,10 +206,10 @@ \item[\ttindexbold{SIMP_THM}] simplifies a theorem using assumptions and case splitting. -\end{description} +\end{ttdescription} Finally there are two useful functions for generating congruence rules for constants and free variables: -\begin{description} +\begin{ttdescription} \item[\ttindexbold{mk_congs} $thy$ $cs$] computes a list of congruence rules, one for each constant in $cs$. Remember that the name of an infix constant @@ -223,7 +223,7 @@ \verb$("f","'a => 'a")$ generates the rule \verb$?x = ?y ==> f(?x) = f(?y)$. Such congruence rules are necessary for goals with free variables whose arguments need to be rewritten. -\end{description} +\end{ttdescription} Both functions work correctly only if {\tt SimpFun} has been supplied with the necessary substitution rules. The details are discussed in \S\ref{SimpFun-input} under {\tt subst_thms}. @@ -240,14 +240,14 @@ \section{Example: using the simplifier} \index{simplification!example} Assume we are working within {\tt FOL} and that -\begin{description} -\item[\tt Nat.thy] is a theory including the constants $0$, $Suc$ and $+$, -\item[\tt add_0] is the rewrite rule $0+n = n$, -\item[\tt add_Suc] is the rewrite rule $Suc(m)+n = Suc(m+n)$, -\item[\tt induct] is the induction rule +\begin{ttdescription} +\item[Nat.thy] is a theory including the constants $0$, $Suc$ and $+$, +\item[add_0] is the rewrite rule $0+n = n$, +\item[add_Suc] is the rewrite rule $Suc(m)+n = Suc(m+n)$, +\item[induct] is the induction rule $\List{P(0); \Forall x. P(x)\Imp P(Suc(x))} \Imp P(n)$. -\item[\tt FOL_ss] is a basic simpset for {\tt FOL}. -\end{description} +\item[FOL_ss] is a basic simpset for {\tt FOL}. +\end{ttdescription} We generate congruence rules for $Suc$ and for the infix operator~$+$: \begin{ttbox} val nat_congs = mk_congs Nat.thy ["Suc", "op +"]; @@ -382,7 +382,7 @@ \end{ttbox} The components of {\tt SIMP_DATA} need to be instantiated as follows. Many of these components are lists, and can be empty. -\begin{description} +\begin{ttdescription} \item[\ttindexbold{refl_thms}] supplies reflexivity theorems of the form $\Var{x} \gg \Var{x}$. They must not have additional premises as, for example, @@ -485,7 +485,7 @@ fun dest_red( _ $ (c $ ta $ tb) ) = (c,ta,tb); \end{verbatim} The wildcard pattern {\tt_} matches the coercion function. -\end{description} +\end{ttdescription} \section{A sample instantiation} diff -r bacfaeeea007 -r 361a71713176 doc-src/Ref/simplifier.tex --- a/doc-src/Ref/simplifier.tex Fri Apr 15 16:53:01 1994 +0200 +++ b/doc-src/Ref/simplifier.tex Fri Apr 15 17:16:23 1994 +0200 @@ -3,46 +3,43 @@ \index{simplification|(} This chapter describes Isabelle's generic simplification package, which -provides a suite of simplification tactics. This rewriting package is less -general than its predecessor --- it works only for the equality relation, -not arbitrary preorders --- but it is fast and flexible. It performs -conditional and unconditional rewriting and uses contextual information -(``local assumptions''). It provides a few general hooks, which can -provide automatic case splits during rewriting, for example. The -simplifier is set up for many of Isabelle's logics: {\tt FOL}, {\tt ZF}, -{\tt LCF} and {\tt HOL}. +provides a suite of simplification tactics. It performs conditional and +unconditional rewriting and uses contextual information (`local +assumptions'). 
It provides a few general hooks, which can provide +automatic case splits during rewriting, for example. The simplifier is set +up for many of Isabelle's logics: {\tt FOL}, {\tt ZF}, {\tt LCF} and {\tt + HOL}. \section{Simplification sets}\index{simplification sets} The simplification tactics are controlled by {\bf simpsets}. These consist of five components: rewrite rules, congruence rules, the subgoaler, the -solver and the looper. Normally, the simplifier is set up with sensible -defaults, so that most simplifier calls specify only rewrite rules. -Sophisticated usage of the other components can be highly effective, but -most users should never worry about them. +solver and the looper. The simplifier should be set up with sensible +defaults so that most simplifier calls specify only rewrite rules. +Experienced users can exploit the other components to streamline proofs. + \subsection{Rewrite rules}\index{rewrite rules} -Rewrite rules are theorems like $Suc(\Var{m}) + \Var{n} = \Var{m} + -Suc(\Var{n})$, $\Var{P}\conj\Var{P} \bimp \Var{P}$, or $\Var{A} \union \Var{B} -\equiv \{x.x\in A \disj x\in B\}$. {\bf Conditional} rewrites such as -$\Var{m}<\Var{n} \Imp \Var{m}/\Var{n} = 0$ are permitted; the conditions -can be arbitrary terms. The infix operation \ttindex{addsimps} adds new -rewrite rules, while \ttindex{delsimps} deletes rewrite rules from a -simpset. +Rewrite rules are theorems expressing some form of equality: +\begin{eqnarray*} + Suc(\Var{m}) + \Var{n} &=& \Var{m} + Suc(\Var{n}) \\ + \Var{P}\conj\Var{P} &\bimp& \Var{P} \\ + \Var{A} \union \Var{B} &\equiv& \{x.x\in \Var{A} \disj x\in \Var{B}\} +\end{eqnarray*} +{\bf Conditional} rewrites such as $\Var{m}<\Var{n} \Imp \Var{m}/\Var{n} = +0$ are permitted; the conditions can be arbitrary terms. The infix +operation \ttindex{addsimps} adds new rewrite rules, while +\ttindex{delsimps} deletes rewrite rules from a simpset. -Theorems added via \ttindex{addsimps} need not be equalities to start with. -Each simpset contains a (user-definable) function for extracting equalities -from arbitrary theorems. For example $\neg(x\in \{\})$ could be turned -into $x\in \{\} \equiv False$. This function can be set with -\ttindex{setmksimps} but only the definer of a logic should need to do -this. Exceptionally, one may want to install a selective version of -\ttindex{mksimps} in order to filter out looping rewrite rules arising from -local assumptions (see below). +Internally, all rewrite rules are translated into meta-equalities, theorems +with conclusion $lhs \equiv rhs$. Each simpset contains a function for +extracting equalities from arbitrary theorems. For example, +$\neg(\Var{x}\in \{\})$ could be turned into $\Var{x}\in \{\} \equiv +False$. This function can be installed using \ttindex{setmksimps} but only +the definer of a logic should need to do this; see \S\ref{sec:setmksimps}. +The function processes theorems added by \ttindex{addsimps} as well as +local assumptions. -Internally, all rewrite rules are translated into meta-equalities: -theorems with conclusion $lhs \equiv rhs$. To this end every simpset contains -a function of type \verb$thm -> thm list$ to extract a list -of meta-equalities from a given theorem. \begin{warn}\index{rewrite rules} The left-hand side of a rewrite rule must look like a first-order term: @@ -60,23 +57,23 @@ \[ \List{\dots} \Imp f(\Var{x@1},\ldots,\Var{x@n}) \equiv f(\Var{y@1},\ldots,\Var{y@n}). \] -They control the simplification of the arguments of certain constants. 
For +This governs the simplification of the arguments of~$f$. For example, some arguments can be simplified under additional assumptions: \[ \List{\Var{P@1} \bimp \Var{Q@1};\; \Var{Q@1} \Imp \Var{P@2} \bimp \Var{Q@2}} \Imp (\Var{P@1} \imp \Var{P@2}) \equiv (\Var{Q@1} \imp \Var{Q@2}) \] -This rule assumes $Q@1$ and any rewrite rules it implies, while -simplifying~$P@2$. Such ``local'' assumptions are effective for rewriting -formulae such as $x=0\imp y+x=y$. The next example makes similar use of -such contextual information in bounded quantifiers: +Given this rule, the simplifier assumes $Q@1$ and extracts rewrite rules +from it when simplifying~$P@2$. Such local assumptions are effective for +rewriting formulae such as $x=0\imp y+x=y$. The congruence rule for bounded +quantifiers can also supply contextual information: \begin{eqnarray*} &&\List{\Var{A}=\Var{B};\; \Forall x. x\in \Var{B} \Imp \Var{P}(x) = \Var{Q}(x)} \Imp{} \\ &&\qquad\qquad (\forall x\in \Var{A}.\Var{P}(x)) = (\forall x\in \Var{B}.\Var{Q}(x)) \end{eqnarray*} -This congruence rule supplies contextual information for simplifying the -arms of a conditional expressions: +The congruence rule for conditional expressions can supply contextual +information for simplifying the arms: \[ \List{\Var{p}=\Var{q};~ \Var{q} \Imp \Var{a}=\Var{c};~ \neg\Var{q} \Imp \Var{b}=\Var{d}} \Imp if(\Var{p},\Var{a},\Var{b}) \equiv if(\Var{q},\Var{c},\Var{d}) @@ -100,7 +97,7 @@ Each object-logic should define an operator called \ttindex{addcongs} that expects object-equalities and translates them into meta-equalities. -\subsection{The subgoaler} +\subsection{*The subgoaler} The subgoaler is the tactic used to solve subgoals arising out of conditional rewrite rules or congruence rules. The default should be simplification itself. Occasionally this strategy needs to be changed. For @@ -118,45 +115,48 @@ simplification only if that fails; here {\tt prems_of_ss} extracts the current premises from a simpset. -\subsection{The solver} +\subsection{*The solver} The solver is a tactic that attempts to solve a subgoal after simplification. Typically it just proves trivial subgoals such as {\tt - True} and $t=t$; it could use sophisticated means such as -\verb$fast_tac$. The solver is set using \ttindex{setsolver}. + True} and $t=t$. It could use sophisticated means such as {\tt + fast_tac}, though that could make simplification expensive. The solver +is set using \ttindex{setsolver}. +Rewriting does not instantiate unknowns. For example, rewriting cannot +prove $a\in \Var{A}$ since this requires instantiating~$\Var{A}$. The +solver, however, is an arbitrary tactic and may instantiate unknowns as it +pleases. This is the only way the simplifier can handle a conditional +rewrite rule whose condition contains extra variables. + +\index{assumptions!in simplification} The tactic is presented with the full goal, including the assumptions. Hence it can use those assumptions (say by calling {\tt assume_tac}) even inside {\tt simp_tac}, which otherwise does not use assumptions. The solver is also supplied a list of theorems, namely assumptions that hold in the local context. -\begin{warn} - Rewriting does not instantiate unknowns. Trying to rewrite $a\in - \Var{A}$ with the rule $\Var{x}\in \{\Var{x}\}$ leads nowhere. The - solver, however, is an arbitrary tactic and may instantiate unknowns as - it pleases. This is the only way the simplifier can handle a conditional - rewrite rule whose condition contains extra variables. 
-\end{warn} +The subgoaler is also used to solve the premises of congruence rules, which +are usually of the form $s = \Var{x}$, where $s$ needs to be simplified and +$\Var{x}$ needs to be instantiated with the result. Hence the subgoaler +should call the simplifier at some point. The simplifier will then call the +solver, which must therefore be prepared to solve goals of the form $t = +\Var{x}$, usually by reflexivity. In particular, reflexivity should be +tried before any of the fancy tactics like {\tt fast_tac}. + +It may even happen that, due to simplification, the subgoal is no longer an +equality. For example $False \bimp \Var{Q}$ could be rewritten to +$\neg\Var{Q}$. To cover this case, the solver could try resolving with the +theorem $\neg False$. \begin{warn} - If you want to supply your own subgoaler or solver, read on. The subgoaler - is also used to solve the premises of congruence rules, which are usually - of the form $s = \Var{x}$, where $s$ needs to be simplified and $\Var{x}$ - needs to be instantiated with the result. Hence the subgoaler should call - the simplifier at some point. The simplifier will then call the solver, - which must therefore be prepared to solve goals of the form $t = \Var{x}$, - usually by reflexivity. In particular, reflexivity should be tried before - any of the fancy tactics like {\tt fast_tac}. It may even happen that, due - to simplification, the subgoal is no longer an equality. For example $False - \bimp \Var{Q}$ could be rewritten to $\neg\Var{Q}$, in which case the - solver must also try resolving with the theorem $\neg False$. - If the simplifier aborts with the message {\tt Failed congruence proof!}, - it is due to the subgoaler or solver that failed to prove a premise of a - congruence rule. + then the subgoaler or solver has failed to prove a premise of a + congruence rule. This should never occur and indicates that the + subgoaler or solver is faulty. \end{warn} -\subsection{The looper} + +\subsection{*The looper} The looper is a tactic that is applied after simplification, in case the solver failed to solve the simplified goal. If the looper succeeds, the simplification process is started all over again. Each of the subgoals @@ -168,22 +168,21 @@ \begin{figure} -\indexbold{*SIMPLIFIER} -\indexbold{*simpset} -\indexbold{*simp_tac} -\indexbold{*asm_simp_tac} -\indexbold{*asm_full_simp_tac} -\indexbold{*addeqcongs} -\indexbold{*addsimps} -\indexbold{*delsimps} -\indexbold{*empty_ss} -\indexbold{*merge_ss} -\indexbold{*setsubgoaler} -\indexbold{*setsolver} -\indexbold{*setloop} -\indexbold{*setmksimps} -\indexbold{*prems_of_ss} -\indexbold{*rep_ss} +\index{*simpset ML type} +\index{*simp_tac} +\index{*asm_simp_tac} +\index{*asm_full_simp_tac} +\index{*addeqcongs} +\index{*addsimps} +\index{*delsimps} +\index{*empty_ss} +\index{*merge_ss} +\index{*setsubgoaler} +\index{*setsolver} +\index{*setloop} +\index{*setmksimps} +\index{*prems_of_ss} +\index{*rep_ss} \begin{ttbox} infix addsimps addeqcongs delsimps setsubgoaler setsolver setloop setmksimps; @@ -207,13 +206,13 @@ val rep_ss: simpset -> \{simps: thm list, congs: thm list\} end; \end{ttbox} -\caption{The signature \ttindex{SIMPLIFIER}} \label{SIMPLIFIER} +\caption{The simplifier primitives} \label{SIMPLIFIER} \end{figure} \section{The simplification tactics} \label{simp-tactics} -\index{simplification!tactics|bold} -\index{tactics!simplification|bold} +\index{simplification!tactics} +\index{tactics!simplification} The actual simplification work is performed by the following tactics. 
The rewriting strategy is strictly bottom up, except for congruence rules, which @@ -221,40 +220,48 @@ rules are solved recursively before the rewrite rule is applied. There are three basic simplification tactics: -\begin{description} +\begin{ttdescription} \item[\ttindexbold{simp_tac} $ss$ $i$] simplifies subgoal~$i$ using the rules in~$ss$. It may solve the subgoal completely if it has become trivial, using the solver. -\item[\ttindexbold{asm_simp_tac}] is like \verb$simp_tac$, but also uses - assumptions as additional rewrite rules. +\item[\ttindexbold{asm_simp_tac}]\index{assumptions!in simplification} + is like \verb$simp_tac$, but extracts additional rewrite rules from the + assumptions. \item[\ttindexbold{asm_full_simp_tac}] is like \verb$asm_simp_tac$, but also simplifies the assumptions one by one, using each assumption in the simplification of the following ones. -\end{description} +\end{ttdescription} Using the simplifier effectively may take a bit of experimentation. The tactics can be traced with the ML command \verb$trace_simp := true$. To remind yourself of what is in a simpset, use the function \verb$rep_ss$ to return its simplification and congruence rules. \section{Examples using the simplifier} -\index{simplification!example} +\index{examples!of simplification} Assume we are working within {\tt FOL} and that -\begin{description} -\item[\tt Nat.thy] is a theory including the constants $0$, $Suc$ and $+$, -\item[\tt add_0] is the rewrite rule $0+n = n$, -\item[\tt add_Suc] is the rewrite rule $Suc(m)+n = Suc(m+n)$, -\item[\tt induct] is the induction rule -$\List{P(0); \Forall x. P(x)\Imp P(Suc(x))} \Imp P(n)$. -\item[\tt FOL_ss] is a basic simpset for {\tt FOL}.\footnote -{These examples reside on the file {\tt FOL/ex/nat.ML}.} -\end{description} +\begin{ttdescription} +\item[Nat.thy] + is a theory including the constants $0$, $Suc$ and $+$, +\item[add_0] + is the rewrite rule $0+\Var{n} = \Var{n}$, +\item[add_Suc] + is the rewrite rule $Suc(\Var{m})+\Var{n} = Suc(\Var{m}+\Var{n})$, +\item[induct] + is the induction rule $\List{\Var{P}(0);\; \Forall x. \Var{P}(x)\Imp + \Var{P}(Suc(x))} \Imp \Var{P}(\Var{n})$. +\item[FOL_ss] + is a basic simpset for {\tt FOL}.% +\footnote{These examples reside on the file {\tt FOL/ex/nat.ML}.} +\end{ttdescription} We create a simpset for natural numbers by extending~{\tt FOL_ss}: \begin{ttbox} val add_ss = FOL_ss addsimps [add_0, add_Suc]; \end{ttbox} + +\subsection{A trivial example} Proofs by induction typically involve simplification. Here is a proof that~0 is a right identity: \begin{ttbox} @@ -288,7 +295,9 @@ {\out No subgoals!} \end{ttbox} -The next proof is similar. +\subsection{An example of tracing} +Let us prove a similar result involving more complex terms. The two +equations together can be used to prove that addition is commutative. 
\begin{ttbox} goal Nat.thy "m+Suc(n) = Suc(m+n)"; {\out Level 0} @@ -318,14 +327,19 @@ \begin{ttbox} trace_simp := true; by (asm_simp_tac add_ss 1); +\ttbreak {\out Rewriting:} {\out Suc(x) + Suc(n) == Suc(x + Suc(n))} +\ttbreak {\out Rewriting:} {\out x + Suc(n) == Suc(x + n)} +\ttbreak {\out Rewriting:} {\out Suc(x) + n == Suc(x + n)} +\ttbreak {\out Rewriting:} {\out Suc(Suc(x + n)) = Suc(Suc(x + n)) == True} +\ttbreak {\out Level 3} {\out m + Suc(n) = Suc(m + n)} {\out No subgoals!} @@ -339,18 +353,22 @@ {\out No subgoals!} \end{ttbox} -\medskip +\subsection{Free variables and simplification} Here is a conjecture to be proved for an arbitrary function~$f$ satisfying -the law $f(Suc(n)) = Suc(f(n))$: +the law $f(Suc(\Var{n})) = Suc(f(\Var{n}))$: \begin{ttbox} val [prem] = goal Nat.thy "(!!n. f(Suc(n)) = Suc(f(n))) ==> f(i+j) = i+f(j)"; {\out Level 0} {\out f(i + j) = i + f(j)} {\out 1. f(i + j) = i + f(j)} +\ttbreak {\out val prem = "f(Suc(?n)) = Suc(f(?n))} {\out [!!n. f(Suc(n)) = Suc(f(n))]" : thm} -\ttbreak +\end{ttbox} +In the theorem~{\tt prem}, note that $f$ is a free variable while +$\Var{n}$ is a schematic variable. +\begin{ttbox} by (res_inst_tac [("n","i")] induct 1); {\out Level 1} {\out f(i + j) = i + f(j)} @@ -365,11 +383,12 @@ {\out 1. !!x. f(x + j) = x + f(j) ==> f(Suc(x) + j) = Suc(x) + f(j)} \end{ttbox} The remaining subgoal requires rewriting by the premise, so we add it to -{\tt add_ss}:\footnote{The previous - simplifier required congruence rules for function variables like~$f$ in - order to simplify their arguments. The present simplifier can be given - congruence rules to realize non-standard simplification of a function's - arguments, but this is seldom necessary.} +{\tt add_ss}:% +\footnote{The previous simplifier required congruence rules for function + variables like~$f$ in order to simplify their arguments. It was more + general than the current simplifier, but harder to use and slower. The + present simplifier can be given congruence rules to realize non-standard + simplification of a function's arguments, but this is seldom necessary.} \begin{ttbox} by (asm_simp_tac (add_ss addsimps [prem]) 1); {\out Level 3} @@ -378,39 +397,184 @@ \end{ttbox} -\section{Setting up the simplifier} -\index{simplification!setting up|bold} +\section{*Permutative rewrite rules} +\index{rewrite rules!permutative|(} + +A rewrite rule is {\bf permutative} if the left-hand side and right-hand +side are the same up to renaming of variables. The most common permutative +rule is commutativity: $x+y = y+x$. Other examples include $(x-y)-z = +(x-z)-y$ in arithmetic and $insert(x,insert(y,A)) = insert(y,insert(x,A))$ +for sets. Such rules are common enough to merit special attention. + +Because ordinary rewriting loops given such rules, the simplifier employs a +special strategy, called {\bf ordered rewriting}\index{rewriting!ordered}. +There is a built-in lexicographic ordering on terms. A permutative rewrite +rule is applied only if it decreases the given term with respect to this +ordering. For example, commutativity rewrites~$b+a$ to $a+b$, but then +stops because $a+b$ is strictly less than $b+a$. The Boyer-Moore theorem +prover~\cite{bm88book} also employs ordered rewriting. + +Permutative rewrite rules are added to simpsets just like other rewrite +rules; the simplifier recognizes their special status automatically. They +are most effective in the case of associative-commutative operators. +(Associativity by itself is not permutative.) 
When dealing with an +AC-operator~$f$, keep the following points in mind: +\begin{itemize}\index{associative-commutative operators} +\item The associative law must always be oriented from left to right, namely + $f(f(x,y),z) = f(x,f(y,z))$. The opposite orientation, if used with + commutativity, leads to looping! Future versions of Isabelle may remove + this restriction. + +\item To complete your set of rewrite rules, you must add not just + associativity~(A) and commutativity~(C) but also a derived rule, {\bf + left-commutativity} (LC): $f(x,f(y,z)) = f(y,f(x,z))$. +\end{itemize} +Ordered rewriting with the combination of A, C, and~LC sorts a term +lexicographically: +\[\def\maps#1{\stackrel{#1}{\longmapsto}} + (b+c)+a \maps{A} b+(c+a) \maps{C} b+(a+c) \maps{LC} a+(b+c) \] +Martin and Nipkow~\cite{martin-nipkow} discuss the theory and give many +examples; other algebraic structures are amenable to ordered rewriting, +such as boolean rings. + +\subsection{Example: sums of integers} +This example is set in Isabelle's higher-order logic. Theory +\thydx{Arith} contains the theory of arithmetic. The simpset {\tt + arith_ss} contains many arithmetic laws including distributivity +of~$\times$ over~$+$, while {\tt add_ac} is a list consisting of the A, C +and LC laws for~$+$. Let us prove the theorem +\[ \sum@{i=1}^n i = n\times(n+1)/2. \] +% +A functional~{\tt sum} represents the summation operator under the +interpretation ${\tt sum}(f,n+1) = \sum@{i=0}^n f(i)$. We extend {\tt + Arith} using a theory file: +\begin{ttbox} +NatSum = Arith + +consts sum :: "[nat=>nat, nat] => nat" +rules sum_0 "sum(f,0) = 0" + sum_Suc "sum(f,Suc(n)) = f(n) + sum(f,n)" +end +\end{ttbox} +After declaring {\tt open NatSum}, we make the required simpset by adding +the AC-rules for~$+$ and the axioms for~{\tt sum}: +\begin{ttbox} +val natsum_ss = arith_ss addsimps ([sum_0,sum_Suc] \at add_ac); +{\out val natsum_ss = SS \{\ldots\} : simpset} +\end{ttbox} +Our desired theorem now reads ${\tt sum}(\lambda i.i,n+1) = +n\times(n+1)/2$. The Isabelle goal has both sides multiplied by~$2$: +\begin{ttbox} +goal NatSum.thy "Suc(Suc(0))*sum(\%i.i,Suc(n)) = n*Suc(n)"; +{\out Level 0} +{\out Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)} +{\out 1. Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)} +\end{ttbox} +Induction should not be applied until the goal is in the simplest form: +\begin{ttbox} +by (simp_tac natsum_ss 1); +{\out Level 1} +{\out Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)} +{\out 1. n + (n + (sum(\%i. i, n) + sum(\%i. i, n))) = n + n * n} +\end{ttbox} +Ordered rewriting has sorted the terms in the left-hand side. +The subgoal is now ready for induction: +\begin{ttbox} +by (nat_ind_tac "n" 1); +{\out Level 2} +{\out Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)} +{\out 1. 0 + (0 + (sum(\%i. i, 0) + sum(\%i. i, 0))) = 0 + 0 * 0} +\ttbreak +{\out 2. !!n1. n1 + (n1 + (sum(\%i. i, n1) + sum(\%i. i, n1))) =} +{\out n1 + n1 * n1 ==>} +{\out Suc(n1) +} +{\out (Suc(n1) + (sum(\%i. i, Suc(n1)) + sum(\%i. i, Suc(n1)))) =} +{\out Suc(n1) + Suc(n1) * Suc(n1)} +\end{ttbox} +Simplification proves both subgoals immediately:\index{*ALLGOALS} +\begin{ttbox} +by (ALLGOALS (asm_simp_tac natsum_ss)); +{\out Level 3} +{\out Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n)} +{\out No subgoals!} +\end{ttbox} +If we had omitted {\tt add_ac} from the simpset, simplification would have +failed to prove the induction step: +\begin{ttbox} +Suc(Suc(0)) * sum(\%i. i, Suc(n)) = n * Suc(n) + 1. !!n1. n1 + (n1 + (sum(\%i. i, n1) + sum(\%i. 
i, n1))) = + n1 + n1 * n1 ==> + n1 + (n1 + (n1 + sum(\%i. i, n1) + (n1 + sum(\%i. i, n1)))) = + n1 + (n1 + (n1 + n1 * n1)) +\end{ttbox} +Ordered rewriting proves this by sorting the left-hand side. Proving +arithmetic theorems without ordered rewriting requires explicit use of +commutativity. This is tedious; try it and see! + +Ordered rewriting is equally successful in proving +$\sum@{i=1}^n i^3 = n^2\times(n+1)^2/4$. + + +\subsection{Re-orienting equalities} +Ordered rewriting with the derived rule {\tt symmetry} can reverse equality +signs: +\begin{ttbox} +val symmetry = prove_goal HOL.thy "(x=y) = (y=x)" + (fn _ => [fast_tac HOL_cs 1]); +\end{ttbox} +This is frequently useful. Assumptions of the form $s=t$, where $t$ occurs +in the conclusion but not~$s$, can often be brought into the right form. +For example, ordered rewriting with {\tt symmetry} can prove the goal +\[ f(a)=b \conj f(a)=c \imp b=c. \] +Here {\tt symmetry} reverses both $f(a)=b$ and $f(a)=c$ +because $f(a)$ is lexicographically greater than $b$ and~$c$. These +re-oriented equations, as rewrite rules, replace $b$ and~$c$ in the +conclusion by~$f(a)$. + +Another example is the goal $\neg(t=u) \imp \neg(u=t)$. +The differing orientations make this appear difficult to prove. Ordered +rewriting with {\tt symmetry} makes the equalities agree. (Without +knowing more about~$t$ and~$u$ we cannot say whether they both go to $t=u$ +or~$u=t$.) Then the simplifier can prove the goal outright. + +\index{rewrite rules!permutative|)} + + +\section{*Setting up the simplifier}\label{sec:setting-up-simp} +\index{simplification!setting up} Setting up the simplifier for new logics is complicated. This section -describes how the simplifier is installed for first-order logic; the code -is largely taken from {\tt FOL/simpdata.ML}. +describes how the simplifier is installed for intuitionistic first-order +logic; the code is largely taken from {\tt FOL/simpdata.ML}. -The simplifier and the case splitting tactic, which resides in a separate -file, are not part of Pure Isabelle. They must be loaded explicitly: +The simplifier and the case splitting tactic, which reside on separate +files, are not part of Pure Isabelle. They must be loaded explicitly: \begin{ttbox} use "../Provers/simplifier.ML"; use "../Provers/splitter.ML"; \end{ttbox} Simplification works by reducing various object-equalities to -meta-equality. It requires axioms stating that equal terms and equivalent -formulae are also equal at the meta-level. The file {\tt FOL/ifol.thy} -contains the two lines -\begin{ttbox}\indexbold{*eq_reflection}\indexbold{*iff_reflection} +meta-equality. It requires rules stating that equal terms and equivalent +formulae are also equal at the meta-level. The rule declaration part of +the file {\tt FOL/ifol.thy} contains the two lines +\begin{ttbox}\index{*eq_reflection theorem}\index{*iff_reflection theorem} eq_reflection "(x=y) ==> (x==y)" iff_reflection "(P<->Q) ==> (P==Q)" \end{ttbox} -Of course, you should only assert such axioms if they are true for your +Of course, you should only assert such rules if they are true for your particular logic. In Constructive Type Theory, equality is a ternary relation of the form $a=b\in A$; the type~$A$ determines the meaning of the -equality effectively as a partial equivalence relation. +equality effectively as a partial equivalence relation. The present +simplifier cannot be used. Rewriting in {\tt CTT} uses another simplifier, +which resides in the file {\tt typedsimp.ML} and is not documented. 
Even +this does not work for later variants of Constructive Type Theory that use +intensional equality~\cite{nordstrom90}. \subsection{A collection of standard rewrite rules} The file begins by proving lots of standard rewrite rules about the logical -connectives. These include cancellation laws and associative laws but -certainly not commutative laws, which would case looping. To prove the -laws easily, it defines a function that echoes the desired law and then +connectives. These include cancellation and associative laws. To prove +them easily, it defines a function that echoes the desired law and then supplies it to the theorem prover for intuitionistic \FOL: \begin{ttbox} fun int_prove_fun s = @@ -432,7 +596,7 @@ \end{ttbox} The file also proves some distributive laws. As they can cause exponential blowup, they will not be included in the standard simpset. Instead they -are merely bound to an \ML{} identifier. +are merely bound to an \ML{} identifier, for user reference. \begin{ttbox} val distrib_rews = map int_prove_fun ["P & (Q | R) <-> P&Q | P&R", @@ -442,6 +606,8 @@ \subsection{Functions for preprocessing the rewrite rules} +\label{sec:setmksimps} +% The next step is to define the function for preprocessing rewrite rules. This will be installed by calling {\tt setmksimps} below. Preprocessing occurs whenever rewrite rules are added, whether by user command or @@ -483,15 +649,15 @@ \item Anything else: return the theorem in a singleton list. \end{itemize} The resulting theorems are not literally atomic --- they could be -disjunctive, for example --- but are brokwn down as much as possible. See +disjunctive, for example --- but are broken down as much as possible. See the file {\tt ZF/simpdata.ML} for a sophisticated translation of set-theoretic formulae into rewrite rules. The simplified rewrites must now be converted into meta-equalities. The -axiom {\tt eq_reflection} converts equality rewrites, while {\tt +rule {\tt eq_reflection} converts equality rewrites, while {\tt iff_reflection} converts if-and-only-if rewrites. The latter possibility can arise in two other ways: the negative theorem~$\neg P$ is converted to -$P\equiv{\tt False}$, and any other theorem~$\neg P$ is converted to +$P\equiv{\tt False}$, and any other theorem~$P$ is converted to $P\equiv{\tt True}$. The rules {\tt iff_reflection_F} and {\tt iff_reflection_T} accomplish this conversion. \begin{ttbox} @@ -516,9 +682,10 @@ \subsection{Making the initial simpset} It is time to assemble these items. We open module {\tt Simplifier} to -gain access to its components. The infix operator \ttindexbold{addcongs} -handles congruence rules; given a list of theorems, it converts their -conclusions into meta-equalities and passes them to \ttindex{addeqcongs}. +gain access to its components. We define the infix operator +\ttindexbold{addcongs} to insert congruence rules; given a list of theorems, +it converts their conclusions into meta-equalities and passes them to +\ttindex{addeqcongs}. \begin{ttbox} open Simplifier; \ttbreak @@ -528,8 +695,7 @@ \end{ttbox} The list {\tt IFOL_rews} contains the default rewrite rules for first-order logic. The first of these is the reflexive law expressed as the -equivalence $(a=a)\bimp{\tt True}$; if we provided it as $a=a$ it would -cause looping. +equivalence $(a=a)\bimp{\tt True}$; the rewrite rule $a=a$ is clearly useless. 
\begin{ttbox} val IFOL_rews = [refl RS P_iff_T] \at conj_rews \at disj_rews \at not_rews \at @@ -541,13 +707,15 @@ val notFalseI = int_prove_fun "~False"; val triv_rls = [TrueI,refl,iff_refl,notFalseI]; \end{ttbox} - +% The basic simpset for intuitionistic \FOL{} starts with \ttindex{empty_ss}. It preprocesses rewrites using {\tt gen_all}, {\tt atomize} and {\tt mk_meta_eq}. It solves simplified subgoals using {\tt triv_rls} and assumptions. It uses \ttindex{asm_simp_tac} to tackle subgoals of conditional rewrites. It takes {\tt IFOL_rews} as rewrite rules. Other simpsets built from {\tt IFOL_ss} will inherit these items. +In particular, {\tt FOL_ss} extends {\tt IFOL_ss} with classical rewrite +rules such as $\neg\neg P\bimp P$. \index{*setmksimps}\index{*setsolver}\index{*setsubgoaler} \index{*addsimps}\index{*addcongs} \begin{ttbox} @@ -570,11 +738,10 @@ \subsection{Case splitting} -To set up case splitting, we must prove a theorem of the form shown below -and pass it to \ttindexbold{mk_case_split_tac}. The tactic -\ttindexbold{split_tac} uses {\tt mk_meta_eq} to convert the splitting -rules to meta-equalities. - +To set up case splitting, we must prove the theorem below and pass it to +\ttindexbold{mk_case_split_tac}. The tactic \ttindexbold{split_tac} uses +{\tt mk_meta_eq}, defined above, to convert the splitting rules to +meta-equalities. \begin{ttbox} val meta_iffD = prove_goal FOL.thy "[| P==Q; Q |] ==> P" @@ -584,13 +751,14 @@ mk_case_split_tac meta_iffD (map mk_meta_eq splits); \end{ttbox} % -The splitter is designed for rules roughly of the form +The splitter replaces applications of a given function; the right-hand side +of the replacement can be anything. For example, here is a splitting rule +for conditional expressions: \[ \Var{P}(if(\Var{Q},\Var{x},\Var{y})) \bimp (\Var{Q} \imp \Var{P}(\Var{x})) \conj (\lnot\Var{Q} \imp \Var{P}(\Var{y})) \] -where the right-hand side can be anything. Another example is the -elimination operator (which happens to be called~$split$) for Cartesian -products: +Another example is the elimination operator (which happens to be +called~$split$) for Cartesian products: \[ \Var{P}(split(\Var{f},\Var{p})) \bimp (\forall a~b. \Var{p} = \langle a,b\rangle \imp \Var{P}(\Var{f}(a,b))) \] diff -r bacfaeeea007 -r 361a71713176 doc-src/Ref/substitution.tex --- a/doc-src/Ref/substitution.tex Fri Apr 15 16:53:01 1994 +0200 +++ b/doc-src/Ref/substitution.tex Fri Apr 15 17:16:23 1994 +0200 @@ -1,12 +1,13 @@ %% $Id$ \chapter{Substitution Tactics} \label{substitution} -\index{substitution|(} +\index{tactics!substitution|(}\index{equality|(} + Replacing equals by equals is a basic form of reasoning. Isabelle supports several kinds of equality reasoning. {\bf Substitution} means to replace free occurrences of~$t$ by~$u$ in a subgoal. This is easily done, given an equality $t=u$, provided the logic possesses the appropriate rule --- unless you want to substitute even in the assumptions. The tactic -\ttindex{hyp_subst_tac} performs substitution in the assumptions, but it +{\tt hyp_subst_tac} performs substitution in the assumptions, but it works via object-level implication, and therefore must be specially set up for each suitable object-logic. @@ -18,9 +19,9 @@ Chapter~\ref{simp-chap} discusses Isabelle's rewriting tactics. 
-\section{Simple substitution} -\index{substitution!simple} -Many logics include a substitution rule of the form\indexbold{*subst} +\section{Substitution rules} +\index{substitution!rules}\index{*subst theorem} +Many logics include a substitution rule of the form $$ \List{\Var{a}=\Var{b}; \Var{P}(\Var{a})} \Imp \Var{P}(\Var{b}) \eqno(subst)$$ In backward proof, this may seem difficult to use: the conclusion @@ -41,14 +42,15 @@ \begin{ttbox} resolve_tac [eqth RS ssubst] \(i\) {\it,} \end{ttbox} -where \ttindexbold{ssubst} is the `swapped' substitution rule +where \tdxbold{ssubst} is the `swapped' substitution rule $$ \List{\Var{a}=\Var{b}; \Var{P}(\Var{b})} \Imp \Var{P}(\Var{a}). \eqno(ssubst)$$ -If \ttindex{sym} denotes the symmetry rule +If \tdx{sym} denotes the symmetry rule \(\Var{a}=\Var{b}\Imp\Var{b}=\Var{a}\), then {\tt ssubst} is just \hbox{\tt sym RS subst}. Many logics with equality include the rules {\tt subst} and {\tt ssubst}, as well as {\tt refl}, {\tt sym} and {\tt trans} -(for the usual equality laws). +(for the usual equality laws). Examples include {\tt FOL} and {\tt HOL}, +but not {\tt CTT} (Constructive Type Theory). Elim-resolution is well-behaved with assumptions of the form $t=u$. To replace $u$ by~$t$ or $t$ by~$u$ in subgoal~$i$, use @@ -58,28 +60,33 @@ \section{Substitution in the hypotheses} -\index{substitution!in hypotheses} +\index{assumptions!substitution in} Substitution rules, like other rules of natural deduction, do not affect the assumptions. This can be inconvenient. Consider proving the subgoal \[ \List{c=a; c=b} \Imp a=b. \] -Calling \hbox{\tt eresolve_tac [ssubst] \(i\)} simply discards the +Calling {\tt eresolve_tac\ts[ssubst]\ts\(i\)} simply discards the assumption~$c=a$, since $c$ does not occur in~$a=b$. Of course, we can -work out a solution. First apply \hbox{\tt eresolve_tac [subst] \(i\)}, +work out a solution. First apply {\tt eresolve_tac\ts[subst]\ts\(i\)}, replacing~$a$ by~$c$: \[ \List{c=b} \Imp c=b \] Equality reasoning can be difficult, but this trivial proof requires nothing more sophisticated than substitution in the assumptions. -Object-logics that include the rule~$(subst)$ provide a tactic for this +Object-logics that include the rule~$(subst)$ provide tactics for this purpose: \begin{ttbox} -hyp_subst_tac : int -> tactic +hyp_subst_tac : int -> tactic +bound_hyp_subst_tac : int -> tactic \end{ttbox} -\begin{description} +\begin{ttdescription} \item[\ttindexbold{hyp_subst_tac} {\it i}] -selects an equality assumption of the form $t=u$ or $u=t$, where $t$ is a -free variable or parameter. Deleting this assumption, it replaces $t$ -by~$u$ throughout subgoal~$i$, including the other assumptions. -\end{description} + selects an equality assumption of the form $t=u$ or $u=t$, where $t$ is a + free variable or parameter. Deleting this assumption, it replaces $t$ + by~$u$ throughout subgoal~$i$, including the other assumptions. + +\item[\ttindexbold{bound_hyp_subst_tac} {\it i}] + is similar but only substitutes for parameters (bound variables). + Uses for this are discussed below. +\end{ttdescription} The term being replaced must be a free variable or parameter. Substitution for constants is usually unhelpful, since they may appear in other theorems. For instance, the best way to use the assumption $0=1$ is to @@ -89,9 +96,9 @@ Replacing a free variable causes similar problems if it appears in the premises of a rule being derived --- the substitution affects object-level assumptions, not meta-level assumptions. 
For instance, replacing~$a$ -by~$b$ could make the premise~$P(a)$ worthless. To avoid this problem, call -\ttindex{cut_facts_tac} to insert the atomic premises as object-level -assumptions. +by~$b$ could make the premise~$P(a)$ worthless. To avoid this problem, use +{\tt bound_hyp_subst_tac}; alternatively, call \ttindex{cut_facts_tac} to +insert the atomic premises as object-level assumptions. \section{Setting up {\tt hyp_subst_tac}} @@ -101,7 +108,7 @@ rule~$(subst)$. When defining a new logic that includes a substitution rule and implication, you must set up {\tt hyp_subst_tac} yourself. It is packaged as the \ML{} functor \ttindex{HypsubstFun}, which takes the -argument signature~\ttindexbold{HYPSUBST_DATA}: +argument signature~{\tt HYPSUBST_DATA}: \begin{ttbox} signature HYPSUBST_DATA = sig @@ -114,27 +121,27 @@ end; \end{ttbox} Thus, the functor requires the following items: -\begin{description} -\item[\ttindexbold{subst}] should be the substitution rule +\begin{ttdescription} +\item[\tdxbold{subst}] should be the substitution rule $\List{\Var{a}=\Var{b};\; \Var{P}(\Var{a})} \Imp \Var{P}(\Var{b})$. -\item[\ttindexbold{sym}] should be the symmetry rule +\item[\tdxbold{sym}] should be the symmetry rule $\Var{a}=\Var{b}\Imp\Var{b}=\Var{a}$. -\item[\ttindexbold{rev_cut_eq}] should have the form +\item[\tdxbold{rev_cut_eq}] should have the form $\List{\Var{a}=\Var{b};\; \Var{a}=\Var{b}\Imp\Var{R}} \Imp \Var{R}$. -\item[\ttindexbold{imp_intr}] should be the implies introduction +\item[\tdxbold{imp_intr}] should be the implies introduction rule $(\Var{P}\Imp\Var{Q})\Imp \Var{P}\imp\Var{Q}$. -\item[\ttindexbold{rev_mp}] should be the ``reversed'' implies elimination +\item[\tdxbold{rev_mp}] should be the `reversed' implies elimination rule $\List{\Var{P}; \;\Var{P}\imp\Var{Q}} \Imp \Var{Q}$. \item[\ttindexbold{dest_eq}] should return the pair~$(t,u)$ when applied to the \ML{} term that represents~$t=u$. For other terms, it -should raise exception~\ttindex{Match}. -\end{description} -The functor resides in {\tt Provers/hypsubst.ML} on the Isabelle +should raise exception~\xdx{Match}. +\end{ttdescription} +The functor resides in file {\tt Provers/hypsubst.ML} in the Isabelle distribution directory. It is not sensitive to the precise formalization of the object-logic. It is not concerned with the names of the equality and implication symbols, or the types of formula and terms. Coding the @@ -148,18 +155,17 @@ Pattern-matching expresses the function concisely, using wildcards~({\tt_}) to hide the types. -Given a subgoal of the form -\[ \List{P@1; \cdots ; t=u; \cdots ; P@n} \Imp Q, \] -\ttindexbold{hyp_subst_tac} locates a suitable equality -assumption and moves it to the last position using elim-resolution on {\tt -rev_cut_eq} (possibly re-orienting it using~{\tt sym}): +Here is how {\tt hyp_subst_tac} works. 
Given a subgoal of the form +\[ \List{P@1; \cdots ; t=u; \cdots ; P@n} \Imp Q, \] it locates a suitable +equality assumption and moves it to the last position using elim-resolution +on {\tt rev_cut_eq} (possibly re-orienting it using~{\tt sym}): \[ \List{P@1; \cdots ; P@n; t=u} \Imp Q \] -Using $n$ calls of \hbox{\tt eresolve_tac [rev_mp]}, it creates the subgoal +Using $n$ calls of {\tt eresolve_tac\ts[rev_mp]}, it creates the subgoal \[ \List{t=u} \Imp P@1\imp \cdots \imp P@n \imp Q \] -By \hbox{\tt eresolve_tac [ssubst]}, it replaces~$t$ by~$u$ throughout: +By {\tt eresolve_tac\ts[ssubst]}, it replaces~$t$ by~$u$ throughout: \[ P'@1\imp \cdots \imp P'@n \imp Q' \] -Finally, using $n$ calls of \hbox{\tt resolve_tac [imp_intr]}, it restores +Finally, using $n$ calls of \hbox{\tt resolve_tac\ts[imp_intr]}, it restores $P'@1$, \ldots, $P'@n$ as assumptions: \[ \List{P'@n; \cdots ; P'@1} \Imp Q' \] -\index{substitution|)} +\index{equality|)}\index{tactics!substitution|)} diff -r bacfaeeea007 -r 361a71713176 doc-src/Ref/syntax.tex --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/doc-src/Ref/syntax.tex Fri Apr 15 17:16:23 1994 +0200 @@ -0,0 +1,837 @@ +%% $Id$ +\chapter{Syntax Transformations} \label{chap:syntax} +\newcommand\ttapp{\mathrel{\hbox{\tt\$}}} +\newcommand\mtt[1]{\mbox{\tt #1}} +\newcommand\ttfct[1]{\mathop{\mtt{#1}}\nolimits} +\newcommand\Constant{\ttfct{Constant}} +\newcommand\Variable{\ttfct{Variable}} +\newcommand\Appl[1]{\ttfct{Appl}\,[#1]} +\index{syntax!transformations|(} + +This chapter is intended for experienced Isabelle users who need to define +macros or code their own translation functions. It describes the +transformations between parse trees, abstract syntax trees and terms. + + +\section{Abstract syntax trees} \label{sec:asts} +\index{ASTs|(} + +The parser, given a token list from the lexer, applies productions to yield +a parse tree\index{parse trees}. By applying some internal transformations +the parse tree becomes an abstract syntax tree, or \AST{}. Macro +expansion, further translations and finally type inference yields a +well-typed term. The printing process is the reverse, except for some +subtleties to be discussed later. + +Figure~\ref{fig:parse_print} outlines the parsing and printing process. +Much of the complexity is due to the macro mechanism. Using macros, you +can specify most forms of concrete syntax without writing any \ML{} code. + +\begin{figure} +\begin{center} +\begin{tabular}{cl} +string & \\ +$\downarrow$ & parser \\ +parse tree & \\ +$\downarrow$ & parse \AST{} translation \\ +\AST{} & \\ +$\downarrow$ & \AST{} rewriting (macros) \\ +\AST{} & \\ +$\downarrow$ & parse translation, type inference \\ +--- well-typed term --- & \\ +$\downarrow$ & print translation \\ +\AST{} & \\ +$\downarrow$ & \AST{} rewriting (macros) \\ +\AST{} & \\ +$\downarrow$ & print \AST{} translation, printer \\ +string & +\end{tabular} + +\end{center} +\caption{Parsing and printing}\label{fig:parse_print} +\end{figure} + +Abstract syntax trees are an intermediate form between the raw parse trees +and the typed $\lambda$-terms. An \AST{} is either an atom (constant or +variable) or a list of {\em at least two\/} subtrees. Internally, they +have type \mltydx{Syntax.ast}: \index{*Constant} \index{*Variable} +\index{*Appl} +\begin{ttbox} +datatype ast = Constant of string + | Variable of string + | Appl of ast list +\end{ttbox} +% +Isabelle uses an S-expression syntax for abstract syntax trees. 
Constant +atoms are shown as quoted strings, variable atoms as non-quoted strings and +applications as a parenthesized list of subtrees. For example, the \AST +\begin{ttbox} +Appl [Constant "_constrain", + Appl [Constant "_abs", Variable "x", Variable "t"], + Appl [Constant "fun", Variable "'a", Variable "'b"]] +\end{ttbox} +is shown as {\tt ("_constrain" ("_abs" x t) ("fun" 'a 'b))}. +Both {\tt ()} and {\tt (f)} are illegal because they have too few +subtrees. + +The resemblance to Lisp's S-expressions is intentional, but there are two +kinds of atomic symbols: $\Constant x$ and $\Variable x$. Do not take the +names {\tt Constant} and {\tt Variable} too literally; in the later +translation to terms, $\Variable x$ may become a constant, free or bound +variable, even a type constructor or class name; the actual outcome depends +on the context. + +Similarly, you can think of ${\tt (} f~x@1~\ldots~x@n{\tt )}$ as the +application of~$f$ to the arguments $x@1, \ldots, x@n$. But the kind of +application is determined later by context; it could be a type constructor +applied to types. + +Forms like {\tt (("_abs" x $t$) $u$)} are legal, but \AST{}s are +first-order: the {\tt "_abs"} does not bind the {\tt x} in any way. Later +at the term level, {\tt ("_abs" x $t$)} will become an {\tt Abs} node and +occurrences of {\tt x} in $t$ will be replaced by bound variables (the term +constructor \ttindex{Bound}). + + +\section{Transforming parse trees to \AST{}s}\label{sec:astofpt} +\index{ASTs!made from parse trees} +\newcommand\astofpt[1]{\lbrakk#1\rbrakk} + +The parse tree is the raw output of the parser. Translation functions, +called {\bf parse AST translations}\indexbold{translations!parse AST}, +transform the parse tree into an abstract syntax tree. + +The parse tree is constructed by nesting the right-hand sides of the +productions used to recognize the input. Such parse trees are simply lists +of tokens and constituent parse trees, the latter representing the +nonterminals of the productions. Let us refer to the actual productions in +the form displayed by {\tt Syntax.print_syntax}. + +Ignoring parse \AST{} translations, parse trees are transformed to \AST{}s +by stripping out delimiters and copy productions. More precisely, the +mapping $\astofpt{-}$ is derived from the productions as follows: +\begin{itemize} +\item Name tokens: $\astofpt{t} = \Variable s$, where $t$ is an \ndx{id}, + \ndx{var}, \ndx{tid} or \ndx{tvar} token, and $s$ its associated string. + +\item Copy productions:\index{productions!copy} + $\astofpt{\ldots P \ldots} = \astofpt{P}$. Here $\ldots$ stands for + strings of delimiters, which are discarded. $P$ stands for the single + constituent that is not a delimiter; it is either a nonterminal symbol or + a name token. + + \item 0-ary productions: $\astofpt{\ldots \mtt{=>} c} = \Constant c$. + Here there are no constituents other than delimiters, which are + discarded. + + \item $n$-ary productions, where $n \ge 1$: delimiters are discarded and + the remaining constituents $P@1$, \ldots, $P@n$ are built into an + application whose head constant is~$c$: + \[ \astofpt{\ldots P@1 \ldots P@n \ldots \mtt{=>} c} = + \Appl{\Constant c, \astofpt{P@1}, \ldots, \astofpt{P@n}} + \] +\end{itemize} +Figure~\ref{fig:parse_ast} presents some simple examples, where {\tt ==}, +{\tt _appl}, {\tt _args}, and so forth name productions of the Pure syntax. +These examples illustrate the need for further translations to make \AST{}s +closer to the typed $\lambda$-calculus. 
The Pure syntax provides +predefined parse \AST{} translations\index{translations!parse AST} for +ordinary applications, type applications, nested abstractions, meta +implications and function types. Figure~\ref{fig:parse_ast_tr} shows their +effect on some representative input strings. + + +\begin{figure} +\begin{center} +\tt\begin{tabular}{ll} +\rm input string & \rm \AST \\\hline +"f" & f \\ +"'a" & 'a \\ +"t == u" & ("==" t u) \\ +"f(x)" & ("_appl" f x) \\ +"f(x, y)" & ("_appl" f ("_args" x y)) \\ +"f(x, y, z)" & ("_appl" f ("_args" x ("_args" y z))) \\ +"\%x y.\ t" & ("_lambda" ("_idts" x y) t) \\ +\end{tabular} +\end{center} +\caption{Parsing examples using the Pure syntax}\label{fig:parse_ast} +\end{figure} + +\begin{figure} +\begin{center} +\tt\begin{tabular}{ll} +\rm input string & \rm \AST{} \\\hline +"f(x, y, z)" & (f x y z) \\ +"'a ty" & (ty 'a) \\ +"('a, 'b) ty" & (ty 'a 'b) \\ +"\%x y z.\ t" & ("_abs" x ("_abs" y ("_abs" z t))) \\ +"\%x ::\ 'a.\ t" & ("_abs" ("_constrain" x 'a) t) \\ +"[| P; Q; R |] => S" & ("==>" P ("==>" Q ("==>" R S))) \\ +"['a, 'b, 'c] => 'd" & ("fun" 'a ("fun" 'b ("fun" 'c 'd))) +\end{tabular} +\end{center} +\caption{Built-in parse \AST{} translations}\label{fig:parse_ast_tr} +\end{figure} + +The names of constant heads in the \AST{} control the translation process. +The list of constants invoking parse \AST{} translations appears in the +output of {\tt Syntax.print_syntax} under {\tt parse_ast_translation}. + + +\section{Transforming \AST{}s to terms}\label{sec:termofast} +\index{terms!made from ASTs} +\newcommand\termofast[1]{\lbrakk#1\rbrakk} + +The \AST{}, after application of macros (see \S\ref{sec:macros}), is +transformed into a term. This term is probably ill-typed since type +inference has not occurred yet. The term may contain type constraints +consisting of applications with head {\tt "_constrain"}; the second +argument is a type encoded as a term. Type inference later introduces +correct types or rejects the input. + +Another set of translation functions, namely parse +translations\index{translations!parse}, may affect this process. If we +ignore parse translations for the time being, then \AST{}s are transformed +to terms by mapping \AST{} constants to constants, \AST{} variables to +schematic or free variables and \AST{} applications to applications. + +More precisely, the mapping $\termofast{-}$ is defined by +\begin{itemize} +\item Constants: $\termofast{\Constant x} = \ttfct{Const} (x, + \mtt{dummyT})$. + +\item Schematic variables: $\termofast{\Variable \mtt{"?}xi\mtt"} = + \ttfct{Var} ((x, i), \mtt{dummyT})$, where $x$ is the base name and $i$ + the index extracted from~$xi$. + +\item Free variables: $\termofast{\Variable x} = \ttfct{Free} (x, + \mtt{dummyT})$. + +\item Function applications with $n$ arguments: + \[ \termofast{\Appl{f, x@1, \ldots, x@n}} = + \termofast{f} \ttapp + \termofast{x@1} \ttapp \ldots \ttapp \termofast{x@n} + \] +\end{itemize} +Here \ttindex{Const}, \ttindex{Var}, \ttindex{Free} and +\verb|$|\index{$@{\tt\$}} are constructors of the datatype \mltydx{term}, +while \ttindex{dummyT} stands for some dummy type that is ignored during +type inference. + +So far the outcome is still a first-order term. Abstractions and bound +variables (constructors \ttindex{Abs} and \ttindex{Bound}) are introduced +by parse translations. Such translations are attached to {\tt "_abs"}, +{\tt "!!"} and user-defined binders. 
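+
+As an illustration of this mapping, consider two small \AST{}s. The
+sketch below assumes that {\tt f}, {\tt x} and {\tt y} have not been
+declared as constants, and writes {\tt ?y} for the schematic variable
+with index~0; the dummy types are replaced later by type inference:
+\begin{ttbox}
+(f x ?y)          Free("f",dummyT) $ Free("x",dummyT) $
+                                      Var(("y",0),dummyT)
+("_abs" x (f x))  Abs("x", dummyT, Free("f",dummyT) $ Bound 0)
+\end{ttbox}
+The first term is the outcome of the rules above; the second relies on
+the parse translation for {\tt "_abs"}, which introduces the {\tt Abs}
+node and turns the bound occurrence of {\tt x} into {\tt Bound 0}.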
+ + +\section{Printing of terms} +\newcommand\astofterm[1]{\lbrakk#1\rbrakk}\index{ASTs!made from terms} + +The output phase is essentially the inverse of the input phase. Terms are +translated via abstract syntax trees into strings. Finally the strings are +pretty printed. + +Print translations (\S\ref{sec:tr_funs}) may affect the transformation of +terms into \AST{}s. Ignoring those, the transformation maps +term constants, variables and applications to the corresponding constructs +on \AST{}s. Abstractions are mapped to applications of the special +constant {\tt _abs}. + +More precisely, the mapping $\astofterm{-}$ is defined as follows: +\begin{itemize} + \item $\astofterm{\ttfct{Const} (x, \tau)} = \Constant x$. + + \item $\astofterm{\ttfct{Free} (x, \tau)} = constrain (\Variable x, + \tau)$. + + \item $\astofterm{\ttfct{Var} ((x, i), \tau)} = constrain (\Variable + \mtt{"?}xi\mtt", \tau)$, where $\mtt?xi$ is the string representation of + the {\tt indexname} $(x, i)$. + + \item For the abstraction $\lambda x::\tau.t$, let $x'$ be a variant + of~$x$ renamed to differ from all names occurring in~$t$, and let $t'$ + be obtained from~$t$ by replacing all bound occurrences of~$x$ by + the free variable $x'$. This replaces corresponding occurrences of the + constructor \ttindex{Bound} by the term $\ttfct{Free} (x', + \mtt{dummyT})$: + \[ \astofterm{\ttfct{Abs} (x, \tau, t)} = + \Appl{\Constant \mtt{"_abs"}, + constrain(\Variable x', \tau), \astofterm{t'}}. + \] + + \item $\astofterm{\ttfct{Bound} i} = \Variable \mtt{"B.}i\mtt"$. + The occurrence of constructor \ttindex{Bound} should never happen + when printing well-typed terms; it indicates a de Bruijn index with no + matching abstraction. + + \item Where $f$ is not an application, + \[ \astofterm{f \ttapp x@1 \ttapp \ldots \ttapp x@n} = + \Appl{\astofterm{f}, \astofterm{x@1}, \ldots,\astofterm{x@n}} + \] +\end{itemize} +% +Type constraints are inserted to allow the printing of types. This is +governed by the boolean variable \ttindex{show_types}: +\begin{itemize} + \item $constrain(x, \tau) = x$ \ if $\tau = \mtt{dummyT}$ \index{*dummyT} or + \ttindex{show_types} not set to {\tt true}. + + \item $constrain(x, \tau) = \Appl{\Constant \mtt{"_constrain"}, x, + \astofterm{\tau}}$ \ otherwise. + + Here, $\astofterm{\tau}$ is the \AST{} encoding of $\tau$: type + constructors go to {\tt Constant}s; type identifiers go to {\tt + Variable}s; type applications go to {\tt Appl}s with the type + constructor as the first element. If \ttindex{show_sorts} is set to + {\tt true}, some type variables are decorated with an \AST{} encoding + of their sort. +\end{itemize} +% +The \AST{}, after application of macros (see \S\ref{sec:macros}), is +transformed into the final output string. The built-in {\bf print AST + translations}\indexbold{translations!print AST} effectively reverse the +parse \AST{} translations of Fig.\ts\ref{fig:parse_ast_tr}. + +For the actual printing process, the names attached to productions +of the form $\ldots A^{(p@1)}@1 \ldots A^{(p@n)}@n \ldots \mtt{=>} c$ play +a vital role. Each \AST{} with constant head $c$, namely $\mtt"c\mtt"$ or +$(\mtt"c\mtt"~ x@1 \ldots x@n)$, is printed according to the production +for~$c$. Each argument~$x@i$ is converted to a string, and put in +parentheses if its priority~$(p@i)$ requires this. The resulting strings +and their syntactic sugar (denoted by \dots{} above) are joined to make a +single string. 
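+
+For instance, the Pure production for {\tt fun} prints the function type
+arrow as right-associative. The following sketch (the actual productions
+and their priorities can be inspected with {\tt Syntax.print_syntax})
+shows how the argument priorities decide the parentheses:
+\begin{ttbox}
+("fun" 'a ("fun" 'b 'c))      'a => 'b => 'c
+("fun" ("fun" 'a 'b) 'c)      ('a => 'b) => 'c
+\end{ttbox}
+In the second \AST{} the left argument is itself a function type, whose
+priority is too low for that argument position, so the printer encloses
+it in parentheses.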
+ +If an application $(\mtt"c\mtt"~ x@1 \ldots x@m)$ has more arguments than the +corresponding production, it is first split into $((\mtt"c\mtt"~ x@1 \ldots +x@n) ~ x@{n+1} \ldots x@m)$. Applications with too few arguments or with +non-constant head or without a corresponding production are printed as +$f(x@1, \ldots, x@l)$ or $(\alpha@1, \ldots, \alpha@l) ty$. An occurrence of +$\Variable x$ is simply printed as~$x$. + +Blanks are {\em not\/} inserted automatically. If blanks are required to +separate tokens, specify them in the mixfix declaration, possibly preceded +by a slash~({\tt/}) to allow a line break. +\index{ASTs|)} + + + +\section{Macros: Syntactic rewriting} \label{sec:macros} +\index{macros|(}\index{rewriting!syntactic|(} + +Mixfix declarations alone can handle situations where there is a direct +connection between the concrete syntax and the underlying term. Sometimes +we require a more elaborate concrete syntax, such as quantifiers and list +notation. Isabelle's {\bf macros} and {\bf translation functions} can +perform translations such as +\begin{center}\tt + \begin{tabular}{r@{$\quad\protect\rightleftharpoons\quad$}l} + ALL x:A.P & Ball(A, \%x.P) \\ \relax + [x, y, z] & Cons(x, Cons(y, Cons(z, Nil))) + \end{tabular} +\end{center} +Translation functions (see \S\ref{sec:tr_funs}) must be coded in ML; they +are the most powerful translation mechanism but are difficult to read or +write. Macros are specified by first-order rewriting systems that operate +on abstract syntax trees. They are usually easy to read and write, and can +express all but the most obscure translations. + +Figure~\ref{fig:set_trans} defines a fragment of first-order logic and set +theory.\footnote{This and the following theories are complete working + examples, though they specify only syntax, no axioms. The file {\tt + ZF/zf.thy} presents the full set theory definition, including many + macro rules.} Theory {\tt SET} defines constants for set comprehension +({\tt Collect}), replacement ({\tt Replace}) and bounded universal +quantification ({\tt Ball}). Each of these binds some variables. Without +additional syntax we should have to express $\forall x \in A. P$ as {\tt + Ball(A,\%x.P)}, and similarly for the others. + +\begin{figure} +\begin{ttbox} +SET = Pure + +types + i, o +arities + i, o :: logic +consts + Trueprop :: "o => prop" ("_" 5) + Collect :: "[i, i => o] => i" + "{\at}Collect" :: "[idt, i, o] => i" ("(1{\ttlbrace}_:_./ _{\ttrbrace})") + Replace :: "[i, [i, i] => o] => i" + "{\at}Replace" :: "[idt, idt, i, o] => i" ("(1{\ttlbrace}_./ _:_, _{\ttrbrace})") + Ball :: "[i, i => o] => o" + "{\at}Ball" :: "[idt, i, o] => o" ("(3ALL _:_./ _)" 10) +translations + "{\ttlbrace}x:A. P{\ttrbrace}" == "Collect(A, \%x. P)" + "{\ttlbrace}y. x:A, Q{\ttrbrace}" == "Replace(A, \%x y. Q)" + "ALL x:A. P" == "Ball(A, \%x. P)" +end +\end{ttbox} +\caption{Macro example: set theory}\label{fig:set_trans} +\end{figure} + +The theory specifies a variable-binding syntax through additional +productions that have mixfix declarations. Each non-copy production must +specify some constant, which is used for building \AST{}s. +\index{constants!syntactic} The additional constants are decorated with +{\tt\at} to stress their purely syntactic purpose; they should never occur +within the final well-typed terms. Furthermore, they cannot be written in +formulae because they are not legal identifiers. 
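+
+For example, the {\tt Ball} rule renders the input {\tt "ALL x:A. P"}
+internally as {\tt Ball(A, \%x. P)}.  Roughly, the stages are these:
+\begin{ttbox}
+"ALL x:A. P"                 {\rm input string}
+("{\at}Ball" x A P)          {\rm \AST{} built by the mixfix production}
+("Ball" A ("_abs" x P))      {\rm \AST{} after macro expansion}
+Ball(A, \%x. P)              {\rm resulting term}
+\end{ttbox}
+The internal forms of the macro rules themselves are shown later in this
+section.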
+
+The translations cause the replacement of external forms by internal forms
+after parsing, and vice versa before printing of terms.  As a specification
+of the set theory notation, they should be largely self-explanatory.  The
+syntactic constants, {\tt\at Collect}, {\tt\at Replace} and {\tt\at Ball},
+appear implicitly in the macro rules via their mixfix forms.
+
+Macros can define variable-binding syntax because they operate on \AST{}s,
+which have no inbuilt notion of bound variable.  The macro variables {\tt
+  x} and~{\tt y} have type~{\tt idt} and therefore range over identifiers,
+in this case bound variables.  The macro variables {\tt P} and~{\tt Q}
+range over formulae containing bound variable occurrences.
+
+Other applications of the macro system can be less straightforward, and
+there are peculiarities.  The rest of this section will describe in detail
+how Isabelle macros are preprocessed and applied.
+
+
+\subsection{Specifying macros}
+Macros are basically rewrite rules on \AST{}s.  But unlike other macro
+systems found in programming languages, Isabelle's macros work in both
+directions.  Therefore a syntax contains two lists of rewrites: one for
+parsing and one for printing.
+
+\index{*translations section}
+The {\tt translations} section specifies macros.  The syntax for a macro is
+\[ (root)\; string \quad
+   \left\{\begin{array}[c]{c} \mtt{=>} \\ \mtt{<=} \\ \mtt{==} \end{array}
+   \right\} \quad
+   (root)\; string
+\]
+%
+This specifies a parse rule ({\tt =>}), a print rule ({\tt <=}), or both
+({\tt ==}).  The two strings specify the left and right-hand sides of the
+macro rule.  The $(root)$ specification is optional; it specifies the
+nonterminal for parsing the $string$ and if omitted defaults to {\tt
+  logic}.  \AST{} rewrite rules $(l, r)$ must obey certain conditions:
+\begin{itemize}
+\item Rules must be left-linear: $l$ must not contain repeated variables.
+
+\item Rules must have constant heads, namely $l = \mtt"c\mtt"$ or $l =
+  (\mtt"c\mtt" ~ x@1 \ldots x@n)$.
+
+\item Every variable in~$r$ must also occur in~$l$.
+\end{itemize}
+
+Macro rules may refer to any syntax from the parent theories.  They may
+also refer to anything defined before the {\tt .thy} file's {\tt
+  translations} section --- including any mixfix declarations.
+
+Upon declaration, both sides of the macro rule undergo parsing and parse
+\AST{} translations (see \S\ref{sec:asts}), but do not themselves undergo
+macro expansion.  The lexer runs in a different mode that additionally
+accepts identifiers of the form $\_~letter~quasiletter^*$ (like {\tt _idt},
+{\tt _K}).  Thus, a constant whose name starts with an underscore can
+appear in macro rules but not in ordinary terms.
+
+Some atoms of the macro rule's \AST{} are designated as constants for
+matching.  These are all names that have been declared as classes, types or
+constants.
+
+The result of this preprocessing is two lists of macro rules, each stored
+as a pair of \AST{}s.  They can be viewed using {\tt Syntax.print_syntax}
+(sections \ttindex{parse_rules} and \ttindex{print_rules}).
+For theory~{\tt SET} of Fig.~\ref{fig:set_trans} these are
+\begin{ttbox}
+parse_rules:
+  ("{\at}Collect" x A P)  ->  ("Collect" A ("_abs" x P))
+  ("{\at}Replace" y x A Q)  ->  ("Replace" A ("_abs" x ("_abs" y Q)))
+  ("{\at}Ball" x A P)  ->  ("Ball" A ("_abs" x P))
+print_rules:
+  ("Collect" A ("_abs" x P))  ->  ("{\at}Collect" x A P)
+  ("Replace" A ("_abs" x ("_abs" y Q)))  ->  ("{\at}Replace" y x A Q)
+  ("Ball" A ("_abs" x P))  ->  ("{\at}Ball" x A P)
+\end{ttbox}
+
+\begin{warn}
+  Avoid choosing variable names that have previously been used as
+  constants, types or type classes; the {\tt consts} section in the output
+  of {\tt Syntax.print_syntax} lists all such names.  If a macro rule works
+  incorrectly, inspect its internal form as shown above, recalling that
+  constants appear as quoted strings and variables without quotes.
+\end{warn}
+
+\begin{warn}
+If \ttindex{eta_contract} is set to {\tt true}, terms will be
+$\eta$-contracted {\em before\/} the \AST{} rewriter sees them.  Thus some
+abstraction nodes needed for print rules to match may vanish.  For example,
+\verb|Ball(A, %x. P(x))| contracts to {\tt Ball(A, P)}; the print rule does
+not apply and the output will be {\tt Ball(A, P)}.  This problem would not
+occur if \ML{} translation functions were used instead of macros (as is
+done for binder declarations).
+\end{warn}
+
+
+\begin{warn}
+Another trap concerns type constraints.  If \ttindex{show_types} is set to
+{\tt true}, bound variables will be decorated with their meta types at the
+binding place (but not at occurrences in the body).  Matching with
+\verb|Collect(A, %x. P)| binds {\tt x} to something like {\tt ("_constrain" y
+"i")} rather than only {\tt y}.  \AST{} rewriting will cause the constraint to
+appear in the external form, say \verb|{y::i:A::i. P::o}|.
+
+To allow such constraints to be re-read, your syntax should specify bound
+variables using the nonterminal~\ndx{idt}.  This is the case in our
+example.  Choosing {\tt id} instead of {\tt idt} is a common error,
+especially since it appeared in former versions of most of Isabelle's
+object-logics.
+\end{warn}
+
+
+
+\subsection{Applying rules}
+As a term is being parsed or printed, an \AST{} is generated as an
+intermediate form (recall Fig.\ts\ref{fig:parse_print}).  The \AST{} is
+normalized by applying macro rules in the manner of a traditional term
+rewriting system.  We first examine how a single rule is applied.
+
+Let $t$ be the abstract syntax tree to be normalized and $(l, r)$ some
+translation rule.  A subtree~$u$ of $t$ is a {\bf redex} if it is an
+instance of~$l$; in this case $l$ is said to {\bf match}~$u$.  A redex
+matched by $l$ may be replaced by the corresponding instance of~$r$, thus
+{\bf rewriting} the \AST~$t$.  Matching requires some notion of {\bf
+  place-holders} that may occur in rule patterns but not in ordinary
+\AST{}s; {\tt Variable} atoms serve this purpose.
+
+The matching of the object~$u$ by the pattern~$l$ is performed as follows:
+\begin{itemize}
+  \item Every constant matches itself.
+
+  \item $\Variable x$ in the object matches $\Constant x$ in the pattern.
+    This point is discussed further below.
+
+  \item Every \AST{} in the object matches $\Variable x$ in the pattern,
+    binding~$x$ to~$u$.
+
+  \item One application matches another if they have the same number of
+    subtrees and corresponding subtrees match.
+
+  \item In every other case, matching fails.  In particular, {\tt
+      Constant}~$x$ can only match itself.
+\end{itemize}
+A successful match yields a substitution that is applied to~$r$, generating
+the instance that replaces~$u$.
+
+The second case above may look odd.  This is where {\tt Variable}s of
+non-rule \AST{}s behave like {\tt Constant}s.  Recall that \AST{}s are not
+far removed from parse trees; at this level it is not yet known which
+identifiers will become constants, bounds, frees, types or classes.  As
+\S\ref{sec:asts} describes, former parse tree heads appear in \AST{}s as
+{\tt Constant}s, while the name tokens \ndx{id}, \ndx{var}, \ndx{tid} and
+\ndx{tvar} become {\tt Variable}s.  On the other hand, when \AST{}s are
+generated from terms for printing, all constants and type constructors
+become {\tt Constant}s; see \S\ref{sec:asts}.  Thus \AST{}s may contain a
+messy mixture of {\tt Variable}s and {\tt Constant}s.  This is
+insignificant at macro level because matching treats them alike.
+
+Because of this behaviour, different kinds of atoms with the same name are
+indistinguishable, which may make some rules prone to misbehaviour.  Example:
+\begin{ttbox}
+types
+  Nil
+consts
+  Nil :: "'a list"
+  "[]" :: "'a list"    ("[]")
+translations
+  "[]" == "Nil"
+\end{ttbox}
+The term {\tt Nil} will be printed as {\tt []}, just as expected.
+The term \verb|%Nil.t| will be printed as \verb|%[].t|, which might not be
+expected!
+
+Normalizing an \AST{} involves repeatedly applying macro rules until none
+is applicable.  Macro rules are chosen in the order that they appear in the
+{\tt translations} section.  You can watch the normalization of \AST{}s
+during parsing and printing by setting \ttindex{Syntax.trace_norm_ast} to
+{\tt true}.\index{tracing!of macros}  Alternatively, use
+\ttindex{Syntax.test_read}.  The information displayed when tracing
+includes the \AST{} before normalization ({\tt pre}), redexes with results
+({\tt rewrote}), the normal form finally reached ({\tt post}) and some
+statistics ({\tt normalize}).  If tracing is off,
+\ttindex{Syntax.stat_norm_ast} can be set to {\tt true} in order to enable
+printing of the normal form and statistics only.
+
+
+\subsection{Example: the syntax of finite sets}
+\index{examples!of macros}
+
+This example demonstrates the use of recursive macros to implement a
+convenient notation for finite sets.
+\index{*empty constant}\index{*insert constant}\index{{}@\verb'{}' symbol}
+\index{"@Enum@{\tt\at Enum} constant}
+\index{"@Finset@{\tt\at Finset} constant}
+\begin{ttbox}
+FINSET = SET +
+types
+  is
+consts
+  ""            :: "i => is"                ("_")
+  "{\at}Enum"   :: "[i, is] => is"          ("_,/ _")
+  empty         :: "i"                      ("{\ttlbrace}{\ttrbrace}")
+  insert        :: "[i, i] => i"
+  "{\at}Finset" :: "is => i"                ("{\ttlbrace}(_){\ttrbrace}")
+translations
+  "{\ttlbrace}x, xs{\ttrbrace}" == "insert(x, {\ttlbrace}xs{\ttrbrace})"
+  "{\ttlbrace}x{\ttrbrace}"     == "insert(x, {\ttlbrace}{\ttrbrace})"
+end
+\end{ttbox}
+Finite sets are internally built up by {\tt empty} and {\tt insert}.  The
+declarations above specify \verb|{x, y, z}| as the external representation
+of
+\begin{ttbox}
+insert(x, insert(y, insert(z, empty)))
+\end{ttbox}
+The nonterminal symbol~\ndx{is} stands for one or more objects of type~{\tt
+  i} separated by commas.  The mixfix declaration \hbox{\verb|"_,/ _"|}
+allows a line break after the comma for \rmindex{pretty printing}; if no
+line break is required then a space is printed instead.
+
+The nonterminal is declared as the type~{\tt is}, but with no {\tt arities}
+declaration.  Hence {\tt is} is not a logical type and no default
+productions are added.
+If we had needed enumerations of the nonterminal {\tt logic}, which would
+include all the logical types, we could have used the predefined
+nonterminal symbol \ndx{args} and skipped this part altogether.  The
+nonterminal~{\tt is} can later be reused for other enumerations of
+type~{\tt i} like lists or tuples.
+
+\index{"@Finset@{\tt\at Finset} constant}
+Next follows {\tt empty}, which is already equipped with its syntax
+\verb|{}|, and {\tt insert} without concrete syntax.  The syntactic
+constant {\tt\at Finset} provides concrete syntax for enumerations of~{\tt
+  i} enclosed in curly braces.  Remember that a pair of parentheses, as in
+\verb|"{(_)}"|, specifies a block of indentation for pretty printing.
+
+The translations may look strange at first.  Macro rules are best
+understood in their internal forms:
+\begin{ttbox}
+parse_rules:
+  ("{\at}Finset" ("{\at}Enum" x xs))  ->  ("insert" x ("{\at}Finset" xs))
+  ("{\at}Finset" x)  ->  ("insert" x "empty")
+print_rules:
+  ("insert" x ("{\at}Finset" xs))  ->  ("{\at}Finset" ("{\at}Enum" x xs))
+  ("insert" x "empty")  ->  ("{\at}Finset" x)
+\end{ttbox}
+This shows that \verb|{x, xs}| indeed matches any set enumeration of at least
+two elements, binding the first to {\tt x} and the rest to {\tt xs}.
+Likewise, \verb|{xs}| and \verb|{x}| represent any set enumeration.
+The parse rules only work in the order given.
+
+\begin{warn}
+  The \AST{} rewriter cannot discern constants from variables and looks
+  only for names of atoms.  Thus the names of {\tt Constant}s occurring in
+  the (internal) left-hand side of translation rules should be regarded as
+  \rmindex{reserved words}.  Choose non-identifiers like {\tt\at Finset} or
+  sufficiently long and strange names.  If a bound variable's name gets
+  rewritten, the result will be incorrect; for example, the term
+\begin{ttbox}
+\%empty insert. insert(x, empty)
+\end{ttbox}
+  gets printed as \verb|%empty insert. {x}|.
+\end{warn}
+
+
+\subsection{Example: a parse macro for dependent types}\label{prod_trans}
+\index{examples!of macros}
+
+As stated earlier, a macro rule may not introduce new {\tt Variable}s on
+the right-hand side.  Something like \verb|"K(B)" => "%x. B"| is illegal;
+if allowed, it could cause variable capture.  In such cases you usually
+must fall back on translation functions.  But a trick can sometimes make
+things readable: {\em calling translation functions by parse
+  macros}:
+\begin{ttbox}
+PROD = FINSET +
+consts
+  Pi            :: "[i, i => i] => i"
+  "{\at}PROD"   :: "[idt, i, i] => i"     ("(3PROD _:_./ _)" 10)
+  "{\at}->"     :: "[i, i] => i"          ("(_ ->/ _)" [51, 50] 50)
+\ttbreak
+translations
+  "PROD x:A. B" => "Pi(A, \%x. B)"
+  "A -> B"      => "Pi(A, _K(B))"
+end
+ML
+  val print_translation = [("Pi", dependent_tr' ("{\at}PROD", "{\at}->"))];
+\end{ttbox}
+
+Here {\tt Pi} is an internal constant for constructing general products.
+Two external forms exist: the general case {\tt PROD x:A.B} and the
+function space {\tt A -> B}, which abbreviates \verb|Pi(A, %x.B)| when {\tt B}
+does not depend on~{\tt x}.
+
+The second parse macro introduces {\tt _K(B)}, which later becomes \verb|%x.B|
+due to a parse translation associated with \cdx{_K}.  The order of the
+parse rules is critical.  Unfortunately there is no such trick for
+printing, so we have to add an {\tt ML} section for the print translation
+\ttindex{dependent_tr'}.
+
+Recall that identifiers with a leading {\tt _} are allowed in translation
+rules, but not in ordinary terms.  Thus we can create \AST{}s containing
+names that are not directly expressible.
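+
+For example, the function space notation is processed roughly as follows:
+\begin{ttbox}
+"A -> B"                     {\rm input string}
+("{\at}->" A B)              {\rm \AST{} built by the mixfix production}
+("Pi" A ("_K" B))            {\rm \AST{} after macro expansion}
+Pi(A, \%x. B)                {\rm term, after the parse translation for {\tt _K}}
+\end{ttbox}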
+ +The parse translation for {\tt _K} is already installed in Pure, and {\tt +dependent_tr'} is exported by the syntax module for public use. See +\S\ref{sec:tr_funs} below for more of the arcane lore of translation functions. +\index{macros|)}\index{rewriting!syntactic|)} + + +\section{Translation functions} \label{sec:tr_funs} +\index{translations|(} +% +This section describes the translation function mechanism. By writing +\ML{} functions, you can do almost everything with terms or \AST{}s during +parsing and printing. The logic \LK\ is a good example of sophisticated +transformations between internal and external representations of +associative sequences; here, macros would be useless. + +A full understanding of translations requires some familiarity +with Isabelle's internals, especially the datatypes {\tt term}, {\tt typ}, +{\tt Syntax.ast} and the encodings of types and terms as such at the various +stages of the parsing or printing process. Most users should never need to +use translation functions. + +\subsection{Declaring translation functions} +There are four kinds of translation functions. Each such function is +associated with a name, which triggers calls to it. Such names can be +constants (logical or syntactic) or type constructors. + +{\tt Syntax.print_syntax} displays the sets of names associated with the +translation functions of a {\tt Syntax.syntax} under +\ttindex{parse_ast_translation}, \ttindex{parse_translation}, +\ttindex{print_translation} and \ttindex{print_ast_translation}. You can +add new ones via the {\tt ML} section\index{*ML section} of +a {\tt .thy} file. There may never be more than one function of the same +kind per name. Conceptually, the {\tt ML} section should appear between +{\tt consts} and {\tt translations}; newly installed translation functions +are already effective when macros and logical rules are parsed. + +The {\tt ML} section is copied verbatim into the \ML\ file generated from a +{\tt .thy} file. Definitions made here are accessible as components of an +\ML\ structure; to make some definitions private, use an \ML{} {\tt local} +declaration. The {\tt ML} section may install translation functions by +declaring any of the following identifiers: +\begin{ttbox} +val parse_ast_translation : (string * (ast list -> ast)) list +val print_ast_translation : (string * (ast list -> ast)) list +val parse_translation : (string * (term list -> term)) list +val print_translation : (string * (term list -> term)) list +\end{ttbox} + +\subsection{The translation strategy} +All four kinds of translation functions are treated similarly. They are +called during the transformations between parse trees, \AST{}s and terms +(recall Fig.\ts\ref{fig:parse_print}). Whenever a combination of the form +$(\mtt"c\mtt"~x@1 \ldots x@n)$ is encountered, and a translation function +$f$ of appropriate kind exists for $c$, the result is computed by the \ML{} +function call $f \mtt[ x@1, \ldots, x@n \mtt]$. + +For \AST{} translations, the arguments $x@1, \ldots, x@n$ are \AST{}s. A +combination has the form $\Constant c$ or $\Appl{\Constant c, x@1, \ldots, + x@n}$. For term translations, the arguments are terms and a combination +has the form $\ttfct{Const} (c, \tau)$ or $\ttfct{Const} (c, \tau) \ttapp +x@1 \ttapp \ldots \ttapp x@n$. Terms allow more sophisticated +transformations than \AST{}s do, typically involving abstractions and bound +variables. 
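+
+For instance, a theory's {\tt ML} section could install a parse
+translation for a syntactic constant {\tt _note} that simply discards an
+annotation argument.  Both the constant {\tt _note} and the function
+{\tt note_tr} are invented for this sketch:
+\begin{ttbox}
+fun note_tr [t, _] = t          (*keep the term, drop the annotation*)
+  | note_tr ts = raise TERM ("note_tr", ts);
+
+val parse_translation = [("_note", note_tr)];
+\end{ttbox}
+An application of {\tt _note} to exactly two arguments then yields its
+first argument alone; any other argument list raises an exception and
+produces an error message.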
+ +Regardless of whether they act on terms or \AST{}s, +parse translations differ from print translations fundamentally: +\begin{description} +\item[Parse translations] are applied bottom-up. The arguments are already + in translated form. The translations must not fail; exceptions trigger + an error message. + +\item[Print translations] are applied top-down. They are supplied with + arguments that are partly still in internal form. The result again + undergoes translation; therefore a print translation should not introduce + as head the very constant that invoked it. The function may raise + exception \xdx{Match} to indicate failure; in this event it has no + effect. +\end{description} + +Only constant atoms --- constructor \ttindex{Constant} for \AST{}s and +\ttindex{Const} for terms --- can invoke translation functions. This +causes another difference between parsing and printing. + +Parsing starts with a string and the constants are not yet identified. +Only parse tree heads create {\tt Constant}s in the resulting \AST, as +described in \S\ref{sec:astofpt}. Macros and parse \AST{} translations may +introduce further {\tt Constant}s. When the final \AST{} is converted to a +term, all {\tt Constant}s become {\tt Const}s, as described in +\S\ref{sec:termofast}. + +Printing starts with a well-typed term and all the constants are known. So +all logical constants and type constructors may invoke print translations. +These, and macros, may introduce further constants. + + +\subsection{Example: a print translation for dependent types} +\index{examples!of translations}\indexbold{*dependent_tr'} + +Let us continue the dependent type example (page~\pageref{prod_trans}) by +examining the parse translation for~\cdx{_K} and the print translation +{\tt dependent_tr'}, which are both built-in. By convention, parse +translations have names ending with {\tt _tr} and print translations have +names ending with {\tt _tr'}. Search for such names in the Isabelle +sources to locate more examples. + +Here is the parse translation for {\tt _K}: +\begin{ttbox} +fun k_tr [t] = Abs ("x", dummyT, incr_boundvars 1 t) + | k_tr ts = raise TERM("k_tr",ts); +\end{ttbox} +If {\tt k_tr} is called with exactly one argument~$t$, it creates a new +{\tt Abs} node with a body derived from $t$. Since terms given to parse +translations are not yet typed, the type of the bound variable in the new +{\tt Abs} is simply {\tt dummyT}. The function increments all {\tt Bound} +nodes referring to outer abstractions by calling \ttindex{incr_boundvars}, +a basic term manipulation function defined in {\tt Pure/term.ML}. + +Here is the print translation for dependent types: +\begin{ttbox} +fun dependent_tr' (q,r) (A :: Abs (x, T, B) :: ts) = + if 0 mem (loose_bnos B) then + let val (x', B') = variant_abs (x, dummyT, B); + in list_comb (Const (q, dummyT) $ Free (x', T) $ A $ B', ts) + end + else list_comb (Const (r, dummyT) $ A $ B, ts) + | dependent_tr' _ _ = raise Match; +\end{ttbox} +The argument {\tt (q,r)} is supplied to {\tt dependent_tr'} by a curried +function application during its installation. We could set up print +translations for both {\tt Pi} and {\tt Sigma} by including +\begin{ttbox}\index{*ML section} +val print_translation = + [("Pi", dependent_tr' ("{\at}PROD", "{\at}->")), + ("Sigma", dependent_tr' ("{\at}SUM", "{\at}*"))]; +\end{ttbox} +within the {\tt ML} section. 
The first of these transforms ${\tt Pi}(A,
+\mtt{Abs}(x, T, B))$ into $\hbox{\tt{\at}PROD}(x', A, B')$ or
+$\hbox{\tt{\at}->}(A, B)$, choosing the latter form if $B$ does not depend
+on~$x$.  It checks this using \ttindex{loose_bnos}, yet another function
+from {\tt Pure/term.ML}.  Note that $x'$ is a version of $x$ renamed away
+from all names in $B$, and $B'$ is the body $B$ with {\tt Bound} nodes
+referring to our {\tt Abs} node replaced by $\ttfct{Free} (x',
+\mtt{dummyT})$.
+
+We must be careful with types here.  While types of {\tt Const}s are
+ignored, type constraints may be printed for some {\tt Free}s and
+{\tt Var}s if \ttindex{show_types} is set to {\tt true}.  Variables of type
+\ttindex{dummyT} are never printed with a constraint, though.  The line
+\begin{ttbox}
+    let val (x', B') = variant_abs (x, dummyT, B);
+\end{ttbox}\index{*variant_abs}
+replaces bound variable occurrences in~$B$ by the free variable $x'$ with
+type {\tt dummyT}.  Only the binding occurrence of~$x'$ is given the
+correct type~{\tt T}, so this is the only place where a type
+constraint might appear.
+\index{translations|)}
+\index{syntax!transformations|)}
diff -r bacfaeeea007 -r 361a71713176 doc-src/Ref/tactic.tex
--- a/doc-src/Ref/tactic.tex	Fri Apr 15 16:53:01 1994 +0200
+++ b/doc-src/Ref/tactic.tex	Fri Apr 15 17:16:23 1994 +0200
@@ -1,10 +1,10 @@
 %% $Id$
 \chapter{Tactics} \label{tactics}
 \index{tactics|(}
-Tactics have type \ttindexbold{tactic}.  They are essentially
+Tactics have type \mltydx{tactic}.  They are essentially
 functions from theorems to theorem sequences, where the theorems represent
 states of a backward proof.  Tactics seldom need to be coded from scratch,
-as functions; the basic tactics suffice for most proofs.
+as functions; instead they are expressed using basic tactics and tacticals.
 
 \section{Resolution and assumption tactics}
 {\bf Resolution} is Isabelle's basic mechanism for refining a subgoal using
@@ -19,6 +19,7 @@
 range.
 
 \subsection{Resolution tactics}
+\index{resolution!tactics}
 \index{tactics!resolution|bold}
 \begin{ttbox} 
 resolve_tac  : thm list -> int -> tactic
@@ -30,49 +31,51 @@
 of object-rules.  When generating next states, they take each of the rules
 in the order given.  Each rule may yield several next states, or none:
 higher-order resolution may yield multiple resolvents.
-\begin{description}
+\begin{ttdescription}
 \item[\ttindexbold{resolve_tac} {\it thms} {\it i}] 
-refines the proof state using the object-rules, which should normally be
-introduction rules.  It resolves an object-rule's conclusion with
-subgoal~$i$ of the proof state.
+  refines the proof state using the rules, which should normally be
+  introduction rules.  It resolves a rule's conclusion with
+  subgoal~$i$ of the proof state.
 
 \item[\ttindexbold{eresolve_tac} {\it thms} {\it i}] 
-refines the proof state by elim-resolution with the object-rules, which
-should normally be elimination rules.  It resolves with a rule, solves
-its first premise by assumption, and finally {\bf deletes} that assumption
-from any new subgoals.
+  \index{elim-resolution}
+  performs elim-resolution with the rules, which should normally be
+  elimination rules.  It resolves with a rule, solves its first premise by
+  assumption, and finally {\em deletes\/} that assumption from any new
+  subgoals.
 
 \item[\ttindexbold{dresolve_tac} {\it thms} {\it i}] 
-performs destruct-resolution with the object-rules, which normally should
-be destruction rules.  This replaces an assumption by the result of
-applying one of the rules.
+ \index{forward proof}\index{destruct-resolution} + performs destruct-resolution with the rules, which normally should + be destruction rules. This replaces an assumption by the result of + applying one of the rules. -\item[\ttindexbold{forward_tac}] -is like \ttindex{dresolve_tac} except that the selected assumption is not -deleted. It applies a rule to an assumption, adding the result as a new -assumption. -\end{description} +\item[\ttindexbold{forward_tac}]\index{forward proof} + is like {\tt dresolve_tac} except that the selected assumption is not + deleted. It applies a rule to an assumption, adding the result as a new + assumption. +\end{ttdescription} \subsection{Assumption tactics} -\index{tactics!assumption|bold} +\index{tactics!assumption|bold}\index{assumptions!tactics for} \begin{ttbox} assume_tac : int -> tactic eq_assume_tac : int -> tactic \end{ttbox} -\begin{description} +\begin{ttdescription} \item[\ttindexbold{assume_tac} {\it i}] attempts to solve subgoal~$i$ by assumption. \item[\ttindexbold{eq_assume_tac}] is like {\tt assume_tac} but does not use unification. It succeeds (with a -{\bf unique} next state) if one of the assumptions is identical to the +{\em unique\/} next state) if one of the assumptions is identical to the subgoal's conclusion. Since it does not instantiate variables, it cannot make other subgoals unprovable. It is intended to be called from proof strategies, not interactively. -\end{description} +\end{ttdescription} \subsection{Matching tactics} \label{match_tac} -\index{tactics!matching|bold} +\index{tactics!matching} \begin{ttbox} match_tac : thm list -> int -> tactic ematch_tac : thm list -> int -> tactic @@ -83,9 +86,9 @@ willy-nilly, but are left alone. Matching --- strictly speaking --- means treating the unknowns in the proof state as constants; these tactics merely discard unifiers that would update the proof state. -\begin{description} +\begin{ttdescription} \item[\ttindexbold{match_tac} {\it thms} {\it i}] -refines the proof state using the object-rules, matching an object-rule's +refines the proof state using the rules, matching a rule's conclusion with subgoal~$i$ of the proof state. \item[\ttindexbold{ematch_tac}] @@ -93,11 +96,11 @@ \item[\ttindexbold{dmatch_tac}] is like {\tt match_tac}, but performs destruct-resolution. -\end{description} +\end{ttdescription} \subsection{Resolution with instantiation} \label{res_inst_tac} -\index{tactics!instantiation|bold} +\index{tactics!instantiation}\index{instantiation} \begin{ttbox} res_inst_tac : (string*string)list -> thm -> int -> tactic eres_inst_tac : (string*string)list -> thm -> int -> tactic @@ -134,7 +137,7 @@ the terms receive subscripts and are lifted over the parameters; thus, you may not refer to unknowns in the subgoal. -\begin{description} +\begin{ttdescription} \item[\ttindexbold{res_inst_tac} {\it insts} {\it thm} {\it i}] instantiates the rule {\it thm} with the instantiations {\it insts}, as described above, and then performs resolution on subgoal~$i$. Resolution @@ -151,12 +154,14 @@ is like {\tt dres_inst_tac} except that the selected assumption is not deleted. It applies the instantiated rule to an assumption, adding the result as a new assumption. 
-\end{description}
+\end{ttdescription}
 
 \section{Other basic tactics}
 \subsection{Definitions and meta-level rewriting}
-\index{tactics!meta-rewriting|bold}\index{rewriting!meta-level}
+\index{tactics!meta-rewriting|bold}\index{meta-rewriting|bold}
+\index{definitions}
+
 Definitions in Isabelle have the form $t\equiv u$, where typically $t$ is a
 constant or a constant applied to a list of variables, for example $\it
 sqr(n)\equiv n\times n$.  (Conditional definitions, $\phi\Imp t\equiv u$,
@@ -175,7 +180,7 @@
 fold_goals_tac : thm list -> tactic
 fold_tac       : thm list -> tactic
 \end{ttbox}
-\begin{description}
+\begin{ttdescription}
 \item[\ttindexbold{rewrite_goals_tac} {\it defs}]  
 unfolds the {\it defs} throughout the subgoals of the proof state, while
 leaving the main goal unchanged.  Use \ttindex{SELECT_GOAL} to restrict it to a
@@ -191,11 +196,11 @@
 \item[\ttindexbold{fold_tac} {\it defs}]  
 folds the {\it defs} throughout the proof state.
-\end{description}
+\end{ttdescription}
 
 
 \subsection{Tactic shortcuts}
-\index{shortcuts!for tactics|bold}
+\index{shortcuts!for tactics}
 \index{tactics!resolution}\index{tactics!assumption}
 \index{tactics!meta-rewriting}
 \begin{ttbox} 
@@ -207,7 +212,7 @@
 rewtac   : thm -> tactic
 \end{ttbox}
 These abbreviate common uses of tactics.
-\begin{description}
+\begin{ttdescription}
 \item[\ttindexbold{rtac} {\it thm} {\it i}] 
 abbreviates \hbox{\tt resolve_tac [{\it thm}] {\it i}}, doing resolution.
 
@@ -229,11 +234,11 @@
 \item[\ttindexbold{rewtac} {\it def}] 
 abbreviates \hbox{\tt rewrite_goals_tac [{\it def}]}, unfolding a definition.
-\end{description}
+\end{ttdescription}
 
 
 \subsection{Inserting premises and facts}\label{cut_facts_tac}
-\index{tactics!for inserting facts|bold}
+\index{tactics!for inserting facts}\index{assumptions!inserting}
 \begin{ttbox} 
 cut_facts_tac : thm list -> int -> tactic
 cut_inst_tac  : (string*string)list -> thm -> int -> tactic
 subgoal_tac   : string -> int -> tactic
 \end{ttbox}
 These tactics add assumptions --- which must be proved, sooner or later ---
 to a given subgoal.
-\begin{description}
+\begin{ttdescription}
 \item[\ttindexbold{cut_facts_tac} {\it thms} {\it i}]
 adds the {\it thms} as new assumptions to subgoal~$i$.  Once they have
 been inserted as assumptions, they become subject to tactics such as {\tt
@@ -259,39 +264,39 @@
 \item[\ttindexbold{subgoal_tac} {\it formula} {\it i}]
 adds the {\it formula} as an assumption to subgoal~$i$, and inserts the same
 {\it formula} as a new subgoal, $i+1$.
-\end{description}
+\end{ttdescription}
 
 
 \subsection{Theorems useful with tactics}
-\index{theorems!of pure theory|bold}
+\index{theorems!of pure theory}
 \begin{ttbox} 
 asm_rl: thm 
 cut_rl: thm 
 \end{ttbox}
-\begin{description}
-\item[\ttindexbold{asm_rl}] 
+\begin{ttdescription}
+\item[\tdx{asm_rl}] 
 is $\psi\Imp\psi$.  Under elim-resolution it does proof by assumption, and
 \hbox{\tt eresolve_tac (asm_rl::{\it thms}) {\it i}} is equivalent to
 \begin{ttbox} 
 assume_tac {\it i}  ORELSE  eresolve_tac {\it thms} {\it i}
 \end{ttbox}
 
-\item[\ttindexbold{cut_rl}] 
+\item[\tdx{cut_rl}] 
 is $\List{\psi\Imp\theta,\psi}\Imp\theta$.  It is useful for inserting
-assumptions; it underlies \ttindex{forward_tac}, \ttindex{cut_facts_tac}
-and \ttindex{subgoal_tac}.
+assumptions; it underlies {\tt forward_tac}, {\tt cut_facts_tac}
+and {\tt subgoal_tac}.
+\end{ttdescription} \section{Obscure tactics} \subsection{Tidying the proof state} -\index{parameters!removing unused|bold} +\index{parameters!removing unused} \index{flex-flex constraints} \begin{ttbox} prune_params_tac : tactic flexflex_tac : tactic \end{ttbox} -\begin{description} +\begin{ttdescription} \item[\ttindexbold{prune_params_tac}] removes unused parameters from all subgoals of the proof state. It works by rewriting with the theorem $(\Forall x. V)\equiv V$. This tactic can @@ -308,10 +313,10 @@ some variables in a rule~(\S\ref{res_inst_tac}). Normally flex-flex constraints can be ignored; they often disappear as unknowns get instantiated. -\end{description} +\end{ttdescription} -\subsection{Renaming parameters in a goal} \index{parameters!renaming|bold} +\subsection{Renaming parameters in a goal} \index{parameters!renaming} \begin{ttbox} rename_tac : string -> int -> tactic rename_last_tac : string -> string list -> int -> tactic @@ -327,7 +332,7 @@ Sometimes there is insufficient information and Isabelle chooses an arbitrary name. The renaming tactics let you override Isabelle's choice. Because renaming parameters has no logical effect on the proof state, the -{\tt by} command prints the needless message {\tt Warning:\ same as previous +{\tt by} command prints the message {\tt Warning:\ same as previous level}. Alternatively, you can suppress the naming mechanism described above and @@ -335,7 +340,7 @@ form $p${\tt a}, $p${\tt b}, $p${\tt c},~\ldots, where $p$ is any desired prefix. They are ugly but predictable. -\begin{description} +\begin{ttdescription} \item[\ttindexbold{rename_tac} {\it str} {\it i}] interprets the string {\it str} as a series of blank-separated variable names, and uses them to rename the parameters of subgoal~$i$. The names @@ -352,13 +357,13 @@ sets the prefix for uniform renaming to~{\it prefix}. The default prefix is {\tt"k"}. -\item[\ttindexbold{Logic.auto_rename} \tt:= true;] +\item[\ttindexbold{Logic.auto_rename} := true;] makes Isabelle generate uniform names for parameters. -\end{description} +\end{ttdescription} \subsection{Composition: resolution without lifting} -\index{tactics!for composition|bold} +\index{tactics!for composition} \begin{ttbox} compose_tac: (bool * thm * int) -> int -> tactic \end{ttbox} @@ -367,14 +372,14 @@ resolution tactics, may occasionally be useful for special effects. A typical application is \ttindex{res_inst_tac}, which lifts and instantiates a rule, then passes the result to {\tt compose_tac}. -\begin{description} +\begin{ttdescription} \item[\ttindexbold{compose_tac} ($flag$, $rule$, $m$) $i$] refines subgoal~$i$ using $rule$, without lifting. The $rule$ is taken to have the form $\List{\psi@1; \ldots; \psi@m} \Imp \psi$, where $\psi$ need -{\bf not} be atomic; thus $m$ determines the number of new subgoals. If +not be atomic; thus $m$ determines the number of new subgoals. If $flag$ is {\tt true} then it performs elim-resolution --- it solves the first premise of~$rule$ by assumption and deletes that assumption. -\end{description} +\end{ttdescription} \section{Managing lots of rules} @@ -398,7 +403,7 @@ elim-resolution if the flag is~{\tt true}. A single tactic call handles a mixture of introduction and elimination rules. -\begin{description} +\begin{ttdescription} \item[\ttindexbold{biresolve_tac} {\it brls} {\it i}] refines the proof state by resolution or elim-resolution on each rule, as indicated by its flag. It affects subgoal~$i$ of the proof state. 
@@ -419,13 +424,13 @@
 \begin{ttbox} 
 subgoals_of_brl {\it brl1} < subgoals_of_brl {\it brl2}
 \end{ttbox}
+\end{ttdescription}
 Note that \hbox{\tt sort lessb {\it brls}} sorts a list of $\it
 (flag,rule)$ pairs by the number of new subgoals they will yield.  Thus,
 those that yield the fewest subgoals should be tried first.
-\end{description}
 
-\subsection{Discrimination nets for fast resolution}
+\subsection{Discrimination nets for fast resolution}\label{filt_resolve_tac}
 \index{discrimination nets|bold}
 \index{tactics!resolution}
 \begin{ttbox} 
@@ -437,11 +442,12 @@
 could_unify : term*term->bool
 filter_thms : (term*term->bool) -> int*term*thm list -> thm list
 \end{ttbox}
-The module \ttindex{Net} implements a discrimination net data structure for
+The module {\tt Net} implements a discrimination net data structure for
 fast selection of rules \cite[Chapter 14]{charniak80}.  A term is
 classified by the symbol list obtained by flattening it in preorder.
 The flattening takes account of function applications, constants, and free
 and bound variables; it identifies all unknowns and also regards
+\index{lambda abs@$\lambda$-abstractions}
 $\lambda$-abstractions as unknowns, since they could $\eta$-contract to
 anything.
 
@@ -454,13 +460,13 @@
 A net can store introduction rules indexed by their conclusion, and
 elimination rules indexed by their major premise.  Isabelle provides
-several functions for ``compiling'' long lists of rules into fast
+several functions for `compiling' long lists of rules into fast
 resolution tactics.  When supplied with a list of theorems, these functions
 build a discrimination net; the net is used when the tactic is applied to
 a goal.  To avoid repeatedly constructing the nets, use currying: bind the
 resulting tactics to \ML{} identifiers.
 
-\begin{description}
+\begin{ttdescription}
 \item[\ttindexbold{net_resolve_tac} {\it thms}] 
 builds a discrimination net to obtain the effect of a similar call to {\tt
  resolve_tac}.
@@ -486,7 +492,7 @@
 type inference.
 
 \item[\ttindexbold{could_unify} ({\it t},{\it u})] 
-returns {\tt false} if~$t$ and~$u$ are ``obviously'' non-unifiable, and
+returns {\tt false} if~$t$ and~$u$ are `obviously' non-unifiable, and
 otherwise returns~{\tt true}.  It assumes all variables are distinct,
 reporting that {\tt ?a=?a} may unify with {\tt 0=1}.
 
@@ -495,17 +501,17 @@
 subgoal {\it prem}, using the predicate {\it could\/} to compare the
 conclusion of the subgoal with the conclusion of each rule.  The resulting
 list is no longer than {\it limit}.
-\end{description}
+\end{ttdescription}
 
 
 \section{Programming tools for proof strategies}
 Do not consider using the primitives discussed in this section unless you
-really need to code tactics from scratch,
+really need to code tactics from scratch.
 
 \subsection{Operations on type {\tt tactic}}
-\index{tactics!primitives for coding|bold}
+\index{tactics!primitives for coding}
 A tactic maps theorems to theorem sequences (lazy lists).  The type
-constructor for sequences is called \ttindex{Sequence.seq}.  To simplify the
+constructor for sequences is called \mltydx{Sequence.seq}.
To simplify the types of tactics and tacticals, Isabelle defines a type of tactics: \begin{ttbox} datatype tactic = Tactic of thm -> thm Sequence.seq @@ -519,8 +525,8 @@ STATE : (thm -> tactic) -> tactic SUBGOAL : ((term*int) -> tactic) -> int -> tactic \end{ttbox} -\begin{description} -\item[\ttindexbold{tapply} {\it tac} {\it thm}] +\begin{ttdescription} +\item[\ttindexbold{tapply}({\it tac}, {\it thm})] returns the result of applying the tactic, as a function, to {\it thm}. \item[\ttindexbold{Tactic} {\it f}] @@ -533,24 +539,24 @@ \item[\ttindexbold{STATE} $f$] applies $f$ to the proof state and then applies the resulting tactic to the same state. It supports the following style, where the tactic body is -expressed at a high level, but may peek at the proof state: +expressed using tactics and tacticals, but may peek at the proof state: \begin{ttbox} -STATE (fn state => {\it some tactic}) +STATE (fn state => {\it tactic-valued expression}) \end{ttbox} \item[\ttindexbold{SUBGOAL} $f$ $i$] extracts subgoal~$i$ from the proof state as a term~$t$, and computes a tactic by calling~$f(t,i)$. It applies the resulting tactic to the same -state. The tactic body is expressed at a high level, but may peek at a -particular subgoal: +state. The tactic body is expressed using tactics and tacticals, but may +peek at a particular subgoal: \begin{ttbox} -SUBGOAL (fn (t,i) => {\it some tactic}) +SUBGOAL (fn (t,i) => {\it tactic-valued expression}) \end{ttbox} -\end{description} +\end{ttdescription} \subsection{Tracing} -\index{tactics!tracing|bold} +\index{tactics!tracing} \index{tracing!of tactics} \begin{ttbox} pause_tac: tactic @@ -560,7 +566,7 @@ when they are applied to a proof state. Their output may be difficult to interpret. Note that certain of the searching tacticals, such as {\tt REPEAT}, have built-in tracing options. -\begin{description} +\begin{ttdescription} \item[\ttindexbold{pause_tac}] prints {\tt** Press RETURN to continue:} and then reads a line from the terminal. If this line is blank then it returns the proof state unchanged; @@ -569,13 +575,13 @@ \item[\ttindexbold{print_tac}] returns the proof state unchanged, with the side effect of printing it at the terminal. -\end{description} +\end{ttdescription} -\subsection{Sequences} +\section{Sequences} \index{sequences (lazy lists)|bold} -The module \ttindex{Sequence} declares a type of lazy lists. It uses -Isabelle's type \ttindexbold{option} to represent the possible presence +The module {\tt Sequence} declares a type of lazy lists. It uses +Isabelle's type \mltydx{option} to represent the possible presence (\ttindexbold{Some}) or absence (\ttindexbold{None}) of a value: \begin{ttbox} @@ -585,52 +591,52 @@ specifications below; for instance, {\tt null} appears instead of {\tt Sequence.null}. -\subsubsection{Basic operations on sequences} +\subsection{Basic operations on sequences} \begin{ttbox} null : 'a seq seqof : (unit -> ('a * 'a seq) option) -> 'a seq single : 'a -> 'a seq pull : 'a seq -> ('a * 'a seq) option \end{ttbox} -\begin{description} -\item[{\tt Sequence.null}] +\begin{ttdescription} +\item[Sequence.null] is the empty sequence. \item[\tt Sequence.seqof (fn()=> Some($x$,$s$))] constructs the sequence with head~$x$ and tail~$s$, neither of which is evaluated. -\item[{\tt Sequence.single} $x$] +\item[Sequence.single $x$] constructs the sequence containing the single element~$x$. 
-\item[{\tt Sequence.pull} $s$] +\item[Sequence.pull $s$] returns {\tt None} if the sequence is empty and {\tt Some($x$,$s'$)} if the sequence has head~$x$ and tail~$s'$. Warning: calling \hbox{Sequence.pull $s$} again will {\bf recompute} the value of~$x$; it is not stored! -\end{description} +\end{ttdescription} -\subsubsection{Converting between sequences and lists} +\subsection{Converting between sequences and lists} \begin{ttbox} chop : int * 'a seq -> 'a list * 'a seq list_of_s : 'a seq -> 'a list s_of_list : 'a list -> 'a seq \end{ttbox} -\begin{description} -\item[{\tt Sequence.chop} ($n$, $s$)] +\begin{ttdescription} +\item[Sequence.chop ($n$, $s$)] returns the first~$n$ elements of~$s$ as a list, paired with the remaining elements of~$s$. If $s$ has fewer than~$n$ elements, then so will the list. -\item[{\tt Sequence.list_of_s} $s$] +\item[Sequence.list_of_s $s$] returns the elements of~$s$, which must be finite, as a list. -\item[{\tt Sequence.s_of_list} $l$] +\item[Sequence.s_of_list $l$] creates a sequence containing the elements of~$l$. -\end{description} +\end{ttdescription} -\subsubsection{Combining sequences} +\subsection{Combining sequences} \begin{ttbox} append : 'a seq * 'a seq -> 'a seq interleave : 'a seq * 'a seq -> 'a seq @@ -638,24 +644,24 @@ maps : ('a -> 'b) -> 'a seq -> 'b seq filters : ('a -> bool) -> 'a seq -> 'a seq \end{ttbox} -\begin{description} -\item[{\tt Sequence.append} ($s@1$, $s@2$)] +\begin{ttdescription} +\item[Sequence.append ($s@1$, $s@2$)] concatenates $s@1$ to $s@2$. -\item[{\tt Sequence.interleave} ($s@1$, $s@2$)] +\item[Sequence.interleave ($s@1$, $s@2$)] joins $s@1$ with $s@2$ by interleaving their elements. The result contains all the elements of the sequences, even if both are infinite. -\item[{\tt Sequence.flats} $ss$] +\item[Sequence.flats $ss$] concatenates a sequence of sequences. -\item[{\tt Sequence.maps} $f$ $s$] +\item[Sequence.maps $f$ $s$] applies $f$ to every element of~$s=x@1,x@2,\ldots$, yielding the sequence $f(x@1),f(x@2),\ldots$. -\item[{\tt Sequence.filters} $p$ $s$] +\item[Sequence.filters $p$ $s$] returns the sequence consisting of all elements~$x$ of~$s$ such that $p(x)$ is {\tt true}. -\end{description} +\end{ttdescription} \index{tactics|)} diff -r bacfaeeea007 -r 361a71713176 doc-src/Ref/tctical.tex --- a/doc-src/Ref/tctical.tex Fri Apr 15 16:53:01 1994 +0200 +++ b/doc-src/Ref/tctical.tex Fri Apr 15 17:16:23 1994 +0200 @@ -1,7 +1,6 @@ %% $Id$ \chapter{Tacticals} \index{tacticals|(} -\index{tactics!combining|see{tacticals}} Tacticals are operations on tactics. Their implementation makes use of functional programming techniques, especially for sequences. Most of the time, you may forget about this and regard tacticals as high-level control @@ -9,7 +8,7 @@ \section{The basic tacticals} \subsection{Joining two tactics} -\index{tacticals!joining two tactics|bold} +\index{tacticals!joining two tactics} The tacticals {\tt THEN} and {\tt ORELSE}, which provide sequencing and alternation, underlie most of the other control structures in Isabelle. {\tt APPEND} and {\tt INTLEAVE} provide more sophisticated forms of @@ -20,40 +19,41 @@ APPEND : tactic * tactic -> tactic \hfill{\bf infix} INTLEAVE : tactic * tactic -> tactic \hfill{\bf infix} \end{ttbox} -\begin{description} -\item[\tt $tac@1$ \ttindexbold{THEN} $tac@2$] +\begin{ttdescription} +\item[$tac@1$ \ttindexbold{THEN} $tac@2$] is the sequential composition of the two tactics. 
Applied to a proof state, it returns all states reachable in two steps by applying $tac@1$ followed by~$tac@2$. First, it applies $tac@1$ to the proof state, getting a sequence of next states; then, it applies $tac@2$ to each of these and concatenates the results. -\item[\tt $tac@1$ \ttindexbold{ORELSE} $tac@2$] +\item[$tac@1$ \ttindexbold{ORELSE} $tac@2$] makes a choice between the two tactics. Applied to a state, it tries~$tac@1$ and returns the result if non-empty; if $tac@1$ fails then it uses~$tac@2$. This is a deterministic choice: if $tac@1$ succeeds then $tac@2$ is excluded. -\item[\tt $tac@1$ \ttindexbold{APPEND} $tac@2$] +\item[$tac@1$ \ttindexbold{APPEND} $tac@2$] concatenates the results of $tac@1$ and~$tac@2$. By not making a commitment -to either tactic, {\tt APPEND} helps avoid incompleteness during search. +to either tactic, {\tt APPEND} helps avoid incompleteness during +search.\index{search} -\item[\tt $tac@1$ \ttindexbold{INTLEAVE} $tac@2$] +\item[$tac@1$ \ttindexbold{INTLEAVE} $tac@2$] interleaves the results of $tac@1$ and~$tac@2$. Thus, it includes all possible next states, even if one of the tactics returns an infinite sequence. -\end{description} +\end{ttdescription} \subsection{Joining a list of tactics} -\index{tacticals!joining a list of tactics|bold} +\index{tacticals!joining a list of tactics} \begin{ttbox} EVERY : tactic list -> tactic FIRST : tactic list -> tactic \end{ttbox} {\tt EVERY} and {\tt FIRST} are block structured versions of {\tt THEN} and {\tt ORELSE}\@. -\begin{description} +\begin{ttdescription} \item[\ttindexbold{EVERY} {$[tac@1,\ldots,tac@n]$}] abbreviates \hbox{\tt$tac@1$ THEN \ldots{} THEN $tac@n$}. It is useful for writing a series of tactics to be executed in sequence. @@ -61,11 +61,11 @@ \item[\ttindexbold{FIRST} {$[tac@1,\ldots,tac@n]$}] abbreviates \hbox{\tt$tac@1$ ORELSE \ldots{} ORELSE $tac@n$}. It is useful for writing a series of tactics to be attempted one after another. -\end{description} +\end{ttdescription} \subsection{Repetition tacticals} -\index{tacticals!repetition|bold} +\index{tacticals!repetition} \begin{ttbox} TRY : tactic -> tactic REPEAT_DETERM : tactic -> tactic @@ -73,7 +73,7 @@ REPEAT1 : tactic -> tactic trace_REPEAT : bool ref \hfill{\bf initially false} \end{ttbox} -\begin{description} +\begin{ttdescription} \item[\ttindexbold{TRY} {\it tac}] applies {\it tac\/} to the proof state and returns the resulting sequence, if non-empty; otherwise it returns the original state. Thus, it applies @@ -96,19 +96,19 @@ is like \hbox{\tt REPEAT {\it tac}} but it always applies {\it tac\/} at least once, failing if this is impossible. -\item[\ttindexbold{trace_REPEAT} \tt:= true;] +\item[\ttindexbold{trace_REPEAT} := true;] enables an interactive tracing mode for the tacticals {\tt REPEAT_DETERM} and {\tt REPEAT}. To view the tracing options, type {\tt h} at the prompt. -\end{description} +\end{ttdescription} \subsection{Identities for tacticals} -\index{tacticals!identities for|bold} +\index{tacticals!identities for} \begin{ttbox} all_tac : tactic no_tac : tactic \end{ttbox} -\begin{description} +\begin{ttdescription} \item[\ttindexbold{all_tac}] maps any proof state to the one-element sequence containing that state. Thus, it succeeds for all states. It is the identity element of the @@ -119,7 +119,7 @@ It is the identity element of \ttindex{ORELSE}, \ttindex{APPEND}, and \ttindex{INTLEAVE}\@. Also, it is a zero element for \ttindex{THEN}, which means that \hbox{\tt$tac$ THEN no_tac} is equivalent to {\tt no_tac}. 
-\end{description} +\end{ttdescription} These primitive tactics are useful when writing tacticals. For example, \ttindexbold{TRY} and \ttindexbold{REPEAT} (ignoring tracing) can be coded as follows: @@ -151,6 +151,8 @@ \section{Control and search tacticals} +\index{search!tacticals|(} + A predicate on theorems, namely a function of type \hbox{\tt thm->bool}, can test whether a proof state enjoys some desirable property --- such as having no subgoals. Tactics that search for satisfactory states are easy @@ -160,14 +162,14 @@ \subsection{Filtering a tactic's results} -\index{tacticals!for filtering|bold} -\index{tactics!filtering results of|bold} +\index{tacticals!for filtering} +\index{tactics!filtering results of} \begin{ttbox} FILTER : (thm -> bool) -> tactic -> tactic CHANGED : tactic -> tactic \end{ttbox} -\begin{description} -\item[\tt \tt FILTER {\it p} $tac$] +\begin{ttdescription} +\item[\tt FILTER {\it p} $tac$] applies $tac$ to the proof state and returns a sequence consisting of those result states that satisfy~$p$. @@ -175,11 +177,11 @@ applies {\it tac\/} to the proof state and returns precisely those states that differ from the original state. Thus, \hbox{\tt CHANGED {\it tac}} always has some effect on the state. -\end{description} +\end{ttdescription} \subsection{Depth-first search} -\index{tacticals!searching|bold} +\index{tacticals!searching} \index{tracing!of searching tacticals} \begin{ttbox} DEPTH_FIRST : (thm->bool) -> tactic -> tactic @@ -187,7 +189,7 @@ DEPTH_SOLVE_1 : tactic -> tactic trace_DEPTH_FIRST: bool ref \hfill{\bf initially false} \end{ttbox} -\begin{description} +\begin{ttdescription} \item[\ttindexbold{DEPTH_FIRST} {\it satp} {\it tac}] returns the proof state if {\it satp} returns true. Otherwise it applies {\it tac}, then recursively searches from each element of the resulting @@ -201,14 +203,14 @@ uses {\tt DEPTH_FIRST} to search for states having fewer subgoals than the given state. Thus, it insists upon solving at least one subgoal. -\item[\ttindexbold{trace_DEPTH_FIRST} \tt:= true;] +\item[\ttindexbold{trace_DEPTH_FIRST} := true;] enables interactive tracing for {\tt DEPTH_FIRST}. To view the tracing options, type {\tt h} at the prompt. -\end{description} +\end{ttdescription} \subsection{Other search strategies} -\index{tacticals!searching|bold} +\index{tacticals!searching} \index{tracing!of searching tacticals} \begin{ttbox} BREADTH_FIRST : (thm->bool) -> tactic -> tactic @@ -220,7 +222,7 @@ These search strategies will find a solution if one exists. However, they do not enumerate all solutions; they terminate after the first satisfactory result from {\it tac}. -\begin{description} +\begin{ttdescription} \item[\ttindexbold{BREADTH_FIRST} {\it satp} {\it tac}] uses breadth-first search to find states for which {\it satp\/} is true. For most applications, it is too slow. @@ -241,10 +243,10 @@ contains the result of applying $tac@0$ to the proof state. This tactical permits separate tactics for starting the search and continuing the search. -\item[\ttindexbold{trace_BEST_FIRST} \tt:= true;] +\item[\ttindexbold{trace_BEST_FIRST} := true;] enables an interactive tracing mode for the tactical {\tt BEST_FIRST}. To view the tracing options, type {\tt h} at the prompt. 
-\end{description}
+\end{ttdescription}
 
 
 \subsection{Auxiliary tacticals for searching}
 \index{tacticals!conditional}
 \index{tacticals!deterministic}
 \begin{ttbox} 
 COND        : (thm->bool) -> tactic -> tactic -> tactic
 IF_UNSOLVED : tactic -> tactic
 DETERM      : tactic -> tactic
 \end{ttbox}
-\begin{description}
-\item[\tt \tt COND {\it p} $tac@1$ $tac@2$] 
+\begin{ttdescription}
+\item[COND {\it p} $tac@1$ $tac@2$] 
 applies $tac@1$ to the proof state if it satisfies~$p$, and applies $tac@2$
 otherwise.  It is a conditional tactical in that only one of $tac@1$ and
 $tac@2$ is applied to a proof state.  However, both $tac@1$ and $tac@2$ are
@@ -271,7 +273,7 @@
 applies {\it tac\/} to the proof state and returns the head of the
 resulting sequence.  {\tt DETERM} limits the search space by making its
 argument deterministic.
-\end{description}
+\end{ttdescription}
 
 
 \subsection{Predicates and functions useful for searching}
 \index{theorems!size of}
 \index{theorems!equality of}
 \begin{ttbox} 
 has_fewer_prems : int -> thm -> bool
 eq_thm          : thm * thm -> bool
 size_of_thm     : thm -> int
 \end{ttbox}
-\begin{description}
+\begin{ttdescription}
 \item[\ttindexbold{has_fewer_prems} $n$ $thm$] 
 reports whether $thm$ has fewer than~$n$ premises.  By currying,
 \hbox{\tt has_fewer_prems $n$} is a predicate on theorems; it may 
@@ -298,7 +300,9 @@
 computes the size of $thm$, namely the number of variables, constants and
 abstractions in its conclusion.  It may serve as a distance function for
 \ttindex{BEST_FIRST}. 
-\end{description}
+\end{ttdescription}
+
+\index{search!tacticals|)}
 
 
 \section{Tacticals for subgoal numbering}
@@ -316,7 +320,7 @@
 SELECT_GOAL : tactic -> int -> tactic
 METAHYPS    : (thm list -> tactic) -> int -> tactic
 \end{ttbox}
-\begin{description}
+\begin{ttdescription}
 \item[\ttindexbold{SELECT_GOAL} {\it tac} $i$] 
 restricts the effect of {\it tac\/} to subgoal~$i$ of the proof state.  It
 fails if there is no subgoal~$i$, or if {\it tac\/} changes the main goal
@@ -325,14 +329,15 @@
 subgoal~$i$.  If {\it tac\/} returns multiple results then so does
 \hbox{\tt SELECT_GOAL {\it tac} $i$}.
 
-Its workings are devious.  {\tt SELECT_GOAL} creates a state of the
-form $\phi\Imp\phi$, with one subgoal.  If subgoal~$i$ has the form say
-$\psi\Imp\theta$, then {\tt SELECT_GOAL} creates the state
-\[ (\Forall x.\psi\Imp\theta)\Imp(\Forall x.\psi\Imp\theta) \]
-rather than $\List{\psi\Imp\theta;\; \psi}\Imp\theta$, which has two
-subgoals.
+{\tt SELECT_GOAL} works by creating a state of the form $\phi\Imp\phi$,
+with the one subgoal~$\phi$.  If subgoal~$i$ has the form $\psi\Imp\theta$,
+then $(\psi\Imp\theta)\Imp(\psi\Imp\theta)$ is in fact $\List{\psi\Imp\theta;\;
+  \psi}\Imp\theta$, a proof state with two subgoals.  Such a proof state
+might cause tactics to go astray.  Therefore {\tt SELECT_GOAL} inserts a
+quantifier to create the state
+\[ (\Forall x.\psi\Imp\theta)\Imp(\Forall x.\psi\Imp\theta). \]
 
-\item[\ttindexbold{METAHYPS} {\it tacf} $i$]
+\item[\ttindexbold{METAHYPS} {\it tacf} $i$]\index{meta-assumptions}
 takes subgoal~$i$, of the form
 \[ \Forall x@1 \ldots x@l. \List{\theta@1; \ldots; \theta@k}\Imp\theta, \]
 and creates the list $\theta'@1$, \ldots, $\theta'@k$ of meta-level
@@ -349,7 +354,7 @@
 hypotheses $\theta@1,\ldots,\theta@k$ become free variables in $\theta'@1$,
 \ldots, $\theta'@k$, and are restored afterwards; the {\tt METAHYPS} call
 cannot instantiate them.  Unknowns in $\theta$ may be instantiated.  New
-unknowns in $\phi@1$, \ldots; $\phi@n$ are lifted over the parameters.
+unknowns in $\phi@1$, \ldots, $\phi@n$ are lifted over the parameters.
 
 Here is a typical application.

-\item[\ttindexbold{METAHYPS} {\it tacf} $i$]
+\item[\ttindexbold{METAHYPS} {\it tacf} $i$]\index{meta-assumptions}
 takes subgoal~$i$, of the form
 \[ \Forall x@1 \ldots x@l. \List{\theta@1; \ldots; \theta@k}\Imp\theta, \]
 and creates the list $\theta'@1$, \ldots, $\theta'@k$ of meta-level
@@ -349,7 +354,7 @@
 hypotheses $\theta@1,\ldots,\theta@k$ become free variables in $\theta'@1$,
 \ldots, $\theta'@k$, and are restored afterwards; the {\tt METAHYPS} call
 cannot instantiate them.  Unknowns in $\theta$ may be instantiated.  New
-unknowns in $\phi@1$, \ldots; $\phi@n$ are lifted over the parameters.
+unknowns in $\phi@1$, \ldots, $\phi@n$ are lifted over the parameters.

 Here is a typical application.  Calling {\tt hyp_res_tac}~$i$ resolves
 subgoal~$i$ with one of its own assumptions, which may itself have the form
@@ -357,7 +362,7 @@
 \begin{ttbox}
 val hyp_res_tac = METAHYPS (fn prems => resolve_tac prems 1);
 \end{ttbox}
-\end{description}
+\end{ttdescription}

 \begin{warn}
 {\tt METAHYPS} fails if the context or new subgoals contain type unknowns.
@@ -366,7 +371,7 @@

 \subsection{Scanning for a subgoal by number}
-\index{tacticals!scanning for subgoals|bold}
+\index{tacticals!scanning for subgoals}
 \begin{ttbox}
 ALLGOALS : (int -> tactic) -> tactic
 TRYALL   : (int -> tactic) -> tactic
@@ -383,17 +388,17 @@

 Suppose that the original proof state has $n$ subgoals.

-\begin{description}
+\begin{ttdescription}
 \item[\ttindexbold{ALLGOALS} {\it tacf}]
 is equivalent to
 \hbox{\tt$tacf(n)$ THEN \ldots{} THEN $tacf(1)$}.

-It applies {\it tacf} to all the subgoals, counting {\bf downwards} (to
+It applies {\it tacf} to all the subgoals, counting downwards (to
 avoid problems when subgoals are added or deleted).

 \item[\ttindexbold{TRYALL} {\it tacf}]
 is equivalent to
-\hbox{\tt TRY($tacf(n)$) THEN \ldots{} THEN TRY($tacf(1)$)}.
+\hbox{\tt TRY$(tacf(n))$ THEN \ldots{} THEN TRY$(tacf(1))$}.

 It attempts to apply {\it tacf} to all the subgoals.  For instance,
 the tactic \hbox{\tt TRYALL assume_tac} attempts to solve all the subgoals by
@@ -403,7 +408,7 @@
 is equivalent to
 \hbox{\tt$tacf(n)$ ORELSE \ldots{} ORELSE $tacf(1)$}.

-It applies {\it tacf} to one subgoal, counting {\bf downwards}.  For instance,
+It applies {\it tacf} to one subgoal, counting downwards.  For instance,
 the tactic \hbox{\tt SOMEGOAL assume_tac} solves one subgoal by assumption,
 failing if this is impossible.
@@ -411,13 +416,13 @@
 is equivalent to
 \hbox{\tt$tacf(1)$ ORELSE \ldots{} ORELSE $tacf(n)$}.

-It applies {\it tacf} to one subgoal, counting {\bf upwards}.
+It applies {\it tacf} to one subgoal, counting upwards.

 \item[\ttindexbold{REPEAT_SOME} {\it tacf}]
-applies {\it tacf} once or more to a subgoal, counting {\bf downwards}.
+applies {\it tacf} once or more to a subgoal, counting downwards.

 \item[\ttindexbold{REPEAT_FIRST} {\it tacf}]
-applies {\it tacf} once or more to a subgoal, counting {\bf upwards}.
+applies {\it tacf} once or more to a subgoal, counting upwards.

 \item[\ttindexbold{trace_goalno_tac} {\it tac} {\it i}]
 applies \hbox{\it tac i\/} to the proof state.  If the resulting sequence
@@ -425,13 +430,13 @@
 is non-empty, then it is returned, with the side-effect of printing {\tt
 Subgoal~$i$ selected}.  Otherwise, {\tt trace_goalno_tac} returns the empty
 sequence and prints nothing.

-It indicates that ``the tactic worked for subgoal~$i$'' and is mainly used
+It indicates that `the tactic worked for subgoal~$i$' and is mainly used
 with {\tt SOMEGOAL} and {\tt FIRSTGOAL}.
-\end{description}
+\end{ttdescription}

 \subsection{Joining tactic functions}
-\index{tacticals!joining tactic functions|bold}
+\index{tacticals!joining tactic functions}
 \begin{ttbox}
 THEN'   : ('a -> tactic) * ('a -> tactic) -> 'a -> tactic \hfill{\bf infix 1}
 ORELSE' : ('a -> tactic) * ('a -> tactic) -> 'a -> tactic \hfill{\bf infix}
@@ -461,16 +466,16 @@
 These tacticals are polymorphic; $x$ need not be an integer.
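+For instance, the function
+\begin{ttbox}
+resolve_tac rules THEN' assume_tac
+\end{ttbox}
+(a sketch; {\tt rules} stands for any list of theorems), applied to~$i$,
+resolves subgoal~$i$ with one of the {\tt rules} and then attempts to
+prove the first resulting subgoal by assumption.  In general: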
 \begin{center}
 \tt
 \begin{tabular}{r@{\rm\ \ yields\ \ }l}
-    ($tacf@1$~~THEN'~~$tacf@2$)$(x)$ \index{*THEN'} &
+    $(tacf@1$~~THEN'~~$tacf@2)(x)$ \index{*THEN'} &
     $tacf@1(x)$~~THEN~~$tacf@2(x)$ \\
-    ($tacf@1$ ORELSE' $tacf@2$)$(x)$ \index{*ORELSE'} &
+    $(tacf@1$ ORELSE' $tacf@2)(x)$ \index{*ORELSE'} &
     $tacf@1(x)$ ORELSE $tacf@2(x)$ \\
-    ($tacf@1$ APPEND' $tacf@2$)$(x)$ \index{*APPEND'} &
+    $(tacf@1$ APPEND' $tacf@2)(x)$ \index{*APPEND'} &
     $tacf@1(x)$ APPEND $tacf@2(x)$ \\
-    ($tacf@1$ INTLEAVE' $tacf@2$)$(x)$ \index{*INTLEAVE'} &
+    $(tacf@1$ INTLEAVE' $tacf@2)(x)$ \index{*INTLEAVE'} &
     $tacf@1(x)$ INTLEAVE $tacf@2(x)$ \\
     EVERY' $[tacf@1,\ldots,tacf@n] \; (x)$ \index{*EVERY'} &
@@ -483,7 +488,7 @@

 \subsection{Applying a list of tactics to 1}
-\index{tacticals!joining tactic functions|bold}
+\index{tacticals!joining tactic functions}
 \begin{ttbox}
 EVERY1: (int -> tactic) list -> tactic
 FIRST1: (int -> tactic) list -> tactic