\part{Advanced Methods}
Before continuing, it might be wise to try some of your own examples in
Isabelle, reinforcing your knowledge of the basic functions.

Look through {\em Isabelle's Object-Logics\/} and try proving some
simple theorems.  You probably should begin with first-order logic
(\texttt{FOL} or~\texttt{LK}).  Try working some of the examples provided,
and others from the literature.  Set theory~(\texttt{ZF}) and
Constructive Type Theory~(\texttt{CTT}) form a richer world for
mathematical reasoning and, again, many examples are in the
literature.  Higher-order logic~(\texttt{HOL}) is Isabelle's most
elaborate logic.  Its types and functions are identified with those of
the meta-logic.

Choose a logic that you already understand.  Isabelle is a proof
tool, not a teaching tool; if you do not know how to do a particular proof
on paper, then you certainly will not be able to do it on the machine.
Even experienced users plan large proofs on paper.

We have covered only the bare essentials of Isabelle, but enough to perform
substantial proofs.  By occasionally dipping into the {\em Reference
Manual}, you can learn additional tactics, subgoal commands and tacticals.


\section{Deriving rules in Isabelle}
\index{rules!derived}
A mathematical development goes through a progression of stages.  Each
stage defines some concepts and derives rules about them.  We shall see how
to derive rules, perhaps involving definitions, using Isabelle.  The
following section will explain how to declare types, constants, rules and
definitions.

\subsection{Deriving a rule using tactics and meta-level assumptions}
\label{deriving-example}
\index{examples!of deriving rules}\index{assumptions!of main goal}

The subgoal module supports the derivation of rules, as discussed in
\S\ref{deriving}.  When the \ttindex{Goal} command is supplied a formula of
the form $\List{\theta@1; \ldots; \theta@k} \Imp \phi$, there are two
possibilities:
\begin{itemize}
\item If all of the premises $\theta@1$, \ldots, $\theta@k$ are simple
  formulae{} (they do not involve the meta-connectives $\Forall$ or
  $\Imp$) then the command sets the goal to be
  $\List{\theta@1; \ldots; \theta@k} \Imp \phi$ and returns the empty list.
\item If one or more premises involve the meta-connectives $\Forall$ or
  $\Imp$, then the command sets the goal to be $\phi$ and returns a list
  consisting of the theorems ${\theta@i\;[\theta@i]}$, for $i=1$, \ldots,~$k$.
  These meta-level assumptions are also recorded internally, allowing
  \texttt{result} (which is called by \texttt{qed}) to discharge them in the
  original order.
\end{itemize}
Rules that discharge assumptions or introduce eigenvariables have complex
premises, and the second case applies.  In this section, many of the
theorems are subject to meta-level assumptions, so we make them visible by
setting the \ttindex{show_hyps} flag:
\begin{ttbox}
set show_hyps;
{\out val it = true : bool}
\end{ttbox}

Now, we are ready to derive $\conj$ elimination.
Until now, calling \texttt{Goal} has
returned an empty list, which we have ignored.  In this example, the list
contains the two premises of the rule, since one of them involves the $\Imp$
connective.  We bind them to the \ML\ identifiers \texttt{major} and
\texttt{minor}:\footnote{Some ML compilers will print a message such as
  {\em binding not exhaustive}.  This warns that \texttt{Goal} must return a
  2-element list.  Otherwise, the pattern-match will fail; ML will raise
  exception \xdx{Match}.}
\begin{ttbox}
val [major,minor] = Goal
    "[| P&Q;  [| P; Q |] ==> R |] ==> R";
{\out Level 0}
{\out R}
{\out  1. R}
{\out val major = "P & Q  [P & Q]" : thm}
{\out val minor = "[| P; Q |] ==> R  [[| P; Q |] ==> R]" : thm}
\end{ttbox}
Look at the minor premise, recalling that meta-level assumptions are
shown in brackets.  Using \texttt{minor}, we reduce $R$ to the subgoals
$P$ and~$Q$:
\begin{ttbox}
by (resolve_tac [minor] 1);
{\out Level 1}
{\out R}
{\out  1. P}
{\out  2. Q}
\end{ttbox}
Deviating from~\S\ref{deriving}, we apply $({\conj}E1)$ forwards from the
assumption $P\conj Q$ to obtain the theorem~$P\;[P\conj Q]$.
\begin{ttbox}
major RS conjunct1;
{\out val it = "P  [P & Q]" : thm}
\ttbreak
by (resolve_tac [major RS conjunct1] 1);
{\out Level 2}
{\out R}
{\out  1. Q}
\end{ttbox}
Similarly, we solve the subgoal involving~$Q$.
\begin{ttbox}
major RS conjunct2;
{\out val it = "Q  [P & Q]" : thm}
by (resolve_tac [major RS conjunct2] 1);
{\out Level 3}
{\out R}
{\out No subgoals!}
\end{ttbox}
Calling \ttindex{topthm} returns the current proof state as a theorem.
Note that it contains assumptions.  Calling \ttindex{qed} discharges
the assumptions --- both occurrences of $P\conj Q$ are discharged as
one --- and makes the variables schematic.
\begin{ttbox}
topthm();
{\out val it = "R  [P & Q, P & Q, [| P; Q |] ==> R]" : thm}
qed "conjE";
{\out val conjE = "[| ?P & ?Q; [| ?P; ?Q |] ==> ?R |] ==> ?R" : thm}
\end{ttbox}


\subsection{Definitions and derived rules} \label{definitions}
\index{rules!derived}\index{definitions!and derived rules|(}

Definitions are expressed as meta-level equalities.  Let us define negation
and the if-and-only-if connective:
\begin{eqnarray*}
  \neg \Var{P}          & \equiv & \Var{P}\imp\bot \\
  \Var{P}\bimp \Var{Q}  & \equiv &
        (\Var{P}\imp \Var{Q}) \conj (\Var{Q}\imp \Var{P})
\end{eqnarray*}
\index{meta-rewriting}%
Isabelle permits {\bf meta-level rewriting} using definitions such as
these.  {\bf Unfolding} replaces every instance
of $\neg \Var{P}$ by the corresponding instance of ${\Var{P}\imp\bot}$.  For
example, $\forall x.\neg (P(x)\conj \neg R(x,0))$ unfolds to
$\forall x.(P(x)\conj (R(x,0)\imp\bot))\imp\bot.$
{\bf Folding} a definition replaces occurrences of the right-hand side by
the left.
The occurrences need not be free in the entire formula.

When you define new concepts, you should derive rules asserting their
abstract properties, and then forget their definitions.  This supports
modularity: if you later change the definitions without affecting their
abstract properties, then most of your proofs will carry through without
change.  Indiscriminate unfolding makes a subgoal grow exponentially,
becoming unreadable.

Taking this point of view, Isabelle does not unfold definitions
automatically during proofs.  Rewriting must be explicit and selective.
Isabelle provides tactics and meta-rules for rewriting, and a version of
the \texttt{Goal} command that unfolds the conclusion and premises of the rule
being derived.

For example, the intuitionistic definition of negation given above may seem
peculiar.  Using Isabelle, we shall derive pleasanter negation rules:
$\infer[({\neg}I)]{\neg P}{\infer*{\bot}{[P]}} \qquad
 \infer[({\neg}E)]{Q}{\neg P & P}$
This requires proving the following meta-formulae:
$$(P\Imp\bot) \Imp \neg P \eqno(\neg I)$$
$$\List{\neg P; P} \Imp Q. \eqno(\neg E)$$


\subsection{Deriving the $\neg$ introduction rule}
To derive $(\neg I)$, we may call \texttt{Goal} with the appropriate formula.
Again, the rule's premises involve a meta-connective, and \texttt{Goal}
returns a one-element list.  We bind this list to the \ML\ identifier
\texttt{prems}.
\begin{ttbox}
val prems = Goal "(P ==> False) ==> ~P";
{\out Level 0}
{\out ~P}
{\out  1. ~P}
{\out val prems = ["P ==> False  [P ==> False]"] : thm list}
\end{ttbox}
Calling \ttindex{rewrite_goals_tac} with \tdx{not_def}, which is the
definition of negation, unfolds that definition in the subgoals.  It leaves
the main goal alone.
\begin{ttbox}
not_def;
{\out val it = "~?P == ?P --> False" : thm}
by (rewrite_goals_tac [not_def]);
{\out Level 1}
{\out ~P}
{\out  1. P --> False}
\end{ttbox}
Using \tdx{impI} and the premise, we reduce subgoal~1 to a triviality:
\begin{ttbox}
by (resolve_tac [impI] 1);
{\out Level 2}
{\out ~P}
{\out  1. P ==> False}
\ttbreak
by (resolve_tac prems 1);
{\out Level 3}
{\out ~P}
{\out  1. P ==> P}
\end{ttbox}
The rest of the proof is routine.  Note the form of the final result.
\begin{ttbox}
by (assume_tac 1);
{\out Level 4}
{\out ~P}
{\out No subgoals!}
\ttbreak
qed "notI";
{\out val notI = "(?P ==> False) ==> ~?P" : thm}
\end{ttbox}
\indexbold{*notI theorem}

There is a simpler way of conducting this proof.  The \ttindex{Goalw}
command starts a backward proof, as does \texttt{Goal}, but it also
unfolds definitions.  Thus there is no need to call
\ttindex{rewrite_goals_tac}:
\begin{ttbox}
val prems = Goalw [not_def]
    "(P ==> False) ==> ~P";
{\out Level 0}
{\out ~P}
{\out  1. P --> False}
{\out val prems = ["P ==> False  [P ==> False]"] : thm list}
\end{ttbox}


\subsection{Deriving the $\neg$ elimination rule}
Let us derive the rule $(\neg E)$.  The proof follows that of~\texttt{conjE}
above, with an additional step to unfold negation in the major premise.
The \texttt{Goalw} command is best for this: it unfolds definitions not only
in the conclusion but also in the premises.
\begin{ttbox}
Goalw [not_def] "[| ~P; P |] ==> R";
{\out Level 0}
{\out [| ~ P; P |] ==> R}
{\out  1. [| P --> False; P |] ==> R}
\end{ttbox}
As the first step, we apply \tdx{FalseE}:
\begin{ttbox}
by (resolve_tac [FalseE] 1);
{\out Level 1}
{\out [| ~ P; P |] ==> R}
{\out  1. [| P --> False; P |] ==> False}
\end{ttbox}
%
Everything follows from falsity.  And we can prove falsity using the
premises and Modus Ponens:
\begin{ttbox}
by (eresolve_tac [mp] 1);
{\out Level 2}
{\out [| ~ P; P |] ==> R}
{\out  1. P ==> P}
\ttbreak
by (assume_tac 1);
{\out Level 3}
{\out [| ~ P; P |] ==> R}
{\out No subgoals!}
\ttbreak
qed "notE";
{\out val notE = "[| ~?P; ?P |] ==> ?R" : thm}
\end{ttbox}


\medskip
\texttt{Goalw} unfolds definitions in the premises even when it has to return
them as a list.
Another way of unfolding definitions in a theorem is by
applying the function \ttindex{rewrite_rule}.

\index{definitions!and derived rules|)}


\section{Defining theories}\label{sec:defining-theories}
\index{theories!defining|(}

Isabelle makes no distinction between simple extensions of a logic ---
like specifying a type~$bool$ with constants~$true$ and~$false$ ---
and defining an entire logic.  A theory definition has a form like
\begin{ttbox}
\(T\) = \(S@1\) + \(\cdots\) + \(S@n\) +
classes      {\it class declarations}
default      {\it sort}
types        {\it type declarations and synonyms}
arities      {\it type arity declarations}
consts       {\it constant declarations}
syntax       {\it syntactic constant declarations}
translations {\it ast translation rules}
defs         {\it meta-logical definitions}
rules        {\it rule declarations}
end
ML           {\it ML code}
\end{ttbox}
This declares the theory $T$ to extend the existing theories
$S@1$,~\ldots,~$S@n$.  It may introduce new classes, types, arities
(of existing types), constants and rules; it can specify the default
sort for type variables.  A constant declaration can specify an
associated concrete syntax.  The translations section specifies
rewrite rules on abstract syntax trees, handling notations and
abbreviations.  \index{*ML section} The \texttt{ML} section may contain
code to perform arbitrary syntactic transformations.  The main
declaration forms are discussed below.
There are some more sections
not presented here; the full syntax can be found in
\iflabelundefined{app:TheorySyntax}{an appendix of the {\it Reference
    Manual}}{App.\ts\ref{app:TheorySyntax}}.  Also note that
object-logics may add further theory sections, for example
\texttt{typedef} and \texttt{datatype} in HOL.

All the declaration parts can be omitted or repeated and may appear in
any order, except that the {\ML} section must be last (after the {\tt
  end} keyword).  In the simplest case, $T$ is just the union of
$S@1$,~\ldots,~$S@n$.  New theories always extend one or more other
theories, inheriting their types, constants, syntax, etc.  The theory
\thydx{Pure} contains nothing but Isabelle's meta-logic.  The variant
\thydx{CPure} offers the more usual higher-order function application
syntax $t\,u@1\ldots\,u@n$ instead of $t(u@1,\ldots,u@n)$ in Pure.

Each theory definition must reside in a separate file, whose name is
the theory's with {\tt.thy} appended.  Calling
\ttindexbold{use_thy}~{\tt"{\it T\/}"} reads the definition from {\it
  T}{\tt.thy}, writes a corresponding file of {\ML} code {\tt.{\it
  T}.thy.ML}, reads the latter file, and deletes it if no errors
occurred.  This declares the {\ML} structure~$T$, which contains a
component \texttt{thy} denoting the new theory, a component for each
rule, and everything declared in {\it ML code}.

Errors may arise during the translation to {\ML} (say, a misspelled
keyword) or during creation of the new theory (say, a type error in a
rule).
But if all goes well, \texttt{use_thy} will finally read the file
{\it T}{\tt.ML} (if it exists).  This file typically contains proofs
that refer to the components of~$T$.  The structure is automatically
opened, so its components may be referred to by unqualified names,
e.g.\ just \texttt{thy} instead of $T$\texttt{.thy}.

\ttindexbold{use_thy} automatically loads a theory's parents before
loading the theory itself.  When a theory file is modified, many
theories may have to be reloaded.  Isabelle records the modification
times and dependencies of theory files.  See
\iflabelundefined{sec:reloading-theories}{the {\em Reference Manual\/}}%
                                 {\S\ref{sec:reloading-theories}}
for more details.


\subsection{Declaring constants, definitions and rules}
\indexbold{constants!declaring}\index{rules!declaring}

Most theories simply declare constants, definitions and rules.  The {\bf
  constant declaration part} has the form
\begin{ttbox}
consts  \(c@1\) :: \(\tau@1\)
        \vdots
        \(c@n\) :: \(\tau@n\)
\end{ttbox}
where $c@1$, \ldots, $c@n$ are constants and $\tau@1$, \ldots, $\tau@n$ are
types.  The types must be enclosed in quotation marks if they contain
user-declared infix type constructors like \texttt{*}.  Each
constant must be enclosed in quotation marks unless it is a valid
identifier.
To declare $c@1$, \ldots, $c@n$ as constants of type $\tau$,
the $n$ declarations may be abbreviated to a single line:
\begin{ttbox}
        \(c@1\), \ldots, \(c@n\) :: \(\tau\)
\end{ttbox}
The {\bf rule declaration part} has the form
\begin{ttbox}
rules   \(id@1\) "\(rule@1\)"
        \vdots
        \(id@n\) "\(rule@n\)"
\end{ttbox}
where $id@1$, \ldots, $id@n$ are \ML{} identifiers and $rule@1$, \ldots,
$rule@n$ are expressions of type~$prop$.  Each rule {\em must\/} be
enclosed in quotation marks.  Rules are simply axioms; they are
called \emph{rules} because they are mainly used to specify the inference
rules when defining a new logic.

\indexbold{definitions} The {\bf definition part} is similar, but with
the keyword \texttt{defs} instead of \texttt{rules}.  {\bf Definitions} are
rules of the form $s \equiv t$, and should serve only as
abbreviations.  The simplest form of a definition is $f \equiv t$,
where $f$ is a constant.  Also allowed are $\eta$-equivalent forms of
this, where the arguments of~$f$ appear applied on the left-hand side
of the equation instead of abstracted on the right-hand side.

Isabelle checks for common errors in definitions, such as extra
variables on the right-hand side and cyclic dependencies, that could
lead to inconsistency.  It is still essential to take care:
theorems proved on the basis of incorrect definitions are useless;
your system can be consistent and yet still wrong.
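
The two $\eta$-equivalent forms of a definition mentioned above can be
illustrated with a small example (the constant \texttt{id} here is
hypothetical, chosen only for illustration):
\begin{ttbox}
consts  id :: o => o
defs    id_def  "id(P) == P"
\end{ttbox}
Writing \texttt{"id == \%P. P"} instead, with the argument abstracted on the
right-hand side, defines the same constant.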

\index{examples!of theories} This example theory extends first-order
logic by declaring and defining two constants, {\em nand} and {\em
  xor}:
\begin{ttbox}
Gate = FOL +
consts  nand,xor :: [o,o] => o
defs    nand_def "nand(P,Q) == ~(P & Q)"
        xor_def  "xor(P,Q)  == P & ~Q | ~P & Q"
end
\end{ttbox}

Declaring and defining constants can be combined:
\begin{ttbox}
Gate = FOL +
constdefs  nand :: [o,o] => o
           "nand(P,Q) == ~(P & Q)"
           xor  :: [o,o] => o
           "xor(P,Q)  == P & ~Q | ~P & Q"
end
\end{ttbox}
\texttt{constdefs} generates the names \texttt{nand_def} and \texttt{xor_def}
automatically, which is why it is restricted to alphanumeric identifiers.  In
general it has the form
\begin{ttbox}
constdefs  \(id@1\) :: \(\tau@1\)
           "\(id@1 \equiv \dots\)"
           \vdots
           \(id@n\) :: \(\tau@n\)
           "\(id@n \equiv \dots\)"
\end{ttbox}


\begin{warn}
A common mistake when writing definitions is to introduce extra free variables
on the right-hand side as in the following fictitious definition:
\begin{ttbox}
defs  prime_def "prime(p) == (m divides p) --> (m=1 | m=p)"
\end{ttbox}
Isabelle rejects this ``definition'' because of the extra \texttt{m} on the
right-hand side, which would introduce an inconsistency.
What you should have
written is
\begin{ttbox}
defs  prime_def "prime(p) == ALL m. (m divides p) --> (m=1 | m=p)"
\end{ttbox}
\end{warn}

\subsection{Declaring type constructors}
\indexbold{types!declaring}\indexbold{arities!declaring}
%
Types are composed of type variables and {\bf type constructors}.  Each
type constructor takes a fixed number of arguments.  They are declared
with an \ML-like syntax.  If $list$ takes one type argument, $tree$ takes
two arguments and $nat$ takes no arguments, then these type constructors
can be declared by
\begin{ttbox}
types 'a list
      ('a,'b) tree
      nat
\end{ttbox}

The {\bf type declaration part} has the general form
\begin{ttbox}
types   \(tids@1\) \(id@1\)
        \vdots
        \(tids@n\) \(id@n\)
\end{ttbox}
where $id@1$, \ldots, $id@n$ are identifiers and $tids@1$, \ldots, $tids@n$
are type argument lists as shown in the example above.  It declares each
$id@i$ as a type constructor with the specified number of argument places.

The {\bf arity declaration part} has the form
\begin{ttbox}
arities \(tycon@1\) :: \(arity@1\)
        \vdots
        \(tycon@n\) :: \(arity@n\)
\end{ttbox}
where $tycon@1$, \ldots, $tycon@n$ are identifiers and $arity@1$, \ldots,
$arity@n$ are arities.  Arity declarations add arities to existing
types; they do not declare the types themselves.
In the simplest case, for a 0-place type constructor, an arity is simply
the type's class.
Let us declare a type~$bool$ of class $term$, with
constants $tt$ and~$ff$.  (In first-order logic, booleans are
distinct from formulae, which have type $o::logic$.)
\index{examples!of theories}
\begin{ttbox}
Bool = FOL +
types   bool
arities bool    :: term
consts  tt,ff   :: bool
end
\end{ttbox}
A $k$-place type constructor may have arities of the form
$(s@1,\ldots,s@k)c$, where $s@1,\ldots,s@k$ are sorts and $c$ is a class.
Each sort specifies a type argument; it has the form $\{c@1,\ldots,c@m\}$,
where $c@1$, \dots,~$c@m$ are classes.  Mostly we deal with singleton
sorts, and may abbreviate them by dropping the braces.  The arity
$(term)term$ is short for $(\{term\})term$.  Recall the discussion in
\S\ref{polymorphic}.

A type constructor may be overloaded (subject to certain conditions) by
appearing in several arity declarations.  For instance, the function type
constructor~$fun$ has the arity $(logic,logic)logic$; in higher-order
logic, it is declared also to have arity $(term,term)term$.

Theory \texttt{List} declares the 1-place type constructor $list$, gives
it the arity $(term)term$, and declares constants $Nil$ and $Cons$ with
polymorphic types:%
\footnote{In the \texttt{consts} part, type variable {\tt'a} has the default
  sort, which is \texttt{term}.
See the {\em Reference Manual\/}
\iflabelundefined{sec:ref-defining-theories}{}%
{(\S\ref{sec:ref-defining-theories})} for more information.}
\index{examples!of theories}
\begin{ttbox}
List = FOL +
types   'a list
arities list    :: (term)term
consts  Nil     :: 'a list
        Cons    :: ['a, 'a list] => 'a list
end
\end{ttbox}
Multiple arity declarations may be abbreviated to a single line:
\begin{ttbox}
arities \(tycon@1\), \ldots, \(tycon@n\) :: \(arity\)
\end{ttbox}

%\begin{warn}
%Arity declarations resemble constant declarations, but there are {\it no\/}
%quotation marks!  Types and rules must be quoted because the theory
%translator passes them verbatim to the {\ML} output file.
%\end{warn}

\subsection{Type synonyms}\indexbold{type synonyms}
Isabelle supports {\bf type synonyms} ({\bf abbreviations}) which are similar
to those found in \ML.  Such synonyms are defined in the type declaration part
and are fairly self-explanatory:
\begin{ttbox}
types gate       = [o,o] => o
      'a pred    = 'a => o
      ('a,'b)nuf = 'b => 'a
\end{ttbox}
Type declarations and synonyms can be mixed arbitrarily:
\begin{ttbox}
types nat
      'a stream = nat => 'a
      signal    = nat stream
      'a list
\end{ttbox}
A synonym is merely an abbreviation for some existing type expression.
Hence synonyms may not be recursive!  Internally all synonyms are
fully expanded.
As a consequence Isabelle output never contains
synonyms.  Their main purpose is to improve the readability of theory
definitions.  Synonyms can be used just like any other type:
\begin{ttbox}
consts and,or :: gate
       negate :: signal => signal
\end{ttbox}

\subsection{Infix and mixfix operators}
\index{infixes}\index{examples!of theories}

Infix or mixfix syntax may be attached to constants.  Consider the
following theory:
\begin{ttbox}
Gate2 = FOL +
consts  "~&"     :: [o,o] => o         (infixl 35)
        "#"      :: [o,o] => o         (infixl 30)
defs    nand_def "P ~& Q == ~(P & Q)"
        xor_def  "P # Q  == P & ~Q | ~P & Q"
end
\end{ttbox}
The constant declaration part declares two left-associating infix operators
with their priorities, or precedences; they are $\nand$ of priority~35 and
$\xor$ of priority~30.  Hence $P \xor Q \xor R$ is parsed as $(P\xor Q)
\xor R$ and $P \xor Q \nand R$ as $P \xor (Q \nand R)$.  Note the quotation
marks in \verb|"~&"| and \verb|"#"|.

The constants \hbox{\verb|op ~&|} and \hbox{\verb|op #|} are declared
automatically, just as in \ML.  Hence you may write propositions like
\verb|op #(True) == op ~&(True)|, which asserts that the functions $\lambda
Q.True \xor Q$ and $\lambda Q.True \nand Q$ are identical.

\medskip Infix syntax and constant names may also be specified
independently.
For example, consider this version of $\nand$:
\begin{ttbox}
consts  nand     :: [o,o] => o         (infixl "~&" 35)
\end{ttbox}

\bigskip\index{mixfix declarations}
{\bf Mixfix} operators may have arbitrary context-free syntaxes.  Let us
add a line to the constant declaration part:
\begin{ttbox}
        If :: [o,o,o] => o       ("if _ then _ else _")
\end{ttbox}
This declares a constant $If$ of type $[o,o,o] \To o$ with concrete syntax {\tt
  if~$P$ then~$Q$ else~$R$} as well as \texttt{If($P$,$Q$,$R$)}.  Underscores
denote argument positions.

The declaration above does not allow the \texttt{if}-\texttt{then}-{\tt
  else} construct to be printed split across several lines, even if it
is too long to fit on one line.  Pretty-printing information can be
added to specify the layout of mixfix operators.  For details, see
\iflabelundefined{Defining-Logics}%
    {the {\it Reference Manual}, chapter `Defining Logics'}%
    {Chap.\ts\ref{Defining-Logics}}.

Mixfix declarations can be annotated with priorities, just like
infixes.  The example above is just a shorthand for
\begin{ttbox}
        If :: [o,o,o] => o       ("if _ then _ else _" [0,0,0] 1000)
\end{ttbox}
The numeric components determine priorities.  The list of integers
defines, for each argument position, the minimal priority an expression
at that position must have.  The final integer is the priority of the
construct itself.
In the example above, any argument expression is acceptable because priorities are non-negative, and conditionals may appear everywhere because 1000 is the highest priority. On the other hand, the declaration
\begin{ttbox}
        If :: [o,o,o] => o       ("if _ then _ else _" [100,0,0] 99)
\end{ttbox}
defines concrete syntax for a conditional whose first argument cannot have the form \texttt{if~$P$ then~$Q$ else~$R$} because it must have a priority of at least~100. We may of course write
\begin{quote}\tt
if (if $P$ then $Q$ else $R$) then $S$ else $T$
\end{quote}
because expressions in parentheses have maximal priority.

Binary type constructors, like products and sums, may also be declared as infixes. The type declaration below introduces a type constructor~$*$ with infix notation $\alpha*\beta$, together with the mixfix notation ${<}\_,\_{>}$ for pairs. We also see a rule declaration part.
\index{examples!of theories}\index{mixfix declarations}
\begin{ttbox}
Prod = FOL +
types   ('a,'b) "*"                           (infixl 20)
arities "*"     :: (term,term)term
consts  fst     :: "'a * 'b => 'a"
        snd     :: "'a * 'b => 'b"
        Pair    :: "['a,'b] => 'a * 'b"       ("(1<_,/_>)")
rules   fst     "fst(<a,b>) = a"
        snd     "snd(<a,b>) = b"
end
\end{ttbox}

\begin{warn}
  The name of the type constructor is~\texttt{*} and not \texttt{op~*}, as it would be in the case of an infix constant. Only infix type constructors can have symbolic names like~\texttt{*}.
  General mixfix syntax for types may be introduced via appropriate \texttt{syntax} declarations.
\end{warn}


\subsection{Overloading}
\index{overloading}\index{examples!of theories}
The {\bf class declaration part} has the form
\begin{ttbox}
classes $id@1$ < $c@1$
        \vdots
        $id@n$ < $c@n$
\end{ttbox}
where $id@1$, \ldots, $id@n$ are identifiers and $c@1$, \ldots, $c@n$ are existing classes. It declares each $id@i$ as a new class, a subclass of~$c@i$. In the general case, an identifier may be declared to be a subclass of $k$ existing classes:
\begin{ttbox}
        $id$ < $c@1$, \ldots, $c@k$
\end{ttbox}
Type classes allow constants to be overloaded. As suggested in \S\ref{polymorphic}, let us define the class $arith$ of arithmetic types with the constants ${+} :: [\alpha,\alpha]\To \alpha$ and $0,1 {::} \alpha$, for $\alpha{::}arith$. We introduce $arith$ as a subclass of $term$ and add the three polymorphic constants of this class.
\index{examples!of theories}\index{constants!overloaded}
\begin{ttbox}
Arith = FOL +
classes arith < term
consts  "0"     :: 'a::arith                  ("0")
        "1"     :: 'a::arith                  ("1")
        "+"     :: ['a::arith,'a] => 'a       (infixl 60)
end
\end{ttbox}
No rules are declared for these constants: we merely introduce their names without specifying properties. On the other hand, classes with rules make it possible to prove {\bf generic} theorems. Such theorems hold for all instances, all types in that class.

We can now obtain distinct versions of the constants of $arith$ by declaring certain types to be of class $arith$. For example, let us declare the 0-place type constructors $bool$ and $nat$:
\index{examples!of theories}
\begin{ttbox}
BoolNat = Arith +
types   bool nat
arities bool, nat   :: arith
consts  Suc         :: nat=>nat
\ttbreak
rules   add0        "0 + n = n::nat"
        addS        "Suc(m)+n = Suc(m+n)"
        nat1        "1 = Suc(0)"
        or0l        "0 + x = x::bool"
        or0r        "x + 0 = x::bool"
        or1l        "1 + x = 1::bool"
        or1r        "x + 1 = 1::bool"
end
\end{ttbox}
Because $nat$ and $bool$ have class $arith$, we can use $0$, $1$ and $+$ at either type. The type constraints in the axioms are vital. Without constraints, the $x$ in $1+x = 1$ (axiom \texttt{or1l}) would have type $\alpha{::}arith$ and the axiom would hold for any type of class $arith$. This would collapse $nat$ to a trivial type:
$$Suc(1) = Suc(0+1) = Suc(0)+1 = 1+1 = 1!$$


\section{Theory example: the natural numbers}

We shall now work through a small example of formalized mathematics demonstrating many of the theory extension features.


\subsection{Extending first-order logic with the natural numbers}
\index{examples!of theories}

Section\ts\ref{sec:logical-syntax} has formalized a first-order logic, including a type~$nat$ and the constants $0::nat$ and $Suc::nat\To nat$.
Let us introduce the Peano axioms for mathematical induction and the freeness of $0$ and~$Suc$:\index{axioms!Peano}
$$ \vcenter{\infer[(induct)]{P[n/x]}{P[0/x] & \infer*{P[Suc(x)/x]}{[P]}}}
   \qquad \parbox{4.5cm}{provided $x$ is not free in any assumption except~$P$}
$$
$$ \infer[(Suc\_inject)]{m=n}{Suc(m)=Suc(n)} \qquad
   \infer[(Suc\_neq\_0)]{R}{Suc(m)=0}
$$
Mathematical induction asserts that $P(n)$ is true, for any $n::nat$, provided $P(0)$ holds and that $P(x)$ implies $P(Suc(x))$ for all~$x$. Some authors express the induction step as $\forall x. P(x)\imp P(Suc(x))$. To avoid making induction require the presence of other connectives, we formalize mathematical induction as
$$\List{P(0); \Forall x. P(x)\Imp P(Suc(x))} \Imp P(n). \eqno(induct)$$

\noindent
Similarly, to avoid expressing the other rules using~$\forall$, $\imp$ and~$\neg$, we take advantage of the meta-logic;\footnote{On the other hand, the axioms $Suc(m)=Suc(n) \bimp m=n$ and $\neg(Suc(m)=0)$ are logically equivalent to those given, and work better with Isabelle's simplifier.} $(Suc\_neq\_0)$ is an elimination rule for $Suc(m)=0$:
$$Suc(m)=Suc(n) \Imp m=n \eqno(Suc\_inject)$$
$$Suc(m)=0 \Imp R \eqno(Suc\_neq\_0)$$

\noindent
We shall also define a primitive recursion operator, $rec$.
Traditionally, primitive recursion takes a natural number~$a$ and a 2-place function~$f$, and obeys the equations
\begin{eqnarray*}
  rec(0,a,f)      & = & a \\
  rec(Suc(m),a,f) & = & f(m, rec(m,a,f))
\end{eqnarray*}
Addition, defined by $m+n \equiv rec(m,n,\lambda x\,y.Suc(y))$, should satisfy
\begin{eqnarray*}
  0+n      & = & n \\
  Suc(m)+n & = & Suc(m+n)
\end{eqnarray*}
Primitive recursion appears to pose difficulties: first-order logic has no function-valued expressions. We again take advantage of the meta-logic, which does have functions. We also generalise primitive recursion to be polymorphic over any type of class~$term$, and declare the addition function:
\begin{eqnarray*}
  rec & :: & [nat, \alpha{::}term, [nat,\alpha]\To\alpha] \To\alpha \\
  +   & :: & [nat,nat]\To nat
\end{eqnarray*}


\subsection{Declaring the theory to Isabelle}
\index{examples!of theories}
Let us create the theory \thydx{Nat} starting from theory~\verb$FOL$, which contains only classical logic with no natural numbers. We declare the 0-place type constructor $nat$ and the associated constants.
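As an aside, the recursion equations above are easy to check against a direct functional model. The following Python sketch (purely illustrative; it is not part of any Isabelle development, and the names \texttt{rec} and \texttt{add} are ours) implements the operator and defines addition from it, mirroring $m+n \equiv rec(m,n,\lambda x\,y.Suc(y))$ with natural numbers represented as non-negative integers:

```python
# A direct model of the primitive recursion operator rec,
# with natural numbers represented as non-negative Python ints.
def rec(m, a, f):
    # rec(0, a, f)      = a
    # rec(Suc(m), a, f) = f(m, rec(m, a, f))
    if m == 0:
        return a
    return f(m - 1, rec(m - 1, a, f))

def add(m, n):
    # m + n == rec(m, n, %x y. Suc(y)), where Suc(y) is y + 1
    return rec(m, n, lambda x, y: y + 1)
```

Here \texttt{add(0, n)} reduces immediately to \texttt{n}, while \texttt{add(m + 1, n)} unfolds one successor, exactly as the equations $0+n=n$ and $Suc(m)+n=Suc(m+n)$ demand.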
Note that the constant~0 requires a mixfix annotation because~0 is not a legal identifier, and could not otherwise be written in terms:
\begin{ttbox}\index{mixfix declarations}
Nat = FOL +
types   nat
arities nat         :: term
consts  "0"         :: nat                    ("0")
        Suc         :: nat=>nat
        rec         :: [nat, 'a, [nat,'a]=>'a] => 'a
        "+"         :: [nat, nat] => nat      (infixl 60)
rules   Suc_inject  "Suc(m)=Suc(n) ==> m=n"
        Suc_neq_0   "Suc(m)=0      ==> R"
        induct      "[| P(0);  !!x. P(x) ==> P(Suc(x)) |]  ==> P(n)"
        rec_0       "rec(0,a,f) = a"
        rec_Suc     "rec(Suc(m), a, f) = f(m, rec(m,a,f))"
        add_def     "m+n == rec(m, n, \%x y. Suc(y))"
end
\end{ttbox}
In axiom \texttt{add_def}, recall that \verb|%| stands for~$\lambda$. Loading this theory file creates the \ML\ structure \texttt{Nat}, which contains the theory and axioms.

\subsection{Proving some recursion equations}
Theory \texttt{FOL/ex/Nat} contains proofs involving this theory of the natural numbers. As a trivial example, let us derive recursion equations for \verb$+$. Here is the zero case:
\begin{ttbox}
Goalw [add_def] "0+n = n";
{\out Level 0}
{\out 0 + n = n}
{\out  1. rec(0,n,\%x y. Suc(y)) = n}
\ttbreak
by (resolve_tac [rec_0] 1);
{\out Level 1}
{\out 0 + n = n}
{\out No subgoals!}
qed "add_0";
\end{ttbox}
And here is the successor case:
\begin{ttbox}
Goalw [add_def] "Suc(m)+n = Suc(m+n)";
{\out Level 0}
{\out Suc(m) + n = Suc(m + n)}
{\out  1. rec(Suc(m),n,\%x y. Suc(y)) = Suc(rec(m,n,\%x y. Suc(y)))}
\ttbreak
by (resolve_tac [rec_Suc] 1);
{\out Level 1}
{\out Suc(m) + n = Suc(m + n)}
{\out No subgoals!}
qed "add_Suc";
\end{ttbox}
The induction rule raises some complications, which are discussed next.
\index{theories!defining|)}


\section{Refinement with explicit instantiation}
\index{resolution!with instantiation}
\index{instantiation|(}

In order to employ mathematical induction, we need to refine a subgoal by the rule~$(induct)$. The conclusion of this rule is $\Var{P}(\Var{n})$, which is highly ambiguous in higher-order unification. It matches every way that a formula can be regarded as depending on a subterm of type~$nat$. To get round this problem, we could make the induction rule conclude $\forall n.\Var{P}(n)$ --- but putting a subgoal into this form requires refinement by~$(\forall E)$, which is equally hard!

The tactic \texttt{res_inst_tac}, like \texttt{resolve_tac}, refines a subgoal by a rule. But it also accepts explicit instantiations for the rule's schematic variables.
\begin{description}
\item[\ttindex{res_inst_tac} {\it insts} {\it thm} {\it i}]
instantiates the rule {\it thm} with the instantiations {\it insts}, and then performs resolution on subgoal~$i$.

\item[\ttindex{eres_inst_tac}]
and \ttindex{dres_inst_tac} are similar, but perform elim-resolution and destruct-resolution, respectively.
\end{description}
The list {\it insts} consists of pairs $[(v@1,e@1), \ldots, (v@n,e@n)]$, where $v@1$, \ldots, $v@n$ are names of schematic variables in the rule --- with no leading question marks! --- and $e@1$, \ldots, $e@n$ are expressions giving their instantiations. The expressions are type-checked in the context of a particular subgoal: free variables receive the same types as they have in the subgoal, and parameters may appear. Type variable instantiations may appear in~{\it insts}, but they are seldom required: \texttt{res_inst_tac} instantiates type variables automatically whenever the type of~$e@i$ is an instance of the type of~$\Var{v@i}$.

\subsection{A simple proof by induction}
\index{examples!of induction}
Let us prove that no natural number~$k$ equals its own successor. To use~$(induct)$, we instantiate~$\Var{n}$ to~$k$; Isabelle finds a good instantiation for~$\Var{P}$.
\begin{ttbox}
Goal "~ (Suc(k) = k)";
{\out Level 0}
{\out Suc(k) ~= k}
{\out  1. Suc(k) ~= k}
\ttbreak
by (res_inst_tac [("n","k")] induct 1);
{\out Level 1}
{\out Suc(k) ~= k}
{\out  1. Suc(0) ~= 0}
{\out  2. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
\end{ttbox}
We should check that Isabelle has correctly applied induction. Subgoal~1 is the base case, with $k$ replaced by~0. Subgoal~2 is the inductive step, with $k$ replaced by~$Suc(x)$ and with an induction hypothesis for~$x$. The rest of the proof demonstrates~\tdx{notI}, \tdx{notE} and the other rules of theory \texttt{Nat}. The base case holds by~\ttindex{Suc_neq_0}:
\begin{ttbox}
by (resolve_tac [notI] 1);
{\out Level 2}
{\out Suc(k) ~= k}
{\out  1. Suc(0) = 0 ==> False}
{\out  2. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
\ttbreak
by (eresolve_tac [Suc_neq_0] 1);
{\out Level 3}
{\out Suc(k) ~= k}
{\out  1. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
\end{ttbox}
The inductive step holds by the contrapositive of~\ttindex{Suc_inject}. Negation rules transform the subgoal into that of proving $Suc(x)=x$ from $Suc(Suc(x)) = Suc(x)$:
\begin{ttbox}
by (resolve_tac [notI] 1);
{\out Level 4}
{\out Suc(k) ~= k}
{\out  1. !!x. [| Suc(x) ~= x; Suc(Suc(x)) = Suc(x) |] ==> False}
\ttbreak
by (eresolve_tac [notE] 1);
{\out Level 5}
{\out Suc(k) ~= k}
{\out  1. !!x. Suc(Suc(x)) = Suc(x) ==> Suc(x) = x}
\ttbreak
by (eresolve_tac [Suc_inject] 1);
{\out Level 6}
{\out Suc(k) ~= k}
{\out No subgoals!}
\end{ttbox}


\subsection{An example of ambiguity in \texttt{resolve_tac}}
\index{examples!of induction}\index{unification!higher-order}
If you try the example above, you may observe that \texttt{res_inst_tac} is not actually needed. Almost by chance, \ttindex{resolve_tac} finds the right instantiation for~$(induct)$ to yield the desired next state. With more complex formulae, our luck fails.
\begin{ttbox}
Goal "(k+m)+n = k+(m+n)";
{\out Level 0}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + n = k + (m + n)}
\ttbreak
by (resolve_tac [induct] 1);
{\out Level 1}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + n = 0}
{\out  2. !!x. k + m + n = x ==> k + m + n = Suc(x)}
\end{ttbox}
This proof requires induction on~$k$. The occurrence of~0 in subgoal~1 indicates that induction has been applied to the term~$k+(m+n)$; this application is sound but will not lead to a proof here. Fortunately, Isabelle can (lazily!) generate all the valid applications of induction. The \ttindex{back} command causes backtracking to an alternative outcome of the tactic.
\begin{ttbox}
back();
{\out Level 1}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + n = k + 0}
{\out  2. !!x. k + m + n = k + x ==> k + m + n = k + Suc(x)}
\end{ttbox}
Now induction has been applied to~$m+n$. This is equally useless.
Let us call \ttindex{back} again.
\begin{ttbox}
back();
{\out Level 1}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + 0 = k + (m + 0)}
{\out  2. !!x. k + m + x = k + (m + x) ==>}
{\out          k + m + Suc(x) = k + (m + Suc(x))}
\end{ttbox}
Now induction has been applied to~$n$. What is the next alternative?
\begin{ttbox}
back();
{\out Level 1}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + n = k + (m + 0)}
{\out  2. !!x. k + m + n = k + (m + x) ==> k + m + n = k + (m + Suc(x))}
\end{ttbox}
Inspecting subgoal~1 reveals that induction has been applied to just the second occurrence of~$n$. This perfectly legitimate induction is useless here.

The main goal admits fourteen different applications of induction. The number is exponential in the size of the formula.

\subsection{Proving that addition is associative}
Let us invoke the induction rule properly, using~\texttt{res_inst_tac}. At the same time, we shall have a glimpse at Isabelle's simplification tactics, which are described in
\iflabelundefined{simp-chap}%
    {the {\em Reference Manual}}{Chap.\ts\ref{simp-chap}}.

\index{simplification}\index{examples!of simplification}

Isabelle's simplification tactics repeatedly apply equations to a subgoal, perhaps proving it. For efficiency, the rewrite rules must be packaged into a {\bf simplification set},\index{simplification sets} or {\bf simpset}.
We augment the implicit simpset of FOL with the equations proved in the previous section, namely $0+n=n$ and $\texttt{Suc}(m)+n=\texttt{Suc}(m+n)$:
\begin{ttbox}
Addsimps [add_0, add_Suc];
\end{ttbox}
We state the goal for associativity of addition, and use \ttindex{res_inst_tac} to invoke induction on~$k$:
\begin{ttbox}
Goal "(k+m)+n = k+(m+n)";
{\out Level 0}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + n = k + (m + n)}
\ttbreak
by (res_inst_tac [("n","k")] induct 1);
{\out Level 1}
{\out k + m + n = k + (m + n)}
{\out  1. 0 + m + n = 0 + (m + n)}
{\out  2. !!x. x + m + n = x + (m + n) ==>}
{\out          Suc(x) + m + n = Suc(x) + (m + n)}
\end{ttbox}
The base case holds easily; both sides reduce to $m+n$. The tactic~\ttindex{Simp_tac} rewrites with respect to the current simplification set, applying the rewrite rules for addition:
\begin{ttbox}
by (Simp_tac 1);
{\out Level 2}
{\out k + m + n = k + (m + n)}
{\out  1. !!x. x + m + n = x + (m + n) ==>}
{\out          Suc(x) + m + n = Suc(x) + (m + n)}
\end{ttbox}
The inductive step requires rewriting by the equations for addition and with the induction hypothesis, which is also an equation.
The tactic~\ttindex{Asm_simp_tac} rewrites using the implicit simplification set and any useful assumptions:
\begin{ttbox}
by (Asm_simp_tac 1);
{\out Level 3}
{\out k + m + n = k + (m + n)}
{\out No subgoals!}
\end{ttbox}
\index{instantiation|)}


\section{A Prolog interpreter}
\index{Prolog interpreter|bold}
To demonstrate the power of tacticals, let us construct a Prolog interpreter and execute programs involving lists.\footnote{To run these examples, see the file \texttt{FOL/ex/Prolog.ML}.} The Prolog program consists of a theory. We declare a type constructor for lists, with an arity declaration to say that $(\tau)list$ is of class~$term$ provided~$\tau$ is:
\begin{eqnarray*}
  list & :: & (term)term
\end{eqnarray*}
We declare four constants: the empty list~$Nil$; the infix list constructor~{:}; the list concatenation predicate~$app$; the list reverse predicate~$rev$. (In Prolog, functions on lists are expressed as predicates.)
\begin{eqnarray*}
  Nil & :: & \alpha list \\
  {:} & :: & [\alpha,\alpha list] \To \alpha list \\
  app & :: & [\alpha list,\alpha list,\alpha list] \To o \\
  rev & :: & [\alpha list,\alpha list] \To o
\end{eqnarray*}
The predicate $app$ should satisfy the Prolog-style rules
$${app(Nil,ys,ys)} \qquad
  {app(xs,ys,zs) \over app(x:xs, ys, x:zs)}$$
We define the naive version of $rev$, which calls~$app$:
$${rev(Nil,Nil)} \qquad
  {rev(xs,ys)\quad app(ys, x:Nil, zs) \over
   rev(x:xs, zs)}
$$

\index{examples!of theories}
Theory \thydx{Prolog} extends first-order logic in order to make use of the class~$term$ and the type~$o$. The interpreter does not use the rules of~\texttt{FOL}.
\begin{ttbox}
Prolog = FOL +
types   'a list
arities list    :: (term)term
consts  Nil     :: 'a list
        ":"     :: ['a, 'a list]=> 'a list   (infixr 60)
        app     :: ['a list, 'a list, 'a list] => o
        rev     :: ['a list, 'a list] => o
rules   appNil  "app(Nil,ys,ys)"
        appCons "app(xs,ys,zs) ==> app(x:xs, ys, x:zs)"
        revNil  "rev(Nil,Nil)"
        revCons "[| rev(xs,ys); app(ys,x:Nil,zs) |] ==> rev(x:xs,zs)"
end
\end{ttbox}
\subsection{Simple executions}
Repeated application of the rules solves Prolog goals. Let us append the lists $[a,b,c]$ and~$[d,e]$. As the rules are applied, the answer builds up in~\texttt{?x}.
\begin{ttbox}
Goal "app(a:b:c:Nil, d:e:Nil, ?x)";
{\out Level 0}
{\out app(a : b : c : Nil, d : e : Nil, ?x)}
{\out  1. app(a : b : c : Nil, d : e : Nil, ?x)}
\ttbreak
by (resolve_tac [appNil,appCons] 1);
{\out Level 1}
{\out app(a : b : c : Nil, d : e : Nil, a : ?zs1)}
{\out  1. app(b : c : Nil, d : e : Nil, ?zs1)}
\ttbreak
by (resolve_tac [appNil,appCons] 1);
{\out Level 2}
{\out app(a : b : c : Nil, d : e : Nil, a : b : ?zs2)}
{\out  1. app(c : Nil, d : e : Nil, ?zs2)}
\end{ttbox}
At this point, the first two elements of the result are~$a$ and~$b$.
\begin{ttbox}
by (resolve_tac [appNil,appCons] 1);
{\out Level 3}
{\out app(a : b : c : Nil, d : e : Nil, a : b : c : ?zs3)}
{\out  1. app(Nil, d : e : Nil, ?zs3)}
\ttbreak
by (resolve_tac [appNil,appCons] 1);
{\out Level 4}
{\out app(a : b : c : Nil, d : e : Nil, a : b : c : d : e : Nil)}
{\out No subgoals!}
\end{ttbox}

Prolog can run functions backwards. Which list can be appended with $[c,d]$ to produce $[a,b,c,d]$? Using \ttindex{REPEAT}, we find the answer at once, $[a,b]$:
\begin{ttbox}
Goal "app(?x, c:d:Nil, a:b:c:d:Nil)";
{\out Level 0}
{\out app(?x, c : d : Nil, a : b : c : d : Nil)}
{\out  1. app(?x, c : d : Nil, a : b : c : d : Nil)}
\ttbreak
by (REPEAT (resolve_tac [appNil,appCons] 1));
{\out Level 1}
{\out app(a : b : Nil, c : d : Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\end{ttbox}


\subsection{Backtracking}\index{backtracking!Prolog style}
Prolog backtracking can answer questions that have multiple solutions. Which lists $x$ and $y$ can be appended to form the list $[a,b,c,d]$? This question has five solutions. Using \ttindex{REPEAT} to apply the rules, we quickly find the first solution, namely $x=[]$ and $y=[a,b,c,d]$:
\begin{ttbox}
Goal "app(?x, ?y, a:b:c:d:Nil)";
{\out Level 0}
{\out app(?x, ?y, a : b : c : d : Nil)}
{\out  1. app(?x, ?y, a : b : c : d : Nil)}
\ttbreak
by (REPEAT (resolve_tac [appNil,appCons] 1));
{\out Level 1}
{\out app(Nil, a : b : c : d : Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\end{ttbox}
Isabelle can lazily generate all the possibilities. The \ttindex{back} command returns the tactic's next outcome, namely $x=[a]$ and $y=[b,c,d]$:
\begin{ttbox}
back();
{\out Level 1}
{\out app(a : Nil, b : c : d : Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\end{ttbox}
The other solutions are generated similarly.
\begin{ttbox}
back();
{\out Level 1}
{\out app(a : b : Nil, c : d : Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\ttbreak
back();
{\out Level 1}
{\out app(a : b : c : Nil, d : Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\ttbreak
back();
{\out Level 1}
{\out app(a : b : c : d : Nil, Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\end{ttbox}


\subsection{Depth-first search}
\index{search!depth-first}
Now let us try $rev$, reversing a list. Bundle the rules together as the \ML{} identifier \texttt{rules}. Naive reverse requires 120 inferences for this 14-element list, but the tactic terminates in a few seconds.
\begin{ttbox}
Goal "rev(a:b:c:d:e:f:g:h:i:j:k:l:m:n:Nil, ?w)";
{\out Level 0}
{\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil, ?w)}
{\out  1. rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
{\out        ?w)}
\ttbreak
val rules = [appNil,appCons,revNil,revCons];
\ttbreak
by (REPEAT (resolve_tac rules 1));
{\out Level 1}
{\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
{\out     n : m : l : k : j : i : h : g : f : e : d : c : b : a : Nil)}
{\out No subgoals!}
\end{ttbox}
We may execute $rev$ backwards. This, too, should reverse a list. What is the reverse of $[a,b,c]$?
\begin{ttbox}
Goal "rev(?x, a:b:c:Nil)";
{\out Level 0}
{\out rev(?x, a : b : c : Nil)}
{\out  1. rev(?x, a : b : c : Nil)}
\ttbreak
by (REPEAT (resolve_tac rules 1));
{\out Level 1}
{\out rev(?x1 : Nil, a : b : c : Nil)}
{\out  1. app(Nil, ?x1 : Nil, a : b : c : Nil)}
\end{ttbox}
The tactic has failed to find a solution! It reached a dead end at subgoal~1: there is no~$\Var{x@1}$ such that [] appended with~$[\Var{x@1}]$ equals~$[a,b,c]$. Backtracking explores other outcomes.
\begin{ttbox}
back();
{\out Level 1}
{\out rev(?x1 : a : Nil, a : b : c : Nil)}
{\out  1. app(Nil, ?x1 : Nil, b : c : Nil)}
\end{ttbox}
This too is a dead end, but the next outcome is successful.
\begin{ttbox}
back();
{\out Level 1}
{\out rev(c : b : a : Nil, a : b : c : Nil)}
{\out No subgoals!}
\end{ttbox}
\ttindex{REPEAT} goes wrong because it is only a repetition tactical, not a search tactical. \texttt{REPEAT} stops when it cannot continue, regardless of which state is reached. The tactical \ttindex{DEPTH_FIRST} searches for a satisfactory state, as specified by an \ML{} predicate. Below, \ttindex{has_fewer_prems} specifies that the proof state should have no subgoals.
\begin{ttbox}
val prolog_tac = DEPTH_FIRST (has_fewer_prems 1)
                             (resolve_tac rules 1);
\end{ttbox}
Since Prolog uses depth-first search, this tactic is a (slow!) Prolog interpreter.
We return to the start of the proof using \ttindex{choplev}, and apply \texttt{prolog_tac}:
\begin{ttbox}
choplev 0;
{\out Level 0}
{\out rev(?x, a : b : c : Nil)}
{\out  1. rev(?x, a : b : c : Nil)}
\ttbreak
by prolog_tac;
{\out Level 1}
{\out rev(c : b : a : Nil, a : b : c : Nil)}
{\out No subgoals!}
\end{ttbox}
Let us try \texttt{prolog_tac} on one more example, containing four unknowns:
\begin{ttbox}
Goal "rev(a:?x:c:?y:Nil, d:?z:b:?u)";
{\out Level 0}
{\out rev(a : ?x : c : ?y : Nil, d : ?z : b : ?u)}
{\out  1. rev(a : ?x : c : ?y : Nil, d : ?z : b : ?u)}
\ttbreak
by prolog_tac;
{\out Level 1}
{\out rev(a : b : c : d : Nil, d : c : b : a : Nil)}
{\out No subgoals!}
\end{ttbox}
Although Isabelle is much slower than a Prolog system, Isabelle tactics can exploit logic programming techniques.
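As a closing aside, the flavour of this search is easy to mimic in an ordinary programming language. The following Python sketch (purely illustrative; the function names are ours and nothing here is part of Isabelle) enumerates the solutions of $app$ lazily with a generator, trying $appNil$ before $appCons$ just as \texttt{resolve_tac} tries the rules in order, so that asking for the next solution backtracks much like the \texttt{back()} command; $rev$ is then the forward reading of the $revNil$/$revCons$ rules:

```python
# Lists are modelled as Python tuples; app(xs, ys, zs) holds iff xs ++ ys == zs.
def app_solutions(zs):
    # appNil:  app(Nil, ys, ys)
    # appCons: app(xs, ys, zs) ==> app(x:xs, ys, x:zs)
    # Trying appNil before appCons yields solutions in the order
    # that back() produces them in the transcript above.
    yield ((), zs)                       # appNil: xs = Nil, ys = zs
    if zs:                               # appCons: peel off the head x
        x, rest = zs[0], zs[1:]
        for xs, ys in app_solutions(rest):
            yield ((x,) + xs, ys)

def rev_list(xs):
    # revNil:  rev(Nil, Nil)
    # revCons: rev(xs,ys) and app(ys, x:Nil, zs) give rev(x:xs, zs)
    if not xs:
        return ()
    return rev_list(xs[1:]) + (xs[0],)
```

For the list $[a,b,c,d]$, the generator produces exactly the five $(x, y)$ splits found above, starting with $x=[]$, $y=[a,b,c,d]$ and ending with $x=[a,b,c,d]$, $y=[]$.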