doc-src/Intro/document/advanced.tex
changeset 48941 fbf60999dc31
parent 42637 381fdcab0f36
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/Intro/document/advanced.tex	Mon Aug 27 20:50:10 2012 +0200
@@ -0,0 +1,1249 @@
+\part{Advanced Methods}
+Before continuing, it might be wise to try some of your own examples in
+Isabelle, reinforcing your knowledge of the basic functions.
+
+Look through {\em Isabelle's Object-Logics\/} and try proving some
+simple theorems.  You probably should begin with first-order logic
+(\texttt{FOL} or~\texttt{LK}).  Try working some of the examples provided,
+and others from the literature.  Set theory~(\texttt{ZF}) and
+Constructive Type Theory~(\texttt{CTT}) form a richer world for
+mathematical reasoning and, again, many examples are in the
+literature.  Higher-order logic~(\texttt{HOL}) is Isabelle's most
+elaborate logic.  Its types and functions are identified with those of
+the meta-logic.
+
+Choose a logic that you already understand.  Isabelle is a proof
+tool, not a teaching tool; if you do not know how to do a particular proof
+on paper, then you certainly will not be able to do it on the machine.
+Even experienced users plan large proofs on paper.
+
+We have covered only the bare essentials of Isabelle, but enough to perform
+substantial proofs.  By occasionally dipping into the {\em Reference
+Manual}, you can learn additional tactics, subgoal commands and tacticals.
+
+
+\section{Deriving rules in Isabelle}
+\index{rules!derived}
+A mathematical development goes through a progression of stages.  Each
+stage defines some concepts and derives rules about them.  We shall see how
+to derive rules, perhaps involving definitions, using Isabelle.  The
+following section will explain how to declare types, constants, rules and
+definitions.
+
+
+\subsection{Deriving a rule using tactics and meta-level assumptions} 
+\label{deriving-example}
+\index{examples!of deriving rules}\index{assumptions!of main goal}
+
+The subgoal module supports the derivation of rules, as discussed in
+\S\ref{deriving}.  When the \ttindex{Goal} command is supplied a formula of
+the form $\List{\theta@1; \ldots; \theta@k} \Imp \phi$, there are two
+possibilities:
+\begin{itemize}
+\item If all of the premises $\theta@1$, \ldots, $\theta@k$ are simple
+  formulae{} (they do not involve the meta-connectives $\Forall$ or
+  $\Imp$) then the command sets the goal to be 
+  $\List{\theta@1; \ldots; \theta@k} \Imp \phi$ and returns the empty list.
+\item If one or more premises involves the meta-connectives $\Forall$ or
+  $\Imp$, then the command sets the goal to be $\phi$ and returns a list
+  consisting of the theorems ${\theta@i\;[\theta@i]}$, for $i=1$, \ldots,~$k$.
+  These meta-level assumptions are also recorded internally, allowing
+  \texttt{result} (which is called by \texttt{qed}) to discharge them in the
+  original order.
+\end{itemize}
+Rules that discharge assumptions or introduce eigenvariables have complex
+premises, and the second case applies.  In this section, many of the
+theorems are subject to meta-level assumptions, so we make them visible by setting the
+\ttindex{show_hyps} flag:
+\begin{ttbox} 
+set show_hyps;
+{\out val it = true : bool}
+\end{ttbox}
+
+Now, we are ready to derive $\conj$ elimination.  Until now, calling \texttt{Goal} has
+returned an empty list, which we have ignored.  In this example, the list
+contains the two premises of the rule, since one of them involves the $\Imp$
+connective.  We bind them to the \ML\ identifiers \texttt{major} and {\tt
+  minor}:\footnote{Some ML compilers will print a message such as {\em binding
+    not exhaustive}.  This warns that \texttt{Goal} must return a 2-element
+  list.  Otherwise, the pattern-match will fail; ML will raise exception
+  \xdx{Match}.}
+\begin{ttbox}
+val [major,minor] = Goal
+    "[| P&Q;  [| P; Q |] ==> R |] ==> R";
+{\out Level 0}
+{\out R}
+{\out  1. R}
+{\out val major = "P & Q  [P & Q]" : thm}
+{\out val minor = "[| P; Q |] ==> R  [[| P; Q |] ==> R]" : thm}
+\end{ttbox}
+Look at the minor premise, recalling that meta-level assumptions are
+shown in brackets.  Using \texttt{minor}, we reduce $R$ to the subgoals
+$P$ and~$Q$:
+\begin{ttbox}
+by (resolve_tac [minor] 1);
+{\out Level 1}
+{\out R}
+{\out  1. P}
+{\out  2. Q}
+\end{ttbox}
+Deviating from~\S\ref{deriving}, we apply $({\conj}E1)$ forwards from the
+assumption $P\conj Q$ to obtain the theorem~$P\;[P\conj Q]$.
+\begin{ttbox}
+major RS conjunct1;
+{\out val it = "P  [P & Q]" : thm}
+\ttbreak
+by (resolve_tac [major RS conjunct1] 1);
+{\out Level 2}
+{\out R}
+{\out  1. Q}
+\end{ttbox}
+Similarly, we solve the subgoal involving~$Q$.
+\begin{ttbox}
+major RS conjunct2;
+{\out val it = "Q  [P & Q]" : thm}
+by (resolve_tac [major RS conjunct2] 1);
+{\out Level 3}
+{\out R}
+{\out No subgoals!}
+\end{ttbox}
+Calling \ttindex{topthm} returns the current proof state as a theorem.
+Note that it contains assumptions.  Calling \ttindex{qed} discharges
+the assumptions --- both occurrences of $P\conj Q$ are discharged as
+one --- and makes the variables schematic.
+\begin{ttbox}
+topthm();
+{\out val it = "R  [P & Q, P & Q, [| P; Q |] ==> R]" : thm}
+qed "conjE";
+{\out val conjE = "[| ?P & ?Q; [| ?P; ?Q |] ==> ?R |] ==> ?R" : thm}
+\end{ttbox}
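+The derived rule can now be used like any primitive rule.  As a quick
+check (our own example, not from the manual), here is a sketch of a
+proof of commutativity of conjunction that eliminates the conjunction
+using \texttt{conjE}; the theorem name \texttt{conj_swap} is our choice:
+\begin{ttbox}
+Goal "P & Q ==> Q & P";
+by (eresolve_tac [conjE] 1);
+by (resolve_tac [conjI] 1);
+by (assume_tac 1);
+by (assume_tac 1);
+qed "conj_swap";
+\end{ttbox}
+Elim-resolution with \texttt{conjE} replaces the assumption $P\conj Q$ by
+the two assumptions $P$ and~$Q$, after which \tdx{conjI} and two
+assumption steps finish the proof.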
+
+
+\subsection{Definitions and derived rules} \label{definitions}
+\index{rules!derived}\index{definitions!and derived rules|(}
+
+Definitions are expressed as meta-level equalities.  Let us define negation
+and the if-and-only-if connective:
+\begin{eqnarray*}
+  \neg \Var{P}          & \equiv & \Var{P}\imp\bot \\
+  \Var{P}\bimp \Var{Q}  & \equiv & 
+                (\Var{P}\imp \Var{Q}) \conj (\Var{Q}\imp \Var{P})
+\end{eqnarray*}
+\index{meta-rewriting}%
+Isabelle permits {\bf meta-level rewriting} using definitions such as
+these.  {\bf Unfolding} replaces every instance
+of $\neg \Var{P}$ by the corresponding instance of ${\Var{P}\imp\bot}$.  For
+example, $\forall x.\neg (P(x)\conj \neg R(x,0))$ unfolds to
+\[ \forall x.(P(x)\conj R(x,0)\imp\bot)\imp\bot.  \]
+{\bf Folding} a definition replaces occurrences of the right-hand side by
+the left.  The occurrences need not be free in the entire formula.
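+For example, once negation is defined (the theorem \texttt{not_def} of
+the next subsection), occurrences of $\Var{P}\imp\bot$ in the subgoals
+could be folded back to~$\neg\Var{P}$ using the standard tactic
+\ttindex{fold_goals_tac}:
+\begin{ttbox}
+by (fold_goals_tac [not_def]);
+\end{ttbox}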
+
+When you define new concepts, you should derive rules asserting their
+abstract properties, and then forget their definitions.  This supports
+modularity: if you later change the definitions without affecting their
+abstract properties, then most of your proofs will carry through without
+change.  Indiscriminate unfolding makes a subgoal grow exponentially,
+becoming unreadable.
+
+Taking this point of view, Isabelle does not unfold definitions
+automatically during proofs.  Rewriting must be explicit and selective.
+Isabelle provides tactics and meta-rules for rewriting, and a version of
+the \texttt{Goal} command that unfolds the conclusion and premises of the rule
+being derived.
+
+For example, the intuitionistic definition of negation given above may seem
+peculiar.  Using Isabelle, we shall derive pleasanter negation rules:
+\[  \infer[({\neg}I)]{\neg P}{\infer*{\bot}{[P]}}   \qquad
+    \infer[({\neg}E)]{Q}{\neg P & P}  \]
+This requires proving the following meta-formulae:
+$$ (P\Imp\bot)    \Imp \neg P   \eqno(\neg I) $$
+$$ \List{\neg P; P} \Imp Q.       \eqno(\neg E) $$
+
+
+\subsection{Deriving the $\neg$ introduction rule}
+To derive $(\neg I)$, we may call \texttt{Goal} with the appropriate formula.
+Again, the rule's premises involve a meta-connective, and \texttt{Goal}
+returns a one-element list.  We bind this list to the \ML\ identifier \texttt{prems}.
+\begin{ttbox}
+val prems = Goal "(P ==> False) ==> ~P";
+{\out Level 0}
+{\out ~P}
+{\out  1. ~P}
+{\out val prems = ["P ==> False  [P ==> False]"] : thm list}
+\end{ttbox}
+Calling \ttindex{rewrite_goals_tac} with \tdx{not_def}, which is the
+definition of negation, unfolds that definition in the subgoals.  It leaves
+the main goal alone.
+\begin{ttbox}
+not_def;
+{\out val it = "~?P == ?P --> False" : thm}
+by (rewrite_goals_tac [not_def]);
+{\out Level 1}
+{\out ~P}
+{\out  1. P --> False}
+\end{ttbox}
+Using \tdx{impI} and the premise, we reduce subgoal~1 to a triviality:
+\begin{ttbox}
+by (resolve_tac [impI] 1);
+{\out Level 2}
+{\out ~P}
+{\out  1. P ==> False}
+\ttbreak
+by (resolve_tac prems 1);
+{\out Level 3}
+{\out ~P}
+{\out  1. P ==> P}
+\end{ttbox}
+The rest of the proof is routine.  Note the form of the final result.
+\begin{ttbox}
+by (assume_tac 1);
+{\out Level 4}
+{\out ~P}
+{\out No subgoals!}
+\ttbreak
+qed "notI";
+{\out val notI = "(?P ==> False) ==> ~?P" : thm}
+\end{ttbox}
+\indexbold{*notI theorem}
+
+There is a simpler way of conducting this proof.  The \ttindex{Goalw}
+command starts a backward proof, as does \texttt{Goal}, but it also
+unfolds definitions.  Thus there is no need to call
+\ttindex{rewrite_goals_tac}:
+\begin{ttbox}
+val prems = Goalw [not_def]
+    "(P ==> False) ==> ~P";
+{\out Level 0}
+{\out ~P}
+{\out  1. P --> False}
+{\out val prems = ["P ==> False  [P ==> False]"] : thm list}
+\end{ttbox}
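+From this point the proof concludes with the same steps as before, minus
+the rewriting step:
+\begin{ttbox}
+by (resolve_tac [impI] 1);
+by (resolve_tac prems 1);
+by (assume_tac 1);
+qed "notI";
+\end{ttbox}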
+
+
+\subsection{Deriving the $\neg$ elimination rule}
+Let us derive the rule $(\neg E)$.  The proof follows that of~\texttt{conjE}
+above, with an additional step to unfold negation in the major premise.
+The \texttt{Goalw} command is best for this: it unfolds definitions not only
+in the conclusion but also in the premises.
+\begin{ttbox}
+Goalw [not_def] "[| ~P;  P |] ==> R";
+{\out Level 0}
+{\out [| ~ P; P |] ==> R}
+{\out  1. [| P --> False; P |] ==> R}
+\end{ttbox}
+As the first step, we apply \tdx{FalseE}:
+\begin{ttbox}
+by (resolve_tac [FalseE] 1);
+{\out Level 1}
+{\out [| ~ P; P |] ==> R}
+{\out  1. [| P --> False; P |] ==> False}
+\end{ttbox}
+%
+Everything follows from falsity.  And we can prove falsity using the
+premises and Modus Ponens:
+\begin{ttbox}
+by (eresolve_tac [mp] 1);
+{\out Level 2}
+{\out [| ~ P; P |] ==> R}
+{\out  1. P ==> P}
+\ttbreak
+by (assume_tac 1);
+{\out Level 3}
+{\out [| ~ P; P |] ==> R}
+{\out No subgoals!}
+\ttbreak
+qed "notE";
+{\out val notE = "[| ~?P; ?P |] ==> ?R" : thm}
+\end{ttbox}
+
+
+\medskip
+\texttt{Goalw} unfolds definitions in the premises even when it has to return
+them as a list.  Another way of unfolding definitions in a theorem is by
+applying the function \ttindex{rewrite_rule}.
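+For example (a sketch, not taken from the examples files),
+\begin{ttbox}
+rewrite_rule [not_def] notE;
+\end{ttbox}
+should return a version of \texttt{notE} with the negation unfolded,
+namely \verb![| ?P --> False; ?P |] ==> ?R!.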
+
+\index{definitions!and derived rules|)}
+
+
+\section{Defining theories}\label{sec:defining-theories}
+\index{theories!defining|(}
+
+Isabelle makes no distinction between simple extensions of a logic ---
+like specifying a type~$bool$ with constants~$true$ and~$false$ ---
+and defining an entire logic.  A theory definition has a form like
+\begin{ttbox}
+\(T\) = \(S@1\) + \(\cdots\) + \(S@n\) +
+classes      {\it class declarations}
+default      {\it sort}
+types        {\it type declarations and synonyms}
+arities      {\it type arity declarations}
+consts       {\it constant declarations}
+syntax       {\it syntactic constant declarations}
+translations {\it ast translation rules}
+defs         {\it meta-logical definitions}
+rules        {\it rule declarations}
+end
+ML           {\it ML code}
+\end{ttbox}
+This declares the theory $T$ to extend the existing theories
+$S@1$,~\ldots,~$S@n$.  It may introduce new classes, types, arities
+(of existing types), constants and rules; it can specify the default
+sort for type variables.  A constant declaration can specify an
+associated concrete syntax.  The translations section specifies
+rewrite rules on abstract syntax trees, handling notations and
+abbreviations.  \index{*ML section} The \texttt{ML} section may contain
+code to perform arbitrary syntactic transformations.  The main
+declaration forms are discussed below.  There are further sections
+not presented here; the full syntax can be found in
+\iflabelundefined{app:TheorySyntax}{an appendix of the {\it Reference
+    Manual}}{App.\ts\ref{app:TheorySyntax}}.  Also note that
+object-logics may add further theory sections, for example
+\texttt{typedef}, \texttt{datatype} in HOL.
+
+All the declaration parts can be omitted or repeated and may appear in
+any order, except that the {\ML} section must be last (after the {\tt
+  end} keyword).  In the simplest case, $T$ is just the union of
+$S@1$,~\ldots,~$S@n$.  New theories always extend one or more other
+theories, inheriting their types, constants, syntax, etc.  The theory
+\thydx{Pure} contains nothing but Isabelle's meta-logic.  The variant
+\thydx{CPure} offers the more usual higher-order function application
+syntax $t\,u@1\ldots\,u@n$ instead of $t(u@1,\ldots,u@n)$ in Pure.
+
+Each theory definition must reside in a separate file, whose name is
+the theory's with {\tt.thy} appended.  Calling
+\ttindexbold{use_thy}~{\tt"{\it T\/}"} reads the definition from {\it
+  T}{\tt.thy}, writes a corresponding file of {\ML} code {\tt.{\it
+    T}.thy.ML}, reads the latter file, and deletes it if no errors
+occurred.  This declares the {\ML} structure~$T$, which contains a
+component \texttt{thy} denoting the new theory, a component for each
+rule, and everything declared in {\it ML code}.
+
+Errors may arise during the translation to {\ML} (say, a misspelled
+keyword) or during creation of the new theory (say, a type error in a
+rule).  But if all goes well, \texttt{use_thy} will finally read the file
+{\it T}{\tt.ML} (if it exists).  This file typically contains proofs
+that refer to the components of~$T$.  The structure is automatically
+opened, so its components may be referred to by unqualified names,
+e.g.\ just \texttt{thy} instead of $T$\texttt{.thy}.
+
+\ttindexbold{use_thy} automatically loads a theory's parents before
+loading the theory itself.  When a theory file is modified, many
+theories may have to be reloaded.  Isabelle records the modification
+times and dependencies of theory files.  See
+\iflabelundefined{sec:reloading-theories}{the {\em Reference Manual\/}}%
+                 {\S\ref{sec:reloading-theories}}
+for more details.
+
+
+\subsection{Declaring constants, definitions and rules}
+\indexbold{constants!declaring}\index{rules!declaring}
+
+Most theories simply declare constants, definitions and rules.  The {\bf
+  constant declaration part} has the form
+\begin{ttbox}
+consts  \(c@1\) :: \(\tau@1\)
+        \vdots
+        \(c@n\) :: \(\tau@n\)
+\end{ttbox}
+where $c@1$, \ldots, $c@n$ are constants and $\tau@1$, \ldots, $\tau@n$ are
+types.  The types must be enclosed in quotation marks if they contain
+user-declared infix type constructors like \texttt{*}.  Each
+constant must be enclosed in quotation marks unless it is a valid
+identifier.  To declare $c@1$, \ldots, $c@n$ as constants of type $\tau$,
+the $n$ declarations may be abbreviated to a single line:
+\begin{ttbox}
+        \(c@1\), \ldots, \(c@n\) :: \(\tau\)
+\end{ttbox}
+The {\bf rule declaration part} has the form
+\begin{ttbox}
+rules   \(id@1\) "\(rule@1\)"
+        \vdots
+        \(id@n\) "\(rule@n\)"
+\end{ttbox}
+where $id@1$, \ldots, $id@n$ are \ML{} identifiers and $rule@1$, \ldots,
+$rule@n$ are expressions of type~$prop$.  Each rule {\em must\/} be
+enclosed in quotation marks.  Rules are simply axioms; they are 
+called \emph{rules} because they are mainly used to specify the inference
+rules when defining a new logic.
+
+\indexbold{definitions} The {\bf definition part} is similar, but with
+the keyword \texttt{defs} instead of \texttt{rules}.  {\bf Definitions} are
+rules of the form $s \equiv t$, and should serve only as
+abbreviations.  The simplest form of a definition is $f \equiv t$,
+where $f$ is a constant.  Also allowed are $\eta$-equivalent forms of
+this, where the arguments of~$f$ appear applied on the left-hand side
+of the equation instead of abstracted on the right-hand side.
+
+Isabelle checks for common errors in definitions, such as extra
+variables on the right-hand side and cyclic dependencies, that could
+lead to inconsistency.  It is still essential to take care:
+theorems proved on the basis of incorrect definitions are useless, and
+your system can be consistent and yet still wrong.
+
+\index{examples!of theories} This example theory extends first-order
+logic by declaring and defining two constants, {\em nand} and {\em
+  xor}:
+\begin{ttbox}
+Gate = FOL +
+consts  nand,xor :: [o,o] => o
+defs    nand_def "nand(P,Q) == ~(P & Q)"
+        xor_def  "xor(P,Q)  == P & ~Q | ~P & Q"
+end
+\end{ttbox}
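+Once this theory is loaded, its definitions can be used in proofs.  For
+instance, here is a sketch (our own example) of deriving a trivial
+consequence of \texttt{nand_def} by unfolding it with \texttt{Goalw};
+the theorem name \texttt{nandD} is our choice:
+\begin{ttbox}
+Goalw [nand_def] "nand(P,Q) ==> ~(P & Q)";
+by (assume_tac 1);
+qed "nandD";
+\end{ttbox}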
+
+Declaring and defining constants can be combined:
+\begin{ttbox}
+Gate = FOL +
+constdefs  nand :: [o,o] => o
+           "nand(P,Q) == ~(P & Q)"
+           xor  :: [o,o] => o
+           "xor(P,Q)  == P & ~Q | ~P & Q"
+end
+\end{ttbox}
+\texttt{constdefs} generates the names \texttt{nand_def} and \texttt{xor_def}
+automatically, which is why it is restricted to alphanumeric identifiers.  In
+general it has the form
+\begin{ttbox}
+constdefs  \(id@1\) :: \(\tau@1\)
+           "\(id@1 \equiv \dots\)"
+           \vdots
+           \(id@n\) :: \(\tau@n\)
+           "\(id@n \equiv \dots\)"
+\end{ttbox}
+
+
+\begin{warn}
+A common mistake when writing definitions is to introduce extra free variables
+on the right-hand side as in the following fictitious definition:
+\begin{ttbox}
+defs  prime_def "prime(p) == (m divides p) --> (m=1 | m=p)"
+\end{ttbox}
+Isabelle rejects this ``definition'' because of the extra \texttt{m} on the
+right-hand side, which would introduce an inconsistency.  What you should have
+written is
+\begin{ttbox}
+defs  prime_def "prime(p) == ALL m. (m divides p) --> (m=1 | m=p)"
+\end{ttbox}
+\end{warn}
+
+\subsection{Declaring type constructors}
+\indexbold{types!declaring}\indexbold{arities!declaring}
+%
+Types are composed of type variables and {\bf type constructors}.  Each
+type constructor takes a fixed number of arguments.  They are declared
+with an \ML-like syntax.  If $list$ takes one type argument, $tree$ takes
+two arguments and $nat$ takes no arguments, then these type constructors
+can be declared by
+\begin{ttbox}
+types 'a list
+      ('a,'b) tree
+      nat
+\end{ttbox}
+
+The {\bf type declaration part} has the general form
+\begin{ttbox}
+types   \(tids@1\) \(id@1\)
+        \vdots
+        \(tids@n\) \(id@n\)
+\end{ttbox}
+where $id@1$, \ldots, $id@n$ are identifiers and $tids@1$, \ldots, $tids@n$
+are type argument lists as shown in the example above.  It declares each
+$id@i$ as a type constructor with the specified number of argument places.
+
+The {\bf arity declaration part} has the form
+\begin{ttbox}
+arities \(tycon@1\) :: \(arity@1\)
+        \vdots
+        \(tycon@n\) :: \(arity@n\)
+\end{ttbox}
+where $tycon@1$, \ldots, $tycon@n$ are identifiers and $arity@1$, \ldots,
+$arity@n$ are arities.  Arity declarations add arities to existing
+types; they do not declare the types themselves.
+In the simplest case, for a 0-place type constructor, an arity is simply
+the type's class.  Let us declare a type~$bool$ of class $term$, with
+constants $tt$ and~$ff$.  (In first-order logic, booleans are
+distinct from formulae, which have type $o::logic$.)
+\index{examples!of theories}
+\begin{ttbox}
+Bool = FOL +
+types   bool
+arities bool    :: term
+consts  tt,ff   :: bool
+end
+\end{ttbox}
+A $k$-place type constructor may have arities of the form
+$(s@1,\ldots,s@k)c$, where $s@1,\ldots,s@k$ are sorts and $c$ is a class.
+Each sort specifies a type argument; it has the form $\{c@1,\ldots,c@m\}$,
+where $c@1$, \dots,~$c@m$ are classes.  Mostly we deal with singleton
+sorts, and may abbreviate them by dropping the braces.  The arity
+$(term)term$ is short for $(\{term\})term$.  Recall the discussion in
+\S\ref{polymorphic}.
+
+A type constructor may be overloaded (subject to certain conditions) by
+appearing in several arity declarations.  For instance, the function type
+constructor~$fun$ has the arity $(logic,logic)logic$; in higher-order
+logic, it is declared also to have arity $(term,term)term$.
+
+Theory \texttt{List} declares the 1-place type constructor $list$, gives
+it the arity $(term)term$, and declares constants $Nil$ and $Cons$ with
+polymorphic types:%
+\footnote{In the \texttt{consts} part, type variable {\tt'a} has the default
+  sort, which is \texttt{term}.  See the {\em Reference Manual\/}
+\iflabelundefined{sec:ref-defining-theories}{}%
+{(\S\ref{sec:ref-defining-theories})} for more information.}
+\index{examples!of theories}
+\begin{ttbox}
+List = FOL +
+types   'a list
+arities list    :: (term)term
+consts  Nil     :: 'a list
+        Cons    :: ['a, 'a list] => 'a list
+end
+\end{ttbox}
+Multiple arity declarations may be abbreviated to a single line:
+\begin{ttbox}
+arities \(tycon@1\), \ldots, \(tycon@n\) :: \(arity\)
+\end{ttbox}
+
+%\begin{warn}
+%Arity declarations resemble constant declarations, but there are {\it no\/}
+%quotation marks!  Types and rules must be quoted because the theory
+%translator passes them verbatim to the {\ML} output file.
+%\end{warn}
+
+\subsection{Type synonyms}\indexbold{type synonyms}
+Isabelle supports {\bf type synonyms} ({\bf abbreviations}) which are similar
+to those found in \ML.  Such synonyms are defined in the type declaration part
+and are fairly self-explanatory:
+\begin{ttbox}
+types gate       = [o,o] => o
+      'a pred    = 'a => o
+      ('a,'b)nuf = 'b => 'a
+\end{ttbox}
+Type declarations and synonyms can be mixed arbitrarily:
+\begin{ttbox}
+types nat
+      'a stream = nat => 'a
+      signal    = nat stream
+      'a list
+\end{ttbox}
+A synonym is merely an abbreviation for some existing type expression.
+Hence synonyms may not be recursive!  Internally all synonyms are
+fully expanded.  As a consequence Isabelle output never contains
+synonyms.  Their main purpose is to improve the readability of theory
+definitions.  Synonyms can be used just like any other type:
+\begin{ttbox}
+consts and,or :: gate
+       negate :: signal => signal
+\end{ttbox}
+
+\subsection{Infix and mixfix operators}
+\index{infixes}\index{examples!of theories}
+
+Infix or mixfix syntax may be attached to constants.  Consider the
+following theory:
+\begin{ttbox}
+Gate2 = FOL +
+consts  "~&"     :: [o,o] => o         (infixl 35)
+        "#"      :: [o,o] => o         (infixl 30)
+defs    nand_def "P ~& Q == ~(P & Q)"    
+        xor_def  "P # Q  == P & ~Q | ~P & Q"
+end
+\end{ttbox}
+The constant declaration part declares two left-associating infix operators
+with their priorities, or precedences; they are $\nand$ of priority~35 and
+$\xor$ of priority~30.  Hence $P \xor Q \xor R$ is parsed as $(P\xor Q)
+\xor R$ and $P \xor Q \nand R$ as $P \xor (Q \nand R)$.  Note the quotation
+marks in \verb|"~&"| and \verb|"#"|.
+
+The constants \hbox{\verb|op ~&|} and \hbox{\verb|op #|} are declared
+automatically, just as in \ML.  Hence you may write propositions like
+\verb|op #(True) == op ~&(True)|, which asserts that the functions $\lambda
+Q.True \xor Q$ and $\lambda Q.True \nand Q$ are identical.
+
+\medskip Infix syntax and constant names may also be specified
+independently.  For example, consider this version of $\nand$:
+\begin{ttbox}
+consts  nand     :: [o,o] => o         (infixl "~&" 35)
+\end{ttbox}
+
+\bigskip\index{mixfix declarations}
+{\bf Mixfix} operators may have arbitrary context-free syntaxes.  Let us
+add a line to the constant declaration part:
+\begin{ttbox}
+        If :: [o,o,o] => o       ("if _ then _ else _")
+\end{ttbox}
+This declares a constant $If$ of type $[o,o,o] \To o$ with concrete syntax {\tt
+  if~$P$ then~$Q$ else~$R$} as well as \texttt{If($P$,$Q$,$R$)}.  Underscores
+denote argument positions.  
+
+The declaration above does not allow the \texttt{if}-\texttt{then}-{\tt
+  else} construct to be printed split across several lines, even if it
+is too long to fit on one line.  Pretty-printing information can be
+added to specify the layout of mixfix operators.  For details, see
+\iflabelundefined{Defining-Logics}%
+    {the {\it Reference Manual}, chapter `Defining Logics'}%
+    {Chap.\ts\ref{Defining-Logics}}.
+
+Mixfix declarations can be annotated with priorities, just like
+infixes.  The example above is just a shorthand for
+\begin{ttbox}
+        If :: [o,o,o] => o       ("if _ then _ else _" [0,0,0] 1000)
+\end{ttbox}
+The numeric components determine priorities.  The list of integers
+defines, for each argument position, the minimal priority an expression
+at that position must have.  The final integer is the priority of the
+construct itself.  In the example above, any argument expression is
+acceptable because priorities are non-negative, and conditionals may
+appear everywhere because 1000 is the highest priority.  On the other
+hand, the declaration
+\begin{ttbox}
+        If :: [o,o,o] => o       ("if _ then _ else _" [100,0,0] 99)
+\end{ttbox}
+defines concrete syntax for a conditional whose first argument cannot have
+the form \texttt{if~$P$ then~$Q$ else~$R$} because it must have a priority
+of at least~100.  We may of course write
+\begin{quote}\tt
+if (if $P$ then $Q$ else $R$) then $S$ else $T$
+\end{quote}
+because expressions in parentheses have maximal priority.  
+
+Binary type constructors, like products and sums, may also be declared as
+infixes.  The type declaration below introduces a type constructor~$*$ with
+infix notation $\alpha*\beta$, together with the mixfix notation
+${<}\_,\_{>}$ for pairs.  We also see a rule declaration part.
+\index{examples!of theories}\index{mixfix declarations}
+\begin{ttbox}
+Prod = FOL +
+types   ('a,'b) "*"                           (infixl 20)
+arities "*"     :: (term,term)term
+consts  fst     :: "'a * 'b => 'a"
+        snd     :: "'a * 'b => 'b"
+        Pair    :: "['a,'b] => 'a * 'b"       ("(1<_,/_>)")
+rules   fst     "fst(<a,b>) = a"
+        snd     "snd(<a,b>) = b"
+end
+\end{ttbox}
+
+\begin{warn}
+  The name of the type constructor is~\texttt{*} and not \texttt{op~*}, as
+  it would be in the case of an infix constant.  Only infix type
+  constructors can have symbolic names like~\texttt{*}.  General mixfix
+  syntax for types may be introduced via appropriate \texttt{syntax}
+  declarations.
+\end{warn}
+
+
+\subsection{Overloading}
+\index{overloading}\index{examples!of theories}
+The {\bf class declaration part} has the form
+\begin{ttbox}
+classes \(id@1\) < \(c@1\)
+        \vdots
+        \(id@n\) < \(c@n\)
+\end{ttbox}
+where $id@1$, \ldots, $id@n$ are identifiers and $c@1$, \ldots, $c@n$ are
+existing classes.  It declares each $id@i$ as a new class, a subclass
+of~$c@i$.  In the general case, an identifier may be declared to be a
+subclass of $k$ existing classes:
+\begin{ttbox}
+        \(id\) < \(c@1\), \ldots, \(c@k\)
+\end{ttbox}
+Type classes allow constants to be overloaded.  As suggested in
+\S\ref{polymorphic}, let us define the class $arith$ of arithmetic
+types with the constants ${+} :: [\alpha,\alpha]\To \alpha$ and $0,1 {::}
+\alpha$, for $\alpha{::}arith$.  We introduce $arith$ as a subclass of
+$term$ and add the three polymorphic constants of this class.
+\index{examples!of theories}\index{constants!overloaded}
+\begin{ttbox}
+Arith = FOL +
+classes arith < term
+consts  "0"     :: 'a::arith                  ("0")
+        "1"     :: 'a::arith                  ("1")
+        "+"     :: ['a::arith,'a] => 'a       (infixl 60)
+end
+\end{ttbox}
+No rules are declared for these constants: we merely introduce their
+names without specifying properties.  On the other hand, classes
+with rules make it possible to prove {\bf generic} theorems.  Such
+theorems hold for all instances, that is, for all types in that class.
+
+We can now obtain distinct versions of the constants of $arith$ by
+declaring certain types to be of class $arith$.  For example, let us
+declare the 0-place type constructors $bool$ and $nat$:
+\index{examples!of theories}
+\begin{ttbox}
+BoolNat = Arith +
+types   bool  nat
+arities bool, nat   :: arith
+consts  Suc         :: nat=>nat
+\ttbreak
+rules   add0        "0 + n = n::nat"
+        addS        "Suc(m)+n = Suc(m+n)"
+        nat1        "1 = Suc(0)"
+        or0l        "0 + x = x::bool"
+        or0r        "x + 0 = x::bool"
+        or1l        "1 + x = 1::bool"
+        or1r        "x + 1 = 1::bool"
+end
+\end{ttbox}
+Because $nat$ and $bool$ have class $arith$, we can use $0$, $1$ and $+$ at
+either type.  The type constraints in the axioms are vital.  Without
+constraints, the $x$ in $1+x = 1$ (axiom \texttt{or1l})
+would have type $\alpha{::}arith$
+and the axiom would hold for any type of class $arith$.  This would
+collapse $nat$ to a trivial type:
+\[ Suc(1) = Suc(0+1) = Suc(0)+1 = 1+1 = 1! \]
+
+
+\section{Theory example: the natural numbers}
+
+We shall now work through a small example of formalized mathematics
+demonstrating many of the theory extension features.
+
+
+\subsection{Extending first-order logic with the natural numbers}
+\index{examples!of theories}
+
+Section\ts\ref{sec:logical-syntax} has formalized a first-order logic,
+including a type~$nat$ and the constants $0::nat$ and $Suc::nat\To nat$.
+Let us introduce the Peano axioms for mathematical induction and the
+freeness of $0$ and~$Suc$:\index{axioms!Peano}
+\[ \vcenter{\infer[(induct)]{P[n/x]}{P[0/x] & \infer*{P[Suc(x)/x]}{[P]}}}
+ \qquad \parbox{4.5cm}{provided $x$ is not free in any assumption except~$P$}
+\]
+\[ \infer[(Suc\_inject)]{m=n}{Suc(m)=Suc(n)} \qquad
+   \infer[(Suc\_neq\_0)]{R}{Suc(m)=0}
+\]
+Mathematical induction asserts that $P(n)$ is true, for any $n::nat$,
+provided $P(0)$ holds and that $P(x)$ implies $P(Suc(x))$ for all~$x$.
+Some authors express the induction step as $\forall x. P(x)\imp P(Suc(x))$.
+To avoid making induction require the presence of other connectives, we
+formalize mathematical induction as
+$$ \List{P(0); \Forall x. P(x)\Imp P(Suc(x))} \Imp P(n). \eqno(induct) $$
+
+\noindent
+Similarly, to avoid expressing the other rules using~$\forall$, $\imp$
+and~$\neg$, we take advantage of the meta-logic;\footnote
+{On the other hand, the axioms $Suc(m)=Suc(n) \bimp m=n$
+and $\neg(Suc(m)=0)$ are logically equivalent to those given, and work
+better with Isabelle's simplifier.} 
+$(Suc\_neq\_0)$ is
+an elimination rule for $Suc(m)=0$:
+$$ Suc(m)=Suc(n) \Imp m=n  \eqno(Suc\_inject) $$
+$$ Suc(m)=0      \Imp R    \eqno(Suc\_neq\_0) $$
+
+\noindent
+We shall also define a primitive recursion operator, $rec$.  Traditionally,
+primitive recursion takes a natural number~$a$ and a 2-place function~$f$,
+and obeys the equations
+\begin{eqnarray*}
+  rec(0,a,f)            & = & a \\
+  rec(Suc(m),a,f)       & = & f(m, rec(m,a,f))
+\end{eqnarray*}
+Addition, defined by $m+n \equiv rec(m,n,\lambda x\,y.Suc(y))$,
+should satisfy
+\begin{eqnarray*}
+  0+n      & = & n \\
+  Suc(m)+n & = & Suc(m+n)
+\end{eqnarray*}
+Primitive recursion appears to pose difficulties: first-order logic has no
+function-valued expressions.  We again take advantage of the meta-logic,
+which does have functions.  We also generalise primitive recursion to be
+polymorphic over any type of class~$term$, and declare the addition
+function:
+\begin{eqnarray*}
+  rec   & :: & [nat, \alpha{::}term, [nat,\alpha]\To\alpha] \To\alpha \\
+  +     & :: & [nat,nat]\To nat 
+\end{eqnarray*}
+
+
+\subsection{Declaring the theory to Isabelle}
+\index{examples!of theories}
+Let us create the theory \thydx{Nat} starting from theory~\verb$FOL$,
+which contains only classical logic with no natural numbers.  We declare
+the 0-place type constructor $nat$ and the associated constants.  Note that
+the constant~0 requires a mixfix annotation because~0 is not a legal
+identifier, and could not otherwise be written in terms:
+\begin{ttbox}\index{mixfix declarations}
+Nat = FOL +
+types   nat
+arities nat         :: term
+consts  "0"         :: nat                              ("0")
+        Suc         :: nat=>nat
+        rec         :: [nat, 'a, [nat,'a]=>'a] => 'a
+        "+"         :: [nat, nat] => nat                (infixl 60)
+rules   Suc_inject  "Suc(m)=Suc(n) ==> m=n"
+        Suc_neq_0   "Suc(m)=0      ==> R"
+        induct      "[| P(0);  !!x. P(x) ==> P(Suc(x)) |]  ==> P(n)"
+        rec_0       "rec(0,a,f) = a"
+        rec_Suc     "rec(Suc(m), a, f) = f(m, rec(m,a,f))"
+        add_def     "m+n == rec(m, n, \%x y. Suc(y))"
+end
+\end{ttbox}
+In axiom \texttt{add_def}, recall that \verb|%| stands for~$\lambda$.
+Loading this theory file creates the \ML\ structure \texttt{Nat}, which
+contains the theory and axioms.
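+Assuming this text is saved in the file \texttt{Nat.thy}, the theory is
+loaded as described in \S\ref{sec:defining-theories}:
+\begin{ttbox}
+use_thy "Nat";
+\end{ttbox}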
+
+\subsection{Proving some recursion equations}
+Theory \texttt{FOL/ex/Nat} contains proofs involving this theory of the
+natural numbers.  As a trivial example, let us derive recursion equations
+for \verb$+$.  Here is the zero case:
+\begin{ttbox}
+Goalw [add_def] "0+n = n";
+{\out Level 0}
+{\out 0 + n = n}
+{\out  1. rec(0,n,\%x y. Suc(y)) = n}
+\ttbreak
+by (resolve_tac [rec_0] 1);
+{\out Level 1}
+{\out 0 + n = n}
+{\out No subgoals!}
+qed "add_0";
+\end{ttbox}
+And here is the successor case:
+\begin{ttbox}
+Goalw [add_def] "Suc(m)+n = Suc(m+n)";
+{\out Level 0}
+{\out Suc(m) + n = Suc(m + n)}
+{\out  1. rec(Suc(m),n,\%x y. Suc(y)) = Suc(rec(m,n,\%x y. Suc(y)))}
+\ttbreak
+by (resolve_tac [rec_Suc] 1);
+{\out Level 1}
+{\out Suc(m) + n = Suc(m + n)}
+{\out No subgoals!}
+qed "add_Suc";
+\end{ttbox}
+The induction rule raises some complications, which are discussed next.
+\index{theories!defining|)}
+
+
+\section{Refinement with explicit instantiation}
+\index{resolution!with instantiation}
+\index{instantiation|(}
+
+In order to employ mathematical induction, we need to refine a subgoal by
+the rule~$(induct)$.  The conclusion of this rule is $\Var{P}(\Var{n})$,
+which is highly ambiguous in higher-order unification.  It matches every
+way that a formula can be regarded as depending on a subterm of type~$nat$.
+To get round this problem, we could make the induction rule conclude
+$\forall n.\Var{P}(n)$ --- but putting a subgoal into this form requires
+refinement by~$(\forall E)$, which is equally hard!
+
+The tactic \texttt{res_inst_tac}, like \texttt{resolve_tac}, refines a subgoal by
+a rule.  But it also accepts explicit instantiations for the rule's
+schematic variables.  
+\begin{description}
+\item[\ttindex{res_inst_tac} {\it insts} {\it thm} {\it i}]
+instantiates the rule {\it thm} with the instantiations {\it insts}, and
+then performs resolution on subgoal~$i$.
+
+\item[\ttindex{eres_inst_tac}] 
+and \ttindex{dres_inst_tac} are similar, but perform elim-resolution
+and destruct-resolution, respectively.
+\end{description}
+The list {\it insts} consists of pairs $[(v@1,e@1), \ldots, (v@n,e@n)]$,
+where $v@1$, \ldots, $v@n$ are names of schematic variables in the rule ---
+with no leading question marks! --- and $e@1$, \ldots, $e@n$ are
+expressions giving their instantiations.  The expressions are type-checked
+in the context of a particular subgoal: free variables receive the same
+types as they have in the subgoal, and parameters may appear.  Type
+variable instantiations may appear in~{\it insts}, but they are seldom
+required: \texttt{res_inst_tac} instantiates type variables automatically
+whenever the type of~$e@i$ is an instance of the type of~$\Var{v@i}$.
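+For example, \ttindex{eres_inst_tac} can eliminate a universally
+quantified assumption at a chosen term.  The following sketch
+presupposes a subgoal whose assumptions include one of the
+form~$\forall x.P(x)$:
+\begin{ttbox}
+by (eres_inst_tac [("x","a")] allE 1);
+\end{ttbox}
+This instantiates $\Var{x}$ in \tdx{allE} to~$a$, replacing the
+universal assumption by its instance~$P(a)$.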
+
+\subsection{A simple proof by induction}
+\index{examples!of induction}
+Let us prove that no natural number~$k$ equals its own successor.  To
+use~$(induct)$, we instantiate~$\Var{n}$ to~$k$; Isabelle finds a good
+instantiation for~$\Var{P}$.
+\begin{ttbox}
+Goal "~ (Suc(k) = k)";
+{\out Level 0}
+{\out Suc(k) ~= k}
+{\out  1. Suc(k) ~= k}
+\ttbreak
+by (res_inst_tac [("n","k")] induct 1);
+{\out Level 1}
+{\out Suc(k) ~= k}
+{\out  1. Suc(0) ~= 0}
+{\out  2. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
+\end{ttbox}
+We should check that Isabelle has correctly applied induction.  Subgoal~1
+is the base case, with $k$ replaced by~0.  Subgoal~2 is the inductive step,
+with $k$ replaced by~$Suc(x)$ and with an induction hypothesis for~$x$.
+The rest of the proof demonstrates~\tdx{notI}, \tdx{notE} and the
+other rules of theory \texttt{Nat}.  The base case holds by~\ttindex{Suc_neq_0}:
+\begin{ttbox}
+by (resolve_tac [notI] 1);
+{\out Level 2}
+{\out Suc(k) ~= k}
+{\out  1. Suc(0) = 0 ==> False}
+{\out  2. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
+\ttbreak
+by (eresolve_tac [Suc_neq_0] 1);
+{\out Level 3}
+{\out Suc(k) ~= k}
+{\out  1. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
+\end{ttbox}
+The inductive step holds by the contrapositive of~\ttindex{Suc_inject}.
+Negation rules transform the subgoal into that of proving $Suc(x)=x$ from
+$Suc(Suc(x)) = Suc(x)$:
+\begin{ttbox}
+by (resolve_tac [notI] 1);
+{\out Level 4}
+{\out Suc(k) ~= k}
+{\out  1. !!x. [| Suc(x) ~= x; Suc(Suc(x)) = Suc(x) |] ==> False}
+\ttbreak
+by (eresolve_tac [notE] 1);
+{\out Level 5}
+{\out Suc(k) ~= k}
+{\out  1. !!x. Suc(Suc(x)) = Suc(x) ==> Suc(x) = x}
+\ttbreak
+by (eresolve_tac [Suc_inject] 1);
+{\out Level 6}
+{\out Suc(k) ~= k}
+{\out No subgoals!}
+\end{ttbox}
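+As usual, \texttt{qed} stores the result; the name is our choice:
+\begin{ttbox}
+qed "Suc_n_not_n";
+\end{ttbox}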
+
+
+\subsection{An example of ambiguity in \texttt{resolve_tac}}
+\index{examples!of induction}\index{unification!higher-order}
+If you try the example above, you may observe that \texttt{res_inst_tac} is
+not actually needed.  Almost by chance, \ttindex{resolve_tac} finds the right
+instantiation for~$(induct)$ to yield the desired next state.  With more
+complex formulae, our luck fails.  
+\begin{ttbox}
+Goal "(k+m)+n = k+(m+n)";
+{\out Level 0}
+{\out k + m + n = k + (m + n)}
+{\out  1. k + m + n = k + (m + n)}
+\ttbreak
+by (resolve_tac [induct] 1);
+{\out Level 1}
+{\out k + m + n = k + (m + n)}
+{\out  1. k + m + n = 0}
+{\out  2. !!x. k + m + n = x ==> k + m + n = Suc(x)}
+\end{ttbox}
+This proof requires induction on~$k$.  The occurrence of~0 in subgoal~1
+indicates that induction has been applied to the term~$k+(m+n)$; this
+application is sound but will not lead to a proof here.  Fortunately,
+Isabelle can (lazily!) generate all the valid applications of induction.
+The \ttindex{back} command causes backtracking to an alternative outcome of
+the tactic.
+\begin{ttbox}
+back();
+{\out Level 1}
+{\out k + m + n = k + (m + n)}
+{\out  1. k + m + n = k + 0}
+{\out  2. !!x. k + m + n = k + x ==> k + m + n = k + Suc(x)}
+\end{ttbox}
+Now induction has been applied to~$m+n$.  This is equally useless.  Let us
+call \ttindex{back} again.
+\begin{ttbox}
+back();
+{\out Level 1}
+{\out k + m + n = k + (m + n)}
+{\out  1. k + m + 0 = k + (m + 0)}
+{\out  2. !!x. k + m + x = k + (m + x) ==>}
+{\out          k + m + Suc(x) = k + (m + Suc(x))}
+\end{ttbox}
+Now induction has been applied to~$n$.  What is the next alternative?
+\begin{ttbox}
+back();
+{\out Level 1}
+{\out k + m + n = k + (m + n)}
+{\out  1. k + m + n = k + (m + 0)}
+{\out  2. !!x. k + m + n = k + (m + x) ==> k + m + n = k + (m + Suc(x))}
+\end{ttbox}
+Inspecting subgoal~1 reveals that induction has been applied to just the
+second occurrence of~$n$.  This perfectly legitimate induction is useless
+here.  
+
+The main goal admits fourteen different applications of induction.  The
+number is exponential in the size of the formula.
+
+\subsection{Proving that addition is associative}
+Let us invoke the induction rule properly, using~{\tt
+  res_inst_tac}.  At the same time, we shall have a glimpse at Isabelle's
+simplification tactics, which are described in 
+\iflabelundefined{simp-chap}%
+    {the {\em Reference Manual}}{Chap.\ts\ref{simp-chap}}.
+
+\index{simplification}\index{examples!of simplification} 
+
+Isabelle's simplification tactics repeatedly apply equations to a subgoal,
+perhaps proving it.  For efficiency, the rewrite rules must be packaged into a
+{\bf simplification set},\index{simplification sets} or {\bf simpset}.  We
+augment the implicit simpset of FOL with the equations proved in the previous
+section, namely $0+n=n$ and $\texttt{Suc}(m)+n=\texttt{Suc}(m+n)$:
+\begin{ttbox}
+Addsimps [add_0, add_Suc];
+\end{ttbox}
+We state the goal for associativity of addition, and
+use \ttindex{res_inst_tac} to invoke induction on~$k$:
+\begin{ttbox}
+Goal "(k+m)+n = k+(m+n)";
+{\out Level 0}
+{\out k + m + n = k + (m + n)}
+{\out  1. k + m + n = k + (m + n)}
+\ttbreak
+by (res_inst_tac [("n","k")] induct 1);
+{\out Level 1}
+{\out k + m + n = k + (m + n)}
+{\out  1. 0 + m + n = 0 + (m + n)}
+{\out  2. !!x. x + m + n = x + (m + n) ==>}
+{\out          Suc(x) + m + n = Suc(x) + (m + n)}
+\end{ttbox}
+The base case holds easily; both sides reduce to $m+n$.  The
+tactic~\ttindex{Simp_tac} rewrites with respect to the current
+simplification set, applying the rewrite rules for addition:
+\begin{ttbox}
+by (Simp_tac 1);
+{\out Level 2}
+{\out k + m + n = k + (m + n)}
+{\out  1. !!x. x + m + n = x + (m + n) ==>}
+{\out          Suc(x) + m + n = Suc(x) + (m + n)}
+\end{ttbox}
+The inductive step requires rewriting by the equations for addition
+and with the induction hypothesis, which is also an equation.  The
+tactic~\ttindex{Asm_simp_tac} rewrites using the implicit
+simplification set and any useful assumptions:
+\begin{ttbox}
+by (Asm_simp_tac 1);
+{\out Level 3}
+{\out k + m + n = k + (m + n)}
+{\out No subgoals!}
+\end{ttbox}
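+The two calls can be combined: the tactical \ttindex{ALLGOALS} applies a
+tactic to every subgoal, and \texttt{Asm_simp_tac} would also have
+proved the base case, where there are no assumptions to use:
+\begin{ttbox}
+by (ALLGOALS Asm_simp_tac);
+\end{ttbox}
+Executed at Level~1, this single command should prove both subgoals.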
+\index{instantiation|)}
+
+
+\section{A Prolog interpreter}
+\index{Prolog interpreter|bold}
+To demonstrate the power of tacticals, let us construct a Prolog
+interpreter and execute programs involving lists.\footnote{To run these
+examples, see the file \texttt{FOL/ex/Prolog.ML}.} The Prolog program
+consists of a theory.  We declare a type constructor for lists, with an
+arity declaration to say that $(\tau)list$ is of class~$term$
+provided~$\tau$ is:
+\begin{eqnarray*}
+  list  & :: & (term)term
+\end{eqnarray*}
+We declare four constants: the empty list~$Nil$; the infix list
+constructor~{:}; the list concatenation predicate~$app$; the list reverse
+predicate~$rev$.  (In Prolog, functions on lists are expressed as
+predicates.)
+\begin{eqnarray*}
+    Nil         & :: & \alpha list \\
+    {:}         & :: & [\alpha,\alpha list] \To \alpha list \\
+    app & :: & [\alpha list,\alpha list,\alpha list] \To o \\
+    rev & :: & [\alpha list,\alpha list] \To o 
+\end{eqnarray*}
+The predicate $app$ should satisfy the Prolog-style rules
+\[ {app(Nil,ys,ys)} \qquad
+   {app(xs,ys,zs) \over app(x:xs, ys, x:zs)} \]
+We define the naive version of $rev$, which calls~$app$:
+\[ {rev(Nil,Nil)} \qquad
+   {rev(xs,ys)\quad  app(ys, x:Nil, zs) \over
+    rev(x:xs, zs)} 
+\]
+
+\index{examples!of theories}
+Theory \thydx{Prolog} extends first-order logic in order to make use
+of the class~$term$ and the type~$o$.  The interpreter does not use the
+rules of~\texttt{FOL}.
+\begin{ttbox}
+Prolog = FOL +
+types   'a list
+arities list    :: (term)term
+consts  Nil     :: 'a list
+        ":"     :: ['a, 'a list]=> 'a list            (infixr 60)
+        app     :: ['a list, 'a list, 'a list] => o
+        rev     :: ['a list, 'a list] => o
+rules   appNil  "app(Nil,ys,ys)"
+        appCons "app(xs,ys,zs) ==> app(x:xs, ys, x:zs)"
+        revNil  "rev(Nil,Nil)"
+        revCons "[| rev(xs,ys); app(ys,x:Nil,zs) |] ==> rev(x:xs,zs)"
+end
+\end{ttbox}
+\subsection{Simple executions}
+Repeated application of the rules solves Prolog goals.  Let us
+append the lists $[a,b,c]$ and~$[d,e]$.  As the rules are applied, the
+answer builds up in~\texttt{?x}.
+\begin{ttbox}
+Goal "app(a:b:c:Nil, d:e:Nil, ?x)";
+{\out Level 0}
+{\out app(a : b : c : Nil, d : e : Nil, ?x)}
+{\out  1. app(a : b : c : Nil, d : e : Nil, ?x)}
+\ttbreak
+by (resolve_tac [appNil,appCons] 1);
+{\out Level 1}
+{\out app(a : b : c : Nil, d : e : Nil, a : ?zs1)}
+{\out  1. app(b : c : Nil, d : e : Nil, ?zs1)}
+\ttbreak
+by (resolve_tac [appNil,appCons] 1);
+{\out Level 2}
+{\out app(a : b : c : Nil, d : e : Nil, a : b : ?zs2)}
+{\out  1. app(c : Nil, d : e : Nil, ?zs2)}
+\end{ttbox}
+At this point, the first two elements of the result are~$a$ and~$b$.
+\begin{ttbox}
+by (resolve_tac [appNil,appCons] 1);
+{\out Level 3}
+{\out app(a : b : c : Nil, d : e : Nil, a : b : c : ?zs3)}
+{\out  1. app(Nil, d : e : Nil, ?zs3)}
+\ttbreak
+by (resolve_tac [appNil,appCons] 1);
+{\out Level 4}
+{\out app(a : b : c : Nil, d : e : Nil, a : b : c : d : e : Nil)}
+{\out No subgoals!}
+\end{ttbox}
+
+Prolog can run functions backwards.  Which list can be appended
+with $[c,d]$ to produce $[a,b,c,d]$?
+Using \ttindex{REPEAT}, we find the answer at once, $[a,b]$:
+\begin{ttbox}
+Goal "app(?x, c:d:Nil, a:b:c:d:Nil)";
+{\out Level 0}
+{\out app(?x, c : d : Nil, a : b : c : d : Nil)}
+{\out  1. app(?x, c : d : Nil, a : b : c : d : Nil)}
+\ttbreak
+by (REPEAT (resolve_tac [appNil,appCons] 1));
+{\out Level 1}
+{\out app(a : b : Nil, c : d : Nil, a : b : c : d : Nil)}
+{\out No subgoals!}
+\end{ttbox}
+
+
+\subsection{Backtracking}\index{backtracking!Prolog style}
+Prolog backtracking can answer questions that have multiple solutions.
+Which lists $x$ and $y$ can be appended to form the list $[a,b,c,d]$?  This
+question has five solutions.  Using \ttindex{REPEAT} to apply the rules, we
+quickly find the first solution, namely $x=[]$ and $y=[a,b,c,d]$:
+\begin{ttbox}
+Goal "app(?x, ?y, a:b:c:d:Nil)";
+{\out Level 0}
+{\out app(?x, ?y, a : b : c : d : Nil)}
+{\out  1. app(?x, ?y, a : b : c : d : Nil)}
+\ttbreak
+by (REPEAT (resolve_tac [appNil,appCons] 1));
+{\out Level 1}
+{\out app(Nil, a : b : c : d : Nil, a : b : c : d : Nil)}
+{\out No subgoals!}
+\end{ttbox}
+Isabelle can lazily generate all the possibilities.  The \ttindex{back}
+command returns the tactic's next outcome, namely $x=[a]$ and $y=[b,c,d]$:
+\begin{ttbox}
+back();
+{\out Level 1}
+{\out app(a : Nil, b : c : d : Nil, a : b : c : d : Nil)}
+{\out No subgoals!}
+\end{ttbox}
+The other solutions are generated similarly.
+\begin{ttbox}
+back();
+{\out Level 1}
+{\out app(a : b : Nil, c : d : Nil, a : b : c : d : Nil)}
+{\out No subgoals!}
+\ttbreak
+back();
+{\out Level 1}
+{\out app(a : b : c : Nil, d : Nil, a : b : c : d : Nil)}
+{\out No subgoals!}
+\ttbreak
+back();
+{\out Level 1}
+{\out app(a : b : c : d : Nil, Nil, a : b : c : d : Nil)}
+{\out No subgoals!}
+\end{ttbox}
+
+
+\subsection{Depth-first search}
+\index{search!depth-first}
+Now let us try $rev$, reversing a list.
+Bundle the rules together as the \ML{} identifier \texttt{rules}.  Naive
+reverse requires 120 inferences for this 14-element list, but the tactic
+terminates in a few seconds.
+\begin{ttbox}
+Goal "rev(a:b:c:d:e:f:g:h:i:j:k:l:m:n:Nil, ?w)";
+{\out Level 0}
+{\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil, ?w)}
+{\out  1. rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
+{\out         ?w)}
+\ttbreak
+val rules = [appNil,appCons,revNil,revCons];
+\ttbreak
+by (REPEAT (resolve_tac rules 1));
+{\out Level 1}
+{\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
+{\out     n : m : l : k : j : i : h : g : f : e : d : c : b : a : Nil)}
+{\out No subgoals!}
+\end{ttbox}
+We may execute $rev$ backwards.  This, too, should reverse a list.  What
+is the reverse of $[a,b,c]$?
+\begin{ttbox}
+Goal "rev(?x, a:b:c:Nil)";
+{\out Level 0}
+{\out rev(?x, a : b : c : Nil)}
+{\out  1. rev(?x, a : b : c : Nil)}
+\ttbreak
+by (REPEAT (resolve_tac rules 1));
+{\out Level 1}
+{\out rev(?x1 : Nil, a : b : c : Nil)}
+{\out  1. app(Nil, ?x1 : Nil, a : b : c : Nil)}
+\end{ttbox}
+The tactic has failed to find a solution!  It reached a dead end at
+subgoal~1: there is no~$\Var{x@1}$ such that $Nil$ appended with~$[\Var{x@1}]$
+equals~$[a,b,c]$.  Backtracking explores other outcomes.
+\begin{ttbox}
+back();
+{\out Level 1}
+{\out rev(?x1 : a : Nil, a : b : c : Nil)}
+{\out  1. app(Nil, ?x1 : Nil, b : c : Nil)}
+\end{ttbox}
+This too is a dead end, but the next outcome is successful.
+\begin{ttbox}
+back();
+{\out Level 1}
+{\out rev(c : b : a : Nil, a : b : c : Nil)}
+{\out No subgoals!}
+\end{ttbox}
+\ttindex{REPEAT} goes wrong because it is only a repetition tactical, not a
+search tactical.  \texttt{REPEAT} stops when it cannot continue, regardless of
+which state is reached.  The tactical \ttindex{DEPTH_FIRST} searches for a
+satisfactory state, as specified by an \ML{} predicate.  Below,
+\ttindex{has_fewer_prems} specifies that the proof state should have no
+subgoals.
+\begin{ttbox}
+val prolog_tac = DEPTH_FIRST (has_fewer_prems 1) 
+                             (resolve_tac rules 1);
+\end{ttbox}
+Since Prolog uses depth-first search, this tactic is a (slow!) 
+Prolog interpreter.  We return to the start of the proof using
+\ttindex{choplev}, and apply \texttt{prolog_tac}:
+\begin{ttbox}
+choplev 0;
+{\out Level 0}
+{\out rev(?x, a : b : c : Nil)}
+{\out  1. rev(?x, a : b : c : Nil)}
+\ttbreak
+by prolog_tac;
+{\out Level 1}
+{\out rev(c : b : a : Nil, a : b : c : Nil)}
+{\out No subgoals!}
+\end{ttbox}
+Let us try \texttt{prolog_tac} on one more example, containing four unknowns:
+\begin{ttbox}
+Goal "rev(a:?x:c:?y:Nil, d:?z:b:?u)";
+{\out Level 0}
+{\out rev(a : ?x : c : ?y : Nil, d : ?z : b : ?u)}
+{\out  1. rev(a : ?x : c : ?y : Nil, d : ?z : b : ?u)}
+\ttbreak
+by prolog_tac;
+{\out Level 1}
+{\out rev(a : b : c : d : Nil, d : c : b : a : Nil)}
+{\out No subgoals!}
+\end{ttbox}
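+The interpreter is easily generalized.  Parametrizing over the list of
+rules yields a reusable solver (a sketch of our own):
+\begin{ttbox}
+fun solve_tac rules =
+    DEPTH_FIRST (has_fewer_prems 1) (resolve_tac rules 1);
+\end{ttbox}
+Then \texttt{by (solve_tac rules);} behaves exactly like
+\texttt{prolog_tac} above.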
+Although Isabelle is much slower than a Prolog system, Isabelle
+tactics can exploit logic programming techniques.  
+