--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Datatype/document/ABexpr.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,114 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+Sometimes it is necessary to define two datatypes that depend on each
+other. This is called \textbf{mutual recursion}. As an example consider a
+language of arithmetic and boolean expressions where
+\begin{itemize}
+\item arithmetic expressions contain boolean expressions because there are
+ conditional arithmetic expressions like ``if $m<n$ then $n-m$ else $m-n$'',
+ and
+\item boolean expressions contain arithmetic expressions because of
+ comparisons like ``$m<n$''.
+\end{itemize}
+In Isabelle this becomes%
+\end{isamarkuptext}%
+\isacommand{datatype}~'a~aexp~=~IF~~~{"}'a~bexp{"}~{"}'a~aexp{"}~{"}'a~aexp{"}\isanewline
+~~~~~~~~~~~~~~~~~|~Sum~~{"}'a~aexp{"}~{"}'a~aexp{"}\isanewline
+~~~~~~~~~~~~~~~~~|~Diff~{"}'a~aexp{"}~{"}'a~aexp{"}\isanewline
+~~~~~~~~~~~~~~~~~|~Var~'a\isanewline
+~~~~~~~~~~~~~~~~~|~Num~nat\isanewline
+\isakeyword{and}~~~~~~'a~bexp~=~Less~{"}'a~aexp{"}~{"}'a~aexp{"}\isanewline
+~~~~~~~~~~~~~~~~~|~And~~{"}'a~bexp{"}~{"}'a~bexp{"}\isanewline
+~~~~~~~~~~~~~~~~~|~Neg~~{"}'a~bexp{"}%
+\begin{isamarkuptext}%
+\noindent
+Type \isa{aexp} is similar to \isa{expr} in \S\ref{sec:ExprCompiler},
+except that we have fixed the values to be of type \isa{nat} and that we
+have fixed the two binary operations \isa{Sum} and \isa{Diff}. Boolean
+expressions can be arithmetic comparisons, conjunctions and negations.
+The semantics is fixed via two evaluation functions%
+\end{isamarkuptext}%
+\isacommand{consts}~~evala~::~{"}('a~{\isasymRightarrow}~nat)~{\isasymRightarrow}~'a~aexp~{\isasymRightarrow}~nat{"}\isanewline
+~~~~~~~~evalb~::~{"}('a~{\isasymRightarrow}~nat)~{\isasymRightarrow}~'a~bexp~{\isasymRightarrow}~bool{"}%
+\begin{isamarkuptext}%
+\noindent
+that take an environment (a mapping from variables \isa{'a} to values
+\isa{nat}) and an expression and return its arithmetic/boolean
+value. Since the datatypes are mutually recursive, functions that operate
+on them must be mutually recursive as well. Hence they need to be defined in
+a single \isacommand{primrec}
+section:%
+\end{isamarkuptext}%
+\isacommand{primrec}\isanewline
+~~{"}evala~env~(IF~b~a1~a2)~=\isanewline
+~~~~~(if~evalb~env~b~then~evala~env~a1~else~evala~env~a2){"}\isanewline
+~~{"}evala~env~(Sum~a1~a2)~=~evala~env~a1~+~evala~env~a2{"}\isanewline
+~~{"}evala~env~(Diff~a1~a2)~=~evala~env~a1~-~evala~env~a2{"}\isanewline
+~~{"}evala~env~(Var~v)~=~env~v{"}\isanewline
+~~{"}evala~env~(Num~n)~=~n{"}\isanewline
+\isanewline
+~~{"}evalb~env~(Less~a1~a2)~=~(evala~env~a1~<~evala~env~a2){"}\isanewline
+~~{"}evalb~env~(And~b1~b2)~=~(evalb~env~b1~{\isasymand}~evalb~env~b2){"}\isanewline
+~~{"}evalb~env~(Neg~b)~=~({\isasymnot}~evalb~env~b){"}%
+\begin{isamarkuptext}%
+\noindent
+In the same fashion we also define two functions that perform substitution:%
+\end{isamarkuptext}%
+\isacommand{consts}~substa~::~{"}('a~{\isasymRightarrow}~'b~aexp)~{\isasymRightarrow}~'a~aexp~{\isasymRightarrow}~'b~aexp{"}\isanewline
+~~~~~~~substb~::~{"}('a~{\isasymRightarrow}~'b~aexp)~{\isasymRightarrow}~'a~bexp~{\isasymRightarrow}~'b~bexp{"}%
+\begin{isamarkuptext}%
+\noindent
+The first argument is a function mapping variables to expressions, the
+substitution. It is applied to all variables in the second argument. As a
+result, the type of variables in the expression may change from \isa{'a}
+to \isa{'b}. Note that there are only arithmetic and no boolean variables.%
+\end{isamarkuptext}%
+\isacommand{primrec}\isanewline
+~~{"}substa~s~(IF~b~a1~a2)~=\isanewline
+~~~~~IF~(substb~s~b)~(substa~s~a1)~(substa~s~a2){"}\isanewline
+~~{"}substa~s~(Sum~a1~a2)~=~Sum~(substa~s~a1)~(substa~s~a2){"}\isanewline
+~~{"}substa~s~(Diff~a1~a2)~=~Diff~(substa~s~a1)~(substa~s~a2){"}\isanewline
+~~{"}substa~s~(Var~v)~=~s~v{"}\isanewline
+~~{"}substa~s~(Num~n)~=~Num~n{"}\isanewline
+\isanewline
+~~{"}substb~s~(Less~a1~a2)~=~Less~(substa~s~a1)~(substa~s~a2){"}\isanewline
+~~{"}substb~s~(And~b1~b2)~=~And~(substb~s~b1)~(substb~s~b2){"}\isanewline
+~~{"}substb~s~(Neg~b)~=~Neg~(substb~s~b){"}%
+\begin{isamarkuptext}%
+Now we can prove a fundamental theorem about the interaction between
+evaluation and substitution: applying a substitution $s$ to an expression $a$
+and evaluating the result in an environment $env$ yields the same result as
+evaluating $a$ in the environment that maps every variable $x$ to the value
+of $s(x)$ under $env$. If you try to prove this separately for arithmetic or
+boolean expressions (by induction), you find that you always need the other
+theorem in the induction step. Therefore you need to state and prove both
+theorems simultaneously:%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}evala~env~(substa~s~a)~=~evala~({\isasymlambda}x.~evala~env~(s~x))~a~{\isasymand}\isanewline
+~~~~~~~~evalb~env~(substb~s~b)~=~evalb~({\isasymlambda}x.~evala~env~(s~x))~b{"}\isanewline
+\isacommand{apply}(induct\_tac~a~\isakeyword{and}~b)%
+\begin{isamarkuptxt}%
+\noindent
+The resulting 8 goals (one for each constructor) are proved in one fell swoop:%
+\end{isamarkuptxt}%
+\isacommand{apply}~auto\isacommand{.}%
+\begin{isamarkuptext}%
+In general, given $n$ mutually recursive datatypes $\tau@1$, \dots, $\tau@n$,
+an inductive proof expects a goal of the form
+\[ P@1(x@1)\ \land \dots \land P@n(x@n) \]
+where each variable $x@i$ is of type $\tau@i$. Induction is started by
+\begin{ttbox}
+apply(induct_tac \(x@1\) \texttt{and} \(\dots\) \texttt{and} \(x@n\));
+\end{ttbox}
+
+\begin{exercise}
+ Define a function \isa{norma} of type \isa{'a aexp \isasymFun\ 'a aexp} that
+ replaces \isa{IF}s with complex boolean conditions by nested
+ \isa{IF}s where each condition is a \isa{Less} --- \isa{And} and
+ \isa{Neg} should be eliminated completely. Prove that \isa{norma}
+ preserves the value of an expression and that the result of \isa{norma}
+ is really normal, i.e.\ no more \isa{And}s and \isa{Neg}s occur in
+ it. ({\em Hint:} proceed as in \S\ref{sec:boolex}).
+\end{exercise}%
+\end{isamarkuptext}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Datatype/document/Fundata.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,48 @@
+\begin{isabelle}%
+\isacommand{datatype}~('a,'i)bigtree~=~Tip~|~Branch~'a~{"}'i~{\isasymRightarrow}~('a,'i)bigtree{"}%
+\begin{isamarkuptext}%
+\noindent Parameter \isa{'a} is the type of values stored in
+the \isa{Branch}es of the tree, whereas \isa{'i} is the index
+type over which the tree branches. If \isa{'i} is instantiated to
+\isa{bool}, the result is a binary tree; if it is instantiated to
+\isa{nat}, we have an infinitely branching tree because each node
+has as many subtrees as there are natural numbers. How can we possibly
+write down such a tree? Using functional notation! For example, the%
+\end{isamarkuptext}%
+\isacommand{term}~{"}Branch~0~({\isasymlambda}i.~Branch~i~({\isasymlambda}n.~Tip)){"}%
+\begin{isamarkuptext}%
+\noindent of type \isa{(nat,nat)bigtree} is the tree whose
+root is labeled with 0 and whose $n$th subtree is labeled with $n$ and
+has merely \isa{Tip}s as further subtrees.
+
+Function \isa{map_bt} applies a function to all labels in a \isa{bigtree}:%
+\end{isamarkuptext}%
+\isacommand{consts}~map\_bt~::~{"}('a~{\isasymRightarrow}~'b)~{\isasymRightarrow}~('a,'i)bigtree~{\isasymRightarrow}~('b,'i)bigtree{"}\isanewline
+\isacommand{primrec}\isanewline
+{"}map\_bt~f~Tip~~~~~~~~~~=~Tip{"}\isanewline
+{"}map\_bt~f~(Branch~a~F)~=~Branch~(f~a)~({\isasymlambda}i.~map\_bt~f~(F~i)){"}%
+\begin{isamarkuptext}%
+\noindent This is a valid \isacommand{primrec} definition because the
+recursive calls of \isa{map_bt} involve only subtrees obtained from
+\isa{F}, i.e.\ the left-hand side. Thus termination is assured. The
+seasoned functional programmer might have written \isa{map_bt~f~o~F} instead
+of \isa{\isasymlambda{}i.~map_bt~f~(F~i)}, but the former is not accepted by
+Isabelle because the termination proof is not as obvious since
+\isa{map_bt} is only partially applied.
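+Spelled out, the rejected variant of the second equation would read as
+follows (shown purely for illustration):
+\begin{ttbox}
+"map_bt f (Branch a F) = Branch (f a) (map_bt f o F)"
+\end{ttbox}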
+
+The following lemma has a canonical proof%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}map\_bt~g~(map\_bt~f~T)~=~map\_bt~(g~o~f)~T{"}\isanewline
+\isacommand{apply}(induct\_tac~{"}T{"},~auto)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+but it is worth taking a look at the proof state after the induction step
+to understand what the presence of the function type entails:
+\begin{isabellepar}%
+~1.~map\_bt~g~(map\_bt~f~Tip)~=~map\_bt~(g~{\isasymcirc}~f)~Tip\isanewline
+~2.~{\isasymAnd}a~F.\isanewline
+~~~~~~{\isasymforall}x.~map\_bt~g~(map\_bt~f~(F~x))~=~map\_bt~(g~{\isasymcirc}~f)~(F~x)~{\isasymLongrightarrow}\isanewline
+~~~~~~map\_bt~g~(map\_bt~f~(Branch~a~F))~=~map\_bt~(g~{\isasymcirc}~f)~(Branch~a~F)%
+\end{isabellepar}%%
+\end{isamarkuptext}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Datatype/document/Nested.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,79 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+So far, all datatypes had the property that on the right-hand side of their
+definition they occurred only at the top-level, i.e.\ directly below a
+constructor. This is not the case any longer for the following model of terms
+where function symbols can be applied to a list of arguments:%
+\end{isamarkuptext}%
+\isacommand{datatype}~('a,'b){"}term{"}~=~Var~'a~|~App~'b~{"}('a,'b)term~list{"}%
+\begin{isamarkuptext}%
+\noindent
+Note that we need to quote \isa{term} on the left to avoid confusion with
+the command \isacommand{term}.
+Parameter \isa{'a} is the type of variables and \isa{'b} the type of
+function symbols.
+A mathematical term like $f(x,g(y))$ becomes \isa{App f [Var x, App g
+ [Var y]]}, where \isa{f}, \isa{g}, \isa{x}, \isa{y} are
+suitable values, e.g.\ numbers or strings.
+
+What complicates the definition of \isa{term} is the nested occurrence of
+\isa{term} inside \isa{list} on the right-hand side. In principle,
+nested recursion can be eliminated in favour of mutual recursion by unfolding
+the offending datatypes, here \isa{list}. The result for \isa{term}
+would be something like
+\begin{ttbox}
+\input{Datatype/tunfoldeddata}\end{ttbox}
+Although we do not recommend this unfolding to the user, it shows how to
+simulate nested recursion by mutual recursion.
+Now we return to the initial definition of \isa{term} using
+nested recursion.
+
+Let us define a substitution function on terms. Because terms involve term
+lists, we need to define two substitution functions simultaneously:%
+\end{isamarkuptext}%
+\isacommand{consts}\isanewline
+subst~::~{"}('a{\isasymRightarrow}('a,'b)term)~{\isasymRightarrow}~('a,'b)term~~~~~~{\isasymRightarrow}~('a,'b)term{"}\isanewline
+substs::~{"}('a{\isasymRightarrow}('a,'b)term)~{\isasymRightarrow}~('a,'b)term~list~{\isasymRightarrow}~('a,'b)term~list{"}\isanewline
+\isanewline
+\isacommand{primrec}\isanewline
+~~{"}subst~s~(Var~x)~=~s~x{"}\isanewline
+~~{"}subst~s~(App~f~ts)~=~App~f~(substs~s~ts){"}\isanewline
+\isanewline
+~~{"}substs~s~[]~=~[]{"}\isanewline
+~~{"}substs~s~(t~\#~ts)~=~subst~s~t~\#~substs~s~ts{"}%
+\begin{isamarkuptext}%
+\noindent
+Similarly, when proving a statement about terms inductively, we need
+to prove a related statement about term lists simultaneously. For example,
+the fact that the identity substitution does not change a term needs to be
+strengthened and proved as follows:%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}subst~~Var~t~~=~(t~::('a,'b)term)~~{\isasymand}\isanewline
+~~~~~~~~substs~Var~ts~=~(ts::('a,'b)term~list){"}\isanewline
+\isacommand{apply}(induct\_tac~t~\isakeyword{and}~ts,~auto)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+Note that \isa{Var} is the identity substitution because by definition it
+leaves variables unchanged: \isa{subst Var (Var $x$) = Var $x$}. Note also
+that the type annotations are necessary because otherwise there is nothing in
+the goal to enforce that both halves of the goal talk about the same type
+parameters \isa{('a,'b)}. As a result, induction would fail
+because the two halves of the goal would be unrelated.
+
+\begin{exercise}
+The fact that substitution distributes over composition can be expressed
+roughly as follows:
+\begin{ttbox}
+subst (f o g) t = subst f (subst g t)
+\end{ttbox}
+Correct this statement (you will find that it does not type-check),
+strengthen it, and prove it. (Note: \ttindexbold{o} is function composition;
+its definition is found in theorem \isa{o_def}).
+\end{exercise}
+
+Of course, you may also combine mutual and nested recursion. For example,
+constructor \isa{Sum} in \S\ref{sec:datatype-mut-rec} could take a list of
+expressions as its argument: \isa{Sum "'a aexp list"}.%
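+
+For instance, the declaration from \S\ref{sec:datatype-mut-rec} might then be
+sketched as follows (only the \isa{Sum} constructor changes; this is an
+illustration, not part of the theory above):
+\begin{ttbox}
+datatype 'a aexp = IF   "'a bexp" "'a aexp" "'a aexp"
+                 | Sum  "'a aexp list"
+                 | Diff "'a aexp" "'a aexp"
+                 | Var  'a
+                 | Num  nat
+and      'a bexp = Less "'a aexp" "'a aexp"
+                 | And  "'a bexp" "'a bexp"
+                 | Neg  "'a bexp"
+\end{ttbox}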
+\end{isamarkuptext}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Ifexpr/document/Ifexpr.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,134 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+\subsubsection{How can we model boolean expressions?}
+
+We want to represent boolean expressions built up from variables and
+constants by negation and conjunction. The following datatype serves exactly
+that purpose:%
+\end{isamarkuptext}%
+\isacommand{datatype}~boolex~=~Const~bool~|~Var~nat~|~Neg~boolex\isanewline
+~~~~~~~~~~~~~~~~|~And~boolex~boolex%
+\begin{isamarkuptext}%
+\noindent
+The two constants are represented by \isa{Const~True} and
+\isa{Const~False}. Variables are represented by terms of the form
+\isa{Var~$n$}, where $n$ is a natural number (type \isa{nat}).
+For example, the formula $P@0 \land \neg P@1$ is represented by the term
+\isa{And~(Var~0)~(Neg(Var~1))}.
+
+\subsubsection{What is the value of a boolean expression?}
+
+The value of a boolean expression depends on the value of its variables.
+Hence the function \isa{value} takes an additional parameter, an {\em
+ environment} of type \isa{nat \isasymFun\ bool}, which maps variables to
+their values:%
+\end{isamarkuptext}%
+\isacommand{consts}~value~::~{"}boolex~{\isasymRightarrow}~(nat~{\isasymRightarrow}~bool)~{\isasymRightarrow}~bool{"}\isanewline
+\isacommand{primrec}\isanewline
+{"}value~(Const~b)~env~=~b{"}\isanewline
+{"}value~(Var~x)~~~env~=~env~x{"}\isanewline
+{"}value~(Neg~b)~~~env~=~({\isasymnot}~value~b~env){"}\isanewline
+{"}value~(And~b~c)~env~=~(value~b~env~{\isasymand}~value~c~env){"}%
+\begin{isamarkuptext}%
+\noindent
+\subsubsection{If-expressions}
+
+An alternative and often more efficient (because in a certain sense
+canonical) representation is given by so-called \emph{If-expressions}, built up
+from constants (\isa{CIF}), variables (\isa{VIF}) and conditionals
+(\isa{IF}):%
+\end{isamarkuptext}%
+\isacommand{datatype}~ifex~=~CIF~bool~|~VIF~nat~|~IF~ifex~ifex~ifex%
+\begin{isamarkuptext}%
+\noindent
+The evaluation of If-expressions proceeds as for \isa{boolex}:%
+\end{isamarkuptext}%
+\isacommand{consts}~valif~::~{"}ifex~{\isasymRightarrow}~(nat~{\isasymRightarrow}~bool)~{\isasymRightarrow}~bool{"}\isanewline
+\isacommand{primrec}\isanewline
+{"}valif~(CIF~b)~~~~env~=~b{"}\isanewline
+{"}valif~(VIF~x)~~~~env~=~env~x{"}\isanewline
+{"}valif~(IF~b~t~e)~env~=~(if~valif~b~env~then~valif~t~env\isanewline
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~else~valif~e~env){"}%
+\begin{isamarkuptext}%
+\subsubsection{Transformation into and of If-expressions}
+
+The type \isa{boolex} is close to the customary representation of logical
+formulae, whereas \isa{ifex} is designed for efficiency. Thus we need to
+translate from \isa{boolex} into \isa{ifex}:%
+\end{isamarkuptext}%
+\isacommand{consts}~bool2if~::~{"}boolex~{\isasymRightarrow}~ifex{"}\isanewline
+\isacommand{primrec}\isanewline
+{"}bool2if~(Const~b)~=~CIF~b{"}\isanewline
+{"}bool2if~(Var~x)~~~=~VIF~x{"}\isanewline
+{"}bool2if~(Neg~b)~~~=~IF~(bool2if~b)~(CIF~False)~(CIF~True){"}\isanewline
+{"}bool2if~(And~b~c)~=~IF~(bool2if~b)~(bool2if~c)~(CIF~False){"}%
+\begin{isamarkuptext}%
+\noindent
+At last, we have something we can verify: that \isa{bool2if} preserves the
+value of its argument:%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}valif~(bool2if~b)~env~=~value~b~env{"}%
+\begin{isamarkuptxt}%
+\noindent
+The proof is canonical:%
+\end{isamarkuptxt}%
+\isacommand{apply}(induct\_tac~b)\isanewline
+\isacommand{apply}(auto)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+In fact, all proofs in this case study look exactly like this. Hence we do
+not show them below.
+
+More interesting is the transformation of If-expressions into a normal form
+where the first argument of \isa{IF} cannot be another \isa{IF} but
+must be a constant or variable. Such a normal form can be computed by
+repeatedly replacing a subterm of the form \isa{IF~(IF~b~x~y)~z~u} by
+\isa{IF b (IF x z u) (IF y z u)}, which has the same value. The following
+primitive recursive functions perform this task:%
+\end{isamarkuptext}%
+\isacommand{consts}~normif~::~{"}ifex~{\isasymRightarrow}~ifex~{\isasymRightarrow}~ifex~{\isasymRightarrow}~ifex{"}\isanewline
+\isacommand{primrec}\isanewline
+{"}normif~(CIF~b)~~~~t~e~=~IF~(CIF~b)~t~e{"}\isanewline
+{"}normif~(VIF~x)~~~~t~e~=~IF~(VIF~x)~t~e{"}\isanewline
+{"}normif~(IF~b~t~e)~u~f~=~normif~b~(normif~t~u~f)~(normif~e~u~f){"}\isanewline
+\isanewline
+\isacommand{consts}~norm~::~{"}ifex~{\isasymRightarrow}~ifex{"}\isanewline
+\isacommand{primrec}\isanewline
+{"}norm~(CIF~b)~~~~=~CIF~b{"}\isanewline
+{"}norm~(VIF~x)~~~~=~VIF~x{"}\isanewline
+{"}norm~(IF~b~t~e)~=~normif~b~(norm~t)~(norm~e){"}%
+\begin{isamarkuptext}%
+\noindent
+Their interplay is a bit tricky, and we leave it to the reader to develop an
+intuitive understanding. Fortunately, Isabelle can help us to verify that the
+transformation preserves the value of the expression:%
+\end{isamarkuptext}%
+\isacommand{theorem}~{"}valif~(norm~b)~env~=~valif~b~env{"}%
+\begin{isamarkuptext}%
+\noindent
+The proof is canonical, provided we first show the following simplification
+lemma (which also helps to understand what \isa{normif} does):%
+\end{isamarkuptext}%
+\isacommand{lemma}~[simp]:\isanewline
+~~{"}{\isasymforall}t~e.~valif~(normif~b~t~e)~env~=~valif~(IF~b~t~e)~env{"}%
+\begin{isamarkuptext}%
+\noindent
+Note that the lemma does not have a name, but is implicitly used in the proof
+of the theorem shown above because of the \isa{[simp]} attribute.
+
+But how can we be sure that \isa{norm} really produces a normal form in
+the above sense? We define a function that tests If-expressions for normality:%
+\end{isamarkuptext}%
+\isacommand{consts}~normal~::~{"}ifex~{\isasymRightarrow}~bool{"}\isanewline
+\isacommand{primrec}\isanewline
+{"}normal(CIF~b)~=~True{"}\isanewline
+{"}normal(VIF~x)~=~True{"}\isanewline
+{"}normal(IF~b~t~e)~=~(normal~t~{\isasymand}~normal~e~{\isasymand}\isanewline
+~~~~~(case~b~of~CIF~b~{\isasymRightarrow}~True~|~VIF~x~{\isasymRightarrow}~True~|~IF~x~y~z~{\isasymRightarrow}~False)){"}%
+\begin{isamarkuptext}%
+\noindent
+and prove \isa{normal(norm b)}. Of course, this requires a lemma about
+normality of \isa{normif}:%
+\end{isamarkuptext}%
+\isacommand{lemma}~[simp]:~{"}{\isasymforall}t~e.~normal(normif~b~t~e)~=~(normal~t~{\isasymand}~normal~e){"}\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/Itrev.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,79 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+We define a tail-recursive version of list-reversal,
+i.e.\ one that can be compiled into a loop:%
+\end{isamarkuptext}%
+\isacommand{consts}~itrev~::~{"}'a~list~{\isasymRightarrow}~'a~list~{\isasymRightarrow}~'a~list{"}\isanewline
+\isacommand{primrec}\isanewline
+{"}itrev~[]~~~~~ys~=~ys{"}\isanewline
+{"}itrev~(x\#xs)~ys~=~itrev~xs~(x\#ys){"}%
+\begin{isamarkuptext}%
+\noindent
+The behaviour of \isa{itrev} is simple: it reverses its first argument by
+stacking its elements onto the second argument, returning that second
+argument when the first one becomes empty.
+We need to show that \isa{itrev} does indeed reverse its first argument
+provided the second one is empty:%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}itrev~xs~[]~=~rev~xs{"}%
+\begin{isamarkuptxt}%
+\noindent
+There is no choice as to the induction variable, and we immediately simplify:%
+\end{isamarkuptxt}%
+\isacommand{apply}(induct\_tac~xs,~auto)%
+\begin{isamarkuptxt}%
+\noindent
+Unfortunately, this is not a complete success:
+\begin{isabellepar}%
+~1.~\dots~itrev~list~[]~=~rev~list~{\isasymLongrightarrow}~itrev~list~[a]~=~rev~list~@~[a]%
+\end{isabellepar}%
+Just as predicted above, the overall goal, and hence the induction
+hypothesis, is too weak to solve the induction step because of the fixed
+\isa{[]}. The corresponding heuristic:
+\begin{quote}
+{\em 3. Generalize goals for induction by replacing constants by variables.}
+\end{quote}
+
+Of course one cannot do this na\"{\i}vely: \isa{itrev xs ys = rev xs} is
+just not true---the correct generalization is%
+\end{isamarkuptxt}%
+\isacommand{lemma}~{"}itrev~xs~ys~=~rev~xs~@~ys{"}%
+\begin{isamarkuptxt}%
+\noindent
+If \isa{ys} is replaced by \isa{[]}, the right-hand side simplifies to
+\isa{rev xs}, just as required.
+
+In this particular instance it was easy to guess the right generalization,
+but in more complex situations a good deal of creativity is needed. This is
+the main source of complications in inductive proofs.
+
+Although we now have two variables, only \isa{xs} is suitable for
+induction, and we repeat our above proof attempt. Unfortunately, we are still
+not there:
+\begin{isabellepar}%
+~1.~{\isasymAnd}a~list.\isanewline
+~~~~~~~itrev~list~ys~=~rev~list~@~ys~{\isasymLongrightarrow}\isanewline
+~~~~~~~itrev~list~(a~\#~ys)~=~rev~list~@~a~\#~ys%
+\end{isabellepar}%
+The induction hypothesis is still too weak, but this time it takes no
+intuition to generalize: the problem is that \isa{ys} is fixed throughout
+the subgoal, but the induction hypothesis needs to be applied with
+\isa{a \# ys} instead of \isa{ys}. Hence we prove the theorem
+for all \isa{ys} instead of a fixed one:%
+\end{isamarkuptxt}%
+\isacommand{lemma}~{"}{\isasymforall}ys.~itrev~xs~ys~=~rev~xs~@~ys{"}%
+\begin{isamarkuptxt}%
+\noindent
+This time induction on \isa{xs} followed by simplification succeeds. This
+leads to another heuristic for generalization:
+\begin{quote}
+{\em 4. Generalize goals for induction by universally quantifying all free
+variables {\em(except the induction variable itself!)}.}
+\end{quote}
+This prevents trivial failures like the above and does not change the
+provability of the goal. Because it is not always required, and may even
+complicate matters in some cases, this heuristic is often not
+applied blindly.%
+\end{isamarkuptxt}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/Tree.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,13 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+\noindent
+Define the datatype of binary trees%
+\end{isamarkuptext}%
+\isacommand{datatype}~'a~tree~=~Tip~|~Node~{"}'a~tree{"}~'a~{"}'a~tree{"}%
+\begin{isamarkuptext}%
+\noindent
+and a function \isa{mirror} that mirrors a binary tree
+by swapping subtrees (recursively). Prove%
+\end{isamarkuptext}%
+\isacommand{lemma}~mirror\_mirror:~{"}mirror(mirror~t)~=~t{"}\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/arith1.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,3 @@
+\begin{isabelle}%
+\isacommand{lemma}~{"}{\isasymlbrakk}~{\isasymnot}~m~<~n;~m~<~n+1~{\isasymrbrakk}~{\isasymLongrightarrow}~m~=~n{"}\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/arith2.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,4 @@
+\begin{isabelle}%
+\isacommand{lemma}~{"}min~i~(max~j~(k*k))~=~max~(min~(k*k)~i)~(min~i~(j::nat)){"}\isanewline
+\isacommand{apply}(arith)\isacommand{.}\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/arith3.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,3 @@
+\begin{isabelle}%
+\isacommand{lemma}~{"}n*n~=~n~{\isasymLongrightarrow}~n=0~{\isasymor}~n=1{"}\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/arith4.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,3 @@
+\begin{isabelle}%
+\isacommand{lemma}~{"}{\isasymnot}~m~<~n~{\isasymand}~m~<~n+1~{\isasymLongrightarrow}~m~=~n{"}\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/asm_simp.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,45 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+By default, assumptions are part of the simplification process: they are used
+as simplification rules and are simplified themselves. For example:%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}{\isasymlbrakk}~xs~@~zs~=~ys~@~xs;~[]~@~xs~=~[]~@~[]~{\isasymrbrakk}~{\isasymLongrightarrow}~ys~=~zs{"}\isanewline
+\isacommand{apply}~simp\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+The second assumption simplifies to \isa{xs = []}, which in turn
+simplifies the first assumption to \isa{zs = ys}, thus reducing the
+conclusion to \isa{ys = ys} and hence to \isa{True}.
+
+In some cases this may be too much of a good thing and may lead to
+nontermination:%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}{\isasymforall}x.~f~x~=~g~(f~(g~x))~{\isasymLongrightarrow}~f~[]~=~f~[]~@~[]{"}%
+\begin{isamarkuptxt}%
+\noindent
+cannot be solved by an unmodified application of \isa{simp} because the
+simplification rule \isa{f x = g(f(g x))} extracted from the assumption
+does not terminate. Isabelle notices certain simple forms of
+nontermination but not this one. The problem can be circumvented by
+explicitly telling the simplifier to ignore the assumptions:%
+\end{isamarkuptxt}%
+\isacommand{apply}(simp~no\_asm:)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+There are three modifiers that influence the treatment of assumptions:
+\begin{description}
+\item[\isaindexbold{no_asm:}] means that assumptions are completely ignored.
+\item[\isaindexbold{no_asm_simp:}] means that the assumptions are not simplified but
+ are used in the simplification of the conclusion.
+\item[\isaindexbold{no_asm_use:}] means that the assumptions are simplified
+but are not
+ used in the simplification of each other or the conclusion.
+\end{description}
+Neither \isa{no_asm_simp:} nor \isa{no_asm_use:} suffices to simplify the
+problematic subgoal above.
+
+Note that only one of the above modifiers is allowed, and it must precede all
+other modifiers.%
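+
+In the notation used above, the other two modifiers are supplied in the same
+position; \(thms\) below is merely a placeholder for an arbitrary list of
+additional rewrite rules:
+\begin{ttbox}
+apply(simp no_asm_simp:)
+apply(simp no_asm_use: add: \(thms\))
+\end{ttbox}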
+\end{isamarkuptext}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/case_splits.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,65 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+Goals containing \isaindex{if}-expressions are usually proved by case
+distinction on the condition of the \isa{if}. For example the goal%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}{\isasymforall}xs.~if~xs~=~[]~then~rev~xs~=~[]~else~rev~xs~{\isasymnoteq}~[]{"}%
+\begin{isamarkuptxt}%
+\noindent
+can be split into
+\begin{isabellepar}%
+~1.~{\isasymforall}xs.~(xs~=~[]~{\isasymlongrightarrow}~rev~xs~=~[])~{\isasymand}~(xs~{\isasymnoteq}~[]~{\isasymlongrightarrow}~rev~xs~{\isasymnoteq}~[])%
+\end{isabellepar}%
+by a degenerate form of simplification%
+\end{isamarkuptxt}%
+\isacommand{apply}(simp~only:~split:~split\_if)%
+\begin{isamarkuptext}%
+\noindent
+where no simplification rules are included (\isa{only:} is followed by the
+empty list of theorems) but the rule \isaindexbold{split_if} for
+splitting \isa{if}s is added (via the modifier \isa{split:}). Because
+case-splitting on \isa{if}s is almost always the right proof strategy, the
+simplifier performs it automatically. Try \isacommand{apply}\isa{(simp)}
+on the initial goal above.
+
+This splitting idea generalizes from \isa{if} to \isaindex{case}:%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}(case~xs~of~[]~{\isasymRightarrow}~zs~|~y\#ys~{\isasymRightarrow}~y\#(ys@zs))~=~xs@zs{"}%
+\begin{isamarkuptxt}%
+\noindent
+becomes
+\begin{isabellepar}%
+~1.~(xs~=~[]~{\isasymlongrightarrow}~zs~=~xs~@~zs)~{\isasymand}\isanewline
+~~~~({\isasymforall}a~list.~xs~=~a~\#~list~{\isasymlongrightarrow}~a~\#~list~@~zs~=~xs~@~zs)%
+\end{isabellepar}%
+by typing%
+\end{isamarkuptxt}%
+\isacommand{apply}(simp~only:~split:~list.split)%
+\begin{isamarkuptext}%
+\noindent
+In contrast to \isa{if}-expressions, the simplifier does not split
+\isa{case}-expressions by default because this can lead to nontermination
+in case of recursive datatypes. Again, if the \isa{only:} modifier is
+dropped, the above goal is solved, which \isacommand{apply}\isa{(simp)}
+alone will not do:%
+\end{isamarkuptext}%
+\isacommand{apply}(simp~split:~list.split)\isacommand{.}%
+\begin{isamarkuptext}%
+In general, every datatype $t$ comes with a theorem
+\isa{$t$.split} which can be declared to be a \bfindex{split rule} either
+locally as above, or by giving it the \isa{split} attribute globally:%
+\end{isamarkuptext}%
+\isacommand{theorems}~[split]~=~list.split%
+\begin{isamarkuptext}%
+\noindent
+The \isa{split} attribute can be removed with the \isa{del} modifier,
+either locally%
+\end{isamarkuptext}%
+\isacommand{apply}(simp~split~del:~split\_if)%
+\begin{isamarkuptext}%
+\noindent
+or globally:%
+\end{isamarkuptext}%
+\isacommand{theorems}~[split~del]~=~list.split\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/cases.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,12 @@
+\begin{isabelle}%
+\isanewline
+\isacommand{lemma}~{"}(case~xs~of~[]~{\isasymRightarrow}~[]~|~y\#ys~{\isasymRightarrow}~xs)~=~xs{"}\isanewline
+\isacommand{apply}(case\_tac~xs)%
+\begin{isamarkuptxt}%
+\begin{isabellepar}%
+~1.~xs~=~[]~{\isasymLongrightarrow}~(case~xs~of~[]~{\isasymRightarrow}~[]~|~y~\#~ys~{\isasymRightarrow}~xs)~=~xs\isanewline
+~2.~{\isasymAnd}a~list.~xs=a\#list~{\isasymLongrightarrow}~(case~xs~of~[]~{\isasymRightarrow}~[]~|~y\#ys~{\isasymRightarrow}~xs)~=~xs%
+\end{isabellepar}%%
+\end{isamarkuptxt}%
+\isacommand{apply}(auto)\isacommand{.}\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/cond_rewr.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,29 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+So far all examples of rewrite rules were equations. The simplifier also
+accepts \emph{conditional} equations, for example%
+\end{isamarkuptext}%
+\isacommand{lemma}~hd\_Cons\_tl[simp]:~{"}xs~{\isasymnoteq}~[]~~{\isasymLongrightarrow}~~hd~xs~\#~tl~xs~=~xs{"}\isanewline
+\isacommand{apply}(case\_tac~xs,~simp,~simp)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+Note the use of ``\ttindexboldpos{,}{$Isar}'' to string together a
+sequence of methods. Assuming that the simplification rule%
+\end{isamarkuptext}%
+~{"}(rev~xs~=~[])~=~(xs~=~[]){"}%
+\begin{isamarkuptext}%
+\noindent
+is present as well,%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}xs~{\isasymnoteq}~[]~{\isasymLongrightarrow}~hd(rev~xs)~\#~tl(rev~xs)~=~rev~xs{"}%
+\begin{isamarkuptext}%
+\noindent
+is proved by simplification:
+the conditional equation \isa{hd_Cons_tl} above
+can simplify \isa{hd(rev~xs)~\#~tl(rev~xs)} to \isa{rev xs}
+because the corresponding precondition \isa{rev xs \isasymnoteq\ []}
+simplifies to \isa{xs \isasymnoteq\ []}, which is exactly the local
+assumption of the subgoal.%
+\end{isamarkuptext}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/def_rewr.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,44 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+\noindent Constant definitions (\S\ref{sec:ConstDefinitions}) can
+be used as simplification rules, but by default they are not. Hence the
+simplifier does not expand them automatically, just as it should be:
+definitions are introduced for the purpose of abbreviating complex
+concepts. Of course we need to expand the definitions initially to derive
+enough lemmas that characterize the concept sufficiently for us to forget the
+original definition. For example, given%
+\end{isamarkuptext}%
+\isacommand{constdefs}~exor~::~{"}bool~{\isasymRightarrow}~bool~{\isasymRightarrow}~bool{"}\isanewline
+~~~~~~~~~{"}exor~A~B~{\isasymequiv}~(A~{\isasymand}~{\isasymnot}B)~{\isasymor}~({\isasymnot}A~{\isasymand}~B){"}%
+\begin{isamarkuptext}%
+\noindent
+we may want to prove%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}exor~A~({\isasymnot}A){"}%
+\begin{isamarkuptxt}%
+\noindent
+There is a special method for \emph{unfolding} definitions, which we need to
+get started:\indexbold{*unfold}\indexbold{definition!unfolding}%
+\end{isamarkuptxt}%
+\isacommand{apply}(unfold~exor\_def)%
+\begin{isamarkuptxt}%
+\noindent
+It unfolds the given list of definitions (here merely one) in all subgoals:
+\begin{isabellepar}%
+~1.~A~{\isasymand}~{\isasymnot}~{\isasymnot}~A~{\isasymor}~{\isasymnot}~A~{\isasymand}~{\isasymnot}~A%
+\end{isabellepar}%
+The resulting goal can be proved by simplification.
+
+In case we want to expand a definition in the middle of a proof, we can
+simply include it locally:%
+\end{isamarkuptxt}%
+\isacommand{apply}(simp~add:~exor\_def)%
+\begin{isamarkuptext}%
+\noindent
+In fact, this one command proves the above lemma directly.
+
+You should normally not turn a definition permanently into a simplification
+rule because this defeats the whole purpose of an abbreviation.%
+\end{isamarkuptext}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/fakenat.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,8 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+\noindent
+The type \isaindexbold{nat} of natural numbers is predefined and
+behaves like%
+\end{isamarkuptext}%
+\isacommand{datatype}~nat~=~{"}0{"}~|~Suc~nat\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/let_rewr.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,8 @@
+\begin{isabelle}%
+\isacommand{lemma}~{"}(let~xs~=~[]~in~xs@ys@xs)~=~ys{"}\isanewline
+\isacommand{apply}(simp~add:~Let\_def)\isacommand{.}%
+\begin{isamarkuptext}%
+If, in a particular context, there is no danger of a combinatorial explosion
+of nested \texttt{let}s, one could even add \isa{Let_def} permanently:%
+\end{isamarkuptext}%
+\isacommand{theorems}~[simp]~=~Let\_def\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/natsum.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,22 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+\noindent
+In particular, there are \isa{case}-expressions, for example%
+\end{isamarkuptext}%
+~{"}case~n~of~0~{\isasymRightarrow}~0~|~Suc~m~{\isasymRightarrow}~m{"}%
+\begin{isamarkuptext}%
+\noindent
+primitive recursion, for example%
+\end{isamarkuptext}%
+\isacommand{consts}~sum~::~{"}nat~{\isasymRightarrow}~nat{"}\isanewline
+\isacommand{primrec}~{"}sum~0~=~0{"}\isanewline
+~~~~~~~~{"}sum~(Suc~n)~=~Suc~n~+~sum~n{"}%
+\begin{isamarkuptext}%
+\noindent
+and induction, for example%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}sum~n~+~sum~n~=~n*(Suc~n){"}\isanewline
+\isacommand{apply}(induct\_tac~n)\isanewline
+\isacommand{apply}(auto)\isacommand{.}\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/pairs.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,3 @@
+\begin{isabelle}%
+~{"}let~(x,y)~=~f~z~in~(y,x){"}~{"}case~xs~of~[]~{\isasymRightarrow}~0~|~(x,y)\#zs~{\isasymRightarrow}~x+y{"}\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/prime_def.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,11 @@
+\begin{isabelle}%
+\isanewline
+~~~~{"}prime(p)~{\isasymequiv}~1~<~p~{\isasymand}~((m~dvd~p)~{\isasymlongrightarrow}~(m=1~{\isasymor}~m=p)){"}%
+\begin{isamarkuptext}%
+\noindent\small
+where \isa{dvd} means ``divides''.
+Isabelle rejects this ``definition'' because of the extra \isa{m} on the
+right-hand side, which would introduce an inconsistency. (Why?) What you
+should have written is%
+\end{isamarkuptext}%
+~{"}prime(p)~{\isasymequiv}~1~<~p~{\isasymand}~({\isasymforall}m.~(m~dvd~p)~{\isasymlongrightarrow}~(m=1~{\isasymor}~m=p)){"}\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/trace_simp.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,39 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+Using the simplifier effectively may take a bit of experimentation. Set the
+\ttindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
+on:%
+\end{isamarkuptext}%
+\isacommand{ML}~{"}set~trace\_simp{"}\isanewline
+\isacommand{lemma}~{"}rev~[a]~=~[]{"}\isanewline
+\isacommand{apply}(simp)%
+\begin{isamarkuptxt}%
+\noindent
+produces the trace
+
+\begin{ttbox}
+Applying instance of rewrite rule:
+rev (?x1 \# ?xs1) == rev ?xs1 @ [?x1]
+Rewriting:
+rev [x] == rev [] @ [x]
+Applying instance of rewrite rule:
+rev [] == []
+Rewriting:
+rev [] == []
+Applying instance of rewrite rule:
+[] @ ?y == ?y
+Rewriting:
+[] @ [x] == [x]
+Applying instance of rewrite rule:
+?x3 \# ?t3 = ?t3 == False
+Rewriting:
+[x] = [] == False
+\end{ttbox}
+
+In more complicated cases, the trace can be quite lengthy, especially since
+invocations of the simplifier are often nested (e.g.\ when solving conditions
+of rewrite rules). Thus it is advisable to reset the flag afterwards:%
+\end{isamarkuptxt}%
+\isacommand{ML}~{"}reset~trace\_simp{"}\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Misc/document/types.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,41 @@
+\begin{isabelle}%
+\isacommand{types}~number~~~~~~~=~nat\isanewline
+~~~~~~gate~~~~~~~~~=~{"}bool~{\isasymRightarrow}~bool~{\isasymRightarrow}~bool{"}\isanewline
+~~~~~~('a,'b)alist~=~{"}('a~*~'b)list{"}%
+\begin{isamarkuptext}%
+\noindent\indexbold{*types}%
+Internally all synonyms are fully expanded. As a consequence Isabelle's
+output never contains synonyms. Their main purpose is to improve the
+readability of theory definitions. Synonyms can be used just like any other
+type:%
+\end{isamarkuptext}%
+\isacommand{consts}~nand~::~gate\isanewline
+~~~~~~~exor~::~gate%
+\begin{isamarkuptext}%
+\subsection{Constant definitions}
+\label{sec:ConstDefinitions}
+\indexbold{definition}
+
+The above constants \isa{nand} and \isa{exor} are non-recursive and can
+therefore be defined directly by%
+\end{isamarkuptext}%
+\isacommand{defs}~nand\_def:~{"}nand~A~B~{\isasymequiv}~{\isasymnot}(A~{\isasymand}~B){"}\isanewline
+~~~~~exor\_def:~{"}exor~A~B~{\isasymequiv}~A~{\isasymand}~{\isasymnot}B~{\isasymor}~{\isasymnot}A~{\isasymand}~B{"}%
+\begin{isamarkuptext}%
+\noindent%
+where \isacommand{defs}\indexbold{*defs} is a keyword and \isa{nand_def} and
+\isa{exor_def} are arbitrary user-supplied names.
+The symbol \indexboldpos{\isasymequiv}{$IsaEq} is a special form of equality
+that should only be used in constant definitions.
+Declarations and definitions can also be merged%
+\end{isamarkuptext}%
+\isacommand{constdefs}~nor~::~gate\isanewline
+~~~~~~~~~{"}nor~A~B~{\isasymequiv}~{\isasymnot}(A~{\isasymor}~B){"}\isanewline
+~~~~~~~~~~exor2~::~gate\isanewline
+~~~~~~~~~{"}exor2~A~B~{\isasymequiv}~(A~{\isasymor}~B)~{\isasymand}~({\isasymnot}A~{\isasymor}~{\isasymnot}B){"}%
+\begin{isamarkuptext}%
+\noindent\indexbold{*constdefs}%
+in which case the default name of each definition is \isa{$f$_def}, where
+$f$ is the name of the defined constant.%
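+
+For example, the definition of \isa{nor} above is then accessible under the
+name \isa{nor_def} and could be inspected with the \isacommand{thm} command:
+\begin{ttbox}
+thm nor_def
+\end{ttbox}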
+\end{isamarkuptext}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Recdef/document/Induction.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,65 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+Assuming we have defined our function such that Isabelle could prove
+termination and that the recursion equations (or some suitable derived
+equations) are simplification rules, we might like to prove something about
+our function. Since the function is recursive, the natural proof principle is
+again induction. But this time the structural form of induction that comes
+with datatypes is unlikely to work well---otherwise we could have defined the
+function by \isacommand{primrec}. Therefore \isacommand{recdef} automatically
+proves a suitable induction rule $f$\isa{.induct} that follows the
+recursion pattern of the particular function $f$. We call this
+\textbf{recursion induction}. Roughly speaking, it
+requires you to prove for each \isacommand{recdef} equation that the property
+you are trying to establish holds for the left-hand side provided it holds
+for all recursive calls on the right-hand side. Here is a simple example:%
+\end{isamarkuptext}%
+\isacommand{lemma}~{"}map~f~(sep(x,xs))~=~sep(f~x,~map~f~xs){"}%
+\begin{isamarkuptxt}%
+\noindent
+involving the predefined \isa{map} functional on lists: \isa{map f xs}
+is the result of applying \isa{f} to all elements of \isa{xs}. We prove
+this lemma by recursion induction w.r.t.\ \isa{sep}:%
+\end{isamarkuptxt}%
+\isacommand{apply}(induct\_tac~x~xs~rule:~sep.induct)%
+\begin{isamarkuptxt}%
+\noindent
+The resulting proof state has three subgoals corresponding to the three
+clauses for \isa{sep}:
+\begin{isabellepar}%
+~1.~{\isasymAnd}a.~map~f~(sep~(a,~[]))~=~sep~(f~a,~map~f~[])\isanewline
+~2.~{\isasymAnd}a~x.~map~f~(sep~(a,~[x]))~=~sep~(f~a,~map~f~[x])\isanewline
+~3.~{\isasymAnd}a~x~y~zs.\isanewline
+~~~~~~~map~f~(sep~(a,~y~\#~zs))~=~sep~(f~a,~map~f~(y~\#~zs))~{\isasymLongrightarrow}\isanewline
+~~~~~~~map~f~(sep~(a,~x~\#~y~\#~zs))~=~sep~(f~a,~map~f~(x~\#~y~\#~zs))%
+\end{isabellepar}%
+The rest is pure simplification:%
+\end{isamarkuptxt}%
+\isacommand{apply}~auto\isacommand{.}%
+\begin{isamarkuptext}%
+Try proving the above lemma by structural induction, and you find that you
+need an additional case distinction. What is worse, the names of variables
+are invented by Isabelle and have nothing to do with the names in the
+definition of \isa{sep}.
+
+In general, the format of invoking recursion induction is
+\begin{ttbox}
+apply(induct_tac \(x@1 \dots x@n\) rule: \(f\).induct)
+\end{ttbox}\index{*induct_tac}%
+where $x@1~\dots~x@n$ is a list of free variables in the subgoal and $f$ the
+name of a function that takes an $n$-tuple. Usually the subgoal will
+contain the term $f~x@1~\dots~x@n$ but this need not be the case. The
+induction rules do not mention $f$ at all. For example \isa{sep.induct}
+\begin{isabellepar}%
+{\isasymlbrakk}~{\isasymAnd}a.~?P~a~[];\isanewline
+~~{\isasymAnd}a~x.~?P~a~[x];\isanewline
+~~{\isasymAnd}a~x~y~zs.~?P~a~(y~\#~zs)~{\isasymLongrightarrow}~?P~a~(x~\#~y~\#~zs){\isasymrbrakk}\isanewline
+{\isasymLongrightarrow}~?P~?u~?v%
+\end{isabellepar}%
+merely says that in order to prove a property \isa{?P} of \isa{?u} and
+\isa{?v} you need to prove it for the three cases where \isa{?v} is the
+empty list, the singleton list, and the list with at least two elements
+(in which case you may assume it holds for the tail of that list).%
+\end{isamarkuptext}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Recdef/document/examples.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,80 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+Here is a simple example, the Fibonacci function:%
+\end{isamarkuptext}%
+\isacommand{consts}~fib~::~{"}nat~{\isasymRightarrow}~nat{"}\isanewline
+\isacommand{recdef}~fib~{"}measure({\isasymlambda}n.~n){"}\isanewline
+~~{"}fib~0~=~0{"}\isanewline
+~~{"}fib~1~=~1{"}\isanewline
+~~{"}fib~(Suc(Suc~x))~=~fib~x~+~fib~(Suc~x){"}%
+\begin{isamarkuptext}%
+\noindent
+The definition of \isa{fib} is accompanied by a \bfindex{measure function}
+\isa{\isasymlambda{}n.$\;$n} that maps the argument of \isa{fib} to a
+natural number. The requirement is that in each equation the measure of the
+argument on the left-hand side is strictly greater than the measure of the
+argument of each recursive call. In the case of \isa{fib} this is
+obviously true because the measure function is the identity and
+\isa{Suc(Suc~x)} is strictly greater than both \isa{x} and
+\isa{Suc~x}.
+
+Slightly more interesting is the insertion of a fixed element
+between any two elements of a list:%
+\end{isamarkuptext}%
+\isacommand{consts}~sep~::~{"}'a~*~'a~list~{\isasymRightarrow}~'a~list{"}\isanewline
+\isacommand{recdef}~sep~{"}measure~({\isasymlambda}(a,xs).~length~xs){"}\isanewline
+~~{"}sep(a,~[])~~~~~=~[]{"}\isanewline
+~~{"}sep(a,~[x])~~~~=~[x]{"}\isanewline
+~~{"}sep(a,~x\#y\#zs)~=~x~\#~a~\#~sep(a,y\#zs){"}%
+\begin{isamarkuptext}%
+\noindent
+This time the measure is the length of the list, which decreases with the
+recursive call; the first component of the argument tuple is irrelevant.
+
+Pattern matching need not be exhaustive:%
+\end{isamarkuptext}%
+\isacommand{consts}~last~::~{"}'a~list~{\isasymRightarrow}~'a{"}\isanewline
+\isacommand{recdef}~last~{"}measure~({\isasymlambda}xs.~length~xs){"}\isanewline
+~~{"}last~[x]~~~~~~=~x{"}\isanewline
+~~{"}last~(x\#y\#zs)~=~last~(y\#zs){"}%
+\begin{isamarkuptext}%
+Overlapping patterns are disambiguated by taking the order of equations into
+account, just as in functional programming:%
+\end{isamarkuptext}%
+\isacommand{consts}~sep1~::~{"}'a~*~'a~list~{\isasymRightarrow}~'a~list{"}\isanewline
+\isacommand{recdef}~sep1~{"}measure~({\isasymlambda}(a,xs).~length~xs){"}\isanewline
+~~{"}sep1(a,~x\#y\#zs)~=~x~\#~a~\#~sep1(a,y\#zs){"}\isanewline
+~~{"}sep1(a,~xs)~~~~~=~xs{"}%
+\begin{isamarkuptext}%
+\noindent
+This defines exactly the same function as \isa{sep} above, i.e.\
+\isa{sep1 = sep}.
+
+\begin{warn}
+ \isacommand{recdef} only takes the first argument of a (curried)
+ recursive function into account. This means both the termination measure
+ and pattern matching can only use that first argument. In general, you will
+ therefore have to combine several arguments into a tuple. In case only one
+ argument is relevant for termination, you can also rearrange the order of
+ arguments as in the following definition:
+\end{warn}%
+\end{isamarkuptext}%
+\isacommand{consts}~sep2~::~{"}'a~list~{\isasymRightarrow}~'a~{\isasymRightarrow}~'a~list{"}\isanewline
+\isacommand{recdef}~sep2~{"}measure~length{"}\isanewline
+~~{"}sep2~(x\#y\#zs)~=~({\isasymlambda}a.~x~\#~a~\#~sep2~zs~a){"}\isanewline
+~~{"}sep2~xs~~~~~~~=~({\isasymlambda}a.~xs){"}%
+\begin{isamarkuptext}%
+Because of its pattern-matching syntax, \isacommand{recdef} is also useful
+for the definition of non-recursive functions:%
+\end{isamarkuptext}%
+\isacommand{consts}~swap12~::~{"}'a~list~{\isasymRightarrow}~'a~list{"}\isanewline
+\isacommand{recdef}~swap12~{"}{\isabraceleft}{\isabraceright}{"}\isanewline
+~~{"}swap12~(x\#y\#zs)~=~y\#x\#zs{"}\isanewline
+~~{"}swap12~zs~~~~~~~=~zs{"}%
+\begin{isamarkuptext}%
+\noindent
+In the non-recursive case the termination measure degenerates to the empty
+set \isa{\{\}}.%
+\end{isamarkuptext}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Recdef/document/simplification.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,90 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+Once we have succeeded in proving all termination conditions, the recursion
+equations become simplification rules, just as with
+\isacommand{primrec}. In most cases this works fine, but there is a subtle
+problem that must be mentioned: simplification may not
+terminate because of automatic splitting of \isa{if}.
+Let us look at an example:%
+\end{isamarkuptext}%
+\isacommand{consts}~gcd~::~{"}nat*nat~{\isasymRightarrow}~nat{"}\isanewline
+\isacommand{recdef}~gcd~{"}measure~({\isasymlambda}(m,n).n){"}\isanewline
+~~{"}gcd~(m,~n)~=~(if~n=0~then~m~else~gcd(n,~m~mod~n)){"}%
+\begin{isamarkuptext}%
+\noindent
+According to the measure function, the second argument should decrease with
+each recursive call. The resulting termination condition%
+\end{isamarkuptext}%
+~{"}n~{\isasymnoteq}~0~{\isasymLongrightarrow}~m~mod~n~<~n{"}%
+\begin{isamarkuptext}%
+\noindent
+is proved automatically because it is already present as a lemma in the
+arithmetic library. Thus the recursion equation becomes a simplification
+rule. Of course rewriting with this equation does not terminate if we are allowed to unfold
+the recursive call inside the \isa{else} branch, which is why programming
+languages and our simplifier don't do that. Unfortunately the simplifier does
+something else which leads to the same problem: it splits \isa{if}s if the
+condition simplifies to neither \isa{True} nor \isa{False}. For
+example, simplification reduces%
+\end{isamarkuptext}%
+~{"}gcd(m,n)~=~k{"}%
+\begin{isamarkuptext}%
+\noindent
+in one step to%
+\end{isamarkuptext}%
+~{"}(if~n=0~then~m~else~gcd(n,~m~mod~n))~=~k{"}%
+\begin{isamarkuptext}%
+\noindent
+where the condition cannot be reduced further, and splitting leads to%
+\end{isamarkuptext}%
+~{"}(n=0~{\isasymlongrightarrow}~m=k)~{\isasymand}~(n{\isasymnoteq}0~{\isasymlongrightarrow}~gcd(n,~m~mod~n)=k){"}%
+\begin{isamarkuptext}%
+\noindent
+Since the recursive call \isa{gcd(n, m mod n)} is no longer protected by
+an \isa{if}, this leads to an infinite chain of simplification steps.
+Fortunately, this problem can be avoided in many different ways.
+
+Of course the most radical solution is to disable the offending
+\isa{split_if} as shown in the section on case splits in
+\S\ref{sec:SimpFeatures}.
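+In the notation of that section, the local and global variants would be
+written roughly as follows:
+\begin{ttbox}
+apply(simp split del: split_if)
+theorems [split del] = split_if
+\end{ttbox}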
+However, we do not recommend this because it means you will often have to
+invoke the rule explicitly when \isa{if} is involved.
+
+If possible, the definition should be given by pattern matching on the left
+rather than \isa{if} on the right. In the case of \isa{gcd} the
+following alternative definition suggests itself:%
+\end{isamarkuptext}%
+\isacommand{consts}~gcd1~::~{"}nat*nat~{\isasymRightarrow}~nat{"}\isanewline
+\isacommand{recdef}~gcd1~{"}measure~({\isasymlambda}(m,n).n){"}\isanewline
+~~{"}gcd1~(m,~0)~=~m{"}\isanewline
+~~{"}gcd1~(m,~n)~=~gcd1(n,~m~mod~n){"}%
+\begin{isamarkuptext}%
+\noindent
+Note that the order of equations is important and hides the side condition
+\isa{n \isasymnoteq\ 0}. Unfortunately, in general the case distinction
+may not be expressible by pattern matching.
+
+A very simple alternative is to replace \isa{if} by \isa{case}, which
+is also available for \isa{bool} but is not split automatically:%
+\end{isamarkuptext}%
+\isacommand{consts}~gcd2~::~{"}nat*nat~{\isasymRightarrow}~nat{"}\isanewline
+\isacommand{recdef}~gcd2~{"}measure~({\isasymlambda}(m,n).n){"}\isanewline
+~~{"}gcd2(m,n)~=~(case~n=0~of~True~{\isasymRightarrow}~m~|~False~{\isasymRightarrow}~gcd2(n,m~mod~n)){"}%
+\begin{isamarkuptext}%
+\noindent
+In fact, this is probably the neatest solution next to pattern matching.
+
+A final alternative is to replace the offending simplification rules by
+derived conditional ones. For \isa{gcd} it means we have to prove%
+\end{isamarkuptext}%
+\isacommand{lemma}~[simp]:~{"}gcd~(m,~0)~=~m{"}\isanewline
+\isacommand{apply}(simp)\isacommand{.}\isanewline
+\isacommand{lemma}~[simp]:~{"}n~{\isasymnoteq}~0~{\isasymLongrightarrow}~gcd(m,~n)~=~gcd(n,~m~mod~n){"}\isanewline
+\isacommand{apply}(simp)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+after which we can disable the original simplification rule:%
+\end{isamarkuptext}%
+\isacommand{lemmas}~[simp~del]~=~gcd.simps\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Recdef/document/termination.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,89 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+When a function is defined via \isacommand{recdef}, Isabelle tries to prove
+its termination with the help of the user-supplied measure. All of the above
+examples are simple enough that Isabelle can prove automatically that the
+measure of the argument goes down in each recursive call. In that case
+$f$.\isa{simps} contains the defining equations (or variants derived from
+them) as theorems. For example, look (via \isacommand{thm}) at
+\isa{sep.simps} and \isa{sep1.simps} to see that they define the same
+function. What is more, those equations are automatically declared as
+simplification rules.
+
+In general, Isabelle may not be able to prove all termination conditions
+(there is one for each recursive call) automatically. For example,
+termination of the following artificial function%
+\end{isamarkuptext}%
+\isacommand{consts}~f~::~{"}nat*nat~{\isasymRightarrow}~nat{"}\isanewline
+\isacommand{recdef}~f~{"}measure({\isasymlambda}(x,y).~x-y){"}\isanewline
+~~{"}f(x,y)~=~(if~x~{\isasymle}~y~then~x~else~f(x,y+1)){"}%
+\begin{isamarkuptext}%
+\noindent
+is not proved automatically (although maybe it should be). Isabelle prints a
+kind of error message showing you what it was unable to prove. You will then
+have to prove it as a separate lemma before you attempt the definition
+of your function once more. In our case the required lemma is the obvious one:%
+\end{isamarkuptext}%
+\isacommand{lemma}~termi\_lem[simp]:~{"}{\isasymnot}~x~{\isasymle}~y~{\isasymLongrightarrow}~x~-~Suc~y~<~x~-~y{"}%
+\begin{isamarkuptxt}%
+\noindent
+It was not proved automatically because of the special nature of \isa{-}
+on \isa{nat}. This requires more arithmetic than is tried by default:%
+\end{isamarkuptxt}%
+\isacommand{apply}(arith)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+Because \isacommand{recdef}'s termination prover involves simplification,
+we have declared our lemma a simplification rule. Therefore our second
+attempt to define our function will automatically take it into account:%
+\end{isamarkuptext}%
+\isacommand{consts}~g~::~{"}nat*nat~{\isasymRightarrow}~nat{"}\isanewline
+\isacommand{recdef}~g~{"}measure({\isasymlambda}(x,y).~x-y){"}\isanewline
+~~{"}g(x,y)~=~(if~x~{\isasymle}~y~then~x~else~g(x,y+1)){"}%
+\begin{isamarkuptext}%
+\noindent
+This time everything works fine. Now \isa{g.simps} contains precisely the
+stated recursion equation for \isa{g}, which is also a simplification
+rule. Thus we can automatically prove
+\end{isamarkuptext}%
+\isacommand{theorem}~wow:~{"}g(1,0)~=~g(1,1){"}\isanewline
+\isacommand{apply}(simp)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+More exciting theorems require induction, which is discussed below.
+
+Because lemma \isa{termi_lem} above was only turned into a
+simplification rule for the sake of the termination proof, we may want to
+disable it again:%
+\end{isamarkuptext}%
+\isacommand{lemmas}~[simp~del]~=~termi\_lem%
+\begin{isamarkuptext}%
+The attentive reader may wonder why we chose to call our function \isa{g}
+rather than \isa{f} the second time around. The reason is that, despite
+the failed termination proof, the definition of \isa{f} did not
+fail (and thus we could not define it a second time). However, all theorems
+about \isa{f}, for example \isa{f.simps}, carry as a precondition the
+unproved termination condition. Moreover, the theorems \isa{f.simps} are
+not simplification rules. However, this mechanism allows a delayed proof of
+termination: instead of proving \isa{termi_lem} up front, we could prove
+it later on and then use it to remove the preconditions from the theorems
+about \isa{f}. In most cases this is more cumbersome than proving things
+up front.
+%FIXME, with one exception: nested recursion.
+
+Although all the above examples employ measure functions, \isacommand{recdef}
+allows arbitrary wellfounded relations. For example, termination of
+Ackermann's function requires the lexicographic product \isa{<*lex*>}:%
+\end{isamarkuptext}%
+\isacommand{consts}~ack~::~{"}nat*nat~{\isasymRightarrow}~nat{"}\isanewline
+\isacommand{recdef}~ack~{"}measure(\%m.~m)~<*lex*>~measure(\%n.~n){"}\isanewline
+~~{"}ack(0,n)~~~~~~~~~=~Suc~n{"}\isanewline
+~~{"}ack(Suc~m,0)~~~~~=~ack(m,~1){"}\isanewline
+~~{"}ack(Suc~m,Suc~n)~=~ack(m,ack(Suc~m,n)){"}%
+\begin{isamarkuptext}%
+\noindent
+For details see the manual~\cite{isabelle-HOL} and the examples in the
+library.%
+\end{isamarkuptext}%
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/ToyList/document/ToyList.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,326 @@
+\begin{isabelle}%
+\isacommand{theory}~ToyList~=~PreList:%
+\begin{isamarkuptext}%
+\noindent
+HOL already has a predefined theory of lists called \texttt{List} ---
+\texttt{ToyList} is merely a small fragment of it chosen as an example. In
+contrast to what is recommended in \S\ref{sec:Basic:Theories},
+\texttt{ToyList} is not based on \texttt{Main} but on \texttt{PreList}, a
+theory that contains pretty much everything but lists, thus avoiding
+ambiguities caused by defining lists twice.%
+\end{isamarkuptext}%
+\isacommand{datatype}~'a~list~=~Nil~~~~~~~~~~~~~~~~~~~~~~~~~~({"}[]{"})\isanewline
+~~~~~~~~~~~~~~~~~|~Cons~'a~{"}'a~list{"}~~~~~~~~~~~~(\isakeyword{infixr}~{"}\#{"}~65)%
+\begin{isamarkuptext}%
+\noindent
+The datatype\index{*datatype} \isaindexbold{list} introduces two
+constructors \isaindexbold{Nil} and \isaindexbold{Cons}, the
+empty list and the operator that adds an element to the front of a list. For
+example, the term \isa{Cons True (Cons False Nil)} is a value of type
+\isa{bool~list}, namely the list with the elements \isa{True} and
+\isa{False}. Because this notation becomes unwieldy very quickly, the
+datatype declaration is annotated with an alternative syntax: instead of
+\isa{Nil} and \isa{Cons~$x$~$xs$} we can write
+\isa{[]}\index{$HOL2list@\texttt{[]}|bold} and
+\isa{$x$~\#~$xs$}\index{$HOL2list@\texttt{\#}|bold}. In fact, this
+alternative syntax is the standard syntax. Thus the list \isa{Cons True
+(Cons False Nil)} becomes \isa{True \# False \# []}. The annotation
+\isacommand{infixr}\indexbold{*infixr} means that \isa{\#} associates to
+the right, i.e.\ the term \isa{$x$ \# $y$ \# $z$} is read as \isa{$x$
+\# ($y$ \# $z$)} and not as \isa{($x$ \# $y$) \# $z$}.
+
+\begin{warn}
+ Syntax annotations are a powerful but completely optional feature. You
+ could drop them from theory \texttt{ToyList} and go back to the identifiers
+ \isa{Nil} and \isa{Cons}. However, lists are such a central datatype
+ that their syntax is highly customized. We recommend that novices avoid
+ syntax annotations in their own theories.
+\end{warn}
+Next, two functions \isa{app} and \isaindexbold{rev} are declared:%
+\end{isamarkuptext}%
+\isacommand{consts}~app~::~{"}'a~list~{\isasymRightarrow}~'a~list~{\isasymRightarrow}~'a~list{"}~~~(\isakeyword{infixr}~{"}@{"}~65)\isanewline
+~~~~~~~rev~::~{"}'a~list~{\isasymRightarrow}~'a~list{"}%
+\begin{isamarkuptext}%
+\noindent
+In contrast to ML, Isabelle insists on explicit declarations of all functions
+(keyword \isacommand{consts}). (Apart from the declaration-before-use
+restriction, the order of items in a theory file is unconstrained.) Function
+\isa{app} is annotated with concrete syntax too. Instead of the prefix
+syntax \isa{app~$xs$~$ys$} the infix
+\isa{$xs$~\at~$ys$}\index{$HOL2list@\texttt{\at}|bold} becomes the preferred
+form. Both functions are defined recursively:%
+\end{isamarkuptext}%
+\isacommand{primrec}\isanewline
+{"}[]~@~ys~~~~~~~=~ys{"}\isanewline
+{"}(x~\#~xs)~@~ys~=~x~\#~(xs~@~ys){"}\isanewline
+\isanewline
+\isacommand{primrec}\isanewline
+{"}rev~[]~~~~~~~~=~[]{"}\isanewline
+{"}rev~(x~\#~xs)~~=~(rev~xs)~@~(x~\#~[]){"}%
+\begin{isamarkuptext}%
+\noindent
+The equations for \isa{app} and \isa{rev} hardly need comments:
+\isa{app} appends two lists and \isa{rev} reverses a list. The keyword
+\isacommand{primrec}\index{*primrec} indicates that the recursion is of a
+particularly primitive kind where each recursive call peels off a datatype
+constructor from one of the arguments (see \S\ref{sec:datatype}). Thus the
+recursion always terminates, i.e.\ the function is \bfindex{total}.
+
+The termination requirement is absolutely essential in HOL, a logic of total
+functions. If we were to drop it, inconsistencies would quickly arise: the
+``definition'' $f(n) = f(n)+1$ immediately leads to $0 = 1$ by subtracting
+$f(n)$ on both sides.
+% However, this is a subtle issue that we cannot discuss here further.
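+Spelled out, the offending calculation is simply
+\[ f(n) = f(n)+1 \;\Longrightarrow\; f(n)-f(n) = \bigl(f(n)+1\bigr)-f(n)
+   \;\Longrightarrow\; 0 = 1. \]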
+
+\begin{warn}
+ As we have indicated, the desire for total functions is not a gratuitously
+ imposed restriction but an essential characteristic of HOL. It is only
+ because of totality that reasoning in HOL is comparatively easy. More
+ generally, the philosophy in HOL is not to allow arbitrary axioms (such as
+ function definitions whose totality has not been proved) because they
+ quickly lead to inconsistencies. Instead, fixed constructs for introducing
+ types and functions are offered (such as \isacommand{datatype} and
+ \isacommand{primrec}) which are guaranteed to preserve consistency.
+\end{warn}
+
+A remark about syntax. The textual definition of a theory follows a fixed
+syntax with keywords like \isacommand{datatype} and \isacommand{end} (see
+Fig.~\ref{fig:keywords} in Appendix~\ref{sec:Appendix} for a full list).
+Embedded in this syntax are the types and formulae of HOL, whose syntax is
+extensible, e.g.\ by new user-defined infix operators
+(see~\S\ref{sec:infix-syntax}). To distinguish the two levels, everything
+HOL-specific (terms and types) should be enclosed in
+\texttt{"}\dots\texttt{"}.
+To lessen this burden, quotation marks around a single identifier can be
+dropped, unless the identifier happens to be a keyword, as in%
+\end{isamarkuptext}%
+\isacommand{consts}~{"}end{"}~::~{"}'a~list~{\isasymRightarrow}~'a{"}%
+\begin{isamarkuptext}%
+\noindent
+When Isabelle prints a syntax error message, it refers to the HOL syntax as
+the \bfindex{inner syntax}.
+
+
+\section{An introductory proof}
+\label{sec:intro-proof}
+
+Assuming you have input the declarations and definitions of \texttt{ToyList}
+presented so far, we are ready to prove a few simple theorems. This will
+illustrate not just the basic proof commands but also the typical proof
+process.
+
+\subsubsection*{Main goal: \texttt{rev(rev xs) = xs}}
+
+Our goal is to show that reversing a list twice produces the original
+list. The input line%
+\end{isamarkuptext}%
+\isacommand{theorem}~rev\_rev~[simp]:~{"}rev(rev~xs)~=~xs{"}%
+\begin{isamarkuptxt}%
+\noindent\index{*theorem|bold}\index{*simp (attribute)|bold}
+\begin{itemize}
+\item
+establishes a new theorem to be proved, namely \isa{rev(rev xs) = xs},
+\item
+gives that theorem the name \isa{rev_rev} by which it can be referred to,
+\item
+and tells Isabelle (via \isa{[simp]}) to use the theorem (once it has been
+proved) as a simplification rule, i.e.\ all future proofs involving
+simplification will replace occurrences of \isa{rev(rev xs)} by
+\isa{xs}.
+\end{itemize}
+The name and the simplification attribute are optional.
+
+Isabelle's response is to print
+\begin{isabellepar}%
+proof(prove):~step~0\isanewline
+\isanewline
+goal~(theorem~rev\_rev):\isanewline
+rev~(rev~xs)~=~xs\isanewline
+~1.~rev~(rev~xs)~=~xs
+\end{isabellepar}%
+The first three lines tell us that we are 0 steps into the proof of
+theorem \isa{rev_rev}; for compactness reasons we rarely show these
+initial lines in this tutorial. The remaining lines display the current
+proof state.
+Until we have finished a proof, the proof state always looks like this:
+\begin{isabellepar}%
+$G$\isanewline
+~1.~$G\sb{1}$\isanewline
+~~\vdots~~\isanewline
+~$n$.~$G\sb{n}$
+\end{isabellepar}%
+where $G$
+is the overall goal that we are trying to prove, and the numbered lines
+contain the subgoals $G\sb{1}$, \dots, $G\sb{n}$ that we need to prove to
+establish $G$. At \isa{step 0} there is only one subgoal, which is
+identical with the overall goal. Normally $G$ is constant and only serves as
+a reminder. Hence we rarely show it in this tutorial.
+
+Let us now get back to \isa{rev(rev xs) = xs}. Properties of recursively
+defined functions are best established by induction. In this case there is
+not much choice except to induct on \isa{xs}:%
+\end{isamarkuptxt}%
+\isacommand{apply}(induct\_tac~xs)%
+\begin{isamarkuptxt}%
+\noindent\index{*induct_tac}%
+This tells Isabelle to perform induction on variable \isa{xs}. The suffix
+\isa{tac} stands for ``tactic'', a synonym for ``theorem proving function''.
+By default, induction acts on the first subgoal. The new proof state contains
+two subgoals, namely the base case (\isa{Nil}) and the induction step
+(\isa{Cons}):
+\begin{isabellepar}%
+~1.~rev~(rev~[])~=~[]\isanewline
+~2.~{\isasymAnd}a~list.~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev(a~\#~list))~=~a~\#~list%
+\end{isabellepar}%
+
+The induction step is an example of the general format of a subgoal:
+\begin{isabellepar}%
+~$i$.~{\indexboldpos{\isasymAnd}{$IsaAnd}}$x\sb{1}$~\dots~$x\sb{n}$.~{\it assumptions}~{\isasymLongrightarrow}~{\it conclusion}
+\end{isabellepar}%
+The prefix of bound variables \isasymAnd$x\sb{1}$~\dots~$x\sb{n}$ can be
+ignored most of the time, or simply treated as a list of variables local to
+this subgoal. Their deeper significance is explained in \S\ref{sec:PCproofs}.
+The {\it assumptions} are the local assumptions for this subgoal and {\it
+ conclusion} is the actual proposition to be proved. Typical proof steps
+that add new assumptions are induction or case distinction. In our example
+the only assumption is the induction hypothesis \isa{rev (rev list) =
+ list}, where \isa{list} is a variable name chosen by Isabelle. If there
+are multiple assumptions, they are enclosed in the bracket pair
+\indexboldpos{\isasymlbrakk}{$Isabrl} and
+\indexboldpos{\isasymrbrakk}{$Isabrr} and separated by semicolons.
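+For example, a subgoal with two local assumptions $A\sb{1}$ and $A\sb{2}$ is
+displayed as
+\begin{isabellepar}%
+~$i$.~{\isasymAnd}$x\sb{1}$~\dots~$x\sb{n}$.~{\isasymlbrakk}$A\sb{1}$;~$A\sb{2}${\isasymrbrakk}~{\isasymLongrightarrow}~{\it conclusion}
+\end{isabellepar}%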
+
+%FIXME indent!
+Let us try to solve both goals automatically:%
+\end{isamarkuptxt}%
+\isacommand{apply}(auto)%
+\begin{isamarkuptxt}%
+\noindent
+This command tells Isabelle to apply a proof strategy called
+\isa{auto} to all subgoals. Essentially, \isa{auto} tries to
+``simplify'' the subgoals. In our case, subgoal~1 is solved completely (thanks
+to the equation \isa{rev [] = []}) and disappears; the simplified version
+of subgoal~2 becomes the new subgoal~1:
+\begin{isabellepar}%
+~1.~\dots~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev~list~@~a~\#~[])~=~a~\#~list
+\end{isabellepar}%
+In order to simplify this subgoal further, a lemma suggests itself.%
+\end{isamarkuptxt}%
+%
+\begin{isamarkuptext}%
+\subsubsection*{First lemma: \texttt{rev(xs \at~ys) = (rev ys) \at~(rev xs)}}
+
+After abandoning the above proof attempt (at the shell level type
+\isa{oops}) we start a new proof:%
+\end{isamarkuptext}%
+\isacommand{lemma}~rev\_app~[simp]:~{"}rev(xs~@~ys)~=~(rev~ys)~@~(rev~xs){"}%
+\begin{isamarkuptxt}%
+\noindent The keywords \isacommand{theorem}\index{*theorem} and
+\isacommand{lemma}\indexbold{*lemma} are interchangeable and merely indicate
+the importance we attach to a proposition. In general, we use the words
+\emph{theorem}\index{theorem} and \emph{lemma}\index{lemma} pretty much
+interchangeably.
+
+%FIXME indent!
+There are two variables that we could induct on: \isa{xs} and
+\isa{ys}. Because \isa{\at} is defined by recursion on
+the first argument, \isa{xs} is the correct one:%
+\end{isamarkuptxt}%
+\isacommand{apply}(induct\_tac~xs)%
+\begin{isamarkuptxt}%
+\noindent
+This time not even the base case is solved automatically:%
+\end{isamarkuptxt}%
+\isacommand{apply}(auto)%
+\begin{isamarkuptxt}%
+\begin{isabellepar}%
+~1.~rev~ys~=~rev~ys~@~[]\isanewline
+~2. \dots
+\end{isabellepar}%
+We need to cancel this proof attempt and prove another simple lemma first.
+From now on, the step of cancelling an incomplete proof before embarking on
+the proof of a lemma often remains implicit.%
+\end{isamarkuptxt}%
+%
+\begin{isamarkuptext}%
+\subsubsection*{Second lemma: \texttt{xs \at~[] = xs}}
+
+This time the canonical proof procedure%
+\end{isamarkuptext}%
+\isacommand{lemma}~app\_Nil2~[simp]:~{"}xs~@~[]~=~xs{"}\isanewline
+\isacommand{apply}(induct\_tac~xs)\isanewline
+\isacommand{apply}(auto)%
+\begin{isamarkuptxt}%
+\noindent
+leads to the desired message \isa{No subgoals!}:
+\begin{isabellepar}%
+xs~@~[]~=~xs\isanewline
+No~subgoals!
+\end{isabellepar}%
+
+We still need to confirm that the proof is now finished:%
+\end{isamarkuptxt}%
+\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent\indexbold{$Isar@\texttt{.}}%
+As a result of that final dot, Isabelle associates the lemma
+just proved with its name. Notice that in the lemma \isa{app_Nil2} (as
+printed out after the final dot) the free variable \isa{xs} has been
+replaced by the unknown \isa{?xs}, just as explained in
+\S\ref{sec:variables}.
+
+Going back to the proof of the first lemma%
+\end{isamarkuptext}%
+\isacommand{lemma}~rev\_app~[simp]:~{"}rev(xs~@~ys)~=~(rev~ys)~@~(rev~xs){"}\isanewline
+\isacommand{apply}(induct\_tac~xs)\isanewline
+\isacommand{apply}(auto)%
+\begin{isamarkuptxt}%
+\noindent
+we find that this time \isa{auto} solves the base case, but the
+induction step merely simplifies to
+\begin{isabellepar}%
+~1.~{\isasymAnd}a~list.\isanewline
+~~~~~~~rev~(list~@~ys)~=~rev~ys~@~rev~list~{\isasymLongrightarrow}\isanewline
+~~~~~~~(rev~ys~@~rev~list)~@~a~\#~[]~=~rev~ys~@~rev~list~@~a~\#~[]
+\end{isabellepar}%
+Now we need to remember that \isa{\at} associates to the right, and that
+\isa{\#} and \isa{\at} have the same priority (namely the \isa{65}
+in their \isacommand{infixr} annotation). Thus the conclusion really is
+\begin{isabellepar}%
+~~~~~(rev~ys~@~rev~list)~@~(a~\#~[])~=~rev~ys~@~(rev~list~@~(a~\#~[]))%
+\end{isabellepar}%
+and the missing lemma is associativity of \isa{\at}.
+
+\subsubsection*{Third lemma: \texttt{(xs \at~ys) \at~zs = xs \at~(ys \at~zs)}}
+
+Abandoning the previous proof, the canonical proof procedure%
+\end{isamarkuptxt}%
+%
+\begin{comment}
+\isacommand{oops}%
+\end{comment}
+\isacommand{lemma}~app\_assoc~[simp]:~{"}(xs~@~ys)~@~zs~=~xs~@~(ys~@~zs){"}\isanewline
+\isacommand{apply}(induct\_tac~xs)\isanewline
+\isacommand{apply}(auto)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+succeeds without further ado.
+
+Now we can go back and prove the first lemma%
+\end{isamarkuptext}%
+\isacommand{lemma}~rev\_app~[simp]:~{"}rev(xs~@~ys)~=~(rev~ys)~@~(rev~xs){"}\isanewline
+\isacommand{apply}(induct\_tac~xs)\isanewline
+\isacommand{apply}(auto)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+and then solve our main theorem:%
+\end{isamarkuptext}%
+\isacommand{theorem}~rev\_rev~[simp]:~{"}rev(rev~xs)~=~xs{"}\isanewline
+\isacommand{apply}(induct\_tac~xs)\isanewline
+\isacommand{apply}(auto)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+The final \isa{end} tells Isabelle to close the current theory because
+we are finished with its development:%
+\end{isamarkuptext}%
+\isacommand{end}\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Trie/document/Option2.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,4 @@
+\begin{isabelle}%
+\isanewline
+\isacommand{datatype}~'a~option~=~None~|~Some~'a\isanewline
+\end{isabelle}%
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Trie/document/Trie.tex Wed Apr 19 12:59:38 2000 +0200
@@ -0,0 +1,126 @@
+\begin{isabelle}%
+%
+\begin{isamarkuptext}%
+To minimize running time, each node of a trie should contain an array that maps
+letters to subtries. We have chosen a (sometimes) more space-efficient
+representation where the subtries are held in an association list, i.e.\ a
+list of (letter,trie) pairs. Abstracting over the alphabet \isa{'a} and the
+values \isa{'v} we define a trie as follows:%
+\end{isamarkuptext}%
+\isacommand{datatype}~('a,'v)trie~=~Trie~~{"}'v~option{"}~~{"}('a~*~('a,'v)trie)list{"}%
+\begin{isamarkuptext}%
+\noindent
+The first component is the optional value, the second component the
+association list of subtries. This is an example of nested recursion involving products,
+which is fine because products are datatypes as well.
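+For example, the trie that carries no value at its root and has a single
+subtrie under the letter \isa{a} holding the value \isa{v} (both \isa{a} and
+\isa{v} serve purely as illustrations) is written
+\begin{isabellepar}%
+~~Trie~None~[(a,~Trie~(Some~v)~[])]
+\end{isabellepar}%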
+We define two selector functions:%
+\end{isamarkuptext}%
+\isacommand{consts}~value~::~{"}('a,'v)trie~{\isasymRightarrow}~'v~option{"}\isanewline
+~~~~~~~alist~::~{"}('a,'v)trie~{\isasymRightarrow}~('a~*~('a,'v)trie)list{"}\isanewline
+\isacommand{primrec}~{"}value(Trie~ov~al)~=~ov{"}\isanewline
+\isacommand{primrec}~{"}alist(Trie~ov~al)~=~al{"}%
+\begin{isamarkuptext}%
+\noindent
+Association lists come with a generic lookup function:%
+\end{isamarkuptext}%
+\isacommand{consts}~~~assoc~::~{"}('key~*~'val)list~{\isasymRightarrow}~'key~{\isasymRightarrow}~'val~option{"}\isanewline
+\isacommand{primrec}~{"}assoc~[]~x~=~None{"}\isanewline
+~~~~~~~~{"}assoc~(p\#ps)~x~=\isanewline
+~~~~~~~~~~~(let~(a,b)~=~p~in~if~a=x~then~Some~b~else~assoc~ps~x){"}%
+\begin{isamarkuptext}%
+Now we can define the lookup function for tries. It descends into the trie
+examining the letters of the search string one by one. As
+recursion on lists is simpler than on tries, let us express this as primitive
+recursion on the search string argument:%
+\end{isamarkuptext}%
+\isacommand{consts}~~~lookup~::~{"}('a,'v)trie~{\isasymRightarrow}~'a~list~{\isasymRightarrow}~'v~option{"}\isanewline
+\isacommand{primrec}~{"}lookup~t~[]~=~value~t{"}\isanewline
+~~~~~~~~{"}lookup~t~(a\#as)~=~(case~assoc~(alist~t)~a~of\isanewline
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~None~{\isasymRightarrow}~None\isanewline
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Some~at~{\isasymRightarrow}~lookup~at~as){"}%
+\begin{isamarkuptext}%
+As a first simple property we prove that looking up a string in the empty
+trie \isa{Trie~None~[]} always returns \isa{None}. The proof merely
+distinguishes the two cases whether the search string is empty or not:%
+\end{isamarkuptext}%
+\isacommand{lemma}~[simp]:~{"}lookup~(Trie~None~[])~as~=~None{"}\isanewline
+\isacommand{apply}(cases~{"}as{"},~auto)\isacommand{.}%
+\begin{isamarkuptext}%
+Things begin to get interesting with the definition of an update function
+that adds a new (string,value) pair to a trie, overwriting the old value
+associated with that string:%
+\end{isamarkuptext}%
+\isacommand{consts}~update~::~{"}('a,'v)trie~{\isasymRightarrow}~'a~list~{\isasymRightarrow}~'v~{\isasymRightarrow}~('a,'v)trie{"}\isanewline
+\isacommand{primrec}\isanewline
+~~{"}update~t~[]~~~~~v~=~Trie~(Some~v)~(alist~t){"}\isanewline
+~~{"}update~t~(a\#as)~v~=\isanewline
+~~~~~(let~tt~=~(case~assoc~(alist~t)~a~of\isanewline
+~~~~~~~~~~~~~~~~~~None~{\isasymRightarrow}~Trie~None~[]~|~Some~at~{\isasymRightarrow}~at)\isanewline
+~~~~~~in~Trie~(value~t)~((a,update~tt~as~v)\#alist~t)){"}%
+\begin{isamarkuptext}%
+\noindent
+The base case is obvious. In the recursive case the subtrie
+\isa{tt} associated with the first letter \isa{a} is extracted,
+recursively updated, and then placed in front of the association list.
+The old subtrie associated with \isa{a} is still in the association list
+but no longer accessible via \isa{assoc}. Clearly, there is room here for
+optimizations!
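+One such optimization, sketched here only as a variant of the second
+recursive equation and relying on a hypothetical function \isa{rem} that
+removes all bindings of a given key from an association list, would drop the
+stale entry when the updated subtrie is placed in front:
+\begin{isabellepar}%
+~~update~t~(a\#as)~v~=\isanewline
+~~~~~(let~tt~=~\dots\isanewline
+~~~~~~in~Trie~(value~t)~((a,update~tt~as~v)\#rem~a~(alist~t)))
+\end{isabellepar}%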
+
+Before we start on any proofs about \isa{update} we tell the simplifier to
+expand all \isa{let}s and to split all \isa{case}-constructs over
+options:%
+\end{isamarkuptext}%
+\isacommand{theorems}~[simp]~=~Let\_def\isanewline
+\isacommand{theorems}~[split]~=~option.split%
+\begin{isamarkuptext}%
+\noindent
+The reason becomes clear when looking (probably after a failed proof
+attempt) at the body of \isa{update}: it contains both
+\isa{let} and a case distinction over type \isa{option}.
+
+Our main goal is to prove the correct interaction of \isa{update} and
+\isa{lookup}:%
+\end{isamarkuptext}%
+\isacommand{theorem}~{"}{\isasymforall}t~v~bs.~lookup~(update~t~as~v)~bs~=\isanewline
+~~~~~~~~~~~~~~~~~~~~(if~as=bs~then~Some~v~else~lookup~t~bs){"}%
+\begin{isamarkuptxt}%
+\noindent
+Our plan is to induct on \isa{as}; hence the remaining variables are
+quantified. From the definitions it is clear that induction on either
+\isa{as} or \isa{bs} is required. The choice of \isa{as} is merely
+guided by the intuition that simplification of \isa{lookup} might be easier
+if \isa{update} has already been simplified, which can only happen if
+\isa{as} is instantiated.
+The start of the proof is completely conventional:%
+\end{isamarkuptxt}%
+\isacommand{apply}(induct\_tac~{"}as{"},~auto)%
+\begin{isamarkuptxt}%
+\noindent
+Unfortunately, this time we are left with three intimidating-looking subgoals:
+\begin{isabellepar}%
+~1.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs\isanewline
+~2.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs\isanewline
+~3.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs%
+\end{isabellepar}%
+Clearly, if we want to make headway we have to instantiate \isa{bs} as
+well now. It turns out that instead of induction, case distinction
+suffices:%
+\end{isamarkuptxt}%
+\isacommand{apply}(case\_tac[!]~bs)\isanewline
+\isacommand{apply}(auto)\isacommand{.}%
+\begin{isamarkuptext}%
+\noindent
+Both \isaindex{case_tac} and \isaindex{induct_tac}
+take an optional first argument that specifies the range of subgoals they are
+applied to, where \isa{!} means all subgoals, i.e.\ \isa{[1-3]} in this case.
+Individual subgoal numbers are also allowed.
+
+This proof may look surprisingly straightforward. However, note that this
+comes at a cost: the proof script is unreadable because the
+intermediate proof states are invisible, and we rely on the (possibly
+brittle) magic of \isa{auto} (after the induction) to split the remaining
+goals up in such a way that case distinction on \isa{bs} makes sense and
+solves the proof. Part~\ref{Isar} shows you how to write readable and stable
+proofs.%
+\end{isamarkuptext}%
+\end{isabelle}%