
\chapter{Functional Programming in HOL}

Although on the surface this chapter is mainly concerned with how to write
functional programs in HOL and how to verify them, most of the
constructs and proof procedures introduced are general purpose and recur in
any specification or verification task.

The dedicated functional programmer should be warned: HOL offers only {\em
  total functional programming} --- all functions in HOL must be total; lazy
data structures are not directly available. On the positive side, functions
in HOL need not be computable: HOL is a specification language that goes well
beyond what can be expressed as a program. However, for the time being we
concentrate on the computable.

\section{An introductory theory}
\label{sec:intro-theory}

Functional programming needs datatypes and functions. Both of them can be
defined in a theory with a syntax reminiscent of languages like ML or
Haskell. As an example consider the theory in Fig.~\ref{fig:ToyList}.

\begin{figure}[htbp]
\begin{ttbox}\makeatother
\input{ToyList/ToyList.thy}\end{ttbox}
\caption{A theory of lists}
\label{fig:ToyList}
\end{figure}

HOL already has a predefined theory of lists called \texttt{List} ---
\texttt{ToyList} is merely a small fragment of it chosen as an example. In
contrast to what is recommended in \S\ref{sec:Basic:Theories},
\texttt{ToyList} is not based on \texttt{Main} but on \texttt{Datatype}, a
theory that contains everything required for datatype definitions but does
not have \texttt{List} as a parent, thus avoiding ambiguities caused by
defining lists twice.

The \ttindexbold{datatype} \texttt{list} introduces two constructors
\texttt{Nil} and \texttt{Cons}, the empty list and the operator that adds an
element to the front of a list. For example, the term \texttt{Cons True (Cons
  False Nil)} is a value of type \texttt{bool~list}, namely the list with the
elements \texttt{True} and \texttt{False}. Because this notation becomes
unwieldy very quickly, the datatype declaration is annotated with an
alternative syntax: instead of \texttt{Nil} and \texttt{Cons}~$x$~$xs$ we can
write \index{#@{\tt[]}|bold}\texttt{[]} and
\texttt{$x$~\#~$xs$}\index{#@{\tt\#}|bold}. In fact, this alternative syntax
is the standard syntax. Thus the list \texttt{Cons True (Cons False Nil)}
becomes \texttt{True \# False \# []}. The annotation \ttindexbold{infixr}
means that \texttt{\#} associates to the right, i.e.\ the term \texttt{$x$ \#
  $y$ \# $z$} is read as \texttt{$x$ \# ($y$ \# $z$)} and not as \texttt{($x$
  \# $y$) \# $z$}.

\begin{warn}
  Syntax annotations are a powerful but completely optional feature. You
  could drop them from theory \texttt{ToyList} and go back to the identifiers
  \texttt{Nil} and \texttt{Cons}. However, lists are such a central datatype
  that their syntax is highly customized. We recommend that novices should
  not use syntax annotations in their own theories.
\end{warn}

Next, the functions \texttt{app} and \texttt{rev} are declared. In contrast
to ML, Isabelle insists on explicit declarations of all functions (keyword
\ttindexbold{consts}).  (Apart from the declaration-before-use restriction,
the order of items in a theory file is unconstrained.) Function \texttt{app}
is annotated with concrete syntax too. Instead of the prefix syntax
\texttt{app}~$xs$~$ys$ the infix $xs$~\texttt{\at}~$ys$ becomes the preferred
form.

Both functions are defined recursively. The equations for \texttt{app} and
\texttt{rev} hardly need comments: \texttt{app} appends two lists and
\texttt{rev} reverses a list.  The keyword \ttindex{primrec} indicates that
the recursion is of a particularly primitive kind where each recursive call
peels off a datatype constructor from one of the arguments (see
\S\ref{sec:datatype}).  Thus the recursion always terminates, i.e.\ the
function is \bfindex{total}.

The termination requirement is absolutely essential in HOL, a logic of total
functions. If we were to drop it, inconsistencies could quickly arise: the
``definition'' $f(n) = f(n)+1$ immediately leads to $0 = 1$ by subtracting
$f(n)$ on both sides.
% However, this is a subtle issue that we cannot discuss here further.

\begin{warn}
  As we have indicated, the desire for total functions is not a gratuitously
  imposed restriction but an essential characteristic of HOL. It is only
  because of totality that reasoning in HOL is comparatively easy.  More
  generally, the philosophy in HOL is not to allow arbitrary axioms (such as
  function definitions whose totality has not been proved) because they
  quickly lead to inconsistencies. Instead, fixed constructs for introducing
  types and functions are offered (such as \texttt{datatype} and
  \texttt{primrec}) which are guaranteed to preserve consistency.
\end{warn}

A remark about syntax.  The textual definition of a theory follows a fixed
syntax with keywords like \texttt{datatype} and \texttt{end} (see
Fig.~\ref{fig:keywords} in Appendix~\ref{sec:Appendix} for a full list).
Embedded in this syntax are the types and formulae of HOL, whose syntax is
extensible, e.g.\ by new user-defined infix operators
(see~\ref{sec:infix-syntax}). To distinguish the two levels, everything
HOL-specific should be enclosed in \texttt{"}\dots\texttt{"}. The same holds
for identifiers that happen to be keywords, as in
\begin{ttbox}
consts "end" :: 'a list => 'a
\end{ttbox}
To lessen this burden, quotation marks around types can be dropped,
provided their syntax does not go beyond what is described in
\S\ref{sec:TypesTermsForms}. Types containing further operators, e.g.\
\label{startype} \texttt{*} for Cartesian products, need quotation marks.
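For example, the following declaration of a (hypothetical) constant
\texttt{swap} requires quotation marks because its type contains \texttt{*}:
\begin{ttbox}
consts swap :: "'a * 'b => 'b * 'a"
\end{ttbox}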

When Isabelle prints a syntax error message, it refers to the HOL syntax as
the \bfindex{inner syntax}.

\section{An introductory proof}
\label{sec:intro-proof}

Having defined \texttt{ToyList}, we load it with the ML command
\begin{ttbox}
use_thy "ToyList";
\end{ttbox}
and are ready to prove a few simple theorems. This will illustrate not just
the basic proof commands but also the typical proof process.

\subsubsection*{Main goal: \texttt{rev(rev xs) = xs}}

Our goal is to show that reversing a list twice produces the original
list. Typing
\begin{ttbox}
\input{ToyList/thm}\end{ttbox}
establishes a new goal to be proved in the context of the current theory,
which is the one we just loaded. Isabelle's response is to print the current proof state:
\begin{ttbox}
\Out{Level 0}
\Out{rev (rev xs) = xs}
\Out{ 1. rev (rev xs) = xs}
\end{ttbox}
Until we have finished a proof, the proof state always looks like this:
\begin{ttbox}
\Out{Level \(i\)}
\Out{\(G\)}
\Out{ 1. \(G@1\)}
\Out{ \(\vdots\)}
\Out{ \(n\). \(G@n\)}
\end{ttbox}
where \texttt{Level}~$i$ indicates that we are $i$ steps into the proof, $G$
is the overall goal that we are trying to prove, and the numbered lines
contain the subgoals $G@1$, \dots, $G@n$ that we need to prove to establish
$G$. At \texttt{Level 0} there is only one subgoal, which is identical with
the overall goal.  Normally $G$ is constant and only serves as a reminder.
Hence we rarely show it in this tutorial.

Let us now get back to \texttt{rev(rev xs) = xs}. Properties of recursively
defined functions are best established by induction. In this case there is
not much choice except to induct on \texttt{xs}:
\begin{ttbox}
\input{ToyList/inductxs}\end{ttbox}
This tells Isabelle to perform induction on variable \texttt{xs} in subgoal
1. The new proof state contains two subgoals, namely the base case
(\texttt{Nil}) and the induction step (\texttt{Cons}):
\begin{ttbox}
\Out{1. rev (rev []) = []}
\Out{2. !!a list. rev (rev list) = list ==> rev (rev (a # list)) = a # list}
\end{ttbox}
The induction step is an example of the general format of a subgoal:
\begin{ttbox}
\Out{\(i\). !!\(x@1 \dots x@n\). {\it assumptions} ==> {\it conclusion}}
\end{ttbox}\index{==>@{\tt==>}|bold}
The prefix of bound variables \texttt{!!\(x@1 \dots x@n\)} can be ignored
most of the time, or simply treated as a list of variables local to this
subgoal. Their deeper significance is explained in \S\ref{sec:PCproofs}.  The
{\it assumptions} are the local assumptions for this subgoal and {\it
  conclusion} is the actual proposition to be proved. Typical proof steps
that add new assumptions are induction or case distinction. In our example
the only assumption is the induction hypothesis \texttt{rev (rev list) =
  list}, where \texttt{list} is a variable name chosen by Isabelle. If there
are multiple assumptions, they are enclosed in the bracket pair
\texttt{[|}\index{==>@\ttlbr|bold} and \texttt{|]}\index{==>@\ttrbr|bold}
and separated by semicolons.
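For example, a subgoal with two local assumptions might be displayed as
follows (a purely schematic illustration with hypothetical predicates
\texttt{P}, \texttt{Q} and \texttt{R}):
\begin{ttbox}
\Out{ 1. !!x xs. [| P x; Q xs |] ==> R (x # xs)}
\end{ttbox}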

Let us try to solve both goals automatically:
\begin{ttbox}
\input{ToyList/autotac}\end{ttbox}
This command tells Isabelle to apply a proof strategy called
\texttt{Auto_tac} to all subgoals. Essentially, \texttt{Auto_tac} tries to
`simplify' the subgoals.  In our case, subgoal~1 is solved completely (thanks
to the equation \texttt{rev [] = []}) and disappears; the simplified version
of subgoal~2 becomes the new subgoal~1:
\begin{ttbox}\makeatother
\Out{1. !!a list. rev(rev list) = list ==> rev(rev list @ a # []) = a # list}
\end{ttbox}
In order to simplify this subgoal further, a lemma suggests itself.

\subsubsection*{First lemma: \texttt{rev(xs \at~ys) = (rev ys) \at~(rev xs)}}

We start the proof as usual:
\begin{ttbox}\makeatother
\input{ToyList/lemma1}\end{ttbox}
There are two variables that we could induct on: \texttt{xs} and
\texttt{ys}. Because \texttt{\at} is defined by recursion on
the first argument, \texttt{xs} is the correct one:
\begin{ttbox}
\input{ToyList/inductxs}\end{ttbox}
This time not even the base case is solved automatically:
\begin{ttbox}\makeatother
by(Auto_tac);
\Out{ 1. rev ys = rev ys @ []}
\Out{ 2. \dots}
\end{ttbox}
We need another lemma.

\subsubsection*{Second lemma: \texttt{xs \at~[] = xs}}

This time the canonical proof procedure
\begin{ttbox}\makeatother
\input{ToyList/lemma2}\input{ToyList/inductxs}\input{ToyList/autotac}\end{ttbox}
leads to the desired message \texttt{No subgoals!}:
\begin{ttbox}\makeatother
\Out{xs @ [] = xs}
\Out{No subgoals!}
\end{ttbox}
Now we can give the lemma just proved a suitable name
\begin{ttbox}
\input{ToyList/qed2}\end{ttbox}\index{*qed}
and tell Isabelle to use this lemma in all future proofs by simplification:
\begin{ttbox}
\input{ToyList/addsimps2}\end{ttbox}
Note that in the theorem \texttt{app_Nil2} the free variable \texttt{xs} has
been replaced by the unknown \texttt{?xs}, just as explained in
\S\ref{sec:variables}.

Going back to the proof of the first lemma
\begin{ttbox}\makeatother
\input{ToyList/lemma1}\input{ToyList/inductxs}\input{ToyList/autotac}\end{ttbox}
we find that this time \texttt{Auto_tac} solves the base case, but the
induction step merely simplifies to
\begin{ttbox}\makeatother
\Out{1. !!a list.}
\Out{      rev (list @ ys) = rev ys @ rev list}
\Out{      ==> (rev ys @ rev list) @ a # [] = rev ys @ rev list @ a # []}
\end{ttbox}
Now we need to remember that \texttt{\at} associates to the right, and that
\texttt{\#} and \texttt{\at} have the same priority (namely the \texttt{65}
in the definition of \texttt{ToyList}). Thus the conclusion really is
\begin{ttbox}\makeatother
\Out{   ==> (rev ys @ rev list) @ (a # []) = rev ys @ (rev list @ (a # []))}
\end{ttbox}
and the missing lemma is associativity of \texttt{\at}.

\subsubsection*{Third lemma: \texttt{(xs \at~ys) \at~zs = xs \at~(ys \at~zs)}}

This time the canonical proof procedure
\begin{ttbox}\makeatother
\input{ToyList/lemma3}\end{ttbox}
succeeds without further ado. Again we name the lemma and add it to
the set of lemmas used during simplification:
\begin{ttbox}
\input{ToyList/qed3}\end{ttbox}
Now we can go back and prove the first lemma
\begin{ttbox}\makeatother
\input{ToyList/lemma1}\input{ToyList/inductxs}\input{ToyList/autotac}\end{ttbox}
add it to the simplification lemmas
\begin{ttbox}
\input{ToyList/qed1}\end{ttbox}
and then solve our main theorem:
\begin{ttbox}\makeatother
\input{ToyList/thm}\input{ToyList/inductxs}\input{ToyList/autotac}\end{ttbox}

\subsubsection*{Review}

This is the end of our toy proof. It should have familiarized you with
\begin{itemize}
\item the standard theorem proving procedure:
state a goal; proceed with proof until a new lemma is required; prove that
lemma; come back to the original goal.
\item a specific procedure that works well for functional programs:
induction followed by all-out simplification via \texttt{Auto_tac}.
\item a basic repertoire of proof commands.
\end{itemize}


\section{Some helpful commands}
\label{sec:commands-and-hints}

This section discusses a few basic commands for manipulating the proof state
and can be skipped by casual readers.

There are two kinds of commands used during a proof: the actual proof
commands and auxiliary commands for examining the proof state and controlling
the display. Proof commands are always of the form
\texttt{by(\textit{tactic});}\indexbold{tactic} where \textbf{tactic} is a
synonym for ``theorem proving function''. Typical examples are
\texttt{induct_tac} and \texttt{Auto_tac} --- the suffix \texttt{_tac} is
merely a mnemonic. Further tactics are introduced throughout the tutorial.

%Tactics can also be modified. For example,
%\begin{ttbox}
%by(ALLGOALS Asm_simp_tac);
%\end{ttbox}
%tells Isabelle to apply \texttt{Asm_simp_tac} to all subgoals. For more on
%tactics and how to combine them see~\S\ref{sec:Tactics}.

The most useful auxiliary commands are:
\begin{description}
\item[Printing the current state]
Type \texttt{pr();} to redisplay the current proof state, for example when it
has disappeared off the screen.
\item[Limiting the number of subgoals]
Typing \texttt{prlim $k$;} tells Isabelle to print only the first $k$
subgoals from now on and redisplays the current proof state. This is helpful
when there are many subgoals.
\item[Undoing] Typing \texttt{undo();} undoes the effect of the last
tactic.
\item[Context switch] Every proof happens in the context of a
  \bfindex{current theory}. By default, this is the last theory loaded. If
  you want to prove a theorem in the context of a different theory
  \texttt{T}, you need to type \texttt{context T.thy;}\index{*context|bold}
  first. Of course you need to change the context again if you want to go
  back to your original theory.
\item[Displaying types] We have already mentioned the flag
  \ttindex{show_types} above. It can also be useful for detecting typos in
  formulae early on. For example, if \texttt{show_types} is set and the goal
  \texttt{rev(rev xs) = xs} is started, Isabelle prints the additional output
\begin{ttbox}
\Out{Variables:}
\Out{  xs :: 'a list}
\end{ttbox}
which tells us that Isabelle has correctly inferred that
\texttt{xs} is a variable of list type. On the other hand, had we
made a typo as in \texttt{rev(re xs) = xs}, the response
\begin{ttbox}
\Out{Variables:}
\Out{  re :: 'a list => 'a list}
\Out{  xs :: 'a list}
\end{ttbox}
would have alerted us because of the unexpected variable \texttt{re}.
\item[(Re)loading theories]\index{loading theories}\index{reloading theories}
Initially you load theory \texttt{T} by typing \ttindex{use_thy}~\texttt{"T";},
which loads all parent theories of \texttt{T} automatically, if they are not
loaded already. If you modify \texttt{T.thy} or \texttt{T.ML}, you can
reload it by typing \texttt{use_thy~"T";} again. This time, however, only
\texttt{T} is reloaded. If some of \texttt{T}'s parents have changed as well,
type \ttindexbold{update_thy}~\texttt{"T";} to reload \texttt{T} and all of
its parents that have changed (or have changed parents).
\end{description}
Further commands are found in the Reference Manual.


\section{Datatypes}
\label{sec:datatype}

Inductive datatypes are part of almost every non-trivial application of HOL.
First we take another look at a very important example, the datatype of
lists, before we turn to datatypes in general. The section closes with a
case study.


\subsection{Lists}

Lists are one of the essential datatypes in computing. Readers of this
tutorial and users of HOL need to be familiar with their basic operations.
Theory \texttt{ToyList} is only a small fragment of HOL's predefined theory
\texttt{List}\footnote{\url{http://isabelle.in.tum.de/library/HOL/List.html}}.
The latter contains many further operations. For example, the functions
\ttindexbold{hd} (`head') and \ttindexbold{tl} (`tail') return the first
element and the remainder of a list. (However, pattern-matching is usually
preferable to \texttt{hd} and \texttt{tl}.)  Theory \texttt{List} also
contains more syntactic sugar:
\texttt{[}$x@1$\texttt{,}\dots\texttt{,}$x@n$\texttt{]} abbreviates
$x@1$\texttt{\#}\dots\texttt{\#}$x@n$\texttt{\#[]}.  In the rest of the
tutorial we always use HOL's predefined lists.


\subsection{The general format}
\label{sec:general-datatype}

The general HOL \texttt{datatype} definition is of the form
\[
\mathtt{datatype}~(\alpha@1, \dots, \alpha@n) \, t ~=~
C@1~\tau@{11}~\dots~\tau@{1k@1} ~\mid~ \dots ~\mid~
C@m~\tau@{m1}~\dots~\tau@{mk@m}
\]
where $\alpha@i$ are type variables (the parameters), $C@i$ are distinct
constructor names and $\tau@{ij}$ are types; it is customary to capitalize
the first letter in constructor names. There are a number of
restrictions (such as that the type must not be empty) detailed
elsewhere~\cite{isabelle-HOL}; Isabelle notifies you if you violate them.
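For instance, the following (hypothetical) datatype of two alternatives is an
instance of the schema with $n = 2$ type parameters and $m = 2$ constructors,
each taking a single argument:
\begin{ttbox}
datatype ('a,'b) alt = Left 'a | Right 'b
\end{ttbox}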

Laws about datatypes, such as \verb$[] ~= x#xs$ and \texttt{(x\#xs = y\#ys) =
  (x=y \& xs=ys)}, are used automatically during proofs by simplification.
The same is true for the equations in primitive recursive function
definitions.

\subsection{Primitive recursion}

Functions on datatypes are usually defined by recursion. In fact, most of the
time they are defined by what is called \bfindex{primitive recursion}.
The keyword \ttindexbold{primrec} is followed by a list of equations
\[ f \, x@1 \, \dots \, (C \, y@1 \, \dots \, y@k)\, \dots \, x@n = r \]
such that $C$ is a constructor of the datatype $t$ and all recursive calls of
$f$ in $r$ are of the form $f \, \dots \, y@i \, \dots$ for some $i$. Thus
Isabelle immediately sees that $f$ terminates because one (fixed!) argument
becomes smaller with every recursive call. There must be exactly one equation
for each constructor.  Their order is immaterial.
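For instance, here is a (hypothetical) definition of a length function on
HOL's predefined lists that fits this schema; the recursive call peels the
constructor \texttt{\#} off the argument:
\begin{ttbox}
consts len :: "'a list => nat"
primrec
"len []     = 0"
"len (x#xs) = Suc (len xs)"
\end{ttbox}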
A more general method for defining total recursive functions is explained in
\S\ref{sec:recdef}.

\begin{exercise}
Given the datatype of binary trees
\begin{ttbox}
\input{Misc/tree}\end{ttbox}
define a function \texttt{mirror} that mirrors the structure of a binary tree
by swapping subtrees (recursively). Prove \texttt{mirror(mirror(t)) = t}.
\end{exercise}

\subsection{\texttt{case}-expressions}
\label{sec:case-expressions}

HOL also features \ttindexbold{case}-expressions for analyzing elements of a
datatype. For example,
\begin{ttbox}
case xs of [] => 0 | y#ys => y
\end{ttbox}
evaluates to \texttt{0} if \texttt{xs} is \texttt{[]} and to \texttt{y} if 
\texttt{xs} is \texttt{y\#ys}. (Since the result in both branches must be of
the same type, it follows that \texttt{y::nat} and hence
\texttt{xs::(nat)list}.)

In general, if $e$ is a term of the datatype $t$ defined in
\S\ref{sec:general-datatype} above, the corresponding
\texttt{case}-expression analyzing $e$ is
\[
\begin{array}{rrcl}
\mbox{\tt case}~e~\mbox{\tt of} & C@1~x@{11}~\dots~x@{1k@1} & \To & e@1 \\
                           \vdots \\
                           \mid & C@m~x@{m1}~\dots~x@{mk@m} & \To & e@m
\end{array}
\]

\begin{warn}
{\em All} constructors must be present, their order is fixed, and nested
patterns are not supported.  Violating these restrictions results in strange
error messages.
\end{warn}
\noindent
Nested patterns can be simulated by nested \texttt{case}-expressions: instead
of
\begin{ttbox}
case xs of [] => 0 | [x] => x | x#(y#zs) => y
\end{ttbox}
write
\begin{ttbox}
case xs of [] => 0 | x#ys => (case ys of [] => x | y#zs => y)
\end{ttbox}
Note that \texttt{case}-expressions should be enclosed in parentheses to
indicate their scope.

\subsection{Structural induction}

Almost all the basic laws about a datatype are applied automatically during
simplification. Only induction is invoked by hand via \texttt{induct_tac},
which works for any datatype. In some cases, induction is overkill and a case
distinction over all constructors of the datatype suffices. This is performed
by \ttindexbold{case_tac}. A trivial example:
\begin{ttbox}
\input{Misc/exhaust.ML}\Out{ 1. xs = [] ==> (case xs of [] => [] | y # ys => xs) = xs}
\Out{ 2. !!a list. xs = a # list ==> (case xs of [] => [] | y # ys => xs) = xs}
\input{Misc/autotac.ML}\end{ttbox}
Note that this particular case distinction could have been automated
completely. See~\S\ref{sec:SimpFeatures}.

\begin{warn}
  Induction is only allowed on a free variable that should not occur among
  the assumptions of the subgoal.  Exhaustion works for arbitrary terms.
\end{warn}

\subsection{Case study: boolean expressions}
\label{sec:boolex}

The aim of this case study is twofold: it shows how to model boolean
expressions and some algorithms for manipulating them, and it demonstrates
the constructs introduced above.

\subsubsection{How can we model boolean expressions?}

We want to represent boolean expressions built up from variables and
constants by negation and conjunction. The following datatype serves exactly
that purpose:
\begin{ttbox}
\input{Ifexpr/boolex}\end{ttbox}
The two constants are represented by the terms \texttt{Const~True} and
\texttt{Const~False}. Variables are represented by terms of the form
\texttt{Var}~$n$, where $n$ is a natural number (type \texttt{nat}).
For example, the formula $P@0 \land \neg P@1$ is represented by the term
\texttt{And~(Var~0)~(Neg(Var~1))}.

\subsubsection{What is the value of boolean expressions?}

The value of a boolean expression depends on the values of its variables.
Hence the function \texttt{value} takes an additional parameter, an {\em
  environment} of type \texttt{nat~=>~bool}, which maps variables to their
values:
\begin{ttbox}
\input{Ifexpr/value}\end{ttbox}

\subsubsection{If-expressions}

An alternative and often more efficient (because in a certain sense
canonical) representation is provided by so-called \textit{If-expressions\/},
built up from constants (\texttt{CIF}), variables (\texttt{VIF}) and
conditionals (\texttt{IF}):
\begin{ttbox}
\input{Ifexpr/ifex}\end{ttbox}
The evaluation of If-expressions proceeds as for \texttt{boolex}:
\begin{ttbox}
\input{Ifexpr/valif}\end{ttbox}

\subsubsection{Transformation into and of If-expressions}

The type \texttt{boolex} is close to the customary representation of logical
formulae, whereas \texttt{ifex} is designed for efficiency. Thus we need to
translate from \texttt{boolex} into \texttt{ifex}:
\begin{ttbox}
\input{Ifexpr/bool2if}\end{ttbox}
At last, we have something we can verify: that \texttt{bool2if} preserves the
value of its argument.
\begin{ttbox}
\input{Ifexpr/bool2if.ML}\end{ttbox}
The proof is canonical:
\begin{ttbox}
\input{Ifexpr/proof.ML}\end{ttbox}
In fact, all proofs in this case study look exactly like this. Hence we do
not show them below.

More interesting is the transformation of If-expressions into a normal form
where the first argument of \texttt{IF} cannot be another \texttt{IF} but
must be a constant or variable. Such a normal form can be computed by
repeatedly replacing a subterm of the form \texttt{IF~(IF~b~x~y)~z~u} by
\texttt{IF b (IF x z u) (IF y z u)}, which has the same value. The following
primitive recursive functions perform this task:
\begin{ttbox}
\input{Ifexpr/normif}
\input{Ifexpr/norm}\end{ttbox}
Their interplay is a bit tricky, and we leave it to the reader to develop an
intuitive understanding. Fortunately, Isabelle can help us to verify that the
transformation preserves the value of the expression:
\begin{ttbox}
\input{Ifexpr/norm.ML}\end{ttbox}
The proof is canonical, provided we first show the following lemma (which
also helps to understand what \texttt{normif} does) and make it available
for simplification via \texttt{Addsimps}:
\begin{ttbox}
\input{Ifexpr/normif.ML}\end{ttbox}

But how can we be sure that \texttt{norm} really produces a normal form in
the above sense? We have to prove
\begin{ttbox}
\input{Ifexpr/normal_norm.ML}\end{ttbox}
where \texttt{normal} expresses that an If-expression is in normal form:
\begin{ttbox}
\input{Ifexpr/normal}\end{ttbox}
Of course, this requires a lemma about normality of \texttt{normif}
\begin{ttbox}
\input{Ifexpr/normal_normif.ML}\end{ttbox}
that has to be made available for simplification via \texttt{Addsimps}.

How does one come up with the required lemmas? Try to prove the main theorems
without them and study carefully what \texttt{Auto_tac} leaves unproved. This
should provide the clue.
The necessity of universal quantification (\texttt{!t e}) in the two lemmas
is explained in \S\ref{sec:InductionHeuristics}.

\begin{exercise}
  We strengthen the definition of a \texttt{normal} If-expression as follows:
  the first argument of all \texttt{IF}s must be a variable. Adapt the above
  development to this changed requirement. (Hint: you may need to formulate
  some of the goals as implications (\texttt{-->}) rather than equalities
  (\texttt{=}).)
\end{exercise}

\section{Some basic types}


\subsection{Natural numbers}
\index{arithmetic|(}

The type \ttindexbold{nat} of natural numbers is predefined and behaves like
\begin{ttbox}
datatype nat = 0 | Suc nat
\end{ttbox}
In particular, there are \texttt{case}-expressions, for example
\begin{ttbox}
case n of 0 => 0 | Suc m => m
\end{ttbox}
primitive recursion, for example
\begin{ttbox}
\input{Misc/natsum}\end{ttbox}
and induction, for example
\begin{ttbox}
\input{Misc/NatSum.ML}\ttbreak
\Out{ sum n + sum n = n * Suc n}
\Out{ No subgoals!}
\end{ttbox}

The usual arithmetic operations \ttindexbold{+}, \ttindexbold{-},
\ttindexbold{*}, \ttindexbold{div}, \ttindexbold{mod}, \ttindexbold{min} and
\ttindexbold{max} are predefined, as are the relations \ttindexbold{<=} and
\ttindexbold{<}. There is even a least number operation \ttindexbold{LEAST}.
For example, \texttt{(LEAST n.$\,$1 < n) = 2} (HOL does not prove this
completely automatically).

\begin{warn}
  The operations \ttindexbold{+}, \ttindexbold{-}, \ttindexbold{*},
  \ttindexbold{min}, \ttindexbold{max}, \ttindexbold{<=} and \ttindexbold{<}
  are overloaded, i.e.\ they are available not just for natural numbers but
  at other types as well (see \S\ref{sec:TypeClasses}). For example, given
  the goal \texttt{x+y = y+x}, there is nothing to indicate that you are
  talking about natural numbers.  Hence Isabelle can only infer that
  \texttt{x} and \texttt{y} are of some arbitrary type where \texttt{+} is
  declared. As a consequence, you will be unable to prove the goal (although
  it may take you some time to realize what has happened if
  \texttt{show_types} is not set).  In this particular example, you need to
  include an explicit type constraint, for example \texttt{x+y = y+(x::nat)}.
  If there is enough contextual information this may not be necessary:
  \texttt{x+0 = x} automatically implies \texttt{x::nat}.
\end{warn}

Simple arithmetic goals are proved automatically by both \texttt{Auto_tac}
and the simplification tactics introduced in \S\ref{sec:Simplification}.  For
example, the goal
\begin{ttbox}
\input{Misc/arith1.ML}\end{ttbox}
is proved automatically. The main restriction is that only addition is taken
into account; other arithmetic operations and quantified formulae are ignored.

For more complex goals, there is the special tactic \ttindexbold{arith_tac}. It
proves arithmetic goals involving the usual logical connectives (\verb$~$,
\verb$&$, \verb$|$, \verb$-->$), the relations \texttt{<=} and \texttt{<},
and the operations \ttindexbold{+}, \ttindexbold{-}, \ttindexbold{min} and
\ttindexbold{max}. For example, it can prove
\begin{ttbox}
\input{Misc/arith2.ML}\end{ttbox}
because \texttt{k*k} can be treated as atomic.
In contrast, $n*n = n \Longrightarrow n=0 \lor n=1$ is not
even proved by \texttt{arith_tac} because the proof relies essentially on
properties of multiplication.

\begin{warn}
  The running time of \texttt{arith_tac} is exponential in the number of
  occurrences of \ttindexbold{-}, \ttindexbold{min} and \ttindexbold{max}
  because they are first eliminated by case distinctions.

  \texttt{arith_tac} is incomplete even for the restricted class of formulae
  described above (known as ``linear arithmetic''). If divisibility plays a
  role, it may fail to prove a valid formula, for example $m+m \neq n+n+1$.
  Fortunately, such examples are rare in practice.
\end{warn}

\index{arithmetic|)}


\subsection{Products}

HOL also has pairs: \texttt{($a@1$,$a@2$)} is of type \texttt{$\tau@1$ *
$\tau@2$} provided each $a@i$ is of type $\tau@i$. The components of a pair
are extracted by \texttt{fst} and \texttt{snd}:
\texttt{fst($x$,$y$) = $x$} and \texttt{snd($x$,$y$) = $y$}. Tuples
are simulated by pairs nested to the right: 
\texttt{($a@1$,$a@2$,$a@3$)} and \texttt{$\tau@1$ * $\tau@2$ * $\tau@3$}
stand for \texttt{($a@1$,($a@2$,$a@3$))} and \texttt{$\tau@1$ * ($\tau@2$ *
$\tau@3$)}. Therefore \texttt{fst(snd($a@1$,$a@2$,$a@3$)) = $a@2$}.

It is possible to use (nested) tuples as patterns in abstractions, for
example \texttt{\%(x,y,z).x+y+z} and \texttt{\%((x,y),z).x+y+z}.

In addition to explicit $\lambda$-abstractions, tuple patterns can be used in
most variable binding constructs. Typical examples are
\begin{ttbox}
let (x,y) = f z in (y,x)

case xs of [] => 0 | (x,y)\#zs => x+y
\end{ttbox}
Further important examples are quantifiers and sets.
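For example, with a hypothetical function \texttt{f}, one may write the
quantified formula and the set comprehension
\begin{ttbox}
! (x,y). f x y = f y x

\{(x,y). x < y\}
\end{ttbox}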

\begin{warn}
Abstraction over pairs and tuples is merely a convenient shorthand for a more
complex internal representation.  Thus the internal and external form of a
term may differ, which can affect proofs. If you want to avoid this
complication, use \texttt{fst} and \texttt{snd}, i.e.\ write
\texttt{\%p.~fst p + snd p} instead of \texttt{\%(x,y).~x + y}.
See~\S\ref{} for theorem proving with tuple patterns.
\end{warn}


\section{Definitions}
\label{sec:Definitions}

A definition is simply an abbreviation, i.e.\ a new name for an existing
construction. In particular, definitions cannot be recursive. Isabelle offers
definitions on the level of types and terms. Those on the type level are
called type synonyms, those on the term level are called (constant)
definitions.


\subsection{Type synonyms}
\indexbold{type synonyms}

Type synonyms are similar to those found in ML. Their syntax is fairly
self-explanatory:
\begin{ttbox}
\input{Misc/types}\end{ttbox}\indexbold{*types}
The synonym \texttt{alist} shows that in general the type on the right-hand
side needs to be enclosed in double quotation marks
(see the end of~\S\ref{sec:intro-theory}).

Internally all synonyms are fully expanded.  As a consequence Isabelle's
output never contains synonyms.  Their main purpose is to improve the
readability of theory definitions.  Synonyms can be used just like any other
type:
\begin{ttbox}
\input{Misc/consts}\end{ttbox}

\subsection{Constant definitions}
\label{sec:ConstDefinitions}

The above constants \texttt{nand} and \texttt{exor} are non-recursive and can
therefore be defined directly by
\begin{ttbox}
\input{Misc/defs}\end{ttbox}\indexbold{*defs}
where \texttt{defs} is a keyword and \texttt{nand_def} and \texttt{exor_def}
are arbitrary user-supplied names.
The symbol \texttt{==}\index{==>@{\tt==}|bold} is a special form of equality
that should only be used in constant definitions.
Declarations and definitions can also be merged
\begin{ttbox}
\input{Misc/constdefs}\end{ttbox}\indexbold{*constdefs}
in which case the default name of each definition is $f$\texttt{_def}, where
$f$ is the name of the defined constant.

Note that pattern-matching is not allowed, i.e.\ each definition must be of
the form $f\,x@1\,\dots\,x@n$\texttt{~==~}$t$.
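For example, the following (hypothetical) definition is rejected because its
left-hand side pattern-matches on the argument:
\begin{ttbox}
defs  hd2_def "hd2 (x#xs) == x"
\end{ttbox}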

Section~\ref{sec:Simplification} explains how definitions are used
in proofs.

\begin{warn}
A common mistake when writing definitions is to introduce extra free variables
on the right-hand side as in the following fictitious definition:
\begin{ttbox}
defs  prime_def "prime(p) == (m divides p) --> (m=1 | m=p)"
\end{ttbox}
Isabelle rejects this `definition' because of the extra {\tt m} on the
right-hand side, which would introduce an inconsistency.  What you should have
written is
\begin{ttbox}
defs  prime_def "prime(p) == !m. (m divides p) --> (m=1 | m=p)"
\end{ttbox}
\end{warn}




\chapter{More Functional Programming}

The purpose of this chapter is to deepen the reader's understanding of the
concepts encountered so far and to introduce an advanced method for defining
recursive functions. The first two sections give a structured presentation of
theorem proving by simplification (\S\ref{sec:Simplification}) and
discuss important heuristics for induction (\S\ref{sec:InductionHeuristics}). They
can be skipped by readers less interested in proofs. They are followed by a
case study, a compiler for expressions (\S\ref{sec:ExprCompiler}).
Finally we present a very general method for defining recursive functions
that goes well beyond what \texttt{primrec} allows (\S\ref{sec:recdef}).


\section{Simplification}
\label{sec:Simplification}

So far we have proved our theorems by \texttt{Auto_tac}, which
`simplifies' all subgoals. In fact, \texttt{Auto_tac} can do much more than
that; so far it simply has not needed to. However, when you go beyond toy
examples, you need to understand the ingredients of \texttt{Auto_tac}.
This section covers the tactic that \texttt{Auto_tac} always applies first,
namely simplification.

Simplification is one of the central theorem proving tools in Isabelle and
many other systems. The tool itself is called the \bfindex{simplifier}. The
purpose of this section is twofold: to introduce the many features of the
simplifier (\S\ref{sec:SimpFeatures}) and to explain a little bit how the
simplifier works (\S\ref{sec:SimpHow}).  Anybody intending to use HOL should
read \S\ref{sec:SimpFeatures}, and the serious student should read
\S\ref{sec:SimpHow} as well in order to understand what is happening when
things do not simplify as expected.


\subsection{Using the simplifier}
\label{sec:SimpFeatures}

In its most basic form, simplification means repeated application of
equations from left to right. For example, taking the rules for \texttt{\at}
and applying them to the term \texttt{[0,1] \at\ []} results in a sequence of
simplification steps:
\begin{ttbox}\makeatother
(0#1#[]) @ []  \(\leadsto\)  0#((1#[]) @ [])  \(\leadsto\)  0#(1#([] @ []))  \(\leadsto\)  0#1#[]
\end{ttbox}
This is also known as {\em term rewriting}, and the equations are referred
to as {\em rewrite rules}. `Rewriting' is the more honest term because
the terms do not necessarily become simpler in the process.

\subsubsection{Simpsets}

To facilitate simplification, each theory has an associated set of
simplification rules, known as a \bfindex{simpset}. Within a theory,
proofs by simplification refer to the associated simpset by default.
The simpset of a theory is built up as follows: starting with the union of
the simpsets of the parent theories, each occurrence of a \texttt{datatype}
or \texttt{primrec} construct augments the simpset. Explicit definitions are
not added automatically. Users can add new theorems via \texttt{Addsimps} and
delete them again later by \texttt{Delsimps}.
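For example, the lemma \texttt{app_Nil2} proved in \S\ref{sec:intro-proof}
can be added to and later removed from the current simpset like this:
\begin{ttbox}
Addsimps [app_Nil2];
Delsimps [app_Nil2];
\end{ttbox}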

You may augment a simpset not just by equations but by pretty much any
theorem. The simplifier will try to make sense of it.  For example, a theorem
\verb$~$$P$ is automatically turned into \texttt{$P$ = False}. The details are
explained in \S\ref{sec:SimpHow}.
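For instance, the freeness law \verb$[] ~= x#xs$ mentioned in
\S\ref{sec:general-datatype} is not an equation, but the simplifier turns it
into the rewrite rule
\begin{ttbox}
([] = x#xs) = False
\end{ttbox}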

As a rule of thumb, rewrite rules that really simplify a term (like
\texttt{xs \at\ [] = xs} and \texttt{rev(rev xs) = xs}) should be added to the
current simpset right after they have been proved.  Those of a more specific
nature (e.g.\ distributivity laws, which alter the structure of terms
considerably) should only be added for specific proofs and deleted again
afterwards.  Conversely, it may also happen that a generally useful rule
needs to be removed for a certain proof and is added again afterwards.  The
need for frequent temporary additions or deletions may indicate a badly
designed simpset.
\begin{warn}
  Simplification may not terminate, for example if both $f(x) = g(x)$ and
  $g(x) = f(x)$ are in the simpset. It is the user's responsibility not to
  include rules that can lead to nontermination, either on their own or in
  combination with other rules.
\end{warn}

\subsubsection{Simplification tactics}

There are four main simplification tactics:
\begin{ttdescription}
\item[\ttindexbold{Simp_tac} $i$] simplifies the conclusion of subgoal~$i$
  using the theory's simpset.  It may solve the subgoal completely if it has
  become trivial. For example:
\begin{ttbox}\makeatother
\Out{ 1. [] @ [] = []}
by(Simp_tac 1);
\Out{ No subgoals!}
\end{ttbox}

\item[\ttindexbold{Asm_simp_tac}]
  is like \verb$Simp_tac$, but extracts additional rewrite rules from
  the assumptions of the subgoal. For example, it solves
\begin{ttbox}\makeatother
\Out{1. xs = [] ==> xs @ ys = ys @ xs}
\end{ttbox}
which \texttt{Simp_tac} does not do.
  
\item[\ttindexbold{Full_simp_tac}] is like \verb$Simp_tac$, but also
  simplifies the assumptions (without using the assumptions to
  simplify each other or the actual goal).

\item[\ttindexbold{Asm_full_simp_tac}] is like \verb$Asm_simp_tac$,
  but also simplifies the assumptions. In particular, assumptions can
  simplify each other. For example:
\begin{ttbox}\makeatother
\Out{ 1. [| xs @ zs = ys @ xs; [] @ xs = [] @ [] |] ==> ys = zs}
by(Asm_full_simp_tac 1);
\Out{ No subgoals!}
\end{ttbox}
The second assumption simplifies to \texttt{xs = []}, which in turn
simplifies the first assumption to \texttt{zs = ys}, thus reducing the
conclusion to \texttt{ys = ys} and hence to \texttt{True}.
(See also the paragraph on tracing below.)
\end{ttdescription}
\texttt{Asm_full_simp_tac} is the most powerful of this quartet of
tactics. In fact, \texttt{Auto_tac} starts by applying
\texttt{Asm_full_simp_tac} to all subgoals. The only reason for the existence
of the other three tactics is that sometimes one wants to limit the amount of
simplification, for example to avoid nontermination:
\begin{ttbox}\makeatother
\Out{1. ! x. f x = g (f (g x)) ==> f [] = f [] @ []}
\end{ttbox}
is solved by \texttt{Simp_tac}, but \texttt{Asm_simp_tac} and
\texttt{Asm_full_simp_tac} loop because the rewrite rule \texttt{f x = g(f(g
x))} extracted from the assumption does not terminate.  Isabelle notices
certain simple forms of nontermination, but not this one.
 
\subsubsection{Modifying simpsets locally}

If a certain theorem is merely needed in one proof by simplification, the
pattern
\begin{ttbox}
Addsimps [\(rare_theorem\)];
by(Simp_tac 1);
Delsimps [\(rare_theorem\)];
\end{ttbox}
is awkward. Therefore there are lower-case versions of the simplification
tactics (\ttindexbold{simp_tac}, \ttindexbold{asm_simp_tac},
\ttindexbold{full_simp_tac}, \ttindexbold{asm_full_simp_tac}) and of the
simpset modifiers (\ttindexbold{addsimps}, \ttindexbold{delsimps})
that do not access or modify the implicit simpset but explicitly take a
simpset as an argument. For example, the above three lines become
\begin{ttbox}
by(simp_tac (simpset() addsimps [\(rare_theorem\)]) 1);
\end{ttbox}
where the result of the function call \texttt{simpset()} is the simpset of
the current theory and \texttt{addsimps} is an infix function. The implicit
simpset is read once but not modified.
This is far preferable to pairs of \texttt{Addsimps} and \texttt{Delsimps}.
Local modifications can be stacked as in
\begin{ttbox}
by(simp_tac (simpset() addsimps [\(rare_theorem\)] delsimps [\(some_thm\)]) 1);
\end{ttbox}

\subsubsection{Rewriting with definitions}

Constant definitions (\S\ref{sec:ConstDefinitions}) are not automatically
included in the simpset of a theory. Hence such definitions are not expanded
automatically either, just as it should be: definitions are introduced for
the purpose of abbreviating complex concepts. Of course we need to expand the
definitions initially to derive enough lemmas that characterize the concept
sufficiently for us to forget the original definition completely. For
example, given the theory
\begin{ttbox}
\input{Misc/Exor.thy}\end{ttbox}
we may want to prove \verb$exor A (~A)$. Instead of \texttt{Goal} we use
\begin{ttbox}
\input{Misc/exorgoal.ML}\end{ttbox}
which tells Isabelle to expand the definition of \texttt{exor}---the first
argument of \texttt{Goalw} can be a list of definitions---in the initial goal:
\begin{ttbox}
\Out{exor A (~ A)}
\Out{ 1. A & ~ ~ A | ~ A & ~ A}
\end{ttbox}
In this simple example, the goal is proved by \texttt{Simp_tac}.
Of course the resulting theorem is insufficient to characterize \texttt{exor}
completely.

In case we want to expand a definition in the middle of a proof, we can
simply add the definition locally to the simpset:
\begin{ttbox}
\input{Misc/exorproof.ML}\end{ttbox}
You should normally not add the definition permanently to the simpset
using \texttt{Addsimps} because this defeats the whole purpose of an
abbreviation.

\begin{warn}
If you have defined $f\,x\,y$\texttt{~==~}$t$ then you can only expand
occurrences of $f$ with at least two arguments. Thus it is safer to define
$f$\texttt{~==~\%$x\,y$.}$\;t$.
\end{warn}
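For example, with the (hypothetical) definition
\begin{ttbox}
defs  add2_def "add2 x y == x+y"
\end{ttbox}
the partially applied term \texttt{add2 x} cannot be expanded by rewriting
with \texttt{add2_def}; defining \texttt{add2 == \%x y. x+y} instead allows
every occurrence of \texttt{add2} to be expanded.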

\subsubsection{Simplifying \texttt{let}-expressions}

Proving a goal containing \ttindex{let}-expressions invariably requires the
\texttt{let}-constructs to be expanded at some point. Since
\texttt{let}-\texttt{in} is just syntactic sugar for a defined constant
(called \texttt{Let}), expanding \texttt{let}-constructs means rewriting with
\texttt{Let_def}:
%context List.thy;
%Goal "(let xs = [] in xs@xs) = ys";
\begin{ttbox}\makeatother
\Out{ 1. (let xs = [] in xs @ xs) = ys}
by(simp_tac (simpset() addsimps [Let_def]) 1);
\Out{ 1. [] = ys}
\end{ttbox}
If, in a particular context, there is no danger of a combinatorial explosion
of nested \texttt{let}s one could even add \texttt{Let_def} permanently via
\texttt{Addsimps}.
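Concretely, this amounts to
\begin{ttbox}
Addsimps [Let_def];
\end{ttbox}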

\subsubsection{Conditional equations}

So far all examples of rewrite rules were equations. The simplifier also
accepts {\em conditional\/} equations, for example
\begin{ttbox}
xs ~= []  ==>  hd xs # tl xs = xs \hfill \((*)\)
\end{ttbox}
(which is proved by \texttt{case_tac} on \texttt{xs} followed by
\texttt{Asm_full_simp_tac} twice). Assuming that this theorem and the equation
%\begin{ttbox}\makeatother
\texttt{(rev xs = []) = (xs = [])}
%\end{ttbox}
are part of the simpset, the subgoal
\begin{ttbox}\makeatother
\Out{1. xs ~= [] ==> hd(rev xs) # tl(rev xs) = rev xs}
\end{ttbox}
is proved by simplification:
the conditional equation $(*)$ above
can simplify \texttt{hd(rev~xs)~\#~tl(rev~xs)} to \texttt{rev xs}
because the corresponding precondition \verb$rev xs ~= []$ simplifies to
\verb$xs ~= []$, which is exactly the local assumption of the subgoal.


\subsubsection{Automatic case splits}

Goals containing \ttindex{if}-expressions are usually proved by case
distinction on the condition of the \texttt{if}. For example, the goal
\begin{ttbox}
\Out{1. ! xs. if xs = [] then rev xs = [] else rev xs ~= []}
\end{ttbox}
can be split into
\begin{ttbox}
\Out{1. ! xs. (xs = [] --> rev xs = []) \& (xs ~= [] --> rev xs ~= [])}
\end{ttbox}
by typing
\begin{ttbox}
\input{Misc/splitif.ML}\end{ttbox}
Because this is almost always the right proof strategy, the simplifier
performs case-splitting on \texttt{if}s automatically. Try \texttt{Simp_tac}
on the initial goal above.

This splitting idea generalizes from \texttt{if} to \ttindex{case}:
\begin{ttbox}\makeatother
\Out{1. (case xs of [] => zs | y#ys => y#(ys@zs)) = xs@zs}
\end{ttbox}
becomes
\begin{ttbox}\makeatother
\Out{1. (xs = [] --> zs = xs @ zs) &}
\Out{   (! a list. xs = a # list --> a # list @ zs = xs @ zs)}
\end{ttbox}
by typing
\begin{ttbox}
\input{Misc/splitlist.ML}\end{ttbox}
In contrast to \texttt{if}-expressions, the simplifier does not split
\texttt{case}-expressions by default because this can lead to nontermination
in the case of recursive datatypes.
Nevertheless the simplifier can be instructed to perform \texttt{case}-splits
by adding the appropriate rule to the simpset:
\begin{ttbox}
by(simp_tac (simpset() addsplits [list.split]) 1);
\end{ttbox}\indexbold{*addsplits}
solves the initial goal outright, which \texttt{Simp_tac} alone will not do.

In general, every datatype $t$ comes with a rule
\texttt{$t$.split} that can be added to the simpset either
locally via \texttt{addsplits} (see above), or permanently via
\begin{ttbox}
Addsplits [\(t\).split];
\end{ttbox}\indexbold{*Addsplits}
Split-rules can be removed globally via \ttindexbold{Delsplits} and locally
via \ttindexbold{delsplits} as, for example, in
\begin{ttbox}
by(simp_tac (simpset() addsimps [\(\dots\)] delsplits [split_if]) 1);
\end{ttbox}


\subsubsection{Arithmetic}
\index{arithmetic}

The simplifier routinely solves a small class of linear arithmetic formulae
(over types \texttt{nat} and \texttt{int}): it only takes into account
assumptions and conclusions that are (possibly negated) (in)equalities
(\texttt{=}, \texttt{<=}, \texttt{<}) and it only knows about addition. Thus
\begin{ttbox}
\input{Misc/arith1.ML}\end{ttbox}
is proved by simplification, whereas the only slightly more complex
\begin{ttbox}
\input{Misc/arith3.ML}\end{ttbox}
is not proved by simplification and requires \texttt{arith_tac}.

\subsubsection{Permutative rewrite rules}

A rewrite rule is {\bf permutative} if the left-hand side and right-hand side
are the same up to renaming of variables.  The most common permutative rule
is commutativity: $x+y = y+x$.  Another example is $(x-y)-z = (x-z)-y$.  Such
rules are problematic because once they apply, they can be used forever.
The simplifier is aware of this danger and treats permutative rules
separately. For details see~\cite{isabelle-ref}.

\subsubsection{Tracing}
\indexbold{tracing the simplifier}

Using the simplifier effectively may take a bit of experimentation.  Set the
\verb$trace_simp$ flag to get a better idea of what is going on:
\begin{ttbox}\makeatother
\Out{  1. rev [x] = []}
\ttbreak
set trace_simp;
by(Simp_tac 1);
\ttbreak
\Out{ Applying instance of rewrite rule:}
\Out{ rev (?x # ?xs) == rev ?xs @ [?x]}
\Out{ Rewriting:}
\Out{ rev [x] == rev [] @ [x]}
\ttbreak
\Out{ Applying instance of rewrite rule:}
\Out{ rev [] == []}
\Out{ Rewriting:}
\Out{ rev [] == []}
\ttbreak
\Out{ Applying instance of rewrite rule:}
\Out{ [] @ ?y == ?y}
\Out{ Rewriting:}
\Out{ [] @ [x] == [x]}
\ttbreak
\Out{ Applying instance of rewrite rule:}
\Out{ ?x # ?t = ?t == False}
\Out{ Rewriting:}
\Out{ [x] = [] == False}
\ttbreak
\Out{ Level 1}
\Out{ rev [x] = []}
\Out{  1. False}
\end{ttbox}
In more complicated cases, the trace can be enormous, especially since
invocations of the simplifier are often nested (e.g.\ when solving conditions
of rewrite rules).

\subsection{How it works}
\label{sec:SimpHow}

\subsubsection{Higher-order patterns}

\subsubsection{Local assumptions}

\subsubsection{The preprocessor}

\section{Induction heuristics}
\label{sec:InductionHeuristics}

The purpose of this section is to illustrate some simple heuristics for
inductive proofs. The first one we have already mentioned in our initial
example:
\begin{quote}
{\em 1. Theorems about recursive functions are proved by induction.}
\end{quote}
In case the function has more than one argument
\begin{quote}
{\em 2. Do induction on argument number $i$ if the function is defined by
recursion in argument number $i$.}
\end{quote}
When we look at the proof of
\begin{ttbox}\makeatother
(xs @ ys) @ zs = xs @ (ys @ zs)
\end{ttbox}
in \S\ref{sec:intro-proof} we find (a) \texttt{\at} is recursive in the first
argument, (b) \texttt{xs} occurs only as the first argument of \texttt{\at},
and (c) both \texttt{ys} and \texttt{zs} occur at least once as the second
argument of \texttt{\at}. Hence it is natural to perform induction on
\texttt{xs}.

The key heuristic, and the main point of this section, is to
generalize the goal before induction. The reason is simple: if the goal is
too specific, the induction hypothesis is too weak to allow the induction
step to go through. Let us now illustrate the idea with an example.

We define a tail-recursive version of list-reversal,
i.e.\ one that can be compiled into a loop:
\begin{ttbox}
\input{Misc/Itrev.thy}\end{ttbox}
The behaviour of \texttt{itrev} is simple: it reverses its first argument by
stacking its elements onto the second argument, which is returned once the
first argument becomes empty.
We need to show that \texttt{itrev} does indeed reverse its first argument
provided the second one is empty:
\begin{ttbox}
\input{Misc/itrev1.ML}\end{ttbox}
There is no choice as to the induction variable, and we immediately simplify:
\begin{ttbox}\makeatother
\input{Misc/induct_auto.ML}\ttbreak
\Out{ 1. !!a list. itrev list [] = rev list\(\;\)==> itrev list [a] = rev list @ [a]}
\end{ttbox}
Just as predicted above, the overall goal, and hence the induction
hypothesis, is too weak to solve the induction step because of the fixed
\texttt{[]}. The corresponding heuristic:
\begin{quote}
{\em 3. Generalize goals for induction by replacing constants by variables.}
\end{quote}
Of course one cannot do this na\"{\i}vely: \texttt{itrev xs ys = rev xs} is
just not true --- the correct generalization is
\begin{ttbox}\makeatother
\input{Misc/itrev2.ML}\end{ttbox}
If \texttt{ys} is replaced by \texttt{[]}, the right-hand side simplifies to
\texttt{rev xs}, just as required.

In this particular instance it is easy to guess the right generalization,
but in more complex situations a good deal of creativity is needed. This is
the main source of complications in inductive proofs.

Although we now have two variables, only \texttt{xs} is suitable for
induction, and we repeat our above proof attempt. Unfortunately, we are still
not there:
\begin{ttbox}\makeatother
\Out{1. !!a list.}
\Out{      itrev list ys = rev list @ ys}
\Out{      ==> itrev list (a # ys) = rev list @ a # ys}
\end{ttbox}
The induction hypothesis is still too weak, but this time it takes no
intuition to generalize: the problem is that \texttt{ys} is fixed throughout
the subgoal, but the induction hypothesis needs to be applied with
\texttt{a \# ys} instead of \texttt{ys}. Hence we prove the theorem
for all \texttt{ys} instead of a fixed one:
\begin{ttbox}\makeatother
\input{Misc/itrev3.ML}\end{ttbox}
This time induction on \texttt{xs} followed by simplification succeeds. This
leads to another heuristic for generalization:
\begin{quote}
{\em 4. Generalize goals for induction by universally quantifying all free
variables {\em(except the induction variable itself!)}.}
\end{quote}
This prevents trivial failures like the above and does not change the
provability of the goal. Because it is not always required, and may even
complicate matters in some cases, this heuristic is often not
applied blindly.

A final point worth mentioning is the orientation of the equation we just
proved: the more complex notion (\texttt{itrev}) is on the left-hand
side, the simpler \texttt{rev} on the right-hand side. This constitutes
another, albeit weak, heuristic that is not restricted to induction:
\begin{quote}
  {\em 5. The right-hand side of an equation should (in some sense) be
    simpler than the left-hand side.}
\end{quote}
The heuristic is tricky to apply because it is not obvious that
\texttt{rev xs \at\ ys} is simpler than \texttt{itrev xs ys}. But see what
happens if you try to prove the symmetric equation!


\section{Case study: compiling expressions}
\label{sec:ExprCompiler}

The task is to develop a compiler from a generic type of expressions (built
up from variables, constants and binary operations) to a stack machine.  This
generic type of expressions is a generalization of the boolean expressions in
\S\ref{sec:boolex}.  This time we do not commit ourselves to a particular
type of variables or values but make them type parameters.  Neither is there
a fixed set of binary operations: instead the expression contains the
appropriate function itself.
\begin{ttbox}
\input{CodeGen/expr}\end{ttbox}
The three constructors represent constants, variables and the combination of
two subexpressions with a binary operation.

The value of an expression w.r.t.\ an environment that maps variables to
values is easily defined:
\begin{ttbox}
\input{CodeGen/value}\end{ttbox}

The stack machine has three instructions: load a constant value onto the
stack, load the contents of a certain address onto the stack, and apply a
binary operation to the two topmost elements of the stack, replacing them by
the result. As for \texttt{expr}, addresses and values are type parameters:
\begin{ttbox}
\input{CodeGen/instr}\end{ttbox}

The execution of the stack machine is modelled by a function \texttt{exec}
that takes a store (modelled as a function from addresses to values, just
like the environment for evaluating expressions), a stack (modelled as a
list) of values and a list of instructions and returns the stack at the end
of the execution --- the store remains unchanged:
\begin{ttbox}
\input{CodeGen/exec}\end{ttbox}
Recall that \texttt{hd} and \texttt{tl}
return the first element and the remainder of a list.

Because all functions are total, \texttt{hd} is defined even for the empty
list, although we do not know what the result is. Thus our model of the
machine always terminates properly, although the above definition does not
tell us much about the result in situations where \texttt{Apply} was executed
with fewer than two elements on the stack.

The compiler is a function from expressions to a list of instructions. Its
definition is pretty much obvious:
\begin{ttbox}\makeatother
\input{CodeGen/comp}\end{ttbox}

Now we have to prove the correctness of the compiler, i.e.\ that the
execution of a compiled expression results in the value of the expression:
\begin{ttbox}
exec s [] (comp e) = [value s e]
\end{ttbox}
This is generalized to
\begin{ttbox}
\input{CodeGen/goal2.ML}\end{ttbox}
and proved by induction on \texttt{e} followed by simplification, once we
have the following lemma about executing the concatenation of two instruction
sequences:
\begin{ttbox}\makeatother
\input{CodeGen/goal1.ML}\end{ttbox}
This requires induction on \texttt{xs} and ordinary simplification for the
base cases. In the induction step, simplification leaves us with a formula
that contains two \texttt{case}-expressions over instructions. Thus we add
automatic case splitting as well, which finishes the proof:
\begin{ttbox}
\input{CodeGen/simpsplit.ML}\end{ttbox}

We could now go back and prove \texttt{exec s [] (comp e) = [value s e]}
merely by simplification with the generalized version we just proved.
However, this is unnecessary because the generalized version fully subsumes
its instance.


\section{Advanced datatypes}
\index{*datatype|(}
\index{*primrec|(}

This section presents advanced forms of \texttt{datatype}s and (in the near
future!) records.

\subsection{Mutual recursion}
\label{sec:datatype-mut-rec}

Sometimes it is necessary to define two datatypes that depend on each
other. This is called \textbf{mutual recursion}. As an example consider a
language of arithmetic and boolean expressions where
\begin{itemize}
\item arithmetic expressions contain boolean expressions because there are
  conditional arithmetic expressions like ``if $m<n$ then $n-m$ else $m-n$'',
  and
\item boolean expressions contain arithmetic expressions because of
  comparisons like ``$m<n$''.
\end{itemize}
In Isabelle this becomes
\begin{ttbox}
\input{Datatype/abdata}\end{ttbox}\indexbold{*and}
Type \texttt{aexp} is similar to \texttt{expr} in \S\ref{sec:ExprCompiler},
except that we have fixed the values to be of type \texttt{nat} and that we
have fixed the two binary operations \texttt{Sum} and \texttt{Diff}. Boolean
expressions can be arithmetic comparisons, conjunctions and negations.
The semantics is fixed via two evaluation functions
\begin{ttbox}
\input{Datatype/abconstseval}\end{ttbox}
that take an environment (a mapping from variables \texttt{'a} to values
\texttt{nat}) and an expression and return its arithmetic/boolean
value. Since the datatypes are mutually recursive, so are functions that
operate on them. Hence they need to be defined in a single \texttt{primrec}
section:
\begin{ttbox}
\input{Datatype/abevala}
\input{Datatype/abevalb}\end{ttbox}

%In general, given $n$ mutually recursive datatypes, whenever you define a
%\texttt{primrec} function on one of them, Isabelle expects you to define $n$
%(possibly mutually recursive) functions simultaneously.

In the same fashion we also define two functions that perform substitution:
\begin{ttbox}
\input{Datatype/abconstssubst}\end{ttbox}
The first argument is a function mapping variables to expressions, the
substitution. It is applied to all variables in the second argument. As a
result, the type of variables in the expression may change from \texttt{'a}
to \texttt{'b}. Note that there are only arithmetic and no boolean variables.
\begin{ttbox}
\input{Datatype/absubsta}
\input{Datatype/absubstb}\end{ttbox}

Now we can prove a fundamental theorem about the interaction between
evaluation and substitution: applying a substitution $s$ to an expression $a$
and evaluating the result in an environment $env$ yields the same result as
evaluating $a$ in the environment that maps every variable $x$ to the value
of $s(x)$ under $env$. If you try to prove this separately for arithmetic or
boolean expressions (by induction), you find that you always need the other
theorem in the induction step. Therefore you need to state and prove both
theorems simultaneously:
\begin{quote}\small
\verbatiminput{Datatype/abgoalind.ML}
\end{quote}
The resulting eight goals (one for each constructor) are proved in one fell
swoop with the command \texttt{by Auto_tac;}.

In general, given $n$ mutually recursive datatypes $\tau@1$, \dots, $\tau@n$,
Isabelle expects an inductive proof to start with a goal of the form
\[ P@1(x@1)\ \texttt{\&}\ \dots\ \texttt{\&}\ P@n(x@n) \]
where each variable $x@i$ is of type $\tau@i$. Induction is started by
\begin{ttbox}
by(induct_tac "\(x@1\) \(\dots\) \(x@n\)" \(k\));
\end{ttbox}

\begin{exercise}
  Define a function \texttt{norma} of type \texttt{'a aexp => 'a aexp} that
  replaces \texttt{IF}s with complex boolean conditions by nested
  \texttt{IF}s where each condition is a \texttt{Less} --- \texttt{And} and
  \texttt{Neg} should be eliminated completely. Prove that \texttt{norma}
  preserves the value of an expression and that the result of \texttt{norma}
  is really normal, i.e.\ no more \texttt{And}s and \texttt{Neg}s occur in
  it.  ({\em Hint:} proceed as in \S\ref{sec:boolex}).
\end{exercise}

\subsection{Nested recursion}

So far, all datatypes had the property that the type being defined occurred
only at the top level on the right-hand side of its definition, i.e.\
directly below a constructor. This is no longer the case for the following
model of terms, where function symbols can be applied to a list of arguments:
\begin{ttbox}
\input{Datatype/tdata}\end{ttbox}
Parameter \texttt{'a} is the type of variables and \texttt{'b} the type of
function symbols.
A mathematical term like $f(x,g(y))$ becomes \texttt{App f [Var x, App g
  [Var y]]}, where \texttt{f}, \texttt{g}, \texttt{x}, \texttt{y} are
suitable values, e.g.\ numbers or strings.

What complicates the definition of \texttt{term} is the nested occurrence of
\texttt{term} inside \texttt{list} on the right-hand side. In principle,
nested recursion can be eliminated in favour of mutual recursion by unfolding
the offending datatypes, here \texttt{list}. The result for \texttt{term}
would be something like
\begin{ttbox}
\input{Datatype/tunfoldeddata}\end{ttbox}
Although we do not recommend performing this unfolding yourself, it shows how
to simulate nested recursion by mutual recursion.
Now we return to the initial definition of \texttt{term} using
nested recursion.

Let us define a substitution function on terms. Because terms involve term
lists, we need to define two substitution functions simultaneously:
\begin{ttbox}
\input{Datatype/tconstssubst}
\input{Datatype/tsubst}
\input{Datatype/tsubsts}\end{ttbox}
Similarly, when proving a statement about terms inductively, we need
to prove a related statement about term lists simultaneously. For example,
the fact that the identity substitution does not change a term needs to be
strengthened and proved as follows:
\begin{quote}\small
\verbatiminput{Datatype/tidproof.ML}
\end{quote}
Note that \texttt{Var} is the identity substitution because by definition it
leaves variables unchanged: \texttt{subst Var (Var $x$) = Var $x$}. Note also
that the type annotations are necessary because otherwise there is nothing in
the goal to enforce that both halves of the goal talk about the same type
parameters \texttt{('a,'b)}. As a result, induction would fail
because the two halves of the goal would be unrelated.

\begin{exercise}
The fact that substitution distributes over composition can be expressed
roughly as follows:
\begin{ttbox}
subst (f o g) t = subst f (subst g t)
\end{ttbox}
Correct this statement (you will find that it does not type-check),
strengthen it and prove it. (Note: \texttt{o} is function composition; its
definition is found in the theorem \texttt{o_def}).
\end{exercise}


Of course, you may also combine mutual and nested recursion. For example,
constructor \texttt{Sum} in \S\ref{sec:datatype-mut-rec} could take a list of
expressions as its argument: \texttt{Sum ('a aexp list)}.
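To illustrate, the combined declaration might look roughly as follows, where
only the argument type of \texttt{Sum} changes (the constructor names
\texttt{Var} and \texttt{Num} are merely illustrative and need not match the
declaration in the theory file):
\begin{ttbox}
datatype 'a aexp = IF   ('a bexp) ('a aexp) ('a aexp)
                 | Sum  ('a aexp list)
                 | Diff ('a aexp) ('a aexp)
                 | Var  'a
                 | Num  nat
and      'a bexp = Less ('a aexp) ('a aexp)
                 | And  ('a bexp) ('a bexp)
                 | Neg  ('a bexp)
\end{ttbox}
Functions on this type must again be defined simultaneously with a companion
function on \texttt{'a aexp list}, just as \texttt{subst} was paired with a
function on term lists above.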


\subsection{The limits of nested recursion}

How far can we push nested recursion? By the unfolding argument above, we can
reduce nested to mutual recursion provided the nested recursion only involves
previously defined datatypes. The \texttt{term} example above involves lists,
which is fine because lists are also datatypes (as are products).
However, functions are most emphatically not allowed:
\begin{ttbox}
datatype t = C (t => bool)
\end{ttbox}
is a real can of worms: in HOL it must be ruled out because it requires a
type \texttt{t} such that \texttt{t} and its power set \texttt{t => bool}
have the same cardinality---an impossibility, by Cantor's theorem.
In theory, we could allow limited forms of recursion involving function
spaces. For example
\begin{ttbox}
datatype t = C (nat => t) | D
\end{ttbox}
is unproblematic. However, Isabelle does not support recursion involving
\texttt{=>} at all at the moment.

For a theoretical analysis of what kinds of datatypes are feasible in HOL
see, for example,~\cite{Gunter-HOL92}. There are alternatives to pure HOL:
LCF~\cite{paulson87} is a logic where types like
\begin{ttbox}
datatype t = C (t -> t)
\end{ttbox}
do indeed make sense (note the intentionally different arrow \texttt{->}!).
There is even a version of LCF on top of HOL, called
HOLCF~\cite{MuellerNvOS99}.

\index{*primrec|)}
\index{*datatype|)}

\subsection{Case study: Tries}

Tries are a classic search tree data structure~\cite{Knuth3-75} for fast
indexing with strings. Figure~\ref{fig:trie} gives a graphical example of a
trie containing the words ``all'', ``an'', ``ape'', ``can'', ``car'' and
``cat''.  When searching for a string in a trie, the letters of the string are
examined sequentially. Each letter determines which subtrie to search next.
In this case study we model tries as a datatype, define a lookup and an
update function, and prove that they behave as expected.

\begin{figure}[htbp]
\begin{center}
\unitlength1mm
\begin{picture}(60,30)
\put( 5, 0){\makebox(0,0)[b]{l}}
\put(25, 0){\makebox(0,0)[b]{e}}
\put(35, 0){\makebox(0,0)[b]{n}}
\put(45, 0){\makebox(0,0)[b]{r}}
\put(55, 0){\makebox(0,0)[b]{t}}
%
\put( 5, 9){\line(0,-1){5}}
\put(25, 9){\line(0,-1){5}}
\put(44, 9){\line(-3,-2){9}}
\put(45, 9){\line(0,-1){5}}
\put(46, 9){\line(3,-2){9}}
%
\put( 5,10){\makebox(0,0)[b]{l}}
\put(15,10){\makebox(0,0)[b]{n}}
\put(25,10){\makebox(0,0)[b]{p}}
\put(45,10){\makebox(0,0)[b]{a}}
%
\put(14,19){\line(-3,-2){9}}
\put(15,19){\line(0,-1){5}}
\put(16,19){\line(3,-2){9}}
\put(45,19){\line(0,-1){5}}
%
\put(15,20){\makebox(0,0)[b]{a}}
\put(45,20){\makebox(0,0)[b]{c}}
%
\put(30,30){\line(-3,-2){13}}
\put(30,30){\line(3,-2){13}}
\end{picture}
\end{center}
\caption{A sample trie}
\label{fig:trie}
\end{figure}

Proper tries associate some value with each string. Since the
information is stored only in the final node associated with the string, many
nodes do not carry any value. This distinction is captured by the
following predefined datatype (from theory \texttt{Option}, which is a parent
of \texttt{Main}):
\begin{ttbox}
datatype 'a option = None | Some 'a
\end{ttbox}\indexbold{*option}\indexbold{*None}\indexbold{*Some}

To minimize running time, each node of a trie should contain an array that maps
letters to subtries. We have chosen a more space-efficient representation
where the subtries are held in an association list, i.e.\ a list of
(letter,trie) pairs.  Abstracting over the alphabet \texttt{'a} and the
values \texttt{'v} we define a trie as follows:
\begin{ttbox}
\input{Datatype/trie}\end{ttbox}
The first component is the optional value, the second component the
association list of subtries. We define two corresponding selector functions:
\begin{ttbox}
\input{Datatype/triesels}\end{ttbox}
Association lists come with a generic lookup function:
\begin{ttbox}
\input{Datatype/assoc}\end{ttbox}

Now we can define the lookup function for tries. It descends into the trie
examining the letters of the search string one by one. As
recursion on lists is simpler than on tries, let us express this as primitive
recursion on the search string argument:
\begin{ttbox}
\input{Datatype/lookup}\end{ttbox}
As a first simple property we prove that looking up a string in the empty
trie \texttt{Trie~None~[]} always returns \texttt{None}. The proof merely
distinguishes the two cases whether the search string is empty or not:
\begin{ttbox}
\input{Datatype/lookupempty.ML}\end{ttbox}
This lemma is added to the simpset as usual.

Things begin to get interesting with the definition of an update function
that adds a new (string,value) pair to a trie, overwriting the old value
associated with that string:
\begin{ttbox}
\input{Datatype/update}\end{ttbox}
The base case is obvious. In the recursive case the subtrie
\texttt{tt} associated with the first letter \texttt{a} is extracted,
recursively updated, and then placed in front of the association list.
The old subtrie associated with \texttt{a} is still in the association list
but no longer accessible via \texttt{assoc}. Clearly, there is room here for
optimizations!

Our main goal is to prove the correct interaction of \texttt{update} and
\texttt{lookup}:
\begin{quote}\small
\verbatiminput{Datatype/triemain.ML}
\end{quote}
Our plan will be to induct on \texttt{as}; hence the remaining variables are
quantified. From the definitions it is clear that induction on either
\texttt{as} or \texttt{bs} is required. The choice of \texttt{as} is merely
guided by the intuition that simplification of \texttt{lookup} might be easier
if \texttt{update} has already been simplified, which can only happen if
\texttt{as} is instantiated.
The start of the proof is completely conventional:
\begin{ttbox}
\input{Datatype/trieinduct.ML}\end{ttbox}
Unfortunately, this time we are left with three intimidating-looking subgoals:
\begin{ttbox}
\Out{1. ... ==> ... lookup (...) bs = lookup t bs}
\Out{2. ... ==> ... lookup (...) bs = lookup t bs}
\Out{3. ... ==> ... lookup (...) bs = lookup t bs}
\end{ttbox}
Clearly, if we want to make headway we now have to instantiate \texttt{bs} as
well. It turns out that instead of induction, case distinction
suffices:
\begin{ttbox}
\input{Datatype/trieexhaust.ML}\end{ttbox}
The {\em tactical} \texttt{ALLGOALS} merely applies the tactic following it
to all subgoals, which results in a total of six subgoals; \texttt{Auto_tac}
solves them all.

This proof may look surprisingly straightforward. The reason is that we
have told the simplifier (without telling the reader) to expand all
\texttt{let}s and to split all \texttt{case}-constructs over options before
we started on the main goal:
\begin{ttbox}
\input{Datatype/trieAdds.ML}\end{ttbox}

\begin{exercise}
  Write an improved version of \texttt{update} that does not suffer from the
  space leak in the version above. Prove the main theorem for your improved
  \texttt{update}.
\end{exercise}

\begin{exercise}
  Modify \texttt{update} so that it can also {\em delete} entries from a
  trie. It is up to you if you want to shrink tries if possible. Prove (a
  modified version of) the main theorem above.
\end{exercise}

\section{Total recursive functions}
\label{sec:recdef}
\index{*recdef|(}


Although many total functions have a natural primitive recursive definition,
this is not always the case. Arbitrary total recursive functions can be
defined by means of \texttt{recdef}: you can use full pattern-matching,
recursion need not involve datatypes, and termination is proved by showing
that the arguments of all recursive calls are smaller in a suitable
(user-supplied) sense.

\subsection{Defining recursive functions}

Here is a simple example, the Fibonacci function:
\begin{ttbox}
\input{Recdef/fib}\end{ttbox}
The definition of \texttt{fib} is accompanied by a \bfindex{measure function}
\texttt{\%n.$\;$n} that maps the argument of \texttt{fib} to a natural
number. The requirement is that in each equation the measure of the argument
on the left-hand side is strictly greater than the measure of the argument of
each recursive call. In the case of \texttt{fib} this is obviously true
because the measure function is the identity and \texttt{Suc(Suc~x)} is
strictly greater than both \texttt{x} and \texttt{Suc~x}.
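Spelled out for the last \texttt{fib}-equation, the termination conditions
Isabelle has to discharge amount to roughly the following two goals, both of
which the simplifier proves automatically:
\begin{ttbox}
x < Suc (Suc x)
Suc x < Suc (Suc x)
\end{ttbox}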

Slightly more interesting is the insertion of a fixed element
between any two elements of a list:
\begin{ttbox}
\input{Recdef/sep}\end{ttbox}
This time the measure is the length of the list, which decreases with the
recursive call; the first component of the argument tuple is irrelevant.

Pattern matching need not be exhaustive:
\begin{ttbox}
\input{Recdef/last}\end{ttbox}

Overlapping patterns are disambiguated by taking the order of equations into
account, just as in functional programming:
\begin{ttbox}
\input{Recdef/sep1}\end{ttbox}
This defines exactly the same function \texttt{sep} as further above.

\begin{warn}
  Currently \texttt{recdef} only takes the first argument of a (curried)
  recursive function into account. This means both the termination measure
  and pattern matching can only use that first argument. In general, you will
therefore have to combine several arguments into a tuple. If only one
argument is relevant for termination, you can also rearrange the order of
arguments as in
\begin{ttbox}
\input{Recdef/sep2}\end{ttbox}
\end{warn}

When loading a theory containing a \texttt{recdef} of a function $f$,
Isabelle proves the recursion equations and stores the result as a list of
theorems $f$.\texttt{rules}. This list can be viewed by typing
\begin{ttbox}
prths \(f\).rules;
\end{ttbox}
All of the above examples are simple enough that Isabelle can determine
automatically that the measure of the argument goes down in each recursive
call. In that case $f$.\texttt{rules} contains precisely the defining
equations.
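For instance, for the Fibonacci function above the list consists precisely of
the three defining equations, so the response should look roughly like this
(output paraphrased):
\begin{ttbox}
prths fib.rules;
\Out{fib 0 = 0}
\Out{fib (Suc 0) = 1}
\Out{fib (Suc (Suc x)) = fib x + fib (Suc x)}
\end{ttbox}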

In general, Isabelle may not be able to prove all termination conditions
automatically. For example, termination of
\begin{ttbox}
\input{Recdef/constsgcd}recdef gcd "measure (\%(m,n).n)"
  "gcd (m, n) = (if n=0 then m else gcd(n, m mod n))"
\end{ttbox}
relies on the lemma \texttt{mod_less_divisor}
\begin{ttbox}
0 < n ==> m mod n < n
\end{ttbox}
that is not part of the default simpset. As a result, Isabelle prints a
warning and \texttt{gcd.rules} contains a precondition:
\begin{ttbox}
(! m n. 0 < n --> m mod n < n) ==> gcd (m, n) = (if n=0 \dots)
\end{ttbox}
We need to instruct \texttt{recdef} to use an extended simpset to prove the
termination condition:
\begin{ttbox}
\input{Recdef/gcd}\end{ttbox}
This time everything works fine and \texttt{gcd.rules} contains precisely the
stated recursion equation for \texttt{gcd}.

When defining some nontrivial total recursive function, the first attempt
will usually generate a number of termination conditions, some of which may
require new lemmas to be proved in some of the parent theories. Those lemmas
can then be added to the simpset used by \texttt{recdef} for its
proofs, as shown for \texttt{gcd}.

Although all the above examples employ measure functions, \texttt{recdef}
allows arbitrary well-founded relations. For example, termination of
Ackermann's function requires the lexicographic product \texttt{**}:
\begin{ttbox}
\input{Recdef/ack}\end{ttbox}
For details see the manual~\cite{isabelle-HOL} and the examples in the
library.


\subsection{Deriving simplification rules}

Once we have succeeded in proving all termination conditions, we can start to
derive some theorems. In contrast to \texttt{primrec} definitions, which are
automatically added to the simpset, \texttt{recdef} rules must be included
explicitly, for example as in
\begin{ttbox}
Addsimps fib.rules;
\end{ttbox}
In contrast to \texttt{primrec}, however, some care is necessary now.
Although \texttt{gcd} is a total function, its defining equation leads to
nontermination of the simplifier: the subterm \texttt{gcd(n, m mod n)} on the
right-hand side can again be simplified by the same equation, and so on. The
reason is that the simplifier rewrites the \texttt{then} and \texttt{else}
branches of a conditional if the condition simplifies to neither
\texttt{True} nor \texttt{False}.  Therefore it is recommended to
derive an alternative formulation that replaces case distinctions on the
right-hand side by conditional equations. For \texttt{gcd} it means we have
to prove
\begin{ttbox}
           gcd (m, 0) = m
n ~= 0 ==> gcd (m, n) = gcd(n, m mod n)
\end{ttbox}
To avoid nontermination during those proofs, we have to resort to some
low-level tactics:
\begin{ttbox}
Goal "gcd(m,0) = m";
by(resolve_tac [trans] 1);
by(resolve_tac gcd.rules 1);
by(Simp_tac 1);
\end{ttbox}
At this point it is not necessary to understand what exactly
\texttt{resolve_tac} is doing. The main point is that the above proof works
not just for this one example but for \texttt{recdef} equations in general
(except that we then have to use \texttt{Asm_simp_tac} and
$f$\texttt{.rules}). Try the second \texttt{gcd}-equation above!
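If you do, the same recipe should yield a script along the following lines
(a sketch, not taken verbatim from the theory sources):
\begin{ttbox}
Goal "n ~= 0 ==> gcd(m,n) = gcd(n, m mod n)";
by(resolve_tac [trans] 1);    (* goals gcd(m,n) = ?t and ?t = gcd(n, m mod n) *)
by(resolve_tac gcd.rules 1);  (* ?t becomes the if-then-else expression *)
by(Asm_simp_tac 1);           (* the assumption selects the else branch *)
\end{ttbox}
Note \texttt{Asm_simp_tac} rather than \texttt{Simp_tac}: the final
simplification step needs the precondition to reduce the conditional.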


\subsection{Induction}

Assuming we have added the recursion equations (or some suitable derived
equations) to the simpset, we might like to prove something about our
function. Since the function is recursive, the natural proof principle is
again induction. But this time the structural form of induction that comes
with datatypes is unlikely to work well---otherwise we could have defined the
function by \texttt{primrec}. Therefore \texttt{recdef} automatically proves
a suitable induction rule $f$\texttt{.induct} that follows the recursion
pattern of the particular function $f$. Roughly speaking, it requires you to
prove for each \texttt{recdef} equation that the property you are trying to
establish holds for the left-hand side provided it holds for all recursive
calls on the right-hand side. Applying $f$\texttt{.induct} requires its
explicit instantiation. See \S\ref{sec:explicit-inst} for details.

\index{*recdef|)}