added Fun
authornipkow
Fri, 02 Nov 2007 08:26:01 +0100
changeset 25260 71b2a1fefb84
parent 25259 8d6b03eef9c9
child 25261 3dc292be0b54
added Fun
doc-src/TutorialI/Fun/ROOT.ML
doc-src/TutorialI/Fun/document/fun0.tex
doc-src/TutorialI/Fun/fun0.thy
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Fun/ROOT.ML	Fri Nov 02 08:26:01 2007 +0100
@@ -0,0 +1,2 @@
+use "../settings";
+use_thy "fun0";
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Fun/document/fun0.tex	Fri Nov 02 08:26:01 2007 +0100
@@ -0,0 +1,362 @@
+%
+\begin{isabellebody}%
+\def\isabellecontext{fun{\isadigit{0}}}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\begin{isamarkuptext}%
+\subsection{Definition}
+\label{sec:fun-examples}
+
+Here is a simple example, the \rmindex{Fibonacci function}:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{fun}\isamarkupfalse%
+\ fib\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequoteopen}nat\ {\isasymRightarrow}\ nat{\isachardoublequoteclose}\ \isakeyword{where}\isanewline
+\ \ {\isachardoublequoteopen}fib\ {\isadigit{0}}\ {\isacharequal}\ {\isadigit{0}}{\isachardoublequoteclose}\ {\isacharbar}\isanewline
+\ \ {\isachardoublequoteopen}fib\ {\isacharparenleft}Suc\ {\isadigit{0}}{\isacharparenright}\ {\isacharequal}\ {\isadigit{1}}{\isachardoublequoteclose}\ {\isacharbar}\isanewline
+\ \ {\isachardoublequoteopen}fib\ {\isacharparenleft}Suc{\isacharparenleft}Suc\ x{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ fib\ x\ {\isacharplus}\ fib\ {\isacharparenleft}Suc\ x{\isacharparenright}{\isachardoublequoteclose}%
+\begin{isamarkuptext}%
+\noindent
+This resembles definitions in ordinary functional programming languages. Note the obligatory
+\isacommand{where} and \isa{|}. Command \isacommand{fun} declares and
+defines the function in one go. Isabelle establishes termination automatically
+because \isa{fib}'s argument decreases in every recursive call.
+
+Slightly more interesting is the insertion of a fixed element
+between any two elements of a list:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{fun}\isamarkupfalse%
+\ sep\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequoteopen}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a\ list{\isachardoublequoteclose}\ \isakeyword{where}\isanewline
+\ \ {\isachardoublequoteopen}sep\ a\ {\isacharbrackleft}{\isacharbrackright}\ \ \ \ \ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isachardoublequoteclose}\ {\isacharbar}\isanewline
+\ \ {\isachardoublequoteopen}sep\ a\ {\isacharbrackleft}x{\isacharbrackright}\ \ \ \ {\isacharequal}\ {\isacharbrackleft}x{\isacharbrackright}{\isachardoublequoteclose}\ {\isacharbar}\isanewline
+\ \ {\isachardoublequoteopen}sep\ a\ {\isacharparenleft}x{\isacharhash}y{\isacharhash}zs{\isacharparenright}\ {\isacharequal}\ x\ {\isacharhash}\ a\ {\isacharhash}\ sep\ a\ {\isacharparenleft}y{\isacharhash}zs{\isacharparenright}{\isachardoublequoteclose}%
+\begin{isamarkuptext}%
+\noindent
+This time the length of the list decreases with the
+recursive call; the first argument is irrelevant for termination.
+
+Pattern matching\index{pattern matching!and \isacommand{fun}}
+need not be exhaustive and may employ wildcards:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{fun}\isamarkupfalse%
+\ last\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequoteopen}{\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a{\isachardoublequoteclose}\ \isakeyword{where}\isanewline
+\ \ {\isachardoublequoteopen}last\ {\isacharbrackleft}x{\isacharbrackright}\ \ \ \ \ \ {\isacharequal}\ x{\isachardoublequoteclose}\ {\isacharbar}\isanewline
+\ \ {\isachardoublequoteopen}last\ {\isacharparenleft}{\isacharunderscore}{\isacharhash}y{\isacharhash}zs{\isacharparenright}\ {\isacharequal}\ last\ {\isacharparenleft}y{\isacharhash}zs{\isacharparenright}{\isachardoublequoteclose}%
+\begin{isamarkuptext}%
+Overlapping patterns are disambiguated by taking the order of equations into
+account, just as in functional programming:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{fun}\isamarkupfalse%
+\ sep{\isadigit{1}}\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequoteopen}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a\ list{\isachardoublequoteclose}\ \isakeyword{where}\isanewline
+\ \ {\isachardoublequoteopen}sep{\isadigit{1}}\ a\ {\isacharparenleft}x{\isacharhash}y{\isacharhash}zs{\isacharparenright}\ {\isacharequal}\ x\ {\isacharhash}\ a\ {\isacharhash}\ sep{\isadigit{1}}\ a\ {\isacharparenleft}y{\isacharhash}zs{\isacharparenright}{\isachardoublequoteclose}\ {\isacharbar}\isanewline
+\ \ {\isachardoublequoteopen}sep{\isadigit{1}}\ {\isacharunderscore}\ xs\ \ \ \ \ \ \ {\isacharequal}\ xs{\isachardoublequoteclose}%
+\begin{isamarkuptext}%
+\noindent
+To guarantee that the second equation can only be applied if the first
+one does not match, Isabelle internally replaces the second equation
+by the two possibilities that are left: \isa{sep{\isadigit{1}}\ a\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}} and
+\isa{sep{\isadigit{1}}\ a\ {\isacharbrackleft}x{\isacharbrackright}\ {\isacharequal}\ {\isacharbrackleft}x{\isacharbrackright}}.  Thus the functions \isa{sep} and
+\isa{sep{\isadigit{1}}} are identical.
+
+Because of its pattern-matching syntax, \isacommand{fun} is also useful
+for the definition of non-recursive functions:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{fun}\isamarkupfalse%
+\ swap{\isadigit{1}}{\isadigit{2}}\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequoteopen}{\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a\ list{\isachardoublequoteclose}\ \isakeyword{where}\isanewline
+\ \ {\isachardoublequoteopen}swap{\isadigit{1}}{\isadigit{2}}\ {\isacharparenleft}x{\isacharhash}y{\isacharhash}zs{\isacharparenright}\ {\isacharequal}\ y{\isacharhash}x{\isacharhash}zs{\isachardoublequoteclose}\ {\isacharbar}\isanewline
+\ \ {\isachardoublequoteopen}swap{\isadigit{1}}{\isadigit{2}}\ zs\ \ \ \ \ \ \ {\isacharequal}\ zs{\isachardoublequoteclose}%
+\begin{isamarkuptext}%
+After a function~$f$ has been defined via \isacommand{fun},
+its defining equations (or variants derived from them) are available
+under the name $f$\isa{{\isachardot}simps} as theorems.
+For example, look (via \isacommand{thm}) at
+\isa{sep{\isachardot}simps} and \isa{sep{\isadigit{1}}{\isachardot}simps} to see that they define
+the same function. What is more, those equations are automatically declared as
+simplification rules.
+
+\subsection{Termination}
+
+Isabelle's automatic termination prover for \isacommand{fun} has a
+fixed notion of the \emph{size} (of type \isa{nat}) of an
+argument. The size of a natural number is the number itself. The size
+of a list is its length. In general, every datatype \isa{t} comes
+with its own size function (named \isa{t{\isachardot}size}) which counts the
+number of constructors in a term of type \isa{t} --- more precisely
+those constructors where one of the arguments is of the type itself:
+\isa{Suc} in the case of \isa{nat} and \isa{op\ {\isacharhash}} in the case
+of lists. A recursive function is accepted if \isacommand{fun} can
+show that the size of one fixed argument becomes smaller with each
+recursive call.
+
+More generally, \isa{fun} allows any \emph{lexicographic
+combination} of size measures in case there are multiple
+arguments. For example the following version of \rmindex{Ackermann's
+function} is accepted:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{fun}\isamarkupfalse%
+\ ack{\isadigit{2}}\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequoteopen}nat\ {\isasymRightarrow}\ nat\ {\isasymRightarrow}\ nat{\isachardoublequoteclose}\ \isakeyword{where}\isanewline
+\ \ {\isachardoublequoteopen}ack{\isadigit{2}}\ n\ {\isadigit{0}}\ {\isacharequal}\ Suc\ n{\isachardoublequoteclose}\ {\isacharbar}\isanewline
+\ \ {\isachardoublequoteopen}ack{\isadigit{2}}\ {\isadigit{0}}\ {\isacharparenleft}Suc\ m{\isacharparenright}\ {\isacharequal}\ ack{\isadigit{2}}\ {\isacharparenleft}Suc\ {\isadigit{0}}{\isacharparenright}\ m{\isachardoublequoteclose}\ {\isacharbar}\isanewline
+\ \ {\isachardoublequoteopen}ack{\isadigit{2}}\ {\isacharparenleft}Suc\ n{\isacharparenright}\ {\isacharparenleft}Suc\ m{\isacharparenright}\ {\isacharequal}\ ack{\isadigit{2}}\ {\isacharparenleft}ack{\isadigit{2}}\ n\ {\isacharparenleft}Suc\ m{\isacharparenright}{\isacharparenright}\ m{\isachardoublequoteclose}%
+\begin{isamarkuptext}%
+Thus the order of arguments has no influence on whether
+\isacommand{fun} can prove termination of a function. For more details
+see elsewhere~\cite{bulwahnKN07}.
+
+\subsection{Simplification}
+\label{sec:fun-simplification}
+
+Upon successful termination proof, the recursion equations become
+simplification rules, just as with \isacommand{primrec}.
+In most cases this works fine, but there is a subtle
+problem that must be mentioned: simplification may not
+terminate because of automatic splitting of \isa{if}.
+\index{*if expressions!splitting of}
+Let us look at an example:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{fun}\isamarkupfalse%
+\ gcd\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequoteopen}nat\ {\isasymtimes}\ nat\ {\isasymRightarrow}\ nat{\isachardoublequoteclose}\ \isakeyword{where}\isanewline
+\ \ {\isachardoublequoteopen}gcd{\isacharparenleft}m{\isacharcomma}n{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}if\ n{\isacharequal}{\isadigit{0}}\ then\ m\ else\ gcd{\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}{\isacharparenright}{\isachardoublequoteclose}%
+\begin{isamarkuptext}%
+\noindent
+The second argument decreases with each recursive call.
+The termination condition
+\begin{isabelle}%
+\ \ \ \ \ n\ {\isasymnoteq}\ {\isadigit{0}}\ {\isasymLongrightarrow}\ m\ mod\ n\ {\isacharless}\ n%
+\end{isabelle}
+is proved automatically because it is already present as a lemma in
+HOL\@.  Thus the recursion equation becomes a simplification
+rule. Of course rewriting with the equation does not terminate if we are allowed to unfold
+the recursive call inside the \isa{else} branch, which is why programming
+languages and our simplifier don't do that. Unfortunately the simplifier does
+something else that leads to the same problem: it splits 
+each \isa{if}-expression unless its
+condition simplifies to \isa{True} or \isa{False}.  For
+example, simplification reduces
+\begin{isabelle}%
+\ \ \ \ \ gcd\ {\isacharparenleft}m{\isacharcomma}\ n{\isacharparenright}\ {\isacharequal}\ k%
+\end{isabelle}
+in one step to
+\begin{isabelle}%
+\ \ \ \ \ {\isacharparenleft}if\ n\ {\isacharequal}\ {\isadigit{0}}\ then\ m\ else\ gcd\ {\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ k%
+\end{isabelle}
+where the condition cannot be reduced further, and splitting leads to
+\begin{isabelle}%
+\ \ \ \ \ {\isacharparenleft}n\ {\isacharequal}\ {\isadigit{0}}\ {\isasymlongrightarrow}\ m\ {\isacharequal}\ k{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}n\ {\isasymnoteq}\ {\isadigit{0}}\ {\isasymlongrightarrow}\ gcd\ {\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}\ {\isacharequal}\ k{\isacharparenright}%
+\end{isabelle}
+Since the recursive call \isa{gcd\ {\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}} is no longer protected by
+an \isa{if}, it is unfolded again, which leads to an infinite chain of
+simplification steps. Fortunately, this problem can be avoided in many
+different ways.
+
+The most radical solution is to disable the offending theorem
+\isa{split{\isacharunderscore}if},
+as shown in \S\ref{sec:AutoCaseSplits}.  However, we do not recommend this
+approach: you will often have to invoke the rule explicitly when
+\isa{if} is involved.
+
+If possible, the definition should be given by pattern matching on the left
+rather than \isa{if} on the right. In the case of \isa{gcd} the
+following alternative definition suggests itself:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{fun}\isamarkupfalse%
+\ gcd{\isadigit{1}}\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequoteopen}nat\ {\isasymRightarrow}\ nat\ {\isasymRightarrow}\ nat{\isachardoublequoteclose}\ \isakeyword{where}\isanewline
+\ \ {\isachardoublequoteopen}gcd{\isadigit{1}}\ m\ {\isadigit{0}}\ {\isacharequal}\ m{\isachardoublequoteclose}\ {\isacharbar}\isanewline
+\ \ {\isachardoublequoteopen}gcd{\isadigit{1}}\ m\ n\ {\isacharequal}\ gcd{\isadigit{1}}\ n\ {\isacharparenleft}m\ mod\ n{\isacharparenright}{\isachardoublequoteclose}%
+\begin{isamarkuptext}%
+\noindent
+The order of equations is important: it hides the side condition
+\isa{n\ {\isasymnoteq}\ {\isadigit{0}}}.  Unfortunately, in general the case distinction
+may not be expressible by pattern matching.
+
+A simple alternative is to replace \isa{if} by \isa{case}, 
+which is also available for \isa{bool} and is not split automatically:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{fun}\isamarkupfalse%
+\ gcd{\isadigit{2}}\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequoteopen}nat\ {\isasymRightarrow}\ nat\ {\isasymRightarrow}\ nat{\isachardoublequoteclose}\ \isakeyword{where}\isanewline
+\ \ {\isachardoublequoteopen}gcd{\isadigit{2}}\ m\ n\ {\isacharequal}\ {\isacharparenleft}case\ n{\isacharequal}{\isadigit{0}}\ of\ True\ {\isasymRightarrow}\ m\ {\isacharbar}\ False\ {\isasymRightarrow}\ gcd{\isadigit{2}}\ n\ {\isacharparenleft}m\ mod\ n{\isacharparenright}{\isacharparenright}{\isachardoublequoteclose}%
+\begin{isamarkuptext}%
+\noindent
+This is probably the neatest solution next to pattern matching, and it is
+always available.
+
+A final alternative is to replace the offending simplification rules by
+derived conditional ones. For \isa{gcd} it means we have to prove
+these lemmas:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{lemma}\isamarkupfalse%
+\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequoteopen}gcd{\isacharparenleft}m{\isacharcomma}{\isadigit{0}}{\isacharparenright}\ {\isacharequal}\ m{\isachardoublequoteclose}\isanewline
+%
+\isadelimproof
+%
+\endisadelimproof
+%
+\isatagproof
+\isacommand{apply}\isamarkupfalse%
+{\isacharparenleft}simp{\isacharparenright}\isanewline
+\isacommand{done}\isamarkupfalse%
+%
+\endisatagproof
+{\isafoldproof}%
+%
+\isadelimproof
+\isanewline
+%
+\endisadelimproof
+\isanewline
+\isacommand{lemma}\isamarkupfalse%
+\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequoteopen}n\ {\isasymnoteq}\ {\isadigit{0}}\ {\isasymLongrightarrow}\ gcd{\isacharparenleft}m{\isacharcomma}n{\isacharparenright}\ {\isacharequal}\ gcd{\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}{\isachardoublequoteclose}\isanewline
+%
+\isadelimproof
+%
+\endisadelimproof
+%
+\isatagproof
+\isacommand{apply}\isamarkupfalse%
+{\isacharparenleft}simp{\isacharparenright}\isanewline
+\isacommand{done}\isamarkupfalse%
+%
+\endisatagproof
+{\isafoldproof}%
+%
+\isadelimproof
+%
+\endisadelimproof
+%
+\begin{isamarkuptext}%
+\noindent
+Simplification terminates for these proofs because the condition of the \isa{if} simplifies to \isa{True} or \isa{False}.
+Now we can disable the original simplification rule:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{declare}\isamarkupfalse%
+\ gcd{\isachardot}simps\ {\isacharbrackleft}simp\ del{\isacharbrackright}%
+\begin{isamarkuptext}%
+\index{induction!recursion|(}
+\index{recursion induction|(}
+
+\subsection{Induction}
+\label{sec:fun-induction}
+
+Having defined a function we might like to prove something about it.
+Since the function is recursive, the natural proof principle is
+again induction. But this time the structural form of induction that comes
+with datatypes is unlikely to work well --- otherwise we could have defined the
+function by \isacommand{primrec}. Therefore \isacommand{fun} automatically
+proves a suitable induction rule $f$\isa{{\isachardot}induct} that follows the
+recursion pattern of the particular function $f$. We call this
+\textbf{recursion induction}. Roughly speaking, it
+requires you to prove for each \isacommand{fun} equation that the property
+you are trying to establish holds for the left-hand side provided it holds
+for all recursive calls on the right-hand side. Here is a simple example
+involving the predefined \isa{map} functional on lists:%
+\end{isamarkuptext}%
+\isamarkuptrue%
+\isacommand{lemma}\isamarkupfalse%
+\ {\isachardoublequoteopen}map\ f\ {\isacharparenleft}sep\ x\ xs{\isacharparenright}\ {\isacharequal}\ sep\ {\isacharparenleft}f\ x{\isacharparenright}\ {\isacharparenleft}map\ f\ xs{\isacharparenright}{\isachardoublequoteclose}%
+\isadelimproof
+%
+\endisadelimproof
+%
+\isatagproof
+%
+\begin{isamarkuptxt}%
+\noindent
+Note that \isa{map\ f\ xs}
+is the result of applying \isa{f} to all elements of \isa{xs}. We prove
+this lemma by recursion induction over \isa{sep}:%
+\end{isamarkuptxt}%
+\isamarkuptrue%
+\isacommand{apply}\isamarkupfalse%
+{\isacharparenleft}induct{\isacharunderscore}tac\ x\ xs\ rule{\isacharcolon}\ sep{\isachardot}induct{\isacharparenright}%
+\begin{isamarkuptxt}%
+\noindent
+The resulting proof state has three subgoals corresponding to the three
+clauses for \isa{sep}:
+\begin{isabelle}%
+\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}a{\isachardot}\ map\ f\ {\isacharparenleft}sep\ a\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}\ {\isacharequal}\ sep\ {\isacharparenleft}f\ a{\isacharparenright}\ {\isacharparenleft}map\ f\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}\isanewline
+\ {\isadigit{2}}{\isachardot}\ {\isasymAnd}a\ x{\isachardot}\ map\ f\ {\isacharparenleft}sep\ a\ {\isacharbrackleft}x{\isacharbrackright}{\isacharparenright}\ {\isacharequal}\ sep\ {\isacharparenleft}f\ a{\isacharparenright}\ {\isacharparenleft}map\ f\ {\isacharbrackleft}x{\isacharbrackright}{\isacharparenright}\isanewline
+\ {\isadigit{3}}{\isachardot}\ {\isasymAnd}a\ x\ y\ zs{\isachardot}\isanewline
+\isaindent{\ {\isadigit{3}}{\isachardot}\ \ \ \ }map\ f\ {\isacharparenleft}sep\ a\ {\isacharparenleft}y\ {\isacharhash}\ zs{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ sep\ {\isacharparenleft}f\ a{\isacharparenright}\ {\isacharparenleft}map\ f\ {\isacharparenleft}y\ {\isacharhash}\ zs{\isacharparenright}{\isacharparenright}\ {\isasymLongrightarrow}\isanewline
+\isaindent{\ {\isadigit{3}}{\isachardot}\ \ \ \ }map\ f\ {\isacharparenleft}sep\ a\ {\isacharparenleft}x\ {\isacharhash}\ y\ {\isacharhash}\ zs{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ sep\ {\isacharparenleft}f\ a{\isacharparenright}\ {\isacharparenleft}map\ f\ {\isacharparenleft}x\ {\isacharhash}\ y\ {\isacharhash}\ zs{\isacharparenright}{\isacharparenright}%
+\end{isabelle}
+The rest is pure simplification:%
+\end{isamarkuptxt}%
+\isamarkuptrue%
+\isacommand{apply}\isamarkupfalse%
+\ simp{\isacharunderscore}all\isanewline
+\isacommand{done}\isamarkupfalse%
+%
+\endisatagproof
+{\isafoldproof}%
+%
+\isadelimproof
+%
+\endisadelimproof
+%
+\begin{isamarkuptext}%
+Try proving the above lemma by structural induction, and you will find that you
+need an additional case distinction.
+
+In general, the format of invoking recursion induction is
+\begin{quote}
+\isacommand{apply}\isa{{\isacharparenleft}induct{\isacharunderscore}tac} $x@1 \dots x@n$ \isa{rule{\isacharcolon}} $f$\isa{{\isachardot}induct{\isacharparenright}}
+\end{quote}\index{*induct_tac (method)}%
+where $x@1~\dots~x@n$ is a list of free variables in the subgoal and $f$ the
+name of a function that takes an $n$-tuple. Usually the subgoal will
+contain the term $f(x@1,\dots,x@n)$ but this need not be the case. The
+induction rules do not mention $f$ at all. Here is \isa{sep{\isachardot}induct}:
+\begin{isabelle}
+{\isasymlbrakk}~{\isasymAnd}a.~P~a~[];\isanewline
+~~{\isasymAnd}a~x.~P~a~[x];\isanewline
+~~{\isasymAnd}a~x~y~zs.~P~a~(y~\#~zs)~{\isasymLongrightarrow}~P~a~(x~\#~y~\#~zs){\isasymrbrakk}\isanewline
+{\isasymLongrightarrow}~P~u~v%
+\end{isabelle}
+It merely says that in order to prove a property \isa{P} of \isa{u} and
+\isa{v} you need to prove it for the three cases where \isa{v} is the
+empty list, the singleton list, and the list with at least two elements.
+The final case has an induction hypothesis:  you may assume that \isa{P}
+holds for the tail of that list.
+\index{induction!recursion|)}
+\index{recursion induction|)}%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+\end{isabellebody}%
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "root"
+%%% End:
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/TutorialI/Fun/fun0.thy	Fri Nov 02 08:26:01 2007 +0100
@@ -0,0 +1,264 @@
+(*<*)
+theory fun0 imports Main begin
+(*>*)
+
+text{*
+\subsection{Definition}
+\label{sec:fun-examples}
+
+Here is a simple example, the \rmindex{Fibonacci function}:
+*}
+
+fun fib :: "nat \<Rightarrow> nat" where
+  "fib 0 = 0" |
+  "fib (Suc 0) = 1" |
+  "fib (Suc(Suc x)) = fib x + fib (Suc x)"
+
+text{*\noindent
+This resembles definitions in ordinary functional programming languages. Note the obligatory
+\isacommand{where} and \isa{|}. Command \isacommand{fun} declares and
+defines the function in one go. Isabelle establishes termination automatically
+because @{const fib}'s argument decreases in every recursive call.
+
+Slightly more interesting is the insertion of a fixed element
+between any two elements of a list:
+*}
+
+fun sep :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list" where
+  "sep a []     = []" |
+  "sep a [x]    = [x]" |
+  "sep a (x#y#zs) = x # a # sep a (y#zs)";
+
+text{*\noindent
+This time the length of the list decreases with the
+recursive call; the first argument is irrelevant for termination.
+
+Pattern matching\index{pattern matching!and \isacommand{fun}}
+need not be exhaustive and may employ wildcards:
+*}
+
+fun last :: "'a list \<Rightarrow> 'a" where
+  "last [x]      = x" |
+  "last (_#y#zs) = last (y#zs)"
+
+text{*
+Overlapping patterns are disambiguated by taking the order of equations into
+account, just as in functional programming:
+*}
+
+fun sep1 :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list" where
+  "sep1 a (x#y#zs) = x # a # sep1 a (y#zs)" |
+  "sep1 _ xs       = xs"
+
+text{*\noindent
+To guarantee that the second equation can only be applied if the first
+one does not match, Isabelle internally replaces the second equation
+by the two possibilities that are left: @{prop"sep1 a [] = []"} and
+@{prop"sep1 a [x] = [x]"}.  Thus the functions @{const sep} and
+@{const sep1} are identical.
+
+Because of its pattern-matching syntax, \isacommand{fun} is also useful
+for the definition of non-recursive functions:
+*}
+
+fun swap12 :: "'a list \<Rightarrow> 'a list" where
+  "swap12 (x#y#zs) = y#x#zs" |
+  "swap12 zs       = zs"
+
+text{*
+After a function~$f$ has been defined via \isacommand{fun},
+its defining equations (or variants derived from them) are available
+under the name $f$@{text".simps"} as theorems.
+For example, look (via \isacommand{thm}) at
+@{thm[source]sep.simps} and @{thm[source]sep1.simps} to see that they define
+the same function. What is more, those equations are automatically declared as
+simplification rules.
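+Concretely (an illustrative addition to the text), the two \isacommand{thm}
+commands below print both rule sets so they can be compared directly:
+*}
+
+(* diagnostic only: print the defining equations of sep and sep1 *)
+thm sep.simps
+thm sep1.simps
+
+text{*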
+
+\subsection{Termination}
+
+Isabelle's automatic termination prover for \isacommand{fun} has a
+fixed notion of the \emph{size} (of type @{typ nat}) of an
+argument. The size of a natural number is the number itself. The size
+of a list is its length. In general, every datatype @{text t} comes
+with its own size function (named @{text"t.size"}) which counts the
+number of constructors in a term of type @{text t} --- more precisely
+those constructors where one of the arguments is of the type itself:
+@{const Suc} in the case of @{typ nat} and @{const "op #"} in the case
+of lists. A recursive function is accepted if \isacommand{fun} can
+show that the size of one fixed argument becomes smaller with each
+recursive call.
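+For instance (an illustrative sketch added here, not part of the original
+text), a binary tree datatype comes with a size function that counts its
+@{text Node} constructors, which is enough for \isacommand{fun} to accept a
+structurally recursive definition such as @{text mirror}:
+*}
+
+(* illustration only: tree0.size counts the Node constructors,
+   and that count decreases in every recursive call of mirror *)
+datatype 'a tree0 = Tip | Node "'a tree0" 'a "'a tree0"
+
+fun mirror :: "'a tree0 \<Rightarrow> 'a tree0" where
+  "mirror Tip = Tip" |
+  "mirror (Node l x r) = Node (mirror r) x (mirror l)"
+
+text{*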
+
+More generally, @{text fun} allows any \emph{lexicographic
+combination} of size measures in case there are multiple
+arguments. For example the following version of \rmindex{Ackermann's
+function} is accepted: *}
+
+fun ack2 :: "nat \<Rightarrow> nat \<Rightarrow> nat" where
+  "ack2 n 0 = Suc n" |
+  "ack2 0 (Suc m) = ack2 (Suc 0) m" |
+  "ack2 (Suc n) (Suc m) = ack2 (ack2 n (Suc m)) m"
+
+text{* Thus the order of arguments has no influence on whether
+\isacommand{fun} can prove termination of a function. For more details
+see elsewhere~\cite{bulwahnKN07}.
+
+\subsection{Simplification}
+\label{sec:fun-simplification}
+
+Upon successful termination proof, the recursion equations become
+simplification rules, just as with \isacommand{primrec}.
+In most cases this works fine, but there is a subtle
+problem that must be mentioned: simplification may not
+terminate because of automatic splitting of @{text "if"}.
+\index{*if expressions!splitting of}
+Let us look at an example:
+*}
+
+fun gcd :: "nat \<times> nat \<Rightarrow> nat" where
+  "gcd(m,n) = (if n=0 then m else gcd(n, m mod n))"
+
+text{*\noindent
+The second argument decreases with each recursive call.
+The termination condition
+@{prop[display]"n ~= (0::nat) ==> m mod n < n"}
+is proved automatically because it is already present as a lemma in
+HOL\@.  Thus the recursion equation becomes a simplification
+rule. Of course rewriting with the equation does not terminate if we are allowed to unfold
+the recursive call inside the @{text else} branch, which is why programming
+languages and our simplifier don't do that. Unfortunately the simplifier does
+something else that leads to the same problem: it splits 
+each @{text "if"}-expression unless its
+condition simplifies to @{term True} or @{term False}.  For
+example, simplification reduces
+@{prop[display]"gcd(m,n) = k"}
+in one step to
+@{prop[display]"(if n=0 then m else gcd(n, m mod n)) = k"}
+where the condition cannot be reduced further, and splitting leads to
+@{prop[display]"(n=0 --> m=k) & (n ~= 0 --> gcd(n, m mod n)=k)"}
+Since the recursive call @{term"gcd(n, m mod n)"} is no longer protected by
+an @{text "if"}, it is unfolded again, which leads to an infinite chain of
+simplification steps. Fortunately, this problem can be avoided in many
+different ways.
+
+The most radical solution is to disable the offending theorem
+@{thm[source]split_if},
+as shown in \S\ref{sec:AutoCaseSplits}.  However, we do not recommend this
+approach: you will often have to invoke the rule explicitly when
+@{text "if"} is involved.
+
+If possible, the definition should be given by pattern matching on the left
+rather than @{text "if"} on the right. In the case of @{term gcd} the
+following alternative definition suggests itself:
+*}
+
+fun gcd1 :: "nat \<Rightarrow> nat \<Rightarrow> nat" where
+  "gcd1 m 0 = m" |
+  "gcd1 m n = gcd1 n (m mod n)"
+
+text{*\noindent
+The order of equations is important: it hides the side condition
+@{prop"n ~= (0::nat)"}.  Unfortunately, in general the case distinction
+may not be expressible by pattern matching.
+
+A simple alternative is to replace @{text "if"} by @{text case}, 
+which is also available for @{typ bool} and is not split automatically:
+*}
+
+fun gcd2 :: "nat \<Rightarrow> nat \<Rightarrow> nat" where
+  "gcd2 m n = (case n=0 of True \<Rightarrow> m | False \<Rightarrow> gcd2 n (m mod n))"
+
+text{*\noindent
+This is probably the neatest solution next to pattern matching, and it is
+always available.
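+As a quick check (an addition for illustration only), simplification of a
+@{text gcd2} goal now terminates; once the condition has simplified to
+@{term True}, the @{text case} collapses to the first branch:
+*}
+
+(* illustration: simp reduces gcd2 m 0 to m without looping *)
+lemma "gcd2 m 0 = m"
+apply(simp)
+done
+
+text{*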
+
+A final alternative is to replace the offending simplification rules by
+derived conditional ones. For @{term gcd} it means we have to prove
+these lemmas:
+*}
+
+lemma [simp]: "gcd(m,0) = m"
+apply(simp)
+done
+
+lemma [simp]: "n \<noteq> 0 \<Longrightarrow> gcd(m,n) = gcd(n, m mod n)"
+apply(simp)
+done
+
+text{*\noindent
+Simplification terminates for these proofs because the condition of the @{text
+"if"} simplifies to @{term True} or @{term False}.
+Now we can disable the original simplification rule:
+*}
+
+declare gcd.simps [simp del]
+
+text{*
+\index{induction!recursion|(}
+\index{recursion induction|(}
+
+\subsection{Induction}
+\label{sec:fun-induction}
+
+Having defined a function we might like to prove something about it.
+Since the function is recursive, the natural proof principle is
+again induction. But this time the structural form of induction that comes
+with datatypes is unlikely to work well --- otherwise we could have defined the
+function by \isacommand{primrec}. Therefore \isacommand{fun} automatically
+proves a suitable induction rule $f$@{text".induct"} that follows the
+recursion pattern of the particular function $f$. We call this
+\textbf{recursion induction}. Roughly speaking, it
+requires you to prove for each \isacommand{fun} equation that the property
+you are trying to establish holds for the left-hand side provided it holds
+for all recursive calls on the right-hand side. Here is a simple example
+involving the predefined @{term"map"} functional on lists:
+*}
+
+lemma "map f (sep x xs) = sep (f x) (map f xs)"
+
+txt{*\noindent
+Note that @{term"map f xs"}
+is the result of applying @{term"f"} to all elements of @{term"xs"}. We prove
+this lemma by recursion induction over @{term"sep"}:
+*}
+
+apply(induct_tac x xs rule: sep.induct)
+
+txt{*\noindent
+The resulting proof state has three subgoals corresponding to the three
+clauses for @{term"sep"}:
+@{subgoals[display,indent=0]}
+The rest is pure simplification:
+*}
+
+apply simp_all
+done
+
+text{*
+Try proving the above lemma by structural induction, and you will find that you
+need an additional case distinction.
+
+In general, the format of invoking recursion induction is
+\begin{quote}
+\isacommand{apply}@{text"(induct_tac"} $x@1 \dots x@n$ @{text"rule:"} $f$@{text".induct)"}
+\end{quote}\index{*induct_tac (method)}%
+where $x@1~\dots~x@n$ is a list of free variables in the subgoal and $f$ the
+name of a function that takes an $n$-tuple. Usually the subgoal will
+contain the term $f(x@1,\dots,x@n)$ but this need not be the case. The
+induction rules do not mention $f$ at all. Here is @{thm[source]sep.induct}:
+\begin{isabelle}
+{\isasymlbrakk}~{\isasymAnd}a.~P~a~[];\isanewline
+~~{\isasymAnd}a~x.~P~a~[x];\isanewline
+~~{\isasymAnd}a~x~y~zs.~P~a~(y~\#~zs)~{\isasymLongrightarrow}~P~a~(x~\#~y~\#~zs){\isasymrbrakk}\isanewline
+{\isasymLongrightarrow}~P~u~v%
+\end{isabelle}
+It merely says that in order to prove a property @{term"P"} of @{term"u"} and
+@{term"v"} you need to prove it for the three cases where @{term"v"} is the
+empty list, the singleton list, and the list with at least two elements.
+The final case has an induction hypothesis:  you may assume that @{term"P"}
+holds for the tail of that list.
+\index{induction!recursion|)}
+\index{recursion induction|)}
+*}
+(*<*)
+end
+(*>*)