% doc-src/TutorialI/fp.tex
\chapter{Functional Programming in HOL}

Although on the surface this chapter is mainly concerned with how to write
functional programs in HOL and how to verify them, most of the
constructs and proof procedures introduced are general purpose and recur in
any specification or verification task.

The dedicated functional programmer should be warned: HOL offers only {\em
  total functional programming} --- all functions in HOL must be total; lazy
data structures are not directly available. On the positive side, functions
in HOL need not be computable: HOL is a specification language that goes well
beyond what can be expressed as a program. However, for the time being we
concentrate on the computable.

\section{An introductory theory}
\label{sec:intro-theory}

Functional programming needs datatypes and functions. Both of them can be
defined in a theory with a syntax reminiscent of languages like ML or
Haskell. As an example consider the theory in figure~\ref{fig:ToyList}.
We will now examine it line by line.

\begin{figure}[htbp]
\begin{ttbox}\makeatother
\input{ToyList2/ToyList1}\end{ttbox}
\caption{A theory of lists}
\label{fig:ToyList}
\end{figure}

{\makeatother\input{ToyList/document/ToyList.tex}}

The complete proof script is shown in figure~\ref{fig:ToyList-proofs}.
The
concatenation of figures \ref{fig:ToyList} and \ref{fig:ToyList-proofs}
constitutes the complete theory \texttt{ToyList} and should reside in file
\texttt{ToyList.thy}. It is good practice to present all declarations and
definitions at the beginning of a theory to facilitate browsing.

\begin{figure}[htbp]
\begin{ttbox}\makeatother
\input{ToyList2/ToyList2}\end{ttbox}
\caption{Proofs about lists}
\label{fig:ToyList-proofs}
\end{figure}

\subsubsection*{Review}

This is the end of our toy proof. It should have familiarized you with
\begin{itemize}
\item the standard theorem proving procedure:
state a goal (lemma or theorem); proceed with proof until a separate lemma is
required; prove that lemma; come back to the original goal.
\item a specific procedure that works well for functional programs:
induction followed by all-out simplification via \isa{auto}.
\item a basic repertoire of proof commands.
\end{itemize}


\section{Some helpful commands}
\label{sec:commands-and-hints}

This section discusses a few basic commands for manipulating the proof state
and can be skipped by casual readers.

There are two kinds of commands used during a proof: the actual proof
commands and auxiliary commands for examining the proof state and controlling
the display.
Simple proof commands are of the form
\isacommand{apply}\isa{(method)}\indexbold{apply} where \bfindex{method} is a
synonym for ``theorem proving function''. Typical examples are
\isa{induct_tac} and \isa{auto}. Further methods are introduced throughout
the tutorial. Unless stated otherwise you may assume that a method attacks
merely the first subgoal. An exception is \isa{auto}, which tries to solve all
subgoals.

The most useful auxiliary commands are:
\begin{description}
\item[Undoing:] \isacommand{undo}\indexbold{*undo} undoes the effect of the
  last command; \isacommand{undo} can be undone by
  \isacommand{redo}\indexbold{*redo}. Both are only needed at the shell
  level and should not occur in the final theory.
\item[Printing the current state:] \isacommand{pr}\indexbold{*pr} redisplays
  the current proof state, for example when it has disappeared off the
  screen.
\item[Limiting the number of subgoals:] \isacommand{pr}~$n$ tells Isabelle to
  print only the first $n$ subgoals from now on and redisplays the current
  proof state. This is helpful when there are many subgoals.
\item[Modifying the order of subgoals:]
\isacommand{defer}\indexbold{*defer} moves the first subgoal to the end and
\isacommand{prefer}\indexbold{*prefer}~$n$ moves subgoal $n$ to the front.
\item[Printing theorems:]
  \isacommand{thm}\indexbold{*thm}~\textit{name}$@1$~\dots~\textit{name}$@n$
  prints the named theorems.
\item[Displaying types:] We have already mentioned the flag
  \ttindex{show_types} above.
It can also be useful for detecting typos in
  formulae early on. For example, if \texttt{show_types} is set and the goal
  \isa{rev(rev xs) = xs} is started, Isabelle prints the additional output
\par\noindent
\begin{isabelle}%
Variables:\isanewline
~~xs~::~'a~list
\end{isabelle}%
\par\noindent
which tells us that Isabelle has correctly inferred that
\isa{xs} is a variable of list type. On the other hand, had we
made a typo as in \isa{rev(re xs) = xs}, the response
\par\noindent
\begin{isabelle}%
Variables:\isanewline
~~re~::~'a~list~{\isasymRightarrow}~'a~list\isanewline
~~xs~::~'a~list%
\end{isabelle}%
\par\noindent
would have alerted us because of the unexpected variable \isa{re}.
\item[Reading terms and types:] \isacommand{term}\indexbold{*term}
  \textit{string} reads, type-checks and prints the given string as a term in
  the current context; the inferred type is output as well.
  \isacommand{typ}\indexbold{*typ} \textit{string} reads and prints the given
  string as a type in the current context.
\item[(Re)loading theories:] When you start your interaction you must open a
  named theory with the line \isa{\isacommand{theory}~T~=~\dots~:}. Isabelle
  automatically loads all the required parent theories from their respective
  files (which may take a moment, unless the theories are already loaded and
  the files have not been modified).
The only time when you need to load a
  theory by hand is when you simply want to check if it loads successfully
  without wanting to make use of the theory itself. This you can do by typing
  \isa{\isacommand{use\_thy}\indexbold{*use_thy}~"T"}.

  If you suddenly discover that you need to modify a parent theory of your
  current theory you must first abandon your current theory (at the shell
  level type \isacommand{kill}\indexbold{*kill}). After the parent theory has
  been modified you go back to your original theory. When its first line
  \isacommand{theory}\texttt{~T~=}~\dots~\texttt{:} is processed, the
  modified parent is reloaded automatically.
\end{description}
Further commands are found in the Isabelle/Isar Reference Manual.

We now examine Isabelle's functional programming constructs systematically,
starting with inductive datatypes.


\section{Datatypes}
\label{sec:datatype}

Inductive datatypes are part of almost every non-trivial application of HOL.
First we take another look at a very important example, the datatype of
lists, before we turn to datatypes in general. The section closes with a
case study.


\subsection{Lists}
\indexbold{*list}

Lists are one of the essential datatypes in computing. Readers of this
tutorial and users of HOL need to be familiar with their basic operations.
Theory \isa{ToyList} is only a small fragment of HOL's predefined theory
\isa{List}\footnote{\url{http://isabelle.in.tum.de/library/HOL/List.html}}.
The latter contains many further operations. For example, the functions
\isaindexbold{hd} (``head'') and \isaindexbold{tl} (``tail'') return the first
element and the remainder of a list. (However, pattern-matching is usually
preferable to \isa{hd} and \isa{tl}.) Theory \isa{List} also contains
more syntactic sugar: \isa{[}$x@1$\isa{,}\dots\isa{,}$x@n$\isa{]} abbreviates
$x@1$\isa{\#}\dots\isa{\#}$x@n$\isa{\#[]}. In the rest of the tutorial we
always use HOL's predefined lists.


\subsection{The general format}
\label{sec:general-datatype}

The general HOL \isacommand{datatype} definition is of the form
\[
\isacommand{datatype}~(\alpha@1, \dots, \alpha@n) \, t ~=~
C@1~\tau@{11}~\dots~\tau@{1k@1} ~\mid~ \dots ~\mid~
C@m~\tau@{m1}~\dots~\tau@{mk@m}
\]
where $\alpha@i$ are distinct type variables (the parameters), $C@i$ are distinct
constructor names and $\tau@{ij}$ are types; it is customary to capitalize
the first letter in constructor names. There are a number of
restrictions (such as that the type should not be empty) detailed
elsewhere~\cite{isabelle-HOL}. Isabelle notifies you if you violate them.

Laws about datatypes, such as \isa{[] \isasymnoteq~x\#xs} and
\isa{(x\#xs = y\#ys) = (x=y \isasymand~xs=ys)}, are used automatically
during proofs by simplification.
The same is true for the equations in
primitive recursive function definitions.

\subsection{Primitive recursion}

Functions on datatypes are usually defined by recursion. In fact, most of the
time they are defined by what is called \bfindex{primitive recursion}.
The keyword \isacommand{primrec}\indexbold{*primrec} is followed by a list of
equations
\[ f \, x@1 \, \dots \, (C \, y@1 \, \dots \, y@k)\, \dots \, x@n = r \]
such that $C$ is a constructor of the datatype $t$ and all recursive calls of
$f$ in $r$ are of the form $f \, \dots \, y@i \, \dots$ for some $i$. Thus
Isabelle immediately sees that $f$ terminates because one (fixed!) argument
becomes smaller with every recursive call. There must be exactly one equation
for each constructor. Their order is immaterial.
A more general method for defining total recursive functions is introduced in
\S\ref{sec:recdef}.

\begin{exercise}
\input{Misc/document/Tree.tex}%
\end{exercise}

\subsection{Case expressions}
\label{sec:case-expressions}

HOL also features \isaindexbold{case}-expressions for analyzing
elements of a datatype. For example,
% case xs of [] => 0 | y#ys => y
\begin{isabellepar}%
~~~case~xs~of~[]~{\isasymRightarrow}~0~|~y~\#~ys~{\isasymRightarrow}~y
\end{isabellepar}%
evaluates to \isa{0} if \isa{xs} is \isa{[]} and to \isa{y} if
\isa{xs} is \isa{y\#ys}.
(Since the result in both branches must be of
the same type, it follows that \isa{y::nat} and hence
\isa{xs::(nat)list}.)

In general, if $e$ is a term of the datatype $t$ defined in
\S\ref{sec:general-datatype} above, the corresponding
\isa{case}-expression analyzing $e$ is
\[
\begin{array}{rrcl}
\isa{case}~e~\isa{of} & C@1~x@{11}~\dots~x@{1k@1} & \To & e@1 \\
                      \vdots \\
               \mid   & C@m~x@{m1}~\dots~x@{mk@m} & \To & e@m
\end{array}
\]

\begin{warn}
{\em All} constructors must be present, their order is fixed, and nested
patterns are not supported. Violating these restrictions results in strange
error messages.
\end{warn}
\noindent
Nested patterns can be simulated by nested \isa{case}-expressions: instead
of
% case xs of [] => 0 | [x] => x | x#(y#zs) => y
\begin{isabellepar}%
~~~case~xs~of~[]~{\isasymRightarrow}~0~|~[x]~{\isasymRightarrow}~x~|~x~\#~y~\#~zs~{\isasymRightarrow}~y
\end{isabellepar}%
write
% term(latex xsymbols symbols)"case xs of [] => 0 | x#ys => (case ys of [] => x | y#zs => y)";
\begin{isabellepar}%
~~~case~xs~of~[]~{\isasymRightarrow}~0~|~x~\#~ys~{\isasymRightarrow}~(case~ys~of~[]~{\isasymRightarrow}~x~|~y~\#~zs~{\isasymRightarrow}~y)%
\end{isabellepar}%
Note that \isa{case}-expressions should be enclosed in parentheses to
indicate their scope.
\subsection{Structural induction and case distinction}
\indexbold{structural induction}
\indexbold{induction!structural}
\indexbold{case distinction}

Almost all the basic laws about a datatype are applied automatically during
simplification. Only induction is invoked by hand via \isaindex{induct_tac},
which works for any datatype. In some cases, induction is overkill and a case
distinction over all constructors of the datatype suffices. This is performed
by \isaindexbold{case_tac}. A trivial example:

\input{Misc/document/cases.tex}%

Note that we do not need to give a lemma a name if we do not intend to refer
to it explicitly in the future.

\begin{warn}
  Induction is only allowed on free (or \isasymAnd-bound) variables that
  should not occur among the assumptions of the subgoal. Case distinction
  (\isa{case_tac}) works for arbitrary terms, which need to be
  quoted if they are non-atomic.
\end{warn}


\subsection{Case study: boolean expressions}
\label{sec:boolex}

The aim of this case study is twofold: it shows how to model boolean
expressions and some algorithms for manipulating them, and it demonstrates
the constructs introduced above.

\input{Ifexpr/document/Ifexpr.tex}
\medskip

How does one come up with the required lemmas? Try to prove the main theorems
without them and study carefully what \isa{auto} leaves unproved.
This
should provide the clue. The necessity of universal quantification
(\isa{\isasymforall{}t e}) in the two lemmas is explained in
\S\ref{sec:InductionHeuristics}.

\begin{exercise}
  We strengthen the definition of a \isa{normal} If-expression as follows:
  the first argument of all \isa{IF}s must be a variable. Adapt the above
  development to this changed requirement. (Hint: you may need to formulate
  some of the goals as implications (\isasymimp) rather than equalities
  (\isa{=}).)
\end{exercise}

\section{Some basic types}


\subsection{Natural numbers}
\index{arithmetic|(}

\input{Misc/document/fakenat.tex}
\input{Misc/document/natsum.tex}

The usual arithmetic operations \ttindexboldpos{+}{$HOL2arithfun},
\ttindexboldpos{-}{$HOL2arithfun}, \ttindexboldpos{*}{$HOL2arithfun},
\isaindexbold{div}, \isaindexbold{mod}, \isaindexbold{min} and
\isaindexbold{max} are predefined, as are the relations
\indexboldpos{\isasymle}{$HOL2arithrel} and
\ttindexboldpos{<}{$HOL2arithrel}. There is even a least number operation
\isaindexbold{LEAST}. For example, \isa{(LEAST n.$\,$1 < n) = 2}, although
Isabelle does not prove this completely automatically. Note that \isa{1} and
\isa{2} are available as abbreviations for the corresponding
\isa{Suc}-expressions. If you need the full set of numerals,
see~\S\ref{nat-numerals}.
\begin{warn}
  The operations \ttindexboldpos{+}{$HOL2arithfun},
  \ttindexboldpos{-}{$HOL2arithfun}, \ttindexboldpos{*}{$HOL2arithfun},
  \isaindexbold{min}, \isaindexbold{max},
  \indexboldpos{\isasymle}{$HOL2arithrel} and
  \ttindexboldpos{<}{$HOL2arithrel} are overloaded, i.e.\ they are available
  not just for natural numbers but at other types as well (see
  \S\ref{sec:TypeClasses}). For example, given the goal \isa{x+y = y+x},
  there is nothing to indicate that you are talking about natural numbers.
  Hence Isabelle can only infer that \isa{x} and \isa{y} are of some
  arbitrary type where \isa{+} is declared. As a consequence, you will be
  unable to prove the goal (although it may take you some time to realize
  what has happened if \texttt{show_types} is not set). In this particular
  example, you need to include an explicit type constraint, for example
  \isa{x+y = y+(x::nat)}. If there is enough contextual information this may
  not be necessary: \isa{x+0 = x} automatically implies \isa{x::nat}.
\end{warn}

Simple arithmetic goals are proved automatically by both \isa{auto}
and the simplification methods introduced in \S\ref{sec:Simplification}. For
example,

\input{Misc/document/arith1.tex}%
is proved automatically. The main restriction is that only addition is taken
into account; other arithmetic operations and quantified formulae are ignored.

For more complex goals, there is the special method
\isaindexbold{arith} (which attacks the first subgoal).
It proves
arithmetic goals involving the usual logical connectives (\isasymnot,
\isasymand, \isasymor, \isasymimp), the relations \isasymle\ and \isa{<}, and
the operations \isa{+}, \isa{-}, \isa{min} and \isa{max}. For example,

\input{Misc/document/arith2.tex}%
succeeds because \isa{k*k} can be treated as atomic.
In contrast,

\input{Misc/document/arith3.tex}%
is not even proved by \isa{arith} because the proof relies essentially
on properties of multiplication.

\begin{warn}
  The running time of \isa{arith} is exponential in the number of occurrences
  of \ttindexboldpos{-}{$HOL2arithfun}, \isaindexbold{min} and
  \isaindexbold{max} because they are first eliminated by case distinctions.

  \isa{arith} is incomplete even for the restricted class of formulae
  described above (known as ``linear arithmetic''). If divisibility plays a
  role, it may fail to prove a valid formula, for example $m+m \neq n+n+1$.
  Fortunately, such examples are rare in practice.
\end{warn}

\index{arithmetic|)}


\subsection{Products}

HOL also has pairs: \isa{($a@1$,$a@2$)} is of type \isa{$\tau@1$ *
$\tau@2$} provided each $a@i$ is of type $\tau@i$. The components of a pair
are extracted by \isa{fst} and \isa{snd}: \isa{fst($x$,$y$) = $x$} and
\isa{snd($x$,$y$) = $y$}.
Tuples are simulated by pairs nested to the right:
\isa{($a@1$,$a@2$,$a@3$)} stands for \isa{($a@1$,($a@2$,$a@3$))} and
\isa{$\tau@1$ * $\tau@2$ * $\tau@3$} for \isa{$\tau@1$ * ($\tau@2$ *
$\tau@3$)}. Therefore we have \isa{fst(snd($a@1$,$a@2$,$a@3$)) = $a@2$}.

It is possible to use (nested) tuples as patterns in abstractions, for
example \isa{\isasymlambda(x,y,z).x+y+z} and
\isa{\isasymlambda((x,y),z).x+y+z}.
In addition to explicit $\lambda$-abstractions, tuple patterns can be used in
most variable binding constructs. Typical examples are

\input{Misc/document/pairs.tex}%
Further important examples are quantifiers and sets (see~\S\ref{quant-pats}).

%FIXME move stuff below into section on proofs about products?
\begin{warn}
  Abstraction over pairs and tuples is merely a convenient shorthand for a
  more complex internal representation. Thus the internal and external form
  of a term may differ, which can affect proofs. If you want to avoid this
  complication, use \isa{fst} and \isa{snd}, i.e.\ write
  \isa{\isasymlambda{}p.~fst p + snd p} instead of
  \isa{\isasymlambda(x,y).~x + y}. See~\S\ref{proofs-about-products} for
  theorem proving with tuple patterns.
\end{warn}

Finally note that products, like natural numbers, are datatypes, which means
in particular that \isa{induct_tac} and \isa{case_tac} are applicable to
products (see \S\ref{proofs-about-products}).
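The right-nesting convention for tuples is easy to experiment with concretely. Here is a minimal Python sketch (our own illustration, not Isabelle's internal representation) in which every tuple is built from binary pairs, so that \isa{fst} and \isa{snd} suffice for all component access:

```python
# Right-nested pairs: the triple (a1, a2, a3) is represented as
# (a1, (a2, a3)), so projections on binary pairs reach every component.

def fst(p):
    return p[0]

def snd(p):
    return p[1]

triple = (1, (2, 3))  # stands for the triple (1, 2, 3)

assert fst(triple) == 1        # first component
assert fst(snd(triple)) == 2   # fst(snd(a1, a2, a3)) = a2
assert snd(snd(triple)) == 3   # last component: snd, not fst, at the end
```

Note the asymmetry in the last line: the final component is reached by \isa{snd} alone, because the nesting bottoms out in a pair rather than a unit.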
\section{Definitions}
\label{sec:Definitions}

A definition is simply an abbreviation, i.e.\ a new name for an existing
construction. In particular, definitions cannot be recursive. Isabelle offers
definitions on the level of types and terms. Those on the type level are
called type synonyms, those on the term level are called (constant)
definitions.


\subsection{Type synonyms}
\indexbold{type synonym}

Type synonyms are similar to those found in ML. Their syntax is fairly
self-explanatory:

\input{Misc/document/types.tex}%

Note that pattern-matching is not allowed, i.e.\ each definition must be of
the form $f\,x@1\,\dots\,x@n~\isasymequiv~t$.
Section~\ref{sec:Simplification} explains how definitions are used
in proofs.

\begin{warn}%
A common mistake when writing definitions is to introduce extra free
variables on the right-hand side as in the following fictitious definition:
\input{Misc/document/prime_def.tex}%
\end{warn}


\chapter{More Functional Programming}

The purpose of this chapter is to deepen the reader's understanding of the
concepts encountered so far and to introduce advanced forms of datatypes and
recursive functions.
The first two sections give a structured presentation of
theorem proving by simplification (\S\ref{sec:Simplification}) and discuss
important heuristics for induction (\S\ref{sec:InductionHeuristics}). They can
be skipped by readers less interested in proofs. They are followed by a case
study, a compiler for expressions (\S\ref{sec:ExprCompiler}). Advanced
datatypes, including those involving function spaces, are covered in
\S\ref{sec:advanced-datatypes}, which closes with another case study, search
trees (``tries''). Finally we introduce \isacommand{recdef}, a very general
form of recursive function definition which goes well beyond what
\isacommand{primrec} allows (\S\ref{sec:recdef}).


\section{Simplification}
\label{sec:Simplification}
\index{simplification|(}

So far we have proved our theorems by \isa{auto}, which ``simplifies'' all
subgoals. In fact, \isa{auto} can do much more than that, except that it
did not need to so far. However, when you go beyond toy examples, you need to
understand the ingredients of \isa{auto}. This section covers the method
that \isa{auto} always applies first, namely simplification.

Simplification is one of the central theorem proving tools in Isabelle and
many other systems. The tool itself is called the \bfindex{simplifier}. The
purpose of this section is twofold: to introduce the many features of the
simplifier (\S\ref{sec:SimpFeatures}) and to explain a little bit how the
simplifier works (\S\ref{sec:SimpHow}).
Anybody intending to use HOL should
read \S\ref{sec:SimpFeatures}, and the serious student should read
\S\ref{sec:SimpHow} as well in order to understand what happened in case
things do not simplify as expected.


\subsection{Using the simplifier}
\label{sec:SimpFeatures}

In its most basic form, simplification means repeated application of
equations from left to right. For example, taking the rules for \isa{\at}
and applying them to the term \isa{[0,1] \at\ []} results in a sequence of
simplification steps:
\begin{ttbox}\makeatother
(0#1#[]) @ []  \(\leadsto\)  0#((1#[]) @ [])  \(\leadsto\)  0#(1#([] @ []))  \(\leadsto\)  0#1#[]
\end{ttbox}
This is also known as \bfindex{term rewriting} and the equations are referred
to as \bfindex{rewrite rules}. ``Rewriting'' is more honest than ``simplification''
because the terms do not necessarily become simpler in the process.

\subsubsection{Simplification rules}
\indexbold{simplification rules}

To facilitate simplification, theorems can be declared to be simplification
rules (with the help of the attribute \isa{[simp]}\index{*simp
  (attribute)}), in which case proofs by simplification make use of these
rules automatically. In addition the constructs \isacommand{datatype} and
\isacommand{primrec} (and a few others) invisibly declare useful
simplification rules. Explicit definitions are \emph{not} declared
simplification rules automatically!
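The left-to-right rewriting process shown for \isa{\at} above is easy to reproduce mechanically. The following Python sketch (our own illustration, not the simplifier's actual implementation) normalizes list terms using exactly the two defining equations of \isa{\at} as rewrite rules:

```python
# Term rewriting sketch: apply the two defining equations of @ from left
# to right until no rule matches, reproducing
#   (0#1#[]) @ []  ~>  0#((1#[]) @ [])  ~>  0#(1#([] @ []))  ~>  0#1#[]
# Terms are tuples: ("nil",), ("cons", x, t), ("app", s, t).

NIL = ("nil",)

def cons(x, t): return ("cons", x, t)
def app(s, t): return ("app", s, t)

def rewrite_step(t):
    """Apply one rule at the leftmost-outermost redex; None if t is normal."""
    if t[0] == "app":
        s, u = t[1], t[2]
        if s[0] == "nil":                  # rule: [] @ ys = ys
            return u
        if s[0] == "cons":                 # rule: (x#xs) @ ys = x#(xs @ ys)
            return cons(s[1], app(s[2], u))
        r = rewrite_step(s)
        if r is not None:
            return app(r, u)
    if t[0] == "cons":
        r = rewrite_step(t[2])
        if r is not None:
            return cons(t[1], r)
    return None

def normalize(t):
    while (r := rewrite_step(t)) is not None:
        t = r
    return t

term = app(cons(0, cons(1, NIL)), NIL)  # the term (0#1#[]) @ []
```

Here \texttt{normalize(term)} passes through exactly the three intermediate terms shown in the ttbox above before reaching the normal form \isa{0\#1\#[]}; termination is guaranteed because each rule application strictly decreases the size of the left argument of \isa{\at}, which is precisely what can fail when a rule set contains loops such as $f(x) = g(x)$ together with $g(x) = f(x)$.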
Not merely equations but pretty much any theorem can become a simplification
rule. The simplifier will try to make sense of it. For example, a theorem
\isasymnot\ $P$ is automatically turned into \isa{$P$ = False}. The details
are explained in \S\ref{sec:SimpHow}.

The simplification attribute of theorems can be turned on and off as follows:
\begin{ttbox}
theorems [simp] = \(list of theorem names\);
theorems [simp del] = \(list of theorem names\);
\end{ttbox}
As a rule of thumb, equations that really simplify (like \isa{rev(rev xs) =
  xs} and \mbox{\isa{xs \at\ [] = xs}}) should be made simplification
rules. Those of a more specific nature (e.g.\ distributivity laws, which
alter the structure of terms considerably) should only be used selectively,
i.e.\ they should not be default simplification rules. Conversely, it may
also happen that a simplification rule needs to be disabled in certain
proofs. Frequent changes in the simplification status of a theorem may
indicate a badly designed theory.
\begin{warn}
  Simplification may not terminate, for example if both $f(x) = g(x)$ and
  $g(x) = f(x)$ are simplification rules. It is the user's responsibility not
  to include simplification rules that can lead to nontermination, either on
  their own or in combination with other simplification rules.
\end{warn}

\subsubsection{The simplification method}
\index{*simp (method)|bold}

The general format of the simplification method is
\begin{ttbox}
simp \(list of modifiers\)
\end{ttbox}
where the list of \emph{modifiers} helps to fine-tune the behaviour and may
be empty. Most if not all of the proofs seen so far could have been performed
with \isa{simp} instead of \isa{auto}, except that \isa{simp} attacks
only the first subgoal and may thus need to be repeated.
Note that \isa{simp} fails if nothing changes.

\subsubsection{Adding and deleting simplification rules}

If a certain theorem is merely needed in a few proofs by simplification,
we do not need to make it a global simplification rule. Instead we can modify
the set of simplification rules used in a simplification step by adding rules
to it and/or deleting rules from it.
The two modifiers for this are
\begin{ttbox}
add: \(list of theorem names\)
del: \(list of theorem names\)
\end{ttbox}
In case you want to use only a specific list of theorems and ignore all
others:
\begin{ttbox}
only: \(list of theorem names\)
\end{ttbox}


\subsubsection{Assumptions}
\index{simplification!with/of assumptions}

{\makeatother\input{Misc/document/asm_simp.tex}}

\subsubsection{Rewriting with definitions}
\index{simplification!with definitions}

\input{Misc/document/def_rewr.tex}

\begin{warn}
  If you have defined $f\,x\,y~\isasymequiv~t$ then you can only expand
  occurrences of $f$ with at least two arguments. Thus it is safer to define
  $f$~\isasymequiv~\isasymlambda$x\,y.\;t$.
\end{warn}

\subsubsection{Simplifying let-expressions}
\index{simplification!of let-expressions}

Proving a goal containing \isaindex{let}-expressions almost invariably
requires the \isa{let}-con\-structs to be expanded at some point.
Since
\isa{let}-\isa{in} is just syntactic sugar for a predefined constant (called
\isa{Let}), expanding \isa{let}-constructs means rewriting with
\isa{Let_def}:

{\makeatother\input{Misc/document/let_rewr.tex}}

\subsubsection{Conditional equations}

\input{Misc/document/cond_rewr.tex}


\subsubsection{Automatic case splits}
\indexbold{case splits}
\index{*split|(}

{\makeatother\input{Misc/document/case_splits.tex}}

\index{*split|)}

\begin{warn}
  The simplifier merely simplifies the condition of an \isa{if} but not the
  \isa{then} or \isa{else} parts. The latter are simplified only after the
  condition reduces to \isa{True} or \isa{False}, or after splitting. The
  same is true for \isaindex{case}-expressions: only the selector is
  simplified at first, until either the expression reduces to one of the
  cases or it is split.
\end{warn}

\subsubsection{Arithmetic}
\index{arithmetic}

The simplifier routinely solves a small class of linear arithmetic formulae
(over type \isa{nat} and other numeric types): it only takes into account
assumptions and conclusions that are (possibly negated) (in)equalities
(\isa{=}, \isasymle, \isa{<}) and it only knows about addition.
Thus

\input{Misc/document/arith1.tex}%
is proved by simplification, whereas the only slightly more complex

\input{Misc/document/arith4.tex}%
is not proved by simplification and requires \isa{arith}.

\subsubsection{Permutative rewrite rules}

A rewrite rule is {\bf permutative} if the left-hand side and right-hand side
are the same up to renaming of variables. The most common permutative rule
is commutativity: $x+y = y+x$. Another example is $(x-y)-z = (x-z)-y$. Such
rules are problematic because once they apply, they can be used forever.
The simplifier is aware of this danger and treats permutative rules
separately. For details see~\cite{isabelle-ref}.

\subsubsection{Tracing}
\indexbold{tracing the simplifier}

{\makeatother\input{Misc/document/trace_simp.tex}}

\subsection{How it works}
\label{sec:SimpHow}

\subsubsection{Higher-order patterns}

\subsubsection{Local assumptions}

\subsubsection{The preprocessor}

\index{simplification|)}

\section{Induction heuristics}
\label{sec:InductionHeuristics}

The purpose of this section is to illustrate some simple heuristics for
inductive proofs. The first one we have already mentioned in our initial
example:
\begin{quote}
{\em 1.
Theorems about recursive functions are proved by induction.}
\end{quote}
In case the function has more than one argument
\begin{quote}
{\em 2. Do induction on argument number $i$ if the function is defined by
recursion in argument number $i$.}
\end{quote}
When we look at the proof of {\makeatother\isa{(xs @ ys) @ zs = xs @ (ys @
 zs)}} in \S\ref{sec:intro-proof} we find (a) \isa{\at} is recursive in
the first argument, (b) \isa{xs} occurs only as the first argument of
\isa{\at}, and (c) both \isa{ys} and \isa{zs} occur at least once as
the second argument of \isa{\at}. Hence it is natural to perform induction
on \isa{xs}.

The key heuristic, and the main point of this section, is to
generalize the goal before induction. The reason is simple: if the goal is
too specific, the induction hypothesis is too weak to allow the induction
step to go through. Let us now illustrate the idea with an example.

{\makeatother\input{Misc/document/Itrev.tex}}

A final point worth mentioning is the orientation of the equation we just
proved: the more complex notion (\isa{itrev}) is on the left-hand
side, the simpler \isa{rev} on the right-hand side. This constitutes
another, albeit weak heuristic that is not restricted to induction:
\begin{quote}
  {\em 5.
The right-hand side of an equation should (in some sense) be
  simpler than the left-hand side.}
\end{quote}
The heuristic is tricky to apply because it is not obvious that
\isa{rev xs \at\ ys} is simpler than \isa{itrev xs ys}. But see what
happens if you try to prove \isa{rev xs \at\ ys = itrev xs ys}!


\section{Case study: compiling expressions}
\label{sec:ExprCompiler}

{\makeatother\input{CodeGen/document/CodeGen.tex}}


\section{Advanced datatypes}
\label{sec:advanced-datatypes}
\index{*datatype|(}
\index{*primrec|(}
%|)

This section presents advanced forms of \isacommand{datatype}s.

\subsection{Mutual recursion}
\label{sec:datatype-mut-rec}

\input{Datatype/document/ABexpr.tex}

\subsection{Nested recursion}

\input{Datatype/document/Nested.tex}


\subsection{The limits of nested recursion}

How far can we push nested recursion? By the unfolding argument above, we can
reduce nested to mutual recursion provided the nested recursion only involves
previously defined datatypes.
This does not include functions:
\begin{ttbox}
datatype t = C (t => bool)
\end{ttbox}
is a real can of worms: in HOL it must be ruled out because it requires a type
\isa{t} such that \isa{t} and its power set \isa{t \isasymFun\ bool} have the
same cardinality---an impossibility. For the same reason it is not possible
to allow recursion involving the type \isa{set}, which is isomorphic to
\isa{t \isasymFun\ bool}.

Fortunately, a limited form of recursion
involving function spaces is permitted: the recursive type may occur on the
right of a function arrow, but never on the left. Hence the above can of worms
is ruled out but the following example of a potentially infinitely branching tree is
accepted:
\smallskip

\input{Datatype/document/Fundata.tex}
\bigskip

If you need nested recursion on the left of a function arrow,
there are alternatives to pure HOL: LCF~\cite{paulson87} is a logic where types like
\begin{ttbox}
datatype lam = C (lam -> lam)
\end{ttbox}
do indeed make sense (note the intentionally different arrow \isa{->}).
There is even a version of LCF on top of HOL, called
HOLCF~\cite{MuellerNvOS99}.

\index{*primrec|)}
\index{*datatype|)}

\subsection{Case study: Tries}

Tries are a classic search tree data structure~\cite{Knuth3-75} for fast
indexing with strings.
Figure~\ref{fig:trie} gives a graphical example of a
trie containing the words ``all'', ``an'', ``ape'', ``can'', ``car'' and
``cat''. When searching a string in a trie, the letters of the string are
examined sequentially. Each letter determines which subtrie to search next.
In this case study we model tries as a datatype, define a lookup and an
update function, and prove that they behave as expected.

\begin{figure}[htbp]
\begin{center}
\unitlength1mm
\begin{picture}(60,30)
\put( 5, 0){\makebox(0,0)[b]{l}}
\put(25, 0){\makebox(0,0)[b]{e}}
\put(35, 0){\makebox(0,0)[b]{n}}
\put(45, 0){\makebox(0,0)[b]{r}}
\put(55, 0){\makebox(0,0)[b]{t}}
%
\put( 5, 9){\line(0,-1){5}}
\put(25, 9){\line(0,-1){5}}
\put(44, 9){\line(-3,-2){9}}
\put(45, 9){\line(0,-1){5}}
\put(46, 9){\line(3,-2){9}}
%
\put( 5,10){\makebox(0,0)[b]{l}}
\put(15,10){\makebox(0,0)[b]{n}}
\put(25,10){\makebox(0,0)[b]{p}}
\put(45,10){\makebox(0,0)[b]{a}}
%
\put(14,19){\line(-3,-2){9}}
\put(15,19){\line(0,-1){5}}
\put(16,19){\line(3,-2){9}}
\put(45,19){\line(0,-1){5}}
%
\put(15,20){\makebox(0,0)[b]{a}}
\put(45,20){\makebox(0,0)[b]{c}}
%
\put(30,30){\line(-3,-2){13}}
\put(30,30){\line(3,-2){13}}
\end{picture}
\end{center}
\caption{A sample trie}
\label{fig:trie}
\end{figure}
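The sequential search just described can be sketched by two recursion
equations (a sketch only, anticipating the precise HOL definitions given
below; \isa{value} and \isa{alist} stand for the value and
association-list components of a trie node, and \isa{assoc} for lookup in
an association list):
\begin{ttbox}\makeatother
lookup t []     = value t
lookup t (a#as) = (case assoc (alist t) a of
                     None   => None
                   | Some s => lookup s as)
\end{ttbox}
An empty search string means we have arrived at the node associated with the
string and return its value; otherwise the first letter selects the subtrie
in which the search continues.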
Proper tries associate some value with each string. Since the
information is stored only in the final node associated with the string, many
nodes do not carry any value. This distinction is captured by the
following predefined datatype (from theory \isa{Option}, which is a parent
of \isa{Main}):
\smallskip
\input{Trie/document/Option2.tex}
\indexbold{*option}\indexbold{*None}\indexbold{*Some}%
\input{Trie/document/Trie.tex}

\begin{exercise}
  Write an improved version of \isa{update} that does not suffer from the
  space leak in the version above. Prove the main theorem for your improved
  \isa{update}.
\end{exercise}

\begin{exercise}
  Write a function to \emph{delete} entries from a trie. An easy solution is
  a clever modification of \isa{update} which allows both insertion and
  deletion with a single function. Prove (a modified version of) the main
  theorem above. Optimize your function such that it shrinks tries after
  deletion, if possible.
\end{exercise}

\section{Total recursive functions}
\label{sec:recdef}
\index{*recdef|(}

Although many total functions have a natural primitive recursive definition,
this is not always the case.
Arbitrary total recursive functions can be
defined by means of \isacommand{recdef}: you can use full pattern-matching,
recursion need not involve datatypes, and termination is proved by showing
that the arguments of all recursive calls are smaller in a suitable
(user-supplied) sense.

\subsection{Defining recursive functions}

\input{Recdef/document/examples.tex}

\subsection{Proving termination}

\input{Recdef/document/termination.tex}

\subsection{Simplification with recdef}

\input{Recdef/document/simplification.tex}


\subsection{Induction}
\index{induction!recursion|(}
\index{recursion induction|(}

\input{Recdef/document/Induction.tex}

\index{induction!recursion|)}
\index{recursion induction|)}
\index{*recdef|)}