(*<*)theory Partial = While_Combinator:(*>*)

text{*\noindent
Throughout the tutorial we have emphasized the fact that all functions
in HOL are total. Hence we cannot hope to define truly partial
functions. The best we can do are functions that are
\emph{underdefined}\index{underdefined function}:
for certain arguments we only know that a result
exists, but we don't know what it is. When defining functions that are
normally considered partial, underdefinedness turns out to be a very
reasonable alternative.

We have already seen an instance of underdefinedness by means of
non-exhaustive pattern matching: the definition of @{term last} in
\S\ref{sec:recdef-examples}. The same is allowed for \isacommand{primrec}
*}

consts hd :: "'a list \<Rightarrow> 'a"
primrec "hd (x#xs) = x"

text{*\noindent
although it generates a warning.
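For example, @{text"hd []"} is still a perfectly legal term denoting some
value, but the single equation above tells us nothing about that value.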

Even ordinary definitions allow underdefinedness, this time by means of
preconditions:
*}

constdefs minus :: "nat \<Rightarrow> nat \<Rightarrow> nat"
"n \<le> m \<Longrightarrow> minus m n \<equiv> m - n"

text{*
The rest of this section is devoted to the question of how to define
partial recursive functions by other means than non-exhaustive pattern
matching.
*}

subsubsection{*Guarded recursion*}

text{* Neither \isacommand{primrec} nor \isacommand{recdef} allows us to
prefix an equation with a condition in the way ordinary definitions do
(see @{term minus} above). Instead we have to move the condition over
to the right-hand side of the equation. Given a partial function $f$
that should satisfy the recursion equation $f(x) = t$ over its domain
$dom(f)$, we turn this into the \isacommand{recdef}
@{prop[display]"f(x) = (if x \<in> dom(f) then t else arbitrary)"}
where @{term arbitrary} is a predeclared constant of type @{typ 'a}
which has no definition. Thus we know nothing about its value,
which is ideal for specifying underdefined functions in terms of it.

As a simple example we define division on @{typ nat}:
*}

consts divi :: "nat \<times> nat \<Rightarrow> nat"
recdef divi "measure(\<lambda>(m,n). m)"
  "divi(m,n) = (if n = 0 then arbitrary else
                if m < n then 0 else divi(m-n,n)+1)"

text{*\noindent Of course we could also have defined
@{term"divi(m,0)"} to be some specific number, for example 0. The
latter option is chosen for the predefined @{text div} function, which
simplifies proofs at the expense of moving further away from the
standard mathematical division function.
*}
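text{*\noindent
For illustration, here is such a total variant in the style of @{text div}
(the name @{text divi0} is our own example, not part of the library); its
termination argument is exactly the same as for @{term divi}:
*}

consts divi0 :: "nat \<times> nat \<Rightarrow> nat"
recdef divi0 "measure(\<lambda>(m,n). m)"
  "divi0(m,n) = (if n = 0 then 0 else
                 if m < n then 0 else divi0(m-n,n)+1)"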
text{*
As a more substantial example we consider the problem of searching a graph.
For simplicity our graph is given by a function (@{term f}) of
type @{typ"'a \<Rightarrow> 'a"} which
maps each node to its successor, and the task is to find the end of a chain,
i.e.\ a node pointing to itself. Here is a first attempt:
@{prop[display]"find(f,x) = (if f x = x then x else find(f, f x))"}
This may be viewed as a fixed point finder or as one half of the well known
\emph{Union-Find} algorithm.
The snag is that it may not terminate if @{term f} has nontrivial cycles.
Phrased differently, the relation
*}

constdefs step1 :: "('a \<Rightarrow> 'a) \<Rightarrow> ('a \<times> 'a)set"
  "step1 f \<equiv> {(y,x). y = f x \<and> y \<noteq> x}"

text{*\noindent
must be well-founded. Thus we make the following definition:
*}

consts find :: "('a \<Rightarrow> 'a) \<times> 'a \<Rightarrow> 'a"
recdef find "same_fst (\<lambda>f. wf(step1 f)) step1"
  "find(f,x) = (if wf(step1 f)
                then if f x = x then x else find(f, f x)
                else arbitrary)"
(hints recdef_simp:same_fst_def step1_def)

text{*\noindent
The recursion equation itself should be clear enough: it is our aborted
first attempt augmented with a check that there are no non-trivial loops.

What complicates the termination proof is that the argument of
@{term find} is a pair. To express the required well-founded relation
we employ the predefined combinator @{term same_fst} of type
@{text[display]"('a \<Rightarrow> bool) \<Rightarrow> ('a \<Rightarrow> ('b\<times>'b)set) \<Rightarrow> (('a\<times>'b) \<times> ('a\<times>'b))set"}
defined as
@{thm[display]same_fst_def[no_vars]}
This combinator is designed for recursive functions on pairs where the first
component of the argument is passed unchanged to all recursive
calls. Given a constraint on the first component and a relation on the second
component, @{term same_fst} builds the required relation on pairs.
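For @{term find} the instantiation @{text"same_fst (\<lambda>f. wf(step1 f)) step1"}
relates @{text"(f, f x)"} to @{text"(f, x)"} precisely when @{text"wf(step1 f)"}
and @{text"f x \<noteq> x"} hold, which are exactly the conditions under which the
recursive call is made.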
The theorem @{thm[display]wf_same_fst[no_vars]}
is known to the well-foundedness prover of \isacommand{recdef}.
Thus well-foundedness of the given relation is immediate.
Furthermore, each recursive call descends along the given relation:
the first argument stays unchanged and the second one descends along
@{term"step1 f"}. The proof merely requires unfolding of some definitions.

Normally you will then derive the following conditional variant from
the recursion equation
*}

lemma [simp]:
  "wf(step1 f) \<Longrightarrow> find(f,x) = (if f x = x then x else find(f, f x))"
by simp

text{*\noindent and then disable the original recursion equation:*}

declare find.simps[simp del]

text{*
We can reason about such underdefined functions just as we would about any
other recursive function. Here is a simple example of recursion induction:
*}

lemma "wf(step1 f) \<longrightarrow> f(find(f,x)) = find(f,x)"
apply(induct_tac f x rule:find.induct);
apply simp
done

subsubsection{*The {\tt\slshape while} combinator*}

text{*If the recursive function happens to be tail recursive, its
definition becomes a triviality if based on the predefined \isaindexbold{while}
combinator.  The latter lives in the library theory
\isa{While_Combinator}, which is not part of @{text Main} but needs to
be included explicitly among the ancestor theories.

Constant @{term while} is of type @{text"('a \<Rightarrow> bool) \<Rightarrow> ('a \<Rightarrow> 'a) \<Rightarrow> 'a \<Rightarrow> 'a"}
and satisfies the recursion equation @{thm[display]while_unfold[no_vars]}
That is, @{term"while b c s"} is equivalent to the imperative program
\begin{verbatim}
     x := s; while b(x) do x := c(x); return x
\end{verbatim}
*}
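text{*\noindent
As a trivial warm-up, here is list length computed with @{term while}
(a sketch of our own; the constant @{text llen} is not part of the library).
The state is a pair of the list still to be traversed and a counter:
*}

constdefs llen :: "'a list \<Rightarrow> nat"
  "llen xs \<equiv> snd(while (\<lambda>(ys,n). ys \<noteq> []) (\<lambda>(ys,n). (tl ys, Suc n)) (xs,0))"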
text{*\noindent
In general, @{term s} will be a tuple (better still: a record). As an example
consider the tail recursive variant of function @{term find} above:
*}

constdefs find2 :: "('a \<Rightarrow> 'a) \<Rightarrow> 'a \<Rightarrow> 'a"
  "find2 f x \<equiv>
   fst(while (\<lambda>(x,x'). x' \<noteq> x) (\<lambda>(x,x'). (x',f x')) (x,f x))"

text{*\noindent
The loop operates on two ``local variables'' @{term x} and @{term x'}
containing the ``current'' and the ``next'' value of function @{term f}.
They are initialized with the global @{term x} and @{term"f x"}. At the
end @{term fst} selects the local @{term x}.
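For example, if @{term f} maps @{term a} to @{term b} and @{term b} to itself
(with @{text"a \<noteq> b"}), the loop starts in state @{text"(a, b)"}, makes one
step to @{text"(b, b)"}, and @{term find2} returns @{term b}.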

It looks as if we can define at least tail recursive functions
without bothering about termination after all. But there is no free
lunch: when proving properties of functions defined by @{term while},
termination rears its ugly head again. Here is
@{thm[source]while_rule}, the well known proof rule for total
correctness of loops expressed with @{term while}:
@{thm[display,margin=50]while_rule[no_vars]} @{term P} needs to be
true of the initial state @{term s} and invariant under @{term c}
(premises 1 and 2). The post-condition @{term Q} must become true when
leaving the loop (premise 3). And each loop iteration must descend
along a well-founded relation @{term r} (premises 4 and 5).

Let us now prove that @{term find2} does indeed find a fixed point. Instead
of induction we apply the above while rule, suitably instantiated.
Only the final premise of @{thm[source]while_rule} is left unproved
by @{text auto} but falls to @{text simp}:
*}

lemma lem: "\<lbrakk> wf(step1 f); x' = f x \<rbrakk> \<Longrightarrow> \<exists>y y'.
   while (\<lambda>(x,x'). x' \<noteq> x) (\<lambda>(x,x'). (x',f x')) (x,x') = (y,y') \<and>
   y' = y \<and> f y = y"
apply(rule_tac P = "\<lambda>(x,x'). x' = f x" and
               r = "inv_image (step1 f) fst" in while_rule);
apply auto
apply(simp add:inv_image_def step1_def)
done

text{*
The theorem itself is a simple consequence of this lemma:
*}

theorem "wf(step1 f) \<Longrightarrow> f(find2 f x) = find2 f x"
apply(drule_tac x = x in lem)
apply(auto simp add:find2_def)
done

text{* Let us conclude this section on partial functions with a
discussion of the merits of the @{term while} combinator. We have
already seen that the advantage (if it is one) of not having to
provide a termination argument when defining a function via @{term
while} merely puts off the evil hour. On top of that, tail recursive
functions tend to be more complicated to reason about. So why use
@{term while} at all? The only reason is executability: the recursion
equation for @{term while} is a directly executable functional
program. This is in stark contrast to guarded recursion as introduced
above, which requires an explicit test @{prop"x \<in> dom f"} in the
function body.  Unless @{term dom} is trivial, this leads to a
definition which is either not at all executable or prohibitively
expensive. Thus, if you are aiming for an efficiently executable definition
of a partial function, you are likely to need @{term while}.
*}
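text{*\noindent
As a small sanity check of this executability claim (merely a sketch using
the simplifier, not genuine code generation), a closed instance of @{term
while} can be evaluated simply by unfolding @{thm[source]while_unfold}:
*}

lemma "while (\<lambda>(ys,n). ys \<noteq> []) (\<lambda>(ys,n). (tl ys, Suc n)) ([a,b], 0)
       = ([], Suc(Suc 0))"
by(simp add: while_unfold)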

(*<*)end(*>*)