author      berghofe
date        Thu Jul 19 15:29:51 2007 +0200 (2007-07-19)
changeset   23842 9d87177f1f89
parent      23841 598839baafed
child       23843 4cd60e5d2999
LaTeX code is now generated directly from theory file.
     1.1 --- a/doc-src/TutorialI/Inductive/Advanced.thy	Wed Jul 18 14:46:59 2007 +0200
1.2 +++ b/doc-src/TutorialI/Inductive/Advanced.thy	Thu Jul 19 15:29:51 2007 +0200
1.3 @@ -1,10 +1,61 @@
1.4  (* ID:         $Id$ *)
1.5 -theory Advanced imports Even begin
1.6 +(*<*)theory Advanced imports Even uses "../../antiquote_setup.ML" begin(*>*)
1.7 +
1.8 +text {*
1.9 +The premises of introduction rules may contain universal quantifiers and
1.10 +monotone functions.  A universal quantifier lets the rule
1.11 +refer to any number of instances of
1.12 +the inductively defined set.  A monotone function lets the rule refer
1.13 +to existing constructions (such as ``list of'') over the inductively defined
1.14 +set.  The examples below show how to use the additional expressiveness
1.15 +and how to reason from the resulting definitions.
1.16 +*}
1.17
1.18 +subsection{* Universal Quantifiers in Introduction Rules \label{sec:gterm-datatype} *}
1.19 +
1.20 +text {*
1.21 +\index{ground terms example|(}%
1.22 +\index{quantifiers!and inductive definitions|(}%
1.23 +As a running example, this section develops the theory of \textbf{ground
1.24 +terms}: terms constructed from constant and function
1.25 +symbols but not variables. To simplify matters further, we regard a
1.26 +constant as a function applied to the null argument  list.  Let us declare a
1.27 +datatype @{text gterm} for the type of ground  terms. It is a type constructor
1.28 +whose argument is a type of  function symbols.
1.29 +*}
1.30
1.31  datatype 'f gterm = Apply 'f "'f gterm list"
1.32
1.33 -datatype integer_op = Number int | UnaryMinus | Plus;
1.34 +text {*
1.35 +To try it out, we declare a datatype of some integer operations:
1.36 +integer constants, the unary minus operator and the addition
1.37 +operator.
1.38 +*}
1.39 +
1.40 +datatype integer_op = Number int | UnaryMinus | Plus
1.41 +
1.42 +text {*
1.43 +Now the type @{typ "integer_op gterm"} denotes the ground
1.44 +terms built over those symbols.
1.45 +
1.46 +The type constructor @{text gterm} can be generalized to a function
1.47 +over sets.  It returns
1.48 +the set of ground terms that can be formed over a set @{text F} of function symbols. For
1.49 +example,  we could consider the set of ground terms formed from the finite
1.50 +set @{text "{Number 2, UnaryMinus, Plus}"}.
1.51 +
1.52 +This concept is inductive. If we have a list @{text args} of ground terms
1.53 +over~@{text F} and a function symbol @{text f} in @{text F}, then we
1.54 +can apply @{text f} to @{text args} to obtain another ground term.
1.55 +The only difficulty is that the argument list may be of any length. Hitherto,
1.56 +each rule in an inductive definition referred to the inductively
1.57 +defined set a fixed number of times, typically once or twice.
1.58 +A universal quantifier in the premise of the introduction rule
1.59 +expresses that every element of @{text args} belongs
1.60 +to our inductively defined set, that is, each is a ground term
1.61 +over~@{text F}.  The function @{term set} denotes the set of elements in a given
1.62 +list.
1.63 +*}
1.64
1.65  inductive_set
1.66    gterms :: "'f set \<Rightarrow> 'f gterm set"
1.67 @@ -13,77 +64,56 @@
1.68  step[intro!]: "\<lbrakk>\<forall>t \<in> set args. t \<in> gterms F;  f \<in> F\<rbrakk>
1.69                 \<Longrightarrow> (Apply f args) \<in> gterms F"
1.70
1.71 +text {*
1.72 +To demonstrate a proof from this definition, let us
1.73 +show that the function @{term gterms}
1.74 +is \textbf{monotone}.  We shall need this concept shortly.
1.75 +*}
1.76 +
1.77 +lemma gterms_mono: "F\<subseteq>G \<Longrightarrow> gterms F \<subseteq> gterms G"
1.78 +apply clarify
1.79 +apply (erule gterms.induct)
1.80 +apply blast
1.81 +done
1.82 +(*<*)
1.83  lemma gterms_mono: "F\<subseteq>G \<Longrightarrow> gterms F \<subseteq> gterms G"
1.84  apply clarify
1.85  apply (erule gterms.induct)
1.86 +(*>*)
1.87  txt{*
1.88 -@{subgoals[display,indent=0,margin=65]}
1.89 -*};
1.90 -apply blast
1.91 -done
1.92 -
1.93 -
1.94 -text{*
1.95 -@{thm[display] even.cases[no_vars]}
1.96 -\rulename{even.cases}
1.97 -
1.98 -Just as a demo I include
1.99 -the two forms that Markus has made available. First the one for binding the
1.100 -result to a name
1.101 -
1.102 +Intuitively, this theorem says that
1.103 +enlarging the set of function symbols enlarges the set of ground
1.104 +terms. The proof is a trivial rule induction.
1.105 +First we use the @{text clarify} method to assume the existence of an element of
1.106 +@{term "gterms F"}.  (We could have used @{text "intro subsetI"}.)  We then
1.107 +apply rule induction. Here is the resulting subgoal:
1.108 +@{subgoals[display,indent=0]}
1.109 +The assumptions state that @{text f} belongs
1.110 +to~@{text F}, which is included in~@{text G}, and that every element of the list @{text args} is
1.111 +a ground term over~@{text G}.  The @{text blast} method finds this chain of reasoning easily.
1.112  *}
1.113 -
1.114 -inductive_cases Suc_Suc_cases [elim!]:
1.115 -  "Suc(Suc n) \<in> even"
1.116 -
1.117 -thm Suc_Suc_cases;
1.118 -
1.119 -text{*
1.120 -@{thm[display] Suc_Suc_cases[no_vars]}
1.121 -\rulename{Suc_Suc_cases}
1.122 -
1.123 -and now the one for local usage:
1.124 -*}
1.125 -
1.126 -lemma "Suc(Suc n) \<in> even \<Longrightarrow> P n";
1.127 -apply (ind_cases "Suc(Suc n) \<in> even");
1.128 -oops
1.129 +(*<*)oops(*>*)
1.130 +text {*
1.131 +\begin{warn}
1.132 +Why do we call this function @{text gterms} instead
1.133 +of @{text gterm}?  A constant may have the same name as a type.  However,
1.134 +name  clashes could arise in the theorems that Isabelle generates.
1.135 +Our choice of names keeps @{text gterms.induct} separate from
1.136 +@{text gterm.induct}.
1.137 +\end{warn}
1.138
1.139 -inductive_cases gterm_Apply_elim [elim!]: "Apply f args \<in> gterms F"
1.140 -
1.141 -text{*this is what we get:
1.142 -
1.143 -@{thm[display] gterm_Apply_elim[no_vars]}
1.144 -\rulename{gterm_Apply_elim}
1.145 +Call a term \textbf{well-formed} if each symbol occurring in it is applied
1.146 +to the correct number of arguments.  (This number is called the symbol's
1.147 +\textbf{arity}.)  We can express well-formedness by
1.148 +generalizing the inductive definition of
1.149 +\isa{gterms}.
1.150 +Suppose we are given a function called @{text arity}, specifying the arities
1.151 +of all symbols.  In the inductive step, we have a list @{text args} of such
1.152 +terms and a function  symbol~@{text f}. If the length of the list matches the
1.153 +function's arity  then applying @{text f} to @{text args} yields a well-formed
1.154 +term.
1.155  *}
1.156
1.157 -lemma gterms_IntI [rule_format, intro!]:
1.158 -     "t \<in> gterms F \<Longrightarrow> t \<in> gterms G \<longrightarrow> t \<in> gterms (F\<inter>G)"
1.159 -apply (erule gterms.induct)
1.160 -txt{*
1.161 -@{subgoals[display,indent=0,margin=65]}
1.162 -*};
1.163 -apply blast
1.164 -done
1.165 -
1.166 -
1.167 -text{*
1.168 -@{thm[display] mono_Int[no_vars]}
1.169 -\rulename{mono_Int}
1.170 -*}
1.171 -
1.172 -lemma gterms_Int_eq [simp]:
1.173 -     "gterms (F\<inter>G) = gterms F \<inter> gterms G"
1.174 -by (blast intro!: mono_Int monoI gterms_mono)
1.175 -
1.176 -
1.177 -text{*the following declaration isn't actually used*}
1.178 -consts integer_arity :: "integer_op \<Rightarrow> nat"
1.179 -primrec
1.180 -"integer_arity (Number n)        = 0"
1.181 -"integer_arity UnaryMinus        = 1"
1.182 -"integer_arity Plus              = 2"
1.183 -
1.184  inductive_set
1.185    well_formed_gterm :: "('f \<Rightarrow> nat) \<Rightarrow> 'f gterm set"
1.186    for arity :: "'f \<Rightarrow> nat"
1.187 @@ -92,6 +122,32 @@
1.188                  length args = arity f\<rbrakk>
1.189                 \<Longrightarrow> (Apply f args) \<in> well_formed_gterm arity"
1.190
1.191 +text {*
1.192 +The inductive definition neatly captures the reasoning above.
1.193 +The universal quantification over the
1.194 +@{text set} of arguments expresses that all of them are well-formed.%
1.195 +\index{quantifiers!and inductive definitions|)}
1.196 +*}
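The arity discipline enforced by this definition is a simple recursion over the structure of a term. As a rough illustration outside Isabelle, here is a minimal Python sketch; the encoding of a ground term as a (symbol, argument-list) pair and all names are our own, not part of the theory:

```python
# well_formed checks recursively that every symbol in the term is applied
# to exactly arity(symbol) arguments, mirroring the universal quantifier
# over `set args` in the introduction rule.
def well_formed(term, arity):
    f, args = term
    return len(args) == arity(f) and all(well_formed(t, arity) for t in args)

# Arities for the integer_op symbols of the running example.
arity = {"Number": 0, "UnaryMinus": 1, "Plus": 2}.get

# -(2) + 2, written as nested applications.
t = ("Plus", [("Number", []), ("UnaryMinus", [("Number", [])])])
print(well_formed(t, arity))  # True
```

A term such as `("Plus", [("Number", [])])` fails the check, since `Plus` receives one argument instead of two.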
1.197 +
1.198 +subsection{* Alternative Definition Using a Monotone Function *}
1.199 +
1.200 +text {*
1.201 +\index{monotone functions!and inductive definitions|(}%
1.202 +An inductive definition may refer to the
1.203 +inductively defined  set through an arbitrary monotone function.  To
1.204 +demonstrate this powerful feature, let us
1.205 +change the  inductive definition above, replacing the
1.206 +quantifier by a use of the function @{term lists}. This
1.207 +function, from the Isabelle theory of lists, is analogous to the
1.208 +function @{term gterms} declared above: if @{text A} is a set then
1.209 +@{term "lists A"} is the set of lists whose elements belong to
1.210 +@{term A}.
1.211 +
1.212 +In the inductive definition of well-formed terms, examine the one
1.213 +introduction rule.  The first premise states that @{text args} belongs to
1.214 +the @{text lists} of well-formed terms.  This formulation is more
1.215 +direct, if more obscure, than using a universal quantifier.
1.216 +*}
1.217
1.218  inductive_set
1.219    well_formed_gterm' :: "('f \<Rightarrow> nat) \<Rightarrow> 'f gterm set"
1.220 @@ -102,41 +158,214 @@
1.221                 \<Longrightarrow> (Apply f args) \<in> well_formed_gterm' arity"
1.222  monos lists_mono
1.223
1.224 +text {*
1.225 +We cite the theorem @{text lists_mono} to justify
1.226 +using the function @{term lists}.%
1.227 +\footnote{This particular theorem is installed by default already, but we
1.228 +include the \isakeyword{monos} declaration in order to illustrate its syntax.}
1.229 +@{named_thms [display,indent=0] lists_mono [no_vars] (lists_mono)}
1.230 +Why must the function be monotone?  An inductive definition describes
1.231 +an iterative construction: each element of the set is constructed by a
1.232 +finite number of introduction rule applications.  For example, the
1.233 +elements of \isa{even} are constructed by finitely many applications of
1.234 +the rules
1.235 +@{thm [display,indent=0] even.intros [no_vars]}
1.236 +All references to a set in its
1.237 +inductive definition must be positive.  Applications of an
1.238 +introduction rule cannot invalidate previous applications, allowing the
1.239 +construction process to converge.
1.240 +The following pair of rules does not constitute an inductive definition:
1.241 +\begin{trivlist}
1.242 +\item @{term "0 \<in> even"}
1.243 +\item @{term "n \<notin> even \<Longrightarrow> (Suc n) \<in> even"}
1.244 +\end{trivlist}
1.245 +Showing that 4 is even using these rules requires showing that 3 is not
1.246 +even.  It is far from trivial to show that this set of rules
1.247 +characterizes the even numbers.
1.248 +
1.249 +Even with its use of the function \isa{lists}, the premise of our
1.250 +introduction rule is positive:
1.251 +@{thm_style [display,indent=0] prem1 step [no_vars]}
1.252 +To apply the rule we construct a list @{term args} of previously
1.253 +constructed well-formed terms.  We obtain a
1.254 +new term, @{term "Apply f args"}.  Because @{term lists} is monotone,
1.255 +applications of the rule remain valid as new terms are constructed.
1.256 +Further lists of well-formed
1.257 +terms become available and none are taken away.%
1.258 +\index{monotone functions!and inductive definitions|)}
1.259 +*}
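The failure of monotonicity in that second pair of rules can be observed concretely. In the following Python sketch (our own encoding, restricted to a finite universe; not part of the Isabelle development), the rule operator of the genuine inductive definition preserves set inclusion, while the operator built from the negative premise does not:

```python
# Compare the rule operators of the two rule systems on a finite universe.
UNIVERSE = set(range(10))

def good(s):   # 0 in even;  n in even ==> n+2 in even
    return {0} | {n + 2 for n in s if n + 2 in UNIVERSE}

def bad(s):    # 0 in even;  n NOT in even ==> n+1 in even
    return {0} | {n + 1 for n in UNIVERSE - s if n + 1 in UNIVERSE}

s, t = set(), {1}          # s is a subset of t ...
print(good(s) <= good(t))  # True: inclusion is preserved
print(bad(s) <= bad(t))    # False: 2 is in bad(s) but not in bad(t)
```

Enlarging the argument set from `s` to `t` removed an element from the result of `bad`: an application of the negative rule was invalidated, which is exactly what positivity rules out.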
1.260 +
1.261 +subsection{* A Proof of Equivalence *}
1.262 +
1.263 +text {*
1.264 +We naturally hope that these two inductive definitions of ``well-formed''
1.265 +coincide.  The equality can be proved by separate inclusions in
1.266 +each direction.  Each is a trivial rule induction.
1.267 +*}
1.268 +
1.269  lemma "well_formed_gterm arity \<subseteq> well_formed_gterm' arity"
1.270  apply clarify
1.271 -txt{*
1.272 -The situation after clarify
1.273 -@{subgoals[display,indent=0,margin=65]}
1.274 -*};
1.275 +apply (erule well_formed_gterm.induct)
1.276 +apply auto
1.277 +done
1.278 +(*<*)
1.279 +lemma "well_formed_gterm arity \<subseteq> well_formed_gterm' arity"
1.280 +apply clarify
1.281  apply (erule well_formed_gterm.induct)
1.282 -txt{*
1.283 -note the induction hypothesis!
1.284 -@{subgoals[display,indent=0,margin=65]}
1.285 -*};
1.286 +(*>*)
1.287 +txt {*
1.288 +The @{text clarify} method gives
1.289 +us an element of @{term "well_formed_gterm arity"} on which to perform
1.290 +induction.  The resulting subgoal can be proved automatically:
1.291 +@{subgoals[display,indent=0]}
1.292 +This proof resembles the one given in
1.293 +{\S}\ref{sec:gterm-datatype} above, especially in the form of the
1.294 +induction hypothesis.  Next, we consider the opposite inclusion:
1.295 +*}
1.296 +(*<*)oops(*>*)
1.297 +lemma "well_formed_gterm' arity \<subseteq> well_formed_gterm arity"
1.298 +apply clarify
1.299 +apply (erule well_formed_gterm'.induct)
1.300  apply auto
1.301  done
1.302 +(*<*)
1.303 +lemma "well_formed_gterm' arity \<subseteq> well_formed_gterm arity"
1.304 +apply clarify
1.305 +apply (erule well_formed_gterm'.induct)
1.306 +(*>*)
1.307 +txt {*
1.308 +The proof script is identical, but the subgoal after applying induction may
1.309 +be surprising:
1.310 +@{subgoals[display,indent=0,margin=65]}
1.311 +The induction hypothesis contains an application of @{term lists}.  Using a
1.312 +monotone function in the inductive definition always has this effect.  The
1.313 +subgoal may look uninviting, but fortunately
1.314 +@{term lists} distributes over intersection:
1.315 +@{named_thms [display,indent=0] lists_Int_eq [no_vars] (lists_Int_eq)}
1.316 +Thanks to this default simplification rule, the induction hypothesis
1.317 +is quickly replaced by its two parts:
1.318 +\begin{trivlist}
1.319 +\item @{term "args \<in> lists (well_formed_gterm' arity)"}
1.320 +\item @{term "args \<in> lists (well_formed_gterm arity)"}
1.321 +\end{trivlist}
1.322 +Invoking the rule @{text well_formed_gterm.step} completes the proof.  The
1.323 +call to @{text auto} does all this work.
1.324
1.325 +This example is typical of how monotone functions
1.326 +\index{monotone functions} can be used.  In particular, many of them
1.327 +distribute over intersection.  Monotonicity implies one direction of
1.328 +this set equality; we have this theorem:
1.329 +@{named_thms [display,indent=0] mono_Int [no_vars] (mono_Int)}
1.330 +*}
1.331 +(*<*)oops(*>*)
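The distribution of @{term lists} over intersection can also be checked concretely on finite, length-bounded approximations. A small Python sketch (our own encoding, with lists represented as tuples; not part of the Isabelle theory):

```python
from itertools import product

def lists_upto(a, k):
    """All tuples of length at most k whose elements are drawn from a."""
    return {t for n in range(k + 1) for t in product(sorted(a), repeat=n)}

A, B = {1, 2, 3}, {2, 3, 4}
lhs = lists_upto(A & B, 2)
rhs = lists_upto(A, 2) & lists_upto(B, 2)
print(lhs == rhs)  # True: lists (A Int B) = lists A Int lists B
```

A tuple lies in both `lists_upto(A, k)` and `lists_upto(B, k)` exactly when each of its elements lies in both `A` and `B`, which is the content of the distributive law.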
1.332
1.333
1.334 -lemma "well_formed_gterm' arity \<subseteq> well_formed_gterm arity"
1.335 -apply clarify
1.336 -txt{*
1.337 -The situation after clarify
1.338 +subsection{* Another Example of Rule Inversion *}
1.339 +
1.340 +text {*
1.341 +\index{rule inversion|(}%
1.342 +Does @{term gterms} distribute over intersection?  We have proved that this
1.343 +function is monotone, so @{text mono_Int} gives one of the inclusions.  The
1.344 +opposite inclusion asserts that if @{term t} is a ground term over both of the
1.345 +sets
1.346 +@{term F} and~@{term G} then it is also a ground term over their intersection,
1.347 +@{term "F \<inter> G"}.
1.348 +*}
1.349 +
1.350 +lemma gterms_IntI:
1.351 +     "t \<in> gterms F \<Longrightarrow> t \<in> gterms G \<longrightarrow> t \<in> gterms (F\<inter>G)"
1.352 +(*<*)oops(*>*)
1.353 +text {*
1.354 +Attempting this proof, we get the assumption
1.355 +@{term "Apply f args \<in> gterms G"}, which cannot be broken down.
1.356 +It looks like a job for rule inversion:\cmmdx{inductive\protect\_cases}
1.357 +*}
1.358 +
1.359 +inductive_cases gterm_Apply_elim [elim!]: "Apply f args \<in> gterms F"
1.360 +
1.361 +text {*
1.362 +Here is the result.
1.363 +@{named_thms [display,indent=0,margin=50] gterm_Apply_elim [no_vars] (gterm_Apply_elim)}
1.364 +This rule replaces an assumption about @{term "Apply f args"} by
1.365 +assumptions about @{term f} and~@{term args}.
1.366 +No cases are discarded (there was only one to begin
1.367 +with) but the rule applies specifically to the pattern @{term "Apply f args"}.
1.368 +It can be applied repeatedly as an elimination rule without looping, so we
1.369 +have given it the @{text "elim!"} attribute.
1.370 +
1.371 +Now we can prove the other half of that distributive law.
1.372 +*}
1.373 +
1.374 +lemma gterms_IntI [rule_format, intro!]:
1.375 +     "t \<in> gterms F \<Longrightarrow> t \<in> gterms G \<longrightarrow> t \<in> gterms (F\<inter>G)"
1.376 +apply (erule gterms.induct)
1.377 +apply blast
1.378 +done
1.379 +(*<*)
1.380 +lemma "t \<in> gterms F \<Longrightarrow> t \<in> gterms G \<longrightarrow> t \<in> gterms (F\<inter>G)"
1.381 +apply (erule gterms.induct)
1.382 +(*>*)
1.383 +txt {*
1.384 +The proof begins with rule induction over the definition of
1.385 +@{term gterms}, which leaves a single subgoal:
1.386  @{subgoals[display,indent=0,margin=65]}
1.387 -*};
1.388 -apply (erule well_formed_gterm'.induct)
1.389 -txt{*
1.390 -note the induction hypothesis!
1.391 -@{subgoals[display,indent=0,margin=65]}
1.392 -*};
1.393 -apply auto
1.394 -done
1.395 +To prove this, we assume @{term "Apply f args \<in> gterms G"}.  Rule inversion,
1.396 +in the form of @{text gterm_Apply_elim}, infers
1.397 +that every element of @{term args} belongs to
1.398 +@{term "gterms G"}; hence (by the induction hypothesis) it belongs
1.399 +to @{term "gterms (F \<inter> G)"}.  Rule inversion also yields
1.400 +@{term "f \<in> G"} and hence @{term "f \<in> F \<inter> G"}.
1.401 +All of this reasoning is done by @{text blast}.
1.402 +
1.403 +\smallskip
1.404 +Our distributive law is a trivial consequence of previously-proved results:
1.405 +*}
1.406 +(*<*)oops(*>*)
1.407 +lemma gterms_Int_eq [simp]:
1.408 +     "gterms (F \<inter> G) = gterms F \<inter> gterms G"
1.409 +by (blast intro!: mono_Int monoI gterms_mono)
1.410 +
1.411 +text_raw {*
1.412 +\index{rule inversion|)}%
1.413 +\index{ground terms example|)}
1.414
1.415
1.416 -text{*
1.417 -@{thm[display] lists_Int_eq[no_vars]}
1.418 +\begin{isamarkuptext}
1.419 +\begin{exercise}
1.420 +A function mapping function symbols to their
1.421 +types is called a \textbf{signature}.  Given a type
1.422 +ranging over type symbols, we can represent a function's type by a
1.423 +list of argument types paired with the result type.
1.424 +Complete this inductive definition:
1.425 +\begin{isabelle}
1.426  *}
1.427
1.428 +inductive_set
1.429 +  well_typed_gterm :: "('f \<Rightarrow> 't list * 't) \<Rightarrow> ('f gterm * 't)set"
1.430 +  for sig :: "'f \<Rightarrow> 't list * 't"
1.431 +(*<*)
1.432 +where
1.433 +step[intro!]:
1.434 +    "\<lbrakk>\<forall>pair \<in> set args. pair \<in> well_typed_gterm sig;
1.435 +      sig f = (map snd args, rtype)\<rbrakk>
1.436 +     \<Longrightarrow> (Apply f (map fst args), rtype)
1.437 +         \<in> well_typed_gterm sig"
1.438 +(*>*)
1.439 +text_raw {*
1.440 +\end{isabelle}
1.441 +\end{exercise}
1.442 +\end{isamarkuptext}
1.443 +*}
1.444 +
1.445 +(*<*)
1.446 +
1.447 +text{*the following declaration isn't actually used*}
1.448 +consts integer_arity :: "integer_op \<Rightarrow> nat"
1.449 +primrec
1.450 +"integer_arity (Number n)        = 0"
1.451 +"integer_arity UnaryMinus        = 1"
1.452 +"integer_arity Plus              = 2"
1.453 +
1.454  text{* the rest isn't used: too complicated.  OK for an exercise though.*}
1.455
1.456  inductive_set
1.457 @@ -146,17 +375,6 @@
1.458  | UnaryMinus: "(UnaryMinus, ([()], ())) \<in> integer_signature"
1.459  | Plus:       "(Plus,       ([(),()], ())) \<in> integer_signature"
1.460
1.461 -
1.462 -inductive_set
1.463 -  well_typed_gterm :: "('f \<Rightarrow> 't list * 't) \<Rightarrow> ('f gterm * 't)set"
1.464 -  for sig :: "'f \<Rightarrow> 't list * 't"
1.465 -where
1.466 -step[intro!]:
1.467 -    "\<lbrakk>\<forall>pair \<in> set args. pair \<in> well_typed_gterm sig;
1.468 -      sig f = (map snd args, rtype)\<rbrakk>
1.469 -     \<Longrightarrow> (Apply f (map fst args), rtype)
1.470 -         \<in> well_typed_gterm sig"
1.471 -
1.472  inductive_set
1.473    well_typed_gterm' :: "('f \<Rightarrow> 't list * 't) \<Rightarrow> ('f gterm * 't)set"
1.474    for sig :: "'f \<Rightarrow> 't list * 't"
1.475 @@ -183,4 +401,4 @@
1.476
1.477
1.478  end
1.479 -
1.480 +(*>*)

     2.1 --- a/doc-src/TutorialI/Inductive/Even.thy	Wed Jul 18 14:46:59 2007 +0200
2.2 +++ b/doc-src/TutorialI/Inductive/Even.thy	Thu Jul 19 15:29:51 2007 +0200
2.3 @@ -1,89 +1,290 @@
2.4  (* ID:         $Id$ *)
2.5 -theory Even imports Main begin
2.6 +(*<*)theory Even imports Main uses "../../antiquote_setup.ML" begin(*>*)
2.7 +
2.8 +section{* The Set of Even Numbers *}
2.9
2.10 +text {*
2.11 +\index{even numbers!defining inductively|(}%
2.12 +The set of even numbers can be inductively defined as the least set
2.13 +containing 0 and closed under the operation $+2$.  Obviously,
2.14 +\emph{even} can also be expressed using the divides relation (@{text dvd}).
2.15 +We shall prove below that the two formulations coincide.  On the way we
2.16 +shall examine the primary means of reasoning about inductively defined
2.17 +sets: rule induction.
2.18 +*}
2.19 +
2.20 +subsection{* Making an Inductive Definition *}
2.21 +
2.22 +text {*
2.23 +Using \commdx{inductive\_set}, we declare the constant @{text even} to be
2.24 +a set of natural numbers with the desired properties.
2.25 +*}
2.26
2.27  inductive_set even :: "nat set"
2.28  where
2.29    zero[intro!]: "0 \<in> even"
2.30  | step[intro!]: "n \<in> even \<Longrightarrow> (Suc (Suc n)) \<in> even"
2.31
2.32 -text{*An inductive definition consists of introduction rules.
2.33 -
2.34 -@{thm[display] even.step[no_vars]}
2.35 -\rulename{even.step}
2.36 +text {*
2.37 +An inductive definition consists of introduction rules.  The first one
2.38 +above states that 0 is even; the second states that if $n$ is even, then so
2.39 +is~$n+2$.  Given this declaration, Isabelle generates a fixed point
2.40 +definition for @{term even} and proves theorems about it,
2.41 +thus following the definitional approach (see {\S}\ref{sec:definitional}).
2.42 +These theorems
2.43 +include the introduction rules specified in the declaration, an elimination
2.44 +rule for case analysis and an induction rule.  We can refer to these
2.45 +theorems by automatically-generated names.  Here are two examples:
2.46 +@{named_thms[display,indent=0] even.zero[no_vars] (even.zero) even.step[no_vars] (even.step)}
2.47
2.48 -@{thm[display] even.induct[no_vars]}
2.49 -\rulename{even.induct}
2.50 +The introduction rules can be given attributes.  Here
2.51 +both rules are specified as \isa{intro!},%
2.52 +\index{intro"!@\isa {intro"!} (attribute)}
2.53 +directing the classical reasoner to
2.54 +apply them aggressively. Obviously, regarding 0 as even is safe.  The
2.55 +@{text step} rule is also safe because $n+2$ is even if and only if $n$ is
2.56 +even.  We prove this equivalence later.
2.57 +*}
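Outside Isabelle, the least set containing 0 and closed under $+2$ can be computed by iterating the two rules until nothing new appears. A minimal Python sketch (the function name and the bounded universe are our own illustration, not part of the theory):

```python
def even_up_to(bound):
    """Least subset of {0..bound} containing 0 and closed under +2."""
    s = set()
    while True:
        # One round of rule applications: the zero rule plus the step rule.
        new = {0} | {n + 2 for n in s if n + 2 <= bound}
        if new == s:       # nothing new: least fixed point reached
            return s
        s = new

print(sorted(even_up_to(10)))  # [0, 2, 4, 6, 8, 10]
```

Each iteration applies the introduction rules once; because the rules are positive, the iterates grow monotonically and the process converges.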
2.58
2.59 -Attributes can be given to the introduction rules.  Here both rules are
2.60 -specified as \isa{intro!}
2.61 +subsection{*Using Introduction Rules*}
2.62
2.63 -Our first lemma states that numbers of the form $2\times k$ are even. *}
2.64 +text {*
2.65 +Our first lemma states that numbers of the form $2\times k$ are even.
2.66 +Introduction rules are used to show that specific values belong to the
2.67 +inductive set.  Such proofs typically involve
2.68 +induction, perhaps over some other inductive set.
2.69 +*}
2.70 +
2.71  lemma two_times_even[intro!]: "2*k \<in> even"
2.72  apply (induct_tac k)
2.73 -txt{*
2.74 -The first step is induction on the natural number \isa{k}, which leaves
2.75 + apply auto
2.76 +done
2.77 +(*<*)
2.78 +lemma "2*k \<in> even"
2.79 +apply (induct_tac k)
2.80 +(*>*)
2.81 +txt {*
2.82 +\noindent
2.83 +The first step is induction on the natural number @{text k}, which leaves
2.84  two subgoals:
2.85  @{subgoals[display,indent=0,margin=65]}
2.86 -Here \isa{auto} simplifies both subgoals so that they match the introduction
2.87 -rules, which then are applied automatically.*};
2.88 - apply auto
2.89 -done
2.90 +Here @{text auto} simplifies both subgoals so that they match the introduction
2.91 +rules, which are then applied automatically.
2.92
2.93 -text{*Our goal is to prove the equivalence between the traditional definition
2.94 -of even (using the divides relation) and our inductive definition.  Half of
2.95 -this equivalence is trivial using the lemma just proved, whose \isa{intro!}
2.96 -attribute ensures it will be applied automatically.  *}
2.97 -
2.98 +Our ultimate goal is to prove the equivalence between the traditional
2.99 +definition of @{text even} (using the divides relation) and our inductive
2.100 +definition.  One direction of this equivalence is immediate by the lemma
2.101 +just proved, whose @{text "intro!"} attribute ensures it is applied automatically.
2.102 +*}
2.103 +(*<*)oops(*>*)
2.104  lemma dvd_imp_even: "2 dvd n \<Longrightarrow> n \<in> even"
2.105  by (auto simp add: dvd_def)
2.106
2.107 -text{*our first rule induction!*}
2.108 +subsection{* Rule Induction \label{sec:rule-induction} *}
2.109 +
2.110 +text {*
2.111 +\index{rule induction|(}%
2.112 +From the definition of the set
2.113 +@{term even}, Isabelle has
2.114 +generated an induction rule:
2.115 +@{named_thms [display,indent=0,margin=40] even.induct [no_vars] (even.induct)}
2.116 +A property @{term P} holds for every even number provided it
2.117 +holds for~@{text 0} and is closed under the operation
2.118 +\isa{Suc(Suc $\cdot$)}.  Then @{term P} is closed under the introduction
2.119 +rules for @{term even}, which is the least set closed under those rules.
2.120 +This type of inductive argument is called \textbf{rule induction}.
2.121 +
2.122 +Apart from the double application of @{term Suc}, the induction rule above
2.123 +resembles the familiar mathematical induction, which indeed is an instance
2.124 +of rule induction; the natural numbers can be defined inductively to be
2.125 +the least set containing @{text 0} and closed under~@{term Suc}.
2.126 +
2.127 +Induction is the usual way of proving a property of the elements of an
2.128 +inductively defined set.  Let us prove that all members of the set
2.129 +@{term even} are multiples of two.
2.130 +*}
2.131 +
2.132  lemma even_imp_dvd: "n \<in> even \<Longrightarrow> 2 dvd n"
2.133 +txt {*
2.134 +We begin by applying induction.  Note that @{text even.induct} has the form
2.135 +of an elimination rule, so we use the method @{text erule}.  We get two
2.136 +subgoals:
2.137 +*}
2.138  apply (erule even.induct)
2.139 -txt{*
2.140 -@{subgoals[display,indent=0,margin=65]}
2.141 -*};
2.142 +txt {*
2.143 +@{subgoals[display,indent=0]}
2.144 +We unfold the definition of @{text dvd} in both subgoals, proving the first
2.145 +one and simplifying the second:
2.146 +*}
2.147  apply (unfold dvd_def)
2.148 -txt{*
2.149 -@{subgoals[display,indent=0,margin=65]}
2.150 -*};
2.151 +txt {*
2.152 +@{subgoals[display,indent=0]}
2.153 +The next command eliminates the existential quantifier from the assumption
2.154 +and replaces @{text n} by @{text "2 * k"}.
2.155 +*}
2.156  apply clarify
2.157 -txt{*
2.158 -@{subgoals[display,indent=0,margin=65]}
2.159 -*};
2.160 +txt {*
2.161 +@{subgoals[display,indent=0]}
2.162 +To conclude, we tell Isabelle that the desired value is
2.163 +@{term "Suc k"}.  With this hint, the subgoal falls to @{text simp}.
2.164 +*}
2.165  apply (rule_tac x = "Suc k" in exI, simp)
2.166 -done
2.167 +(*<*)done(*>*)
2.168
2.169 +text {*
2.170 +Combining the previous two results yields our objective, the
2.171 +equivalence relating @{term even} and @{text dvd}.
2.172 +%
2.173 +%we don't want [iff]: discuss?
2.174 +*}
2.175
2.176 -text{*no iff-attribute because we don't always want to use it*}
2.177  theorem even_iff_dvd: "(n \<in> even) = (2 dvd n)"
2.178  by (blast intro: dvd_imp_even even_imp_dvd)
2.179
2.180 -text{*this result ISN'T inductive...*}
2.181 -lemma Suc_Suc_even_imp_even: "Suc (Suc n) \<in> even \<Longrightarrow> n \<in> even"
2.182 +
2.183 +subsection{* Generalization and Rule Induction \label{sec:gen-rule-induction} *}
2.184 +
2.185 +text {*
2.186 +\index{generalizing for induction}%
2.187 +Before applying induction, we typically must generalize
2.188 +the induction formula.  With rule induction, the required generalization
2.189 +can be hard to find and sometimes requires a complete reformulation of the
2.190 +problem.  In this  example, our first attempt uses the obvious statement of
2.191 +the result.  It fails:
2.192 +*}
2.193 +
2.194 +lemma "Suc (Suc n) \<in> even \<Longrightarrow> n \<in> even"
2.195  apply (erule even.induct)
2.196 -txt{*
2.197 -@{subgoals[display,indent=0,margin=65]}
2.198 -*};
2.199  oops
2.200 -
2.201 -text{*...so we need an inductive lemma...*}
2.202 +(*<*)
2.203 +lemma "Suc (Suc n) \<in> even \<Longrightarrow> n \<in> even"
2.204 +apply (erule even.induct)
2.205 +(*>*)
2.206 +txt {*
2.207 +Rule induction finds no occurrences of @{term "Suc(Suc n)"} in the
2.208 +conclusion, which it therefore leaves unchanged.  (Look at
2.209 +@{text even.induct} to see why this happens.)  We have these subgoals:
2.210 +@{subgoals[display,indent=0]}
2.211 +The first one is hopeless.  Rule induction on
2.212 +a non-variable term discards information, and usually fails.
2.213 +How to deal with such situations
2.214 +in general is described in {\S}\ref{sec:ind-var-in-prems} below.
2.215 +In the current case the solution is easy because
2.216 +we have the necessary inverse, subtraction:
2.217 +*}
2.218 +(*<*)oops(*>*)
2.219  lemma even_imp_even_minus_2: "n \<in> even \<Longrightarrow> n - 2 \<in> even"
2.220  apply (erule even.induct)
2.221 -txt{*
2.222 -@{subgoals[display,indent=0,margin=65]}
2.223 -*};
2.224 -apply auto
2.225 + apply auto
2.226  done
2.227 +(*<*)
2.228 +lemma "n \<in>  even \<Longrightarrow> n - 2 \<in> even"
2.229 +apply (erule even.induct)
2.230 +(*>*)
2.231 +txt {*
2.232 +This lemma is trivially inductive.  Here are the subgoals:
2.233 +@{subgoals[display,indent=0]}
2.234 +The first is trivial because @{text "0 - 2"} simplifies to @{text 0}, which is
2.235 +even.  The second is trivial too: @{term "Suc (Suc n) - 2"} simplifies to
2.236 +@{term n}, matching the assumption.%
2.237 +\index{rule induction|)}  %the sequel isn't really about induction
2.238
2.239 -text{*...and prove it in a separate step*}
2.240 +\medskip
2.241 +Using our lemma, we can easily prove the result we originally wanted:
2.242 +*}
2.243 +(*<*)oops(*>*)
2.244  lemma Suc_Suc_even_imp_even: "Suc (Suc n) \<in> even \<Longrightarrow> n \<in> even"
2.245  by (drule even_imp_even_minus_2, simp)
2.246
2.247 +text {*
2.248 +We have just proved the converse of the introduction rule @{text even.step}.
2.249 +This suggests proving the following equivalence.  We give it the
2.250 +\attrdx{iff} attribute because of its obvious value for simplification.
2.251 +*}
2.252
2.253  lemma [iff]: "((Suc (Suc n)) \<in> even) = (n \<in> even)"
2.254  by (blast dest: Suc_Suc_even_imp_even)
2.255
2.256 -end
2.257 +
2.258 +subsection{* Rule Inversion \label{sec:rule-inversion} *}
2.259 +
2.260 +text {*
2.261 +\index{rule inversion|(}%
2.262 +Case analysis on an inductive definition is called \textbf{rule
2.263 +inversion}.  It is frequently used in proofs about operational
2.264 +semantics.  It can be highly effective when it is applied
2.265 +automatically.  Let us look at how rule inversion is done in
2.266 +Isabelle/HOL\@.
2.267 +
2.268 +Recall that @{term even} is the minimal set closed under these two rules:
2.269 +@{thm [display,indent=0] even.intros [no_vars]}
2.270 +Minimality means that @{term even} contains only the elements that these
2.271 +rules force it to contain.  If we are told that @{term a}
2.272 +belongs to
2.273 +@{term even} then there are only two possibilities.  Either @{term a} is @{text 0}
2.274 +or else @{term a} has the form @{term "Suc(Suc n)"}, for some suitable @{term n}
2.275 +that belongs to
2.276 +@{term even}.  That is the gist of the @{term cases} rule, which Isabelle proves
2.277 +for us when it accepts an inductive definition:
2.278 +@{named_thms [display,indent=0,margin=40] even.cases [no_vars] (even.cases)}
2.279 +This general rule is less useful than instances of it for
2.280 +specific patterns.  For example, if @{term a} has the form
2.281 +@{term "Suc(Suc n)"} then the first case becomes irrelevant, while the second
2.282 +case tells us that @{term n} belongs to @{term even}.  Isabelle will generate
2.283 +this instance for us:
2.284 +*}
2.285 +
2.286 +inductive_cases Suc_Suc_cases [elim!]: "Suc(Suc n) \<in> even"
2.287 +
2.288 +text {*
2.289 +The \commdx{inductive\protect\_cases} command generates an instance of
2.290 +the @{text cases} rule for the supplied pattern and gives it the supplied name:
2.291 +@{named_thms [display,indent=0] Suc_Suc_cases [no_vars] (Suc_Suc_cases)}
2.292 +Applying this as an elimination rule yields one case where @{text even.cases}
2.293 +would yield two.  Rule inversion works well when the conclusions of the
2.294 +introduction rules involve datatype constructors like @{term Suc} and @{text "#"}
2.295 +(``list cons''); freeness reasoning discards all but one or two cases.
2.296
2.297 +In the \isacommand{inductive\_cases} command we supplied an
2.298 +attribute, @{text "elim!"},
2.299 +\index{elim"!@\isa {elim"!} (attribute)}%
2.300 +indicating that this elimination rule can be
2.301 +applied aggressively.  The original
2.302 +@{term cases} rule would loop if used in that manner because the
2.303 +pattern~@{term a} matches everything.
2.304 +
2.305 +The rule @{text Suc_Suc_cases} is equivalent to the following implication:
2.306 +@{term [display,indent=0] "Suc (Suc n) \<in> even \<Longrightarrow> n \<in> even"}
2.307 +Just above we devoted some effort to reaching precisely
2.308 +this result.  Yet we could have obtained it by a one-line declaration,
2.309 +dispensing with the lemma @{text even_imp_even_minus_2}.
2.310 +This example also justifies the terminology
2.311 +\textbf{rule inversion}: the new rule inverts the introduction rule
2.312 +@{text even.step}.  In general, a rule can be inverted when the set of elements
2.313 +it introduces is disjoint from those of the other introduction rules.
2.314 +
2.315 +For one-off applications of rule inversion, use the \methdx{ind_cases} method.
2.316 +Here is an example:
2.317 +*}
2.318 +
2.319 +(*<*)lemma "Suc(Suc n) \<in> even \<Longrightarrow> P"(*>*)
2.320 +apply (ind_cases "Suc(Suc n) \<in> even")
2.321 +(*<*)oops(*>*)
2.322 +
2.323 +text {*
2.324 +The specified instance of the @{text cases} rule is generated, then applied
2.325 +as an elimination rule.
2.326 +
2.327 +To summarize, every inductive definition produces a @{text cases} rule.  The
2.328 +\commdx{inductive\protect\_cases} command stores an instance of the
2.329 +@{text cases} rule for a given pattern.  Within a proof, the
2.330 +@{text ind_cases} method applies an instance of the @{text cases}
2.331 +rule.
2.332 +
2.333 +The even numbers example has shown how inductive definitions can be
2.334 +used.  Later examples will show that they are actually worth using.%
2.335 +\index{rule inversion|)}%
2.336 +\index{even numbers!defining inductively|)}
2.337 +*}
2.338 +
2.339 +(*<*)end(*>*)