diff -r a39e5d43de55 -r bbefb6ce5cb2 doc-src/TutorialI/Misc/AdvancedInd.thy
--- a/doc-src/TutorialI/Misc/AdvancedInd.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/AdvancedInd.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -21,7 +21,7 @@
 apply(induct_tac xs);
 
 txt{*\noindent
-(where \isa{hd} and \isa{last} return the first and last element of a
+(where @{term"hd"} and @{term"last"} return the first and last element of a
 non-empty list)
 produces the warning
 \begin{quote}\tt
@@ -35,16 +35,16 @@
 \begin{isabelle}
 \ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
 \end{isabelle}
-We cannot prove this equality because we do not know what \isa{hd} and
-\isa{last} return when applied to \isa{[]}.
+We cannot prove this equality because we do not know what @{term"hd"} and
+@{term"last"} return when applied to @{term"[]"}.
 
 The point is that we have violated the above warning. Because the induction
-formula is only the conclusion, the occurrence of \isa{xs} in the premises is
+formula is only the conclusion, the occurrence of @{term"xs"} in the premises is
 not modified by induction. Thus the case that should have been trivial
 becomes unprovable. Fortunately, the solution is easy:
 \begin{quote}
 \emph{Pull all occurrences of the induction variable into the conclusion
-using \isa{\isasymlongrightarrow}.}
+using @{text"\<longrightarrow>"}.}
 \end{quote}
 This means we should prove
 *};
@@ -59,12 +59,13 @@
 \begin{isabelle}
 \ 1.\ []\ {\isasymnoteq}\ []\ {\isasymlongrightarrow}\ hd\ (rev\ [])\ =\ last\ []
 \end{isabelle}
-which is trivial, and \isa{auto} finishes the whole proof.
+which is trivial, and @{text"auto"} finishes the whole proof.
 
-If \isa{hd\_rev} is meant to be a simplification rule, you are done. But if you
-really need the \isa{\isasymLongrightarrow}-version of \isa{hd\_rev}, for
-example because you want to apply it as an introduction rule, you need to
-derive it separately, by combining it with modus ponens:
+If @{thm[source]hd_rev} is meant to be a simplification rule, you are
+done. But if you really need the @{text"\<Longrightarrow>"}-version of
+@{thm[source]hd_rev}, for example because you want to apply it as an
+introduction rule, you need to derive it separately, by combining it with
+modus ponens:
 *};
 
 lemmas hd_revI = hd_rev[THEN mp];
@@ -78,7 +79,7 @@
 (see the remark?? in \S\ref{??}). Additionally, you may also have to
 universally quantify some other variables, which can yield a fairly complex
 conclusion.
-Here is a simple example (which is proved by \isa{blast}):
+Here is a simple example (which is proved by @{text"blast"}):
 *};
 
 lemma simple: "\\<forall>y. A y \\<longrightarrow> B y \<longrightarrow> B y & A y";
@@ -86,14 +87,14 @@
 
 text{*\noindent
 You can get the desired lemma by explicit
-application of modus ponens and \isa{spec}:
+application of modus ponens and @{thm[source]spec}:
 *};
 
 lemmas myrule = simple[THEN spec, THEN mp, THEN mp];
 
 text{*\noindent
-or the wholesale stripping of \isa{\isasymforall} and
-\isa{\isasymlongrightarrow} in the conclusion via \isa{rulify}
+or the wholesale stripping of @{text"\<forall>"} and
+@{text"\<longrightarrow>"} in the conclusion via @{text"rulify"}
 *};
 
 lemmas myrule = simple[rulify];
@@ -130,10 +131,10 @@
 induction schema. In such cases some existing standard induction schema
 can be helpful. We show how to apply such induction schemas by an example.
 
-Structural induction on \isa{nat} is
+Structural induction on @{typ"nat"} is
 usually known as ``mathematical induction''. There is also ``complete
 induction'', where you must prove $P(n)$ under the assumption that $P(m)$
-holds for all $m<n$. In Isabelle, this is the theorem \isa{less\_induct}:
+holds for all $m<n$. In Isabelle, this is the theorem @{thm[source]less_induct}:
 \begin{quote}
 @{thm[display]"less_induct"[no_vars]}
 \end{quote}
@@ -146,7 +147,7 @@
 lemma f_incr_lem: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i";
 
 txt{*\noindent
-To perform induction on \isa{k} using \isa{less\_induct}, we use the same
+To perform induction on @{term"k"} using @{thm[source]less_induct}, we use the same
 general induction method as for recursion induction (see
 \S\ref{sec:recdef-induction}):
 *};
@@ -173,9 +174,9 @@
 \ 1.\ {\isasymAnd}\mbox{n}.\ {\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i})\isanewline
 \ \ \ \ \ \ \ {\isasymLongrightarrow}\ {\isasymforall}\mbox{i}.\ \mbox{n}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i}
 \end{isabelle}
-After stripping the \isa{\isasymforall i}, the proof continues with a case
-distinction on \isa{i}. The case \isa{i = 0} is trivial and we focus on the
-other case:
+After stripping the @{text"\<forall>i"}, the proof continues with a case
+distinction on @{term"i"}. The case @{prop"i = 0"} is trivial and we focus on
+the other case:
 \begin{isabelle}
 \ 1.\ {\isasymAnd}\mbox{n}\ \mbox{i}\ \mbox{nat}.\isanewline
 \ \ \ \ \ \ \ {\isasymlbrakk}{\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i});\ \mbox{i}\ =\ Suc\ \mbox{nat}{\isasymrbrakk}\isanewline
@@ -187,16 +188,16 @@
 
 text{*\noindent
 It is not surprising if you find the last step puzzling.
-The proof goes like this (writing \isa{j} instead of \isa{nat}).
-Since @{term"i = Suc j"} it suffices to show
-@{term"j < f(Suc j)"} (by \isa{Suc\_leI}: @{thm"Suc_leI"[no_vars]}). This is
-proved as follows. From \isa{f\_ax} we have @{term"f (f j) < f (Suc j)"}
-(1) which implies @{term"f j <= f (f j)"} (by the induction hypothesis).
-Using (1) once more we obtain @{term"f j < f(Suc j)"} (2) by transitivity
-(\isa{le_less_trans}: @{thm"le_less_trans"[no_vars]}).
-Using the induction hypothesis once more we obtain @{term"j <= f j"}
-which, together with (2) yields @{term"j < f (Suc j)"} (again by
-\isa{le_less_trans}).
+The proof goes like this (writing @{term"j"} instead of @{typ"nat"}).
+Since @{prop"i = Suc j"} it suffices to show
+@{prop"j < f(Suc j)"} (by @{thm[source]Suc_leI}: @{thm"Suc_leI"[no_vars]}). This is
+proved as follows. From @{thm[source]f_ax} we have @{prop"f (f j) < f (Suc j)"}
+(1) which implies @{prop"f j <= f (f j)"} (by the induction hypothesis).
+Using (1) once more we obtain @{prop"f j < f(Suc j)"} (2) by transitivity
+(@{thm[source]le_less_trans}: @{thm"le_less_trans"[no_vars]}).
+Using the induction hypothesis once more we obtain @{prop"j <= f j"}
+which, together with (2) yields @{prop"j < f (Suc j)"} (again by
+@{thm[source]le_less_trans}).
 
 This last step shows both the power and the danger of automatic proofs: they
 will usually not tell you how the proof goes, because it can be very hard to
@@ -204,14 +205,14 @@
 \S\ref{sec:part2?} introduces a language for writing readable yet concise
 proofs.
 
-We can now derive the desired @{term"i <= f i"} from \isa{f\_incr}:
+We can now derive the desired @{prop"i <= f i"} from @{text"f_incr"}:
 *};
 
 lemmas f_incr = f_incr_lem[rulify, OF refl];
 
 text{*\noindent
-The final \isa{refl} gets rid of the premise \isa{?k = f ?i}. Again, we could
-have included this derivation in the original statement of the lemma:
+The final @{thm[source]refl} gets rid of the premise @{text"?k = f ?i"}. Again,
+we could have included this derivation in the original statement of the lemma:
 *};
 
 lemma f_incr[rulify, OF refl]: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i";
@@ -219,22 +220,23 @@
 
 text{*
 \begin{exercise}
-From the above axiom and lemma for \isa{f} show that \isa{f} is the identity.
+From the above axiom and lemma for @{term"f"} show that @{term"f"} is the
+identity.
 \end{exercise}
 
-In general, \isa{induct\_tac} can be applied with any rule \isa{r}
-whose conclusion is of the form \isa{?P ?x1 \dots ?xn}, in which case the
+In general, @{text"induct_tac"} can be applied with any rule $r$
+whose conclusion is of the form ${?}P~?x@1 \dots ?x@n$, in which case the
 format is
-\begin{ttbox}
-apply(induct_tac y1 ... yn rule: r)
-\end{ttbox}\index{*induct_tac}%
-where \isa{y1}, \dots, \isa{yn} are variables in the first subgoal.
-In fact, \isa{induct\_tac} even allows the conclusion of
-\isa{r} to be an (iterated) conjunction of formulae of the above form, in
+\begin{quote}
+\isacommand{apply}@{text"(induct_tac"} $y@1 \dots y@n$ @{text"rule:"} $r$@{text")"}
+\end{quote}\index{*induct_tac}%
+where $y@1, \dots, y@n$ are variables in the first subgoal.
+In fact, @{text"induct_tac"} even allows the conclusion of
+$r$ to be an (iterated) conjunction of formulae of the above form, in
 which case the application is
-\begin{ttbox}
-apply(induct_tac y1 ... yn and ... and z1 ... zm rule: r)
-\end{ttbox}
+\begin{quote}
+\isacommand{apply}@{text"(induct_tac"} $y@1 \dots y@n$ @{text"and"} \dots\ @{text"and"} $z@1 \dots z@m$ @{text"rule:"} $r$@{text")"}
+\end{quote}
 *};
 
 subsection{*Derivation of new induction schemas*};
@@ -242,18 +244,18 @@
 text{*\label{sec:derive-ind}
 Induction schemas are ordinary theorems and you can derive new ones
 whenever you wish.  This section shows you how to, using the example
-of \isa{less\_induct}. Assume we only have structural induction
+of @{thm[source]less_induct}. Assume we only have structural induction
 available for @{typ"nat"} and want to derive complete induction. This
 requires us to generalize the statement first:
 *};
 
-lemma induct_lem: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) ==> \\<forall>m<n. P m";
+lemma induct_lem: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) \\<Longrightarrow> \\<forall>m<n. P m";
-theorem less_induct: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) ==> P n";
+theorem less_induct: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) \\<Longrightarrow> P n";
 by(insert induct_lem, blast);
 
 text{*\noindent
 Finally we should mention that HOL already provides the mother of all
-inductions, \emph{wellfounded induction} (\isa{wf\_induct}):
+inductions, \emph{wellfounded induction} (@{thm[source]wf_induct}):
 \begin{quote}
 @{thm[display]"wf_induct"[no_vars]}
 \end{quote}
-where @{term"wf r"} means that the relation \isa{r} is wellfounded.
-For example \isa{less\_induct} is the special case where \isa{r} is \isa{<} on @{typ"nat"}.
-For details see the library.
+where @{term"wf r"} means that the relation @{term"r"} is wellfounded.
+For example @{thm[source]less_induct} is the special case where @{term"r"} is
+@{text"<"} on @{typ"nat"}. For details see the library.
 *};
 (*<*)
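The sketch below is not part of the changeset above; it merely consolidates, in the old Isabelle99/Isar syntax this file uses, the idiom the patched text teaches: pull the induction variable into the conclusion with \<longrightarrow>, prove the generalized statement by induction, and only afterwards recover the \<Longrightarrow>-form via modus ponens or the rulify attribute (renamed rule_format in later Isabelle versions). The name hd_revI' is purely illustrative; hd_rev and hd_revI are the names used in the patch.

lemma hd_rev: "xs \<noteq> [] \<longrightarrow> hd(rev xs) = last xs";
apply(induct_tac xs);
(* the [] case holds vacuously; as the patched text notes, auto closes both subgoals *)
by(auto);

(* the \<Longrightarrow>-version, obtained by modus ponens *)
lemmas hd_revI = hd_rev[THEN mp];

(* the same effect via wholesale stripping of \<longrightarrow> *)
lemmas hd_revI' = hd_rev[rulify];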