*** empty log message ***
author nipkow
Mon, 28 Aug 2000 09:32:51 +0200
changeset 9689 751fde5307e4
parent 9688 d1415164b814
child 9690 50f22b1b136a
*** empty log message ***
doc-src/TutorialI/Datatype/ABexpr.thy
doc-src/TutorialI/Datatype/Fundata.thy
doc-src/TutorialI/Datatype/Nested.thy
doc-src/TutorialI/Datatype/ROOT.ML
doc-src/TutorialI/Datatype/document/ABexpr.tex
doc-src/TutorialI/Datatype/document/Nested.tex
doc-src/TutorialI/IsaMakefile
doc-src/TutorialI/Misc/AdvancedInd.thy
doc-src/TutorialI/Misc/Itrev.thy
doc-src/TutorialI/Misc/def_rewr.thy
doc-src/TutorialI/Recdef/Induction.thy
doc-src/TutorialI/Recdef/Nested1.thy
doc-src/TutorialI/Recdef/ROOT.ML
doc-src/TutorialI/Recdef/document/Induction.tex
doc-src/TutorialI/Trie/Trie.thy
doc-src/TutorialI/basics.tex
doc-src/TutorialI/fp.tex
--- a/doc-src/TutorialI/Datatype/ABexpr.thy	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Datatype/ABexpr.thy	Mon Aug 28 09:32:51 2000 +0200
@@ -101,7 +101,7 @@
 The resulting 8 goals (one for each constructor) are proved in one fell swoop:
 *}
 
-by auto;
+by simp_all;
 
 text{*
 In general, given $n$ mutually recursive datatypes $\tau@1$, \dots, $\tau@n$,
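
A hedged, self-contained sketch of the pattern the hunk above exercises (the datatypes and function names below are hypothetical and smaller than the eight-constructor pair in ABexpr.thy): two mutually recursive datatypes, a simultaneous primrec over them, and a conjunction proved by mutual induction followed by simp_all.

(* Editor's sketch, not part of the changeset. *)
datatype 'a aexp = IF "'a bexp" "'a aexp" "'a aexp"
                 | Sum "'a aexp" "'a aexp"
                 | Var 'a
                 | Num nat
and      'a bexp = Less "'a aexp" "'a aexp"
                 | Neg "'a bexp"

consts substa :: "('a => 'b aexp) => 'a aexp => 'b aexp"
       substb :: "('a => 'b aexp) => 'a bexp => 'b bexp"
primrec
  "substa s (IF b a1 a2) = IF (substb s b) (substa s a1) (substa s a2)"
  "substa s (Sum a1 a2)  = Sum (substa s a1) (substa s a2)"
  "substa s (Var x)      = s x"
  "substa s (Num n)      = Num n"
  "substb s (Less a1 a2) = Less (substa s a1) (substa s a2)"
  "substb s (Neg b)      = Neg (substb s b)"

(* Mutual induction yields one goal per constructor, all closed by the simplifier. *)
lemma "substa Var a = a & substb Var b = b";
by(induct_tac a and b, simp_all);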
--- a/doc-src/TutorialI/Datatype/Fundata.thy	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Datatype/Fundata.thy	Mon Aug 28 09:32:51 2000 +0200
@@ -36,7 +36,7 @@
 The following lemma has a canonical proof  *}
 
 lemma "map_bt (g o f) T = map_bt g (map_bt f T)";
-by(induct_tac "T", auto)
+by(induct_tac "T", simp_all)
 
 text{*\noindent
 but it is worth taking a look at the proof state after the induction step
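
To make the map_bt lemma above self-contained, a hedged reconstruction of a datatype with function-valued branching and its map function; the constructor names Tip and Branch and the exact equations are assumptions, not quoted from Fundata.thy.

(* Editor's sketch, not part of the changeset. *)
datatype ('a,'i)bigtree = Tip | Branch 'a "'i => ('a,'i)bigtree"

consts map_bt :: "('a => 'b) => ('a,'i)bigtree => ('b,'i)bigtree"
primrec
  "map_bt f Tip          = Tip"
  "map_bt f (Branch a F) = Branch (f a) (%i. map_bt f (F i))"

(* With these two equations as simp rules, the composition lemma
   "map_bt (g o f) T = map_bt g (map_bt f T)" is closed by
   induct_tac "T" followed by simp_all: in the Branch case the
   simplifier rewrites under the binder %i and uses the induction
   hypothesis pointwise. *)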
--- a/doc-src/TutorialI/Datatype/Nested.thy	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Datatype/Nested.thy	Mon Aug 28 09:32:51 2000 +0200
@@ -63,7 +63,7 @@
 
 lemma "subst  Var t  = (t ::('a,'b)term)  \\<and>
         substs Var ts = (ts::('a,'b)term list)";
-by(induct_tac t and ts, auto);
+by(induct_tac t and ts, simp_all);
 
 text{*\noindent
 Note that \isa{Var} is the identity substitution because by definition it
@@ -102,50 +102,24 @@
 *}
 
 lemma [simp]: "subst s (App f ts) = App f (map (subst s) ts)"
-by(induct_tac ts, auto)
+by(induct_tac ts, simp_all)
 
-text{*
+text{*\noindent
 What is more, we can now disable the old defining equation as a
 simplification rule:
 *}
 
 lemmas [simp del] = subst_App
 
-text{*
-The advantage is that now we have replaced @{term"substs"} by @{term"map"},
-we can profit from the large number of pre-proved lemmas about @{term"map"}.
-We illustrate this with an example, reversing terms:
-*}
-
-consts trev  :: "('a,'b)term => ('a,'b)term"
-       trevs :: "('a,'b)term list => ('a,'b)term list"
-primrec   "trev (Var x) = Var x"
-trev_App: "trev (App f ts) = App f (trevs ts)"
-
-          "trevs [] = []"
-          "trevs (t#ts) = trevs ts @ [trev t]"
-
 text{*\noindent
-Following the above methodology, we re-express \isa{trev\_App}:
-*}
-
-lemma [simp]: "trev (App f ts) = App f (rev(map trev ts))"
-by(induct_tac ts, auto)
-lemmas [simp del] = trev_App
-
-text{*\noindent
-Now it becomes quite easy to prove @{term"trev(trev t) = t"}, except that we
-still have to come up with the second half of the conjunction:
-*}
-
-lemma "trev(trev t) = (t::('a,'b)term) &
-        map trev (map trev ts) = (ts::('a,'b)term list)";
-by(induct_tac t and ts, auto simp add:rev_map);
-
-text{*\noindent
-Getting rid of this second conjunct requires deriving a new induction schema
-for \isa{term}, which is beyond what we have learned so far. Please stay
-tuned, we will solve this final problem in \S\ref{sec:der-ind-schemas}.
+The advantage is that, now that we have replaced @{term"substs"} by
+@{term"map"}, we can profit from the large number of pre-proved lemmas
+about @{term"map"}.  Unfortunately, inductive proofs about type
+\isa{term} are still awkward because they expect a conjunction. One
+could derive a new induction principle as well (see
+\S\ref{sec:derive-ind}), but it turns out to be simpler to define
+functions by \isacommand{recdef} instead of \isacommand{primrec}.
+The details are explained in \S\ref{sec:advanced-recdef} below.
 
 Of course, you may also combine mutual and nested recursion. For example,
 constructor \isa{Sum} in \S\ref{sec:datatype-mut-rec} could take a list of
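
For readers without the full Nested.thy at hand, a hedged reconstruction of the nested datatype and the simultaneous primrec that the lemmas in this hunk manipulate; the equation layout is an assumption, while the constructor and constant names (including the label subst_App) appear in the surrounding text.

(* Editor's sketch, not part of the changeset. *)
datatype ('a,'b)term = Var 'a | App 'b "('a,'b)term list"

consts subst  :: "('a => ('a,'b)term) => ('a,'b)term      => ('a,'b)term"
       substs :: "('a => ('a,'b)term) => ('a,'b)term list => ('a,'b)term list"
primrec
           "subst  s (Var x)    = s x"
subst_App: "subst  s (App f ts) = App f (substs s ts)"
           "substs s []         = []"
           "substs s (t#ts)     = subst s t # substs s ts"

(* The [simp] lemma in the hunk re-expresses the App equation via map,
   "subst s (App f ts) = App f (map (subst s) ts)", after which
   subst_App itself is deleted from the simpset by "lemmas [simp del]". *)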
--- a/doc-src/TutorialI/Datatype/ROOT.ML	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Datatype/ROOT.ML	Mon Aug 28 09:32:51 2000 +0200
@@ -1,3 +1,4 @@
 use_thy "ABexpr";
 use_thy "unfoldnested";
+use_thy "Nested";
 use_thy "Fundata";
--- a/doc-src/TutorialI/Datatype/document/ABexpr.tex	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Datatype/document/ABexpr.tex	Mon Aug 28 09:32:51 2000 +0200
@@ -91,7 +91,7 @@
 \noindent
 The resulting 8 goals (one for each constructor) are proved in one fell swoop:%
 \end{isamarkuptxt}%
-\isacommand{by}\ auto%
+\isacommand{by}\ simp\_all%
 \begin{isamarkuptext}%
 In general, given $n$ mutually recursive datatypes $\tau@1$, \dots, $\tau@n$,
 an inductive proof expects a goal of the form
--- a/doc-src/TutorialI/Datatype/document/Nested.tex	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Datatype/document/Nested.tex	Mon Aug 28 09:32:51 2000 +0200
@@ -57,7 +57,7 @@
 \end{isamarkuptext}%
 \isacommand{lemma}\ {"}subst\ \ Var\ t\ \ =\ (t\ ::('a,'b)term)\ \ {\isasymand}\isanewline
 \ \ \ \ \ \ \ \ substs\ Var\ ts\ =\ (ts::('a,'b)term\ list){"}\isanewline
-\isacommand{by}(induct\_tac\ t\ \isakeyword{and}\ ts,\ auto)%
+\isacommand{by}(induct\_tac\ t\ \isakeyword{and}\ ts,\ simp\_all)%
 \begin{isamarkuptext}%
 \noindent
 Note that \isa{Var} is the identity substitution because by definition it
@@ -99,44 +99,23 @@
 suggested equation holds:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ [simp]:\ {"}subst\ s\ (App\ f\ ts)\ =\ App\ f\ (map\ (subst\ s)\ ts){"}\isanewline
-\isacommand{by}(induct\_tac\ ts,\ auto)%
+\isacommand{by}(induct\_tac\ ts,\ simp\_all)%
 \begin{isamarkuptext}%
+\noindent
 What is more, we can now disable the old defining equation as a
 simplification rule:%
 \end{isamarkuptext}%
 \isacommand{lemmas}\ [simp\ del]\ =\ subst\_App%
 \begin{isamarkuptext}%
-The advantage is that now we have replaced \isa{substs} by \isa{map},
-we can profit from the large number of pre-proved lemmas about \isa{map}.
-We illustrate this with an example, reversing terms:%
-\end{isamarkuptext}%
-\isacommand{consts}\ trev\ \ ::\ {"}('a,'b)term\ =>\ ('a,'b)term{"}\isanewline
-\ \ \ \ \ \ \ trevs\ ::\ {"}('a,'b)term\ list\ =>\ ('a,'b)term\ list{"}\isanewline
-\isacommand{primrec}\ \ \ {"}trev\ (Var\ x)\ =\ Var\ x{"}\isanewline
-trev\_App:\ {"}trev\ (App\ f\ ts)\ =\ App\ f\ (trevs\ ts){"}\isanewline
-\isanewline
-\ \ \ \ \ \ \ \ \ \ {"}trevs\ []\ =\ []{"}\isanewline
-\ \ \ \ \ \ \ \ \ \ {"}trevs\ (t\#ts)\ =\ trevs\ ts\ @\ [trev\ t]{"}%
-\begin{isamarkuptext}%
 \noindent
-Following the above methodology, we re-express \isa{trev\_App}:%
-\end{isamarkuptext}%
-\isacommand{lemma}\ [simp]:\ {"}trev\ (App\ f\ ts)\ =\ App\ f\ (rev(map\ trev\ ts)){"}\isanewline
-\isacommand{by}(induct\_tac\ ts,\ auto)\isanewline
-\isacommand{lemmas}\ [simp\ del]\ =\ trev\_App%
-\begin{isamarkuptext}%
-\noindent
-Now it becomes quite easy to prove \isa{trev\ (trev\ \mbox{t})\ =\ \mbox{t}}, except that we
-still have to come up with the second half of the conjunction:%
-\end{isamarkuptext}%
-\isacommand{lemma}\ {"}trev(trev\ t)\ =\ (t::('a,'b)term)\ \&\isanewline
-\ \ \ \ \ \ \ \ map\ trev\ (map\ trev\ ts)\ =\ (ts::('a,'b)term\ list){"}\isanewline
-\isacommand{by}(induct\_tac\ t\ \isakeyword{and}\ ts,\ auto\ simp\ add:rev\_map)%
-\begin{isamarkuptext}%
-\noindent
-Getting rid of this second conjunct requires deriving a new induction schema
-for \isa{term}, which is beyond what we have learned so far. Please stay
-tuned, we will solve this final problem in \S\ref{sec:der-ind-schemas}.
+The advantage is that, now that we have replaced \isa{substs} by
+\isa{map}, we can profit from the large number of pre-proved lemmas
+about \isa{map}.  Unfortunately, inductive proofs about type
+\isa{term} are still awkward because they expect a conjunction. One
+could derive a new induction principle as well (see
+\S\ref{sec:derive-ind}), but it turns out to be simpler to define
+functions by \isacommand{recdef} instead of \isacommand{primrec}.
+The details are explained in \S\ref{sec:advanced-recdef} below.
 
 Of course, you may also combine mutual and nested recursion. For example,
 constructor \isa{Sum} in \S\ref{sec:datatype-mut-rec} could take a list of
--- a/doc-src/TutorialI/IsaMakefile	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/IsaMakefile	Mon Aug 28 09:32:51 2000 +0200
@@ -88,7 +88,7 @@
 
 $(LOG)/HOL-Recdef.gz: $(OUT)/HOL Recdef/examples.thy Recdef/termination.thy \
   Recdef/simplification.thy Recdef/Induction.thy \
-  Recdef/Nested0.thy Recdef/Nested1.thy 
+  Recdef/Nested0.thy Recdef/Nested1.thy Recdef/Nested2.thy
 	@$(ISATOOL) usedir -i true -d dvi -D document $(OUT)/HOL Recdef
 	@rm -f tutorial.dvi
 
--- a/doc-src/TutorialI/Misc/AdvancedInd.thy	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Misc/AdvancedInd.thy	Mon Aug 28 09:32:51 2000 +0200
@@ -7,15 +7,15 @@
 finer points of induction. The two questions we answer are: what to do if the
 proposition to be proved is not directly amenable to induction, and how to
 utilize and even derive new induction schemas.
-*}
+*};
 
-subsection{*Massaging the proposition\label{sec:ind-var-in-prems}*}
+subsection{*Massaging the proposition\label{sec:ind-var-in-prems}*};
 
 text{*
 \noindent
 So far we have assumed that the theorem we want to prove is already in a form
 that is amenable to induction, but this is not always the case:
-*}
+*};
 
 lemma "xs \\<noteq> [] \\<Longrightarrow> hd(rev xs) = last xs";
 apply(induct_tac xs);
@@ -47,11 +47,11 @@
 using \isa{\isasymlongrightarrow}.}
 \end{quote}
 This means we should prove
-*}
-(*<*)oops(*>*)
+*};
+(*<*)oops;(*>*)
 lemma hd_rev: "xs \\<noteq> [] \\<longrightarrow> hd(rev xs) = last xs";
 (*<*)
-by(induct_tac xs, auto)
+by(induct_tac xs, auto);
 (*>*)
 
 text{*\noindent
@@ -61,13 +61,13 @@
 \end{isabellepar}%
 which is trivial, and \isa{auto} finishes the whole proof.
 
-If \isa{hd\_rev} is meant to be simplification rule, you are done. But if you
+If \isa{hd\_rev} is meant to be a simplification rule, you are done. But if you
 really need the \isa{\isasymLongrightarrow}-version of \isa{hd\_rev}, for
 example because you want to apply it as an introduction rule, you need to
 derive it separately, by combining it with modus ponens:
-*}
+*};
 
-lemmas hd_revI = hd_rev[THEN mp]
+lemmas hd_revI = hd_rev[THEN mp];
  
 text{*\noindent
 which yields the lemma we originally set out to prove.
@@ -79,34 +79,34 @@
 Additionally, you may also have to universally quantify some other variables,
 which can yield a fairly complex conclusion.
 Here is a simple example (which is proved by \isa{blast}):
-*}
+*};
 
-lemma simple: "\\<forall> y. A y \\<longrightarrow> B y \<longrightarrow> B y & A y"
-(*<*)by blast(*>*)
+lemma simple: "\\<forall>y. A y \\<longrightarrow> B y \<longrightarrow> B y & A y";
+(*<*)by blast;(*>*)
 
 text{*\noindent
 You can get the desired lemma by explicit
 application of modus ponens and \isa{spec}:
-*}
+*};
 
-lemmas myrule = simple[THEN spec, THEN mp, THEN mp]
+lemmas myrule = simple[THEN spec, THEN mp, THEN mp];
 
 text{*\noindent
 or the wholesale stripping of \isa{\isasymforall} and
 \isa{\isasymlongrightarrow} in the conclusion via \isa{rulify} 
-*}
+*};
 
-lemmas myrule = simple[rulify]
+lemmas myrule = simple[rulify];
 
 text{*\noindent
-yielding @{thm"myrule"}.
+yielding @{thm"myrule"[no_vars]}.
 You can go one step further and include these derivations already in the
 statement of your original lemma, thus avoiding the intermediate step:
-*}
+*};
 
-lemma myrule[rulify]:  "\\<forall> y. A y \\<longrightarrow> B y \<longrightarrow> B y & A y"
+lemma myrule[rulify]:  "\\<forall>y. A y \\<longrightarrow> B y \<longrightarrow> B y & A y";
 (*<*)
-by blast
+by blast;
 (*>*)
 
 text{*
@@ -118,10 +118,9 @@
 \[ \forall y@1 \dots y@n.~ x = t \longrightarrow C \] where $y@1 \dots y@n$
 are the free variables in $t$ and $x$ is new, and perform induction on $x$
 afterwards. An example appears below.
+*};
 
-*}
-
-subsection{*Beyond structural induction*}
+subsection{*Beyond structural and recursion induction*};
 
 text{*
 So far, inductive proofs were by structural induction for
@@ -136,13 +135,13 @@
 induction'', where you must prove $P(n)$ under the assumption that $P(m)$
 holds for all $m<n$. In Isabelle, this is the theorem \isa{less\_induct}:
 \begin{quote}
-@{thm[display]"less_induct"}
+@{thm[display]"less_induct"[no_vars]}
 \end{quote}
 Here is an example of its application.
-*}
+*};
 
-consts f :: "nat => nat"
-axioms f_ax: "f(f(n)) < f(Suc(n))"
+consts f :: "nat => nat";
+axioms f_ax: "f(f(n)) < f(Suc(n))";
 
 text{*\noindent
 From the above axiom\footnote{In general, the use of axioms is strongly
@@ -152,19 +151,19 @@
 for \isa{f} it follows that @{term"n <= f n"}, which can
 be proved by induction on @{term"f n"}. Following the recipe outlined
 above, we have to phrase the proposition as follows to allow induction:
-*}
+*};
 
-lemma f_incr_lem: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i"
+lemma f_incr_lem: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i";
 
 txt{*\noindent
 To perform induction on \isa{k} using \isa{less\_induct}, we use the same
 general induction method as for recursion induction (see
 \S\ref{sec:recdef-induction}):
-*}
+*};
 
-apply(induct_tac k rule:less_induct)
+apply(induct_tac k rule:less_induct);
 (*<*)
-apply(rule allI)
+apply(rule allI);
 apply(case_tac i);
  apply(simp);
 (*>*)
@@ -182,7 +181,7 @@
 \ \ \ \ \ \ \ {\isasymlbrakk}{\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i});\ \mbox{i}\ =\ Suc\ \mbox{nat}{\isasymrbrakk}\isanewline
 \ \ \ \ \ \ \ {\isasymLongrightarrow}\ \mbox{n}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i}
 \end{isabellepar}%
-*}
+*};
 
 by(blast intro!: f_ax Suc_leI intro:le_less_trans);
 
@@ -190,11 +189,11 @@
 It is not surprising if you find the last step puzzling.
 The proof goes like this (writing \isa{j} instead of \isa{nat}).
 Since @{term"i = Suc j"} it suffices to show
-@{term"j < f(Suc j)"} (by \isa{Suc\_leI}: @{thm"Suc_leI"}). This is
+@{term"j < f(Suc j)"} (by \isa{Suc\_leI}: @{thm"Suc_leI"[no_vars]}). This is
 proved as follows. From \isa{f\_ax} we have @{term"f (f j) < f (Suc j)"}
 (1) which implies @{term"f j <= f (f j)"} (by the induction hypothesis).
 Using (1) once more we obtain @{term"f j < f(Suc j)"} (2) by transitivity
-(\isa{le_less_trans}: @{thm"le_less_trans"}).
+(\isa{le_less_trans}: @{thm"le_less_trans"[no_vars]}).
 Using the induction hypothesis once more we obtain @{term"j <= f j"}
 which, together with (2) yields @{term"j < f (Suc j)"} (again by
 \isa{le_less_trans}).
@@ -206,17 +205,17 @@
 proofs.
 
 We can now derive the desired @{term"i <= f i"} from \isa{f\_incr}:
-*}
+*};
 
 lemmas f_incr = f_incr_lem[rulify, OF refl];
 
-text{*
+text{*\noindent
 The final \isa{refl} gets rid of the premise \isa{?k = f ?i}. Again, we could
 have included this derivation in the original statement of the lemma:
-*}
+*};
 
-lemma f_incr[rulify, OF refl]: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i"
-(*<*)oops(*>*)
+lemma f_incr[rulify, OF refl]: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i";
+(*<*)oops;(*>*)
 
 text{*
 \begin{exercise}
@@ -236,14 +235,61 @@
 \begin{ttbox}
 apply(induct_tac y1 ... yn and ... and z1 ... zm rule: r)
 \end{ttbox}
+*};
 
+subsection{*Derivation of new induction schemas*};
+
+text{*\label{sec:derive-ind}
+Induction schemas are ordinary theorems and you can derive new ones
+whenever you wish.  This section shows you how, using
+\isa{less\_induct} as an example. Assume we only have structural
+induction available for @{typ"nat"} and want to derive complete induction. This
+requires us to generalize the statement first:
+*};
+
+lemma induct_lem: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) ==> \\<forall>m<n. P m";
+apply(induct_tac n);
+
+txt{*\noindent
+The base case is trivially true. For the induction step (@{term"m <
+Suc n"}) we distinguish two cases: @{term"m < n"} is true by the
+induction hypothesis and @{term"m = n"} follows from the assumption,
+again using the induction hypothesis:
+*};
+
+apply(blast);
+(* apply(blast elim:less_SucE); *)
+ML"set quick_and_dirty"
+sorry;
+ML"reset quick_and_dirty"
+
+text{*\noindent
+The elimination rule \isa{less\_SucE} expresses the case distinction:
+\begin{quote}
+@{thm[display]"less_SucE"[no_vars]}
+\end{quote}
+
+Now it is straightforward to derive the original version of
+\isa{less\_induct} by manipulating the conclusion of the above lemma:
+instantiate \isa{n} by @{term"Suc n"} and \isa{m} by \isa{n} and
+remove the trivial condition @{term"n < Suc n"}. Fortunately, this
+happens automatically when we add the lemma as a new premise to the
+desired goal:
+*};
+
+theorem less_induct: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) ==> P n";
+by(insert induct_lem, blast);
+
+text{*\noindent
 Finally we should mention that HOL already provides the mother of all
 inductions, \emph{wellfounded induction} (\isa{wf\_induct}):
 \begin{quote}
-@{thm[display]"wf_induct"}
+@{thm[display]"wf_induct"[no_vars]}
 \end{quote}
+where @{term"wf r"} means that the relation \isa{r} is wellfounded.
+For example, \isa{less\_induct} is the special case where \isa{r} is \isa{<} on @{typ"nat"}.
 For details see the library.
-*}
+*};
 
 (*<*)
 end
--- a/doc-src/TutorialI/Misc/Itrev.thy	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Misc/Itrev.thy	Mon Aug 28 09:32:51 2000 +0200
@@ -28,7 +28,7 @@
 There is no choice as to the induction variable, and we immediately simplify:
 *};
 
-apply(induct_tac xs, auto);
+apply(induct_tac xs, simp_all);
 
 txt{*\noindent
 Unfortunately, this is not a complete success:
@@ -94,6 +94,6 @@
 *};
 
 (*<*)
-by(induct_tac xs, simp, simp);
+by(induct_tac xs, simp_all);
 end
 (*>*)
--- a/doc-src/TutorialI/Misc/def_rewr.thy	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Misc/def_rewr.thy	Mon Aug 28 09:32:51 2000 +0200
@@ -20,25 +20,23 @@
 lemma "exor A (\\<not>A)";
 
 txt{*\noindent
-There is a special method for \emph{unfolding} definitions, which we need to
-get started:\indexbold{*unfold}\indexbold{definition!unfolding}
+Typically, the opening move consists in \emph{unfolding} the definition(s), which we need
+in order to get started, but nothing else:\indexbold{*unfold}\indexbold{definition!unfolding}
 *}
 
-apply(unfold exor_def);
+apply(simp only:exor_def);
 
 txt{*\noindent
-It unfolds the given list of definitions (here merely one) in all subgoals:
+In this particular case, the resulting goal
 \begin{isabellepar}%
 ~1.~A~{\isasymand}~{\isasymnot}~{\isasymnot}~A~{\isasymor}~{\isasymnot}~A~{\isasymand}~{\isasymnot}~A%
 \end{isabellepar}%
-The resulting goal can be proved by simplification.
-
-In case we want to expand a definition in the middle of a proof, we can
-simply include it locally:*}(*<*)oops;lemma "exor A (\\<not>A)";(*>*)
-apply(simp add: exor_def);(*<*).(*>*)
+can be proved by simplification. Thus we could have proved the lemma outright:
+*}(*<*)oops;lemma "exor A (\\<not>A)";(*>*)
+by(simp add: exor_def)
 
 text{*\noindent
-In fact, this one command proves the above lemma directly.
+Of course we can also unfold definitions in the middle of a proof.
 
 You should normally not turn a definition permanently into a simplification
 rule because this defeats the whole purpose of an abbreviation.
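
A hedged reconstruction of the definition being unfolded in this hunk, chosen so that unfolding exor_def in the goal exor A (~A) produces exactly the subgoal displayed above; the constdefs form is an assumption.

(* Editor's sketch, not part of the changeset. *)
constdefs exor :: "bool => bool => bool"
         "exor A B == A & ~B | ~A & B"

(* simp only:exor_def replaces exor by its right-hand side and does
   nothing else, yielding  A & ~~A | ~A & ~A,  which plain simp (or the
   one-liner  by(simp add: exor_def)  shown above) then proves. *)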
--- a/doc-src/TutorialI/Recdef/Induction.thy	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Recdef/Induction.thy	Mon Aug 28 09:32:51 2000 +0200
@@ -41,7 +41,7 @@
 The rest is pure simplification:
 *}
 
-by auto;
+by simp_all;
 
 text{*
 Try proving the above lemma by structural induction, and you find that you
@@ -58,13 +58,13 @@
 contain the term $f~x@1~\dots~x@n$ but this need not be the case. The
 induction rules do not mention $f$ at all. For example \isa{sep.induct}
 \begin{isabellepar}%
-{\isasymlbrakk}~{\isasymAnd}a.~?P~a~[];\isanewline
-~~{\isasymAnd}a~x.~?P~a~[x];\isanewline
-~~{\isasymAnd}a~x~y~zs.~?P~a~(y~\#~zs)~{\isasymLongrightarrow}~?P~a~(x~\#~y~\#~zs){\isasymrbrakk}\isanewline
-{\isasymLongrightarrow}~?P~?u~?v%
+{\isasymlbrakk}~{\isasymAnd}a.~P~a~[];\isanewline
+~~{\isasymAnd}a~x.~P~a~[x];\isanewline
+~~{\isasymAnd}a~x~y~zs.~P~a~(y~\#~zs)~{\isasymLongrightarrow}~P~a~(x~\#~y~\#~zs){\isasymrbrakk}\isanewline
+{\isasymLongrightarrow}~P~u~v%
 \end{isabellepar}%
-merely says that in order to prove a property \isa{?P} of \isa{?u} and
-\isa{?v} you need to prove it for the three cases where \isa{?v} is the
+merely says that in order to prove a property \isa{P} of \isa{u} and
+\isa{v} you need to prove it for the three cases where \isa{v} is the
 empty list, the singleton list, and the list with at least two elements
 (in which case you may assume it holds for the tail of that list).
 *}
--- a/doc-src/TutorialI/Recdef/Nested1.thy	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Recdef/Nested1.thy	Mon Aug 28 09:32:51 2000 +0200
@@ -1,7 +1,7 @@
 (*<*)
-theory Nested1 = Nested0:
+theory Nested1 = Nested0:;
 (*>*)
-consts trev  :: "('a,'b)term => ('a,'b)term"
+consts trev  :: "('a,'b)term => ('a,'b)term";
 
 text{*\noindent
 Although the definition of @{term"trev"} is quite natural, we will have
@@ -12,31 +12,33 @@
 Defining @{term"trev"} by \isacommand{recdef} rather than \isacommand{primrec}
 simplifies matters because we are now free to use the recursion equation
 suggested at the end of \S\ref{sec:nested-datatype}:
-*}
-ML"Prim.Add_recdef_congs [map_cong]";
-ML"Prim.print_recdef_congs(Context.the_context())";
+*};
 
 recdef trev "measure size"
  "trev (Var x) = Var x"
  "trev (App f ts) = App f (rev(map trev ts))";
-ML"Prim.congs []";
-congs map_cong;
-thm trev.simps;
-(*<*)ML"Pretty.setmargin 60"(*>*)
+
 text{*
 FIXME: recdef should complain and generate unprovable termination condition!
+moveto todo
 
-The point is that the above termination condition is unprovable because it
-ignores some crucial information: the recursive call of @{term"trev"} will
-not involve arbitrary terms but only those in @{term"ts"}. This is expressed
-by theorem \isa{map\_cong}:
+Remember that function @{term"size"} is defined for each \isacommand{datatype}.
+However, the definition does not succeed. Isabelle complains about an unproved termination
+condition
 \begin{quote}
-@{thm[display]"map_cong"}
+@{term[display]"t : set ts --> size t < Suc (term_size ts)"}
 \end{quote}
-*}
+where @{term"set"} returns the set of elements of a list---no special knowledge of sets is
+required in the following.
+First we have to understand why the recursive call of @{term"trev"} underneath @{term"map"} leads
+to the above condition. The reason is that \isacommand{recdef} ``knows'' that @{term"map"} will
+apply @{term"trev"} only to elements of @{term"ts"}. Thus the above condition expresses that
+the size of the argument @{term"t : set ts"} of any recursive call of @{term"trev"} is strictly
+less than @{term"size(App f ts) = Suc(term_size ts)"}.
+We will now prove the termination condition and continue with our definition.
+Below we return to the question of how \isacommand{recdef} ``knows'' about @{term"map"}.
+*};
 
 (*<*)
-end
-(*>*)
-
-
+end;
+(*>*)
\ No newline at end of file
--- a/doc-src/TutorialI/Recdef/ROOT.ML	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Recdef/ROOT.ML	Mon Aug 28 09:32:51 2000 +0200
@@ -1,3 +1,4 @@
 use_thy "termination";
 use_thy "Induction";
-(*use_thy "Nested1";*)
+use_thy "Nested1";
+use_thy "Nested2";
--- a/doc-src/TutorialI/Recdef/document/Induction.tex	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Recdef/document/Induction.tex	Mon Aug 28 09:32:51 2000 +0200
@@ -36,7 +36,7 @@
 \end{isabellepar}%
 The rest is pure simplification:%
 \end{isamarkuptxt}%
-\isacommand{by}\ auto%
+\isacommand{by}\ simp\_all%
 \begin{isamarkuptext}%
 Try proving the above lemma by structural induction, and you find that you
 need an additional case distinction. What is worse, the names of variables
@@ -52,13 +52,13 @@
 contain the term $f~x@1~\dots~x@n$ but this need not be the case. The
 induction rules do not mention $f$ at all. For example \isa{sep.induct}
 \begin{isabellepar}%
-{\isasymlbrakk}~{\isasymAnd}a.~?P~a~[];\isanewline
-~~{\isasymAnd}a~x.~?P~a~[x];\isanewline
-~~{\isasymAnd}a~x~y~zs.~?P~a~(y~\#~zs)~{\isasymLongrightarrow}~?P~a~(x~\#~y~\#~zs){\isasymrbrakk}\isanewline
-{\isasymLongrightarrow}~?P~?u~?v%
+{\isasymlbrakk}~{\isasymAnd}a.~P~a~[];\isanewline
+~~{\isasymAnd}a~x.~P~a~[x];\isanewline
+~~{\isasymAnd}a~x~y~zs.~P~a~(y~\#~zs)~{\isasymLongrightarrow}~P~a~(x~\#~y~\#~zs){\isasymrbrakk}\isanewline
+{\isasymLongrightarrow}~P~u~v%
 \end{isabellepar}%
-merely says that in order to prove a property \isa{?P} of \isa{?u} and
-\isa{?v} you need to prove it for the three cases where \isa{?v} is the
+merely says that in order to prove a property \isa{P} of \isa{u} and
+\isa{v} you need to prove it for the three cases where \isa{v} is the
 empty list, the singleton list, and the list with at least two elements
 (in which case you may assume it holds for the tail of that list).%
 \end{isamarkuptext}%
--- a/doc-src/TutorialI/Trie/Trie.thy	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/Trie/Trie.thy	Mon Aug 28 09:32:51 2000 +0200
@@ -52,7 +52,7 @@
 *};
 
 lemma [simp]: "lookup (Trie None []) as = None";
-by(case_tac as, auto);
+by(case_tac as, simp_all);
 
 text{*
 Things begin to get interesting with the definition of an update function
@@ -129,8 +129,8 @@
 This proof may look surprisingly straightforward. However, note that this
 comes at a cost: the proof script is unreadable because the
 intermediate proof states are invisible, and we rely on the (possibly
-brittle) magic of \isa{auto} (after the induction) to split the remaining
-goals up in such a way that case distinction on \isa{bs} makes sense and
+brittle) magic of \isa{auto} (\isa{simp\_all} will not do---try it) to split up the subgoals
+of the induction in such a way that case distinction on \isa{bs} makes sense and
 solves the proof. Part~\ref{Isar} shows you how to write readable and stable
 proofs.
 *};
--- a/doc-src/TutorialI/basics.tex	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/basics.tex	Mon Aug 28 09:32:51 2000 +0200
@@ -256,6 +256,7 @@
 a theorem, Isabelle will turn your free variables into unknowns: it merely
 indicates that Isabelle will automatically instantiate those unknowns
 suitably when the theorem is used in some other proof.
+Note that for readability we often drop the \isa{?}s when displaying a theorem.
 \begin{warn}
   If you use \isa{?}\index{$HOL0Ex@\texttt{?}} as an existential
   quantifier, it needs to be followed by a space. Otherwise \isa{?x} is
--- a/doc-src/TutorialI/fp.tex	Fri Aug 25 12:17:09 2000 +0200
+++ b/doc-src/TutorialI/fp.tex	Mon Aug 28 09:32:51 2000 +0200
@@ -534,7 +534,8 @@
 where the list of \emph{modifiers} helps to fine tune the behaviour and may
 be empty. Most if not all of the proofs seen so far could have been performed
 with \isa{simp} instead of \isa{auto}, except that \isa{simp} attacks
-only the first subgoal and may thus need to be repeated.
+only the first subgoal and may thus need to be repeated---use \isaindex{simp_all}
+to simplify all subgoals.
 Note that \isa{simp} fails if nothing changes.
 
 \subsubsection{Adding and deleting simplification rules}
@@ -851,10 +852,12 @@
 \index{induction!recursion|)}
 \index{recursion induction|)}
 
-%\subsection{Advanced forms of recursion}
+\subsection{Advanced forms of recursion}
+\label{sec:advanced-recdef}
 
-%\input{Recdef/document/Nested0.tex}
-%\input{Recdef/document/Nested1.tex}
+\input{Recdef/document/Nested0.tex}
+\input{Recdef/document/Nested1.tex}
+\input{Recdef/document/Nested2.tex}
 
 \index{*recdef|)}
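
To make the simp versus simp_all remark in the fp.tex hunk above concrete, a small hedged example in the tutorial's theory syntax; the lemma is illustrative and not taken from the changeset.

(* Editor's sketch, not part of the changeset. *)
lemma "rev(rev xs) = xs";
apply(induct_tac xs);
(* Two subgoals remain, one for [] and one for x#list.  A single simp
   simplifies (and here solves) only the first and would have to be
   repeated; simp_all attacks all remaining subgoals at once. *)
by(simp_all);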