*** empty log message ***
author nipkow
Fri, 01 Sep 2000 19:09:44 +0200
changeset 9792 bbefb6ce5cb2
parent 9791 a39e5d43de55
child 9793 2c3d4e03e00c
doc-src/TutorialI/CodeGen/CodeGen.thy
doc-src/TutorialI/Datatype/ABexpr.thy
doc-src/TutorialI/Datatype/Fundata.thy
doc-src/TutorialI/Datatype/Nested.thy
doc-src/TutorialI/Datatype/document/ABexpr.tex
doc-src/TutorialI/Datatype/document/Fundata.tex
doc-src/TutorialI/Datatype/document/Nested.tex
doc-src/TutorialI/Ifexpr/Ifexpr.thy
doc-src/TutorialI/Ifexpr/document/Ifexpr.tex
doc-src/TutorialI/Misc/AdvancedInd.thy
doc-src/TutorialI/Misc/Itrev.thy
doc-src/TutorialI/Misc/Tree.thy
doc-src/TutorialI/Misc/Tree2.thy
doc-src/TutorialI/Misc/asm_simp.thy
doc-src/TutorialI/Misc/case_exprs.thy
doc-src/TutorialI/Misc/case_splits.thy
doc-src/TutorialI/Misc/cond_rewr.thy
doc-src/TutorialI/Misc/document/AdvancedInd.tex
doc-src/TutorialI/Misc/document/Itrev.tex
doc-src/TutorialI/Misc/document/Tree2.tex
doc-src/TutorialI/Misc/document/asm_simp.tex
doc-src/TutorialI/Misc/document/case_exprs.tex
doc-src/TutorialI/Misc/document/case_splits.tex
doc-src/TutorialI/Misc/document/cond_rewr.tex
doc-src/TutorialI/Misc/document/let_rewr.tex
doc-src/TutorialI/Misc/document/natsum.tex
doc-src/TutorialI/Misc/document/pairs.tex
doc-src/TutorialI/Misc/document/trace_simp.tex
doc-src/TutorialI/Misc/document/types.tex
doc-src/TutorialI/Misc/let_rewr.thy
doc-src/TutorialI/Misc/natsum.thy
doc-src/TutorialI/Misc/pairs.thy
doc-src/TutorialI/Misc/prime_def.thy
doc-src/TutorialI/Misc/trace_simp.thy
doc-src/TutorialI/Misc/types.thy
doc-src/TutorialI/Recdef/Induction.thy
doc-src/TutorialI/Recdef/Nested1.thy
doc-src/TutorialI/Recdef/Nested2.thy
doc-src/TutorialI/Recdef/document/Induction.tex
doc-src/TutorialI/Recdef/document/Nested1.tex
doc-src/TutorialI/Recdef/document/Nested2.tex
doc-src/TutorialI/Recdef/document/examples.tex
doc-src/TutorialI/Recdef/document/simplification.tex
doc-src/TutorialI/Recdef/document/termination.tex
doc-src/TutorialI/Recdef/examples.thy
doc-src/TutorialI/Recdef/simplification.thy
doc-src/TutorialI/Recdef/termination.thy
doc-src/TutorialI/ToyList/ToyList.thy
doc-src/TutorialI/ToyList/document/ToyList.tex
doc-src/TutorialI/Trie/Trie.thy
doc-src/TutorialI/Trie/document/Trie.tex
doc-src/TutorialI/basics.tex
doc-src/TutorialI/fp.tex
--- a/doc-src/TutorialI/CodeGen/CodeGen.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/CodeGen/CodeGen.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -35,7 +35,7 @@
 The stack machine has three instructions: load a constant value onto the
 stack, load the contents of a certain address onto the stack, and apply a
 binary operation to the two topmost elements of the stack, replacing them by
-the result. As for \isa{expr}, addresses and values are type parameters:
+the result. As for @{text"expr"}, addresses and values are type parameters:
 *}
 
 datatype ('a,'v) instr = Const 'v
@@ -44,7 +44,7 @@
 
 text{*
 The execution of the stack machine is modelled by a function
-\isa{exec} that takes a list of instructions, a store (modelled as a
+@{text"exec"} that takes a list of instructions, a store (modelled as a
 function from addresses to values, just like the environment for
 evaluating expressions), and a stack (modelled as a list) of values,
 and returns the stack at the end of the execution---the store remains
@@ -60,12 +60,12 @@
   | Apply f  \\<Rightarrow> exec is s ((f (hd vs) (hd(tl vs)))#(tl(tl vs))))";
 
 text{*\noindent
-Recall that \isa{hd} and \isa{tl}
+Recall that @{term"hd"} and @{term"tl"}
 return the first element and the remainder of a list.
-Because all functions are total, \isa{hd} is defined even for the empty
+Because all functions are total, @{term"hd"} is defined even for the empty
 list, although we do not know what the result is. Thus our model of the
 machine always terminates properly, although the above definition does not
-tell us much about the result in situations where \isa{Apply} was executed
+tell us much about the result in situations where @{term"Apply"} was executed
 with fewer than two elements on the stack.
 
 The compiler is a function from expressions to a list of instructions. Its
@@ -91,7 +91,7 @@
 theorem "\\<forall>vs. exec (comp e) s vs = (value e s) # vs";
 
 txt{*\noindent
-which is proved by induction on \isa{e} followed by simplification, once
+which is proved by induction on @{term"e"} followed by simplification, once
 we have the following lemma about executing the concatenation of two
 instruction sequences:
 *}
@@ -100,16 +100,16 @@
   "\\<forall>vs. exec (xs@ys) s vs = exec ys s (exec xs s vs)"; 
 
 txt{*\noindent
-This requires induction on \isa{xs} and ordinary simplification for the
+This requires induction on @{term"xs"} and ordinary simplification for the
 base cases. In the induction step, simplification leaves us with a formula
-that contains two \isa{case}-expressions over instructions. Thus we add
+that contains two @{text"case"}-expressions over instructions. Thus we add
 automatic case splitting as well, which finishes the proof:
 *}
 by(induct_tac xs, simp, simp split: instr.split);
 
 text{*\noindent
 Note that because \isaindex{auto} performs simplification, it can
-also be modified in the same way \isa{simp} can. Thus the proof can be
+also be modified in the same way @{text"simp"} can. Thus the proof can be
 rewritten as
 *}
 (*<*)
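
The hunk stops just before the tutorial shows the shortened proof announced in the last paragraph. As a sketch of what that @{text"auto"} variant plausibly looks like (the exact script is an assumption and not part of this changeset; only names occurring in CodeGen.thy are used):

lemma exec_app[simp]:
  "\\<forall>vs. exec (xs@ys) s vs = exec ys s (exec xs s vs)";
by(induct_tac xs, auto split: instr.split);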
--- a/doc-src/TutorialI/Datatype/ABexpr.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Datatype/ABexpr.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -26,9 +26,9 @@
                  | Neg  "'a bexp";
 
 text{*\noindent
-Type \isa{aexp} is similar to \isa{expr} in \S\ref{sec:ExprCompiler},
-except that we have fixed the values to be of type \isa{nat} and that we
-have fixed the two binary operations \isa{Sum} and \isa{Diff}. Boolean
+Type @{text"aexp"} is similar to @{text"expr"} in \S\ref{sec:ExprCompiler},
+except that we have fixed the values to be of type @{typ"nat"} and that we
+have fixed the two binary operations @{term"Sum"} and @{term"Diff"}. Boolean
 expressions can be arithmetic comparisons, conjunctions and negations.
 The semantics is fixed via two evaluation functions
 *}
@@ -37,8 +37,8 @@
         evalb :: "'a bexp \\<Rightarrow> ('a \\<Rightarrow> nat) \\<Rightarrow> bool";
 
 text{*\noindent
-that take an expression and an environment (a mapping from variables \isa{'a} to values
-\isa{nat}) and return its arithmetic/boolean
+that take an expression and an environment (a mapping from variables @{typ"'a"} to values
+@{typ"nat"}) and return its arithmetic/boolean
 value. Since the datatypes are mutually recursive, so are functions that
 operate on them. Hence they need to be defined in a single \isacommand{primrec}
 section:
@@ -66,8 +66,8 @@
 text{*\noindent
 The first argument is a function mapping variables to expressions, the
 substitution. It is applied to all variables in the second argument. As a
-result, the type of variables in the expression may change from \isa{'a}
-to \isa{'b}. Note that there are only arithmetic and no boolean variables.
+result, the type of variables in the expression may change from @{typ"'a"}
+to @{typ"'b"}. Note that there are only arithmetic and no boolean variables.
 *}
 
 primrec
@@ -108,17 +108,17 @@
 an inductive proof expects a goal of the form
 \[ P@1(x@1)\ \land \dots \land P@n(x@n) \]
 where each variable $x@i$ is of type $\tau@i$. Induction is started by
-\begin{ttbox}
-apply(induct_tac \(x@1\) \texttt{and} \(\dots\) \texttt{and} \(x@n\));
-\end{ttbox}
+\begin{isabelle}
+\isacommand{apply}@{text"(induct_tac"} $x@1$ @{text"and"} \dots\ @{text"and"} $x@n$@{text")"}
+\end{isabelle}
 
 \begin{exercise}
-  Define a function \isa{norma} of type @{typ"'a aexp => 'a aexp"} that
-  replaces \isa{IF}s with complex boolean conditions by nested
-  \isa{IF}s where each condition is a \isa{Less} --- \isa{And} and
-  \isa{Neg} should be eliminated completely. Prove that \isa{norma}
-  preserves the value of an expression and that the result of \isa{norma}
-  is really normal, i.e.\ no more \isa{And}s and \isa{Neg}s occur in
+  Define a function @{text"norma"} of type @{typ"'a aexp => 'a aexp"} that
+  replaces @{term"IF"}s with complex boolean conditions by nested
+  @{term"IF"}s where each condition is a @{term"Less"} --- @{term"And"} and
+  @{term"Neg"} should be eliminated completely. Prove that @{text"norma"}
+  preserves the value of an expression and that the result of @{text"norma"}
+  is really normal, i.e.\ no more @{term"And"}s and @{term"Neg"}s occur in
   it.  ({\em Hint:} proceed as in \S\ref{sec:boolex}).
 \end{exercise}
 *}
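
As a concrete instance of the @{text"induct_tac"}/@{text"and"} syntax just described, here is a sketch of a simultaneous statement about both evaluation functions; the names @{text"evala"}, @{text"evalb"}, @{text"substa"} and @{text"substb"} come from this theory, but the exact lemma and its one-line proof are an assumption rather than part of this changeset:

lemma "evala (substa s a) env = evala a (\\<lambda>x. evala (s x) env) \\<and>
       evalb (substb s b) env = evalb b (\\<lambda>x. evala (s x) env)";
by(induct_tac a and b, simp_all);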
--- a/doc-src/TutorialI/Datatype/Fundata.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Datatype/Fundata.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -3,11 +3,12 @@
 (*>*)
 datatype ('a,'i)bigtree = Tip | Branch 'a "'i \\<Rightarrow> ('a,'i)bigtree"
 
-text{*\noindent Parameter \isa{'a} is the type of values stored in
-the \isa{Branch}es of the tree, whereas \isa{'i} is the index
-type over which the tree branches. If \isa{'i} is instantiated to
-\isa{bool}, the result is a binary tree; if it is instantiated to
-\isa{nat}, we have an infinitely branching tree because each node
+text{*\noindent
+Parameter @{typ"'a"} is the type of values stored in
+the @{term"Branch"}es of the tree, whereas @{typ"'i"} is the index
+type over which the tree branches. If @{typ"'i"} is instantiated to
+@{typ"bool"}, the result is a binary tree; if it is instantiated to
+@{typ"nat"}, we have an infinitely branching tree because each node
 has as many subtrees as there are natural numbers. How can we possibly
 write down such a tree? Using functional notation! For example, the term
 \begin{quote}
@@ -15,9 +16,9 @@
 \end{quote}
 of type @{typ"(nat,nat)bigtree"} is the tree whose
 root is labeled with 0 and whose $i$th subtree is labeled with $i$ and
-has merely \isa{Tip}s as further subtrees.
+has merely @{term"Tip"}s as further subtrees.
 
-Function \isa{map_bt} applies a function to all labels in a \isa{bigtree}:
+Function @{term"map_bt"} applies a function to all labels in a @{text"bigtree"}:
 *}
 
 consts map_bt :: "('a \\<Rightarrow> 'b) \\<Rightarrow> ('a,'i)bigtree \\<Rightarrow> ('b,'i)bigtree"
@@ -26,12 +27,12 @@
 "map_bt f (Branch a F) = Branch (f a) (\\<lambda>i. map_bt f (F i))"
 
 text{*\noindent This is a valid \isacommand{primrec} definition because the
-recursive calls of \isa{map_bt} involve only subtrees obtained from
-\isa{F}, i.e.\ the left-hand side. Thus termination is assured.  The
+recursive calls of @{term"map_bt"} involve only subtrees obtained from
+@{term"F"}, i.e.\ the left-hand side. Thus termination is assured.  The
 seasoned functional programmer might have written @{term"map_bt f o F"}
 instead of @{term"%i. map_bt f (F i)"}, but the former is not accepted by
 Isabelle because the termination proof is not as obvious since
-\isa{map_bt} is only partially applied.
+@{term"map_bt"} is only partially applied.
 
 The following lemma has a canonical proof  *}
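
The ``canonical proof'' referred to here is cut off by the hunk; presumably it concerns the usual composition property of @{term"map_bt"}. A sketch, reconstructed from context and not part of this changeset:

lemma "map_bt (g o f) T = map_bt g (map_bt f T)";
by(induct_tac T, simp_all);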
 
--- a/doc-src/TutorialI/Datatype/Nested.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Datatype/Nested.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -1,5 +1,5 @@
 (*<*)
-theory Nested = Main:;
+theory Nested = ABexpr:
 (*>*)
 
 text{*
@@ -12,18 +12,18 @@
 datatype ('a,'b)"term" = Var 'a | App 'b "('a,'b)term list";
 
 text{*\noindent
-Note that we need to quote \isa{term} on the left to avoid confusion with
+Note that we need to quote @{text"term"} on the left to avoid confusion with
 the command \isacommand{term}.
-Parameter \isa{'a} is the type of variables and \isa{'b} the type of
+Parameter @{typ"'a"} is the type of variables and @{typ"'b"} the type of
 function symbols.
 A mathematical term like $f(x,g(y))$ becomes @{term"App f [Var x, App g
-  [Var y]]"}, where \isa{f}, \isa{g}, \isa{x}, \isa{y} are
+  [Var y]]"}, where @{term"f"}, @{term"g"}, @{term"x"}, @{term"y"} are
 suitable values, e.g.\ numbers or strings.
 
-What complicates the definition of \isa{term} is the nested occurrence of
-\isa{term} inside \isa{list} on the right-hand side. In principle,
+What complicates the definition of @{text"term"} is the nested occurrence of
+@{text"term"} inside @{text"list"} on the right-hand side. In principle,
 nested recursion can be eliminated in favour of mutual recursion by unfolding
-the offending datatypes, here \isa{list}. The result for \isa{term}
+the offending datatypes, here @{text"list"}. The result for @{text"term"}
 would be something like
 \medskip
 
@@ -33,7 +33,7 @@
 \noindent
 Although we do not recommend this unfolding to the user, it shows how to
 simulate nested recursion by mutual recursion.
-Now we return to the initial definition of \isa{term} using
+Now we return to the initial definition of @{text"term"} using
 nested recursion.
 
 Let us define a substitution function on terms. Because terms involve term
@@ -53,7 +53,7 @@
   "substs s (t # ts) = subst s t # substs s ts";
 
 text{*\noindent
-You should ignore the label \isa{subst\_App} for the moment.
+You should ignore the label @{thm[source]subst_App} for the moment.
 
 Similarly, when proving a statement about terms inductively, we need
 to prove a related statement about term lists simultaneously. For example,
@@ -66,39 +66,39 @@
 by(induct_tac t and ts, simp_all);
 
 text{*\noindent
-Note that \isa{Var} is the identity substitution because by definition it
-leaves variables unchanged: @{term"subst Var (Var x) = Var x"}. Note also
+Note that @{term"Var"} is the identity substitution because by definition it
+leaves variables unchanged: @{prop"subst Var (Var x) = Var x"}. Note also
 that the type annotations are necessary because otherwise there is nothing in
 the goal to enforce that both halves of the goal talk about the same type
-parameters \isa{('a,'b)}. As a result, induction would fail
+parameters @{text"('a,'b)"}. As a result, induction would fail
 because the two halves of the goal would be unrelated.
 
 \begin{exercise}
 The fact that substitution distributes over composition can be expressed
 roughly as follows:
-\begin{ttbox}
-subst (f o g) t = subst f (subst g t)
-\end{ttbox}
+\begin{quote}
+@{text[display]"subst (f o g) t = subst f (subst g t)"}
+\end{quote}
 Correct this statement (you will find that it does not type-check),
-strengthen it, and prove it. (Note: \ttindexbold{o} is function composition;
-its definition is found in theorem \isa{o_def}).
+strengthen it, and prove it. (Note: \isaindexbold{o} is function composition;
+its definition is found in theorem @{thm[source]o_def}).
 \end{exercise}
 \begin{exercise}\label{ex:trev-trev}
-  Define a function \isa{trev} of type @{typ"('a,'b)term => ('a,'b)term"} that
-  recursively reverses the order of arguments of all function symbols in a
-  term. Prove that \isa{trev(trev t) = t}.
+  Define a function @{text"trev"} of type @{typ"('a,'b)term => ('a,'b)term"}
+that recursively reverses the order of arguments of all function symbols in a
+  term. Prove that @{prop"trev(trev t) = t"}.
 \end{exercise}
 
 The experienced functional programmer may feel that our above definition of
-\isa{subst} is unnecessarily complicated in that \isa{substs} is completely
-unnecessary. The @{term"App"}-case can be defined directly as
+@{term"subst"} is unnecessarily complicated in that @{term"substs"} is
+completely unnecessary. The @{term"App"}-case can be defined directly as
 \begin{quote}
 @{term[display]"subst s (App f ts) = App f (map (subst s) ts)"}
 \end{quote}
 where @{term"map"} is the standard list function such that
-\isa{map f [x1,...,xn] = [f x1,...,f xn]}. This is true, but Isabelle insists
-on the above fixed format. Fortunately, we can easily \emph{prove} that the
-suggested equation holds:
+@{text"map f [x1,...,xn] = [f x1,...,f xn]"}. This is true, but Isabelle
+insists on the above fixed format. Fortunately, we can easily \emph{prove}
+that the suggested equation holds:
 *}
 
 lemma [simp]: "subst s (App f ts) = App f (map (subst s) ts)"
@@ -115,15 +115,15 @@
 The advantage is that now we have replaced @{term"substs"} by
 @{term"map"}, we can profit from the large number of pre-proved lemmas
 about @{term"map"}.  Unfortunately inductive proofs about type
-\isa{term} are still awkward because they expect a conjunction. One
+@{text"term"} are still awkward because they expect a conjunction. One
 could derive a new induction principle as well (see
 \S\ref{sec:derive-ind}), but turns out to be simpler to define
 functions by \isacommand{recdef} instead of \isacommand{primrec}.
 The details are explained in \S\ref{sec:advanced-recdef} below.
 
 Of course, you may also combine mutual and nested recursion. For example,
-constructor \isa{Sum} in \S\ref{sec:datatype-mut-rec} could take a list of
-expressions as its argument: \isa{Sum "'a aexp list"}.
+constructor @{text"Sum"} in \S\ref{sec:datatype-mut-rec} could take a list of
+expressions as its argument: @{text"Sum"}~@{typ[quotes]"'a aexp list"}.
 *}
 (*<*)
 end
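
For the composition exercise above, one plausible corrected and strengthened statement, with the canonical mutual-induction proof (a sketch assuming the types of @{term"subst"} and @{term"substs"} given in this theory; not part of this changeset):

lemma "subst (subst f o g) t = subst f (subst g t) \\<and>
       substs (subst f o g) ts = substs f (substs g ts)";
by(induct_tac t and ts, simp_all);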
--- a/doc-src/TutorialI/Datatype/document/ABexpr.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Datatype/document/ABexpr.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -26,7 +26,7 @@
 \noindent
 Type \isa{aexp} is similar to \isa{expr} in \S\ref{sec:ExprCompiler},
 except that we have fixed the values to be of type \isa{nat} and that we
-have fixed the two binary operations \isa{Sum} and \isa{Diff}. Boolean
+have fixed the two binary operations \isa{aexp{\isachardot}Sum} and \isa{Diff}. Boolean
 expressions can be arithmetic comparisons, conjunctions and negations.
 The semantics is fixed via two evaluation functions%
 \end{isamarkuptext}%
@@ -34,7 +34,7 @@
 \ \ \ \ \ \ \ \ evalb\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharprime}a\ bexp\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ nat{\isacharparenright}\ {\isasymRightarrow}\ bool{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-that take an expression and an environment (a mapping from variables \isa{'a} to values
+that take an expression and an environment (a mapping from variables \isa{{\isacharprime}a} to values
 \isa{nat}) and return its arithmetic/boolean
 value. Since the datatypes are mutually recursive, so are functions that
 operate on them. Hence they need to be defined in a single \isacommand{primrec}
@@ -61,8 +61,8 @@
 \noindent
 The first argument is a function mapping variables to expressions, the
 substitution. It is applied to all variables in the second argument. As a
-result, the type of variables in the expression may change from \isa{'a}
-to \isa{'b}. Note that there are only arithmetic and no boolean variables.%
+result, the type of variables in the expression may change from \isa{{\isacharprime}a}
+to \isa{{\isacharprime}b}. Note that there are only arithmetic and no boolean variables.%
 \end{isamarkuptext}%
 \isacommand{primrec}\isanewline
 \ \ {\isachardoublequote}substa\ s\ {\isacharparenleft}IF\ b\ a\isadigit{1}\ a\isadigit{2}{\isacharparenright}\ {\isacharequal}\isanewline
@@ -98,12 +98,12 @@
 an inductive proof expects a goal of the form
 \[ P@1(x@1)\ \land \dots \land P@n(x@n) \]
 where each variable $x@i$ is of type $\tau@i$. Induction is started by
-\begin{ttbox}
-apply(induct_tac \(x@1\) \texttt{and} \(\dots\) \texttt{and} \(x@n\));
-\end{ttbox}
+\begin{isabelle}
+\isacommand{apply}\isa{{\isacharparenleft}induct{\isacharunderscore}tac} $x@1$ \isa{and} \dots\ \isa{and} $x@n$\isa{{\isacharparenright}}
+\end{isabelle}
 
 \begin{exercise}
-  Define a function \isa{norma} of type \isa{\mbox{{\isacharprime}a}\ aexp\ {\isasymRightarrow}\ \mbox{{\isacharprime}a}\ aexp} that
+  Define a function \isa{norma} of type \isa{{\isacharprime}a\ aexp\ {\isasymRightarrow}\ {\isacharprime}a\ aexp} that
   replaces \isa{IF}s with complex boolean conditions by nested
   \isa{IF}s where each condition is a \isa{Less} --- \isa{And} and
   \isa{Neg} should be eliminated completely. Prove that \isa{norma}
--- a/doc-src/TutorialI/Datatype/document/Fundata.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Datatype/document/Fundata.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -2,9 +2,10 @@
 \begin{isabellebody}%
 \isacommand{datatype}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}i{\isacharparenright}bigtree\ {\isacharequal}\ Tip\ {\isacharbar}\ Branch\ {\isacharprime}a\ {\isachardoublequote}{\isacharprime}i\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}i{\isacharparenright}bigtree{\isachardoublequote}%
 \begin{isamarkuptext}%
-\noindent Parameter \isa{'a} is the type of values stored in
-the \isa{Branch}es of the tree, whereas \isa{'i} is the index
-type over which the tree branches. If \isa{'i} is instantiated to
+\noindent
+Parameter \isa{{\isacharprime}a} is the type of values stored in
+the \isa{Branch}es of the tree, whereas \isa{{\isacharprime}i} is the index
+type over which the tree branches. If \isa{{\isacharprime}i} is instantiated to
 \isa{bool}, the result is a binary tree; if it is instantiated to
 \isa{nat}, we have an infinitely branching tree because each node
 has as many subtrees as there are natural numbers. How can we possibly
@@ -12,7 +13,7 @@
 \begin{quote}
 
 \begin{isabelle}%
-Branch\ \isadigit{0}\ {\isacharparenleft}{\isasymlambda}\mbox{i}{\isachardot}\ Branch\ \mbox{i}\ {\isacharparenleft}{\isasymlambda}\mbox{n}{\isachardot}\ Tip{\isacharparenright}{\isacharparenright}
+Branch\ \isadigit{0}\ {\isacharparenleft}{\isasymlambda}i{\isachardot}\ Branch\ i\ {\isacharparenleft}{\isasymlambda}n{\isachardot}\ Tip{\isacharparenright}{\isacharparenright}
 \end{isabelle}%
 
 \end{quote}
@@ -20,7 +21,7 @@
 root is labeled with 0 and whose $i$th subtree is labeled with $i$ and
 has merely \isa{Tip}s as further subtrees.
 
-Function \isa{map_bt} applies a function to all labels in a \isa{bigtree}:%
+Function \isa{map{\isacharunderscore}bt} applies a function to all labels in a \isa{bigtree}:%
 \end{isamarkuptext}%
 \isacommand{consts}\ map{\isacharunderscore}bt\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}b{\isacharparenright}\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}i{\isacharparenright}bigtree\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}b{\isacharcomma}{\isacharprime}i{\isacharparenright}bigtree{\isachardoublequote}\isanewline
 \isacommand{primrec}\isanewline
@@ -28,12 +29,12 @@
 {\isachardoublequote}map{\isacharunderscore}bt\ f\ {\isacharparenleft}Branch\ a\ F{\isacharparenright}\ {\isacharequal}\ Branch\ {\isacharparenleft}f\ a{\isacharparenright}\ {\isacharparenleft}{\isasymlambda}i{\isachardot}\ map{\isacharunderscore}bt\ f\ {\isacharparenleft}F\ i{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent This is a valid \isacommand{primrec} definition because the
-recursive calls of \isa{map_bt} involve only subtrees obtained from
+recursive calls of \isa{map{\isacharunderscore}bt} involve only subtrees obtained from
 \isa{F}, i.e.\ the left-hand side. Thus termination is assured.  The
-seasoned functional programmer might have written \isa{map{\isacharunderscore}bt\ \mbox{f}\ {\isasymcirc}\ \mbox{F}}
-instead of \isa{{\isasymlambda}\mbox{i}{\isachardot}\ map{\isacharunderscore}bt\ \mbox{f}\ {\isacharparenleft}\mbox{F}\ \mbox{i}{\isacharparenright}}, but the former is not accepted by
+seasoned functional programmer might have written \isa{map{\isacharunderscore}bt\ f\ {\isasymcirc}\ F}
+instead of \isa{{\isasymlambda}i{\isachardot}\ map{\isacharunderscore}bt\ f\ {\isacharparenleft}F\ i{\isacharparenright}}, but the former is not accepted by
 Isabelle because the termination proof is not as obvious since
-\isa{map_bt} is only partially applied.
+\isa{map{\isacharunderscore}bt} is only partially applied.
 
 The following lemma has a canonical proof%
 \end{isamarkuptext}%
--- a/doc-src/TutorialI/Datatype/document/Nested.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Datatype/document/Nested.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -12,9 +12,9 @@
 \noindent
 Note that we need to quote \isa{term} on the left to avoid confusion with
 the command \isacommand{term}.
-Parameter \isa{'a} is the type of variables and \isa{'b} the type of
+Parameter \isa{{\isacharprime}a} is the type of variables and \isa{{\isacharprime}b} the type of
 function symbols.
-A mathematical term like $f(x,g(y))$ becomes \isa{App\ \mbox{f}\ {\isacharbrackleft}Var\ \mbox{x}{\isacharcomma}\ App\ \mbox{g}\ {\isacharbrackleft}Var\ \mbox{y}{\isacharbrackright}{\isacharbrackright}}, where \isa{f}, \isa{g}, \isa{x}, \isa{y} are
+A mathematical term like $f(x,g(y))$ becomes \isa{App\ f\ {\isacharbrackleft}term{\isachardot}Var\ x{\isacharcomma}\ App\ g\ {\isacharbrackleft}term{\isachardot}Var\ y{\isacharbrackright}{\isacharbrackright}}, where \isa{f}, \isa{g}, \isa{x}, \isa{y} are
 suitable values, e.g.\ numbers or strings.
 
 What complicates the definition of \isa{term} is the nested occurrence of
@@ -49,7 +49,7 @@
 \ \ {\isachardoublequote}substs\ s\ {\isacharparenleft}t\ {\isacharhash}\ ts{\isacharparenright}\ {\isacharequal}\ subst\ s\ t\ {\isacharhash}\ substs\ s\ ts{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-You should ignore the label \isa{subst\_App} for the moment.
+You should ignore the label \isa{subst{\isacharunderscore}App} for the moment.
 
 Similarly, when proving a statement about terms inductively, we need
 to prove a related statement about term lists simultaneously. For example,
@@ -61,43 +61,47 @@
 \isacommand{by}{\isacharparenleft}induct{\isacharunderscore}tac\ t\ \isakeyword{and}\ ts{\isacharcomma}\ simp{\isacharunderscore}all{\isacharparenright}%
 \begin{isamarkuptext}%
 \noindent
-Note that \isa{Var} is the identity substitution because by definition it
-leaves variables unchanged: \isa{subst\ Var\ {\isacharparenleft}Var\ \mbox{x}{\isacharparenright}\ {\isacharequal}\ Var\ \mbox{x}}. Note also
+Note that \isa{term{\isachardot}Var} is the identity substitution because by definition it
+leaves variables unchanged: \isa{subst\ term{\isachardot}Var\ {\isacharparenleft}term{\isachardot}Var\ x{\isacharparenright}\ {\isacharequal}\ term{\isachardot}Var\ x}. Note also
 that the type annotations are necessary because otherwise there is nothing in
 the goal to enforce that both halves of the goal talk about the same type
-parameters \isa{('a,'b)}. As a result, induction would fail
+parameters \isa{{\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}b{\isacharparenright}}. As a result, induction would fail
 because the two halves of the goal would be unrelated.
 
 \begin{exercise}
 The fact that substitution distributes over composition can be expressed
 roughly as follows:
-\begin{ttbox}
-subst (f o g) t = subst f (subst g t)
-\end{ttbox}
+\begin{quote}
+
+\begin{isabelle}%
+subst\ {\isacharparenleft}f\ o\ g{\isacharparenright}\ t\ {\isacharequal}\ subst\ f\ {\isacharparenleft}subst\ g\ t{\isacharparenright}
+\end{isabelle}%
+
+\end{quote}
 Correct this statement (you will find that it does not type-check),
-strengthen it, and prove it. (Note: \ttindexbold{o} is function composition;
-its definition is found in theorem \isa{o_def}).
+strengthen it, and prove it. (Note: \isaindexbold{o} is function composition;
+its definition is found in theorem \isa{o{\isacharunderscore}def}).
 \end{exercise}
 \begin{exercise}\label{ex:trev-trev}
-  Define a function \isa{trev} of type \isa{{\isacharparenleft}\mbox{{\isacharprime}a}{\isacharcomma}\ \mbox{{\isacharprime}b}{\isacharparenright}\ term\ {\isasymRightarrow}\ {\isacharparenleft}\mbox{{\isacharprime}a}{\isacharcomma}\ \mbox{{\isacharprime}b}{\isacharparenright}\ term} that
-  recursively reverses the order of arguments of all function symbols in a
-  term. Prove that \isa{trev(trev t) = t}.
+  Define a function \isa{trev} of type \isa{{\isacharparenleft}{\isacharprime}a{\isacharcomma}\ {\isacharprime}b{\isacharparenright}\ term\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}\ {\isacharprime}b{\isacharparenright}\ term}
+that recursively reverses the order of arguments of all function symbols in a
+  term. Prove that \isa{trev\ {\isacharparenleft}trev\ t{\isacharparenright}\ {\isacharequal}\ t}.
 \end{exercise}
 
 The experienced functional programmer may feel that our above definition of
-\isa{subst} is unnecessarily complicated in that \isa{substs} is completely
-unnecessary. The \isa{App}-case can be defined directly as
+\isa{subst} is unnecessarily complicated in that \isa{substs} is
+completely unnecessary. The \isa{App}-case can be defined directly as
 \begin{quote}
 
 \begin{isabelle}%
-subst\ \mbox{s}\ {\isacharparenleft}App\ \mbox{f}\ \mbox{ts}{\isacharparenright}\ {\isacharequal}\ App\ \mbox{f}\ {\isacharparenleft}map\ {\isacharparenleft}subst\ \mbox{s}{\isacharparenright}\ \mbox{ts}{\isacharparenright}
+subst\ s\ {\isacharparenleft}App\ f\ ts{\isacharparenright}\ {\isacharequal}\ App\ f\ {\isacharparenleft}map\ {\isacharparenleft}subst\ s{\isacharparenright}\ ts{\isacharparenright}
 \end{isabelle}%
 
 \end{quote}
 where \isa{map} is the standard list function such that
-\isa{map f [x1,...,xn] = [f x1,...,f xn]}. This is true, but Isabelle insists
-on the above fixed format. Fortunately, we can easily \emph{prove} that the
-suggested equation holds:%
+\isa{map\ f\ {\isacharbrackleft}x\isadigit{1}{\isacharcomma}{\isachardot}{\isachardot}{\isachardot}{\isacharcomma}xn{\isacharbrackright}\ {\isacharequal}\ {\isacharbrackleft}f\ x\isadigit{1}{\isacharcomma}{\isachardot}{\isachardot}{\isachardot}{\isacharcomma}f\ xn{\isacharbrackright}}. This is true, but Isabelle
+insists on the above fixed format. Fortunately, we can easily \emph{prove}
+that the suggested equation holds:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}subst\ s\ {\isacharparenleft}App\ f\ ts{\isacharparenright}\ {\isacharequal}\ App\ f\ {\isacharparenleft}map\ {\isacharparenleft}subst\ s{\isacharparenright}\ ts{\isacharparenright}{\isachardoublequote}\isanewline
 \isacommand{by}{\isacharparenleft}induct{\isacharunderscore}tac\ ts{\isacharcomma}\ simp{\isacharunderscore}all{\isacharparenright}%
@@ -120,7 +124,7 @@
 
 Of course, you may also combine mutual and nested recursion. For example,
 constructor \isa{Sum} in \S\ref{sec:datatype-mut-rec} could take a list of
-expressions as its argument: \isa{Sum "'a aexp list"}.%
+expressions as its argument: \isa{Sum}~\isa{{\isachardoublequote}{\isacharprime}a\ aexp\ list{\isachardoublequote}}.%
 \end{isamarkuptext}%
 \end{isabellebody}%
 %%% Local Variables:
--- a/doc-src/TutorialI/Ifexpr/Ifexpr.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Ifexpr/Ifexpr.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -16,16 +16,16 @@
 text{*\noindent
 The two constants are represented by @{term"Const True"} and
 @{term"Const False"}. Variables are represented by terms of the form
-@{term"Var n"}, where @{term"n"} is a natural number (type \isa{nat}).
+@{term"Var n"}, where @{term"n"} is a natural number (type @{typ"nat"}).
 For example, the formula $P@0 \land \neg P@1$ is represented by the term
 @{term"And (Var 0) (Neg(Var 1))"}.
 
 \subsubsection{What is the value of a boolean expression?}
 
 The value of a boolean expression depends on the value of its variables.
-Hence the function \isa{value} takes an additional parameter, an {\em
-  environment} of type @{typ"nat => bool"}, which maps variables to
-their values:
+Hence the function @{text"value"} takes an additional parameter, an
+\emph{environment} of type @{typ"nat => bool"}, which maps variables to their
+values:
 *}
 
 consts value :: "boolex \\<Rightarrow> (nat \\<Rightarrow> bool) \\<Rightarrow> bool";
@@ -40,14 +40,14 @@
 
 An alternative and often more efficient (because in a certain sense
 canonical) representation are so-called \emph{If-expressions} built up
-from constants (\isa{CIF}), variables (\isa{VIF}) and conditionals
-(\isa{IF}):
+from constants (@{term"CIF"}), variables (@{term"VIF"}) and conditionals
+(@{term"IF"}):
 *}
 
 datatype ifex = CIF bool | VIF nat | IF ifex ifex ifex;
 
 text{*\noindent
-The evaluation if If-expressions proceeds as for \isa{boolex}:
+The evaluation of If-expressions proceeds as for @{typ"boolex"}:
 *}
 
 consts valif :: "ifex \\<Rightarrow> (nat \\<Rightarrow> bool) \\<Rightarrow> bool";
@@ -60,9 +60,9 @@
 text{*
 \subsubsection{Transformation into and of If-expressions}
 
-The type \isa{boolex} is close to the customary representation of logical
-formulae, whereas \isa{ifex} is designed for efficiency. It is easy to
-translate from \isa{boolex} into \isa{ifex}:
+The type @{typ"boolex"} is close to the customary representation of logical
+formulae, whereas @{typ"ifex"} is designed for efficiency. It is easy to
+translate from @{typ"boolex"} into @{typ"ifex"}:
 *}
 
 consts bool2if :: "boolex \\<Rightarrow> ifex";
@@ -73,7 +73,7 @@
 "bool2if (And b c) = IF (bool2if b) (bool2if c) (CIF False)";
 
 text{*\noindent
-At last, we have something we can verify: that \isa{bool2if} preserves the
+At last, we have something we can verify: that @{term"bool2if"} preserves the
 value of its argument:
 *}
 
@@ -91,7 +91,7 @@
 not show them below.
 
 More interesting is the transformation of If-expressions into a normal form
-where the first argument of \isa{IF} cannot be another \isa{IF} but
+where the first argument of @{term"IF"} cannot be another @{term"IF"} but
 must be a constant or variable. Such a normal form can be computed by
 repeatedly replacing a subterm of the form @{term"IF (IF b x y) z u"} by
 @{term"IF b (IF x z u) (IF y z u)"}, which has the same value. The following
@@ -120,7 +120,7 @@
 
 text{*\noindent
 The proof is canonical, provided we first show the following simplification
-lemma (which also helps to understand what \isa{normif} does):
+lemma (which also helps to understand what @{term"normif"} does):
 *}
 
 lemma [simp]:
@@ -135,9 +135,9 @@
 (*>*)
 text{*\noindent
 Note that the lemma does not have a name, but is implicitly used in the proof
-of the theorem shown above because of the \isa{[simp]} attribute.
+of the theorem shown above because of the @{text"[simp]"} attribute.
 
-But how can we be sure that \isa{norm} really produces a normal form in
+But how can we be sure that @{term"norm"} really produces a normal form in
 the above sense? We define a function that tests If-expressions for normality
 *}
 
@@ -149,8 +149,8 @@
      (case b of CIF b \\<Rightarrow> True | VIF x \\<Rightarrow> True | IF x y z \\<Rightarrow> False))";
 
 text{*\noindent
-and prove \isa{normal(norm b)}. Of course, this requires a lemma about
-normality of \isa{normif}:
+and prove @{term"normal(norm b)"}. Of course, this requires a lemma about
+normality of @{term"normif"}:
 *}
 
 lemma[simp]: "\\<forall>t e. normal(normif b t e) = (normal t \\<and> normal e)";
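
The hunk breaks off before the proofs; a sketch of how the normality argument presumably concludes (both proof scripts are assumptions, not shown in this changeset):

by(induct_tac b, auto);

theorem "normal(norm b)";
by(induct_tac b, auto);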
--- a/doc-src/TutorialI/Ifexpr/document/Ifexpr.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Ifexpr/document/Ifexpr.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -14,16 +14,16 @@
 \noindent
 The two constants are represented by \isa{Const\ True} and
 \isa{Const\ False}. Variables are represented by terms of the form
-\isa{Var\ \mbox{n}}, where \isa{\mbox{n}} is a natural number (type \isa{nat}).
+\isa{Var\ n}, where \isa{n} is a natural number (type \isa{nat}).
 For example, the formula $P@0 \land \neg P@1$ is represented by the term
 \isa{And\ {\isacharparenleft}Var\ \isadigit{0}{\isacharparenright}\ {\isacharparenleft}Neg\ {\isacharparenleft}Var\ \isadigit{1}{\isacharparenright}{\isacharparenright}}.
 
 \subsubsection{What is the value of a boolean expression?}
 
 The value of a boolean expression depends on the value of its variables.
-Hence the function \isa{value} takes an additional parameter, an {\em
-  environment} of type \isa{nat\ {\isasymRightarrow}\ bool}, which maps variables to
-their values:%
+Hence the function \isa{value} takes an additional parameter, an
+\emph{environment} of type \isa{nat\ {\isasymRightarrow}\ bool}, which maps variables to their
+values:%
 \end{isamarkuptext}%
 \isacommand{consts}\ value\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}boolex\ {\isasymRightarrow}\ {\isacharparenleft}nat\ {\isasymRightarrow}\ bool{\isacharparenright}\ {\isasymRightarrow}\ bool{\isachardoublequote}\isanewline
 \isacommand{primrec}\isanewline
@@ -66,7 +66,7 @@
 {\isachardoublequote}bool\isadigit{2}if\ {\isacharparenleft}And\ b\ c{\isacharparenright}\ {\isacharequal}\ IF\ {\isacharparenleft}bool\isadigit{2}if\ b{\isacharparenright}\ {\isacharparenleft}bool\isadigit{2}if\ c{\isacharparenright}\ {\isacharparenleft}CIF\ False{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-At last, we have something we can verify: that \isa{bool2if} preserves the
+At last, we have something we can verify: that \isa{bool\isadigit{2}if} preserves the
 value of its argument:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}valif\ {\isacharparenleft}bool\isadigit{2}if\ b{\isacharparenright}\ env\ {\isacharequal}\ value\ b\ env{\isachardoublequote}%
@@ -84,8 +84,8 @@
 More interesting is the transformation of If-expressions into a normal form
 where the first argument of \isa{IF} cannot be another \isa{IF} but
 must be a constant or variable. Such a normal form can be computed by
-repeatedly replacing a subterm of the form \isa{IF\ {\isacharparenleft}IF\ \mbox{b}\ \mbox{x}\ \mbox{y}{\isacharparenright}\ \mbox{z}\ \mbox{u}} by
-\isa{IF\ \mbox{b}\ {\isacharparenleft}IF\ \mbox{x}\ \mbox{z}\ \mbox{u}{\isacharparenright}\ {\isacharparenleft}IF\ \mbox{y}\ \mbox{z}\ \mbox{u}{\isacharparenright}}, which has the same value. The following
+repeatedly replacing a subterm of the form \isa{IF\ {\isacharparenleft}IF\ b\ x\ y{\isacharparenright}\ z\ u} by
+\isa{IF\ b\ {\isacharparenleft}IF\ x\ z\ u{\isacharparenright}\ {\isacharparenleft}IF\ y\ z\ u{\isacharparenright}}, which has the same value. The following
 primitive recursive functions perform this task:%
 \end{isamarkuptext}%
 \isacommand{consts}\ normif\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}ifex\ {\isasymRightarrow}\ ifex\ {\isasymRightarrow}\ ifex\ {\isasymRightarrow}\ ifex{\isachardoublequote}\isanewline
@@ -116,7 +116,7 @@
 \begin{isamarkuptext}%
 \noindent
 Note that the lemma does not have a name, but is implicitly used in the proof
-of the theorem shown above because of the \isa{[simp]} attribute.
+of the theorem shown above because of the \isa{{\isacharbrackleft}simp{\isacharbrackright}} attribute.
 
 But how can we be sure that \isa{norm} really produces a normal form in
 the above sense? We define a function that tests If-expressions for normality%
@@ -129,7 +129,7 @@
 \ \ \ \ \ {\isacharparenleft}case\ b\ of\ CIF\ b\ {\isasymRightarrow}\ True\ {\isacharbar}\ VIF\ x\ {\isasymRightarrow}\ True\ {\isacharbar}\ IF\ x\ y\ z\ {\isasymRightarrow}\ False{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-and prove \isa{normal(norm b)}. Of course, this requires a lemma about
+and prove \isa{normal\ {\isacharparenleft}norm\ b{\isacharparenright}}. Of course, this requires a lemma about
 normality of \isa{normif}:%
 \end{isamarkuptext}%
 \isacommand{lemma}{\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}{\isasymforall}t\ e{\isachardot}\ normal{\isacharparenleft}normif\ b\ t\ e{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}normal\ t\ {\isasymand}\ normal\ e{\isacharparenright}{\isachardoublequote}\end{isabellebody}%
--- a/doc-src/TutorialI/Misc/AdvancedInd.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/AdvancedInd.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -21,7 +21,7 @@
 apply(induct_tac xs);
 
 txt{*\noindent
-(where \isa{hd} and \isa{last} return the first and last element of a
+(where @{term"hd"} and @{term"last"} return the first and last element of a
 non-empty list)
 produces the warning
 \begin{quote}\tt
@@ -35,16 +35,16 @@
 \begin{isabelle}
 \ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
 \end{isabelle}
-We cannot prove this equality because we do not know what \isa{hd} and
-\isa{last} return when applied to \isa{[]}.
+We cannot prove this equality because we do not know what @{term"hd"} and
+@{term"last"} return when applied to @{term"[]"}.
 
 The point is that we have violated the above warning. Because the induction
-formula is only the conclusion, the occurrence of \isa{xs} in the premises is
+formula is only the conclusion, the occurrence of @{term"xs"} in the premises is
 not modified by induction. Thus the case that should have been trivial
 becomes unprovable. Fortunately, the solution is easy:
 \begin{quote}
 \emph{Pull all occurrences of the induction variable into the conclusion
-using \isa{\isasymlongrightarrow}.}
+using @{text"\<longrightarrow>"}.}
 \end{quote}
 This means we should prove
 *};
@@ -59,12 +59,13 @@
 \begin{isabelle}
 \ 1.\ []\ {\isasymnoteq}\ []\ {\isasymlongrightarrow}\ hd\ (rev\ [])\ =\ last\ []
 \end{isabelle}
-which is trivial, and \isa{auto} finishes the whole proof.
+which is trivial, and @{text"auto"} finishes the whole proof.
 
-If \isa{hd\_rev} is meant to be a simplification rule, you are done. But if you
-really need the \isa{\isasymLongrightarrow}-version of \isa{hd\_rev}, for
-example because you want to apply it as an introduction rule, you need to
-derive it separately, by combining it with modus ponens:
+If @{thm[source]hd_rev} is meant to be a simplification rule, you are
+done. But if you really need the @{text"\<Longrightarrow>"}-version of
+@{thm[source]hd_rev}, for example because you want to apply it as an
+introduction rule, you need to derive it separately, by combining it with
+modus ponens:
 *};
 
 lemmas hd_revI = hd_rev[THEN mp];
@@ -78,7 +79,7 @@
 (see the remark?? in \S\ref{??}).
 Additionally, you may also have to universally quantify some other variables,
 which can yield a fairly complex conclusion.
-Here is a simple example (which is proved by \isa{blast}):
+Here is a simple example (which is proved by @{text"blast"}):
 *};
 
 lemma simple: "\\<forall>y. A y \\<longrightarrow> B y \<longrightarrow> B y & A y";
@@ -86,14 +87,14 @@
 
 text{*\noindent
 You can get the desired lemma by explicit
-application of modus ponens and \isa{spec}:
+application of modus ponens and @{thm[source]spec}:
 *};
 
 lemmas myrule = simple[THEN spec, THEN mp, THEN mp];
 
 text{*\noindent
-or the wholesale stripping of \isa{\isasymforall} and
-\isa{\isasymlongrightarrow} in the conclusion via \isa{rulify} 
+or the wholesale stripping of @{text"\<forall>"} and
+@{text"\<longrightarrow>"} in the conclusion via @{text"rulify"} 
 *};
 
 lemmas myrule = simple[rulify];
@@ -130,10 +131,10 @@
 induction schema. In such cases some existing standard induction schema can
 be helpful. We show how to apply such induction schemas by an example.
 
-Structural induction on \isa{nat} is
+Structural induction on @{typ"nat"} is
 usually known as ``mathematical induction''. There is also ``complete
 induction'', where you must prove $P(n)$ under the assumption that $P(m)$
-holds for all $m<n$. In Isabelle, this is the theorem \isa{less\_induct}:
+holds for all $m<n$. In Isabelle, this is the theorem @{thm[source]less_induct}:
 \begin{quote}
 @{thm[display]"less_induct"[no_vars]}
 \end{quote}
@@ -148,7 +149,7 @@
 discouraged, because of the danger of inconsistencies. The above axiom does
 not introduce an inconsistency because, for example, the identity function
 satisfies it.}
-for \isa{f} it follows that @{term"n <= f n"}, which can
+for @{term"f"} it follows that @{prop"n <= f n"}, which can
 be proved by induction on @{term"f n"}. Following the recipy outlined
 above, we have to phrase the proposition as follows to allow induction:
 *};
@@ -156,7 +157,7 @@
 lemma f_incr_lem: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i";
 
 txt{*\noindent
-To perform induction on \isa{k} using \isa{less\_induct}, we use the same
+To perform induction on @{term"k"} using @{thm[source]less_induct}, we use the same
 general induction method as for recursion induction (see
 \S\ref{sec:recdef-induction}):
 *};
@@ -173,9 +174,9 @@
 \ 1.\ {\isasymAnd}\mbox{n}.\ {\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i})\isanewline
 \ \ \ \ \ \ \ {\isasymLongrightarrow}\ {\isasymforall}\mbox{i}.\ \mbox{n}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i}
 \end{isabelle}
-After stripping the \isa{\isasymforall i}, the proof continues with a case
-distinction on \isa{i}. The case \isa{i = 0} is trivial and we focus on the
-other case:
+After stripping the @{text"\<forall>i"}, the proof continues with a case
+distinction on @{term"i"}. The case @{prop"i = 0"} is trivial and we focus on
+the other case:
 \begin{isabelle}
 \ 1.\ {\isasymAnd}\mbox{n}\ \mbox{i}\ \mbox{nat}.\isanewline
 \ \ \ \ \ \ \ {\isasymlbrakk}{\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i});\ \mbox{i}\ =\ Suc\ \mbox{nat}{\isasymrbrakk}\isanewline
@@ -187,16 +188,16 @@
 
 text{*\noindent
 It is not surprising if you find the last step puzzling.
-The proof goes like this (writing \isa{j} instead of \isa{nat}).
-Since @{term"i = Suc j"} it suffices to show
-@{term"j < f(Suc j)"} (by \isa{Suc\_leI}: @{thm"Suc_leI"[no_vars]}). This is
-proved as follows. From \isa{f\_ax} we have @{term"f (f j) < f (Suc j)"}
-(1) which implies @{term"f j <= f (f j)"} (by the induction hypothesis).
-Using (1) once more we obtain @{term"f j < f(Suc j)"} (2) by transitivity
-(\isa{le_less_trans}: @{thm"le_less_trans"[no_vars]}).
-Using the induction hypothesis once more we obtain @{term"j <= f j"}
-which, together with (2) yields @{term"j < f (Suc j)"} (again by
-\isa{le_less_trans}).
+The proof goes like this (writing @{term"j"} instead of @{typ"nat"}).
+Since @{prop"i = Suc j"} it suffices to show
+@{prop"j < f(Suc j)"} (by @{thm[source]Suc_leI}: @{thm"Suc_leI"[no_vars]}). This is
+proved as follows. From @{thm[source]f_ax} we have @{prop"f (f j) < f (Suc j)"}
+(1) which implies @{prop"f j <= f (f j)"} (by the induction hypothesis).
+Using (1) once more we obtain @{prop"f j < f(Suc j)"} (2) by transitivity
+(@{thm[source]le_less_trans}: @{thm"le_less_trans"[no_vars]}).
+Using the induction hypothesis once more we obtain @{prop"j <= f j"}
+which, together with (2) yields @{prop"j < f (Suc j)"} (again by
+@{thm[source]le_less_trans}).
 
 This last step shows both the power and the danger of automatic proofs: they
 will usually not tell you how the proof goes, because it can be very hard to
@@ -204,14 +205,14 @@
 \S\ref{sec:part2?} introduces a language for writing readable yet concise
 proofs.
 
-We can now derive the desired @{term"i <= f i"} from \isa{f\_incr}:
+We can now derive the desired @{prop"i <= f i"} from @{text"f_incr"}:
 *};
 
 lemmas f_incr = f_incr_lem[rulify, OF refl];
 
 text{*\noindent
-The final \isa{refl} gets rid of the premise \isa{?k = f ?i}. Again, we could
-have included this derivation in the original statement of the lemma:
+The final @{thm[source]refl} gets rid of the premise @{text"?k = f ?i"}. Again,
+we could have included this derivation in the original statement of the lemma:
 *};
 
 lemma f_incr[rulify, OF refl]: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i";
@@ -219,22 +220,23 @@
 
 text{*
 \begin{exercise}
-From the above axiom and lemma for \isa{f} show that \isa{f} is the identity.
+From the above axiom and lemma for @{term"f"} show that @{term"f"} is the
+identity.
 \end{exercise}
 
-In general, \isa{induct\_tac} can be applied with any rule \isa{r}
-whose conclusion is of the form \isa{?P ?x1 \dots ?xn}, in which case the
+In general, @{text"induct_tac"} can be applied with any rule $r$
+whose conclusion is of the form ${?}P~?x@1 \dots ?x@n$, in which case the
 format is
-\begin{ttbox}
-apply(induct_tac y1 ... yn rule: r)
-\end{ttbox}\index{*induct_tac}%
-where \isa{y1}, \dots, \isa{yn} are variables in the first subgoal.
-In fact, \isa{induct\_tac} even allows the conclusion of
-\isa{r} to be an (iterated) conjunction of formulae of the above form, in
+\begin{quote}
+\isacommand{apply}@{text"(induct_tac"} $y@1 \dots y@n$ @{text"rule:"} $r$@{text")"}
+\end{quote}\index{*induct_tac}%
+where $y@1, \dots, y@n$ are variables in the first subgoal.
+In fact, @{text"induct_tac"} even allows the conclusion of
+$r$ to be an (iterated) conjunction of formulae of the above form, in
 which case the application is
-\begin{ttbox}
-apply(induct_tac y1 ... yn and ... and z1 ... zm rule: r)
-\end{ttbox}
+\begin{quote}
+\isacommand{apply}@{text"(induct_tac"} $y@1 \dots y@n$ @{text"and"} \dots\ @{text"and"} $z@1 \dots z@m$ @{text"rule:"} $r$@{text")"}
+\end{quote}
 *};
 
 subsection{*Derivation of new induction schemas*};
@@ -242,18 +244,18 @@
 text{*\label{sec:derive-ind}
 Induction schemas are ordinary theorems and you can derive new ones
 whenever you wish.  This section shows you how to, using the example
-of \isa{less\_induct}. Assume we only have structural induction
+of @{thm[source]less_induct}. Assume we only have structural induction
 available for @{typ"nat"} and want to derive complete induction. This
 requires us to generalize the statement first:
 *};
 
-lemma induct_lem: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) ==> \\<forall>m<n. P m";
+lemma induct_lem: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) \\<Longrightarrow> \\<forall>m<n. P m";
 apply(induct_tac n);
 
 txt{*\noindent
-The base case is trivially true. For the induction step (@{term"m <
-Suc n"}) we distinguish two cases: @{term"m < n"} is true by induction
-hypothesis and @{term"m = n"} follow from the assumption again using
+The base case is trivially true. For the induction step (@{prop"m <
+Suc n"}) we distinguish two cases: @{prop"m < n"} is true by induction
+hypothesis and @{prop"m = n"} follows from the assumption again using
 the induction hypothesis:
 *};
 
@@ -264,31 +266,31 @@
 ML"reset quick_and_dirty"
 
 text{*\noindent
-The elimination rule \isa{less_SucE} expresses the case distinction:
+The elimination rule @{thm[source]less_SucE} expresses the case distinction:
 \begin{quote}
 @{thm[display]"less_SucE"[no_vars]}
 \end{quote}
 
 Now it is straightforward to derive the original version of
-\isa{less\_induct} by manipulting the conclusion of the above lemma:
-instantiate \isa{n} by @{term"Suc n"} and \isa{m} by \isa{n} and
-remove the trivial condition @{term"n < Sc n"}. Fortunately, this
+@{thm[source]less_induct} by manipulating the conclusion of the above lemma:
+instantiate @{term"n"} by @{term"Suc n"} and @{term"m"} by @{term"n"} and
+remove the trivial condition @{prop"n < Suc n"}. Fortunately, this
 happens automatically when we add the lemma as a new premise to the
 desired goal:
 *};
 
-theorem less_induct: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) ==> P n";
+theorem less_induct: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) \\<Longrightarrow> P n";
 by(insert induct_lem, blast);
 
 text{*\noindent
 Finally we should mention that HOL already provides the mother of all
-inductions, \emph{wellfounded induction} (\isa{wf\_induct}):
+inductions, \emph{wellfounded induction} (@{thm[source]wf_induct}):
 \begin{quote}
 @{thm[display]"wf_induct"[no_vars]}
 \end{quote}
-where @{term"wf r"} means that the relation \isa{r} is wellfounded.
-For example \isa{less\_induct} is the special case where \isa{r} is \isa{<} on @{typ"nat"}.
-For details see the library.
+where @{term"wf r"} means that the relation @{term"r"} is wellfounded.
+For example @{thm[source]less_induct} is the special case where @{term"r"} is
+@{text"<"} on @{typ"nat"}. For details see the library.
 *};
 
 (*<*)
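
A minimal instance of the @{text"rule:"} format described above, using @{thm[source]less_induct} and the lemma stated earlier in this hunk (the script itself is an assumption, not part of this changeset):

lemma f_incr_lem: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i";
apply(induct_tac k rule: less_induct);
(* remaining steps as described in the text *)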
--- a/doc-src/TutorialI/Misc/Itrev.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/Itrev.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -4,10 +4,10 @@
 
 text{*
 Function @{term"rev"} has quadratic worst-case running time
-because it calls function @{term"@"} for each element of the list and
-@{term"@"} is linear in its first argument.  A linear time version of
+because it calls function @{text"@"} for each element of the list and
+@{text"@"} is linear in its first argument.  A linear time version of
 @{term"rev"} reqires an extra argument where the result is accumulated
-gradually, using only @{name"#"}:
+gradually, using only @{text"#"}:
 *}
 
 consts itrev :: "'a list \\<Rightarrow> 'a list \\<Rightarrow> 'a list";
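
The defining equations of @{term"itrev"} are cut off by the hunk; the standard accumulator definition the text describes is presumably the following (a reconstruction, not part of this changeset):

primrec
"itrev []     ys = ys"
"itrev (x#xs) ys = itrev xs (x#ys)";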
--- a/doc-src/TutorialI/Misc/Tree.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/Tree.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -14,7 +14,7 @@
 "mirror (Node l x r) = Node (mirror r) x (mirror l)";(*>*)
 
 text{*\noindent
-and a function \isa{mirror} that mirrors a binary tree
+and a function @{term"mirror"} that mirrors a binary tree
 by swapping subtrees (recursively). Prove
 *}
 
@@ -30,7 +30,7 @@
 (*>*)
 
 text{*\noindent
-Define a function \isa{flatten} that flattens a tree into a list
+Define a function @{term"flatten"} that flattens a tree into a list
 by traversing it in infix order. Prove
 *}
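
For the @{term"flatten"} exercise, a sketch of the usual infix-order definition; the constructor names @{term"Tip"} and @{term"Node"} are assumed from the @{term"mirror"} equations above, and the solution itself is hidden in the theory rather than part of this changeset:

consts flatten :: "'a tree => 'a list"
primrec
"flatten Tip = []"
"flatten (Node l x r) = flatten l @ [x] @ flatten r";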
 
--- a/doc-src/TutorialI/Misc/Tree2.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/Tree2.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -3,9 +3,9 @@
 (*>*)
 
 text{*\noindent In Exercise~\ref{ex:Tree} we defined a function
-\isa{flatten} from trees to lists. The straightforward version of
-\isa{flatten} is based on \isa{\at} and is thus, like \isa{rev}, quadratic.
-A linear time version of \isa{flatten} again reqires an extra
+@{term"flatten"} from trees to lists. The straightforward version of
+@{term"flatten"} is based on @{text"@"} and is thus, like @{term"rev"},
+quadratic. A linear time version of @{term"flatten"} again requires an extra
 argument, the accumulator: *}
 
 consts flatten2 :: "'a tree => 'a list => 'a list"
@@ -15,7 +15,7 @@
 "flatten2 (Node l x r) xs = flatten2 l (x#(flatten2 r xs))"
 (*>*)
 
-text{*\noindent Define \isa{flatten2} and prove
+text{*\noindent Define @{term"flatten2"} and prove
 *}
 (*<*)
 lemma [simp]: "!xs. flatten2 t xs = flatten t @ xs";
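
Given the generalised @{text"[simp]"} lemma above, the exercise's goal follows directly. A sketch of both proof steps (assumptions, not shown in this changeset):

by(induct_tac t, simp_all);

lemma "flatten2 t [] = flatten t";
by(simp);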
--- a/doc-src/TutorialI/Misc/asm_simp.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/asm_simp.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -12,7 +12,7 @@
 text{*\noindent
 The second assumption simplifies to @{term"xs = []"}, which in turn
 simplifies the first assumption to @{term"zs = ys"}, thus reducing the
-conclusion to @{term"ys = ys"} and hence to \isa{True}.
+conclusion to @{term"ys = ys"} and hence to @{term"True"}.
 
 In some cases this may be too much of a good thing and may lead to
 nontermination:
@@ -21,7 +21,7 @@
 lemma "\\<forall>x. f x = g (f (g x)) \\<Longrightarrow> f [] = f [] @ []";
 
 txt{*\noindent
-cannot be solved by an unmodified application of \isa{simp} because the
+cannot be solved by an unmodified application of @{text"simp"} because the
 simplification rule @{term"f x = g (f (g x))"} extracted from the assumption
 does not terminate. Isabelle notices certain simple forms of
 nontermination but not this one. The problem can be circumvented by
@@ -33,17 +33,17 @@
 text{*\noindent
 There are three options that influence the treatment of assumptions:
 \begin{description}
-\item[\isa{(no_asm)}]\indexbold{*no_asm}
+\item[@{text"(no_asm)"}]\indexbold{*no_asm}
  means that assumptions are completely ignored.
-\item[\isa{(no_asm_simp)}]\indexbold{*no_asm_simp}
+\item[@{text"(no_asm_simp)"}]\indexbold{*no_asm_simp}
  means that the assumptions are not simplified but
   are used in the simplification of the conclusion.
-\item[\isa{(no_asm_use)}]\indexbold{*no_asm_use}
+\item[@{text"(no_asm_use)"}]\indexbold{*no_asm_use}
  means that the assumptions are simplified but are not
   used in the simplification of each other or the conclusion.
 \end{description}
-Neither \isa{(no_asm_simp)} nor \isa{(no_asm_use)} allow to simplify the above
-problematic subgoal.
+Neither @{text"(no_asm_simp)"} nor @{text"(no_asm_use)"} allow to simplify
+the above problematic subgoal.
 
 Note that only one of the above options is allowed, and it must precede all
 other arguments.
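
The hunk cuts off right after ``The problem can be circumvented by''; one way that works is to ignore the assumptions altogether with the first option listed above (a sketch, not necessarily the wording of the theory itself):

lemma "\\<forall>x. f x = g (f (g x)) \\<Longrightarrow> f [] = f [] @ []";
by(simp (no_asm));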
--- a/doc-src/TutorialI/Misc/case_exprs.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/case_exprs.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -17,10 +17,10 @@
 
 In general, if $e$ is a term of the datatype $t$ defined in
 \S\ref{sec:general-datatype} above, the corresponding
-\isa{case}-expression analyzing $e$ is
+@{text"case"}-expression analyzing $e$ is
 \[
 \begin{array}{rrcl}
-\isa{case}~e~\isa{of} & C@1~x@ {11}~\dots~x@ {1k@1} & \To & e@1 \\
+@{text"case"}~e~@{text"of"} & C@1~x@ {11}~\dots~x@ {1k@1} & \To & e@1 \\
                            \vdots \\
                            \mid & C@m~x@ {m1}~\dots~x@ {mk@m} & \To & e@m
 \end{array}
@@ -32,18 +32,19 @@
 error messages.
 \end{warn}
 \noindent
-Nested patterns can be simulated by nested \isa{case}-expressions: instead
+Nested patterns can be simulated by nested @{text"case"}-expressions: instead
 of
-% case xs of [] => 1 | [x] => x | x#(y#zs) => y
-\begin{isabelle}
-~~~case~xs~of~[]~{\isasymRightarrow}~1~|~[x]~{\isasymRightarrow}~x~|~x~\#~y~\#~zs~{\isasymRightarrow}~y
-\end{isabelle}
+% 
+\begin{quote}
+@{text"case xs of [] => 1 | [x] => x | x#(y#zs) => y"}
+%~~~case~xs~of~[]~{\isasymRightarrow}~1~|~[x]~{\isasymRightarrow}~x~|~x~\#~y~\#~zs~{\isasymRightarrow}~y
+\end{quote}
 write
 \begin{quote}
 @{term[display,eta_contract=false,margin=50]"case xs of [] => 1 | x#ys => (case ys of [] => x | y#zs => y)"}
 \end{quote}
 
-Note that \isa{case}-expressions may need to be enclosed in parentheses to
+Note that @{text"case"}-expressions may need to be enclosed in parentheses to
 indicate their scope
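+
+For example, a sketch such as
+@{text"(case xs of [] => 0 | y#ys => y) + length xs"}
+needs the parentheses, because without them the trailing @{text"+ length xs"}
+would be parsed as part of the last branch.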
 *}
 
--- a/doc-src/TutorialI/Misc/case_splits.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/case_splits.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -3,8 +3,8 @@
 (*>*)
 
 text{*
-Goals containing \isaindex{if}-expressions are usually proved by case
-distinction on the condition of the \isa{if}. For example the goal
+Goals containing @{text"if"}-expressions are usually proved by case
+distinction on the condition of the @{text"if"}. For example the goal
 *}
 
 lemma "\\<forall>xs. if xs = [] then rev xs = [] else rev xs \\<noteq> []";
@@ -12,7 +12,7 @@
 txt{*\noindent
 can be split into
 \begin{isabelle}
-~1.~{\isasymforall}xs.~(xs~=~[]~{\isasymlongrightarrow}~rev~xs~=~[])~{\isasymand}~(xs~{\isasymnoteq}~[]~{\isasymlongrightarrow}~rev~xs~{\isasymnoteq}~[])%
+~1.~{\isasymforall}xs.~(xs~=~[]~{\isasymlongrightarrow}~rev~xs~=~[])~{\isasymand}~(xs~{\isasymnoteq}~[]~{\isasymlongrightarrow}~rev~xs~{\isasymnoteq}~[])
 \end{isabelle}
 by a degenerate form of simplification
 *}
@@ -21,14 +21,14 @@
 (*<*)oops;(*>*)
 
 text{*\noindent
-where no simplification rules are included (\isa{only:} is followed by the
+where no simplification rules are included (@{text"only:"} is followed by the
 empty list of theorems) but the rule \isaindexbold{split_if} for
-splitting \isa{if}s is added (via the modifier \isa{split:}). Because
-case-splitting on \isa{if}s is almost always the right proof strategy, the
-simplifier performs it automatically. Try \isacommand{apply}\isa{(simp)}
+splitting @{text"if"}s is added (via the modifier @{text"split:"}). Because
+case-splitting on @{text"if"}s is almost always the right proof strategy, the
+simplifier performs it automatically. Try \isacommand{apply}@{text"(simp)"}
 on the initial goal above.
 
-This splitting idea generalizes from \isa{if} to \isaindex{case}:
+This splitting idea generalizes from @{text"if"} to \isaindex{case}:
 *}
 
 lemma "(case xs of [] \\<Rightarrow> zs | y#ys \\<Rightarrow> y#(ys@zs)) = xs@zs";
@@ -36,7 +36,7 @@
 becomes
 \begin{isabelle}
 ~1.~(xs~=~[]~{\isasymlongrightarrow}~zs~=~xs~@~zs)~{\isasymand}\isanewline
-~~~~({\isasymforall}a~list.~xs~=~a~\#~list~{\isasymlongrightarrow}~a~\#~list~@~zs~=~xs~@~zs)%
+~~~~({\isasymforall}a~list.~xs~=~a~\#~list~{\isasymlongrightarrow}~a~\#~list~@~zs~=~xs~@~zs)
 \end{isabelle}
 by typing
 *}
@@ -45,9 +45,9 @@
 (*<*)oops;(*>*)
 
 text{*\noindent
-In contrast to \isa{if}-expressions, the simplifier does not split
-\isa{case}-expressions by default because this can lead to nontermination
-in case of recursive datatypes. Again, if the \isa{only:} modifier is
+In contrast to @{text"if"}-expressions, the simplifier does not split
+@{text"case"}-expressions by default because this can lead to nontermination
+in case of recursive datatypes. Again, if the @{text"only:"} modifier is
 dropped, the above goal is solved,
 *}
 (*<*)
@@ -56,17 +56,17 @@
 by(simp split: list.split);
 
 text{*\noindent%
-which \isacommand{apply}\isa{(simp)} alone will not do.
+which \isacommand{apply}@{text"(simp)"} alone will not do.
 
 In general, every datatype $t$ comes with a theorem
-\isa{$t$.split} which can be declared to be a \bfindex{split rule} either
-locally as above, or by giving it the \isa{split} attribute globally:
+$t$@{text".split"} which can be declared to be a \bfindex{split rule} either
+locally as above, or by giving it the @{text"split"} attribute globally:
 *}
 
 lemmas [split] = list.split;
 
 text{*\noindent
-The \isa{split} attribute can be removed with the \isa{del} modifier,
+The @{text"split"} attribute can be removed with the @{text"del"} modifier,
 either locally
 *}
 (*<*)
@@ -83,8 +83,9 @@
 
 text{*
 The above split rules intentionally only affect the conclusion of a
-subgoal.  If you want to split an \isa{if} or \isa{case}-expression in
-the assumptions, you have to apply \isa{split\_if\_asm} or $t$\isa{.split_asm}:
+subgoal.  If you want to split an @{text"if"} or @{text"case"}-expression in
+the assumptions, you have to apply @{thm[source]split_if_asm} or
+$t$@{text".split_asm"}:
 *}
 
 lemma "if xs = [] then ys ~= [] else ys = [] ==> xs @ ys ~= []"
@@ -92,13 +93,14 @@
 
 txt{*\noindent
 In contrast to splitting the conclusion, this actually creates two
-separate subgoals (which are solved by \isa{simp\_all}):
+separate subgoals (which are solved by @{text"simp_all"}):
 \begin{isabelle}
 \ \isadigit{1}{\isachardot}\ {\isasymlbrakk}\mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharsemicolon}\ \mbox{ys}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isasymrbrakk}\ {\isasymLongrightarrow}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharat}\ \mbox{ys}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\isanewline
 \ \isadigit{2}{\isachardot}\ {\isasymlbrakk}\mbox{xs}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isacharsemicolon}\ \mbox{ys}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{xs}\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}
 \end{isabelle}
 If you need to split both in the assumptions and the conclusion,
-use $t$\isa{.splits} which subsumes $t$\isa{.split} and $t$\isa{.split_asm}.
+use $t$@{text".splits"} which subsumes $t$@{text".split"} and
+$t$@{text".split_asm"}. Analogously, there is @{thm[source]if_splits}.
 *}
 
 (*<*)
--- a/doc-src/TutorialI/Misc/cond_rewr.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/cond_rewr.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -23,10 +23,10 @@
 (*>*)
 text{*\noindent
 is proved by plain simplification:
-the conditional equation \isa{hd_Cons_tl} above
-can simplify \isa{hd(rev~xs)~\#~tl(rev~xs)} to \isa{rev xs}
-because the corresponding precondition \isa{rev xs \isasymnoteq\ []}
-simplifies to \isa{xs \isasymnoteq\ []}, which is exactly the local
+the conditional equation @{thm[source]hd_Cons_tl} above
+can simplify @{term"hd(rev xs) # tl(rev xs)"} to @{term"rev xs"}
+because the corresponding precondition @{term"rev xs ~= []"}
+simplifies to @{term"xs ~= []"}, which is exactly the local
 assumption of the subgoal.
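+
+In general, a conditional rewrite rule of the form @{text"P ==> l = r"} is
+applied only if the simplifier can prove the instantiated precondition
+@{text"P"}, again by simplification.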
 *}
 (*<*)
--- a/doc-src/TutorialI/Misc/document/AdvancedInd.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/AdvancedInd.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -35,7 +35,7 @@
 \ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
 \end{isabelle}
 We cannot prove this equality because we do not know what \isa{hd} and
-\isa{last} return when applied to \isa{[]}.
+\isa{last} return when applied to \isa{{\isacharbrackleft}{\isacharbrackright}}.
 
 The point is that we have violated the above warning. Because the induction
 formula is only the conclusion, the occurrence of \isa{xs} in the premises is
@@ -43,7 +43,7 @@
 becomes unprovable. Fortunately, the solution is easy:
 \begin{quote}
 \emph{Pull all occurrences of the induction variable into the conclusion
-using \isa{\isasymlongrightarrow}.}
+using \isa{{\isasymlongrightarrow}}.}
 \end{quote}
 This means we should prove%
 \end{isamarkuptxt}%
@@ -56,10 +56,11 @@
 \end{isabelle}
 which is trivial, and \isa{auto} finishes the whole proof.
 
-If \isa{hd\_rev} is meant to be a simplification rule, you are done. But if you
-really need the \isa{\isasymLongrightarrow}-version of \isa{hd\_rev}, for
-example because you want to apply it as an introduction rule, you need to
-derive it separately, by combining it with modus ponens:%
+If \isa{hd{\isacharunderscore}rev} is meant to be a simplification rule, you are
+done. But if you really need the \isa{{\isasymLongrightarrow}}-version of
+\isa{hd{\isacharunderscore}rev}, for example because you want to apply it as an
+introduction rule, you need to derive it separately, by combining it with
+modus ponens:%
 \end{isamarkuptext}%
 \isacommand{lemmas}\ hd{\isacharunderscore}revI\ {\isacharequal}\ hd{\isacharunderscore}rev{\isacharbrackleft}THEN\ mp{\isacharbrackright}%
 \begin{isamarkuptext}%
@@ -83,13 +84,13 @@
 \isacommand{lemmas}\ myrule\ {\isacharequal}\ simple{\isacharbrackleft}THEN\ spec{\isacharcomma}\ THEN\ mp{\isacharcomma}\ THEN\ mp{\isacharbrackright}%
 \begin{isamarkuptext}%
 \noindent
-or the wholesale stripping of \isa{\isasymforall} and
-\isa{\isasymlongrightarrow} in the conclusion via \isa{rulify}%
+or the wholesale stripping of \isa{{\isasymforall}} and
+\isa{{\isasymlongrightarrow}} in the conclusion via \isa{rulify}%
 \end{isamarkuptext}%
 \isacommand{lemmas}\ myrule\ {\isacharequal}\ simple{\isacharbrackleft}rulify{\isacharbrackright}%
 \begin{isamarkuptext}%
 \noindent
-yielding \isa{{\isasymlbrakk}\mbox{A}\ \mbox{y}{\isacharsemicolon}\ \mbox{B}\ \mbox{y}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{B}\ \mbox{y}\ {\isasymand}\ \mbox{A}\ \mbox{y}}.
+yielding \isa{{\isasymlbrakk}A\ y{\isacharsemicolon}\ B\ y{\isasymrbrakk}\ {\isasymLongrightarrow}\ B\ y\ {\isasymand}\ A\ y}.
 You can go one step further and include these derivations already in the
 statement of your original lemma, thus avoiding the intermediate step:%
 \end{isamarkuptext}%
@@ -118,11 +119,11 @@
 Structural induction on \isa{nat} is
 usually known as ``mathematical induction''. There is also ``complete
 induction'', where you must prove $P(n)$ under the assumption that $P(m)$
-holds for all $m<n$. In Isabelle, this is the theorem \isa{less\_induct}:
+holds for all $m<n$. In Isabelle, this is the theorem \isa{less{\isacharunderscore}induct}:
 \begin{quote}
 
 \begin{isabelle}%
-{\isacharparenleft}{\isasymAnd}\mbox{n}{\isachardot}\ {\isasymforall}\mbox{m}{\isachardot}\ \mbox{m}\ {\isacharless}\ \mbox{n}\ {\isasymlongrightarrow}\ \mbox{P}\ \mbox{m}\ {\isasymLongrightarrow}\ \mbox{P}\ \mbox{n}{\isacharparenright}\ {\isasymLongrightarrow}\ \mbox{P}\ \mbox{n}
+{\isacharparenleft}{\isasymAnd}n{\isachardot}\ {\isasymforall}m{\isachardot}\ m\ {\isacharless}\ n\ {\isasymlongrightarrow}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ P\ n
 \end{isabelle}%
 
 \end{quote}
@@ -136,14 +137,14 @@
 discouraged, because of the danger of inconsistencies. The above axiom does
 not introduce an inconsistency because, for example, the identity function
 satisfies it.}
-for \isa{f} it follows that \isa{\mbox{n}\ {\isasymle}\ f\ \mbox{n}}, which can
-be proved by induction on \isa{f\ \mbox{n}}. Following the recipy outlined
+for \isa{f} it follows that \isa{n\ {\isasymle}\ f\ n}, which can
+be proved by induction on \isa{f\ n}. Following the recipe outlined
 above, we have to phrase the proposition as follows to allow induction:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ f{\isacharunderscore}incr{\isacharunderscore}lem{\isacharcolon}\ {\isachardoublequote}{\isasymforall}i{\isachardot}\ k\ {\isacharequal}\ f\ i\ {\isasymlongrightarrow}\ i\ {\isasymle}\ f\ i{\isachardoublequote}%
 \begin{isamarkuptxt}%
 \noindent
-To perform induction on \isa{k} using \isa{less\_induct}, we use the same
+To perform induction on \isa{k} using \isa{less{\isacharunderscore}induct}, we use the same
 general induction method as for recursion induction (see
 \S\ref{sec:recdef-induction}):%
 \end{isamarkuptxt}%
@@ -155,9 +156,9 @@
 \ 1.\ {\isasymAnd}\mbox{n}.\ {\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i})\isanewline
 \ \ \ \ \ \ \ {\isasymLongrightarrow}\ {\isasymforall}\mbox{i}.\ \mbox{n}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i}
 \end{isabelle}
-After stripping the \isa{\isasymforall i}, the proof continues with a case
-distinction on \isa{i}. The case \isa{i = 0} is trivial and we focus on the
-other case:
+After stripping the \isa{{\isasymforall}i}, the proof continues with a case
+distinction on \isa{i}. The case \isa{i\ {\isacharequal}\ \isadigit{0}} is trivial and we focus on
+the other case:
 \begin{isabelle}
 \ 1.\ {\isasymAnd}\mbox{n}\ \mbox{i}\ \mbox{nat}.\isanewline
 \ \ \ \ \ \ \ {\isasymlbrakk}{\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i});\ \mbox{i}\ =\ Suc\ \mbox{nat}{\isasymrbrakk}\isanewline
@@ -169,15 +170,15 @@
 \noindent
 It is not surprising if you find the last step puzzling.
 The proof goes like this (writing \isa{j} instead of \isa{nat}).
-Since \isa{\mbox{i}\ {\isacharequal}\ Suc\ \mbox{j}} it suffices to show
-\isa{\mbox{j}\ {\isacharless}\ f\ {\isacharparenleft}Suc\ \mbox{j}{\isacharparenright}} (by \isa{Suc\_leI}: \isa{\mbox{m}\ {\isacharless}\ \mbox{n}\ {\isasymLongrightarrow}\ Suc\ \mbox{m}\ {\isasymle}\ \mbox{n}}). This is
-proved as follows. From \isa{f\_ax} we have \isa{f\ {\isacharparenleft}f\ \mbox{j}{\isacharparenright}\ {\isacharless}\ f\ {\isacharparenleft}Suc\ \mbox{j}{\isacharparenright}}
-(1) which implies \isa{f\ \mbox{j}\ {\isasymle}\ f\ {\isacharparenleft}f\ \mbox{j}{\isacharparenright}} (by the induction hypothesis).
-Using (1) once more we obtain \isa{f\ \mbox{j}\ {\isacharless}\ f\ {\isacharparenleft}Suc\ \mbox{j}{\isacharparenright}} (2) by transitivity
-(\isa{le_less_trans}: \isa{{\isasymlbrakk}\mbox{i}\ {\isasymle}\ \mbox{j}{\isacharsemicolon}\ \mbox{j}\ {\isacharless}\ \mbox{k}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{i}\ {\isacharless}\ \mbox{k}}).
-Using the induction hypothesis once more we obtain \isa{\mbox{j}\ {\isasymle}\ f\ \mbox{j}}
-which, together with (2) yields \isa{\mbox{j}\ {\isacharless}\ f\ {\isacharparenleft}Suc\ \mbox{j}{\isacharparenright}} (again by
-\isa{le_less_trans}).
+Since \isa{i\ {\isacharequal}\ Suc\ j} it suffices to show
+\isa{j\ {\isacharless}\ f\ {\isacharparenleft}Suc\ j{\isacharparenright}} (by \isa{Suc{\isacharunderscore}leI}: \isa{m\ {\isacharless}\ n\ {\isasymLongrightarrow}\ Suc\ m\ {\isasymle}\ n}). This is
+proved as follows. From \isa{f{\isacharunderscore}ax} we have \isa{f\ {\isacharparenleft}f\ j{\isacharparenright}\ {\isacharless}\ f\ {\isacharparenleft}Suc\ j{\isacharparenright}}
+(1) which implies \isa{f\ j\ {\isasymle}\ f\ {\isacharparenleft}f\ j{\isacharparenright}} (by the induction hypothesis).
+Using (1) once more we obtain \isa{f\ j\ {\isacharless}\ f\ {\isacharparenleft}Suc\ j{\isacharparenright}} (2) by transitivity
+(\isa{le{\isacharunderscore}less{\isacharunderscore}trans}: \isa{{\isasymlbrakk}i\ {\isasymle}\ j{\isacharsemicolon}\ j\ {\isacharless}\ k{\isasymrbrakk}\ {\isasymLongrightarrow}\ i\ {\isacharless}\ k}).
+Using the induction hypothesis once more we obtain \isa{j\ {\isasymle}\ f\ j}
+which, together with (2) yields \isa{j\ {\isacharless}\ f\ {\isacharparenleft}Suc\ j{\isacharparenright}} (again by
+\isa{le{\isacharunderscore}less{\isacharunderscore}trans}).
 
 This last step shows both the power and the danger of automatic proofs: they
 will usually not tell you how the proof goes, because it can be very hard to
@@ -185,33 +186,34 @@
 \S\ref{sec:part2?} introduces a language for writing readable yet concise
 proofs.
 
-We can now derive the desired \isa{\mbox{i}\ {\isasymle}\ f\ \mbox{i}} from \isa{f\_incr}:%
+We can now derive the desired \isa{i\ {\isasymle}\ f\ i} from \isa{f{\isacharunderscore}incr}:%
 \end{isamarkuptext}%
 \isacommand{lemmas}\ f{\isacharunderscore}incr\ {\isacharequal}\ f{\isacharunderscore}incr{\isacharunderscore}lem{\isacharbrackleft}rulify{\isacharcomma}\ OF\ refl{\isacharbrackright}%
 \begin{isamarkuptext}%
 \noindent
-The final \isa{refl} gets rid of the premise \isa{?k = f ?i}. Again, we could
-have included this derivation in the original statement of the lemma:%
+The final \isa{refl} gets rid of the premise \isa{{\isacharquery}k\ {\isacharequal}\ f\ {\isacharquery}i}. Again,
+we could have included this derivation in the original statement of the lemma:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ f{\isacharunderscore}incr{\isacharbrackleft}rulify{\isacharcomma}\ OF\ refl{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}{\isasymforall}i{\isachardot}\ k\ {\isacharequal}\ f\ i\ {\isasymlongrightarrow}\ i\ {\isasymle}\ f\ i{\isachardoublequote}%
 \begin{isamarkuptext}%
 \begin{exercise}
-From the above axiom and lemma for \isa{f} show that \isa{f} is the identity.
+From the above axiom and lemma for \isa{f} show that \isa{f} is the
+identity.
 \end{exercise}
 
-In general, \isa{induct\_tac} can be applied with any rule \isa{r}
-whose conclusion is of the form \isa{?P ?x1 \dots ?xn}, in which case the
+In general, \isa{induct{\isacharunderscore}tac} can be applied with any rule $r$
+whose conclusion is of the form ${?}P~?x@1 \dots ?x@n$, in which case the
 format is
-\begin{ttbox}
-apply(induct_tac y1 ... yn rule: r)
-\end{ttbox}\index{*induct_tac}%
-where \isa{y1}, \dots, \isa{yn} are variables in the first subgoal.
-In fact, \isa{induct\_tac} even allows the conclusion of
-\isa{r} to be an (iterated) conjunction of formulae of the above form, in
+\begin{quote}
+\isacommand{apply}\isa{{\isacharparenleft}induct{\isacharunderscore}tac} $y@1 \dots y@n$ \isa{rule{\isacharcolon}} $r$\isa{{\isacharparenright}}
+\end{quote}\index{*induct_tac}%
+where $y@1, \dots, y@n$ are variables in the first subgoal.
+In fact, \isa{induct{\isacharunderscore}tac} even allows the conclusion of
+$r$ to be an (iterated) conjunction of formulae of the above form, in
 which case the application is
-\begin{ttbox}
-apply(induct_tac y1 ... yn and ... and z1 ... zm rule: r)
-\end{ttbox}%
+\begin{quote}
+\isacommand{apply}\isa{{\isacharparenleft}induct{\isacharunderscore}tac} $y@1 \dots y@n$ \isa{and} \dots\ \isa{and} $z@1 \dots z@m$ \isa{rule{\isacharcolon}} $r$\isa{{\isacharparenright}}
+\end{quote}%
 \end{isamarkuptext}%
 %
 \isamarkupsubsection{Derivation of new induction schemas}
@@ -220,16 +222,16 @@
 \label{sec:derive-ind}
 Induction schemas are ordinary theorems and you can derive new ones
 whenever you wish.  This section shows you how to, using the example
-of \isa{less\_induct}. Assume we only have structural induction
+of \isa{less{\isacharunderscore}induct}. Assume we only have structural induction
 available for \isa{nat} and want to derive complete induction. This
 requires us to generalize the statement first:%
 \end{isamarkuptext}%
-\isacommand{lemma}\ induct{\isacharunderscore}lem{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isacharequal}{\isacharequal}{\isachargreater}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m{\isachardoublequote}\isanewline
+\isacommand{lemma}\ induct{\isacharunderscore}lem{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m{\isachardoublequote}\isanewline
 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ n{\isacharparenright}%
 \begin{isamarkuptxt}%
 \noindent
-The base case is trivially true. For the induction step (\isa{\mbox{m}\ {\isacharless}\ Suc\ \mbox{n}}) we distinguish two cases: \isa{\mbox{m}\ {\isacharless}\ \mbox{n}} is true by induction
-hypothesis and \isa{\mbox{m}\ {\isacharequal}\ \mbox{n}} follow from the assumption again using
+The base case is trivially true. For the induction step (\isa{m\ {\isacharless}\ Suc\ n}) we distinguish two cases: \isa{m\ {\isacharless}\ n} is true by induction
+hypothesis and \isa{m\ {\isacharequal}\ n} follows from the assumption again using
 the induction hypothesis:%
 \end{isamarkuptxt}%
 \isacommand{apply}{\isacharparenleft}blast{\isacharparenright}\isanewline
@@ -239,38 +241,38 @@
 \isacommand{ML}{\isachardoublequote}reset\ quick{\isacharunderscore}and{\isacharunderscore}dirty{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-The elimination rule \isa{less_SucE} expresses the case distinction:
+The elimination rule \isa{less{\isacharunderscore}SucE} expresses the case distinction:
 \begin{quote}
 
 \begin{isabelle}%
-{\isasymlbrakk}\mbox{m}\ {\isacharless}\ Suc\ \mbox{n}{\isacharsemicolon}\ \mbox{m}\ {\isacharless}\ \mbox{n}\ {\isasymLongrightarrow}\ \mbox{P}{\isacharsemicolon}\ \mbox{m}\ {\isacharequal}\ \mbox{n}\ {\isasymLongrightarrow}\ \mbox{P}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{P}
+{\isasymlbrakk}m\ {\isacharless}\ Suc\ n{\isacharsemicolon}\ m\ {\isacharless}\ n\ {\isasymLongrightarrow}\ P{\isacharsemicolon}\ m\ {\isacharequal}\ n\ {\isasymLongrightarrow}\ P{\isasymrbrakk}\ {\isasymLongrightarrow}\ P
 \end{isabelle}%
 
 \end{quote}
 
 Now it is straightforward to derive the original version of
-\isa{less\_induct} by manipulting the conclusion of the above lemma:
-instantiate \isa{n} by \isa{Suc\ \mbox{n}} and \isa{m} by \isa{n} and
-remove the trivial condition \isa{\mbox{n}\ {\isacharless}\ \mbox{Sc}\ \mbox{n}}. Fortunately, this
+\isa{less{\isacharunderscore}induct} by manipulating the conclusion of the above lemma:
+instantiate \isa{n} by \isa{Suc\ n} and \isa{m} by \isa{n} and
+remove the trivial condition \isa{n\ {\isacharless}\ Suc\ n}. Fortunately, this
 happens automatically when we add the lemma as a new premise to the
 desired goal:%
 \end{isamarkuptext}%
-\isacommand{theorem}\ less{\isacharunderscore}induct{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isacharequal}{\isacharequal}{\isachargreater}\ P\ n{\isachardoublequote}\isanewline
+\isacommand{theorem}\ less{\isacharunderscore}induct{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ P\ n{\isachardoublequote}\isanewline
 \isacommand{by}{\isacharparenleft}insert\ induct{\isacharunderscore}lem{\isacharcomma}\ blast{\isacharparenright}%
 \begin{isamarkuptext}%
 \noindent
 Finally we should mention that HOL already provides the mother of all
-inductions, \emph{wellfounded induction} (\isa{wf\_induct}):
+inductions, \emph{wellfounded induction} (\isa{wf{\isacharunderscore}induct}):
 \begin{quote}
 
 \begin{isabelle}%
-{\isasymlbrakk}wf\ \mbox{r}{\isacharsemicolon}\ {\isasymAnd}\mbox{x}{\isachardot}\ {\isasymforall}\mbox{y}{\isachardot}\ {\isacharparenleft}\mbox{y}{\isacharcomma}\ \mbox{x}{\isacharparenright}\ {\isasymin}\ \mbox{r}\ {\isasymlongrightarrow}\ \mbox{P}\ \mbox{y}\ {\isasymLongrightarrow}\ \mbox{P}\ \mbox{x}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{P}\ \mbox{a}
+{\isasymlbrakk}wf\ r{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ {\isasymforall}y{\isachardot}\ {\isacharparenleft}y{\isacharcomma}\ x{\isacharparenright}\ {\isasymin}\ r\ {\isasymlongrightarrow}\ P\ y\ {\isasymLongrightarrow}\ P\ x{\isasymrbrakk}\ {\isasymLongrightarrow}\ P\ a
 \end{isabelle}%
 
 \end{quote}
-where \isa{wf\ \mbox{r}} means that the relation \isa{r} is wellfounded.
-For example \isa{less\_induct} is the special case where \isa{r} is \isa{<} on \isa{nat}.
-For details see the library.%
+where \isa{wf\ r} means that the relation \isa{r} is wellfounded.
+For example \isa{less{\isacharunderscore}induct} is the special case where \isa{r} is
+\isa{{\isacharless}} on \isa{nat}. For details see the library.%
 \end{isamarkuptext}%
 \end{isabellebody}%
 %%% Local Variables:
--- a/doc-src/TutorialI/Misc/document/Itrev.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/Itrev.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -3,17 +3,18 @@
 %
 \begin{isamarkuptext}%
 Function \isa{rev} has quadratic worst-case running time
-because it calls function \isa{\at} for each element of the list and
-\isa{\at} is linear in its first argument.  A linear time version of
+because it calls function \isa{{\isacharat}} for each element of the list and
+\isa{{\isacharat}} is linear in its first argument.  A linear time version of
 \isa{rev} reqires an extra argument where the result is accumulated
-gradually, using only \isa{\#}:%
+gradually, using only \isa{{\isacharhash}}:%
 \end{isamarkuptext}%
 \isacommand{consts}\ itrev\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a\ list{\isachardoublequote}\isanewline
 \isacommand{primrec}\isanewline
 {\isachardoublequote}itrev\ {\isacharbrackleft}{\isacharbrackright}\ \ \ \ \ ys\ {\isacharequal}\ ys{\isachardoublequote}\isanewline
 {\isachardoublequote}itrev\ {\isacharparenleft}x{\isacharhash}xs{\isacharparenright}\ ys\ {\isacharequal}\ itrev\ xs\ {\isacharparenleft}x{\isacharhash}ys{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptext}%
-\noindent The behaviour of \isa{itrev} is simple: it reverses
+\noindent
+The behaviour of \isa{itrev} is simple: it reverses
 its first argument by stacking its elements onto the second argument,
 and returning that second argument when the first one becomes
 empty. Note that \isa{itrev} is tail-recursive, i.e.\ it can be
@@ -36,18 +37,18 @@
 \end{isabelle}
 Just as predicted above, the overall goal, and hence the induction
 hypothesis, is too weak to solve the induction step because of the fixed
-\isa{[]}. The corresponding heuristic:
+\isa{{\isacharbrackleft}{\isacharbrackright}}. The corresponding heuristic:
 \begin{quote}
 \emph{Generalize goals for induction by replacing constants by variables.}
 \end{quote}
-Of course one cannot do this na\"{\i}vely: \isa{itrev xs ys = rev xs} is
+Of course one cannot do this na\"{\i}vely: \isa{itrev\ xs\ ys\ {\isacharequal}\ rev\ xs} is
 just not true---the correct generalization is%
 \end{isamarkuptxt}%
 \isacommand{lemma}\ {\isachardoublequote}itrev\ xs\ ys\ {\isacharequal}\ rev\ xs\ {\isacharat}\ ys{\isachardoublequote}%
 \begin{isamarkuptxt}%
 \noindent
-If \isa{ys} is replaced by \isa{[]}, the right-hand side simplifies to
-\isa{rev\ \mbox{xs}}, just as required.
+If \isa{ys} is replaced by \isa{{\isacharbrackleft}{\isacharbrackright}}, the right-hand side simplifies to
+\isa{rev\ xs}, just as required.
 
 In this particular instance it was easy to guess the right generalization,
 but in more complex situations a good deal of creativity is needed. This is
@@ -59,12 +60,12 @@
 \begin{isabelle}
 ~1.~{\isasymAnd}a~list.\isanewline
 ~~~~~~~itrev~list~ys~=~rev~list~@~ys~{\isasymLongrightarrow}\isanewline
-~~~~~~~itrev~list~(a~\#~ys)~=~rev~list~@~a~\#~ys%
+~~~~~~~itrev~list~(a~\#~ys)~=~rev~list~@~a~\#~ys
 \end{isabelle}
 The induction hypothesis is still too weak, but this time it takes no
 intuition to generalize: the problem is that \isa{ys} is fixed throughout
 the subgoal, but the induction hypothesis needs to be applied with
-\isa{\mbox{a}\ {\isacharhash}\ \mbox{ys}} instead of \isa{ys}. Hence we prove the theorem
+\isa{a\ {\isacharhash}\ ys} instead of \isa{ys}. Hence we prove the theorem
 for all \isa{ys} instead of a fixed one:%
 \end{isamarkuptxt}%
 \isacommand{lemma}\ {\isachardoublequote}{\isasymforall}ys{\isachardot}\ itrev\ xs\ ys\ {\isacharequal}\ rev\ xs\ {\isacharat}\ ys{\isachardoublequote}%
--- a/doc-src/TutorialI/Misc/document/Tree2.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/Tree2.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -4,13 +4,13 @@
 \begin{isamarkuptext}%
 \noindent In Exercise~\ref{ex:Tree} we defined a function
 \isa{flatten} from trees to lists. The straightforward version of
-\isa{flatten} is based on \isa{\at} and is thus, like \isa{rev}, quadratic.
-A linear time version of \isa{flatten} again reqires an extra
+\isa{flatten} is based on \isa{{\isacharat}} and is thus, like \isa{rev},
+quadratic. A linear time version of \isa{flatten} again requires an extra
 argument, the accumulator:%
 \end{isamarkuptext}%
 \isacommand{consts}\ flatten\isadigit{2}\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharprime}a\ tree\ {\isacharequal}{\isachargreater}\ {\isacharprime}a\ list\ {\isacharequal}{\isachargreater}\ {\isacharprime}a\ list{\isachardoublequote}%
 \begin{isamarkuptext}%
-\noindent Define \isa{flatten2} and prove%
+\noindent Define \isa{flatten\isadigit{2}} and prove%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}flatten\isadigit{2}\ t\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ flatten\ t{\isachardoublequote}\end{isabellebody}%
 %%% Local Variables:
--- a/doc-src/TutorialI/Misc/document/asm_simp.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/asm_simp.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -9,9 +9,9 @@
 \isacommand{by}\ simp%
 \begin{isamarkuptext}%
 \noindent
-The second assumption simplifies to \isa{\mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}}, which in turn
-simplifies the first assumption to \isa{\mbox{zs}\ {\isacharequal}\ \mbox{ys}}, thus reducing the
-conclusion to \isa{\mbox{ys}\ {\isacharequal}\ \mbox{ys}} and hence to \isa{True}.
+The second assumption simplifies to \isa{xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}}, which in turn
+simplifies the first assumption to \isa{zs\ {\isacharequal}\ ys}, thus reducing the
+conclusion to \isa{ys\ {\isacharequal}\ ys} and hence to \isa{True}.
 
 In some cases this may be too much of a good thing and may lead to
 nontermination:%
@@ -20,7 +20,7 @@
 \begin{isamarkuptxt}%
 \noindent
 cannot be solved by an unmodified application of \isa{simp} because the
-simplification rule \isa{\mbox{f}\ \mbox{x}\ {\isacharequal}\ \mbox{g}\ {\isacharparenleft}\mbox{f}\ {\isacharparenleft}\mbox{g}\ \mbox{x}{\isacharparenright}{\isacharparenright}} extracted from the assumption
+simplification rule \isa{f\ x\ {\isacharequal}\ g\ {\isacharparenleft}f\ {\isacharparenleft}g\ x{\isacharparenright}{\isacharparenright}} extracted from the assumption
 does not terminate. Isabelle notices certain simple forms of
 nontermination but not this one. The problem can be circumvented by
 explicitly telling the simplifier to ignore the assumptions:%
@@ -30,17 +30,17 @@
 \noindent
 There are three options that influence the treatment of assumptions:
 \begin{description}
-\item[\isa{(no_asm)}]\indexbold{*no_asm}
+\item[\isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharparenright}}]\indexbold{*no_asm}
  means that assumptions are completely ignored.
-\item[\isa{(no_asm_simp)}]\indexbold{*no_asm_simp}
+\item[\isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharunderscore}simp{\isacharparenright}}]\indexbold{*no_asm_simp}
  means that the assumptions are not simplified but
   are used in the simplification of the conclusion.
-\item[\isa{(no_asm_use)}]\indexbold{*no_asm_use}
+\item[\isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharunderscore}use{\isacharparenright}}]\indexbold{*no_asm_use}
  means that the assumptions are simplified but are not
   used in the simplification of each other or the conclusion.
 \end{description}
-Neither \isa{(no_asm_simp)} nor \isa{(no_asm_use)} allow to simplify the above
-problematic subgoal.
+Neither \isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharunderscore}simp{\isacharparenright}} nor \isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharunderscore}use{\isacharparenright}} allows us to simplify
+the above problematic subgoal.
 
 Note that only one of the above options is allowed, and it must precede all
 other arguments.%
--- a/doc-src/TutorialI/Misc/document/case_exprs.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/case_exprs.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -10,14 +10,14 @@
 \begin{quote}
 
 \begin{isabelle}%
-case\ \mbox{xs}\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{1}\ {\isacharbar}\ \mbox{y}\ {\isacharhash}\ \mbox{ys}\ {\isasymRightarrow}\ \mbox{y}
+case\ xs\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{1}\ {\isacharbar}\ y\ {\isacharhash}\ ys\ {\isasymRightarrow}\ y
 \end{isabelle}%
 
 \end{quote}
-evaluates to \isa{\isadigit{1}} if \isa{\mbox{xs}} is \isa{{\isacharbrackleft}{\isacharbrackright}} and to \isa{\mbox{y}} if 
-\isa{\mbox{xs}} is \isa{\mbox{y}\ {\isacharhash}\ \mbox{ys}}. (Since the result in both branches must be of
-the same type, it follows that \isa{\mbox{y}} is of type \isa{nat} and hence
-that \isa{\mbox{xs}} is of type \isa{nat\ list}.)
+evaluates to \isa{\isadigit{1}} if \isa{xs} is \isa{{\isacharbrackleft}{\isacharbrackright}} and to \isa{y} if 
+\isa{xs} is \isa{y\ {\isacharhash}\ ys}. (Since the result in both branches must be of
+the same type, it follows that \isa{y} is of type \isa{nat} and hence
+that \isa{xs} is of type \isa{nat\ list}.)
 
 In general, if $e$ is a term of the datatype $t$ defined in
 \S\ref{sec:general-datatype} above, the corresponding
@@ -38,16 +38,17 @@
 \noindent
 Nested patterns can be simulated by nested \isa{case}-expressions: instead
 of
-% case xs of [] => 1 | [x] => x | x#(y#zs) => y
-\begin{isabelle}
-~~~case~xs~of~[]~{\isasymRightarrow}~1~|~[x]~{\isasymRightarrow}~x~|~x~\#~y~\#~zs~{\isasymRightarrow}~y
-\end{isabelle}
+% 
+\begin{quote}
+\isa{case\ xs\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}{\isachargreater}\ \isadigit{1}\ {\isacharbar}\ {\isacharbrackleft}x{\isacharbrackright}\ {\isacharequal}{\isachargreater}\ x\ {\isacharbar}\ x{\isacharhash}{\isacharparenleft}y{\isacharhash}zs{\isacharparenright}\ {\isacharequal}{\isachargreater}\ y}
+%~~~case~xs~of~[]~{\isasymRightarrow}~1~|~[x]~{\isasymRightarrow}~x~|~x~\#~y~\#~zs~{\isasymRightarrow}~y
+\end{quote}
 write
 \begin{quote}
 
 \begin{isabelle}%
-case\ \mbox{xs}\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{1}\isanewline
-{\isacharbar}\ \mbox{x}\ {\isacharhash}\ \mbox{ys}\ {\isasymRightarrow}\ case\ \mbox{ys}\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \mbox{x}\ {\isacharbar}\ \mbox{y}\ {\isacharhash}\ \mbox{zs}\ {\isasymRightarrow}\ \mbox{y}
+case\ xs\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{1}\isanewline
+{\isacharbar}\ x\ {\isacharhash}\ ys\ {\isasymRightarrow}\ case\ ys\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ x\ {\isacharbar}\ y\ {\isacharhash}\ zs\ {\isasymRightarrow}\ y
 \end{isabelle}%
 
 \end{quote}
--- a/doc-src/TutorialI/Misc/document/case_splits.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/case_splits.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -2,7 +2,7 @@
 \begin{isabellebody}%
 %
 \begin{isamarkuptext}%
-Goals containing \isaindex{if}-expressions are usually proved by case
+Goals containing \isa{if}-expressions are usually proved by case
 distinction on the condition of the \isa{if}. For example the goal%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}{\isasymforall}xs{\isachardot}\ if\ xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ then\ rev\ xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ else\ rev\ xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isachardoublequote}%
@@ -10,18 +10,18 @@
 \noindent
 can be split into
 \begin{isabelle}
-~1.~{\isasymforall}xs.~(xs~=~[]~{\isasymlongrightarrow}~rev~xs~=~[])~{\isasymand}~(xs~{\isasymnoteq}~[]~{\isasymlongrightarrow}~rev~xs~{\isasymnoteq}~[])%
+~1.~{\isasymforall}xs.~(xs~=~[]~{\isasymlongrightarrow}~rev~xs~=~[])~{\isasymand}~(xs~{\isasymnoteq}~[]~{\isasymlongrightarrow}~rev~xs~{\isasymnoteq}~[])
 \end{isabelle}
 by a degenerate form of simplification%
 \end{isamarkuptxt}%
 \isacommand{apply}{\isacharparenleft}simp\ only{\isacharcolon}\ split{\isacharcolon}\ split{\isacharunderscore}if{\isacharparenright}%
 \begin{isamarkuptext}%
 \noindent
-where no simplification rules are included (\isa{only:} is followed by the
+where no simplification rules are included (\isa{only{\isacharcolon}} is followed by the
 empty list of theorems) but the rule \isaindexbold{split_if} for
-splitting \isa{if}s is added (via the modifier \isa{split:}). Because
+splitting \isa{if}s is added (via the modifier \isa{split{\isacharcolon}}). Because
 case-splitting on \isa{if}s is almost always the right proof strategy, the
-simplifier performs it automatically. Try \isacommand{apply}\isa{(simp)}
+simplifier performs it automatically. Try \isacommand{apply}\isa{{\isacharparenleft}simp{\isacharparenright}}
 on the initial goal above.
 
 This splitting idea generalizes from \isa{if} to \isaindex{case}:%
@@ -32,7 +32,7 @@
 becomes
 \begin{isabelle}
 ~1.~(xs~=~[]~{\isasymlongrightarrow}~zs~=~xs~@~zs)~{\isasymand}\isanewline
-~~~~({\isasymforall}a~list.~xs~=~a~\#~list~{\isasymlongrightarrow}~a~\#~list~@~zs~=~xs~@~zs)%
+~~~~({\isasymforall}a~list.~xs~=~a~\#~list~{\isasymlongrightarrow}~a~\#~list~@~zs~=~xs~@~zs)
 \end{isabelle}
 by typing%
 \end{isamarkuptxt}%
@@ -41,16 +41,16 @@
 \noindent
 In contrast to \isa{if}-expressions, the simplifier does not split
 \isa{case}-expressions by default because this can lead to nontermination
-in case of recursive datatypes. Again, if the \isa{only:} modifier is
+in case of recursive datatypes. Again, if the \isa{only{\isacharcolon}} modifier is
 dropped, the above goal is solved,%
 \end{isamarkuptext}%
 \isacommand{by}{\isacharparenleft}simp\ split{\isacharcolon}\ list{\isachardot}split{\isacharparenright}%
 \begin{isamarkuptext}%
 \noindent%
-which \isacommand{apply}\isa{(simp)} alone will not do.
+which \isacommand{apply}\isa{{\isacharparenleft}simp{\isacharparenright}} alone will not do.
 
 In general, every datatype $t$ comes with a theorem
-\isa{$t$.split} which can be declared to be a \bfindex{split rule} either
+$t$\isa{{\isachardot}split} which can be declared to be a \bfindex{split rule} either
 locally as above, or by giving it the \isa{split} attribute globally:%
 \end{isamarkuptext}%
 \isacommand{lemmas}\ {\isacharbrackleft}split{\isacharbrackright}\ {\isacharequal}\ list{\isachardot}split%
@@ -68,20 +68,22 @@
 \begin{isamarkuptext}%
 The above split rules intentionally only affect the conclusion of a
 subgoal.  If you want to split an \isa{if} or \isa{case}-expression in
-the assumptions, you have to apply \isa{split\_if\_asm} or $t$\isa{.split_asm}:%
+the assumptions, you have to apply \isa{split{\isacharunderscore}if{\isacharunderscore}asm} or
+$t$\isa{{\isachardot}split{\isacharunderscore}asm}:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}if\ xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ then\ ys\ {\isachartilde}{\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ else\ ys\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}{\isacharequal}{\isachargreater}\ xs\ {\isacharat}\ ys\ {\isachartilde}{\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isachardoublequote}\isanewline
 \isacommand{apply}{\isacharparenleft}simp\ only{\isacharcolon}\ split{\isacharcolon}\ split{\isacharunderscore}if{\isacharunderscore}asm{\isacharparenright}%
 \begin{isamarkuptxt}%
 \noindent
 In contrast to splitting the conclusion, this actually creates two
-separate subgoals (which are solved by \isa{simp\_all}):
+separate subgoals (which are solved by \isa{simp{\isacharunderscore}all}):
 \begin{isabelle}
 \ \isadigit{1}{\isachardot}\ {\isasymlbrakk}\mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharsemicolon}\ \mbox{ys}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isasymrbrakk}\ {\isasymLongrightarrow}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharat}\ \mbox{ys}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\isanewline
 \ \isadigit{2}{\isachardot}\ {\isasymlbrakk}\mbox{xs}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isacharsemicolon}\ \mbox{ys}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{xs}\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}
 \end{isabelle}
 If you need to split both in the assumptions and the conclusion,
-use $t$\isa{.splits} which subsumes $t$\isa{.split} and $t$\isa{.split_asm}.%
+use $t$\isa{{\isachardot}splits} which subsumes $t$\isa{{\isachardot}split} and
+$t$\isa{{\isachardot}split{\isacharunderscore}asm}. Analogously, there is \isa{if{\isacharunderscore}splits}.%
 \end{isamarkuptxt}%
 \end{isabellebody}%
 %%% Local Variables:
--- a/doc-src/TutorialI/Misc/document/cond_rewr.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/cond_rewr.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -11,17 +11,17 @@
 \noindent
 Note the use of ``\ttindexboldpos{,}{$Isar}'' to string together a
 sequence of methods. Assuming that the simplification rule
-\isa{{\isacharparenleft}rev\ \mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}\mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}}
+\isa{{\isacharparenleft}rev\ xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}}
 is present as well,%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymLongrightarrow}\ hd{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharhash}\ tl{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ rev\ xs{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
 is proved by plain simplification:
-the conditional equation \isa{hd_Cons_tl} above
-can simplify \isa{hd(rev~xs)~\#~tl(rev~xs)} to \isa{rev xs}
-because the corresponding precondition \isa{rev xs \isasymnoteq\ []}
-simplifies to \isa{xs \isasymnoteq\ []}, which is exactly the local
+the conditional equation \isa{hd{\isacharunderscore}Cons{\isacharunderscore}tl} above
+can simplify \isa{hd\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharhash}\ tl\ {\isacharparenleft}rev\ xs{\isacharparenright}} to \isa{rev\ xs}
+because the corresponding precondition \isa{rev\ xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}}
+simplifies to \isa{xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}}, which is exactly the local
 assumption of the subgoal.%
 \end{isamarkuptext}%
 \end{isabellebody}%
--- a/doc-src/TutorialI/Misc/document/let_rewr.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/let_rewr.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -1,10 +1,21 @@
 %
 \begin{isabellebody}%
+%
+\isamarkupsubsubsection{Simplifying let-expressions}
+%
+\begin{isamarkuptext}%
+\index{simplification!of let-expressions}
+Proving a goal containing \isaindex{let}-expressions almost invariably
+requires the \isa{let}-con\-structs to be expanded at some point. Since
+\isa{let}-\isa{in} is just syntactic sugar for a predefined constant
+(called \isa{Let}), expanding \isa{let}-constructs means rewriting with
+\isa{Let{\isacharunderscore}def}:%
+\end{isamarkuptext}%
 \isacommand{lemma}\ {\isachardoublequote}{\isacharparenleft}let\ xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ in\ xs{\isacharat}ys{\isacharat}xs{\isacharparenright}\ {\isacharequal}\ ys{\isachardoublequote}\isanewline
 \isacommand{by}{\isacharparenleft}simp\ add{\isacharcolon}\ Let{\isacharunderscore}def{\isacharparenright}%
 \begin{isamarkuptext}%
 If, in a particular context, there is no danger of a combinatorial explosion
-of nested \isa{let}s one could even add \isa{Let_def} permanently:%
+of nested \isa{let}s one could even add \isa{Let{\isacharunderscore}def} permanently:%
 \end{isamarkuptext}%
 \isacommand{lemmas}\ {\isacharbrackleft}simp{\isacharbrackright}\ {\isacharequal}\ Let{\isacharunderscore}def\end{isabellebody}%
 %%% Local Variables:
--- a/doc-src/TutorialI/Misc/document/natsum.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/natsum.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -7,7 +7,7 @@
 \begin{quote}
 
 \begin{isabelle}%
-case\ \mbox{n}\ of\ \isadigit{0}\ {\isasymRightarrow}\ \isadigit{0}\ {\isacharbar}\ Suc\ \mbox{m}\ {\isasymRightarrow}\ \mbox{m}
+case\ n\ of\ \isadigit{0}\ {\isasymRightarrow}\ \isadigit{0}\ {\isacharbar}\ Suc\ m\ {\isasymRightarrow}\ m
 \end{isabelle}%
 
 \end{quote}
--- a/doc-src/TutorialI/Misc/document/pairs.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/pairs.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -16,8 +16,8 @@
 In addition to explicit $\lambda$-abstractions, tuple patterns can be used in
 most variable binding constructs. Typical examples are
 \begin{quote}
-\isa{let\ {\isacharparenleft}\mbox{x}{\isacharcomma}\ \mbox{y}{\isacharparenright}\ {\isacharequal}\ \mbox{f}\ \mbox{z}\ in\ {\isacharparenleft}\mbox{y}{\isacharcomma}\ \mbox{x}{\isacharparenright}}\\
-\isa{case\ \mbox{xs}\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{0}\ {\isacharbar}\ {\isacharparenleft}\mbox{x}{\isacharcomma}\ \mbox{y}{\isacharparenright}\ {\isacharhash}\ \mbox{zs}\ {\isasymRightarrow}\ \mbox{x}\ {\isacharplus}\ \mbox{y}}
+\isa{let\ {\isacharparenleft}x{\isacharcomma}\ y{\isacharparenright}\ {\isacharequal}\ f\ z\ in\ {\isacharparenleft}y{\isacharcomma}\ x{\isacharparenright}}\\
+\isa{case\ xs\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{0}\ {\isacharbar}\ {\isacharparenleft}x{\isacharcomma}\ y{\isacharparenright}\ {\isacharhash}\ zs\ {\isasymRightarrow}\ x\ {\isacharplus}\ y}
 \end{quote}
 Further important examples are quantifiers and sets (see~\S\ref{quant-pats}).%
 \end{isamarkuptext}%
--- a/doc-src/TutorialI/Misc/document/trace_simp.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/trace_simp.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -3,7 +3,7 @@
 %
 \begin{isamarkuptext}%
 Using the simplifier effectively may take a bit of experimentation.  Set the
-\ttindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
+\isaindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
 on:%
 \end{isamarkuptext}%
 \isacommand{ML}\ {\isachardoublequote}set\ trace{\isacharunderscore}simp{\isachardoublequote}\isanewline
--- a/doc-src/TutorialI/Misc/document/types.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/document/types.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -24,8 +24,8 @@
 \ \ \ \ \ exor{\isacharunderscore}def{\isacharcolon}\ {\isachardoublequote}exor\ A\ B\ {\isasymequiv}\ A\ {\isasymand}\ {\isasymnot}B\ {\isasymor}\ {\isasymnot}A\ {\isasymand}\ B{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent%
-where \isacommand{defs}\indexbold{*defs} is a keyword and \isa{nand_def} and
-\isa{exor_def} are user-supplied names.
+where \isacommand{defs}\indexbold{*defs} is a keyword and
+\isa{nand{\isacharunderscore}def} and \isa{exor{\isacharunderscore}def} are user-supplied names.
 The symbol \indexboldpos{\isasymequiv}{$IsaEq} is a special form of equality
 that must be used in constant definitions.
 Declarations and definitions can also be merged%
@@ -36,7 +36,7 @@
 \ \ \ \ \ \ \ \ \ {\isachardoublequote}exor\isadigit{2}\ A\ B\ {\isasymequiv}\ {\isacharparenleft}A\ {\isasymor}\ B{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}{\isasymnot}A\ {\isasymor}\ {\isasymnot}B{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent\indexbold{*constdefs}%
-in which case the default name of each definition is \isa{$f$_def}, where
+in which case the default name of each definition is $f$\isa{{\isacharunderscore}def}, where
 $f$ is the name of the defined constant.%
 \end{isamarkuptext}%
 \end{isabellebody}%
--- a/doc-src/TutorialI/Misc/let_rewr.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/let_rewr.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -1,12 +1,23 @@
 (*<*)
 theory let_rewr = Main:;
 (*>*)
+
+subsubsection{*Simplifying let-expressions*}
+
+text{*\index{simplification!of let-expressions}
+Proving a goal containing \isaindex{let}-expressions almost invariably
+requires the @{text"let"}-con\-structs to be expanded at some point. Since
+@{text"let"}-@{text"in"} is just syntactic sugar for a predefined constant
+(called @{term"Let"}), expanding @{text"let"}-constructs means rewriting with
+@{thm[source]Let_def}:
+*}
+
 lemma "(let xs = [] in xs@ys@xs) = ys";
 by(simp add: Let_def);
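+
+text{*\noindent
+Here @{thm[source]Let_def} is just the defining equation of the constant
+@{term"Let"}, essentially @{text"Let s f == f s"}: unfolding it substitutes
+the bound value and ordinary rewriting finishes the proof.
+*}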
 
 text{*
 If, in a particular context, there is no danger of a combinatorial explosion
-of nested \isa{let}s one could even add \isa{Let_def} permanently:
+of nested @{text"let"}s one could even add @{thm[source]Let_def} permanently:
 *}
 lemmas [simp] = Let_def;
 (*<*)
--- a/doc-src/TutorialI/Misc/natsum.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/natsum.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -2,7 +2,7 @@
 theory natsum = Main:;
 (*>*)
 text{*\noindent
-In particular, there are \isa{case}-expressions, for example
+In particular, there are @{text"case"}-expressions, for example
 \begin{quote}
 @{term[display]"case n of 0 => 0 | Suc m => m"}
 \end{quote}
--- a/doc-src/TutorialI/Misc/pairs.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/pairs.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -4,7 +4,7 @@
 text{*
 HOL also has pairs: \isa{($a@1$,$a@2$)} is of type \isa{$\tau@1$ *
   $\tau@2$} provided each $a@i$ is of type $\tau@i$. The components of a pair
-are extracted by \isa{fst} and \isa{snd}: \isa{fst($x$,$y$) = $x$} and
+are extracted by @{term"fst"} and @{term"snd"}: \isa{fst($x$,$y$) = $x$} and
 \isa{snd($x$,$y$) = $y$}. Tuples are simulated by pairs nested to the right:
 \isa{($a@1$,$a@2$,$a@3$)} stands for \isa{($a@1$,($a@2$,$a@3$))} and
 \isa{$\tau@1$ * $\tau@2$ * $\tau@3$} for \isa{$\tau@1$ * ($\tau@2$ *
--- a/doc-src/TutorialI/Misc/prime_def.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/prime_def.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -6,8 +6,8 @@
 
     "prime(p) \\<equiv> 1 < p \\<and> (m dvd p \\<longrightarrow> (m=1 \\<or> m=p))";
 text{*\noindent\small
-where \isa{dvd} means ``divides''.
-Isabelle rejects this ``definition'' because of the extra \isa{m} on the
+where @{text"dvd"} means ``divides''.
+Isabelle rejects this ``definition'' because of the extra @{term"m"} on the
 right-hand side, which would introduce an inconsistency (why?). What you
 should have written is
 *}
--- a/doc-src/TutorialI/Misc/trace_simp.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/trace_simp.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -4,7 +4,7 @@
 
 text{*
 Using the simplifier effectively may take a bit of experimentation.  Set the
-\ttindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
+\isaindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
 on:
 *}
 ML "set trace_simp";
--- a/doc-src/TutorialI/Misc/types.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Misc/types.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -19,7 +19,7 @@
 \label{sec:ConstDefinitions}
 \indexbold{definition}
 
-The above constants \isa{nand} and \isa{exor} are non-recursive and can
+The above constants @{term"nand"} and @{term"exor"} are non-recursive and can
 therefore be defined directly by
 *}
 
@@ -27,8 +27,8 @@
      exor_def: "exor A B \\<equiv> A \\<and> \\<not>B \\<or> \\<not>A \\<and> B";
 
 text{*\noindent%
-where \isacommand{defs}\indexbold{*defs} is a keyword and \isa{nand_def} and
-\isa{exor_def} are user-supplied names.
+where \isacommand{defs}\indexbold{*defs} is a keyword and
+@{thm[source]nand_def} and @{thm[source]exor_def} are user-supplied names.
 The symbol \indexboldpos{\isasymequiv}{$IsaEq} is a special form of equality
 that must be used in constant definitions.
 Declarations and definitions can also be merged
@@ -40,7 +40,7 @@
          "exor2 A B \\<equiv> (A \\<or> B) \\<and> (\\<not>A \\<or> \\<not>B)";
 
 text{*\noindent\indexbold{*constdefs}%
-in which case the default name of each definition is \isa{$f$_def}, where
+in which case the default name of each definition is $f$@{text"_def"}, where
 $f$ is the name of the defined constant.*}(*<*)
 end
 (*>*)
--- a/doc-src/TutorialI/Recdef/Induction.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/Induction.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -10,7 +10,7 @@
 again induction. But this time the structural form of induction that comes
 with datatypes is unlikely to work well---otherwise we could have defined the
 function by \isacommand{primrec}. Therefore \isacommand{recdef} automatically
-proves a suitable induction rule $f$\isa{.induct} that follows the
+proves a suitable induction rule $f$@{text".induct"} that follows the
 recursion pattern of the particular function $f$. We call this
 \textbf{recursion induction}. Roughly speaking, it
 requires you to prove for each \isacommand{recdef} equation that the property
@@ -21,16 +21,16 @@
 lemma "map f (sep(x,xs)) = sep(f x, map f xs)";
 
 txt{*\noindent
-involving the predefined \isa{map} functional on lists: \isa{map f xs}
-is the result of applying \isa{f} to all elements of \isa{xs}. We prove
-this lemma by recursion induction w.r.t. \isa{sep}:
+involving the predefined @{term"map"} functional on lists: @{term"map f xs"}
+is the result of applying @{term"f"} to all elements of @{term"xs"}. We prove
+this lemma by recursion induction w.r.t. @{term"sep"}:
 *}
 
 apply(induct_tac x xs rule: sep.induct);
 
 txt{*\noindent
 The resulting proof state has three subgoals corresponding to the three
-clauses for \isa{sep}:
+clauses for @{term"sep"}:
 \begin{isabelle}
 ~1.~{\isasymAnd}a.~map~f~(sep~(a,~[]))~=~sep~(f~a,~map~f~[])\isanewline
 ~2.~{\isasymAnd}a~x.~map~f~(sep~(a,~[x]))~=~sep~(f~a,~map~f~[x])\isanewline
@@ -47,24 +47,24 @@
 Try proving the above lemma by structural induction, and you find that you
 need an additional case distinction. What is worse, the names of variables
 are invented by Isabelle and have nothing to do with the names in the
-definition of \isa{sep}.
+definition of @{term"sep"}.
 
 In general, the format of invoking recursion induction is
-\begin{ttbox}
-apply(induct_tac \(x@1 \dots x@n\) rule: \(f\).induct)
-\end{ttbox}\index{*induct_tac}%
+\begin{quote}
+\isacommand{apply}@{text"(induct_tac"} $x@1 \dots x@n$ @{text"rule:"} $f$@{text".induct)"}
+\end{quote}\index{*induct_tac}%
 where $x@1~\dots~x@n$ is a list of free variables in the subgoal and $f$ the
 name of a function that takes an $n$-tuple. Usually the subgoal will
 contain the term $f~x@1~\dots~x@n$ but this need not be the case. The
-induction rules do not mention $f$ at all. For example \isa{sep.induct}
+induction rules do not mention $f$ at all. For example @{thm[source]sep.induct}
 \begin{isabelle}
 {\isasymlbrakk}~{\isasymAnd}a.~P~a~[];\isanewline
 ~~{\isasymAnd}a~x.~P~a~[x];\isanewline
 ~~{\isasymAnd}a~x~y~zs.~P~a~(y~\#~zs)~{\isasymLongrightarrow}~P~a~(x~\#~y~\#~zs){\isasymrbrakk}\isanewline
 {\isasymLongrightarrow}~P~u~v%
 \end{isabelle}
-merely says that in order to prove a property \isa{P} of \isa{u} and
-\isa{v} you need to prove it for the three cases where \isa{v} is the
+merely says that in order to prove a property @{term"P"} of @{term"u"} and
+@{term"v"} you need to prove it for the three cases where @{term"v"} is the
 empty list, the singleton list, and the list with at least two elements
 (in which case you may assume it holds for the tail of that list).
 *}
--- a/doc-src/TutorialI/Recdef/Nested1.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/Nested1.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -26,9 +26,9 @@
 @{term[display]"t : set ts --> size t < Suc (term_list_size ts)"}
 \end{quote}
 where @{term"set"} returns the set of elements of a list (no special
-knowledge of sets is required in the following) and @{name"term_list_size ::
+knowledge of sets is required in the following) and @{text"term_list_size ::
 term list \<Rightarrow> nat"} is an auxiliary function automatically defined by Isabelle
-(when @{name"term"} was defined).  First we have to understand why the
+(when @{text"term"} was defined).  First we have to understand why the
 recursive call of @{term"trev"} underneath @{term"map"} leads to the above
 condition. The reason is that \isacommand{recdef} ``knows'' that @{term"map"}
 will apply @{term"trev"} only to elements of @{term"ts"}. Thus the above

--- a/doc-src/TutorialI/Recdef/Nested2.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/Nested2.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -19,7 +19,7 @@
 applies it automatically and the above definition of @{term"trev"}
 succeeds now. As a reward for our effort, we can now prove the desired
 lemma directly. The key is the fact that we no longer need the verbose
-induction schema for type @{name"term"} but the simpler one arising from
+induction schema for type @{text"term"} but the simpler one arising from
 @{term"trev"}:
 *};
 
@@ -39,7 +39,7 @@
 text{*\noindent
 If the proof of the induction step mystifies you, we recommend going through
 the chain of simplification steps in detail; you will probably need the help of
-@{name"trace_simp"}.
+@{text"trace_simp"}.
 %\begin{quote}
 %{term[display]"trev(trev(App f ts))"}\\
 %{term[display]"App f (rev(map trev (rev(map trev ts))))"}\\
@@ -66,7 +66,7 @@
 \isacommand{recdef} would try to prove the unprovable @{term"size t < Suc
 (term_list_size ts)"}, without any assumption about @{term"t"}.  Therefore
 \isacommand{recdef} has been supplied with the congruence theorem
-@{name"map_cong"}:
+@{thm[source]map_cong}:
 \begin{quote}
 @{thm[display,margin=50]"map_cong"[no_vars]}
 \end{quote}
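(To keep the excerpt self-contained, the definitions this example revolves around presumably have roughly the following shape; the measure and the proof commands are assumptions, while the constructors and the trev equations follow from the surrounding text. The termination condition is proved as a simp rule before the recdef so that, together with map_cong, recdef can discharge it.)

  datatype ('a,'b)"term" = Var 'a | App 'b "('a,'b)term list"

  lemma [simp]: "t : set ts --> size t < Suc (term_list_size ts)"
  by(induct_tac ts, auto)

  consts trev :: "('a,'b)term => ('a,'b)term"
  recdef trev "measure size"
    "trev (Var x)    = Var x"
    "trev (App f ts) = App f (rev (map trev ts))"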
--- a/doc-src/TutorialI/Recdef/document/Induction.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/document/Induction.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -9,7 +9,7 @@
 again induction. But this time the structural form of induction that comes
 with datatypes is unlikely to work well---otherwise we could have defined the
 function by \isacommand{primrec}. Therefore \isacommand{recdef} automatically
-proves a suitable induction rule $f$\isa{.induct} that follows the
+proves a suitable induction rule $f$\isa{{\isachardot}induct} that follows the
 recursion pattern of the particular function $f$. We call this
 \textbf{recursion induction}. Roughly speaking, it
 requires you to prove for each \isacommand{recdef} equation that the property
@@ -19,7 +19,7 @@
 \isacommand{lemma}\ {\isachardoublequote}map\ f\ {\isacharparenleft}sep{\isacharparenleft}x{\isacharcomma}xs{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ sep{\isacharparenleft}f\ x{\isacharcomma}\ map\ f\ xs{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptxt}%
 \noindent
-involving the predefined \isa{map} functional on lists: \isa{map f xs}
+involving the predefined \isa{map} functional on lists: \isa{map\ f\ xs}
 is the result of applying \isa{f} to all elements of \isa{xs}. We prove
 this lemma by recursion induction w.r.t. \isa{sep}:%
 \end{isamarkuptxt}%
@@ -45,13 +45,13 @@
 definition of \isa{sep}.
 
 In general, the format of invoking recursion induction is
-\begin{ttbox}
-apply(induct_tac \(x@1 \dots x@n\) rule: \(f\).induct)
-\end{ttbox}\index{*induct_tac}%
+\begin{quote}
+\isacommand{apply}\isa{{\isacharparenleft}induct{\isacharunderscore}tac\ {\isacharparenleft}}$x@1 \dots x@n$ \isa{rule{\isacharcolon}} $f$\isa{{\isachardot}induct{\isacharparenright}}
+\end{quote}\index{*induct_tac}%
 where $x@1~\dots~x@n$ is a list of free variables in the subgoal and $f$ the
 name of a function that takes an $n$-tuple. Usually the subgoal will
 contain the term $f~x@1~\dots~x@n$ but this need not be the case. The
-induction rules do not mention $f$ at all. For example \isa{sep.induct}
+induction rules do not mention $f$ at all. For example \isa{sep{\isachardot}induct}
 \begin{isabelle}
 {\isasymlbrakk}~{\isasymAnd}a.~P~a~[];\isanewline
 ~~{\isasymAnd}a~x.~P~a~[x];\isanewline
--- a/doc-src/TutorialI/Recdef/document/Nested1.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/document/Nested1.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -23,7 +23,7 @@
 \begin{quote}
 
 \begin{isabelle}%
-\mbox{t}\ {\isasymin}\ set\ \mbox{ts}\ {\isasymlongrightarrow}\ size\ \mbox{t}\ {\isacharless}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ \mbox{ts}{\isacharparenright}
+t\ {\isasymin}\ set\ ts\ {\isasymlongrightarrow}\ size\ t\ {\isacharless}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}
 \end{isabelle}%
 
 \end{quote}
@@ -32,9 +32,9 @@
 (when \isa{term} was defined).  First we have to understand why the
 recursive call of \isa{trev} underneath \isa{map} leads to the above
 condition. The reason is that \isacommand{recdef} ``knows'' that \isa{map}
-will apply \isa{trev} only to elements of \isa{\mbox{ts}}. Thus the above
-condition expresses that the size of the argument \isa{\mbox{t}\ {\isasymin}\ set\ \mbox{ts}} of any
-recursive call of \isa{trev} is strictly less than \isa{size\ {\isacharparenleft}App\ \mbox{f}\ \mbox{ts}{\isacharparenright}\ {\isacharequal}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ \mbox{ts}{\isacharparenright}}.  We will now prove the termination condition and
+will apply \isa{trev} only to elements of \isa{ts}. Thus the above
+condition expresses that the size of the argument \isa{t\ {\isasymin}\ set\ ts} of any
+recursive call of \isa{trev} is strictly less than \isa{size\ {\isacharparenleft}App\ f\ ts{\isacharparenright}\ {\isacharequal}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}}.  We will now prove the termination condition and
 continue with our definition.  Below we return to the question of how
 \isacommand{recdef} ``knows'' about \isa{map}.%
 \end{isamarkuptext}%
--- a/doc-src/TutorialI/Recdef/document/Nested2.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/document/Nested2.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -21,12 +21,12 @@
 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ t\ rule{\isacharcolon}trev{\isachardot}induct{\isacharparenright}%
 \begin{isamarkuptxt}%
 \noindent
-This leaves us with a trivial base case \isa{trev\ {\isacharparenleft}trev\ {\isacharparenleft}Var\ \mbox{x}{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ Var\ \mbox{x}} and the step case
+This leaves us with a trivial base case \isa{trev\ {\isacharparenleft}trev\ {\isacharparenleft}Var\ x{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ Var\ x} and the step case
 \begin{quote}
 
 \begin{isabelle}%
-{\isasymforall}\mbox{t}{\isachardot}\ \mbox{t}\ {\isasymin}\ set\ \mbox{ts}\ {\isasymlongrightarrow}\ trev\ {\isacharparenleft}trev\ \mbox{t}{\isacharparenright}\ {\isacharequal}\ \mbox{t}\ {\isasymLongrightarrow}\isanewline
-trev\ {\isacharparenleft}trev\ {\isacharparenleft}App\ \mbox{f}\ \mbox{ts}{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ App\ \mbox{f}\ \mbox{ts}
+{\isasymforall}t{\isachardot}\ t\ {\isasymin}\ set\ ts\ {\isasymlongrightarrow}\ trev\ {\isacharparenleft}trev\ t{\isacharparenright}\ {\isacharequal}\ t\ {\isasymLongrightarrow}\isanewline
+trev\ {\isacharparenleft}trev\ {\isacharparenleft}App\ f\ ts{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ App\ f\ ts
 \end{isabelle}%
 
 \end{quote}
@@ -50,7 +50,7 @@
 
 The above definition of \isa{trev} is superior to the one in
 \S\ref{sec:nested-datatype} because it brings \isa{rev} into play, about
-which already know a lot, in particular \isa{rev\ {\isacharparenleft}rev\ \mbox{xs}{\isacharparenright}\ {\isacharequal}\ \mbox{xs}}.
+which we already know a lot, in particular \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs}.
 Thus this proof is a good example of an important principle:
 \begin{quote}
 \emph{Choose your definitions carefully\\
@@ -60,15 +60,15 @@
 Let us now return to the question of how \isacommand{recdef} can come up with
 sensible termination conditions in the presence of higher-order functions
 like \isa{map}. For a start, if nothing were known about \isa{map},
-\isa{map\ trev\ \mbox{ts}} might apply \isa{trev} to arbitrary terms, and thus
-\isacommand{recdef} would try to prove the unprovable \isa{size\ \mbox{t}\ {\isacharless}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ \mbox{ts}{\isacharparenright}}, without any assumption about \isa{\mbox{t}}.  Therefore
+\isa{map\ trev\ ts} might apply \isa{trev} to arbitrary terms, and thus
+\isacommand{recdef} would try to prove the unprovable \isa{size\ t\ {\isacharless}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}}, without any assumption about \isa{t}.  Therefore
 \isacommand{recdef} has been supplied with the congruence theorem
 \isa{map{\isacharunderscore}cong}:
 \begin{quote}
 
 \begin{isabelle}%
-{\isasymlbrakk}\mbox{xs}\ {\isacharequal}\ \mbox{ys}{\isacharsemicolon}\ {\isasymAnd}\mbox{x}{\isachardot}\ \mbox{x}\ {\isasymin}\ set\ \mbox{ys}\ {\isasymLongrightarrow}\ \mbox{f}\ \mbox{x}\ {\isacharequal}\ \mbox{g}\ \mbox{x}{\isasymrbrakk}\isanewline
-{\isasymLongrightarrow}\ map\ \mbox{f}\ \mbox{xs}\ {\isacharequal}\ map\ \mbox{g}\ \mbox{ys}
+{\isasymlbrakk}xs\ {\isacharequal}\ ys{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ x\ {\isasymin}\ set\ ys\ {\isasymLongrightarrow}\ f\ x\ {\isacharequal}\ g\ x{\isasymrbrakk}\isanewline
+{\isasymLongrightarrow}\ map\ f\ xs\ {\isacharequal}\ map\ g\ ys
 \end{isabelle}%
 
 \end{quote}
--- a/doc-src/TutorialI/Recdef/document/examples.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/document/examples.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -12,13 +12,13 @@
 \begin{isamarkuptext}%
 \noindent
 The definition of \isa{fib} is accompanied by a \bfindex{measure function}
-\isa{{\isasymlambda}\mbox{n}{\isachardot}\ \mbox{n}} which maps the argument of \isa{fib} to a
+\isa{{\isasymlambda}n{\isachardot}\ n} which maps the argument of \isa{fib} to a
 natural number. The requirement is that in each equation the measure of the
 argument on the left-hand side is strictly greater than the measure of the
 argument of each recursive call. In the case of \isa{fib} this is
 obviously true because the measure function is the identity and
-\isa{Suc\ {\isacharparenleft}Suc\ \mbox{x}{\isacharparenright}} is strictly greater than both \isa{x} and
-\isa{Suc\ \mbox{x}}.
+\isa{Suc\ {\isacharparenleft}Suc\ x{\isacharparenright}} is strictly greater than both \isa{x} and
+\isa{Suc\ x}.
 
 Slightly more interesting is the insertion of a fixed element
 between any two elements of a list:%
@@ -50,7 +50,7 @@
 \begin{isamarkuptext}%
 \noindent
 This defines exactly the same function as \isa{sep} above, i.e.\
-\isa{sep1 = sep}.
+\isa{sep\isadigit{1}\ {\isacharequal}\ sep}.
 
 \begin{warn}
   \isacommand{recdef} only takes the first argument of a (curried)
@@ -76,7 +76,7 @@
 \begin{isamarkuptext}%
 \noindent
 For non-recursive functions the termination measure degenerates to the empty
-set \isa{\{\}}.%
+set \isa{{\isacharbraceleft}{\isacharbraceright}}.%
 \end{isamarkuptext}%
 \end{isabellebody}%
 %%% Local Variables:
--- a/doc-src/TutorialI/Recdef/document/simplification.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/document/simplification.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -19,7 +19,7 @@
 \begin{quote}
 
 \begin{isabelle}%
-\mbox{n}\ {\isasymnoteq}\ \isadigit{0}\ {\isasymLongrightarrow}\ \mbox{m}\ mod\ \mbox{n}\ {\isacharless}\ \mbox{n}
+n\ {\isasymnoteq}\ \isadigit{0}\ {\isasymLongrightarrow}\ m\ mod\ n\ {\isacharless}\ n
 \end{isabelle}%
 
 \end{quote}
@@ -34,7 +34,7 @@
 \begin{quote}
 
 \begin{isabelle}%
-gcd\ {\isacharparenleft}\mbox{m}{\isacharcomma}\ \mbox{n}{\isacharparenright}\ {\isacharequal}\ \mbox{k}
+gcd\ {\isacharparenleft}m{\isacharcomma}\ n{\isacharparenright}\ {\isacharequal}\ k
 \end{isabelle}%
 
 \end{quote}
@@ -42,7 +42,7 @@
 \begin{quote}
 
 \begin{isabelle}%
-{\isacharparenleft}if\ \mbox{n}\ {\isacharequal}\ \isadigit{0}\ then\ \mbox{m}\ else\ gcd\ {\isacharparenleft}\mbox{n}{\isacharcomma}\ \mbox{m}\ mod\ \mbox{n}{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ \mbox{k}
+{\isacharparenleft}if\ n\ {\isacharequal}\ \isadigit{0}\ then\ m\ else\ gcd\ {\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ k
 \end{isabelle}%
 
 \end{quote}
@@ -50,19 +50,20 @@
 \begin{quote}
 
 \begin{isabelle}%
-{\isacharparenleft}\mbox{n}\ {\isacharequal}\ \isadigit{0}\ {\isasymlongrightarrow}\ \mbox{m}\ {\isacharequal}\ \mbox{k}{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}\mbox{n}\ {\isasymnoteq}\ \isadigit{0}\ {\isasymlongrightarrow}\ gcd\ {\isacharparenleft}\mbox{n}{\isacharcomma}\ \mbox{m}\ mod\ \mbox{n}{\isacharparenright}\ {\isacharequal}\ \mbox{k}{\isacharparenright}
+{\isacharparenleft}n\ {\isacharequal}\ \isadigit{0}\ {\isasymlongrightarrow}\ m\ {\isacharequal}\ k{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}n\ {\isasymnoteq}\ \isadigit{0}\ {\isasymlongrightarrow}\ gcd\ {\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}\ {\isacharequal}\ k{\isacharparenright}
 \end{isabelle}%
 
 \end{quote}
-Since the recursive call \isa{gcd\ {\isacharparenleft}\mbox{n}{\isacharcomma}\ \mbox{m}\ mod\ \mbox{n}{\isacharparenright}} is no longer protected by
+Since the recursive call \isa{gcd\ {\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}} is no longer protected by
 an \isa{if}, it is unfolded again, which leads to an infinite chain of
 simplification steps. Fortunately, this problem can be avoided in many
 different ways.
 
-The most radical solution is to disable the offending \\isa{split{\isacharunderscore}if} as
-shown in the section on case splits in \S\ref{sec:Simplification}.  However,
-we do not recommend this because it means you will often have to invoke the
-rule explicitly when \isa{if} is involved.
+The most radical solution is to disable the offending
+\isa{split{\isacharunderscore}if} as shown in the section on case splits in
+\S\ref{sec:Simplification}.  However, we do not recommend this because it
+means you will often have to invoke the rule explicitly when \isa{if} is
+involved.
 
 If possible, the definition should be given by pattern matching on the left
 rather than \isa{if} on the right. In the case of \isa{gcd} the
@@ -75,7 +76,7 @@
 \begin{isamarkuptext}%
 \noindent
 Note that the order of equations is important and hides the side condition
-\isa{\mbox{n}\ {\isasymnoteq}\ \isadigit{0}}. Unfortunately, in general the case distinction
+\isa{n\ {\isasymnoteq}\ \isadigit{0}}. Unfortunately, in general the case distinction
 may not be expressible by pattern matching.
 
 A very simple alternative is to replace \isa{if} by \isa{case}, which
--- a/doc-src/TutorialI/Recdef/document/termination.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/document/termination.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -6,10 +6,10 @@
 its termination with the help of the user-supplied measure.  All of the above
 examples are simple enough that Isabelle can prove automatically that the
 measure of the argument goes down in each recursive call. As a result,
-\isa{$f$.simps} will contain the defining equations (or variants derived from
-them) as theorems. For example, look (via \isacommand{thm}) at
-\isa{sep.simps} and \isa{sep1.simps} to see that they define the same
-function. What is more, those equations are automatically declared as
+$f$\isa{{\isachardot}simps} will contain the defining equations (or variants derived
+from them) as theorems. For example, look (via \isacommand{thm}) at
+\isa{sep{\isachardot}simps} and \isa{sep\isadigit{1}{\isachardot}simps} to see that they define
+the same function. What is more, those equations are automatically declared as
 simplification rules.
 
 In general, Isabelle may not be able to prove all termination conditions
@@ -29,7 +29,7 @@
 \isacommand{lemma}\ termi{\isacharunderscore}lem{\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}{\isasymnot}\ x\ {\isasymle}\ y\ {\isasymLongrightarrow}\ x\ {\isacharminus}\ Suc\ y\ {\isacharless}\ x\ {\isacharminus}\ y{\isachardoublequote}%
 \begin{isamarkuptxt}%
 \noindent
-It was not proved automatically because of the special nature of \isa{-}
+It was not proved automatically because of the special nature of \isa{{\isacharminus}}
 on \isa{nat}. This requires more arithmetic than is tried by default:%
 \end{isamarkuptxt}%
 \isacommand{by}{\isacharparenleft}arith{\isacharparenright}%
@@ -44,8 +44,8 @@
 \ \ {\isachardoublequote}g{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}if\ x\ {\isasymle}\ y\ then\ x\ else\ g{\isacharparenleft}x{\isacharcomma}y{\isacharplus}\isadigit{1}{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-This time everything works fine. Now \isa{g.simps} contains precisely the
-stated recursion equation for \isa{g} and they are simplification
+This time everything works fine. Now \isa{g{\isachardot}simps} contains precisely
+the stated recursion equation for \isa{g} and they are simplification
 rules. Thus we can automatically prove%
 \end{isamarkuptext}%
 \isacommand{theorem}\ wow{\isacharcolon}\ {\isachardoublequote}g{\isacharparenleft}\isadigit{1}{\isacharcomma}\isadigit{0}{\isacharparenright}\ {\isacharequal}\ g{\isacharparenleft}\isadigit{1}{\isacharcomma}\isadigit{1}{\isacharparenright}{\isachardoublequote}\isanewline
@@ -54,7 +54,7 @@
 \noindent
 More exciting theorems require induction, which is discussed below.
 
-Because lemma \isa{termi_lem} above was only turned into a
+Because lemma \isa{termi{\isacharunderscore}lem} above was only turned into a
 simplification rule for the sake of the termination proof, we may want to
 disable it again:%
 \end{isamarkuptext}%
@@ -63,22 +63,23 @@
 The attentive reader may wonder why we chose to call our function \isa{g}
 rather than \isa{f} the second time around. The reason is that, despite
 the failed termination proof, the definition of \isa{f} did not
-fail (and thus we could not define it a second time). However, all theorems
-about \isa{f}, for example \isa{f.simps}, carry as a precondition the
-unproved termination condition. Moreover, the theorems \isa{f.simps} are
-not simplification rules. However, this mechanism allows a delayed proof of
-termination: instead of proving \isa{termi_lem} up front, we could prove
+fail, and thus we could not define it a second time. However, all theorems
+about \isa{f}, for example \isa{f{\isachardot}simps}, carry as a precondition
+the unproved termination condition. Moreover, the theorems
+\isa{f{\isachardot}simps} are not simplification rules. However, this mechanism
+allows a delayed proof of termination: instead of proving
+\isa{termi{\isacharunderscore}lem} up front, we could prove 
 it later on and then use it to remove the preconditions from the theorems
 about \isa{f}. In most cases this is more cumbersome than proving things
-up front
+up front.
 %FIXME, with one exception: nested recursion.
 
 Although all the above examples employ measure functions, \isacommand{recdef}
 allows arbitrary wellfounded relations. For example, termination of
-Ackermann's function requires the lexicographic product \isa{<*lex*>}:%
+Ackermann's function requires the lexicographic product \isa{{\isacharless}{\isacharasterisk}lex{\isacharasterisk}{\isachargreater}}:%
 \end{isamarkuptext}%
 \isacommand{consts}\ ack\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}nat{\isacharasterisk}nat\ {\isasymRightarrow}\ nat{\isachardoublequote}\isanewline
-\isacommand{recdef}\ ack\ {\isachardoublequote}measure{\isacharparenleft}{\isacharpercent}m{\isachardot}\ m{\isacharparenright}\ {\isacharless}{\isacharasterisk}lex{\isacharasterisk}{\isachargreater}\ measure{\isacharparenleft}{\isacharpercent}n{\isachardot}\ n{\isacharparenright}{\isachardoublequote}\isanewline
+\isacommand{recdef}\ ack\ {\isachardoublequote}measure{\isacharparenleft}{\isasymlambda}m{\isachardot}\ m{\isacharparenright}\ {\isacharless}{\isacharasterisk}lex{\isacharasterisk}{\isachargreater}\ measure{\isacharparenleft}{\isasymlambda}n{\isachardot}\ n{\isacharparenright}{\isachardoublequote}\isanewline
 \ \ {\isachardoublequote}ack{\isacharparenleft}\isadigit{0}{\isacharcomma}n{\isacharparenright}\ \ \ \ \ \ \ \ \ {\isacharequal}\ Suc\ n{\isachardoublequote}\isanewline
 \ \ {\isachardoublequote}ack{\isacharparenleft}Suc\ m{\isacharcomma}\isadigit{0}{\isacharparenright}\ \ \ \ \ {\isacharequal}\ ack{\isacharparenleft}m{\isacharcomma}\ \isadigit{1}{\isacharparenright}{\isachardoublequote}\isanewline
 \ \ {\isachardoublequote}ack{\isacharparenleft}Suc\ m{\isacharcomma}Suc\ n{\isacharparenright}\ {\isacharequal}\ ack{\isacharparenleft}m{\isacharcomma}ack{\isacharparenleft}Suc\ m{\isacharcomma}n{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
--- a/doc-src/TutorialI/Recdef/examples.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/examples.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -13,13 +13,13 @@
   "fib (Suc(Suc x)) = fib x + fib (Suc x)";
 
 text{*\noindent
-The definition of \isa{fib} is accompanied by a \bfindex{measure function}
-@{term"%n. n"} which maps the argument of \isa{fib} to a
+The definition of @{term"fib"} is accompanied by a \bfindex{measure function}
+@{term"%n. n"} which maps the argument of @{term"fib"} to a
 natural number. The requirement is that in each equation the measure of the
 argument on the left-hand side is strictly greater than the measure of the
-argument of each recursive call. In the case of \isa{fib} this is
+argument of each recursive call. In the case of @{term"fib"} this is
 obviously true because the measure function is the identity and
-@{term"Suc(Suc x)"} is strictly greater than both \isa{x} and
+@{term"Suc(Suc x)"} is strictly greater than both @{term"x"} and
 @{term"Suc x"}.
 
 Slightly more interesting is the insertion of a fixed element
@@ -55,8 +55,8 @@
   "sep1(a, xs)     = xs";
 
 text{*\noindent
-This defines exactly the same function as \isa{sep} above, i.e.\
-\isa{sep1 = sep}.
+This defines exactly the same function as @{term"sep"} above, i.e.\
+@{prop"sep1 = sep"}.
 
 \begin{warn}
   \isacommand{recdef} only takes the first argument of a (curried)
@@ -84,7 +84,7 @@
 
 text{*\noindent
 For non-recursive functions the termination measure degenerates to the empty
-set \isa{\{\}}.
+set @{term"{}"}.
 *}
 
 (*<*)
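(For completeness, here are sketches of the two definitions this discussion keeps returning to; the first two fib equations and the name swap12 of the non-recursive example are assumptions, while the measure %n. n and the last fib equation appear in the text above.)

  consts fib :: "nat => nat"
  recdef fib "measure(%n. n)"
    "fib 0            = 0"
    "fib (Suc 0)      = 1"
    "fib (Suc(Suc x)) = fib x + fib (Suc x)"

  consts swap12 :: "'a list => 'a list"
  recdef swap12 "{}"
    "swap12 (x#y#zs) = y # x # zs"
    "swap12 zs       = zs"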
--- a/doc-src/TutorialI/Recdef/simplification.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/simplification.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -7,7 +7,7 @@
 equations become simplification rules, just as with
 \isacommand{primrec}. In most cases this works fine, but there is a subtle
 problem that must be mentioned: simplification may not
-terminate because of automatic splitting of @{name"if"}.
+terminate because of automatic splitting of @{text"if"}.
 Let us look at an example:
 *}
 
@@ -24,9 +24,9 @@
 is proved automatically because it is already present as a lemma in the
 arithmetic library. Thus the recursion equation becomes a simplification
 rule. Of course the equation is nonterminating if we are allowed to unfold
-the recursive call inside the @{name"if"} branch, which is why programming
+the recursive call inside the @{text"if"} branch, which is why programming
 languages and our simplifier don't do that. Unfortunately the simplifier does
-something else which leads to the same problem: it splits @{name"if"}s if the
+something else which leads to the same problem: it splits @{text"if"}s if the
 condition simplifies to neither @{term"True"} nor @{term"False"}. For
 example, simplification reduces
 \begin{quote}
@@ -41,17 +41,18 @@
 @{term[display]"(n=0 --> m=k) & (n ~= 0 --> gcd(n, m mod n)=k)"}
 \end{quote}
 Since the recursive call @{term"gcd(n, m mod n)"} is no longer protected by
-an @{name"if"}, it is unfolded again, which leads to an infinite chain of
+an @{text"if"}, it is unfolded again, which leads to an infinite chain of
 simplification steps. Fortunately, this problem can be avoided in many
 different ways.
 
-The most radical solution is to disable the offending \@{name"split_if"} as
-shown in the section on case splits in \S\ref{sec:Simplification}.  However,
-we do not recommend this because it means you will often have to invoke the
-rule explicitly when @{name"if"} is involved.
+The most radical solution is to disable the offending
+@{thm[source]split_if} as shown in the section on case splits in
+\S\ref{sec:Simplification}.  However, we do not recommend this because it
+means you will often have to invoke the rule explicitly when @{text"if"} is
+involved.
 
 If possible, the definition should be given by pattern matching on the left
-rather than @{name"if"} on the right. In the case of @{term"gcd"} the
+rather than @{text"if"} on the right. In the case of @{term"gcd"} the
 following alternative definition suggests itself:
 *}
 
@@ -66,7 +67,7 @@
 @{prop"n ~= 0"}. Unfortunately, in general the case distinction
 may not be expressible by pattern matching.
 
-A very simple alternative is to replace @{name"if"} by @{name"case"}, which
+A very simple alternative is to replace @{text"if"} by @{text"case"}, which
 is also available for @{typ"bool"} but is not split automatically:
 *}
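(Concretely, the two alternatives just mentioned presumably look like this; the constant names gcd1 and gcd2 and the measure are illustrative assumptions. In both versions the only termination condition is m mod n < n for n ~= 0, the library lemma mentioned above.)

  consts gcd1 :: "nat*nat => nat"
  recdef gcd1 "measure (%(m,n). n)"
    "gcd1(m,0) = m"
    "gcd1(m,n) = gcd1(n, m mod n)"

  consts gcd2 :: "nat*nat => nat"
  recdef gcd2 "measure (%(m,n). n)"
    "gcd2(m,n) = (case n=0 of True => m | False => gcd2(n, m mod n))"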
 
--- a/doc-src/TutorialI/Recdef/termination.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Recdef/termination.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -1,5 +1,5 @@
 (*<*)
-theory termination = Main:;
+theory termination = examples:
 (*>*)
 
 text{*
@@ -7,10 +7,10 @@
 its termination with the help of the user-supplied measure.  All of the above
 examples are simple enough that Isabelle can prove automatically that the
 measure of the argument goes down in each recursive call. As a result,
-\isa{$f$.simps} will contain the defining equations (or variants derived from
-them) as theorems. For example, look (via \isacommand{thm}) at
-\isa{sep.simps} and \isa{sep1.simps} to see that they define the same
-function. What is more, those equations are automatically declared as
+$f$@{text".simps"} will contain the defining equations (or variants derived
+from them) as theorems. For example, look (via \isacommand{thm}) at
+@{thm[source]sep.simps} and @{thm[source]sep1.simps} to see that they define
+the same function. What is more, those equations are automatically declared as
 simplification rules.
 
 In general, Isabelle may not be able to prove all termination conditions
@@ -32,8 +32,8 @@
 lemma termi_lem[simp]: "\\<not> x \\<le> y \\<Longrightarrow> x - Suc y < x - y";
 
 txt{*\noindent
-It was not proved automatically because of the special nature of \isa{-}
-on \isa{nat}. This requires more arithmetic than is tried by default:
+It was not proved automatically because of the special nature of @{text"-"}
+on @{typ"nat"}. This requires more arithmetic than is tried by default:
 *}
 
 by(arith);
@@ -49,8 +49,8 @@
   "g(x,y) = (if x \\<le> y then x else g(x,y+1))";
 
 text{*\noindent
-This time everything works fine. Now \isa{g.simps} contains precisely the
-stated recursion equation for \isa{g} and they are simplification
+This time everything works fine. Now @{thm[source]g.simps} contains precisely
+the stated recursion equation for @{term"g"} and they are simplification
 rules. Thus we can automatically prove
 *}
 
@@ -60,7 +60,7 @@
 text{*\noindent
 More exciting theorems require induction, which is discussed below.
 
-Because lemma \isa{termi_lem} above was only turned into a
+Because lemma @{thm[source]termi_lem} above was only turned into a
 simplification rule for the sake of the termination proof, we may want to
 disable it again:
 *}
@@ -68,26 +68,27 @@
 lemmas [simp del] = termi_lem;
 
 text{*
-The attentive reader may wonder why we chose to call our function \isa{g}
-rather than \isa{f} the second time around. The reason is that, despite
-the failed termination proof, the definition of \isa{f} did not
-fail (and thus we could not define it a second time). However, all theorems
-about \isa{f}, for example \isa{f.simps}, carry as a precondition the
-unproved termination condition. Moreover, the theorems \isa{f.simps} are
-not simplification rules. However, this mechanism allows a delayed proof of
-termination: instead of proving \isa{termi_lem} up front, we could prove
+The attentive reader may wonder why we chose to call our function @{term"g"}
+rather than @{term"f"} the second time around. The reason is that, despite
+the failed termination proof, the definition of @{term"f"} did not
+fail, and thus we could not define it a second time. However, all theorems
+about @{term"f"}, for example @{thm[source]f.simps}, carry as a precondition
+the unproved termination condition. Moreover, the theorems
+@{thm[source]f.simps} are not simplification rules. However, this mechanism
+allows a delayed proof of termination: instead of proving
+@{thm[source]termi_lem} up front, we could prove 
 it later on and then use it to remove the preconditions from the theorems
-about \isa{f}. In most cases this is more cumbersome than proving things
-up front
+about @{term"f"}. In most cases this is more cumbersome than proving things
+up front.
 %FIXME, with one exception: nested recursion.
 
 Although all the above examples employ measure functions, \isacommand{recdef}
 allows arbitrary wellfounded relations. For example, termination of
-Ackermann's function requires the lexicographic product \isa{<*lex*>}:
+Ackermann's function requires the lexicographic product @{text"<*lex*>"}:
 *}
 
 consts ack :: "nat*nat \\<Rightarrow> nat";
-recdef ack "measure(%m. m) <*lex*> measure(%n. n)"
+recdef ack "measure(\<lambda>m. m) <*lex*> measure(\<lambda>n. n)"
   "ack(0,n)         = Suc n"
   "ack(Suc m,0)     = ack(m, 1)"
   "ack(Suc m,Suc n) = ack(m,ack(Suc m,n))";
--- a/doc-src/TutorialI/ToyList/ToyList.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/ToyList/ToyList.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -1,10 +1,10 @@
 theory ToyList = PreList:
 
 text{*\noindent
-HOL already has a predefined theory of lists called \isa{List} ---
-\isa{ToyList} is merely a small fragment of it chosen as an example. In
+HOL already has a predefined theory of lists called @{text"List"} ---
+@{text"ToyList"} is merely a small fragment of it chosen as an example. In
 contrast to what is recommended in \S\ref{sec:Basic:Theories},
-\isa{ToyList} is not based on \isa{Main} but on \isa{PreList}, a
+@{text"ToyList"} is not based on @{text"Main"} but on @{text"PreList"}, a
 theory that contains pretty much everything but lists, thus avoiding
 ambiguities caused by defining lists twice.
 *}
@@ -16,27 +16,28 @@
 The datatype\index{*datatype} \isaindexbold{list} introduces two
 constructors \isaindexbold{Nil} and \isaindexbold{Cons}, the
 empty~list and the operator that adds an element to the front of a list. For
-example, the term \isa{Cons True (Cons False Nil)} is a value of type
-@{typ"bool list"}, namely the list with the elements \isa{True} and
-\isa{False}. Because this notation becomes unwieldy very quickly, the
+example, the term \isa{Cons True (Cons False Nil)} is a value of
+type @{typ"bool list"}, namely the list with the elements @{term"True"} and
+@{term"False"}. Because this notation becomes unwieldy very quickly, the
 datatype declaration is annotated with an alternative syntax: instead of
-\isa{Nil} and \isa{Cons x xs} we can write
-\isa{[]}\index{$HOL2list@\texttt{[]}|bold} and
+@{term[source]Nil} and \isa{Cons x xs} we can write
+@{term"[]"}\index{$HOL2list@\texttt{[]}|bold} and
 @{term"x # xs"}\index{$HOL2list@\texttt{\#}|bold}. In fact, this
 alternative syntax is the standard syntax. Thus the list \isa{Cons True
 (Cons False Nil)} becomes @{term"True # False # []"}. The annotation
-\isacommand{infixr}\indexbold{*infixr} means that \isa{\#} associates to
-the right, i.e.\ the term @{term"x # y # z"} is read as \isa{x
-\# (y \# z)} and not as \isa{(x \# y) \# z}.
+\isacommand{infixr}\indexbold{*infixr} means that @{text"#"} associates to
+the right, i.e.\ the term @{term"x # y # z"} is read as @{text"x # (y # z)"}
+and not as @{text"(x # y) # z"}.
 
 \begin{warn}
   Syntax annotations are a powerful but completely optional feature. You
-  could drop them from theory \isa{ToyList} and go back to the identifiers
-  \isa{Nil} and \isa{Cons}. However, lists are such a central datatype
+  could drop them from theory @{text"ToyList"} and go back to the identifiers
+  @{term[source]Nil} and @{term[source]Cons}. However, lists are such a
+  central datatype
   that their syntax is highly customized. We recommend that novices should
   not use syntax annotations in their own theories.
 \end{warn}
-Next, two functions \isa{app} and \isaindexbold{rev} are declared:
+Next, two functions @{text"app"} and \isaindexbold{rev} are declared:
 *}
 
 consts app :: "'a list \\<Rightarrow> 'a list \\<Rightarrow> 'a list"   (infixr "@" 65)
@@ -47,7 +48,7 @@
 In contrast to ML, Isabelle insists on explicit declarations of all functions
 (keyword \isacommand{consts}).  (Apart from the declaration-before-use
 restriction, the order of items in a theory file is unconstrained.) Function
-\isa{app} is annotated with concrete syntax too. Instead of the prefix
+@{term"app"} is annotated with concrete syntax too. Instead of the prefix
 syntax \isa{app xs ys} the infix
 @{term"xs @ ys"}\index{$HOL2list@\texttt{\at}|bold} becomes the preferred
 form. Both functions are defined recursively:
@@ -63,8 +64,8 @@
 
 text{*
 \noindent
-The equations for \isa{app} and \isa{rev} hardly need comments:
-\isa{app} appends two lists and \isa{rev} reverses a list.  The keyword
+The equations for @{term"app"} and @{term"rev"} hardly need comments:
+@{term"app"} appends two lists and @{term"rev"} reverses a list.  The keyword
 \isacommand{primrec}\index{*primrec} indicates that the recursion is of a
 particularly primitive kind where each recursive call peels off a datatype
 constructor from one of the arguments.  Thus the
@@ -114,7 +115,7 @@
 illustrate not just the basic proof commands but also the typical proof
 process.
 
-\subsubsection*{Main goal: \texttt{rev(rev xs) = xs}}
+\subsubsection*{Main goal: @{text"rev(rev xs) = xs"}}
 
 Our goal is to show that reversing a list twice produces the original
 list. The input line
@@ -125,14 +126,15 @@
 txt{*\index{*theorem|bold}\index{*simp (attribute)|bold}
 \begin{itemize}
 \item
-establishes a new theorem to be proved, namely \isa{rev(rev xs) = xs},
+establishes a new theorem to be proved, namely @{prop"rev(rev xs) = xs"},
 \item
-gives that theorem the name \isa{rev_rev} by which it can be referred to,
+gives that theorem the name @{text"rev_rev"} by which it can be
+referred to,
 \item
-and tells Isabelle (via \isa{[simp]}) to use the theorem (once it has been
+and tells Isabelle (via @{text"[simp]"}) to use the theorem (once it has been
 proved) as a simplification rule, i.e.\ all future proofs involving
-simplification will replace occurrences of \isa{rev(rev xs)} by
-\isa{xs}.
+simplification will replace occurrences of @{term"rev(rev xs)"} by
+@{term"xs"}.
 
 The name and the simplification attribute are optional.
 \end{itemize}
@@ -145,7 +147,7 @@
 ~1.~rev~(rev~xs)~=~xs
 \end{isabelle}
 The first three lines tell us that we are 0 steps into the proof of
-theorem \isa{rev_rev}; for compactness reasons we rarely show these
+theorem @{text"rev_rev"}; for compactness reasons we rarely show these
 initial lines in this tutorial. The remaining lines display the current
 proof state.
 Until we have finished a proof, the proof state always looks like this:
@@ -158,26 +160,26 @@
 where $G$
 is the overall goal that we are trying to prove, and the numbered lines
 contain the subgoals $G\sb{1}$, \dots, $G\sb{n}$ that we need to prove to
-establish $G$. At \isa{step 0} there is only one subgoal, which is
+establish $G$. At @{text"step 0"} there is only one subgoal, which is
 identical with the overall goal.  Normally $G$ is constant and only serves as
 a reminder. Hence we rarely show it in this tutorial.
 
-Let us now get back to \isa{rev(rev xs) = xs}. Properties of recursively
+Let us now get back to @{prop"rev(rev xs) = xs"}. Properties of recursively
 defined functions are best established by induction. In this case there is
-not much choice except to induct on \isa{xs}:
+not much choice except to induct on @{term"xs"}:
 *}
 
 apply(induct_tac xs);
 
 txt{*\noindent\index{*induct_tac}%
-This tells Isabelle to perform induction on variable \isa{xs}. The suffix
-\isa{tac} stands for ``tactic'', a synonym for ``theorem proving function''.
+This tells Isabelle to perform induction on variable @{term"xs"}. The suffix
+@{term"tac"} stands for ``tactic'', a synonym for ``theorem proving function''.
 By default, induction acts on the first subgoal. The new proof state contains
-two subgoals, namely the base case (\isa{Nil}) and the induction step
-(\isa{Cons}):
+two subgoals, namely the base case (@{term[source]Nil}) and the induction step
+(@{term[source]Cons}):
 \begin{isabelle}
 ~1.~rev~(rev~[])~=~[]\isanewline
-~2.~{\isasymAnd}a~list.~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev(a~\#~list))~=~a~\#~list%
+~2.~{\isasymAnd}a~list.~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev(a~\#~list))~=~a~\#~list
 \end{isabelle}
 
 The induction step is an example of the general format of a subgoal:
@@ -191,12 +193,11 @@
   conclusion} is the actual proposition to be proved. Typical proof steps
 that add new assumptions are induction or case distinction. In our example
 the only assumption is the induction hypothesis @{term"rev (rev list) =
-  list"}, where \isa{list} is a variable name chosen by Isabelle. If there
+  list"}, where @{term"list"} is a variable name chosen by Isabelle. If there
 are multiple assumptions, they are enclosed in the bracket pair
 \indexboldpos{\isasymlbrakk}{$Isabrl} and
 \indexboldpos{\isasymrbrakk}{$Isabrr} and separated by semicolons.
 
-%FIXME indent!
 Let us try to solve both goals automatically:
 *}
 
@@ -204,9 +205,9 @@
 
 txt{*\noindent
 This command tells Isabelle to apply a proof strategy called
-\isa{auto} to all subgoals. Essentially, \isa{auto} tries to
+@{text"auto"} to all subgoals. Essentially, @{text"auto"} tries to
 ``simplify'' the subgoals.  In our case, subgoal~1 is solved completely (thanks
-to the equation \isa{rev [] = []}) and disappears; the simplified version
+to the equation @{prop"rev [] = []"}) and disappears; the simplified version
 of subgoal~2 becomes the new subgoal~1:
 \begin{isabelle}
 ~1.~\dots~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev~list~@~a~\#~[])~=~a~\#~list
@@ -217,7 +218,7 @@
 oops
 (*>*)
 
-subsubsection{*First lemma: \texttt{rev(xs \at~ys) = (rev ys) \at~(rev xs)}*}
+subsubsection{*First lemma: @{text"rev(xs @ ys) = (rev ys) @ (rev xs)"}*}
 
 text{*
 After abandoning the above proof attempt\indexbold{abandon
@@ -233,9 +234,9 @@
 \emph{theorem}\index{theorem} and \emph{lemma}\index{lemma} pretty much
 interchangeably.
 
-There are two variables that we could induct on: \isa{xs} and
-\isa{ys}. Because \isa{\at} is defined by recursion on
-the first argument, \isa{xs} is the correct one:
+There are two variables that we could induct on: @{term"xs"} and
+@{term"ys"}. Because @{text"@"} is defined by recursion on
+the first argument, @{term"xs"} is the correct one:
 *}
 
 apply(induct_tac xs);
@@ -259,7 +260,7 @@
 oops
 (*>*)
 
-subsubsection{*Second lemma: \texttt{xs \at~[] = xs}*}
+subsubsection{*Second lemma: @{text"xs @ [] = xs"}*}
 
 text{*
 This time the canonical proof procedure
@@ -271,7 +272,7 @@
 
 txt{*
 \noindent
-leads to the desired message \isa{No subgoals!}:
+leads to the desired message @{text"No subgoals!"}:
 \begin{isabelle}
 xs~@~[]~=~xs\isanewline
 No~subgoals!
@@ -284,12 +285,12 @@
 
 text{*\noindent\indexbold{$Isar@\texttt{.}}%
 As a result of that final dot, Isabelle associates the lemma
-just proved with its name. Notice that in the lemma \isa{app_Nil2} (as
-printed out after the final dot) the free variable \isa{xs} has been
-replaced by the unknown \isa{?xs}, just as explained in
-\S\ref{sec:variables}. Note that instead of instead of \isacommand{apply}
+just proved with its name. Instead of \isacommand{apply}
 followed by a dot, you can simply write \isacommand{by}\indexbold{by},
-which we do most of the time.
+which we do most of the time. Notice that in lemma @{thm[source]app_Nil2}
+(as printed out after the final dot) the free variable @{term"xs"} has been
+replaced by the unknown @{text"?xs"}, just as explained in
+\S\ref{sec:variables}.
 
 Going back to the proof of the first lemma
 *}
@@ -300,24 +301,24 @@
 
 txt{*
 \noindent
-we find that this time \isa{auto} solves the base case, but the
+we find that this time @{text"auto"} solves the base case, but the
 induction step merely simplifies to
 \begin{isabelle}
 ~1.~{\isasymAnd}a~list.\isanewline
 ~~~~~~~rev~(list~@~ys)~=~rev~ys~@~rev~list~{\isasymLongrightarrow}\isanewline
 ~~~~~~~(rev~ys~@~rev~list)~@~a~\#~[]~=~rev~ys~@~rev~list~@~a~\#~[]
 \end{isabelle}
-Now we need to remember that \isa{\at} associates to the right, and that
-\isa{\#} and \isa{\at} have the same priority (namely the \isa{65}
+Now we need to remember that @{text"@"} associates to the right, and that
+@{text"#"} and @{text"@"} have the same priority (namely the @{text"65"}
 in their \isacommand{infixr} annotation). Thus the conclusion really is
 \begin{isabelle}
-~~~~~(rev~ys~@~rev~list)~@~(a~\#~[])~=~rev~ys~@~(rev~list~@~(a~\#~[]))%
+~~~~~(rev~ys~@~rev~list)~@~(a~\#~[])~=~rev~ys~@~(rev~list~@~(a~\#~[]))
 \end{isabelle}
-and the missing lemma is associativity of \isa{\at}.
+and the missing lemma is associativity of @{text"@"}.
 *}
 (*<*)oops(*>*)
 
-subsubsection{*Third lemma: \texttt{(xs \at~ys) \at~zs = xs \at~(ys \at~zs)}*}
+subsubsection{*Third lemma: @{text"(xs @ ys) @ zs = xs @ (ys @ zs)"}*}
 
 text{*
 Abandoning the previous proof, the canonical proof procedure
@@ -346,7 +347,7 @@
 by(auto);
 
 text{*\noindent
-The final \isa{end} tells Isabelle to close the current theory because
+The final \isacommand{end} tells Isabelle to close the current theory because
 we are finished with its development:
 *}
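(Assembled from the fragments above, the finished chain of lemmas presumably reads as follows. The names app_assoc and rev_app are guesses made here for illustration; only app_Nil2 and rev_rev occur in the text. Each proof is the canonical induct-then-auto step described above, and the order matters because each later proof uses the earlier lemmas as simplification rules.)

  lemma app_Nil2 [simp]: "xs @ [] = xs";
  apply(induct_tac xs);
  by(auto);

  lemma app_assoc [simp]: "(xs @ ys) @ zs = xs @ (ys @ zs)";
  apply(induct_tac xs);
  by(auto);

  lemma rev_app [simp]: "rev(xs @ ys) = (rev ys) @ (rev xs)";
  apply(induct_tac xs);
  by(auto);

  theorem rev_rev [simp]: "rev(rev xs) = xs";
  apply(induct_tac xs);
  by(auto);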
 
--- a/doc-src/TutorialI/ToyList/document/ToyList.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/ToyList/document/ToyList.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -17,23 +17,24 @@
 The datatype\index{*datatype} \isaindexbold{list} introduces two
 constructors \isaindexbold{Nil} and \isaindexbold{Cons}, the
 empty~list and the operator that adds an element to the front of a list. For
-example, the term \isa{Cons True (Cons False Nil)} is a value of type
-\isa{bool\ list}, namely the list with the elements \isa{True} and
+example, the term \isa{Cons True (Cons False Nil)} is a value of
+type \isa{bool\ list}, namely the list with the elements \isa{True} and
 \isa{False}. Because this notation becomes unwieldy very quickly, the
 datatype declaration is annotated with an alternative syntax: instead of
 \isa{Nil} and \isa{Cons x xs} we can write
-\isa{[]}\index{$HOL2list@\texttt{[]}|bold} and
-\isa{\mbox{x}\ {\isacharhash}\ \mbox{xs}}\index{$HOL2list@\texttt{\#}|bold}. In fact, this
+\isa{{\isacharbrackleft}{\isacharbrackright}}\index{$HOL2list@\texttt{[]}|bold} and
+\isa{x\ {\isacharhash}\ xs}\index{$HOL2list@\texttt{\#}|bold}. In fact, this
 alternative syntax is the standard syntax. Thus the list \isa{Cons True
 (Cons False Nil)} becomes \isa{True\ {\isacharhash}\ False\ {\isacharhash}\ {\isacharbrackleft}{\isacharbrackright}}. The annotation
-\isacommand{infixr}\indexbold{*infixr} means that \isa{\#} associates to
-the right, i.e.\ the term \isa{\mbox{x}\ {\isacharhash}\ \mbox{y}\ {\isacharhash}\ \mbox{z}} is read as \isa{x
-\# (y \# z)} and not as \isa{(x \# y) \# z}.
+\isacommand{infixr}\indexbold{*infixr} means that \isa{{\isacharhash}} associates to
+the right, i.e.\ the term \isa{x\ {\isacharhash}\ y\ {\isacharhash}\ z} is read as \isa{x\ {\isacharhash}\ {\isacharparenleft}y\ {\isacharhash}\ z{\isacharparenright}}
+and not as \isa{{\isacharparenleft}x\ {\isacharhash}\ y{\isacharparenright}\ {\isacharhash}\ z}.
 
 \begin{warn}
   Syntax annotations are a powerful but completely optional feature. You
   could drop them from theory \isa{ToyList} and go back to the identifiers
-  \isa{Nil} and \isa{Cons}. However, lists are such a central datatype
+  \isa{Nil} and \isa{Cons}. However, lists are such a
+  central datatype
   that their syntax is highly customized. We recommend that novices should
   not use syntax annotations in their own theories.
 \end{warn}
@@ -46,9 +47,9 @@
 In contrast to ML, Isabelle insists on explicit declarations of all functions
 (keyword \isacommand{consts}).  (Apart from the declaration-before-use
 restriction, the order of items in a theory file is unconstrained.) Function
-\isa{app} is annotated with concrete syntax too. Instead of the prefix
+\isa{op\ {\isacharat}} is annotated with concrete syntax too. Instead of the prefix
 syntax \isa{app xs ys} the infix
-\isa{\mbox{xs}\ {\isacharat}\ \mbox{ys}}\index{$HOL2list@\texttt{\at}|bold} becomes the preferred
+\isa{xs\ {\isacharat}\ ys}\index{$HOL2list@\texttt{\at}|bold} becomes the preferred
 form. Both functions are defined recursively:%
 \end{isamarkuptext}%
 \isacommand{primrec}\isanewline
@@ -60,8 +61,8 @@
 {\isachardoublequote}rev\ {\isacharparenleft}x\ {\isacharhash}\ xs{\isacharparenright}\ \ {\isacharequal}\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharat}\ {\isacharparenleft}x\ {\isacharhash}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptext}%
 \noindent
-The equations for \isa{app} and \isa{rev} hardly need comments:
-\isa{app} appends two lists and \isa{rev} reverses a list.  The keyword
+The equations for \isa{op\ {\isacharat}} and \isa{rev} hardly need comments:
+\isa{op\ {\isacharat}} appends two lists and \isa{rev} reverses a list.  The keyword
 \isacommand{primrec}\index{*primrec} indicates that the recursion is of a
 particularly primitive kind where each recursive call peels off a datatype
 constructor from one of the arguments.  Thus the
@@ -110,7 +111,7 @@
 illustrate not just the basic proof commands but also the typical proof
 process.
 
-\subsubsection*{Main goal: \texttt{rev(rev xs) = xs}}
+\subsubsection*{Main goal: \isa{rev{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs}}
 
 Our goal is to show that reversing a list twice produces the original
 list. The input line%
@@ -120,13 +121,14 @@
 \index{*theorem|bold}\index{*simp (attribute)|bold}
 \begin{itemize}
 \item
-establishes a new theorem to be proved, namely \isa{rev(rev xs) = xs},
-\item
-gives that theorem the name \isa{rev_rev} by which it can be referred to,
+establishes a new theorem to be proved, namely \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs},
 \item
-and tells Isabelle (via \isa{[simp]}) to use the theorem (once it has been
+gives that theorem the name \isa{rev{\isacharunderscore}rev} by which it can be
+referred to,
+\item
+and tells Isabelle (via \isa{{\isacharbrackleft}simp{\isacharbrackright}}) to use the theorem (once it has been
 proved) as a simplification rule, i.e.\ all future proofs involving
-simplification will replace occurrences of \isa{rev(rev xs)} by
+simplification will replace occurrences of \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}} by
 \isa{xs}.
 
 The name and the simplification attribute are optional.
@@ -140,7 +142,7 @@
 ~1.~rev~(rev~xs)~=~xs
 \end{isabelle}
 The first three lines tell us that we are 0 steps into the proof of
-theorem \isa{rev_rev}; for compactness reasons we rarely show these
+theorem \isa{rev{\isacharunderscore}rev}; for compactness reasons we rarely show these
 initial lines in this tutorial. The remaining lines display the current
 proof state.
 Until we have finished a proof, the proof state always looks like this:
@@ -153,11 +155,11 @@
 where $G$
 is the overall goal that we are trying to prove, and the numbered lines
 contain the subgoals $G\sb{1}$, \dots, $G\sb{n}$ that we need to prove to
-establish $G$. At \isa{step 0} there is only one subgoal, which is
+establish $G$. At \isa{step\ \isadigit{0}} there is only one subgoal, which is
 identical with the overall goal.  Normally $G$ is constant and only serves as
 a reminder. Hence we rarely show it in this tutorial.
 
-Let us now get back to \isa{rev(rev xs) = xs}. Properties of recursively
+Let us now get back to \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs}. Properties of recursively
 defined functions are best established by induction. In this case there is
 not much choice except to induct on \isa{xs}:%
 \end{isamarkuptxt}%
@@ -171,7 +173,7 @@
 (\isa{Cons}):
 \begin{isabelle}
 ~1.~rev~(rev~[])~=~[]\isanewline
-~2.~{\isasymAnd}a~list.~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev(a~\#~list))~=~a~\#~list%
+~2.~{\isasymAnd}a~list.~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev(a~\#~list))~=~a~\#~list
 \end{isabelle}
 
 The induction step is an example of the general format of a subgoal:
@@ -184,12 +186,11 @@
 The {\it assumptions} are the local assumptions for this subgoal and {\it
   conclusion} is the actual proposition to be proved. Typical proof steps
 that add new assumptions are induction or case distinction. In our example
-the only assumption is the induction hypothesis \isa{rev\ {\isacharparenleft}rev\ \mbox{list}{\isacharparenright}\ {\isacharequal}\ \mbox{list}}, where \isa{list} is a variable name chosen by Isabelle. If there
+the only assumption is the induction hypothesis \isa{rev\ {\isacharparenleft}rev\ list{\isacharparenright}\ {\isacharequal}\ list}, where \isa{list} is a variable name chosen by Isabelle. If there
 are multiple assumptions, they are enclosed in the bracket pair
 \indexboldpos{\isasymlbrakk}{$Isabrl} and
 \indexboldpos{\isasymrbrakk}{$Isabrr} and separated by semicolons.
 
-%FIXME indent!
 Let us try to solve both goals automatically:%
 \end{isamarkuptxt}%
 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}%
@@ -198,7 +199,7 @@
 This command tells Isabelle to apply a proof strategy called
 \isa{auto} to all subgoals. Essentially, \isa{auto} tries to
 ``simplify'' the subgoals.  In our case, subgoal~1 is solved completely (thanks
-to the equation \isa{rev [] = []}) and disappears; the simplified version
+to the equation \isa{rev\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}}) and disappears; the simplified version
 of subgoal~2 becomes the new subgoal~1:
 \begin{isabelle}
 ~1.~\dots~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev~list~@~a~\#~[])~=~a~\#~list
@@ -206,7 +207,7 @@
 In order to simplify this subgoal further, a lemma suggests itself.%
 \end{isamarkuptxt}%
 %
-\isamarkupsubsubsection{First lemma: \texttt{rev(xs \at~ys) = (rev ys) \at~(rev xs)}}
+\isamarkupsubsubsection{First lemma: \isa{rev{\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}rev\ ys{\isacharparenright}\ {\isacharat}\ {\isacharparenleft}rev\ xs{\isacharparenright}}}
 %
 \begin{isamarkuptext}%
 After abandoning the above proof attempt\indexbold{abandon
@@ -222,7 +223,7 @@
 interchangeably.
 
 There are two variables that we could induct on: \isa{xs} and
-\isa{ys}. Because \isa{\at} is defined by recursion on
+\isa{ys}. Because \isa{{\isacharat}} is defined by recursion on
 the first argument, \isa{xs} is the correct one:%
 \end{isamarkuptxt}%
 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ xs{\isacharparenright}%
@@ -241,7 +242,7 @@
 the proof of a lemma usually remains implicit.%
 \end{isamarkuptxt}%
 %
-\isamarkupsubsubsection{Second lemma: \texttt{xs \at~[] = xs}}
+\isamarkupsubsubsection{Second lemma: \isa{xs\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ xs}}
 %
 \begin{isamarkuptext}%
 This time the canonical proof procedure%
@@ -251,7 +252,7 @@
 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}%
 \begin{isamarkuptxt}%
 \noindent
-leads to the desired message \isa{No subgoals!}:
+leads to the desired message \isa{No\ subgoals{\isacharbang}}:
 \begin{isabelle}
 xs~@~[]~=~xs\isanewline
 No~subgoals!
@@ -263,12 +264,12 @@
 \begin{isamarkuptext}%
 \noindent\indexbold{$Isar@\texttt{.}}%
 As a result of that final dot, Isabelle associates the lemma
-just proved with its name. Notice that in the lemma \isa{app_Nil2} (as
-printed out after the final dot) the free variable \isa{xs} has been
-replaced by the unknown \isa{?xs}, just as explained in
-\S\ref{sec:variables}. Note that instead of instead of \isacommand{apply}
+just proved with its name. Instead of \isacommand{apply}
 followed by a dot, you can simply write \isacommand{by}\indexbold{by},
-which we do most of the time.
+which we do most of the time. Notice that in lemma \isa{app{\isacharunderscore}Nil\isadigit{2}}
+(as printed out after the final dot) the free variable \isa{xs} has been
+replaced by the unknown \isa{{\isacharquery}xs}, just as explained in
+\S\ref{sec:variables}.
 
 Going back to the proof of the first lemma%
 \end{isamarkuptext}%
@@ -284,16 +285,16 @@
 ~~~~~~~rev~(list~@~ys)~=~rev~ys~@~rev~list~{\isasymLongrightarrow}\isanewline
 ~~~~~~~(rev~ys~@~rev~list)~@~a~\#~[]~=~rev~ys~@~rev~list~@~a~\#~[]
 \end{isabelle}
-Now we need to remember that \isa{\at} associates to the right, and that
-\isa{\#} and \isa{\at} have the same priority (namely the \isa{65}
+Now we need to remember that \isa{{\isacharat}} associates to the right, and that
+\isa{{\isacharhash}} and \isa{{\isacharat}} have the same priority (namely the \isa{\isadigit{6}\isadigit{5}}
 in their \isacommand{infixr} annotation). Thus the conclusion really is
 \begin{isabelle}
-~~~~~(rev~ys~@~rev~list)~@~(a~\#~[])~=~rev~ys~@~(rev~list~@~(a~\#~[]))%
+~~~~~(rev~ys~@~rev~list)~@~(a~\#~[])~=~rev~ys~@~(rev~list~@~(a~\#~[]))
 \end{isabelle}
-and the missing lemma is associativity of \isa{\at}.%
+and the missing lemma is associativity of \isa{{\isacharat}}.%
 \end{isamarkuptxt}%
 %
-\isamarkupsubsubsection{Third lemma: \texttt{(xs \at~ys) \at~zs = xs \at~(ys \at~zs)}}
+\isamarkupsubsubsection{Third lemma: \isa{{\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}\ {\isacharat}\ zs\ {\isacharequal}\ xs\ {\isacharat}\ {\isacharparenleft}ys\ {\isacharat}\ zs{\isacharparenright}}}
 %
 \begin{isamarkuptext}%
 Abandoning the previous proof, the canonical proof procedure%
@@ -318,7 +319,7 @@
 \isacommand{by}{\isacharparenleft}auto{\isacharparenright}%
 \begin{isamarkuptext}%
 \noindent
-The final \isa{end} tells Isabelle to close the current theory because
+The final \isacommand{end} tells Isabelle to close the current theory because
 we are finished with its development:%
 \end{isamarkuptext}%
 \isacommand{end}\isanewline
--- a/doc-src/TutorialI/Trie/Trie.thy	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Trie/Trie.thy	Fri Sep 01 19:09:44 2000 +0200
@@ -5,8 +5,8 @@
 To minimize running time, each node of a trie should contain an array that maps
 letters to subtries. We have chosen a (sometimes) more space efficient
 representation where the subtries are held in an association list, i.e.\ a
-list of (letter,trie) pairs.  Abstracting over the alphabet \isa{'a} and the
-values \isa{'v} we define a trie as follows:
+list of (letter,trie) pairs.  Abstracting over the alphabet @{typ"'a"} and the
+values @{typ"'v"} we define a trie as follows:
 *};
 
 datatype ('a,'v)trie = Trie  "'v option"  "('a * ('a,'v)trie)list";
@@ -47,7 +47,7 @@
 
 text{*
 As a first simple property we prove that looking up a string in the empty
-trie \isa{Trie~None~[]} always returns \isa{None}. The proof merely
+trie @{term"Trie None []"} always returns @{term"None"}. The proof merely
 distinguishes the two cases whether the search string is empty or not:
 *};
 
@@ -70,14 +70,14 @@
 
 text{*\noindent
 The base case is obvious. In the recursive case the subtrie
-\isa{tt} associated with the first letter \isa{a} is extracted,
+@{term"tt"} associated with the first letter @{term"a"} is extracted,
 recursively updated, and then placed in front of the association list.
-The old subtrie associated with \isa{a} is still in the association list
-but no longer accessible via \isa{assoc}. Clearly, there is room here for
+The old subtrie associated with @{term"a"} is still in the association list
+but no longer accessible via @{term"assoc"}. Clearly, there is room here for
 optimizations!
 
-Before we start on any proofs about \isa{update} we tell the simplifier to
-expand all \isa{let}s and to split all \isa{case}-constructs over
+Before we start on any proofs about @{term"update"} we tell the simplifier to
+expand all @{text"let"}s and to split all @{text"case"}-constructs over
 options:
 *};
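
One way to make such declarations uses the standard fact Let_def and the split rule option.split generated for the option datatype (a sketch; the exact lines in the theory file are not shown in this excerpt):

    lemmas [simp] = Let_def
    lemmas [split] = option.split
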
 
@@ -86,23 +86,23 @@
 
 text{*\noindent
 The reason becomes clear when looking (probably after a failed proof
-attempt) at the body of \isa{update}: it contains both
-\isa{let} and a case distinction over type \isa{option}.
+attempt) at the body of @{term"update"}: it contains both
+@{text"let"} and a case distinction over type @{text"option"}.
 
-Our main goal is to prove the correct interaction of \isa{update} and
-\isa{lookup}:
+Our main goal is to prove the correct interaction of @{term"update"} and
+@{term"lookup"}:
 *};
 
 theorem "\\<forall>t v bs. lookup (update t as v) bs =
                     (if as=bs then Some v else lookup t bs)";
 
 txt{*\noindent
-Our plan is to induct on \isa{as}; hence the remaining variables are
+Our plan is to induct on @{term"as"}; hence the remaining variables are
 quantified. From the definitions it is clear that induction on either
-\isa{as} or \isa{bs} is required. The choice of \isa{as} is merely
-guided by the intuition that simplification of \isa{lookup} might be easier
-if \isa{update} has already been simplified, which can only happen if
-\isa{as} is instantiated.
+@{term"as"} or @{term"bs"} is required. The choice of @{term"as"} is merely
+guided by the intuition that simplification of @{term"lookup"} might be easier
+if @{term"update"} has already been simplified, which can only happen if
+@{term"as"} is instantiated.
 The start of the proof is completely conventional:
 *};
 apply(induct_tac as, auto);
@@ -112,27 +112,27 @@
 \begin{isabelle}
 ~1.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs\isanewline
 ~2.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs\isanewline
-~3.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs%
+~3.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs
 \end{isabelle}
-Clearly, if we want to make headway we have to instantiate \isa{bs} as
+Clearly, if we want to make headway we have to instantiate @{term"bs"} as
 well now. It turns out that instead of induction, case distinction
 suffices:
 *};
 by(case_tac[!] bs, auto);
 
 text{*\noindent
-All methods ending in \isa{tac} take an optional first argument that
-specifies the range of subgoals they are applied to, where \isa{[!]} means all
-subgoals, i.e.\ \isa{[1-3]} in our case. Individual subgoal numbers,
-e.g. \isa{[2]} are also allowed.
+All methods ending in @{text"tac"} take an optional first argument that
+specifies the range of subgoals they are applied to, where @{text"[!]"} means
+all subgoals, i.e.\ @{text"[1-3]"} in our case. Individual subgoal numbers,
+e.g.\ @{text"[2]"}, are also allowed.
 
 This proof may look surprisingly straightforward. However, note that this
-comes at a cost: the proof script is unreadable because the
-intermediate proof states are invisible, and we rely on the (possibly
-brittle) magic of \isa{auto} (\isa{simp\_all} will not do---try it) to split the subgoals
-of the induction up in such a way that case distinction on \isa{bs} makes sense and
-solves the proof. Part~\ref{Isar} shows you how to write readable and stable
-proofs.
+comes at a cost: the proof script is unreadable because the intermediate
+proof states are invisible, and we rely on the (possibly brittle) magic of
+@{text"auto"} (@{text"simp_all"} will not do---try it) to split the subgoals
+of the induction up in such a way that case distinction on @{term"bs"} makes
+sense and solves the proof. Part~\ref{Isar} shows you how to write readable
+and stable proofs.
 *};
 
 (*<*)
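
To illustrate the subgoal-range syntax mentioned above on a self-contained toy goal (the goal itself is made up purely for demonstration):

    lemma "xs @ [] = xs \<and> rev (rev ys) = ys"
    apply(rule conjI)            (* two subgoals *)
    apply(induct_tac [2] ys)     (* induction only in subgoal 2 *)
    by(auto)
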
--- a/doc-src/TutorialI/Trie/document/Trie.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/Trie/document/Trie.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -5,8 +5,8 @@
 To minimize running time, each node of a trie should contain an array that maps
 letters to subtries. We have chosen a (sometimes) more space efficient
 representation where the subtries are held in an association list, i.e.\ a
-list of (letter,trie) pairs.  Abstracting over the alphabet \isa{'a} and the
-values \isa{'v} we define a trie as follows:%
+list of (letter,trie) pairs.  Abstracting over the alphabet \isa{{\isacharprime}a} and the
+values \isa{{\isacharprime}v} we define a trie as follows:%
 \end{isamarkuptext}%
 \isacommand{datatype}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}v{\isacharparenright}trie\ {\isacharequal}\ Trie\ \ {\isachardoublequote}{\isacharprime}v\ option{\isachardoublequote}\ \ {\isachardoublequote}{\isacharparenleft}{\isacharprime}a\ {\isacharasterisk}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}v{\isacharparenright}trie{\isacharparenright}list{\isachardoublequote}%
 \begin{isamarkuptext}%
@@ -41,7 +41,7 @@
 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\isacharbar}\ Some\ at\ {\isasymRightarrow}\ lookup\ at\ as{\isacharparenright}{\isachardoublequote}%
 \begin{isamarkuptext}%
 As a first simple property we prove that looking up a string in the empty
-trie \isa{Trie~None~[]} always returns \isa{None}. The proof merely
+trie \isa{Trie\ None\ {\isacharbrackleft}{\isacharbrackright}} always returns \isa{None}. The proof merely
 distinguishes the two cases of whether the search string is empty or not:%
 \end{isamarkuptext}%
 \isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}lookup\ {\isacharparenleft}Trie\ None\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}\ as\ {\isacharequal}\ None{\isachardoublequote}\isanewline
@@ -101,7 +101,7 @@
 \begin{isabelle}
 ~1.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs\isanewline
 ~2.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs\isanewline
-~3.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs%
+~3.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs
 \end{isabelle}
 Clearly, if we want to make headway we have to instantiate \isa{bs} as
 well now. It turns out that instead of induction, case distinction
@@ -111,17 +111,17 @@
 \begin{isamarkuptext}%
 \noindent
 All methods ending in \isa{tac} take an optional first argument that
-specifies the range of subgoals they are applied to, where \isa{[!]} means all
-subgoals, i.e.\ \isa{[1-3]} in our case. Individual subgoal numbers,
-e.g. \isa{[2]} are also allowed.
+specifies the range of subgoals they are applied to, where \isa{{\isacharbrackleft}{\isacharbang}{\isacharbrackright}} means
+all subgoals, i.e.\ \isa{{\isacharbrackleft}\isadigit{1}{\isacharminus}\isadigit{3}{\isacharbrackright}} in our case. Individual subgoal numbers,
+e.g.\ \isa{{\isacharbrackleft}\isadigit{2}{\isacharbrackright}}, are also allowed.
 
 This proof may look surprisingly straightforward. However, note that this
-comes at a cost: the proof script is unreadable because the
-intermediate proof states are invisible, and we rely on the (possibly
-brittle) magic of \isa{auto} (\isa{simp\_all} will not do---try it) to split the subgoals
-of the induction up in such a way that case distinction on \isa{bs} makes sense and
-solves the proof. Part~\ref{Isar} shows you how to write readable and stable
-proofs.%
+comes at a cost: the proof script is unreadable because the intermediate
+proof states are invisible, and we rely on the (possibly brittle) magic of
+\isa{auto} (\isa{simp{\isacharunderscore}all} will not do---try it) to split the subgoals
+of the induction up in such a way that case distinction on \isa{bs} makes
+sense and solves the proof. Part~\ref{Isar} shows you how to write readable
+and stable proofs.%
 \end{isamarkuptext}%
 \end{isabellebody}%
 %%% Local Variables:
--- a/doc-src/TutorialI/basics.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/basics.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -71,7 +71,7 @@
 proofs are in separate ML files.
 
 \begin{warn}
-  HOL contains a theory \ttindexbold{Main}, the union of all the basic
+  HOL contains a theory \isaindexbold{Main}, the union of all the basic
   predefined theories like arithmetic, lists, sets, etc.\ (see the online
   library).  Unless you know what you are doing, always include \texttt{Main}
   as a direct or indirect parent theory of all your theories.
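
For instance, a theory that builds directly on \texttt{Main} starts like this (a sketch in the old-style theory-header syntax used by the tutorial theories; the name MyTheory is made up):

    theory MyTheory = Main:
    (* definitions and proofs go here *)
    end
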
@@ -115,7 +115,7 @@
   called \bfindex{type inference}) and keeps quiet about it. Occasionally
   this may lead to misunderstandings between you and the system. If anything
  strange happens, we recommend setting the \rmindex{flag}
-  \ttindexbold{show_types} that tells Isabelle to display type information
+  \isaindexbold{show_types} that tells Isabelle to display type information
   that is usually suppressed: simply type
 \begin{ttbox}
 ML "set show_types"
@@ -198,7 +198,7 @@
 input, you can see which parentheses are dropped---they were superfluous. If
 you are unsure how to interpret Isabelle's output because you don't know
 where the (dropped) parentheses go, set (and possibly reset) the \rmindex{flag}
-\ttindexbold{show_brackets}:
+\isaindexbold{show_brackets}:
 \begin{ttbox}
 ML "set show_brackets"; \(\dots\); ML "reset show_brackets";
 \end{ttbox}
--- a/doc-src/TutorialI/fp.tex	Fri Sep 01 18:29:52 2000 +0200
+++ b/doc-src/TutorialI/fp.tex	Fri Sep 01 19:09:44 2000 +0200
@@ -508,16 +508,7 @@
   $f$~\isasymequiv~\isasymlambda$x\,y.\;t$.
 \end{warn}
 
-\subsubsection{Simplifying let-expressions}
-\index{simplification!of let-expressions}
-
-Proving a goal containing \isaindex{let}-expressions almost invariably
-requires the \isa{let}-con\-structs to be expanded at some point. Since
-\isa{let}-\isa{in} is just syntactic sugar for a predefined constant (called
-\isa{Let}), expanding \isa{let}-constructs means rewriting with
-\isa{Let_def}:
-
-{\makeatother\input{Misc/document/let_rewr.tex}}
+\input{Misc/document/let_rewr.tex}
 
 \subsubsection{Conditional equations}
 
@@ -597,7 +588,7 @@
   \emph{The right-hand side of an equation should (in some sense) be simpler
     than the left-hand side.}
 \end{quote}
-The heuristic is tricky to apply because it is not obvious that
+This heuristic is tricky to apply because it is not obvious that
 \isa{rev xs \at\ ys} is simpler than \isa{itrev xs ys}. But see what
 happens if you try to prove \isa{rev xs \at\ ys = itrev xs ys}!
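One way the proof typically goes through is to generalize \isa{ys} by a universal quantifier before the induction (a sketch; itrev is assumed to have the defining equations itrev [] ys = ys and itrev (x#xs) ys = itrev xs (x#ys)):

    lemma "\<forall>ys. itrev xs ys = rev xs @ ys"
    apply(induct_tac xs)
    by(auto)
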
 
@@ -635,9 +626,9 @@
 How far can we push nested recursion? By the unfolding argument above, we can
 reduce nested to mutual recursion provided the nested recursion only involves
 previously defined datatypes. This does not include functions:
-\begin{ttbox}
-datatype t = C (t => bool)
-\end{ttbox}
+\begin{isabelle}
+\isacommand{datatype} t = C "t \isasymRightarrow\ bool"
+\end{isabelle}
 is a real can of worms: in HOL it must be ruled out because it requires a type
 \isa{t} such that \isa{t} and its power set \isa{t \isasymFun\ bool} have the
 same cardinality---an impossibility. For the same reason it is not possible
@@ -654,14 +645,14 @@
 \input{Datatype/document/Fundata.tex}
 \bigskip
 
-If you need nested recursion on the left of a function arrow,
-there are alternatives to pure HOL: LCF~\cite{paulson87} is a logic where types like
-\begin{ttbox}
-datatype lam = C (lam -> lam)
-\end{ttbox}
-do indeed make sense (note the intentionally different arrow \isa{->}).
-There is even a version of LCF on top of HOL, called
-HOLCF~\cite{MuellerNvOS99}.
+If you need nested recursion on the left of a function arrow, there are
+alternatives to pure HOL: LCF~\cite{paulson87} is a logic where types like
+\begin{isabelle}
+\isacommand{datatype} lam = C "lam \isasymrightarrow\ lam"
+\end{isabelle}
+do indeed make sense (but note the intentionally different arrow
+\isa{\isasymrightarrow}). There is even a version of LCF on top of HOL,
+called HOLCF~\cite{MuellerNvOS99}.
 
 \index{*primrec|)}
 \index{*datatype|)}