fixed many bad line & page breaks
author paulson
Tue, 18 Jan 2000 11:33:31 +0100
changeset 8136 8c65f3ca13f2
parent 8135 ad1c4a678196
child 8137 fb6fe34060ca
fixed many bad line & page breaks
doc-src/Ref/classical.tex
doc-src/Ref/defining.tex
doc-src/Ref/goals.tex
doc-src/Ref/introduction.tex
doc-src/Ref/simplifier.tex
doc-src/Ref/substitution.tex
doc-src/Ref/syntax.tex
doc-src/Ref/tactic.tex
doc-src/Ref/theories.tex
doc-src/Ref/thm.tex
--- a/doc-src/Ref/classical.tex	Tue Jan 18 11:00:10 2000 +0100
+++ b/doc-src/Ref/classical.tex	Tue Jan 18 11:33:31 2000 +0100
@@ -240,21 +240,21 @@
 supports the following operations (provided the classical reasoner is
 installed!):
 \begin{ttbox} 
-empty_cs    : claset
-print_cs    : claset -> unit
-rep_cs      : claset -> {safeEs: thm list, safeIs: thm list,
-                         hazEs: thm list,  hazIs: thm list, 
-                         swrappers: (string * wrapper) list, 
-                         uwrappers: (string * wrapper) list,
-                         safe0_netpair: netpair, safep_netpair: netpair,
-                           haz_netpair: netpair,   dup_netpair: netpair}
-addSIs      : claset * thm list -> claset                 \hfill{\bf infix 4}
-addSEs      : claset * thm list -> claset                 \hfill{\bf infix 4}
-addSDs      : claset * thm list -> claset                 \hfill{\bf infix 4}
-addIs       : claset * thm list -> claset                 \hfill{\bf infix 4}
-addEs       : claset * thm list -> claset                 \hfill{\bf infix 4}
-addDs       : claset * thm list -> claset                 \hfill{\bf infix 4}
-delrules    : claset * thm list -> claset                 \hfill{\bf infix 4}
+empty_cs : claset
+print_cs : claset -> unit
+rep_cs : claset -> \{safeEs: thm list, safeIs: thm list,
+                    hazEs: thm list,  hazIs: thm list, 
+                    swrappers: (string * wrapper) list, 
+                    uwrappers: (string * wrapper) list,
+                    safe0_netpair: netpair, safep_netpair: netpair,
+                    haz_netpair: netpair, dup_netpair: netpair\}
+addSIs   : claset * thm list -> claset                 \hfill{\bf infix 4}
+addSEs   : claset * thm list -> claset                 \hfill{\bf infix 4}
+addSDs   : claset * thm list -> claset                 \hfill{\bf infix 4}
+addIs    : claset * thm list -> claset                 \hfill{\bf infix 4}
+addEs    : claset * thm list -> claset                 \hfill{\bf infix 4}
+addDs    : claset * thm list -> claset                 \hfill{\bf infix 4}
+delrules : claset * thm list -> claset                 \hfill{\bf infix 4}
 \end{ttbox}
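For instance, a claset might be assembled from \texttt{empty_cs} as follows (a sketch, assuming the usual {\FOL} rules \texttt{conjI}, \texttt{impI}, \texttt{conjE}, \texttt{disjE}, \texttt{disjI1} and \texttt{disjI2} are available):
\begin{ttbox}
val cs = empty_cs addSIs [conjI, impI]      (*safe introductions*)
                  addSEs [conjE, disjE]     (*safe eliminations*)
                  addIs  [disjI1, disjI2];  (*unsafe introductions*)
\end{ttbox}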
 The add operations ignore any rule already present in the claset with the same
 classification (such as Safe Introduction).  They print a warning if the rule
@@ -451,9 +451,10 @@
     rangeI} in {\HOL} and \texttt{RepFunI} in {\ZF}.  There are often
     alternatives to such rules, for example {\tt
     range_eqI} and \texttt{RepFun_eqI}.
-\item The message {\small\tt Function Var's argument not a bound variable\ }
-relates to the lack of higher-order unification.  Function variables
-may only be applied to parameters of the subgoal.
+\item Function variables may only be applied to parameters of the subgoal.
+(This restriction arises because the prover does not use higher-order
+unification.)  If other function variables are present then the prover will
+fail with the message {\small\tt Function Var's argument not a bound variable}.
 \item Its proof strategy is more general than \texttt{fast_tac}'s but can be
   slower.  If \texttt{blast_tac} fails or seems to be running forever, try {\tt
   fast_tac} and the other tactics described below.
@@ -555,16 +556,16 @@
 that sets up the classical reasoner.
 \begin{ttdescription}
 \item[\ttindexbold{fast_tac} $cs$ $i$] applies \texttt{step_tac} using
-depth-first search, to prove subgoal~$i$.
+depth-first search to prove subgoal~$i$.
 
 \item[\ttindexbold{best_tac} $cs$ $i$] applies \texttt{step_tac} using
-best-first search, to prove subgoal~$i$.
+best-first search to prove subgoal~$i$.
 
 \item[\ttindexbold{slow_tac} $cs$ $i$] applies \texttt{slow_step_tac} using
-depth-first search, to prove subgoal~$i$.
+depth-first search to prove subgoal~$i$.
 
-\item[\ttindexbold{slow_best_tac} $cs$ $i$] applies \texttt{slow_step_tac} using
-best-first search, to prove subgoal~$i$.
+\item[\ttindexbold{slow_best_tac} $cs$ $i$] applies \texttt{slow_step_tac} with
+best-first search to prove subgoal~$i$.
 \end{ttdescription}
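For example, a propositional goal is typically proved in one such step (a sketch, assuming a classical object-logic where \texttt{claset()} returns a suitable claset):
\begin{ttbox}
Goal "(P | Q) & (P --> R) & (Q --> R) --> R";
by (fast_tac (claset()) 1);
\end{ttbox}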
 
 
@@ -613,8 +614,9 @@
 but allows unknowns to be instantiated.
 
 \item[\ttindexbold{step_tac} $cs$ $i$] is the basic step of the proof
-  procedure.  The (unsafe) wrapper tacticals are applied to a tactic that tries
- \texttt{safe_tac}, \texttt{inst_step_tac}, or applies an unsafe rule from~$cs$.
+  procedure.  The unsafe wrapper tacticals are applied to a tactic that tries
+  \texttt{safe_tac}, \texttt{inst_step_tac}, or applies an unsafe rule
+  from~$cs$.
 
 \item[\ttindexbold{slow_step_tac}] 
   resembles \texttt{step_tac}, but allows backtracking between using safe
--- a/doc-src/Ref/defining.tex	Tue Jan 18 11:00:10 2000 +0100
+++ b/doc-src/Ref/defining.tex	Tue Jan 18 11:33:31 2000 +0100
@@ -120,7 +120,7 @@
      &$|$& $type^{(1)}$ {\tt =>} $type$ & (0) \\
      &$|$& {\tt[}  $type$ {\tt,} \dots {\tt,} $type$ {\tt]} {\tt=>} $type$&(0) \\\\
 $sort$ &=& $id$ ~~$|$~~ $longid$ ~~$|$~~ {\tt\ttlbrace\ttrbrace} ~~$|$~~
-  {\tt\ttlbrace} $id$ ~$|$~ $longid$ {\tt,} \dots {\tt,} $id$ ~$|$~ $longid$ {\tt\ttrbrace}
+  {\tt\ttlbrace} $id$ ~$|$~ $longid${\tt,}\dots{\tt,} $id$ ~$|$~$longid$ {\tt\ttrbrace}
 \end{tabular}
 \index{*PROP symbol}
 \index{*== symbol}\index{*=?= symbol}\index{*==> symbol}
@@ -291,8 +291,8 @@
 \item[\ttindexbold{syn_of} {\it thy}] returns the syntax of the Isabelle
   theory~{\it thy} as an \ML\ value.
 
-\item[\ttindexbold{print_syntax} $thy$] displays the syntax part of $thy$
-  using {\tt Syntax.print_syntax}.
+\item[\ttindexbold{print_syntax} $thy$] uses {\tt Syntax.print_syntax}
+ to display the syntax part of theory $thy$.
 
 \item[\ttindexbold{Syntax.print_syntax} {\it syn}] shows virtually all
   information contained in the syntax {\it syn}.  The displayed output can
@@ -508,7 +508,7 @@
   "-" :: exp => exp          ("- _"    [3] 3)
 end
 \end{ttbox}
-If you put this into a file {\tt ExpSyntax.thy} and load it via {\tt
+If you put this into a file {\tt ExpSyntax.thy} and load it by {\tt
   use_thy"ExpSyntax"}, you can run some tests:
 \begin{ttbox}
 val read_exp = Syntax.test_read (syn_of ExpSyntax.thy) "exp";
--- a/doc-src/Ref/goals.tex	Tue Jan 18 11:00:10 2000 +0100
+++ b/doc-src/Ref/goals.tex	Tue Jan 18 11:33:31 2000 +0100
@@ -484,14 +484,14 @@
 
 \section{Executing batch proofs}
 \index{batch execution}%
-To save space below, let type \texttt{tacfun} abbreviate \texttt{thm list ->
+To save space below, let type \texttt{tacfn} abbreviate \texttt{thm list ->
   tactic list}, which is the type of a tactical proof.
 \begin{ttbox}
-prove_goal :           theory ->             string -> tacfun -> thm
-qed_goal   : string -> theory ->             string -> tacfun -> unit
-prove_goalw:           theory -> thm list -> string -> tacfun -> thm
-qed_goalw  : string -> theory -> thm list -> string -> tacfun -> unit
-prove_goalw_cterm:               thm list -> cterm  -> tacfun -> thm
+prove_goal :           theory ->             string -> tacfn -> thm
+qed_goal   : string -> theory ->             string -> tacfn -> unit
+prove_goalw:           theory -> thm list -> string -> tacfn -> thm
+qed_goalw  : string -> theory -> thm list -> string -> tacfn -> unit
+prove_goalw_cterm:               thm list -> cterm  -> tacfn -> thm
 \end{ttbox}
 These batch functions create an initial proof state, then apply a tactic to
 it, yielding a sequence of final proof states.  The head of the sequence is
@@ -524,7 +524,7 @@
 the given tactic function.
 
 \item[\ttindexbold{qed_goal} $name$ $theory$ $formula$ $tacsf$;] acts
-  like \texttt{prove_goal} but also stores the resulting theorem in the
+  like \texttt{prove_goal} but it also stores the resulting theorem in the
   theorem database associated with its theory and in the {\ML}
   variable $name$ (see \S\ref{ExtractingAndStoringTheProvedTheorem}).
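For example, the following batch proof is a sketch in {\FOL}, assuming the theory value \texttt{FOL.thy} and its standard claset \texttt{FOL_cs} are available:
\begin{ttbox}
val conj_commute = prove_goal FOL.thy "P & Q --> Q & P"
                     (fn _ => [fast_tac FOL_cs 1]);
\end{ttbox}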
 
--- a/doc-src/Ref/introduction.tex	Tue Jan 18 11:00:10 2000 +0100
+++ b/doc-src/Ref/introduction.tex	Tue Jan 18 11:33:31 2000 +0100
@@ -182,7 +182,7 @@
   raises an error if absent.
   
 \item[\ttindexbold{theory} "$name$";] retrieves the theory called $name$ from
-  the internal database of loaded theories, raising an error if absent.
+  the internal data\-base of loaded theories, raising an error if absent.
   
 \item[\ttindexbold{use_thy} "$name$";] reads theory $name$ from the file
   system, looking for $name$\texttt{.thy} and $name$\texttt{.ML} (the latter
--- a/doc-src/Ref/simplifier.tex	Tue Jan 18 11:00:10 2000 +0100
+++ b/doc-src/Ref/simplifier.tex	Tue Jan 18 11:33:31 2000 +0100
@@ -313,8 +313,8 @@
 \end{ttdescription}
 
 \begin{warn}
-  There is a subtle difference between \texttt{(SIMPSET'~$tacf$)} and
-  \texttt{($tacf$~(simpset()))}.  For example \texttt{(SIMPSET'
+  There is a small difference between \texttt{(SIMPSET'~$tacf$)} and
+  \texttt{($tacf\,$(simpset()))}.  For example \texttt{(SIMPSET'
     simp_tac)} would depend on the theory of the proof state it is
   applied to, while \texttt{(simp_tac (simpset()))} implicitly refers
   to the current theory context.  Both are usually the same in proof
@@ -676,8 +676,8 @@
 \[ \Var{P}(if(\Var{Q},\Var{x},\Var{y})) \bimp (\Var{Q} \imp \Var{P}(\Var{x}))
 \conj (\lnot\Var{Q} \imp \Var{P}(\Var{y})) 
 \] 
-Another example is the elimination operator (which happens to be
-called~$split$) for Cartesian products:
+Another example is the elimination operator for Cartesian products (which
+happens to be called~$split$):  
 \[ \Var{P}(split(\Var{f},\Var{p})) \bimp (\forall a~b. \Var{p} =
 \langle a,b\rangle \imp \Var{P}(\Var{f}(a,b))) 
 \] 
@@ -698,12 +698,15 @@
 and hard to control.  Here is an example of use, where \texttt{split_if}
 is the first rule above:
 \begin{ttbox}
-by (simp_tac (simpset() addloop ("split if",split_tac [split_if])) 1);
+by (simp_tac (simpset() 
+                 addloop ("split if", split_tac [split_if])) 1);
 \end{ttbox}
 Users would usually prefer the following shortcut using \texttt{addsplits}:
 \begin{ttbox}
 by (simp_tac (simpset() addsplits [split_if]) 1);
 \end{ttbox}
+Case-splitting on conditional expressions is usually beneficial, so it is
+enabled by default in the object-logics \texttt{HOL} and \texttt{FOL}.
 
 
 \section{The simplification tactics}\label{simp-tactics}
@@ -927,7 +930,8 @@
 Here is a conjecture to be proved for an arbitrary function~$f$
 satisfying the law $f(Suc(\Var{n})) = Suc(f(\Var{n}))$:
 \begin{ttbox}
-val [prem] = Goal "(!!n. f(Suc(n)) = Suc(f(n))) ==> f(i+j) = i+f(j)";
+val [prem] = Goal
+               "(!!n. f(Suc(n)) = Suc(f(n))) ==> f(i+j) = i+f(j)";
 {\out Level 0}
 {\out f(i + j) = i + f(j)}
 {\out  1. f(i + j) = i + f(j)}
@@ -1112,7 +1116,7 @@
 {\out  1. 0 + (sum (\%i. i) 0 + sum (\%i. i) 0) = 0 * 0}
 \ttbreak
 {\out  2. !!n. n + (sum (\%i. i) n + sum (\%i. i) n) = n * n}
-{\out           ==> Suc n + (sum (\%i. i) (Suc n) + sum (\%i. i) (Suc n)) =}
+{\out           ==> Suc n + (sum (\%i. i) (Suc n) + sum (\%i.\,i) (Suc n)) =}
 {\out               Suc n * Suc n}
 \end{ttbox}
 Simplification proves both subgoals immediately:\index{*ALLGOALS}
@@ -1128,7 +1132,7 @@
 {\out Level 3}
 {\out 2 * sum (\%i. i) (Suc n) = n * Suc n}
 {\out  1. !!n. n + sum (\%i. i) n + (n + sum (\%i. i) n) = n + n * n}
-{\out           ==> n + (n + sum (\%i. i) n) + (n + (n + sum (\%i. i) n)) =}
+{\out           ==> n + (n + sum (\%i. i) n) + (n + (n + sum (\%i.\,i) n)) =}
 {\out               n + (n + (n + n * n))}
 \end{ttbox}
 Ordered rewriting proves this by sorting the left-hand side.  Proving
@@ -1438,18 +1442,20 @@
 fun unsafe_solver prems = FIRST'[resolve_tac (triv_rls {\at} prems),
                                  atac, etac FalseE];
 
-fun   safe_solver prems = FIRST'[match_tac (triv_rls {\at} prems),
-                                 eq_assume_tac, ematch_tac [FalseE]];
+fun safe_solver prems = FIRST'[match_tac (triv_rls {\at} prems),
+                               eq_assume_tac, ematch_tac [FalseE]];
 
-val FOL_basic_ss = empty_ss setsubgoaler asm_simp_tac
-                            addsimprocs [defALL_regroup, defEX_regroup]
-                            setSSolver   safe_solver
-                            setSolver  unsafe_solver
-                            setmksimps (map mk_eq o atomize o gen_all);
+val FOL_basic_ss = 
+      empty_ss setsubgoaler asm_simp_tac
+               addsimprocs [defALL_regroup, defEX_regroup]
+               setSSolver   safe_solver
+               setSolver  unsafe_solver
+               setmksimps (map mk_eq o atomize o gen_all);
 
-val IFOL_ss = FOL_basic_ss addsimps (IFOL_simps {\at} 
-                                     int_ex_simps {\at} int_all_simps)
-                           addcongs [imp_cong];
+val IFOL_ss = 
+      FOL_basic_ss addsimps (IFOL_simps {\at} 
+                             int_ex_simps {\at} int_all_simps)
+                   addcongs [imp_cong];
 \end{ttbox}
 This simpset takes \texttt{imp_cong} as a congruence rule in order to use
 contextual information to simplify the conclusions of implications:
--- a/doc-src/Ref/substitution.tex	Tue Jan 18 11:00:10 2000 +0100
+++ b/doc-src/Ref/substitution.tex	Tue Jan 18 11:33:31 2000 +0100
@@ -128,12 +128,12 @@
   val dest_Trueprop    : term -> term
   val dest_eq          : term -> term*term*typ
   val dest_imp         : term -> term*term
-  val eq_reflection    : thm             (* a=b ==> a==b *)
-  val imp_intr         : thm             (* (P ==> Q) ==> P-->Q *)
-  val rev_mp           : thm             (* [| P;  P-->Q |] ==> Q *)
-  val subst            : thm             (* [| a=b;  P(a) |] ==> P(b) *)
-  val sym              : thm             (* a=b ==> b=a *)
-  val thin_refl        : thm             (* [|x=x; P|] ==> P *)
+  val eq_reflection    : thm         (* a=b ==> a==b *)
+  val imp_intr         : thm         (*(P ==> Q) ==> P-->Q *)
+  val rev_mp           : thm         (* [| P;  P-->Q |] ==> Q *)
+  val subst            : thm         (* [| a=b;  P(a) |] ==> P(b) *)
+  val sym              : thm         (* a=b ==> b=a *)
+  val thin_refl        : thm         (* [|x=x; P|] ==> P *)
   end;
 \end{ttbox}
 Thus, the functor requires the following items:
--- a/doc-src/Ref/syntax.tex	Tue Jan 18 11:00:10 2000 +0100
+++ b/doc-src/Ref/syntax.tex	Tue Jan 18 11:33:31 2000 +0100
@@ -259,7 +259,7 @@
     \mtt{dummyT})$:
    \[ \astofterm{\ttfct{Abs} (x, \tau, t)} =
       \Appl{\Constant \mtt{"_abs"},
-        constrain(\Variable x', \tau), \astofterm{t'}}.
+        constrain(\Variable x', \tau), \astofterm{t'}}
     \]
 
   \item $\astofterm{\ttfct{Bound} i} = \Variable \mtt{"B.}i\mtt"$.
@@ -685,10 +685,10 @@
 rules, but not in ordinary terms.  Thus we can create \AST{}s containing
 names that are not directly expressible.
 
-The parse translation for {\tt _K} is already installed in Pure, and {\tt
-dependent_tr'} is exported by the syntax module for public use.  See
-\S\ref{sec:tr_funs} below for more of the arcane lore of translation functions.
-\index{macros|)}\index{rewriting!syntactic|)}
+The parse translation for {\tt _K} is already installed in Pure, and the
+function {\tt dependent_tr'} is exported by the syntax module for public use.
+See \S\ref{sec:tr_funs} below for more of the arcane lore of translation
+functions.  \index{macros|)}\index{rewriting!syntactic|)}
 
 
 \section{Translation functions} \label{sec:tr_funs}
@@ -814,7 +814,8 @@
       if 0 mem (loose_bnos B) then
         let val (x', B') = Syntax.variant_abs' (x, dummyT, B) in
           list_comb
-            (Const (q,{\thinspace}dummyT) $ Syntax.mark_boundT (x',{\thinspace}T) $ A $ B',{\thinspace}ts)
+            (Const (q,dummyT) $
+             Syntax.mark_boundT (x',{\thinspace}T) $ A $ B', ts)
         end
       else list_comb (Const (r, dummyT) $ A $ B, ts)
   | dependent_tr' _ _ = raise Match;
@@ -887,7 +888,8 @@
 \ttindex{token_translation} value within the \texttt{ML} section of a theory
 definition file:
 \begin{ttbox}
-val token_translation: (string * string * (string -> string * real)) list
+val token_translation: 
+      (string * string * (string -> string * real)) list
 \end{ttbox}\index{token_translation}
 The elements of this list are of the form $(m, c, f)$, where $m$ is a print
 mode identifier, $c$ a token class, and $f\colon string \to string \times
--- a/doc-src/Ref/tactic.tex	Tue Jan 18 11:00:10 2000 +0100
+++ b/doc-src/Ref/tactic.tex	Tue Jan 18 11:33:31 2000 +0100
@@ -131,11 +131,11 @@
 instantiates any type unknowns in $\tau$, these instantiations
 are recorded for application to the rule.
 \end{itemize}
-Types are instantiated before terms.  Because type instantiations are
+Types are instantiated before terms are.  Because type instantiations are
 inferred from term instantiations, explicit type instantiations are seldom
 necessary --- if \verb$?t$ has type \verb$?'a$, then the instantiation list
-\verb$[("'a","bool"),("t","True")]$ may be simplified to
-\verb$[("t","True")]$.  Type unknowns in the proof state may cause
+\texttt{[("'a","bool"), ("t","True")]} may be simplified to
+\texttt{[("t","True")]}.  Type unknowns in the proof state may cause
 failure because the tactics cannot instantiate them.
 
 The instantiation tactics act on a given subgoal.  Terms in the
@@ -523,7 +523,7 @@
 net_bimatch_tac  : (bool*thm) list -> int -> tactic
 filt_resolve_tac : thm list -> int -> int -> tactic
 could_unify      : term*term->bool
-filter_thms      : (term*term->bool) -> int*term*thm list -> thm list
+filter_thms      : (term*term->bool) -> int*term*thm list -> thm{\ts}list
 \end{ttbox}
 The module {\tt Net} implements a discrimination net data structure for
 fast selection of rules \cite[Chapter 14]{charniak80}.  A term is
--- a/doc-src/Ref/theories.tex	Tue Jan 18 11:00:10 2000 +0100
+++ b/doc-src/Ref/theories.tex	Tue Jan 18 11:33:31 2000 +0100
@@ -34,7 +34,7 @@
 constituent parts.
 \begin{description}
 \item[{\it theoryDef}] is the full definition.  The new theory is called $id$.
-  It is the union of the named {\bf parent
+  It is the union of the named \textbf{parent
     theories}\indexbold{theories!parent}, possibly extended with new
   components.  \thydx{Pure} and \thydx{CPure} are the basic theories, which
   contain only the meta-logic.  They differ just in their concrete syntax for
@@ -73,14 +73,14 @@
   $(\alpha@1,\dots,\alpha@n)name$, where the type variables serve only to
   indicate the number~$n$.
 
-  A {\bf type synonym}\indexbold{type synonyms} is an abbreviation
+  A \textbf{type synonym}\indexbold{type synonyms} is an abbreviation
   $(\alpha@1,\dots,\alpha@n)name = \tau$, where $name$ and $\tau$ can
   be strings.
 
 \item[$infix$]
-  declares a type or constant to be an infix operator of priority $nat$
-  associating to the left (\texttt{infixl}) or right (\texttt{infixr}).  Only
-  2-place type constructors can have infix status; an example is {\tt
+  declares a type or constant to be an infix operator having priority $nat$
+  and associating to the left (\texttt{infixl}) or right (\texttt{infixr}).
+  Only 2-place type constructors can have infix status; an example is {\tt
   ('a,'b)~"*"~(infixr~20)}, which may express binary product types.
 
 \item[$arities$] is a series of type arity declarations.  Each assigns
@@ -183,7 +183,7 @@
 
 \subsection{Definitions}\indexbold{definitions}
 
-{\bf Definitions} are intended to express abbreviations.  The simplest
+\textbf{Definitions} are intended to express abbreviations.  The simplest
 form of a definition is $f \equiv t$, where $f$ is a constant.
 Isabelle also allows derived forms where the arguments of~$f$ appear
 on the left, abbreviating a string of $\lambda$-abstractions.
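For example, a theory file might define squaring on the natural numbers in the derived form, with the argument on the left (a sketch, assuming a parent theory providing type \texttt{nat} and multiplication):
\begin{ttbox}
consts  square     :: "nat => nat"
defs    square_def    "square(n) == n*n"
\end{ttbox}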
@@ -216,8 +216,8 @@
   excludes the following:
 \begin{ttbox}
 arities
-  foo :: ({\ttlbrace}logic{\ttrbrace}) logic
-  foo :: ({\ttlbrace}{\ttrbrace})logic
+  foo :: (\{logic{\}}) logic
+  foo :: (\{{\}})logic
 \end{ttbox}
 
 \item If there are two declarations $ty :: (s@1,\dots,s@n)c$ and $ty ::
@@ -229,8 +229,8 @@
 following is forbidden:
 \begin{ttbox}
 arities
-  foo :: ({\ttlbrace}logic{\ttrbrace})logic
-  foo :: ({\ttlbrace}{\ttrbrace})term
+  foo :: (\{logic{\}})logic
+  foo :: (\{{\}})term
 \end{ttbox}
 
 \end{itemize}
@@ -249,7 +249,7 @@
 update_thy_only : string -> unit
 touch_thy       : string -> unit
 remove_thy      : string -> unit
-delete_tmpfiles : bool ref \hfill{\bf initially true}
+delete_tmpfiles : bool ref \hfill\textbf{initially true}
 \end{ttbox}
 
 \begin{ttdescription}
@@ -684,23 +684,23 @@
 \end{ttbox}
 \begin{ttdescription}
 \item[\ttindexbold{Const} ($a$, $T$)] \index{constants|bold}
-  is the {\bf constant} with name~$a$ and type~$T$.  Constants include
+  is the \textbf{constant} with name~$a$ and type~$T$.  Constants include
   connectives like $\land$ and $\forall$ as well as constants like~0
   and~$Suc$.  Other constants may be required to define a logic's concrete
   syntax.
 
 \item[\ttindexbold{Free} ($a$, $T$)] \index{variables!free|bold}
-  is the {\bf free variable} with name~$a$ and type~$T$.
+  is the \textbf{free variable} with name~$a$ and type~$T$.
 
 \item[\ttindexbold{Var} ($v$, $T$)] \index{unknowns|bold}
-  is the {\bf scheme variable} with indexname~$v$ and type~$T$.  An
+  is the \textbf{scheme variable} with indexname~$v$ and type~$T$.  An
   \mltydx{indexname} is a string paired with a non-negative index, or
   subscript; a term's scheme variables can be systematically renamed by
   incrementing their subscripts.  Scheme variables are essentially free
   variables, but may be instantiated during unification.
 
 \item[\ttindexbold{Bound} $i$] \index{variables!bound|bold}
-  is the {\bf bound variable} with de Bruijn index~$i$, which counts the
+  is the \textbf{bound variable} with de Bruijn index~$i$, which counts the
   number of lambdas, starting from zero, between a variable's occurrence
   and its binding.  The representation prevents capture of variables.  For
   more information see de Bruijn \cite{debruijn72} or
@@ -708,12 +708,12 @@
 
 \item[\ttindexbold{Abs} ($a$, $T$, $u$)]
   \index{lambda abs@$\lambda$-abstractions|bold}
-  is the $\lambda$-{\bf abstraction} with body~$u$, and whose bound
+  is the $\lambda$-\textbf{abstraction} with body~$u$, and whose bound
   variable has name~$a$ and type~$T$.  The name is used only for parsing
   and printing; it has no logical significance.
 
 \item[$t$ \$ $u$] \index{$@{\tt\$}|bold} \index{function applications|bold}
-is the {\bf application} of~$t$ to~$u$.
+is the \textbf{application} of~$t$ to~$u$.
 \end{ttdescription}
 Application is written as an infix operator to aid readability.
 Here is an \ML\ pattern to recognize \FOL{} formulae of
@@ -729,7 +729,7 @@
 incr_boundvars : int -> term -> term
 abstract_over  : term*term -> term
 variant_abs    : string * typ * term -> string * term
-aconv          : term * term -> bool\hfill{\bf infix}
+aconv          : term * term -> bool\hfill\textbf{infix}
 \end{ttbox}
 These functions are all concerned with the de Bruijn representation of
 bound variables.
@@ -782,7 +782,7 @@
 \end{ttdescription}
 
 \section{Certified terms}\index{terms!certified|bold}\index{signatures}
-A term $t$ can be {\bf certified} under a signature to ensure that every type
+A term $t$ can be \textbf{certified} under a signature to ensure that every type
 in~$t$ is well-formed and every constant in~$t$ is a type instance of a
 constant declared in the signature.  The term must be well-typed and its use
 of bound variables must be well-formed.  Meta-rules such as \texttt{forall_elim}
@@ -808,11 +808,11 @@
 
 \subsection{Making and inspecting certified terms}
 \begin{ttbox}
-cterm_of          : Sign.sg -> term -> cterm
-read_cterm        : Sign.sg -> string * typ -> cterm
-cert_axm          : Sign.sg -> string * term -> string * term
-read_axm          : Sign.sg -> string * string -> string * term
-rep_cterm         : cterm -> {\ttlbrace}T:typ, t:term, sign:Sign.sg, maxidx:int\ttrbrace
+cterm_of       : Sign.sg -> term -> cterm
+read_cterm     : Sign.sg -> string * typ -> cterm
+cert_axm       : Sign.sg -> string * term -> string * term
+read_axm       : Sign.sg -> string * string -> string * term
+rep_cterm      : cterm -> \{T:typ, t:term, sign:Sign.sg, maxidx:int\}
 Sign.certify_term : Sign.sg -> term -> term * typ * int
 \end{ttbox}
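For instance, a proposition can be parsed and certified in one step (a sketch, assuming theory \texttt{FOL} is loaded and \texttt{sign_of} extracts its signature):
\begin{ttbox}
val ct = read_cterm (sign_of FOL.thy) ("P --> P", propT);
\end{ttbox}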
 \begin{ttdescription}
@@ -867,7 +867,7 @@
 \end{ttbox}
 \begin{ttdescription}
 \item[\ttindexbold{Type} ($a$, $Ts$)] \index{type constructors|bold}
-  applies the {\bf type constructor} named~$a$ to the type operands~$Ts$.
+  applies the \textbf{type constructor} named~$a$ to the type operand list~$Ts$.
   Type constructors include~\tydx{fun}, the binary function space
   constructor, as well as nullary type constructors such as~\tydx{prop}.
   Other type constructors may be introduced.  In expressions, but not in
@@ -875,10 +875,10 @@
   types.
 
 \item[\ttindexbold{TFree} ($a$, $s$)] \index{type variables|bold}
-  is the {\bf type variable} with name~$a$ and sort~$s$.
+  is the \textbf{type variable} with name~$a$ and sort~$s$.
 
 \item[\ttindexbold{TVar} ($v$, $s$)] \index{type unknowns|bold}
-  is the {\bf type unknown} with indexname~$v$ and sort~$s$.
+  is the \textbf{type unknown} with indexname~$v$ and sort~$s$.
   Type unknowns are essentially free type variables, but may be
   instantiated during unification.
 \end{ttdescription}
@@ -907,7 +907,7 @@
 \subsection{Making and inspecting certified types}
 \begin{ttbox}
 ctyp_of          : Sign.sg -> typ -> ctyp
-rep_ctyp         : ctyp -> {\ttlbrace}T: typ, sign: Sign.sg\ttrbrace
+rep_ctyp         : ctyp -> \{T: typ, sign: Sign.sg\}
 Sign.certify_typ : Sign.sg -> typ -> typ
 \end{ttbox}
 \begin{ttdescription}
--- a/doc-src/Ref/thm.tex	Tue Jan 18 11:00:10 2000 +0100
+++ b/doc-src/Ref/thm.tex	Tue Jan 18 11:33:31 2000 +0100
@@ -73,12 +73,12 @@
 \index{theorems!joining by resolution}
 \index{resolution}\index{forward proof}
 \begin{ttbox} 
-RSN : thm * (int * thm) -> thm                 \hfill{\bf infix}
-RS  : thm * thm -> thm                         \hfill{\bf infix}
-MRS : thm list * thm -> thm                    \hfill{\bf infix}
-RLN : thm list * (int * thm list) -> thm list  \hfill{\bf infix}
-RL  : thm list * thm list -> thm list          \hfill{\bf infix}
-MRL : thm list list * thm list -> thm list     \hfill{\bf infix}
+RSN : thm * (int * thm) -> thm                 \hfill\textbf{infix}
+RS  : thm * thm -> thm                         \hfill\textbf{infix}
+MRS : thm list * thm -> thm                    \hfill\textbf{infix}
+RLN : thm list * (int * thm list) -> thm list  \hfill\textbf{infix}
+RL  : thm list * thm list -> thm list          \hfill\textbf{infix}
+MRL : thm list list * thm list -> thm list     \hfill\textbf{infix}
 \end{ttbox}
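As a quick illustration, resolution can build a derived rule from two copies of conjunction elimination (a sketch using the usual rule \texttt{conjunct1}):
\begin{ttbox}
val th = conjunct1 RS conjunct1;
(*yields the rule  (?P & ?Q) & ?R ==> ?P *)
\end{ttbox}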
 Joining rules together is a simple way of deriving new rules.  These
 functions are especially useful with destruction rules.  To store
@@ -126,20 +126,20 @@
 unfolds the {\it defs} throughout the theorem~{\it thm}.
 
 \item[\ttindexbold{rewrite_goals_rule} {\it defs} {\it thm}]  
-unfolds the {\it defs} in the premises of~{\it thm}, but leaves the
-conclusion unchanged.  This rule underlies \ttindex{rewrite_goals_tac}, but 
-serves little purpose in forward proof.
+unfolds the {\it defs} in the premises of~{\it thm}, but it leaves the
+conclusion unchanged.  This rule is the basis for \ttindex{rewrite_goals_tac},
+but it serves little purpose in forward proof.
 \end{ttdescription}
 
 
 \subsection{Instantiating unknowns in a theorem} \label{sec:instantiate}
 \index{instantiation}
-\begin{ttbox}
+\begin{alltt}\footnotesize
 read_instantiate    :                (string*string) list -> thm -> thm
 read_instantiate_sg :     Sign.sg -> (string*string) list -> thm -> thm
 cterm_instantiate   :                  (cterm*cterm) list -> thm -> thm
 instantiate'      : ctyp option list -> cterm option list -> thm -> thm
-\end{ttbox}
+\end{alltt}
 These meta-rules instantiate type and term unknowns in a theorem.  They are
 occasionally useful.  They can prevent difficulties with higher-order
 unification, and define specialized versions of rules.
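For example, one can specialize the $\forall$-elimination rule \texttt{spec} by plugging in a particular term (a sketch; \texttt{a} stands for any constant of a suitable type):
\begin{ttbox}
val spec_a = read_instantiate [("x","a")] spec;
(*ALL x. ?P(x) ==> ?P(a)*)
\end{ttbox}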
@@ -201,7 +201,8 @@
 
 \item[\ttindexbold{make_elim} $thm$] 
 \index{rules!converting destruction to elimination}
-converts $thm$, a destruction rule of the form $\List{P@1;\ldots;P@m}\Imp
+converts $thm$, which should be  a destruction rule of the form
+$\List{P@1;\ldots;P@m}\Imp 
 Q$, to the elimination rule $\List{P@1; \ldots; P@m; Q\Imp R}\Imp R$.  This
 is the basis for destruct-resolution: {\tt dresolve_tac}, etc.
 
@@ -234,11 +235,11 @@
 tpairs_of     : thm -> (term*term) list
 sign_of_thm   : thm -> Sign.sg
 theory_of_thm : thm -> theory
-dest_state    : thm * int -> (term*term) list * term list * term * term
-rep_thm       : thm -> {\ttlbrace}sign_ref: Sign.sg_ref, der: deriv, maxidx: int,
-                        shyps: sort list, hyps: term list, prop: term\ttrbrace
-crep_thm      : thm -> {\ttlbrace}sign_ref: Sign.sg_ref, der: deriv, maxidx: int,
-                        shyps: sort list, hyps: cterm list, prop: cterm\ttrbrace
+dest_state : thm * int -> (term*term) list * term list * term * term
+rep_thm    : thm -> \{sign_ref: Sign.sg_ref, der: deriv, maxidx: int,
+                     shyps: sort list, hyps: term list, prop: term\}
+crep_thm   : thm -> \{sign_ref: Sign.sg_ref, der: deriv, maxidx: int,
+                     shyps: sort list, hyps: cterm list, prop:{\ts}cterm\}
 \end{ttbox}
 \begin{ttdescription}
 \item[\ttindexbold{cprop_of} $thm$] returns the statement of $thm$ as
@@ -333,10 +334,10 @@
 \subsection{Tracing flags for unification}
 \index{tracing!of unification}
 \begin{ttbox} 
-Unify.trace_simp   : bool ref \hfill{\bf initially false}
-Unify.trace_types  : bool ref \hfill{\bf initially false}
-Unify.trace_bound  : int ref \hfill{\bf initially 10}
-Unify.search_bound : int ref \hfill{\bf initially 20}
+Unify.trace_simp   : bool ref \hfill\textbf{initially false}
+Unify.trace_types  : bool ref \hfill\textbf{initially false}
+Unify.trace_bound  : int ref \hfill\textbf{initially 10}
+Unify.search_bound : int ref \hfill\textbf{initially 20}
 \end{ttbox}
 Tracing the search may be useful when higher-order unification behaves
 unexpectedly.  Letting {\tt res_inst_tac} circumvent the problem is easier,
@@ -354,10 +355,10 @@
 Use $n=0$ for full tracing.  At the default value of~10, tracing
 information is almost never printed.
 
-\item[Unify.search_bound := $n$;] causes unification to limit its
-  search to depth~$n$.  Because of this bound, higher-order
+\item[Unify.search_bound := $n$;] prevents unification from
+  searching past the depth~$n$.  Because of this bound, higher-order
   unification cannot return an infinite sequence, though it can return
-  a very long one.  The search rarely approaches the default value
+  an exponentially long one.  The search rarely approaches the default value
   of~20.  If the search is cut off, unification prints a warning
   \texttt{Unification bound exceeded}.
 \end{ttdescription}
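For instance, one might tighten these bounds before attempting a proof that is expected to strain unification (a sketch):
\begin{ttbox}
Unify.trace_bound  := 5;     (*print tracing from depth 5 onwards*)
Unify.search_bound := 15;    (*abandon the search at depth 15*)
\end{ttbox}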
@@ -399,7 +400,7 @@
 \bigskip
 
 \index{meta-implication}
-The {\bf implication} rules are $({\Imp}I)$
+The \textbf{implication} rules are $({\Imp}I)$
 and $({\Imp}E)$:
 \[ \infer[({\Imp}I)]{\phi\Imp \psi}{\infer*{\psi}{[\phi]}}  \qquad
    \infer[({\Imp}E)]{\psi}{\phi\Imp \psi & \phi}  \]
@@ -411,7 +412,7 @@
    \qquad
    \infer[({\equiv}E)]{\psi}{\phi\equiv \psi & \phi}   \]
 
-The {\bf equality} rules are reflexivity, symmetry, and transitivity:
+The \textbf{equality} rules are reflexivity, symmetry, and transitivity:
 \[ {a\equiv a}\,(refl)  \qquad
    \infer[(sym)]{b\equiv a}{a\equiv b}  \qquad
    \infer[(trans)]{a\equiv c}{a\equiv b & b\equiv c}   \]
@@ -424,14 +425,14 @@
    {((\lambda x.a)(b)) \equiv a[b/x]}           \qquad
    \infer[(ext)]{f\equiv g}{f(x) \equiv g(x)}   \]
 
-The {\bf abstraction} and {\bf combination} rules let conversions be
+The \textbf{abstraction} and \textbf{combination} rules let conversions be
 applied to subterms:\footnote{Abstraction holds if $x$ is not free in the
 assumptions.}
 \[  \infer[(abs)]{(\lambda x.a) \equiv (\lambda x.b)}{a\equiv b}   \qquad
     \infer[(comb)]{f(a)\equiv g(b)}{f\equiv g & a\equiv b}   \]
 
 \index{meta-quantifiers}
-The {\bf universal quantification} rules are $(\Forall I)$ and $(\Forall
+The \textbf{universal quantification} rules are $(\Forall I)$ and $(\Forall
 E)$:\footnote{$(\Forall I)$ holds if $x$ is not free in the assumptions.}
 \[ \infer[(\Forall I)]{\Forall x.\phi}{\phi}        \qquad
    \infer[(\Forall E)]{\phi[b/x]}{\Forall x.\phi}   \]
@@ -607,16 +608,16 @@
 
 \subsection{Instantiation of unknowns}
 \index{instantiation}
-\begin{ttbox} 
+\begin{alltt}\footnotesize
 instantiate: (indexname * ctyp){\thinspace}list * (cterm * cterm){\thinspace}list -> thm -> thm
-\end{ttbox}
+\end{alltt}
 There are two versions of this rule.  The primitive one is
 \ttindexbold{Thm.instantiate}, which merely performs the instantiation and can
 produce a conclusion not in normal form.  A derived version is  
 \ttindexbold{Drule.instantiate}, which normalizes its conclusion.
 
 \begin{ttdescription}
-\item[\ttindexbold{instantiate} ($tyinsts$, $insts$) $thm$] 
+\item[\ttindexbold{instantiate} ($tyinsts$,$insts$) $thm$] 
 simultaneously substitutes types for type unknowns (the
 $tyinsts$) and terms for term unknowns (the $insts$).  Instantiations are
 given as $(v,t)$ pairs, where $v$ is an unknown and $t$ is a term (of the