first draft of Springer book

author      lcp
date        Mon, 21 Mar 1994 11:02:57 +0100
changeset   286 e7efbf03562b
parent      285 fd4a6585e5bf
child       287 6b62a6ddbe15
doc-src/Ref/classical.tex
doc-src/Ref/goals.tex
doc-src/Ref/introduction.tex
doc-src/Ref/ref.tex
doc-src/Ref/simp.tex
doc-src/Ref/simplifier.tex
doc-src/Ref/substitution.tex
doc-src/Ref/tactic.tex
doc-src/Ref/tctical.tex
doc-src/Ref/theories.tex
doc-src/Ref/thm.tex
doc-src/Ref/undocumented.tex
--- a/doc-src/Ref/classical.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/classical.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -1,6 +1,6 @@
 %% $Id$
-\chapter{The classical theorem prover}
-\index{classical prover|(}
+\chapter{The Classical Reasoner}
+\index{classical reasoner|(}
 \newcommand\ainfer[2]{\begin{array}{r@{}l}#2\\ \hline#1\end{array}}
 Although Isabelle is generic, many users will be working in some extension
 of classical first-order logic (FOL).  Isabelle's set theory is built upon
@@ -8,7 +8,7 @@
 in FOL is of course undecidable, but many researchers have developed
 strategies to assist in this task.
 
-Isabelle's classical module is an \ML{} functor that accepts certain
+Isabelle's classical reasoner is an \ML{} functor that accepts certain
 information about a logic and delivers a suite of automatic tactics.  Each
 tactic takes a collection of rules and executes a simple, non-clausal proof
 procedure.  They are slow and simplistic compared with resolution theorem
@@ -26,7 +26,7 @@
 proofs, but the tactics can be traced, and their components can be called
 directly.  In this manner, any proof can be viewed interactively.
 
-The logics FOL, HOL and ZF have the classical prover already installed.  We
+The logics FOL, HOL and ZF have the classical reasoner already installed.  We
 shall first consider how to use it, then see how to instantiate it for new
 logics. 
 
@@ -160,7 +160,7 @@
 All introduction rules mentioned above are also useful in swapped form.
 
 Replication makes the search space infinite; we must apply the rules with
-care.  The classical package distinguishes between safe and unsafe
+care.  The classical reasoner distinguishes between safe and unsafe
 rules, applying the latter only when there is no alternative.  Depth-first
 search may well go down a blind alley; best-first search is better behaved
 in an infinite search space.  However, quantifier replication is too
@@ -197,7 +197,7 @@
 \ttindex{eq_assume_tac} must be used, rather than \ttindex{assume_tac}.
 
 Classical rule sets belong to the abstract type \ttindexbold{claset}, which
-supports the following operations (provided the classical prover is
+supports the following operations (provided the classical reasoner is
 installed!):
 \begin{ttbox} 
 empty_cs : claset
@@ -237,9 +237,9 @@
 Introduction rules are those that can be applied using ordinary resolution.
 The classical set automatically generates their swapped forms, which will
 be applied using elim-resolution.  Elimination rules are applied using
-elim-resolution.  In a classical set, rules are sorted the number of new
-subgoals they will yield, so that rules that generate the fewest subgoals
-will be tried first (see \S\ref{biresolve_tac}).
+elim-resolution.  In a classical set, rules are sorted by the number of new
+subgoals they will yield; rules that generate the fewest subgoals will be
+tried first (see \S\ref{biresolve_tac}).
 
 For a given classical set, the proof strategy is simple.  Perform as many
 safe inferences as possible; or else, apply certain safe rules, allowing
@@ -306,7 +306,7 @@
 \item[\ttindexbold{best_tac} $cs$ $i$] applies {\tt step_tac} using
 best-first search, to solve subgoal~$i$.  A heuristic function ---
 typically, the total size of the proof state --- guides the search.  This
-function is supplied when the classical module is set up.
+function is supplied when the classical reasoner is set up.
 \end{description}
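+
+For example, the search tactics are typically applied with an extended
+claset.  The following sketch assumes the {\tt FOL} claset {\tt FOL_cs}
+and the operators {\tt addSIs} and {\tt addSEs} for adding safe
+introduction and elimination rules; the rule names are illustrative:
+\begin{ttbox}
+val my_cs = FOL_cs addSIs [conjI] addSEs [disjE];
+by (fast_tac my_cs 1);
+\end{ttbox}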
 
 
@@ -360,11 +360,11 @@
 \end{description}
 
 
-\section{Setting up the classical prover}
-\index{classical prover!setting up|bold}
+\section{Setting up the classical reasoner}
+\index{classical reasoner!setting up|bold}
 Isabelle's classical object-logics, including {\tt FOL} and {\tt HOL}, have
-the classical prover already set up.  When defining a new classical logic,
-you should set up the prover yourself.  It consists of the \ML{} functor
+the classical reasoner already set up.  When defining a new classical logic,
+you should set up the reasoner yourself.  It consists of the \ML{} functor
 \ttindex{ClassicalFun}, which takes the argument
 signature\indexbold{*CLASSICAL_DATA}
 \begin{ttbox} 
--- a/doc-src/Ref/goals.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/goals.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -102,19 +102,16 @@
 tactics})}.
 \end{description}
 
-\noindent{\bf Error indications:}
+\noindent{\it Error indications:}\nobreak
 \begin{itemize}
-\item 
-{\tt "by:\ tactic failed"} means that the tactic returned an empty
-sequence when applied to the current proof state. 
-\item 
-{\tt "Warning:\ same as previous level"} means that the new proof state
-is identical to the previous state.
-\item
-{\tt "Warning:\ signature of proof state has changed"} means that a rule
-was applied from outside the theory of the initial proof state.  This
-guards against mistakes such as expressing the goal in intuitionistic logic
-and proving it using classical logic.
+\item {\footnotesize\tt "by:\ tactic failed"} means that the tactic
+  returned an empty sequence when applied to the current proof state.
+\item {\footnotesize\tt "Warning:\ same as previous level"} means that the
+  new proof state is identical to the previous state.
+\item{\footnotesize\tt "Warning:\ signature of proof state has changed"}
+  means that some rule was applied whose theory is outside the theory of
+  the initial proof state.  This could signify a mistake such as expressing
+  the goal in intuitionistic logic and proving it using classical logic.
 \end{itemize}
 
 \subsection{Extracting the proved theorem}
@@ -145,7 +142,6 @@
 
 
 \subsection{Undoing and backtracking}
-\index{undoing a proof command|bold}
 \index{backtracking command|see{\tt back}}
 \begin{ttbox} 
 chop    : unit -> unit
@@ -165,7 +161,8 @@
 \item[\ttindexbold{back}();]
 searches the state list for a non-empty branch point, starting from the top
 level.  The first one found becomes the current proof state --- the most
-recent alternative branch is taken.
+recent alternative branch is taken.  This is a form of interactive
+backtracking.
 
 \item[\ttindexbold{undo}();] 
 cancels the most recent change to the proof state by the commands \ttindex{by},
@@ -173,18 +170,18 @@
 cancel {\tt goal} or {\tt undo} itself.  It can be repeated to
 cancel a series of commands.
 \end{description}
-\noindent{\bf Error indications for {\tt back}:}
+
+\goodbreak
+\noindent{\it Error indications for {\tt back}:}\par\nobreak
 \begin{itemize}
-\item
-{\tt"Warning:\ same as previous choice at this level"} means that {\tt
-back} found a non-empty branch point, but that it contained the same proof
-state as the current one.  
-\item
-{\tt "Warning:\ signature of proof state has changed"} means the signature
-of the alternative proof state differs from that of the current state.
-\item 
-{\tt "back:\ no alternatives"} means {\tt back} could find no alternative
-proof state.
+\item{\footnotesize\tt"Warning:\ same as previous choice at this level"}
+  means {\tt back} found a non-empty branch point, but that it contained
+  the same proof state as the current one.
+\item{\footnotesize\tt "Warning:\ signature of proof state has changed"}
+  means the signature of the alternative proof state differs from that of
+  the current state.
+\item {\footnotesize\tt "back:\ no alternatives"} means {\tt back} could
+  find no alternative proof state.
 \end{itemize}
 
 \subsection{Printing the proof state}
@@ -349,7 +346,7 @@
 \subsection{Executing batch proofs}
 \index{proofs!batch|bold}
 \index{batch execution|bold}
-\begin{ttbox} 
+\begin{ttbox}
 prove_goal  :  theory->          string->(thm list->tactic list)->thm
 prove_goalw :  theory->thm list->string->(thm list->tactic list)->thm
 prove_goalw_cterm: thm list->Sign.cterm->(thm list->tactic list)->thm
@@ -391,16 +388,18 @@
 the given tactic function.
 
 \item[\ttindexbold{prove_goalw} {\it theory} {\it defs} {\it formula} 
-      {\it tacsf}; ]
+      {\it tacsf}; ]\index{rewriting!meta-level}
 is like {\tt prove_goal} but also applies the list of {\it defs\/} as
 meta-rewrite rules to the first subgoal and the premises.
-\index{rewriting!meta-level}
 
 \item[\ttindexbold{prove_goalw_cterm} {\it theory} {\it defs} {\it ct}
       {\it tacsf}; ] 
 is a version of {\tt prove_goalw} for special applications.  The main
 goal is supplied as a cterm, not as a string.  Typically, the cterm is
-created from a term~$t$ by \hbox{\tt Sign.cterm_of (sign_of thy) $t$}.
+created from a term~$t$ as follows:
+\begin{ttbox}
+Sign.cterm_of (sign_of thy) \(t\)
+\end{ttbox}
 \index{*Sign.cterm_of}\index{*sign_of}
 \end{description}
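+
+For example, a simple batch proof might read as follows, assuming the
+theory {\tt FOL} and its claset {\tt FOL_cs} have been loaded:
+\begin{ttbox}
+val imp_refl = prove_goal FOL.thy "P --> P"
+    (fn prems => [ (fast_tac FOL_cs 1) ]);
+\end{ttbox}
+The list of tactics proves the goal outright; the resulting theorem is
+bound to the \ML{} identifier {\tt imp_refl}.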
 
--- a/doc-src/Ref/introduction.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/introduction.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -1,27 +1,22 @@
 %% $Id$
-\maketitle 
-\pagenumbering{roman} \tableofcontents \clearfirst
+\chapter{Basic Use of Isabelle}\index{sessions|(} 
+The Reference Manual is a comprehensive description of Isabelle, including
+all commands, functions and packages.  It really is intended for reference,
+perhaps for browsing, but not for reading through.  It is not a tutorial,
+but assumes familiarity with the basic concepts of Isabelle.
 
-\chapter{Introduction} 
-\index{sessions|(}
-This manual is a comprehensive description of Isabelle, including all
-commands, functions and packages.  Please do not read it: it is intended
-for reference.  It is not a tutorial!  The manual assumes
-familiarity with the basic concepts explained in {\em Introduction to
-Isabelle}.
-
-Functions are organized by their purpose, by their operands (subgoals,
-tactics, theorems), and by their usefulness.  In each section, basic
-functions appear first, then advanced functions, and finally esoteric
-functions.  When you are looking for a way of performing some task, scan
-the Table of Contents for a relevant heading.
+When you are looking for a way of performing some task, scan the Table of
+Contents for a relevant heading.  Functions are organized by their purpose,
+by their operands (subgoals, tactics, theorems), and by their usefulness.
+In each section, basic functions appear first, then advanced functions, and
+finally esoteric functions.
 
 The Index provides an alphabetical listing.  Page numbers of definitions
 are printed in {\bf bold}, passing references in Roman type.  Use the Index
 when you are looking for the definition of a particular Isabelle function.
 
-This manual contains few examples.  Many files of examples are distributed
-with Isabelle, however; please experiment interactively.
+A few examples are presented.  Many example files are distributed with
+Isabelle, however; please experiment interactively.
 
 
 \section{Basic interaction with Isabelle}
@@ -30,7 +25,7 @@
 Theorems are simply part of the \ML{} state and are named by \ML{}
 identifiers.  To save your work between sessions, you must save a copy of
 the \ML{} image.  The procedure for doing so is compiler-dependent:
-\begin{itemize}
+\begin{itemize}\index{Poly/\ML}
 \item At the end of a session, Poly/\ML{} saves the state, provided you have
 created a database for your own use.  You can create a database by copying
 an existing one, or by calling the Poly/\ML{} function {\tt make_database};
@@ -72,7 +67,8 @@
 
 \item[\ttindexbold{commit}();]  saves the current state in your
 Poly/\ML{} database without finishing the session.  Values of reference
-variables are lost, so never do this during an interactive proof!
+variables are lost, so never do this during an interactive
+proof!\index{Poly/\ML} 
 
 \item[\ttindexbold{exportML} \tt"{\it file}";]  saves an
 image of your session to the given {\it file}.
@@ -86,7 +82,7 @@
 
 
 \section{Reading files of proofs and theories}
-\index{files, reading|bold}
+\index{files!reading|bold}
 \begin{ttbox} 
 cd              : string -> unit
 use             : string -> unit
@@ -121,9 +117,10 @@
 reads all theories that have changed since the last time they have been read.
 See Chapter~\ref{theories} for details.
 
-\item[\ttindexbold{loadpath}] contains a list of paths that is used by 
-{\tt use_thy} and {\tt update} to find {\it tname}{\tt.ML} and {\it tname}
-{\tt.thy}.  See Chapter~\ref{theories} for details.
+\item[\ttindexbold{loadpath}] 
+  contains a list of paths that is used by {\tt use_thy} and {\tt update}
+  to find {\it tname}{\tt.ML} and {\it tname}{\tt.thy}.  See
+  Chapter~\ref{theories} for details.
 \end{description}
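+
+For example, the following adds a directory to the search path and reads
+a theory from it (the directory and theory names are illustrative):
+\begin{ttbox}
+loadpath := [".", "ZF/ex"];
+use_thy "Ramsey";
+\end{ttbox}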
 
 
@@ -252,10 +249,9 @@
 of them depend upon shell environment variables.
 \begin{description}
 \item[\ttindexbold{make-all} $switches$] 
-compiles the Isabelle system, when executed on the source directory.
-Environment variables specify which \ML{} compiler (and {\tt Makefile}s) to
-use.  These variables, and the {\it switches}, are documented on the file
-itself.
+  compiles the Isabelle system, when executed in the source directory.
+  Environment variables specify which \ML{} compiler to use.  These
+  variables, and the {\it switches}, are documented in the file itself.
 
 \item[\ttindexbold{teeinput} $program$] 
 executes the {\it program}, while piping the standard input to a log file
@@ -272,8 +268,8 @@
 
 \item[\ttindexbold{expandshort} $files$] 
   reads the {\it files\/} and replaces all occurrences of the shorthand
-  commands {\tt br}, {\tt be}, {\tt brs}, {\tt bes}, etc., with commands
-  like \hbox{\tt by (resolve_tac \ldots)}.  The commands should appear one
+  commands {\tt br}, {\tt be}, {\tt brs}, {\tt bes}, etc., with the
+  corresponding full commands.  Shorthand commands should appear one
   per line.  The old versions of the files
   are renamed to have the suffix~\verb'~~'.
 \end{description}
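+
+For example, the following call expands the shorthand commands in all
+\ML{} files of a directory (the path is illustrative):
+\begin{ttbox}
+expandshort FOL/ex/*.ML
+\end{ttbox}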
--- a/doc-src/Ref/ref.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/ref.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -1,4 +1,4 @@
-\documentstyle[a4,12pt,proof,iman,alltt]{report}
+\documentstyle[a4,12pt,proof,iman,extra]{report}
 %% $Id$
 %%% \includeonly{thm}
 %%% to index ids: \[\\tt \([a-zA-Z0-9][a-zA-Z0-9_'.]*\)    [\\ttindexbold{\1}
@@ -12,7 +12,7 @@
  Carsten Clasohm also contributed to Chapter~6.
  Sara Kalvala and Markus Wenzel suggested changes and corrections.
  The research has been funded by the SERC (grants GR/G53279, GR/H40570)
-  and by ESPRIT project 6453: Types).} 
+  and by ESPRIT project 6453: Types.} 
 \\  
         Computer Laboratory \\ University of Cambridge \\[2ex]
         {\small{\em Electronic mail\/}: {\tt lcp@cl.cam.ac.uk}} \\[3cm]
@@ -33,6 +33,10 @@
 \index{definitions|see{rewriting, meta-level}}
 \index{rewriting!object-level|see{simplification}}
 \index{rules!meta-level|see{meta-rules}}
+
+\maketitle 
+\pagenumbering{roman} \tableofcontents \clearfirst
+
 \include{introduction}
 \include{goals}
 \include{tactic}
@@ -46,7 +50,10 @@
 %%seealso's must be last so that they appear last in the index entries
 \index{rewriting!meta-level|seealso{tactics, theorems}}
 
-\bibliographystyle{plain} \small\raggedright\frenchspacing
-\bibliography{atp,funprog,general,logicprog,theory}
-\input{ref.ind}
+\begingroup
+  \bibliographystyle{plain} \small\raggedright\frenchspacing
+  \bibliography{atp,funprog,general,logicprog,theory}
+\endgroup
+\include{Ref/theory-syntax}
+\addcontentsline{toc}{chapter}{Index}\input{ref.ind}
 \end{document}
--- a/doc-src/Ref/simp.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/simp.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -22,7 +22,7 @@
 the simplifier has been set up already. Hence we start by describing the
 functions provided by the simplifier --- those functions exported by
 \ttindex{SimpFun} through its result signature \ttindex{SIMP} shown in
-Figure~\ref{SIMP}.  
+Fig.\ts\ref{SIMP}.  
 
 
 \section{Simplification sets}
--- a/doc-src/Ref/simplifier.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/simplifier.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -2,7 +2,6 @@
 \chapter{Simplification} \label{simp-chap}
 \index{simplification|(}
 
-
 This chapter describes Isabelle's generic simplification package, which
 provides a suite of simplification tactics.  This rewriting package is less
 general than its predecessor --- it works only for the equality relation,
@@ -14,9 +13,7 @@
 {\tt LCF} and {\tt HOL}.
 
 
-\section{Simplification sets}
-\index{simplification sets} 
-
+\section{Simplification sets}\index{simplification sets} 
 The simplification tactics are controlled by {\bf simpsets}.  These consist
 of five components: rewrite rules, congruence rules, the subgoaler, the
 solver and the looper.  Normally, the simplifier is set up with sensible
@@ -25,7 +22,6 @@
 most users should never worry about them.
 
 \subsection{Rewrite rules}\index{rewrite rules}
-
 Rewrite rules are theorems like $Suc(\Var{m}) + \Var{n} = \Var{m} +
 Suc(\Var{n})$, $\Var{P}\conj\Var{P} \bimp \Var{P}$, or $\Var{A} \union \Var{B}
 \equiv \{x.x\in A \disj x\in B\}$.  {\bf Conditional} rewrites such as
@@ -73,9 +69,12 @@
 simplifying~$P@2$.  Such ``local'' assumptions are effective for rewriting
 formulae such as $x=0\imp y+x=y$.  The next example makes similar use of
 such contextual information in bounded quantifiers:
-\[ \List{\Var{A}=\Var{B};\; \Forall x. x\in \Var{B} \Imp \Var{P}(x) = \Var{Q}(x)}
-   \Imp (\forall x\in \Var{A}.\Var{P}(x)) = (\forall x\in \Var{B}.\Var{Q}(x))
-\]
+\begin{eqnarray*}
+  &&\List{\Var{A}=\Var{B};\; 
+          \Forall x. x\in \Var{B} \Imp \Var{P}(x) = \Var{Q}(x)} \Imp{} \\
+ &&\qquad\qquad
+    (\forall x\in \Var{A}.\Var{P}(x)) = (\forall x\in \Var{B}.\Var{Q}(x))
+\end{eqnarray*}
 This congruence rule supplies contextual information for simplifying the
 arms of a conditional expression:
 \[ \List{\Var{p}=\Var{q};~ \Var{q} \Imp \Var{a}=\Var{c};~
@@ -92,17 +91,16 @@
 This can make simplification much faster, but may require an extra case split
 to prove the goal.  
 
-Congruence rules are added using \ttindex{addeqcongs}.  Their conclusion
+Congruence rules are added using \ttindexbold{addeqcongs}.  Their conclusion
 must be a meta-equality, as in the examples above.  It is more
 natural to derive the rules with object-logic equality, for example
 \[ \List{\Var{P@1} \bimp \Var{Q@1};\; \Var{Q@1} \Imp \Var{P@2} \bimp \Var{Q@2}}
    \Imp (\Var{P@1} \imp \Var{P@2}) \bimp (\Var{Q@1} \imp \Var{Q@2}).
 \]
-So each object-logic should provide an alternative combinator
-\ttindex{addcongs} that expects object-equalities and translates them into
-meta-equalities.
+Each object-logic should define an operator called \ttindex{addcongs} that
+expects object-equalities and translates them into meta-equalities.
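+For example, the implication congruence displayed above, assuming it has
+been proved and bound to {\tt imp_cong} as in the {\tt FOL} distribution,
+could be installed by
+\begin{ttbox}
+val ss' = FOL_ss addcongs [imp_cong];
+\end{ttbox}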
 
-\subsection{The subgoaler} \index{subgoaler}
+\subsection{The subgoaler}
 The subgoaler is the tactic used to solve subgoals arising out of
 conditional rewrite rules or congruence rules.  The default should be
 simplification itself.  Occasionally this strategy needs to be changed.  For
@@ -120,11 +118,17 @@
 simplification only if that fails; here {\tt prems_of_ss} extracts the
 current premises from a simpset.
 
-\subsection{The solver} \index{solver}
-The solver attempts to solve a subgoal after simplification, say by
-recognizing it as a tautology. It can be set with \ttindex{setsolver}.
-Occasionally, simplification on its own is not enough to reduce the subgoal
-to a tautology; additional means, like \verb$fast_tac$, may be necessary.
+\subsection{The solver}
+The solver is a tactic that attempts to solve a subgoal after
+simplification.  Typically it just proves trivial subgoals such as {\tt
+  True} and $t=t$; it could use sophisticated means such as
+\verb$fast_tac$.  The solver is set using \ttindex{setsolver}.
+
+The tactic is presented with the full goal, including the assumptions.
+Hence it can use those assumptions (say by calling {\tt assume_tac}) even
+inside {\tt simp_tac}, which otherwise does not use assumptions.  The
+solver is also supplied a list of theorems, namely assumptions that hold in
+the local context.
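+
+For example, the solver installed for intuitionistic {\tt FOL} (see the
+setup section below) resolves with a list {\tt triv_rls} of trivial
+theorems and with the local premises, then tries proof by assumption:
+\begin{ttbox}
+setsolver (fn prems => resolve_tac (triv_rls \at prems) ORELSE'
+                       assume_tac)
+\end{ttbox}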
 
 \begin{warn}
   Rewriting does not instantiate unknowns.  Trying to rewrite $a\in
@@ -148,11 +152,11 @@
   solver must also try resolving with the theorem $\neg False$.
 
   If the simplifier aborts with the message {\tt Failed congruence proof!},
-  it is due to the subgoaler or solver who failed to prove a premise of a
+  it means that the subgoaler or solver failed to prove a premise of a
   congruence rule.
 \end{warn}
 
-\subsection{The looper} \index{looper}
+\subsection{The looper}
 The looper is a tactic that is applied after simplification, in case the
 solver failed to solve the simplified goal.  If the looper succeeds, the
 simplification process is started all over again.  Each of the subgoals
@@ -165,6 +169,21 @@
 
 \begin{figure}
 \indexbold{*SIMPLIFIER}
+\indexbold{*simpset}
+\indexbold{*simp_tac}
+\indexbold{*asm_simp_tac}
+\indexbold{*asm_full_simp_tac}
+\indexbold{*addeqcongs}
+\indexbold{*addsimps}
+\indexbold{*delsimps}
+\indexbold{*empty_ss}
+\indexbold{*merge_ss}
+\indexbold{*setsubgoaler}
+\indexbold{*setsolver}
+\indexbold{*setloop}
+\indexbold{*setmksimps}
+\indexbold{*prems_of_ss}
+\indexbold{*rep_ss}
 \begin{ttbox}
 infix addsimps addeqcongs delsimps
       setsubgoaler setsolver setloop setmksimps;
@@ -219,7 +238,7 @@
 remind yourself of what is in a simpset, use the function \verb$rep_ss$ to
 return its simplification and congruence rules.
 
-\section{Example: using the simplifier}
+\section{Examples using the simplifier}
 \index{simplification!example}
 Assume we are working within {\tt FOL} and that
 \begin{description}
@@ -236,20 +255,24 @@
 \begin{ttbox}
 val add_ss = FOL_ss addsimps [add_0, add_Suc];
 \end{ttbox}
-Proofs by induction typically involve simplification:
+Proofs by induction typically involve simplification.  Here is a proof
+that~0 is a right identity:
 \begin{ttbox}
 goal Nat.thy "m+0 = m";
 {\out Level 0}
 {\out m + 0 = m}
 {\out  1. m + 0 = m}
-\ttbreak
+\end{ttbox}
+The first step is to perform induction on the variable~$m$.  This returns a
+base case and an inductive step as two subgoals:
+\begin{ttbox}
 by (res_inst_tac [("n","m")] induct 1);
 {\out Level 1}
 {\out m + 0 = m}
 {\out  1. 0 + 0 = 0}
 {\out  2. !!x. x + 0 = x ==> Suc(x) + 0 = Suc(x)}
 \end{ttbox}
-Simplification solves the first subgoal:
+Simplification solves the first subgoal trivially:
 \begin{ttbox}
 by (simp_tac add_ss 1);
 {\out Level 2}
@@ -265,24 +288,30 @@
 {\out No subgoals!}
 \end{ttbox}
 
-\medskip
 The next proof is similar.
 \begin{ttbox}
 goal Nat.thy "m+Suc(n) = Suc(m+n)";
 {\out Level 0}
 {\out m + Suc(n) = Suc(m + n)}
 {\out  1. m + Suc(n) = Suc(m + n)}
-\ttbreak
+\end{ttbox}
+We again perform induction on~$m$ and get two subgoals:
+\begin{ttbox}
 by (res_inst_tac [("n","m")] induct 1);
 {\out Level 1}
 {\out m + Suc(n) = Suc(m + n)}
 {\out  1. 0 + Suc(n) = Suc(0 + n)}
-{\out  2. !!x. x + Suc(n) = Suc(x + n) ==> Suc(x) + Suc(n) = Suc(Suc(x) + n)}
-\ttbreak
+{\out  2. !!x. x + Suc(n) = Suc(x + n) ==>}
+{\out          Suc(x) + Suc(n) = Suc(Suc(x) + n)}
+\end{ttbox}
+Simplification solves the first subgoal, this time rewriting two
+occurrences of~0:
+\begin{ttbox}
 by (simp_tac add_ss 1);
 {\out Level 2}
 {\out m + Suc(n) = Suc(m + n)}
-{\out  1. !!x. x + Suc(n) = Suc(x + n) ==> Suc(x) + Suc(n) = Suc(Suc(x) + n)}
+{\out  1. !!x. x + Suc(n) = Suc(x + n) ==>}
+{\out          Suc(x) + Suc(n) = Suc(Suc(x) + n)}
 \end{ttbox}
 Switching tracing on illustrates how the simplifier solves the remaining
 subgoal: 
@@ -301,8 +330,8 @@
 {\out m + Suc(n) = Suc(m + n)}
 {\out No subgoals!}
 \end{ttbox}
-As usual, many variations are possible.  At Level~1 we could have solved
-both subgoals at once using the tactical \ttindex{ALLGOALS}:
+Many variations are possible.  At Level~1 (in either example) we could have
+solved both subgoals at once using the tactical \ttindex{ALLGOALS}:
 \begin{ttbox}
 by (ALLGOALS (asm_simp_tac add_ss));
 {\out Level 2}
@@ -312,18 +341,15 @@
 
 \medskip
 Here is a conjecture to be proved for an arbitrary function~$f$ satisfying
-the law $f(Suc(n)) = Suc(f(n))$:\footnote{The previous
-  simplifier required congruence rules for such function variables in
-  order to simplify their arguments.  The present simplifier can be given
-  congruence rules to realize non-standard simplification of a function's
-  arguments, but this is seldom necessary.}
+the law $f(Suc(n)) = Suc(f(n))$:
 \begin{ttbox}
 val [prem] = goal Nat.thy
     "(!!n. f(Suc(n)) = Suc(f(n))) ==> f(i+j) = i+f(j)";
 {\out Level 0}
 {\out f(i + j) = i + f(j)}
 {\out  1. f(i + j) = i + f(j)}
-{\out val prem = "f(Suc(?n)) = Suc(f(?n))  [!!n. f(Suc(n)) = Suc(f(n))]" : thm}
+{\out val prem = "f(Suc(?n)) = Suc(f(?n))}
+{\out             [!!n. f(Suc(n)) = Suc(f(n))]" : thm}
 \ttbreak
 by (res_inst_tac [("n","i")] induct 1);
 {\out Level 1}
@@ -339,7 +365,11 @@
 {\out  1. !!x. f(x + j) = x + f(j) ==> f(Suc(x) + j) = Suc(x) + f(j)}
 \end{ttbox}
 The remaining subgoal requires rewriting by the premise, so we add it to
-{\tt add_ss}: 
+{\tt add_ss}:\footnote{The previous
+  simplifier required congruence rules for function variables like~$f$ in
+  order to simplify their arguments.  The present simplifier can be given
+  congruence rules to realize non-standard simplification of a function's
+  arguments, but this is seldom necessary.}
 \begin{ttbox}
 by (asm_simp_tac (add_ss addsimps [prem]) 1);
 {\out Level 3}
@@ -347,14 +377,232 @@
 {\out No subgoals!}
 \end{ttbox}
 
-No documentation is available on setting up the simplifier for new logics.
-Please consult {\tt FOL/simpdata.ML} to see how this is done, and {\tt
-  FOL/simpdata.ML} for a fairly sophisticated translation of formulae into
-rewrite rules.
+
+\section{Setting up the simplifier}
+\index{simplification!setting up|bold}
+
+Setting up the simplifier for new logics is complicated.  This section
+describes how the simplifier is installed for first-order logic; the code
+is largely taken from {\tt FOL/simpdata.ML}.
+
+The simplifier and the case splitting tactic, which resides in a separate
+file, are not part of Pure Isabelle.  They must be loaded explicitly:
+\begin{ttbox}
+use "../Provers/simplifier.ML";
+use "../Provers/splitter.ML";
+\end{ttbox}
+
+Simplification works by reducing various object-equalities to
+meta-equality.  It requires axioms stating that equal terms and equivalent
+formulae are also equal at the meta-level.  The file {\tt FOL/ifol.thy}
+contains the two lines
+\begin{ttbox}\indexbold{*eq_reflection}\indexbold{*iff_reflection}
+eq_reflection   "(x=y)   ==> (x==y)"
+iff_reflection  "(P<->Q) ==> (P==Q)"
+\end{ttbox}
+Of course, you should only assert such axioms if they are true for your
+particular logic.  In Constructive Type Theory, equality is a ternary
+relation of the form $a=b\in A$; the type~$A$ determines the meaning of
+the equality, which behaves in effect as a partial equivalence relation.
+
+
+\subsection{A collection of standard rewrite rules}
+The file begins by proving lots of standard rewrite rules about the logical
+connectives.  These include cancellation laws and associative laws but
+certainly not commutative laws, which would cause looping.  To prove the
+laws easily, it defines a function that echoes the desired law and then
+supplies it to the theorem prover for intuitionistic \FOL:
+\begin{ttbox}
+fun int_prove_fun s = 
+ (writeln s;  
+  prove_goal IFOL.thy s
+   (fn prems => [ (cut_facts_tac prems 1), 
+                  (Int.fast_tac 1) ]));
+\end{ttbox}
+The following rewrite rules about conjunction are a selection of those
+proved in {\tt FOL/simpdata.ML}.  Later, these will be supplied to the
+standard simpset.
+\begin{ttbox}
+val conj_rews = map int_prove_fun
+ ["P & True <-> P",      "True & P <-> P",
+  "P & False <-> False", "False & P <-> False",
+  "P & P <-> P",
+  "P & ~P <-> False",    "~P & P <-> False",
+  "(P & Q) & R <-> P & (Q & R)"];
+\end{ttbox}
+The file also proves some distributive laws.  As they can cause exponential
+blowup, they will not be included in the standard simpset.  Instead they
+are merely bound to an \ML{} identifier.
+\begin{ttbox}
+val distrib_rews  = map int_prove_fun
+ ["P & (Q | R) <-> P&Q | P&R", 
+  "(Q | R) & P <-> Q&P | R&P",
+  "(P | Q --> R) <-> (P --> R) & (Q --> R)"];
+\end{ttbox}
+
+
+\subsection{Functions for preprocessing the rewrite rules}
+The next step is to define the function for preprocessing rewrite rules.
+This will be installed by calling {\tt setmksimps} below.  Preprocessing
+occurs whenever rewrite rules are added, whether by user command or
+automatically.  Preprocessing involves extracting atomic rewrites at the
+object-level, then reflecting them to the meta-level.
+
+To start, the function {\tt gen_all} strips any meta-level
+quantifiers from the front of the given theorem.  Usually there are none
+anyway.
+\begin{ttbox}
+fun gen_all th = forall_elim_vars (#maxidx(rep_thm th)+1) th;
+\end{ttbox}
+The function {\tt atomize} analyses a theorem in order to extract
+atomic rewrite rules.  The head of all the patterns, matched by the
+wildcard~{\tt _}, is the coercion function {\tt Trueprop}.
+\begin{ttbox}
+fun atomize th = case concl_of th of 
+    _ $ (Const("op &",_) $ _ $ _)   => atomize(th RS conjunct1) \at
+                                       atomize(th RS conjunct2)
+  | _ $ (Const("op -->",_) $ _ $ _) => atomize(th RS mp)
+  | _ $ (Const("All",_) $ _)        => atomize(th RS spec)
+  | _ $ (Const("True",_))           => []
+  | _ $ (Const("False",_))          => []
+  | _                               => [th];
+\end{ttbox}
+There are several cases, depending upon the form of the conclusion:
+\begin{itemize}
+\item Conjunction: extract rewrites from both conjuncts.
+
+\item Implication: convert $P\imp Q$ to the meta-implication $P\Imp Q$ and
+  extract rewrites from~$Q$; these will be conditional rewrites with the
+  condition~$P$.
+
+\item Universal quantification: remove the quantifier, replacing the bound
+  variable by a schematic variable, and extract rewrites from the body.
+
+\item {\tt True} and {\tt False} contain no useful rewrites.
+
+\item Anything else: return the theorem in a singleton list.
+\end{itemize}
+The resulting theorems are not literally atomic --- they could be
+disjunctive, for example --- but are broken down as much as possible.  See
+the file {\tt ZF/simpdata.ML} for a sophisticated translation of
+set-theoretic formulae into rewrite rules.
 
-%%\section{Setting up the simplifier} \label{SimpFun-input}
-%%Should be written!
+The simplified rewrites must now be converted into meta-equalities.  The
+axiom {\tt eq_reflection} converts equality rewrites, while {\tt
+  iff_reflection} converts if-and-only-if rewrites.  The latter possibility
+can arise in two other ways: the negative theorem~$\neg P$ is converted to
+$P\equiv{\tt False}$, and any other theorem~$P$ is converted to
+$P\equiv{\tt True}$.  The rules {\tt iff_reflection_F} and {\tt
+  iff_reflection_T} accomplish this conversion.
+\begin{ttbox}
+val P_iff_F = int_prove_fun "~P ==> (P <-> False)";
+val iff_reflection_F = P_iff_F RS iff_reflection;
+\ttbreak
+val P_iff_T = int_prove_fun "P ==> (P <-> True)";
+val iff_reflection_T = P_iff_T RS iff_reflection;
+\end{ttbox}
+The function {\tt mk_meta_eq} converts a theorem to a meta-equality
+using the case analysis described above.
+\begin{ttbox}
+fun mk_meta_eq th = case concl_of th of
+    _ $ (Const("op =",_)$_$_)   => th RS eq_reflection
+  | _ $ (Const("op <->",_)$_$_) => th RS iff_reflection
+  | _ $ (Const("Not",_)$_)      => th RS iff_reflection_F
+  | _                           => th RS iff_reflection_T;
+\end{ttbox}
+The three functions {\tt gen_all}, {\tt atomize} and {\tt mk_meta_eq} will
+be composed together and supplied below to {\tt setmksimps}.
+
+
+\subsection{Making the initial simpset}
+It is time to assemble these items.  We open module {\tt Simplifier} to
+gain access to its components.  The infix operator \ttindexbold{addcongs}
+handles congruence rules; given a list of theorems, it converts their
+conclusions into meta-equalities and passes them to \ttindex{addeqcongs}.
+\begin{ttbox}
+open Simplifier;
+\ttbreak
+infix addcongs;
+fun ss addcongs congs =
+    ss addeqcongs (congs RL [eq_reflection,iff_reflection]);
+\end{ttbox}
+The list {\tt IFOL_rews} contains the default rewrite rules for first-order
+logic.  The first of these is the reflexive law expressed as the
+equivalence $(a=a)\bimp{\tt True}$; if we provided it as $a=a$ it would
+cause looping.
+\begin{ttbox}
+val IFOL_rews =
+   [refl RS P_iff_T] \at conj_rews \at disj_rews \at not_rews \at 
+    imp_rews \at iff_rews \at quant_rews;
+\end{ttbox}
+The list {\tt triv_rls} contains trivial theorems for the solver.  Any
+subgoal that is simplified to one of these will be removed.
+\begin{ttbox}
+val notFalseI = int_prove_fun "~False";
+val triv_rls = [TrueI,refl,iff_refl,notFalseI];
+\end{ttbox}
+
+The basic simpset for intuitionistic \FOL{} starts with \ttindex{empty_ss}.
+It preprocesses rewrites using {\tt gen_all}, {\tt atomize} and {\tt
+  mk_meta_eq}.  It solves simplified subgoals using {\tt triv_rls} and
+assumptions.  It uses \ttindex{asm_simp_tac} to tackle subgoals of
+conditional rewrites.  It takes {\tt IFOL_rews} as rewrite rules.  
+Other simpsets built from {\tt IFOL_ss} will inherit these items.
+\index{*setmksimps}\index{*setsolver}\index{*setsubgoaler}
+\index{*addsimps}\index{*addcongs}
+\begin{ttbox}
+val IFOL_ss = 
+  empty_ss 
+  setmksimps (map mk_meta_eq o atomize o gen_all)
+  setsolver  (fn prems => resolve_tac (triv_rls \at prems) ORELSE' 
+                          assume_tac)
+  setsubgoaler asm_simp_tac
+  addsimps IFOL_rews
+  addcongs [imp_cong];
+\end{ttbox}
+This simpset takes {\tt imp_cong} as a congruence rule in order to use
+contextual information to simplify the conclusions of implications:
+\[ \List{\Var{P}\bimp\Var{P'};\; \Var{P'} \Imp \Var{Q}\bimp\Var{Q'}} \Imp
+   (\Var{P}\imp\Var{Q}) \bimp (\Var{P'}\imp\Var{Q'})
+\]
+By adding the congruence rule {\tt conj_cong}, we could obtain a similar
+effect for conjunctions.
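+For instance, assuming {\tt conj_cong} has been proved, we could write:
+\begin{ttbox}
+val IFOL_ss' = IFOL_ss addcongs [conj_cong];
+\end{ttbox}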
+
+
+\subsection{Case splitting}
+To set up case splitting, we must prove a theorem of the form shown below
+and pass it to \ttindexbold{mk_case_split_tac}.  The tactic
+\ttindexbold{split_tac} uses {\tt mk_meta_eq} to convert the splitting
+rules to meta-equalities.
+
+\begin{ttbox}
+val meta_iffD = 
+    prove_goal FOL.thy "[| P==Q; Q |] ==> P"
+        (fn [prem1,prem2] => [rewtac prem1, rtac prem2 1])
+\ttbreak
+fun split_tac splits =
+    mk_case_split_tac meta_iffD (map mk_meta_eq splits);
+\end{ttbox}
+%
+The splitter is designed for rules roughly of the form
+\[ \Var{P}(if(\Var{Q},\Var{x},\Var{y})) \bimp (\Var{Q} \imp \Var{P}(\Var{x}))
+\conj (\lnot\Var{Q} \imp \Var{P}(\Var{y})) 
+\] 
+where the right-hand side can be anything.  Another example is the
+elimination operator (which happens to be called~$split$) for Cartesian
+products:
+\[ \Var{P}(split(\Var{f},\Var{p})) \bimp (\forall a~b. \Var{p} =
+\langle a,b\rangle \imp \Var{P}(\Var{f}(a,b))) 
+\] 
+Case splits should be allowed only when necessary; they are expensive
+and hard to control.  Here is a typical example of use, where {\tt
+  expand_if} is the first rule above:
+\begin{ttbox}
+by (simp_tac (prop_rec_ss setloop (split_tac [expand_if])) 1);
+\end{ttbox}
+
 
 
 \index{simplification|)}
 
+
--- a/doc-src/Ref/substitution.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/substitution.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -141,7 +141,7 @@
 function {\tt dest_eq} requires knowledge of Isabelle's representation of
 terms.  For {\tt FOL} it is defined by
 \begin{ttbox} 
-fun dest_eq (Const("Trueprop",_) $ (Const("op =",_)$t$u)) = (t,u);
+fun dest_eq (Const("Trueprop",_) $ (Const("op =",_)$t$u)) = (t,u)
 \end{ttbox}
 Here {\tt Trueprop} is the coercion from type~$o$ to type~$prop$, while
 \hbox{\tt op =} is the internal name of the infix operator~{\tt=}.
--- a/doc-src/Ref/tactic.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/tactic.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -236,19 +236,25 @@
 \index{tactics!for inserting facts|bold}
 \begin{ttbox} 
 cut_facts_tac : thm list -> int -> tactic
-subgoal_tac   :   string -> int -> tactic
+cut_inst_tac  : (string*string)list -> thm -> int -> tactic
+subgoal_tac   : string -> int -> tactic
 \end{ttbox}
 These tactics add assumptions --- which must be proved, sooner or later ---
 to a given subgoal.
 \begin{description}
 \item[\ttindexbold{cut_facts_tac} {\it thms} {\it i}] 
   adds the {\it thms} as new assumptions to subgoal~$i$.  Once they have
-  been inserted as assumptions, they become subject to {\tt eresolve_tac}
-  and {\tt rewrite_goals_tac}.  Only rules with no premises are inserted:
-  Isabelle cannot use assumptions that contain $\Imp$ or~$\Forall$.  Sometimes
-  the theorems are premises of a rule being derived, returned by~{\tt
-    goal}; instead of calling this tactic, you could state the goal with an
-  outermost meta-quantifier.
+  been inserted as assumptions, they become subject to tactics such as {\tt
+    eresolve_tac} and {\tt rewrite_goals_tac}.  Only rules with no premises
+  are inserted: Isabelle cannot use assumptions that contain $\Imp$
+  or~$\Forall$.  Sometimes the theorems are premises of a rule being
+  derived, returned by~{\tt goal}; instead of calling this tactic, you
+  could state the goal with an outermost meta-quantifier.
+
+\item[\ttindexbold{cut_inst_tac} {\it insts} {\it thm} {\it i}]
+  instantiates the {\it thm} with the instantiations {\it insts}, as
+  described in \S\ref{res_inst_tac}.  It adds the resulting theorem as a
+  new assumption to subgoal~$i$. 
 
 \item[\ttindexbold{subgoal_tac} {\it formula} {\it i}] 
 adds the {\it formula} as an assumption to subgoal~$i$, and inserts the same
@@ -575,14 +581,16 @@
 \begin{ttbox}
 datatype 'a option = None  |  Some of 'a;
 \end{ttbox}
+For clarity, the module name {\tt Sequence} is omitted from the signature
+specifications below; for instance, {\tt null} appears instead of {\tt
+  Sequence.null}.
 
 \subsubsection{Basic operations on sequences}
 \begin{ttbox} 
-Sequence.null   : 'a Sequence.seq
-Sequence.seqof  : (unit -> ('a * 'a Sequence.seq) option)
-                     -> 'a Sequence.seq
-Sequence.single : 'a -> 'a Sequence.seq
-Sequence.pull   : 'a Sequence.seq -> ('a * 'a Sequence.seq) option
+null   : 'a seq
+seqof  : (unit -> ('a * 'a seq) option) -> 'a seq
+single : 'a -> 'a seq
+pull   : 'a seq -> ('a * 'a seq) option
 \end{ttbox}
 \begin{description}
 \item[{\tt Sequence.null}] 
@@ -604,9 +612,9 @@
 
 \subsubsection{Converting between sequences and lists}
 \begin{ttbox} 
-Sequence.chop: int * 'a Sequence.seq -> 'a list * 'a Sequence.seq
-Sequence.list_of_s : 'a Sequence.seq -> 'a list
-Sequence.s_of_list : 'a list -> 'a Sequence.seq
+chop      : int * 'a seq -> 'a list * 'a seq
+list_of_s : 'a seq -> 'a list
+s_of_list : 'a list -> 'a seq
 \end{ttbox}
 \begin{description}
 \item[{\tt Sequence.chop} ($n$, $s$)] 
@@ -624,12 +632,11 @@
 
 \subsubsection{Combining sequences}
 \begin{ttbox} 
-Sequence.append: 'a Sequence.seq * 'a Sequence.seq -> 'a Sequence.seq
-Sequence.interleave : 'a Sequence.seq * 'a Sequence.seq
-                                                   -> 'a Sequence.seq
-Sequence.flats   : 'a Sequence.seq Sequence.seq -> 'a Sequence.seq
-Sequence.maps    : ('a -> 'b) -> 'a Sequence.seq -> 'b Sequence.seq
-Sequence.filters : ('a -> bool) -> 'a Sequence.seq -> 'a Sequence.seq
+append     : 'a seq * 'a seq -> 'a seq
+interleave : 'a seq * 'a seq -> 'a seq
+flats      : 'a seq seq -> 'a seq
+maps       : ('a -> 'b) -> 'a seq -> 'b seq
+filters    : ('a -> bool) -> 'a seq -> 'a seq
 \end{ttbox} 
 \begin{description}
 \item[{\tt Sequence.append} ($s@1$, $s@2$)] 
--- a/doc-src/Ref/tctical.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/tctical.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -97,8 +97,8 @@
 least once, failing if this is impossible.
 
 \item[\ttindexbold{trace_REPEAT} \tt:= true;] 
-enables an interactive tracing mode for {\tt REPEAT_DETERM} and {\tt
-REPEAT}.  To view the tracing options, type {\tt h} at the prompt.
+enables an interactive tracing mode for the tacticals {\tt REPEAT_DETERM}
+and {\tt REPEAT}.  To view the tracing options, type {\tt h} at the prompt.
 \end{description}
 
 
@@ -242,8 +242,8 @@
 permits separate tactics for starting the search and continuing the search.
 
 \item[\ttindexbold{trace_BEST_FIRST} \tt:= true;] 
-enables an interactive tracing mode for {\tt BEST_FIRST}.  To view the
-tracing options, type {\tt h} at the prompt.
+enables an interactive tracing mode for the tactical {\tt BEST_FIRST}.  To
+view the tracing options, type {\tt h} at the prompt.
 \end{description}
 
 
@@ -345,11 +345,11 @@
 possibly containing $\theta'@1,\ldots,\theta'@k$ as assumptions, then it is
 lifted back into the original context, yielding $n$ subgoals.
 
-Meta-level assumptions may not contain unknowns.  Unknowns in
-$\theta@1,\ldots,\theta@k$ become free variables in $\theta'@1$, \ldots,
-$\theta'@k$, and are restored afterwards; the {\tt METAHYPS} call cannot
-instantiate them.  Unknowns in $\theta$ may be instantiated.  New unknowns
-in $\phi@1$, \ldots; $\phi@n$ are lifted over the parameters.
+Meta-level assumptions may not contain unknowns.  Unknowns in the
+hypotheses $\theta@1,\ldots,\theta@k$ become free variables in $\theta'@1$,
+\ldots, $\theta'@k$, and are restored afterwards; the {\tt METAHYPS} call
+cannot instantiate them.  Unknowns in $\theta$ may be instantiated.  New
+unknowns in $\phi@1$, \ldots, $\phi@n$ are lifted over the parameters.
 
 Here is a typical application.  Calling {\tt hyp_res_tac}~$i$ resolves
 subgoal~$i$ with one of its own assumptions, which may itself have the form
@@ -396,16 +396,16 @@
 \hbox{\tt TRY($tacf(n)$) THEN \ldots{} THEN TRY($tacf(1)$)}.  
 
 It attempts to apply {\it tacf} to all the subgoals.  For instance,
-\hbox{\tt TRYALL assume_tac} attempts to solve all the subgoals by
+the tactic \hbox{\tt TRYALL assume_tac} attempts to solve all the subgoals by
 assumption.
 
 \item[\ttindexbold{SOMEGOAL} {\it tacf}] 
 is equivalent to
 \hbox{\tt$tacf(n)$ ORELSE \ldots{} ORELSE $tacf(1)$}.  
 
-It applies {\it tacf} to one subgoal, counting {\bf downwards}.  
-\hbox{\tt SOMEGOAL assume_tac} solves one subgoal by assumption, failing if
-this is impossible.
+It applies {\it tacf} to one subgoal, counting {\bf downwards}.  For instance,
+the tactic \hbox{\tt SOMEGOAL assume_tac} solves one subgoal by assumption,
+failing if this is impossible.
 
 \item[\ttindexbold{FIRSTGOAL} {\it tacf}] 
 is equivalent to
--- a/doc-src/Ref/theories.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/theories.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -23,20 +23,12 @@
 that looks for subgoals of a particular form.  Terms and types may be
 `certified' to be well-formed with respect to a given signature.
 
-\section{Defining Theories}
-\label{DefiningTheories}
+\section{Defining Theories}\label{DefiningTheories}
 
 Theories can be defined either using concrete syntax or by calling certain
-\ML-functions (see \S\ref{BuildingATheory}).  Appendix~\ref{TheorySyntax}
-presents the concrete syntax for theories following convention that
-\begin{itemize}
-\item {\tt typewriter font} denotes terminal symbols;
-\item $id$, $tid$, $nat$, $string$ and $text$ are the lexical classes of
-  identifiers, type identifiers, natural numbers, \ML\ strings (with their
-  quotation marks) and arbitrary \ML\ text. The first three are fully defined
-  in the {\it Defining Logics} chapter of {\it Isabelle's Object Logics}.
-\end{itemize}
-The different parts of a theory definition are interpreted as follows:
+\ML{} functions (see \S\ref{BuildingATheory}).  Appendix~\ref{app:TheorySyntax}
+presents the concrete syntax for theories.  A theory definition consists of
+several different parts: 
 \begin{description} 
 \item[{\it theoryDef}] A theory has a name $id$ and is an extension of the
   union of theories with name {\it name}, the {\bf parent
@@ -63,12 +55,12 @@
   synonym $name$. Only binary type constructors can have infix status and
   symbolic names, e.g.\ {\tt ('a,'b)"*"}. Type constructors of $n$ arguments
   are declared by $(\alpha@1,\dots,\alpha@n)name$.  A {\bf type
-    synonym}\indexbold{type!synonym} is simply an abbreviation
+    synonym}\indexbold{types!synonyms} is simply an abbreviation
   $(\alpha@1,\dots,\alpha@n)name = \mbox{\tt"}\tau\mbox{\tt"}$ and follows
   the same rules as in \ML, except that $name$ can be a string and $\tau$
   must be enclosed in quotation marks.
 \item[$infix$] declares a type or constant to be an infix operator of
-  precedence $nat$ associating to the left ({\tt infixl}) or right ({\tt
+  priority $nat$ associating to the left ({\tt infixl}) or right ({\tt
     infixr}).
 \item[$arities$] Each $name$ must be an existing type constructor which is
   given the additional arity $arity$.
@@ -79,15 +71,16 @@
   \item A mixfix template given as a $string$ of the form
     {\tt"}\dots{\tt\_}\dots{\tt\_}\dots{\tt"} where the $i$-th underscore
     indicates the position where the $i$-th argument should go. The minimal
-    precedence of each argument is given by a list of natural numbers, the
-    precedence of the whole construct by the following natural number;
-    precedences are optional.
+    priority of each argument is given by a list of natural numbers, the
+    priority of the whole construct by the following natural number;
+    priorities are optional.
 
   \item Binary constants can be given infix status.
 
-  \item A constant $f$ {\tt::} $(\tau@1\To\tau@2)\To\tau@3$ can be given {\em
-    binder\/} status: {\tt binder} $Q$ $p$ causes $Q\,x.F(x)$ to be treated
-  like $f(F)$; $p$ is the precedence of the construct.
+  \item A constant $f$ {\tt::} $(\tau@1\To\tau@2)\To\tau@3$ can be given {\bf
+    binder} status: the declaration {\tt binder} $\cal Q$ $p$ causes
+  ${\cal Q}\,x.F(x)$ to be treated
+  like $f(F)$, where $p$ is the priority.
   \end{itemize}
 \item[$trans$] Specifies syntactic translation rules, that is parse 
   rules ({\tt =>}), print rules ({\tt <=}), or both ({\tt ==}).
@@ -101,122 +94,157 @@
 
 
 \subsection{*Classes and arities}
-\index{*classes!context conditions}
-\index{*arities!context conditions}
+\index{*classes!context conditions}\index{*arities!context conditions}
 
-Classes and arities are subject to the following two well-formedness
-conditions:
+In order to guarantee principal types~\cite{nipkow-prehofer},
+classes and arities must obey two conditions:
 \begin{itemize}
-\item There are no two declarations $ty :: (\vec{r})c$ and $ty :: (\vec{s})c$
-  with $\vec{r} \neq \vec{s}$.  For example
+\item There must be no two declarations $ty :: (\vec{r})c$ and $ty ::
+  (\vec{s})c$ with $\vec{r} \neq \vec{s}$.  For example, the following is
+  forbidden:
 \begin{ttbox}
 types 'a ty
 arities ty :: ({\ttlbrace}logic{\ttrbrace}) logic
         ty :: ({\ttlbrace}{\ttrbrace})logic
 \end{ttbox}
-leads to an error message and fails.
+
 \item If there are two declarations $ty :: (s@1,\dots,s@n)c$ and $ty ::
   (s@1',\dots,s@n')c'$ such that $c' < c$ then $s@i' \preceq s@i$ must hold
   for $i=1,\dots,n$.  The relationship $\preceq$, defined as
 \[ s' \preceq s \iff \forall c\in s. \exists c'\in s'.~ c'\le c, \]
 expresses that the set of types represented by $s'$ is a subset of the set of
-types represented by $s$.  For example
+types represented by $s$.  For example, the following is forbidden:
 \begin{ttbox}
 classes term < logic
 types 'a ty
 arities ty :: ({\ttlbrace}logic{\ttrbrace})logic
         ty :: ({\ttlbrace}{\ttrbrace})term
 \end{ttbox}
-leads to an error message and fails.
+
 \end{itemize}
-These conditions guarantee principal types~\cite{nipkow-prehofer}.
 
 
-\section{Loading Theories}
+\section{Loading a new theory}
 \label{LoadingTheories}
-\index{theories!loading|(}
+\index{theories!loading}
+\begin{ttbox} 
+use_thy         : string -> unit
+time_use_thy    : string -> unit
+loadpath        : string list ref \hfill{\bf initially {\tt["."]}}
+delete_tmpfiles : bool ref \hfill{\bf initially true}
+\end{ttbox}
+
+\begin{description}
+\item[\ttindexbold{use_thy} $thyname$] 
+  reads the theory $thyname$ and creates an \ML{} structure as described below.
 
-\subsection{Reading a new theory}
+\item[\ttindexbold{time_use_thy} $thyname$] 
+  calls {\tt use_thy} $thyname$ and reports the time taken.
 
+\item[\ttindexbold{loadpath}] 
+  contains a list of directories to search when locating the files that
+  define a theory.  This list is only used if the theory name in {\tt
+    use_thy} does not specify the path explicitly.
+
+\item[\ttindexbold{delete_tmpfiles} \tt:= false;] 
+suppresses the deletion of temporary files.
+\end{description}
+%
 Each theory definition must reside in a separate file.  Let the file {\it
   tf}{\tt.thy} contain the definition of a theory called $TF$ with parent
 theories $TB@1$ \dots $TB@n$.  Calling \ttindexbold{use_thy}~{\tt"}{\it
-  TF\/}{\tt"} reads file {\it tf}{\tt.thy}, writes an intermediate \ML-file
-{\tt.}{\it tf}{\tt.thy.ML}, and reads the latter file. Any of the parent
-theories that have not been loaded yet are read now by recursive {\tt
-  use_thy} calls until either an already loaded theory or {\tt Pure} is
-reached.  Therefore one can read a complete logic by just one {\tt use_thy}
-call if all theories are linked appropriately.  Afterwards an \ML\ 
-structure~$TF$ containing a component {\tt thy} for the new theory and
-components $r@1$ \dots $r@n$ for the rules is declared; it also contains the
-definitions of the {\tt ML} section if any. Unless
-\ttindexbold{delete_tmpfiles} is set to {\tt false}, {\tt.}{\it
-  tf}{\tt.thy.ML} is deleted if no errors occurred. Finally the file {\it
-  tf}{\tt.ML} is read, if it exists; this file normally contains proofs
-involving the new theory.
+  TF\/}{\tt"} reads file {\it tf}{\tt.thy}, writes an intermediate \ML{}
+file {\tt.}{\it tf}{\tt.thy.ML}, and reads the latter file.  Recursive {\tt
+  use_thy} calls load those parent theories that have not been loaded
+previously; the recursion may continue to any depth.  One {\tt use_thy}
+call can read an entire logic if all theories are linked appropriately.
 
+The result is an \ML\ structure~$TF$ containing a component {\tt thy} for
+the new theory and components $r@1$ \dots $r@n$ for the rules.  The
+structure also contains the definitions of the {\tt ML} section, if
+present.  The file {\tt.}{\it tf}{\tt.thy.ML} is then deleted if
+\ttindexbold{delete_tmpfiles} is set to {\tt true} and no errors occurred.
 
-\subsection{Theory names, file names and search paths}
-\indexbold{theories!names of}
-\indexbold{files!names of}
-\indexbold{search path}
+Finally the file {\it tf}{\tt.ML} is read, if it exists.  This file
+normally contains proofs involving the new theory.  It is also possible to
+provide only a {\tt.ML} file, with no {\tt.thy} file.  In this case the
+{\tt.ML} file must declare an \ML\ structure having the theory's name.  If
+both the {\tt.thy} file and a structure declaration in the {\tt.ML} file
+exist, then the latter declaration is used.  See {\tt ZF/ex/llist.ML} for
+an example.
 
+\indexbold{theories!names of}\indexbold{files!names of}
 \begin{warn}
-  The files defining the theory must have the lower case name of the theory
-  with {\tt.thy} or {\tt.ML} appended.  On the other hand \ttindex{use_thy}'s
-  parameter has to be the exact theory name.
-\end{warn}
-Optionally the name can be preceded by a path to specify the directory where
-the files can be found.  If no path is provided the reference variable
-\ttindexbold{loadpath} is used which contains a list of paths that are
-searched in the given order.  After the {\tt.thy}-file for a theory has
-been found, the same path is used to locate the (optional) {\tt.ML}-file.
-
-It is also possible to provide only a {\tt.ML}-file, with no
-{\tt.thy}-file.  In this case the {\tt.ML}-file must declare an \ML\
-structure having the theory's name.  If both the {\tt.thy}-file and a
-structure declaration in the {\tt.ML}-file exist, then the latter
-declaration is used.  See {\tt ZF/ex/llist.ML} for an example.
-
-
-\subsection{Reloading a theory}
-\indexbold{theories!reloading}
-\index{*update|(}
-
-\ttindex{use_thy} keeps track of all loaded theories and stores information
-about their files.  If it finds that the theory to be loaded was already read
-before, the following happens: first the theory's files are searched at the
-place they were located the last time they were read. If they are found,
-their time of last modification is compared to the internal data and {\tt
-  use_thy} stops if they are equal. In case the files have been moved, {\tt
-  use_thy} searches them the same way a new theory would be searched for.
-After all these tests have been passed, the theory is reloaded and all
-its children are marked as out-of-date.
-\begin{warn}
-  Changing a theory on disk often makes it necessary to reload all theories
-  that directly or indirectly depend on it. However, {\tt use_thy} reads only
-  one theory, even if some of the parent theories are out of date. In this
-  case {\tt update()} should be used.  This function reloads {\em all\/}
-  modified theories and their descendants in the correct order.
+  Case is significant.  The argument of \ttindex{use_thy} must be the exact
+  theory name.  The corresponding filenames are derived by appending
+  {\tt.thy} or {\tt.ML} to the theory's name {\it after conversion to lower
+    case}. 
 \end{warn}
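+
+For example, the call
+\begin{ttbox}
+use_thy "Nat";
+\end{ttbox}
+reads the files {\tt nat.thy} and (if it exists) {\tt nat.ML}, creating
+an \ML{} structure named {\tt Nat}.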
 
 
-\subsection{*Pseudo theories}
-\indexbold{theories!pseudo}
+\section{Reloading modified theories}
+\indexbold{theories!reloading}\index{*update|(}
+\begin{ttbox} 
+update     : unit -> unit
+unlink_thy : string -> unit
+\end{ttbox}
+Isabelle keeps track of all loaded theories and their files.  If
+\ttindex{use_thy} finds that the theory to be loaded has been read before,
+it determines whether to reload the theory as follows.  First it looks for
+the theory's files in their previous location.  If it finds them, it
+compares their modification times to the internal data and stops if they
+are equal.  If the files have been moved, {\tt use_thy} searches for them
+as it would for a new theory.  After {\tt use_thy} reloads a theory, it
+marks the children as out-of-date.
+\begin{warn}
+  Changing a theory on disk often makes it necessary to reload all theories
+  descended from it.  However, {\tt use_thy} reads only one theory, even if
+  some of the parent theories are out of date.  In this case you should
+  call {\tt update()}.
+\end{warn}
+
+\begin{description}
+\item[\ttindexbold{update} ()] 
+  reloads all modified theories and their descendants in the correct order.  
 
+\item[\ttindexbold{unlink_thy} $thyname$]\indexbold{theories!removing}
+  informs Isabelle that theory $thyname$ no longer exists.  If you delete the
+  theory files for $thyname$ then you must execute {\tt unlink_thy};
+  otherwise {\tt update} will complain about a missing file.
+\end{description}
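+
+For example, if the files defining a theory {\tt Nat} have been deleted,
+the following removes the theory from the internal list and brings the
+remaining theories up to date:
+\begin{ttbox}
+unlink_thy "Nat";
+update();
+\end{ttbox}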
+
+
+\subsection{Important note for Poly/ML users}\index{Poly/\ML}
+The theory mechanism depends upon reference variables.  At the end of a
+Poly/\ML{} session, the contents of references are lost unless they are
+declared in the current database.  Assignments to references in the {\tt
+  Pure} database are lost, including all information about loaded theories.
+
+To avoid losing such information (assuming you have loaded some theories)
+you must declare a new {\tt Readthy} structure in the child database:
+\begin{ttbox}
+structure Readthy = ReadthyFUN (structure ThySyn = ThySyn);
+Readthy.loaded_thys := !loaded_thys;
+open Readthy;
+\end{ttbox}
+This copies the contents of the original reference, namely the list of
+already loaded theories, into the new reference \verb$loaded_thys$.
+
+
+\subsection{*Pseudo theories}\indexbold{theories!pseudo}
 Any automatic reloading facility requires complete knowledge of all
-dependencies. Sometimes theories depend on objects created in \ML-files
-without associated {\tt.thy}-file. Unless such dependencies are documented,
-{\tt update} fails to reload these \ML-files and the system is left in a
+dependencies.  Sometimes theories depend on objects created in \ML{} files
+with no associated {\tt.thy} file.  Unless such dependencies are documented,
+{\tt update} fails to reload these \ML{} files and the system is left in a
 state where some objects, e.g.\ theorems, still refer to old versions of
 theories. This may lead to the error
 \begin{ttbox}
 Attempt to merge different versions of theory: \(T\)
 \end{ttbox}
-Therefore there is a way to link theories and {\em orphaned\/} \ML-files,
-i.e.\ those without associated {\tt.thy}-file.
+Therefore there is a way to link theories and {\em orphaned\/} \ML{} files ---
+those without an associated {\tt.thy} file.
 
-Let us assume we have an orphaned \ML-file named {\tt orphan.ML} and a theory
+Let us assume we have an orphaned \ML{} file named {\tt orphan.ML} and a theory
 $B$ which depends on {\tt orphan.ML} (for example, {\tt b.ML} uses theorems
 that are proved in {\tt orphan.ML}). Then {\tt b.thy} should mention this
 dependence:
@@ -224,7 +252,7 @@
 B = ... + "orphan" + ...
 \end{ttbox}
 Although {\tt orphan} is {\em not\/} used in building the base of theory $B$
-(strings stand for \ML-files rather than {\tt.thy}-files and merely document
+(strings stand for \ML{} files rather than {\tt.thy} files and merely document
 additional dependencies), {\tt orphan.ML} is loaded automatically when $B$ is
 (re)built.
 
@@ -233,43 +261,15 @@
 \begin{ttbox}
 orphan = A1 + \(...\) + An
 \end{ttbox}
-The resulting theory is never used but guarantees that {\tt update} reloads
+The resulting theory is a dummy, but it ensures that {\tt update} reloads
 theory {\it orphan} whenever it reloads one of the $A@i$.
 
 For an extensive example of how this technique can be used to link over 30
 files and load them by just two {\tt use_thy} calls, consult the ZF source
 files.
-\index{theories!loading|)}
-
-
-\subsection{Removing a theory}
-\indexbold{theories!removing}
-
-If a previously read file is removed, {\tt update} will keep on complaining
-about a missing file.  The theory is not automatically removed from the
-internal list to preserve the links to other theories in case one forgot to
-adjust the {\tt loadpath} after moving a file.  To remove a theory manually
-use \ttindexbold{unlink_thy}.  \index{*update|)}
-
 
-\subsection{Using Poly/\ML}
-\index{Poly/\ML}
-\index{reference variables}
+\index{*update|)}
 
-As the functions for reading theories depend on reference variables one has
-to take into account the way Poly/\ML\ handles them.  They are only saved
-together with the state if they were declared in the current database.  For
-example, changes made to a reference variable declared in the {\tt Pure}
-database are {\em not\/} saved if made while using a child database.
-Therefore a new {\tt Readthy} structure has to be created by
-\begin{ttbox}
-structure Readthy = ReadthyFUN (structure ThySyn = ThySyn);
-Readthy.loaded_thys := !loaded_thys;
-open Readthy;
-\end{ttbox}
-in every newly created database.  By copying the contents of the old reference
-variable \verb$loaded_thys$ the list of already loaded theories is preserved.
-Of course this is not necessary if no theories have been read yet.
 
 
 \section{Basic operations on theories}
@@ -286,46 +286,48 @@
 to have the same name; {\tt get_axiom} returns an arbitrary one.
 
 \item[\ttindexbold{assume_ax} $thy$ $formula$] 
-reads the {\it formula} using the syntax of $thy$, following the same
-conventions as axioms in a theory definition.  You can thus pretend that
-{\it formula} is an axiom; in fact, {\tt assume_ax} returns an assumption.
-You can use the resulting theorem like an axiom.  Note that 
-\ttindex{result} complains about additional assumptions, but 
-\ttindex{uresult} does not.
+  reads the {\it formula} using the syntax of $thy$, following the same
+  conventions as axioms in a theory definition.  You can thus pretend that
+  {\it formula} is an axiom and use the resulting theorem like an axiom.
+  Actually {\tt assume_ax} returns an assumption;  \ttindex{result}
+  complains about additional assumptions, but \ttindex{uresult} does not.
 
 For example, if {\it formula} is
 \hbox{\tt a=b ==> b=a} then the resulting theorem might have the form
 \hbox{\tt\frenchspacing ?a=?b ==> ?b=?a  [!!a b. a=b ==> b=a]}
 \end{description}
 
-%\subsection{Building a theory}
-%\label{BuildingATheory}
-%\index{theories!constructing|bold}
-%\begin{ttbox} 
-%pure_thy: theory
-%merge_theories: theory * theory -> theory
-%extend_theory: theory -> string
-%        -> (class * class list) list 
-%           * sort
-%           * (string list * int)list
-%           * (string list * (sort list * class))list
-%           * (string list * string)list * sext option
-%        -> (string*string)list -> theory
-%\end{ttbox}
-%Type \ttindex{class} is a synonym for {\tt string}; type \ttindex{sort} is
-%a synonym for \hbox{\tt class list}.
-%\begin{description}
-%\item[\ttindexbold{pure_thy}] contains just the types, constants, and syntax
-%  of the meta-logic.  There are no axioms: meta-level inferences are carried
-%  out by \ML\ functions.
-%\item[\ttindexbold{merge_theories} ($thy@1$, $thy@2$)] merges the two
-%  theories $thy@1$ and $thy@2$.  The resulting theory contains all types,
-%  constants and axioms of the constituent theories; its default sort is the
-%  union of the default sorts of the constituent theories.
+\subsection{Building a theory}
+\label{BuildingATheory}
+\index{theories!constructing|bold}
+\begin{ttbox} 
+pure_thy: theory
+merge_theories: theory * theory -> theory
+extend_theory: theory -> string -> \(\cdots\) -> theory
+\end{ttbox}
+\begin{description}
+\item[\ttindexbold{pure_thy}] contains just the types, constants, and syntax
+  of the meta-logic.  There are no axioms: meta-level inferences are carried
+  out by \ML\ functions.
+\item[\ttindexbold{merge_theories} ($thy@1$, $thy@2$)] merges the two
+  theories $thy@1$ and $thy@2$.  The resulting theory contains all types,
+  constants and axioms of the constituent theories; its default sort is the
+  union of the default sorts of the constituent theories.
+\item [\ttindexbold{extend_theory} $thy$ {\tt"}$T${\tt"} $\cdots$] extends
+  the theory $thy$ with new types, constants, etc.  $T$ identifies the theory
+  internally.  When a theory is redeclared, say to change an incorrect axiom,
+  bindings to the old axiom may persist.  Isabelle ensures that the old and
+  new theories are not involved in the same proof.  Attempting to combine
+  different theories having the same name $T$ yields the fatal error
+\begin{ttbox}
+Attempt to merge different versions of theory: \(T\)
+\end{ttbox}
+\end{description}
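+For example, two existing theories can be combined as follows (a sketch:
+{\tt thy1} and {\tt thy2} stand for any previously constructed theories):
+\begin{ttbox}
+val thy = merge_theories (thy1, thy2);
+\end{ttbox}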
+
 %\item [\ttindexbold{extend_theory} $thy$ {\tt"}$T${\tt"}
 %      ($classes$, $default$, $types$, $arities$, $consts$, $sextopt$) $rules$]
 %\hfill\break   %%% include if line is just too short
-%is the \ML-equivalent of the following theory definition:
+%is the \ML{} equivalent of the following theory definition:
 %\begin{ttbox}
 %\(T\) = \(thy\) +
 %classes \(c\) < \(c@1\),\(\dots\),\(c@m\)
@@ -356,43 +358,6 @@
 %as mixfix annotations to constants.  Using {\tt extend_theory}, new syntax can
 %be added via $sextopt$ which is either {\tt None}, or {\tt Some($sext$)}.  The
 %latter case is not documented.
-%
-%$T$ identifies the theory internally.  When a theory is redeclared, say to
-%change an incorrect axiom, bindings to the old axiom may persist.  Isabelle
-%ensures that the old and new theories are not involved in the same proof.
-%Attempting to combine different theories having the same name $T$ yields the
-%fatal error
-%\begin{ttbox}
-%Attempt to merge different versions of theory: \(T\)
-%\end{ttbox}
-%\end{description}
-
-\subsection{Building a theory}
-\label{BuildingATheory}
-\index{theories!constructing|bold}
-\begin{ttbox} 
-pure_thy: theory
-merge_theories: theory * theory -> theory
-extend_theory: theory -> string -> \(\cdots\) -> theory
-\end{ttbox}
-\begin{description}
-\item[\ttindexbold{pure_thy}] contains just the types, constants, and syntax
-  of the meta-logic.  There are no axioms: meta-level inferences are carried
-  out by \ML\ functions.
-\item[\ttindexbold{merge_theories} ($thy@1$, $thy@2$)] merges the two
-  theories $thy@1$ and $thy@2$.  The resulting theory contains all types,
-  constants and axioms of the constituent theories; its default sort is the
-  union of the default sorts of the constituent theories.
-\item [\ttindexbold{extend_theory} $thy$ {\tt"}$T${\tt"} $\cdots$] extends
-  the theory $thy$ with new types, constants, etc.  $T$ identifies the theory
-  internally.  When a theory is redeclared, say to change an incorrect axiom,
-  bindings to the old axiom may persist.  Isabelle ensures that the old and
-  new theories are not involved in the same proof.  Attempting to combine
-  different theories having the same name $T$ yields the fatal error
-\begin{ttbox}
-Attempt to merge different versions of theory: \(T\)
-\end{ttbox}
-\end{description}
 
 
 \subsection{Inspecting a theory}
@@ -444,10 +409,10 @@
 \end{ttbox}
 \begin{description}
 \item[\ttindexbold{Const}($a$, $T$)] 
-is the {\bf constant} with name~$a$ and type~$T$.  Constants include
-connectives like $\land$ and $\forall$ (logical constants), as well as
-constants like~0 and~$Suc$.  Other constants may be required to define the
-concrete syntax of a logic.
+  is the {\bf constant} with name~$a$ and type~$T$.  Constants include
+  connectives like $\land$ and $\forall$ as well as constants like~0
+  and~$Suc$.  Other constants may be required to define a logic's concrete
+  syntax. 
 
 \item[\ttindexbold{Free}($a$, $T$)] 
 is the {\bf free variable} with name~$a$ and type~$T$.
@@ -474,23 +439,78 @@
 \item[\tt $t$ \$ $u$] \index{$@{\tt\$}|bold}
 is the {\bf application} of~$t$ to~$u$.  
 \end{description}
-Application is written as an infix operator in order to aid readability.
-For example, here is an \ML\ pattern to recognize first-order formula of
+Application is written as an infix operator to aid readability.
+Here is an \ML\ pattern to recognize first-order formulae of
 the form~$A\imp B$, binding the subformulae to~$A$ and~$B$:
 \begin{ttbox} 
 Const("Trueprop",_) $ (Const("op -->",_) $ A $ B)
 \end{ttbox}
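+Such patterns typically appear in functions that take terms apart.  Here is
+a sketch of a function returning the operands of an implication, if there
+is one; it assumes the {\tt option} datatype with constructors {\tt Some}
+and {\tt None}:
+\begin{ttbox}
+fun dest_imp (Const("Trueprop",_) $ (Const("op -->",_) $ A $ B))
+      = Some(A,B)
+  | dest_imp _ = None;
+\end{ttbox}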
 
 
-\section{Certified terms}
-\index{terms!certified|bold}\index{signatures}
+\section{Terms and variable binding}
+\begin{ttbox}
+loose_bnos     : term -> int list
+incr_boundvars : int -> term -> term
+abstract_over  : term*term -> term
+variant_abs    : string * typ * term -> string * term
+aconv          : term*term -> bool\hfill{\bf infix}
+\end{ttbox}
+These functions are all concerned with the de Bruijn representation of
+bound variables.
+\begin{description}
+\item[\ttindexbold{loose_bnos} $t$] 
+  returns the list of all dangling bound variable references.  In
+  particular, {\tt Bound~0} is loose unless it is enclosed in an
+  abstraction.  Similarly {\tt Bound~1} is loose unless it is enclosed in
+  at least two abstractions; if enclosed in just one, the list will contain
+  the number 0.  A well-formed term does not contain any loose variables.
+
+\item[\ttindexbold{incr_boundvars} $j$] 
+  increases a term's dangling bound variables by the offset~$j$.  This is
+  required when moving a subterm into a context where it is enclosed by a
+  different number of abstractions.  Bound variables with a matching
+  abstraction are unaffected.
+
+\item[\ttindexbold{abstract_over} $(v,t)$] 
+  prepares $t$ to be abstracted over~$v$, which may be any well-formed term.
+  It replaces every occurrence of \(v\) by a {\tt Bound} variable with the
+  correct index, yielding a body suitable for enclosing in an abstraction.
+
+\item[\ttindexbold{variant_abs} $(a,T,u)$] 
+  substitutes into $u$, which should be the body of an abstraction.
+  It replaces each occurrence of the outermost bound variable by a free
+  variable.  The free variable has type~$T$ and its name is a variant
+  of~$a$ chosen to be distinct from all constants and from all variables
+  free in~$u$.
+
+\item[$t$ \ttindexbold{aconv} $u$] 
+  tests whether terms~$t$ and~$u$ are \(\alpha\)-convertible: identical up
+  to renaming of bound variables.
+\begin{itemize}
+  \item
+    Two constants, {\tt Free}s, or {\tt Var}s are \(\alpha\)-convertible
+    if their names and types are equal.
+    (Variables having the same name but different types are thus distinct.
+    This confusing situation should be avoided!)
+  \item
+    Two bound variables are \(\alpha\)-convertible
+    if they have the same number.
+  \item
+    Two abstractions are \(\alpha\)-convertible
+    if their bodies are, and their bound variables have the same type.
+  \item
+    Two applications are \(\alpha\)-convertible
+    if the corresponding subterms are.
+\end{itemize}
+
+\end{description}
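+The following sketch shows how these functions work together.  All names
+here are invented for illustration, and {\tt -->} is assumed to be the
+infix function-type constructor.
+\begin{ttbox}
+val T = Type("nat",[]);
+val x = Free("x",T)  and  f = Free("f",T-->T);
+val body = abstract_over (x, f $ x);   (*yields  f $ Bound 0*)
+val t = Abs("x", T, body);             (*makes \(\lambda x.f(x)\)*)
+t aconv Abs("y", T, body);             (*true: bound names are ignored*)
+\end{ttbox}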
+
+\section{Certified terms}\index{terms!certified|bold}\index{signatures} 
 A term $t$ can be {\bf certified} under a signature to ensure that every
 type in~$t$ is declared in the signature and every constant in~$t$ is
 declared as a constant of the same type in the signature.  It must be
-well-typed and must not have any `loose' bound variable
-references.\footnote{This concerns the internal representation of variable
-binding using de Bruijn indexes.} Meta-rules such as {\tt forall_elim} take
-certified terms as arguments.
+well-typed and its use of bound variables must be well-formed.  Meta-rules
+such as {\tt forall_elim} take certified terms as arguments.
 
 Certified terms belong to the abstract type \ttindexbold{cterm}.
 Elements of the type can only be created through the certification process.
@@ -532,11 +552,12 @@
 \end{description}
 
 
-\section{Types}
-\index{types|bold}
+\section{Types}\index{types|bold} 
 Types belong to the \ML\ type \ttindexbold{typ}, which is a concrete
-datatype with three constructors.  Types are classified by sorts, which are
-lists of classes.  A class is represented by a string.
+datatype with three constructor functions.  These correspond to type
+constructors, free type variables and schematic type variables.  Types are
+classified by sorts, which are lists of classes.  A class is represented by
+a string.
 \begin{ttbox}
 type class = string;
 type sort  = class list;
--- a/doc-src/Ref/thm.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/thm.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -132,7 +132,7 @@
 
 
 \subsection{Instantiating a theorem} \index{theorems!instantiating|bold}
-\begin{ttbox} 
+\begin{ttbox}
 read_instantiate    :            (string*string)list -> thm -> thm
 read_instantiate_sg : Sign.sg -> (string*string)list -> thm -> thm
 cterm_instantiate   :    (Sign.cterm*Sign.cterm)list -> thm -> thm
@@ -208,7 +208,7 @@
 nprems_of     : thm -> int
 tpairs_of     : thm -> (term*term)list
 stamps_of_thy : thm -> string ref list
-dest_state    : thm*int -> (term*term)list * term list * term * term
+dest_state    : thm*int -> (term*term)list*term list*term*term
 rep_thm       : thm -> \{prop:term, hyps:term list, 
                         maxidx:int, sign:Sign.sg\}
 \end{ttbox}
@@ -220,7 +220,8 @@
 returns the premises of $thm$ as a list of terms.
 
 \item[\ttindexbold{nprems_of} $thm$] 
-returns the number of premises in $thm$: {\tt length(prems_of~$thm$)}.
+returns the number of premises in $thm$, and is equivalent to {\tt
+  length(prems_of~$thm$)}.
 
 \item[\ttindexbold{tpairs_of} $thm$] 
 returns the flex-flex constraints of $thm$.
@@ -240,7 +241,7 @@
 
 
 \subsection{Tracing flags for unification}
-\index{tracing!of unification}
+\index{tracing!of unification}\index{unification!tracing}
 \begin{ttbox} 
 Unify.trace_simp   : bool ref \hfill{\bf initially false}
 Unify.trace_types  : bool ref \hfill{\bf initially false}
@@ -251,19 +252,19 @@
 unexpectedly.  Letting {\tt res_inst_tac} circumvent the problem is easier,
 though.
 \begin{description}
-\item[\ttindexbold{Unify.trace_simp} \tt:= true;] 
+\item[{\tt Unify.trace_simp} \tt:= true;] 
 causes tracing of the simplification phase.
 
-\item[\ttindexbold{Unify.trace_types} \tt:= true;] 
+\item[{\tt Unify.trace_types} \tt:= true;] 
 generates warnings of incompleteness, when unification is not considering
 all possible instantiations of type unknowns.
 
-\item[\ttindexbold{Unify.trace_bound} \tt:= $n$] 
+\item[{\tt Unify.trace_bound} \tt:= $n$] 
 causes unification to print tracing information once it reaches depth~$n$.
 Use $n=0$ for full tracing.  At the default value of~10, tracing
 information is almost never printed.
 
-\item[\ttindexbold{Unify.search_bound} \tt:= $n$] 
+\item[{\tt Unify.search_bound} \tt:= $n$] 
 causes unification to limit its search to depth~$n$.  Because of this
 bound, higher-order unification cannot return an infinite sequence, though
 it can return a very long one.  The search rarely approaches the default
@@ -302,7 +303,7 @@
 
 Equality of truth values means logical equivalence:
 \[ \infer[({\equiv}I)]{\phi\equiv\psi}{\infer*{\psi}{[\phi]} &
- 			               \infer*{\phi}{[\psi]}}  
+                                       \infer*{\phi}{[\psi]}}  
    \qquad
    \infer[({\equiv}E)]{\psi}{\phi\equiv \psi & \phi}   \]
 
@@ -327,7 +328,7 @@
 The {\em universal quantification\/} rules are $(\Forall I)$ and $(\Forall
 E)$:\footnote{$(\Forall I)$ holds if $x$ is not free in the assumptions.}
 \[ \infer[(\Forall I)]{\Forall x.\phi}{\phi}        \qquad
-   \infer[(\Forall I)]{\phi[b/x]}{\Forall x.\phi}   \]
+   \infer[(\Forall E)]{\phi[b/x]}{\Forall x.\phi}   \]
 
 
 \subsection{Propositional rules}
@@ -499,8 +500,8 @@
 \subsubsection{Instantiation of unknowns}
 \index{meta-rules!instantiation|bold}
 \begin{ttbox} 
-instantiate: (indexname*Sign.ctyp)list * (Sign.cterm*Sign.cterm)list
-                   -> thm -> thm
+instantiate: (indexname*Sign.ctyp)list * 
+             (Sign.cterm*Sign.cterm)list  -> thm -> thm
 \end{ttbox}
 \begin{description}
 \item[\ttindexbold{instantiate} $tyinsts$ $insts$ $thm$] 
--- a/doc-src/Ref/undocumented.tex	Mon Mar 21 10:51:28 1994 +0100
+++ b/doc-src/Ref/undocumented.tex	Mon Mar 21 11:02:57 1994 +0100
@@ -22,58 +22,9 @@
 Given types \([ \tau_1, \ldots, \tau_n]\) and \(\tau\), it
 forms the type \(\tau_1\to \cdots \to (\tau_n\to\tau)\).
 
-\index{loose_bnos}
-\beginprog
-loose_bnos: term -> int list
-\endprog
-Calling \verb|loose_bnos t| returns the list of all 'loose' bound variable
-references.  In particular, {\tt Bound~0} is loose unless it is enclosed in
-an abstraction.  Similarly {\tt Bound~1} is loose unless it is enclosed in
-at least two abstractions; if enclosed in just one, the list will contain
-the number 0.  A well-formed term does not contain any loose variables.
-
 Calling {\prog{}type_of \${t}}\index{type_of} computes the type of the
 term~$t$.  Raises exception {\tt TYPE} unless applications are well-typed.
 
-\index{aconv}
-\beginprog
-op aconv: term*term -> bool
-\endprog
-Calling $t\Term{ aconv }u$ tests whether terms~$t$ and~$u$ are
-\(\alpha\)-convertible: identical up to renaming of bound variables.
-\begin{itemize}
-  \item
-    Two constants, {\tt Free}s, or {\tt Var}s are \(\alpha\)-convertible
-    just when their names and types are equal.
-    (Variables having the same name but different types are thus distinct.
-    This confusing situation should be avoided!)
-  \item
-    Two bound variables are \(\alpha\)-convertible
-    just when they have the same number.
-  \item
-    Two abstractions are \(\alpha\)-convertible
-    just when their bodies are, and their bound variables have the same type.
-  \item
-    Two applications are \(\alpha\)-convertible
-    just when the corresponding subterms are.
-\end{itemize}
-
-
-\index{incr_boundvars}
-\beginprog
-incr_boundvars: int -> term -> term
-\endprog
-This increments a term's `loose' bound variables by a given offset,
-required when moving a subterm into a context
-where it is enclosed by a different number of lambdas.
-
-\index{abstract_over}
-\beginprog
-abstract_over: term*term -> term
-\endprog
-For abstracting a term over a subterm \(v\):
-replaces every occurrence of \(v\) by a {\tt Bound} variable
-with the correct index.
 
 Calling \verb|subst_bounds|$([u_{n-1},\ldots,u_0],\,t)$\index{subst_bounds}
 substitutes the $u_i$ for loose bound variables in $t$.  This achieves