author paulson
date Wed, 02 Jul 1997 16:46:36 +0200
changeset 3485 f27a30a18a17
parent 3484 1e93eb09ebb9
child 3486 10cf84e5d2c2
Now there are TWO spaces after each full stop, so that the Emacs sentence primitives work
doc-src/Intro/advanced.tex
doc-src/Intro/foundations.tex
doc-src/Intro/getting.tex
doc-src/Ref/classical.tex
doc-src/Ref/defining.tex
doc-src/Ref/introduction.tex
doc-src/Ref/simplifier.tex
doc-src/Ref/syntax.tex
doc-src/Ref/theories.tex
doc-src/Ref/thm.tex
doc-src/Ref/undocumented.tex
--- a/doc-src/Intro/advanced.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Intro/advanced.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -10,7 +10,7 @@
 Constructive Type Theory~({\tt CTT}) form a richer world for
 mathematical reasoning and, again, many examples are in the
 literature.  Higher-order logic~({\tt HOL}) is Isabelle's most
-elaborate logic. Its types and functions are identified with those of
+elaborate logic.  Its types and functions are identified with those of
 the meta-logic.
 
 Choose a logic that you already understand.  Isabelle is a proof
@@ -367,7 +367,7 @@
   end} keyword).  In the simplest case, $T$ is just the union of
 $S@1$,~\ldots,~$S@n$.  New theories always extend one or more other
 theories, inheriting their types, constants, syntax, etc.  The theory
-\thydx{Pure} contains nothing but Isabelle's meta-logic. The variant
+\thydx{Pure} contains nothing but Isabelle's meta-logic.  The variant
 \thydx{CPure} offers the more usual higher-order function application
 syntax $t\,u@1\ldots\,u@n$ instead of $t(u@1,\ldots,u@n)$ in \Pure.
 
@@ -384,7 +384,7 @@
 keyword) or during creation of the new theory (say, a type error in a
 rule).  But if all goes well, {\tt use_thy} will finally read the file
 {\it T}{\tt.ML} (if it exists).  This file typically contains proofs
-that refer to the components of~$T$. The structure is automatically
+that refer to the components of~$T$.  The structure is automatically
 opened, so its components may be referred to by unqualified names,
 e.g.\ just {\tt thy} instead of $T${\tt .thy}.
 
@@ -429,14 +429,14 @@
 \indexbold{definitions} The {\bf definition part} is similar, but with
 the keyword {\tt defs} instead of {\tt rules}.  {\bf Definitions} are
 rules of the form $s \equiv t$, and should serve only as
-abbreviations. The simplest form of a definition is $f \equiv t$,
-where $f$ is a constant. Also allowed are $\eta$-equivalent forms of
+abbreviations.  The simplest form of a definition is $f \equiv t$,
+where $f$ is a constant.  Also allowed are $\eta$-equivalent forms of
 this, where the arguments of~$f$ appear applied on the left-hand side
 of the equation instead of abstracted on the right-hand side.
 
 Isabelle checks for common errors in definitions, such as extra
 variables on the right-hand side, but currently does not perform a complete
-test of well-formedness. Thus determined users can write
+test of well-formedness.  Thus determined users can write
 non-conservative `definitions' by using mutual recursion, for example;
 the consequences of such actions are their responsibility.
 
@@ -461,7 +461,7 @@
 end
 \end{ttbox}
 {\tt constdefs} generates the names {\tt nand_def} and {\tt xor_def}
-automatically, which is why it is restricted to alphanumeric identifiers. In
+automatically, which is why it is restricted to alphanumeric identifiers.  In
 general it has the form
 \begin{ttbox}
 constdefs  \(id@1\) :: \(\tau@1\)
@@ -479,7 +479,7 @@
 defs  prime_def "prime(p) == (m divides p) --> (m=1 | m=p)"
 \end{ttbox}
 Isabelle rejects this ``definition'' because of the extra {\tt m} on the
-right-hand side, which would introduce an inconsistency. What you should have
+right-hand side, which would introduce an inconsistency.  What you should have
 written is
 \begin{ttbox}
 defs  prime_def "prime(p) == ALL m. (m divides p) --> (m=1 | m=p)"
@@ -622,7 +622,7 @@
 Q.True \xor Q$ and $\lambda Q.True \nand Q$ are identical.
 
 \medskip Infix syntax and constant names may be also specified
-independently. For example, consider this version of $\nand$:
+independently.  For example, consider this version of $\nand$:
 \begin{ttbox}
 consts  nand     :: [o,o] => o         (infixl "~&" 35)
 \end{ttbox}
--- a/doc-src/Intro/foundations.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Intro/foundations.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -79,7 +79,7 @@
 Note that there are two versions of function application syntax
 available in Isabelle: either $t\,u$, which is the usual form for
 higher-order languages, or $t(u)$, trying to look more like
-first-order. The latter syntax is used throughout the manual.
+first-order.  The latter syntax is used throughout the manual.
 \[ 
 \index{lambda abs@$\lambda$-abstractions}\index{function applications}
 \begin{array}{ll}
@@ -602,7 +602,7 @@
 
   \item Even an equation such as $\Var{f}(a)\qeq a+a$ is all right.  It has
 four solutions, but Isabelle evaluates them lazily, trying projection before
-imitation. The first solution is usually the one desired:
+imitation.  The first solution is usually the one desired:
 \[ \Var{f}\equiv \lambda x. x+x \quad
    \Var{f}\equiv \lambda x. a+x \quad
    \Var{f}\equiv \lambda x. x+a \quad
--- a/doc-src/Intro/getting.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Intro/getting.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -4,9 +4,9 @@
 present, Isabelle's user interface is \ML.  Proofs are conducted by
 applying certain \ML{} functions, which update a stored proof state.
 Basically, all syntax must be expressed using plain {\sc ascii}
-characters. There are also mechanisms built into Isabelle that support
+characters.  There are also mechanisms built into Isabelle that support
 alternative syntaxes, for example using mathematical symbols from a
-special screen font. The meta-logic and major object-logics already
+special screen font.  The meta-logic and major object-logics already
 provide such fancy output as an option.
 
 Object-logics are built upon Pure Isabelle, which implements the
@@ -599,8 +599,8 @@
 attempted proof of the non-theorem $\exists y.\forall x.x=y$.  The former
 proof succeeds, and the latter fails, because of the scope of quantified
 variables~\cite{paulson-found}.  Unification helps even in these trivial
-proofs. In $\forall x.\exists y.x=y$ the $y$ that `exists' is simply $x$,
-but we need never say so. This choice is forced by the reflexive law for
+proofs.  In $\forall x.\exists y.x=y$ the $y$ that `exists' is simply $x$,
+but we need never say so.  This choice is forced by the reflexive law for
 equality, and happens automatically.
 
 \paragraph{The successful proof.}
--- a/doc-src/Ref/classical.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Ref/classical.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -306,11 +306,11 @@
 
 The classical reasoning tactics --- except {\tt blast_tac}! --- allow
 you to modify this basic proof strategy by applying two arbitrary {\bf
-  wrapper tacticals} to it. This affects each step of the search.
+  wrapper tacticals} to it.  This affects each step of the search.
 Usually they are the identity tacticals, but they could apply another
-tactic before or after the step tactic. The first one, which is
+tactic before or after the step tactic.  The first one, which is
 considered to be safe, affects \ttindex{safe_step_tac} and all the
-tactics that call it. The second one, which may be unsafe, affects
+tactics that call it.  The second one, which may be unsafe, affects
 \ttindex{step_tac}, \ttindex{slow_step_tac} and the tactics that call
 them.
 
@@ -339,7 +339,7 @@
 
 \begin{ttdescription}
 \item[$cs$ addss $ss$] \indexbold{*addss}
-adds the simpset~$ss$ to the classical set. The assumptions and goal will be
+adds the simpset~$ss$ to the classical set.  The assumptions and goal will be
 simplified, in a safe way, after the safe steps of the search.
 
 \item[$cs$ addSbefore $tac$] \indexbold{*addSbefore}
@@ -509,7 +509,7 @@
 yourself, you can execute these procedures one step at a time.
 \begin{ttdescription}
 \item[\ttindexbold{safe_step_tac} $cs$ $i$] performs a safe step on
-subgoal~$i$. The safe wrapper tactical is applied to a tactic that may include 
+subgoal~$i$.  The safe wrapper tactical is applied to a tactic that may include 
 proof by assumption or Modus Ponens (taking care not to instantiate unknowns), 
 or {\tt hyp_subst_tac}.
 
@@ -549,7 +549,7 @@
 \indexbold{*Blast_tac}\indexbold{*Auto_tac}
 \indexbold{*Best_tac}\indexbold{*Fast_tac}%
 \indexbold{*Deepen_tac}\indexbold{*Step_tac}
-make use of the current claset. E.g. {\tt Blast_tac} is defined as follows:
+make use of the current claset.  E.g. {\tt Blast_tac} is defined as follows:
 \begin{ttbox}
 fun Blast_tac i = fast_tac (!claset) i;
 \end{ttbox}
@@ -560,7 +560,7 @@
 \end{ttbox}
 \indexbold{*AddSIs} \indexbold{*AddSEs} \indexbold{*AddSDs}
 \indexbold{*AddIs} \indexbold{*AddEs} \indexbold{*AddDs}
-are used to add rules to the current claset. They work exactly like their
+are used to add rules to the current claset.  They work exactly like their
 lower case counterparts, such as {\tt addSIs}.  Calling
 \begin{ttbox}
 Delrules : thm list -> unit
--- a/doc-src/Ref/defining.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Ref/defining.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -182,9 +182,9 @@
 \end{warn}
 
 \begin{warn}
-  Type constraints bind very weakly. For example, \verb!x<y::nat! is normally
+  Type constraints bind very weakly.  For example, \verb!x<y::nat! is normally
   parsed as \verb!(x<y)::nat!, unless \verb$<$ has priority of 3 or less, in
-  which case the string is likely to be ambiguous. The correct form is
+  which case the string is likely to be ambiguous.  The correct form is
   \verb!x<(y::nat)!.
 \end{warn}
 
@@ -217,10 +217,10 @@
 Name tokens have a predefined syntax.  The lexer distinguishes six disjoint
 classes of names: \rmindex{identifiers}, \rmindex{unknowns}, type
 identifiers\index{type identifiers}, type unknowns\index{type unknowns},
-\rmindex{numerals}, \rmindex{strings}. They are denoted by \ndxbold{id},
+\rmindex{numerals}, \rmindex{strings}.  They are denoted by \ndxbold{id},
 \ndxbold{var}, \ndxbold{tid}, \ndxbold{tvar}, \ndxbold{xnum}, \ndxbold{xstr},
 respectively.  Typical examples are {\tt x}, {\tt ?x7}, {\tt 'a}, {\tt ?'a3},
-{\tt \#42}, {\tt ''foo bar''}. Here is the precise syntax:
+{\tt \#42}, {\tt ''foo bar''}.  Here is the precise syntax:
 \begin{eqnarray*}
 id        & =   & letter~quasiletter^* \\
 var       & =   & \mbox{\tt ?}id ~~|~~ \mbox{\tt ?}id\mbox{\tt .}nat \\
@@ -331,7 +331,7 @@
     analysis.\index{delimiters}
 
   \item[{\tt logtypes}] lists the types that are regarded the same as {\tt
-    logic} syntactically. Thus types of object-logics (e.g.\ {\tt nat}, say)
+    logic} syntactically.  Thus types of object-logics (e.g.\ {\tt nat}, say)
     will be automatically equipped with the standard syntax of
     $\lambda$-calculus.
 
@@ -445,7 +445,7 @@
 
 \begin{warn}
   Theories must sometimes declare types for purely syntactic purposes ---
-  merely playing the role of nonterminals. One example is \tydx{type}, the
+  merely playing the role of nonterminals.  One example is \tydx{type}, the
   built-in type of types.  This is a `type of all types' in the syntactic
   sense only.  Do not declare such types under {\tt arities} as belonging to
   class {\tt logic}\index{*logic class}, for that would make them useless as
@@ -527,11 +527,11 @@
 \end{ttbox}
 
 Note that because {\tt exp} is not of class {\tt logic}, it has been
-retained as a separate nonterminal. This also entails that the syntax
+retained as a separate nonterminal.  This also entails that the syntax
 does not provide for identifiers or parenthesized expressions.
 Normally you would also want to add the declaration {\tt arities
   exp::logic} after {\tt types} and use {\tt consts} instead of {\tt
-  syntax}. Try this as an exercise and study the changes in the
+  syntax}.  Try this as an exercise and study the changes in the
 grammar.
 
 \subsection{The mixfix template}
@@ -600,7 +600,7 @@
 
 A slightly more general form of infix declarations allows constant
 names to be independent from their concrete syntax, namely \texttt{$c$
-  ::\ $\sigma$\ (infixl "$sy$" $p$)}, the same for \texttt{infixr}. As
+  ::\ $\sigma$\ (infixl "$sy$" $p$)}, the same for \texttt{infixr}.  As
 an example consider:
 \begin{ttbox}
 and :: [bool, bool] => bool  (infixr "&" 35)
@@ -621,7 +621,7 @@
 introduces a constant~$c$ of type~$\sigma$, which must have the form
 $(\tau@1 \To \tau@2) \To \tau@3$.  Its concrete syntax is $\Q~x.P$, where
 $x$ is a bound variable of type~$\tau@1$, the body~$P$ has type $\tau@2$
-and the whole term has type~$\tau@3$. The optional integer $pb$
+and the whole term has type~$\tau@3$.  The optional integer $pb$
 specifies the body's priority, by default~$p$.  Special characters
 in $\Q$ must be escaped using a single quote.
 
@@ -660,8 +660,8 @@
 \section{*Alternative print modes} \label{sec:prmodes}
 \index{print modes|(}
 %
-Isabelle's pretty printer supports alternative output syntaxes. These
-may be used independently or in cooperation. The currently active
+Isabelle's pretty printer supports alternative output syntaxes.  These
+may be used independently or in cooperation.  The currently active
 print modes (with precedence from left to right) are determined by a
 reference variable.
 \begin{ttbox}\index{*print_mode}
@@ -669,7 +669,7 @@
 \end{ttbox}
 Initially this may already contain some print mode identifiers,
 depending on how Isabelle has been invoked (e.g.\ by some user
-interface). So changes should be incremental --- adding or deleting
+interface).  So changes should be incremental --- adding or deleting
 modes relative to the current value.
 
 Any \ML{} string is a legal print mode identifier, without any
@@ -678,7 +678,7 @@
 \texttt{symbols}, \texttt{latex}, \texttt{xterm}.
 
 There is a separate table of mixfix productions for pretty printing
-associated with each print mode. The currently active ones are
+associated with each print mode.  The currently active ones are
 conceptually just concatenated from left to right, with the standard
 syntax output table always coming last as default.  Thus mixfix
 productions of preceding modes in the list may override those of later
@@ -687,7 +687,7 @@
 
 \medskip The canonical application of print modes is optional printing
 of mathematical symbols from a special screen font instead of {\sc
-  ascii}. Another example is to re-use Isabelle's advanced
+  ascii}.  Another example is to re-use Isabelle's advanced
 $\lambda$-term printing mechanisms to generate completely different
 output, say for interfacing external tools like \rmindex{model
   checkers} (see also \texttt{HOL/Modelcheck}).
@@ -700,21 +700,21 @@
 
 To keep the grammar small and allow common productions to be shared
 all logical types (except {\tt prop}) are internally represented
-by one nonterminal, namely {\tt logic}. This and omitted or too freely
+by one nonterminal, namely {\tt logic}.  This and omitted or too freely
 chosen priorities may lead to ways of parsing an expression that were
-not intended by the theory's maker. In most cases Isabelle is able to
+not intended by the theory's maker.  In most cases Isabelle is able to
 select one of multiple parse trees that an expression has led
-to by checking which of them can be typed correctly. But this may not
+to by checking which of them can be typed correctly.  But this may not
 work in every case and always slows down parsing.
 The warning and error messages that can be produced during this process are
 as follows:
 
 If an ambiguity can be resolved by type inference the following
 warning is shown to remind the user that parsing is (unnecessarily)
-slowed down. In cases where it's not easily possible to eliminate the
+slowed down.  In cases where it's not easily possible to eliminate the
 ambiguity the frequency of the warning can be controlled by changing
 the value of {\tt Syntax.ambiguity_level} which has type {\tt int
-ref}. Its default value is 1 and by increasing it one can control how
+ref}.  Its default value is 1 and by increasing it one can control how
 many parse trees are necessary to generate the warning.
 
 \begin{ttbox}
@@ -738,7 +738,7 @@
 
 Ambiguities occurring in syntax translation rules cannot be resolved by
 type inference because it is not necessary for these rules to be type
-correct. Therefore Isabelle always generates an error message and the
+correct.  Therefore Isabelle always generates an error message and the
 ambiguity should be eliminated by changing the grammar or the rule.
 
 
--- a/doc-src/Ref/introduction.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Ref/introduction.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -28,13 +28,13 @@
 directory of the distribution on how to build it.
 
 \medskip Let $\langle isabellehome \rangle$ denote the location where
-the distribution has been installed. To run Isabelle from the shell
+the distribution has been installed.  To run Isabelle from the shell
 prompt within an ordinary text terminal session, simply type:
 \begin{ttbox}
 \({\langle}isabellehome{\rangle}\)/bin/isabelle
 \end{ttbox}
 This should start an interactive \ML{} session with the default
-object-logic already preloaded. All Isabelle commands are bound to
+object-logic already preloaded.  All Isabelle commands are bound to
 \ML{} identifiers.
 
 Subsequently we assume that {\tt \(\langle isabellehome \rangle\)/bin}
@@ -52,11 +52,11 @@
 \index{saving your work|bold} Isabelle provides no means of storing
 theorems or proofs on files.  Theorems are simply part of the \ML{}
 state and are named by \ML{} identifiers.  To save your work between
-sessions, you must dump the \ML{} system state to a file. This is done
+sessions, you must dump the \ML{} system state to a file.  This is done
 automatically when ending the session normally (e.g.\ by typing
 control-D), provided that the image has been opened \emph{writable} in
-the first place. The standard object-logics are usually read-only, so
-you probably have to create a private working copy first. For example,
+the first place.  The standard object-logics are usually read-only, so
+you probably have to create a private working copy first.  For example,
 the following shell command puts you into a writable Isabelle session
 of name \texttt{Foo} that initially contains just \FOL:
 \begin{ttbox}
@@ -65,7 +65,7 @@
 Ending the \texttt{Foo} session with control-D will cause the complete
 \ML{} world to be saved somewhere in your home directory\footnote{The
   default location is in \texttt{\~\relax/isabelle/heaps}, but this
-  depends on your local configuration.}. Make sure there is enough
+  depends on your local configuration.}.  Make sure there is enough
 space available! Then one may later continue at exactly the same point
 by running
 \begin{ttbox}
@@ -83,10 +83,10 @@
 work.
 
 \medskip There are more comfortable user interfaces than the
-bare-bones \ML{} top-level run from a text terminal. The
+bare-bones \ML{} top-level run from a text terminal.  The
 \texttt{Isabelle} executable (note the capital I) runs one such
 interface, depending on your local configuration.  Furthermore there
-are a number of external utilities available. These are started
+are a number of external utilities available.  These are started
 uniformly via the \texttt{isatool} wrapper.
 
 Again, see the \emph{System Manual} for more information user
@@ -145,7 +145,7 @@
 reset   : bool ref -> bool
 toggle  : bool ref -> bool
 \end{ttbox}\index{*set}\index{*reset}\index{*toggle}
-These are some shorthands for manipulating boolean references. The new
+These are some shorthands for manipulating boolean references.  The new
 value is returned.
 
 
--- a/doc-src/Ref/simplifier.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Ref/simplifier.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -63,13 +63,13 @@
 \end{ttdescription}
 
 {\tt Asm_full_simp_tac} is the most powerful of this quartet but may also
-loop where some of the others terminate. For example,
+loop where some of the others terminate.  For example,
 \begin{ttbox}
 {\out  1. ALL x. f(x) = g(f(g(x))) ==> f(0) = f(0)+0}
 \end{ttbox}
 is solved by {\tt Simp_tac}, but {\tt Asm_simp_tac} and {\tt Asm_full_simp_tac}
 loop because the rewrite rule $f(\Var{x}) = f(g(f(\Var{x})))$ extracted from
-the assumption does not terminate. Isabelle notices certain simple forms of
+the assumption does not terminate.  Isabelle notices certain simple forms of
 nontermination, but not this one.
  
 \begin{warn}
@@ -79,7 +79,7 @@
 {\out [| P(f(a)); f(a) = t |] ==> ...}
 \end{ttbox}
 \verb$Asm_full_simp_tac$ will not simplify the first assumption using the
-second but will leave the assumptions unchanged. See
+second but will leave the assumptions unchanged.  See
 \S\ref{sec:reordering-asms} for ways around this problem.
 \end{warn}
 
@@ -88,12 +88,12 @@
 be traced by setting \verb$trace_simp := true$.
 
 There is not just one global current simpset, but one associated with each
-theory as well. How are these simpsets built up?
+theory as well.  How are these simpsets built up?
 \begin{enumerate}
 \item When loading {\tt T.thy}, the current simpset is initialized with the
-  union of the simpsets associated with all the ancestors of {\tt T}. In
+  union of the simpsets associated with all the ancestors of {\tt T}.  In
   addition, certain constructs in {\tt T} may add new rules to the simpset,
-  e.g.\ \ttindex{datatype} and \ttindex{primrec} in \HOL. Definitions are not
+  e.g.\ \ttindex{datatype} and \ttindex{primrec} in \HOL.  Definitions are not
   added automatically!
 \item The script in {\tt T.ML} may manipulate the current simpset further by
   explicitly adding/deleting theorems to/from it (see below).
@@ -110,10 +110,10 @@
 \end{ttdescription}
 
 Generally useful simplification rules $\Var{n}+0 = \Var{n}$ should be added
-to the current simpset right after they have been proved. Those of a more
+to the current simpset right after they have been proved.  Those of a more
 specific nature (e.g.\ the laws of de~Morgan, which alter the structure of a
 formula) should only be added for specific proofs and deleted again
-afterwards. Conversely, it may also happen that a generally useful rule needs
+afterwards.  Conversely, it may also happen that a generally useful rule needs
 to be removed for a certain proof and is added again afterwards.  Well
 designed simpsets need few temporary additions or deletions.
 
@@ -133,7 +133,7 @@
   If you execute proofs interactively, or load them from an ML file without
   associated {\tt .thy} file, you must set the current simpset by calling
   \ttindex{set_current_thy}~{\tt"}$T${\tt"}, where $T$ is the name of the
-  theory you want to work in. If you have just loaded $T$, this is not
+  theory you want to work in.  If you have just loaded $T$, this is not
   necessary.
 \end{warn}
 
@@ -177,7 +177,7 @@
 {(\Var{i}+\Var{j})+\Var{k}}$ is ok.
 
 It will also deal gracefully with all rules whose left-hand sides are
-so-called {\em higher-order patterns}~\cite{nipkow-patterns}. These are terms
+so-called {\em higher-order patterns}~\cite{nipkow-patterns}.  These are terms
 in $\beta$-normal form (this will always be the case unless you have done
 something strange) where each occurrence of an unknown is of the form
 $\Var{F}(x@1,\dots,x@n)$, where the $x@i$ are distinct bound variables.
@@ -187,7 +187,7 @@
 In some rare cases the rewriter will even deal with quite general rules: for
 example ${\Var{f}(\Var{x})\in range(\Var{f})} = True$ rewrites $g(a) \in
 range(g)$ to $True$, but will fail to match $g(h(b)) \in range(\lambda
-x.g(h(x)))$. However, you can replace the offending subterms (in our case
+x.g(h(x)))$.  However, you can replace the offending subterms (in our case
 $\Var{f}(\Var{x})$, which is not a pattern) by adding new variables and
 conditions: $\Var{y} = \Var{f}(\Var{x}) \Imp \Var{y}\in range(\Var{f})
 = True$ is acceptable as a conditional rewrite rule since conditions can
@@ -277,20 +277,20 @@
 instantiating~$\Var{A}$.  The solver, however, is an arbitrary tactic
 and may instantiate unknowns as it pleases.  This is the only way the
 simplifier can handle a conditional rewrite rule whose condition
-contains extra variables. When a simplification tactic is to be
+contains extra variables.  When a simplification tactic is to be
 combined with other provers, especially with the classical reasoner,
-it is important whether it can be considered safe or not. Therefore,
-the solver is split into a safe and an unsafe part. Both parts can be
+it is important whether it can be considered safe or not.  Therefore,
+the solver is split into a safe and an unsafe part.  Both parts can be
 set independently, using either \ttindex{setSSolver} with a safe
 tactic as argument, or \ttindex{setSolver} with an unsafe tactic.
 Additional solvers, which are tried after the already set solvers, may
 be added using \ttindex{addSSolver} and \ttindex{addSolver}.
 
 The standard simplification strategy solely uses the unsafe solver,
-which is appropriate in most cases. But for special applications where
+which is appropriate in most cases.  But for special applications where
 the simplification process is not allowed to instantiate unknowns
 within the goal, the tactic \ttindexbold{safe_asm_full_simp_tac} is
-provided. It uses the \emph{safe} solver for the current subgoal, but
+provided.  It uses the \emph{safe} solver for the current subgoal, but
 applies ordinary unsafe {\tt asm_full_simp_tac} for recursive internal
 simplifications (for conditional rules or congruences).
 
@@ -331,7 +331,7 @@
 typical looper is case splitting: the expansion of a conditional.  Another
 possibility is to apply an elimination rule on the assumptions.  More
 adventurous loopers could start an induction.  The looper is set with 
-\ttindex{setloop}. Additional loopers, which are tried after the already set
+\ttindex{setloop}.  Additional loopers, which are tried after the already set
 looper, may be added with \ttindex{addloop}.
 
 
@@ -420,9 +420,9 @@
 
 The actual simplification work is performed by the following basic tactics:
 \ttindexbold{simp_tac}, \ttindexbold{asm_simp_tac},
-\ttindexbold{full_simp_tac} and \ttindexbold{asm_full_simp_tac}. They work
+\ttindexbold{full_simp_tac} and \ttindexbold{asm_full_simp_tac}.  They work
 exactly like their namesakes in \S\ref{sec:simp-for-dummies}, except that
-they are explicitly supplied with a simpset. This is because the ones in
+they are explicitly supplied with a simpset.  This is because the ones in
 \S\ref{sec:simp-for-dummies} are defined in terms of the ones above, e.g.
 \begin{ttbox}
 fun Simp_tac i = simp_tac (!simpset) i;
@@ -440,7 +440,7 @@
 \begin{ttbox}
 fun Addsimps thms = (simpset := (!simpset addsimps thms));
 \end{ttbox}
-and can also be used directly, even in the presence of a current simpset. For
+and can also be used directly, even in the presence of a current simpset.  For
 example
 \begin{ttbox}
 Addsimps \(thms\);
@@ -626,7 +626,7 @@
 \index{assumptions!reordering}
 
 As mentioned above, \ttindex{asm_full_simp_tac} may require the assumptions
-to be permuted to be more effective. Given the subgoal
+to be permuted to be more effective.  Given the subgoal
 \begin{ttbox}
 {\out 1. [| P(f(a)); Q; f(a) = t; R |] ==> S}
 \end{ttbox}
@@ -643,7 +643,7 @@
 
 Since rotation alone cannot produce arbitrary permutations, you can also pick
 out a particular assumption which needs to be rewritten and move it to the
-right end of the assumptions. In the above case rotation can be replaced by
+right end of the assumptions.  In the above case rotation can be replaced by
 \begin{ttbox}
 by (dres_inst_tac [("psi","P(f(a))")] asm_rl 1);
 \end{ttbox}
@@ -653,7 +653,7 @@
 \end{ttbox}
 
 Note that reordering assumptions usually leads to brittle proofs and should
-therefore be avoided. Future versions of \verb$asm_full_simp_tac$ may remove
+therefore be avoided.  Future versions of \verb$asm_full_simp_tac$ may remove
 the need for such manipulations.
 
 \section{Permutative rewrite rules}
--- a/doc-src/Ref/syntax.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Ref/syntax.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -114,7 +114,7 @@
 \begin{itemize}
 \item Name tokens: $\astofpt{t} = \Variable s$, where $t$ is an \ndx{id},
   \ndx{var}, \ndx{tid}, \ndx{tvar}, \ndx{xnum} or \ndx{xstr} token, and $s$
-  its associated string. Note that for {\tt xstr} this does not include the
+  its associated string.  Note that for {\tt xstr} this does not include the
   quotes.
 
 \item Copy productions:\index{productions!copy}
@@ -606,7 +606,7 @@
 The nonterminal is declared as the type~{\tt is}, but with no {\tt arities}
 declaration.  Hence {\tt is} is not a logical type and may be used safely as
 a new nonterminal for custom syntax.  The nonterminal~{\tt is} can later be
-re-used for other enumerations of type~{\tt i} like lists or tuples. If we
+re-used for other enumerations of type~{\tt i} like lists or tuples.  If we
 had needed polymorphic enumerations, we could have used the predefined
 nonterminal symbol \ndx{args} and skipped this part altogether.
 
@@ -850,7 +850,7 @@
 constraint might appear.
 
 Also note that we are responsible to mark free identifiers that
-actually represent bound variables. This is achieved by
+actually represent bound variables.  This is achieved by
 \ttindex{Syntax.variant_abs'} and \ttindex{Syntax.mark_boundT} above.
 Failing to do so may cause these names to be printed in the wrong
 style.  \index{translations|)} \index{syntax!transformations|)}
@@ -861,24 +861,24 @@
 %
 Isabelle's meta-logic features quite a lot of different kinds of
 identifiers, namely {\em class}, {\em tfree}, {\em tvar}, {\em free},
-{\em bound}, {\em var}. One might want to have these printed in
+{\em bound}, {\em var}.  One might want to have these printed in
 different styles, e.g.\ in bold or italic, or even transcribed into
 something more readable like $\alpha, \alpha', \beta$ instead of {\tt
-  'a}, {\tt 'aa}, {\tt 'b} for type variables. Token translations
+  'a}, {\tt 'aa}, {\tt 'b} for type variables.  Token translations
 provide a means to such ends, enabling the user to install certain
 \ML{} functions associated with any logical \rmindex{token class} and
 depending on some \rmindex{print mode}.
 
 The logical class of identifiers can not necessarily be determined by
-its syntactic category, though. For example, consider free vs.\ bound
-variables. So Isabelle's pretty printing mechanism, starting from
+its syntactic category, though.  For example, consider free vs.\ bound
+variables.  So Isabelle's pretty printing mechanism, starting from
 fully typed terms, has to be careful to preserve this additional
 information\footnote{This is done by marking atoms in abstract syntax
-  trees appropriately. The marks are actually visible by print
+  trees appropriately.  The marks are actually visible by print
   translation functions -- they are just special constants applied to
   atomic asts, for example \texttt{("_bound" x)}.}.  In particular,
 user-supplied print translation functions operating on terms have to
-be well-behaved in this respect. Free identifiers introduced to
+be well-behaved in this respect.  Free identifiers introduced to
 represent bound variables have to be marked appropriately (cf.\ the
 example at the end of \S\ref{sec:tr_funs}).
 
@@ -890,12 +890,12 @@
 \end{ttbox}\index{token_translation}
 The elements of this list are of the form $(m, c, f)$, where $m$ is a
 print mode identifier, $c$ a token class, and $f\colon string \to
-string \times int$ the actual translation function. Assuming that $x$
+string \times int$ the actual translation function.  Assuming that $x$
 is of identifier class $c$, and print mode $m$ is the first one of all
 currently active modes that provide some translation for $c$, then $x$
-is output according to $f(x) = (x', len)$. Thereby $x'$ is the
+is output according to $f(x) = (x', len)$.  Thereby $x'$ is the
 modified identifier name and $len$ its visual length approximated in
-terms of whole characters. Thus $x'$ may include non-printing parts
+terms of whole characters.  Thus $x'$ may include non-printing parts
 like control sequences or markup information for typesetting systems.
 
 %FIXME example (?)
--- a/doc-src/Ref/theories.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Ref/theories.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -28,7 +28,7 @@
 \section{Defining theories}\label{sec:ref-defining-theories}
 
 Theories are usually defined using theory definition files (which have a name
-suffix {\tt .thy}). There is also a low level interface provided by certain
+suffix {\tt .thy}).  There is also a low level interface provided by certain
 \ML{} functions (see \S\ref{BuildingATheory}).
 Appendix~\ref{app:TheorySyntax} presents the concrete syntax for theory
 definitions; here is an explanation of the constituent parts:
@@ -37,7 +37,7 @@
   called $id$.  It is the union of the named {\bf parent
     theories}\indexbold{theories!parent}, possibly extended with new
   components.  \thydx{Pure} and \thydx{CPure} are the basic theories,
-  which contain only the meta-logic. They differ just in their
+  which contain only the meta-logic.  They differ just in their
   concrete syntax for function applications.
 
   Normally each {\it name\/} is an identifier, the name of the parent theory.
@@ -74,7 +74,7 @@
 
 \item[$infix$]
   declares a type or constant to be an infix operator of priority $nat$
-  associating to the left ({\tt infixl}) or right ({\tt infixr}). Only
+  associating to the left ({\tt infixl}) or right ({\tt infixr}).  Only
   2-place type constructors can have infix status; an example is {\tt
   ('a,'b)~"*"~(infixr~20)}, which may express binary product types.
 
@@ -89,8 +89,8 @@
 \item[$syntax$] \index{*syntax section}\index{print mode} is a variant
   of $consts$ which adds just syntax without actually declaring
   logical constants.  This gives full control over a theory's context
-  free grammar. The optional $mode$ specifies the print mode where the
-  mixfix productions should be added. If there is no \texttt{output}
+  free grammar.  The optional $mode$ specifies the print mode where the
+  mixfix productions should be added.  If there is no \texttt{output}
   option given, all productions are also added to the input syntax
   (regardless of the print mode).
 
@@ -126,19 +126,19 @@
   that every $string$ must be a definition (see below for details).
 
 \item[$constdefs$] combines the declaration of constants and their
-  definition. The first $string$ is the type, the second the definition.
+  definition.  The first $string$ is the type, the second the definition.
   
 \item[$axclass$] \index{*axclass section} defines an
   \rmindex{axiomatic type class} as the intersection of existing
-  classes, with additional axioms holding. Class axioms may not
-  contain more than one type variable. The class axioms (with implicit
+  classes, with additional axioms holding.  Class axioms may not
+  contain more than one type variable.  The class axioms (with implicit
   sort constraints added) are bound to the given names.  Furthermore a
   class introduction rule is generated, which is automatically
   employed by $instance$ to prove instantiations of this class.
   
 \item[$instance$] \index{*instance section} proves class inclusions or
   type arities at the logical level and then transfers these to the
-  type signature. The instantiation is proven and checked properly.
+  type signature.  The instantiation is proven and checked properly.
   The user has to supply sufficient witness information: theorems
   ($longident$), axioms ($string$), or even arbitrary \ML{} tactic
   code $verbatim$.
@@ -157,7 +157,7 @@
 
 \subsection{Definitions}\indexbold{definitions}
 
-{\bf Definitions} are intended to express abbreviations. The simplest
+{\bf Definitions} are intended to express abbreviations.  The simplest
 form of a definition is $f \equiv t$, where $f$ is a constant.
 Isabelle also allows derived forms where the arguments of~$f$ appear
 on the left, abbreviating a string of $\lambda$-abstractions.
@@ -327,20 +327,20 @@
 \end{ttbox}
 Quoted strings stand for theories which have to be loaded before the
 current theory is read but which are not used in building the base of
-theory~$B$. Whenever {\tt orphan} changes and is reloaded, Isabelle
+theory~$B$.  Whenever {\tt orphan} changes and is reloaded, Isabelle
 knows that $B$ has to be updated, too.
 
 Note that it's necessary for {\tt orphan} to declare a special ML
-object of type {\tt theory} which is present in all theories. This is
+object of type {\tt theory} which is present in all theories.  This is
 normally achieved by adding the file {\tt orphan.thy} to make {\tt
-orphan} a {\bf pseudo theory}. A minimum version of {\tt orphan.thy}
+orphan} a {\bf pseudo theory}.  A minimum version of {\tt orphan.thy}
 would be
 
 \begin{ttbox}
 orphan = Pure
 \end{ttbox}
 
-which uses {\tt Pure} to make a dummy theory. Normally though the
+which uses {\tt Pure} to make a dummy theory.  Normally though the
 orphaned file has its own dependencies.  If {\tt orphan.ML} depends on
 theories or files $A@1$, \ldots, $A@n$, record this by creating the
 pseudo theory in the following way:
@@ -372,7 +372,7 @@
   to have the same name; {\tt get_axiom} returns an arbitrary one.
 
 \item[\ttindexbold{get_thm} $thy$ $name$]
-  is analogous to {\tt get_axiom}, but looks for a stored theorem. Like
+  is analogous to {\tt get_axiom}, but looks for a stored theorem.  Like
   {\tt get_axiom} it searches all parents of a theory if the theorem
   is not associated with $thy$.
 
@@ -399,13 +399,13 @@
 \begin{description}
 \item[\ttindexbold{Pure.thy}, \ttindexbold{CPure.thy}] contain the
   syntax and signature of the meta-logic.  There are no axioms:
-  meta-level inferences are carried out by \ML\ functions. The two
+  meta-level inferences are carried out by \ML\ functions.  The two
   \Pure s just differ in their concrete syntax of function
   application: $t(u@1, \ldots, u@n)$ vs.\ $t\,u@1\ldots\,u@n$.
 
 \item[\ttindexbold{merge_theories} ($thy@1$, $thy@2$)] merges the two
   theories $thy@1$ and $thy@2$.  The resulting theory contains all of the
-  syntax, signature and axioms of the constituent theories. Merging theories
+  syntax, signature and axioms of the constituent theories.  Merging theories
   that contain different identification stamps of the same name fails with
   the following message
 \begin{ttbox}
--- a/doc-src/Ref/thm.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Ref/thm.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -270,16 +270,16 @@
 a type belonging to it because certain axioms are unsatisfiable.
 
 If a theorem contains a type variable that is constrained by an empty
-sort, then that theorem has no instances. It is basically an instance
+sort, then that theorem has no instances.  It is basically an instance
 of {\em ex falso quodlibet}.  But what if it is used to prove another
 theorem that no longer involves that sort?  The latter theorem holds
 only under an additional non-emptiness assumption.
 
-Therefore, Isabelle's theorems carry around sort hypotheses. The {\tt
+Therefore, Isabelle's theorems carry around sort hypotheses.  The {\tt
 shyps} field is a list of sorts occurring in type variables in the current
 {\tt prop} and {\tt hyps} fields.  It may also include sorts used in the
 theorem's proof that no longer appear in the {\tt prop} or {\tt hyps}
-fields --- so-called {\em dangling\/} sort constraints. These are the
+fields --- so-called {\em dangling\/} sort constraints.  These are the
 critical ones, asserting non-emptiness of the corresponding sorts.
  
 Isabelle tries to remove extraneous sorts from the {\tt shyps} field whenever
@@ -339,7 +339,7 @@
 \[ \phi \quad [\phi@1,\ldots,\phi@n], \]
 where $\phi@1$,~\ldots,~$\phi@n$ are the assumptions.  This can be
 also read as a single conclusion sequent $\phi@1,\ldots,\phi@n \vdash
-\phi$. Do not confuse meta-level assumptions with the object-level
+\phi$.  Do not confuse meta-level assumptions with the object-level
 assumptions in a subgoal, which are represented in the meta-logic
 using~$\Imp$.
 
--- a/doc-src/Ref/undocumented.tex	Wed Jul 02 11:59:10 1997 +0200
+++ b/doc-src/Ref/undocumented.tex	Wed Jul 02 16:46:36 1997 +0200
@@ -162,7 +162,7 @@
 \( (i, [], \phi_1;\ldots \phi_i\Imp \phi) \)
 it returns another triple,
 \((\phi_i, [\phi_{i-1},\ldots, \phi_1], \phi)\),
-where $\phi$ need not be atomic. Raises an exception if $i$ is out of
+where $\phi$ need not be atomic.  Raises an exception if $i$ is out of
 range.
 
 
@@ -240,8 +240,8 @@
 smash_unifiers: env * (term*term)list -> env seq
 \endprog
 This unification function maps an environment and a list of disagreement
-pairs to a sequence of updated environments. The function obliterates
-flex-flex pairs by choosing the obvious unifier. It may be used to tidy up
+pairs to a sequence of updated environments.  The function obliterates
+flex-flex pairs by choosing the obvious unifier.  It may be used to tidy up
 any flex-flex pairs remaining at the end of a proof.