--- a/doc-src/HOL/HOL.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/HOL/HOL.tex Mon Aug 28 13:52:38 2000 +0200
@@ -5,41 +5,34 @@
The theory~\thydx{HOL} implements higher-order logic. It is based on
Gordon's~{\sc hol} system~\cite{mgordon-hol}, which itself is based on
-Church's original paper~\cite{church40}. Andrews's
-book~\cite{andrews86} is a full description of the original
-Church-style higher-order logic. Experience with the {\sc hol} system
-has demonstrated that higher-order logic is widely applicable in many
-areas of mathematics and computer science, not just hardware
-verification, {\sc hol}'s original \textit{raison d'{\^e}tre\/}. It is
-weaker than {\ZF} set theory but for most applications this does not
-matter. If you prefer {\ML} to Lisp, you will probably prefer \HOL\
-to~{\ZF}.
-
-The syntax of \HOL\footnote{Earlier versions of Isabelle's \HOL\ used a
-different syntax. Ancient releases of Isabelle included still another version
-of~\HOL, with explicit type inference rules~\cite{paulson-COLOG}. This
-version no longer exists, but \thydx{ZF} supports a similar style of
-reasoning.} follows $\lambda$-calculus and functional programming. Function
-application is curried. To apply the function~$f$ of type
-$\tau@1\To\tau@2\To\tau@3$ to the arguments~$a$ and~$b$ in \HOL, you simply
-write $f\,a\,b$. There is no `apply' operator as in \thydx{ZF}. Note that
-$f(a,b)$ means ``$f$ applied to the pair $(a,b)$'' in \HOL. We write ordered
-pairs as $(a,b)$, not $\langle a,b\rangle$ as in {\ZF}.
-
-\HOL\ has a distinct feel, compared with {\ZF} and {\CTT}. It
-identifies object-level types with meta-level types, taking advantage of
-Isabelle's built-in type-checker. It identifies object-level functions
-with meta-level functions, so it uses Isabelle's operations for abstraction
-and application.
-
-These identifications allow Isabelle to support \HOL\ particularly
-nicely, but they also mean that \HOL\ requires more sophistication
-from the user --- in particular, an understanding of Isabelle's type
-system. Beginners should work with \texttt{show_types} (or even
-\texttt{show_sorts}) set to \texttt{true}.
-% Gain experience by
-%working in first-order logic before attempting to use higher-order logic.
-%This chapter assumes familiarity with~{\FOL{}}.
+Church's original paper~\cite{church40}. Andrews's book~\cite{andrews86} is a
+full description of the original Church-style higher-order logic. Experience
+with the {\sc hol} system has demonstrated that higher-order logic is widely
+applicable in many areas of mathematics and computer science, not just
+hardware verification, {\sc hol}'s original \textit{raison d'{\^e}tre\/}. It
+is weaker than ZF set theory but for most applications this does not matter.
+If you prefer {\ML} to Lisp, you will probably prefer HOL to~ZF.
+
+The syntax of HOL\footnote{Earlier versions of Isabelle's HOL used a different
+ syntax. Ancient releases of Isabelle included still another version of~HOL,
+ with explicit type inference rules~\cite{paulson-COLOG}. This version no
+ longer exists, but \thydx{ZF} supports a similar style of reasoning.}
+follows $\lambda$-calculus and functional programming. Function application
+is curried. To apply the function~$f$ of type $\tau@1\To\tau@2\To\tau@3$ to
+the arguments~$a$ and~$b$ in HOL, you simply write $f\,a\,b$. There is no
+`apply' operator as in \thydx{ZF}. Note that $f(a,b)$ means ``$f$ applied to
+the pair $(a,b)$'' in HOL. We write ordered pairs as $(a,b)$, not $\langle
+a,b\rangle$ as in ZF.
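+
+For example, the following trivial script applies a curried function and a
+function on pairs (a small illustration, not taken from the HOL sources;
+\texttt{f}, \texttt{g}, \texttt{a} and \texttt{b} are free variables):
+\begin{ttbox}
+Goal "f a b = f a b & g (a, b) = g (a, b)";
+by (Simp_tac 1);
+\end{ttbox}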
+
+HOL has a distinct feel, compared with ZF and CTT. It identifies object-level
+types with meta-level types, taking advantage of Isabelle's built-in
+type-checker. It identifies object-level functions with meta-level functions,
+so it uses Isabelle's operations for abstraction and application.
+
+These identifications allow Isabelle to support HOL particularly nicely, but
+they also mean that HOL requires more sophistication from the user --- in
+particular, an understanding of Isabelle's type system. Beginners should work
+with \texttt{show_types} (or even \texttt{show_sorts}) set to \texttt{true}.
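+For example, these flags are boolean references and can be switched on at the
+{\ML} top level:
+\begin{ttbox}
+set show_types;
+\end{ttbox}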
\begin{figure}
@@ -123,7 +116,7 @@
& | & "?!~~" id~id^* " . " formula \\
\end{array}
\]
-\caption{Full grammar for \HOL} \label{hol-grammar}
+\caption{Full grammar for HOL} \label{hol-grammar}
\end{figure}
@@ -135,12 +128,11 @@
$\lnot(a=b)$.
\begin{warn}
- \HOL\ has no if-and-only-if connective; logical equivalence is expressed
- using equality. But equality has a high priority, as befitting a
- relation, while if-and-only-if typically has the lowest priority. Thus,
- $\lnot\lnot P=P$ abbreviates $\lnot\lnot (P=P)$ and not $(\lnot\lnot P)=P$.
- When using $=$ to mean logical equivalence, enclose both operands in
- parentheses.
+ HOL has no if-and-only-if connective; logical equivalence is expressed using
+ equality. But equality has a high priority, as befitting a relation, while
+ if-and-only-if typically has the lowest priority. Thus, $\lnot\lnot P=P$
+ abbreviates $\lnot\lnot (P=P)$ and not $(\lnot\lnot P)=P$. When using $=$
+ to mean logical equivalence, enclose both operands in parentheses.
\end{warn}
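+
+For instance, stating the commutativity of conjunction as a logical
+equivalence requires parentheses around both operands (a small illustrative
+goal, not one of the manual's examples):
+\begin{ttbox}
+Goal "(P & Q) = (Q & P)";
+by (Blast_tac 1);
+\end{ttbox}
+Without the parentheses, the string \texttt{"P \& Q = Q \& P"} would be read
+as \texttt{P \& (Q = Q) \& P}, which is not a theorem.
+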
\subsection{Types and overloading}
@@ -156,7 +148,7 @@
term} if $\sigma$ and~$\tau$ do, allowing quantification over
functions.
-\HOL\ allows new types to be declared as subsets of existing types;
+HOL allows new types to be declared as subsets of existing types;
see~{\S}\ref{sec:HOL:Types}. ML-like datatypes can also be declared;
see~{\S}\ref{sec:HOL:datatype}.
@@ -218,12 +210,11 @@
\subsection{Binders}
-Hilbert's {\bf description} operator~$\varepsilon x. P[x]$ stands for
-some~$x$ satisfying~$P$, if such exists. Since all terms in \HOL\
-denote something, a description is always meaningful, but we do not
-know its value unless $P$ defines it uniquely. We may write
-descriptions as \cdx{Eps}($\lambda x. P[x]$) or use the syntax
-\hbox{\tt SOME~$x$.~$P[x]$}.
+Hilbert's {\bf description} operator~$\varepsilon x. P[x]$ stands for some~$x$
+satisfying~$P$, if such exists. Since all terms in HOL denote something, a
+description is always meaningful, but we do not know its value unless $P$
+defines it uniquely. We may write descriptions as \cdx{Eps}($\lambda x.
+P[x]$) or use the syntax \hbox{\tt SOME~$x$.~$P[x]$}.
Existential quantification is defined by
\[ \exists x. P~x \;\equiv\; P(\varepsilon x. P~x). \]
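+For instance (a small illustration, not taken from the theory sources), a
+description with a unique witness is determined,
+\[ (\varepsilon x.\; x = a) = a, \]
+whereas $\varepsilon x.\; x=a \lor x=b$ denotes one of $a$ and~$b$, but the
+axioms do not say which.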
@@ -272,7 +263,7 @@
the constant~\cdx{Let}. It can be expanded by rewriting with its
definition, \tdx{Let_def}.
-\HOL\ also defines the basic syntax
+HOL also defines the basic syntax
\[\dquotes"case"~e~"of"~c@1~"=>"~e@1~"|" \dots "|"~c@n~"=>"~e@n\]
as a uniform means of expressing \texttt{case} constructs. Therefore \texttt{case}
and \sdx{of} are reserved words. Initially, this is mere syntax and has no
@@ -305,9 +296,8 @@
\caption{The \texttt{HOL} rules} \label{hol-rules}
\end{figure}
-Figure~\ref{hol-rules} shows the primitive inference rules of~\HOL{},
-with their~{\ML} names. Some of the rules deserve additional
-comments:
+Figure~\ref{hol-rules} shows the primitive inference rules of~HOL, with
+their~{\ML} names. Some of the rules deserve additional comments:
\begin{ttdescription}
\item[\tdx{ext}] expresses extensionality of functions.
\item[\tdx{iff}] asserts that logically equivalent formulae are
@@ -342,14 +332,14 @@
\end{figure}
-\HOL{} follows standard practice in higher-order logic: only a few
-connectives are taken as primitive, with the remainder defined obscurely
+HOL follows standard practice in higher-order logic: only a few connectives
+are taken as primitive, with the remainder defined obscurely
(Fig.\ts\ref{hol-defs}). Gordon's {\sc hol} system expresses the
corresponding definitions \cite[page~270]{mgordon-hol} using
-object-equality~({\tt=}), which is possible because equality in
-higher-order logic may equate formulae and even functions over formulae.
-But theory~\HOL{}, like all other Isabelle theories, uses
-meta-equality~({\tt==}) for definitions.
+object-equality~({\tt=}), which is possible because equality in higher-order
+logic may equate formulae and even functions over formulae. But theory~HOL,
+like all other Isabelle theories, uses meta-equality~({\tt==}) for
+definitions.
\begin{warn}
The definitions above should never be expanded and are shown for completeness
only. Instead users should reason in terms of the derived rules shown below
@@ -401,7 +391,7 @@
\subcaption{Logical equivalence}
\end{ttbox}
-\caption{Derived rules for \HOL} \label{hol-lemmas1}
+\caption{Derived rules for HOL} \label{hol-lemmas1}
\end{figure}
@@ -584,8 +574,8 @@
\section{A formulation of set theory}
Historically, higher-order logic gives a foundation for Russell and
Whitehead's theory of classes. Let us use modern terminology and call them
-{\bf sets}, but note that these sets are distinct from those of {\ZF} set
-theory, and behave more like {\ZF} classes.
+{\bf sets}, but note that these sets are distinct from those of ZF set theory,
+and behave more like ZF classes.
\begin{itemize}
\item
Sets are given by predicates over some type~$\sigma$. Types serve to
@@ -597,17 +587,17 @@
Although sets may contain other sets as elements, the containing set must
have a more complex type.
\end{itemize}
-Finite unions and intersections have the same behaviour in \HOL\ as they
-do in~{\ZF}. In \HOL\ the intersection of the empty set is well-defined,
-denoting the universal set for the given type.
+Finite unions and intersections have the same behaviour in HOL as they do
+in~ZF. In HOL the intersection of the empty set is well-defined, denoting the
+universal set for the given type.
\subsection{Syntax of set theory}\index{*set type}
-\HOL's set theory is called \thydx{Set}. The type $\alpha\,set$ is
-essentially the same as $\alpha\To bool$. The new type is defined for
-clarity and to avoid complications involving function types in unification.
-The isomorphisms between the two types are declared explicitly. They are
-very natural: \texttt{Collect} maps $\alpha\To bool$ to $\alpha\,set$, while
-\hbox{\tt op :} maps in the other direction (ignoring argument order).
+HOL's set theory is called \thydx{Set}. The type $\alpha\,set$ is essentially
+the same as $\alpha\To bool$. The new type is defined for clarity and to
+avoid complications involving function types in unification. The isomorphisms
+between the two types are declared explicitly. They are very natural:
+\texttt{Collect} maps $\alpha\To bool$ to $\alpha\,set$, while \hbox{\tt op :}
+maps in the other direction (ignoring argument order).
Figure~\ref{hol-set-syntax} lists the constants, infixes, and syntax
translations. Figure~\ref{hol-set-syntax2} presents the grammar of the new
@@ -623,11 +613,11 @@
\texttt{insert} \, a \, ({\tt insert} \, b \, ({\tt insert} \, c \, \{\}))
\end{eqnarray*}
-The set \hbox{\tt{\ttlbrace}$x$.\ $P[x]${\ttrbrace}} consists of all $x$ (of suitable type)
-that satisfy~$P[x]$, where $P[x]$ is a formula that may contain free
-occurrences of~$x$. This syntax expands to \cdx{Collect}$(\lambda
-x. P[x])$. It defines sets by absolute comprehension, which is impossible
-in~{\ZF}; the type of~$x$ implicitly restricts the comprehension.
+The set \hbox{\tt{\ttlbrace}$x$.\ $P[x]${\ttrbrace}} consists of all $x$ (of
+suitable type) that satisfy~$P[x]$, where $P[x]$ is a formula that may contain
+free occurrences of~$x$. This syntax expands to \cdx{Collect}$(\lambda x.
+P[x])$. It defines sets by absolute comprehension, which is impossible in~ZF;
+the type of~$x$ implicitly restricts the comprehension.
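+For example, membership in a comprehension reduces to the defining predicate
+by simplification (a small illustrative script, not part of theory
+\texttt{Set} itself):
+\begin{ttbox}
+Goal "(x : {\ttlbrace}y. y=a | y=b{\ttrbrace}) = (x=a | x=b)";
+by (Simp_tac 1);
+\end{ttbox}
+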
The set theory defines two {\bf bounded quantifiers}:
\begin{eqnarray*}
@@ -867,14 +857,13 @@
Figures~\ref{hol-set1} and~\ref{hol-set2} present derived rules. Most are
-obvious and resemble rules of Isabelle's {\ZF} set theory. Certain rules,
-such as \tdx{subsetCE}, \tdx{bexCI} and \tdx{UnCI},
-are designed for classical reasoning; the rules \tdx{subsetD},
-\tdx{bexI}, \tdx{Un1} and~\tdx{Un2} are not
-strictly necessary but yield more natural proofs. Similarly,
-\tdx{equalityCE} supports classical reasoning about extensionality,
-after the fashion of \tdx{iffCE}. See the file \texttt{HOL/Set.ML} for
-proofs pertaining to set theory.
+obvious and resemble rules of Isabelle's ZF set theory. Certain rules, such
+as \tdx{subsetCE}, \tdx{bexCI} and \tdx{UnCI}, are designed for classical
+reasoning; the rules \tdx{subsetD}, \tdx{bexI}, \tdx{Un1} and~\tdx{Un2} are
+not strictly necessary but yield more natural proofs. Similarly,
+\tdx{equalityCE} supports classical reasoning about extensionality, after the
+fashion of \tdx{iffCE}. See the file \texttt{HOL/Set.ML} for proofs
+pertaining to set theory.
Figure~\ref{hol-subset} presents lattice properties of the subset relation.
Unions form least upper bounds; non-empty intersections form greatest lower
@@ -927,7 +916,7 @@
\section{Generic packages}
\label{sec:HOL:generic-packages}
-\HOL\ instantiates most of Isabelle's generic packages, making available the
+HOL instantiates most of Isabelle's generic packages, making available the
simplifier and the classical reasoner.
\subsection{Simplification and substitution}
@@ -972,17 +961,17 @@
additional (first) subgoal.
\end{ttdescription}
- \HOL{} provides the tactic \ttindex{hyp_subst_tac}, which substitutes
- for an equality throughout a subgoal and its hypotheses. This tactic uses
- \HOL's general substitution rule.
+HOL provides the tactic \ttindex{hyp_subst_tac}, which substitutes for an
+equality throughout a subgoal and its hypotheses. This tactic uses HOL's
+general substitution rule.
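+For example (a small illustrative script, not from the HOL sources), an
+equality hypothesis about a bound variable is used up by substitution:
+\begin{ttbox}
+Goal "!!x. [| x = f y; P (f y) |] ==> P x";
+by (hyp_subst_tac 1);
+by (assume_tac 1);
+\end{ttbox}
+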
\subsubsection{Case splitting}
\label{subsec:HOL:case:splitting}
-\HOL{} also provides convenient means for case splitting during
-rewriting. Goals containing a subterm of the form \texttt{if}~$b$~{\tt
-then\dots else\dots} often require a case distinction on $b$. This is
-expressed by the theorem \tdx{split_if}:
+HOL also provides convenient means for case splitting during rewriting. Goals
+containing a subterm of the form \texttt{if}~$b$~{\tt then\dots else\dots}
+often require a case distinction on $b$. This is expressed by the theorem
+\tdx{split_if}:
$$
\Var{P}(\mbox{\tt if}~\Var{b}~{\tt then}~\Var{x}~\mbox{\tt else}~\Var{y})~=~
((\Var{b} \to \Var{P}(\Var{x})) \land (\lnot \Var{b} \to \Var{P}(\Var{y})))
@@ -1035,9 +1024,9 @@
\subsection{Classical reasoning}
-\HOL\ derives classical introduction rules for $\disj$ and~$\exists$, as
-well as classical elimination rules for~$\imp$ and~$\bimp$, and the swap
-rule; recall Fig.\ts\ref{hol-lemmas2} above.
+HOL derives classical introduction rules for $\disj$ and~$\exists$, as well as
+classical elimination rules for~$\imp$ and~$\bimp$, and the swap rule; recall
+Fig.\ts\ref{hol-lemmas2} above.
The classical reasoner is installed. Tactics such as \texttt{Blast_tac} and
{\tt Best_tac} refer to the default claset (\texttt{claset()}), which works
@@ -1143,11 +1132,10 @@
\section{Types}\label{sec:HOL:Types}
-This section describes \HOL's basic predefined types ($\alpha \times
-\beta$, $\alpha + \beta$, $nat$ and $\alpha \; list$) and ways for
-introducing new types in general. The most important type
-construction, the \texttt{datatype}, is treated separately in
-{\S}\ref{sec:HOL:datatype}.
+This section describes HOL's basic predefined types ($\alpha \times \beta$,
+$\alpha + \beta$, $nat$ and $\alpha \; list$) and ways for introducing new
+types in general. The most important type construction, the
+\texttt{datatype}, is treated separately in {\S}\ref{sec:HOL:datatype}.
\subsection{Product and sum types}\index{*"* type}\index{*"+ type}
@@ -1407,7 +1395,7 @@
\subsection{Numerical types and numerical reasoning}
-The integers (type \tdx{int}) are also available in \HOL, and the reals (type
+The integers (type \tdx{int}) are also available in HOL, and the reals (type
\tdx{real}) are available in the logic image \texttt{HOL-Real}. They support
the expected operations of addition (\texttt{+}), subtraction (\texttt{-}) and
multiplication (\texttt{*}), and much else. Type \tdx{int} provides the
@@ -1608,14 +1596,14 @@
\subsection{Introducing new types} \label{sec:typedef}
-The \HOL-methodology dictates that all extensions to a theory should
-be \textbf{definitional}. The type definition mechanism that
-meets this criterion is \ttindex{typedef}. Note that \emph{type synonyms},
-which are inherited from {\Pure} and described elsewhere, are just
-syntactic abbreviations that have no logical meaning.
+The HOL methodology dictates that all extensions to a theory should be
+\textbf{definitional}. The type definition mechanism that meets this
+criterion is \ttindex{typedef}. Note that \emph{type synonyms}, which are
+inherited from Pure and described elsewhere, are just syntactic abbreviations
+that have no logical meaning.
\begin{warn}
- Types in \HOL\ must be non-empty; otherwise the quantifier rules would be
+ Types in HOL must be non-empty; otherwise the quantifier rules would be
unsound, because $\exists x. x=x$ is a theorem \cite[{\S}7]{paulson-COLOG}.
\end{warn}
A \bfindex{type definition} identifies the new type with a subset of
@@ -1679,8 +1667,8 @@
stating that $(\alpha@1,\dots,\alpha@n)ty$ is isomorphic to $A$ by $Rep_T$
and its inverse $Abs_T$.
\end{itemize}
-Below are two simple examples of \HOL\ type definitions. Non-emptiness
-is proved automatically here.
+Below are two simple examples of HOL type definitions. Non-emptiness is
+proved automatically here.
\begin{ttbox}
typedef unit = "{\ttlbrace}True{\ttrbrace}"
@@ -1709,7 +1697,7 @@
arities \(ty\) :: (term,\thinspace\(\dots\),{\thinspace}term){\thinspace}term
\end{ttbox}
in your theory file to tell Isabelle that $ty$ is in class \texttt{term}, the
-class of all \HOL\ types.
+class of all HOL types.
\end{warn}
@@ -2746,7 +2734,7 @@
\item \textit{function} is the name of the function, either as an \textit{id}
or a \textit{string}.
-\item \textit{rel} is a {\HOL} expression for the well-founded termination
+\item \textit{rel} is a HOL expression for the well-founded termination
relation.
\item \textit{congruence rules} are required only in highly exceptional
@@ -2852,14 +2840,14 @@
proves some theorems. Each definition creates an \ML\ structure, which is a
substructure of the main theory structure.
-This package is related to the \ZF\ one, described in a separate
+This package is related to the ZF one, described in a separate
paper,%
\footnote{It appeared in CADE~\cite{paulson-CADE}; a longer version is
distributed with Isabelle.} %
which you should refer to in case of difficulties. The package is simpler
-than \ZF's thanks to \HOL's extra-logical automatic type-checking. The types
-of the (co)inductive sets determine the domain of the fixedpoint definition,
-and the package does not have to use inference rules for type-checking.
+than ZF's thanks to HOL's extra-logical automatic type-checking. The types of
+the (co)inductive sets determine the domain of the fixedpoint definition, and
+the package does not have to use inference rules for type-checking.
\subsection{The result structure}
@@ -2990,10 +2978,9 @@
and also individually as \texttt{Fin.emptyI} and \texttt{Fin.consI}. The induction
rule is \texttt{Fin.induct}.
-For another example, here is a theory file defining the accessible
-part of a relation. The paper
-\cite{paulson-CADE} discusses a \ZF\ version of this example in more
-detail.
+For another example, here is a theory file defining the accessible part of a
+relation. The paper \cite{paulson-CADE} discusses a ZF version of this
+example in more detail.
\begin{ttbox}
Acc = WF + Inductive +
@@ -3041,9 +3028,9 @@
$\lambda$-calculus in de~Bruijn notation and Church-Rosser proofs for $\beta$
and $\eta$ reduction~\cite{Nipkow-CR}.
-Directory \texttt{HOL/Subst} contains Martin Coen's mechanization of a theory of
-substitutions and unifiers. It is based on Paulson's previous
-mechanisation in {\LCF}~\cite{paulson85} of Manna and Waldinger's
+Directory \texttt{HOL/Subst} contains Martin Coen's mechanization of a theory
+of substitutions and unifiers. It is based on Paulson's previous
+mechanization in LCF~\cite{paulson85} of Manna and Waldinger's
theory~\cite{mw81}. It demonstrates a complicated use of \texttt{recdef},
with nested recursion.
@@ -3053,8 +3040,8 @@
\item Theory \texttt{PropLog} proves the soundness and completeness of
classical propositional logic, given a truth table semantics. The only
connective is $\imp$. A Hilbert-style axiom system is specified, and its
- set of theorems defined inductively. A similar proof in \ZF{} is
- described elsewhere~\cite{paulson-set-II}.
+ set of theorems defined inductively. A similar proof in ZF is described
+ elsewhere~\cite{paulson-set-II}.
\item Theory \texttt{Term} defines the datatype \texttt{term}.
@@ -3083,7 +3070,7 @@
\end{itemize}
Directory \texttt{HOL/ex} contains other examples and experimental proofs in
-{\HOL}.
+HOL.
\begin{itemize}
\item Theory \texttt{Recdef} presents many examples of using \texttt{recdef}
to define recursive functions. Another example is \texttt{Fib}, which
@@ -3135,7 +3122,7 @@
of~$\alpha$. This version states that for every function from $\alpha$ to
its powerset, some subset is outside its range.
-The Isabelle proof uses \HOL's set theory, with the type $\alpha\,set$ and
+The Isabelle proof uses HOL's set theory, with the type $\alpha\,set$ and
the operator \cdx{range}.
\begin{ttbox}
context Set.thy;
@@ -3205,12 +3192,11 @@
{\out No subgoals!}
\end{ttbox}
How much creativity is required? As it happens, Isabelle can prove this
-theorem automatically. The default classical set \texttt{claset()} contains rules
-for most of the constructs of \HOL's set theory. We must augment it with
-\tdx{equalityCE} to break up set equalities, and then apply best-first
-search. Depth-first search would diverge, but best-first search
-successfully navigates through the large search space.
-\index{search!best-first}
+theorem automatically. The default classical set \texttt{claset()} contains
+rules for most of the constructs of HOL's set theory. We must augment it with
+\tdx{equalityCE} to break up set equalities, and then apply best-first search.
+Depth-first search would diverge, but best-first search successfully navigates
+through the large search space. \index{search!best-first}
\begin{ttbox}
choplev 0;
{\out Level 0}
--- a/doc-src/HOL/Makefile Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/HOL/Makefile Mon Aug 28 13:52:38 2000 +0200
@@ -13,7 +13,7 @@
NAME = logics-HOL
FILES = logics-HOL.tex ../Logics/syntax.tex HOL.tex \
- ../rail.sty ../proof.sty ../iman.sty ../extra.sty ../manual.bib
+ ../rail.sty ../proof.sty ../iman.sty ../extra.sty ../ttbox.sty ../manual.bib
dvi: $(NAME).dvi
--- a/doc-src/HOL/logics-HOL.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/HOL/logics-HOL.tex Mon Aug 28 13:52:38 2000 +0200
@@ -1,6 +1,6 @@
%% $Id$
\documentclass[12pt,a4paper]{report}
-\usepackage{graphicx,../iman,../extra,../proof,../rail,latexsym,../pdfsetup}
+\usepackage{graphicx,../iman,../extra,../ttbox,../proof,../rail,latexsym,../pdfsetup}
%%% to index derived rls: ^\([a-zA-Z0-9][a-zA-Z0-9_]*\) \\tdx{\1}
%%% to index rulenames: ^ *(\([a-zA-Z0-9][a-zA-Z0-9_]*\), \\tdx{\1}
--- a/doc-src/Inductive/Makefile Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Inductive/Makefile Mon Aug 28 13:52:38 2000 +0200
@@ -12,7 +12,7 @@
include ../Makefile.in
NAME = ind-defs
-FILES = ind-defs.tex ../proof.sty ../iman.sty ../extra.sty ../manual.bib
+FILES = ind-defs.tex ../proof.sty ../iman.sty ../extra.sty ../ttbox.sty ../manual.bib
dvi: $(NAME).dvi
--- a/doc-src/Inductive/ind-defs.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Inductive/ind-defs.tex Mon Aug 28 13:52:38 2000 +0200
@@ -1,6 +1,6 @@
%% $Id$
\documentclass[12pt,a4paper]{article}
-\usepackage{latexsym,../iman,../extra,../proof,../pdfsetup}
+\usepackage{latexsym,../iman,../extra,../ttbox,../proof,../pdfsetup}
\newif\ifshort%''Short'' means a published version, not the documentation
\shortfalse%%%%%\shorttrue
--- a/doc-src/Intro/Makefile Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Intro/Makefile Mon Aug 28 13:52:38 2000 +0200
@@ -13,7 +13,7 @@
NAME = intro
FILES = intro.tex foundations.tex getting.tex advanced.tex \
- ../proof.sty ../iman.sty ../extra.sty ../manual.bib
+ ../proof.sty ../iman.sty ../extra.sty ../ttbox.sty ../manual.bib
dvi: $(NAME).dvi
--- a/doc-src/Intro/advanced.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Intro/advanced.tex Mon Aug 28 13:52:38 2000 +0200
@@ -293,7 +293,7 @@
\iflabelundefined{app:TheorySyntax}{an appendix of the {\it Reference
Manual}}{App.\ts\ref{app:TheorySyntax}}. Also note that
object-logics may add further theory sections, for example
-\texttt{typedef}, \texttt{datatype} in \HOL.
+\texttt{typedef}, \texttt{datatype} in HOL.
All the declaration parts can be omitted or repeated and may appear in
any order, except that the {\ML} section must be last (after the {\tt
@@ -302,7 +302,7 @@
theories, inheriting their types, constants, syntax, etc. The theory
\thydx{Pure} contains nothing but Isabelle's meta-logic. The variant
\thydx{CPure} offers the more usual higher-order function application
-syntax $t\,u@1\ldots\,u@n$ instead of $t(u@1,\ldots,u@n)$ in \Pure.
+syntax $t\,u@1\ldots\,u@n$ instead of $t(u@1,\ldots,u@n)$ in Pure.
Each theory definition must reside in a separate file, whose name is
the theory's with {\tt.thy} appended. Calling
@@ -964,12 +964,11 @@
\index{simplification}\index{examples!of simplification}
-Isabelle's simplification tactics repeatedly apply equations to a
-subgoal, perhaps proving it. For efficiency, the rewrite rules must
-be packaged into a {\bf simplification set},\index{simplification
- sets} or {\bf simpset}. We augment the implicit simpset of {\FOL}
-with the equations proved in the previous section, namely $0+n=n$ and
-$\texttt{Suc}(m)+n=\texttt{Suc}(m+n)$:
+Isabelle's simplification tactics repeatedly apply equations to a subgoal,
+perhaps proving it. For efficiency, the rewrite rules must be packaged into a
+{\bf simplification set},\index{simplification sets} or {\bf simpset}. We
+augment the implicit simpset of FOL with the equations proved in the previous
+section, namely $0+n=n$ and $\texttt{Suc}(m)+n=\texttt{Suc}(m+n)$:
\begin{ttbox}
Addsimps [add_0, add_Suc];
\end{ttbox}
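+The augmented simpset now proves such goals by rewriting alone; for instance
+(a small example in the spirit of this section, assuming the \texttt{Nat}
+theory developed above):
+\begin{ttbox}
+Goal "Suc(Suc(0)) + n = Suc(Suc(n))";
+by (Simp_tac 1);
+\end{ttbox}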
--- a/doc-src/Intro/foundations.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Intro/foundations.tex Mon Aug 28 13:52:38 2000 +0200
@@ -441,20 +441,18 @@
Under similar conditions, a signature can be extended. Signatures are
managed internally by Isabelle; users seldom encounter them.
-\index{theories|bold} A {\bf theory} consists of a signature plus a
-collection of axioms. The {\Pure} theory contains only the
-meta-logic. Theories can be combined provided their signatures are
-compatible. A theory definition extends an existing theory with
-further signature specifications --- classes, types, constants and
-mixfix declarations --- plus lists of axioms and definitions etc.,
-expressed as strings to be parsed. A theory can formalize a small
-piece of mathematics, such as lists and their operations, or an entire
-logic. A mathematical development typically involves many theories in
-a hierarchy. For example, the {\Pure} theory could be extended to
-form a theory for Fig.\ts\ref{fol-fig}; this could be extended in two
-separate ways to form a theory for natural numbers and a theory for
-lists; the union of these two could be extended into a theory defining
-the length of a list:
+\index{theories|bold} A {\bf theory} consists of a signature plus a collection
+of axioms. The Pure theory contains only the meta-logic. Theories can be
+combined provided their signatures are compatible. A theory definition
+extends an existing theory with further signature specifications --- classes,
+types, constants and mixfix declarations --- plus lists of axioms and
+definitions etc., expressed as strings to be parsed. A theory can formalize a
+small piece of mathematics, such as lists and their operations, or an entire
+logic. A mathematical development typically involves many theories in a
+hierarchy. For example, the Pure theory could be extended to form a theory
+for Fig.\ts\ref{fol-fig}; this could be extended in two separate ways to form
+a theory for natural numbers and a theory for lists; the union of these two
+could be extended into a theory defining the length of a list:
\begin{tt}
\[
\begin{array}{c@{}c@{}c@{}c@{}c}
--- a/doc-src/Intro/getting.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Intro/getting.tex Mon Aug 28 13:52:38 2000 +0200
@@ -26,7 +26,7 @@
isabelle FOL
\end{ttbox}
Note that just typing \texttt{isabelle} usually brings up higher-order logic
-(\HOL) by default.
+(HOL) by default.
\subsection{Lexical matters}
@@ -110,8 +110,8 @@
t\; u@1 \ldots\; u@n & \hbox{application to several arguments (HOL)}
\end{array}
\]
-Note that \HOL{} uses its traditional ``higher-order'' syntax for application,
-which differs from that used in \FOL\@.
+Note that HOL uses its traditional ``higher-order'' syntax for application,
+which differs from that used in FOL.
The theorems and rules of an object-logic are represented by theorems in
the meta-logic, which are expressed using meta-formulae. Since the
@@ -228,10 +228,10 @@
to have subscript zero, improving readability and reducing subscript
growth.
\end{ttdescription}
-The rules of a theory are normally bound to \ML\ identifiers. Suppose we
-are running an Isabelle session containing theory~\FOL, natural deduction
-first-order logic.\footnote{For a listing of the \FOL{} rules and their
- \ML{} names, turn to
+The rules of a theory are normally bound to \ML\ identifiers. Suppose we are
+running an Isabelle session containing theory~FOL, natural deduction
+first-order logic.\footnote{For a listing of the FOL rules and their \ML{}
+ names, turn to
\iflabelundefined{fol-rules}{{\em Isabelle's Object-Logics}}%
{page~\pageref{fol-rules}}.}
Let us try an example given in~\S\ref{joining}. We
--- a/doc-src/Intro/intro.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Intro/intro.tex Mon Aug 28 13:52:38 2000 +0200
@@ -1,5 +1,5 @@
\documentclass[12pt,a4paper]{article}
-\usepackage{graphicx,../iman,../extra,../proof,../pdfsetup}
+\usepackage{graphicx,../iman,../extra,../ttbox,../proof,../pdfsetup}
%% $Id$
%% run bibtex intro to prepare bibliography
--- a/doc-src/IsarRef/Makefile Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/IsarRef/Makefile Mon Aug 28 13:52:38 2000 +0200
@@ -15,7 +15,8 @@
FILES = isar-ref.tex intro.tex basics.tex syntax.tex pure.tex \
generic.tex hol.tex refcard.tex conversion.tex ../isar.sty \
- ../rail.sty ../railsetup.sty ../proof.sty ../iman.sty ../extra.sty ../manual.bib
+ ../rail.sty ../railsetup.sty ../proof.sty ../iman.sty \
+ ../extra.sty ../ttbox.sty ../manual.bib
dvi: $(NAME).dvi
--- a/doc-src/IsarRef/hol.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/IsarRef/hol.tex Mon Aug 28 13:52:38 2000 +0200
@@ -231,11 +231,11 @@
The rule is determined as follows, according to the facts and arguments
passed to the $cases$ method:
\begin{matharray}{llll}
- \text{facts} & & \text{arguments} & \text{rule} \\\hline
- & cases & & \text{classical case split} \\
- & cases & t & \text{datatype exhaustion (type of $t$)} \\
- \edrv a \in A & cases & \dots & \text{inductive set elimination (of $A$)} \\
- \dots & cases & \dots ~ R & \text{explicit rule $R$} \\
+ \Text{facts} & & \Text{arguments} & \Text{rule} \\\hline
+ & cases & & \Text{classical case split} \\
+ & cases & t & \Text{datatype exhaustion (type of $t$)} \\
+ \edrv a \in A & cases & \dots & \Text{inductive set elimination (of $A$)} \\
+ \dots & cases & \dots ~ R & \Text{explicit rule $R$} \\
\end{matharray}
Several instantiations may be given, referring to the \emph{suffix} of
@@ -257,10 +257,10 @@
\item [$induct~insts~R$] is analogous to the $cases$ method, but refers to
induction rules, which are determined as follows:
\begin{matharray}{llll}
- \text{facts} & & \text{arguments} & \text{rule} \\\hline
- & induct & P ~ x ~ \dots & \text{datatype induction (type of $x$)} \\
- \edrv x \in A & induct & \dots & \text{set induction (of $A$)} \\
- \dots & induct & \dots ~ R & \text{explicit rule $R$} \\
+ \Text{facts} & & \Text{arguments} & \Text{rule} \\\hline
+ & induct & P ~ x ~ \dots & \Text{datatype induction (type of $x$)} \\
+ \edrv x \in A & induct & \dots & \Text{set induction (of $A$)} \\
+ \dots & induct & \dots ~ R & \Text{explicit rule $R$} \\
\end{matharray}
Several instantiations may be given, each referring to some part of a mutual
--- a/doc-src/IsarRef/isar-ref.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/IsarRef/isar-ref.tex Mon Aug 28 13:52:38 2000 +0200
@@ -9,7 +9,7 @@
\documentclass[12pt,a4paper,fleqn]{report}
-\usepackage{latexsym,graphicx,../iman,../extra,../proof,../rail,../railsetup,../isar,../pdfsetup}
+\usepackage{latexsym,graphicx,../iman,../extra,../ttbox,../proof,../rail,../railsetup,../isar,../pdfsetup}
\title{\includegraphics[scale=0.5]{isabelle_isar} \\[4ex] The Isabelle/Isar Reference Manual}
\author{\emph{Markus Wenzel} \\ TU M\"unchen}
--- a/doc-src/IsarRef/pure.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/IsarRef/pure.tex Mon Aug 28 13:52:38 2000 +0200
@@ -687,7 +687,7 @@
Basic proof methods (such as $rule$, see \S\ref{sec:pure-meth-att}) expect
multiple facts to be given in their proper order, corresponding to a prefix of
the premises of the rule involved. Note that positions may be easily skipped
-using something like $\FROM{\text{\texttt{_}}~a~b}$, for example. This
+using something like $\FROM{\Text{\texttt{_}}~a~b}$, for example. This
involves the trivial rule $\PROP\psi \Imp \PROP\psi$, which happens to be
bound in Isabelle/Pure as ``\texttt{_}''
(underscore).\indexisarthm{_@\texttt{_}}
--- a/doc-src/IsarRef/refcard.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/IsarRef/refcard.tex Mon Aug 28 13:52:38 2000 +0200
@@ -15,7 +15,7 @@
$\BG~\dots~\EN$ & declare explicit blocks \\
$\NEXT$ & switch implicit blocks \\
$\NOTE{a}{\vec b}$ & reconsider facts \\
- $\LET{p = t}$ & \text{abbreviate terms by higher-order matching} \\
+ $\LET{p = t}$ & \Text{abbreviate terms by higher-order matching} \\
\end{tabular}
\begin{matharray}{rcl}
@@ -61,13 +61,13 @@
\MOREOVER & \approx & \NOTE{calculation}{calculation~this} \\
\ULTIMATELY & \approx & \MOREOVER~\FROM{calculation} \\[0.5ex]
\PRESUME{a}{\vec\phi} & \approx & \ASSUME{a}{\vec\phi} \\
-% & & \text{(permissive assumption)} \\
+% & & \Text{(permissive assumption)} \\
\DEF{a}{x \equiv t} & \approx & \FIX{x}~\ASSUME{a}{x \equiv t} \\
-% & & \text{(definitional assumption)} \\
+% & & \Text{(definitional assumption)} \\
\OBTAIN{\vec x}{a}{\vec\phi} & \approx & \dots~\FIX{\vec x}~\ASSUME{a}{\vec\phi} \\
-% & & \text{(generalized existence)} \\
+% & & \Text{(generalized existence)} \\
\CASE{c} & \approx & \FIX{\vec x}~\ASSUME{c}{\vec\phi} \\
-% & & \text{(named context)} \\[0.5ex]
+% & & \Text{(named context)} \\[0.5ex]
\SORRY & \approx & \BY{cheating} \\
\end{matharray}
@@ -75,11 +75,11 @@
\subsection{Diagnostic commands}
\begin{matharray}{ll}
- \isarkeyword{pr} & \text{print current state} \\
- \isarkeyword{thm}~\vec a & \text{print theorems} \\
- \isarkeyword{term}~t & \text{print term} \\
- \isarkeyword{prop}~\phi & \text{print meta-level proposition} \\
- \isarkeyword{typ}~\tau & \text{print meta-level type} \\
+ \isarkeyword{pr} & \Text{print current state} \\
+ \isarkeyword{thm}~\vec a & \Text{print theorems} \\
+ \isarkeyword{term}~t & \Text{print term} \\
+ \isarkeyword{prop}~\phi & \Text{print meta-level proposition} \\
+ \isarkeyword{typ}~\tau & \Text{print meta-level type} \\
\end{matharray}
@@ -96,11 +96,11 @@
$induct~\vec x$ & proof by induction (provides cases) \\[2ex]
\multicolumn{2}{l}{\textbf{Repeated steps (inserting facts)}} \\[0.5ex]
- $-$ & \text{no rules} \\
- $intro~\vec a$ & \text{introduction rules} \\
- $intro_classes$ & \text{class introduction rules} \\
- $elim~\vec a$ & \text{elimination rules} \\
- $unfold~\vec a$ & \text{definitions} \\[2ex]
+ $-$ & \Text{no rules} \\
+ $intro~\vec a$ & \Text{introduction rules} \\
+ $intro_classes$ & \Text{class introduction rules} \\
+ $elim~\vec a$ & \Text{elimination rules} \\
+ $unfold~\vec a$ & \Text{definitions} \\[2ex]
\multicolumn{2}{l}{\textbf{Automated proof tools (inserting facts, or even prems!)}} \\[0.5ex]
$simp$, $simp_all$ & Simplifier (+ Splitter) \\
--- a/doc-src/Logics/CTT.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Logics/CTT.tex Mon Aug 28 13:52:38 2000 +0200
@@ -40,16 +40,16 @@
Assumptions can use all the judgement forms, for instance to express that
$B$ is a family of types over~$A$:
\[ \Forall x . x\in A \Imp B(x)\;{\rm type} \]
-To justify the {\CTT} formulation it is probably best to appeal directly
-to the semantic explanations of the rules~\cite{martinlof84}, rather than
-to the rules themselves. The order of assumptions no longer matters,
-unlike in standard Type Theory. Contexts, which are typical of many modern
-type theories, are difficult to represent in Isabelle. In particular, it
-is difficult to enforce that all the variables in a context are distinct.
-\index{assumptions!in {\CTT}}
+To justify the CTT formulation it is probably best to appeal directly to the
+semantic explanations of the rules~\cite{martinlof84}, rather than to the
+rules themselves. The order of assumptions no longer matters, unlike in
+standard Type Theory. Contexts, which are typical of many modern type
+theories, are difficult to represent in Isabelle. In particular, it is
+difficult to enforce that all the variables in a context are distinct.
+\index{assumptions!in CTT}
-The theory does not use polymorphism. Terms in {\CTT} have type~\tydx{i}, the
-type of individuals. Types in {\CTT} have type~\tydx{t}.
+The theory does not use polymorphism. Terms in CTT have type~\tydx{i}, the
+type of individuals. Types in CTT have type~\tydx{t}.
\begin{figure} \tabcolsep=1em %wider spacing in tables
\begin{center}
@@ -81,29 +81,29 @@
\cdx{tt} & $i$ & constructor
\end{tabular}
\end{center}
-\caption{The constants of {\CTT}} \label{ctt-constants}
+\caption{The constants of CTT} \label{ctt-constants}
\end{figure}
-{\CTT} supports all of Type Theory apart from list types, well-ordering
-types, and universes. Universes could be introduced {\em\`a la Tarski},
-adding new constants as names for types. The formulation {\em\`a la
- Russell}, where types denote themselves, is only possible if we identify
-the meta-types~{\tt i} and~{\tt t}. Most published formulations of
-well-ordering types have difficulties involving extensionality of
-functions; I suggest that you use some other method for defining recursive
-types. List types are easy to introduce by declaring new rules.
+CTT supports all of Type Theory apart from list types, well-ordering types,
+and universes. Universes could be introduced {\em\`a la Tarski}, adding new
+constants as names for types. The formulation {\em\`a la Russell}, where
+types denote themselves, is only possible if we identify the meta-types~{\tt
+ i} and~{\tt t}. Most published formulations of well-ordering types have
+difficulties involving extensionality of functions; I suggest that you use
+some other method for defining recursive types. List types are easy to
+introduce by declaring new rules.
-{\CTT} uses the 1982 version of Type Theory, with extensional equality.
-The computation $a=b\in A$ and the equality $c\in Eq(A,a,b)$ are
-interchangeable. Its rewriting tactics prove theorems of the form $a=b\in
-A$. It could be modified to have intensional equality, but rewriting
-tactics would have to prove theorems of the form $c\in Eq(A,a,b)$ and the
-computation rules might require a separate simplifier.
+CTT uses the 1982 version of Type Theory, with extensional equality. The
+computation $a=b\in A$ and the equality $c\in Eq(A,a,b)$ are interchangeable.
+Its rewriting tactics prove theorems of the form $a=b\in A$. It could be
+modified to have intensional equality, but rewriting tactics would have to
+prove theorems of the form $c\in Eq(A,a,b)$ and the computation rules might
+require a separate simplifier.
\begin{figure} \tabcolsep=1em %wider spacing in tables
-\index{lambda abs@$\lambda$-abstractions!in \CTT}
+\index{lambda abs@$\lambda$-abstractions!in CTT}
\begin{center}
\begin{tabular}{llrrr}
\it symbol &\it name &\it meta-type & \it priority & \it description \\
@@ -113,7 +113,7 @@
\subcaption{Binders}
\begin{center}
-\index{*"` symbol}\index{function applications!in \CTT}
+\index{*"` symbol}\index{function applications!in CTT}
\index{*"+ symbol}
\begin{tabular}{rrrr}
\it symbol & \it meta-type & \it priority & \it description \\
@@ -160,7 +160,7 @@
\]
\end{center}
\subcaption{Grammar}
-\caption{Syntax of {\CTT}} \label{ctt-syntax}
+\caption{Syntax of CTT} \label{ctt-syntax}
\end{figure}
%%%%\section{Generic Packages} typedsimp.ML????????????????
@@ -168,16 +168,16 @@
\section{Syntax}
The constants are shown in Fig.\ts\ref{ctt-constants}. The infixes include
-the function application operator (sometimes called `apply'), and the
-2-place type operators. Note that meta-level abstraction and application,
-$\lambda x.b$ and $f(a)$, differ from object-level abstraction and
-application, \hbox{\tt lam $x$. $b$} and $b{\tt`}a$. A {\CTT}
-function~$f$ is simply an individual as far as Isabelle is concerned: its
-Isabelle type is~$i$, not say $i\To i$.
+the function application operator (sometimes called `apply'), and the 2-place
+type operators. Note that meta-level abstraction and application, $\lambda
+x.b$ and $f(a)$, differ from object-level abstraction and application,
+\hbox{\tt lam $x$. $b$} and $b{\tt`}a$. A CTT function~$f$ is simply an
+individual as far as Isabelle is concerned: its Isabelle type is~$i$, not,
+say, $i\To i$.
-The notation for~{\CTT} (Fig.\ts\ref{ctt-syntax}) is based on that of
-Nordstr\"om et al.~\cite{nordstrom90}. The empty type is called $F$ and
-the one-element type is $T$; other finite types are built as $T+T+T$, etc.
+The notation for~CTT (Fig.\ts\ref{ctt-syntax}) is based on that of Nordstr\"om
+et al.~\cite{nordstrom90}. The empty type is called $F$ and the one-element
+type is $T$; other finite types are built as $T+T+T$, etc.
\index{*SUM symbol}\index{*PROD symbol}
Quantification is expressed by sums $\sum@{x\in A}B[x]$ and
@@ -387,7 +387,7 @@
|] ==> snd(p) : B(fst(p))
\end{ttbox}
-\caption{Derived rules for {\CTT}} \label{ctt-derived}
+\caption{Derived rules for CTT} \label{ctt-derived}
\end{figure}
@@ -470,7 +470,7 @@
\section{Rule lists}
The Type Theory tactics provide rewriting, type inference, and logical
reasoning. Many proof procedures work by repeatedly resolving certain Type
-Theory rules against a proof state. {\CTT} defines lists --- each with
+Theory rules against a proof state. CTT defines lists --- each with
type
\hbox{\tt thm list} --- of related rules.
\begin{ttdescription}
@@ -514,14 +514,14 @@
equal_tac : thm list -> tactic
intr_tac : thm list -> tactic
\end{ttbox}
-Blind application of {\CTT} rules seldom leads to a proof. The elimination
+Blind application of CTT rules seldom leads to a proof. The elimination
rules, especially, create subgoals containing new unknowns. These subgoals
unify with anything, creating a huge search space. The standard tactic
-\ttindex{filt_resolve_tac}
+\ttindex{filt_resolve_tac}
(see \iflabelundefined{filt_resolve_tac}{the {\em Reference Manual\/}}%
{\S\ref{filt_resolve_tac}})
%
-fails for goals that are too flexible; so does the {\CTT} tactic {\tt
+fails for goals that are too flexible; so does the CTT tactic {\tt
test_assume_tac}. Used with the tactical \ttindex{REPEAT_FIRST} they
achieve a simple kind of subgoal reordering: the less flexible subgoals are
attempted first. Do some single step proofs, or study the examples below,
@@ -563,8 +563,8 @@
CTT} equality rules and the built-in rewriting functor
{\tt TSimpFun}.%
\footnote{This should not be confused with Isabelle's main simplifier; {\tt
- TSimpFun} is only useful for {\CTT} and similar logics with type
- inference rules. At present it is undocumented.}
+ TSimpFun} is only useful for CTT and similar logics with type inference
+ rules. At present it is undocumented.}
%
The rewrites include the computation rules and other equations. The long
versions of the other rules permit rewriting of subterms and subtypes.
@@ -583,11 +583,11 @@
\section{Tactics for logical reasoning}
-Interpreting propositions as types lets {\CTT} express statements of
-intuitionistic logic. However, Constructive Type Theory is not just
-another syntax for first-order logic. There are fundamental differences.
+Interpreting propositions as types lets CTT express statements of
+intuitionistic logic. However, Constructive Type Theory is not just another
+syntax for first-order logic. There are fundamental differences.
-\index{assumptions!in {\CTT}}
+\index{assumptions!in CTT}
Can assumptions be deleted after use? Not every occurrence of a type
represents a proposition, and Type Theory assumptions declare variables.
In first-order logic, $\disj$-elimination with the assumption $P\disj Q$
@@ -598,7 +598,7 @@
refer to $z$ may render the subgoal unprovable: arguably,
meaningless.
-Isabelle provides several tactics for predicate calculus reasoning in \CTT:
+Isabelle provides several tactics for predicate calculus reasoning in CTT:
\begin{ttbox}
mp_tac : int -> tactic
add_mp_tac : int -> tactic
@@ -728,7 +728,7 @@
\section{The examples directory}
-This directory contains examples and experimental proofs in {\CTT}.
+This directory contains examples and experimental proofs in CTT.
\begin{ttdescription}
\item[CTT/ex/typechk.ML]
contains simple examples of type-checking and type deduction.
--- a/doc-src/Logics/LK.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Logics/LK.tex Mon Aug 28 13:52:38 2000 +0200
@@ -18,8 +18,8 @@
are polymorphic (many-sorted). The type of formulae is~\tydx{o}, which
belongs to class {\tt logic}.
-\LK{} implements a classical logic theorem prover that is nearly as powerful
-as the generic classical reasoner. The simplifier is now available too.
+LK implements a classical logic theorem prover that is nearly as powerful as
+the generic classical reasoner. The simplifier is now available too.
To work in LK, start up Isabelle specifying \texttt{Sequents} as the
object-logic. Once in Isabelle, change the context to theory \texttt{LK.thy}:
@@ -179,17 +179,17 @@
of formulae.
The \textbf{definite description} operator~$\iota x. P[x]$ stands for some~$a$
-satisfying~$P[a]$, if one exists and is unique. Since all terms in \LK{}
-denote something, a description is always meaningful, but we do not know
-its value unless $P[x]$ defines it uniquely. The Isabelle notation is
-\hbox{\tt THE $x$.\ $P[x]$}. The corresponding rule (Fig.\ts\ref{lk-rules})
-does not entail the Axiom of Choice because it requires uniqueness.
+satisfying~$P[a]$, if one exists and is unique. Since all terms in LK denote
+something, a description is always meaningful, but we do not know its value
+unless $P[x]$ defines it uniquely. The Isabelle notation is \hbox{\tt THE
+ $x$.\ $P[x]$}. The corresponding rule (Fig.\ts\ref{lk-rules}) does not
+entail the Axiom of Choice because it requires uniqueness.
Conditional expressions are available with the notation
\[ \dquotes
"if"~formula~"then"~term~"else"~term. \]
-Figure~\ref{lk-grammar} presents the grammar of \LK. Traditionally,
+Figure~\ref{lk-grammar} presents the grammar of LK. Traditionally,
\(\Gamma\) and \(\Delta\) are meta-variables for sequences. In Isabelle's
notation, the prefix~\verb|$| on a term makes it range over sequences.
In a sequent, anything not prefixed by \verb|$| is taken as a formula.
@@ -272,13 +272,13 @@
\section{Automatic Proof}
-\LK{} instantiates Isabelle's simplifier. Both equality ($=$) and the
+LK instantiates Isabelle's simplifier. Both equality ($=$) and the
biconditional ($\bimp$) may be used for rewriting. The tactic
\texttt{Simp_tac} refers to the default simpset (\texttt{simpset()}). With
sequents, the \texttt{full_} and \texttt{asm_} forms of the simplifier are not
-required; all the formulae{} in the sequent will be simplified. The
-left-hand formulae{} are taken as rewrite rules. (Thus, the behaviour is what
-you would normally expect from calling \texttt{Asm_full_simp_tac}.)
+required; all the formulae{} in the sequent will be simplified. The left-hand
+formulae{} are taken as rewrite rules. (Thus, the behaviour is what you would
+normally expect from calling \texttt{Asm_full_simp_tac}.)
For classical reasoning, several tactics are available:
\begin{ttbox}
@@ -325,15 +325,13 @@
These tactics refine a subgoal into two by applying the cut rule. The cut
formula is given as a string, and replaces some other formula in the sequent.
\begin{ttdescription}
-\item[\ttindexbold{cutR_tac} {\it P\/} {\it i}]
-reads an \LK{} formula~$P$, and applies the cut rule to subgoal~$i$. It
-then deletes some formula from the right side of subgoal~$i$, replacing
-that formula by~$P$.
-
-\item[\ttindexbold{cutL_tac} {\it P\/} {\it i}]
-reads an \LK{} formula~$P$, and applies the cut rule to subgoal~$i$. It
-then deletes some formula from the left side of the new subgoal $i+1$,
-replacing that formula by~$P$.
+\item[\ttindexbold{cutR_tac} {\it P\/} {\it i}] reads an LK formula~$P$, and
+ applies the cut rule to subgoal~$i$. It then deletes some formula from the
+ right side of subgoal~$i$, replacing that formula by~$P$.
+
+\item[\ttindexbold{cutL_tac} {\it P\/} {\it i}] reads an LK formula~$P$, and
+ applies the cut rule to subgoal~$i$. It then deletes some formula from the
+ left side of the new subgoal $i+1$, replacing that formula by~$P$.
\end{ttdescription}
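+For example (an illustrative invocation; the cut formula is arbitrary), the
+following applies the cut rule to subgoal~1 with an instance of the excluded
+middle:
+\begin{ttbox}
+by (cutR_tac "P | ~P" 1);
+\end{ttbox}
+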
All the structural rules --- cut, contraction, and thinning --- can be
applied to particular formulae using {\tt res_inst_tac}.
@@ -378,11 +376,11 @@
\section{A simple example of classical reasoning}
The theorem $\turn\ex{y}\all{x}P(y)\imp P(x)$ is a standard example of the
-classical treatment of the existential quantifier. Classical reasoning
-is easy using~{\LK}, as you can see by comparing this proof with the one
-given in the FOL manual~\cite{isabelle-ZF}. From a logical point of view, the
-proofs are essentially the same; the key step here is to use \tdx{exR}
-rather than the weaker~\tdx{exR_thin}.
+classical treatment of the existential quantifier. Classical reasoning is
+easy using~LK, as you can see by comparing this proof with the one given in
+the FOL manual~\cite{isabelle-ZF}. From a logical point of view, the proofs
+are essentially the same; the key step here is to use \tdx{exR} rather than
+the weaker~\tdx{exR_thin}.
\begin{ttbox}
Goal "|- EX y. ALL x. P(y)-->P(x)";
{\out Level 0}
@@ -405,10 +403,10 @@
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out 1. !!x. P(?x) |- P(x), EX x. ALL xa. P(x) --> P(xa)}
\end{ttbox}
-Because {\LK} is a sequent calculus, the formula~$P(\Var{x})$ does not
-become an assumption; instead, it moves to the left side. The
-resulting subgoal cannot be instantiated to a basic sequent: the bound
-variable~$x$ is not unifiable with the unknown~$\Var{x}$.
+Because LK is a sequent calculus, the formula~$P(\Var{x})$ does not become an
+assumption; instead, it moves to the left side. The resulting subgoal cannot
+be instantiated to a basic sequent: the bound variable~$x$ is not unifiable
+with the unknown~$\Var{x}$.
\begin{ttbox}
by (resolve_tac [basic] 1);
{\out by: tactic failed}
@@ -548,10 +546,10 @@
Multiple unifiers occur whenever this is resolved against a goal containing
more than one conjunction on the left.
-\LK{} exploits this representation of lists. As an alternative, the
-sequent calculus can be formalized using an ordinary representation of
-lists, with a logic program for removing a formula from a list. Amy Felty
-has applied this technique using the language $\lambda$Prolog~\cite{felty91a}.
+LK exploits this representation of lists. As an alternative, the sequent
+calculus can be formalized using an ordinary representation of lists, with a
+logic program for removing a formula from a list. Amy Felty has applied this
+technique using the language $\lambda$Prolog~\cite{felty91a}.
Explicit formalization of sequents can be tiresome. But it gives precise
control over contraction and weakening, and is essential to handle relevant
@@ -575,10 +573,10 @@
rules require a search strategy, such as backtracking.
A \textbf{pack} is a pair whose first component is a list of safe rules and
-whose second is a list of unsafe rules. Packs can be extended in an
-obvious way to allow reasoning with various collections of rules. For
-clarity, \LK{} declares \mltydx{pack} as an \ML{} datatype, although is
-essentially a type synonym:
+whose second is a list of unsafe rules. Packs can be extended in an obvious
+way to allow reasoning with various collections of rules. For clarity, LK
+declares \mltydx{pack} as an \ML{} datatype, although it is essentially a
+type synonym:
\begin{ttbox}
datatype pack = Pack of thm list * thm list;
\end{ttbox}
@@ -628,8 +626,7 @@
\section{*Proof procedures}\label{sec:sequent-provers}
-The \LK{} proof procedure is similar to the classical reasoner
-described in
+The LK proof procedure is similar to the classical reasoner described in
\iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
{Chap.\ts\ref{chap:classical}}.
%
--- a/doc-src/Logics/Makefile Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Logics/Makefile Mon Aug 28 13:52:38 2000 +0200
@@ -13,7 +13,7 @@
NAME = logics
FILES = logics.tex preface.tex syntax.tex LK.tex Sequents.tex CTT.tex \
- ../proof.sty ../iman.sty ../extra.sty ../manual.bib
+ ../proof.sty ../iman.sty ../extra.sty ../ttbox.sty ../manual.bib
dvi: $(NAME).dvi
--- a/doc-src/Logics/Old_HOL.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Logics/Old_HOL.tex Mon Aug 28 13:52:38 2000 +0200
@@ -3,33 +3,32 @@
\index{higher-order logic|(}
\index{HOL system@{\sc hol} system}
-The theory~\thydx{HOL} implements higher-order logic.
-It is based on Gordon's~{\sc hol} system~\cite{mgordon-hol}, which itself is
-based on Church's original paper~\cite{church40}. Andrews's
-book~\cite{andrews86} is a full description of higher-order logic.
-Experience with the {\sc hol} system has demonstrated that higher-order
-logic is useful for hardware verification; beyond this, it is widely
-applicable in many areas of mathematics. It is weaker than {\ZF} set
-theory but for most applications this does not matter. If you prefer {\ML}
-to Lisp, you will probably prefer \HOL\ to~{\ZF}.
+The theory~\thydx{HOL} implements higher-order logic. It is based on
+Gordon's~{\sc hol} system~\cite{mgordon-hol}, which itself is based on
+Church's original paper~\cite{church40}. Andrews's book~\cite{andrews86} is a
+full description of higher-order logic. Experience with the {\sc hol} system
+has demonstrated that higher-order logic is useful for hardware verification;
+beyond this, it is widely applicable in many areas of mathematics. It is
+weaker than ZF set theory but for most applications this does not matter. If
+you prefer {\ML} to Lisp, you will probably prefer HOL to~ZF.
-Previous releases of Isabelle included a different version of~\HOL, with
+Previous releases of Isabelle included a different version of~HOL, with
explicit type inference rules~\cite{paulson-COLOG}. This version no longer
exists, but \thydx{ZF} supports a similar style of reasoning.
-\HOL\ has a distinct feel, compared with {\ZF} and {\CTT}. It
-identifies object-level types with meta-level types, taking advantage of
-Isabelle's built-in type checker. It identifies object-level functions
-with meta-level functions, so it uses Isabelle's operations for abstraction
-and application. There is no `apply' operator: function applications are
-written as simply~$f(a)$ rather than $f{\tt`}a$.
+HOL has a distinct feel, compared with ZF and CTT. It identifies object-level
+types with meta-level types, taking advantage of Isabelle's built-in type
+checker. It identifies object-level functions with meta-level functions, so
+it uses Isabelle's operations for abstraction and application. There is no
+`apply' operator: function applications are written as simply~$f(a)$ rather
+than $f{\tt`}a$.
-These identifications allow Isabelle to support \HOL\ particularly nicely,
-but they also mean that \HOL\ requires more sophistication from the user
---- in particular, an understanding of Isabelle's type system. Beginners
-should work with {\tt show_types} set to {\tt true}. Gain experience by
-working in first-order logic before attempting to use higher-order logic.
-This chapter assumes familiarity with~{\FOL{}}.
+These identifications allow Isabelle to support HOL particularly nicely, but
+they also mean that HOL requires more sophistication from the user --- in
+particular, an understanding of Isabelle's type system. Beginners should work
+with {\tt show_types} set to {\tt true}. Gain experience by working in
+first-order logic before attempting to use higher-order logic. This chapter
+assumes familiarity with~FOL.
\begin{figure}
@@ -115,7 +114,7 @@
& | & "EX!~" id~id^* " . " formula
\end{array}
\]
-\caption{Full grammar for \HOL} \label{hol-grammar}
+\caption{Full grammar for HOL} \label{hol-grammar}
\end{figure}
@@ -140,7 +139,7 @@
$\neg(a=b)$.
\begin{warn}
- \HOL\ has no if-and-only-if connective; logical equivalence is expressed
+ HOL has no if-and-only-if connective; logical equivalence is expressed
using equality. But equality has a high priority, as befitting a
relation, while if-and-only-if typically has the lowest priority. Thus,
$\neg\neg P=P$ abbreviates $\neg\neg (P=P)$ and not $(\neg\neg P)=P$.
@@ -155,7 +154,7 @@
belongs to class~{\tt term} if $\sigma$ and~$\tau$ do, allowing quantification
over functions.
-Types in \HOL\ must be non-empty; otherwise the quantifier rules would be
+Types in HOL must be non-empty; otherwise the quantifier rules would be
unsound. I have commented on this elsewhere~\cite[\S7]{paulson-COLOG}.
\index{type definitions}
@@ -178,11 +177,10 @@
\subsection{Binders}
Hilbert's {\bf description} operator~$\epsilon x.P[x]$ stands for some~$a$
-satisfying~$P[a]$, if such exists. Since all terms in \HOL\ denote
-something, a description is always meaningful, but we do not know its value
-unless $P[x]$ defines it uniquely. We may write descriptions as
-\cdx{Eps}($P$) or use the syntax
-\hbox{\tt \at $x$.$P[x]$}.
+satisfying~$P[a]$, if such exists. Since all terms in HOL denote something, a
+description is always meaningful, but we do not know its value unless $P[x]$
+defines it uniquely. We may write descriptions as \cdx{Eps}($P$) or use the
+syntax \hbox{\tt \at $x$.$P[x]$}.
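+For example (a minimal illustration), the description \hbox{\tt \at $x$.$x=a$}
+denotes~$a$, since this predicate determines the value uniquely.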
Existential quantification is defined by
\[ \exists x.P(x) \;\equiv\; P(\epsilon x.P(x)). \]
@@ -193,14 +191,14 @@
exists a unique pair $(x,y)$ satisfying~$P(x,y)$.
\index{*"! symbol}\index{*"? symbol}\index{HOL system@{\sc hol} system}
-Quantifiers have two notations. As in Gordon's {\sc hol} system, \HOL\
+Quantifiers have two notations. As in Gordon's {\sc hol} system, HOL
uses~{\tt!}\ and~{\tt?}\ to stand for $\forall$ and $\exists$. The
existential quantifier must be followed by a space; thus {\tt?x} is an
unknown, while \verb'? x.f(x)=y' is a quantification. Isabelle's usual
-notation for quantifiers, \sdx{ALL} and \sdx{EX}, is also
-available. Both notations are accepted for input. The {\ML} reference
+notation for quantifiers, \sdx{ALL} and \sdx{EX}, is also available. Both
+notations are accepted for input. The {\ML} reference
\ttindexbold{HOL_quantifiers} governs the output notation. If set to {\tt
-true}, then~{\tt!}\ and~{\tt?}\ are displayed; this is the default. If set
+ true}, then~{\tt!}\ and~{\tt?}\ are displayed; this is the default. If set
to {\tt false}, then~{\tt ALL} and~{\tt EX} are displayed.
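+For example (a minimal sketch, typed at the {\ML} prompt), switching the
+output notation to {\tt ALL} and~{\tt EX} is just an assignment:
+\begin{ttbox}
+HOL_quantifiers := false;
+\end{ttbox}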
All these binders have priority 10.
@@ -212,7 +210,7 @@
the constant~\cdx{Let}. It can be expanded by rewriting with its
definition, \tdx{Let_def}.
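+For example (a minimal sketch, assuming an interactive session), a {\tt let}
+can be eliminated by a single simplification step:
+\begin{ttbox}
+goal HOL.thy "(let x = P in x) = P";
+by (simp_tac (HOL_ss addsimps [Let_def]) 1);
+\end{ttbox}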
-\HOL\ also defines the basic syntax
+HOL also defines the basic syntax
\[\dquotes"case"~e~"of"~c@1~"=>"~e@1~"|" \dots "|"~c@n~"=>"~e@n\]
as a uniform means of expressing {\tt case} constructs. Therefore {\tt
case} and \sdx{of} are reserved words. However, so far this is mere
@@ -259,8 +257,8 @@
\section{Rules of inference}
-Figure~\ref{hol-rules} shows the inference rules of~\HOL{}, with
-their~{\ML} names. Some of the rules deserve additional comments:
+Figure~\ref{hol-rules} shows the inference rules of~HOL, with their~{\ML}
+names. Some of the rules deserve additional comments:
\begin{ttdescription}
\item[\tdx{ext}] expresses extensionality of functions.
\item[\tdx{iff}] asserts that logically equivalent formulae are
@@ -273,14 +271,14 @@
shown by Diaconescu; see Paulson~\cite{paulson-COLOG} for details.}
\end{ttdescription}
-\HOL{} follows standard practice in higher-order logic: only a few
-connectives are taken as primitive, with the remainder defined obscurely
+HOL follows standard practice in higher-order logic: only a few connectives
+are taken as primitive, with the remainder defined obscurely
(Fig.\ts\ref{hol-defs}). Gordon's {\sc hol} system expresses the
corresponding definitions \cite[page~270]{mgordon-hol} using
-object-equality~({\tt=}), which is possible because equality in
-higher-order logic may equate formulae and even functions over formulae.
-But theory~\HOL{}, like all other Isabelle theories, uses
-meta-equality~({\tt==}) for definitions.
+object-equality~({\tt=}), which is possible because equality in higher-order
+logic may equate formulae and even functions over formulae. But theory~HOL,
+like all other Isabelle theories, uses meta-equality~({\tt==}) for
+definitions.
Some of the rules mention type variables; for
example, {\tt refl} mentions the type variable~{\tt'a}. This allows you to
@@ -344,7 +342,7 @@
\subcaption{Logical equivalence}
\end{ttbox}
-\caption{Derived rules for \HOL} \label{hol-lemmas1}
+\caption{Derived rules for HOL} \label{hol-lemmas1}
\end{figure}
@@ -521,8 +519,8 @@
\section{A formulation of set theory}
Historically, higher-order logic gives a foundation for Russell and
Whitehead's theory of classes. Let us use modern terminology and call them
-{\bf sets}, but note that these sets are distinct from those of {\ZF} set
-theory, and behave more like {\ZF} classes.
+{\bf sets}, but note that these sets are distinct from those of ZF set theory,
+and behave more like ZF classes.
\begin{itemize}
\item
Sets are given by predicates over some type~$\sigma$. Types serve to
@@ -534,20 +532,19 @@
Although sets may contain other sets as elements, the containing set must
have a more complex type.
\end{itemize}
-Finite unions and intersections have the same behaviour in \HOL\ as they
-do in~{\ZF}. In \HOL\ the intersection of the empty set is well-defined,
-denoting the universal set for the given type.
+Finite unions and intersections have the same behaviour in HOL as they do
+in~ZF. In HOL the intersection of the empty set is well-defined, denoting the
+universal set for the given type.
\subsection{Syntax of set theory}\index{*set type}
-\HOL's set theory is called \thydx{Set}. The type $\alpha\,set$ is
-essentially the same as $\alpha\To bool$. The new type is defined for
-clarity and to avoid complications involving function types in unification.
-Since Isabelle does not support type definitions (as mentioned in
-\S\ref{HOL-types}), the isomorphisms between the two types are declared
-explicitly. Here they are natural: {\tt Collect} maps $\alpha\To bool$ to
-$\alpha\,set$, while \hbox{\tt op :} maps in the other direction (ignoring
-argument order).
+HOL's set theory is called \thydx{Set}. The type $\alpha\,set$ is essentially
+the same as $\alpha\To bool$. The new type is defined for clarity and to
+avoid complications involving function types in unification. Since Isabelle
+does not support type definitions (as mentioned in \S\ref{HOL-types}), the
+isomorphisms between the two types are declared explicitly. Here they are
+natural: {\tt Collect} maps $\alpha\To bool$ to $\alpha\,set$, while \hbox{\tt
+ op :} maps in the other direction (ignoring argument order).
Figure~\ref{hol-set-syntax} lists the constants, infixes, and syntax
translations. Figure~\ref{hol-set-syntax2} presents the grammar of the new
@@ -563,11 +560,11 @@
{\tt insert}(a@1,\ldots,{\tt insert}(a@n,\{\}))
\end{eqnarray*}
-The set \hbox{\tt\{$x$.$P[x]$\}} consists of all $x$ (of suitable type)
-that satisfy~$P[x]$, where $P[x]$ is a formula that may contain free
-occurrences of~$x$. This syntax expands to \cdx{Collect}$(\lambda
-x.P[x])$. It defines sets by absolute comprehension, which is impossible
-in~{\ZF}; the type of~$x$ implicitly restricts the comprehension.
+The set \hbox{\tt\{$x$.$P[x]$\}} consists of all $x$ (of suitable type) that
+satisfy~$P[x]$, where $P[x]$ is a formula that may contain free occurrences
+of~$x$. This syntax expands to \cdx{Collect}$(\lambda x.P[x])$. It defines
+sets by absolute comprehension, which is impossible in~ZF; the type of~$x$
+implicitly restricts the comprehension.
The set theory defines two {\bf bounded quantifiers}:
\begin{eqnarray*}
@@ -811,14 +808,13 @@
Figures~\ref{hol-set1} and~\ref{hol-set2} present derived rules. Most are
-obvious and resemble rules of Isabelle's {\ZF} set theory. Certain rules,
-such as \tdx{subsetCE}, \tdx{bexCI} and \tdx{UnCI},
-are designed for classical reasoning; the rules \tdx{subsetD},
-\tdx{bexI}, \tdx{Un1} and~\tdx{Un2} are not
-strictly necessary but yield more natural proofs. Similarly,
-\tdx{equalityCE} supports classical reasoning about extensionality,
-after the fashion of \tdx{iffCE}. See the file {\tt HOL/Set.ML} for
-proofs pertaining to set theory.
+obvious and resemble rules of Isabelle's ZF set theory. Certain rules, such
+as \tdx{subsetCE}, \tdx{bexCI} and \tdx{UnCI}, are designed for classical
+reasoning; the rules \tdx{subsetD}, \tdx{bexI}, \tdx{Un1} and~\tdx{Un2} are
+not strictly necessary but yield more natural proofs. Similarly,
+\tdx{equalityCE} supports classical reasoning about extensionality, after the
+fashion of \tdx{iffCE}. See the file {\tt HOL/Set.ML} for proofs pertaining
+to set theory.
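+For example (a small sketch, assuming the default rule sets suffice), many
+such facts are proved outright by the classical reasoner:
+\begin{ttbox}
+goal Set.thy "A Int B <= A Un B";
+by (fast_tac set_cs 1);
+\end{ttbox}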
Figure~\ref{hol-fun} presents derived inference rules involving functions.
They also include rules for \cdx{Inv}, which is defined in theory~{\tt
@@ -905,13 +901,12 @@
\section{Generic packages and classical reasoning}
-\HOL\ instantiates most of Isabelle's generic packages;
-see {\tt HOL/ROOT.ML} for details.
+HOL instantiates most of Isabelle's generic packages; see {\tt HOL/ROOT.ML}
+for details.
\begin{itemize}
-\item
-Because it includes a general substitution rule, \HOL\ instantiates the
-tactic {\tt hyp_subst_tac}, which substitutes for an equality
-throughout a subgoal and its hypotheses.
+\item Because it includes a general substitution rule, HOL instantiates the
+ tactic {\tt hyp_subst_tac}, which substitutes for an equality throughout a
+ subgoal and its hypotheses.
\item
It instantiates the simplifier, defining~\ttindexbold{HOL_ss} as the
simplification set for higher-order logic. Equality~($=$), which also
@@ -921,14 +916,14 @@
\item
It instantiates the classical reasoner, as described below.
\end{itemize}
-\HOL\ derives classical introduction rules for $\disj$ and~$\exists$, as
-well as classical elimination rules for~$\imp$ and~$\bimp$, and the swap
-rule; recall Fig.\ts\ref{hol-lemmas2} above.
+HOL derives classical introduction rules for $\disj$ and~$\exists$, as well as
+classical elimination rules for~$\imp$ and~$\bimp$, and the swap rule; recall
+Fig.\ts\ref{hol-lemmas2} above.
-The classical reasoner is set up as the structure
-{\tt Classical}. This structure is open, so {\ML} identifiers such
-as {\tt step_tac}, {\tt fast_tac}, {\tt best_tac}, etc., refer to it.
-\HOL\ defines the following classical rule sets:
+The classical reasoner is set up as the structure {\tt Classical}. This
+structure is open, so {\ML} identifiers such as {\tt step_tac}, {\tt
+ fast_tac}, {\tt best_tac}, etc., refer to it. HOL defines the following
+classical rule sets:
\begin{ttbox}
prop_cs : claset
HOL_cs : claset
@@ -1075,13 +1070,12 @@
called {\tt Lfp}, {\tt Trancl} and {\tt WF}\@. Elsewhere I have described
similar constructions in the context of set theory~\cite{paulson-set-II}.
-Type~\tydx{nat} is postulated to belong to class~\cldx{ord}, which
-overloads $<$ and $\leq$ on the natural numbers. As of this writing,
-Isabelle provides no means of verifying that such overloading is sensible;
-there is no means of specifying the operators' properties and verifying
-that instances of the operators satisfy those properties. To be safe, the
-\HOL\ theory includes no polymorphic axioms asserting general properties of
-$<$ and~$\leq$.
+Type~\tydx{nat} is postulated to belong to class~\cldx{ord}, which overloads
+$<$ and $\leq$ on the natural numbers. As of this writing, Isabelle provides
+no means of verifying that such overloading is sensible; there is no means of
+specifying the operators' properties and verifying that instances of the
+operators satisfy those properties. To be safe, the HOL theory includes no
+polymorphic axioms asserting general properties of $<$ and~$\leq$.
Theory \thydx{Arith} develops arithmetic on the natural numbers. It
defines addition, multiplication, subtraction, division, and remainder.
@@ -1190,9 +1184,9 @@
\subsection{The type constructor for lists, {\tt list}}
\index{*list type}
-\HOL's definition of lists is an example of an experimental method for
-handling recursive data types. Figure~\ref{hol-list} presents the theory
-\thydx{List}: the basic list operations with their types and properties.
+HOL's definition of lists is an example of an experimental method for handling
+recursive data types. Figure~\ref{hol-list} presents the theory \thydx{List}:
+the basic list operations with their types and properties.
The \sdx{case} construct is defined by the following translation:
{\dquotes
@@ -1740,8 +1734,8 @@
proves the two equivalent. It contains several datatype and inductive
definitions, and demonstrates their use.
-Directory {\tt HOL/ex} contains other examples and experimental proofs in
-{\HOL}. Here is an overview of the more interesting files.
+Directory {\tt HOL/ex} contains other examples and experimental proofs in HOL.
+Here is an overview of the more interesting files.
\begin{itemize}
\item File {\tt cla.ML} demonstrates the classical reasoner on over sixty
predicate calculus theorems, ranging from simple tautologies to
@@ -1770,12 +1764,12 @@
as filter, which can compute indefinitely before yielding the next
element (if any!) of the lazy list. A coinduction principle is defined
for proving equations on lazy lists.
-
-\item Theory {\tt PropLog} proves the soundness and completeness of
- classical propositional logic, given a truth table semantics. The only
- connective is $\imp$. A Hilbert-style axiom system is specified, and its
- set of theorems defined inductively. A similar proof in \ZF{} is
- described elsewhere~\cite{paulson-set-II}.
+
+\item Theory {\tt PropLog} proves the soundness and completeness of classical
+ propositional logic, given a truth table semantics. The only connective is
+ $\imp$. A Hilbert-style axiom system is specified, and its set of theorems
+ defined inductively. A similar proof in ZF is described
+ elsewhere~\cite{paulson-set-II}.
\item Theory {\tt Term} develops an experimental recursive type definition;
the recursion goes through the type constructor~\tydx{list}.
@@ -1804,8 +1798,8 @@
of~$\alpha$. This version states that for every function from $\alpha$ to
its powerset, some subset is outside its range.
-The Isabelle proof uses \HOL's set theory, with the type $\alpha\,set$ and
-the operator \cdx{range}. The set~$S$ is given as an unknown instead of a
+The Isabelle proof uses HOL's set theory, with the type $\alpha\,set$ and the
+operator \cdx{range}. The set~$S$ is given as an unknown instead of a
quantified variable so that we may inspect the subset found by the proof.
\begin{ttbox}
goal Set.thy "~ ?S : range(f :: 'a=>'a set)";
@@ -1870,12 +1864,11 @@
{\out No subgoals!}
\end{ttbox}
How much creativity is required? As it happens, Isabelle can prove this
-theorem automatically. The classical set \ttindex{set_cs} contains rules
-for most of the constructs of \HOL's set theory. We must augment it with
-\tdx{equalityCE} to break up set equalities, and then apply best-first
-search. Depth-first search would diverge, but best-first search
-successfully navigates through the large search space.
-\index{search!best-first}
+theorem automatically. The classical set \ttindex{set_cs} contains rules for
+most of the constructs of HOL's set theory. We must augment it with
+\tdx{equalityCE} to break up set equalities, and then apply best-first search.
+Depth-first search would diverge, but best-first search successfully navigates
+through the large search space. \index{search!best-first}
\begin{ttbox}
choplev 0;
{\out Level 0}
--- a/doc-src/Logics/logics.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Logics/logics.tex Mon Aug 28 13:52:38 2000 +0200
@@ -1,6 +1,6 @@
%% $Id$
\documentclass[12pt,a4paper]{report}
-\usepackage{graphicx,../iman,../extra,../proof,../rail,latexsym,../pdfsetup}
+\usepackage{graphicx,../iman,../extra,../ttbox,../proof,../rail,latexsym,../pdfsetup}
%%%STILL NEEDS MODAL, LCF
%%% to index derived rls: ^\([a-zA-Z0-9][a-zA-Z0-9_]*\) \\tdx{\1}
@@ -16,8 +16,8 @@
With Contributions by Tobias Nipkow and Markus Wenzel%
\thanks{Markus Wenzel made numerous improvements. Sara Kalvala
contributed Chap.\ts\ref{chap:sequents}. Philippe de Groote
- wrote the first version of the logic~\LK{}. Tobias Nipkow developed
- \LCF{} and~\Cube{}. Martin Coen developed~\Modal{} with assistance
+ wrote the first version of the logic~LK. Tobias Nipkow developed
+ LCF and~Cube. Martin Coen developed~Modal with assistance
from Rajeev Gor\'e. The research has been funded by the EPSRC
(grants GR/G53279, GR/H40570, GR/K57381, GR/K77051, GR/M75440) and by ESPRIT
(projects 3245: Logical Frameworks, and 6453: Types), and by the DFG
--- a/doc-src/Logics/preface.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Logics/preface.tex Mon Aug 28 13:52:38 2000 +0200
@@ -23,11 +23,11 @@
\begin{ttdescription}
\item[\thydx{CCL}] is Martin Coen's Classical Computational Logic,
which is the basis of a preliminary method for deriving programs from
- proofs~\cite{coen92}. It is built upon classical~\FOL{}.
+ proofs~\cite{coen92}. It is built upon classical~FOL.
\item[\thydx{LCF}] is a version of Scott's Logic for Computable
Functions, which is also implemented by the~{\sc lcf}
- system~\cite{paulson87}. It is built upon classical~\FOL{}.
+ system~\cite{paulson87}. It is built upon classical~FOL.
\item[\thydx{HOLCF}] is a version of {\sc lcf}, defined as an extension of
\texttt{HOL}\@. See \cite{MuellerNvOS99} for more details on \texttt{HOLCF}.
--- a/doc-src/Logics/syntax.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Logics/syntax.tex Mon Aug 28 13:52:38 2000 +0200
@@ -31,16 +31,15 @@
becomes syntactically invalid if the brackets are removed.
A {\bf binder} is a symbol associated with a constant of type
-$(\sigma\To\tau)\To\tau'$. For instance, we may declare~$\forall$ as
-a binder for the constant~$All$, which has type $(\alpha\To o)\To o$.
-This defines the syntax $\forall x.t$ to mean $All(\lambda x.t)$. We
-can also write $\forall x@1\ldots x@m.t$ to abbreviate $\forall x@1.
-\ldots \forall x@m.t$; this is possible for any constant provided that
-$\tau$ and $\tau'$ are the same type. \HOL's description operator
-$\varepsilon x.P\,x$ has type $(\alpha\To bool)\To\alpha$ and can bind
-only one variable, except when $\alpha$ is $bool$. \ZF's bounded
-quantifier $\forall x\in A.P(x)$ cannot be declared as a binder
-because it has type $[i, i\To o]\To o$. The syntax for binders allows
+$(\sigma\To\tau)\To\tau'$. For instance, we may declare~$\forall$ as a binder
+for the constant~$All$, which has type $(\alpha\To o)\To o$. This defines the
+syntax $\forall x.t$ to mean $All(\lambda x.t)$. We can also write $\forall
+x@1\ldots x@m.t$ to abbreviate $\forall x@1. \ldots \forall x@m.t$; this is
+possible for any constant provided that $\tau$ and $\tau'$ are the same type.
+HOL's description operator $\varepsilon x.P\,x$ has type $(\alpha\To
+bool)\To\alpha$ and can bind only one variable, except when $\alpha$ is
+$bool$. ZF's bounded quantifier $\forall x\in A.P(x)$ cannot be declared as a
+binder because it has type $[i, i\To o]\To o$. The syntax for binders allows
type constraints on bound variables, as in
\[ \forall (x{::}\alpha) \; (y{::}\beta) \; z{::}\gamma. Q(x,y,z) \]
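+For example (a sketch of the corresponding theory-file declaration; the exact
+form in the FOL sources may differ slightly), $\forall$ is obtained by
+annotating the declaration of the constant~$All$:
+\begin{ttbox}
+consts  All :: "('a => o) => o"   (binder "ALL " 10)
+\end{ttbox}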
--- a/doc-src/Ref/Makefile Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/Makefile Mon Aug 28 13:52:38 2000 +0200
@@ -15,7 +15,7 @@
FILES = ref.tex introduction.tex goals.tex tactic.tex tctical.tex \
thm.tex theories.tex defining.tex syntax.tex substitution.tex \
simplifier.tex classical.tex theory-syntax.tex \
- ../rail.sty ../proof.sty ../iman.sty ../extra.sty ../manual.bib
+ ../rail.sty ../proof.sty ../iman.sty ../extra.sty ../ttbox.sty ../manual.bib
dvi: $(NAME).dvi
--- a/doc-src/Ref/classical.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/classical.tex Mon Aug 28 13:52:38 2000 +0200
@@ -3,12 +3,11 @@
\index{classical reasoner|(}
\newcommand\ainfer[2]{\begin{array}{r@{\,}l}#2\\ \hline#1\end{array}}
-Although Isabelle is generic, many users will be working in some
-extension of classical first-order logic.
-Isabelle's set theory~{\tt ZF} is built upon theory~\texttt{FOL},
-while {\HOL} conceptually contains first-order logic as a fragment.
-Theorem-proving in predicate logic is undecidable, but many
-researchers have developed strategies to assist in this task.
+Although Isabelle is generic, many users will be working in some extension of
+classical first-order logic. Isabelle's set theory~ZF is built upon
+theory~FOL, while HOL conceptually contains first-order logic as a fragment.
+Theorem-proving in predicate logic is undecidable, but many researchers have
+developed strategies to assist in this task.
Isabelle's classical reasoner is an \ML{} functor that accepts certain
information about a logic and delivers a suite of automatic tactics. Each
@@ -52,9 +51,9 @@
effectively. There are many tactics to choose from, including
{\tt Fast_tac} and \texttt{Best_tac}.
-We shall first discuss the underlying principles, then present the
-classical reasoner. Finally, we shall see how to instantiate it for new logics.
-The logics \FOL, \ZF, {\HOL} and {\HOLCF} have it already installed.
+We shall first discuss the underlying principles, then present the classical
+reasoner. Finally, we shall see how to instantiate it for new logics. The
+logics FOL, ZF, HOL and HOLCF have it already installed.
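+For example (a minimal sketch in HOL, assuming the default claset), a simple
+predicate-calculus goal is dispatched in one step:
+\begin{ttbox}
+Goal "(EX x. ALL y. P x y) --> (ALL y. EX x. P x y)";
+by (Fast_tac 1);
+\end{ttbox}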
\section{The sequent calculus}
@@ -445,12 +444,11 @@
\begin{itemize}
\item It does not use the wrapper tacticals described above, such as
\ttindex{addss}.
-\item It ignores types, which can cause problems in \HOL. If it applies a rule
+\item It ignores types, which can cause problems in HOL. If it applies a rule
whose types are inappropriate, then proof reconstruction will fail.
\item It does not perform higher-order unification, as needed by the rule {\tt
- rangeI} in {\HOL} and \texttt{RepFunI} in {\ZF}. There are often
- alternatives to such rules, for example {\tt
- range_eqI} and \texttt{RepFun_eqI}.
+ rangeI} in HOL and \texttt{RepFunI} in ZF. There are often alternatives
+ to such rules, for example {\tt range_eqI} and \texttt{RepFun_eqI}.
\item Function variables may only be applied to parameters of the subgoal.
(This restriction arises because the prover does not use higher-order
unification.) If other function variables are present then the prover will
--- a/doc-src/Ref/introduction.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/introduction.tex Mon Aug 28 13:52:38 2000 +0200
@@ -34,7 +34,7 @@
\({\langle}isabellehome{\rangle}\)/bin/isabelle
\end{ttbox}
This should start an interactive \ML{} session with the default object-logic
-(usually {\HOL}) already pre-loaded.
+(usually HOL) already pre-loaded.
Subsequently, we assume that the \texttt{isabelle} executable is determined
automatically by the shell, e.g.\ by adding {\tt \(\langle isabellehome
@@ -52,7 +52,7 @@
isabelle FOL
\end{ttbox}
This should put you into the world of polymorphic first-order logic (assuming
-that an image of {\FOL} has been pre-built).
+that an image of FOL has been pre-built).
\index{saving your session|bold} Isabelle provides no means of storing
theorems or internal proof objects on files. Theorems are simply part of the
@@ -62,7 +62,7 @@
\emph{writable} in the first place. The standard object-logic images are
usually read-only, so you have to create a private working copy first. For
example, the following shell command puts you into a writable Isabelle session
-of name \texttt{Foo} that initially contains just plain \HOL:
+of name \texttt{Foo} that initially contains just plain HOL:
\begin{ttbox}
isabelle HOL Foo
\end{ttbox}
@@ -202,7 +202,7 @@
\end{ttdescription}
-Note that theories of pre-built logic images (e.g.\ {\HOL}) are marked as
+Note that theories of pre-built logic images (e.g.\ HOL) are marked as
\emph{finished} and cannot be updated any more. See \S\ref{sec:more-theories}
for further information on Isabelle's theory loader.
--- a/doc-src/Ref/ref.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/ref.tex Mon Aug 28 13:52:38 2000 +0200
@@ -1,5 +1,5 @@
\documentclass[12pt,a4paper]{report}
-\usepackage{graphicx,../iman,../extra,../proof,../rail,../pdfsetup}
+\usepackage{graphicx,../iman,../extra,../ttbox,../proof,../rail,../pdfsetup}
%% $Id$
%%\includeonly{}
--- a/doc-src/Ref/simp.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/simp.tex Mon Aug 28 13:52:38 2000 +0200
@@ -489,8 +489,8 @@
\section{A sample instantiation}
-Here is the instantiation of {\tt SIMP_DATA} for {\FOL}. The code for {\tt
-mk_rew_rules} is not shown; see the file {\tt FOL/simpdata.ML}.
+Here is the instantiation of {\tt SIMP_DATA} for FOL. The code for {\tt
+ mk_rew_rules} is not shown; see the file {\tt FOL/simpdata.ML}.
\begin{ttbox}
structure FOL_SimpData : SIMP_DATA =
struct
--- a/doc-src/Ref/simplifier.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/simplifier.tex Mon Aug 28 13:52:38 2000 +0200
@@ -3,12 +3,11 @@
\label{chap:simplification}
\index{simplification|(}
-This chapter describes Isabelle's generic simplification package. It
-performs conditional and unconditional rewriting and uses contextual
-information (`local assumptions'). It provides several general hooks,
-which can provide automatic case splits during rewriting, for example.
-The simplifier is already set up for many of Isabelle's logics: \FOL,
-\ZF, \HOL, \HOLCF.
+This chapter describes Isabelle's generic simplification package. It performs
+conditional and unconditional rewriting and uses contextual information
+(`local assumptions'). It provides several general hooks, which can, for
+example, perform automatic case splits during rewriting. The simplifier is
+already set up for many of Isabelle's logics: FOL, ZF, HOL, HOLCF.
The first section is a quick introduction to the simplifier that
should be sufficient to get started. The later sections explain more
@@ -71,9 +70,9 @@
\medskip
-As an example, consider the theory of arithmetic in \HOL. The (rather
-trivial) goal $0 + (x + 0) = x + 0 + 0$ can be solved by a single call
-of \texttt{Simp_tac} as follows:
+As an example, consider the theory of arithmetic in HOL. The (rather trivial)
+goal $0 + (x + 0) = x + 0 + 0$ can be solved by a single call of
+\texttt{Simp_tac} as follows:
\begin{ttbox}
context Arith.thy;
Goal "0 + (x + 0) = x + 0 + 0";
@@ -181,11 +180,11 @@
\end{ttdescription}
-When a new theory is built, its implicit simpset is initialized by the
-union of the respective simpsets of its parent theories. In addition,
-certain theory definition constructs (e.g.\ \ttindex{datatype} and
-\ttindex{primrec} in \HOL) implicitly augment the current simpset.
-Ordinary definitions are not added automatically!
+When a new theory is built, its implicit simpset is initialized by the union
+of the respective simpsets of its parent theories. In addition, certain
+theory definition constructs (e.g.\ \ttindex{datatype} and \ttindex{primrec}
+in HOL) implicitly augment the current simpset. Ordinary definitions are not
+added automatically!
It is up to the user to manipulate the current simpset further by
explicitly adding or deleting theorems and simplification procedures.
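+For example (a minimal sketch, assuming {\tt my_thm} names a theorem you have
+already proved), the current simpset is modified like this:
+\begin{ttbox}
+Addsimps [my_thm];
+Delsimps [my_thm];
+\end{ttbox}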
@@ -253,12 +252,11 @@
\end{ttbox}
\begin{ttdescription}
-\item[\ttindexbold{empty_ss}] is the empty simpset. This is not very
- useful under normal circumstances because it doesn't contain
- suitable tactics (subgoaler etc.). When setting up the simplifier
- for a particular object-logic, one will typically define a more
- appropriate ``almost empty'' simpset. For example, in \HOL\ this is
- called \ttindexbold{HOL_basic_ss}.
+\item[\ttindexbold{empty_ss}] is the empty simpset. This is not very useful
+ under normal circumstances because it doesn't contain suitable tactics
+ (subgoaler etc.). When setting up the simplifier for a particular
+ object-logic, one will typically define a more appropriate ``almost empty''
+ simpset. For example, in HOL this is called \ttindexbold{HOL_basic_ss}.
\item[\ttindexbold{merge_ss} ($ss@1$, $ss@2$)] merges simpsets $ss@1$
and $ss@2$ by building the union of their respective rewrite rules,
@@ -1077,12 +1075,11 @@
\subsection{Example: sums of natural numbers}
-This example is again set in \HOL\ (see \texttt{HOL/ex/NatSum}).
-Theory \thydx{Arith} contains natural numbers arithmetic. Its
-associated simpset contains many arithmetic laws including
-distributivity of~$\times$ over~$+$, while \texttt{add_ac} is a list
-consisting of the A, C and LC laws for~$+$ on type \texttt{nat}. Let
-us prove the theorem
+This example is again set in HOL (see \texttt{HOL/ex/NatSum}). Theory
+\thydx{Arith} contains arithmetic on the natural numbers. Its associated simpset
+contains many arithmetic laws including distributivity of~$\times$ over~$+$,
+while \texttt{add_ac} is a list consisting of the A, C and LC laws for~$+$ on
+type \texttt{nat}. Let us prove the theorem
\[ \sum@{i=1}^n i = n\times(n+1)/2. \]
%
A functional~\texttt{sum} represents the summation operator under the
@@ -1216,12 +1213,11 @@
\texttt{HOL/Modelcheck/MCSyn} where this is used to provide external
model checker syntax).
-The {\HOL} theory of tuples (see \texttt{HOL/Prod}) provides an
-operator \texttt{split} together with some concrete syntax supporting
-$\lambda\,(x,y).b$ abstractions. Assume that we would like to offer a
-tactic that rewrites any function $\lambda\,p.f\,p$ (where $p$ is of
-some pair type) to $\lambda\,(x,y).f\,(x,y)$. The corresponding rule
-is:
+The HOL theory of tuples (see \texttt{HOL/Prod}) provides an operator
+\texttt{split} together with some concrete syntax supporting
+$\lambda\,(x,y).b$ abstractions. Assume that we would like to offer a tactic
+that rewrites any function $\lambda\,p.f\,p$ (where $p$ is of some pair type)
+to $\lambda\,(x,y).f\,(x,y)$. The corresponding rule is:
\begin{ttbox}
pair_eta_expand: (f::'a*'b=>'c) = (\%(x, y). f (x, y))
\end{ttbox}
@@ -1298,7 +1294,7 @@
We first prove lots of standard rewrite rules about the logical
connectives. These include cancellation and associative laws. We
define a function that echoes the desired law and then supplies it to the
-prover for intuitionistic \FOL:
+prover for intuitionistic FOL:
\begin{ttbox}
fun int_prove_fun s =
(writeln s;
@@ -1437,12 +1433,11 @@
val triv_rls = [TrueI,refl,iff_refl,notFalseI];
\end{ttbox}
%
-The basic simpset for intuitionistic \FOL{} is
-\ttindexbold{FOL_basic_ss}. It preprocess rewrites using {\tt
- gen_all}, \texttt{atomize} and \texttt{mk_eq}. It solves simplified
-subgoals using \texttt{triv_rls} and assumptions, and by detecting
-contradictions. It uses \ttindex{asm_simp_tac} to tackle subgoals of
-conditional rewrites.
+The basic simpset for intuitionistic FOL is \ttindexbold{FOL_basic_ss}. It
+preprocesses rewrites using {\tt gen_all}, \texttt{atomize} and \texttt{mk_eq}.
+It solves simplified subgoals using \texttt{triv_rls} and assumptions, and by
+detecting contradictions. It uses \ttindex{asm_simp_tac} to tackle subgoals
+of conditional rewrites.
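+For example (a minimal sketch, assuming {\tt my_rews} is a list of rewrite
+rules), further simpsets are built by extending it:
+\begin{ttbox}
+val my_ss = FOL_basic_ss addsimps my_rews;
+\end{ttbox}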
Other simpsets built from \texttt{FOL_basic_ss} will inherit these items.
In particular, \ttindexbold{IFOL_ss}, which introduces {\tt
--- a/doc-src/Ref/substitution.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/substitution.tex Mon Aug 28 13:52:38 2000 +0200
@@ -61,7 +61,7 @@
eresolve_tac [subst] \(i\) {\rm or} eresolve_tac [ssubst] \(i\){\it.}
\end{ttbox}
-Logics \HOL, {\FOL} and {\ZF} define the tactic \ttindexbold{stac} by
+Logics HOL, FOL and ZF define the tactic \ttindexbold{stac} by
\begin{ttbox}
fun stac eqth = CHANGED o rtac (eqth RS ssubst);
\end{ttbox}
--- a/doc-src/Ref/syntax.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/syntax.tex Mon Aug 28 13:52:38 2000 +0200
@@ -688,11 +688,11 @@
\section{Translation functions} \label{sec:tr_funs}
\index{translations|(}
%
-This section describes the translation function mechanism. By writing
-\ML{} functions, you can do almost everything to terms or \AST{}s
-during parsing and printing. The logic \LK\ is a good example of
-sophisticated transformations between internal and external
-representations of sequents; here, macros would be useless.
+This section describes the translation function mechanism. By writing \ML{}
+functions, you can do almost everything to terms or \AST{}s during parsing and
+printing. The logic LK is a good example of sophisticated transformations
+between internal and external representations of sequents; here, macros would
+be useless.
A full understanding of translations requires some familiarity
with Isabelle's internals, especially the datatypes {\tt term}, {\tt typ},
--- a/doc-src/Ref/tactic.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/tactic.tex Mon Aug 28 13:52:38 2000 +0200
@@ -422,9 +422,9 @@
flexflex_tac : tactic
\end{ttbox}
\begin{ttdescription}
-\item[\ttindexbold{distinct_subgoals_tac}]
- removes duplicate subgoals from a proof state. (These arise especially
- in \ZF{}, where the subgoals are essentially type constraints.)
+\item[\ttindexbold{distinct_subgoals_tac}] removes duplicate subgoals from a
+ proof state. (These arise especially in ZF, where the subgoals are
+ essentially type constraints.)
\item[\ttindexbold{prune_params_tac}]
removes unused parameters from all subgoals of the proof state. It works
--- a/doc-src/Ref/theories.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/theories.tex Mon Aug 28 13:52:38 2000 +0200
@@ -3,12 +3,12 @@
\chapter{Theories, Terms and Types} \label{theories}
\index{theories|(}\index{signatures|bold}
-\index{reading!axioms|see{\texttt{assume_ax}}} Theories organize the
-syntax, declarations and axioms of a mathematical development. They
-are built, starting from the {\Pure} or {\CPure} theory, by extending
-and merging existing theories. They have the \ML\ type
-\mltydx{theory}. Theory operations signal errors by raising exception
-\xdx{THEORY}, returning a message and a list of theories.
+\index{reading!axioms|see{\texttt{assume_ax}}} Theories organize the syntax,
+declarations and axioms of a mathematical development. They are built,
+starting from the Pure or CPure theory, by extending and merging existing
+theories. They have the \ML\ type \mltydx{theory}. Theory operations signal
+errors by raising exception \xdx{THEORY}, returning a message and a list of
+theories.
Signatures, which contain information about sorts, types, constants and
syntax, have the \ML\ type~\mltydx{Sign.sg}. For identification, each
@@ -715,9 +715,9 @@
\item[$t$ \$ $u$] \index{$@{\tt\$}|bold} \index{function applications|bold}
is the \textbf{application} of~$t$ to~$u$.
\end{ttdescription}
-Application is written as an infix operator to aid readability.
-Here is an \ML\ pattern to recognize \FOL{} formulae of
-the form~$A\imp B$, binding the subformulae to~$A$ and~$B$:
+Application is written as an infix operator to aid readability. Here is an
+\ML\ pattern to recognize FOL formulae of the form~$A\imp B$, binding the
+subformulae to~$A$ and~$B$:
\begin{ttbox}
Const("Trueprop",_) $ (Const("op -->",_) $ A $ B)
\end{ttbox}
--- a/doc-src/Ref/theory-syntax.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Ref/theory-syntax.tex Mon Aug 28 13:52:38 2000 +0200
@@ -5,10 +5,10 @@
\chapter{Syntax of Isabelle Theories}\label{app:TheorySyntax}
-Below we present the full syntax of theory definition files as
-provided by {\Pure} Isabelle --- object-logics may add their own
-sections. \S\ref{sec:ref-defining-theories} explains the meanings of
-these constructs. The syntax obeys the following conventions:
+Below we present the full syntax of theory definition files as provided by
+Pure Isabelle --- object-logics may add their own sections.
+\S\ref{sec:ref-defining-theories} explains the meanings of these constructs.
+The syntax obeys the following conventions:
\begin{itemize}
\item {\tt Typewriter font} denotes terminal symbols.
--- a/doc-src/System/Makefile Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/System/Makefile Mon Aug 28 13:52:38 2000 +0200
@@ -13,7 +13,7 @@
NAME = system
FILES = system.tex basics.tex misc.tex fonts.tex present.tex \
- ../iman.sty ../extra.sty ../manual.bib
+ ../iman.sty ../extra.sty ../ttbox.sty ../manual.bib
dvi: $(NAME).dvi
--- a/doc-src/System/fonts.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/System/fonts.tex Mon Aug 28 13:52:38 2000 +0200
@@ -4,8 +4,8 @@
\chapter{Fonts and character encodings}
Using the print mode mechanism of Isabelle, variant forms of output become
-very easy. As the canonical application of this feature, {\Pure} and major
-object-logics (\FOL, \ZF, \HOL, \HOLCF) support optional input and output of
+very easy. As the canonical application of this feature, Pure and major
+object-logics (FOL, ZF, HOL, HOLCF) support optional input and output of
proper mathematical symbols as a built-in option.
Symbolic output is enabled by activating the ``\ttindex{symbols}'' print mode.
--- a/doc-src/System/present.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/System/present.tex Mon Aug 28 13:52:38 2000 +0200
@@ -38,7 +38,7 @@
In order to let Isabelle generate theory browsing information, simply append
``\texttt{-i true}'' to the \settdx{ISABELLE_USEDIR_OPTIONS} setting before
-making a logic. For example, in order to do this for {\FOL}, add this to your
+making a logic. For example, in order to do this for FOL, add this to your
Isabelle settings file
\begin{ttbox}
ISABELLE_USEDIR_OPTIONS="-i true"
@@ -87,8 +87,8 @@
isatool usedir -i true HOL Foo
\end{ttbox}
This assumes that directory \texttt{Foo} contains some \texttt{ROOT.ML} file
-to load all your theories, and {\HOL} is your parent logic image. Ideally,
-theory browser information would have been generated for {\HOL} already.
+to load all your theories, and HOL is your parent logic image. Ideally,
+theory browser information would have been generated for HOL already.
Alternatively, one may specify an external link to an existing body of HTML
data by giving \texttt{usedir} a \texttt{-P} option like this:
@@ -499,7 +499,7 @@
\medskip Any \texttt{usedir} session is named by some \emph{session
identifier}. These accumulate, documenting the way sessions depend on
others. For example, consider \texttt{Pure/FOL/ex}, which refers to the
-examples of {\FOL}, which in turn is built upon {\Pure}.
+examples of FOL, which in turn is built upon Pure.
The current session's identifier is by default just the base name of the
\texttt{LOGIC} argument (in build mode), or of the \texttt{NAME} argument (in
--- a/doc-src/System/system.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/System/system.tex Mon Aug 28 13:52:38 2000 +0200
@@ -2,7 +2,7 @@
%% $Id$
\documentclass[12pt,a4paper]{report}
-\usepackage{graphicx,../iman,../extra,../pdfsetup}
+\usepackage{graphicx,../iman,../extra,../ttbox,../pdfsetup}
\title{\includegraphics[scale=0.5]{isabelle} \\[4ex] The Isabelle System Manual}
--- a/doc-src/Tutorial/Makefile Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Tutorial/Makefile Mon Aug 28 13:52:38 2000 +0200
@@ -13,7 +13,7 @@
NAME = tutorial
FILES = tutorial.tex basics.tex fp.tex appendix.tex \
- ../iman.sty ttbox.sty extra.sty
+ ../iman.sty ../ttbox.sty ../extra.sty
dvi: $(NAME).dvi
--- a/doc-src/Tutorial/extra.sty Mon Aug 28 13:50:24 2000 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,29 +0,0 @@
-% extra.sty : Isabelle Manual extra macros for non-Springer version
-%
-\typeout{Document Style extra. Released 17/2/94, version of 22/8/00}
-
-\usepackage{ttbox}
-{\obeylines\gdef\ttbreak
-{\allowbreak}}
-
-%%Euro-style date: 20 September 1955
-\def\today{\number\day\space\ifcase\month\or
-January\or February\or March\or April\or May\or June\or
-July\or August\or September\or October\or November\or December\fi
-\space\number\year}
-
-%%%Put first chapter on odd page, with arabic numbering; like \cleardoublepage
-\newcommand\clearfirst{\clearpage\ifodd\c@page\else
- \hbox{}\newpage\if@twocolumn\hbox{}\newpage\fi\fi
- \pagenumbering{arabic}}
-
-%%%Ruled chapter headings
-\def\@rulehead#1{\hrule height1pt \vskip 14pt \Huge \bf
- #1 \vskip 14pt\hrule height1pt}
-\def\@makechapterhead#1{ { \parindent 0pt
- \ifnum\c@secnumdepth >\m@ne \raggedleft\large\bf\@chapapp{} \thechapter \par
- \vskip 20pt \fi \raggedright \@rulehead{#1} \par \nobreak \vskip 40pt } }
-
-\def\@makeschapterhead#1{ { \parindent 0pt \raggedright
- \@rulehead{#1} \par \nobreak \vskip 40pt } }
-
--- a/doc-src/Tutorial/ttbox.sty Mon Aug 28 13:50:24 2000 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,20 +0,0 @@
-\ProvidesPackage{ttbox}[1997/06/25]
-\RequirePackage{alltt}
-
-%%%Boxed terminal sessions
-
-%Redefines \{ and \} to be in \tt font and \| to make a BACKSLASH
-\def\ttbraces{\chardef\{=`\{\chardef\}=`\}\chardef\|=`\\}
-
-%Restores % as the comment character (especially, to suppress line breaks)
-\newcommand\comments{\catcode`\%=14\relax}
-
-%alltt* environment: like alltt but smaller, and with \{ \} and \| as in ttbox
-\newenvironment{alltt*}{\begin{alltt}\footnotesize\ttbraces}{\end{alltt}}
-
-%Indented alltt* environment with small point size
-%NO LINE BREAKS are allowed unless \pagebreak appears at START of a line
-\newenvironment{ttbox}{\begin{quote}\samepage\begin{alltt*}}%
- {\end{alltt*}\end{quote}}
-
-\endinput
--- a/doc-src/Tutorial/tutorial.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/Tutorial/tutorial.tex Mon Aug 28 13:52:38 2000 +0200
@@ -1,5 +1,5 @@
\documentclass[11pt,a4paper]{report}
-\usepackage{latexsym,verbatim,graphicx,../iman,extra,../pdfsetup}
+\usepackage{latexsym,verbatim,graphicx,../iman,../extra,../ttbox,../pdfsetup}
\newcommand\Out[1]{\texttt{\textsl{#1}}} %% for output from terminal sessions
--- a/doc-src/TutorialI/Makefile Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/TutorialI/Makefile Mon Aug 28 13:52:38 2000 +0200
@@ -13,7 +13,7 @@
NAME = tutorial
FILES = tutorial.tex basics.tex fp.tex appendix.tex \
- ../iman.sty ttbox.sty extra.sty \
+ ../iman.sty ../ttbox.sty ../extra.sty \
isabelle.sty isabellesym.sty ../pdfsetup.sty
dvi: $(NAME).dvi
--- a/doc-src/TutorialI/extra.sty Mon Aug 28 13:50:24 2000 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,29 +0,0 @@
-% extra.sty : Isabelle Manual extra macros for non-Springer version
-%
-\typeout{Document Style extra. Released 17/2/94, version of 22/8/00}
-
-\usepackage{ttbox}
-{\obeylines\gdef\ttbreak
-{\allowbreak}}
-
-%%Euro-style date: 20 September 1955
-\def\today{\number\day\space\ifcase\month\or
-January\or February\or March\or April\or May\or June\or
-July\or August\or September\or October\or November\or December\fi
-\space\number\year}
-
-%%%Put first chapter on odd page, with arabic numbering; like \cleardoublepage
-\newcommand\clearfirst{\clearpage\ifodd\c@page\else
- \hbox{}\newpage\if@twocolumn\hbox{}\newpage\fi\fi
- \pagenumbering{arabic}}
-
-%%%Ruled chapter headings
-\def\@rulehead#1{\hrule height1pt \vskip 14pt \Huge \bf
- #1 \vskip 14pt\hrule height1pt}
-\def\@makechapterhead#1{ { \parindent 0pt
- \ifnum\c@secnumdepth >\m@ne \raggedleft\large\bf\@chapapp{} \thechapter \par
- \vskip 20pt \fi \raggedright \@rulehead{#1} \par \nobreak \vskip 40pt } }
-
-\def\@makeschapterhead#1{ { \parindent 0pt \raggedright
- \@rulehead{#1} \par \nobreak \vskip 40pt } }
-
--- a/doc-src/TutorialI/ttbox.sty Mon Aug 28 13:50:24 2000 +0200
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,20 +0,0 @@
-\ProvidesPackage{ttbox}[1997/06/25]
-\RequirePackage{alltt}
-
-%%%Boxed terminal sessions
-
-%Redefines \{ and \} to be in \tt font and \| to make a BACKSLASH
-\def\ttbraces{\chardef\{=`\{\chardef\}=`\}\chardef\|=`\\}
-
-%Restores % as the comment character (especially, to suppress line breaks)
-\newcommand\comments{\catcode`\%=14\relax}
-
-%alltt* environment: like alltt but smaller, and with \{ \} and \| as in ttbox
-\newenvironment{alltt*}{\begin{alltt}\footnotesize\ttbraces}{\end{alltt}}
-
-%Indented alltt* environment with small point size
-%NO LINE BREAKS are allowed unless \pagebreak appears at START of a line
-\newenvironment{ttbox}{\begin{quote}\samepage\begin{alltt*}}%
- {\end{alltt*}\end{quote}}
-
-\endinput
--- a/doc-src/TutorialI/tutorial.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/TutorialI/tutorial.tex Mon Aug 28 13:52:38 2000 +0200
@@ -1,7 +1,7 @@
% pr(latex xsymbols symbols)
\documentclass[11pt,a4paper]{report}
\usepackage{isabelle,isabellesym}
-\usepackage{latexsym,verbatim,graphicx,../iman,extra,comment}
+\usepackage{latexsym,verbatim,graphicx,../iman,../extra,../ttbox,comment}
\usepackage{../pdfsetup} %last package!
\newcommand\Out[1]{\texttt{\textsl{#1}}} %% for output from terminal sessions
--- a/doc-src/ZF/FOL.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/ZF/FOL.tex Mon Aug 28 13:52:38 2000 +0200
@@ -192,7 +192,7 @@
\section{Generic packages}
-\FOL{} instantiates most of Isabelle's generic packages.
+FOL instantiates most of Isabelle's generic packages.
\begin{itemize}
\item
It instantiates the simplifier. Both equality ($=$) and the biconditional
@@ -210,9 +210,9 @@
It instantiates the classical reasoner. See~\S\ref{fol-cla-prover}
for details.
-\item \FOL{} provides the tactic \ttindex{hyp_subst_tac}, which substitutes
- for an equality throughout a subgoal and its hypotheses. This tactic uses
- \FOL's general substitution rule.
+\item FOL provides the tactic \ttindex{hyp_subst_tac}, which substitutes for
+ an equality throughout a subgoal and its hypotheses. This tactic uses FOL's
+  general substitution rule; a usage sketch follows this list.
\end{itemize}
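+As a minimal sketch (assuming subgoal~1 has an assumption of the form $x=t$,
+where the variable~$x$ does not occur in~$t$), one writes
+\begin{ttbox}
+by (hyp_subst_tac 1);
+\end{ttbox}
+to replace $x$ by~$t$ throughout the subgoal and delete the equation.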
\begin{warn}\index{simplification!of conjunctions}%
@@ -331,10 +331,10 @@
the rule
$$ \vcenter{\infer{P}{\infer*{P}{[\neg P]}}} \eqno(classical) $$
\noindent
-Natural deduction in classical logic is not really all that natural.
-{\FOL} derives classical introduction rules for $\disj$ and~$\exists$, as
-well as classical elimination rules for~$\imp$ and~$\bimp$, and the swap
-rule (see Fig.\ts\ref{fol-cla-derived}).
+Natural deduction in classical logic is not really all that natural. FOL
+derives classical introduction rules for $\disj$ and~$\exists$, as well as
+classical elimination rules for~$\imp$ and~$\bimp$, and the swap rule (see
+Fig.\ts\ref{fol-cla-derived}).
The classical reasoner is installed. Tactics such as \texttt{Blast_tac} and {\tt
Best_tac} refer to the default claset (\texttt{claset()}), which works for most
@@ -897,8 +897,8 @@
while~$R$ and~$A$ are true. This truth assignment reduces the main goal to
$true\bimp false$, which is of course invalid.
-We can repeat this analysis by expanding definitions, using just
-the rules of {\FOL}:
+We can repeat this analysis by expanding definitions, using just the rules of
+FOL:
\begin{ttbox}
choplev 0;
{\out Level 0}
--- a/doc-src/ZF/Makefile Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/ZF/Makefile Mon Aug 28 13:52:38 2000 +0200
@@ -13,7 +13,7 @@
NAME = logics-ZF
FILES = logics-ZF.tex ../Logics/syntax.tex FOL.tex ZF.tex \
- ../rail.sty ../proof.sty ../iman.sty ../extra.sty ../manual.bib
+ ../rail.sty ../proof.sty ../iman.sty ../extra.sty ../ttbox.sty ../manual.bib
dvi: $(NAME).dvi
--- a/doc-src/ZF/ZF.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/ZF/ZF.tex Mon Aug 28 13:52:38 2000 +0200
@@ -23,10 +23,9 @@
provides a streamlined syntax for defining primitive recursive functions over
datatypes.
-Because {\ZF} is an extension of {\FOL}, it provides the same
-packages, namely \texttt{hyp_subst_tac}, the simplifier, and the
-classical reasoner. The default simpset and claset are usually
-satisfactory.
+Because ZF is an extension of FOL, it provides the same packages, namely
+\texttt{hyp_subst_tac}, the simplifier, and the classical reasoner. The
+default simpset and claset are usually satisfactory.
Published articles~\cite{paulson-set-I,paulson-set-II} describe \texttt{ZF}
less formally than this chapter. Isabelle employs a novel treatment of
@@ -94,7 +93,7 @@
\begin{center}
\index{*"`"` symbol}
\index{*"-"`"` symbol}
-\index{*"` symbol}\index{function applications!in \ZF}
+\index{*"` symbol}\index{function applications!in ZF}
\index{*"- symbol}
\index{*": symbol}
\index{*"<"= symbol}
@@ -111,7 +110,7 @@
\end{tabular}
\end{center}
\subcaption{Infixes}
-\caption{Constants of {\ZF}} \label{zf-constants}
+\caption{Constants of ZF} \label{zf-constants}
\end{figure}
@@ -125,10 +124,10 @@
bounded quantifiers. In most other respects, Isabelle implements precisely
Zermelo-Fraenkel set theory.
-Figure~\ref{zf-constants} lists the constants and infixes of~\ZF, while
+Figure~\ref{zf-constants} lists the constants and infixes of~ZF, while
Figure~\ref{zf-trans} presents the syntax translations. Finally,
-Figure~\ref{zf-syntax} presents the full grammar for set theory, including
-the constructs of \FOL.
+Figure~\ref{zf-syntax} presents the full grammar for set theory, including the
+constructs of FOL.
Local abbreviations can be introduced by a \texttt{let} construct whose
syntax appears in Fig.\ts\ref{zf-syntax}. Internally it is translated into
@@ -136,7 +135,7 @@
definition, \tdx{Let_def}.
Apart from \texttt{let}, set theory does not use polymorphism. All terms in
-{\ZF} have type~\tydx{i}, which is the type of individuals and has class~{\tt
+ZF have type~\tydx{i}, which is the type of individuals and has class~{\tt
term}. The type of first-order formulae, remember, is~\textit{o}.
Infix operators include binary union and intersection ($A\un B$ and
@@ -167,15 +166,15 @@
abbreviates the nest of pairs\par\nobreak
\centerline{\texttt{Pair($a@1$,\ldots,Pair($a@{n-1}$,$a@n$)\ldots).}}
-In {\ZF}, a function is a set of pairs. A {\ZF} function~$f$ is simply an
-individual as far as Isabelle is concerned: its Isabelle type is~$i$, not
-say $i\To i$. The infix operator~{\tt`} denotes the application of a
-function set to its argument; we must write~$f{\tt`}x$, not~$f(x)$. The
-syntax for image is~$f{\tt``}A$ and that for inverse image is~$f{\tt-``}A$.
+In ZF, a function is a set of pairs. A ZF function~$f$ is simply an
+individual as far as Isabelle is concerned: its Isabelle type is~$i$, not say
+$i\To i$. The infix operator~{\tt`} denotes the application of a function set
+to its argument; we must write~$f{\tt`}x$, not~$f(x)$. The syntax for image
+is~$f{\tt``}A$ and that for inverse image is~$f{\tt-``}A$.
\begin{figure}
-\index{lambda abs@$\lambda$-abstractions!in \ZF}
+\index{lambda abs@$\lambda$-abstractions!in ZF}
\index{*"-"> symbol}
\index{*"* symbol}
\begin{center} \footnotesize\tt\frenchspacing
@@ -215,7 +214,7 @@
\rm bounded $\exists$
\end{tabular}
\end{center}
-\caption{Translations for {\ZF}} \label{zf-trans}
+\caption{Translations for ZF} \label{zf-trans}
\end{figure}
@@ -264,7 +263,7 @@
& | & "EX!~" id~id^* " . " formula
\end{array}
\]
-\caption{Full grammar for {\ZF}} \label{zf-syntax}
+\caption{Full grammar for ZF} \label{zf-syntax}
\end{figure}
@@ -321,14 +320,13 @@
no constants~\texttt{op~*} and~\hbox{\tt op~->}.} Isabelle accepts these
abbreviations in parsing and uses them whenever possible for printing.
-\index{*THE symbol}
-As mentioned above, whenever the axioms assert the existence and uniqueness
-of a set, Isabelle's set theory declares a constant for that set. These
-constants can express the {\bf definite description} operator~$\iota
-x. P[x]$, which stands for the unique~$a$ satisfying~$P[a]$, if such exists.
-Since all terms in {\ZF} denote something, a description is always
-meaningful, but we do not know its value unless $P[x]$ defines it uniquely.
-Using the constant~\cdx{The}, we may write descriptions as {\tt
+\index{*THE symbol} As mentioned above, whenever the axioms assert the
+existence and uniqueness of a set, Isabelle's set theory declares a constant
+for that set. These constants can express the {\bf definite description}
+operator~$\iota x. P[x]$, which stands for the unique~$a$ satisfying~$P[a]$,
+if such exists. Since all terms in ZF denote something, a description is
+always meaningful, but we do not know its value unless $P[x]$ defines it
+uniquely. Using the constant~\cdx{The}, we may write descriptions as {\tt
The($\lambda x. P[x]$)} or use the syntax \hbox{\tt THE $x$.\ $P[x]$}.
\index{*lam symbol}
@@ -385,7 +383,7 @@
\tdx{Diff_def} A - B == {\ttlbrace}x:A . x~:B{\ttrbrace}
\subcaption{Union, intersection, difference}
\end{ttbox}
-\caption{Rules and axioms of {\ZF}} \label{zf-rules}
+\caption{Rules and axioms of ZF} \label{zf-rules}
\end{figure}
@@ -417,7 +415,7 @@
\tdx{restrict_def} restrict(f,A) == lam x:A. f`x
\subcaption{Functions and general product}
\end{ttbox}
-\caption{Further definitions of {\ZF}} \label{zf-defs}
+\caption{Further definitions of ZF} \label{zf-defs}
\end{figure}
@@ -515,7 +513,7 @@
\tdx{PowD} A : Pow(B) ==> A<=B
\subcaption{The empty set; power sets}
\end{ttbox}
-\caption{Basic derived rules for {\ZF}} \label{zf-lemmas1}
+\caption{Basic derived rules for ZF} \label{zf-lemmas1}
\end{figure}
@@ -555,7 +553,7 @@
Figure~\ref{zf-lemmas3} presents rules for general union and intersection.
The empty intersection should be undefined. We cannot have
$\bigcap(\emptyset)=V$ because $V$, the universal class, is not a set. All
-expressions denote something in {\ZF} set theory; the definition of
+expressions denote something in ZF set theory; the definition of
intersection implies $\bigcap(\emptyset)=\emptyset$, but this value is
arbitrary. The rule \tdx{InterI} must have a premise to exclude
the empty intersection. Some of the laws governing intersections require
@@ -1051,7 +1049,7 @@
See file \texttt{ZF/equalities.ML}.
Theory \thydx{Bool} defines $\{0,1\}$ as a set of booleans, with the usual
-operators including a conditional (Fig.\ts\ref{zf-bool}). Although {\ZF} is a
+operators including a conditional (Fig.\ts\ref{zf-bool}). Although ZF is a
first-order theory, you can obtain the effect of higher-order logic using
\texttt{bool}-valued functions, for example. The constant~\texttt{1} is
translated to \texttt{succ(0)}.
@@ -1343,13 +1341,13 @@
\section{Automatic Tools}
-{\ZF} provides the simplifier and the classical reasoner. Moreover it
-supplies a specialized tool to infer `types' of terms.
+ZF provides the simplifier and the classical reasoner. Moreover it supplies a
+specialized tool to infer `types' of terms.
\subsection{Simplification}
-{\ZF} inherits simplification from {\FOL} but adopts it for set theory. The
-extraction of rewrite rules takes the {\ZF} primitives into account. It can
+ZF inherits simplification from FOL but adapts it for set theory. The
+extraction of rewrite rules takes the ZF primitives into account. It can
strip bounded universal quantifiers from a formula; for example, ${\forall
x\in A. f(x)=g(x)}$ yields the conditional rewrite rule $x\in A \Imp
f(x)=g(x)$. Given $a\in\{x\in A. P(x)\}$ it extracts rewrite rules from $a\in
@@ -1360,10 +1358,9 @@
works for most purposes. A small simplification set for set theory is
called~\ttindexbold{ZF_ss}, and you can even use \ttindex{FOL_ss} as a minimal
starting point. \texttt{ZF_ss} contains congruence rules for all the binding
-operators of {\ZF}\@. It contains all the conversion rules, such as
-\texttt{fst} and \texttt{snd}, as well as the rewrites shown in
-Fig.\ts\ref{zf-simpdata}. See the file \texttt{ZF/simpdata.ML} for a fuller
-list.
+operators of ZF. It contains all the conversion rules, such as \texttt{fst}
+and \texttt{snd}, as well as the rewrites shown in Fig.\ts\ref{zf-simpdata}.
+See the file \texttt{ZF/simpdata.ML} for a fuller list.
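+For example (a minimal sketch, assuming the default simpset; {\tt h} is just a
+free variable here), a bounded quantifier in a premise acts as a conditional
+rewrite rule:
+\begin{ttbox}
+Goal "[| ALL x:A. h(x) = x;  a : A |] ==> h(h(a)) = a";
+by (Asm_simp_tac 1);
+\end{ttbox}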
\subsection{Classical Reasoning}
@@ -1396,7 +1393,7 @@
\subsection{Type-Checking Tactics}
\index{type-checking tactics}
-Isabelle/{\ZF} provides simple tactics to help automate those proofs that are
+Isabelle/ZF provides simple tactics to help automate those proofs that are
essentially type-checking. Such proofs are built by applying rules such as
these:
\begin{ttbox}
@@ -1614,13 +1611,13 @@
\label{sec:ZF:datatype}
\index{*datatype|(}
-The \ttindex{datatype} definition package of \ZF\ constructs inductive
-datatypes similar to those of \ML. It can also construct coinductive
-datatypes (codatatypes), which are non-well-founded structures such as
-streams. It defines the set using a fixed-point construction and proves
-induction rules, as well as theorems for recursion and case combinators. It
-supplies mechanisms for reasoning about freeness. The datatype package can
-handle both mutual and indirect recursion.
+The \ttindex{datatype} definition package of ZF constructs inductive datatypes
+similar to those of \ML. It can also construct coinductive datatypes
+(codatatypes), which are non-well-founded structures such as streams. It
+defines the set using a fixed-point construction and proves induction rules,
+as well as theorems for recursion and case combinators. It supplies
+mechanisms for reasoning about freeness. The datatype package can handle both
+mutual and indirect recursion.
\subsection{Basics}
@@ -2440,10 +2437,10 @@
proves the two equivalent. It contains several datatype and inductive
definitions, and demonstrates their use.
-The directory \texttt{ZF/ex} contains further developments in {\ZF} set
-theory. Here is an overview; see the files themselves for more details. I
-describe much of this material in other
-publications~\cite{paulson-set-I,paulson-set-II,paulson-CADE}.
+The directory \texttt{ZF/ex} contains further developments in ZF set theory.
+Here is an overview; see the files themselves for more details. I describe
+much of this material in other
+publications~\cite{paulson-set-I,paulson-set-II,paulson-CADE}.
\begin{itemize}
\item File \texttt{misc.ML} contains miscellaneous examples such as
Cantor's Theorem, the Schr\"oder-Bernstein Theorem and the `Composition
--- a/doc-src/ZF/logics-ZF.tex Mon Aug 28 13:50:24 2000 +0200
+++ b/doc-src/ZF/logics-ZF.tex Mon Aug 28 13:52:38 2000 +0200
@@ -1,6 +1,6 @@
%% $Id$
\documentclass[11pt,a4paper]{report}
-\usepackage{graphicx,../iman,../extra,../proof,../rail,latexsym,../pdfsetup}
+\usepackage{graphicx,../iman,../extra,../ttbox,../proof,../rail,latexsym,../pdfsetup}
%%% to index derived rls: ^\([a-zA-Z0-9][a-zA-Z0-9_]*\) \\tdx{\1}
%%% to index rulenames: ^ *(\([a-zA-Z0-9][a-zA-Z0-9_]*\), \\tdx{\1}
@@ -15,8 +15,8 @@
\texttt{lcp@cl.cam.ac.uk}\\[3ex]
With Contributions by Tobias Nipkow and Markus Wenzel%
\thanks{Markus Wenzel made numerous improvements.
- Philippe de Groote contributed to~\ZF{}. Philippe No\"el and
- Martin Coen made many contributions to~\ZF{}. The research has
+ Philippe de Groote contributed to~ZF. Philippe No\"el and
+ Martin Coen made many contributions to~ZF. The research has
been funded by the EPSRC (grants GR/G53279, GR/H40570, GR/K57381,
GR/K77051, GR/M75440) and by ESPRIT (projects 3245:
Logical Frameworks, and 6453: Types) and by the DFG Schwerpunktprogramm