

%% $Id$

\chapter{Theories, Terms and Types} \label{theories}
\index{theories|(}\index{signatures|bold}
\index{reading!axioms|see{{\tt assume_ax}}} Theories organize the
syntax, declarations and axioms of a mathematical development.  They
are built, starting from the {\Pure} or {\CPure} theory, by extending
and merging existing theories.  They have the \ML\ type
\mltydx{theory}.  Theory operations signal errors by raising exception
\xdx{THEORY}, returning a message and a list of theories.

Signatures, which contain information about sorts, types, constants and
syntax, have the \ML\ type~\mltydx{Sign.sg}.  For identification, each
signature carries a unique list of \bfindex{stamps}, which are \ML\
references to strings.  The strings serve as human-readable names; the
references serve as unique identifiers.  Each primitive signature has a
single stamp.  When two signatures are merged, their lists of stamps are
also merged.  Every theory carries a unique signature.

Terms and types are the underlying representation of logical syntax.  Their
\ML\ definitions are irrelevant to naive Isabelle users.  Programmers who
wish to extend Isabelle may need to know such details, say to code a tactic
that looks for subgoals of a particular form.  Terms and types may be
`certified' to be well-formed with respect to a given signature.


\section{Defining theories}\label{sec:ref-defining-theories}

Theories are usually defined using theory definition files (which have a name
suffix {\tt .thy}). There is also a low level interface provided by certain
\ML{} functions (see \S\ref{BuildingATheory}).
Appendix~\ref{app:TheorySyntax} presents the concrete syntax for theory
definitions; here is an explanation of the constituent parts:
\begin{description}
\item[{\it theoryDef}] is the full definition.  The new theory is
  called $id$.  It is the union of the named {\bf parent
    theories}\indexbold{theories!parent}, possibly extended with new
  components.  \thydx{Pure} and \thydx{CPure} are the basic theories,
  which contain only the meta-logic. They differ just in their
  concrete syntax for function applications.

  Normally each {\it name\/} is an identifier, the name of the parent theory.
  Quoted strings can be used to document additional file dependencies; see
  \S\ref{LoadingTheories} for details.

\item[$classes$]
  is a series of class declarations.  Declaring {\tt$id$ < $id@1$ \dots\
    $id@n$} makes $id$ a subclass of the existing classes $id@1\dots
  id@n$.  Since the superclasses must already exist, cyclic class
  structures are impossible.  Isabelle automatically computes the
  transitive closure of subclass hierarchies; it is not necessary to
  declare {\tt c < e} in addition to {\tt c < d} and {\tt d < e}.

\item[$default$]
  introduces $sort$ as the new default sort for type variables.  This applies
  to unconstrained type variables in an input string but not to type
  variables created internally.  If omitted, the default sort is the listwise
  union of the default sorts of the parent theories (i.e.\ their logical
  intersection).
  
\item[$sort$] is a finite set of classes.  A single class $id$
  abbreviates the sort $\ttlbrace id\ttrbrace$.

\item[$types$]
  is a series of type declarations.  Each declares a new type constructor
  or type synonym.  An $n$-place type constructor is specified by
  $(\alpha@1,\dots,\alpha@n)name$, where the type variables serve only to
  indicate the number~$n$.

  A {\bf type synonym}\indexbold{type synonyms} is an abbreviation
  $(\alpha@1,\dots,\alpha@n)name = \tau$, where $name$ and $\tau$ can
  be strings.

\item[$infix$]
  declares a type or constant to be an infix operator of priority $nat$
  associating to the left ({\tt infixl}) or right ({\tt infixr}). Only
  2-place type constructors can have infix status; an example is {\tt
  ('a,'b)~"*"~(infixr~20)}, which may express binary product types.

\item[$arities$] is a series of type arity declarations.  Each assigns
  arities to type constructors.  The $name$ must be an existing type
  constructor, which is given the additional arity $arity$.
  
\item[$consts$] is a series of constant declarations.  Each new
  constant $name$ is given the specified type.  The optional $mixfix$
  annotations may attach concrete syntax to the constant.
  
\item[$syntax$] \index{*syntax section}\index{print mode} is a variant
  of $consts$ which adds just syntax without actually declaring
  logical constants.  This gives full control over a theory's
  context-free grammar.  The optional $mode$ specifies the print mode
  to which the mixfix productions are added.  Unless the
  \texttt{output} option is given, all productions are also added to
  the input syntax (regardless of the print mode).

\item[$mixfix$] \index{mixfix declarations}
  annotations can take three forms:
  \begin{itemize}
  \item A mixfix template given as a $string$ of the form
    {\tt"}\dots{\tt\_}\dots{\tt\_}\dots{\tt"} where the $i$-th underscore
    indicates the position where the $i$-th argument should go.  The list
    of numbers gives the priority of each argument.  The final number gives
    the priority of the whole construct.

  \item A constant $f$ of type $\tau@1\To(\tau@2\To\tau)$ can be given {\bf
    infix} status.

  \item A constant $f$ of type $(\tau@1\To\tau@2)\To\tau$ can be given {\bf
    binder} status.  The declaration {\tt binder} $\cal Q$ $p$ causes
  ${\cal Q}\,x.F(x)$ to be treated
  like $f(F)$, where $p$ is the priority.
  \end{itemize}

\item[$trans$]
  specifies syntactic translation rules (macros).  There are three forms:
  parse rules ({\tt =>}), print rules ({\tt <=}), and parse/print rules ({\tt
  ==}).

\item[$rules$]
  is a series of rule declarations.  Each has a name $id$ and the formula is
  given by the $string$.  Rule names must be distinct within any single
  theory.

\item[$defs$] is a series of definitions.  They are just like $rules$, except
  that every $string$ must be a definition (see below for details).

\item[$constdefs$] combines the declaration of constants and their
  definition. The first $string$ is the type, the second the definition.
  
\item[$axclass$] \index{*axclass section} defines an
  \rmindex{axiomatic type class} as the intersection of existing
  classes, together with additional axioms.  Class axioms may not
  contain more than one type variable.  The class axioms (with implicit
  sort constraints added) are bound to the given names.  Furthermore a
  class introduction rule is generated, which is automatically
  employed by $instance$ to prove instantiations of this class.
  
\item[$instance$] \index{*instance section} proves class inclusions or
  type arities at the logical level and then transfers these results to
  the type signature.  The instantiation is established by an actual
  proof.  The user has to supply sufficient witness information:
  theorems ($longident$), axioms ($string$), or even arbitrary \ML{}
  tactic code $verbatim$.

\item[$oracle$] links the theory to a trusted external reasoner.  It is
  allowed to create theorems, but each theorem carries a proof object
  describing the oracle invocation.  See \S\ref{sec:oracles} for details.

\item[$ml$] \index{*ML section}
  consists of \ML\ code, typically for parse and print translation functions.
\end{description}
%
Chapters~\ref{Defining-Logics} and \ref{chap:syntax} explain mixfix
declarations, translation rules and the {\tt ML} section in more detail.
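
To illustrate how these parts fit together, here is a small, purely
illustrative theory definition (the names {\tt MyNat}, {\tt zero},
{\tt Suc} and the rules are hypothetical); it extends \FOL{} with a
type of natural numbers and an infix addition:
\begin{ttbox}
MyNat = FOL +
types   nat
arities nat :: term
consts  zero  :: "nat"
        Suc   :: "nat => nat"
        "+"   :: "[nat, nat] => nat"    (infixl 60)
rules   add_0    "zero + n = n"
        add_Suc  "Suc(m) + n = Suc(m + n)"
end
\end{ttbox}
The {\tt consts} section gives the constants their types and attaches
infix syntax to {\tt +}, while the {\tt rules} section supplies the
axioms.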


\subsection{Definitions}\indexbold{definitions}

{\bf Definitions} are intended to express abbreviations.  The simplest
form of a definition is $f \equiv t$, where $f$ is a constant.
Isabelle also allows a derived form in which the arguments of~$f$ appear
on the left, abbreviating a string of $\lambda$-abstractions.
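
For instance, assuming constants {\tt double} and {\tt +} of suitable
types have been declared, the derived form
\begin{ttbox}
defs
  double_def  "double(n) == n + n"
\end{ttbox}
abbreviates the plain definition of {\tt double} as the
$\lambda$-abstraction $\lambda n.\, n + n$.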

Isabelle makes the following checks on definitions:
\begin{itemize}
\item Arguments (on the left-hand side) must be distinct variables.
\item All variables on the right-hand side must also appear on the left-hand
  side. 
\item All type variables on the right-hand side must also appear on
  the left-hand side; this prohibits definitions such as {\tt
    (zero::nat) == length ([]::'a list)}.
\item The definition must not be recursive.  Most object-logics provide
  definitional principles that can be used to express recursion safely.
\end{itemize}
These checks are intended to catch the sort of errors that might be made
accidentally.  Misspellings, for instance, might result in additional
variables appearing on the right-hand side.  More elaborate checks could be
made, but the cost might be overly strict rules on declaration order, etc.


\subsection{*Classes and arities}
\index{classes!context conditions}\index{arities!context conditions}

In order to guarantee principal types~\cite{nipkow-prehofer},
arity declarations must obey two conditions:
\begin{itemize}
\item There must not be any two declarations $ty :: (\vec{r})c$ and
  $ty :: (\vec{s})c$ with $\vec{r} \neq \vec{s}$.  For example, this
  excludes the following:
\begin{ttbox}
arities
  foo :: ({\ttlbrace}logic{\ttrbrace})logic
  foo :: ({\ttlbrace}{\ttrbrace})logic
\end{ttbox}

\item If there are two declarations $ty :: (s@1,\dots,s@n)c$ and $ty ::
  (s@1',\dots,s@n')c'$ such that $c' < c$ then $s@i' \preceq s@i$ must hold
  for $i=1,\dots,n$.  The relationship $\preceq$, defined as
\[ s' \preceq s \iff \forall c\in s. \exists c'\in s'.~ c'\le c, \]
expresses that the set of types represented by $s'$ is a subset of the
set of types represented by $s$.  Assuming $term \preceq logic$, the
following is forbidden:
\begin{ttbox}
arities
  foo :: ({\ttlbrace}logic{\ttrbrace})logic
  foo :: ({\ttlbrace}{\ttrbrace})term
\end{ttbox}

\end{itemize}


\section{Loading a new theory}\label{LoadingTheories}
\index{theories!loading}\index{files!reading}
\begin{ttbox}
use_thy         : string -> unit
time_use_thy    : string -> unit
loadpath        : string list ref \hfill{\bf initially {\tt["."]}}
delete_tmpfiles : bool ref \hfill{\bf initially true}
\end{ttbox}

\begin{ttdescription}
\item[\ttindexbold{use_thy} $thyname$]
  reads the theory $thyname$ and creates an \ML{} structure as described below.

\item[\ttindexbold{time_use_thy} $thyname$]
  calls {\tt use_thy} $thyname$ and reports the time taken.

\item[\ttindexbold{loadpath}]
  contains a list of directories to search when locating the files that
  define a theory.  This list is only used if the theory name in {\tt
    use_thy} does not specify the path explicitly.

\item[\ttindexbold{delete_tmpfiles} := false;]
suppresses the deletion of temporary files.
\end{ttdescription}
%
Each theory definition must reside in a separate file.  Let the file
{\it T}{\tt.thy} contain the definition of a theory called~$T$, whose
parent theories are $TB@1$ \dots $TB@n$.  Calling
\ttindex{use_thy}~{\tt"{\it T\/}"} reads the file {\it T}{\tt.thy},
writes a temporary \ML{} file {\tt.{\it T}.thy.ML}, and reads the
latter file.  Recursive {\tt use_thy} calls load those parent theories
that have not been loaded previously; the recursive calls may continue
to any depth.  One {\tt use_thy} call can read an entire logic
provided all theories are linked appropriately.
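
For example, the following (hypothetical) session extends the load path
and reads a theory~{\tt MyThy}, whose files may reside either in the
current directory or in {\tt ../MyLogic}:
\begin{ttbox}
loadpath := [".", "../MyLogic"];
use_thy "MyThy";
\end{ttbox}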

The result is an \ML\ structure~$T$ containing at least a component {\tt thy}
for the new theory and components for each of the rules.  The structure also
contains the definitions of the {\tt ML} section, if present.  The file
{\tt.{\it T}.thy.ML} is then deleted if {\tt delete_tmpfiles} is set to {\tt
true} and no errors occurred.

Finally the file {\it T}{\tt.ML} is read, if it exists.  Since the
structure $T$ is automatically open in this context, proof scripts may
(or even should) refer to its components by unqualified names.

Some applications construct theories directly by calling \ML\ functions.  In
this situation there is no {\tt.thy} file, only an {\tt.ML} file.  The
{\tt.ML} file must declare an \ML\ structure having the theory's name and a
component {\tt thy} containing the new theory object.
Section~\ref{sec:pseudo-theories} below describes a way of linking such
theories to their parents.

\begin{warn}
  Temporary files are written to the current directory, so this must be
  writable.  Isabelle inherits the current directory from the operating
  system; you can change it within Isabelle by typing {\tt
  cd"$dir$"}\index{*cd}.
\end{warn}


\section{Reloading modified theories}\label{sec:reloading-theories}
\indexbold{theories!reloading}
\begin{ttbox}
update     : unit -> unit
unlink_thy : string -> unit
\end{ttbox}
Changing a theory on disk often makes it necessary to reload all theories
descended from it.  However, {\tt use_thy} reads only one theory, even if
some of the parent theories are out of date.  In this case you should call
{\tt update()}.

Isabelle keeps track of all loaded theories and their files.  If
\ttindex{use_thy} finds that the theory to be loaded has been read before,
it determines whether to reload the theory as follows.  First it looks for
the theory's files in their previous location.  If it finds them, it
compares their modification times to the internal data and stops if they
are equal.  If the files have been moved, {\tt use_thy} searches for them
as it would for a new theory.  After {\tt use_thy} reloads a theory, it
marks the children as out-of-date.

\begin{ttdescription}
\item[\ttindexbold{update}()]
  reloads all modified theories and their descendants in the correct order.

\item[\ttindexbold{unlink_thy} $thyname$]\indexbold{theories!removing}
  informs Isabelle that theory $thyname$ no longer exists.  If you delete the
  theory files for $thyname$ then you must execute {\tt unlink_thy};
  otherwise {\tt update} will complain about a missing file.
\end{ttdescription}
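
For example, a session with a hypothetical theory {\tt MyThy} might
proceed as follows:
\begin{ttbox}
use_thy "MyThy";
    (*now edit MyThy.thy, or one of its ancestors, on disk*)
update();
\end{ttbox}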


\subsection{*Pseudo theories}\label{sec:pseudo-theories}
\indexbold{theories!pseudo}%
Any automatic reloading facility requires complete knowledge of all
dependencies.  Sometimes theories depend on objects created in \ML{} files
with no associated theory definition file.  These objects may be theories but
they could also be theorems, proof procedures, etc.

Unless such dependencies are documented, {\tt update} fails to reload these
\ML{} files and the system is left in a state where some objects, such as
theorems, still refer to old versions of theories.  This may lead to the
error
\begin{ttbox}
Attempt to merge different versions of theories: \dots
\end{ttbox}
Therefore there is a way to link theories and {\bf orphaned} \ML{} files ---
those not associated with a theory definition.

Let us assume we have an orphaned \ML{} file named {\tt orphan.ML} and a
theory~$B$ that depends on {\tt orphan.ML} --- for example, {\tt B.ML} uses
theorems proved in {\tt orphan.ML}.  Then {\tt B.thy} should
mention this dependency as follows:
\begin{ttbox}
B = \(\ldots\) + "orphan" + \(\ldots\)
\end{ttbox}
Quoted strings stand for theories which have to be loaded before the
current theory is read but which are not used in building the base of
theory~$B$. Whenever {\tt orphan} changes and is reloaded, Isabelle
knows that $B$ has to be updated, too.

Note that {\tt orphan} must declare an \ML{} object of type {\tt
theory}, as every theory does.  This is normally achieved by adding
the file {\tt orphan.thy}, which makes {\tt orphan} a {\bf pseudo
theory}.  A minimal version of {\tt orphan.thy} would be

\begin{ttbox}
orphan = Pure
\end{ttbox}

which uses {\tt Pure} to make a dummy theory.  Normally, though, the
orphaned file has its own dependencies.  If {\tt orphan.ML} depends on
theories or files $A@1$, \ldots, $A@n$, record this by creating the
pseudo theory as follows:
\begin{ttbox}
orphan = \(A@1\) + \(\ldots\) + \(A@n\)
\end{ttbox}
The resulting theory ensures that {\tt update} reloads {\tt orphan}
whenever it reloads one of the $A@i$.

For an extensive example of how this technique can be used to link
many theory files and load them with just a few {\tt use_thy} calls,
see the sources of one of the major object-logics (e.g.\ \ZF).



\section{Basic operations on theories}\label{BasicOperationsOnTheories}
\subsection{Extracting an axiom or theorem from a theory}
\index{theories!axioms of}\index{axioms!extracting}
\index{theories!theorems of}\index{theorems!extracting}
\begin{ttbox}
get_axiom : theory -> string -> thm
get_thm   : theory -> string -> thm
assume_ax : theory -> string -> thm
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{get_axiom} $thy$ $name$]
  returns an axiom with the given $name$ from $thy$, raising exception
  \xdx{THEORY} if none exists.  Merging theories can cause several axioms
  to have the same name; {\tt get_axiom} returns an arbitrary one.

\item[\ttindexbold{get_thm} $thy$ $name$]
  is analogous to {\tt get_axiom}, but looks for a stored theorem. Like
  {\tt get_axiom} it searches all parents of a theory if the theorem
  is not associated with $thy$.

\item[\ttindexbold{assume_ax} $thy$ $formula$]
  reads the {\it formula} using the syntax of $thy$, following the same
  conventions as axioms in a theory definition.  You can thus pretend that
  {\it formula} is an axiom and use the resulting theorem like an axiom.
  Actually {\tt assume_ax} returns an assumption;  \ttindex{result}
  complains about additional assumptions, but \ttindex{uresult} does not.

For example, if {\it formula} is
\hbox{\tt a=b ==> b=a} then the resulting theorem has the form
\hbox{\verb'?a=?b ==> ?b=?a  [!!a b. a=b ==> b=a]'}
\end{ttdescription}
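
For example, assuming \FOL{} is loaded, this theorem could be created and
bound to a (hypothetical) \ML{} identifier as follows:
\begin{ttbox}
val sym = assume_ax FOL.thy "a=b ==> b=a";
\end{ttbox}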

\subsection{*Building a theory}
\label{BuildingATheory}
\index{theories!constructing|bold}
\begin{ttbox}
Pure.thy       : theory
CPure.thy      : theory
merge_theories : theory * theory -> theory
\end{ttbox}
\begin{description}
\item[\ttindexbold{Pure.thy}, \ttindexbold{CPure.thy}] contain the
  syntax and signature of the meta-logic.  There are no axioms:
  meta-level inferences are carried out by \ML\ functions.  The two
  \Pure s differ only in their concrete syntax for function
  application: $t(u@1, \ldots, u@n)$ vs.\ $t\,u@1\ldots u@n$.

\item[\ttindexbold{merge_theories} ($thy@1$, $thy@2$)] merges the two
  theories $thy@1$ and $thy@2$.  The resulting theory contains all of the
  syntax, signature and axioms of the constituent theories. Merging theories
  that contain different identification stamps of the same name fails with
  the following message
\begin{ttbox}
Attempt to merge different versions of theories: "\(T@1\)", \(\ldots\), "\(T@n\)"
\end{ttbox}
  This error typically occurs when a theory is redeclared --- say to
  change an incorrect axiom --- and bindings to old versions persist.
  Isabelle ensures that old and new theories of the same name cannot
  take part in the same proof.

%% FIXME
%\item [\ttindexbold{extend_theory} $thy$ {\tt"}$T${\tt"} $\cdots$] extends
%  the theory $thy$ with new types, constants, etc.  $T$ identifies the theory
%  internally.  When a theory is redeclared, say to change an incorrect axiom,
%  bindings to the old axiom may persist.  Isabelle ensures that the old and
%  new theories are not involved in the same proof.  Attempting to combine
%  different theories having the same name $T$ yields the fatal error
%extend_theory  : theory -> string -> \(\cdots\) -> theory
%\begin{ttbox}
%Attempt to merge different versions of theory: \(T\)
%\end{ttbox}
\end{description}
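
For example, two (hypothetical) theories {\tt ThyA} and {\tt ThyB},
already loaded as \ML{} structures, could be combined as follows:
\begin{ttbox}
val thyAB = merge_theories (ThyA.thy, ThyB.thy);
\end{ttbox}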

%% FIXME
%\item [\ttindexbold{extend_theory} $thy$ {\tt"}$T${\tt"}
%      ($classes$, $default$, $types$, $arities$, $consts$, $sextopt$) $rules$]
%\hfill\break   %%% include if line is just too short
%is the \ML{} equivalent of the following theory definition:
%\begin{ttbox}
%\(T\) = \(thy\) +
%classes \(c\) < \(c@1\),\(\dots\),\(c@m\)
%        \dots
%default {\(d@1,\dots,d@r\)}
%types   \(tycon@1\),\dots,\(tycon@i\) \(n\)
%        \dots
%arities \(tycon@1'\),\dots,\(tycon@j'\) :: (\(s@1\),\dots,\(s@n\))\(c\)
%        \dots
%consts  \(b@1\),\dots,\(b@k\) :: \(\tau\)
%        \dots
%rules   \(name\) \(rule\)
%        \dots
%end
%\end{ttbox}
%where
%\begin{tabular}[t]{l@{~=~}l}
%$classes$ & \tt[("$c$",["$c@1$",\dots,"$c@m$"]),\dots] \\
%$default$ & \tt["$d@1$",\dots,"$d@r$"]\\
%$types$   & \tt[([$tycon@1$,\dots,$tycon@i$], $n$),\dots] \\
%$arities$ & \tt[([$tycon'@1$,\dots,$tycon'@j$], ([$s@1$,\dots,$s@n$],$c$)),\dots]
%\\
%$consts$  & \tt[([$b@1$,\dots,$b@k$],$\tau$),\dots] \\
%$rules$   & \tt[("$name$",$rule$),\dots]
%\end{tabular}


\subsection{Inspecting a theory}\label{sec:inspct-thy}
\index{theories!inspecting|bold}
\begin{ttbox}
print_syntax  : theory -> unit
print_theory  : theory -> unit
axioms_of     : theory -> (string * thm) list
thms_of       : theory -> (string * thm) list
parents_of    : theory -> theory list
sign_of       : theory -> Sign.sg
stamps_of_thy : theory -> string ref list
\end{ttbox}
These provide means of viewing a theory's components.
\begin{ttdescription}
\item[\ttindexbold{print_syntax} $thy$] prints the syntax of $thy$
  (grammar, macros, translation functions etc., see
  page~\pageref{pg:print_syn} for more details).
  
\item[\ttindexbold{print_theory} $thy$] prints the logical parts of
  $thy$, excluding the syntax.

\item[\ttindexbold{axioms_of} $thy$]
returns the additional axioms of the most recent extend node of~$thy$.

\item[\ttindexbold{thms_of} $thy$]
returns all theorems that are associated with $thy$.

\item[\ttindexbold{parents_of} $thy$]
returns the direct ancestors of~$thy$.

\item[\ttindexbold{sign_of} $thy$]
returns the signature associated with~$thy$.  It is useful with functions
like {\tt read_instantiate_sg}, which take a signature as an argument.

\item[\ttindexbold{stamps_of_thy} $thy$]\index{signatures}
returns the identification \rmindex{stamps} of the signature associated
with~$thy$.
\end{ttdescription}
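
For example, assuming \FOL{} is loaded, the following calls print that
theory and return its direct ancestors:
\begin{ttbox}
print_theory FOL.thy;
parents_of FOL.thy;
\end{ttbox}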


\section{Terms}
\index{terms|bold}
Terms belong to the \ML\ type \mltydx{term}, which is a concrete datatype
with six constructors:
\begin{ttbox}
type indexname = string * int;
infix 9 $;
datatype term = Const of string * typ
              | Free  of string * typ
              | Var   of indexname * typ
              | Bound of int
              | Abs   of string * typ * term
              | op $  of term * term;
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{Const}($a$, $T$)] \index{constants|bold}
  is the {\bf constant} with name~$a$ and type~$T$.  Constants include
  connectives like $\land$ and $\forall$ as well as constants like~0
  and~$Suc$.  Other constants may be required to define a logic's concrete
  syntax.

\item[\ttindexbold{Free}($a$, $T$)] \index{variables!free|bold}
  is the {\bf free variable} with name~$a$ and type~$T$.

\item[\ttindexbold{Var}($v$, $T$)] \index{unknowns|bold}
  is the {\bf scheme variable} with indexname~$v$ and type~$T$.  An
  \mltydx{indexname} is a string paired with a non-negative index, or
  subscript; a term's scheme variables can be systematically renamed by
  incrementing their subscripts.  Scheme variables are essentially free
  variables, but may be instantiated during unification.

\item[\ttindexbold{Bound} $i$] \index{variables!bound|bold}
  is the {\bf bound variable} with de Bruijn index~$i$, which counts the
  number of lambdas, starting from zero, between a variable's occurrence
  and its binding.  The representation prevents capture of variables.  For
  more information see de Bruijn \cite{debruijn72} or
  Paulson~\cite[page~336]{paulson91}.

\item[\ttindexbold{Abs}($a$, $T$, $u$)]
  \index{lambda abs@$\lambda$-abstractions|bold}
  is the $\lambda$-{\bf abstraction} with body~$u$, and whose bound
  variable has name~$a$ and type~$T$.  The name is used only for parsing
  and printing; it has no logical significance.

\item[$t$ \$ $u$] \index{$@{\tt\$}|bold} \index{function applications|bold}
is the {\bf application} of~$t$ to~$u$.
\end{ttdescription}
Application is written as an infix operator to aid readability.
Here is an \ML\ pattern to recognize \FOL{} formulae of
the form~$A\imp B$, binding the subformulae to~$A$ and~$B$:
\begin{ttbox}
Const("Trueprop",_) $ (Const("op -->",_) $ A $ B)
\end{ttbox}
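
Terms may also be constructed directly.  Here is a small sketch that
builds the identity function $\lambda x.x$ over an illustrative type
variable of sort $\ttlbrace logic\ttrbrace$ (the {\tt typ} constructors
{\tt TFree} etc.\ are described below):
\begin{ttbox}
val aT = TFree("'a", ["logic"]);
val I  = Abs("x", aT, Bound 0);      (*the identity abstraction*)
val Ix = I $ Free("x", aT);          (*its application to a free variable*)
\end{ttbox}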


\section{Variable binding}
\begin{ttbox}
loose_bnos     : term -> int list
incr_boundvars : int -> term -> term
abstract_over  : term*term -> term
variant_abs    : string * typ * term -> string * term
aconv          : term*term -> bool\hfill{\bf infix}
\end{ttbox}
These functions are all concerned with the de Bruijn representation of
bound variables.
\begin{ttdescription}
\item[\ttindexbold{loose_bnos} $t$]
  returns the list of all dangling bound variable references.  In
  particular, {\tt Bound~0} is loose unless it is enclosed in an
  abstraction.  Similarly {\tt Bound~1} is loose unless it is enclosed in
  at least two abstractions; if enclosed in just one, the list will contain
  the number 0.  A well-formed term does not contain any loose variables.

\item[\ttindexbold{incr_boundvars} $j$]
  increases a term's dangling bound variables by the offset~$j$.  This is
  required when moving a subterm into a context where it is enclosed by a
  different number of abstractions.  Bound variables with a matching
  abstraction are unaffected.

\item[\ttindexbold{abstract_over} $(v,t)$]
  forms the abstraction of~$t$ over~$v$, which may be any well-formed term.
  It replaces every occurrence of \(v\) by a {\tt Bound} variable with the
  correct index.

\item[\ttindexbold{variant_abs} $(a,T,u)$]
  substitutes into $u$, which should be the body of an abstraction.
  It replaces each occurrence of the outermost bound variable by a free
  variable.  The free variable has type~$T$ and its name is a variant
  of~$a$ chosen to be distinct from all constants and from all variables
  free in~$u$.

\item[$t$ \ttindexbold{aconv} $u$]
  tests whether terms~$t$ and~$u$ are \(\alpha\)-convertible: identical up
  to renaming of bound variables.
\begin{itemize}
  \item
    Two constants, {\tt Free}s, or {\tt Var}s are \(\alpha\)-convertible
    if their names and types are equal.
    (Variables having the same name but different types are thus distinct.
    This confusing situation should be avoided!)
  \item
    Two bound variables are \(\alpha\)-convertible
    if they have the same number.
  \item
    Two abstractions are \(\alpha\)-convertible
    if their bodies are, and their bound variables have the same type.
  \item
    Two applications are \(\alpha\)-convertible
    if the corresponding subterms are.
\end{itemize}

\end{ttdescription}
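
Here is a brief illustration of {\tt abstract_over} and {\tt aconv};
the names and the sort are purely illustrative:
\begin{ttbox}
val aT = TFree("'a", ["logic"]);
val x  = Free("x", aT);
val fx = Free("f", Type("fun", [aT, aT])) $ x;
abstract_over (x, fx);       (*yields  Free("f",...) $ Bound 0*)
Abs("x", aT, Bound 0) aconv Abs("y", aT, Bound 0);    (*true*)
\end{ttbox}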

\section{Certified terms}\index{terms!certified|bold}\index{signatures}
A term $t$ can be {\bf certified} under a signature to ensure that every type
in~$t$ is well-formed and every constant in~$t$ is a type instance of a
constant declared in the signature.  The term must be well-typed and its use
of bound variables must be well-formed.  Meta-rules such as {\tt forall_elim}
take certified terms as arguments.

Certified terms belong to the abstract type \mltydx{cterm}.
Elements of the type can only be created through the certification process.
In case of error, Isabelle raises exception~\ttindex{TERM}\@.

\subsection{Printing terms}
\index{terms!printing of}
\begin{ttbox}
     string_of_cterm :           cterm -> string
Sign.string_of_term  : Sign.sg -> term -> string
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{string_of_cterm} $ct$]
displays $ct$ as a string.

\item[\ttindexbold{Sign.string_of_term} $sign$ $t$]
displays $t$ as a string, using the syntax of~$sign$.
\end{ttdescription}

\subsection{Making and inspecting certified terms}
\begin{ttbox}
cterm_of   : Sign.sg -> term -> cterm
read_cterm : Sign.sg -> string * typ -> cterm
cert_axm   : Sign.sg -> string * term -> string * term
read_axm   : Sign.sg -> string * string -> string * term
rep_cterm  : cterm -> {\ttlbrace}T:typ, t:term, sign:Sign.sg, maxidx:int\ttrbrace
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{cterm_of} $sign$ $t$] \index{signatures}
certifies $t$ with respect to signature~$sign$.

\item[\ttindexbold{read_cterm} $sign$ ($s$, $T$)]
reads the string~$s$ using the syntax of~$sign$, creating a certified term.
The term is checked to have type~$T$; this type also tells the parser what
kind of phrase to parse.

\item[\ttindexbold{cert_axm} $sign$ ($name$, $t$)]
certifies $t$ with respect to $sign$ as a meta-proposition and converts all
exceptions to an error, including the final message
\begin{ttbox}
The error(s) above occurred in axiom "\(name\)"
\end{ttbox}

\item[\ttindexbold{read_axm} $sign$ ($name$, $s$)]
similar to {\tt cert_axm}, but first reads the string $s$ using the syntax of
$sign$.

\item[\ttindexbold{rep_cterm} $ct$]
decomposes $ct$ as a record containing its type, the term itself, its
signature, and the maximum subscript of its unknowns.  The type and maximum
subscript are computed during certification.
\end{ttdescription}
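
For example, assuming \FOL{} is loaded, a meta-proposition can be read
and certified as follows ({\tt prop} is the meta-level type of
propositions):
\begin{ttbox}
val sg = sign_of FOL.thy;
read_cterm sg ("a=b ==> b=a", Type("prop",[]));
\end{ttbox}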


\section{Types}\index{types|bold}
Types belong to the \ML\ type \mltydx{typ}, which is a concrete datatype with
three constructor functions.  These correspond to type constructors, free
type variables and schematic type variables.  Types are classified by sorts,
which are lists of classes (representing an intersection).  A class is
represented by a string.
\begin{ttbox}
type class = string;
type sort  = class list;

datatype typ = Type  of string * typ list
             | TFree of string * sort
             | TVar  of indexname * sort;

infixr 5 -->;
fun S --> T = Type ("fun", [S, T]);
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{Type}($a$, $Ts$)] \index{type constructors|bold}
  applies the {\bf type constructor} named~$a$ to the type operands~$Ts$.
  Type constructors include~\tydx{fun}, the binary function space
  constructor, as well as nullary type constructors such as~\tydx{prop}.
  Other type constructors may be introduced.  In expressions, but not in
  patterns, \hbox{\tt$S$-->$T$} is a convenient shorthand for function
  types.

\item[\ttindexbold{TFree}($a$, $s$)] \index{type variables|bold}
  is the {\bf type variable} with name~$a$ and sort~$s$.

\item[\ttindexbold{TVar}($v$, $s$)] \index{type unknowns|bold}
  is the {\bf type unknown} with indexname~$v$ and sort~$s$.
  Type unknowns are essentially free type variables, but may be
  instantiated during unification.
\end{ttdescription}
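
Here are some types written using these constructors; {\tt nat} is a
hypothetical nullary type constructor and {\tt term} an illustrative
class:
\begin{ttbox}
val natT = Type("nat", []);
natT --> natT;                (*the same as Type("fun", [natT, natT])*)
TVar(("'a", 0), ["term"]);    (*a type unknown of sort {\ttlbrace}term{\ttrbrace}*)
\end{ttbox}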


\section{Certified types}
\index{types!certified|bold}
Certified types, which are analogous to certified terms, have type
\ttindexbold{ctyp}.

\subsection{Printing types}
\index{types!printing of}
\begin{ttbox}
     string_of_ctyp :           ctyp -> string
Sign.string_of_typ  : Sign.sg -> typ -> string
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{string_of_ctyp} $cT$]
displays $cT$ as a string.

\item[\ttindexbold{Sign.string_of_typ} $sign$ $T$]
displays $T$ as a string, using the syntax of~$sign$.
\end{ttdescription}


\subsection{Making and inspecting certified types}
\begin{ttbox}
ctyp_of  : Sign.sg -> typ -> ctyp
rep_ctyp : ctyp -> {\ttlbrace}T: typ, sign: Sign.sg\ttrbrace
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{ctyp_of} $sign$ $T$] \index{signatures}
certifies $T$ with respect to signature~$sign$.

\item[\ttindexbold{rep_ctyp} $cT$]
decomposes $cT$ as a record containing the type itself and its signature.
\end{ttdescription}


\section{Oracles: calling external reasoners }
\label{sec:oracles}
\index{oracles|(}

Oracles allow Isabelle to take advantage of external reasoners such as
arithmetic decision procedures, model checkers, fast tautology checkers or
computer algebra systems.  Invoked as an oracle, an external reasoner can
create arbitrary Isabelle theorems.  It is your responsibility to ensure that
the external reasoner is as trustworthy as your application requires.
Isabelle's proof objects~(\S\ref{sec:proofObjects}) record how each theorem
depends upon oracle calls.

\begin{ttbox}
     invoke_oracle : theory * Sign.sg * exn -> thm
     set_oracle    : (Sign.sg * exn -> term) -> theory -> theory
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{invoke_oracle} ($thy$, $sign$, $exn$)] invokes the oracle
  of theory $thy$ passing the information contained in the exception value
  $exn$ and creating a theorem having signature $sign$.  Errors arise if $thy$
  does not have an oracle, if the oracle rejects its arguments or if its
  result is ill-typed.

\item[\ttindexbold{set_oracle} $fn$ $thy$] sets the oracle of theory $thy$ to
  be $fn$.  It is seldom called explicitly, as there is syntax for oracles in
  theory files.  Any theory node can have at most one oracle.
\end{ttdescription}

A curious feature of {\ML} exceptions is that they are ordinary constructors.
The {\ML} type {\tt exn} is a datatype that can be extended at any time.  (See
my {\em {ML} for the Working Programmer}~\cite{paulson-ml2}, especially
page~136.)  The oracle mechanism takes advantage of this to allow an oracle to
take any information whatever.

There must be some way of invoking the external reasoner from \ML, either
because it is coded in {\ML} or via an operating system interface.  Isabelle
expects the {\ML} function to take two arguments: a signature and an
exception.
\begin{itemize}
\item The signature will typically be that of a descendant of the theory
  declaring the oracle.  The oracle will use it to distinguish constants from
  variables, etc., and it will be attached to the generated theorems.

\item The exception is used to pass arbitrary information to the oracle.  This
  information must contain a full description of the problem to be solved by
  the external reasoner, including any additional information that might be
  required.  The oracle may raise the exception to indicate that it cannot
  solve the specified problem.
\end{itemize}

A trivial example is provided on directory {\tt FOL/ex}.  This oracle
generates tautologies of the form $P\bimp\cdots\bimp P$, with an even number
of $P$s. 

File {\tt declIffOracle.ML} begins by declaring a new exception
constructor to convey to the oracle the information it requires: here,
just an integer.  It contains some code (suppressed below) for creating
the tautologies, and finally declares the oracle function itself:
\begin{ttbox}
exception IffOracleExn of int;
\(\vdots\)
fun mk_iff_oracle (sign, IffOracleExn n) = 
    if n>0 andalso n mod 2 = 0 
    then Trueprop $ mk_iff n
    else raise IffOracleExn n;
\end{ttbox}
Observe the function's two arguments: the signature {\tt sign} and the
exception given as a pattern.  The function checks its argument for
validity.  If $n$ is positive and even, it creates a tautology containing
$n$ occurrences of~$P$.  Otherwise it signals an error by raising its own
exception.  Errors may also be signalled by other means, such as returning
the theorem {\tt True}.  Please ensure that the oracle's result is
correctly typed; Isabelle rejects ill-typed theorems by raising a cryptic
exception at top level.

The theory file {\tt IffOracle.thy} packages up the function above as an
oracle.  The first line indicates that the new theory depends upon the file
{\tt declIffOracle.ML} (which declares the {\ML} code) as well as on \FOL.
The second line informs Isabelle that this theory has an oracle given by the
function {\tt mk_iff_oracle}.
\begin{ttbox}
IffOracle = "declIffOracle" + FOL +
oracle mk_iff_oracle
end
\end{ttbox}
Because a theory can have at most one oracle, the theory itself serves to
identify the oracle.

Here are some examples of invoking the oracle.  An argument of 10 is allowed,
but one of 5 is forbidden:
\begin{ttbox}
invoke_oracle (IffOracle.thy, sign_of IffOracle.thy, IffOracleExn 10);
{\out  "P <-> P <-> P <-> P <-> P <-> P <-> P <-> P <-> P <-> P" : thm}
invoke_oracle (IffOracle.thy, sign_of IffOracle.thy, IffOracleExn 5); 
{\out Exception- IffOracleExn 5 raised}
\end{ttbox}

\index{oracles|)}
\index{theories|)}