--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/IsarImplementation/Thy/document/Base.tex Fri Mar 06 11:28:07 2009 +0100
@@ -0,0 +1,29 @@
+%
+\begin{isabellebody}%
+\def\isabellecontext{Base}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{theory}\isamarkupfalse%
+\ Base\isanewline
+\isakeyword{imports}\ Pure\isanewline
+\isakeyword{uses}\ {\isachardoublequoteopen}{\isachardot}{\isachardot}{\isacharslash}{\isachardot}{\isachardot}{\isacharslash}antiquote{\isacharunderscore}setup{\isachardot}ML{\isachardoublequoteclose}\isanewline
+\isakeyword{begin}\isanewline
+\isanewline
+\isacommand{end}\isamarkupfalse%
+%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+\isanewline
+%
+\endisadelimtheory
+\end{isabellebody}%
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "root"
+%%% End:
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/IsarImplementation/Thy/document/Integration.tex Fri Mar 06 11:28:07 2009 +0100
@@ -0,0 +1,520 @@
+%
+\begin{isabellebody}%
+\def\isabellecontext{Integration}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{theory}\isamarkupfalse%
+\ Integration\isanewline
+\isakeyword{imports}\ Base\isanewline
+\isakeyword{begin}%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isamarkupchapter{System integration%
+}
+\isamarkuptrue%
+%
+\isamarkupsection{Isar toplevel \label{sec:isar-toplevel}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The Isar toplevel may be considered the central hub of the
+ Isabelle/Isar system, where all key components and sub-systems are
+ integrated into a single read-eval-print loop of Isar commands. We
+ shall even incorporate the existing {\ML} toplevel of the compiler
+ and run-time system (cf.\ \secref{sec:ML-toplevel}).
+
+ Isabelle/Isar departs from the original ``LCF system architecture''
+ where {\ML} was really The Meta Language for defining theories and
+ conducting proofs. Instead, {\ML} now only serves as the
+ implementation language for the system (and user extensions), while
+ the specific Isar toplevel supports the concepts of theory and proof
+ development natively. This includes the graph structure of theories
+ and the block structure of proofs, support for unlimited undo,
+ facilities for tracing, debugging, timing, profiling etc.
+
+ \medskip The toplevel maintains an implicit state, which is
+ transformed by a sequence of transitions -- either interactively or
+ in batch-mode. In interactive mode, Isar state transitions are
+ encapsulated as safe transactions, such that both failure and undo
+ are handled conveniently without destroying the underlying draft
+ theory (cf.~\secref{sec:context-theory}). In batch mode,
+ transitions operate in a linear (destructive) fashion, such that
+ error conditions abort the present attempt to construct a theory or
+ proof altogether.
+
+ The toplevel state is a disjoint sum of empty \isa{toplevel}, or
+ \isa{theory}, or \isa{proof}. On entering the main Isar loop we
+ start with an empty toplevel. A theory is commenced by giving a
+ \isa{{\isasymTHEORY}} header; within a theory we may issue theory
+ commands such as \isa{{\isasymDEFINITION}}, or state a \isa{{\isasymTHEOREM}} to be proven. Now we are within a proof state, with a
+ rich collection of Isar proof commands for structured proof
+ composition, or unstructured proof scripts. When the proof is
+ concluded we get back to the theory, which is then updated by
+ storing the resulting fact. Further theory declarations or theorem
+ statements with proofs may follow, until we eventually conclude the
+ theory development by issuing \isa{{\isasymEND}}. The resulting theory
+ is then stored within the theory database and we are back to the
+ empty toplevel.
+
+ In addition to these proper state transformations, there are also
+ some diagnostic commands for peeking at the toplevel state without
+ modifying it (e.g.\ \isakeyword{thm}, \isakeyword{term},
+ \isakeyword{print-cases}).%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{Toplevel.state}\verb|type Toplevel.state| \\
+ \indexdef{}{ML}{Toplevel.UNDEF}\verb|Toplevel.UNDEF: exn| \\
+ \indexdef{}{ML}{Toplevel.is\_toplevel}\verb|Toplevel.is_toplevel: Toplevel.state -> bool| \\
+ \indexdef{}{ML}{Toplevel.theory\_of}\verb|Toplevel.theory_of: Toplevel.state -> theory| \\
+ \indexdef{}{ML}{Toplevel.proof\_of}\verb|Toplevel.proof_of: Toplevel.state -> Proof.state| \\
+ \indexdef{}{ML}{Toplevel.debug}\verb|Toplevel.debug: bool ref| \\
+ \indexdef{}{ML}{Toplevel.timing}\verb|Toplevel.timing: bool ref| \\
+ \indexdef{}{ML}{Toplevel.profiling}\verb|Toplevel.profiling: int ref| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Toplevel.state| represents Isar toplevel states,
+ which are normally manipulated through the concept of toplevel
+ transitions only (\secref{sec:toplevel-transition}). Also note that
+ a raw toplevel state is subject to the same linearity restrictions
+ as a theory context (cf.~\secref{sec:context-theory}).
+
+ \item \verb|Toplevel.UNDEF| is raised for undefined toplevel
+ operations. Many operations work only partially for certain cases,
+ since \verb|Toplevel.state| is a sum type.
+
+ \item \verb|Toplevel.is_toplevel|~\isa{state} checks for an empty
+ toplevel state.
+
+ \item \verb|Toplevel.theory_of|~\isa{state} selects the theory of
+ a theory or proof (!), otherwise raises \verb|Toplevel.UNDEF|.
+
+ \item \verb|Toplevel.proof_of|~\isa{state} selects the Isar proof
+ state if available, otherwise raises \verb|Toplevel.UNDEF|.
+
+ \item \verb|set Toplevel.debug| makes the toplevel print further
+ details about internal error conditions, exceptions being raised
+ etc.
+
+ \item \verb|set Toplevel.timing| makes the toplevel print timing
+ information for each Isar command being executed.
+
+ \item \verb|Toplevel.profiling|~\verb|:=|~\isa{n} controls
+ low-level profiling of the underlying {\ML} runtime system. For
+ Poly/ML, \isa{n\ {\isacharequal}\ {\isadigit{1}}} means time and \isa{n\ {\isacharequal}\ {\isadigit{2}}} space
+ profiling.
+
+ \end{description}%
+\end{isamarkuptext}%
+\isamarkuptrue%
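+\medskip A minimal {\ML} sketch (not part of the generated theory
+sources) of how these reference variables may be set from an
+interactive {\ML} session, using only the declarations listed above:
+
+\begin{verbatim}
+  (* print more details about internal error conditions *)
+  Toplevel.debug := true;
+
+  (* print timing information for each Isar command *)
+  Toplevel.timing := true;
+
+  (* Poly/ML: 1 = time profiling, 2 = space profiling, 0 = off *)
+  Toplevel.profiling := 1;
+  Toplevel.profiling := 0;
+\end{verbatim}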
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Toplevel transitions \label{sec:toplevel-transition}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+An Isar toplevel transition consists of a partial function on the
+ toplevel state, with additional information for diagnostics and
+ error reporting: there are fields for command name, source position,
+ optional source text, as well as flags for interactive-only commands
+ (which issue a warning in batch-mode), printing of result state,
+ etc.
+
+ The operational part is represented as the sequential union of a
+ list of partial functions, which are tried in turn until the first
+ one succeeds. This acts like an outer case-expression for various
+ alternative state transitions. For example, \isakeyword{qed} acts
+ differently for a local proof vs.\ the global ending of the main
+ proof.
+
+ Toplevel transitions are composed via transition transformers.
+ Internally, Isar commands are put together from an empty transition
+ extended by name and source position (and optional source text). It
+ is then left to the individual command parser to turn the given
+ concrete syntax into a suitable transition transformer that adjoins
+ actual operations on a theory or proof state etc.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{Toplevel.print}\verb|Toplevel.print: Toplevel.transition -> Toplevel.transition| \\
+ \indexdef{}{ML}{Toplevel.no\_timing}\verb|Toplevel.no_timing: Toplevel.transition -> Toplevel.transition| \\
+ \indexdef{}{ML}{Toplevel.keep}\verb|Toplevel.keep: (Toplevel.state -> unit) ->|\isasep\isanewline%
+\verb| Toplevel.transition -> Toplevel.transition| \\
+ \indexdef{}{ML}{Toplevel.theory}\verb|Toplevel.theory: (theory -> theory) ->|\isasep\isanewline%
+\verb| Toplevel.transition -> Toplevel.transition| \\
+ \indexdef{}{ML}{Toplevel.theory\_to\_proof}\verb|Toplevel.theory_to_proof: (theory -> Proof.state) ->|\isasep\isanewline%
+\verb| Toplevel.transition -> Toplevel.transition| \\
+ \indexdef{}{ML}{Toplevel.proof}\verb|Toplevel.proof: (Proof.state -> Proof.state) ->|\isasep\isanewline%
+\verb| Toplevel.transition -> Toplevel.transition| \\
+ \indexdef{}{ML}{Toplevel.proofs}\verb|Toplevel.proofs: (Proof.state -> Proof.state Seq.seq) ->|\isasep\isanewline%
+\verb| Toplevel.transition -> Toplevel.transition| \\
+ \indexdef{}{ML}{Toplevel.end\_proof}\verb|Toplevel.end_proof: (bool -> Proof.state -> Proof.context) ->|\isasep\isanewline%
+\verb| Toplevel.transition -> Toplevel.transition| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Toplevel.print|~\isa{tr} sets the print flag, which
+ causes the toplevel loop to echo the result state (in interactive
+ mode).
+
+ \item \verb|Toplevel.no_timing|~\isa{tr} indicates that the
+ transition should never show timing information, e.g.\ because it is
+ a diagnostic command.
+
+ \item \verb|Toplevel.keep|~\isa{tr} adjoins a diagnostic
+ function.
+
+ \item \verb|Toplevel.theory|~\isa{tr} adjoins a theory
+ transformer.
+
+ \item \verb|Toplevel.theory_to_proof|~\isa{tr} adjoins a global
+ goal function, which turns a theory into a proof state. The theory
+ may be changed before entering the proof; the generic Isar goal
+ setup includes an argument that specifies how to apply the proven
+ result to the theory, when the proof is finished.
+
+ \item \verb|Toplevel.proof|~\isa{tr} adjoins a deterministic
+ proof command, with a singleton result.
+
+ \item \verb|Toplevel.proofs|~\isa{tr} adjoins a general proof
+ command, with zero or more result states (represented as a lazy
+ list).
+
+ \item \verb|Toplevel.end_proof|~\isa{tr} adjoins a concluding
+ proof command that returns the resulting theory, after storing the
+ resulting facts in the context etc.
+
+ \end{description}%
+\end{isamarkuptext}%
+\isamarkuptrue%
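+\medskip The following sketch (assuming a hypothetical transition
+\verb|tr| as set up by some command parser) illustrates how such
+transition transformers are composed, using only the operations
+listed above:
+
+\begin{verbatim}
+  (* echo the result state and adjoin a diagnostic function *)
+  fun diagnose tr =
+    tr
+    |> Toplevel.print
+    |> Toplevel.no_timing
+    |> Toplevel.keep (fn state =>
+         if Toplevel.is_toplevel state
+         then writeln "empty toplevel"
+         else writeln "theory or proof state");
+\end{verbatim}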
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Toplevel control%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+There are a few special control commands that modify the behavior
+ of the toplevel itself, and only make sense in interactive mode. Under
+ normal circumstances, the user encounters these only implicitly as
+ part of the protocol between the Isabelle/Isar system and a
+ user-interface such as ProofGeneral.
+
+ \begin{description}
+
+ \item \isacommand{undo} follows the three-level hierarchy of empty
+ toplevel vs.\ theory vs.\ proof: undo within a proof reverts to the
+ previous proof context, undo after a proof reverts to the theory
+ before the initial goal statement, undo of a theory command reverts
+ to the previous theory value, undo of a theory header discontinues
+ the current theory development and removes it from the theory
+ database (\secref{sec:theory-database}).
+
+ \item \isacommand{kill} aborts the current level of development:
+ kill in a proof context reverts to the theory before the initial
+ goal statement, kill in a theory context aborts the current theory
+ development, removing it from the database.
+
+ \item \isacommand{exit} drops out of the Isar toplevel into the
+ underlying {\ML} toplevel (\secref{sec:ML-toplevel}). The Isar
+ toplevel state is preserved and may be continued later.
+
+ \item \isacommand{quit} terminates the Isabelle/Isar process without
+ saving.
+
+ \end{description}%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isamarkupsection{ML toplevel \label{sec:ML-toplevel}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The {\ML} toplevel provides a read-compile-eval-print loop for {\ML}
+ values, types, structures, and functors. {\ML} declarations operate
+ on the global system state, which consists of the compiler
+ environment plus the values of {\ML} reference variables. There is
+ no clean way to undo {\ML} declarations, except for reverting to a
+ previously saved state of the whole Isabelle process. {\ML} input
+ is either read interactively from a TTY, or from a string (usually
+ within a theory text), or from a source file (usually loaded from a
+ theory).
+
+ Whenever the {\ML} toplevel is active, the current Isabelle theory
+ context is passed as an internal reference variable. Thus {\ML}
+ code may access the theory context during compilation; it may even
+ change the value of a theory under construction --- while
+ observing the usual linearity restrictions
+ (cf.~\secref{sec:context-theory}).%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{the\_context}\verb|the_context: unit -> theory| \\
+ \indexdef{}{ML}{Context.$>$$>$ }\verb|Context.>> : (Context.generic -> Context.generic) -> unit| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|the_context ()| refers to the theory context of the
+ {\ML} toplevel --- at compile time! {\ML} code needs to take care
+ to refer to \verb|the_context ()| correctly. Recall that
+ evaluation of a function body is delayed until actual runtime.
+ Moreover, persistent {\ML} toplevel bindings to an unfinished theory
+ should be avoided: code should either project out the desired
+ information immediately, or produce an explicit \verb|theory_ref| (cf.\ \secref{sec:context-theory}).
+
+ \item \verb|Context.>>|~\isa{f} applies context transformation
+ \isa{f} to the implicit context of the {\ML} toplevel.
+
+ \end{description}
+
+ It is very important to note that the above functions are really
+ restricted to the compile time, even though the {\ML} compiler is
+ invoked at runtime! The majority of {\ML} code uses explicit
+ functional arguments of a theory or proof context instead. Thus it
+ may be invoked for an arbitrary context later on, without having to
+ worry about any operational details.
+
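+ A minimal sketch of the recommended usage pattern, projecting out the
+ required information immediately at compile time (assuming the usual
+ operation \verb|Context.theory_name: theory -> string|):
+
+\begin{verbatim}
+  (* resolved while this ML text is being compiled *)
+  val thy = the_context ();
+  val thy_name = Context.theory_name thy;
+\end{verbatim}
+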
+ \bigskip
+
+ \begin{mldecls}
+ \indexdef{}{ML}{Isar.main}\verb|Isar.main: unit -> unit| \\
+ \indexdef{}{ML}{Isar.loop}\verb|Isar.loop: unit -> unit| \\
+ \indexdef{}{ML}{Isar.state}\verb|Isar.state: unit -> Toplevel.state| \\
+ \indexdef{}{ML}{Isar.exn}\verb|Isar.exn: unit -> (exn * string) option| \\
+ \indexdef{}{ML}{Isar.context}\verb|Isar.context: unit -> Proof.context| \\
+ \indexdef{}{ML}{Isar.goal}\verb|Isar.goal: unit -> thm| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Isar.main ()| invokes the Isar toplevel from {\ML},
+ initializing an empty toplevel state.
+
+ \item \verb|Isar.loop ()| continues the Isar toplevel with the
+ current state, after having dropped out of the Isar toplevel loop.
+
+ \item \verb|Isar.state ()| and \verb|Isar.exn ()| get current
+ toplevel state and error condition, respectively. This only works
+ after having dropped out of the Isar toplevel loop.
+
+ \item \verb|Isar.context ()| produces the proof context from \verb|Isar.state ()|, analogous to \verb|Context.proof_of|
+ (\secref{sec:generic-context}).
+
+ \item \verb|Isar.goal ()| picks the tactical goal from \verb|Isar.state ()|, represented as a theorem according to
+ \secref{sec:tactical-goals}.
+
+ \end{description}%
+\end{isamarkuptext}%
+\isamarkuptrue%
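+\medskip A brief sketch (not taken from the Isabelle sources) of a
+typical round trip through the {\ML} toplevel, using only the
+operations listed above:
+
+\begin{verbatim}
+  (* after 'exit' from the Isar toplevel: *)
+  val state = Isar.state ();      (* current toplevel state *)
+  val ctxt  = Isar.context ();    (* corresponding proof context *)
+  val goal  = Isar.goal ();       (* tactical goal, if within a proof *)
+
+  Isar.loop ();                   (* resume the Isar toplevel *)
+\end{verbatim}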
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsection{Theory database \label{sec:theory-database}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The theory database maintains a collection of theories, together
+ with some administrative information about their original sources,
+ which are held in an external store (i.e.\ some directory within the
+ regular file system).
+
+ The theory database is organized as a directed acyclic graph;
+ entries are referenced by theory name. Although some additional
+ interfaces allow a directory specification to be included as well, this
+ is only a hint to the underlying theory loader. The internal theory
+ name space is flat!
+
+ Theory \isa{A} is associated with the main theory file \isa{A}\verb,.thy,, which needs to be accessible through the theory
+ loader path. Any number of additional {\ML} source files may be
+ associated with each theory, by declaring these dependencies in the
+ theory header as \isa{{\isasymUSES}}, and loading them consecutively
+ within the theory context. The system keeps track of incoming {\ML}
+ sources and associates them with the current theory. The file
+ \isa{A}\verb,.ML, is loaded after a theory has been concluded, in
+ order to support legacy {\ML} proof scripts.
+
+ The basic internal actions of the theory database are \isa{update}, \isa{outdate}, and \isa{remove}:
+
+ \begin{itemize}
+
+ \item \isa{update\ A} introduces a link of \isa{A} with a
+ \isa{theory} value of the same name; it asserts that the theory
+ sources are now consistent with that value;
+
+ \item \isa{outdate\ A} invalidates the link of a theory database
+ entry to its sources, but retains the present theory value;
+
+ \item \isa{remove\ A} deletes entry \isa{A} from the theory
+ database.
+
+ \end{itemize}
+
+ These actions are propagated to sub- or super-graphs of a theory
+ entry as expected, in order to preserve global consistency of the
+ state of all loaded theories with the sources of the external store.
+ This implies certain causalities between actions: \isa{update}
+ or \isa{outdate} of an entry will \isa{outdate} all
+ descendants; \isa{remove} will \isa{remove} all descendants.
+
+ \medskip There are separate user-level interfaces to operate on the
+ theory database directly or indirectly. The primitive actions then
+ just happen automatically while working with the system. In
+ particular, processing a theory header \isa{{\isasymTHEORY}\ A\ {\isasymIMPORTS}\ B\isactrlsub {\isadigit{1}}\ {\isasymdots}\ B\isactrlsub n\ {\isasymBEGIN}} ensures that the
+ sub-graph of the collective imports \isa{B\isactrlsub {\isadigit{1}}\ {\isasymdots}\ B\isactrlsub n}
+ is up-to-date, too. Earlier theories are reloaded as required, with
+ \isa{update} actions proceeding in topological order according to
+ theory dependencies. There may also be a wave of implied \isa{outdate} actions for derived theory nodes until a stable situation
+ is achieved eventually.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{theory}\verb|theory: string -> theory| \\
+ \indexdef{}{ML}{use\_thy}\verb|use_thy: string -> unit| \\
+ \indexdef{}{ML}{use\_thys}\verb|use_thys: string list -> unit| \\
+ \indexdef{}{ML}{ThyInfo.touch\_thy}\verb|ThyInfo.touch_thy: string -> unit| \\
+ \indexdef{}{ML}{ThyInfo.remove\_thy}\verb|ThyInfo.remove_thy: string -> unit| \\[1ex]
+ \indexdef{}{ML}{ThyInfo.begin\_theory}\verb|ThyInfo.begin_theory|\verb|: ... -> bool -> theory| \\
+ \indexdef{}{ML}{ThyInfo.end\_theory}\verb|ThyInfo.end_theory: theory -> unit| \\
+ \indexdef{}{ML}{ThyInfo.register\_theory}\verb|ThyInfo.register_theory: theory -> unit| \\[1ex]
+ \verb|datatype action = Update |\verb,|,\verb| Outdate |\verb,|,\verb| Remove| \\
+ \indexdef{}{ML}{ThyInfo.add\_hook}\verb|ThyInfo.add_hook: (ThyInfo.action -> string -> unit) -> unit| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|theory|~\isa{A} retrieves the theory value presently
+ associated with name \isa{A}. Note that the result might be
+ outdated.
+
+ \item \verb|use_thy|~\isa{A} ensures that theory \isa{A} is fully
+ up-to-date wrt.\ the external file store, reloading outdated
+ ancestors as required.
+
+ \item \verb|use_thys| is similar to \verb|use_thy|, but handles
+ several theories simultaneously. Thus it acts like processing the
+ import header of a theory, without performing the merge of the
+ result.
+
+ \item \verb|ThyInfo.touch_thy|~\isa{A} performs an \isa{outdate} action
+ on theory \isa{A} and all descendants.
+
+ \item \verb|ThyInfo.remove_thy|~\isa{A} deletes theory \isa{A} and all
+ descendants from the theory database.
+
+ \item \verb|ThyInfo.begin_theory| is the basic operation behind a
+ \isa{{\isasymTHEORY}} header declaration. This {\ML} function is
+ normally not invoked directly.
+
+ \item \verb|ThyInfo.end_theory| concludes the loading of a theory
+ proper and stores the result in the theory database.
+
+ \item \verb|ThyInfo.register_theory|~\isa{text\ thy} registers an
+ existing theory value with the theory loader database. There is no
+ management of associated sources.
+
+ \item \verb|ThyInfo.add_hook|~\isa{f} registers function \isa{f} as a hook for theory database actions. The function will be
+ invoked with the action and theory name being involved; thus derived
+ actions may be performed in associated system components, e.g.\
+ maintaining the state of an editor for the theory sources.
+
+ The kind and order of actions occurring in practice depends both on
+ user interactions and the internal process of resolving theory
+ imports. Hooks should not rely on a particular policy here! Any
+ exceptions raised by the hook are ignored.
+
+ \end{description}%
+\end{isamarkuptext}%
+\isamarkuptrue%
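+\medskip The following {\ML} sketch (not taken from the Isabelle
+sources; the theory name \verb|"A"| is hypothetical) combines some of
+these operations, assuming that the hook constructors are accessed as
+\verb|ThyInfo.Update| etc.:
+
+\begin{verbatim}
+  (* observe theory database actions *)
+  ThyInfo.add_hook (fn action => fn name =>
+    let
+      val kind =
+        (case action of
+          ThyInfo.Update => "update"
+        | ThyInfo.Outdate => "outdate"
+        | ThyInfo.Remove => "remove");
+    in writeln (kind ^ ": " ^ name) end);
+
+  use_thy "A";            (* load A, reloading outdated ancestors *)
+  ThyInfo.touch_thy "A";  (* outdate A and all descendants *)
+  ThyInfo.remove_thy "A"; (* remove A and all descendants *)
+\end{verbatim}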
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{end}\isamarkupfalse%
+%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+\isanewline
+\end{isabellebody}%
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "root"
+%%% End:
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/IsarImplementation/Thy/document/Isar.tex Fri Mar 06 11:28:07 2009 +0100
@@ -0,0 +1,86 @@
+%
+\begin{isabellebody}%
+\def\isabellecontext{Isar}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{theory}\isamarkupfalse%
+\ Isar\isanewline
+\isakeyword{imports}\ Base\isanewline
+\isakeyword{begin}%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isamarkupchapter{Isar language elements%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The primary Isar language consists of three main categories of
+ language elements:
+
+ \begin{enumerate}
+
+ \item Proof commands
+
+ \item Proof methods
+
+ \item Attributes
+
+ \end{enumerate}%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isamarkupsection{Proof commands%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+FIXME%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isamarkupsection{Proof methods%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+FIXME%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isamarkupsection{Attributes%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+FIXME%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{end}\isamarkupfalse%
+%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+\isanewline
+\end{isabellebody}%
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "root"
+%%% End:
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/IsarImplementation/Thy/document/Logic.tex Fri Mar 06 11:28:07 2009 +0100
@@ -0,0 +1,959 @@
+%
+\begin{isabellebody}%
+\def\isabellecontext{Logic}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{theory}\isamarkupfalse%
+\ Logic\isanewline
+\isakeyword{imports}\ Base\isanewline
+\isakeyword{begin}%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isamarkupchapter{Primitive logic \label{ch:logic}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The logical foundations of Isabelle/Isar are those of the Pure logic,
+ which has been introduced as a Natural Deduction framework in
+ \cite{paulson700}. This is essentially the same logic as ``\isa{{\isasymlambda}HOL}'' in the more abstract setting of Pure Type Systems (PTS)
+ \cite{Barendregt-Geuvers:2001}, although there are some key
+ differences in the specific treatment of simple types in
+ Isabelle/Pure.
+
+ Following type-theoretic parlance, the Pure logic consists of three
+ levels of \isa{{\isasymlambda}}-calculus with corresponding arrows, \isa{{\isasymRightarrow}} for syntactic function space (terms depending on terms), \isa{{\isasymAnd}} for universal quantification (proofs depending on terms), and
+ \isa{{\isasymLongrightarrow}} for implication (proofs depending on proofs).
+
+ Derivations are relative to a logical theory, which declares type
+ constructors, constants, and axioms. Theory declarations support
+ schematic polymorphism, which is strictly speaking outside the
+ logic.\footnote{This is the deeper logical reason why the theory
+ context \isa{{\isasymTheta}} is separate from the proof context \isa{{\isasymGamma}}
+ of the core calculus.}%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isamarkupsection{Types \label{sec:types}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The language of types is an uninterpreted order-sorted first-order
+ algebra; types are qualified by ordered type classes.
+
+ \medskip A \emph{type class} is an abstract syntactic entity
+ declared in the theory context. The \emph{subclass relation} \isa{c\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ c\isactrlisub {\isadigit{2}}} is specified by stating an acyclic
+ generating relation; the transitive closure is maintained
+ internally. The resulting relation is an ordering: reflexive,
+ transitive, and antisymmetric.
+
+ A \emph{sort} is a list of type classes written as \isa{s\ {\isacharequal}\ {\isacharbraceleft}c\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ c\isactrlisub m{\isacharbraceright}}, which represents symbolic
+ intersection. Notationally, the curly braces are omitted for
+ singleton intersections, i.e.\ any class \isa{c} may be read as
+ a sort \isa{{\isacharbraceleft}c{\isacharbraceright}}. The ordering on type classes is extended to
+ sorts according to the meaning of intersections: \isa{{\isacharbraceleft}c\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}\ c\isactrlisub m{\isacharbraceright}\ {\isasymsubseteq}\ {\isacharbraceleft}d\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ d\isactrlisub n{\isacharbraceright}} iff
+ \isa{{\isasymforall}j{\isachardot}\ {\isasymexists}i{\isachardot}\ c\isactrlisub i\ {\isasymsubseteq}\ d\isactrlisub j}. The empty intersection
+ \isa{{\isacharbraceleft}{\isacharbraceright}} refers to the universal sort, which is the largest
+ element wrt.\ the sort order. The intersections of all (finitely
+ many) classes declared in the current theory are the minimal
+ elements wrt.\ the sort order.
+
+ \medskip A \emph{fixed type variable} is a pair of a basic name
+ (starting with a \isa{{\isacharprime}} character) and a sort constraint, e.g.\
+ \isa{{\isacharparenleft}{\isacharprime}a{\isacharcomma}\ s{\isacharparenright}} which is usually printed as \isa{{\isasymalpha}\isactrlisub s}.
+ A \emph{schematic type variable} is a pair of an indexname and a
+ sort constraint, e.g.\ \isa{{\isacharparenleft}{\isacharparenleft}{\isacharprime}a{\isacharcomma}\ {\isadigit{0}}{\isacharparenright}{\isacharcomma}\ s{\isacharparenright}} which is usually
+ printed as \isa{{\isacharquery}{\isasymalpha}\isactrlisub s}.
+
+ Note that \emph{all} syntactic components contribute to the identity
+ of type variables, including the sort constraint. The core logic
+ handles type variables with the same name but different sorts as
+ different, although some outer layers of the system make it hard to
+ produce anything like this.
+
+ A \emph{type constructor} \isa{{\isasymkappa}} is a \isa{k}-ary operator
+ on types declared in the theory. Type constructor application is
+ written postfix as \isa{{\isacharparenleft}{\isasymalpha}\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymalpha}\isactrlisub k{\isacharparenright}{\isasymkappa}}. For
+ \isa{k\ {\isacharequal}\ {\isadigit{0}}} the argument tuple is omitted, e.g.\ \isa{prop}
+ instead of \isa{{\isacharparenleft}{\isacharparenright}prop}. For \isa{k\ {\isacharequal}\ {\isadigit{1}}} the parentheses
+ are omitted, e.g.\ \isa{{\isasymalpha}\ list} instead of \isa{{\isacharparenleft}{\isasymalpha}{\isacharparenright}list}.
+ Further notation is provided for specific constructors, notably the
+ right-associative infix \isa{{\isasymalpha}\ {\isasymRightarrow}\ {\isasymbeta}} instead of \isa{{\isacharparenleft}{\isasymalpha}{\isacharcomma}\ {\isasymbeta}{\isacharparenright}fun}.
+
+ A \emph{type} is defined inductively over type variables and type
+ constructors as follows: \isa{{\isasymtau}\ {\isacharequal}\ {\isasymalpha}\isactrlisub s\ {\isacharbar}\ {\isacharquery}{\isasymalpha}\isactrlisub s\ {\isacharbar}\ {\isacharparenleft}{\isasymtau}\isactrlsub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymtau}\isactrlsub k{\isacharparenright}{\isasymkappa}}.
+
+ A \emph{type abbreviation} is a syntactic definition \isa{{\isacharparenleft}\isactrlvec {\isasymalpha}{\isacharparenright}{\isasymkappa}\ {\isacharequal}\ {\isasymtau}} of an arbitrary type expression \isa{{\isasymtau}} over
+ variables \isa{\isactrlvec {\isasymalpha}}. Type abbreviations appear as type
+ constructors in the syntax, but are expanded before entering the
+ logical core.
+
+ A \emph{type arity} declares the image behavior of a type
+ constructor wrt.\ the algebra of sorts: \isa{{\isasymkappa}\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}s\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ s\isactrlisub k{\isacharparenright}s} means that \isa{{\isacharparenleft}{\isasymtau}\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymtau}\isactrlisub k{\isacharparenright}{\isasymkappa}} is
+ of sort \isa{s} if every argument type \isa{{\isasymtau}\isactrlisub i} is
+ of sort \isa{s\isactrlisub i}. Arity declarations are implicitly
+ completed, i.e.\ \isa{{\isasymkappa}\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s{\isacharparenright}c} entails \isa{{\isasymkappa}\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s{\isacharparenright}c{\isacharprime}} for any \isa{c{\isacharprime}\ {\isasymsupseteq}\ c}.
+
+ \medskip The sort algebra is always maintained as \emph{coregular},
+ which means that type arities are consistent with the subclass
+ relation: for any type constructor \isa{{\isasymkappa}}, and classes \isa{c\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ c\isactrlisub {\isadigit{2}}}, and arities \isa{{\isasymkappa}\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s\isactrlisub {\isadigit{1}}{\isacharparenright}c\isactrlisub {\isadigit{1}}} and \isa{{\isasymkappa}\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s\isactrlisub {\isadigit{2}}{\isacharparenright}c\isactrlisub {\isadigit{2}}} holds \isa{\isactrlvec s\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ \isactrlvec s\isactrlisub {\isadigit{2}}} component-wise.
+
+ The key property of a coregular order-sorted algebra is that sort
+ constraints can be solved in a most general fashion: for each type
+ constructor \isa{{\isasymkappa}} and sort \isa{s} there is a most general
+ vector of argument sorts \isa{{\isacharparenleft}s\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ s\isactrlisub k{\isacharparenright}} such
+ that a type scheme \isa{{\isacharparenleft}{\isasymalpha}\isactrlbsub s\isactrlisub {\isadigit{1}}\isactrlesub {\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymalpha}\isactrlbsub s\isactrlisub k\isactrlesub {\isacharparenright}{\isasymkappa}} is of sort \isa{s}.
+ Consequently, type unification has most general solutions (modulo
+ equivalence of sorts), so type-inference produces primary types as
+ expected \cite{nipkow-prehofer}.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{class}\verb|type class| \\
+ \indexdef{}{ML type}{sort}\verb|type sort| \\
+ \indexdef{}{ML type}{arity}\verb|type arity| \\
+ \indexdef{}{ML type}{typ}\verb|type typ| \\
+ \indexdef{}{ML}{map\_atyps}\verb|map_atyps: (typ -> typ) -> typ -> typ| \\
+ \indexdef{}{ML}{fold\_atyps}\verb|fold_atyps: (typ -> 'a -> 'a) -> typ -> 'a -> 'a| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML}{Sign.subsort}\verb|Sign.subsort: theory -> sort * sort -> bool| \\
+ \indexdef{}{ML}{Sign.of\_sort}\verb|Sign.of_sort: theory -> typ * sort -> bool| \\
+ \indexdef{}{ML}{Sign.add\_types}\verb|Sign.add_types: (string * int * mixfix) list -> theory -> theory| \\
+ \indexdef{}{ML}{Sign.add\_tyabbrs\_i}\verb|Sign.add_tyabbrs_i: |\isasep\isanewline%
+\verb| (string * string list * typ * mixfix) list -> theory -> theory| \\
+ \indexdef{}{ML}{Sign.primitive\_class}\verb|Sign.primitive_class: string * class list -> theory -> theory| \\
+ \indexdef{}{ML}{Sign.primitive\_classrel}\verb|Sign.primitive_classrel: class * class -> theory -> theory| \\
+ \indexdef{}{ML}{Sign.primitive\_arity}\verb|Sign.primitive_arity: arity -> theory -> theory| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|class| represents type classes; this is an alias for
+ \verb|string|.
+
+ \item \verb|sort| represents sorts; this is an alias for
+ \verb|class list|.
+
+ \item \verb|arity| represents type arities; this is an alias for
+ triples of the form \isa{{\isacharparenleft}{\isasymkappa}{\isacharcomma}\ \isactrlvec s{\isacharcomma}\ s{\isacharparenright}} for \isa{{\isasymkappa}\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s{\isacharparenright}s} described above.
+
+ \item \verb|typ| represents types; this is a datatype with
+ constructors \verb|TFree|, \verb|TVar|, \verb|Type|.
+
+ \item \verb|map_atyps|~\isa{f\ {\isasymtau}} applies the mapping \isa{f}
+ to all atomic types (\verb|TFree|, \verb|TVar|) occurring in \isa{{\isasymtau}}.
+
+ \item \verb|fold_atyps|~\isa{f\ {\isasymtau}} iterates the operation \isa{f} over all occurrences of atomic types (\verb|TFree|, \verb|TVar|)
+ in \isa{{\isasymtau}}; the type structure is traversed from left to right.
+
+ \item \verb|Sign.subsort|~\isa{thy\ {\isacharparenleft}s\isactrlisub {\isadigit{1}}{\isacharcomma}\ s\isactrlisub {\isadigit{2}}{\isacharparenright}}
+ tests the subsort relation \isa{s\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ s\isactrlisub {\isadigit{2}}}.
+
+ \item \verb|Sign.of_sort|~\isa{thy\ {\isacharparenleft}{\isasymtau}{\isacharcomma}\ s{\isacharparenright}} tests whether type
+ \isa{{\isasymtau}} is of sort \isa{s}.
+
+ \item \verb|Sign.add_types|~\isa{{\isacharbrackleft}{\isacharparenleft}{\isasymkappa}{\isacharcomma}\ k{\isacharcomma}\ mx{\isacharparenright}{\isacharcomma}\ {\isasymdots}{\isacharbrackright}} declares a new
+ type constructor \isa{{\isasymkappa}} with \isa{k} arguments and
+ optional mixfix syntax.
+
+ \item \verb|Sign.add_tyabbrs_i|~\isa{{\isacharbrackleft}{\isacharparenleft}{\isasymkappa}{\isacharcomma}\ \isactrlvec {\isasymalpha}{\isacharcomma}\ {\isasymtau}{\isacharcomma}\ mx{\isacharparenright}{\isacharcomma}\ {\isasymdots}{\isacharbrackright}}
+ defines a new type abbreviation \isa{{\isacharparenleft}\isactrlvec {\isasymalpha}{\isacharparenright}{\isasymkappa}\ {\isacharequal}\ {\isasymtau}} with
+ optional mixfix syntax.
+
+ \item \verb|Sign.primitive_class|~\isa{{\isacharparenleft}c{\isacharcomma}\ {\isacharbrackleft}c\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ c\isactrlisub n{\isacharbrackright}{\isacharparenright}} declares a new class \isa{c}, together with class
+ relations \isa{c\ {\isasymsubseteq}\ c\isactrlisub i}, for \isa{i\ {\isacharequal}\ {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ n}.
+
+ \item \verb|Sign.primitive_classrel|~\isa{{\isacharparenleft}c\isactrlisub {\isadigit{1}}{\isacharcomma}\ c\isactrlisub {\isadigit{2}}{\isacharparenright}} declares the class relation \isa{c\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ c\isactrlisub {\isadigit{2}}}.
+
+ \item \verb|Sign.primitive_arity|~\isa{{\isacharparenleft}{\isasymkappa}{\isacharcomma}\ \isactrlvec s{\isacharcomma}\ s{\isacharparenright}} declares
+ the arity \isa{{\isasymkappa}\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s{\isacharparenright}s}.
+
+ \end{description}%
+\end{isamarkuptext}%
+\isamarkuptrue%
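+\medskip A small {\ML} sketch (not from the Isabelle sources; the
+theory value \verb|thy| and the type name \verb|"flag"| are
+hypothetical, and \verb|NoSyn| is assumed to be the usual ``no
+mixfix'' value) using some of the above operations:
+
+\begin{verbatim}
+  (* declare a new 0-ary type constructor without mixfix syntax *)
+  val thy' = Sign.add_types [("flag", 0, NoSyn)] thy;
+
+  (* every type is of the universal (empty) sort *)
+  val ok = Sign.of_sort thy' (Type ("flag", []), []);
+
+  (* turn every fixed type variable into a schematic one *)
+  val schematic = map_atyps
+    (fn TFree (a, S) => TVar ((a, 0), S) | T => T);
+\end{verbatim}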
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsection{Terms \label{sec:terms}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The language of terms is that of simply-typed \isa{{\isasymlambda}}-calculus
+ with de-Bruijn indices for bound variables (cf.\ \cite{debruijn72}
+ or \cite{paulson-ml2}), with the types being determined by the
+ corresponding binders. In contrast, free variables and constants
+ have an explicit name and type in each occurrence.
+
+ \medskip A \emph{bound variable} is a natural number \isa{b},
+ which accounts for the number of intermediate binders between the
+ variable occurrence in the body and its binding position. For
+ example, the de-Bruijn term \isa{{\isasymlambda}\isactrlbsub nat\isactrlesub {\isachardot}\ {\isasymlambda}\isactrlbsub nat\isactrlesub {\isachardot}\ {\isadigit{1}}\ {\isacharplus}\ {\isadigit{0}}} would
+ correspond to \isa{{\isasymlambda}x\isactrlbsub nat\isactrlesub {\isachardot}\ {\isasymlambda}y\isactrlbsub nat\isactrlesub {\isachardot}\ x\ {\isacharplus}\ y} in a named
+ representation. Note that a bound variable may be represented by
+ different de-Bruijn indices at different occurrences, depending on
+ the nesting of abstractions.
+
+ A \emph{loose variable} is a bound variable that is outside the
+ scope of local binders. The types (and names) for loose variables
+ can be managed as a separate context that is maintained as a stack
+ of hypothetical binders. The core logic operates on closed terms,
+ without any loose variables.
+
+ A \emph{fixed variable} is a pair of a basic name and a type, e.g.\
+ \isa{{\isacharparenleft}x{\isacharcomma}\ {\isasymtau}{\isacharparenright}} which is usually printed \isa{x\isactrlisub {\isasymtau}}. A
+ \emph{schematic variable} is a pair of an indexname and a type,
+ e.g.\ \isa{{\isacharparenleft}{\isacharparenleft}x{\isacharcomma}\ {\isadigit{0}}{\isacharparenright}{\isacharcomma}\ {\isasymtau}{\isacharparenright}} which is usually printed as \isa{{\isacharquery}x\isactrlisub {\isasymtau}}.
+
+ \medskip A \emph{constant} is a pair of a basic name and a type,
+ e.g.\ \isa{{\isacharparenleft}c{\isacharcomma}\ {\isasymtau}{\isacharparenright}} which is usually printed as \isa{c\isactrlisub {\isasymtau}}. Constants are declared in the context as polymorphic
+ families \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}}, meaning that all substitution instances
+ \isa{c\isactrlisub {\isasymtau}} for \isa{{\isasymtau}\ {\isacharequal}\ {\isasymsigma}{\isasymvartheta}} are valid.
+
+ The vector of \emph{type arguments} of constant \isa{c\isactrlisub {\isasymtau}}
+ wrt.\ the declaration \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}} is defined as the codomain of
+ the matcher \isa{{\isasymvartheta}\ {\isacharequal}\ {\isacharbraceleft}{\isacharquery}{\isasymalpha}\isactrlisub {\isadigit{1}}\ {\isasymmapsto}\ {\isasymtau}\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isacharquery}{\isasymalpha}\isactrlisub n\ {\isasymmapsto}\ {\isasymtau}\isactrlisub n{\isacharbraceright}} presented in canonical order \isa{{\isacharparenleft}{\isasymtau}\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymtau}\isactrlisub n{\isacharparenright}}. Within a given theory context,
+ there is a one-to-one correspondence between any constant \isa{c\isactrlisub {\isasymtau}} and the application \isa{c{\isacharparenleft}{\isasymtau}\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymtau}\isactrlisub n{\isacharparenright}} of its type arguments. For example, with \isa{plus\ {\isacharcolon}{\isacharcolon}\ {\isasymalpha}\ {\isasymRightarrow}\ {\isasymalpha}\ {\isasymRightarrow}\ {\isasymalpha}}, the instance \isa{plus\isactrlbsub nat\ {\isasymRightarrow}\ nat\ {\isasymRightarrow}\ nat\isactrlesub } corresponds to \isa{plus{\isacharparenleft}nat{\isacharparenright}}.
+
+ Constant declarations \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}} may contain sort constraints
+ for type variables in \isa{{\isasymsigma}}. These are observed by
+ type-inference as expected, but \emph{ignored} by the core logic.
+ This means the primitive logic is able to reason with instances of
+ polymorphic constants that the user-level type-checker would reject
+ due to violation of type class restrictions.
+
+ \medskip An \emph{atomic} term is either a variable or constant. A
+ \emph{term} is defined inductively over atomic terms, with
+ abstraction and application as follows: \isa{t\ {\isacharequal}\ b\ {\isacharbar}\ x\isactrlisub {\isasymtau}\ {\isacharbar}\ {\isacharquery}x\isactrlisub {\isasymtau}\ {\isacharbar}\ c\isactrlisub {\isasymtau}\ {\isacharbar}\ {\isasymlambda}\isactrlisub {\isasymtau}{\isachardot}\ t\ {\isacharbar}\ t\isactrlisub {\isadigit{1}}\ t\isactrlisub {\isadigit{2}}}.
+ Parsing and printing takes care of converting between the internal
+ de-Bruijn representation and an external one with named bound
+ variables. Subsequently, we shall use the latter named notation
+ instead of the internal de-Bruijn representation.
+
+ The inductive relation \isa{t\ {\isacharcolon}{\isacharcolon}\ {\isasymtau}} assigns a (unique) type to a
+ term according to the structure of atomic terms, abstractions, and
+ applications:
+ \[
+ \infer{\isa{a\isactrlisub {\isasymtau}\ {\isacharcolon}{\isacharcolon}\ {\isasymtau}}}{}
+ \qquad
+ \infer{\isa{{\isacharparenleft}{\isasymlambda}x\isactrlsub {\isasymtau}{\isachardot}\ t{\isacharparenright}\ {\isacharcolon}{\isacharcolon}\ {\isasymtau}\ {\isasymRightarrow}\ {\isasymsigma}}}{\isa{t\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}}}
+ \qquad
+ \infer{\isa{t\ u\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}}}{\isa{t\ {\isacharcolon}{\isacharcolon}\ {\isasymtau}\ {\isasymRightarrow}\ {\isasymsigma}} & \isa{u\ {\isacharcolon}{\isacharcolon}\ {\isasymtau}}}
+ \]
+ A \emph{well-typed term} is a term that can be typed according to these rules.
+
+ Typing information can be omitted: type-inference is able to
+ reconstruct the most general type of a raw term, while assigning
+ most general types to all of its variables and constants.
+ Type-inference depends on a context of type constraints for fixed
+ variables, and declarations for polymorphic constants.
+
+ The identity of atomic terms consists both of the name and the type
+ component. This means that different variables \isa{x\isactrlbsub {\isasymtau}\isactrlisub {\isadigit{1}}\isactrlesub } and \isa{x\isactrlbsub {\isasymtau}\isactrlisub {\isadigit{2}}\isactrlesub } may become the same after type
+ instantiation. Some outer layers of the system make it hard to
+ produce variables of the same name, but different types. In
+ contrast, mixed instances of polymorphic constants occur frequently.
+
+ \medskip The \emph{hidden polymorphism} of a term \isa{t\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}}
+ is the set of type variables occurring in \isa{t}, but not in
+ \isa{{\isasymsigma}}. This means that the term implicitly depends on type
+ arguments that are not accounted for in the result type, i.e.\ there are
+ different type instances \isa{t{\isasymvartheta}\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}} and \isa{t{\isasymvartheta}{\isacharprime}\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}} with the same type. This slightly
+ pathological situation notoriously demands additional care.
+
+ \medskip A \emph{term abbreviation} is a syntactic definition \isa{c\isactrlisub {\isasymsigma}\ {\isasymequiv}\ t} of a closed term \isa{t} of type \isa{{\isasymsigma}},
+ without any hidden polymorphism. A term abbreviation looks like a
+ constant in the syntax, but is expanded before entering the logical
+ core. Abbreviations are usually reverted when printing terms, using
+ \isa{t\ {\isasymrightarrow}\ c\isactrlisub {\isasymsigma}} as rules for higher-order rewriting.
+
+ \medskip Canonical operations on \isa{{\isasymlambda}}-terms include \isa{{\isasymalpha}{\isasymbeta}{\isasymeta}}-conversion: \isa{{\isasymalpha}}-conversion refers to capture-free
+ renaming of bound variables; \isa{{\isasymbeta}}-conversion contracts an
+ abstraction applied to an argument term, substituting the argument
+ in the body: \isa{{\isacharparenleft}{\isasymlambda}x{\isachardot}\ b{\isacharparenright}a} becomes \isa{b{\isacharbrackleft}a{\isacharslash}x{\isacharbrackright}}; \isa{{\isasymeta}}-conversion contracts vacuous application-abstraction: \isa{{\isasymlambda}x{\isachardot}\ f\ x} becomes \isa{f}, provided that the bound variable
+ does not occur in \isa{f}.
+
+ Terms are normally treated modulo \isa{{\isasymalpha}}-conversion, which is
+ implicit in the de-Bruijn representation. Names for bound variables
+ in abstractions are maintained separately as (meaningless) comments,
+ mostly for parsing and printing. Full \isa{{\isasymalpha}{\isasymbeta}{\isasymeta}}-conversion is
+ commonplace in various standard operations (\secref{sec:obj-rules})
+ that are based on higher-order unification and matching.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{term}\verb|type term| \\
+ \indexdef{}{ML}{op aconv}\verb|op aconv: term * term -> bool| \\
+ \indexdef{}{ML}{map\_types}\verb|map_types: (typ -> typ) -> term -> term| \\
+ \indexdef{}{ML}{fold\_types}\verb|fold_types: (typ -> 'a -> 'a) -> term -> 'a -> 'a| \\
+ \indexdef{}{ML}{map\_aterms}\verb|map_aterms: (term -> term) -> term -> term| \\
+ \indexdef{}{ML}{fold\_aterms}\verb|fold_aterms: (term -> 'a -> 'a) -> term -> 'a -> 'a| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML}{fastype\_of}\verb|fastype_of: term -> typ| \\
+ \indexdef{}{ML}{lambda}\verb|lambda: term -> term -> term| \\
+ \indexdef{}{ML}{betapply}\verb|betapply: term * term -> term| \\
+ \indexdef{}{ML}{Sign.declare\_const}\verb|Sign.declare_const: Properties.T -> (binding * typ) * mixfix ->|\isasep\isanewline%
+\verb| theory -> term * theory| \\
+ \indexdef{}{ML}{Sign.add\_abbrev}\verb|Sign.add_abbrev: string -> Properties.T -> binding * term ->|\isasep\isanewline%
+\verb| theory -> (term * term) * theory| \\
+ \indexdef{}{ML}{Sign.const\_typargs}\verb|Sign.const_typargs: theory -> string * typ -> typ list| \\
+ \indexdef{}{ML}{Sign.const\_instance}\verb|Sign.const_instance: theory -> string * typ list -> typ| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|term| represents de-Bruijn terms, with comments in
+ abstractions, and explicitly named free variables and constants;
+ this is a datatype with constructors \verb|Bound|, \verb|Free|, \verb|Var|, \verb|Const|, \verb|Abs|, \verb|op $|.
+
+ \item \isa{t}~\verb|aconv|~\isa{u} checks \isa{{\isasymalpha}}-equivalence of two terms. This is the basic equality relation
+ on type \verb|term|; raw datatype equality should only be used
+ for operations related to parsing or printing!
+
+ \item \verb|map_types|~\isa{f\ t} applies the mapping \isa{f} to all types occurring in \isa{t}.
+
+ \item \verb|fold_types|~\isa{f\ t} iterates the operation \isa{f} over all occurrences of types in \isa{t}; the term
+ structure is traversed from left to right.
+
+ \item \verb|map_aterms|~\isa{f\ t} applies the mapping \isa{f}
+ to all atomic terms (\verb|Bound|, \verb|Free|, \verb|Var|, \verb|Const|) occurring in \isa{t}.
+
+ \item \verb|fold_aterms|~\isa{f\ t} iterates the operation \isa{f} over all occurrences of atomic terms (\verb|Bound|, \verb|Free|,
+ \verb|Var|, \verb|Const|) in \isa{t}; the term structure is
+ traversed from left to right.
+
+ \item \verb|fastype_of|~\isa{t} determines the type of a
+ well-typed term. This operation is relatively slow, despite the
+ omission of any sanity checks.
+
+ \item \verb|lambda|~\isa{a\ b} produces an abstraction \isa{{\isasymlambda}a{\isachardot}\ b}, where occurrences of the atomic term \isa{a} in the
+ body \isa{b} are replaced by bound variables.
+
+ \item \verb|betapply|~\isa{{\isacharparenleft}t{\isacharcomma}\ u{\isacharparenright}} produces an application \isa{t\ u}, with topmost \isa{{\isasymbeta}}-conversion if \isa{t} is an
+ abstraction.
+
+ \item \verb|Sign.declare_const|~\isa{properties\ {\isacharparenleft}{\isacharparenleft}c{\isacharcomma}\ {\isasymsigma}{\isacharparenright}{\isacharcomma}\ mx{\isacharparenright}}
+ declares a new constant \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}} with optional mixfix
+ syntax.
+
+ \item \verb|Sign.add_abbrev|~\isa{print{\isacharunderscore}mode\ properties\ {\isacharparenleft}c{\isacharcomma}\ t{\isacharparenright}}
+ introduces a new term abbreviation \isa{c\ {\isasymequiv}\ t}.
+
+ \item \verb|Sign.const_typargs|~\isa{thy\ {\isacharparenleft}c{\isacharcomma}\ {\isasymtau}{\isacharparenright}} and \verb|Sign.const_instance|~\isa{thy\ {\isacharparenleft}c{\isacharcomma}\ {\isacharbrackleft}{\isasymtau}\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymtau}\isactrlisub n{\isacharbrackright}{\isacharparenright}}
+ convert between two representations of polymorphic constants: full
+ type instance vs.\ compact type arguments form.
+
+ \end{description}%
+\end{isamarkuptext}%
+\isamarkuptrue%
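+\medskip A small {\ML} sketch (not from the Isabelle sources; the base
+type \verb|T| is a hypothetical placeholder) illustrating term
+construction and \isa{{\isasymbeta}}-conversion with the operations
+above:
+
+\begin{verbatim}
+  val T = Type ("nat", []);
+  val x = Free ("x", T);
+  val a = Free ("a", T);
+
+  val identity = lambda x x;         (* Abs ("x", T, Bound 0) *)
+  val app = betapply (identity, a);  (* beta-reduces to a *)
+
+  val tau = fastype_of app;          (* T *)
+  val same = a aconv app;            (* true *)
+\end{verbatim}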
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsection{Theorems \label{sec:thms}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+A \emph{proposition} is a well-typed term of type \isa{prop}, a
+ \emph{theorem} is a proven proposition (depending on a context of
+ hypotheses and the background theory). Primitive inferences include
+ plain Natural Deduction rules for the primary connectives \isa{{\isasymAnd}} and \isa{{\isasymLongrightarrow}} of the framework. There is also a builtin
+ notion of equality/equivalence \isa{{\isasymequiv}}.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isamarkupsubsection{Primitive connectives and rules \label{sec:prim-rules}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The theory \isa{Pure} contains constant declarations for the
+ primitive connectives \isa{{\isasymAnd}}, \isa{{\isasymLongrightarrow}}, and \isa{{\isasymequiv}} of
+ the logical framework, see \figref{fig:pure-connectives}. The
+ derivability judgment \isa{A\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ A\isactrlisub n\ {\isasymturnstile}\ B} is
+ defined inductively by the primitive inferences given in
+ \figref{fig:prim-rules}, with the global restriction that the
+ hypotheses must \emph{not} contain any schematic variables. The
+ builtin equality is conceptually axiomatized as shown in
+ \figref{fig:pure-equality}, although the implementation works
+ directly with derived inferences.
+
+ \begin{figure}[htb]
+ \begin{center}
+ \begin{tabular}{ll}
+ \isa{all\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}{\isasymalpha}\ {\isasymRightarrow}\ prop{\isacharparenright}\ {\isasymRightarrow}\ prop} & universal quantification (binder \isa{{\isasymAnd}}) \\
+ \isa{{\isasymLongrightarrow}\ {\isacharcolon}{\isacharcolon}\ prop\ {\isasymRightarrow}\ prop\ {\isasymRightarrow}\ prop} & implication (right associative infix) \\
+ \isa{{\isasymequiv}\ {\isacharcolon}{\isacharcolon}\ {\isasymalpha}\ {\isasymRightarrow}\ {\isasymalpha}\ {\isasymRightarrow}\ prop} & equality relation (infix) \\
+ \end{tabular}
+ \caption{Primitive connectives of Pure}\label{fig:pure-connectives}
+ \end{center}
+ \end{figure}
+
+ \begin{figure}[htb]
+ \begin{center}
+ \[
+ \infer[\isa{{\isacharparenleft}axiom{\isacharparenright}}]{\isa{{\isasymturnstile}\ A}}{\isa{A\ {\isasymin}\ {\isasymTheta}}}
+ \qquad
+ \infer[\isa{{\isacharparenleft}assume{\isacharparenright}}]{\isa{A\ {\isasymturnstile}\ A}}{}
+ \]
+ \[
+ \infer[\isa{{\isacharparenleft}{\isasymAnd}{\isacharunderscore}intro{\isacharparenright}}]{\isa{{\isasymGamma}\ {\isasymturnstile}\ {\isasymAnd}x{\isachardot}\ b{\isacharbrackleft}x{\isacharbrackright}}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ b{\isacharbrackleft}x{\isacharbrackright}} & \isa{x\ {\isasymnotin}\ {\isasymGamma}}}
+ \qquad
+ \infer[\isa{{\isacharparenleft}{\isasymAnd}{\isacharunderscore}elim{\isacharparenright}}]{\isa{{\isasymGamma}\ {\isasymturnstile}\ b{\isacharbrackleft}a{\isacharbrackright}}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ {\isasymAnd}x{\isachardot}\ b{\isacharbrackleft}x{\isacharbrackright}}}
+ \]
+ \[
+ \infer[\isa{{\isacharparenleft}{\isasymLongrightarrow}{\isacharunderscore}intro{\isacharparenright}}]{\isa{{\isasymGamma}\ {\isacharminus}\ A\ {\isasymturnstile}\ A\ {\isasymLongrightarrow}\ B}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ B}}
+ \qquad
+ \infer[\isa{{\isacharparenleft}{\isasymLongrightarrow}{\isacharunderscore}elim{\isacharparenright}}]{\isa{{\isasymGamma}\isactrlsub {\isadigit{1}}\ {\isasymunion}\ {\isasymGamma}\isactrlsub {\isadigit{2}}\ {\isasymturnstile}\ B}}{\isa{{\isasymGamma}\isactrlsub {\isadigit{1}}\ {\isasymturnstile}\ A\ {\isasymLongrightarrow}\ B} & \isa{{\isasymGamma}\isactrlsub {\isadigit{2}}\ {\isasymturnstile}\ A}}
+ \]
+ \caption{Primitive inferences of Pure}\label{fig:prim-rules}
+ \end{center}
+ \end{figure}
+
+ \begin{figure}[htb]
+ \begin{center}
+ \begin{tabular}{ll}
+ \isa{{\isasymturnstile}\ {\isacharparenleft}{\isasymlambda}x{\isachardot}\ b{\isacharbrackleft}x{\isacharbrackright}{\isacharparenright}\ a\ {\isasymequiv}\ b{\isacharbrackleft}a{\isacharbrackright}} & \isa{{\isasymbeta}}-conversion \\
+ \isa{{\isasymturnstile}\ x\ {\isasymequiv}\ x} & reflexivity \\
+ \isa{{\isasymturnstile}\ x\ {\isasymequiv}\ y\ {\isasymLongrightarrow}\ P\ x\ {\isasymLongrightarrow}\ P\ y} & substitution \\
+ \isa{{\isasymturnstile}\ {\isacharparenleft}{\isasymAnd}x{\isachardot}\ f\ x\ {\isasymequiv}\ g\ x{\isacharparenright}\ {\isasymLongrightarrow}\ f\ {\isasymequiv}\ g} & extensionality \\
+ \isa{{\isasymturnstile}\ {\isacharparenleft}A\ {\isasymLongrightarrow}\ B{\isacharparenright}\ {\isasymLongrightarrow}\ {\isacharparenleft}B\ {\isasymLongrightarrow}\ A{\isacharparenright}\ {\isasymLongrightarrow}\ A\ {\isasymequiv}\ B} & logical equivalence \\
+ \end{tabular}
+ \caption{Conceptual axiomatization of Pure equality}\label{fig:pure-equality}
+ \end{center}
+ \end{figure}
+
+ The introduction and elimination rules for \isa{{\isasymAnd}} and \isa{{\isasymLongrightarrow}} are analogous to formation of dependently typed \isa{{\isasymlambda}}-terms representing the underlying proof objects. Proof terms
+ are irrelevant in the Pure logic, though; they cannot occur within
+ propositions. The system provides a runtime option to record
+ explicit proof terms for primitive inferences. Thus all three
+ levels of \isa{{\isasymlambda}}-calculus become explicit: \isa{{\isasymRightarrow}} for
+ terms, and \isa{{\isasymAnd}{\isacharslash}{\isasymLongrightarrow}} for proofs (cf.\
+ \cite{Berghofer-Nipkow:2000:TPHOL}).
+
+ Observe that locally fixed parameters (as in \isa{{\isasymAnd}{\isacharunderscore}intro}) need
+ not be recorded in the hypotheses, because the simple syntactic
+ types of Pure are always inhabitable. ``Assumptions'' \isa{x\ {\isacharcolon}{\isacharcolon}\ {\isasymtau}} for type-membership are only present as long as some \isa{x\isactrlisub {\isasymtau}} occurs in the statement body.\footnote{This is the key
+ difference to ``\isa{{\isasymlambda}HOL}'' in the PTS framework
+ \cite{Barendregt-Geuvers:2001}, where hypotheses \isa{x\ {\isacharcolon}\ A} are
+ treated uniformly for propositions and types.}
+
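+ For instance, the rules \isa{assume} and \isa{{\isasymLongrightarrow}{\isacharunderscore}intro}
+ combine to derive the trivial theorem \isa{{\isasymturnstile}\ A\ {\isasymLongrightarrow}\ A}:
+ \[
+ \infer[\isa{{\isacharparenleft}{\isasymLongrightarrow}{\isacharunderscore}intro{\isacharparenright}}]{\isa{{\isasymturnstile}\ A\ {\isasymLongrightarrow}\ A}}{\infer[\isa{{\isacharparenleft}assume{\isacharparenright}}]{\isa{A\ {\isasymturnstile}\ A}}{}}
+ \]
+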
+ \medskip The axiomatization of a theory is implicitly closed by
+ forming all instances of type and term variables: \isa{{\isasymturnstile}\ A{\isasymvartheta}} holds for any substitution instance of an axiom
+ \isa{{\isasymturnstile}\ A}. By pushing substitutions through derivations
+ inductively, we also get admissible \isa{generalize} and \isa{instance} rules as shown in \figref{fig:subst-rules}.
+
+ \begin{figure}[htb]
+ \begin{center}
+ \[
+ \infer{\isa{{\isasymGamma}\ {\isasymturnstile}\ B{\isacharbrackleft}{\isacharquery}{\isasymalpha}{\isacharbrackright}}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ B{\isacharbrackleft}{\isasymalpha}{\isacharbrackright}} & \isa{{\isasymalpha}\ {\isasymnotin}\ {\isasymGamma}}}
+ \quad
+ \infer[\quad\isa{{\isacharparenleft}generalize{\isacharparenright}}]{\isa{{\isasymGamma}\ {\isasymturnstile}\ B{\isacharbrackleft}{\isacharquery}x{\isacharbrackright}}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ B{\isacharbrackleft}x{\isacharbrackright}} & \isa{x\ {\isasymnotin}\ {\isasymGamma}}}
+ \]
+ \[
+ \infer{\isa{{\isasymGamma}\ {\isasymturnstile}\ B{\isacharbrackleft}{\isasymtau}{\isacharbrackright}}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ B{\isacharbrackleft}{\isacharquery}{\isasymalpha}{\isacharbrackright}}}
+ \quad
+ \infer[\quad\isa{{\isacharparenleft}instantiate{\isacharparenright}}]{\isa{{\isasymGamma}\ {\isasymturnstile}\ B{\isacharbrackleft}t{\isacharbrackright}}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ B{\isacharbrackleft}{\isacharquery}x{\isacharbrackright}}}
+ \]
+ \caption{Admissible substitution rules}\label{fig:subst-rules}
+ \end{center}
+ \end{figure}
+
+ Note that \isa{instantiate} does not require an explicit
+ side-condition, because \isa{{\isasymGamma}} may never contain schematic
+ variables.
+
+ In principle, variables could be substituted in hypotheses as well,
+ but this would disrupt the monotonicity of reasoning: deriving
+ \isa{{\isasymGamma}{\isasymvartheta}\ {\isasymturnstile}\ B{\isasymvartheta}} from \isa{{\isasymGamma}\ {\isasymturnstile}\ B} is
+ correct, but \isa{{\isasymGamma}{\isasymvartheta}\ {\isasymsupseteq}\ {\isasymGamma}} does not necessarily hold:
+ the result belongs to a different proof context.
+
+ \medskip An \emph{oracle} is a function that produces axioms on the
+ fly. Logically, this is an instance of the \isa{axiom} rule
+ (\figref{fig:prim-rules}), but there is an operational difference.
+ The system always records oracle invocations within derivations of
+ theorems by a unique tag.
+
+ Axiomatizations should be limited to the bare minimum, typically as
+ part of the initial logical basis of an object-logic formalization.
+ Later on, theories are usually developed in a strictly definitional
+ fashion, by stating only certain equalities over new constants.
+
+ A \emph{simple definition} consists of a constant declaration \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}} together with an axiom \isa{{\isasymturnstile}\ c\ {\isasymequiv}\ t}, where \isa{t\ {\isacharcolon}{\isacharcolon}\ {\isasymsigma}} is a closed term without any hidden polymorphism. The RHS
+ may depend on further defined constants, but not \isa{c} itself.
+ Definitions of functions may be presented as \isa{c\ \isactrlvec x\ {\isasymequiv}\ t} instead of the puristic \isa{c\ {\isasymequiv}\ {\isasymlambda}\isactrlvec x{\isachardot}\ t}.
+
+ An \emph{overloaded definition} consists of a collection of axioms
+ for the same constant, with zero or one equations \isa{c{\isacharparenleft}{\isacharparenleft}\isactrlvec {\isasymalpha}{\isacharparenright}{\isasymkappa}{\isacharparenright}\ {\isasymequiv}\ t} for each type constructor \isa{{\isasymkappa}} (for
+ distinct variables \isa{\isactrlvec {\isasymalpha}}). The RHS may mention
+ previously defined constants as above, or arbitrary constants \isa{d{\isacharparenleft}{\isasymalpha}\isactrlisub i{\isacharparenright}} for some \isa{{\isasymalpha}\isactrlisub i} projected from \isa{\isactrlvec {\isasymalpha}}. Thus overloaded definitions essentially work by
+ primitive recursion over the syntactic structure of a single type
+ argument.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{ctyp}\verb|type ctyp| \\
+ \indexdef{}{ML type}{cterm}\verb|type cterm| \\
+ \indexdef{}{ML}{Thm.ctyp\_of}\verb|Thm.ctyp_of: theory -> typ -> ctyp| \\
+ \indexdef{}{ML}{Thm.cterm\_of}\verb|Thm.cterm_of: theory -> term -> cterm| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML type}{thm}\verb|type thm| \\
+ \indexdef{}{ML}{proofs}\verb|proofs: int ref| \\
+ \indexdef{}{ML}{Thm.assume}\verb|Thm.assume: cterm -> thm| \\
+ \indexdef{}{ML}{Thm.forall\_intr}\verb|Thm.forall_intr: cterm -> thm -> thm| \\
+ \indexdef{}{ML}{Thm.forall\_elim}\verb|Thm.forall_elim: cterm -> thm -> thm| \\
+ \indexdef{}{ML}{Thm.implies\_intr}\verb|Thm.implies_intr: cterm -> thm -> thm| \\
+ \indexdef{}{ML}{Thm.implies\_elim}\verb|Thm.implies_elim: thm -> thm -> thm| \\
+ \indexdef{}{ML}{Thm.generalize}\verb|Thm.generalize: string list * string list -> int -> thm -> thm| \\
+ \indexdef{}{ML}{Thm.instantiate}\verb|Thm.instantiate: (ctyp * ctyp) list * (cterm * cterm) list -> thm -> thm| \\
+ \indexdef{}{ML}{Thm.axiom}\verb|Thm.axiom: theory -> string -> thm| \\
+ \indexdef{}{ML}{Thm.add\_oracle}\verb|Thm.add_oracle: binding * ('a -> cterm) -> theory|\isasep\isanewline%
+\verb| -> (string * ('a -> thm)) * theory| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML}{Theory.add\_axioms\_i}\verb|Theory.add_axioms_i: (binding * term) list -> theory -> theory| \\
+ \indexdef{}{ML}{Theory.add\_deps}\verb|Theory.add_deps: string -> string * typ -> (string * typ) list -> theory -> theory| \\
+ \indexdef{}{ML}{Theory.add\_defs\_i}\verb|Theory.add_defs_i: bool -> bool -> (binding * term) list -> theory -> theory| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|ctyp| and \verb|cterm| represent certified types
+ and terms, respectively. These are abstract datatypes that
+ guarantee that their values have passed the full well-formedness (and
+ well-typedness) checks, relative to the declarations of type
+ constructors, constants etc. in the theory.
+
+ \item \verb|Thm.ctyp_of|~\isa{thy\ {\isasymtau}} and \verb|Thm.cterm_of|~\isa{thy\ t} explicitly check types and terms,
+ respectively. This also involves some basic normalizations, such as
+ expansion of type and term abbreviations from the theory context.
+
+ Re-certification is relatively slow and should be avoided in tight
+ reasoning loops. There are separate operations to decompose
+ certified entities (including actual theorems).
+
+ \item \verb|thm| represents proven propositions. This is an
+ abstract datatype that guarantees that its values have been
+ constructed by basic principles of the \verb|Thm| module.
+ Every \verb|thm| value contains a sliding back-reference to the
+ enclosing theory, cf.\ \secref{sec:context-theory}.
+
+ \item \verb|proofs| determines the detail of proof recording within
+ \verb|thm| values: \verb|0| records only the names of oracles,
+ \verb|1| records oracle names and propositions, \verb|2| additionally
+ records full proof terms. Officially named theorems that contribute
+ to a result are always recorded.
+
+ \item \verb|Thm.assume|, \verb|Thm.forall_intr|, \verb|Thm.forall_elim|, \verb|Thm.implies_intr|, and \verb|Thm.implies_elim|
+ correspond to the primitive inferences of \figref{fig:prim-rules}.
+
+ \item \verb|Thm.generalize|~\isa{{\isacharparenleft}\isactrlvec {\isasymalpha}{\isacharcomma}\ \isactrlvec x{\isacharparenright}}
+ corresponds to the \isa{generalize} rules of
+ \figref{fig:subst-rules}. Here collections of type and term
+ variables are generalized simultaneously, specified by the given
+ basic names.
+
+ \item \verb|Thm.instantiate|~\isa{{\isacharparenleft}\isactrlvec {\isasymalpha}\isactrlisub s{\isacharcomma}\ \isactrlvec x\isactrlisub {\isasymtau}{\isacharparenright}} corresponds to the \isa{instantiate} rules
+ of \figref{fig:subst-rules}. Type variables are substituted before
+ term variables. Note that the types in \isa{\isactrlvec x\isactrlisub {\isasymtau}}
+ refer to the instantiated versions.
+
+ \item \verb|Thm.axiom|~\isa{thy\ name} retrieves a named
+ axiom, cf.\ \isa{axiom} in \figref{fig:prim-rules}.
+
+ \item \verb|Thm.add_oracle|~\isa{{\isacharparenleft}binding{\isacharcomma}\ oracle{\isacharparenright}} produces a named
+ oracle rule, essentially generating arbitrary axioms on the fly,
+ cf.\ \isa{axiom} in \figref{fig:prim-rules}.
+
+ \item \verb|Theory.add_axioms_i|~\isa{{\isacharbrackleft}{\isacharparenleft}name{\isacharcomma}\ A{\isacharparenright}{\isacharcomma}\ {\isasymdots}{\isacharbrackright}} declares
+ arbitrary propositions as axioms.
+
+ \item \verb|Theory.add_deps|~\isa{name\ c\isactrlisub {\isasymtau}\ \isactrlvec d\isactrlisub {\isasymsigma}} declares dependencies of a named specification
+ for constant \isa{c\isactrlisub {\isasymtau}}, relative to existing
+ specifications for constants \isa{\isactrlvec d\isactrlisub {\isasymsigma}}.
+
+ \item \verb|Theory.add_defs_i|~\isa{unchecked\ overloaded\ {\isacharbrackleft}{\isacharparenleft}name{\isacharcomma}\ c\ \isactrlvec x\ {\isasymequiv}\ t{\isacharparenright}{\isacharcomma}\ {\isasymdots}{\isacharbrackright}} states a definitional axiom for an existing
+ constant \isa{c}. Dependencies are recorded (cf.\ \verb|Theory.add_deps|), unless the \isa{unchecked} option is set.
+
+ \end{description}%
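+
+ \medskip The following {\ML} sketch illustrates how these primitive
+ operations fit together.  It is a minimal example, assuming a
+ background theory value \verb|thy| (not part of the interface above)
+ and using the standard term constructors \verb|Free| and \verb|propT|;
+ it derives the trivial theorem \isa{A\ {\isasymLongrightarrow}\ A} from scratch.
+
+\begin{verbatim}
+  (* certify a propositional variable A :: prop *)
+  val A = Free ("A", propT);
+  val cA = Thm.cterm_of thy A;
+
+  (* assume A, then discharge the assumption: |- A ==> A *)
+  val asm = Thm.assume cA;
+  val triv = Thm.implies_intr cA asm;
+
+  (* optionally record full proof terms for subsequent inferences *)
+  (* proofs := 2; *)
+\end{verbatim}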
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Auxiliary definitions%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+Theory \isa{Pure} provides a few auxiliary definitions, see
+ \figref{fig:pure-aux}. These special constants are normally not
+ exposed to the user, but appear in internal encodings.
+
+ \begin{figure}[htb]
+ \begin{center}
+ \begin{tabular}{ll}
+ \isa{conjunction\ {\isacharcolon}{\isacharcolon}\ prop\ {\isasymRightarrow}\ prop\ {\isasymRightarrow}\ prop} & (infix \isa{{\isacharampersand}}) \\
+ \isa{{\isasymturnstile}\ A\ {\isacharampersand}\ B\ {\isasymequiv}\ {\isacharparenleft}{\isasymAnd}C{\isachardot}\ {\isacharparenleft}A\ {\isasymLongrightarrow}\ B\ {\isasymLongrightarrow}\ C{\isacharparenright}\ {\isasymLongrightarrow}\ C{\isacharparenright}} \\[1ex]
+ \isa{prop\ {\isacharcolon}{\isacharcolon}\ prop\ {\isasymRightarrow}\ prop} & (prefix \isa{{\isacharhash}}, suppressed) \\
+ \isa{{\isacharhash}A\ {\isasymequiv}\ A} \\[1ex]
+ \isa{term\ {\isacharcolon}{\isacharcolon}\ {\isasymalpha}\ {\isasymRightarrow}\ prop} & (prefix \isa{TERM}) \\
+ \isa{term\ x\ {\isasymequiv}\ {\isacharparenleft}{\isasymAnd}A{\isachardot}\ A\ {\isasymLongrightarrow}\ A{\isacharparenright}} \\[1ex]
+ \isa{TYPE\ {\isacharcolon}{\isacharcolon}\ {\isasymalpha}\ itself} & (prefix \isa{TYPE}) \\
+ \isa{{\isacharparenleft}unspecified{\isacharparenright}} \\
+ \end{tabular}
+ \caption{Definitions of auxiliary connectives}\label{fig:pure-aux}
+ \end{center}
+ \end{figure}
+
+ Derived conjunction rules include introduction \isa{A\ {\isasymLongrightarrow}\ B\ {\isasymLongrightarrow}\ A\ {\isacharampersand}\ B}, and destructions \isa{A\ {\isacharampersand}\ B\ {\isasymLongrightarrow}\ A} and \isa{A\ {\isacharampersand}\ B\ {\isasymLongrightarrow}\ B}.
+ Conjunction makes it possible to treat simultaneous assumptions and conclusions
+ uniformly. For example, multiple claims are intermediately
+ represented as explicit conjunction, but this is refined into
+ separate sub-goals before the user continues the proof; the final
+ result is projected into a list of theorems (cf.\
+ \secref{sec:tactical-goals}).
+
+ The \isa{prop} marker (\isa{{\isacharhash}}) makes arbitrarily complex
+ propositions appear as atomic, without changing the meaning: \isa{{\isasymGamma}\ {\isasymturnstile}\ A} and \isa{{\isasymGamma}\ {\isasymturnstile}\ {\isacharhash}A} are interchangeable. See
+ \secref{sec:tactical-goals} for specific operations.
+
+ The \isa{term} marker turns any well-typed term into a derivable
+ proposition: \isa{{\isasymturnstile}\ TERM\ t} holds unconditionally. Although
+ this is logically vacuous, it allows terms and proofs to be treated
+ uniformly, similar to a type-theoretic framework.
+
+ The \isa{TYPE} constructor is the canonical representative of
+ the unspecified type \isa{{\isasymalpha}\ itself}; it essentially injects the
+ language of types into that of terms. There is specific notation
+ \isa{TYPE{\isacharparenleft}{\isasymtau}{\isacharparenright}} for \isa{TYPE\isactrlbsub {\isasymtau}\ itself\isactrlesub }.
+ Although devoid of any particular meaning, \isa{TYPE{\isacharparenleft}{\isasymtau}{\isacharparenright}} accounts for the type \isa{{\isasymtau}} within the term
+ language. In particular, \isa{TYPE{\isacharparenleft}{\isasymalpha}{\isacharparenright}} may be used as formal
+ argument in primitive definitions, in order to circumvent hidden
+ polymorphism (cf.\ \secref{sec:terms}). For example, \isa{c\ TYPE{\isacharparenleft}{\isasymalpha}{\isacharparenright}\ {\isasymequiv}\ A{\isacharbrackleft}{\isasymalpha}{\isacharbrackright}} defines \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isasymalpha}\ itself\ {\isasymRightarrow}\ prop} in terms of
+ a proposition \isa{A} that depends on an additional type
+ argument, which is essentially a predicate on types.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{Conjunction.intr}\verb|Conjunction.intr: thm -> thm -> thm| \\
+ \indexdef{}{ML}{Conjunction.elim}\verb|Conjunction.elim: thm -> thm * thm| \\
+ \indexdef{}{ML}{Drule.mk\_term}\verb|Drule.mk_term: cterm -> thm| \\
+ \indexdef{}{ML}{Drule.dest\_term}\verb|Drule.dest_term: thm -> cterm| \\
+ \indexdef{}{ML}{Logic.mk\_type}\verb|Logic.mk_type: typ -> term| \\
+ \indexdef{}{ML}{Logic.dest\_type}\verb|Logic.dest_type: term -> typ| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Conjunction.intr| derives \isa{A\ {\isacharampersand}\ B} from \isa{A} and \isa{B}.
+
+ \item \verb|Conjunction.elim| derives \isa{A} and \isa{B}
+ from \isa{A\ {\isacharampersand}\ B}.
+
+ \item \verb|Drule.mk_term| derives \isa{TERM\ t}.
+
+ \item \verb|Drule.dest_term| recovers term \isa{t} from \isa{TERM\ t}.
+
+ \item \verb|Logic.mk_type|~\isa{{\isasymtau}} produces the term \isa{TYPE{\isacharparenleft}{\isasymtau}{\isacharparenright}}.
+
+ \item \verb|Logic.dest_type|~\isa{TYPE{\isacharparenleft}{\isasymtau}{\isacharparenright}} recovers the type
+ \isa{{\isasymtau}}.
+
+ \end{description}%
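+
+ \medskip The following sketch shows these operations in context; the
+ theorem values \verb|thA|, \verb|thB| and the certified term \verb|ct|
+ are hypothetical inputs, not part of the interface above.
+
+\begin{verbatim}
+  (* pair up two theorems and take the conjunction apart again *)
+  val thAB = Conjunction.intr thA thB;
+  val (thA', thB') = Conjunction.elim thAB;
+
+  (* the vacuous theorem TERM t, and the term recovered from it *)
+  val th_term = Drule.mk_term ct;
+  val ct' = Drule.dest_term th_term;
+
+  (* TYPE(prop) as a term of type "prop itself", and back again *)
+  val ty_term = Logic.mk_type propT;
+  val ty = Logic.dest_type ty_term;
+\end{verbatim}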
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsection{Object-level rules \label{sec:obj-rules}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The primitive inferences covered so far mostly serve foundational
+ purposes. User-level reasoning usually works via object-level rules
+ that are represented as theorems of Pure. Composition of rules
+ involves \emph{backchaining}, \emph{higher-order unification} modulo
+ \isa{{\isasymalpha}{\isasymbeta}{\isasymeta}}-conversion of \isa{{\isasymlambda}}-terms, and so-called
+ \emph{lifting} of rules into a context of \isa{{\isasymAnd}} and \isa{{\isasymLongrightarrow}} connectives. Thus the full power of higher-order Natural
+ Deduction in Isabelle/Pure becomes readily available.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isamarkupsubsection{Hereditary Harrop Formulae%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The idea of object-level rules is to model Natural Deduction
+ inferences in the style of Gentzen \cite{Gentzen:1935}, but we allow
+ arbitrary nesting similar to \cite{extensions91}. The most basic
+ rule format is that of a \emph{Horn Clause}:
+ \[
+ \infer{\isa{A}}{\isa{A\isactrlsub {\isadigit{1}}} & \isa{{\isasymdots}} & \isa{A\isactrlsub n}}
+ \]
+ where \isa{A{\isacharcomma}\ A\isactrlsub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ A\isactrlsub n} are atomic propositions
+ of the framework, usually of the form \isa{Trueprop\ B}, where
+ \isa{B} is a (compound) object-level statement. This
+ object-level inference corresponds to an iterated implication in
+ Pure like this:
+ \[
+ \isa{A\isactrlsub {\isadigit{1}}\ {\isasymLongrightarrow}\ {\isasymdots}\ A\isactrlsub n\ {\isasymLongrightarrow}\ A}
+ \]
+ As an example consider conjunction introduction: \isa{A\ {\isasymLongrightarrow}\ B\ {\isasymLongrightarrow}\ A\ {\isasymand}\ B}. Any parameters occurring in such rule statements are
+ conceptually treated as arbitrary:
+ \[
+ \isa{{\isasymAnd}x\isactrlsub {\isadigit{1}}\ {\isasymdots}\ x\isactrlsub m{\isachardot}\ A\isactrlsub {\isadigit{1}}\ x\isactrlsub {\isadigit{1}}\ {\isasymdots}\ x\isactrlsub m\ {\isasymLongrightarrow}\ {\isasymdots}\ A\isactrlsub n\ x\isactrlsub {\isadigit{1}}\ {\isasymdots}\ x\isactrlsub m\ {\isasymLongrightarrow}\ A\ x\isactrlsub {\isadigit{1}}\ {\isasymdots}\ x\isactrlsub m}
+ \]
+
+ Nesting of rules means that the positions of \isa{A\isactrlsub i} may
+ again hold compound rules, not just atomic propositions.
+ Propositions of this format are called \emph{Hereditary Harrop
+ Formulae} in the literature \cite{Miller:1991}. Here we give an
+ inductive characterization as follows:
+
+ \medskip
+ \begin{tabular}{ll}
+ \isa{\isactrlbold x} & set of variables \\
+ \isa{\isactrlbold A} & set of atomic propositions \\
+ \isa{\isactrlbold H\ \ {\isacharequal}\ \ {\isasymAnd}\isactrlbold x\isactrlsup {\isacharasterisk}{\isachardot}\ \isactrlbold H\isactrlsup {\isacharasterisk}\ {\isasymLongrightarrow}\ \isactrlbold A} & set of Hereditary Harrop Formulae \\
+ \end{tabular}
+ \medskip
+
+ \noindent Thus we essentially impose nesting levels on propositions
+ formed from \isa{{\isasymAnd}} and \isa{{\isasymLongrightarrow}}. At each level there is a
+ prefix of parameters and compound premises, concluding an atomic
+ proposition. Typical examples are \isa{{\isasymlongrightarrow}}-introduction \isa{{\isacharparenleft}A\ {\isasymLongrightarrow}\ B{\isacharparenright}\ {\isasymLongrightarrow}\ A\ {\isasymlongrightarrow}\ B} or mathematical induction \isa{P\ {\isadigit{0}}\ {\isasymLongrightarrow}\ {\isacharparenleft}{\isasymAnd}n{\isachardot}\ P\ n\ {\isasymLongrightarrow}\ P\ {\isacharparenleft}Suc\ n{\isacharparenright}{\isacharparenright}\ {\isasymLongrightarrow}\ P\ n}. Even deeper nesting occurs in well-founded
+ induction \isa{{\isacharparenleft}{\isasymAnd}x{\isachardot}\ {\isacharparenleft}{\isasymAnd}y{\isachardot}\ y\ {\isasymprec}\ x\ {\isasymLongrightarrow}\ P\ y{\isacharparenright}\ {\isasymLongrightarrow}\ P\ x{\isacharparenright}\ {\isasymLongrightarrow}\ P\ x}, but this
+ already marks the limit of rule complexity seen in practice.
+
+ \medskip Regular user-level inferences in Isabelle/Pure always
+ maintain the following canonical form of results:
+
+ \begin{itemize}
+
+ \item Normalization by \isa{{\isacharparenleft}A\ {\isasymLongrightarrow}\ {\isacharparenleft}{\isasymAnd}x{\isachardot}\ B\ x{\isacharparenright}{\isacharparenright}\ {\isasymequiv}\ {\isacharparenleft}{\isasymAnd}x{\isachardot}\ A\ {\isasymLongrightarrow}\ B\ x{\isacharparenright}},
+ which is a theorem of Pure, means that quantifiers are pushed in
+ front of implication at each level of nesting. The normal form is a
+ Hereditary Harrop Formula.
+
+ \item The outermost prefix of parameters is represented via
+ schematic variables: instead of \isa{{\isasymAnd}\isactrlvec x{\isachardot}\ \isactrlvec H\ \isactrlvec x\ {\isasymLongrightarrow}\ A\ \isactrlvec x} we have \isa{\isactrlvec H\ {\isacharquery}\isactrlvec x\ {\isasymLongrightarrow}\ A\ {\isacharquery}\isactrlvec x}.
+ Note that this representation loses information about the order of
+ parameters, and vacuous quantifiers vanish automatically.
+
+ \end{itemize}%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{MetaSimplifier.norm\_hhf}\verb|MetaSimplifier.norm_hhf: thm -> thm| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|MetaSimplifier.norm_hhf|~\isa{thm} normalizes the given
+ theorem according to the canonical form specified above. This is
+ occasionally helpful to repair some low-level tools that do not
+ handle Hereditary Harrop Formulae properly.
+
+ \end{description}%
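+
+ \medskip For example, given some rule value \verb|rule| (hypothetical),
+ the canonical form described above is obtained as follows:
+
+\begin{verbatim}
+  (* push quantifiers in front of implications at each nesting level *)
+  val rule' = MetaSimplifier.norm_hhf rule;
+\end{verbatim}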
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Rule composition%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The rule calculus of Isabelle/Pure provides two main inferences:
+ \hyperlink{inference.resolution}{\mbox{\isa{resolution}}} (i.e.\ back-chaining of rules) and
+ \hyperlink{inference.assumption}{\mbox{\isa{assumption}}} (i.e.\ closing a branch), both modulo
+ higher-order unification. There are also combined variants, notably
+ \hyperlink{inference.elim-resolution}{\mbox{\isa{elim{\isacharunderscore}resolution}}} and \hyperlink{inference.dest-resolution}{\mbox{\isa{dest{\isacharunderscore}resolution}}}.
+
+ To understand the all-important \hyperlink{inference.resolution}{\mbox{\isa{resolution}}} principle,
+ we first consider raw \indexdef{}{inference}{composition}\hypertarget{inference.composition}{\hyperlink{inference.composition}{\mbox{\isa{composition}}}} (modulo
+ higher-order unification with substitution \isa{{\isasymvartheta}}):
+ \[
+ \infer[(\indexdef{}{inference}{composition}\hypertarget{inference.composition}{\hyperlink{inference.composition}{\mbox{\isa{composition}}}})]{\isa{\isactrlvec A{\isasymvartheta}\ {\isasymLongrightarrow}\ C{\isasymvartheta}}}
+ {\isa{\isactrlvec A\ {\isasymLongrightarrow}\ B} & \isa{B{\isacharprime}\ {\isasymLongrightarrow}\ C} & \isa{B{\isasymvartheta}\ {\isacharequal}\ B{\isacharprime}{\isasymvartheta}}}
+ \]
+ Here the conclusion of the first rule is unified with the premise of
+ the second; the resulting rule instance inherits the premises of the
+ first and conclusion of the second. Note that \isa{C} can again
+ consist of iterated implications. We can also permute the premises
+ of the second rule back-and-forth in order to compose with \isa{B{\isacharprime}} in any position (subsequently we shall always refer to
+ position 1 w.l.o.g.).
+
+ In \hyperlink{inference.composition}{\mbox{\isa{composition}}} the internal structure of the common
+ part \isa{B} and \isa{B{\isacharprime}} is not taken into account. For
+ proper \hyperlink{inference.resolution}{\mbox{\isa{resolution}}} we require \isa{B} to be atomic,
+ and explicitly observe the structure \isa{{\isasymAnd}\isactrlvec x{\isachardot}\ \isactrlvec H\ \isactrlvec x\ {\isasymLongrightarrow}\ B{\isacharprime}\ \isactrlvec x} of the premise of the second rule. The
+ idea is to adapt the first rule by ``lifting'' it into this context,
+ by means of iterated application of the following inferences:
+ \[
+ \infer[(\indexdef{}{inference}{imp\_lift}\hypertarget{inference.imp-lift}{\hyperlink{inference.imp-lift}{\mbox{\isa{imp{\isacharunderscore}lift}}}})]{\isa{{\isacharparenleft}\isactrlvec H\ {\isasymLongrightarrow}\ \isactrlvec A{\isacharparenright}\ {\isasymLongrightarrow}\ {\isacharparenleft}\isactrlvec H\ {\isasymLongrightarrow}\ B{\isacharparenright}}}{\isa{\isactrlvec A\ {\isasymLongrightarrow}\ B}}
+ \]
+ \[
+ \infer[(\indexdef{}{inference}{all\_lift}\hypertarget{inference.all-lift}{\hyperlink{inference.all-lift}{\mbox{\isa{all{\isacharunderscore}lift}}}})]{\isa{{\isacharparenleft}{\isasymAnd}\isactrlvec x{\isachardot}\ \isactrlvec A\ {\isacharparenleft}{\isacharquery}\isactrlvec a\ \isactrlvec x{\isacharparenright}{\isacharparenright}\ {\isasymLongrightarrow}\ {\isacharparenleft}{\isasymAnd}\isactrlvec x{\isachardot}\ B\ {\isacharparenleft}{\isacharquery}\isactrlvec a\ \isactrlvec x{\isacharparenright}{\isacharparenright}}}{\isa{\isactrlvec A\ {\isacharquery}\isactrlvec a\ {\isasymLongrightarrow}\ B\ {\isacharquery}\isactrlvec a}}
+ \]
+ By combining raw composition with lifting, we get full \hyperlink{inference.resolution}{\mbox{\isa{resolution}}} as follows:
+ \[
+ \infer[(\indexdef{}{inference}{resolution}\hypertarget{inference.resolution}{\hyperlink{inference.resolution}{\mbox{\isa{resolution}}}})]
+ {\isa{{\isacharparenleft}{\isasymAnd}\isactrlvec x{\isachardot}\ \isactrlvec H\ \isactrlvec x\ {\isasymLongrightarrow}\ \isactrlvec A\ {\isacharparenleft}{\isacharquery}\isactrlvec a\ \isactrlvec x{\isacharparenright}{\isacharparenright}{\isasymvartheta}\ {\isasymLongrightarrow}\ C{\isasymvartheta}}}
+ {\begin{tabular}{l}
+ \isa{\isactrlvec A\ {\isacharquery}\isactrlvec a\ {\isasymLongrightarrow}\ B\ {\isacharquery}\isactrlvec a} \\
+ \isa{{\isacharparenleft}{\isasymAnd}\isactrlvec x{\isachardot}\ \isactrlvec H\ \isactrlvec x\ {\isasymLongrightarrow}\ B{\isacharprime}\ \isactrlvec x{\isacharparenright}\ {\isasymLongrightarrow}\ C} \\
+ \isa{{\isacharparenleft}{\isasymlambda}\isactrlvec x{\isachardot}\ B\ {\isacharparenleft}{\isacharquery}\isactrlvec a\ \isactrlvec x{\isacharparenright}{\isacharparenright}{\isasymvartheta}\ {\isacharequal}\ B{\isacharprime}{\isasymvartheta}} \\
+ \end{tabular}}
+ \]
+
+ Continued resolution of rules allows a problem to be back-chained
+ towards more and more sub-problems. Branches are closed either by resolving with
+ a rule of 0 premises, or by producing a ``short-circuit'' within a
+ solved situation (again modulo unification):
+ \[
+ \infer[(\indexdef{}{inference}{assumption}\hypertarget{inference.assumption}{\hyperlink{inference.assumption}{\mbox{\isa{assumption}}}})]{\isa{C{\isasymvartheta}}}
+ {\isa{{\isacharparenleft}{\isasymAnd}\isactrlvec x{\isachardot}\ \isactrlvec H\ \isactrlvec x\ {\isasymLongrightarrow}\ A\ \isactrlvec x{\isacharparenright}\ {\isasymLongrightarrow}\ C} & \isa{A{\isasymvartheta}\ {\isacharequal}\ H\isactrlsub i{\isasymvartheta}}~~\text{(for some~\isa{i})}}
+ \]
+
+ FIXME \indexdef{}{inference}{elim\_resolution}\hypertarget{inference.elim-resolution}{\hyperlink{inference.elim-resolution}{\mbox{\isa{elim{\isacharunderscore}resolution}}}}, \indexdef{}{inference}{dest\_resolution}\hypertarget{inference.dest-resolution}{\hyperlink{inference.dest-resolution}{\mbox{\isa{dest{\isacharunderscore}resolution}}}}%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{op RS}\verb|op RS: thm * thm -> thm| \\
+ \indexdef{}{ML}{op OF}\verb|op OF: thm * thm list -> thm| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \isa{rule\isactrlsub {\isadigit{1}}\ RS\ rule\isactrlsub {\isadigit{2}}} resolves \isa{rule\isactrlsub {\isadigit{1}}} with \isa{rule\isactrlsub {\isadigit{2}}} according to the
+ \hyperlink{inference.resolution}{\mbox{\isa{resolution}}} principle explained above. Note that the
+ corresponding attribute in the Isar language is called \hyperlink{attribute.THEN}{\mbox{\isa{THEN}}}.
+
+ \item \isa{rule\ OF\ rules} resolves a list of rules with the
+ first rule, addressing its premises \isa{{\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ length\ rules}
+ (operating from last to first). This means the newly emerging
+ premises are all concatenated, without interfering. Also note that
+ compared to \isa{RS}, the rule argument order is swapped: \isa{rule\isactrlsub {\isadigit{1}}\ RS\ rule\isactrlsub {\isadigit{2}}\ {\isacharequal}\ rule\isactrlsub {\isadigit{2}}\ OF\ {\isacharbrackleft}rule\isactrlsub {\isadigit{1}}{\isacharbrackright}}.
+
+ \end{description}%
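+
+ \medskip As a minimal sketch, assuming theorem values \verb|rule1| and
+ \verb|rule2| (hypothetical), the following two results coincide:
+
+\begin{verbatim}
+  (* resolve rule1 against the first premise of rule2 *)
+  val th1 = rule1 RS rule2;
+
+  (* the same, with the argument order of OF *)
+  val th2 = rule2 OF [rule1];
+\end{verbatim}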
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{end}\isamarkupfalse%
+%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+\isanewline
+\end{isabellebody}%
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "root"
+%%% End:
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/IsarImplementation/Thy/document/Prelim.tex Fri Mar 06 11:28:07 2009 +0100
@@ -0,0 +1,897 @@
+%
+\begin{isabellebody}%
+\def\isabellecontext{Prelim}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{theory}\isamarkupfalse%
+\ Prelim\isanewline
+\isakeyword{imports}\ Base\isanewline
+\isakeyword{begin}%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isamarkupchapter{Preliminaries%
+}
+\isamarkuptrue%
+%
+\isamarkupsection{Contexts \label{sec:context}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+A logical context represents the background that is required for
+ formulating statements and composing proofs. It acts as a medium to
+ produce formal content, depending on earlier material (declarations,
+ results etc.).
+
+ For example, derivations within the Isabelle/Pure logic can be
+ described as a judgment \isa{{\isasymGamma}\ {\isasymturnstile}\isactrlsub {\isasymTheta}\ {\isasymphi}}, which means that a
+ proposition \isa{{\isasymphi}} is derivable from hypotheses \isa{{\isasymGamma}}
+ within the theory \isa{{\isasymTheta}}. There are logical reasons for
+ keeping \isa{{\isasymTheta}} and \isa{{\isasymGamma}} separate: theories can be
+ liberal about supporting type constructors and schematic
+ polymorphism of constants and axioms, while the inner calculus of
+ \isa{{\isasymGamma}\ {\isasymturnstile}\ {\isasymphi}} is strictly limited to Simple Type Theory (with
+ fixed type variables in the assumptions).
+
+ \medskip Contexts and derivations are linked by the following key
+ principles:
+
+ \begin{itemize}
+
+ \item Transfer: monotonicity of derivations admits results to be
+ transferred into a \emph{larger} context, i.e.\ \isa{{\isasymGamma}\ {\isasymturnstile}\isactrlsub {\isasymTheta}\ {\isasymphi}} implies \isa{{\isasymGamma}{\isacharprime}\ {\isasymturnstile}\isactrlsub {\isasymTheta}\isactrlsub {\isacharprime}\ {\isasymphi}} for contexts \isa{{\isasymTheta}{\isacharprime}\ {\isasymsupseteq}\ {\isasymTheta}} and \isa{{\isasymGamma}{\isacharprime}\ {\isasymsupseteq}\ {\isasymGamma}}.
+
+ \item Export: discharge of hypotheses admits results to be exported
+ into a \emph{smaller} context, i.e.\ \isa{{\isasymGamma}{\isacharprime}\ {\isasymturnstile}\isactrlsub {\isasymTheta}\ {\isasymphi}}
+ implies \isa{{\isasymGamma}\ {\isasymturnstile}\isactrlsub {\isasymTheta}\ {\isasymDelta}\ {\isasymLongrightarrow}\ {\isasymphi}} where \isa{{\isasymGamma}{\isacharprime}\ {\isasymsupseteq}\ {\isasymGamma}} and
+ \isa{{\isasymDelta}\ {\isacharequal}\ {\isasymGamma}{\isacharprime}\ {\isacharminus}\ {\isasymGamma}}. Note that \isa{{\isasymTheta}} remains unchanged here,
+ only the \isa{{\isasymGamma}} part is affected.
+
+ \end{itemize}
+
+ \medskip By modeling the main characteristics of the primitive
+ \isa{{\isasymTheta}} and \isa{{\isasymGamma}} above, and abstracting over any
+ particular logical content, we arrive at the fundamental notions of
+ \emph{theory context} and \emph{proof context} in Isabelle/Isar.
+ These implement a certain policy to manage arbitrary \emph{context
+ data}. There is a strongly-typed mechanism to declare new kinds of
+ data at compile time.
+
+ The internal bootstrap process of Isabelle/Pure eventually reaches a
+ stage where certain data slots provide the logical content of \isa{{\isasymTheta}} and \isa{{\isasymGamma}} sketched above, but this does not stop there!
+ Various additional data slots support all kinds of mechanisms that
+ are not necessarily part of the core logic.
+
+ For example, there would be data for canonical introduction and
+ elimination rules for arbitrary operators (depending on the
+ object-logic and application), which enables users to perform
+ standard proof steps implicitly (cf.\ the \isa{rule} method
+ \cite{isabelle-isar-ref}).
+
+ \medskip Thus Isabelle/Isar is able to bring forth more and more
+ concepts successively. In particular, an object-logic like
+ Isabelle/HOL continues the Isabelle/Pure setup by adding specific
+ components for automated reasoning (classical reasoner, tableau
+ prover, structured induction etc.) and derived specification
+ mechanisms (inductive predicates, recursive functions etc.). All of
+ this is ultimately based on the generic data management by theory
+ and proof contexts introduced here.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isamarkupsubsection{Theory context \label{sec:context-theory}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+A \emph{theory} is a data container with explicit name and unique
+ identifier. Theories are related by a (nominal) sub-theory
+ relation, which corresponds to the dependency graph of the original
+ construction; each theory is derived from a certain sub-graph of
+ ancestor theories.
+
+ The \isa{merge} operation produces the least upper bound of two
+ theories, which actually degenerates into absorption of one theory
+ into the other (due to the nominal sub-theory relation).
+
+ The \isa{begin} operation starts a new theory by importing
+ several parent theories and entering a special \isa{draft} mode,
+ which is sustained until the final \isa{end} operation. A draft
+ theory acts like a linear type, where updates invalidate earlier
+ versions. An invalidated draft is called ``stale''.
+
+ The \isa{checkpoint} operation produces an intermediate stepping
+ stone that will survive the next update: both the original and the
+ changed theory remain valid and are related by the sub-theory
+ relation. Checkpointing essentially recovers purely functional
+ theory values, at the expense of some extra internal bookkeeping.
+
+ The \isa{copy} operation produces an auxiliary version that has
+ the same data content, but is unrelated to the original: updates of
+ the copy do not affect the original, nor does the sub-theory
+ relation hold.
+
+ \medskip The example in \figref{fig:ex-theory} below shows a theory
+ graph derived from \isa{Pure}, with theory \isa{Length}
+ importing \isa{Nat} and \isa{List}. The body of \isa{Length} consists of a sequence of updates, working mostly on
+ drafts. Intermediate checkpoints may occur as well, due to the
+ history mechanism provided by the Isar top-level, cf.\
+ \secref{sec:isar-toplevel}.
+
+ \begin{figure}[htb]
+ \begin{center}
+ \begin{tabular}{rcccl}
+ & & \isa{Pure} \\
+ & & \isa{{\isasymdown}} \\
+ & & \isa{FOL} \\
+ & $\swarrow$ & & $\searrow$ & \\
+ \isa{Nat} & & & & \isa{List} \\
+ & $\searrow$ & & $\swarrow$ \\
+ & & \isa{Length} \\
+ & & \multicolumn{3}{l}{~~\hyperlink{keyword.imports}{\mbox{\isa{\isakeyword{imports}}}}} \\
+ & & \multicolumn{3}{l}{~~\hyperlink{keyword.begin}{\mbox{\isa{\isakeyword{begin}}}}} \\
+ & & $\vdots$~~ \\
+ & & \isa{{\isasymbullet}}~~ \\
+ & & $\vdots$~~ \\
+ & & \isa{{\isasymbullet}}~~ \\
+ & & $\vdots$~~ \\
+ & & \multicolumn{3}{l}{~~\hyperlink{command.end}{\mbox{\isa{\isacommand{end}}}}} \\
+ \end{tabular}
+ \caption{A theory definition depending on ancestors}\label{fig:ex-theory}
+ \end{center}
+ \end{figure}
+
+ \medskip There is a separate notion of \emph{theory reference} for
+ maintaining a live link to an evolving theory context: updates on
+ drafts are propagated automatically. Dynamic updating stops after
+ an explicit \isa{end} only.
+
+ Derived entities may store a theory reference in order to indicate
+ the context they belong to. This implicitly assumes monotonic
+ reasoning, because the referenced context may become larger without
+ further notice.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{theory}\verb|type theory| \\
+ \indexdef{}{ML}{Theory.subthy}\verb|Theory.subthy: theory * theory -> bool| \\
+ \indexdef{}{ML}{Theory.merge}\verb|Theory.merge: theory * theory -> theory| \\
+ \indexdef{}{ML}{Theory.checkpoint}\verb|Theory.checkpoint: theory -> theory| \\
+ \indexdef{}{ML}{Theory.copy}\verb|Theory.copy: theory -> theory| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML type}{theory\_ref}\verb|type theory_ref| \\
+ \indexdef{}{ML}{Theory.deref}\verb|Theory.deref: theory_ref -> theory| \\
+ \indexdef{}{ML}{Theory.check\_thy}\verb|Theory.check_thy: theory -> theory_ref| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|theory| represents theory contexts. This is
+ essentially a linear type! Most operations destroy the original
+ version, which then becomes ``stale''.
+
+ \item \verb|Theory.subthy|~\isa{{\isacharparenleft}thy\isactrlsub {\isadigit{1}}{\isacharcomma}\ thy\isactrlsub {\isadigit{2}}{\isacharparenright}}
+ compares theories according to the inherent graph structure of the
+ construction. This sub-theory relation is a nominal approximation
+ of inclusion (\isa{{\isasymsubseteq}}) of the corresponding content.
+
+ \item \verb|Theory.merge|~\isa{{\isacharparenleft}thy\isactrlsub {\isadigit{1}}{\isacharcomma}\ thy\isactrlsub {\isadigit{2}}{\isacharparenright}}
+ absorbs one theory into the other. This fails for unrelated
+ theories!
+
+ \item \verb|Theory.checkpoint|~\isa{thy} produces a safe
+ stepping stone in the linear development of \isa{thy}. The next
+ update will result in two related, valid theories.
+
+ \item \verb|Theory.copy|~\isa{thy} produces a variant of \isa{thy} that holds a copy of the same data. The result is not
+ related to the original; the original is unchanged.
+
+ \item \verb|theory_ref| represents a sliding reference to an
+ always valid theory; updates on the original are propagated
+ automatically.
+
+ \item \verb|Theory.deref|~\isa{thy{\isacharunderscore}ref} turns a \verb|theory_ref| into a \verb|theory| value. As the referenced
+ theory evolves monotonically over time, later invocations of \verb|Theory.deref| may refer to a larger context.
+
+ \item \verb|Theory.check_thy|~\isa{thy} produces a \verb|theory_ref| from a valid \verb|theory| value.
+
+ \end{description}%
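+
+ \medskip The following sketch illustrates the linear discipline,
+ assuming some theory value \verb|thy| (hypothetical):
+
+\begin{verbatim}
+  (* checkpoint, keep a sliding reference, and dereference it later *)
+  val thy' = Theory.checkpoint thy;
+  val thy_ref = Theory.check_thy thy';
+  val thy'' = Theory.deref thy_ref;
+
+  (* nominal sub-theory test and (degenerate) merge of related theories *)
+  val related = Theory.subthy (thy', thy'');
+  val thy_merged = Theory.merge (thy', thy'');
+\end{verbatim}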
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Proof context \label{sec:context-proof}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+A proof context is a container for pure data with a back-reference
+ to the theory it belongs to. The \isa{init} operation creates a
+ proof context from a given theory. Modifications to draft theories
+ are propagated to the proof context as usual, but there is also an
+ explicit \isa{transfer} operation to force resynchronization
+ with more substantial updates to the underlying theory. The actual
+ context data does not require any special bookkeeping, thanks to the
+ lack of destructive features.
+
+ Entities derived in a proof context need to record inherent logical
+ requirements explicitly, since there is no separate context
+ identification as for theories. For example, hypotheses used in
+ primitive derivations (cf.\ \secref{sec:thms}) are recorded
+ separately within the sequent \isa{{\isasymGamma}\ {\isasymturnstile}\ {\isasymphi}}, just to make double
+ sure. Results could still leak into an alien proof context due to
+ programming errors, but Isabelle/Isar includes some extra validity
+ checks in critical positions, notably at the end of a sub-proof.
+
+ Proof contexts may be manipulated arbitrarily, although the common
+ discipline is to follow block structure as a mental model: a given
+ context is extended consecutively, and results are exported back
+ into the original context. Note that the Isar proof states model
+ block-structured reasoning explicitly, using a stack of proof
+ contexts internally.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{Proof.context}\verb|type Proof.context| \\
+ \indexdef{}{ML}{ProofContext.init}\verb|ProofContext.init: theory -> Proof.context| \\
+ \indexdef{}{ML}{ProofContext.theory\_of}\verb|ProofContext.theory_of: Proof.context -> theory| \\
+ \indexdef{}{ML}{ProofContext.transfer}\verb|ProofContext.transfer: theory -> Proof.context -> Proof.context| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Proof.context| represents proof contexts. Elements
+ of this type are essentially pure values, with a sliding reference
+ to the background theory.
+
+ \item \verb|ProofContext.init|~\isa{thy} produces a proof context
+ derived from \isa{thy}, initializing all data.
+
+ \item \verb|ProofContext.theory_of|~\isa{ctxt} selects the
+ background theory from \isa{ctxt}, dereferencing its internal
+ \verb|theory_ref|.
+
+ \item \verb|ProofContext.transfer|~\isa{thy\ ctxt} promotes the
+ background theory of \isa{ctxt} to the super theory \isa{thy}.
+
+ \end{description}%
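+
+ \medskip A minimal sketch, assuming theory values \verb|thy| and
+ \verb|thy'| (hypothetical), with \verb|thy'| being a later version of
+ \verb|thy|:
+
+\begin{verbatim}
+  (* initialize a proof context and recover its background theory *)
+  val ctxt = ProofContext.init thy;
+  val thy0 = ProofContext.theory_of ctxt;
+
+  (* promote the context to the larger background theory thy' *)
+  val ctxt' = ProofContext.transfer thy' ctxt;
+\end{verbatim}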
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Generic contexts \label{sec:generic-context}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+A generic context is the disjoint sum of either a theory or proof
+ context. Occasionally, this enables uniform treatment of generic
+ context data, typically extra-logical information. Operations on
+ generic contexts include the usual injections, partial selections,
+ and combinators for lifting operations on either component of the
+ disjoint sum.
+
+ Moreover, there are total operations \isa{theory{\isacharunderscore}of} and \isa{proof{\isacharunderscore}of} to convert a generic context into either kind: a theory
+ can always be selected from the sum, while a proof context might
+ have to be constructed by an ad-hoc \isa{init} operation.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{Context.generic}\verb|type Context.generic| \\
+ \indexdef{}{ML}{Context.theory\_of}\verb|Context.theory_of: Context.generic -> theory| \\
+ \indexdef{}{ML}{Context.proof\_of}\verb|Context.proof_of: Context.generic -> Proof.context| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Context.generic| is the direct sum of \verb|theory| and \verb|Proof.context|, with the datatype
+ constructors \verb|Context.Theory| and \verb|Context.Proof|.
+
+ \item \verb|Context.theory_of|~\isa{context} always produces a
+ theory from the generic \isa{context}, using \verb|ProofContext.theory_of| as required.
+
+ \item \verb|Context.proof_of|~\isa{context} always produces a
+ proof context from the generic \isa{context}, using \verb|ProofContext.init| as required (note that this re-initializes the
+ context data with each invocation).
+
+ \end{description}%
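+
+ \medskip A small sketch, assuming values \verb|thy| and \verb|ctxt|
+ (hypothetical) for a theory and a proof context, respectively:
+
+\begin{verbatim}
+  (* inject into the generic sum type *)
+  val gthy = Context.Theory thy;
+  val gctxt = Context.Proof ctxt;
+
+  (* total projections, as described above *)
+  val thy' = Context.theory_of gctxt;   (* via ProofContext.theory_of *)
+  val ctxt' = Context.proof_of gthy;    (* via ProofContext.init *)
+\end{verbatim}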
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Context data \label{sec:context-data}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The main purpose of theory and proof contexts is to manage arbitrary
+ data. New data types can be declared incrementally at compile time.
+ There are separate declaration mechanisms for any of the three kinds
+ of contexts: theory, proof, generic.
+
+ \paragraph{Theory data} may refer to destructive entities, which are
+ maintained in direct correspondence to the linear evolution of
+ theory values, including explicit copies.\footnote{Most existing
+ instances of destructive theory data are merely historical relics
+ (e.g.\ the destructive theorem storage, and destructive hints for
+ the Simplifier and Classical rules).} A theory data declaration
+ needs to implement the following SML signature:
+
+ \medskip
+ \begin{tabular}{ll}
+ \isa{{\isasymtype}\ T} & representing type \\
+ \isa{{\isasymval}\ empty{\isacharcolon}\ T} & empty default value \\
+ \isa{{\isasymval}\ copy{\isacharcolon}\ T\ {\isasymrightarrow}\ T} & refresh impure data \\
+ \isa{{\isasymval}\ extend{\isacharcolon}\ T\ {\isasymrightarrow}\ T} & re-initialize on import \\
+ \isa{{\isasymval}\ merge{\isacharcolon}\ T\ {\isasymtimes}\ T\ {\isasymrightarrow}\ T} & join on import \\
+ \end{tabular}
+ \medskip
+
+ \noindent The \isa{empty} value acts as initial default for
+ \emph{any} theory that does not declare actual data content; \isa{copy} maintains persistent integrity for impure data, and is just
+ the identity for pure values; \isa{extend} acts like a
+ unitary version of \isa{merge}; both operations should also
+ include the functionality of \isa{copy} for impure data.
+
+ \paragraph{Proof context data} is purely functional. A declaration
+ needs to implement the following SML signature:
+
+ \medskip
+ \begin{tabular}{ll}
+ \isa{{\isasymtype}\ T} & representing type \\
+ \isa{{\isasymval}\ init{\isacharcolon}\ theory\ {\isasymrightarrow}\ T} & produce initial value \\
+ \end{tabular}
+ \medskip
+
+ \noindent The \isa{init} operation is supposed to produce a pure
+ value from the given background theory.
+
+ \paragraph{Generic data} provides a hybrid interface for both theory
+ and proof data. The declaration is essentially the same as for
+ (pure) theory data, without \isa{copy}. The \isa{init}
+ operation for proof contexts merely selects the current data value
+ from the background theory.
+
+ \bigskip A data declaration of type \isa{T} results in the
+ following interface:
+
+ \medskip
+ \begin{tabular}{ll}
+ \isa{init{\isacharcolon}\ theory\ {\isasymrightarrow}\ T} \\
+ \isa{get{\isacharcolon}\ context\ {\isasymrightarrow}\ T} \\
+ \isa{put{\isacharcolon}\ T\ {\isasymrightarrow}\ context\ {\isasymrightarrow}\ context} \\
+ \isa{map{\isacharcolon}\ {\isacharparenleft}T\ {\isasymrightarrow}\ T{\isacharparenright}\ {\isasymrightarrow}\ context\ {\isasymrightarrow}\ context} \\
+ \end{tabular}
+ \medskip
+
+ \noindent Here \isa{init} is only applicable to impure theory
+ data to install a fresh copy persistently (destructive update on
+ uninitialized data has no permanent effect). The other operations provide
+ access for the particular kind of context (theory, proof, or generic
+ context). Note that this is a safe interface: there is no other way
+ to access the corresponding data slot of a context. By keeping
+ these operations private, a component may maintain abstract values
+ authentically, without other components interfering.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML functor}{TheoryDataFun}\verb|functor TheoryDataFun| \\
+ \indexdef{}{ML functor}{ProofDataFun}\verb|functor ProofDataFun| \\
+ \indexdef{}{ML functor}{GenericDataFun}\verb|functor GenericDataFun| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|TheoryDataFun|\isa{{\isacharparenleft}spec{\isacharparenright}} declares data for
+ type \verb|theory| according to the specification provided as
+ argument structure. The resulting structure provides data init and
+ access operations as described above.
+
+ \item \verb|ProofDataFun|\isa{{\isacharparenleft}spec{\isacharparenright}} is analogous to
+ \verb|TheoryDataFun| for type \verb|Proof.context|.
+
+ \item \verb|GenericDataFun|\isa{{\isacharparenleft}spec{\isacharparenright}} is analogous to
+ \verb|TheoryDataFun| for type \verb|Context.generic|.
+
+ \end{description}%
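+
+ \medskip The following sketch declares a proof data slot holding a
+ list of terms, following the signature described above; the names
+ \verb|My_Terms|, \verb|ctxt| and \verb|t| are hypothetical, and the
+ exact argument signature of the functor may vary between Isabelle
+ versions.
+
+\begin{verbatim}
+  structure My_Terms = ProofDataFun
+  (
+    type T = term list
+    fun init _ = []
+  );
+
+  (* strongly-typed access to the data slot *)
+  val ts = My_Terms.get ctxt;
+  val ctxt' = My_Terms.put [t] ctxt;
+  val ctxt'' = My_Terms.map (cons t) ctxt;
+\end{verbatim}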
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsection{Names \label{sec:names}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+In principle, a name is just a string, but there are various
+ conventions for encoding additional structure. For example, ``\isa{Foo{\isachardot}bar{\isachardot}baz}'' is considered a qualified name consisting of
+ three basic name components. The individual constituents of a name
+ may have further substructure, e.g.\ the string
+ ``\verb,\,\verb,<alpha>,'' encodes as a single symbol.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isamarkupsubsection{Strings of symbols%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+A \emph{symbol} constitutes the smallest textual unit in Isabelle
+ --- raw characters are normally not encountered at all. Isabelle
+ strings consist of a sequence of symbols, represented as a packed
+ string or a list of strings. Each symbol is in itself a small
+ string, which has one of the following forms:
+
+ \begin{enumerate}
+
+ \item a single ASCII character ``\isa{c}'', for example
+ ``\verb,a,'',
+
+ \item a regular symbol ``\verb,\,\verb,<,\isa{ident}\verb,>,'',
+ for example ``\verb,\,\verb,<alpha>,'',
+
+ \item a control symbol ``\verb,\,\verb,<^,\isa{ident}\verb,>,'',
+ for example ``\verb,\,\verb,<^bold>,'',
+
+ \item a raw symbol ``\verb,\,\verb,<^raw:,\isa{text}\verb,>,''
+ where \isa{text} consists of printable characters excluding
+ ``\verb,.,'' and ``\verb,>,'', for example
+ ``\verb,\,\verb,<^raw:$\sum_{i = 1}^n$>,'',
+
+ \item a numbered raw control symbol ``\verb,\,\verb,<^raw,\isa{n}\verb,>,'' where \isa{n} consists of digits, for example
+ ``\verb,\,\verb,<^raw42>,''.
+
+ \end{enumerate}
+
+ \noindent The \isa{ident} syntax for symbol names is \isa{letter\ {\isacharparenleft}letter\ {\isacharbar}\ digit{\isacharparenright}\isactrlsup {\isacharasterisk}}, where \isa{letter\ {\isacharequal}\ A{\isachardot}{\isachardot}Za{\isachardot}{\isachardot}z} and \isa{digit\ {\isacharequal}\ {\isadigit{0}}{\isachardot}{\isachardot}{\isadigit{9}}}. There are infinitely many
+ regular symbols and control symbols, but a fixed collection of
+ standard symbols is treated specifically. For example,
+ ``\verb,\,\verb,<alpha>,'' is classified as a letter, which means it
+ may occur within regular Isabelle identifiers.
+
+ Since the character set underlying Isabelle symbols is 7-bit ASCII
+ and 8-bit characters are passed through transparently, Isabelle may
+ also process Unicode/UCS data in UTF-8 encoding. Unicode provides
+ its own collection of mathematical symbols, but there is no built-in
+ link to the standard collection of Isabelle.
+
+ \medskip Output of Isabelle symbols depends on the print mode
+ (\secref{print-mode}). For example, the standard {\LaTeX} setup of
+ the Isabelle document preparation system would present
+ ``\verb,\,\verb,<alpha>,'' as \isa{{\isasymalpha}}, and
+ ``\verb,\,\verb,<^bold>,\verb,\,\verb,<alpha>,'' as \isa{\isactrlbold {\isasymalpha}}.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{Symbol.symbol}\verb|type Symbol.symbol| \\
+ \indexdef{}{ML}{Symbol.explode}\verb|Symbol.explode: string -> Symbol.symbol list| \\
+ \indexdef{}{ML}{Symbol.is\_letter}\verb|Symbol.is_letter: Symbol.symbol -> bool| \\
+ \indexdef{}{ML}{Symbol.is\_digit}\verb|Symbol.is_digit: Symbol.symbol -> bool| \\
+ \indexdef{}{ML}{Symbol.is\_quasi}\verb|Symbol.is_quasi: Symbol.symbol -> bool| \\
+ \indexdef{}{ML}{Symbol.is\_blank}\verb|Symbol.is_blank: Symbol.symbol -> bool| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML type}{Symbol.sym}\verb|type Symbol.sym| \\
+ \indexdef{}{ML}{Symbol.decode}\verb|Symbol.decode: Symbol.symbol -> Symbol.sym| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Symbol.symbol| represents individual Isabelle
+ symbols; this is an alias for \verb|string|.
+
+ \item \verb|Symbol.explode|~\isa{str} produces a symbol list
+ from the packed form. This function supersedes \verb|String.explode| for virtually all purposes of manipulating text in
+ Isabelle!
+
+ \item \verb|Symbol.is_letter|, \verb|Symbol.is_digit|, \verb|Symbol.is_quasi|, \verb|Symbol.is_blank| classify standard
+ symbols according to fixed syntactic conventions of Isabelle, cf.\
+ \cite{isabelle-isar-ref}.
+
+ \item \verb|Symbol.sym| is a concrete datatype that represents
+ the different kinds of symbols explicitly, with constructors \verb|Symbol.Char|, \verb|Symbol.Sym|, \verb|Symbol.Ctrl|, \verb|Symbol.Raw|.
+
+ \item \verb|Symbol.decode| converts the string representation of a
+ symbol into the datatype version.
+
+ \end{description}%
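+
+ \medskip A small sketch; the packed string is assembled explicitly to
+ avoid any confusion with {\ML} escape sequences:
+
+\begin{verbatim}
+  (* the packed string \<alpha>x *)
+  val s = "\\" ^ "<alpha>" ^ "x";
+
+  (* explodes into two symbols: the regular symbol and "x" *)
+  val syms = Symbol.explode s;
+
+  (* classification and structured decoding *)
+  val letters = map Symbol.is_letter syms;
+  val structured = map Symbol.decode syms;
+\end{verbatim}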
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Basic names \label{sec:basic-names}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+A \emph{basic name} essentially consists of a single Isabelle
+ identifier. There are conventions to mark separate classes of basic
+ names, by attaching a suffix of underscores: one underscore means
+ \emph{internal name}, two underscores means \emph{Skolem name},
+ three underscores means \emph{internal Skolem name}.
+
+ For example, the basic name \isa{foo} has the internal version
+ \isa{foo{\isacharunderscore}}, with Skolem versions \isa{foo{\isacharunderscore}{\isacharunderscore}} and \isa{foo{\isacharunderscore}{\isacharunderscore}{\isacharunderscore}}, respectively.
+
+ These special versions provide copies of the basic name space, apart
+ from anything that normally appears in the user text. For example,
+ system generated variables in Isar proof contexts are usually marked
+ as internal, which prevents mysterious name references like \isa{xaa} from appearing in the text.
+
+ \medskip Manipulating binding scopes often requires on-the-fly
+ renamings. A \emph{name context} contains a collection of already
+ used names. The \isa{declare} operation adds names to the
+ context.
+
+ The \isa{invents} operation derives a number of fresh names from
+ a given starting point. For example, the first three names derived
+ from \isa{a} are \isa{a}, \isa{b}, \isa{c}.
+
+ The \isa{variants} operation produces fresh names by
+ incrementing tentative names as base-26 numbers (with digits \isa{a{\isachardot}{\isachardot}z}) until all clashes are resolved. For example, name \isa{foo} results in variants \isa{fooa}, \isa{foob}, \isa{fooc}, \dots, \isa{fooaa}, \isa{fooab} etc.; each renaming
+ step picks the next unused variant from this sequence.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{Name.internal}\verb|Name.internal: string -> string| \\
+ \indexdef{}{ML}{Name.skolem}\verb|Name.skolem: string -> string| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML type}{Name.context}\verb|type Name.context| \\
+ \indexdef{}{ML}{Name.context}\verb|Name.context: Name.context| \\
+ \indexdef{}{ML}{Name.declare}\verb|Name.declare: string -> Name.context -> Name.context| \\
+ \indexdef{}{ML}{Name.invents}\verb|Name.invents: Name.context -> string -> int -> string list| \\
+ \indexdef{}{ML}{Name.variants}\verb|Name.variants: string list -> Name.context -> string list * Name.context| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Name.internal|~\isa{name} produces an internal name
+ by adding one underscore.
+
+ \item \verb|Name.skolem|~\isa{name} produces a Skolem name by
+ adding two underscores.
+
+ \item \verb|Name.context| represents the context of already used
+ names; the initial value is \verb|Name.context|.
+
+ \item \verb|Name.declare|~\isa{name} enters a used name into the
+ context.
+
+ \item \verb|Name.invents|~\isa{context\ name\ n} produces \isa{n} fresh names derived from \isa{name}.
+
+ \item \verb|Name.variants|~\isa{names\ context} produces fresh
+ variants of \isa{names}; the result is entered into the context.
+
+ \end{description}%
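+
+ \medskip A small sketch of these operations; the expected results
+ merely restate the description above:
+
+\begin{verbatim}
+  (* three fresh names derived from "a": ["a", "b", "c"] *)
+  val ns = Name.invents Name.context "a" 3;
+
+  (* resolve a clash against an already declared name, e.g. ["xa", "y"] *)
+  val used = Name.declare "x" Name.context;
+  val (vs, used') = Name.variants ["x", "y"] used;
+
+  (* marked copies of the basic name space: "x_" and "x__" *)
+  val x_internal = Name.internal "x";
+  val x_skolem = Name.skolem "x";
+\end{verbatim}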
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Indexed names%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+An \emph{indexed name} (or \isa{indexname}) is a pair of a basic
+ name and a natural number. This representation allows efficient
+ renaming by incrementing the second component only. The canonical
+ way to rename two collections of indexnames apart from each other is
+ this: determine the maximum index \isa{maxidx} of the first
+ collection, then increment all indexes of the second collection by
+ \isa{maxidx\ {\isacharplus}\ {\isadigit{1}}}; the maximum index of an empty collection is
+ \isa{{\isacharminus}{\isadigit{1}}}.
+
+ Occasionally, basic names and indexed names are injected into the
+ same pair type: the (improper) indexname \isa{{\isacharparenleft}x{\isacharcomma}\ {\isacharminus}{\isadigit{1}}{\isacharparenright}} is used
+ to encode basic names.
+
+ \medskip Isabelle syntax observes the following rules for
+ representing an indexname \isa{{\isacharparenleft}x{\isacharcomma}\ i{\isacharparenright}} as a packed string:
+
+ \begin{itemize}
+
+ \item \isa{{\isacharquery}x} if \isa{x} does not end with a digit and \isa{i\ {\isacharequal}\ {\isadigit{0}}},
+
+ \item \isa{{\isacharquery}xi} if \isa{x} does not end with a digit,
+
+ \item \isa{{\isacharquery}x{\isachardot}i} otherwise.
+
+ \end{itemize}
+
+ Indexnames may acquire large index numbers over time. Results are
+ normalized towards \isa{{\isadigit{0}}} at certain checkpoints, notably at
+ the end of a proof. This works by producing variants of the
+ corresponding basic name components. For example, the collection
+ \isa{{\isacharquery}x{\isadigit{1}}{\isacharcomma}\ {\isacharquery}x{\isadigit{7}}{\isacharcomma}\ {\isacharquery}x{\isadigit{4}}{\isadigit{2}}} becomes \isa{{\isacharquery}x{\isacharcomma}\ {\isacharquery}xa{\isacharcomma}\ {\isacharquery}xb}.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{indexname}\verb|type indexname| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|indexname| represents indexed names. This is an
+ abbreviation for \verb|string * int|. The second component is
+ usually non-negative, except for situations where \isa{{\isacharparenleft}x{\isacharcomma}\ {\isacharminus}{\isadigit{1}}{\isacharparenright}}
+ is used to embed basic names into this type.
+
+ \end{description}%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Qualified names and name spaces%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+A \emph{qualified name} consists of a non-empty sequence of basic
+ name components. The packed representation uses a dot as separator,
+ as in ``\isa{A{\isachardot}b{\isachardot}c}''. The last component is called the \emph{base}
+ name, the remaining prefix the \emph{qualifier} (which may be empty).
+ The idea of qualified names is to encode nested structures by
+ recording the access paths as qualifiers. For example, an item
+ named ``\isa{A{\isachardot}b{\isachardot}c}'' may be understood as a local entity \isa{c}, within a local structure \isa{b}, within a global
+ structure \isa{A}. Typically, name space hierarchies consist of
+ 1--2 levels of qualification, but this need not always be the case.
+
+ The empty name is commonly used as an indication of unnamed
+ entities, whenever this makes any sense. The basic operations on
+ qualified names are smart enough to pass through such improper names
+ unchanged.
+
+ \medskip A \isa{naming} policy tells how to turn a name
+ specification into a fully qualified internal name (by the \isa{full} operation), and how fully qualified names may be accessed
+ externally. For example, the default naming policy is to prefix an
+ implicit path: \isa{full\ x} produces \isa{path{\isachardot}x}, and the
+ standard accesses for \isa{path{\isachardot}x} include both \isa{x} and
+ \isa{path{\isachardot}x}. Normally, the naming is implicit in the theory or
+ proof context; there are also separate versions of the corresponding
+ operations that refer to a given naming explicitly.
+
+ \medskip A \isa{name\ space} manages a collection of fully
+ internalized names, together with a mapping between external names
+ and internal names (in both directions). The corresponding \isa{intern} and \isa{extern} operations are mostly used for
+ parsing and printing only! The \isa{declare} operation augments
+ a name space according to the accesses determined by the naming
+ policy.
+
+ \medskip As a general principle, there is a separate name space for
+ each kind of formal entity, e.g.\ logical constant, type
+ constructor, type class, theorem. It is usually clear from the
+ occurrence in concrete syntax (or from the scope) which kind of
+ entity a name refers to. For example, the very same name \isa{c} may be used uniformly for a constant, type constructor, and
+ type class.
+
+ There are common schemes to name theorems systematically, according
+ to the name of the main logical entity involved, e.g.\ \isa{c{\isachardot}intro} for a canonical theorem related to constant \isa{c}.
+ This technique of mapping names from one space into another requires
+ some care in order to avoid conflicts. In particular, theorem names
+ derived from a type constructor or type class are better suffixed in
+ addition to the usual qualification, e.g.\ \isa{c{\isacharunderscore}type{\isachardot}intro}
+ and \isa{c{\isacharunderscore}class{\isachardot}intro} for theorems related to type \isa{c}
+ and class \isa{c}, respectively.%
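+
+
+ \medskip For example, the operations listed in the ML interface below
+ behave as follows on the packed name ``A.b.c'' (a usage sketch):
+
+\begin{verbatim}
+  val parts = NameSpace.explode "A.b.c";      (* ["A", "b", "c"] *)
+  val name  = NameSpace.implode parts;        (* "A.b.c" *)
+  val base  = NameSpace.base_name name;       (* "c" *)
+  val qual  = NameSpace.qualifier name;       (* "A.b" *)
+  val name' = NameSpace.append qual base;     (* "A.b.c" again *)
+\end{verbatim}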
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{NameSpace.base\_name}\verb|NameSpace.base_name: string -> string| \\
+ \indexdef{}{ML}{NameSpace.qualifier}\verb|NameSpace.qualifier: string -> string| \\
+ \indexdef{}{ML}{NameSpace.append}\verb|NameSpace.append: string -> string -> string| \\
+ \indexdef{}{ML}{NameSpace.implode}\verb|NameSpace.implode: string list -> string| \\
+ \indexdef{}{ML}{NameSpace.explode}\verb|NameSpace.explode: string -> string list| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML type}{NameSpace.naming}\verb|type NameSpace.naming| \\
+ \indexdef{}{ML}{NameSpace.default\_naming}\verb|NameSpace.default_naming: NameSpace.naming| \\
+ \indexdef{}{ML}{NameSpace.add\_path}\verb|NameSpace.add_path: string -> NameSpace.naming -> NameSpace.naming| \\
+ \indexdef{}{ML}{NameSpace.full\_name}\verb|NameSpace.full_name: NameSpace.naming -> binding -> string| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML type}{NameSpace.T}\verb|type NameSpace.T| \\
+ \indexdef{}{ML}{NameSpace.empty}\verb|NameSpace.empty: NameSpace.T| \\
+ \indexdef{}{ML}{NameSpace.merge}\verb|NameSpace.merge: NameSpace.T * NameSpace.T -> NameSpace.T| \\
+ \indexdef{}{ML}{NameSpace.declare}\verb|NameSpace.declare: NameSpace.naming -> binding -> NameSpace.T ->|\isasep\isanewline%
+\verb| string * NameSpace.T| \\
+ \indexdef{}{ML}{NameSpace.intern}\verb|NameSpace.intern: NameSpace.T -> string -> string| \\
+ \indexdef{}{ML}{NameSpace.extern}\verb|NameSpace.extern: NameSpace.T -> string -> string| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|NameSpace.base_name|~\isa{name} returns the base name of a
+ qualified name.
+
+ \item \verb|NameSpace.qualifier|~\isa{name} returns the qualifier
+ of a qualified name.
+
+ \item \verb|NameSpace.append|~\isa{name\isactrlisub {\isadigit{1}}\ name\isactrlisub {\isadigit{2}}}
+ appends two qualified names.
+
+ \item \verb|NameSpace.implode|~\isa{name} and \verb|NameSpace.explode|~\isa{names} convert between the packed string
+ representation and the explicit list form of qualified names.
+
+ \item \verb|NameSpace.naming| represents the abstract concept of
+ a naming policy.
+
+ \item \verb|NameSpace.default_naming| is the default naming policy.
+ In a theory context, this is usually augmented by a path prefix
+ consisting of the theory name.
+
+ \item \verb|NameSpace.add_path|~\isa{path\ naming} augments the
+ naming policy by extending its path component.
+
+ \item \verb|NameSpace.full_name|~\isa{naming\ binding} turns a
+ name binding (usually a basic name) into the fully qualified
+ internal name, according to the given naming policy.
+
+ \item \verb|NameSpace.T| represents name spaces.
+
+ \item \verb|NameSpace.empty| and \verb|NameSpace.merge|~\isa{{\isacharparenleft}space\isactrlisub {\isadigit{1}}{\isacharcomma}\ space\isactrlisub {\isadigit{2}}{\isacharparenright}} are the canonical operations for
+ maintaining name spaces according to theory data management
+ (\secref{sec:context-data}).
+
+ \item \verb|NameSpace.declare|~\isa{naming\ binding\ space} enters a
+ name binding as fully qualified internal name into the name space,
+ with external accesses determined by the naming policy.
+
+ \item \verb|NameSpace.intern|~\isa{space\ name} internalizes a
+ (partially qualified) external name.
+
+ This operation is mostly for parsing! Note that fully qualified
+ names stemming from declarations are produced via \verb|NameSpace.full_name| and \verb|NameSpace.declare|
+ (or their derivatives for \verb|theory| and
+ \verb|Proof.context|).
+
+ \item \verb|NameSpace.extern|~\isa{space\ name} externalizes a
+ (fully qualified) internal name.
+
+ This operation is mostly for printing! User code should not rely on
+ the precise result too much.
+
+ \end{description}%
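+
+
+ \medskip A rough usage sketch, assuming \verb|Binding.name| to turn a
+ plain string into a name binding (the results indicated in the comments
+ follow the naming policy described above):
+
+\begin{verbatim}
+  val naming = NameSpace.add_path "MyThy" NameSpace.default_naming;
+  val b = Binding.name "c";
+  val full = NameSpace.full_name naming b;         (* presumably "MyThy.c" *)
+  val (_, space) = NameSpace.declare naming b NameSpace.empty;
+  val internal = NameSpace.intern space "c";       (* resolves to "MyThy.c" *)
+  val external = NameSpace.extern space internal;  (* back to a short form, e.g. "c" *)
+\end{verbatim}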
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{end}\isamarkupfalse%
+%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+\isanewline
+\end{isabellebody}%
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "root"
+%%% End:
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/IsarImplementation/Thy/document/Proof.tex Fri Mar 06 11:28:07 2009 +0100
@@ -0,0 +1,394 @@
+%
+\begin{isabellebody}%
+\def\isabellecontext{Proof}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{theory}\isamarkupfalse%
+\ Proof\isanewline
+\isakeyword{imports}\ Base\isanewline
+\isakeyword{begin}%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isamarkupchapter{Structured proofs%
+}
+\isamarkuptrue%
+%
+\isamarkupsection{Variables \label{sec:variables}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+Any variable that is not explicitly bound by \isa{{\isasymlambda}}-abstraction
+ is considered as ``free''. Logically, free variables act like
+ outermost universal quantification at the sequent level: \isa{A\isactrlisub {\isadigit{1}}{\isacharparenleft}x{\isacharparenright}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ A\isactrlisub n{\isacharparenleft}x{\isacharparenright}\ {\isasymturnstile}\ B{\isacharparenleft}x{\isacharparenright}} means that the result
+ holds \emph{for all} values of \isa{x}. Free variables for
+ terms (not types) can be fully internalized into the logic: \isa{{\isasymturnstile}\ B{\isacharparenleft}x{\isacharparenright}} and \isa{{\isasymturnstile}\ {\isasymAnd}x{\isachardot}\ B{\isacharparenleft}x{\isacharparenright}} are interchangeable, provided
+ that \isa{x} does not occur elsewhere in the context.
+ Inspecting \isa{{\isasymturnstile}\ {\isasymAnd}x{\isachardot}\ B{\isacharparenleft}x{\isacharparenright}} more closely, we see that inside the
+ quantifier, \isa{x} is essentially ``arbitrary, but fixed'',
+ while from outside it appears as a place-holder for instantiation
+ (thanks to \isa{{\isasymAnd}} elimination).
+
+ The Pure logic represents the idea of variables being either inside
+ or outside the current scope by providing separate syntactic
+ categories for \emph{fixed variables} (e.g.\ \isa{x}) vs.\
+ \emph{schematic variables} (e.g.\ \isa{{\isacharquery}x}). Incidentally, a
+ universal result \isa{{\isasymturnstile}\ {\isasymAnd}x{\isachardot}\ B{\isacharparenleft}x{\isacharparenright}} has the HHF normal form \isa{{\isasymturnstile}\ B{\isacharparenleft}{\isacharquery}x{\isacharparenright}}, which represents its generality nicely without requiring
+ an explicit quantifier. The same principle works for type
+ variables: \isa{{\isasymturnstile}\ B{\isacharparenleft}{\isacharquery}{\isasymalpha}{\isacharparenright}} represents the idea of ``\isa{{\isasymturnstile}\ {\isasymforall}{\isasymalpha}{\isachardot}\ B{\isacharparenleft}{\isasymalpha}{\isacharparenright}}'' without demanding a truly polymorphic framework.
+
+ \medskip Additional care is required to treat type variables in a
+ way that facilitates type-inference. In principle, term variables
+ depend on type variables, which means that type variables would have
+ to be declared first. For example, a raw type-theoretic framework
+ would demand the context to be constructed in stages as follows:
+ \isa{{\isasymGamma}\ {\isacharequal}\ {\isasymalpha}{\isacharcolon}\ type{\isacharcomma}\ x{\isacharcolon}\ {\isasymalpha}{\isacharcomma}\ a{\isacharcolon}\ A{\isacharparenleft}x\isactrlisub {\isasymalpha}{\isacharparenright}}.
+
+ We allow a slightly less formalistic mode of operation: term
+ variables \isa{x} are fixed without specifying a type yet
+ (essentially \emph{all} potential occurrences of some instance
+ \isa{x\isactrlisub {\isasymtau}} are fixed); the first occurrence of \isa{x}
+ within a specific term assigns its most general type, which is then
+ maintained consistently in the context. The above example becomes
+ \isa{{\isasymGamma}\ {\isacharequal}\ x{\isacharcolon}\ term{\isacharcomma}\ {\isasymalpha}{\isacharcolon}\ type{\isacharcomma}\ A{\isacharparenleft}x\isactrlisub {\isasymalpha}{\isacharparenright}}, where type \isa{{\isasymalpha}} is fixed \emph{after} term \isa{x}, and the constraint
+ \isa{x\ {\isacharcolon}{\isacharcolon}\ {\isasymalpha}} is an implicit consequence of the occurrence of
+ \isa{x\isactrlisub {\isasymalpha}} in the subsequent proposition.
+
+ This twist of dependencies is also accommodated by the reverse
+ operation of exporting results from a context: a type variable
+ \isa{{\isasymalpha}} is considered fixed as long as it occurs in some fixed
+ term variable of the context. For example, exporting \isa{x{\isacharcolon}\ term{\isacharcomma}\ {\isasymalpha}{\isacharcolon}\ type\ {\isasymturnstile}\ x\isactrlisub {\isasymalpha}\ {\isacharequal}\ x\isactrlisub {\isasymalpha}} produces in the first step
+ \isa{x{\isacharcolon}\ term\ {\isasymturnstile}\ x\isactrlisub {\isasymalpha}\ {\isacharequal}\ x\isactrlisub {\isasymalpha}} for fixed \isa{{\isasymalpha}},
+ and only in the second step \isa{{\isasymturnstile}\ {\isacharquery}x\isactrlisub {\isacharquery}\isactrlisub {\isasymalpha}\ {\isacharequal}\ {\isacharquery}x\isactrlisub {\isacharquery}\isactrlisub {\isasymalpha}} for schematic \isa{{\isacharquery}x} and \isa{{\isacharquery}{\isasymalpha}}.
+
+ \medskip The Isabelle/Isar proof context manages the gory details of
+ term vs.\ type variables, with high-level principles for moving the
+ frontier between fixed and schematic variables.
+
+ The \isa{add{\isacharunderscore}fixes} operation explicitly declares fixed
+ variables; the \isa{declare{\isacharunderscore}term} operation absorbs a term into
+ a context by fixing new type variables and adding syntactic
+ constraints.
+
+ The \isa{export} operation is able to perform the main work of
+ generalizing term and type variables as sketched above, assuming
+ that fixed variables and relevant terms have been declared properly.
+
+ The \isa{import} operation makes a generalized fact a genuine
+ part of the context, by inventing fixed variables for the schematic
+ ones. The effect can be reversed by using \isa{export} later,
+ potentially with an extended context; the result is equivalent to
+ the original modulo renaming of schematic variables.
+
+ The \isa{focus} operation provides a variant of \isa{import}
+ for nested propositions (with explicit quantification): \isa{{\isasymAnd}x\isactrlisub {\isadigit{1}}\ {\isasymdots}\ x\isactrlisub n{\isachardot}\ B{\isacharparenleft}x\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ x\isactrlisub n{\isacharparenright}} is
+ decomposed by inventing fixed variables \isa{x\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ x\isactrlisub n} for the body.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{Variable.add\_fixes}\verb|Variable.add_fixes: |\isasep\isanewline%
+\verb| string list -> Proof.context -> string list * Proof.context| \\
+ \indexdef{}{ML}{Variable.variant\_fixes}\verb|Variable.variant_fixes: |\isasep\isanewline%
+\verb| string list -> Proof.context -> string list * Proof.context| \\
+ \indexdef{}{ML}{Variable.declare\_term}\verb|Variable.declare_term: term -> Proof.context -> Proof.context| \\
+ \indexdef{}{ML}{Variable.declare\_constraints}\verb|Variable.declare_constraints: term -> Proof.context -> Proof.context| \\
+ \indexdef{}{ML}{Variable.export}\verb|Variable.export: Proof.context -> Proof.context -> thm list -> thm list| \\
+ \indexdef{}{ML}{Variable.polymorphic}\verb|Variable.polymorphic: Proof.context -> term list -> term list| \\
+ \indexdef{}{ML}{Variable.import\_thms}\verb|Variable.import_thms: bool -> thm list -> Proof.context ->|\isasep\isanewline%
+\verb| ((ctyp list * cterm list) * thm list) * Proof.context| \\
+ \indexdef{}{ML}{Variable.focus}\verb|Variable.focus: cterm -> Proof.context -> (cterm list * cterm) * Proof.context| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Variable.add_fixes|~\isa{xs\ ctxt} fixes term
+ variables \isa{xs}, returning the resulting internal names. By
+ default, the internal representation coincides with the external
+ one, which also means that the given variables must not be fixed
+ already. There is a different policy within a local proof body: the
+ given names are just hints for newly invented Skolem variables.
+
+ \item \verb|Variable.variant_fixes| is similar to \verb|Variable.add_fixes|, but always produces fresh variants of the given
+ names.
+
+ \item \verb|Variable.declare_term|~\isa{t\ ctxt} declares term
+ \isa{t} to belong to the context. This automatically fixes new
+ type variables, but not term variables. Syntactic constraints for
+ type and term variables are declared uniformly, though.
+
+ \item \verb|Variable.declare_constraints|~\isa{t\ ctxt} declares
+ syntactic constraints from term \isa{t}, without making it part
+ of the context yet.
+
+ \item \verb|Variable.export|~\isa{inner\ outer\ thms} generalizes
+ fixed type and term variables in \isa{thms} according to the
+ difference of the \isa{inner} and \isa{outer} context,
+ following the principles sketched above.
+
+ \item \verb|Variable.polymorphic|~\isa{ctxt\ ts} generalizes type
+ variables in \isa{ts} as far as possible, even those occurring
+ in fixed term variables. The default policy of type-inference is to
+ fix newly introduced type variables, which is essentially reversed
+ with \verb|Variable.polymorphic|: here the given terms are detached
+ from the context as far as possible.
+
+ \item \verb|Variable.import_thms|~\isa{open\ thms\ ctxt} invents fixed
+ type and term variables for the schematic ones occurring in \isa{thms}. The \isa{open} flag indicates whether the fixed names
+ should be accessible to the user; otherwise newly introduced names
+ are marked as ``internal'' (\secref{sec:names}).
+
+ \item \verb|Variable.focus|~\isa{B} decomposes the outermost \isa{{\isasymAnd}} prefix of proposition \isa{B}.
+
+ \end{description}%
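+
+
+ \medskip A minimal sketch of fixing, declaring, and exporting (assuming
+ an ambient context obtained via the \verb|@{context}| antiquotation):
+
+\begin{verbatim}
+  val ctxt0 = @{context};
+  val ([x], ctxt1) = Variable.add_fixes ["x"] ctxt0;
+  val t = Free (x, propT);                       (* the fixed variable as a term *)
+  val ctxt2 = Variable.declare_term t ctxt1;
+  val th = Thm.reflexive (Thm.cterm_of (ProofContext.theory_of ctxt2) t);
+                                                 (*  |- x == x  with x fixed   *)
+  val [th'] = Variable.export ctxt2 ctxt0 [th];  (*  |- ?x == ?x  after export *)
+\end{verbatim}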
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsection{Assumptions \label{sec:assumptions}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+An \emph{assumption} is a proposition that is postulated in the
+ current context. Local conclusions may use assumptions as
+ additional facts, but this imposes implicit hypotheses that weaken
+ the overall statement.
+
+ Assumptions are restricted to fixed non-schematic statements, i.e.\
+ all generality needs to be expressed by explicit quantifiers.
+ Nevertheless, the result will be in HHF normal form with outermost
+ quantifiers stripped. For example, by assuming \isa{{\isasymAnd}x\ {\isacharcolon}{\isacharcolon}\ {\isasymalpha}{\isachardot}\ P\ x} we get \isa{{\isasymAnd}x\ {\isacharcolon}{\isacharcolon}\ {\isasymalpha}{\isachardot}\ P\ x\ {\isasymturnstile}\ P\ {\isacharquery}x} for schematic \isa{{\isacharquery}x}
+ of fixed type \isa{{\isasymalpha}}. Local derivations accumulate more and
+ more explicit references to hypotheses: \isa{A\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ A\isactrlisub n\ {\isasymturnstile}\ B} where \isa{A\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ A\isactrlisub n} needs to
+ be covered by the assumptions of the current context.
+
+ \medskip The \isa{add{\isacharunderscore}assms} operation augments the context by
+ local assumptions, which are parameterized by an arbitrary \isa{export} rule (see below).
+
+ The \isa{export} operation moves facts from a (larger) inner
+ context into a (smaller) outer context, by discharging the
+ difference of the assumptions as specified by the associated export
+ rules. Note that the discharged portion is determined by the
+ difference of the contexts, not by the facts being exported! There is a
+ separate flag to indicate a goal context, where the result is meant
+ to refine an enclosing sub-goal of a structured proof state.
+
+ \medskip The most basic export rule discharges assumptions directly
+ by means of the \isa{{\isasymLongrightarrow}} introduction rule:
+ \[
+ \infer[(\isa{{\isasymLongrightarrow}{\isacharunderscore}intro})]{\isa{{\isasymGamma}\ {\isacharbackslash}\ A\ {\isasymturnstile}\ A\ {\isasymLongrightarrow}\ B}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ B}}
+ \]
+
+ The variant for goal refinements marks the newly introduced
+ premises, which causes the canonical Isar goal refinement scheme to
+ enforce unification with local premises within the goal:
+ \[
+ \infer[(\isa{{\isacharhash}{\isasymLongrightarrow}{\isacharunderscore}intro})]{\isa{{\isasymGamma}\ {\isacharbackslash}\ A\ {\isasymturnstile}\ {\isacharhash}A\ {\isasymLongrightarrow}\ B}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ B}}
+ \]
+
+ \medskip Alternative versions of assumptions may perform arbitrary
+ transformations on export, as long as the corresponding portion of
+ hypotheses is removed from the given facts. For example, a local
+ definition works by fixing \isa{x} and assuming \isa{x\ {\isasymequiv}\ t},
+ with the following export rule to reverse the effect:
+ \[
+ \infer[(\isa{{\isasymequiv}{\isacharminus}expand})]{\isa{{\isasymGamma}\ {\isacharbackslash}\ x\ {\isasymequiv}\ t\ {\isasymturnstile}\ B\ t}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ B\ x}}
+ \]
+ This works, because the assumption \isa{x\ {\isasymequiv}\ t} was introduced in
+ a context with \isa{x} being fresh, so \isa{x} does not
+ occur in \isa{{\isasymGamma}} here.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{Assumption.export}\verb|type Assumption.export| \\
+ \indexdef{}{ML}{Assumption.assume}\verb|Assumption.assume: cterm -> thm| \\
+ \indexdef{}{ML}{Assumption.add\_assms}\verb|Assumption.add_assms: Assumption.export ->|\isasep\isanewline%
+\verb| cterm list -> Proof.context -> thm list * Proof.context| \\
+ \indexdef{}{ML}{Assumption.add\_assumes}\verb|Assumption.add_assumes: |\isasep\isanewline%
+\verb| cterm list -> Proof.context -> thm list * Proof.context| \\
+ \indexdef{}{ML}{Assumption.export}\verb|Assumption.export: bool -> Proof.context -> Proof.context -> thm -> thm| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Assumption.export| represents arbitrary export
+ rules, i.e.\ any function of type \verb|bool -> cterm list -> thm -> thm|,
+ where the \verb|bool| indicates goal mode, and the \verb|cterm list| the collection of assumptions to be discharged
+ simultaneously.
+
+ \item \verb|Assumption.assume|~\isa{A} turns proposition \isa{A} into a raw assumption \isa{A\ {\isasymturnstile}\ A{\isacharprime}}, where the conclusion
+ \isa{A{\isacharprime}} is in HHF normal form.
+
+ \item \verb|Assumption.add_assms|~\isa{r\ As} augments the context
+ by assumptions \isa{As} with export rule \isa{r}. The
+ resulting facts are hypothetical theorems as produced by the raw
+ \verb|Assumption.assume|.
+
+ \item \verb|Assumption.add_assumes|~\isa{As} is a special case of
+ \verb|Assumption.add_assms| where the export rule performs \isa{{\isasymLongrightarrow}{\isacharunderscore}intro} or \isa{{\isacharhash}{\isasymLongrightarrow}{\isacharunderscore}intro}, depending on goal mode.
+
+ \item \verb|Assumption.export|~\isa{is{\isacharunderscore}goal\ inner\ outer\ thm}
+ exports result \isa{thm} from the \isa{inner} context
+ back into the \isa{outer} one; \isa{is{\isacharunderscore}goal\ {\isacharequal}\ true} means
+ this is a goal context. The result is in HHF normal form. Note
+ that \verb|ProofContext.export| combines \verb|Variable.export|
+ and \verb|Assumption.export| in the canonical way.
+
+ \end{description}%
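+
+
+ \medskip A small sketch of assuming and exporting (again assuming an
+ ambient context via the \verb|@{context}| antiquotation):
+
+\begin{verbatim}
+  val ctxt0 = @{context};
+  val ([x], ctxt1) = Variable.add_fixes ["x"] ctxt0;
+  val eq = Logic.mk_equals (Free (x, propT), Free (x, propT));
+  val A = Thm.cterm_of (ProofContext.theory_of ctxt1) eq;
+  val ([th], ctxt2) = Assumption.add_assumes [A] ctxt1;  (*  x == x |- x == x       *)
+  val th' = Assumption.export false ctxt2 ctxt1 th;      (*  |- x == x ==> x == x   *)
+\end{verbatim}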
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsection{Results \label{sec:results}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+Local results are established by monotonic reasoning from facts
+ within a context. This allows common combinations of theorems,
+ e.g.\ via \isa{{\isasymAnd}{\isacharslash}{\isasymLongrightarrow}} elimination, resolution rules, or equational
+ reasoning, see \secref{sec:thms}. Unaccounted context manipulations
+ should be avoided, notably raw \isa{{\isasymAnd}{\isacharslash}{\isasymLongrightarrow}} introduction or ad-hoc
+ references to free variables or assumptions not present in the proof
+ context.
+
+ \medskip The \isa{SUBPROOF} combinator makes it possible to structure a
+ tactical proof recursively by decomposing a selected sub-goal:
+ \isa{{\isacharparenleft}{\isasymAnd}x{\isachardot}\ A{\isacharparenleft}x{\isacharparenright}\ {\isasymLongrightarrow}\ B{\isacharparenleft}x{\isacharparenright}{\isacharparenright}\ {\isasymLongrightarrow}\ {\isasymdots}} is turned into \isa{B{\isacharparenleft}x{\isacharparenright}\ {\isasymLongrightarrow}\ {\isasymdots}}
+ after fixing \isa{x} and assuming \isa{A{\isacharparenleft}x{\isacharparenright}}. This means
+ the tactic needs to solve the conclusion, but may use the premise as
+ a local fact, for locally fixed variables.
+
+ The \isa{prove} operation provides an interface for structured
+ backwards reasoning under program control, with some explicit sanity
+ checks of the result. The goal context can be augmented by
+ additional fixed variables (cf.\ \secref{sec:variables}) and
+ assumptions (cf.\ \secref{sec:assumptions}), which will be available
+ as local facts during the proof and discharged into implications in
+ the result. Type and term variables are generalized as usual,
+ according to the context.
+
+ The \isa{obtain} operation produces results by eliminating
+ existing facts by means of a given tactic. This acts like a dual
+ conclusion: the proof demonstrates that the context may be augmented
+ by certain fixed variables and assumptions. See also
+ \cite{isabelle-isar-ref} for the user-level \isa{{\isasymOBTAIN}} and
+ \isa{{\isasymGUESS}} elements. Final results, which may not refer to
+ the parameters in the conclusion, need to be exported explicitly into
+ the original context.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{SUBPROOF}\verb|SUBPROOF: ({context: Proof.context, schematics: ctyp list * cterm list,|\isasep\isanewline%
+\verb| params: cterm list, asms: cterm list, concl: cterm,|\isasep\isanewline%
+\verb| prems: thm list} -> tactic) -> Proof.context -> int -> tactic| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML}{Goal.prove}\verb|Goal.prove: Proof.context -> string list -> term list -> term ->|\isasep\isanewline%
+\verb| ({prems: thm list, context: Proof.context} -> tactic) -> thm| \\
+ \indexdef{}{ML}{Goal.prove\_multi}\verb|Goal.prove_multi: Proof.context -> string list -> term list -> term list ->|\isasep\isanewline%
+\verb| ({prems: thm list, context: Proof.context} -> tactic) -> thm list| \\
+ \end{mldecls}
+ \begin{mldecls}
+ \indexdef{}{ML}{Obtain.result}\verb|Obtain.result: (Proof.context -> tactic) ->|\isasep\isanewline%
+\verb| thm list -> Proof.context -> (cterm list * thm list) * Proof.context| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|SUBPROOF|~\isa{tac\ ctxt\ i} decomposes the structure
+ of the specified sub-goal, producing an extended context and a
+ reduced goal, which needs to be solved by the given tactic. All
+ schematic parameters of the goal are imported into the context as
+ fixed ones, which may not be instantiated in the sub-proof.
+
+ \item \verb|Goal.prove|~\isa{ctxt\ xs\ As\ C\ tac} states goal \isa{C} in the context augmented by fixed variables \isa{xs} and
+ assumptions \isa{As}, and applies tactic \isa{tac} to solve
+ it. The latter may depend on the local assumptions being presented
+ as facts. The result is in HHF normal form.
+
+ \item \verb|Goal.prove_multi| is similar to \verb|Goal.prove|, but
+ states several conclusions simultaneously. The goal is encoded by
+ means of Pure conjunction; \verb|Goal.conjunction_tac| will turn this
+ into a collection of individual subgoals.
+
+ \item \verb|Obtain.result|~\isa{tac\ thms\ ctxt} eliminates the
+ given facts using a tactic, which results in additional fixed
+ variables and assumptions in the context. Final results need to be
+ exported explicitly.
+
+ \end{description}%
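+
+
+ \medskip A minimal sketch of \verb|Goal.prove| on a trivial Pure claim,
+ assuming the reflexivity rule is available as \verb|Drule.reflexive_thm|
+ and an ambient context via \verb|@{context}|:
+
+\begin{verbatim}
+  val ctxt = @{context};
+  val goal = Logic.mk_equals (Free ("x", propT), Free ("x", propT));
+  val th = Goal.prove ctxt ["x"] [] goal
+    (fn _ => resolve_tac [Drule.reflexive_thm] 1);    (* result:  |- ?x == ?x  *)
+\end{verbatim}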
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{end}\isamarkupfalse%
+%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+\isanewline
+\end{isabellebody}%
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "root"
+%%% End:
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/doc-src/IsarImplementation/Thy/document/Tactic.tex Fri Mar 06 11:28:07 2009 +0100
@@ -0,0 +1,497 @@
+%
+\begin{isabellebody}%
+\def\isabellecontext{Tactic}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{theory}\isamarkupfalse%
+\ Tactic\isanewline
+\isakeyword{imports}\ Base\isanewline
+\isakeyword{begin}%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isamarkupchapter{Tactical reasoning%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+Tactical reasoning works by refining the initial claim in a
+ backwards fashion, until a solved form is reached. A \isa{goal}
+ consists of several subgoals that need to be solved in order to
+ achieve the main statement; zero subgoals means that the proof may
+ be finished. A \isa{tactic} is a refinement operation that maps
+ a goal to a lazy sequence of potential successors. A \isa{tactical} is a combinator for composing tactics.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isamarkupsection{Goals \label{sec:tactical-goals}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+Isabelle/Pure represents a goal as a theorem stating that the
+ subgoals imply the main goal: \isa{A\isactrlsub {\isadigit{1}}\ {\isasymLongrightarrow}\ {\isasymdots}\ {\isasymLongrightarrow}\ A\isactrlsub n\ {\isasymLongrightarrow}\ C}. The outermost goal structure is that of a Horn Clause: i.e.\
+ an iterated implication without any quantifiers\footnote{Recall that
+ outermost \isa{{\isasymAnd}x{\isachardot}\ {\isasymphi}{\isacharbrackleft}x{\isacharbrackright}} is always represented via schematic
+ variables in the body: \isa{{\isasymphi}{\isacharbrackleft}{\isacharquery}x{\isacharbrackright}}. These variables may get
+ instantiated during the course of reasoning.}. For \isa{n\ {\isacharequal}\ {\isadigit{0}}}
+ a goal is called ``solved''.
+
+ The structure of each subgoal \isa{A\isactrlsub i} is that of a
+ general Hereditary Harrop Formula \isa{{\isasymAnd}x\isactrlsub {\isadigit{1}}\ {\isasymdots}\ {\isasymAnd}x\isactrlsub k{\isachardot}\ H\isactrlsub {\isadigit{1}}\ {\isasymLongrightarrow}\ {\isasymdots}\ {\isasymLongrightarrow}\ H\isactrlsub m\ {\isasymLongrightarrow}\ B}. Here \isa{x\isactrlsub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ x\isactrlsub k} are goal parameters, i.e.\
+ arbitrary-but-fixed entities of certain types, and \isa{H\isactrlsub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ H\isactrlsub m} are goal hypotheses, i.e.\ facts that may
+ be assumed locally. Together, this forms the goal context of the
+ conclusion \isa{B} to be established. The goal hypotheses may be
+ again arbitrary Hereditary Harrop Formulas, although the level of
+ nesting rarely exceeds 1--2 in practice.
+
+ The main conclusion \isa{C} is internally marked as a protected
+ proposition, which is represented explicitly by the notation \isa{{\isacharhash}C}. This ensures that the decomposition into subgoals and main
+ conclusion is well-defined for arbitrarily structured claims.
+
+ \medskip Basic goal management is performed via the following
+ Isabelle/Pure rules:
+
+ \[
+ \infer[\isa{{\isacharparenleft}init{\isacharparenright}}]{\isa{C\ {\isasymLongrightarrow}\ {\isacharhash}C}}{} \qquad
+ \infer[\isa{{\isacharparenleft}finish{\isacharparenright}}]{\isa{C}}{\isa{{\isacharhash}C}}
+ \]
+
+ \medskip The following low-level variants admit general reasoning
+ with protected propositions:
+
+ \[
+ \infer[\isa{{\isacharparenleft}protect{\isacharparenright}}]{\isa{{\isacharhash}C}}{\isa{C}} \qquad
+ \infer[\isa{{\isacharparenleft}conclude{\isacharparenright}}]{\isa{A\isactrlsub {\isadigit{1}}\ {\isasymLongrightarrow}\ {\isasymdots}\ {\isasymLongrightarrow}\ A\isactrlsub n\ {\isasymLongrightarrow}\ C}}{\isa{A\isactrlsub {\isadigit{1}}\ {\isasymLongrightarrow}\ {\isasymdots}\ {\isasymLongrightarrow}\ A\isactrlsub n\ {\isasymLongrightarrow}\ {\isacharhash}C}}
+ \]%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{Goal.init}\verb|Goal.init: cterm -> thm| \\
+ \indexdef{}{ML}{Goal.finish}\verb|Goal.finish: thm -> thm| \\
+ \indexdef{}{ML}{Goal.protect}\verb|Goal.protect: thm -> thm| \\
+ \indexdef{}{ML}{Goal.conclude}\verb|Goal.conclude: thm -> thm| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|Goal.init|~\isa{C} initializes a tactical goal from
+ the well-formed proposition \isa{C}.
+
+ \item \verb|Goal.finish|~\isa{thm} checks whether theorem
+ \isa{thm} is a solved goal (no subgoals), and concludes the
+ result by removing the goal protection.
+
+ \item \verb|Goal.protect|~\isa{thm} protects the full statement
+ of theorem \isa{thm}.
+
+ \item \verb|Goal.conclude|~\isa{thm} removes the goal
+ protection, even if there are pending subgoals.
+
+ \end{description}%
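+
+
+ \medskip The basic goal protocol on a trivial claim (a sketch, assuming
+ an ambient context via \verb|@{context}|):
+
+\begin{verbatim}
+  val ctxt = @{context};
+  val A = Free ("A", propT);
+  val C = Thm.cterm_of (ProofContext.theory_of ctxt) (Logic.mk_implies (A, A));
+  val st0 = Goal.init C;                 (* one subgoal:  PROP A ==> PROP A  *)
+  val st1 = Seq.hd (assume_tac 1 st0);   (* solve it by assumption           *)
+  val th = Goal.finish st1;              (* result:  PROP A ==> PROP A       *)
+\end{verbatim}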
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsection{Tactics%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+A \isa{tactic} is a function \isa{goal\ {\isasymrightarrow}\ goal\isactrlsup {\isacharasterisk}\isactrlsup {\isacharasterisk}} that
+ maps a given goal state (represented as a theorem, cf.\
+ \secref{sec:tactical-goals}) to a lazy sequence of potential
+ successor states. The underlying sequence implementation is lazy
+ both in head and tail, and is purely functional in \emph{not}
+ supporting memoing.\footnote{The lack of memoing and the strict
+ nature of SML requires some care when working with low-level
+ sequence operations, to avoid duplicate or premature evaluation of
+ results.}
+
+ An \emph{empty result sequence} means that the tactic has failed: in
+ a compound tactic expression other tactics might be tried instead,
+ or the whole refinement step might fail outright, producing a
+ toplevel error message. When implementing tactics from scratch, one
+ should take care to observe the basic protocol of mapping regular
+ error conditions to an empty result; only serious faults should
+ emerge as exceptions.
+
+ By enumerating \emph{multiple results}, a tactic can easily express
+ the potential outcome of an internal search process. There are also
+ combinators for building proof tools that involve search
+ systematically; see also \secref{sec:tacticals}.
+
+ \medskip As explained in \secref{sec:tactical-goals}, a goal state
+ essentially consists of a list of subgoals that imply the main goal
+ (conclusion). Tactics may operate on all subgoals or on a
+ particularly specified subgoal, but must not change the main
+ conclusion (apart from instantiating schematic goal variables).
+
+ Tactics with explicit \emph{subgoal addressing} are of the form
+ \isa{int\ {\isasymrightarrow}\ tactic} and may be applied to a particular subgoal
+ (counting from 1). If the subgoal number is out of range, the
+ tactic should fail with an empty result sequence, but must not raise
+ an exception!
+
+ Operating on a particular subgoal means to replace it by an interval
+ of zero or more subgoals in the same place; other subgoals must not
+ be affected, apart from instantiating schematic variables ranging
+ over the whole goal state.
+
+ A common pattern of composing tactics with subgoal addressing is to
+ try the first one, and then the second one only if the subgoal has
+ not been solved yet. Special care is required here to avoid bumping
+ into unrelated subgoals that happen to come after the original
+ subgoal. Assuming that there is only a single initial subgoal is a
+ very common error when implementing tactics!
+
+ Tactics with internal subgoal addressing should expose the subgoal
+ index as \isa{int} argument in full generality; a hardwired
+ subgoal 1 is inappropriate.
+
+ \medskip The main well-formedness conditions for proper tactics are
+ summarized as follows.
+
+ \begin{itemize}
+
+ \item General tactic failure is indicated by an empty result, only
+ serious faults may produce an exception.
+
+ \item The main conclusion must not be changed, apart from
+ instantiating schematic variables.
+
+ \item A tactic operates either uniformly on all subgoals, or
+ specifically on a selected subgoal (without bumping into unrelated
+ subgoals).
+
+ \item Range errors in subgoal addressing produce an empty result.
+
+ \end{itemize}
+
+ Some of these conditions are checked by higher-level goal
+ infrastructure (\secref{sec:results}); others are not checked
+ explicitly, and violating them merely results in ill-behaved tactics
+ experienced by the user (e.g.\ tactics that insist on being
+ applicable only to singleton goals, or disallow composition with
+ basic tacticals).%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML type}{tactic}\verb|type tactic = thm -> thm Seq.seq| \\
+ \indexdef{}{ML}{no\_tac}\verb|no_tac: tactic| \\
+ \indexdef{}{ML}{all\_tac}\verb|all_tac: tactic| \\
+ \indexdef{}{ML}{print\_tac}\verb|print_tac: string -> tactic| \\[1ex]
+ \indexdef{}{ML}{PRIMITIVE}\verb|PRIMITIVE: (thm -> thm) -> tactic| \\[1ex]
+ \indexdef{}{ML}{SUBGOAL}\verb|SUBGOAL: (term * int -> tactic) -> int -> tactic| \\
+ \indexdef{}{ML}{CSUBGOAL}\verb|CSUBGOAL: (cterm * int -> tactic) -> int -> tactic| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|tactic| represents tactics. The well-formedness
+ conditions described above need to be observed. See also \hyperlink{file.~~/src/Pure/General/seq.ML}{\mbox{\isa{\isatt{{\isachartilde}{\isachartilde}{\isacharslash}src{\isacharslash}Pure{\isacharslash}General{\isacharslash}seq{\isachardot}ML}}}} for the underlying implementation of
+ lazy sequences.
+
+ \item \verb|int -> tactic| represents tactics with explicit
+ subgoal addressing, with well-formedness conditions as described
+ above.
+
+ \item \verb|no_tac| is a tactic that always fails, returning the
+ empty sequence.
+
+ \item \verb|all_tac| is a tactic that always succeeds, returning a
+ singleton sequence with unchanged goal state.
+
+ \item \verb|print_tac|~\isa{message} is like \verb|all_tac|, but
+ prints a message together with the goal state on the tracing
+ channel.
+
+ \item \verb|PRIMITIVE|~\isa{rule} turns a primitive inference rule
+ into a tactic with unique result. Exception \verb|THM| is considered
+ a regular tactic failure and produces an empty result; other
+ exceptions are passed through.
+
+ \item \verb|SUBGOAL|~\isa{{\isacharparenleft}fn\ {\isacharparenleft}subgoal{\isacharcomma}\ i{\isacharparenright}\ {\isacharequal}{\isachargreater}\ tactic{\isacharparenright}} is the
+ most basic form to produce a tactic with subgoal addressing. The
+ given abstraction over the subgoal term and subgoal number provides
+ access to the relevant information of the full goal state. The
+ subgoal range is checked as required above.
+
+ \item \verb|CSUBGOAL| is similar to \verb|SUBGOAL|, but passes the
+ subgoal as \verb|cterm| instead of raw \verb|term|. This
+ avoids expensive re-certification in situations where the subgoal is
+ used directly for primitive inferences.
+
+ \end{description}%
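+
+
+ \medskip Two hypothetical helpers illustrating these interfaces (the
+ names are for illustration only):
+
+\begin{verbatim}
+  (* turn a plain rule transformation into a single-result tactic *)
+  val norm_tac = PRIMITIVE Drule.zero_var_indexes;
+
+  (* subgoal addressing: try assumption only if the subgoal has premises *)
+  val assm_tac = SUBGOAL (fn (goal, i) =>
+    if Logic.count_prems goal > 0 then assume_tac i else no_tac);
+\end{verbatim}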
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Resolution and assumption tactics \label{sec:resolve-assume-tac}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+\emph{Resolution} is the most basic mechanism for refining a
+ subgoal using a theorem as object-level rule.
+ \emph{Elim-resolution} is particularly suited for elimination rules:
+ it resolves with a rule, proves its first premise by assumption, and
+ finally deletes that assumption from any new subgoals.
+ \emph{Destruct-resolution} is like elim-resolution, but the given
+ destruction rules are first turned into canonical elimination
+ format. \emph{Forward-resolution} is like destruct-resolution, but
+ without deleting the selected assumption. The \isa{r{\isacharslash}e{\isacharslash}d{\isacharslash}f}
+ naming convention is maintained for several different kinds of
+ resolution rules and tactics.
+
+ Assumption tactics close a subgoal by unifying some of its premises
+ against its conclusion.
+
+ \medskip All the tactics in this section operate on a subgoal
+ designated by a positive integer. Other subgoals might be affected
+ indirectly, due to instantiation of schematic variables.
+
+ There are various sources of non-determinism, the tactic result
+ sequence enumerates all possibilities of the following choices (if
+ applicable):
+
+ \begin{enumerate}
+
+ \item selecting one of the rules given as argument to the tactic;
+
+ \item selecting a subgoal premise to eliminate, unifying it against
+ the first premise of the rule;
+
+ \item unifying the conclusion of the subgoal to the conclusion of
+ the rule.
+
+ \end{enumerate}
+
+ Recall that higher-order unification may produce multiple results
+ that are enumerated here.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{resolve\_tac}\verb|resolve_tac: thm list -> int -> tactic| \\
+ \indexdef{}{ML}{eresolve\_tac}\verb|eresolve_tac: thm list -> int -> tactic| \\
+ \indexdef{}{ML}{dresolve\_tac}\verb|dresolve_tac: thm list -> int -> tactic| \\
+ \indexdef{}{ML}{forward\_tac}\verb|forward_tac: thm list -> int -> tactic| \\[1ex]
+ \indexdef{}{ML}{assume\_tac}\verb|assume_tac: int -> tactic| \\
+ \indexdef{}{ML}{eq\_assume\_tac}\verb|eq_assume_tac: int -> tactic| \\[1ex]
+ \indexdef{}{ML}{match\_tac}\verb|match_tac: thm list -> int -> tactic| \\
+ \indexdef{}{ML}{ematch\_tac}\verb|ematch_tac: thm list -> int -> tactic| \\
+ \indexdef{}{ML}{dmatch\_tac}\verb|dmatch_tac: thm list -> int -> tactic| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|resolve_tac|~\isa{thms\ i} refines the goal state
+ using the given theorems, which should normally be introduction
+ rules. The tactic resolves a rule's conclusion with subgoal \isa{i}, replacing it by the corresponding versions of the rule's
+ premises.
+
+ \item \verb|eresolve_tac|~\isa{thms\ i} performs elim-resolution
+ with the given theorems, which should normally be elimination rules.
+
+ \item \verb|dresolve_tac|~\isa{thms\ i} performs
+ destruct-resolution with the given theorems, which should normally
+ be destruction rules. This replaces an assumption by the result of
+ applying one of the rules.
+
+ \item \verb|forward_tac| is like \verb|dresolve_tac| except that the
+ selected assumption is not deleted. It applies a rule to an
+ assumption, adding the result as a new assumption.
+
+ \item \verb|assume_tac|~\isa{i} attempts to solve subgoal \isa{i}
+ by assumption (modulo higher-order unification).
+
+ \item \verb|eq_assume_tac| is similar to \verb|assume_tac|, but checks
+ only for immediate \isa{{\isasymalpha}}-convertibility instead of using
+ unification. It succeeds (with a unique next state) if one of the
+ assumptions is equal to the subgoal's conclusion. Since it does not
+ instantiate variables, it cannot make other subgoals unprovable.
+
+ \item \verb|match_tac|, \verb|ematch_tac|, and \verb|dmatch_tac| are
+ similar to \verb|resolve_tac|, \verb|eresolve_tac|, and \verb|dresolve_tac|, respectively, but do not instantiate schematic
+ variables in the goal state.
+
+ Flexible subgoals are not updated at will, but are left alone.
+ Strictly speaking, matching means to treat the unknowns in the goal
+ state as constants; these tactics merely discard unifiers that would
+ update the goal state.
+
+ \end{description}%
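+
+
+ \medskip A typical combination (a sketch; \verb|THEN_ALL_NEW| is a
+ standard tactical, cf.\ \secref{sec:tacticals}): apply one of the given
+ rules to the addressed subgoal and close all emerging subgoals by
+ assumption.
+
+\begin{verbatim}
+  fun rule_by_assumption rules = resolve_tac rules THEN_ALL_NEW assume_tac;
+
+  (* e.g.  rule_by_assumption [some_intro_rule] 1  *)
+\end{verbatim}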
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsubsection{Explicit instantiation within a subgoal context%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+The main resolution tactics (\secref{sec:resolve-assume-tac})
+ use higher-order unification, which works well in many practical
+ situations despite its daunting theoretical properties.
+ Nonetheless, there are important problem classes where unguided
+ higher-order unification is not so useful. This typically involves
+ rules like universal elimination, existential introduction, or
+ equational substitution. Here the unification problem involves
+ fully flexible \isa{{\isacharquery}P\ {\isacharquery}x} schemes, which are hard to manage
+ without further hints.
+
+ By providing a (small) rigid term for \isa{{\isacharquery}x} explicitly, the
+ remaining unification problem is to assign a (large) term to \isa{{\isacharquery}P}, according to the shape of the given subgoal. This is
+ sufficiently well-behaved in most practical situations.
+
+ \medskip Isabelle provides separate versions of the standard \isa{r{\isacharslash}e{\isacharslash}d{\isacharslash}f} resolution tactics that accept explicit
+ instantiations of unknowns of the given rule, wrt.\ terms that refer
+ to the implicit context of the selected subgoal.
+
+ An instantiation consists of a list of pairs of the form \isa{{\isacharparenleft}{\isacharquery}x{\isacharcomma}\ t{\isacharparenright}}, where \isa{{\isacharquery}x} is a schematic variable occurring in
+ the given rule, and \isa{t} is a term from the current proof
+ context, augmented by the local goal parameters of the selected
+ subgoal; cf.\ the \isa{focus} operation described in
+ \secref{sec:variables}.
+
+ Entering the syntactic context of a subgoal is a brittle operation,
+ because its exact form is somewhat accidental, and the choice of
+ bound variable names depends on the presence of other local and
+ global names. Explicit renaming of subgoal parameters prior to
+ explicit instantiation might help to achieve a bit more robustness.
+
+ Type instantiations may be given as well, via pairs like \isa{{\isacharparenleft}{\isacharquery}{\isacharprime}a{\isacharcomma}\ {\isasymtau}{\isacharparenright}}. Type instantiations are distinguished from term
+ instantiations by the syntactic form of the schematic variable.
+ Types are instantiated before terms are. Since term instantiation
+ already performs type-inference as expected, explicit type
+ instantiations are seldom necessary.%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isatagmlref
+%
+\begin{isamarkuptext}%
+\begin{mldecls}
+ \indexdef{}{ML}{res\_inst\_tac}\verb|res_inst_tac: Proof.context -> (indexname * string) list -> thm -> int -> tactic| \\
+ \indexdef{}{ML}{eres\_inst\_tac}\verb|eres_inst_tac: Proof.context -> (indexname * string) list -> thm -> int -> tactic| \\
+ \indexdef{}{ML}{dres\_inst\_tac}\verb|dres_inst_tac: Proof.context -> (indexname * string) list -> thm -> int -> tactic| \\
+ \indexdef{}{ML}{forw\_inst\_tac}\verb|forw_inst_tac: Proof.context -> (indexname * string) list -> thm -> int -> tactic| \\[1ex]
+ \indexdef{}{ML}{rename\_tac}\verb|rename_tac: string list -> int -> tactic| \\
+ \end{mldecls}
+
+ \begin{description}
+
+ \item \verb|res_inst_tac|~\isa{ctxt\ insts\ thm\ i} instantiates the
+ rule \isa{thm} with the instantiations \isa{insts}, as described
+ above, and then performs resolution on subgoal \isa{i}.
+
+ \item \verb|eres_inst_tac| is like \verb|res_inst_tac|, but performs
+ elim-resolution.
+
+ \item \verb|dres_inst_tac| is like \verb|res_inst_tac|, but performs
+ destruct-resolution.
+
+ \item \verb|forw_inst_tac| is like \verb|dres_inst_tac| except that
+ the selected assumption is not deleted.
+
+ \item \verb|rename_tac|~\isa{names\ i} renames the innermost
+ parameters of subgoal \isa{i} according to the provided \isa{names} (which need to be distinct identifiers).
+
+ \end{description}%
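+
+
+ \medskip A hypothetical usage sketch: \isa{rule} stands for some theorem
+ with a schematic variable \verb|?x|, and ``n'' refers to a parameter of
+ the selected subgoal (possibly introduced via \verb|rename_tac| beforehand):
+
+\begin{verbatim}
+  fun my_inst_tac ctxt rule i =
+    res_inst_tac ctxt [(("x", 0), "n")] rule i;
+\end{verbatim}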
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\endisatagmlref
+{\isafoldmlref}%
+%
+\isadelimmlref
+%
+\endisadelimmlref
+%
+\isamarkupsection{Tacticals \label{sec:tacticals}%
+}
+\isamarkuptrue%
+%
+\begin{isamarkuptext}%
+A \emph{tactical} is a functional combinator for building up complex
+ tactics from simpler ones. Typical tacticals perform sequential
+ composition, disjunction (choice), iteration, or goal addressing.
+ Various search strategies may be expressed via tacticals.
+
+ \medskip FIXME%
+\end{isamarkuptext}%
+\isamarkuptrue%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+%
+\isatagtheory
+\isacommand{end}\isamarkupfalse%
+%
+\endisatagtheory
+{\isafoldtheory}%
+%
+\isadelimtheory
+%
+\endisadelimtheory
+\isanewline
+\end{isabellebody}%
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "root"
+%%% End: