--- a/doc-src/IsarImplementation/Thy/ML.thy Thu Sep 07 15:16:51 2006 +0200
+++ b/doc-src/IsarImplementation/Thy/ML.thy Thu Sep 07 20:12:08 2006 +0200
@@ -18,7 +18,7 @@
chapter {* Cookbook *}
-section {* Defining a method that depends on declarations in the context *}
+section {* A method that depends on declarations in the context *}
text FIXME
--- a/doc-src/IsarImplementation/Thy/document/ML.tex Thu Sep 07 15:16:51 2006 +0200
+++ b/doc-src/IsarImplementation/Thy/document/ML.tex Thu Sep 07 20:12:08 2006 +0200
@@ -43,7 +43,7 @@
}
\isamarkuptrue%
%
-\isamarkupsection{Defining a method that depends on declarations in the context%
+\isamarkupsection{A method that depends on declarations in the context%
}
\isamarkuptrue%
%
--- a/doc-src/IsarImplementation/Thy/document/integration.tex Thu Sep 07 15:16:51 2006 +0200
+++ b/doc-src/IsarImplementation/Thy/document/integration.tex Thu Sep 07 20:12:08 2006 +0200
@@ -510,15 +510,6 @@
%
\endisadelimmlref
%
-\isamarkupsection{Sessions and document preparation%
-}
-\isamarkuptrue%
-%
-\begin{isamarkuptext}%
-FIXME%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
\isadelimtheory
%
\endisadelimtheory
--- a/doc-src/IsarImplementation/Thy/document/logic.tex Thu Sep 07 15:16:51 2006 +0200
+++ b/doc-src/IsarImplementation/Thy/document/logic.tex Thu Sep 07 20:12:08 2006 +0200
@@ -28,15 +28,18 @@
which has been introduced as a natural-deduction framework in
\cite{paulson700}. This is essentially the same logic as ``\isa{{\isasymlambda}HOL}'' in the more abstract framework of Pure Type Systems (PTS)
\cite{Barendregt-Geuvers:2001}, although there are some key
- differences in the practical treatment of simple types.
+ differences in the specific treatment of simple types in
+ Isabelle/Pure.
Following type-theoretic parlance, the Pure logic consists of three
levels of \isa{{\isasymlambda}}-calculus with corresponding arrows: \isa{{\isasymRightarrow}} for syntactic function space (terms depending on terms), \isa{{\isasymAnd}} for universal quantification (proofs depending on terms), and
\isa{{\isasymLongrightarrow}} for implication (proofs depending on proofs).
Pure derivations are relative to a logical theory, which declares
- type constructors, term constants, and axioms. Term constants and
- axioms support schematic polymorphism.%
+ type constructors, term constants, and axioms. Theory declarations
+ support schematic polymorphism, which is strictly speaking outside
+ the logic.\footnote{Incidentally, this is the main logical reason why
+ the theory context \isa{{\isasymTheta}} is separate from the context \isa{{\isasymGamma}} of the core calculus.}%
\end{isamarkuptext}%
\isamarkuptrue%
%
@@ -50,38 +53,42 @@
\medskip A \emph{type class} is an abstract syntactic entity
declared in the theory context. The \emph{subclass relation} \isa{c\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ c\isactrlisub {\isadigit{2}}} is specified by stating an acyclic
- generating relation; the transitive closure maintained internally.
+ generating relation; the transitive closure is maintained
+ internally. The resulting relation is an ordering: reflexive,
+ transitive, and antisymmetric.
A \emph{sort} is a list of type classes written as \isa{{\isacharbraceleft}c\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ c\isactrlisub m{\isacharbraceright}}, which represents symbolic
intersection. Notationally, the curly braces are omitted for
singleton intersections, i.e.\ any class \isa{c} may be read as
a sort \isa{{\isacharbraceleft}c{\isacharbraceright}}. The ordering on type classes is extended to
- sorts in the canonical fashion: \isa{{\isacharbraceleft}c\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}\ c\isactrlisub m{\isacharbraceright}\ {\isasymsubseteq}\ {\isacharbraceleft}d\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ d\isactrlisub n{\isacharbraceright}} iff \isa{{\isasymforall}j{\isachardot}\ {\isasymexists}i{\isachardot}\ c\isactrlisub i\ {\isasymsubseteq}\ d\isactrlisub j}. The empty intersection \isa{{\isacharbraceleft}{\isacharbraceright}} refers to the
- universal sort, which is the largest element wrt.\ the sort order.
- The intersections of all (i.e.\ finitely many) classes declared in
- the current theory are the minimal elements wrt.\ sort order.
+ sorts according to the meaning of intersections: \isa{{\isacharbraceleft}c\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}\ c\isactrlisub m{\isacharbraceright}\ {\isasymsubseteq}\ {\isacharbraceleft}d\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ d\isactrlisub n{\isacharbraceright}} iff
+ \isa{{\isasymforall}j{\isachardot}\ {\isasymexists}i{\isachardot}\ c\isactrlisub i\ {\isasymsubseteq}\ d\isactrlisub j}. The empty intersection
+ \isa{{\isacharbraceleft}{\isacharbraceright}} refers to the universal sort, which is the largest
+ element wrt.\ the sort order. The intersections of all (finitely
+ many) classes declared in the current theory are the minimal
+ elements wrt.\ the sort order.
- \medskip A \emph{fixed type variable} is pair of a basic name
+ \medskip A \emph{fixed type variable} is a pair of a basic name
(starting with \isa{{\isacharprime}} character) and a sort constraint. For
example, \isa{{\isacharparenleft}{\isacharprime}a{\isacharcomma}\ s{\isacharparenright}} which is usually printed as \isa{{\isasymalpha}\isactrlisub s}. A \emph{schematic type variable} is a pair of an
- indexname and a sort constraint. For example, \isa{{\isacharparenleft}{\isacharparenleft}{\isacharprime}a{\isacharcomma}\ {\isadigit{0}}{\isacharparenright}{\isacharcomma}\ s{\isacharparenright}} which is usually printed \isa{{\isacharquery}{\isasymalpha}\isactrlisub s}.
+ indexname and a sort constraint. For example, \isa{{\isacharparenleft}{\isacharparenleft}{\isacharprime}a{\isacharcomma}\ {\isadigit{0}}{\isacharparenright}{\isacharcomma}\ s{\isacharparenright}} which is usually printed as \isa{{\isacharquery}{\isasymalpha}\isactrlisub s}.
Note that \emph{all} syntactic components contribute to the identity
- of a type variables, including the literal sort constraint. The
- core logic handles type variables with the same name but different
- sorts as different, even though the outer layers of the system make
- it hard to produce anything like this.
+ of type variables, including the literal sort constraint. The core
+ logic handles type variables with the same name but different sorts
+ as different, although some outer layers of the system make it hard
+ to produce anything like this.
- A \emph{type constructor} is an \isa{k}-ary type operator
- declared in the theory.
+ A \emph{type constructor} is a \isa{k}-ary operator on types
+ declared in the theory. Type constructor application is usually
+ written postfix. For \isa{k\ {\isacharequal}\ {\isadigit{0}}} the argument tuple is omitted,
+ e.g.\ \isa{prop} instead of \isa{{\isacharparenleft}{\isacharparenright}prop}. For \isa{k\ {\isacharequal}\ {\isadigit{1}}} the parentheses are omitted, e.g.\ \isa{{\isasymalpha}\ list} instead of
+ \isa{{\isacharparenleft}{\isasymalpha}{\isacharparenright}list}. Further notation is provided for specific
+ constructors, notably right-associative infix \isa{{\isasymalpha}\ {\isasymRightarrow}\ {\isasymbeta}}
+ instead of the \isa{{\isacharparenleft}{\isasymalpha}{\isacharcomma}\ {\isasymbeta}{\isacharparenright}fun} constructor.
A \emph{type} is defined inductively over type variables and type
- constructors: \isa{{\isasymtau}\ {\isacharequal}\ {\isasymalpha}\isactrlisub s\ {\isacharbar}\ {\isacharquery}{\isasymalpha}\isactrlisub s\ {\isacharbar}\ {\isacharparenleft}{\isasymtau}\isactrlsub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymtau}\isactrlsub k{\isacharparenright}c}. Type constructor application is usually written
- postfix. For \isa{k\ {\isacharequal}\ {\isadigit{0}}} the argument tuple is omitted, e.g.\
- \isa{prop} instead of \isa{{\isacharparenleft}{\isacharparenright}prop}. For \isa{k\ {\isacharequal}\ {\isadigit{1}}} the
- parentheses are omitted, e.g.\ \isa{{\isasymtau}\ list} instead of \isa{{\isacharparenleft}{\isasymtau}{\isacharparenright}\ list}. Further notation is provided for specific
- constructors, notably right-associative infix \isa{{\isasymtau}\isactrlisub {\isadigit{1}}\ {\isasymRightarrow}\ {\isasymtau}\isactrlisub {\isadigit{2}}} instead of \isa{{\isacharparenleft}{\isasymtau}\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymtau}\isactrlisub {\isadigit{2}}{\isacharparenright}fun}
- constructor.
+ constructors as follows: \isa{{\isasymtau}\ {\isacharequal}\ {\isasymalpha}\isactrlisub s\ {\isacharbar}\ {\isacharquery}{\isasymalpha}\isactrlisub s\ {\isacharbar}\ {\isacharparenleft}{\isasymtau}\isactrlsub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymtau}\isactrlsub k{\isacharparenright}c}.
A \emph{type abbreviation} is a syntactic abbreviation of an
arbitrary type expression of the theory. Type abbreviations looks
@@ -89,21 +96,23 @@
core logic encounters them.
A \emph{type arity} declares the image behavior of a type
- constructor wrt.\ the algebra of sorts: \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}s\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ s\isactrlisub n{\isacharparenright}s} means that \isa{{\isacharparenleft}{\isasymtau}\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymtau}\isactrlisub k{\isacharparenright}c} is
+ constructor wrt.\ the algebra of sorts: \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}s\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ s\isactrlisub k{\isacharparenright}s} means that \isa{{\isacharparenleft}{\isasymtau}\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymtau}\isactrlisub k{\isacharparenright}c} is
of sort \isa{s} if each argument type \isa{{\isasymtau}\isactrlisub i} is of
- sort \isa{s\isactrlisub i}. The sort algebra is always maintained as
- \emph{coregular}, which means that type arities are consistent with
- the subclass relation: for each type constructor \isa{c} and
- classes \isa{c\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ c\isactrlisub {\isadigit{2}}}, any arity \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s\isactrlisub {\isadigit{1}}{\isacharparenright}c\isactrlisub {\isadigit{1}}} has a corresponding arity \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s\isactrlisub {\isadigit{2}}{\isacharparenright}c\isactrlisub {\isadigit{2}}} where \isa{\isactrlvec s\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ \isactrlvec s\isactrlisub {\isadigit{2}}} holds pointwise for all argument sorts.
+ sort \isa{s\isactrlisub i}. Arity declarations are implicitly
+ completed, i.e.\ \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s{\isacharparenright}c} entails \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s{\isacharparenright}c{\isacharprime}} for any \isa{c{\isacharprime}\ {\isasymsupseteq}\ c}.
- The key property of the order-sorted algebra of types is that sort
+ \medskip The sort algebra is always maintained as \emph{coregular},
+ which means that type arities are consistent with the subclass
+ relation: for each type constructor \isa{c} and classes \isa{c\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ c\isactrlisub {\isadigit{2}}}, any arity \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s\isactrlisub {\isadigit{1}}{\isacharparenright}c\isactrlisub {\isadigit{1}}} has a corresponding arity \isa{c\ {\isacharcolon}{\isacharcolon}\ {\isacharparenleft}\isactrlvec s\isactrlisub {\isadigit{2}}{\isacharparenright}c\isactrlisub {\isadigit{2}}} where \isa{\isactrlvec s\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ \isactrlvec s\isactrlisub {\isadigit{2}}} holds pointwise for all argument sorts.
+
+ The key property of a coregular order-sorted algebra is that sort
constraints may be always fulfilled in a most general fashion: for
each type constructor \isa{c} and sort \isa{s} there is a
- most general vector of argument sorts \isa{{\isacharparenleft}s\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ s\isactrlisub k{\isacharparenright}} such that \isa{{\isacharparenleft}{\isasymtau}\isactrlbsub s\isactrlisub {\isadigit{1}}\isactrlesub {\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymtau}\isactrlbsub s\isactrlisub k\isactrlesub {\isacharparenright}} for arbitrary \isa{{\isasymtau}\isactrlisub i} of
- sort \isa{s\isactrlisub i}. This means the unification problem on
- the algebra of types has most general solutions (modulo renaming and
- equivalence of sorts). As a consequence, type-inference is able to
- produce primary types.%
+ most general vector of argument sorts \isa{{\isacharparenleft}s\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ s\isactrlisub k{\isacharparenright}} such that a type scheme \isa{{\isacharparenleft}{\isasymalpha}\isactrlbsub s\isactrlisub {\isadigit{1}}\isactrlesub {\isacharcomma}\ {\isasymdots}{\isacharcomma}\ {\isasymalpha}\isactrlbsub s\isactrlisub k\isactrlesub {\isacharparenright}c} is
+ of sort \isa{s}. Consequently, the unification problem on the
+ algebra of types has most general solutions (modulo renaming and
+ equivalence of sorts). Moreover, the usual type-inference algorithm
+ will produce primary types as expected \cite{nipkow-prehofer}.%
\end{isamarkuptext}%
\isamarkuptrue%
%
@@ -147,18 +156,18 @@
tests the subsort relation \isa{s\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ s\isactrlisub {\isadigit{2}}}.
\item \verb|Sign.of_sort|~\isa{thy\ {\isacharparenleft}{\isasymtau}{\isacharcomma}\ s{\isacharparenright}} tests whether a type
- expression of of a given sort.
+ is of a given sort.
\item \verb|Sign.add_types|~\isa{{\isacharbrackleft}{\isacharparenleft}c{\isacharcomma}\ k{\isacharcomma}\ mx{\isacharparenright}{\isacharcomma}\ {\isasymdots}{\isacharbrackright}} declares new
- type constructors \isa{c} with \isa{k} arguments, and
+ type constructors \isa{c} with \isa{k} arguments and
optional mixfix syntax.
\item \verb|Sign.add_tyabbrs_i|~\isa{{\isacharbrackleft}{\isacharparenleft}c{\isacharcomma}\ \isactrlvec {\isasymalpha}{\isacharcomma}\ {\isasymtau}{\isacharcomma}\ mx{\isacharparenright}{\isacharcomma}\ {\isasymdots}{\isacharbrackright}}
- defines type abbreviation \isa{{\isacharparenleft}\isactrlvec {\isasymalpha}{\isacharparenright}c} (with optional
- mixfix syntax) as \isa{{\isasymtau}}.
+ defines a new type abbreviation \isa{{\isacharparenleft}\isactrlvec {\isasymalpha}{\isacharparenright}c\ {\isacharequal}\ {\isasymtau}} with
+ optional mixfix syntax.
\item \verb|Sign.primitive_class|~\isa{{\isacharparenleft}c{\isacharcomma}\ {\isacharbrackleft}c\isactrlisub {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ c\isactrlisub n{\isacharbrackright}{\isacharparenright}} declares new class \isa{c} derived together with
- class relations \isa{c\ {\isasymsubseteq}\ c\isactrlisub i}.
+ class relations \isa{c\ {\isasymsubseteq}\ c\isactrlisub i}, for \isa{i\ {\isacharequal}\ {\isadigit{1}}{\isacharcomma}\ {\isasymdots}{\isacharcomma}\ n}.
\item \verb|Sign.primitive_classrel|~\isa{{\isacharparenleft}c\isactrlisub {\isadigit{1}}{\isacharcomma}\ c\isactrlisub {\isadigit{2}}{\isacharparenright}} declares class relation \isa{c\isactrlisub {\isadigit{1}}\ {\isasymsubseteq}\ c\isactrlisub {\isadigit{2}}}.
@@ -183,6 +192,13 @@
\begin{isamarkuptext}%
\glossary{Term}{FIXME}
+ The language of terms is that of simply-typed \isa{{\isasymlambda}}-calculus
+ with de-Bruijn indices for bound variables, and named free
+ variables and constants. Terms with loose bound variables are
+ usually considered malformed. The types of variables and constants
+ are stored explicitly at each occurrence in the term (which is a
+ known performance issue).
+
FIXME de-Bruijn representation of lambda terms
Term syntax provides explicit abstraction \isa{{\isasymlambda}x\ {\isacharcolon}{\isacharcolon}\ {\isasymalpha}{\isachardot}\ b{\isacharparenleft}x{\isacharparenright}}
@@ -203,15 +219,6 @@
\end{isamarkuptext}%
\isamarkuptrue%
%
-\isamarkupsection{Proof terms%
-}
-\isamarkuptrue%
-%
-\begin{isamarkuptext}%
-FIXME%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
\isamarkupsection{Theorems \label{sec:thms}%
}
\isamarkuptrue%
@@ -267,22 +274,53 @@
\end{isamarkuptext}%
\isamarkuptrue%
%
-\isamarkupsubsection{Primitive inferences%
+\isamarkupsection{Proof terms%
}
\isamarkuptrue%
%
\begin{isamarkuptext}%
-FIXME%
+FIXME !?%
\end{isamarkuptext}%
\isamarkuptrue%
%
-\isamarkupsubsection{Higher-order resolution%
+\isamarkupsection{Rules \label{sec:rules}%
}
\isamarkuptrue%
%
\begin{isamarkuptext}%
FIXME
+ A \emph{rule} is any Pure theorem in HHF normal form; there is a
+ separate calculus for rule composition, which is modeled after
+ Gentzen's Natural Deduction \cite{Gentzen:1935}, but allows
+ rules to be nested arbitrarily, similar to \cite{extensions91}.
+
+ Normally, all theorems accessible to the user are proper rules.
+ Low-level inferences are occasionally required internally, but the
+ result should always be presented in canonical form. The higher
+ interfaces of Isabelle/Isar will always produce proper rules. It is
+ important to maintain this invariant in add-on applications!
+
+ There are two main principles of rule composition: \isa{resolution} (i.e.\ backchaining of rules) and \isa{by{\isacharminus}assumption} (i.e.\ closing a branch); both principles are
+ combined in the variants of \isa{elim{\isacharminus}resolution} and \isa{dest{\isacharminus}resolution}. Raw \isa{composition} is occasionally
+ useful as well, although it is strictly speaking outside the proper
+ rule calculus.
+
+ Rules are treated modulo general higher-order unification, which is
+ unification modulo the equational theory of \isa{{\isasymalpha}{\isasymbeta}{\isasymeta}}-conversion
+ on \isa{{\isasymlambda}}-terms. Moreover, propositions are understood modulo
+ the (derived) equivalence \isa{{\isacharparenleft}A\ {\isasymLongrightarrow}\ {\isacharparenleft}{\isasymAnd}x{\isachardot}\ B\ x{\isacharparenright}{\isacharparenright}\ {\isasymequiv}\ {\isacharparenleft}{\isasymAnd}x{\isachardot}\ A\ {\isasymLongrightarrow}\ B\ x{\isacharparenright}}.
+
+ This means that any operations within the rule calculus may be
+ subject to spontaneous \isa{{\isasymalpha}{\isasymbeta}{\isasymeta}}-HHF conversions. It is common
+ practice not to contract or expand unnecessarily. Some mechanisms
+ prefer one form, others the opposite, so there is a potential
+ danger of oscillation!
+
+ Only a few operations really work \emph{modulo} HHF conversion; most
+ expect a normal form: quantifiers \isa{{\isasymAnd}} before implications
+ \isa{{\isasymLongrightarrow}} at each level of nesting.
+
\glossary{Hereditary Harrop Formula}{The set of propositions in HHF
format is defined inductively as \isa{H\ {\isacharequal}\ {\isacharparenleft}{\isasymAnd}x\isactrlsup {\isacharasterisk}{\isachardot}\ H\isactrlsup {\isacharasterisk}\ {\isasymLongrightarrow}\ A{\isacharparenright}}, for variables \isa{x} and atomic propositions \isa{A}.
Any proposition may be put into HHF form by normalizing with the rule
@@ -295,15 +333,6 @@
\end{isamarkuptext}%
\isamarkuptrue%
%
-\isamarkupsubsection{Equational reasoning%
-}
-\isamarkuptrue%
-%
-\begin{isamarkuptext}%
-FIXME%
-\end{isamarkuptext}%
-\isamarkuptrue%
-%
\isadelimtheory
%
\endisadelimtheory
--- a/doc-src/IsarImplementation/Thy/document/proof.tex Thu Sep 07 15:16:51 2006 +0200
+++ b/doc-src/IsarImplementation/Thy/document/proof.tex Thu Sep 07 20:12:08 2006 +0200
@@ -217,7 +217,7 @@
definition works by fixing \isa{x} and assuming \isa{x\ {\isasymequiv}\ t},
with the following export rule to reverse the effect:
\[
- \infer{\isa{{\isasymGamma}\ {\isacharbackslash}\ x\ {\isasymequiv}\ t\ {\isasymturnstile}\ B\ t}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ B\ x}}
+ \infer[(\isa{{\isasymequiv}{\isacharminus}expand})]{\isa{{\isasymGamma}\ {\isacharbackslash}\ x\ {\isasymequiv}\ t\ {\isasymturnstile}\ B\ t}}{\isa{{\isasymGamma}\ {\isasymturnstile}\ B\ x}}
\]
This works, because the assumption \isa{x\ {\isasymequiv}\ t} was introduced in
a context with \isa{x} being fresh, so \isa{x} does not
@@ -291,22 +291,30 @@
references to free variables or assumptions not present in the proof
context.
- \medskip The \isa{prove} operation provides an interface for
- structured backwards reasoning under program control, with some
- explicit sanity checks of the result. The goal context can be
- augmented by additional fixed variables (cf.\
- \secref{sec:variables}) and assumptions (cf.\
- \secref{sec:assumptions}), which will be available as local facts
- during the proof and discharged into implications in the result.
- Type and term variables are generalized as usual, according to the
- context.
+ \medskip The \isa{SUBPROOF} combinator allows a tactical proof to
+ be structured recursively by decomposing a selected sub-goal:
+ \isa{{\isacharparenleft}{\isasymAnd}x{\isachardot}\ A{\isacharparenleft}x{\isacharparenright}\ {\isasymLongrightarrow}\ B{\isacharparenleft}x{\isacharparenright}{\isacharparenright}\ {\isasymLongrightarrow}\ {\isasymdots}} is turned into \isa{B{\isacharparenleft}x{\isacharparenright}\ {\isasymLongrightarrow}\ {\isasymdots}}
+ after fixing \isa{x} and assuming \isa{A{\isacharparenleft}x{\isacharparenright}}. This means
+ the tactic needs to solve the conclusion, but may use the premise as
+ a local fact, for locally fixed variables.
- The \isa{prove{\isacharunderscore}multi} operation handles several simultaneous
- claims within a single goal, by using Pure conjunction internally.
+ The \isa{prove} operation provides an interface for structured
+ backwards reasoning under program control, with some explicit sanity
+ checks of the result. The goal context can be augmented by
+ additional fixed variables (cf.\ \secref{sec:variables}) and
+ assumptions (cf.\ \secref{sec:assumptions}), which will be available
+ as local facts during the proof and discharged into implications in
+ the result. Type and term variables are generalized as usual,
+ according to the context.
- \medskip Tactical proofs may be structured recursively by
- decomposing a sub-goal: \isa{{\isacharparenleft}{\isasymAnd}x{\isachardot}\ A{\isacharparenleft}x{\isacharparenright}\ {\isasymLongrightarrow}\ B{\isacharparenleft}x{\isacharparenright}{\isacharparenright}\ {\isasymLongrightarrow}\ {\isasymdots}} is turned
- into \isa{B{\isacharparenleft}x{\isacharparenright}\ {\isasymLongrightarrow}\ {\isasymdots}} after fixing \isa{x} and assuming \isa{A{\isacharparenleft}x{\isacharparenright}}.%
+ The \isa{obtain} operation produces results by eliminating
+ existing facts by means of a given tactic. This acts like a dual
+ conclusion: the proof demonstrates that the context may be augmented
+ by certain fixed variables and assumptions. See also
+ \cite{isabelle-isar-ref} for the user-level \isa{{\isasymOBTAIN}} and
+ \isa{{\isasymGUESS}} elements. Final results, which may not refer to
+ the parameters in the conclusion, need to be exported explicitly into
+ the original context.%
\end{isamarkuptext}%
\isamarkuptrue%
%
@@ -318,31 +326,39 @@
%
\begin{isamarkuptext}%
\begin{mldecls}
+ \indexml{SUBPROOF}\verb|SUBPROOF: ({context: Proof.context, schematics: ctyp list * cterm list,|\isasep\isanewline%
+\verb| params: cterm list, asms: cterm list, concl: cterm,|\isasep\isanewline%
+\verb| prems: thm list} -> tactic) -> Proof.context -> int -> tactic| \\
\indexml{Goal.prove}\verb|Goal.prove: Proof.context -> string list -> term list -> term ->|\isasep\isanewline%
\verb| ({prems: thm list, context: Proof.context} -> tactic) -> thm| \\
\indexml{Goal.prove-multi}\verb|Goal.prove_multi: Proof.context -> string list -> term list -> term list ->|\isasep\isanewline%
\verb| ({prems: thm list, context: Proof.context} -> tactic) -> thm list| \\
- \indexml{SUBPROOF}\verb|SUBPROOF: ({context: Proof.context, schematics: ctyp list * cterm list,|\isasep\isanewline%
-\verb| params: cterm list, asms: cterm list, concl: cterm,|\isasep\isanewline%
-\verb| prems: thm list} -> tactic) -> Proof.context -> int -> tactic|
+ \indexml{Obtain.result}\verb|Obtain.result: (Proof.context -> tactic) ->|\isasep\isanewline%
+\verb| thm list -> Proof.context -> (cterm list * thm list) * Proof.context|
\end{mldecls}
\begin{description}
+ \item \verb|SUBPROOF|~\isa{tac} decomposes the structure of a
+ particular sub-goal, producing an extended context and a reduced
+ goal, which needs to be solved by the given tactic. All schematic
+ parameters of the goal are imported into the context as fixed ones,
+ which may not be instantiated in the sub-proof.
+
\item \verb|Goal.prove|~\isa{ctxt\ xs\ As\ C\ tac} states goal \isa{C} in the context augmented by fixed variables \isa{xs} and
assumptions \isa{As}, and applies tactic \isa{tac} to solve
it. The latter may depend on the local assumptions being presented
as facts. The result is in HHF normal form.
\item \verb|Goal.prove_multi| is simular to \verb|Goal.prove|, but
- states several conclusions simultaneously. \verb|Tactic.conjunction_tac| may be used to split these into individual
- subgoals before commencing the actual proof.
+ states several conclusions simultaneously. The goal is encoded by
+ means of Pure conjunction; \verb|Tactic.conjunction_tac| will turn
+ this into a collection of individual subgoals.
- \item \verb|SUBPROOF|~\isa{tac} decomposes the structure of a
- particular sub-goal, producing an extended context and a reduced
- goal, which needs to be solved by the given tactic. All schematic
- parameters of the goal are imported into the context as fixed ones,
- which may not be instantiated in the sub-proof.
+ \item \verb|Obtain.result|~\isa{tac\ thms\ ctxt} eliminates the
+ given facts using a tactic, which results in additional fixed
+ variables and assumptions in the context. Final results need to be
+ exported explicitly.
\end{description}%
\end{isamarkuptext}%
--- a/doc-src/IsarImplementation/Thy/integration.thy Thu Sep 07 15:16:51 2006 +0200
+++ b/doc-src/IsarImplementation/Thy/integration.thy Thu Sep 07 20:12:08 2006 +0200
@@ -432,9 +432,4 @@
\end{description}
*}
-
-section {* Sessions and document preparation *}
-
-text FIXME
-
end
--- a/doc-src/IsarImplementation/Thy/logic.thy Thu Sep 07 15:16:51 2006 +0200
+++ b/doc-src/IsarImplementation/Thy/logic.thy Thu Sep 07 20:12:08 2006 +0200
@@ -11,7 +11,8 @@
\cite{paulson700}. This is essentially the same logic as ``@{text
"\<lambda>HOL"}'' in the more abstract framework of Pure Type Systems (PTS)
\cite{Barendregt-Geuvers:2001}, although there are some key
- differences in the practical treatment of simple types.
+ differences in the specific treatment of simple types in
+ Isabelle/Pure.
Following type-theoretic parlance, the Pure logic consists of three
levels of @{text "\<lambda>"}-calculus with corresponding arrows: @{text
@@ -20,8 +21,11 @@
@{text "\<Longrightarrow>"} for implication (proofs depending on proofs).
Pure derivations are relative to a logical theory, which declares
- type constructors, term constants, and axioms. Term constants and
- axioms support schematic polymorphism.
+ type constructors, term constants, and axioms. Theory declarations
+ support schematic polymorphism, which is strictly speaking outside
+ the logic.\footnote{Incidentally, this is the main logical reason why
+ the theory context @{text "\<Theta>"} is separate from the context @{text
+ "\<Gamma>"} of the core calculus.}
*}
@@ -34,46 +38,48 @@
\medskip A \emph{type class} is an abstract syntactic entity
declared in the theory context. The \emph{subclass relation} @{text
"c\<^isub>1 \<subseteq> c\<^isub>2"} is specified by stating an acyclic
- generating relation; the transitive closure maintained internally.
+ generating relation; the transitive closure is maintained
+ internally. The resulting relation is an ordering: reflexive,
+ transitive, and antisymmetric.
A \emph{sort} is a list of type classes written as @{text
"{c\<^isub>1, \<dots>, c\<^isub>m}"}, which represents symbolic
intersection. Notationally, the curly braces are omitted for
singleton intersections, i.e.\ any class @{text "c"} may be read as
a sort @{text "{c}"}. The ordering on type classes is extended to
- sorts in the canonical fashion: @{text "{c\<^isub>1, \<dots> c\<^isub>m} \<subseteq>
- {d\<^isub>1, \<dots>, d\<^isub>n}"} iff @{text "\<forall>j. \<exists>i. c\<^isub>i \<subseteq>
- d\<^isub>j"}. The empty intersection @{text "{}"} refers to the
- universal sort, which is the largest element wrt.\ the sort order.
- The intersections of all (i.e.\ finitely many) classes declared in
- the current theory are the minimal elements wrt.\ sort order.
+ sorts according to the meaning of intersections: @{text
+ "{c\<^isub>1, \<dots> c\<^isub>m} \<subseteq> {d\<^isub>1, \<dots>, d\<^isub>n}"} iff
+ @{text "\<forall>j. \<exists>i. c\<^isub>i \<subseteq> d\<^isub>j"}. The empty intersection
+ @{text "{}"} refers to the universal sort, which is the largest
+ element wrt.\ the sort order. The intersections of all (finitely
+ many) classes declared in the current theory are the minimal
+ elements wrt.\ the sort order.
- \medskip A \emph{fixed type variable} is pair of a basic name
+ \medskip A \emph{fixed type variable} is a pair of a basic name
(starting with @{text "'"} character) and a sort constraint. For
example, @{text "('a, s)"} which is usually printed as @{text
"\<alpha>\<^isub>s"}. A \emph{schematic type variable} is a pair of an
indexname and a sort constraint. For example, @{text "(('a, 0),
- s)"} which is usually printed @{text "?\<alpha>\<^isub>s"}.
+ s)"} which is usually printed as @{text "?\<alpha>\<^isub>s"}.
Note that \emph{all} syntactic components contribute to the identity
- of a type variables, including the literal sort constraint. The
- core logic handles type variables with the same name but different
- sorts as different, even though the outer layers of the system make
- it hard to produce anything like this.
+ of type variables, including the literal sort constraint. The core
+ logic handles type variables with the same name but different sorts
+ as different, although some outer layers of the system make it hard
+ to produce anything like this.
- A \emph{type constructor} is an @{text "k"}-ary type operator
- declared in the theory.
+ A \emph{type constructor} is a @{text "k"}-ary operator on types
+ declared in the theory. Type constructor application is usually
+ written postfix. For @{text "k = 0"} the argument tuple is omitted,
+ e.g.\ @{text "prop"} instead of @{text "()prop"}. For @{text "k =
+ 1"} the parentheses are omitted, e.g.\ @{text "\<alpha> list"} instead of
+ @{text "(\<alpha>)list"}. Further notation is provided for specific
+ constructors, notably right-associative infix @{text "\<alpha> \<Rightarrow> \<beta>"}
+ instead of the @{text "(\<alpha>, \<beta>)fun"} constructor.
A \emph{type} is defined inductively over type variables and type
- constructors: @{text "\<tau> = \<alpha>\<^isub>s | ?\<alpha>\<^isub>s | (\<tau>\<^sub>1, \<dots>,
- \<tau>\<^sub>k)c"}. Type constructor application is usually written
- postfix. For @{text "k = 0"} the argument tuple is omitted, e.g.\
- @{text "prop"} instead of @{text "()prop"}. For @{text "k = 1"} the
- parentheses are omitted, e.g.\ @{text "\<tau> list"} instead of @{text
- "(\<tau>) list"}. Further notation is provided for specific
- constructors, notably right-associative infix @{text "\<tau>\<^isub>1 \<Rightarrow>
- \<tau>\<^isub>2"} instead of @{text "(\<tau>\<^isub>1, \<tau>\<^isub>2)fun"}
- constructor.
+ constructors as follows: @{text "\<tau> = \<alpha>\<^isub>s | ?\<alpha>\<^isub>s |
+ (\<tau>\<^sub>1, \<dots>, \<tau>\<^sub>k)c"}.
A \emph{type abbreviation} is a syntactic abbreviation of an
arbitrary type expression of the theory. Type abbreviations looks
@@ -82,26 +88,30 @@
A \emph{type arity} declares the image behavior of a type
constructor wrt.\ the algebra of sorts: @{text "c :: (s\<^isub>1, \<dots>,
- s\<^isub>n)s"} means that @{text "(\<tau>\<^isub>1, \<dots>, \<tau>\<^isub>k)c"} is
+ s\<^isub>k)s"} means that @{text "(\<tau>\<^isub>1, \<dots>, \<tau>\<^isub>k)c"} is
of sort @{text "s"} if each argument type @{text "\<tau>\<^isub>i"} is of
- sort @{text "s\<^isub>i"}. The sort algebra is always maintained as
- \emph{coregular}, which means that type arities are consistent with
- the subclass relation: for each type constructor @{text "c"} and
- classes @{text "c\<^isub>1 \<subseteq> c\<^isub>2"}, any arity @{text "c ::
+ sort @{text "s\<^isub>i"}. Arity declarations are implicitly
+ completed, i.e.\ @{text "c :: (\<^vec>s)c"} entails @{text "c ::
+ (\<^vec>s)c'"} for any @{text "c' \<supseteq> c"}.
+
+ \medskip The sort algebra is always maintained as \emph{coregular},
+ which means that type arities are consistent with the subclass
+ relation: for each type constructor @{text "c"} and classes @{text
+ "c\<^isub>1 \<subseteq> c\<^isub>2"}, any arity @{text "c ::
(\<^vec>s\<^isub>1)c\<^isub>1"} has a corresponding arity @{text "c
:: (\<^vec>s\<^isub>2)c\<^isub>2"} where @{text "\<^vec>s\<^isub>1 \<subseteq>
\<^vec>s\<^isub>2"} holds pointwise for all argument sorts.
- The key property of the order-sorted algebra of types is that sort
+ The key property of a coregular order-sorted algebra is that sort
constraints may be always fulfilled in a most general fashion: for
each type constructor @{text "c"} and sort @{text "s"} there is a
most general vector of argument sorts @{text "(s\<^isub>1, \<dots>,
- s\<^isub>k)"} such that @{text "(\<tau>\<^bsub>s\<^isub>1\<^esub>, \<dots>,
- \<tau>\<^bsub>s\<^isub>k\<^esub>)"} for arbitrary @{text "\<tau>\<^isub>i"} of
- sort @{text "s\<^isub>i"}. This means the unification problem on
- the algebra of types has most general solutions (modulo renaming and
- equivalence of sorts). As a consequence, type-inference is able to
- produce primary types.
+ s\<^isub>k)"} such that a type scheme @{text
+ "(\<alpha>\<^bsub>s\<^isub>1\<^esub>, \<dots>, \<alpha>\<^bsub>s\<^isub>k\<^esub>)c"} is
+ of sort @{text "s"}. Consequently, the unification problem on the
+ algebra of types has most general solutions (modulo renaming and
+ equivalence of sorts). Moreover, the usual type-inference algorithm
+ will produce primary types as expected \cite{nipkow-prehofer}.
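+
+ As an illustration, the sort order and sort membership can be tested
+ in ML via the @{text "Sign"} operations documented below. The
+ following is a hedged sketch only: the classes @{text "c1"} and
+ @{text "c2"} (with @{text "c1 \<subseteq> c2"} declared) and the theory value
+ @{text "thy"} are hypothetical and not part of the logic itself.
+
+ \begin{verbatim}
+ Sign.subsort thy (["c1"], ["c2"]);  (* true: {c1} <= {c2} *)
+ Sign.subsort thy (["c1"], []);      (* true: {} is the universal sort *)
+ Sign.of_sort thy (propT, []);       (* true: any type is of sort {} *)
+ \end{verbatim}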
*}
text %mlref {*
@@ -139,19 +149,19 @@
tests the subsort relation @{text "s\<^isub>1 \<subseteq> s\<^isub>2"}.
\item @{ML Sign.of_sort}~@{text "thy (\<tau>, s)"} tests whether a type
- expression of of a given sort.
+ is of a given sort.
\item @{ML Sign.add_types}~@{text "[(c, k, mx), \<dots>]"} declares new
- type constructors @{text "c"} with @{text "k"} arguments, and
+ type constructors @{text "c"} with @{text "k"} arguments and
optional mixfix syntax.
\item @{ML Sign.add_tyabbrs_i}~@{text "[(c, \<^vec>\<alpha>, \<tau>, mx), \<dots>]"}
- defines type abbreviation @{text "(\<^vec>\<alpha>)c"} (with optional
- mixfix syntax) as @{text "\<tau>"}.
+ defines a new type abbreviation @{text "(\<^vec>\<alpha>)c = \<tau>"} with
+ optional mixfix syntax.
\item @{ML Sign.primitive_class}~@{text "(c, [c\<^isub>1, \<dots>,
c\<^isub>n])"} declares new class @{text "c"} derived together with
- class relations @{text "c \<subseteq> c\<^isub>i"}.
+ class relations @{text "c \<subseteq> c\<^isub>i"}, for @{text "i = 1, \<dots>, n"}.
\item @{ML Sign.primitive_classrel}~@{text "(c\<^isub>1,
c\<^isub>2)"} declares class relation @{text "c\<^isub>1 \<subseteq>
@@ -170,6 +180,13 @@
text {*
\glossary{Term}{FIXME}
+ The language of terms is that of simply-typed @{text "\<lambda>"}-calculus
+ with de-Bruijn indices for bound variables, and named free
+ variables and constants. Terms with loose bound variables are
+ usually considered malformed. The types of variables and constants
+ are stored explicitly at each occurrence in the term (which is a
+ known performance issue).
+
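+ The underlying ML datatype is roughly as follows (cf.\ the
+ implementation in \texttt{Pure/term.ML}; shown here merely to
+ illustrate the representation):
+
+ \begin{verbatim}
+ datatype term =
+     Const of string * typ        (*named constant, with type*)
+   | Free of string * typ         (*fixed variable*)
+   | Var of indexname * typ       (*schematic variable*)
+   | Bound of int                 (*bound variable, de-Bruijn index*)
+   | Abs of string * typ * term   (*abstraction, binding a Bound*)
+   | op $ of term * term          (*application*)
+ \end{verbatim}
+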
FIXME de-Bruijn representation of lambda terms
Term syntax provides explicit abstraction @{text "\<lambda>x :: \<alpha>. b(x)"}
@@ -193,13 +210,6 @@
*}
-section {* Proof terms *}
-
-text {*
- FIXME
-*}
-
-
section {* Theorems \label{sec:thms} *}
text {*
@@ -258,17 +268,54 @@
*}
-subsection {* Primitive inferences *}
+
+section {* Proof terms *}
-text FIXME
+text {*
+ FIXME !?
+*}
-subsection {* Higher-order resolution *}
+section {* Rules \label{sec:rules} *}
text {*
FIXME
+ A \emph{rule} is any Pure theorem in HHF normal form; there is a
+ separate calculus for rule composition, which is modeled after
+ Gentzen's Natural Deduction \cite{Gentzen:1935}, but allows
+ rules to be nested arbitrarily, similar to \cite{extensions91}.
+
+ Normally, all theorems accessible to the user are proper rules.
+ Low-level inferences are occasionally required internally, but the
+ result should always be presented in canonical form. The higher
+ interfaces of Isabelle/Isar will always produce proper rules. It is
+ important to maintain this invariant in add-on applications!
+
+ There are two main principles of rule composition: @{text
+ "resolution"} (i.e.\ backchaining of rules) and @{text
+ "by-assumption"} (i.e.\ closing a branch); both principles are
+ combined in the variants of @{text "elim-resolution"} and @{text
+ "dest-resolution"}. Raw @{text "composition"} is occasionally
+ useful as well, although it is strictly speaking outside the proper
+ rule calculus.
+
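+ For example, resolution composes a rule @{text "A \<Longrightarrow> B"} with a rule
+ @{text "B \<Longrightarrow> C"} by back-chaining on the premise @{text "B"},
+ yielding the rule @{text "A \<Longrightarrow> C"}.
+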
+ Rules are treated modulo general higher-order unification, which is
+ unification modulo the equational theory of @{text "\<alpha>\<beta>\<eta>"}-conversion
+ on @{text "\<lambda>"}-terms. Moreover, propositions are understood modulo
+ the (derived) equivalence @{text "(A \<Longrightarrow> (\<And>x. B x)) \<equiv> (\<And>x. A \<Longrightarrow> B x)"}.
+
+ This means that any operations within the rule calculus may be
+ subject to spontaneous @{text "\<alpha>\<beta>\<eta>"}-HHF conversions. It is common
+ practice not to contract or expand unnecessarily. Some mechanisms
+ prefer one form, others the opposite, so there is a potential
+ danger of oscillation!
+
+ Only a few operations really work \emph{modulo} HHF conversion; most
+ expect a normal form: quantifiers @{text "\<And>"} before implications
+ @{text "\<Longrightarrow>"} at each level of nesting.
+
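+ For example, iterated application of this equivalence turns @{text
+ "A \<Longrightarrow> (\<And>x. B x \<Longrightarrow> (\<And>y. C x y))"} into the HHF normal form @{text
+ "\<And>x y. A \<Longrightarrow> B x \<Longrightarrow> C x y"}, with the quantifiers moved in front of
+ the implications at each level of nesting.
+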
\glossary{Hereditary Harrop Formula}{The set of propositions in HHF
format is defined inductively as @{text "H = (\<And>x\<^sup>*. H\<^sup>* \<Longrightarrow>
A)"}, for variables @{text "x"} and atomic propositions @{text "A"}.
@@ -282,9 +329,4 @@
*}
-subsection {* Equational reasoning *}
-
-text FIXME
-
-
end
--- a/doc-src/IsarImplementation/Thy/proof.thy Thu Sep 07 15:16:51 2006 +0200
+++ b/doc-src/IsarImplementation/Thy/proof.thy Thu Sep 07 20:12:08 2006 +0200
@@ -196,7 +196,7 @@
definition works by fixing @{text "x"} and assuming @{text "x \<equiv> t"},
with the following export rule to reverse the effect:
\[
- \infer{@{text "\<Gamma> \\ x \<equiv> t \<turnstile> B t"}}{@{text "\<Gamma> \<turnstile> B x"}}
+ \infer[(@{text "\<equiv>-expand"})]{@{text "\<Gamma> \\ x \<equiv> t \<turnstile> B t"}}{@{text "\<Gamma> \<turnstile> B x"}}
\]
This works, because the assumption @{text "x \<equiv> t"} was introduced in
a context with @{text "x"} being fresh, so @{text "x"} does not
@@ -258,39 +258,54 @@
references to free variables or assumptions not present in the proof
context.
- \medskip The @{text "prove"} operation provides an interface for
- structured backwards reasoning under program control, with some
- explicit sanity checks of the result. The goal context can be
- augmented by additional fixed variables (cf.\
- \secref{sec:variables}) and assumptions (cf.\
- \secref{sec:assumptions}), which will be available as local facts
- during the proof and discharged into implications in the result.
- Type and term variables are generalized as usual, according to the
- context.
+ \medskip The @{text "SUBPROOF"} combinator allows a tactical proof
+ to be structured recursively by decomposing a selected sub-goal:
+ @{text "(\<And>x. A(x) \<Longrightarrow> B(x)) \<Longrightarrow> \<dots>"} is turned into @{text "B(x) \<Longrightarrow> \<dots>"}
+ after fixing @{text "x"} and assuming @{text "A(x)"}. This means
+ the tactic needs to solve the conclusion, but may use the premise as
+ a local fact, for locally fixed variables.
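+
+ As a hedged sketch (assuming an ambient proof context @{text "ctxt"}
+ from elsewhere; not a definitive recipe), a tactic that solves a
+ sub-goal of the form @{text "\<And>x. A x \<Longrightarrow> A x"} by resolving with the
+ local premise could be written as follows:
+
+ \begin{verbatim}
+ val tac = SUBPROOF (fn {prems, ...} => resolve_tac prems 1) ctxt;
+ \end{verbatim}
+
+ Applying @{text "tac 1"} then operates on the first sub-goal as
+ usual.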
- The @{text "prove_multi"} operation handles several simultaneous
- claims within a single goal, by using Pure conjunction internally.
+ The @{text "prove"} operation provides an interface for structured
+ backwards reasoning under program control, with some explicit sanity
+ checks of the result. The goal context can be augmented by
+ additional fixed variables (cf.\ \secref{sec:variables}) and
+ assumptions (cf.\ \secref{sec:assumptions}), which will be available
+ as local facts during the proof and discharged into implications in
+ the result. Type and term variables are generalized as usual,
+ according to the context.
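+
+ For illustration, here is a hedged sketch of proving the trivial
+ goal @{text "x \<equiv> x"} under program control; the proof context
+ @{text "ctxt"} is assumed to come from elsewhere, and the exact
+ auxiliary functions may vary between versions:
+
+ \begin{verbatim}
+ val x = Free ("x", TFree ("'a", []));
+ val goal = Logic.mk_equals (x, x);
+ val th = Goal.prove ctxt ["x"] [] goal
+   (fn _ => rtac Drule.reflexive_thm 1);
+ \end{verbatim}
+
+ Since @{text "x"} was fixed in the goal context, it is generalized
+ again in the result, so @{text "th"} should be the schematic rule
+ @{text "?x \<equiv> ?x"} in HHF normal form.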
- \medskip Tactical proofs may be structured recursively by
- decomposing a sub-goal: @{text "(\<And>x. A(x) \<Longrightarrow> B(x)) \<Longrightarrow> \<dots>"} is turned
- into @{text "B(x) \<Longrightarrow> \<dots>"} after fixing @{text "x"} and assuming @{text
- "A(x)"}.
+ The @{text "obtain"} operation produces results by eliminating
+ existing facts by means of a given tactic. This acts like a dual
+ conclusion: the proof demonstrates that the context may be augmented
+ by certain fixed variables and assumptions. See also
+ \cite{isabelle-isar-ref} for the user-level @{text "\<OBTAIN>"} and
+ @{text "\<GUESS>"} elements. Final results, which may not refer to
+ the parameters in the conclusion, need to be exported explicitly into
+ the original context.
*}
text %mlref {*
\begin{mldecls}
+ @{index_ML SUBPROOF:
+ "({context: Proof.context, schematics: ctyp list * cterm list,
+ params: cterm list, asms: cterm list, concl: cterm,
+ prems: thm list} -> tactic) -> Proof.context -> int -> tactic"} \\
@{index_ML Goal.prove: "Proof.context -> string list -> term list -> term ->
({prems: thm list, context: Proof.context} -> tactic) -> thm"} \\
@{index_ML Goal.prove_multi: "Proof.context -> string list -> term list -> term list ->
({prems: thm list, context: Proof.context} -> tactic) -> thm list"} \\
- @{index_ML SUBPROOF:
- "({context: Proof.context, schematics: ctyp list * cterm list,
- params: cterm list, asms: cterm list, concl: cterm,
- prems: thm list} -> tactic) -> Proof.context -> int -> tactic"}
+ @{index_ML Obtain.result: "(Proof.context -> tactic) ->
+ thm list -> Proof.context -> (cterm list * thm list) * Proof.context"}
\end{mldecls}
\begin{description}
+ \item @{ML SUBPROOF}~@{text "tac"} decomposes the structure of a
+ particular sub-goal, producing an extended context and a reduced
+ goal, which needs to be solved by the given tactic. All schematic
+ parameters of the goal are imported into the context as fixed ones,
+ which may not be instantiated in the sub-proof.
+
\item @{ML Goal.prove}~@{text "ctxt xs As C tac"} states goal @{text
"C"} in the context augmented by fixed variables @{text "xs"} and
assumptions @{text "As"}, and applies tactic @{text "tac"} to solve
@@ -298,15 +313,14 @@
as facts. The result is in HHF normal form.
\item @{ML Goal.prove_multi} is simular to @{ML Goal.prove}, but
- states several conclusions simultaneously. @{ML
- Tactic.conjunction_tac} may be used to split these into individual
- subgoals before commencing the actual proof.
+ states several conclusions simultaneously. The goal is encoded by
+ means of Pure conjunction; @{ML Tactic.conjunction_tac} will turn
+ this into a collection of individual subgoals.
- \item @{ML SUBPROOF}~@{text "tac"} decomposes the structure of a
- particular sub-goal, producing an extended context and a reduced
- goal, which needs to be solved by the given tactic. All schematic
- parameters of the goal are imported into the context as fixed ones,
- which may not be instantiated in the sub-proof.
+ \item @{ML Obtain.result}~@{text "tac thms ctxt"} eliminates the
+ given facts using a tactic, which results in additional fixed
+ variables and assumptions in the context. Final results need to be
+ exported explicitly.
\end{description}
*}
--- a/doc-src/IsarImplementation/Thy/unused.thy Thu Sep 07 15:16:51 2006 +0200
+++ b/doc-src/IsarImplementation/Thy/unused.thy Thu Sep 07 20:12:08 2006 +0200
@@ -1,3 +1,6 @@
+
+section {* Sessions and document preparation *}
+
section {* Structured output *}
subsection {* Pretty printing *}
--- a/doc-src/IsarImplementation/style.sty Thu Sep 07 15:16:51 2006 +0200
+++ b/doc-src/IsarImplementation/style.sty Thu Sep 07 20:12:08 2006 +0200
@@ -49,6 +49,8 @@
\renewcommand{\isatagmlref}{\subsection*{\makebox[0pt][r]{\fbox{\ML}~~}Reference}\begingroup\def\isastyletext{\rm}\small}
\renewcommand{\endisatagmlref}{\endgroup}
+\newcommand{\isasymGUESS}{\isakeyword{guess}}
+\newcommand{\isasymOBTAIN}{\isakeyword{obtain}}
\newcommand{\isasymTHEORY}{\isakeyword{theory}}
\newcommand{\isasymIMPORTS}{\isakeyword{imports}}
\newcommand{\isasymUSES}{\isakeyword{uses}}