# HG changeset patch
# User wenzelm
# Date 1235074789 -3600
# Node ID 64a50ff3f3087c6abdd7faf3395bffe90fdaf7c4
# Parent 07bf5dd48c9fd8ccb8e100ae2d90e244e3ccc93e
more on object-level rules;
tuned;

diff -r 07bf5dd48c9f -r 64a50ff3f308 doc-src/IsarImplementation/Thy/Logic.thy
--- a/doc-src/IsarImplementation/Thy/Logic.thy	Wed Feb 18 22:44:59 2009 +0100
+++ b/doc-src/IsarImplementation/Thy/Logic.thy	Thu Feb 19 21:19:49 2009 +0100
@@ -513,7 +513,7 @@
   fly.  Logically, this is an instance of the @{text "axiom"} rule
   (\figref{fig:prim-rules}), but there is an operational difference.
   The system always records oracle invocations within derivations of
-  theorems.  Tracing plain axioms (and named theorems) is optional.
+  theorems by a unique tag.
 
   Axiomatizations should be limited to the bare minimum, typically as
   part of the initial logical basis of an object-logic formalization.
@@ -573,10 +573,10 @@
   well-typedness) checks, relative to the declarations of type
   constructors, constants etc. in the theory.
 
-  \item @{ML ctyp_of}~@{text "thy \<tau>"} and @{ML cterm_of}~@{text "thy
-  t"} explicitly checks types and terms, respectively.  This also
-  involves some basic normalizations, such expansion of type and term
-  abbreviations from the theory context.
+  \item @{ML Thm.ctyp_of}~@{text "thy \<tau>"} and @{ML
+  Thm.cterm_of}~@{text "thy t"} explicitly check types and terms,
+  respectively.  This also involves some basic normalizations, such as
+  expansion of type and term abbreviations from the theory context.
 
   Re-certification is relatively slow and should be avoided in tight
   reasoning loops.  There are separate operations to decompose
@@ -589,9 +589,10 @@
   enclosing theory, cf.\ \secref{sec:context-theory}.
 
   \item @{ML proofs} determines the detail of proof recording within
-  @{ML_type thm} values: @{ML 0} records only oracles, @{ML 1} records
-  oracles, axioms and named theorems, @{ML 2} records full proof
-  terms.
+  @{ML_type thm} values: @{ML 0} records only the names of oracles,
+  @{ML 1} records oracle names and propositions, @{ML 2} additionally
+  records full proof terms.  Officially named theorems that contribute
+  to a result are always recorded.
 
   \item @{ML Thm.assume}, @{ML Thm.forall_intr}, @{ML
   Thm.forall_elim}, @{ML Thm.implies_intr}, and @{ML Thm.implies_elim}
@@ -725,33 +726,71 @@
 section {* Object-level rules \label{sec:obj-rules} *}
 
-text %FIXME {*
-
-FIXME
-
-  A \emph{rule} is any Pure theorem in HHF normal form; there is a
-  separate calculus for rule composition, which is modeled after
-  Gentzen's Natural Deduction \cite{Gentzen:1935}, but allows
-  rules to be nested arbitrarily, similar to \cite{extensions91}.
+text {*
+  The primitive inferences covered so far mostly serve foundational
+  purposes.  User-level reasoning usually works via object-level rules
+  that are represented as theorems of Pure.  Composition of rules
+  works via \emph{higher-order backchaining}, which involves
+  unification modulo @{text "\<alpha>\<beta>\<eta>"}-conversion of @{text "\<lambda>"}-terms,
+  and so-called \emph{lifting} of rules into a context of @{text "\<And>"}
+  and @{text "\<Longrightarrow>"} connectives.  Thus the full power of higher-order
+  natural deduction in Isabelle/Pure becomes readily available.
 
-  Normally, all theorems accessible to the user are proper rules.
-  Low-level inferences are occasional required internally, but the
-  result should be always presented in canonical form.  The higher
-  interfaces of Isabelle/Isar will always produce proper rules.  It is
-  important to maintain this invariant in add-on applications!
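+  For example (anticipating the rule format explained below): in an
+  object-logic like Isabelle/HOL, the conjunction-introduction rule is
+  represented as the Pure theorem @{text "A \<Longrightarrow> B \<Longrightarrow> A \<and> B"};
+  backchaining it against a goal @{text "C \<and> D"} reduces that goal to
+  the two new subgoals @{text "C"} and @{text "D"}.
+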
+  The idea of object-level rules is to model Natural Deduction
+  inferences in the style of Gentzen \cite{Gentzen:1935}, but we allow
+  arbitrary nesting similar to \cite{extensions91}.  The most basic
+  rule format is that of a \emph{Horn Clause}:
+  \[
+  \infer{@{text "A"}}{@{text "A\<^sub>1"} & @{text "\<dots>"} & @{text "A\<^sub>n"}}
+  \]
+  where @{text "A, A\<^sub>1, \<dots>, A\<^sub>n"} are atomic propositions
+  of the framework, usually of the form @{text "Trueprop B"}, where
+  @{text "B"} is a (compound) object-level statement.  This
+  object-level inference corresponds to an iterated implication in
+  Pure like this:
+  \[
+  @{text "A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> A"}
+  \]
+  Any parameters occurring in the rule statement are conceptually
+  treated as arbitrary:
+  \[
+  @{text "\<And>x\<^sub>1 \<dots> x\<^sub>m. A\<^sub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^sub>n \<Longrightarrow> A"}
+  \]
+  This outermost quantifier prefix is represented in Isabelle via
+  schematic variables @{text "?x\<^sub>1, \<dots>, ?x\<^sub>m"} occurring in
+  the statement body.  Note that this representation lacks information
+  about the order of parameters, and vacuous quantifiers disappear
+  automatically.
 
-  There are two main principles of rule composition: @{text
-  "resolution"} (i.e.\ backchaining of rules) and @{text
-  "by-assumption"} (i.e.\ closing a branch); both principles are
-  combined in the variants of @{text "elim-resolution"} and @{text
-  "dest-resolution"}.  Raw @{text "composition"} is occasionally
+  \medskip Nesting of rules means that the positions of @{text
+  "A\<^sub>i"} may recursively hold rules of the same shape, not just
+  atomic propositions.  This format is called \emph{Hereditary Harrop
+  Form} in the literature \cite{Miller:1991} and may be characterized
+  inductively as follows:
+
+  \medskip
+  \begin{tabular}{ll}
+  @{text "\<^bold>x"} & set of variables \\
+  @{text "\<^bold>A"} & set of atomic propositions \\
+  @{text "\<^bold>H = \<And>\<^bold>x\<^sup>*. \<^bold>H\<^sup>* \<Longrightarrow> \<^bold>A"} & set of Hereditary Harrop Formulas \\
+  \end{tabular}
+  \medskip
+
+  \noindent Due to @{prop "(PROP A \<Longrightarrow> (\<And>x. PROP B x)) \<equiv> (\<And>x. PROP A \<Longrightarrow>
+  PROP B x)"}, any Pure proposition can be seen as a rule in HHF
+  format.  Some operations in Isabelle already work modulo that
+  equivalence; in other places results are explicitly put into
+  canonical form (e.g.\ when exporting results, notably at the end of
+  a proof).  Regular user-level results should always be in canonical
+  HHF format.
+
+  There are two main principles of rule composition: @{inference
+  resolution} (i.e.\ backchaining of rules) and @{inference
+  assumption} (i.e.\ closing a branch); both principles are combined
+  in the variants of @{inference elim_resolution} and @{inference
+  dest_resolution}.  Raw @{inference composition} is occasionally
   useful as well, although it is strictly speaking outside of the
   proper rule calculus.
 
-  Rules are treated modulo general higher-order unification, which is
-  unification modulo the equational theory of @{text "\<alpha>\<beta>\<eta>"}-conversion
-  on @{text "\<lambda>"}-terms.  Moreover, propositions are understood modulo
-  the (derived) equivalence @{text "(A \<Longrightarrow> (\<And>x. B x)) \<equiv> (\<And>x. A \<Longrightarrow> B x)"}.
-
   This means that any operations within the rule calculus may be
   subject to spontaneous @{text "\<alpha>\<beta>\<eta>"}-HHF conversions.  It is common
@@ -759,27 +798,14 @@
   prefer one form, others the opposite, so there is a potential
   danger to produce some oscillation!
 
-  Only few operations really work \emph{modulo} HHF conversion, but
-  expect a normal form: quantifiers @{text "\<And>"} before implications
-  @{text "\<Longrightarrow>"} at each level of nesting.
-
-  The set of propositions in HHF format is defined inductively as
-  @{text "H = (\<And>x\<^sup>*. H\<^sup>* \<Longrightarrow> A)"}, for variables @{text "x"}
-  and atomic propositions @{text "A"}.  Any proposition may be put
-  into HHF form by normalizing with the rule @{text "(A \<Longrightarrow> (\<And>x. B x)) \<equiv>
-  (\<And>x. A \<Longrightarrow> B x)"}.  In Isabelle, the outermost quantifier prefix is
-  represented via schematic variables, such that the top-level
-  structure is merely that of a Horn Clause.
-
   \[
-  \infer[@{text "(assumption)"}]{@{text "C\<vartheta>"}}
+  \infer[(@{inference assumption})]{@{text "C\<vartheta>"}}
   {@{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> A \<^vec>x) \<Longrightarrow> C"} & @{text "A\<vartheta> = H\<^sub>i\<vartheta>"}~~\text{(for some~@{text i})}}
   \]
 
   \[
-  \infer[@{text "(compose)"}]{@{text "\<^vec>A\<vartheta> \<Longrightarrow> C\<vartheta>"}}
+  \infer[(@{inference composition})]{@{text "\<^vec>A\<vartheta> \<Longrightarrow> C\<vartheta>"}}
   {@{text "\<^vec>A \<Longrightarrow> B"} & @{text "B' \<Longrightarrow> C"} & @{text "B\<vartheta> = B'\<vartheta>"}}
   \]
@@ -795,7 +821,7 @@
   @{text "\<And>_lift"}, and @{text compose}.
 
   \[
-  \infer[@{text "(resolution)"}]
+  \infer[(@{inference resolution})]
   {@{text "(\<And>\<^vec>x. \<^vec>H \<^vec>x \<Longrightarrow> \<^vec>A (?\<^vec>a \<^vec>x))\<vartheta> \<Longrightarrow> C\<vartheta>"}}
   {\begin{tabular}{l}
     @{text "\<^vec>A ?\<^vec>a \<Longrightarrow> B ?\<^vec>a"} \\
@@ -805,7 +831,27 @@
   \]
 
-  FIXME @{text "elim_resolution"}, @{text "dest_resolution"}
+  FIXME @{inference elim_resolution}, @{inference dest_resolution}
 
 *}
 
+text %mlref {*
+  \begin{mldecls}
+  @{index_ML "op RS": "thm * thm -> thm"} \\
+  @{index_ML "op OF": "thm * thm list -> thm"} \\
+  @{index_ML MetaSimplifier.norm_hhf: "thm -> thm"} \\
+  \end{mldecls}
+
+  \begin{description}
+
+  \item @{text "thm\<^sub>1 RS thm\<^sub>2"} resolves the conclusion of
+  @{text "thm\<^sub>1"} with the first premise of @{text "thm\<^sub>2"},
+  according to the @{inference resolution} principle explained above.
+
+  \item @{text "thm OF thms"} resolves the premises of @{text "thm"}
+  against the conclusions of the given facts @{text "thms"}, again
+  according to @{inference resolution}; in particular, @{text "thm OF
+  [thm\<^sub>1]"} coincides with @{text "thm\<^sub>1 RS thm"}.
+
+  \item @{ML MetaSimplifier.norm_hhf}~@{text "thm"} normalizes the
+  given theorem according to the canonical HHF format explained above.
+
+  \end{description}
+*}
+
+
 end
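
A minimal Isabelle/ML usage sketch for the operations documented above,
assuming an Isabelle/HOL context where the standard rules conjI
("A ==> B ==> A & B") and disjI1 ("A ==> A | B") are available:

  (* resolve the conclusion of conjI with the first premise of disjI1;
     yields "?P ==> ?Q ==> (?P & ?Q) | ?R" (up to renaming of
     schematic variables) *)
  val th1 = @{thm conjI} RS @{thm disjI1};

  (* OF addresses the premises of the first rule instead:
     th2 is the same theorem as th1 *)
  val th2 = @{thm disjI1} OF [@{thm conjI}];

  (* put a theorem back into canonical HHF format *)
  val th3 = MetaSimplifier.norm_hhf th1;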