diff -r 419116f1157a -r e23770bc97c8 doc-src/IsarImplementation/Thy/Logic.thy --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/doc-src/IsarImplementation/Thy/Logic.thy Thu Feb 26 08:48:33 2009 -0800 @@ -0,0 +1,909 @@ +theory Logic +imports Base +begin + +chapter {* Primitive logic \label{ch:logic} *} + +text {* + The logical foundations of Isabelle/Isar are those of the Pure logic, + which has been introduced as a Natural Deduction framework in + \cite{paulson700}. This is essentially the same logic as ``@{text + "\HOL"}'' in the more abstract setting of Pure Type Systems (PTS) + \cite{Barendregt-Geuvers:2001}, although there are some key + differences in the specific treatment of simple types in + Isabelle/Pure. + + Following type-theoretic parlance, the Pure logic consists of three + levels of @{text "\"}-calculus with corresponding arrows, @{text + "\"} for syntactic function space (terms depending on terms), @{text + "\"} for universal quantification (proofs depending on terms), and + @{text "\"} for implication (proofs depending on proofs). + + Derivations are relative to a logical theory, which declares type + constructors, constants, and axioms. Theory declarations support + schematic polymorphism, which is strictly speaking outside the + logic.\footnote{This is the deeper logical reason why the theory + context @{text "\"} is separate from the proof context @{text "\"} + of the core calculus.} +*} + + +section {* Types \label{sec:types} *} + +text {* + The language of types is an uninterpreted order-sorted first-order + algebra; types are qualified by ordered type classes. + + \medskip A \emph{type class} is an abstract syntactic entity + declared in the theory context. The \emph{subclass relation} @{text + "c\<^isub>1 \ c\<^isub>2"} is specified by stating an acyclic + generating relation; the transitive closure is maintained + internally. The resulting relation is an ordering: reflexive, + transitive, and antisymmetric. + + A \emph{sort} is a list of type classes written as @{text "s = + {c\<^isub>1, \, c\<^isub>m}"}, which represents symbolic + intersection. Notationally, the curly braces are omitted for + singleton intersections, i.e.\ any class @{text "c"} may be read as + a sort @{text "{c}"}. The ordering on type classes is extended to + sorts according to the meaning of intersections: @{text + "{c\<^isub>1, \, c\<^isub>m} \ {d\<^isub>1, \, d\<^isub>n}"} iff + @{text "\j. \i. c\<^isub>i \ d\<^isub>j"}. The empty intersection + @{text "{}"} refers to the universal sort, which is the largest + element wrt.\ the sort order. The intersections of all (finitely + many) classes declared in the current theory are the minimal + elements wrt.\ the sort order. + + \medskip A \emph{fixed type variable} is a pair of a basic name + (starting with a @{text "'"} character) and a sort constraint, e.g.\ + @{text "('a, s)"} which is usually printed as @{text "\\<^isub>s"}. + A \emph{schematic type variable} is a pair of an indexname and a + sort constraint, e.g.\ @{text "(('a, 0), s)"} which is usually + printed as @{text "?\\<^isub>s"}. + + Note that \emph{all} syntactic components contribute to the identity + of type variables, including the sort constraint. The core logic + handles type variables with the same name but different sorts as + different, although some outer layers of the system make it hard to + produce anything like this. + + A \emph{type constructor} @{text "\"} is a @{text "k"}-ary operator + on types declared in the theory.
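+ (In ML, such a declaration may be sketched as @{ML "Sign.add_types
+ [(\"foo\", 2, NoSyn)]"}, introducing a hypothetical binary type
+ constructor @{text "foo"} without concrete syntax; cf.\ the
+ reference material below.)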
Type constructor application is + written postfix as @{text "(\\<^isub>1, \, \\<^isub>k)\"}. For + @{text "k = 0"} the argument tuple is omitted, e.g.\ @{text "prop"} + instead of @{text "()prop"}. For @{text "k = 1"} the parentheses + are omitted, e.g.\ @{text "\ list"} instead of @{text "(\)list"}. + Further notation is provided for specific constructors, notably the + right-associative infix @{text "\ \ \"} instead of @{text "(\, + \)fun"}. + + A \emph{type} is defined inductively over type variables and type + constructors as follows: @{text "\ = \\<^isub>s | ?\\<^isub>s | + (\\<^sub>1, \, \\<^sub>k)\"}. + + A \emph{type abbreviation} is a syntactic definition @{text + "(\<^vec>\)\ = \"} of an arbitrary type expression @{text "\"} over + variables @{text "\<^vec>\"}. Type abbreviations appear as type + constructors in the syntax, but are expanded before entering the + logical core. + + A \emph{type arity} declares the image behavior of a type + constructor wrt.\ the algebra of sorts: @{text "\ :: (s\<^isub>1, \, + s\<^isub>k)s"} means that @{text "(\\<^isub>1, \, \\<^isub>k)\"} is + of sort @{text "s"} if every argument type @{text "\\<^isub>i"} is + of sort @{text "s\<^isub>i"}. Arity declarations are implicitly + completed, i.e.\ @{text "\ :: (\<^vec>s)c"} entails @{text "\ :: + (\<^vec>s)c'"} for any @{text "c' \ c"}. + + \medskip The sort algebra is always maintained as \emph{coregular}, + which means that type arities are consistent with the subclass + relation: for any type constructor @{text "\"}, classes @{text + "c\<^isub>1 \ c\<^isub>2"}, and arities @{text "\ :: + (\<^vec>s\<^isub>1)c\<^isub>1"} and @{text "\ :: + (\<^vec>s\<^isub>2)c\<^isub>2"}, we have @{text "\<^vec>s\<^isub>1 \ + \<^vec>s\<^isub>2"} component-wise. + + The key property of a coregular order-sorted algebra is that sort + constraints can be solved in a most general fashion: for each type + constructor @{text "\"} and sort @{text "s"} there is a most general + vector of argument sorts @{text "(s\<^isub>1, \, s\<^isub>k)"} such + that a type scheme @{text "(\\<^bsub>s\<^isub>1\<^esub>, \, + \\<^bsub>s\<^isub>k\<^esub>)\"} is of sort @{text "s"}. + Consequently, type unification has most general solutions (modulo + equivalence of sorts), so type-inference produces primary types as + expected \cite{nipkow-prehofer}. +*} + +text %mlref {* + \begin{mldecls} + @{index_ML_type class} \\ + @{index_ML_type sort} \\ + @{index_ML_type arity} \\ + @{index_ML_type typ} \\ + @{index_ML map_atyps: "(typ -> typ) -> typ -> typ"} \\ + @{index_ML fold_atyps: "(typ -> 'a -> 'a) -> typ -> 'a -> 'a"} \\ + \end{mldecls} + \begin{mldecls} + @{index_ML Sign.subsort: "theory -> sort * sort -> bool"} \\ + @{index_ML Sign.of_sort: "theory -> typ * sort -> bool"} \\ + @{index_ML Sign.add_types: "(string * int * mixfix) list -> theory -> theory"} \\ + @{index_ML Sign.add_tyabbrs_i: " + (string * string list * typ * mixfix) list -> theory -> theory"} \\ + @{index_ML Sign.primitive_class: "string * class list -> theory -> theory"} \\ + @{index_ML Sign.primitive_classrel: "class * class -> theory -> theory"} \\ + @{index_ML Sign.primitive_arity: "arity -> theory -> theory"} \\ + \end{mldecls} + + \begin{description} + + \item @{ML_type class} represents type classes; this is an alias for + @{ML_type string}. + + \item @{ML_type sort} represents sorts; this is an alias for + @{ML_type "class list"}.
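+ For example, the empty list represents the universal sort @{text
+ "{}"}, and a two-element intersection is just a two-element list; a
+ hedged ML sketch (assuming classes @{text "c"} and @{text "d"} exist
+ in a theory @{text "thy"}) evaluates the subsort test @{text "{c, d}
+ \ {c}"} via @{ML Sign.subsort} below:
+ @{ML "fn thy => Sign.subsort thy ([\"c\", \"d\"], [\"c\"])"}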
+ + \item @{ML_type arity} represents type arities; this is an alias for + triples of the form @{text "(\, \<^vec>s, s)"} for @{text "\ :: + (\<^vec>s)s"} described above. + + \item @{ML_type typ} represents types; this is a datatype with + constructors @{ML TFree}, @{ML TVar}, @{ML Type}. + + \item @{ML map_atyps}~@{text "f \"} applies the mapping @{text "f"} + to all atomic types (@{ML TFree}, @{ML TVar}) occurring in @{text + "\"}. + + \item @{ML fold_atyps}~@{text "f \"} iterates the operation @{text + "f"} over all occurrences of atomic types (@{ML TFree}, @{ML TVar}) + in @{text "\"}; the type structure is traversed from left to right. + + \item @{ML Sign.subsort}~@{text "thy (s\<^isub>1, s\<^isub>2)"} + tests the subsort relation @{text "s\<^isub>1 \ s\<^isub>2"}. + + \item @{ML Sign.of_sort}~@{text "thy (\, s)"} tests whether type + @{text "\"} is of sort @{text "s"}. + + \item @{ML Sign.add_types}~@{text "[(\, k, mx), \]"} declares new + type constructors @{text "\"} with @{text "k"} arguments and + optional mixfix syntax. + + \item @{ML Sign.add_tyabbrs_i}~@{text "[(\, \<^vec>\, \, mx), \]"} + defines a new type abbreviation @{text "(\<^vec>\)\ = \"} with + optional mixfix syntax. + + \item @{ML Sign.primitive_class}~@{text "(c, [c\<^isub>1, \, + c\<^isub>n])"} declares a new class @{text "c"}, together with class + relations @{text "c \ c\<^isub>i"}, for @{text "i = 1, \, n"}. + + \item @{ML Sign.primitive_classrel}~@{text "(c\<^isub>1, + c\<^isub>2)"} declares the class relation @{text "c\<^isub>1 \ + c\<^isub>2"}. + + \item @{ML Sign.primitive_arity}~@{text "(\, \<^vec>s, s)"} declares + the arity @{text "\ :: (\<^vec>s)s"}. + + \end{description} +*} + + +section {* Terms \label{sec:terms} *} + +text {* + The language of terms is that of simply-typed @{text "\"}-calculus + with de-Bruijn indices for bound variables (cf.\ \cite{debruijn72} + or \cite{paulson-ml2}), with the types being determined by the + corresponding binders. In contrast, free variables and constants + have an explicit name and type in each occurrence. + + \medskip A \emph{bound variable} is a natural number @{text "b"}, + which accounts for the number of intermediate binders between the + variable occurrence in the body and its binding position. For + example, the de-Bruijn term @{text + "\\<^bsub>nat\<^esub>. \\<^bsub>nat\<^esub>. 1 + 0"} would + correspond to @{text + "\x\<^bsub>nat\<^esub>. \y\<^bsub>nat\<^esub>. x + y"} in a named + representation. Note that a bound variable may be represented by + different de-Bruijn indices at different occurrences, depending on + the nesting of abstractions. + + A \emph{loose variable} is a bound variable that is outside the + scope of local binders. The types (and names) for loose variables + can be managed as a separate context that is maintained as a stack + of hypothetical binders. The core logic operates on closed terms, + without any loose variables. + + A \emph{fixed variable} is a pair of a basic name and a type, e.g.\ + @{text "(x, \)"} which is usually printed @{text "x\<^isub>\"}. A + \emph{schematic variable} is a pair of an indexname and a type, + e.g.\ @{text "((x, 0), \)"} which is usually printed as @{text + "?x\<^isub>\"}. + + \medskip A \emph{constant} is a pair of a basic name and a type, + e.g.\ @{text "(c, \)"} which is usually printed as @{text + "c\<^isub>\"}. Constants are declared in the context as polymorphic + families @{text "c :: \"}, meaning that all substitution instances + @{text "c\<^isub>\"} for @{text "\ = \\"} are valid.
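+
+ For example, a constant instance such as @{text "plus"} at type
+ @{text "nat \ nat \ nat"} (discussed below) may be spelled out as a
+ raw ML datatype value; the sketch assumes hypothetical unqualified
+ names @{text "plus"} and @{text "nat"}:
+ @{ML "Const (\"plus\", Type (\"fun\", [Type (\"nat\", []),
+   Type (\"fun\", [Type (\"nat\", []), Type (\"nat\", [])])]))"}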
+ + The vector of \emph{type arguments} of constant @{text "c\<^isub>\"} + wrt.\ the declaration @{text "c :: \"} is defined as the codomain of + the matcher @{text "\ = {?\\<^isub>1 \ \\<^isub>1, \, + ?\\<^isub>n \ \\<^isub>n}"} presented in canonical order @{text + "(\\<^isub>1, \, \\<^isub>n)"}. Within a given theory context, + there is a one-to-one correspondence between any constant @{text + "c\<^isub>\"} and the application @{text "c(\\<^isub>1, \, + \\<^isub>n)"} of its type arguments. For example, with @{text "plus + :: \ \ \ \ \"}, the instance @{text "plus\<^bsub>nat \ nat \ + nat\<^esub>"} corresponds to @{text "plus(nat)"}. + + Constant declarations @{text "c :: \"} may contain sort constraints + for type variables in @{text "\"}. These are observed by + type-inference as expected, but \emph{ignored} by the core logic. + This means the primitive logic is able to reason with instances of + polymorphic constants that the user-level type-checker would reject + due to violation of type class restrictions. + + \medskip An \emph{atomic} term is either a variable or constant. A + \emph{term} is defined inductively over atomic terms, with + abstraction and application as follows: @{text "t = b | x\<^isub>\ | + ?x\<^isub>\ | c\<^isub>\ | \\<^isub>\. t | t\<^isub>1 t\<^isub>2"}. + Parsing and printing take care of converting between the internal + de-Bruijn representation and an external representation with named + bound variables. Subsequently, we shall use the latter notation + rather than the internal de-Bruijn representation. + + The inductive relation @{text "t :: \"} assigns a (unique) type to a + term according to the structure of atomic terms, abstractions, and + applications: + \[ + \infer{@{text "a\<^isub>\ :: \"}}{} + \qquad + \infer{@{text "(\x\<^sub>\. t) :: \ \ \"}}{@{text "t :: \"}} + \qquad + \infer{@{text "t u :: \"}}{@{text "t :: \ \ \"} & @{text "u :: \"}} + \] + A \emph{well-typed term} is a term that can be typed according to these rules. + + Typing information can be omitted: type-inference is able to + reconstruct the most general type of a raw term, while assigning + most general types to all of its variables and constants. + Type-inference depends on a context of type constraints for fixed + variables, and declarations for polymorphic constants. + + The identity of atomic terms consists of both the name and the type + component. This means that different variables @{text + "x\<^bsub>\\<^isub>1\<^esub>"} and @{text + "x\<^bsub>\\<^isub>2\<^esub>"} may become the same after type + instantiation. Some outer layers of the system make it hard to + produce variables of the same name, but different types. In + contrast, mixed instances of polymorphic constants occur frequently. + + \medskip The \emph{hidden polymorphism} of a term @{text "t :: \"} + is the set of type variables occurring in @{text "t"}, but not in + @{text "\"}. This means that the term implicitly depends on type + arguments that are not accounted for in the result type, i.e.\ there are + different type instances @{text "t\ :: \"} and @{text + "t\' :: \"} with the same type. This slightly + pathological situation notoriously demands additional care. + + \medskip A \emph{term abbreviation} is a syntactic definition @{text + "c\<^isub>\ \ t"} of a closed term @{text "t"} of type @{text "\"}, + without any hidden polymorphism. A term abbreviation looks like a + constant in the syntax, but is expanded before entering the logical + core.
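+ For example, a hypothetical abbreviation @{text "K \ \x y. x"} could
+ be introduced by an ML sketch along the following lines, using @{ML
+ Sign.add_abbrev} from the reference below (the print-mode string and
+ empty properties are merely illustrative):
+ @{ML "fn thy => Sign.add_abbrev \"internal\" []
+   (Binding.name \"K\", Abs (\"x\", TFree (\"'a\", []),
+     Abs (\"y\", TFree (\"'a\", []), Bound 1))) thy"}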
Abbreviations are usually reverted when printing terms, using + @{text "t \ c\<^isub>\"} as rules for higher-order rewriting. + + \medskip Canonical operations on @{text "\"}-terms include @{text + "\\\"}-conversion: @{text "\"}-conversion refers to capture-free + renaming of bound variables; @{text "\"}-conversion contracts an + abstraction applied to an argument term, substituting the argument + in the body: @{text "(\x. b)a"} becomes @{text "b[a/x]"}; @{text + "\"}-conversion contracts vacuous application-abstraction: @{text + "\x. f x"} becomes @{text "f"}, provided that the bound variable + does not occur in @{text "f"}. + + Terms are normally treated modulo @{text "\"}-conversion, which is + implicit in the de-Bruijn representation. Names for bound variables + in abstractions are maintained separately as (meaningless) comments, + mostly for parsing and printing. Full @{text "\\\"}-conversion is + commonplace in various standard operations (\secref{sec:obj-rules}) + that are based on higher-order unification and matching. +*} + +text %mlref {* + \begin{mldecls} + @{index_ML_type term} \\ + @{index_ML "op aconv": "term * term -> bool"} \\ + @{index_ML map_types: "(typ -> typ) -> term -> term"} \\ + @{index_ML fold_types: "(typ -> 'a -> 'a) -> term -> 'a -> 'a"} \\ + @{index_ML map_aterms: "(term -> term) -> term -> term"} \\ + @{index_ML fold_aterms: "(term -> 'a -> 'a) -> term -> 'a -> 'a"} \\ + \end{mldecls} + \begin{mldecls} + @{index_ML fastype_of: "term -> typ"} \\ + @{index_ML lambda: "term -> term -> term"} \\ + @{index_ML betapply: "term * term -> term"} \\ + @{index_ML Sign.declare_const: "Properties.T -> (binding * typ) * mixfix -> + theory -> term * theory"} \\ + @{index_ML Sign.add_abbrev: "string -> Properties.T -> binding * term -> + theory -> (term * term) * theory"} \\ + @{index_ML Sign.const_typargs: "theory -> string * typ -> typ list"} \\ + @{index_ML Sign.const_instance: "theory -> string * typ list -> typ"} \\ + \end{mldecls} + + \begin{description} + + \item @{ML_type term} represents de-Bruijn terms, with comments in + abstractions, and explicitly named free variables and constants; + this is a datatype with constructors @{ML Bound}, @{ML Free}, @{ML + Var}, @{ML Const}, @{ML Abs}, @{ML "op $"}. + + \item @{text "t"}~@{ML aconv}~@{text "u"} checks @{text + "\"}-equivalence of two terms. This is the basic equality relation + on type @{ML_type term}; raw datatype equality should only be used + for operations related to parsing or printing! + + \item @{ML map_types}~@{text "f t"} applies the mapping @{text + "f"} to all types occurring in @{text "t"}. + + \item @{ML fold_types}~@{text "f t"} iterates the operation @{text + "f"} over all occurrences of types in @{text "t"}; the term + structure is traversed from left to right. + + \item @{ML map_aterms}~@{text "f t"} applies the mapping @{text "f"} + to all atomic terms (@{ML Bound}, @{ML Free}, @{ML Var}, @{ML + Const}) occurring in @{text "t"}. + + \item @{ML fold_aterms}~@{text "f t"} iterates the operation @{text + "f"} over all occurrences of atomic terms (@{ML Bound}, @{ML Free}, + @{ML Var}, @{ML Const}) in @{text "t"}; the term structure is + traversed from left to right. + + \item @{ML fastype_of}~@{text "t"} determines the type of a + well-typed term. This operation is relatively slow, despite the + omission of any sanity checks. + + \item @{ML lambda}~@{text "a b"} produces an abstraction @{text + "\a. 
b"}, where occurrences of the atomic term @{text "a"} in the + body @{text "b"} are replaced by bound variables. + + \item @{ML betapply}~@{text "(t, u)"} produces an application @{text + "t u"}, with topmost @{text "\"}-conversion if @{text "t"} is an + abstraction. + + \item @{ML Sign.declare_const}~@{text "properties ((c, \), mx)"} + declares a new constant @{text "c :: \"} with optional mixfix + syntax. + + \item @{ML Sign.add_abbrev}~@{text "print_mode properties (c, t)"} + introduces a new term abbreviation @{text "c \ t"}. + + \item @{ML Sign.const_typargs}~@{text "thy (c, \)"} and @{ML + Sign.const_instance}~@{text "thy (c, [\\<^isub>1, \, \\<^isub>n])"} + convert between two representations of polymorphic constants: full + type instance vs.\ compact type arguments form. + + \end{description} +*} + + +section {* Theorems \label{sec:thms} *} + +text {* + A \emph{proposition} is a well-typed term of type @{text "prop"}, a + \emph{theorem} is a proven proposition (depending on a context of + hypotheses and the background theory). Primitive inferences include + plain Natural Deduction rules for the primary connectives @{text + "\"} and @{text "\"} of the framework. There is also a builtin + notion of equality/equivalence @{text "\"}. +*} + + +subsection {* Primitive connectives and rules \label{sec:prim-rules} *} + +text {* + The theory @{text "Pure"} contains constant declarations for the + primitive connectives @{text "\"}, @{text "\"}, and @{text "\"} of + the logical framework, see \figref{fig:pure-connectives}. The + derivability judgment @{text "A\<^isub>1, \, A\<^isub>n \ B"} is + defined inductively by the primitive inferences given in + \figref{fig:prim-rules}, with the global restriction that the + hypotheses must \emph{not} contain any schematic variables. The + builtin equality is conceptually axiomatized as shown in + \figref{fig:pure-equality}, although the implementation works + directly with derived inferences. + + \begin{figure}[htb] + \begin{center} + \begin{tabular}{ll} + @{text "all :: (\ \ prop) \ prop"} & universal quantification (binder @{text "\"}) \\ + @{text "\ :: prop \ prop \ prop"} & implication (right associative infix) \\ + @{text "\ :: \ \ \ \ prop"} & equality relation (infix) \\ + \end{tabular} + \caption{Primitive connectives of Pure}\label{fig:pure-connectives} + \end{center} + \end{figure} + + \begin{figure}[htb] + \begin{center} + \[ + \infer[@{text "(axiom)"}]{@{text "\ A"}}{@{text "A \ \"}} + \qquad + \infer[@{text "(assume)"}]{@{text "A \ A"}}{} + \] + \[ + \infer[@{text "(\_intro)"}]{@{text "\ \ \x. b[x]"}}{@{text "\ \ b[x]"} & @{text "x \ \"}} + \qquad + \infer[@{text "(\_elim)"}]{@{text "\ \ b[a]"}}{@{text "\ \ \x. b[x]"}} + \] + \[ + \infer[@{text "(\_intro)"}]{@{text "\ - A \ A \ B"}}{@{text "\ \ B"}} + \qquad + \infer[@{text "(\_elim)"}]{@{text "\\<^sub>1 \ \\<^sub>2 \ B"}}{@{text "\\<^sub>1 \ A \ B"} & @{text "\\<^sub>2 \ A"}} + \] + \caption{Primitive inferences of Pure}\label{fig:prim-rules} + \end{center} + \end{figure} + + \begin{figure}[htb] + \begin{center} + \begin{tabular}{ll} + @{text "\ (\x. b[x]) a \ b[a]"} & @{text "\"}-conversion \\ + @{text "\ x \ x"} & reflexivity \\ + @{text "\ x \ y \ P x \ P y"} & substitution \\ + @{text "\ (\x. 
f x \ g x) \ f \ g"} & extensionality \\ + @{text "\ (A \ B) \ (B \ A) \ A \ B"} & logical equivalence \\ + \end{tabular} + \caption{Conceptual axiomatization of Pure equality}\label{fig:pure-equality} + \end{center} + \end{figure} + + The introduction and elimination rules for @{text "\"} and @{text + "\"} are analogous to formation of dependently typed @{text + "\"}-terms representing the underlying proof objects. Proof terms + are irrelevant in the Pure logic, though; they cannot occur within + propositions. The system provides a runtime option to record + explicit proof terms for primitive inferences. Thus all three + levels of @{text "\"}-calculus become explicit: @{text "\"} for + terms, and @{text "\/\"} for proofs (cf.\ + \cite{Berghofer-Nipkow:2000:TPHOL}). + + Observe that locally fixed parameters (as in @{text "\_intro"}) need + not be recorded in the hypotheses, because the simple syntactic + types of Pure are always inhabitable. ``Assumptions'' @{text "x :: + \"} for type-membership are only present as long as some @{text + "x\<^isub>\"} occurs in the statement body.\footnote{This is the key + difference from ``@{text "\HOL"}'' in the PTS framework + \cite{Barendregt-Geuvers:2001}, where hypotheses @{text "x : A"} are + treated uniformly for propositions and types.} + + \medskip The axiomatization of a theory is implicitly closed by + forming all instances of type and term variables: @{text "\ + A\"} holds for any substitution instance of an axiom + @{text "\ A"}. By pushing substitutions through derivations + inductively, we also get admissible @{text "generalize"} and @{text + "instantiate"} rules as shown in \figref{fig:subst-rules}. + + \begin{figure}[htb] + \begin{center} + \[ + \infer{@{text "\ \ B[?\]"}}{@{text "\ \ B[\]"} & @{text "\ \ \"}} + \quad + \infer[\quad@{text "(generalize)"}]{@{text "\ \ B[?x]"}}{@{text "\ \ B[x]"} & @{text "x \ \"}} + \] + \[ + \infer{@{text "\ \ B[\]"}}{@{text "\ \ B[?\]"}} + \quad + \infer[\quad@{text "(instantiate)"}]{@{text "\ \ B[t]"}}{@{text "\ \ B[?x]"}} + \] + \caption{Admissible substitution rules}\label{fig:subst-rules} + \end{center} + \end{figure} + + Note that @{text "instantiate"} does not require an explicit + side-condition, because @{text "\"} may never contain schematic + variables. + + In principle, variables could be substituted in hypotheses as well, + but this would disrupt the monotonicity of reasoning: deriving + @{text "\\ \ B\"} from @{text "\ \ B"} is + correct, but @{text "\\ \ \"} does not necessarily hold: + the result belongs to a different proof context. + + \medskip An \emph{oracle} is a function that produces axioms on the + fly. Logically, this is an instance of the @{text "axiom"} rule + (\figref{fig:prim-rules}), but there is an operational difference. + The system always records oracle invocations within derivations of + theorems by a unique tag. + + Axiomatizations should be limited to the bare minimum, typically as + part of the initial logical basis of an object-logic formalization. + Later on, theories are usually developed in a strictly definitional + fashion, by stating only certain equalities over new constants. + + A \emph{simple definition} consists of a constant declaration @{text + "c :: \"} together with an axiom @{text "\ c \ t"}, where @{text "t + :: \"} is a closed term without any hidden polymorphism. The RHS + may depend on further defined constants, but not @{text "c"} itself.
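+ For example, a simple definition @{text "id \ \x. x"} for a fresh
+ constant @{text "id :: \ \ \"} may be sketched in ML as follows,
+ using the interfaces documented below (all names are hypothetical):
+ @{ML "fn thy =>
+   let
+     val a = TFree (\"'a\", []);
+     val (id, thy') =
+       Sign.declare_const [] ((Binding.name \"id\", a --> a), NoSyn) thy;
+     val eq = Logic.mk_equals (id, Abs (\"x\", a, Bound 0));
+   in Theory.add_defs_i false false [(Binding.name \"id_def\", eq)] thy' end"}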
+ Definitions of functions may be presented as @{text "c \<^vec>x \ + t"} instead of the puristic @{text "c \ \\<^vec>x. t"}. + + An \emph{overloaded definition} consists of a collection of axioms + for the same constant, with at most one equation @{text + "c((\<^vec>\)\) \ t"} for each type constructor @{text "\"} (for + distinct variables @{text "\<^vec>\"}). The RHS may mention + previously defined constants as above, or arbitrary constants @{text + "d(\\<^isub>i)"} for some @{text "\\<^isub>i"} projected from @{text + "\<^vec>\"}. Thus overloaded definitions essentially work by + primitive recursion over the syntactic structure of a single type + argument. +*} + +text %mlref {* + \begin{mldecls} + @{index_ML_type ctyp} \\ + @{index_ML_type cterm} \\ + @{index_ML Thm.ctyp_of: "theory -> typ -> ctyp"} \\ + @{index_ML Thm.cterm_of: "theory -> term -> cterm"} \\ + \end{mldecls} + \begin{mldecls} + @{index_ML_type thm} \\ + @{index_ML proofs: "int ref"} \\ + @{index_ML Thm.assume: "cterm -> thm"} \\ + @{index_ML Thm.forall_intr: "cterm -> thm -> thm"} \\ + @{index_ML Thm.forall_elim: "cterm -> thm -> thm"} \\ + @{index_ML Thm.implies_intr: "cterm -> thm -> thm"} \\ + @{index_ML Thm.implies_elim: "thm -> thm -> thm"} \\ + @{index_ML Thm.generalize: "string list * string list -> int -> thm -> thm"} \\ + @{index_ML Thm.instantiate: "(ctyp * ctyp) list * (cterm * cterm) list -> thm -> thm"} \\ + @{index_ML Thm.axiom: "theory -> string -> thm"} \\ + @{index_ML Thm.add_oracle: "bstring * ('a -> cterm) -> theory + -> (string * ('a -> thm)) * theory"} \\ + \end{mldecls} + \begin{mldecls} + @{index_ML Theory.add_axioms_i: "(binding * term) list -> theory -> theory"} \\ + @{index_ML Theory.add_deps: "string -> string * typ -> (string * typ) list -> theory -> theory"} \\ + @{index_ML Theory.add_defs_i: "bool -> bool -> (binding * term) list -> theory -> theory"} \\ + \end{mldecls} + + \begin{description} + + \item @{ML_type ctyp} and @{ML_type cterm} represent certified types + and terms, respectively. These are abstract datatypes that + guarantee that their values have passed the full well-formedness (and + well-typedness) checks, relative to the declarations of type + constructors, constants etc. in the theory. + + \item @{ML Thm.ctyp_of}~@{text "thy \"} and @{ML + Thm.cterm_of}~@{text "thy t"} explicitly check types and terms, + respectively. This also involves some basic normalizations, such as + expansion of type and term abbreviations from the theory context. + + Re-certification is relatively slow and should be avoided in tight + reasoning loops. There are separate operations to decompose + certified entities (including actual theorems). + + \item @{ML_type thm} represents proven propositions. This is an + abstract datatype that guarantees that its values have been + constructed by basic principles of the @{ML_struct Thm} module. + Every @{ML thm} value contains a sliding back-reference to the + enclosing theory, cf.\ \secref{sec:context-theory}. + + \item @{ML proofs} determines the detail of proof recording within + @{ML_type thm} values: @{ML 0} records only the names of oracles, + @{ML 1} records oracle names and propositions, @{ML 2} additionally + records full proof terms. Officially named theorems that contribute + to a result are always recorded. + + \item @{ML Thm.assume}, @{ML Thm.forall_intr}, @{ML + Thm.forall_elim}, @{ML Thm.implies_intr}, and @{ML Thm.implies_elim} + correspond to the primitive inferences of \figref{fig:prim-rules}.
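+ For example, given a certified proposition @{text "A"} (a @{ML_type
+ cterm} of type @{text "prop"}), the following sketch derives the
+ trivial theorem @{text "A \ A"}:
+ @{ML "fn A => Thm.implies_intr A (Thm.assume A)"}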
+ + \item @{ML Thm.generalize}~@{text "(\<^vec>\, \<^vec>x)"} + corresponds to the @{text "generalize"} rules of + \figref{fig:subst-rules}. Here collections of type and term + variables are generalized simultaneously, specified by the given + basic names. + + \item @{ML Thm.instantiate}~@{text "(\<^vec>\\<^isub>s, + \<^vec>x\<^isub>\)"} corresponds to the @{text "instantiate"} rules + of \figref{fig:subst-rules}. Type variables are substituted before + term variables. Note that the types in @{text "\<^vec>x\<^isub>\"} + refer to the instantiated versions. + + \item @{ML Thm.axiom}~@{text "thy name"} retrieves a named + axiom, cf.\ @{text "axiom"} in \figref{fig:prim-rules}. + + \item @{ML Thm.add_oracle}~@{text "(name, oracle)"} produces a named + oracle rule, essentially generating arbitrary axioms on the fly, + cf.\ @{text "axiom"} in \figref{fig:prim-rules}. + + \item @{ML Theory.add_axioms_i}~@{text "[(name, A), \]"} declares + arbitrary propositions as axioms. + + \item @{ML Theory.add_deps}~@{text "name c\<^isub>\ + \<^vec>d\<^isub>\"} declares dependencies of a named specification + for constant @{text "c\<^isub>\"}, relative to existing + specifications for constants @{text "\<^vec>d\<^isub>\"}. + + \item @{ML Theory.add_defs_i}~@{text "unchecked overloaded [(name, c + \<^vec>x \ t), \]"} states a definitional axiom for an existing + constant @{text "c"}. Dependencies are recorded (cf.\ @{ML + Theory.add_deps}), unless the @{text "unchecked"} option is set. + + \end{description} +*} + + +subsection {* Auxiliary definitions *} + +text {* + Theory @{text "Pure"} provides a few auxiliary definitions, see + \figref{fig:pure-aux}. These special constants are normally not + exposed to the user, but appear in internal encodings. + + \begin{figure}[htb] + \begin{center} + \begin{tabular}{ll} + @{text "conjunction :: prop \ prop \ prop"} & (infix @{text "&"}) \\ + @{text "\ A & B \ (\C. (A \ B \ C) \ C)"} \\[1ex] + @{text "prop :: prop \ prop"} & (prefix @{text "#"}, suppressed) \\ + @{text "#A \ A"} \\[1ex] + @{text "term :: \ \ prop"} & (prefix @{text "TERM"}) \\ + @{text "term x \ (\A. A \ A)"} \\[1ex] + @{text "TYPE :: \ itself"} & (prefix @{text "TYPE"}) \\ + @{text "(unspecified)"} \\ + \end{tabular} + \caption{Definitions of auxiliary connectives}\label{fig:pure-aux} + \end{center} + \end{figure} + + Derived conjunction rules include introduction @{text "A \ B \ A & + B"}, and destructions @{text "A & B \ A"} and @{text "A & B \ B"}. + Conjunction allows us to treat simultaneous assumptions and conclusions + uniformly. For example, multiple claims are intermediately + represented as explicit conjunction, but this is refined into + separate sub-goals before the user continues the proof; the final + result is projected into a list of theorems (cf.\ + \secref{sec:tactical-goals}). + + The @{text "prop"} marker (@{text "#"}) makes arbitrarily complex + propositions appear as atomic, without changing the meaning: @{text + "\ \ A"} and @{text "\ \ #A"} are interchangeable. See + \secref{sec:tactical-goals} for specific operations. + + The @{text "term"} marker turns any well-typed term into a derivable + proposition: @{text "\ TERM t"} holds unconditionally. Although + this is logically vacuous, it allows us to treat terms and proofs + uniformly, similar to a type-theoretic framework. + + The @{text "TYPE"} constructor is the canonical representative of + the unspecified type @{text "\ itself"}; it essentially injects the + language of types into that of terms.
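+ In ML, such a type-as-term value may be produced via @{ML
+ Logic.mk_type} (see the reference below); for example, the sketch
+ @{ML "Logic.mk_type (Type (\"prop\", []))"} yields the canonical
+ representative of type @{text "prop"} within the term language.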
There is specific notation + @{text "TYPE(\)"} for @{text "TYPE\<^bsub>\ + itself\<^esub>"}. + Although devoid of any particular meaning, @{text + "TYPE(\)"} accounts for the type @{text "\"} within the term + language. In particular, @{text "TYPE(\)"} may be used as a formal + argument in primitive definitions, in order to circumvent hidden + polymorphism (cf.\ \secref{sec:terms}). For example, @{text "c + TYPE(\) \ A[\]"} defines @{text "c :: \ itself \ prop"} in terms of + a proposition @{text "A"} that depends on an additional type + argument, which is essentially a predicate on types. +*} + +text %mlref {* + \begin{mldecls} + @{index_ML Conjunction.intr: "thm -> thm -> thm"} \\ + @{index_ML Conjunction.elim: "thm -> thm * thm"} \\ + @{index_ML Drule.mk_term: "cterm -> thm"} \\ + @{index_ML Drule.dest_term: "thm -> cterm"} \\ + @{index_ML Logic.mk_type: "typ -> term"} \\ + @{index_ML Logic.dest_type: "term -> typ"} \\ + \end{mldecls} + + \begin{description} + + \item @{ML Conjunction.intr} derives @{text "A & B"} from @{text + "A"} and @{text "B"}. + + \item @{ML Conjunction.elim} derives @{text "A"} and @{text "B"} + from @{text "A & B"}. + + \item @{ML Drule.mk_term} derives @{text "TERM t"}. + + \item @{ML Drule.dest_term} recovers term @{text "t"} from @{text + "TERM t"}. + + \item @{ML Logic.mk_type}~@{text "\"} produces the term @{text + "TYPE(\)"}. + + \item @{ML Logic.dest_type}~@{text "TYPE(\)"} recovers the type + @{text "\"}. + + \end{description} +*} + + +section {* Object-level rules \label{sec:obj-rules} *} + +text {* + The primitive inferences covered so far mostly serve foundational + purposes. User-level reasoning usually works via object-level rules + that are represented as theorems of Pure. Composition of rules + involves \emph{backchaining}, \emph{higher-order unification} modulo + @{text "\\\"}-conversion of @{text "\"}-terms, and so-called + \emph{lifting} of rules into a context of @{text "\"} and @{text + "\"} connectives. Thus the full power of higher-order Natural + Deduction in Isabelle/Pure becomes readily available. +*} + + +subsection {* Hereditary Harrop Formulae *} + +text {* + The idea of object-level rules is to model Natural Deduction + inferences in the style of Gentzen \cite{Gentzen:1935}, but we allow + arbitrary nesting similar to \cite{extensions91}. The most basic + rule format is that of a \emph{Horn Clause}: + \[ + \infer{@{text "A"}}{@{text "A\<^sub>1"} & @{text "\"} & @{text "A\<^sub>n"}} + \] + where @{text "A, A\<^sub>1, \, A\<^sub>n"} are atomic propositions + of the framework, usually of the form @{text "Trueprop B"}, where + @{text "B"} is a (compound) object-level statement. This + object-level inference corresponds to an iterated implication in + Pure like this: + \[ + @{text "A\<^sub>1 \ \ A\<^sub>n \ A"} + \] + As an example consider conjunction introduction: @{text "A \ B \ A \ + B"}. Any parameters occurring in such rule statements are + conceptually treated as arbitrary: + \[ + @{text "\x\<^sub>1 \ x\<^sub>m. A\<^sub>1 x\<^sub>1 \ x\<^sub>m \ \ A\<^sub>n x\<^sub>1 \ x\<^sub>m \ A x\<^sub>1 \ x\<^sub>m"} + \] + + Nesting of rules means that the positions of @{text "A\<^sub>i"} may + again hold compound rules, not just atomic propositions. + Propositions of this format are called \emph{Hereditary Harrop + Formulae} in the literature \cite{Miller:1991}.
Here we give an + inductive characterization as follows: + + \medskip + \begin{tabular}{ll} + @{text "\<^bold>x"} & set of variables \\ + @{text "\<^bold>A"} & set of atomic propositions \\ + @{text "\<^bold>H = \\<^bold>x\<^sup>*. \<^bold>H\<^sup>* \ \<^bold>A"} & set of Hereditary Harrop Formulae \\ + \end{tabular} + \medskip + + \noindent Thus we essentially impose nesting levels on propositions + formed from @{text "\"} and @{text "\"}. At each level there is a + prefix of parameters and compound premises, concluding an atomic + proposition. Typical examples are @{text "\"}-introduction @{text + "(A \ B) \ A \ B"} or mathematical induction @{text "P 0 \ (\n. P n + \ P (Suc n)) \ P n"}. Even deeper nesting occurs in well-founded + induction @{text "(\x. (\y. y \ x \ P y) \ P x) \ P x"}, but this + already marks the limit of rule complexity seen in practice. + + \medskip Regular user-level inferences in Isabelle/Pure always + maintain the following canonical form of results: + + \begin{itemize} + + \item Normalization by @{text "(A \ (\x. B x)) \ (\x. A \ B x)"}, + which is a theorem of Pure, means that quantifiers are pushed in + front of implication at each level of nesting. The normal form is a + Hereditary Harrop Formula. + + \item The outermost prefix of parameters is represented via + schematic variables: instead of @{text "\\<^vec>x. \<^vec>H \<^vec>x + \ A \<^vec>x"} we have @{text "\<^vec>H ?\<^vec>x \ A ?\<^vec>x"}. + Note that this representation loses information about the order of + parameters, and vacuous quantifiers vanish automatically. + + \end{itemize} +*} + +text %mlref {* + \begin{mldecls} + @{index_ML MetaSimplifier.norm_hhf: "thm -> thm"} \\ + \end{mldecls} + + \begin{description} + + \item @{ML MetaSimplifier.norm_hhf}~@{text thm} normalizes the given + theorem according to the canonical form specified above. This is + occasionally helpful to repair some low-level tools that do not + handle Hereditary Harrop Formulae properly. + + \end{description} +*} + + +subsection {* Rule composition *} + +text {* + The rule calculus of Isabelle/Pure provides two main inferences: + @{inference resolution} (i.e.\ back-chaining of rules) and + @{inference assumption} (i.e.\ closing a branch), both modulo + higher-order unification. There are also combined variants, notably + @{inference elim_resolution} and @{inference dest_resolution}. + + To understand the all-important @{inference resolution} principle, + we first consider raw @{inference_def composition} (modulo + higher-order unification with substitution @{text "\"}): + \[ + \infer[(@{inference_def composition})]{@{text "\<^vec>A\ \ C\"}} + {@{text "\<^vec>A \ B"} & @{text "B' \ C"} & @{text "B\ = B'\"}} + \] + Here the conclusion of the first rule is unified with the premise of + the second; the resulting rule instance inherits the premises of the + first and conclusion of the second. Note that @{text "C"} can again + consist of iterated implications. We can also permute the premises + of the second rule back-and-forth in order to compose with @{text + "B'"} in any position (subsequently we shall always refer to + position 1 w.l.o.g.). + + In @{inference composition} the internal structure of the common + part @{text "B"} and @{text "B'"} is not taken into account. For + proper @{inference resolution} we require @{text "B"} to be atomic, + and explicitly observe the structure @{text "\\<^vec>x. \<^vec>H + \<^vec>x \ B' \<^vec>x"} of the premise of the second rule.
The + idea is to adapt the first rule by ``lifting'' it into this context, + by means of iterated application of the following inferences: + \[ + \infer[(@{inference_def imp_lift})]{@{text "(\<^vec>H \ \<^vec>A) \ (\<^vec>H \ B)"}}{@{text "\<^vec>A \ B"}} + \] + \[ + \infer[(@{inference_def all_lift})]{@{text "(\\<^vec>x. \<^vec>A (?\<^vec>a \<^vec>x)) \ (\\<^vec>x. B (?\<^vec>a \<^vec>x))"}}{@{text "\<^vec>A ?\<^vec>a \ B ?\<^vec>a"}} + \] + By combining raw composition with lifting, we get full @{inference + resolution} as follows: + \[ + \infer[(@{inference_def resolution})] + {@{text "(\\<^vec>x. \<^vec>H \<^vec>x \ \<^vec>A (?\<^vec>a \<^vec>x))\ \ C\"}} + {\begin{tabular}{l} + @{text "\<^vec>A ?\<^vec>a \ B ?\<^vec>a"} \\ + @{text "(\\<^vec>x. \<^vec>H \<^vec>x \ B' \<^vec>x) \ C"} \\ + @{text "(\\<^vec>x. B (?\<^vec>a \<^vec>x))\ = B'\"} \\ + \end{tabular}} + \] + + Continued resolution of rules allows us to back-chain a problem towards + more and more sub-problems. Branches are closed either by resolving with + a rule of 0 premises, or by producing a ``short-circuit'' within a + solved situation (again modulo unification): + \[ + \infer[(@{inference_def assumption})]{@{text "C\"}} + {@{text "(\\<^vec>x. \<^vec>H \<^vec>x \ A \<^vec>x) \ C"} & @{text "A\ = H\<^sub>i\"}~~\text{(for some~@{text i})}} + \] + + FIXME @{inference_def elim_resolution}, @{inference_def dest_resolution} +*} + +text %mlref {* + \begin{mldecls} + @{index_ML "op RS": "thm * thm -> thm"} \\ + @{index_ML "op OF": "thm * thm list -> thm"} \\ + \end{mldecls} + + \begin{description} + + \item @{text "rule\<^sub>1 RS rule\<^sub>2"} resolves @{text + "rule\<^sub>1"} with @{text "rule\<^sub>2"} according to the + @{inference resolution} principle explained above. Note that the + corresponding attribute in the Isar language is called @{attribute + THEN}. + + \item @{text "rule OF rules"} resolves a list of rules with the + first rule, addressing its premises @{text "1, \, length rules"} + (operating from last to first). This means the newly emerging + premises are all concatenated, without interfering. Also note that + compared to @{text "RS"}, the rule argument order is swapped: @{text + "rule\<^sub>1 RS rule\<^sub>2 = rule\<^sub>2 OF [rule\<^sub>1]"}. + + \end{description} +*} + +end