diff -r f47de9e82b0f -r b266e7a86485 src/Doc/Isar-Ref/HOL_Specific.thy --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/src/Doc/Isar-Ref/HOL_Specific.thy Sat Apr 05 11:37:00 2014 +0200 @@ -0,0 +1,2677 @@ +theory HOL_Specific +imports Base Main "~~/src/HOL/Library/Old_Recdef" "~~/src/Tools/Adhoc_Overloading" +begin + +chapter {* Higher-Order Logic *} + +text {* Isabelle/HOL is based on Higher-Order Logic, a polymorphic + version of Church's Simple Theory of Types. HOL is best + understood as a simply-typed version of classical set theory. The + logic was first implemented in Gordon's HOL system + \cite{mgordon-hol}. It extends Church's original logic + \cite{church40} by explicit type variables (naive polymorphism) and + a sound axiomatization scheme for new types based on subsets of + existing types. + + Andrews's book \cite{andrews86} is a full description of the + original Church-style higher-order logic, with proofs of correctness + and completeness wrt.\ certain set-theoretic interpretations. The + particular extensions of Gordon-style HOL are explained semantically + in two chapters of the 1993 HOL book \cite{pitts93}. + + Experience with HOL over decades has demonstrated that higher-order + logic is widely applicable in many areas of mathematics and computer + science. In a sense, Higher-Order Logic is simpler than First-Order + Logic, because there are fewer restrictions and special cases. Note + that HOL is \emph{weaker} than FOL with axioms for ZF set theory, + which is traditionally considered the standard foundation of regular + mathematics, but for most applications this does not matter. If you + prefer ML to Lisp, you will probably prefer HOL to ZF. + + \medskip The syntax of HOL follows @{text "\"}-calculus and + functional programming. Function application is curried. To apply + the function @{text f} of type @{text "\\<^sub>1 \ \\<^sub>2 \ \\<^sub>3"} to the + arguments @{text a} and @{text b} in HOL, you simply write @{text "f + a b"} (as in ML or Haskell). There is no ``apply'' operator; the + existing application of the Pure @{text "\"}-calculus is re-used. + Note that in HOL @{text "f (a, b)"} means ``@{text "f"} applied to + the pair @{text "(a, b)"}'' (which is notation for @{text "Pair a + b"}). The latter typically introduces extra formal effort that can + be avoided by currying functions by default. Explicit tuples are as + infrequent in HOL formalizations as in good ML or Haskell programs. + + \medskip Isabelle/HOL has a distinct feel, compared to other + object-logics like Isabelle/ZF. It identifies object-level types + with meta-level types, taking advantage of the default + type-inference mechanism of Isabelle/Pure. HOL fully identifies + object-level functions with meta-level functions, with native + abstraction and application. + + These identifications allow Isabelle to support HOL particularly + nicely, but they also mean that HOL requires some sophistication + from the user. In particular, this includes an understanding of + Hindley-Milner type-inference with type-classes, which are both + used extensively in the standard libraries and applications. + Beginners can set @{attribute show_types} or even @{attribute + show_sorts} to get more explicit information about the result of + type-inference.
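+ + For illustration, here is a minimal sketch of these display options + (the declarations work just as well inside a proof context): *} + +declare [[show_types]] + +term "map f xs" (* the inferred types of f and xs are now printed *) + +declare [[show_types = false]] + +text {* The output is annotated along the lines of @{text "map (f :: + 'a \ 'b) (xs :: 'a list) :: 'b list"}, depending on the concrete + print mode in use.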
*} + + +chapter {* Derived specification elements *} + +section {* Inductive and coinductive definitions \label{sec:hol-inductive} *} + +text {* + \begin{matharray}{rcl} + @{command_def (HOL) "inductive"} & : & @{text "local_theory \ local_theory"} \\ + @{command_def (HOL) "inductive_set"} & : & @{text "local_theory \ local_theory"} \\ + @{command_def (HOL) "coinductive"} & : & @{text "local_theory \ local_theory"} \\ + @{command_def (HOL) "coinductive_set"} & : & @{text "local_theory \ local_theory"} \\ + @{command_def "print_inductives"}@{text "\<^sup>*"} & : & @{text "context \"} \\ + @{attribute_def (HOL) mono} & : & @{text attribute} \\ + \end{matharray} + + An \emph{inductive definition} specifies the least predicate or set + @{text R} closed under given rules: applying a rule to elements of + @{text R} yields a result within @{text R}. For example, a + structural operational semantics is an inductive definition of an + evaluation relation. + + Dually, a \emph{coinductive definition} specifies the greatest + predicate or set @{text R} that is consistent with given rules: + every element of @{text R} can be seen as arising by applying a rule + to elements of @{text R}. An important example is using + bisimulation relations to formalise equivalence of processes and + infinite data structures. + + Both inductive and coinductive definitions are based on the + Knaster-Tarski fixed-point theorem for complete lattices. The + collection of introduction rules given by the user determines a + functor on subsets of set-theoretic relations. The required + monotonicity of the recursion scheme is proven as a prerequisite to + the fixed-point definition and the resulting consequences. This + works by pushing inclusion through logical connectives and any other + operator that might be wrapped around recursive occurrences of the + defined relation: there must be a monotonicity theorem of the form + @{text "A \ B \ \ A \ \ B"}, for each premise @{text "\ R t"} in an + introduction rule. The default rule declarations of Isabelle/HOL + already take care of most common situations. + + @{rail \ + (@@{command (HOL) inductive} | @@{command (HOL) inductive_set} | + @@{command (HOL) coinductive} | @@{command (HOL) coinductive_set}) + @{syntax target}? \ + @{syntax "fixes"} (@'for' @{syntax "fixes"})? (@'where' clauses)? \ + (@'monos' @{syntax thmrefs})? + ; + clauses: (@{syntax thmdecl}? @{syntax prop} + '|') + ; + @@{attribute (HOL) mono} (() | 'add' | 'del') + \} + + \begin{description} + + \item @{command (HOL) "inductive"} and @{command (HOL) + "coinductive"} define (co)inductive predicates from the introduction + rules. + + The propositions given as @{text "clauses"} in the @{keyword + "where"} part are either rules of the usual @{text "\/\"} format + (with arbitrary nesting), or equalities using @{text "\"}. The + latter specifies extra-logical abbreviations in the sense of + @{command_ref abbreviation}. Introducing abstract syntax + simultaneously with the actual introduction rules is occasionally + useful for complex specifications. + + The optional @{keyword "for"} part contains a list of parameters of + the (co)inductive predicates that remain fixed throughout the + definition, in contrast to arguments of the relation that may vary + in each occurrence within the given @{text "clauses"}. + + The optional @{keyword "monos"} declaration contains additional + \emph{monotonicity theorems}, which are required for each operator + applied to a recursive set in the introduction rules. 
+ + \item @{command (HOL) "inductive_set"} and @{command (HOL) + "coinductive_set"} are wrappers over the previous commands, which + internally work on native HOL predicates. This makes it possible to + define (co)inductive sets, where multiple arguments are simulated + via tuples. + + \item @{command "print_inductives"} prints (co)inductive definitions and + monotonicity rules. + + \item @{attribute (HOL) mono} declares monotonicity rules in the + context. These rules are involved in the automated monotonicity + proof of the above inductive and coinductive definitions. + + \end{description} +*} + + +subsection {* Derived rules *} + +text {* A (co)inductive definition of @{text R} provides the following + main theorems: + + \begin{description} + + \item @{text R.intros} is the list of introduction rules as proven + theorems, for the recursive predicates (or sets). The rules are + also available individually, using the names given them in the + theory file; + + \item @{text R.cases} is the case analysis (or elimination) rule; + + \item @{text R.induct} or @{text R.coinduct} is the (co)induction + rule; + + \item @{text R.simps} is the equation unrolling the fixpoint of the + predicate one step. + + \end{description} + + When several predicates @{text "R\<^sub>1, \, R\<^sub>n"} are + defined simultaneously, the list of introduction rules is called + @{text "R\<^sub>1_\_R\<^sub>n.intros"}, the case analysis rules are + called @{text "R\<^sub>1.cases, \, R\<^sub>n.cases"}, and the list + of mutual induction rules is called @{text + "R\<^sub>1_\_R\<^sub>n.inducts"}. +*} + + +subsection {* Monotonicity theorems *} + +text {* The context maintains a default set of theorems that are used + in monotonicity proofs. New rules can be declared via the + @{attribute (HOL) mono} attribute. See the main Isabelle/HOL + sources for some examples. The general format of such monotonicity + theorems is as follows: + + \begin{itemize} + + \item Theorems of the form @{text "A \ B \ \ A \ \ B"}, for proving + monotonicity of inductive definitions whose introduction rules have + premises involving terms such as @{text "\ R t"}. + + \item Monotonicity theorems for logical operators, which are of the + general form @{text "(\ \ \) \ \ (\ \ \) \ \ \ \"}. For example, in + the case of the operator @{text "\"}, the corresponding theorem is + \[ + \infer{@{text "P\<^sub>1 \ P\<^sub>2 \ Q\<^sub>1 \ Q\<^sub>2"}}{@{text "P\<^sub>1 \ Q\<^sub>1"} & @{text "P\<^sub>2 \ Q\<^sub>2"}} + \] + + \item De Morgan style equations for reasoning about the ``polarity'' + of expressions, e.g. + \[ + @{prop "\ \ P \ P"} \qquad\qquad + @{prop "\ (P \ Q) \ \ P \ \ Q"} + \] + + \item Equations for reducing complex operators to more primitive + ones whose monotonicity can easily be proved, e.g. + \[ + @{prop "(P \ Q) \ \ P \ Q"} \qquad\qquad + @{prop "Ball A P \ \x. x \ A \ P x"} + \] + + \end{itemize} +*} + +subsubsection {* Examples *} + +text {* The finite powerset operator can be defined inductively like this: *} + +inductive_set Fin :: "'a set \ 'a set set" for A :: "'a set" +where + empty: "{} \ Fin A" +| insert: "a \ A \ B \ Fin A \ insert a B \ Fin A" + +text {* The accessible part of a relation is defined as follows: *} + +inductive acc :: "('a \ 'a \ bool) \ 'a \ bool" + for r :: "'a \ 'a \ bool" (infix "\" 50) +where acc: "(\y. y \ x \ acc r y) \ acc r x" + +text {* Common logical connectives can be easily characterized as +non-recursive inductive definitions with parameters, but without +arguments; examples follow below, preceded by a brief sketch of the +dual @{command (HOL) coinductive} command.
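+ + The following hypothetical definition (not part of the standard + library) consists of a single self-supporting introduction rule: *} + +coinductive alw :: "('a \ 'a) \ 'a \ bool" for f :: "'a \ 'a" +where step: "alw f (f x) \ alw f x" + +text {* Being a greatest fixed-point, @{text "alw f x"} holds for + every @{text x}, as can be shown by coinduction with rule @{text + alw.coinduct}; the least fixed-point of the analogous @{command + (HOL) inductive} specification would be empty, since no introduction + rule applies outright. Returning to the logical connectives: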
*} + +inductive AND for A B :: bool +where "A \ B \ AND A B" + +inductive OR for A B :: bool +where "A \ OR A B" + | "B \ OR A B" + +inductive EXISTS for B :: "'a \ bool" +where "B a \ EXISTS B" + +text {* Here the @{text "cases"} or @{text "induct"} rules produced by + the @{command inductive} package coincide with the expected + elimination rules for Natural Deduction. Already in the original + article by Gerhard Gentzen \cite{Gentzen:1935} there is a hint that + each connective can be characterized by its introductions, and the + elimination can be constructed systematically. *} + + +section {* Recursive functions \label{sec:recursion} *} + +text {* + \begin{matharray}{rcl} + @{command_def (HOL) "primrec"} & : & @{text "local_theory \ local_theory"} \\ + @{command_def (HOL) "fun"} & : & @{text "local_theory \ local_theory"} \\ + @{command_def (HOL) "function"} & : & @{text "local_theory \ proof(prove)"} \\ + @{command_def (HOL) "termination"} & : & @{text "local_theory \ proof(prove)"} \\ + @{command_def (HOL) "fun_cases"} & : & @{text "local_theory \ local_theory"} \\ + \end{matharray} + + @{rail \ + @@{command (HOL) primrec} @{syntax target}? @{syntax "fixes"} @'where' equations + ; + (@@{command (HOL) fun} | @@{command (HOL) function}) @{syntax target}? functionopts? + @{syntax "fixes"} \ @'where' equations + ; + + equations: (@{syntax thmdecl}? @{syntax prop} + '|') + ; + functionopts: '(' (('sequential' | 'domintros') + ',') ')' + ; + @@{command (HOL) termination} @{syntax term}? + ; + @@{command (HOL) fun_cases} (@{syntax thmdecl}? @{syntax prop} + @'and') + \} + + \begin{description} + + \item @{command (HOL) "primrec"} defines primitive recursive + functions over datatypes (see also @{command_ref (HOL) datatype} and + @{command_ref (HOL) rep_datatype}). The given @{text equations} + specify reduction rules that are produced by instantiating the + generic combinator for primitive recursion that is available for + each datatype. + + Each equation needs to be of the form: + + @{text [display] "f x\<^sub>1 \ x\<^sub>m (C y\<^sub>1 \ y\<^sub>k) z\<^sub>1 \ z\<^sub>n = rhs"} + + such that @{text C} is a datatype constructor, @{text rhs} contains + only the free variables on the left-hand side (or from the context), + and all recursive occurrences of @{text "f"} in @{text "rhs"} are of + the form @{text "f \ y\<^sub>i \"} for some @{text i}. At most one + reduction rule for each constructor can be given. The order does + not matter. For missing constructors, the function is defined to + return a default value, but this equation is made difficult to + access for users. + + The reduction rules are declared as @{attribute simp} by default, + which enables standard proof methods like @{method simp} and + @{method auto} to normalize expressions of @{text "f"} applied to + datatype constructions, by simulating symbolic computation via + rewriting. + + \item @{command (HOL) "function"} defines functions by general + wellfounded recursion. A detailed description with examples can be + found in \cite{isabelle-function}. The function is specified by a + set of (possibly conditional) recursive equations with arbitrary + pattern matching. The command generates proof obligations for the + completeness and the compatibility of patterns. + + The defined function is considered partial, and the resulting + simplification rules (named @{text "f.psimps"}) and induction rule + (named @{text "f.pinduct"}) are guarded by a generated domain + predicate @{text "f_dom"}. 
The @{command (HOL) "termination"} + command can then be used to establish that the function is total. + + \item @{command (HOL) "fun"} is a shorthand notation for ``@{command + (HOL) "function"}~@{text "(sequential)"}'', followed by automated + proof attempts regarding pattern matching and termination. See + \cite{isabelle-function} for further details. + + \item @{command (HOL) "termination"}~@{text f} commences a + termination proof for the previously defined function @{text f}. If + this is omitted, the command refers to the most recent function + definition. After the proof is closed, the recursive equations and + the induction principle are established. + + \item @{command (HOL) "fun_cases"} generates specialized elimination + rules for function equations. It expects one or more function equations + and produces rules that eliminate the given equalities, following the cases + given in the function definition. + \end{description} + + Recursive definitions introduced by the @{command (HOL) "function"} + command accommodate reasoning by induction (cf.\ @{method induct}): + rule @{text "f.induct"} refers to a specific induction rule, with + parameters named according to the user-specified equations. Cases + are numbered starting from 1. For @{command (HOL) "primrec"}, the + induction principle coincides with structural recursion on the + datatype where the recursion is carried out. + + The equations provided by these packages may be referred to later as + the theorem list @{text "f.simps"}, where @{text f} is the (collective) + name of the functions defined. Individual equations may be named + explicitly as well. + + The @{command (HOL) "function"} command accepts the following + options. + + \begin{description} + + \item @{text sequential} enables a preprocessor which disambiguates + overlapping patterns by making them mutually disjoint. Earlier + equations take precedence over later ones. This allows the + specification to be given in a format very similar to functional + programming. + Note that the resulting simplification and induction rules + correspond to the transformed specification, not the one given + originally. This usually means that each equation given by the user + may result in several theorems. Also note that this automatic + transformation only works for ML-style datatype patterns. + + \item @{text domintros} enables the automated generation of + introduction rules for the domain predicate. While mostly not + needed, they can be helpful in some proofs about partial functions. + + \end{description} +*} + +subsubsection {* Example: evaluation of expressions *} + +text {* Subsequently, we define mutual datatypes for arithmetic and + boolean expressions, and use @{command primrec} for evaluation + functions that follow the same recursive structure.
*} + +datatype 'a aexp = + IF "'a bexp" "'a aexp" "'a aexp" + | Sum "'a aexp" "'a aexp" + | Diff "'a aexp" "'a aexp" + | Var 'a + | Num nat +and 'a bexp = + Less "'a aexp" "'a aexp" + | And "'a bexp" "'a bexp" + | Neg "'a bexp" + + +text {* \medskip Evaluation of arithmetic and boolean expressions *} + +primrec evala :: "('a \ nat) \ 'a aexp \ nat" + and evalb :: "('a \ nat) \ 'a bexp \ bool" +where + "evala env (IF b a1 a2) = (if evalb env b then evala env a1 else evala env a2)" +| "evala env (Sum a1 a2) = evala env a1 + evala env a2" +| "evala env (Diff a1 a2) = evala env a1 - evala env a2" +| "evala env (Var v) = env v" +| "evala env (Num n) = n" +| "evalb env (Less a1 a2) = (evala env a1 < evala env a2)" +| "evalb env (And b1 b2) = (evalb env b1 \ evalb env b2)" +| "evalb env (Neg b) = (\ evalb env b)" + +text {* Since the value of an expression depends on the value of its + variables, the functions @{const evala} and @{const evalb} take an + additional parameter, an \emph{environment} that maps variables to + their values. + + \medskip Substitution on expressions can be defined similarly. The + mapping @{text f} of type @{typ "'a \ 'a aexp"} given as a + parameter is lifted canonically on the types @{typ "'a aexp"} and + @{typ "'a bexp"}, respectively. +*} + +primrec substa :: "('a \ 'b aexp) \ 'a aexp \ 'b aexp" + and substb :: "('a \ 'b aexp) \ 'a bexp \ 'b bexp" +where + "substa f (IF b a1 a2) = IF (substb f b) (substa f a1) (substa f a2)" +| "substa f (Sum a1 a2) = Sum (substa f a1) (substa f a2)" +| "substa f (Diff a1 a2) = Diff (substa f a1) (substa f a2)" +| "substa f (Var v) = f v" +| "substa f (Num n) = Num n" +| "substb f (Less a1 a2) = Less (substa f a1) (substa f a2)" +| "substb f (And b1 b2) = And (substb f b1) (substb f b2)" +| "substb f (Neg b) = Neg (substb f b)" + +text {* In textbooks about semantics one often finds substitution + theorems, which express the relationship between substitution and + evaluation. For @{typ "'a aexp"} and @{typ "'a bexp"}, we can prove + such a theorem by mutual induction, followed by simplification. +*} + +lemma subst_one: + "evala env (substa (Var (v := a')) a) = evala (env (v := evala env a')) a" + "evalb env (substb (Var (v := a')) b) = evalb (env (v := evala env a')) b" + by (induct a and b) simp_all + +lemma subst_all: + "evala env (substa s a) = evala (\x. evala env (s x)) a" + "evalb env (substb s b) = evalb (\x. evala env (s x)) b" + by (induct a and b) simp_all + + +subsubsection {* Example: a substitution function for terms *} + +text {* Functions on datatypes with nested recursion are also defined + by mutual primitive recursion. *} + +datatype ('a, 'b) "term" = Var 'a | App 'b "('a, 'b) term list" + +text {* A substitution function on type @{typ "('a, 'b) term"} can be + defined as follows, by working simultaneously on @{typ "('a, 'b) + term list"}: *} + +primrec subst_term :: "('a \ ('a, 'b) term) \ ('a, 'b) term \ ('a, 'b) term" and + subst_term_list :: "('a \ ('a, 'b) term) \ ('a, 'b) term list \ ('a, 'b) term list" +where + "subst_term f (Var a) = f a" +| "subst_term f (App b ts) = App b (subst_term_list f ts)" +| "subst_term_list f [] = []" +| "subst_term_list f (t # ts) = subst_term f t # subst_term_list f ts" + +text {* The recursion scheme follows the structure of the unfolded + definition of type @{typ "('a, 'b) term"}. 
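+ + For illustration (a hypothetical variant with primed names, not part + of the example above), this unfolded structure corresponds to the + following mutually recursive specification: *} + +datatype ('a, 'b) term' = Var' 'a | App' 'b "('a, 'b) term_list" +and ('a, 'b) term_list = Nil' | Cons' "('a, 'b) term'" "('a, 'b) term_list" + +text {*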
To prove properties of this + substitution function, mutual induction is needed: +*} + +lemma "subst_term (subst_term f1 \ f2) t = subst_term f1 (subst_term f2 t)" and + "subst_term_list (subst_term f1 \ f2) ts = subst_term_list f1 (subst_term_list f2 ts)" + by (induct t and ts) simp_all + + +subsubsection {* Example: a map function for infinitely branching trees *} + +text {* Defining functions on infinitely branching datatypes by + primitive recursion is just as easy. +*} + +datatype 'a tree = Atom 'a | Branch "nat \ 'a tree" + +primrec map_tree :: "('a \ 'b) \ 'a tree \ 'b tree" +where + "map_tree f (Atom a) = Atom (f a)" +| "map_tree f (Branch ts) = Branch (\x. map_tree f (ts x))" + +text {* Note that all occurrences of functions such as @{text ts} + above must be applied to an argument. In particular, @{term + "map_tree f \ ts"} is not allowed here. *} + +text {* Here is a simple composition lemma for @{term map_tree}: *} + +lemma "map_tree g (map_tree f t) = map_tree (g \ f) t" + by (induct t) simp_all + + +subsection {* Proof methods related to recursive definitions *} + +text {* + \begin{matharray}{rcl} + @{method_def (HOL) pat_completeness} & : & @{text method} \\ + @{method_def (HOL) relation} & : & @{text method} \\ + @{method_def (HOL) lexicographic_order} & : & @{text method} \\ + @{method_def (HOL) size_change} & : & @{text method} \\ + @{method_def (HOL) induction_schema} & : & @{text method} \\ + \end{matharray} + + @{rail \ + @@{method (HOL) relation} @{syntax term} + ; + @@{method (HOL) lexicographic_order} (@{syntax clasimpmod} * ) + ; + @@{method (HOL) size_change} ( orders (@{syntax clasimpmod} * ) ) + ; + @@{method (HOL) induction_schema} + ; + orders: ( 'max' | 'min' | 'ms' ) * + \} + + \begin{description} + + \item @{method (HOL) pat_completeness} is a specialized method to + solve goals regarding the completeness of pattern matching, as + required by the @{command (HOL) "function"} package (cf.\ + \cite{isabelle-function}). + + \item @{method (HOL) relation}~@{text R} introduces a termination + proof using the relation @{text R}. The resulting proof state will + contain goals expressing that @{text R} is wellfounded, and that the + arguments of recursive calls decrease with respect to @{text R}. + Usually, this method is used as the initial proof step of manual + termination proofs. + + \item @{method (HOL) "lexicographic_order"} attempts a fully + automated termination proof by searching for a lexicographic + combination of size measures on the arguments of the function. The + method accepts the same arguments as the @{method auto} method, + which it uses internally to prove local descents. The @{syntax + clasimpmod} modifiers are accepted (as for @{method auto}). + + In case of failure, extensive information is printed, which can help + to analyse the situation (cf.\ \cite{isabelle-function}). + + \item @{method (HOL) "size_change"} also works on termination goals, + using a variation of the size-change principle, together with a + graph decomposition technique (see \cite{krauss_phd} for details). + Three kinds of orders are used internally: @{text max}, @{text min}, + and @{text ms} (multiset), which is only available when the theory + @{text Multiset} is loaded. When no order kinds are given, they are + tried in order. The search for a termination proof uses SAT solving + internally. + + For local descent proofs, the @{syntax clasimpmod} modifiers are + accepted (as for @{method auto}). 
+ + \item @{method (HOL) induction_schema} derives user-specified + induction rules from well-founded induction and completeness of + patterns. This factors out some operations that are done internally + by the function package and makes them available separately. See + @{file "~~/src/HOL/ex/Induction_Schema.thy"} for examples. + + \end{description} +*} + + +subsection {* Functions with explicit partiality *} + +text {* + \begin{matharray}{rcl} + @{command_def (HOL) "partial_function"} & : & @{text "local_theory \ local_theory"} \\ + @{attribute_def (HOL) "partial_function_mono"} & : & @{text attribute} \\ + \end{matharray} + + @{rail \ + @@{command (HOL) partial_function} @{syntax target}? + '(' @{syntax nameref} ')' @{syntax "fixes"} \ + @'where' @{syntax thmdecl}? @{syntax prop} + \} + + \begin{description} + + \item @{command (HOL) "partial_function"}~@{text "(mode)"} defines + recursive functions based on fixpoints in complete partial + orders. No termination proof is required from the user or + constructed internally. Instead, the possibility of non-termination + is modelled explicitly in the result type, which contains an + explicit bottom element. + + Pattern matching and mutual recursion are currently not supported. + Thus, the specification consists of a single function described by a + single recursive equation. + + There are no fixed syntactic restrictions on the body of the + function, but the induced functional must be provably monotonic + wrt.\ the underlying order. The monotonicity proof is performed + internally, and the definition is rejected when it fails. The proof + can be influenced by declaring hints using the + @{attribute (HOL) partial_function_mono} attribute. + + The mandatory @{text mode} argument specifies the mode of operation + of the command, which directly corresponds to a complete partial + order on the result type. By default, the following modes are + defined: + + \begin{description} + + \item @{text option} defines functions that map into the @{type + option} type. Here, the value @{term None} is used to model a + non-terminating computation. Monotonicity requires that if @{term + None} is returned by a recursive call, then the overall result must + also be @{term None}. This is best achieved through the use of the + monadic operator @{const "Option.bind"}. + + \item @{text tailrec} defines functions with an arbitrary result + type and uses the slightly degenerate partial order where @{term + "undefined"} is the bottom element. Now, monotonicity requires that + if @{term undefined} is returned by a recursive call, then the + overall result must also be @{term undefined}. In practice, this is + only satisfied when each recursive call is a tail call, whose result + is directly returned. Thus, this mode of operation allows the + definition of arbitrary tail-recursive functions. + + \end{description} + + Experienced users may define new modes by instantiating the locale + @{const "partial_function_definitions"} appropriately. + + \item @{attribute (HOL) partial_function_mono} declares rules for + use in the internal monotonicity proofs of partial function + definitions.
+ + \end{description} + +*} + + +subsection {* Old-style recursive function definitions (TFL) *} + +text {* + \begin{matharray}{rcl} + @{command_def (HOL) "recdef"} & : & @{text "theory \ theory"} \\ + @{command_def (HOL) "recdef_tc"}@{text "\<^sup>*"} & : & @{text "theory \ proof(prove)"} \\ + \end{matharray} + + The old TFL commands @{command (HOL) "recdef"} and @{command (HOL) + "recdef_tc"} for defining recursive functions are mostly obsolete; @{command + (HOL) "function"} or @{command (HOL) "fun"} should be used instead. + + @{rail \ + @@{command (HOL) recdef} ('(' @'permissive' ')')? \ + @{syntax name} @{syntax term} (@{syntax prop} +) hints? + ; + @@{command (HOL) recdef_tc} @{syntax thmdecl}? tc + ; + hints: '(' @'hints' ( recdefmod * ) ')' + ; + recdefmod: (('recdef_simp' | 'recdef_cong' | 'recdef_wf') + (() | 'add' | 'del') ':' @{syntax thmrefs}) | @{syntax clasimpmod} + ; + tc: @{syntax nameref} ('(' @{syntax nat} ')')? + \} + + \begin{description} + + \item @{command (HOL) "recdef"} defines general well-founded + recursive functions (using the TFL package), see also + \cite{isabelle-HOL}. The ``@{text "(permissive)"}'' option tells + TFL to recover from failed proof attempts, returning unfinished + results. The @{text recdef_simp}, @{text recdef_cong}, and @{text + recdef_wf} hints refer to auxiliary rules to be used in the internal + automated proof process of TFL. Additional @{syntax clasimpmod} + declarations may be given to tune the context of the Simplifier + (cf.\ \secref{sec:simplifier}) and Classical reasoner (cf.\ + \secref{sec:classical}). + + \item @{command (HOL) "recdef_tc"}~@{text "c (i)"} recommences the + proof for leftover termination condition number @{text i} (default + 1) as generated by a @{command (HOL) "recdef"} definition of + constant @{text c}. + + Note that in most cases, @{command (HOL) "recdef"} is able to finish + its internal proofs without manual intervention. + + \end{description} + + \medskip Hints for @{command (HOL) "recdef"} may also be declared + globally, using the following attributes. + + \begin{matharray}{rcl} + @{attribute_def (HOL) recdef_simp} & : & @{text attribute} \\ + @{attribute_def (HOL) recdef_cong} & : & @{text attribute} \\ + @{attribute_def (HOL) recdef_wf} & : & @{text attribute} \\ + \end{matharray} + + @{rail \ + (@@{attribute (HOL) recdef_simp} | @@{attribute (HOL) recdef_cong} | + @@{attribute (HOL) recdef_wf}) (() | 'add' | 'del') + \} +*} + + +section {* Datatypes \label{sec:hol-datatype} *} + +text {* + \begin{matharray}{rcl} + @{command_def (HOL) "datatype"} & : & @{text "theory \ theory"} \\ + @{command_def (HOL) "rep_datatype"} & : & @{text "theory \ proof(prove)"} \\ + \end{matharray} + + @{rail \ + @@{command (HOL) datatype} (spec + @'and') + ; + @@{command (HOL) rep_datatype} ('(' (@{syntax name} +) ')')? (@{syntax term} +) + ; + + spec: @{syntax typespec_sorts} @{syntax mixfix}? '=' (cons + '|') + ; + cons: @{syntax name} (@{syntax type} * ) @{syntax mixfix}? + \} + + \begin{description} + + \item @{command (HOL) "datatype"} defines inductive datatypes in + HOL. + + \item @{command (HOL) "rep_datatype"} represents existing types as + datatypes. + + For foundational reasons, some basic types such as @{typ nat}, @{typ + "'a \ 'b"}, @{typ "'a + 'b"}, @{typ bool} and @{typ unit} are + introduced by more primitive means using @{command_ref typedef}.
To + recover the rich infrastructure of @{command datatype} (e.g.\ rules + for @{method cases} and @{method induct} and the primitive recursion + combinators), such types may be represented as actual datatypes + later. This is done by specifying the constructors of the desired + type, and giving a proof of the induction rule, distinctness and + injectivity of constructors. + + For example, see @{file "~~/src/HOL/Sum_Type.thy"} for the + representation of the primitive sum type as a fully-featured datatype. + + \end{description} + + The generated rules for @{method induct} and @{method cases} provide + case names according to the given constructors, while parameters are + named after the types (see also \secref{sec:cases-induct}). + + See \cite{isabelle-HOL} for more details on datatypes, but beware of + the old-style theory syntax being used there! Apart from proper + proof methods for case-analysis and induction, there are also + emulations of ML tactics @{method (HOL) case_tac} and @{method (HOL) + induct_tac} available, see \secref{sec:hol-induct-tac}; these make + it possible to refer directly to the internal structure of subgoals + (including internally bound parameters). +*} + + +subsubsection {* Examples *} + +text {* We define a type of finite sequences, with slightly different + names from the existing @{typ "'a list"} that is already in @{theory + Main}: *} + +datatype 'a seq = Empty | Seq 'a "'a seq" + +text {* We can now prove a simple lemma by structural induction: *} + +lemma "Seq x xs \ xs" +proof (induct xs arbitrary: x) + case Empty + txt {* This case can be proved using the simplifier: the freeness + properties of the datatype are already declared as @{attribute + simp} rules. *} + show "Seq x Empty \ Empty" + by simp +next + case (Seq y ys) + txt {* The step case is proved similarly. *} + show "Seq x (Seq y ys) \ Seq y ys" + using `Seq y ys \ ys` by simp +qed + +text {* Here is a more succinct version of the same proof: *} + +lemma "Seq x xs \ xs" + by (induct xs arbitrary: x) simp_all + + +section {* Records \label{sec:hol-record} *} + +text {* + In principle, records merely generalize the concept of tuples, where + components may be addressed by labels instead of just position. The + logical infrastructure of records in Isabelle/HOL is slightly more + advanced, though, supporting truly extensible record schemes. This + admits operations that are polymorphic with respect to record + extension, yielding ``object-oriented'' effects like (single) + inheritance. See also \cite{NaraschewskiW-TPHOLs98} for more + details on object-oriented verification and record subtyping in HOL. +*} + + +subsection {* Basic concepts *} + +text {* + Isabelle/HOL supports both \emph{fixed} and \emph{schematic} records + at the level of terms and types. The notation is as follows: + + \begin{center} + \begin{tabular}{l|l|l} + & record terms & record types \\ \hline + fixed & @{text "\x = a, y = b\"} & @{text "\x :: A, y :: B\"} \\ + schematic & @{text "\x = a, y = b, \ = m\"} & + @{text "\x :: A, y :: B, \ :: M\"} \\ + \end{tabular} + \end{center} + + \noindent The ASCII representation of @{text "\x = a\"} is @{text + "(| x = a |)"}. + + A fixed record @{text "\x = a, y = b\"} has field @{text x} of value + @{text a} and field @{text y} of value @{text b}. The corresponding + type is @{text "\x :: A, y :: B\"}, assuming that @{text "a :: A"} + and @{text "b :: B"}.
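+ + For instance, the following hypothetical declaration (the @{command + (HOL) record} command itself is explained in the following + subsection) realizes this notation concretely: *} + +record point = + x :: nat + y :: nat + +term "\x = 1, y = 2\ :: point" + +text {* Checking the @{command term} above merely confirms that the + record notation agrees with the declared field types.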
+ + A record scheme like @{text "\x = a, y = b, \ = m\"} contains fields + @{text x} and @{text y} as before, but also possibly further fields + as indicated by the ``@{text "\"}'' notation (which is actually part + of the syntax). The improper field ``@{text "\"}'' of a record + scheme is called the \emph{more part}. Logically it is just a free + variable, which is occasionally referred to as ``row variable'' in + the literature. The more part of a record scheme may be + instantiated by zero or more further components. For example, the + previous scheme may get instantiated to @{text "\x = a, y = b, z = + c, \ = m'\"}, where @{text m'} refers to a different more part. + Fixed records are special instances of record schemes, where + ``@{text "\"}'' is properly terminated by the @{text "() :: unit"} + element. In fact, @{text "\x = a, y = b\"} is just an abbreviation + for @{text "\x = a, y = b, \ = ()\"}. + + \medskip Two key observations make extensible records in a simply + typed language like HOL work out: + + \begin{enumerate} + + \item the more part is internalized, as a free term or type + variable, + + \item field names are externalized; they cannot be accessed within + the logic as first-class values. + + \end{enumerate} + + \medskip In Isabelle/HOL record types have to be defined explicitly, + fixing their field names and types, and their (optional) parent + record. Afterwards, records may be formed using the above syntax, while + obeying the canonical order of fields as given by their declaration. + The record package provides several standard operations like + selectors and updates. The common setup for various generic proof + tools enables succinct reasoning patterns. See also the Isabelle/HOL + tutorial \cite{isabelle-hol-book} for further instructions on using + records in practice. +*} + + +subsection {* Record specifications *} + +text {* + \begin{matharray}{rcl} + @{command_def (HOL) "record"} & : & @{text "theory \ theory"} \\ + \end{matharray} + + @{rail \ + @@{command (HOL) record} @{syntax typespec_sorts} '=' \ + (@{syntax type} '+')? (constdecl +) + ; + constdecl: @{syntax name} '::' @{syntax type} @{syntax mixfix}? + \} + + \begin{description} + + \item @{command (HOL) "record"}~@{text "(\\<^sub>1, \, \\<^sub>m) t = \ + c\<^sub>1 :: \\<^sub>1 + \ c\<^sub>n :: \\<^sub>n"} defines extensible record type @{text "(\\<^sub>1, \, \\<^sub>m) t"}, + derived from the optional parent record @{text "\"} by adding new + field components @{text "c\<^sub>i :: \\<^sub>i"} etc. + + The type variables of @{text "\"} and @{text "\\<^sub>i"} need to be + covered by the (distinct) parameters @{text "\\<^sub>1, \, + \\<^sub>m"}. Type constructor @{text t} has to be new, while @{text + \} needs to specify an instance of an existing record type. At + least one new field @{text "c\<^sub>i"} has to be specified. + Basically, field names need to belong to a unique record. This is + not a real restriction in practice, since fields are qualified by + the record name internally. + + The parent record specification @{text \} is optional; if omitted + @{text t} becomes a root record. The hierarchy of all records + declared within a theory context forms a forest structure, i.e.\ a + set of trees starting with a root record each. There is no way to + merge multiple parent records!
+ + For convenience, @{text "(\\<^sub>1, \, \\<^sub>m) t"} is made a + type abbreviation for the fixed record type @{text "\c\<^sub>1 :: + \\<^sub>1, \, c\<^sub>n :: \\<^sub>n\"}; likewise, @{text + "(\\<^sub>1, \, \\<^sub>m, \) t_scheme"} is made an abbreviation for + @{text "\c\<^sub>1 :: \\<^sub>1, \, c\<^sub>n :: \\<^sub>n, \ :: + \\"}. + + \end{description} +*} + + +subsection {* Record operations *} + +text {* + Any record definition of the form presented above produces certain + standard operations. Selectors and updates are provided for any + field, including the improper one ``@{text more}''. There are also + cumulative record constructor functions. To simplify the + presentation below, we assume for now that @{text "(\\<^sub>1, \, + \\<^sub>m) t"} is a root record with fields @{text "c\<^sub>1 :: + \\<^sub>1, \, c\<^sub>n :: \\<^sub>n"}. + + \medskip \textbf{Selectors} and \textbf{updates} are available for + any field (including ``@{text more}''): + + \begin{matharray}{lll} + @{text "c\<^sub>i"} & @{text "::"} & @{text "\\<^vec>c :: \<^vec>\, \ :: \\ \ \\<^sub>i"} \\ + @{text "c\<^sub>i_update"} & @{text "::"} & @{text "\\<^sub>i \ \\<^vec>c :: \<^vec>\, \ :: \\ \ \\<^vec>c :: \<^vec>\, \ :: \\"} \\ + \end{matharray} + + There is special syntax for application of updates: @{text "r\x := + a\"} abbreviates term @{text "x_update a r"}. Further notation for + repeated updates is also available: @{text "r\x := a\\y := b\\z := + c\"} may be written @{text "r\x := a, y := b, z := c\"}. Note that + because of postfix notation the order of fields shown here is the + reverse of the order in the actual term. Since repeated updates are just + function applications, fields may be freely permuted in @{text "\x + := a, y := b, z := c\"}, as far as logical equality is concerned. + Thus commutativity of independent updates can be proven within the + logic for any two fields, but not as a general theorem. + + \medskip The \textbf{make} operation provides a cumulative record + constructor function: + + \begin{matharray}{lll} + @{text "t.make"} & @{text "::"} & @{text "\\<^sub>1 \ \ \\<^sub>n \ \\<^vec>c :: \<^vec>\\"} \\ + \end{matharray} + + \medskip We now reconsider the case of non-root records, which are + derived from some parent. In general, the latter may depend on + another parent as well, resulting in a list of \emph{ancestor + records}. Appending the lists of fields of all ancestors results in + a certain field prefix. The record package automatically takes care + of this by lifting operations over this context of ancestor fields.
+ Assuming that @{text "(\\<^sub>1, \, \\<^sub>m) t"} has ancestor + fields @{text "b\<^sub>1 :: \\<^sub>1, \, b\<^sub>k :: \\<^sub>k"}, + the above record operations will get the following types: + + \medskip + \begin{tabular}{lll} + @{text "c\<^sub>i"} & @{text "::"} & @{text "\\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\ \ \\<^sub>i"} \\ + @{text "c\<^sub>i_update"} & @{text "::"} & @{text "\\<^sub>i \ + \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\ \ + \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\"} \\ + @{text "t.make"} & @{text "::"} & @{text "\\<^sub>1 \ \ \\<^sub>k \ \\<^sub>1 \ \ \\<^sub>n \ + \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\\"} \\ + \end{tabular} + \medskip + + \noindent Some further operations address the extension aspect of a + derived record scheme specifically: @{text "t.fields"} produces a + record fragment consisting of exactly the new fields introduced here + (the result may serve as a more part elsewhere); @{text "t.extend"} + takes a fixed record and adds a given more part; @{text + "t.truncate"} restricts a record scheme to a fixed record. + + \medskip + \begin{tabular}{lll} + @{text "t.fields"} & @{text "::"} & @{text "\\<^sub>1 \ \ \\<^sub>n \ \\<^vec>c :: \<^vec>\\"} \\ + @{text "t.extend"} & @{text "::"} & @{text "\\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\\ \ + \ \ \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\"} \\ + @{text "t.truncate"} & @{text "::"} & @{text "\\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\ \ \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\\"} \\ + \end{tabular} + \medskip + + \noindent Note that @{text "t.make"} and @{text "t.fields"} coincide + for root records. +*} + + +subsection {* Derived rules and proof tools *} + +text {* + The record package proves several results internally, declaring + these facts to appropriate proof tools. This enables users to + reason about record structures quite conveniently. Assume that + @{text t} is a record type as specified above. + + \begin{enumerate} + + \item Standard conversions for selectors or updates applied to + record constructor terms are made part of the default Simplifier + context; thus proofs by reduction of basic operations merely require + the @{method simp} method without further arguments. These rules + are available as @{text "t.simps"}, too. + + \item Selectors applied to updated records are automatically reduced + by an internal simplification procedure, which is also part of the + standard Simplifier setup. + + \item Inject equations of a form analogous to @{prop "(x, y) = (x', + y') \ x = x' \ y = y'"} are declared to the Simplifier and Classical + Reasoner as @{attribute iff} rules. These rules are available as + @{text "t.iffs"}. + + \item The introduction rule for record equality analogous to @{text + "x r = x r' \ y r = y r' \ \ r = r'"} is declared to the Simplifier, + and to the basic rule context as ``@{attribute intro}@{text "?"}''. + The rule is called @{text "t.equality"}. + + \item Representations of arbitrary record expressions as canonical + constructor terms are provided both in @{method cases} and @{method + induct} format (cf.\ the generic proof methods of the same name, + \secref{sec:cases-induct}). Several variations are available, for + fixed records, record schemes, more parts etc. + + The generic proof methods are sufficiently smart to pick the most + sensible rule according to the type of the indicated record + expression: users just need to apply something like ``@{text "(cases + r)"}'' to a certain proof problem.
+ + \item The derived record operations @{text "t.make"}, @{text + "t.fields"}, @{text "t.extend"}, @{text "t.truncate"} are \emph{not} + treated automatically, but usually need to be expanded by hand, + using the collective fact @{text "t.defs"}. + + \end{enumerate} +*} + + +subsubsection {* Examples *} + +text {* See @{file "~~/src/HOL/ex/Records.thy"}, for example. *} + +section {* Typedef axiomatization \label{sec:hol-typedef} *} + +text {* + \begin{matharray}{rcl} + @{command_def (HOL) "typedef"} & : & @{text "local_theory \ proof(prove)"} \\ + \end{matharray} + + A Gordon/HOL-style type definition is a certain axiom scheme that + identifies a new type with a subset of an existing type. More + precisely, the new type is defined by exhibiting an existing type + @{text \}, a set @{text "A :: \ set"}, and a theorem that proves + @{prop "\x. x \ A"}. Thus @{text A} is a non-empty subset of @{text + \}, and the new type denotes this subset. New functions are + postulated that establish an isomorphism between the new type and + the subset. In general, the type @{text \} may involve type + variables @{text "\\<^sub>1, \, \\<^sub>n"} which means that the type definition + produces a type constructor @{text "(\\<^sub>1, \, \\<^sub>n) t"} depending on + those type arguments. + + The axiomatization can be considered a ``definition'' in the sense + of the particular set-theoretic interpretation of HOL + \cite{pitts93}, where the universe of types is required to be + downwards-closed wrt.\ arbitrary non-empty subsets. Thus genuinely + new types introduced by @{command "typedef"} stay within the range + of HOL models by construction. Note that @{command_ref + type_synonym} from Isabelle/Pure merely introduces syntactic + abbreviations, without any logical significance. + + @{rail \ + @@{command (HOL) typedef} abs_type '=' rep_set + ; + abs_type: @{syntax typespec_sorts} @{syntax mixfix}? + ; + rep_set: @{syntax term} (@'morphisms' @{syntax name} @{syntax name})? + \} + + \begin{description} + + \item @{command (HOL) "typedef"}~@{text "(\\<^sub>1, \, \\<^sub>n) t = A"} + axiomatizes a type definition in the background theory of the + current context, depending on a non-emptiness result of the set + @{text A} that needs to be proven here. The set @{text A} may + contain type variables @{text "\\<^sub>1, \, \\<^sub>n"} as specified on the LHS, + but no term variables. + + Even though the command is a local theory specification, the newly + introduced type constructor cannot depend on parameters or assumptions + of the context: this is structurally impossible in HOL. In contrast, the + non-emptiness proof may use local assumptions in unusual situations, + which could result in different interpretations in target contexts: + the meaning of the bijection between the representing set @{text A} + and the new type @{text t} may then change in different application + contexts. + + For @{command (HOL) "typedef"}~@{text "t = A"} the newly introduced + type @{text t} is accompanied by a pair of morphisms to relate it to + the representing set over the old type. By default, the injection + from type to set is called @{text Rep_t} and its inverse @{text + Abs_t}; an explicit @{keyword (HOL) "morphisms"} specification + can be used to provide alternative names. + + The core axiomatization uses the locale predicate @{const + type_definition} as defined in Isabelle/HOL. Various basic + consequences of that are instantiated accordingly, re-using the + locale facts with names derived from the new type constructor.
Thus + the generic @{thm type_definition.Rep} is turned into the specific + @{text "Rep_t"}, for example. + + Theorems @{thm type_definition.Rep}, @{thm + type_definition.Rep_inverse}, and @{thm type_definition.Abs_inverse} + provide the most basic characterization as a corresponding + injection/surjection pair (in both directions). The derived rules + @{thm type_definition.Rep_inject} and @{thm + type_definition.Abs_inject} provide a more convenient version of + injectivity, suitable for automated proof tools (e.g.\ in + declarations involving @{attribute simp} or @{attribute iff}). + Furthermore, the rules @{thm type_definition.Rep_cases}~/ @{thm + type_definition.Rep_induct}, and @{thm type_definition.Abs_cases}~/ + @{thm type_definition.Abs_induct} provide alternative views on + surjectivity. These rules are already declared as set or type rules + for the generic @{method cases} and @{method induct} methods, + respectively. + + \end{description} + + \begin{warn} + If you introduce a new type axiomatically, i.e.\ via @{command_ref + typedecl} and @{command_ref axiomatization}, the minimum requirement + is that it has a non-empty model, to avoid immediate collapse of the + HOL logic. Moreover, one needs to demonstrate that the + interpretation of such free-form axiomatizations can coexist with + that of the regular @{command_def typedef} scheme, and any extension + that other people might have introduced elsewhere. + \end{warn} +*} + +subsubsection {* Examples *} + +text {* Type definitions permit the introduction of abstract data + types in a safe way, namely by providing models based on already + existing types. Given some abstract axiomatic description @{text P} + of a type, this involves two steps: + + \begin{enumerate} + + \item Find an appropriate type @{text \} and subset @{text A} which + has the desired properties @{text P}, and make a type definition + based on this representation. + + \item Prove that @{text P} holds for @{text \} by lifting @{text P} + from the representation. + + \end{enumerate} + + You can later forget about the representation and work solely in + terms of the abstract properties @{text P}. + + \medskip The following trivial example pulls a three-element type + into existence within the formal logical environment of HOL. *} + +typedef three = "{(True, True), (True, False), (False, True)}" + by blast + +definition "One = Abs_three (True, True)" +definition "Two = Abs_three (True, False)" +definition "Three = Abs_three (False, True)" + +lemma three_distinct: "One \ Two" "One \ Three" "Two \ Three" + by (simp_all add: One_def Two_def Three_def Abs_three_inject) + +lemma three_cases: + fixes x :: three obtains "x = One" | "x = Two" | "x = Three" + by (cases x) (auto simp: One_def Two_def Three_def Abs_three_inject) + +text {* Note that such trivial constructions are better done with + derived specification mechanisms such as @{command datatype}: *} + +datatype three' = One' | Two' | Three' + +text {* This avoids re-doing basic definitions and proofs from the + primitive @{command typedef} above. *} + + + +section {* Functorial structure of types *} + +text {* + \begin{matharray}{rcl} + @{command_def (HOL) "functor"} & : & @{text "local_theory \ proof(prove)"} + \end{matharray} + + @{rail \ + @@{command (HOL) functor} (@{syntax name} ':')? @{syntax term} + \} + + \begin{description} + + \item @{command (HOL) "functor"}~@{text "prefix: m"} makes it possible to + prove and register properties about the functorial structure of type + constructors.
These properties can then be used by other packages + to deal with those type constructors in certain type constructions. + Characteristic theorems are noted in the current local theory. By + default, they are prefixed with the base name of the type + constructor; an explicit prefix can be given alternatively. + + The given term @{text "m"} is considered as a \emph{mapper} for the + corresponding type constructor and must conform to the following + type pattern: + + \begin{matharray}{lll} + @{text "m"} & @{text "::"} & + @{text "\\<^sub>1 \ \ \\<^sub>k \ (\<^vec>\\<^sub>n) t \ (\<^vec>\\<^sub>n) t"} \\ + \end{matharray} + + \noindent where @{text t} is the type constructor, @{text + "\<^vec>\\<^sub>n"} and @{text "\<^vec>\\<^sub>n"} are distinct + type variables free in the local theory and @{text "\\<^sub>1"}, + \ldots, @{text "\\<^sub>k"} is a subsequence of @{text "\\<^sub>1 \ + \\<^sub>1"}, @{text "\\<^sub>1 \ \\<^sub>1"}, \ldots, + @{text "\\<^sub>n \ \\<^sub>n"}, @{text "\\<^sub>n \ + \\<^sub>n"}. + + \end{description} +*} + + +section {* Quotient types *} + +text {* + \begin{matharray}{rcl} + @{command_def (HOL) "quotient_type"} & : & @{text "local_theory \ proof(prove)"}\\ + @{command_def (HOL) "quotient_definition"} & : & @{text "local_theory \ proof(prove)"}\\ + @{command_def (HOL) "print_quotmapsQ3"} & : & @{text "context \"}\\ + @{command_def (HOL) "print_quotientsQ3"} & : & @{text "context \"}\\ + @{command_def (HOL) "print_quotconsts"} & : & @{text "context \"}\\ + @{method_def (HOL) "lifting"} & : & @{text method} \\ + @{method_def (HOL) "lifting_setup"} & : & @{text method} \\ + @{method_def (HOL) "descending"} & : & @{text method} \\ + @{method_def (HOL) "descending_setup"} & : & @{text method} \\ + @{method_def (HOL) "partiality_descending"} & : & @{text method} \\ + @{method_def (HOL) "partiality_descending_setup"} & : & @{text method} \\ + @{method_def (HOL) "regularize"} & : & @{text method} \\ + @{method_def (HOL) "injection"} & : & @{text method} \\ + @{method_def (HOL) "cleaning"} & : & @{text method} \\ + @{attribute_def (HOL) "quot_thm"} & : & @{text attribute} \\ + @{attribute_def (HOL) "quot_lifted"} & : & @{text attribute} \\ + @{attribute_def (HOL) "quot_respect"} & : & @{text attribute} \\ + @{attribute_def (HOL) "quot_preserve"} & : & @{text attribute} \\ + \end{matharray} + + The quotient package defines a new quotient type given a raw type + and a partial equivalence relation. The package also historically + includes automation for transporting definitions and theorems. + However, most of this automation has been superseded by the Lifting and + Transfer packages. The user should consider using these two new packages + for lifting definitions and transporting theorems. + + @{rail \ + @@{command (HOL) quotient_type} (spec) + ; + spec: @{syntax typespec} @{syntax mixfix}? '=' \ + @{syntax type} '/' ('partial' ':')? @{syntax term} \ + (@'morphisms' @{syntax name} @{syntax name})? (@'parametric' @{syntax thmref})? + \} + + @{rail \ + @@{command (HOL) quotient_definition} constdecl? @{syntax thmdecl}? \ + @{syntax term} 'is' @{syntax term} + ; + constdecl: @{syntax name} ('::' @{syntax type})? @{syntax mixfix}? + \} + + @{rail \ + @@{method (HOL) lifting} @{syntax thmrefs}? + ; + @@{method (HOL) lifting_setup} @{syntax thmrefs}? + \} + + \begin{description} + + \item @{command (HOL) "quotient_type"} defines a new quotient type @{text \}.
The + injection from a quotient type to a raw type is called @{text + rep_\} and its inverse @{text abs_\}, unless an explicit @{keyword (HOL) + "morphisms"} specification provides alternative names. @{command + (HOL) "quotient_type"} requires the user to prove that the relation + is an equivalence relation (predicate @{text equivp}), unless the + user specifies explicitly @{text partial} in which case the + obligation is @{text part_equivp}. A quotient defined with @{text + partial} is weaker in the sense that fewer things can be proved + automatically. + + The command internally proves a Quotient theorem and sets up the Lifting + package by the command @{command (HOL) setup_lifting}. Thus the Lifting + and Transfer packages can also be used with quotient types defined by + @{command (HOL) "quotient_type"} without any extra set-up. The parametricity + theorem for the equivalence relation R can be provided as an extra argument + of the command and is passed to the corresponding internal call of @{command (HOL) setup_lifting}. + This theorem allows the Lifting package to generate a stronger transfer rule for equality. + + \end{description} + + Most of the rest of the package has been superseded by the Lifting and Transfer + packages. The user should consider using these two new packages for + lifting definitions and transporting theorems. + + \begin{description} + + \item @{command (HOL) "quotient_definition"} defines a constant on + the quotient type. + + \item @{command (HOL) "print_quotmapsQ3"} prints quotient map + functions. + + \item @{command (HOL) "print_quotientsQ3"} prints quotients. + + \item @{command (HOL) "print_quotconsts"} prints quotient constants. + + \item @{method (HOL) "lifting"} and @{method (HOL) "lifting_setup"} + methods match the current goal with the given raw theorem to be + lifted, producing three new subgoals: regularization, injection and + cleaning subgoals. @{method (HOL) "lifting"} tries to apply the + heuristics for automatically solving these three subgoals and + leaves only the subgoals unsolved by the heuristics to the user, as + opposed to @{method (HOL) "lifting_setup"}, which leaves all three + subgoals unsolved. + + \item @{method (HOL) "descending"} and @{method (HOL) + "descending_setup"} try to guess a raw statement that would lift + to the current subgoal. Such a statement is assumed as a new subgoal + and @{method (HOL) "descending"} continues in the same way as + @{method (HOL) "lifting"} does. @{method (HOL) "descending"} tries + to solve the arising regularization, injection and cleaning + subgoals; the analogous method @{method (HOL) + "descending_setup"} leaves the four subgoals unsolved. + + \item @{method (HOL) "partiality_descending"} finds the regularized + theorem that would lift to the current subgoal, lifts it and + leaves it as a subgoal. This method can be used with partial + equivalence quotients where the non-regularized statements would + not be true. @{method (HOL) "partiality_descending_setup"} leaves + the injection and cleaning subgoals unchanged. + + \item @{method (HOL) "regularize"} applies the regularization + heuristics to the current subgoal. + + \item @{method (HOL) "injection"} applies the injection heuristics + to the current goal using the stored quotient respectfulness + theorems. + + \item @{method (HOL) "cleaning"} applies the injection cleaning + heuristics to the current subgoal using the stored quotient + preservation theorems.
+ + \item @{attribute (HOL) quot_lifted} tries to + automatically transport the theorem to the quotient type. + The attribute uses all the defined quotient types and quotient + constants, often producing undesired results or theorems that + cannot be lifted. + + \item @{attribute (HOL) quot_respect} and @{attribute (HOL) + quot_preserve} attributes declare a theorem as a respectfulness + and preservation theorem respectively. These are stored in the + local theory store and used by the @{method (HOL) "injection"} + and @{method (HOL) "cleaning"} methods respectively. + + \item @{attribute (HOL) quot_thm} declares that a certain theorem + is a quotient extension theorem. Quotient extension theorems + allow for quotienting inside container types. Given a polymorphic + type that serves as a container, a map function defined for this + container using @{command (HOL) "functor"} and a relation + map defined for the container type, the quotient extension + theorem should be @{term "Quotient3 R Abs Rep \ Quotient3 + (rel_map R) (map Abs) (map Rep)"}. Quotient extension theorems + are stored in a database and are used in all the steps of lifting + theorems. + + \end{description} +*} + + +section {* Definition by specification \label{sec:hol-specification} *} + +text {* + \begin{matharray}{rcl} + @{command_def (HOL) "specification"} & : & @{text "theory \ proof(prove)"} \\ + \end{matharray} + + @{rail \ + @@{command (HOL) specification} '(' (decl +) ')' \ + (@{syntax thmdecl}? @{syntax prop} +) + ; + decl: (@{syntax name} ':')? @{syntax term} ('(' @'overloaded' ')')? + \} + + \begin{description} + + \item @{command (HOL) "specification"}~@{text "decls \"} sets up a + goal stating the existence of terms with the properties specified to + hold for the constants given in @{text decls}. After finishing the + proof, the theory will be augmented with definitions for the given + constants, as well as with theorems stating the properties for these + constants. + + @{text decl} declares a constant to be defined by the + specification given. The definition for the constant @{text c} is + bound to the name @{text c_def} unless a theorem name is given in + the declaration. Overloaded constants should be declared as such. + + \end{description} +*} + + +section {* Adhoc overloading of constants *} + +text {* + \begin{tabular}{rcll} + @{command_def "adhoc_overloading"} & : & @{text "local_theory \ local_theory"} \\ + @{command_def "no_adhoc_overloading"} & : & @{text "local_theory \ local_theory"} \\ + @{attribute_def "show_variants"} & : & @{text "attribute"} & default @{text false} \\ + \end{tabular} + + \medskip + + Adhoc overloading makes it possible to overload a constant depending on + its type. Typically this involves the introduction of an + uninterpreted constant (used for input and output) and the addition + of some variants (used internally). For examples see + @{file "~~/src/HOL/ex/Adhoc_Overloading_Examples.thy"} and + @{file "~~/src/HOL/Library/Monad_Syntax.thy"}. + + @{rail \ + (@@{command adhoc_overloading} | @@{command no_adhoc_overloading}) + (@{syntax nameref} (@{syntax term} + ) + @'and') + \} + + \begin{description} + + \item @{command "adhoc_overloading"}~@{text "c v\<^sub>1 ... v\<^sub>n"} + associates variants with an existing constant. + + \item @{command "no_adhoc_overloading"} is similar to + @{command "adhoc_overloading"}, but removes the specified variants + from the present context. + + \item @{attribute "show_variants"} controls printing of variants + of overloaded constants.
If enabled, the internally used variants
+ are printed instead of their respective overloaded constants. This
+ is occasionally useful to check whether the system agrees with a
+ user's expectations about derived variants.
+
+ \end{description}
+*}
+
+chapter {* Proof tools *}
+
+section {* Adhoc tuples *}
+
+text {*
+ \begin{matharray}{rcl}
+ @{attribute_def (HOL) split_format}@{text "\<^sup>*"} & : & @{text attribute} \\
+ \end{matharray}
+
+ @{rail \
+ @@{attribute (HOL) split_format} ('(' 'complete' ')')?
+ \}
+
+ \begin{description}
+
+ \item @{attribute (HOL) split_format}\ @{text "(complete)"} causes
+ arguments in function applications to be represented canonically
+ according to their tuple type structure.
+
+ Note that this operation tends to invent funny names for the new
+ local parameters introduced.
+
+ \end{description}
+*}
+
+
+section {* Transfer package *}
+
+text {*
+ \begin{matharray}{rcl}
+ @{method_def (HOL) "transfer"} & : & @{text method} \\
+ @{method_def (HOL) "transfer'"} & : & @{text method} \\
+ @{method_def (HOL) "transfer_prover"} & : & @{text method} \\
+ @{attribute_def (HOL) "Transfer.transferred"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "untransferred"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "transfer_rule"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "transfer_domain_rule"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "relator_eq"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "relator_domain"} & : & @{text attribute} \\
+ \end{matharray}
+
+ \begin{description}
+
+ \item @{method (HOL) "transfer"} replaces the current subgoal
+ with a logically equivalent one that uses different types and
+ constants. The replacement of types and constants is guided by the
+ database of transfer rules. Goals are generalized over all free
+ variables by default; this is necessary for variables whose types
+ change, but can be overridden for specific variables with e.g.
+ @{text "transfer fixing: x y z"}.
+
+ \item @{method (HOL) "transfer'"} is a variant of @{method (HOL)
+ transfer} that allows replacing a subgoal with one that is
+ logically stronger (rather than equivalent). For example, a
+ subgoal involving equality on a quotient type could be replaced
+ with a subgoal involving equality (instead of the corresponding
+ equivalence relation) on the underlying raw type.
+
+ \item @{method (HOL) "transfer_prover"} assists with proving
+ a transfer rule for a new constant, provided the constant is
+ defined in terms of other constants that already have transfer
+ rules. It should be applied after unfolding the constant
+ definitions.
+
+ \item @{attribute (HOL) "untransferred"} proves the same logically
+ equivalent theorem that @{method (HOL) "transfer"} proves
+ internally.
+
+ \item @{attribute (HOL) Transfer.transferred} works in the direction
+ opposite to @{method (HOL) "transfer'"}. E.g., given the transfer
+ relation @{text "ZN x n \ (x = int n)"}, corresponding transfer rules and the theorem
+ @{text "\x::int \ {0..}. x < x + 1"}, the attribute would prove
+ @{text "\n::nat. n < n + 1"}. The attribute is still in an experimental
+ phase of development.
+
+ \item @{attribute (HOL) "transfer_rule"} maintains a
+ collection of transfer rules, which relate constants at two
+ different types. Typical transfer rules may relate different type
+ instances of the same polymorphic constant, or they may relate an
+ operation on a raw type to a corresponding operation on an
+ abstract type (quotient or subtype).
For example:
+
+ @{text "((A ===> B) ===> list_all2 A ===> list_all2 B) map map"}\\
+ @{text "(cr_int ===> cr_int ===> cr_int) (\(x,y) (u,v). (x+u, y+v)) plus"}
+
+ Lemmas involving predicates on relations can also be registered
+ using the same attribute. For example:
+
+ @{text "bi_unique A \ (list_all2 A ===> op =) distinct distinct"}\\
+ @{text "\bi_unique A; bi_unique B\ \ bi_unique (rel_prod A B)"}
+
+ \item @{attribute (HOL) "transfer_domain_rule"} maintains a collection
+ of rules, which specify the domain of a transfer relation by a predicate.
+ E.g., given the transfer relation @{text "ZN x n \ (x = int n)"},
+ one can register the following transfer domain rule:
+ @{text "Domainp ZN = (\x. x \ 0)"}. The rules allow the package to produce
+ more readable transferred goals, e.g., when quantifiers are transferred.
+
+ \item @{attribute (HOL) relator_eq} collects identity laws
+ for relators of various type constructors, e.g. @{text "list_all2
+ (op =) = (op =)"}. The @{method (HOL) transfer} method uses these
+ lemmas to infer transfer rules for non-polymorphic constants on
+ the fly.
+
+ \item @{attribute (HOL) relator_domain} collects rules
+ describing the domains of relators by predicators. E.g., @{text "Domainp A = P \
+ Domainp (list_all2 A) = (list_all P)"}. This allows the package to lift transfer
+ domain rules through type constructors.
+
+ \end{description}
+
+ Theoretical background can be found in \cite{Huffman-Kuncar:2013:lifting_transfer}.
+*}
+
+
+section {* Lifting package *}
+
+text {*
+ The Lifting package allows users to lift terms of the raw type to the abstract type, which is
+ a necessary step in building a library for an abstract type. Lifting defines a new constant
+ by combining coercion functions (Abs and Rep) with the raw term. It also proves an appropriate
+ transfer rule for the Transfer package and, if possible, an equation for the code generator.
+
+ The Lifting package provides two main commands: @{command (HOL) "setup_lifting"} for initializing
+ the package to work with a new type, and @{command (HOL) "lift_definition"} for lifting constants.
+ The Lifting package works with all four kinds of type abstraction: type copies, subtypes,
+ total quotients and partial quotients.
+
+ Theoretical background can be found in \cite{Huffman-Kuncar:2013:lifting_transfer}.
+
+ \begin{matharray}{rcl}
+ @{command_def (HOL) "setup_lifting"} & : & @{text "local_theory \ local_theory"}\\
+ @{command_def (HOL) "lift_definition"} & : & @{text "local_theory \ proof(prove)"}\\
+ @{command_def (HOL) "lifting_forget"} & : & @{text "local_theory \ local_theory"}\\
+ @{command_def (HOL) "lifting_update"} & : & @{text "local_theory \ local_theory"}\\
+ @{command_def (HOL) "print_quot_maps"} & : & @{text "context \"}\\
+ @{command_def (HOL) "print_quotients"} & : & @{text "context \"}\\
+ @{attribute_def (HOL) "quot_map"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "invariant_commute"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "reflexivity_rule"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "relator_mono"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "relator_distr"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "quot_del"} & : & @{text attribute} \\
+ @{attribute_def (HOL) "lifting_restore"} & : & @{text attribute} \\
+ \end{matharray}
+
+ @{rail \
+ @@{command (HOL) setup_lifting} ('(' 'no_code' ')')? \
+ @{syntax thmref} @{syntax thmref}?
(@'parametric' @{syntax thmref})?;
+ \}
+
+ @{rail \
+ @@{command (HOL) lift_definition} @{syntax name} '::' @{syntax type} @{syntax mixfix}? \
+ 'is' @{syntax term} (@'parametric' @{syntax thmref})?;
+ \}
+
+ @{rail \
+ @@{command (HOL) lifting_forget} @{syntax nameref};
+ \}
+
+ @{rail \
+ @@{command (HOL) lifting_update} @{syntax nameref};
+ \}
+
+ @{rail \
+ @@{attribute (HOL) lifting_restore} @{syntax thmref} (@{syntax thmref} @{syntax thmref})?;
+ \}
+
+ \begin{description}
+
+ \item @{command (HOL) "setup_lifting"} sets up the Lifting package
+ to work with a user-defined type.
+ The command supports two modes. The first one is a low-level mode,
+ in which the user must provide a
+ quotient theorem @{text "Quotient R Abs Rep T"} as the first
+ argument of @{command (HOL) "setup_lifting"}. The
+ package configures a transfer rule for equality and domain transfer
+ rules, and sets up the @{command (HOL) "lift_definition"}
+ command to work with the abstract type. An optional theorem @{text "reflp R"}, which certifies that
+ the equivalence relation R is total,
+ can be provided as the second argument. This allows the package to generate stronger transfer
+ rules. Finally, the parametricity theorem for R can be provided as the third argument,
+ which allows the package to generate a stronger transfer rule for equality.
+
+ Users generally will not prove the @{text Quotient} theorem manually for
+ new types, as special commands exist to automate the process.
+
+ When a new subtype is defined by @{command (HOL) typedef}, @{command (HOL) "setup_lifting"}
+ can be used in its
+ second mode, where only the type_definition theorem @{text "type_definition Rep Abs A"}
+ is used as an argument of the command. The command internally proves the corresponding
+ Quotient theorem and proceeds as in the first mode.
+
+ For quotients, the command @{command (HOL) quotient_type} can be used. The command defines
+ a new quotient type and, similarly to the previous case, the corresponding Quotient theorem is proved
+ and registered by @{command (HOL) setup_lifting}.
+
+ The command @{command (HOL) "setup_lifting"} also sets up the code generator
+ for the new type. Later on, when a new constant is defined by @{command (HOL) "lift_definition"},
+ the Lifting package proves and registers a code equation (if there is one) for the new constant.
+ If the option @{text "no_code"} is specified, the Lifting package does not set up the code
+ generator and as a consequence no code equations involving an abstract type are registered
+ by @{command (HOL) "lift_definition"}.
+
+ \item @{command (HOL) "lift_definition"} @{text "f :: \"} @{keyword (HOL) "is"} @{text t}
+ defines a new function @{text f} with an abstract type @{text \}
+ in terms of a corresponding operation @{text t} on a
+ representation type. More formally, if @{text "t :: \"}, then
+ the command builds a term @{text "F"} as a corresponding combination of abstraction
+ and representation functions such that @{text "F :: \ \ \" } and
+ defines @{text f} as @{text "f \ F t"}.
+ The term @{text t} does not necessarily have to be a constant; it can be any term.
+
+ The command opens a proof environment and the user must discharge
+ a respectfulness proof obligation. For a type copy, i.e., a typedef with @{text
+ UNIV}, the obligation is discharged automatically. The proof goal is
+ presented in a user-friendly, readable form.
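+
+ For example, assuming a hypothetical construction of the integers
+ as a quotient of pairs of natural numbers, addition could be
+ lifted as follows (a schematic sketch; the concrete syntax
+ additionally requires quotation marks around the type and the
+ term):
+
+ @{text [display] "lift_definition plus_int :: int \ int \ int is \(x,y) (u,v). (x+u, y+v)"}
+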
A respectfulness
+ theorem in the standard format @{text f.rsp} and a transfer rule
+ @{text f.transfer} for the Transfer package are generated by the
+ package.
+
+ The user can specify a parametricity theorem for @{text t} after the keyword
+ @{keyword "parametric"}, which allows the command
+ to generate a parametric transfer rule for @{text f}.
+
+ For each constant defined through trivial quotients (type copies or
+ subtypes), @{text f.rep_eq} is generated. The equation is a code certificate
+ that defines @{text f} using the representation function.
+
+ For each constant, @{text f.abs_eq} is generated. The equation is unconditional
+ for total quotients. The equation defines @{text f} using
+ the abstraction function.
+
+ Integration with [@{attribute code} abstract]: For subtypes (e.g.,
+ corresponding to a datatype invariant, such as dlist), @{command
+ (HOL) "lift_definition"} uses a code certificate theorem
+ @{text f.rep_eq} as a code equation.
+
+ Integration with [@{attribute code} equation]: For total quotients, @{command
+ (HOL) "lift_definition"} uses @{text f.abs_eq} as a code equation.
+
+ \item @{command (HOL) lifting_forget} and @{command (HOL) lifting_update}
+ These two commands serve to store and delete the setup of
+ the Lifting package and the corresponding transfer rules defined by this package.
+ This is useful for hiding the construction details of an abstract type
+ once the construction is finished, while still allowing additions to the
+ construction later when necessary.
+
+ Whenever the Lifting package is set up with a new abstract type @{text "\"} by
+ @{command (HOL) "lift_definition"}, the package defines a new bundle
+ that is called @{text "\.lifting"}. This bundle already includes the setup for the Lifting package.
+ The new transfer rules
+ introduced by @{command (HOL) "lift_definition"} can be stored in the bundle by
+ the command @{command (HOL) "lifting_update"} @{text "\.lifting"}.
+
+ The command @{command (HOL) "lifting_forget"} @{text "\.lifting"} deletes the setup of the Lifting
+ package
+ for @{text \} and deletes all the transfer rules that were introduced
+ by @{command (HOL) "lift_definition"} using @{text \} as an abstract type.
+
+ The setup stored in a bundle can be reintroduced by the Isar commands for including a bundle
+ (@{command "include"}, @{keyword "includes"} and @{command "including"}).
+
+ \item @{command (HOL) "print_quot_maps"} prints stored quotient map
+ theorems.
+
+ \item @{command (HOL) "print_quotients"} prints stored quotient
+ theorems.
+
+ \item @{attribute (HOL) quot_map} registers a quotient map
+ theorem. E.g., @{text "Quotient R Abs Rep T \
+ Quotient (list_all2 R) (map Abs) (map Rep) (list_all2 T)"}.
+ For examples see @{file
+ "~~/src/HOL/List.thy"} or @{file "~~/src/HOL/Lifting.thy"} or the Lifting_*.thy files
+ in the same directory.
+
+ \item @{attribute (HOL) invariant_commute} registers a theorem that
+ shows a relationship between the constant @{text
+ Lifting.invariant} (used for internal encoding of proper subtypes)
+ and a relator. Such theorems allow the package to hide @{text
+ Lifting.invariant} from the user in the user-readable form of a
+ respectfulness theorem. For examples see @{file
+ "~~/src/HOL/List.thy"} or the Lifting_*.thy files in the same directory.
+
+ \item @{attribute (HOL) reflexivity_rule} registers a theorem that shows
+ that a relator respects left-totality and left-uniqueness.
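+ Such a rule has roughly the following (schematic) form:
+
+ @{text [display] "left_total R \ left_total (list_all2 R)"}
+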
For examples
+ see @{file "~~/src/HOL/List.thy"} or @{file "~~/src/HOL/Lifting.thy"} or the Lifting_*.thy files
+ in the same directory.
+ The property is used in a reflexivity prover, which is used for discharging respectfulness
+ theorems for type copies and also for discharging assumptions of abstraction function equations.
+
+ \item @{attribute (HOL) "relator_mono"} registers a property describing the monotonicity of a relator.
+ E.g., @{text "A \ B \ list_all2 A \ list_all2 B"}. For examples
+ see @{file "~~/src/HOL/List.thy"} or @{file "~~/src/HOL/Lifting.thy"}
+ or the Lifting_*.thy files in the same directory.
+ This property is needed for proving a stronger transfer rule in @{command (HOL) "lift_definition"}
+ when a parametricity theorem for the raw term is specified.
+
+ \item @{attribute (HOL) "relator_distr"} registers a property describing the distributivity
+ of relation composition over a relator. E.g.,
+ @{text "list_all2 R \\ list_all2 S = list_all2 (R \\ S)"}.
+ This property is needed for proving a stronger transfer rule in @{command (HOL) "lift_definition"}
+ when a parametricity theorem for the raw term is specified.
+ When this equality does not hold unconditionally (e.g., for the function type), the user can specify
+ each direction separately, and can also register multiple theorems with different sets of assumptions.
+ This attribute can be used only after the monotonicity property has been registered by
+ @{attribute (HOL) "relator_mono"}. For examples
+ see @{file "~~/src/HOL/List.thy"} or @{file "~~/src/HOL/Lifting.thy"}
+ or the Lifting_*.thy files in the same directory.
+
+ \item @{attribute (HOL) quot_del} deletes the corresponding Quotient theorem
+ from the Lifting infrastructure and thus de-registers the corresponding quotient.
+ This effectively prevents @{command (HOL) lift_definition} from
+ doing any lifting for the corresponding type. This attribute is intended for low-level
+ manipulation of the Lifting package setup; @{command (HOL) lifting_forget} is
+ preferred for normal usage.
+
+ \item @{attribute (HOL) lifting_restore} @{text "Quotient_thm pcr_def pcr_cr_eq_thm"}
+ registers the Quotient theorem @{text Quotient_thm} in the Lifting infrastructure
+ and thus sets up lifting for an abstract type @{text \} (that is defined by @{text Quotient_thm}).
+ Optional theorems @{text pcr_def} and @{text pcr_cr_eq_thm} can be specified to register
+ the parametrized
+ correspondence relation for @{text \}. E.g., for @{text "'a dlist"}, @{text pcr_def} is
+ @{text "pcr_dlist A \ list_all2 A \\ cr_dlist"} and @{text pcr_cr_eq_thm} is
+ @{text "pcr_dlist op= = op="}.
+ This attribute is intended for low-level
+ manipulation of the Lifting package setup; using the bundle @{text \.lifting}
+ together with the commands @{command (HOL) lifting_forget} and @{command (HOL) lifting_update} is
+ preferred for normal usage.
+
+ \end{description}
+*}
+
+
+section {* Coercive subtyping *}
+
+text {*
+ \begin{matharray}{rcl}
+ @{attribute_def (HOL) coercion} & : & @{text attribute} \\
+ @{attribute_def (HOL) coercion_enabled} & : & @{text attribute} \\
+ @{attribute_def (HOL) coercion_map} & : & @{text attribute} \\
+ \end{matharray}
+
+ Coercive subtyping allows the user to omit explicit type
+ conversions, also called \emph{coercions}. Type inference will add
+ them as necessary when parsing a term. See
+ \cite{traytel-berghofer-nipkow-2011} for details.
+
+ @{rail \
+ @@{attribute (HOL) coercion} (@{syntax term})?
+
+ ;
+ @@{attribute (HOL) coercion_map} (@{syntax term})?
+ \}
+
+ \begin{description}
+
+ \item @{attribute (HOL) "coercion"}~@{text "f"} registers a new
+ coercion function @{text "f :: \\<^sub>1 \ \\<^sub>2"} where @{text "\\<^sub>1"} and
+ @{text "\\<^sub>2"} are type constructors without arguments. Coercions are
+ composed by the inference algorithm if needed. Note that the type
+ inference algorithm is complete only if the registered coercions
+ form a lattice.
+
+ \item @{attribute (HOL) "coercion_map"}~@{text "map"} registers a
+ new map function to lift coercions through type constructors. The
+ function @{text "map"} must conform to the following type pattern
+
+ \begin{matharray}{lll}
+ @{text "map"} & @{text "::"} &
+ @{text "f\<^sub>1 \ \ \ f\<^sub>n \ (\\<^sub>1, \, \\<^sub>n) t \ (\\<^sub>1, \, \\<^sub>n) t"} \\
+ \end{matharray}
+
+ where @{text "t"} is a type constructor and @{text "f\<^sub>i"} is of type
+ @{text "\\<^sub>i \ \\<^sub>i"} or @{text "\\<^sub>i \ \\<^sub>i"}. Registering a map function
+ overwrites any existing map function for this particular type
+ constructor.
+
+ \item @{attribute (HOL) "coercion_enabled"} enables the coercion
+ inference algorithm.
+
+ \end{description}
+*}
+
+
+section {* Arithmetic proof support *}
+
+text {*
+ \begin{matharray}{rcl}
+ @{method_def (HOL) arith} & : & @{text method} \\
+ @{attribute_def (HOL) arith} & : & @{text attribute} \\
+ @{attribute_def (HOL) arith_split} & : & @{text attribute} \\
+ \end{matharray}
+
+ \begin{description}
+
+ \item @{method (HOL) arith} decides linear arithmetic problems (on
+ types @{text nat}, @{text int}, @{text real}). Any current facts
+ are inserted into the goal before running the procedure.
+
+ \item @{attribute (HOL) arith} declares facts that are supplied to
+ the arithmetic provers implicitly.
+
+ \item @{attribute (HOL) arith_split} declares case split
+ rules to be expanded before @{method (HOL) arith} is invoked.
+
+ \end{description}
+
+ Note that a simpler (but faster) arithmetic prover is already
+ invoked by the Simplifier.
+*}
+
+
+section {* Intuitionistic proof search *}
+
+text {*
+ \begin{matharray}{rcl}
+ @{method_def (HOL) iprover} & : & @{text method} \\
+ \end{matharray}
+
+ @{rail \
+ @@{method (HOL) iprover} (@{syntax rulemod} *)
+ \}
+
+ \begin{description}
+
+ \item @{method (HOL) iprover} performs intuitionistic proof search,
+ depending on specifically declared rules from the context, or given
+ as explicit arguments. Chained facts are inserted into the goal
+ before commencing proof search.
+
+ Rules need to be classified as @{attribute (Pure) intro},
+ @{attribute (Pure) elim}, or @{attribute (Pure) dest}; here the
+ ``@{text "!"}'' indicator refers to ``safe'' rules, which may be
+ applied aggressively (without considering back-tracking later).
+ Rules declared with ``@{text "?"}'' are ignored in proof search (the
+ single-step @{method (Pure) rule} method still observes these). An
+ explicit weight annotation may be given as well; otherwise the
+ number of rule premises will be taken into account here.
+
+ \end{description}
+*}
+
+
+section {* Model Elimination and Resolution *}
+
+text {*
+ \begin{matharray}{rcl}
+ @{method_def (HOL) "meson"} & : & @{text method} \\
+ @{method_def (HOL) "metis"} & : & @{text method} \\
+ \end{matharray}
+
+ @{rail \
+ @@{method (HOL) meson} @{syntax thmrefs}?
+ ;
+ @@{method (HOL) metis}
+ ('(' ('partial_types' | 'full_types' | 'no_types' | @{syntax name}) ')')?
+ @{syntax thmrefs}?
+ \}
+
+ \begin{description}
+
+ \item @{method (HOL) meson} implements Loveland's model elimination
+ procedure \cite{loveland-78}. See @{file
+ "~~/src/HOL/ex/Meson_Test.thy"} for examples.
+
+ \item @{method (HOL) metis} combines ordered resolution and ordered
+ paramodulation to find first-order (or mildly higher-order) proofs.
+ The first optional argument specifies a type encoding; see the
+ Sledgehammer manual \cite{isabelle-sledgehammer} for details. The
+ directory @{file "~~/src/HOL/Metis_Examples"} contains several small
+ theories developed to a large extent using @{method (HOL) metis}.
+
+ \end{description}
+*}
+
+
+section {* Algebraic reasoning via Gr\"obner bases *}
+
+text {*
+ \begin{matharray}{rcl}
+ @{method_def (HOL) "algebra"} & : & @{text method} \\
+ @{attribute_def (HOL) algebra} & : & @{text attribute} \\
+ \end{matharray}
+
+ @{rail \
+ @@{method (HOL) algebra}
+ ('add' ':' @{syntax thmrefs})?
+ ('del' ':' @{syntax thmrefs})?
+ ;
+ @@{attribute (HOL) algebra} (() | 'add' | 'del')
+ \}
+
+ \begin{description}
+
+ \item @{method (HOL) algebra} performs algebraic reasoning via
+ Gr\"obner bases, see also \cite{Chaieb-Wenzel:2007} and
+ \cite[\S3.2]{Chaieb-thesis}. The method deals with two main
+ classes of problems:
+
+ \begin{enumerate}
+
+ \item Universal problems over multivariate polynomials in a
+ (semi)-ring/field/idom; the capabilities of the method are augmented
+ according to properties of these structures. For this problem class
+ the method is only complete for algebraically closed fields, since
+ the underlying method is based on Hilbert's Nullstellensatz, where
+ the equivalence only holds for algebraically closed fields.
+
+ The problems can contain equations @{text "p = 0"} or inequations
+ @{text "q \ 0"} anywhere within a universal problem statement.
+
+ \item All-exists problems of the following restricted (but useful)
+ form:
+
+ @{text [display] "\x\<^sub>1 \ x\<^sub>n.
+ e\<^sub>1(x\<^sub>1, \, x\<^sub>n) = 0 \ \ \ e\<^sub>m(x\<^sub>1, \, x\<^sub>n) = 0 \
+ (\y\<^sub>1 \ y\<^sub>k.
+ p\<^sub>1\<^sub>1(x\<^sub>1, \ ,x\<^sub>n) * y\<^sub>1 + \ + p\<^sub>1\<^sub>k(x\<^sub>1, \, x\<^sub>n) * y\<^sub>k = 0 \
+ \ \
+ p\<^sub>t\<^sub>1(x\<^sub>1, \, x\<^sub>n) * y\<^sub>1 + \ + p\<^sub>t\<^sub>k(x\<^sub>1, \, x\<^sub>n) * y\<^sub>k = 0)"}
+
+ Here @{text "e\<^sub>1, \, e\<^sub>m"} and the @{text "p\<^sub>i\<^sub>j"} are multivariate
+ polynomials only in the variables mentioned as arguments.
+
+ \end{enumerate}
+
+ The proof method is preceded by a simplification step, which may be
+ modified by using the form @{text "(algebra add: ths\<^sub>1 del: ths\<^sub>2)"}.
+ This acts like declarations for the Simplifier
+ (\secref{sec:simplifier}) on a private simpset for this tool.
+
+ \item @{attribute algebra} (as attribute) manages the default
+ collection of pre-simplification rules of the above proof method.
+
+ \end{description}
+*}
+
+
+subsubsection {* Example *}
+
+text {* The subsequent example is from geometry: collinearity is
+ invariant under rotation. The assumption @{text "c\<^sup>2 + s\<^sup>2 = 1"}
+ expresses that @{text "(c, s)"} is a point on the unit circle,
+ i.e., it represents a rotation.
*}
+
+type_synonym point = "int \ int"
+
+fun collinear :: "point \ point \ point \ bool" where
+ "collinear (Ax, Ay) (Bx, By) (Cx, Cy) \
+ (Ax - Bx) * (By - Cy) = (Ay - By) * (Bx - Cx)"
+
+lemma collinear_inv_rotation:
+ assumes "collinear (Ax, Ay) (Bx, By) (Cx, Cy)" and "c\<^sup>2 + s\<^sup>2 = 1"
+ shows "collinear (Ax * c - Ay * s, Ay * c + Ax * s)
+ (Bx * c - By * s, By * c + Bx * s) (Cx * c - Cy * s, Cy * c + Cx * s)"
+ using assms by (algebra add: collinear.simps)
+
+text {*
+ See also @{file "~~/src/HOL/ex/Groebner_Examples.thy"}.
+*}
+
+
+section {* Coherent Logic *}
+
+text {*
+ \begin{matharray}{rcl}
+ @{method_def (HOL) "coherent"} & : & @{text method} \\
+ \end{matharray}
+
+ @{rail \
+ @@{method (HOL) coherent} @{syntax thmrefs}?
+ \}
+
+ \begin{description}
+
+ \item @{method (HOL) coherent} solves problems of \emph{Coherent
+ Logic} \cite{Bezem-Coquand:2005}, which covers applications in
+ confluence theory, lattice theory and projective geometry. See
+ @{file "~~/src/HOL/ex/Coherent.thy"} for some examples.
+
+ \end{description}
+*}
+
+
+section {* Proving propositions *}
+
+text {*
+ In addition to the standard proof methods, a number of diagnostic
+ tools search for proofs and provide an Isar proof snippet on success.
+ These tools are available via the following commands.
+
+ \begin{matharray}{rcl}
+ @{command_def (HOL) "solve_direct"}@{text "\<^sup>*"} & : & @{text "proof \"} \\
+ @{command_def (HOL) "try"}@{text "\<^sup>*"} & : & @{text "proof \"} \\
+ @{command_def (HOL) "try0"}@{text "\<^sup>*"} & : & @{text "proof \"} \\
+ @{command_def (HOL) "sledgehammer"}@{text "\<^sup>*"} & : & @{text "proof \"} \\
+ @{command_def (HOL) "sledgehammer_params"} & : & @{text "theory \ theory"}
+ \end{matharray}
+
+ @{rail \
+ @@{command (HOL) try}
+ ;
+
+ @@{command (HOL) try0} ( ( ( 'simp' | 'intro' | 'elim' | 'dest' ) ':' @{syntax thmrefs} ) + ) ?
+ @{syntax nat}?
+ ;
+
+ @@{command (HOL) sledgehammer} ( '[' args ']' )? facts? @{syntax nat}?
+ ;
+
+ @@{command (HOL) sledgehammer_params} ( ( '[' args ']' ) ? )
+ ;
+ args: ( @{syntax name} '=' value + ',' )
+ ;
+ facts: '(' ( ( ( ( 'add' | 'del' ) ':' ) ? @{syntax thmrefs} ) + ) ? ')'
+ \} % FIXME check args "value"
+
+ \begin{description}
+
+ \item @{command (HOL) "solve_direct"} checks whether the current
+ subgoals can be solved directly by an existing theorem. Duplicate
+ lemmas can be detected in this way.
+
+ \item @{command (HOL) "try0"} attempts to prove a subgoal
+ using a combination of standard proof methods (@{method auto},
+ @{method simp}, @{method blast}, etc.). Additional facts supplied
+ via @{text "simp:"}, @{text "intro:"}, @{text "elim:"}, and @{text
+ "dest:"} are passed to the appropriate proof methods.
+
+ \item @{command (HOL) "try"} attempts to prove or disprove a subgoal
+ using a combination of provers and disprovers (@{command (HOL)
+ "solve_direct"}, @{command (HOL) "quickcheck"}, @{command (HOL)
+ "try0"}, @{command (HOL) "sledgehammer"}, @{command (HOL)
+ "nitpick"}).
+
+ \item @{command (HOL) "sledgehammer"} attempts to prove a subgoal
+ using external automatic provers (resolution provers and SMT
+ solvers). See the Sledgehammer manual \cite{isabelle-sledgehammer}
+ for details.
+
+ \item @{command (HOL) "sledgehammer_params"} changes @{command (HOL)
+ "sledgehammer"} configuration options persistently.
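+
+ For example, the following declaration persistently raises the
+ default time limit (the value is illustrative):
+
+ @{text [display] "sledgehammer_params [timeout = 60]"}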
+
+ \end{description}
+*}
+
+
+section {* Checking and refuting propositions *}
+
+text {*
+ Identifying incorrect propositions usually involves evaluation of
+ particular assignments and systematic counterexample search. This
+ is supported by the following commands.
+
+ \begin{matharray}{rcl}
+ @{command_def (HOL) "value"}@{text "\<^sup>*"} & : & @{text "context \"} \\
+ @{command_def (HOL) "values"}@{text "\<^sup>*"} & : & @{text "context \"} \\
+ @{command_def (HOL) "quickcheck"}@{text "\<^sup>*"} & : & @{text "proof \"} \\
+ @{command_def (HOL) "nitpick"}@{text "\<^sup>*"} & : & @{text "proof \"} \\
+ @{command_def (HOL) "quickcheck_params"} & : & @{text "theory \ theory"} \\
+ @{command_def (HOL) "nitpick_params"} & : & @{text "theory \ theory"} \\
+ @{command_def (HOL) "quickcheck_generator"} & : & @{text "theory \ theory"} \\
+ @{command_def (HOL) "find_unused_assms"} & : & @{text "context \"}
+ \end{matharray}
+
+ @{rail \
+ @@{command (HOL) value} ( '[' @{syntax name} ']' )? modes? @{syntax term}
+ ;
+
+ @@{command (HOL) values} modes? @{syntax nat}? @{syntax term}
+ ;
+
+ (@@{command (HOL) quickcheck} | @@{command (HOL) nitpick})
+ ( '[' args ']' )? @{syntax nat}?
+ ;
+
+ (@@{command (HOL) quickcheck_params} |
+ @@{command (HOL) nitpick_params}) ( '[' args ']' )?
+ ;
+
+ @@{command (HOL) quickcheck_generator} @{syntax nameref} \
+ 'operations:' ( @{syntax term} +)
+ ;
+
+ @@{command (HOL) find_unused_assms} @{syntax name}?
+ ;
+ modes: '(' (@{syntax name} +) ')'
+ ;
+ args: ( @{syntax name} '=' value + ',' )
+ \} % FIXME check "value"
+
+ \begin{description}
+
+ \item @{command (HOL) "value"}~@{text t} evaluates and prints a
+ term; optionally @{text modes} can be specified, which are appended
+ to the current print mode; see \secref{sec:print-modes}.
+ Internally, the evaluation is performed by registered evaluators,
+ which are invoked sequentially until a result is returned.
+ Alternatively a specific evaluator can be selected using square
+ brackets; typical evaluators use the current set of code equations
+ to normalize and include @{text simp} for fully symbolic evaluation
+ using the simplifier, @{text nbe} for \emph{normalization by
+ evaluation} and @{text code} for code generation in SML.
+
+ \item @{command (HOL) "values"}~@{text t} enumerates a set
+ comprehension by evaluation and prints its values up to the given
+ number of solutions; optionally @{text modes} can be specified,
+ which are appended to the current print mode; see
+ \secref{sec:print-modes}.
+
+ \item @{command (HOL) "quickcheck"} tests the current goal for
+ counterexamples using a series of assignments for its free
+ variables; by default the first subgoal is tested, another can be
+ selected explicitly using an optional goal index. Assignments can
+ be chosen by exhausting the search space up to a given size, by using
+ a fixed number of random assignments in the search space, or by
+ exploring the search space symbolically using narrowing. By default,
+ quickcheck uses exhaustive testing. A number of configuration
+ options are supported for @{command (HOL) "quickcheck"}, notably:
+
+ \begin{description}
+
+ \item[@{text tester}] specifies which testing approach to apply.
+ There are three testers, @{text exhaustive}, @{text random}, and
+ @{text narrowing}. An unknown configuration option is treated as
+ an argument to tester, making @{text "tester ="} optional. When
+ multiple testers are given, these are applied in parallel.
If no
+ tester is specified, quickcheck uses the testers that are set
+ active, i.e., those for which the configuration options @{attribute
+ quickcheck_exhaustive_active}, @{attribute
+ quickcheck_random_active}, @{attribute
+ quickcheck_narrowing_active} are set to true.
+
+ \item[@{text size}] specifies the maximum size of the search space
+ for assignment values.
+
+ \item[@{text genuine_only}] sets quickcheck to return only genuine
+ counterexamples, but no potentially spurious counterexamples due
+ to underspecified functions.
+
+ \item[@{text abort_potential}] sets quickcheck to abort once it
+ has found a potentially spurious counterexample and not to continue
+ searching for a further genuine counterexample.
+ For this option to be effective, the @{text genuine_only} option
+ must be set to false.
+
+ \item[@{text eval}] takes a term or a list of terms and evaluates
+ these terms under the variable assignment found by quickcheck.
+ This option is currently only supported by the default
+ (exhaustive) tester.
+
+ \item[@{text iterations}] sets how many sets of assignments are
+ generated for each particular size.
+
+ \item[@{text no_assms}] specifies whether assumptions in
+ structured proofs should be ignored.
+
+ \item[@{text locale}] specifies how to process conjectures in
+ a locale context, i.e., they can be interpreted or expanded.
+ The option is a whitespace-separated list of the two words
+ @{text interpret} and @{text expand}. The list determines the
+ order in which they are employed. The default setting is to first use
+ interpretations and then test the expanded conjecture.
+ The option is only provided as attribute declaration, but not
+ as parameter to the command.
+
+ \item[@{text timeout}] sets the time limit in seconds.
+
+ \item[@{text default_type}] sets the type(s) generally used to
+ instantiate type variables.
+
+ \item[@{text report}] if set, quickcheck reports how many tests
+ fulfilled the preconditions.
+
+ \item[@{text use_subtype}] if set, quickcheck automatically lifts
+ conjectures to registered subtypes if possible, and tests the
+ lifted conjecture.
+
+ \item[@{text quiet}] if set, quickcheck does not output anything
+ while testing.
+
+ \item[@{text verbose}] if set, quickcheck informs about the current
+ size and cardinality while testing.
+
+ \item[@{text expect}] can be used to check if the user's
+ expectation was met (@{text no_expectation}, @{text
+ no_counterexample}, or @{text counterexample}).
+
+ \end{description}
+
+ These options can be given within square brackets.
+
+ Using the following type classes, the testers generate values and convert
+ them back into Isabelle terms for displaying counterexamples.
+ \begin{description}
+ \item[@{text exhaustive}] The parameters of the type classes @{class exhaustive}
+ and @{class full_exhaustive} implement the testing. They take a
+ testing function as a parameter, which takes a value of type @{typ "'a"}
+ and optionally produces a counterexample, and a size parameter for the test values.
+ In @{class full_exhaustive}, the testing function parameter additionally
+ expects a lazy term reconstruction in the type @{typ Code_Evaluation.term}
+ of the tested value.
+
+ The canonical implementation for @{text exhaustive} testers calls the given
+ testing function on all values up to the given size and stops as soon
+ as a counterexample is found.
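+
+ A typical invocation selecting this tester might be (the values
+ are illustrative):
+
+ @{text [display] "quickcheck [tester = exhaustive, size = 5, timeout = 20]"}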
+
+ \item[@{text random}] The operation @{const Quickcheck_Random.random}
+ of the type class @{class random} generates a pseudo-random
+ value of the given size and a lazy term reconstruction of the value
+ in the type @{typ Code_Evaluation.term}. A pseudo-randomness generator
+ is defined in theory @{theory Random}.
+
+ \item[@{text narrowing}] implements Haskell's Lazy Smallcheck~\cite{runciman-naylor-lindblad}
+ using the type classes @{class narrowing} and @{class partial_term_of}.
+ Variables in the current goal are initially represented as symbolic variables.
+ If the execution of the goal tries to evaluate one of them, the test engine
+ replaces it with refinements provided by @{const narrowing}.
+ Narrowing views every value as a sum-of-products which is expressed using the operations
+ @{const Quickcheck_Narrowing.cons} (embedding a value),
+ @{const Quickcheck_Narrowing.apply} (product) and @{const Quickcheck_Narrowing.sum} (sum).
+ The refinement should enable further evaluation of the goal.
+
+ For example, @{const narrowing} for the list type @{typ "'a :: narrowing list"}
+ can be recursively defined as
+ @{term "Quickcheck_Narrowing.sum (Quickcheck_Narrowing.cons [])
+ (Quickcheck_Narrowing.apply
+ (Quickcheck_Narrowing.apply
+ (Quickcheck_Narrowing.cons (op #))
+ narrowing)
+ narrowing)"}.
+ If a symbolic variable of type @{typ "_ list"} is evaluated, it is replaced by (i)~the empty
+ list @{term "[]"} and (ii)~by a non-empty list whose head and tail can then be recursively
+ refined if needed.
+
+ To reconstruct counterexamples, the operation @{const partial_term_of} transforms
+ @{text narrowing}'s deep representation of terms to the type @{typ Code_Evaluation.term}.
+ The deep representation models symbolic variables as
+ @{const Quickcheck_Narrowing.Narrowing_variable}, which are normally converted to
+ @{const Code_Evaluation.Free}, and refined values as
+ @{term "Quickcheck_Narrowing.Narrowing_constructor i args"}, where @{term "i :: integer"}
+ denotes the index in the sum of refinements. In the above example for lists,
+ @{term "0"} corresponds to @{term "[]"} and @{term "1"}
+ to @{term "op #"}.
+
+ The command @{command (HOL) "code_datatype"} sets up @{const partial_term_of}
+ such that the @{term "i"}-th refinement is interpreted as the @{term "i"}-th constructor,
+ but it does not ensure consistency with @{const narrowing}.
+ \end{description}
+
+ \item @{command (HOL) "quickcheck_params"} changes @{command (HOL)
+ "quickcheck"} configuration options persistently.
+
+ \item @{command (HOL) "quickcheck_generator"} creates random and
+ exhaustive value generators for a given type and operations. It
+ generates values by using the operations as if they were
+ constructors of that type.
+
+ \item @{command (HOL) "nitpick"} tests the current goal for
+ counterexamples using a reduction to first-order relational
+ logic. See the Nitpick manual \cite{isabelle-nitpick} for details.
+
+ \item @{command (HOL) "nitpick_params"} changes @{command (HOL)
+ "nitpick"} configuration options persistently.
+
+ \item @{command (HOL) "find_unused_assms"} finds potentially superfluous
+ assumptions in theorems using quickcheck.
+ It takes the theory name to be checked for superfluous assumptions as
+ an optional argument. If not provided, it checks the current theory.
+ Options to the internal quickcheck invocations can be changed with
+ common configuration declarations.
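+
+ For instance, the following checks a hypothetical theory @{text
+ Lists} for superfluous assumptions:
+
+ @{text [display] "find_unused_assms Lists"}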
+
+ \end{description}
+*}
+
+
+section {* Unstructured case analysis and induction \label{sec:hol-induct-tac} *}
+
+text {*
+ The following tools of Isabelle/HOL support case analysis and
+ induction in unstructured tactic scripts; see also
+ \secref{sec:cases-induct} for proper Isar versions of similar ideas.
+
+ \begin{matharray}{rcl}
+ @{method_def (HOL) case_tac}@{text "\<^sup>*"} & : & @{text method} \\
+ @{method_def (HOL) induct_tac}@{text "\<^sup>*"} & : & @{text method} \\
+ @{method_def (HOL) ind_cases}@{text "\<^sup>*"} & : & @{text method} \\
+ @{command_def (HOL) "inductive_cases"}@{text "\<^sup>*"} & : & @{text "local_theory \ local_theory"} \\
+ \end{matharray}
+
+ @{rail \
+ @@{method (HOL) case_tac} @{syntax goal_spec}? @{syntax term} rule?
+ ;
+ @@{method (HOL) induct_tac} @{syntax goal_spec}? (@{syntax insts} * @'and') rule?
+ ;
+ @@{method (HOL) ind_cases} (@{syntax prop}+) (@'for' (@{syntax name}+))?
+ ;
+ @@{command (HOL) inductive_cases} (@{syntax thmdecl}? (@{syntax prop}+) + @'and')
+ ;
+ rule: 'rule' ':' @{syntax thmref}
+ \}
+
+ \begin{description}
+
+ \item @{method (HOL) case_tac} and @{method (HOL) induct_tac} can be
+ used to reason about inductive types. Rules are selected according to
+ the declarations by the @{attribute cases} and @{attribute induct}
+ attributes, cf.\ \secref{sec:cases-induct}. The @{command (HOL)
+ datatype} package already takes care of this.
+
+ These unstructured tactics feature both goal addressing and dynamic
+ instantiation. Note that named rule cases are \emph{not} provided
+ as would be by the proper @{method cases} and @{method induct} proof
+ methods (see \secref{sec:cases-induct}). Unlike the @{method
+ induct} method, @{method induct_tac} does not handle structured rule
+ statements, only the compact object-logic conclusion of the subgoal
+ being addressed.
+
+ \item @{method (HOL) ind_cases} and @{command (HOL)
+ "inductive_cases"} provide an interface to the internal @{ML_text
+ mk_cases} operation. Rules are simplified in an unrestricted
+ forward manner.
+
+ While @{method (HOL) ind_cases} is a proof method to apply the
+ result immediately as elimination rules, @{command (HOL)
+ "inductive_cases"} provides case split theorems at the theory level
+ for later use. The @{keyword "for"} argument of the @{method (HOL)
+ ind_cases} method allows specifying a list of variables that should
+ be generalized before applying the resulting rule.
+
+ \end{description}
+*}
+
+
+chapter {* Executable code *}
+
+text {* For validation purposes, it is often useful to \emph{execute}
+ specifications. In principle, execution could be simulated by
+ Isabelle's inference kernel, i.e. by a combination of resolution and
+ simplification. Unfortunately, this approach is rather inefficient.
+ A more efficient way of executing specifications is to translate
+ them into a functional programming language such as ML.
+
+ Isabelle provides a generic framework to support code generation
+ from executable specifications. Isabelle/HOL instantiates these
+ mechanisms in a way that is amenable to end-user applications. Code
+ can be generated for functional programs (including overloading
+ using type classes) targeting SML \cite{SML}, OCaml \cite{OCaml},
+ Haskell \cite{haskell-revised-report} and Scala
+ \cite{scala-overview-tech-report}. Conceptually, code generation is
+ split up into three steps: \emph{selection} of code theorems,
+ \emph{translation} into an abstract executable view and
+ \emph{serialization} to a specific \emph{target language}.
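+
+ For instance, once the constants of interest are executable, a
+ single command suffices to produce a complete program in one of the
+ target languages; schematically, for a hypothetical constant
+ @{text empty_queue}:
+
+ @{text [display] "export_code empty_queue in SML module_name Queue"}
+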
+ Inductive specifications can be executed using the predicate + compiler which operates within HOL. See \cite{isabelle-codegen} for + an introduction. + + \begin{matharray}{rcl} + @{command_def (HOL) "export_code"}@{text "\<^sup>*"} & : & @{text "context \"} \\ + @{attribute_def (HOL) code} & : & @{text attribute} \\ + @{command_def (HOL) "code_datatype"} & : & @{text "theory \ theory"} \\ + @{command_def (HOL) "print_codesetup"}@{text "\<^sup>*"} & : & @{text "context \"} \\ + @{attribute_def (HOL) code_unfold} & : & @{text attribute} \\ + @{attribute_def (HOL) code_post} & : & @{text attribute} \\ + @{attribute_def (HOL) code_abbrev} & : & @{text attribute} \\ + @{command_def (HOL) "print_codeproc"}@{text "\<^sup>*"} & : & @{text "context \"} \\ + @{command_def (HOL) "code_thms"}@{text "\<^sup>*"} & : & @{text "context \"} \\ + @{command_def (HOL) "code_deps"}@{text "\<^sup>*"} & : & @{text "context \"} \\ + @{command_def (HOL) "code_reserved"} & : & @{text "theory \ theory"} \\ + @{command_def (HOL) "code_printing"} & : & @{text "theory \ theory"} \\ + @{command_def (HOL) "code_identifier"} & : & @{text "theory \ theory"} \\ + @{command_def (HOL) "code_monad"} & : & @{text "theory \ theory"} \\ + @{command_def (HOL) "code_reflect"} & : & @{text "theory \ theory"} \\ + @{command_def (HOL) "code_pred"} & : & @{text "theory \ proof(prove)"} + \end{matharray} + + @{rail \ + @@{command (HOL) export_code} ( @'open' ) ? ( constexpr + ) \ + ( ( @'in' target ( @'module_name' @{syntax string} ) ? \ + ( @'file' @{syntax string} ) ? ( '(' args ')' ) ?) + ) ? + ; + + const: @{syntax term} + ; + + constexpr: ( const | 'name._' | '_' ) + ; + + typeconstructor: @{syntax nameref} + ; + + class: @{syntax nameref} + ; + + target: 'SML' | 'OCaml' | 'Haskell' | 'Scala' | 'Eval' + ; + + @@{attribute (HOL) code} ( 'del' | 'equation' | 'abstype' | 'abstract' + | 'drop:' ( const + ) | 'abort:' ( const + ) )? + ; + + @@{command (HOL) code_datatype} ( const + ) + ; + + @@{attribute (HOL) code_unfold} ( 'del' ) ? + ; + + @@{attribute (HOL) code_post} ( 'del' ) ? + ; + + @@{attribute (HOL) code_abbrev} + ; + + @@{command (HOL) code_thms} ( constexpr + ) ? + ; + + @@{command (HOL) code_deps} ( constexpr + ) ? + ; + + @@{command (HOL) code_reserved} target ( @{syntax string} + ) + ; + + symbol_const: ( @'constant' const ) + ; + + symbol_typeconstructor: ( @'type_constructor' typeconstructor ) + ; + + symbol_class: ( @'type_class' class ) + ; + + symbol_class_relation: ( @'class_relation' class ( '<' | '\' ) class ) + ; + + symbol_class_instance: ( @'class_instance' typeconstructor @'::' class ) + ; + + symbol_module: ( @'code_module' name ) + ; + + syntax: @{syntax string} | ( @'infix' | @'infixl' | @'infixr' ) @{syntax nat} @{syntax string} + ; + + printing_const: symbol_const ( '\' | '=>' ) \ + ( '(' target ')' syntax ? + @'and' ) + ; + + printing_typeconstructor: symbol_typeconstructor ( '\' | '=>' ) \ + ( '(' target ')' syntax ? + @'and' ) + ; + + printing_class: symbol_class ( '\' | '=>' ) \ + ( '(' target ')' @{syntax string} ? + @'and' ) + ; + + printing_class_relation: symbol_class_relation ( '\' | '=>' ) \ + ( '(' target ')' @{syntax string} ? + @'and' ) + ; + + printing_class_instance: symbol_class_instance ( '\' | '=>' ) \ + ( '(' target ')' '-' ? + @'and' ) + ; + + printing_module: symbol_module ( '\' | '=>' ) \ + ( '(' target ')' ( @{syntax string} ( @'attach' ( const + ) ) ? ) ? 
+ @'and' )
+ ;
+
+ @@{command (HOL) code_printing} ( ( printing_const | printing_typeconstructor
+ | printing_class | printing_class_relation | printing_class_instance
+ | printing_module ) + '|' )
+ ;
+
+ @@{command (HOL) code_identifier} ( ( symbol_const | symbol_typeconstructor
+ | symbol_class | symbol_class_relation | symbol_class_instance
+ | symbol_module ) ( '\' | '=>' ) \
+ ( '(' target ')' @{syntax string} ? + @'and' ) + '|' )
+ ;
+
+ @@{command (HOL) code_monad} const const target
+ ;
+
+ @@{command (HOL) code_reflect} @{syntax string} \
+ ( @'datatypes' ( @{syntax string} '=' ( '_' | ( @{syntax string} + '|' ) + @'and' ) ) ) ? \
+ ( @'functions' ( @{syntax string} + ) ) ? ( @'file' @{syntax string} ) ?
+ ;
+
+ @@{command (HOL) code_pred} \ ('(' @'modes' ':' modedecl ')')? \ const
+ ;
+
+ modedecl: (modes | ((const ':' modes) \
+ (@'and' ((const ':' modes @'and') +))?))
+ ;
+
+ modes: mode @'as' const
+ \}
+
+ \begin{description}
+
+ \item @{command (HOL) "export_code"} generates code for a given list
+ of constants in the specified target language(s). If no
+ serialization instruction is given, only abstract code is generated
+ internally.
+
+ Constants may be specified by giving them literally, referring to
+ all executable constants within a certain theory by giving @{text
+ "name._"}, or referring to \emph{all} executable constants currently
+ available by giving @{text "_"}.
+
+ By default, exported identifiers are minimized per module. This
+ can be suppressed by prepending @{keyword "open"} before the list
+ of constants.
+
+ By default, for each involved theory one corresponding name space
+ module is generated. Alternatively, a module name may be specified
+ after the @{keyword "module_name"} keyword; then \emph{all} code is
+ placed in this module.
+
+ For \emph{SML}, \emph{OCaml} and \emph{Scala} the file specification
+ refers to a single file; for \emph{Haskell}, it refers to a whole
+ directory, where code is generated in multiple files reflecting the
+ module hierarchy. Omitting the file specification denotes standard
+ output.
+
+ Serializers take an optional list of arguments in parentheses.
+ For \emph{Haskell} a module name prefix may be given using the
+ ``@{text "root:"}'' argument; ``@{text string_classes}'' adds a
+ ``@{verbatim "deriving (Read, Show)"}'' clause to each appropriate
+ datatype declaration.
+
+ \item @{attribute (HOL) code} declares code equations for code
+ generation. Variant @{text "code equation"} declares a conventional
+ equation as code equation. Variants @{text "code abstype"} and
+ @{text "code abstract"} declare abstract datatype certificates or
+ code equations on abstract datatype representations respectively.
+ Vanilla @{text "code"} falls back to @{text "code equation"}
+ or @{text "code abstype"} depending on the syntactic shape
+ of the underlying equation. Variant @{text "code del"}
+ deselects a code equation for code generation.
+
+ Variants @{text "code drop:"} and @{text "code abort:"} take
+ a list of constants as arguments and drop all code equations declared
+ for them. In the case of @{text abort}, these constants are then
+ not required to have a definition by means of code equations;
+ if needed, they are implemented by program abort (exception) instead.
+
+ Usually packages introducing code equations provide a reasonable
+ default setup for selection.
+
+ \item @{command (HOL) "code_datatype"} specifies a constructor set
+ for a logical type.
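+
+ For example, after implementing queues by pairs of lists by means of
+ a hypothetical constructor @{text AQueue}, the declaration
+
+ @{text [display] "code_datatype AQueue"}
+
+ instructs the code generator to implement the queue type by this
+ constructor rather than by the original ones.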
+
+ \item @{command (HOL) "print_codesetup"} gives an overview on
+ selected code equations and code generator datatypes.
+
+ \item @{attribute (HOL) code_unfold} declares (or with option
+ ``@{text "del"}'' removes) theorems which during preprocessing
+ are applied as rewrite rules to any code equation or evaluation
+ input.
+
+ \item @{attribute (HOL) code_post} declares (or with option ``@{text
+ "del"}'' removes) theorems which are applied as rewrite rules to any
+ result of an evaluation.
+
+ \item @{attribute (HOL) code_abbrev} declares equations which are
+ applied as rewrite rules to any result of an evaluation and
+ symmetrically during preprocessing to any code equation or evaluation
+ input.
+
+ \item @{command (HOL) "print_codeproc"} prints the setup of the code
+ generator preprocessor.
+
+ \item @{command (HOL) "code_thms"} prints a list of theorems
+ representing the corresponding program containing all given
+ constants after preprocessing.
+
+ \item @{command (HOL) "code_deps"} visualizes dependencies of
+ theorems representing the corresponding program containing all given
+ constants after preprocessing.
+
+ \item @{command (HOL) "code_reserved"} declares a list of names as
+ reserved for a given target, preventing them from being shadowed by
+ any generated code.
+
+ \item @{command (HOL) "code_printing"} associates a series of symbols
+ (constants, type constructors, classes, class relations, instances,
+ module names) with target-specific serializations; omitting a serialization
+ deletes an existing serialization.
+
+ \item @{command (HOL) "code_monad"} provides an auxiliary mechanism
+ to generate monadic code for Haskell.
+
+ \item @{command (HOL) "code_identifier"} associates a series of symbols
+ (constants, type constructors, classes, class relations, instances,
+ module names) with target-specific hints on how these symbols shall be named.
+ These hints gain precedence over names for symbols with no hints at all.
+ Conflicting hints are subject to name disambiguation.
+ \emph{Warning:} It is at the discretion
+ of the user to ensure that name prefixes of identifiers in compound
+ statements like type classes or datatypes are still the same.
+
+ \item @{command (HOL) "code_reflect"} without a ``@{text "file"}''
+ argument compiles code into the system runtime environment and
+ modifies the code generator setup so that future invocations of system
+ runtime code generation referring to one of the ``@{text
+ "datatypes"}'' or ``@{text "functions"}'' entities use these
+ precompiled entities. With a ``@{text "file"}'' argument, the
+ corresponding code is generated into that specified file without
+ modifying the code generator setup.
+
+ \item @{command (HOL) "code_pred"} creates code equations for a
+ predicate given a set of introduction rules. Optional mode
+ annotations determine which arguments are supposed to be input or
+ output. If alternative introduction rules are declared, one must
+ prove a corresponding elimination rule.
+
+ \end{description}
+*}
+
+end