# HG changeset patch
# User wenzelm
# Date 1015537927 -3600
# Node ID 6faccf7d0f25c289eab6c5cdc8c2a7300d97a328
# Parent 02bfd6d622ca55fbc63a435238a5a80599a49156
*** empty log message ***

diff -r 02bfd6d622ca -r 6faccf7d0f25 doc-src/IsarRef/conversion.tex
--- a/doc-src/IsarRef/conversion.tex Thu Mar 07 19:07:56 2002 +0100
+++ b/doc-src/IsarRef/conversion.tex Thu Mar 07 22:52:07 2002 +0100
@@ -69,14 +69,17 @@
ML proof scripts have to be well-behaved by storing theorems properly within the current theory context, in order to enable new-style theories to retrieve
-these these later.
+these later.
\begin{descr}
+
\item [$\mathtt{qed}~name$] is the canonical way to conclude a proof script in ML.  This already manages entry in the theorem database of the current theory context.
+
\item [$\mathtt{bind_thm}~(name, thm)$ and $\mathtt{bind_thms}~(name, thms)$] store theorems that have been produced in ML in an ad-hoc manner.
+
\end{descr}

Note that the original ``LCF-system'' approach of binding theorem values on
@@ -197,7 +200,7 @@
ML values of type \texttt{thm}.  After the proof is finished, these premises are discharged again, resulting in the original rule statement.  The ``long format'' of Isabelle/Isar goal statements admits to emulate this technique
-closely.  The general ML goal statement for derived rules looks like this:
+nicely.  The general ML goal statement for derived rules looks like this:
\begin{ttbox}
val [\(prem@1\), \dots] = goal "\(\phi@1 \Imp \dots \Imp \psi\)";
by \(tac@1\);
@@ -218,12 +221,10 @@
corresponding atomic statements, typically stemming from the definition of a new concept.  In that case, the general scheme for deriving rules may be greatly simplified, using one of the standard automated proof tools, such as
-$simp$, $blast$, or $auto$.  This would work as follows:
+$simp$, $blast$, or $auto$.  This could work as follows:
\begin{matharray}{l}
\LEMMA{}{\texttt"{\phi@1 \Imp \dots \Imp \psi}\texttt"} \\
- \quad \APPLY{unfold~defs} \\
- \quad \APPLY{blast} \\
- \quad \DONE \\
+ \quad \BYY{unfold~defs}{blast} \\
\end{matharray}
Note that classic Isabelle would support this form only in the special case where $\phi@1$, \dots are atomic statements (when using the standard
@@ -270,6 +271,9 @@
 \quad \DONE \\
\end{matharray}
+In many situations the $atomize$ step above is actually unnecessary,
+especially if the subsequent script mainly consists of automated tools.
+

\subsection{Tactics}\label{sec:conv-tac}

@@ -287,7 +291,7 @@
For example, method $simp$ replaces all of \texttt{simp_tac}~/ \texttt{asm_simp_tac}~/ \texttt{full_simp_tac}~/ \texttt{asm_full_simp_tac}; there is also concrete syntax for augmenting the Simplifier context (the
-current ``simpset'') in a handsome way.
+current ``simpset'') in a convenient way.

\subsubsection{Resolution tactics}

@@ -458,19 +462,19 @@
\medskip Theorem operations may be attached as attributes in the very place where theorems are referenced, say within a method argument.  The subsequent
-common ML combinators may be expressed directly in Isar as follows.
+ML combinators may be expressed directly in Isar as follows.
\begin{matharray}{lll} thm@1~\texttt{RS}~thm@2 & & thm@1~[THEN~thm@2] \\ thm@1~\texttt{RSN}~(i, thm@2) & & thm@1~[THEN~[i]~thm@2] \\ thm@1~\texttt{COMP}~thm@2 & & thm@1~[COMP~thm@2] \\ \relax[thm@1, \dots]~\texttt{MRS}~thm & & thm~[OF~thm@1~\dots] \\ \texttt{read_instantiate}~[(x@1, t@1), \dots]~thm & & thm~[where~x@1 = t@1~and~\dots] \\ + \texttt{make_elim}~thm & & thm~[elim_format] \\ \texttt{standard}~thm & & thm~[standard] \\ - \texttt{make_elim}~thm & & thm~[elim_format] \\ \end{matharray} Note that $OF$ is often more readable as $THEN$; likewise positional -instantiation with $of$ is more handsome than $where$. +instantiation with $of$ is often more appropriate than $where$. The special ML command \texttt{qed_spec_mp} of Isabelle/HOL and FOL may be replaced by passing the result of a proof through $rule_format$. @@ -491,8 +495,8 @@ \texttt{AddSDs}~[thm] & & \DECLARE~thm~[dest!] \\[0.5ex] \texttt{AddIffs}~[thm] & & \DECLARE~thm~[iff] \\ \end{matharray} -Note that explicit $\DECLARE$ commands are actually needed only very rarely; -Isar admits to declare theorems on-the-fly wherever they emerge. Consider the +Note that explicit $\DECLARE$ commands are rarely needed in practice; Isar +admits to declare theorems on-the-fly wherever they emerge. Consider the following ML idiom: \begin{ttbox} Goal "\(\phi\)"; @@ -500,7 +504,7 @@ qed "name"; Addsimps [name]; \end{ttbox} -This may be expressed more concisely in Isar like this: +This may be expressed more succinctly in Isar like this: \begin{matharray}{l} \LEMMA{name~[simp]}{\phi} \\ \quad\vdots @@ -525,26 +529,32 @@ code expressed in the low-level language. As far as Isar proofs are concerned, it is usually much easier to re-use only -definitions and the main statements directly, while following the arrangement -of proof scripts only very loosely. Ideally, one would also have some -\emph{informal} proof outlines available for guidance as well. In the worst -case, obscure proof scripts would have to be re-engineered by tracing forth -and backwards, and by educated guessing! +definitions and the main statements, while following the arrangement of proof +scripts only very loosely. Ideally, one would also have some \emph{informal} +proof outlines available for guidance as well. In the worst case, obscure +proof scripts would have to be re-engineered by tracing forth and backwards, +and by educated guessing! \medskip This is a possible schedule to embark on actual conversion of legacy proof scripts into Isar proof texts. + \begin{enumerate} + \item Port ML scripts to Isar tactic emulation scripts (see \S\ref{sec:port-scripts}). + \item Get sufficiently acquainted with Isabelle/Isar proof development.\footnote{As there is still no Isar tutorial around, it is best to look at existing Isar examples, see also \S\ref{sec:isar-howto}.} + \item Recover the proof structure of a few important theorems. + \item Rephrase the original intention of the course of reasoning in terms of Isar proof language elements. + \end{enumerate} -Certainly, rewriting formal reasoning in Isar requires much additional effort. +Certainly, rewriting formal reasoning in Isar requires some additional effort. On the other hand, one gains a human-readable representation of machine-checked formal proof. Depending on the context of application, this might be even indispensable to start with! 
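\medskip For orientation, here is a minimal sketch of step (1) of the above schedule, for a hypothetical lemma (Isabelle/HOL syntax assumed).  A classic ML proof script such as
\begin{ttbox}
Goal "A & B --> B & A";
by (blast_tac (claset ()) 1);
qed "conj_swap";
\end{ttbox}
would be ported to the following Isar tactic emulation script:
\begin{ttbox}
lemma conj_swap: "A & B --> B & A"
  by blast
\end{ttbox}
Steps (3) and (4) would then turn such emulation scripts into structured Isar proof texts proper.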
diff -r 02bfd6d622ca -r 6faccf7d0f25 doc-src/IsarRef/generic.tex
--- a/doc-src/IsarRef/generic.tex Thu Mar 07 19:07:56 2002 +0100
+++ b/doc-src/IsarRef/generic.tex Thu Mar 07 22:52:07 2002 +0100
@@ -27,7 +27,7 @@
 \end{rail}

\begin{descr}
-
+
\item [$\AXCLASS~c \subseteq \vec c~~axms$] defines an axiomatic type class as the intersection of existing classes, with additional axioms holding.  Class axioms may not contain more than one type variable.  The class axioms (with
@@ -35,18 +35,18 @@
 a class introduction rule is generated (being bound as $c{.}intro$); this rule is employed by method $intro_classes$ to support instantiation proofs of this class.
-
+
 The ``axioms'' are stored as theorems according to the given name specifications, adding the class name $c$ as name space prefix; the same facts are also stored collectively as $c{\dtt}axioms$.
-
+
\item [$\INSTANCE~c@1 \subseteq c@2$ and $\INSTANCE~t :: (\vec s)c$] set up a goal stating a class relation or type arity.  The proof would usually proceed by $intro_classes$, and then establish the characteristic theorems of the type classes involved.  After finishing the proof, the theory will be augmented by a type signature declaration corresponding to the resulting theorem.
-
+
\item [$intro_classes$] repeatedly expands all class introduction rules of this theory.  Note that this method usually need not be named explicitly, as it is already included in the default proof step (of $\PROOFNAME$ etc.).
@@ -59,7 +59,7 @@
\subsection{Locales and local contexts}\label{sec:locale}

Locales are named local contexts, consisting of a list of declaration elements
-that are modeled after the Isar proof context commands (cf.\
+that are modeled after the Isar proof context commands (cf.\
\S\ref{sec:proof-context}).

\subsubsection{Localized commands}

@@ -83,12 +83,12 @@
 the locale context of $loc$ and augments its body by an appropriate ``$\isarkeyword{notes}$'' element (see below).  The exported view of $a$, after discharging the locale context, is stored as $loc{.}a$ within the global
-theory.  A localized goal ``$\LEMMANAME~(\IN~loc)~a:~\phi$'' work similarly,
-only that the fact emerges through the subsequent proof,
-which may refer to the full infrastructure of the locale context (including
-local parameters with typing and concrete syntax, assumptions, definitions
-etc.).  Most notably, fact declarations of the locale are active during the
-proof, too (e.g.\ local $simp$ rules).
+theory.  A localized goal ``$\LEMMANAME~(\IN~loc)~a:~\phi$'' works similarly,
+only that the fact emerges through the subsequent proof, which may refer to
+the full infrastructure of the locale context (covering local parameters with
+typing and concrete syntax, assumptions, definitions etc.).  Most notably,
+fact declarations of the locale are active during the proof as well (e.g.\
+local $simp$ rules).

\subsubsection{Locale specifications}

@@ -131,7 +131,7 @@
 \end{rail}

\begin{descr}
-
+
\item [$\LOCALE~loc~=~import~+~body$] defines a new locale $loc$ as a context consisting of a certain view of existing locales ($import$) plus some additional elements ($body$).  Both $import$ and $body$ are optional; the
@@ -139,57 +139,59 @@
 useful to collect declarations of facts later on.  Type-inference on locale expressions automatically takes care of the most general typing that the combined context elements may acquire.
-
+
 The $import$ consists of a structured context expression, built from references to existing locales, renamed contexts, or merged contexts.
Renaming uses positional notation: $c~\vec x$ means that (a prefix of) the fixed parameters of context $c$ are named according to $\vec x$; a ``\texttt{_}'' (underscore)\indexisarthm{_@\texttt{_}} means to skip that position.  Also note that concrete syntax only works with the original name.
-  Merging proceeds from left-to-right, suppressing any duplicates emerging
-  from different paths through an import hierarchy.
-
+  Merging proceeds from left-to-right, suppressing any duplicates stemming
+  from different paths through the import hierarchy.
+
 The $body$ consists of basic context elements; further context expressions may be included as well.

 \begin{descr}
-
+
 \item [$\FIXES{~x::\tau~(mx)}$] declares a local parameter of type $\tau$ and mixfix annotation $mx$ (both are optional).  The special syntax declaration ``$(structure)$'' means that $x$ may be referenced implicitly in this context.
-
+
 \item [$\ASSUMES{a}{\vec\phi}$] introduces local premises, similar to $\ASSUMENAME$ within a proof (cf.\ \S\ref{sec:proof-context}).
-
+
 \item [$\DEFINES{a}{x \equiv t}$] defines a previously declared parameter.
   This is close to $\DEFNAME$ within a proof (cf.\
   \S\ref{sec:proof-context}), but $\DEFINESNAME$ takes an equational
-  proposition instead of variable-term.  The left-hand side of the equation
-  may have additional arguments, e.g.\ $\DEFINES{}{f~\vec x \equiv t}$.
-
+  proposition instead of variable-term pair.  The left-hand side of the
+  equation may have additional arguments, e.g.\ ``$\DEFINES{}{f~\vec x
+  \equiv t}$''.
+
 \item [$\NOTES{a}{\vec b}$] reconsiders facts within a local context.  Most notably, this may include arbitrary declarations in any attribute specifications included here, e.g.\ a local $simp$ rule.
-
+
 \item [$\INCLUDES{c}$] copies the specified context in a statically scoped manner.
-
+
  In contrast, the initial $import$ specification of a locale expression maintains a dynamic relation to the locales being referenced (benefiting from any later fact declarations in the obvious manner).
 \end{descr}
-
-  Note that $\IS{p}$ patterns given in the syntax of $\ASSUMESNAME$ and
-  $\DEFINESNAME$ above is actually illegal in locale definitions.  In the long
-  goal format of \S\ref{sec:goals}, term bindings may be included as expected.
+
+  Note that ``$\IS{p}$'' patterns given in the syntax of $\ASSUMESNAME$ and
+  $\DEFINESNAME$ above are actually illegal in locale definitions.  In the
+  long goal format of \S\ref{sec:goals}, term bindings may be included as
+  expected, though.

\item [$\isarkeyword{print_locale}~import~+~body$] prints the specified locale expression in a flattened form.  The notable special case $\isarkeyword{print_locale}~loc$ just prints the contents of the named locale, but keep in mind that type-inference will normalize type variables according to the usual alphabetical order.
-
+
\item [$\isarkeyword{print_locales}$] prints the names of all locales of the current theory.

@@ -206,7 +208,7 @@
 \end{matharray}

Generalized elimination means that additional elements with certain properties
-may introduced in the current context, by virtue of a locally proven
+may be introduced in the current context, by virtue of a locally proven
``soundness statement''.
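For illustration, here is a minimal sketch of such a generalized elimination in Isabelle/HOL (hypothetical lemma statement and fact names assumed):
\begin{ttbox}
lemma assumes ex: "EX x. P x" shows "EX x. P x | Q x"
proof -
  from ex obtain x where p: "P x" ..
  from p have "P x | Q x" ..
  then show ?thesis ..
qed
\end{ttbox}
Here $\OBTAINNAME$ introduces a new element $x$ together with the local assumption ``$P~x$''; the trivial ``$\DDOT$'' proof establishes the corresponding soundness statement.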
Technically speaking, the $\OBTAINNAME$ language element is like a declaration of $\FIXNAME$ and $\ASSUMENAME$ (see also \S\ref{sec:proof-context}), together with a soundness proof of its additional
@@ -224,28 +226,32 @@
 \begin{matharray}{l}
 \langle facts~\vec b\rangle \\
 \OBTAIN{\vec x}{a}{\vec \phi}~~\langle proof\rangle \equiv {} \\[1ex]
- \quad \BG \\
+ \quad \HAVE{}{\All{thesis} (\All{\vec x} \vec\phi \Imp thesis) \Imp thesis} \\
+ \quad \PROOF{succeed} \\
 \qquad \FIX{thesis} \\
- \qquad \ASSUME{that~[simp, intro]}{\All{\vec x} \vec\phi \Imp thesis} \\
- \qquad \FROM{\vec b}~\HAVE{}{thesis}~~\langle proof\rangle \\
- \quad \EN \\
+ \qquad \ASSUME{that~[intro?]}{\All{\vec x} \vec\phi \Imp thesis} \\
+ \qquad \SHOW{}{thesis} \\
+ \quad\qquad \APPLY{insert~that} \\
+ \quad\qquad \USING{\vec b}~~\langle proof\rangle \\
+ \quad \QED{} \\
 \quad \FIX{\vec x}~\ASSUMENAME^\ast~a\colon~\vec\phi \\
\end{matharray}

Typically, the soundness proof is relatively straightforward, often just by
-canonical automated tools such as $\BY{simp}$ (see \S\ref{sec:simp}) or
-$\BY{blast}$ (see \S\ref{sec:classical-auto}).  Accordingly, the ``$that$''
-reduction above is declared as simplification and introduction rule.
+canonical automated tools such as ``$\BY{simp}$'' (see \S\ref{sec:simp}) or
+``$\BY{blast}$'' (see \S\ref{sec:classical-auto}).  Accordingly, the
+``$that$'' reduction above is declared as a simplification and introduction
+rule.

\medskip

In a sense, $\OBTAINNAME$ represents at the level of Isar proofs what would be meta-logical existential quantifiers and conjunctions.  This concept has a
-broad range of useful applications, ranging from plain elimination (or even
+broad range of useful applications, ranging from plain elimination (or
 introduction) of object-level existentials and conjunctions, to elimination over results of symbolic evaluation of recursive definitions, for example.  Also note that $\OBTAINNAME$ without parameters acts much like $\HAVENAME$,
-where the result is treated as an assumption.
+where the result is treated as a genuine assumption.

\subsection{Calculational reasoning}\label{sec:calculation}

@@ -277,10 +283,10 @@
 similar to $\ALSO$ and $\FINALLY$, but only collect further results in $calculation$ without applying any rules yet.

-Also note that the automatic term abbreviation ``$\dots$'' has its canonical
-application with calculational proofs.  It refers to the argument\footnote{The
-  argument of a curried infix expression is its right-hand side.} of the
-preceding statement.
+Also note that the implicit term abbreviation ``$\dots$'' has its canonical
+application with calculational proofs.  It refers to the argument of the
+preceding statement.  (The argument of a curried infix expression happens to be
+its right-hand side.)
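For example, a simple transitive chain might read as follows (with the individual justifications left as \(\langle proof\rangle\)):
\begin{ttbox}
have "a = b" \(\langle proof\rangle\)
also have "... = c" \(\langle proof\rangle\)
also have "... = d" \(\langle proof\rangle\)
finally have "a = d" .
\end{ttbox}
Here the two occurrences of ``\texttt{...}'' abbreviate $b$ and $c$, respectively, i.e.\ the right-hand sides of the preceding statements.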
Isabelle/Isar calculations are implicitly subject to block structure in the sense that new threads of calculational reasoning are commenced for any new
@@ -290,9 +296,8 @@
 \medskip

-The Isar calculation proof commands may be defined as
-follows:\footnote{Internal bookkeeping such as proper handling of
-  block-structure has been suppressed.}
+The Isar calculation proof commands may be defined as follows:\footnote{We
+  suppress internal bookkeeping such as proper handling of block-structure.}
\begin{matharray}{rcl}
\ALSO@0 & \equiv & \NOTE{calculation}{this} \\
\ALSO@{n+1} & \equiv & \NOTE{calculation}{trans~[OF~calculation~this]} \\[0.5ex]
@@ -309,7 +314,7 @@
 \end{rail}

\begin{descr}
-
+
\item [$\ALSO~(\vec a)$] maintains the auxiliary $calculation$ register as follows.  The first occurrence of $\ALSO$ in some calculational thread initializes $calculation$ by $this$.  Any subsequent $\ALSO$ on the same
@@ -328,29 +333,29 @@
\item [$\MOREOVER$ and $\ULTIMATELY$] are analogous to $\ALSO$ and $\FINALLY$, but collect results only, without applying rules.
-
+
\item [$\isarkeyword{print_trans_rules}$] prints the list of transitivity rules (for calculational commands $\ALSO$ and $\FINALLY$) and symmetry rules (for the $symmetric$ operation and single-step elimination patterns) of the current context.
-
+
\item [$trans$] declares theorems as transitivity rules.
-
+
\item [$sym$] declares symmetry rules.
-
+
\item [$symmetric$] resolves a theorem with some rule declared as $sym$ in the current context.  For example, ``$\ASSUME{[symmetric]}{x = y}$'' produces a swapped fact derived from that assumption.
-
+
 In structured proof texts it is often more appropriate to use an explicit single-step elimination proof, such as ``$\ASSUME{}{x = y}~\HENCE{}{y =
-  x}~\DDOT$''.  Note that the very same rules known to $symmetric$ are
-  declared as $elim$ at the same time.
+  x}~\DDOT$''.  The very same rules known to $symmetric$ are declared as
+  $elim?$ as well.
\end{descr}

-\section{Specific proof tools}
+\section{Proof tools}

\subsection{Miscellaneous methods and attributes}\label{sec:misc-meth-att}

@@ -376,11 +381,11 @@
 \end{rail}

\begin{descr}
-
+
\item [$unfold~\vec a$ and $fold~\vec a$] expand (or fold back again) the given meta-level definitions throughout all goals; any chained facts provided are inserted into the goal and subject to rewriting as well.
-
+
\item [$insert~\vec a$] inserts theorems as facts into all goals of the proof state.  Note that current facts indicated for forward chaining are ignored.
@@ -388,13 +393,13 @@
 basic $rule$ method (see \S\ref{sec:pure-meth-att}), but apply rules by elim-resolution, destruct-resolution, and forward-resolution, respectively \cite{isabelle-ref}.  The optional natural number argument (default $0$)
-  specifies additional assumption steps to be performed.
-
+  specifies additional assumption steps to be performed here.
+
 Note that these methods are improper ones, mainly serving for experimentation and tactic script emulation.  Different modes of basic rule application are usually expressed in Isar at the proof language level, rather than via implicit proof state manipulations.  For example, a proper
-  single-step elimination would be done using the basic $rule$ method, with
+  single-step elimination would be done using the plain $rule$ method, with
 forward chaining of current facts.
\item [$succeed$] yields a single (unchanged) result; it is the identity of
@@ -418,8 +423,8 @@
 where & : & \isaratt \\[0.5ex]
 unfolded & : & \isaratt \\
 folded & : & \isaratt \\[0.5ex]
- standard & : & \isaratt \\
 elim_format & : & \isaratt \\
+ standard^* & : & \isaratt \\
 no_vars^* & : & \isaratt \\
\end{matharray}

@@ -437,36 +442,38 @@
 \end{rail}

\begin{descr}
-
+
\item [$tagged~name~args$ and $untagged~name$] add and remove $tags$ of some theorem.  Tags may be any list of strings that serve as comment for some tools (e.g.\ $\LEMMANAME$ causes the tag ``$lemma$'' to be added to the result).  The first string is considered the tag name, the rest its arguments.  Note that untag removes any tags of the same name.
-
-\item [$THEN~n~a$ and $COMP~n~a$] compose rules.  $THEN$ resolves with the
-  $n$-th premise of $a$; the $COMP$ version skips the automatic lifting
-  process that is normally intended (cf.\ \texttt{RS} and \texttt{COMP} in
+
+\item [$THEN~a$ and $COMP~a$] compose rules by resolution.  $THEN$ resolves
+  with the first premise of $a$ (an alternative position may also be
+  specified); the $COMP$ version skips the automatic lifting process that is
+  normally intended (cf.\ \texttt{RS} and \texttt{COMP} in
 \cite[\S5]{isabelle-ref}).
-
+
\item [$where~\vec x = \vec t$] performs named instantiation of schematic variables occurring in a theorem.  Unlike instantiation tactics such as $rule_tac$ (see \S\ref{sec:tactic-commands}), actual schematic variables
-  have to be specified (e.g.\ $\Var{x@3}$).
-
+  have to be specified on the left-hand side (e.g.\ $\Var{x@3}$).
+
\item [$unfolded~\vec a$ and $folded~\vec a$] expand and fold back again the given meta-level definitions throughout a rule.
-
-\item [$standard$] puts a theorem into the standard form of object-rules, just
-  as the ML function \texttt{standard} (see \cite[\S5]{isabelle-ref}).
-
\item [$elim_format$] turns a destruction rule into elimination rule format, by resolving with the rule $\PROP A \Imp (\PROP A \Imp \PROP B) \Imp \PROP B$.
-
 Note that the Classical Reasoner (\S\ref{sec:classical-att}) provides its own version of this operation.
-
+
+\item [$standard$] puts a theorem into the standard form of object-rules at
+  the outermost theory level.  Note that this operation violates the local
+  proof context (including active locales).
+
\item [$no_vars$] replaces schematic variables by free ones; this is mainly for tuning output of pretty printed theorems.

@@ -554,37 +561,37 @@
 \end{rail}

\begin{descr}
-
+
\item [$rule_tac$ etc.] do resolution of rules with explicit instantiation.  This works the same way as the ML tactics \texttt{res_inst_tac} etc. (see \cite[\S3]{isabelle-ref}).
-
-  Note that multiple rules may be only given there is no instantiation.  Then
+
+  Multiple rules may only be given if there is no instantiation; then
 $rule_tac$ is the same as \texttt{resolve_tac} in ML (see \cite[\S3]{isabelle-ref}).
-
+
\item [$cut_tac$] inserts facts into the proof state as assumptions of a subgoal; see also \texttt{cut_facts_tac} in \cite[\S3]{isabelle-ref}.  Note that the scope of schematic variables is spread over the main goal statement.  Instantiations may be given as well, see also ML tactic \texttt{cut_inst_tac} in \cite[\S3]{isabelle-ref}.
-
+
\item [$thin_tac~\phi$] deletes the specified assumption from a subgoal; note that $\phi$ may contain schematic variables.  See also \texttt{thin_tac} in \cite[\S3]{isabelle-ref}.
-
+
\item [$subgoal_tac~\phi$] adds $\phi$ as an assumption to a subgoal.
See also \texttt{subgoal_tac} and \texttt{subgoals_tac} in \cite[\S3]{isabelle-ref}.
-
+
\item [$rename_tac~\vec x$] renames parameters of a goal according to the list $\vec x$, which refers to the \emph{suffix} of variables.
-
+
\item [$rotate_tac~n$] rotates the assumptions of a goal by $n$ positions: from right to left if $n$ is positive, and from left to right if $n$ is negative; the default value is $1$.  See also \texttt{rotate_tac} in \cite[\S3]{isabelle-ref}.
-
+
\item [$tactic~text$] produces a proof method from any ML text of type \texttt{tactic}.  Apart from the usual ML environment and the current implicit theory context, the ML code may refer to the following locally
@@ -643,15 +650,15 @@
 according to the arguments given.  Note that the \railtterm{only} modifier first removes all other rewrite rules, congruences, and looper tactics (including splits), and then behaves like \railtterm{add}.
-
+
 \medskip The \railtterm{cong} modifiers add or delete Simplifier congruence rules (see also \cite{isabelle-ref}); the default is to add.
-
+
 \medskip The \railtterm{split} modifiers add or delete rules for the Splitter (see also \cite{isabelle-ref}); the default is to add.  This works only if the Simplifier method has been properly set up to include the Splitter (all major object logics such as HOL, HOLCF, FOL, ZF do this already).
-
+
\item [$simp_all$] is similar to $simp$, but acts on all goals (backwards from the last to the first one).

@@ -662,24 +669,25 @@
 simplifying assumptions themselves (cf.\ \texttt{asm_full_simp_tac} in \cite[\S10]{isabelle-ref}).  In structured proofs this is usually quite well behaved in practice: just the local premises of the actual goal are involved,
-additional facts may inserted via explicit forward-chaining (using $\THEN$,
+additional facts may be inserted via explicit forward-chaining (using $\THEN$,
 $\FROMNAME$ etc.).  The full context of assumptions is only included if the ``$!$'' (bang) argument is given, which should be used with some care, though.  Additional Simplifier options may be specified to tune the behavior further
-(mostly for unstructured scripts with many accidental local facts): $(no_asm)$
-means assumptions are ignored completely (cf.\ \texttt{simp_tac}),
-$(no_asm_simp)$ means assumptions are used in the simplification of the
-conclusion but are not themselves simplified (cf.\ \texttt{asm_simp_tac}), and
-$(no_asm_use)$ means assumptions are simplified but are not used in the
-simplification of each other or the conclusion (cf. \texttt{full_simp_tac}).
+(mostly for unstructured scripts with many accidental local facts):
+``$(no_asm)$'' means assumptions are ignored completely (cf.\
+\texttt{simp_tac}), ``$(no_asm_simp)$'' means assumptions are used in the
+simplification of the conclusion but are not themselves simplified (cf.\
+\texttt{asm_simp_tac}), and ``$(no_asm_use)$'' means assumptions are
+simplified but are not used in the simplification of each other or the
+conclusion (cf.\ \texttt{full_simp_tac}).

\medskip The Splitter package is usually configured to work as part of the Simplifier.  The effect of repeatedly applying \texttt{split_tac} can be simulated by
-$(simp~only\colon~split\colon~\vec a)$.  There is also a separate $split$
-method available for single-step case splitting, see \S\ref{sec:basic-eq}.
+``$(simp~only\colon~split\colon~\vec a)$''.  There is also a separate $split$
+method available for single-step case splitting.
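For instance, a hypothetical unstructured script fragment might use these options as follows (assuming Isabelle/HOL, with $a$ and $b$ standing for existing rewrite rules):
\begin{ttbox}
apply (simp add: a b)
apply (simp (no_asm) add: a)
apply (simp only: split: split_if)
\end{ttbox}
The last form simulates repeated application of \texttt{split_tac} with HOL's \texttt{split_if} rule.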
\subsubsection{Declaring rules}

@@ -729,14 +737,14 @@
 \end{rail}

\begin{descr}
-
+
\item [$simplified~\vec a$] causes a theorem to be simplified, either by exactly the specified rules $\vec a$, or the implicit Simplifier context if no arguments are given.  The result is fully simplified by default, including assumptions and conclusion; the options $no_asm$ etc.\ tune the Simplifier in the same way as for the $simp$ method (see \S\ref{sec:simp}).
-
+
 Note that forward simplification restricts the simplifier to its most basic operation of term rewriting; solver and looper tactics \cite{isabelle-ref} are \emph{not} involved here.  The $simplified$ attribute should only be
@@ -768,18 +776,18 @@
 way for automated normalization (see \S\ref{sec:simplifier}).

\begin{descr}
-
-\item [$subst~thm$] performs a single substitution step using rule $thm$,
-  which may be either a meta or object equality.
-
-\item [$hypsubst$] performs substitution using some assumption.  Note that
-  this only works for equations of the form $x = t$ where $x$ is a free or
-  bound variable!
-
-\item [$split~thms$] performs single-step case splitting using rules $thms$.
+
+\item [$subst~a$] performs a single substitution step using rule $a$, which
+  may be either a meta or object equality.
+
+\item [$hypsubst$] performs substitution using some assumption; this only
+  works for equations of the form $x = t$ where $x$ is a free or bound
+  variable.
+
+\item [$split~\vec a$] performs single-step case splitting using rules $\vec a$.
 By default, splitting is performed in the conclusion of a goal; the $asm$ option indicates to operate on assumptions instead.
-
+
 Note that the $simp$ method already involves repeated application of split rules as declared in the current context (see \S\ref{sec:simp}).
\end{descr}

@@ -804,29 +812,26 @@
 \end{rail}

\begin{descr}
-
+
\item [$rule$] as offered by the classical reasoner is a refinement over the primitive one (see \S\ref{sec:pure-meth-att}).  Both versions essentially work the same, but the classical version observes the classical rule context
-  in addition to the Isabelle/Pure one.
-
-  The library of common object logics (HOL, ZF, etc.) usually declare a rich
-  collection of classical rules (even if these perfectly OK from the
-  intuitionistic viewpoint), but only few declarations to the rule context of
-  Isabelle/Pure (\S\ref{sec:pure-meth-att}).
-
+  in addition to that of Isabelle/Pure.
+
+  Common object logics (HOL, ZF, etc.) declare a rich collection of classical
+  rules (even if these would qualify as intuitionistic ones), but only a few
+  declarations to the rule context of Isabelle/Pure
+  (\S\ref{sec:pure-meth-att}).
+
\item [$contradiction$] solves some goal by contradiction, deriving any result
-  from both $\neg A$ and $A$.  Facts, which are guaranteed to participate, may
-  appear in either order.
+  from both $\neg A$ and $A$.  Chained facts, which are guaranteed to
+  participate, may appear in either order.

\item [$intro$ and $elim$] repeatedly refine some goal by intro- or
-  elim-resolution, after having inserted any facts.  Omitting the arguments
-  refers to any suitable rules declared in the context, otherwise only the
-  explicitly given ones may be applied.  The latter form admits better control
-  of what actually happens, thus it is very appropriate as an initial method
-  for $\PROOFNAME$ that splits up certain connectives of the goal, before
-  entering the actual sub-proof.
-
+  elim-resolution, after having inserted any chained facts.
Exactly the rules
+  given as arguments are taken into account; this allows fine-tuned
+  decomposition of a proof problem, in contrast to common automated tools.
+
\end{descr}

@@ -864,14 +869,11 @@
 \cite[\S11]{isabelle-ref} for more information.
\end{descr}

-Any of above methods support additional modifiers of the context of classical
-rules.  Their semantics is analogous to the attributes given in
-\S\ref{sec:classical-mod}.  Facts provided by forward chaining are
-inserted\footnote{These methods usually cannot make proper use of actual rules
-  inserted that way, though.} into the goal before doing the search.  The
-``!''~argument causes the full context of assumptions to be included as well.
-This is slightly less hazardous than for the Simplifier (see
-\S\ref{sec:simp}).
+Any of the above methods support additional modifiers of the context of
+classical rules.  Their semantics is analogous to the attributes given before.
+Facts provided by forward chaining are inserted into the goal before
+commencing proof search.  The ``!''~argument causes the full context of
+assumptions to be included as well.

\subsubsection{Combined automated methods}\label{sec:clasimp}

@@ -948,23 +950,22 @@
\item [$intro$, $elim$, and $dest$] declare introduction, elimination, and destruction rules, respectively.  By default, rules are considered as \emph{unsafe} (i.e.\ not applied blindly without backtracking), while a
-  single ``!'' classifies as \emph{safe}, and ``?'' as \emph{extra} (i.e.\ not
-  applied in the search-oriented automated methods, but only in single-step
-  methods such as $rule$).
+  single ``!'' classifies as \emph{safe}.  Rule declarations marked by ``?''
+  coincide with those of Isabelle/Pure, cf.\ \S\ref{sec:pure-meth-att} (i.e.\
+  are only applied in single steps of the $rule$ method).

\item [$rule~del$] deletes introduction, elimination, or destruction rules from the context.
-
+
-\item [$iff$] declares a logical equivalences to the Simplifier and the
+\item [$iff$] declares logical equivalences to the Simplifier and the
 Classical Reasoner at the same time.  Non-conditional rules result in a ``safe'' introduction and elimination pair; conditional ones are considered ``unsafe''.  Rules with negative conclusion are automatically inverted
-  (using $\neg$-elimination).
-
-  The ``?'' version of $iff$ declares rules to the Pure context only, and
-  omits the Simplifier declaration.  Thus the declaration does not have any
-  effect on automated proof tools, but only on the single-step $rule$ method
-  (see \S\ref{sec:misc-meth-att}).
+  (using $\neg$ elimination internally).
+
+  The ``?'' version of $iff$ declares rules to the Isabelle/Pure context only,
+  and omits the Simplifier declaration.
+
\end{descr}

@@ -978,13 +979,13 @@
 \end{matharray}

\begin{descr}
-
+
\item [$elim_format$] turns a destruction rule into elimination rule format; this operation is similar to the intuitionistic version (\S\ref{sec:misc-meth-att}), but each premise of the resulting rule acquires
-  an additional local fact of the negated main thesis -- according to the
+  an additional local fact of the negated main thesis; according to the
 classical principle $(\neg A \Imp A) \Imp A$.
-
+
\item [$swapped$] turns an introduction rule into an elimination rule, by resolving with the classical swap principle $(\neg B \Imp A) \Imp (\neg A \Imp B)$.

@@ -1016,13 +1017,13 @@
 The $\CASENAME$ command provides a shorthand to refer to certain parts of logical context symbolically.
Proof methods may provide an environment of named ``cases'' of the form $c\colon \vec x, \vec \phi$.  Then the effect of
-``$\CASE{c}$'' is that of ``$\FIX{\vec x}~\ASSUME{c}{\vec\phi}$''; term
-bindings may be covered as well, such as $\Var{case}$.
+``$\CASE{c}$'' is that of ``$\FIX{\vec x}~\ASSUME{c}{\vec\phi}$''.  Term
+bindings may be covered as well, such as $\Var{case}$ for the intended
+conclusion.

Normally the ``terminology'' of a case value (i.e.\ the parameters $\vec x$)
-are marked as hidden.  Using the alternative form ``$(\CASE{c}~\vec x)$''
-enables proof writers to choose their own naming for the subsequent proof
-text.
+is marked as hidden.  Using the explicit form ``$\CASE{(c~\vec x)}$'' enables
+proof writers to choose their own names for the subsequent proof text.

\medskip

@@ -1030,7 +1031,8 @@
 to peek at the current goal state, which is generally considered non-observable in Isar.  The text of the cases basically emerges from standard elimination or induction rules, which in turn are derived from previous theory
-specifications in a canonical way (say via $\isarkeyword{inductive}$).
+specifications in a canonical way (say from $\isarkeyword{inductive}$
+definitions).

Named cases may be exhibited in the current proof context only if both the proof method and the rules involved support this.  Case names and parameters
@@ -1042,7 +1044,7 @@
 \railterm{casenames}

\begin{rail}
-  'case' caseref | ('(' caseref ((name | underscore) +) ')')
+  'case' (caseref | '(' caseref ((name | underscore) +) ')')
 ;
 caseref: nameref attributes?
 ;
@@ -1056,35 +1058,35 @@
 \end{rail}

\begin{descr}
-
-\item [$\CASE{c}$] invokes a named local context $c\colon \vec x, \vec \phi$,
-  as provided by an appropriate proof method (such as $cases$ and $induct$,
-  see \S\ref{sec:cases-induct-meth}).  The command $\CASE{c}$ abbreviates
-  $\FIX{\vec x}~\ASSUME{c}{\vec\phi}$.
-
+
+\item [$\CASE{(c~\vec x)}$] invokes a named local context $c\colon \vec x,
+  \vec \phi$, as provided by an appropriate proof method (such as $cases$ and
+  $induct$, see \S\ref{sec:cases-induct-meth}).  The command ``$\CASE{(c~\vec
+  x)}$'' abbreviates ``$\FIX{\vec x}~\ASSUME{c}{\vec\phi}$''.
+
\item [$\isarkeyword{print_cases}$] prints all local contexts of the current state, using Isar proof language notation.  This is a diagnostic command; $undo$ does not apply.
-
+
\item [$case_names~\vec c$] declares names for the local contexts of premises of some theorem; $\vec c$ refers to the \emph{suffix} of the list of premises.
-
+
\item [$params~\vec p@1 \dots \vec p@n$] renames the innermost parameters of premises $1, \dots, n$ of some theorem.  An empty list of names may be given to skip positions, leaving the present parameters unchanged.
-
+
 Note that the default usage of case rules does \emph{not} directly expose parameters to the proof context (see also \S\ref{sec:cases-induct-meth}).
-
+
\item [$consumes~n$] declares the number of ``major premises'' of a rule, i.e.\ the number of facts to be consumed when it is applied by an appropriate proof method (cf.\ \S\ref{sec:cases-induct-meth}).  The default value of $consumes$ is $n = 1$, which is appropriate for the usual kind of cases and induction rules for inductive sets (cf.\
 \S\ref{sec:hol-inductive}).  Rules without any $consumes$ declaration given are treated as if $consumes~0$ had been specified.
- + Note that explicit $consumes$ declarations are only rarely needed; this is already taken care of automatically by the higher-level $cases$ and $induct$ declarations, see also \S\ref{sec:cases-induct-att}. @@ -1104,39 +1106,32 @@ and induction over datatypes, inductive sets, and recursive functions. The corresponding rules may be specified and instantiated in a casual manner. Furthermore, these methods provide named local contexts that may be invoked -via the $\CASENAME$ proof command within the subsequent proof text (cf.\ +via the $\CASENAME$ proof command within the subsequent proof text (cf.\ \S\ref{sec:rule-cases}). This accommodates compact proof texts even when reasoning about large specifications. -Note that the full spectrum of this generic functionality is currently only -supported by Isabelle/HOL, when used in conjunction with advanced definitional -packages (see especially \S\ref{sec:hol-datatype} and -\S\ref{sec:hol-inductive}). - \begin{rail} 'cases' spec ; 'induct' spec ; - spec: open? args rule? params? + spec: open? args rule? ; open: '(' 'open' ')' ; - args: (insts * 'and') + args: (insts * 'and') ; rule: ('type' | 'set') ':' nameref | 'rule' ':' thmref ; - params: 'of' ':' insts - ; \end{rail} \begin{descr} - -\item [$cases~insts~R~ps$] applies method $rule$ with an appropriate case + +\item [$cases~insts~R$] applies method $rule$ with an appropriate case distinction theorem, instantiated to the subjects $insts$. Symbolic case names are bound according to the rule's local contexts. - + The rule is determined as follows, according to the facts and arguments passed to the $cases$ method: \begin{matharray}{llll} @@ -1146,28 +1141,21 @@ \edrv a \in A & cases & \dots & \Text{inductive set elimination (of $A$)} \\ \dots & cases & \dots ~ R & \Text{explicit rule $R$} \\ \end{matharray} - + Several instantiations may be given, referring to the \emph{suffix} of premises of the case rule; within each premise, the \emph{prefix} of variables is instantiated. In most situations, only a single term needs to be specified; this refers to the first variable of the last premise (it is usually the same for all cases). - - Additional parameters may be specified as $ps$; these are applied after the - primary instantiation in the same manner as by the $of$ attribute (cf.\ - \S\ref{sec:pure-meth-att}). This feature is rarely needed in practice; a - typical application would be to specify additional arguments for rules - stemming from parameterized inductive definitions (see also - \S\ref{sec:hol-inductive}). - - The $open$ option causes the parameters of the new local contexts to be - exposed to the current proof context. Thus local variables stemming from + + The ``$(open)$'' option causes the parameters of the new local contexts to + be exposed to the current proof context. Thus local variables stemming from distant parts of the theory development may be introduced in an implicit manner, which can be quite confusing to the reader. Furthermore, this option may cause unwanted hiding of existing local variables, resulting in less robust proof texts. 
-
-\item [$induct~insts~R~ps$] is analogous to the $cases$ method, but refers to
+
+\item [$induct~insts~R$] is analogous to the $cases$ method, but refers to
 induction rules, which are determined as follows:
\begin{matharray}{llll}
\Text{facts} & & \Text{arguments} & \Text{rule} \\\hline
 & induct & P ~ x ~ \dots & \Text{datatype induction (of $x$)} \\
\edrv x \in A & induct & \dots & \Text{set induction (of $A$)} \\
\dots & induct & \dots ~ R & \Text{explicit rule $R$} \\
\end{matharray}
-
+
 Several instantiations may be given, each referring to some part of a mutual inductive definition or datatype --- only related partial induction rules may be used together, though.  Any of the lists of terms $P, x, \dots$ refers to the \emph{suffix} of variables present in the induction rule.  This enables the writer to specify only induction variables, or both predicates and variables, for example.
-
-  Additional parameters (including the $open$ option) may be given in the same
-  way as for $cases$, see above.
+
+  The ``$(open)$'' option works the same way as for $cases$.
\end{descr}

The above methods produce named local contexts (cf.\ \S\ref{sec:rule-cases}), as
-determined by the instantiated rule \emph{before} it has been applied to the
-internal proof state.\footnote{As a general principle, Isar proof text may
-  never refer to parts of proof states directly.}  Thus proper use of symbolic
-cases usually require the rule to be instantiated fully, as far as the
-emerging local contexts and subgoals are concerned.  In particular, for
-induction both the predicates and variables have to be specified.  Otherwise
-the $\CASENAME$ command would refuse to invoke cases containing schematic
-variables.  Furthermore the resulting local goal statement is bound to the
-term variable $\Var{case}$\indexisarvar{case} --- for each case where it is
-fully specified.
+determined by the instantiated rule as specified in the text.  Beyond that,
+the $induct$ method guesses further instantiations from the goal specification
+itself.  Any persisting unresolved schematic variables of the resulting rule
+will render the corresponding case invalid.  The term binding
+$\Var{case}$\indexisarvar{case} for the conclusion will be provided with each
+case, whenever that term is fully specified.

The $\isarkeyword{print_cases}$ command (\S\ref{sec:rule-cases}) prints all named cases present in the current proof state.

@@ -1224,17 +1207,11 @@
 Facts presented to either method are consumed according to the number of ``major premises'' of the rule involved (see also \S\ref{sec:cases-induct}),
-which is usually $0$ for plain cases and induction rules of datatypes etc.\
+which is usually $0$ for plain cases and induction rules of datatypes etc.\
 and $1$ for rules of inductive sets and the like.  The remaining facts are inserted into the goal verbatim before the actual $cases$ or $induct$ rule is applied (thus facts may even be passed through an induction).

-Note that whenever facts are present, the default rule selection scheme would
-provide a ``set'' rule only, with the first fact consumed and the rest
-inserted into the goal.  In order to pass all facts into a ``type'' rule
-instead, one would have to specify this explicitly, e.g.\ by appending
-``$type: name$'' to the method argument.
-

\subsubsection{Declaring rules}\label{sec:cases-induct-att}

@@ -1256,22 +1233,22 @@
 \end{rail}

\begin{descr}
-
+
\item [$\isarkeyword{print_induct_rules}$] prints cases and induct rules for sets and types of the current context.
-
+
\item [$cases$ and $induct$] (as attributes) augment the corresponding context of rules for reasoning about inductive sets and types, using the corresponding methods of the same name.  Certain definitional packages of object-logics usually declare emerging cases and induction rules as expected, so users rarely need to intervene.
-
+
 Manual rule declarations usually include the $case_names$ and $ps$ attributes to adjust names of cases and parameters of a rule (see \S\ref{sec:rule-cases}); the $consumes$ declaration is taken care of automatically: $consumes~0$ is specified for ``type'' rules and $consumes~1$ for ``set'' rules.
-
+
\end{descr}

%%% Local Variables:

diff -r 02bfd6d622ca -r 6faccf7d0f25 doc-src/IsarRef/logics.tex
--- a/doc-src/IsarRef/logics.tex Thu Mar 07 19:07:56 2002 +0100
+++ b/doc-src/IsarRef/logics.tex Thu Mar 07 22:52:07 2002 +0100
@@ -18,11 +18,13 @@
 The very starting point for any Isabelle object-logic is a ``truth judgment'' that links object-level statements to the meta-logic (with its minimal language of $prop$ that covers universal quantification $\Forall$ and
-implication $\Imp$).  Common object-logics are sufficiently expressive to
-\emph{internalize} rule statements over $\Forall$ and $\Imp$ within their own
-language.  This is useful in certain situations where a rule needs to be
-viewed as an atomic statement from the meta-level perspective (e.g.\ $\All x x
-\in A \Imp P(x)$ versus $\forall x \in A. P(x)$).
+implication $\Imp$).
+
+Common object-logics are sufficiently expressive to internalize rule
+statements over $\Forall$ and $\Imp$ within their own language.  This is
+useful in certain situations where a rule needs to be viewed as an atomic
+statement from the meta-level perspective, e.g.\ $\All x x \in A \Imp P(x)$
+versus $\forall x \in A. P(x)$.

From the following language elements, only the $atomize$ method and $rule_format$ attribute are occasionally required by end-users; the rest is
@@ -31,34 +33,36 @@
 realistic examples.

Generic tools may refer to the information provided by object-logic
-declarations internally (e.g.\ locales \S\ref{sec:locale}, or the Classical
-Reasoner \S\ref{sec:classical}).
+declarations internally.
+
+\railalias{ruleformat}{rule\_format}
+\railterm{ruleformat}

\begin{rail}
 'judgment' constdecl
 ;
- atomize ('(' 'full' ')')?
+ 'atomize' ('(' 'full' ')')?
 ;
 ruleformat ('(' 'noasm' ')')?
 ;
\end{rail}

\begin{descr}
-
+
\item [$\isarkeyword{judgment}~c::\sigma~~(mx)$] declares constant $c$ as the truth judgment of the current object-logic.  Its type $\sigma$ should specify a coercion of the category of object-level propositions to $prop$ of
-  the Pure meta-logic; the mixfix annotation $syn$ would typically just link
+  the Pure meta-logic; the mixfix annotation $(mx)$ would typically just link
 the object language (internally of syntactic category $logic$) with that of $prop$.  Only one $\isarkeyword{judgment}$ declaration may be given in any theory development.
-
+
\item [$atomize$] (as a method) rewrites any non-atomic premises of a sub-goal, using the meta-level equations declared via $atomize$ (as an attribute) beforehand.  As a result, heavily nested goals become amenable to fundamental operations such as resolution (cf.\ the $rule$ method) and proof-by-assumption (cf.\ $assumption$).
Giving the ``$(full)$'' option - here means to turn the subgoal into an object-statement (if possible), + here means to turn the whole subgoal into an object-statement (if possible), including the outermost parameters and assumptions as well. A typical collection of $atomize$ rules for a particular object-logic would @@ -106,7 +110,7 @@ \item [$\isarkeyword{typedecl}~(\vec\alpha)t$] is similar to the original $\isarkeyword{typedecl}$ of Isabelle/Pure (see \S\ref{sec:types-pure}), but - also declares type arity $t :: (term, \dots, term) term$, making $t$ an + also declares type arity $t :: (type, \dots, type) type$, making $t$ an actual HOL type constructor. \item [$\isarkeyword{typedef}~(\vec\alpha)t = A$] sets up a goal stating @@ -120,21 +124,22 @@ $Abs_t$ (this may be changed via an explicit $\isarkeyword{morphisms}$ declaration). - Theorems $Rep_t$, $Rep_inverse$, and $Abs_inverse$ provide the most basic - characterization as a corresponding injection/surjection pair (in both + Theorems $Rep_t$, $Rep_t_inverse$, and $Abs_t_inverse$ provide the most + basic characterization as a corresponding injection/surjection pair (in both directions). Rules $Rep_t_inject$ and $Abs_t_inject$ provide a slightly - more comfortable view on the injectivity part, suitable for automated proof - tools (e.g.\ in $simp$ or $iff$ declarations). Rules $Rep_t_cases$, - $Rep_t_induct$, and $Abs_t_cases$, $Abs_t_induct$ provide alternative views - on surjectivity; these are already declared as type or set rules for the - generic $cases$ and $induct$ methods. + more convenient view on the injectivity part, suitable for automated proof + tools (e.g.\ in $simp$ or $iff$ declarations). Rules + $Rep_t_cases/Rep_t_induct$, and $Abs_t_cases/Abs_t_induct$ provide + alternative views on surjectivity; these are already declared as set or type + rules for the generic $cases$ and $induct$ methods. \end{descr} -Raw type declarations are rarely used in practice; the main application is -with experimental (or even axiomatic!) theory fragments. Instead of primitive -HOL type definitions, user-level theories usually refer to higher-level -packages such as $\isarkeyword{record}$ (see \S\ref{sec:hol-record}) or -$\isarkeyword{datatype}$ (see \S\ref{sec:hol-datatype}). +Note that raw type declarations are rarely used in practice; the main +application is with experimental (or even axiomatic!) theory fragments. +Instead of primitive HOL type definitions, user-level theories usually refer +to higher-level packages such as $\isarkeyword{record}$ (see +\S\ref{sec:hol-record}) or $\isarkeyword{datatype}$ (see +\S\ref{sec:hol-datatype}). \subsection{Adhoc tuples} @@ -153,13 +158,12 @@ \end{rail} \begin{descr} - + \item [$split_format~\vec p@1 \dots \vec p@n$] puts expressions of low-level tuple types into canonical form as specified by the arguments given; $\vec - p@i$ refers to occurrences in premise $i$ of the rule. The - $split_format~(complete)$ form causes \emph{all} arguments in function - applications to be represented canonically according to their tuple type - structure. + p@i$ refers to occurrences in premise $i$ of the rule. The $(complete)$ + option causes \emph{all} arguments in function applications to be + represented canonically according to their tuple type structure. Note that these operations tend to invent funny names for new local parameters to be introduced. 
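For illustration, a hypothetical application of $split_format$ (assuming an existing rule \texttt{step} with tupled arguments in its premises):
\begin{ttbox}
lemmas step' = step [split_format (complete)]
\end{ttbox}
This derives a variant of \texttt{step} where all arguments of tuple type are expanded into their components, at the cost of the invented parameter names mentioned above.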
@@ -169,8 +173,8 @@ \subsection{Records}\label{sec:hol-record} -In principle, records merely generalize the concept of tuples where components -may be addressed by labels instead of just position. The logical +In principle, records merely generalize the concept of tuples, where +components may be addressed by labels instead of just position. The logical infrastructure of records in Isabelle/HOL is slightly more advanced, though, supporting truly extensible record schemes. This admits operations that are polymorphic with respect to record extension, yielding ``object-oriented'' @@ -203,8 +207,8 @@ ``$\more$'' notation (which is actually part of the syntax). The improper field ``$\more$'' of a record scheme is called the \emph{more part}. Logically it is just a free variable, which is occasionally referred to as -\emph{row variable} in the literature. The more part of a record scheme may -be instantiated by zero or more further components. For example, the above +``row variable'' in the literature. The more part of a record scheme may be +instantiated by zero or more further components. For example, the previous scheme may get instantiated to $\record{x = a\fs y = b\fs z = c\fs \more = m'}$, where $m'$ refers to a different more part. Fixed records are special instances of record schemes, where ``$\more$'' is properly terminated by the @@ -295,11 +299,11 @@ reverse than in the actual term. Since repeated updates are just function applications, fields may be freely permuted in $\record{x \asn a\fs y \asn b\fs z \asn c}$, as far as logical equality is concerned. Thus -commutativity of updates can be proven within the logic for any two fields, -but not as a general theorem: fields are not first-class values. +commutativity of independent updates can be proven within the logic for any +two fields, but not as a general theorem. \medskip The \textbf{make} operation provides a cumulative record constructor -functions: +function: \begin{matharray}{lll} t{\dtt}make & \ty & \vec\sigma \To \record{\vec c \ty \vec \sigma} \\ \end{matharray} @@ -336,25 +340,26 @@ \record{\vec d \ty \vec \rho, \vec c \ty \vec\sigma} \\ \end{matharray} -\noindent Note that $t{\dtt}make$ and $t{\dtt}fields$ are actually coincide for root records. +\noindent Note that $t{\dtt}make$ and $t{\dtt}fields$ actually coincide for root records. \subsubsection{Derived rules and proof tools}\label{sec:hol-record-thms} The record package proves several results internally, declaring these facts to appropriate proof tools. This enables users to reason about record structures -quite comfortably. Assume that $t$ is a record type as specified above. +quite conveniently. Assume that $t$ is a record type as specified above. \begin{enumerate} - + \item Standard conversions for selectors or updates applied to record constructor terms are made part of the default Simplifier context; thus proofs by reduction of basic operations merely require the $simp$ method - without further arguments. These rules are available as $t{\dtt}simps$. - + without further arguments. These rules are available as $t{\dtt}simps$, + too. + \item Selectors applied to updated records are automatically reduced by an - internal simplification procedure, which is also part of the default - Simplifier context. + internal simplification procedure, which is also part of the standard + Simplifier setup. 
\item Inject equations of a form analogous to $((x, y) = (x', y')) \equiv x=x' \conj y=y'$ are declared to the Simplifier and Classical Reasoner as $iff$ @@ -368,10 +373,10 @@ terms are provided both in $cases$ and $induct$ format (cf.\ the generic proof methods of the same name, \S\ref{sec:cases-induct}). Several variations are available, for fixed records, record schemes, more parts etc. - + The generic proof methods are sufficiently smart to pick the most sensible rule according to the type of the indicated record expression: users just - need to apply something like ``$(cases r)$'' to a certain proof problem. + need to apply something like ``$(cases~r)$'' to a certain proof problem. \item The derived record operations $t{\dtt}make$, $t{\dtt}fields$, $t{\dtt}extend$, $t{\dtt}truncate$ are \emph{not} treated automatically, but @@ -471,7 +476,7 @@ \item [$\isarkeyword{recdef}$] defines general well-founded recursive functions (using the TFL package), see also \cite{isabelle-HOL}. The - $(permissive)$ option tells TFL to recover from failed proof attempts, + ``$(permissive)$'' option tells TFL to recover from failed proof attempts, returning unfinished results. The $recdef_simp$, $recdef_cong$, and $recdef_wf$ hints refer to auxiliary rules to be used in the internal automated proof process of TFL. Additional $clasimpmod$ declarations (cf.\ @@ -496,8 +501,8 @@ $\isarkeyword{recdef}$ are numbered (starting from $1$). The equations provided by these packages may be referred later as theorem list -$f\mathord.simps$, where $f$ is the (collective) name of the functions -defined. Individual equations may be named explicitly as well; note that for +$f{\dtt}simps$, where $f$ is the (collective) name of the functions defined. +Individual equations may be named explicitly as well; note that for $\isarkeyword{recdef}$ each specification given by the user may result in several theorems. @@ -631,10 +636,10 @@ both goal addressing and dynamic instantiation. Note that named rule cases are \emph{not} provided as would be by the proper $induct$ and $cases$ proof methods (see \S\ref{sec:cases-induct}). - + \item [$ind_cases$ and $\isarkeyword{inductive_cases}$] provide an interface - to the \texttt{mk_cases} operation. Rules are simplified in an unrestricted - forward manner. + to the internal \texttt{mk_cases} operation. Rules are simplified in an + unrestricted forward manner. While $ind_cases$ is a proof method to apply the result immediately as elimination rules, $\isarkeyword{inductive_cases}$ provides case split @@ -648,7 +653,7 @@ from executable specifications, both functional and relational programs. Isabelle/HOL instantiates these mechanisms in a way that is amenable to end-user applications. See \cite{isabelle-HOL} for further information (this -actually covers the new-style theory format). +actually covers the new-style theory format as well). \indexisarcmd{generate-code}\indexisarcmd{consts-code}\indexisarcmd{types-code} \indexisaratt{code} @@ -727,10 +732,10 @@ dtrules: 'distinct' thmrefs 'inject' thmrefs 'induction' thmrefs \end{rail} -Recursive domains in HOLCF are analogous to datatypes in classical HOL (cf.\ -\S\ref{sec:hol-datatype}). Mutual recursive is supported, but no nesting nor +Recursive domains in HOLCF are analogous to datatypes in classical HOL (cf.\ +\S\ref{sec:hol-datatype}). Mutual recursion is supported, but no nesting nor arbitrary branching. 
Domain constructors may be strict (default) or lazy, the -latter admits to introduce infinitary objects in the typical LCF manner (e.g.\ +latter admits to introduce infinitary objects in the typical LCF manner (e.g.\ lazy lists). See also \cite{MuellerNvOS99} for a general discussion of HOLCF domains. @@ -742,7 +747,7 @@ The ZF logic is essentially untyped, so the concept of ``type checking'' is performed as logical reasoning about set-membership statements. A special method assists users in this task; a version of this is already declared as a -``solver'' in the default Simplifier context. +``solver'' in the standard Simplifier setup. \indexisarcmd{print-tcset}\indexisaratt{typecheck}\indexisaratt{TC} @@ -779,7 +784,7 @@ In ZF everything is a set. The generic inductive package also provides a specific view for ``datatype'' specifications. Coinductive definitions are -available as well. +available in both cases, too. \indexisarcmdof{ZF}{inductive}\indexisarcmdof{ZF}{coinductive} \indexisarcmdof{ZF}{datatype}\indexisarcmdof{ZF}{codatatype} diff -r 02bfd6d622ca -r 6faccf7d0f25 doc-src/IsarRef/refcard.tex --- a/doc-src/IsarRef/refcard.tex Thu Mar 07 19:07:56 2002 +0100 +++ b/doc-src/IsarRef/refcard.tex Thu Mar 07 22:52:07 2002 +0100 @@ -48,7 +48,7 @@ \HENCENAME & \equiv & \THEN~\HAVENAME \\ \THUSNAME & \equiv & \THEN~\SHOWNAME \\ \FROM{\vec a} & \equiv & \NOTE{this}{\vec a}~\THEN \\ - \WITH{\vec a} & \equiv & \FROM{\vec a~this} \\[1ex] + \WITH{\vec a} & \equiv & \FROM{\vec a~\AND~this} \\[1ex] \FROM{this} & \equiv & \THEN \\ \FROM{this}~\HAVENAME & \equiv & \HENCENAME \\ \FROM{this}~\SHOWNAME & \equiv & \THUSNAME \\ @@ -120,18 +120,18 @@ \multicolumn{2}{l}{\textbf{Operations}} \\[0.5ex] $OF~\vec a$ & rule applied to facts (skipping ``$_$'') \\ $of~\vec t$ & rule applied to terms (skipping ``$_$'') \\ - $symmetric$ & resolution with symmetry of equality \\ - $THEN~b$ & resolution with other rule \\ + $symmetric$ & resolution with symmetry rule \\ + $THEN~b$ & resolution with another rule \\ $rule_format$ & result put into standard rule format \\ - $elim_format$ & destruct rule turned into elimination rule format \\ - $standard$ & result put into standard form \\[1ex] + $elim_format$ & destruct rule turned into elimination rule format \\[1ex] \multicolumn{2}{l}{\textbf{Declarations}} \\[0.5ex] $simp$ & Simplifier rule \\ - $intro$, $elim$, $dest$ & Classical Reasoner rule \\ + $intro$, $elim$, $dest$ & Pure or Classical Reasoner rule \\ $iff$ & Simplifier + Classical Reasoner rule \\ $split$ & case split rule \\ $trans$ & transitivity rule \\ + $sym$ & symmetry rule \\ \end{tabular}