
\chapter{Generic Tools and Packages}\label{ch:gen-tools}

\section{Axiomatic Type Classes}\label{sec:axclass}

\indexisarcmd{axclass}\indexisarcmd{instance}\indexisarmeth{intro-classes}
\begin{matharray}{rcl}
  \isarcmd{axclass} & : & \isartrans{theory}{theory} \\
  \isarcmd{instance} & : & \isartrans{theory}{proof(prove)} \\
  intro_classes & : & \isarmeth \\
\end{matharray}

Axiomatic type classes are provided by Isabelle/Pure as a \emph{definitional}
interface to type classes (cf.~\S\ref{sec:classes}).  Thus any object logic
may make use of this light-weight mechanism of abstract theories
\cite{Wenzel:1997:TPHOL}.  There is also a tutorial on \emph{Using Axiomatic
  Type Classes in Isabelle} that is part of the standard Isabelle
documentation.
%FIXME cite

\begin{rail}
  'axclass' classdecl (axmdecl prop comment? +)
  ;
  'instance' (nameref '<' nameref | nameref '::' simplearity) comment?
  ;
\end{rail}

\begin{descr}
\item [$\isarkeyword{axclass}~c < \vec c~axms$] defines an axiomatic type
  class as the intersection of existing classes, with additional axioms
  holding.  Class axioms may not contain more than one type variable.  The
  class axioms (with implicit sort constraints added) are bound to the given
  names.  Furthermore a class introduction rule is generated, which is
  employed by method $intro_classes$ to support instantiation proofs of this
  class.
  
\item [$\isarkeyword{instance}~c@1 < c@2$ and $\isarkeyword{instance}~t ::
  (\vec s)c$] set up a goal stating a class relation or type arity.  The proof
  would usually proceed by $intro_classes$, and then establish the
  characteristic theorems of the type classes involved.  After finishing the
  proof, the theory will be augmented by a type signature declaration
  corresponding to the resulting theorem.
\item [$intro_classes$] repeatedly expands all class introduction rules of
  this theory.
\end{descr}
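
\medskip

For example, a class of ``semigroups'' over the overloaded syntax $+$ might
be introduced and instantiated roughly as follows.  This is only a sketch,
assuming Isabelle/HOL with its type class $plus$ and library theorem
$add_assoc$ on natural numbers.

\begin{ttbox}
axclass semigroup < plus
  assoc: "(x + y) + z = x + (y + z)"

instance nat :: semigroup
proof (intro_classes)
  fix x y z :: nat
  show "(x + y) + z = x + (y + z)" by (rule add_assoc)
qed
\end{ttbox}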


\section{Calculational proof}\label{sec:calculation}

\indexisarcmd{also}\indexisarcmd{finally}
\indexisarcmd{moreover}\indexisarcmd{ultimately}
\indexisaratt{trans}
\begin{matharray}{rcl}
  \isarcmd{also} & : & \isartrans{proof(state)}{proof(state)} \\
  \isarcmd{finally} & : & \isartrans{proof(state)}{proof(chain)} \\
  \isarcmd{moreover} & : & \isartrans{proof(state)}{proof(state)} \\
  \isarcmd{ultimately} & : & \isartrans{proof(state)}{proof(chain)} \\
  trans & : & \isaratt \\
\end{matharray}

Calculational proof is forward reasoning with implicit application of
transitivity rules (such as those of $=$, $\le$, $<$).  Isabelle/Isar maintains
an auxiliary register $calculation$\indexisarthm{calculation} for accumulating
results obtained by transitivity composed with the current result.  Command
$\ALSO$ updates $calculation$ involving $this$, while $\FINALLY$ exhibits the
final $calculation$ by forward chaining towards the next goal statement.  Both
commands require valid current facts, i.e.\ may occur only after commands that
produce theorems such as $\ASSUMENAME$, $\NOTENAME$, or some finished proof of
$\HAVENAME$, $\SHOWNAME$ etc.  The $\MOREOVER$ and $\ULTIMATELY$ commands are
similar to $\ALSO$ and $\FINALLY$, but only collect further results in
$calculation$ without applying any rules yet.

Also note that the automatic term abbreviation ``$\dots$'' has its canonical
application with calculational proofs.  It refers to the argument\footnote{The
  argument of a curried infix expression is its right-hand side.} of the
preceding statement.

Isabelle/Isar calculations are implicitly subject to block structure in the
sense that new threads of calculational reasoning are commenced for any new
block (as opened by a local goal, for example).  This means that, apart from
being able to nest calculations, there is no separate \emph{begin-calculation}
command required.

\medskip

The Isar calculation proof commands may be defined as
follows:\footnote{Internal bookkeeping such as proper handling of
  block-structure has been suppressed.}
\begin{matharray}{rcl}
  \ALSO@0 & \equiv & \NOTE{calculation}{this} \\
  \ALSO@{n+1} & \equiv & \NOTE{calculation}{trans~[OF~calculation~this]} \\
  \FINALLY & \equiv & \ALSO~\FROM{calculation} \\
  \MOREOVER & \equiv & \NOTE{calculation}{calculation~this} \\
  \ULTIMATELY & \equiv & \MOREOVER~\FROM{calculation} \\
\end{matharray}

\begin{rail}
  ('also' | 'finally') transrules? comment?
  ;
  ('moreover' | 'ultimately') comment?
  ;
  'trans' (() | 'add' | 'del')
  ;

  transrules: '(' thmrefs ')' interest?
  ;
\end{rail}

\begin{descr}
\item [$\ALSO~(\vec a)$] maintains the auxiliary $calculation$ register as
  follows.  The first occurrence of $\ALSO$ in some calculational thread
  initializes $calculation$ by $this$. Any subsequent $\ALSO$ on the same
  level of block-structure updates $calculation$ by some transitivity rule
  applied to $calculation$ and $this$ (in that order).  Transitivity rules are
  picked from the current context plus those given as explicit arguments (the
  latter have precedence).
  
\item [$\FINALLY~(\vec a)$] maintains $calculation$ in the same way as
  $\ALSO$ and concludes the current calculational thread.  The final result
  is exhibited as fact for forward chaining towards the next goal. Basically,
  $\FINALLY$ just abbreviates $\ALSO~\FROM{calculation}$.  Note that
  ``$\FINALLY~\SHOW{}{\Var{thesis}}~\DOT$'' and
  ``$\FINALLY~\HAVE{}{\phi}~\DOT$'' are typical idioms for concluding
  calculational proofs.
  
\item [$\MOREOVER$ and $\ULTIMATELY$] are analogous to $\ALSO$ and $\FINALLY$,
  but collect results only, without applying rules.
  
\item [$trans$] declares theorems as transitivity rules.
\end{descr}
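
\medskip

A simple example of a calculation over equations reads as follows (assuming
Isabelle/HOL arithmetic, although any transitivity rules declared in the
context would do):

\begin{ttbox}
lemma "(a::nat) = b ==> b = c ==> c = d ==> a = d"
proof -
  assume ab: "a = b" and bc: "b = c" and cd: "c = d"
  have "a = b" by (rule ab)
  also have "... = c" by (rule bc)
  also have "... = d" by (rule cd)
  finally show "a = d" .
qed
\end{ttbox}

Note how ``$\dots$'' refers to the right-hand side of the previous statement,
and how the final result emerges by transitivity.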


\section{Named local contexts (cases)}\label{sec:cases}

\indexisarcmd{case}\indexisarcmd{print-cases}
\indexisaratt{case-names}\indexisaratt{params}
\begin{matharray}{rcl}
  \isarcmd{case} & : & \isartrans{proof(state)}{proof(state)} \\
  \isarcmd{print_cases}^* & : & \isarkeep{proof} \\
  case_names & : & \isaratt \\
  params & : & \isaratt \\
\end{matharray}

Basically, Isar proof contexts are built up explicitly using commands like
$\FIXNAME$, $\ASSUMENAME$ etc.\ (see \S\ref{sec:proof-context}).  In typical
verification tasks this can become hard to manage, though.  In particular, a
large number of local contexts may emerge from case analysis or induction over
inductive sets and types.

\medskip

The $\CASENAME$ command provides a shorthand to refer to certain parts of
logical context symbolically.  Proof methods may provide an environment of
named ``cases'' of the form $c\colon \vec x, \vec \phi$.  Then the effect of
$\CASE{c}$ is exactly the same as $\FIX{\vec x}~\ASSUME{c}{\vec\phi}$.

It is important to note that $\CASENAME$ does \emph{not} provide any means to
peek at the current goal state, which is treated as strictly non-observable in
Isar!  Instead, the cases considered here usually emerge in a canonical way
from certain pieces of specification that appear in the theory somewhere else
(e.g.\ in an inductive definition, or recursive function).  See also
\S\ref{sec:induct-method} for more details of how this works in HOL.

\medskip

Named cases may be exhibited in the current proof context only if both the
proof method and the rules involved support this.  Case names and parameters
of basic rules may be declared by hand as well, by using appropriate
attributes.  Thus variant versions of rules that have been derived manually
may be used in advanced case analysis later.

\railalias{casenames}{case\_names}
\railterm{casenames}

\begin{rail}
  'case' nameref attributes?
  ;
  casenames (name + )
  ;
  'params' ((name * ) + 'and')
  ;
\end{rail}
%FIXME bug in rail

\begin{descr}
\item [$\CASE{c}$] invokes a named local context $c\colon \vec x, \vec \phi$,
  as provided by an appropriate proof method (such as $cases$ and $induct$ in
  Isabelle/HOL, see \S\ref{sec:induct-method}).  The command $\CASE{c}$
  abbreviates $\FIX{\vec x}~\ASSUME{c}{\vec\phi}$.
\item [$\isarkeyword{print_cases}$] prints all local contexts of the current
  state, using Isar proof language notation.  This is a diagnostic command;
  $undo$ does not apply.
\item [$case_names~\vec c$] declares names for the local contexts of premises
  of some theorem; $\vec c$ refers to the \emph{suffix} of the list of
  premises.
\item [$params~\vec p@1 \dots \vec p@n$] renames the innermost parameters of
  premises $1, \dots, n$ of some theorem.  An empty list of names may be given
  to skip positions, leaving the present parameters unchanged.
\end{descr}
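
\medskip

For example, a case analysis on lists in Isabelle/HOL might read as follows;
here the $cases$ method (see \S\ref{sec:induct-method}) provides the named
cases $Nil$ and $Cons$ stemming from the datatype definition.

\begin{ttbox}
lemma "(xs::'a list) = [] | (EX y ys. xs = y # ys)"
proof (cases xs)
  case Nil
  thus ?thesis by simp
next
  case Cons
  thus ?thesis by blast
qed
\end{ttbox}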


\section{Generalized existence}

\indexisarcmd{obtain}
\begin{matharray}{rcl}
  \isarcmd{obtain} & : & \isartrans{proof(prove)}{proof(state)} \\
\end{matharray}

Generalized existence reasoning means that additional elements with certain
properties are introduced, together with a soundness proof of that context
change (the rest of the main goal is left unchanged).

Syntactically, the $\OBTAINNAME$ language element is like an initial proof
method applied to the present goal, followed by a proof of its additional
claim,
followed by the actual context commands (using the syntax of $\FIXNAME$ and
$\ASSUMENAME$, see \S\ref{sec:proof-context}).

\begin{rail}
  'obtain' (vars + 'and') comment? \\ 'where' (assm comment? + 'and')
  ;
\end{rail}

$\OBTAINNAME$ is defined as a derived Isar command as follows; here the
preceding goal shall be $\psi$, with (optional) facts $\vec b$ indicated for
forward chaining.
\begin{matharray}{l}
  \OBTAIN{\vec x}{a}{\vec \phi}~~\langle proof\rangle \equiv {} \\[0.5ex]
  \quad \PROOF{succeed} \\
  \qquad \DEF{}{thesis \equiv \psi} \\
  \qquad \PRESUME{that}{\All{\vec x} \vec\phi \Imp thesis} \\
  \qquad \FROM{\vec b}~\SHOW{}{thesis}~~\langle proof\rangle \\
  \quad \NEXT \\
  \qquad \FIX{\vec x}~\ASSUME{a}{\vec\phi} \\
\end{matharray}

Typically, the soundness proof is relatively straightforward, often just by
canonical automated tools such as $\BY{simp}$ (see \S\ref{sec:simp}) or
$\BY{blast}$ (see \S\ref{sec:classical-auto}).  Note that the ``$that$''
presumption above is usually declared as a simplification and (unsafe)
introduction rule, depending on the object-logic's policy.\footnote{HOL and
  HOLCF do this already.}

The original goal statement is wrapped into a local definition in order to
avoid any automated tools descending into it.  Usually, any statement would
admit the intended reduction anyway; only in very rare cases $thesis_def$ has
to be expanded to complete the soundness proof.

\medskip

In a sense, $\OBTAINNAME$ represents at the level of Isar proofs what would be
meta-logical existential quantifiers and conjunctions.  This concept has a
broad range of useful applications, ranging from plain elimination (or even
introduction) of object-level existentials and conjunctions, to elimination
over results of symbolic evaluation of recursive definitions, for example.


\section{Miscellaneous methods and attributes}

\indexisarmeth{unfold}\indexisarmeth{fold}
\indexisarmeth{erule}\indexisarmeth{drule}\indexisarmeth{frule}
\indexisarmeth{fail}\indexisarmeth{succeed}
\begin{matharray}{rcl}
  unfold & : & \isarmeth \\
  fold & : & \isarmeth \\[0.5ex]
  erule^* & : & \isarmeth \\
  drule^* & : & \isarmeth \\
  frule^* & : & \isarmeth \\[0.5ex]
  succeed & : & \isarmeth \\
  fail & : & \isarmeth \\
\end{matharray}

\begin{rail}
  ('fold' | 'unfold' | 'erule' | 'drule' | 'frule') thmrefs
  ;
\end{rail}

\begin{descr}
\item [$unfold~\vec a$ and $fold~\vec a$] expand and fold back again the given
  meta-level definitions throughout all goals; any facts provided are inserted
  into the goal and subject to rewriting as well.
\item [$erule~\vec a$, $drule~\vec a$, and $frule~\vec a$] are similar to the
  basic $rule$ method (see \S\ref{sec:pure-meth-att}), but apply rules by
  elim-resolution, destruct-resolution, and forward-resolution, respectively
  \cite{isabelle-ref}.  These are improper methods, mainly intended for
  experimentation and emulating tactic scripts.
  
  Different modes of basic rule application are usually expressed in Isar at
  the proof language level, rather than via implicit proof state
  manipulations.  For example, a proper single-step elimination would be done
  using the basic $rule$ method, with forward chaining of current facts.
\item [$succeed$] yields a single (unchanged) result; it is the identity of
  the ``\texttt{,}'' method combinator (cf.\ \S\ref{sec:syn-meth}).
\item [$fail$] yields an empty result sequence; it is the identity of the
  ``\texttt{|}'' method combinator (cf.\ \S\ref{sec:syn-meth}).
\end{descr}
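
\medskip

For example, the following emulates a simple tactic script step; note that
structured Isar text would rather chain the conjunction as a fact and apply
the basic $rule$ method instead.

\begin{ttbox}
lemma "A & B ==> B"
  by (erule conjunct2)
\end{ttbox}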


\indexisaratt{standard}
\indexisaratt{elimify}

\indexisaratt{RS}\indexisaratt{COMP}
\indexisaratt{where}
\indexisaratt{tag}\indexisaratt{untag}
\indexisaratt{transfer}
\indexisaratt{export}
\indexisaratt{unfold}\indexisaratt{fold}
\begin{matharray}{rcl}
  tag & : & \isaratt \\
  untag & : & \isaratt \\[0.5ex]
  RS & : & \isaratt \\
  COMP & : & \isaratt \\[0.5ex]
  where & : & \isaratt \\[0.5ex]
  unfold & : & \isaratt \\
  fold & : & \isaratt \\[0.5ex]
  standard & : & \isaratt \\
  elimify & : & \isaratt \\
  export^* & : & \isaratt \\
  transfer & : & \isaratt \\[0.5ex]
\end{matharray}

\begin{rail}
  'tag' (nameref+)
  ;
  'untag' name
  ;
  ('RS' | 'COMP') nat? thmref
  ;
  'where' (name '=' term * 'and')
  ;
  ('unfold' | 'fold') thmrefs
  ;
\end{rail}

\begin{descr}
\item [$tag~name~args$ and $untag~name$] add and remove $tags$ of some
  theorem.  Tags may be any list of strings that serve as comments for some
  tools (e.g.\ $\LEMMANAME$ causes the tag ``$lemma$'' to be added to the
  result).  The first string is considered the tag name, the rest its
  arguments.  Note that $untag$ removes any tags of the same name.
\item [$RS~n~a$ and $COMP~n~a$] compose rules.  $RS$ resolves with the $n$-th
  premise of $a$; $COMP$ is a version of $RS$ that skips the automatic lifting
  process that is normally intended (cf.\ \texttt{RS} and \texttt{COMP} in
  \cite[\S5]{isabelle-ref}).
\item [$where~\vec x = \vec t$] performs named instantiation of schematic
  variables occurring in a theorem.  Unlike instantiation tactics (such as
  \texttt{res_inst_tac}, see \cite{isabelle-ref}), actual schematic variables
  have to be specified (e.g.\ $\Var{x@3}$).
  
\item [$unfold~\vec a$ and $fold~\vec a$] expand and fold back again the given
  meta-level definitions throughout a rule.
 
\item [$standard$] puts a theorem into the standard form of object-rules, just
  as the ML function \texttt{standard} (see \cite[\S5]{isabelle-ref}).
  
\item [$elimify$] turns a destruction rule into an elimination rule, just as
  the ML function \texttt{make\_elim} (see \cite{isabelle-ref}).
  
\item [$export$] lifts a local result out of the current proof context,
  generalizing all fixed variables and discharging all assumptions.  Note that
  proper incremental export is already done as part of the basic Isar
  machinery.  This attribute is mainly for experimentation.
  
\item [$transfer$] promotes a theorem to the current theory context, which has
  to enclose the former one.  This is done automatically whenever rules are
  joined by inference.

\end{descr}
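
\medskip

For example, variant rules may be produced from library theorems by attribute
expressions such as these (the names $imp_conj_mp$ and $conjunct1E$ are
hypothetical; $mp$ and $conjunct1$ refer to the usual Isabelle/HOL rules):

\begin{ttbox}
lemmas imp_conj_mp = conjunct1 [RS mp]
lemmas conjunct1E = conjunct1 [elimify]
\end{ttbox}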


\section{The Simplifier}

\subsection{Simplification methods}\label{sec:simp}

\indexisarmeth{simp}\indexisarmeth{simp-all}
\begin{matharray}{rcl}
  simp & : & \isarmeth \\
  simp_all & : & \isarmeth \\
\end{matharray}

\railalias{simpall}{simp\_all}
\railterm{simpall}

\begin{rail}
  ('simp' | simpall) ('!' ?) (simpmod * )
  ;

  simpmod: ('add' | 'del' | 'only' | 'split' (() | 'add' | 'del') | 'other') ':' thmrefs
  ;
\end{rail}

\begin{descr}
\item [$simp$] invokes Isabelle's simplifier, after declaring additional rules
  according to the arguments given.  Note that the \railtterm{only} modifier
  first removes all other rewrite rules, congruences, and looper tactics
  (including splits), and then behaves like \railtterm{add}.
  
  The \railtterm{split} modifiers add or delete rules for the Splitter (see
  also \cite{isabelle-ref}); the default is to add.  This works only if the
  Simplifier method has been properly set up to include the Splitter (all major
  object logics such as HOL, HOLCF, FOL, ZF do this already).
  
  The \railtterm{other} modifier ignores its arguments.  Nevertheless,
  additional kinds of rules may be declared by including appropriate
  attributes in the specification.
\item [$simp_all$] is similar to $simp$, but acts on all goals.
\end{descr}

Internally, the $simp$ method is based on \texttt{asm_full_simp_tac}
\cite[\S10]{isabelle-ref}, but is much better behaved in practice.  Just the
local premises of the actual goal are involved by default.  Additional facts
may be inserted via forward-chaining (using $\THEN$, $\FROMNAME$ etc.).  The
full context of assumptions is only included in the $simp!$ version, which
should be used with some care, though.
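
For example, the following trivial Isabelle/HOL proof illustrates how chained
facts participate in simplification:

\begin{ttbox}
lemma "n = (0::nat) ==> n + m = m"
proof -
  assume "n = (0::nat)"
  thus "n + m = m" by simp
qed
\end{ttbox}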

Note that there is no separate $split$ method.  The effect of
\texttt{split_tac} can be simulated by $(simp~only\colon~split\colon~\vec a)$.


\subsection{Declaring rules}

\indexisarcmd{print-simpset}
\indexisaratt{simp}\indexisaratt{split}\indexisaratt{cong}
\begin{matharray}{rcl}
  print_simpset & : & \isarkeep{theory~|~proof} \\
  simp & : & \isaratt \\
  split & : & \isaratt \\
  cong & : & \isaratt \\
\end{matharray}

\begin{rail}
  ('simp' | 'split' | 'cong') (() | 'add' | 'del')
  ;
\end{rail}

\begin{descr}
\item [$print_simpset$] prints the collection of rules declared to the
  Simplifier, which is also known as ``simpset'' internally
  \cite{isabelle-ref}.  This is a diagnostic command; $undo$ does not apply.
\item [$simp$] declares simplification rules.
\item [$split$] declares split rules.
\item [$cong$] declares congruence rules.
\end{descr}
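
\medskip

For example, the following lemma is proved and declared as a default rewrite
rule at the same time (the name $add_eq_self_conv$ is invented here, assuming
Isabelle/HOL arithmetic):

\begin{ttbox}
lemma add_eq_self_conv [simp]: "(m + n = m) = (n = (0::nat))"
  by (induct m) simp_all
\end{ttbox}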


\subsection{Forward simplification}

\indexisaratt{simplify}\indexisaratt{asm-simplify}
\indexisaratt{full-simplify}\indexisaratt{asm-full-simplify}
\begin{matharray}{rcl}
  simplify & : & \isaratt \\
  asm_simplify & : & \isaratt \\
  full_simplify & : & \isaratt \\
  asm_full_simplify & : & \isaratt \\
\end{matharray}

These attributes apply the Simplifier to a theorem directly, i.e.\ provide
forward simplification; this should be used only very rarely.  There are no
separate options for declaring simplification rules locally.

See the ML functions of the same name in \cite[\S10]{isabelle-ref} for more
information.


\section{The Classical Reasoner}

\subsection{Basic methods}\label{sec:classical-basic}

\indexisarmeth{rule}\indexisarmeth{intro}
\indexisarmeth{elim}\indexisarmeth{default}\indexisarmeth{contradiction}
\begin{matharray}{rcl}
  rule & : & \isarmeth \\
  intro & : & \isarmeth \\
  elim & : & \isarmeth \\
  contradiction & : & \isarmeth \\
\end{matharray}

\begin{rail}
  ('rule' | 'intro' | 'elim') thmrefs?
  ;
\end{rail}

\begin{descr}
\item [$rule$] as offered by the classical reasoner is a refinement over the
  primitive one (see \S\ref{sec:pure-meth-att}).  If no rules are
  provided as arguments, it automatically determines elimination and
  introduction rules from the context (see also \S\ref{sec:classical-mod}).
  This is made the default method for basic proof steps, such as $\PROOFNAME$
  and ``$\DDOT$'' (two dots), see also \S\ref{sec:proof-steps} and
  \S\ref{sec:pure-meth-att}.
  
\item [$intro$ and $elim$] repeatedly refine some goal by intro- or
  elim-resolution, after having inserted any facts.  Omitting the arguments
  refers to any suitable rules declared in the context, otherwise only the
  explicitly given ones may be applied.  The latter form admits better control
  of what actually happens, thus it is very appropriate as an initial method
  for $\PROOFNAME$ that splits up certain connectives of the goal, before
  entering the actual sub-proof.
  
\item [$contradiction$] solves some goal by contradiction, deriving any result
  from both $\neg A$ and $A$.  Facts, which are guaranteed to participate, may
  appear in either order.
\end{descr}
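
\medskip

For example, the $contradiction$ method closes the following derivation; the
two facts involved may be given in either order.

\begin{ttbox}
lemma "~ A ==> A ==> B"
proof -
  assume "~ A" and "A"
  thus "B" by contradiction
qed
\end{ttbox}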


\subsection{Automated methods}\label{sec:classical-auto}

\indexisarmeth{blast}
\indexisarmeth{fast}\indexisarmeth{best}\indexisarmeth{slow}\indexisarmeth{slow-best}
\begin{matharray}{rcl}
 blast & : & \isarmeth \\
 fast & : & \isarmeth \\
 best & : & \isarmeth \\
 slow & : & \isarmeth \\
 slow_best & : & \isarmeth \\
\end{matharray}

\railalias{slowbest}{slow\_best}
\railterm{slowbest}

\begin{rail}
  'blast' ('!' ?) nat? (clamod * )
  ;
  ('fast' | 'best' | 'slow' | slowbest) ('!' ?) (clamod * )
  ;

  clamod: (('intro' | 'elim' | 'dest') (() | '?' | '??') | 'del') ':' thmrefs
  ;
\end{rail}

\begin{descr}
\item [$blast$] refers to the classical tableau prover (see \texttt{blast_tac}
  in \cite[\S11]{isabelle-ref}).  The optional argument specifies a
  user-supplied search bound (default 20).
\item [$fast$, $best$, $slow$, $slow_best$] refer to the generic classical
  reasoner (see \cite[\S11]{isabelle-ref}, tactic \texttt{fast_tac} etc).
\end{descr}

All of the above methods support additional modifiers of the context of classical
rules.  Their semantics is analogous to the attributes given in
\S\ref{sec:classical-mod}.  Facts provided by forward chaining are
inserted\footnote{These methods usually cannot make proper use of actual rules
  inserted that way, though.} into the goal before doing the search.  The
``!''~argument causes the full context of assumptions to be included as well.
This is slightly less hazardous than for the Simplifier (see
\S\ref{sec:simp}).
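
\medskip

For example, a typical predicate logic statement is solved by $blast$
directly:

\begin{ttbox}
lemma "(EX y. ALL x. F x y) --> (ALL x. EX y. F x y)"
  by blast
\end{ttbox}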


\subsection{Combined automated methods}

\indexisarmeth{auto}\indexisarmeth{force}
\begin{matharray}{rcl}
  force & : & \isarmeth \\
  auto & : & \isarmeth \\
\end{matharray}

\begin{rail}
  ('force' | 'auto') ('!' ?) (clasimpmod * )
  ;

  clasimpmod: ('simp' (() | 'add' | 'del' | 'only') | 'other' |
    ('split' (() | 'add' | 'del')) |
    (('intro' | 'elim' | 'dest') (() | '?' | '??') | 'del')) ':' thmrefs
\end{rail}

\begin{descr}
\item [$force$ and $auto$] provide access to Isabelle's combined
  simplification and classical reasoning tactics.  See \texttt{force_tac} and
  \texttt{auto_tac} in \cite[\S11]{isabelle-ref} for more information.  The
  modifier arguments correspond to those given in \S\ref{sec:simp} and
  \S\ref{sec:classical-auto}.  Just note that the ones related to the
  Simplifier are prefixed by \railtterm{simp} here.
  
  Facts provided by forward chaining are inserted into the goal before doing
  the search.  The ``!''~argument causes the full context of assumptions to be
  included as well.
\end{descr}
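
\medskip

For example, the following Isabelle/HOL statement involves both logical
connectives and a trivial arithmetic fact, and is solved by a single $auto$
invocation:

\begin{ttbox}
lemma "(P & Q --> R) --> P --> Q --> R & (0::nat) <= n"
  by auto
\end{ttbox}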


\subsection{Declaring rules}\label{sec:classical-mod}

\indexisarcmd{print-claset}
\indexisaratt{intro}\indexisaratt{elim}\indexisaratt{dest}
\indexisaratt{iff}\indexisaratt{delrule}
\begin{matharray}{rcl}
  print_claset & : & \isarkeep{theory~|~proof} \\
  intro & : & \isaratt \\
  elim & : & \isaratt \\
  dest & : & \isaratt \\
  iff & : & \isaratt \\
  delrule & : & \isaratt \\
\end{matharray}

\begin{rail}
  ('intro' | 'elim' | 'dest') (() | '?' | '??')
  ;
  'iff' (() | 'add' | 'del')
\end{rail}

\begin{descr}
\item [$print_claset$] prints the collection of rules declared to the
  Classical Reasoner, which is also known as ``claset'' internally
  \cite{isabelle-ref}.  This is a diagnostic command; $undo$ does not apply.
\item [$intro$, $elim$, and $dest$] declare introduction, elimination, and
  destruct rules, respectively.  By default, rules are considered as
  \emph{safe}, while a single ``?'' classifies as \emph{unsafe}, and ``??'' as
  \emph{extra} (i.e.\ not applied in the search-oriented automated methods,
  but only in single-step methods such as $rule$).
  
\item [$iff$] declares equations as rules for both the Simplifier and the
  Classical Reasoner.

\item [$delrule$] deletes introduction or elimination rules from the context.
  Note that destruction rules would have to be turned into elimination rules
  first, e.g.\ by using the $elimify$ attribute.
\end{descr}
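
\medskip

For example, the following variant of the usual set equality introduction is
derived manually and declared as an unsafe introduction rule at the same time
(the name $set_eq_subsetI$ is hypothetical; $equalityI$ refers to the usual
Isabelle/HOL rule):

\begin{ttbox}
lemma set_eq_subsetI [intro?]: "A <= B ==> B <= A ==> A = (B::'a set)"
  by (blast intro: equalityI)
\end{ttbox}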


%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "isar-ref"
%%% End: