
\chapter{Generic Tools and Packages}\label{ch:gen-tools}

\section{Axiomatic Type Classes}\label{sec:axclass}

%FIXME
% - qualified names
% - class intro rules;
% - class axioms;

\indexisarcmd{axclass}\indexisarcmd{instance}\indexisarmeth{intro-classes}
\begin{matharray}{rcl}
  \isarcmd{axclass} & : & \isartrans{theory}{theory} \\
  \isarcmd{instance} & : & \isartrans{theory}{proof(prove)} \\
  intro_classes & : & \isarmeth \\
\end{matharray}

Axiomatic type classes are provided by Isabelle/Pure as a \emph{definitional}
interface to type classes (cf.~\S\ref{sec:classes}).  Thus any object logic
may make use of this lightweight mechanism of abstract theories
\cite{Wenzel:1997:TPHOL}.  There is also a tutorial on using axiomatic type
classes in Isabelle \cite{isabelle-axclass} that is part of the standard
Isabelle documentation.

\begin{rail}
  'axclass' classdecl (axmdecl prop comment? +)
  ;
  'instance' (nameref ('<' | subseteq) nameref | nameref '::' simplearity) comment?
  ;
\end{rail}

\begin{descr}
\item [$\AXCLASS~c \subseteq \vec c~axms$] defines an axiomatic type class as
  the intersection of existing classes, with additional axioms holding.  Class
  axioms may not contain more than one type variable.  The class axioms (with
  implicit sort constraints added) are bound to the given names.  Furthermore,
  a class introduction rule is generated, which is employed by method
  $intro_classes$ to support instantiation proofs of this class.
  
\item [$\INSTANCE~c@1 \subseteq c@2$ and $\INSTANCE~t :: (\vec s)c$] set up a
  goal stating a class relation or type arity.  The proof would usually
  proceed by $intro_classes$, and then establish the characteristic theorems
  of the type classes involved (see also the example below).  After finishing
  the proof, the theory will be augmented by a type signature declaration
  corresponding to the resulting theorem.
\item [$intro_classes$] repeatedly expands all class introduction rules of
  this theory.  Note that this method usually need not be named explicitly,
  as it is already included in the default proof step (of $\PROOFNAME$,
  $\BYNAME$, etc.).  In particular, instantiation of trivial (syntactic)
  classes may be performed by a single ``$\DDOT$'' proof step.
\end{descr}
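
For example, the following sketch (in Isabelle/HOL) introduces a hypothetical
class $refl_ord$ of reflexive orderings below the standard syntactic class
$ord$, and proves a trivial instantiation for type $nat$:
\begin{verbatim}
axclass refl_ord < ord
  refl_le: "x <= x"

instance nat :: refl_ord
  by (intro_classes, simp)
\end{verbatim}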


\section{Calculational proof}\label{sec:calculation}

\indexisarcmd{also}\indexisarcmd{finally}
\indexisarcmd{moreover}\indexisarcmd{ultimately}
\indexisarcmd{print-trans-rules}\indexisaratt{trans}
\begin{matharray}{rcl}
  \isarcmd{also} & : & \isartrans{proof(state)}{proof(state)} \\
  \isarcmd{finally} & : & \isartrans{proof(state)}{proof(chain)} \\
  \isarcmd{moreover} & : & \isartrans{proof(state)}{proof(state)} \\
  \isarcmd{ultimately} & : & \isartrans{proof(state)}{proof(chain)} \\
  \isarcmd{print_trans_rules}^* & : & \isarkeep{theory~|~proof} \\
  trans & : & \isaratt \\
\end{matharray}

Calculational proof is forward reasoning with implicit application of
transitivity rules (such as those of $=$, $\leq$, $<$).  Isabelle/Isar maintains
an auxiliary register $calculation$\indexisarthm{calculation} for accumulating
results obtained by transitivity composed with the current result.  Command
$\ALSO$ updates $calculation$ involving $this$, while $\FINALLY$ exhibits the
final $calculation$ by forward chaining towards the next goal statement.  Both
commands require valid current facts, i.e.\ may occur only after commands that
produce theorems such as $\ASSUMENAME$, $\NOTENAME$, or some finished proof of
$\HAVENAME$, $\SHOWNAME$ etc.  The $\MOREOVER$ and $\ULTIMATELY$ commands are
similar to $\ALSO$ and $\FINALLY$, but only collect further results in
$calculation$ without applying any rules yet.

Also note that the automatic term abbreviation ``$\dots$'' has its canonical
application with calculational proofs.  It refers to the argument\footnote{The
  argument of a curried infix expression is its right-hand side.} of the
preceding statement.

Isabelle/Isar calculations are implicitly subject to block structure in the
sense that new threads of calculational reasoning are commenced for any new
block (as opened by a local goal, for example).  This means that, apart from
being able to nest calculations, there is no separate \emph{begin-calculation}
command required.

\medskip

The Isar calculation proof commands may be defined as
follows:\footnote{Internal bookkeeping such as proper handling of
  block-structure has been suppressed.}
\begin{matharray}{rcl}
  \ALSO@0 & \equiv & \NOTE{calculation}{this} \\
  \ALSO@{n+1} & \equiv & \NOTE{calculation}{trans~[OF~calculation~this]} \\[0.5ex]
  \FINALLY & \equiv & \ALSO~\FROM{calculation} \\
  \MOREOVER & \equiv & \NOTE{calculation}{calculation~this} \\
  \ULTIMATELY & \equiv & \MOREOVER~\FROM{calculation} \\
\end{matharray}

\begin{rail}
  ('also' | 'finally') transrules? comment?
  ;
  ('moreover' | 'ultimately') comment?
  ;
  'trans' (() | 'add' | 'del')
  ;

  transrules: '(' thmrefs ')' interest?
  ;
\end{rail}

\begin{descr}
\item [$\ALSO~(\vec a)$] maintains the auxiliary $calculation$ register as
  follows.  The first occurrence of $\ALSO$ in some calculational thread
  initializes $calculation$ by $this$. Any subsequent $\ALSO$ on the same
  level of block-structure updates $calculation$ by some transitivity rule
  applied to $calculation$ and $this$ (in that order).  Transitivity rules are
  picked from the current context, unless alternative rules are given as
  explicit arguments.

\item [$\FINALLY~(\vec a)$] maintains $calculation$ in the same way as
  $\ALSO$ and concludes the current calculational thread.  The final result
  is exhibited as fact for forward chaining towards the next goal. Basically,
  $\FINALLY$ just abbreviates $\ALSO~\FROM{calculation}$.  Note that
  ``$\FINALLY~\SHOW{}{\Var{thesis}}~\DOT$'' and
  ``$\FINALLY~\HAVE{}{\phi}~\DOT$'' are typical idioms for concluding
  calculational proofs.

\item [$\MOREOVER$ and $\ULTIMATELY$] are analogous to $\ALSO$ and $\FINALLY$,
  but collect results only, without applying rules.

\item [$\isarkeyword{print_trans_rules}$] prints the list of transitivity
  rules declared in the current context.

\item [$trans$] declares theorems as transitivity rules; the $add$ mode
  (which is the default) adds rules, while $del$ removes them again.

\end{descr}
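
For example, a simple calculation in Isabelle/HOL, chaining equations by
transitivity, might read as follows; the individual justifications are left
as hypothetical \texttt{sorry} steps:
\begin{verbatim}
lemma "a = (d::nat)"
proof -
  have "a = b" sorry
  also have "b = c" sorry
  also have "c = d" sorry
  finally show ?thesis .
qed
\end{verbatim}
The last line is an instance of the ``$\FINALLY~\SHOW{}{\Var{thesis}}~\DOT$''
idiom mentioned above.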


\section{Named local contexts (cases)}\label{sec:cases}

\indexisarcmd{case}\indexisarcmd{print-cases}
\indexisaratt{case-names}\indexisaratt{params}\indexisaratt{consumes}
\begin{matharray}{rcl}
  \isarcmd{case} & : & \isartrans{proof(state)}{proof(state)} \\
  \isarcmd{print_cases}^* & : & \isarkeep{proof} \\
  case_names & : & \isaratt \\
  params & : & \isaratt \\
  consumes & : & \isaratt \\
\end{matharray}

Basically, Isar proof contexts are built up explicitly using commands like
$\FIXNAME$, $\ASSUMENAME$ etc.\ (see \S\ref{sec:proof-context}).  In typical
verification tasks this can become hard to manage, though.  In particular, a
large number of local contexts may emerge from case analysis or induction over
inductive sets and types.

\medskip

The $\CASENAME$ command provides a shorthand to refer to certain parts of
logical context symbolically.  Proof methods may provide an environment of
named ``cases'' of the form $c\colon \vec x, \vec \phi$.  Then the effect of
$\CASE{c}$ is exactly the same as $\FIX{\vec x}~\ASSUME{c}{\vec\phi}$.

It is important to note that $\CASENAME$ does \emph{not} provide any means to
peek at the current goal state, which is treated as strictly non-observable in
Isar!  Instead, the cases considered here usually emerge in a canonical way
from certain pieces of specification that appear in the theory somewhere else
(e.g.\ in an inductive definition, or recursive function).  See also
\S\ref{sec:induct-method} for more details of how this works in HOL.

\medskip

Named cases may be exhibited in the current proof context only if both the
proof method and the rules involved support this.  Case names and parameters
of basic rules may be declared by hand as well, by using appropriate
attributes.  Thus variant versions of rules that have been derived manually
may be used in advanced case analysis later.

\railalias{casenames}{case\_names}
\railterm{casenames}

\begin{rail}
  'case' nameref attributes?
  ;
  casenames (name + )
  ;
  'params' ((name * ) + 'and')
  ;
  'consumes' nat?
  ;
\end{rail}
%FIXME bug in rail

\begin{descr}
\item [$\CASE{c}$] invokes a named local context $c\colon \vec x, \vec \phi$,
  as provided by an appropriate proof method (such as $cases$ and $induct$ in
  Isabelle/HOL, see \S\ref{sec:induct-method}).  The command $\CASE{c}$
  abbreviates $\FIX{\vec x}~\ASSUME{c}{\vec\phi}$.
\item [$\isarkeyword{print_cases}$] prints all local contexts of the current
  state, using Isar proof language notation.  This is a diagnostic command;
  $undo$ does not apply.
\item [$case_names~\vec c$] declares names for the local contexts of premises
  of some theorem; $\vec c$ refers to the \emph{suffix} of the list of
  premises.
\item [$params~\vec p@1 \dots \vec p@n$] renames the innermost parameters of
  premises $1, \dots, n$ of some theorem.  An empty list of names may be given
  to skip positions, leaving the present parameters unchanged.

  Note that the default usage of case rules does \emph{not} directly expose
  parameters to the proof context (see also \S\ref{sec:induct-method-proper}).
\item [$consumes~n$] declares the number of ``major premises'' of a rule,
  i.e.\ the number of facts to be consumed when it is applied by an
  appropriate proof method (cf.\ \S\ref{sec:induct-method}).  The default
  value of $consumes$ is $n = 1$, which is appropriate for the usual kind of
  cases and induction rules for inductive sets (cf.\ \S\ref{sec:inductive}).
  Rules without any $consumes$ declaration given are treated as if
  $consumes~0$ had been specified.
  
  Note that explicit $consumes$ declarations are only rarely needed; this is
  already taken care of automatically by the higher-level $cases$ and $induct$
  declarations, see also \S\ref{sec:induct-att}.
\end{descr}


\section{Generalized existence}\label{sec:obtain}

\indexisarcmd{obtain}
\begin{matharray}{rcl}
  \isarcmd{obtain} & : & \isartrans{proof(state)}{proof(prove)} \\
\end{matharray}

Generalized existence means that additional elements with certain properties
may be introduced in the current context.  Technically, the $\OBTAINNAME$
language element is like a declaration of $\FIXNAME$ and $\ASSUMENAME$ (see
also \S\ref{sec:proof-context}), together with a soundness proof of its
additional claim.  According to the nature of existential reasoning,
assumptions get eliminated from any result exported from the context later,
provided that the corresponding parameters do \emph{not} occur in the
conclusion.

\begin{rail}
  'obtain' (vars + 'and') comment? \\ 'where' (assm comment? + 'and')
  ;
\end{rail}

$\OBTAINNAME$ is defined as a derived Isar command as follows, where $\vec b$
shall refer to (optional) facts indicated for forward chaining.
\begin{matharray}{l}
  \langle facts~\vec b\rangle \\
  \OBTAIN{\vec x}{a}{\vec \phi}~~\langle proof\rangle \equiv {} \\[1ex]
  \quad \BG \\
  \qquad \FIX{thesis} \\
  \qquad \ASSUME{that~[simp, intro]}{\All{\vec x} \vec\phi \Imp thesis} \\
  \qquad \FROM{\vec b}~\HAVE{}{thesis}~~\langle proof\rangle \\
  \quad \EN \\
  \quad \FIX{\vec x}~\ASSUMENAME^\ast~a\colon~\vec\phi \\
\end{matharray}

Typically, the soundness proof is relatively straightforward, often just by
canonical automated tools such as $\BY{simp}$ (see \S\ref{sec:simp}) or
$\BY{blast}$ (see \S\ref{sec:classical-auto}).  Accordingly, the ``$that$''
reduction above is declared as simplification and introduction rule.

\medskip

In a sense, $\OBTAINNAME$ represents at the level of Isar proofs what would be
existential quantifiers and conjunctions at the meta-logical level.  This concept has a
broad range of useful applications, ranging from plain elimination (or even
introduction) of object-level existentials and conjunctions, to elimination
over results of symbolic evaluation of recursive definitions, for example.
Also note that $\OBTAINNAME$ without parameters acts much like $\HAVENAME$,
where the result is treated as an assumption.


\section{Miscellaneous methods and attributes}\label{sec:misc-methods}

\indexisarmeth{unfold}\indexisarmeth{fold}\indexisarmeth{insert}
\indexisarmeth{erule}\indexisarmeth{drule}\indexisarmeth{frule}
\indexisarmeth{fail}\indexisarmeth{succeed}
\begin{matharray}{rcl}
  unfold & : & \isarmeth \\
  fold & : & \isarmeth \\
  insert & : & \isarmeth \\[0.5ex]
  erule^* & : & \isarmeth \\
  drule^* & : & \isarmeth \\
  frule^* & : & \isarmeth \\[0.5ex]
  succeed & : & \isarmeth \\
  fail & : & \isarmeth \\
\end{matharray}

\begin{rail}
  ('fold' | 'unfold' | 'insert') thmrefs
  ;
  ('erule' | 'drule' | 'frule') ('('nat')')? thmrefs
  ;
\end{rail}

\begin{descr}
\item [$unfold~\vec a$ and $fold~\vec a$] expand and fold back again the given
  meta-level definitions throughout all goals; any facts provided are inserted
  into the goal and subject to rewriting as well.
\item [$insert~\vec a$] inserts theorems as facts into all goals of the proof
  state.  Note that current facts indicated for forward chaining are ignored.
\item [$erule~\vec a$, $drule~\vec a$, and $frule~\vec a$] are similar to the
  basic $rule$ method (see \S\ref{sec:pure-meth-att}), but apply rules by
  elim-resolution, destruct-resolution, and forward-resolution, respectively
  \cite{isabelle-ref}.  The optional natural number argument (default $0$)
  specifies additional assumption steps to be performed.
  
  Note that these methods are improper ones, mainly serving for
  experimentation and tactic script emulation.  Different modes of basic rule
  application are usually expressed in Isar at the proof language level,
  rather than via implicit proof state manipulations.  For example, a proper
  single-step elimination would be done using the basic $rule$ method, with
  forward chaining of current facts.
\item [$succeed$] yields a single (unchanged) result; it is the identity of
  the ``\texttt{,}'' method combinator (cf.\ \S\ref{sec:syn-meth}).
\item [$fail$] yields an empty result sequence; it is the identity of the
  ``\texttt{|}'' method combinator (cf.\ \S\ref{sec:syn-meth}).
\end{descr}
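
For example, the following sketch introduces a hypothetical constant $double$
(reused in some later examples of this chapter) and expands its definition in
a goal via $unfold$:
\begin{verbatim}
constdefs
  double :: "nat => nat"
  "double n == n + n"

lemma "double n = n + n"
  by (unfold double_def) (rule refl)
\end{verbatim}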

\indexisaratt{tagged}\indexisaratt{untagged}
\indexisaratt{THEN}\indexisaratt{COMP}
\indexisaratt{where}\indexisaratt{unfolded}\indexisaratt{folded}
\indexisaratt{standard}\indexisaratt{elim-format}
\indexisaratt{no-vars}\indexisaratt{exported}
\begin{matharray}{rcl}
  tagged & : & \isaratt \\
  untagged & : & \isaratt \\[0.5ex]
  THEN & : & \isaratt \\
  COMP & : & \isaratt \\[0.5ex]
  where & : & \isaratt \\[0.5ex]
  unfolded & : & \isaratt \\
  folded & : & \isaratt \\[0.5ex]
  standard & : & \isaratt \\
  elim_format & : & \isaratt \\
  no_vars^* & : & \isaratt \\
  exported^* & : & \isaratt \\
\end{matharray}

\begin{rail}
  'tagged' (nameref+)
  ;
  'untagged' name
  ;
  ('THEN' | 'COMP') ('[' nat ']')? thmref
  ;
  'where' (name '=' term * 'and')
  ;
  ('unfolded' | 'folded') thmrefs
  ;
\end{rail}

\begin{descr}
\item [$tagged~name~args$ and $untagged~name$] add and remove $tags$ of some
  theorem.  Tags may be any list of strings that serve as comment for some
  tools (e.g.\ $\LEMMANAME$ causes the tag ``$lemma$'' to be added to the
  result).  The first string is considered the tag name, the rest its
  arguments.  Note that $untagged$ removes any tags of the same name.
\item [$THEN~n~a$ and $COMP~n~a$] compose rules.  $THEN$ resolves with the
  $n$-th premise of $a$; the $COMP$ version skips the automatic lifting
  process that is normally intended (cf.\ \texttt{RS} and \texttt{COMP} in
  \cite[\S5]{isabelle-ref}).
\item [$where~\vec x = \vec t$] performs named instantiation of schematic
  variables occurring in a theorem.  Unlike instantiation tactics such as
  $rule_tac$ (see \S\ref{sec:tactic-commands}), actual schematic variables
  have to be specified (e.g.\ $\Var{x@3}$).
\item [$unfolded~\vec a$ and $folded~\vec a$] expand and fold back again the
  given meta-level definitions throughout a rule.
\item [$standard$] puts a theorem into the standard form of object-rules, just
  as the ML function \texttt{standard} (see \cite[\S5]{isabelle-ref}).
\item [$elim_format$] turns a destruction rule into elimination rule format;
  see also the ML function \texttt{make\_elim} (see \cite{isabelle-ref}).
\item [$no_vars$] replaces schematic variables by free ones; this is mainly
  for tuning output of pretty printed theorems.
\item [$exported$] lifts a local result out of the current proof context,
  generalizing all fixed variables and discharging all assumptions.  Note that
  proper incremental export is already done as part of the basic Isar
  machinery.  This attribute is mainly for experimentation.
\end{descr}
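
For example, continuing the hypothetical $double$ definition introduced above,
the $unfolded$ attribute may be used to store an expanded variant of a result:
\begin{verbatim}
lemma double_0: "double 0 = 0"
  by (simp add: double_def)

lemmas double_0_expanded = double_0 [unfolded double_def]
\end{verbatim}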


\section{Tactic emulations}\label{sec:tactics}

The following improper proof methods emulate traditional tactics.  These admit
direct access to the goal state, which is normally considered harmful!  In
particular, this may involve both numbered goal addressing (default 1), and
dynamic instantiation within the scope of some subgoal.

\begin{warn}
  Dynamic instantiations are read and type-checked according to a subgoal of
  the current dynamic goal state, rather than the static proof context!  In
  particular, locally fixed variables and term abbreviations may not be
  included in the term specifications.  Thus schematic variables are left to
  be solved by unification with certain parts of the subgoal involved.
\end{warn}

Note that the tactic emulation proof methods in Isabelle/Isar are consistently
named $foo_tac$.

\indexisarmeth{rule-tac}\indexisarmeth{erule-tac}
\indexisarmeth{drule-tac}\indexisarmeth{frule-tac}
\indexisarmeth{cut-tac}\indexisarmeth{thin-tac}
\indexisarmeth{subgoal-tac}\indexisarmeth{rename-tac}
\indexisarmeth{rotate-tac}\indexisarmeth{tactic}
\begin{matharray}{rcl}
  rule_tac^* & : & \isarmeth \\
  erule_tac^* & : & \isarmeth \\
  drule_tac^* & : & \isarmeth \\
  frule_tac^* & : & \isarmeth \\
  cut_tac^* & : & \isarmeth \\
  thin_tac^* & : & \isarmeth \\
  subgoal_tac^* & : & \isarmeth \\
  rename_tac^* & : & \isarmeth \\
  rotate_tac^* & : & \isarmeth \\
  tactic^* & : & \isarmeth \\
\end{matharray}

\railalias{ruletac}{rule\_tac}
\railterm{ruletac}

\railalias{eruletac}{erule\_tac}
\railterm{eruletac}

\railalias{druletac}{drule\_tac}
\railterm{druletac}

\railalias{fruletac}{frule\_tac}
\railterm{fruletac}

\railalias{cuttac}{cut\_tac}
\railterm{cuttac}

\railalias{thintac}{thin\_tac}
\railterm{thintac}

\railalias{subgoaltac}{subgoal\_tac}
\railterm{subgoaltac}

\railalias{renametac}{rename\_tac}
\railterm{renametac}

\railalias{rotatetac}{rotate\_tac}
\railterm{rotatetac}

\begin{rail}
  ( ruletac | eruletac | druletac | fruletac | cuttac | thintac ) goalspec?
  ( insts thmref | thmrefs )
  ;
  subgoaltac goalspec? (prop +)
  ;
  renametac goalspec? (name +)
  ;
  rotatetac goalspec? int?
  ;
  'tactic' text
  ;

  insts: ((name '=' term) + 'and') 'in'
  ;
\end{rail}

\begin{descr}
\item [$rule_tac$ etc.] do resolution of rules with explicit instantiation.
  This works the same way as the ML tactics \texttt{res_inst_tac} etc. (see
  \cite[\S3]{isabelle-ref}).

  Note that multiple rules may only be given if there is no instantiation.
  Then $rule_tac$ is the same as \texttt{resolve_tac} in ML (see
  \cite[\S3]{isabelle-ref}).
\item [$cut_tac$] inserts facts into the proof state as assumptions of a
  subgoal, see also \texttt{cut_facts_tac} in \cite[\S3]{isabelle-ref}.  Note
  that the scope of schematic variables is spread over the main goal statement.
  Instantiations may be given as well, see also the ML tactic
  \texttt{cut_inst_tac} in \cite[\S3]{isabelle-ref}.
\item [$thin_tac~\phi$] deletes the specified assumption from a subgoal; note
  that $\phi$ may contain schematic variables.  See also \texttt{thin_tac} in
  \cite[\S3]{isabelle-ref}.
\item [$subgoal_tac~\phi$] adds $\phi$ as an assumption to a subgoal.  See
  also \texttt{subgoal_tac} and \texttt{subgoals_tac} in
  \cite[\S3]{isabelle-ref}.
\item [$rename_tac~\vec x$] renames parameters of a goal according to the list
  $\vec x$, which refers to the \emph{suffix} of variables.
\item [$rotate_tac~n$] rotates the assumptions of a goal by $n$ positions:
  from right to left if $n$ is positive, and from left to right if $n$ is
  negative; the default value is $1$.  See also \texttt{rotate_tac} in
  \cite[\S3]{isabelle-ref}.
\item [$tactic~text$] produces a proof method from any ML text of type
  \texttt{tactic}.  Apart from the usual ML environment and the current
  implicit theory context, the ML code may refer to the following locally
  bound values:

%%FIXME ttbox produces too much trailing space (why?)
{\footnotesize\begin{verbatim}
val ctxt  : Proof.context
val facts : thm list
val thm   : string -> thm
val thms  : string -> thm list
\end{verbatim}}
  Here \texttt{ctxt} refers to the current proof context, \texttt{facts}
  indicates any current facts for forward-chaining, and
  \texttt{thm}~/~\texttt{thms} retrieve named facts (including global
  theorems) from the context.
\end{descr}
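
For example, an existential goal may be solved by explicit instantiation of
$exI$ via $rule_tac$ (a structured proof would normally use the plain $rule$
method with a suitably instantiated rule instead):
\begin{verbatim}
lemma "EX x::nat. x = y"
  by (rule_tac x = y in exI) (rule refl)
\end{verbatim}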


\section{The Simplifier}\label{sec:simplifier}

\subsection{Simplification methods}\label{sec:simp}

\indexisarmeth{simp}\indexisarmeth{simp-all}
\begin{matharray}{rcl}
  simp & : & \isarmeth \\
  simp_all & : & \isarmeth \\
\end{matharray}

\railalias{simpall}{simp\_all}
\railterm{simpall}

\railalias{noasm}{no\_asm}
\railterm{noasm}

\railalias{noasmsimp}{no\_asm\_simp}
\railterm{noasmsimp}

\railalias{noasmuse}{no\_asm\_use}
\railterm{noasmuse}

\indexouternonterm{simpmod}
\begin{rail}
  ('simp' | simpall) ('!' ?) opt? (simpmod * )
  ;

  opt: '(' (noasm | noasmsimp | noasmuse) ')'
  ;
  simpmod: ('add' | 'del' | 'only' | 'cong' (() | 'add' | 'del') |
    'split' (() | 'add' | 'del')) ':' thmrefs
  ;
\end{rail}

\begin{descr}
\item [$simp$] invokes Isabelle's simplifier, after declaring additional rules
  according to the arguments given.  Note that the \railtterm{only} modifier
  first removes all other rewrite rules, congruences, and looper tactics
  (including splits), and then behaves like \railtterm{add}.
  
  \medskip The \railtterm{cong} modifiers add or delete Simplifier congruence
  rules (see also \cite{isabelle-ref}), the default is to add.
  
  \medskip The \railtterm{split} modifiers add or delete rules for the
  Splitter (see also \cite{isabelle-ref}), the default is to add.  This works
  only if the Simplifier method has been properly set up to include the
  Splitter (all major object logics such as HOL, HOLCF, FOL, and ZF do this
  already).
\item [$simp_all$] is similar to $simp$, but acts on all goals.
\end{descr}

By default, the Simplifier methods are based on \texttt{asm_full_simp_tac}
internally \cite[\S10]{isabelle-ref}, which means that assumptions are both
simplified themselves and used in simplifying the conclusion.  In structured
proofs this is usually quite well behaved in practice: just the local premises
of the actual goal are involved, additional facts may be inserted via explicit
forward-chaining (using $\THEN$, $\FROMNAME$ etc.).  The full context of
assumptions is only included if the ``$!$'' (bang) argument is given, which
should be used with some care, though.

Additional Simplifier options may be specified to tune the behavior even
further: $(no_asm)$ means assumptions are ignored completely (cf.\
\texttt{simp_tac}), $(no_asm_simp)$ means assumptions are used in the
simplification of the conclusion but are not themselves simplified (cf.\
\texttt{asm_simp_tac}), and $(no_asm_use)$ means assumptions are simplified
but are not used in the simplification of each other or the conclusion (cf.
\texttt{full_simp_tac}).

\medskip

The Splitter package is usually configured to work as part of the Simplifier.
The effect of repeatedly applying \texttt{split_tac} can be simulated by
$(simp~only\colon~split\colon~\vec a)$.  There is also a separate $split$
method available for single-step case splitting, see \S\ref{sec:basic-eq}.
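
For example, using the hypothetical $double$ definition of
\S\ref{sec:misc-methods}, the \railtterm{add} modifier supplies an additional
rewrite rule for a single $simp$ invocation:
\begin{verbatim}
lemma "double x = 0 + x + x"
  by (simp add: double_def)
\end{verbatim}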


\subsection{Declaring rules}

\indexisarcmd{print-simpset}
\indexisaratt{simp}\indexisaratt{split}\indexisaratt{cong}
\begin{matharray}{rcl}
  print_simpset^* & : & \isarkeep{theory~|~proof} \\
  simp & : & \isaratt \\
  cong & : & \isaratt \\
  split & : & \isaratt \\
\end{matharray}

\begin{rail}
  ('simp' | 'cong' | 'split') (() | 'add' | 'del')
  ;
\end{rail}

\begin{descr}
\item [$print_simpset$] prints the collection of rules declared to the
  Simplifier, which is also known as ``simpset'' internally
  \cite{isabelle-ref}.  This is a diagnostic command; $undo$ does not apply.
\item [$simp$] declares simplification rules.
\item [$cong$] declares congruence rules.
\item [$split$] declares case split rules.
\end{descr}
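
For example, the hypothetical $double_def$ equation of
\S\ref{sec:misc-methods} may be turned into a default rewrite rule, so that
subsequent $simp$ invocations expand $double$ automatically:
\begin{verbatim}
declare double_def [simp]

lemma "double x = x + x"
  by simp
\end{verbatim}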


\subsection{Forward simplification}

\indexisaratt{simplified}
\begin{matharray}{rcl}
  simplified & : & \isaratt \\
\end{matharray}

\begin{rail}
  'simplified' opt?
  ;

  opt: '(' (noasm | noasmsimp | noasmuse) ')'
  ;
\end{rail}

\begin{descr}
\item [$simplified$] causes a theorem to be simplified according to the
  current Simplifier context (there are no separate arguments for declaring
  additional rules).  By default the result is fully simplified, including
  assumptions and conclusion.  The options $no_asm$ etc.\ restrict the
  Simplifier in the same way as for the $simp$ method (see
  \S\ref{sec:simp}).
  
  The $simplified$ operation should be used only rarely, typically for
  experimentation.
\end{descr}
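
A typical (hypothetical) usage stores pre-simplified variants of an existing
theorem $foo$ for later reference:
\begin{verbatim}
lemmas foo_simplified = foo [simplified]
lemmas foo_simplified_no_asm = foo [simplified (no_asm)]
\end{verbatim}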


\section{Basic equational reasoning}\label{sec:basic-eq}

\indexisarmeth{subst}\indexisarmeth{hypsubst}\indexisarmeth{split}\indexisaratt{symmetric}
\begin{matharray}{rcl}
  subst & : & \isarmeth \\
  hypsubst^* & : & \isarmeth \\
  split & : & \isarmeth \\
  symmetric & : & \isaratt \\
\end{matharray}

\begin{rail}
  'subst' thmref
  ;
  'split' ('(' 'asm' ')')? thmrefs
  ;
\end{rail}

These methods and attributes provide basic facilities for equational reasoning
that are intended for specialized applications only.  Normally, single step
reasoning would be performed by calculation (see \S\ref{sec:calculation}),
while the Simplifier is the canonical tool for automated normalization (see
\S\ref{sec:simplifier}).

\begin{descr}
\item [$subst~thm$] performs a single substitution step using rule $thm$,
  which may be either a meta or object equality.
\item [$hypsubst$] performs substitution using some assumption.
\item [$split~thms$] performs single-step case splitting using rules $thms$.
  By default, splitting is performed in the conclusion of a goal; the $asm$
  option indicates to operate on assumptions instead.
  
  Note that the $simp$ method already involves repeated application of split
  rules as declared in the current context (see \S\ref{sec:simp}).
\item [$symmetric$] applies the symmetry rule of meta or object equality.
\end{descr}
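
For example, using the hypothetical $double_def$ of \S\ref{sec:misc-methods},
a single $subst$ step replaces one occurrence of the left-hand side, and the
$symmetric$ attribute yields the reversed equation:
\begin{verbatim}
lemma "x + x = double x"
  by (subst double_def) (rule refl)

lemmas double_def_sym = double_def [symmetric]
\end{verbatim}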


\section{The Classical Reasoner}\label{sec:classical}

\subsection{Basic methods}\label{sec:classical-basic}

\indexisarmeth{rule}\indexisarmeth{intro}
\indexisarmeth{elim}\indexisarmeth{default}\indexisarmeth{contradiction}
\begin{matharray}{rcl}
  rule & : & \isarmeth \\
  intro & : & \isarmeth \\
  elim & : & \isarmeth \\
  contradiction & : & \isarmeth \\
\end{matharray}

\begin{rail}
  ('rule' | 'intro' | 'elim') thmrefs?
  ;
\end{rail}

\begin{descr}
\item [$rule$] as offered by the classical reasoner is a refinement over the
  primitive one (see \S\ref{sec:pure-meth-att}).  If no rules are
  provided as arguments, it automatically determines elimination and
  introduction rules from the context (see also \S\ref{sec:classical-mod}).
  This is made the default method for basic proof steps, such as $\PROOFNAME$
  and ``$\DDOT$'' (two dots), see also \S\ref{sec:proof-steps} and
  \S\ref{sec:pure-meth-att}.

\item [$intro$ and $elim$] repeatedly refine some goal by intro- or
  elim-resolution, after having inserted any facts.  Omitting the arguments
  refers to any suitable rules declared in the context, otherwise only the
  explicitly given ones may be applied.  The latter form admits better control
  of what actually happens, thus it is very appropriate as an initial method
  for $\PROOFNAME$ that splits up certain connectives of the goal, before
  entering the actual sub-proof.

\item [$contradiction$] solves some goal by contradiction, deriving any result
  from both $\neg A$ and $A$.  Facts, which are guaranteed to participate, may
  appear in either order.
\end{descr}
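
For example, the $contradiction$ method resolves a pair of contradictory
facts against an arbitrary claim:
\begin{verbatim}
lemma "~ A ==> A ==> B"
proof -
  assume "~ A" and "A"
  then show "B" by contradiction
qed
\end{verbatim}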


\subsection{Automated methods}\label{sec:classical-auto}

\indexisarmeth{blast}\indexisarmeth{fast}\indexisarmeth{slow}
\indexisarmeth{best}\indexisarmeth{safe}\indexisarmeth{clarify}
\begin{matharray}{rcl}
  blast & : & \isarmeth \\
  fast & : & \isarmeth \\
  slow & : & \isarmeth \\
  best & : & \isarmeth \\
  safe & : & \isarmeth \\
  clarify & : & \isarmeth \\
\end{matharray}

\indexouternonterm{clamod}
\begin{rail}
  'blast' ('!' ?) nat? (clamod * )
  ;
  ('fast' | 'slow' | 'best' | 'safe' | 'clarify') ('!' ?) (clamod * )
  ;

  clamod: (('intro' | 'elim' | 'dest') ('!' | () | '?') | 'del') ':' thmrefs
  ;
\end{rail}

\begin{descr}
\item [$blast$] refers to the classical tableau prover (see \texttt{blast_tac}
  in \cite[\S11]{isabelle-ref}).  The optional argument specifies a
  user-supplied search bound (default 20).
\item [$fast$, $slow$, $best$, $safe$, and $clarify$] refer to the generic
  classical reasoner.  See \texttt{fast_tac}, \texttt{slow_tac},
  \texttt{best_tac}, \texttt{safe_tac}, and \texttt{clarify_tac} in
  \cite[\S11]{isabelle-ref} for more information.
\end{descr}

Any of the above methods supports additional modifiers of the context of
classical rules.  Their semantics is analogous to the attributes given in
\S\ref{sec:classical-mod}.  Facts provided by forward chaining are
inserted\footnote{These methods usually cannot make proper use of actual rules
  inserted that way, though.} into the goal before doing the search.  The
``!''~argument causes the full context of assumptions to be included as well.
This is slightly less hazardous than for the Simplifier (see
\S\ref{sec:simp}).
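
For example, a typical first-order statement is solved by $blast$ directly:
\begin{verbatim}
lemma "(ALL x. P x --> Q x) --> (ALL x. P x) --> (ALL x. Q x)"
  by blast
\end{verbatim}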


\subsection{Combined automated methods}\label{sec:clasimp}

\indexisarmeth{auto}\indexisarmeth{force}\indexisarmeth{clarsimp}
\indexisarmeth{fastsimp}\indexisarmeth{slowsimp}\indexisarmeth{bestsimp}
\begin{matharray}{rcl}
  auto & : & \isarmeth \\
  force & : & \isarmeth \\
  clarsimp & : & \isarmeth \\
  fastsimp & : & \isarmeth \\
  slowsimp & : & \isarmeth \\
  bestsimp & : & \isarmeth \\
\end{matharray}

\indexouternonterm{clasimpmod}
\begin{rail}
  'auto' '!'? (nat nat)? (clasimpmod * )
  ;
  ('force' | 'clarsimp' | 'fastsimp' | 'slowsimp' | 'bestsimp') '!'? (clasimpmod * )
  ;

  clasimpmod: ('simp' (() | 'add' | 'del' | 'only') |
    ('cong' | 'split') (() | 'add' | 'del') |
    'iff' (((() | 'add') '?'?) | 'del') |
    (('intro' | 'elim' | 'dest') ('!' | () | '?') | 'del')) ':' thmrefs
\end{rail}

\begin{descr}
\item [$auto$, $force$, $clarsimp$, $fastsimp$, $slowsimp$, and $bestsimp$]
  provide access to Isabelle's combined simplification and classical reasoning
  tactics.  These correspond to \texttt{auto_tac}, \texttt{force_tac},
  \texttt{clarsimp_tac}, and Classical Reasoner tactics with the Simplifier
  added as wrapper, see \cite[\S11]{isabelle-ref} for more information.  The
  modifier arguments correspond to those given in \S\ref{sec:simp} and
  \S\ref{sec:classical-auto}.  Just note that the ones related to the
  Simplifier are prefixed by \railtterm{simp} here.

  Facts provided by forward chaining are inserted into the goal before doing
  the search.  The ``!''~argument causes the full context of assumptions to be
  included as well.
\end{descr}
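
For example, $auto$ easily disposes of purely propositional goals such as the
distribution law below; additional rewrite rules would be passed via the
\railtterm{simp} modifier, e.g.\ $(auto~simp~add\colon~\vec a)$:
\begin{verbatim}
lemma "A & (B | C) --> (A & B) | (A & C)"
  by auto
\end{verbatim}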


\subsection{Declaring rules}\label{sec:classical-mod}

\indexisarcmd{print-claset}
\indexisaratt{intro}\indexisaratt{elim}\indexisaratt{dest}
\indexisaratt{iff}\indexisaratt{rule}
\begin{matharray}{rcl}
  print_claset^* & : & \isarkeep{theory~|~proof} \\
  intro & : & \isaratt \\
  elim & : & \isaratt \\
  dest & : & \isaratt \\
  rule & : & \isaratt \\
  iff & : & \isaratt \\
\end{matharray}

\begin{rail}
  ('intro' | 'elim' | 'dest') ('!' | () | '?')
  ;
  'rule' 'del'
  ;
  'iff' (((() | 'add') '?'?) | 'del')
  ;
\end{rail}

\begin{descr}
\item [$print_claset$] prints the collection of rules declared to the
  Classical Reasoner, which is also known as ``claset'' internally
  \cite{isabelle-ref}.  This is a diagnostic command; $undo$ does not apply.
\item [$intro$, $elim$, and $dest$] declare introduction, elimination, and
  destruction rules, respectively.  By default, rules are considered as
  \emph{unsafe} (i.e.\ not applied blindly without backtracking), while a
  single ``!'' classifies as \emph{safe}, and ``?'' as \emph{extra} (i.e.\ not
  applied in the search-oriented automated methods, but only in single-step
  methods such as $rule$).
\item [$rule~del$] deletes introduction, elimination, or destruction rules from
  the context.
\item [$iff$] declares a (possibly conditional) equivalence or other ``safe''
  rule simultaneously to the Simplifier and the Classical Reasoner.  The rule
  is added as a rewrite rule to the Simplifier.  Furthermore, depending on its
  structure, it is declared to the Classical Reasoner for aggressive use (as
  would normally be indicated by ``!''): if the rule is an equivalence, the
  two corresponding implications are declared as introduction and destruction
  rules; if the rule is an inequality, the corresponding negation elimination
  rule is declared; otherwise the rule itself is declared as an introduction
  rule.
  
  The ``?'' version of $iff$ declares ``extra'' Classical Reasoner rules only,
  and omits the Simplifier declaration.  Thus the declaration does not have
  any effect on automated proof tools, but only on simple methods such as
  $rule$ (see \S\ref{sec:misc-methods}).
\end{descr}
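
For example, rules stemming from a user theory might be declared as follows;
all theorem names given here are hypothetical:
\begin{verbatim}
declare my_safe_intro [intro!]
declare my_unsafe_elim [elim]
declare my_equivalence [iff]
declare obsolete_rule [rule del]
\end{verbatim}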


\section{Proof by cases and induction}\label{sec:induct-method}

\subsection{Proof methods}\label{sec:induct-method-proper}

\indexisarmeth{cases}\indexisarmeth{induct}
\begin{matharray}{rcl}
  cases & : & \isarmeth \\
  induct & : & \isarmeth \\
\end{matharray}

The $cases$ and $induct$ methods provide a uniform interface to case analysis
and induction over datatypes, inductive sets, and recursive functions.  The
corresponding rules may be specified and instantiated in a casual manner.
Furthermore, these methods provide named local contexts that may be invoked
via the $\CASENAME$ proof command within the subsequent proof text (cf.\ 
\S\ref{sec:cases}).  This accommodates compact proof texts even when reasoning
about large specifications.

Note that the full spectrum of this generic functionality is currently only
supported by Isabelle/HOL, when used in conjunction with advanced definitional
packages (see especially \S\ref{sec:datatype} and \S\ref{sec:inductive}).

\begin{rail}
  'cases' spec
  ;
  'induct' spec
  ;

  spec: open? args rule? params?
  ;
  open: '(' 'open' ')'
  ;
  args: (insts * 'and') 
  ;
  rule: ('type' | 'set') ':' nameref | 'rule' ':' thmref
  ;
  params: 'of' ':' insts
  ;
\end{rail}

\begin{descr}
\item [$cases~insts~R~ps$] applies method $rule$ with an appropriate case
  distinction theorem, instantiated to the subjects $insts$.  Symbolic case
  names are bound according to the rule's local contexts.
  
  The rule is determined as follows, according to the facts and arguments
  passed to the $cases$ method:
  \begin{matharray}{llll}
    \Text{facts}    &       & \Text{arguments} & \Text{rule} \\\hline
                    & cases &           & \Text{classical case split} \\
                    & cases & t         & \Text{datatype exhaustion (type of $t$)} \\
    \edrv a \in A   & cases & \dots     & \Text{inductive set elimination (of $A$)} \\
    \dots           & cases & \dots ~ R & \Text{explicit rule $R$} \\
  \end{matharray}
  
  Several instantiations may be given, referring to the \emph{suffix} of
  premises of the case rule; within each premise, the \emph{prefix} of
  variables is instantiated.  In most situations, only a single term needs to
  be specified; this refers to the first variable of the last premise (it is
  usually the same for all cases).
  
  Additional parameters may be specified as $ps$; these are applied after the
  primary instantiation in the same manner as by the $of$ attribute (cf.\ 
  \S\ref{sec:pure-meth-att}).  This feature is rarely needed in practice; a
  typical application would be to specify additional arguments for rules
  stemming from parameterized inductive definitions (see also
  \S\ref{sec:inductive}).
  
  The $open$ option causes the parameters of the new local contexts to be
  exposed to the current proof context.  Thus local variables stemming from
  distant parts of the theory development may be introduced in an implicit
  manner, which can be quite confusing to the reader.  Furthermore, this
  option may cause unwanted hiding of existing local variables, resulting in
  less robust proof texts.
  
\item [$induct~insts~R~ps$] is analogous to the $cases$ method, but refers to
  induction rules, which are determined as follows:
  \begin{matharray}{llll}
    \Text{facts}    &        & \Text{arguments} & \Text{rule} \\\hline
                    & induct & P ~ x ~ \dots & \Text{datatype induction (type of $x$)} \\
    \edrv x \in A   & induct & \dots         & \Text{set induction (of $A$)} \\
    \dots           & induct & \dots ~ R     & \Text{explicit rule $R$} \\
  \end{matharray}
  
  Several instantiations may be given, each referring to some part of a mutual
  inductive definition or datatype --- only related partial induction rules
  may be used together, though.  Any of the lists of terms $P, x, \dots$
  refers to the \emph{suffix} of variables present in the induction rule.
  This enables the writer to specify only induction variables, or both
  predicates and variables, for example.
  
  Additional parameters (including the $open$ option) may be given in the same
  way as for $cases$, see above.
\end{descr}

The above methods produce named local contexts (cf.\ \S\ref{sec:cases}), as
determined by the instantiated rule \emph{before} it has been applied to the
internal proof state.\footnote{As a general principle, Isar proof text may
  never refer to parts of proof states directly.} Thus proper use of symbolic
cases usually requires the rule to be instantiated fully, as far as the
emerging local contexts and subgoals are concerned.  In particular, for
induction both the predicates and variables have to be specified.  Otherwise
the $\CASENAME$ command would refuse to invoke cases containing schematic
variables.  Furthermore the resulting local goal statement is bound to the
term variable $\Var{case}$\indexisarvar{case} --- for each case where it is
fully specified.

The $\isarkeyword{print_cases}$ command (\S\ref{sec:cases}) prints all named
cases present in the current proof state.

\medskip

It is important to note that there is a fundamental difference of the $cases$
and $induct$ methods in handling of non-atomic goal statements: $cases$ just
applies a certain rule in backward fashion, splitting the result into new
goals with the local contexts being augmented in a purely monotonic manner.

In contrast, $induct$ passes the full goal statement through the ``recursive''
course involved in the induction.  Thus the original statement is basically
replaced by separate copies, corresponding to the induction hypotheses and
conclusion; the original goal context is no longer available.  This behavior
allows \emph{strengthened induction predicates} to be expressed concisely as
meta-level rule statements, i.e.\ $\All{\vec x} \vec\phi \Imp \psi$ to
indicate ``variable'' parameters $\vec x$ and ``recursive'' assumptions
$\vec\phi$.  Also note that local definitions may be expressed as $\All{\vec
  x} n \equiv t[\vec x] \Imp \phi[n]$, with induction over $n$.

\medskip

Facts presented to either method are consumed according to the number of
``major premises'' of the rule involved (see also \S\ref{sec:induct-att} and
\S\ref{sec:cases}), which is usually $0$ for plain cases and induction rules
of datatypes etc.\ and $1$ for rules of inductive sets and the like.  The
remaining facts are inserted into the goal verbatim before the actual $cases$
or $induct$ rule is applied (thus facts may even be passed through an
induction).

Note that whenever facts are present, the default rule selection scheme would
provide a ``set'' rule only, with the first fact consumed and the rest
inserted into the goal.  In order to pass all facts into a ``type'' rule
instead, one would have to specify this explicitly, e.g.\ by appending
``$type: name$'' to the method argument.
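
For example, a structured induction over natural numbers, using the symbolic
cases provided by the $induct$ method, might read as follows (the statement
itself is trivial and merely serves to exhibit the proof shape):
\begin{verbatim}
lemma "n + 0 = (n::nat)"
proof (induct n)
  case 0
  show ?case by simp
next
  case Suc
  then show ?case by simp
qed
\end{verbatim}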


\subsection{Declaring rules}\label{sec:induct-att}

\indexisarcmd{print-induct-rules}\indexisaratt{cases}\indexisaratt{induct}
\begin{matharray}{rcl}
  \isarcmd{print_induct_rules}^* & : & \isarkeep{theory~|~proof} \\
  cases & : & \isaratt \\
  induct & : & \isaratt \\
\end{matharray}

\begin{rail}
  'cases' spec
  ;
  'induct' spec
  ;

  spec: ('type' | 'set') ':' nameref
  ;
\end{rail}

The $cases$ and $induct$ attributes augment the corresponding context of rules
for reasoning about inductive sets and types.  The standard rules are already
declared by advanced definitional packages.  For special applications, these
may be replaced manually by variant versions.

Refer to the $case_names$ and $params$ attributes (see \S\ref{sec:cases}) to
adjust names of cases and parameters of a rule.

The $consumes$ declaration (cf.\ \S\ref{sec:cases}) is taken care of
automatically (if none had been given already): $consumes~0$ is specified for
``type'' rules and $consumes~1$ for ``set'' rules.
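
For example, a variant induction rule for type $nat$, with different case
names, might be declared as follows (the derived rule and its case names are
hypothetical):
\begin{verbatim}
lemmas my_nat_induct = nat_induct [case_names zero succ]
declare my_nat_induct [induct type: nat]
\end{verbatim}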


%%% Local Variables:
%%% mode: latex
%%% TeX-master: "isar-ref"
%%% End: