major reorganization of document structure;
author wenzelm
Mon, 02 Jun 2008 22:50:23 +0200
changeset 27040 3d3e6e07b931
parent 27039 14582233d36b
child 27041 22dcf2fc0aa2
doc-src/IsarRef/Thy/Generic.thy
doc-src/IsarRef/Thy/Introduction.thy
doc-src/IsarRef/Thy/Outer_Syntax.thy
doc-src/IsarRef/Thy/Proof.thy
doc-src/IsarRef/Thy/Spec.thy
doc-src/IsarRef/Thy/pure.thy
--- a/doc-src/IsarRef/Thy/Generic.thy	Mon Jun 02 22:50:21 2008 +0200
+++ b/doc-src/IsarRef/Thy/Generic.thy	Mon Jun 02 22:50:23 2008 +0200
@@ -6,746 +6,7 @@
 
 chapter {* Generic tools and packages \label{ch:gen-tools} *}
 
-section {* Specification commands *}
-
-subsection {* Derived specifications *}
-
-text {*
-  \begin{matharray}{rcll}
-    @{command_def "axiomatization"} & : & \isarkeep{local{\dsh}theory} & (axiomatic!)\\
-    @{command_def "definition"} & : & \isarkeep{local{\dsh}theory} \\
-    @{attribute_def "defn"} & : & \isaratt \\
-    @{command_def "abbreviation"} & : & \isarkeep{local{\dsh}theory} \\
-    @{command_def "print_abbrevs"}@{text "\<^sup>*"} & : & \isarkeep{theory~|~proof} \\
-    @{command_def "notation"} & : & \isarkeep{local{\dsh}theory} \\
-    @{command_def "no_notation"} & : & \isarkeep{local{\dsh}theory} \\
-  \end{matharray}
-
-  These specification mechanisms provide a slightly more abstract view
-  than the underlying primitives of @{command "consts"}, @{command
-  "defs"} (see \secref{sec:consts}), and @{command "axioms"} (see
-  \secref{sec:axms-thms}).  In particular, type-inference is commonly
-  available, and result names need not be given.
-
-  \begin{rail}
-    'axiomatization' target? fixes? ('where' specs)?
-    ;
-    'definition' target? (decl 'where')? thmdecl? prop
-    ;
-    'abbreviation' target? mode? (decl 'where')? prop
-    ;
-    ('notation' | 'no\_notation') target? mode? (nameref structmixfix + 'and')
-    ;
-
-    fixes: ((name ('::' type)? mixfix? | vars) + 'and')
-    ;
-    specs: (thmdecl? props + 'and')
-    ;
-    decl: name ('::' type)? mixfix?
-    ;
-  \end{rail}
-
-  \begin{descr}
-  
-  \item [@{command "axiomatization"}~@{text "c\<^sub>1 \<dots> c\<^sub>m
-  \<WHERE> \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"}] introduces several constants
-  simultaneously and states axiomatic properties for these.  The
-  constants are marked as being specified once and for all, which
-  prevents additional specifications from being issued later on.
-  
-  Note that axiomatic specifications are only appropriate when
-  declaring a new logical system.  Normal applications should only use
-  definitional mechanisms!
-
-  \item [@{command "definition"}~@{text "c \<WHERE> eq"}] produces an
-  internal definition @{text "c \<equiv> t"} according to the specification
-  given as @{text eq}, which is then turned into a proven fact.  The
-  given proposition may deviate from internal meta-level equality
-  according to the rewrite rules declared as @{attribute defn} by the
-  object-logic.  This usually covers object-level equality @{text "x =
-  y"} and equivalence @{text "A \<leftrightarrow> B"}.  End-users normally need not
-  change the @{attribute defn} setup.
-  
-  Definitions may be presented with explicit arguments on the LHS, as
-  well as additional conditions, e.g.\ @{text "f x y = t"} instead of
-  @{text "f \<equiv> \<lambda>x y. t"} and @{text "y \<noteq> 0 \<Longrightarrow> g x y = u"} instead of an
-  unrestricted @{text "g \<equiv> \<lambda>x y. u"}.
-  
-  \item [@{command "abbreviation"}~@{text "c \<WHERE> eq"}] introduces
-  a syntactic constant which is associated with a certain term
-  according to the meta-level equality @{text eq}.
-  
-  Abbreviations participate in the usual type-inference process, but
-  are expanded before the logic ever sees them.  Pretty printing of
-  terms involves higher-order rewriting with rules stemming from
-  reverted abbreviations.  This needs some care to avoid overlapping
-  or looping syntactic replacements!
-  
-  The optional @{text mode} specification restricts output to a
-  particular print mode; using ``@{text input}'' here achieves the
-  effect of one-way abbreviations.  The mode may also include an
-  ``@{keyword "output"}'' qualifier that affects the concrete syntax
-  declared for abbreviations, cf.\ @{command "syntax"} in
-  \secref{sec:syn-trans}.
-  
-  \item [@{command "print_abbrevs"}] prints all constant abbreviations
-  of the current context.
-  
-  \item [@{command "notation"}~@{text "c (mx)"}] associates mixfix
-  syntax with an existing constant or fixed variable.  This is a
-  robust interface to the underlying @{command "syntax"} primitive
-  (\secref{sec:syn-trans}).  Type declaration and internal syntactic
-  representation of the given entity is retrieved from the context.
-  
-  \item [@{command "no_notation"}] is similar to @{command
-  "notation"}, but removes the specified syntax annotation from the
-  present context.
-
-  \end{descr}
-
-  All of these specifications support local theory targets (cf.\
-  \secref{sec:target}).
-*}
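By way of illustration, here is a minimal sketch of these commands (the constant and fact names are invented for the example; the concrete syntax follows the rail diagram above):

```isabelle
(* Foundational definition: turned into the internal form sq == %n. n * n *)
definition sq :: "nat => nat"
  where "sq n = n * n"

(* Syntactic abbreviation: expanded before the logic ever sees it *)
abbreviation double :: "nat => nat"
  where "double n == 2 * n"

(* Axiomatic specification -- appropriate only when declaring a new logic! *)
axiomatization f :: "nat => nat"
  where f_zero: "f 0 = 0"
```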
-
-
-subsection {* Generic declarations *}
-
-text {*
-  Arbitrary operations on the background context may be wrapped up as
-  generic declaration elements.  Since the underlying concept of local
-  theories may be subject to later re-interpretation, there is an
-  additional dependency on a morphism that tells the difference between
-  the original declaration context and the application context
-  encountered later on.  A fact declaration is an important special
-  case: it consists of a theorem which is applied to the context by
-  means of an attribute.
-
-  \begin{matharray}{rcl}
-    @{command_def "declaration"} & : & \isarkeep{local{\dsh}theory} \\
-    @{command_def "declare"} & : & \isarkeep{local{\dsh}theory} \\
-  \end{matharray}
-
-  \begin{rail}
-    'declaration' target? text
-    ;
-    'declare' target? (thmrefs + 'and')
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "declaration"}~@{text d}] adds the declaration
-  function @{text d} of ML type @{ML_type declaration} to the current
-  local theory under construction.  In later application contexts, the
-  function is transformed according to the morphisms being involved in
-  the interpretation hierarchy.
-
-  \item [@{command "declare"}~@{text thms}] declares theorems to the
-  current local theory context.  No theorem binding is involved here,
-  unlike @{command "theorems"} or @{command "lemmas"} (cf.\
-  \secref{sec:axms-thms}), so @{command "declare"} only has the effect
-  of applying attributes as included in the theorem specification.
-
-  \end{descr}
-*}
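For example (a hypothetical sketch, assuming an existing fact @{text my_eq} and an existing locale @{text c}):

```isabelle
(* Apply the simp attribute to an existing fact; no new binding is made *)
declare my_eq [simp]

(* The same, but targeted at the local theory of locale c *)
declare (in c) my_eq [simp]
```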
-
-
-subsection {* Local theory targets \label{sec:target} *}
-
-text {*
-  A local theory target is a context managed separately within the
-  enclosing theory.  Contexts may introduce parameters (fixed
-  variables) and assumptions (hypotheses).  Definitions and theorems
-  depending on the context may be added incrementally later on.  Named
-  contexts refer to locales (cf.\ \secref{sec:locale}) or type classes
-  (cf.\ \secref{sec:class}); the name ``@{text "-"}'' signifies the
-  global theory context.
-
-  \begin{matharray}{rcll}
-    @{command_def "context"} & : & \isartrans{theory}{local{\dsh}theory} \\
-    @{command_def "end"} & : & \isartrans{local{\dsh}theory}{theory} \\
-  \end{matharray}
-
-  \indexouternonterm{target}
-  \begin{rail}
-    'context' name 'begin'
-    ;
-
-    target: '(' 'in' name ')'
-    ;
-  \end{rail}
-
-  \begin{descr}
-  
-  \item [@{command "context"}~@{text "c \<BEGIN>"}] recommences an
-  existing locale or class context @{text c}.  Note that locale and
-  class definitions also allow the @{keyword_ref "begin"} keyword to
-  be included, in order to continue the local theory immediately
-  after the initial specification.
-  
-  \item [@{command "end"}] concludes the current local theory and
-  continues the enclosing global theory.  Note that a non-local
-  @{command "end"} has a different meaning: it concludes the theory
-  itself (\secref{sec:begin-thy}).
-  
-  \item [@{text "(\<IN> c)"}] given after any local theory command
-  specifies an immediate target, e.g.\ ``@{command
-  "definition"}~@{text "(\<IN> c) \<dots>"}'' or ``@{command
-  "theorem"}~@{text "(\<IN> c) \<dots>"}''.  This works both in a local or
-  global theory context; the current target context will be suspended
-  for this command only.  Note that ``@{text "(\<IN> -)"}'' will
-  always produce a global result independently of the current target
-  context.
-
-  \end{descr}
-
-  The exact meaning of results produced within a local theory context
-  depends on the underlying target infrastructure (locale, type class
-  etc.).  The general idea is as follows, considering a context named
-  @{text c} with parameter @{text x} and assumption @{text "A[x]"}.
-  
-  Definitions are exported by introducing a global version with
-  additional arguments; a syntactic abbreviation links the long form
-  with the abstract version of the target context.  For example,
-  @{text "a \<equiv> t[x]"} becomes @{text "c.a ?x \<equiv> t[?x]"} at the theory
-  level (for arbitrary @{text "?x"}), together with a local
-  abbreviation @{text "c \<equiv> c.a x"} in the target context (for the
-  fixed parameter @{text x}).
-
-  Theorems are exported by discharging the assumptions and
-  generalizing the parameters of the context.  For example, @{text "a:
-  B[x]"} becomes @{text "c.a: A[?x] \<Longrightarrow> B[?x]"}, again for arbitrary
-  @{text "?x"}.
-*}
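The export behaviour can be seen in a small sketch (illustrative names only, assuming ordinary Isabelle/HOL arithmetic in the background):

```isabelle
locale c =
  fixes x :: nat
  assumes A: "0 < x"

context c
begin

definition a where "a = x + x"    (* theory level: c.a ?x == ?x + ?x *)

lemma a_pos: "0 < a"              (* theory level: c.a_pos: 0 < ?x ==> 0 < c.a ?x *)
  unfolding a_def using A by simp

end

lemma (in c) a_alt: "a = 2 * x"   (* an immediate "(in c)" target *)
  unfolding a_def by simp
```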
-
-
-subsection {* Locales \label{sec:locale} *}
-
-text {*
-  Locales are named local contexts, consisting of a list of
-  declaration elements that are modeled after the Isar proof context
-  commands (cf.\ \secref{sec:proof-context}).
-*}
-
-
-subsubsection {* Locale specifications *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "locale"} & : & \isartrans{theory}{local{\dsh}theory} \\
-    @{command_def "print_locale"}@{text "\<^sup>*"} & : & \isarkeep{theory~|~proof} \\
-    @{command_def "print_locales"}@{text "\<^sup>*"} & : & \isarkeep{theory~|~proof} \\
-    @{method_def intro_locales} & : & \isarmeth \\
-    @{method_def unfold_locales} & : & \isarmeth \\
-  \end{matharray}
-
-  \indexouternonterm{contextexpr}\indexouternonterm{contextelem}
-  \indexisarelem{fixes}\indexisarelem{constrains}\indexisarelem{assumes}
-  \indexisarelem{defines}\indexisarelem{notes}\indexisarelem{includes}
-  \begin{rail}
-    'locale' ('(open)')? name ('=' localeexpr)? 'begin'?
-    ;
-    'print\_locale' '!'? localeexpr
-    ;
-    localeexpr: ((contextexpr '+' (contextelem+)) | contextexpr | (contextelem+))
-    ;
-
-    contextexpr: nameref | '(' contextexpr ')' |
-    (contextexpr (name mixfix? +)) | (contextexpr + '+')
-    ;
-    contextelem: fixes | constrains | assumes | defines | notes
-    ;
-    fixes: 'fixes' ((name ('::' type)? structmixfix? | vars) + 'and')
-    ;
-    constrains: 'constrains' (name '::' type + 'and')
-    ;
-    assumes: 'assumes' (thmdecl? props + 'and')
-    ;
-    defines: 'defines' (thmdecl? prop proppat? + 'and')
-    ;
-    notes: 'notes' (thmdef? thmrefs + 'and')
-    ;
-    includes: 'includes' contextexpr
-    ;
-  \end{rail}
-
-  \begin{descr}
-  
-  \item [@{command "locale"}~@{text "loc = import + body"}] defines a
-  new locale @{text loc} as a context consisting of a certain view of
-  existing locales (@{text import}) plus some additional elements
-  (@{text body}).  Both @{text import} and @{text body} are optional;
-  the degenerate form @{command "locale"}~@{text loc} defines an empty
-  locale, which may still be useful to collect declarations of facts
-  later on.  Type-inference on locale expressions automatically takes
-  care of the most general typing that the combined context elements
-  may acquire.
-
-  The @{text import} consists of a structured context expression,
-  consisting of references to existing locales, renamed contexts, or
-  merged contexts.  Renaming uses positional notation: @{text "c
-  x\<^sub>1 \<dots> x\<^sub>n"} means that (a prefix of) the fixed
-  parameters of context @{text c} are named @{text "x\<^sub>1, \<dots>,
-  x\<^sub>n"}; a ``@{text _}'' (underscore) means to skip that
-  position.  Renaming by default deletes concrete syntax, but new
-  syntax may be specified with a mixfix annotation.  An exception to
-  this rule is the special syntax declared with ``@{text
-  "(\<STRUCTURE>)"}'' (see below), which is neither deleted nor can it
-  be changed.  Merging proceeds from left-to-right, suppressing any
-  duplicates stemming from different paths through the import
-  hierarchy.
-
-  The @{text body} consists of basic context elements; further
-  context expressions may be included as well.
-
-  \begin{descr}
-
-  \item [@{element "fixes"}~@{text "x :: \<tau> (mx)"}] declares a local
-  parameter of type @{text \<tau>} and mixfix annotation @{text mx} (both
-  are optional).  The special syntax declaration ``@{text
-  "(\<STRUCTURE>)"}'' means that @{text x} may be referenced
-  implicitly in this context.
-
-  \item [@{element "constrains"}~@{text "x :: \<tau>"}] introduces a type
-  constraint @{text \<tau>} on the local parameter @{text x}.
-
-  \item [@{element "assumes"}~@{text "a: \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"}]
-  introduces local premises, similar to @{command "assume"} within a
-  proof (cf.\ \secref{sec:proof-context}).
-
-  \item [@{element "defines"}~@{text "a: x \<equiv> t"}] defines a previously
-  declared parameter.  This is similar to @{command "def"} within a
-  proof (cf.\ \secref{sec:proof-context}), but @{element "defines"}
-  takes an equational proposition instead of a variable-term pair.  The
-  left-hand side of the equation may have additional arguments, e.g.\
-  ``@{element "defines"}~@{text "f x\<^sub>1 \<dots> x\<^sub>n \<equiv> t"}''.
-
-  \item [@{element "notes"}~@{text "a = b\<^sub>1 \<dots> b\<^sub>n"}]
-  reconsiders facts within a local context.  Most notably, this may
-  include arbitrary declarations in any attribute specifications
-  included here, e.g.\ a local @{attribute simp} rule.
-
-  \item [@{element "includes"}~@{text c}] copies the specified context
-  in a statically scoped manner.  Only available in the long goal
-  format of \secref{sec:goals}.
-
-  In contrast, the initial @{text import} specification of a locale
-  expression maintains a dynamic relation to the locales being
-  referenced (benefiting from any later fact declarations in the
-  obvious manner).
-
-  \end{descr}
-  
-  Note that ``@{text "(\<IS> p\<^sub>1 \<dots> p\<^sub>n)"}'' patterns given
-  in the syntax of @{element "assumes"} and @{element "defines"} above
-  are illegal in locale definitions.  In the long goal format of
-  \secref{sec:goals}, term bindings may be included as expected,
-  though.
-  
-  \medskip By default, locale specifications are ``closed up'' by
-  turning the given text into a predicate definition @{text
-  loc_axioms} and deriving the original assumptions as local lemmas
-  (modulo local definitions).  The predicate statement covers only the
-  newly specified assumptions, omitting the content of included locale
-  expressions.  The full cumulative view is only provided on export,
-  involving another predicate @{text loc} that refers to the complete
-  specification text.
-  
-  In any case, the predicate arguments are those locale parameters
-  that actually occur in the respective piece of text.  Also note that
-  these predicates operate at the meta-level in theory, but the locale
-  package attempts to internalize statements according to the
-  object-logic setup (e.g.\ replacing @{text \<And>} by @{text \<forall>}, and
-  @{text "\<Longrightarrow>"} by @{text "\<longrightarrow>"} in HOL; see also
-  \secref{sec:object-logic}).  Separate introduction rules @{text
-  loc_axioms.intro} and @{text loc.intro} are provided as well.
-  
-  The @{text "(open)"} option of a locale specification prevents both
-  the current @{text loc_axioms} and cumulative @{text loc} predicate
-  constructions.  Predicates are also omitted for empty specification
-  texts.
-
-  \item [@{command "print_locale"}~@{text "import + body"}] prints the
-  specified locale expression in a flattened form.  The notable
-  special case @{command "print_locale"}~@{text loc} just prints the
-  contents of the named locale, but keep in mind that type-inference
-  will normalize type variables according to the usual alphabetical
-  order.  The command omits @{element "notes"} elements by default.
-  Use @{command "print_locale"}@{text "!"} to get them included.
-
-  \item [@{command "print_locales"}] prints the names of all locales
-  of the current theory.
-
-  \item [@{method intro_locales} and @{method unfold_locales}]
-  repeatedly expand all introduction rules of locale predicates of the
-  theory.  While @{method intro_locales} only applies the @{text
-  loc.intro} introduction rules and therefore does not descend to
-  assumptions, @{method unfold_locales} is more aggressive and applies
-  @{text loc_axioms.intro} as well.  Both methods are aware of locale
-  specifications entailed by the context, both from target and
-  @{element "includes"} statements, and from interpretations (see
-  below).  New goals that are entailed by the current context are
-  discharged automatically.
-
-  \end{descr}
-*}
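A typical locale specification following the grammar above, in the style of the locales tutorial (the names are illustrative):

```isabelle
locale semi =
  fixes prod :: "'a => 'a => 'a"  (infixl "\<odot>" 70)
  assumes assoc: "(x \<odot> y) \<odot> z = x \<odot> (y \<odot> z)"

(* Import plus additional body elements: semi contributes its fixes/assumes *)
locale comm_semi = semi +
  assumes comm: "x \<odot> y = y \<odot> x"
```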
-
-
-subsubsection {* Interpretation of locales *}
-
-text {*
-  Locale expressions (more precisely, \emph{context expressions}) may
-  be instantiated, and the instantiated facts added to the current
-  context.  This requires a proof of the instantiated specification
-  and is called \emph{locale interpretation}.  Interpretation is
-  possible in theories and locales (command @{command
-  "interpretation"}) and also within a proof body (command @{command
-  "interpret"}).
-
-  \begin{matharray}{rcl}
-    @{command_def "interpretation"} & : & \isartrans{theory}{proof(prove)} \\
-    @{command_def "interpret"} & : & \isartrans{proof(state) ~|~ proof(chain)}{proof(prove)} \\
-    @{command_def "print_interps"}@{text "\<^sup>*"} & : &  \isarkeep{theory~|~proof} \\
-  \end{matharray}
-
-  \indexouternonterm{interp}
-  \begin{rail}
-    'interpretation' (interp | name ('<' | subseteq) contextexpr)
-    ;
-    'interpret' interp
-    ;
-    'print\_interps' '!'? name
-    ;
-    instantiation: ('[' (inst+) ']')?
-    ;
-    interp: thmdecl? \\ (contextexpr instantiation |
-      name instantiation 'where' (thmdecl? prop + 'and'))
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "interpretation"}~@{text "expr insts \<WHERE> eqns"}]
-
-  The first form of @{command "interpretation"} interprets @{text
-  expr} in the theory.  The instantiation is given as a list of terms
-  @{text insts} and is positional.  All parameters must receive an
-  instantiation term --- with the exception of defined parameters.
-  These are, if omitted, derived from the defining equation and other
-  instantiations.  Use ``@{text _}'' to omit an instantiation term.
-
-  The command generates proof obligations for the instantiated
-  specifications (assumes and defines elements).  Once these are
-  discharged by the user, instantiated facts are added to the theory
-  in a post-processing phase.
-
-  Additional equations, which are unfolded in facts during
-  post-processing, may be given after the keyword @{keyword "where"}.
-  This is useful for interpreting concepts introduced through
-  definition specification elements.  The equations must be proved.
-  Note that if equations are present, the context expression is
-  restricted to a locale name.
-
-  The command is aware of interpretations already active in the
-  theory.  No proof obligations are generated for those, neither is
-  post-processing applied to their facts.  This avoids duplication of
-  interpreted facts, in particular.  Note that, in the case of a
-  locale with import, parts of the interpretation may already be
-  active.  The command will only generate proof obligations and
-  process facts for new parts.
-
-  The context expression may be preceded by a name and/or attributes.
-  These take effect in the post-processing of facts.  The name is used
-  to prefix fact names, for example to avoid accidental hiding of
-  other facts.  Attributes are applied after attributes of the
-  interpreted facts.
-
-  Adding facts to locales has the effect of also adding interpreted
-  facts to the theory for all active interpretations.  That is,
-  interpretations dynamically participate in any facts added to
-  locales.
-
-  \item [@{command "interpretation"}~@{text "name \<subseteq> expr"}]
-
-  This form of the command interprets @{text expr} in the locale
-  @{text name}.  It requires a proof that the specification of @{text
-  name} implies the specification of @{text expr}.  As in the
-  localized version of the theorem command, the proof is in the
-  context of @{text name}.  After the proof obligation has been
-  discharged, the facts of @{text expr} become part of locale @{text
-  name} as \emph{derived} context elements and are available when the
-  context @{text name} is subsequently entered.  Note that, like
-  import, this is dynamic: facts added to a locale that is part of
-  @{text expr} after the interpretation also become available in @{text name}.
-  Like facts of renamed context elements, facts obtained by
-  interpretation may be accessed by prefixing with the parameter
-  renaming (where the parameters are separated by ``@{text _}'').
-
-  Unlike interpretation in theories, instantiation is confined to the
-  renaming of parameters, which may be specified as part of the
-  context expression @{text expr}.  Using defined parameters in @{text
-  name}, one may achieve an effect similar to instantiation, though.
-
-  Only specification fragments of @{text expr} that are not already
-  part of @{text name} (be it imported, derived or a derived fragment
-  of the import) are considered by interpretation.  This enables
-  circular interpretations.
-
-  If interpretations of @{text name} exist in the current theory, the
-  command adds interpretations for @{text expr} as well, with the same
-  prefix and attributes, although only for fragments of @{text expr}
-  that are not interpreted in the theory already.
-
-  \item [@{command "interpret"}~@{text "expr insts \<WHERE> eqns"}]
-  interprets @{text expr} in the proof context and is otherwise
-  similar to interpretation in theories.
-
-  \item [@{command "print_interps"}~@{text loc}] prints the
-  interpretations of a particular locale @{text loc} that are active
-  in the current context, either theory or proof context.  The
-  exclamation point argument triggers printing of \emph{witness}
-  theorems justifying interpretations.  These are normally omitted
-  from the output.
-  
-  \end{descr}
-
-  \begin{warn}
-    Since attributes are applied to interpreted theorems,
-    interpretation may modify the context of common proof tools, e.g.\
-    the Simplifier or Classical Reasoner.  Since the behavior of such
-    automated reasoning tools is \emph{not} stable under
-    interpretation morphisms, manual declarations might have to be
-    issued.
-  \end{warn}
-
-  \begin{warn}
-    An interpretation in a theory may subsume previous
-    interpretations.  This happens if the same specification fragment
-    is interpreted twice and the instantiation of the second
-    interpretation is more general than the interpretation of the
-    first.  A warning is issued, since it is likely that these could
-    have been generalized in the first place.  The locale package does
-    not attempt to remove subsumed interpretations.
-  \end{warn}
-*}
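Following the grammar above, a theory-level interpretation might look as follows.  This is a hedged sketch: it assumes a locale @{text semi} with an assumption @{text assoc}, the bracketed positional instantiation of the historical syntax, and a background fact @{text add_assoc} for associativity of addition.

```isabelle
interpretation nat_add: semi ["op + :: nat => nat => nat"]
  by unfold_locales (rule add_assoc)
```

Afterwards instantiated facts such as @{text nat_add.assoc} become available in the theory; @{command "interpret"} behaves analogously inside a proof body.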
-
-
-subsection {* Classes \label{sec:class} *}
-
-text {*
-  A class is a particular locale with \emph{exactly one} type variable
-  @{text \<alpha>}.  Beyond the underlying locale, a corresponding type class
-  is established which is interpreted logically as axiomatic type
-  class \cite{Wenzel:1997:TPHOL} whose logical content is given by
-  the assumptions of the locale.  Thus, classes provide the full
-  generality of locales combined with the convenience of type classes
-  (notably type-inference).  See \cite{isabelle-classes} for a short
-  tutorial.
-
-  \begin{matharray}{rcl}
-    @{command_def "class"} & : & \isartrans{theory}{local{\dsh}theory} \\
-    @{command_def "instantiation"} & : & \isartrans{theory}{local{\dsh}theory} \\
-    @{command_def "instance"} & : & \isartrans{local{\dsh}theory}{local{\dsh}theory} \\
-    @{command_def "subclass"} & : & \isartrans{local{\dsh}theory}{local{\dsh}theory} \\
-    @{command_def "print_classes"}@{text "\<^sup>*"} & : & \isarkeep{theory~|~proof} \\
-    @{method_def intro_classes} & : & \isarmeth \\
-  \end{matharray}
-
-  \begin{rail}
-    'class' name '=' ((superclassexpr '+' (contextelem+)) | superclassexpr | (contextelem+)) \\
-      'begin'?
-    ;
-    'instantiation' (nameref + 'and') '::' arity 'begin'
-    ;
-    'instance'
-    ;
-    'subclass' target? nameref
-    ;
-    'print\_classes'
-    ;
-
-    superclassexpr: nameref | (nameref '+' superclassexpr)
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "class"}~@{text "c = superclasses + body"}] defines
-  a new class @{text c}, inheriting from @{text superclasses}.  This
-  introduces a locale @{text c} with import of all locales @{text
-  superclasses}.
-
-  Any @{element "fixes"} in @{text body} are lifted to the global
-  theory level (\emph{class operations} @{text "f\<^sub>1, \<dots>,
-  f\<^sub>n"} of class @{text c}), mapping the local type parameter
-  @{text \<alpha>} to a schematic type variable @{text "?\<alpha> :: c"}.
-
-  Likewise, @{element "assumes"} in @{text body} are also lifted,
-  mapping each local parameter @{text "f :: \<tau>[\<alpha>]"} to its
-  corresponding global constant @{text "f :: \<tau>[?\<alpha> :: c]"}.  The
-  corresponding introduction rule is provided as @{text
-  c_class_axioms.intro}.  This rule should rarely be needed directly
-  --- the @{method intro_classes} method takes care of the details of
-  class membership proofs.
-
-  \item [@{command "instantiation"}~@{text "t :: (s\<^sub>1, \<dots>,
-  s\<^sub>n) s \<BEGIN>"}] opens a theory target (cf.\
-  \secref{sec:target}) which allows specifying class operations @{text
-  "f\<^sub>1, \<dots>, f\<^sub>n"} corresponding to sort @{text s} at the
-  particular type instance @{text "(\<alpha>\<^sub>1 :: s\<^sub>1, \<dots>,
-  \<alpha>\<^sub>n :: s\<^sub>n) t"}.  A plain @{command "instance"} command
-  in the target body poses a goal stating these type arities.  The
-  target is concluded by an @{command_ref "end"} command.
-
-  Note that a list of simultaneous type constructors may be given;
-  this corresponds nicely to mutually recursive type definitions, e.g.\
-  in Isabelle/HOL.
-
-  \item [@{command "instance"}] in an instantiation target body sets
-  up a goal stating the type arities claimed at the opening @{command
-  "instantiation"}.  The proof would usually proceed by @{method
-  intro_classes}, and then establish the characteristic theorems of
-  the type classes involved.  After finishing the proof, the
-  background theory will be augmented by the proven type arities.
-
-  \item [@{command "subclass"}~@{text c}] in a class context for class
-  @{text d} sets up a goal stating that class @{text c} is logically
-  contained in class @{text d}.  After finishing the proof, class
-  @{text d} is proven to be a subclass of @{text c} and the locale
-  @{text c} is interpreted into @{text d} simultaneously.
-
-  \item [@{command "print_classes"}] prints all classes in the current
-  theory.
-
-  \item [@{method intro_classes}] repeatedly expands all class
-  introduction rules of this theory.  Note that this method usually
-  need not be named explicitly, as it is already included in the
-  default proof step (e.g.\ of @{command "proof"}).  In particular,
-  instantiation of trivial (syntactic) classes may be performed by a
-  single ``@{command ".."}'' proof step.
-
-  \end{descr}
-*}
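The following sketch, in the style of the class tutorial cited above, combines @{command "class"} and @{command "instantiation"} (the names are illustrative):

```isabelle
class semigroup =
  fixes mult :: "'a => 'a => 'a"  (infixl "\<otimes>" 70)
  assumes assoc: "(x \<otimes> y) \<otimes> z = x \<otimes> (y \<otimes> z)"

instantiation nat :: semigroup
begin

definition mult_nat_def: "m \<otimes> n = m + (n::nat)"

instance proof
  fix m n q :: nat
  show "(m \<otimes> n) \<otimes> q = m \<otimes> (n \<otimes> q)"
    unfolding mult_nat_def by simp
qed

end
```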
-
-
-subsubsection {* The class target *}
-
-text {*
-  %FIXME check
-
-  A named context may refer to a locale (cf.\ \secref{sec:target}).
-  If this locale is also a class @{text c}, apart from the common
-  locale target behaviour, the following happens.
-
-  \begin{itemize}
-
-  \item Local constant declarations @{text "g[\<alpha>]"} referring to the
-  local type parameter @{text \<alpha>} and local parameters @{text "f[\<alpha>]"}
-  are accompanied by theory-level constants @{text "g[?\<alpha> :: c]"}
-  referring to theory-level class operations @{text "f[?\<alpha> :: c]"}.
-
-  \item Local theorem bindings are lifted as are assumptions.
-
-  \item Local syntax refers to local operations @{text "g[\<alpha>]"} and
-  global operations @{text "g[?\<alpha> :: c]"} uniformly.  Type inference
-  resolves ambiguities.  In rare cases, manual type annotations are
-  needed.
-  
-  \end{itemize}
-*}
-
-
-subsection {* Axiomatic type classes \label{sec:axclass} *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "axclass"} & : & \isartrans{theory}{theory} \\
-    @{command_def "instance"} & : & \isartrans{theory}{proof(prove)} \\
-  \end{matharray}
-
-  Axiomatic type classes are Isabelle/Pure's primitive
-  \emph{definitional} interface to type classes.  For practical
-  applications, you should consider using classes
-  (cf.\ \secref{sec:class}), which provide a high-level interface.
-
-  \begin{rail}
-    'axclass' classdecl (axmdecl prop +)
-    ;
-    'instance' (nameref ('<' | subseteq) nameref | nameref '::' arity)
-    ;
-  \end{rail}
-
-  \begin{descr}
-  
-  \item [@{command "axclass"}~@{text "c \<subseteq> c\<^sub>1, \<dots>, c\<^sub>n
-  axms"}] defines an axiomatic type class as the intersection of
-  existing classes, with additional axioms holding.  Class axioms may
-  not contain more than one type variable.  The class axioms (with
-  implicit sort constraints added) are bound to the given names.
-  Furthermore a class introduction rule is generated (being bound as
-  @{text c_class.intro}); this rule is employed by method @{method
-  intro_classes} to support instantiation proofs of this class.
-  
-  The ``class axioms'' are stored as theorems according to the given
-  name specifications, adding @{text "c_class"} as name space prefix;
-  the same facts are also stored collectively as @{text
-  c_class.axioms}.
-  
-  \item [@{command "instance"}~@{text "c\<^sub>1 \<subseteq> c\<^sub>2"} and
-  @{command "instance"}~@{text "t :: (s\<^sub>1, \<dots>, s\<^sub>n) s"}]
-  set up a goal stating a class relation or type arity.  The proof
-  would usually proceed by @{method intro_classes}, and then establish
-  the characteristic theorems of the type classes involved.  After
-  finishing the proof, the theory will be augmented by a type
-  signature declaration corresponding to the resulting theorem.
-
-  \end{descr}
-*}
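For the record, a sketch of this axiomatic interface (the class name @{text porder} is invented here, and the usual @{text ord} syntax for @{text "<="} is assumed):

```isabelle
axclass porder < ord
  porder_refl: "x <= x"
  porder_trans: "x <= y ==> y <= z ==> x <= z"

instance nat :: porder
  by intro_classes auto
```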
-
-
-subsection {* Arbitrary overloading *}
-
-text {*
-  Isabelle/Pure's definitional schemes support certain forms of
-  overloading (see \secref{sec:consts}).  On most occasions
-  overloading will be used in a Haskell-like fashion together with
-  type classes by means of @{command "instantiation"} (see
-  \secref{sec:class}).  Sometimes low-level overloading is desirable.
-  The @{command "overloading"} target provides a convenient view for
-  end-users.
-
-  \begin{matharray}{rcl}
-    @{command_def "overloading"} & : & \isartrans{theory}{local{\dsh}theory} \\
-  \end{matharray}
-
-  \begin{rail}
-    'overloading' \\
-    ( string ( '==' | equiv ) term ( '(' 'unchecked' ')' )? + ) 'begin'
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "overloading"}~@{text "x\<^sub>1 \<equiv> c\<^sub>1 ::
-  \<tau>\<^sub>1 \<AND> \<dots> x\<^sub>n \<equiv> c\<^sub>n :: \<tau>\<^sub>n \<BEGIN>"}]
-  opens a theory target (cf.\ \secref{sec:target}) which allows
-  specifying constants with overloaded definitions.  These are identified
-  by an explicitly given mapping from variable names @{text
-  "x\<^sub>i"} to constants @{text "c\<^sub>i"} at particular type
-  instances.  The definitions themselves are established using common
-  specification tools, using the names @{text "x\<^sub>i"} as
-  reference to the corresponding constants.  The target is concluded
-  by @{command "end"}.
-
-  A @{text "(unchecked)"} option disables global dependency checks for
-  the corresponding definition, which is occasionally useful for
-  exotic overloading.  It is at the discretion of the user to avoid
-  malformed theory specifications!
-
-  \end{descr}
-*}
-
-
-subsection {* Configuration options *}
+section {* Configuration options *}
 
 text {*
   Isabelle/Pure maintains a record of named configuration options
@@ -786,7 +47,7 @@
 *}
 
 
-section {* Proof tools *}
+section {* Basic proof tools *}
 
 subsection {* Miscellaneous methods and attributes \label{sec:misc-meth-att} *}
 
@@ -1031,9 +292,9 @@
 *}
 
 
-subsection {* The Simplifier \label{sec:simplifier} *}
+section {* The Simplifier \label{sec:simplifier} *}
 
-subsubsection {* Simplification methods *}
+subsection {* Simplification methods *}
 
 text {*
   \begin{matharray}{rcl}
@@ -1112,7 +373,7 @@
 *}
 
 
-subsubsection {* Declaring rules *}
+subsection {* Declaring rules *}
 
 text {*
   \begin{matharray}{rcl}
@@ -1143,7 +404,7 @@
 *}
 
 
-subsubsection {* Simplification procedures *}
+subsection {* Simplification procedures *}
 
 text {*
   \begin{matharray}{rcl}
@@ -1189,7 +450,7 @@
 *}
 
 
-subsubsection {* Forward simplification *}
+subsection {* Forward simplification *}
 
 text {*
   \begin{matharray}{rcl}
@@ -1224,7 +485,7 @@
 *}
 
 
-subsubsection {* Low-level equational reasoning *}
+subsection {* Low-level equational reasoning *}
 
 text {*
   \begin{matharray}{rcl}
@@ -1290,9 +551,9 @@
 *}
 
 
-subsection {* The Classical Reasoner \label{sec:classical} *}
+section {* The Classical Reasoner \label{sec:classical} *}
 
-subsubsection {* Basic methods *}
+subsection {* Basic methods *}
 
 text {*
   \begin{matharray}{rcl}
@@ -1335,7 +596,7 @@
 *}
 
 
-subsubsection {* Automated methods *}
+subsection {* Automated methods *}
 
 text {*
   \begin{matharray}{rcl}
@@ -1381,7 +642,7 @@
 *}
 
 
-subsubsection {* Combined automated methods \label{sec:clasimp} *}
+subsection {* Combined automated methods \label{sec:clasimp} *}
 
 text {*
   \begin{matharray}{rcl}
@@ -1427,7 +688,7 @@
 *}
 
 
-subsubsection {* Declaring rules *}
+subsection {* Declaring rules *}
 
 text {*
   \begin{matharray}{rcl}
@@ -1483,7 +744,7 @@
 *}
 
 
-subsubsection {* Classical operations *}
+subsection {* Classical operations *}
 
 text {*
   \begin{matharray}{rcl}
@@ -1500,370 +761,6 @@
 *}
 
 
-subsection {* Proof by cases and induction \label{sec:cases-induct} *}
-
-subsubsection {* Rule contexts *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "case"} & : & \isartrans{proof(state)}{proof(state)} \\
-    @{command_def "print_cases"}@{text "\<^sup>*"} & : & \isarkeep{proof} \\
-    @{attribute_def case_names} & : & \isaratt \\
-    @{attribute_def case_conclusion} & : & \isaratt \\
-    @{attribute_def params} & : & \isaratt \\
-    @{attribute_def consumes} & : & \isaratt \\
-  \end{matharray}
-
-  The puristic way to build up Isar proof contexts is by explicit
-  language elements like @{command "fix"}, @{command "assume"},
-  @{command "let"} (see \secref{sec:proof-context}).  This is adequate
-  for plain natural deduction, but easily becomes unwieldy in concrete
-  verification tasks, which typically involve big induction rules with
-  several cases.
-
-  The @{command "case"} command provides a shorthand to refer to a
-  local context symbolically: certain proof methods provide an
-  environment of named ``cases'' of the form @{text "c: x\<^sub>1, \<dots>,
-  x\<^sub>m, \<phi>\<^sub>1, \<dots>, \<phi>\<^sub>n"}; the effect of ``@{command
-  "case"}~@{text c}'' is then equivalent to ``@{command "fix"}~@{text
-  "x\<^sub>1 \<dots> x\<^sub>m"}~@{command "assume"}~@{text "c: \<phi>\<^sub>1 \<dots>
-  \<phi>\<^sub>n"}''.  Term bindings may be covered as well, notably
-  @{variable ?case} for the main conclusion.
-
-  By default, the ``terminology'' @{text "x\<^sub>1, \<dots>, x\<^sub>m"} of
-  a case value is marked as hidden, i.e.\ there is no way to refer to
-  such parameters in the subsequent proof text.  After all, original
-  rule parameters stem from somewhere outside of the current proof
-  text.  By using the explicit form ``@{command "case"}~@{text "(c
-  y\<^sub>1 \<dots> y\<^sub>m)"}'' instead, the proof author is able to
-  choose local names that fit nicely into the current context.
-
-  \medskip It is important to note that proper use of @{command
-  "case"} does not provide means to peek at the current goal state,
-  which is not directly observable in Isar!  Nonetheless, goal
-  refinement commands do provide named cases @{text "goal\<^sub>i"}
-  for each subgoal @{text "i = 1, \<dots>, n"} of the resulting goal state.
-  Using this extra feature requires great care, because some bits of
-  the internal tactical machinery intrude the proof text.  In
-  particular, parameter names stemming from the left-over of automated
-  reasoning tools are usually quite unpredictable.
-
-  Under normal circumstances, the text of cases emerges from standard
-  elimination or induction rules, which in turn are derived from
-  previous theory specifications in a canonical way (say from
-  @{command "inductive"} definitions).
-
-  \medskip Proper cases are only available if both the proof method
-  and the rules involved support this.  By using appropriate
-  attributes, case names, conclusions, and parameters may be also
-  declared by hand.  Thus variant versions of rules that have been
-  derived manually become ready to use in advanced case analysis
-  later.
-
-  \begin{rail}
-    'case' (caseref | '(' caseref ((name | underscore) +) ')')
-    ;
-    caseref: nameref attributes?
-    ;
-
-    'case\_names' (name +)
-    ;
-    'case\_conclusion' name (name *)
-    ;
-    'params' ((name *) + 'and')
-    ;
-    'consumes' nat?
-    ;
-  \end{rail}
-
-  \begin{descr}
-  
-  \item [@{command "case"}~@{text "(c x\<^sub>1 \<dots> x\<^sub>m)"}]
-  invokes a named local context @{text "c: x\<^sub>1, \<dots>, x\<^sub>m,
-  \<phi>\<^sub>1, \<dots>, \<phi>\<^sub>n"}, as provided by an appropriate
-  proof method (such as @{method_ref cases} and @{method_ref induct}).
-  The command ``@{command "case"}~@{text "(c x\<^sub>1 \<dots>
-  x\<^sub>m)"}'' abbreviates ``@{command "fix"}~@{text "x\<^sub>1 \<dots>
-  x\<^sub>m"}~@{command "assume"}~@{text "c: \<phi>\<^sub>1 \<dots>
-  \<phi>\<^sub>n"}''.
-
-  \item [@{command "print_cases"}] prints all local contexts of the
-  current state, using Isar proof language notation.
-  
-  \item [@{attribute case_names}~@{text "c\<^sub>1 \<dots> c\<^sub>k"}]
-  declares names for the local contexts of premises of a theorem;
-  @{text "c\<^sub>1, \<dots>, c\<^sub>k"} refers to the \emph{suffix} of the
-  list of premises.
-  
-  \item [@{attribute case_conclusion}~@{text "c d\<^sub>1 \<dots>
-  d\<^sub>k"}] declares names for the conclusions of a named premise
-  @{text c}; here @{text "d\<^sub>1, \<dots>, d\<^sub>k"} refers to the
-  prefix of arguments of a logical formula built by nesting a binary
-  connective (e.g.\ @{text "\<or>"}).
-  
-  Note that proof methods such as @{method induct} and @{method
-  coinduct} already provide a default name for the conclusion as a
-  whole.  The need to name subformulas only arises with cases that
-  split into several sub-cases, as in common co-induction rules.
-
-  \item [@{attribute params}~@{text "p\<^sub>1 \<dots> p\<^sub>m \<AND> \<dots>
-  q\<^sub>1 \<dots> q\<^sub>n"}] renames the innermost parameters of
-  premises @{text "1, \<dots>, n"} of some theorem.  An empty list of names
-  may be given to skip positions, leaving the present parameters
-  unchanged.
-  
-  Note that the default usage of case rules does \emph{not} directly
-  expose parameters to the proof context.
-  
-  \item [@{attribute consumes}~@{text n}] declares the number of
-  ``major premises'' of a rule, i.e.\ the number of facts to be
-  consumed when it is applied by an appropriate proof method.  The
-  default value of @{attribute consumes} is @{text "n = 1"}, which is
-  appropriate for the usual kind of cases and induction rules for
-  inductive sets (cf.\ \secref{sec:hol-inductive}).  Rules without any
-  @{attribute consumes} declaration given are treated as if
-  @{attribute consumes}~@{text 0} had been specified.
-  
-  Note that explicit @{attribute consumes} declarations are only
-  rarely needed; this is already taken care of automatically by the
-  higher-level @{attribute cases}, @{attribute induct}, and
-  @{attribute coinduct} declarations.
-
-  \end{descr}
-*}
-
-
-subsubsection {* Proof methods *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{method_def cases} & : & \isarmeth \\
-    @{method_def induct} & : & \isarmeth \\
-    @{method_def coinduct} & : & \isarmeth \\
-  \end{matharray}
-
-  The @{method cases}, @{method induct}, and @{method coinduct}
-  methods provide a uniform interface to common proof techniques over
-  datatypes, inductive predicates (or sets), recursive functions etc.
-  The corresponding rules may be specified and instantiated in a
-  casual manner.  Furthermore, these methods provide named local
-  contexts that may be invoked via the @{command "case"} proof command
-  within the subsequent proof text.  This accommodates compact proof
-  texts even when reasoning about large specifications.
-
-  The @{method induct} method also provides some additional
-  infrastructure in order to be applicable to structured statements
-  (either using explicit meta-level connectives, or including facts
-  and parameters separately).  This avoids cumbersome encoding of
-  ``strengthened'' inductive statements within the object-logic.
-
-  \begin{rail}
-    'cases' (insts * 'and') rule?
-    ;
-    'induct' (definsts * 'and') \\ arbitrary? taking? rule?
-    ;
-    'coinduct' insts taking rule?
-    ;
-
-    rule: ('type' | 'pred' | 'set') ':' (nameref +) | 'rule' ':' (thmref +)
-    ;
-    definst: name ('==' | equiv) term | inst
-    ;
-    definsts: ( definst *)
-    ;
-    arbitrary: 'arbitrary' ':' ((term *) 'and' +)
-    ;
-    taking: 'taking' ':' insts
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{method cases}~@{text "insts R"}] applies method @{method
-  rule} with an appropriate case distinction theorem, instantiated to
-  the subjects @{text insts}.  Symbolic case names are bound according
-  to the rule's local contexts.
-
-  The rule is determined as follows, according to the facts and
-  arguments passed to the @{method cases} method:
-
-  \medskip
-  \begin{tabular}{llll}
-    facts           &                 & arguments   & rule \\\hline
-                    & @{method cases} &             & classical case split \\
-                    & @{method cases} & @{text t}   & datatype exhaustion (type of @{text t}) \\
-    @{text "\<turnstile> A t"} & @{method cases} & @{text "\<dots>"} & inductive predicate/set elimination (of @{text A}) \\
-    @{text "\<dots>"}     & @{method cases} & @{text "\<dots> rule: R"} & explicit rule @{text R} \\
-  \end{tabular}
-  \medskip
-
-  Several instantiations may be given, referring to the \emph{suffix}
-  of premises of the case rule; within each premise, the \emph{prefix}
-  of variables is instantiated.  In most situations, only a single
-  term needs to be specified; this refers to the first variable of the
-  last premise (it is usually the same for all cases).
-
-  \item [@{method induct}~@{text "insts R"}] is analogous to the
-  @{method cases} method, but refers to induction rules, which are
-  determined as follows:
-
-  \medskip
-  \begin{tabular}{llll}
-    facts           &                  & arguments            & rule \\\hline
-                    & @{method induct} & @{text "P x"}        & datatype induction (type of @{text x}) \\
-    @{text "\<turnstile> A x"} & @{method induct} & @{text "\<dots>"}          & predicate/set induction (of @{text A}) \\
-    @{text "\<dots>"}     & @{method induct} & @{text "\<dots> rule: R"} & explicit rule @{text R} \\
-  \end{tabular}
-  \medskip
-  
-  Several instantiations may be given, each referring to some part of
-  a mutual inductive definition or datatype --- only related partial
-  induction rules may be used together, though.  Any of the lists of
-  terms @{text "P, x, \<dots>"} refers to the \emph{suffix} of variables
-  present in the induction rule.  This enables the writer to specify
-  only induction variables, or both predicates and variables, for
-  example.
-  
-  Instantiations may be definitional: equations @{text "x \<equiv> t"}
-  introduce local definitions, which are inserted into the claim and
-  discharged after applying the induction rule.  Equalities reappear
-  in the inductive cases, but have been transformed according to the
-  induction principle being involved here.  In order to achieve
-  practically useful induction hypotheses, some variables occurring in
-  @{text t} need to be fixed (see below).
-  
-  The optional ``@{text "arbitrary: x\<^sub>1 \<dots> x\<^sub>m"}''
-  specification generalizes variables @{text "x\<^sub>1, \<dots>,
-  x\<^sub>m"} of the original goal before applying induction.  Thus
-  induction hypotheses may become sufficiently general to get the
-  proof through.  Together with definitional instantiations, one may
-  effectively perform induction over expressions of a certain
-  structure.
-  
-  The optional ``@{text "taking: t\<^sub>1 \<dots> t\<^sub>n"}''
-  specification provides additional instantiations of a prefix of
-  pending variables in the rule.  Such schematic induction rules
-  rarely occur in practice, though.
-
-  \item [@{method coinduct}~@{text "inst R"}] is analogous to the
-  @{method induct} method, but refers to coinduction rules, which are
-  determined as follows:
-
-  \medskip
-  \begin{tabular}{llll}
-    goal          &                    & arguments & rule \\\hline
-                  & @{method coinduct} & @{text x} & type coinduction (type of @{text x}) \\
-    @{text "A x"} & @{method coinduct} & @{text "\<dots>"} & predicate/set coinduction (of @{text A}) \\
-    @{text "\<dots>"}   & @{method coinduct} & @{text "\<dots> rule: R"} & explicit rule @{text R} \\
-  \end{tabular}
-  
-  Coinduction is the dual of induction.  Induction essentially
-  eliminates @{text "A x"} towards a generic result @{text "P x"},
-  while coinduction introduces @{text "A x"} starting with @{text "B
-  x"}, for a suitable ``bisimulation'' @{text B}.  The cases of a
-  coinduct rule are typically named after the predicates or sets being
-  covered, while the conclusions consist of several alternatives being
-  named after the individual destructor patterns.
-  
-  The given instantiation refers to the \emph{suffix} of variables
-  occurring in the rule's major premise, or conclusion if unavailable.
-  An additional ``@{text "taking: t\<^sub>1 \<dots> t\<^sub>n"}''
-  specification may be required in order to specify the bisimulation
-  to be used in the coinduction step.
-
-  \end{descr}
-
-  Above methods produce named local contexts, as determined by the
-  instantiated rule as given in the text.  Beyond that, the @{method
-  induct} and @{method coinduct} methods guess further instantiations
-  from the goal specification itself.  Any persisting unresolved
-  schematic variables of the resulting rule will render the
-  corresponding case invalid.  The term binding @{variable ?case} for
-  the conclusion will be provided with each case, provided that term
-  is fully specified.
-
-  The @{command "print_cases"} command prints all named cases present
-  in the current proof state.
-
-  \medskip Despite the additional infrastructure, both @{method cases}
-  and @{method coinduct} merely apply a certain rule, after
-  instantiation, while conforming due to the usual way of monotonic
-  natural deduction: the context of a structured statement @{text
-  "\<And>x\<^sub>1 \<dots> x\<^sub>m. \<phi>\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<dots>"}
-  reappears unchanged after the case split.
-
-  The @{method induct} method is fundamentally different in this
-  respect: the meta-level structure is passed through the
-  ``recursive'' course involved in the induction.  Thus the original
-  statement is basically replaced by separate copies, corresponding to
-  the induction hypotheses and conclusion; the original goal context
-  is no longer available.  Thus local assumptions, fixed parameters
-  and definitions effectively participate in the inductive rephrasing
-  of the original statement.
-
-  In induction proofs, local assumptions introduced by cases are split
-  into two different kinds: @{text hyps} stemming from the rule and
-  @{text prems} from the goal statement.  This is reflected in the
-  extracted cases accordingly, so invoking ``@{command "case"}~@{text
-  c}'' will provide separate facts @{text c.hyps} and @{text c.prems},
-  as well as fact @{text c} to hold the all-inclusive list.
-
-  \medskip Facts presented to either method are consumed according to
-  the number of ``major premises'' of the rule involved, which is
-  usually 0 for plain cases and induction rules of datatypes etc.\ and
-  1 for rules of inductive predicates or sets and the like.  The
-  remaining facts are inserted into the goal verbatim before the
-  actual @{text cases}, @{text induct}, or @{text coinduct} rule is
-  applied.
-*}
-
-
-subsubsection {* Declaring rules *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "print_induct_rules"}@{text "\<^sup>*"} & : & \isarkeep{theory~|~proof} \\
-    @{attribute_def cases} & : & \isaratt \\
-    @{attribute_def induct} & : & \isaratt \\
-    @{attribute_def coinduct} & : & \isaratt \\
-  \end{matharray}
-
-  \begin{rail}
-    'cases' spec
-    ;
-    'induct' spec
-    ;
-    'coinduct' spec
-    ;
-
-    spec: ('type' | 'pred' | 'set') ':' nameref
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "print_induct_rules"}] prints cases and induct
-  rules for predicates (or sets) and types of the current context.
-  
-  \item [@{attribute cases}, @{attribute induct}, and @{attribute
-  coinduct}] (as attributes) augment the corresponding context of
-  rules for reasoning about (co)inductive predicates (or sets) and
-  types, using the corresponding methods of the same name.  Certain
-  definitional packages of object-logics usually declare emerging
-  cases and induction rules as expected, so users rarely need to
-  intervene.
-  
-  Manual rule declarations usually refer to the @{attribute
-  case_names} and @{attribute params} attributes to adjust names of
-  cases and parameters of a rule; the @{attribute consumes}
-  declaration is taken care of automatically: @{attribute
-  consumes}~@{text 0} is specified for ``type'' rules and @{attribute
-  consumes}~@{text 1} for ``predicate'' / ``set'' rules.
-
-  \end{descr}
-*}
-
-
 section {* General logic setup \label{sec:object-logic} *}
 
 text {*
--- a/doc-src/IsarRef/Thy/Introduction.thy	Mon Jun 02 22:50:21 2008 +0200
+++ b/doc-src/IsarRef/Thy/Introduction.thy	Mon Jun 02 22:50:23 2008 +0200
@@ -53,10 +53,23 @@
 
   \medskip The Isabelle/Isar framework is generic and should work
   reasonably well for any Isabelle object-logic that conforms to the
-  natural deduction view of the Isabelle/Pure framework.  Major
-  Isabelle logics like HOL \cite{isabelle-HOL}, HOLCF
-  \cite{MuellerNvOS99}, FOL \cite{isabelle-logics}, and ZF
-  \cite{isabelle-ZF} have already been set up for end-users.
+  natural deduction view of the Isabelle/Pure framework.  Specific
+  language elements introduced by the major object-logics are
+  described in \chref{ch:hol} (Isabelle/HOL), \chref{ch:holcf}
+  (Isabelle/HOLCF), and \chref{ch:zf} (Isabelle/ZF).  The main
+  language elements are already provided by the Isabelle/Pure
+  framework. Nevertheless, examples given in the generic parts will
+  usually refer to Isabelle/HOL as well.
+
+  \medskip Isar commands may be either \emph{proper} document
+  constructors or \emph{improper commands}.  Some proof methods and
+  attributes introduced later are classified as improper as well.
+  Improper Isar language elements, which are marked by ``@{text
+  "\<^sup>*"}'' in the subsequent chapters; they are often helpful
+  when developing proof documents, but their use is discouraged for
+  the final human-readable outcome.  Typical examples are diagnostic
+  commands that print terms or theorems according to the current
+  context; other commands emulate old-style tactical theorem proving.
 *}
 
 
@@ -84,7 +97,7 @@
 *}
 
 
-subsection {* Proof General *}
+subsection {* Emacs Proof General *}
 
 text {*
   Plain TTY-based interaction as above used to be quite feasible with
@@ -171,7 +184,7 @@
   hand, the plain ASCII sources easily become somewhat unintelligible.
  For example, @{text "\<Longrightarrow>"} would appear as @{verbatim "\<Longrightarrow>"} according to
   the default set of Isabelle symbols.  Nevertheless, the Isabelle
-  document preparation system (see \secref{sec:document-prep}) will be
+  document preparation system (see \chref{ch:document-prep}) will be
   happy to print non-ASCII symbols properly.  It is even possible to
   invent additional notation beyond the display capabilities of Emacs
   and X-Symbol.
@@ -214,56 +227,6 @@
 *}
 
 
-subsection {* Document preparation \label{sec:document-prep} *}
-
-text {*
-  Isabelle/Isar provides a simple document preparation system based on
-  existing {PDF-\LaTeX} technology, with full support of hyper-links
-  (both local references and URLs) and bookmarks.  Thus the results
-  are equally well suited for WWW browsing and as printed copies.
-
-  \medskip Isabelle generates {\LaTeX} output as part of the run of a
-  \emph{logic session} (see also \cite{isabelle-sys}).  Getting
-  started with a working configuration for common situations is quite
-  easy by using the Isabelle @{verbatim mkdir} and @{verbatim make}
-  tools.  First invoke
-\begin{ttbox}
-  isatool mkdir Foo
-\end{ttbox}
-  to initialize a separate directory for session @{verbatim Foo} ---
-  it is safe to experiment, since @{verbatim "isatool mkdir"} never
-  overwrites existing files.  Ensure that @{verbatim "Foo/ROOT.ML"}
-  holds ML commands to load all theories required for this session;
-  furthermore @{verbatim "Foo/document/root.tex"} should include any
-  special {\LaTeX} macro packages required for your document (the
-  default is usually sufficient as a start).
-
-  The session is controlled by a separate @{verbatim IsaMakefile}
-  (with crude source dependencies by default).  This file is located
-  one level up from the @{verbatim Foo} directory location.  Now
-  invoke
-\begin{ttbox}
-  isatool make Foo
-\end{ttbox}
-  to run the @{verbatim Foo} session, with browser information and
-  document preparation enabled.  Unless any errors are reported by
-  Isabelle or {\LaTeX}, the output will appear inside the directory
-  @{verbatim ISABELLE_BROWSER_INFO}, as reported by the batch job in
-  verbose mode.
-
-  \medskip You may also consider tuning the @{verbatim usedir}
-  options in @{verbatim IsaMakefile}, for example to change the output
-  format from @{verbatim pdf} to @{verbatim dvi}, or activate the
-  @{verbatim "-D"} option to retain a second copy of the generated
-  {\LaTeX} sources.
-
-  \medskip See \emph{The Isabelle System Manual} \cite{isabelle-sys}
-  for further details on Isabelle logic sessions and theory
-  presentation.  The Isabelle/HOL tutorial \cite{isabelle-hol-book}
-  also covers theory presentation issues.
-*}
-
-
 subsection {* How to write Isar proofs anyway? \label{sec:isar-howto} *}
 
 text {*
--- a/doc-src/IsarRef/Thy/Outer_Syntax.thy	Mon Jun 02 22:50:21 2008 +0200
+++ b/doc-src/IsarRef/Thy/Outer_Syntax.thy	Mon Jun 02 22:50:23 2008 +0200
@@ -4,7 +4,7 @@
 imports Pure
 begin
 
-chapter {* Syntax primitives *}
+chapter {* Outer syntax *}
 
 text {*
   The rather generic framework of Isabelle/Isar syntax emerges from
@@ -468,283 +468,4 @@
   \secref{sec:proof-context}.
 *}
 
-
-subsection {* Antiquotations \label{sec:antiq} *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{antiquotation_def "theory"} & : & \isarantiq \\
-    @{antiquotation_def "thm"} & : & \isarantiq \\
-    @{antiquotation_def "prop"} & : & \isarantiq \\
-    @{antiquotation_def "term"} & : & \isarantiq \\
-    @{antiquotation_def const} & : & \isarantiq \\
-    @{antiquotation_def abbrev} & : & \isarantiq \\
-    @{antiquotation_def typeof} & : & \isarantiq \\
-    @{antiquotation_def typ} & : & \isarantiq \\
-    @{antiquotation_def thm_style} & : & \isarantiq \\
-    @{antiquotation_def term_style} & : & \isarantiq \\
-    @{antiquotation_def "text"} & : & \isarantiq \\
-    @{antiquotation_def goals} & : & \isarantiq \\
-    @{antiquotation_def subgoals} & : & \isarantiq \\
-    @{antiquotation_def prf} & : & \isarantiq \\
-    @{antiquotation_def full_prf} & : & \isarantiq \\
-    @{antiquotation_def ML} & : & \isarantiq \\
-    @{antiquotation_def ML_type} & : & \isarantiq \\
-    @{antiquotation_def ML_struct} & : & \isarantiq \\
-  \end{matharray}
-
-  The text body of formal comments (see also \secref{sec:comments})
-  may contain antiquotations of logical entities, such as theorems,
-  terms and types, which are to be presented in the final output
-  produced by the Isabelle document preparation system (see also
-  \secref{sec:document-prep}).
-
-  Thus embedding of ``@{text "@{term [show_types] \"f x = a + x\"}"}''
-  within a text block would cause
-  \isa{{\isacharparenleft}f{\isasymColon}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a{\isacharparenright}\ {\isacharparenleft}x{\isasymColon}{\isacharprime}a{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}a{\isasymColon}{\isacharprime}a{\isacharparenright}\ {\isacharplus}\ x} to appear in the final {\LaTeX} document.  Also note that theorem
-  antiquotations may involve attributes as well.  For example,
-  @{text "@{thm sym [no_vars]}"} would print the theorem's
-  statement where all schematic variables have been replaced by fixed
-  ones, which are easier to read.
-
-  \begin{rail}
-    atsign lbrace antiquotation rbrace
-    ;
-
-    antiquotation:
-      'theory' options name |
-      'thm' options thmrefs |
-      'prop' options prop |
-      'term' options term |
-      'const' options term |
-      'abbrev' options term |
-      'typeof' options term |
-      'typ' options type |
-      'thm\_style' options name thmref |
-      'term\_style' options name term |
-      'text' options name |
-      'goals' options |
-      'subgoals' options |
-      'prf' options thmrefs |
-      'full\_prf' options thmrefs |
-      'ML' options name |
-      'ML\_type' options name |
-      'ML\_struct' options name
-    ;
-    options: '[' (option * ',') ']'
-    ;
-    option: name | name '=' name
-    ;
-  \end{rail}
-
-  Note that the syntax of antiquotations may \emph{not} include source
-  comments @{verbatim "(*"}~@{text "\<dots>"}~@{verbatim "*)"} or verbatim
-  text @{verbatim "{"}@{verbatim "*"}~@{text "\<dots>"}~@{verbatim
-  "*"}@{verbatim "}"}.
-
-  \begin{descr}
-  
-  \item [@{text "@{theory A}"}] prints the name @{text "A"}, which is
-  guaranteed to refer to a valid ancestor theory in the current
-  context.
-
-  \item [@{text "@{thm a\<^sub>1 \<dots> a\<^sub>n}"}] prints theorems
-  @{text "a\<^sub>1 \<dots> a\<^sub>n"}.  Note that attribute specifications
-  may be included as well (see also \secref{sec:syn-att}); the
-  @{attribute_ref no_vars} rule (see \secref{sec:misc-meth-att}) would
-  be particularly useful to suppress printing of schematic variables.
-
-  \item [@{text "@{prop \<phi>}"}] prints a well-typed proposition @{text
-  "\<phi>"}.
-
-  \item [@{text "@{term t}"}] prints a well-typed term @{text "t"}.
-
-  \item [@{text "@{const c}"}] prints a logical or syntactic constant
-  @{text "c"}.
-  
-  \item [@{text "@{abbrev c x\<^sub>1 \<dots> x\<^sub>n}"}] prints a constant
-  abbreviation @{text "c x\<^sub>1 \<dots> x\<^sub>n \<equiv> rhs"} as defined in
-  the current context.
-
-  \item [@{text "@{typeof t}"}] prints the type of a well-typed term
-  @{text "t"}.
-
-  \item [@{text "@{typ \<tau>}"}] prints a well-formed type @{text "\<tau>"}.
-  
-  \item [@{text "@{thm_style s a}"}] prints theorem @{text a},
-  previously applying a style @{text s} to it (see below).
-  
-  \item [@{text "@{term_style s t}"}] prints a well-typed term @{text
-  t} after applying a style @{text s} to it (see below).
-
-  \item [@{text "@{text s}"}] prints uninterpreted source text @{text
-  s}.  This is particularly useful to print portions of text according
-  to the Isabelle {\LaTeX} output style, without demanding
-  well-formedness (e.g.\ small pieces of terms that should not be
-  parsed or type-checked yet).
-
-  \item [@{text "@{goals}"}] prints the current \emph{dynamic} goal
-  state.  This is mainly for support of tactic-emulation scripts
-  within Isar --- presentation of goal states does not conform to
-  actual human-readable proof documents.
-
-  Please do not include goal states into document output unless you
-  really know what you are doing!
-  
-  \item [@{text "@{subgoals}"}] is similar to @{text "@{goals}"}, but
-  does not print the main goal.
-  
-  \item [@{text "@{prf a\<^sub>1 \<dots> a\<^sub>n}"}] prints the (compact)
-  proof terms corresponding to the theorems @{text "a\<^sub>1 \<dots>
-  a\<^sub>n"}. Note that this requires proof terms to be switched on
-  for the current object logic (see the ``Proof terms'' section of the
-  Isabelle reference manual for information on how to do this).
-  
-  \item [@{text "@{full_prf a\<^sub>1 \<dots> a\<^sub>n}"}] is like @{text
-  "@{prf a\<^sub>1 \<dots> a\<^sub>n}"}, but displays the full proof terms,
-  i.e.\ also displays information omitted in the compact proof term,
-  which is denoted by ``@{text _}'' placeholders there.
-  
-  \item [@{text "@{ML s}"}, @{text "@{ML_type s}"}, and @{text
-  "@{ML_struct s}"}] check text @{text s} as ML value, type, and
-  structure, respectively.  The source is displayed verbatim.
-
-  \end{descr}
-
-  \medskip The following standard styles for use with @{text
-  thm_style} and @{text term_style} are available:
-
-  \begin{descr}
-  
-  \item [@{text lhs}] extracts the first argument of any application
-  form with at least two arguments -- typically meta-level or
-  object-level equality, or any other binary relation.
-  
-  \item [@{text rhs}] is like @{text lhs}, but extracts the second
-  argument.
-  
-  \item [@{text "concl"}] extracts the conclusion @{text C} from a rule
-  in Horn-clause normal form @{text "A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> C"}.
-  
-  \item [@{text "prem1"}, \dots, @{text "prem9"}] extract premise
-  number @{text "1, \<dots>, 9"}, respectively, from a rule in
-  Horn-clause normal form @{text "A\<^sub>1 \<Longrightarrow> \<dots> A\<^sub>n \<Longrightarrow> C"}.
-
-  \end{descr}
-
-  \medskip
-  The following options are available to tune the output.  Note that most of
-  these coincide with ML flags of the same names (see also \cite{isabelle-ref}).
-
-  \begin{descr}
-
-  \item[@{text "show_types = bool"} and @{text "show_sorts = bool"}]
-  control printing of explicit type and sort constraints.
-
-  \item[@{text "show_structs = bool"}] controls printing of implicit
-  structures.
-
-  \item[@{text "long_names = bool"}] forces names of types and
-  constants etc.\ to be printed in their fully qualified internal
-  form.
-
-  \item[@{text "short_names = bool"}] forces names of types and
-  constants etc.\ to be printed unqualified.  Note that internalizing
-  the output again in the current context may well yield a different
-  result.
-
-  \item[@{text "unique_names = bool"}] determines whether the printed
-  version of qualified names should be made sufficiently long to avoid
-  overlap with names declared further back.  Set to @{text false} for
-  more concise output.
-
-  \item[@{text "eta_contract = bool"}] prints terms in @{text
-  \<eta>}-contracted form.
-
-  \item[@{text "display = bool"}] indicates if the text is to be
-  output as multi-line ``display material'', rather than a small piece
-  of text without line breaks (which is the default).
-
-  \item[@{text "break = bool"}] controls line breaks in non-display
-  material.
-
-  \item[@{text "quotes = bool"}] indicates if the output should be
-  enclosed in double quotes.
-
-  \item[@{text "mode = name"}] adds @{text name} to the print mode to
-  be used for presentation (see also \cite{isabelle-ref}).  Note that
-  the standard setup for {\LaTeX} output is already present by
-  default, including the modes @{text latex} and @{text xsymbols}.
-
-  \item[@{text "margin = nat"} and @{text "indent = nat"}] change the
-  margin or indentation for pretty printing of display material.
-
-  \item[@{text "source = bool"}] prints the source text of the
-  antiquotation arguments, rather than the actual value.  Note that
-  this does not affect well-formedness checks of @{antiquotation
-  "thm"}, @{antiquotation "term"}, etc. (only the @{antiquotation
-  "text"} antiquotation admits arbitrary output).
-
-  \item[@{text "goals_limit = nat"}] determines the maximum number of
-  goals to be printed.
-
-  \item[@{text "locale = name"}] specifies an alternative locale
-  context used for evaluating and printing the subsequent argument.
-
-  \end{descr}
-
-  For boolean flags, ``@{text "name = true"}'' may be abbreviated as
-  ``@{text name}''.  All of the above flags are disabled by default,
-  unless changed from ML.
-
-  \medskip Note that antiquotations do not only spare the author from
-  tedious typing of logical entities, but also achieve some degree of
-  consistency-checking of informal explanations with formal
-  developments: well-formedness of terms and types with respect to the
-  current theory or proof context is ensured here.
-*}
-
-
-subsection {* Tagged commands \label{sec:tags} *}
-
-text {*
-  Each Isabelle/Isar command may be decorated by presentation tags:
-
-  \indexouternonterm{tags}
-  \begin{rail}
-    tags: ( tag * )
-    ;
-    tag: '\%' (ident | string)
-  \end{rail}
-
-  The tags @{text "theory"}, @{text "proof"}, @{text "ML"} are already
-  pre-declared for certain classes of commands:
-
- \medskip
-
-  \begin{tabular}{ll}
-    @{text "theory"} & theory begin/end \\
-    @{text "proof"} & all proof commands \\
-    @{text "ML"} & all commands involving ML code \\
-  \end{tabular}
-
-  \medskip The Isabelle document preparation system (see also
-  \cite{isabelle-sys}) allows tagged command regions to be presented
-  specifically, e.g.\ to fold proof texts, or drop parts of the text
-  completely.
-
-  For example ``@{command "by"}~@{text "%invisible auto"}'' would
-  cause that piece of proof to be treated as @{text invisible} instead
-  of @{text "proof"} (the default), which may be either shown or hidden
-  depending on the document setup.  In contrast, ``@{command
-  "by"}~@{text "%visible auto"}'' would force this text to be shown
-  invariably.
-
-  Explicit tag specifications within a proof apply to all subsequent
-  commands of the same level of nesting.  For example, ``@{command
-  "proof"}~@{text "%visible \<dots>"}~@{command "qed"}'' would force the
-  whole sub-proof to be typeset as @{text visible} (unless some of its
-  parts are tagged differently).
-*}
-
 end
--- a/doc-src/IsarRef/Thy/Proof.thy	Mon Jun 02 22:50:21 2008 +0200
+++ b/doc-src/IsarRef/Thy/Proof.thy	Mon Jun 02 22:50:23 2008 +0200
@@ -1036,4 +1036,367 @@
   \end{descr}
 *}
 
+section {* Proof by cases and induction \label{sec:cases-induct} *}
+
+subsection {* Rule contexts *}
+
+text {*
+  \begin{matharray}{rcl}
+    @{command_def "case"} & : & \isartrans{proof(state)}{proof(state)} \\
+    @{command_def "print_cases"}@{text "\<^sup>*"} & : & \isarkeep{proof} \\
+    @{attribute_def case_names} & : & \isaratt \\
+    @{attribute_def case_conclusion} & : & \isaratt \\
+    @{attribute_def params} & : & \isaratt \\
+    @{attribute_def consumes} & : & \isaratt \\
+  \end{matharray}
+
+  The puristic way to build up Isar proof contexts is by explicit
+  language elements like @{command "fix"}, @{command "assume"},
+  @{command "let"} (see \secref{sec:proof-context}).  This is adequate
+  for plain natural deduction, but easily becomes unwieldy in concrete
+  verification tasks, which typically involve big induction rules with
+  several cases.
+
+  The @{command "case"} command provides a shorthand to refer to a
+  local context symbolically: certain proof methods provide an
+  environment of named ``cases'' of the form @{text "c: x\<^sub>1, \<dots>,
+  x\<^sub>m, \<phi>\<^sub>1, \<dots>, \<phi>\<^sub>n"}; the effect of ``@{command
+  "case"}~@{text c}'' is then equivalent to ``@{command "fix"}~@{text
+  "x\<^sub>1 \<dots> x\<^sub>m"}~@{command "assume"}~@{text "c: \<phi>\<^sub>1 \<dots>
+  \<phi>\<^sub>n"}''.  Term bindings may be covered as well, notably
+  @{variable ?case} for the main conclusion.
+
+  By default, the ``terminology'' @{text "x\<^sub>1, \<dots>, x\<^sub>m"} of
+  a case value is marked as hidden, i.e.\ there is no way to refer to
+  such parameters in the subsequent proof text.  After all, original
+  rule parameters stem from somewhere outside of the current proof
+  text.  By using the explicit form ``@{command "case"}~@{text "(c
+  y\<^sub>1 \<dots> y\<^sub>m)"}'' instead, the proof author is able to
+  choose local names that fit nicely into the current context.
+
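+  As a hypothetical illustration (a sketch, assuming Isabelle/HOL
+  lists), the shorthand provided by @{command "case"} might be used
+  like this:

```isabelle
lemma "xs = [] \<or> (\<exists>y ys. xs = y # ys)"
proof (cases xs)
  case Nil            (* equivalent to: assume Nil: "xs = []" *)
  then show ?thesis by simp
next
  case (Cons y ys)    (* fixes y ys, assumes Cons: "xs = y # ys" *)
  then show ?thesis by auto
qed
```

  Here the explicit form ``@{command "case"}~@{text "(Cons y ys)"}''
  chooses the local parameter names @{text y} and @{text ys}.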
+  \medskip It is important to note that proper use of @{command
+  "case"} does not provide means to peek at the current goal state,
+  which is not directly observable in Isar!  Nonetheless, goal
+  refinement commands do provide named cases @{text "goal\<^sub>i"}
+  for each subgoal @{text "i = 1, \<dots>, n"} of the resulting goal state.
+  Using this extra feature requires great care, because some bits of
+  the internal tactical machinery intrude into the proof text.  In
+  particular, parameter names stemming from the left-overs of automated
+  reasoning tools are usually quite unpredictable.
+
+  Under normal circumstances, the text of cases emerges from standard
+  elimination or induction rules, which in turn are derived from
+  previous theory specifications in a canonical way (say from
+  @{command "inductive"} definitions).
+
+  \medskip Proper cases are only available if both the proof method
+  and the rules involved support this.  By using appropriate
+  attributes, case names, conclusions, and parameters may also be
+  declared by hand.  Thus variant versions of rules that have been
+  derived manually become ready to use in advanced case analysis
+  later.
+
+  \begin{rail}
+    'case' (caseref | '(' caseref ((name | underscore) +) ')')
+    ;
+    caseref: nameref attributes?
+    ;
+
+    'case\_names' (name +)
+    ;
+    'case\_conclusion' name (name *)
+    ;
+    'params' ((name *) + 'and')
+    ;
+    'consumes' nat?
+    ;
+  \end{rail}
+
+  \begin{descr}
+  
+  \item [@{command "case"}~@{text "(c x\<^sub>1 \<dots> x\<^sub>m)"}]
+  invokes a named local context @{text "c: x\<^sub>1, \<dots>, x\<^sub>m,
+  \<phi>\<^sub>1, \<dots>, \<phi>\<^sub>n"}, as provided by an appropriate
+  proof method (such as @{method_ref cases} and @{method_ref induct}).
+  The command ``@{command "case"}~@{text "(c x\<^sub>1 \<dots>
+  x\<^sub>m)"}'' abbreviates ``@{command "fix"}~@{text "x\<^sub>1 \<dots>
+  x\<^sub>m"}~@{command "assume"}~@{text "c: \<phi>\<^sub>1 \<dots>
+  \<phi>\<^sub>n"}''.
+
+  \item [@{command "print_cases"}] prints all local contexts of the
+  current state, using Isar proof language notation.
+  
+  \item [@{attribute case_names}~@{text "c\<^sub>1 \<dots> c\<^sub>k"}]
+  declares names for the local contexts of premises of a theorem;
+  @{text "c\<^sub>1, \<dots>, c\<^sub>k"} refers to the \emph{suffix} of the
+  list of premises.
+  
+  \item [@{attribute case_conclusion}~@{text "c d\<^sub>1 \<dots>
+  d\<^sub>k"}] declares names for the conclusions of a named premise
+  @{text c}; here @{text "d\<^sub>1, \<dots>, d\<^sub>k"} refers to the
+  prefix of arguments of a logical formula built by nesting a binary
+  connective (e.g.\ @{text "\<or>"}).
+  
+  Note that proof methods such as @{method induct} and @{method
+  coinduct} already provide a default name for the conclusion as a
+  whole.  The need to name subformulas only arises with cases that
+  split into several sub-cases, as in common co-induction rules.
+
+  \item [@{attribute params}~@{text "p\<^sub>1 \<dots> p\<^sub>m \<AND> \<dots>
+  q\<^sub>1 \<dots> q\<^sub>n"}] renames the innermost parameters of
+  premises @{text "1, \<dots>, n"} of some theorem.  An empty list of names
+  may be given to skip positions, leaving the present parameters
+  unchanged.
+  
+  Note that the default usage of case rules does \emph{not} directly
+  expose parameters to the proof context.
+  
+  \item [@{attribute consumes}~@{text n}] declares the number of
+  ``major premises'' of a rule, i.e.\ the number of facts to be
+  consumed when it is applied by an appropriate proof method.  The
+  default value of @{attribute consumes} is @{text "n = 1"}, which is
+  appropriate for the usual kind of cases and induction rules for
+  inductive sets (cf.\ \secref{sec:hol-inductive}).  Rules without any
+  @{attribute consumes} declaration given are treated as if
+  @{attribute consumes}~@{text 0} had been specified.
+  
+  Note that explicit @{attribute consumes} declarations are only
+  rarely needed; this is already taken care of automatically by the
+  higher-level @{attribute cases}, @{attribute induct}, and
+  @{attribute coinduct} declarations.
+
+  \end{descr}
+*}
+
+
+subsection {* Proof methods *}
+
+text {*
+  \begin{matharray}{rcl}
+    @{method_def cases} & : & \isarmeth \\
+    @{method_def induct} & : & \isarmeth \\
+    @{method_def coinduct} & : & \isarmeth \\
+  \end{matharray}
+
+  The @{method cases}, @{method induct}, and @{method coinduct}
+  methods provide a uniform interface to common proof techniques over
+  datatypes, inductive predicates (or sets), recursive functions etc.
+  The corresponding rules may be specified and instantiated in a
+  casual manner.  Furthermore, these methods provide named local
+  contexts that may be invoked via the @{command "case"} proof command
+  within the subsequent proof text.  This accommodates compact proof
+  texts even when reasoning about large specifications.
+
+  The @{method induct} method also provides some additional
+  infrastructure in order to be applicable to structured statements
+  (either using explicit meta-level connectives, or including facts
+  and parameters separately).  This avoids cumbersome encoding of
+  ``strengthened'' inductive statements within the object-logic.
+
+  \begin{rail}
+    'cases' (insts * 'and') rule?
+    ;
+    'induct' (definsts * 'and') \\ arbitrary? taking? rule?
+    ;
+    'coinduct' insts taking rule?
+    ;
+
+    rule: ('type' | 'pred' | 'set') ':' (nameref +) | 'rule' ':' (thmref +)
+    ;
+    definst: name ('==' | equiv) term | inst
+    ;
+    definsts: ( definst *)
+    ;
+    arbitrary: 'arbitrary' ':' ((term *) + 'and')
+    ;
+    taking: 'taking' ':' insts
+    ;
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{method cases}~@{text "insts R"}] applies method @{method
+  rule} with an appropriate case distinction theorem, instantiated to
+  the subjects @{text insts}.  Symbolic case names are bound according
+  to the rule's local contexts.
+
+  The rule is determined as follows, according to the facts and
+  arguments passed to the @{method cases} method:
+
+  \medskip
+  \begin{tabular}{llll}
+    facts           &                 & arguments   & rule \\\hline
+                    & @{method cases} &             & classical case split \\
+                    & @{method cases} & @{text t}   & datatype exhaustion (type of @{text t}) \\
+    @{text "\<turnstile> A t"} & @{method cases} & @{text "\<dots>"} & inductive predicate/set elimination (of @{text A}) \\
+    @{text "\<dots>"}     & @{method cases} & @{text "\<dots> rule: R"} & explicit rule @{text R} \\
+  \end{tabular}
+  \medskip
+
+  Several instantiations may be given, referring to the \emph{suffix}
+  of premises of the case rule; within each premise, the \emph{prefix}
+  of variables is instantiated.  In most situations, only a single
+  term needs to be specified; this refers to the first variable of the
+  last premise (it is usually the same for all cases).
+
+  \item [@{method induct}~@{text "insts R"}] is analogous to the
+  @{method cases} method, but refers to induction rules, which are
+  determined as follows:
+
+  \medskip
+  \begin{tabular}{llll}
+    facts           &                  & arguments            & rule \\\hline
+                    & @{method induct} & @{text "P x"}        & datatype induction (type of @{text x}) \\
+    @{text "\<turnstile> A x"} & @{method induct} & @{text "\<dots>"}          & predicate/set induction (of @{text A}) \\
+    @{text "\<dots>"}     & @{method induct} & @{text "\<dots> rule: R"} & explicit rule @{text R} \\
+  \end{tabular}
+  \medskip
+  
+  Several instantiations may be given, each referring to some part of
+  a mutual inductive definition or datatype --- only related partial
+  induction rules may be used together, though.  Any of the lists of
+  terms @{text "P, x, \<dots>"} refers to the \emph{suffix} of variables
+  present in the induction rule.  This enables the writer to specify
+  only induction variables, or both predicates and variables, for
+  example.
+  
+  Instantiations may be definitional: equations @{text "x \<equiv> t"}
+  introduce local definitions, which are inserted into the claim and
+  discharged after applying the induction rule.  Equalities reappear
+  in the inductive cases, but have been transformed according to the
+  induction principle being involved here.  In order to achieve
+  practically useful induction hypotheses, some variables occurring in
+  @{text t} need to be fixed (see below).
+  
+  The optional ``@{text "arbitrary: x\<^sub>1 \<dots> x\<^sub>m"}''
+  specification generalizes variables @{text "x\<^sub>1, \<dots>,
+  x\<^sub>m"} of the original goal before applying induction.  Thus
+  induction hypotheses may become sufficiently general to get the
+  proof through.  Together with definitional instantiations, one may
+  effectively perform induction over expressions of a certain
+  structure.
+  
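+  For example (a sketch, assuming a hypothetical tail-recursive
+  @{text itrev} over Isabelle/HOL lists), generalizing the second
+  argument via @{text "arbitrary:"} is essential for the induction
+  hypothesis to be strong enough:

```isabelle
primrec itrev :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
  "itrev [] ys = ys"
| "itrev (x # xs) ys = itrev xs (x # ys)"

(* without "arbitrary: ys" the Cons case would be unprovable,
   since the induction hypothesis would fix ys *)
lemma "itrev xs ys = rev xs @ ys"
  by (induct xs arbitrary: ys) simp_all
```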
+  The optional ``@{text "taking: t\<^sub>1 \<dots> t\<^sub>n"}''
+  specification provides additional instantiations of a prefix of
+  pending variables in the rule.  Such schematic induction rules
+  rarely occur in practice, though.
+
+  \item [@{method coinduct}~@{text "inst R"}] is analogous to the
+  @{method induct} method, but refers to coinduction rules, which are
+  determined as follows:
+
+  \medskip
+  \begin{tabular}{llll}
+    goal          &                    & arguments & rule \\\hline
+                  & @{method coinduct} & @{text x} & type coinduction (type of @{text x}) \\
+    @{text "A x"} & @{method coinduct} & @{text "\<dots>"} & predicate/set coinduction (of @{text A}) \\
+    @{text "\<dots>"}   & @{method coinduct} & @{text "\<dots> rule: R"} & explicit rule @{text R} \\
+  \end{tabular}
+  
+  Coinduction is the dual of induction.  Induction essentially
+  eliminates @{text "A x"} towards a generic result @{text "P x"},
+  while coinduction introduces @{text "A x"} starting with @{text "B
+  x"}, for a suitable ``bisimulation'' @{text B}.  The cases of a
+  coinduct rule are typically named after the predicates or sets being
+  covered, while the conclusions consist of several alternatives being
+  named after the individual destructor patterns.
+  
+  The given instantiation refers to the \emph{suffix} of variables
+  occurring in the rule's major premise, or conclusion if unavailable.
+  An additional ``@{text "taking: t\<^sub>1 \<dots> t\<^sub>n"}''
+  specification may be required in order to specify the bisimulation
+  to be used in the coinduction step.
+
+  \end{descr}
+
+  Above methods produce named local contexts, as determined by the
+  instantiated rule as given in the text.  Beyond that, the @{method
+  induct} and @{method coinduct} methods guess further instantiations
+  from the goal specification itself.  Any persisting unresolved
+  schematic variables of the resulting rule will render the
+  corresponding case invalid.  The term binding @{variable ?case} for
+  the conclusion will be provided with each case, as long as that
+  term is fully specified.
+
+  The @{command "print_cases"} command prints all named cases present
+  in the current proof state.
+
+  \medskip Despite the additional infrastructure, both @{method cases}
+  and @{method coinduct} merely apply a certain rule, after
+  instantiation, while conforming to the usual way of monotonic
+  natural deduction: the context of a structured statement @{text
+  "\<And>x\<^sub>1 \<dots> x\<^sub>m. \<phi>\<^sub>1 \<Longrightarrow> \<dots> \<phi>\<^sub>n \<Longrightarrow> \<dots>"}
+  reappears unchanged after the case split.
+
+  The @{method induct} method is fundamentally different in this
+  respect: the meta-level structure is passed through the
+  ``recursive'' course involved in the induction.  Thus the original
+  statement is basically replaced by separate copies, corresponding to
+  the induction hypotheses and conclusion; the original goal context
+  is no longer available.  Hence local assumptions, fixed parameters
+  and definitions effectively participate in the inductive rephrasing
+  of the original statement.
+
+  In induction proofs, local assumptions introduced by cases are split
+  into two different kinds: @{text hyps} stemming from the rule and
+  @{text prems} from the goal statement.  This is reflected in the
+  extracted cases accordingly, so invoking ``@{command "case"}~@{text
+  c}'' will provide separate facts @{text c.hyps} and @{text c.prems},
+  as well as fact @{text c} to hold the all-inclusive list.
+
+  \medskip Facts presented to either method are consumed according to
+  the number of ``major premises'' of the rule involved, which is
+  usually 0 for plain cases and induction rules of datatypes etc.\ and
+  1 for rules of inductive predicates or sets and the like.  The
+  remaining facts are inserted into the goal verbatim before the
+  actual @{text cases}, @{text induct}, or @{text coinduct} rule is
+  applied.
+*}
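+  A sketch of how these split assumptions might be accessed within an
+  induction (hypothetical example over Isabelle/HOL lists):

```isabelle
lemma "distinct xs \<Longrightarrow> distinct (rev xs)"
proof (induct xs)
  case Nil
  then show ?case by simp
next
  case (Cons x xs)
  (* Cons.hyps : induction hypothesis, stemming from the rule *)
  (* Cons.prems: "distinct (x # xs)", stemming from the goal statement *)
  (* Cons      : the all-inclusive list of both *)
  then show ?case by auto
qed
```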
+
+
+subsection {* Declaring rules *}
+
+text {*
+  \begin{matharray}{rcl}
+    @{command_def "print_induct_rules"}@{text "\<^sup>*"} & : & \isarkeep{theory~|~proof} \\
+    @{attribute_def cases} & : & \isaratt \\
+    @{attribute_def induct} & : & \isaratt \\
+    @{attribute_def coinduct} & : & \isaratt \\
+  \end{matharray}
+
+  \begin{rail}
+    'cases' spec
+    ;
+    'induct' spec
+    ;
+    'coinduct' spec
+    ;
+
+    spec: ('type' | 'pred' | 'set') ':' nameref
+    ;
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{command "print_induct_rules"}] prints cases and induct
+  rules for predicates (or sets) and types of the current context.
+  
+  \item [@{attribute cases}, @{attribute induct}, and @{attribute
+  coinduct}] (as attributes) augment the corresponding context of
+  rules for reasoning about (co)inductive predicates (or sets) and
+  types, using the corresponding methods of the same name.  Certain
+  definitional packages of object-logics usually declare emerging
+  cases and induction rules as expected, so users rarely need to
+  intervene.
+  
+  Manual rule declarations usually refer to the @{attribute
+  case_names} and @{attribute params} attributes to adjust names of
+  cases and parameters of a rule; the @{attribute consumes}
+  declaration is taken care of automatically: @{attribute
+  consumes}~@{text 0} is specified for ``type'' rules and @{attribute
+  consumes}~@{text 1} for ``predicate'' / ``set'' rules.
+
+  \end{descr}
+*}
+
 end
--- a/doc-src/IsarRef/Thy/Spec.thy	Mon Jun 02 22:50:21 2008 +0200
+++ b/doc-src/IsarRef/Thy/Spec.thy	Mon Jun 02 22:50:23 2008 +0200
@@ -12,22 +12,24 @@
   \begin{matharray}{rcl}
     @{command_def "header"} & : & \isarkeep{toplevel} \\
     @{command_def "theory"} & : & \isartrans{toplevel}{theory} \\
-    @{command_def "end"} & : & \isartrans{theory}{toplevel} \\
+    @{command_def (global) "end"} & : & \isartrans{theory}{toplevel} \\
   \end{matharray}
 
-  Isabelle/Isar theories are defined via theory, which contain both
-  specifications and proofs; occasionally definitional mechanisms also
-  require some explicit proof.
+  Isabelle/Isar theories are defined via theory files, which contain
+  both specifications and proofs; occasionally definitional mechanisms
+  also require some explicit proof.  The theory body may be
+  sub-structured by means of \emph{local theory} target mechanisms,
+  notably @{command "locale"} and @{command "class"}.
 
   The first ``real'' command of any theory has to be @{command
   "theory"}, which starts a new theory based on the merge of existing
   ones.  Just preceding the @{command "theory"} keyword, there may be
   an optional @{command "header"} declaration, which is relevant to
   document preparation only; it acts very much like a special
-  pre-theory markup command (cf.\ \secref{sec:markup-thy} and
-  \secref{sec:markup-thy}).  The @{command "end"} command concludes a
-  theory development; it has to be the very last command of any theory
-  file loaded in batch-mode.
+  pre-theory markup command (cf.\ \secref{sec:markup}).  The
+  @{command (global) "end"} command
+  concludes a theory development; it has to be the very last command
+  of any theory file loaded in batch-mode.
 
   \begin{rail}
     'header' text
@@ -44,8 +46,7 @@
   markup just preceding the formal beginning of a theory.  In actual
   document preparation the corresponding {\LaTeX} macro @{verbatim
   "\\isamarkupheader"} may be redefined to produce chapter or section
-  headings.  See also \secref{sec:markup-thy} and
-  \secref{sec:markup-prf} for further markup commands.
+  headings.  See also \secref{sec:markup} for further markup commands.
   
   \item [@{command "theory"}~@{text "A \<IMPORTS> B\<^sub>1 \<dots>
   B\<^sub>n \<BEGIN>"}] starts a new theory @{text A} based on the
@@ -65,10 +66,1269 @@
   text (typically via explicit @{command_ref "use"} in the body text,
   see \secref{sec:ML}).
   
-  \item [@{command "end"}] concludes the current theory definition or
-  context switch.
+  \item [@{command (global) "end"}] concludes the current theory
+  definition.
+
+  \end{descr}
+*}
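+  A minimal theory file might look like this (hypothetical names):

```isabelle
header {* An example theory *}

theory A
imports Main
begin

(* specifications and proofs go here *)

end
```

  The @{command "header"} declaration precedes @{command "theory"},
  and @{command (global) "end"} is the last command of the file.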
+
+
+section {* Local theory targets \label{sec:target} *}
+
+text {*
+  A local theory target is a context managed separately within the
+  enclosing theory.  Contexts may introduce parameters (fixed
+  variables) and assumptions (hypotheses).  Definitions and theorems
+  depending on the context may be added incrementally later on.  Named
+  contexts refer to locales (cf.\ \secref{sec:locale}) or type classes
+  (cf.\ \secref{sec:class}); the name ``@{text "-"}'' signifies the
+  global theory context.
+
+  \begin{matharray}{rcll}
+    @{command_def "context"} & : & \isartrans{theory}{local{\dsh}theory} \\
+    @{command_def (local) "end"} & : & \isartrans{local{\dsh}theory}{theory} \\
+  \end{matharray}
+
+  \indexouternonterm{target}
+  \begin{rail}
+    'context' name 'begin'
+    ;
+
+    target: '(' 'in' name ')'
+    ;
+  \end{rail}
+
+  \begin{descr}
+  
+  \item [@{command "context"}~@{text "c \<BEGIN>"}] recommences an
+  existing locale or class context @{text c}.  Note that locale and
+  class definitions allow the @{keyword_ref "begin"} keyword to be
+  included as well, in order to continue the local theory immediately
+  after the initial specification.
+  
+  \item [@{command (local) "end"}] concludes the current local theory
+  and continues the enclosing global theory.  Note that a global
+  @{command (global) "end"} has a different meaning: it concludes the
+  theory itself (\secref{sec:begin-thy}).
+  
+  \item [@{text "(\<IN> c)"}] given after any local theory command
+  specifies an immediate target, e.g.\ ``@{command
+  "definition"}~@{text "(\<IN> c) \<dots>"}'' or ``@{command
+  "theorem"}~@{text "(\<IN> c) \<dots>"}''.  This works both in a local or
+  global theory context; the current target context will be suspended
+  for this command only.  Note that ``@{text "(\<IN> -)"}'' will
+  always produce a global result independently of the current target
+  context.
+
+  \end{descr}
+
+  The exact meaning of results produced within a local theory context
+  depends on the underlying target infrastructure (locale, type class
+  etc.).  The general idea is as follows, considering a context named
+  @{text c} with parameter @{text x} and assumption @{text "A[x]"}.
+  
+  Definitions are exported by introducing a global version with
+  additional arguments; a syntactic abbreviation links the long form
+  with the abstract version of the target context.  For example,
+  @{text "a \<equiv> t[x]"} becomes @{text "c.a ?x \<equiv> t[?x]"} at the theory
+  level (for arbitrary @{text "?x"}), together with a local
+  abbreviation @{text "c \<equiv> c.a x"} in the target context (for the
+  fixed parameter @{text x}).
+
+  Theorems are exported by discharging the assumptions and
+  generalizing the parameters of the context.  For example, @{text "a:
+  B[x]"} becomes @{text "c.a: A[?x] \<Longrightarrow> B[?x]"}, again for arbitrary
+  @{text "?x"}.
+*}
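+  The export scheme can be sketched with a hypothetical locale (names
+  and proofs are illustrative only):

```isabelle
locale semi =
  fixes prod :: "'a \<Rightarrow> 'a \<Rightarrow> 'a"  (infixl "\<odot>" 70)
  assumes assoc: "(x \<odot> y) \<odot> z = x \<odot> (y \<odot> z)"
begin

definition sq :: "'a \<Rightarrow> 'a" where "sq x = x \<odot> x"

lemma sq_unfold: "sq (sq x) = (x \<odot> x) \<odot> (x \<odot> x)"
  by (simp add: sq_def)

end

(* the exported theory-level versions are generalized over
   the locale parameter and guarded by its assumptions *)
thm semi.sq_def semi.sq_unfold
```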
+
+
+section {* Basic specification elements *}
+
+text {*
+  \begin{matharray}{rcll}
+    @{command_def "axiomatization"} & : & \isarkeep{local{\dsh}theory} & (axiomatic!)\\
+    @{command_def "definition"} & : & \isarkeep{local{\dsh}theory} \\
+    @{attribute_def "defn"} & : & \isaratt \\
+    @{command_def "abbreviation"} & : & \isarkeep{local{\dsh}theory} \\
+    @{command_def "print_abbrevs"}@{text "\<^sup>*"} & : & \isarkeep{theory~|~proof} \\
+    @{command_def "notation"} & : & \isarkeep{local{\dsh}theory} \\
+    @{command_def "no_notation"} & : & \isarkeep{local{\dsh}theory} \\
+  \end{matharray}
+
+  These specification mechanisms provide a slightly more abstract view
+  than the underlying primitives of @{command "consts"}, @{command
+  "defs"} (see \secref{sec:consts}), and @{command "axioms"} (see
+  \secref{sec:axms-thms}).  In particular, type-inference is commonly
+  available, and result names need not be given.
+
+  \begin{rail}
+    'axiomatization' target? fixes? ('where' specs)?
+    ;
+    'definition' target? (decl 'where')? thmdecl? prop
+    ;
+    'abbreviation' target? mode? (decl 'where')? prop
+    ;
+    ('notation' | 'no\_notation') target? mode? (nameref structmixfix + 'and')
+    ;
+
+    fixes: ((name ('::' type)? mixfix? | vars) + 'and')
+    ;
+    specs: (thmdecl? props + 'and')
+    ;
+    decl: name ('::' type)? mixfix?
+    ;
+  \end{rail}
+
+  \begin{descr}
+  
+  \item [@{command "axiomatization"}~@{text "c\<^sub>1 \<dots> c\<^sub>m
+  \<WHERE> \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"}] introduces several constants
+  simultaneously and states axiomatic properties for these.  The
+  constants are marked as being specified once and for all, which
+  prevents additional specifications from being issued later on.
+  
+  Note that axiomatic specifications are only appropriate when
+  declaring a new logical system.  Normal applications should only use
+  definitional mechanisms!
+
+  \item [@{command "definition"}~@{text "c \<WHERE> eq"}] produces an
+  internal definition @{text "c \<equiv> t"} according to the specification
+  given as @{text eq}, which is then turned into a proven fact.  The
+  given proposition may deviate from internal meta-level equality
+  according to the rewrite rules declared as @{attribute defn} by the
+  object-logic.  This usually covers object-level equality @{text "x =
+  y"} and equivalence @{text "A \<leftrightarrow> B"}.  End-users normally need not
+  change the @{attribute defn} setup.
+  
+  Definitions may be presented with explicit arguments on the LHS, as
+  well as additional conditions, e.g.\ @{text "f x y = t"} instead of
+  @{text "f \<equiv> \<lambda>x y. t"} and @{text "y \<noteq> 0 \<Longrightarrow> g x y = u"} instead of an
+  unrestricted @{text "g \<equiv> \<lambda>x y. u"}.
+  
+  \item [@{command "abbreviation"}~@{text "c \<WHERE> eq"}] introduces
+  a syntactic constant which is associated with a certain term
+  according to the meta-level equality @{text eq}.
+  
+  Abbreviations participate in the usual type-inference process, but
+  are expanded before the logic ever sees them.  Pretty printing of
+  terms involves higher-order rewriting with rules stemming from
+  reverted abbreviations.  This needs some care to avoid overlapping
+  or looping syntactic replacements!
+  
+  The optional @{text mode} specification restricts output to a
+  particular print mode; using ``@{text input}'' here achieves the
+  effect of one-way abbreviations.  The mode may also include an
+  ``@{keyword "output"}'' qualifier that affects the concrete syntax
+  declared for abbreviations, cf.\ @{command "syntax"} in
+  \secref{sec:syn-trans}.
+  
+  \item [@{command "print_abbrevs"}] prints all constant abbreviations
+  of the current context.
+  
+  \item [@{command "notation"}~@{text "c (mx)"}] associates mixfix
+  syntax with an existing constant or fixed variable.  This is a
+  robust interface to the underlying @{command "syntax"} primitive
+  (\secref{sec:syn-trans}).  Type declaration and internal syntactic
+  representation of the given entity is retrieved from the context.
+  
+  \item [@{command "no_notation"}] is similar to @{command
+  "notation"}, but removes the specified syntax annotation from the
+  present context.
+
+  \end{descr}
+
+  All of these specifications support local theory targets (cf.\
+  \secref{sec:target}).
+*}
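+  For instance (a hypothetical sketch in Isabelle/HOL):

```isabelle
definition double :: "nat \<Rightarrow> nat"
  where "double n = n + n"      (* yields the proven fact double_def *)

abbreviation sqr :: "nat \<Rightarrow> nat"
  where "sqr n \<equiv> n * n"        (* purely syntactic: expanded before
                                   the logic ever sees it *)
```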
+
+
+section {* Generic declarations *}
+
+text {*
+  Arbitrary operations on the background context may be wrapped up as
+  generic declaration elements.  Since the underlying concept of local
+  theories may be subject to later re-interpretation, there is an
+  additional dependency on a morphism that tells the difference of the
+  original declaration context wrt.\ the application context
+  encountered later on.  A fact declaration is an important special
+  case: it consists of a theorem which is applied to the context by
+  means of an attribute.
+
+  \begin{matharray}{rcl}
+    @{command_def "declaration"} & : & \isarkeep{local{\dsh}theory} \\
+    @{command_def "declare"} & : & \isarkeep{local{\dsh}theory} \\
+  \end{matharray}
+
+  \begin{rail}
+    'declaration' target? text
+    ;
+    'declare' target? (thmrefs + 'and')
+    ;
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{command "declaration"}~@{text d}] adds the declaration
+  function @{text d} of ML type @{ML_type declaration} to the current
+  local theory under construction.  In later application contexts, the
+  function is transformed according to the morphisms being involved in
+  the interpretation hierarchy.
+
+  \item [@{command "declare"}~@{text thms}] declares theorems to the
+  current local theory context.  No theorem binding is involved here,
+  unlike @{command "theorems"} or @{command "lemmas"} (cf.\
+  \secref{sec:axms-thms}), so @{command "declare"} only has the effect
+  of applying attributes as included in the theorem specification.
+
+  \end{descr}
+*}
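+  The difference might be illustrated like this (a sketch; the fact
+  names are assumed to exist in the background theory):

```isabelle
(* binds the new name my_simps to the given list of facts *)
lemmas my_simps = add_commute add_assoc

(* no binding involved: merely applies the [simp] attribute *)
declare my_simps [simp]
```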
+
+
+section {* Locales \label{sec:locale} *}
+
+text {*
+  Locales are named local contexts, consisting of a list of
+  declaration elements that are modeled after the Isar proof context
+  commands (cf.\ \secref{sec:proof-context}).
+*}
+
+
+subsection {* Locale specifications *}
+
+text {*
+  \begin{matharray}{rcl}
+    @{command_def "locale"} & : & \isartrans{theory}{local{\dsh}theory} \\
+    @{command_def "print_locale"}@{text "\<^sup>*"} & : & \isarkeep{theory~|~proof} \\
+    @{command_def "print_locales"}@{text "\<^sup>*"} & : & \isarkeep{theory~|~proof} \\
+    @{method_def intro_locales} & : & \isarmeth \\
+    @{method_def unfold_locales} & : & \isarmeth \\
+  \end{matharray}
+
+  \indexouternonterm{contextexpr}\indexouternonterm{contextelem}
+  \indexisarelem{fixes}\indexisarelem{constrains}\indexisarelem{assumes}
+  \indexisarelem{defines}\indexisarelem{notes}\indexisarelem{includes}
+  \begin{rail}
+    'locale' ('(open)')? name ('=' localeexpr)? 'begin'?
+    ;
+    'print\_locale' '!'? localeexpr
+    ;
+    localeexpr: ((contextexpr '+' (contextelem+)) | contextexpr | (contextelem+))
+    ;
+
+    contextexpr: nameref | '(' contextexpr ')' |
+    (contextexpr (name mixfix? +)) | (contextexpr + '+')
+    ;
+    contextelem: fixes | constrains | assumes | defines | notes
+    ;
+    fixes: 'fixes' ((name ('::' type)? structmixfix? | vars) + 'and')
+    ;
+    constrains: 'constrains' (name '::' type + 'and')
+    ;
+    assumes: 'assumes' (thmdecl? props + 'and')
+    ;
+    defines: 'defines' (thmdecl? prop proppat? + 'and')
+    ;
+    notes: 'notes' (thmdef? thmrefs + 'and')
+    ;
+    includes: 'includes' contextexpr
+    ;
+  \end{rail}
+
+  \begin{descr}
+  
+  \item [@{command "locale"}~@{text "loc = import + body"}] defines a
+  new locale @{text loc} as a context consisting of a certain view of
+  existing locales (@{text import}) plus some additional elements
+  (@{text body}).  Both @{text import} and @{text body} are optional;
+  the degenerate form @{command "locale"}~@{text loc} defines an empty
+  locale, which may still be useful to collect declarations of facts
+  later on.  Type-inference on locale expressions automatically takes
+  care of the most general typing that the combined context elements
+  may acquire.
+
+  The @{text import} is a structured context expression, consisting
+  of references to existing locales, renamed contexts, or
+  merged contexts.  Renaming uses positional notation: @{text "c
+  x\<^sub>1 \<dots> x\<^sub>n"} means that (a prefix of) the fixed
+  parameters of context @{text c} are named @{text "x\<^sub>1, \<dots>,
+  x\<^sub>n"}; a ``@{text _}'' (underscore) means to skip that
+  position.  Renaming by default deletes concrete syntax, but new
+  syntax may be specified with a mixfix annotation.  An exception to
+  this rule is the special syntax declared with ``@{text
+  "(\<STRUCTURE>)"}'' (see below), which is neither deleted nor can it
+  be changed.  Merging proceeds from left-to-right, suppressing any
+  duplicates stemming from different paths through the import
+  hierarchy.
+
+  The @{text body} consists of basic context elements; further context
+  expressions may be included as well.
+
+  \begin{descr}
+
+  \item [@{element "fixes"}~@{text "x :: \<tau> (mx)"}] declares a local
+  parameter of type @{text \<tau>} and mixfix annotation @{text mx} (both
+  are optional).  The special syntax declaration ``@{text
+  "(\<STRUCTURE>)"}'' means that @{text x} may be referenced
+  implicitly in this context.
+
+  \item [@{element "constrains"}~@{text "x :: \<tau>"}] introduces a type
+  constraint @{text \<tau>} on the local parameter @{text x}.
+
+  \item [@{element "assumes"}~@{text "a: \<phi>\<^sub>1 \<dots> \<phi>\<^sub>n"}]
+  introduces local premises, similar to @{command "assume"} within a
+  proof (cf.\ \secref{sec:proof-context}).
+
+  \item [@{element "defines"}~@{text "a: x \<equiv> t"}] defines a previously
+  declared parameter.  This is similar to @{command "def"} within a
+  proof (cf.\ \secref{sec:proof-context}), but @{element "defines"}
+  takes an equational proposition instead of a variable-term pair.  The
+  left-hand side of the equation may have additional arguments, e.g.\
+  ``@{element "defines"}~@{text "f x\<^sub>1 \<dots> x\<^sub>n \<equiv> t"}''.
+
+  \item [@{element "notes"}~@{text "a = b\<^sub>1 \<dots> b\<^sub>n"}]
+  reconsiders facts within a local context.  Most notably, this may
+  include arbitrary declarations in any attribute specifications
+  included here, e.g.\ a local @{attribute simp} rule.
+
+  \item [@{element "includes"}~@{text c}] copies the specified context
+  in a statically scoped manner.  Only available in the long goal
+  format of \secref{sec:goals}.
+
+  In contrast, the initial @{text import} specification of a locale
+  expression maintains a dynamic relation to the locales being
+  referenced (benefiting from any later fact declarations in the
+  obvious manner).
+
+  \end{descr}
+  
+  Note that ``@{text "(\<IS> p\<^sub>1 \<dots> p\<^sub>n)"}'' patterns given
+  in the syntax of @{element "assumes"} and @{element "defines"} above
+  are illegal in locale definitions.  In the long goal format of
+  \secref{sec:goals}, term bindings may be included as expected,
+  though.
+  
+  \medskip By default, locale specifications are ``closed up'' by
+  turning the given text into a predicate definition @{text
+  loc_axioms} and deriving the original assumptions as local lemmas
+  (modulo local definitions).  The predicate statement covers only the
+  newly specified assumptions, omitting the content of included locale
+  expressions.  The full cumulative view is only provided on export,
+  involving another predicate @{text loc} that refers to the complete
+  specification text.
+  
+  In any case, the predicate arguments are those locale parameters
+  that actually occur in the respective piece of text.  Also note that
+  these predicates operate at the meta-level in theory, but the locale
+  package attempts to internalize statements according to the
+  object-logic setup (e.g.\ replacing @{text \<And>} by @{text \<forall>}, and
+  @{text "\<Longrightarrow>"} by @{text "\<longrightarrow>"} in HOL; see also
+  \secref{sec:object-logic}).  Separate introduction rules @{text
+  loc_axioms.intro} and @{text loc.intro} are provided as well.
+  
+  The @{text "(open)"} option of a locale specification prevents both
+  the current @{text loc_axioms} and cumulative @{text loc} predicate
+  constructions.  Predicates are also omitted for empty specification
+  texts.
+
+  \item [@{command "print_locale"}~@{text "import + body"}] prints the
+  specified locale expression in a flattened form.  The notable
+  special case @{command "print_locale"}~@{text loc} just prints the
+  contents of the named locale, but keep in mind that type-inference
+  will normalize type variables according to the usual alphabetical
+  order.  The command omits @{element "notes"} elements by default.
+  Use @{command "print_locale"}@{text "!"} to get them included.
+
+  \item [@{command "print_locales"}] prints the names of all locales
+  of the current theory.
+
+  \item [@{method intro_locales} and @{method unfold_locales}]
+  repeatedly expand all introduction rules of locale predicates of the
+  theory.  While @{method intro_locales} only applies the @{text
+  loc.intro} introduction rules and therefore does not descend to
+  assumptions, @{method unfold_locales} is more aggressive and applies
+  @{text loc_axioms.intro} as well.  Both methods are aware of locale
+  specifications entailed by the context, both from target and
+  @{element "includes"} statements, and from interpretations (see
+  below).  New goals that are entailed by the current context are
+  discharged automatically.
+
+  \end{descr}
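+
+  For example, a small locale of semigroups might be specified as
+  follows (all names are hypothetical):
+
+{\footnotesize
+\begin{verbatim}
+ locale semi =
+   fixes prod :: "'a => 'a => 'a"  (infixl "**" 70)
+   assumes assoc: "(x ** y) ** z = x ** (y ** z)"
+\end{verbatim}
+}
+
+  Afterwards, @{command "print_locale"}~@{text semi} prints the
+  flattened contents of this specification.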
+*}
+
+
+subsection {* Interpretation of locales *}
+
+text {*
+  Locale expressions (more precisely, \emph{context expressions}) may
+  be instantiated, and the instantiated facts added to the current
+  context.  This requires a proof of the instantiated specification
+  and is called \emph{locale interpretation}.  Interpretation is
+  possible in theories and locales (command @{command
+  "interpretation"}) and also within a proof body (command @{command
+  "interpret"}).
+
+  \begin{matharray}{rcl}
+    @{command_def "interpretation"} & : & \isartrans{theory}{proof(prove)} \\
+    @{command_def "interpret"} & : & \isartrans{proof(state) ~|~ proof(chain)}{proof(prove)} \\
+    @{command_def "print_interps"}@{text "\<^sup>*"} & : &  \isarkeep{theory~|~proof} \\
+  \end{matharray}
+
+  \indexouternonterm{interp}
+  \begin{rail}
+    'interpretation' (interp | name ('<' | subseteq) contextexpr)
+    ;
+    'interpret' interp
+    ;
+    'print\_interps' '!'? name
+    ;
+    instantiation: ('[' (inst+) ']')?
+    ;
+    interp: thmdecl? \\ (contextexpr instantiation |
+      name instantiation 'where' (thmdecl? prop + 'and'))
+    ;
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{command "interpretation"}~@{text "expr insts \<WHERE> eqns"}]
+
+  The first form of @{command "interpretation"} interprets @{text
+  expr} in the theory.  The instantiation is given as a list of terms
+  @{text insts} and is positional.  All parameters must receive an
+  instantiation term --- with the exception of defined parameters.
+  These are, if omitted, derived from the defining equation and other
+  instantiations.  Use ``@{text _}'' to omit an instantiation term.
+
+  The command generates proof obligations for the instantiated
+  specifications (assumes and defines elements).  Once these are
+  discharged by the user, instantiated facts are added to the theory
+  in a post-processing phase.
+
+  Additional equations, which are unfolded in facts during
+  post-processing, may be given after the keyword @{keyword "where"}.
+  This is useful for interpreting concepts introduced through
+  definition specification elements.  The equations must be proved.
+  Note that if equations are present, the context expression is
+  restricted to a locale name.
+
+  The command is aware of interpretations already active in the
+  theory.  No proof obligations are generated for those, neither is
+  post-processing applied to their facts.  This avoids duplication of
+  interpreted facts, in particular.  Note that, in the case of a
+  locale with import, parts of the interpretation may already be
+  active.  The command will only generate proof obligations and
+  process facts for new parts.
+
+  The context expression may be preceded by a name and/or attributes.
+  These take effect in the post-processing of facts.  The name is used
+  to prefix fact names, for example to avoid accidental hiding of
+  other facts.  Attributes are applied after attributes of the
+  interpreted facts.
+
+  Adding facts to locales has the effect of adding interpreted facts
+  to the theory for all active interpretations as well.  That is,
+  interpretations dynamically participate in any facts added to
+  locales.
+
+  \item [@{command "interpretation"}~@{text "name \<subseteq> expr"}]
+
+  This form of the command interprets @{text expr} in the locale
+  @{text name}.  It requires a proof that the specification of @{text
+  name} implies the specification of @{text expr}.  As in the
+  localized version of the theorem command, the proof is in the
+  context of @{text name}.  After the proof obligation has been
+  discharged, the facts of @{text expr} become part of locale @{text
+  name} as \emph{derived} context elements and are available when the
+  context @{text name} is subsequently entered.  Note that, like
+  import, this is dynamic: facts added to a locale part of @{text
+  expr} after interpretation also become available in @{text name}.
+  Like facts of renamed context elements, facts obtained by
+  interpretation may be accessed by prefixing with the parameter
+  renaming (where the parameters are separated by ``@{text _}'').
+
+  Unlike interpretation in theories, instantiation is confined to the
+  renaming of parameters, which may be specified as part of the
+  context expression @{text expr}.  Using defined parameters in @{text
+  name} one may achieve an effect similar to instantiation, though.
+
+  Only specification fragments of @{text expr} that are not already
+  part of @{text name} (be it imported, derived or a derived fragment
+  of the import) are considered by interpretation.  This enables
+  circular interpretations.
+
+  If interpretations of @{text name} exist in the current theory, the
+  command adds interpretations for @{text expr} as well, with the same
+  prefix and attributes, although only for fragments of @{text expr}
+  that are not interpreted in the theory already.
+
+  \item [@{command "interpret"}~@{text "expr insts \<WHERE> eqns"}]
+  interprets @{text expr} in the proof context and is otherwise
+  similar to interpretation in theories.
+
+  \item [@{command "print_interps"}~@{text loc}] prints the
+  interpretations of a particular locale @{text loc} that are active
+  in the current context, either theory or proof context.  The
+  exclamation point argument triggers printing of \emph{witness}
+  theorems justifying interpretations.  These are normally omitted
+  from the output.
+  
+  \end{descr}
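+
+  For example, assuming a hypothetical locale @{text semi} with a
+  single binary parameter and an associativity assumption, an
+  interpretation at type @{text nat} might look like this (the fact
+  @{text add_assoc} is assumed to be available in the background
+  theory):
+
+{\footnotesize
+\begin{verbatim}
+ interpretation nat_add: semi ["op + :: nat => nat => nat"]
+   by unfold_locales (simp add: add_assoc)
+\end{verbatim}
+}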
+
+  \begin{warn}
+    Since attributes are applied to interpreted theorems,
+    interpretation may modify the context of common proof tools, e.g.\
+    the Simplifier or Classical Reasoner.  Since the behavior of such
+    automated reasoning tools is \emph{not} stable under
+    interpretation morphisms, manual declarations might have to be
+    issued.
+  \end{warn}
+
+  \begin{warn}
+    An interpretation in a theory may subsume previous
+    interpretations.  This happens if the same specification fragment
+    is interpreted twice and the instantiation of the second
+    interpretation is more general than the interpretation of the
+    first.  A warning is issued, since it is likely that these could
+    have been generalized in the first place.  The locale package does
+    not attempt to remove subsumed interpretations.
+  \end{warn}
+*}
+
+
+section {* Classes \label{sec:class} *}
+
+text {*
+  A class is a particular locale with \emph{exactly one} type variable
+  @{text \<alpha>}.  Beyond the underlying locale, a corresponding type class
+  is established which is interpreted logically as axiomatic type
+  class \cite{Wenzel:1997:TPHOL} whose logical content is given by
+  the assumptions of the locale.  Thus, classes provide the full
+  generality of locales combined with the convenience of type classes
+  (notably type-inference).  See \cite{isabelle-classes} for a short
+  tutorial.
+
+  \begin{matharray}{rcl}
+    @{command_def "class"} & : & \isartrans{theory}{local{\dsh}theory} \\
+    @{command_def "instantiation"} & : & \isartrans{theory}{local{\dsh}theory} \\
+    @{command_def "instance"} & : & \isartrans{local{\dsh}theory}{local{\dsh}theory} \\
+    @{command_def "subclass"} & : & \isartrans{local{\dsh}theory}{local{\dsh}theory} \\
+    @{command_def "print_classes"}@{text "\<^sup>*"} & : & \isarkeep{theory~|~proof} \\
+    @{method_def intro_classes} & : & \isarmeth \\
+  \end{matharray}
+
+  \begin{rail}
+    'class' name '=' ((superclassexpr '+' (contextelem+)) | superclassexpr | (contextelem+)) \\
+      'begin'?
+    ;
+    'instantiation' (nameref + 'and') '::' arity 'begin'
+    ;
+    'instance'
+    ;
+    'subclass' target? nameref
+    ;
+    'print\_classes'
+    ;
+
+    superclassexpr: nameref | (nameref '+' superclassexpr)
+    ;
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{command "class"}~@{text "c = superclasses + body"}] defines
+  a new class @{text c}, inheriting from @{text superclasses}.  This
+  introduces a locale @{text c} with import of all locales @{text
+  superclasses}.
+
+  Any @{element "fixes"} in @{text body} are lifted to the global
+  theory level (\emph{class operations} @{text "f\<^sub>1, \<dots>,
+  f\<^sub>n"} of class @{text c}), mapping the local type parameter
+  @{text \<alpha>} to a schematic type variable @{text "?\<alpha> :: c"}.
+
+  Likewise, @{element "assumes"} in @{text body} are also lifted,
+  mapping each local parameter @{text "f :: \<tau>[\<alpha>]"} to its
+  corresponding global constant @{text "f :: \<tau>[?\<alpha> :: c]"}.  The
+  corresponding introduction rule is provided as @{text
+  c_class_axioms.intro}.  This rule should be rarely needed directly
+  --- the @{method intro_classes} method takes care of the details of
+  class membership proofs.
+
+  \item [@{command "instantiation"}~@{text "t :: (s\<^sub>1, \<dots>,
+  s\<^sub>n) s \<BEGIN>"}] opens a theory target (cf.\
+  \secref{sec:target}) which allows specifying class operations @{text
+  "f\<^sub>1, \<dots>, f\<^sub>n"} corresponding to sort @{text s} at the
+  particular type instance @{text "(\<alpha>\<^sub>1 :: s\<^sub>1, \<dots>,
+  \<alpha>\<^sub>n :: s\<^sub>n) t"}.  A plain @{command "instance"} command
+  in the target body poses a goal stating these type arities.  The
+  target is concluded by an @{command_ref (local) "end"} command.
+
+  Note that a list of simultaneous type constructors may be given;
+  this corresponds nicely to mutually recursive type definitions, e.g.\
+  in Isabelle/HOL.
+
+  \item [@{command "instance"}] in an instantiation target body sets
+  up a goal stating the type arities claimed at the opening @{command
+  "instantiation"}.  The proof would usually proceed by @{method
+  intro_classes}, and then establish the characteristic theorems of
+  the type classes involved.  After finishing the proof, the
+  background theory will be augmented by the proven type arities.
+
+  \item [@{command "subclass"}~@{text c}] in a class context for class
+  @{text d} sets up a goal stating that class @{text c} is logically
+  contained in class @{text d}.  After finishing the proof, class
+  @{text d} is proven to be a subclass of @{text c} and the locale @{text
+  c} is interpreted into @{text d} simultaneously.
+
+  \item [@{command "print_classes"}] prints all classes in the current
+  theory.
+
+  \item [@{method intro_classes}] repeatedly expands all class
+  introduction rules of this theory.  Note that this method usually
+  need not be named explicitly, as it is already included in the
+  default proof step (e.g.\ of @{command "proof"}).  In particular,
+  instantiation of trivial (syntactic) classes may be performed by a
+  single ``@{command ".."}'' proof step.
 
   \end{descr}
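
  For example, a class of semigroups with a hypothetical operation
  @{text mult} might be declared and instantiated for type @{text
  nat} as follows (a sketch along the lines of the usual semigroup
  example; all names are hypothetical):

{\footnotesize
\begin{verbatim}
 class semigroup = type +
   fixes mult :: "'a => 'a => 'a"  (infixl "**" 70)
   assumes assoc: "(x ** y) ** z = x ** (y ** z)"

 instantiation nat :: semigroup
 begin

 definition mult_nat_def: "m ** n = m + n"

 instance proof
   fix m n q :: nat
   show "(m ** n) ** q = m ** (n ** q)"
     by (simp add: mult_nat_def)
 qed

 end
\end{verbatim}
}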
 *}
 
+
+subsection {* The class target *}
+
+text {*
+  %FIXME check
+
+  A named context may refer to a locale (cf.\ \secref{sec:target}).
+  If this locale is also a class @{text c}, apart from the common
+  locale target behaviour, the following happens.
+
+  \begin{itemize}
+
+  \item Local constant declarations @{text "g[\<alpha>]"} referring to the
+  local type parameter @{text \<alpha>} and local parameters @{text "f[\<alpha>]"}
+  are accompanied by theory-level constants @{text "g[?\<alpha> :: c]"}
+  referring to theory-level class operations @{text "f[?\<alpha> :: c]"}.
+
+  \item Local theorem bindings are lifted, as are assumptions.
+
+  \item Local syntax refers to local operations @{text "g[\<alpha>]"} and
+  global operations @{text "g[?\<alpha> :: c]"} uniformly.  Type inference
+  resolves ambiguities.  In rare cases, manual type annotations are
+  needed.
+  
+  \end{itemize}
+*}
+
+
+section {* Axiomatic type classes \label{sec:axclass} *}
+
+text {*
+  \begin{warn}
+  This describes the old interface to axiomatic type-classes in
+  Isabelle.  See \secref{sec:class} for a more recent higher-level
+  view on the same ideas.
+  \end{warn}
+
+  \begin{matharray}{rcl}
+    @{command_def "axclass"} & : & \isartrans{theory}{theory} \\
+    @{command_def "instance"} & : & \isartrans{theory}{proof(prove)} \\
+  \end{matharray}
+
+  Axiomatic type classes are Isabelle/Pure's primitive
+  \emph{definitional} interface to type classes.  For practical
+  applications, you should consider using classes
+  (cf.~\secref{sec:class}), which provide a higher-level interface.
+
+  \begin{rail}
+    'axclass' classdecl (axmdecl prop +)
+    ;
+    'instance' (nameref ('<' | subseteq) nameref | nameref '::' arity)
+    ;
+  \end{rail}
+
+  \begin{descr}
+  
+  \item [@{command "axclass"}~@{text "c \<subseteq> c\<^sub>1, \<dots>, c\<^sub>n
+  axms"}] defines an axiomatic type class as the intersection of
+  existing classes, with additional axioms holding.  Class axioms may
+  not contain more than one type variable.  The class axioms (with
+  implicit sort constraints added) are bound to the given names.
+  Furthermore a class introduction rule is generated (being bound as
+  @{text c_class.intro}); this rule is employed by method @{method
+  intro_classes} to support instantiation proofs of this class.
+  
+  The ``class axioms'' are stored as theorems according to the given
+  name specifications, adding @{text "c_class"} as name space prefix;
+  the same facts are also stored collectively as @{text
+  c_class.axioms}.
+  
+  \item [@{command "instance"}~@{text "c\<^sub>1 \<subseteq> c\<^sub>2"} and
+  @{command "instance"}~@{text "t :: (s\<^sub>1, \<dots>, s\<^sub>n) s"}]
+  set up a goal stating a class relation or type arity.  The proof
+  would usually proceed by @{method intro_classes}, and then establish
+  the characteristic theorems of the type classes involved.  After
+  finishing the proof, the theory will be augmented by a type
+  signature declaration corresponding to the resulting theorem.
+
+  \end{descr}
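+
+  For example, a class of partial orders might be postulated like
+  this, assuming a constant @{text "op <="} of class @{text ord} as
+  in Isabelle/HOL (all other names are hypothetical):
+
+{\footnotesize
+\begin{verbatim}
+ axclass porder < ord
+   porder_refl:  "x <= x"
+   porder_trans: "x <= y ==> y <= z ==> x <= z"
+
+ instance nat :: porder
+   by intro_classes auto
+\end{verbatim}
+}
+
+  The concluding proof is only a sketch; whether the goals emerging
+  after @{method intro_classes} are discharged automatically depends
+  on the facts available in the background theory.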
+*}
+
+
+section {* Unrestricted overloading *}
+
+text {*
+  Isabelle/Pure's definitional schemes support certain forms of
+  overloading (see \secref{sec:consts}).  On most occasions
+  overloading will be used in a Haskell-like fashion together with
+  type classes by means of @{command "instantiation"} (see
+  \secref{sec:class}).  Sometimes low-level overloading is desirable.
+  The @{command "overloading"} target provides a convenient view for
+  end-users.
+
+  \begin{matharray}{rcl}
+    @{command_def "overloading"} & : & \isartrans{theory}{local{\dsh}theory} \\
+  \end{matharray}
+
+  \begin{rail}
+    'overloading' \\
+    ( string ( '==' | equiv ) term ( '(' 'unchecked' ')' )? + ) 'begin'
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{command "overloading"}~@{text "x\<^sub>1 \<equiv> c\<^sub>1 ::
+  \<tau>\<^sub>1 \<AND> \<dots> x\<^sub>n \<equiv> c\<^sub>n :: \<tau>\<^sub>n \<BEGIN>"}]
+  opens a theory target (cf.\ \secref{sec:target}) which allows
+  specifying constants with overloaded definitions.  These are identified
+  by an explicitly given mapping from variable names @{text
+  "x\<^sub>i"} to constants @{text "c\<^sub>i"} at particular type
+  instances.  The definitions themselves are established using common
+  specification tools, using the names @{text "x\<^sub>i"} as
+  reference to the corresponding constants.  The target is concluded
+  by @{command (local) "end"}.
+
+  A @{text "(unchecked)"} option disables global dependency checks for
+  the corresponding definition, which is occasionally useful for
+  exotic overloading.  It is at the discretion of the user to avoid
+  malformed theory specifications!
+
+  \end{descr}
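+
+  For example, given a hypothetical unspecified constant @{text
+  "invert :: 'a => 'a"}, a concrete meaning at type @{text bool} may
+  be assigned like this:
+
+{\footnotesize
+\begin{verbatim}
+ consts invert :: "'a => 'a"
+
+ overloading
+   invert_bool == "invert :: bool => bool"
+ begin
+
+ definition "invert_bool b = (~ b)"
+
+ end
+\end{verbatim}
+}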
+*}
+
+
+section {* Incorporating ML code \label{sec:ML} *}
+
+text {*
+  \begin{matharray}{rcl}
+    @{command_def "use"} & : & \isarkeep{theory~|~local{\dsh}theory} \\
+    @{command_def "ML"} & : & \isarkeep{theory~|~local{\dsh}theory} \\
+    @{command_def "ML_val"} & : & \isartrans{\cdot}{\cdot} \\
+    @{command_def "ML_command"} & : & \isartrans{\cdot}{\cdot} \\
+    @{command_def "setup"} & : & \isartrans{theory}{theory} \\
+    @{command_def "method_setup"} & : & \isartrans{theory}{theory} \\
+  \end{matharray}
+
+  \begin{rail}
+    'use' name
+    ;
+    ('ML' | 'ML\_val' | 'ML\_command' | 'setup') text
+    ;
+    'method\_setup' name '=' text text
+    ;
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{command "use"}~@{text "file"}] reads and executes ML
+  commands from @{text "file"}.  The current theory context is passed
+  down to the ML toplevel and may be modified, using @{ML
+  "Context.>>"} or derived ML commands.  The file name is checked with
+  the @{keyword_ref "uses"} dependency declaration given in the theory
+  header (see also \secref{sec:begin-thy}).
+  
+  \item [@{command "ML"}~@{text "text"}] is similar to @{command
+  "use"}, but executes ML commands directly from the given @{text
+  "text"}.
+
+  \item [@{command "ML_val"} and @{command "ML_command"}] are
+  diagnostic versions of @{command "ML"}, which means that the context
+  may not be updated.  @{command "ML_val"} echoes the bindings produced
+  at the ML toplevel, but @{command "ML_command"} is silent.
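+
+  For example, the following performs a trivial ML computation
+  without any effect on the theory context:
+
+{\footnotesize
+\begin{verbatim}
+ ML_val {* 2 + 2 *}
+\end{verbatim}
+}
+
+  The resulting binding (value and type) is echoed in the output
+  window; replacing @{command "ML_val"} by @{command "ML_command"}
+  would suppress this output.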
+  
+  \item [@{command "setup"}~@{text "text"}] changes the current theory
+  context by applying @{text "text"}, which refers to an ML expression
+  of type @{ML_type "theory -> theory"}.  This makes it possible to
+  initialize any object-logic specific tools and packages written in
+  ML, for example.
+  
+  \item [@{command "method_setup"}~@{text "name = text description"}]
+  defines a proof method in the current theory.  The given @{text
+  "text"} has to be an ML expression of type @{ML_type "Args.src ->
+  Proof.context -> Proof.method"}.  Parsing concrete method syntax
+  from @{ML_type Args.src} input can be quite tedious in general.  The
+  following simple examples are for methods taking no explicit
+  arguments, a list of theorems, the proof context, or combinations
+  thereof.
+
+%FIXME proper antiquotations
+{\footnotesize
+\begin{verbatim}
+ (* no explicit arguments *)
+ Method.no_args (Method.METHOD (fn facts => foobar_tac))
+ (* a list of theorems *)
+ Method.thms_args (fn thms => Method.METHOD (fn facts => foobar_tac))
+ (* the proof context *)
+ Method.ctxt_args (fn ctxt => Method.METHOD (fn facts => foobar_tac))
+ (* theorems and context *)
+ Method.thms_ctxt_args (fn thms => fn ctxt =>
+    Method.METHOD (fn facts => foobar_tac))
+\end{verbatim}
+}
+
+  Note that mere tactic emulations may ignore the @{text facts}
+  parameter above.  Proper proof methods would do something
+  appropriate with the list of current facts, though.  Single-rule
+  methods usually do strict forward-chaining (e.g.\ by using @{ML
+  Drule.multi_resolves}), while automatic ones just insert the facts
+  using @{ML Method.insert_tac} before applying the main tactic.
+
+  \end{descr}
+*}
+
+
+section {* Primitive specification elements *}
+
+subsection {* Type classes and sorts \label{sec:classes} *}
+
+text {*
+  \begin{matharray}{rcll}
+    @{command_def "classes"} & : & \isartrans{theory}{theory} \\
+    @{command_def "classrel"} & : & \isartrans{theory}{theory} & (axiomatic!) \\
+    @{command_def "defaultsort"} & : & \isartrans{theory}{theory} \\
+    @{command_def "class_deps"} & : & \isarkeep{theory~|~proof} \\
+  \end{matharray}
+
+  \begin{rail}
+    'classes' (classdecl +)
+    ;
+    'classrel' (nameref ('<' | subseteq) nameref + 'and')
+    ;
+    'defaultsort' sort
+    ;
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{command "classes"}~@{text "c \<subseteq> c\<^sub>1, \<dots>, c\<^sub>n"}]
+  declares class @{text c} to be a subclass of existing classes @{text
+  "c\<^sub>1, \<dots>, c\<^sub>n"}.  Cyclic class structures are not permitted.
+
+  \item [@{command "classrel"}~@{text "c\<^sub>1 \<subseteq> c\<^sub>2"}] states
+  subclass relations between existing classes @{text "c\<^sub>1"} and
+  @{text "c\<^sub>2"}.  This is done axiomatically!  The @{command_ref
+  "instance"} command (see \secref{sec:axclass}) provides a way to
+  introduce proven class relations.
+
+  \item [@{command "defaultsort"}~@{text s}] makes sort @{text s} the
+  new default sort for any type variables given without sort
+  constraints.  Usually, the default sort would be only changed when
+  defining a new object-logic.
+
+  \item [@{command "class_deps"}] visualizes the subclass relation,
+  using Isabelle's graph browser tool (see also \cite{isabelle-sys}).
+
+  \end{descr}
+*}
+
+
+subsection {* Types and type abbreviations \label{sec:types-pure} *}
+
+text {*
+  \begin{matharray}{rcll}
+    @{command_def "types"} & : & \isartrans{theory}{theory} \\
+    @{command_def "typedecl"} & : & \isartrans{theory}{theory} \\
+    @{command_def "nonterminals"} & : & \isartrans{theory}{theory} \\
+    @{command_def "arities"} & : & \isartrans{theory}{theory} & (axiomatic!) \\
+  \end{matharray}
+
+  \begin{rail}
+    'types' (typespec '=' type infix? +)
+    ;
+    'typedecl' typespec infix?
+    ;
+    'nonterminals' (name +)
+    ;
+    'arities' (nameref '::' arity +)
+    ;
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{command "types"}~@{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t = \<tau>"}]
+  introduces \emph{type synonym} @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t"}
+  for existing type @{text "\<tau>"}.  Unlike actual type definitions, such
+  as are available in Isabelle/HOL, type synonyms are just
+  purely syntactic abbreviations without any logical significance.
+  Internally, type synonyms are fully expanded.
+  
+  \item [@{command "typedecl"}~@{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t"}]
+  declares a new type constructor @{text t}, intended as an actual
+  logical type (of the object-logic, if available).
+
+  \item [@{command "nonterminals"}~@{text c}] declares type
+  constructors @{text c} (without arguments) to act as purely
+  syntactic types, i.e.\ nonterminal symbols of Isabelle's inner
+  syntax of terms or types.
+
+  \item [@{command "arities"}~@{text "t :: (s\<^sub>1, \<dots>, s\<^sub>n)
+  s"}] augments Isabelle's order-sorted signature of types by new type
+  constructor arities.  This is done axiomatically!  The @{command_ref
+  "instance"} command (see \secref{sec:axclass}) provides a way to
+  introduce proven type arities.
+
+  \end{descr}
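+
+  For example (with hypothetical names @{text seq}, @{text rel}, and
+  @{text loc}):
+
+{\footnotesize
+\begin{verbatim}
+ types 'a seq = "'a list"
+ types ('a, 'b) rel = "'a => 'b => bool"
+
+ typedecl loc
+\end{verbatim}
+}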
+*}
+
+
+subsection {* Constants and definitions \label{sec:consts} *}
+
+text {*
+  Definitions essentially express abbreviations within the logic.  The
+  simplest form of a definition is @{text "c :: \<sigma> \<equiv> t"}, where @{text
+  c} is a newly declared constant.  Isabelle also allows derived forms
+  where the arguments of @{text c} appear on the left, abbreviating a
+  prefix of @{text \<lambda>}-abstractions, e.g.\ @{text "c \<equiv> \<lambda>x y. t"} may be
+  written more conveniently as @{text "c x y \<equiv> t"}.  Moreover,
+  definitions may be weakened by adding arbitrary pre-conditions:
+  @{text "A \<Longrightarrow> c x y \<equiv> t"}.
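+
+  For example, with hypothetical constants @{text double} and @{text
+  quotient} over type @{text nat}, each of the following equations
+  qualifies as a definitional specification:
+
+{\footnotesize
+\begin{verbatim}
+ double == (%n. n + n)
+ double n == n + n
+ y ~= 0 ==> quotient x y == x div y
+\end{verbatim}
+}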
+
+  \medskip The built-in well-formedness conditions for definitional
+  specifications are:
+
+  \begin{itemize}
+
+  \item Arguments (on the left-hand side) must be distinct variables.
+
+  \item All variables on the right-hand side must also appear on the
+  left-hand side.
+
+  \item All type variables on the right-hand side must also appear on
+  the left-hand side; this prohibits @{text "0 :: nat \<equiv> length ([] ::
+  \<alpha> list)"} for example.
+
+  \item The definition must not be recursive.  Most object-logics
+  provide definitional principles that can be used to express
+  recursion safely.
+
+  \end{itemize}
+
+  Overloading means that a constant being declared as @{text "c :: \<alpha>
+  decl"} may be defined separately on type instances @{text "c ::
+  (\<beta>\<^sub>1, \<dots>, \<beta>\<^sub>n) t decl"} for each type constructor @{text
+  t}.  The right-hand side may mention overloaded constants
+  recursively at type instances corresponding to the immediate
+  argument types @{text "\<beta>\<^sub>1, \<dots>, \<beta>\<^sub>n"}.  Incomplete
+  specification patterns impose global constraints on all occurrences,
+  e.g.\ @{text "d :: \<alpha> \<times> \<alpha>"} on the left-hand side means that all
+  corresponding occurrences on some right-hand side need to be an
+  instance of this; the general @{text "d :: \<alpha> \<times> \<beta>"} is disallowed.
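+
+  For example, a hypothetical overloaded measure on values might be
+  declared once and defined per type constructor as follows; note the
+  recursive occurrence at the immediate argument type of @{text "\<alpha>
+  list"}:
+
+{\footnotesize
+\begin{verbatim}
+ consts msr :: "'a => nat"
+
+ defs (overloaded)
+   msr_nat_def:  "msr (n :: nat) == n"
+   msr_list_def: "msr (xs :: 'a list) == foldl (%n x. n + msr x) 0 xs"
+\end{verbatim}
+}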
+
+  \begin{matharray}{rcl}
+    @{command_def "consts"} & : & \isartrans{theory}{theory} \\
+    @{command_def "defs"} & : & \isartrans{theory}{theory} \\
+    @{command_def "constdefs"} & : & \isartrans{theory}{theory} \\
+  \end{matharray}
+
+  \begin{rail}
+    'consts' ((name '::' type mixfix?) +)
+    ;
+    'defs' ('(' 'unchecked'? 'overloaded'? ')')? \\ (axmdecl prop +)
+    ;
+  \end{rail}
+
+  \begin{rail}
+    'constdefs' structs? (constdecl? constdef +)
+    ;
+
+    structs: '(' 'structure' (vars + 'and') ')'
+    ;
+    constdecl:  ((name '::' type mixfix | name '::' type | name mixfix) 'where'?) | name 'where'
+    ;
+    constdef: thmdecl? prop
+    ;
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{command "consts"}~@{text "c :: \<sigma>"}] declares constant
+  @{text c} to have any instance of type scheme @{text \<sigma>}.  The
+  optional mixfix annotations may attach concrete syntax to the
+  constants declared.
+  
+  \item [@{command "defs"}~@{text "name: eqn"}] introduces @{text eqn}
+  as a definitional axiom for some existing constant.
+  
+  The @{text "(unchecked)"} option disables global dependency checks
+  for this definition, which is occasionally useful for exotic
+  overloading.  It is at the discretion of the user to avoid malformed
+  theory specifications!
+  
+  The @{text "(overloaded)"} option declares definitions to be
+  potentially overloaded.  Unless this option is given, a warning
+  message is issued for any definitional equation with a more
+  specific type than that of the corresponding constant declaration.
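+  
+  For example, a hypothetical constant together with a definitional
+  axiom could be given as follows (the names @{text double} and
+  @{text double_def} are purely illustrative):
+
+\begin{ttbox}
+consts double :: "nat => nat"
+defs double_def: "double n == n + n"
+\end{ttbox}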
+  
+  \item [@{command "constdefs"}] provides a streamlined combination of
+  constant declarations and definitions: type-inference takes care of
+  the most general typing of the given specification (the optional
+  type constraint may refer to type-inference dummies ``@{text
+  _}'' as usual).  The resulting type declaration needs to agree with
+  that of the specification; overloading is \emph{not} supported here!
+  
+  The constant name may be omitted altogether, if neither type nor
+  syntax declarations are given.  The canonical name of the
+  definitional axiom for constant @{text c} will be @{text c_def},
+  unless specified otherwise.  Also note that the given list of
+  specifications is processed in a strictly sequential manner, with
+  type-checking being performed independently.
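+  
+  For example, a hypothetical constant could be declared and defined
+  in a single step like this (the resulting definitional axiom would
+  be called @{text double_def}):
+
+\begin{ttbox}
+constdefs
+  double :: "nat => nat"
+  "double n == n + n"
+\end{ttbox}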
+  
+  An optional initial context of @{text "(structure)"} declarations
+  admits use of indexed syntax, using the special symbol @{verbatim
+  "\<index>"} (printed as ``@{text "\<index>"}'').  The latter concept is
+  particularly useful with locales (see also \S\ref{sec:locale}).
+
+  \end{descr}
+*}
+
+
+section {* Axioms and theorems \label{sec:axms-thms} *}
+
+text {*
+  \begin{matharray}{rcll}
+    @{command_def "axioms"} & : & \isartrans{theory}{theory} & (axiomatic!) \\
+    @{command_def "lemmas"} & : & \isarkeep{local{\dsh}theory} \\
+    @{command_def "theorems"} & : & \isarkeep{local{\dsh}theory} \\
+  \end{matharray}
+
+  \begin{rail}
+    'axioms' (axmdecl prop +)
+    ;
+    ('lemmas' | 'theorems') target? (thmdef? thmrefs + 'and')
+    ;
+  \end{rail}
+
+  \begin{descr}
+  
+  \item [@{command "axioms"}~@{text "a: \<phi>"}] introduces arbitrary
+  statements as axioms of the meta-logic.  In fact, axioms are
+  ``axiomatic theorems'', and may be referred to later just like any
+  other theorem.
+  
+  Axioms are usually only introduced when declaring new logical
+  systems.  Everyday work is typically done the hard way, with proper
+  definitions and proven theorems.
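+  
+  For example, a (purely illustrative) axiomatization might read as
+  follows; note that the syntax of the proposition depends on the
+  object-logic, this version assumes Isabelle/HOL notation:
+
+\begin{ttbox}
+axioms excluded_middle: "A | ~ A"
+\end{ttbox}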
+  
+  \item [@{command "lemmas"}~@{text "a = b\<^sub>1 \<dots> b\<^sub>n"}]
+  retrieves and stores existing facts in the theory context, or the
+  specified target context (see also \secref{sec:target}).  Typical
+  applications would also involve attributes, to declare Simplifier
+  rules, for example.
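+  
+  For example, assuming existing facts @{text a} and @{text b}, one
+  might collect them as Simplifier rules like this (the name @{text
+  my_simps} is arbitrary):
+
+\begin{ttbox}
+lemmas my_simps [simp] = a b
+\end{ttbox}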
+  
+  \item [@{command "theorems"}] is essentially the same as @{command
+  "lemmas"}, but marks the result as a different kind of fact.
+
+  \end{descr}
+*}
+
+
+section {* Oracles *}
+
+text {*
+  \begin{matharray}{rcl}
+    @{command_def "oracle"} & : & \isartrans{theory}{theory} \\
+  \end{matharray}
+
+  The oracle interface promotes a given ML function @{ML_text
+  "theory -> T -> term"} to @{ML_text "theory -> T -> thm"}, for some
+  type @{ML_text T} given by the user.  This acts like an infinitary
+  specification of axioms -- there is no internal check of the
+  correctness of the results!  The inference kernel records oracle
+  invocations within the internal derivation object of theorems, and
+  the pretty printer attaches ``@{text "[!]"}'' to indicate results
+  that are not fully checked by Isabelle inferences.
+
+  \begin{rail}
+    'oracle' name '(' type ')' '=' text
+    ;
+  \end{rail}
+
+  \begin{descr}
+
+  \item [@{command "oracle"}~@{text "name (type) = text"}] turns the
+  given ML expression @{text "text"} of type
+  @{ML_text "theory ->"}~@{text "type"}~@{ML_text "-> term"} into an
+  ML function of type
+  @{ML_text "theory ->"}~@{text "type"}~@{ML_text "-> thm"}, which is
+  bound to the global identifier @{ML_text name}.
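+
+  For example, a (deliberately unsound!) oracle that turns any given
+  proposition into a theorem might be sketched like this, following
+  the syntax diagram above; the name @{text trust_me} is illustrative:
+
+\begin{ttbox}
+oracle trust_me (term) = "fn thy => fn t => t"
+\end{ttbox}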
+
+  \end{descr}
+*}
+
+
+section {* Name spaces *}
+
+text {*
+  \begin{matharray}{rcl}
+    @{command_def "global"} & : & \isartrans{theory}{theory} \\
+    @{command_def "local"} & : & \isartrans{theory}{theory} \\
+    @{command_def "hide"} & : & \isartrans{theory}{theory} \\
+  \end{matharray}
+
+  \begin{rail}
+    'hide' ('(open)')? name (nameref + )
+    ;
+  \end{rail}
+
+  Isabelle organizes any kind of name declarations (of types,
+  constants, theorems etc.) by separate hierarchically structured name
+  spaces.  Normally the user does not have to control the behavior of
+  name spaces by hand, yet the following commands provide some way to
+  do so.
+
+  \begin{descr}
+
+  \item [@{command "global"} and @{command "local"}] change the
+  current name declaration mode.  Initially, theories start in
+  @{command "local"} mode, causing all names to be automatically
+  qualified by the theory name.  Changing this to @{command "global"}
+  causes all names to be declared without the theory prefix, until
+  @{command "local"} is declared again.
+  
+  Note that global names are prone to get hidden accidentally later,
+  when qualified names of the same base name are introduced.
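+  
+  For example, an unqualified constant declaration might be sketched
+  like this (with @{text foo} as an illustrative name):
+
+\begin{ttbox}
+global
+consts foo :: "nat"
+local
+\end{ttbox}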
+  
+  \item [@{command "hide"}~@{text "space names"}] fully removes
+  declarations from a given name space (which may be @{text "class"},
+  @{text "type"}, @{text "const"}, or @{text "fact"}); with the @{text
+  "(open)"} option, only the base name is hidden.  Global
+  (unqualified) names may never be hidden.
+  
+  Note that hiding name space accesses has no impact on logical
+  declarations -- they remain valid internally.  Entities that are no
+  longer accessible to the user are printed with the special qualifier
+  ``@{text "??"}'' prefixed to the full internal name.
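+
+  For example, accesses to a hypothetical constant @{text c} could be
+  hidden as follows:
+
+\begin{ttbox}
+hide (open) const c
+\end{ttbox}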
+
+  \end{descr}
+*}
+
+
+section {* Syntax and translations \label{sec:syn-trans} *}
+
+text {*
+  \begin{matharray}{rcl}
+    @{command_def "syntax"} & : & \isartrans{theory}{theory} \\
+    @{command_def "no_syntax"} & : & \isartrans{theory}{theory} \\
+    @{command_def "translations"} & : & \isartrans{theory}{theory} \\
+    @{command_def "no_translations"} & : & \isartrans{theory}{theory} \\
+  \end{matharray}
+
+  \begin{rail}
+    ('syntax' | 'no\_syntax') mode? (constdecl +)
+    ;
+    ('translations' | 'no\_translations') (transpat ('==' | '=>' | '<=' | rightleftharpoons | rightharpoonup | leftharpoondown) transpat +)
+    ;
+
+    mode: ('(' ( name | 'output' | name 'output' ) ')')
+    ;
+    transpat: ('(' nameref ')')? string
+    ;
+  \end{rail}
+
+  \begin{descr}
+  
+  \item [@{command "syntax"}~@{text "(mode) decls"}] is similar to
+  @{command "consts"}~@{text decls}, except that the actual logical
+  signature extension is omitted.  Thus the context free grammar of
+  Isabelle's inner syntax may be augmented in arbitrary ways,
+  independently of the logic.  The @{text mode} argument refers to the
+  print mode that the grammar rules belong to; unless the @{keyword_ref
+  "output"} indicator is given, all productions are added to both the
+  input and output grammar.
+  
+  \item [@{command "no_syntax"}~@{text "(mode) decls"}] removes
+  grammar declarations (and translations) resulting from @{text
+  decls}, which are interpreted in the same manner as for @{command
+  "syntax"} above.
+  
+  \item [@{command "translations"}~@{text rules}] specifies syntactic
+  translation rules (i.e.\ macros): parse~/ print rules (@{text "\<rightleftharpoons>"}),
+  parse rules (@{text "\<rightharpoonup>"}), or print rules (@{text "\<leftharpoondown>"}).
+  Translation patterns may be prefixed by the syntactic category to be
+  used for parsing; the default is @{text logic}.
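+  
+  For example, a hypothetical infix notation for inequality could be
+  introduced like this (the syntax constant @{text "_noteq"} is
+  illustrative, and the right-hand side assumes Isabelle/HOL
+  notation):
+
+\begin{ttbox}
+syntax "_noteq" :: "'a => 'a => bool"  (infixl "~=" 50)
+translations "x ~= y" == "~ (x = y)"
+\end{ttbox}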
+  
+  \item [@{command "no_translations"}~@{text rules}] removes syntactic
+  translation rules, which are interpreted in the same manner as for
+  @{command "translations"} above.
+
+  \end{descr}
+*}
+
+
+section {* Syntax translation functions *}
+
+text {*
+  \begin{matharray}{rcl}
+    @{command_def "parse_ast_translation"} & : & \isartrans{theory}{theory} \\
+    @{command_def "parse_translation"} & : & \isartrans{theory}{theory} \\
+    @{command_def "print_translation"} & : & \isartrans{theory}{theory} \\
+    @{command_def "typed_print_translation"} & : & \isartrans{theory}{theory} \\
+    @{command_def "print_ast_translation"} & : & \isartrans{theory}{theory} \\
+    @{command_def "token_translation"} & : & \isartrans{theory}{theory} \\
+  \end{matharray}
+
+  \begin{rail}
+  ( 'parse\_ast\_translation' | 'parse\_translation' | 'print\_translation' |
+    'typed\_print\_translation' | 'print\_ast\_translation' ) ('(advanced)')? text
+  ;
+
+  'token\_translation' text
+  ;
+  \end{rail}
+
+  Syntax translation functions written in ML admit almost arbitrary
+  manipulations of Isabelle's inner syntax.  Each of the above
+  commands takes a single \railqtok{text} argument that refers to an
+  ML expression of appropriate type; the default types are as follows:
+
+%FIXME proper antiquotations
+\begin{ttbox}
+val parse_ast_translation   : (string * (ast list -> ast)) list
+val parse_translation       : (string * (term list -> term)) list
+val print_translation       : (string * (term list -> term)) list
+val typed_print_translation :
+  (string * (bool -> typ -> term list -> term)) list
+val print_ast_translation   : (string * (ast list -> ast)) list
+val token_translation       :
+  (string * string * (string -> string * real)) list
+\end{ttbox}
+
+  If the @{text "(advanced)"} option is given, the corresponding
+  translation functions may depend on the current theory or proof
+  context.  This makes it possible to implement advanced syntax
+  mechanisms, as translation functions may refer to specific theory
+  declarations or auxiliary proof data.
+
+  See also \cite[\S8]{isabelle-ref} for more information on the
+  general concept of syntax transformations in Isabelle.
+
+%FIXME proper antiquotations
+\begin{ttbox}
+val parse_ast_translation:
+  (string * (Context.generic -> ast list -> ast)) list
+val parse_translation:
+  (string * (Context.generic -> term list -> term)) list
+val print_translation:
+  (string * (Context.generic -> term list -> term)) list
+val typed_print_translation:
+  (string * (Context.generic -> bool -> typ -> term list -> term)) list
+val print_ast_translation:
+  (string * (Context.generic -> ast list -> ast)) list
+\end{ttbox}
+*}
+
 end
--- a/doc-src/IsarRef/Thy/pure.thy	Mon Jun 02 22:50:21 2008 +0200
+++ b/doc-src/IsarRef/Thy/pure.thy	Mon Jun 02 22:50:23 2008 +0200
@@ -6,629 +6,6 @@
 
 chapter {* Basic language elements \label{ch:pure-syntax} *}
 
-text {*
-  Subsequently, we introduce the main part of Pure theory and proof
-  commands, together with fundamental proof methods and attributes.
-  \Chref{ch:gen-tools} describes further Isar elements provided by
-  generic tools and packages (such as the Simplifier) that are either
-  part of Pure Isabelle or pre-installed in most object logics.
-  Specific language elements introduced by the major object-logics are
-  described in \chref{ch:hol} (Isabelle/HOL), \chref{ch:holcf}
-  (Isabelle/HOLCF), and \chref{ch:zf} (Isabelle/ZF).  Nevertheless,
-  examples given in the generic parts will usually refer to
-  Isabelle/HOL as well.
-
-  \medskip Isar commands may be either \emph{proper} document
-  constructors, or \emph{improper commands}.  Some proof methods and
-  attributes introduced later are classified as improper as well.
-  Improper Isar language elements, which are subsequently marked by
-  ``@{text "\<^sup>*"}'', are often helpful when developing proof
-  documents, while their use is discouraged for the final
-  human-readable outcome.  Typical examples are diagnostic commands
-  that print terms or theorems according to the current context; other
-  commands emulate old-style tactical theorem proving.
-*}
-
-
-section {* Theory commands *}
-
-subsection {* Markup commands \label{sec:markup-thy} *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "chapter"} & : & \isarkeep{local{\dsh}theory} \\
-    @{command_def "section"} & : & \isarkeep{local{\dsh}theory} \\
-    @{command_def "subsection"} & : & \isarkeep{local{\dsh}theory} \\
-    @{command_def "subsubsection"} & : & \isarkeep{local{\dsh}theory} \\
-    @{command_def "text"} & : & \isarkeep{local{\dsh}theory} \\
-    @{command_def "text_raw"} & : & \isarkeep{local{\dsh}theory} \\
-  \end{matharray}
-
-  Apart from formal comments (see \secref{sec:comments}), markup
-  commands provide a structured way to insert text into the document
-  generated from a theory (see \cite{isabelle-sys} for more
-  information on Isabelle's document preparation tools).
-
-  \begin{rail}
-    ('chapter' | 'section' | 'subsection' | 'subsubsection' | 'text') target? text
-    ;
-    'text\_raw' text
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "chapter"}, @{command "section"}, @{command
-  "subsection"}, and @{command "subsubsection"}] mark chapter and
-  section headings.
-
-  \item [@{command "text"}] specifies paragraphs of plain text.
-
-  \item [@{command "text_raw"}] inserts {\LaTeX} source into the
-  output, without additional markup.  Thus the full range of document
-  manipulations becomes available.
-
-  \end{descr}
-
-  The @{text "text"} argument of these markup commands (except for
-  @{command "text_raw"}) may contain references to formal entities
-  (``antiquotations'', see also \secref{sec:antiq}).  These are
-  interpreted in the present theory context, or the named @{text
-  "target"}.
-
-  Any of these markup elements corresponds to a {\LaTeX} command with
-  the name prefixed by @{verbatim "\\isamarkup"}.  For the sectioning
-  commands this is a plain macro with a single argument, e.g.\
-  @{verbatim "\\isamarkupchapter{"}@{text "\<dots>"}@{verbatim "}"} for
-  @{command "chapter"}.  The @{command "text"} markup results in a
-  {\LaTeX} environment @{verbatim "\\begin{isamarkuptext}"} @{text
-  "\<dots>"} @{verbatim "\\end{isamarkuptext}"}, while @{command "text_raw"}
-  causes the text to be inserted directly into the {\LaTeX} source.
-
-  \medskip Additional markup commands are available for proofs (see
-  \secref{sec:markup-prf}).  Also note that the @{command_ref
-  "header"} declaration (see \secref{sec:begin-thy}) admits to insert
-  section markup just preceding the actual theory definition.
-*}
-
-
-subsection {* Type classes and sorts \label{sec:classes} *}
-
-text {*
-  \begin{matharray}{rcll}
-    @{command_def "classes"} & : & \isartrans{theory}{theory} \\
-    @{command_def "classrel"} & : & \isartrans{theory}{theory} & (axiomatic!) \\
-    @{command_def "defaultsort"} & : & \isartrans{theory}{theory} \\
-    @{command_def "class_deps"} & : & \isarkeep{theory~|~proof} \\
-  \end{matharray}
-
-  \begin{rail}
-    'classes' (classdecl +)
-    ;
-    'classrel' (nameref ('<' | subseteq) nameref + 'and')
-    ;
-    'defaultsort' sort
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "classes"}~@{text "c \<subseteq> c\<^sub>1, \<dots>, c\<^sub>n"}]
-  declares class @{text c} to be a subclass of existing classes @{text
-  "c\<^sub>1, \<dots>, c\<^sub>n"}.  Cyclic class structures are not permitted.
-
-  \item [@{command "classrel"}~@{text "c\<^sub>1 \<subseteq> c\<^sub>2"}] states
-  subclass relations between existing classes @{text "c\<^sub>1"} and
-  @{text "c\<^sub>2"}.  This is done axiomatically!  The @{command_ref
-  "instance"} command (see \secref{sec:axclass}) provides a way to
-  introduce proven class relations.
-
-  \item [@{command "defaultsort"}~@{text s}] makes sort @{text s} the
-  new default sort for any type variables given without sort
-  constraints.  Usually, the default sort would be only changed when
-  defining a new object-logic.
-
-  \item [@{command "class_deps"}] visualizes the subclass relation,
-  using Isabelle's graph browser tool (see also \cite{isabelle-sys}).
-
-  \end{descr}
-*}
-
-
-subsection {* Primitive types and type abbreviations \label{sec:types-pure} *}
-
-text {*
-  \begin{matharray}{rcll}
-    @{command_def "types"} & : & \isartrans{theory}{theory} \\
-    @{command_def "typedecl"} & : & \isartrans{theory}{theory} \\
-    @{command_def "nonterminals"} & : & \isartrans{theory}{theory} \\
-    @{command_def "arities"} & : & \isartrans{theory}{theory} & (axiomatic!) \\
-  \end{matharray}
-
-  \begin{rail}
-    'types' (typespec '=' type infix? +)
-    ;
-    'typedecl' typespec infix?
-    ;
-    'nonterminals' (name +)
-    ;
-    'arities' (nameref '::' arity +)
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "types"}~@{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t = \<tau>"}]
-  introduces \emph{type synonym} @{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t"}
-  for existing type @{text "\<tau>"}.  Unlike actual type definitions, as
-  are available in Isabelle/HOL for example, type synonyms are just
-  purely syntactic abbreviations without any logical significance.
-  Internally, type synonyms are fully expanded.
-  
-  \item [@{command "typedecl"}~@{text "(\<alpha>\<^sub>1, \<dots>, \<alpha>\<^sub>n) t"}]
-  declares a new type constructor @{text t}, intended as an actual
-  logical type (of the object-logic, if available).
-
-  \item [@{command "nonterminals"}~@{text c}] declares type
-  constructors @{text c} (without arguments) to act as purely
-  syntactic types, i.e.\ nonterminal symbols of Isabelle's inner
-  syntax of terms or types.
-
-  \item [@{command "arities"}~@{text "t :: (s\<^sub>1, \<dots>, s\<^sub>n)
-  s"}] augments Isabelle's order-sorted signature of types by new type
-  constructor arities.  This is done axiomatically!  The @{command_ref
-  "instance"} command (see \S\ref{sec:axclass}) provides a way to
-  introduce proven type arities.
-
-  \end{descr}
-*}
-
-
-subsection {* Primitive constants and definitions \label{sec:consts} *}
-
-text {*
-  Definitions essentially express abbreviations within the logic.  The
-  simplest form of a definition is @{text "c :: \<sigma> \<equiv> t"}, where @{text
-  c} is a newly declared constant.  Isabelle also allows derived forms
-  where the arguments of @{text c} appear on the left, abbreviating a
-  prefix of @{text \<lambda>}-abstractions, e.g.\ @{text "c \<equiv> \<lambda>x y. t"} may be
-  written more conveniently as @{text "c x y \<equiv> t"}.  Moreover,
-  definitions may be weakened by adding arbitrary pre-conditions:
-  @{text "A \<Longrightarrow> c x y \<equiv> t"}.
-
-  \medskip The built-in well-formedness conditions for definitional
-  specifications are:
-
-  \begin{itemize}
-
-  \item Arguments (on the left-hand side) must be distinct variables.
-
-  \item All variables on the right-hand side must also appear on the
-  left-hand side.
-
-  \item All type variables on the right-hand side must also appear on
-  the left-hand side; this prohibits @{text "0 :: nat \<equiv> length ([] ::
-  \<alpha> list)"} for example.
-
-  \item The definition must not be recursive.  Most object-logics
-  provide definitional principles that can be used to express
-  recursion safely.
-
-  \end{itemize}
-
-  Overloading means that a constant being declared as @{text "c :: \<alpha>
-  decl"} may be defined separately on type instances @{text "c ::
-  (\<beta>\<^sub>1, \<dots>, \<beta>\<^sub>n) t decl"} for each type constructor @{text
-  t}.  The right-hand side may mention overloaded constants
-  recursively at type instances corresponding to the immediate
-  argument types @{text "\<beta>\<^sub>1, \<dots>, \<beta>\<^sub>n"}.  Incomplete
-  specification patterns impose global constraints on all occurrences,
-  e.g.\ @{text "d :: \<alpha> \<times> \<alpha>"} on the left-hand side means that all
-  corresponding occurrences on some right-hand side need to be an
-  instance of this, general @{text "d :: \<alpha> \<times> \<beta>"} will be disallowed.
-
-  \begin{matharray}{rcl}
-    @{command_def "consts"} & : & \isartrans{theory}{theory} \\
-    @{command_def "defs"} & : & \isartrans{theory}{theory} \\
-    @{command_def "constdefs"} & : & \isartrans{theory}{theory} \\
-  \end{matharray}
-
-  \begin{rail}
-    'consts' ((name '::' type mixfix?) +)
-    ;
-    'defs' ('(' 'unchecked'? 'overloaded'? ')')? \\ (axmdecl prop +)
-    ;
-  \end{rail}
-
-  \begin{rail}
-    'constdefs' structs? (constdecl? constdef +)
-    ;
-
-    structs: '(' 'structure' (vars + 'and') ')'
-    ;
-    constdecl:  ((name '::' type mixfix | name '::' type | name mixfix) 'where'?) | name 'where'
-    ;
-    constdef: thmdecl? prop
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "consts"}~@{text "c :: \<sigma>"}] declares constant
-  @{text c} to have any instance of type scheme @{text \<sigma>}.  The
-  optional mixfix annotations may attach concrete syntax to the
-  constants declared.
-  
-  \item [@{command "defs"}~@{text "name: eqn"}] introduces @{text eqn}
-  as a definitional axiom for some existing constant.
-  
-  The @{text "(unchecked)"} option disables global dependency checks
-  for this definition, which is occasionally useful for exotic
-  overloading.  It is at the discretion of the user to avoid malformed
-  theory specifications!
-  
-  The @{text "(overloaded)"} option declares definitions to be
-  potentially overloaded.  Unless this option is given, a warning
-  message would be issued for any definitional equation with a more
-  special type than that of the corresponding constant declaration.
-  
-  \item [@{command "constdefs"}] provides a streamlined combination of
-  constants declarations and definitions: type-inference takes care of
-  the most general typing of the given specification (the optional
-  type constraint may refer to type-inference dummies ``@{text
-  _}'' as usual).  The resulting type declaration needs to agree with
-  that of the specification; overloading is \emph{not} supported here!
-  
-  The constant name may be omitted altogether, if neither type nor
-  syntax declarations are given.  The canonical name of the
-  definitional axiom for constant @{text c} will be @{text c_def},
-  unless specified otherwise.  Also note that the given list of
-  specifications is processed in a strictly sequential manner, with
-  type-checking being performed independently.
-  
-  An optional initial context of @{text "(structure)"} declarations
-  admits use of indexed syntax, using the special symbol @{verbatim
-  "\<index>"} (printed as ``@{text "\<index>"}'').  The latter concept is
-  particularly useful with locales (see also \S\ref{sec:locale}).
-
-  \end{descr}
-*}
-
-
-subsection {* Syntax and translations \label{sec:syn-trans} *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "syntax"} & : & \isartrans{theory}{theory} \\
-    @{command_def "no_syntax"} & : & \isartrans{theory}{theory} \\
-    @{command_def "translations"} & : & \isartrans{theory}{theory} \\
-    @{command_def "no_translations"} & : & \isartrans{theory}{theory} \\
-  \end{matharray}
-
-  \begin{rail}
-    ('syntax' | 'no\_syntax') mode? (constdecl +)
-    ;
-    ('translations' | 'no\_translations') (transpat ('==' | '=>' | '<=' | rightleftharpoons | rightharpoonup | leftharpoondown) transpat +)
-    ;
-
-    mode: ('(' ( name | 'output' | name 'output' ) ')')
-    ;
-    transpat: ('(' nameref ')')? string
-    ;
-  \end{rail}
-
-  \begin{descr}
-  
-  \item [@{command "syntax"}~@{text "(mode) decls"}] is similar to
-  @{command "consts"}~@{text decls}, except that the actual logical
-  signature extension is omitted.  Thus the context free grammar of
-  Isabelle's inner syntax may be augmented in arbitrary ways,
-  independently of the logic.  The @{text mode} argument refers to the
-  print mode that the grammar rules belong; unless the @{keyword_ref
-  "output"} indicator is given, all productions are added both to the
-  input and output grammar.
-  
-  \item [@{command "no_syntax"}~@{text "(mode) decls"}] removes
-  grammar declarations (and translations) resulting from @{text
-  decls}, which are interpreted in the same manner as for @{command
-  "syntax"} above.
-  
-  \item [@{command "translations"}~@{text rules}] specifies syntactic
-  translation rules (i.e.\ macros): parse~/ print rules (@{text "\<rightleftharpoons>"}),
-  parse rules (@{text "\<rightharpoonup>"}), or print rules (@{text "\<leftharpoondown>"}).
-  Translation patterns may be prefixed by the syntactic category to be
-  used for parsing; the default is @{text logic}.
-  
-  \item [@{command "no_translations"}~@{text rules}] removes syntactic
-  translation rules, which are interpreted in the same manner as for
-  @{command "translations"} above.
-
-  \end{descr}
-*}
-
-
-subsection {* Axioms and theorems \label{sec:axms-thms} *}
-
-text {*
-  \begin{matharray}{rcll}
-    @{command_def "axioms"} & : & \isartrans{theory}{theory} & (axiomatic!) \\
-    @{command_def "lemmas"} & : & \isarkeep{local{\dsh}theory} \\
-    @{command_def "theorems"} & : & isarkeep{local{\dsh}theory} \\
-  \end{matharray}
-
-  \begin{rail}
-    'axioms' (axmdecl prop +)
-    ;
-    ('lemmas' | 'theorems') target? (thmdef? thmrefs + 'and')
-    ;
-  \end{rail}
-
-  \begin{descr}
-  
-  \item [@{command "axioms"}~@{text "a: \<phi>"}] introduces arbitrary
-  statements as axioms of the meta-logic.  In fact, axioms are
-  ``axiomatic theorems'', and may be referred later just as any other
-  theorem.
-  
-  Axioms are usually only introduced when declaring new logical
-  systems.  Everyday work is typically done the hard way, with proper
-  definitions and proven theorems.
-  
-  \item [@{command "lemmas"}~@{text "a = b\<^sub>1 \<dots> b\<^sub>n"}]
-  retrieves and stores existing facts in the theory context, or the
-  specified target context (see also \secref{sec:target}).  Typical
-  applications would also involve attributes, to declare Simplifier
-  rules, for example.
-  
-  \item [@{command "theorems"}] is essentially the same as @{command
-  "lemmas"}, but marks the result as a different kind of facts.
-
-  \end{descr}
-*}
-
-
-subsection {* Name spaces *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "global"} & : & \isartrans{theory}{theory} \\
-    @{command_def "local"} & : & \isartrans{theory}{theory} \\
-    @{command_def "hide"} & : & \isartrans{theory}{theory} \\
-  \end{matharray}
-
-  \begin{rail}
-    'hide' ('(open)')? name (nameref + )
-    ;
-  \end{rail}
-
-  Isabelle organizes any kind of name declarations (of types,
-  constants, theorems etc.) by separate hierarchically structured name
-  spaces.  Normally the user does not have to control the behavior of
-  name spaces by hand, yet the following commands provide some way to
-  do so.
-
-  \begin{descr}
-
-  \item [@{command "global"} and @{command "local"}] change the
-  current name declaration mode.  Initially, theories start in
-  @{command "local"} mode, causing all names to be automatically
-  qualified by the theory name.  Changing this to @{command "global"}
-  causes all names to be declared without the theory prefix, until
-  @{command "local"} is declared again.
-  
-  Note that global names are prone to get hidden accidently later,
-  when qualified names of the same base name are introduced.
-  
-  \item [@{command "hide"}~@{text "space names"}] fully removes
-  declarations from a given name space (which may be @{text "class"},
-  @{text "type"}, @{text "const"}, or @{text "fact"}); with the @{text
-  "(open)"} option, only the base name is hidden.  Global
-  (unqualified) names may never be hidden.
-  
-  Note that hiding name space accesses has no impact on logical
-  declarations -- they remain valid internally.  Entities that are no
-  longer accessible to the user are printed with the special qualifier
-  ``@{text "??"}'' prefixed to the full internal name.
-
-  \end{descr}
-*}
-
-
-subsection {* Incorporating ML code \label{sec:ML} *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "use"} & : & \isarkeep{theory~|~local{\dsh}theory} \\
-    @{command_def "ML"} & : & \isarkeep{theory~|~local{\dsh}theory} \\
-    @{command_def "ML_val"} & : & \isartrans{\cdot}{\cdot} \\
-    @{command_def "ML_command"} & : & \isartrans{\cdot}{\cdot} \\
-    @{command_def "setup"} & : & \isartrans{theory}{theory} \\
-    @{command_def "method_setup"} & : & \isartrans{theory}{theory} \\
-  \end{matharray}
-
-  \begin{rail}
-    'use' name
-    ;
-    ('ML' | 'ML\_val' | 'ML\_command' | 'setup') text
-    ;
-    'method\_setup' name '=' text text
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "use"}~@{text "file"}] reads and executes ML
-  commands from @{text "file"}.  The current theory context is passed
-  down to the ML toplevel and may be modified, using @{ML
-  "Context.>>"} or derived ML commands.  The file name is checked with
-  the @{keyword_ref "uses"} dependency declaration given in the theory
-  header (see also \secref{sec:begin-thy}).
-  
-  \item [@{command "ML"}~@{text "text"}] is similar to @{command
-  "use"}, but executes ML commands directly from the given @{text
-  "text"}.
-
-  \item [@{command "ML_val"} and @{command "ML_command"}] are
-  diagnostic versions of @{command "ML"}, which means that the context
-  may not be updated.  @{command "ML_val"} echos the bindings produced
-  at the ML toplevel, but @{command "ML_command"} is silent.
-  
-  \item [@{command "setup"}~@{text "text"}] changes the current theory
-  context by applying @{text "text"}, which refers to an ML expression
-  of type @{ML_type "theory -> theory"}.  This enables to initialize
-  any object-logic specific tools and packages written in ML, for
-  example.
-  
-  \item [@{command "method_setup"}~@{text "name = text description"}]
-  defines a proof method in the current theory.  The given @{text
-  "text"} has to be an ML expression of type @{ML_type "Args.src ->
-  Proof.context -> Proof.method"}.  Parsing concrete method syntax
-  from @{ML_type Args.src} input can be quite tedious in general.  The
-  following simple examples are for methods without any explicit
-  arguments, or a list of theorems, respectively.
-
-%FIXME proper antiquotations
-{\footnotesize
-\begin{verbatim}
- Method.no_args (Method.METHOD (fn facts => foobar_tac))
- Method.thms_args (fn thms => Method.METHOD (fn facts => foobar_tac))
- Method.ctxt_args (fn ctxt => Method.METHOD (fn facts => foobar_tac))
- Method.thms_ctxt_args (fn thms => fn ctxt =>
-    Method.METHOD (fn facts => foobar_tac))
-\end{verbatim}
-}
-
-  Note that mere tactic emulations may ignore the @{text facts}
-  parameter above.  Proper proof methods would do something
-  appropriate with the list of current facts, though.  Single-rule
-  methods usually do strict forward-chaining (e.g.\ by using @{ML
-  Drule.multi_resolves}), while automatic ones just insert the facts
-  using @{ML Method.insert_tac} before applying the main tactic.
-
-  \end{descr}
-*}
-
-
-subsection {* Syntax translation functions *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "parse_ast_translation"} & : & \isartrans{theory}{theory} \\
-    @{command_def "parse_translation"} & : & \isartrans{theory}{theory} \\
-    @{command_def "print_translation"} & : & \isartrans{theory}{theory} \\
-    @{command_def "typed_print_translation"} & : & \isartrans{theory}{theory} \\
-    @{command_def "print_ast_translation"} & : & \isartrans{theory}{theory} \\
-    @{command_def "token_translation"} & : & \isartrans{theory}{theory} \\
-  \end{matharray}
-
-  \begin{rail}
-  ( 'parse\_ast\_translation' | 'parse\_translation' | 'print\_translation' |
-    'typed\_print\_translation' | 'print\_ast\_translation' ) ('(advanced)')? text
-  ;
-
-  'token\_translation' text
-  ;
-  \end{rail}
-
-  Syntax translation functions written in ML admit almost arbitrary
-  manipulations of Isabelle's inner syntax.  Each of the above
-  commands takes a single \railqtok{text} argument that refers to an
-  ML expression of appropriate type; the default types are as
-  follows:
-
-%FIXME proper antiquotations
-\begin{ttbox}
-val parse_ast_translation   : (string * (ast list -> ast)) list
-val parse_translation       : (string * (term list -> term)) list
-val print_translation       : (string * (term list -> term)) list
-val typed_print_translation :
-  (string * (bool -> typ -> term list -> term)) list
-val print_ast_translation   : (string * (ast list -> ast)) list
-val token_translation       :
-  (string * string * (string -> string * real)) list
-\end{ttbox}
-
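-  For example, a minimal parse translation that replaces a
-  (hypothetical) syntax constant @{verbatim "_Foo"} by an application
-  of the logical constant @{ML_text foo} could be sketched like this:
-
-%FIXME proper antiquotations
-\begin{ttbox}
-val parse_translation =
-  [("_Foo", fn ts => Term.list_comb (Syntax.const "foo", ts))];
-\end{ttbox}
-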
-  If the @{text "(advanced)"} option is given, the corresponding
-  translation functions may depend on the current theory or proof
-  context.  This makes it possible to implement advanced syntax
-  mechanisms, since translation functions may refer to specific
-  theory declarations or auxiliary proof data.
-
-  See also \cite[\S8]{isabelle-ref} for more information on the
-  general concept of syntax transformations in Isabelle.
-
-  With the @{text "(advanced)"} option, the expected ML types are as
-  follows:
-
-%FIXME proper antiquotations
-\begin{ttbox}
-val parse_ast_translation:
-  (string * (Context.generic -> ast list -> ast)) list
-val parse_translation:
-  (string * (Context.generic -> term list -> term)) list
-val print_translation:
-  (string * (Context.generic -> term list -> term)) list
-val typed_print_translation:
-  (string * (Context.generic -> bool -> typ -> term list -> term)) list
-val print_ast_translation:
-  (string * (Context.generic -> ast list -> ast)) list
-\end{ttbox}
-*}
-
-
-subsection {* Oracles *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "oracle"} & : & \isartrans{theory}{theory} \\
-  \end{matharray}
-
-  The oracle interface promotes a given ML function of type
-  @{ML_text "theory -> T -> term"} to one of type @{ML_text
-  "theory -> T -> thm"}, for some type @{ML_text T} given by the
-  user.  This acts like an infinitary specification of axioms --
-  there is no internal check of the
-  correctness of the results!  The inference kernel records oracle
-  invocations within the internal derivation object of theorems, and
-  the pretty printer attaches ``@{text "[!]"}'' to indicate results
-  that are not fully checked by Isabelle inferences.
-
-  \begin{rail}
-    'oracle' name '(' type ')' '=' text
-    ;
-  \end{rail}
-
-  \begin{descr}
-
-  \item [@{command "oracle"}~@{text "name (type) = text"}] turns the
-  given ML expression @{text "text"} of type
-  @{ML_text "theory ->"}~@{text "type"}~@{ML_text "-> term"} into an
-  ML function of type
-  @{ML_text "theory ->"}~@{text "type"}~@{ML_text "-> thm"}, which is
-  bound to the global identifier @{ML_text name}.
-
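-  For example, an oracle based on a hypothetical external decision
-  procedure @{ML_text decide} of type @{ML_text "theory -> term ->
-  bool"} might be declared as follows; the names are purely
-  illustrative:
-
-%FIXME proper antiquotations
-\begin{verbatim}
- oracle my_oracle (term) =
-   {* fn thy => fn t =>
-        if decide thy t then t
-        else error "my_oracle: not a theorem" *}
-\end{verbatim}
-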
-  \end{descr}
-*}
-
-
-section {* Proof commands *}
-
-subsection {* Markup commands \label{sec:markup-prf} *}
-
-text {*
-  \begin{matharray}{rcl}
-    @{command_def "sect"} & : & \isartrans{proof}{proof} \\
-    @{command_def "subsect"} & : & \isartrans{proof}{proof} \\
-    @{command_def "subsubsect"} & : & \isartrans{proof}{proof} \\
-    @{command_def "txt"} & : & \isartrans{proof}{proof} \\
-    @{command_def "txt_raw"} & : & \isartrans{proof}{proof} \\
-  \end{matharray}
-
-  These markup commands for proof mode closely correspond to the ones
-  of theory mode (see \secref{sec:markup-thy}).
-
-  \begin{rail}
-    ('sect' | 'subsect' | 'subsubsect' | 'txt' | 'txt\_raw') text
-    ;
-  \end{rail}
-*}
-
-
 section {* Other commands *}
 
 subsection {* Diagnostics *}
@@ -846,15 +223,11 @@
     @{command_def "cd"}@{text "\<^sup>*"} & : & \isarkeep{\cdot} \\
     @{command_def "pwd"}@{text "\<^sup>*"} & : & \isarkeep{\cdot} \\
     @{command_def "use_thy"}@{text "\<^sup>*"} & : & \isarkeep{\cdot} \\
-    @{command_def "display_drafts"}@{text "\<^sup>*"} & : & \isarkeep{\cdot} \\
-    @{command_def "print_drafts"}@{text "\<^sup>*"} & : & \isarkeep{\cdot} \\
   \end{matharray}
 
   \begin{rail}
     ('cd' | 'use\_thy' | 'update\_thy') name
     ;
-    ('display\_drafts' | 'print\_drafts') (name +)
-    ;
   \end{rail}
 
   \begin{descr}
@@ -868,12 +241,6 @@
   These system commands are scarcely used when working interactively,
   since loading of theories is done automatically as required.
 
-  \item [@{command "display_drafts"}~@{text paths} and @{command
-  "print_drafts"}~@{text paths}] perform simple output of a given list
-  of raw source files.  Only those symbols that do not require
-  additional {\LaTeX} packages are displayed properly, everything else
-  is left verbatim.
-
   \end{descr}
 *}