merged
author: huffman
Wed, 07 Sep 2011 19:24:28 -0700
changeset 44826 1120cba9bce4
parent 44825 353ddca2e4c0 (current diff)
parent 44820 7798deb6f8fa (diff)
child 44827 4d1384a1fc82
child 44839 d19c677eb812
NEWS
--- a/ANNOUNCE	Wed Sep 07 17:41:29 2011 -0700
+++ b/ANNOUNCE	Wed Sep 07 19:24:28 2011 -0700
@@ -1,34 +1,15 @@
-Subject: Announcing Isabelle2011
+Subject: Announcing Isabelle2011-1
 To: isabelle-users@cl.cam.ac.uk
 
-Isabelle2011 is now available.
-
-This version significantly improves upon Isabelle2009-2, see the NEWS
-file in the distribution for more details.  Some notable changes are:
-
-* Experimental Prover IDE based on Isabelle/Scala and jEdit.
-
-* Coercive subtyping (configured in HOL/Complex_Main).
-
-* HOL code generation: Scala as another target language.
-
-* HOL: partial_function definitions.
+Isabelle2011-1 is now available.
 
-* HOL: various tool enhancements, including Quickcheck, Nitpick,
-  Sledgehammer, SMT integration.
-
-* HOL: various additions to theory library, including HOL-Algebra,
-  Imperative_HOL, Multivariate_Analysis, Probability.
+This version improves upon Isabelle2011; see the NEWS file in the
+distribution for more details.  Some important changes are:
 
-* HOLCF: reorganization of library and related tools.
-
-* HOL/SPARK: interactive proof environment for verification conditions
-  generated by the SPARK Ada program verifier.
-
-* Improved Isabelle/Isar implementation manual (covering Isabelle/ML).
+* FIXME
 
 
-You may get Isabelle2011 from the following mirror sites:
+You may get Isabelle2011-1 from the following mirror sites:
 
   Cambridge (UK)       http://www.cl.cam.ac.uk/research/hvg/Isabelle/
   Munich (Germany)     http://isabelle.in.tum.de/
--- a/Admin/CHECKLIST	Wed Sep 07 17:41:29 2011 -0700
+++ b/Admin/CHECKLIST	Wed Sep 07 19:24:28 2011 -0700
@@ -3,9 +3,7 @@
 
 - test polyml-5.4.0, polyml-5.3.0, polyml-5.2.1, smlnj;
 
-- test Proof General 4.1, 4.0, 3.7.1.1;
-
-- test Scala wrapper;
+- test Proof General 4.1, 3.7.1.1;
 
 - check HTML header of library;
 
--- a/Admin/makebundle	Wed Sep 07 17:41:29 2011 -0700
+++ b/Admin/makebundle	Wed Sep 07 19:24:28 2011 -0700
@@ -75,7 +75,13 @@
 )
 
 case "$PLATFORM" in
-  x86-cygwin)
+  *-darwin)
+    perl -pi -e "s,lookAndFeel=.*,lookAndFeel=com.apple.laf.AquaLookAndFeel,g;" \
+      "$TMP/$ISABELLE_NAME/src/Tools/jEdit/dist/properties/jEdit.props"
+    ;;
+  *-cygwin)
+    perl -pi -e "s,lookAndFeel=.*,lookAndFeel=com.sun.java.swing.plaf.windows.WindowsLookAndFeel,g;" \
+      "$TMP/$ISABELLE_NAME/src/Tools/jEdit/dist/properties/jEdit.props"
     rm "$TMP/$ISABELLE_NAME/contrib/ProofGeneral"
     ln -s ProofGeneral-3.7.1.1 "$TMP/$ISABELLE_NAME/contrib/ProofGeneral"
     ;;
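The perl one-liners above rewrite jEdit's lookAndFeel property in place for the Darwin and Cygwin bundles. As a rough sketch of that substitution only (not part of the makebundle script; the sample properties text is illustrative):

```python
import re

def set_look_and_feel(props_text: str, laf_class: str) -> str:
    """Rewrite any lookAndFeel=... line in a jEdit properties text,
    mirroring the perl -pi substitution used in makebundle."""
    return re.sub(r"lookAndFeel=.*", "lookAndFeel=" + laf_class, props_text)

sample = "view.height=600\nlookAndFeel=javax.swing.plaf.metal.MetalLookAndFeel\n"
print(set_look_and_feel(sample, "com.apple.laf.AquaLookAndFeel"))
```

The same substitution with com.sun.java.swing.plaf.windows.WindowsLookAndFeel corresponds to the Cygwin branch.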
--- a/CONTRIBUTORS	Wed Sep 07 17:41:29 2011 -0700
+++ b/CONTRIBUTORS	Wed Sep 07 19:24:28 2011 -0700
@@ -3,8 +3,14 @@
 who is listed as an author in one of the source files of this Isabelle
 distribution.
 
-Contributions to this Isabelle version
---------------------------------------
+Contributions to Isabelle2011-1
+-------------------------------
+
+* September 2011: Peter Gammie
+  Theory HOL/Library/Saturated: numbers with saturated arithmetic.
+
+* August 2011: Florian Haftmann, Johannes Hölzl and Lars Noschinski, TUM
+  Refined theory on complete lattices.
 
 
 Contributions to Isabelle2011
--- a/NEWS	Wed Sep 07 17:41:29 2011 -0700
+++ b/NEWS	Wed Sep 07 19:24:28 2011 -0700
@@ -1,8 +1,8 @@
 Isabelle NEWS -- history of user-relevant changes
 ==============================================
 
-New in this Isabelle version
-----------------------------
+New in Isabelle2011-1 (October 2011)
+------------------------------------
 
 *** General ***
 
@@ -34,6 +34,13 @@
 See also ~~/src/Tools/jEdit/README.html for further information,
 including some remaining limitations.
 
+* Theory loader: source files are exclusively located via the master
+directory of each theory node (where the .thy file itself resides).
+The global load path (such as src/HOL/Library) has been discontinued.
+Note that the path element ~~ may be used to reference theories in the
+Isabelle home folder -- for instance, "~~/src/HOL/Library/FuncSet".
+INCOMPATIBILITY.
+
 * Theory loader: source files are identified by content via SHA1
 digests.  Discontinued former path/modtime identification and optional
 ISABELLE_FILE_IDENT plugin scripts.
@@ -48,13 +55,6 @@
 * Discontinued old lib/scripts/polyml-platform, which has been
 obsolete since Isabelle2009-2.
 
-* Theory loader: source files are exclusively located via the master
-directory of each theory node (where the .thy file itself resides).
-The global load path (such as src/HOL/Library) has been discontinued.
-Note that the path element ~~ may be used to reference theories in the
-Isabelle home folder -- for instance, "~~/src/HOL/Library/FuncSet".
-INCOMPATIBILITY.
-
 * Various optional external tools are referenced more robustly and
 uniformly by explicit Isabelle settings as follows:
 
@@ -82,29 +82,38 @@
 that the result needs to be unique, which means fact specifications
 may have to be refined after enriching a proof context.
 
+* Attribute "case_names" has been refined: the assumptions in each case
+can now be named by following the case name with [name1 name2 ...].
+
 * Isabelle/Isar reference manual provides more formal references in
 syntax diagrams.
 
-* Attribute case_names has been refined: the assumptions in each case can
-be named now by following the case name with [name1 name2 ...].
-
 
 *** HOL ***
 
-* Classes bot and top require underlying partial order rather than preorder:
-uniqueness of bot and top is guaranteed.  INCOMPATIBILITY.
+* Theory Library/Saturated provides type of numbers with saturated
+arithmetic.
+
+* Classes bot and top require underlying partial order rather than
+preorder: uniqueness of bot and top is guaranteed.  INCOMPATIBILITY.
 
 * Class complete_lattice: generalized a couple of lemmas from sets;
-generalized theorems INF_cong and SUP_cong.  New type classes for complete
-boolean algebras and complete linear orders.  Lemmas Inf_less_iff,
-less_Sup_iff, INF_less_iff, less_SUP_iff now reside in class complete_linorder.
-Changed proposition of lemmas Inf_bool_def, Sup_bool_def, Inf_fun_def, Sup_fun_def,
-Inf_apply, Sup_apply.
+generalized theorems INF_cong and SUP_cong.  New type classes for
+complete boolean algebras and complete linear orders.  Lemmas
+Inf_less_iff, less_Sup_iff, INF_less_iff, less_SUP_iff now reside in
+class complete_linorder.
+
+Changed proposition of lemmas Inf_bool_def, Sup_bool_def, Inf_fun_def,
+Sup_fun_def, Inf_apply, Sup_apply.
+
 Redundant lemmas Inf_singleton, Sup_singleton, Inf_binary, Sup_binary,
 INF_eq, SUP_eq, INF_UNIV_range, SUP_UNIV_range, Int_eq_Inter,
-INTER_eq_Inter_image, Inter_def, INT_eq, Un_eq_Union, UNION_eq_Union_image,
-Union_def, UN_singleton, UN_eq have been discarded.
-More consistent and less misunderstandable names:
+INTER_eq_Inter_image, Inter_def, INT_eq, Un_eq_Union,
+UNION_eq_Union_image, Union_def, UN_singleton, UN_eq have been
+discarded.
+
+More consistent and comprehensive names:
+
   INFI_def ~> INF_def
   SUPR_def ~> SUP_def
   INF_leI ~> INF_lower
@@ -122,30 +131,35 @@
 
 INCOMPATIBILITY.
 
-* Theorem collections ball_simps and bex_simps do not contain theorems referring
-to UNION any longer;  these have been moved to collection UN_ball_bex_simps.
-INCOMPATIBILITY.
-
-* Archimedean_Field.thy:
-    floor now is defined as parameter of a separate type class floor_ceiling.
- 
-* Finite_Set.thy: more coherent development of fold_set locales:
+* Theorem collections ball_simps and bex_simps do not contain theorems
+referring to UNION any longer; these have been moved to collection
+UN_ball_bex_simps.  INCOMPATIBILITY.
+
+* Theory Archimedean_Field: floor is now defined as a parameter of the
+separate type class floor_ceiling.
+
+* Theory Finite_Set: more coherent development of fold_set locales:
 
     locale fun_left_comm ~> locale comp_fun_commute
     locale fun_left_comm_idem ~> locale comp_fun_idem
-    
-Both use point-free characterisation; interpretation proofs may need adjustment.
-INCOMPATIBILITY.
+
+Both use point-free characterization; interpretation proofs may need
+adjustment.  INCOMPATIBILITY.
 
 * Code generation:
-  - theory Library/Code_Char_ord provides native ordering of characters
-    in the target language.
-  - commands code_module and code_library are legacy, use export_code instead. 
-  - method evaluation is legacy, use method eval instead.
-  - legacy evaluator "SML" is deactivated by default. To activate it, add the following
-    line in your theory:
+
+  - Theory Library/Code_Char_ord provides native ordering of
+    characters in the target language.
+
+  - Commands code_module and code_library are legacy, use export_code instead.
+
+  - Method "evaluation" is legacy, use method "eval" instead.
+
+  - Legacy evaluator "SML" is deactivated by default.  May be
+    reactivated by the following theory command:
+
       setup {* Value.add_evaluator ("SML", Codegen.eval_term) *}
- 
+
 * Declare ext [intro] by default.  Rare INCOMPATIBILITY.
 
 * Nitpick:
@@ -168,51 +182,57 @@
   - Removed "metisF" -- use "metis" instead. INCOMPATIBILITY.
   - Obsoleted "metisFT" -- use "metis (full_types)" instead. INCOMPATIBILITY.
 
-* "try":
-  - Renamed "try_methods" and added "simp:", "intro:", "dest:", and "elim:"
-    options. INCOMPATIBILITY.
-  - Introduced "try" that not only runs "try_methods" but also "solve_direct",
-    "sledgehammer", "quickcheck", and "nitpick".
+* Command 'try':
+  - Renamed 'try_methods' and added "simp:", "intro:", "dest:", and
+    "elim:" options. INCOMPATIBILITY.
+  - Introduced 'try' that not only runs 'try_methods' but also
+    'solve_direct', 'sledgehammer', 'quickcheck', and 'nitpick'.
 
 * Quickcheck:
+
   - Added "eval" option to evaluate terms for the found counterexample
-    (currently only supported by the default (exhaustive) tester)
+    (currently only supported by the default (exhaustive) tester).
+
   - Added post-processing of terms to obtain readable counterexamples
-    (currently only supported by the default (exhaustive) tester)
+    (currently only supported by the default (exhaustive) tester).
+
   - New counterexample generator quickcheck[narrowing] enables
-    narrowing-based testing.
-    It requires that the Glasgow Haskell compiler is installed and
-    its location is known to Isabelle with the environment variable
-    ISABELLE_GHC.
+    narrowing-based testing.  Requires the Glasgow Haskell compiler
+    with its installation location defined in the Isabelle settings
+    environment as ISABELLE_GHC.
+
   - Removed quickcheck tester "SML" based on the SML code generator
-    from HOL-Library
+    (formerly in HOL/Library).
 
 * Function package: discontinued option "tailrec".
-INCOMPATIBILITY. Use partial_function instead.
-
-* HOL-Probability:
+INCOMPATIBILITY. Use 'partial_function' instead.
+
+* Session HOL-Probability:
   - Caratheodory's extension lemma is now proved for ring_of_sets.
   - Infinite products of probability measures are now available.
-  - Use extended reals instead of positive extended reals.
-    INCOMPATIBILITY.
-
-* Old recdef package has been moved to Library/Old_Recdef.thy, where it
-must be loaded explicitly.  INCOMPATIBILITY.
-
-* Well-founded recursion combinator "wfrec" has been moved to
-Library/Wfrec.thy. INCOMPATIBILITY.
-
-* Theory Library/Nat_Infinity has been renamed to Library/Extended_Nat.
-The names of the following types and constants have changed:
-  inat (type) ~> enat
+  - Use extended reals instead of positive extended
+    reals. INCOMPATIBILITY.
+
+* Old 'recdef' package has been moved to theory Library/Old_Recdef,
+from where it must be imported explicitly.  INCOMPATIBILITY.
+
+* Well-founded recursion combinator "wfrec" has been moved to theory
+Library/Wfrec. INCOMPATIBILITY.
+
+* Theory Library/Nat_Infinity has been renamed to
+Library/Extended_Nat, with name changes of the following types and
+constants:
+
+  type inat   ~> type enat
   Fin         ~> enat
   Infty       ~> infinity (overloaded)
   iSuc        ~> eSuc
   the_Fin     ~> the_enat
+
 Every theorem name containing "inat", "Fin", "Infty", or "iSuc" has
 been renamed accordingly.
 
-* Limits.thy: Type "'a net" has been renamed to "'a filter", in
+* Theory Limits: Type "'a net" has been renamed to "'a filter", in
 accordance with standard mathematical terminology. INCOMPATIBILITY.
 
 * Session Multivariate_Analysis: Type "('a, 'b) cart" has been renamed
@@ -283,10 +303,10 @@
   real_abs_sub_norm ~> norm_triangle_ineq3
   norm_cauchy_schwarz_abs ~> Cauchy_Schwarz_ineq2
 
-* Complex_Main: The locale interpretations for the bounded_linear and
-bounded_bilinear locales have been removed, in order to reduce the
-number of duplicate lemmas. Users must use the original names for
-distributivity theorems, potential INCOMPATIBILITY.
+* Theory Complex_Main: The locale interpretations for the
+bounded_linear and bounded_bilinear locales have been removed, in
+order to reduce the number of duplicate lemmas. Users must use the
+original names for distributivity theorems, potential INCOMPATIBILITY.
 
   divide.add ~> add_divide_distrib
   divide.diff ~> diff_divide_distrib
@@ -296,7 +316,7 @@
   mult_right.setsum ~> setsum_right_distrib
   mult_left.diff ~> left_diff_distrib
 
-* Complex_Main: Several redundant theorems have been removed or
+* Theory Complex_Main: Several redundant theorems have been removed or
 replaced by more general versions. INCOMPATIBILITY.
 
   real_of_int_real_of_nat ~> real_of_int_of_nat_eq
@@ -365,26 +385,30 @@
 
 *** Document preparation ***
 
-* Discontinued special treatment of hard tabulators, which are better
-avoided in the first place.  Implicit tab-width is 1.
-
-* Antiquotation @{rail} layouts railroad syntax diagrams, see also
-isar-ref manual.
-
-* Antiquotation @{value} evaluates the given term and presents its result.
-
 * Localized \isabellestyle switch can be used within blocks or groups
 like this:
 
   \isabellestyle{it}  %preferred default
   {\isabellestylett @{text "typewriter stuff"}}
 
-* New term style "isub" as ad-hoc conversion of variables x1, y23 into
-subscripted form x\<^isub>1, y\<^isub>2\<^isub>3.
+* Antiquotation @{rail} lays out railroad syntax diagrams; see the
+isar-ref manual both for a description and for actual applications.
+
+* Antiquotation @{value} evaluates the given term and presents its
+result.
+
+* Antiquotations: term style "isub" provides ad-hoc conversion of
+variables x1, y23 into subscripted form x\<^isub>1,
+y\<^isub>2\<^isub>3.
 
 * Predefined LaTeX macros for Isabelle symbols \<bind> and \<then>
 (e.g. see ~~/src/HOL/Library/Monad_Syntax.thy).
 
+* Discontinued special treatment of hard tabulators, which are better
+avoided in the first place (no universally agreed standard expansion).
+Implicit tab-width is now 1.
+
 
 *** ML ***
 
@@ -443,12 +467,22 @@
 INCOMPATIBILITY, classical tactics and derived proof methods require
 proper Proof.context.
 
+
+*** System ***
+
 * Scala layer provides JVM method invocation service for static
-methods of type (String)String, see Invoke_Scala.method in ML.
-For example:
+methods of type (String)String, see Invoke_Scala.method in ML.  For
+example:
 
   Invoke_Scala.method "java.lang.System.getProperty" "java.home"
 
+Together with YXML.string_of_body/parse_body and XML.Encode/Decode
+this allows structured values to be passed between ML and Scala.
+
+* The IsabelleText font includes some further glyphs to support the
+Prover IDE.  Potential INCOMPATIBILITY: users who happen to have
+installed a local copy (which is normally *not* required) need to
+delete or update it from ~~/lib/fonts/.
 
 
 New in Isabelle2011 (January 2011)
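Among the NEWS entries above, Library/Saturated provides numbers with saturated arithmetic; the entry itself gives no details. As an informal sketch of the idea only (results of + and * clamped to a fixed bound instead of overflowing), with no claim about how the actual Isabelle theory is set up:

```python
class Sat:
    """Natural numbers saturating at a fixed bound 'cap':
    addition and multiplication clamp their result to cap."""
    def __init__(self, value: int, cap: int):
        self.cap = cap
        self.value = min(max(value, 0), cap)

    def __add__(self, other: "Sat") -> "Sat":
        return Sat(self.value + other.value, self.cap)

    def __mul__(self, other: "Sat") -> "Sat":
        return Sat(self.value * other.value, self.cap)

    def __repr__(self) -> str:
        return f"Sat({self.value}, cap={self.cap})"

# 3 + 2 with cap 4 saturates at 4 instead of yielding 5
print(Sat(3, 4) + Sat(2, 4))
```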
--- a/README	Wed Sep 07 17:41:29 2011 -0700
+++ b/README	Wed Sep 07 19:24:28 2011 -0700
@@ -16,8 +16,8 @@
      * The Poly/ML compiler and runtime system (version 5.2.1 or later).
      * The GNU bash shell (version 3.x or 2.x).
      * Perl (version 5.x).
+     * Java 1.6.x from Oracle or Apple -- for Scala and jEdit.
      * GNU Emacs (version 23) -- for the Proof General 4.x interface.
-     * Java 1.6.x from Oracle/Sun or Apple -- for Scala and jEdit.
      * A complete LaTeX installation -- for document preparation.
 
 Installation
@@ -31,17 +31,18 @@
 
 User interface
 
+   Isabelle/jEdit is an emerging Prover IDE based on advanced
+   technology of Isabelle/Scala.  It provides a metaphor of continuous
+   proof checking of a versioned collection of theory sources, with
+   instantaneous feedback in real-time and rich semantic markup
+   associated with the formal text.
+
    The classic Isabelle user interface is Proof General by David
    Aspinall and others.  It is a generic Emacs interface for proof
    assistants, including Isabelle.  Its most prominent feature is
    script management, providing a metaphor of stepwise proof script
    editing.
 
-   Isabelle/jEdit is an experimental Prover IDE based on advanced
-   technology of Isabelle/Scala.  It provides a metaphor of continuous
-   proof checking of a versioned collection of theory sources, with
-   instantaneous feedback in real-time.
-
 Other sources of information
 
   The Isabelle Page
--- a/doc-src/Sledgehammer/sledgehammer.tex	Wed Sep 07 17:41:29 2011 -0700
+++ b/doc-src/Sledgehammer/sledgehammer.tex	Wed Sep 07 19:24:28 2011 -0700
@@ -942,19 +942,29 @@
 \textit{raw\_mono\_guards}, \textit{raw\_mono\_tags}, \textit{mono\_guards},
 \textit{mono\_tags}, and \textit{mono\_simple} are fully
 typed and sound. For each of these, Sledgehammer also provides a lighter,
-virtually sound variant identified by a question mark (`{?}')\ that detects and
-erases monotonic types, notably infinite types. (For \textit{mono\_simple}, the
-types are not actually erased but rather replaced by a shared uniform type of
-individuals.) As argument to the \textit{metis} proof method, the question mark
-is replaced by a \hbox{``\textit{\_query}''} suffix. If the \emph{sound} option
-is enabled, these encodings are fully sound.
+virtually sound variant identified by a question mark (`\hbox{?}')\ that detects
+and erases monotonic types, notably infinite types. (For \textit{mono\_simple},
+the types are not actually erased but rather replaced by a shared uniform type
+of individuals.) As argument to the \textit{metis} proof method, the question
+mark is replaced by a \hbox{``\textit{\_query}''} suffix. If the \emph{sound}
+option is enabled, these encodings are fully sound.
 
 \item[$\bullet$]
 \textbf{%
 \textit{poly\_guards}??, \textit{poly\_tags}??, \textit{raw\_mono\_guards}??, \\
 \textit{raw\_mono\_tags}??, \textit{mono\_guards}??, \textit{mono\_tags}?? \\
 (quasi-sound):} \\
-Even lighter versions of the `{?}' encodings.
+Even lighter versions of the `\hbox{?}' encodings. As argument to the
+\textit{metis} proof method, the `\hbox{??}' suffix is replaced by
+\hbox{``\textit{\_query\_query}''}.
+
+\item[$\bullet$]
+\textbf{%
+\textit{poly\_guards}@?, \textit{poly\_tags}@?, \textit{raw\_mono\_guards}@?, \\
+\textit{raw\_mono\_tags}@? (quasi-sound):} \\
+Alternative versions of the `\hbox{??}' encodings. As argument to the
+\textit{metis} proof method, the `\hbox{@?}' suffix is replaced by
+\hbox{``\textit{\_at\_query}''}.
 
 \item[$\bullet$]
 \textbf{%
@@ -965,9 +975,9 @@
 \textit{raw\_mono\_guards}, \textit{raw\_mono\_tags}, \textit{mono\_guards},
 \textit{mono\_tags}, \textit{mono\_simple}, and \textit{mono\_simple\_higher}
 also admit a mildly unsound (but very efficient) variant identified by an
-exclamation mark (`{!}') that detects and erases erases all types except those
-that are clearly finite (e.g., \textit{bool}). (For \textit{mono\_simple} and
-\textit{mono\_simple\_higher}, the types are not actually erased but rather
+exclamation mark (`\hbox{!}') that detects and erases all types except
+those that are clearly finite (e.g., \textit{bool}). (For \textit{mono\_simple}
+and \textit{mono\_simple\_higher}, the types are not actually erased but rather
 replaced by a shared uniform type of individuals.) As argument to the
 \textit{metis} proof method, the exclamation mark is replaced by the suffix
 \hbox{``\textit{\_bang}''}.
@@ -977,7 +987,17 @@
 \textit{poly\_guards}!!, \textit{poly\_tags}!!, \textit{raw\_mono\_guards}!!, \\
 \textit{raw\_mono\_tags}!!, \textit{mono\_guards}!!, \textit{mono\_tags}!! \\
 (mildly unsound):} \\
-Even lighter versions of the `{!}' encodings.
+Even lighter versions of the `\hbox{!}' encodings. As argument to the
+\textit{metis} proof method, the `\hbox{!!}' suffix is replaced by
+\hbox{``\textit{\_bang\_bang}''}.
+
+\item[$\bullet$]
+\textbf{%
+\textit{poly\_guards}@!, \textit{poly\_tags}@!, \textit{raw\_mono\_guards}@!, \\
+\textit{raw\_mono\_tags}@! (mildly unsound):} \\
+Alternative versions of the `\hbox{!!}' encodings. As argument to the
+\textit{metis} proof method, the `\hbox{@!}' suffix is replaced by
+\hbox{``\textit{\_at\_bang}''}.
 
 \item[$\bullet$] \textbf{\textit{smart}:} The actual encoding used depends on
 the ATP and should be the most efficient virtually sound encoding for that ATP.
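The text above spells out how each encoding marker is rewritten when the encoding is passed to the metis proof method (? ~> _query, ?? ~> _query_query, @? ~> _at_query, ! ~> _bang, !! ~> _bang_bang, @! ~> _at_bang). A hypothetical helper, not part of Sledgehammer itself, that summarizes just those stated mappings:

```python
# Marker-to-suffix table exactly as stated in the documentation above.
SUFFIXES = {
    "?": "_query",
    "??": "_query_query",
    "@?": "_at_query",
    "!": "_bang",
    "!!": "_bang_bang",
    "@!": "_at_bang",
}

def metis_encoding_name(encoding: str) -> str:
    """Translate e.g. 'poly_guards??' to 'poly_guards_query_query'.
    Longer markers are tried first so '??' wins over '?'."""
    for marker in sorted(SUFFIXES, key=len, reverse=True):
        if encoding.endswith(marker):
            return encoding[: -len(marker)] + SUFFIXES[marker]
    return encoding

print(metis_encoding_name("poly_guards??"))  # poly_guards_query_query
```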
--- a/doc-src/System/Thy/Misc.thy	Wed Sep 07 17:41:29 2011 -0700
+++ b/doc-src/System/Thy/Misc.thy	Wed Sep 07 19:24:28 2011 -0700
@@ -336,8 +336,8 @@
   sub-chunks separated by @{text "\<^bold>Y"}.  Markup chunks start
   with an empty sub-chunk, and a second empty sub-chunk indicates
   close of an element.  Any other non-empty chunk consists of plain
-  text.  For example, see @{file "~~/src/Pure/General/yxml.ML"} or
-  @{file "~~/src/Pure/General/yxml.scala"}.
+  text.  For example, see @{file "~~/src/Pure/PIDE/yxml.ML"} or
+  @{file "~~/src/Pure/PIDE/yxml.scala"}.
 
   YXML documents may be detected quickly by checking that the first
   two characters are @{text "\<^bold>X\<^bold>Y"}.
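The YXML conventions described above (markup chunks separated by bold X with sub-chunks separated by bold Y, a leading empty sub-chunk opening an element, a second empty sub-chunk closing it, and quick detection via the first two characters) can be sketched minimally. This assumes the conventional control characters chr(5) for X and chr(6) for Y; consult yxml.ML or yxml.scala for the actual implementation:

```python
# Assumed control characters: X = chunk separator, Y = sub-chunk separator.
X, Y = chr(5), chr(6)

def is_yxml(document: str) -> bool:
    """Quick detection: a YXML document starts with the two characters X Y."""
    return document.startswith(X + Y)

def markup_element(name: str, body: str) -> str:
    """Encode an attribute-free element: a markup chunk starting with an
    empty sub-chunk opens it, and an X Y X chunk (second empty sub-chunk)
    closes it; the plain-text body sits in between."""
    return X + Y + name + X + body + X + Y + X

doc = markup_element("a", "text")
print(is_yxml(doc))  # True
```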
--- a/doc-src/System/Thy/document/Misc.tex	Wed Sep 07 17:41:29 2011 -0700
+++ b/doc-src/System/Thy/document/Misc.tex	Wed Sep 07 19:24:28 2011 -0700
@@ -376,8 +376,8 @@
   sub-chunks separated by \isa{{\isaliteral{22}{\isachardoublequote}}\isaliteral{5C3C5E626F6C643E}{}\isactrlbold Y{\isaliteral{22}{\isachardoublequote}}}.  Markup chunks start
   with an empty sub-chunk, and a second empty sub-chunk indicates
   close of an element.  Any other non-empty chunk consists of plain
-  text.  For example, see \verb|~~/src/Pure/General/yxml.ML| or
-  \verb|~~/src/Pure/General/yxml.scala|.
+  text.  For example, see \verb|~~/src/Pure/PIDE/yxml.ML| or
+  \verb|~~/src/Pure/PIDE/yxml.scala|.
 
   YXML documents may be detected quickly by checking that the first
   two characters are \isa{{\isaliteral{22}{\isachardoublequote}}\isaliteral{5C3C5E626F6C643E}{}\isactrlbold X\isaliteral{5C3C5E626F6C643E}{}\isactrlbold Y{\isaliteral{22}{\isachardoublequote}}}.%
--- a/doc/Contents	Wed Sep 07 17:41:29 2011 -0700
+++ b/doc/Contents	Wed Sep 07 19:24:28 2011 -0700
@@ -1,4 +1,4 @@
-Learning and using Isabelle
+Miscellaneous tutorials
   tutorial        Tutorial on Isabelle/HOL
   main            What's in Main
   isar-overview   Tutorial on Isar
--- a/src/HOL/Decision_Procs/Commutative_Ring_Complete.thy	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/HOL/Decision_Procs/Commutative_Ring_Complete.thy	Wed Sep 07 19:24:28 2011 -0700
@@ -12,8 +12,7 @@
 begin
 
 text {* Formalization of normal form *}
-fun
-  isnorm :: "('a::{comm_ring}) pol \<Rightarrow> bool"
+fun isnorm :: "'a::comm_ring pol \<Rightarrow> bool"
 where
     "isnorm (Pc c) \<longleftrightarrow> True"
   | "isnorm (Pinj i (Pc c)) \<longleftrightarrow> False"
@@ -26,35 +25,40 @@
   | "isnorm (PX P i Q) \<longleftrightarrow> isnorm P \<and> isnorm Q"
 
 (* Some helpful lemmas *)
-lemma norm_Pinj_0_False:"isnorm (Pinj 0 P) = False"
-by(cases P, auto)
+lemma norm_Pinj_0_False: "isnorm (Pinj 0 P) = False"
+  by (cases P) auto
 
-lemma norm_PX_0_False:"isnorm (PX (Pc 0) i Q) = False"
-by(cases i, auto)
+lemma norm_PX_0_False: "isnorm (PX (Pc 0) i Q) = False"
+  by (cases i) auto
 
-lemma norm_Pinj:"isnorm (Pinj i Q) \<Longrightarrow> isnorm Q"
-by(cases i,simp add: norm_Pinj_0_False norm_PX_0_False,cases Q) auto
+lemma norm_Pinj: "isnorm (Pinj i Q) \<Longrightarrow> isnorm Q"
+  by (cases i) (simp add: norm_Pinj_0_False norm_PX_0_False, cases Q, auto)
 
-lemma norm_PX2:"isnorm (PX P i Q) \<Longrightarrow> isnorm Q"
-by(cases i, auto, cases P, auto, case_tac pol2, auto)
+lemma norm_PX2: "isnorm (PX P i Q) \<Longrightarrow> isnorm Q"
+  by (cases i) (auto, cases P, auto, case_tac pol2, auto)
+
+lemma norm_PX1: "isnorm (PX P i Q) \<Longrightarrow> isnorm P"
+  by (cases i) (auto, cases P, auto, case_tac pol2, auto)
 
-lemma norm_PX1:"isnorm (PX P i Q) \<Longrightarrow> isnorm P"
-by(cases i, auto, cases P, auto, case_tac pol2, auto)
-
-lemma mkPinj_cn:"\<lbrakk>y~=0; isnorm Q\<rbrakk> \<Longrightarrow> isnorm (mkPinj y Q)" 
-apply(auto simp add: mkPinj_def norm_Pinj_0_False split: pol.split)
-apply(case_tac nat, auto simp add: norm_Pinj_0_False)
-by(case_tac pol, auto) (case_tac y, auto)
+lemma mkPinj_cn: "y ~= 0 \<Longrightarrow> isnorm Q \<Longrightarrow> isnorm (mkPinj y Q)"
+  apply (auto simp add: mkPinj_def norm_Pinj_0_False split: pol.split)
+  apply (case_tac nat, auto simp add: norm_Pinj_0_False)
+  apply (case_tac pol, auto)
+  apply (case_tac y, auto)
+  done
 
 lemma norm_PXtrans: 
-  assumes A:"isnorm (PX P x Q)" and "isnorm Q2" 
+  assumes A: "isnorm (PX P x Q)" and "isnorm Q2" 
   shows "isnorm (PX P x Q2)"
-proof(cases P)
-  case (PX p1 y p2) with assms show ?thesis by(cases x, auto, cases p2, auto)
+proof (cases P)
+  case (PX p1 y p2)
+  with assms show ?thesis by (cases x) (auto, cases p2, auto)
 next
-  case Pc with assms show ?thesis by (cases x) auto
+  case Pc
+  with assms show ?thesis by (cases x) auto
 next
-  case Pinj with assms show ?thesis by (cases x) auto
+  case Pinj
+  with assms show ?thesis by (cases x) auto
 qed
  
 lemma norm_PXtrans2:
@@ -62,7 +66,7 @@
   shows "isnorm (PX P (Suc (n+x)) Q2)"
 proof (cases P)
   case (PX p1 y p2)
-  with assms show ?thesis by (cases x, auto, cases p2, auto)
+  with assms show ?thesis by (cases x) (auto, cases p2, auto)
 next
   case Pc
   with assms show ?thesis by (cases x) auto
@@ -83,27 +87,33 @@
   with assms show ?thesis by (cases x) (auto simp add: mkPinj_cn mkPX_def)
 next
   case (PX P1 y P2)
-  with assms have Y0: "y>0" by (cases y) auto
+  with assms have Y0: "y > 0" by (cases y) auto
   from assms PX have "isnorm P1" "isnorm P2"
     by (auto simp add: norm_PX1[of P1 y P2] norm_PX2[of P1 y P2])
   from assms PX Y0 show ?thesis
-    by (cases x, auto simp add: mkPX_def norm_PXtrans2[of P1 y _ Q _], cases P2, auto)
+    by (cases x) (auto simp add: mkPX_def norm_PXtrans2[of P1 y _ Q _], cases P2, auto)
 qed
 
 text {* add conserves normalizedness *}
-lemma add_cn:"isnorm P \<Longrightarrow> isnorm Q \<Longrightarrow> isnorm (P \<oplus> Q)"
-proof(induct P Q rule: add.induct)
-  case (2 c i P2) thus ?case by (cases P2, simp_all, cases i, simp_all)
+lemma add_cn: "isnorm P \<Longrightarrow> isnorm Q \<Longrightarrow> isnorm (P \<oplus> Q)"
+proof (induct P Q rule: add.induct)
+  case (2 c i P2)
+  thus ?case by (cases P2) (simp_all, cases i, simp_all)
 next
-  case (3 i P2 c) thus ?case by (cases P2, simp_all, cases i, simp_all)
+  case (3 i P2 c)
+  thus ?case by (cases P2) (simp_all, cases i, simp_all)
 next
   case (4 c P2 i Q2)
-  then have "isnorm P2" "isnorm Q2" by (auto simp only: norm_PX1[of P2 i Q2] norm_PX2[of P2 i Q2])
-  with 4 show ?case by(cases i, simp, cases P2, auto, case_tac pol2, auto)
+  then have "isnorm P2" "isnorm Q2"
+    by (auto simp only: norm_PX1[of P2 i Q2] norm_PX2[of P2 i Q2])
+  with 4 show ?case
+    by (cases i) (simp, cases P2, auto, case_tac pol2, auto)
 next
   case (5 P2 i Q2 c)
-  then have "isnorm P2" "isnorm Q2" by (auto simp only: norm_PX1[of P2 i Q2] norm_PX2[of P2 i Q2])
-  with 5 show ?case by(cases i, simp, cases P2, auto, case_tac pol2, auto)
+  then have "isnorm P2" "isnorm Q2"
+    by (auto simp only: norm_PX1[of P2 i Q2] norm_PX2[of P2 i Q2])
+  with 5 show ?case
+    by (cases i) (simp, cases P2, auto, case_tac pol2, auto)
 next
   case (6 x P2 y Q2)
   then have Y0: "y>0" by (cases y) (auto simp add: norm_Pinj_0_False)
@@ -115,14 +125,17 @@
     moreover
     note 6 X0
     moreover
-    from 6 have "isnorm P2" "isnorm Q2" by (auto simp add: norm_Pinj[of _ P2] norm_Pinj[of _ Q2])
+    from 6 have "isnorm P2" "isnorm Q2"
+      by (auto simp add: norm_Pinj[of _ P2] norm_Pinj[of _ Q2])
     moreover
-    from 6 `x < y` y have "isnorm (Pinj d Q2)" by (cases d, simp, cases Q2, auto)
+    from 6 `x < y` y have "isnorm (Pinj d Q2)"
+      by (cases d, simp, cases Q2, auto)
     ultimately have ?case by (simp add: mkPinj_cn) }
   moreover
   { assume "x=y"
     moreover
-    from 6 have "isnorm P2" "isnorm Q2" by(auto simp add: norm_Pinj[of _ P2] norm_Pinj[of _ Q2])
+    from 6 have "isnorm P2" "isnorm Q2"
+      by (auto simp add: norm_Pinj[of _ P2] norm_Pinj[of _ Q2])
     moreover
     note 6 Y0
     moreover
@@ -133,30 +146,35 @@
     moreover
     note 6 Y0
     moreover
-    from 6 have "isnorm P2" "isnorm Q2" by (auto simp add: norm_Pinj[of _ P2] norm_Pinj[of _ Q2])
+    from 6 have "isnorm P2" "isnorm Q2"
+      by (auto simp add: norm_Pinj[of _ P2] norm_Pinj[of _ Q2])
     moreover
-    from 6 `x > y` x have "isnorm (Pinj d P2)" by (cases d, simp, cases P2, auto)
-    ultimately have ?case by (simp add: mkPinj_cn)}
+    from 6 `x > y` x have "isnorm (Pinj d P2)"
+      by (cases d) (simp, cases P2, auto)
+    ultimately have ?case by (simp add: mkPinj_cn) }
   ultimately show ?case by blast
 next
   case (7 x P2 Q2 y R)
-  have "x=0 \<or> (x = 1) \<or> (x > 1)" by arith
+  have "x = 0 \<or> x = 1 \<or> x > 1" by arith
   moreover
   { assume "x = 0"
     with 7 have ?case by (auto simp add: norm_Pinj_0_False) }
   moreover
   { assume "x = 1"
-    from 7 have "isnorm R" "isnorm P2" by (auto simp add: norm_Pinj[of _ P2] norm_PX2[of Q2 y R])
+    from 7 have "isnorm R" "isnorm P2"
+      by (auto simp add: norm_Pinj[of _ P2] norm_PX2[of Q2 y R])
     with 7 `x = 1` have "isnorm (R \<oplus> P2)" by simp
-    with 7 `x = 1` have ?case by (simp add: norm_PXtrans[of Q2 y _]) }
+    with 7 `x = 1` have ?case
+      by (simp add: norm_PXtrans[of Q2 y _]) }
   moreover
   { assume "x > 1" hence "EX d. x=Suc (Suc d)" by arith
-    then obtain d where X:"x=Suc (Suc d)" ..
+    then obtain d where X: "x=Suc (Suc d)" ..
     with 7 have NR: "isnorm R" "isnorm P2"
       by (auto simp add: norm_Pinj[of _ P2] norm_PX2[of Q2 y R])
     with 7 X have "isnorm (Pinj (x - 1) P2)" by (cases P2) auto
     with 7 X NR have "isnorm (R \<oplus> Pinj (x - 1) P2)" by simp
-    with `isnorm (PX Q2 y R)` X have ?case by (simp add: norm_PXtrans[of Q2 y _]) }
+    with `isnorm (PX Q2 y R)` X have ?case
+      by (simp add: norm_PXtrans[of Q2 y _]) }
   ultimately show ?case by blast
 next
   case (8 Q2 y R x P2)
@@ -183,7 +201,7 @@
   with 9 have X0: "x>0" by (cases x) auto
   with 9 have NP1: "isnorm P1" and NP2: "isnorm P2"
     by (auto simp add: norm_PX1[of P1 _ P2] norm_PX2[of P1 _ P2])
-  with 9 have NQ1:"isnorm Q1" and NQ2: "isnorm Q2"
+  with 9 have NQ1: "isnorm Q1" and NQ2: "isnorm Q2"
     by (auto simp add: norm_PX1[of Q1 _ Q2] norm_PX2[of Q1 _ Q2])
   have "y < x \<or> x = y \<or> x < y" by arith
   moreover
@@ -194,7 +212,7 @@
     have "isnorm (PX P1 d (Pc 0))" 
     proof (cases P1)
       case (PX p1 y p2)
-      with 9 sm1 sm2 show ?thesis by - (cases d, simp, cases p2, auto)
+      with 9 sm1 sm2 show ?thesis by (cases d) (simp, cases p2, auto)
     next
       case Pc with 9 sm1 sm2 show ?thesis by (cases d) auto
     next
@@ -214,35 +232,37 @@
     have "isnorm (PX Q1 d (Pc 0))" 
     proof (cases Q1)
       case (PX p1 y p2)
-      with 9 sm1 sm2 show ?thesis by - (cases d, simp, cases p2, auto)
+      with 9 sm1 sm2 show ?thesis by (cases d) (simp, cases p2, auto)
     next
       case Pc with 9 sm1 sm2 show ?thesis by (cases d) auto
     next
       case Pinj with 9 sm1 sm2 show ?thesis by (cases d) auto
     qed
     ultimately have "isnorm (P2 \<oplus> Q2)" "isnorm (PX Q1 (y - x) (Pc 0) \<oplus> P1)" by auto
-    with X0 sm1 sm2 have ?case by (simp add: mkPX_cn)}
+    with X0 sm1 sm2 have ?case by (simp add: mkPX_cn) }
   ultimately show ?case by blast
 qed simp
 
 text {* mul conserves normalizedness *}
-lemma mul_cn :"isnorm P \<Longrightarrow> isnorm Q \<Longrightarrow> isnorm (P \<otimes> Q)"
-proof(induct P Q rule: mul.induct)
+lemma mul_cn: "isnorm P \<Longrightarrow> isnorm Q \<Longrightarrow> isnorm (P \<otimes> Q)"
+proof (induct P Q rule: mul.induct)
   case (2 c i P2) thus ?case 
-    by (cases P2, simp_all) (cases "i",simp_all add: mkPinj_cn)
+    by (cases P2) (simp_all, cases i, simp_all add: mkPinj_cn)
 next
   case (3 i P2 c) thus ?case 
-    by (cases P2, simp_all) (cases "i",simp_all add: mkPinj_cn)
+    by (cases P2) (simp_all, cases i, simp_all add: mkPinj_cn)
 next
   case (4 c P2 i Q2)
-  then have "isnorm P2" "isnorm Q2" by (auto simp only: norm_PX1[of P2 i Q2] norm_PX2[of P2 i Q2])
+  then have "isnorm P2" "isnorm Q2"
+    by (auto simp only: norm_PX1[of P2 i Q2] norm_PX2[of P2 i Q2])
   with 4 show ?case 
-    by - (cases "c = 0", simp_all, cases "i = 0", simp_all add: mkPX_cn)
+    by (cases "c = 0") (simp_all, cases "i = 0", simp_all add: mkPX_cn)
 next
   case (5 P2 i Q2 c)
-  then have "isnorm P2" "isnorm Q2" by (auto simp only: norm_PX1[of P2 i Q2] norm_PX2[of P2 i Q2])
+  then have "isnorm P2" "isnorm Q2"
+    by (auto simp only: norm_PX1[of P2 i Q2] norm_PX2[of P2 i Q2])
   with 5 show ?case
-    by - (cases "c = 0", simp_all, cases "i = 0", simp_all add: mkPX_cn)
+    by (cases "c = 0") (simp_all, cases "i = 0", simp_all add: mkPX_cn)
 next
   case (6 x P2 y Q2)
   have "x < y \<or> x = y \<or> x > y" by arith
@@ -256,7 +276,7 @@
     moreover
     from 6 have "isnorm P2" "isnorm Q2" by (auto simp add: norm_Pinj[of _ P2] norm_Pinj[of _ Q2])
     moreover
-    from 6 `x < y` y have "isnorm (Pinj d Q2)" by - (cases d, simp, cases Q2, auto) 
+    from 6 `x < y` y have "isnorm (Pinj d Q2)" by (cases d) (simp, cases Q2, auto) 
     ultimately have ?case by (simp add: mkPinj_cn) }
   moreover
   { assume "x = y"
@@ -278,7 +298,7 @@
     moreover
     from 6 have "isnorm P2" "isnorm Q2" by (auto simp add: norm_Pinj[of _ P2] norm_Pinj[of _ Q2])
     moreover
-    from 6 `x > y` x have "isnorm (Pinj d P2)" by - (cases d, simp, cases P2, auto)
+    from 6 `x > y` x have "isnorm (Pinj d P2)" by (cases d) (simp, cases P2, auto)
     ultimately have ?case by (simp add: mkPinj_cn) }
   ultimately show ?case by blast
 next
@@ -356,7 +376,7 @@
 proof (induct P)
   case (Pinj i P2)
   then have "isnorm P2" by (simp add: norm_Pinj[of i P2])
-  with Pinj show ?case by - (cases P2, auto, cases i, auto)
+  with Pinj show ?case by (cases P2) (auto, cases i, auto)
 next
   case (PX P1 x P2) note PX1 = this
   from PX have "isnorm P2" "isnorm P1"
@@ -364,7 +384,7 @@
   with PX show ?case
   proof (cases P1)
     case (PX p1 y p2)
-    with PX1 show ?thesis by - (cases x, auto, cases p2, auto)
+    with PX1 show ?thesis by (cases x) (auto, cases p2, auto)
   next
     case Pinj
     with PX1 show ?thesis by (cases x) auto
@@ -372,15 +392,18 @@
 qed simp
 
 text {* sub conserves normalizedness *}
-lemma sub_cn:"isnorm p \<Longrightarrow> isnorm q \<Longrightarrow> isnorm (p \<ominus> q)"
-by (simp add: sub_def add_cn neg_cn)
+lemma sub_cn: "isnorm p \<Longrightarrow> isnorm q \<Longrightarrow> isnorm (p \<ominus> q)"
+  by (simp add: sub_def add_cn neg_cn)
 
 text {* sqr conserves normalizedness *}
-lemma sqr_cn:"isnorm P \<Longrightarrow> isnorm (sqr P)"
+lemma sqr_cn: "isnorm P \<Longrightarrow> isnorm (sqr P)"
 proof (induct P)
+  case Pc
+  then show ?case by simp
+next
   case (Pinj i Q)
   then show ?case
-    by - (cases Q, auto simp add: mkPX_cn mkPinj_cn, cases i, auto simp add: mkPX_cn mkPinj_cn)
+    by (cases Q) (auto simp add: mkPX_cn mkPinj_cn, cases i, auto simp add: mkPX_cn mkPinj_cn)
 next 
   case (PX P1 x P2)
   then have "x + x ~= 0" "isnorm P2" "isnorm P1"
@@ -389,20 +412,23 @@
       and "isnorm (mkPX (sqr P1) (x + x) (sqr P2))"
     by (auto simp add: add_cn mkPX_cn mkPinj_cn mul_cn)
   then show ?case by (auto simp add: add_cn mkPX_cn mkPinj_cn mul_cn)
-qed simp
+qed
 
 text {* pow conserves normalizedness *}
-lemma pow_cn:"isnorm P \<Longrightarrow> isnorm (pow n P)"
-proof (induct n arbitrary: P rule: nat_less_induct)
-  case (1 k)
+lemma pow_cn: "isnorm P \<Longrightarrow> isnorm (pow n P)"
+proof (induct n arbitrary: P rule: less_induct)
+  case (less k)
   show ?case 
   proof (cases "k = 0")
+    case True
+    then show ?thesis by simp
+  next
     case False
     then have K2: "k div 2 < k" by (cases k) auto
-    from 1 have "isnorm (sqr P)" by (simp add: sqr_cn)
-    with 1 False K2 show ?thesis
-      by - (simp add: allE[of _ "(k div 2)" _] allE[of _ "(sqr P)" _], cases k, auto simp add: mul_cn)
-  qed simp
+    from less have "isnorm (sqr P)" by (simp add: sqr_cn)
+    with less False K2 show ?thesis
+      by (simp add: allE[of _ "(k div 2)" _] allE[of _ "(sqr P)" _], cases k, auto simp add: mul_cn)
+  qed
 qed
 
 end
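The `pow_cn` proof above inducts with `less_induct`, squaring the polynomial and recursing on `k div 2`, so normalization only needs to be preserved by `sqr` and `mul`. As a hedged illustration of that recursion shape (plain Python on integers, not generated from the theory):

```python
def power(k: int, p: int) -> int:
    """Square-and-multiply exponentiation: returns p ** k for k >= 0.

    Mirrors the recursion in `pow`: square the argument, recurse on
    k div 2, and multiply once more when k is odd.
    """
    if k == 0:
        return 1
    half = power(k // 2, p * p)  # recurse on k div 2 with the square
    return half if k % 2 == 0 else p * half
```

Because each recursive call halves the exponent, the induction hypothesis for `k div 2 < k` is exactly what the `K2` fact supplies in the proof.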
--- a/src/HOL/Decision_Procs/Ferrack.thy	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/HOL/Decision_Procs/Ferrack.thy	Wed Sep 07 19:24:28 2011 -0700
@@ -676,13 +676,13 @@
   {assume nz: "n = 0" hence ?thesis by (simp add: Let_def simp_num_pair_def)}
   moreover
   { assume nnz: "n \<noteq> 0"
-    {assume "\<not> ?g > 1" hence ?thesis by (simp add: Let_def simp_num_pair_def simpnum_ci) }
+    {assume "\<not> ?g > 1" hence ?thesis by (simp add: Let_def simp_num_pair_def) }
     moreover
     {assume g1:"?g>1" hence g0: "?g > 0" by simp
       from g1 nnz have gp0: "?g' \<noteq> 0" by simp
       hence g'p: "?g' > 0" using gcd_ge_0_int[where x="n" and y="numgcd ?t'"] by arith 
       hence "?g'= 1 \<or> ?g' > 1" by arith
-      moreover {assume "?g'=1" hence ?thesis by (simp add: Let_def simp_num_pair_def simpnum_ci)}
+      moreover {assume "?g'=1" hence ?thesis by (simp add: Let_def simp_num_pair_def)}
       moreover {assume g'1:"?g'>1"
         from dvdnumcoeff_aux2[OF g1] have th1:"dvdnumcoeff ?t' ?g" ..
         let ?tt = "reducecoeffh ?t' ?g'"
@@ -800,32 +800,34 @@
 proof(induct p rule: simpfm.induct)
   case (6 a) hence nb: "numbound0 a" by simp
   hence "numbound0 (simpnum a)" by (simp only: simpnum_numbound0[OF nb])
-  thus ?case by (cases "simpnum a", auto simp add: Let_def)
+  thus ?case by (cases "simpnum a") (auto simp add: Let_def)
 next
   case (7 a) hence nb: "numbound0 a" by simp
   hence "numbound0 (simpnum a)" by (simp only: simpnum_numbound0[OF nb])
-  thus ?case by (cases "simpnum a", auto simp add: Let_def)
+  thus ?case by (cases "simpnum a") (auto simp add: Let_def)
 next
   case (8 a) hence nb: "numbound0 a" by simp
   hence "numbound0 (simpnum a)" by (simp only: simpnum_numbound0[OF nb])
-  thus ?case by (cases "simpnum a", auto simp add: Let_def)
+  thus ?case by (cases "simpnum a") (auto simp add: Let_def)
 next
   case (9 a) hence nb: "numbound0 a" by simp
   hence "numbound0 (simpnum a)" by (simp only: simpnum_numbound0[OF nb])
-  thus ?case by (cases "simpnum a", auto simp add: Let_def)
+  thus ?case by (cases "simpnum a") (auto simp add: Let_def)
 next
   case (10 a) hence nb: "numbound0 a" by simp
   hence "numbound0 (simpnum a)" by (simp only: simpnum_numbound0[OF nb])
-  thus ?case by (cases "simpnum a", auto simp add: Let_def)
+  thus ?case by (cases "simpnum a") (auto simp add: Let_def)
 next
   case (11 a) hence nb: "numbound0 a" by simp
   hence "numbound0 (simpnum a)" by (simp only: simpnum_numbound0[OF nb])
-  thus ?case by (cases "simpnum a", auto simp add: Let_def)
+  thus ?case by (cases "simpnum a") (auto simp add: Let_def)
 qed(auto simp add: disj_def imp_def iff_def conj_def not_bn)
 
 lemma simpfm_qf: "qfree p \<Longrightarrow> qfree (simpfm p)"
-by (induct p rule: simpfm.induct, auto simp add: disj_qf imp_qf iff_qf conj_qf not_qf Let_def)
- (case_tac "simpnum a",auto)+
+  apply (induct p rule: simpfm.induct)
+  apply (auto simp add: Let_def)
+  apply (case_tac "simpnum a", auto)+
+  done
 
 consts prep :: "fm \<Rightarrow> fm"
 recdef prep "measure fmsize"
@@ -854,7 +856,7 @@
   "prep p = p"
 (hints simp add: fmsize_pos)
 lemma prep: "\<And> bs. Ifm bs (prep p) = Ifm bs p"
-by (induct p rule: prep.induct, auto)
+  by (induct p rule: prep.induct) auto
 
   (* Generic quantifier elimination *)
 function (sequential) qelim :: "fm \<Rightarrow> (fm \<Rightarrow> fm) \<Rightarrow> fm" where
@@ -1037,7 +1039,7 @@
   assumes qfp: "qfree p"
   shows "(Ifm bs (rlfm p) = Ifm bs p) \<and> isrlfm (rlfm p)"
   using qfp 
-by (induct p rule: rlfm.induct, auto simp add: lt le gt ge eq neq conj disj conj_lin disj_lin)
+by (induct p rule: rlfm.induct) (auto simp add: lt le gt ge eq neq conj disj conj_lin disj_lin)
 
     (* Operations needed for Ferrante and Rackoff *)
 lemma rminusinf_inf:
@@ -1045,9 +1047,11 @@
   shows "\<exists> z. \<forall> x < z. Ifm (x#bs) (minusinf p) = Ifm (x#bs) p" (is "\<exists> z. \<forall> x. ?P z x p")
 using lp
 proof (induct p rule: minusinf.induct)
-  case (1 p q) thus ?case by (auto,rule_tac x= "min z za" in exI) auto 
+  case (1 p q)
+  thus ?case apply auto apply (rule_tac x= "min z za" in exI) apply auto done
 next
-  case (2 p q) thus ?case by (auto,rule_tac x= "min z za" in exI) auto
+  case (2 p q)
+  thus ?case apply auto apply (rule_tac x= "min z za" in exI) apply auto done
 next
   case (3 c e) 
   from 3 have nb: "numbound0 e" by simp
--- a/src/HOL/HOLCF/Representable.thy	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/HOL/HOLCF/Representable.thy	Wed Sep 07 19:24:28 2011 -0700
@@ -5,7 +5,7 @@
 header {* Representable domains *}
 
 theory Representable
-imports Algebraic Map_Functions Countable
+imports Algebraic Map_Functions "~~/src/HOL/Library/Countable"
 begin
 
 default_sort cpo
--- a/src/HOL/IsaMakefile	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/HOL/IsaMakefile	Wed Sep 07 19:24:28 2011 -0700
@@ -463,10 +463,10 @@
   Library/Quotient_Option.thy Library/Quotient_Product.thy		\
   Library/Quotient_Sum.thy Library/Quotient_Syntax.thy			\
   Library/Quotient_Type.thy Library/RBT.thy Library/RBT_Impl.thy	\
-  Library/RBT_Mapping.thy Library/README.html Library/Set_Algebras.thy	\
-  Library/State_Monad.thy Library/Ramsey.thy Library/Reflection.thy	\
-  Library/Sublist_Order.thy Library/Sum_of_Squares.thy			\
-  Library/Sum_of_Squares/sos_wrapper.ML					\
+  Library/RBT_Mapping.thy Library/README.html Library/Saturated.thy	\
+  Library/Set_Algebras.thy Library/State_Monad.thy Library/Ramsey.thy	\
+  Library/Reflection.thy Library/Sublist_Order.thy			\
+  Library/Sum_of_Squares.thy Library/Sum_of_Squares/sos_wrapper.ML	\
   Library/Sum_of_Squares/sum_of_squares.ML				\
   Library/Transitive_Closure_Table.thy Library/Univ_Poly.thy		\
   Library/Wfrec.thy Library/While_Combinator.thy Library/Zorn.thy	\
--- a/src/HOL/Library/Abstract_Rat.thy	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/HOL/Library/Abstract_Rat.thy	Wed Sep 07 19:24:28 2011 -0700
@@ -10,64 +10,57 @@
 
 type_synonym Num = "int \<times> int"
 
-abbreviation
-  Num0_syn :: Num ("0\<^sub>N")
-where "0\<^sub>N \<equiv> (0, 0)"
+abbreviation Num0_syn :: Num  ("0\<^sub>N")
+  where "0\<^sub>N \<equiv> (0, 0)"
 
-abbreviation
-  Numi_syn :: "int \<Rightarrow> Num" ("_\<^sub>N")
-where "i\<^sub>N \<equiv> (i, 1)"
+abbreviation Numi_syn :: "int \<Rightarrow> Num"  ("_\<^sub>N")
+  where "i\<^sub>N \<equiv> (i, 1)"
 
-definition
-  isnormNum :: "Num \<Rightarrow> bool"
-where
+definition isnormNum :: "Num \<Rightarrow> bool" where
   "isnormNum = (\<lambda>(a,b). (if a = 0 then b = 0 else b > 0 \<and> gcd a b = 1))"
 
-definition
-  normNum :: "Num \<Rightarrow> Num"
-where
-  "normNum = (\<lambda>(a,b). (if a=0 \<or> b = 0 then (0,0) else 
-  (let g = gcd a b 
-   in if b > 0 then (a div g, b div g) else (- (a div g), - (b div g)))))"
+definition normNum :: "Num \<Rightarrow> Num" where
+  "normNum = (\<lambda>(a,b).
+    (if a=0 \<or> b = 0 then (0,0) else
+      (let g = gcd a b
+       in if b > 0 then (a div g, b div g) else (- (a div g), - (b div g)))))"
 
-declare gcd_dvd1_int[presburger]
-declare gcd_dvd2_int[presburger]
+declare gcd_dvd1_int[presburger] gcd_dvd2_int[presburger]
+
 lemma normNum_isnormNum [simp]: "isnormNum (normNum x)"
 proof -
-  have " \<exists> a b. x = (a,b)" by auto
-  then obtain a b where x[simp]: "x = (a,b)" by blast
-  {assume "a=0 \<or> b = 0" hence ?thesis by (simp add: normNum_def isnormNum_def)}  
+  obtain a b where x: "x = (a, b)" by (cases x)
+  { assume "a=0 \<or> b = 0" hence ?thesis by (simp add: x normNum_def isnormNum_def) }
   moreover
-  {assume anz: "a \<noteq> 0" and bnz: "b \<noteq> 0" 
+  { assume anz: "a \<noteq> 0" and bnz: "b \<noteq> 0"
     let ?g = "gcd a b"
     let ?a' = "a div ?g"
     let ?b' = "b div ?g"
     let ?g' = "gcd ?a' ?b'"
-    from anz bnz have "?g \<noteq> 0" by simp  with gcd_ge_0_int[of a b] 
+    from anz bnz have "?g \<noteq> 0" by simp  with gcd_ge_0_int[of a b]
     have gpos: "?g > 0" by arith
-    have gdvd: "?g dvd a" "?g dvd b" by arith+ 
-    from zdvd_mult_div_cancel[OF gdvd(1)] zdvd_mult_div_cancel[OF gdvd(2)]
-    anz bnz
-    have nz':"?a' \<noteq> 0" "?b' \<noteq> 0"
-      by - (rule notI, simp)+
-    from anz bnz have stupid: "a \<noteq> 0 \<or> b \<noteq> 0" by arith 
+    have gdvd: "?g dvd a" "?g dvd b" by arith+
+    from zdvd_mult_div_cancel[OF gdvd(1)] zdvd_mult_div_cancel[OF gdvd(2)] anz bnz
+    have nz': "?a' \<noteq> 0" "?b' \<noteq> 0" by - (rule notI, simp)+
+    from anz bnz have stupid: "a \<noteq> 0 \<or> b \<noteq> 0" by arith
     from div_gcd_coprime_int[OF stupid] have gp1: "?g' = 1" .
     from bnz have "b < 0 \<or> b > 0" by arith
     moreover
-    {assume b: "b > 0"
-      from b have "?b' \<ge> 0" 
-        by (presburger add: pos_imp_zdiv_nonneg_iff[OF gpos])  
-      with nz' have b': "?b' > 0" by arith 
-      from b b' anz bnz nz' gp1 have ?thesis 
-        by (simp add: isnormNum_def normNum_def Let_def split_def fst_conv snd_conv)}
-    moreover {assume b: "b < 0"
-      {assume b': "?b' \<ge> 0" 
+    { assume b: "b > 0"
+      from b have "?b' \<ge> 0"
+        by (presburger add: pos_imp_zdiv_nonneg_iff[OF gpos])
+      with nz' have b': "?b' > 0" by arith
+      from b b' anz bnz nz' gp1 have ?thesis
+        by (simp add: x isnormNum_def normNum_def Let_def split_def) }
+    moreover {
+      assume b: "b < 0"
+      { assume b': "?b' \<ge> 0"
         from gpos have th: "?g \<ge> 0" by arith
         from mult_nonneg_nonneg[OF th b'] zdvd_mult_div_cancel[OF gdvd(2)]
         have False using b by arith }
-      hence b': "?b' < 0" by (presburger add: linorder_not_le[symmetric]) 
-      from anz bnz nz' b b' gp1 have ?thesis 
-        by (simp add: isnormNum_def normNum_def Let_def split_def)}
+      hence b': "?b' < 0" by (presburger add: linorder_not_le[symmetric])
+      from anz bnz nz' b b' gp1 have ?thesis
+        by (simp add: x isnormNum_def normNum_def Let_def split_def) }
     ultimately have ?thesis by blast
   }
   ultimately show ?thesis by blast
@@ -75,63 +68,55 @@
 
 text {* Arithmetic over Num *}
 
-definition
-  Nadd :: "Num \<Rightarrow> Num \<Rightarrow> Num" (infixl "+\<^sub>N" 60)
-where
-  "Nadd = (\<lambda>(a,b) (a',b'). if a = 0 \<or> b = 0 then normNum(a',b') 
-    else if a'=0 \<or> b' = 0 then normNum(a,b) 
+definition Nadd :: "Num \<Rightarrow> Num \<Rightarrow> Num"  (infixl "+\<^sub>N" 60) where
+  "Nadd = (\<lambda>(a,b) (a',b'). if a = 0 \<or> b = 0 then normNum(a',b')
+    else if a'=0 \<or> b' = 0 then normNum(a,b)
     else normNum(a*b' + b*a', b*b'))"
 
-definition
-  Nmul :: "Num \<Rightarrow> Num \<Rightarrow> Num" (infixl "*\<^sub>N" 60)
-where
-  "Nmul = (\<lambda>(a,b) (a',b'). let g = gcd (a*a') (b*b') 
+definition Nmul :: "Num \<Rightarrow> Num \<Rightarrow> Num"  (infixl "*\<^sub>N" 60) where
+  "Nmul = (\<lambda>(a,b) (a',b'). let g = gcd (a*a') (b*b')
     in (a*a' div g, b*b' div g))"
 
-definition
-  Nneg :: "Num \<Rightarrow> Num" ("~\<^sub>N")
-where
-  "Nneg \<equiv> (\<lambda>(a,b). (-a,b))"
+definition Nneg :: "Num \<Rightarrow> Num" ("~\<^sub>N")
+  where "Nneg \<equiv> (\<lambda>(a,b). (-a,b))"
 
-definition
-  Nsub :: "Num \<Rightarrow> Num \<Rightarrow> Num" (infixl "-\<^sub>N" 60)
-where
-  "Nsub = (\<lambda>a b. a +\<^sub>N ~\<^sub>N b)"
+definition Nsub :: "Num \<Rightarrow> Num \<Rightarrow> Num"  (infixl "-\<^sub>N" 60)
+  where "Nsub = (\<lambda>a b. a +\<^sub>N ~\<^sub>N b)"
 
-definition
-  Ninv :: "Num \<Rightarrow> Num" 
-where
-  "Ninv \<equiv> \<lambda>(a,b). if a < 0 then (-b, \<bar>a\<bar>) else (b,a)"
+definition Ninv :: "Num \<Rightarrow> Num"
+  where "Ninv = (\<lambda>(a,b). if a < 0 then (-b, \<bar>a\<bar>) else (b,a))"
 
-definition
-  Ndiv :: "Num \<Rightarrow> Num \<Rightarrow> Num" (infixl "\<div>\<^sub>N" 60)
-where
-  "Ndiv \<equiv> \<lambda>a b. a *\<^sub>N Ninv b"
+definition Ndiv :: "Num \<Rightarrow> Num \<Rightarrow> Num"  (infixl "\<div>\<^sub>N" 60)
+  where "Ndiv = (\<lambda>a b. a *\<^sub>N Ninv b)"
 
 lemma Nneg_normN[simp]: "isnormNum x \<Longrightarrow> isnormNum (~\<^sub>N x)"
-  by(simp add: isnormNum_def Nneg_def split_def)
+  by (simp add: isnormNum_def Nneg_def split_def)
+
 lemma Nadd_normN[simp]: "isnormNum (x +\<^sub>N y)"
   by (simp add: Nadd_def split_def)
+
 lemma Nsub_normN[simp]: "\<lbrakk> isnormNum y\<rbrakk> \<Longrightarrow> isnormNum (x -\<^sub>N y)"
   by (simp add: Nsub_def split_def)
-lemma Nmul_normN[simp]: assumes xn:"isnormNum x" and yn: "isnormNum y"
+
+lemma Nmul_normN[simp]:
+  assumes xn: "isnormNum x" and yn: "isnormNum y"
   shows "isnormNum (x *\<^sub>N y)"
-proof-
-  have "\<exists>a b. x = (a,b)" and "\<exists> a' b'. y = (a',b')" by auto
-  then obtain a b a' b' where ab: "x = (a,b)"  and ab': "y = (a',b')" by blast 
-  {assume "a = 0"
-    hence ?thesis using xn ab ab'
-      by (simp add: isnormNum_def Let_def Nmul_def split_def)}
+proof -
+  obtain a b where x: "x = (a, b)" by (cases x)
+  obtain a' b' where y: "y = (a', b')" by (cases y)
+  { assume "a = 0"
+    hence ?thesis using xn x y
+      by (simp add: isnormNum_def Let_def Nmul_def split_def) }
   moreover
-  {assume "a' = 0"
-    hence ?thesis using yn ab ab' 
-      by (simp add: isnormNum_def Let_def Nmul_def split_def)}
+  { assume "a' = 0"
+    hence ?thesis using yn x y
+      by (simp add: isnormNum_def Let_def Nmul_def split_def) }
   moreover
-  {assume a: "a \<noteq>0" and a': "a'\<noteq>0"
-    hence bp: "b > 0" "b' > 0" using xn yn ab ab' by (simp_all add: isnormNum_def)
-    from mult_pos_pos[OF bp] have "x *\<^sub>N y = normNum (a*a', b*b')" 
-      using ab ab' a a' bp by (simp add: Nmul_def Let_def split_def normNum_def)
-    hence ?thesis by simp}
+  { assume a: "a \<noteq>0" and a': "a'\<noteq>0"
+    hence bp: "b > 0" "b' > 0" using xn yn x y by (simp_all add: isnormNum_def)
+    from mult_pos_pos[OF bp] have "x *\<^sub>N y = normNum (a * a', b * b')"
+      using x y a a' bp by (simp add: Nmul_def Let_def split_def normNum_def)
+    hence ?thesis by simp }
   ultimately show ?thesis by blast
 qed
 
@@ -139,89 +124,77 @@
   by (simp add: Ninv_def isnormNum_def split_def)
     (cases "fst x = 0", auto simp add: gcd_commute_int)
 
-lemma isnormNum_int[simp]: 
+lemma isnormNum_int[simp]:
   "isnormNum 0\<^sub>N" "isnormNum ((1::int)\<^sub>N)" "i \<noteq> 0 \<Longrightarrow> isnormNum (i\<^sub>N)"
   by (simp_all add: isnormNum_def)
 
 
 text {* Relations over Num *}
 
-definition
-  Nlt0:: "Num \<Rightarrow> bool" ("0>\<^sub>N")
-where
-  "Nlt0 = (\<lambda>(a,b). a < 0)"
+definition Nlt0:: "Num \<Rightarrow> bool"  ("0>\<^sub>N")
+  where "Nlt0 = (\<lambda>(a,b). a < 0)"
 
-definition
-  Nle0:: "Num \<Rightarrow> bool" ("0\<ge>\<^sub>N")
-where
-  "Nle0 = (\<lambda>(a,b). a \<le> 0)"
+definition Nle0:: "Num \<Rightarrow> bool"  ("0\<ge>\<^sub>N")
+  where "Nle0 = (\<lambda>(a,b). a \<le> 0)"
 
-definition
-  Ngt0:: "Num \<Rightarrow> bool" ("0<\<^sub>N")
-where
-  "Ngt0 = (\<lambda>(a,b). a > 0)"
+definition Ngt0:: "Num \<Rightarrow> bool"  ("0<\<^sub>N")
+  where "Ngt0 = (\<lambda>(a,b). a > 0)"
 
-definition
-  Nge0:: "Num \<Rightarrow> bool" ("0\<le>\<^sub>N")
-where
-  "Nge0 = (\<lambda>(a,b). a \<ge> 0)"
+definition Nge0:: "Num \<Rightarrow> bool"  ("0\<le>\<^sub>N")
+  where "Nge0 = (\<lambda>(a,b). a \<ge> 0)"
 
-definition
-  Nlt :: "Num \<Rightarrow> Num \<Rightarrow> bool" (infix "<\<^sub>N" 55)
-where
-  "Nlt = (\<lambda>a b. 0>\<^sub>N (a -\<^sub>N b))"
+definition Nlt :: "Num \<Rightarrow> Num \<Rightarrow> bool"  (infix "<\<^sub>N" 55)
+  where "Nlt = (\<lambda>a b. 0>\<^sub>N (a -\<^sub>N b))"
 
-definition
-  Nle :: "Num \<Rightarrow> Num \<Rightarrow> bool" (infix "\<le>\<^sub>N" 55)
-where
-  "Nle = (\<lambda>a b. 0\<ge>\<^sub>N (a -\<^sub>N b))"
+definition Nle :: "Num \<Rightarrow> Num \<Rightarrow> bool"  (infix "\<le>\<^sub>N" 55)
+  where "Nle = (\<lambda>a b. 0\<ge>\<^sub>N (a -\<^sub>N b))"
 
-definition
-  "INum = (\<lambda>(a,b). of_int a / of_int b)"
+definition "INum = (\<lambda>(a,b). of_int a / of_int b)"
 
 lemma INum_int [simp]: "INum (i\<^sub>N) = ((of_int i) ::'a::field)" "INum 0\<^sub>N = (0::'a::field)"
   by (simp_all add: INum_def)
 
-lemma isnormNum_unique[simp]: 
-  assumes na: "isnormNum x" and nb: "isnormNum y" 
+lemma isnormNum_unique[simp]:
+  assumes na: "isnormNum x" and nb: "isnormNum y"
   shows "((INum x ::'a::{field_char_0, field_inverse_zero}) = INum y) = (x = y)" (is "?lhs = ?rhs")
 proof
-  have "\<exists> a b a' b'. x = (a,b) \<and> y = (a',b')" by auto
-  then obtain a b a' b' where xy[simp]: "x = (a,b)" "y=(a',b')" by blast
-  assume H: ?lhs 
-  {assume "a = 0 \<or> b = 0 \<or> a' = 0 \<or> b' = 0"
+  obtain a b where x: "x = (a, b)" by (cases x)
+  obtain a' b' where y: "y = (a', b')" by (cases y)
+  assume H: ?lhs
+  { assume "a = 0 \<or> b = 0 \<or> a' = 0 \<or> b' = 0"
     hence ?rhs using na nb H
-      by (simp add: INum_def split_def isnormNum_def split: split_if_asm)}
+      by (simp add: x y INum_def split_def isnormNum_def split: split_if_asm) }
   moreover
   { assume az: "a \<noteq> 0" and bz: "b \<noteq> 0" and a'z: "a'\<noteq>0" and b'z: "b'\<noteq>0"
-    from az bz a'z b'z na nb have pos: "b > 0" "b' > 0" by (simp_all add: isnormNum_def)
-    from H bz b'z have eq:"a * b' = a'*b" 
-      by (simp add: INum_def  eq_divide_eq divide_eq_eq of_int_mult[symmetric] del: of_int_mult)
-    from az a'z na nb have gcd1: "gcd a b = 1" "gcd b a = 1" "gcd a' b' = 1" "gcd b' a' = 1"       
-      by (simp_all add: isnormNum_def add: gcd_commute_int)
-    from eq have raw_dvd: "a dvd a'*b" "b dvd b'*a" "a' dvd a*b'" "b' dvd b*a'"
-      apply - 
+    from az bz a'z b'z na nb have pos: "b > 0" "b' > 0" by (simp_all add: x y isnormNum_def)
+    from H bz b'z have eq: "a * b' = a'*b"
+      by (simp add: x y INum_def eq_divide_eq divide_eq_eq of_int_mult[symmetric] del: of_int_mult)
+    from az a'z na nb have gcd1: "gcd a b = 1" "gcd b a = 1" "gcd a' b' = 1" "gcd b' a' = 1"
+      by (simp_all add: x y isnormNum_def add: gcd_commute_int)
+    from eq have raw_dvd: "a dvd a' * b" "b dvd b' * a" "a' dvd a * b'" "b' dvd b * a'"
+      apply -
       apply algebra
       apply algebra
       apply simp
       apply algebra
       done
     from zdvd_antisym_abs[OF coprime_dvd_mult_int[OF gcd1(2) raw_dvd(2)]
-      coprime_dvd_mult_int[OF gcd1(4) raw_dvd(4)]]
+        coprime_dvd_mult_int[OF gcd1(4) raw_dvd(4)]]
       have eq1: "b = b'" using pos by arith
       with eq have "a = a'" using pos by simp
-      with eq1 have ?rhs by simp}
+      with eq1 have ?rhs by (simp add: x y) }
   ultimately show ?rhs by blast
 next
   assume ?rhs thus ?lhs by simp
 qed
 
 
-lemma isnormNum0[simp]: "isnormNum x \<Longrightarrow> (INum x = (0::'a::{field_char_0, field_inverse_zero})) = (x = 0\<^sub>N)"
+lemma isnormNum0[simp]:
+    "isnormNum x \<Longrightarrow> (INum x = (0::'a::{field_char_0, field_inverse_zero})) = (x = 0\<^sub>N)"
   unfolding INum_int(2)[symmetric]
-  by (rule isnormNum_unique, simp_all)
+  by (rule isnormNum_unique) simp_all
 
-lemma of_int_div_aux: "d ~= 0 ==> ((of_int x)::'a::field_char_0) / (of_int d) = 
+lemma of_int_div_aux: "d ~= 0 ==> ((of_int x)::'a::field_char_0) / (of_int d) =
     of_int (x div d) + (of_int (x mod d)) / ((of_int d)::'a)"
 proof -
   assume "d ~= 0"
@@ -231,7 +204,7 @@
     by auto
   then have eq: "of_int x = ?t"
     by (simp only: of_int_mult[symmetric] of_int_add [symmetric])
-  then have "of_int x / of_int d = ?t / of_int d" 
+  then have "of_int x / of_int d = ?t / of_int d"
     using cong[OF refl[of ?f] eq] by simp
   then show ?thesis by (simp add: add_divide_distrib algebra_simps `d ~= 0`)
 qed
@@ -241,25 +214,26 @@
   apply (frule of_int_div_aux [of d n, where ?'a = 'a])
   apply simp
   apply (simp add: dvd_eq_mod_eq_0)
-done
+  done
 
 
 lemma normNum[simp]: "INum (normNum x) = (INum x :: 'a::{field_char_0, field_inverse_zero})"
-proof-
-  have "\<exists> a b. x = (a,b)" by auto
-  then obtain a b where x[simp]: "x = (a,b)" by blast
-  {assume "a=0 \<or> b = 0" hence ?thesis
-      by (simp add: INum_def normNum_def split_def Let_def)}
-  moreover 
-  {assume a: "a\<noteq>0" and b: "b\<noteq>0"
+proof -
+  obtain a b where x: "x = (a, b)" by (cases x)
+  { assume "a = 0 \<or> b = 0"
+    hence ?thesis by (simp add: x INum_def normNum_def split_def Let_def) }
+  moreover
+  { assume a: "a \<noteq> 0" and b: "b \<noteq> 0"
     let ?g = "gcd a b"
     from a b have g: "?g \<noteq> 0"by simp
     from of_int_div[OF g, where ?'a = 'a]
-    have ?thesis by (auto simp add: INum_def normNum_def split_def Let_def)}
+    have ?thesis by (auto simp add: x INum_def normNum_def split_def Let_def) }
   ultimately show ?thesis by blast
 qed
 
-lemma INum_normNum_iff: "(INum x ::'a::{field_char_0, field_inverse_zero}) = INum y \<longleftrightarrow> normNum x = normNum y" (is "?lhs = ?rhs")
+lemma INum_normNum_iff:
+  "(INum x ::'a::{field_char_0, field_inverse_zero}) = INum y \<longleftrightarrow> normNum x = normNum y"
+  (is "?lhs = ?rhs")
 proof -
   have "normNum x = normNum y \<longleftrightarrow> (INum (normNum x) :: 'a) = INum (normNum y)"
     by (simp del: normNum)
@@ -268,139 +242,157 @@
 qed
 
 lemma Nadd[simp]: "INum (x +\<^sub>N y) = INum x + (INum y :: 'a :: {field_char_0, field_inverse_zero})"
-proof-
-let ?z = "0:: 'a"
-  have " \<exists> a b. x = (a,b)" " \<exists> a' b'. y = (a',b')" by auto
-  then obtain a b a' b' where x[simp]: "x = (a,b)" 
-    and y[simp]: "y = (a',b')" by blast
-  {assume "a=0 \<or> a'= 0 \<or> b =0 \<or> b' = 0" hence ?thesis 
-      apply (cases "a=0",simp_all add: Nadd_def)
-      apply (cases "b= 0",simp_all add: INum_def)
-       apply (cases "a'= 0",simp_all)
-       apply (cases "b'= 0",simp_all)
+proof -
+  let ?z = "0:: 'a"
+  obtain a b where x: "x = (a, b)" by (cases x)
+  obtain a' b' where y: "y = (a', b')" by (cases y)
+  { assume "a=0 \<or> a'= 0 \<or> b =0 \<or> b' = 0"
+    hence ?thesis
+      apply (cases "a=0", simp_all add: x y Nadd_def)
+      apply (cases "b= 0", simp_all add: INum_def)
+       apply (cases "a'= 0", simp_all)
+       apply (cases "b'= 0", simp_all)
        done }
-  moreover 
-  {assume aa':"a \<noteq> 0" "a'\<noteq> 0" and bb': "b \<noteq> 0" "b' \<noteq> 0" 
-    {assume z: "a * b' + b * a' = 0"
+  moreover
+  { assume aa': "a \<noteq> 0" "a'\<noteq> 0" and bb': "b \<noteq> 0" "b' \<noteq> 0"
+    { assume z: "a * b' + b * a' = 0"
       hence "of_int (a*b' + b*a') / (of_int b* of_int b') = ?z" by simp
-      hence "of_int b' * of_int a / (of_int b * of_int b') + of_int b * of_int a' / (of_int b * of_int b') = ?z"  by (simp add:add_divide_distrib) 
-      hence th: "of_int a / of_int b + of_int a' / of_int b' = ?z" using bb' aa' by simp 
-      from z aa' bb' have ?thesis 
-        by (simp add: th Nadd_def normNum_def INum_def split_def)}
-    moreover {assume z: "a * b' + b * a' \<noteq> 0"
+      hence "of_int b' * of_int a / (of_int b * of_int b') +
+          of_int b * of_int a' / (of_int b * of_int b') = ?z"
+        by (simp add:add_divide_distrib)
+      hence th: "of_int a / of_int b + of_int a' / of_int b' = ?z" using bb' aa'
+        by simp
+      from z aa' bb' have ?thesis
+        by (simp add: x y th Nadd_def normNum_def INum_def split_def) }
+    moreover {
+      assume z: "a * b' + b * a' \<noteq> 0"
       let ?g = "gcd (a * b' + b * a') (b*b')"
       have gz: "?g \<noteq> 0" using z by simp
       have ?thesis using aa' bb' z gz
-        of_int_div[where ?'a = 'a, OF gz gcd_dvd1_int[where x="a * b' + b * a'" and y="b*b'"]]  of_int_div[where ?'a = 'a,
-        OF gz gcd_dvd2_int[where x="a * b' + b * a'" and y="b*b'"]]
-        by (simp add: Nadd_def INum_def normNum_def Let_def add_divide_distrib)}
-    ultimately have ?thesis using aa' bb' 
-      by (simp add: Nadd_def INum_def normNum_def Let_def) }
+        of_int_div[where ?'a = 'a, OF gz gcd_dvd1_int[where x="a * b' + b * a'" and y="b*b'"]]
+        of_int_div[where ?'a = 'a, OF gz gcd_dvd2_int[where x="a * b' + b * a'" and y="b*b'"]]
+        by (simp add: x y Nadd_def INum_def normNum_def Let_def add_divide_distrib) }
+    ultimately have ?thesis using aa' bb'
+      by (simp add: x y Nadd_def INum_def normNum_def Let_def) }
   ultimately show ?thesis by blast
 qed
 
-lemma Nmul[simp]: "INum (x *\<^sub>N y) = INum x * (INum y:: 'a :: {field_char_0, field_inverse_zero}) "
-proof-
+lemma Nmul[simp]: "INum (x *\<^sub>N y) = INum x * (INum y:: 'a :: {field_char_0, field_inverse_zero})"
+proof -
   let ?z = "0::'a"
-  have " \<exists> a b. x = (a,b)" " \<exists> a' b'. y = (a',b')" by auto
-  then obtain a b a' b' where x: "x = (a,b)" and y: "y = (a',b')" by blast
-  {assume "a=0 \<or> a'= 0 \<or> b = 0 \<or> b' = 0" hence ?thesis 
-      apply (cases "a=0",simp_all add: x y Nmul_def INum_def Let_def)
-      apply (cases "b=0",simp_all)
-      apply (cases "a'=0",simp_all) 
+  obtain a b where x: "x = (a, b)" by (cases x)
+  obtain a' b' where y: "y = (a', b')" by (cases y)
+  { assume "a=0 \<or> a'= 0 \<or> b = 0 \<or> b' = 0"
+    hence ?thesis
+      apply (cases "a=0", simp_all add: x y Nmul_def INum_def Let_def)
+      apply (cases "b=0", simp_all)
+      apply (cases "a'=0", simp_all)
       done }
   moreover
-  {assume z: "a \<noteq> 0" "a' \<noteq> 0" "b \<noteq> 0" "b' \<noteq> 0"
+  { assume z: "a \<noteq> 0" "a' \<noteq> 0" "b \<noteq> 0" "b' \<noteq> 0"
     let ?g="gcd (a*a') (b*b')"
     have gz: "?g \<noteq> 0" using z by simp
-    from z of_int_div[where ?'a = 'a, OF gz gcd_dvd1_int[where x="a*a'" and y="b*b'"]] 
-      of_int_div[where ?'a = 'a , OF gz gcd_dvd2_int[where x="a*a'" and y="b*b'"]] 
-    have ?thesis by (simp add: Nmul_def x y Let_def INum_def)}
+    from z of_int_div[where ?'a = 'a, OF gz gcd_dvd1_int[where x="a*a'" and y="b*b'"]]
+      of_int_div[where ?'a = 'a , OF gz gcd_dvd2_int[where x="a*a'" and y="b*b'"]]
+    have ?thesis by (simp add: Nmul_def x y Let_def INum_def) }
   ultimately show ?thesis by blast
 qed
 
 lemma Nneg[simp]: "INum (~\<^sub>N x) = - (INum x ::'a:: field)"
   by (simp add: Nneg_def split_def INum_def)
 
-lemma Nsub[simp]: shows "INum (x -\<^sub>N y) = INum x - (INum y:: 'a :: {field_char_0, field_inverse_zero})"
-by (simp add: Nsub_def split_def)
+lemma Nsub[simp]: "INum (x -\<^sub>N y) = INum x - (INum y:: 'a :: {field_char_0, field_inverse_zero})"
+  by (simp add: Nsub_def split_def)
 
 lemma Ninv[simp]: "INum (Ninv x) = (1::'a :: field_inverse_zero) / (INum x)"
   by (simp add: Ninv_def INum_def split_def)
 
-lemma Ndiv[simp]: "INum (x \<div>\<^sub>N y) = INum x / (INum y ::'a :: {field_char_0, field_inverse_zero})" by (simp add: Ndiv_def)
+lemma Ndiv[simp]: "INum (x \<div>\<^sub>N y) = INum x / (INum y ::'a :: {field_char_0, field_inverse_zero})"
+  by (simp add: Ndiv_def)
 
-lemma Nlt0_iff[simp]: assumes nx: "isnormNum x" 
-  shows "((INum x :: 'a :: {field_char_0, linordered_field_inverse_zero})< 0) = 0>\<^sub>N x "
-proof-
-  have " \<exists> a b. x = (a,b)" by simp
-  then obtain a b where x[simp]:"x = (a,b)" by blast
-  {assume "a = 0" hence ?thesis by (simp add: Nlt0_def INum_def) }
+lemma Nlt0_iff[simp]:
+  assumes nx: "isnormNum x"
+  shows "((INum x :: 'a :: {field_char_0, linordered_field_inverse_zero})< 0) = 0>\<^sub>N x"
+proof -
+  obtain a b where x: "x = (a, b)" by (cases x)
+  { assume "a = 0" hence ?thesis by (simp add: x Nlt0_def INum_def) }
   moreover
-  {assume a: "a\<noteq>0" hence b: "(of_int b::'a) > 0" using nx by (simp add: isnormNum_def)
+  { assume a: "a \<noteq> 0" hence b: "(of_int b::'a) > 0"
+      using nx by (simp add: x isnormNum_def)
     from pos_divide_less_eq[OF b, where b="of_int a" and a="0::'a"]
-    have ?thesis by (simp add: Nlt0_def INum_def)}
+    have ?thesis by (simp add: x Nlt0_def INum_def) }
   ultimately show ?thesis by blast
 qed
 
-lemma Nle0_iff[simp]:assumes nx: "isnormNum x" 
+lemma Nle0_iff[simp]:
+  assumes nx: "isnormNum x"
   shows "((INum x :: 'a :: {field_char_0, linordered_field_inverse_zero}) \<le> 0) = 0\<ge>\<^sub>N x"
-proof-
-  have " \<exists> a b. x = (a,b)" by simp
-  then obtain a b where x[simp]:"x = (a,b)" by blast
-  {assume "a = 0" hence ?thesis by (simp add: Nle0_def INum_def) }
+proof -
+  obtain a b where x: "x = (a, b)" by (cases x)
+  { assume "a = 0" hence ?thesis by (simp add: x Nle0_def INum_def) }
   moreover
-  {assume a: "a\<noteq>0" hence b: "(of_int b :: 'a) > 0" using nx by (simp add: isnormNum_def)
+  { assume a: "a \<noteq> 0" hence b: "(of_int b :: 'a) > 0"
+      using nx by (simp add: x isnormNum_def)
     from pos_divide_le_eq[OF b, where b="of_int a" and a="0::'a"]
-    have ?thesis by (simp add: Nle0_def INum_def)}
+    have ?thesis by (simp add: x Nle0_def INum_def) }
   ultimately show ?thesis by blast
 qed
 
-lemma Ngt0_iff[simp]:assumes nx: "isnormNum x" shows "((INum x :: 'a :: {field_char_0, linordered_field_inverse_zero})> 0) = 0<\<^sub>N x"
-proof-
-  have " \<exists> a b. x = (a,b)" by simp
-  then obtain a b where x[simp]:"x = (a,b)" by blast
-  {assume "a = 0" hence ?thesis by (simp add: Ngt0_def INum_def) }
+lemma Ngt0_iff[simp]:
+  assumes nx: "isnormNum x"
+  shows "((INum x :: 'a :: {field_char_0, linordered_field_inverse_zero})> 0) = 0<\<^sub>N x"
+proof -
+  obtain a b where x: "x = (a, b)" by (cases x)
+  { assume "a = 0" hence ?thesis by (simp add: x Ngt0_def INum_def) }
   moreover
-  {assume a: "a\<noteq>0" hence b: "(of_int b::'a) > 0" using nx by (simp add: isnormNum_def)
+  { assume a: "a \<noteq> 0" hence b: "(of_int b::'a) > 0" using nx
+      by (simp add: x isnormNum_def)
     from pos_less_divide_eq[OF b, where b="of_int a" and a="0::'a"]
-    have ?thesis by (simp add: Ngt0_def INum_def)}
-  ultimately show ?thesis by blast
-qed
-lemma Nge0_iff[simp]:assumes nx: "isnormNum x" 
-  shows "((INum x :: 'a :: {field_char_0, linordered_field_inverse_zero}) \<ge> 0) = 0\<le>\<^sub>N x"
-proof-
-  have " \<exists> a b. x = (a,b)" by simp
-  then obtain a b where x[simp]:"x = (a,b)" by blast
-  {assume "a = 0" hence ?thesis by (simp add: Nge0_def INum_def) }
-  moreover
-  {assume a: "a\<noteq>0" hence b: "(of_int b::'a) > 0" using nx by (simp add: isnormNum_def)
-    from pos_le_divide_eq[OF b, where b="of_int a" and a="0::'a"]
-    have ?thesis by (simp add: Nge0_def INum_def)}
+    have ?thesis by (simp add: x Ngt0_def INum_def) }
   ultimately show ?thesis by blast
 qed
 
-lemma Nlt_iff[simp]: assumes nx: "isnormNum x" and ny: "isnormNum y"
+lemma Nge0_iff[simp]:
+  assumes nx: "isnormNum x"
+  shows "((INum x :: 'a :: {field_char_0, linordered_field_inverse_zero}) \<ge> 0) = 0\<le>\<^sub>N x"
+proof -
+  obtain a b where x: "x = (a, b)" by (cases x)
+  { assume "a = 0" hence ?thesis by (simp add: x Nge0_def INum_def) }
+  moreover
+  { assume "a \<noteq> 0" hence b: "(of_int b::'a) > 0" using nx
+      by (simp add: x isnormNum_def)
+    from pos_le_divide_eq[OF b, where b="of_int a" and a="0::'a"]
+    have ?thesis by (simp add: x Nge0_def INum_def) }
+  ultimately show ?thesis by blast
+qed
+
+lemma Nlt_iff[simp]:
+  assumes nx: "isnormNum x" and ny: "isnormNum y"
   shows "((INum x :: 'a :: {field_char_0, linordered_field_inverse_zero}) < INum y) = (x <\<^sub>N y)"
-proof-
+proof -
   let ?z = "0::'a"
-  have "((INum x ::'a) < INum y) = (INum (x -\<^sub>N y) < ?z)" using nx ny by simp
-  also have "\<dots> = (0>\<^sub>N (x -\<^sub>N y))" using Nlt0_iff[OF Nsub_normN[OF ny]] by simp
+  have "((INum x ::'a) < INum y) = (INum (x -\<^sub>N y) < ?z)"
+    using nx ny by simp
+  also have "\<dots> = (0>\<^sub>N (x -\<^sub>N y))"
+    using Nlt0_iff[OF Nsub_normN[OF ny]] by simp
   finally show ?thesis by (simp add: Nlt_def)
 qed
 
-lemma Nle_iff[simp]: assumes nx: "isnormNum x" and ny: "isnormNum y"
+lemma Nle_iff[simp]:
+  assumes nx: "isnormNum x" and ny: "isnormNum y"
   shows "((INum x :: 'a :: {field_char_0, linordered_field_inverse_zero})\<le> INum y) = (x \<le>\<^sub>N y)"
-proof-
-  have "((INum x ::'a) \<le> INum y) = (INum (x -\<^sub>N y) \<le> (0::'a))" using nx ny by simp
-  also have "\<dots> = (0\<ge>\<^sub>N (x -\<^sub>N y))" using Nle0_iff[OF Nsub_normN[OF ny]] by simp
+proof -
+  have "((INum x ::'a) \<le> INum y) = (INum (x -\<^sub>N y) \<le> (0::'a))"
+    using nx ny by simp
+  also have "\<dots> = (0\<ge>\<^sub>N (x -\<^sub>N y))"
+    using Nle0_iff[OF Nsub_normN[OF ny]] by simp
   finally show ?thesis by (simp add: Nle_def)
 qed
 
 lemma Nadd_commute:
   assumes "SORT_CONSTRAINT('a::{field_char_0, field_inverse_zero})"
   shows "x +\<^sub>N y = y +\<^sub>N x"
-proof-
+proof -
   have n: "isnormNum (x +\<^sub>N y)" "isnormNum (y +\<^sub>N x)" by simp_all
   have "(INum (x +\<^sub>N y)::'a) = INum (y +\<^sub>N x)" by simp
   with isnormNum_unique[OF n] show ?thesis by simp
@@ -409,7 +401,7 @@
 lemma [simp]:
   assumes "SORT_CONSTRAINT('a::{field_char_0, field_inverse_zero})"
   shows "(0, b) +\<^sub>N y = normNum y"
-    and "(a, 0) +\<^sub>N y = normNum y" 
+    and "(a, 0) +\<^sub>N y = normNum y"
     and "x +\<^sub>N (0, b) = normNum x"
     and "x +\<^sub>N (a, 0) = normNum x"
   apply (simp add: Nadd_def split_def)
@@ -420,14 +412,13 @@
 
 lemma normNum_nilpotent_aux[simp]:
   assumes "SORT_CONSTRAINT('a::{field_char_0, field_inverse_zero})"
-  assumes nx: "isnormNum x" 
+  assumes nx: "isnormNum x"
   shows "normNum x = x"
-proof-
+proof -
   let ?a = "normNum x"
   have n: "isnormNum ?a" by simp
-  have th:"INum ?a = (INum x ::'a)" by simp
-  with isnormNum_unique[OF n nx]  
-  show ?thesis by simp
+  have th: "INum ?a = (INum x ::'a)" by simp
+  with isnormNum_unique[OF n nx] show ?thesis by simp
 qed
 
 lemma normNum_nilpotent[simp]:
@@ -445,7 +436,7 @@
 lemma Nadd_normNum1[simp]:
   assumes "SORT_CONSTRAINT('a::{field_char_0, field_inverse_zero})"
   shows "normNum x +\<^sub>N y = x +\<^sub>N y"
-proof-
+proof -
   have n: "isnormNum (normNum x +\<^sub>N y)" "isnormNum (x +\<^sub>N y)" by simp_all
   have "INum (normNum x +\<^sub>N y) = INum x + (INum y :: 'a)" by simp
   also have "\<dots> = INum (x +\<^sub>N y)" by simp
@@ -455,7 +446,7 @@
 lemma Nadd_normNum2[simp]:
   assumes "SORT_CONSTRAINT('a::{field_char_0, field_inverse_zero})"
   shows "x +\<^sub>N normNum y = x +\<^sub>N y"
-proof-
+proof -
   have n: "isnormNum (x +\<^sub>N normNum y)" "isnormNum (x +\<^sub>N y)" by simp_all
   have "INum (x +\<^sub>N normNum y) = INum x + (INum y :: 'a)" by simp
   also have "\<dots> = INum (x +\<^sub>N y)" by simp
@@ -465,7 +456,7 @@
 lemma Nadd_assoc:
   assumes "SORT_CONSTRAINT('a::{field_char_0, field_inverse_zero})"
   shows "x +\<^sub>N y +\<^sub>N z = x +\<^sub>N (y +\<^sub>N z)"
-proof-
+proof -
   have n: "isnormNum (x +\<^sub>N y +\<^sub>N z)" "isnormNum (x +\<^sub>N (y +\<^sub>N z))" by simp_all
   have "INum (x +\<^sub>N y +\<^sub>N z) = (INum (x +\<^sub>N (y +\<^sub>N z)) :: 'a)" by simp
   with isnormNum_unique[OF n] show ?thesis by simp
@@ -476,10 +467,10 @@
 
 lemma Nmul_assoc:
   assumes "SORT_CONSTRAINT('a::{field_char_0, field_inverse_zero})"
-  assumes nx: "isnormNum x" and ny:"isnormNum y" and nz:"isnormNum z"
+  assumes nx: "isnormNum x" and ny: "isnormNum y" and nz: "isnormNum z"
   shows "x *\<^sub>N y *\<^sub>N z = x *\<^sub>N (y *\<^sub>N z)"
-proof-
-  from nx ny nz have n: "isnormNum (x *\<^sub>N y *\<^sub>N z)" "isnormNum (x *\<^sub>N (y *\<^sub>N z))" 
+proof -
+  from nx ny nz have n: "isnormNum (x *\<^sub>N y *\<^sub>N z)" "isnormNum (x *\<^sub>N (y *\<^sub>N z))"
     by simp_all
   have "INum (x *\<^sub>N y *\<^sub>N z) = (INum (x *\<^sub>N (y *\<^sub>N z)) :: 'a)" by simp
   with isnormNum_unique[OF n] show ?thesis by simp
@@ -487,14 +478,15 @@
 
 lemma Nsub0:
   assumes "SORT_CONSTRAINT('a::{field_char_0, field_inverse_zero})"
-  assumes x: "isnormNum x" and y:"isnormNum y" shows "(x -\<^sub>N y = 0\<^sub>N) = (x = y)"
-proof-
-  { fix h :: 'a
-    from isnormNum_unique[where 'a = 'a, OF Nsub_normN[OF y], where y="0\<^sub>N"] 
-    have "(x -\<^sub>N y = 0\<^sub>N) = (INum (x -\<^sub>N y) = (INum 0\<^sub>N :: 'a)) " by simp
-    also have "\<dots> = (INum x = (INum y :: 'a))" by simp
-    also have "\<dots> = (x = y)" using x y by simp
-    finally show ?thesis . }
+  assumes x: "isnormNum x" and y: "isnormNum y"
+  shows "x -\<^sub>N y = 0\<^sub>N \<longleftrightarrow> x = y"
+proof -
+  fix h :: 'a
+  from isnormNum_unique[where 'a = 'a, OF Nsub_normN[OF y], where y="0\<^sub>N"]
+  have "(x -\<^sub>N y = 0\<^sub>N) = (INum (x -\<^sub>N y) = (INum 0\<^sub>N :: 'a))" by simp
+  also have "\<dots> = (INum x = (INum y :: 'a))" by simp
+  also have "\<dots> = (x = y)" using x y by simp
+  finally show ?thesis .
 qed
 
 lemma Nmul0[simp]: "c *\<^sub>N 0\<^sub>N = 0\<^sub>N" " 0\<^sub>N *\<^sub>N c = 0\<^sub>N"
@@ -502,24 +494,26 @@
 
 lemma Nmul_eq0[simp]:
   assumes "SORT_CONSTRAINT('a::{field_char_0, field_inverse_zero})"
-  assumes nx:"isnormNum x" and ny: "isnormNum y"
-  shows "(x*\<^sub>N y = 0\<^sub>N) = (x = 0\<^sub>N \<or> y = 0\<^sub>N)"
-proof-
-  { fix h :: 'a
-    have " \<exists> a b a' b'. x = (a,b) \<and> y= (a',b')" by auto
-    then obtain a b a' b' where xy[simp]: "x = (a,b)" "y = (a',b')" by blast
-    have n0: "isnormNum 0\<^sub>N" by simp
-    show ?thesis using nx ny 
-      apply (simp only: isnormNum_unique[where ?'a = 'a, OF  Nmul_normN[OF nx ny] n0, symmetric] Nmul[where ?'a = 'a])
-      by (simp add: INum_def split_def isnormNum_def split: split_if_asm)
-  }
+  assumes nx: "isnormNum x" and ny: "isnormNum y"
+  shows "x*\<^sub>N y = 0\<^sub>N \<longleftrightarrow> x = 0\<^sub>N \<or> y = 0\<^sub>N"
+proof -
+  fix h :: 'a
+  obtain a b where x: "x = (a, b)" by (cases x)
+  obtain a' b' where y: "y = (a', b')" by (cases y)
+  have n0: "isnormNum 0\<^sub>N" by simp
+  show ?thesis using nx ny
+    apply (simp only: isnormNum_unique[where ?'a = 'a, OF Nmul_normN[OF nx ny] n0, symmetric]
+      Nmul[where ?'a = 'a])
+    apply (simp add: x y INum_def split_def isnormNum_def split: split_if_asm)
+    done
 qed
+
 lemma Nneg_Nneg[simp]: "~\<^sub>N (~\<^sub>N c) = c"
   by (simp add: Nneg_def split_def)
 
-lemma Nmul1[simp]: 
-  "isnormNum c \<Longrightarrow> 1\<^sub>N *\<^sub>N c = c" 
-  "isnormNum c \<Longrightarrow> c *\<^sub>N (1\<^sub>N) = c" 
+lemma Nmul1[simp]:
+    "isnormNum c \<Longrightarrow> 1\<^sub>N *\<^sub>N c = c"
+    "isnormNum c \<Longrightarrow> c *\<^sub>N (1\<^sub>N) = c"
   apply (simp_all add: Nmul_def Let_def split_def isnormNum_def)
   apply (cases "fst c = 0", simp_all, cases c, simp_all)+
   done
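The Abstract_Rat hunk above manipulates rationals represented as normalized integer pairs: INum (a, b) denotes a/b, and Nmul multiplies componentwise and renormalizes by the gcd (the step the of_int_div reasoning justifies). As a rough cross-check of the Nmul lemma, here is a hedged Python sketch; normNum, INum and Nmul below are illustrative reimplementations, not the theory's actual definitions:

```python
from fractions import Fraction
from math import gcd

def normNum(a, b):
    # Normalize an integer pair (a, b) standing for a/b: a zero numerator
    # or denominator collapses to (0, 0); otherwise divide out the gcd
    # and keep the denominator positive.
    if a == 0 or b == 0:
        return (0, 0)
    g = gcd(abs(a), abs(b))
    a, b = a // g, b // g
    return (a, b) if b > 0 else (-a, -b)

def INum(x):
    # Interpretation of a pair as a rational; (a, 0) is read as 0.
    a, b = x
    return Fraction(0) if b == 0 else Fraction(a, b)

def Nmul(x, y):
    # Componentwise multiplication followed by normalization,
    # mirroring the shape of Nmul_def.
    (a, b), (a2, b2) = x, y
    return normNum(a * a2, b * b2)
```

The zero branches correspond to the first case split in the Nmul proof, and the gcd division to the `of_int_div` step in the nonzero case.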
--- a/src/HOL/Library/Library.thy	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/HOL/Library/Library.thy	Wed Sep 07 19:24:28 2011 -0700
@@ -55,6 +55,7 @@
   Ramsey
   Reflection
   RBT_Mapping
+  Saturated
   Set_Algebras
   State_Monad
   Sum_of_Squares
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/HOL/Library/Saturated.thy	Wed Sep 07 19:24:28 2011 -0700
@@ -0,0 +1,242 @@
+(* Author: Brian Huffman *)
+(* Author: Peter Gammie *)
+(* Author: Florian Haftmann *)
+
+header {* Saturated arithmetic *}
+
+theory Saturated
+imports Main "~~/src/HOL/Library/Numeral_Type" "~~/src/HOL/Word/Type_Length"
+begin
+
+subsection {* The type of saturated naturals *}
+
+typedef (open) ('a::len) sat = "{.. len_of TYPE('a)}"
+  morphisms nat_of Abs_sat
+  by auto
+
+lemma sat_eqI:
+  "nat_of m = nat_of n \<Longrightarrow> m = n"
+  by (simp add: nat_of_inject)
+
+lemma sat_eq_iff:
+  "m = n \<longleftrightarrow> nat_of m = nat_of n"
+  by (simp add: nat_of_inject)
+
+lemma Abs_sat_nat_of [code abstype]:
+  "Abs_sat (nat_of n) = n"
+  by (fact nat_of_inverse)
+
+definition Sat :: "nat \<Rightarrow> 'a::len sat" where
+  "Sat n = Abs_sat (min (len_of TYPE('a)) n)"
+
+lemma nat_of_Sat [simp]:
+  "nat_of (Sat n :: ('a::len) sat) = min (len_of TYPE('a)) n"
+  unfolding Sat_def by (rule Abs_sat_inverse) simp
+
+lemma nat_of_le_len_of [simp]:
+  "nat_of (n :: ('a::len) sat) \<le> len_of TYPE('a)"
+  using nat_of [where x = n] by simp
+
+lemma min_len_of_nat_of [simp]:
+  "min (len_of TYPE('a)) (nat_of (n::('a::len) sat)) = nat_of n"
+  by (rule min_max.inf_absorb2 [OF nat_of_le_len_of])
+
+lemma min_nat_of_len_of [simp]:
+  "min (nat_of (n::('a::len) sat)) (len_of TYPE('a)) = nat_of n"
+  by (subst min_max.inf.commute) simp
+
+lemma Sat_nat_of [simp]:
+  "Sat (nat_of n) = n"
+  by (simp add: Sat_def nat_of_inverse)
+
+instantiation sat :: (len) linorder
+begin
+
+definition
+  less_eq_sat_def: "x \<le> y \<longleftrightarrow> nat_of x \<le> nat_of y"
+
+definition
+  less_sat_def: "x < y \<longleftrightarrow> nat_of x < nat_of y"
+
+instance
+by default (auto simp add: less_eq_sat_def less_sat_def not_le sat_eq_iff min_max.le_infI1 nat_mult_commute)
+
+end
+
+instantiation sat :: (len) "{minus, comm_semiring_0, comm_semiring_1}"
+begin
+
+definition
+  "0 = Sat 0"
+
+definition
+  "1 = Sat 1"
+
+lemma nat_of_zero_sat [simp, code abstract]:
+  "nat_of 0 = 0"
+  by (simp add: zero_sat_def)
+
+lemma nat_of_one_sat [simp, code abstract]:
+  "nat_of 1 = min 1 (len_of TYPE('a))"
+  by (simp add: one_sat_def)
+
+definition
+  "x + y = Sat (nat_of x + nat_of y)"
+
+lemma nat_of_plus_sat [simp, code abstract]:
+  "nat_of (x + y) = min (nat_of x + nat_of y) (len_of TYPE('a))"
+  by (simp add: plus_sat_def)
+
+definition
+  "x - y = Sat (nat_of x - nat_of y)"
+
+lemma nat_of_minus_sat [simp, code abstract]:
+  "nat_of (x - y) = nat_of x - nat_of y"
+proof -
+  from nat_of_le_len_of [of x] have "nat_of x - nat_of y \<le> len_of TYPE('a)" by arith
+  then show ?thesis by (simp add: minus_sat_def)
+qed
+
+definition
+  "x * y = Sat (nat_of x * nat_of y)"
+
+lemma nat_of_times_sat [simp, code abstract]:
+  "nat_of (x * y) = min (nat_of x * nat_of y) (len_of TYPE('a))"
+  by (simp add: times_sat_def)
+
+instance proof
+  fix a b c :: "('a::len) sat"
+  show "a * b * c = a * (b * c)"
+  proof(cases "a = 0")
+    case True thus ?thesis by (simp add: sat_eq_iff)
+  next
+    case False show ?thesis
+    proof(cases "c = 0")
+      case True thus ?thesis by (simp add: sat_eq_iff)
+    next
+      case False with `a \<noteq> 0` show ?thesis
+        by (simp add: sat_eq_iff nat_mult_min_left nat_mult_min_right mult_assoc min_max.inf_assoc min_max.inf_absorb2)
+    qed
+  qed
+next
+  fix a :: "('a::len) sat"
+  show "1 * a = a"
+    apply (simp add: sat_eq_iff)
+    apply (metis One_nat_def len_gt_0 less_Suc0 less_zeroE linorder_not_less min_max.le_iff_inf min_nat_of_len_of nat_mult_1_right nat_mult_commute)
+    done
+next
+  fix a b c :: "('a::len) sat"
+  show "(a + b) * c = a * c + b * c"
+  proof(cases "c = 0")
+    case True thus ?thesis by (simp add: sat_eq_iff)
+  next
+    case False thus ?thesis
+      by (simp add: sat_eq_iff nat_mult_min_left add_mult_distrib nat_add_min_left nat_add_min_right min_max.inf_assoc min_max.inf_absorb2)
+  qed
+qed (simp_all add: sat_eq_iff mult.commute)
+
+end
+
+instantiation sat :: (len) ordered_comm_semiring
+begin
+
+instance
+by default (auto simp add: less_eq_sat_def less_sat_def not_le sat_eq_iff min_max.le_infI1 nat_mult_commute)
+
+end
+
+instantiation sat :: (len) number
+begin
+
+definition
+  number_of_sat_def [code del]: "number_of = Sat \<circ> nat"
+
+instance ..
+
+end
+
+lemma [code abstract]:
+  "nat_of (number_of n :: ('a::len) sat) = min (nat n) (len_of TYPE('a))"
+  unfolding number_of_sat_def by simp
+
+instance sat :: (len) finite
+proof
+  show "finite (UNIV::'a sat set)"
+    unfolding type_definition.univ [OF type_definition_sat]
+    using finite by simp
+qed
+
+instantiation sat :: (len) equal
+begin
+
+definition
+  "HOL.equal A B \<longleftrightarrow> nat_of A = nat_of B"
+
+instance proof
+qed (simp add: equal_sat_def nat_of_inject)
+
+end
+
+instantiation sat :: (len) "{bounded_lattice, distrib_lattice}"
+begin
+
+definition
+  "(inf :: 'a sat \<Rightarrow> 'a sat \<Rightarrow> 'a sat) = min"
+
+definition
+  "(sup :: 'a sat \<Rightarrow> 'a sat \<Rightarrow> 'a sat) = max"
+
+definition
+  "bot = (0 :: 'a sat)"
+
+definition
+  "top = Sat (len_of TYPE('a))"
+
+instance proof
+qed (simp_all add: inf_sat_def sup_sat_def bot_sat_def top_sat_def min_max.sup_inf_distrib1,
+  simp_all add: less_eq_sat_def)
+
+end
+
+instantiation sat :: (len) complete_lattice
+begin
+
+definition
+  "Inf (A :: 'a sat set) = fold min top A"
+
+definition
+  "Sup (A :: 'a sat set) = fold max bot A"
+
+instance proof
+  fix x :: "'a sat"
+  fix A :: "'a sat set"
+  note finite
+  moreover assume "x \<in> A"
+  ultimately have "fold min top A \<le> min x top" by (rule min_max.fold_inf_le_inf)
+  then show "Inf A \<le> x" by (simp add: Inf_sat_def)
+next
+  fix z :: "'a sat"
+  fix A :: "'a sat set"
+  note finite
+  moreover assume z: "\<And>x. x \<in> A \<Longrightarrow> z \<le> x"
+  ultimately have "min z top \<le> fold min top A" by (blast intro: min_max.inf_le_fold_inf)
+  then show "z \<le> Inf A" by (simp add: Inf_sat_def min_def)
+next
+  fix x :: "'a sat"
+  fix A :: "'a sat set"
+  note finite
+  moreover assume "x \<in> A"
+  ultimately have "max x bot \<le> fold max bot A" by (rule min_max.sup_le_fold_sup)
+  then show "x \<le> Sup A" by (simp add: Sup_sat_def)
+next
+  fix z :: "'a sat"
+  fix A :: "'a sat set"
+  note finite
+  moreover assume z: "\<And>x. x \<in> A \<Longrightarrow> x \<le> z"
+  ultimately have "fold max bot A \<le> max z bot" by (blast intro: min_max.fold_sup_le_sup)
+  then show "Sup A \<le> z" by (simp add: Sup_sat_def max_def bot_unique)
+qed
+
+end
+
+end
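The new Saturated.thy theory models naturals clamped at a cap fixed by the type parameter (len_of TYPE('a)): Sat n = min cap n, addition and multiplication compute in nat and then clamp, and truncated subtraction needs no clamp at all (the argument in nat_of_minus_sat). A hedged Python sketch with the cap passed as an explicit argument in place of the type-level length; all names are illustrative:

```python
def sat(cap, n):
    # Sat: clamp a natural number at the cap, i.e. min cap n.
    return min(cap, n)

def sat_add(cap, x, y):
    # nat_of (x + y) = min (nat_of x + nat_of y) cap
    return sat(cap, x + y)

def sat_sub(cap, x, y):
    # Truncated natural subtraction; the result never exceeds x,
    # hence never exceeds cap, so no clamping is required.
    return max(x - y, 0)

def sat_mul(cap, x, y):
    # nat_of (x * y) = min (nat_of x * nat_of y) cap
    return sat(cap, x * y)
```

The associativity check below mirrors the `a * b * c = a * (b * c)` case analysis in the comm_semiring instance proof: clamping commutes with multiplication once both sides saturate.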
--- a/src/HOL/Metis_Examples/Type_Encodings.thy	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/HOL/Metis_Examples/Type_Encodings.thy	Wed Sep 07 19:24:28 2011 -0700
@@ -27,24 +27,32 @@
    "poly_guards",
    "poly_guards?",
    "poly_guards??",
+   "poly_guards@?",
    "poly_guards!",
    "poly_guards!!",
+   "poly_guards@!",
    "poly_tags",
    "poly_tags?",
    "poly_tags??",
+   "poly_tags@?",
    "poly_tags!",
    "poly_tags!!",
+   "poly_tags@!",
    "poly_args",
    "raw_mono_guards",
    "raw_mono_guards?",
    "raw_mono_guards??",
+   "raw_mono_guards@?",
    "raw_mono_guards!",
    "raw_mono_guards!!",
+   "raw_mono_guards@!",
    "raw_mono_tags",
    "raw_mono_tags?",
    "raw_mono_tags??",
+   "raw_mono_tags@?",
    "raw_mono_tags!",
    "raw_mono_tags!!",
+   "raw_mono_tags@!",
    "raw_mono_args",
    "mono_guards",
    "mono_guards?",
--- a/src/HOL/Nat.thy	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/HOL/Nat.thy	Wed Sep 07 19:24:28 2011 -0700
@@ -657,46 +657,6 @@
 by (cases m) simp_all
 
 
-subsubsection {* @{term min} and @{term max} *}
-
-lemma mono_Suc: "mono Suc"
-by (rule monoI) simp
-
-lemma min_0L [simp]: "min 0 n = (0::nat)"
-by (rule min_leastL) simp
-
-lemma min_0R [simp]: "min n 0 = (0::nat)"
-by (rule min_leastR) simp
-
-lemma min_Suc_Suc [simp]: "min (Suc m) (Suc n) = Suc (min m n)"
-by (simp add: mono_Suc min_of_mono)
-
-lemma min_Suc1:
-   "min (Suc n) m = (case m of 0 => 0 | Suc m' => Suc(min n m'))"
-by (simp split: nat.split)
-
-lemma min_Suc2:
-   "min m (Suc n) = (case m of 0 => 0 | Suc m' => Suc(min m' n))"
-by (simp split: nat.split)
-
-lemma max_0L [simp]: "max 0 n = (n::nat)"
-by (rule max_leastL) simp
-
-lemma max_0R [simp]: "max n 0 = (n::nat)"
-by (rule max_leastR) simp
-
-lemma max_Suc_Suc [simp]: "max (Suc m) (Suc n) = Suc(max m n)"
-by (simp add: mono_Suc max_of_mono)
-
-lemma max_Suc1:
-   "max (Suc n) m = (case m of 0 => Suc n | Suc m' => Suc(max n m'))"
-by (simp split: nat.split)
-
-lemma max_Suc2:
-   "max m (Suc n) = (case m of 0 => Suc n | Suc m' => Suc(max m' n))"
-by (simp split: nat.split)
-
-
 subsubsection {* Monotonicity of Addition *}
 
 lemma Suc_pred [simp]: "n>0 ==> Suc (n - Suc 0) = n"
@@ -753,11 +713,85 @@
   fix a::nat and b::nat show "a ~= 0 \<Longrightarrow> b ~= 0 \<Longrightarrow> a * b ~= 0" by auto
 qed
 
-lemma nat_mult_1: "(1::nat) * n = n"
-by simp
+
+subsubsection {* @{term min} and @{term max} *}
+
+lemma mono_Suc: "mono Suc"
+by (rule monoI) simp
+
+lemma min_0L [simp]: "min 0 n = (0::nat)"
+by (rule min_leastL) simp
+
+lemma min_0R [simp]: "min n 0 = (0::nat)"
+by (rule min_leastR) simp
+
+lemma min_Suc_Suc [simp]: "min (Suc m) (Suc n) = Suc (min m n)"
+by (simp add: mono_Suc min_of_mono)
+
+lemma min_Suc1:
+   "min (Suc n) m = (case m of 0 => 0 | Suc m' => Suc(min n m'))"
+by (simp split: nat.split)
+
+lemma min_Suc2:
+   "min m (Suc n) = (case m of 0 => 0 | Suc m' => Suc(min m' n))"
+by (simp split: nat.split)
+
+lemma max_0L [simp]: "max 0 n = (n::nat)"
+by (rule max_leastL) simp
+
+lemma max_0R [simp]: "max n 0 = (n::nat)"
+by (rule max_leastR) simp
+
+lemma max_Suc_Suc [simp]: "max (Suc m) (Suc n) = Suc(max m n)"
+by (simp add: mono_Suc max_of_mono)
+
+lemma max_Suc1:
+   "max (Suc n) m = (case m of 0 => Suc n | Suc m' => Suc(max n m'))"
+by (simp split: nat.split)
+
+lemma max_Suc2:
+   "max m (Suc n) = (case m of 0 => Suc n | Suc m' => Suc(max m' n))"
+by (simp split: nat.split)
 
-lemma nat_mult_1_right: "n * (1::nat) = n"
-by simp
+lemma nat_add_min_left:
+  fixes m n q :: nat
+  shows "min m n + q = min (m + q) (n + q)"
+  by (simp add: min_def)
+
+lemma nat_add_min_right:
+  fixes m n q :: nat
+  shows "m + min n q = min (m + n) (m + q)"
+  by (simp add: min_def)
+
+lemma nat_mult_min_left:
+  fixes m n q :: nat
+  shows "min m n * q = min (m * q) (n * q)"
+  by (simp add: min_def not_le) (auto dest: mult_right_le_imp_le mult_right_less_imp_less le_less_trans)
+
+lemma nat_mult_min_right:
+  fixes m n q :: nat
+  shows "m * min n q = min (m * n) (m * q)"
+  by (simp add: min_def not_le) (auto dest: mult_left_le_imp_le mult_left_less_imp_less le_less_trans)
+
+lemma nat_add_max_left:
+  fixes m n q :: nat
+  shows "max m n + q = max (m + q) (n + q)"
+  by (simp add: max_def)
+
+lemma nat_add_max_right:
+  fixes m n q :: nat
+  shows "m + max n q = max (m + n) (m + q)"
+  by (simp add: max_def)
+
+lemma nat_mult_max_left:
+  fixes m n q :: nat
+  shows "max m n * q = max (m * q) (n * q)"
+  by (simp add: max_def not_le) (auto dest: mult_right_le_imp_le mult_right_less_imp_less le_less_trans)
+
+lemma nat_mult_max_right:
+  fixes m n q :: nat
+  shows "m * max n q = max (m * n) (m * q)"
+  by (simp add: max_def not_le) (auto dest: mult_left_le_imp_le mult_left_less_imp_less le_less_trans)
 
 
 subsubsection {* Additional theorems about @{term "op \<le>"} *}
@@ -1700,6 +1734,15 @@
 by (auto elim!: dvdE) (auto simp add: gr0_conv_Suc)
 
 
+subsection {* aliases *}
+
+lemma nat_mult_1: "(1::nat) * n = n"
+  by simp
+ 
+lemma nat_mult_1_right: "n * (1::nat) = n"
+  by simp
+
+
 subsection {* size of a datatype value *}
 
 class size =
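The relocated Nat.thy hunk also adds distributivity of addition and multiplication over min and max on naturals (nat_add_min_left, nat_mult_min_left, and the max analogues), which the Saturated.thy instance proofs rely on. These identities hold on nat because adding or multiplying by a fixed natural is monotone; a quick Python spot-check (the function name is illustrative, not from the theory):

```python
def check_min_max_laws(m, n, q):
    # Pointwise checks of the new nat lemmas on concrete naturals.
    assert min(m, n) + q == min(m + q, n + q)   # nat_add_min_left
    assert m + min(n, q) == min(m + n, m + q)   # nat_add_min_right
    assert min(m, n) * q == min(m * q, n * q)   # nat_mult_min_left
    assert m * min(n, q) == min(m * n, m * q)   # nat_mult_min_right
    assert max(m, n) + q == max(m + q, n + q)   # nat_add_max_left
    assert max(m, n) * q == max(m * q, n * q)   # nat_mult_max_left
    return True
```

Note that the multiplication laws would fail over the integers with a negative multiplier, which is why the lemmas are stated for nat specifically.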
--- a/src/HOL/Tools/ATP/atp_translate.ML	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/HOL/Tools/ATP/atp_translate.ML	Wed Sep 07 19:24:28 2011 -0700
@@ -20,11 +20,11 @@
 
   datatype polymorphism = Polymorphic | Raw_Monomorphic | Mangled_Monomorphic
   datatype soundness = Sound_Modulo_Infiniteness | Sound
-  datatype heaviness = Heavy | Ann_Light | Arg_Light
+  datatype granularity = All_Vars | Positively_Naked_Vars | Ghost_Type_Arg_Vars
   datatype type_level =
     All_Types |
-    Noninf_Nonmono_Types of soundness * heaviness |
-    Fin_Nonmono_Types of heaviness |
+    Noninf_Nonmono_Types of soundness * granularity |
+    Fin_Nonmono_Types of granularity |
     Const_Arg_Types |
     No_Types
   type type_enc
@@ -530,11 +530,11 @@
 datatype order = First_Order | Higher_Order
 datatype polymorphism = Polymorphic | Raw_Monomorphic | Mangled_Monomorphic
 datatype soundness = Sound_Modulo_Infiniteness | Sound
-datatype heaviness = Heavy | Ann_Light | Arg_Light
+datatype granularity = All_Vars | Positively_Naked_Vars | Ghost_Type_Arg_Vars
 datatype type_level =
   All_Types |
-  Noninf_Nonmono_Types of soundness * heaviness |
-  Fin_Nonmono_Types of heaviness |
+  Noninf_Nonmono_Types of soundness * granularity |
+  Fin_Nonmono_Types of granularity |
   Const_Arg_Types |
   No_Types
 
@@ -554,9 +554,9 @@
   | level_of_type_enc (Guards (_, level)) = level
   | level_of_type_enc (Tags (_, level)) = level
 
-fun heaviness_of_level (Noninf_Nonmono_Types (_, heaviness)) = heaviness
-  | heaviness_of_level (Fin_Nonmono_Types heaviness) = heaviness
-  | heaviness_of_level _ = Heavy
+fun granularity_of_type_level (Noninf_Nonmono_Types (_, grain)) = grain
+  | granularity_of_type_level (Fin_Nonmono_Types grain) = grain
+  | granularity_of_type_level _ = All_Vars
 
 fun is_type_level_quasi_sound All_Types = true
   | is_type_level_quasi_sound (Noninf_Nonmono_Types _) = true
@@ -584,13 +584,17 @@
   case try_unsuffixes suffixes s of
     SOME s =>
     (case try_unsuffixes suffixes s of
-       SOME s => (constr Ann_Light, s)
+       SOME s => (constr Positively_Naked_Vars, s)
      | NONE =>
        case try_unsuffixes ats s of
-         SOME s => (constr Arg_Light, s)
-       | NONE => (constr Heavy, s))
+         SOME s => (constr Ghost_Type_Arg_Vars, s)
+       | NONE => (constr All_Vars, s))
   | NONE => fallback s
 
+fun is_incompatible_type_level poly level =
+  poly = Mangled_Monomorphic andalso
+  granularity_of_type_level level = Ghost_Type_Arg_Vars
+
 fun type_enc_from_string soundness s =
   (case try (unprefix "poly_") s of
      SOME s => (SOME Polymorphic, s)
@@ -611,7 +615,7 @@
               (Polymorphic, All_Types) =>
               Simple_Types (First_Order, Polymorphic, All_Types)
             | (Mangled_Monomorphic, _) =>
-              if heaviness_of_level level = Heavy then
+              if granularity_of_type_level level = All_Vars then
                 Simple_Types (First_Order, Mangled_Monomorphic, level)
               else
                 raise Same.SAME
@@ -622,14 +626,17 @@
               Simple_Types (Higher_Order, Polymorphic, All_Types)
             | (_, Noninf_Nonmono_Types _) => raise Same.SAME
             | (Mangled_Monomorphic, _) =>
-              if heaviness_of_level level = Heavy then
+              if granularity_of_type_level level = All_Vars then
                 Simple_Types (Higher_Order, Mangled_Monomorphic, level)
               else
                 raise Same.SAME
             | _ => raise Same.SAME)
-         | ("guards", (SOME poly, _)) => Guards (poly, level)
-         | ("tags", (SOME Polymorphic, _)) => Tags (Polymorphic, level)
-         | ("tags", (SOME poly, _)) => Tags (poly, level)
+         | ("guards", (SOME poly, _)) =>
+           if is_incompatible_type_level poly level then raise Same.SAME
+           else Guards (poly, level)
+         | ("tags", (SOME poly, _)) =>
+           if is_incompatible_type_level poly level then raise Same.SAME
+           else Tags (poly, level)
          | ("args", (SOME poly, All_Types (* naja *))) =>
            Guards (poly, Const_Arg_Types)
          | ("erased", (NONE, All_Types (* naja *))) =>
@@ -700,10 +707,6 @@
   Mangled_Type_Args |
   No_Type_Args
 
-fun should_drop_arg_type_args (Simple_Types _) = false
-  | should_drop_arg_type_args type_enc =
-    level_of_type_enc type_enc = All_Types
-
 fun type_arg_policy type_enc s =
   let val mangled = (polymorphism_of_type_enc type_enc = Mangled_Monomorphic) in
     if s = type_tag_name then
@@ -718,7 +721,9 @@
         else if mangled then
           Mangled_Type_Args
         else
-          Explicit_Type_Args (should_drop_arg_type_args type_enc)
+          Explicit_Type_Args
+              (level = All_Types orelse
+               granularity_of_type_level level = Ghost_Type_Arg_Vars)
       end
   end
 
@@ -1089,28 +1094,31 @@
       t
     else
       let
-        fun aux Ts t =
+        fun trans Ts t =
           case t of
-            @{const Not} $ t1 => @{const Not} $ aux Ts t1
+            @{const Not} $ t1 => @{const Not} $ trans Ts t1
           | (t0 as Const (@{const_name All}, _)) $ Abs (s, T, t') =>
-            t0 $ Abs (s, T, aux (T :: Ts) t')
+            t0 $ Abs (s, T, trans (T :: Ts) t')
           | (t0 as Const (@{const_name All}, _)) $ t1 =>
-            aux Ts (t0 $ eta_expand Ts t1 1)
+            trans Ts (t0 $ eta_expand Ts t1 1)
           | (t0 as Const (@{const_name Ex}, _)) $ Abs (s, T, t') =>
-            t0 $ Abs (s, T, aux (T :: Ts) t')
+            t0 $ Abs (s, T, trans (T :: Ts) t')
           | (t0 as Const (@{const_name Ex}, _)) $ t1 =>
-            aux Ts (t0 $ eta_expand Ts t1 1)
-          | (t0 as @{const HOL.conj}) $ t1 $ t2 => t0 $ aux Ts t1 $ aux Ts t2
-          | (t0 as @{const HOL.disj}) $ t1 $ t2 => t0 $ aux Ts t1 $ aux Ts t2
-          | (t0 as @{const HOL.implies}) $ t1 $ t2 => t0 $ aux Ts t1 $ aux Ts t2
+            trans Ts (t0 $ eta_expand Ts t1 1)
+          | (t0 as @{const HOL.conj}) $ t1 $ t2 =>
+            t0 $ trans Ts t1 $ trans Ts t2
+          | (t0 as @{const HOL.disj}) $ t1 $ t2 =>
+            t0 $ trans Ts t1 $ trans Ts t2
+          | (t0 as @{const HOL.implies}) $ t1 $ t2 =>
+            t0 $ trans Ts t1 $ trans Ts t2
           | (t0 as Const (@{const_name HOL.eq}, Type (_, [@{typ bool}, _])))
               $ t1 $ t2 =>
-            t0 $ aux Ts t1 $ aux Ts t2
+            t0 $ trans Ts t1 $ trans Ts t2
           | _ =>
             if not (exists_subterm (fn Abs _ => true | _ => false) t) then t
             else t |> Envir.eta_contract |> do_lambdas ctxt Ts
         val (t, ctxt') = Variable.import_terms true [t] ctxt |>> the_single
-      in t |> aux [] |> singleton (Variable.export_terms ctxt' ctxt) end
+      in t |> trans [] |> singleton (Variable.export_terms ctxt' ctxt) end
   end
 
 fun do_cheaply_conceal_lambdas Ts (t1 $ t2) =
@@ -1148,12 +1156,12 @@
    same in Sledgehammer to prevent the discovery of unreplayable proofs. *)
 fun freeze_term t =
   let
-    fun aux (t $ u) = aux t $ aux u
-      | aux (Abs (s, T, t)) = Abs (s, T, aux t)
-      | aux (Var ((s, i), T)) =
+    fun freeze (t $ u) = freeze t $ freeze u
+      | freeze (Abs (s, T, t)) = Abs (s, T, freeze t)
+      | freeze (Var ((s, i), T)) =
         Free (atp_weak_prefix ^ s ^ "_" ^ string_of_int i, T)
-      | aux t = t
-  in t |> exists_subterm is_Var t ? aux end
+      | freeze t = t
+  in t |> exists_subterm is_Var t ? freeze end
 
 fun presimp_prop ctxt presimp_consts t =
   let
@@ -1198,6 +1206,30 @@
 
 (** Finite and infinite type inference **)
 
+fun tvar_footprint thy s ary =
+  (case strip_prefix_and_unascii const_prefix s of
+     SOME s =>
+     s |> invert_const |> robust_const_type thy |> chop_fun ary |> fst
+       |> map (fn T => Term.add_tvarsT T [] |> map fst)
+   | NONE => [])
+  handle TYPE _ => []
+
+fun ghost_type_args thy s ary =
+  let
+    val footprint = tvar_footprint thy s ary
+    fun ghosts _ [] = []
+      | ghosts seen ((i, tvars) :: args) =
+        ghosts (union (op =) seen tvars) args
+        |> exists (not o member (op =) seen) tvars ? cons i
+  in
+    if forall null footprint then
+      []
+    else
+      0 upto length footprint - 1 ~~ footprint
+      |> sort (rev_order o list_ord Term_Ord.indexname_ord o pairself snd)
+      |> ghosts []
+  end
+
 type monotonicity_info =
   {maybe_finite_Ts : typ list,
    surely_finite_Ts : typ list,
@@ -1221,23 +1253,25 @@
 fun should_encode_type _ (_ : monotonicity_info) All_Types _ = true
   | should_encode_type ctxt {maybe_finite_Ts, surely_infinite_Ts,
                              maybe_nonmono_Ts, ...}
-                       (Noninf_Nonmono_Types (soundness, _)) T =
-    exists (type_intersect ctxt T) maybe_nonmono_Ts andalso
-    not (exists (type_instance ctxt T) surely_infinite_Ts orelse
-         (not (member (type_aconv ctxt) maybe_finite_Ts T) andalso
-          is_type_kind_of_surely_infinite ctxt soundness surely_infinite_Ts T))
+                       (Noninf_Nonmono_Types (soundness, grain)) T =
+    grain = Ghost_Type_Arg_Vars orelse
+    (exists (type_intersect ctxt T) maybe_nonmono_Ts andalso
+     not (exists (type_instance ctxt T) surely_infinite_Ts orelse
+          (not (member (type_aconv ctxt) maybe_finite_Ts T) andalso
+           is_type_kind_of_surely_infinite ctxt soundness surely_infinite_Ts
+                                           T)))
   | should_encode_type ctxt {surely_finite_Ts, maybe_infinite_Ts,
                              maybe_nonmono_Ts, ...}
-                       (Fin_Nonmono_Types _) T =
-    exists (type_intersect ctxt T) maybe_nonmono_Ts andalso
-    (exists (type_generalization ctxt T) surely_finite_Ts orelse
-     (not (member (type_aconv ctxt) maybe_infinite_Ts T) andalso
-      is_type_surely_finite ctxt T))
+                       (Fin_Nonmono_Types grain) T =
+    grain = Ghost_Type_Arg_Vars orelse
+    (exists (type_intersect ctxt T) maybe_nonmono_Ts andalso
+     (exists (type_generalization ctxt T) surely_finite_Ts orelse
+      (not (member (type_aconv ctxt) maybe_infinite_Ts T) andalso
+       is_type_surely_finite ctxt T)))
   | should_encode_type _ _ _ _ = false
 
 fun should_guard_type ctxt mono (Guards (_, level)) should_guard_var T =
-    (heaviness_of_level level = Heavy orelse should_guard_var ()) andalso
-    should_encode_type ctxt mono level T
+    should_guard_var () andalso should_encode_type ctxt mono level T
   | should_guard_type _ _ _ _ _ = false
 
 fun is_maybe_universal_var (IConst ((s, _), _, _)) =
@@ -1249,15 +1283,21 @@
 datatype tag_site =
   Top_Level of bool option |
   Eq_Arg of bool option |
+  Arg of string * int |
   Elsewhere
 
 fun should_tag_with_type _ _ _ (Top_Level _) _ _ = false
   | should_tag_with_type ctxt mono (Tags (_, level)) site u T =
-    (if heaviness_of_level level = Heavy then
-       should_encode_type ctxt mono level T
-     else case (site, is_maybe_universal_var u) of
-       (Eq_Arg _, true) => should_encode_type ctxt mono level T
-     | _ => false)
+    (case granularity_of_type_level level of
+       All_Vars => should_encode_type ctxt mono level T
+     | grain =>
+       case (site, is_maybe_universal_var u) of
+         (Eq_Arg _, true) => should_encode_type ctxt mono level T
+       | (Arg (s, j), true) =>
+         grain = Ghost_Type_Arg_Vars andalso
+         member (op =)
+                (ghost_type_args (Proof_Context.theory_of ctxt) s (j + 1)) j
+       | _ => false)
   | should_tag_with_type _ _ _ _ _ _ = false
 
 fun fused_type ctxt mono level =
@@ -1646,13 +1686,36 @@
     accum orelse (is_tptp_equal s andalso member (op =) tms (ATerm (name, [])))
   | is_var_positively_naked_in_term _ _ _ _ = true
 
-fun should_guard_var_in_formula pos phi (SOME true) name =
-    formula_fold pos (is_var_positively_naked_in_term name) phi false
-  | should_guard_var_in_formula _ _ _ _ = true
+fun is_var_ghost_type_arg_in_term thy name pos tm accum =
+  is_var_positively_naked_in_term name pos tm accum orelse
+  let
+    val var = ATerm (name, [])
+    fun is_nasty_in_term (ATerm (_, [])) = false
+      | is_nasty_in_term (ATerm ((s, _), tms)) =
+        (member (op =) tms var andalso
+         let val ary = length tms in
+           case ghost_type_args thy s ary of
+             [] => false
+           | ghosts =>
+             exists (fn (j, tm) => tm = var andalso member (op =) ghosts j)
+                    (0 upto length tms - 1 ~~ tms)
+         end) orelse
+        exists is_nasty_in_term tms
+      | is_nasty_in_term _ = true
+  in is_nasty_in_term tm end
+
+fun should_guard_var_in_formula thy level pos phi (SOME true) name =
+    (case granularity_of_type_level level of
+       All_Vars => true
+     | Positively_Naked_Vars =>
+       formula_fold pos (is_var_positively_naked_in_term name) phi false
+     | Ghost_Type_Arg_Vars =>
+       formula_fold pos (is_var_ghost_type_arg_in_term thy name) phi false)
+  | should_guard_var_in_formula _ _ _ _ _ _ = true
 
 fun should_generate_tag_bound_decl _ _ _ (SOME true) _ = false
   | should_generate_tag_bound_decl ctxt mono (Tags (_, level)) _ T =
-    heaviness_of_level level <> Heavy andalso
+    granularity_of_type_level level <> All_Vars andalso
     should_encode_type ctxt mono level T
   | should_generate_tag_bound_decl _ _ _ _ _ = false
 
@@ -1667,27 +1730,29 @@
        | _ => raise Fail "unexpected lambda-abstraction")
 and ho_term_from_iterm ctxt format mono type_enc =
   let
-    fun aux site u =
+    fun term site u =
       let
         val (head, args) = strip_iterm_comb u
         val pos =
           case site of
             Top_Level pos => pos
           | Eq_Arg pos => pos
-          | Elsewhere => NONE
+          | _ => NONE
         val t =
           case head of
             IConst (name as (s, _), _, T_args) =>
             let
-              val arg_site = if is_tptp_equal s then Eq_Arg pos else Elsewhere
+              fun arg_site j =
+                if is_tptp_equal s then Eq_Arg pos else Arg (s, j)
             in
-              mk_aterm format type_enc name T_args (map (aux arg_site) args)
+              mk_aterm format type_enc name T_args
+                       (map2 (term o arg_site) (0 upto length args - 1) args)
             end
           | IVar (name, _) =>
-            mk_aterm format type_enc name [] (map (aux Elsewhere) args)
+            mk_aterm format type_enc name [] (map (term Elsewhere) args)
           | IAbs ((name, T), tm) =>
             AAbs ((name, ho_type_from_typ format type_enc true 0 T),
-                  aux Elsewhere tm)
+                  term Elsewhere tm)
           | IApp _ => raise Fail "impossible \"IApp\""
         val T = ityp_of u
       in
@@ -1696,18 +1761,20 @@
               else
                 I)
       end
-  in aux end
+  in term end
 and formula_from_iformula ctxt format mono type_enc should_guard_var =
   let
+    val thy = Proof_Context.theory_of ctxt
+    val level = level_of_type_enc type_enc
     val do_term = ho_term_from_iterm ctxt format mono type_enc o Top_Level
     val do_bound_type =
       case type_enc of
-        Simple_Types (_, _, level) => fused_type ctxt mono level 0
+        Simple_Types _ => fused_type ctxt mono level 0
         #> ho_type_from_typ format type_enc false 0 #> SOME
       | _ => K NONE
     fun do_out_of_bound_type pos phi universal (name, T) =
       if should_guard_type ctxt mono type_enc
-             (fn () => should_guard_var pos phi universal name) T then
+             (fn () => should_guard_var thy level pos phi universal name) T then
         IVar (name, T)
         |> type_guard_iterm format type_enc T
         |> do_term pos |> AAtom |> SOME
@@ -1958,9 +2025,12 @@
 fun add_fact_monotonic_types ctxt mono type_enc =
   add_iformula_monotonic_types ctxt mono type_enc |> fact_lift
 fun monotonic_types_for_facts ctxt mono type_enc facts =
-  [] |> (polymorphism_of_type_enc type_enc = Polymorphic andalso
-         is_type_level_monotonicity_based (level_of_type_enc type_enc))
-        ? fold (add_fact_monotonic_types ctxt mono type_enc) facts
+  let val level = level_of_type_enc type_enc in
+    [] |> (polymorphism_of_type_enc type_enc = Polymorphic andalso
+           is_type_level_monotonicity_based level andalso
+           granularity_of_type_level level <> Ghost_Type_Arg_Vars)
+          ? fold (add_fact_monotonic_types ctxt mono type_enc) facts
+  end
 
 fun formula_line_for_guards_mono_type ctxt format mono type_enc T =
   Formula (guards_sym_formula_prefix ^
@@ -1970,7 +2040,7 @@
            |> type_guard_iterm format type_enc T
            |> AAtom
            |> formula_from_iformula ctxt format mono type_enc
-                                    (K (K (K (K true)))) (SOME true)
+                                    (K (K (K (K (K (K true)))))) (SOME true)
            |> bound_tvars type_enc (atyps_of T)
            |> close_formula_universally type_enc,
            isabelle_info introN, NONE)
@@ -2023,21 +2093,28 @@
 fun formula_line_for_guards_sym_decl ctxt format conj_sym_kind mono type_enc n s
                                      j (s', T_args, T, _, ary, in_conj) =
   let
+    val thy = Proof_Context.theory_of ctxt
     val (kind, maybe_negate) =
       if in_conj then (conj_sym_kind, conj_sym_kind = Conjecture ? mk_anot)
       else (Axiom, I)
     val (arg_Ts, res_T) = chop_fun ary T
-    val num_args = length arg_Ts
-    val bound_names =
-      1 upto num_args |> map (`I o make_bound_var o string_of_int)
+    val bound_names = 1 upto ary |> map (`I o make_bound_var o string_of_int)
     val bounds =
       bound_names ~~ arg_Ts |> map (fn (name, T) => IConst (name, T, []))
-    val sym_needs_arg_types = exists (curry (op =) dummyT) T_args
-    fun should_keep_arg_type T =
-      sym_needs_arg_types andalso
-      should_guard_type ctxt mono type_enc (K true) T
     val bound_Ts =
-      arg_Ts |> map (fn T => if should_keep_arg_type T then SOME T else NONE)
+      if exists (curry (op =) dummyT) T_args then
+        case level_of_type_enc type_enc of
+          All_Types => map SOME arg_Ts
+        | level =>
+          if granularity_of_type_level level = Ghost_Type_Arg_Vars then
+            let val ghosts = ghost_type_args thy s ary in
+              map2 (fn j => if member (op =) ghosts j then SOME else K NONE)
+                   (0 upto ary - 1) arg_Ts
+            end
+          else
+            replicate ary NONE
+      else
+        replicate ary NONE
   in
     Formula (guards_sym_formula_prefix ^ s ^
              (if n > 1 then "_" ^ string_of_int j else ""), kind,
@@ -2046,16 +2123,19 @@
              |> type_guard_iterm format type_enc res_T
              |> AAtom |> mk_aquant AForall (bound_names ~~ bound_Ts)
              |> formula_from_iformula ctxt format mono type_enc
-                                      (K (K (K (K true)))) (SOME true)
+                                      (K (K (K (K (K (K true)))))) (SOME true)
              |> n > 1 ? bound_tvars type_enc (atyps_of T)
              |> close_formula_universally type_enc
              |> maybe_negate,
              isabelle_info introN, NONE)
   end
 
-fun formula_lines_for_nonuniform_tags_sym_decl ctxt format conj_sym_kind mono
-        type_enc n s (j, (s', T_args, T, pred_sym, ary, in_conj)) =
+fun formula_lines_for_tags_sym_decl ctxt format conj_sym_kind mono type_enc n s
+        (j, (s', T_args, T, pred_sym, ary, in_conj)) =
   let
+    val thy = Proof_Context.theory_of ctxt
+    val level = level_of_type_enc type_enc
+    val grain = granularity_of_type_level level
     val ident_base =
       tags_sym_formula_prefix ^ s ^
       (if n > 1 then "_" ^ string_of_int j else "")
@@ -2063,19 +2143,28 @@
       if in_conj then (conj_sym_kind, conj_sym_kind = Conjecture ? mk_anot)
       else (Axiom, I)
     val (arg_Ts, res_T) = chop_fun ary T
-    val bound_names =
-      1 upto length arg_Ts |> map (`I o make_bound_var o string_of_int)
+    val bound_names = 1 upto ary |> map (`I o make_bound_var o string_of_int)
     val bounds = bound_names |> map (fn name => ATerm (name, []))
     val cst = mk_aterm format type_enc (s, s') T_args
     val eq = maybe_negate oo eq_formula type_enc (atyps_of T) pred_sym
-    val should_encode =
-      should_encode_type ctxt mono (level_of_type_enc type_enc)
+    val should_encode = should_encode_type ctxt mono level
     val tag_with = tag_with_type ctxt format mono type_enc NONE
     val add_formula_for_res =
       if should_encode res_T then
-        cons (Formula (ident_base ^ "_res", kind,
-                       eq (tag_with res_T (cst bounds)) (cst bounds),
-                       isabelle_info simpN, NONE))
+        let
+          val tagged_bounds =
+            if grain = Ghost_Type_Arg_Vars then
+              let val ghosts = ghost_type_args thy s ary in
+                map2 (fn (j, arg_T) => member (op =) ghosts j ? tag_with arg_T)
+                     (0 upto ary - 1 ~~ arg_Ts) bounds
+              end
+            else
+              bounds
+        in
+          cons (Formula (ident_base ^ "_res", kind,
+                         eq (tag_with res_T (cst bounds)) (cst tagged_bounds),
+                         isabelle_info simpN, NONE))
+        end
       else
         I
     fun add_formula_for_arg k =
@@ -2093,7 +2182,8 @@
       end
   in
     [] |> not pred_sym ? add_formula_for_res
-       |> Config.get ctxt type_tag_arguments
+       |> (Config.get ctxt type_tag_arguments andalso
+           grain = Positively_Naked_Vars)
           ? fold add_formula_for_arg (ary - 1 downto 0)
   end
 
@@ -2127,13 +2217,13 @@
                                                  type_enc n s)
     end
   | Tags (_, level) =>
-    if heaviness_of_level level = Heavy then
+    if granularity_of_type_level level = All_Vars then
       []
     else
       let val n = length decls in
         (0 upto n - 1 ~~ decls)
-        |> maps (formula_lines_for_nonuniform_tags_sym_decl ctxt format
-                     conj_sym_kind mono type_enc n s)
+        |> maps (formula_lines_for_tags_sym_decl ctxt format conj_sym_kind mono
+                                                 type_enc n s)
       end
 
 fun problem_lines_for_sym_decl_table ctxt format conj_sym_kind mono type_enc
@@ -2168,13 +2258,22 @@
 val conjsN = "Conjectures"
 val free_typesN = "Type variables"
 
-val explicit_apply = NONE (* for experiments *)
+val explicit_apply_threshold = 50
 
 fun prepare_atp_problem ctxt format conj_sym_kind prem_kind type_enc exporter
         lambda_trans readable_names preproc hyp_ts concl_t facts =
   let
     val thy = Proof_Context.theory_of ctxt
     val type_enc = type_enc |> adjust_type_enc format
+    (* Forcing explicit applications is expensive for polymorphic encodings,
+       because it takes only one existential variable ranging over "'a => 'b" to
+       ruin everything. Hence we do it only if there are few facts. *)
+    val explicit_apply =
+      if polymorphism_of_type_enc type_enc <> Polymorphic orelse
+         length facts <= explicit_apply_threshold then
+        NONE
+      else
+        SOME false
     val lambda_trans =
       if lambda_trans = smartN then
         if is_type_enc_higher_order type_enc then lambdasN else combinatorsN
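The `ghost_type_args` function introduced above can be paraphrased language-neutrally: each argument position of a constant has a "footprint" of type variables; walking the (pre-sorted) positions in order, a position counts as a ghost type argument if it contributes at least one type variable not already covered by the positions seen so far. A rough Python sketch of that set computation (names are illustrative, not part of the Isabelle sources; the real code first sorts positions by a footprint order and handles constant-name decoding):

```python
def ghost_positions(footprints):
    """footprints[i] is the set of type variables occurring in the type
    of argument i.  A position is kept (a 'ghost') if it contributes a
    type variable not supplied by previously processed positions."""
    ghosts, seen = [], set()
    for i, tvars in enumerate(footprints):
        if tvars - seen:      # contributes something new
            ghosts.append(i)
        seen |= tvars
    return ghosts

# e.g. footprints of a constant of type 'a => 'b => 'a => res
print(ghost_positions([{"'a"}, {"'b"}, {"'a"}]))  # -> [0, 1]
```

As in the ML original, a constant whose arguments carry no type variables at all yields the empty list, so no ghost tagging or guarding is triggered for it.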
--- a/src/Pure/PIDE/document.ML	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/Pure/PIDE/document.ML	Wed Sep 07 19:24:28 2011 -0700
@@ -331,7 +331,6 @@
   let
     val is_init = Toplevel.is_init tr;
     val is_proof = Keyword.is_proof (Toplevel.name_of tr);
-    val do_print = not is_init andalso (Toplevel.print_of tr orelse is_proof);
 
     val _ = Multithreading.interrupted ();
     val _ = Toplevel.status tr Markup.forked;
@@ -343,13 +342,18 @@
   in
     (case result of
       NONE =>
-       (if null errs then Exn.interrupt () else ();
-        Toplevel.status tr Markup.failed;
-        (st, no_print))
+        let
+          val _ = if null errs then Exn.interrupt () else ();
+          val _ = Toplevel.status tr Markup.failed;
+        in (st, no_print) end
     | SOME st' =>
-       (Toplevel.status tr Markup.finished;
-        proof_status tr st';
-        (st', if do_print then print_state tr st' else no_print)))
+        let
+          val _ = Toplevel.status tr Markup.finished;
+          val _ = proof_status tr st';
+          val do_print =
+            not is_init andalso
+              (Toplevel.print_of tr orelse (is_proof andalso Toplevel.is_proof st'));
+        in (st', if do_print then print_state tr st' else no_print) end)
   end;
 
 end;
--- a/src/Pure/PIDE/xml.ML	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/Pure/PIDE/xml.ML	Wed Sep 07 19:24:28 2011 -0700
@@ -47,6 +47,7 @@
   val parse_element: string list -> tree * string list
   val parse_document: string list -> tree * string list
   val parse: string -> tree
+  val cache: unit -> tree -> tree
   exception XML_ATOM of string
   exception XML_BODY of body
   structure Encode: XML_DATA_OPS
@@ -228,6 +229,48 @@
 end;
 
 
+(** cache for substructural sharing **)
+
+fun tree_ord tu =
+  if pointer_eq tu then EQUAL
+  else
+    (case tu of
+      (Text _, Elem _) => LESS
+    | (Elem _, Text _) => GREATER
+    | (Text s, Text s') => fast_string_ord (s, s')
+    | (Elem e, Elem e') =>
+        prod_ord
+          (prod_ord fast_string_ord (list_ord (prod_ord fast_string_ord fast_string_ord)))
+          (list_ord tree_ord) (e, e'));
+
+structure Treetab = Table(type key = tree val ord = tree_ord);
+
+fun cache () =
+  let
+    val strings = Unsynchronized.ref (Symtab.empty: unit Symtab.table);
+    val trees = Unsynchronized.ref (Treetab.empty: unit Treetab.table);
+
+    fun string s =
+      if size s <= 1 then s
+      else
+        (case Symtab.lookup_key (! strings) s of
+          SOME (s', ()) => s'
+        | NONE => (Unsynchronized.change strings (Symtab.update (s, ())); s));
+
+    fun tree t =
+      (case Treetab.lookup_key (! trees) t of
+        SOME (t', ()) => t'
+      | NONE =>
+          let
+            val t' =
+              (case t of
+                Elem ((a, ps), b) => Elem ((string a, map (pairself string) ps), map tree b)
+              | Text s => Text (string s));
+            val _ = Unsynchronized.change trees (Treetab.update (t', ()));
+          in t' end);
+  in tree end;
+
+
 
 (** XML as data representation language **)
 
--- a/src/Pure/PIDE/xml.scala	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/Pure/PIDE/xml.scala	Wed Sep 07 19:24:28 2011 -0700
@@ -84,7 +84,8 @@
   def content(body: Body): Iterator[String] = content_stream(body).iterator
 
 
-  /* pipe-lined cache for partial sharing */
+
+  /** cache for partial sharing (weak table) **/
 
   class Cache(initial_size: Int = 131071, max_string: Int = 100)
   {
--- a/src/Pure/Syntax/syntax.ML	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/Pure/Syntax/syntax.ML	Wed Sep 07 19:24:28 2011 -0700
@@ -99,6 +99,7 @@
   val string_of_sort_global: theory -> sort -> string
   type syntax
   val eq_syntax: syntax * syntax -> bool
+  val join_syntax: syntax -> unit
   val lookup_const: syntax -> string -> string option
   val is_keyword: syntax -> string -> bool
   val tokenize: syntax -> bool -> Symbol_Pos.T list -> Lexicon.token list
@@ -508,6 +509,8 @@
 
 fun eq_syntax (Syntax (_, s1), Syntax (_, s2)) = s1 = s2;
 
+fun join_syntax (Syntax ({gram, ...}, _)) = ignore (Future.join gram);
+
 fun lookup_const (Syntax ({consts, ...}, _)) = Symtab.lookup consts;
 fun is_keyword (Syntax ({lexicon, ...}, _)) = Scan.is_literal lexicon o Symbol.explode;
 fun tokenize (Syntax ({lexicon, ...}, _)) = Lexicon.tokenize lexicon;
--- a/src/Pure/System/session.scala	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/Pure/System/session.scala	Wed Sep 07 19:24:28 2011 -0700
@@ -22,7 +22,7 @@
 
   //{{{
   case object Global_Settings
-  case object Perspective
+  case object Caret_Focus
   case object Assignment
   case class Commands_Changed(nodes: Set[Document.Node.Name], commands: Set[Command])
 
@@ -52,7 +52,7 @@
   /* pervasive event buses */
 
   val global_settings = new Event_Bus[Session.Global_Settings.type]
-  val perspective = new Event_Bus[Session.Perspective.type]
+  val caret_focus = new Event_Bus[Session.Caret_Focus.type]
   val assignments = new Event_Bus[Session.Assignment.type]
   val commands_changed = new Event_Bus[Session.Commands_Changed]
   val phase_changed = new Event_Bus[Session.Phase]
--- a/src/Pure/theory.ML	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/Pure/theory.ML	Wed Sep 07 19:24:28 2011 -0700
@@ -147,6 +147,7 @@
     |> Sign.local_path
     |> Sign.map_naming (Name_Space.set_theory_name name)
     |> apply_wrappers wrappers
+    |> tap (Syntax.join_syntax o Sign.syn_of)
   end;
 
 fun end_theory thy =
--- a/src/Tools/jEdit/README.html	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/Tools/jEdit/README.html	Wed Sep 07 19:24:28 2011 -0700
@@ -144,6 +144,11 @@
   <em>Workaround:</em> Force re-parsing of files using such commands
   via reload menu of jEdit.</li>
 
+  <li>No way to delete document nodes from the overall collection of
+  theories.<br/>
+  <em>Workaround:</em> Restart the whole Isabelle/jEdit session in
+  the worst case.</li>
+
   <li>No support for non-local markup, e.g. commands reporting on
   previous commands (proof end on proof head), or markup produced by
   loading external files.</li>
--- a/src/Tools/jEdit/src/document_view.scala	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/Tools/jEdit/src/document_view.scala	Wed Sep 07 19:24:28 2011 -0700
@@ -362,7 +362,7 @@
 
   private val caret_listener = new CaretListener {
     private val delay = Swing_Thread.delay_last(session.input_delay) {
-      session.perspective.event(Session.Perspective)
+      session.caret_focus.event(Session.Caret_Focus)
     }
     override def caretUpdate(e: CaretEvent) { delay() }
   }
--- a/src/Tools/jEdit/src/output_dockable.scala	Wed Sep 07 17:41:29 2011 -0700
+++ b/src/Tools/jEdit/src/output_dockable.scala	Wed Sep 07 19:24:28 2011 -0700
@@ -106,7 +106,7 @@
       react {
         case Session.Global_Settings => handle_resize()
         case changed: Session.Commands_Changed => handle_update(Some(changed.commands))
-        case Session.Perspective => if (follow_caret && handle_perspective()) handle_update()
+        case Session.Caret_Focus => if (follow_caret && handle_perspective()) handle_update()
         case bad => System.err.println("Output_Dockable: ignoring bad message " + bad)
       }
     }
@@ -116,14 +116,14 @@
   {
     Isabelle.session.global_settings += main_actor
     Isabelle.session.commands_changed += main_actor
-    Isabelle.session.perspective += main_actor
+    Isabelle.session.caret_focus += main_actor
   }
 
   override def exit()
   {
     Isabelle.session.global_settings -= main_actor
     Isabelle.session.commands_changed -= main_actor
-    Isabelle.session.perspective -= main_actor
+    Isabelle.session.caret_focus -= main_actor
   }